text (stringlengths 59–500k) | subset (stringclasses 6 values)
---|---
Kerala School of Mathematics, Kozhikode
The Kerala School of Mathematics (KSoM) in Kozhikode, India, is a research institute in the theoretical sciences with a focus on mathematics. The institute is a joint venture of the Department of Atomic Energy (DAE) and the Kerala State Council for Science, Technology and Environment (KSCSTE).[1] KSoM is a center of advanced research and learning in mathematics and a meeting ground for leading mathematicians from around the world.
This article is about the institution established in 2009. For the ancient institution, see Kerala school of astronomy and mathematics.
Kerala School of Mathematics
Established: 2009
Director: Kalyan Chakraborty
Location: Kozhikode, Kerala, India (11.2868°N 75.8715°E)
Language: English
Website: http://ksom.res.in/
The Kerala School of Mathematics has a doctoral program to which students are admitted on a yearly basis. The institute also has an integrated MSc-PhD program, with an option for students to exit with an MSc degree at the end of two years.[2]
History
In the time of Madhava of Sangamagrama, mathematics in Kerala flourished chiefly in the Muziris region, around Thrikkandiyur, Thirur, Alattiyur, and Tirunavaya in the Malabar region of Kerala. The Kerala school of astronomy and mathematics was active between the 14th and 16th centuries. To commemorate this rich mathematical heritage, the Kerala School of Mathematics was set up in the scenic Western Ghats, in the city of Kozhikode.
The plan to set up the Kerala School of Mathematics began to take shape around 2004. The then DAE chairman Anil Kakodkar and the then executive vice president of KSCSTE, M. S. Valiathan, were instrumental in establishing the institute, with guidance from M. S. Raghunathan, Rajeeva Karandikar and Alladi Sitaram. The foundation stone of KSoM was laid by the then Chief Minister A. K. Antony in 2004. The institute was inaugurated in 2008 by the then Chief Minister V. S. Achuthanandan[3][4] and began functioning in 2009 with Parameswaran A. J. as the founding director.
External links
Official Website
References
1. "Official website of KSoM".
2. "Integrated MSc-PhD program at KSoM". Archived from the original on 20 September 2020. Retrieved 8 November 2020.
3. "Kerala School of Mathematics to start academic activities". The Hindu. 14 June 2009. ISSN 0971-751X. Retrieved 8 November 2020.
4. "VS opens Kerala School of Mathematics". The Hindu. 4 June 2008. ISSN 0971-751X. Retrieved 8 November 2020.
| Wikipedia |
\begin{definition}[Definition:Word (Formal Systems)]
Let $\AA$ be an alphabet.
Then a '''word in $\AA$''' is a juxtaposition of finitely many (primitive) symbols of $\AA$.
'''Words''' are the most ubiquitous of collations used for formal languages.
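For example, if $\AA = \set {a, b}$, then $a$, $ab$ and $abba$ are all words in $\AA$.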
\end{definition} | ProofWiki |
\begin{definition}[Definition:Matroid/Definition 2]
Let $M = \struct {S, \mathscr I}$ be an independence system.
$M$ is called a '''matroid on $S$''' {{iff}} $M$ also satisfies:
{{begin-axiom}}
{{axiom | n = \text I 3'
| q = \forall U, V \in \mathscr I
| mr= \size U = \size V + 1 \implies \exists x \in U \setminus V : V \cup \set x \in \mathscr I
}}
{{end-axiom}}
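For example, let $S = \set {1, 2, 3}$ and let $\mathscr I$ consist of all subsets of $S$ of size at most $2$. Taking $U = \set {1, 2}$ and $V = \set 3$, so that $\size U = \size V + 1$, either choice $x = 1$ or $x = 2$ gives $V \cup \set x \in \mathscr I$, as required by axiom $\text I 3'$.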
\end{definition} | ProofWiki |
Let \[f(x) =
\begin{cases}
x/2 &\quad \text{if } x \text{ is even}, \\
3x+1 &\quad \text{if } x \text{ is odd}.
\end{cases}
\]What is $f(f(f(f(1))))$?
Evaluating each value, $f(1) = 3 \cdot 1 + 1 = 4$; $f(f(1)) = f(4) = 4/2 = 2$; $f(f(f(1))) = f(2) = 2/2 = 1$; and finally $f(f(f(f(1)))) = f(1) = \boxed{4}$. | Math Dataset |
\begin{document}
\title{Third order differential equations and local isometric immersions of pseudospherical surfaces}
\author{Tarc\'isio Castro Silva \qquad Niky Kamran}
\date{}
\maketitle{}
\begin{abstract} The class of differential equations describing pseudospherical surfaces enjoys important integrability properties which manifest themselves by the existence of infinite hierarchies of conservation laws (both local and non-local) and the presence of associated linear problems. It thus contains many important known examples of integrable equations, like the sine-Gordon, Liouville, KdV, mKdV, Camassa-Holm and Degasperis-Procesi equations, and is also home to many new families of integrable equations. Our paper is concerned with the question of the local isometric immersion in ${\bf E}^{3}$ of the pseudospherical surfaces defined by the solutions of equations belonging to the class of Chern and Tenenblat~\cite{CT}. In the case of the sine-Gordon equation, it is a classical result that the second fundamental form of the immersion depends only on a jet of finite order of the solution of the pde. A natural question is therefore to know if this remarkable property extends to equations other than the sine-Gordon equation within the class of differential equations describing pseudospherical surfaces. In a pair of earlier papers~\cite{KKT1},~\cite{KKT2} we have shown that this property fails to hold for all $k$-th order evolution equations $u_t= F(u,u_x,..., u_{x^k})$ and all second order equations of the form $u_{xt}=F(u,u_{x})$, except for the sine-Gordon equation and a special class of equations for which the coefficients of the second fundamental form are universal, that is functions of $x$ and $t$ which are independent of the choice of solution $u$. In the present paper, we consider third-order equations of the form $u_{t}-u_{xxt}=\lambda u u_{xxx} + G(u,u_x,u_{xx}),\, \lambda \in \mathbb{R}, $ which describe pseudospherical surfaces. This class contains the Camassa-Holm and Degasperis-Procesi equations as special cases. We show that whenever there exists a local isometric immersion in ${\bf E}^3$ for which the coefficients of the second fundamental form depend on a jet of finite order of $u$, then these coefficients are universal in the sense of being independent of the choice of solution $u$. This result further underscores the special place that the sine-Gordon equation seems to occupy amongst integrable partial differential equations in one space variable.
\noindent \emph{Keywords}: nonlinear partial differential equations; pseudospherical surfaces; local isometric immersion.
\end{abstract}
\section{Introduction} The class of partial differential equations describing pseudospherical surfaces, first introduced in a fundamental paper by Chern and Tenenblat~\cite{CT}, gives a rich geometric framework for the classification and study of integrable partial differential equations in one space variable. Recall that a partial differential equation \begin{equation}\label{pde} \Delta(t,x,u,u_{t},u_{x},\ldots,u_{t^{l}x^{k-l}})=0, \end{equation} is said to describe pseudospherical surfaces if there exist $1$-forms \begin{equation}\label{forms} \omega_{i}=f_{i1}dx+f_{i2}dt,\quad 1\leq i \leq3, \end{equation} where the coefficients $f_{ij},\,1\leq i \leq 3,\,1\leq j\leq 2,$ are smooth functions of $t,x,u$ and finitely many derivatives of $u$, such that the structure equations of a surface of Gaussian curvature equal to $-1$, \begin{equation}\label{struct} d\omega_{1}=\omega_{3} \wedge\omega_{2},\quad d\omega_{2}=\omega_{1} \wedge\omega_{3},\quad d\omega_{3}=\omega_{1} \wedge\omega_{2}, \end{equation} are satisfied if and only if $u$ is a smooth solution of (\ref{pde}) for which \begin{equation}\label{indep} \omega_{1}\wedge \omega_{2}\neq 0. \end{equation} It thus follows that every such solution determines a pseudospherical metric, that is a Riemannian metric of constant negative Gaussian curvature equal to $-1$, defined by \begin{equation}\label{metric} ds^{2}=(\omega_{1})^{2}+(\omega_{2})^{2}. \end{equation} The 1-form $\omega_{3}$ appearing in the structure equations~(\ref{struct}) is then the Levi-Civita connection $1$-form of the metric~(\ref{metric}).
The prototypical example of a differential equation describing pseudospherical surfaces is the sine-Gordon equation \begin{equation}\label{sG} u_{tx}=\sin u, \end{equation} for which a choice of $1$-forms (\ref{forms}) satisfying the structure equations (\ref{struct}) is given by \begin{equation}\label{sGcof1} \omega_1 = \frac{1}{\eta}\sin u \, dt, \quad \omega_2 = \eta\, dx+\frac{1}{\eta}\cos u \,dt,\quad \omega_3 = u_{x}\,dx. \end{equation} There may of course be different choices of $1$-forms satisfying the structure equations (\ref{struct}) for a given differential equation describing pseudo spherical surfaces; for example, for the sine-Gordon equation (\ref{sG}), a choice different from the one given in (\ref{sGcof1}) would be \begin{equation}\label{sGcof2} \omega_1 = \cos \frac{u}{2}( dx+dt),\quad \omega_2 =\sin \frac{u}{2} (dx - dt),\quad \omega_3 = \frac{u_{x}}{2} dx - \frac{u_{t}}{2} dt.
\end{equation} In~(\ref{sGcof1}), the constant $\eta$ is a continuous non-zero real parameter which reflects the existence of a one-parameter family of B\"acklund transformations for the sine-Gordon equation and is key to the existence of infinitely many conservation laws. More generally, partial differential equations describing pseudospherical surfaces with the property that one of the components $f_{ij}$ can be chosen to be a continuous parameter $\eta$ are said to describe $\eta$ pseudospherical surfaces. Each equation belonging to this class is the integrability condition of a linear system of the form \begin{equation}\label{linear} dv^{1}=\frac{1}{2}(\omega_2{}v^{1}+(\omega_{1}-\omega_{3})v^{2}),\quad dv^{2}=\frac{1}{2} ((\omega_{1}+\omega_{3})v^{1}-\omega_{2}v^{2}), \end{equation} which may be used to solve the differential equation by inverse scattering~\cite{BRT}, with $\eta$ playing the role of a spectral parameter for the scattering problem. It is also shown in~\cite{CaT} that one can generate infinite sequences of conservation laws for the class of differential equations describing $\eta$ pseudospherical surfaces by making use of the structure equations~(\ref{struct}), although some of these conservation laws may end up being non-local. Important further developments of these ideas can be found in~\cite{R88},~\cite{R89},~\cite{RT90},~\cite{RT92},~\cite{KT},~\cite{R98},~\cite{reyes},~\cite{FT},~\cite{Go},~\cite{GR},~\cite{JT},~\cite{FOR},~\cite{R106}.
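As a check, which the reader may verify directly (it is not needed in the sequel), the choice \eqref{sGcof1} gives \begin{eqnarray*} d\omega_1 &=& \frac{1}{\eta}u_x\cos u \, dx\wedge dt = \omega_{3}\wedge\omega_{2},\\ d\omega_2 &=& -\frac{1}{\eta}u_x\sin u \, dx\wedge dt = \omega_{1}\wedge\omega_{3},\\ d\omega_3 &=& -u_{xt}\, dx\wedge dt, \qquad \omega_{1}\wedge\omega_{2} = -\sin u \, dx\wedge dt, \end{eqnarray*} so that the first two structure equations in (\ref{struct}) hold identically, while the third holds if and only if $u_{xt}=\sin u$, in agreement with the definition above.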
One may also consider the class of differential equations describing pseudospherical surfaces from an extrinsic point of view, motivated by the classical result which says that every pseudospherical surface can be locally isometrically immersed in $\bf{E}^{3}$. One would expect the dependence of the second fundamental form of the immersion on the solution chosen for the differential equation to be quite complicated. However, the formula for the second fundamental form turns out to be particularly simple in the case of the sine-Gordon equation, as we now recall. Indeed, we first recall that the components $a,b,c$ of the second fundamental form of a local isometric immersion of a pseudospherical surface into $\bf{E}^{3}$ are defined by the relations \begin{equation}\label{3.4} \omega_{13} = a\omega_1+b\omega_2, \quad \omega_{23} = b\omega_1+c\omega_2, \end{equation} where the $1$-forms $\omega_{13}, \omega_{23}$ satisfy the structure equations \begin{equation}\label{3.5} d\omega_{13} = \omega_{12}\wedge\omega_{23}, \quad d\omega_{23} = \omega_{21}\wedge\omega_{13}, \end{equation} equivalent to the Codazzi equations, and the Gauss equation for a pseudo spherical surface, given by \begin{equation}\label{gauss} ac-b^2=-1. \end{equation}
We recall from \cite{KKT1} that the Codazzi equations \eqref{3.5} may be expressed in terms of the components $f_{ij}$ of the 1-forms $\omega_1$, $\omega_2$, $\omega_3$ in the following form \begin{eqnarray} f_{11}D_t a+f_{21}D_t b -f_{12}D_x a-f_{22}D_x b-2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\label{3.8}\\ f_{11}D_t b+f_{21}D_t c -f_{12}D_x b-f_{22}D_x c+(a-c)\Delta_{13}+2b\Delta_{23} = 0,&&\label{3.9} \end{eqnarray} where \begin{eqnarray}\label{3.7} \Delta_{ij} = f_{i1}f_{j2} - f_{j1}f_{i2}. \end{eqnarray} and where we assume~\cite{KKT1} that \begin{eqnarray}\label{3.10a} \quad \Delta_{13}^2+\Delta_{23}^2 \neq 0. \end{eqnarray}
For the sine-Gordon equation, with the choice of $1$-forms $\omega_{1}, \omega_{2}$ and $\omega_{3}=\omega_{12}$ given by (\ref{sGcof2}), it is easily verified that the $1$-forms $\omega_{13},\omega_{23}$ are given by \begin{eqnarray*} \omega_{13} &=& \sin\frac{u}{2} (dx+dt) = \tan \frac{u}{2}\omega_1,\\ \omega_{23} &=& -\cos\frac{u}{2} (dx - dt) = -\cot \frac{u}{2}\omega_2, \end{eqnarray*} so that $a=\tan \frac{u}{2}$, $b=0$, $c=-\cot \frac{u}{2}$, and the Gauss equation \eqref{gauss} is manifestly satisfied. What is particularly noteworthy in the case of the sine-Gordon equation is that the components $a,b,c$ depend only on $u$ and {\emph{finitely many derivatives}} of $u$. It is therefore a natural question to ask whether such a remarkable property holds for other equations within the class of differential equations describing pseudospherical surfaces, or whether the sine-Gordon equation is in any way special in this regard. In~\cite{KKT1} and~\cite{KKT2}, we investigated this question for $k$-th order evolution equations \begin{equation}\label{evoleq} u_t= F(u,u_x,..., u_{x^k}), \end{equation} and second order hyperbolic equations \begin{equation}\label{fGord} u_{xt}=F(u,u_{x}), \end{equation} and proved that there are no equations other than the sine-Gordon equation for which this property holds, except for some special equations for which $a,b,c$ are {\emph{universal}}, that is functions of $x,t$ which are {\emph{independent of}} $u$. These results show that the sine-Gordon equation occupies a special position within the class of differential equations of the form~(\ref{evoleq}) and~(\ref{fGord}) which describe pseudospherical surfaces.
Our goal in the present paper is to investigate this question for the class of partial differential equations given by \begin{eqnarray}\label{T} u_{t}-u_{xxt}=\lambda u u_{xxx} + G(u,u_x,u_{xx}),\quad \lambda \in \mathbb{R}, \end{eqnarray} which describe pseudospherical surfaces under the condition that the 1-forms $\omega_i=f_{i1}dx+f_{i2}dt$ satisfy \begin{equation}\label{1} f_{p1}=\mu_{p}f_{11}+\eta_{p}, \quad \mu_p, \eta_p \in \mathbb{R}, \quad 2\leq p \leq 3. \end{equation} This class of equations, which has recently been classified by Castro Silva and Tenenblat~\cite{CST}, contains important examples such as the Camassa-Holm equation \cite{CH} \begin{eqnarray*} u_{t}-u_{xxt} = u u_{xxx} + 2 u_x u_{xx} - 3 u u_x-m u_x, \quad m \in \mathbb{R}, \end{eqnarray*}
and the Degasperis-Procesi equation \cite{DP} \begin{eqnarray*} u_{t}-u_{xxt} = u u_{xxx} + 3 u_x u_{xx} - 4 u u_x. \end{eqnarray*}
Our main result is the following:
\begin{theorem}\label{teo} Except for the two families of third order differential equations of the form \begin{eqnarray}\label{1a} u_t - u_{xxt} = \frac{1}{h'}(m\psi + \psi_x), \quad m\in \mathbb{R}\setminus \left\lbrace 0\right\rbrace, \end{eqnarray} where $\psi(u,u_x)\neq 0$ and $h(u-u_{xx})$ are differentiable functions, with $h'\neq 0$, and \begin{eqnarray}\label{2a} u_t - u_{xxt} = \lambda uu_{xxx}+\frac{1}{h'}[m_1\psi + \psi_x -\lambda uu_xh' - (\lambda u_x+\lambda m_1u+m_2)h], \end{eqnarray} where $\lambda$, $m_1$, $m_2$ $\in$ $\mathbb{R}$, $(\lambda m_1)^2+m_2^2\neq 0$, $\psi(u,u_x)$ and $h(u-u_{xx})$ are differentiable functions, with $h'\neq 0$, there exists no third order partial differential equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, with the property that the coefficients of the second fundamental forms of the local isometric immersions of the surfaces associated to the solutions $u$ of the equation depend on a jet of finite order of $u$. Moreover, the coefficients of the second fundamental forms of the local isometric immersions of the surfaces determined by the solution $u$ of \eqref{1a} or \eqref{2a} are universal, i.e., they are functions of $x$ and $t$ alone, independent of $u$.
\end{theorem}
We see in particular that the Degasperis-Procesi equation belongs to the class \eqref{2a} of equations covered by Theorem~\ref{teo}. On the other hand, the Camassa-Holm equation is not covered by either \eqref{1a} or \eqref{2a}, meaning that for the Degasperis-Procesi equation, the components $a,b,c$ of the second fundamental form are the same universal functions of $x$ and $t$ for any solution $u$, while for the Camassa-Holm equation the components $a,b,c$ depend on jets of arbitrarily high order of $u$. Theorem~\ref{teo} underscores once again the special place that the sine-Gordon equation appears to occupy amongst integrable partial differential equations in one space variable.
Our paper is organized as follows. In Section~\ref{sec2}, we recall without proof the classification results of~\cite{CST} that will be needed to prove Theorem~\ref{teo}. The classification splits into branches which are treated on a case-by-case basis in Section~\ref{sec4}, starting from the expression of the Codazzi and Gauss equations in terms of the coefficients $f_{ij}$ of the $1$-forms $\omega_{1},\,\omega_{2},\,\omega_{3}$ (see \eqref{3.8}, \eqref{3.9}). Finally, we carry out in Section~\ref{sec5} the integration of the Codazzi and Gauss equations in the cases in which the components $a,b,c$ of the second fundamental form are universal functions of $x$ and $t$ and obtain explicit expressions for these functions.
\section{Third order differential equations describing pseudospherical surfaces}\label{sec2}
\hspace{0.5 cm}Let us recall from \cite{CST} without proof the characterization and classification theorems (Theorems \ref{teo7.1}, \ref{teo7.2}-\ref{teo7.5}) of the equations \eqref{T} that describe pseudospherical surfaces under the hypothesis \eqref{1}. We will use the following notation, also used in~\cite{CT}, for the spatial derivatives of $u$, $$ z_i = \partial_x^i u, \quad 0\leq i $$
\begin{theorem}\label{teo7.1} \textnormal{\cite{CST}} An equation \begin{eqnarray}\label{eqT} z_{0,t}-z_{2,t}=\lambda z_0z_3 + G(z_0,z_1,z_2), \quad G\neq 0, \end{eqnarray} describes pseudospherical surfaces, with associated 1-forms $\omega_i=f_{i1}dx+f_{i2}dt$, $1 \leq i \leq 3$, where $f_{ij}$ are real and differentiable functions of $z_k$, $0\leq k \leq \ell$, $\ell$ $\in$ $\mathbb{Z}$, satisfying \eqref{1} if, and only if, $f_{ij}$ and $G$ satisfy \begin{eqnarray}\label{2.3.1} f_{11,z_0}\neq 0, \quad f_{11,z_0} + f_{11,z_2} = 0, \quad f_{i2,z_s} = 0, \quad f_{11,z_1} = f_{11,z_s} = 0,\quad s \geq 3, \end{eqnarray} \begin{eqnarray}\label{(7.2)} f_{i2} = -\lambda z_0 f_{i1} + \phi_{i2}, \end{eqnarray} where $\phi_{i2}(z_0,z_1)$ are real and differentiable functions of $z_0$ and $z_1$ satisfying \begin{eqnarray} -f_{11,z_0}G+\displaystyle \sum_{i=0}^{1} z_{i+1}f_{12,z_i} + (\mu_2 \phi_{32}-\mu_3 \phi_{22})f_{11} + \eta_2 \phi_{32}-\eta_3 \phi_{22}=0,&&\label{(7.3)}\\ -\mu_2 f_{11,z_0}G+\displaystyle \sum_{i=0}^{1} z_{i+1}f_{22,z_i} - (\phi_{32}-\mu_3 \phi_{12})f_{11} + \eta_3 \phi_{12}=0,&&\label{(7.4)}\\ -\mu_3 f_{11,z_0}G+\displaystyle \sum_{i=0}^{1} z_{i+1}f_{32,z_i} - (\phi_{22}-\mu_2 \phi_{12})f_{11} + \delta \eta_2 \phi_{12}=0,&&\label{(7.5)}\\ (\phi_{22}-\mu_2 \phi_{12})f_{11} - \eta_2 \phi_{12}\neq 0.&&\label{7.5.1} \end{eqnarray} \end{theorem}
\begin{theorem}\label{teo7.2}\textnormal{\cite{CST}} Consider an equation of type $(\ref{T})$ that describes pseudospherical surfaces, with associated 1-forms $\omega_i=f_{i1}dx+f_{i2}dt$, $1 \leq i \leq 3$, where $f_{ij}$ are real and differentiable functions of $z_k$, satisfying \eqref{1} and \eqref{2.3.1}-\eqref{7.5.1}. Suppose that $\phi_{22} - \mu_2 \phi_{12}\equiv 0$ and $\mu_2\mu_3 \eta_2 -(1+ \mu_2^2)\eta_3=0$. Then the equation is given by \begin{eqnarray}\label{th2.2} z_{0,t}-z_{2,t} = \frac{1}{h'}(z_1\psi_{,z_0} + z_2 \psi_{,z_1} + m\psi), \quad m \in \mathbb{R}, \quad m\neq 0, \end{eqnarray} \begin{eqnarray}\label{th2.2a}
\begin{array}{ll} & f_{11}=h,\\ & f_{21}=\mu h \pm m\sqrt{1+\mu^2},\\ & f_{31}=\pm \sqrt{1+\mu^2} h + m\mu, \end{array}\quad \begin{array}{ll} f_{12}=\psi, \\ f_{22}=\mu \psi, \\ f_{32} = \pm \sqrt{1+\mu^2} \psi, \end{array} \end{eqnarray} where $\lambda = 0$ and $\mu \in \mathbb{R}$, $h(z_0-z_2)$ and $\psi(z_0,z_1)$ are real and differentiable functions satisfying $h'\neq 0$ and $\psi \neq 0$. \end{theorem}
\begin{theorem}\label{teo7.3}\textnormal{\cite{CST}} Consider an equation of type $(\ref{T})$ that describes pseudospherical surfaces, with associated 1-forms $\omega_i=f_{i1}dx+f_{i2}dt$, $1 \leq i \leq 3$, where $f_{ij}$ are real and differentiable functions of $z_k$, satisfying \eqref{1} and \eqref{2.3.1}-\eqref{7.5.1}. Suppose that $\phi_{22} - \mu_2 \phi_{12}\equiv 0$ and $\mu_2\mu_3 \eta_2 -(1+ \mu_2^2)\eta_3\neq 0$. Then the equation is given by \begin{eqnarray}\label{th2.3} z_{0,t}-z_{2,t} =\lambda z_0z_3 -\frac{\lambda}{h'}(z_1 h + z_0z_1 h' + m_1z_1 + m_2z_2), \quad \lambda, \textnormal{ } m_1,\textnormal{ } m_2 \in \mathbb{R}, \quad \lambda m_2\neq 0, \end{eqnarray} \begin{eqnarray}\label{th2.3a} \begin{array}{ll}
f_{11}=h,\\ f_{21} = \mu h + \eta,\\ f_{31} = \left[\frac{m_1(1+\mu^2)}{m_2\eta}-\frac{\mu}{m_2}\right]h+\frac{m_1\mu - \eta}{m_2}, \end{array} \begin{array}{ll} f_{12} = -\lambda z_0 h - \lambda m_2 z_1, \\ f_{22} = -\lambda \mu z_0 h - \lambda m_2\mu z_1 - \lambda \eta z_0, \\ f_{32} = -\lambda z_0f_{31} - \frac{\lambda}{\eta}\left[m_1(1+\mu^2)-\mu\eta\right]z_1, \end{array} \end{eqnarray} where $\mu,\eta\in \mathbb{R}$, $\eta \neq 0$ and $h(z_0-z_2)$ is a real differential function of $z_0-z_2$ satisfying $h'\neq 0$ and \[ (m_2\eta)^2=m_1^2 + (m_1\mu-\eta)^2. \] \end{theorem}
\begin{theorem}\label{teo7.4} \textnormal{\cite{CST}} Consider an equation of type $(\ref{T})$ that describes pseudospherical surfaces, with associated 1-forms $\omega_i=f_{i1}dx+f_{i2}dt$, $1 \leq i \leq 3$, where $f_{ij}$ are real and differentiable functions of $z_k$, satisfying \eqref{1} and \eqref{2.3.1}-\eqref{7.5.1}. Suppose that $\phi_{22} - \mu_2 \phi_{12}\neq 0$ and $\mu_2\mu_3 \eta_2 -(1+ \mu_2^2)\eta_3 = 0$. Then the equation is given by \begin{eqnarray}\label{th2.4} z_{0,t}-z_{2,t} =\lambda z_0z_3+ \frac{1}{h'}\left[z_2\psi_{,z_1}+z_1\psi_{,z_0} + m_1\psi - \lambda z_0z_1h' -\left(\lambda z_1 + \lambda m_1z_0 + m_2\right) h \right], \end{eqnarray} \begin{eqnarray}\label{th2.4a} \begin{array}{ll} & f_{11}=h,\\ & f_{21} = \mu h \pm m_1\sqrt{1+\mu^2},\\ & f_{31} = \pm \sqrt{1+\mu^2} h +m_1\mu, \end{array} \begin{array}{ll} f_{12} = -\lambda z_0 h + \psi, \\ f_{22} = -\lambda \mu z_0 h + \mu \psi \pm m_2\sqrt{1+\mu^2}, \\ f_{32} = \pm \sqrt{1+\mu^2}(\psi- \lambda z_0 h) +\mu m_2, \end{array} \end{eqnarray} where $\mu,\, \lambda,\,m_1,\, m_2$ $\in \mathbb{R}$, $(\lambda m_1)^2+m_2^2 \neq 0$, $\,h(z_0 - z_2)$ and $\psi(z_0,z_1)$ are real and differentiable functions, with $h'\neq 0$. \end{theorem}
\begin{theorem}\label{teo7.5}\textnormal{\cite{CST}} Consider an equation of type $(\ref{T})$ that describes pseudospherical surfaces, with associated 1-forms $\omega_i=f_{i1}dx+f_{i2}dt$, $1 \leq i \leq 3$, where $f_{ij}$ are real and differentiable functions of $z_k$, satisfying \eqref{1} and \eqref{2.3.1}-\eqref{7.5.1}. Suppose that $\phi_{22} - \mu_2 \phi_{12}\neq 0$ and $\mu_2\mu_3 \eta_2 -(1+ \mu_2^2)\eta_3 \neq 0$. Then the equation is given by
\begin{flushleft} $\left( i \right)$
\hspace{0.3 cm}$ z_{0,t}-z_{2,t}= \lambda z_0z_3+ \lambda \left(z_1z_2 - 2 z_0z_1 - \frac{m}{\tau}z_1 \mp \frac{z_2}{\tau}\right) + \tau e^{\pm \tau z_1}\left( \tau z_0z_2 \pm z_1+ m z_2\right)\varphi$ \begin{eqnarray}\label{th2.5i} \hspace*{2cm}\pm e^{\pm \tau z_1}\left(\tau z_0z_1 +\tau z_1z_2 + mz_1\pm z_2\right)\varphi' + z_1^2 e^{\pm \tau z_1}\varphi'',\quad \lambda, m, \tau \in \mathbb{R}, \quad \tau>0, \end{eqnarray} \begin{eqnarray}\label{th2.5a} \begin{array}{ll} f_{11}=a(z_0-z_2)+b,\\ f_{21} = \mu f_{11}+\eta,\\ f_{31} = \pm \left(m-\frac{b\tau}{a}\right)\left(\frac{1+\mu^2}{\eta}f_{11}+\mu\right)\mp \frac{\tau}{a}f_{21},\\ f_{12} =-\lambda z_0f_{11} + \left[\pm\tau(az_0+b)\varphi + az_1\varphi'\right]e^{\pm\tau z_1}\mp\frac{\lambda a}{\tau}z_1,\\ f_{22} = \mu f_{12}- \lambda\eta z_0 \pm \eta\tau e^{\pm\tau z_1}\varphi,\\ f_{32} = \pm\left(m-\frac{b\tau}{a}\right)\left[\frac{1+\mu^2}{\eta}f_{12}-\mu(\lambda z_0\mp\tau e^{\pm\tau z_1}\varphi)\right]\mp\frac{\tau}{a}f_{22}, \end{array} \end{eqnarray}
\noindent where $\mu,\, \eta,\, a,\, b$ $\in$ $\mathbb{R}$, $a\eta\neq 0$, $\varphi(z_0)\neq 0$ is a real differentiable function and \[ (a\eta)^2=(am-b\tau)^2+[\mu(am-b\tau)-\tau\eta]^2. \] \vspace*{0.5 cm} or \vspace*{0.3 cm}
$\left( ii \right)$
\hspace{0.57 cm} \begin{eqnarray}\label{th2.5ii} z_{0,t}-z_{2,t}=\lambda z_0z_3+\lambda(2z_1z_2-3z_0z_1-m_2z_1) + m_1\theta e^{\theta z_0}(\theta z_1^3+z_1z_2+2z_0z_1+m_2z_1) \end{eqnarray}
\noindent with $\lambda$, $\theta$, $m_1, \, m_2 \in \mathbb{R}$, $\theta\neq 0$, $\lambda^2+m_1^2\neq 0,$ \end{flushleft} \begin{eqnarray}\label{th2.5b} \begin{array}{l} f_{11} = a(z_0-z_2)+b,\\ f_{21} = \mu f_{11}+\eta,\\ f_{31} = \pm \sqrt{1+\mu^2}f_{11} \pm \frac{\theta+a\mu\eta}{a\sqrt{1+\mu^2}},\\ f_{12} = -\lambda z_0f_{11}+ am_1\theta e^{\theta z_0}z_1^2+(m_1\theta e^{\theta z_0}-\lambda) \left[\frac{az_0+b}{\theta} \pm (\mu- \frac{a\eta}{\theta})\frac{z_1}{\sqrt{1+\mu^2}} \right],\\
f_{22} = -\lambda z_0f_{21}+\mu a m_1\theta e^{\theta z_0}z_1^2 + (m_1\theta e^{\theta z_0}-\lambda) \frac{1}{\theta} \left[\mu(az_0+b)+\eta \mp (\theta +\mu \eta a)\frac{z_1}{\sqrt{1+\mu^2}}
\right],\\
f_{32} = -\lambda z_0 f_{31} \pm \sqrt{1+\mu^2} a m_1\theta e^{\theta z_0}z_1^2 -(m_1\theta e^{\theta z_0}-\lambda) \frac{1}{\theta} \left\{a\eta z_1 \mp\frac{1}{\sqrt{1+\mu^2}} \left[(1+\mu^2)(az_0+b)+\mu\eta +\frac{\theta}{a}\right]\right\} \end{array} \end{eqnarray}
\noindent where $\mu,\, \eta,\, a \in \mathbb{R}$, $a\neq 0$ and \begin{equation*} b=\frac{a}{2\theta}\left[ \frac{(\mu\theta-\eta a)^2}{a^2(1+\mu^2)}-\frac{a}{\theta}+m_2\theta-1\right]. \label{relaigual} \end{equation*} \end{theorem}
\section{Proof of Theorem \ref{teo}}\label{sec4}
\subsection{Total derivatives and prolongations}
Let us first introduce a compact notation for the time derivatives and mixed derivatives of $u$ in addition to the notation introduced earlier for the spatial derivatives of $u$, by letting \begin{eqnarray}\label{3.10} z_i = \partial_x^i u, \quad w_j = \partial_t^j u, \quad v_k = \partial_t^k u_x, \end{eqnarray} where $z_0 = w_0 = u$ and $z_1 = v_0 = u_x$. We have therefore, \begin{eqnarray*} \begin{array}{ll} z_{i,x} = z_{i+1}, \\ z_{i,t} = \partial_x^{i-2} u_{xxt}, \end{array}\quad \begin{array}{ll} w_{j,x} = \partial_t^j u_x, \\ w_{j,t} = w_{j+1}, \end{array}\quad \begin{array}{ll} v_{k,x}=\partial_t^{k-1}u_{xxt}, \\ v_{k,t}=v_{k+1}, \end{array}\quad \end{eqnarray*} and the total derivatives of a differentiable function $\phi = \phi(x,t,z_0,z_1, w_1, v_1, ..., z_{l}, w_{m}, v_{n})$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary, are given by \begin{eqnarray} && D_x \phi = \phi_x + \sum_{i=0}^l \phi_{z_i}z_{i+1}+ \sum_{j=1}^{m} \phi_{w_j}w_{j,x}+ \sum_{k=1}^{n} \phi_{v_k}v_{k,x},\label{3.11}\\ && D_t \phi = \phi_t + \sum_{i=2}^l \phi_{z_i}z_{i,t}+ \sum_{j=0}^{m} \phi_{w_j}w_{j+1}+ \sum_{k=0}^{n} \phi_{v_k}v_{k+1}.\label{3.12} \end{eqnarray} In particular, we obtain the following expressions for the prolongations of the partial differential equation \eqref{T} \begin{eqnarray}\label{4.1.1} z_{2q,t} = z_{0,t} - \sum_{i=0}^{q -1}D_x^{2i}F, \quad z_{2q +1,t} = z_{1,t} - \sum_{i=0}^{q -1}D_x^{2i+1}F, \end{eqnarray} where $q=1,2,3,\ldots$, $F(z_0,z_1,z_2,z_3)=\lambda z_0z_3+G(z_0,z_1,z_2)$ and $D_x^0 F = F$.
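For instance, taking $q=1$ in \eqref{4.1.1} gives $$ z_{2,t} = z_{0,t}-F, \qquad z_{3,t}=z_{1,t}-D_xF, $$ the first of which is just equation \eqref{T} rewritten in the notation \eqref{3.10}, and the second its total derivative with respect to $x$.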
\subsection{Necessary conditions for the existence of second fundamental forms depending on jets of finite order of $u$}
Our goal in this section is to analyze the system \eqref{gauss}, \eqref{3.8}, \eqref{3.9} governing the components $a$, $b$, $c$ of the second fundamental form and to obtain necessary conditions for the existence of solutions depending on jets of finite order of $u$. We note that since the coefficients $f_{ij}$ appearing in the classification given in Section \ref{sec2}, i.e., in Theorems \ref{teo7.2}-\ref{teo7.5}, depend only on $z_0$, $z_1$ and $z_2$, it follows that the functions $\Delta_{ij}$ defined in \eqref{3.7} depend only on $z_0$, $z_1$ and $z_2$.
\begin{lemma}\label{lemma3.1} Consider an equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, given by the Theorems $\ref{teo7.2}$-$\ref{teo7.5}$. Assume there is a local isometric immersion of the pseudospherical surface, determined by a solution $u(x,t)$ of \eqref{T} satisfying ~\eqref{indep}, for which the coefficients $a$, $b$ and $c$ of the second fundamental form depend on $x, t, z_0, \ldots, z_l, w_1, \ldots, w_m, v_1, \ldots, v_n$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary. Then $ac\neq 0$ on any open set of the domain of $u$. \end{lemma}
\noindent\textbf{Proof}. Firstly, we will show that $c$ is not zero. Then, using the fact that $c\neq 0$, we will show that $a=0$ leads to a contradiction and, thus, conclude that $ac\neq 0$.
Assume $c=0$ on an open set. Then, \eqref{gauss} implies $b=\pm 1$ and \eqref{3.8} and \eqref{3.9} reduce to \begin{eqnarray} f_{11}D_ta - f_{12}D_xa \mp 2\Delta_{13}+a\Delta_{23} = 0,&&\label{3.14a}\\ a\Delta_{13}\pm 2\Delta_{23} = 0.&&\label{3.15a} \end{eqnarray} It follows from \eqref{3.15a} and \eqref{3.10a} that $\Delta_{13}\neq 0$ and $a = \mp 2\Delta_{23}/ \Delta_{13}$. Since $\Delta_{13}$ and $\Delta_{23}$ depend only on $z_0$, $z_1$ and $z_2$, we conclude that $a$ depends only on $z_0$, $z_1$ and $z_2$ and \eqref{3.14a} reduces to \begin{eqnarray}\label{3.16a} f_{11}[(a_{z_0}+a_{z_2})z_{0,t}+a_{z_1}z_{1,t}-a_{z_2}(\lambda z_0z_3+G)]-f_{12}\sum_{i=0}^2 a_{z_i}z_{i+1}\mp 2\Delta_{13}+a\Delta_{23} = 0. \end{eqnarray} Differentiation with respect to $z_{0,t}$, $z_{1,t}$ and $z_3$ implies \begin{eqnarray}\label{3.17a} f_{11}a_{z_1}=f_{11}(a_{z_0}+a_{z_2}) = a_{z_2}(f_{12}+\lambda z_0f_{11}) = 0, \end{eqnarray} where we recall from \eqref{(7.2)} that $f_{12}+\lambda z_0f_{11}=\phi_{12}$. Since $f_{11}=h$ cannot be zero on any open set (see \eqref{2.3.1}), we have $a_{z_1}=a_{z_0}+a_{z_2} = 0$.
If $\phi_{12}\neq 0$, then from \eqref{3.17a} we conclude that $a$ is a constant and \eqref{3.16a} reduces to $\mp 2\Delta_{13}+a\Delta_{23} = 0$. This equation, together with \eqref{3.15a}, implies that $\Delta_{13}=\Delta_{23} = 0$, which contradicts \eqref{3.10a}.
If $\phi_{12}=0$ on an open set, the only equation and corresponding $f_{ij}$ that satisfy this condition are given by \eqref{th2.4} and \eqref{th2.4a} with $\psi = 0$, i.e., given by Theorem \ref{teo7.4}. In that case, \begin{eqnarray*} \Delta_{13} = \pm \mu h(m_2+\lambda m_1 z_0),\quad \Delta_{23} = \mp h(m_2+\lambda m_1 z_0), \end{eqnarray*} and then, by \eqref{3.15a}, $a=\pm 2/\mu$. Therefore, observing that $\Delta_{13}=-\mu \Delta_{23}$, \eqref{3.16a} reduces to $$ 0=\mp 2\Delta_{13}+a\Delta_{23} = \mp 2(-\mu\Delta_{23})+(\pm 2/\mu)\Delta_{23}=\pm \frac{2(1+\mu^2)}{\mu}\Delta_{23} $$ which holds if, and only if, $\Delta_{23}=0$, which implies $\Delta_{13}=0$, thus contradicting \eqref{3.10a}. Hence, $c\neq 0$ on any open set.
From now on, we are assuming $c\neq 0$. If $a = 0$ on a open set, then \eqref{gauss} implies $b = \pm 1$ and \eqref{3.8} and \eqref{3.9} are equivalent to \begin{eqnarray} \mp 2\Delta_{13}-c\Delta_{23} = 0,&&\label{3.8b}\\ f_{21}D_t c -f_{22}D_x c -c\Delta_{13}\pm 2\Delta_{23} = 0,&&\label{3.9b} \end{eqnarray} It follows from \eqref{3.8b} and \eqref{3.10a} that $\Delta_{23}\neq 0$ and $c = \mp 2\Delta_{13}/ \Delta_{23}$. Since $\Delta_{13}$ and $\Delta_{23}$ depend only on $z_0$, $z_1$ and $z_2$, we conclude that $c$ depends only on $z_0$, $z_1$ and $z_2$ and \eqref{3.9b} reduces to \begin{eqnarray} f_{21}\sum_{i=0}^1 c_{z_i}z_{i,t}+f_{21}c_{z_2}[z_{0,t}-(\lambda z_0z_3+G)] -f_{22}\sum_{i=0}^2 c_{z_i}z_{i+1}-c\Delta_{13}\pm 2\Delta_{23} = 0.\label{3.9c} \end{eqnarray}
Using \eqref{1} and taking the derivative of the latter expression with respect to $z_{0,t}$, $z_{1,t}$ and $z_{3}$ implies that \begin{eqnarray}\label{3.9d} f_{21} (c_{z_0}+c_{z_2}) = f_{21} c_{z_1} = (\lambda z_0f_{21} + f_{22})c_{z_2} = 0. \end{eqnarray} Substituting \eqref{3.9d} into \eqref{3.9c} we obtain \begin{eqnarray} -f_{21}c_{z_2}G -f_{22}\sum_{i=0}^1 c_{z_i}z_{i+1}-c\Delta_{13}\pm 2\Delta_{23} = 0.\label{3.9e} \end{eqnarray} Looking at \eqref{3.9d}, it is easy to see that if $f_{21}\neq 0$ and $\lambda z_0f_{21}+f_{22}\neq 0$ then $c$ is a constant and from \eqref{3.8b} and \eqref{3.9b} we obtain $\Delta_{13}=\Delta_{23}=0$, which contradicts \eqref{3.10a}. If $f_{21}= 0$ and $\lambda z_0f_{21}+f_{22}= 0$ on an open set, then, by \eqref{3.8b} and \eqref{3.9b}, $\Delta_{23} = 0$, which is a contradiction. Hence, we have only two possibilities, namely,\\ $$ (i)\hspace*{0.2 cm} f_{21} = 0 \quad \textnormal{and}\quad \lambda z_0f_{21}+f_{22}\neq 0,\qquad (ii)\hspace*{0.2 cm} f_{21} \neq 0 \quad \textnormal{and}\quad \lambda z_0f_{21}+f_{22}= 0, $$ on an open set.
Assuming $(i)$, we observe from \eqref{3.9d} and $\Delta_{12}\neq 0$ that $c_{z_2} = 0$ on an open set. Thus $f_{ij}$ are given by \eqref{th2.4a} with $\mu=m_1 = 0$ and $m_2\neq 0$ or \eqref{th2.5b} with $\mu = m_2 = 0$, i.e., given by Theorems \ref{teo7.4} with $\mu=m_1 = 0$ and $m_2\neq 0$ or \ref{teo7.5}-$(ii)$ with $\mu = m_2 = 0$, respectively.
If $f_{ij}$ are given by \eqref{th2.4a} with $\mu=m_1 = 0$ and $m_2\neq 0$ then $\Delta_{13} = 0$. However, $\Delta_{13}=0$ implies $c=0$, which is a contradiction, since we have already shown that $c\neq 0$. If $f_{ij}$ are given by \eqref{th2.5b} with $\mu = m_2 = 0$ then \begin{eqnarray}\label{3.9f} \mp 2\left[ \pm (\lambda -\theta me^{\theta z_0})z_2 \mp m\theta^2e^{\theta z_0}z_1^2\right]+c\left(h+\frac{\theta}{a}\right)\left(\lambda -\theta me^{\theta z_0}\right)z_1 = 0. \end{eqnarray} Since $c_{z_2} = 0$, differentiating the latter equation with respect to $z_2$ and then with respect to $z_1$, and observing that $\lambda -\theta me^{\theta z_0}\neq 0$, we have $c h' = 0$ on an open set, which contradicts \eqref{2.3.1}. This concludes $(i)$.
Assuming $(ii)$, i.e., $\lambda z_0f_{21}+f_{22}=\phi_{22}= 0$, we necessarily have $\phi_{12}\neq 0$, because $\Delta_{12} = -\phi_{12}f_{21}$. Moreover, from \eqref{3.9d} we conclude that $c_{z_0}+c_{z_2}=c_{z_1}=0$. Thus, $f_{ij}$ are given by \eqref{th2.2a} with $\mu=0$ or \eqref{th2.3a} with $\mu=0$ or \eqref{th2.4a} with $\mu \neq 0$, i.e., given by Theorems \ref{teo7.2} with $\mu=0$ or \ref{teo7.3} with $\mu=0$ or \ref{teo7.4} with $\mu \neq 0$, respectively.
If $f_{ij}$ are given by \eqref{th2.2a} with $\mu=0$, it is easy to see that $\Delta_{13}=0$, which is a contradiction, because $c = \mp 2\Delta_{13}/ \Delta_{23}$ is not zero. If $f_{ij}$ are given by \eqref{th2.3a} with $\mu=0$, it is easy to see that $\Delta_{13}=-\lambda \eta z_1(\neq 0)$ and $\Delta_{23} = -\lambda m_1 z_1(\neq 0)$. Therefore, it follows from $c = \mp 2\Delta_{13}/ \Delta_{23}$ that $c = \mp 2\eta /m_1$ is a constant and, from \eqref{3.8b} and \eqref{3.9b}, we obtain $\Delta_{13}=\Delta_{23}=0$, which contradicts \eqref{3.10a}.
If $f_{ij}$ are given by \eqref{th2.4a} with $\mu\neq 0$ then $c = \pm 2\mu$ is a constant and, from \eqref{3.8b} and \eqref{3.9b}, we obtain $\Delta_{13}=\Delta_{23}=0$, which contradicts \eqref{3.10a}. This concludes $(ii)$. Therefore, $a\neq 0$ on any open set and, thus, we conclude the proof of Lemma \ref{lemma3.1}.
$\Box$
Now, suppose that we have substituted the expressions of the total derivatives with respect to $x$ and $t$ given by \eqref{3.11} and \eqref{3.12} into equations \eqref{3.8} and \eqref{3.9}, i.e., \begin{eqnarray}\label{10} f_{11}a_t + f_{21}b_t - f_{12}a_x - f_{22}b_x - 2b(f_{11}f_{32}-f_{12}f_{31})+(a-c)(f_{21}f_{32}-f_{22}f_{31})\hspace*{3 cm}\nonumber\\ -\sum_{i=0}^l (f_{12}a_{z_i}+f_{22}b_{z_i})z_{i+1}+\sum_{i=2}^l (f_{11}a_{z_i}+f_{21}b_{z_i})\partial_x^{i-2}(z_{0,t}-F) - \sum_{j=1}^m (f_{12}a_{w_j}+f_{22}b_{w_j})v_j\hspace*{1.2 cm}\nonumber\\ +\sum_{j=0}^m (f_{11}a_{w_j}+f_{21}b_{w_j})w_{j+1} - \sum_{k=1}^n (f_{12}a_{v_k}+f_{22}b_{v_k})\partial_t^{k-1}(z_{0,t}-F)+\sum_{k=0}^n (f_{11}a_{v_k}+f_{21}b_{v_k})v_{k+1}=0, \end{eqnarray} and \begin{eqnarray}\label{11} f_{11}b_t + f_{21}c_t - f_{12}b_x - f_{22}c_x + 2b(f_{21}f_{32}-f_{22}f_{31})+(a-c)(f_{11}f_{32}-f_{12}f_{31})\hspace*{3 cm}\nonumber\\ -\sum_{i=0}^l (f_{12}b_{z_i}+f_{22}c_{z_i})z_{i+1}+\sum_{i=2}^l (f_{11}b_{z_i}+f_{21}c_{z_i})\partial_x^{i-2}(z_{0,t}-F) - \sum_{j=1}^m (f_{12}b_{w_j}+f_{22}c_{w_j})v_j\hspace*{1.2 cm}\nonumber\\ +\sum_{j=0}^m (f_{11}b_{w_j}+f_{21}c_{w_j})w_{j+1} - \sum_{k=1}^n (f_{12}b_{v_k}+f_{22}c_{v_k})\partial_t^{k-1}(z_{0,t}-F)+\sum_{k=0}^n (f_{11}b_{v_k}+f_{21}c_{v_k})v_{k+1}=0. \end{eqnarray}
If $m=n$, then differentiating \eqref{10} and \eqref{11} with respect to $v_{n +1}$ and $w_{n+1}$ leads to \begin{eqnarray}\label{13} \begin{array}{rr} f_{11}a_{v_n}+f_{21}b_{v_n} = 0,\\ f_{11}b_{v_n}+f_{21}c_{v_n} = 0, \end{array} \begin{array}{rr} f_{11}a_{w_n}+f_{21}b_{w_n} = 0,\\ f_{11}b_{w_n}+f_{21}c_{w_n} = 0. \end{array} \end{eqnarray}
If $f_{21}\neq 0$ on a non-empty open set (which is the case of the equations and $f_{ij}$ given by Theorems \ref{teo7.2}, \ref{teo7.3} and \ref{teo7.5}-(i) and may also be the case of the equations given by Theorems \ref{teo7.4} and \ref{teo7.5}-(ii)) then
\begin{eqnarray}\label{13,a} \begin{array}{rcl} && b_{v_n} = -\frac{f_{11}}{f_{21}}a_{v_n},\\ && c_{v_n} = \left(\frac{f_{11}}{f_{21}}\right)^2 a_{v_n}, \end{array} \begin{array}{rcl} && b_{w_n} = -\frac{f_{11}}{f_{21}}a_{w_n},\\ && c_{w_n} = \left(\frac{f_{11}}{f_{21}}\right)^2 a_{w_n}. \end{array} \end{eqnarray}
Differentiating the Gauss equation with respect to $v_n$ and $w_n$ leads to $a_{v_n}c+ac_{v_n}-2bb_{v_n} = 0$ and $a_{w_n}c+ac_{w_n}-2bb_{w_n} = 0$, respectively, and using \eqref{13,a} in such derivatives we obtain \begin{eqnarray}\label{1.18} \left[ c + \left(\frac{f_{11}}{f_{21}}\right)^2 a+2\frac{f_{11}}{f_{21}} b\right] a_{v_n}=0, \quad \left[ c + \left(\frac{f_{11}}{f_{21}}\right)^2 a+2\frac{f_{11}}{f_{21}} b\right] a_{w_n}=0. \end{eqnarray}
The equation \eqref{1.18} holds when $m=n$ and $f_{21}\neq 0$. The cases $m < n$ or $m > n$ need to be considered separately, and they will be analyzed in Lemmas \ref{lemma3.2}-\ref{lemma3.4}. The case $f_{21}\equiv 0$ will be considered in Lemma \ref{lemma3.4}.
The discussion leading to \eqref{1.18} shows that the analysis of the Codazzi equations (\eqref{3.8} and \eqref{3.9}) splits naturally into several branches which are characterized by the vanishing or non-vanishing of $f_{21}$ and of the expression between brackets in \eqref{1.18}. The various cases are treated in Lemmas \ref{lemma3.3}-\ref{lemma3.4} and are organized according to the figure below.
\begin{lemma}\label{lemma3.3} Consider an equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, as given by the Theorems $\ref{teo7.2}$-$\ref{teo7.5}$. Assume there is a local isometric immersion of the pseudospherical surface determined by a solution $u(x,t)$ of \eqref{T}, for which the coefficients $a$, $b$ and $c$ of the second fundamental form depend on $x, t, z_0, \ldots, z_l, w_1, \ldots, w_m, v_1, \ldots, v_n$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary. Suppose $f_{21}\neq 0$ on a non-empty open set. If \begin{eqnarray}\label{19} c + \left(\frac{f_{11}}{f_{21}}\right)^2 a+2\frac{f_{11}}{f_{21}} b = 0, \end{eqnarray} on a non-empty open set, then the equations \eqref{gauss}, \eqref{3.8} and \eqref{3.9} form an inconsistent system. \end{lemma}
\noindent\textbf{Proof}. Firstly, let us use \eqref{19} and the Gauss equation in order to obtain $b$ and $c$ in terms of $a$, $f_{11}$ and $f_{21}$. We will then substitute the total derivatives of $b$ and $c$ back into \eqref{3.8} and \eqref{3.9}.
If \eqref{19} holds then substituting $c$ into the Gauss equation leads to \begin{eqnarray} && b = \pm 1 - \frac{f_{11}}{f_{21}}a,\label{70}\\ && c = \left(\frac{f_{11}}{f_{21}}\right)^2 a \mp 2\frac{f_{11}}{f_{21}}.\label{70.1} \end{eqnarray} Moreover, using \eqref{1} and \eqref{2.3.1} we can see that $(f_{11}/f_{21})_{,z_0}+(f_{11}/f_{21})_{,z_2} = 0$, and thus \begin{eqnarray*} f_{11}D_t a + f_{21}D_t b &=& f_{21} (\lambda z_0z_3+G)\left(\frac{f_{11}}{f_{21}}\right)_{,z_2}a,\\ f_{12}D_x a+f_{22}D_x b &=& -\frac{\Delta_{12}}{f_{21}}D_x a - f_{22}a \left[\left(\frac{f_{11}}{f_{21}}\right)_{,z_0}z_1 +\left(\frac{f_{11}}{f_{21}}\right)_{,z_2}z_3\right],\\ f_{11}D_t b+f_{21}D_t c &=& -(f_{11}a \mp 2f_{21})(\lambda z_0z_3+G)\left(\frac{f_{11}}{f_{21}}\right)_{,z_2},\\ f_{12}D_xb + f_{22}D_x c &=& \frac{f_{11}}{f_{21}}\frac{\Delta_{12}}{f_{21}}D_x a +\left[\frac{\Delta_{12}}{f_{21}}a+\frac{f_{22}}{f_{21}}(f_{11}a \mp 2f_{21})\right]\left[\left(\frac{f_{11}}{f_{21}}\right)_{,z_0}z_1 + \left(\frac{f_{11}}{f_{21}}\right)_{,z_2}z_3\right]. \end{eqnarray*} Equation \eqref{3.8} becomes, \begin{eqnarray}\label{5.41} a[f_{21}G+(\lambda z_0f_{21}+f_{22})z_3]\left(\frac{f_{11}}{f_{21}}\right)_{,z_2}+af_{22}\left(\frac{f_{11}}{f_{21}}\right)_{,z_0}z_1 + \frac{\Delta_{12}}{f_{21}}D_xa -2b\Delta_{13}+(a-c)\Delta_{23} = 0, \end{eqnarray} and \eqref{3.9} becomes \begin{eqnarray}\label{5.42} -\frac{1}{f_{21}}(f_{11}a\mp 2 f_{21})[f_{21}G+(\lambda z_0 f_{21}+f_{22})z_3] \left(\frac{f_{11}}{f_{21}}\right)_{,z_2} -\frac{\Delta_{12}}{f_{21}}a\left[\left(\frac{f_{11}}{f_{21}}\right)_{,z_0}z_1+\left(\frac{f_{11}}{f_{21}}\right)_{,z_2}z_3\right]\hspace*{1 cm}\nonumber \\ -\frac{f_{22}}{f_{21}}(f_{11}a\mp 2f_{21}) \left(\frac{f_{11}}{f_{21}}\right)_{,z_0}z_1 -\frac{f_{11}}{f_{21}}\frac{\Delta_{12}}{f_{21}}D_xa +(a-c)\Delta_{13}+2b\Delta_{23} = 0. \end{eqnarray}
Observe that \eqref{5.41} and \eqref{5.42} involve only the total derivatives of the coefficient $a$ of the second fundamental form of the local isometric immersion. We are going to use expressions \eqref{5.41} and \eqref{5.42} in an equivalent form that will be more convenient to work with when the expressions of $G$ and $f_{ij}$ given in Theorems \ref{teo7.2}-\ref{teo7.5} are taken into account.
Adding equation \eqref{5.41} multiplied by $f_{11}/f_{21}$ with \eqref{5.42}, we get \begin{eqnarray}\label{5.43} \left[ \pm 2f_{21}G\pm 2(f_{22}+\lambda z_0f_{21})z_3 -\frac{\Delta_{12}}{f_{21}}az_3\right]\left(\frac{f_{11}}{f_{21}}\right)_{,z_2} + \left[-\frac{\Delta_{12}}{f_{21}}a\pm 2 f_{22}\right]\left(\frac{f_{11}}{f_{21}}\right)_{,z_0}z_1\hspace*{1 cm}\nonumber\\ +\left[1+\left(\frac{f_{11}}{f_{21}}\right)^2\right]\left[a\Delta_{13}+\left(-\frac{f_{11}}{f_{21}}a \pm 2\right)\Delta_{23} \right] = 0. \end{eqnarray} Taking the $v_k$ and $w_j$, $1\leq k\leq n$ and $1\leq j\leq m$, derivatives of \eqref{5.43}, we have, respectively, \begin{eqnarray}\label{5.43b} Q a_{v_k} = 0, \quad Q a_{w_j} = 0, \end{eqnarray} where \begin{eqnarray}\label{5.43a} Q = -\frac{\Delta_{12}}{f_{21}}\left(\frac{f_{11}}{f_{21}}\right)_{,z_2}z_3 -\frac{\Delta_{12}}{f_{21}}\left(\frac{f_{11}}{f_{21}}\right)_{,z_0}z_1+\left[1+\left(\frac{f_{11}}{f_{21}}\right)^2\right]\left(\Delta_{13}-\frac{f_{11}}{f_{21}}\Delta_{23} \right). \end{eqnarray}
Suppose $Q = 0$ on a non-empty open set. Differentiating $Q$ with respect to $z_3$ we have $(f_{11}/f_{21})_{,z_2} = 0$ and, consequently, $(f_{11}/f_{21})_{,z_0} = 0$. Hence $f_{11}/f_{21}$ is a nonzero constant, which happens only in the branches of the classification corresponding to Theorems \ref{teo7.4} with $m_1=0$ and \ref{teo7.5}-$(ii)$ with $\eta = 0$.
If $f_{ij}$ are given by \eqref{th2.4a}, i.e., Theorem \ref{teo7.4}, with $m_1 = 0$, then $$ \frac{f_{11}}{f_{21}}=\frac{1}{\mu}, \quad \Delta_{12}=m_2\sqrt{1+\mu^2}, \quad \Delta_{13}=\pm m_2 \mu h, \quad \Delta_{23} = \mp m_2h. $$ Therefore, \eqref{5.43a} implies that $\Delta_{13}-f_{11}\Delta_{23}/f_{21}=0$ if, and only if, $m_2 = 0$ and, thus, a contradiction.
On the other hand, if $f_{ij}$ are given by \eqref{th2.5b}, i.e., Theorem \ref{teo7.5}-$(ii)$, with $\eta = 0$, then $$ \frac{f_{11}}{f_{21}}=\frac{1}{\mu}, \quad \Delta_{12}=\pm \sqrt{1+\mu^2}f_{11}z_1, \quad \Delta_{13}=(m_1\theta e^{\theta z_0}-\lambda)f_{11}\left(-\mu z_1\pm \frac{1}{A\sqrt{1+\mu^2}}\right)\mp \frac{\theta}{A\sqrt{1+\mu^2}}, $$ $$ \Delta_{23} = (m_1\theta e^{\theta z_0}-\lambda)f_{11}\left( z_1 \pm \frac{\mu}{A\sqrt{1+\mu^2}}\right)\mp \frac{\theta}{A\sqrt{1+\mu^2}}\phi_{22}. $$ Therefore, \eqref{5.43a} is equivalent to \begin{eqnarray}\label{(54)} 0=\mu \Delta_{13}-\Delta_{23} = -(m_1\theta e^{\theta z_0}-\lambda)(1+\mu^2)z_1f_{11}\pm \frac{\theta}{A\sqrt{1+\mu^2}}(\phi_{22}-\mu), \end{eqnarray} where by \eqref{(7.2)} we know that $$ \phi_{22} = f_{22}+\lambda z_0f_{21}= \mu a m_1\theta e^{\theta z_0}z_1^2 + (m_1\theta e^{\theta z_0}-\lambda) \frac{1}{\theta} \left[\mu(az_0+b) \mp \frac{\theta z_1}{\sqrt{1+\mu^2}}
\right]. $$ Differentiating \eqref{(54)} with respect to $z_2$ leads to $-(m_1\theta e^{\theta z_0}-\lambda)(1+\mu^2)z_1f_{11,z_2} = 0$, which holds if, and only if, $m_1\theta e^{\theta z_0}-\lambda=0$, i.e., if, and only if, $\lambda=m_1 = 0$ and, thus, a contradiction. Hence, we have shown that $Q$ cannot vanish on a non-empty open set.
Let $Q\neq 0$ on a non-empty open set. We are going to show that this also leads to a contradiction. In this case, \eqref{5.43b} implies that $a_{v_k} = a_{w_j} = 0$, $k =1,2, \ldots,n$ and $j=1,2, \ldots, m$ and, thus, $a$ is a function depending only on $x, t, z_0, \ldots, z_l$. But, differentiating \eqref{5.43} with respect to $z_l$, $l\geq 4$, we also obtain $Qa_{z_l} = 0$ where $Q$ is given by \eqref{5.43a} and, since $Q\neq 0$, we conclude that $a$ depends only on $x, t, z_0, \ldots, z_3$. Moreover, differentiating \eqref{5.41} with respect to $z_4$ leads to \begin{eqnarray*} \frac{\Delta_{12}}{f_{21}}a_{z_3} = 0, \end{eqnarray*} i.e., $a_{z_3} = 0$, on an open set.
Taking the $z_3$ derivative of \eqref{5.43}, we obtain \begin{eqnarray}\label{5.44} \left[-\frac{\Delta_{12}}{f_{21}}a\pm 2(f_{22}+\lambda z_0f_{21})\right]\left(\frac{f_{11}}{f_{21}}\right)_{,z_2}=0. \end{eqnarray}
Suppose $\left(f_{11}/f_{21}\right)_{,z_2} \equiv 0$, which happens only in \eqref{th2.4a} with $m_1=0$ or \eqref{th2.5b} with $\eta = 0$, i.e., in the branches of the classification corresponding to Theorems \ref{teo7.4} with $m_1=0$ and \ref{teo7.5}-$(ii)$ with $\eta=0$. Therefore, if $f_{ij}$ are given by \eqref{th2.4a} with $m_1 = 0$ then $$ \frac{f_{11}}{f_{21}}=\frac{1}{\mu}, \quad \Delta_{12}=m_2\sqrt{1+\mu^2}h, \quad \Delta_{13}=\pm m_2 \mu h, \quad \Delta_{23} = \mp m_2h. $$ Thus, \eqref{5.44} is satisfied and, substituting back into \eqref{5.43}, we get \begin{eqnarray*} 0=a\Delta_{13}+\left(-\frac{f_{11}}{f_{21}}a \pm 2\right)\Delta_{23} = \pm m_2h\left[a\left(\frac{1+\mu^2}{\mu}\right) \mp2\right], \end{eqnarray*} i.e., since $m_2h\neq 0$, $a$ is a constant. But if $a$ is a constant, it follows from \eqref{5.41} and \eqref{5.42} that \begin{eqnarray*} -2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\\ (a-c)\Delta_{13}+2b\Delta_{23} = 0,&& \end{eqnarray*} which implies that $\Delta_{13}=\Delta_{23}=0$ and, thus, a contradiction with \eqref{3.10a}.
On the other hand, considering the case in which $f_{ij}$ are given by \eqref{th2.5b} with $\eta = 0$ we get $$ \frac{f_{11}}{f_{21}}=\frac{1}{\mu}, \quad \Delta_{12}=\pm \sqrt{1+\mu^2}f_{11}z_1, \quad \Delta_{13}=(m_1\theta e^{\theta z_0}-\lambda)f_{11}\left(-\mu z_1\pm \frac{1}{A\sqrt{1+\mu^2}}\right)\mp \frac{\theta}{A\sqrt{1+\mu^2}}, $$ $$ \Delta_{23} = (m_1\theta e^{\theta z_0}-\lambda)f_{11}\left( z_1 \pm \frac{\mu}{A\sqrt{1+\mu^2}}\right)\mp \frac{\theta}{A\sqrt{1+\mu^2}}\phi_{22}. $$ Therefore, \eqref{5.44} is satisfied and, substituting back into \eqref{5.43}, we get \begin{eqnarray}\label{5.45} 0=a\Delta_{13}+\left(-\frac{a}{\mu} \pm 2\right)\Delta_{23} =\frac{1}{\mu} \left[ a(\mu \Delta_{13}-\Delta_{23})\pm 2\mu \Delta_{23}\right], \end{eqnarray} and since \begin{eqnarray*} 0\neq \mu \Delta_{13}-\Delta_{23} = -(m_1\theta e^{\theta z_0}-\lambda)(1+\mu^2)z_1f_{11}\pm \frac{\theta}{A\sqrt{1+\mu^2}}(\phi_{22}-\mu), \end{eqnarray*} on a open set, we observe that $a$ depends only on $z_0$, $z_1$ and $z_2$. Then, from \eqref{5.41} and \eqref{5.42}, we obtain \begin{eqnarray} \frac{\Delta_{12}}{f_{21}}D_xa -2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\label{5.46}\\ -\frac{1}{\mu}\frac{\Delta_{12}}{f_{21}}D_xa +(a-c)\Delta_{13}+2b\Delta_{23} = 0.&&\label{5.47} \end{eqnarray}
Differentiating \eqref{5.46} with respect to $z_3$ leads to $\Delta_{12}a_{z_2}/f_{21}=0$ and, therefore, $a_{z_2}=0$. Differentiating \eqref{5.45} with respect to $z_2$ and replacing the result back into \eqref{5.42} leads to \begin{eqnarray} -a(1+\mu^2)z_1 \pm 2\mu \left(z_1\pm \frac{\mu}{A\sqrt{1+\mu^2}}\right)=0,&&\label{5.48}\\ (\pm a-2\mu)\theta \phi_{22}\mp a\mu = 0.&&\label{5.49} \end{eqnarray} It follows from \eqref{5.46} and \eqref{5.47} that $a$ can not be constant, otherwise, \begin{eqnarray*}
-2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\\ (a-c)\Delta_{13}+2b\Delta_{23} = 0,&& \end{eqnarray*} and thus $\Delta_{13}=\Delta_{23}=0$, contradicting \eqref{3.10a}. Hence, \eqref{5.48} and \eqref{5.49} imply that $\phi_{22}$ depends only on $z_1$, i.e., \begin{eqnarray*} 0=\phi_{22,z_0}= \mu Am_1\theta^2 e^{\theta z_0}z_1^2+m_1\theta e^{\theta z_0} \left[\mu(Az_0+b)\mp \frac{\theta z_1}{\sqrt{1+\mu^2}}\right]+(m_1\theta e^{\theta z_0}-\lambda)\frac{\mu A}{\theta}. \end{eqnarray*} Differentiating the latter expression twice in $z_1$ we obtain $\mu Am_1\theta^2 e^{\theta z_0}=0$ which holds if, and only if, $m_1=0$. But, $m_1=0$ in the latter equation gives us $-\lambda \mu A/\theta=0$ which implies $\lambda = 0$. Hence, $m_1=\lambda = 0$ which is a contradiction with $\Delta_{12}\neq 0$. Thus, we have shown that $\left(f_{11}/f_{21}\right)_{,z_2}$ does not vanish on a non-empty open set.
Let us go back to equation \eqref{5.44} and analyze the case $\left(f_{11}/f_{21}\right)_{,z_2} \neq 0$ on a non-empty open set. This condition holds in \eqref{th2.2a} or \eqref{th2.3a} or \eqref{th2.4a} with $m_1\neq 0$ or \eqref{th2.5a} or \eqref{th2.5b} with $\eta\neq 0$, i.e., in the branches of the classification corresponding to Theorems \ref{teo7.2}-\ref{teo7.5}.
From \eqref{5.44}, $a$ is given by \begin{eqnarray}\label{5.50} a = \pm 2\frac{\phi_{22}f_{21}}{\Delta_{12}}, \end{eqnarray} where, by \eqref{(7.2)}, we know that $\phi_{22}=f_{22}+\lambda z_0f_{21}$. Thus, using \eqref{5.50} and the fact that $\Delta_{13}-f_{11}\Delta_{23}/f_{21}=f_{31}\Delta_{12}/f_{21}$, the equations \eqref{5.41} and \eqref{5.43} are equivalent to \begin{eqnarray}\label{5.51}
a(f_{21}G -f_{22}z_1)\left(\frac{f_{11}}{f_{21}}\right)_{,z_2} + \frac{\Delta_{12}}{f_{21}}\sum_{i=0}^1 a_{z_i}z_{i+1}+a \left[1+\left(\frac{f_{11}}{f_{21}}\right)^2\right]\Delta_{23}-2\left[\pm 1-a\frac{f_{11}}{f_{21}}\right]\frac{f_{31}}{f_{21}}\Delta_{12} = 0, \end{eqnarray} \begin{eqnarray}\label{5.52}
\pm 2f_{21}G\left(\frac{f_{11}}{f_{21}}\right)_{,z_2} \mp 2 \lambda z_0z_1f_{21}\left(\frac{f_{11}}{f_{21}}\right)_{,z_0}+\left[1+\left(\frac{f_{11}}{f_{21}}\right)^2\right]\left(a\frac{f_{31}}{f_{21}}\Delta_{12}\pm 2\Delta_{23} \right) = 0. \end{eqnarray} Remember that $\left(f_{11}/f_{21}\right)_{,z_0}+\left(f_{11}/f_{21}\right)_{,z_2} = 0$ and, by \eqref{(7.2)}, we also have $af_{31}\Delta_{12}/f_{21}\pm 2\Delta_{23} = \pm 2f_{21}\phi_{32}$. Hence, it follows from \eqref{5.52} that \begin{eqnarray}\label{5.53} G = -\lambda z_0z_1 - \frac{(f_{11}^2+f_{21}^2)}{f_{21}^2}\frac{\phi_{32}}{L},\quad \textnormal{where}\quad L:=\left(\frac{f_{11}}{f_{21}}\right)_{,z_2}. \end{eqnarray}
If $G$ and $f_{ij}$ are given as in \eqref{th2.2} and \eqref{th2.2a}, i.e., Theorem \ref{teo7.2}, from \eqref{5.53} we get $L = -m\sqrt{1+\mu^2}h'/f_{21}^2$ and \begin{eqnarray}\label{5.54}
z_1\psi_{,z_0} + z_2 \psi_{,z_1} \pm m\psi =G= \pm\frac{1}{m}(f_{11}^2+f_{21}^2)\psi. \end{eqnarray} Differentiating the latter with respect to $z_2$, we see that there exists a function $P=P(z_0)$ such that \begin{eqnarray*}
\frac{\psi_{,z_1}}{\psi} = \pm\frac{1}{m}(f_{11}^2+f_{21}^2)_{,z_2}=P, \end{eqnarray*} i.e., there exist functions $R=R(z_0)$ and $S=S(z_0)$ such that \begin{eqnarray} \psi &=& Re^{Pz_1},\quad R\neq 0,\label{5.55}\\ \pm \frac{1}{m}(f_{11}^2+f_{21}^2) &=& Pz_2 + S.\label{5.56} \end{eqnarray} Differentiating \eqref{5.56} with respect to $z_0$ and adding the result to the $z_2$ derivative of \eqref{5.56}, and using $f_{11,z_0}+f_{11,z_2}=0$ and $f_{21,z_0}+f_{21,z_2}=0$, we obtain $P=A$ and $S = -Az_0+C$, where $A$ and $C$ are constants with $A\neq 0$. In fact, if $A=0$ then $P=0$ and $S=C$, and differentiating \eqref{5.56} with respect to $z_2$ leads to $$ -(1+\mu^2)h \mp m\mu\sqrt{1+\mu^2} = 0, $$ because $h'\neq 0$ on an open set. But, differentiating the latter again with respect to $z_2$ we get $(1+\mu^2)h'=0$ and, thus, a contradiction. So, $A\neq 0$.
Substituting \eqref{5.55} and \eqref{5.56} into \eqref{5.54}, we have \begin{eqnarray}\label{l} z_1 R' +(\pm m +Az_0 -C)R = 0. \end{eqnarray} Differentiating the latter with respect to $z_1$ we get that $R$ is constant; replacing this back into \eqref{l} and differentiating with respect to $z_0$ gives $AR=0$, which implies $R=0$ and, thus, a contradiction.
If $G$ and $f_{ij}$ are given as in Theorem \ref{teo7.3}, then from \eqref{5.53} we obtain $L = -\eta h'/f_{21}^2$ and $$ \eta^2( z_1h + m_1z_1 +m_2z_2) = (f_{11}^2+f_{21}^2)[m_1(1+\mu^2)-\mu\eta]z_1. $$ Taking the $z_1$ derivative of the last equation and replacing the result back into the same equation we obtain $\eta^2 m_2 = 0$, which contradicts the condition $\eta m_2\neq 0$ appearing in Theorem \ref{teo7.3}.
If $G$ and $f_{ij}$ are given as Theorem \ref{teo7.4} with $m_1\neq 0$, then from \eqref{5.53} we obtain $L = -m_1\sqrt{1+\mu^2}h'/f_{21}^2$ and \begin{eqnarray}\label{(68)a} -(\lambda z_1\pm \lambda m_1 z_0\pm m_2)h+z_1\psi_{,z_0}+z_2\psi_{,z_1}\pm m_1\psi = \frac{f_{11}^2+f_{21}^2}{m_1\sqrt{1+\mu^2}}[\pm \sqrt{1+\mu^2}\psi \pm \lambda m_1\mu z_0\pm m_2\mu]. \end{eqnarray} Differentiating \eqref{(68)a} with respect to $z_0$ and $z_2$ and adding both results lead to \begin{eqnarray}\label{(69)a} \mp \lambda m_1h + z_1\psi_{,z_0z_0} + z_2 \psi_{,z_1z_0}\pm m_1\psi_{,z_0}+\psi_{,z_1} = \frac{f_{11}^2+f_{21}^2}{m_1\sqrt{1+\mu^2}}[\pm \sqrt{1+\mu^2}\psi_{,z_0} \pm \lambda m_1\mu ]. \end{eqnarray} Likewise, differentiating \eqref{(69)a} with respect to $z_0$ and $z_2$ and adding both results, we obtain \begin{eqnarray}\label{(70)a}
z_1\psi_{,z_0z_0z_0} + z_2 \psi_{,z_1z_0z_0}\pm m_1\psi_{,z_0z_0}+2\psi_{,z_1z_0} = \pm\frac{f_{11}^2+f_{21}^2}{m_1}\psi_{,z_0z_0}. \end{eqnarray} Taking the $z_2$ derivative of \eqref{(70)a}, we have \begin{eqnarray}\label{(71)a} \psi_{,z_1z_0z_0} = \pm \frac{2}{m_1}(f_{11}f_{11,z_2}+f_{21}f_{21,z_2})\psi_{,z_0z_0}. \end{eqnarray} We now divide our analysis into two cases, according to whether $\psi_{,z_0z_0}\equiv 0$ or $\psi_{,z_0z_0}\neq 0$.
If $\psi_{,z_0z_0}\equiv 0$ then we have from \eqref{(71)a} that $\psi_{,z_1z_0z_0}=0$ and, by \eqref{(70)a}, $\psi_{,z_1z_0} = 0$. Hence, $\psi = Az_0 + N$, where $A$ is a constant and $N=N(z_1)$ is a differentiable function. It follows from \eqref{(69)a} that \begin{eqnarray}\label{(72)a} \mp \lambda m_1h \pm m_1\psi_{,z_0}+\psi_{,z_1} = \frac{f_{11}^2+f_{21}^2}{m_1\sqrt{1+\mu^2}}[\pm \sqrt{1+\mu^2}A \pm \lambda m_1\mu ]. \end{eqnarray} Differentiating \eqref{(72)a} with respect to $z_2$, since $h'\neq 0$, leads to $$ \mp \lambda m_1 = 2\frac{\pm \sqrt{1+\mu^2}A \pm \lambda m_1\mu }{m_1\sqrt{1+\mu^2}}(-f_{11}-\mu f_{21}). $$ Differentiating the last equation with respect to $z_2$ implies that $\pm \sqrt{1+\mu^2}A \pm \lambda m_1\mu =0$ and, from the last equation again, we have $\lambda m_1 = 0$. Since $m_1\neq 0$ we have $\lambda =0$. Thus, $A=0$. Hence, $\psi = N$. But, from \eqref{(69)a}, since $A=\lambda =0$, we have $N'=0$, i.e., $N$ is a constant. Finally, the equation \eqref{(68)a} gives us \begin{eqnarray}\label{(73)a} \mp m_2h \pm m_1N = \frac{f_{11}^2+f_{21}^2}{m_1\sqrt{1+\mu^2}}[\pm \sqrt{1+\mu^2}N \pm m_2\mu]. \end{eqnarray} Taking the $z_2$ derivative of the last equation, since $h'\neq 0$, we get $$ \mp m_2 = 2\frac{\pm \sqrt{1+\mu^2}N \pm m_2\mu }{m_1\sqrt{1+\mu^2}}(-f_{11}-\mu f_{21}). $$ Differentiating the latter equation with respect to $z_2$ leads to $\pm \sqrt{1+\mu^2}N \pm m_2\mu=0$ and, thus, $m_2=0$. But, $\lambda = m_2 = 0$ contradicts the fact of $\Delta_{12}\neq 0$. Hence, we have shown that from equation \eqref{(71)a} we can not have $\psi_{,z_0z_0}\equiv 0$.
Let us now consider the case $\psi_{,z_0z_0}\neq 0$ in \eqref{(71)a}. So, it follows from \eqref{(71)a} that \begin{eqnarray}\label{m} \frac{\psi_{,z_1z_0z_0}}{\psi_{,z_0z_0}} = \pm \frac{2}{m_1}(f_{11}f_{11,z_2}+f_{21}f_{21,z_2}) = R(z_0), \end{eqnarray} where $R=R(z_0)$ is a differentiable function. Equation \eqref{m} may be written as \begin{eqnarray} \psi_{,z_1z_0z_0} &=& R\psi_{,z_0z_0},\label{(74,1)}\\ f_{11}^2+f_{21}^2 &=& \pm m_1 R z_2 + S(z_0),\label{(75,1)} \end{eqnarray} where $S=S(z_0)$ is a differentiable function. Taking the $z_0$ and $z_2$ derivatives of \eqref{(75,1)}, adding the results and using $f_{11,z_0}+f_{11,z_2} = 0$ and $f_{21,z_0}+f_{21,z_2} = 0$, we obtain $R = -A$, with $A$ constant, and $S = \pm Am_1 z_0+B$, with $B$ constant. Hence, \begin{eqnarray} f_{11}^2+f_{21}^2 &=& \pm m_1 A (z_0- z_2) + B,\label{(76,1)} \end{eqnarray} and integrating \eqref{(74,1)} once with respect to $z_0$, we get \begin{eqnarray} \psi_{,z_1z_0} &=& -A\psi_{,z_0}+T(z_1),\label{(77,1)} \end{eqnarray} where $T=T(z_1)$ is a differentiable function. Substituting \eqref{(76,1)} and \eqref{(77,1)} into \eqref{(69)a} leads to {\small \begin{eqnarray}\label{(78,1)} \mp \lambda m_1h + z_1\psi_{,z_0z_0} + z_2[-A\psi_{,z_0}+T(z_1)]\pm m_1\psi_{,z_0}+\psi_{,z_1} = \frac{[\pm m_1 A (z_0- z_2) + B]}{m_1\sqrt{1+\mu^2}}[\pm \sqrt{1+\mu^2}\psi_{,z_0} \pm \lambda m_1\mu ]. \end{eqnarray}} Taking the $z_2$ derivative of \eqref{(78,1)}, we have \begin{eqnarray}\label{(79,1)} \pm \lambda m_1h' = -T(z_1) \mp \frac{\lambda m_1\mu A}{\sqrt{1+\mu^2}} = \lambda m_1 C, \end{eqnarray} where $C$ is a nonzero constant, since $h'\neq 0$. Thus, from \eqref{(79,1)} we obtain $f_{11} = h = \pm C(z_0-z_2)+D$, where $D$ is a constant. But, replacing $f_{11}$ into \eqref{(76,1)}, using $f_{21} = \mu f_{11}\pm m_1\sqrt{1+\mu^2}$ and differentiating the resulting expression twice with respect to $z_0$ leads to $C=0$ and, thus, a contradiction with $h'\neq 0$. Therefore, we have shown that from equation \eqref{(71)a} we cannot have $\psi_{,z_0z_0}\neq 0$ on a non-empty open set either. So, \eqref{5.53} does not hold if $G$ and $f_{ij}$ are given as in Theorem \ref{teo7.4} with $m_1\neq 0$.
Now, let us consider $G$ and $f_{ij}$ given by Theorem \ref{teo7.5}-$(i)$. Then, from \eqref{5.53} we obtain $L = -A\eta/f_{21}^2$ and \begin{eqnarray}\label{(74)a} \lambda \left(z_1z_2 - 2 z_0z_1 - \frac{m}{\tau}z_1 \mp \frac{z_2}{\tau}\right) + \tau e^{\pm \tau z_1}\left( \tau z_0z_2 \pm z_1+ m z_2\right)\varphi\hspace*{4 cm}\nonumber\\ \pm e^{\pm \tau z_1}\left(\tau z_0z_1 +\tau z_1z_2 + mz_1\pm z_2\right)\varphi' + z_1^2 e^{\pm \tau z_1}\varphi''=-\lambda z_0z_1+\frac{f_{11}^2+f_{21}^2}{A\eta}\phi_{32}. \end{eqnarray} Taking the $z_2$ derivative of \eqref{(74)a}, we obtain \begin{eqnarray}\label{(74)a} \lambda \left(z_1 \mp \frac{1}{\tau}\right) + \tau e^{\pm \tau z_1}\left( \tau z_0 + m \right)\varphi \pm e^{\pm \tau z_1}\left(\tau z_1 \pm 1\right)\varphi' =-2\frac{f_{11}+\mu f_{21}}{\eta}\phi_{32}, \end{eqnarray} whose derivative with respect to $z_2$ leads to $0=(1+\mu^2)A$, i.e., $A=0$, which contradicts $f_{11,z_2}\neq 0$.
Finally, if $G$ and $f_{ij}$ are given by Theorem \ref{teo7.5}-$(ii)$ with $\eta\neq 0$ then, from \eqref{5.53}, we have $L = -A\eta/f_{21}^2$ and \begin{eqnarray}\label{(76)a1} \lambda(2z_1z_2-3z_0z_1-m_2z_1) + m_1\theta e^{\theta z_0}(\theta z_1^3+z_1z_2+2z_0z_1+m_2z_1) = -\lambda z_0z_1+\frac{f_{11}^2+f_{21}^2}{A\eta}\phi_{32}, \end{eqnarray} where $$ \phi_{32} = \pm \sqrt{1+\mu^2} A m_1\theta e^{\theta z_0}z_1^2 -(m_1\theta e^{\theta z_0}-\lambda) \frac{1}{\theta} \left\{A\eta z_1 \mp\frac{1}{\sqrt{1+\mu^2}} \left[(1+\mu^2)(Az_0+B)+\mu\eta +\frac{\theta}{A}\right]\right\} . $$ Differentiating \eqref{(76)a1} three times with respect to $z_1$, we obtain $m_1\theta^2 e^{\theta z_0}=0$, i.e., $m_1=0$ (and then $\lambda\neq 0$). Thus, we can rewrite \eqref{(76)a1} such as \begin{eqnarray}\label{(76)a2} 2z_1z_2-2z_0z_1-m_2z_1 = \frac{f_{11}^2+f_{21}^2}{A\theta\eta}
\left\{A\eta z_1 \mp\frac{1}{\sqrt{1+\mu^2}} \left[(1+\mu^2)(Az_0+B)+\mu\eta +\frac{\theta}{A}\right]\right\}. \end{eqnarray} Differentiating \eqref{(76)a2} with respect to $z_1$ leads to $f_{11}^2+f_{21}^2 = -2\theta (z_0-z_2)-m_2\theta$, which replaced back into \eqref{(76)a2} gives us $$ (1+\mu^2)(Az_0+B)+\mu\eta +\frac{\theta}{A}=0. $$ The $z_0$ derivative of the last equation implies that $(1+\mu^2)A = 0$, i.e., $A=0$, which contradicts $f_{11,z_2}\neq 0$. This concludes the proof of Lemma \ref{lemma3.3}.
$\Box$
In the next two lemmas (Lemmas \ref{lemma3.2}-\ref{lemma3.4}) we will see that, under certain conditions, if a local isometric immersion exists for which the components $a$, $b$, $c$ of the second fundamental form depend only on a jet of finite order of $u$, then these coefficients are functions depending only on $x$ and $t$. Moreover, the proof of both lemmas requires analyzing separately the cases $m=n$, $m < n$ and $n < m$.
\begin{lemma}\label{lemma3.2} Consider an equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, given by the Theorems $\ref{teo7.2}$-$\ref{teo7.5}$. Assume there is a local isometric immersion of the pseudospherical surface, determined by a solution $u(x,t)$ of \eqref{T}, for which the coefficients $a$, $b$ and $c$ of the second fundamental form depend on $x, t, z_0, \ldots, z_l, w_1, \ldots, w_m, v_1, \ldots, v_n$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary. Suppose $f_{21}\neq 0$ on a non-empty open set. If \begin{eqnarray}\label{19.1} c + \left(\frac{f_{11}}{f_{21}}\right)^2 a+2\frac{f_{11}}{f_{21}} b\neq 0 \end{eqnarray} holds on a non-empty open set then $a$, $b$ and $c$ are functions of $x$ and $t$ only, and therefore universal. \end{lemma}
\noindent\textbf{Proof}. Our analysis consists of three cases, namely, $$ (i)\hspace{0.2 cm} m=n, \qquad (ii)\hspace{0.2 cm} m < n, \qquad (iii)\hspace{0.2 cm} n < m. $$
Firstly, we consider the case $m=n$ and we are going to show that, from \eqref{10} and \eqref{11}, we have $a$, $b$ and $c$ depending only on $x$ and $t$.
Suppose $l =1$. If \eqref{19.1} holds then it follows from \eqref{1.18} that $a_{v_n}=0$ and $a_{w_n}=0$ and, consequently, by \eqref{13} we obtain $b_{v_n}=c_{v_n}=0$ and $b_{w_n}=c_{w_n}=0$. Thus, successive differentiation of \eqref{10}, \eqref{11} and \eqref{gauss} with respect to $v_{n+1}, \ldots, v_1$ and $w_{n+1}, \ldots, w_1$ lead to $a_{v_k}=b_{v_k}=c_{v_k}=0$ and $a_{w_k}=b_{w_k}=c_{w_k}=0$ for $k=0,1, \ldots, n$. Therefore, $a$, $b$ and $c$ are universal.
Suppose $l \geq 2$. Successive differentiation of \eqref{10}, \eqref{11} and \eqref{gauss} with respect to $v_{n+1}, \ldots, v_2$ and $w_{n+1}, \ldots, w_2$ lead to $a_{w_k}=b_{w_k}=c_{w_k}=0$ and $a_{v_k}=b_{v_k}=c_{v_k}=0$ for $k=1,2, \ldots, n$. In particular, $a$, $b$ and $c$ do not depend on $w_k$ and neither $v_k$ for $k=1,2, \ldots, n$. Therefore, $a$, $b$ and $c$ are functions of $x, t, z_0=w_0, z_1=v_0, \ldots, z_l$. Moreover, the equations \eqref{10} and \eqref{11} are equivalent to \begin{eqnarray}\label{20} f_{11}a_t + f_{21}b_t - f_{12}a_x - f_{22}b_x - 2b(f_{11}f_{32}-f_{12}f_{31})+(a-c)(f_{21}f_{32}-f_{22}f_{31})\hspace*{2 cm}\nonumber\\ -\sum_{i=0}^l (f_{12}a_{z_i}+f_{22}b_{z_i})z_{i+1}+\sum_{i=2}^l (f_{11}a_{z_i}+f_{21}b_{z_i})\partial_x^{i-2}(z_{0,t}-F)\hspace*{2 cm}\nonumber\\ +(f_{11}a_{w_0}+f_{21}b_{w_0})w_1 + (f_{11}a_{v_0}+f_{21}b_{v_0})v_1=0, \end{eqnarray} and \begin{eqnarray}\label{21} f_{11}b_t + f_{21}c_t - f_{12}b_x - f_{22}c_x + 2b(f_{21}f_{32}-f_{22}f_{31})+(a-c)(f_{11}f_{32}-f_{12}f_{31})\hspace*{2 cm}\nonumber\\ -\sum_{i=0}^l (f_{12}b_{z_i}+f_{22}c_{z_i})z_{i+1}+\sum_{i=2}^l (f_{11}b_{z_i}+f_{21}c_{z_i})\partial_x^{i-2}(z_{0,t}-F)\hspace*{2 cm}\nonumber\\ +(f_{11}b_{w_0}+f_{21}c_{w_0})w_1 + (f_{11}b_{v_0}+f_{21}c_{v_0})v_1=0. \end{eqnarray}
Differentiating \eqref{20} and \eqref{21} with respect to $z_{\ell+1}$, we obtain, respectively, \begin{eqnarray*} (f_{12}+\lambda z_0f_{11})a_{z_\ell}+(f_{22}+\lambda z_0f_{21})b_{z_\ell}=0,\quad (f_{12}+\lambda z_0f_{11})b_{z_\ell}+(f_{22}+\lambda z_0f_{21})c_{z_\ell}=0, \end{eqnarray*} and, using \eqref{(7.2)}, we have \begin{eqnarray} \begin{array}{rr}\label{31} \phi_{12}a_{z_\ell}+\phi_{22}b_{z_\ell}=0,\\ \phi_{12}b_{z_\ell}+\phi_{22}c_{z_\ell}=0, \end{array} \end{eqnarray}
If $\phi_{22}\neq 0$ on a non-empty open set, which may happen in all cases covered by Theorems \ref{teo7.2}-\ref{teo7.5}, we obtain from \eqref{31} that $$ b_{z_\ell} = -\frac{\phi_{12}}{\phi_{22}}a_{z_\ell}, \quad c_{z_\ell} = \left(\frac{\phi_{12}}{\phi_{22}}\right)^2 a_{z_\ell}. $$ Differentiating the Gauss equation \eqref{gauss} with respect to $z_\ell$ leads to $a_{z_\ell}c+ac_{z_\ell}-2bb_{z_\ell}=0$, which, using \eqref{31}, implies that \begin{eqnarray}\label{33a} \left[ c + \left(\frac{\phi_{12}}{\phi_{22}}\right)^2 a+2\frac{\phi_{12}}{\phi_{22}} b\right]a_{z_\ell}=0, \end{eqnarray}
If the expression between brackets in \eqref{33a} does not vanish on a open set, we obtain $a_{z_\ell}=0$ and, thus, by \eqref{31}, $b_{z_\ell}=c_{z_\ell}=0$. Successive differentiation of \eqref{20}, \eqref{21} and \eqref{gauss} with respect to $z_{\ell}, \ldots, z_3$ leads to $a_{z_\ell} = a_{z_{\ell-1}}=\ldots=a_{z_2} = 0$ and, thus, $b_{z_\ell} = b_{z_{\ell-1}}=\ldots=b_{z_2} = 0$ and $c_{z_\ell} = c_{z_{\ell-1}}=\ldots=c_{z_2} = 0$. Therefore, equations \eqref{20} and \eqref{21} give us, respectively, \begin{eqnarray}\label{20.1} f_{11}a_t + f_{21}b_t - f_{12}a_x - f_{22}b_x - 2b(f_{11}f_{32}-f_{12}f_{31})+(a-c)(f_{21}f_{32}-f_{22}f_{31})\hspace*{2 cm}\nonumber\\ -\sum_{i=0}^1 (f_{12}a_{z_i}+f_{22}b_{z_i})z_{i+1}+(f_{11}a_{w_0}+f_{21}b_{w_0})w_1 + (f_{11}a_{v_0}+f_{21}b_{v_0})v_1=0, \end{eqnarray} and \begin{eqnarray}\label{21.1} f_{11}b_t + f_{21}c_t - f_{12}b_x - f_{22}c_x + 2b(f_{21}f_{32}-f_{22}f_{31})+(a-c)(f_{11}f_{32}-f_{12}f_{31})\hspace*{2 cm}\nonumber\\ -\sum_{i=0}^1 (f_{12}b_{z_i}+f_{22}c_{z_i})z_{i+1}+(f_{11}b_{w_0}+f_{21}c_{w_0})w_1 + (f_{11}b_{v_0}+f_{21}c_{v_0})v_1=0. \end{eqnarray} Differentiating \eqref{20.1} and \eqref{21.1} with respect to $w_1$, we obtain \begin{eqnarray}\label{20.2} f_{11}a_{w_0}+f_{21}b_{w_0} = 0, \quad f_{11}b_{w_0}+f_{21}c_{w_0}=0. \end{eqnarray} Likewise, \begin{eqnarray}\label{21.2} f_{11}a_{v_0}+f_{21}b_{v_0} = 0, \quad f_{11}b_{v_0}+f_{21}c_{v_0}=0. \end{eqnarray} Differentiating the Gauss equation with respect to $w_0$ and $v_0$ leads to $a_{w_0}c+ac_{w_0}-2bb_{w_0}=0$ and $a_{v_0}c+ac_{v_0}-2bb_{v_0}=0$, respectively. Taking into account \eqref{20.2} and \eqref{21.2} in the latter, we obtain \begin{eqnarray*} \left[ c + \left(\frac{f_{11}}{f_{21}}\right)^2 a+2\frac{f_{11}}{f_{21}} b\right]a_{w_0}=0,\quad \left[ c + \left(\frac{f_{11}}{f_{21}}\right)^2 a+2\frac{f_{11}}{f_{21}} b\right]a_{v_0}=0, \end{eqnarray*} and by \eqref{19.1} we finally have $a_{w_0} = a_{v_0} = 0$ and, thus, by \eqref{20.2} and \eqref{21.2} $b_{w_0} = b_{v_0} = 0$ and $c_{w_0} = c_{v_0} = 0$. Hence, $a$, $b$ and $c$ are universal.
On the other hand, if the expression between brackets in \eqref{33a} vanishes, i.e., \begin{eqnarray}\label{5.21a} c + \left(\frac{\phi_{12}}{\phi_{22}}\right)^2 a+2\frac{\phi_{12}}{\phi_{22}} b=0, \end{eqnarray} then it follows from \eqref{5.21a} and \eqref{gauss} that \begin{eqnarray}\label{35}
b = \pm 1 - \frac{\phi_{12}}{\phi_{22}}a,\quad c = \left( \frac{\phi_{12}}{\phi_{22}} \right)^2 a \mp 2 \frac{\phi_{12}}{\phi_{22}}. \end{eqnarray} Therefore, \begin{eqnarray*} f_{11}D_t a+f_{21}D_t b &=& \frac{\Delta_{12}}{\phi_{22}}D_ta - af_{21}\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t},\\ f_{12}D_x a + f_{22}D_x b &=& -\lambda z_0 \frac{\Delta_{12}}{\phi_{22}}D_xa - af_{22}\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x},\\ f_{11}D_t b+f_{21}D_t c &=& \left( 2f_{21}\frac{\phi_{12}}{\phi_{22}}a-f_{11}a\mp 2f_{21}\right)\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t}-\frac{\phi_{12}}{\phi_{22}^2}\Delta_{12}D_ta ,\\ f_{12}D_x b + f_{22}D_x c &=& \left[\left( \lambda z_0\frac{\Delta_{12}}{\phi_{22}}+f_{22}\frac{\phi_{12}}{\phi_{22}}\right)a \mp 2f_{22}\right]\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x}+\lambda z_0 \frac{\phi_{12}}{\phi_{22}^2}\Delta_{12}D_xa, \end{eqnarray*} where $\Delta_{12} = f_{11}\phi_{22}-f_{21}\phi_{12}$.
Therefore, equation \eqref{3.8} becomes {\small\begin{eqnarray}\label{42} \frac{\Delta_{12}}{\phi_{22}}(D_ta+\lambda z_0D_xa) - af_{21}\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t} + af_{22}\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x}-2b\Delta_{13}+(a-c)\Delta_{23} = 0, \end{eqnarray}} and \eqref{3.9} becomes {\small\begin{eqnarray}\label{43} -\frac{\phi_{12}}{\phi_{22}^2}\Delta_{12}(D_ta+\lambda z_0D_xa)+\left( 2f_{21}\frac{\phi_{12}}{\phi_{22}}a-f_{11}a\mp 2f_{21}\right)\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t}\nonumber\hspace*{3 cm}\\ -\left[\left( \lambda z_0\frac{\Delta_{12}}{\phi_{22}}+f_{22}\frac{\phi_{12}}{\phi_{22}}\right)a \mp 2f_{22}\right]\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x}+(a-c)\Delta_{13}+2b\Delta_{23} = 0, \end{eqnarray}} where $$ \left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t}=\left[\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,z_0}w_1 + \left( \frac{\phi_{12}}{\phi_{22}} \right)_{,z_1}v_1 \right]. $$ From Lemma \ref{lemma3.1} we have $\phi_{12}\neq 0$, since $c\neq 0$. Hence, adding \eqref{42} multiplied by $\phi_{12}/\phi_{22}$ to \eqref{43} we get {\small\begin{eqnarray}\label{44} \left( -\frac{\Delta_{12}}{\phi_{22}}a\mp 2f_{21}\right)\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t}+\left( -\lambda z_0\frac{\Delta_{12}}{\phi_{22}}a \pm 2f_{22}\right) \left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x}+\left(a-2b\frac{\phi_{12}}{\phi_{22}}-c\right)\Delta_{13}\nonumber\hspace*{2 cm}\\ +\left[\frac{\phi_{12}}{\phi_{22}}(a-c)+2b\right]\Delta_{23} = 0. \end{eqnarray}} Differentiating \eqref{44} with respect to $v_1$ and $w_1$, we obtain, respectively, \begin{eqnarray}\label{4.17.1} P\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,z_1} = 0,\quad P\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,z_0} = 0,\quad \textnormal{ where }\quad P:= -\frac{\Delta_{12}}{\phi_{22}}a\mp 2f_{21}. \end{eqnarray} If $P\neq 0$ on a non-empty open set, we have from \eqref{4.17.1} that $\phi_{22} - A\phi_{12} = 0$, where $A$ is a nonzero constant. But, $\ell = \phi_{22} - A\phi_{12} = 0$ restricts our analysis to the case where $f_{ij}$ are given by \eqref{th2.2a} with $A=\mu\neq 0$ or \eqref{th2.3a} with $A=\mu\neq 0$.
If $f_{ij}$ are given by \eqref{th2.2a} with $\mu\neq 0$ we obtain $\Delta_{13} = f_{11}f_{32}-f_{31}f_{12} = \mp m\mu\psi$ and $\Delta_{23} = f_{21}f_{32}-f_{31}f_{22} = \pm m\psi$, which imply by \eqref{44} that $a = \pm 2\mu / (1+\mu^2)$ is constant. Therefore, \eqref{42} and \eqref{43} reduce to \begin{equation*} \left(\begin{array}{cc} -2b & a-c \\ a-c & 2b \end{array}\right) \left(\begin{array}{c} \Delta_{13} \\ \Delta_{23} \end{array}\right) = \left(\begin{array}{c} 0 \\ 0 \end{array}\right). \end{equation*} Since $(\Delta_{13},\Delta_{23})\neq (0,0)$ by \eqref{3.10a}, the matrix above must be singular, i.e., $4b^2+(a-c)^2=0$, hence $b=0$ and $a=c$, which contradicts the Gauss equation \eqref{gauss}.
If $f_{ij}$ are given by \eqref{th2.3a} with $\mu\neq 0$ we obtain $\Delta_{13} = f_{11}f_{32}-f_{31}f_{12} = \lambda (m_1\mu -\eta)z_1$ and $\Delta_{23} = f_{21}f_{32}-f_{31}f_{22} = -\lambda m_1z_1$, which imply by \eqref{44} that \begin{eqnarray}\label{4.18.1} \left[ a \frac{1+\mu^2}{\mu}\mp 2\right]m_1 = 0. \end{eqnarray} Therefore, if $m_1\neq 0$ in \eqref{4.18.1} then $a$ is a constant and \eqref{42} and \eqref{43} give us a contradiction like before. If $m_1 = 0$ in \eqref{4.18.1} then replacing $\Delta_{13} = -\lambda \eta z_1 (\neq 0)$ and $\Delta_{23}=0$ into \eqref{42} and \eqref{43}, we get \begin{eqnarray*} D_t a+\lambda z_0D_xa + 2\lambda(\pm \mu - a)z_1 = 0,&&\\ D_t a+\lambda z_0D_xa + \lambda (a\mu^2-a\pm 2\mu)z_1 = 0,&& \end{eqnarray*} which imply $a=0$ and, therefore, a contradiction by Lemma \ref{lemma3.1}.
On the other hand, if $P = 0$ on an open set then, from the definition of $P$ in \eqref{4.17.1}, $a = \mp 2\phi_{22}f_{21}/\Delta_{12}$ and, thus, $a$ is a function depending only on $z_0$, $z_1$ and $z_2$. However, using such $a$ and \eqref{35} we get $$ c + \left(\frac{f_{11}}{f_{21}}\right)^2 a+2\frac{f_{11}}{f_{21}} b = 0, $$ which contradicts the hypothesis \eqref{19.1}.
If $\phi_{22} \equiv 0$ (which is the case of \eqref{th2.2a} with $\mu=0$ or \eqref{th2.3a} with $\mu=0$ or \eqref{th2.4a} with $\mu\neq 0$ and $\psi = -(m_2+\lambda m_1z_0)\sqrt{1+\mu^2}/\mu$) then, since $\Delta_{12} = [(\phi_{22}-\mu_2\phi_{12})f_{11}-\eta_2\phi_{12}]dx\wedge dt = -\phi_{12}f_{21}dx\wedge dt\neq 0$, we have $\phi_{12}\neq 0$ on an open set. Moreover, it follows from \eqref{31} that $a_{z_l} = b_{z_l} = 0$.
Differentiating the Gauss equation with respect to $z_\ell$ and using Lemma \ref{lemma3.1}, we obtain $c_{z_\ell}=0$. Successive differentiation of \eqref{20}, \eqref{21} and \eqref{gauss} with respect to $z_l, \ldots, z_3$ leads to $a_{z_i}=b_{z_i}=c_{z_i} = 0$ for $i = 2,3,\ldots,l-1$. Hence, \eqref{20} and \eqref{21} are equivalent to \begin{eqnarray}\label{20a} f_{11}a_t + f_{21}b_t - f_{12}a_x - f_{22}b_x - 2b(f_{11}f_{32}-f_{12}f_{31})+(a-c)(f_{21}f_{32}-f_{22}f_{31})\hspace*{2 cm}\nonumber\\ -\sum_{i=0}^1 (f_{12}a_{z_i}+f_{22}b_{z_i})z_{i+1}+(f_{11}a_{w_0}+f_{21}b_{w_0})w_1 + (f_{11}a_{v_0}+f_{21}b_{v_0})v_1=0, \end{eqnarray} and \begin{eqnarray}\label{21a} f_{11}b_t + f_{21}c_t - f_{12}b_x - f_{22}c_x + 2b(f_{21}f_{32}-f_{22}f_{31})+(a-c)(f_{11}f_{32}-f_{12}f_{31})\hspace*{2 cm}\nonumber\\ -\sum_{i=0}^1 (f_{12}b_{z_i}+f_{22}c_{z_i})z_{i+1}+(f_{11}b_{w_0}+f_{21}c_{w_0})w_1 + (f_{11}b_{v_0}+f_{21}c_{v_0})v_1=0. \end{eqnarray} Differentiating \eqref{20a} and \eqref{21a} with respect to $v_1$ and $w_1$, we get \begin{eqnarray}\label{20.2a} \begin{array}{rr} f_{11}a_{w_0}+f_{21}b_{w_0} = 0,\\ \quad f_{11}b_{w_0}+f_{21}c_{w_0}=0, \end{array} \begin{array}{rr} f_{11}a_{v_0}+f_{21}b_{v_0} = 0,\\ \quad f_{11}b_{v_0}+f_{21}c_{v_0}=0. \end{array} \end{eqnarray} Differentiating the Gauss equation \eqref{gauss} with respect to $w_0$ and $v_0$ leads to $a_{w_0}c+ac_{w_0}-2bb_{w_0}=0$ and $a_{v_0}c+ac_{v_0}-2bb_{v_0}=0$, respectively. Taking into account \eqref{20.2a} in the latter, by \eqref{19.1} we obtain $a_{w_0} = a_{v_0} = 0$ and, thus, by \eqref{20.2a}, $b_{w_0} = b_{v_0} = 0$ and $c_{w_0} = c_{v_0} = 0$. Hence, $a$, $b$ and $c$ are universal and, thus, we conclude the proof of $(i)$.
Suppose $(ii)$, i.e, $m < n$. Therefore, since $n\geq m+1$, differentiating \eqref{10}, \eqref{11} and \eqref{gauss} with respect to $v_{n+1}$ leads to $a_{v_{n}}=b_{v_{n}}=c_{v_{n}}=0$. Successive differentiation with respect to $v_n, v_{n-1}, \ldots, v_{(m+1)+1}$ leads to $a_{v_{n-1}}=\ldots = a_{v_{m+1}} = 0$, $b_{v_{n-1}}=\ldots = b_{v_{m+1}} = 0$ and $c_{v_{n-1}}=\ldots = c_{v_{m+1}} = 0$. Hence, $a$, $b$ and $c$ are functions of $x, t, z_0, z_1, \ldots, z_l, w_1, \ldots, w_m$, $v_1, \ldots, v_m$. Proceeding as in $(i)$, we conclude that $a$, $b$ and $c$ are functions of $x$ and $t$ only, and therefore universal. This concludes $(ii)$.
Finally $(iii)$, i.e, $m > n$. Therefore, since $m\geq n+1$, differentiating \eqref{10}, \eqref{11} and \eqref{gauss} with respect to $w_{m+1}$ leads to $a_{w_{m}}=b_{w_{m}}=c_{w_{m}}=0$. Successive differentiation with respect to $w_m, w_{m-1}, \ldots, w_{(n+1)+1}$ leads to $a_{w_{m-1}}=\ldots = a_{w_{n+1}} = 0$, $b_{w_{m-1}}=\ldots = b_{w_{n+1}} = 0$ and $c_{w_{m-1}}=\ldots = c_{w_{n+1}} = 0$. Hence, $a$, $b$ and $c$ are functions of $x, t, z_0, z_1, \ldots, z_l, w_1, \ldots, w_n, v_1, \ldots, v_n$. Proceeding as in $(i)$, we conclude that $a$, $b$ and $c$ are functions of $x$ and $t$ only, and therefore universal. This concludes $(iii)$.
Therefore, $a$, $b$ and $c$ are universal, i.e., $a$, $b$ and $c$ depend only on $x$ and $t$. This concludes the proof of Lemma \ref{lemma3.2}.
$\Box$
\begin{lemma}\label{lemma3.4} Consider an equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, given by the Theorems $\ref{teo7.2}$-$\ref{teo7.5}$. Assume there is a local isometric immersion of the pseudospherical surface, determined by a solution $u(x,t)$ of \eqref{T}, for which the coefficients $a$, $b$ and $c$ of the second fundamental form depend on $x, t, z_0, \ldots, z_l, w_1, \ldots, w_m, v_1, \ldots, v_n$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary. If $f_{21} = 0$, on a non-empty open set, then $a$, $b$ and $c$ are functions of $x$ and $t$ only, and therefore universal. \end{lemma}
\noindent\textbf{Proof}. First, observe that $f_{21} = 0$, on a non-empty open set, can only happen in \eqref{th2.4a} with $\mu=m_1 = 0$ or \eqref{th2.5b} with $\mu = \eta = 0$. Furthermore, in both cases $\phi_{22}\neq 0$ on that open set. Our analysis consists of three cases, namely, $$ (i)\hspace{0.2 cm} m=n, \qquad (ii)\hspace{0.2 cm} m < n, \qquad (iii)\hspace{0.2 cm} n < m. $$
Let us first consider the case $m=n$. Suppose $l =1$. Successive differentiation of \eqref{10}, \eqref{11} and \eqref{gauss} with respect to $v_{n+1}, \ldots, v_1$ and $w_{n+1}, \ldots, w_1$, since $f_{11}\neq 0$, leads to $a_{v_k}=b_{v_k}=c_{v_k}=0$ and $a_{w_k}=b_{w_k}=c_{w_k}=0$ for $k=0,1, \ldots, n$. Therefore, $a$, $b$ and $c$ are universal.
Now let us consider $l \geq 2$. Taking successive differentiation of \eqref{10}, \eqref{11} and \eqref{gauss} with respect to $v_{n+1}, \ldots, v_2$ and $w_{n+1}, \ldots, w_2$, since $f_{11}\neq 0$, leads to $a_{w_k}=b_{w_k}=c_{w_k}=0$ and $a_{v_k}=b_{v_k}=c_{v_k}=0$ for $k=1,2, \ldots, n$. Thus, we have that $a$, $b$ and $c$ do not depend on $w_k$ and neither $v_k$ for $k=1,2, \ldots, n$. Hence, $a$, $b$ and $c$ are functions of $x, t, z_0=w_0, z_1=v_0, \ldots, z_l$. Furthermore, the equations \eqref{10} and \eqref{11} are equivalent to \begin{eqnarray}\label{a20} f_{11}a_t - f_{12}a_x - f_{22}b_x - 2b(f_{11}f_{32}-f_{12}f_{31})-(a-c)f_{22}f_{31}-\sum_{i=0}^l (f_{12}a_{z_i}+f_{22}b_{z_i})z_{i+1}\nonumber\hspace*{1 cm}\\ +\sum_{i=2}^l f_{11}a_{z_i}\partial_x^{i-2}(z_{0,t}-F)+f_{11}a_{w_0}w_1 + f_{11}a_{v_0}v_1=0, \end{eqnarray} and \begin{eqnarray}\label{a21} f_{11}b_t - f_{12}b_x - f_{22}c_x - 2bf_{22}f_{31}+(a-c)(f_{11}f_{32}-f_{12}f_{31})-\sum_{i=0}^l (f_{12}b_{z_i}+f_{22}c_{z_i})z_{i+1}\nonumber\hspace*{1 cm}\\ +\sum_{i=2}^l f_{11}b_{z_i}\partial_x^{i-2}(z_{0,t}-F)+f_{11}b_{w_0}w_1 + f_{11}b_{v_0}v_1=0. \end{eqnarray}
Differentiating \eqref{a20} and \eqref{a21} with respect to $z_{l+1}$, we obtain, respectively, \begin{eqnarray*} (f_{12}+\lambda z_0f_{11})a_{z_l}+f_{22}b_{z_l}=0,\quad (f_{12}+\lambda z_0f_{11})b_{z_l}+f_{22}c_{z_l}=0, \end{eqnarray*} and, using \eqref{(7.2)}, with $f_{21}=0$, we have \begin{eqnarray} \begin{array}{rr}\label{a31} \phi_{12}a_{z_l}+\phi_{22}b_{z_l}=0,\quad \phi_{12}b_{z_l}+\phi_{22}c_{z_l}=0, \end{array} \end{eqnarray}
Since $\phi_{22}\neq 0$, it follows from \eqref{a31} that \begin{eqnarray}\label{(90)} b_{z_l} = -\frac{\phi_{12}}{\phi_{22}}a_{z_l}, \quad c_{z_l} = \left(\frac{\phi_{12}}{\phi_{22}}\right)^2 a_{z_l}. \end{eqnarray} Differentiating the Gauss equation \eqref{gauss} with respect to $z_l$ leads to $a_{z_l}c+ac_{z_l}-2bb_{z_l}=0$, which gives, using \eqref{(90)}, \begin{eqnarray}\label{a33} \left[ c + \left(\frac{\phi_{12}}{\phi_{22}}\right)^2 a+2\frac{\phi_{12}}{\phi_{22}} b\right]a_{z_l}=0, \end{eqnarray}
If the expression between brackets in \eqref{a33} does not vanish on a open set, we obtain $a_{z_l}=0$ and, thus, by \eqref{a31}, $b_{z_l}=c_{z_l}=0$. Successive differentiation of \eqref{a20}, \eqref{a21} and \eqref{gauss} with respect to $z_{l}, \ldots, z_3$ leads to $a_{z_l} = a_{z_{l-1}}=\ldots=a_{z_2} = 0$ and, thus, $b_{z_l} = b_{z_{l-1}}=\ldots=b_{z_2} = 0$ and $c_{z_l} = c_{z_{l-1}}=\ldots=c_{z_2} = 0$. Therefore, the equation \eqref{a20} and \eqref{a21} give us, respectively, \begin{eqnarray}\label{a20.1} f_{11}a_t - f_{12}a_x - f_{22}b_x - 2b(f_{11}f_{32}-f_{12}f_{31})-(a-c)f_{22}f_{31}-\sum_{i=0}^2 (f_{12}a_{z_i}+f_{22}b_{z_i})z_{i+1}\nonumber\hspace*{1 cm}\\ +f_{11}a_{w_0}w_1 + f_{11}a_{v_0}v_1=0, \end{eqnarray} and \begin{eqnarray}\label{a21.1} f_{11}b_t - f_{12}b_x - f_{22}c_x - 2bf_{22}f_{31}+(a-c)(f_{11}f_{32}-f_{12}f_{31})-\sum_{i=0}^2 (f_{12}b_{z_i}+f_{22}c_{z_i})z_{i+1}\nonumber\hspace*{1 cm}\\ +f_{11}b_{w_0}w_1 + f_{11}b_{v_0}v_1=0. \end{eqnarray} Differentiating \eqref{a20.1} and \eqref{a21.1} with respect to $v_1$ and $w_1$ leads to $f_{11}a_{v_0} = f_{11}b_{v_0}=0$ and $f_{11}a_{w_0}= f_{11}b_{w_0}=0$, i.e, $a_{v_0} =b_{v_0} =0$ and $a_{w_0}=b_{w_0}=0$. Differentiating the Gauss equation \eqref{gauss} with respect to $w_0$ and $v_0$ gives $a_{w_0}c+ac_{w_0}-2bb_{w_0}=0$ and $a_{v_0}c+ac_{v_0}-2bb_{v_0}=0$, respectively. Since $a\neq 0$ we obtain $c_{w_0} = c_{v_0} = 0$. Hence, $a$, $b$ and $c$ are universal.
On the other hand, if the expression in brackets in \eqref{a33} vanishes, i.e., \begin{eqnarray}\label{a5.14.1} c + \left(\frac{\phi_{12}}{\phi_{22}}\right)^2 a+2\frac{\phi_{12}}{\phi_{22}} b=0, \end{eqnarray} then it follows from the Gauss equation \eqref{gauss} that \begin{eqnarray} && b = \pm 1 - \frac{\phi_{12}}{\phi_{22}}a,\label{a5.15}\\ && c = \left( \frac{\phi_{12}}{\phi_{22}} \right)^2 a \mp 2 \frac{\phi_{12}}{\phi_{22}}.\label{a5.16} \end{eqnarray}
Therefore, \begin{eqnarray*} f_{12}D_x a + f_{22}D_x b &=& -\lambda z_0 f_{11}D_xa - af_{22}\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x},\\ f_{11}D_t b &=& -f_{11}a\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t}-\frac{\phi_{12}}{\phi_{22}}f_{11}D_ta ,\\ f_{12}D_x b + f_{22}D_x c &=& \left[\left( \lambda z_0f_{11}+\phi_{12}\right)a \mp 2f_{22}\right]\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x}+\lambda z_0 \frac{\phi_{12}}{\phi_{22}}f_{11}D_xa, \end{eqnarray*} where $\Delta_{12} = f_{11}\phi_{22}-f_{21}\phi_{12}$.
Therefore, equation \eqref{3.8} becomes {\small\begin{eqnarray}\label{a42} f_{11}(D_ta+\lambda z_0D_xa) + af_{22}\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x}-2b\Delta_{13}+(a-c)\Delta_{23} = 0, \end{eqnarray}} and \eqref{3.9} becomes {\small\begin{eqnarray}\label{a43} -\frac{\phi_{12}}{\phi_{22}}f_{11}(D_ta+\lambda z_0D_xa)-f_{11}a\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t}-\left[\left( \lambda z_0f_{11}+\phi_{12}\right)a \mp 2f_{22}\right]\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x}+(a-c)\Delta_{13}+2b\Delta_{23} = 0, \end{eqnarray}} where $$ \left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t}=\left[\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,z_0}w_1 + \left( \frac{\phi_{12}}{\phi_{22}} \right)_{,z_1}v_1 \right]. $$ From Lemma \ref{lemma3.1}, since $c\neq 0$ we have $\phi_{12}\neq 0$. Hence, adding \eqref{a42} multiplied by $\phi_{12}/\phi_{22}$ with \eqref{a43} we get {\small\begin{eqnarray}\label{a44}
-\frac{\phi_{12}}{\phi_{22}}f_{11}a\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,t}+\left( -\lambda z_0f_{11}a \pm 2f_{22}\right) \left( \frac{\phi_{12}}{\phi_{22}} \right)_{,x}+\left(a-2b\frac{\phi_{12}}{\phi_{22}}-c\right)\Delta_{13}+\left[\frac{\phi_{12}}{\phi_{22}}(a-c)+2b\right]\Delta_{23} = 0. \end{eqnarray}} Differentiating \eqref{a44} with respect to $v_1$ and $w_1$, we obtain, respectively, \begin{eqnarray*}
-\frac{\phi_{12}}{\phi_{22}}f_{11}a\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,z_1} = 0,\quad -\frac{\phi_{12}}{\phi_{22}}f_{11}a\left( \frac{\phi_{12}}{\phi_{22}} \right)_{,z_0} = 0, \end{eqnarray*} which imply that $\phi_{22} - A\phi_{12} = 0$, where $A$ is a nonzero constant, since otherwise we would have $\phi_{22}=0$. But $\phi_{22} - A\phi_{12} = 0$ does not happen in \eqref{th2.4a} or \eqref{th2.5b}, so the expression between brackets in \eqref{a33} cannot vanish and the previous argument applies. This concludes $(i)$.
Suppose $(ii)$, i.e, the case $m < n$. Therefore, since $n\geq m+1$, differentiating \eqref{10}, \eqref{11} and \eqref{gauss} with respect to $v_{n+1}$ leads to $a_{v_{n}}=b_{v_{n}}=c_{v_{n}}=0$. Successive differentiation with respect to $v_n, v_{n-1}, \ldots, v_{(m+1)+1}$ leads to $a_{v_{n-1}}=\ldots = a_{v_{m+1}} = 0$, $b_{v_{n-1}}=\ldots = b_{v_{m+1}} = 0$ and $c_{v_{n-1}}=\ldots = c_{v_{m+1}} = 0$. Hence, $a$, $b$ and $c$ are functions of $x, t, z_0, z_1, \ldots, z_l, w_1, \ldots, w_m$, $v_1, \ldots, v_m$. Proceeding as in $(i)$, we conclude that $a$, $b$ and $c$ are functions of $x$ and $t$ only, and thus universal. This concludes $(ii)$.
To conclude the proof of Lemma \ref{lemma3.4}, consider $(iii)$, i.e, the case $m > n$. Therefore, since $m\geq n+1$, differentiating \eqref{10}, \eqref{11} and \eqref{gauss} with respect to $w_{m+1}$ leads to $a_{w_{m}}=b_{w_{m}}=c_{w_{m}}=0$. Successive differentiation with respect to $w_m, w_{m-1}$, $\ldots$, $w_{(n+1)+1}$ leads to $a_{w_{m-1}}=\ldots = a_{w_{n+1}} = 0$, $b_{w_{m-1}}=\ldots = b_{w_{n+1}} = 0$ and $c_{w_{m-1}}=\ldots = c_{w_{n+1}} = 0$. Hence, $a$, $b$ and $c$ are functions of $x, t$, $z_0, z_1, \ldots, z_l, w_1, \ldots, w_n, v_1, \ldots, v_n$. Again, proceeding as in $(i)$, we conclude that $a$, $b$ and $c$ are functions of $x$ and $t$ only, and thus universal. This concludes $(iii)$.
Hence, $a$, $b$ and $c$ are universal, i.e., $a$, $b$ and $c$ depend only on $x$ and $t$. This concludes the proof of Lemma \ref{lemma3.4}.
$\Box$
\subsection{Universal expressions for the second fundamental forms}\label{sec5}
In the previous section we have shown that if there exist coefficients $a$, $b$, $c$ (depending on a jet of finite order of $u$) of the second fundamental form of a local isometric immersion of a pseudospherical surface, such that the system of equations \eqref{gauss}, \eqref{3.8} and \eqref{3.9} is satisfied, then $a$, $b$ and $c$ are functions depending only on $x$ and $t$, and thus universal. Now we are going to determine such coefficients for the equations of type \eqref{T} and the associated $f_{ij}$'s given by Theorems \ref{teo7.2}-\ref{teo7.5}.
\begin{proposition}\label{prop1} Consider an equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, given by the Theorem $\ref{teo7.2}$. There exists a local isometric immersion in $\mathbb{R}^3$ of a pseudospherical surface, defined by a solution $u$, for which the coefficients $a$, $b$ and $c$ of the second fundamental form depend on $x, t, z_0, \ldots, z_l, w_1, \ldots, w_m$, $v_1, \ldots, v_n$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary, if, and only if,
\vspace*{0.4 cm} \noindent $(i)$ $\mu =0$ and $a$, $b$ and $c$ depend only on $x$ and are given by \begin{eqnarray}\label{7.1b} a = \pm \sqrt{L(x)}, \quad b=-\beta e^{\pm 2\eta x}, \quad c = a \mp \frac{a_x}{\eta}, \end{eqnarray} where $L(x) = \sigma e^{\pm 2\eta x}-\beta^2 e^{\pm 4\eta x}-1$, with $\eta$, $\sigma$, $\beta$ $\in$ $\mathbb{R}$, $\eta\neq 0$, $\sigma >0$ and $\sigma^2 >4\beta^2$. The coefficients $a$, $b$, $c$ are defined on a strip of $\mathbb{R}$ where \begin{eqnarray}\label{p7.2} \log\sqrt{\frac{\sigma -\sqrt{\sigma^2 -4\beta^2}}{2\beta^2}}< \pm \eta x <\log\sqrt{\frac{\sigma +\sqrt{\sigma^2 -4\beta^2}}{2\beta^2}}. \end{eqnarray} Moreover, the constants $\beta$ and $\sigma$ have to be chosen so that the strip intersects the domain of the solution of \eqref{th2.2}.
\vspace*{0.3 cm} \noindent or \vspace*{0.3 cm}
\noindent $(ii)$ $\mu \neq 0$ and $a$, $b$ and $c$ depend only on $x$ and are given by \begin{eqnarray}\label{101.a}
a &=& \frac{1}{2\mu}[\pm \mu\sqrt{\Delta}-(\mu^2-1)b+\beta e^{\pm 2\eta x}],\nonumber\\
c &=& \frac{1}{2\mu}[\pm \mu\sqrt{\Delta}+(\mu^2-1)b-\beta e^{\pm 2\eta x}],\\
\Delta &=& \frac{[(\mu^2-1)b-\beta e^{\pm 2\eta x}]^2 -4\mu^2(1-b^2)}{\mu^2}> 0\nonumber \end{eqnarray} where $b$ satisfies the ordinary differential equation \begin{eqnarray}\label{102} [\mu(1+\mu^2)\sqrt{\Delta}\pm (\mu^2+1)^2b\mp (\mu^2-1)\beta e^{\pm 2\eta x}]b'\hspace*{3cm}\nonumber\\ + 2\eta\left\lbrace [\mp\mu (1+\mu^2)\sqrt{\Delta}- \beta(\mu^2-1)e^{\pm 2\eta x}]b+ \beta^2e^{\pm 4\eta x}\right\rbrace=0 \end{eqnarray} \end{proposition}
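Before turning to the proof, we remark that the algebraic structure of \eqref{101.a} can be checked symbolically: for any function $b$, the coefficients $a$ and $c$ defined there satisfy the Gauss equation $ac-b^2=-1$ identically. The computation below is only an illustrative sketch (it assumes the upper choice of signs; the variable names are ours and are not part of the statement).
\begin{verbatim}
# Sketch: a and c from (101.a) satisfy a*c - b^2 = -1 identically in b and x.
# Upper signs chosen; symbol names are illustrative.
import sympy as sp

mu, beta, eta, x, b = sp.symbols('mu beta eta x b', real=True)

X = (mu**2 - 1)*b - beta*sp.exp(2*eta*x)      # recurring combination in (101.a)
Delta = (X**2 - 4*mu**2*(1 - b**2))/mu**2     # Delta as in (101.a)

a = (mu*sp.sqrt(Delta) - X)/(2*mu)            # a of (101.a), upper sign
c = (mu*sp.sqrt(Delta) + X)/(2*mu)            # c of (101.a), upper sign

print(sp.simplify(sp.expand(a*c - b**2)))     # expected output: -1
\end{verbatim}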
\noindent \textbf{Proof}. Since $\eta=m\neq 0$, only the case $f_{21}\neq 0$ on an open set occurs. If $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}=0$ then, by Lemma \ref{lemma3.3}, the equations \eqref{gauss}, \eqref{3.8} and \eqref{3.9} form an inconsistent system. Therefore $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}\neq 0$ and, from Lemma \ref{lemma3.2}, the coefficients of the second fundamental form of such local isometric immersion are universal, and hence \eqref{3.8} and \eqref{3.9} become \begin{eqnarray} f_{11} a_t+f_{21} b_t -f_{12} a_x-f_{22} b_x-2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\label{7.1}\\ f_{11} b_t+f_{21} c_t -f_{12} b_x-f_{22} c_x+(a-c)\Delta_{13}+2b\Delta_{23} = 0,&&\label{7.2} \end{eqnarray} where $\Delta_{13} = \mp \eta\mu\psi$ and $\Delta_{23}=\pm \eta\psi (\neq 0)$. Hence, since $f_{ij}$ are given by \eqref{th2.2a}, differentiating \eqref{7.1} and \eqref{7.2} with respect to $z_2$ we obtain \begin{eqnarray} &&a_t = \mu^2 c_t,\label{7.3}\\ &&b_t = -\mu c_t.\label{7.4} \end{eqnarray} Replacing \eqref{7.3} and \eqref{7.4} back into \eqref{7.1} and \eqref{7.2} we get \begin{eqnarray} -\mu \eta\sqrt{1+\mu^2} c_t + [-a_x-\mu b_x \pm \eta(2\mu b + a-c)]\psi = 0,&&\label{7.5}\\ \eta\sqrt{1+\mu^2}c_t + [-b_x-\mu c_x\pm \eta(2b - \mu a+\mu c)]\psi =0.&&\label{7.6} \end{eqnarray} Isolating $\eta\sqrt{1+\mu^2}c_t$ in \eqref{7.6} and replacing it into \eqref{7.5}, we obtain \begin{eqnarray}\label{7.7} \mu [-b_x-\mu c_x\pm \eta(2b - \mu a+\mu c)] + [-a_x-\mu b_x \pm \eta(2\mu b + a-c)] = 0. \end{eqnarray} Differentiating \eqref{7.7} with respect to $t$ and using \eqref{7.3} and \eqref{7.4}, we get $\mp \eta (1+\mu^2)^2c_t=0$ and, thus, $c_t = 0$. By \eqref{7.3} and \eqref{7.4}, $a_t = b_t = 0$. Hence, $a$, $b$ and $c$ are functions depending only on $x$. Therefore, it follows from \eqref{7.5} and \eqref{7.6} that \begin{eqnarray} -a_x-\mu b_x \pm \eta(2\mu b + a-c) = 0,&&\label{7.8}\\ -b_x-\mu c_x\pm \eta(2b - \mu a+\mu c) =0,&&\label{7.9} \end{eqnarray} where \eqref{7.7} is now identically satisfied. From \eqref{7.8} we have $c$ in terms of $a$, $b$, $a_x$ and $b_x$, which replaced into \eqref{7.9} leads to \begin{eqnarray}\label{7.12b} \mu a_x = \pm \eta(1+\mu^2)b - \mu^2 b_x \pm \beta \eta e^{\pm 2\eta x}, \end{eqnarray} where $\beta$ is a constant.
If $\mu = 0$, then from \eqref{7.12b} and \eqref{7.8}, we have \begin{eqnarray}\label{7.13b} b = -\beta e^{\pm 2\eta x}\quad \textnormal{and} \quad c = a \mp \frac{a_x}{\eta}. \end{eqnarray} Substituting \eqref{7.13b} in the Gauss equation \eqref{gauss}, we obtain $a = \pm \sqrt{L(x)}$ where $L(x) = \sigma e^{\pm 2\eta x}-\beta^2 e^{\pm 4\eta x}-1$, with $\sigma$, $\beta$ $\in$ $\mathbb{R}$, $\sigma >0$ and $\sigma^2 >4\beta^2$. This $a$ together with \eqref{7.13b} gives us \eqref{7.1b}, where $a$ is defined on the strip described by \eqref{p7.2}.
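The closed-form coefficients \eqref{7.1b} obtained in the case $\mu=0$ can be verified in the same spirit. The sketch below (again an illustrative check, with the upper signs and the Gauss equation in the form $ac-b^2=-1$) confirms the computation.
\begin{verbatim}
# Sketch: the coefficients (7.1b), with L(x) = sigma*e^{2 eta x} - beta^2*e^{4 eta x} - 1,
# satisfy a*c - b^2 = -1.  Upper signs chosen.
import sympy as sp

x, eta, sigma, beta = sp.symbols('x eta sigma beta', real=True)

L = sigma*sp.exp(2*eta*x) - beta**2*sp.exp(4*eta*x) - 1
a = sp.sqrt(L)                        # a = +sqrt(L(x))
b = -beta*sp.exp(2*eta*x)             # b as in (7.1b)
c = a - sp.diff(a, x)/eta             # c = a - a_x/eta

print(sp.simplify(sp.expand(a*c - b**2)))   # expected output: -1
\end{verbatim}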
If $\mu \neq 0$ then \eqref{7.12b} gives us $a_x$, which replaced into \eqref{7.8} implies that \begin{eqnarray}\label{112} c = a + \phi(x), \quad \phi(x) = \frac{\mu^2-1}{\mu}b(x) - \frac{\beta}{\mu}e^{\pm 2\eta x}. \end{eqnarray} Substituting the latter into the Gauss equation we obtain $a^2+a\phi(x)-b^2 =-1$, which, solved as a quadratic equation in $a$, leads to $$ a = \frac{-\phi(x)\pm \sqrt{\Delta}}{2}, \quad \Delta = \phi(x)^2-4[1-b(x)^2]> 0. $$ Hence, using \eqref{112} we also have $c$ in terms of $b=b(x)$ as in \eqref{101.a}, which replaced into \eqref{7.12b} gives us \begin{eqnarray}\label{eq} [(1+\mu^2)\sqrt{\Delta}\pm (\mu^2-1)\phi \pm 4\mu b]b' \mp 2\eta(1+\mu^2)b\sqrt{\Delta}-2\eta\beta e^{\pm 2\eta x}\phi =0. \end{eqnarray} Observe that, if the coefficient of $b'$ in \eqref{eq} vanishes, we have \begin{eqnarray*} (1+\mu^2)\sqrt{\Delta}\pm (\mu^2-1)\phi \pm 4\mu b=0,&&\\ \mp 2\eta(1+\mu^2)b\sqrt{\Delta}-2\eta\beta e^{\pm 2\eta x}\phi =0.&& \end{eqnarray*} In the latter two equations, substituting $(1+\mu^2)\sqrt{\Delta}$ from the first into the second implies that \begin{eqnarray*} 0 &=& \mp 2\eta b[\mp (\mu^2-1)\phi \mp 4\mu b]-2\eta\beta e^{\pm 2\eta x}\phi\\
&=& 2\eta\mu (\phi^2 +4b^2), \end{eqnarray*} and then $\phi^2 +4b^2=0$, since $\eta\mu\neq 0$. However, $\phi^2 +4b^2=0$ if, and only if, $\phi=b=0$ which implies by \eqref{112} that $a=c$. But, $a=c$ and $b=0$ contradict the Gauss equation \eqref{gauss}. Therefore, the coefficient of $b'$ in the equation \eqref{eq} does not vanish on a non-empty open set. That means we can write $b' = g(x,b)$, where $g$ is a differentiable function defined, from \eqref{eq}, by $$ g(x,b) = \frac{\pm 2\eta(1+\mu^2)b\sqrt{\Delta}+2\eta\beta e^{\pm 2\eta x}\phi}{(1+\mu^2)\sqrt{\Delta}\pm (\mu^2-1)\phi \pm 4\mu b}. $$
Let $x_0$ be an arbitrarily fixed point and consider the following Initial Value Problem (IVP) \begin{eqnarray}\label{(117)a} b' = g(x,b), \qquad b(x_0)=b_0. \end{eqnarray} Since the coefficient of $b'$ in \eqref{eq} does not vanish and $\Delta>0$, the functions $g(x,b)$ and $\partial_b g(x,b)$ are continuous in some open rectangle $$ R=\left\lbrace (x,b): x_1< x<x_2,\hspace*{0.1 cm} y_1< b< y_2\right\rbrace $$ that contains the point $(x_0,b_0)$. Then, by the fundamental existence and uniqueness theorem for ordinary differential equations, the IVP \eqref{(117)a} has a unique solution on some closed interval $I = [x_0-\epsilon, x_0+\epsilon]$, where $\epsilon$ is a positive number. Moreover, $x_1$ and $x_2$ have to be chosen so that the strip $x_1< x<x_2$ intersects the domain of the solution of \eqref{th2.2}. Observe that replacing $\phi$ into \eqref{eq} we obtain \eqref{102}. This concludes $(ii)$.
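Although the argument above only needs the existence and uniqueness statement, the IVP \eqref{(117)a} can also be integrated numerically. The following sketch uses \texttt{scipy}; the upper signs and the values $\mu=2$, $\eta=1$, $\beta=0$, $b(0)=2$ are illustrative assumptions only, chosen so that $\Delta>0$ and the coefficient of $b'$ stays away from zero on the integration range.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

mu, eta, beta = 2.0, 1.0, 0.0          # illustrative parameters, not from the text

def g(x, y):
    # right-hand side of (117a), upper choice of signs
    b = y[0]
    phi = ((mu**2 - 1)*b - beta*np.exp(2*eta*x))/mu
    Delta = phi**2 - 4*(1 - b**2)      # must remain positive along the solution
    sq = np.sqrt(Delta)
    num = 2*eta*(1 + mu**2)*b*sq + 2*eta*beta*np.exp(2*eta*x)*phi
    den = (1 + mu**2)*sq + (mu**2 - 1)*phi + 4*mu*b
    return [num/den]

sol = solve_ivp(g, (0.0, 0.5), [2.0], max_step=0.01)
print(sol.y[0, -1])                    # b(0.5) for this illustrative data
\end{verbatim}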
The converse follows from a straightforward computation.
$\Box$
\begin{proposition}\label{prop2} Consider an equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, belonging to the class of equations given by Theorem $\ref{teo7.3}$. There is no local isometric immersion in $\mathbb{R}^3$ of a pseudospherical surface determined by a solution $u$ of the equation, for which the coefficients of the second fundamental form depend on $x, t, z_0, \ldots, z_l, w_1, \ldots, w_m$, $v_1, \ldots, v_n$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary. \end{proposition}
\noindent \textbf{Proof}. Since $\eta\neq 0$ we have $f_{21}\neq 0$, on a open set. If $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}=0$ then from Lemma \ref{lemma3.3} the equations \eqref{gauss}, \eqref{3.8} and \eqref{3.9} form an inconsistent system. Therefore, $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}\neq 0$ and, from Lemma \ref{lemma3.2}, the coefficients of the second fundamental form of such local isometric immersion are universal, and hence \eqref{3.8} and \eqref{3.9} become \begin{eqnarray} f_{11} a_t+f_{21} b_t -f_{12} a_x-f_{22} b_x-2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\label{a7.11}\\ f_{11} b_t+f_{21} c_t -f_{12} b_x-f_{22} c_x+(a-c)\Delta_{13}+2b\Delta_{23} = 0,&&\label{a7.12} \end{eqnarray} where $\Delta_{13} =\lambda(m_1\mu - \eta)z_1 $ and $\Delta_{23}=-\lambda m_1z_1$. Hence, since $f_{ij}$ are given by \eqref{th2.3a}, it follows from \eqref{a7.11} and \eqref{a7.12} that, respectively, \begin{eqnarray} [a_t + \mu b_t + \lambda(a_x+\mu b_x)z_0]h + \lambda [ m_2(a_x+\mu b_x)-2(m_1\mu -\eta)b - m_1(a-c)]z_1+\eta(b_t+\lambda z_0b_x) = 0,\label{a7.13} \end{eqnarray} \begin{eqnarray} [b_{t} + \mu c_{t} + \lambda (b_{x}+\mu c_{x})z_{0}]h + \lambda [ m_2(b_x+\mu c_x)+(m_1\mu -\eta)(a-c) - 2m_1b]z_1+\eta(c_t+\lambda z_0c_x) = 0.\label{a7.14} \end{eqnarray} Differentiating \eqref{a7.13} and \eqref{a7.14} with respect to $z_2$, since $h'\neq 0$, we have \begin{eqnarray} a_t + \mu b_t + \lambda(a_x+\mu b_x)z_0 = 0,&&\label{a7.15}\\ b_{t} + \mu c_{t} + \lambda (b_{x}+\mu c_{x})z_{0}=0.&&\label{a7.16} \end{eqnarray} Differentiating \eqref{a7.15} and \eqref{a7.16} with respect to $z_0$, since $\lambda\neq 0$, and replacing the result back into \eqref{a7.15} and \eqref{a7.16} leads to \begin{eqnarray} a_t + \mu b_t=0, \quad a_x+\mu b_x = 0,&&\label{a7.17}\\ b_{t} + \mu c_{t}=0, \quad b_{x}+\mu c_{x}=0.&&\label{a7.18} \end{eqnarray} Substituting \eqref{a7.17} and \eqref{a7.18} into \eqref{a7.13} and \eqref{a7.14} and taking the $z_1$ derivative of the remaining expression we get \begin{eqnarray}
-2(m_1\mu -\eta)b - m_1(a-c) = 0,&&\label{a7.13.1}\\
(m_1\mu -\eta)(a-c) - 2m_1b = 0.&&\label{a7.14.1} \end{eqnarray} Since the Gauss equation \eqref{gauss} needs to be satisfied we have $(a-c)^2+b^2\neq 0$. Hence, from \eqref{a7.13.1} and \eqref{a7.14.1} we obtain $(m_1\mu-\eta)^2+m_1^2 = 0$, i.e., $m_1=\eta = 0$, which gives a contradiction since $\eta \neq 0$.
$\Box$
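The final step of the proof above is a linear-algebra observation that can be made explicit: writing \eqref{a7.13.1}-\eqref{a7.14.1} as a homogeneous system in the unknowns $(a-c,b)$ (the matrix arrangement below is ours), the coefficient matrix has determinant $2[m_1^2+(m_1\mu-\eta)^2]$, so a nontrivial solution forces $m_1=0$ and $m_1\mu-\eta=0$. A minimal symbolic check:
\begin{verbatim}
# Determinant of the homogeneous system (a7.13.1)-(a7.14.1) in (a - c, b).
import sympy as sp

m1, mu, eta = sp.symbols('m1 mu eta', real=True)

M = sp.Matrix([[-m1,         -2*(m1*mu - eta)],
               [m1*mu - eta, -2*m1          ]])

print(sp.expand(M.det() - 2*m1**2 - 2*(m1*mu - eta)**2))   # expected output: 0
\end{verbatim}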
\begin{proposition}\label{prop3} Consider an equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, given by the Theorem $\ref{teo7.4}$. There exists a local isometric immersion in $\mathbb{R}^3$ of a pseudospherical surface, defined by a solution $u$, for which the coefficients $a$, $b$ and $c$ of the second fundamental form depend on $x, t, z_0, \ldots, z_l, w_1, \ldots, w_m$, $v_1, \ldots, v_n$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary, if, and only if,
\vspace*{0.4 cm} \noindent $(i)$ $\mu = m_1 = 0$, $m_2\neq 0$ and $a$, $b$ and $c$ depend only on $t$ and are given by \begin{eqnarray}\label{d7.1} a = \pm \sqrt{L(t)}, \quad b=\beta e^{\pm 2m_2t}, \quad c = a \mp \frac{a_t}{m_2}, \end{eqnarray} where $L(t) = \sigma e^{\pm 2m_2t}-\beta^2 e^{\pm 4m_2t}-1$, with $\sigma$, $\beta$ $\in$ $\mathbb{R}$, $\sigma >0$ and $\sigma^2 >4\beta^2$. The coefficients $a$, $b$, $c$ are defined on a strip of $\mathbb{R}$ where \begin{eqnarray}\label{d7.2} \log\sqrt{\frac{\sigma -\sqrt{\sigma^2 -4\beta^2}}{2\beta^2}}< \pm m_2t <\log\sqrt{\frac{\sigma +\sqrt{\sigma^2 -4\beta^2}}{2\beta^2}}. \end{eqnarray} Moreover, the constants $\beta$ and $\sigma$ have to be chosen so that the strip intersects the domain of the solution of \eqref{th2.4}.
\vspace*{0.3 cm} \noindent or \vspace*{0.3 cm}
\noindent $(ii)$ $\mu=0$, $m_1\neq 0$, $\lambda^2+m_2^2 \neq 0$ and $a$, $b$ and $c$ are functions of $m_1x+m_2t$ and given by \begin{eqnarray}\label{ab5.1} a = \pm \sqrt{L(m_1 x+m_2 t)},\quad b = -\beta e^{\pm2 (m_1x+m_2t)} ,\quad c = a\mp a', \end{eqnarray} where $L(m_1 x+m_2 t) = \sigma e^{\pm 2(m_1 x+m_2 t)}-\beta^2 e^{\pm 4(m_1 x+m_2 t)}-1$, with $\sigma$, $\beta$ $\in$ $\mathbb{R}$, $\sigma >0$ and $\sigma^2 >4\beta^2$. The coefficients $a$, $b$, $c$ are defined on a strip of $\mathbb{R}$ where \begin{eqnarray}\label{d7.2.1} \log\sqrt{\frac{\sigma -\sqrt{\sigma^2 -4\beta^2}}{2\beta^2}}< \pm (m_1 x+ m_2 t) <\log\sqrt{\frac{\sigma +\sqrt{\sigma^2 -4\beta^2}}{2\beta^2}}. \end{eqnarray} Moreover, the constants $\beta$ and $\sigma$ have to be chosen so that the strip intersects the domain of the solution of \eqref{th2.4}.
\vspace*{0.3 cm} \noindent or \vspace*{0.3 cm}
\noindent $(iii)$ $\mu\neq 0$, $(\lambda m_1)^2+m_2^2 \neq 0$ and $a$, $b$ and $c$ are differentiable functions of $m_1x+m_2t$ and given by \begin{eqnarray}\label{129.a}
a &=& \frac{1}{2\mu}[\pm \mu\sqrt{\Delta}-(\mu^2-1)b+\beta e^{\pm 2(m_1 x+m_2 t)}],\nonumber\\
c &=& \frac{1}{2\mu}[\pm \mu\sqrt{\Delta}+(\mu^2-1)b-\beta e^{\pm 2(m_1 x+m_2 t)}],\\
\Delta &=& \frac{[(\mu^2-1)b-\beta e^{\pm 2(m_1 x+m_2 t)}]^2 -4\mu^2(1-b^2)}{\mu^2}> 0\nonumber \end{eqnarray} where $b$ satisfies the ordinary differential equation \begin{eqnarray}\label{102.1} [\mu(1+\mu^2)\sqrt{\Delta}\pm (\mu^2+1)^2b\mp (\mu^2-1)\beta e^{\pm 2(m_1 x+m_2 t)}]b'\hspace*{3cm}\nonumber\\ + 2 [\mp\mu (1+\mu^2)\sqrt{\Delta}- \beta(\mu^2-1)e^{\pm 2(m_1 x+m_2 t)}]b+ 2\beta^2 e^{\pm 4(m_1 x+m_2 t)}=0 \end{eqnarray} \end{proposition}
\noindent \textbf{Proof}. If $f_{21} \equiv 0$ then $\mu=m_1 =0$ and $m_2\neq 0$. From Lemma \ref{lemma3.4} the coefficients of the second fundamental form of such a local isometric immersion are universal, and hence \eqref{3.8} and \eqref{3.9} become \begin{eqnarray} [a_t +\lambda a_x z_0\mp m_2(a -c)]h-a_x\psi - m_2b_x = 0,\quad [b_t+\lambda b_x z_0\mp 2m_2 b]h-b_x\psi - m_2c_x = 0.\label{d5.4} \end{eqnarray} Differentiating \eqref{d5.4} with respect to $z_2$, since $h'\neq 0$ on an open set, we obtain \begin{eqnarray} a_t +\lambda a_x z_0\mp m_2(a -c)=0, \quad b_t+\lambda b_x z_0\mp 2m_2 b=0.\label{d5.5} \end{eqnarray} Differentiating \eqref{d5.5} with respect to $z_0$ and replacing the result back into \eqref{d5.5}, we get \begin{eqnarray} \lambda a_x = \lambda b_x = 0,\quad a_t \mp m_2(a-c) = 0, \quad b_t \mp 2m_2b = 0.\label{d5.6} \end{eqnarray} Substituting \eqref{d5.6} into \eqref{d5.4} we finally have \begin{eqnarray}\label{d5.7} a_x \psi + m_2b_x = 0, \quad b_x\psi +m_2c_x = 0. \end{eqnarray} Taking the derivative of the Gauss equation \eqref{gauss} with respect to $x$ leads to $a_xc+ac_x - 2bb_x = 0$. Replacing \eqref{d5.7} in the latter, we have \begin{eqnarray}\label{(136)} a_x \left[c + \left(\frac{\psi}{m_2}\right)^2a + 2\frac{\psi}{m_2} b\right] = 0. \end{eqnarray} If $a_x\neq 0$ then differentiating the first equation in \eqref{d5.7} with respect to $z_0$ and $z_1$ gives us $\psi_{,z_0} = \psi_{,z_1}=0$ and, thus, $\psi = \alpha m_2$, where $\alpha$ denotes an arbitrary constant. From \eqref{(136)} we can see that $\psi\neq 0$, since $c\neq 0$, and \begin{eqnarray}\label{d5.8} b = \pm 1 -\alpha a, \quad c = \alpha^2 a\mp 2\alpha. \end{eqnarray} Substituting \eqref{d5.8} into \eqref{d5.6} leads to $$ a_t \mp m_2(a-\alpha^2 a\pm 2\alpha) = 0, \quad -\alpha a_t \mp 2m_2(\pm 1-\alpha a) = 0. $$ In the above equations, adding the second to the first multiplied by $\alpha$ leads to $a=\pm 2/\alpha$, which replaced in the first equation gives us $m_2 = 0$ and, thus, a contradiction since $m_2\neq 0$.
Therefore, $a_x = 0$ and by \eqref{d5.7} we have $b_x = c_x = 0$. Thus, $a$, $b$ and $c$ depend only on $t$. It follows from \eqref{d5.6} that \begin{eqnarray}\label{d5.9} b = \beta e^{\pm 2m_2t}, \quad c = a\mp \frac{a_t}{m_2}, \end{eqnarray} where $\beta$ is a constant. Replacing \eqref{d5.9} into the Gauss equation leads to $a= \pm \sqrt{L(t)}$ where $L(t) = \sigma e^{\pm 2m_2t} - \beta^2 e^{\pm 4m_2t} - 1$, $\sigma>0$ is a constant and $\sigma^2>4\beta^2$. This $a$ together with \eqref{d5.9} gives \eqref{d7.1}, where $a$ is defined on the strip described by \eqref{d7.2}. Observe that $\psi$ and $\lambda$ are still arbitrary. This concludes $(i)$.
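We remark that the strip \eqref{d7.2} is precisely the region where $L(t)>0$, as needed for $a = \pm \sqrt{L(t)}$ to be real and nonzero: writing $y = e^{\pm 2m_2t}>0$ and assuming $\beta\neq 0$ (as is implicit in \eqref{d7.2}), we have $L(t)>0$ if, and only if, $\beta^2y^2-\sigma y+1<0$, i.e., if, and only if, $y$ lies strictly between the roots $(\sigma \mp\sqrt{\sigma^2-4\beta^2})/(2\beta^2)$, which are real, distinct and positive since $\sigma>0$ and $\sigma^2>4\beta^2$. Taking logarithms gives \eqref{d7.2}. The same argument applied to $y = e^{\pm 2(m_1x+m_2t)}$ gives \eqref{d7.2.1} in case $(ii)$.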
Suppose $f_{21}\neq 0$ on a non-empty open set. If $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}=0$ then from Lemma \ref{lemma3.3} the equations \eqref{gauss}, \eqref{3.8} and \eqref{3.9} form an inconsistent system. Therefore, we have $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}\neq 0$ and, from Lemma \ref{lemma3.2}, the coefficients of the second fundamental form of such a local isometric immersion are universal, and hence \eqref{3.8} and \eqref{3.9} become \begin{eqnarray} f_{11} a_t+f_{21} b_t -f_{12} a_x-f_{22} b_x-2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\label{7.21}\\ f_{11} b_t+f_{21} c_t -f_{12} b_x-f_{22} c_x+(a-c)\Delta_{13}+2b\Delta_{23} = 0,&&\label{7.22} \end{eqnarray} where $\Delta_{13} = \pm \mu(\lambda m_1z_0+m_2)h\mp \mu m_1\psi$ and $\Delta_{23}=\mp (m_2+\lambda m_1z_0)h \pm m_1\psi$. Hence, since $f_{ij}$ are given by \eqref{th2.4a}, it follows from \eqref{7.21} and \eqref{7.22} that, respectively, \begin{eqnarray} [a_t+\mu b_t +\lambda (a_x+\mu b_x)z_0\mp (a +2\mu b-c)(\lambda m_1z_0 +m_2)]h-[a_x+\mu b_x \mp m_1(a+2\mu b-c)]\psi\nonumber\\ +\sqrt{1+\mu^2}(m_1b_t - m_2b_x) = 0,\label{7.23} \end{eqnarray} \begin{eqnarray} [b_t+\mu c_t +\lambda (b_x+\mu c_x)z_0\pm (\mu a -2 b-\mu c)(\lambda m_1z_0 +m_2)]h-[b_x+\mu c_x \pm m_1(\mu a-2 b-\mu c)]\psi\nonumber\\ +\sqrt{1+\mu^2}(m_1c_t - m_2c_x) = 0.\label{7.24} \end{eqnarray} Differentiating \eqref{7.23} and \eqref{7.24} with respect to $z_2$ leads, since $h'\neq 0$ on an open set, to \begin{eqnarray} a_t+\mu b_t +\lambda (a_x+\mu b_x)z_0\mp (a +2\mu b-c)(\lambda m_1z_0 +m_2)=0,&&\label{7.25}\\ b_t+\mu c_t +\lambda (b_x+\mu c_x)z_0\pm (\mu a -2 b-\mu c)(\lambda m_1z_0 +m_2)=0.&&\label{7.26} \end{eqnarray} Differentiating \eqref{7.25} and \eqref{7.26} with respect to $z_0$ and replacing the result back into the latter two equations we get \begin{eqnarray}\label{7.27} a_t+\mu b_t\mp m_2(a +2\mu b-c)=0, \quad b_t+\mu c_t \pm m_2(\mu a -2 b-\mu c)=0, \end{eqnarray} and \begin{eqnarray}\label{7.28} \lambda[a_x+\mu b_x\mp m_1(a +2\mu b-c)]=0, \quad \lambda[b_x+\mu c_x\pm m_1(\mu a -2 b-\mu c)]=0. \end{eqnarray}
Finally, substituting \eqref{7.27} and \eqref{7.28} back into \eqref{7.23} and \eqref{7.24}, we obtain \begin{eqnarray} -[a_x+\mu b_x \mp m_1(a+2\mu b-c)]\psi +\sqrt{1+\mu^2}(m_1b_t - m_2b_x) = 0,&&\label{143}\\ -[b_x+\mu c_x \pm m_1(\mu a-2 b-\mu c)]\psi +\sqrt{1+\mu^2}(m_1c_t - m_2c_x) = 0.&&\label{144} \end{eqnarray} Multiplying the first equation in \eqref{7.27} by $m_2$ and adding the result to the first equation in \eqref{7.28} multiplied by $\lambda m_1$ leads to \begin{eqnarray}\label{145} a+2\mu b - c = \pm \frac{1}{M}[m_2(a_t+\mu b_t)+\lambda^2 m_1(a_x+\mu b_x)], \end{eqnarray} and the same operation with the second equation of \eqref{7.27} and \eqref{7.28} leads to \begin{eqnarray}\label{146} \mu a -2 b -\mu c = \mp \frac{1}{M}[m_2(b_t+\mu c_t)+\lambda^2 m_1(b_x+\mu c_x)], \end{eqnarray} where $M = (\lambda m_1)^2+m_2^2$ is a nonzero constant. Replacing \eqref{145} and \eqref{146} into \eqref{143} and \eqref{144}, we obtain \begin{eqnarray} m_2 \psi (m_1a_t - m_2a_x) + (\mu m_2\psi +M\sqrt{1+\mu^2})(m_1b_t-m_2b_x) = 0,&&\label{147}\\ m_2 \psi (m_1b_t - m_2b_x) + (\mu m_2\psi +M\sqrt{1+\mu^2})(m_1c_t-m_2c_x) = 0.&&\label{148} \end{eqnarray} Differentiating the Gauss equation \eqref{gauss} with respect to $t$ and multiplying the result by $m_1$, and doing the same with $x$ and $m_2$, we get \begin{eqnarray*} m_1 a_t c+m_1ac_t - 2m_1bb_t = 0,&&\\ m_2 a_x c+m_2ac_x - 2m_2bb_x = 0.&& \end{eqnarray*} From the two latter equations we obtain \begin{eqnarray}\label{G} (m_1a_t -m_2 a_x)c + (m_1c_t -m_2 c_x)a - 2b(m_1b_t -m_2 b_x)=0. \end{eqnarray}
Suppose $m_2\psi \neq 0$. Replacing \eqref{147} and \eqref{148} in \eqref{G} leads to \begin{eqnarray}\label{149} (m_1c_t -m_2 c_x)[a+Q^2c + 2Qb] = 0, \quad Q = \frac{\mu m_2\psi + M\sqrt{1+\mu^2}}{m_2\psi}. \end{eqnarray} If $m_1c_t -m_2 c_x\neq 0$, then $\psi$ is a constant and, by \eqref{149} and the Gauss equation, we have \begin{eqnarray}\label{150} a = Q^2 c \mp 2Q, \quad b = \pm 1 -Qc. \end{eqnarray} Since $a\neq 0$ we have $Q\neq 0$. Substituting \eqref{150} into \eqref{7.27}, we get $$ (Q-\mu )Qc_t \mp m_2[Q^2 c\mp 2Q - c +2\mu (\pm 1-Qc)] = 0, \quad -(Q-\mu )c_t \pm m_2[\mu(Q^2 c\mp 2Q - c) -2 (\pm 1-Qc)] = 0. $$
In the latter equations, adding the second multiplied by $Q$ to the first gives us that $c$ is constant, which implies from \eqref{150} that $a$ and $b$ are constants. But if $a$, $b$ and $c$ are constants, then \eqref{7.27} implies that $a-c=0$ and $b=0$, which contradicts the Gauss equation \eqref{gauss}. Therefore $m_1c_t -m_2 c_x = 0$ and, thus, from \eqref{147} and \eqref{148} we have $m_1b_t-m_2b_x=0$ and $m_1a_t-m_2a_x = 0$.
On the other hand, if $m_2\psi = 0$ then from \eqref{147} and \eqref{148} we have $m_1b_t-m_2b_x=0$ and $m_1c_t -m_2 c_x = 0$, which replaced into \eqref{G} since $c\neq 0$ leads to $m_1a_t-m_2a_x = 0$.
Therefore, for arbitrary $m_2\psi$ we have shown that \begin{eqnarray} \begin{array}{ll}\label{7.31} a = \phi_1(m_1x+m_2t),\quad b = \phi_2(m_1x+m_2t),\quad c = \phi_3(m_1x+m_2t), \end{array} \end{eqnarray} where $\phi_i$, $i=1,2,3$, are real and differentiable functions and, by Lemma \ref{lemma3.1}, $\phi_1\phi_3\neq 0$ on a open set. Replacing \eqref{7.31} into \eqref{7.27} and \eqref{7.28} and observing that $(\lambda m_1)^2+m_2^2\neq 0$, we obtain \begin{eqnarray} \phi_1' + \mu \phi_2'\mp (\phi_1+2\mu \phi_2 - \phi_3) = 0,&&\label{b5.12}\\ \phi_2' + \mu \phi_3' \pm (\mu \phi_1 -2\phi_2 - \mu \phi_3) = 0.&&\label{b5.13} \end{eqnarray} From \eqref{b5.12} we obtain $\phi_3$ in terms of $\phi_1$, $\phi_1'$, $\phi_2$ and $\phi_2'$, which replaced into \eqref{b5.13} implies that \begin{eqnarray}\label{b5.14} \mu\phi_1' = \pm (1+\mu^2)\phi_2 - \mu^2\phi_2' \pm \beta e^{\pm 2(m_1 x+m_2t)}. \end{eqnarray} If $\mu=0$, then from \eqref{b5.14} and \eqref{b5.12} we have $b = -\beta e^{\pm 2(m_1 x+m_2t)}$ and $c = a\mp a'$. Using the latter and Gauss equation leads to \eqref{ab5.1}, where $a$ is defined on the trip described by \eqref{d7.2.1}. Observe that $\lambda$ and $\psi$ are still arbitrary. This concludes $(ii)$.
If $\mu \neq 0$, then from \eqref{b5.14} we have $\phi_1'$, which replaced into \eqref{b5.12} implies that \begin{eqnarray}\label{112.b} \phi_3 = \phi_1 + \phi(m_1 x+m_2 t), \quad \phi(m_1 x+m_2 t) = \frac{\mu^2-1}{\mu}\phi_2 - \frac{\beta}{\mu}e^{\pm 2(m_1x+m_2t)}. \end{eqnarray} Substituting the latter into the Gauss equation we obtain $\phi_1^2+\phi_1\phi(m_1 x+m_2 t)-\phi_2^2 =-1$, which, solved as a second-degree equation in $\phi_1$, leads to $$ \phi_1 = \frac{-\phi(m_1 x+m_2 t)\pm \sqrt{\Delta}}{2}, \quad \Delta = \phi(m_1 x+m_2 t)^2-4[1-\phi_2(m_1 x+m_2 t)^2]> 0. $$
Hence, using \eqref{112.b} we also have $\phi_3$ in terms of $\phi_2=\phi_2(m_1 x+m_2 t)$ as in \eqref{129.a}, which replaced into \eqref{b5.14} gives us \begin{eqnarray}\label{eqt} [(1+\mu^2)\sqrt{\Delta}\pm (\mu^2-1)\phi \pm 4\mu b]b' \mp 2(1+\mu^2)b\sqrt{\Delta}-2\beta e^{\pm 2(m_1 x+m_2 t)}\phi =0. \end{eqnarray} Observe that, if the coefficient of $b'$ in \eqref{eqt} vanishes, we have \begin{eqnarray*} (1+\mu^2)\sqrt{\Delta}\pm (\mu^2-1)\phi \pm 4\mu b=0,&&\\ \mp 2(1+\mu^2)b\sqrt{\Delta}-2\beta e^{\pm 2(m_1 x+m_2 t)}\phi =0.&& \end{eqnarray*} In the latter two equations, replacing $(1+\mu^2)\sqrt{\Delta}$ of the first into the second implies that \begin{eqnarray*} 0 &=& \mp 2 b[\mp (\mu^2-1)\phi \mp 4\mu b]-2\beta e^{\pm 2(m_1 x+m_2 t)}\phi\\
&=& 2\phi [(\mu^2-1)b-\beta e^{\pm 2(m_1 x+m_2 t)}]+8\mu b^2\\
&=&2\mu \phi^2 + 8\mu b^2 \qquad \mbox{(by \eqref{112.b})}\\
&=& 2\mu (\phi^2 +4b^2), \end{eqnarray*} and then $\phi^2 +4b^2=0$, since $\mu\neq 0$. However, $\phi^2 +4b^2=0$ if, and only if, $\phi=b=0$ which implies by \eqref{112.b} that $a=c$. But, $a=c$ and $b=0$ contradict the Gauss equation \eqref{gauss}. Therefore, the coefficient of $b'$ in the equation \eqref{eqt} does not vanish on a non-empty open set. That means we can write $b' = g(x,b)$, where $g$ is a differentiable function defined, from \eqref{eqt}, by $$ g(x,b) = \frac{\pm 2\eta(1+\mu^2)b\sqrt{\Delta}+2\eta\beta e^{\pm 2\eta x}\phi}{(1+\mu^2)\sqrt{\Delta}\pm (\mu^2-1)\phi \pm 4\mu b}. $$
Let $x_0$ be an arbitrarily fixed point and consider the following Initial Value Problem (IVP) \begin{eqnarray}\label{(117)at} b' = g(x,b), \qquad b(x_0)=b_0. \end{eqnarray} Since $g$ is a smooth function of $(x,b)$ on its domain of definition, we have that $g(x,b)$ and $\partial_b g(x,b)$ are continuous in some open rectangle $$ R=\left\lbrace (x,b): x_1< x<x_2,\hspace*{0.1 cm} y_1< b< y_2\right\rbrace $$ that contains the point $(x_0,b_0)$. Then, by the fundamental existence and uniqueness theorem for ordinary differential equations, the IVP \eqref{(117)at} has a unique solution in some closed interval $I = [x_0-\epsilon, x_0+\epsilon]$, where $\epsilon$ is a positive number. Moreover, $x_1$ and $x_2$ have to be chosen so that the strip $x_1< x<x_2$ intersects the domain of the solution of \eqref{th2.4}. Observe that, replacing $\phi$ given by \eqref{112.b} into \eqref{eqt}, multiplying by $\mu$ and using $(\mu^2-1)^2+4\mu^2=(\mu^2+1)^2$, we obtain \eqref{102.1}. This concludes $(iii)$.
The converse follows from a straightforward computation.
$\Box$
\begin{proposition}\label{prop4} Consider an equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, given by Theorem $\ref{teo7.5}$-$(i)$. There is no local isometric immersion in $\mathbb{R}^3$ of a pseudospherical surface determined by a solution $u$ of the equation, for which the coefficients of the second fundamental form depend on $x, t, z_0, \ldots, z_l, w_1, \ldots, w_m$, $v_1, \ldots, v_n$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary. \end{proposition}
\noindent \textbf{Proof}. Since $\eta\neq 0$ we cannot have $f_{21}=0$ on an open set. Therefore $f_{21}\neq 0$ on an open set.
If $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}=0$ then from Lemma \ref{lemma3.3} the equations \eqref{gauss}, \eqref{3.8} and \eqref{3.9} form an inconsistent system. Hence, we have $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}\neq 0$ and, from Lemma \ref{lemma3.2}, the coefficients of the second fundamental form of such a local isometric immersion are universal, and hence \eqref{3.8} and \eqref{3.9} become \begin{eqnarray} f_{11} a_t+f_{21} b_t -f_{12} a_x-f_{22} b_x-2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\label{7.21a}\\ f_{11} b_t+f_{21} c_t -f_{12} b_x-f_{22} c_x+(a-c)\Delta_{13}+2b\Delta_{23} = 0,&&\label{7.22a} \end{eqnarray} where \begin{eqnarray}\label{7.34} \begin{array}{ll} \Delta_{13} = \left[\mu \left(m-\frac{b\tau}{a}\right)-\frac{\tau\eta}{a}\right](\mp \phi_{12}+\tau \varphi e^{\pm \tau z_1}f_{11}),\quad \Delta_{23}= -\left(m-\frac{b\tau}{a}\right)(\mp \phi_{12}+\tau \varphi e^{\pm \tau z_1}f_{11}), \end{array} \end{eqnarray} with $\phi_{12} = [\pm \tau(az_0+b)\varphi +az_1\varphi']e^{\pm \tau z_1}\mp \lambda az_1/\tau$. It follows from \eqref{7.21a} and \eqref{7.22a} that, respectively, \begin{eqnarray}\label{7.35} [a_t+\mu b_t+\lambda (a_x+\mu b_x)z_0]f_{11}-(a_x+\mu b_x)\phi_{12}+\eta b_t + \eta(\lambda z_0\mp \tau e^{\pm \tau z_1}\varphi)b_x-2b\Delta_{13}+(a-c)\Delta_{23} = 0, \end{eqnarray} \begin{eqnarray}\label{7.36} [b_t+\mu c_t+\lambda (b_x+\mu c_x)z_0]f_{11}-(b_x+\mu c_x)\phi_{12}+\eta c_t + \eta(\lambda z_0\mp \tau e^{\pm \tau z_1}\varphi)c_x+(a-c)\Delta_{13}+2b\Delta_{23} = 0. \end{eqnarray} Differentiating \eqref{7.35} and \eqref{7.36} with respect to $z_2$ and using \eqref{7.34}, since $f_{11,z_2}\neq 0$, we get \begin{eqnarray}\label{7.37} a_t+\mu b_t+\lambda (a_x+\mu b_x)z_0-\left\lbrace 2b\left[\mu \left(m-\frac{b\tau}{a}\right)-\frac{\tau\eta}{a}\right]+(a-c)\left(m-\frac{b\tau}{a}\right) \right\rbrace \tau \varphi e^{\pm \tau z_1} = 0, \end{eqnarray} \begin{eqnarray}\label{7.38} b_t+\mu c_t+\lambda (b_x+\mu c_x)z_0+\left\lbrace (a-c)\left[\mu \left(m-\frac{b\tau}{a}\right)-\frac{\tau\eta}{a}\right]-2b\left(m-\frac{b\tau}{a}\right) \right\rbrace \tau \varphi e^{\pm \tau z_1} = 0. \end{eqnarray} Differentiating \eqref{7.37} and \eqref{7.38} with respect to $z_1$ and observing that $\tau\varphi \neq 0$, we have \begin{eqnarray}\label{7.39} \begin{array}{rr} 2b\left[\mu \left(m-\frac{b\tau}{a}\right)-\frac{\tau\eta}{a}\right]+(a-c)\left(m-\frac{b\tau}{a}\right) = 0,\\ -2b\left(m-\frac{b\tau}{a}\right)+(a-c)\left[\mu \left(m-\frac{b\tau}{a}\right)-\frac{\tau\eta}{a}\right]= 0. \end{array} \end{eqnarray} Since $b^2+(a-c)^2\neq 0$, it follows from \eqref{7.39} that $$ \left[\mu \left(m-\frac{b\tau}{a}\right)-\frac{\tau\eta}{a}\right]^2+\left(m-\frac{b\tau}{a}\right)^2 = 0, $$ which implies $\Delta_{13} = \Delta_{23} = 0$ and, thus, contradicts \eqref{3.10a}.
$\Box$
\begin{proposition}\label{prop5} Consider an equation of type \eqref{T} describing pseudospherical surfaces, under the condition \eqref{1}, given by Theorem $\ref{teo7.5}$-$(ii)$. There is no local isometric immersion in $\mathbb{R}^3$ of a pseudospherical surface determined by a solution $u$ of the equation, for which the coefficients of the second fundamental form depend on $x, t, z_0, \ldots, z_l, w_1, \ldots, w_m$, $v_1, \ldots, v_n$, where $1\leq l <\infty$, $1\leq m < \infty$ and $1\leq n < \infty$ are finite, but otherwise arbitrary. \end{proposition}
\noindent \textbf{Proof}. Suppose $f_{21}\neq 0$ on an open set. If $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}=0$ then from Lemma \ref{lemma3.3} the equations \eqref{gauss}, \eqref{3.8} and \eqref{3.9} form an inconsistent system. Hence, we have $c+(f_{11}/f_{21})^2a+2f_{11}b/f_{21}\neq 0$ and, from Lemma \ref{lemma3.2}, the coefficients of the second fundamental form of such a local isometric immersion are universal, and hence \eqref{3.8} and \eqref{3.9} become \begin{eqnarray} f_{11} a_t+f_{21} b_t -f_{12} a_x-f_{22} b_x-2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\label{11.1}\\ f_{11} b_t+f_{21} c_t -f_{12} b_x-f_{22} c_x+(a-c)\Delta_{13}+2b\Delta_{23} = 0,&&\label{11.2} \end{eqnarray} where by \eqref{(7.2)} we have \begin{eqnarray}\label{11.3} \begin{array}{ll} \Delta_{13} = (\phi_{32}\mp \sqrt{1+\mu^2}\phi_{12})f_{11}\mp \frac{\theta +a\mu\eta}{a\sqrt{1+\mu^2}}\phi_{12},\\ \Delta_{23} = (\mu \phi_{32}\mp \sqrt{1+\mu^2}\phi_{22})f_{11}+\eta \phi_{32}\mp \frac{\theta +a\mu\eta}{a\sqrt{1+\mu^2}}\phi_{22}. \end{array} \end{eqnarray} Differentiating \eqref{11.1} and \eqref{11.2} with respect to $z_2$, we obtain, since $f_{11,z_2}\neq 0$, respectively, \begin{eqnarray} a_t+\mu b_t+\lambda (a_x+\mu b_x)z_0 - 2b(\phi_{32}\mp\sqrt{1+\mu^2}\phi_{12})+(a-c)(\mu\phi_{32}\mp \sqrt{1+\mu^2}\phi_{22}) = 0,\label{a11.4}\\ b_t+\mu c_t+\lambda (b_x+\mu c_x)z_0 +(a-c)(\phi_{32}\mp\sqrt{1+\mu^2}\phi_{12})+2b(\mu\phi_{32}\mp \sqrt{1+\mu^2}\phi_{22}) = 0.\label{a11.5} \end{eqnarray} Differentiating \eqref{a11.4} and \eqref{a11.5} with respect to $z_1$, since $b^2+(a-c)^2\neq 0$, we conclude that $$ (\phi_{32}\mp\sqrt{1+\mu^2}\phi_{12})_{,z_1} = (\mu\phi_{32}\mp \sqrt{1+\mu^2}\phi_{22})_{,z_1}=0 $$ if, and only if, $m_1\theta e^{\theta z_0}-\lambda = 0$, i.e., if and only if $\lambda = m_1=0$, which implies $\Delta_{12} = 0$ and, thus, a contradiction.
On the other hand, if $f_{21} = 0$ on an open set, then we have $\mu=\eta = 0$. It follows from Lemma \ref{lemma3.4} that the coefficients of the second fundamental form of such a local isometric immersion are universal, and hence \eqref{3.8} and \eqref{3.9} become \begin{eqnarray} f_{11} a_t -f_{12} a_x-f_{22} b_x-2b\Delta_{13}+(a-c)\Delta_{23} = 0,&&\label{11.6}\\ f_{11} b_t -f_{12} b_x-f_{22} c_x+(a-c)\Delta_{13}+2b\Delta_{23} = 0,&&\label{11.7} \end{eqnarray} where by \eqref{(7.2)} we have $\Delta_{13} = (\phi_{32}\mp \phi_{12})f_{11}\mp \theta \phi_{12}/a$ and $\Delta_{23} = \mp\phi_{22}f_{11}\mp \theta\phi_{22}/a$. Differentiating \eqref{11.6} and \eqref{11.7} with respect to $z_2$, since $f_{11,z_2}\neq 0$, we have, respectively, \begin{eqnarray} a_t+\lambda a_xz_0 - 2b(\phi_{32}\mp\phi_{12})\mp(a-c) \phi_{22} = 0,\label{11.4}\\ b_t+\lambda b_xz_0 +(a-c)(\phi_{32}\mp\phi_{12})\mp 2b \phi_{22} = 0,\label{11.5} \end{eqnarray} where $\phi_{32}\mp\phi_{12} = \pm (m_1\theta e^{\theta z_0}-\lambda)/a$ and $\phi_{22}=\mp (m_1\theta e^{\theta z_0}-\lambda)z_1$. Differentiating \eqref{11.4} and \eqref{11.5} with respect to $z_1$, since $m_1\theta e^{\theta z_0}-\lambda\neq 0$, we have $b=a-c=0$, which contradicts the Gauss equation.
$\Box$
Finally, the proof of Theorem \ref{teo} follows from Propositions \ref{prop1}-\ref{prop5}.
$\Box$
\noindent \textbf{Acknowledgments:} We are grateful to the Department of Mathematics and Statistics of the McGill University for its hospitality while this paper was being prepared. We express our sincere gratitude to Keti Tenenblat for invaluable suggestions. This work was supported by Ministry of Science and Technology, Brazil, CNPq Proc. 248877/2013-5 and by NSERC Grant RGPIN 105490-2011.
\noindent Tarc\'isio Castro Silva\\ Department of Mathematics and Statistics, McGill University, Canada\\ e-mail: [email protected]
\noindent Niky Kamran\\ Department of Mathematics and Statistics, McGill University, Canada\\ e-mail: [email protected]
\end{document}
\begin{document}
\begin{abstract} We introduce a new device in the study of abstract elementary classes (AECs): Galois Morleyization, which consists in expanding the models of the class with a relation for every Galois (orbital) type of length less than a fixed cardinal $\kappa$. We show:
\begin{thm}[The semantic-syntactic correspondence]
An AEC $K$ is fully $(<\kappa)$-tame and type short if and only if Galois types are syntactic in the Galois Morleyization. \end{thm}
This exhibits a correspondence between AECs and the syntactic framework of stability theory inside a model. We use the correspondence to make progress on the stability theory of tame and type short AECs. The main theorems are:
\begin{thm}\label{stab-spectrum-abstract}
Let $K$ be a $\text{LS} (K)$-tame AEC with amalgamation. The following are equivalent:
\begin{enumerate}
\item $K$ is Galois stable in some $\lambda \ge \text{LS} (K)$.
\item $K$ does not have the order property (defined in terms of Galois types).
\item There exist cardinals $\mu$ and $\lambda_0$ with $\mu \le \lambda_0 < \beth_{(2^{\text{LS} (K)})^+}$ such that $K$ is Galois stable in any $\lambda \ge \lambda_0$ with $\lambda = \lambda^{<\mu}$.
\end{enumerate} \end{thm}
\begin{thm}\label{coheir-syn-ab}
Let $K$ be a fully $(<\kappa)$-tame and type short AEC with amalgamation, $\kappa = \beth_{\kappa} > \text{LS} (K)$. If $K$ is Galois stable, then the class of $\kappa$-Galois saturated models of $K$ admits an independence notion ($(<\kappa)$-coheir) which, except perhaps for extension, has the properties of forking in a first-order stable theory. \end{thm} \end{abstract}
\title{Infinitary stability theory}
\tableofcontents
\section{Introduction}
Abstract elementary classes (AECs) are sometimes described as a purely semantic framework for model theory. It has been shown, however, that AECs are closely connected with more syntactic objects. See for example Shelah's presentation theorem \cite[Lemma 1.8]{sh88}, or Kueker's \cite[Theorem 7.2]{kueker2008} showing that an AEC with Löwenheim-Skolem number $\lambda$ is closed under $\mathbb{L}_{\infty, \lambda^+}$-elementary equivalence.
Another framework for non-elementary model theory is stability theory inside a model (introduced in Rami Grossberg's 1981 master thesis and studied for example\footnote{The definition of a model being stable appears already in \cite[Definition I.2.2]{shelahfobook78} but (as Shelah notes in the introduction to \cite[Chapter I]{sh300-orig}) this concept was not pursued further there.} in \cite{grossberg91-indisc, grossberg91} or \cite[Chapter I]{sh300-orig}, see \cite[Chapter V.A]{shelahaecbook2} for a more recent version). There the methods are very syntactic but it is believed (see for example the remark on p.\ 116 of \cite{grossberg91-indisc}) that they can help the resolution of more semantic questions, such as Shelah's categoricity conjecture for $\mathbb{L}_{\omega_1, \omega}$.
In this paper, we establish a correspondence between these two frameworks. We show that results from stability theory inside a model directly translate to results about \emph{tame} abstract elementary classes. Recall that an AEC is $(<\kappa)$-tame if its Galois (i.e.\ orbital) types are determined by their restrictions to domains of size less than $\kappa$. Tameness as a property of AECs was first isolated (from an argument in \cite{sh394}) by Grossberg and VanDieren \cite{tamenessone} and used to prove an upward categoricity transfer \cite{tamenessthree,tamenesstwo}. Boney \cite{tamelc-jsl} showed that tameness follows from the existence of large cardinals. Combined with the categoricity transfers of Grossberg-VanDieren and Shelah \cite{sh394}, this showed, assuming a large cardinal axiom, that Shelah's eventual categoricity conjecture holds if the categoricity cardinal is a successor.
The basic idea of the translation is the observation (appearing for example in \cite[p.~15]{tamelc-jsl} or \cite[p.~206]{lieberman2011}) that in a $(<\kappa)$-tame abstract elementary class, Galois types over domains of size less than $\kappa$ play a role analogous to first-order formulas. We make this observation precise by expanding the language of such an AEC with a relation symbol for each Galois type over the empty set of a sequence of length less than $\kappa$, and looking at $\mathbb{L}_{\kappa, \kappa}$-formulas in the expanded language. We call this expansion the \emph{Galois Morleyization}\footnote{We thank Rami Grossberg for suggesting the name.} of the AEC. Thinking of a type as the set of its small restrictions, we can then prove the \emph{semantic-syntactic correspondence} (Theorem \ref{separation}): Galois types in the AEC correspond to quantifier-free syntactic types in its Galois Morleyization.
The correspondence gives us a new method to prove results in tame abstract elementary classes:
\begin{enumerate}
\item Prove a syntactic result in the Galois Morleyization of the AEC (e.g.\ using tools from stability theory inside a model).
\item Translate to a semantic result in the AEC using the semantic-syntactic correspondence.
\item Push the semantic result further using known (semantic) facts about AECs, maybe combined with more hypotheses on the AEC (e.g.\ amalgamation). \end{enumerate}
As an application, we prove Theorem \ref{stab-spectrum-abstract} in the abstract (see Theorem \ref{stab-spectrum}), which gives the equivalence between no order property and stability in tame AECs and generalizes one direction of the stability spectrum theorem of homogeneous model theory (\cite[Theorem 4.4]{sh3}, see also \cite[Corollary 3.11]{grle-homog}). The syntactic part of the proof is not new (it is a straightforward generalization of Shelah's first-order proof \cite[Theorem 2.10]{shelahfobook}) and we are told by Rami Grossberg that proving such results was one of the reasons tameness was introduced (in fact theorems in the same spirit appear in \cite{tamenessone}). However we believe it is challenging to give a transparent proof of the result using Galois types only. The reason is that the classical proof uses local types and it is not clear how to naturally define them semantically.
The method has other applications: Theorem \ref{coheir-syn} (formalizing Theorem \ref{coheir-syn-ab} from the abstract) shows that in stable fully tame and short AECs, the coheir independence relation has some of the properties of a well-behaved independence notion. This is used in \cite{indep-aec-v5} to build a global independence notion from superstability. In \cite{bv-sat-v3}, we also use syntactic methods to investigate chains of Galois-saturated models.
Precursors to this work include Makkai and Shelah's study of classes of models of an $\mathbb{L}_{\kappa, \omega}$ theory for $\kappa$ a strongly compact cardinal \cite{makkaishelah}: there they prove \cite[Proposition 2.10]{makkaishelah} that Galois and syntactic $\Sigma_1 (\mathbb{L}_{\kappa, \kappa})$-types are the same (so in particular those classes are $(<\kappa)$-tame). One can see the results of this paper as a generalization to tame AECs. Also, the construction of the Galois Morleyization when $\kappa = \aleph_0$ (so the language remains finitary) appears in \cite[Section 2.4]{group-config-kangas}. Moreover it has been pointed out to us\footnote{By Jonathan Kirby.} that a device similar to Galois Morleyization is used in \cite[Section 3]{rosicky81} to present any concrete category as a class of models of an infinitary theory. However the use of Galois Morleyization to translate results of stability theory inside a model to AECs is new.
This paper is organized as follows. In section \ref{prelim-sec}, we review some preliminaries. In section \ref{sec-foundations}, we introduce \emph{functorial expansions}\footnote{These were called ``abstract Morleyizations'' in an early version of this paper. We thank John Baldwin for suggesting the new name.} of AECs and the main example: Galois Morleyization. We then prove the semantic-syntactic correspondence. In section \ref{thy-indep}, we investigate various order properties and prove Theorem \ref{stab-spectrum-abstract}. In section \ref{sec-coheir}, we study the coheir independence relation. Several of these sections have \emph{global hypotheses} which hold until the end of the section: see Hypotheses \ref{ftypes-hyp}, \ref{morley-hyp}, and \ref{coheir-hyp}.
We end with a note on how AECs compare to other non-first-order frameworks such as homogeneous model theory (see \cite{sh3}). There is an example (due to Marcus, see \cite{marcus-counterexample}) of an $\mathbb{L}_{\omega_1, \omega}$-axiomatizable class which is categorical in all uncountable cardinals but does not have an $\aleph_1$-sequentially-homogeneous model. For $n < \omega$, an example due to Hart and Shelah (see \cite{hs-example, bk-hs}) has amalgamation, no maximal models, and is categorical in all $\aleph_k$ with $k \le n$, but no higher. By \cite{tamenesstwo}, the example cannot be $\aleph_k$-tame for $k < n$. However if $\kappa$ is a strongly compact cardinal, the example will be fully $(<\kappa)$-tame and type short by the main result of \cite{tamelc-jsl}. The discussion on p.\ 74 of \cite{baldwinbook09} gives more non-homogeneous examples.
In general, classes from homogeneous model theory or quasiminimal pregeometry classes (see \cite{quasimin}) are special cases of AECs that are always fully $(<\aleph_0)$-tame and type short. In this paper we work with the much more general assumption of $(<\kappa)$-tameness and type shortness for a possibly uncountable $\kappa$.
This paper was written while working on a Ph.D.\ thesis under the direction of Rami Grossberg at Carnegie Mellon University and I would like to thank Professor Grossberg for his guidance and assistance in my research in general and in this work specifically. I thank Will Boney for thoroughly reading this paper and providing invaluable feedback. I also thank Alexei Kolesnikov for valuable discussions on the idea of thinking of Galois types as formulas. I thank John Baldwin, Jonathan Kirby, and a referee for valuable comments.
\section{Preliminaries}\label{prelim-sec}
We review some of the basics of abstract elementary classes and fix some notation. The reader is advised to skim through this section quickly and go back to it as needed.
\subsection{Set theoretic terminology}
\begin{defin}\label{kappa-r-def}
Let $\kappa$ be an infinite cardinal.
\begin{enumerate}
\item Let $\plus{\kappa}$ be the least regular cardinal greater than or equal to $\kappa$. That is, $\plus{\kappa}$ is $\kappa^+$ if $\kappa$ is singular and $\kappa$ if $\kappa$ is regular.
\item Let $\kappa^-$ be $\kappa$ if $\kappa$ is limit or the unique $\kappa_0$ such that $\kappa_0^+ = \kappa$ if $\kappa$ is a successor.
\end{enumerate} \end{defin}
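For example, $\plus{(\aleph_\omega)} = \aleph_{\omega + 1}$ while $\plus{(\aleph_1)} = \aleph_1$, and $(\aleph_{\omega + 1})^- = \aleph_\omega$ while $(\aleph_\omega)^- = \aleph_\omega$.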
We will often use the following function:
\begin{defin}[Hanf function]\label{hanf-def}
For $\lambda$ an infinite cardinal, define $\hanf{\lambda} := \beth_{(2^{\lambda})^+}$. Also define $\hanfs{\lambda} := \hanf{\lambda^-}$. \end{defin}
Note that for $\lambda$ infinite, $\lambda = \beth_\lambda$ if and only if for all $\mu < \lambda$, $h (\mu) < \lambda$.
\subsection{Syntax}\label{syntax-subsec}
The notation of this paper is standard, but since we will work with infinitary objects and need to be precise, we review the basics. We will often work with the logic $\mathbb{L}_{\kappa, \kappa}$, see \cite{dickmann-book} for the definition and basic results.
\begin{defin}\label{infinitary-def}
An \emph{infinitary vocabulary} is a vocabulary where we also allow relation and function symbols of infinite arity. For simplicity, we require the arity to be an ordinal. An infinitary vocabulary is \emph{$(<\kappa)$-ary} if all its symbols have arity strictly less than $\kappa$. A \emph{finitary vocabulary} is a $(<\aleph_0)$-ary vocabulary. \end{defin}
For $\tau$ an infinitary vocabulary, $\phi$ an $\mathbb{L}_{\kappa, \kappa} (\tau)$-formula and $\bar{x}$ a sequence of variables, we write $\phi = \phi (\bar{x})$ to emphasize that the free variables of $\phi$ appear among $\bar{x}$ (recall that an $\mathbb{L}_{\kappa, \kappa}$-formula must have fewer than $\kappa$-many free variables, but not all elements of $\bar{x}$ need to appear as free variables in $\phi$, so we allow $\ell(\bar{x}) \ge \kappa$). We use a similar notation for sets of formulas. When $\bar{a}$ is an element in some $\tau$-structure and $\phi (\bar{x}, \bar{y})$ is a formula, we often abuse notation and say that $\psi (\bar{x}) = \phi (\bar{x}, \bar{a})$ is a formula (again, we allow $\ell (\bar{a}) \ge \kappa$). We say $\phi (\bar{x}, \bar{a})$ is a \emph{formula over $A$} if $\bar{a} \in \fct{<\infty}{A}$.
\begin{defin}
For $\phi$ a formula over a set, let $\FV{\phi}$ denote an enumeration of the free variables of $\phi$ (according to some canonical ordering on all variables). That is, fixing such an ordering, $\FV{\phi}$ is the smallest sequence $\bar{x}$ such that $\phi = \phi (\bar{x})$. Let $\ell (\phi) := \ell (\FV{\phi})$ (it is an ordinal, but by permuting the variables we can usually assume without loss of generality that it is a cardinal), and $\dom{\phi}$ be the smallest set $A$ such that $\phi$ is over $A$. Define similarly the meaning of $\FV{p}$, $\ell (p)$, and $\dom{p}$ on a set $p$ of formulas.
\begin{defin}
For $\tau$ an infinitary vocabulary, $M$ a $\tau$-structure, $A \subseteq |M|$, $\bar{b} \in \fct{<\infty}{|M|}$, and $\Delta$ a set of $\tau$-formulas (in some logic), let\footnote{Of course, we have in mind a canonical sequence of variables $\bar{x}$ of order type $\ell (\bar{b})$ that should really be part of the notation but (as is customary) we always omit this detail.}:
$$ \operatorname{tp}_{\Delta} (\bar{b} / A; M) := \{\phi (\bar{x}; \bar{a}) \mid \phi (\bar{x}, \bar{y}) \in \Delta\text{, } \bar{a} \in \fct{\ell (\bar{y})}{A} \text{, and } M \models \phi[\bar{b}, \bar{a}]\} $$
We will most often work with $\Delta = \operatorname{qf-}\mathbb{L}_{\kappa, \kappa}$, the set of \emph{quantifier-free} $\mathbb{L}_{\kappa, \kappa}$-formulas. \end{defin}
\begin{defin}\label{stone-def}
For $M$ a $\tau$-structure, $\Delta$ a set of $\tau$-formulas, $A \subseteq |M|$, $\alpha$ an ordinal or $\infty$, let
$$
\text{S}_{\Delta}^{<\alpha} (A; M) := \{\operatorname{tp}_{\Delta} (\bar{b} / A; M) \mid \bar{b} \in \fct{<\alpha}{|M|}\} $$
Define similarly the variations for $\le \alpha$, $\alpha$, etc. We write $\text{S}_{\Delta} (A; M)$ instead of $\text{S}_{\Delta}^1 (A; M)$. \end{defin}
\subsection{Abstract classes}
We review the definition of an abstract elementary class. Abstract elementary classes (AECs) were introduced by Shelah in \cite{sh88}. The reader unfamiliar with AECs can consult \cite{grossberg2002} for an introduction.
We first review more general objects that we will sometimes use. Abstract classes are already defined in \cite{grossbergbook}, while $\mu$-abstract elementary classes are introduced in \cite{mu-aec-toappear-v2}. We will mostly use them to deal with functorial expansions and classes of saturated models of an AEC.
\begin{defin}
An \emph{abstract class} (AC for short) is a pair $(K, \le)$, where:
\begin{enumerate}
\item $K$ is a class of $\tau$-structures, for some fixed infinitary vocabulary $\tau$ (that we will denote by $\tau (K)$). We say $(K, \le)$ is \emph{$(<\mu)$-ary} if $\tau$ is $(<\mu)$-ary.
\item $\le$ is a partial order (that is, a reflexive and transitive relation) on $K$.
\item If $M \le N$ are in $K$ and $f: N \cong N'$, then $f[M] \le N'$ and both are in $K$.
\item If $M \le N$, then $M \subseteq N$.
\end{enumerate}
\end{defin} \begin{remark}
We do not always strictly distinguish between $K$ and $(K, \le)$. \end{remark} \begin{notation}
For $K$ an abstract class, $M, N \in K$, we write $M < N$ when $M \le N$ and $M \neq N$. \end{notation}
\begin{defin}\label{r-increasing-def}
Let $K$ be an abstract class. A sequence $\seq{M_i : i < \delta}$ of elements of $K$ is \emph{increasing} if for all $i < j < \delta$, $M_i \le M_j$. \emph{Strictly increasing} means $M_i < M_j$ for $i < j$. $\seq{M_i : i < \delta}$ is \emph{continuous} if for all limit $i < \delta$, $M_i = \bigcup_{j < i} M_j$. \end{defin}
\begin{notation}
For $K$ an abstract class, we use notations such as $K_\lambda$, $K_{\ge \lambda}$, $K_{<\lambda}$ for the models in $K$ of size $\lambda$, $\ge \lambda$, $<\lambda$, respectively. \end{notation}
\begin{defin} Let $(I,\le)$ be a partially-ordered set.
\begin{enumerate}
\item We say that $I$ is \emph{$\mu$-directed} provided that for every $J\subseteq I$ with $|J|<\mu$ there exists $r\in I$ such that $r\geq s$ for all $s\in J$ (thus $\aleph_0$-directed is the usual notion of a directed set).
\item Let $(K,\le)$ be an abstract class. An indexed system $\seq{M_i : i \in I}$ of models in $K$ is \emph{$\mu$-directed} if $I$ is a $\mu$-directed set and $i < j$ implies $M_i \le M_j$. \end{enumerate} \end{defin}
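For example, if $\mu$ is regular and $X$ is any set, then the collection of all subsets of $X$ of size less than $\mu$, ordered by $\subseteq$, is $\mu$-directed: a union of fewer than $\mu$ sets, each of size less than $\mu$, again has size less than $\mu$.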
\begin{defin}\label{mu-aec-def}
Let $\mu$ be a regular cardinal and let $(K, \le)$ be a $(<\mu)$-ary abstract class. We say that $(K, \le)$ is a $\mu$-\emph{abstract elementary class} ($\mu$-AEC for short) if:
\begin{enumerate}
\item Coherence: If $M_0, M_1, M_2 \in K$ satisfy $M_0 \le M_2$, $M_1 \le M_2$, and $M_0 \subseteq M_1$, then $M_0 \le M_1$;
\item Tarski-Vaught axioms: Suppose $\seq{M_i \in K : i \in I}$ is a $\mu$-directed system. Then:
\begin{enumerate}
\item $\bigcup_{i \in I} M_i \in K$ and, for all $j \in I$, we have $M_j \le \bigcup_{i \in I} M_i$.
\item If there is some $N \in K$ so that for all $i \in I$ we have $M_i \le N$, then we also have $\bigcup_{i \in I} M_i \le N$.
\end{enumerate}
\item Löwenheim-Skolem-Tarski axiom: There exists a cardinal $\lambda = \lambda^{<\mu} \ge |\tau (K)| + \mu$ such that for any $M \in K$ and $A \subseteq |M|$, there is some $M_0 \le M$ such that $A \subseteq |M_0|$ and $\|M_0\| \le |A|^{<\mu} + \lambda$. We write $\text{LS} (K)$ for the minimal such cardinal\footnote{Pedantically, $\text{LS} (K)$ really depends on $\mu$ but $\mu$ will always be clear from context.}. \end{enumerate}
When $\mu = \aleph_0$, we omit it and simply call $K$ an \emph{abstract elementary class} (AEC for short). \end{defin}
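For example, if $T$ is a first-order theory in a vocabulary $\tau$, then $(\text{Mod} (T), \preceq)$ is an AEC with Löwenheim-Skolem-Tarski number $|\tau| + \aleph_0$: the Tarski-Vaught axioms correspond to the elementary chain theorem, and the Löwenheim-Skolem-Tarski axiom to the classical downward Löwenheim-Skolem theorem.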
In any abstract class, we can define a notion of embedding:
\begin{defin}
Let $K$ be an abstract class. We say a function $f: M \rightarrow N$ is a \emph{$K$-embedding} if $M, N \in K$ and $f: M \cong f[M] \le N$. For $A \subseteq |M|$, we write $f: M \xrightarrow[A]{} N$ to mean that $f$ fixes $A$ pointwise. Unless otherwise stated, when we write $f: M \rightarrow N$ we mean that $f$ is an embedding. \end{defin}
Here are three key structural properties an abstract class can have:
\begin{defin}
Let $K$ be an abstract class.
\begin{enumerate}
\item $K$ has \emph{amalgamation} if for any $M_0 \le M_\ell$ in $K$, $\ell = 1,2$, there exists $N \in K$ and $f_\ell : M_\ell \xrightarrow[M_0]{} N$.
\item $K$ has \emph{joint embedding} if for any $M_\ell$ in $K$, $\ell = 1,2$, there exists $N \in K$ and $f_\ell : M_\ell \rightarrow N$.
\item $K$ has \emph{no maximal models} if for any $M \in K$ there exists $N \in K$ with $M < N$.
\end{enumerate} \end{defin}
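For instance, for $T$ a first-order theory, the AEC $(\text{Mod} (T), \preceq)$ always has amalgamation (by the classical elementary amalgamation theorem), has joint embedding when $T$ is complete, and has no maximal models when all models of $T$ are infinite (by the upward Löwenheim-Skolem theorem).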
\subsection{Galois types}
Let $K$ be an abstract class. There is a well-known semantic notion of types for $K$, Galois types, that was first introduced by Shelah in \cite[Definition II.1.9]{sh300-orig}. While Galois types are usually only defined over models, here we allow them to be over any set. This is not harder and is often notationally convenient\footnote{For example, types over the empty set are used here in the definition of the Galois Morleyization. They appear implicitly in the definition of the order property in \cite[Definition 4.3]{sh394} and explicitly in \cite[Notation 1.9]{tamenessone}. They are also used in \cite{finitary-aec}.}. Note however that Galois types over sets are in general not too well-behaved. For example, they can sometimes fail to have an extension (in the sense that if we have $N, N' \in K$, $A \subseteq |N| \cap |N'|$ and $p$ a Galois type over $A$ realized in $N$, then we may not be able to extend $p$ to a type over $N'$) if their domain is not an amalgamation base.
\begin{defin}\label{gtp-def} \
\begin{enumerate}
\item Let $K^3$ be the set of triples of the form $(\bar{b}, A, N)$, where $N \in K$, $A \subseteq |N|$, and $\bar{b}$ is a sequence of elements from $N$.
\item For $(\bar{b}_1, A_1, N_1), (\bar{b}_2, A_2, N_2) \in K^3$, we say $(\bar{b}_1, A_1, N_1)E_{\text{at}} (\bar{b}_2, A_2, N_2)$ if $A := A_1 = A_2$, and there exist $N \in K$ and $f_\ell : N_\ell \xrightarrow[A]{} N$, $\ell = 1, 2$, such that $f_1 (\bar{b}_1) = f_2 (\bar{b}_2)$.
\item Note that $E_{\text{at}}$ is a symmetric and reflexive relation on $K^3$. We let $E$ be the transitive closure of $E_{\text{at}}$.
\item For $(\bar{b}, A, N) \in K^3$, let $\text{gtp} (\bar{b} / A; N) := [(\bar{b}, A, N)]_E$. We call such an equivalence class a \emph{Galois type}. We write $\text{gtp}_K (\bar{b} / A; N)$ when $K$ is not clear from context.
\item For $p = \text{gtp} (\bar{b} / A; N)$ a Galois type, define\footnote{It is easy to check that this does not depend on the choice of representatives.} $\ell (p) := \ell (\bar{b})$ and $\dom{p} := A$.
\end{enumerate} \end{defin}
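For instance, when $T$ is a complete first-order theory and $K = (\text{Mod} (T), \preceq)$, the classical elementary amalgamation theorem shows that $\text{gtp} (\bar{b}_1 / A; N_1) = \text{gtp} (\bar{b}_2 / A; N_2)$ exactly when $\bar{b}_1$ and $\bar{b}_2$ have the same first-order type over $A$, so Galois types generalize the usual syntactic types.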
We can go on to define the restriction of a type (if $A_0 \subseteq \dom{p}$, $I \subseteq \ell (p)$, we will write $p^I \upharpoonright A_0$ when the realizing sequence is restricted to $I$ and the domain is restricted to $A_0$), the image of a type under an isomorphism, or what it means for a type to be realized. Just as in \cite[Observation II.1.11.4]{shelahaecbook}, we have:
\begin{fact}\label{ap-eat}
If $K$ has amalgamation, then $E = E_{\text{at}}$. \end{fact}
Note that the proof goes through, even though we only have amalgamation over models, not over all sets.
\begin{remark}
To gain further insight into the difference between $E$ and $E_{\text{at}}$, consider the following situation. Let $K$ be an AEC that does \emph{not} have amalgamation and assume we are given $M \le N$, $a_1, a_2 \in |M|$, and $A \subseteq |M|$. Suppose we know that $(a_1, A, M) E_{\text{at}} (a_2, A, M)$. Then because $(a_\ell, A, N) E_{\text{at}} (a_\ell, A, M)$ for $\ell = 1,2$, we have that $(a_1, A, N) E (a_2, A, N)$, but we may \emph{not} have that $(a_1, A, N) E_{\text{at}} (a_2, A, N)$. \end{remark}
We also have the basic monotonicity and invariance properties \cite[Observation II.1.11]{shelahaecbook}, which follow directly from the definition:
\begin{prop}\label{galois-types-basic}
Let $K$ be an abstract class. Let $N \in K$, $A \subseteq |N|$, and $\bar{b} \in \fct{<\infty}{|N|}$.
\begin{enumerate}
\item\label{basic-1} Invariance: If $f: N \cong_A N'$, then $\text{gtp} (\bar{b} / A; N) = \text{gtp} (f (\bar{b}) / A; N')$.
\item\label{basic-2} Monotonicity: If $N \le N'$, then $\text{gtp} (\bar{b} / A; N) = \text{gtp} (\bar{b} / A; N')$.
\end{enumerate} \end{prop}
Monotonicity says that when $N \le N'$, the set of Galois types (over a fixed set $A$) realized in $N'$ is at least as big as the set of Galois types over $A$ realized in $N$ (using the notation below, $\text{gS} (A; N) \subseteq \text{gS} (A; N')$). When $A = M$ for $M \le N$ (or $A = \emptyset$), we can further define the class $\text{gS} (A)$ of \emph{all} Galois types over $A$ in the natural way. Assuming the existence of a monster model $\mathfrak{C}$ containing $A$, this is the same as the usual definition: all types over $A$ realized in $\mathfrak{C}$.
\begin{defin} \
\begin{enumerate}
\item Let $N \in K$, $A \subseteq |N|$, and $\alpha$ be an ordinal. Define:
$$
\text{gS}^\alpha (A; N) := \{\text{gtp} (\bar{b} / A; N) \mid \bar{b} \in \fct{\alpha}{|N|}\}
$$
\item For $M \in K$ and $\alpha$ an ordinal, let:
$$
\text{gS}^\alpha (M) := \{p \mid \exists N \in K : M \le N \text { and } p \in \text{gS}^{\alpha} (M ; N)\}
$$
\item For $\alpha$ an ordinal, let:
$$
\text{gS}^\alpha (\emptyset) := \bigcup_{N \in K} \text{gS}^{\alpha} (\emptyset; N)
$$
\end{enumerate}
When $\alpha = 1$, we omit it. Similarly define $\text{gS}^{<\alpha}$, where $\alpha$ is allowed to be $\infty$. \end{defin} \begin{remark}
When $\alpha$ is an ordinal, $\text{gS}^{\alpha} (M)$ and $\text{gS}^{\alpha} (\emptyset)$ could a priori be proper classes. However in reasonable cases (e.g.\ when $K$ is a $\mu$-AEC) they are sets. For example when $K$ is a $\mu$-AEC, an upper bound for $|\text{gS}^{\alpha} (M)|$ is $2^{\left(\|M\| + \alpha + \text{LS} (K)\right)^{<\mu}}$. \end{remark}
Next, we recall the definition of tameness, a locality property of types. Tameness was introduced by Grossberg and VanDieren in \cite{tamenessone} and used to get an upward stability transfer (and an upward categoricity transfer in \cite{tamenesstwo}). Later on, Boney showed in \cite{tamelc-jsl} that it followed from large cardinals and also introduced a dual property he called \emph{type shortness}.
\begin{defin}[Definitions 3.1 and 3.3 in \cite{tamelc-jsl}]\label{shortness-def}
Let $K$ be an abstract class and let $\Gamma$ be a class (possibly proper) of Galois types in $K$. Let $\kappa$ be an infinite cardinal.
\begin{enumerate}
\item $K$ is \emph{$(<\kappa)$-tame for $\Gamma$} if for any $p \neq q$ in $\Gamma$, if $A := \dom{p} = \dom{q}$, then there exists $A_0 \subseteq A$ such that $|A_0| < \kappa$ and $p \upharpoonright A_0 \neq q \upharpoonright A_0$.
\item $K$ is \emph{$(<\kappa)$-type short for $\Gamma$} if for any $p \neq q$ in $\Gamma$, if $\alpha := \ell (p) = \ell (q)$, then there exists $I \subseteq \alpha$ such that $|I| < \kappa$ and $p^I \neq q^I$.
\item $\kappa$-tame means $(<\kappa^+)$-tame, similarly for type short.
\item We usually just say ``short'' instead of ``type short''.
\item Usually, $\Gamma$ will be a class of types over models only, and we often specify it in words. For example, \emph{$(<\kappa)$-short for types of length $\alpha$} means $(<\kappa)$-short for $\bigcup_{M \in K} \text{gS}^\alpha (M)$.
\item We say $K$ is $(<\kappa)$-tame if it is $(<\kappa)$-tame for types of length one.
\item We say $K$ is \emph{fully} $(<\kappa)$-tame if it is $(<\kappa)$-tame for $\bigcup_{M \in K} \text{gS}^{<\infty} (M)$, similarly for short.
\end{enumerate} \end{defin}
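For instance, a complete first-order theory $T$ gives rise to a fully $(<\aleph_0)$-tame and short AEC $(\text{Mod} (T), \preceq)$: Galois types there coincide with first-order syntactic types, and these are determined by their restrictions to finite subsets of the domain and to finite subsequences of the realizing tuple.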
We review the natural notion of stability in this context. The definition here is slightly unusual compared to the rest of the literature: we define what it means for a \emph{model} to be stable in a given cardinal, and get a local notion of stability that is equivalent (in AECs) to the usual notion if amalgamation holds, but behaves better if amalgamation fails. Note that we count the number of types over an arbitrary set, not (as is common in AECs) only over models. In case the abstract class has a Löwenheim-Skolem number and we work above it, this is equivalent, as any type in $\text{gS}^{<\alpha} (A; N)$ can be extended\footnote{Note that this does not use any amalgamation because we work inside the same model $N$.} to $\text{gS}^{<\alpha} (B; N)$ when $A \subseteq B$, so $|\text{gS}^{<\alpha} (A; N)| \le |\text{gS}^{<\alpha} (B; N)|$.
\begin{defin}[Stability]\label{stab-def}
Let $K$ be an abstract class. Let $\alpha$ and $\mu$ be cardinals. A model $N \in K$ is $(<\alpha)$-\emph{stable in $\mu$} if for all $A \subseteq |N|$ of size $\le \mu$, $|\text{gS}^{<\alpha} (A; N)| \le \mu$. Here and below, $\alpha$-stable means $(< (\alpha^+))$-stable. We say ``stable'' instead of ``1-stable''.
$K$ is \emph{$(<\alpha)$-stable in $\mu$} if every $N \in K$ is $(<\alpha)$-stable in $\mu$. $K$ is \emph{$(<\alpha)$-stable} if it is $(<\alpha)$-stable in unboundedly many cardinals.
Define similarly \emph{syntactically stable} for syntactic types (in this paper the quantifier-free $\mathbb{L}_{\kappa, \kappa}$-types, where $\kappa$ is clear from context). \end{defin}
The next fact spells out the connection between stability for types of different lengths and tameness.
\begin{fact}\label{stab-facts}
Let $K$ be an AEC and let $\mu \ge \text{LS} (K)$.
\begin{enumerate}
\item \label{stab-facts-tb} \cite[Theorem 3.1]{longtypes-toappear-v2}: If $K$ is stable in $\mu$, $K_\mu$ has amalgamation, and $\mu^\alpha = \mu$, then $K$ is $\alpha$-stable in $\mu$.
\item \label{stab-facts-tame} \cite[Corollary 6.4]{tamenessone}\footnote{The result we want can easily be seen to follow from the proof there: see \cite[Theorem 12.10]{baldwinbook09}.}: If $K$ has amalgamation, is $\mu$-tame, and stable in $\mu$, then $K$ is stable in all $\lambda$ such that $\lambda^\mu = \lambda$.
\item \label{stab-length-equiv} If $K$ has amalgamation, is $\mu$-tame, and is stable in $\mu$, then $K$ is $\alpha$-stable (in unboundedly many cardinals), for all cardinals $\alpha$.
\end{enumerate} \end{fact} \begin{proof}[Proof of (\ref{stab-length-equiv})]
Given cardinals $\lambda_0 \ge \text{LS} (K)$ and $\alpha$, let $\lambda := \left(\lambda_0\right)^{\alpha + \mu}$. Then $\lambda^{\mu} = \lambda^{\alpha} = \lambda$, so (\ref{stab-facts-tame}) gives that $K$ is stable in $\lambda$, and (\ref{stab-facts-tb}) then gives that $K$ is $\alpha$-stable in $\lambda$. \end{proof}
Finally, we review the natural definition of saturation using Galois types. Note that we again give the local definitions (but they are equivalent to the usual ones assuming amalgamation).
\begin{defin}
Let $K$ be an abstract class, $M \in K$ and $\mu$ be an infinite cardinal.
\begin{enumerate}
\item For $N \ge M$, $M$ is \emph{$\mu$-saturated\footnote{Pedantically, we should really say ``Galois-saturated'' to differentiate this from being syntactically saturated. In this paper, we will only discuss Galois saturation.} in $N$} if for any $A \subseteq |M|$ of size less than $\mu$, any $p \in \text{gS}^{<\mu} (A; N)$ is realized in $M$.
\item $M$ is \emph{$\mu$-saturated} if it is $\mu$-saturated in $N$ for all $N \ge M$. When $\mu = \|M\|$, we omit it.
\item We write $\Ksatp{\mu}$ for the class of $\mu$-saturated models of $K_{\ge \mu}$ (ordered by the ordering of $K$).
\end{enumerate} \end{defin} \begin{remark}\label{sat-rmk} \
\begin{enumerate}
\item We defined saturation also when $\mu \le \text{LS} (K)$. This is why we look at types over sets and not only over models. In an AEC, when $\mu > \text{LS} (K)$, this is equivalent to the usual definition (see also the remark before Definition \ref{stab-def}).
\item We could similarly define what it means for a \emph{set} to be saturated in a model (this is useful in \cite{bv-sat-v3}).
\item It is easy to check that if $K$ is an AEC with amalgamation and $\mu > \text{LS} (K)$, then $\Ksatp{\mu}$ is a $\plus{\mu}$-AEC (recall Definitions \ref{kappa-r-def} and \ref{mu-aec-def}) with $\text{LS} (\Ksatp{\mu}) \le \text{LS} (K)^{<\plus{\mu}}$.
\end{enumerate} \end{remark}
\section{The semantic-syntactic correspondence}\label{sec-foundations}
\subsection{Functorial expansions and the Galois Morleyization}
\begin{defin}\label{expansion-def}
Let $K$ be an abstract class. A \emph{functorial expansion} of $K$ is a class $\widehat{K}$ satisfying the following properties:
\begin{enumerate}
\item $\widehat{K}$ is a class of $\widehat{\tau}$-structures, where $\widehat{\tau}$ is a fixed (possibly infinitary) vocabulary extending $\tau (K)$.
\item The map $\widehat{M} \mapsto \widehat{M} \upharpoonright \tau (K)$ is a bijection from $\widehat{K}$ onto $K$. For $M \in K$, we will write $\widehat{M}$ for the unique element of $\widehat{K}$ whose reduct is $M$. When we write ``$\widehat{M} \in \widehat{K}$'', it is understood that $M = \widehat{M} \upharpoonright \tau (K)$.
\item Invariance: For $M, N \in K$, if $f: M \cong N$, then $f: \widehat{M} \cong \widehat{N}$.
\item Monotonicity: If $M \le N$ are in $K$, then $\widehat{M} \subseteq \widehat{N}$.
\end{enumerate}
We say a functorial expansion $\widehat{K}$ is \emph{$(<\kappa)$-ary} if $\tau(\widehat{K})$ is $(<\kappa)$-ary. \end{defin}
\begin{example}\label{morleyization-example} \
\begin{enumerate}
\item For $K$ an abstract class, $K$ is a functorial expansion of $K$ itself. This is because $\le$ must extend $\subseteq$.
\item Let $K$ be an abstract class with $\tau := \tau (K)$ and let $\kappa$ be an infinite cardinal. Add a $(<\kappa)$-ary predicate $P$ to $\tau$, forming a vocabulary $\widehat{\tau}$. Expand each $M \in K$ to a $\widehat{\tau}$-structure by defining $P^{\widehat{M}} (\bar{a})$ (where $P^{\widehat{M}}$ is the interpretation of $P$ inside $\widehat{M}$) to hold if and only if $\bar{a}$ enumerates the universe of a $\le$-submodel of $M$ (this is more or less what Shelah does in \cite[Definition IV.1.9.1]{shelahaecbook}). Then the resulting class $\widehat{K}$ is a functorial expansion of $K$.
\item Let $T$ be a complete first-order theory in a vocabulary $\tau$. Let $K := (\text{Mod} (T), \preceq)$. It is common to expand $\tau$ to $\widehat{\tau}$ by adding a relation symbol for every first-order $\tau$-formula. We then expand $T$ (to $\widehat{T}$) and every model $M$ of $T$ in the expected way (to some $\widehat{M}$) and obtain a new theory in which every formula is equivalent to an atomic one (this is commonly called the \emph{Morleyization} of the theory). Then $\widehat{K} := \text{Mod} (\widehat{T})$ is a functorial expansion of $K$.
\item Let $T$ be a first-order complete theory. Expanding each model $M$ of $T$ to its canonical model $M^{\text{eq}}$ of $T^{\text{eq}}$ (see \cite[III.6]{shelahfobook}) also describes a functorial expansion.
\item The canonical structures of \cite{cherlin-harrington-lachlan} also induce a functorial expansion.
\end{enumerate} \end{example}
The main example of functorial expansion used in this paper is the \emph{Galois Morleyization}:
\begin{defin}\label{def-galois-m}
Let $K$ be an abstract class and let $\kappa$ be an infinite cardinal. Define an expansion $\widehat{\tau}$ of $\tau (K)$ by adding a relation symbol $R_p$ of arity $\ell (p)$ for each $p \in \text{gS}^{<\kappa} (\emptyset)$. Expand each $N \in K$ to a $\widehat{\tau}$-structure $\widehat{N}$ by specifying that for each $\bar{a} \in \fct{<\kappa}{|\widehat{N}|}$, $R_p^{\widehat{N}} (\bar{a})$ (where $R_p^{\widehat{N}}$ is the interpretation of $R_p$ inside $\widehat{N}$) holds exactly when $\text{gtp} (\bar{a} / \emptyset; N) = p$. We call $\widehat{K}$ the \emph{$(<\kappa)$-Galois Morleyization} of $K$. \end{defin} \begin{remark}\label{bigk-size}
Let $K$ be an AEC and $\kappa$ be an infinite cardinal. Let $\widehat{K}$ be the $(<\kappa)$-Galois Morleyization of $K$. Then $|\tau (\widehat{K})| \le |\text{gS}^{<\kappa} (\emptyset)| + |\tau| \le 2^{<(\kappa + \text{LS} (K)^+)}$. \end{remark}
It is straightforward to check that the Galois Morleyization is a functorial expansion. We include a proof here for completeness.
\begin{prop}
Let $K$ be an abstract class and let $\kappa$ be an infinite cardinal. Let $\widehat{K}$ be the $(<\kappa)$-Galois Morleyization of $K$. Then $\widehat{K}$ is a functorial expansion of $K$. \end{prop} \begin{proof}
Let $\tau := \tau (K)$ be the vocabulary of $K$. Looking at Definition \ref{expansion-def}, there are four properties to check:
\begin{enumerate}
\item By definition of the Galois Morleyization, $\widehat{K}$ is a class of $\widehat{\tau}$-structure, for a fixed vocabulary $\widehat{\tau}$.
\item The map $\widehat{M} \mapsto \widehat{M} \upharpoonright \tau$ is a bijection: It is a surjection by definition of the Galois Morleyization. It is an injection: Assume that $M' := \widehat{M} \upharpoonright \tau = \widehat{N} \upharpoonright \tau$ but $\widehat{M} \neq \widehat{N}$. Then there must exist a $p \in \text{gS} (\emptyset)$ and an $\bar{a} \in \fct{<\kappa}{|M'|}$ such that (say) $\widehat{M} \models R_p (\bar{a})$ but $\widehat{N} \not\models R_p (\bar{a})$. By definition of the Galois Morleyization, the former means that $\text{gtp} (\bar{a} / \emptyset; M') = p$ and the latter that $\text{gtp} (\bar{a} / \emptyset; M') \neq p$. Thus $p \neq p$, a contradiction.
\item Let $M, N \in K$ and $f: M \cong N$. We have to see that $f: \widehat{M} \cong \widehat{N}$. Let $p \in \text{gS} (\emptyset)$ and let $\bar{a} \in \fct{<\kappa}{|M|}$. Assume that $\widehat{M} \models R_p (\bar{a})$. Then by definition $p = \text{gtp} (\bar{a} / \emptyset; M)$. Therefore by Proposition \ref{galois-types-basic}.(\ref{basic-1}), $p = \text{gtp} (f (\bar{a}) / \emptyset; N)$. Hence $\widehat{N} \models R_p (f (\bar{a}))$. The steps can be reversed to obtain the converse.
\item Let $M \le N$ be in $K$. We want to see that $\widehat{M} \subseteq \widehat{N}$. So let $p \in \text{gS} (\emptyset)$, $\bar{a} \in \fct{<\kappa}{|M|}$. Assume first that $\widehat{M} \models R_p (\bar{a})$. Then $p = \text{gtp} (\bar{a} / \emptyset; M)$. Therefore by Proposition \ref{galois-types-basic}.(\ref{basic-2}), $p = \text{gtp} (\bar{a} / \emptyset; N)$. Therefore $\widehat{N} \models R_p (\bar{a})$. The steps can be reversed to obtain the converse.
\end{enumerate} \end{proof}
Note that a functorial expansion can naturally be seen as an abstract class:
\begin{defin}
Let $(K, \le)$ be an abstract class and let $\widehat{K}$ be a functorial expansion of $K$. Define an ordering $\widehat{\le}$ on $\widehat{K}$ by $\widehat{M} \widehat{\le} \widehat{N}$ if and only if $M \le N$. \end{defin} \begin{remark}
For simplicity, we will abuse notation and write $(\widehat{K}, \le)$ rather than $(\widehat{K}, \widehat{\le})$. As usual, when the ordering is clear from context we omit it. \end{remark}
The next propositions are easy but conceptually interesting.
\begin{prop}\label{expansion-monot}
Let $(K, \le)$ be an abstract class with $\tau := \tau (K)$. Let $\widehat{K}$ be a functorial expansion of $K$ and let $\widehat{\tau} := \tau (\widehat{K})$.
\begin{enumerate}
\item $(\widehat{K}, \le)$ is an abstract class.
\item If every chain in $K$ has an upper bound, then every chain in $\widehat{K}$ has an upper bound.
\item Galois types are the same in $K$ and $\widehat{K}$: $\text{gtp}_{K} (\bar{a}_1 / A; N_1) = \text{gtp}_{K} (\bar{a}_2 / A; N_2)$ if and only if $\text{gtp}_{\widehat{K}} (\bar{a}_1 / A; \widehat{N}_1) = \text{gtp}_{\widehat{K}} (\bar{a}_2 / A; \widehat{N}_2)$.
\item Assume $K$ is a $\mu$-AEC and $\widehat{K}$ is a $(<\mu)$-ary functorial expansion of $K$. Then $(\widehat{K}, \le)$ is a $\mu$-AEC with $\text{LS} (\widehat{K}) = \text{LS} (K) + |\widehat{\tau}|^{<\mu}$.
\item Let $\tau \subseteq \widehat{\tau}' \subseteq \widehat{\tau}$. Then $\widehat{K} \upharpoonright \widehat{\tau}' := \{\widehat{M} \upharpoonright \widehat{\tau}' \mid \widehat{M} \in \widehat{K}\}$ is a functorial expansion of $K$.
\item If $\widehat{\widehat{K}}$ is a functorial expansion\footnote{Where of course we think of $\widehat{K}$ as an abstract class with the ordering induced from $K$.} of $\widehat{K}$, then $\widehat{\widehat{K}}$ is a functorial expansion of $K$.
\end{enumerate} \end{prop} \begin{proof}
All are straightforward. As an example, we show that if $K$ is a $\mu$-AEC, $\widehat{K}$ is a $(<\mu)$-ary functorial expansion of $K$, and $\seq{\widehat{M}_i : i \in I}$ is a $\mu$-directed system in $\widehat{K}$, then letting $M := \bigcup_{i \in I} M_i$, we have that $\bigcup_{i \in I} \widehat{M}_i = \widehat{M}$ (so in particular $\bigcup_{i \in I} \widehat{M}_i \in \widehat{K}$). Let $R$ be a relation symbol in $\widehat{\tau}$ of arity $\alpha$. Let $\bar{a} \in \fct{\alpha}{|\widehat{M}|}$. Assume $\widehat{M} \models R[\bar{a}]$. We show $\bigcup_{i \in I} \widehat{M}_i \models R[\bar{a}]$. The converse is done by replacing $R$ by $\neg R$, and the proof with function symbols is similar. Since $\widehat{\tau}$ is $(<\mu)$-ary, $\alpha < \mu$. Since $I$ is $\mu$-directed, $\bar{a} \in \fct{\alpha}{|M_j|}$ for some $j \in I$. Since $M_j \le M$, the monotonicity axiom implies $\widehat{M}_j \subseteq \widehat{M}$. Thus $\widehat{M}_j \models R[\bar{a}]$, and this holds for all $j' \ge j$. Thus by definition of the union, $\bigcup_{i \in I} \widehat{M}_i \models R[\bar{a}]$. \end{proof} \begin{remark}
A word of warning: if $K$ is an AEC and $\widehat{K}$ is a functorial expansion of $K$, then $K$ and $\widehat{K}$ are isomorphic as categories. In particular, any directed system in $\widehat{K}$ has a colimit. However, if $\tau (\widehat{K})$ is not finitary the colimit of a directed system in $\widehat{K}$ may \emph{not} be the union: relations may need to contain more elements. \end{remark}
\subsection{Formulas and syntactic types}
From now on until the end of the section, we assume:
\begin{hypothesis}\label{ftypes-hyp}
$K$ is an abstract class with $\tau := \tau (K)$, $\kappa$ is an infinite cardinal, $\widehat{K}$ is an arbitrary $(<\kappa)$-ary functorial expansion of $K$ with vocabulary $\widehat{\tau} := \tau (\widehat{K})$. \end{hypothesis}
At the end of this section, we will specialize to the case when $\widehat{K}$ is the $(<\kappa)$-Galois Morleyization of $K$. Recall from Section \ref{syntax-subsec} that $\operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\widehat{\tau})$ denotes the set of quantifier-free $\mathbb{L}_{\kappa, \kappa} (\widehat{\tau})$-formulas.
\begin{prop}\label{inv-lem}
Let $\phi (\bar{x})$ be a quantifier-free $\mathbb{L}_{\kappa, \kappa} (\widehat{\tau})$ formula, $M \in K$, and $\bar{a} \in M$. If $f: M \rightarrow N$, then $\widehat{M} \models \phi[\bar{a}]$ if and only if $\widehat{N} \models \phi[f (\bar{a})]$. \end{prop} \begin{proof}
Directly from the invariance and monotonicity properties of functorial expansions. \end{proof}
In general, Galois types (computed in $K$) and syntactic types (computed in $\widehat{K}$) are different. However, Galois types are always at least as fine as quantifier-free syntactic types (this is a direct consequence of Proposition \ref{inv-lem} but we include a proof for completeness).
\begin{lem}\label{gtp-map}
Let $N_1, N_2 \in K$, $A \subseteq |N_\ell|$ for $\ell = 1,2$. Let $\bar{b}_\ell \in N_\ell$. If $\text{gtp} (\bar{b}_1 / A; N_1) = \text{gtp} (\bar{b}_2 / A; N_2)$, then $\tp_{\qfl} (\bar{b}_1 / A; \widehat{N}_1) = \tp_{\qfl} (\bar{b}_2 / A; \widehat{N}_2)$. \end{lem} \begin{proof}
By transitivity of equality, it is enough to show that if $(\bar{b}_1, A, N_1) E_{\text{at}} (\bar{b}_2, A, N_2)$, then $\tp_{\qfl} (\bar{b}_1 / A; \widehat{N}_1) = \tp_{\qfl} (\bar{b}_2 / A; \widehat{N}_2)$. So assume $(\bar{b}_1, A, N_1) E_{\text{at}} (\bar{b}_2, A, N_2)$. Then there exists $N \in K$ and $f_\ell : N_\ell \xrightarrow[A]{} N$ such that $f_1 (\bar{b}_1) = f_2 (\bar{b}_2)$. Let $\phi (\bar{x})$ be a quantifier-free $\mathbb{L}_{\kappa,\kappa} (\widehat{\tau})$ formula over $A$. Assume $\widehat{N}_1 \models \phi[\bar{b}_1]$. By Proposition \ref{inv-lem}, $\widehat{N} \models \phi[f_1 (\bar{b}_1)]$, so $\widehat{N} \models \phi[f_2 (\bar{b}_2)]$, so by Proposition \ref{inv-lem} again, $\widehat{N}_2 \models \phi[\bar{b}_2]$. Replacing $\phi$ by $\neg \phi$, we get the converse, so $\tp_{\qfl} (\bar{b}_1 / A; \widehat{N}_1) = \tp_{\qfl} (\bar{b}_2 / A; \widehat{N}_2)$. \end{proof}
Note that this used that the types were quantifier-free. We have justified the following definition:
\begin{defin}
For a Galois type $p$, let $p^s$ be the corresponding quantifier-free syntactic type \emph{in the functorial expansion}. That is, if $p = \text{gtp} (\bar{b} / A; N)$, then $p^s := \tp_{\qfl} (\bar{b} / A; \widehat{N})$. \end{defin}
\begin{prop}\label{gtp-ftp-map}
Let $N \in K$, $A \subseteq |N|$. Let $\alpha$ be an ordinal. The map $p \mapsto p^s$ from $\text{gS}^\alpha (A; N)$ to $\text{S}_{\operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\bigtau)}^\alpha (A; \widehat{N})$ (recall Definition \ref{stone-def}) is a surjection. \end{prop} \begin{proof}
If $\tp_{\qfl} (\bar{b} / A; \widehat{N}) = q \in \text{S}_{\operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\bigtau)}^\alpha (A; \widehat{N})$, then by definition $\left(\text{gtp} (\bar{b} / A; N)\right)^s = q$. \end{proof}
\begin{remark}
To investigate formulas with quantifiers, we could define a different version of Galois types using isomorphisms rather than embeddings, and remove the monotonicity axiom from the definition of a functorial expansion. As we have no use for it here, we do not discuss this approach further. \end{remark}
\subsection{On when Galois types are syntactic}
We have seen in Proposition \ref{gtp-ftp-map} that $p \mapsto p^s$ is a surjection, so Galois types are always at least as fine as quantifier-free syntactic types in the expansion. It is natural to ask when they are the same, i.e.\ when $p \mapsto p^s$ is a \emph{bijection}. When $\widehat{K}$ is the $(<\kappa)$-Galois Morleyization of $K$ (see Definition \ref{def-galois-m}), we answer this using shortness and tameness (Definition \ref{shortness-def}). Note that we make no hypothesis on $K$. In particular, amalgamation is not needed.
\begin{thm}[The semantic-syntactic correspondence]\label{separation}
Assume $\widehat{K}$ is the $(<\kappa)$-Galois Morleyization of $K$.
Let $\Gamma$ be a family of Galois types. The following are equivalent:
\begin{enumerate}
\item\label{separation-1} $K$ is $(<\kappa)$-tame and short for $\Gamma$.
\item\label{separation-2} The map $p \mapsto p^s$ is a bijection from $\Gamma$ onto $\Gamma^s := \{p^s \mid p \in \Gamma\}$.
\end{enumerate} \end{thm} \begin{proof} \
\begin{itemize}
\item \underline{(\ref{separation-1}) implies (\ref{separation-2}):}
By Lemma \ref{gtp-map}, the map $p \mapsto p^s$ with domain $\Gamma$ is well-defined and it is clearly a surjection onto $\Gamma^s$. It remains to see it is injective. Let $p, q \in \Gamma$ be distinct. If they do not have the same domain or the same length, then $p^s \neq q^s$, so assume that $A := \dom{p} = \dom{q}$ and $\alpha := \ell (p) = \ell (q)$. Say $p = \text{gtp} (\bar{b} / A; N)$, $q = \text{gtp} (\bar{b}' / A; N')$. By the tameness and shortness hypotheses, there exist $A_0 \subseteq A$ and $I \subseteq \alpha$ of size less than $\kappa$ such that $p_0 := p^I \upharpoonright A_0 \neq q^I \upharpoonright A_0 =: q_0$. Let $\bar{a}_0$ be an enumeration of $A_0$, and let $\bar{b}_0 := \bar{b} \upharpoonright I$, $\bar{b}_0' := \bar{b}' \upharpoonright I$. Let $p_0' := \text{gtp} (\bar{b}_0 \bar{a}_0 / \emptyset; N)$, and let $\phi := R_{p_0'} (\bar{x}_0, \bar{a}_0)$, where $\bar{x}_0$ is the sequence of variables indexed by $I$. Since $\bar{b}_0$ realizes $p_0$ in $N$, $\widehat{N} \models \phi[\bar{b}_0]$, and since $\bar{b}_0'$ realizes $q_0$ in $N'$ and $q_0 \neq p_0$, $\widehat{N'} \models \neg \phi[\bar{b}_0']$. Thus $\phi (\bar{x}_0) \in p^s$ and $\neg \phi (\bar{x}_0) \in q^s$. By definition, $\phi (\bar{x}_0) \notin q^s$, so $p^s \neq q^s$.
\item \underline{(\ref{separation-2}) implies (\ref{separation-1}):} Let $p, q \in \Gamma$ be distinct with domain $A$ and length $\alpha$. Say $p = \text{gtp} (\bar{b} / A; N)$, $q = \text{gtp} (\bar{b}' / A; N')$. By hypothesis, $p^s \neq q^s$, so there exists $\phi (\bar{x})$ over $A$ such that (without loss of generality) $\phi (\bar{x}) \in p^s$ but $\neg \phi (\bar{x}) \in q^s$. Let $A_0 := \dom{\phi}$ and $\bar{x}_0 := \FV{\phi}$, and let $I \subseteq \alpha$ be the set of indices of the variables in $\bar{x}_0$ (note that $A_0$ and $I$ have size strictly less than $\kappa$). Let $\bar{b}_0 := \bar{b} \upharpoonright I$, $\bar{b}_0' := \bar{b}' \upharpoonright I$. Let $p_0 := \text{gtp} (\bar{b}_0 / A_0; N)$, $q_0 := \text{gtp} (\bar{b}_0' / A_0; N')$. Then it is straightforward to check that $\phi \in p_0^s$, $\neg \phi \in q_0^s$, so $p_0^s \neq q_0^s$ and hence (by Lemma \ref{gtp-map}) $p_0 \neq q_0$. Thus $A_0$ and $I$ witness tameness and shortness respectively.
\end{itemize} \end{proof} \begin{remark}
The proof shows that (\ref{separation-2}) implies (\ref{separation-1}) is valid when $\widehat{K}$ is any functorial expansion of $K$. \end{remark}
\begin{cor}\label{types-formula} \
Assume $\widehat{K}$ is the $(<\kappa)$-Galois Morleyization of $K$.
\begin{enumerate}
\item $K$ is fully $(<\kappa)$-tame and short if and only if for any $M \in K$ the map $p \mapsto p^s$ from $\text{gS}^{<\infty} (M)$ to $\text{S}_{\operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\bigtau)}^{<\infty} (M)$ is a bijection\footnote{We have set $\text{S}_{\operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\bigtau)}^{<\infty} (M) := \bigcup_{N \ge M} \text{S}_{\operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\bigtau)}^{<\infty} (M; \widehat{N})$. Similarly define $\text{S}_{\operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\bigtau)} (M)$.}.
\item $K$ is $(<\kappa)$-tame if and only if for any $M \in K$ the map $p \mapsto p^s$ from $\text{gS} (M)$ to $\text{S}_{\operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\bigtau)} (M)$ is a bijection.
\end{enumerate} \end{cor} \begin{proof}
By Theorem \ref{separation} applied to $\Gamma := \bigcup_{M \in K} \text{gS}^{<\infty} (M)$ and $\Gamma := \bigcup_{M \in K} \text{gS} (M)$ respectively. \end{proof} \begin{remark}
For $M \in K$, $p, q \in \text{gS} (M)$, say $pE_{<\kappa}q$ if and only if $p \upharpoonright A_0 = q \upharpoonright A_0$ for all $A_0 \subseteq |M|$ with $|A_0| < \kappa$. Of course, if $K$ is $(<\kappa)$-tame, then $E_{<\kappa}$ is just equality. More generally, the proof of Theorem \ref{separation} shows that if $\widehat{K}$ is the $(<\kappa)$-Galois Morleyization of $K$, then $p E_{<\kappa} q$ if and only if $p^s = q^s$. Thus in that case quantifier-free syntactic types in the Morleyization can be seen as $E_{<\kappa}$-equivalence classes of Galois types. Note that $E_{<\kappa}$ appears in the work of Shelah, see for example \cite[Definition 1.8]{sh394}. \end{remark}
\section{Order properties and stability spectrum}\label{thy-indep}
In this section, we start applying the semantic-syntactic correspondence (Theorem \ref{separation}) to prove new structural results about AECs. In the introduction, we described a three-step general method to prove a result about AECs using syntactic methods. In the proof of Theorem \ref{stab-spectrum}, Corollary \ref{syn-stab-spectrum} gives the first step, Theorem \ref{separation} gives the second, while Facts \ref{shelah-hanf} (AECs have a Hanf number for the order property) and \ref{stab-facts} (In tame AECs with amalgamation, stability behaves reasonably well) are keys for the third step.
Throughout this section, we work with the $(<\kappa)$-Galois Morleyization of a fixed AEC $K$:
\begin{hypothesis}\label{morley-hyp} \
\begin{enumerate}
\item $K$ is an abstract elementary class.
\item $\kappa$ is an infinite cardinal.
\item $\widehat{K}$ is the $(<\kappa)$-Galois Morleyization of $K$ (recall Definition \ref{def-galois-m}). Set $\widehat{\tau} := \tau (\widehat{K})$.
\end{enumerate} \end{hypothesis}
\subsection{Several order properties}
The next definition is a natural syntactic extension of the first-order order property. A related definition appears already in \cite{sh16} and has been well studied (see for example \cite{grsh222, grsh259}).
\begin{defin}[Syntactic order property]\label{syntactic-op}
Let $\alpha$ and $\mu$ be cardinals with $\alpha < \kappa$. A model $\widehat{M} \in \widehat{K}$ has the \emph{syntactic $\alpha$-order property of length $\mu$} if there exists $\seq{\bar{a}_i : i < \mu}$ inside $\widehat{M}$ with $\ell (\bar{a}_i) = \alpha$ for all $i < \mu$ and a \emph{quantifier-free} $\mathbb{L}_{\kappa, \kappa} (\widehat{\tau})$-formula $\phi (\bar{x}, \bar{y})$ such that for all $i, j < \mu$, $\widehat{M} \models \phi[\bar{a}_i, \bar{a}_j]$ if and only if $i < j$.
Let $\beta \le \kappa$ be a cardinal. $\widehat{M}$ has the \emph{syntactic $(<\beta)$-order property of length $\mu$} if it has the syntactic $\alpha$-order property of length $\mu$ for some $\alpha < \beta$. $\widehat{M}$ has the \emph{syntactic order property of length $\mu$} if it has the syntactic $(<\kappa)$-order property of length $\mu$.
\emph{$\widehat{K}$ has the syntactic $\alpha$-order property of length $\mu$} if some $\widehat{M} \in \widehat{K}$ has it. \emph{$\widehat{K}$ has the syntactic order property} if it has the syntactic order property for every length. \end{defin}
We emphasize that the syntactic order property is always considered inside the Galois Morleyization $\widehat{K}$ and must be witnessed by a \emph{quantifier-free} formula. Also, since any such formula has fewer than $\kappa$ free variables, nothing would be gained by defining the syntactic $\alpha$-order property for $\alpha \ge \kappa$. Thus we talk of the syntactic order property instead of the syntactic $(<\kappa)$-order property.
Arguably the most natural semantic definition of the order property in AECs appears in \cite[Definition 4.3]{sh394}. For simplicity, we have removed one parameter from the definition.
\begin{defin}
Let $\alpha$ and $\mu$ be cardinals. A model $M \in K$ has the \emph{Galois $\alpha$-order property of length $\mu$} if there exists $\seq{\bar{a}_i : i < \mu}$ inside $M$ with $\ell (\bar{a}_i) = \alpha$ for all $i < \mu$, such that for any $i_0 < j_0 < \mu$ and $i_1 < j_1 < \mu$, $\text{gtp} (\bar{a}_{i_0} \bar{a}_{j_0} / \emptyset; N) \neq \text{gtp} (\bar{a}_{j_1} \bar{a}_{i_1} / \emptyset; N)$.
We usually drop the ``Galois'' and define variations such as ``$K$ has the $\alpha$-order property'' as in Definition \ref{syntactic-op}. \end{defin}
Notice that the definition of the Galois $\alpha$-order property is more general than that of the syntactic $\alpha$-order property, since $\alpha$ is not required to be less than $\kappa$. However the next result shows that the two properties are equivalent when $\alpha < \kappa$. Notice that this does not use any tameness.
\begin{prop}\label{op-equiv}
Let $\alpha$, $\mu$, and $\lambda$ be cardinals with $\alpha < \kappa$. Let $N \in K$.
\begin{enumerate}
\item If $\widehat{N}$ has the syntactic $\alpha$-order property of length $\mu$, then $N$ has the $\alpha$-order property of length $\mu$.
\item Conversely, let $\chi := |\text{gS}^{\alpha + \alpha} (\emptyset)|$, and assume that $\mu \ge \left(2^{\lambda + \chi}\right)^+$. If $N$ has the $\alpha$-order property of length $\mu$, then $\widehat{N}$ has the syntactic $\alpha$-order property of length $\lambda$.
\end{enumerate}
In particular, $K$ has the $\alpha$-order property if and only if $\widehat{K}$ has the syntactic $\alpha$-order property. \end{prop}
\begin{proof} \
\begin{enumerate}
\item This is a straightforward consequence of Proposition \ref{inv-lem}\footnote{We are using that everything in sight is quantifier-free. Note that this part works for any functorial expansion $\widehat{K}$ of $K$.}.
\item Let $\seq{\bar{a}_i : i < \mu}$ witness that $N$ has the Galois $\alpha$-order property of length $\mu$. By the Erdős-Rado theorem used on the coloring $(i < j) \mapsto \text{gtp} (\bar{a}_i \bar{a}_j / \emptyset; N)$, we get that (without loss of generality), $\seq{\bar{a}_i : i < \lambda}$ is such that whenever $i < j$, $\text{gtp} (\bar{a}_i \bar{a}_j / \emptyset; N) = p \in \text{gS}^{\alpha + \alpha} (\emptyset)$. But then (since by assumption $\text{gtp} (\bar{a}_i \bar{a}_j / \emptyset; N) \neq \text{gtp} (\bar{a}_j \bar{a}_i / \emptyset; N)$), $\phi (\bar{x}, \bar{y}) := R_p (\bar{x}, \bar{y})$ witnesses that $\widehat{N}$ has the syntactic $\alpha$-order property of length $\lambda$.
\end{enumerate} \end{proof}
We will see later (Theorem \ref{stab-spectrum}) that assuming some tameness, \emph{even when $\alpha \ge \kappa$}, the $\alpha$-order property implies the syntactic order property.
In the next section, we heavily use the assumption of no syntactic order property \emph{of length $\kappa$}. We now look at how that assumption compares to the order property (of arbitrarily long length). Note that Proposition \ref{op-equiv} already tells us that the $(<\kappa)$-order property implies the syntactic order property of length $\kappa$. To get an equivalence, we will assume $\kappa$ is a fixed point of the Beth function. The key is:
\begin{fact}\label{shelah-hanf}
Let $\alpha$ be a cardinal. If $K$ has the $\alpha$-order property of length $\mu$ for all $\mu < \hanf{\alpha + \text{LS} (K)}$, then $K$ has the $\alpha$-order property. \end{fact} \begin{proof} By the same proof as \cite[Claim 4.5.3]{sh394}. \end{proof} \begin{cor}\label{hanf-limit-op} \
Assume $\beth_\kappa = \kappa > \text{LS} (K)$. Then $\widehat{K}$ has the syntactic order property of length $\kappa$ if and only if $K$ has the $(<\kappa)$-order property. \end{cor} \begin{proof}
If $\widehat{K}$ has the syntactic order property of length $\kappa$, then for some $\alpha < \kappa$, $\widehat{K}$ has the syntactic $\alpha$-order property of length $\kappa$, and thus by Proposition \ref{op-equiv} the $\alpha$-order property of length $\kappa$. Since $\kappa = \beth_\kappa$, $\hanf{|\alpha| + \text{LS} (K)} < \kappa$, so by Fact \ref{shelah-hanf}, $K$ has the $\alpha$-order property.
Conversely, if $K$ has the $(<\kappa)$-order property, Proposition \ref{op-equiv} implies that $\widehat{K}$ has the syntactic order property, so in particular the syntactic order property of length $\kappa$. \end{proof}
For completeness, we also discuss the following semantic variation of the syntactic order property of length $\kappa$ that appears in \cite[Definition 4.2]{bg-v9} (but is adapted from a previous definition of Shelah, see there for more background):
\begin{defin}\label{def-weak-op}
For $\kappa > \text{LS} (K)$, $K$ has the \emph{weak $\kappa$-order property} if there are $M \in K_{<\kappa}$, $N \ge M$, types $p \neq q \in \text{gS}^{<\kappa} (M)$, and sequences $\seq{\bar{a}_i : i < \kappa}$, $\seq{\bar{b}_i : i < \kappa}$ from $N$ so that for all $i, j < \kappa$:
\begin{enumerate}
\item $i \le j$ implies $\text{gtp} (\bar{a}_i \bar{b}_j / M; N) = p$.
\item $i > j$ implies $\text{gtp} (\bar{a}_i \bar{b}_j / M; N) = q$.
\end{enumerate} \end{defin}
\begin{lem}\label{weak-op-equiv}
Let $\kappa > \text{LS} (K)$.
\begin{enumerate}
\item If $K$ has the $(<\kappa)$-order property, then $K$ has the weak $\kappa$-order property.
\item If $K$ has the weak $\kappa$-order property, then $\widehat{K}$ has the syntactic order property of length $\kappa$.
\end{enumerate}
In particular, if $\kappa = \beth_\kappa$, then the weak $\kappa$-order property, the $(<\kappa)$-order property of length $\kappa$, and the $(<\kappa)$-order property are equivalent. \end{lem} \begin{proof} \
\begin{enumerate}
\item Assume $K$ has the $(<\kappa)$-order property. To see the weak order property, let $\alpha < \kappa$ be such that $K$ has the $\alpha$-order property. Fix $N \in K$ that has the $\alpha$-order property of sufficiently large length. Pick any $M \in K_{<\kappa}$ with $M \le N$. By using the Erdős-Rado theorem twice, we can assume we are given $\seq{\bar{c}_i : i < \kappa}$ such that whenever $i < j < \kappa$, $\text{gtp} (\bar{c}_i\bar{c}_j / M; N) = p$, and $\text{gtp} (\bar{c}_j \bar{c}_i / M; N) = q$, for some $p \neq q \in \text{gS}^{<\kappa} (M)$.
For $l < \kappa$, let $j_l := 2l$, and $k_l := 2l + 1$. Then $j_l, k_l < \kappa$, and $l \le l'$ implies $j_l < k_{l'}$, whereas $l > l'$ implies $j_l > k_{l'}$. Thus the sequences defined by $\bar{a}_l := \bar{c}_{j_l}$, $\bar{b}_l := \bar{c}_{k_l}$ are as required. \item Assume $K$ has the weak $\kappa$-order property and let $M, N, p, q, \seq{\bar{a}_i : i < \kappa}$, $\seq{\bar{b}_i : i < \kappa}$ witness it. For $i < \kappa$, Let $\bar{c}_i := \bar{a}_i \bar{b}_i$ and $\phi (\bar{x}_1 \bar{x}_2; \bar{y}_1 \bar{y}_2) := R_p (\bar{y}_1, \bar{x}_2)$. This witnesses the syntactic order property of length $\kappa$ in $\widehat{K}$.
\end{enumerate}
The last sentence follows from Proposition \ref{op-equiv} and Corollary \ref{hanf-limit-op}. \end{proof}
\subsection{Order property and stability}
We now want to relate stability in terms of the number of types (see Definition \ref{stab-def}) to the order property and use this to find many stability cardinals.
Note that stability in $K$ (in terms of Galois types, see Definition \ref{stab-def}) coincides with syntactic stability in $\widehat{K}$ given enough tameness and shortness (see Theorem \ref{separation}). In general, they could be different, but by Proposition \ref{inv-lem}, stability always implies syntactic stability (and so syntactic instability implies instability). This contrasts with the situation with the order properties, where the syntactic and Galois order properties are equivalent without tameness (see Proposition \ref{op-equiv}).
The basic relationship between the order property and stability is given by:
\begin{fact}\label{stab-facts-op}
If $K$ has the $\alpha$-order property and $\mu \ge |\alpha| + \text{LS} (K)$, then $K$ is not $\alpha$-stable in $\mu$. If in addition $\alpha < \kappa$, then $\widehat{K}$ is not even syntactically $\alpha$-stable in $\mu$. \end{fact} \begin{proof}
The first sentence is \cite[Claim 4.8.2]{sh394}. Its proof (see \cite[Fact 5.13]{bgkv-v2}) generalizes (using the syntactic order property) to give the second sentence. \end{proof}
This shows that the order property implies unstability and we now work towards a syntactic converse. The key is \cite[Theorem V.A.1.19]{shelahaecbook2}, which shows that if a model does not have the (syntactic) order property of a certain length, then it is (syntactically) stable in certain cardinals. Here, syntactic refers to Shelah's very general context, where any subset $\Delta$ of formulas from any abstract logic is allowed. Shelah assumes that the vocabulary is finitary but the proof goes through just as well with an infinitary vocabulary (the proof only deals with formulas, which \emph{are} allowed to be infinitary). Thus specializing the result to the context of this paper (working with the logic $\mathbb{L}_{\kappa, \kappa} (\widehat{\tau})$ and $\Delta = \operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\bigtau)$), we obtain:
\begin{fact}\label{syn-stab-spectrum-technical}
Let $\widehat{N} \in \widehat{K}$. Let $\alpha < \kappa$. Let $\chi \ge (|\widehat{\tau}| + 2)^{<\kappa}$ be a cardinal. If $\widehat{N}$ does not have the syntactic order property of length $\chi^+$, then whenever $\lambda = \lambda^{\chi} + \beth_2 (\chi)$, $\widehat{N}$ is (syntactically) $(<\kappa)$-stable in $\lambda$. \end{fact}
The next corollary does not need any amalgamation or tameness. Intuitively, this is because every property involved ends up being checked inside a single model (for example, $\widehat{K}$ syntactically stable in some cardinal means that all of its \emph{models} are syntactically stable in the cardinal).
\begin{cor}\label{syn-stab-spectrum}
The following are equivalent:
\begin{enumerate}
\item\label{sss-some-card} For every $\kappa_0 < \kappa$ and every $\alpha < \kappa$, $\widehat{K}$ is syntactically $\alpha$-stable in \emph{some} cardinal greater than or equal to $\text{LS} (K) + \kappa_0$.
\item\label{sss-op} $K$ does not have the $(<\kappa)$-order property.
\item\label{sss-spec} There exist\footnote{The cardinal $\mu$ is closely related to the local character cardinal $\bar \kappa$ for nonsplitting. See for example \cite[Theorem 4.13]{tamenessone}.} cardinals $\mu$ and $\lambda_0$ with $\mu \le \lambda_0 < \hanfs{\kappa + \text{LS} (K)^+}$ (recall Definition \ref{hanf-def}) such that $\widehat{K}$ is syntactically $(<\kappa)$-stable in any $\lambda \ge \lambda_0$ with $\lambda^{<\mu} = \lambda$. In particular, $\widehat{K}$ is syntactically $(<\kappa)$-stable.
\end{enumerate} \end{cor} \begin{proof}
(\ref{sss-spec}) says in particular that $\widehat{K}$ is syntactically $(<\kappa)$-stable in a proper class of cardinals, so it clearly implies (\ref{sss-some-card}). (\ref{sss-some-card}) implies (\ref{sss-op}): We prove the contrapositive. Assume that $K$ has the $(<\kappa)$-order property. In particular, $K$ has the $(<\kappa)$-order property of length $\hanf{\kappa + \text{LS} (K)}$. By definition, this means that for some $\alpha < \kappa$, $K$ has the $\alpha$-order property of length $\hanf{\kappa + \text{LS} (K)}$. By Fact \ref{shelah-hanf}, $K$ has the $\alpha$-order property. By Fact \ref{stab-facts-op}, $\widehat{K}$ is not syntactically $\alpha$-stable in any cardinal above $\text{LS} (K) + |\alpha|$ (that is, for each $\lambda \ge \text{LS} (K) + |\alpha|$, there is $\widehat{N} \in \widehat{K}$ such that $\widehat{N}$ is not syntactically $\alpha$-stable in $\lambda$). Thus taking $\kappa_0 := |\alpha|$, we get that (\ref{sss-some-card}) fails.
Finally (\ref{sss-op}) implies (\ref{sss-spec}). Assume $K$ does not have the $(<\kappa)$-order property. By the contrapositive of Fact \ref{shelah-hanf}, for each $\alpha < \kappa$, there exists $\mu_\alpha < \hanf{|\alpha| + \text{LS} (K)} \le \hanfs{\kappa + \text{LS} (K)^+}$ such that $K$ does not have the $\alpha$-order property of length $\mu_\alpha$. Since $2^{<(\kappa + \text{LS} (K)^+)} < \hanfs{\kappa + \text{LS} (K)^+}$, we can without loss of generality assume that $2^{<(\kappa + \text{LS} (K)^+)} \le \mu_\alpha$ for all $\alpha < \kappa$. Let $\chi := \sup_{\alpha < \kappa} \mu_\alpha$. Then $K$ does not have the $(<\kappa)$-order property of length $\chi$. Now if $\kappa$ is a successor (say $\kappa = \kappa_0^+$), then $\chi = \mu_{\kappa_0} < \hanf{\kappa_0} \le \hanfs{\kappa + \text{LS} (K)^+}$. Otherwise $\hanfs{\kappa + \text{LS} (K)^+} = \hanf{\kappa + \text{LS} (K)}$ and $\cf{\hanf{\kappa + \text{LS} (K)}} = (2^{\kappa + \text{LS} (K)})^+ > \kappa$, so $\chi < \hanfs{\kappa + \text{LS} (K)^+}$. Let $\mu := \chi^+$ and $\lambda_0 := \beth_2 (\chi)$. It is easy to check that $\mu \le \lambda_0 < \hanfs{\kappa + \text{LS} (K)^+}$. Finally, note that by Remark \ref{bigk-size}, $|\widehat{\tau}| \le 2^{<(\kappa + \text{LS} (K)^+)}$, so $\chi \ge (|\widehat{\tau}| + 2)^{<\kappa}$. Now apply Fact \ref{syn-stab-spectrum-technical} to each $\widehat{N} \in \widehat{K}$ (note that by definition of $\lambda_0$, if $\lambda = \lambda^{\chi} \ge \lambda_0$, then $\lambda = \lambda^{\chi} + \beth_2 (\chi)$). \end{proof} \begin{remark}
Shelah \cite[Theorem 3.3]{sh932} claims (without proof) a version of (\ref{sss-some-card}) implies (\ref{sss-spec}). \end{remark}
Assuming $(<\kappa)$-tameness for types of length less than $\kappa$, we can of course convert the above result to a statement about Galois types. To replace ``$(<\kappa)$-stable'' by just ``stable'' (recall that this means stable for types of length one) and also get away with only tameness for types of length one, we will use amalgamation together with Fact \ref{stab-facts}.
\begin{thm}\label{stab-spectrum}
Assume $K$ has amalgamation and is $(<\kappa)$-tame. The following are equivalent:
\begin{enumerate}
\item\label{ss-some-card} $K$ is stable in \emph{some} cardinal greater than or equal to $\text{LS} (K) + \kappa^-$ (recall Definition \ref{kappa-r-def}).
\item\label{ss-op} $K$ does not have the order property.
\item\label{ss-op2} $K$ does not have the $(<\kappa)$-order property.
\item\label{ss-spec} There exist cardinals $\mu$ and $\lambda_0$ with $\mu \le \lambda_0 < \hanfs{\kappa + \text{LS} (K)^+}$ (recall Definition \ref{hanf-def}) such that $K$ is stable in any $\lambda \ge \lambda_0$ with $\lambda^{<\mu} = \lambda$.
\end{enumerate}
In particular, $K$ is stable if and only if $K$ does not have the order property. \end{thm} \begin{proof}
Clearly, (\ref{ss-spec}) implies (\ref{ss-some-card}) and (\ref{ss-op}) implies (\ref{ss-op2}). (\ref{ss-some-card}) implies (\ref{ss-op}): If $K$ has the $\alpha$-order property, then by Fact \ref{stab-facts-op} it cannot be $\alpha$-stable in any cardinal above $\text{LS} (K) + |\alpha|$. By Fact \ref{stab-facts}.(\ref{stab-length-equiv}), $K$ is not stable in any cardinal greater than or equal to $\kappa^- + \text{LS} (K)$, so (\ref{ss-some-card}) fails. Finally, (\ref{ss-op2}) implies (\ref{ss-spec}) by combining Corollary \ref{syn-stab-spectrum} and Corollary \ref{types-formula}. \end{proof} \begin{proof}[Proof of Theorem \ref{stab-spectrum-abstract}]
Set $\kappa := \text{LS} (K)^+$ in Theorem \ref{stab-spectrum}. Note that in that case $\kappa^- = \text{LS} (K)$ (Definition \ref{kappa-r-def}) and $\hanfs{\kappa + \text{LS} (K)^+} = \hanfs{\text{LS} (K)^+} = \hanf{\text{LS} (K)}$ by Definition \ref{hanf-def}. \end{proof}
\section{Coheir}\label{sec-coheir}
We look at the natural generalization of coheir (introduced in \cite{lascar-poizat} for first-order logic) to the context of this paper. A definition of coheir for classes of models of an $\mathbb{L}_{\kappa, \omega}$ theory was first introduced in \cite{makkaishelah} and later adapted to general AECs in \cite{bg-v9}. We give a slightly more conceptual definition here and show that coheir has several of the basic properties of forking in a stable first-order theory. This improves on \cite{bg-v9} which assumed that coheir had the extension property.
\begin{hypothesis}\label{coheir-hyp} \
\begin{enumerate}
\item $K^0$ is an AEC with amalgamation.
\item $\kappa > \text{LS} (K^0)$ is a fixed cardinal.
\item $K := \Ksatpp{\left(K^0\right)}{\kappa}$ is the class of $\kappa$-saturated models of $K^0$.
\item $\widehat{K}$ is the $(<\kappa)$-Galois Morleyization of $K$. Set $\widehat{\tau} := \tau (\widehat{K})$.
\end{enumerate} \end{hypothesis}
The reader can see $\widehat{K}$ as the class in which coheir is computed syntactically, while $K$ is the class in which it is used semantically.
For the sake of generality, we do not assume stability or tameness yet. We will do so in parts (\ref{coheir-2}) and (\ref{coheir-3}) of Theorem \ref{coheir-syn}, the main theorem of this section. After the proof of Theorem \ref{coheir-syn}, we give a proof of Theorem \ref{coheir-syn-ab} in the abstract.
Note that by Remark \ref{sat-rmk}, $K$ is a $\plus{\kappa}$-AEC (see Definition \ref{mu-aec-def}). Moreover by saturation the ordering has some elementarity. More precisely, let $\Sigma_1 (\mathbb{L}_{\kappa, \kappa} (\widehat{\tau}))$ denote the set of $\mathbb{L}_{\kappa, \kappa} (\widehat{\tau})$-formulas of the form $\exists \bar{x} \psi (\bar{x}; \bar{y})$, where $\psi$ is quantifier-free. We then have:
\begin{prop}\label{elem-prop}
If $M, N \in K$ and $M \le N$, then $\widehat{M} \preceq_{\Sigma_1 (\mathbb{L}_{\kappa, \kappa} (\widehat{\tau}))} \widehat{N}$. \end{prop} \begin{proof}
Assume that $\widehat{N} \models \exists \bar{x} \psi (\bar{x}; \bar{a})$, where $\bar{a} \in \fct{<\kappa}{|M|}$ and $\psi$ is a quantifier-free $\mathbb{L}_{\kappa, \kappa} (\widehat{\tau})$-formula. Let $A$ be the range of $\bar{a}$. Let $\bar{b} \in \fct{<\kappa}{|N|}$ be such that $\widehat{N} \models \psi[\bar{b}, \bar{a}]$. Since $M$ is $\kappa$-saturated, there exists $\bar{b}' \in \fct{<\kappa}{|M|}$ such that $\text{gtp} (\bar{b}' / A; M) = \text{gtp} (\bar{b} / A; N)$. Now it is easy to check using Lemma \ref{gtp-map} that $\widehat{M} \models \psi[\bar{b}'; \bar{a}]$. \end{proof}
Also note that if $\kappa$ is suitably chosen and $K^0$ is stable, then we have a strong failure of the order property in $\widehat{K}$:
\begin{prop}\label{syntactic-op-stable}
If $\kappa = \beth_\kappa$ and $K^0$ is stable (in unboundedly many cardinals, see Definition \ref{stab-def}), then $\widehat{K}$ does not have the syntactic order property of length $\kappa$. \end{prop} \begin{proof}
By Fact \ref{stab-facts}, $K^0$ is $(<\kappa)$-stable in unboundedly many cardinals. By Fact \ref{stab-facts-op}, $K^0$ does not have the $(<\kappa)$-order property.
Let $\widehat{K^0}$ be the $(<\kappa)$-Galois Morleyization of $K^0$. By Corollary \ref{hanf-limit-op}, $\widehat{K^0}$ does not have the syntactic order property of length $\kappa$.
Now note that Galois types are the same in $K$ and $K^0$: for $N \in K$, $A \subseteq |N|$, and $\bar{b}, \bar{b}' \in \fct{<\infty}{|N|}$, $\text{gtp}_{K^0} (\bar{b} / A; N) = \text{gtp}_{K^0} (\bar{b}' / A; N)$ if and only if $\text{gtp}_{K} (\bar{b} / A; N) = \text{gtp}_{K} (\bar{b}' / A; N)$\footnote{Recall that $\text{gtp}_K$ denotes Galois types as computed in $K$ and $\text{gtp}_{K^0}$ Galois types computed in $K^0$ (see Definition \ref{gtp-def}).}. To see this, use amalgamation together with the fact that every model in $K^0$ can be $\le$-extended to a model in $K$.
It follows that $\widehat{K} \subseteq \widehat{K^0}$. By definition of the syntactic order property, this means that also $\widehat{K}$ does not have the syntactic order property of length $\kappa$, as desired. \end{proof}
\begin{defin}\label{heir-coheir-def}
Let $\widehat{N} \in \widehat{K}$, $A \subseteq |\widehat{N}|$, and $p$ be a set of formulas (in some logic) over $\widehat{N}$.
\begin{enumerate}
\item $p$ is a \emph{$(<\kappa)$-heir over $A$} if for any formula $\phi (\bar{x}; \bar{b}) \in p$ over $A$, there exists $\bar{a} \in \fct{<\kappa}{A}$ such that $\phi (\bar{x}; \bar{a}) \in p \upharpoonright A$.
\item $p$ is a \emph{$(<\kappa)$-coheir over $A$ in $\widehat{N}$} if for any $\phi (\bar{x}) \in p$ there exists $\bar{a} \in \fct{<\kappa}{A}$ such that $\widehat{N} \models \phi[\bar{a}]$. When $\widehat{N}$ is clear from context, we drop it.
\end{enumerate} \end{defin} \begin{remark}
Here, $\kappa$ is fixed (Hypothesis \ref{coheir-hyp}), so we will just remove it from the notation and simply say that $p$ is a (co)heir over $A$. \end{remark} \begin{remark}
In this section, $p$ will be $\tp_{\qfl} (\bar{c} / B; \widehat{N})$ for a fixed $B$ such that $A \subseteq B \subseteq |\widehat{N}|$. \end{remark} \begin{remark}
Working in $\widehat{N} \in \widehat{K}$, let $\bar{c}$ be a permutation of $\bar{c}'$, and $A, B$ be sets. Then $\tp_{\qfl} (\bar{c} / B; \widehat{N})$ is a coheir over $A$ if and only if $\tp_{\qfl} (\bar{c}' / B; \widehat{N})$ is a coheir over $A$. Similarly for heir. Thus we can talk about $\tp_{\qfl} (C / B; \widehat{N})$ being an heir/coheir over $A$ without worrying about the enumeration of $C$. \end{remark}
We will mostly look at coheir, but the next proposition tells us how to express one in terms of the other.
\begin{prop} $\tp_{\qfl} (\bar{a} / A \bar{b}; \widehat{N})$ is an heir over $A$ if and only if $\tp_{\qfl} (\bar{b} / A \bar{a}; \widehat{N})$ is a coheir over $A$. \end{prop} \begin{proof}
Straightforward. \end{proof}
It is convenient to see coheir as an independence relation:
\begin{notation}
Write $\nfs{M}{A}{B}{N}$ if $M, N \in K$, $M \le N$, and $\tp_{\qfl} (A / |M| \cup B; \widehat{N})$ is a coheir over $|M|$ in $\widehat{N}$. We also say\footnote{It is easy to check this does not depend on the choice of representatives.} that $\text{gtp} (A / B; N)$ \emph{is a coheir over $M$}. \end{notation} \begin{remark}
The definition of $\mathop{\copy\noforkbox}\limits$ depends on $\kappa$ but we hide this detail. \end{remark}
Interestingly, Definition \ref{heir-coheir-def} is equivalent to the semantic definition of Boney and Grossberg \cite[Definition 3.2]{bg-v9}:
\begin{prop}
Let $N \in K$.
Then $p \in \text{gS}^{<\infty} (B; N)$ is a coheir over $M \le N$ if and only if for any $I \subseteq \ell (p)$ and any $B_0 \subseteq B$ with $|I| + |B_0| < \kappa$, $p^I \upharpoonright B_0$ is realized in $M$. \end{prop} \begin{proof}
Straightforward. \end{proof}
For completeness, we show that the definition of heir also agrees with the semantic definition of Boney and Grossberg \cite[Definition 6.1]{bg-v9}.
\begin{prop}
Let $M_0 \le M \le N$ be in $K$, $\bar{a} \in \fct{<\infty}{|\widehat{N}|}$.
Then $\tp_{\qfl} (\bar{a} / M; \widehat{N})$ is an heir over $M_0$ if and only if for all $(<\kappa)$-sized $I \subseteq \ell (\bar{a})$ and $(<\kappa)$-sized $M_0^- \le M_0$, $M_0^- \le M^- \le M$ (where we also allow $M_0^-$ to be empty), there is $f: M^- \xrightarrow[M_0^-]{} M_0$ such that $\text{gtp} (\bar{a} / M; N)$ extends $f (\text{gtp} ((\bar{a} \upharpoonright I) / M^-; N))$.
Assume first that $\tp_{\qfl} (\bar{a} / M; \widehat{N})$ is an heir over $M_0$ and let $I \subseteq \ell (\bar{a})$, $M_0^- \le M^- \le M$ be $(<\kappa)$-sized, with $M_0^-$ possibly empty. Let $p := \text{gtp} ((\bar{a} \upharpoonright I) / M^-; N)$. Let $\bar{b}_0$ be an enumeration of $M_0^-$ and let $\bar{b}$ be an enumeration of $|M^-| \backslash |M_0^-|$. Let $q := \text{gtp} ((\bar{a} \upharpoonright I) \bar{b} \bar{b}_0 / \emptyset; N)$. Consider the formula $\phi (\bar{x}; \bar{b}; \bar{b}_0) := R_q (\bar{x}; \bar{b}; \bar{b}_0)$, where $\bar{x}$ are the free variables in $\tp_{\qfl} (\bar{a} / M; \widehat{N})$ and we assume for notational simplicity that the $I$-indexed variables are picked out by $R_q (\bar{x}, \bar{b}, \bar{b}_0)$. Then $\phi$ is in $\tp_{\qfl} (\bar{a} / M; \widehat{N})$. By the syntactic definition of heir, there is $\bar{c} \in \fct{<\kappa}{|M_0|}$ such that $\phi (\bar{x}; \bar{c}; \bar{b}_0)$ is in $\tp_{\qfl} (\bar{a} / M_0; \widehat{N})$. By definition of the $(<\kappa)$-Galois Morleyization this means that $\text{gtp} ((\bar{a} \upharpoonright I) \bar{b} \bar{b}_0 / \emptyset; N) = \text{gtp} ((\bar{a} \upharpoonright I) \bar{c} \bar{b}_0 / \emptyset; N)$.
By definition of Galois types and amalgamation (see Fact \ref{ap-eat}), there exists $N' \ge N$ and $g: N \rightarrow N'$ such that $g ((\bar{a} \upharpoonright I) \bar{b} \bar{b}_0) = (\bar{a} \upharpoonright I) \bar{c} \bar{b}_0$. Let $f := g \upharpoonright M^-$. Then from the definitions of $\bar{b}_0$, $\bar{b}$, and $\bar{c}$, we have that $f: M^- \xrightarrow[M_0^-]{} M_0$. Moreover, $f (\text{gtp} ((\bar{a} \upharpoonright I) / M^-; N)) = \text{gtp} (\bar{a} \upharpoonright I / f[M^-]; N)$, which is clearly extended by $\text{gtp} (\bar{a} / M; N)$.
The converse is similar. \end{proof} \begin{remark}
The notational difficulties encountered in the above proof and the complexity of the semantic definition of heir show the convenience of using a syntactic notation rather than working purely semantically. \end{remark}
We now investigate the properties of coheir. For the convenience of the reader, we explicitly prove the uniqueness property (we have to slightly adapt the proof of (U) from \cite[Proposition 4.8]{makkaishelah}). The other properties are either straightforward or can be quoted directly from the literature.
\begin{lem}\label{uniq-sym}
Let $M, N, N' \in K$ with $M \le N$, $M \le N'$. Assume $\widehat{M}$ does \emph{not} have the syntactic order property of length $\kappa$. Let $\bar{a} \in \fct{<\infty}{|N|}$, $\bar{a}' \in \fct{<\infty}{|N'|}$, $\bar{b} \in \fct{<\infty}{|M|}$ be given such that:
\begin{enumerate}
\item $\tp_{\qfl} (\bar{a} / M; \widehat{N}) = \tp_{\qfl} (\bar{a}' / M; \widehat{N'})$
\item $\tp_{\qfl} (\bar{a} / M \bar{b}; \widehat{N})$ is a coheir over $M$.
\item $\tp_{\qfl} (\bar{b} / M \bar{a}'; \widehat{N'})$ is a coheir over $M$.
\end{enumerate}
Then $\tp_{\qfl} (\bar{a} / M \bar{b}; \widehat{N}) = \tp_{\qfl} (\bar{a}' / M \bar{b}; \widehat{N'})$. \end{lem} \begin{proof}
We suppose not and prove that $\widehat{M}$ has the syntactic order property of length $\kappa$. Assume that $\tp_{\qfl} (\bar{a} / M \bar{b}; \widehat{N}) \neq \tp_{\qfl} (\bar{a}' / M\bar{b}; \widehat{N'})$ and pick $\phi (\bar{x}, \bar{y})$ a formula over $M$ witnessing it:
\begin{equation} \widehat{N} \models \phi[\bar{a}; \bar{b}] \text{ but } \widehat{N'} \models \neg \phi[\bar{a}'; \bar{b}] \label{eq2} \end{equation}
(note that we can assume without loss of generality that $\ell (\bar{a}) + \ell (\bar{b}) < \kappa$).
Define by induction on $i < \kappa$ sequences $\bar{a}_i$, $\bar{b}_i$ \emph{in $M$} such that for all $i, j < \kappa$:
\begin{enumerate}
\item $\widehat{M} \models \phi[\bar{a}_i, \bar{b}]$.
\item $\widehat{M} \models \phi[\bar{a}_i, \bar{b}_j]$ if and only if $i \le j$.
\item\label{cond3} $\widehat{N} \models \neg \phi[\bar{a}, \bar{b}_i]$.
\end{enumerate}
Note that since $\bar{b}_i \in \fct{<\kappa}{|M|}$, (\ref{cond3}) is equivalent to $\widehat{N'} \models \neg \phi[\bar{a}', \bar{b}_i]$.
\underline{This is enough:} Then $\chi (\bar{x}_1, \bar{y}_1, \bar{x}_2, \bar{y}_2) := \phi (\bar{x}_1, \bar{y}_2) \land \bar{x}_1 \bar{y}_1 \neq \bar{x}_2 \bar{y}_2$ together with the sequence $\seq{\bar{a}_i\bar{b}_i : i < \kappa}$ witness the syntactic order property of length $\kappa$.
\underline{This is possible:} Suppose that $\bar{a}_j, \bar{b}_j$ have been defined for all $j < i$. Note that by the induction hypothesis and \eqref{eq2} we have:
$$
\widehat{N} \models \bigwedge_{j < i} \phi [\bar{a}_j, \bar{b}] \land \bigwedge_{j < i} \neg \phi [\bar{a}, \bar{b}_j] \land \phi [\bar{a}, \bar{b}]
$$
Since $\tp_{\qfl} (\bar{a} / M \bar{b}; \widehat{N})$ is a coheir over $M$, there is $\bar{a}'' \in \fct{<\kappa}{|M|}$ such that:
$$
\widehat{N} \models \bigwedge_{j < i} \phi [\bar{a}_j, \bar{b}] \land \bigwedge_{j < i} \neg \phi [\bar{a}'', \bar{b}_j] \land \phi [\bar{a}'', \bar{b}]
$$
Note that all the data in the equation above is in $M$, so as $M \le N$, the monotonicity axiom of functorial expansions implies $\widehat{M} \subseteq \widehat{N}$, so $\widehat{M}$ also models the above. By monotonicity again, $\widehat{N'}$ models the above. We also know that $\widehat{N'} \models \neg \phi[\bar{a}', \bar{b}]$. Thus we have:
$$
\widehat{N'} \models \bigwedge_{j < i} \phi [\bar{a}_j, \bar{b}] \land \bigwedge_{j < i} \neg \phi [\bar{a}'', \bar{b}_j] \land \phi [\bar{a}'', \bar{b}] \land \neg \phi[\bar{a}', \bar{b}]
$$
Since $\tp_{\qfl} (\bar{b} / M \bar{a}'; \widehat{N'})$ is a coheir over $M$, there is $\bar{b}'' \in \fct{<\kappa}{|M|}$ such that:
$$
\widehat{N'} \models \bigwedge_{j < i} \phi [\bar{a}_j, \bar{b}''] \land \bigwedge_{j < i} \neg \phi [\bar{a}'', \bar{b}_j] \land \phi [\bar{a}'', \bar{b}''] \land \neg \phi [\bar{a}', \bar{b}'']
$$
Let $\bar{a}_i := \bar{a}''$, $\bar{b}_i := \bar{b}''$. It is easy to check that this works. \end{proof}
\begin{thm}[Properties of coheir]\label{coheir-syn} \
\begin{enumerate}
\item\label{coheir-1} \begin{enumerate}
\item Invariance: If $f: N \cong N'$ and $\nfs{M}{A}{B}{N}$, then $\nfs{f[M]}{f[A]}{f[B]}{N'}$.
\item Monotonicity: If $\nfs{M}{A}{B}{N}$ and $M \le M' \le N_0 \le N$, $A_0 \subseteq A$, $B_0 \subseteq B$, $|M'| \subseteq B$, $A_0 \cup B_0 \subseteq |N_0|$, then $\nfs{M'}{A_0}{B_0}{N_0}$.
\item Normality: If $\nfs{M}{A}{B}{N}$, then $\nfs{M}{A \cup |M|}{B \cup |M|}{N}$.
\item Disjointness: If $\nfs{M}{A}{B}{N}$, then $A \cap B \subseteq |M|$.
\item Left and right existence: $\nfs{M}{A}{M}{N}$ and $\nfs{M}{M}{B}{N}$.
\item Left and right $(<\kappa)$-set-witness: $\nfs{M}{A}{B}{N}$ if and only if for all $A_0 \subseteq A$ and $B_0 \subseteq B$ of size less than $\kappa$, $\nfs{M}{A_0}{B_0}{N}$.
\item Strong left transitivity: If $\nfs{M_0}{M_1}{B}{N}$ and $\nfs{M_1}{A}{B}{N}$, then $\nfs{M_0}{A}{B}{N}$.
\end{enumerate}
\item\label{coheir-2} If $\widehat{K}$ does not have the syntactic order property of length $\kappa$, then\footnote{Note that (by Proposition \ref{syntactic-op-stable}) this holds in particular if $\kappa = \beth_\kappa$ and $K^0$ is stable.}:
\begin{enumerate}
\item Symmetry: $\nfs{M}{A}{B}{N}$ if and only if $\nfs{M}{B}{A}{N}$.
\item Strong right transitivity: If $\nfs{M_0}{A}{M_1}{N}$ and $\nfs{M_1}{A}{B}{N}$, then $\nfs{M_0}{A}{B}{N}$.
\item Set local character: For all cardinals $\alpha$, all $p \in \text{gS}^{\alpha} (M)$, there exists $M_0 \le M$ with $\|M_0\| \le \mu_\alpha := \left(\alpha + 2\right)^{<\plus{\kappa}}$ such that $p$ is a coheir over $M_0$.
\item\label{coheir-syn-uq} Syntactic uniqueness: If $M_0 \le M \le N_\ell$ for $\ell = 1,2$, $|M_0| \subseteq B \subseteq |M|$, $q_\ell \in \text{S}_{\operatorname{qf-}\mathbb{L}_{\kappa, \kappa} (\bigtau)}^{<\infty} (B; \widehat{N_\ell})$, $q_1 \upharpoonright M_0 = q_2 \upharpoonright M_0$, and $q_\ell$ is a coheir over $M_0$ in $\widehat{N_\ell}$ for $\ell = 1,2$, then $q_1 = q_2$.
\item\label{stab-noop} Syntactic stability: For $\alpha$ a cardinal, $\widehat{K}$ is syntactically $\alpha$-stable in all $\lambda \ge \text{LS} (K^0)$ such that $\lambda^{\mu_\alpha} = \lambda$.
\end{enumerate}
\item\label{coheir-3} If $\widehat{K}$ does not have the syntactic order property of length $\kappa$ and $K^0$ is $(<\kappa)$-tame and short for types of length less than $\alpha$, then:
\begin{enumerate}
\item Uniqueness: If $p, q \in \text{gS}^{<\alpha} (M)$ are coheir over $M_0 \le M$ and $p \upharpoonright M_0 = q \upharpoonright M_0$, then $p = q$.
\item Stability: For all $\beta < \alpha$, $K^0$ is $\beta$-stable in all $\lambda \ge \text{LS} (K^0)$ such that $\lambda^{\mu_\beta} = \lambda$.
\end{enumerate}
\end{enumerate} \end{thm} \begin{proof}
Observe that (except for part (\ref{coheir-3})), one can work in $\widehat{K}$ and prove the properties there using purely syntactic methods (so amalgamation is never needed for example). More specifically, (\ref{coheir-1}) is straightforward. As for (\ref{coheir-2}), symmetry is exactly as in\footnote{Note that a proof of symmetry of nonforking from no order property already appears in \cite{shelahfobook78}, but Pillay's proof for coheir is the one we use here.} \cite[Proposition 3.1]{pillay82} (Lemma \ref{uniq-sym} is not needed here), strong right transitivity follows from strong left transitivity and symmetry, syntactic uniqueness is by symmetry and Lemma \ref{uniq-sym}, and set local character is as in the proof of $(B)_\mu$ in \cite[Proposition 4.8]{makkaishelah}. Note that the proofs in \cite{makkaishelah} and \cite{pillay82} use that the ordering has some elementarity. In our case, this is given by Proposition \ref{elem-prop}.
The proof of stability is as in the first-order case. To get part (\ref{coheir-3}), use the translation between Galois and syntactic types (Theorem \ref{separation}). \end{proof} \begin{proof}[Proof of Theorem \ref{coheir-syn-ab}]
If the hypotheses of Theorem \ref{coheir-syn-ab} in the abstract hold for the AEC $K^0$, then the hypotheses of each part of Theorem \ref{coheir-syn} hold (see Proposition \ref{syntactic-op-stable}). \end{proof} \begin{remark}
We can give more localized versions of some of the above results. For example, in the statement of the symmetry property it is enough to assume that $\widehat{M}$ does not have the syntactic order property of length $\kappa$. We could also have been more precise and stated the uniqueness property in terms of being $(<\kappa)$-tame and short for $\{q_1, q_2\}$, where $q_1, q_2$ are the two Galois types we are comparing. \end{remark} \begin{remark}
We can use Theorem \ref{coheir-syn}.(\ref{stab-noop}) to get another proof of the equivalence between (syntactic) stability and no order property in AECs. \end{remark} \begin{remark}
The extension property (given $p \in \text{gS}^{<\infty} (M)$, $N \ge M$, $p$ has an extension to $N$ which is a coheir over $M$) seems more problematic. In \cite{bg-v9}, Boney and Grossberg simply assumed it (they also showed that it followed from $\kappa$ being strongly compact \cite[Theorem 8.2.1]{bg-v9}). Here we do not need to assume it but are still unable to prove it. In \cite{indep-aec-v5}, we prove it assuming a superstability-like hypothesis and more locality\footnote{A word of caution: In \cite[Section 4]{hyttinen-lessmann}, the authors give an example of an $\omega$-stable class that does not have extension. However, the extension property they consider is \emph{over all sets}, not only over models.}. \end{remark}
\end{document} | arXiv |
\begin{document}
\title{Note on the stability of planar stationary flows in an exterior domain without symmetry}
\author{Mitsuo Higaki$^\ast$ \\ Department of Mathematics, Kobe University\\ 1-1 Rokkodai, Nada-ku, Kobe 657-8501, Japan}
\date{}
\maketitle
\noindent {\bf Abstract.}\, The asymptotic stability of two-dimensional stationary flows in a non-symmetric exterior domain is considered. Under a smallness condition on initial perturbations, we show the stability of the small stationary flow whose leading profile at spatial infinity is given by the rotating flow decaying in the scale-critical order $O(|x|^{-1})$. In particular, we prove the $L^p$-$L^q$ estimates to the semigroup associated with the linearized equations.
\footnote[0]{$^\ast$Most of this work was done when the author was a PhD student in Kyoto University.} \footnote[0]{AMS Subject Classifications:\, 35B35, 35Q30, 76D05, 76D17.}
\section{Introduction}\label{intro}
In this paper we consider the perturbed Stokes equations for viscous incompressible flows in a two-dimensional exterior domain {\color{black} $\Omega$ with a smooth boundary.}
\begin{equation}\tag{PS}\label{PS} \left\{ \begin{aligned} \partial_t v - \Delta v + V\cdot\nabla v + v\cdot\nabla V + \nabla q &\,=\,0\,,~~~~t>0\,,~~x \in \Omega\,, \\ {\rm div}\,v &\,=\, 0\,,~~~~t\ge0\,,~~x \in \Omega\,, \\
v|_{\partial \Omega} &\,=\,0\,,~~~~t>0\,, \\
v|_{t=0} &\,=\, v_0\,,~~~~x \in \Omega\,. \end{aligned}\right. \end{equation}
Here the unknown functions $v=v(t,x)=(v_1(t,x), v_2(t,x))^\top$ and $q=q(t,x)$ are respectively the velocity field and the pressure field of the fluid, and $v_0=v_0(x)= (v_{0,1}(x), v_{0,2}(x))^\top$ is a given initial velocity field. The given vector field $V=V(x) = (V_1(x), V_2(x))^\top$ is assumed to be time-independent and to decay in the scale-critical order $V(x)=O(|x|^{-1})$ at spatial infinity. We use the standard notations for differential operators with respect to the variables $t$ and $x=(x_1,x_2)^\top$: $\partial_t = \frac{\partial}{\partial t}$, $\partial_j = \frac{\partial}{\partial x_j}$, $\Delta=\partial_1^2 + \partial_2^2$, $V\cdot\nabla v + v\cdot\nabla V=\sum_{j=1}^{2} V_j \partial_j v + v_j \partial_j V$, ${\rm div}\,v=\partial_1 v_1 + \partial_2 v_2$. The exterior domain $\Omega$ is assumed to be contained in the domain exterior to the radius-$\frac12$ disk $\{x\in\mathbb{R}^2~|~|x|>\frac12\}$.
{\color{black}
The equations \eqref{PS} have been studied as the linearization of the Navier-Stokes equations, $\partial_t u-\Delta u + u\cdot\nabla u + \nabla p = f$, ${\rm div}\,u=0$ in $\Omega$, $u=b$ on $\partial \Omega$, and $u\to 0$ as $|x| \to \infty$ with some given data $f$ and $b$, around its stationary solution $V$. The analysis of \eqref{PS} is especially important when one considers the asymptotic stability of the stationary solutions. The aim of this paper is to investigate the time-decay estimates to the equations \eqref{PS}, under a suitable condition on the vector field $V$.}
In the three-dimensional case, Borchers and Miyakawa \cite{BM} establish the $L^p$-$L^q$ estimates to \eqref{PS} for the small stationary Navier-Stokes flow $V$ satisfying $V(x)=O(|x|^{-1})$ as $|x| \to \infty$. This result is extended to the case when $V$ belongs to the Lorentz space $L^{3,\infty}(\Omega)$ by Kozono and Yamazaki \cite{KY}. We also refer to the whole-space result by Hishida and Schonbek \cite{HS} considering the time-dependent $V=V(t,x)$ in the scale-critical space $L^\infty(0,\infty; L^{3,\infty}(\mathbb{R}^3))$, where the $L^p$-$L^q$ estimates are obtained for the evolution operator associated with the linearized equations around $V(t,x)$.
{\color{black} For the two-dimensional problem \eqref{PS},} the analysis becomes quite complicated and there is no general result especially for the time-decay estimate so far. The difficulty arises from the unavailability of the Hardy inequality in the form
\begin{align}
\big\| \frac{f}{|x|} \big\|_{L^2(\Omega)} \le C \|\nabla f\|_{L^2(\Omega)}\,,~~~~~~ f \in \dot{W}^{1,2}_0 (\Omega)
\,=\, \overline{C^\infty_0 (\Omega)}^{\|\nabla f\|_{L^2(\Omega)}} \,,\label{hardy} \end{align}
where $C^\infty_0(\Omega)$ is the set of smooth and compactly supported functions in $\Omega$. The validity of this bound is well known for three-dimensional exterior domains, and the results mentioned in the above essentially rely on the inequality \eqref{hardy}. One can recover the Hardy inequality in the two-dimensional case if the factor $|x|^{-1}$ in the left-hand side of \eqref{hardy} is replaced with a logarithmic correction $|x|^{-1} \log(e+|x|)^{-1}$, but this inequality has only a narrow application in our scale-critical framework. Another way to recover the inequality \eqref{hardy} is to impose a symmetry on both $\Omega$ and $f$ about an axis which we may take to be the $x_1$-axis
\begin{equation}\tag{Sym$_1$}\label{sym1} \left\{ \begin{aligned} &{\rm If}~(x_1, x_2)^\top\in\Omega\,,~{\rm then}~(x_1, -x_2)^\top\in \Omega\,, \\ &f(x_1, x_2) = -f(x_1, -x_2)~\,{\rm holds} \end{aligned}\right. \end{equation}
(for the proof see Galdi \cite{G}), and {\color{black} such an inequality} is applied in the analysis of \eqref{PS} for the case when $V$ is symmetric. Yamazaki \cite{Y2} proves the $L^p$-$L^q$ estimates to $\eqref{PS}$ with the symmetric Navier-Stokes flow $V(x)=O(|x|^{-1})$, under the symmetry conditions on both the domain and given data. We note that these estimates imply the asymptotic stability of $V$ under symmetric initial $L^2$-perturbations; see also Galdi and Yamazaki \cite{GY}.
An important remark is given by Russo \cite{R2} concerning the Hardy-type inequality in two-dimensional exterior domains without symmetry. Let us introduce the next scale-critical radial flow $W=W(x)$, which is called the flux carrier.
\begin{align}
W(x) \,=\, \frac{x}{|x|^2}\,,~~~~~~ x \in \mathbb{R}^2\setminus\{0\}\,. \label{fluxcarrier} \end{align}
Then, from the existence of a potential to $W(x)=\nabla \log |x|$, one can show that the following Hardy-type inequality holds in the $L^2$-inner product $\langle \cdot, \cdot \rangle_{L^2(\Omega)}$:
\begin{align}
|\langle u\cdot\nabla u\,, W \rangle_{L^2(\Omega)}| \le
C \|\nabla u\|_{L^2(\Omega)}^2\,,~~~~~~ u \in \dot{W}^{1,2}_{0,\sigma} (\Omega)
\,=\, \overline{C^\infty_{0,\sigma} (\Omega)}^{\|\nabla u\|_{L^2(\Omega)}}\,, \label{hardy2} \end{align}
where $C^\infty_{0,\sigma}(\Omega)$ denotes the function space $\{ f \in C^\infty_0(\Omega)^2~|~{\rm div}\,f=0\}$. Based on the energy method with the application of \eqref{hardy2}, Guillod \cite{Gui} proves the global $L^2$-stability of the flux carrier $\delta W$ when the flux $\delta$ is small enough. On the other hand, the validity of the inequality \eqref{hardy2} essentially depends on the potential property of $W$. Indeed, as is pointed out in \cite{Gui}, the bound \eqref{hardy2} breaks down if $W$ is replaced by the next rotating flow $U=U(x)$:
\begin{align}
U(x) \,=\, \frac{x^\bot}{|x|^2}\,,~~~~~~ x^\bot\,=\,(-x_2,x_1)^\top\,,~~~~~~ x \in \mathbb{R}^2\setminus\{0\}\,. \label{rotatingflow} \end{align}
Hence, if we consider the problem \eqref{PS} with $V=\alpha U$, $\alpha\in\mathbb{R}\setminus\{0\}$, the linearized term $\alpha(U\cdot\nabla v +v\cdot\nabla U)$ can no longer be regarded as a perturbation of the Laplacian, and we cannot avoid the difficulty coming from the lack of the Hardy inequality. Maekawa \cite{Ma1} studies the stability of the flow $\alpha U$ in the exterior unit disk. The symmetry of the domain allows us to express the solution to the problem \eqref{PS} explicitly through the Dunford integral of the resolvent operator. Based on this representation formula, \cite{Ma1} obtains the $L^p$-$L^q$ estimates to \eqref{PS} with $V=\alpha U$ for small $\alpha$, and shows the asymptotic $L^2$-stability of $\alpha U$ if $\alpha$ and initial perturbations are sufficiently small. This result is extended by the same author in \cite{Ma2} to a more general class of $V$ in \eqref{PS} including the flow of the form $V=\alpha U + \delta W$ with small $\alpha$ and $\delta$; see \cite{Ma2} for details.
Our first motivation is to generalize the result in \cite{Ma1} to the case when the domain loses symmetry (and the second one is explained in {\color{black} Remark \ref{remark.assumption1}} below). Let us prepare the assumptions on the domain $\Omega$ and the stationary vector field $V$ in \eqref{PS} considered in this paper. We denote by $B_{\rho}(0)$ the two-dimensional disk of radius $\rho>0$ centered at the origin.
\begin{assumption}\label{assumption} {\rm (1)} There is a positive constant $d\in (0,\frac14)$ such that the complement of the domain $\Omega$ satisfies
\begin{align}\label{domain} \overline{B_{1-2d}(0)} \subset \Omega^{\rm c} \subset \overline{B_{1-d}(0)}\,. \end{align}
{\color{black} \noindent {\rm (2)} The vector field $V$ in \eqref{PS} satisfies ${\rm div}\,V=0$ in $\Omega$ and the asymptotic behavior
\begin{align} V(x) \,=\, \beta U(x) + R(x)\,,~~~~~~x\in\Omega\,,\label{asymptoticbehavior} \end{align}
where $U$ is the rotating flow in \eqref{rotatingflow}. The constant $\beta$ and the remainder $R$ are assumed to satisfy the following conditions with some $\gamma\in(\frac12,1)${\rm :}
\begin{align} &\beta\in(0,1)\,,\label{beta} \\
&\sup_{x\in\Omega} |x|^{1+\gamma} |R(x)| \le C d\,, \label{remainder} \end{align}
where the constant $C$ depends only on $\gamma$. Moreover, the derivatives of $V$ satisfy $(1+|x|) \nabla^k V\in L^\infty(\Omega)$ for $k=1, 2${\rm ;} see Remark \ref{remark.assumption} {\rm (3)}} for this assumption. \end{assumption}
\begin{remark}\label{remark.assumption}
(1) Let us consider the case where the constant $\beta$ in the assumption is given in the form of $\beta = \alpha + \tilde{\alpha}_d$ with $\alpha\in(0,1)$ and $|\tilde{\alpha}_d| \le C d$, and assume that $\alpha$ and $d$ are sufficiently small so that $\beta\in (0,1)$. Then formally taking $d=0$ in \eqref{domain}--\eqref{remainder} we obtain the flow $V=\alpha U$ in the exterior disk $\Omega=\mathbb{R}^2\setminus\overline{B_{1}(0)}$, which solves the following two-dimensional stationary Navier-Stokes equations (SNS): $-\Delta u + u\cdot\nabla u + \nabla p = 0$,
${\rm div}\,u=0$ in $\Omega$, $u=b$ on $\partial \Omega$, and $u\to 0$ as $|x| \to \infty$ with $b=\alpha x^{\bot}$. The vector field $V$ in \eqref{asymptoticbehavior}--\eqref{remainder} describes the flow around $\alpha U$ created by a small perturbation of the exterior disk, and hence one can naturally expect the existence of {\color{black} such solutions} to the nonlinear problem (SNS) when $\alpha$ is small enough, provided $b-\alpha x^{\bot}$ is sufficiently small with respect to $0<d\ll 1$. Indeed, imposing the symmetry condition on both the domain perturbation in \eqref{domain} and $b$ as
\begin{equation}\tag{Sym$_2$}\label{sym2} \left\{ \begin{aligned} &{\rm If}~(x_1, x_2)^\top\in\Omega\,, ~{\rm then}~(x_1, -x_2)^\top\in \Omega ~{\rm and}~(-x_1, x_2)^\top\in \Omega\,, \\ &b_1(x_1, x_2) = -b_1(x_1, -x_2) = b_1(-x_1, x_2)~\,{\rm and} \\ &b_2(x_1, x_2) = b_2(x_1, -x_2) = -b_2(-x_1, x_2)~\,{\rm hold}, \end{aligned}\right. \end{equation}
under the smallness condition on $\alpha$ and $b-\alpha x^{\bot}$, we can construct the Navier-Stokes flow $V$ satisfying at least \eqref{asymptoticbehavior} and \eqref{beta} (but with no specific spatial decay rate for $R$ in general). The proof is based on the Galerkin method and the recovered Hardy inequality \eqref{hardy}. We refer to \cite{G}, Russo \cite{R1}, \cite{R2}, Yamazaki \cite{Y1}, and Pileckas and Russo \cite{PR} for the solvability of (SNS) under symmetry conditions. In particular, the solutions decaying in the scale-critical order $O(|x|^{-1})$ are obtained in \cite{Y1} under a stronger symmetry condition on the domain than \eqref{sym2}. The reader is also referred to Hillairet and Wittwer \cite{HW}, who prove the existence of solutions to (SNS) in the exterior disk with $b=\alpha x^\bot + \tilde{b}$ when $\alpha$ is large enough and $\tilde{b}$ is sufficiently small.
\noindent (2) The novelty of our assumption is that we impose no symmetry on either the domain $\Omega$ or the flow $V$, while such a symmetry is a crucial assumption in the stability analysis in \cite{GY, Y2} to resolve the difficulty related to the lack of the Hardy inequality. On the other hand, one can formally recover the exterior disk case in \cite{Ma1} by putting $d=0$ in \eqref{domain}, \eqref{asymptoticbehavior}, and \eqref{remainder}. In this sense, the assumption above gives a generalization of the setting in \cite{Ma1} to non-symmetric domain cases.
\noindent (3) The assumption on the derivatives of $V$ is only used in proving that the operator ${\mathbb A}_V$ defined in \eqref{perturbed Stokes op.} is a relatively compact perturbation of the Stokes operator ${\mathbb A}$; see below for the definition of ${\mathbb A}$. Indeed, it is possible to show this property under a weaker assumption on the spatial decay of the derivatives of $V$, such as $(1+|x|)^r \nabla V\in L^\infty(\Omega)$ for some $r\in(0,1]$ and $\nabla^2 V\in L^\infty(\Omega)$, but we omit the details here for the sake of brevity.
\noindent (4) A vector field $V$ of the type \eqref{asymptoticbehavior} also naturally appears in the study of the Navier-Stokes flows around a rotating obstacle. Let us consider the situation where the obstacle $\Omega^{\rm c}$ rotates around the origin with a constant angular velocity $\alpha\in \mathbb{R}\setminus\{0\}$. Then the time-periodic Navier-Stokes flow moving with the rotating obstacle gives a stationary solution to the problem (RNS): $\partial_t u -\Delta u - \alpha(x^{\bot}\cdot\nabla u - u^{\bot}) + u\cdot\nabla u + \nabla p = f$, ${\rm div}\,u=0$ in $\Omega$, $u=\alpha x^{\bot}$ on $\partial \Omega$, and $u\to 0$ as $|x| \to \infty$. Here we take the reference frame attached to the obstacle; see Hishida \cite{H} for details. The stationary problem of (RNS) is analyzed by Higaki, Maekawa, and Nakahara \cite{HMN}, where the existence and uniqueness of stationary solutions decaying in the order $O(|x|^{-1})$ are proved when $\alpha$ is sufficiently small and $f$ is of divergence form $f={\rm div}\,F$ for some $F$ which is small in a scale-critical norm. Moreover, the leading profile at spatial infinity is shown to be $C \frac{x^\bot}{|x|^2} + O(|x|^{-1-\gamma})$ for some constant $C$ if $F$ satisfies a decay condition $F=O(|x|^{-2-\gamma})$ with $\gamma\in(0,1)$. By extending the proof in \cite{HMN}, one can construct the stationary solutions to (RNS) satisfying the estimates \eqref{asymptoticbehavior}--\eqref{remainder} under the condition on the domain \eqref{domain}. The analysis in this paper cannot be directly applied to the stability analysis of a stationary solution $V$ to (RNS), because of its time-dependence in the original frame; however, we can perform an analogous analysis for the linearized equations of (RNS) around $V$, as explained in Remark \ref{remark.assumption1} below. \end{remark}
\begin{remark}\label{remark.assumption1}
Another motivation comes from the stability analysis of stationary solutions to the equations (RNS) introduced in Remark \ref{remark.assumption} (4). If we denote by (PRS) the linearization of (RNS) around a stationary solution $V$, then the two equations \eqref{PS} and (PRS) obviously differ from each other due to the additional term $-\alpha(x^{\bot}\cdot\nabla v - v^{\bot})$ in (PRS). However, if we consider the {\it resolvent problems} for each equation, there are some common features thanks to the identity $\alpha(x^{\bot}\cdot\nabla v - v^{\bot})= \sum_{n\in\mathbb{Z}} i\alpha n \mathcal{P}_n v$, which is derived from the Fourier expansion of $v|_{\{|x|>1\}}$; see \eqref{polar.e_theta} and \eqref{def.P_n} in Subsection \ref{op.s in polar coord.}. In particular, for stationary solutions to (RNS) satisfying \eqref{asymptoticbehavior}--\eqref{remainder} on the domain in \eqref{domain}, we can reproduce, for the resolvent problem of (PRS), a calculation similar to the one performed in this paper, by observing that the appearance of $\sum_{n\in\mathbb{Z}} i\alpha n \mathcal{P}_n v$ in the resolvent equation (restricted to $|x|>1$) leads to a shift of the resolvent parameter from $\lambda\in\mathbb{C}$ to $\lambda+in\alpha$ in the $n$-Fourier mode. Although the stability of the stationary solutions to (RNS) still remains open, our analysis in this paper will contribute to the resolvent estimate for the linearized problem (PRS). \end{remark}
Before stating the main result, let us introduce some notations and basic facts related to the problem \eqref{PS}. We denote by $L^2_\sigma(\Omega)$ the $L^2$-closure of $C^\infty_{0,\sigma}(\Omega)$. The orthogonal projection ${\mathbb P}: L^2(\Omega)^2 \to L^2_\sigma(\Omega)$ is called the Helmholtz projection. Then the Stokes operator ${\mathbb A}$ with the domain $D({\mathbb A}) = L^{2}_{\sigma}(\Omega) \cap W^{1,2}_0(\Omega)^2 \cap W^{2,2}(\Omega)^2$ is defined as ${\mathbb A} = -{\mathbb P} \Delta$, and it is well known that the Stokes operator is nonnegative and self-adjoint in $L^{2}_{\sigma}(\Omega)$. Finally we define the perturbed Stokes operator ${\mathbb A}_V$ as
\begin{equation}\label{perturbed Stokes op.} \begin{split} D({\mathbb A}_V) &\,=\, D({\mathbb A})\,, \\ {\mathbb A}_V v &\,=\, {\mathbb A} v + {\mathbb P}(V\cdot\nabla v + v\cdot\nabla V)\,. \end{split} \end{equation}
A standard argument for sectorial operators implies that $-{\mathbb A}_V$ generates a $C_0$-analytic semigroup in $L^2_\sigma(\Omega)$. We denote this semigroup by $e^{-t {\mathbb A}_V}$. Then our main result is stated as follows. Let $\beta$ and $d$ be the constants in Assumption \ref{assumption}.
\begin{theorem}\label{maintheorem} There are positive constants $\beta_\ast$ and $\mu_\ast$ such that if $\beta \in (0, \beta_\ast)$ and {\color{black} $d\in(0,\mu_\ast \beta^2)$} then the following statement holds. Let $1<q\le 2\le p<\infty$. Then we have
\begin{align}
\|e^{-t {\mathbb A}_V} f\|_{L^p(\Omega)} & \le \frac{C}{\beta^2} t^{-\frac1q+\frac1p}
\|f\|_{L^q(\Omega)}\,, ~~~~~~ t>0\,, \label{est1.maintheorem} \\
\|\nabla e^{-t {\mathbb A}_V} f\|_{L^2(\Omega)} & \le \frac{C}{\beta^2} t^{-\frac1q}
\|f\|_{L^q(\Omega)}\,, ~~~~~~ t>0\,, \label{est2.maintheorem} \end{align}
for $f\in L^2_\sigma(\Omega) \cap L^q(\Omega)^2$. Here the constant $C$ is independent of $\beta$ and depends on $p$ and $q$. \end{theorem}
As an application of Theorem \ref{maintheorem}, we can prove the asymptotic stability of $V$. Suppose that $V$ gives a stationary Navier-Stokes flow. Then the time evolution of perturbations around $V$ is governed by the Navier-Stokes equations, whose integral form is written as
\begin{align}\tag{INS}\label{INS} v(t) \,=\, e^{-t {\mathbb A}_V} v_0 - \int_0^t e^{-(t-s) {\mathbb A}_V} \mathbb{P} (v\cdot\nabla v)(s) \,{\rm d} s\,, ~~~~~~ t>0\,. \end{align}
The proof of the following result is omitted in this paper, since {\color{black} it is just a reproduction of the argument in \cite[Section 4]{Ma1}} using the Banach fixed point theorem.
\begin{theorem}\label{maintheorem2} Let $\beta_\ast$ and $\mu_\ast$ be the constants in Theorem \ref{maintheorem}. Then there is a positive constant $\nu_\ast$ such that if $\beta \in (0, \beta_\ast)$, {\color{black} $d\in(0,\mu_\ast \beta^2)$,}
and $\|v_0\|_{L^2(\Omega)}\in(0, \nu_\ast \beta^4)$ then there exists a unique solution $v \in C([0,\infty); L^2_\sigma(\Omega))\,\cap\,C((0,\infty);W^{1,2}_0(\Omega)^2)$ to \eqref{INS} satisfying
\begin{align}
\lim_{t\to\infty} t^{\frac{k}{2}} \|\nabla^k v(t)\|_{L^2(\Omega)} \,=\, 0\,,~~~~~~~~ k\,=\,0,1\,. \end{align}
\end{theorem}
The proof of Theorem \ref{maintheorem} relies on the resolvent estimate to the perturbed Stokes operator $\mathbb{A}_V$. Since the difference {\color{black} $\mathbb{A}_V-\mathbb{A}$ is relatively compact with respect to $\mathbb{A}$ in $L^2_\sigma(\Omega)$,}
one can show that the spectrum of $-\mathbb{A}_V$ has the structure $\sigma(-{\mathbb A}_V)={\color{black} (-\infty,0]} \cup \sigma_{{\rm disc}}(-{\mathbb A}_V)$ in $L^2_\sigma(\Omega)$, where $\sigma_{{\rm disc}}(-{\mathbb A}_V)$ denotes the set of discrete spectrum of $-{\mathbb A}_V$; see \cite[Lemma 2.11 and Proposition 2.12]{Ma1}. By using the identity $v\cdot \nabla v = \frac12 \nabla |v|^2 + v^\bot {\rm rot}\,v$ with ${\rm rot}\,v=\partial_1 v_2 - \partial_2 v_1$ and ${\rm rot}\, U =0$ in $x \in \Omega$, we can write the resolvent problem associated with \eqref{PS} as, for $V$ given by \eqref{asymptoticbehavior},
\begin{equation}\tag{RS}\label{RS} \left\{ \begin{aligned} \lambda v - \Delta v + \beta U^\bot {\rm rot}\,v + {\rm div}\,(R\otimes v + v\otimes R) + \nabla q &\,=\,f\,,~~~~x \in \Omega\,, \\ {\rm div}\,v &\,=\, 0\,,~~~~x \in \Omega\,, \\
v|_{\partial \Omega} &\,=\,0\,. \end{aligned}\right. \end{equation}
Here $\lambda\in\mathbb{C}$ is the resolvent parameter and we have used the conditions ${\rm div}\,v={\rm div}\,R=0$ to derive $R\cdot\nabla v +v\cdot\nabla R = {\rm div}\,(R\otimes v + v\otimes R)$. Hence, the proof of Theorem \ref{maintheorem} is complete as soon as we show that there is a sector $\Sigma$ included in the resolvent set $\rho(-{\mathbb A}_V)$, and that the following estimates to \eqref{RS} hold for $q\in(1,2]$ and $f\in L^2_\sigma(\Omega) \cap L^q(\Omega)^2$:
\begin{equation}\label{resolvent estimates.0} \begin{split}
\|(\lambda+{\mathbb A}_V)^{-1} f\|_{L^2(\Omega)} & \le
\frac{C}{\beta^2} |\lambda|^{-\frac32+\frac1q}
\|f\|_{L^q(\Omega)}\,, ~~~~\lambda \in {\color{black} \Sigma}\,,\\
\|\nabla (\lambda+{\mathbb A}_V)^{-1} f\|_{L^2(\Omega)} & \le
\frac{C}{\beta^2} |\lambda|^{-1+\frac1q}
\|f\|_{L^q(\Omega)}\,, ~~~~\lambda \in {\color{black} \Sigma}\,. \end{split} \end{equation}
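Although the rigorous argument is given in Section \ref{sec.maintheorem}, let us indicate here, at a purely formal level, how the resolvent estimates \eqref{resolvent estimates.0} lead to the decay estimate \eqref{est1.maintheorem}, say for $p=2$; the contour chosen below is only for illustration. Writing the semigroup as the Dunford integral
\begin{align*}
e^{-t {\mathbb A}_V} f \,=\, \frac{1}{2\pi i} \int_{\Gamma} e^{t\lambda}\, (\lambda + {\mathbb A}_V)^{-1} f \,{\rm d} \lambda\,,
\end{align*}
where $\Gamma \subset \Sigma$ consists, with the standard orientation, of the two rays $\{|{\rm arg}\,\lambda| = \theta\,,~|\lambda| \ge t^{-1}\}$ for some $\theta\in(\frac{\pi}{2},\pi)$ such that they are contained in $\Sigma$, together with the arc $\{|{\rm arg}\,\lambda| \le \theta\,,~|\lambda| = t^{-1}\}$, the first estimate in \eqref{resolvent estimates.0} yields
\begin{align*}
\|e^{-t {\mathbb A}_V} f\|_{L^2(\Omega)} \,\le\, \frac{C}{\beta^2} \|f\|_{L^q(\Omega)} \int_{\Gamma} e^{t\,{\rm Re}(\lambda)}\, |\lambda|^{-\frac32+\frac1q} \,|{\rm d} \lambda| \,\le\, \frac{C}{\beta^2}\, t^{-\frac1q+\frac12} \|f\|_{L^q(\Omega)}\,,
\end{align*}
where the last inequality follows from the change of variables $\lambda = \mu/t$. This is \eqref{est1.maintheorem} with $p=2$; the case $2<p<\infty$ and the gradient estimate \eqref{est2.maintheorem} require additional arguments.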
Let us prepare the ingredients for the proof of the resolvent estimates \eqref{resolvent estimates.0}. Our approach is based on the energy method to \eqref{RS}, and thus one of the most important steps is to obtain the estimate for the term $|\langle \beta U^{\bot} {\rm rot}\,v\,,v \rangle_{L^2(\Omega)}|$, which enables us to close the energy computation. Again we note that the bound $|\langle \beta U^{\bot} {\rm rot}\,v\,,v \rangle_{L^2(\Omega)}| \le C \beta \|\nabla v\|_{L^2(\Omega)}^2$ is no longer available, in contrast to the three-dimensional case.
Let us first examine the following inequality, which contains a parameter $T \gg 1$:
\begin{align}\label{resolvent estimates.1}
|\langle \beta U^{\bot} {\rm rot}\,v, v \rangle_{L^2(\Omega)}| \le \frac{\beta}{T}
\|\nabla v\|_{L^2(\Omega)} \|v\|_{L^2(\Omega)}
+ C \beta \Theta(T) \|\nabla v\|_{L^2(\Omega)}^2\,, \end{align}
where the function $\Theta(T)$ satisfies $\Theta(T) \approx \log T$ for $T \gg 1$. This inequality allows us to close the energy computation for \eqref{RS} as long as the coefficient $C \beta \Theta(T)$ is small enough so that the second term in the right-hand side of \eqref{resolvent estimates.1} can be controlled by the dissipation from the Laplacian in \eqref{RS}. However, this observation does not give information about the spectrum of $-\mathbb{A}_V$ near the origin. More precisely, we cannot close the energy computation when the resolvent parameter $\lambda$ is exponentially small with respect to $\beta$, that is, when $0<|\lambda| \le O(e^{-\frac{1}{\beta}})$. We emphasize that this difficulty is essentially due to the unavailability of the Hardy inequality \eqref{hardy} in two-dimensional exterior domains.
To overcome the difficulty for the case $0<|\lambda| \le O(e^{-\frac{1}{\beta}})$, we rely on the representation formula to the resolvent problem in the exterior unit disk established in \cite{Ma1}. We denote by $D$ the exterior unit disk $\mathbb{R}^2\setminus\overline{B_1(0)}=\{x\in\mathbb{R}^2~|~|x|>1\}$. Since the restriction $(v|_{D}, q|_{D})$ gives a unique solution to the next problem for $(w,r)$:
\begin{equation}\tag{RS$^{\rm ed}$}\label{RSed} \left\{ \begin{aligned} \lambda w - \Delta w + \beta U^\bot {\rm rot}\,w + \nabla r &\,=\,-{\rm div}\,(R\otimes v + v\otimes R) + f\,,~~~~x\in D\,, \\ {\rm div}\,w &\,=\, 0\,,~~~~x\in D\,, \\
w|_{\partial D} &\,=\,v|_{\partial D}\,, \end{aligned}\right. \end{equation}
we can study the a priori estimates of $w=v|_{D}$ based on the solution formula to \eqref{RSed}. Then a detailed calculation shows that
$|\langle \beta U^{\bot} {\rm rot}\,w, w \rangle_{L^2(D)}|$ satisfies
\begin{equation}\label{resolvent estimates.2} \begin{split} &
|\langle \beta U^{\bot} {\rm rot}\,w, w \rangle_{L^2(D)}| \\ &\le \frac{C}{\beta^4}
\big(\|R\otimes v + v\otimes R\|_{L^2(\Omega)} + {\color{black}
\beta \sum_{|n|=1} \|\mathcal{P}_n v\|_{L^{\infty}(\partial D)} } \big)^2 \\ & \quad
+ \frac{C}{\beta^4} |\lambda|^{-2+\frac2q} \|f\|_{L^q(\Omega)}^2
+ C\beta\|\nabla v\|_{L^2(\Omega)}^2\,, \end{split} \end{equation}
{\color{black} where $\mathcal{P}_n v$ denotes the Fourier $n$-mode of
$v|_{\overline{D}}$; see \eqref{def.P_n} in Subsection \ref{op.s in polar coord.} for the definition. If}
we obtain \eqref{resolvent estimates.2}, then the estimate of $|\langle \beta U^{\bot} {\rm rot}\,v, v \rangle_{L^2(\Omega)}|$ is derived by using the Poincar\'{e} inequality on the bounded domain $\Omega \setminus\overline{D}$. However, in closing the energy computation, we need to be careful about the $\beta$-singularity in the coefficients in \eqref{resolvent estimates.2}. In fact, the first term in the right-hand side of \eqref{resolvent estimates.2} {\color{black} has to be} controlled by the dissipation as
\begin{align*} \frac{C}{\beta^4}
\big(\|R\otimes v + v\otimes R\|_{L^2(\Omega)} + {\color{black}
\beta \sum_{|n|=1} \|\mathcal{P}_n v\|_{L^{\infty}(\partial D)} } \big)^2 \le \frac{C}{\beta^4} (d + \beta d^\frac12)^2
\|\nabla v\|_{L^2(\Omega)}^2\,, \end{align*}
and then the smallness condition $C (d + \beta d^\frac12)^2 \beta^{-4} \ll 1$ is required in order to close the energy computation. This condition is achieved by imposing smallness on the distance $d$ between the domain $\Omega$ and the exterior unit disk, which is introduced in Assumption \ref{assumption}.
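We note that this requirement is consistent with the scaling of the condition $d\in(0,\mu_\ast \beta^2)$ in Theorem \ref{maintheorem}: indeed, if $d\le \mu_\ast \beta^2$ then a direct computation gives
\begin{align*}
\frac{C (d + \beta d^\frac12)^2}{\beta^{4}} \,\le\, C \big( \mu_\ast + \mu_\ast^\frac12 \big)^2\,,
\end{align*}
and the right-hand side becomes small, uniformly in $\beta$, once $\mu_\ast$ is chosen small enough.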
{\color{black} Finally, we pay close attention to the $\beta$-dependencies appearing in Theorem \ref{maintheorem}. If we consider the limit case $d=0$ and $V=\beta U$ in Assumption \ref{assumption}, then the term
\begin{align*} \beta U^\bot {\rm rot}\,v + {\rm div}\,(R\otimes v + v\otimes R) \,=\, \beta U^\bot {\rm rot}\,v \end{align*}
in \eqref{RS} has an oscillation effect on the solutions in the exterior disk $\Omega=D$ at least when $\lambda=0$. Indeed, for the solutions to \eqref{RS} with $\lambda=0$, this effect leads to a faster spatial decay of the nonradial part compared with the case $\beta=0$ (i.e., the Stokes equations), and this observation is an important step in \cite{HW} in proving the existence of the Navier-Stokes flows around $\beta U$ in the exterior disk when the rotation $\beta$ is large, as explained in Remark \ref{remark.assumption} (1). However, in contrast to the stationary problem, the situation becomes more complicated for the nonstationary problem, which requires the analysis of \eqref{RS} for nonzero $\lambda\in\mathbb{C}\setminus\{0\}$, since there is an interaction between the two oscillation effects coming from the terms $\lambda v$ and $\beta U^\bot {\rm rot}\,v$ in \eqref{RS}. In fact, even in the exterior disk, a detailed analysis of the representation of the resolvent operator suggests the existence of a time-frequency regime, which we call the {\it nearly-resonance regime}, where the oscillation effect from $\beta U^\bot {\rm rot}\,v$ is drastically weakened by the one from $\lambda v$, and hence a $\beta$-singularity appears in the operator norm of the resolvent. The existence of the nearly-resonance regime implies that the stability of the $\beta U$-type flows is sensitive to perturbations of the domain. This is the reason why the distance $d$ between the fluid domain $\Omega$ and the exterior disk is assumed to be small depending on $\beta$ in Theorem \ref{maintheorem}, and therefore our argument is applicable only to small perturbations of the exterior disk. It is unclear whether the condition in Theorem \ref{maintheorem} that $d$ be algebraically small in $\beta$ can be relaxed. Additionally, Lemma \ref{lem.est.F} in Appendix \ref{app.proof.prop.est.F} implies that the nearly-resonance regime lies in the annulus $e^{-\frac{c}{\beta^2}} \le |\lambda| \le e^{-\frac{c'}{\beta}}$ in the complex plane. As far as the author knows, the existence of such a time-frequency regime and its qualitative analysis are new. }
This paper is organized as follows. In Section \ref{sec.pre} we recall some basic facts from vector calculus in polar coordinates, and derive the resolvent estimate to \eqref{RS} when $|\lambda|\ge O(\beta^2e^{-\frac{1}{6\beta}})$ by a standard energy method. In Section \ref{sec.RSed} the resolvent problem is discussed for the case $0<|\lambda| < e^{-\frac{1}{6\beta}}$. In Subsections \ref{subsec.RSed.f}, \ref{subsec.RSed.divF}, and \ref{subsec.RSed.b} we derive the estimates to the problem \eqref{RSed} by using the representation formula. The results in Subsections \ref{subsec.RSed.f}--\ref{subsec.RSed.b} are applied in Subsection \ref{apriori2}, where the resolvent estimate to \eqref{RS} is established in the {\color{black} exceptional} region $0<|\lambda| < e^{-\frac{1}{6\beta}}$. Section \ref{sec.maintheorem} is devoted to the proof of Theorem \ref{maintheorem}.
\section{Preliminaries}\label{sec.pre}
This section is devoted to the preliminary analysis of the resolvent problems \eqref{RS}
and \eqref{RSed} in the introduction. In Subsections \ref{op.s in polar coord.} and \ref{biotsavart} we recall some basic facts from vector calculus in polar coordinates. In Subsection \ref{apriori1} we show that the resolvent estimates in \eqref{resolvent estimates.0} are valid if the resolvent parameter $\lambda$ satisfies $|\lambda|\ge O(\beta^2e^{-\frac{1}{6\beta}})$. Let us recall that $D$ denotes the exterior unit disk $\mathbb{R}^2\setminus\overline{B_1(0)}=\{x\in\mathbb{R}^2~|~|x|>1\}$ as in the introduction.
\subsection{Vector calculus in polar coordinates and Fourier series}\label{op.s in polar coord.}
We introduce the usual polar coordinates on $D$. Set
{\allowdisplaybreaks \begin{align*} & x_1 \,=\, r\cos \theta\,,~~~~~~ x_2 \,=\, r\sin \theta\,,~~~~~~~~
r\,=\, |x| \ge 1\,,~~~~\theta\in [0,2\pi)\,,\\ &
{\bf e}_r \,=\, \frac{x}{|x|}\,,~~~~~~
{\bf e}_\theta \,=\, \frac{x^\bot }{|x|} \,=\, \partial_\theta {\bf e}_r\,. \end{align*} }
Let $v=(v_1,v_2)^\top$ be a vector field defined on $D$. Then we set
\begin{align*} & v\,=\, v_r {\bf e}_r + v_\theta {\bf e}_\theta\,,~~~~~~~~ v_r \,=\, v\cdot {\bf e}_r\,,~~~~~~ v_\theta \,=\, v\cdot {\bf e}_\theta\,. \end{align*}
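As a simple illustration, the flux carrier $W$ and the rotating flow $U$ in the introduction are written in these coordinates as
\begin{align*}
W \,=\, \frac1r\, {\bf e}_r\,,~~~~~~ U \,=\, \frac1r\, {\bf e}_\theta\,,~~~~~~ U^\bot \,=\, -\frac1r\, {\bf e}_r\,,
\end{align*}
and the last expression will be used in the energy computation in Subsection \ref{apriori1}.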
The following formulas will be used:
\begin{align} {\rm div}\,v & \,=\, \partial_1 v_1 + \partial_2 v_2 \,=\, \frac1r \partial_r (r v_r) + \frac1r \partial_\theta v_\theta\,, \label{polar.div} \\ {\rm rot}\,v & \, = \, \partial_1 v_2 - \partial_2 v_1 \,=\, \frac1r \partial_r (r v_\theta) - \frac1r \partial_\theta v_r \,, \label{polar.rot} \\
|\nabla v |^2 & \,=\,
|\partial_r v_r|^2 + |\partial_r v_\theta|^2 + \frac{1}{r^2}
\big( |\partial_\theta v_r - v_\theta |^2
+ |v_r + \partial_\theta v_\theta |^2 \big )\,, \label{polar.grad} \end{align}
and
\begin{align} \begin{split}\label{polar.laplace} -\Delta v & \,=\, \Big( -\partial_r \big( \frac1r \partial_r (r v_r ) \big) - \frac{1}{r^2} \partial_\theta^2 v_r + \frac{2}{r^2} \partial_\theta v_\theta \Big) {\bf e}_r \\ &\quad + \Big( - \partial_r \big ( \frac1r \partial_r (r v_\theta ) \big ) - \frac{1}{r^2} \partial_\theta^2 v_\theta - \frac{2}{r^2} \partial_\theta v_r \Big) {\bf e}_\theta\,. \end{split} \end{align}
The formulas
\begin{align*} \begin{split}
{\bf e}_r \cdot \nabla v \,=\, (\partial_r v_r ) {\bf e}_r + (\partial_r v_\theta) {\bf e}_\theta\,,~~~~~~ {\bf e}_\theta \cdot \nabla v \,=\, \frac{\partial_\theta v_r - v_\theta}{r} {\bf e}_r + \frac{\partial_\theta v_\theta + v_r}{r} {\bf e}_\theta \end{split} \end{align*}
imply the following equality:
\begin{align}\label{polar.e_theta}
x^\bot \cdot \nabla v - v^\bot & = |x| \big ( {\bf e}_\theta \cdot \nabla v \big ) - \big ( v_r {\bf e}_r^\bot + v_\theta {\bf e}_\theta^\bot \big ) \nonumber \\ & = (\partial_\theta v_r - v_\theta )\, {\bf e}_r \, + \, (\partial_\theta v_\theta + v_r) \, {\bf e}_\theta - \big ( v_r {\bf e}_r^\bot + v_\theta {\bf e}_\theta^\bot \big ) \nonumber \\ & = \partial_\theta v_r \, {\bf e}_r + \partial_\theta v_\theta \, {\bf e}_\theta\,. \end{align}
This relation has been used in Remark \ref{remark.assumption1} in the introduction. For each $n\in \mathbb{Z}$, we denote by $\mathcal{P}_n$ the projection on the Fourier mode $n$ with respect to the angular variable $\theta$:
\begin{align} \mathcal{P}_n v \,=\, v_{r,n}(r) e^{i n \theta} {\bf e}_r + v_{\theta,n}(r) e^{i n \theta} {\bf e}_\theta\,, \label{def.P_n} \end{align}
where
\begin{align*} v_{r,n} (r) & \,=\, \frac{1}{2\pi} \int_0^{2\pi} v_r(r \cos \theta, r\sin\theta) e^{-i n \theta} \,{\rm d} \theta\,,\\ v_{\theta,n} (r) & \,=\, \frac{1}{2\pi} \int_0^{2\pi} v_\theta(r \cos \theta, r\sin\theta) e^{-i n \theta} \,{\rm d} \theta\,. \end{align*}
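In particular, combining \eqref{polar.e_theta} with \eqref{def.P_n}, we obtain, at least for smooth $v$, the expansion
\begin{align*}
x^\bot \cdot \nabla v - v^\bot \,=\, \sum_{n\in\mathbb{Z}} in\, \mathcal{P}_n v\,,
\end{align*}
which is the identity referred to in Remark \ref{remark.assumption1} in the introduction.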
We also set for $m\in \mathbb{N}\cup \{0\}$,
\begin{align}
\mathcal{Q}_m v = \sum_{|n|=m+1}^\infty \mathcal{P}_n v\,.\label{def.Q_m} \end{align}
For notational simplicity we often write $v_n$ instead of $\mathcal{P}_n v$. Each $\mathcal{P}_n$ defines an orthogonal projection in $L^2 (D)^2$. From \eqref{polar.grad} and \eqref{def.P_n}, for $n\in \mathbb{Z}$ and $v$ in~$W^{1,2}(D)^2$ we see that
\begin{align*}
\| \nabla v \|_{L^2 (D)}^2 &\,=\, \sum_{n\in \mathbb{Z}} \| \nabla \mathcal{P}_n v \|_{L^2 (D)}^2\,, \\
|\nabla \mathcal{P}_n v |^2 &\,=\,
|\partial_r v_{r,n}|^2
+ \frac{1+n^2}{r^2} |v_{r,n}|^2
+ |\partial _r v_{\theta,n}|^2
+ \frac{1+n^2}{r^2}|v_{\theta,n}|^2 - \frac{4 n}{r^2} {\rm Im}( v_{\theta,n} \overline{v_{r,n}} )\,.
\end{align*}
In particular, we have
\begin{align}
| \nabla \mathcal{P}_n v |^2 \geq |\partial_r v_{r,n}|^2 + \frac{(|n|-1)^2}{r^2} |v_{r,n}|^2 + |\partial _r v_{\theta,n}|^2 + \frac{(|n|-1)^2}{r^2}|v_{\theta,n}|^2 \,,\label{polar.grad.n'} \end{align}
and thus, from the definition of $\mathcal{Q}_m$ in \eqref{def.Q_m}, we have for $m\in \mathbb{N}\cup \{0\}$,
\begin{align}
\| \nabla \mathcal{Q}_m v\|_{L^2(D)}^2
\ge \| \partial_r (\mathcal{Q}_m v)_r \|_{L^2(D)}^2
+ \| \partial_r (\mathcal{Q}_m v)_\theta \|_{L^2(D)}^2
+ m^2 \big\| \frac{\mathcal{Q}_m v}{|x|} \big\|_{L^2(D)}^2\,. \label{polar.grad.n''} \end{align}
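For instance, \eqref{polar.grad.n''} with $m=1$ gives, for $v\in W^{1,2}(D)^2$, the Hardy-type bound
\begin{align*}
\big\| \frac{\mathcal{Q}_1 v}{|x|} \big\|_{L^2(D)} \,\le\, \| \nabla \mathcal{Q}_1 v \|_{L^2(D)} \,\le\, \| \nabla v \|_{L^2(D)}\,,
\end{align*}
which substitutes for the missing inequality \eqref{hardy} on the Fourier modes $|n|\ge 2$ and will be used in the energy computation in Subsection \ref{apriori1}.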
\subsection{The Biot-Savart law in polar coordinates}\label{biotsavart}
For a given scalar field $\omega$ in $D$, the streamfunction $\psi$ is formally defined as the solution to the Poisson equation: $-\Delta \psi = \omega$ in $D$ and $\psi =0$ on $\partial D$. For $n\in \mathbb{Z}$ and $\omega\in L^2(D)$ we set $\mathcal{P}_n \omega = \mathcal{P}_n \omega(r,\theta)$ and $\omega_n = \omega_n(r)$ as
\begin{align} \begin{split} \mathcal{P}_n \omega \,=\, \bigg( \frac{1}{2\pi} \int_0^{2\pi} \omega (r \cos s, r\sin s) e^{- i n s} \,{\rm d} s \bigg) e^{i n \theta} \,,~~~~~~ \omega_n \,=\, \big ( \mathcal{P}_n \omega \big ) e^{-i n \theta}\,. \label{def.w_n} \end{split} \end{align}
From the Poisson equation in polar coordinates, we see that each $n$-Fourier mode of $\psi$ satisfies the following ODE:
\begin{align} -\frac{\,{\rm d}^2 \psi_n}{\,{\rm d} r^2} - \frac{1}{r} \frac{\,{\rm d} \psi_n}{\,{\rm d} r} + \frac{n^2}{r^2} \psi_n \,=\, \omega_n\,,~~~~r>1\,, ~~~~~~~~\psi_n (1) =0\,. \label{eq.stream} \end{align}
Let $|n|\ge 1$. Then the solution $\psi_n= \psi_n[\omega_n]$ to \eqref{eq.stream} decaying at spatial infinity is given by
\begin{align*}
\begin{split} \psi_n[\omega_n] (r)
& \,=\, \frac{1}{2 |n|}
\bigg(-\frac{d_n [\omega_n] }{r^{|n|} }
+ \frac{1}{r^{|n|}} \int_1^r s^{1+|n|} \omega_n (s) \,{\rm d} s
+ r^{|n|}\int_r^\infty s^{1-|n|} \omega_n (s) \,{\rm d} s \bigg ) \,,\\
d_n [\omega_n] & \,=\, \int_1^\infty s^{1-|n|} \omega_n (s) \,{\rm d} s\,. \end{split} \end{align*}
The following formula $V_n[\omega_n]$ is called the Biot-Savart law for $\mathcal{P}_n \omega$:
\begin{align}\label{def.V_n} \begin{split} &V_n [\omega_n] \,=\, V_{r,n} [\omega_n](r) e^{i n \theta}{\bf e}_r + V_{\theta,n} [\omega_n](r) e^{i n\theta} {\bf e}_\theta\,,\\ & V_{r,n} [\omega_n] \, = \, \frac{i n}{r} \psi_n [\omega_n] \,, ~~~~~~ V_{\theta,n} [\omega_n] \, = \, - \frac{\,{\rm d}}{\,{\rm d} r} \psi_n [\omega_n]\,. \end{split} \end{align}
The velocity $V_n[\omega_n]$ is well defined at least when $r^{1-|n|} \omega_n\in L^1 ((1,\infty))$, and it is straightforward to see that \begin{align}\label{V_n} \begin{split} &{\rm div}\,V_n [\omega_n] \,=\, 0\,,~~~~ {\rm rot}\,V_n [\omega_n] \,=\, \mathcal{P}_n \omega~~~~~~{\rm in}~~D\,,\\ & {\bf e}_r \cdot V_n [\omega_n] \,=\, 0 ~~~~~~{\rm on}~~\partial D\,. \end{split} \end{align}
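Indeed, \eqref{V_n} can be verified directly from the formulas in Subsection \ref{op.s in polar coord.}: by \eqref{polar.div} and \eqref{def.V_n} we have
\begin{align*}
{\rm div}\,V_n [\omega_n] \,=\, \Big( \frac1r \frac{\,{\rm d}}{\,{\rm d} r} \big( in\, \psi_n[\omega_n] \big) - \frac{in}{r} \frac{\,{\rm d} \psi_n [\omega_n]}{\,{\rm d} r} \Big) e^{in\theta} \,=\, 0\,,
\end{align*}
while \eqref{polar.rot} together with the equation \eqref{eq.stream} yields
\begin{align*}
{\rm rot}\,V_n [\omega_n] \,=\, \Big( - \frac1r \frac{\,{\rm d}}{\,{\rm d} r} \Big( r \frac{\,{\rm d} \psi_n [\omega_n]}{\,{\rm d} r} \Big) + \frac{n^2}{r^2} \psi_n [\omega_n] \Big) e^{in\theta} \,=\, \omega_n\, e^{in\theta} \,=\, \mathcal{P}_n \omega\,,
\end{align*}
and the boundary condition in \eqref{V_n} follows from ${\bf e}_r \cdot V_n [\omega_n]\,|_{\partial D} \,=\, in\, \psi_n [\omega_n](1)\, e^{in\theta} \,=\, 0$.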
The condition $r^{1-|n|} \omega_n\in L^1 ((1,\infty))$ is automatically satisfied when $\omega\in L^2 (D)$ and~$|n|\geq 2$. When $|n|=1$, however, the integral in the definition of $\psi_n[\omega_n]$ does not converge absolutely for general $\omega\in L^2 ({\color{black} D})$. We can justify this integral for $|n|=1$ if $\omega$ is given in a rotation form $\omega={\rm rot}\,u$ with some $u\in W^{1,2}(D)^2$, since the integration by parts leads to the convergence of $\displaystyle \lim_{N\rightarrow \infty} \int_r^N \omega_n \,{\rm d} r$. Hence, for any $v \in L^2_\sigma ({\color{black} D}) \cap W^{1,2} ({\color{black} D})^2$, the $n$-mode $v_n = \mathcal{P}_n v$ can be expressed in terms of its vorticity $\omega_n$ by the formula \eqref{def.V_n} when $|n|\geq 1$. Before closing this subsection, we mention that the definition of $f_n$ differs according to whether $f$ is a vector $f=(f_1(x), f_2(x))^\top$ or scalar $f=f(x)$ function on $D$. The vector case is defined in \eqref{def.P_n}, while the scalar case is defined in \eqref{def.w_n}.
\subsection{A priori resolvent estimate by energy method} \label{apriori1}
In this subsection we study the energy estimate to the resolvent problem \eqref{RS}:
\begin{equation}\tag{RS} \left\{ \begin{aligned} \lambda v - \Delta v + \beta U^\bot {\rm rot}\,v + {\rm div}\,(R\otimes v + v\otimes R) + \nabla q &\,=\,f\,,~~~~x \in \Omega\,, \\ {\rm div}\,v &\,=\, 0\,,~~~~x \in \Omega\,, \\
v|_{\partial \Omega} &\,=\,0\,. \end{aligned}\right. \end{equation}
Here $\lambda\in \mathbb{C}$ is the resolvent parameter, the vector field $U$ is the rotating flow of \eqref{rotatingflow} in the introduction, and $\beta$ and $R$ are defined in Assumption \ref{assumption}. The first result of this subsection is the a priori estimates to \eqref{RS} obtained by the energy method. We recall that $D$ denotes the exterior disk $\{x\in\mathbb{R}^2~|~|x|>1\}$, and that {\color{black} $\gamma$ is the constant in Assumption \ref{assumption}.}
\begin{proposition}\label{prop.general.energy.est.resol.} Let $q\in(1,2]$, $f\in L^q(\Omega)^2$, and $\lambda\in\mathbb{C}$. Suppose that $v \in D(\mathbb{A}_V)$ is a solution to \eqref{RS}. {\color{black} Then there are constants $\beta_1\in(0,1)$ and $d_1\in(0,\frac14)$} depending only on {\color{black} $\Omega$ and $\gamma$} such that the following estimates hold.
\begin{align}
&{\rm Re}(\lambda) \| v\|_{L^2(\Omega)}^2 + \frac34
\|\nabla v\|_{L^2(\Omega)}^2 \nonumber \\ & \quad \le \beta
\big| \sum_{|n|=1} \big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)} \big|
+ C \|f\|_{L^q(\Omega)}^{\frac{2q}{3q-2}}
\|v\|_{L^2(\Omega)}^{\frac{4(q-1)}{3q-2}}\,, \label{est1.prop.general.energy.est.resol.} \\
&|{\rm Im}(\lambda)| \| v\|_{L^2(\Omega)}^{2} \le \frac14
\|\nabla v\|_{L^2(\Omega)}^{2}
+ \beta \big| \sum_{|n|=1} \big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)} \big| \nonumber \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + C
\|f\|_{L^q(\Omega)}^{\frac{2q}{3q-2}}
\|v\|_{L^2(\Omega)}^{\frac{4(q-1)}{3q-2}}\,, \label{est2.prop.general.energy.est.resol.} \end{align}
as long as $\beta\in(0,\beta_1)$ {\color{black} and $d\in(0, d_1)$}. The constant $C$ is independent of $\beta$ {\color{black} and $d$}. \end{proposition}
\begin{proof} Taking the $L^2(\Omega)$-inner product of the first equation of \eqref{RS} with $v$, we find
\begin{align}
& {\rm Re}(\lambda) \| v\|_{L^2(\Omega)}^{2}
+ \| \nabla v\|_{L^2(\Omega)}^{2} \nonumber \\ &\,=\, - \beta {\rm Re}\langle U^{\bot} {\rm rot}\,v, v \rangle_{L^2(\Omega)} + {\rm Re} \langle R \otimes v + v \otimes R, \nabla v \rangle_{L^2(\Omega)} + {\rm Re} \langle f, v \rangle_{L^2(\Omega)}\,, \label{eq1.proof.prop.general.energy.est.resol.} \\
& {\rm Im}(\lambda) \| v\|_{L^2(\Omega)}^{2} \nonumber \\ &\,=\, - \beta {\rm Im}\langle U^{\bot} {\rm rot}\,v, v \rangle_{L^2(\Omega)} + {\rm Im} \langle R \otimes v + v \otimes R, \nabla v \rangle_{L^2(\Omega)} + {\rm Im} \langle f, v \rangle_{L^2(\Omega)}\,. \label{eq2.proof.prop.general.energy.est.resol.} \end{align}
After decomposing the domain $\Omega = (\Omega \setminus D)\,\cup\,D$, from $U^{\bot}=-\frac{{\bf e_{r}}}{r}$ on $D$ we have
\begin{align}
\beta |\langle U^{\bot} {\rm rot}\,v, v \rangle_{L^2(\Omega)}| \le
\beta |\langle U^{\bot} {\rm rot}\,v, v \rangle_{L^2(\Omega \setminus D)}|
+ \beta \big|\big\langle {\rm rot}\,v, \frac{v_{r}}{|x|} \big\rangle_{L^2(D)}\big|\,. \label{est1.proof.prop.general.energy.est.resol.} \end{align}
Then the Poincar\'{e} inequality on $\Omega \setminus D$ implies that
\begin{align}
\beta |\langle U^{\bot} {\rm rot}\,v, v \rangle_{L^2(\Omega \setminus D)}|
\le C \beta \| \nabla v \|_{L^2(\Omega)}^2\,, \label{est2.proof.prop.general.energy.est.resol.} \end{align}
and by applying the Fourier series expansion on $D$, we see from \eqref{def.P_n} and \eqref{def.w_n} that
\begin{align}
&\big|\big\langle {\rm rot}\,v, \frac{v_{r}}{|x|} \big\rangle_{L^2(D)}\big| \,=\,
\big| \big( \sum_{|n|=1} + \sum_{n \in \mathbb{Z} \setminus \{ \pm1\}} \big)
\big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)} \big| \nonumber \\
& \le \big| \sum_{|n|=1} \big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)} \big| + \sum_{n \in \mathbb{Z} \setminus \{ \pm1\}}
\|{\rm rot}\,v_{n}\|_{L^2(D)}
\big\|\frac{v_{r,n}}{|x|}\big\|_{L^2(D)}\,. \label{est3.proof.prop.general.energy.est.resol.} \end{align}
Then the {\color{black} inequalities \eqref{polar.grad.n'} for $n=0$ and \eqref{polar.grad.n''} for $m=1$ ensure} that
\begin{align} \sum_{n \in \mathbb{Z} \setminus \{ \pm1\}}
\|{\rm rot}\,v_{n}\|_{L^2(D)}
\big\|\frac{v_{r,n}}{|x|}\big\|_{L^2(D)}
& \le C \sum_{n \in \mathbb{Z} \setminus \{ \pm1\}} \|\nabla v_{n}\|_{L^2(D)}^2 \nonumber \\
& \le C \|\nabla v\|_{L^2(\Omega)}^2\,. \label{est4.proof.prop.general.energy.est.resol.} \end{align}
Inserting \eqref{est2.proof.prop.general.energy.est.resol.}--\eqref{est4.proof.prop.general.energy.est.resol.} into \eqref{est1.proof.prop.general.energy.est.resol.} we obtain
\begin{align}
\beta |\langle U^{\bot} {\rm rot}\,v, v \rangle_{L^2(\Omega)}| \le
C_1 \beta \|\nabla v\|_{L^2(\Omega)}^2 + \beta
\big| \sum_{|n|=1} \big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)} \big|\,. \label{est5.proof.prop.general.energy.est.resol.} \end{align}
Next by \eqref{remainder} in Assumption \ref{assumption} we have
\begin{align}
|\langle R \otimes v + v \otimes R, \nabla v \rangle_{L^2(\Omega)}| & \le {\color{black} C_2 d}
\,\| \nabla v\|_{L^2(\Omega)}^2\,, \label{est6.proof.prop.general.energy.est.resol.} \end{align}
where the inequality $\||x|^{-(1+\gamma)} v\|_{L^2} \le C \| \nabla v \|_{L^2}$ is applied. The constant $C_2$ depends only on $\gamma$. By the Gagliardo-Nirenberg inequality we see that for $q\in(1,2]$ and $q'= \frac{q}{q-1}$,
\begin{align}
|\langle f, v \rangle_{L^2(\Omega)}|
& \le C \|f\|_{L^q(\Omega)} \|v\|_{L^{q'}(\Omega)} \nonumber \\
& \le C \|f\|_{L^q(\Omega)} \|v\|_{L^2(\Omega)}^{2(1-\frac1q)}
\|\nabla v\|_{L^2(\Omega)}^{\frac2q-1} \nonumber \\
& \le C \|f\|_{L^q(\Omega)}^{\frac{2q}{3q-2}}
\|v\|_{L^2(\Omega)}^{\frac{4(q-1)}{3q-2}}
+ \frac18 \|\nabla v\|_{L^2(\Omega)}^{2}\,, \label{est7.proof.prop.general.energy.est.resol.} \end{align}
where the Young inequality is applied in the last line. {\color{black} Now we take $\beta_1\in(0,1)$ and $d_1\in(0,\frac14)$ small enough so that
\begin{align} C_1 \beta_1 + C_2 d_1 \le \frac18 \label{est8.proof.prop.general.energy.est.resol.} \end{align}
holds}. Then the assertions \eqref{est1.prop.general.energy.est.resol.} and \eqref{est2.prop.general.energy.est.resol.} are proved by inserting \eqref{est5.proof.prop.general.energy.est.resol.}--\eqref{est7.proof.prop.general.energy.est.resol.} into \eqref{eq1.proof.prop.general.energy.est.resol.} and \eqref{eq2.proof.prop.general.energy.est.resol.}, and using the condition \eqref{est8.proof.prop.general.energy.est.resol.}. This completes the proof. \end{proof}
\noindent As can be seen from Proposition \ref{prop.general.energy.est.resol.}, the key step in closing the energy computation is to estimate the following term appearing in the right-hand sides of \eqref{est1.prop.general.energy.est.resol.} and \eqref{est2.prop.general.energy.est.resol.}:
\begin{align*}
\big| \sum_{|n|=1} \big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)} \big|\,. \end{align*}
Note that the Hardy inequality in polar coordinates \eqref{polar.grad.n'} cannot be applied to this term. The next proposition shows that this term can be handled if $\lambda$ in \eqref{RS} satisfies $|\lambda|\ge O(\beta^2 e^{-\frac{1}{6\beta}})$.
\begin{proposition}\label{prop.laege.lambda.energy.est.resol.} {\color{black} Let $\beta_1$ and $d_1$ be the constants in Proposition \ref{prop.general.energy.est.resol.}, and let $d \in (0, d_1)$. } Then the following statements hold.
\noindent {\rm (1)} Fix a positive number $\beta_2\in(0,\min\{\frac1{12},\beta_1\})$. Then the set
\begin{align}\label{set.prop.laege.lambda.energy.est.resol.} \mathcal{S}_\beta \,=\,
\big\{{\color{black} \lambda} \in \mathbb{C}~|~
|{\rm Im}(\lambda)| > -{\rm Re}(\lambda) + 12 e^{\frac{1}{e}} \beta^2 e^{-\frac{1}{6\beta}} \big\} \end{align}
is included in the resolvent set $\rho(-\mathbb{A}_V)$ for any $\beta\in (0,\beta_2)$.
\noindent {\rm (2)} Let $q\in(1,2]$ and $f\in L^2_\sigma(\Omega) \cap L^q(\Omega)^2$. Then we have
\begin{equation}\label{est.laege.lambda.energy.est.resol.} \begin{split}
\|(\lambda+{\mathbb A}_V)^{-1} f\|_{L^2(\Omega)} & \le
C |\lambda|^{-\frac32+\frac1q}
\|f\|_{L^q(\Omega)}\,,~~~~~~ \lambda\in {\color{black} \mathcal{S}_\beta \cap \mathcal{B}_{\frac12 e^{-\frac{1}{6\beta}}}(0)^{{\rm c}}}\,, \\
\|\nabla (\lambda+{\mathbb A}_V)^{-1} f\|_{L^2(\Omega)} & \le
C |\lambda|^{-1+\frac1q}
\|f\|_{L^q(\Omega)}\,,~~~~~~ \lambda\in {\color{black} \mathcal{S}_\beta \cap \mathcal{B}_{\frac12 e^{-\frac{1}{6\beta}}}(0)^{{\rm c}}}\,, \end{split} \end{equation}
as long as $\beta\in (0,\beta_2)$. {\color{black} Here} the constant $C$ is independent of $\beta$ {\color{black} and $d$}, {\color{black} and $\mathcal{B}_\rho(0)\subset\mathbb{C}$ denotes the disk centered at the origin with radius $\rho>0$.} \end{proposition}
\begin{proof}
(1) Let us denote the function space $L^q(\Omega)$ by $L^q$ in this proof to simplify notation. Let $|n|=1$, $\beta\in (0,\beta_2)$, and $v \in D(\mathbb{A}_V)$ solve \eqref{RS}. Define a function $\Theta=\Theta(T)$ by
\begin{align} \Theta(T) \,=\, \int_0^T \frac{1}{\tau} e^{-\frac{1}{\tau}} \,{\rm d} \tau\,,~~~~T>e\,, \label{ThetaT} \end{align}
which satisfies the following lower and upper bounds:
\begin{align} e^{-\frac{1}{e}} \log T \le \Theta(T) \le \log T\,,~~~~T>e\,, \label{est.ThetaT} \end{align}
which can be easily checked. Then, as is shown in \cite[Lemma 3.26]{Ma1}, we have
\begin{align}
\beta \big|\big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)}\big| \le
\frac{\beta}{T} \| v\|_{L^2}\ \| \nabla v\|_{L^2} + \beta \Theta(T)
\| \nabla v\|_{L^2}^2\,,~~~~T>e\,. \label{est.logT} \end{align}
The proof is done by extending $v \in D(\mathbb{A}_V)$ by zero to the whole space $\mathbb{R}^2$ and using the nondegenerate condition $\{x\in\mathbb{R}^2~|~|x|\le \frac12\} \subset \Omega^{\rm c}$. Then the Young inequality yields
\begin{align}\label{est.T}
\frac{\beta}{T} \| v\|_{L^2} \| \nabla v\|_{L^2} & \le
\frac{\beta \Theta(T)}{2} \| \nabla v\|_{L^2}^2
+ \frac{\beta}{2T^2 \Theta(T)} \| v\|_{L^2}^2\,. \end{align}
Inserting \eqref{est.logT} and \eqref{est.T} into \eqref{est1.prop.general.energy.est.resol.} and \eqref{est2.prop.general.energy.est.resol.} in Proposition \ref{prop.general.energy.est.resol.}, we see that
\begin{align} &\big( {\rm Re}(\lambda) - \frac{\beta}{2T^2 \Theta(T)} \big)
\|v\|_{L^2}^2
+ (\frac34 - \frac{3\beta \Theta(T)}{2}) \| \nabla v\|_{L^2}^2
\le C \|f\|_{L^q}^{\frac{2q}{3q-2}}
\|v\|_{L^2}^{\frac{4(q-1)}{3q-2}}\,, \label{est1.prop1.laege.lambda.energy.est.resol.} \\
& \big( |{\rm Im}(\lambda)| - \frac{\beta}{2T^2 \Theta(T)} \big)
\| v\|_{L^2}^2 \le (\frac14 + \frac{3\beta \Theta(T)}{2})
\| \nabla v\|_{L^2}^{2}
+ C \|f\|_{L^q}^{\frac{2q}{3q-2}}
\|v\|_{L^2}^{\frac{4(q-1)}{3q-2}}\,. \label{est2.prop1.laege.lambda.energy.est.resol.} \end{align}
Then \eqref{est1.prop1.laege.lambda.energy.est.resol.} and \eqref{est2.prop1.laege.lambda.energy.est.resol.} lead to
\begin{align}
&\big(|{\rm Im}(\lambda)| + {\rm Re}(\lambda) - \frac{\beta}{T^2 \Theta(T)} \big)
\|v\|_{L^2}^2
+ (\frac12 - 3\beta \Theta(T)) \| \nabla v\|_{L^2}^{2} \nonumber \\
& \le C \|f\|_{L^q}^{\frac{2q}{3q-2}}
\|v\|_{L^2}^{\frac{4(q-1)}{3q-2}}\,. \label{est3.prop1.laege.lambda.energy.est.resol.} \end{align}
Now let us take $T=e^{\frac{1}{12\beta}}$. Since $T>e$ by the condition $\beta\in(0,\frac{1}{12})$, from \eqref{est.ThetaT} we have
\begin{align} 3\beta \Theta(T) \le 3 \beta \log T = \frac14~~~~~~ {\rm and}~~~~~~ \frac{\beta}{T^2 \Theta(T)} \le \frac{e^{\frac{1}{e}} \beta}{T^2 \log T} \,=\, 12 e^{\frac{1}{e}} \beta^2 e^{-\frac{1}{6\beta}}\,. \label{est4.prop1.laege.lambda.energy.est.resol.} \end{align}
By inserting \eqref{est4.prop1.laege.lambda.energy.est.resol.} into \eqref{est3.prop1.laege.lambda.energy.est.resol.} we obtain the assertion $\mathcal{S}_\beta \subset \rho(-\mathbb{A}_V)$. \\
\noindent (2) Let {\color{black} $\lambda\in \mathcal{S}_\beta \cap \mathcal{B}_{\frac12 e^{-\frac{1}{6\beta}}}(0)^{{\rm c}}$. If additionally $\lambda\in\{z\in\mathbb{C}~|~ {\rm Re}(z) < 0\}$ then we have}
\begin{align*}
|{\rm Im}(\lambda)| \ge \frac{\beta}{T^2 \Theta(T)}~~~~~~ {\rm and}~~~~~~
|{\rm Im}(\lambda)| \le |\lambda| \le \sqrt{2} |{\rm Im}(\lambda)|\,. \end{align*}
Then we see from \eqref{est2.prop1.laege.lambda.energy.est.resol.} and \eqref{est3.prop1.laege.lambda.energy.est.resol.} that,
\begin{align}
|\lambda| \| v\|_{L^2}^2 & \le
\frac{6\sqrt{2}}{8} \| \nabla v\|_{L^2}^{2}
+ 2\sqrt{2} C \|f\|_{L^q}^{\frac{2q}{3q-2}}
\|v\|_{L^2}^{\frac{4(q-1)}{3q-2}}\,, \label{est5.prop1.laege.lambda.energy.est.resol.} \\
\| \nabla v\|_{L^2}^{2}
&\le 4C \|f\|_{L^q}^{\frac{2q}{3q-2}} \|v\|_{L^2}^{\frac{4(q-1)}{3q-2}}\,, \label{est6.prop1.laege.lambda.energy.est.resol.} \end{align}
where the constant $C$ is independent of $\beta$ {\color{black} and $d$}.
{\color{black} On the other hand, if additionally $\lambda\in\{z\in\mathbb{C}~|~ {\rm Re}(z) \ge 0\}$ then we have from \eqref{est4.prop1.laege.lambda.energy.est.resol.},
\begin{align*}
|{\rm Im}(\lambda)| + {\rm Re}(\lambda) - \frac{\beta}{T^2 \Theta(T)} \ge
|\lambda| - 12 e^{\frac{1}{e}} \beta^2 e^{-\frac{1}{6\beta}}
\ge \frac{|\lambda|}{2}\,, \end{align*}
since $12 e^{\frac{1}{e}} \beta^2 e^{-\frac{1}{6\beta}} \le 24 e^{\frac{1}{e}} \beta^2 |\lambda| \le \frac{|\lambda|}{2}$ holds by $\beta\in(0,\frac{1}{12})$. Then from \eqref{est3.prop1.laege.lambda.energy.est.resol.} we see that,
\begin{align}
|\lambda| \| v\|_{L^2}^2 + \frac12 \| \nabla v\|_{L^2}^{2} & \le
2C \|f\|_{L^q}^{\frac{2q}{3q-2}} \|v\|_{L^2}^{\frac{4(q-1)}{3q-2}} \label{est7.prop1.laege.lambda.energy.est.resol.}\,, \end{align}
where the constant $C$ is independent of $\beta$} {\color{black} and $d$}. The estimates in \eqref{est.laege.lambda.energy.est.resol.} now follow from \eqref{est5.prop1.laege.lambda.energy.est.resol.}--\eqref{est6.prop1.laege.lambda.energy.est.resol.} and from \eqref{est7.prop1.laege.lambda.energy.est.resol.}, respectively in the cases ${\rm Re}(\lambda)<0$ and ${\rm Re}(\lambda)\ge0$: absorbing the factor $\|v\|_{L^2}^{\frac{4(q-1)}{3q-2}}$ into the left-hand side yields $\|v\|_{L^2} \le C |\lambda|^{-\frac32+\frac1q} \|f\|_{L^q}$, and inserting this bound into the estimate for $\|\nabla v\|_{L^2}^2$ gives $\|\nabla v\|_{L^2} \le C |\lambda|^{-1+\frac1q} \|f\|_{L^q}$. This completes the proof of Proposition \ref{prop.laege.lambda.energy.est.resol.}. \end{proof}
\section{Resolvent analysis in a region exponentially close to the origin} \label{sec.RSed}
The resolvent analysis in Proposition \ref{prop.laege.lambda.energy.est.resol.} is applicable to the problem \eqref{RS} only when the resolvent parameter $\lambda\in\mathbb{C}$ satisfies $|\lambda|\ge e^{-\frac{1}{a\beta}}$ for some $a\in(1,\infty)$, and we have taken $a=6$ in the proof for simplicity. This restriction is essentially due to the unavailability of the Hardy inequality in two-dimensional exterior domains. In fact, in the proof of Proposition \ref{prop.laege.lambda.energy.est.resol.}, we rely on the following inequality singular in $T\gg 1$:
\begin{align*}
\big| \sum_{|n|=1} \big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)} \big| \le
\frac{1}{T} \| v\|_{L^2(\Omega)}\ \| \nabla v\|_{L^2(\Omega)}
+ \log T \| \nabla v\|_{L^2(\Omega)}^2\,, \end{align*}
as a substitute for the Hardy inequality, and this leads to the lack of information about the spectrum of $-\mathbb{A}_V$ in the region $0<|\lambda| \le O(e^{-\frac{1}{\beta}})$. Here we set $D=\{x\in\mathbb{R}^2~|~|x|>1\}$.
To perform the resolvent analysis in the region exponentially close to the origin, we first observe that a solution $(v, q)$ to \eqref{RS} satisfies the following problem in the exterior disk $D$:
\begin{equation}\tag{RS$^{\rm ed}$} \left\{ \begin{aligned} \lambda w - \Delta w + \beta U^\bot {\rm rot}\,w + \nabla r
&\,=\,(-{\rm div}\,(R\otimes v + v\otimes R) + f )|_{D}\,,~~~~x\in D\,, \\ {\rm div}\,w &\,=\, 0\,,~~~~x\in D\,, \\
w|_{\partial D} &\,=\,v|_{\partial D}\,. \end{aligned}\right. \end{equation}
Then, thanks to the symmetry, we can use a solution formula for \eqref{RSed} written in polar coordinates and study the a priori estimate for $w=v|_{D}$. To simplify the calculation, we decompose the linear problem \eqref{RSed} into three parts \eqref{RSed.f}, \eqref{RSed.divF}, and \eqref{RSed.b}, which are respectively introduced in Subsections \ref{subsec.RSed.f}, \ref{subsec.RSed.divF}, and \ref{subsec.RSed.b}. Then we derive the estimates to each problem in the corresponding subsections, and finally we collect them in Subsection \ref{apriori2} in order to establish the resolvent estimate to \eqref{RS} when $0<|\lambda|<e^{-\frac{1}{6\beta}}$.
\subsection{Problem I: External force $f$ and Dirichlet condition} \label{subsec.RSed.f}
In this subsection we study the following resolvent problem for $(w,r)=(w^{\rm ed}_{f}, r^{\rm ed}_{f})$:
\begin{equation}\tag{RS$^{\rm ed}_{f}$}\label{RSed.f} \left\{ \begin{aligned} \lambda w - \Delta w + \beta U^{\bot} {\rm rot}\,w + \nabla r & \,=\, f\,,~~~~x \in D\,, \\ {\rm div}\,w &\,=\, 0\,,~~~~x \in D\,, \\
w|_{\partial D} & \,=\, 0\,. \end{aligned}\right. \end{equation}
In particular, we are interested in the estimates for the $\pm 1$-Fourier modes of $w^{\rm ed}_{f}$. Although the $L^p\mathchar`-L^q$ estimates to \eqref{RSed.f} are already proved in \cite{Ma1}, we revisit this problem here in order to study the $\beta$-dependence in these estimates, which is one of the most important steps for the energy computation when $0<|\lambda|<e^{-\frac{1}{6\beta}}$.
Let us recall the representation formula established in \cite{Ma1} for the solution to \eqref{RSed.f} in each Fourier mode. Fix $n \in \mathbb{Z}\setminus\{0\}$ and $\lambda\in\mathbb{C}\setminus \overline{\mathbb{R}_{-}}$, $\,\overline{\mathbb{R}_{-}}=(-\infty,0]$. Then, by applying the Fourier mode projection $\mathcal{P}_n$ to \eqref{RSed.f} and using the invariant property $\mathcal{P}_n(U^\bot {\rm rot}\,w)=U^\bot {\rm rot}\,\mathcal{P}_n w$ in \cite[Lemma 2.9]{Ma1}, we observe that the $n$-mode $w_n=\mathcal{P}_n w$ solves
\begin{equation}\tag{RS$^{\rm ed}_{f,n}$}\label{nFourier.RSed.f} \left\{ \begin{aligned} \lambda w_n - \Delta w_n + \beta U^{\bot} {\rm rot}\,w_n + \mathcal{P}_n \nabla r & \,=\, {\color{black} \mathcal{P}_n f} \,,~~~~x \in D\,, \\ {\rm div}\,w_n &\,=\, 0\,,~~~~x \in D\,, \\
w_n|_{\partial D} & \,=\, 0\,. \end{aligned}\right. \end{equation}
Since the formula in \cite{Ma1} is written in terms of some special functions, we introduce these definitions here. The modified Bessel function of first kind $I_\mu(z)$ of order $\mu$ is defined as
\begin{align}\label{def.I} I_\mu(z) \,=\, \big(\frac{z}{2}\big)^\mu \sum_{m=0}^{\infty} \frac{1}{m!\,\Gamma(\mu+m+1)} \big(\frac{z}{2}\big)^{2m}\,, ~~~~~~ z \in \mathbb{C}\setminus \overline{\mathbb{R}_{-}}\,, \end{align}
where $z^\mu = e^{\mu {\rm Log}\,z}$ and ${\rm Log}\,z$ denotes the principal branch of the logarithm of $z\in\mathbb{C}\setminus \overline{\mathbb{R}_{-}}$, and the function $\Gamma(z)$ in \eqref{def.I} denotes the Gamma function. Next we define the modified Bessel function of the second kind $K_\mu(z)$ of order $\mu\notin\mathbb{Z}$ in the following manner:
\begin{align}\label{def.K} K_\mu(z) \,=\, \frac{\pi}{2}\,\frac{I_{-\mu}(z) - I_\mu(z)}{\sin{\mu \pi}}\,,~~~~~~ z \in \mathbb{C}\setminus \overline{\mathbb{R}_{-}}\,. \end{align}
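Although the precise estimates of these functions needed in this paper are collected in Appendix \ref{app.est.bessel}, let us record here, for orientation, their standard behavior as $z\to 0$ in $\mathbb{C}\setminus \overline{\mathbb{R}_{-}}$ when ${\rm Re}(\mu)>0$ and $\mu\notin\mathbb{Z}$:
\begin{align*}
I_\mu(z) \,=\, \frac{1}{\Gamma(\mu+1)} \big(\frac{z}{2}\big)^{\mu} \big( 1 + O(|z|^2) \big)\,,~~~~~~ K_\mu(z) \,=\, \frac{\Gamma(\mu)}{2} \big(\frac{z}{2}\big)^{-\mu} \big( 1 + o(1) \big)\,,
\end{align*}
which follow from \eqref{def.I}, \eqref{def.K}, and the reflection formula $\Gamma(\mu)\Gamma(1-\mu)=\frac{\pi}{\sin (\mu\pi)}$.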
It is classical that $K_\mu(z)$ and $I_\mu(z)$ are linearly independent solutions to the ODE
\begin{align}\label{ode.bessel.func} -\frac{\,{\rm d}^2 \omega}{\,{\rm d} z^2} - \frac{1}{z} \frac{\,{\rm d} \omega}{\,{\rm d} z} + \big(1+\frac{\mu^2}{z^2} \big) \omega \,=\, 0\,, \end{align}
and that {\color{black} their Wronskian} is $z^{-1}$. Applying the rotation operator ${\rm rot}$ to the first equation of \eqref{nFourier.RSed.f}, we find that $\omega=({\rm rot}\,w)_n=({\rm rot}\,w_n)e^{-in\theta}$ satisfies the ODE
\begin{align}\label{ode.nfourier.vorticity} -\frac{\,{\rm d}^2 \omega}{\,{\rm d} r^2} - \frac{1}{r} \frac{\,{\rm d} \omega}{\,{\rm d} r} + \big(\lambda + \frac{n^2+in\beta}{r^2} \big) \omega \,=\, ({\rm rot}\,f)_n\,,~~~~~~ r>1\,. \end{align}
Hence, if we set
\begin{align}\label{def.mu} \mu_n \,=\, \mu_n(\beta) \,=\, (n^2+in\beta)^\frac12\,,~~~~~~ {\rm Re}(\mu_n)>0\,, \end{align}
then $K_{\mu_n}(\sqrt{\lambda} r)$ and $I_{\mu_n}(\sqrt{\lambda} r)$ give linearly independent solutions to the homogeneous equation of \eqref{ode.nfourier.vorticity} and their Wronskian is $r^{-1}$. Here and in the following we always take the square root $\sqrt{z}$ so that ${\rm Re}(\sqrt{z})>0$ for $z\in\mathbb{C}\setminus \overline{\mathbb{R}_{-}}$. Furthermore, we set
\begin{align}\label{def.F} F_n(\sqrt{\lambda};\beta)
\,=\, \int_1^\infty s^{1-|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s\,,~~~~~~~~ \lambda\in\mathbb{C}\setminus \overline{\mathbb{R}_{-}}\,, \end{align}
and denote by $\mathcal{Z}(F_n)$ the set of the zeros of $F_n(\sqrt{\lambda};\beta)$ lying in $\mathbb{C}\setminus\overline{\mathbb{R}_{-}}$;
\begin{align}\label{def.zeros.F}
\mathcal{Z}(F_n) \,=\, \{z\in\mathbb{C}\setminus\overline{\mathbb{R}_{-}}~|~ F_n( {\color{black} \sqrt{z} } ;\beta)\,=\,0 \}\,. \end{align}
Let $\lambda\in\mathbb{C}\setminus (\overline{\mathbb{R}_{-}} \cup \mathcal{Z}(F_n))$. Then, from the argument in \cite[Section 3]{Ma1}, we have the following representation formula for $w^{\rm ed}_{f,n}$ solving \eqref{nFourier.RSed.f}:
\begin{align}\label{rep.velocity.nFourier.RSed.f} & w^{\rm ed}_{f,n} \,=\, -\frac{c_{n,\lambda}[f_n]}{F_n(\sqrt{\lambda};\beta)} V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)] + V_n[\Phi_{n,\lambda}[f_n]]\,. \end{align}
Here $V_n[\,\cdot\,]$ is the Biot-Savart law in \eqref{def.V_n} and the function $\Phi_{n,\lambda}[f_n]$ is defined as
\begin{equation}\label{Phi.nFourier.RSed.f} \begin{aligned} \Phi_{n,\lambda}[f_n](r) &\,=\, -K_{\mu_n}(\sqrt{\lambda} r) \bigg( \int_{1}^{r} I_{\mu_n}(\sqrt{\lambda} s)\, \big( \mu_n f_{\theta,n}(s) + in f_{r,n}(s) \big) \,{\rm d} s \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \sqrt{\lambda} \int_{1}^{r} sI_{\mu_n+1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s \bigg) \\ & \quad + I_{\mu_n}(\sqrt{\lambda} r) \bigg( \int_{r}^{\infty} K_{\mu_n}(\sqrt{\lambda} s)\, \big( \mu_n f_{\theta,n}(s) - in f_{r,n}(s) \big) \,{\rm d} s \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \sqrt{\lambda} \int_{r}^{\infty} sK_{\mu_n-1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s \bigg) \,, \end{aligned} \end{equation}
while the constant $c_{n,\lambda}[f_n]$ is defined as
\begin{align}\label{const.nFourier.RSed.f} c_{n,\lambda}[f_n] \,=\,
\int_1^\infty s^{1-|n|} \Phi_{n,\lambda}[f_n](s) \,{\rm d} s\,. \end{align}
Moreover, the vorticity ${\rm rot}\,w^{\rm ed}_{f,n}$ is represented as
\begin{align}\label{rep.vorticity.nFourier.RSed.f} {\rm rot}\,w^{\rm ed}_{f,n} \,=\, -\frac{c_{n,\lambda}[f_n]}{F_n(\sqrt{\lambda};\beta)} K_{\mu_n}(\sqrt{\lambda} r) e^{in\theta} + \Phi_{n,\lambda}[f_n](r) e^{in\theta}\,. \end{align}
We shall estimate $w^{\rm ed}_{f,n}$ and ${\rm rot}\,w^{\rm ed}_{f,n}$, represented respectively as in \eqref{rep.velocity.nFourier.RSed.f} and \eqref{rep.vorticity.nFourier.RSed.f}, when $|n|=1$ in the following two subsections. Our main tools for the proof are the asymptotic analysis of $\mu_n=\mu_n(\beta)$ for small $\beta$ in Appendix \ref{app.est.mu}, and the detailed estimates to the modified Bessel functions in Appendix \ref{app.est.bessel}. Before going into details, let us state the estimate of $F_n(\sqrt{\lambda};\beta)$ in a region exponentially close to the origin with respect to $\beta$. We denote by $\Sigma_{\phi}$ the sector $\{z\in\mathbb{C}\setminus\{0\}~|~|{\rm arg}\,z|<\phi\}$, $\phi\in(0,\pi)$, in the complex plane $\mathbb{C}$, and by $\mathcal{B}_\rho(0)\subset\mathbb{C}$ the disk centered at the origin with radius $\rho>0$.
\begin{proposition}\label{prop.est.F}
Let $|n|=1$. Then for any $\epsilon \in (0,\frac{\pi}{2})$ there is a positive constant $\beta_0$ depending only on $\epsilon$ such that as long as $\beta\in (0,\beta_0)$ and $\lambda \in \Sigma_{\pi-\epsilon}\cap\mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)$ we have
\begin{align}\label{est1.prop.est.F}
\frac{1}{|F_n(\sqrt{\lambda};\beta)|}
\le C |\lambda|^{\frac{{\rm Re}(\mu_n)}{2}}\,, \end{align}
where the constant $C$ depends only on $\epsilon$. In particular, we have $\mathcal{Z}(F_n)\cap\mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)=\emptyset$.
\end{proposition}
\begin{proof} The assertion \eqref{est1.prop.est.F} follows from Lemma \ref{lem.est.F} in Appendix \ref{app.proof.prop.est.F}, since we have $e^{-\frac{1}{6\beta}}<\beta^4$ for all sufficiently small $\beta$ (and we may assume that $\beta_0$ is chosen accordingly). See Appendix \ref{app.proof.prop.est.F} for the proof of Lemma \ref{lem.est.F}. \end{proof}
\subsubsection{Estimates of the velocity solving \eqref{nFourier.RSed.f} with $|n|=1$} \label{subsec.RSed.f.velocity}
In this subsection we derive the estimates for the solution $w^{\rm ed}_{f,n}$ to \eqref{nFourier.RSed.f} which is now represented as \eqref{rep.velocity.nFourier.RSed.f}. The novelty of the following result is the investigation on the $\beta$-singularity appearing in each estimate. Let $\beta_0$ be the constant in Proposition \ref{prop.est.F}.
\begin{theorem}\label{thm.est.velocity.RSed.f}
Let $|n|=1$ and $1\le q<p\le\infty$ or $1<q\le p<\infty$. Fix $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(q,p,\epsilon)$ independent of $\beta$ such that the following statement holds. Let $f\in C^\infty_0(D)^2$ and $\beta\in(0,\beta_0)$. Then for $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)$ we have
\begin{align}
\|w^{\rm ed}_{f,n}\|_{L^p(D)} & \le \frac{C}{\beta^2}
|\lambda|^{-1+\frac1q-\frac1p}
\|f\|_{L^q(D)}\,, \label{est1.thm.est.velocity.RSed.f} \\
\big\|\frac{w^{\rm ed}_{f,n}}{|x|}\big\|_{L^2(D)} & \le \frac{C}{\beta} \Big(\frac{1}{\beta^2}
+ |\log {\rm Re}(\sqrt{\lambda})|^{\frac12} \Big)
|\lambda|^{-1+\frac1q}
\|f\|_{L^q(D)}\,. \label{est2.thm.est.velocity.RSed.f} \end{align}
Moreover, \eqref{est1.thm.est.velocity.RSed.f} and \eqref{est2.thm.est.velocity.RSed.f} remain valid for all $f\in L^q(D)^2$. \end{theorem}
\begin{remark}\label{rem.thm.est.velocity.RSed.f}
The logarithmic factor $|\log {\rm Re}(\sqrt{\lambda})|$ in \eqref{est2.thm.est.velocity.RSed.f} cannot be removed in our analysis. This singularity might prevent us from closing the energy computation in view of the scaling; however, we observe that it is resolved by considering the following products:
\begin{align*}
\big|\big\langle \omega^{{\rm ed}\,(1)}_{f,n}, \frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2(D)} \big|\,,~~~~
\big|\big\langle \omega^{\rm ed\,(1)}_{{\rm div}F,n}, \frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2(D)} \big|\,,~~~~
\big|\big\langle \omega^{\rm ed}_{b,n}, \frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2(D)} \big|\,. \end{align*}
Here the vorticities $\omega^{\rm ed\,(1)}_{f,n}$, $\omega^{\rm ed\,(1)}_{{\rm div}F,n}$, $\omega^{\rm ed}_{b,n}$ will be introduced respectively in Subsections \ref{subsec.RSed.f.vorticity}, \ref{subsec.RSed.divF.vorticity}, and \ref{subsec.RSed.b}. This is indeed a key observation in proving Proposition \ref{prop1.small.lambda.energy.est.resol.} in Subsection \ref{apriori2}, where the estimate for $\big\langle ({\rm rot}\,v)_{n},
\frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)}$ is established when $0<|\lambda|<e^{-\frac{1}{6\beta}}$. \end{remark}
We postpone the proof of Theorem \ref{thm.est.velocity.RSed.f} to the end of this subsection, and focus on the term $V_n[\Phi_{n,\lambda}[f_n]]$ in \eqref{rep.velocity.nFourier.RSed.f} for the time being. In order to estimate $V_n[\Phi_{n,\lambda}[f_n]]$, taking into account the definition of $V_n[\,\cdot\,]$ in \eqref{def.V_n}, we first study the following two integrals
\begin{align*}
\frac{1}{r^{|n|}} \int_1^r s^{1+|n|} \Phi_{n,\lambda}[f_n]{\color{black} (s)} \,{\rm d} s\,,~~~~~~~~
r^{|n|} \int_r^\infty s^{1-|n|} \Phi_{n,\lambda}[f_n](s) \,{\rm d} s\,. \end{align*}
Let us recall the decompositions of these integrals used in \cite{Ma1}, which are useful in the calculations. To state the result we define the functions $g^{(1)}_n(r)$ and $g^{(2)}_n(r)$ by
\begin{align*} g^{(1)}_n(r) \,=\, \mu_n f_{\theta,n}(r) + in f_{r,n}(r)\,, ~~~~~~ g^{(2)}_n(r) \,=\, \mu_n f_{\theta,n}(r) - in f_{r,n}(r)\,, \end{align*}
and fix a resolvent parameter $\lambda\in\mathbb{C}\setminus\overline{\mathbb{R}_{-}}$.
\begin{lemma}[ {\rm\cite[Lemmas 3.6 and 3.9]{Ma1}}]\label{lem.velocitydecom.RSed.f} Let $n \in \mathbb{Z} \setminus \{0\}$ and $f\in C^\infty_0(D)^2$. Then we have
\begin{align*}
\frac{1}{r^{|n|}} \int_1^r s^{1+|n|} \Phi_{n,\lambda}[f_n]{\color{black} (s)} \,{\rm d} s \,=\, \sum_{l=1}^{9} J^{(1)}_l[f_n](r)\,, \end{align*}
where
{\allowdisplaybreaks \begin{align*} J^{(1)}_1[f_n](r) &\,=\,
-\frac{1}{r^{|n|}} \int_1^r I_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(1)}_n(\tau)
\int_\tau^r s^{1+|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\
J^{(1)}_2[f_n](r) &\,=\, -\frac{\mu_n+|n|}{r^{|n|}} \int_1^r \tau I_{\mu_n+1}(\sqrt{\lambda} \tau)\,f_{\theta,n}(\tau) \int_\tau^r s^{|n|} K_{\mu_n-1}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\
J^{(1)}_3[f_n](r) &\,=\, \frac{1}{r^{|n|}} \int_1^r K_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(2)}_n(\tau) \int_1^\tau s^{1+|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\
J^{(1)}_4[f_n](r) &\,=\, \frac{\mu_n-|n|}{r^{|n|}} \int_1^r \tau K_{\mu_n-1}(\sqrt{\lambda}\tau)\,f_{\theta,n}(\tau) \int_1^\tau s^{|n|} I_{\mu_n+1}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\
J^{(1)}_5[f_n](r) &\,=\, \frac{1}{r^{|n|}} \bigg(\int_r^\infty K_{\mu_n}(\sqrt{\lambda}s)\,g^{(2)}_n(s) \,{\rm d} s \bigg)
\bigg(\int_1^r s^{1+|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg)\,, \\
J^{(1)}_6[f_n](r) &\,=\, \frac{\mu_n-|n|}{r^{|n|}} \bigg(\int_r^\infty sK_{\mu_n-1}(\sqrt{\lambda}s)\,f_{\theta,n}(s) \,{\rm d} s \bigg)
\bigg(\int_1^r s^{|n|} I_{\mu_n+1}(\sqrt{\lambda} s) \,{\rm d} s \bigg)\,, \\ J^{(1)}_7[f_n](r) &\,=\, rK_{\mu_n-1}(\sqrt{\lambda}r) \int_1^r sI_{\mu_n+1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s\,, \\ J^{(1)}_8[f_n](r) &\,=\, rI_{\mu_n+1}(\sqrt{\lambda}r) \int_r^\infty sK_{\mu_n-1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s\,, \\
J^{(1)}_9[f_n](r) &\,=\, -\frac{I_{\mu_n+1}(\sqrt{\lambda})}{r^{|n|}} \int_1^\infty sK_{\mu_n-1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s\,, \end{align*} }
and
\begin{align*}
r^{|n|} \int_r^\infty s^{1-|n|} \Phi_{n,\lambda}[f_n](s) \,{\rm d} s \,=\, \sum_{l=10}^{17} J^{(1)}_l[f_n](r)\,, \end{align*}
where
{\allowdisplaybreaks \begin{align*}
J^{(1)}_{10}[f_n](r) &\,=\, -r^{|n|} \bigg(\int_1^r I_{\mu_n}(\sqrt{\lambda} s)\,g^{(1)}_n(s) \,{\rm d} s
\bigg) \bigg(\int_r^\infty s^{1-|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s\bigg)\,, \\
J^{(1)}_{11}[f_n](r) &\,=\, -r^{|n|} \int_r^\infty I_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(1)}_n(\tau) \int_\tau^\infty s^{1-|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\
J^{(1)}_{12}[f_n](r) &\,=\, -(\mu_n-|n|)r^{|n|} \bigg(\int_1^r sI_{\mu_n+1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s \bigg) \bigg(\int_r^\infty s^{-|n|} K_{\mu_n-1}(\sqrt{\lambda} s) \,{\rm d} s\bigg)\,, \\
J^{(1)}_{13}[f_n](r) &\,=\, -(\mu_n-|n|)r^{|n|} \int_r^\infty \tau I_{\mu_n+1}(\sqrt{\lambda} \tau)\,f_{\theta,n}(\tau) \int_\tau^\infty s^{-|n|} K_{\mu_n-1}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\
J^{(1)}_{14}[f_n](r) &\,=\, r^{|n|} \int_r^\infty K_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(2)}_n(\tau) \int_r^\tau s^{1-|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\
J^{(1)}_{15}[f_n](r) &\,=\, (\mu_n+|n|) r^{|n|} \int_r^\infty \tau K_{\mu_n-1}(\sqrt{\lambda} \tau)\,f_{\theta,n}(\tau) \int_r^\tau s^{-|n|} I_{\mu_n+1}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(1)}_{16}[f_n](r) &\,=\, -r K_{\mu_n-1}(\sqrt{\lambda} r) \int_1^r s I_{\mu_n+1}(\sqrt{\lambda} s)\,f_{\theta,n}(s)\,{\rm d} s\,, \\ J^{(1)}_{17}[f_n](r) &\,=\, -r I_{\mu_n+1}(\sqrt{\lambda} r) \int_r^\infty s K_{\mu_n-1}(\sqrt{\lambda} s)\,f_{\theta,n}(s)\,{\rm d} s\,. \end{align*} }
\end{lemma}
\begin{remark}\label{rem.lem.velocitydecom.RSed.f}
(1) The estimate to the term $J^{(1)}_9[f_n]$ is not needed in the following analysis thanks to the cancellation $J^{(1)}_9[f_n](r)-r^{-|n|} J^{(1)}_{17}[f_n](1)=0$ in the Biot-Savart law $V_n[\Phi_{n,\lambda}[f_n]]$. This fact will be used in the proof of Proposition \ref{prop2.est.velocity.RSed.f}. \\ \noindent (2) Note that $J^{(1)}_{7}[f_n]=-J^{(1)}_{16}[f_n]$ and $J^{(1)}_{8}[f_n]=-J^{(1)}_{17}[f_n]$ hold. Therefore we will skip the derivation of the estimates for $J^{(1)}_{16}[f_n]$ and $J^{(1)}_{17}[f_n]$ in Lemma \ref{lem2.est.velocity.RSed.f}. \\ \noindent (3) We can express the constant $c_{n,\lambda}[f_n]$ in \eqref{const.nFourier.RSed.f} in terms of $J^{(1)}_{l}[f_n](r)$ as $c_{n,\lambda}[f_n]=\sum_{l=11,13,14,15,17} J^{(1)}_l[f_n](1)$. \end{remark}
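For later reference we note that the cancellation in Remark \ref{rem.lem.velocitydecom.RSed.f} (1) can be read off directly from the formulas above: since
\begin{align*}
J^{(1)}_{17}[f_n](1) \,=\, -I_{\mu_n+1}(\sqrt{\lambda}) \int_1^\infty sK_{\mu_n-1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s\,,
\end{align*}
we indeed have $J^{(1)}_9[f_n](r)=r^{-|n|} J^{(1)}_{17}[f_n](1)$, and hence $J^{(1)}_9[f_n](r)-r^{-|n|} J^{(1)}_{17}[f_n](1)=0$.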
The estimates of $J^{(1)}_{l}[f_n]$, $l\in\{1,\ldots,8\}$, defined in Lemma \ref{lem.velocitydecom.RSed.f} are given as follows.
\begin{lemma}\label{lem1.est.velocity.RSed.f}
Let $|n|=1$ and $q\in[1,\infty)$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(q,\epsilon)$ independent of $\beta$ such that the following statements hold. \\ {\rm (1)} Let $f\in C^\infty_0(D)^2$. Then for $l\in\{1,\ldots,8\}$ we have
\begin{align}
|J^{(1)}_{l}[f_n](r)|
\le \frac{C}{\beta} r^{3-\frac{2}{q}} \|f \|_{L^q(D)}\,,~~~~~~~~ 1\le r<{\rm Re}(\sqrt{\lambda})^{-1}\,. \label{est1.lem1.est.velocity.RSed.f} \end{align}
On the other hand, for $l\in\{1,\ldots,6\}$ we have
\begin{align}
|J^{(1)}_{l}[f_n](r)|
\le \frac{C}{\beta} |\lambda|^{-1} r^{1-\frac{2}{q}} \|f \|_{L^q(D)}\,,~~~~~~~~ r \ge {\rm Re}(\sqrt{\lambda})^{-1}\,, \label{est2.lem1.est.velocity.RSed.f} \end{align}
while for $l\in\{7,8\}$ we have
\begin{align}
|J^{(1)}_{l}[f_n](r)| \le
C |\lambda|^{-1+\frac{1}{2q}} r^{1-\frac1q} \|f \|_{L^q(D)}\,,~~~~~~~~ r \ge {\rm Re}(\sqrt{\lambda})^{-1}\,. \label{est3.lem1.est.velocity.RSed.f} \end{align}
{\rm (2)} Let $f\in C^\infty_0(D)^2$. Then for $l\in\{7,8\}$ we have
\begin{align}
\|r^{-1} J^{(1)}_l[f_n]\|_{L^\infty(D)}
&\le \frac{C}{\beta} |\lambda|^{-1} \|f \|_{L^\infty(D)} \label{est4.lem1.est.velocity.RSed.f}\,, \\
\|r^{-1} J^{(1)}_l[f_n]\|_{L^1(D)}
&\le \frac{C}{\beta} |\lambda|^{-1} \|f \|_{L^1(D)}\,. \label{est5.lem1.est.velocity.RSed.f} \end{align}
\end{lemma}
\begin{proof} (1) (i) Estimate of $J^{(1)}_1[f_n]$: For $1 \le r < {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est1.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel} in Appendix \ref{app.est.bessel}, we find
\begin{align*}
|J^{(1)}_1[f_n](r)| & \le r^{-1} \int_1^r
|I_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(1)}_n(\tau)|\,
\bigg|\int_\tau^r s^2 K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s\bigg| \,{\rm d} \tau \\ & \le C r
\int_1^r |f_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which leads to the estimate \eqref{est1.lem1.est.velocity.RSed.f}. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est2.lem.est2.bessel} and \eqref{est3.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel}, we have
\begin{align*}
&|J^{(1)}_1[f_n](r)| \le r^{-1} \bigg( \int_1^{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^r \bigg)
|I_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(1)}_n(\tau)|\,
\bigg|\int_\tau^r s^2 K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s\bigg| \,{\rm d} \tau \\ &\le
C |\lambda|^{-1} r^{-1} \int_1^{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}
|f_n(\tau)| \tau \,{\rm d} \tau
+ C\,|\lambda|^{-1} r^{-1} \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^r
|f_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which implies the estimate \eqref{est2.lem1.est.velocity.RSed.f}. \\ \noindent (ii) Estimate of $J^{(1)}_{2}[f_n]$: The proof is parallel to that for $J^{(1)}_1[f_n]$ using the results in Lemmas \ref{lem.est1.bessel} and \ref{lem.est2.bessel} for $k=1$. We omit the details here. \\ \noindent (iii) Estimate of $J^{(1)}_{3}[f_n]$: For $1\le r<{\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est1.lem.est1.bessel} and \eqref{est3.lem.est1.bessel} in Lemma \ref{lem.est1.bessel} and \eqref{est1.lem.est3.bessel} for $k=0$ in Lemma \ref{lem.est3.bessel}, we see that
\begin{align*}
|J^{(1)}_3[f_n](r)| & \le
r^{-1} \int_1^r |K_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(2)}_n(\tau)|\,
\int_1^\tau |s^2 I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\ & \le
C\,r \int_1^r |f_n(\tau)| \tau \,{\rm d} \tau \,. \end{align*}
Thus we have \eqref{est1.lem1.est.velocity.RSed.f}. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est1.lem.est1.bessel}, \eqref{est3.lem.est1.bessel}, and \eqref{est6.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est1.lem.est3.bessel} and \eqref{est2.lem.est3.bessel} for $k=0$ in Lemma \ref{lem.est3.bessel} we have
\begin{align*}
&|J^{(1)}_3[f_n](r)| \le r^{-1} \bigg( \int_1^{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}} + {\color{black} \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^r } \bigg)
|K_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(2)}_n(\tau)|\,
\int_1^\tau |s^2 I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\
& \le C\,|\lambda|^{-1} r^{-1}
\int_1^{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}} |f_n(\tau)| \tau \,{\rm d} \tau
+ C\,|\lambda|^{-1} r^{-1} \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^r
|f_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which leads to \eqref{est2.lem1.est.velocity.RSed.f}. \\ \noindent (iv) Estimate of $J^{(1)}_{4}[f_n]$: The proof is parallel to that for $J^{(1)}_3[f_n]$ using the results in Lemmas \ref{lem.est1.bessel} and \ref{lem.est3.bessel} for $k=1$, and we omit the details here. \\ \noindent (v) Estimates of $J^{(1)}_{5}[f_n]$ and $J^{(1)}_{6}[f_n]$: We give a proof only for $J^{(1)}_{5}[f_n]$ since the proof for $J^{(1)}_{6}[f_n]$ is similar. For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, {\color{black} by \eqref{est1.lem.est1.bessel}, \eqref{est3.lem.est1.bessel}, and \eqref{est6.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est1.lem.est3.bessel} for $k=0$ in Lemma \ref{lem.est3.bessel} } we observe that
\begin{align*}
& |J^{(1)}_5[f_n](r)|
\le r^{-1} \int_1^r |s^2 I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \bigg( \int_r^{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^\infty \bigg)
|K_{\mu_n}(\sqrt{\lambda}s)\,g^{(2)}_n(s)| \,{\rm d} s \\ & \le C\,r^{{\rm Re}(\mu_n)+2} \int_r^{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}
s^{-({\rm Re}(\mu_n)+1)} |f_n(s)| s \,{\rm d} s \\ & \quad
+ C\,|\lambda|^{\frac{{\rm Re}(\mu_n)}{2}-\frac14} r^{{\rm Re}(\mu_n)+2}
\int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^\infty s^{-\frac32} e^{-{\rm Re}(\sqrt{\lambda}) s} |f_n(s)| s \,{\rm d} s \,. \end{align*}
Then a direct calculation shows \eqref{est1.lem1.est.velocity.RSed.f}. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, {\color{black} by \eqref{est6.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est2.lem.est3.bessel} for $k=0$ in Lemma \ref{lem.est3.bessel} } we have
\begin{align*}
|J_5^{(1)}[f_n](r)|
& \le r^{-1} \int_1^r |s^2 I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s
\int_r^\infty |K_{\mu_n}(\sqrt{\lambda}s)\,g^{(2)}_n(s)| \,{\rm d} s \\
& \le C\,|\lambda|^{-1} r^{-\frac12} e^{{\rm Re}\,(\sqrt{\lambda}) r}
\int_r^\infty s^{-\frac12} e^{-{\rm Re}(\sqrt{\lambda}) s} |f_n(s)|s \,{\rm d} s\,,
\end{align*}
which implies \eqref{est2.lem1.est.velocity.RSed.f}. \\ \noindent (vi) Estimate of $J^{(1)}_{7}[f_n]$: For $1\le r<{\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est2.lem.est1.bessel}, \eqref{est4.lem.est1.bessel}, and \eqref{est5.lem.est1.bessel} for $k=1$ in Lemma \ref{lem.est1.bessel} we find
\begin{align}
|J^{(1)}_7[f_n](r)| & \le
|rK_{\mu_n-1}(\sqrt{\lambda}r)|\,
\int_1^r |I_{\mu_n+1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) s| \,{\rm d} s \nonumber \\ & \le C\beta^{-1} r
\int_1^r |f_{n}(s)| s \,{\rm d} s\,. \label{est1.proof.lem1.est.velocity.RSed.f} \end{align}
Thus we have \eqref{est1.lem1.est.velocity.RSed.f}. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel}--\eqref{est7.lem.est1.bessel} for $k=1$ in Lemma \ref{lem.est1.bessel} we have
\begin{align}
&|J^{(1)}_7[f_n](r)| \le
|rK_{\mu_n-1}(\sqrt{\lambda}r)| \bigg( \int_1^{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^r \bigg)
|I_{\mu_n+1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) s| \,{\rm d} s \nonumber \\
& \le C\,|\lambda|^{-\frac14} r^{\frac12} e^{-{\rm Re}\,(\sqrt{\lambda}) r} \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
|f_n(s)|s \,{\rm d} s \nonumber \\ & \quad
+ C\,|\lambda|^{-\frac12} r^{\frac12} e^{-{\rm Re}\,(\sqrt{\lambda}) r} \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^r s^{-\frac12} e^{{\rm Re}\,(\sqrt{\lambda}) s}
|f_n(s)|s \,{\rm d} s\,, \label{est2.proof.lem1.est.velocity.RSed.f} \end{align}
which leads to \eqref{est3.lem1.est.velocity.RSed.f}. \\ \noindent (vii) Estimate of $J^{(1)}_{8}[f_n]$: For $1\le r<{\rm Re}(\sqrt{\lambda})^{-1}$, {\color{black} by the results in Lemma \ref{lem.est1.bessel} for $k=1$ } we find
\begin{align}
& |J^{(1)}_8[f_n](r)| \le
|rI_{\mu_n+1}(\sqrt{\lambda}r)|\, \bigg( \int_r^{{\frac{1}{{\rm Re}(\sqrt{\lambda})}}} + \int_{{\frac{1}{{\rm Re}(\sqrt{\lambda})}}}^\infty \bigg)
|K_{\mu_n-1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) s| \,{\rm d} s \nonumber \\
& \le C\beta^{-1} |\lambda| r^3 \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
|f_n(s)| s \,{\rm d} s
+ C |\lambda|^{\frac34} r^{3} \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^\infty s^{-\frac12} e^{-{\rm Re}(\sqrt{\lambda}) s}
|f_n(s)| s \,{\rm d} s\,, \label{est3.proof.lem1.est.velocity.RSed.f} \end{align}
which implies \eqref{est1.lem1.est.velocity.RSed.f}. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, {\color{black} by Lemma \ref{lem.est1.bessel} for $k=1$ again } we have
\begin{align}
&|J^{(1)}_8[f_n](r)| \le
|rI_{\mu_n+1}(\sqrt{\lambda}r)|\, \int_r^\infty
|K_{\mu_n-1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) s| \,{\rm d} s \nonumber \\
&\le C |\lambda|^{-\frac12} r^{\frac12} e^{{\rm Re}\,(\sqrt{\lambda}) r} \int_r^\infty s^{-\frac{1}{2}} e^{-{\rm Re}\,(\sqrt{\lambda}) s}
|f_n(s)| s \,{\rm d} s\,, \label{est4.proof.lem1.est.velocity.RSed.f} \end{align}
which leads to \eqref{est3.lem1.est.velocity.RSed.f}. Hence we obtain the assertion (1) of Lemma \ref{lem1.est.velocity.RSed.f}. \\ \noindent (2) The estimate \eqref{est4.lem1.est.velocity.RSed.f} follows from \eqref{est1.proof.lem1.est.velocity.RSed.f}--\eqref{est4.proof.lem1.est.velocity.RSed.f} in the above. For the proof of \eqref{est5.lem1.est.velocity.RSed.f}, one can reproduce the calculation performed in \cite[Lemma 3.7]{Ma1} using the results in Lemma \ref{lem.est1.bessel}, and hence we omit the details here. This completes the proof of Lemma \ref{lem1.est.velocity.RSed.f}. \end{proof}
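In the proof above, integrals of the form $\int_1^r |f_n(\tau)|\,\tau\,{\rm d}\tau$ are converted into $\|f\|_{L^q(D)}$-norms by H\"older's inequality on $(1,r)$ with respect to the measure $\tau\,{\rm d}\tau$, combined with the elementary bound $\big(\int_1^\infty |f_n(\tau)|^q\,\tau\,{\rm d}\tau\big)^{\frac1q}\le C\|f\|_{L^q(D)}$ for the Fourier coefficients; for instance,
\begin{align*}
\int_1^r |f_n(\tau)|\,\tau\,{\rm d}\tau
\,\le\, \bigg(\int_1^r |f_n(\tau)|^q\,\tau\,{\rm d}\tau\bigg)^{\frac1q}
\bigg(\int_1^r \tau\,{\rm d}\tau\bigg)^{1-\frac1q}
\,\le\, C\,r^{2(1-\frac1q)}\,\|f\|_{L^q(D)}\,,
\end{align*}
which, multiplied by the prefactor $Cr$ appearing in the estimates of $J^{(1)}_1[f_n]$ and $J^{(1)}_3[f_n]$, produces the power $r^{3-\frac2q}$ in \eqref{est1.lem1.est.velocity.RSed.f}.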
The next lemma summarizes the estimates of $J^{(1)}_{l}[f_n](r)$, $l\in\{10,\ldots,17\}$, defined in Lemma \ref{lem.velocitydecom.RSed.f}. We skip the proofs for $J^{(1)}_{16}[f_n]$ and $J^{(1)}_{17}[f_n]$, as already mentioned in Remark \ref{rem.lem.velocitydecom.RSed.f} (2).
\begin{lemma}\label{lem2.est.velocity.RSed.f}
Let $|n|=1$ and $q\in[1,\infty)$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(q,\epsilon)$ independent of $\beta$ such that the following statements hold. \\ \noindent {\rm (1)} Let $f\in C^\infty_0(D)^2$. Then for $l\in\{10,\ldots,17\}$ we have
\begin{align}
|J^{(1)}_{l}[f_n](r)|
\le \frac{C}{\beta} |\lambda|^{-1+\frac1q} r \|f \|_{L^q(D)}\,,~~~~~~ 1\le r<{\rm Re}(\sqrt{\lambda})^{-1}\,. \label{est1.lem2.est.velocity.RSed.f} \end{align}
On the other hand, for $l\in\{10,\ldots,15\}$ we have
\begin{align}
|J^{(1)}_{l}[f_n](r)|
\le {\color{black} C} |\lambda|^{-1} r^{1-\frac{2}{q}} \|f \|_{L^q(D)}\,,~~~~~~ r \ge {\rm Re}(\sqrt{\lambda})^{-1}\,, \label{est2.lem2.est.velocity.RSed.f} \end{align}
while for $l\in\{16,17\}$ we have
\begin{align}
|J^{(1)}_{l}[f_n](r)| \le
C |\lambda|^{-1+\frac{1}{2q}} r^{1-\frac1q} \|f \|_{L^q(D)}\,,~~~~~~ r \ge {\rm Re}(\sqrt{\lambda})^{-1}\,. \label{est3.lem2.est.velocity.RSed.f} \end{align}
{\rm (2)} Let $f\in C^\infty_0(D)^2$. Then for $l\in\{16,17\}$ we have
\begin{align}
\|r^{-1} J^{(1)}_l[f_n]\|_{L^\infty(D)}
&\le \frac{C}{\beta} |\lambda|^{-1} \|f \|_{L^\infty(D)} \label{est4.lem2.est.velocity.RSed.f}\,, \\
\|r^{-1} J^{(1)}_l[f_n]\|_{L^1(D)}
&\le \frac{C}{\beta} |\lambda|^{-1} \|f \|_{L^1(D)}\,. \label{est5.lem2.est.velocity.RSed.f} \end{align}
\end{lemma}
\begin{proof} \noindent (1) (i) Estimate of $J^{(1)}_{10}[f_n]$: For $1\le r< {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est4.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel} in Appendix \ref{app.est.bessel}, we find
\begin{align*}
|J^{(1)}_{10}[f_n](r)| &\le
r \bigg|\int_r^\infty K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s\bigg|\,
\int_1^r |I_{\mu_n}(\sqrt{\lambda} s)\,g^{(1)}_n(s)| \,{\rm d} s \\ &\le C \beta^{-1} r
\int_1^r |f_n(s)| s \,{\rm d} s\,, \end{align*}
which implies \eqref{est1.lem2.est.velocity.RSed.f}. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est5.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel}, we have
{\allowdisplaybreaks \begin{align*}
& |J^{(1)}_{10}[f_n](r)| \le
r \int_r^\infty |K_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \bigg( \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r \bigg)
|I_{\mu_n}(\sqrt{\lambda} s)\,g^{(1)}_n(s)| \,{\rm d} s \\ &\le
C\,|\lambda|^{-\frac14} r^\frac12 e^{-{\rm Re}(\sqrt{\lambda}) r}
\int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} |f_n(s)| s \,{\rm d} s \\ & \quad
+ C\,|\lambda|^{-1} r^\frac12 e^{-{\rm Re}(\sqrt{\lambda}) r} \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^r s^{-\frac32} e^{{\rm Re}(\sqrt{\lambda}) s}
|f_n(s)| s \,{\rm d} s \,, \end{align*} }
which leads to \eqref{est2.lem2.est.velocity.RSed.f}. \\ \noindent (ii) Estimate of $J^{(1)}_{11}[f_n]$: For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est4.lem.est2.bessel} and \eqref{est5.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel}, we see that
\begin{align*}
&|J^{(1)}_{11}[f_n](r)| \le r\,\bigg( \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty \bigg)
|I_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(1)}_n(\tau)|\,
\bigg| \int_\tau^\infty K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg| \,{\rm d} \tau \\ & \le C \beta^{-1} r \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
|f_n(\tau)| \tau \,{\rm d} \tau
+ C\,|\lambda|^{-1} r \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty
\tau^{-2} |f_n(\tau)| \tau \,{\rm d} \tau \,, \end{align*}
which implies \eqref{est1.lem2.est.velocity.RSed.f}. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est7.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est5.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel}, we have
\begin{align*}
|J^{(1)}_{11}[f_n](r)| &\le r\,\int_r^\infty
|I_{\mu_n}(\sqrt{\lambda} \tau)\,g^{(1)}_n(\tau)|\,
\int_\tau^\infty |K_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\ & \le
C\,|\lambda|^{-1} r \int_r^\infty
\tau^{-2} |f_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which leads to \eqref{est2.lem2.est.velocity.RSed.f}. \\
\noindent (iii) Estimates of $J^{(1)}_{12}[f_n]$ and $J^{(1)}_{13}[f_n]$: The proof for $J^{(1)}_{12}[f_n]$ is parallel to that for $J^{(1)}_{10}[f_n]$ using the bound $|\mu_n-1| \le C\beta$ and the results in Lemmas \ref{lem.est1.bessel} and \ref{lem.est2.bessel} for $k=1$. The proof for $J^{(1)}_{13}[f_n]$ is similar to that for $J^{(1)}_{11}[f_n]$. Thus we omit the details here. \\ \noindent (iv) Estimate of $J^{(1)}_{14}[f_n]$: For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est1.lem.est1.bessel}, \eqref{est3.lem.est1.bessel}, and \eqref{est6.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est3.lem.est3.bessel} and \eqref{est4.lem.est3.bessel} for $k=0$ in Lemma \ref{lem.est3.bessel}, we observe that
\begin{align*}
& |J^{(1)}_{14}[f_n](r)| \le r \bigg( \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^\infty \bigg)
|K_{\mu_n}(\sqrt{\lambda} \tau)\,\,g^{(2)}_n(\tau)|
\int_r^\tau |I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\ & \le C r \int_r^{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}
|f_n(\tau)| \tau \,{\rm d} \tau
+ C |\lambda|^{-1} r \int_{\frac{1}{{\rm Re}\,(\sqrt{\lambda})}}^\infty
\tau^{-2} |f_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which implies \eqref{est1.lem2.est.velocity.RSed.f}. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est6.lem.est1.bessel} in Lemma \ref{lem.est1.bessel} and \eqref{est5.lem.est3.bessel} in Lemma \ref{lem.est3.bessel} for $k=0$ we have
\begin{align*}
|J^{(1)}_{14}[f_n](r)| & \le
r \int_r^\infty |K_{\mu_n}(\sqrt{\lambda} \tau)\,\,g^{(2)}_n(\tau)|\,
\int_r^\tau |I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\ & \le
C |\lambda|^{-1} r\,
\int_r^\infty \tau^{-2} |f_{n}(\tau)| \tau \,{\rm d} \tau\,,
\end{align*}
which leads to \eqref{est2.lem2.est.velocity.RSed.f}. \\ \noindent (v) Estimate of $J^{(1)}_{15}[f_n]$: The proof is parallel to that for $J^{(1)}_{14}[f_n]$ using Lemmas \ref{lem.est1.bessel} and \ref{lem.est3.bessel} for $k=1$, and thus we omit the details here. This completes the proof of Lemma \ref{lem2.est.velocity.RSed.f}. \end{proof}
Lemmas \ref{lem1.est.velocity.RSed.f} and \ref{lem2.est.velocity.RSed.f} lead to the next important estimates that we shall need in the proof of Proposition \ref{prop2.est.velocity.RSed.f} below. Let $c_{n,\lambda}[f_n]$ be the constant in \eqref{const.nFourier.RSed.f}.
\begin{corollary}\label{cor1.est.velocity.RSed.f}
Let $|n|=1$ and $1\le q<p\le\infty$ or $1<q\le p<\infty$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(q,p,\epsilon)$ independent of $\beta$ such that the following statement holds. Let $f\in C^\infty_0(D)^2$. Then for $l\in\{1,\ldots,17\}\setminus\{9\}$ we have
\begin{align}
|c_{n,\lambda}[f_n]| &\le
\frac{C}{\beta} |\lambda|^{-1+\frac1q} \|f \|_{L^q(D)}\,, \label{est1.cor1.est.velocity.RSed.f} \\
\|r^{-1} J^{(1)}_l[f_n]\|_{L^p(D)} &\le
\frac{C}{\beta} |\lambda|^{-1+\frac1q-\frac1p} \|f \|_{L^q(D)}\,, \label{est2.cor1.est.velocity.RSed.f} \\
\|r^{-2} J^{(1)}_l[f_n]\|_{L^2(D)} & \le \frac{C}{\beta}
|\lambda|^{-1+\frac1q}
|\log {\rm Re}(\sqrt{\lambda})|^{\frac12}
\|f\|_{L^q(D)}\,. \label{est3.cor1.est.velocity.RSed.f} \end{align}
\end{corollary}
\begin{proof} (i) Estimate of $c_{n,\lambda}[f_n]$: Remark \ref{rem.lem.velocitydecom.RSed.f} (3) ensures that
\begin{align*}
|c_{n,\lambda}[f_n]| \le \sum_{l=11,13,14,15,17} |J^{(1)}_l[f_n](1)|\,. \end{align*}
Then the estimate \eqref{est1.cor1.est.velocity.RSed.f} follows by setting $r=1$ in \eqref{est1.lem2.est.velocity.RSed.f} of Lemma \ref{lem2.est.velocity.RSed.f}. \\ \noindent (ii) Estimate of $r^{-1} J^{(1)}_l[f_n]$: If $l\in\{1,\ldots,17\}\setminus\{7,8,9,16,17\}$, then it is easy to see from the pointwise estimates in Lemmas \ref{lem1.est.velocity.RSed.f} and \ref{lem2.est.velocity.RSed.f} that
\begin{align*}
\sup_{r\ge1} r^{\frac2q} |r^{-1} J^{(1)}_l[f_n](r)|
\le C \beta^{-1} |\lambda|^{-1} \|f \|_{L^q(D)}\,,~~~~~~ 1\le q<\infty\,. \end{align*}
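Let us recall how such a pointwise bound is used: if $\sup_{r\ge1} r^{\frac2q}|g(r)|\le M$, then for every $t>0$ we have $\{x\in D~|~|g(|x|)|>t\}\subset\{x\in D~|~|x|<(M/t)^{\frac{q}{2}}\}$, so that
\begin{align*}
t\,\big|\{x\in D~|~|g(|x|)|>t\}\big|^{\frac1q} \,\le\, \pi^{\frac1q}\,M\,,
\end{align*}
that is, $g$ belongs to the weak space $L^{q,\infty}(D)$ with $\|g\|_{L^{q,\infty}(D)}\le \pi^{\frac1q}M$.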
Thus by the Marcinkiewicz interpolation theorem we have \eqref{est2.cor1.est.velocity.RSed.f} for the case $1<p=q<\infty$. Moreover, again from Lemmas \ref{lem1.est.velocity.RSed.f} and \ref{lem2.est.velocity.RSed.f} one can see that
\begin{align}
\sup_{r\ge1} |r^{-1} J^{(1)}_l[f_n](r)|
\le C \beta^{-1} \|f \|_{L^1(D)} \label{est1.proof.cor1.est.velocity.RSed.f}\,, \end{align}
which leads to \eqref{est2.cor1.est.velocity.RSed.f} for the case $1<p\le\infty$ and $q=1$. Hence finally we have \eqref{est2.cor1.est.velocity.RSed.f} for $1\le q<p\le\infty$ and $1<q\le p<\infty$ by the Marcinkiewicz interpolation theorem again. \\ If $l\in\{7,8,16,17\}$, from \eqref{est4.lem1.est.velocity.RSed.f}, \eqref{est5.lem1.est.velocity.RSed.f}, \eqref{est4.lem2.est.velocity.RSed.f}, and \eqref{est5.lem2.est.velocity.RSed.f} we have \eqref{est2.cor1.est.velocity.RSed.f} for the case $1\le p=q\le\infty$ by the interpolation argument. Moreover, \eqref{est1.lem1.est.velocity.RSed.f}, \eqref{est3.lem1.est.velocity.RSed.f}, \eqref{est1.lem2.est.velocity.RSed.f}, and \eqref{est3.lem2.est.velocity.RSed.f} lead to the estimate in the form \eqref{est1.proof.cor1.est.velocity.RSed.f} for $l\in\{7,8,16,17\}$. Thus we obtain \eqref{est2.cor1.est.velocity.RSed.f} for the case $1\le p\le\infty$ and $q=1$, and hence \eqref{est2.cor1.est.velocity.RSed.f} for $1\le q\le p\le \infty$ by the Marcinkiewicz interpolation theorem. \\ \noindent (iii) Estimate of $r^{-2} J^{(1)}_l[f_n]$: The assertion \eqref{est3.cor1.est.velocity.RSed.f} can be checked easily by a direct calculation using Lemmas \ref{lem1.est.velocity.RSed.f} and \ref{lem2.est.velocity.RSed.f}. We note that the logarithmic factor in \eqref{est3.cor1.est.velocity.RSed.f} is due to the estimate \eqref{est1.lem2.est.velocity.RSed.f}. The proof of Corollary \ref{cor1.est.velocity.RSed.f} is complete. \end{proof}
Now we are in a position to prove the main theorem of this subsection. Let us start with a simple proposition giving the estimate of the term $V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)]$ in \eqref{rep.velocity.nFourier.RSed.f}.
\begin{proposition}\label{prop1.est.velocity.RSed.f}
Let $|n|=1$, $p\in(1,\infty]$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(p,\epsilon)$ independent of $\beta$ such that we have
\begin{align}
\|V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)]\|_{L^p(D)} &\le
\frac{C}{\beta} |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}-\frac1p}\,, \label{est1.prop1.est.velocity.RSed.f} \\
\big\|\frac{V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)]}{|x|}\big\|_{L^2(D)} & \le
\frac{C}{\beta^2} |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}}\,. \label{est2.prop1.est.velocity.RSed.f} \end{align}
\end{proposition}
\begin{proof} It is easy to see from the definition of $V_n[\,\cdot\,]$ in \eqref{def.V_n} that
\begin{align*}
|V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)]| \le C r^{-2}
\bigg( |F_n(\sqrt{\lambda};\beta)|
+ \bigg|\int_1^r s^2 K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg| \bigg)
+ C \bigg|\int_r^\infty K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s\bigg|\,. \end{align*}
By the results in Lemma \ref{lem.est2.bessel} for $k=0$ in Appendix \ref{app.est.bessel} we have
\begin{align}
|V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)](r)| & \le
C \beta^{-1} |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}} r^{-{\rm Re}(\mu_n)+1}\,,~~~~~~ 1\le r < {\rm Re}(\sqrt{\lambda})^{-1}\,, \label{est1.proof.prop1.est.velocity.RSed.f} \\
|V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)](r)| & \le
C \beta^{-1} |\lambda|^{-\frac32} r^{-2}\,,~~~~~~ r \ge {\rm Re}(\sqrt{\lambda})^{-1}\,. \label{est2.proof.prop1.est.velocity.RSed.f} \end{align}
Then for $p\in[1,\infty]$ we find
\begin{align*}
\sup_{r\ge1} r^{\frac2p} |V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)](r)|
& \le C \beta^{-1} |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}-\frac1p}\,. \end{align*}
Hence by an interpolation argument \eqref{est1.prop1.est.velocity.RSed.f} follows. Moreover, a direct calculation combined with \eqref{est1.proof.prop1.est.velocity.RSed.f}, \eqref{est2.proof.prop1.est.velocity.RSed.f}, and $({\rm Re}(\mu_n(\beta))-1)^\frac12\approx O(\beta)$ yields \eqref{est2.prop1.est.velocity.RSed.f}. This completes the proof. \end{proof}
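For the reader's convenience we sketch the direct calculation just mentioned, under the two-sided bound ${\rm Re}(\mu_n)-1\approx\beta^2$ suggested by $({\rm Re}(\mu_n(\beta))-1)^{\frac12}\approx O(\beta)$ (see Appendix \ref{app.est.mu}). The contribution of the region $1\le r<{\rm Re}(\sqrt{\lambda})^{-1}$ to $\big\|\frac{V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)]}{|x|}\big\|_{L^2(D)}^2$ is controlled, by \eqref{est1.proof.prop1.est.velocity.RSed.f}, as
\begin{align*}
C\beta^{-2} |\lambda|^{-{\rm Re}(\mu_n)} \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} r^{1-2{\rm Re}(\mu_n)} \,{\rm d} r
\,\le\, \frac{C\beta^{-2} |\lambda|^{-{\rm Re}(\mu_n)}}{2({\rm Re}(\mu_n)-1)}
\,\le\, C\beta^{-4} |\lambda|^{-{\rm Re}(\mu_n)}\,,
\end{align*}
while the contribution of $r\ge{\rm Re}(\sqrt{\lambda})^{-1}$ is bounded, by \eqref{est2.proof.prop1.est.velocity.RSed.f}, by $C\beta^{-2}|\lambda|^{-3}\int_{{\rm Re}(\sqrt{\lambda})^{-1}}^\infty r^{-5}\,{\rm d} r\le C\beta^{-2}|\lambda|^{-1}$, which is dominated by the first contribution since $|\lambda|\le1$ and ${\rm Re}(\mu_n)\ge1$. Taking the square root gives \eqref{est2.prop1.est.velocity.RSed.f}.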
The next proposition gives the estimate for the term $V_n[\Phi_{n,\lambda}[f_n]]$ in \eqref{rep.velocity.nFourier.RSed.f}.
\begin{proposition}\label{prop2.est.velocity.RSed.f}
Let $|n|=1$ and $1\le q<p\le\infty$ or $1<q\le p<\infty$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(q,p,\epsilon)$ independent of $\beta$ such that for $f\in C^\infty_0(D)^2$ we have
\begin{align}
\|V_n[\Phi_{n,\lambda}[f_n]]\|_{L^p(D)} &\le
\frac{C}{\beta} |\lambda|^{-1+\frac1q-\frac1p} \|f \|_{L^q(D)}\,, \label{est1.prop2.est.velocity.RSed.f} \\
\big\|\frac{V_n[\Phi_{n,\lambda}[f_n]]}{|x|}\big\|_{L^2(D)} & \le \frac{C}{\beta}
|\lambda|^{-1+\frac1q}
|\log {\rm Re}(\sqrt{\lambda})|^{\frac12}
\|f\|_{L^q(D)}\,. \label{est2.prop2.est.velocity.RSed.f} \end{align}
\end{proposition}
\begin{proof} The definition of the Biot-Savart law $V_n[\,\cdot\,]$ in \eqref{def.V_n} leads to the next representations for the radial part $V_{r,n}[\Phi_{n,\lambda}[f_n]]$ and the angular part $V_{\theta,n}[\Phi_{n,\lambda}[f_n]]$ of $V_n[\Phi_{n,\lambda}[f_n]]$:
\begin{align*} V_{r,n}[\Phi_{n,\lambda}[f_n]] & \,=\, -\frac{in}{2r} \bigg( \frac{c_{n,\lambda}[f_n]}{r} - \frac{1}{r} \int_1^r s^2 \Phi_{n,\lambda}[f_n](s) \,{\rm d} s - r \int_r^\infty \Phi_{n,\lambda}[f_n](s) \,{\rm d} s \bigg)\,, \\ V_{\theta,n}[\Phi_{n,\lambda}[f_n]] & \,=\, \frac{1}{2r} \bigg( \frac{c_{n,\lambda}[f_n]}{r} - \frac{1}{r} \int_1^r s^2 \Phi_{n,\lambda}[f_n](s) \,{\rm d} s + r \int_r^\infty \Phi_{n,\lambda}[f_n](s) \,{\rm d} s \bigg)\,, \end{align*}
where $c_{n,\lambda}[f_n]$ is defined in \eqref{const.nFourier.RSed.f}. From Lemma \ref{lem.velocitydecom.RSed.f} and Remark \ref{rem.lem.velocitydecom.RSed.f} (1) and (3) we see that
\begin{align}\label{decom.proof.prop2.est.velocity.RSed.f} &\frac{c_{n,\lambda}[f_n]}{r} - \frac{1}{r} \int_1^r s^2 \Phi_{n,\lambda}[f_n](s) \,{\rm d} s \nonumber \\ &\,=\, r^{-1} \sum_{l=11,13,14,15} J^{(1)}_l[f_n](1) - \sum_{l=1}^{8} r^{-1} J^{(1)}_l[f_n](r)\,. \end{align}
Then, by \eqref{decom.proof.prop2.est.velocity.RSed.f} and the decomposition of the integral $r \int_r^\infty \Phi_{n,\lambda}[f_n](s) \,{\rm d} s$ in Lemma \ref{lem.velocitydecom.RSed.f}, we find the following pointwise estimate of $V_n[\Phi_{n,\lambda}[f_n]](r)$:
\begin{equation}\label{est1.proof.prop2.est.velocity.RSed.f} \begin{aligned}
&|V_n[\Phi_{n,\lambda}[f_n]](r)| \\
&\le C \Big( r^{-2} \sum_{l=11,13,14,15} |J^{(1)}_l[f_n](1)|
+ \sum_{l\in\{1,\ldots,17\}\setminus\{9\}} |r^{-1} J^{(1)}_l[f_n](r)| \Big)\,. \end{aligned} \end{equation}
Thus the assertions \eqref{est1.prop2.est.velocity.RSed.f} and \eqref{est2.prop2.est.velocity.RSed.f} follow from Corollary \ref{cor1.est.velocity.RSed.f}. This completes the proof. \end{proof}
Finally we give a proof of Theorem \ref{thm.est.velocity.RSed.f}, which is a direct consequence of Corollary \ref{cor1.est.velocity.RSed.f} and Propositions \ref{prop1.est.velocity.RSed.f} and \ref{prop2.est.velocity.RSed.f}.
\begin{proofx}{Theorem \ref{thm.est.velocity.RSed.f}} In view of Proposition \ref{prop2.est.velocity.RSed.f}, it suffices to show that the first term on the right-hand side of \eqref{rep.velocity.nFourier.RSed.f} satisfies the estimates \eqref{est1.thm.est.velocity.RSed.f} and \eqref{est2.thm.est.velocity.RSed.f}. By using Proposition \ref{prop.est.F} and \eqref{est1.cor1.est.velocity.RSed.f} in Corollary \ref{cor1.est.velocity.RSed.f}, one can see that \eqref{est1.thm.est.velocity.RSed.f} and \eqref{est2.thm.est.velocity.RSed.f} respectively follow from \eqref{est1.prop1.est.velocity.RSed.f} and \eqref{est2.prop1.est.velocity.RSed.f} in Proposition \ref{prop1.est.velocity.RSed.f}. This completes the proof of Theorem \ref{thm.est.velocity.RSed.f}. \end{proofx}
\subsubsection{Estimates of the vorticity for \eqref{nFourier.RSed.f} with $|n|=1$} \label{subsec.RSed.f.vorticity}
This subsection is devoted to the estimate of the vorticity $\omega^{{\rm ed}}_{f,n}(r)=({\rm rot}\,w^{\rm ed}_{f,n})e^{-in\theta}$ with $|n|=1$, where $w^{\rm ed}_{f,n}$ solves \eqref{nFourier.RSed.f} in Subsection \ref{subsec.RSed.f}. We recall that $\omega^{{\rm ed}}_{f,n}$ is represented as
\begin{align*} \omega^{{\rm ed}}_{f,n}(r) &\,=\, -\frac{c_{n,\lambda}[f_n]}{F_n(\sqrt{\lambda};\beta)} K_{\mu_n}(\sqrt{\lambda} r) + \Phi_{n,\lambda}[f_n](r) \end{align*}
by \eqref{rep.vorticity.nFourier.RSed.f}. The main result is stated as follows. Let $\beta_0$ be the constant in Proposition \ref{prop.est.F}.
\begin{theorem}\label{thm.est.vorticity.RSed.f}
Let $|n|=1$, $q\in(1,\infty)$, and $\tilde{q}\in(\max\{1,\frac{q}{2}\}, q]$. Fix $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(q,\tilde{q},\epsilon)$ independent of $\beta$ such that the following statement holds. Let $f\in C^\infty_0(D)^2$ and $\beta\in(0,\beta_0)$. Set
\begin{align} \omega^{{\rm ed}\,(1)}_{f,n}(r) \,=\, -\frac{c_{n,\lambda}[f_n]}{F_n(\sqrt{\lambda};\beta)} K_{\mu_n}(\sqrt{\lambda} r)\,, ~~~~~~~~ \omega^{{\rm ed}\,(2)}_{f,n}(r) \,=\, \Phi_{n,\lambda}[f_n](r)\,. \label{def.thm.est.vorticity.RSed.f} \end{align}
Then for $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)$ we have
\begin{align}
\|\omega^{{\rm ed}\,(1)}_{f,n} \|_{L^2(D)} &\le \frac{C}{\beta^2}
|\lambda|^{-1 + \frac1q}
\|f\|_{L^q(D)}\,, \label{est1.thm.est.vorticity.RSed.f} \\
\big\|\frac{\omega^{{\rm ed}\,(2)}_{f,n}}{|x|} \big\|_{L^{\tilde{q}}(D)} &\le
\frac{C}{\beta} |\lambda|^{-\frac{1}{\tilde{q}}+\frac1q}
\|f\|_{L^q(D)}\,, \label{est2.thm.est.vorticity.RSed.f} \\
\big|\big\langle \omega^{{\rm ed}\,(1)}_{f,n}, \frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2(D)} \big| &\le
\frac{C}{\beta^5} |\lambda|^{-2+\frac2q} \|f \|_{L^q(D)}^2\,. \label{est3.thm.est.vorticity.RSed.f} \end{align}
Moreover, \eqref{est1.thm.est.vorticity.RSed.f}, \eqref{est2.thm.est.vorticity.RSed.f}, and \eqref{est3.thm.est.vorticity.RSed.f} remain valid for all $f\in L^q(D)^2$. \end{theorem}
\begin{proof} (i) Estimate of $\omega^{{\rm ed}\,(1)}_{f,n}$: The estimate \eqref{est1.thm.est.vorticity.RSed.f} is a direct consequence of Proposition \ref{prop.est.F}, \eqref{est1.cor1.est.velocity.RSed.f} in Corollary \ref{cor1.est.velocity.RSed.f}, and \eqref{est1.lem.est4.bessel} with $p=2$ in Lemma \ref{lem.est4.bessel} in Appendix \ref{app.est.bessel}. \\
\noindent (ii) Estimate of $|x|^{-1} \omega^{{\rm ed}\,(2)}_{f,n}$: We decompose $\omega^{{\rm ed}\,(2)}_{f,n}$ into $\omega^{{\rm ed}\,(2)}_{f,n}=\sum_{l=1}^{4} \Phi_{n,\lambda}^{(l)}[f_n]$ by setting
{\allowdisplaybreaks \begin{align*} \Phi_{n,\lambda}^{(1)}[f_n] &\,=\, -K_{\mu_n}(\sqrt{\lambda} r) \int_{1}^{r} I_{\mu_n}(\sqrt{\lambda} s)\,g^{(1)}_n(s) \,{\rm d} s\,, \\ \Phi_{n,\lambda}^{(2)}[f_n] &\,=\, -\sqrt{\lambda} K_{\mu_n}(\sqrt{\lambda} r) \int_{1}^{r} sI_{\mu_n+1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s\,, \\ \Phi_{n,\lambda}^{(3)}[f_n] &\,=\, I_{\mu_n}(\sqrt{\lambda} r) \int_{r}^{\infty} K_{\mu_n}(\sqrt{\lambda} s)\,g^{(2)}_n(s) \,{\rm d} s\,, \\ \Phi_{n,\lambda}^{(4)}[f_n] &\,=\, \sqrt{\lambda} I_{\mu_n}(\sqrt{\lambda} r) \int_{r}^{\infty} sK_{\mu_n-1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s\,. \end{align*} }
Then the assertion \eqref{est2.thm.est.vorticity.RSed.f} follows from the estimates of each term $|x|^{-1} \Phi_{n,\lambda}^{(l)}[f_n]$, $l\in\{1,2,3,4\}$. \\
\noindent (I) Estimates of $|x|^{-1} \Phi_{n,\lambda}^{(1)}[f_n]$ and $|x|^{-1} \Phi_{n,\lambda}^{(2)}[f_n]$: We give a proof only for $|x|^{-1} \Phi_{n,\lambda}^{(2)}[f_n]$ since the proof for $|x|^{-1} \Phi_{n,\lambda}^{(1)}[f_n]$ is similar. The Minkowski inequality leads to
\begin{align*}
\big\|\frac{\Phi_{n,\lambda}^{(2)}[f_n]}{|x|}\big\|_{L^{\tilde{q}}(D)} & \,=\,
|\lambda|^\frac12 \bigg(
\int_{1}^{\infty} \bigg|\int_{1}^{r} r^{-1} K_{\mu_n}(\sqrt{\lambda} r)
sI_{\mu_n+1}(\sqrt{\lambda} s)\,f_{\theta,n}(s) \,{\rm d} s\bigg|^{\tilde{q}} r \,{\rm d} r \bigg)^{\frac{1}{\tilde{q}}} \\ & \le
|\lambda|^\frac12 \int_1^\infty
|sI_{\mu_n+1}(\sqrt{\lambda} s)\,f_{\theta,n}(s)| \bigg(
\int_{s}^{\infty} |r^{-1} K_{\mu_n}(\sqrt{\lambda} r)|^{\tilde{q}} r \,{\rm d} r \bigg)^{\frac{1}{\tilde{q}}} \,{\rm d} s\,. \end{align*}
By \eqref{est5.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} for $k=1$ in Lemma \ref{lem.est1.bessel} and \eqref{est2.lem.est4.bessel} and \eqref{est3.lem.est4.bessel} in Lemma \ref{lem.est4.bessel}, we have
\begin{align*}
\big\|\frac{\Phi_{n,\lambda}^{(2)}[f_n]}{|x|}\big\|_{L^{\tilde{q}}(D)}
& \le C |\lambda| \int_{1}^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
s^{\frac{2}{\tilde{q}}} |f_n(s)| s\,{\rm d} s
+ C\,|\lambda|^{-\frac{1}{2\tilde{q}}} \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^{\infty}
s^{-2+\frac{1}{\tilde{q}}} |f_n(s)| s\,{\rm d} s\,, \end{align*}
which implies \eqref{est2.thm.est.vorticity.RSed.f} since $\frac{q}{q-1}(-2+\frac{1}{\tilde{q}})+2<0$ holds if $\tilde{q}\in(\max\{1,\frac{q}{2}\}, q]$. \\
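For the reader's convenience, we note that this exponent condition is precisely the lower bound on $\tilde{q}$:
\begin{align*}
\frac{q}{q-1}\Big(-2+\frac{1}{\tilde{q}}\Big)+2<0
~\Longleftrightarrow~
\frac{1}{\tilde{q}}<2-\frac{2(q-1)}{q}=\frac{2}{q}
~\Longleftrightarrow~
\tilde{q}>\frac{q}{2}\,.
\end{align*}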
\noindent (II) Estimates of $|x|^{-1} \Phi_{n,\lambda}^{(3)}[f_n]$ and $|x|^{-1} \Phi_{n,\lambda}^{(4)}[f_n]$: We give a proof only for $|x|^{-1} \Phi_{n,\lambda}^{(4)}[f_n]$. After using the Minkowski inequality in the same way as above, from \eqref{est2.lem.est1.bessel}, \eqref{est4.lem.est1.bessel}, and \eqref{est6.lem.est1.bessel} with $k=1$ in Lemma \ref{lem.est1.bessel} and \eqref{est4.lem.est4.bessel} and \eqref{est5.lem.est4.bessel} in Lemma \ref{lem.est4.bessel}, we have
\begin{align*}
&\big\|\frac{\Phi_{n,\lambda}^{(4)}[f_n]}{|x|}\big\|_{L^{\tilde{q}}(D)}
\le C |\lambda|^\frac12 \int_{1}^{\infty}
\big|sK_{\mu_n-1}(\sqrt{\lambda} s)\,f_{\theta,n}(s)\big| \bigg(
\int_{1}^{s} |r^{-1} I_{\mu_n}(\sqrt{\lambda} r)|^{\tilde{q}} r \,{\rm d} r \bigg)^{\frac{1}{\tilde{q}}} \,{\rm d} s \\
& \le C \beta^{-1} |\lambda| \int_{1}^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
s^{\frac{2}{\tilde{q}}}\,|f_n(s)| s \,{\rm d} s
+ C |\lambda|^{-\frac{1}{2\tilde{q}}}
\int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty s^{-2+\frac{1}{\tilde{q}}} |f_n(s)| s \,{\rm d} s\,, \end{align*}
which leads to \eqref{est2.thm.est.vorticity.RSed.f}. Hence we obtain the assertion \eqref{est2.thm.est.vorticity.RSed.f}.\\
\noindent (iii) Estimate of $\big|\big\langle \omega^{{\rm ed}\,(1)}_{f,n}, |x|^{-1}(w^{\rm ed}_{f,r})_n \big\rangle_{L^2(D)} \big|$: From \eqref{rep.velocity.nFourier.RSed.f} and \eqref{def.thm.est.vorticity.RSed.f} we see that
\begin{equation}\label{est1.proof.thm.est.vorticity.RSed.f} \begin{aligned}
\big|\big\langle \omega^{{\rm ed}\,(1)}_{f,n}, \frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2(D)} \big| &\le
\Big|\frac{c_{n,\lambda}[f_n]}{F_n(\sqrt{\lambda};\beta)}\Big|^2
\big|\big\langle K_{\mu_n}(\sqrt{\lambda}\,\cdot\,),
\frac{V_{r,n} [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)]}{|x|} \big\rangle_{L^2(D)} \big| \\ & \quad
+ \Big|\frac{c_{n,\lambda}[f_n]}{F_n(\sqrt{\lambda};\beta)}\Big|\,
\big|\big\langle K_{\mu_n}(\sqrt{\lambda}\,\cdot\,),
\frac{V_{r,n}[\Phi_{n,\lambda}[f_n]]}{|x|} \big\rangle_{L^2(D)} \big|\,. \end{aligned} \end{equation}
Then, by Proposition \ref{prop.est.F} and \eqref{est1.cor1.est.velocity.RSed.f} in Corollary \ref{cor1.est.velocity.RSed.f} combined with the results in Lemma \ref{lem.est1.bessel} for $k=0$ and \eqref{est1.proof.prop1.est.velocity.RSed.f} and \eqref{est2.proof.prop1.est.velocity.RSed.f} in the proof of Proposition \ref{prop1.est.velocity.RSed.f}, we have
\begin{align*}
&\Big|\frac{c_{n,\lambda}[f_n]}{F_n(\sqrt{\lambda};\beta)}\Big|^2
\big|\big\langle K_{\mu_n}(\sqrt{\lambda}\,\cdot\,),
\frac{V_{r,n} [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)]}{|x|} \big\rangle_{L^2(D)} \big| \nonumber \\
& \le C \beta^{-3} |\lambda|^{-2+\frac2q} \|f \|_{L^q(D)}^2 \bigg( \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} r^{-{\rm Re}(\mu_n)} \,{\rm d} r
+ |\lambda|^{{\rm Re}(\mu_n)-\frac12} \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty e^{-{\rm Re}(\sqrt{\lambda}) r} \,{\rm d} r \bigg)\,. \end{align*}
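We remark that the bracket on the right-hand side above is bounded by $C\beta^{-2}$; indeed, assuming as before that ${\rm Re}(\mu_n)-1$ is comparable to $\beta^2$, we have
\begin{align*}
\int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} r^{-{\rm Re}(\mu_n)} \,{\rm d} r \,\le\, \frac{1}{{\rm Re}(\mu_n)-1} \,\le\, C\beta^{-2}\,,
\end{align*}
while the second term in the bracket is bounded by a constant depending only on $\epsilon$, since ${\rm Re}(\sqrt{\lambda})\ge c_\epsilon|\lambda|^{\frac12}$ in $\Sigma_{\pi-\epsilon}$ and ${\rm Re}(\mu_n)\ge1$. Together with the prefactor $C\beta^{-3}$, this is the source of the power $\beta^{-5}$ in \eqref{est3.thm.est.vorticity.RSed.f}.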
By \eqref{est1.proof.prop2.est.velocity.RSed.f} in the proof of Proposition \ref{prop2.est.velocity.RSed.f} combined with Lemmas \ref{lem1.est.velocity.RSed.f} and \ref{lem2.est.velocity.RSed.f}, we have
\begin{align*}
&\Big|\frac{c_{n,\lambda}[f_n]}{F_n(\sqrt{\lambda};\beta)}\Big|\,
\big|\big\langle K_{\mu_n}(\sqrt{\lambda}\,\cdot\,),
\frac{V_{r,n}[\Phi_{n,\lambda}[f_n]]}{|x|} \big\rangle_{L^2(D)} \big| \\
&\le C \beta^{-2} |\lambda|^{-2+\frac2q} \|f \|_{L^q(D)}^2 \bigg( \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} r^{-{\rm Re}(\mu_n)} \,{\rm d} r
+ |\lambda|^{\frac{{\rm Re}(\mu_n)}{2}-\frac14} \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty r^{\frac12} e^{-{\rm Re}(\sqrt{\lambda}) r} \,{\rm d} r \bigg)\,. \end{align*}
Hence, by inserting the above two estimates into \eqref{est1.proof.thm.est.vorticity.RSed.f}, one can check that the assertion \eqref{est3.thm.est.vorticity.RSed.f} holds. This completes the proof of Theorem \ref{thm.est.vorticity.RSed.f}. \end{proof}
\subsection{Problem II: External force ${\rm div}\,F$ and Dirichlet condition} \label{subsec.RSed.divF}
In this subsection we consider the following resolvent problem for $(w,r)=(w^{\rm ed}_{{\rm div}F}, r^{\rm ed}_{{\rm div}F})$:
\begin{equation}\tag{RS$^{\rm ed}_{{\rm div}F}$}\label{RSed.divF} \left\{ \begin{aligned} \lambda w - \Delta w + \beta U^{\bot} {\rm rot}\,w + \nabla r & \,=\, {\rm div}\,F\,,~~~~x \in D\,, \\ {\rm div}\,w &\,=\, 0\,,~~~~x \in D\,, \\
w|_{\partial D} & \,=\, 0\,. \end{aligned}\right. \end{equation}
In particular, our main interest is in the estimates for the $\pm 1$-Fourier modes of $w^{\rm ed}_{{\rm div}F}$. Here $F=(F_{ij}(x))_{1\le i,j\le 2}$ is a $2\times2$ matrix. We recall that the operator ${\rm div}$ on matrices $G=(G_{ij}(x))_{1\le i,j\le 2}$ is defined as ${\rm div}\,G=(\partial_1 G_{11} + \partial_2 G_{12}, \partial_1 G_{21} + \partial_2 G_{22})^\top$. The assumption on $F$ is as follows: let us take the constant $\gamma\in(\frac12,1)$ of Assumption \ref{assumption} in the introduction. Fix $\gamma'\in(\frac12,\gamma)$. Then we assume that $F$ belongs to the function space $X_{\gamma'}(D)$ defined as
\begin{align}\label{Xgamma} X_{\gamma'}(D) \,=\,
\{F\in L^2(D)^{2\times 2}~|~|x|^{\gamma'} F\in L^2(D)^{2\times 2} \}\,. \end{align}
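Note that, since $|x|\ge1$ in $D$, every $F\in X_{\gamma'}(D)$ satisfies $\|F\|_{L^2(D)}\le \||x|^{\gamma'}F\|_{L^2(D)}$; this elementary observation reconciles the unweighted norms $\|F\|_{L^2}$ appearing in some of the intermediate bounds below with the weighted norm in the final statements.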
This definition is motivated by the properties of the matrix $R\otimes v + v\otimes R$ appearing in \eqref{RSed}, where $R$ is the function in Assumption \ref{assumption} and $v\in D(\mathbb{A}_V)$ is a solution to \eqref{RS}. In view of the regularity of $F$, we define the class of solutions to \eqref{RSed.divF} in each Fourier mode through the weak form. Let $n\in \mathbb{Z}\setminus\{0\}$ and $L^q_\sigma(D)$, $q\in(1,\infty)$, denote the $L^q$-closure of $C^\infty_{0,\sigma}(D)$, and let $p\in(\frac{2}{\gamma'},\infty)$. Then a velocity $w_n \in \mathcal{P}_n (L^{p}_{\sigma}(D) \cap W^{1,p}_0 (D)^2)$ is said to be a weak solution to \eqref{RSed.divF}, with ${\rm div}\,F$ replaced by $({\rm div}\,F)_n=\mathcal{P}_n{\rm div}\,F$, if
\begin{equation}\tag{RS$^{\rm ed}_{{\rm div}F,n}$}\label{RSed.divF.n} \begin{aligned} &\lambda \langle w_n, \varphi \rangle_{L^2(D)} + \langle \nabla w_n, \nabla\varphi \rangle_{L^2(D)} + \beta \langle U^{\bot} {\rm rot}\,w_n, \varphi \rangle_{L^2(D)} \\ & \,=\, - \langle F, \nabla\mathcal{P}_n\varphi \rangle_{L^2(D)} \end{aligned} \end{equation}
holds for all $\varphi\in C^\infty_{0,\sigma}(D)^2$. Then the pressure $r\in W^{1,p}_{{\rm loc}}(\overline{D})$ is recovered by a standard functional analytic argument; see \cite[page 73, Lemma 2.21]{So} for instance. The uniqueness of weak solutions is trivial thanks to the representation formula \eqref{rep.velocity.nFourier.RSed.divF} below. In the following we consider the solutions to \eqref{RSed.divF.n} for given $F\in X_{\gamma'}(D)$.
Let $n\in\mathbb{Z}\setminus\{0\}$. By the solution formula \eqref{rep.velocity.nFourier.RSed.f} in Subsection \ref{subsec.RSed.f}, at least when $F\in C^\infty_0(D)^{2\times2}$, we can represent the $n$-Fourier mode of the solution $w^{\rm ed}_{{\rm div}F}$ to \eqref{RSed.divF} as
\begin{align}\label{rep.velocity.nFourier.RSed.divF} & w^{\rm ed}_{{\rm div}\,F,n} \,=\, -\frac{c_{n,\lambda}[({\rm div}\,F)_n]}{F_n(\sqrt{\lambda};\beta)} V_n [K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)] + V_n[\Phi_{n,\lambda}[({\rm div}F)_n]]\,, \end{align}
if $\lambda\in\mathbb{C}\setminus\overline{\mathbb{R}_{-}}$ satisfies $F_n(\sqrt{\lambda};\beta)\neq0$. Here $c_{n,\lambda}[\,\cdot\,]$, $F_n(\sqrt{\lambda};\beta)$, $V_n[\,\cdot\,]$, and $\Phi_{n,\lambda}[\,\cdot\,]$ are respectively defined in \eqref{const.nFourier.RSed.f}, \eqref{def.F}, \eqref{def.V_n}, and \eqref{Phi.nFourier.RSed.f}. Then the vorticity of $w^{\rm ed}_{{\rm div}F,n}$ is given by
\begin{align}\label{rep.vorticity.nFourier.RSed.divF} {\rm rot}\,w^{\rm ed}_{{\rm div}F,n} &\,=\, -\frac{c_{n,\lambda}[({\rm div}\,F)_n]}{F_n(\sqrt{\lambda};\beta)} K_{\mu_n}(\sqrt{\lambda} r) e^{in\theta} + \Phi_{n,\lambda}[({\rm div}\,F)_n](r) e^{in\theta}\,. \end{align}
We prove the estimates of \eqref{rep.velocity.nFourier.RSed.divF} and \eqref{rep.vorticity.nFourier.RSed.divF} in the next two subsections. Before concluding this subsection, we prepare a useful lemma for the calculation concerning $\Phi_{n,\lambda}[({\rm div}\,F)_n]$.
\begin{lemma}\label{lem.rep.Phi.RSed.divF} Let $n \in \mathbb{Z} \setminus \{0\}$ and $F \in C^\infty_0(D)^{2\times 2}$. Then there are functions $\widetilde{F}_n^{(k)}=\widetilde{F}_n^{(k)}(r)$, $k\in\{1,\ldots,7\}$, each of which is a linear combination containing the $n$-Fourier mode of the components of $F=(F_{ij})_{1\le i,j\le 2}$, such that $\Phi_{n,\lambda}[({\rm div}\,F)_n]$ is represented as
\begin{equation}\label{eq.lem.rep.Phi.RSed.divF} \begin{aligned} &~~~\Phi_{n,\lambda}[({\rm div}\,F)_n](r) \\ &\,=\, -K_{\mu_n}(\sqrt{\lambda} r) \bigg( \int_{1}^{r} s^{-1} I_{\mu_n}(\sqrt{\lambda} s)\,\widetilde{F}_n^{(1)}(s)\,{\rm d} s \\ &~~~~~~~~~~~~~~~~ + \sqrt{\lambda} \int_{1}^{r} I_{\mu_n+1}(\sqrt{\lambda} s)\,\widetilde{F}_n^{(2)}(s) \,{\rm d} s - \lambda \int_{1}^{r} sI_{\mu_n}(\sqrt{\lambda} s)\,\widetilde{F}_n^{(3)}(s) \,{\rm d} s \bigg) \\ &\quad + I_{\mu_n}(\sqrt{\lambda} r) \bigg( \int_r^\infty s^{-1} K_{\mu_n}(\sqrt{\lambda} s)\,\widetilde{F}_n^{(4)}(s) \,{\rm d} s \\ &~~~~~~~~~~~~~~~~ + \sqrt{\lambda} \int_r^\infty K_{\mu_n-1}(\sqrt{\lambda} s)\,\widetilde{F}_n^{(5)}(s) \,{\rm d} s + \lambda \int_r^\infty sK_{\mu_n}(\sqrt{\lambda} s)\,\widetilde{F}_n^{(6)}(s) \,{\rm d} s \bigg) \\ & \quad - \sqrt{\lambda} r \big(K_{\mu_n}(\sqrt{\lambda} r) I_{\mu_n+1}(\sqrt{\lambda} r) + K_{\mu_n-1}(\sqrt{\lambda} r) I_{\mu_n}(\sqrt{\lambda} r)\big)\,\widetilde{F}_n^{(7)}(r)\,. \end{aligned} \end{equation}
\end{lemma}
\begin{proof} Let $n\in\mathbb{Z}\setminus\{0\}$. By the definition of ${\rm div}\,F$, there are functions $G_n^{(l)}\in C^\infty_0((1,\infty))$, $l\in\{1,\ldots,4\}$, such that the $n$-Fourier mode $({\rm div}\,F)_n$ has a representation
\begin{equation}\label{eq1.lem.rep.Phi.RSed.divF} \begin{aligned} &({\rm div}\,F)_n \,=\, ({\rm div}\,F)_{r,n} e^{i n \theta} {\bf e}_r + ({\rm div}\,F)_{\theta,n} e^{i n \theta} {\bf e}_\theta \\ &\,=\, \big(\partial_r G_n^{(1)}(r) + \frac{1}{r} G_n^{(2)}(r)\big) e^{i n \theta} {\bf e}_r + \big(\partial_r G_n^{(3)}(r) + \frac{1}{r} G_n^{(4)}(r)\big) e^{i n \theta} {\bf e}_\theta\,. \end{aligned} \end{equation}
Then there are functions $H_n^{(m)}\in C^\infty_0((1,\infty))$, $m\in\{1,\ldots,4\}$, each of which is a linear combination containing the $n$-mode of the components of $F=(F_{ij})_{1\le i,j\le 2}$, such that
\begin{align} \mu_n ({\rm div}\,F)_{\theta,n}(r) + in ({\rm div}\,F)_{r,n}(r) & \,=\, \partial_{r} H_n^{(1)}(r) + \frac1r H_n^{(2)}(r)\,, \label{eq2.lem.rep.Phi.RSed.divF} \\ \mu_n ({\rm div}\,F)_{\theta,n}(r) - in ({\rm div}\,F)_{r,n}(r) & \,=\, \partial_{r} H_n^{(3)}(r) + \frac1r H_n^{(4)}(r) \label{eq3.lem.rep.Phi.RSed.divF}\,. \end{align}
By inserting \eqref{eq1.lem.rep.Phi.RSed.divF}--\eqref{eq3.lem.rep.Phi.RSed.divF} into the representation of $\Phi_{n,\lambda}[f_n]$ in \eqref{Phi.nFourier.RSed.f} with $f_n$ replaced by $({\rm div}\,F)_n$, and using the following relations for the modified Bessel functions $I_\mu(z)$ and $K_\mu(z)$ (see \cite[page 376]{Abramowitz}):
\begin{align*} \frac{\,{\rm d} I_{\mu}}{\,{\rm d} z}(z) \,=\, \frac{\mu}{z} I_{\mu}(z) + I_{\mu+1}(z)\,,~~~~~~~~ \frac{\,{\rm d} K_{\mu}}{\,{\rm d} z}(z) \,=\, -\frac{\mu}{z} K_{\mu}(z) - K_{\mu-1}(z)\,, \end{align*}
we can obtain the assertion \eqref{eq.lem.rep.Phi.RSed.divF}. We omit the details since the calculations are straightforward using integration by parts. The proof is complete. \end{proof}
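For orientation, the kind of integration by parts involved can be illustrated as follows: for $H\in C^\infty_0((1,\infty))$, so that $H(1)=0$, the above recurrence relations give
\begin{align*}
\int_1^r I_{\mu_n}(\sqrt{\lambda} s)\,\partial_s H(s) \,{\rm d} s
\,=\, I_{\mu_n}(\sqrt{\lambda} r)H(r)
- \int_1^r \Big( \frac{\mu_n}{s} I_{\mu_n}(\sqrt{\lambda} s) + \sqrt{\lambda}\,I_{\mu_n+1}(\sqrt{\lambda} s) \Big) H(s) \,{\rm d} s\,,
\end{align*}
and similarly for the integrals involving $K_{\mu_n}(\sqrt{\lambda} s)$.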
\subsubsection{Estimates of the velocity solving \eqref{RSed.divF.n} with $|n|=1$} \label{subsec.RSed.divF.velocity}
The main result of this subsection provides the estimates of $w^{\rm ed}_{{\rm div}\,F,n}$ represented as in \eqref{rep.velocity.nFourier.RSed.divF}. Let us recall that $\beta_0$ is the constant in Proposition \ref{prop.est.F}.
\begin{theorem}\label{thm.est.velocity.RSed.divF}
Let $|n|=1$, $\gamma'\in(\frac12,\gamma)$, and $p\in(\frac{2}{\gamma'},\infty)$. Fix $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(\gamma',p,\epsilon)$ independent of $\beta$ such that the following statement holds. Let $F\in C^\infty_0(D)^{2\times2}$ and $\beta\in(0,\beta_0)$. Then for $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)$ we have
\begin{align}
\|w^{\rm ed}_{{\rm div}F,n}\|_{L^p(D)} & \le \frac{C}{\beta^2}
|\lambda|^{-\frac1p}
\||x|^{\gamma'}F\|_{L^2(D)}\,, \label{est1.thm.est.velocity.RSed.divF} \\
\big\| \frac{w^{\rm ed}_{{\rm div}F,n}}{|x|} \big\|_{L^2(D)} & \le \frac{C}{\beta^3}
\||x|^{\gamma'}F\|_{L^2(D)}\,. \label{est2.thm.est.velocity.RSed.divF} \end{align}
Moreover, \eqref{est1.thm.est.velocity.RSed.divF} and \eqref{est2.thm.est.velocity.RSed.divF} remain valid for all $F\in X_{\gamma'}(D)$ defined in \eqref{Xgamma}. \end{theorem}
Following a procedure similar to that in Subsection \ref{subsec.RSed.f.velocity}, we give the proof of Theorem \ref{thm.est.velocity.RSed.divF} at the end of this subsection. We first focus on the term $V_n[\Phi_{n,\lambda}[({\rm div}\,F)_n]]$ in \eqref{rep.velocity.nFourier.RSed.divF}. By using Lemma \ref{lem.rep.Phi.RSed.divF}, one can see that the next decomposition holds. Let $\widetilde{F}_n^{(k)}(r)$, $k\in\{1,\ldots,7\}$, be the functions in Lemma \ref{lem.rep.Phi.RSed.divF}.
\begin{lemma}\label{lem1.velocitydecom.RSed.divF} Let $n \in \mathbb{Z} \setminus \{0\}$ and $F \in C^\infty_0(D)^{2\times 2}$. Then we have
\begin{align}\label{eq1.lem1.velocitydecom.RSed.divF}
\frac{1}{r^{|n|}} \int_1^r s^{1+|n|} \Phi_{n,\lambda}[({\rm div}\,F)_n](s) \,{\rm d} s \,=\, \sum_{l=1}^{10} J^{(2)}_l[({\rm div}\,F)_n](r)\,, \end{align}
where
{\allowdisplaybreaks \begin{align*} J^{(2)}_{1}[({\rm div}\,F)_n](r)
&\,=\, -\frac{1}{r^{|n|}} \int_1^r \tau^{-1} I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(1)}(\tau)
\int_\tau^r s^{1+|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{2}[({\rm div}\,F)_n](r)
&\,=\, -\frac{\sqrt{\lambda}}{r^{|n|}} \int_1^r I_{\mu_n+1}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(2)}(\tau)
\int_\tau^r s^{1+|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{3}[({\rm div}\,F)_n](r)
&\,=\, -\frac{\lambda}{r^{|n|}} \int_1^r \tau I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(3)}(\tau)
\int_\tau^r s^{1+|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{4}[({\rm div}\,F)_n](r)
&\,=\, \frac{1}{r^{|n|}} \int_1^r \tau^{-1} K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(4)}(\tau)
\int_1^\tau s^{1+|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{5}[({\rm div}\,F)_n](r)
&\,=\, \frac{1}{r^{|n|}} \bigg( \int_r^\infty \tau^{-1} K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(4)}(\tau) \,{\rm d} \tau \bigg)
\bigg( \int_1^r s^{1+|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg)\,, \\ J^{(2)}_{6}[({\rm div}\,F)_n](r)
&\,=\, \frac{\sqrt{\lambda}}{r^{|n|}} \int_1^r K_{\mu_n-1}(\sqrt{\lambda} \tau)\,\widetilde{F}^{(5)}_n(\tau)
\int_1^\tau s^{1+|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{7}[({\rm div}\,F)_n](r)
&\,=\, \frac{\sqrt{\lambda}}{r^{|n|}} \bigg( \int_r^\infty K_{\mu_n-1}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(5)}(\tau) \,{\rm d} \tau \bigg)
\bigg( \int_1^r s^{1+|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg)\,, \\ J^{(2)}_{8}[({\rm div}\,F)_n](r)
&\,=\, \frac{\lambda}{r^{|n|}} \int_1^r \tau K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(6)}(\tau)
\int_1^\tau s^{1+|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{9}[({\rm div}\,F)_n](r)
&\,=\, \frac{\lambda}{r^{|n|}} \bigg( \int_r^\infty \tau K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(6)}(\tau) \,{\rm d} \tau \bigg)
\bigg( \int_1^r s^{1+|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg)\,, \\ J^{(2)}_{10}[({\rm div}\,F)_n](r)
&\,=\, -\frac{\sqrt{\lambda}}{r^{|n|}} \int_1^r s \big(K_{\mu_n}(\sqrt{\lambda} s) I_{\mu_n+1}(\sqrt{\lambda} s) + K_{\mu_n-1}(\sqrt{\lambda} s) I_{\mu_n}(\sqrt{\lambda} s)\big)\,\widetilde{F}^{(7)}_n(s) \,{\rm d} s\,. \end{align*} }
Here $\widetilde{F}_n^{(k)}(r)$, $k\in\{1,\ldots,7\}$, are the functions in Lemma \ref{lem.rep.Phi.RSed.divF}. \end{lemma}
\begin{proof} The assertion follows by inserting \eqref{eq.lem.rep.Phi.RSed.divF} in Lemma \ref{lem.rep.Phi.RSed.divF} into the left-hand side of \eqref{eq1.lem1.velocitydecom.RSed.divF}, and changing the order of integration as $\int_{1}^{r} \int_{1}^{s} \,{\rm d} \tau \,{\rm d} s= \int_{1}^{r} \int_\tau^r \,{\rm d} s \,{\rm d} \tau$ and $\int_1^r \int_s^\infty \,{\rm d} \tau \,{\rm d} s=\int_1^r \int_1^\tau \,{\rm d} s \,{\rm d} \tau + \int_r^\infty \,{\rm d} \tau \int_1^r \,{\rm d} s$. This completes the proof. \end{proof}
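The second identity corresponds to splitting the region of integration $\{(s,\tau)~|~1\le s\le r,~\tau\ge s\}$ into $\{1\le s\le\tau\le r\}$ and $\{1\le s\le r,~\tau>r\}$: for nonnegative (or absolutely integrable) $g$,
\begin{align*}
\int_1^r\!\int_s^\infty g(s,\tau) \,{\rm d}\tau \,{\rm d} s
\,=\, \int_1^r\!\int_1^\tau g(s,\tau) \,{\rm d} s \,{\rm d}\tau
+ \int_r^\infty\!\int_1^r g(s,\tau) \,{\rm d} s \,{\rm d}\tau\,.
\end{align*}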
The next lemma gives the estimates of $J^{(2)}_{l}[({\rm div}\,F)_n]$, $l\in\{1,\ldots,10\}$, defined in Lemma \ref{lem1.velocitydecom.RSed.divF}.
\begin{lemma}\label{lem1.est.velocity.RSed.divF}
Let $|n|=1$ and $\gamma' \in(\frac12,\gamma)$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(\gamma',\epsilon)$ independent of $\beta$ such that the following statement holds. Let $F \in C^\infty_0(D)^{2\times 2}$. Then for $l \in \{1,\cdots,10\}$ we have
\begin{align}
& \big|J^{(2)}_{l}[({\rm div}\,F)_n](r) \big| \le
\frac{C}{\beta} (|\lambda|^\frac12 r^2 + r^{2-{\rm Re}(\mu_n)} + r^{1-\gamma'})\,
\||x|^{\gamma'} F \|_{L^2(D)}\,,\nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1 \le r < {\rm Re}(\sqrt{\lambda})^{-1}\,, \label{est1.lem1.est.velocity.RSed.divF} \\
& \big|J^{(2)}_{l}[({\rm div}\,F)_n](r) \big| \le \frac{C}{\beta}
(|\lambda|^{-\frac12} + r^{1-\gamma'})\,
\||x|^{\gamma'} F \|_{L^2(D)}\,,~~~~~~ r \ge {\rm Re}(\sqrt{\lambda})^{-1} \label{est2.lem1.est.velocity.RSed.divF}\,. \end{align}
\end{lemma}
{\color{black} \begin{remark}\label{remark.est1.velocity.RSed.divF} The statement of Lemma \ref{lem1.est.velocity.RSed.divF} is in fact valid for $\gamma' \in(0, \gamma)$ if we choose the constant $\beta\in(0,1)$ small enough depending on $\gamma'$. However, in order to avoid technical difficulties, we make a simplification here by assuming the lower bound $\gamma' \in(\frac12,\gamma)$. We note that this assumption is essentially required later in the proof of Theorem \ref{thm.est.vorticity.RSed.divF}. \end{remark} }
\begin{proofx}{Lemma \ref{lem1.est.velocity.RSed.divF}} (i) Estimate of $J^{(2)}_{1}[({\rm div}\,F)_n]$: For $1\le r<{\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est1.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel} in Appendix \ref{app.est.bessel}, we find
\begin{align*}
|J^{(2)}_{1}[({\rm div}\,F)_n](r)| & \le Cr^{-1}
\int_1^r |\tau^{-1} I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(1)}(\tau)|\,
\bigg|\int_\tau^r s^2 K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg| \,{\rm d} \tau \\ & \le Cr^{2-{\rm Re}(\mu_n)}
\int_1^r \tau^{{\rm Re}(\mu_n)-2} |F_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which implies $|J^{(2)}_{1}[({\rm div}\,F)_n](r)| \le Cr^{2-{\rm Re}(\mu_n)} \||x|^{\gamma'} F \|_{L^2}$. {\color{black} We note that ${\rm Re}(\mu_n)-2 >-1$ holds for any $\beta\in(0,1)$ and that the condition $\gamma' \in(\frac12, \gamma)$ has been applied here.} For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est2.lem.est2.bessel} and \eqref{est3.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel}, we have
\begin{align*}
&|J^{(2)}_{1}[({\rm div}\,F)_n](r)| \\ & \le Cr^{-1} \bigg( \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r \bigg)
|\tau^{-1} I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(1)}(\tau)|\,
\bigg| \int_\tau^{r} s^2 K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg| \,{\rm d} \tau \\ & \le
C\,|\lambda|^{-\frac12} \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
\tau^{-1} |F_n(\tau)| \tau \,{\rm d} \tau
+ C\,|\lambda|^{-\frac12} \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r
\tau^{-1} |F_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which leads to $|J^{(2)}_{1}[({\rm div}\,F)_n](r)| \le C|\lambda|^{-\frac12} \||x|^{\gamma'} F \|_{L^2}$. \\
\noindent (ii) Estimate of $J^{(2)}_{2}[({\rm div}\,F)_n]$: In a similar manner as in the proof of $J^{(2)}_1[({\rm div}\,F)_n]$, for $1\le r<{\rm Re}(\sqrt{\lambda})^{-1}$ we have $|J^{(2)}_{2}[({\rm div}\,F)_n](r)|\le C|\lambda|^\frac12 r^2 \|F \|_{L^2}$, and for $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$ we have $|J^{(2)}_{2}[({\rm div}\,F)_n](r)| \le C |\lambda|^{-\frac12} \|F \|_{L^2}$. We omit the details since the proof is straightforward. \\
\noindent (iii) Estimate of $J^{(2)}_{3}[({\rm div}\,F)_n]$: For $1\le r<{\rm Re}(\sqrt{\lambda})^{-1}$, we have $|J^{(2)}_{3}[({\rm div}\,F)_n](r)|\le C|\lambda|^\frac12 r^2 \|F \|_{L^2}$ in the same way as in the proof of $J^{(2)}_1[({\rm div}\,F)_n]$. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, we observe that
\begin{align*}
&|J^{(2)}_{3}[({\rm div}\,F)_n](r)| \\ & \le
C\,|\lambda|\,r^{-1} \bigg( \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r \bigg)
|\tau I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(3)}(\tau)|\,
\bigg| \int_\tau^r s^2 K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg| \,{\rm d} \tau \\ & \le
C\,|\lambda|^{-\frac12} r^{-1} \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
|F_n(\tau)| \tau \,{\rm d} \tau + C\,r^{-1} \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r
\tau |F_n(\tau)| \tau \,{\rm d} \tau \,. \end{align*}
Thus we have $|J^{(2)}_{3}[({\rm div}\,F)_n](r)| \le C( |\lambda|^{-\frac12} \|F \|_{L^2} + r^{1-\gamma'} \||x|^{\gamma'} F \|_{L^2})$.\\ \noindent (iv) Estimate of $J^{(2)}_{4}[({\rm div}\,F)_n]$: For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est1.lem.est1.bessel} and \eqref{est3.lem.est1.bessel} in Lemma \ref{lem.est1.bessel} and \eqref{est1.lem.est3.bessel} for $k=0$ in Lemma \ref{lem.est3.bessel}, we find
\begin{align*}
|J^{(2)}_{4}[({\rm div}\,F)_n](r)| & \le r^{-1}
\int_1^r |\tau^{-1} K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}^{(4)}_n(\tau)|
\int_1^\tau |s^2 I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\ & \le {\color{black} C r^{-1}}
\int_1^r \tau |F_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which implies $|J^{(2)}_{4}[({\rm div}\,F)_n](r)| \le C r^{1-\gamma'}\,\||x|^{\gamma'} F \|_{L^2}$. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, \eqref{est1.lem.est1.bessel}, \eqref{est3.lem.est1.bessel}, and \eqref{est6.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est1.lem.est3.bessel} and \eqref{est2.lem.est3.bessel} for $k=0$ in Lemma \ref{lem.est3.bessel} yield
\begin{align*}
&~~~|J^{(2)}_{4}[({\rm div}\,F)_n](r)| \\ & \le C r^{-1} \bigg( \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r \bigg)
|\tau^{-1} K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(4)}(\tau)|\,
\int_1^\tau |s^2 I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\ & \le
C |\lambda|^{-\frac12} r^{-1} \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
|F_n(\tau)| \tau \,{\rm d} \tau
+ C\,|\lambda|^{-\frac12} r^{-1} \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r
|F_n(\tau)| \tau \,{\rm d} \tau\,,
\end{align*}
which leads to $|J^{(2)}_{4}[({\rm div}\,F)_n](r)| \le C|\lambda|^{-\frac12} \|F\|_{L^2}$. \\ \noindent (v) Estimate of $J^{(2)}_{5}[({\rm div}\,F)_n]$: For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, {\color{black} by the same estimates in Lemmas \ref{lem.est1.bessel} and \ref{lem.est3.bessel} which have been used in (iv) } we find
\begin{align*}
&~~~|J^{(2)}_{5}[({\rm div}\,F)_n](r)| \\ & \le r^{-1}
\int_1^r |s^2 I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \bigg( \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty \bigg)
|\tau^{-1} K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}^{(4)}_n(\tau)| \,{\rm d} \tau \\ & \le C\,r^{{\rm Re}(\mu_n)+2} \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
\tau^{-{\rm Re}(\mu_n)-2} |F_n(\tau)| \tau \,{\rm d} \tau \\ & \quad
+ C\,|\lambda|^{\frac{{\rm Re}(\mu_n)}{2}-\frac14} r^{{\rm Re}(\mu_n)+2} \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty
\tau^{-\frac52} e^{-{\rm Re}(\sqrt{\lambda}) \tau} |F_n(\tau)| \tau \,{\rm d} \tau \,, \end{align*}
and thus we see that $|J^{(2)}_{5}[({\rm div}\,F)_n](r)| \le C(r^{1-\gamma'} \||x|^{\gamma'}\,F \|_{L^2} + |\lambda|^{\frac12} r^{2} \| F \|_{L^2})$ holds. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, we have
\begin{align*}
|J^{(2)}_{5}[({\rm div}\,F)_n](r)| & \le r^{-1}
\int_1^r |s^2 I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s
\int_r^\infty |\tau^{-1} K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}^{(4)}_n(\tau)| \,{\rm d} \tau \\ & \le
C\,|\lambda|^{-1} r^{\frac12} e^{{\rm Re}(\sqrt{\lambda}) r} \int_r^\infty
\tau^{-\frac52} e^{-{\rm Re}(\sqrt{\lambda}) \tau} |F_n(\tau)| \tau \,{\rm d} \tau\,,
\end{align*}
which implies $|J^{(2)}_{5}[({\rm div}\,F)_n](r)| \le C |\lambda|^{-\frac12} \| F \|_{L^2}$. \\ \noindent (vi) Estimates of $J^{(2)}_l[({\rm div}\,F)_n]$, $l\in\{6,7,8,9\}$: In a similar manner as in the proofs for $J^{(2)}_4[({\rm div}\,F)_n]$ and $J^{(2)}_5[({\rm div}\,F)_n]$, we see that
\begin{align*}
|J^{(2)}_l[({\rm div}\,F)_n](r)|
& \le C\beta^{-1}|\lambda|^\frac12 r^2 \|F \|_{L^2}\,,~~~~ 1\le r < {\rm Re}(\sqrt{\lambda})^{-1}\,, \\
|J^{(2)}_l[({\rm div}\,F)_n](r)| & \le
C(\beta^{-1} |\lambda|^{-\frac12} \|F \|_{L^2}
+ r^{1-\gamma'} \||x|^{\gamma'} F\|_{L^2})\,,~~~~ r \ge {\rm Re}(\sqrt{\lambda})^{-1}\,, \end{align*}
for $l\in\{6,7,8,9\}$. We omit the details since the calculations are straightforward. \\ \noindent (vii) Estimate of $J^{(2)}_{10}[({\rm div}\,F)_n]$: For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, {\color{black} from \eqref{est1.lem.est1.bessel}--\eqref{est4.lem.est1.bessel}, and \eqref{est5.lem.est1.bessel} for $k=0,1$ in Lemma \ref{lem.est1.bessel} } we have
\begin{align*}
|J^{(2)}_{10}[({\rm div}\,F)_n](r)|
\le C \beta^{-1} |\lambda|
\int_1^r |F_n(s)| s \,{\rm d} s\,, \end{align*}
which implies $|J^{(2)}_{10}[({\rm div}\,F)_n](r)| \le C \beta^{-1} |\lambda|^\frac12 r^2 \|F \|_{L^2}$. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, {\color{black} from \eqref{est1.lem.est1.bessel}--\eqref{est4.lem.est1.bessel}, and \eqref{est5.lem.est1.bessel}--\eqref{est7.lem.est1.bessel} for $k=0,1$ in Lemma \ref{lem.est1.bessel} } we have
\begin{align*}
|J^{(2)}_{10}[({\rm div}\,F)_n](r)| & \le
C \beta^{-1} |\lambda|\,r^{-1}
\int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} s |F_n(s)| s\,{\rm d} s + C\,r^{-1}
\int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r s^{-1} |F_n(s)| s\,{\rm d} s\,,
\end{align*}
which leads to $|J^{(2)}_{10}[({\rm div}\,F)_n](r)|\le C(\beta^{-1} |\lambda|^{-\frac12} \|F \|_{L^2} + r^{1-\gamma'} \||x|^{\gamma'} F\|_{L^2})$. This completes the proof of Lemma \ref{lem1.est.velocity.RSed.divF}. \end{proofx}
We continue the analysis on $V_n[\Phi_{n,\lambda}[({\rm div}\,F)_n]]$ in \eqref{rep.velocity.nFourier.RSed.divF}. The next decomposition is also useful in the calculation, as is Lemma \ref{lem1.velocitydecom.RSed.divF}.
\begin{lemma}\label{lem2.velocitydecom.RSed.divF} Let $n \in \mathbb{Z} \setminus \{0\}$ and $F \in C^\infty_0(D)^{2\times 2}$. Then we have
\begin{align}\label{eq1.lem2.velocitydecom.RSed.divF}
r^{|n|} \int_r^\infty s^{1-|n|} \Phi_{n,\lambda}[({\rm div}\,F)_n](s) \,{\rm d} s \,=\, \sum_{l=11}^{20} J^{(2)}_l[({\rm div}\,F)_n](r)\,, \end{align}
where
{\allowdisplaybreaks \begin{align*} J^{(2)}_{11}[({\rm div}\,F)_n](r)
&\,=\, -r^{|n|} \int_1^r \tau^{-1} I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(1)}(\tau)
\int_r^\infty s^{1-|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{12}[({\rm div}\,F)_n](r)
&\,=\, -r^{|n|} \int_r^\infty \tau^{-1} I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(1)}(\tau)
\int_\tau^\infty s^{1-|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{13}[({\rm div}\,F)_n](r)
&\,=\, -\sqrt{\lambda}\,r^{|n|} \int_1^r I_{\mu_n+1}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(2)}(\tau)
\int_r^\infty s^{1-|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{14}[({\rm div}\,F)_n](r)
&\,=\, -\sqrt{\lambda}\,r^{|n|} \int_r^\infty I_{\mu_n+1}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(2)}(\tau)
\int_\tau^\infty s^{1-|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{15}[({\rm div}\,F)_n](r)
&\,=\, \lambda\,r^{|n|} \int_1^r \tau I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(3)}(\tau)
\int_r^\infty s^{1-|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{16}[({\rm div}\,F)_n](r)
&\,=\, \lambda\,r^{|n|} \int_r^\infty \tau I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(3)}(\tau)
\int_\tau^\infty s^{1-|n|} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{17}[({\rm div}\,F)_n](r)
&\,=\, r^{|n|} \int_r^\infty \tau^{-1} K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(4)}(\tau)
\int_r^\tau s^{1-|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{18}[({\rm div}\,F)_n](r)
&\,=\, \sqrt{\lambda}\,r^{|n|} \int_r^\infty K_{\mu_n-1}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(5)}(\tau)
\int_r^\tau s^{1-|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{19}[({\rm div}\,F)_n](r)
&\,=\, \lambda\,r^{|n|} \int_r^\infty \tau K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(6)}(\tau)
\int_r^\tau s^{1-|n|} I_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \,{\rm d} \tau\,, \\ J^{(2)}_{20}[({\rm div}\,F)_n](r) &\,=\,
- \sqrt{\lambda}\,r^{|n|} \int_r^\infty s \big(K_{\mu_n}(\sqrt{\lambda} s) I_{\mu_n+1}(\sqrt{\lambda} s) + K_{\mu_n-1}(\sqrt{\lambda} s) I_{\mu_n}(\sqrt{\lambda} s)\big)\,\widetilde{F}_n^{(7)}(s) \,{\rm d} s\,. \end{align*} }
Here $\widetilde{F}_n^{(k)}(r)$, $k\in\{1,\ldots,7\}$, are the functions in Lemma \ref{lem.rep.Phi.RSed.divF}. \end{lemma}
\begin{proof} The assertion is a consequence of inserting \eqref{eq.lem.rep.Phi.RSed.divF} in Lemma \ref{lem.rep.Phi.RSed.divF} into the left-hand side of \eqref{eq1.lem2.velocitydecom.RSed.divF}, and changing the order of integration as $\int_r^\infty \int_1^s \,{\rm d} \tau \,{\rm d} s= \int_1^r \,{\rm d} \tau \int_r^\infty \,{\rm d} s + \int_r^\infty \int_\tau^\infty \,{\rm d} s \,{\rm d} \tau$ and $\int_r^\infty \int_s^\infty \,{\rm d} \tau \,{\rm d} s = \int_r^\infty \int_r^\tau \,{\rm d} s \,{\rm d} \tau$. This completes the proof. \end{proof}
The next lemma summarizes the estimates for $J^{(2)}_{l}[({\rm div}\,F)_n]$, $l\in\{11,\ldots,20\}$, in Lemma \ref{lem2.velocitydecom.RSed.divF}.
\begin{lemma}\label{lem2.est.velocity.RSed.divF}
Let $|n|=1$ and $\gamma' \in(\frac12,\gamma)$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(\gamma',\epsilon)$ independent of $\beta$ such that the following statement holds. Let $F \in C^\infty_0(D)^{2\times 2}$. Then for $l \in \{11,\cdots,20\}$ we have
\begin{align}
& |J^{(2)}_{l}[({\rm div}\,F)_n](r)| \le \frac{C}{\beta}
\big( |\lambda|^\frac12 r^2 + r^{2-{\rm Re}(\mu_n)} + r^{1-\gamma'} \big)\,
\||x|^{\gamma'}F \|_{L^2(D)}\,, \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1\le r < {\rm Re}(\sqrt{\lambda})^{-1}\,, \label{est1.lem2.est.velocity.RSed.divF} \\
&|J^{(2)}_{l}[({\rm div}\,F)_n](r)|
\le C \big(|\lambda|^{-\frac12} + r^{1-\gamma'} \big)\,
\||x|^{\gamma'}F \|_{L^2(D)}\,,~~~~~~ r \ge {\rm Re}(\sqrt{\lambda})^{-1}\,. \label{est2.lem2.est.velocity.RSed.divF} \end{align}
\end{lemma}
{\color{black} \begin{remark}\label{remark.est2.velocity.RSed.divF} We make a simplification in Lemma \ref{lem2.est.velocity.RSed.divF} by assuming $\gamma' \in(\frac12,\gamma)$ as in Remark \ref{remark.est1.velocity.RSed.divF}. The statement still holds for any $\gamma' \in(0, \gamma)$ if the constant $\beta=\beta(\gamma')$ is chosen sufficiently small. On the other hand, the condition $\gamma' \in(\frac12,\gamma)$ is needed in the proof of Theorem \ref{thm.est.vorticity.RSed.divF}. \end{remark} }
\begin{proofx}{Lemma \ref{lem2.est.velocity.RSed.divF}} (i) Estimate of $J^{(2)}_{11}[({\rm div}\,F)_n]$: For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est4.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel} in Appendix \ref{app.est.bessel}, we find
\begin{align*}
|J^{(2)}_{11}[({\rm div}\,F)_n](r)| & \le
r \bigg|\int_r^\infty K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg|
\int_1^r |\tau^{-1} I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(1)}(\tau)| \,{\rm d} \tau \\ & \le C \beta^{-1} r^{2-{\rm Re}(\mu_n)}
\int_1^r \tau^{{\rm Re}(\mu_n)-2} |F_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which implies $|J^{(2)}_{11}[({\rm div}\,F)_n](r)| \le C \beta^{-1} r^{2-{\rm Re}(\mu_n)} \||x|^{\gamma'} F \|_{L^2}$. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est5.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel}, we see that
\begin{align*}
&|J^{(2)}_{11}[({\rm div}\,F)_n](r)| \le
r \int_r^\infty |K_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \bigg( \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r \bigg)
|\tau^{-1} I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(1)}(\tau)|\,{\rm d} \tau \\ & \le
C |\lambda|^{\frac{{\rm Re}(\mu_n)}{2}-\frac34} r^{\frac12} e^{-{\rm Re}(\sqrt{\lambda}) r} \int_1^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
\tau^{{\rm Re}(\mu_n)-2} |F_n(\tau)| \tau \,{\rm d} \tau \\ & \quad
+ C |\lambda|^{-1} r^{\frac12} e^{-{\rm Re}(\sqrt{\lambda}) r} \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^r \tau^{-\frac52} e^{{\rm Re}(\sqrt{\lambda}) \tau}
|F_n(\tau)| \tau \,{\rm d} \tau \,. \end{align*}
Thus we have $|J^{(2)}_{11}[({\rm div}\,F)_n](r)| \le C |\lambda|^{-\frac12} \||x|^{\gamma'}F \|_{L^2}$. \\ \noindent (ii) Estimate of $J^{(2)}_{12}[({\rm div}\,F)_n]$: For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est5.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est4.lem.est2.bessel} and \eqref{est5.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel}, we observe that
\begin{align*}
& |J^{(2)}_{12}[({\rm div}\,F)_n](r)| \le \,r \bigg( \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty \bigg)
|\tau^{-1} I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(1)}(\tau)|\,
\bigg|\int_\tau^\infty K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s \bigg| \,{\rm d} \tau \\ & \le C \beta^{-1} r \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
\tau^{-1} |F_n(\tau)| \tau \,{\rm d} \tau
+ C |\lambda|^{-1} r \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty
\tau^{-3} |F_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which implies $|J^{(2)}_{12}[({\rm div}\,F)_n](r)| \le C \beta^{-1} r^{1-\gamma'} \||x|^{\gamma'}F \|_{L^2}$. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est7.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est5.lem.est2.bessel} for $k=0$ in Lemma \ref{lem.est2.bessel} we find
\begin{align*}
|J^{(2)}_{12}[({\rm div}\,F)_n](r)| & \le C r \int_r^\infty
|\tau^{-1} I_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(1)}(\tau)|\,
\int_\tau^\infty |K_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\ & \le
C |\lambda|^{-1} r \int_r^\infty
\tau^{-3} |F_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which leads to $|J^{(2)}_{12}[({\rm div}\,F)_n](r)| \le C |\lambda|^{-\frac12} \|F \|_{L^2}$. \\ \noindent (iii) Estimates of $J^{(2)}_l[({\rm div}\,F)_n]$, $l\in\{13,14,15,16\}$: In a similar manner as in the proofs of $J^{(2)}_{11}[({\rm div}\,F)_n]$ and $J^{(2)}_{12}[({\rm div}\,F)_n]$, we have
\begin{align*}
|J^{(2)}_l[({\rm div}\,F)_n](r)| & \le C \beta^{-1}
(|\lambda|^\frac12 r^2 \|F \|_{L^2}
+ r^{1-\gamma'} \||x|^{\gamma'}F \|_{L^2})\,,~~~~ 1\le r < {\rm Re}(\sqrt{\lambda})^{-1}\,, \\
|J^{(2)}_l[({\rm div}\,F)_n](r)| & \le
C(|\lambda|^{-\frac12} \|F \|_{L^2}
+ r^{1-\gamma'} \||x|^{\gamma'}F\|_{L^2})\,,~~~~ r \ge {\rm Re}(\sqrt{\lambda})^{-1}\,, \end{align*}
for $l\in\{13,14,15,16\}$. We omit the details since the calculations are straightforward. \par \noindent (iv) Estimates of $J^{(2)}_l[({\rm div}\,F)_n]$, $l\in\{17,18,19\}$: We give a proof only for $J^{(2)}_{19}[({\rm div}\,F)_n]$ since the proofs for $J^{(2)}_{17}[({\rm div}\,F)_n]$ and $J^{(2)}_{18}[({\rm div}\,F)_n]$ are similar. For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, from \eqref{est1.lem.est1.bessel}, \eqref{est3.lem.est1.bessel}, and \eqref{est6.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est3.lem.est3.bessel} and \eqref{est4.lem.est3.bessel} for $k=0$ in Lemma \ref{lem.est3.bessel}, we observe that
\begin{align*}
&|J^{(2)}_{19}[({\rm div}\,F)_n](r)| \le
|\lambda| r \bigg( \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}} + \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty \bigg)
|\tau K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(6)}(\tau)|\,
\int_r^\tau |I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\ & \le
C |\lambda| r \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
\tau |F_n(\tau)| \tau \,{\rm d} \tau + C r \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty
\tau^{-1} |F_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which implies $|J^{(2)}_{{\color{black}19}}[({\rm div}\,F)_n](r)| \le C r^{1-\gamma'} \||x|^{\gamma'}F \|_{L^2}$. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, by \eqref{est6.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est5.lem.est3.bessel} in Lemma \ref{lem.est3.bessel} for $k=0$, we have
\begin{align*}
|J^{(2)}_{19}[({\rm div}\,F)_n](r)| & \le
|\lambda|\,r \int_r^\infty
|\tau K_{\mu_n}(\sqrt{\lambda} \tau)\,\widetilde{F}_n^{(6)}(\tau)|\,
\int_r^\tau |I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \,{\rm d} \tau \\ & \le C\,r \int_r^\infty
\tau^{-1} |F_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which leads to $|J^{(2)}_{19}[({\rm div}\,F)_n](r)|\le Cr^{1-\gamma'} \||x|^{\gamma'}F \|_{L^2}$. \\ \noindent (v) Estimate of $J^{(2)}_{20}[({\rm div}\,F)_n]$: For $1\le r < {\rm Re}(\sqrt{\lambda})^{-1}$, {\color{black} from \eqref{est1.lem.est1.bessel}--\eqref{est4.lem.est1.bessel}, and \eqref{est5.lem.est1.bessel}--\eqref{est7.lem.est1.bessel} for $k=0,1$ in Lemma \ref{lem.est1.bessel}, } we have
\begin{align*}
|J^{(2)}_{20}[({\rm div}\,F)_n](r)| \le
C\beta^{-1}|\lambda|\,r \int_r^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
s |F_n(s)| s \,{\rm d} s + C\,r
\int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty s^{-1} |F_n(s)| s \,{\rm d} s\,. \end{align*}
Thus we have $|J^{(2)}_{20}[({\rm div}\,F)_n](r)| \le C\beta^{-1} r^{1-\gamma'}
\||x|^{\gamma'}F \|_{L^2}$. For $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, {\color{black} from \eqref{est6.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} for $k=0,1$ in Lemma \ref{lem.est1.bessel}, } we have
\begin{align*}
|J^{(2)}_{20}[({\rm div}\,F)_n](r)| & \le C\,r \int_r^\infty
\tau^{-1} |F_n(\tau)| \tau \,{\rm d} \tau\,, \end{align*}
which implies $|J^{(2)}_{20}[({\rm div}\,F)_n](r)|\le C r^{1-\gamma'}\||x|^{\gamma'}F \|_{L^2}$. This completes the proof of Lemma \ref{lem2.est.velocity.RSed.divF}. \end{proofx}
From Lemmas \ref{lem1.est.velocity.RSed.divF} and \ref{lem2.est.velocity.RSed.divF} we see that the following estimates hold.
\begin{corollary}\label{cor1.est.velocity.RSed.divF}
Let $|n|=1$, $\gamma' \in(\frac12,\gamma)$, and $p\in(\frac{2}{\gamma'},\infty)$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(\gamma',p,\epsilon)$ independent of $\beta$ such that the following statement holds. Let $F\in C^\infty_0(D)^{2\times2}$. Then for $l\in\{1,\ldots,20\}$ we have
\begin{align}
|c_{n,\lambda}[({\rm div}\,F)_n]| &\le \frac{C}{\beta}
\||x|^{\gamma'}F\|_{L^2(D)}\,, \label{est1.cor1.est.velocity.RSed.divF} \\
\|r^{-1} J^{(2)}_l[({\rm div}\,F)_n]\|_{L^p(D)} &\le
\frac{C}{\beta} |\lambda|^{-\frac1p}
\||x|^{\gamma'}F\|_{L^2(D)}\,, \label{est2.cor1.est.velocity.RSed.divF} \\
\|r^{-2} J^{(2)}_l[({\rm div}\,F)_n]\|_{L^2(D)} & \le \frac{C}{\beta^2}
\||x|^{\gamma'}F\|_{L^2(D)}\,. \label{est3.cor1.est.velocity.RSed.divF} \end{align}
Here $c_{n,\lambda}[({\rm div}\,F)_n]$ is the constant in \eqref{const.nFourier.RSed.f} replacing $f_n$ by $({\rm div}\,F)_n$. \end{corollary}
\begin{proof}
(i) Estimate of $c_{n,\lambda}[({\rm div}\,F)_n]$: By the definitions of $J^{(2)}_{l}[({\rm div}\,F)_n]$ for $l \in \{11,\cdots,20\}$ in Lemma \ref{lem2.velocitydecom.RSed.divF}, we see that $|c_{n,\lambda}[({\rm div}\,F)_n]| \le \sum_{l=12,14,16,17,18,19,20} |J^{(2)}_l[({\rm div}\,F)_n](1)|$. Hence we obtain the estimate \eqref{est1.cor1.est.velocity.RSed.divF} by setting $r=1$ in \eqref{est1.lem2.est.velocity.RSed.divF} in Lemma \ref{lem2.est.velocity.RSed.divF}. \\ \noindent (ii) Estimate of $r^{-1} J^{(2)}_l[({\rm div}\,F)_n]$: By Lemmas \ref{lem1.est.velocity.RSed.divF} and \ref{lem2.est.velocity.RSed.divF}, for $p\in[\frac{2}{\gamma'},\infty)$ we have
\begin{align*}
\sup_{r\ge1} r^{\frac2p} |r^{-1} J^{(2)}_l[({\rm div}\,F)_n](r)|
& \le C \beta^{-1} |\lambda|^{-\frac1p} \||x|^{\gamma'}F\|_{L^2(D)}\,. \end{align*}
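The above pointwise bound yields a weak-$L^p(D)$ estimate; as a minimal verification, write $g(r):= r^{-1} J^{(2)}_l[({\rm div}\,F)_n](r)$ and denote by $M$ the right-hand side of the bound above, where $g$ and $M$ are introduced only for this computation. Then $|g(r)|\le M r^{-\frac2p}$ for $r\ge1$, and hence
\begin{align*}
\big|\{x\in D~|~|g(|x|)|>t\}\big| \,\le\, \big|\{x\in\mathbb{R}^2~|~|x|<(M/t)^{\frac{p}{2}}\}\big| \,=\, \pi \Big(\frac{M}{t}\Big)^{p}\,,~~~~~~ t>0\,,
\end{align*}
that is, $r^{-1} J^{(2)}_l[({\rm div}\,F)_n]$ belongs to the weak space $L^{p,\infty}(D)$ with quasi-norm at most $\pi^{\frac1p} M$.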
Thus by the Marcinkiewicz interpolation theorem we obtain \eqref{est2.cor1.est.velocity.RSed.divF} for $p\in(\frac{2}{\gamma'},\infty)$. \\ \noindent (iii) Estimate of $r^{-2} J^{(2)}_l[({\rm div}\,F)_n]$: The assertion \eqref{est3.cor1.est.velocity.RSed.divF} can be checked easily by using Lemmas \ref{lem1.est.velocity.RSed.divF} and \ref{lem2.est.velocity.RSed.divF} and $({\rm Re}(\mu_n)-1)^\frac12 = O(\beta)$. This completes the proof.
\end{proof}
The next proposition gives the estimate for the term $V_n[\Phi_{n,\lambda}[({\rm div}\,F)_n]]$ in \eqref{rep.velocity.nFourier.RSed.divF}.
\begin{proposition}\label{prop2.est.velocity.RSed.divF}
Let $|n|=1$, $\gamma' \in(\frac12,\gamma)$, and $p\in(\frac{2}{\gamma'},\infty)$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(\gamma',p,\epsilon)$ independent of $\beta$ such that for $F\in C^\infty_0(D)^{2\times2}$ we have
\begin{align}
\|V_n[\Phi_{n,\lambda}[({\rm div}\,F)_n]]\|_{L^p(D)} &\le
\frac{C}{\beta} |\lambda|^{-\frac1p}
\||x|^{\gamma'}F\|_{L^2(D)}\,, \label{est1.prop2.est.velocity.RSed.divF} \\
\big\|\frac{V_n[\Phi_{n,\lambda}[({\rm div}\,F)_n]]}{|x|}\big\|_{L^2(D)} & \le \frac{C}{\beta^2}
\||x|^{\gamma'}F\|_{L^2(D)}\,. \label{est2.prop2.est.velocity.RSed.divF} \end{align}
\end{proposition}
\begin{proof} In a similar manner as in the proof of Proposition \ref{prop2.est.velocity.RSed.f} we find
\begin{align*}
&|V_n[\Phi_{n,\lambda}[({\rm div}\,F)_n]](r)| \\
&\le C \Big( r^{-2} \sum_{l=1}^{20} |J^{(2)}_l[({\rm div}\,F)_n](1)|
+ \sum_{l=1}^{20} |r^{-1} J^{(2)}_l[({\rm div}\,F)_n](r)| \Big)\,. \end{align*}
Thus the assertions \eqref{est1.prop2.est.velocity.RSed.divF} and \eqref{est2.prop2.est.velocity.RSed.divF} follow from Corollary \ref{cor1.est.velocity.RSed.divF}. The proof is complete. \end{proof}
From Corollary \ref{cor1.est.velocity.RSed.divF} and Proposition \ref{prop2.est.velocity.RSed.divF}, Theorem \ref{thm.est.velocity.RSed.divF} follows.
\begin{proofx}{Theorem \ref{thm.est.velocity.RSed.divF}} (i) Estimate for the case $F\in C^\infty_0(D)^{2\times2}$: It suffices to prove that the first term in the right-hand side of \eqref{rep.velocity.nFourier.RSed.divF} has the estimates \eqref{est1.thm.est.velocity.RSed.divF} and \eqref{est2.thm.est.velocity.RSed.divF} in view of Proposition \ref{prop2.est.velocity.RSed.divF}. By using Proposition \ref{prop.est.F} and \eqref{est1.cor1.est.velocity.RSed.divF} in Corollary \ref{cor1.est.velocity.RSed.divF}, we see that \eqref{est1.thm.est.velocity.RSed.divF} and \eqref{est2.thm.est.velocity.RSed.divF} respectively follow from \eqref{est1.prop1.est.velocity.RSed.f} and \eqref{est2.prop1.est.velocity.RSed.f} in Proposition \ref{prop1.est.velocity.RSed.f}. \\
\noindent (ii) Estimate for the case $F\in X_{\gamma'}(D)$: Let us take sequences $\{G^{(m)}\}_{m=1}^\infty\subset C^\infty_0(D)^{2\times2}$ and $\{w_n^{(m)} \}_{m=1}^\infty\subset \mathcal{P}_n(L^{p}_{\sigma}(D) \cap W^{1,p}_0(D)^2)$ such that $\displaystyle{\lim_{m\to\infty}\||x|^{\gamma'}(F-G^{(m)})\|_{L^2(D)}=0}$ and $w_n^{(m)}$ is a (unique) solution to \eqref{RSed.divF.n} replacing $F$ by $G^{(m)}$. {\color{black}
Then we see that $w_n^{(m)}$ satisfies \eqref{est1.thm.est.velocity.RSed.divF}, \eqref{est2.thm.est.velocity.RSed.divF}, and the estimates in Theorem \ref{thm.est.vorticity.RSed.divF} below replacing $F$ by $G^{(m)}$. By extending $w_n^{(m)}$ by zero to $\mathbb{R}^2$ and denoting it again by $w_n^{(m)}$, we see that $w_n^{(m)}\in W^{1,p}(\mathbb{R}^2)\cap L^p_{\sigma}(\mathbb{R}^2)$ from $w_n^{(m)}|_{\partial D}=0$ and that $-\Delta w_n^{(m)} = \nabla^{\bot} {\rm rot}\,w_n^{(m)}$, $\nabla^{\bot}=(\partial_2, -\partial_1)^\top$, in $\mathbb{R}^2$ from ${\rm div}\,w_n^{(m)}=0$. Hence, thanks to $\|\nabla \nabla^{\bot}\,(-\Delta_{\mathbb{R}^2})^{-1} h\|_{L^p(\mathbb{R}^2)} \le C \|h\|_{L^p(\mathbb{R}^2)}$, we have the inequality $\|\nabla w_n^{(m)}\|_{L^p(D)} \le C \|{\rm rot}\,w_n^{(m)}\|_{L^p(D)}$. Then we observe that the limit $\displaystyle{w_n=\lim_{m\to\infty} w_n^{(m)} \in \mathcal{P}_n(L^p_\sigma(D)\cap W^{1,p}_0(D)^2})$ exists and satisfies \eqref{est1.thm.est.velocity.RSed.divF}, \eqref{est2.thm.est.velocity.RSed.divF}, and the estimates in Theorem \ref{thm.est.vorticity.RSed.divF}. } Moreover, by taking the limit $m\to\infty$ in \eqref{RSed.divF.n} replacing $F$ by $G^{(m)}$, we see that $w_n$ gives a weak solution to \eqref{RSed.divF.n}. The proof is complete. \end{proofx}
\subsubsection{Estimates of the vorticity for \eqref{RSed.divF.n} with $|n|=1$} \label{subsec.RSed.divF.vorticity}
In this subsection we estimate the vorticity $\omega^{\rm ed}_{{\rm div}F,n}(r)=({\rm rot}\,w^{\rm ed}_{{\rm div}F,n})e^{-in\theta}$ with $|n|=1$, where ${\rm rot}\,w^{\rm ed}_{{\rm div}F,n}$ is represented as \eqref{rep.vorticity.nFourier.RSed.divF}. We recall that $\beta_0$ is the constant in Proposition \ref{prop.est.F}.
\begin{theorem}\label{thm.est.vorticity.RSed.divF}
Let $|n|=1$, $\gamma'\in(\frac12,\gamma)$, $p\in[2,\infty)$, and $q\in(1,\infty)$. Fix $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(\gamma',p,q,\epsilon)$ independent of $\beta$ such that the following statement holds. Let $F\in C^\infty_0(D)^{2\times2}$, $f\in L^q(D)^2$, and $\beta\in(0,\beta_0)$. Set
\begin{align} \omega^{\rm ed\,(1)}_{{\rm div}F,n}(r) \,=\, -\frac{c_{n,\lambda}[({\rm div}\,F)_n]}{F_n(\sqrt{\lambda};\beta)} K_{\mu_n}(\sqrt{\lambda} r)\,, ~~~~~~ \omega^{\rm ed\,(2)}_{{\rm div}F,n}(r) \,=\, \Phi_{n,\lambda}[({\rm div}\,F)_n](r)\,. \label{def.thm.est.vorticity.RSed.divF} \end{align}
Then for $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)$ we have
\begin{align}
\|\omega^{\rm ed\,(1)}_{{\rm div}F,n}\|_{L^p(D)} &\le \frac{C}{\beta (p {\rm Re}(\mu_n)-2)^\frac1p}
\| |x|^{\gamma'} F \|_{L^2(D)}\,, \label{est1.thm.est.vorticity.RSed.divF} \\
\|\omega^{\rm ed\,(2)}_{{\rm div}F,n}\|_{L^p(D)}
+ \beta \big\|\frac{\omega^{\rm ed\,(2)}_{{\rm div}F,n}}{|x|} \big\|_{L^1(D)} &\le
\frac{C}{\beta} \| |x|^{\gamma'} F \|_{L^2(D)}\,, \label{est2.thm.est.vorticity.RSed.divF} \\
\big|\big\langle \omega^{\rm ed\,(1)}_{{\rm div}F,n}, \frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2(D)} \big| &\le
\frac{C}{\beta^5} |\lambda|^{-1+\frac1q} \|f \|_{L^q(D)}
\| |x|^{\gamma'} F \|_{L^2(D)}\,. \label{est3.thm.est.vorticity.RSed.divF} \end{align}
Moreover, \eqref{est1.thm.est.vorticity.RSed.divF}, \eqref{est2.thm.est.vorticity.RSed.divF}, and \eqref{est3.thm.est.vorticity.RSed.divF} all hold for $F\in X_{\gamma'}(D)$ defined in \eqref{Xgamma} by a density argument as in the proof of Theorem \ref{thm.est.velocity.RSed.divF} above. \end{theorem}
\begin{proof} (i) Estimate of $\omega^{\rm ed\,(1)}_{{\rm div}F,n}$: The estimate \eqref{est1.thm.est.vorticity.RSed.divF} is a direct consequence of Proposition \ref{prop.est.F}, \eqref{est1.cor1.est.velocity.RSed.divF} in Corollary \ref{cor1.est.velocity.RSed.divF}, and \eqref{est1.lem.est4.bessel} in Lemma \ref{lem.est4.bessel} in Appendix \ref{app.est.bessel}. \\
\noindent (ii) Estimates of $\omega^{\rm ed\,(2)}_{{\rm div}F,n}$ and $|x|^{-1} \omega^{\rm ed\,(2)}_{{\rm div}F,n}$: Firstly we decompose $\omega^{\rm ed\,(2)}_{{\rm div}F,n}$ into $\omega^{\rm ed\,(2)}_{{\rm div}F,n} = \sum_{l=1}^{7} \Phi_{n,\lambda}^{(l)}[({\rm div}\,F)_n]$ as in Lemma \ref{lem.rep.Phi.RSed.divF}. Then the assertion \eqref{est2.thm.est.vorticity.RSed.divF} follows from the estimates of each term $\Phi_{n,\lambda}^{(l)}[({\rm div}\,F)_n]$, $l\in\{1,\ldots,7\}$. \\ \noindent (I) Estimates of $\Phi_{n,\lambda}^{(l)}[({\rm div}\,F)_n]$, $l\in\{1,2,3\}$: We give a proof only for $\Phi_{n,\lambda}^{(3)}[({\rm div}\,F)_n]$ since the proofs for $\Phi_{n,\lambda}^{(1)}[({\rm div}\,F)_n]$ and $\Phi_{n,\lambda}^{(2)}[({\rm div}\,F)_n]$ are similar. The Minkowski inequality and the Fubini theorem lead to
\begin{align*}
&\|\Phi_{n,\lambda}^{(3)}[({\rm div}\,F)_n]\|_{L^p(D)}
+ \beta \big\|\frac{\Phi_{n,\lambda}^{(3)}[({\rm div}\,F)_n]}{|x|}\big\|_{L^1(D)} \\ & \le
|\lambda| \int_{1}^{\infty}
|s I_{\mu_n}(\sqrt{\lambda} s) \widetilde{F}_n^{(3)}(s)| \bigg(
\bigg( \int_{s}^{\infty} | K_{\mu_n}(\sqrt{\lambda} r)|^p r \,{\rm d} r \bigg)^\frac1p
+ \beta \int_{s}^{\infty} |K_{\mu_n}(\sqrt{\lambda} r)| \,{\rm d} r \bigg) \,{\rm d} s \,. \end{align*}
By \eqref{est5.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est6.lem.est4.bessel} and \eqref{est7.lem.est4.bessel} in Lemma \ref{lem.est4.bessel}, we have
\begin{align*}
&\|\Phi_{n,\lambda}^{(3)}[({\rm div}\,F)_n]\|_{L^p(D)}
+ \beta \big\|\frac{\Phi_{n,\lambda}^{(3)}[({\rm div}\,F)_n]}{|x|}\big\|_{L^1(D)} \\
& \le C \beta^{-1} |\lambda|^\frac12 \int_{1}^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
|F_n(s)| s \,{\rm d} s
+ C |\lambda|^\frac14 \int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^{\infty}
s^{-\frac12-\gamma'} |s^{\gamma'} F_n(s)| s\,{\rm d} s\,, \end{align*}
which implies \eqref{est2.thm.est.vorticity.RSed.divF} since the condition $\gamma'\in(\frac12,1)$ is assumed. \\ \noindent (II) Estimates of $\Phi_{n,\lambda}^{(l)}[({\rm div}\,F)_n]$, $l\in\{4,5,6\}$: We give a proof only for $\Phi_{n,\lambda}^{(6)}[({\rm div}\,F)_n]$ since the proofs for $\Phi_{n,\lambda}^{(4)}[({\rm div}\,F)_n]$ and $\Phi_{n,\lambda}^{(5)}[({\rm div}\,F)_n]$ are similar. After using the Minkowski inequality and the Fubini theorem, by \eqref{est1.lem.est1.bessel}, \eqref{est3.lem.est1.bessel}, and \eqref{est6.lem.est1.bessel} for $k=0$ in Lemma \ref{lem.est1.bessel} and \eqref{est8.lem.est4.bessel} and \eqref{est9.lem.est4.bessel} in Lemma \ref{lem.est4.bessel}, we observe that
\begin{align*} &
\|\Phi_{n,\lambda}^{(6)}[({\rm div}\,F)_n]\|_{L^p(D)}
+ \big\|\frac{\Phi_{n,\lambda}^{(6)}[({\rm div}\,F)_n]}{|x|}\big\|_{L^1(D)} \\ &\le
|\lambda| \int_{1}^{\infty}
\big|sK_{\mu_n}(\sqrt{\lambda} s)\,\widetilde{F}_n^{(6)}(s)\big| \bigg(
\bigg( \int_1^s |I_{\mu_n}(\sqrt{\lambda} r)|^p r\,{\rm d} r \bigg)^\frac1p
+ \int_1^s |I_{\mu_n}(\sqrt{\lambda} r)| \,{\rm d} r \bigg) \,{\rm d} s \\
&\le C |\lambda|^\frac12 \int_{1}^{\frac{1}{{\rm Re}(\sqrt{\lambda})}}
|F_n(s)|s\,{\rm d} s
+ C |\lambda|^\frac14
\int_{\frac{1}{{\rm Re}(\sqrt{\lambda})}}^\infty s^{-\frac12-\gamma'} |s^{\gamma'} F_n(s)| s\,{\rm d} s\,, \end{align*}
which leads to \eqref{est2.thm.est.vorticity.RSed.divF} by the condition $\gamma'\in(\frac12,1)$. \\ (III) Estimate of $\Phi_{n,\lambda}^{(7)}[({\rm div}\,F)_n]$: The proof is straightforward using the results in Lemma \ref{lem.est1.bessel} and thus we omit the details. \\
\noindent (iii) Estimate of $\big|\big\langle \omega^{\rm ed\,(1)}_{{\rm div}F,n}, |x|^{-1}(w^{\rm ed}_{f,r})_n \big\rangle_{L^2(D)} \big|$: We omit the details since the proof is parallel to that for \eqref{est3.thm.est.vorticity.RSed.f} in Theorem \ref{thm.est.vorticity.RSed.f} using \eqref{est1.cor1.est.velocity.RSed.divF} in Corollary \ref{cor1.est.velocity.RSed.divF}. The proof is complete. \end{proof}
\subsection{Problem III: No external force and boundary data $b$} \label{subsec.RSed.b}
In this subsection we give the estimates for $(w,r)=(w^{\rm ed}_{b}, r^{\rm ed}_{b})$ solving the next problem:
\begin{equation}\tag{RS$^{\rm ed}_{b}$}\label{RSed.b} \left\{ \begin{aligned} \lambda w - \Delta w + \beta U^{\bot} {\rm rot}\,w + \nabla r & \,=\, 0\,,~~~~x \in D\,, \\ {\rm div}\,w &\,=\, 0\,,~~~~x \in D\,, \\
w|_{\partial D} & \,=\, b\,. \end{aligned}\right. \end{equation}
Firstly we prove the representation formula for the problem \eqref{RSed.b}.
\begin{lemma}\label{lem.rep.RSed.b}
Let $|n|=1$ and $b \in L^\infty(\partial D)^2$, and let $\lambda\in\mathbb{C}\setminus (\overline{\mathbb{R}_{-}} \cup \mathcal{Z}(F_n))$. Suppose that $w^{\rm ed}_{b}$ is a solution to \eqref{RSed.b}. Then the $n$-Fourier modes $w^{\rm ed}_{b,n}$ and $\omega^{\rm ed}_{b,n}=({\rm rot}\,w^{\rm ed}_{b,n})e^{-in\theta}$ satisfy the following representations{\rm :}
\begin{align} w^{\rm ed}_{b,n} &\,=\, \frac{T_n (b)}{F_n(\sqrt{\lambda}; \beta)} V_{n}[K_{\mu_{n}}(\sqrt{\lambda}\,\cdot\,)] +\frac{\mathcal{V}_{n}[b](\theta)}{r^2}\,, \label{rep1.lem.rep.RSed.b} \\ \omega^{\rm ed}_{b,n}(r) &\,=\, \frac{T_n (b)}{F_n(\sqrt{\lambda}; \beta)}\,K_{\mu_{n}}(\sqrt{\lambda} r)\,, \label{rep2.lem.rep.RSed.b} \end{align}
where the operator $T_n(b)$ and the vector field $\mathcal{V}_{n}[b](\theta)$ are {\color{black} defined} as
\begin{align} T_n(b) \,=\, \frac{b_{r,n}}{in} - b_{\theta,n}\,,~~~~~~ \mathcal{V}_{n}[b](\theta) \,=\, b_{r,n} e^{in\theta} {\bf e}_{r} + \frac{b_{r,n}}{in} e^{in\theta} {\bf e}_{\theta}\,. \label{def.lem.rep.RSed.boundary} \end{align}
Here $\mathcal{Z}(F_n)$ is the set in \eqref{def.zeros.F} and $V_{n}[\,\cdot\,]$ is the Biot-Savart law in \eqref{def.V_n}. \end{lemma}
\begin{proof} It is easy to see that $u=\frac{T_n (b)}{F_n(\sqrt{\lambda}; \beta)} V_{n}[K_{\mu_{n}}(\sqrt{\lambda}\,\cdot\,)]$ solves
\begin{equation}\label{equation.proof.lem.rep.RSed.boundary} \left\{ \begin{aligned} \lambda u - \Delta u + \beta U^{\bot} {\rm rot}\,u + \nabla p &\,=\, 0\,,~~~~x \in D\,, \\ {\rm div}\,u &\,=\, 0\,,~~~~x \in D\,, \\
u_r|_{\partial D} \,=\, 0\,,~~~~~~
&u_\theta|_{\partial D} \,=\,-T_n (b) {\color{black} e^{in\theta}} \,, \end{aligned}\right. \end{equation}
with some pressure $p\in W^{1,1}_{{\rm loc}}(\overline{\Omega})$. The vector field $\frac{\mathcal{V}_{n}[b](\theta)}{r^2}$ corrects the boundary condition in \eqref{equation.proof.lem.rep.RSed.boundary} so that $u+\frac{\mathcal{V}_{n}[b](\theta)}{r^2}$ solves \eqref{RSed.b} replacing $b$ by $b_n$. The proof is complete. \end{proof}
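We also note that the last step can be checked by a direct computation in the polar coordinates; the following is a minimal verification for $|n|=1$. The correction field satisfies
\begin{align*}
{\rm div}\,\Big(\frac{\mathcal{V}_{n}[b](\theta)}{r^2}\Big) &\,=\, \frac1r \partial_r \Big( \frac{b_{r,n} e^{in\theta}}{r} \Big) + \frac1r \partial_\theta \Big( \frac{b_{r,n} e^{in\theta}}{in\, r^2} \Big) \,=\, -\frac{b_{r,n} e^{in\theta}}{r^3} + \frac{b_{r,n} e^{in\theta}}{r^3} \,=\, 0\,, \\
{\rm rot}\,\Big(\frac{\mathcal{V}_{n}[b](\theta)}{r^2}\Big) &\,=\, \frac1r \partial_r \Big( \frac{b_{r,n} e^{in\theta}}{in\, r} \Big) - \frac1r \partial_\theta \Big( \frac{b_{r,n} e^{in\theta}}{r^2} \Big) \,=\, -\frac{\big(1+(in)^2\big)\, b_{r,n} e^{in\theta}}{in\, r^3} \,=\, 0\,,
\end{align*}
and in fact $\frac{\mathcal{V}_{n}[b](\theta)}{r^2} = \nabla \big( -\frac{b_{r,n}}{r} e^{in\theta} \big)$ since $\frac{1}{in}=-in$ for $n^2=1$. Hence it solves the first two equations in \eqref{RSed.b} with the pressure $\frac{\lambda\, b_{r,n}}{r} e^{in\theta}$ and contributes nothing to the vorticity in \eqref{rep2.lem.rep.RSed.b}, while its trace on $\partial D$ is $\mathcal{V}_{n}[b](\theta)$.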
The estimates for $w^{\rm ed}_{b,n}$ and $\omega^{\rm ed}_{b,n}$ in Lemma \ref{lem.rep.RSed.b} are the main results of this subsection. We recall that $\beta_0$ is the constant in Proposition \ref{prop.est.F}.
\begin{theorem}\label{thm.est.RSed.b}
Let $|n|=1$, $p\in(1,\infty]$, and $q\in(1,\infty)$. Fix $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(p,q,\epsilon)$ independent of $\beta$ such that the following statement holds. Let $b \in L^\infty(\partial D)^2$, $f\in L^q(D)^2$, and $\beta\in(0,\beta_0)$. Then for $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)$ we have
\begin{align}
\|w^{\rm ed}_{b,n}\|_{L^p(D)} &\le
\frac{C}{\beta} |\lambda|^{-\frac1p}
\|b\|_{L^\infty(\partial D)}\,, \label{est1.thm.est.RSed.b} \\
\big\|\frac{w^{\rm ed}_{b,n}}{|x|} \big\|_{L^2(D)} &\le \frac{C}{\beta^2}
\|b\|_{L^\infty(\partial D)}\,, \label{est2.thm.est.RSed.b} \\
\|\omega^{\rm ed}_{b,n}\|_{L^2(D)} &\le \frac{C}{\beta}
\|b\|_{L^\infty(\partial D)}\,, \label{est3.thm.est.RSed.b} \\
\big|\big\langle \omega^{\rm ed}_{b,n}, \frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2(D)} \big| &\le
\frac{C}{\beta^4} |\lambda|^{-1+\frac1q} \|f \|_{L^q(D)}
\|b\|_{L^\infty(\partial D)}\,. \label{est4.thm.est.RSed.b} \end{align}
\end{theorem}
\begin{proof} The estimates \eqref{est1.thm.est.RSed.b} and \eqref{est2.thm.est.RSed.b} follow by Propositions \ref{prop.est.F} and \ref{prop1.est.velocity.RSed.f}, while \eqref{est3.thm.est.RSed.b} follows by Proposition \ref{prop.est.F} and \eqref{est1.lem.est4.bessel} with $p=2$ in Lemma \ref{lem.est4.bessel} in Appendix \ref{app.est.bessel}. The proof for \eqref{est4.thm.est.RSed.b} is parallel to that for \eqref{est3.thm.est.vorticity.RSed.f} in Theorem \ref{thm.est.vorticity.RSed.f}. The proof is complete. \end{proof}
\subsection{Resolvent estimate in region exponentially close to the origin} \label{apriori2}
In this subsection we treat the problem \eqref{RS} when the resolvent parameter $\lambda$ is exponentially close to the origin. We start with the a priori estimate of the term $\big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)}$, $|n|=1$, when $0<|\lambda|<e^{-\frac{1}{6\beta}}$, which is needed in closing the energy computation. We recall that $D$ denotes the exterior disk $\{x\in\mathbb{R}^2~|~|x|>1\}$, and that {\color{black} $R$ and $\gamma$ } are defined in Assumption \ref{assumption}. Let $\beta_0$ be the constant in Proposition \ref{prop.est.F}.
\begin{proposition}\label{prop1.small.lambda.energy.est.resol.}
Let $|n|=1$, $q\in(1,2]$, and $f \in L^q(\Omega)^2$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Suppose that $v \in D(\mathbb{A}_V)$ is a solution to \eqref{RS}. Then we have
\begin{equation}\label{est1.prop.small.lambda.energy.est.resol.} \begin{aligned}
\big|\big\langle ({\rm rot}\,v)_{n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)} \big| \le \frac{C}{\beta^5}
|\lambda|^{-2+\frac2q}
\|f\|_{L^q(\Omega)}^2 + \frac{K}{\beta^5} {\color{black} (d + \beta d^\frac12)^2}
\|\nabla v\|_{L^2(\Omega)}^2\,, \end{aligned} \end{equation}
as long as $\beta\in(0,\beta_0)$. The constant $C$ is independent of {\color{black} $d$ and $\beta$,} and depends on $\gamma$, $q$, and $\epsilon$, while $K$ is greater than $1$ and independent of {\color{black} $d$, $\beta$, and $q$}, and depends on $\gamma$ and $\epsilon$. \end{proposition}
\begin{proof}
In this proof we denote the function space $L^q(D)$ by $L^q$ to simplify notation. Firstly we fix a positive number $\gamma'\in(\frac12, \gamma)$, and set $F=-(R\otimes v + v\otimes R)|_{D}$ and
{\color{black} $b= \mathcal{P}_n v|_{\partial D}$.} It is easy to see that $F$ belongs to the function space $X_{\gamma'}(D)$ defined in \eqref{Xgamma}, and that $b\in L^\infty(\partial D)^2$. Moreover, a direct calculation and Assumption \ref{assumption} imply that
\begin{align}
\||x|^{\gamma'}F\|_{L^2} \le
K_0 d \|\nabla v\|_{L^2(\Omega)}\,,~~~~~~~~
\|b\|_{L^\infty(\partial D)} \le
{\color{black} K_0 d^\frac12} \|\nabla v\|_{L^2(\Omega)}\,. \label{est1.proof.prop.small.lambda.energy.est.resol.} \end{align}
Here $K_0$ denotes the constant which depends on $\gamma$ and is independent of {\color{black} $d$, $\beta$, and $q\in(1,2]$.} {\color{black} The latter estimate in \eqref{est1.proof.prop.small.lambda.energy.est.resol.} is proved as follows: the zero extension of $v\in D(\mathbb{A}_V)$ to the whole plane $\mathbb{R}^2$, which is denoted again by $v$, is an element of $W^{1,2}(\mathbb{R}^2)$. Hence we have
\begin{align*}
\|b\|_{L^\infty(\partial D)} & \le
\int_0^{2\pi} \int_{1-2d}^{1} |\nabla v(r\cos\theta, r\sin\theta)| \,{\rm d} r \,{\rm d} \theta \\ & \le
K_0 d^\frac12 \|\nabla v\|_{L^2(\Omega)}\,, \end{align*}
where the H\"{o}lder inequality is applied in the last line.}
In the following we use the notations in Subsections \ref{subsec.RSed.f}--\ref{subsec.RSed.b}. Since $v|_{D}$ is a solution to the problem \eqref{RSed}, by the solution formula we have the decompositions for $v_{n}$, $|n|=1$:
\begin{align} v_{n} &\,=\,w^{\rm ed}_{f,n} + w^{\rm ed}_{{\rm div}F,n} + w^{\rm ed}_{b,n}~~~~~~{\rm in}~~D\,, \label{decom1.proof.prop.small.lambda.energy.est.resol.} \\ ({\rm rot}\,v)_{n} &\,=\,\omega^{\rm ed}_{f,n} + \omega^{\rm ed}_{{\rm div}F,n} + \omega^{\rm ed}_{b,n}~~~~~~{\rm in}~~D\,. \label{decom2.proof.prop.small.lambda.energy.est.resol.} \end{align}
Then, in view of \eqref{decom2.proof.prop.small.lambda.energy.est.resol.}, the assertion \eqref{est1.prop.small.lambda.energy.est.resol.} follows from estimating the next three terms:
\begin{align*}
\big|\big\langle \omega^{\rm ed}_{f,n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big|\,,~~~~~~
\big|\big\langle \omega^{\rm ed}_{{\rm div}F,n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big|\,,~~~~~~
\big|\big\langle \omega^{\rm ed}_{b,n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big|\,. \end{align*}
(i) Estimate of $|\langle \omega^{\rm ed}_{f,n}, \frac{v_{r,n}}{|x|}\rangle_{L^2}|$: We fix a number $p\in(\frac{2}{\gamma'},\infty)$. Note that $p\in(q,\infty)$ holds since $\frac{2}{\gamma'}>2$. Then setting $p'= \frac{p}{p-1}\in(1,q)$ and using the H\"{o}lder inequality, we see that
\begin{align}
\big|\big\langle \omega^{\rm ed}_{f,n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big| & \le
\big|\big\langle \omega^{\rm ed\,(1)}_{f,n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big|
+ \big\|\frac{\omega^{\rm ed\,(2)}_{f,n}}{|x|}\big\|_{L^{p'}} \|v_{n}\|_{L^p}\,. \label{est3.proof.prop.small.lambda.energy.est.resol.} \end{align}
From \eqref{est1.thm.est.vorticity.RSed.f} and \eqref{est3.thm.est.vorticity.RSed.f} in Theorem \ref{thm.est.vorticity.RSed.f}, \eqref{est2.thm.est.velocity.RSed.divF} in Theorem \ref{thm.est.velocity.RSed.divF}, and \eqref{est2.thm.est.RSed.b} in Theorem \ref{thm.est.RSed.b} we observe that
\begin{align*}
\big|\big\langle \omega^{\rm ed\,(1)}_{f,n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big| & \le
\big|\big\langle \omega^{\rm ed\,(1)}_{f,n},
\frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2} \big|
+ \|\omega^{\rm ed\,(1)}_{f,n}\|_{L^2} \Big(
\big\|\frac{w^{\rm ed}_{{\rm div}F,n}}{|x|} \big\|_{L^2}
+ \big\|\frac{w^{\rm ed}_{b,n}}{|x|}\big\|_{L^2} \Big) \\ & \le \frac{C}{\beta^5}
|\lambda|^{-1+\frac1q}
\|f\|_{L^q} \Big(
|\lambda|^{-1+\frac1q}
\|f\|_{L^q}
+ \big(\||x|^{\gamma'}F\|_{L^2} + \beta \|b\|_{L^\infty(\partial D)} \big) \Big)\,. \end{align*}
{\color{black} Then by \eqref{est1.proof.prop.small.lambda.energy.est.resol.} we find}
\begin{align}
&\big|\big\langle \omega^{\rm ed\,(1)}_{f,n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big| \nonumber \\ &\le \frac{C}{\beta^5}
|\lambda|^{-1+\frac1q}
\|f\|_{L^q} \big(
|\lambda|^{-1+\frac1q}
\|f\|_{L^q} + {\color{black} (d + \beta d^\frac12)}
\|\nabla v\|_{L^2(\Omega)} \big)\,. \label{est5.proof.prop.small.lambda.energy.est.resol.} \end{align}
On the other hand, since $\frac1p+\frac1{p'}=1$ holds, by using \eqref{est2.thm.est.vorticity.RSed.f} replacing $\tilde{q}$ by $p'$ in Theorem \ref{thm.est.vorticity.RSed.f}, \eqref{est1.thm.est.velocity.RSed.f} in Theorem \ref{thm.est.velocity.RSed.f}, \eqref{est1.thm.est.velocity.RSed.divF} in Theorem \ref{thm.est.velocity.RSed.divF}, and \eqref{est1.thm.est.RSed.b} in Theorem \ref{thm.est.RSed.b}, we have
\begin{align}
& \big\|\frac{\omega^{\rm ed\,(2)}_{f,n}}{|x|}\big\|_{L^{p'}} \|v_{n}\|_{L^p} \nonumber \\ & \le \frac{C}{\beta^3}
|\lambda|^{-1+\frac1q}
\|f\|_{L^q} \Big(
|\lambda|^{-1+\frac1q}
\|f\|_{L^q}
+ \big(\||x|^{\gamma'}F\|_{L^2} + \beta \|b\|_{L^\infty(\partial D)} \big) \Big) \nonumber \\ & \le \frac{C}{\beta^3}
|\lambda|^{-1+\frac1q}
\|f\|_{L^q} \big(
|\lambda|^{-1+\frac1q}
\|f\|_{L^q} + {\color{black} (d + \beta d^\frac12)}
\|\nabla v\|_{L^2(\Omega)} \big)\,. \label{est6.proof.prop.small.lambda.energy.est.resol.} \end{align}
Then inserting \eqref{est5.proof.prop.small.lambda.energy.est.resol.} and \eqref{est6.proof.prop.small.lambda.energy.est.resol.} into \eqref{est3.proof.prop.small.lambda.energy.est.resol.} we obtain
\begin{align}
& \big|\big\langle \omega^{\rm ed}_{f,n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big| \nonumber \\ & \le \frac{C}{\beta^5}
|\lambda|^{-1+\frac1q}
\|f\|_{L^q} \big(
|\lambda|^{-1+\frac1q}
\|f\|_{L^q} + {\color{black} (d + \beta d^\frac12)}
\|\nabla v\|_{L^2(\Omega)} \big)\,. \label{mainest1.proof.prop.small.lambda.energy.est.resol.} \end{align}
\noindent (ii) Estimate of $|\langle \omega^{\rm ed}_{{\rm div}F,n}, \frac{v_{r,n}}{|x|} \rangle_{L^2}|$: By using the H\"{o}lder inequality we find
\begin{equation}\label{est7.proof.prop.small.lambda.energy.est.resol.} \begin{aligned}
\big|\big\langle \omega^{\rm ed}_{{\rm div}F,n},
\frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big| & \le
\|\omega^{\rm ed}_{{\rm div}F,n}\|_{L^2} \Big(
\big\|\frac{w^{\rm ed}_{{\rm div}F,n}}{|x|} \big\|_{L^2}
+ \big\|\frac{w^{\rm ed}_{b,n}}{|x|}\big\|_{L^2} \Big) \\ & \quad
+ \big|\big\langle \omega^{\rm ed\,(1)}_{{\rm div}F,n},
\frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2} \big|
+ \|\frac{\omega^{\rm ed\,(2)}_{{\rm div}F,n}}{|x|}\|_{L^1}
\big\|w^{\rm ed}_{f,n} \big\|_{L^\infty}\,. \end{aligned} \end{equation}
By Theorem \ref{thm.est.vorticity.RSed.divF}, \eqref{est2.thm.est.velocity.RSed.divF} in Theorem \ref{thm.est.velocity.RSed.divF}, and \eqref{est2.thm.est.RSed.b} in Theorem \ref{thm.est.RSed.b} we see that
\begin{align}
\|\omega^{\rm ed}_{{\rm div}F,n}\|_{L^2} \Big(
\big\|\frac{w^{\rm ed}_{{\rm div}F,n}}{|x|} \big\|_{L^2}
+ \big\|\frac{w^{\rm ed}_{b,n}}{|x|}\big\|_{L^2} \Big) & \le \frac{K}{\beta^5}
\||x|^{\gamma'}F\|_{L^2}
\big(\||x|^{\gamma'}F\|_{L^2} + \beta \|b\|_{L^\infty(\partial D)} \big)\nonumber \\ & \le \frac{K}{\beta^{5}} {\color{black} (d + \beta d^\frac12)^2}
\|\nabla v\|_{L^2(\Omega)}^2\,, \label{est8.proof.prop.small.lambda.energy.est.resol.} \end{align}
where we note that the constant $K$ depends only on $\epsilon$ and $\gamma$, and is independent of {\color{black} $d$ and $\beta$,} and, in particular, of $q\in(1,2]$. Theorem \ref{thm.est.vorticity.RSed.divF} and \eqref{est1.thm.est.velocity.RSed.f} with $p=\infty$ in Theorem \ref{thm.est.velocity.RSed.f} lead to
\begin{align}
& \big|\big\langle \omega^{\rm ed\,(1)}_{{\rm div}F,n},
\frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2} \big|
+ \|\frac{\omega^{\rm ed\,(2)}_{{\rm div}F,n}}{|x|}\|_{L^1}
\big\|w^{\rm ed}_{f,n} \big\|_{L^\infty} \nonumber \\ & \le \frac{C}{\beta^5}
|\lambda|^{-1+\frac1q}
\|f\|_{L^q} \||x|^{\gamma'}F\|_{L^2} \le \frac{C}{\beta^{5}} {\color{black} d}
|\lambda|^{-1+\frac1q}
\|f\|_{L^q}
\|\nabla v\|_{L^2(\Omega)}\,. \label{est9.proof.prop.small.lambda.energy.est.resol.} \end{align}
Inserting \eqref{est8.proof.prop.small.lambda.energy.est.resol.} and \eqref{est9.proof.prop.small.lambda.energy.est.resol.} into \eqref{est7.proof.prop.small.lambda.energy.est.resol.} we have
\begin{align}
& \big|\big\langle \omega^{\rm ed}_{{\rm div}F,n},
\frac{v_{r,n}}{|x|} \big\rangle_{L^2} \big| \nonumber \\ & \le \frac{C}{\beta^{5}} {\color{black} d}
|\lambda|^{-1+\frac1q}
\|f\|_{L^q(\Omega)}
\|\nabla v\|_{L^2(\Omega)} + \frac{K}{\beta^{5}} {\color{black} (d + \beta d^\frac12)^2}
\|\nabla v\|_{L^2(\Omega)}^2\,. \label{mainest2.proof.prop.small.lambda.energy.est.resol.} \end{align}
\noindent (iii) Estimate of $|\langle \omega^{\rm ed}_{b,n}, \frac{v_{r,n}}{|x|}\rangle_{L^2}|$: Using the Schwarz inequality and Theorem \ref{thm.est.RSed.b} we find
\begin{align}
&\big|\big\langle \omega^{\rm ed}_{b,n}, \frac{v_{r,n}}{|x|} \big\rangle_{L^2(D)} \big| \le
\big|\big\langle \omega^{\rm ed}_{b,n},
\frac{(w^{\rm ed}_{f,r})_n}{|x|} \big\rangle_{L^2} \big|
+ \|\omega^{\rm ed}_{b,n}\|_{L^2} \Big(
\big\|\frac{w^{\rm ed}_{{\rm div}F,n}}{|x|} \big\|_{L^2}
+ \big\|\frac{w^{\rm ed}_{b,n}}{|x|}\big\|_{L^2} \Big) \nonumber \\ & \le \frac{1}{\beta^4} \Big(
C |\lambda|^{-1+\frac1q}
\|f\|_{L^q}
\|b\|_{L^\infty(\partial D)}
+ K \|b\|_{L^\infty(\partial D)}
\big(\||x|^{\gamma'}F\|_{L^2}
+ \beta \|b\|_{L^\infty(\partial D)} \big) \Big) \nonumber \\ & \le {\color{black} \frac{C}{\beta^{5}} \beta d^\frac12 }
|\lambda|^{-1+\frac1q}
\|f\|_{L^q(\Omega)}
\|\nabla v\|_{L^2(\Omega)} + \frac{K}{\beta^5} {\color{black} (d + \beta d^\frac12)^2}
\|\nabla v\|_{L^2(\Omega)}^2\,. \label{mainest3.proof.prop.small.lambda.energy.est.resol.} \end{align}
Finally we obtain the assertion \eqref{est1.prop.small.lambda.energy.est.resol.} by collecting \eqref{mainest1.proof.prop.small.lambda.energy.est.resol.}, \eqref{mainest2.proof.prop.small.lambda.energy.est.resol.}, and \eqref{mainest3.proof.prop.small.lambda.energy.est.resol.}, and using the Young inequality in the form
\begin{align*} & \frac{C}{\beta^5} {\color{black} (d + \beta d^\frac12) }
|\lambda|^{-1+\frac1q}
\|f\|_{L^q(\Omega)}
\|\nabla v\|_{L^2(\Omega)} \\ & \le \frac{C}{\beta^5}
|\lambda|^{-2+\frac2q}
\|f\|_{L^q(\Omega)}^2 + {\frac{{\color{black} (d + \beta d^\frac12)^2}}{\beta^5} }
\|\nabla v\|_{L^2(\Omega)}^2\,. \end{align*}
The proof is complete. \end{proof}
Now we shall establish the resolvent estimate for \eqref{RS} when $0<|\lambda|<e^{-\frac{1}{6\beta}}$, by closing the energy computation starting from Proposition \ref{prop.general.energy.est.resol.} in Subsection \ref{apriori1}.
\begin{proposition}\label{prop2.small.lambda.energy.est.resol.} Let $\epsilon \in (0,\frac{\pi}{4})$, and let $\beta_1$ {\color{black} and $d_1$}, $\beta_0$, and $K$ be the constants respectively in Propositions \ref{prop.general.energy.est.resol.}, \ref{prop.est.F}, and \ref{prop1.small.lambda.energy.est.resol.}. Then the following statements hold.
\noindent {\rm (1)} Fix positive numbers $\beta_3\in(0, {\color{black} \min\{\beta_1, d_1^\frac12, \beta_0\}} )$ and {\color{black} $\mu_\ast\in(0, (64K)^{-1})$.} Then the set
\begin{align} \Sigma_{\frac{3}{4}\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0) \label{set.thm.small.lambda.energy.est.resol.} \end{align}
is included in the resolvent set $\rho(-\mathbb{A}_V)$ for any $\beta\in(0,\beta_3)$ and {\color{black} $d\in(0,\mu_\ast \beta^2)$. }
\noindent {\rm (2)} Let $q\in(1,2]$ and $f\in L^2_\sigma(\Omega) \cap L^q(\Omega)^2$. Then we have
\begin{equation}\label{est1.thm.small.lambda.energy.est.resol.} \begin{split}
\|(\lambda+{\mathbb A}_V)^{-1} f\|_{L^2(\Omega)} & \le
\frac{C}{\beta^2} |\lambda|^{-\frac32+\frac1q}
\|f\|_{L^q(\Omega)}\,,~~~~ \lambda\in \Sigma_{\frac{3}{4}\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)\,,\\
\|\nabla (\lambda+{\mathbb A}_V)^{-1} f\|_{L^2(\Omega)} & \le
\frac{C}{\beta^2} |\lambda|^{-1+\frac1q}
\|f\|_{L^q(\Omega)}\,,~~~~ \lambda\in \Sigma_{\frac{3}{4}\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)\,, \end{split} \end{equation}
as long as $\beta\in(0,\beta_3)$ and {\color{black} $d\in(0,\mu_\ast \beta^2)$.} The constant $C$ is independent of $\beta$. \end{proposition}
\begin{proof} (1) Let $\lambda\in\Sigma_{\frac{3}{4}\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)$ and suppose that $v \in D(\mathbb{A}_V)$ is a solution to \eqref{RS}. Since {\color{black} $d\in(0,\mu_\ast \beta^2)$} ensures {\color{black} $d\in(0,d_1)$ and $K (d + \beta d^\frac12)^2 \beta^{-4} \le \frac{1}{16}$} (indeed, $d + \beta d^\frac12 \le (\mu_\ast + \mu_\ast^\frac12)\,\beta^2 \le 2 \mu_\ast^\frac12 \beta^2$, and thus $K (d + \beta d^\frac12)^2 \beta^{-4} \le 4 K \mu_\ast \le \frac{1}{16}$ by the choice $\mu_\ast\in(0,(64K)^{-1})$), by inserting \eqref{est1.prop.small.lambda.energy.est.resol.} in Proposition \ref{prop1.small.lambda.energy.est.resol.} into \eqref{est1.prop.general.energy.est.resol.} and \eqref{est2.prop.general.energy.est.resol.} in Proposition \ref{prop.general.energy.est.resol.}, and by combining them we find
\begin{equation}\label{est1.proof.thm.small.lambda.energy.est.resol.} \begin{aligned}
&\big(|{\rm Im}(\lambda)| + {\rm Re}(\lambda)\big) \| v\|_{L^2(\Omega)}^2
+ \frac14 \|\nabla v\|_{L^2(\Omega)}^2 \\ &~~~~~~ \le
\frac{C}{\beta^4} |\lambda|^{-2+\frac2q} \|f\|_{L^q(\Omega)}^2
+ C\|f\|_{L^q(\Omega)}^{\frac{2q}{3q-2}} \|v\|_{L^2(\Omega)}^{\frac{4(q-1)}{3q-2}}\,. \end{aligned} \end{equation}
Then, since $\lambda\in\Sigma_{\frac{3}{4}\pi-\epsilon}$ implies that $|{\rm Im}(\lambda)| + {\rm Re}(\lambda)>c|\lambda|$ holds with some positive constant $c=c(\epsilon)$ (indeed, writing $\lambda=|\lambda| e^{i\theta}$ with $|\theta|\le \frac34 \pi-\epsilon$, we have $|{\rm Im}(\lambda)| + {\rm Re}(\lambda) = |\lambda|\,(|\sin\theta| + \cos\theta) = \sqrt{2}\,|\lambda| \cos\big(|\theta|-\frac{\pi}{4}\big) \ge \sqrt{2} \sin(\epsilon)\, |\lambda|$), the assertion $\Sigma_{\frac{3}{4}\pi-\epsilon} \cap \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)\subset\rho(-\mathbb{A}_V)$ follows. \\ \noindent (2) The estimate \eqref{est1.thm.small.lambda.energy.est.resol.} can be easily checked by using \eqref{est1.proof.thm.small.lambda.energy.est.resol.}. The proof is complete.
\end{proof}
\section{Proof of Theorem \ref{maintheorem}} \label{sec.maintheorem}
In this section we prove Theorem \ref{maintheorem}. The proof is an easy consequence of Propositions \ref{prop.laege.lambda.energy.est.resol.} and \ref{prop2.small.lambda.energy.est.resol.} respectively in Subsections \ref{apriori1} and \ref{apriori2}.
\begin{proofx}{Theorem \ref{maintheorem}} {\color{black} Firstly we note that it suffices to prove the following two estimates:
\begin{align}
\|e^{-t {\mathbb A}_V} f\|_{L^2(\Omega)} & \le \frac{C}{\beta^2} t^{-\frac1q+\frac12}
\|f\|_{L^q(\Omega)}\,, ~~~~~~ t>0\,, \label{est0.proof.maintheorem} \\
\|\nabla e^{-t {\mathbb A}_V} f\|_{L^2(\Omega)} & \le \frac{C}{\beta^2} t^{-\frac1q}
\|f\|_{L^q(\Omega)}\,, ~~~~~~ t>0\,, \label{est0'.proof.maintheorem} \end{align}
for $f\in L^2_\sigma(\Omega) \cap L^q(\Omega)^2$. Then the assertions \eqref{est1.maintheorem} and \eqref{est2.maintheorem} are easy consequences of the Gagliardo-Nirenberg inequality.} Let $\beta_2$ be the constant in Proposition \ref{prop.laege.lambda.energy.est.resol.}. {\color{black} We note that $\mathcal{S}_{\beta_2} \cap \mathcal{B}_{e^{-\frac{1}{6\beta_2}}}(0) \neq \emptyset$ holds since $12 e^{\frac{1}{e}} \beta_2^2 < 1$ follows from the condition $\beta_2\in(0,\frac{1}{12})$.} Then there is a constant $\epsilon_0 \in(\frac{\pi}{4},\frac{\pi}{2})$ such that the sector $\Sigma_{\pi-\epsilon_0}$ is included in the set $\mathcal{S}_{\beta} \cup \mathcal{B}_{e^{-\frac{1}{6\beta}}}(0)$ for any {\color{black} $\beta\in(0,\beta_2)$.} \\ Let $\beta_3$ be the constant in Proposition \ref{prop2.small.lambda.energy.est.resol.}. Fix a number $\beta_\ast\in (0,{\color{black} \min\{\beta_2, \beta_3\}})$. Then by Propositions \ref{prop.laege.lambda.energy.est.resol.} and \ref{prop2.small.lambda.energy.est.resol.}, there is a positive constant $\mu_\ast$ such that the sector $\Sigma_{\pi-\epsilon_0}$ is included in the resolvent set $\rho(-\mathbb{A}_V)$ as long as $\beta\in(0,\beta_\ast)$ and {\color{black} $d\in(0,\mu_\ast \beta^2)$.} Moreover, from the same propositions, for $q\in(1,2]$ and $f\in L^2_\sigma(\Omega) \cap L^q(\Omega)^2$ we have
\begin{equation}\label{est1.proof.maintheorem} \begin{aligned}
\|(\lambda+{\mathbb A}_V)^{-1} f\|_{L^2(\Omega)} & \le
\frac{C}{\beta^2} |\lambda|^{-\frac32+\frac1q}
\|f\|_{L^q(\Omega)}\,,~~~~ \lambda\in{\color{black} \Sigma_{\pi-\epsilon_0}}\,,\\
\|\nabla (\lambda+{\mathbb A}_V)^{-1} f\|_{L^2(\Omega)} & \le
\frac{C}{\beta^2} |\lambda|^{-1+\frac1q}
\|f\|_{L^q(\Omega)}\,,~~~~ \lambda\in{\color{black} \Sigma_{\pi-\epsilon_0}}\,. \end{aligned} \end{equation}
In particular, the first line in \eqref{est1.proof.maintheorem} implies the estimate
{\color{black} \eqref{est0.proof.maintheorem}} for $q=2$. Next we consider the case $q\in(1,2)$. Fix a number $\phi\in(\frac{\pi}{2},\pi-\epsilon_0)$ and take a curve $\gamma(b)=\{z\in\mathbb{C}~|~|{\rm arg}\,z|=\phi\,,\,|z|\ge b \} \cup \{z\in\mathbb{C}~|~|{\rm arg}\,z|\le\phi\,,\,|z|=b \}$, $b\in(0,1)$, oriented counterclockwise. Then the semigroup $e^{-t {\mathbb A}_V}$ admits a Dunford integral representation
\begin{align*} e^{-t {\mathbb A}_V} f \,=\, \frac{1}{2\pi i} \int_{\gamma(b)} e^{t \lambda }(\lambda+{\mathbb A}_V)^{-1} f \,{\rm d} \lambda\,,~~~~~~ t>0\,, \end{align*}
for $f\in L^2_\sigma(\Omega) \cap L^q(\Omega)^2$. Then by taking the limit $b\to0$ we observe from \eqref{est1.proof.maintheorem} that
\begin{align*}
\|e^{-t {\mathbb A}_V} f\|_{L^2(\Omega)} & \le
\frac{C}{\beta^2} \|f\|_{L^q(\Omega)} \int_{0}^{\infty} s^{-\frac32+\frac1q} e^{\color{black}{(\cos \phi)ts}} \,{\rm d} s \\ & \le \frac{C}{\beta^2} t^{-\frac1q+\frac12}
\|f\|_{L^q(\Omega)}\,,~~~~~~ t>0\,, \end{align*}
which shows that {\color{black} \eqref{est0.proof.maintheorem}} holds {\color{black} for $q\in(1,2)$}. The estimate {\color{black} \eqref{est0'.proof.maintheorem}} can be obtained in a similar manner using the Dunford integral. This completes the proof of Theorem \ref{maintheorem}. \end{proofx}
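\noindent We remark that the last integral above is finite and can be computed explicitly; since $\cos \phi<0$ and $-\frac32+\frac1q>-1$ for $q\in(1,2)$, the change of variables $u=-(\cos \phi)ts$ gives
\begin{align*}
\int_{0}^{\infty} s^{-\frac32+\frac1q} e^{(\cos \phi)ts} \,{\rm d} s \,=\, \Gamma\Big(\frac1q-\frac12\Big)\, \big( -(\cos \phi)\, t \big)^{\frac12-\frac1q}\,,
\end{align*}
which accounts for the factor $t^{-\frac1q+\frac12}$ in \eqref{est0.proof.maintheorem}.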
\appendix
\section{Asymptotics of the order $\mu_n(\beta)$ for small $\beta$} \label{app.est.mu}
This appendix is devoted to the asymptotic behavior of $\mu_n(\beta)=(n^2+in\beta)^\frac12$, ${\rm Re}(\mu_n)>0$, with $|n|=1$, as the constant $\beta\in(0,1)$ in Assumption \ref{assumption} tends to zero. The following result is essentially proved in \cite{Ma1}.
\begin{lemma}[ {\rm\cite[Lemma B.1]{Ma1}}]\label{lem.est.mu}
Let $|n|=1$. Then $\mu_n(\beta)$ satisfies the expansion
\begin{align} {\rm Re}(\mu_n(\beta)) &\,=\, 1+\frac{\beta^2}{8} + O(\beta^4)\,, ~~~~~~ 0<\beta\ll 1\,, \label{est1.lem.est.mu} \\ {\rm Im}(\mu_n(\beta)) &\,=\, \frac{\beta}{2} + O(\beta^3)\,, ~~~~~~ 0<\beta\ll 1\,. \label{est2.lem.est.mu} \end{align}
\end{lemma}
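For instance, for $n=1$ the expansions can be checked directly: since $\mu_1(\beta)=(1+i\beta)^{\frac12}$ with ${\rm Re}(\mu_1)>0$, the Taylor expansion $(1+w)^{\frac12} = 1+\frac{w}{2}-\frac{w^2}{8}+O(|w|^3)$ applied with $w=i\beta$ gives
\begin{align*}
\mu_1(\beta) \,=\, 1 + \frac{i\beta}{2} + \frac{\beta^2}{8} + O(\beta^3)\,, ~~~~~~ 0<\beta\ll 1\,,
\end{align*}
which is consistent with \eqref{est1.lem.est.mu} and \eqref{est2.lem.est.mu}.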
\section{Estimates of the Modified Bessel Function} \label{app.est.bessel}
In this appendix we collect the basic estimates for the modified Bessel functions $ K_{\mu_n}(z)$ and $ I_{\mu_n}(z)$ of the order $\mu_n=(n^2+in\beta)^\frac12$, ${\rm Re}(\mu_n)>0$, with $|n|=1$ and $\beta\in(0,1)$. We are especially interested in the $\beta$-dependence in each estimate, since our analysis in Section \ref{sec.RSed}, where the results in this appendix are applied, essentially requires the smallness of $\beta$. We denote by $\mathcal{B}_\rho(0)$ the disk in the complex plane $\mathbb{C}$ centered at the origin with radius $\rho>0$.
\begin{lemma}\label{lem.est1.bessel}
Let $|n|=1$, $k=0,1$, and $R\in[1,\infty)$. Fix $\epsilon\in(0,\frac{\pi}{2})$. Then there is a positive constant $C=C(R,\epsilon)$ independent of $\beta$ such that the following statements hold.
\noindent {\rm (1)} Let $z \in\Sigma_{\epsilon} \cap \mathcal{B}_R(0)$. Then $K_{\mu_n}(z)$ and $K_{\mu_n-1}(z)$ satisfy the expansions
\begin{align} K_{\mu_n}(z) &\,=\, \frac{\Gamma(\mu_n)}{2} \big(\frac{z}{2}\big)^{-\mu_n} + R^{(1)}_n(z)\,, \label{est1.lem.est1.bessel} \\ K_{\mu_n-1}(z) &\,=\, \frac{\pi}{2\sin((\mu_n-1)\pi)} \big( \frac{1}{\Gamma(2-\mu_n)} \big(\frac{z}{2}\big)^{-\mu_n+1} - \frac{1}{\Gamma(\mu_n)} \big(\frac{z}{2}\big)^{\mu_n-1} \big) + R^{(2)}_n(z)\,. \label{est2.lem.est1.bessel} \end{align}
Here $\Gamma(z)$ denotes the Gamma function and the remainders $R^{(1)}_n(z)$ and $R^{(2)}_n(z)$ satisfy
\begin{align}
|R^{(1)}_n(z)|
&\le C|z|^{2-{\rm Re}(\mu_n)}(1+|\log|z||)\,,~~~~~~ z \in\Sigma_{\epsilon} \cap \mathcal{B}_R(0)\,, \label{est3.lem.est1.bessel} \\
|R^{(2)}_n(z)|
&\le C|z|^{3-{\rm Re}(\mu_n)}(1+|\log|z||)\,,~~~~~~ z \in\Sigma_{\epsilon} \cap \mathcal{B}_R(0)\,. \label{est4.lem.est1.bessel} \end{align}
\noindent {\rm (2)} The following estimates hold.
\begin{align}
|I_{\mu_n+k}(z)|
&\le C |z|^{{\rm Re}(\mu_n)+k}\,,~~~~~~ z\in\Sigma_{\epsilon} \cap \mathcal{B}_R(0)\,, \label{est5.lem.est1.bessel} \\
|K_{\mu_n-k}(z)|
&\le C |z|^{-\frac12} e^{-{\rm Re}(z)}\,,~~~~~~ z \in\Sigma_\epsilon \cap \mathcal{B}_R(0)^{{\rm c}}\,, \label{est6.lem.est1.bessel} \\
|I_{\mu_n+k}(z)|
&\le C |z|^{-\frac12} e^{{\rm Re}(z)}\,,~~~~~~ z \in\Sigma_\epsilon \cap \mathcal{B}_R(0)^{{\rm c}}\,. \label{est7.lem.est1.bessel} \end{align}
\end{lemma}
\begin{proof} (1) The expansions \eqref{est1.lem.est1.bessel} and \eqref{est2.lem.est1.bessel} follow from the definition of $K_\mu(z)$ in Subsection \ref{subsec.RSed.f} combined with the well-known Euler reflection formula for the Gamma function. The estimates of the remainder terms \eqref{est3.lem.est1.bessel} and \eqref{est4.lem.est1.bessel} are also consequences of the same definition, and we omit the calculations which are easily checked. \par \noindent (2) The estimate \eqref{est5.lem.est1.bessel} directly follows from the definition of $I_\mu(z)$ in Subsection \ref{subsec.RSed.f}. In order to prove \eqref{est6.lem.est1.bessel} and \eqref{est7.lem.est1.bessel}, let us recall the integral formulas for $K_{\mu}(z)$ and $I_{\mu}(z)$:
\begin{align*} K_{\mu}(z) &\,=\, \frac{\pi^\frac12}{\Gamma(\mu+\frac12)} \big(\frac{z}{2}\big)^\mu \int_0^\infty e^{-z\cosh t} (\sinh t)^{2\mu} \,{\rm d} t\,,\\ I_{\mu}(z) &\,=\, \frac{1}{\pi^{\frac12}\,\Gamma(\mu+\frac12)} \big( \frac{z}{2} \big)^{\mu} \int_{0}^{\pi} e^{z \cos \theta} (\sin \theta)^{2\mu} \,{\rm d} \theta \,, \end{align*}
which are valid if ${\rm Re}(\mu)>-\frac12$ and $z\in\Sigma_{\frac{\pi}{2}}$ (see \cite{Abramowitz}, page 376). Then \eqref{est6.lem.est1.bessel} and \eqref{est7.lem.est1.bessel}, especially the absence of the $\beta$-singularity in the right-hand sides, can be proved by using the identities $\cosh^2 t-\sinh^2 t=1$ and $\cos^2\theta + \sin^2\theta=1$. The proof is complete. \end{proof}
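We also remark that the constants in \eqref{est6.lem.est1.bessel} and \eqref{est7.lem.est1.bessel} can indeed be taken uniformly in $\beta\in(0,1)$: for $|n|=1$ the order $\mu_n$ stays in a fixed compact set with ${\rm Re}(\mu_n)\ge1$, so $|\Gamma(\mu_n+\frac12)|$ is bounded from below, while $|(\sinh t)^{2\mu_n}| = (\sinh t)^{2{\rm Re}(\mu_n)}$ and $|(\sin \theta)^{2\mu_n}| = (\sin \theta)^{2{\rm Re}(\mu_n)}$ since the bases are positive.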
In the following we present three lemmas without proofs, since they are straightforward adaptations of Lemma \ref{lem.est1.bessel} and Lemma \ref{lem.est.mu} in Appendix \ref{app.est.mu}.
\begin{lemma}\label{lem.est2.bessel}
Let $|n|=1$ and $k=0,1$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a constant $C>0$ independent of $\beta$ such that the following statements hold.
\noindent {\rm (1)} If $1\le \tau \le r \le {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\bigg|\int_\tau^r s^{2-k} K_{\mu_n-k}(\sqrt{\lambda} s) \,{\rm d} s\bigg| \le \frac{C}{\beta^k}
|\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}+\frac{k}{2}} r^{-{\rm Re}(\mu_n)+3}\,. \label{est1.lem.est2.bessel} \end{align}
\noindent {\rm (2)} If $1 \le \tau\le {\rm Re}(\sqrt{\lambda})^{-1} \le r$, then
\begin{align}
\bigg|\int_\tau^r s^{2-k} K_{\mu_n-k}(\sqrt{\lambda} s) \,{\rm d} s\bigg| \le \frac{C}{\beta^k}
\,|\lambda|^{-\frac32+\frac{k}{2}}\,. \label{est2.lem.est2.bessel} \end{align}
\noindent {\rm (3)} If ${\rm Re}(\sqrt{\lambda})^{-1} \le \tau\le r$, then
\begin{align}
\int_\tau^r |s^{2-k} K_{\mu_n-k}(\sqrt{\lambda} s)| \,{\rm d} s \le
C|\lambda|^{-\frac34} \tau^{\frac32-k} e^{-{\rm Re}(\sqrt{\lambda}) \tau}\,. \label{est3.lem.est2.bessel} \end{align}
\noindent {\rm (4)} If $1\le \tau \le {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\bigg|\int_\tau^\infty s^{-k} K_{\mu_n-k}(\sqrt{\lambda} s) \,{\rm d} s\bigg| \le \frac{C}{\beta^{1+k}}
|\lambda|^{-\frac{{\rm Re}\,(\mu_n)}{2}+\frac{k}{2}} \tau^{-{\rm Re}\,(\mu_n)+1}\,. \label{est4.lem.est2.bessel} \end{align}
\noindent {\rm (5)} If $\tau \ge {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\int_\tau^\infty |s^{-k} K_{\mu_n-k}(\sqrt{\lambda} s)| \,{\rm d} s \le
C |\lambda|^{-\frac34} \tau^{-\frac12-k} e^{-{\rm Re}(\sqrt{\lambda}) \tau}\,. \label{est5.lem.est2.bessel} \end{align}
\end{lemma}
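For orientation, we indicate the case $k=0$ of {\rm (1)}: if $1\le s \le {\rm Re}(\sqrt{\lambda})^{-1}$ then $|\sqrt{\lambda} s| \le {\rm Re}(\sqrt{\lambda})^{-1} |\sqrt{\lambda}| \le \frac{1}{\sin(\frac{\epsilon}{2})}$, so Lemma \ref{lem.est1.bessel} (1) yields $|K_{\mu_n}(\sqrt{\lambda} s)| \le C |\sqrt{\lambda} s|^{-{\rm Re}(\mu_n)}$, and hence
\begin{align*}
\bigg|\int_\tau^r s^{2} K_{\mu_n}(\sqrt{\lambda} s) \,{\rm d} s\bigg| \le C |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}} \int_\tau^r s^{2-{\rm Re}(\mu_n)} \,{\rm d} s \le C |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}} r^{-{\rm Re}(\mu_n)+3}\,,
\end{align*}
which is \eqref{est1.lem.est2.bessel} with $k=0$; the case $k=1$ is handled in the same way by using \eqref{est2.lem.est1.bessel}.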
\begin{lemma}\label{lem.est3.bessel}
Let $|n|=1$ and $k=0,1$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a constant $C>0$ independent of $\beta$ such that the following statements hold.
\noindent {\rm (1)} If $1\le \tau \le {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\int_1^\tau |s^{2-k} I_{\mu_n+k}(\sqrt{\lambda} s)| \,{\rm d} s \le
C |\lambda|^{\frac{{\rm Re}(\mu_n)}{2 }+ \frac{k}{2}} \tau^{{\rm Re}(\mu_n)+3}\,. \label{est1.lem.est3.bessel} \end{align}
\noindent {\rm (2)} If $\tau \ge {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\int_1^\tau |s^{2-k} I_{\mu_n+k}(\sqrt{\lambda} s)| \,{\rm d} s
\le C |\lambda|^{-\frac34} \tau^{\frac32-k} e^{{\rm Re}(\sqrt{\lambda}) \tau}\,. \label{est2.lem.est3.bessel} \end{align}
\noindent {\rm (3)} If $1\le r \le \tau \le {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\int_r^\tau |s^{-k} I_{\mu_n+k}(\sqrt{\lambda} s)| \,{\rm d} s
\le C |\lambda|^{\frac{{\rm Re}(\mu_n)}{2}+\frac{k}{2}} \tau^{{\rm Re}(\mu_n)+1}\,. \label{est3.lem.est3.bessel} \end{align}
\noindent {\rm (4)} If $1\le r \le {\rm Re}(\sqrt{\lambda})^{-1} \le \tau$, then
\begin{align}
\int_r^\tau |s^{-k} I_{\mu_n+k}(\sqrt{\lambda} s)| \,{\rm d} s
\le C |\lambda|^{-\frac34} \tau^{-\frac12-k} e^{{\rm Re}(\sqrt{\lambda}) \tau}\,. \label{est4.lem.est3.bessel} \end{align}
\noindent {\rm (5)} If ${\rm Re}(\sqrt{\lambda})^{-1} \le r \le \tau$, then
\begin{align}
\int_r^\tau |s^{-k} I_{\mu_n+k}(\sqrt{\lambda} s)| \,{\rm d} s
\le C |\lambda|^{-\frac34} \tau^{-\frac12-k} e^{{\rm Re}(\sqrt{\lambda}) \tau}\,. \label{est5.lem.est3.bessel} \end{align}
\end{lemma}
\begin{lemma}\label{lem.est4.bessel}
Let $|n|=1$ and $p\in(1,\infty)$, and let $\lambda\in\Sigma_{\pi-\epsilon} \cap \mathcal{B}_1(0)$ for some $\epsilon\in(0,\frac{\pi}{2})$. Then there is a constant $C>0$ independent of $\beta$ such that the following statements hold.
\noindent {\rm (1)} If additionally $p\in[2,\infty)$, then
\begin{align}
\|K_{\mu_n}(\sqrt{\lambda}\,\cdot\,)\|_{L^p((1,\infty); r\,{\rm d} r)}
\le \frac{C}{(p {\rm Re}(\mu_n)-2)^\frac1p} |\lambda|^{-\frac{{\rm Re}\,(\mu_n)}{2}}\,. \label{est1.lem.est4.bessel} \end{align}
{\rm (2)} If $1 \le r \le {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\bigg( \int_r^\infty |s^{-1} K_{\mu_n}(\sqrt{\lambda} s)|^p s \,{\rm d} s \bigg)^\frac1p
\le C |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}} r^{-{\rm Re}(\mu_n)-1+\frac2p}\,. \label{est2.lem.est4.bessel} \end{align}
{\rm (3)} If $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\bigg( \int_r^\infty |s^{-1} K_{\mu_n}(\sqrt{\lambda} s)|^p s \,{\rm d} s \bigg)^\frac1p
\le C |\lambda|^{-\frac14-\frac{1}{2p}} r^{-\frac32+\frac1p} e^{-{\rm Re}(\sqrt{\lambda}) r}\,. \label{est3.lem.est4.bessel} \end{align}
{\rm (4)} If $1 \le r \le {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\bigg( \int_1^r |s^{-1} I_{\mu_n}(\sqrt{\lambda} s)|^p s \,{\rm d} s \bigg)^\frac1p
\le C |\lambda|^{\frac{{\rm Re}\,(\mu_n)}{2}} r^{{\rm Re}(\mu_n)-1+\frac2p}\,. \label{est4.lem.est4.bessel} \end{align}
{\rm (5)} If $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\bigg(\int_1^r |s^{-1} I_{\mu_n}(\sqrt{\lambda} s)|^p s \,{\rm d} s \bigg)^\frac1p
& \le C |\lambda|^{-\frac14-\frac1{2p}} r^{-\frac32+\frac1p} e^{{\rm Re}(\sqrt{\lambda}) r}\,. \label{est5.lem.est4.bessel} \end{align}
{\rm (6)} If additionally $p\in[2,\infty)$ and if $1 \le r \le {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{equation}\label{est6.lem.est4.bessel} \begin{aligned}
& \bigg( \int_r^\infty |K_{\mu_n}(\sqrt{\lambda} s)|^p s \,{\rm d} s \bigg)^\frac1p
+ \beta \int_r^\infty |K_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s \\ & \le
\frac{C}{\beta} |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}} r^{-{\rm Re}(\mu_n)+1}\,. \end{aligned} \end{equation}
{\rm (7)} If additionally $p\in[2,\infty)$ and if $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\bigg( \int_r^\infty |K_{\mu_n}(\sqrt{\lambda} s)|^p s \,{\rm d} s \bigg)^\frac1p
+ \int_r^\infty |K_{\mu_{n}}(\sqrt{\lambda} s)| \,{\rm d} s
& \le C |\lambda|^{-\frac12} e^{-{\rm Re}(\sqrt{\lambda}) r}\,. \label{est7.lem.est4.bessel} \end{align}
{\rm (8)} If additionally $p\in[2,\infty)$ and if $1 \le r \le {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\bigg( \int_1^r |I_{\mu_n}(\sqrt{\lambda} s)|^p s \,{\rm d} s \bigg)^\frac1p
+ \int_1^r |I_{\mu_n}(\sqrt{\lambda} s)| \,{\rm d} s
\le C |\lambda|^{\frac{{\rm Re} (\mu_n)}{2}} r^{{\rm Re}(\mu_n)+1}\,. \label{est8.lem.est4.bessel} \end{align}
{\rm (9)} If additionally $p\in[2,\infty)$ and if $r \ge {\rm Re}(\sqrt{\lambda})^{-1}$, then
\begin{align}
\bigg( \int_1^r |I_{\mu_n}(\sqrt{\lambda} s)|^p s \,{\rm d} s \bigg)^\frac1p
+ \int_1^r |I_{\mu_{n}}(\sqrt{\lambda} s)| \,{\rm d} s
& \le C |\lambda|^{-\frac12} e^{{\rm Re}(\sqrt{\lambda}) r}\,. \label{est9.lem.est4.bessel} \end{align}
\end{lemma}
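As an illustration, \eqref{est1.lem.est4.bessel} is obtained by splitting the integral at $s={\rm Re}(\sqrt{\lambda})^{-1}$: on $1\le s\le {\rm Re}(\sqrt{\lambda})^{-1}$ Lemma \ref{lem.est1.bessel} gives $|K_{\mu_n}(\sqrt{\lambda} s)| \le C |\sqrt{\lambda} s|^{-{\rm Re}(\mu_n)}$, so this region contributes at most
\begin{align*}
C |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}} \bigg( \int_1^\infty s^{-p{\rm Re}(\mu_n)+1} \,{\rm d} s \bigg)^\frac1p \,=\, \frac{C}{(p {\rm Re}(\mu_n)-2)^\frac1p} |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}}\,,
\end{align*}
while the region $s\ge {\rm Re}(\sqrt{\lambda})^{-1}$ is controlled by the exponential decay \eqref{est6.lem.est1.bessel}.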
\section{Proof of Proposition \ref{prop.est.F}} \label{app.proof.prop.est.F}
Proposition \ref{prop.est.F} is a direct consequence of the next lemma. Let us recall that $\mathcal{B}_\rho(0)$ denotes the disk in the complex plane $\mathbb{C}$ centered at the origin with radius $\rho>0$.
\begin{lemma}\label{lem.est.F}
Let $|n|=1$. Then for any $\epsilon \in (0,\frac{\pi}{2})$ there is a positive constant $\beta_0=\beta_0(\epsilon)$ depending only on $\epsilon$ such that as long as $\beta\in (0,\beta_0)$ and $\lambda \in \Sigma_{\pi-\epsilon} \cap \mathcal{B}_{\beta^4}(0)$ we have
\begin{align}
|F_n(\sqrt{\lambda};\beta)| \ge
\frac{C}{\beta} |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}}
\min\{1\,,-\beta^2 \log {\color{black} |\lambda|} \}\,, \label{est1.lem.est.F} \end{align}
where $F_n(\sqrt{\lambda};\beta)$ is the function in \eqref{def.F} and the constant $C$ depends only on $\epsilon$. \end{lemma}
\begin{proof} The proof is carried out in a similar spirit to \cite[Proposition 3.34]{Ma1}, where the nonexistence of zeros of $F_n(\sqrt{\lambda};\beta)$ for $\lambda\in
\mathcal{B}_{\beta^4}(0)$ is proved. However, the proof there is based on a contradiction argument, and quantitative estimates are not explicitly stated. Hence we give the lower bound estimate of $|F_n(\sqrt{\lambda};\beta)|$ here for completeness.
Let $\lambda \in \Sigma_{\pi-\epsilon} \cap \mathcal{B}_{\frac12}(0)$ and set $\zeta_n =\zeta_n(\beta) =\mu_n(\beta)-1$. Then, by combining Lemmas 3.31--3.33 and Corollary A.8 in \cite{Ma1}, we observe that the next expansion holds:
\begin{align} \zeta_n F_n(\sqrt{\lambda};\beta) \,=\, \frac{\Gamma(1+\zeta_n)}{\sqrt{\lambda}} \big(\frac{\sqrt{\lambda}}{2} \big)^{-\zeta_n}\,\bigg(1 - \big(e^{\gamma(\zeta_n)} \frac{\sqrt{\lambda}}{2} \big)^{\zeta_n} + R_n(\lambda) \bigg)\,, \label{exp.proof.lem.est.F} \end{align}
for sufficiently small $\beta$ depending on $\epsilon\in(0,\frac{\pi}{2})$. Here the function $\gamma(\zeta_n)$ has the expansion
\begin{align}
\gamma(\zeta_n) \,=\, \gamma + O(|\zeta_n|)~~~~{\rm as}~~~~|\zeta_n| \rightarrow 0\,, \label{est1.proof.lem.est.F} \end{align}
where $\gamma$ denotes the Euler constant $\gamma = 0.5772\cdots$. The remainder $R_n$ in \eqref{exp.proof.lem.est.F} satisfies
\begin{align}
|R_n(\lambda)| \le C_1 |\lambda|^{\frac{{\rm Re}(\mu_n)}{2}}\,,~~~~~~ \lambda \in \Sigma_{\pi-\epsilon} \cap \mathcal{B}_\frac12(0)\,, \label{est2.proof.lem.est.F} \end{align}
with a constant $C_1=C_1(\epsilon)$ independent of small $\beta$. To simplify notation we set
\begin{align} z\,=\, \sqrt{\lambda}\,, ~~~~~~~~ \tilde{z}\,=\, e^{\gamma(\zeta_n)} \frac{\sqrt{\lambda}}{2}\,,~~~~~~~~ \theta(\tilde{z}) \,=\, {\rm arg}\,\tilde{z}\,. \label{def1.proof.lem.est.F} \end{align}
If $\beta$ is sufficiently small, then we see from \eqref{est1.proof.lem.est.F} and \eqref{def1.proof.lem.est.F} that
\begin{align}
\frac12 \le \big| \frac{\tilde{z}}{z} \big| \le 1\,,~~~~~~
|\theta(\tilde{z})| \le \frac{\pi}{2} - \frac{\epsilon}{4}\,,~~~~~~~~ \lambda \in \Sigma_{\pi-\epsilon} \cap \mathcal{B}_{\frac12}(0)\,. \label{est3.proof.lem.est.F} \end{align}
Now we set
\begin{align} h(\tilde{z}, \zeta_n)
&\,=\, {\rm Re}(\zeta_n) \log|\tilde{z}| - {\rm Im}(\zeta_n) \theta(\tilde{z})\,, \label{def2.proof.lem.est.F} \\ \Omega(\tilde{z}, \zeta_n) &\,=\,
{\rm Re}(\zeta_n) \theta(\tilde{z}) + {\rm Im}(\zeta_n) \log|\tilde{z}| \nonumber \\ &\,=\, \big( {\rm Re}(\zeta_n) + \frac{{\rm Im}(\zeta_n)^2}{{\rm Re}(\zeta_n)} \big) \theta(\tilde{z}) + \frac{{\rm Im}(\zeta_n)}{{\rm Re}(\zeta_n)} h(\tilde{z}, \zeta_n)\,. \label{def3.proof.lem.est.F} \end{align}
Then it is easy to see that
\begin{align} 1-\tilde{z}^{\zeta_n} \,=\, 1 - e^{h(\tilde{z}, \zeta_n)} e^{i \Omega(\tilde{z}, \zeta_n)}\,. \label{est4.proof.lem.est.F} \end{align}
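Indeed, writing $\tilde{z}^{\zeta_n} = e^{\zeta_n (\log|\tilde{z}| + i \theta(\tilde{z}))}$, the modulus and the argument of $\tilde{z}^{\zeta_n}$ are exactly $e^{h(\tilde{z}, \zeta_n)}$ and $\Omega(\tilde{z}, \zeta_n)$ by \eqref{def2.proof.lem.est.F} and \eqref{def3.proof.lem.est.F}.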
In the following we show the lower bound estimate of $|1-\tilde{z}^{\zeta_n}|$. Firstly let us take a small positive constant $\kappa=\kappa(\epsilon)\ll 1$ so that
\begin{align} \big( {\rm Re}(\zeta_n) + (1+\kappa) \frac{{\rm Im}(\zeta_n)^2}{{\rm Re}(\zeta_n)} \big) \big( \frac{\pi}{2} - \frac{\epsilon}{4} \big) < \pi \label{est5.proof.lem.est.F} \end{align}
holds. The existence of such $\kappa$ is verified by using Lemma \ref{lem.est.mu} in Appendix \ref{app.est.mu} if $\beta$ is sufficiently small depending on $\epsilon$. Note that the smallness of $\kappa$ depends only on $\epsilon$. \\
\noindent (i) Case $|h(\tilde{z}, \zeta_n)| \le \kappa |{\rm Im}(\zeta_n)| |\theta(\tilde{z})|$: In this case, \eqref{est3.proof.lem.est.F}, \eqref{def3.proof.lem.est.F}, and \eqref{est5.proof.lem.est.F} ensure that
\begin{align}
|\Omega(\tilde{z}, \zeta_n)| < \pi\,, \label{est5'.proof.lem.est.F} \end{align}
and thus that $e^{i \Omega(\tilde{z}, \zeta_n)}$ is close to $1$ if and only if $\Omega(\tilde{z}, \zeta_n)$ is close to $0$. From \eqref{def2.proof.lem.est.F} we have
\begin{align*}
-{\rm Re}(\zeta_n) \log|\tilde{z}|
\le (1+\kappa) |{\rm Im}(\zeta_n)| |\theta(\tilde{z})| \,, \end{align*}
which leads to, for sufficiently small $\beta$,
\begin{align*}
|\theta(\tilde{z})|
\ge - \frac{1}{1+\kappa} \frac{{\rm Re}(\zeta_n)}{|{\rm Im}(\zeta_n)|} \log|\tilde{z}|
\ge - \frac{\beta}{2} \log|\tilde{z}|\,, \end{align*}
where the expansion $\frac{{\rm Re}(\zeta_n)}{|{\rm Im}(\zeta_n)|} = \frac{\beta}{4} + O(\beta^3)$, which follows from Lemma \ref{lem.est.mu}, is applied. Then from \eqref{def3.proof.lem.est.F} we have
\begin{align*}
|\Omega(\tilde{z}, \zeta_n)| & \ge \big( {\rm Re}(\zeta_n) + (1-\kappa) \frac{{\rm Im}(\zeta_n)^2}{{\rm Re}(\zeta_n)} \big)
|\theta(\tilde{z})| \ge
-\beta \log|\tilde{z}|\,, \end{align*}
if $\beta$ is small enough. On the other hand, it is straightforward to see that
\begin{align*}
|1-\tilde{z}^{\zeta_n} | \ge
\max\{|1 - e^{h(\tilde{z}, \zeta_n)} \cos\Omega(\tilde{z}, \zeta_n)|\,,~
e^{h(\tilde{z}, \zeta_n)} |\sin\Omega(\tilde{z}, \zeta_n)| \}\,. \end{align*}
Since $e^{h(\tilde{z}, \zeta_n)}\in[\frac12,\frac32]$, $|\sin x| \ge \frac{2|x|}{\pi}$ for $|x|\in[0,\frac{\pi}{2}]$, and $\frac{|\Omega(\tilde{z}, \zeta_n)|}{\pi}<1$ by \eqref{est5'.proof.lem.est.F}, we have
\begin{align}
|1-\tilde{z}^{\zeta_n} |
\ge \min\{1\,,~\frac{|\Omega(\tilde{z}, \zeta_n)|}{\pi}\} \ge
-\frac{\beta}{\pi} \log|\tilde{z}|\,. \label{est6.proof.lem.est.F} \end{align}
(ii) Case $|h(\tilde{z}, \zeta_n)| > \kappa |{\rm Im}(\zeta_n)| |\theta(\tilde{z})|$: When $|\theta(\tilde{z})| > -\frac12 \frac{{\rm Re}(\zeta_n)}{|{\rm Im}(\zeta_n)|} \log |\tilde{z}|$, we have
\begin{align*}
|h(\tilde{z}, \zeta_n)| \ge
- \frac{\kappa \beta^2}{2} \log|\tilde{z}|\,. \end{align*}
On the other hand, when $|\theta(\tilde{z})| \le -\frac12 \frac{{\rm Re}(\zeta_n)}{|{\rm Im}(\zeta_n)|} \log |\tilde{z}|$, \eqref{def2.proof.lem.est.F} implies that
\begin{align*}
|h(\tilde{z}, \zeta_n)|
\ge -\frac12 {\rm Re}(\zeta_n) \log|\tilde{z}|
\ge -\frac{\beta^2}{2} \log|\tilde{z}|\,. \end{align*}
Thus in the case (ii), since $|1 - \tilde{z}^{\zeta_n} | \ge \big|1 - |\tilde{z}^{\zeta_n}|\big| = |1 - e^{h(\tilde{z}, \zeta_n)}|$, we observe that
\begin{align}
|1 - \tilde{z}^{\zeta_n} | \ge
\min\{1\,,~|h(\tilde{z}, \zeta_n)|\} \ge
\min\{1\,,-\frac{\kappa \beta^2}{2} \log|\tilde{z}|\}\,. \label{est7.proof.lem.est.F} \end{align}
Hence, by collecting \eqref{est3.proof.lem.est.F}, \eqref{est6.proof.lem.est.F}, and \eqref{est7.proof.lem.est.F}, we have the next lower estimate of $|1-\tilde{z}^{\zeta_n}|$:
\begin{align}
|1 - \tilde{z}^{\zeta_n}|
\ge \frac{\kappa}4 \min\{1\,,-\beta^2 \log|z|\}\,. \label{est8.proof.lem.est.F} \end{align}
Finally by inserting \eqref{est2.proof.lem.est.F} and \eqref{est8.proof.lem.est.F} into \eqref{exp.proof.lem.est.F} we obtain
\begin{align*}
|\zeta_n F_n(\sqrt{\lambda};\beta)|
& \ge C |\lambda|^{-\frac{{\rm Re}(\mu_n)}{2}}
\big( \kappa \min\{1\,,-\beta^2 \log|z|\}
- C_1 |\lambda|^{\frac{{\rm Re}(\mu_n)}{2}} \big)\,, \end{align*}
which implies the assertion \eqref{est1.lem.est.F} if $\lambda \in \Sigma_{\pi-\epsilon} \cap \mathcal{B}_{\beta^4}(0)$ and $\beta$ is sufficiently small depending on $\epsilon$. The proof is complete.
\end{proof}
\noindent {\bf Acknowledgements}\, The author would like to express sincere thanks to Professor Yasunori Maekawa for valuable discussions and Professor Isabelle Gallagher for helpful comments. This work is partially supported by the Grant-in-Aid for JSPS Fellows 17J00636.
\end{document}
Amanda L Wright and Bryce Vissel.
CAST your vote: is calpain inhibition the answer to ALS?. Journal of neurochemistry 137(2):140–1, April 2016.
Abstract A publication in the Journal of Neurochemistry by Rao et al. (2016) suggests that the overexpression of the calpain inhibitor, calpastatin (CAST) rescues neuron loss and increases survival of the amyotrophic lateral sclerosis (ALS) mouse model, hSOD1G93A. The findings of Rao et al. (2016) provide an insight into the mechanisms that lead to neuronal loss in ALS and suggest a cell loss pathway common to several neurodegenerative disorders that may be therapeutically targeted. Here, we highlight the findings of Rao et al. (2016) and discuss some key considerations required prior to assessing the potential use of calpain inhibitors in the clinic. Read the highlighted article 'Calpastatin inhibits motor neuron death and increases survival in hSOD1(G93A) mice' on page 253.
Mala V Rao, Jabbar Campbell, Arti Palaniappan, Asok Kumar and Ralph A Nixon.
Calpastatin inhibits motor neuron death and increases survival of hSOD1(G93A) mice.. Journal of neurochemistry 137(2):253–65, 2016.
Abstract Amyotrophic lateral sclerosis (ALS) is a progressive motor neuron disease with a poorly understood cause and no effective treatment. Given that calpains mediate neurodegeneration in other pathological states and are abnormally activated in ALS, we investigated the possible ameliorative effects of inhibiting calpain over-activation in hSOD1(G93A) transgenic (Tg) mice in vivo by neuron-specific over-expression of calpastatin (CAST), the highly selective endogenous inhibitor of calpains. Our data indicate that over-expression of CAST in hSOD1(G93A) mice, which lowered calpain activation to levels comparable to wild-type mice, inhibited the abnormal breakdown of cytoskeletal proteins (spectrin, MAP2 and neurofilaments), and ameliorated motor axon loss. Disease onset in hSOD1(G93A) /CAST mice compared to littermate hSOD1(G93A) mice is delayed, which accounts for their longer time of survival. We also find that neuronal over-expression of CAST in hSOD1(G93A) transgenic mice inhibited production of putative neurotoxic caspase-cleaved tau and activation of Cdk5, which have been implicated in neurodegeneration in ALS models, and also reduced the formation of SOD1 oligomers. Our data indicate that inhibition of calpain with CAST is neuroprotective in an ALS mouse model. CAST (encoding calpastatin) inhibits hyperactivated calpain to prevent motor neuron disease operating through a cascade of events as indicated in the schematic, with relevance to amyotrophic lateral sclerosis (ALS). We propose that over-expression of CAST in motor neurons of hSOD1(G93A) mice inhibits activation of CDK5, breakdown of cytoskeletal proteins (NFs, MAP2 and Tau) and regulatory molecules (Cam Kinase IV, Calcineurin A), and disease-causing proteins (TDP-43, $\alpha$-Synuclein and Huntingtin) to prevent neuronal loss and delay neurological deficits. In our experiments, CAST could also inhibit cleavage of Bid, Bax, AIF to prevent mitochondrial, ER and lysosome-mediated cell death mechanisms. Similarly, CAST over-expression in neurons attenuated pathological effects of TDP-43, $\alpha$-synuclein and Huntingtin. These results suggest a potential value of specific small molecule inhibitors of calpains in delaying the development of ALS. Read the Editorial Highlight for this article on page 140.
Takenari Yamashita, Sayaka Teramoto and Shin Kwak.
Phosphorylated TDP-43 becomes resistant to cleavage by calpain: A regulatory role for phosphorylation in TDP-43 pathology of ALS/FTLD. Neuroscience Research, December 2015.
Abstract TAR DNA-binding protein-43 (TDP-43) pathology, which includes the presence of abnormal TDP-43-containing inclusions with a loss of nuclear TDP-43 in affected neurons, is a pathological hallmark of amyotrophic lateral sclerosis (ALS) and/or frontotemporal lobar degeneration (FTLD). TDP-43 in the pathological brains and spinal cords of ALS/FTLD patients is abnormally fragmented and phosphorylated. It is believed that the generation of aggregation-prone TDP-43 fragments initiates TDP-43 pathology, and we previously reported that calpain has an important role in the generation of such aggregation-prone TDP-43 fragments. However, the role of phosphorylation in TDP-43 pathology has not been largely elucidated, despite previous observations that several kinases and their kinases are involved in TDP-43 phosphorylation. Here, we investigated the role of TDP-43 phosphorylation in the calpain-dependent cleavage of TDP-43 and found that phosphorylated, full-length TDP-43 and calpain-dependent TDP-43 fragments were more resistant to cleavage by calpain than endogenous full-length TDP-43 was. These results suggest that both phosphorylated and calpain-cleaved TDP-43 fragments persist intracellularly for a length of time that is sufficient for self-aggregation, thereby serving as seeds for inclusions.
R Stifanese, M Averna, R De Tullio, M Pedrazzi, M Milanese, T Bonifacino, G Bonanno, F Salamino, S Pontremoli and E Melloni.
Role of calpain-1 in the early phase of experimental ALS.. Archives of biochemistry and biophysics 562:1–8, November 2014.
Abstract Elevation in [Ca(2+)]i and activation of calpain-1 occur in central nervous system of SOD1(G93A) transgenic mice model of amyotrophic lateral sclerosis (ALS), but few data are available about the early stage of ALS. We here investigated the level of activation of the Ca(2+)-dependent protease calpain-1 in spinal cord of SOD1(G93A) mice to ascertain a possible role of the protease in the aetiology of ALS. Comparing the events occurring in the 120 day old mice, we found that [Ca(2+)]i and activation of calpain-1 were also increased in the spinal cord of 30 day old mice, as indicated by the digestion of some substrates of the protease such as nNOS, $\alpha$II-spectrin, and the NR2B subunit of NMDA-R. However, the digestion pattern of these proteins suggests that calpain-1 may play different roles depending on the phase of ALS. In fact, in spinal cord of 30 day old mice, activation of calpain-1 produces high amounts of nNOS active species, while in 120 day old mice enhanced-prolonged activation of calpain-1 inactivates nNOS and down-regulates NR2B. Our data reveal a critical role of calpain-1 in the early phase and during progression of ALS, suggesting new therapeutic approaches to counteract its onset and fatal course.
[Calpain plays a crucial role in TDP-43 pathology].. Rinshō shinkeigaku = Clinical neurology 54(12):1151–4, January 2014.
Abstract Amyotrophic lateral sclerosis (ALS) is the most common adult-onset motor neuron disease affecting healthy middle-aged individuals. Mislocalization of TAR DNA binding protein of 43 kDa (TDP-43) or TDP-43 pathology observed in the spinal motor neurons is the pathological hallmark of ALS. The mechanism generating TDP-43 pathology remained uncertain. Several reports suggested that cleavage of TDP-43 into aggregation-prone fragments might be the earliest event. Therefore, elucidation of the protease(s) that is responsible for TDP-43 cleavage in the motor neurons is awaited. ALS-specific molecular abnormalities other than TDP-43 pathology in the motor neurons of sporadic ALS patients include inefficient RNA editing at the GluA2 glutamine/arginine (Q/R) site, which is specifically catalyzed by adenosine deaminase acting on RNA 2 (ADAR2). We have developed the conditional ADAR2 knockout (AR2) mice, in which the ADAR2 gene is targeted in motor neurons. We found that Ca(2+)-dependent cysteine protease calpain cleaved TDP-43 into aggregation-prone fragments, which initiated TDP-43 mislocalization in the motor neurons expressing abnormally abundant Ca(2+)-permeable AMPA receptors. Here we summarized the molecular cascade leading to TDP-43 pathology observed in the motor neurons of AR2 mice and discussed possible roles of dysregulation of calpain-dependent cleavage of TDP-43 in TDP-43 pathology observed in neurological diseases in general.
Roberta De Tullio, Monica Averna, Marco Pedrazzi, Bianca Sparatore, Franca Salamino, Sandro Pontremoli and Edon Melloni.
Differential regulation of the calpain-calpastatin complex by the L-domain of calpastatin.. Biochimica et biophysica acta 1843(11):2583–91, 2014.
Abstract Here we demonstrate that the presence of the L-domain in calpastatins induces biphasic interaction with calpain. Competition experiments revealed that the L-domain is involved in positioning the first inhibitory unit in close and correct proximity to the calpain active site cleft, both in the closed and in the open conformation. At high concentrations of calpastatin, the multiple EF-hand structures in domains IV and VI of calpain can bind calpastatin, maintaining the active site accessible to substrate. Based on these observations, we hypothesize that two distinct calpain-calpastatin complexes may occur in which calpain can be either fully inhibited (I) or fully active (II). In complex II the accessible calpain active site can be occupied by an additional calpastatin molecule, now a cleavable substrate. The consequent proteolysis promotes the accumulation of calpastatin free inhibitory units which are able of improving the capacity of the cell to inhibit calpain. This process operates under conditions of prolonged [Ca(2+)] alteration, as seen for instance in Familial Amyotrophic Lateral Sclerosis (FALS) in which calpastatin levels are increased. Our findings show that the L-domain of calpastatin plays a crucial role in determining the formation of complexes with calpain in which calpain can be either inhibited or still active. Moreover, the presence of multiple inhibitory domains in native full-length calpastatin molecules provides a reservoir of potential inhibitory units to be used to counteract aberrant calpain activity.
A role for calpain-dependent cleavage of TDP-43 in amyotrophic lateral sclerosis pathology.. Nature communications 3:1307, 2012.
\begin{document}
\title{Ricci measure for some singular Riemannian metrics} \author{John Lott} \address{Department of Mathematics\\ University of California - Berkeley\\ Berkeley, CA 94720-3840\\ USA} \email{[email protected]}
\thanks{Research partially supported by NSF grant DMS-1207654} \date{August 15, 2015} \subjclass[2000]{}
\begin{abstract} We define the Ricci curvature, as a measure, for certain singular torsion-free connections on the tangent bundle of a manifold. The definition uses an integral formula and vector-valued half-densities. We give relevant examples in which the Ricci measure can be computed. In the time dependent setting, we give a weak notion of a Ricci flow solution on a manifold. \end{abstract}
\maketitle
\section{Introduction} \label{sect1}
There has been much recent work about metric measure spaces with lower Ricci bounds, particularly the Ricci limit spaces that arise as measured Gromov-Hausdorff limits of smooth manifolds with a uniform lower Ricci bound. In this paper we address the question of whether one can make sense of the Ricci curvature itself on singular spaces.
From one's intuition about a two dimensional cone with total cone angle less than $2 \pi$, the Ricci curvature should exist at best as a measure. One natural approach toward a weak notion of Ricci curvature is to use an integral formula, such as the Bochner formula. The Bochner identity says that if $\omega_1$ and $\omega_2$ are smooth compactly supported $1$-forms on a smooth Riemannian manifold $M$ then \begin{equation} \label{1.1} \langle \omega_1, \operatorname{Ric}(\omega_2) \rangle = \int_M \left( \langle d \omega_1, d \omega_2 \rangle + \langle d^* \omega_1, d^* \omega_2 \rangle - \langle \nabla \omega_1, \nabla \omega_2 \rangle \right) \: \operatorname{dvol}. \end{equation} Equivalently, \begin{equation} \label{1.2} \langle \omega_1, \operatorname{Ric}(\omega_2) \rangle = \int_M \sum_{i,j} \left( \nabla^i \omega_{1,i} \nabla^j \omega_{2,j} - \nabla^i \omega_{1,j} \nabla^j \omega_{2,i} \right) \: \operatorname{dvol}. \end{equation}
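Here (\ref{1.2}) follows from (\ref{1.1}) by writing $d^* \omega = - \sum_i \nabla^i \omega_i$ and $(d \omega)_{ij} = \nabla_i \omega_j - \nabla_j \omega_i$; with the standard normalization $\langle \alpha, \beta \rangle = \frac12 \sum_{i,j} \alpha_{ij} \beta^{ij}$ for $2$-forms, one has the pointwise identity $\langle d \omega_1, d \omega_2 \rangle - \langle \nabla \omega_1, \nabla \omega_2 \rangle = - \sum_{i,j} \nabla^i \omega_{1,j} \nabla^j \omega_{2,i}$.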
Now consider a possibly singular Riemannian metric on $M$. In order to make sense of (\ref{1.2}), one's first attempt may be to require that $\nabla \omega_1$ and $\nabla \omega_2$ are square integrable. However, in the case of a two dimensional cone with total cone angle less than $2 \pi$, if one requires square integrability then one does not find any contribution from the vertex of the cone. That is, one would conclude that the cone is Ricci flat, which seems wrong. In order to see the curvature at the vertex, one needs to allow for more general test forms. It is not immediately evident what precise class of test forms should be allowed. A related fact is that the expression for the Ricci tensor, using local coordinates, makes distributional sense if the Christoffel symbols are square integrable. However, this is not the case for the cone.
Our resolution to this problem is by first passing from $1$-forms to vector fields, and then passing to vector-valued half-densities. For $V$ and $W$ vector-valued half-densities, we consider the quadratic form \begin{equation} \label{1.3} Q(V,W) = \int_M \sum_{i,j} \left[ \left( \nabla_i V^i \right) \left( \nabla_j W^j \right) - \left( \nabla_i V^j \right) \left( \nabla_j W^i \right) \right]. \end{equation} A compactly supported density on a manifold can be integrated, so two compactly supported half-densities can be multiplied and integrated. We require that $V$ and $W$ are compactly supported and Lipschitz regular on $M$. If one rewrote $Q$ using $1$-forms as in (\ref{1.2}) then this would prescribe that the $1$-forms should lie in certain weighted spaces.
One sees that (\ref{1.3}) does not involve the Riemannian metric directly, but can be written entirely in terms of the connection. Hence we work in the generality of torsion-free connections on the tangent bundle. We also work with $C^{1,1}$-manifolds $M$, with an eye toward limit spaces; it is known that Ricci limit spaces have a weak $C^{1,1}$-structure \cite{Honda (2014)}. Then we say that a possibly singular connection is {\em tame} if (\ref{1.3}) makes sense for all compactly supported Lipschitz vector-valued half-densities $V$ and $W$. We characterize tame connections in terms of integrability properties of their Christoffel symbols. We show that a tame connection, with $Q$ bounded below, has a Ricci curvature that is well-defined as a measure with values in $S^2(T^*M) \otimes {\mathcal D}^*$, where ${\mathcal D}$ is the density line bundle.
We prove stability results for this Ricci measure. We give examples to illustrate its meaning.
\begin{proposition} \label{1.4} The Levi-Civita connection is tame and has a computable Ricci measure in the following cases: Alexandrov surfaces, Riemannian manifolds with boundary that are glued together, and families of cones. \end{proposition}
Passing to Ricci flow, one can use optimal transport to characterize supersolutions to the Ricci flow equation on a manifold \cite{Lott (2009),McCann-Topping (2010),Topping (2009)}. There are also comparison principles for Ricci flow supersolutions \cite{Bamler-Brendle (2014)},\cite[Section 2]{Ilmanen-Knopf (2003)}. We give a weak notion of a Ricci flow solution (as opposed to supersolution), in the sense that the curvature tensor is not invoked, again on a fixed $C^{1,1}$-manifold. The idea is that the Ricci tensor appearing in the Bochner integral formula can cancel the Ricci tensor appearing on the right-hand side of the Ricci flow equation. One could try to formulate such a time dependent integral identity just using the Bochner equality for $1$-forms. However, one would get a term coming from the time derivative of the volume form, which unfortunately involves the scalar curvature. Using vector-valued half-densities instead, this term does not appear. We give examples of weak Ricci flow solutions, along with a convergence result and a compactness result.
To mention some earlier work, Lebedeva and Petrunin indicated the existence of a measure-valued curvature operator on an Alexandrov space that is a noncollapsed limit of Riemannian manifolds with a lower sectional curvature bound \cite{Lebedeva-Petrunin (2008)}. The Ricci form exists as a current on certain normal K\"ahler spaces and was used by Eyssidieux, Guedj and Zeriahi for K\"ahler-Einstein metrics \cite{Eyssidieux-Guedj-Zeriahi (2009)}. Naber gave a notion of bounded Ricci curvature, in particular Ricci flatness, for metric measure spaces \cite{Naber (2013)}. Gigli discussed Ricci curvature for certain metric measure spaces \cite[Section 3.6]{Gigli (2014)}.
The structure of this paper is as follows. In Section \ref{sect2} we give some background information. Section \ref{sect3} has the definitions of tame connection and Ricci measure, and proves some properties of these. Section \ref{sect4} gives some relevant examples. Section \ref{sect5} is about weak Ricci flow solutions.
I thank the referee for helpful comments.
\section{Background} \label{sect2}
Let $M$ be an $n$-dimensional smooth manifold. Let $FM$ denote the principal $\operatorname{GL}(n, {\mathbb R})$-frame bundle of $M$. For $c \in {\mathbb R}$, let $\rho_c : \operatorname{GL}(n, {\mathbb R}) \rightarrow \operatorname{GL}(1, {\mathbb R})$ be the homomorphism given by
$\rho_c(M) = |\det M|^{-c}$. There is an associated real line bundle ${\mathcal D}^c = FM \times_{\rho_c} {\mathbb R}$, the $c$-density bundle. There is an isomorphism ${\mathcal D}^c \otimes {\mathcal D}^{c^\prime} \rightarrow {\mathcal D}^{c+c^\prime}$ of line bundles. A section of ${\mathcal D}^c$ is called a $c$-density on $M$. A $1$-density is just called a density. Compactly supported densities on $M$ can be integrated, to give a linear functional $\int_M : C_c(M; {\mathcal D}) \rightarrow {\mathbb R}$. There is a canonical inner product on compactly supported half-densities, given by $\langle f_1, f_2 \rangle = \int_M f_1 f_2$.
Let $\nabla$ be a torsion-free connection on $TM$. There is an induced connection on ${\mathcal D}$. Given a compactly supported vector-valued density $V$, i.e. a section $V \in C_c^\infty(M; TM \otimes {\mathcal D})$, the integral $\int_M \sum_i \nabla_i V^i$ of its divergence vanishes. With this fact, one can justify integration by parts.
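In fact, writing $V = \sum_i v^i \partial_i \otimes dx^1 \ldots dx^n$ in local coordinates, the Christoffel terms cancel in the divergence:
\begin{align*}
\sum_i \nabla_i V^i \,=\, \Big( \sum_i \partial_i v^i + \sum_{i,j} \Gamma^i_{\: \: ji} v^j - \sum_{i,k} \Gamma^k_{\: \: ki} v^i \Big) \: dx^1 \ldots dx^n \,=\, \sum_i \partial_i v^i \: dx^1 \ldots dx^n,
\end{align*}
so the vanishing of the integral follows from the compact support of $V$.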
The curvature of $\nabla$ is a section of $\operatorname{End}(TM) \otimes \Lambda^2(T^*M) = TM \otimes T^*M \otimes (T^*M \wedge T^*M)$. In terms of the latter description, the Ricci curvature of $\nabla$ is the covariant $2$-tensor field on $M$ obtained by contracting the $TM$ factor with the first $T^*M$ factor in $(T^*M \wedge T^*M)$. In terms of indices, $R_{jl} = \sum_i R^i_{\: \: jil}$.
In this generality, the Ricci curvature need not be symmetric. As $\nabla$ is torsion-free, the first Bianchi identity holds and one finds that $R_{jl} - R_{lj} = \sum_i R^i_{\: \: ijl}$. That is, the antisymmetric part of the Ricci tensor is the negative of the curvature of the induced connection on ${\mathcal D}$, and represents an obstruction to the local existence of a nonzero parallel density. Of course, if $\nabla$ is the Levi-Civita connection of a Riemannian metric then there is a nonzero parallel density, namely the Riemannian density.
Now let $M$ be a $C^{1,1}$-manifold. This means that there is an atlas $M = \bigcup_{\alpha} U_\alpha$ whose transition maps $\phi_{\alpha \beta}$ have a first derivative that is Lipschitz. We can take a maximal such atlas. The preceding discussion of $c$-densities still makes sense in this generality.
A $C^{1,1}$-manifold admits an underlying smooth structure, in that we can find a subatlas with smooth transition maps. Furthermore, any two such smooth structures are diffeomorphic.
\section{Tame connections and Ricci measure} \label{sect3}
In this section we give the notion of a tame connection and define its Ricci measure. In Subsection \ref{subsect3.1} we define tame connections, characterize them in terms of the Christoffel symbols, and prove stability under $L^\infty$-perturbations of the connection. Subsection \ref{subsect3.2} has the definition of the Ricci measure. In Subsection \ref{BE} we extend the notion of Ricci measure to the case of weighted manifolds. Subsection \ref{subsect3.3} is about singular Riemannian metrics and Killing fields.
\subsection{Tame connections} \label{subsect3.1}
Let $M$ be an $n$-dimensional $C^{1,1}$-manifold. It makes sense to talk about the space ${\mathcal V}_{Lip}(M)$ of Lipschitz vector fields on $M$, meaning Lipschitz-regular sections of $TM$. Similarly, it makes sense to talk about Lipschitz vector-valued half-densities, i.e. Lipschitz-regular sections of $TM \otimes {\mathcal D}^{\frac12}$.
Let ${\mathcal V}_{meas}(M)$ denote the measurable vector fields on $M$. Let $\nabla$ be a measurable torsion-free connection on $TM$, i.e. an ${\mathbb R}$-bilinear map $\nabla : {\mathcal V}_{Lip}(M) \times {\mathcal V}_{Lip}(M) \rightarrow {\mathcal V}_{meas}(M)$ such that for $f \in \operatorname{Lip}(M)$ and $X,Y \in {\mathcal V}_{Lip}(M)$, we have \begin{itemize} \item $\nabla_{fX} Y = f \nabla_X Y$, \item $\nabla_X (fY) = (Xf) Y + f \nabla_X Y$, \item $\nabla_X Y - \nabla_Y X = [X,Y]$. \end{itemize} Writing $\nabla_{\partial_i} \partial_j = \sum_k \Gamma^k_{\:\: ji} \partial_k$, the Christoffel symbols $\Gamma^k_{\:\: ij}$ are measurable.
If $V$ is a vector-valued half-density then we can locally write it as $V = \sum_j V^j \partial_j$, where $V^i$ is a locally defined half-density. Then $\nabla_i V = \sum_j (\nabla_i V^j) \partial_j$, where $\nabla_i V^j$ is also a half-density. Further writing $V^j = v^j \sqrt{dx^1 \ldots dx^n}$, we have \begin{equation} \label{3.1} \nabla_i V = \sum_j (\nabla_i v^j) \partial_j \otimes \sqrt{dx^1 \ldots dx^n}, \end{equation} where \begin{equation} \label{3.2} \nabla_i v^j = \partial_i v^j + \sum_k \Gamma^j_{\: \: ki} v^k - \frac12 \sum_k \Gamma^k_{\: \: ki} v^j \end{equation}
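Here the last term in (\ref{3.2}) comes from the induced connection on ${\mathcal D}^{\frac12}$: since $\nabla_i \left( dx^1 \ldots dx^n \right) = - \sum_k \Gamma^k_{\: \: ki} \: dx^1 \ldots dx^n$, one has $\nabla_i \sqrt{dx^1 \ldots dx^n} = - \frac12 \sum_k \Gamma^k_{\: \: ki} \sqrt{dx^1 \ldots dx^n}$.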
Given compactly supported Lipschitz vector-valued half-densities $V$ and $W$ on $M$, consider the formal expression \begin{equation} \label{3.3} Q(V,W) = \int_M \sum_{i,j} \left[ \left( \nabla_i V^i \right) \left( \nabla_j W^j \right) - \left( \nabla_i V^j \right) \left( \nabla_j W^i \right) \right]. \end{equation} Note that the integrand of (\ref{3.3}) is a density on $M$.
\begin{definition} \label{3.4} The connection $\nabla$ is {\em tame} if the integrand in (\ref{3.3}) is integrable for all $V$ and $W$. \end{definition}
If $n = 1$ then $Q$ vanishes identically.
\begin{remark} \label{3.5} Suppose that $\nabla$ is the Levi-Civita connection of a Riemannian metric $g$. We can use the Riemannian half-density $\sqrt{\operatorname{dvol}} = (\det g)^{\frac14} \sqrt{dx^1 \ldots dx^n}$ to trivialize ${\mathcal D}^{\frac12}$. Using this trivialization, there is an isometric isomorphism between vector-valued half-densities and $1$-forms, under which a $1$-form $\omega = \sum_i \omega_i dx^i$ corresponds to a vector-valued half-density $V = \sum_i v^i \partial_i \otimes \sqrt{dx^1 \ldots dx^n}$ with $v^i = \sum_j g^{ij} (\det g)^{\frac14} \omega_j$. In this case, $Q$ could be computed using (\ref{1.2}), with the restriction on $\omega$ that each $v^i$ in the local description of its isomorphic vector-valued half-density $V$ should be Lipschitz. \end{remark}
We now characterize tameness of a connection in terms of its Christoffel symbols.
\begin{proposition} \label{3.6} Suppose that $n > 1$. The connection $\nabla$ is tame if and only if in any coordinate neighborhood, each $\Gamma^i_{\: \: jk}$ is locally integrable and each $\sum_{i,j} (\Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} - \Gamma^j_{\: \: ki} \Gamma^i_{\: \: lj})$ is locally integrable. \end{proposition} \begin{proof} Suppose that $\nabla$ is tame. Let $U$ be a coordinate neighborhood and choose $m \in U$. The point $m$ has a neighborhood $S$ with compact closure in $U$. Take $V$ and $W$ to have compact support in $U$. One finds that \begin{equation} \label{3.7} Q(V,W) = Q_1(V,W)+ Q_2(V,W), \end{equation} where \begin{align} \label{3.8} Q_1(V,W) = \int_U \sum_{i,k,l} & \left[ \frac12 (\partial_k v^k) \Gamma^i_{\: \: il} w^l
+ \frac12 v^k \Gamma^i_{\: \: il} (\partial_k w^l) + \frac12 (\partial_l v^k) \Gamma^i_{\: \: ik} w^l + \frac12 v^k \Gamma^i_{\: \: ik} (\partial_l w^l) \right. \\ & \left. - (\partial_i v^k) \Gamma^i_{\: \: kl} w^l - v^k \Gamma^i_{\: \: kl} (\partial_i w^l) \right] dx^1 \ldots dx^n \notag \end{align} and \begin{equation} \label{3.9} Q_2(V,W) = \int_U \sum_{i,j,k,l} v^k \left( \Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} - \Gamma^j_{\: \: ki} \Gamma^i_{ \: \: lj} \right) w^l dx^1 \ldots dx^n. \end{equation}
Given constant vectors $\{c^k\}_{k=1}^n$ and $\{d^l\}_{l=1}^n$, we can choose $V$ and $W$ so that $v^k = c^k$ and $w^l = d^l$ in $S$. Then the integrand of $Q_1$ vanishes in $S$. Hence the integrability of the integrand of $Q(V, W)$ implies the integrability of $\sum_{i,j,k,l} c^k \left( \Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} - \Gamma^j_{\: \: ki} \Gamma^i_{ \: \: lj} \right) d^l$ in $S$, for any choice of $\{c^k\}_{k=1}^n$ and $\{d^l\}_{l=1}^n$. Letting $m$ and $S$ vary, this is equivalent to the local integrability of $\sum_{i,j} \left( \Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} - \Gamma^j_{\: \: ki} \Gamma^i_{ \: \: lj} \right)$ in $U$, for all $k$ and $l$.
Returning to general $V$ and $W$ with compact support in $U$, we now know that the integrand of $Q_2$ is integrable. Hence the integrability of the integrand of $Q$ implies the integrability of the integrand of $Q_1$. Given a constant matrix $\{C_r^{\: \: k} \}_{r,k=1}^n$ and a constant vector $\{d^l \}_{l=1}^n$, we can choose $V$ and $W$ so that $v^k = \sum_r C_r^{\: \: k} x^r$ and $w^l = d^l$ in $S$. Then the integrand of $Q_1$, over $S$, becomes \begin{equation} \label{3.10} \sum_{i,k,l} \left[ \frac12 C_k^{\: \: k} \Gamma^i_{\: \: il} d^l + \frac12 C_l^{\: \: k} \Gamma^i_{\: \: ik} d^l - C_i^{\: \: k} \Gamma^i_{\: \: kl} d^l \right]. \end{equation} Taking first $C_k^{\: \: l} = \delta_k^{\: \: l}$, we see that $\frac{n-1}{2} \sum_i \Gamma^i_{\: \: il} d^l$ is integrable in $S$ for any choice of $\{d^l\}_{l=1}^n$. Hence $\sum_i \Gamma^i_{\: \: il}$ is integrable in $S$ for all $l$. It now follows from (\ref{3.10}) that $\sum_{i,k,l} C_i^{\: \: k} \Gamma^i_{\: \: kl} d^l$ is integrable in $S$ for any choices of $\{C_r^{\: \: k} \}_{r,k=1}^n$ and $\{d^l \}_{l=1}^n$. Hence $\Gamma^i_{\: \: kl}$ is integrable in $S$ for all $i$, $k$ and $l$, so $\Gamma^i_{\: \: kl}$ is locally integrable in $U$.
For the other direction of the proposition, suppose that in any coordinate neighborhood, each $\Gamma^i_{\: \: jk}$ is locally integrable and each $\sum_{i,j} (\Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} - \Gamma^j_{\: \: ki} \Gamma^i_{\: \: lj})$ is locally integrable. Given $V$ and $W$ with compact support, we can cover $\operatorname{supp}(V) \cup \operatorname{supp}(W)$ by a finite number $\{U_r\}_{r=1}^N$ of open sets, each with compact closure in a coordinate neighborhood. Let $\{\phi_r\}_{r=1}^N$ be a subordinate Lipschitz partition of unity. Then \begin{equation} \label{3.11} Q(V,W) = \sum_{r=1}^N Q(\phi_r V, W). \end{equation} Looking at (\ref{3.8}) and (\ref{3.9}), we see that the integrand of $Q(\phi_r V, W)$ has support in $U_r$. Then from (\ref{3.8}) and (\ref{3.9}), we see that the integrand of $Q(\phi_r V, W)$ is integrable. The proposition follows. \end{proof}
\begin{proposition} \label{3.12} Suppose that we have a fixed collection of coordinate neighborhoods that cover $M$, in each of which $\Gamma^i_{\: \: jk}$ is locally integrable and $\sum_{i,j} (\Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} - \Gamma^j_{\: \: ki} \Gamma^i_{\: \: lj})$ is locally integrable. Then in any other coordinate neighborhood, $\Gamma^i_{\: \: jk}$ is locally integrable and $\sum_{i,j} (\Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} - \Gamma^j_{\: \: ki} \Gamma^i_{\: \: lj})$ is locally integrable. \end{proposition} \begin{proof} This follows indirectly from the proof of Proposition \ref{3.6}, but can also be seen directly from the transformation formula \begin{equation} \label{3.13} \widetilde{\Gamma}^i_{\: \: jk} = \sum_{a,b,c} \frac{\partial y^i}{\partial x^a} \frac{\partial x^b}{\partial y^j} \frac{\partial x^c}{\partial y^k} \Gamma^a_{\: \: bc} + \sum_a \frac{\partial y^i}{\partial x^a} \cdot \frac{\partial^2 x^a}{\partial y^j \partial y^k} \end{equation} for the Christoffel symbols under a change of coordinate from $x$ to $y$, along with the fact that $\frac{\partial y}{\partial x}$ and $\frac{\partial x}{\partial y}$ are Lipschitz, and $\frac{\partial^2x^a}{\partial y^j \partial y^k}$ is $L^\infty$. \end{proof}
We now show that tameness is preserved by bounded perturbations of the connection.
\begin{proposition} \label{3.14} Suppose that $\nabla$ is tame. Suppose that $T \cong T^i_{\: \: jk}$ is a measurable $(1,2)$-tensor field, symmetric in the lower two indices, such that for all Lipschitz vector fields $v$, the $(1,1)$-tensor field $T(v) = \sum_k T^i_{\: \: jk} v^k$ is a locally bounded section of $\operatorname{End}(TM)$. Then $\nabla + T$ is tame. \end{proposition} \begin{proof} Writing $T_i = T(\partial_i)$, we have \begin{align} \label{3.15} & \int_M \sum_{i,j} \left[ \left( \nabla_i V^i + T_i V^i \right) \left( \nabla_j W^j + T_j W^j \right) - \left( \nabla_i V^j + T_i V^j \right) \left( \nabla_j W^i + T_j W^i \right) \right] = \\ & \int_M \sum_{i,j} \left[ \left( \nabla_i V^i \right) \left( \nabla_j W^j \right) - \left( \nabla_i V^j \right) \left( \nabla_j W^i \right) \right] + \notag \\ & \int_M \sum_{i,j} \left[ \left( T_i V^i \right) \left( \nabla_j W^j \right) + \left( \nabla_i V^i \right) \left( T_j W^j \right)
- \left( T_i V^j \right) \left( \nabla_j W^i \right) - \left( \nabla_i V^j \right) \left( T_j W^i \right)
\right] + \notag \\ & \int_M \sum_{i,j} \left[ \left( T_i V^i \right) \left( T_j W^j \right) - \left( T_i V^j \right) \left( T_j W^i \right) \right]. \notag \end{align} As before, we take $V$ and $W$ to have compact support. Since Proposition \ref{3.6} tells us that in each coordinate neighborhood, the Christoffel symbols $\Gamma^i_{\: \: jk}$ are locally $L^1$, it follows that $\nabla V$ and $\nabla W$ are $L^1$ on $M$. Since $TV$ and $TW$ are $L^\infty$, the integrands in the second and third integrals on the right-hand side of (\ref{3.15}) are integrable. The proposition follows. \end{proof}
\subsection{Ricci measure} \label{subsect3.2}
Suppose that $\nabla$ is tame. We can rewrite the expression for $Q_1(V, W)$ in (\ref{3.8}) as \begin{equation} \label{3.16} Q_1(V,W) = \int_U \sum_{i,k,l} \left[ \frac12 \Gamma^i_{\: \: il} \partial_k (v^k w^l) + \frac12 \Gamma^i_{\: \: ik} \partial_l (v^k w^l) - \Gamma^i_{\: \: kl} \partial_i (v^k w^l)
\right] dx^1 \ldots dx^n. \end{equation} Using an underlying smooth structure for $M$, and taking $V$ and $W$ to be smooth with compact support in the coordinate neighborhood $U$ for the moment, it follows that \begin{equation} \label{3.17} Q(V, W) = \int_U \sum_{k,l} v^k R_{(kl)} w^l dx^1 \ldots dx^n, \end{equation} where the distribution \begin{equation} \label{3.18} R_{(kl)} = \sum_i \left( \partial_i \Gamma^i_{\: \: kl} - \frac12 \partial_k \Gamma^i_{\: \: il} - \frac12 \partial_l \Gamma^i_{\: \: ik} \right) + \sum_{i,j} \left( \Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} - \Gamma^j_{\: \: ki} \Gamma^i_{ \: \: lj} \right) \end{equation} is recognized as the symmetrized Ricci tensor.
Given a continuous vector bundle $E$ on $M$, let ${\mathcal M}(M; E)$ denote the dual space to the topological vector space of compactly supported continuous sections of $E^*$. We can think of an element of ${\mathcal M}(M; E)$ as an $E$-valued measure on $M$.
In the rest of this subsection, we make the following assumption.
\begin{assumption} \label{ass} There is some nonnegative $h \in {\mathcal M}(M; S^2(T^*M) \otimes {\mathcal D}^*)$ so that for all $V$ and $W$, \begin{equation} Q(V,W) \ge - \int_M \langle V, h W \rangle. \end{equation} \end{assumption}
Assumption \ref{ass} implies that the distributional tensor field $R_{(kl)} + h_{kl}$ is nonnegative. It follows that it is a tensor-valued measure, and hence so is $R_{(kl)}$. The conclusion is that there is some ${\mathcal R} \in {\mathcal M}(M; S^2(T^*M) \otimes {\mathcal D}^*)$ so that for all compactly supported Lipschitz vector-valued half-densities $V$ and $W$ on $M$, we have \begin{equation} \label{3.19} Q(V,W) = \int_M \langle V, {\mathcal R} W \rangle. \end{equation} We call ${\mathcal R}$ the Ricci measure of the connection $\nabla$.
We now prove a convergence result for the Ricci measure.
\begin{proposition} \label{3.20} Let $\nabla$ be a tame connection with Ricci measure ${\mathcal R}$. Let $\left\{ T^{(r)} \right\}_{r=1}^\infty$ be a sequence of measurable $(1,2)$-tensor fields as in Proposition \ref{3.14}. Suppose that the connections $\left\{ \nabla + T^{(r)} \right\}_{r=1}^\infty$ satisfy Assumption \ref{ass} with a uniform choice of $h$. Let $\left\{ {\mathcal R}^{(r)} \right\}_{r=1}^\infty$ be the Ricci measures of the connections $\left\{ \nabla + T^{(r)} \right\}_{r=1}^\infty$. Suppose that for each compactly supported Lipschitz vector field $v$, we have $\lim_{r \rightarrow \infty} T^{(r)}(v) = 0$ in $L^\infty(M; \operatorname{End}(TM))$. Then $\lim_{r \rightarrow \infty} {\mathcal R}^{(r)} = {\mathcal R}$ in the weak-$*$ topology on ${\mathcal M}(M; S^2(T^*M) \otimes {\mathcal D}^*)$. \end{proposition} \begin{proof} Let $Q^{(r)}$ be the quadratic form associated to the tame connection $\nabla + T^{(r)}$. From (\ref{3.15}), for any $V$ and $W$, we have $\lim_{r \rightarrow \infty} Q^{(r)}(V,W) = Q(V,W)$. It follows that $\lim_{r \rightarrow \infty} {\mathcal R}^{(r)} = {\mathcal R}$ distributionally. Since the relevant distributions are all measures, with ${\mathcal R}^{(r)} + h$ nonnegative, we have weak-$*$ convergence. \end{proof}
\begin{remark} \label{lq} If we further assume that $\nabla$ has Christoffel symbols in $L^q_{loc}$, for $q > 1$, then we reach the same conclusion under the weaker assumption that $\lim_{r \rightarrow \infty} T^{(r)}(v) = 0$ in $L^{\max(2,q^*)}(M; \operatorname{End}(TM))$, where $\frac{1}{q} + \frac{1}{q^*} = 1$. \end{remark}
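This is because, in (\ref{3.15}), the terms that are linear in $T^{(r)}$ pair $T^{(r)} V$ or $T^{(r)} W$ against $\nabla W$ or $\nabla V$, which lie in $L^q$ with compact support when the Christoffel symbols are in $L^q_{loc}$, so H\"older's inequality only requires convergence of $T^{(r)}(v)$ in $L^{q^*}$; the terms that are quadratic in $T^{(r)}$ only require convergence in $L^2$.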
\subsection{Bakry-Emery-Ricci measure} \label{BE}
Let $\nabla$ be a measurable torsion-free connection on $TM$. We say that $f \in W^{1,1}_{loc}(M)$ if in any coordinate neighborhood $U$, we have $f \in L^1_{loc}(U)$ and there are $S_i \in L^1_{loc}(U)$ so that for any Lipschitz functions $F^i$ with compact support in $U$, we have \begin{equation} \int_U f \sum_i \partial_i F^i \: dx^1 \ldots dx^n = - \int_U \sum_i S_i F^i \: dx^1 \ldots dx^n. \end{equation} We let $\nabla_i f$ denote $S_i$. Given compactly supported Lipschitz vector-valued half-densities $V$ and $W$ on $M$, consider the formal expression \begin{align} \label{added} Q_f(V,W) = & \int_M \sum_{i,j} \left[ e^f \left( \nabla_i \left( e^{- \: \frac{f}{2}} V^i \right) \right) \left( \nabla_j \left( e^{- \: \frac{f}{2}} W^j \right) \right) - \right. \\ & \left. \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: e^{-f} \left( \nabla_i \left( e^{\frac{f}{2}} V^j \right) \right) \left( \nabla_j \left( e^{\frac{f}{2}} W^i \right) \right) \right] \notag \\ = & \: \int_M \sum_{i,j} \left[ \left( \nabla_i V^i \right) \left( \nabla_j W^j \right) - \left( \nabla_i V^j \right) \left( \nabla_j W^i \right) - \right. \notag \\ & \left. \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \frac12 (\nabla_i f) \left( \nabla_j \left( V^i W^j + V^j W^i \right) \right) \right]. \notag \end{align} We say that the pair $(\nabla, f)$ is tame if the integrand in (\ref{added}) is integrable for all $V$ and $W$.
\begin{example} Suppose that $f$ is semiconvex. Then $\operatorname{Hess}(f) = \nabla \nabla f$ is well-defined as a measurable symmetric $2$-tensor. Suppose that $\nabla$ is tame in the sense of Definition \ref{3.4} and $Q$ satisfies Assumption \ref{ass}. Then $(\nabla, f)$ is tame and \begin{equation} Q_f(V,W) = \int_M \langle V, {\mathcal R}_f W \rangle, \end{equation} where the $S^2(T^*M) \otimes {\mathcal D}^*$-valued measure ${\mathcal R}_f$ is given in a coordinate neighborhood $U$ by \begin{equation} Q_f(V, W) = \int_U \sum_{k,l} v^k \left( R_{(kl)} + \operatorname{Hess}(f)_{kl} \right) w^l \: dx^1 \ldots dx^n \end{equation} for $V$ and $W$ having compact support in $U$. In the Riemannian setting, we recognize $R_{(kl)} + \operatorname{Hess}(f)_{kl}$ as the Bakry-Emery-Ricci tensor. \end{example}
\subsection{Riemannian metrics} \label{subsect3.3}
Let $g$ be a Riemannian metric on $M$, i.e. a measurable section of $S^2(T^*M)$ that is positive definite almost everywhere.
\begin{definition} \label{w11} A Riemannian metric $g$ lies in $W^{1,1}_{loc}$ if in any coordinate neighborhood $U$, we have $g_{ij} \in L^1_{loc}(U)$ and there are $S^l_{\: \: jk} \in L^1_{loc}(U)$ so that for any Lipschitz functions $\{f^{ijk}\}$ with compact support in $U$, we have \begin{equation} \int_U \sum_{i,j,k} g_{ij} \: \partial_k f^{ijk} \: dx^1 \ldots dx^n = \int_U \sum_{i,j,k,l} g_{il} S^l_{\: \: jk} f^{ijk} \: dx^1 \ldots dx^n. \end{equation} \end{definition}
If $g \in W^{1,1}_{loc}$ then it has a Levi-Civita connection with Christoffel symbols in $L^1_{loc}$.
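To spell this out, assume in addition, only for the present remark, that $g$ and $g^{-1}$ are locally bounded. Unwinding Definition \ref{w11}, the weak derivatives of $g$ are $\partial_k g_{ij} = - \sum_l g_{il} S^l_{\: \: jk} \in L^1_{loc}$, and the Christoffel symbols are then given by the classical formula, interpreted weakly:
\[
\Gamma^l_{\: \: jk} = \frac12 \sum_i g^{li} \left( \partial_j g_{ik} + \partial_k g_{ij} - \partial_i g_{jk} \right) \in L^1_{loc}.
\]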
We recall the classical result that a smooth compact Riemannian manifold with negative Ricci curvature has no nonzero Killing vector fields. (In fact, this is the only result that we know for manifolds with negative Ricci curvature in dimension greater than two.) We show that there is an analogous result in our setting.
\begin{definition} A Lipschitz vector-valued half-density $V$ is {\em Killing} if \begin{equation} \sum_{k} \left( g_{jk} \nabla_i V^k + g_{ik} \nabla_j V^k \right) = 0. \end{equation} \end{definition}
We note that if $V$ is a Killing vector-valued half-density then writing $V = v \otimes \sqrt{\operatorname{dvol}_g}$, the vector field $v$ is a Killing vector field in the usual sense, at least where $g$ is $C^1$.
\begin{proposition} If $M$ is compact, the Levi-Civita connection is tame and $Q(V, V) < 0$ for all nonzero $V$, then there is no nonzero Killing $V$. \end{proposition} \begin{proof} If $V$ is Killing then $\sum_i \nabla_i V^i = 0$ and $\nabla_j V^i = - \sum_{k,l} g_{jl} g^{ik} \nabla_k V^l$, so \begin{equation} Q(V,V) = \int_M \sum_{i,j,k,l} g_{jl} g^{ik} (\nabla_i V^j) (\nabla_k V^l) \ge 0. \end{equation} The proposition follows. \end{proof}
\section{Examples} \label{sect4}
In this section we compute examples of the Ricci measure coming from Riemannian metrics in $W^{1,1}_{loc}$, in the sense of Definition \ref{w11}. The examples are Alexandrov surfaces, Riemannian manifolds with boundary that are glued together, families of cones, K\"ahler manifolds and limit spaces of manifolds with lower bounds on Ricci curvature and injectivity radius. At the end of the section we make some remarks.
\subsection{Alexandrov surfaces} \label{subsect4.1}
We recall that there is a notion of a metric on a surface having bounded integral curvature \cite{Reshetnyak (1993)}. This includes surfaces with Alexandrov curvature bounded below. (For us, the relevance of the latter is that they are the noncollapsed Gromov-Hausdorff limits of smooth Riemannian two-manifolds with Ricci curvature bounded below.) Such a metric comes from a (possibly) singular Riemannian metric $g$. There exist local isothermal coordinates in which $g = e^{2 \phi} \left( (dx^1)^2 + (dx^2)^2 \right)$, where $\phi$ is the difference of two subharmonic functions (with respect to the Euclidean metric). The volume density $\operatorname{dvol}_g$, given locally by $e^{2 \phi} dx^1 dx^2$, lies in $L^1_{loc}$.
\begin{proposition} \label{4.1} The Levi-Civita connection $\nabla$ is tame. \end{proposition} \begin{proof} In the isothermal coordinates, we have \begin{equation} \label{4.2} \Gamma^i_{\: \: jk} = \delta_{ij} \partial_k \phi + \delta_{i,k} \partial_j \phi - \delta_{jk} \partial_i \phi. \end{equation} A subharmonic function $f$ on a two-dimensional domain has $\partial_i f \in L^1_{loc}$ \cite[Pf. of Lemma 1.6]{Landkof (1972)}. (The proof there is for functions defined on ${\mathbb C}$ but can be localized.) One finds that $\sum_{i,j} (\Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} - \Gamma^j_{\: \: ki} \Gamma^i_{\: \: lj}) = 0$. Proposition \ref{3.6} implies that $\nabla$ is tame. \end{proof}
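To make the last step explicit: from (\ref{4.2}) one has $\sum_j \Gamma^j_{\: \: ji} = 2 \partial_i \phi$, and a direct computation gives
\[
\sum_{i,j} \Gamma^i_{\: \: kl} \Gamma^j_{\: \: ji} = \sum_{i,j} \Gamma^j_{\: \: ki} \Gamma^i_{\: \: lj} = 4 (\partial_k \phi)(\partial_l \phi) - 2 \delta_{kl} \sum_i (\partial_i \phi)^2,
\]
so the quadratic expression in the Christoffel symbols vanishes identically.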
The quadratic form $Q$ can be described in a coordinate-free way as follows. Let $K$ denote the curvature measure of $g$, given in local isothermal coordinates by $dK = - (\partial_1^2 + \partial_2^2) \phi \: dx^1 dx^2$. (That is, for any smooth function $f$ with support in the coordinate chart, $\int_M f \: dK = - \int_{{\mathbb R}^2} (\partial_1^2 + \partial_2^2)f \: \phi \: dx^1 dx^2$.) Given compactly supported Lipschitz vector-valued half-densities $V$ and $W$, consider $\frac{g(V,W)}{\operatorname{dvol}_g}$. We claim that this extends over the singularities of $g$ to a Lipschitz function on $M$. To see this, in isothermal coordinates we can write $V = \sum_i v^i \partial_i \otimes \sqrt{dx^1 dx^2}$ and $W = \sum_i w^i \partial_i \otimes \sqrt{dx^1 dx^2}$, with $\{v^i\}_{i=1}^2$ and $\{w^i\}_{i=1}^2$ Lipschitz. Then $g(V, W) = e^{2 \phi} \left( v^1 w^1 + v^2 w^2 \right) dx^1 dx^2$ and $\operatorname{dvol}_g = e^{2 \phi} dx^1 dx^2$, from which the claim follows. One finds \begin{proposition} \label{4.3} \begin{equation} \label{4.4} Q(V, W) = \int_M \frac{g(V,W)}{\operatorname{dvol}_g} \: dK. \end{equation} \end{proposition}
\begin{example} If $g$ is smooth and $\kappa$ is the Gaussian curvature then $dK = \kappa \operatorname{dvol}_g$, so \begin{equation} Q(V, W) = \int_M \kappa \: g(V, W). \end{equation} \end{example}
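This is consistent with the definition of $dK$: for a smooth metric $g = e^{2 \phi} \left( (dx^1)^2 + (dx^2)^2 \right)$, the Gaussian curvature is $\kappa = - e^{- 2 \phi} (\partial_1^2 + \partial_2^2) \phi$, so
\[
\kappa \operatorname{dvol}_g = - (\partial_1^2 + \partial_2^2) \phi \: dx^1 dx^2 = dK.
\]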
\begin{example} \label{4.5} For $\alpha < 1$, put $g = \left( (x^1)^2 + (x^2)^2 \right)^{- \alpha} \left( (dx^1)^2 + (dx^2)^2 \right)$. Then $({\mathbb R}^2, g)$ is a cone with total cone angle $2 \pi (1 - \alpha)$. One finds that \begin{equation} \label{4.6} Q(V, W) = 2 \pi \alpha \left( v^1(0,0) w^1(0,0) + v^2(0,0) w^2(0,0) \right). \end{equation} In this case, $\partial_i \phi = - \alpha \frac{x^i}{(x^1)^2 + (x^2)^2}$, so $\Gamma^i_{\: \: jk}$ lies in $L^1_{loc}$ but not in $L^2_{loc}$. \end{example}
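For the record, (\ref{4.6}) follows from (\ref{4.4}): here $\phi = - \frac{\alpha}{2} \log \left( (x^1)^2 + (x^2)^2 \right)$, and since $\Delta \log |x| = 2 \pi \delta_{(0,0)}$ on ${\mathbb R}^2$,
\[
dK = - (\partial_1^2 + \partial_2^2) \phi \: dx^1 dx^2 = 2 \pi \alpha \: \delta_{(0,0)},
\]
while $\frac{g(V,W)}{\operatorname{dvol}_g} = v^1 w^1 + v^2 w^2$. The total curvature $2 \pi \alpha$ is the angle defect $2 \pi - 2 \pi (1 - \alpha)$ of the cone.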
\begin{example} \label{4.7} For $c > 0$, suppose that
$g = e^{- 2c |x^1|} \left( (dx^1)^2 + (dx^2)^2 \right)$. Then \begin{equation} \label{4.8} Q(V, W) = 2c \int_{- \infty}^\infty \left( v^1(0, x^2) w^1(0, x^2) + v^2(0, x^2) w^2(0, x^2) \right) dx^2. \end{equation}
In this example, $\Gamma^i_{\: \: jk}$ lies in $L^2_{loc}$. The geometry can be described as follows. Take a two-dimensional cone with total cone angle $2 \pi c$. Truncate the cone at distance $\frac{1}{c}$ from the vertex. Take two copies of such truncated cones and glue them along their circle boundaries. Remove the two vertex points and take the universal cover.
Note that all of the tangent cones are isometric to ${\mathbb R}^2$, but the Ricci measure is not absolutely continuous. \end{example}
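Formula (\ref{4.8}) also follows directly from (\ref{4.4}): with $\phi = - c |x^1|$ we have $\partial_1 \phi = - c \operatorname{sgn}(x^1)$, so distributionally
\[
dK = - (\partial_1^2 + \partial_2^2) \phi \: dx^1 dx^2 = 2 c \: \delta_0(x^1) \: dx^1 dx^2.
\]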
\subsection{Gluing manifolds along their boundaries} \label{subsect4.2}
Let $M_1$ and $M_2$ be Riemannian manifolds with boundaries. Let $A^{(1)}_{ij}$ (resp. $A^{(2)}_{ij}$) denote the second fundamental form of $\partial M_1$ (resp. $\partial M_2$), taking values in normal vectors. Let $H^{(1)}$ (resp. $H^{(2)}$) denote the mean curvature of $\partial M_1$ (resp. $\partial M_2$), also taking value in normal vectors. Our conventions are such that for the unit ball in ${\mathbb R}^n$, if $v$ is a nonzero tangent vector to the unit sphere then $A(v,v)$ is inward pointing.
Let $\phi : \partial M_1 \rightarrow \partial M_2$ be an isometric diffeomorphism. Using the local product structure near $\partial M_1$ (resp. $\partial M_2$) coming from the normal exponential map, the result $M = M_1 \cup_{\phi} M_2$ of gluing $M_1$ to $M_2$ acquires a smooth structure. It also acquires a $C^0$-Riemannian metric. Let $X \subset M$ denote the gluing locus.
Given a compactly supported Lipschitz vector-valued half-density $V$ on $M$, using the isomorphism
$TM \big|_X = TX \oplus N_XM$, we can decompose $V$ on $X$ as $V = V^T + V^\perp$, where $V^T$ is a section of $TX \otimes
{\mathcal D}_M^{\frac12} \big|_X$ and $V^\perp$ is a section of $N_XM \otimes
{\mathcal D}_M^{\frac12} \big|_X$.
Given $x \in X$, let $n_x$ be the inward pointing unit normal vector to $M_1$ at $x$. Given $V$ and $W$, decompose them along $X$ as $V = V^T + V^\perp$ and $W = W^T + W^\perp$. Then $\langle A^{(1)}(V^T_x, W^T_x) - A^{(2)}(V^T_x, W^T_x), n_x \rangle$
lies in ${\mathcal D}_M \big|_{\{x\}}$. We would get the same result if we switched the roles of $M_1$ and $M_2$.
Similarly, $\langle V^\perp_x, W^\perp_x \rangle$ lies in
${\mathcal D}_M \big|_{\{x\}}$. We can compute the number $\langle H^{(1)}_x - H^{(2)}_x, n_x \rangle$. We would get the same result if we switched the roles of $M_1$ and $M_2$.
Let $x^0$ be a local coordinate at $x$ so that
$\frac{\partial}{\partial x^0}$ is a unit normal to $X$ at $x$. There is a unique linear map ${\mathcal T}_x : {\mathcal D}_M \big|_{\{x\}}
\rightarrow {\mathcal D}_X \big|_{\{x\}}$ so that $dx^0 \otimes {\mathcal T}_x(\omega_x) = \omega_x$. This extends to a map
${\mathcal T} : C^\infty \left( X; {\mathcal D}_M \big|_X \right) \rightarrow C^\infty(X; {\mathcal D}_X)$.
\begin{proposition} \label{4.9} \begin{align} \label{4.10} Q(V, W) = & \int_{M_1} \langle V, \operatorname{Ric}(W) \rangle + \int_{M_2} \langle V, \operatorname{Ric}(W) \rangle \: + \\ & \int_X {\mathcal T} \left( \langle A^{(1)}(V^T, W^T) - A^{(2)}(V^T, W^T), n \rangle + \langle H^{(1)} - H^{(2)}, n \rangle \langle V^\perp, W^\perp \rangle \right). \notag \end{align} \end{proposition} \begin{proof} The Levi-Civita connection $\nabla$ on $M$ has $\{ \Gamma^i_{\: \: jk} \}$ in $L^2_{loc}$, so we can just compute the usual Ricci tensor (\ref{3.18}) as a distribution. On the interior of $M_1$ (resp. $M_2$), we clearly get the usual Ricci tensor of $M_1$ (resp. $M_2$), so it suffices to look at what happens near $X$. Since $V$ and $W$ are compactly supported, we can effectively reduce to the case when $X$ is compact. We can choose a local coordinate $x^0$ near $X$, with
$n = \frac{\partial}{\partial x^0} \big|_X$ pointing into $M_1$, so that the metric takes the form \begin{equation} \label{4.11} g = (dx^0)^2 + h(x_0) + O \left( (x^0)^2 \right). \end{equation} Here we have a metric $h(x^0)$ on $X$ for $x^0 \in (-\epsilon, \epsilon)$. As a function of $x^0$, the metric $h$ is continuous on $(- \epsilon, \epsilon)$, smooth on $[0, \epsilon)$ and smooth on $(- \epsilon, 0]$. The second fundamental form of $\partial M_1$ (resp. $\partial M_2$) is $A^{(1)} = - \frac12 \left( \lim_{x^0 \rightarrow 0^+} \frac{dh}{dx^0} \right) n$ (resp. $A^{(2)} = - \frac12 \left(
\lim_{x^0 \rightarrow 0^- }\frac{dh}{dx^0} \big|_{x^0 = 0} \right) n$). Using local coordinates $\{x^i\}$ on $X$, we have \begin{align} \label{4.12} \lim_{x^0 \rightarrow 0^+} \Gamma^0_{\: \: ij} & = \langle A^{(1)}_{ij}, n \rangle, \\ \lim_{x^0 \rightarrow 0^-} \Gamma^0_{\: \: ij} & = \langle A^{(2)}_{ij}, n \rangle, \notag \\ \lim_{x^0 \rightarrow 0^+} \Gamma^i_{\: \: j0} & = - \langle A^{i,(1)}_{\: \: j}, n \rangle, \notag \\ \lim_{x^0 \rightarrow 0^-} \Gamma^i_{\: \: j0} & = - \langle A^{i,(2)}_{\: \: j}, n \rangle, \notag \end{align} The relevant terms in (\ref{3.18}) are \begin{align} \label{4.13} R_{(kl)} & = \partial_0 \Gamma^0_{\: \: kl}+ \ldots, \\ R_{(00)} & = - \partial_0 \Gamma^i_{\: \: i0} + \ldots. \notag \end{align} Hence the singular part of the Ricci measure is \begin{align} \label{4.14} R_{(kl),sing} & = \langle A^{(1)}_{kl} - A^{(2)}_{kl}, n \rangle \delta_0(x^0), \\ R_{(00),sing} & = \langle H^{(1)} - H^{(2)}, n \rangle \delta_0(x^0). \notag \end{align} The proposition follows. \end{proof}
\begin{example} \label{4.15} Let $M_1$ and $M_2$ each be the result of taking a two-dimensional cone with total cone angle $2 \pi c$ and truncating it at a distance $L$ from the vertex. Then the contribution to the Ricci measure of $M$ from the circle gluing locus is $\frac{2}{L} \int_X {\mathcal T} \langle V, W \rangle$. This is consistent with Example \ref{4.7}. \end{example}
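As a check using the conventions of Proposition \ref{4.9}: the boundary circle of each truncated cone, at distance $L$ from the vertex, has geodesic curvature $\frac{1}{L}$, so $\langle A^{(1)}(V^T, W^T) - A^{(2)}(V^T, W^T), n \rangle = \frac{2}{L} \langle V^T, W^T \rangle$ and $\langle H^{(1)} - H^{(2)}, n \rangle = \frac{2}{L}$. Hence the boundary term in (\ref{4.10}) is
\[
\frac{2}{L} \int_X {\mathcal T} \left( \langle V^T, W^T \rangle + \langle V^\perp, W^\perp \rangle \right) = \frac{2}{L} \int_X {\mathcal T} \langle V, W \rangle.
\]
Taking $L = \frac{1}{c}$ recovers the factor $2c$ in (\ref{4.8}).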
\begin{remark} Based on Proposition \ref{4.9}, if $M$ is a Riemannian manifold with boundary then it would be natural to consider \begin{equation} Q(V, W) = \int_{M} \langle V, \operatorname{Ric}(W) \rangle + \int_{\partial M} {\mathcal T} \left( \langle A(V^T, W^T), n \rangle + \langle H, n \rangle \langle V^\perp, W^\perp \rangle \right) \end{equation} to define the Ricci measure of $M$. \end{remark}
\subsection{Families of cones} \label{subsect4.3}
We first consider the case of a single cone.
\begin{proposition} \label{4.16}
For $\alpha < 1$, put $g = |x|^{-2\alpha} \sum_{i=1}^n (dx^i)^2$ on ${\mathbb R}^n$. Then the Levi-Civita connection is tame. If $n = 2$ then $Q(V, W)$ is given by (\ref{4.6}). If $n > 2$ then $Q(V, W) = \int_{{\mathbb R}^n} \langle V,\operatorname{Ric}(W) \rangle$. That is, if $n > 2$ then there is no singular contribution to the Ricci measure from the vertex of the cone. \end{proposition} \begin{proof} The case $n=2$ was handled in Example \ref{4.5}. If $n > 2$ then we can use the formula for conformal transformations from (\ref{4.2}), with
$\phi = - \alpha \ln |x|$. In this case $\partial_i \phi \in L^2_{loc}$, so the formula (\ref{3.18}) makes sense as a distribution. However, since
$|\partial_i \phi| \sim |x|^{-1}$, there is no contribution to $Q(V,W)$ from the origin.
(To have such a contribution, one would need to have $|\partial_i \phi| \sim |x|^{-(n-1)}$.) \end{proof}
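To make the last two statements quantitative: for $\phi = - \alpha \ln |x|$ on ${\mathbb R}^n$ with $n > 2$, we have $|\partial_i \phi| \le |\alpha| \, |x|^{-1}$, which lies in $L^2_{loc}$ since $\int_0^1 r^{-2} \: r^{n-1} \: dr < \infty$. Moreover, as a distribution,
\[
(\partial_1^2 + \ldots + \partial_n^2) \ln |x| = (n-2) \, |x|^{-2},
\]
with no delta function at the origin, since $|x|^{-2}$ is locally integrable when $n > 2$.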
We now consider a family of cones. Let $\pi : M \rightarrow B$ be an $n$-dimensional real vector bundle over a Riemannian manifold $B$. Given $b \in B$, we write $M_b = \pi^{-1}(b)$. Let $h$ be a Euclidean inner product on $M$ and let $D$ be an $h$-compatible connection. There is a natural Riemannian metric $g_0$ on $M$ with $\pi : M \rightarrow B$ being a Riemannian submersion, so that the restrictions of $g_0$ to fibers are specified by $h$, and with horizontal subspaces coming from $D$. Let $s : B \rightarrow M$ denote the zero section and let $Z$ denote its image. Given $\alpha < 1$, let $g$ be the Riemannian metric on $M - Z$ obtained from $g_0$, at $m \in M-Z$, by multiplying the fiberwise component of $g_0$ by $h(m,m)^{- \alpha}$.
Given $z \in Z$, let $\operatorname{dvol}_{M_{\pi(z)},z}$ denote the Riemannian density at $z$ of the fiber
$M_{\pi(z)}$, induced from $h_z$. There is a unique linear map ${\mathcal T}_z : {\mathcal D}_{M} \big|_{\{z\}}
\rightarrow {\mathcal D}_{Z} \big|_{\{z\}}$ so that $\operatorname{dvol}_{M_{\pi(z)},z} \otimes {\mathcal T}_z(\omega_z) = \omega_z$ for all
$\omega_z \in {\mathcal D}_{M} \big|_{\{z\}}$. This extends to a linear map
${\mathcal T} : C^\infty(Z; {\mathcal D}_M \big|_Z) \rightarrow C^\infty(Z; {\mathcal D}_Z)$.
Given a compactly supported Lipschitz vector-valued half-density $V$ on $M$, we can decompose its restriction to $Z$ orthogonally (with respect to $g_0$)
as $V \big|_Z = V^{tan} + V^{nor}$, where $V^{tan}$ is tangential to $Z$ and $V^{nor}$ is normal to $Z$, i.e. tangential to the fibers of the vector bundle.
\begin{proposition} \label{4.17} The Levi-Civita connection $\nabla$ of $g$ is tame. If $n=2$ then \begin{equation} \label{4.18} Q(V, W) = \int_M \langle V, \operatorname{Ric}(W) \rangle + 2 \pi \alpha \int_Z {\mathcal T} \left( \langle V^{nor}, W^{nor} \rangle_{g_0} \right). \end{equation} If $n > 2$ then $Q(V, W) = \int_M \langle V, \operatorname{Ric}(W) \rangle$. \end{proposition} \begin{proof} We can choose local coordinates $\{x^\beta, x^i\}$ for $M$ so that the coordinates $\{x^\beta\}$ pullback from $B$ and the coordinates $\{x^i\}$ restrict to the fibers as linear orthogonal coordinates with respect to $h$. In terms of such coordinates, we can write \begin{equation} \label{4.19} g_0 = \sum_{\beta, \gamma} k_{\beta \gamma} dx^\beta dx^\gamma + \sum_i \left( dx^i + \sum_{\beta, j} C^i_{\: \: j \beta} x^j dx^\beta \right)^2, \end{equation} where $\{C^i_{\: \: j \beta}\}$ is the local description of the connection $D$ and $\{k_{\beta \gamma} \}$ is the local description of the Riemannian metric on $B$. Then \begin{equation} \label{4.20} g = \sum_{\beta, \gamma} k_{\beta \gamma} dx^\beta dx^\gamma + \left( \sum_l (x^l)^2 \right)^{- \alpha} \sum_i \left( dx^i + \sum_{\beta, j} C^i_{\: \: j \beta} x^j dx^\beta \right)^2, \end{equation}
Let $\{\widehat{\Gamma}^\beta_{\: \: \gamma \delta}\}$ denote the Christoffel symbols of the Riemannian metric $k$ on $B$. Put \begin{equation} \label{4.21} F^i_{\: \: j \beta \gamma} = \partial_\beta C^i_{\: \: j \gamma} - \partial_\gamma C^i_{\: \: j \beta} + \sum_k C^i_{\: \: k \beta} C^k_{\: \: j \gamma} - \sum_k C^i_{\: \: k \gamma} C^k_{\: \: j \beta}, \end{equation} the curvature of $D$.
Given $b \in B$, we can choose the coordinates $\{x^i\}$ near the fiber $M_b$ so that $C^i_{\: \: j \beta}(b) = 0$. Then on $M_b$, we have \begin{equation} \label{4.22}
g \big|_{M_b} = \sum_{\beta, \gamma} k_{\beta \gamma} dx^\beta dx^\gamma + \left( \sum_l (x^l)^2 \right)^{- \alpha} \sum_i \left( dx^i \right)^2. \end{equation} One finds that on $M_b$, \begin{align} \label{4.23}
\Gamma^i_{\: \: jk} & = - \frac{\alpha}{|x|^2} \left( x^k \delta^i_j + x^j \delta^i_k - x^i \delta_{jk} \right), \\ \Gamma^i_{\: \: \beta \gamma} & = \frac12 \sum_j x^j \left( \partial_\gamma C^i_{\: \: j \beta} + \partial_\beta C^i_{\: \: j \gamma} \right), \notag \\ \Gamma^{\beta}_{\: \: \gamma i} & =
\Gamma^{\beta}_{\: \: i \gamma} = - \frac12 |x|^{- 2 \alpha} \sum_{\sigma, j} k^{\beta \sigma} x^j F^i_{\: \: j \sigma \gamma}, \notag \\ \Gamma^{\beta}_{\: \: \gamma \delta} & = \widehat{\Gamma}^{\beta}_{\: \: \gamma \delta}, \notag \\ \Gamma^{\beta}_{\: \: ij} & = \Gamma^{i}_{\: \: j \beta} = \Gamma^{i}_{\: \: \beta j} = 0. \notag \end{align} Using Proposition \ref{3.6}, one can check that $\nabla$ is tame.
The Ricci curvature of $\nabla$ can be computed using the splitting of $TM$ into its vertical and horizontal components relative to $\pi$. The corresponding O'Neill formulas still hold for the Ricci measure. In the present case, the fibers of $\pi$ are totally geodesic with respect to the metric $g$. Relative to the vertical orthonormal coframe \begin{equation} \label{4.24}
\tau^i = |x|^{- \alpha} \left( dx^i + \sum_j C^i_{\: \: j \beta} x^j dx^\beta \right) \end{equation} and a local orthonormal coframe $\{\tau^\beta\}$ for $k$, one finds that the curvature of the horizontal distribution is given by \begin{equation} \label{4.25}
A^i_{\: \: \beta \gamma} = \frac12 |x|^{- \alpha} \sum_j F^i_{\: \: j \beta \gamma} x^j. \end{equation} Then using the O'Neill formulas, as given in \cite[(4.7)]{Lott (2014)}, one finds that the only singular contribution to the Ricci measure is the fiberwise contribution coming from the singular points. Using Proposition \ref{4.16}, the proposition follows. \end{proof}
\subsection{K\"ahler manifolds} \label{subsect4.4}
Let $M$ be a complex manifold of complex dimension $n$. Suppose that $M$ admits a K\"ahler metric $h$ which is $W^{1,1}_{loc}$-regular, in the sense of Definition \ref{w11}. Suppose that the Levi-Civita connection is tame. The Ricci measure of $(M, h)$ can be described as follows. Let $V$ be a compactly supported Lipschitz section of $T^{(1,0)}M \otimes {\mathcal D}^{\frac12}$ and let $W$ be a compactly supported Lipschitz section of $T^{(0,1)}M \otimes {\mathcal D}^{\frac12}$. Then $Q(V, W) = \int_M q(V, W)$, where the measure $q(V, W)$ has the following description in local coordinates. Write $V = \sum_i v^i \partial_{z^i} \otimes \sqrt{dx^1 dy^1 \ldots dx^n dy^n}$ and $W = \sum_j w^{\overline{j}} \partial_{\overline{z}^j} \otimes \sqrt{dx^1 dy^1 \ldots dx^n dy^n}$. Then \begin{equation} \label{4.26} q(V,W) = - \sum_{i,j} \left( \partial_{z^i} \partial_{\overline{z}^j} \log \det h \right) \: v^i \: w^{\overline{j}} \: dx^1 dy^1 \ldots dx^n dy^n. \end{equation}
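When $h$ is smooth, (\ref{4.26}) is the classical local expression for the Ricci form of a K\"ahler metric,
\[
R_{i \overline{j}} = - \partial_{z^i} \partial_{\overline{z}^j} \log \det \left( h_{k \overline{l}} \right),
\]
paired against $V$ and $W$.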
\subsection{Limit spaces of manifolds with lower bounds on Ricci curvature and injectivity radius} \label{andersoncheeger}
Given $n \in {\mathbb Z}^+$, $K \in {\mathbb R}$ and $i_0 > 0$, let $(X, x)$ be a pointed Gromov-Hausdorff limit of a sequence $\{(M_i, m_i, g_i)\}_{i=1}^\infty$ of complete $n$-dimensional pointed Riemannian manifolds with $\operatorname{Ric}(g_i) \ge K g_i$ and $\operatorname{inj}_{m_i} \ge i_0$. From \cite[Theorem 0.2 and p. 268]{Anderson-Cheeger (1992)}, for any $p \in (n, \infty)$, the space $X$ is a $L^{2,p}$-manifold with a Riemannian metric $g_X$ that is locally $L^{1,p}$-regular. In particular, for any $\alpha \in (0,1)$ the manifold $X$ is also a $C^{1,\alpha}$-manifold and hence has an underlying smooth structure, which is unique up to diffeomorphism \cite[Theorem 2.10]{Hirsch}. In order to apply the formalism of this paper, we extend the smooth structure to a $C^{1,1}$-structure. Any two such $C^{1,1}$-structures are related by a homeomorphism $\phi$ of $X$ that is $L^{2,p}$-regular for all $p \in (n, \infty)$.
As mentioned, with respect to the $L^{2,p}$-manifold structure, $g_X$ is locally $L^{1,p}$-regular (and also locally $C^\alpha$-regular). Hence the same will be true with respect to the smooth structure, and the ensuing $C^{1,1}$-structure. By H\"older's inequality, $g_X$ is also locally $L^{1,1}$-regular. Since $g_X$ is nondegenerate and continuous in local coordinates, it has a continuous inverse. Putting $S^l_{\: \: jk} = - \sum_i g^{li} \partial_k g_{ij}$, the metric $g_X$ lies in $W^{1,1}_{loc}$ in the sense of Definition \ref{w11}. Its Christoffel symbols lie in $L^2_{loc}$ (since $g_X$ is locally $L^{1,2}$-regular). Hence the Levi-Civita connection of $g_X$ is tame. Also from \cite{Anderson-Cheeger (1992)}, for large $i$ there are pointed diffeomorphisms $\phi_i : (X, x) \rightarrow (M, m_i)$ so that $\lim_{i \rightarrow \infty} \phi_i^* g_i = g_X$ in $L^{1,p}_{loc}$. By Remark \ref{lq}, it follows that $Q(V,W) \ge K \int_X g_{X,ij} V^i W^j$. Hence there is a Ricci measure ${\mathcal R}$.
Since the Christoffel symbols lie in $L^2_{loc}$, the quadratic form $Q$ can be extended to compactly supported $V$ and $W$ in $\bigcap_{p \in (n, \infty)} L^{1,p}$, the latter of which also lies in $\bigcap_{\alpha \in (0,1)} C^\alpha$. It follows that $Q$ is covariant with respect to diffeomorphisms $\phi$ of $X$ that are $L^{2,p}$-regular for all $p \in (n, \infty)$. Hence ${\mathcal R}$ is independent of the choice of $C^{1,1}$-structure.
\subsection{Remarks} \label{subsect4.5}
\begin{remark} \label{4.27} Let $g$ be a Riemannian metric on $M^n$ which lies in $W^{1,1}_{loc}$, in the sense of Definition \ref{w11}. Suppose that the Levi-Civita connection is tame.
Suppose that the length metric gives a well-defined compact metric space $X$. If $Q \ge 0$ then a natural question is whether $X$ has nonnegative $n$-Ricci curvature (with respect to the Hausdorff measure) in the sense of \cite{Lott-Villani (2009),Sturm (2006)}. One way to answer this would be to show that $X$ is the Gromov-Hausdorff limit of a sequence of smoothings $\{(M, g_i)\}_{i=1}^\infty$ of $(M,g)$, with the Ricci curvature of $(M, g_i)$ bounded below by $- \frac{1}{i} g_i$. In the setting of Subsection \ref{subsect4.2}, i.e. gluing Riemannian manifolds along boundaries, the argument for this appears in \cite[Section 4]{Perelman}.
Conversely, one can ask whether $X$ having nonnegative $n$-Ricci curvature implies that $Q \ge 0$. \end{remark}
\begin{remark} \label{4.28} Suppose that the metric space $X$ of Remark \ref{4.27} has $Q \ge 0$ and nonnegative Ricci curvature in the sense of \cite{Lott-Villani (2009),Sturm (2006)}. One can ask if there is a relationship between the possible singularity of the Ricci measure and the existence of a singular stratum of $X$ in the sense of \cite{Cheeger-Colding (1997)}. Example \ref{4.7} shows that there is no direct relationship, since the Ricci measure may be singular even if all of the tangent cones are Euclidean. However, one can ask whether the existence of a codimension-two singular stratum (i.e. ${\mathcal S}_{n-2} \neq {\mathcal S}_{n-3}$ in the notation of \cite{Cheeger-Colding (1997)}) forces the Ricci measure to be singular. \end{remark}
\begin{remark} \label{4.29} If $X$ has no singular strata of codimension less than three (i.e. ${\mathcal S} = {\mathcal S}_{n-3}$ in the notation of \cite{Cheeger-Colding (1997)}) then one can ask whether a compactly supported Lipschitz vector-valued half-density $V$ necessarily has $\nabla V$ square integrable. The relevance of this would be for nonmanifold spaces, where the square integrability of $\nabla V$ on the regular set would be a natural condition, whereas the requirement of $V$ being Lipschitz may not make sense in a neighborhood of a singular point. See Example \ref{5.15}.
We note that the singular K\"ahler-Einstein metrics considered in \cite{Eyssidieux-Guedj-Zeriahi (2009)} do not have any singular strata of real codimension two. \end{remark}
\begin{remark} Suppose that $X$ is a pointed Gromov-Hausdorff limit of a sequence of complete $n$-dimensional pointed Riemannian manifolds with a uniform lower bound on their Ricci curvature. For concreteness, we consider the case when the Hausdorff dimension of $X$ is $n$, i.e. when the sequence is noncollapsing. As in \cite[Section 3]{Cheeger-Colding (2000)}, the complement of a measure zero subset of $X$ can be covered by a countable union of sets, each of which is biLipschitz to a Borel subset of ${\mathbb R}^n$. Furthermore, the transition maps between such sets can be taken to be $C^{1,1}$-regular in a natural weak sense \cite[Theorem 1.5]{Honda (2014)}. Using this structure, one can define a Levi-Civita connection on $X$ with measurable Christoffel symbols \cite[Section 3]{Honda (2014)}. The first question is whether the Levi-Civita connection is necessarily tame, in the sense of satisfying the equivalent condition of Proposition \ref{3.6}. This is true in the setting of Subsection \ref{andersoncheeger}.
If the Levi-Civita connection is tame then it should be possible to use this to construct the Ricci measure on the regular set of $X$ (compare with Example \ref{4.7} and Subsection \ref{andersoncheeger}). Based on Proposition \ref{4.17}, one would expect that any reasonable notion of the Ricci measure on the singular set should vanish on ${\mathcal S}_{n-3}$, and be given on ${\mathcal S}_{n-2}$ by an analog of the last term in (\ref{4.18}). \end{remark}
\section{Weak Ricci flow} \label{sect5}
In this section we give notions of weak Ricci flow solutions. In Subsection \ref{subsect5.1} we prove an integral identity for smooth Ricci flow solutions. In Subsection \ref{subsect5.2} we define tame Ricci flow solutions, give a compactness result and discuss some examples. In Subsection \ref{subsect5.3} we define the broader class of cone-preserving Ricci flow solutions and give further examples.
\subsection{An integral identity} \label{subsect5.1}
Let $M$ be a smooth manifold. Let $\{g(t)\}_{t \in [0, T)}$ be a smooth one-parameter family of Riemannian metrics on $M$.
\begin{proposition} \label{5.1} Let $\{V(t)\}_{t \in [0,T)}$ and $\{W(t)\}_{t \in [0,T)}$ be one-parameter families of vector-valued half-densities on $M$. We assume that for each $T^\prime \in [0, T)$, the family $V$ has compact support in $M \times [0, T^\prime]$ and is Lipschitz there, and similarly for the family $W$. Then $(M, g(\cdot))$ satisfies the Ricci flow equation \begin{equation} \label{rf} \frac{dg}{dt} \: = \: - \: 2 \operatorname{Ric}_{g(t)} \end{equation}
if and only if for every such $V$ and $W$, and every $t \in [0, T)$, we have \begin{align} \label{5.2} & \int_M \sum_{ij} g_{ij}(t) V^i(t) W^j(t) = \int_M \sum_{ij} g_{ij}(0) V^i(0) W^j(0) \: + \\ & \int_0^t \int_M \sum_{i,j} \left[ g_{ij} (\partial_s V^i) W^j + g_{ij} V^i (\partial_s W^j) - 2 \left( \nabla_i V^i \right) \left( \nabla_j W^j \right) + 2 \left( \nabla_i V^j \right) \left( \nabla_j W^i \right) \right](s) \: ds. \notag \end{align} \end{proposition} \begin{proof} Suppose that $(M, g(\cdot))$ is a Ricci flow solution. Then \begin{equation} \label{5.3} \frac{d}{dt} \sum_{ij} g_{ij}(t) V^i(t) W^j(t) = - 2 \sum_{ij} R_{ij} V^i W^j + \sum_{ij} g_{ij} (\partial_t V^i) W^j + \sum_{ij} g_{ij} V^i (\partial_t W^j). \end{equation} Integrating gives (\ref{5.2}). Conversely, if (\ref{5.2}) holds then by taking $V$ and $W$ smooth and differentiating in $t$, we see that (\ref{5.3}) holds for all smooth $V$ and $W$. This implies that (\ref{rf}) holds. \end{proof}
\subsection{Tame Ricci flow} \label{subsect5.2}
Now let $M$ be a $C^{1,1}$-manifold.
\begin{definition} \label{5.4} Let $\{g(t)\}_{t \in [0, T)}$ be a one-parameter family of $W^{1,1}_{loc}$-Riemannian metrics on $M$, in the sense of Definition \ref{w11}, which is locally-$L^1$ on $M \times [0, T)$. Suppose that for each $t \in [0, T)$, the Levi-Civita connection of $g(t)$ is tame in the sense of Definition \ref{3.4}. Suppose that there is an integrable function $c : (0, T) \rightarrow {\mathbb R}^+$ so that for all $t \in (0, T)$, the time-$t$ Ricci measure satisfies $Q(V, W) \ge -c(t) \int_M g_{ij}(t) V^i W^j$. Let $\{V(t)\}_{t \in [0,T)}$ and $\{W(t)\}_{t \in [0,T)}$ be one-parameter families of vector-valued half-densities on $M$. We assume that for each $T^\prime \in [0, T)$, the family $V$ has compact support in $M \times [0, T^\prime]$ and is Lipschitz there, and similarly for the family $W$. We say that $\{g(t)\}_{t \in [0, T)}$ is a {\em tame Ricci flow solution} if (\ref{5.2}) is satisfied for all such $V$ and $W$, and all $t \in [0, T)$.
\end{definition}
\begin{example} \label{5.5} Let $\{h(t)\}_{t \in [0, \infty)}$ be a smooth Ricci flow solution on $M$. Given a $C^{1,1}$-diffeomorphism $\phi$ of $M$, put $g(t) = \phi^* h(t)$. Then $\{g(t)\}_{t \in [0, \infty)}$ is a tame Ricci flow solution. This is because equation (\ref{5.2}) for $g$, $V$ and $W$ is equivalent to equation (\ref{5.2}) for $h$, $\phi_* V$ and $\phi_* W$. Hence Proposition \ref{5.1} applies. \end{example}
\begin{example} \label{5.6} For all $t \ge 0$, let $g(t)$ be the metric of Example \ref{4.5}. We claim that if $\alpha \neq 0$ then $\{g(t)\}_{t \in [0, \infty)}$ is not a tame Ricci flow solution. This can be seen by taking $V$ and $W$ to be time-independent in (\ref{5.2}). \end{example}
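In more detail, suppose that $V$ and $W$ are time-independent. Since $g$ is also time-independent, the left-hand side of (\ref{5.2}) equals the first term on the right-hand side, while the remaining term reduces to
\[
\int_0^t \int_M \sum_{i,j} \left[ - 2 \left( \nabla_i V^i \right) \left( \nabla_j W^j \right) + 2 \left( \nabla_i V^j \right) \left( \nabla_j W^i \right) \right] ds = - 2 t \, Q(V, W).
\]
Thus (\ref{5.2}) would force $Q(V, W) = 0$ for all compactly supported Lipschitz $V$ and $W$, contradicting (\ref{4.6}) when $\alpha \neq 0$.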
We show that the property of being a tame Ricci flow solution passes to Lipschitz limits.
\begin{proposition} \label{5.7} Let $M$ be a $C^{1,1}$-manifold. Let $\{g_i(\cdot)\}_{i=1}^\infty$ be a sequence of one-parameter families of Riemannian metrics on $M$, each defined for $t \in [0, T)$ and locally-$L^1$ on $M \times [0, T)$, with each $g_i(t)$ locally Lipschitz, satisfying (\ref{5.2}). Let $\{g_\infty(t)\}_{t \in [0, T)}$ be a one-parameter family of locally Lipschitz Riemannian metrics on $M$. Suppose that for all $T^\prime \in [0, T)$ and every coordinate neighborhood $U \subset M$ with compact closure, \begin{equation} \label{5.8} \lim_{i \rightarrow \infty} \sup_{t \in [0, T^\prime]} d_{\operatorname{Lip}(U)}(g_i(t), g_\infty(t)) = 0. \end{equation} Then $g_\infty(\cdot)$ is a tame Ricci flow solution. \end{proposition} \begin{proof} From the convergence assumption, $g_\infty$ is locally-$L^1$ on $M \times [0, T)$. The Christoffel symbols of $\{g_i(t)\}_{i=1}^\infty$ and $g_\infty(t)$ are all locally-$L^\infty$, so the Levi-Civita connections are tame. Because the Levi-Civita connections of $\{g_i\}_{i=1}^\infty$ converge to that of $g_\infty$, in $L^\infty_{loc}$ on $M \times [0, T)$, it follows that $g_\infty$ satisfies (\ref{5.2}). \end{proof}
\begin{example} \label{5.9} Let $\{g(t)\}_{t \in [0, T)}$ be a smooth Ricci flow solution on a smooth manifold $M$. Let $\{\phi_i\}_{i=1}^\infty$ be a sequence of smooth diffeomorphisms of $M$ that $C^{1,1}$-converge on compact subsets to a $C^{1,1}$-diffeomorphism $\phi_\infty$ of $M$. Then $\lim_{i \rightarrow \infty} \phi_i^* g(\cdot) = \phi_\infty^* g(\cdot)$ with Lipschitz convergence on compact subsets of $M \times [0, T)$, and $\phi_\infty^* g(\cdot)$ is a tame Ricci flow solution. \end{example}
We now give a compactness result for tame Ricci flow solutions.
\begin{proposition} \label{5.10} Let $M$ be a $C^2$-manifold. Let $\{g_i(\cdot)\}_{i=1}^\infty$ be a sequence of tame Ricci flow solutions, defined for $t \in [0, T)$, consisting of $C^1$-Riemannian metrics on $M$. Suppose that for all $T^\prime \in [0, T)$ and every coordinate neighborhood $U \subset M$ with compact closure, the $g_i(t)$'s are uniformly bounded above and below on $U \times [0, T^\prime]$, and the $g_i(t)$'s and their first spatial partial derivatives are uniformly bounded and uniformly equicontinuous (in $i$) on $U \times [0, T^\prime]$. Then after passing to a subsequence, there is a tame Ricci flow solution $\{g_\infty(t)\}_{t \in [0, T)}$ on $M$ consisting of $C^1$-Riemannian metrics so that the $g_i(t)$'s $C^1$-converge to $g_\infty(t)$, locally uniformly on $M \times [0, T)$. \end{proposition} \begin{proof} By a diagonal argument, after passing to a subsequence we can assume that $\lim_{i \rightarrow \infty} g_i(\cdot) = g_\infty(\cdot)$ as stated in the proposition. Then Proposition \ref{5.7} shows that $g_\infty(\cdot)$ is a tame Ricci flow solution. \end{proof}
We now address when one can get a tame Ricci flow solution by appending a time-zero slice to a smooth Ricci flow solution defined for positive time.
\begin{proposition} \label{5.11} Let $\{g(t) \}_{t \in (0, T)}$ be a smooth Ricci flow solution with $\operatorname{Ric}(g(t)) \ge - c(t) g(t)$ for some positive integrable function $c$. Suppose that there is some $g(0) \in W^{1,1}_{loc}$ with tame Levi-Civita connection so that $\lim_{t \rightarrow 0^+} g(t) = g(0)$ in $L^1_{loc}$. Then $\{g(t) \}_{t \in [0, T)}$ is a tame Ricci flow solution. \end{proposition} \begin{proof} Let $V$ and $W$ be one-parameter families as in Definition \ref{5.4}. For any $t^\prime \in [0, T)$ and $t \in [t^\prime , T)$, the analog of (\ref{5.2}) holds with $0$ replaced by $t^\prime$. Taking $t^\prime \rightarrow 0$ shows that (\ref{5.2}) holds. \end{proof}
\begin{example} \label{5.12} Let $\Sigma$ be a compact two-dimensional metric space with curvature bounded in the Alexandrov sense. One can construct a Ricci flow solution starting from $\Sigma$, in a certain sense, which will be smooth for positive time \cite{Richard (2012)}. Using \cite[Lemma 3.3]{Richard (2012)} and Proposition \ref{5.11}, we can extend the solution back to time zero to get a tame Ricci flow solution $g(\cdot)$ that exists on some time interval $[0, T)$, with $\Sigma$ corresponding to $g(0)$. \end{example}
\begin{example} \label{5.13} Let $({\mathbb R}^2, g_0)$ be a two-dimensional metric cone with total cone angle in $(0, 2 \pi]$. There is a corresponding expanding soliton $\{g(t)\}_{t > 0}$ with the property that at any positive time, the tangent cone at infinity is isometric to $({\mathbb R}^2, g_0)$ \cite[Section 2.4]{Chow-Knopf (2004)}. Putting $g(0) = g_0$, one obtains a tame Ricci flow solution $\{g(t)\}_{t \ge 0}$. \end{example}
\begin{remark} \label{5.14} Let $g_0$ be a Lipschitz-regular Riemannian metric on a compact $C^{1,1}$-manifold $M$. Choose a compatible smooth structure on $M$. From \cite[Theorem 5.3]{Koch-Lamm (2013)}, there is a smooth solution $\{h(t)\}_{t \in (0, T)}$ to the DeTurck-Ricci flow with $\lim_{t \rightarrow 0^+} h(t) = g_0$. Fixing $t_0 \in (0, T)$ and using the estimate in \cite[Theorem 5.3]{Koch-Lamm (2013)}, we can integrate the vector field in the DeTurck trick backward in time from $t_0$ to $0$, to obtain a homeomorphism $\phi$ of $M$. We can think of $\phi$ as giving a preferred smooth structure based on the Riemannian metric $g_0$. We can also undo the vector field in the DeTurck trick, starting at time $t_0$, to obtain a smooth Ricci flow solution $\{\widehat{g}(t)\}_{t \in (0, T)}$. Presumably $\phi$ is $C^{1,1}$ and there is a tame Ricci flow solution starting from $g_0$, given by $g(t) = \phi^* \widehat{g}(t)$. \end{remark}
\subsection{Cone-preserving Ricci flow} \label{subsect5.3}
There has been recent work about Ricci flow on certain singular spaces with conical singularities along codimension-two strata, under the requirement that the flow preserve the conical singularities \cite{Liu-Zhang (2014),Mazzeo-Rubinstein-Sesum (2013),Phong-Song-Sturm-Wang (2014),Shen (2014),Shen (2014b),Yin (2010),Yin (2013)}. As seen in Example \ref{5.6}, such solutions may not be tame Ricci flow solutions. However, one can consider an alternative and less restrictive definition, which we call {\em cone-preserving Ricci flow solutions}, in which the $V$ and $W$ of Definition \ref{5.4} are additionally required to have square-integrable covariant derivative.
\begin{example} \label{5.15} For all $t \ge 0$, let $g(t)$ be the metric of Example \ref{4.5}. We claim that $\{g(t)\}_{t \in [0, \infty)}$ is a cone-preserving Ricci flow solution. To see this, we can assume that $\alpha \neq 0$. Suppose that $V$ is a compactly supported Lipschitz vector-valued half-density. Writing $V = \sum_i v^i \partial_i \otimes \sqrt{dx^1 dx^2}$, one finds $\nabla_i V = \sum_j \nabla_i v^j \otimes \sqrt{dx^1 dx^2}$, where \begin{align} \label{5.16} \nabla_1 v^1 = & \partial_1 v^1 + (\partial_2 \phi) v^2, \\ \nabla_1 v^2 = & \partial_1 v^2 - (\partial_2 \phi) v^1, \notag \\ \nabla_2 v^1 = & \partial_2 v^1 - (\partial_1 \phi) v^2, \notag \\ \nabla_2 v^2 = & \partial_2 v^2 + (\partial_1 \phi) v^1. \notag \end{align} Here $\phi(x^1, x^2) = - \frac{\alpha}{2} \log \left( (x^1)^2 + (x^2)^2 \right)$. Now the square norm of $\nabla V$ is \begin{equation} \label{5.17} \int_M \sum_{i,j,k,l} g_{jl} g^{ik} (\nabla_i v^j) (\nabla_k v^l) \: dx^1 dx^2 = \int_M \sum_{i,j} (\nabla_i v^j)^2 \: dx^1 dx^2. \end{equation} Suppose that this is finite. As each $v^i$ is Lipschitz, and $\partial_i \phi = - \alpha \frac{x^i}{(x^1)^2 + (x^2)^2}$, it follows that $v^i(0) = 0$. Then (\ref{4.6}) gives that $Q(V, W) = 0$. Looking at (\ref{5.2}), the claim follows. \end{example}
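To justify the step ``$v^i(0) = 0$'': from (\ref{5.16}) and the elementary inequality $(a+b)^2 \ge \frac12 b^2 - a^2$,
\[
\sum_{i,j} (\nabla_i v^j)^2 \ge \frac{\alpha^2}{2 \left( (x^1)^2 + (x^2)^2 \right)} \left( (v^1)^2 + (v^2)^2 \right) - C,
\]
where $C$ depends only on the Lipschitz constants of $v^1$ and $v^2$ on their support. If $v^1(0)$ or $v^2(0)$ were nonzero then the right-hand side would fail to be integrable near the origin, since $\int_{|x| \le \epsilon} |x|^{-2} \: dx^1 dx^2 = \infty$.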
This example shows that there may be nonuniqueness among cone-preserving Ricci flow solutions with a given initial condition, in view of the expanding soliton solution mentioned in Example \ref{5.13}.
We see that in this example, the cone angle along the codimension-two stratum is unchanged. If one wants to give a notion of a weak Ricci flow solution along these lines on a nonmanifold space (which we do not address here), it is probably natural to impose the square integrability of $\nabla V$ as a requirement.
\begin{example} \label{5.18} For $k,n > 1$, consider ${\mathbb C}^n/{\mathbb Z}_k$, where the generator of ${\mathbb Z}_k$ acts isometrically on the flat ${\mathbb C}^n$ as multiplication by $e^{2 \pi i/k}$. We expect that with any reasonable definition of a weak Ricci flow, this will give a static Ricci flow solution (since Proposition \ref{4.16} indicates that the vertex of a cone with a smooth link, in real dimension greater than two, should not contribute to the Ricci measure).
On the other hand, there is an expanding soliton solution that exists for $t > 0$, and whose $t \rightarrow 0$ limit is ${\mathbb C}^n/{\mathbb Z}_k$ \cite[Section 5]{Feldman-Ilmanen-Knopf (2003)}. This shows that one cannot expect a uniqueness result for weak Ricci flow solutions whose time-zero slice is a nonmanifold, without further restrictions. \end{example}
\end{document}
A mobile restriction modification system consisting of methylases on the IncA/C plasmid
Ruibai Wang (ORCID: orcid.org/0000-0002-2258-3599), Jing Lou & Jie Li
Mobile DNA volume 10, Article number: 26 (2019)
IncA/C plasmids play important roles in the development and dissemination of multidrug resistance in bacteria. These plasmids carry three methylase genes, two of which show cytosine specificity. The effects of such a plasmid on the host methylome were observed by single-molecule, real-time (SMRT) and bisulfite sequencing in this work.
The results showed that the numbers of methylation sites on the host chromosomes were changed, as were the sequences recognized by MTase. The host chromosomes were completely remodified by the plasmid with a methylation pattern different from that of the host itself. When the three dcm genes were deleted, the transferability of the plasmid into other Vibrio cholerae and Escherichia coli strains was lost. During deletion of the dcm genes, except for the wild-type strains and the targeted deletion strains, 18.7%~ 38.5% of the clones lost the IncA/C plasmid and changed from erythromycin-, azithromycin- and tetracycline-resistant strains to strains that were sensitive to these antibiotics.
Methylation of the IncA/C plasmid was a new mobile restriction modification (RM) barrier against foreign DNA. By actively changing the host's methylation pattern, the plasmid crossed the barrier of the host's RM system, and this might be the simplest and most universal method by which plasmids acquire a broad host range. Elimination of plasmids by destruction of plasmid stability could be a new effective strategy to address bacterial multidrug resistance.
Antibiotic resistance, especially multidrug resistance, is a serious challenge worldwide. Many bacterial drug resistance genes are stored in plasmids and can be horizontally transferred to other bacteria. The IncA/C family plasmids are major members of these plasmids [1, 2]. Both of the famous plasmids, namely, pIP1202, which exhibits high resistance to at least eight antibiotics and was isolated from Yersinia pestis in 1995 [3], and the NDM-1 plasmid, which was isolated from superresistant bacteria in India, Bangladesh, Pakistan, Britain and the United States in August 2010 [4], are IncA/C plasmids. These plasmids have generated much concern with regard to public health and bioterrorism. IncA/C plasmids have the ability to robustly accumulate antibiotic resistance genes. There are many resistance genes against rifampicin, erythromycin, streptomycin, chloramphenicol, sulfonamides and disinfectants that are routinely harbored on these plasmids [5], in addition to a variety of beta-lactamase genes harbored by the pNDM-1_Dok01, pNDM102337, pNDM10469, pNDM10505, pNDMCFuy, pNDM-KN and pNDM-US plasmids of Escherichia coli and Klebsiella pneumoniae. The pVC211 plasmid that we isolated from Vibrio cholerae has 16 antibiotic resistance-related genes, including five macrolide resistance genes that utilize three mechanisms [6]. The IncA/C plasmid made 99% of Chinese V. cholerae O139 strains resistant to more than three antibiotics, and 47% resistant to eight antibiotics [7].
Another striking feature of the IncA/C plasmids is their wide host range. These plasmids exist and are transferred horizontally in many bacterial species and genera, such as E. coli, Salmonella, Enterobacter, Klebsiella, V. cholerae, Yersinia, Pantoea, Edwardsiella, Citrobacter freundii, Photobacterium damselae, Aeromonas, Xenorhabdus nematophila, and Providencia, with transformation efficiencies as high as 10− 1 to 10− 2. In addition, the skeleton of the IncA/C plasmid is also widely distributed in agricultural multidrug-resistant pathogens. Strains carrying IncA/C plasmids have been isolated from cows, chickens, turkeys and pigs. Reports have shown that the blaCMY-2 gene on the IncA/C plasmid that confers resistance to cephalosporin could be identified in human E. coli and Klebsiella pneumoniae isolates several years after the prevalence of the gene in edible animals [8]. Moreover, 1% of E. coli strains isolated from healthy people who had never taken antibiotics were positive for the repA/C gene, the replicator of the IncA/C plasmid [9, 10]. This mobile reservoir of drug resistance genes can transmit multidrug resistance phenotypes from foodborne pathogens to human pathogens, which demonstrates that the use of veterinary drugs can affect human drug resistance profiles, and the IncA/C plasmids have special public health implications.
IncA/C plasmids are large, conjugative plasmids that are approximately 150 kb in length. On the plasmid backbones, there are three methylation-related genes: dcm1, dcm2, and dcm3, which are 1626 bp, 1428 bp and 924 bp in length and are identical to the NCBI reference sequences WP_000201432.1, WP_000936896.1 and WP_000501488.1, respectively. The dcm1 gene encodes Gammaproteobacteria cytosine-C5 specific DNA methylase (MTase) of the Pfam PF00145 family. The dcm2 gene is a DNA cytosine methyltransferase with an AdoMet_MTase (cl17173) conserved domain. The dcm3 gene is a DNA modification MTase that lacks any known conserved domain. These three methylation genes are broadly conserved in IncA/C2 plasmids, although a few plasmids have lost the dcm2 gene due to a deletion event associated with the ARI-B resistance island [2]. There are six methylation-related genes in the host genome of V. cholerae: three adenine-specific MTases (dam: VC1769, VC2118, and VC2626), two rRNA MTases (VC2697 and VCA0627), and one orphan cytosine MTase, vchM (VCA0158). Generally, DNA (cytosine-5-specific) MTase (Dcm) recognizes the CCWGG motif and covalently adds a methyl group at the C5 position of the second cytosine, while Dam introduces a methyl group at the N6 position of the adenine in the GATC motif. The MTases encoded in the V. cholerae chromosomes are mainly Dam MTases, and the only Dcm MTase, VchM, recognizes a novel target, the first 5'C on both strands of the palindromic sequence 5′-RCCGGY-3′, and leaves all 5′-CCWGG-3′ sites unmethylated in V. cholerae [11]. These data suggest that the changes in DNA methylation mediated by the IncA/C plasmids may be different from those in the host chromosomes.
DNA methylation is a central epigenetic modification in various cellular processes, including DNA replication and repair [12], transcriptional modulation, lowering of transformation frequency, and stabilization of short direct repeats in certain bacteria; in addition, DNA methylation is necessary for site-directed mutagenesis [13]. DNA methylation acts either alone or as a part of the bacterial restriction modification (RM) system that protects the host from infection of foreign DNA and bacteriophages by degrading nonmethylated DNA with sequence-specific restriction enzymes. Although the role of IncA/C plasmids in drug resistance is well elucidated, the role of the dcm genes on A/C2 plasmids remains largely unexplored. In this study, the IncA/C plasmid pVC211 (148,456 bp) [6] was conjugated into V. cholerae strain C6706. Genome-wide bisulfite sequencing and single-molecule, real-time (SMRT) sequencing [14, 15] were conducted to provide a global view of the changes in the methylation patterns of the host DNA that were induced by this plasmid.
Results
Methylomes determined by SMRT sequencing
The two samples, C6706 and CV2, produced 998,990,159 bp and 1,097,369,564 bp polymerase reads, respectively, in SMRT sequencing, and the average sequencing coverage reached ~250x. In the C6706 genome, 1,476,420 methylation sites were detected, and 1490,035 methylation sites were detected in CV2. Although the actual sequencing coverages were much higher than the coverage required for m5C analysis by PacBio RS II, the confidence of detection was not high enough, leading to failure of m5C detection. In addition to the m4C and m6A sites, a large proportion of methylation sites, including m5C, in both C6706 and CV2 were reported as 'modified bases' and failed to show definitive methylation patterns in SMRT analysis. The number of m4C and m6A methylation sites, whether intergenic or in coding sequences (CDSs), decreased after the transformation of the pVC211 plasmid, but the number of untyped methylation sites increased by 18,763 sites, resulting in a slight increase in the total methylation rate from 36.6% in C6706 to 36.94% in CV2 (Table 1).
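As a rough arithmetic check (assuming, purely for illustration, the ~4.03-Mb V. cholerae N16961 reference genome length, which is not stated above): average coverage ≈ total polymerase read bases / genome length, giving 998,990,159 bp / ~4.03 Mb ≈ 248x for C6706 and 1,097,369,564 bp / ~4.03 Mb ≈ 272x for CV2, roughly in line with the stated ~250x; likewise, total methylation rate ≈ methylation sites / genome positions, giving 1,476,420 / ~4.03 Mb ≈ 36.6% for C6706 and 1,490,035 / ~4.03 Mb ≈ 36.9% for CV2, matching the reported rates.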
Table 1 Methylation information derived from SMRT sequencing
There were 4013 annotated genes on the V. cholerae chromosome, including 2775 functional genes, 25 rRNA genes, and 94 tRNA genes distributed on chromosome I and 1115 functional genes and 4 tRNA genes distributed on chromosome II (Table 2). The methylation patterns on the two chromosomes of V. cholerae were considerably different. The genes on chromosome I were generally methylated, dominated by m4C-m6A double methylation, and sites involving m4C and m6A accounted for 84.7% (3401/4013) of the total genes. On the other hand, more than half of the genes on chromosome II exhibited non-m4C non-m6A methylation. In particular, the methylation of the tRNA gene was the most significantly different between the two chromosomes, with 92 of the 94 genes on chromosome I involved in the methylation of m4C or m6A, while on chromosome II, only one of the four genes, VC_At2, was m4C methylated. Chromosome II of V. cholerae is a mega-plasmid ameliorated to the host chromosome as a result of its long-standing presence in the host lineage [16]. The differences in the methylation patterns of chromosome II also demonstrate its heterogeneity with chromosome I. However, there was no obvious difference between C6706 and CV2 in terms of the methylation patterns of the two chromosomes.
Table 2 Methylation patterns of the two chromosomes of C6706 and CV2
However, the methylation sites on the genomes of C6706 and CV2 showed significant changes after conjugation of the IncA/C plasmid (Table 3). A total of 124,571 sites, consisting of 85,972 m4C and 38,599 m6A sites, in 3341 genes kept the same methylation type between the two samples. C6706 had 67,545 methylation sites that differed from CV2, and CV2 had 62,657 specific methylation sites. These differential sites accounted for 37.46 and 35.16% of the C6706 and CV2 methylation sites, respectively, involving approximately 81% of the total genes (97% of the total genes on chromosome I and 40% on chromosome II). This large-scale methylation change made it impossible to discern any gene function modification preference. Among the genes with differential methylation sites, the methylation modes of 116 genes changed significantly (Additional file 1: Table S2); for example, the methylation pattern of VC_0025 changed from m4C-m6A double methylation to m4C single methylation, that of VC_A0698 changed from m4C methylation to m6A methylation, and that of VC_1030 changed from m4C methylation to non-m4C non-m6A methylation. Although 64 genes encoded hypothetical proteins with unknown function, other differentially methylated genes involved many key processes in the lifecycle of the V. cholerae host. These genes included the following: 1 LuxR family transcriptional regulator; 3 LysR family transcriptional regulators; 5 regulatory proteins related to phosphoglycerate transport, cold shock, sigma-54 dependent transcription, response and mannitol metabolism; Hcp-1, which is involved in chromosome segregation; 7 transport and binding proteins related to Na+/H+, vibriobactin and enterobactin transportation; and 5 proteins related to protein synthesis and cell fate.
Table 3 Common and differential m4C and m6A methylation sites in CDS between C6706 and CV2
Analysis of the motif sequences using the P_MotifFind Module of SMRT further revealed the effects of the plasmid-encoded MTases on the methylation status of the host chromosomes (Table 4). Eleven and ten motifs were detected in C6706 and CV2, respectively, but only 6 motifs were commonly detected in both samples, and these shared motifs also showed significant differences in number and proportion between the two samples. Four motifs found in C6706, namely, AGKNNNNW, GADNDGCG, GARVNNDG, and TVVVNNDG, were changed to ABNBMVBW, GANNDBBG, GARVNRNG and TNVVNNDG in CV2, respectively. The DCAGVHRNG motif that was present in C6706 disappeared in CV2. These data indicate that plasmid transformation not only changed the methylation types of the host chromosomes, but also changed the characteristics of the identification sequences of the MTases.
Table 4 Motifs in C6706 and CV2 as determined by SMRT sequencing
m5C methylation in bisulfite sequencing
Because of the failure of m5C site detection in SMRT, we had to use the traditional m5C analysis method, bisulfite sequencing. Samples C6706 and CV2 produced 16,196,082 bp and 16,232,236 bp clean reads, respectively. The mapping rates were 96.01 and 91.78%; the bisulfite conversion rates were 99.65 and 99.62%; and the average depths were 362.12x and 345.69x, respectively. We detected 6965 methylcytosines in C6706 and 7321 methylcytosines in CV2 from a total of 1,915,376 cytosines in the V. cholerae genome. This increase in the number of methylcytosines in CV2 was consistent with the increase in untyped methylation sites in SMRT sequencing and the cytosine-specificity of the MTases on the IncA/C2 plasmid. The average methylation levels of the three types of C bases (mCG, mCHG and mCHH, where H = A, C or T) were higher in CV2 than in C6706. At the whole-genome level, the mCG, mCHG and mCHH methylation levels were 0.33, 1.74 and 0.43% in C6706, respectively, and 0.34, 1.81 and 0.44% in CV2, respectively. Moreover, the graphs of the methylation levels also showed a slight increasing trend from C6706 to CV2 (Fig. 1A). For example, 50% of the mCHH sites were 10% methylated in C6706; it were 59% of the mCHH sites 20% methylated in CV2. Notably, although the total number of mCG sites was the most similar between C6706 and CV2, the methylation levels of these sites were greatly different. The methylation levels of the mCG sites changed as follows: 68% of the mCG sites were 10% methylated and 32% of the mCG sites were 20% methylated in C6706; 19% of the mCG sites were 10% methylated, 57% of the mCG sites were 20% methylated, and 18% of the mCG sites were 30% methylated in CV2 (Fig. 1a).
Global trends of the methylomes of C6706 and CV2 and logo plots of the non-CG methylation patterns. A. Distribution of the methylation level in the C6706 and CV2 samples. The x-axis indicates the methylation level, and the y-axis indicates the fraction of mC at a specific methylation level in all methylcytosines. The methylation level of cytosine is the proportional value of the sequence supporting the C base site as a methylation site in the effective coverage sequence. B. The density distribution of mC on each chromosome. The x-axis represents the chromosome. The y-axis on the left represents the mC density calculated from a 10 kb window, and the blue dot represents the distribution of the mC density on the chromosome. The y-axis on the right represents the normalized mC ratio. The curves represent the density distribution of the different types of mC bases (CG, CHG and CHH). C. Logo plots of the sequence features of the adjacent bases at the C site. The x-axis represents the base position, with the C-base being analyzed in the fourth position. The y-axis represents the entropy value
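To illustrate the definition used in Fig. 1a (with hypothetical numbers): a cytosine covered by 40 effective reads, 8 of which support methylation at that position, has a methylation level of 8/40 = 20%; the curves then report, for each context (CG, CHG and CHH), the fraction of all methylcytosines whose level falls at each value.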
The global-scale view of the DNA methylation density in 100-kb windows (Fig. 1b) revealed that there was no correlation between the distributions of the three methylation types on the two chromosomes of V. cholerae. The density of mCG methylation was relatively steady in each chromosomal region, while the non-CG methylation levels fluctuated greatly throughout each chromosome. In C6706 and CV2, the smooth profiles of the mCHG density remained constant, but the mCHH density profiles changed dramatically without any similarity (Fig. 1b).
The sequence characteristics of the bases in the vicinity of a methylation site reflect the preference of MTases for recognition of specific sequences, so we calculated the methylation percentage of the nine bases upstream and downstream of the methylation site (mC at the fourth base). The enrichment of particular local sequences was observed for both mCG and non-CG methylcytosine (Additional file 2: Figure S1). 5′-RCCGGY-3′ was the recognition sequence of mCHG, and preferences for an AG dinucleotide of mCG and a C of mCHH upstream of the methylation sites were observed. This result was consistent with the analysis of vchM [11], but also showed that in addition to the 5′-RCCGGY-3′ sequence, although the 5′-CCWGG-3′ sites were unmethylated, other m5C methylated sequences existed in V. cholerae. C6706 and CV2 were identical in this sequence preference, but there were differences in the relative frequency of the bases before and after the recognition sites of mCG and mCHH (Fig. 1c).
In C6706, 6833 m5C methylation sites were located in 1436 genes on chromosome I and in 446 genes on chromosome II, accounting for 49.60 and 39.85% of the total genes of the two chromosomes, respectively, and included 459 m5C sites in intergenic regions. In CV2, 6851 m5C methylation sites were located in 1554 genes on chromosome I, 492 genes on chromosome II, and 471 sites in intergenic regions. Between the two samples, 6466 m5C methylation sites remained constant. There were 499 differential m5C sites in C6706 and 856 sites in CV2, involving 377 and 635 genes, respectively. There were almost no identical mCHH methylation sites between C6706 and CV2, similar to the mCG sites on chromosome II of the two strains (Table 5). Overall, DNA methylation in non-CG contexts (mCHG and mCHH) accounted for an absolute majority (99.87 and 99.78%) in V. cholerae and accounted for most of the differences in methylation before and after plasmid transformation.
Table 5 Three methylation types of C bases in bisulfite sequencing between C6706 and CV2
Methylation of host MTase genes
Four of the six MTase genes on the V. cholerae chromosomes had m4C and m6A methylated sites; VC1769 was the only gene that had all three methylation types. We did not detect methylation sites of any of the three types on VCA0627 or on the orphan Dcm gene, VCA0158 (Table 6). After transformation with pVC211, only a few m6A sites changed, while more than half of the m4C sites changed. Therefore, it was reasonable to observe the large-scale change in the methylation profile of the host chromosomes by SMRT sequencing. Combined with the changes in m5C sites determined by bisulfite sequencing, our results suggested that the changes in the host methylation profile were partly attributable to the indirect role of host MTases that were affected by the plasmid and partly attributable to the direct role of the plasmid-encoded MTases themselves.
Table 6 Methylation of the six host MTase genes
Knockouts of the three MTase genes on pVC211
All seven types of VC211 mutants, covering single, double and triple dcm gene deletions, were constructed successfully, which indicated that none of the three methylase genes is essential for pVC211. However, among the secondary homologous recombinants selected from the sucrose plates to test for the gene deletion, in addition to the wild-type revertants and the target gene-deleted strains, several clones had lost pVC211 altogether. These clones were positive for amplification with the ctxAB-U/L primer set and negative for the del dcm1/4 and repA-F/R primer sets. To confirm this phenomenon, all seven types of gene knockouts were repeated twice. The highest plasmid loss rate was found in the dcm1 single-gene deletion, with an average of 37/96 (the wild-type rate was 14/96, and the gene deletion rate was 45/96). Among the other five types of deletions, the plasmid loss rate was approximately 18/96, while plasmid loss was not observed in the dcm2 single-gene deletion. With the loss of this IncA/C plasmid, the V. cholerae strain VC211 changed from a strain resistant to erythromycin (MIC > 128 μg/ml), azithromycin (MIC ≥ 64 μg/ml) and tetracycline (MIC 16 μg/ml) [6] to a sensitive strain.
The loss of pVC211 during the deletion of the dcm genes indicated that the dcm genes might affect plasmid stability, so the presence of the plasmid was detected by PCR in 96 clones of the VC211∆dcm1 and VC211∆(dcm1,dcm2,dcm3) mutant strains after three consecutive passages in 5 ml Luria–Bertani (LB) broth without any antibiotics; PCR was performed with the repA-F/R primer set as described previously [7]. No progeny clones that had lost the plasmid were detected, showing that IncA/C plasmids with the three dcm genes deleted continued to exist stably in the host V. cholerae strains.
The transferability of pVC211∆(dcm1,dcm2,dcm3) between V. cholerae strains was tested. VC211∆(dcm1,dcm2,dcm3) was conjugated with N16961 (lacZ::kanr), C6706 (lacZ::kanr), VC2981 and VC2973 and then spread onto LB plates containing two antibiotics, using VC211 as a control. The results showed that the pVC211 plasmid lacking the three dcm genes could no longer be transferred to other V. cholerae strains. Furthermore, freshly cultured SM10 strains were divided into two aliquots and conjugated with VC211∆(dcm1,dcm2,dcm3) and VC211 to test plasmid transferability between V. cholerae and E. coli strains. The results were interesting. The clones grown on the double-antibiotic LB plates of the SM10 and VC211 conjugation were positive for repA-F/R amplification and negative for ctxAB-U/L amplification, indicating that the pVC211 plasmid had been transferred into SM10. All of the clones grown on the plates of the conjugation of SM10 and VC211∆(dcm1,dcm2,dcm3) were positive for repA-F/R and ctxAB-U/L amplification. These clones were also positive for the kan-mini-U/L amplification of mini-Tn5 and were resistant to kanamycin (100 μg/mL), indicating that, instead of pVC211∆(dcm1,dcm2,dcm3) being transferred into SM10, the mini-Tn5 transposon of SM10 [17] was transferred into V. cholerae.
Methylation plays important roles in epigenetic gene regulation in both eukaryotic and prokaryotic organisms. Compared with the explicit function of Dam methylation in mismatch repair, initiation of chromosome replication, transcription regulation at promoters containing GATC sequences, gene expression, pathogenicity and DNA stability under antibiotic pressure [18], knowledge of the function of Dcm MTase is limited [2]. The original function elucidated for Dcm was to discriminate self DNA from foreign DNA as part of the RM system [19] and to affect the plasmid transfer efficiency and dissemination of antibiotic resistance genes in Enterococcus faecalis [15]. Therefore, the discovery that two of the three MTase genes on the IncA/C2 plasmid are cytosine-specific was intriguing.
Three types of methylation, namely, m6A, m4C and m5C, and a large amount of methylation without obvious methylation patterns exist in V. cholerae. Comparison of the genome-wide methylation profiles of V. cholerae C6706 before and after the conjugation of pVC211 revealed that the MTases of the IncA/C plasmid changed the number and characteristics of all types of methylation. The changes involved most of the regions of the two chromosomes, and no obvious preference for gene function was observed. It seemed that the host chromosomes were completely relabeled by the plasmid with a methylation pattern different from that of the host itself. Surprisingly, the new methylation pattern induced by the plasmid could be tolerated by the restriction enzymes of the host's RM system.
After loss of the three dcm genes, pVC211 could not be transferred into other V. cholerae and E. coli strains as the wild-type plasmid, which demonstrated that instead of mimicking the host's methylation pattern [20], pVC211 crossed the barrier of the host's RM system by actively changing the host's methylation pattern. This is the simplest and most general method by which IncA/C plasmids obtain their broad host range. Although pVC211∆(dcm1,dcm2,dcm3) could no longer be transferred, this plasmid could still stably exist in the host strain when there was no selection pressure. We speculated that once the plasmid entered the host, other genes on the plasmid participated in plasmid maintenance, and methylation was no longer the only determinant of plasmid stability. After deletion of the dcm genes, the mini-Tn5 transposon of SM10 could enter VC211. Combined with the methylation of the host chromosomes by the plasmid, this finding suggested that the MTases of the plasmid actually constructed a new RM barrier to nonself DNA for itself. In addition to the bacterial RM for foreign DNA on chromosomes, the mobile genetic elements could also carry their own barriers. Moreover, transfer of the suicide plasmid pWM91 into VC211 clearly demonstrated the selectivity of the IncA/C RM barrier.
In addition, it is worth noting the loss of the IncA/C plasmid during the deletion of the dcm genes. In theory, after the first recombination, the suicide plasmid pWM91 was inserted into pVC211, and it was possible that the pVC211 plasmid was expelled together with pWM91 due to the toxic effect of the sacB gene on the sucrose plate. However, IncA/C plasmids were very stable in V. cholerae [7], and in the previous deletions of plasmid genes [6], only the secondary recombined wild-type strains and the gene deletion strains were obtained after sucrose selection; loss of the IncA/C plasmid together with the suicide plasmid never occurred, as was also the case for the deletion of the dcm2 gene in this study. Therefore, even though the dcm genes were not the determinants of plasmid stability, and the plasmids continued to exist stably in subcultures after the dcm genes were deleted, the loss of plasmid during the dcm gene deletions indicated that these genes could possibly contribute to the stability of the plasmids. Loss of the IncA/C plasmids transformed the host bacteria from plasmid-conferred antibiotic-resistant strains to sensitive strains. This result indicated that elimination of plasmids by destruction of plasmid stability could be a new, effective strategy to address bacterial multidrug resistance.
MTases of the IncA/C plasmid were a mobile RM system of plasmids against nonself DNA, including host chromosomes and other mobile genetic elements. By actively changing the host's methylation pattern, this system helped the plasmids cross the barrier of the host's RM system and enabled the broad host range of the plasmids.
Strains and plasmids
The toxigenic O1 El Tor strain C6706 and N16961 are commonly used V. cholerae reference strains, isolated in 1991 from Peru [21] and 1971 from Bangladesh, respectively [22]. VC2981 and VC2973 are nontoxigenic O1 El Tor strains with ampicillin resistance isolated from Jiangsu Province in 1965. The IncA/C2 plasmid pVC211 was identified in a toxigenic O139 serogroup strain VC211 isolated in 2003 from Guangdong Province, China [6].
Construction of the kanamycin-resistant mutants of C6706 and N16961
The kanamycin resistance gene was amplified from the plasmid pet-ken with the kan-844-stuI-U/L primer set and was digested with the enzyme StuI; then, this gene was ligated to the vector pJL-1, which was also digested with StuI, and transformed into the Escherichia coli strain SM10. By conjugation and via the homologous arms of the lacZ genes on pJL-1, a kanamycin (kan) resistance gene was inserted into the genomes of the V. cholerae strains N16961 (streptomycin, strr) and C6706 (strr). Thus, N16961 (lacZ::kanr, strr) and C6706 (lacZ::kanr, strr) were constructed. Furthermore, the pVC211 plasmid was transferred into C6706 (lacZ::kanr, strr) by conjugation, and the strain CV2 was constructed.
Bisulfite sequencing
Genomic DNA was fragmented into ~ 250-bp fragments using Bioruptor (Diagenode, Belgium). The fragments were blunt ended, phosphorylated, subjected to 3′-dA overhang generation and ligated to methylated sequencing adaptors. Then, the samples were treated with bisulfite by using the EZ DNA Methylation-Gold Kit (Zymo Research, Inc., Irvine, CA, USA), desalted and size selected by 2% agarose gel electrophoresis. After PCR amplification and another round of size selection, the qualified libraries were sequenced. The sequencing data were filtered, and the low-quality data were removed. Then, the clean data were mapped onto the reference genome (the complete genome sequences of the standard strain N16961, AE003852 and AE003853; BSMAP) to obtain the methylation information for all cytosines throughout the genome for standard and personalized bioinformatic analysis. The average methylation level was calculated as follows:
$$ \mathrm{Rm}_{\mathrm{average}} = \frac{\mathrm{Nm}_{\mathrm{all}}}{\mathrm{Nm}_{\mathrm{all}} + \mathrm{Nnm}_{\mathrm{all}}} \times 100\% $$
where Nm is the number of methylated cytosines, and Nnm is the number of reads with nonmethylated cytosines. Windows containing at least 5 CG (CHG or CHH) at the same location in the two genomes were identified, and the difference in the level of CG methylation in these windows between the two samples was compared to find regions with significant differences in methylation (differentially methylated region, DMR; 2-fold difference, Fisher test P value ≤0.05). If two adjacent DMRs formed a contiguous region with significantly different methylation levels in both samples, the two DMRs were merged into one continuous DMR; otherwise, these DMRs were considered to be separate.
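As a rough illustration of these criteria (not the authors' actual pipeline), the window-level calculation can be sketched as follows. The function names and example counts are assumptions made only for this sketch, and scipy's Fisher's exact test stands in for the Fisher test mentioned above.

```python
# Illustrative sketch only: average methylation level and a simple DMR call
# between two samples, following the criteria described in the text
# (>= 2-fold difference in methylation level, Fisher test P <= 0.05).
from scipy.stats import fisher_exact

def methylation_level(n_meth, n_unmeth):
    """Rm = Nm / (Nm + Nnm) * 100 for one window or one site."""
    total = n_meth + n_unmeth
    return 100.0 * n_meth / total if total else 0.0

def is_dmr(meth1, unmeth1, meth2, unmeth2, fold=2.0, alpha=0.05):
    """meth*/unmeth*: methylated / unmethylated read counts in one window."""
    r1 = methylation_level(meth1, unmeth1)
    r2 = methylation_level(meth2, unmeth2)
    if min(r1, r2) == 0:
        fold_change = float("inf") if max(r1, r2) > 0 else 1.0
    else:
        fold_change = max(r1, r2) / min(r1, r2)
    _, p_value = fisher_exact([[meth1, unmeth1], [meth2, unmeth2]])
    return fold_change >= fold and p_value <= alpha

# Example: a window with 40/10 methylated/unmethylated reads in sample 1
# versus 10/40 in sample 2 is called as a DMR.
print(is_dmr(40, 10, 10, 40))  # True
```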
CIRCOS was used to compare the differences in the methylation levels of DMRs between the samples. The degree of difference in the methylation level at one site in the two samples was calculated by the following formula, where Rm1 and Rm2 represent the mC methylation level of the two samples. If the value of either Rm1 or Rm2 was 0, it was replaced with 0.001 [23].
$$ \text{Degree of difference} = \frac{\log_2 \mathrm{Rm1}}{\log_2 \mathrm{Rm2}} $$
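A minimal sketch of this calculation, with the zero-value substitution applied as stated above (purely illustrative; the function and variable names are mine, and the sketch assumes Rm2 is not exactly 1 so the denominator is nonzero):

```python
import math

def degree_of_difference(rm1, rm2):
    # Replace a methylation level of 0 with 0.001, as described in the text.
    rm1 = rm1 if rm1 > 0 else 0.001
    rm2 = rm2 if rm2 > 0 else 0.001
    return math.log2(rm1) / math.log2(rm2)
```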
SMRT sequencing
First, the target fragments were amplified from the qualified DNA samples by PCR. Then, the damaged ends of the fragments were repaired. Hairpin adapters were ligated to both ends of the DNA fragments to obtain a dumbbell-shaped structure known as a SMRTbell. After annealing, the SMRTbell template was bound by the polymerase immobilized at the bottom of a zero-mode waveguide (ZMW) and used for sequencing. SMRT sequencing was carried out on a PacBio RS II instrument (Pacific Biosciences; Menlo Park, CA, USA) using standard protocols. Polymerase reads with lengths less than 50 bp or quality values less than 0.75 were filtered out. Then, the clean data were mapped to the reference genome (AE003852 and AE003853) using the P_Mapping Module of SMRT Analysis software (v2.3.0). The P_Modification Detection (score ≥ 20, coverage ≥ 25 and identificationQv ≥ 20) and P_MotifFind (score ≥ 20 and coverage ≥ 25) Modules were used to identify the methylation sites and motifs. All kinetic raw data from SMRT sequencing have been deposited in NCBI (SRA, PRJNA477395).
Construction of the gene deletion mutants
To construct the dcm1, dcm2 and dcm3 gene deletion mutants of strain VC211, two-step overlap PCR was used to generate the fused homologous arms of the target genes. For example, the flanking regions upstream and downstream of dcm1 were amplified from the genomic DNA of VC211 with two primer sets: del-dcm1–1-SacI and del-dcm1–2, and del-dcm1–3 and del-dcm1–4-SpeI. After gel purification, the PCR products were mixed, diluted 1000-fold and used as templates for the second round of PCR with the 1 and 4 primers of each gene (primers used in the study are listed in Additional file 3: Table S1). Then, the overlap PCR products and suicide plasmid pWM91 were double digested with SacI and SpeI and were ligated and transformed into SM10. By conjugation and negative selection with 10% sucrose, the dcm1, dcm2 and dcm3 deletion mutants of VC211 were constructed. Ninety-eight clones on the sucrose plates of each deletion were selected and tested by PCR with the del dcm1 and 4 primers of each gene; these clones were further verified by the dcm-test-U/L primer sets. The presence of the IncA/C plasmid in the strains was also confirmed by the primer set repA-F/R for the plasmid replicase [7], and the primer set ctxAB-U/L for the toxic gene ctxAB of V. cholerae was used as a positive control. On the basis of single-gene deletion, the two-gene deletion mutants VC211∆(dcm1,dcm2), VC211∆(dcm2,dcm3), and VC211∆(dcm1,dcm3) and the three-gene deletion mutant VC211∆(dcm1,dcm2,dcm3) were constructed by the same method.
All the data supporting the findings are presented in the manuscript and the associated supplementary information file. The raw data were deposited into NCBI (SRA, PRJNA477395).
DMR: Differentially methylated region
LB: Luria–Bertani
RM: Restriction modification
SMRT: Single-molecule, real-time
Fricke WF, Welch TJ, McDermott PF, Mammel MK, LeClerc JE, White DG, et al. Comparative genomics of the IncA/C multidrug resistance plasmid family. J Bacteriol. 2009;191(15):4750–7 Epub 2009/06/02.
Harmer CJ, Hall RM. The a to Z of a/C plasmids. Plasmid. 2015;80:63–82 Epub 2015/04/26.
Welch TJ, Fricke WF, McDermott PF, White DG, Rosso ML, Rasko DA, et al. Multiple antimicrobial resistance in plague: an emerging public health risk. PLoS One. 2007;2(3):e309 Epub 2007/03/22.
Giske CG, Froding I, Hasan CM, Turlej-Rogacka A, Toleman M, Livermore D, et al. Diverse sequence types of Klebsiella pneumoniae contribute to the dissemination of blaNDM-1 in India, Sweden, and the United Kingdom. Antimicrob Agents Chemother. 2012;56(5):2735–8 Epub 2012/02/23.
Hsueh PR. New Delhi metallo-ss-lactamase-1 (NDM-1): an emerging threat among Enterobacteriaceae. J Formos Med Assoc. 2010;109(10):685–7 Epub 2010/11/03.
Wang R, Liu H, Zhao X, Li J, Wan K. IncA/C plasmids conferring high azithromycin resistance in Vibrio cholerae. Int J Antimicrob Agents. 2018;51(1):140–4 Epub 2017/09/19.
Wang R, Yu D, Zhu L, Li J, Yue J, Kan B. IncA/C plasmids harboured in serious multidrug-resistant Vibrio cholerae serogroup O139 strains in China. Int J Antimicrob Agents. 2015;45(3):249–54 Epub 2014/12/24.
Doublet B, Boyd D, Douard G, Praud K, Cloeckaert A, Mulvey MR. Complete nucleotide sequence of the multidrug resistance IncA/C plasmid pR55 from Klebsiella pneumoniae isolated in 1969. J Antimicrob Chemother. 2012;67(10):2354–60. Epub 2012/07/10.
Glenn LM, Englen MD, Lindsey RL, Frank JF, Turpin JE, Berrang ME, et al. Analysis of antimicrobial resistance genes detected in multiple-drug-resistant Escherichia coli isolates from broiler chicken carcasses. Microb Drug Resis. 2012;18(4):453–63 Epub 2012/03/06.
Fernandez-Alarcon C, Singer RS, Johnson TJ. Comparative genomics of multidrug resistance-encoding IncA/C plasmids from commensal and pathogenic Escherichia coli from multiple animal sources. PLoS One. 2011;6(8):e23415 Epub 2011/08/23.
Banerjee S, Chowdhury R. An orphan DNA (cytosine-5-)-methyltransferase in Vibrio cholerae. Microbiology. 2006;152(Pt 4):1055–62 Epub 2006/03/22.
Casadesus J, Low D. Epigenetic gene regulation in the bacterial world. Microbiol Mol Biol Rev. 2006;70(3):830–56 Epub 2006/09/09.
Marinus MG, Lobner-Olesen A. DNA Methylation. EcoSal Plus. 2014;6(1):1–62. Epub 2014/05/01.
Murray IA, Clark TA, Morgan RD, Boitano M, Anton BP, Luong K, et al. The methylomes of six bacteria. Nucleic Acids Res. 2012;40(22):11450–62 Epub 2012/10/05.
Huo W, Adams HM, Zhang MQ, Palmer KL. Genome modification in Enterococcus faecalis OG1RF assessed by bisulfite sequencing and single-molecule real-time sequencing. J Bacteriol. 2015;197(11):1939–51 Epub 2015/04/01.
Egan ES, Waldor MK. Distinct replication requirements for the two Vibrio cholerae chromosomes. Cell. 2003;114(4):521–30 Epub 2003/08/28.
de Lorenzo V, Herrero M, Jakubzik U, Timmis KN. Mini-Tn5 transposon derivatives for insertion mutagenesis, promoter probing, and chromosomal insertion of cloned DNA in gram-negative eubacteria. J Bacteriol. 1990;172(11):6568–72 Epub 1990/11/01.
Cohen NR, Ross CA, Jain S, Shapiro RS, Gutierrez A, Belenky P, et al. A role for the bacterial GATC methylome in antibiotic stress survival. Nat Genet. 2016;48(5):581–6 Epub 2016/03/22.
Takahashi N, Naito Y, Handa N, Kobayashi I. A DNA methyltransferase can protect the genome from postdisturbance attack by a restriction-modification gene complex. J Bacteriol. 2002;184(22):6100–8 Epub 2002/10/26.
Bottacini F, Morrissey R, Roberts RJ, James K, van Breen J, Egan M, et al. Comparative genome and methylome analysis reveals restriction/modification system diversity in the gut commensal Bifidobacterium breve. Nucleic Acids Res. 2018;46(4):1860–77. Epub 2018/01/03.
Thelin KH, Taylor RK. Toxin-coregulated pilus, but not mannose-sensitive hemagglutinin, is required for colonization by Vibrio cholerae O1 El Tor biotype and O139 strains. Infect Immun. 1996;64(7):2853–6 Epub 1996/07/01.
Heidelberg JF, Eisen JA, Nelson WC, Clayton RA, Gwinn ML, Dodson RJ, et al. DNA sequence of both chromosomes of the cholera pathogen Vibrio cholerae. Nature. 2000;406(6795):477–83 Epub 2000/08/22.
Heyn H, Li N, Ferreira HJ, Moran S, Pisano DG, Gomez A, et al. Distinct DNA methylomes of newborns and centenarians. Proc Natl Acad Sci U S A. 2012;109(26):10522–7 Epub 2012/06/13.
This work was supported by grants from the State Key Laboratory of Pathogen and Biosecurity (Academy of Military Medical Science) (SKLPBS1521).
State Key Laboratory for Infectious Disease Prevention and Control, National Institute for Communicable Disease Control and Prevention, Chinese Center for Disease Control and Prevention, Changbai Road 155, Changping, Beijing, 102206, People's Republic of China
Ruibai Wang, Jing Lou & Jie Li
Ruibai Wang
Jing Lou
Jie Li
R.W. designed the project, analyzed and interpreted the data, prepared the figures and wrote the manuscript; Jing Lou and Jie Li cultured and stored the strains.
Correspondence to Ruibai Wang.
Table S2. Genes with obviously changed methylation type in SMRT sequencing. (PDF 77 kb)
Figure S1. The methylation percentage of the 9 bases flanking the methylation site (mC at the fourth base). (PDF 89 kb)
Table S1. Primers used in this study. (PDF 8 kb)
Wang, R., Lou, J. & Li, J. A mobile restriction modification system consisting of methylases on the IncA/C plasmid. Mobile DNA 10, 26 (2019). https://doi.org/10.1186/s13100-019-0168-1
IncA/C plasmid
Cytosine-specific DNA methylase
Restriction modification system
Multidrug resistance
Matrix product state
A Matrix product state (MPS) is a quantum state of many particles (in N sites), written in the following form:
$|\Psi \rangle =\sum _{\{s\}}\operatorname {Tr} \left[A_{1}^{(s_{1})}A_{2}^{(s_{2})}\cdots A_{N}^{(s_{N})}\right]|s_{1}s_{2}\ldots s_{N}\rangle ,$
where $A_{i}^{(s_{i})}$ are complex, square matrices of order $\chi $ (this dimension is called the bond dimension). Indices $s_{i}$ go over states in the computational basis. For qubits, it is $s_{i}\in \{0,1\}$. For qudits (d-level systems), it is $s_{i}\in \{0,1,\ldots ,d-1\}$.
It is particularly useful for dealing with ground states of one-dimensional quantum spin models (e.g. Heisenberg model (quantum)). The parameter $\chi $ is related to the entanglement between particles. In particular, if the state is a product state (i.e. not entangled at all), it can be described as a matrix product state with $\chi =1$.
For states that are translationally symmetric, we can choose:
$A_{1}^{(s)}=A_{2}^{(s)}=\cdots =A_{N}^{(s)}\equiv A^{(s)}.$
In general, every state can be written in the MPS form (with $\chi $ growing exponentially with the particle number N). However, MPS are practical when $\chi $ is small – for example, when it does not depend on the particle number. Except for a small number of specific cases (some mentioned in the section Examples), an exact representation with small $\chi $ is not possible, though in many cases a small-$\chi $ MPS serves as a good approximation.
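As a concrete illustration (not part of the original article), the defining formula can be evaluated numerically: given one $\chi \times \chi$ matrix per site and per basis label, the amplitude of a basis configuration is just the trace of the corresponding matrix product. The sketch below assumes the per-site tensors are stored as arrays of shape $(d, \chi, \chi)$.

```python
import numpy as np

def mps_amplitude(tensors, config):
    """<s_1 ... s_N | Psi> = Tr[ A_1^(s_1) A_2^(s_2) ... A_N^(s_N) ].

    tensors: list of arrays; tensors[i][s] is the chi x chi matrix A_i^(s).
    config:  sequence of basis labels (s_1, ..., s_N).
    """
    chi = tensors[0].shape[1]
    product = np.eye(chi, dtype=complex)
    for A, s in zip(tensors, config):
        product = product @ A[s]
    return np.trace(product)

def mps_to_statevector(tensors, d):
    """Expand the full d**N state vector (only feasible for small N)."""
    N = len(tensors)
    return np.array([mps_amplitude(tensors, np.unravel_index(i, [d] * N))
                     for i in range(d ** N)], dtype=complex)
```

With this convention, the translationally symmetric case simply reuses the same array at every site.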
The MPS decomposition is not unique. For introductions see [1] and [2]. In the context of finite automata see [3]. For emphasis placed on the graphical reasoning of tensor networks, see the introduction.[4]
Obtaining MPS
One method to obtain an MPS representation of a quantum state is to use Schmidt decomposition N − 1 times. Alternatively if the quantum circuit which prepares the many body state is known, one could first try to obtain a matrix product operator representation of the circuit. The local tensors in the matrix product operator will be four index tensors. The local MPS tensor is obtained by contracting one physical index of the local MPO tensor with the state which is injected into the quantum circuit at that site.
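A bare-bones sketch of the first route (repeated Schmidt decompositions, implemented as successive SVDs) might look as follows; this is an illustration under the assumption of open boundary conditions, not a reference implementation.

```python
import numpy as np

def statevector_to_mps(psi, d, N, chi_max=None):
    """Split a d**N state vector into MPS tensors by N-1 successive SVDs
    (Schmidt decompositions). Returns tensors of shape (d, chi_left, chi_right)."""
    tensors = []
    rest = np.asarray(psi, dtype=complex).reshape(1, -1)
    chi_left = 1
    for _ in range(N - 1):
        rest = rest.reshape(chi_left * d, -1)
        U, S, Vh = np.linalg.svd(rest, full_matrices=False)
        if chi_max is not None:            # optional truncation of the bond
            U, S, Vh = U[:, :chi_max], S[:chi_max], Vh[:chi_max, :]
        chi_right = S.size
        tensors.append(U.reshape(chi_left, d, chi_right).transpose(1, 0, 2))
        rest = np.diag(S) @ Vh
        chi_left = chi_right
    tensors.append(rest.reshape(chi_left, d, 1).transpose(1, 0, 2))
    return tensors
```

For a product state every cut has Schmidt rank 1, so this procedure returns bond dimension 1 throughout, matching the remark above.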
Examples
Greenberger–Horne–Zeilinger state
Greenberger–Horne–Zeilinger state, which for N particles can be written as a superposition of the all-zeros and the all-ones basis states
$|\mathrm {GHZ} \rangle ={\frac {|0\rangle ^{\otimes N}+|1\rangle ^{\otimes N}}{\sqrt {2}}}$
can be expressed as a Matrix Product State, up to normalization, with
$A^{(0)}={\begin{bmatrix}1&0\\0&0\end{bmatrix}}\quad A^{(1)}={\begin{bmatrix}0&0\\0&1\end{bmatrix}},$
or equivalently, using notation from:[3]
$A={\begin{bmatrix}|0\rangle &0\\0&|1\rangle \end{bmatrix}}.$
This notation uses matrices whose entries are state vectors (instead of complex numbers); when such matrices are multiplied, their entries are combined using the tensor product (instead of the product of two complex numbers). Such a matrix is constructed as
$A\equiv |0\rangle A^{(0)}+|1\rangle A^{(1)}+\ldots +|d-1\rangle A^{(d-1)}.$
Note that tensor product is not commutative.
In this particular example, a product of two A matrices is:
$AA={\begin{bmatrix}|00\rangle &0\\0&|11\rangle \end{bmatrix}}.$
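A quick numerical check of this example (an illustration added here, not part of the article): multiplying the diagonal matrices $A^{(0)}$, $A^{(1)}$ given above and taking the trace reproduces the unnormalized GHZ amplitudes.

```python
import numpy as np

A = [np.diag([1.0, 0.0]),   # A^(0)
     np.diag([0.0, 1.0])]   # A^(1)

N = 4
for index in range(2 ** N):
    config = [(index >> k) & 1 for k in range(N)]
    M = np.eye(2)
    for s in config:
        M = M @ A[s]
    if np.trace(M) != 0:
        print(config, np.trace(M))
# Only [0, 0, 0, 0] and [1, 1, 1, 1] survive, each with amplitude 1,
# i.e. |0000> + |1111> up to normalization.
```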
W state
W state, i.e., the superposition of all the computational basis states of Hamming weight one. Even though the state is permutation-symmetric, its simplest MPS representation is not.[1] For example:
$A_{1}={\begin{bmatrix}|0\rangle &0\\|0\rangle &|1\rangle \end{bmatrix}}\quad A_{2}={\begin{bmatrix}|0\rangle &|1\rangle \\0&|0\rangle \end{bmatrix}}\quad A_{3}={\begin{bmatrix}|1\rangle &0\\0&|0\rangle \end{bmatrix}}.$
AKLT model
The AKLT ground state wavefunction, which is the historical example of the MPS approach,[5] corresponds to the choice[6]
$A^{+}={\sqrt {\frac {2}{3}}}\ \sigma ^{+}={\begin{bmatrix}0&{\sqrt {2/3}}\\0&0\end{bmatrix}}$
$A^{0}={\frac {-1}{\sqrt {3}}}\ \sigma ^{z}={\begin{bmatrix}-1/{\sqrt {3}}&0\\0&1/{\sqrt {3}}\end{bmatrix}}$
$A^{-}=-{\sqrt {\frac {2}{3}}}\ \sigma ^{-}={\begin{bmatrix}0&0\\-{\sqrt {2/3}}&0\end{bmatrix}}$
where the $\sigma {\text{'s}}$ are Pauli matrices, or
$A={\frac {1}{\sqrt {3}}}{\begin{bmatrix}-|0\rangle &{\sqrt {2}}|+\rangle \\-{\sqrt {2}}|-\rangle &|0\rangle \end{bmatrix}}.$
Majumdar–Ghosh model
Majumdar–Ghosh ground state can be written as MPS with
$A={\begin{bmatrix}0&\left|\uparrow \right\rangle &\left|\downarrow \right\rangle \\{\frac {-1}{\sqrt {2}}}\left|\downarrow \right\rangle &0&0\\{\frac {1}{\sqrt {2}}}\left|\uparrow \right\rangle &0&0\end{bmatrix}}.$
See also
• Density matrix renormalization group
• Variational method (quantum mechanics)
• Renormalization
• Markov chain
• Tensor network
References
1. Perez-Garcia, D.; Verstraete, F.; Wolf, M.M. (2008). "Matrix product state representations". Quantum Inf. Comput. 7: 401. arXiv:quant-ph/0608197.
2. Verstraete, F.; Murg, V.; Cirac, J.I. (2008). "Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems". Advances in Physics. 57 (2): 143–224. arXiv:0907.2796. Bibcode:2008AdPhy..57..143V. doi:10.1080/14789940801912366. S2CID 17208624.
3. Crosswhite, Gregory; Bacon, Dave (2008). "Finite automata for caching in matrix product algorithms". Physical Review A. 78 (1): 012356. arXiv:0708.1221. Bibcode:2008PhRvA..78a2356C. doi:10.1103/PhysRevA.78.012356. S2CID 4879564.
4. Biamonte, Jacob; Bergholm, Ville (2017). "Tensor Networks in a Nutshell": 35. arXiv:1708.00006.
5. Affleck, Ian; Kennedy, Tom; Lieb, Elliott H.; Tasaki, Hal (1987). "Rigorous results on valence-bond ground states in antiferromagnets". Physical Review Letters. 59 (7): 799–802. Bibcode:1987PhRvL..59..799A. doi:10.1103/PhysRevLett.59.799. PMID 10035874.
6. Schollwöck, Ulrich (2011). "The density-matrix renormalization group in the age of matrix product states". Annals of Physics. 326 (1): 96–192. arXiv:1008.3477. Bibcode:2011AnPhy.326...96S. doi:10.1016/j.aop.2010.09.012. S2CID 118735367.
External links
• Open-source review article focused on tensor network algorithms, applications, and software
• State of Matrix Product States – Physics Stack Exchange
• A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States
• Hand-waving and Interpretive Dance: An Introductory Course on Tensor Networks
• Tensor Networks in a Nutshell: An Introduction to Tensor Networks
\begin{document}
\title{The consistency strength of projective uniformization, revisited}

Consider the following assumptions, whose conjunction we denote by $(RP)$:
(1) every projective set of reals is Lebesgue measurable and has the property of Baire, and
(2) every projective subset of the plane has a projective uniformization.
Woodin had asked, in \cite{CSPU}, whether $(RP)$ implies Projective Determinacy. This is not the case, by a recent observation of Steel:
\begin{thm} (Woodin, Steel) Suppose $V=K$, where $K$ is Steel's core model. If there are $\kappa_0 < \kappa_1 < ...$ with supremum $\lambda$ such that for all $n < \omega$ $\kappa_n$ is $< \lambda^+$ strong [i.e., for all $x \in H_{\lambda^+}$ there is $\pi \colon V \rightarrow M$ with critical point $\kappa_n$ and $M$ transitive such that $x \in M$] then $(RP)$ holds in a generic extension. \end{thm}
We here show that this is best possible:
\begin{thm} If $ZFC \ + \ (RP)$ holds and Steel's $K$ exists then $J^K_{\omega_1} \models$ there are infinitely many strong cardinals. \end{thm}
{\sc Proof.} Suppose not. Let $n < \omega$ be the number of strongs in $J^K_{\omega_1}$. We work towards a contradiction.
{\it Case 1.} $\omega_1$ is a successor in $K$.
Then Corollary 2.2 of \cite{proj} gives that $J^K_{\omega_1}$ is (boldface) \boldmath $\Delta$\unboldmath$^1_{n+4}$ (in the codes). But then we get a projective sequence of distinct reals of length $\omega_1$, contradicting \cite{CSPU}.
{\it Case 2.} $\omega_1$ is inaccessible in $K$.
Let $\Phi_m(M)$ denote the following statement, for $m \geq n$:
$M$ is a countable $m$-full mouse, $M \models$ there are $\leq m$ many strongs, and for all countable $m$-full $N$, if $M$, $N$ simply coiterate to $M^*$, $N^*$ with iteration maps $i \colon M \rightarrow M^*$ and $j \colon N \rightarrow N^*$ such that $M^*$ is an initial segment of $N^*$ then $i$''$M \subset j$''$N$.
The concept of $m$-fullness was defined in \cite{proj} where we showed that $\Phi_m(J^K_\kappa)$ holds for all $\kappa \leq$ the $(m+1)^{st}$ strong cardinal of $J^K_{\omega_1}$ which is either a double successor or an inaccessible in $K$.
It is also shown in \cite{proj} that if $\omega_1$ is inaccessible in $K$ and there are $\leq m$ strong cardinals in $J^K_{\omega_1}$ then $\Phi_m(M)$ characterizes (in a $\Pi^1_{m+4}$ way) (cofinally many of) the proper initial segments of $J^K_{\omega_1}$. (Cf. \cite{proj} Theorem 2.1. This gives a (lightface) $\Delta^1_{m+5}$ definition of $J^K_{\omega_1}$.)
In particular, for all $m \geq n$ the following holds, abbreviated by $\Psi^m_n$:
For any two $M$, $M'$, if $\Phi_m(M)$ and $\Phi_m(M')$ both hold then $M$ and $M'$ are lined up and if ${\tilde M}$ is the "union" of all $M$'s with $\Phi_m(M)$ then $On \cap {\tilde M} = \omega_1$ and ${\tilde M} \models$ there are exactly $n$ strong cardinals.
Notice that $\Psi^m_n$ is $\Pi^1_{m+5}$.
By \cite{CSPU}, there is a model $P = L_{\omega_1}[X] \models ZFC$ for some $X \subset \omega_1$, such that
(a) $P[g]$ is $\Sigma^1_{n+1000}$ correct in $P[g][h]$ whenever $g$ is set-generic over $P$ and $h$ is set-generic over $P[g]$, and
(b) $P$ is $\Sigma^1_{n+1000}$ correct in $V$.
Now $P \models \Psi^{n+94}_n$ by (b) and the fact that $\Psi^{n+94}_n$ holds in $V$. Moreover, $P$ is closed under the dagger operator by (a), so Steel's $K$ exists in $P$, denoted by $K^P$, and $K^P \models$ there are $> n$ strong cardinals, by \cite{CSPA} and (a). We may pick $g$ $Col(\mu,\omega)$-generic over $P$ for some appropriate $\mu$ such that in $P[g]$, $J^K_{\omega_1} \models$ there are $> n$ strongs. By (a), $\Psi^{n+94}_n$ still holds in $P[g]$.
From this we now derive a contradiction, by working in $P[g]$ for the rest of this proof. So let us assume that (in $V$) $\Psi^{n+94}_n$ holds, Steel's $K$ exists, and $J^K_{\omega_1} \models$ there are $> n$ strongs.
By $\Psi^{n+96}_n$, there is a J-model ${\tilde M}$ of height $\omega_1$ such that ${\tilde M} \models ZF^{-} \ + $ there are exactly $n$ strong cardinals, and $\Phi_{n+96}(M)$ holds for every proper initial segment $M$ of ${\tilde M}$. By \cite{proj}, there is a universal weasel $W$ end-extending ${\tilde M}$ such that for all countable (in $V$) $\kappa$ which are cardinals in $W$ and such that $J^{\tilde M}_\kappa \models$ there are $< n+94$ strong cardinals, $W$ has the definability property at $\kappa$. [This follows from the fact that cofinally many proper initial segments of ${\tilde M}$ are $n+94$ full].
Because $W$ is universal, there is some $\sigma \colon K \rightarrow W$ given by the coiteration of $K$ with $W$. Let $\kappa$ denote the $(n+1)^{st}$ strong cardinal of $J^K_{\omega_1}$. By a remark above, $J^K_\kappa$ is an initial segment of $W$. But this implies that the critical point of $\sigma$ is $> \kappa$. [This follows from the fact that if $\mu$ is strong in $J^K_\kappa$, or $\mu = \kappa$, then $K$ as well as $W$ has the definability property at $\mu$.] But now, using $\sigma$, ${\tilde M} = J^W_{\omega_1} \models$ there are at least $n+1$ many strong cardinals. Contradiction!
$\square$
\end{document}
Colour removal using nanoparticles
Ravindra D. Kale1 &
Prerana B. Kane1
Nickel nanoparticles were synthesized and used to decolourize dye effluent. C. I. Reactive Blue 21 was taken as a reference dye, and polyvinyl pyrrolidone (PVP) was used as a stabilizer to prevent agglomeration of nanoparticles. Characterization of nanoparticles was done by a laser light scattering particle size analyzer, X-ray diffraction (XRD) analysis and transmission electron microscopy (TEM). Various parameters like pH, dye concentration, nanoparticle concentration, alkali addition, salt addition and duration studied for dye decolourization. To confirm the attachment of degraded products of dye on the nanoparticles, FT-IR analysis was done. About 98 % colour removal with simultaneous reduction in chemical oxygen demand (COD) was achieved.
The effluent discharged from a textile dyeing mill is a highly concentrated coloured wastewater and consists of a mixture of various dyes. Most dyestuffs are complex aromatic structures which are difficult to dispose of. Moreover, the colour in water resources poses an aesthetic problem. Dyes also cause serious ecological problems, such as significantly affecting the photosynthetic activity of aquatic plants due to reduced light penetration, and may be toxic to some aquatic organisms (Zollinger, 1987).
Many methods used for dye removal include chemical coagulation, flocculation, chemical oxidation, photochemical degradation, membrane filtration and aerobic and anaerobic biological degradation. These methods have one or other limitations, and none of them is successful in complete removal of dye from wastewater (Dizge et al., 2008).
Metal nanoparticles are also employed for decolourization of coloured effluent. The size and shape of the nanoparticles play an important role in the decolourization and can be controlled by various physical and chemical routes. These particles show a tendency to aggregate, which lowers their activity. Hence, to prevent the particles from aggregating irreversibly, researchers have coated them with low molecular weight polymeric compounds (Hergt et al., 2006; Naoki 2008; Sakulchaicharoen et al., 2010; Shen et al., 1999). However, the need for non-agglomerated nanoparticles with a well-controlled mean size and a narrow size distribution has not yet been fully met.
Tagar et al., (2012) have carried out reduction of dyes by using gelatine stabilized gold nanoparticles (GNPs). Nanoscale nickel particles (NiNPs) were used for decolourization of Congo red dye by Kalwar et al. (2013). Nickel oxide has also been used for oxidation of a wide range of organic compounds (Lai et al., 2007; Wang et al., 2008; Nateghi et al., 2012, Kalwar et al., 2014). In a study conducted by Saadatjou et al. (2008), 80 % decolourization of dye basic red 46 was achieved using hardened pieces of Portland white cement as adsorbent. In a research conducted by Asgari and Ghanizade (2009), methylene blue dye could be completely decolourized after 2 h treatment time using 1 g/L bone ash. Song et al. (2009) carried out decolourization of reactive dye using nickel oxide nano-sheets at acidic pH after 6 h.
Among all the magnetic metallic nanomaterials, nickel nanostructure materials are difficult to synthesize because they are easily oxidizable. Ball milling, electrodeposition, thermal plasma, polyol process, chemical vapour deposition (CVD), decomposition of organic-metallic precursors, chemical reduction in the liquid phase and many other methods (Degen and Macek, 1999; de Caro and Bradley, 1997; Steigerwald et al. 1988; Yurij et al., 1999; Zhang, et al., 2006) have been applied to obtain pure metallic nickel nanoparticles. Chemical reduction of cations from the solution of metal salts using strong reducing agents is the best way to prepare nickel nanostructure materials.
Phthalocyanine reactive dyes are metal complexes, mostly copper based, which give turquoise shades. They are potentially mutagenic and raise particular toxicity concerns because of their Cu content. These dyes are not easily dischargeable and are also resistant to oxidative degradation, which makes their decolourization a difficult task. They are highly water-soluble, show resistance to adsorption and are non-biodegradable under aerobic conditions, resulting in a permanently coloured effluent (Mathews et al., 2009).
Souza et al. (2007) have achieved 59 % decolourization of Reactive Blue 21 (RB21) dye using horseradish peroxidise as oxidizer. Degradation of dye RB21 using soybean peroxidase as biocatalyst was evaluated by Marchis et al. (2011), and they achieved approximately 95–96 % decolourization at pH 3.0 with 4 h treatment time.
In this current work, the problem of agglomeration of nanoparticles has been successfully approached. The colour removal of the dye solution was carried out by using chemically synthesized disperse nickel nanoparticles in a non-aqueous solution using polyvinyl pyrrolidone (PVP) as a stabilizer. The various parameters such as time, nanoparticle concentration, initial RB21 dye concentration and pH were studied. Addition to colour removal, the effect of presence of salt and alkali in the dye solution and chemical oxygen demand (COD) reduction was also investigated.
C. I. Reactive Blue 21 was purchased from Colourtex Industries Limited, Mumbai. Nickel chloride (NiCl2.6H2O, ≥ 98 %), hydrazine hydrate (N2H4.H2O molecular weight 50.06), acetone (molecular weight 58.08, 99 %), polyvinyl pyrrolidone (PVP K-30), sodium hydroxide (NaOH, molecular weight 40), acetic acid, ethanol (99.7 %), chloroform, methanol, soda ash and Glauber's salt were supplied by S D Fine-Chem Limited (SDFCL, Mumbai).
Synthesis of Ni-PVP nanoparticles
Synthesis of the Ni-PVP nanoparticles was carried out as per the procedure discussed in a previously published article by Kale et al. (2014). A 3.3-mM solution of NiCl2·6H2O was prepared in 200-mL ethanol and was added to a 500-mL round bottom flask at room temperature. 0.125 g of PVP was then added to this solution, which was stirred until complete dissolution of the PVP. Hydrazine hydrate was further added, followed by 1 M NaOH, under constant magnetic stirring. The solution was heated at 60 °C with stirring until it turned black. When hydrazine hydrate reacts with nickel ions, it releases nitrogen gas; hence, no additional nitrogen atmosphere was needed during synthesis. The scheme of the nanoparticle synthesis is shown in Fig. 1. During formation of a nickel nucleus, the hydrophobic part of the PVP molecules attaches to the nucleus, reducing its surface energy and preventing agglomeration (Gao et al., 2001). The solution was then cooled to ambient temperature. After the solution was cooled, acetone was added to precipitate out the nanoparticles. The solution was vacuum filtered, and the particles were washed repeatedly with chloroform-methanol (1:1) solution. Synthesis of nickel nanoparticles without addition of PVP was done in the same way. The reaction can be described by the following equation (Kalwar et al., 2014):
$$ 2\mathrm{Ni}^{2+} + \mathrm{N}_2\mathrm{H}_4 + 4\mathrm{OH}^- \to 2\mathrm{Ni} + \mathrm{N}_{2(\mathrm{g})} + 4\mathrm{H}_2\mathrm{O} $$
Characterization of nanoparticles
The size and morphology of the nanoparticles was estimated using transmission electron microscope (TEM) (Model CM 200, Philips) operated at an accelerating voltage of 200 kV. The mean diameter of the prepared Ni-PVP nanoparticles was determined using a laser light scattering particle size analyzer (SALD 7500 nano, Shimadzu, Japan). Powder X-ray diffraction (XRD) was recorded on Simadzu XRD-6100.
Batch decolourization studies
RB21 dye was selected because it has a complex structure with low % exhaustion, i.e. more unexhausted dye remains in the drained effluent, which is not easily dischargeable. The structure of the dye is shown in Fig. 2. Dye stock solutions of different concentrations were prepared in deionized water, and decolourization experiments were performed in an open batch system at room temperature, 28 ± 2 °C. Dye solutions were stirred using a shaker machine (Rossari Labtech, Mumbai) at a speed of 70 rpm to keep the nanoparticle powder suspended. Samples were withdrawn at fixed intervals of time and were subsequently centrifuged for 5 min at 3000 rpm. The clear solution was then pipetted out, and the rate of decolourization was measured from the absorbance at the maximum wavelength (622.4 nm) using a UV-VIS spectrophotometer (UV-VIS 8500, TECHCOMP Limited, Hong Kong). The percentage of dye decolourization was determined from a calibration curve of the dye solution, prepared using different dye concentrations and measuring the absorbance at 622.4 nm. % Decolourization was calculated as per the following equation:
Chemical structure of C. I. Reactive Blue 21
$$ \%\ \text{Decolourization} = \frac{\text{initial concentration} - \text{final concentration}}{\text{initial concentration}} \times 100 $$
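As a purely illustrative sketch (the calibration numbers below are invented for the example and are not the authors' data), the concentrations entering this formula can be read off a linear calibration curve fitted to absorbance readings at 622.4 nm:

```python
import numpy as np

# Hypothetical calibration data: dye concentration (mg/L) vs. absorbance at 622.4 nm.
cal_conc = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
cal_abs = np.array([0.00, 0.31, 0.62, 0.93, 1.24])
slope, intercept = np.polyfit(cal_abs, cal_conc, 1)  # concentration as a linear function of absorbance

def percent_decolourization(abs_initial, abs_final):
    c_initial = slope * abs_initial + intercept
    c_final = slope * abs_final + intercept
    return 100.0 * (c_initial - c_final) / c_initial

print(percent_decolourization(1.24, 0.02))  # roughly 98 % for these made-up readings
```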
The presence of degradation products on the nanoparticles was identified using FT-IR 8400S (CE), Shimadzu, Japan.
COD was measured using USEPA-approved dichromate COD method using Hach DRB digester and analysis with DR 900 colorimeter, USA.
Nanoparticle characterization
The structure and morphology of the nickel nanoparticles can be seen in Fig. 3. When the dispersing agent was not added, agglomerated nanoparticles of 29–243 nm were obtained in self-assembled flower-like structures (Fig. 3a). When the dispersing agent, i.e. PVP, was added during synthesis, the Ni nanoparticles were well dispersed within the PVP matrix, and the particle size reduced to 20–44 nm (Fig. 3b).
TEM image of Ni-nanoparticles a without PVP and b with PVP
The XRD of the synthesized Ni particles is shown in Fig. 4 having peaks at Theta-2Theta value of 15.08°, 22.12°, 31.22°, 33.35° and 51.06° confirming the presence of Ni (0) nanoparticle (Sulekh et al. 2014).
XRD patterns of PVP stabilized Ni-nanoparticles
The particle size distribution of Ni-PVP nanoparticles as determined on laser light scattering particle size analyzer shows that the particle size is 112 nm (Fig. 5). We observed narrow particle size range and stable dispersion of nano-Ni particles when PVP is used as stabilizer; these particles are further taken for the dye decolourization study.
Particle size analysis for Ni-PVP nanoparticles
Dye decolourization
The effect of time on % decolourization of dye (100 mg/L) at different nanoparticle concentrations is shown in Fig. 6. The plot shows that as the concentration of nanoparticles was increased from 0.5 to 1.5 g/L, the % decolourization also increased, reaching 98.97 %. For the same concentration of nanoparticles, % decolourization increased with treatment time. Thus, maximum decolourization was seen at 120 min compared to shorter treatment durations.
Effect of time on % decolourization of dye (100 mgpl) with nanoparticle concentration
It is obvious that the greater the number of nanoparticles, the greater the availability of catalyst for attacking the chromophoric system of the dye. Similarly, a longer treatment duration gives more time for the decolourization reaction to proceed. In Fig. 7, the progress of decolourization can actually be seen.
Decolourization progress at different time intervals
Effect of initial dye concentration
It can be seen from Fig. 8 that as the initial dye concentration was increased from 100 to 200 to 300 mg/L, the decolourization efficiency decreased for the same nanoparticle concentration. However, this difference is insignificant, as the percentage decolourization was always between 96 and 99 % for all the dye concentrations taken.
Effect of initial dye concentration on % decolourization using nanoparticles (1.5 gpl) for 120 min
The rest of the dye decolourization study was performed using 1.5 g/L nanoparticle by taking 100 mg/L of the dye for 120 min.
Figure 9 shows the absorbance spectrum of the dye solution before and after decolourization. It can be clearly seen that the absorbance of the dye solution has drastically reduced after decolourization with the characteristic peaks at maximum wavelength (622.4 nm) disappearing completely (Nath et al. 2007).
UV-visible spectra recorded before and after decolourization of RB21
Effect of pH
The pH of the aqueous solution is a complex parameter since it is related to the ionization state of the nanoparticle surface and that of the reactants and products such as acids and amines. In this work, we studied the effect of pH on dye degradation in the range of 2–7 (Bokare et al., 2008). Figure 10 shows decolouration of the dye at different pH values. It can be seen that maximum % decolourization was observed at pH 7 (99.01 % in 120 min).
Effect of pH variation on % decolouration with nanoparticles (1.5 gpl) and dye (100 mgpl) for 120 min
Effect of alkali
Reactive dyeing requires salt and alkali for good exhaustion of dye onto substrate. The effect of decolourization using nanoparticle was thus studied in the presence of salt and alkali.
Twenty grams per litre of soda ash was added in the dye solution which increased the pH of solution to 11.3. The pH was brought down to the value of 7 by addition of acetic acid, and then % decolourization study was done.
From Fig. 11, it can be seen that when soda ash was added to the dye solution, % decolourization was reduced from 98.09 to 68.09 %. Under basic conditions, the decolourization rate of the nanoparticles was found to be decelerated because oxide precipitated onto the surface, occupying the active sites and soon terminating the reaction (Mu et al. 2004).
Effect of soda ash (20 gpl) addition on % decolouration with nanoparticles (1.5 gpl) and dye (100 mgpl)
Effect of salt and alkali
Figure 12 shows the combined effect of soda ash (20 g/L) and Glauber's salt (40 g/L) addition on % decolourization. It was found that % decolourization efficiency reduced from 98.1 to 65.09 % when both soda ash and Glauber's salt were present in the dye solution. This could be because of oxide formation on the surface of these nanoparticles reducing its reduction capacity.
Effect of soda ash (SA) 20 gpl and Glauber's salt (GS) 40 gpl addition on % decolouration with nanoparticles (1.5 gpl) and dye (100 mgpl)
FT-IR measurements
After 120 min of the decolourization reaction, the particulate materials, which were nothing but the nanoparticles, were collected by centrifugation and removed from the supernatant solution. Vacuum drying of the collected particles was done at 60 °C for 6 h, and they were further characterized by FT-IR spectroscopy. By the FT-IR technique, we can identify the groups that were adsorbed on the Ni-PVP nanoparticle surface after degradation of the dye. The infrared spectra of the nanoparticles before and after the decolourization process were recorded in the range of 4000–400 cm−1 on a FT-IR-8400S, Shimadzu (Fig. 13). Compared with the original Ni-PVP nanoparticles, some new bands were found on the Ni-PVP nanoparticles after decolourization, including vibrations at 3282 cm−1 (N–H bending), 3228 cm−1 (–CH2– bending), 1604 cm−1 (–C=O– bending), 1164 cm−1 (–SO3 bending) and 979 cm−1 (–C–H– bending). Thus, some amount of degraded product got attached to the surface of the nanoparticles (Dizge et al., 2008).
FT-IR spectra of nanoparticle and dye-loaded nanoparticle sample
COD reduction
The dye reduction results reported in this study were based on spectrophotometric analysis of the dye solutions. However, spectrophotometric evaluation cannot be directly related to the reduction of COD, and we wanted to know the effect of the presence of nanoparticles on the COD value of the dye effluent (Golder et al., 2005). The COD value for the treated effluent was 61 ppm, while that for the untreated effluent was 125 ppm. It can be inferred that the Ni-PVP nanoparticles not only eliminate the colour from the effluent but also reduce the COD, indicating degradation of the dye.
PVP-stabilized nickel nanoparticles were successfully synthesized and used for decolourization of a recalcitrant dye effluent. These nanoparticles gave a decolourization efficiency of 98.97 % under optimized conditions. Decolourization also led to a reduction in the COD of the effluent. Both of these were achieved with minimal generation of sludge, which is a major problem with conventional decolourization methods such as coagulation and flocculation.
Bokare, AD, Chikate, RC, Rode, CV, Paknikar, KM, (2008). Applied Catalysis B: Environmental, 79 , 270–278
de Caro, D, & Bradley, JS. (1997). Langmuir, 13, 3067–3069
Degen, A, & Macek, J. (1999). Nanostructured Materials, 12, 225–228
Dizge, N, Aydiner, C, Demirbas, E, Kobya, M, Kara, S. (2008). Journal of Hazardous Materials, 150, 737–746
Gao, J, Guan, F, Zhao, Y, Yang, W, Ma, Y, Lu, X, Hou, J, Kang J. (2001). Materials Chemical and Physics, 71, 215
Ghanizadeh, GH, & Asgari, G. (2009). Iranian Journal of Health and Environment, 2, 104–13
Golder, AK, Hridaya, N, Samanta, AN, Ray, S. (2005). Journal of Hazardous Materials B, 127, 134–140
Hergt, R, Dutz, S, Mülle, R, Zeisberger, M. (2006), Journal of Physics: Condensed Matter, 18, S2919
Kale, RD, Kane, P, Phulaware N. (2014) International Journal of Engineering Science and Innovative Technology, 3(2), 109–117
Kalwar, NH, Sirajuddin, Hallam KR. et al. (2013). Applied Catalysis A, 453, 54–59
Kalwar, NH, Sirajuddin, Soomro, RA, Sherazi, STH, Hallam, KR, Khaskheli, AR. (2014). Synthesis and characterization of highly efficient nickel nanocatalysts and their use indegradation of organic dyes. International Journal of Metals, 20, 10-14.
Lai, TL, Wang, WF, Shu, YY, Liu, YT, Wang, CB. (2007). Journal of Molecular Catalysis A: Chemical, 273, 303–9
Marchis, T, Avetta, P, Bianco-Prevot, A, Fabbri, D, Viscardi, G, Laurenti, E. (2011). Journal of Inorganic Biochemistry, 105 , 321–327
Mathews, RD, Bottomley, LASG, Pavlostathis, P. (2009). Desalination, 248, 816–825
Mu, Y, Yu, HQ, Zhang, SJ, Zheng, JC. (2004). Journal of Chemical Technology and Biotechnology, 79, 1429–1431
Naoki, T. (2008). Macromol. Symp., 270, 27–39
Nateghi R, Bonyadinejad GR, Amin MM, Mohammadi H. (2012). Decolorization of synthetic wastewaters by nickel oxide nanoparticle. International Journal of Environmental Health Engineering, 1(1), DOI:10.4103/2277-9183.98384.
Nath, S, Praharaj, S, Panigrahi, S, Basu, S, Pal, T. (2007). Current Science, 92(6), 786–790
Saadatjou, N, Rasoulifard, MH, Heidari, A. (2008).Journal of Color Science and Technology, 2, 221–6
Sakulchaicharoen, NA, O'Carroll, DMA, Herrera, JEB. (2010), Journal of Contaminant Hydrology, 118(3–4), 117–127
Shen, L, Laibinis, PE, Hatton, TA. (1999). Langmuir, 15, 447
Song, Z, Chen, L, Huand, J, Richards, R. (2009). Nanotechnology, 9, 2–10
Souza, SMAGU, Forgiarini, E, Souza, AAU. (2007). Journal of Hazardous Materials, 147, 1073–1078
Steigerwald, ML, Alivisatos, AP, Gibson, JM, Harris, TD, Kortan, R, Muller, A.J, Thayer, AM, Duncan, TM, Douglass, DC, Brus, LE. (1988) Journal of the American Chemical Society, 110, 3046–3050
Sulekh, C, Kumar, A, Tomar, PK. (2014). Journal of Saudi Chemical Society, 18, 437–442
Tagar, ZA, Sirajuddin, Memon N. et al. (2012). Pakistan Journal of Analytical and Environmental Chemistry, 13(1), 70
Wang, HC, Chang, SH, Hung, PC, Hwang, JF, Chang, MB. (2008). Chemosphere, 71, 388–97
Yurij, K, Asuncion, F, Cristina, RT, Juan, C, Pilar, P, Ruslan, P, Aharon, G. (1999). Chemistry of Materials, 11, 1331–1335
Zhang, HT, Wu, G, Chen, XH, Qiu, XG. (2006) Materials Research Bulletin, 41, 495–501
Zollinger, H. (1987). Colour chemistry-synthesis, properties and application of organic dyes and pigments. NewYork: VCH.
The authors would like to acknowledge DST-FIST programme of Indian government for providing the testing facilities and University Grants Commission (UGC) for fellowship in successful completion of this research work.
Department of Fibers and Textile Processing Technology, Institute of Chemical Technology, Mumbai, 400 019, India
Ravindra D. Kale & Prerana B. Kane
Ravindra D. Kale
Prerana B. Kane
Correspondence to Ravindra D. Kale.
RDK guided in carrying out the practical work (synthesis, optimization, characterization, testing etc.) and editing of the manuscript. PK carried out the complete research work including synthesis, optimization, characterization, application, testing etc. and drafting of the research paper. Both authors read and approved the final manuscript.
Kale, R.D., Kane, P. Colour removal using nanoparticles. Text Cloth Sustain 2, 4 (2017). https://doi.org/10.1186/s40689-016-0015-4
% Decolourization
Nickel nanoparticle
Polyvinyl pyrrolidone
Why larger input sizes imply harder instances?
Below, assume we're working with an infinite-tape Turing machine.
When explaining the notion of time complexity to someone, and why it is measured relative to the input size of an instance, I stumbled across the following claim:
[..] For example, it's natural that you'd need more steps to multiply two integers with 100000 bits, than, say multiplying two integers with 3 bits.
The claim is convincing, but somehow hand-waving. In all algorithms I came across, the larger the input size, the more steps you need. In more precise words, the time complexity is a monotonically increasing function of the input size.
Is it the case that time complexity is always an increasing function in the input size? If so, why is it the case? Is there a proof for that beyond hand-waving?
complexity-theory time-complexity intuition
user20
$\begingroup$ "Directly proportional" has a specific mathematical meaning that means, essentially linear time. In other words, if your input has size $n$, if the time is directly proportional the algorithm runs in time $cn$. I'd imagine that's not what you mean, as many algorithms do not run in linear time, i.e. sorting. Can you explain further? $\endgroup$ – SamM Aug 13 '12 at 23:12
$\begingroup$ So you're asking about an algorithm that runs in $o(1)$ time? $O(1)$ means the algorithm runs in the same time regardless of input size, $o(1)$ means it runs faster as the input gets larger. I can't think of one that runs in that time off the top of my head, but the notation is fairly common because an algorithm will often run in something like $O(n^2) + o(1)$ time--in other words, it takes $O(n^2)$ time, and there are some other terms that grow smaller as the input gets larger. $\endgroup$ – SamM Aug 13 '12 at 23:28
$\begingroup$ Good question. What about the counter-example of computing the prime factors of $c / n$ for some large $c$ (this is only an increasing function for $n \geq c$)? @Sam Note that an increasing function says that the time must be decreasing at some point along the real line (i.e. $f(b) < f(a), a < b$). $\endgroup$ – Casey Kuball Aug 13 '12 at 23:32
$\begingroup$ @Darthfett I'm afraid I don't follow. Not all increasing functions are decreasing at some point along the real line. $\endgroup$ – SamM Aug 13 '12 at 23:42
$\begingroup$ @Jennifer Yes, I understand, that makes sense. I'd recommend using the term $o(1)$ as it has the meaning you're looking for. And I'd like to reemphasize that direct proportionality implies linearity; I see what you're getting at but it may be confusing to those who are reading the question for the first time. $\endgroup$ – SamM Aug 13 '12 at 23:43
Is it the case that time complexity is always an increasing function in the input size? If so, why is it the case?
No. Consider a Turing machine that halts after $n$ steps when the input size $n$ is even, and halts after $n^2$ steps when $n$ is odd.
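To make the non-monotonicity concrete, here is a tiny sketch of that machine's running-time function (the machine itself is hypothetical, so this is purely illustrative):

```python
def steps(n):
    # n steps when the input size is even, n^2 steps when it is odd
    return n if n % 2 == 0 else n * n

print([steps(n) for n in range(1, 9)])
# [1, 2, 9, 4, 25, 6, 49, 8] -- not a monotonically increasing function of n
```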
If you mean the complexity of a problem, the answer is still no. The complexity of primality testing is much smaller for even numbers than for odd numbers.
JeffE
Let $n$ denote the input size. To read the entire input, a Turing machine already needs $n$ steps. So if you assume that an algorithm has to read its entire input (or $n/c$ of it for some constant $c$), you will always end up with at least linear run time.
The problem with defining algorithms with a "monotonically decreasing run time function" is that you have to define the run time for $n = 1$ somehow. You have to set it to some finite value. But there are infinitely many possible values for $n > 1$, so you end up with a function which is constant for infinitely many values.
Probably sublinear algorithms, which do not read the entire input, are of interest for you. See for example http://www.dcs.warwick.ac.uk/~czumaj/PUBLICATIONS/DRAFTS/Sublinear-time-Survey-BEATCS.pdf.
Christopher
There exist sublinear algorithms. For example, see people.csail.mit.edu/ronitt/sublinear.html. It's a reasonably new field but it's very interesting. There are other counterexamples to this. Finding an element given a sorted list takes $O(\log n)$ time in the RAM model. I agree with the idea behind your post. It doesn't make sense to have an algorithm take less time as the input gets larger because it doesn't have time to read all of the input (how does it know to take less time?). But I don't know how to prove that they don't exist, and that a trick couldn't make it $o(1)$. – SamM Aug 14 '12 at 2:00
@Sam: Sorry, I did not see your comment before my edit (adding sublinear algorithms). – Christopher Aug 14 '12 at 2:06
Quite the opposite; I didn't see your edit before adding my comment. I would delete it but the second half still applies and an additional link can't hurt the OP. – SamM Aug 14 '12 at 2:15
A counterexample: a constant function like $f(x)=0$. What you describe works for functions that need to read their input. – Kaveh Aug 14 '12 at 2:31
The relation $(\mathbb{N},\leq)$ is well-founded, i.e. there are no infinite descending sequences in the natural numbers. Since (worst-case) runtime functions map to the naturals, all runtime functions have to be in $\Omega(1)$, that is, all runtime functions are (in the limit) non-decreasing.
That said, average runtimes can contain oscillating components, for example Mergesort.
Raphael♦
I don't see how this answer is related to the question. – A.Schulz Aug 14 '12 at 7:37
@A.Schulz It gives a proof for the main question "Is it the case that time complexity is always an increasing function in the input size?", reading "increasing" as "non-decreasing", i.e. not necessarily strictly increasing. – Raphael♦ Aug 14 '12 at 12:34
Well, "not increasing" doesn't necessarily have to mean "decreasing". Or to put it the other way around, non-decreasing $\neq$ increasing. – A.Schulz Aug 14 '12 at 15:51
@A.Schulz: Still, my interpretation seems to be what Jennifer is interested in. – Raphael♦ Aug 14 '12 at 20:32
Recent questions tagged arithmetic-series
ISI2014-DCG-23
The sum of the series $3+11+\dots+(8n-5)$ is (A) $4n^2-n$ (B) $8n^2+3n$ (C) $4n^2+4n-5$ (D) $4n^2+2$
If $l=1+a+a^2+ \dots$, $m=1+b+b^2+ \dots$, and $n=1+c+c^2+ \dots$, where $|a|<1$, $|b|<1$, $|c|<1$ and $a,b,c$ are in arithmetic progression, then $l, m, n$ are in (A) arithmetic progression (B) geometric progression (C) harmonic progression (D) none of these
If the sum of the first $n$ terms of an arithmetic progression is $cn^2$, then the sum of squares of these $n$ terms is (A) $\frac{n(4n^2-1)c^2}{6}$ (B) $\frac{n(4n^2+1)c^2}{3}$ (C) $\frac{n(4n^2-1)c^2}{3}$ (D) $\frac{n(4n^2+1)c^2}{6}$
ISI2015-DCG-1
The sequence $\dfrac{1}{\log_3 2}, \: \dfrac{1}{\log_6 2}, \: \dfrac{1}{\log_{12} 2}, \: \dfrac{1}{\log_{24} 2}, \dots$ is in (A) Arithmetic progression (AP) (B) Geometric progression (GP) (C) Harmonic progression (HP) (D) None of these
If $x,y,z$ are in A.P. and $a>1$, then $a^x, a^y, a^z$ are in (A) A.P. (B) G.P. (C) H.P. (D) none of these
If $a,b,c$ are in A.P., then the straight line $ax+by+c=0$ will always pass through the point whose coordinates are (A) $(1,-2)$ (B) $(1,2)$ (C) $(-1,2)$ (D) $(-1,-2)$
If the coefficients of the $p^{th}, (p+1)^{th}$ and $(p+2)^{th}$ terms in the expansion of $(1+x)^n$ are in Arithmetic Progression (A.P.), then which one of the following is true? (A) $n^2+4(4p+1)+4p^2-2=0$ (B) $n^2+4(4p+1)+4p^2+2=0$ (C) $(n-2p)^2=n+2$ (D) $(n+2p)^2=n+2$
GATE2011 AG: GA-6
The sum of $n$ terms of the series $4+44+444+ \dots$ is (A) $\frac{4}{81}\left[10^{n+1}-9n-1\right]$ (B) $\frac{4}{81}\left[10^{n-1}-9n-1\right]$ (C) $\frac{4}{81}\left[10^{n+1}-9n-10\right]$ (D) $\frac{4}{81}\left[10^{n}-9n-10\right]$
GATE2019 EE: GA-6
How many integers are there between $100$ and $1000$ all of whose digits are even? (A) $60$ (B) $80$ (C) $100$ (D) $90$
GATE2015-2-GA-6
If the list of letters $P$, $R$, $S$, $T$, $U$ is an arithmetic sequence, which of the following are also in arithmetic sequence? I. $2P, 2R, 2S, 2T, 2U$ II. $P-3, R-3, S-3, T-3, U-3$ III. $P^2, R^2, S^2, T^2, U^2$ (A) I only (B) I and II (C) II and III (D) I and III
What will be the maximum sum of $44, 42, 40, \dots$? (A) $502$ (B) $504$ (C) $506$ (D) $500$
Suppose that $f(x)$ is a function such that
\[f(xy) + x = xf(y) + f(x)\] for all real numbers $x$ and $y.$ If $f(-1) = 5$ then compute $f(-1001).$
Setting $y = 0$ in the given functional equation, we get
\[f(0) + x = xf(0) + f(x),\] so $f(x) = (1 - f(0))x + f(0).$ This tells us that $f(x)$ is a linear function of the form $f(x) = mx + b.$ Since $f(-1) = 5,$ $5 = -m + b,$ so $b = m + 5,$ and
\[f(x) = mx + m + 5.\] Substituting this into the given functional equation, we get
\[mxy + m + 5 + x = x(my + m + 5) + mx + m + 5.\] Expanding the right-hand side gives $mxy + 2mx + 5x + m + 5$; cancelling $mxy + m + 5$ from both sides leaves $x = 2mx + 5x,$ i.e. $2mx = -4x.$ For this to hold for all $x,$ we must have $m = -2.$
Then $f(x) = -2x + 3.$ In particular, $f(-1001) = \boxed{2005}.$
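As a quick check, we can substitute $f(x) = -2x + 3$ back into the original equation:
\[f(xy) + x = -2xy + 3 + x \quad \text{and} \quad xf(y) + f(x) = x(-2y + 3) + (-2x + 3) = -2xy + x + 3,\] so both sides agree for all real $x$ and $y.$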
# 1. Setting Up the Environment
Before we dive into Python for finance, we need to set up our environment. This involves installing Python and the required packages, choosing between an interactive shell and script mode, and setting up an Integrated Development Environment (IDE) such as PyCharm or VSCode.
# 1.1. Installing Python and Required Packages
To get started, we need to install Python and the necessary packages for our finance applications. Python is a cross-platform language, meaning it can be used on Windows, Linux, and macOS. It can be used to build desktop and web applications, as well as run on smaller devices like the Raspberry Pi.
To install Python, you can visit the official Python website (python.org) and download the latest version for your operating system. Follow the installation instructions provided.
Once Python is installed, we need to install the required packages. These packages include popular libraries such as NumPy, Pandas, and Matplotlib, which are essential for financial data analysis and visualization.
To install packages, we can use the pip package manager, which is included with Python. Open a command prompt or terminal and run the following command:
```
pip install numpy pandas matplotlib
```
This command will install the NumPy, Pandas, and Matplotlib packages. On Linux or macOS you may need administrator privileges (`sudo`) to install packages system-wide, or you can add the `--user` flag to install them just for your own user account.
# 1.2. Interactive Shell vs. Script Mode
Python can be used in two main modes: interactive shell and script mode.
The interactive shell allows you to enter Python code line by line and see the results immediately. It's a great way to experiment with code and test small snippets. To start the interactive shell, open a command prompt or terminal and type `python` or `python3` (depending on your system). You should see a prompt that looks like `>>>`. You can then enter Python code and press Enter to see the output.
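For example, a short interactive session might look like this (the lines starting with `>>>` are what you type; the other lines are Python's responses):
```
>>> 2 + 3
5
>>> name = "Alice"
>>> print("Hello, " + name)
Hello, Alice
```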
Script mode, on the other hand, allows you to write Python code in a file and run the entire file at once. This is useful for larger programs or scripts that you want to save and reuse. To run a Python script, save your code in a file with a `.py` extension (e.g., `my_script.py`), and then run the file using the `python` or `python3` command followed by the file name. For example:
```
python my_script.py
```
In this textbook, we will primarily use script mode to write and run our Python code. However, feel free to use the interactive shell for quick experimentation.
# 1.3. Setting Up an IDE (e.g., PyCharm, VSCode)
While Python can be written and run using a simple text editor, using an Integrated Development Environment (IDE) can greatly enhance your productivity. IDEs provide features like code completion, debugging, and project management, making it easier to write and maintain your code.
There are several popular IDEs for Python, including PyCharm, Visual Studio Code (VSCode), and Jupyter Notebook. In this textbook, we will focus on PyCharm and VSCode.
PyCharm is a powerful IDE developed by JetBrains. It offers a wide range of features for Python development, including code analysis, refactoring tools, and integration with version control systems. PyCharm is available in both a free Community Edition and a paid Professional Edition.
VSCode, on the other hand, is a lightweight and highly customizable IDE developed by Microsoft. It has a large and active community, and supports a wide range of programming languages including Python. VSCode is free and open-source.
To set up PyCharm or VSCode, you can visit their respective websites (jetbrains.com/pycharm, code.visualstudio.com) and download the installer for your operating system. Follow the installation instructions provided.
Once you have installed your IDE of choice, you can create a new Python project or open an existing one. You can then start writing and running your Python code within the IDE.
# 2. Basic Python Syntax
**Indentation**
Unlike many other programming languages, Python uses indentation to define code blocks instead of using braces or keywords like `begin` and `end`. This means that the indentation level of your code is important and affects how it is interpreted by Python.
For example, in a `for` loop, the indented code block is executed for each iteration of the loop:
```python
for i in range(5):
print(i)
```
The `print(i)` statement is indented with four spaces, which tells Python that it is part of the loop. If the statement were not indented, it would not be part of the loop: it would run only once, after the loop finishes (and a `for` statement with no indented body at all would raise an `IndentationError`).
It's important to be consistent with your indentation style. Most Python programmers use four spaces for indentation, but you can choose any number of spaces as long as you are consistent throughout your code.
**Comments**
Comments are used to add explanatory notes to your code. They are ignored by Python and are only meant for human readers. Comments start with the `#` symbol and continue until the end of the line.
```python
# This is a comment
print("Hello, world!") # This is another comment
```
Comments are useful for documenting your code and providing context to other developers who may read it. They can also be used to temporarily disable or "comment out" a piece of code that you don't want to run.
**Variables and Naming Conventions**
In Python, variables are used to store values that can be used later in your code. You can think of a variable as a container that holds a value.
To create a variable, you need to choose a name for it and assign a value to it using the `=` operator. Variable names can contain letters, numbers, and underscores, but they cannot start with a number. It's also a good practice to use descriptive names that reflect the purpose of the variable.
```python
x = 10
y = 20
name = "John Doe"
```
In this example, we create three variables: `x`, `y`, and `name`. The first two variables store numbers, while the third variable stores a string.
Python is a dynamically typed language, which means that you don't need to explicitly declare the type of a variable. The type of a variable is determined automatically based on the value assigned to it.
It's a good practice to follow naming conventions when naming your variables. In Python, it is common to use lowercase letters and underscores to separate words in variable names. This is known as "snake_case" naming convention.
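For example:
```python
# snake_case variable names
student_count = 42
average_grade = 87.5
first_name = "Alice"
```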
**Print Function**
The `print()` function is used to display output in Python. It takes one or more arguments and prints them to the console.
```python
print("Hello, world!")
```
In this example, the `print()` function is used to display the string "Hello, world!".
You can also pass multiple arguments to the `print()` function, separated by commas. The arguments will be printed in the order they are passed.
```python
name = "John"
age = 30
print("My name is", name, "and I am", age, "years old.")
```
In this example, the `print()` function is used to display a message that includes the values of the `name` and `age` variables.
# 3. Basic Data Types
In Python, data is stored in variables, and each variable has a data type. Python has several built-in data types that are commonly used in programming.
# 3.1. Numbers (Integers and Floats)
Integers are whole numbers, such as 1, 2, 3, and so on. Floats, on the other hand, are numbers with decimal points, such as 1.0, 2.5, 3.14, and so on.
You can perform arithmetic operations on numbers in Python, such as addition, subtraction, multiplication, and division.
```python
x = 10
y = 3
# Addition
print(x + y) # Output: 13
# Subtraction
print(x - y) # Output: 7
# Multiplication
print(x * y) # Output: 30
# Division
print(x / y) # Output: 3.3333333333333335
```
In this example, we perform various arithmetic operations on the variables `x` and `y`.
Note that the division operation (`/`) returns a float even if the result is a whole number. If you want to perform integer division and get an integer result, you can use the double-slash operator (`//`).
```python
x = 10
y = 3
print(x // y) # Output: 3
```
In this example, the result of the division operation is 3, which is the integer part of the division.
# 3.2. Strings
Strings are sequences of characters, such as "Hello, world!", "Python", and "123".
You can create a string by enclosing characters in single quotes (`'`) or double quotes (`"`).
```python
name = 'John'
message = "Hello, world!"
```
In this example, we create two strings: `name` and `message`.
You can perform various operations on strings, such as concatenation, slicing, and formatting.
```python
name = 'John'
age = 30
# Concatenation
print("My name is " + name + " and I am " + str(age) + " years old.")
# Slicing
print(name[0]) # Output: J
print(name[1:3]) # Output: oh
# Formatting
print("My name is {} and I am {} years old.".format(name, age))
```
In this example, we demonstrate concatenation, slicing, and formatting operations on strings.
Note that we use the `str()` function to convert the `age` variable from an integer to a string before concatenating it with other strings.
# 3.3. Booleans
Booleans are a data type that represents truth values. A boolean can have one of two values: `True` or `False`.
Booleans are often used in conditional statements, such as `if` statements, to control the flow of a program based on certain conditions.
```python
x = 10
y = 5
# Greater than
print(x > y) # Output: True
# Less than
print(x < y) # Output: False
# Equal to
print(x == y) # Output: False
# Not equal to
print(x != y) # Output: True
```
In this example, we compare the values of the variables `x` and `y` using various comparison operators.
The result of a comparison operation is a boolean value (`True` or `False`), which can be used in conditional statements.
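For example, a comparison can be used directly as the condition of an `if` statement:
```python
x = 10
y = 5
if x > y:
    print("x is greater than y")  # Output: x is greater than y
```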
# 3.4. Type Conversion
Sometimes, you may need to convert a value from one data type to another. Python provides several built-in functions for type conversion.
**Integer to Float**
You can convert an integer to a float using the `float()` function.
```python
x = 10
y = float(x)
print(y) # Output: 10.0
```
In this example, we convert the integer value `10` to a float using the `float()` function.
**Float to Integer**
You can convert a float to an integer using the `int()` function. Note that the decimal part of the float will be truncated.
```python
x = 10.5
y = int(x)
print(y) # Output: 10
```
In this example, we convert the float value `10.5` to an integer using the `int()` function.
**String to Integer or Float**
You can convert a string to an integer or float using the `int()` or `float()` function, respectively.
```python
x = '10'
y = int(x)
print(y) # Output: 10
x = '10.5'
y = float(x)
print(y) # Output: 10.5
```
In this example, we convert the string values `'10'` and `'10.5'` to an integer and float, respectively.
Note that if the string cannot be converted to the desired data type (e.g., if it contains non-numeric characters), a `ValueError` will be raised.
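For example, trying to convert the string `'abc'` to an integer fails; one way to handle this is with a `try`/`except` block:
```python
value = 'abc'
try:
    number = int(value)
except ValueError:
    print("Cannot convert", value, "to an integer")
```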
# 4. Operators
Operators are symbols that perform operations on one or more operands. Python provides a wide range of operators for arithmetic, comparison, logical, and assignment operations.
**Arithmetic Operators**
Arithmetic operators are used to perform mathematical operations, such as addition, subtraction, multiplication, and division.
```python
x = 10
y = 3
# Addition
print(x + y) # Output: 13
# Subtraction
print(x - y) # Output: 7
# Multiplication
print(x * y) # Output: 30
# Division
print(x / y) # Output: 3.3333333333333335
# Modulo
print(x % y) # Output: 1
# Exponentiation
print(x ** y) # Output: 1000
```
In this example, we demonstrate various arithmetic operations using the variables `x` and `y`.
The modulo operator (`%`) returns the remainder of the division operation, while the exponentiation operator (`**`) raises the first operand to the power of the second operand.
**Comparison Operators**
Comparison operators are used to compare two values and return a boolean result (`True` or `False`).
```python
x = 10
y = 3
# Greater than
print(x > y) # Output: True
# Less than
print(x < y) # Output: False
# Equal to
print(x == y) # Output: False
# Not equal to
print(x != y) # Output: True
# Greater than or equal to
print(x >= y) # Output: True
# Less than or equal to
print(x <= y) # Output: False
```
In this example, we compare the values of the variables `x` and `y` using various comparison operators.
The result of a comparison operation is a boolean value (`True` or `False`), which can be used in conditional statements.
**Logical Operators**
Logical operators are used to combine multiple conditions and return a boolean result.
```python
x = 10
y = 3
# AND
print(x > 5 and y < 10) # Output: True
# OR
print(x > 5 or y > 10) # Output: True
# NOT
print(not x > 5) # Output: False
```
In this example, we demonstrate the use of logical operators. The `and` operator returns `True` if both conditions are `True`, the `or` operator returns `True` if at least one condition is `True`, and the `not` operator negates the result of a condition.
**Assignment Operators**
Assignment operators are used to assign values to variables.
```python
x = 10
y = 3
# Addition assignment
x += y # Equivalent to: x = x + y
print(x) # Output: 13
# Subtraction assignment
x -= y # Equivalent to: x = x - y
print(x) # Output: 10
# Multiplication assignment
x *= y # Equivalent to: x = x * y
print(x) # Output: 30
# Division assignment
x /= y # Equivalent to: x = x / y
print(x) # Output: 10.0
```
In this example, we demonstrate various assignment operations using the variables `x` and `y`.
The assignment operator (`=`) is used to assign a value to a variable. The other assignment operators combine an arithmetic operation with the assignment operation.
# 5. Control Structures
# 5.1. Conditional Statements (if, elif, else)
Conditional statements are used to execute different blocks of code based on certain conditions. The most common conditional statement in Python is the `if` statement.
The `if` statement allows us to execute a block of code if a certain condition is `True`. If the condition is `False`, the block of code is skipped.
```python
x = 10
if x > 5:
print("x is greater than 5")
```
In this example, we use the `if` statement to check if the value of `x` is greater than 5. If the condition is `True`, the message "x is greater than 5" is printed.
We can also use the `else` statement to execute a block of code if the condition is `False`.
```python
x = 3
if x > 5:
print("x is greater than 5")
else:
print("x is less than or equal to 5")
```
In this example, if the condition `x > 5` is `False`, the message "x is less than or equal to 5" is printed.
We can also use the `elif` statement to check for multiple conditions.
```python
x = 3
if x > 5:
print("x is greater than 5")
elif x == 5:
print("x is equal to 5")
else:
print("x is less than 5")
```
In this example, if the condition `x > 5` is `False`, the condition `x == 5` is checked. If this condition is `True`, the message "x is equal to 5" is printed. Otherwise, the message "x is less than 5" is printed.
# 5.2. Loops
Loops allow us to repeat a block of code multiple times. Python has two types of loops: `for` loops and `while` loops.
# 5.2.1. For Loop
A `for` loop is used to iterate over a sequence (such as a list, tuple, or string) or other iterable object. It allows us to perform a certain action for each item in the sequence.
```python
fruits = ['apple', 'banana', 'cherry']
for fruit in fruits:
print(fruit)
```
In this example, we use a `for` loop to iterate over the items in the `fruits` list. For each item, the item is assigned to the variable `fruit`, and the `print()` function is called to display the item.
We can also use the `range()` function to generate a sequence of numbers and iterate over them.
```python
for i in range(5):
print(i)
```
In this example, the `range(5)` function generates a sequence of numbers from 0 to 4. The `for` loop then iterates over these numbers, and the `print()` function is called to display each number.
# 5.2.2. While Loop
A `while` loop is used to repeat a block of code as long as a certain condition is `True`. It allows us to perform a certain action until a condition is no longer `True`.
```python
x = 0
while x < 5:
print(x)
x += 1
```
In this example, we use a `while` loop to repeat the block of code as long as the value of `x` is less than 5. The `print()` function is called to display the value of `x`, and the `x += 1` statement is used to increment the value of `x` by 1 in each iteration.
It's important to be careful when using `while` loops to avoid infinite loops. An infinite loop occurs when the condition of the loop is always `True`, causing the loop to run indefinitely.
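For example, forgetting to update the loop variable is a common cause of infinite loops:
```python
count = 0
while count < 5:
    print(count)
    count += 1  # without this line, the condition would stay True forever
```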
# 5.3. Break and Continue
The `break` statement is used to exit a loop prematurely. It allows us to terminate the loop before it has completed all iterations.
```python
fruits = ['apple', 'banana', 'cherry']
for fruit in fruits:
if fruit == 'banana':
break
print(fruit)
```
In this example, the `break` statement is used to exit the `for` loop when the value of `fruit` is `'banana'`. Because the `break` runs before the `print()` call, only the message "apple" is printed; the loop never prints `'banana'` and never reaches `'cherry'`.
The `continue` statement is used to skip the rest of the code in a loop and move on to the next iteration. It allows us to skip certain iterations based on a condition.
```python
fruits = ['apple', 'banana', 'cherry']
for fruit in fruits:
if fruit == 'banana':
continue
print(fruit)
```
In this example, the `continue` statement is used to skip the iteration when the value of `fruit` is `'banana'`. This means that the message "banana" will not be printed, and the loop will continue with the next iteration.
# 4.1. Arithmetic Operators
Arithmetic operators are used to perform mathematical operations in Python. Here are the most commonly used arithmetic operators:
- Addition (`+`): Adds two numbers together.
```python
x = 5
y = 3
result = x + y
print(result) # Output: 8
```
- Subtraction (`-`): Subtracts one number from another.
```python
x = 5
y = 3
result = x - y
print(result) # Output: 2
```
- Multiplication (`*`): Multiplies two numbers together.
```python
x = 5
y = 3
result = x * y
print(result) # Output: 15
```
- Division (`/`): Divides one number by another.
```python
x = 6
y = 3
result = x / y
print(result) # Output: 2.0
```
- Floor Division (`//`): Divides one number by another and rounds down to the nearest whole number.
```python
x = 7
y = 3
result = x // y
print(result) # Output: 2
```
- Modulo (`%`): Returns the remainder of the division of one number by another.
```python
x = 7
y = 3
result = x % y
print(result) # Output: 1
```
- Exponentiation (`**`): Raises one number to the power of another.
```python
x = 2
y = 3
result = x ** y
print(result) # Output: 8
```
It's important to note that arithmetic operators follow the usual order of operations, which is parentheses, exponentiation, multiplication and division (from left to right), and addition and subtraction (from left to right). You can use parentheses to change the order of operations if needed.
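For example, adding parentheses changes the result of an expression:
```python
result1 = 2 + 3 * 4    # multiplication happens first
result2 = (2 + 3) * 4  # parentheses happen first
print(result1)  # Output: 14
print(result2)  # Output: 20
```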
Let's say you want to calculate the area of a rectangle with a length of 5 units and a width of 3 units. You can use the multiplication operator to calculate the area:
```python
length = 5
width = 3
area = length * width
print(area) # Output: 15
```
## Exercise
Create a variable named `result` and assign the result of the following expression to it: `8 + 4 / 2 - 6 * 3`
### Solution
```python
result = 8 + 4 / 2 - 6 * 3
```
# 4.2. Comparison Operators
Comparison operators are used to compare two values and return a Boolean value (`True` or `False`). Here are the most commonly used comparison operators:
- Equal to (`==`): Checks if two values are equal.
```python
x = 5
y = 5
result = x == y
print(result) # Output: True
```
- Not equal to (`!=`): Checks if two values are not equal.
```python
x = 5
y = 3
result = x != y
print(result) # Output: True
```
- Greater than (`>`): Checks if one value is greater than another.
```python
x = 5
y = 3
result = x > y
print(result) # Output: True
```
- Less than (`<`): Checks if one value is less than another.
```python
x = 5
y = 3
result = x < y
print(result) # Output: False
```
- Greater than or equal to (`>=`): Checks if one value is greater than or equal to another.
```python
x = 5
y = 5
result = x >= y
print(result) # Output: True
```
- Less than or equal to (`<=`): Checks if one value is less than or equal to another.
```python
x = 5
y = 3
result = x <= y
print(result) # Output: False
```
Comparison operators are often used in conditional statements to make decisions based on the comparison result. For example:
```python
x = 5
y = 3
if x > y:
print("x is greater than y")
else:
print("x is not greater than y")
```
In this example, the output will be "x is greater than y" because the condition `x > y` is `True`.
Let's say you want to check if a student's grade is equal to or greater than a passing grade of 60. You can use the greater than or equal to operator to make the comparison:
```python
grade = 75
passing_grade = 60
result = grade >= passing_grade
print(result) # Output: True
```
## Exercise
Create a variable named `result` and assign the result of the following expression to it: `8 + 4 > 2 * 6`
### Solution
```python
result = 8 + 4 > 2 * 6
```
# 4.3. Logical Operators
Logical operators are used to combine multiple conditions and return a Boolean value (`True` or `False`). Here are the three logical operators in Python:
- `and`: Returns `True` if both conditions are `True`, otherwise returns `False`.
```python
x = 5
y = 3
z = 7
result = x > y and y < z
print(result) # Output: True
```
- `or`: Returns `True` if at least one condition is `True`, otherwise returns `False`.
```python
x = 5
y = 3
z = 7
result = x > y or y > z
print(result) # Output: True
```
- `not`: Returns the opposite Boolean value of the condition.
```python
x = 5
y = 3
result = not x > y
print(result) # Output: False
```
Logical operators are often used in conditional statements to combine multiple conditions. For example:
```python
x = 5
y = 3
z = 7
if x > y and y < z:
print("Both conditions are True")
else:
print("At least one condition is False")
```
In this example, the output will be "Both conditions are True" because both conditions `x > y` and `y < z` are `True`.
Let's say you want to check if a student's grade is either an A or a B. You can use the logical operator `or` to combine the conditions:
```python
grade = 'A'
result = grade == 'A' or grade == 'B'
print(result) # Output: True
```
## Exercise
Create a variable named `result` and assign the result of the following expression to it: `(5 > 3 and 7 < 10) or not (4 == 4)`
### Solution
```python
result = (5 > 3 and 7 < 10) or not (4 == 4)
```
# 4.4. Assignment Operators
Assignment operators are used to assign values to variables. Here are the different assignment operators in Python:
- `=`: Assigns the value on the right to the variable on the left.
```python
x = 5
```
- `+=`: Adds the value on the right to the variable on the left and assigns the result to the variable on the left.
```python
x = 5
x += 3 # Equivalent to x = x + 3
print(x) # Output: 8
```
- `-=`: Subtracts the value on the right from the variable on the left and assigns the result to the variable on the left.
```python
x = 5
x -= 3 # Equivalent to x = x - 3
print(x) # Output: 2
```
- `*=`: Multiplies the variable on the left by the value on the right and assigns the result to the variable on the left.
```python
x = 5
x *= 3 # Equivalent to x = x * 3
print(x) # Output: 15
```
- `/=`: Divides the variable on the left by the value on the right and assigns the result to the variable on the left.
```python
x = 15
x /= 3 # Equivalent to x = x / 3
print(x) # Output: 5.0
```
- `//=`: Performs integer division on the variable on the left by the value on the right and assigns the result to the variable on the left.
```python
x = 15
x //= 4 # Equivalent to x = x // 4
print(x) # Output: 3
```
- `%=`: Computes the remainder of the variable on the left divided by the value on the right and assigns the result to the variable on the left.
```python
x = 15
x %= 4 # Equivalent to x = x % 4
print(x) # Output: 3
```
- `**=`: Raises the variable on the left to the power of the value on the right and assigns the result to the variable on the left.
```python
x = 2
x **= 3 # Equivalent to x = x ** 3
print(x) # Output: 8
```
Assignment operators are useful when you want to perform an operation on a variable and update its value at the same time.
Let's say you have a variable `total` that keeps track of the total score in a game. You want to add 10 to the total score each time a player completes a level. You can use the `+=` assignment operator to update the `total` variable:
```python
total = 0
total += 10 # Equivalent to total = total + 10
print(total) # Output: 10
```
## Exercise
Create a variable named `result` and assign the result of the following expression to it: `5 + 3 - 2 * 4 / 2`
### Solution
```python
result = 5 + 3 - 2 * 4 / 2
```
# 5. Control Structures
# 5.1. Conditional Statements (if, elif, else)
Conditional statements are used to perform different actions based on different conditions. The most common conditional statement is the `if` statement, which allows you to execute a block of code if a certain condition is true. Here's the basic syntax of an `if` statement:
```python
if condition:
# code to execute if condition is true
```
The condition is an expression that evaluates to either `True` or `False`. If the condition is `True`, the code block following the `if` statement is executed. If the condition is `False`, the code block is skipped.
You can also include an `else` statement to specify a block of code to execute if the condition is `False`. Here's the syntax for an `if-else` statement:
```python
if condition:
# code to execute if condition is true
else:
# code to execute if condition is false
```
If the condition is `True`, the code block following the `if` statement is executed. If the condition is `False`, the code block following the `else` statement is executed.
Sometimes, you may have multiple conditions to check. In this case, you can use the `elif` statement, which stands for "else if". Here's the syntax for an `if-elif-else` statement:
```python
if condition1:
# code to execute if condition1 is true
elif condition2:
# code to execute if condition2 is true
else:
# code to execute if neither condition1 nor condition2 is true
```
The `elif` statement allows you to check additional conditions if the previous conditions are `False`. If any of the conditions are `True`, the corresponding code block is executed, and the rest of the `if-elif-else` statement is skipped.
Let's say you want to write a program that checks if a number is positive, negative, or zero. Here's an example using an `if-elif-else` statement:
```python
number = 5
if number > 0:
print("The number is positive")
elif number < 0:
print("The number is negative")
else:
print("The number is zero")
```
In this example, the condition `number > 0` is `True`, so the code block following the `if` statement is executed, and the output is "The number is positive".
## Exercise
Write a program that checks if a number is even or odd. If the number is even, print "The number is even". If the number is odd, print "The number is odd".
### Solution
```python
number = 7
if number % 2 == 0:
print("The number is even")
else:
print("The number is odd")
```
# 5.2. Loops
Loops are used to repeat a block of code multiple times. They are useful when you want to perform the same action on multiple items or when you want to repeat a certain task until a specific condition is met.
There are two types of loops in Python: `for` loops and `while` loops.
A `for` loop is used to iterate over a sequence (such as a list, tuple, or string) or other iterable objects. Here's the basic syntax of a `for` loop:
```python
for item in sequence:
# code to execute for each item in the sequence
```
The `item` variable takes on the value of each item in the sequence, one by one, and the code block following the `for` statement is executed for each item.
A `while` loop is used to repeat a block of code as long as a certain condition is `True`. Here's the basic syntax of a `while` loop:
```python
while condition:
# code to execute while the condition is true
```
The code block following the `while` statement is executed repeatedly as long as the condition is `True`. The condition is checked before each iteration of the loop.
Let's say you want to print the numbers from 1 to 5. Here's an example using a `for` loop:
```python
for number in range(1, 6):
print(number)
```
In this example, the `range(1, 6)` function generates a sequence of numbers from 1 to 5. The `for` loop iterates over each number in the sequence, and the `print` statement prints each number.
## Exercise
Write a program that calculates the sum of all numbers from 1 to 10 using a `while` loop. Print the sum.
### Solution
```python
sum = 0
number = 1
while number <= 10:
sum += number
number += 1
print(sum)
```
# 5.2.1. For Loop
A `for` loop is often used to iterate over a sequence of items. In each iteration, the loop variable takes on the value of the next item in the sequence.
You can use the `range()` function to generate a sequence of numbers. The `range()` function takes up to three arguments: `start`, `stop`, and `step`. The `start` argument specifies the starting value of the sequence (default is 0), the `stop` argument specifies the ending value of the sequence (exclusive), and the `step` argument specifies the increment (default is 1).
Here's an example that prints the numbers from 1 to 5 using a `for` loop and the `range()` function:
```python
for number in range(1, 6):
print(number)
```
In this example, the `range(1, 6)` function generates a sequence of numbers from 1 to 5. The `for` loop iterates over each number in the sequence, and the `print` statement prints each number.
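The `step` argument lets you count in increments other than 1. For example:
```python
for number in range(0, 10, 2):
    print(number)
```
In this example, `range(0, 10, 2)` generates the even numbers 0, 2, 4, 6, and 8, and the loop prints each one.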
You can also iterate over other types of sequences, such as lists, tuples, and strings. Here's an example that prints each character in a string:
```python
for character in "Python":
print(character)
```
In this example, the `for` loop iterates over each character in the string "Python", and the `print` statement prints each character.
Let's say you have a list of numbers and you want to calculate the sum of all the numbers. Here's an example using a `for` loop:
```python
numbers = [1, 2, 3, 4, 5]
sum = 0
for number in numbers:
sum += number
print(sum)
```
In this example, the `for` loop iterates over each number in the `numbers` list, and the `sum` variable is updated with the sum of all the numbers.
## Exercise
Write a program that calculates the product of all numbers in a list. Print the product.
### Solution
```python
numbers = [1, 2, 3, 4, 5]
product = 1
for number in numbers:
product *= number
print(product)
```
# 5.2.2. While Loop
A `while` loop is used to repeat a block of code as long as a certain condition is `True`. The condition is checked before each iteration of the loop.
Here's an example that prints the numbers from 1 to 5 using a `while` loop:
```python
number = 1
while number <= 5:
print(number)
number += 1
```
In this example, the `number` variable is initialized to 1. The condition `number <= 5` is `True`, so the code block following the `while` statement is executed. The `print` statement prints the value of `number`, and the `number += 1` statement increments the value of `number` by 1. This process is repeated until the condition `number <= 5` is `False`.
You can also use the `break` statement to exit a `while` loop prematurely. Here's an example that prints the numbers from 1 to 5, but stops when the number is 3:
```python
number = 1
while number <= 5:
print(number)
if number == 3:
break
number += 1
```
In this example, the `break` statement is executed when `number` is equal to 3. This causes the `while` loop to exit immediately, even though the condition `number <= 5` is still `True`.
Let's say you want to calculate the sum of all numbers from 1 to 10 using a `while` loop. Here's an example:
```python
sum = 0
number = 1
while number <= 10:
sum += number
number += 1
print(sum)
```
In this example, the `sum` variable is initialized to 0, and the `number` variable is initialized to 1. The condition `number <= 10` is `True`, so the code block following the `while` statement is executed. The `sum` variable is updated with the sum of `sum` and `number`, and the `number` variable is incremented by 1. This process is repeated until the condition `number <= 10` is `False`, at which point the `print` statement prints the value of `sum`.
## Exercise
Write a program that calculates the factorial of a number using a `while` loop. Print the factorial.
### Solution
```python
number = 5
factorial = 1
while number > 0:
factorial *= number
number -= 1
print(factorial)
```
# 5.3. Break and Continue
The `break` statement can be used to exit a loop prematurely, while the `continue` statement can be used to skip the rest of the code block and move on to the next iteration of the loop.
Here's an example that uses the `break` statement to exit a loop when a certain condition is met:
```python
numbers = [1, 2, 3, 4, 5]
for number in numbers:
if number == 3:
break
print(number)
```
In this example, the `break` statement is executed when `number` is equal to 3. This causes the `for` loop to exit immediately, even though there are still numbers left in the `numbers` list.
Here's an example that uses the `continue` statement to skip the rest of the code block and move on to the next iteration of the loop:
```python
numbers = [1, 2, 3, 4, 5]
for number in numbers:
if number == 3:
continue
print(number)
```
In this example, the `continue` statement is executed when `number` is equal to 3. This causes the rest of the code block to be skipped, and the loop moves on to the next iteration. As a result, the number 3 is not printed.
Let's say you have a list of numbers and you want to calculate the sum of all the even numbers in the list. You can use the `continue` statement to skip the odd numbers. Here's an example:
```python
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
sum = 0
for number in numbers:
if number % 2 != 0:
continue
sum += number
print(sum)
```
In this example, the `continue` statement is executed when `number` is odd (i.e., when `number % 2 != 0`). This causes the rest of the code block to be skipped, and the loop moves on to the next iteration. As a result, only the even numbers are added to the `sum` variable.
## Exercise
Write a program that prints the numbers from 1 to 10, but skips the number 7 using the `continue` statement.
### Solution
```python
for number in range(1, 11):
if number == 7:
continue
print(number)
```
# 5.4. Pass Statement
The `pass` statement is a placeholder statement in Python. It is used when you need a statement syntactically, but you don't want to do anything. It is often used as a placeholder for code that you plan to write later.
Here's an example:
```python
numbers = [1, 2, 3, 4, 5]
for number in numbers:
if number == 3:
pass
print(number)
```
In this example, the `pass` statement is used as a placeholder for code that will handle the case when `number` is equal to 3. Currently, the `pass` statement does nothing, so the loop continues to the next iteration and prints all the numbers in the list.
The `pass` statement can also be used in function definitions, class definitions, and conditional statements. It is a way to indicate that you are aware of the need for a statement, but you are not ready to write the code yet.
Let's say you are writing a function that calculates the factorial of a number. You know that you will need to write code to calculate the factorial, but you haven't figured out the details yet. In this case, you can use the `pass` statement as a placeholder:
```python
def factorial(n):
if n == 0:
pass
# code to calculate the factorial goes here
```
In this example, the `pass` statement is used as a placeholder for the code that will calculate the factorial when `n` is equal to 0. This allows you to write the rest of the function without getting an error, and you can come back later to fill in the missing code.
## Exercise
Write a function that takes a list of numbers as input and prints each number, but skips any negative numbers using the `pass` statement.
### Solution
```python
def print_positive_numbers(numbers):
    for number in numbers:
        if number < 0:
            pass  # placeholder: nothing to do for negative numbers
        else:
            print(number)
```
# 6. Data Structures
Data structures are a fundamental concept in computer science and programming. They are used to organize and store data in a way that makes it easy to access and manipulate. Python provides several built-in data structures that you can use in your programs.
In this section, we will cover four commonly used data structures in Python: lists, tuples, sets, and dictionaries. Each data structure has its own characteristics and use cases, so it's important to understand how they work and when to use them.
Let's start with lists.
### Lists
A list is a collection of items that are ordered and changeable. In Python, lists are created by placing items inside square brackets `[]`, separated by commas. Lists can contain items of different data types, such as numbers, strings, or even other lists.
Here's an example of a list:
```python
fruits = ['apple', 'banana', 'orange']
```
In this example, `fruits` is a list that contains three items: `'apple'`, `'banana'`, and `'orange'`. The items are ordered, meaning that they have a specific position in the list. You can access individual items in a list by their index, which starts at 0.
For example, to access the first item in the `fruits` list, you can use the following code:
```python
first_fruit = fruits[0]
```
The variable `first_fruit` will now contain the string `'apple'`.
Lists are mutable, which means that you can change their content by assigning new values to specific indexes. For example, to change the second item in the `fruits` list to `'grape'`, you can use the following code:
```python
fruits[1] = 'grape'
```
Now, the `fruits` list will contain `'apple'`, `'grape'`, and `'orange'`.
Here's an example that demonstrates the use of lists:
```python
# Create a list of numbers
numbers = [1, 2, 3, 4, 5]
# Print the list
print(numbers) # Output: [1, 2, 3, 4, 5]
# Access individual items
first_number = numbers[0]
print(first_number) # Output: 1
# Change an item
numbers[2] = 10
print(numbers) # Output: [1, 2, 10, 4, 5]
# Add an item to the end of the list
numbers.append(6)
print(numbers) # Output: [1, 2, 10, 4, 5, 6]
# Remove an item from the list
numbers.remove(4)
print(numbers) # Output: [1, 2, 10, 5, 6]
```
In this example, we create a list of numbers, access individual items using their indexes, change an item, add an item to the end of the list, and remove an item from the list.
## Exercise
Create a list called `grades` that contains the following grades: 85, 90, 92, 88, and 95. Print the list.
### Solution
```python
grades = [85, 90, 92, 88, 95]
print(grades)
```
# 6.1. Lists
Lists are one of the most commonly used data structures in Python. They are used to store a collection of items, such as numbers, strings, or even other lists. Lists are ordered, meaning that the items have a specific position in the list, and they are mutable, meaning that you can change their content.
To create a list, you can simply place the items inside square brackets `[]`, separated by commas. For example:
```python
fruits = ['apple', 'banana', 'orange']
```
In this example, `fruits` is a list that contains three strings: `'apple'`, `'banana'`, and `'orange'`. The items are ordered, so `'apple'` is the first item, `'banana'` is the second item, and `'orange'` is the third item.
You can access individual items in a list by their index. The index starts at 0, so the first item in the list has an index of 0, the second item has an index of 1, and so on. To access an item, you can use square brackets `[]` and specify the index. For example:
```python
first_fruit = fruits[0]
```
In this example, `first_fruit` will contain the string `'apple'`, because `'apple'` is the item at index 0 in the `fruits` list.
Lists are mutable, which means that you can change their content by assigning new values to specific indexes. For example, to change the second item in the `fruits` list to `'grape'`, you can use the following code:
```python
fruits[1] = 'grape'
```
Now, the `fruits` list will contain `'apple'`, `'grape'`, and `'orange'`.
You can also add items to the end of a list using the `append()` method. For example:
```python
fruits.append('kiwi')
```
This will add the string `'kiwi'` to the end of the `fruits` list.
To remove an item from a list, you can use the `remove()` method and specify the item you want to remove. For example:
```python
fruits.remove('orange')
```
This will remove the string `'orange'` from the `fruits` list, leaving `['apple', 'grape', 'kiwi']`.
Lists can also be sliced, which means that you can access a subset of items in a list by specifying a range of indexes. For example, to get the first two items in the `fruits` list, you can use the following code:
```python
first_two_fruits = fruits[0:2]
```
In this example, `first_two_fruits` will contain `['apple', 'grape']`, because those are the items at indexes 0 and 1 in the `fruits` list.
Lists can also be nested, meaning that you can have lists inside other lists. For example:
```python
nested_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```
In this example, `nested_list` is a list that contains three lists: `[1, 2, 3]`, `[4, 5, 6]`, and `[7, 8, 9]`.
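You can access an item inside a nested list by chaining indexes: the first index selects the inner list, and the second index selects an item within it. For example:
```python
nested_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(nested_list[1])     # Output: [4, 5, 6]
print(nested_list[1][2])  # Output: 6
```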
Here's an example that demonstrates the use of lists:
```python
# Create a list of numbers
numbers = [1, 2, 3, 4, 5]
# Print the list
print(numbers) # Output: [1, 2, 3, 4, 5]
# Access individual items
first_number = numbers[0]
print(first_number) # Output: 1
# Change an item
numbers[2] = 10
print(numbers) # Output: [1, 2, 10, 4, 5]
# Add an item to the end of the list
numbers.append(6)
print(numbers) # Output: [1, 2, 10, 4, 5, 6]
# Remove an item from the list
numbers.remove(4)
print(numbers) # Output: [1, 2, 10, 5, 6]
```
In this example, we create a list of numbers, access individual items using their indexes, change an item, add an item to the end of the list, and remove an item from the list.
## Exercise
Create a list called `grades` that contains the following grades: 85, 90, 92, 88, and 95. Print the list.
### Solution
```python
grades = [85, 90, 92, 88, 95]
print(grades)
```
# 6.2. Tuples
Tuples are similar to lists in Python, but they are immutable, meaning that their content cannot be changed once they are created. Tuples are often used to store related pieces of information together.
To create a tuple, you can place the items inside parentheses `()`, separated by commas. For example:
```python
person = ('John', 25, 'USA')
```
In this example, `person` is a tuple that contains three items: the string `'John'`, the number `25`, and the string `'USA'`.
You can access individual items in a tuple by their index, just like with lists. For example, to access the second item in the `person` tuple, you can use the following code:
```python
age = person[1]
```
In this example, `age` will contain the number `25`, because `25` is the item at index 1 in the `person` tuple.
Tuples are immutable, so you cannot change their content by assigning new values to specific indexes. However, you can create a new tuple with the desired changes. For example, to change the second item in the `person` tuple to `30`, you can use the following code:
```python
new_person = (person[0], 30, person[2])
```
Now, the `new_person` tuple will contain the string `'John'`, the number `30`, and the string `'USA'`.
Tuples can also be unpacked, meaning that you can assign the items in a tuple to individual variables. For example:
```python
name, age, country = person
```
In this example, `name` will contain the string `'John'`, `age` will contain the number `25`, and `country` will contain the string `'USA'`.
Tuples can be useful when you want to ensure that the content of a collection cannot be changed, or when you want to group related pieces of information together.
Here's an example that demonstrates the use of tuples:
```python
# Create a tuple
person = ('John', 25, 'USA')
# Access individual items
age = person[1]
print(age) # Output: 25
# Create a new tuple with changes
new_person = (person[0], 30, person[2])
print(new_person) # Output: ('John', 30, 'USA')
# Unpack a tuple
name, age, country = person
print(name) # Output: 'John'
print(age) # Output: 25
print(country) # Output: 'USA'
```
In this example, we create a tuple, access individual items using their indexes, create a new tuple with changes, and unpack a tuple into individual variables.
## Exercise
Create a tuple called `coordinates` that contains the latitude and longitude of a location. The latitude should be `40.7128` and the longitude should be `-74.0060`. Print the tuple.
### Solution
```python
coordinates = (40.7128, -74.0060)
print(coordinates)
```
# 6.3. Sets
Sets are another built-in data type in Python. A set is an unordered collection of unique elements. This means that a set cannot contain duplicate elements.
To create a set, you can use the `set()` function or enclose the elements in curly braces `{}`. For example:
```python
fruits = set(['apple', 'banana', 'orange'])
```
In this example, `fruits` is a set that contains three unique elements: `'apple'`, `'banana'`, and `'orange'`.
You can also create a set using curly braces `{}`. For example:
```python
fruits = {'apple', 'banana', 'orange'}
```
Sets support various operations, such as adding and removing elements, checking if an element is in the set, and performing set operations like union, intersection, and difference.
To add an element to a set, you can use the `add()` method. For example:
```python
fruits.add('grape')
```
In this example, the element `'grape'` is added to the `fruits` set.
To remove an element from a set, you can use the `remove()` method. For example:
```python
fruits.remove('banana')
```
In this example, the element `'banana'` is removed from the `fruits` set.
You can check if an element is in a set using the `in` operator. For example:
```python
print('apple' in fruits) # Output: True
print('pear' in fruits) # Output: False
```
In this example, the first `print` statement returns `True` because `'apple'` is in the `fruits` set, while the second `print` statement returns `False` because `'pear'` is not in the `fruits` set.
Sets also support set operations like union, intersection, and difference. For example:
```python
fruits1 = {'apple', 'banana', 'orange'}
fruits2 = {'banana', 'grape'}
union = fruits1.union(fruits2)
intersection = fruits1.intersection(fruits2)
difference = fruits1.difference(fruits2)
print(union) # Output: {'apple', 'banana', 'orange', 'grape'}
print(intersection) # Output: {'banana'}
print(difference) # Output: {'apple', 'orange'}
```
In this example, the `union` set contains all the unique elements from both `fruits1` and `fruits2`, the `intersection` set contains the common elements between `fruits1` and `fruits2`, and the `difference` set contains the elements that are in `fruits1` but not in `fruits2`.
Sets are useful when you want to store a collection of unique elements and perform set operations.
Here's an example that demonstrates the use of sets:
```python
# Create a set
fruits = {'apple', 'banana', 'orange'}
# Add an element
fruits.add('grape')
print(fruits) # Output: {'apple', 'banana', 'orange', 'grape'}
# Remove an element
fruits.remove('banana')
print(fruits) # Output: {'apple', 'orange', 'grape'}
# Check if an element is in the set
print('apple' in fruits) # Output: True
print('pear' in fruits) # Output: False
# Perform set operations
fruits1 = {'apple', 'banana', 'orange'}
fruits2 = {'banana', 'grape'}
union = fruits1.union(fruits2)
intersection = fruits1.intersection(fruits2)
difference = fruits1.difference(fruits2)
print(union) # Output: {'apple', 'banana', 'orange', 'grape'}
print(intersection) # Output: {'banana'}
print(difference) # Output: {'apple', 'orange'}
```
In this example, we create a set, add an element, remove an element, check if an element is in the set, and perform set operations.
## Exercise
Create a set called `colors` that contains the colors 'red', 'green', and 'blue'. Add the color 'yellow' to the set. Remove the color 'green' from the set. Check if the color 'blue' is in the set.
### Solution
```python
colors = {'red', 'green', 'blue'}
colors.add('yellow')
colors.remove('green')
print('blue' in colors)
```
# 6.4. Dictionaries
Dictionaries are another built-in data type in Python. A dictionary is an unordered collection of key-value pairs. Each key in the dictionary is unique and maps to a value.
To create a dictionary, you can use curly braces `{}` and separate the key-value pairs with colons `:`. For example:
```python
student = {'name': 'John', 'age': 21, 'major': 'Computer Science'}
```
In this example, `student` is a dictionary that contains three key-value pairs: `'name': 'John'`, `'age': 21`, and `'major': 'Computer Science'`.
You can also create a dictionary using the `dict()` function. For example:
```python
student = dict(name='John', age=21, major='Computer Science')
```
In this example, the `dict()` function is used to create a dictionary with the same key-value pairs as before.
You can access the value associated with a key in a dictionary by using square brackets `[]` and the key. For example:
```python
print(student['name']) # Output: 'John'
```
In this example, the value `'John'` associated with the key `'name'` is printed.
You can also modify the value associated with a key in a dictionary by using square brackets `[]` and the key. For example:
```python
student['age'] = 22
print(student['age']) # Output: 22
```
In this example, the value associated with the key `'age'` is modified to `22`.
You can check if a key exists in a dictionary using the `in` operator. For example:
```python
print('major' in student) # Output: True
print('gpa' in student) # Output: False
```
In this example, the first `print` statement returns `True` because the key `'major'` exists in the `student` dictionary, while the second `print` statement returns `False` because the key `'gpa'` does not exist in the `student` dictionary.
Dictionaries are useful when you want to store and retrieve values based on a unique key.
Here's an example that demonstrates the use of dictionaries:
```python
# Create a dictionary
student = {'name': 'John', 'age': 21, 'major': 'Computer Science'}
# Access a value
print(student['name']) # Output: 'John'
# Modify a value
student['age'] = 22
print(student['age']) # Output: 22
# Check if a key exists
print('major' in student) # Output: True
print('gpa' in student) # Output: False
```
In this example, we create a dictionary, access a value, modify a value, and check if a key exists.
## Exercise
Create a dictionary called `grades` that maps the names of students to their grades. The dictionary should contain the following key-value pairs:
- `'Alice': 85`
- `'Bob': 92`
- `'Charlie': 78`
Access the grade of `'Alice'` and print it. Modify the grade of `'Bob'` to `95`. Check if `'Charlie'` is a key in the dictionary.
### Solution
```python
grades = {'Alice': 85, 'Bob': 92, 'Charlie': 78}
print(grades['Alice'])
grades['Bob'] = 95
print('Charlie' in grades)
```
# 7. Functions
Functions are a fundamental concept in programming. They allow you to group a set of instructions together and give them a name. You can then call the function by its name to execute those instructions.
In Python, you can define a function using the `def` keyword followed by the function name and a pair of parentheses `()`. Any input parameters to the function are specified within the parentheses. For example:
```python
def greet(name):
print('Hello, ' + name + '!')
```
In this example, we define a function called `greet` that takes a single parameter `name`. The function prints a greeting message using the value of the `name` parameter.
To call a function, you simply write its name followed by a pair of parentheses `()`. If the function has any input parameters, you provide the values for those parameters within the parentheses. For example:
```python
greet('Alice') # Output: Hello, Alice!
```
In this example, we call the `greet` function with the argument `'Alice'`. The function prints the greeting message `'Hello, Alice!'`.
Functions can also have a return value. You can use the `return` keyword followed by an expression to specify the value that the function should return. For example:
```python
def add(a, b):
return a + b
```
In this example, we define a function called `add` that takes two parameters `a` and `b`. The function returns the sum of `a` and `b`.
To use the return value of a function, you can assign it to a variable or use it in an expression. For example:
```python
result = add(3, 4)
print(result) # Output: 7
```
In this example, we call the `add` function with the arguments `3` and `4`. The function returns the value `7`, which is then assigned to the variable `result` and printed.
Functions are a powerful tool for organizing and reusing code. They allow you to break down complex tasks into smaller, more manageable pieces, and they make your code easier to read and understand.
Here's an example that demonstrates the use of functions:
```python
# Define a function
def greet(name):
print('Hello, ' + name + '!')
# Call the function
greet('Alice') # Output: Hello, Alice!
```
In this example, we define a function called `greet` that takes a parameter `name`. The function prints a greeting message using the value of the `name` parameter. We then call the function with the argument `'Alice'`, which results in the greeting message `'Hello, Alice!'` being printed.
## Exercise
Create a function called `calculate_average` that takes a list of numbers as input and returns the average of those numbers. The function should work for lists of any length.
For example, `calculate_average([1, 2, 3, 4, 5])` should return `3.0`, and `calculate_average([10, 20, 30])` should return `20.0`.
### Solution
```python
def calculate_average(numbers):
total = sum(numbers)
average = total / len(numbers)
return average
```
# 8. File Handling
File handling is an important aspect of programming. It allows you to read data from files, write data to files, and perform various operations on files.
In Python, you can open a file using the `open()` function. The `open()` function takes two arguments: the name of the file to open and the mode in which to open the file. The mode can be `'r'` for reading, `'w'` for writing, `'a'` for appending, or `'x'` for exclusive creation.
For example, to open a file named `'data.txt'` for reading, you can use the following code:
```python
file = open('data.txt', 'r')
```
Once you have opened a file, you can read its contents using the `read()` method. The `read()` method reads the entire contents of the file as a single string. For example:
```python
contents = file.read()
```
You can also read the contents of a file line by line using the `readline()` method. The `readline()` method reads a single line from the file and returns it as a string. For example:
```python
line = file.readline()
```
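You can also process a file one line at a time by looping over the file object itself, which is often more convenient than calling `readline()` repeatedly. Here is a short sketch using the same `data.txt`:
```python
file = open('data.txt', 'r')
for line in file:
    # Each line keeps its trailing newline; strip() removes it before printing
    print(line.strip())
file.close()
```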
After you have finished reading from a file, you should close it using the `close()` method. The `close()` method releases any system resources associated with the file. For example:
```python
file.close()
```
To write data to a file, you can open the file in write mode (`'w'`) or append mode (`'a'`). In write mode, the contents of the file are overwritten, while in append mode, new data is added to the end of the file.
To write data to a file, you can use the `write()` method. The `write()` method takes a string as input and writes it to the file. For example:
```python
file = open('data.txt', 'w')
file.write('Hello, world!')
file.close()
```
In this example, the string `'Hello, world!'` is written to the file `'data.txt'`.
File handling is a powerful feature of Python that allows you to work with external data and perform various file operations. It is commonly used in tasks such as reading and writing data files, processing log files, and generating reports.
Here's an example that demonstrates file handling in Python:
```python
# Open a file for reading
file = open('data.txt', 'r')
# Read the contents of the file
contents = file.read()
# Print the contents of the file
print(contents)
# Close the file
file.close()
```
In this example, we open a file named `'data.txt'` for reading, read its contents using the `read()` method, and print the contents. Finally, we close the file using the `close()` method.
## Exercise
Create a file named `'numbers.txt'` and write the numbers from `1` to `10` to the file, each on a separate line.
### Solution
```python
file = open('numbers.txt', 'w')
for i in range(1, 11):
file.write(str(i) + '\n')
file.close()
```
# 7.2. Function Parameters and Return Values
In Python, you can define functions that take parameters and return values. Parameters are the inputs to a function, and return values are the outputs of a function.
To define a function with parameters, you can use the following syntax:
```python
def function_name(parameter1, parameter2, ...):
# function body
```
Inside the function body, you can use the values of the parameters to perform calculations or other operations. For example:
```python
def add_numbers(x, y):
return x + y
```
In this example, the function `add_numbers` takes two parameters `x` and `y`, and returns their sum.
To call a function with parameters, you can provide the values for the parameters inside parentheses. For example:
```python
result = add_numbers(3, 5)
```
In this example, the function `add_numbers` is called with the values `3` and `5`, and the result is assigned to the variable `result`.
A function can have multiple parameters, and you can provide values for the parameters in any order by using keyword arguments. For example:
```python
result = add_numbers(y=5, x=3)
```
In this example, the values `3` and `5` are provided for the parameters `x` and `y` using keyword arguments.
A function can also have a return value, which is the value that the function produces as its output. To return a value from a function, you can use the `return` statement followed by the value to be returned. For example:
```python
def square(x):
return x * x
```
In this example, the function `square` takes a parameter `x`, and returns the square of `x`.
To use the return value of a function, you can assign it to a variable or use it in an expression. For example:
```python
result = square(4)
```
In this example, the function `square` is called with the value `4`, and the result is assigned to the variable `result`.
Functions with parameters and return values are a powerful tool in Python that allow you to write reusable and modular code. They can be used to perform calculations, manipulate data, and solve complex problems.
Here's an example that demonstrates functions with parameters and return values:
```python
def calculate_average(numbers):
total = sum(numbers)
average = total / len(numbers)
return average
numbers = [1, 2, 3, 4, 5]
result = calculate_average(numbers)
print(result)
```
In this example, the function `calculate_average` takes a list of numbers as a parameter, calculates their average, and returns the result. The function is called with the list `[1, 2, 3, 4, 5]`, and the result is assigned to the variable `result`. Finally, the result is printed.
## Exercise
Write a function named `calculate_area` that takes the radius of a circle as a parameter and returns the area of the circle. Use the formula `area = pi * radius^2`, where `pi` is a constant value of `3.14159`.
### Solution
```python
def calculate_area(radius):
pi = 3.14159
area = pi * radius ** 2
return area
```
# 7.3. Lambda Functions
In addition to regular functions, Python also supports lambda functions, also known as anonymous functions. Lambda functions are a way to create small, one-line functions without using the `def` keyword.
The syntax for a lambda function is as follows:
```python
lambda parameters: expression
```
The lambda function takes one or more parameters, followed by a colon, and then an expression. The expression is evaluated and the result is returned.
Lambda functions are often used when you need a simple function for a short period of time, such as when passing a function as an argument to another function.
Here's an example that demonstrates a lambda function:
```python
add = lambda x, y: x + y
result = add(3, 5)
print(result)
```
In this example, the lambda function `add` takes two parameters `x` and `y`, and returns their sum. The lambda function is assigned to the variable `add`, and then called with the values `3` and `5`. The result is assigned to the variable `result`, and then printed.
Lambda functions can also be used as arguments to other functions, such as the `map()` function. The `map()` function applies a given function to each item in an iterable and returns a new iterable with the results.
Here's an example that demonstrates the use of a lambda function with the `map()` function:
```python
numbers = [1, 2, 3, 4, 5]
squared_numbers = map(lambda x: x ** 2, numbers)
print(list(squared_numbers))
```
In this example, the lambda function `lambda x: x ** 2` is passed as the first argument to the `map()` function. The lambda function squares each number in the `numbers` list, and the `map()` function returns a new iterable with the squared numbers. The result is converted to a list using the `list()` function, and then printed.
Lambda functions are a convenient way to create small, one-line functions without the need for a full function definition. They can be used in a variety of situations, such as when passing a function as an argument or when creating a simple function on the fly.
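Another common use is supplying a lambda as the `key` argument to `sorted()`. Here is a brief sketch:
```python
words = ['banana', 'fig', 'apple']
# Sort the words by length instead of alphabetically
sorted_words = sorted(words, key=lambda word: len(word))
print(sorted_words)  # Output: ['fig', 'apple', 'banana']
```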
# 7.4. Modules and Packages
In Python, a module is a file containing Python definitions and statements. The file name is the module name with the suffix `.py`. Modules are used to organize related code into separate files, making it easier to manage and reuse code.
To use a module in your code, you need to import it using the `import` statement. Once imported, you can access the functions, classes, and variables defined in the module.
Here's an example that demonstrates how to import and use a module:
```python
import math
radius = 5
area = math.pi * math.pow(radius, 2)
print(area)
```
In this example, the `math` module is imported using the `import` statement. The module provides various mathematical functions and constants, such as `pi`. The `pi` constant is used to calculate the area of a circle with a given radius.
Python also supports the concept of packages, which are a way to organize related modules into a directory hierarchy. A package is simply a directory that contains a special file called `__init__.py`. The `__init__.py` file can be empty or can contain initialization code for the package.
To import a module from a package, you need to specify the package name followed by the module name, separated by a dot. For example:
```python
import package.module
```
You can also use the `from` keyword to import specific functions or variables from a module or package:
```python
from package.module import function
```
This allows you to use the function directly without specifying the module name.
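For instance, here is a small sketch that imports a single function from the standard `math` module and calls it directly:
```python
from math import sqrt
# sqrt is now available without the math. prefix
print(sqrt(25))  # Output: 5.0
```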
Python provides a rich ecosystem of modules and packages that extend its functionality. These include modules for scientific computing (NumPy, SciPy), data analysis (pandas), machine learning (scikit-learn), and many more. You can install additional modules and packages using the `pip` package manager.
Using modules and packages allows you to leverage existing code and avoid reinventing the wheel. It also promotes code reuse and modularity, making your code easier to read, maintain, and debug.
# 8. File Handling
Python provides built-in functions and modules for reading from and writing to files. File handling is an essential skill for working with data and performing various tasks, such as data processing, data analysis, and data visualization.
To read from a file, you can use the `open()` function. The `open()` function takes two arguments: the file name and the mode. The mode specifies how the file should be opened, such as for reading, writing, or appending. Here's an example:
```python
file = open('data.txt', 'r')
content = file.read()
print(content)
file.close()
```
In this example, the `open()` function is used to open the file named `data.txt` in read mode (`'r'`). The `read()` method is then called on the file object to read the contents of the file. The contents are stored in the `content` variable and printed to the console. Finally, the `close()` method is called to close the file.
To write to a file, you can use the `open()` function with the write mode (`'w'`). Here's an example:
```python
file = open('output.txt', 'w')
file.write('Hello, world!')
file.close()
```
In this example, the `open()` function is used to open the file named `output.txt` in write mode (`'w'`). The `write()` method is then called on the file object to write the string `'Hello, world!'` to the file. Finally, the `close()` method is called to close the file.
Python also provides a more convenient way to handle file operations using the `with` statement. The `with` statement ensures that the file is properly closed after the block of code is executed, even if an exception occurs. Here's an example:
```python
with open('data.txt', 'r') as file:
content = file.read()
print(content)
```
In this example, the `with` statement is used to open the file named `data.txt` in read mode (`'r'`). The file object is assigned to the variable `file`. The code block indented under the `with` statement is executed, and the file is automatically closed when the block is exited.
File handling in Python is a powerful feature that allows you to work with different file formats, such as text files, CSV files, JSON files, and more. It enables you to read and write data, manipulate file contents, and perform various file operations efficiently and effectively.
# 8.1. Reading from a File
In Python, you can read data from a file using the `open()` function. The `open()` function takes two arguments: the name of the file you want to open, and the mode in which you want to open the file. The mode can be `'r'` for reading, `'w'` for writing, or `'a'` for appending to an existing file.
To read data from a file, you can use the `read()` method of the file object. This method reads the entire contents of the file as a single string.
Here's an example of how to read data from a file:
```python
file = open('data.txt', 'r')
data = file.read()
file.close()
```
In this example, we open the file named `'data.txt'` in read mode, read its contents using the `read()` method, and then close the file using the `close()` method.
Suppose we have a file named `'data.txt'` that contains the following text:
```
Hello, world!
This is some sample text.
```
We can read the contents of this file using the following code:
```python
file = open('data.txt', 'r')
data = file.read()
file.close()
print(data)
```
The output of this code will be:
```
Hello, world!
This is some sample text.
```
## Exercise
Create a file named `'numbers.txt'` and write the numbers from 1 to 10, each on a separate line. Then, read the contents of the file and print them.
### Solution
```python
file = open('numbers.txt', 'w')
for i in range(1, 11):
file.write(str(i) + '\n')
file.close()
file = open('numbers.txt', 'r')
data = file.read()
file.close()
print(data)
```
The output of this code will be:
```
1
2
3
4
5
6
7
8
9
10
```
# 8.3. File Modes
When opening a file in Python, you can specify the mode in which you want to open the file. The mode determines whether you want to read from, write to, or append to the file. Here are the different modes you can use:
- `'r'`: Read mode. This is the default mode and allows you to read the contents of the file.
- `'w'`: Write mode. This mode allows you to write new contents to the file. If the file already exists, it will be overwritten. If the file does not exist, a new file will be created.
- `'a'`: Append mode. This mode allows you to add new contents to the end of the file. If the file does not exist, a new file will be created.
- `'x'`: Exclusive creation mode. This mode only allows you to create a new file. If the file already exists, an error will be raised.
To specify the mode when opening a file, you can pass it as the second argument to the `open()` function. For example, to open a file named `'data.txt'` in write mode, you can use the following code:
```python
file = open('data.txt', 'w')
```
After you have finished working with a file, it is important to close it using the `close()` method of the file object. This ensures that any changes you made to the file are saved and that system resources are freed up.
Suppose we have a file named `'data.txt'` that contains the following text:
```
Hello, world!
```
If we want to append the string `'This is some additional text.'` to the end of the file, we can use the following code:
```python
file = open('data.txt', 'a')
file.write('This is some additional text.')
file.close()
```
After running this code, the contents of the file will be:
```
Hello, world!
This is some additional text.
```
## Exercise
Create a file named `'log.txt'` and write the following log messages to it, each on a separate line:
```
[INFO] User logged in.
[WARNING] Invalid input detected.
[ERROR] Connection failed.
```
Then, read the contents of the file and print them.
### Solution
```python
file = open('log.txt', 'w')
file.write('[INFO] User logged in.\n')
file.write('[WARNING] Invalid input detected.\n')
file.write('[ERROR] Connection failed.\n')
file.close()
file = open('log.txt', 'r')
data = file.read()
file.close()
print(data)
```
The output of this code will be:
```
[INFO] User logged in.
[WARNING] Invalid input detected.
[ERROR] Connection failed.
```
# 8.4. Using the 'with' Statement
When working with files in Python, it is good practice to use the `with` statement. The `with` statement ensures that the file is properly closed even if an exception occurs. It also simplifies the code by automatically handling the opening and closing of the file.
To use the `with` statement, you can open the file and assign it to a variable. Then, you can use the variable to read or write to the file within the `with` block. Here is an example:
```python
with open('data.txt', 'r') as file:
contents = file.read()
print(contents)
```
In this example, the file is opened in read mode using the `'r'` mode. The contents of the file are then read and printed. Once the `with` block is exited, the file is automatically closed.
The `with` statement can also be used with the `'w'` and `'a'` modes for writing and appending to files, respectively. Here is an example:
```python
with open('data.txt', 'a') as file:
file.write('This is some additional text.')
```
In this example, the string `'This is some additional text.'` is appended to the end of the file.
Using the `with` statement is recommended because it ensures that files are properly closed, even if an error occurs. It also simplifies the code by handling the opening and closing of the file automatically.
Suppose we have a file named `'data.txt'` that contains the following text:
```
Hello, world!
```
If we want to read the contents of the file and print them, we can use the following code:
```python
with open('data.txt', 'r') as file:
contents = file.read()
print(contents)
```
The output of this code will be:
```
Hello, world!
```
## Exercise
Using the `with` statement, open the file `'log.txt'` and read its contents. Assign the contents to a variable named `data`. Then, print the value of `data`.
### Solution
```python
with open('log.txt', 'r') as file:
data = file.read()
print(data)
```
The output of this code will be the contents of the `'log.txt'` file.
# 9. Exceptions and Error Handling
In Python, an exception is an event that occurs during the execution of a program that disrupts the normal flow of instructions. When an exception occurs, the program stops executing and jumps to a special block of code called an exception handler.
Exceptions can occur for various reasons, such as invalid input, division by zero, or file not found. Python provides a way to handle exceptions using the `try` and `except` statements.
The `try` statement is used to enclose the code that might raise an exception. If an exception occurs within the `try` block, the code execution is immediately transferred to the `except` block.
Here is an example:
```python
try:
x = 10 / 0
except ZeroDivisionError:
print("Error: Division by zero")
```
In this example, the code within the `try` block attempts to divide the number 10 by zero, which raises a `ZeroDivisionError` exception. The program then jumps to the `except` block, which prints an error message.
Multiple `except` statements can be used to handle different types of exceptions. Here is an example:
```python
try:
x = int("abc")
except ValueError:
print("Error: Invalid input")
except ZeroDivisionError:
print("Error: Division by zero")
```
In this example, the code within the `try` block attempts to convert the string "abc" to an integer, which raises a `ValueError` exception. The program then jumps to the first `except` block, which prints an error message.
The `finally` block can be used to specify code that will be executed regardless of whether an exception occurs or not. Here is an example:
```python
try:
x = 10 / 0
except ZeroDivisionError:
print("Error: Division by zero")
finally:
print("Finally block executed")
```
In this example, the code within the `try` block attempts to divide the number 10 by zero, which raises a `ZeroDivisionError` exception. The program then jumps to the `except` block, which prints an error message. Finally, the `finally` block is executed, printing a message.
Using exception handling is important because it allows you to gracefully handle errors and prevent your program from crashing. It also provides a way to handle different types of exceptions separately and perform specific actions based on the type of exception.
Suppose we have a function that divides two numbers:
```python
def divide(a, b):
try:
result = a / b
print("The result is:", result)
except ZeroDivisionError:
print("Error: Division by zero")
```
If we call this function with the arguments `10` and `2`, it will print:
```
The result is: 5.0
```
If we call this function with the arguments `10` and `0`, it will print:
```
Error: Division by zero
```
## Exercise
Write a function named `read_file` that takes a filename as an argument and attempts to read the contents of the file. If the file does not exist, the function should print an error message. Use exception handling to handle the `FileNotFoundError` exception.
### Solution
```python
def read_file(filename):
try:
with open(filename, 'r') as file:
contents = file.read()
print(contents)
except FileNotFoundError:
print("Error: File not found")
```
To test this function, you can call it with the name of an existing file and a non-existing file:
```python
read_file('existing_file.txt')
read_file('non_existing_file.txt')
```
The first call will print the contents of the existing file, while the second call will print an error message.
# 9.1. Syntax Errors vs. Exceptions
Before looking at more examples, it helps to distinguish syntax errors from exceptions. A syntax error is detected by the Python parser before the program runs, for example a missing colon or an unclosed parenthesis, and the program will not execute until it is fixed. An exception, on the other hand, is raised while the program is running, when a syntactically valid statement fails to execute, such as dividing by zero or converting an invalid string to an integer. Syntax errors must be corrected in the code, while exceptions can be caught and handled with `try` and `except`.
Suppose we have a list of numbers and we want to calculate the reciprocal of each number. We can write a function that handles exceptions to avoid division by zero:
```python
def calculate_reciprocal(numbers):
for number in numbers:
try:
reciprocal = 1 / number
print("The reciprocal of", number, "is", reciprocal)
except ZeroDivisionError:
print("Error: Division by zero")
```
If we call this function with the list `[2, 0, 5]`, it will print:
```
The reciprocal of 2 is 0.5
Error: Division by zero
The reciprocal of 5 is 0.2
```
The function successfully calculates the reciprocal of the numbers 2 and 5, but encounters a `ZeroDivisionError` when trying to calculate the reciprocal of 0. The exception is caught by the `except` block, which prints an error message.
## Exercise
Write a function named `calculate_average` that takes a list of numbers as an argument and calculates the average of the numbers. Handle the `ZeroDivisionError` exception to avoid division by zero. If the list is empty, the function should return `None`.
### Solution
```python
def calculate_average(numbers):
try:
average = sum(numbers) / len(numbers)
return average
except ZeroDivisionError:
print("Error: Division by zero")
return None
```
To test this function, you can call it with different lists of numbers, including an empty list:
```python
print(calculate_average([1, 2, 3, 4, 5]))
print(calculate_average([]))
```
The first call will calculate the average of the numbers 1, 2, 3, 4, and 5, and print the result. The second call will print an error message and return `None`.
# 9.2. Using Try and Except
In Python, we can handle exceptions using the `try` and `except` statements. The `try` block contains the code that might raise an exception, and the `except` block contains the code that handles the exception.
Here's the basic syntax:
```python
try:
# code that might raise an exception
except ExceptionType:
# code that handles the exception
```
When an exception is raised in the `try` block, Python looks for an `except` block that handles that specific exception. If it finds one, it executes the code in that block. If it doesn't find a matching `except` block, the exception is propagated up the call stack.
The `ExceptionType` in the `except` statement can be a specific exception class, such as `ZeroDivisionError`, or a base class like `Exception` that catches all exceptions. If you don't specify an exception type, the `except` block will catch all exceptions.
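For instance, a catch-all handler can bind the exception object to a name with `as`, so the error message can be inspected. Here is a brief sketch:
```python
try:
    value = int("not a number")
except Exception as e:
    # e is the exception instance; printing it shows the error message
    print("Something went wrong:", e)
```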
Suppose we have a function that divides two numbers:
```python
def divide(a, b):
try:
result = a / b
print("The result is", result)
except ZeroDivisionError:
print("Error: Division by zero")
```
If we call this function with `divide(10, 2)`, it will print:
```
The result is 5.0
```
The division is successful, so the code in the `try` block is executed. If we call it with `divide(10, 0)`, it will print:
```
Error: Division by zero
```
The division raises a `ZeroDivisionError`, so the code in the `except` block is executed.
## Exercise
Write a function named `calculate_square_root` that takes a number as an argument and calculates its square root. Handle the `ValueError` exception to avoid taking the square root of a negative number. If the number is negative, the function should return `None`.
### Solution
```python
import math
def calculate_square_root(number):
try:
square_root = math.sqrt(number)
return square_root
except ValueError:
print("Error: Cannot calculate square root of a negative number")
return None
```
To test this function, you can call it with different numbers, including negative numbers:
```python
print(calculate_square_root(16))
print(calculate_square_root(-9))
```
The first call will calculate the square root of 16 and print the result. The second call will print an error message and return `None`.
# 9.3. Finally Block
In addition to the `try` and `except` blocks, Python provides a `finally` block that can be used to specify code that should always be executed, regardless of whether an exception was raised or not.
Here's the basic syntax:
```python
try:
# code that might raise an exception
except ExceptionType:
# code that handles the exception
finally:
# code that is always executed
```
The code in the `finally` block is executed after the `try` or `except` block, regardless of whether an exception was raised or not. This can be useful for cleaning up resources, such as closing files or releasing locks.
Suppose we have a function that opens a file, reads its contents, and closes the file:
```python
def read_file(filename):
    file = None
    try:
        file = open(filename, 'r')
        contents = file.read()
        print("File contents:", contents)
    except FileNotFoundError:
        print("Error: File not found")
    finally:
        if file is not None:
            file.close()
            print("File closed")
```
If the file exists, the code in the `try` block is executed and the file is closed in the `finally` block. If the file doesn't exist, the code in the `except` block is executed; the `finally` block still runs, but the `if file is not None` check skips the call to `close()`. Without the `file = None` initialization and that check, the `finally` block would itself fail, because `file` is never assigned when `open()` raises the exception.
## Exercise
Write a function named `calculate_average` that takes a list of numbers as an argument and calculates the average of the numbers. Handle the `ZeroDivisionError` exception to avoid division by zero. If the list is empty, the function should return `None`. In the `finally` block, print a message indicating whether the calculation was successful or not.
### Solution
```python
def calculate_average(numbers):
try:
average = sum(numbers) / len(numbers)
return average
except ZeroDivisionError:
print("Error: Division by zero")
return None
finally:
if numbers:
print("Calculation successful")
else:
print("List is empty")
```
To test this function, you can call it with different lists of numbers, including an empty list:
```python
print(calculate_average([1, 2, 3, 4, 5]))
print(calculate_average([]))
```
The first call will calculate the average of the numbers 1, 2, 3, 4, and 5, and print the result. The second call will print an error message, return `None`, and print a message indicating that the list is empty.
# 9.4. Custom Exceptions
In addition to the built-in exceptions provided by Python, you can also define your own custom exceptions. Custom exceptions can be useful when you want to raise an exception that is specific to your program or module.
To define a custom exception, you need to create a new class that inherits from the `Exception` class or one of its subclasses. Here's an example:
```python
class MyException(Exception):
pass
```
In this example, `MyException` is a custom exception that inherits from the `Exception` class. You can then raise this exception using the `raise` statement:
```python
raise MyException("This is a custom exception")
```
When this code is executed, a `MyException` exception is raised with the specified message.
Suppose we have a function that calculates the factorial of a number. If the number is negative, we want to raise a custom exception called `NegativeNumberError`. Here's the code:
```python
class NegativeNumberError(Exception):
pass
def calculate_factorial(n):
if n < 0:
raise NegativeNumberError("Negative numbers are not allowed")
factorial = 1
for i in range(1, n + 1):
factorial *= i
return factorial
```
If we call this function with `calculate_factorial(-5)`, it will raise a `NegativeNumberError` exception with the specified message.
## Exercise
Write a function named `calculate_square_root` that takes a number as an argument and calculates its square root. If the number is negative, raise a custom exception called `NegativeNumberError` with the message "Negative numbers are not allowed" instead of letting `math.sqrt` raise a `ValueError`.
### Solution
```python
import math
class NegativeNumberError(Exception):
pass
def calculate_square_root(number):
if number < 0:
raise NegativeNumberError("Negative numbers are not allowed")
square_root = math.sqrt(number)
return square_root
```
To test this function, you can call it with different numbers, including negative numbers:
```python
try:
print(calculate_square_root(16))
print(calculate_square_root(-9))
except NegativeNumberError as e:
print("Error:", e)
```
The first call will calculate the square root of 16 and print the result. The second call will raise a `NegativeNumberError` exception with the specified message, which is caught in the `except` block and printed.
# 10. Financial Concepts
10.1. Time Value of Money
The time value of money is a concept that recognizes the fact that a dollar today is worth more than a dollar in the future. This is because money has the potential to earn interest or generate returns over time. To account for this, we need to discount future cash flows to their present value.
The basic formula for calculating the present value of a future cash flow is:
$$PV = \frac{FV}{(1 + r)^n}$$
Where:
- PV is the present value
- FV is the future value
- r is the discount rate
- n is the number of periods
For example, let's say you have the opportunity to receive $1,000 in one year. If the discount rate is 5%, the present value of that $1,000 would be:
$$PV = \frac{1000}{(1 + 0.05)^1} = \frac{1000}{1.05} \approx 952.38$$
This means that the present value of $1,000 to be received in one year is approximately $952.38.
10.2. Risk and Return
In finance, risk and return are closely related. Generally, investments with higher potential returns also come with higher levels of risk. This is because higher-risk investments are typically more volatile and have a greater chance of losing value.
To measure risk, we often use standard deviation, which measures the dispersion of returns around the average return. The higher the standard deviation, the greater the risk.
Return, on the other hand, measures the gain or loss generated by an investment relative to the amount invested. It is typically expressed as a percentage.
One common way to measure the risk-return tradeoff is through the use of the Sharpe ratio. The Sharpe ratio is calculated by subtracting the risk-free rate of return from the expected return of the investment, and then dividing by the standard deviation of the investment's returns.
$$\text{Sharpe Ratio} = \frac{\text{Expected Return} - \text{Risk-Free Rate}}{\text{Standard Deviation}}$$
A higher Sharpe ratio indicates a better risk-adjusted return.
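As a quick illustration, the Sharpe ratio is straightforward to compute in Python. The numbers below are hypothetical:
```python
def sharpe_ratio(expected_return, risk_free_rate, std_dev):
    """Excess return earned per unit of volatility."""
    return (expected_return - risk_free_rate) / std_dev

# Hypothetical investment: 9% expected return, 3% risk-free rate, 15% volatility
print(round(sharpe_ratio(0.09, 0.03, 0.15), 2))  # Output: 0.4
```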
10.3. Portfolio Theory
Portfolio theory, also known as modern portfolio theory, was developed by Harry Markowitz in the 1950s. It is based on the idea that an investor can construct a portfolio of assets that maximizes expected return for a given level of risk, or minimizes risk for a given level of expected return.
The key concept in portfolio theory is diversification. By investing in a mix of assets that are not perfectly correlated, an investor can reduce the overall risk of their portfolio. This is because when one asset performs poorly, another asset may perform well, offsetting the losses.
The two main measures used in portfolio theory are expected return and variance (or standard deviation) of returns. Expected return is the average return an investor can expect to earn from an investment, while variance measures the dispersion of returns around the expected return.
The efficient frontier is a graphical representation of the set of portfolios that offer the highest expected return for a given level of risk, or the lowest risk for a given level of expected return. By constructing a portfolio that lies on the efficient frontier, an investor can optimize their risk-return tradeoff.
## Exercise
1. Calculate the present value of $1,000 to be received in five years, assuming a discount rate of 3%.
2. Calculate the Sharpe ratio for an investment with an expected return of 8% and a standard deviation of 12%, assuming a risk-free rate of 2%.
3. Explain why diversification is important in portfolio theory.
### Solution
1. The present value of $1,000 to be received in five years, assuming a discount rate of 3%, is calculated as follows:
$$PV = \frac{1000}{(1 + 0.03)^5} = \frac{1000}{1.159274} \approx 862.61$$
Therefore, the present value of $1,000 to be received in five years is approximately $862.61.
2. The Sharpe ratio for an investment with an expected return of 8% and a standard deviation of 12%, assuming a risk-free rate of 2%, is calculated as follows:
$$\text{Sharpe Ratio} = \frac{0.08 - 0.02}{0.12} = \frac{0.06}{0.12} = 0.5$$
Therefore, the Sharpe ratio for this investment is 0.5.
3. Diversification is important in portfolio theory because it allows investors to reduce the overall risk of their portfolio. By investing in a mix of assets that are not perfectly correlated, an investor can offset the losses from one asset with the gains from another asset. This reduces the volatility of the portfolio and improves the risk-return tradeoff.
# 10.1. Time Value of Money
The time value of money is a fundamental concept in finance that recognizes the fact that a dollar today is worth more than a dollar in the future. This is because money has the potential to earn interest or generate returns over time. To account for this, we need to discount future cash flows to their present value.
The basic formula for calculating the present value of a future cash flow is:
$$PV = \frac{FV}{(1 + r)^n}$$
Where:
- PV is the present value
- FV is the future value
- r is the discount rate
- n is the number of periods
Let's break down the formula:
- The discount rate (r) represents the rate of return that could be earned on an investment of similar risk.
- The future value (FV) is the amount of money that will be received in the future.
- The number of periods (n) represents the length of time until the future cash flow is received.
By discounting the future cash flow, we can determine its present value, or how much it is worth in today's dollars. This allows us to compare cash flows that occur at different points in time and make informed financial decisions.
For example, let's say you have the opportunity to receive $1,000 in one year. If the discount rate is 5%, the present value of that $1,000 would be:
$$PV = \frac{1000}{(1 + 0.05)^1} = \frac{1000}{1.05} \approx 952.38$$
This means that the present value of $1,000 to be received in one year is approximately $952.38. By discounting the future cash flow, we can determine its present value and make informed decisions about its value in today's dollars.
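The same calculation is easy to express in Python. Here is a minimal sketch of a present-value helper; the function name is just illustrative:
```python
def present_value(future_value, rate, periods):
    """Discount a single future cash flow back to today's dollars."""
    return future_value / (1 + rate) ** periods

print(round(present_value(1000, 0.05, 1), 2))  # Output: 952.38
```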
Suppose you are considering two investment options: Option A and Option B. Option A offers a future cash flow of $5,000 in three years, while Option B offers a future cash flow of $7,000 in five years. The discount rate is 8%.
To determine which option is more valuable in today's dollars, we can calculate the present value of each cash flow using the time value of money formula.
For Option A:
$$PV_A = \frac{5000}{(1 + 0.08)^3} \approx 3969.16$$
For Option B:
$$PV_B = \frac{7000}{(1 + 0.08)^5} \approx 4764.08$$
Based on the present value calculations, Option B has a higher present value of approximately $4,764.08 compared to Option A's present value of approximately $3,969.16. Therefore, Option B is more valuable in today's dollars.
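The same comparison can be checked with a couple of lines of Python; the figures below match the calculation above:
```python
discount_rate = 0.08
pv_a = 5000 / (1 + discount_rate) ** 3  # Option A: $5,000 in three years
pv_b = 7000 / (1 + discount_rate) ** 5  # Option B: $7,000 in five years
print(round(pv_a, 2))  # approximately 3969.16
print(round(pv_b, 2))  # approximately 4764.08
print(pv_b > pv_a)     # True, so Option B is worth more today
```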
## Exercise
Calculate the present value of the following future cash flows, assuming a discount rate of 6%:
1. $10,000 to be received in two years
2. $20,000 to be received in five years
3. $30,000 to be received in ten years
### Solution
1. The present value of $10,000 to be received in two years, assuming a discount rate of 6%, is calculated as follows:
$$PV = \frac{10000}{(1 + 0.06)^2} \approx 8899.96$$
Therefore, the present value of $10,000 to be received in two years is approximately $8,899.96.
2. The present value of $20,000 to be received in five years, assuming a discount rate of 6%, is calculated as follows:
$$PV = \frac{20000}{(1 + 0.06)^5} \approx 14945.16$$
Therefore, the present value of $20,000 to be received in five years is approximately $14,945.16.
3. The present value of $30,000 to be received in ten years, assuming a discount rate of 6%, is calculated as follows:
$$PV = \frac{30000}{(1 + 0.06)^{10}} \approx 16751.84$$
Therefore, the present value of $30,000 to be received in ten years is approximately $16,751.84.
# 10.2. Risk and Return
In finance, risk and return are closely related concepts. Risk refers to the uncertainty or variability of returns on an investment, while return refers to the gain or loss generated by an investment over a specific period of time.
Investors are generally risk-averse, meaning they prefer investments with lower levels of risk. However, higher levels of risk are often associated with higher potential returns. This is known as the risk-return tradeoff.
There are several types of risk that investors consider when making investment decisions:
1. Market risk: This refers to the risk that the overall market will decline, causing the value of investments to decrease. Market risk cannot be eliminated through diversification.
2. Credit risk: This refers to the risk that a borrower will default on their debt obligations. Investors who lend money or invest in bonds are exposed to credit risk.
3. Liquidity risk: This refers to the risk that an investment cannot be easily bought or sold without causing a significant change in its price. Investments that are less liquid tend to have higher liquidity risk.
4. Interest rate risk: This refers to the risk that changes in interest rates will affect the value of fixed-income investments. When interest rates rise, the value of existing bonds decreases.
To assess the risk-return tradeoff of an investment, investors use various measures, such as standard deviation, beta, and Sharpe ratio. These measures help investors understand the historical volatility and potential returns of an investment.
Let's consider two investment options: Option X and Option Y. Option X is a low-risk investment that is expected to generate a return of 5% per year, while Option Y is a high-risk investment that is expected to generate a return of 10% per year.
To assess the risk-return tradeoff, we can calculate the standard deviation of returns for each option. The standard deviation measures the volatility or variability of returns.
Suppose the standard deviation of returns for Option X is 2% and the standard deviation of returns for Option Y is 8%. This means that Option Y has a higher level of risk compared to Option X.
Based on the risk-return tradeoff, investors who are risk-averse may prefer Option X because it offers a lower level of risk. However, investors who are willing to take on more risk in exchange for potentially higher returns may prefer Option Y.
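As a quick illustration of how volatility can be measured in practice, here is a sketch that computes the standard deviation of some hypothetical monthly returns using NumPy:
```python
import numpy as np

# Hypothetical monthly returns for two investments
returns_x = np.array([0.010, 0.012, 0.008, 0.011, 0.009])
returns_y = np.array([0.030, -0.020, 0.050, -0.010, 0.040])

# Standard deviation of returns as a simple measure of risk
print(np.std(returns_x))  # low volatility
print(np.std(returns_y))  # much higher volatility
```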
## Exercise
Consider the following investment options:
Option A: Expected return = 7%, Standard deviation of returns = 3%
Option B: Expected return = 10%, Standard deviation of returns = 5%
Option C: Expected return = 5%, Standard deviation of returns = 2%
Which option offers the highest potential return? Which option has the lowest level of risk? Which option has the highest risk-return tradeoff?
### Solution
Option B offers the highest potential return with an expected return of 10%.
Option C has the lowest level of risk with a standard deviation of returns of 2%.
Option B has the highest risk-return tradeoff because it offers a higher potential return (10%) compared to Option A (7%) and Option C (5%), but also has a higher level of risk (5% standard deviation of returns).
# 10.3. Portfolio Theory
Portfolio theory, also known as modern portfolio theory, is a framework for constructing and managing investment portfolios. It was developed by Harry Markowitz in the 1950s and earned him a Nobel Prize in Economics.
The central idea of portfolio theory is that an investor can reduce risk by diversifying their investments across different assets. By spreading investments across multiple assets, the investor can reduce the impact of any single investment on the overall portfolio.
The key concepts of portfolio theory include:
1. Expected return: This is the average return an investor can expect to earn from an investment over a specific period of time. It is calculated as the weighted average of the expected returns of the individual assets in the portfolio.
2. Risk: In portfolio theory, risk is measured by the variance or standard deviation of returns. A lower risk portfolio has a lower variance or standard deviation.
3. Covariance: This measures the relationship between the returns of two assets. A positive covariance indicates that the returns of the two assets move in the same direction, while a negative covariance indicates that the returns move in opposite directions.
4. Efficient frontier: This represents the set of portfolios that offer the highest expected return for a given level of risk. The efficient frontier helps investors find the optimal portfolio that maximizes return for a given level of risk.
By combining assets with different risk and return characteristics, investors can construct portfolios that offer a desirable risk-return tradeoff. This allows investors to achieve their financial goals while managing risk.
Suppose an investor has $100,000 to invest and is considering two assets: Asset A and Asset B. Asset A has an expected return of 8% and a standard deviation of 10%, while Asset B has an expected return of 12% and a standard deviation of 15%.
To construct an optimal portfolio, the investor needs to consider the expected returns, standard deviations, and covariance between the two assets. Let's assume the covariance between Asset A and Asset B is 0.02.
Using portfolio theory, the investor can calculate the weights of each asset in the portfolio that will minimize risk for a given level of return. The weights are determined by the investor's risk tolerance and financial goals.
For example, if the investor wants to achieve an expected return of 10%, they can calculate the weights as follows:
Weight of Asset A = (Expected return of Asset B - Expected return of the portfolio) / (Expected return of Asset B - Expected return of Asset A) = (12% - 10%) / (12% - 8%) = 0.5
Weight of Asset B = 1 - Weight of Asset A = 1 - 0.5 = 0.5
By allocating 50% of the portfolio to Asset A and 50% to Asset B, the investor can achieve an expected return of 10% while minimizing risk.
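The same weight calculation can be sketched in a few lines of Python; the variable names are illustrative:
```python
return_a = 0.08       # expected return of Asset A
return_b = 0.12       # expected return of Asset B
target_return = 0.10  # desired portfolio return

weight_a = (return_b - target_return) / (return_b - return_a)
weight_b = 1 - weight_a
print(round(weight_a, 2), round(weight_b, 2))  # Output: 0.5 0.5
print(round(weight_a * return_a + weight_b * return_b, 2))  # Output: 0.1
```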
## Exercise
Consider the following assets:
Asset A: Expected return = 6%, Standard deviation of returns = 4%
Asset B: Expected return = 8%, Standard deviation of returns = 6%
Asset C: Expected return = 10%, Standard deviation of returns = 8%
Using portfolio theory, calculate the weights of each asset in the portfolio that will minimize risk for a given level of return of 8%.
### Solution
A target portfolio return of 8% pins down only two conditions on the weights: they must sum to 1, and the weighted average of the expected returns must equal the target:
$$0.06\,w_A + 0.08\,w_B + 0.10\,w_C = 0.08, \qquad w_A + w_B + w_C = 1$$
With three assets, these two equations leave one degree of freedom, so many portfolios reach the 8% target: for example, 100% in Asset B, or 50% in Asset A and 50% in Asset C. Choosing the combination that actually minimizes risk requires the covariances between the assets, which are not given in the exercise. Under the simplifying assumption that the returns are uncorrelated, the minimum-variance weights for the 8% target work out to roughly 32% in Asset A, 36% in Asset B, and 32% in Asset C; the sketch below shows how to compute them numerically.
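Here is a minimal sketch of that computation with NumPy, under the stated assumption of uncorrelated returns (so the covariance matrix is diagonal); with real data you would substitute the full covariance matrix:
```python
import numpy as np

mu = np.array([0.06, 0.08, 0.10])     # expected returns of Assets A, B, C
sigma = np.array([0.04, 0.06, 0.08])  # standard deviations of Assets A, B, C
target = 0.08                         # target portfolio return

# Simplifying assumption: uncorrelated returns, so the covariance matrix is diagonal
cov = np.diag(sigma ** 2)
inv_cov = np.linalg.inv(cov)
ones = np.ones(len(mu))

# Minimum-variance weights for a target return, from the standard
# equality-constrained quadratic program solved with Lagrange multipliers:
# minimize w' C w  subject to  sum(w) = 1  and  w' mu = target
A = np.array([[ones @ inv_cov @ ones, ones @ inv_cov @ mu],
              [mu @ inv_cov @ ones,   mu @ inv_cov @ mu]])
b = np.array([1.0, target])
alpha, beta = np.linalg.solve(A, b)
weights = inv_cov @ (alpha * ones + beta * mu)

print(np.round(weights, 3))    # approximately [0.321, 0.357, 0.321]
print(round(weights @ mu, 4))  # approximately 0.08
```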
# 11. Financial Modeling
Financial modeling is the process of creating a mathematical representation of a financial situation or system. It is used to analyze and make predictions about financial outcomes, such as the performance of an investment portfolio, the valuation of a company, or the impact of different financial decisions.
Financial models are typically built using spreadsheets or specialized financial modeling software. They incorporate various financial data, assumptions, and calculations to simulate different scenarios and evaluate the potential outcomes.
Financial modeling is an essential tool for financial professionals, including investment bankers, financial analysts, and portfolio managers. It allows them to make informed decisions, assess risks, and communicate complex financial concepts to stakeholders.
There are different types of financial models, each serving a specific purpose. Some common types of financial models include:
1. Valuation models: These models are used to determine the value of a company or an asset. They can be based on various valuation techniques, such as discounted cash flow (DCF) analysis, comparable company analysis, or precedent transactions analysis.
2. Forecasting models: These models are used to predict future financial performance based on historical data and assumptions. They can be used to forecast revenue, expenses, cash flows, and other financial metrics.
3. Sensitivity analysis models: These models are used to assess the impact of different variables or assumptions on the financial outcomes. They help identify the key drivers of a financial model and evaluate the sensitivity of the results to changes in those drivers.
4. Scenario analysis models: These models are used to simulate different scenarios and evaluate the potential outcomes. They can be used to assess the impact of different market conditions, economic factors, or strategic decisions on the financial performance.
Let's consider an example of a financial model for a company's valuation. The model incorporates various financial data, such as historical financial statements, projected revenue growth, and industry benchmarks.
The model starts with the company's historical financial statements, including the income statement, balance sheet, and cash flow statement. It then makes assumptions about the company's future revenue growth, profit margins, and capital expenditure.
Based on these assumptions, the model projects the company's future cash flows and calculates its present value using a discounted cash flow (DCF) analysis. The DCF analysis takes into account the time value of money and the company's cost of capital.
The model also incorporates industry benchmarks and comparable company analysis to assess the company's relative valuation. It compares the company's financial metrics, such as price-to-earnings ratio or enterprise value-to-sales ratio, to those of similar companies in the industry.
By adjusting the assumptions and inputs in the model, financial analysts can perform sensitivity analysis and scenario analysis to evaluate the impact of different factors on the company's valuation.
## Exercise
Consider a company that is expected to generate the following cash flows over the next five years:
Year 1: $1,000
Year 2: $1,500
Year 3: $2,000
Year 4: $2,500
Year 5: $3,000
Assuming a discount rate of 10%, calculate the present value of these cash flows using a discounted cash flow (DCF) analysis.
### Solution
To calculate the present value of the cash flows, we need to discount each cash flow by the appropriate discount rate. The present value is calculated as follows:
PV = CF1 / (1 + r)^1 + CF2 / (1 + r)^2 + CF3 / (1 + r)^3 + CF4 / (1 + r)^4 + CF5 / (1 + r)^5
where PV is the present value, CF is the cash flow, r is the discount rate, and the superscript represents the year.
Using a discount rate of 10%, the present value of the cash flows is calculated as follows:
PV = $1,000 / (1 + 0.10)^1 + $1,500 / (1 + 0.10)^2 + $2,000 / (1 + 0.10)^3 + $2,500 / (1 + 0.10)^4 + $3,000 / (1 + 0.10)^5
PV = $1,000 / 1.10 + $1,500 / 1.10^2 + $2,000 / 1.10^3 + $2,500 / 1.10^4 + $3,000 / 1.10^5
PV = $909.09 + $1,239.67 + $1,502.63 + $1,707.53 + $1,862.76
PV ≈ $7,221.69
Therefore, the present value of the cash flows is approximately $7,221.69.
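The same discounted cash flow calculation can be expressed in a few lines of Python; the variable names are illustrative:
```python
cash_flows = [1000, 1500, 2000, 2500, 3000]  # cash flows for years 1 through 5
discount_rate = 0.10

present_value = sum(cf / (1 + discount_rate) ** year
                    for year, cf in enumerate(cash_flows, start=1))
print(round(present_value, 2))  # Output: 7221.69
```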
# 11.1. Introduction to Financial Modeling
Financial modeling is a crucial skill for anyone working in finance. It involves creating mathematical models to simulate and analyze financial situations. These models help professionals make informed decisions, evaluate risks, and predict future outcomes.
In this section, we'll introduce you to the basics of financial modeling. We'll cover the key concepts and techniques that you need to understand before diving into more complex models.
Financial modeling starts with identifying the problem or question that needs to be addressed. This could be anything from valuing a company to predicting future cash flows. Once the problem is defined, you can start building the model.
The first step in building a financial model is gathering the necessary data. This may include historical financial statements, market data, economic indicators, and industry benchmarks. It's important to ensure that the data is accurate and reliable.
Next, you'll need to make assumptions about the future. These assumptions will drive the model's calculations and predictions. They should be based on thorough research and analysis of relevant factors, such as market trends, industry dynamics, and company-specific information.
Once you have the data and assumptions, you can start building the model. This typically involves creating a spreadsheet or using specialized financial modeling software. The model should be structured in a logical and organized manner, with clear inputs, calculations, and outputs.
Let's say you want to build a financial model to evaluate the potential profitability of a new product launch. You gather data on the expected sales volume, pricing, production costs, and marketing expenses.
Based on this data, you make assumptions about the market size, market share, and customer demand. You also estimate the variable and fixed costs associated with producing and marketing the product.
Using these assumptions, you can calculate the projected revenue, costs, and profitability for different scenarios. You can also perform sensitivity analysis to assess the impact of changes in key variables, such as pricing or production costs.
The financial model allows you to evaluate the financial viability of the new product launch and make informed decisions about resource allocation, pricing strategies, and marketing investments.
## Exercise
Consider a company that is planning to launch a new product. You have gathered the following data:
- Expected sales volume: 10,000 units
- Selling price per unit: $50
- Variable cost per unit: $30
- Fixed costs: $100,000
- Marketing expenses: $50,000
Make assumptions about the market size, market share, and customer demand. Based on these assumptions, calculate the projected revenue, costs, and profitability for the new product launch.
### Solution
Assuming a market size of 100,000 units and a market share of 10%, the expected sales volume would be 10,000 units.
The projected revenue can be calculated as follows:
Projected revenue = Expected sales volume * Selling price per unit
= 10,000 units * $50
= $500,000
The variable costs can be calculated as follows:
Variable costs = Expected sales volume * Variable cost per unit
= 10,000 units * $30
= $300,000
The total costs can be calculated as follows:
Total costs = Fixed costs + Marketing expenses + Variable costs
= $100,000 + $50,000 + $300,000
= $450,000
The profitability can be calculated as follows:
Profitability = Projected revenue - Total costs
= $500,000 - $450,000
= $50,000
Therefore, the projected revenue for the new product launch is $500,000, the total costs are $450,000, and the profitability is $50,000.
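The same projection is easy to double-check in code; here is a minimal sketch using the assumptions from the exercise:
```python
sales_volume = 10000         # units
price_per_unit = 50          # dollars
variable_cost_per_unit = 30  # dollars
fixed_costs = 100000         # dollars
marketing_expenses = 50000   # dollars

revenue = sales_volume * price_per_unit
variable_costs = sales_volume * variable_cost_per_unit
total_costs = fixed_costs + marketing_expenses + variable_costs
profit = revenue - total_costs

print(revenue, total_costs, profit)  # Output: 500000 450000 50000
```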
# 11.2. Building a Simple Financial Model in Excel
To start, open a new Excel workbook and create a new sheet for your financial model. Name the sheet something descriptive, like "Financial Model" or "Projections".
Next, set up the structure of your model. This typically involves creating columns for the time period (e.g., months or years), and rows for the different variables and calculations.
In our example, let's say we want to build a financial model to project the revenue and expenses for a small business over the next three years. We'll use monthly time periods.
In the first column, enter the time periods (e.g., January, February, March, etc.). In the second column, enter the revenue projections for each month. In the third column, enter the expense projections for each month.
Here's an example of what your financial model might look like in Excel:
| Time Period | Revenue | Expenses |
|-------------|---------|----------|
| January | $10,000 | $8,000 |
| February | $12,000 | $9,000 |
| March | $15,000 | $10,000 |
| ... | ... | ... |
You can continue this pattern for the remaining time periods and variables in your model.
Once you have set up the structure of your model, you can start entering the formulas and calculations. For example, you might want to calculate the net income for each month by subtracting the expenses from the revenue.
To do this, select the cell where you want the net income calculation to appear (e.g., cell D2), and enter the formula `=B2-C2`. This formula subtracts the value in cell C2 (expenses) from the value in cell B2 (revenue).
## Exercise
Using the financial model example above, calculate the net income for each month by subtracting the expenses from the revenue. Enter the formulas in the appropriate cells.
### Solution
To calculate the net income for each month, enter the formula `=B2-C2` in cell D2. Then, copy this formula and paste it in the remaining cells in column D.
The resulting financial model should look like this:
| Time Period | Revenue | Expenses | Net Income |
|-------------|---------|----------|------------|
| January | $10,000 | $8,000 | $2,000 |
| February | $12,000 | $9,000 | $3,000 |
| March | $15,000 | $10,000 | $5,000 |
| ... | ... | ... | ... |
This calculation allows you to see the net income for each month, which is an important metric for evaluating the financial performance of the business.
# 11.3. Translating a Financial Model to Python
To start, you'll need to install Python and the necessary packages for financial modeling. We recommend using Anaconda, which includes Python and popular packages like pandas and numpy.
Once you have Python set up, you can start coding your financial model. The first step is to import the necessary libraries, such as pandas and numpy.
Next, you'll need to gather the data for your model. This may involve reading data from a CSV file, querying a database, or scraping data from the web. Pandas provides functions and methods for these tasks, making it easy to import and manipulate data.
Once you have the data, you can start performing the calculations and analysis. Python offers a wide range of mathematical and statistical functions, as well as libraries like pandas and numpy for data manipulation and analysis.
Here's an example of how you might translate the revenue and expense projections from our Excel model to Python:
```python
import pandas as pd
# Read the data from a CSV file
data = pd.read_csv('financial_data.csv')
# Calculate the net income
data['Net Income'] = data['Revenue'] - data['Expenses']
# Print the resulting data
print(data)
```
This code reads the data from a CSV file, calculates the net income by subtracting the expenses from the revenue, and then prints the resulting data.
## Exercise
Using the financial model example above, translate the revenue and expense projections from Excel to Python. Read the data from a CSV file, calculate the net income, and print the resulting data.
### Solution
Assuming you have a CSV file named "financial_data.csv" with the following data:
```
Time Period,Revenue,Expenses
January,10000,8000
February,12000,9000
March,15000,10000
```
You can translate the financial model to Python using the following code:
```python
import pandas as pd
# Read the data from a CSV file
data = pd.read_csv('financial_data.csv')
# Calculate the net income
data['Net Income'] = data['Revenue'] - data['Expenses']
# Print the resulting data
print(data)
```
The code reads the data from the CSV file, calculates the net income by subtracting the expenses from the revenue, and then prints the resulting data:
```
Time Period Revenue Expenses Net Income
0 January 10000 8000 2000
1 February 12000 9000 3000
2 March 15000 10000 5000
```
This allows you to perform the calculations and analysis in Python, leveraging the capabilities of the language and its libraries.
# 12. Data Analysis with Python
To start, you'll need to import the necessary libraries for data analysis. We recommend using pandas, a powerful library for data manipulation and analysis. You'll also need matplotlib or seaborn for data visualization.
Once you have the libraries installed, you can start importing and cleaning your data. Pandas provides functions and methods for reading data from various sources, such as CSV files, Excel spreadsheets, and databases. You can also clean your data by removing missing values, handling duplicates, and transforming data types.
After cleaning your data, you can start manipulating and analyzing it. Pandas offers a wide range of functions and methods for data manipulation, such as filtering, sorting, grouping, and aggregating data. You can also perform calculations, apply statistical functions, and create new variables based on existing ones.
Once you have your data manipulated, you can visualize it to gain insights and communicate your findings. Matplotlib and seaborn provide functions for creating various types of plots, such as line plots, bar plots, scatter plots, and histograms. These plots can help you identify patterns, trends, and relationships in your data.
Here's an example of how you might perform data analysis on a financial dataset using Python:
```python
import pandas as pd
import matplotlib.pyplot as plt
# Read the data from a CSV file
data = pd.read_csv('financial_data.csv')
# Clean the data
data.dropna(inplace=True)
# Calculate the average revenue by month
monthly_revenue = data.groupby('Month')['Revenue'].mean()
# Plot the average revenue
plt.plot(monthly_revenue)
plt.xlabel('Month')
plt.ylabel('Average Revenue')
plt.title('Monthly Revenue')
plt.show()
```
This code reads the data from a CSV file, drops any rows with missing values, calculates the average revenue by month using the groupby function, and then plots the average revenue using matplotlib.
## Exercise
Using the financial dataset example above, perform data analysis in Python. Read the data from a CSV file, clean the data by removing missing values, calculate the average revenue by month, and plot the average revenue.
### Solution
Assuming you have a CSV file named "financial_data.csv" with the following data:
```
Month,Revenue
January,10000
February,12000
March,15000
```
You can perform data analysis in Python using the following code:
```python
import pandas as pd
import matplotlib.pyplot as plt
# Read the data from a CSV file
data = pd.read_csv('financial_data.csv')
# Clean the data
data.dropna(inplace=True)
# Calculate the average revenue by month
monthly_revenue = data.groupby('Month')['Revenue'].mean()
# Plot the average revenue
plt.plot(monthly_revenue)
plt.xlabel('Month')
plt.ylabel('Average Revenue')
plt.title('Monthly Revenue')
plt.show()
```
The code reads the data from the CSV file, drops any rows with missing values, calculates the average revenue by month using the groupby function, and then plots the average revenue using matplotlib:

This allows you to perform data analysis in Python, leveraging the capabilities of pandas and matplotlib to gain insights from your data.
# 12.1. Importing Financial Data
Before you can perform data analysis on financial data, you need to import the data into Python. There are several ways to import financial data, depending on the source and format of the data.
One common way to import financial data is from a CSV (comma-separated values) file. CSV files are a simple and widely supported format for storing tabular data. You can use the `read_csv()` function from the pandas library to import data from a CSV file.
Here's an example of how to import financial data from a CSV file:
```python
import pandas as pd
# Read the data from a CSV file
data = pd.read_csv('financial_data.csv')
```
In this example, the `read_csv()` function reads the data from the file named "financial_data.csv" and stores it in a pandas DataFrame called `data`. You can then use the `data` DataFrame to manipulate and analyze the data.
Another way to import financial data is from an Excel spreadsheet. Excel spreadsheets are commonly used to store financial data, and pandas provides a `read_excel()` function to import data from Excel files.
Here's an example of how to import financial data from an Excel file:
```python
import pandas as pd
# Read the data from an Excel file
data = pd.read_excel('financial_data.xlsx')
```
In this example, the `read_excel()` function reads the data from the file named "financial_data.xlsx" and stores it in a pandas DataFrame called `data`.
You can also import financial data from databases, web APIs, and other sources using specialized libraries and functions. These methods may require additional setup and configuration, but they provide more flexibility and functionality for importing data.
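For example, pandas can read the result of a SQL query straight into a DataFrame with `read_sql()`. The sketch below builds a throwaway in-memory SQLite database purely for illustration; the table name and columns are invented for the example.

```python
import sqlite3
import pandas as pd

# Create a temporary in-memory database for the example
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE prices (month TEXT, revenue REAL)')
conn.executemany('INSERT INTO prices VALUES (?, ?)',
                 [('January', 10000), ('February', 12000), ('March', 15000)])

# Read the query result into a pandas DataFrame
data = pd.read_sql('SELECT month, revenue FROM prices', conn)
print(data)
```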
Once you have imported the financial data into Python, you can start analyzing and manipulating it using the tools and functions provided by pandas and other libraries.
# 12.2. Data Cleaning and Manipulation
After importing financial data into Python, it's important to clean and manipulate the data to ensure its accuracy and usability for analysis. Data cleaning involves removing or correcting any errors, inconsistencies, or missing values in the data. Data manipulation involves transforming the data to make it suitable for analysis.
One common data cleaning task is handling missing values. Missing values can occur when data is not available or was not recorded for certain observations. Pandas provides several functions to handle missing values, such as `dropna()` to remove rows or columns with missing values and `fillna()` to fill missing values with a specified value or method.
Here's an example of how to handle missing values in a pandas DataFrame:
```python
import pandas as pd
# Remove rows with missing values
clean_data = data.dropna()
# Fill missing values with the mean
clean_data = data.fillna(data.mean(numeric_only=True))
```
In this example, the `dropna()` function removes any rows with missing values from the DataFrame `data`, creating a new DataFrame called `clean_data`. The `fillna()` function fills any missing values in the DataFrame `data` with the mean value of the column, creating a new DataFrame called `clean_data`.
Data manipulation involves transforming the data to make it suitable for analysis. This can include tasks such as converting data types, merging or joining multiple datasets, and aggregating data. Pandas provides a wide range of functions and methods to perform these tasks, such as `astype()` to convert data types, `merge()` and `join()` to combine datasets, and `groupby()` to aggregate data.
Here's an example of how to manipulate financial data using pandas:
```python
import pandas as pd
# Convert a column to a different data type
data['date'] = pd.to_datetime(data['date'])
# Merge two datasets based on a common column
merged_data = pd.merge(data1, data2, on='date')
# Group data by a column and calculate the mean
grouped_data = data.groupby('category')['value'].mean()
```
In this example, the `to_datetime()` function converts the 'date' column in the DataFrame `data` to datetime format. The `merge()` function merges two datasets `data1` and `data2` based on the common column 'date', creating a new DataFrame called `merged_data`. The `groupby()` function groups the data in the DataFrame `data` by the 'category' column and calculates the mean value of the 'value' column for each group, creating a new DataFrame called `grouped_data`.
By cleaning and manipulating financial data, you can ensure its quality and prepare it for analysis and modeling.
# 12.3. Exploratory Data Analysis
Exploratory Data Analysis (EDA) is an important step in the data analysis process. It involves examining and visualizing the data to gain insights and identify patterns or trends. EDA helps you understand the data and make informed decisions about the analysis techniques to use.
Pandas provides various functions and methods to perform EDA on financial data. These include descriptive statistics, data visualization, and correlation analysis.
Descriptive statistics summarize the main characteristics of the data, such as measures of central tendency (mean, median) and measures of dispersion (standard deviation, range). Pandas provides functions like `describe()` and `mean()` to calculate these statistics.
Here's an example of how to calculate descriptive statistics for a financial dataset using pandas:
```python
import pandas as pd
# Calculate descriptive statistics
stats = data.describe()
mean = data.mean()
```
In this example, the `describe()` function calculates descriptive statistics for the DataFrame `data` and returns a summary of the statistics, such as count, mean, standard deviation, minimum, and maximum. The `mean()` function calculates the mean value of each column in the DataFrame `data`.
Data visualization is another important aspect of EDA. It allows you to visually explore the data and identify patterns or trends. Pandas provides functions to create various types of plots, such as line plots, bar plots, and scatter plots.
Here's an example of how to create a line plot for a financial dataset using pandas:
```python
import pandas as pd
import matplotlib.pyplot as plt
# Create a line plot
data.plot(x='date', y='price', kind='line')
plt.show()
```
In this example, the `plot()` function creates a line plot for the DataFrame `data`, using the 'date' column as the x-axis and the 'price' column as the y-axis. The `kind='line'` parameter specifies the type of plot to create. The `show()` function displays the plot.
Correlation analysis is used to measure the relationship between two or more variables. It helps identify the strength and direction of the relationship. Pandas provides the `corr()` function to calculate the correlation matrix for a DataFrame.
Here's an example of how to calculate the correlation matrix for a financial dataset using pandas:
```python
import pandas as pd
# Calculate the correlation matrix
correlation_matrix = data.corr()
```
In this example, the `corr()` function calculates the correlation matrix for the DataFrame `data` and returns a matrix of correlation coefficients.
By performing EDA on financial data, you can gain insights and make informed decisions about the analysis techniques to use. EDA helps you understand the data and uncover patterns or trends that may not be immediately apparent.
# 12.4. Visualization of Financial Data
Visualization is a powerful tool for understanding and communicating financial data. It allows you to explore patterns, trends, and relationships in the data, and present your findings in a clear and compelling way.
Python provides several libraries for creating visualizations, including Matplotlib, Seaborn, and Plotly. These libraries offer a wide range of plot types and customization options to suit your needs.
Matplotlib is a popular library for creating static, publication-quality visualizations. It provides a flexible and intuitive interface for creating a variety of plots, such as line plots, bar plots, scatter plots, and histograms.
Here's an example of how to create a line plot using Matplotlib:
```python
import matplotlib.pyplot as plt
# Create a line plot
plt.plot(x_values, y_values)
plt.xlabel('X-axis label')
plt.ylabel('Y-axis label')
plt.title('Title of the plot')
plt.show()
```
In this example, the `plot()` function creates a line plot using the `x_values` and `y_values` arrays. The `xlabel()`, `ylabel()`, and `title()` functions set the labels and title for the plot. The `show()` function displays the plot.
Seaborn is a library built on top of Matplotlib that provides a higher-level interface for creating statistical visualizations. It offers a set of predefined themes and color palettes, as well as functions for creating more complex plots, such as heatmaps and violin plots.
Here's an example of how to create a scatter plot using Seaborn:
```python
import seaborn as sns
# Create a scatter plot
sns.scatterplot(x='x_column', y='y_column', data=data)
plt.xlabel('X-axis label')
plt.ylabel('Y-axis label')
plt.title('Title of the plot')
plt.show()
```
In this example, the `scatterplot()` function creates a scatter plot using the `x_column` and `y_column` columns from the DataFrame `data`. The `xlabel()`, `ylabel()`, and `title()` functions set the labels and title for the plot. The `show()` function displays the plot.
Plotly is a library that provides interactive visualizations that can be embedded in web applications or notebooks. It offers a wide range of plot types, including line plots, bar plots, scatter plots, and 3D plots. Plotly plots can be customized and annotated with interactive features, such as tooltips and zooming.
Here's an example of how to create an interactive line plot using Plotly:
```python
import plotly.express as px
# Create an interactive line plot
fig = px.line(data_frame=data, x='x_column', y='y_column')
fig.update_layout(title='Title of the plot')
fig.show()
```
In this example, the `line()` function creates a line plot using the `x_column` and `y_column` columns from the DataFrame `data`. The `update_layout()` function sets the title for the plot. The `show()` function displays the plot.
By using these visualization libraries, you can create informative and visually appealing plots to explore and communicate financial data. Whether you need static plots for a report or interactive plots for a web application, Python has the tools you need.
# 13. Algorithmic Trading with Python
Algorithmic trading is the use of computer algorithms to automate the process of buying and selling financial assets. It involves developing trading strategies based on mathematical models and executing trades automatically without human intervention.
Python is a popular programming language for algorithmic trading due to its simplicity, flexibility, and extensive libraries for data analysis and machine learning. In this section, we will explore the basics of algorithmic trading and learn how to design and implement trading strategies using Python.
Before we dive into the details of algorithmic trading, let's first understand the basics of financial markets and trading.
Financial markets are where buyers and sellers trade financial assets such as stocks, bonds, currencies, and commodities. These markets provide a platform for investors to buy and sell assets based on their expectations of future price movements.
Trading involves the process of buying and selling financial assets with the goal of making a profit. Traders use various strategies and techniques to analyze market data and make informed trading decisions.
Algorithmic trading takes trading to the next level by automating the entire process. Instead of manually executing trades, algorithms analyze market data, identify trading opportunities, and execute trades automatically based on predefined rules.
For example, let's say you have a trading strategy that involves buying a stock when its price crosses above a certain moving average and selling it when the price crosses below another moving average. Instead of monitoring the market and executing trades manually, you can write a Python program that does this automatically.
Here's an example of a simple algorithmic trading strategy in Python:
```python
import pandas as pd
# Load historical stock data
data = pd.read_csv('stock_data.csv')
# Calculate moving averages
data['MA_50'] = data['Close'].rolling(window=50).mean()
data['MA_200'] = data['Close'].rolling(window=200).mean()
# Generate trading signals
data['Signal'] = 0
data.loc[data['Close'] > data['MA_50'], 'Signal'] = 1
data.loc[data['Close'] < data['MA_200'], 'Signal'] = -1
# Execute trades
positions = data['Signal'].diff()
data['Position'] = positions.fillna(0).cumsum()
# Calculate returns (using the previous period's position to avoid look-ahead bias)
data['Return'] = data['Position'].shift(1) * data['Close'].pct_change()
# Calculate cumulative returns
data['Cumulative_Return'] = (1 + data['Return']).cumprod()
# Plot cumulative returns
data['Cumulative_Return'].plot()
```
In this example, we load historical stock data from a CSV file and calculate two moving averages. We generate trading signals based on the crossing of these moving averages and execute trades accordingly. We then calculate the returns and cumulative returns of the trading strategy and plot the cumulative returns over time.
## Exercise
Using the provided historical stock data, implement a simple algorithmic trading strategy in Python. The strategy should involve buying the stock when its price crosses above a certain moving average and selling it when the price crosses below another moving average.
### Solution
```python
import pandas as pd
# Load historical stock data
data = pd.read_csv('stock_data.csv')
# Calculate moving averages
data['MA_50'] = data['Close'].rolling(window=50).mean()
data['MA_200'] = data['Close'].rolling(window=200).mean()
# Generate trading signals
data['Signal'] = 0
data.loc[data['Close'] > data['MA_50'], 'Signal'] = 1
data.loc[data['Close'] < data['MA_200'], 'Signal'] = -1
# Execute trades
positions = data['Signal'].diff()
data['Position'] = positions.fillna(0).cumsum()
# Calculate returns (using the previous period's position to avoid look-ahead bias)
data['Return'] = data['Position'].shift(1) * data['Close'].pct_change()
# Calculate cumulative returns
data['Cumulative_Return'] = (1 + data['Return']).cumprod()
# Plot cumulative returns
data['Cumulative_Return'].plot()
```
In this solution, we load the historical stock data from a CSV file and calculate two moving averages. We generate trading signals based on the crossing of these moving averages and execute trades accordingly. We then calculate the returns and cumulative returns of the trading strategy and plot the cumulative returns over time.
# 13.1. Basics of Algorithmic Trading
Algorithmic trading involves the use of computer algorithms to automate the process of buying and selling financial assets. These algorithms analyze market data, identify trading opportunities, and execute trades automatically based on predefined rules.
There are several advantages to algorithmic trading. First, it eliminates the need for human intervention in the trading process, which can help reduce emotional biases and improve trading discipline. Second, algorithms can analyze large amounts of market data and execute trades at high speeds, allowing for quick reaction to market conditions. Finally, algorithmic trading can help increase efficiency and reduce transaction costs by automating the trading process.
To develop an algorithmic trading strategy, traders need to define the rules and conditions for entering and exiting trades. These rules can be based on technical indicators, fundamental analysis, or a combination of both. Traders also need to consider risk management techniques, such as setting stop-loss orders to limit potential losses.
Python is a popular programming language for algorithmic trading due to its simplicity, flexibility, and extensive libraries for data analysis and machine learning. In the following sections, we will explore various aspects of algorithmic trading using Python.
Before we dive into the details of algorithmic trading, let's first understand some key concepts and terminology used in the field.
- Market data: This refers to the information about the prices and volumes of financial assets, such as stocks, currencies, and commodities. Market data can be obtained from various sources, such as stock exchanges, financial news websites, and data providers.
- Trading strategy: This is a set of rules and conditions that determine when to enter and exit trades. Trading strategies can be based on technical indicators, fundamental analysis, or a combination of both. The goal of a trading strategy is to generate profits by taking advantage of market inefficiencies or trends.
- Backtesting: This is the process of testing a trading strategy on historical market data to evaluate its performance. Backtesting allows traders to assess the profitability and risk of a strategy before deploying it in live trading. Python provides libraries, such as pandas and numpy, that make it easy to perform backtesting.
- Order execution: This refers to the process of executing trades based on the trading strategy. Orders can be executed manually by traders or automatically by algorithms. In algorithmic trading, orders are usually executed using electronic trading platforms or application programming interfaces (APIs) provided by brokers.
- Risk management: This involves managing the risks associated with trading, such as potential losses and market volatility. Traders use various techniques, such as setting stop-loss orders and position sizing, to limit their exposure to risk. Python provides libraries, such as pyfolio and riskfolio, that can help with risk management in algorithmic trading.
In the following sections, we will explore these concepts in more detail and learn how to implement algorithmic trading strategies using Python. We will also cover topics such as data analysis, machine learning, and performance evaluation in the context of algorithmic trading. Let's get started!
# 13.2. Designing and Implementing Trading Strategies
There are several factors to consider when designing a trading strategy. First, you need to determine the time frame and frequency of your trades. Are you looking to make short-term trades based on intraday price movements, or are you interested in longer-term trends? The time frame will influence the types of indicators and signals you use in your strategy.
Next, you need to choose the type of trading strategy you want to implement. There are various types of strategies, including trend-following, mean-reversion, breakout, and momentum strategies. Each strategy has its own set of rules and conditions for entering and exiting trades. It's important to choose a strategy that aligns with your trading goals and risk tolerance.
Once you have chosen a strategy, you can start implementing it in Python. Python provides several libraries for technical analysis and trading strategy development, such as pandas, numpy, and ta. These libraries allow you to easily calculate technical indicators, generate trading signals, and backtest your strategy using historical market data.
When implementing a trading strategy, it's important to consider risk management techniques. This includes setting stop-loss orders to limit potential losses, as well as position sizing to manage the amount of capital allocated to each trade. Python provides libraries, such as riskfolio and pyfolio, that can help with risk management in algorithmic trading.
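As a small illustration of strategy design, here's a sketch that generates signals for a simple mean-reversion strategy using a rolling z-score computed with pandas. The 20-day window and the ±1 thresholds are arbitrary assumptions for the example, not recommendations, and the file `stock_data.csv` is assumed to contain a `Close` column.

```python
import pandas as pd

prices = pd.read_csv('stock_data.csv')['Close']

# Rolling z-score of the price relative to its 20-day mean
window = 20
zscore = (prices - prices.rolling(window).mean()) / prices.rolling(window).std()

# Mean-reversion rules: buy when the price is unusually low, sell when unusually high
signal = pd.Series(0, index=prices.index)
signal[zscore < -1.0] = 1    # long
signal[zscore > 1.0] = -1    # short

print(signal.value_counts())
```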
# 13.3. Backtesting and Evaluating Performance
To perform backtesting, you need historical market data for the assets you want to trade. This data includes information about the prices and volumes of the assets over a specific time period. Python provides libraries, such as pandas and numpy, that make it easy to import and manipulate historical market data.
Once you have the historical market data, you can start backtesting your trading strategy. This involves applying the rules and conditions of your strategy to the historical data and simulating the trades that would have been executed. Python provides libraries, such as backtrader and zipline, that make it easy to perform backtesting.
During the backtesting process, it's important to consider transaction costs and slippage. Transaction costs include fees and commissions associated with executing trades, while slippage refers to the difference between the expected price of a trade and the actual executed price. Python provides libraries, such as pyfolio, that can help you estimate transaction costs and slippage in your backtesting.
Once you have backtested your trading strategy, you can evaluate its performance using various metrics. These metrics include measures of profitability, such as the total return and the Sharpe ratio, as well as risk measures, such as the maximum drawdown and the volatility. Python provides libraries, such as pyfolio and empyrical, that make it easy to calculate these metrics.
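As a sketch of performance evaluation, the snippet below computes an annualized Sharpe ratio and the maximum drawdown directly with pandas. For illustration it uses the buy-and-hold returns of the assumed `stock_data.csv` file; the same code applies to a series of strategy returns. A zero risk-free rate and 252 trading days per year are simplifying assumptions.

```python
import numpy as np
import pandas as pd

# Periodic returns; here simply the buy-and-hold returns of the underlying stock
returns = pd.read_csv('stock_data.csv')['Close'].pct_change().dropna()

# Annualized Sharpe ratio (zero risk-free rate assumed)
sharpe = np.sqrt(252) * returns.mean() / returns.std()

# Maximum drawdown of the cumulative return curve
cumulative = (1 + returns).cumprod()
drawdown = cumulative / cumulative.cummax() - 1
max_drawdown = drawdown.min()

print(f"Sharpe ratio: {sharpe:.2f}, maximum drawdown: {max_drawdown:.1%}")
```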
# 14. Advanced Topics in Python for Finance
One of the advanced topics we will cover is object-oriented programming (OOP) in finance. OOP is a programming paradigm that allows you to create objects that encapsulate data and behavior. In finance, OOP can be used to model financial instruments, such as options and futures, as well as financial portfolios. Python provides built-in support for OOP, making it easy to implement complex financial models and simulations.
Another advanced topic we will cover is web scraping and data extraction. Web scraping involves extracting data from websites using automated tools. In finance, web scraping can be used to collect financial data, such as stock prices and company financials, from various sources. Python provides libraries, such as BeautifulSoup and Scrapy, that make it easy to scrape and extract data from websites.
Finally, we will explore machine learning for financial applications. Machine learning involves training models on data to make predictions or take actions. In finance, machine learning can be used for various tasks, such as predicting stock prices, classifying financial statements, and optimizing trading strategies. Python provides libraries, such as scikit-learn and TensorFlow, that make it easy to implement machine learning algorithms in finance.
In the following sections, we will dive into these advanced topics and learn how to apply them to real-world finance problems. We will provide practical examples and code snippets to illustrate the concepts. Let's get started!
# 14.1. Object-Oriented Programming in Finance
Object-oriented programming (OOP) is a programming paradigm that allows you to create objects that encapsulate data and behavior. In finance, OOP can be used to model financial instruments, such as options and futures, as well as financial portfolios.
In this section, we will explore the basics of OOP in Python and learn how to implement financial models and simulations using OOP principles. We will cover topics such as classes, objects, attributes, methods, and inheritance.
A class is a blueprint for creating objects. It defines the attributes and methods that an object of the class will have. For example, a class called "Option" can have attributes such as strike price, expiration date, and type (call or put), as well as methods such as calculate payoff and calculate option price.
An object is an instance of a class. It represents a specific entity that has the attributes and behaviors defined by the class. For example, an object of the "Option" class can represent a specific option contract with a specific strike price, expiration date, and type.
Attributes are variables that store data associated with an object. They represent the state of the object. For example, the strike price and expiration date of an option are attributes of the option object.
Methods are functions that define the behavior of an object. They represent the actions that an object can perform. For example, the calculate payoff method of an option object can calculate the payoff of the option based on the current price of the underlying asset.
Inheritance is a mechanism that allows you to create a new class based on an existing class. The new class inherits the attributes and methods of the existing class and can add new attributes and methods or override the existing ones. For example, you can create a new class called "EuropeanOption" that inherits from the "Option" class and adds a new attribute called "exercise style" and a new method called "calculate option price".
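To make these ideas concrete, here's a minimal sketch of an `Option` class and a `EuropeanOption` subclass. The attribute names and the payoff logic are illustrative choices, not a full pricing model.

```python
class Option:
    """A plain vanilla option contract."""

    def __init__(self, strike, expiration, option_type):
        self.strike = strike              # strike price
        self.expiration = expiration      # expiration date (e.g. a string or datetime)
        self.option_type = option_type    # 'call' or 'put'

    def payoff(self, spot):
        """Payoff at expiration given the underlying's spot price."""
        if self.option_type == 'call':
            return max(spot - self.strike, 0.0)
        return max(self.strike - spot, 0.0)


class EuropeanOption(Option):
    """An Option that can only be exercised at expiration."""

    exercise_style = 'European'


call = EuropeanOption(strike=100, expiration='2025-12-19', option_type='call')
print(call.payoff(spot=112))   # 12.0
```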
# 14.2. Web Scraping and Data Extraction
Web scraping involves extracting data from websites using automated tools. In finance, web scraping can be used to collect financial data, such as stock prices and company financials, from various sources.
In this section, we will explore the basics of web scraping in Python and learn how to extract financial data from websites. We will cover topics such as HTML parsing, CSS selectors, and data extraction techniques.
HTML parsing is the process of analyzing the structure of an HTML document and extracting the desired data. Python provides libraries, such as BeautifulSoup and lxml, that make it easy to parse HTML documents and extract data from them.
CSS selectors are patterns used to select elements in an HTML document. They allow you to specify the criteria for selecting elements based on their attributes, classes, and hierarchy. Python provides libraries, such as BeautifulSoup and cssselect, that make it easy to select elements using CSS selectors.
Data extraction techniques involve identifying the specific elements in an HTML document that contain the desired data and extracting the data from them. This can be done using techniques such as regular expressions, string manipulation, and data cleaning. Python provides libraries, such as re and pandas, that make it easy to extract and manipulate data.
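Here's a self-contained sketch that parses a small HTML snippet with BeautifulSoup and extracts values with CSS selectors. The HTML, class names, and tickers are invented for the example; a real script would first download the page (for instance with the `requests` library) and should respect the site's terms of use.

```python
from bs4 import BeautifulSoup

html = """
<table class="quotes">
  <tr><td class="ticker">ABC</td><td class="price">101.25</td></tr>
  <tr><td class="ticker">XYZ</td><td class="price">55.10</td></tr>
</table>
"""

soup = BeautifulSoup(html, 'html.parser')

# CSS selectors: pick every row inside the quotes table
for row in soup.select('table.quotes tr'):
    ticker = row.select_one('td.ticker').text
    price = float(row.select_one('td.price').text)
    print(ticker, price)
```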
# 14.3. Machine Learning for Financial Applications
Machine learning involves training models on data to make predictions or take actions. In finance, machine learning can be used for various tasks, such as predicting stock prices, classifying financial statements, and optimizing trading strategies.
In this section, we will explore the basics of machine learning in Python and learn how to apply machine learning techniques to financial applications. We will cover topics such as data preprocessing, model training, model evaluation, and model deployment.
Data preprocessing involves preparing the data for training a machine learning model. This includes tasks such as cleaning the data, handling missing values, encoding categorical variables, and scaling the features. Python provides libraries, such as pandas and scikit-learn, that make it easy to preprocess data.
Model training involves training a machine learning model on a labeled dataset to learn patterns and make predictions. Python provides libraries, such as scikit-learn and TensorFlow, that make it easy to train machine learning models. These libraries provide a wide range of algorithms, such as linear regression, decision trees, support vector machines, and neural networks.
Model evaluation involves assessing the performance of a machine learning model on a test dataset. This includes calculating metrics such as accuracy, precision, recall, and F1 score. Python provides libraries, such as scikit-learn and keras, that make it easy to evaluate machine learning models.
Model deployment involves using a trained machine learning model to make predictions on new, unseen data. Python provides libraries, such as scikit-learn and TensorFlow, that make it easy to deploy machine learning models. These libraries provide functions and APIs for loading and using trained models.
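Here's a minimal end-to-end sketch with scikit-learn: synthetic features are generated with NumPy, a linear regression model is trained, and it is evaluated on a held-out test set. The data and the relationship between the features and the target are entirely made up for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic example: a return series "explained" by two made-up factors plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)

print("R^2 on the test set:", r2_score(y_test, model.predict(X_test)))
```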
# 15. Conclusion and Next Steps
In this textbook, we have covered a wide range of topics in Python for finance. We have explored the basics of Python programming, as well as advanced topics such as algorithmic trading, financial modeling, data analysis, and machine learning.
We have learned how to build and translate financial models, analyze and visualize financial data, and design trading strategies. We have also learned how to perform backtesting, evaluate the performance of trading strategies, and extract financial data from websites.
Python is a powerful and versatile programming language that can be used for a wide range of tasks in finance. Whether you are a beginner or an experienced programmer, Python can help you analyze financial data, develop trading strategies, and make informed investment decisions.
To continue your learning journey, we recommend exploring the further resources and learning opportunities provided in the next section. These resources include books, online courses, and websites that cover various topics in Python for finance. We also encourage you to contribute to open-source finance projects and explore real-world applications of Python in finance.
Thank you for joining us on this journey through Python for finance. We hope you have found this textbook informative and engaging. We wish you the best of luck in your future endeavors in finance and programming. Happy coding!
# 15.1. Further Resources and Learning Opportunities
If you're interested in further exploring Python for finance, there are many resources and learning opportunities available to you. Here are a few recommendations:
- **Books**: There are several books that delve deeper into Python for finance and cover more advanced topics. Some popular titles include "Python for Finance: Analyze Big Financial Data" by Yves Hilpisch, "Python for Finance: Mastering Data-Driven Finance" by James Ma Weiming, and "Python for Finance Cookbook" by Eryk Lewinson.
- **Online courses**: Online platforms like Coursera, Udemy, and edX offer a variety of courses on Python for finance. These courses provide structured learning materials, video lectures, and hands-on exercises to help you build your skills. Some recommended courses include "Python for Finance and Algorithmic Trading" by The Python Quants, "Financial Analysis and Algorithmic Trading with Python" by QuantInsti, and "Python for Financial Analysis and Algorithmic Trading" by Jose Portilla.
- **Websites and blogs**: There are several websites and blogs dedicated to Python for finance that provide tutorials, articles, and code examples. Some popular websites include Quantopian, QuantStart, and Alpha Vantage. These resources can help you stay updated with the latest trends and developments in Python for finance.
- **Open-source projects**: Contributing to open-source finance projects is a great way to enhance your skills and collaborate with other like-minded individuals. GitHub is a popular platform for finding and contributing to open-source projects. You can search for finance-related projects and start contributing to them.
- **Real-world applications**: Exploring real-world applications of Python in finance can provide valuable insights and practical knowledge. You can follow financial news and blogs to learn about how Python is being used in various financial institutions and companies. You can also join online communities and forums to connect with professionals working in the field.
Remember, learning is a continuous process, and there is always more to explore and discover. Keep practicing, experimenting, and building projects to strengthen your skills in Python for finance. Good luck on your learning journey!
# 15.2. Real-World Applications of Python in Finance
Python has gained significant popularity in the financial industry due to its versatility and ease of use. It is used by a wide range of financial institutions and companies for various purposes. Here are some real-world applications of Python in finance:
- **Quantitative analysis**: Python is widely used for quantitative analysis in finance. It allows analysts to perform complex calculations, statistical modeling, and risk assessments. Python libraries like NumPy, Pandas, and SciPy provide powerful tools for data analysis and modeling.
- **Algorithmic trading**: Python is a popular choice for developing and implementing algorithmic trading strategies. It allows traders to automate trading decisions, execute trades quickly, and analyze market data in real-time. Python libraries like PyAlgoTrade and Zipline provide tools for backtesting and executing trading strategies.
- **Financial data analysis**: Python is used for analyzing financial data, including historical price data, market trends, and financial statements. Python libraries like Matplotlib and Seaborn enable the visualization of financial data, making it easier to identify patterns and trends.
- **Risk management**: Python is used for risk management in finance, including the calculation of value at risk (VaR), stress testing, and scenario analysis. Python libraries like PyMC3 and TensorFlow enable the implementation of advanced risk models and simulations.
- **Portfolio management**: Python is used for portfolio management, including portfolio optimization, asset allocation, and performance measurement. Python libraries like PyPortfolioOpt and PortfolioAnalytics provide tools for portfolio analysis and optimization.
- **Financial reporting**: Python is used for generating financial reports and dashboards. Python libraries like ReportLab and Dash enable the creation of interactive reports and visualizations.
These are just a few examples of how Python is used in finance. Its flexibility, extensive libraries, and active community make it a powerful tool for financial professionals. By learning Python, you can gain valuable skills that are in high demand in the finance industry.
# 15.3. Contributing to Open Source Finance Projects
Contributing to open-source finance projects is a great way to enhance your skills, collaborate with other developers, and make a meaningful impact in the finance community. Open-source projects are publicly available projects that are developed and maintained by a community of volunteers. Here are some steps to get started:
1. **Find a project**: Start by exploring popular open-source finance projects on platforms like GitHub. Look for projects that align with your interests and skill level. Read the project documentation and understand the project goals and contribution guidelines.
2. **Join the community**: Join the project's community by subscribing to their mailing list or joining their chat channels. Introduce yourself and express your interest in contributing. Participate in discussions, ask questions, and learn from other contributors.
3. **Choose an issue**: Look for open issues or feature requests in the project's issue tracker. Choose an issue that you feel comfortable working on and that matches your skills. If you're new to open-source, look for issues labeled as "beginner-friendly" or "good first issue".
4. **Fork the project**: Fork the project's repository on GitHub. This creates a copy of the project under your GitHub account. Clone the forked repository to your local machine.
5. **Work on the issue**: Create a new branch in your local repository for working on the issue. Make the necessary code changes and write tests if applicable. Commit your changes and push them to your forked repository.
6. **Submit a pull request**: Once you're satisfied with your changes, submit a pull request to the original project repository. Describe the changes you made and reference the related issue. The project maintainers will review your code and provide feedback.
7. **Iterate and collaborate**: Be open to feedback and iterate on your code based on the project maintainers' suggestions. Collaborate with other contributors and help review their code. Engage in discussions and contribute to the project's documentation and community resources.
Contributing to open-source projects is a rewarding experience that allows you to learn from experienced developers, showcase your skills, and contribute to the wider finance community. It's a great way to build your portfolio and establish your presence in the industry. Start small, be patient, and enjoy the process of learning and contributing.
Alternating-direction implicit method
In numerical linear algebra, the alternating-direction implicit (ADI) method is an iterative method used to solve Sylvester matrix equations. It is a popular method for solving the large matrix equations that arise in systems theory and control,[1] and can be formulated to construct solutions in a memory-efficient, factored form.[2][3] It is also used to numerically solve parabolic and elliptic partial differential equations, and is a classic method used for modeling heat conduction and solving the diffusion equation in two or more dimensions.[4] It is an example of an operator splitting method.[5]
ADI for matrix equations
The method
The ADI method is a two step iteration process that alternately updates the column and row spaces of an approximate solution to $AX-XB=C$. One ADI iteration consists of the following steps:[6]
1. Solve for $X^{(j+1/2)}$, where $\left(A-\beta _{j+1}I\right)X^{(j+1/2)}=X^{(j)}\left(B-\beta _{j+1}I\right)+C.$
2. Solve for $X^{(j+1)}$, where $X^{(j+1)}\left(B-\alpha _{j+1}I\right)=\left(A-\alpha _{j+1}I\right)X^{(j+1/2)}-C$.
The numbers $(\alpha _{j+1},\beta _{j+1})$ are called shift parameters, and convergence depends strongly on the choice of these parameters.[7][8] To perform $K$ iterations of ADI, an initial guess $X^{(0)}$ is required, as well as $K$ shift parameters, $\{(\alpha _{j},\beta _{j})\}_{j=1}^{K}$.
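For illustration, one ADI iteration can be written in a few lines of NumPy. The following sketch uses dense matrices and direct solves and is not an optimized implementation; in practice the solves would exploit the structure of $A$ and $B$:

```python
import numpy as np

def adi_step(A, B, C, X, alpha, beta):
    """One ADI iteration for the Sylvester equation AX - XB = C (dense, illustrative)."""
    Im, In = np.eye(A.shape[0]), np.eye(B.shape[0])
    # Step 1: (A - beta*I) X_half = X (B - beta*I) + C
    X_half = np.linalg.solve(A - beta * Im, X @ (B - beta * In) + C)
    # Step 2: X_new (B - alpha*I) = (A - alpha*I) X_half - C, solved from the right
    rhs = (A - alpha * Im) @ X_half - C
    return np.linalg.solve((B - alpha * In).T, rhs.T).T

# K iterations amount to applying adi_step once per shift pair (alpha_j, beta_j),
# starting from an initial guess X and, if desired, monitoring norm(A @ X - X @ B - C).
```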
When to use ADI
If $A\in \mathbb {C} ^{m\times m}$ and $B\in \mathbb {C} ^{n\times n}$, then $AX-XB=C$ can be solved directly in ${\mathcal {O}}(m^{3}+n^{3})$ using the Bartels-Stewart method.[9] It is therefore only beneficial to use ADI when matrix-vector multiplication and linear solves involving $A$ and $B$ can be applied cheaply.
The equation $AX-XB=C$ has a unique solution if and only if $\sigma (A)\cap \sigma (B)=\emptyset $, where $\sigma (M)$ is the spectrum of $M$.[1] However, the ADI method performs especially well when $\sigma (A)$ and $\sigma (B)$ are well-separated, and $A$ and $B$ are normal matrices. These assumptions are met, for example, by the Lyapunov equation $AX+XA^{*}=C$ when $A$ is positive definite. Under these assumptions, near-optimal shift parameters are known for several choices of $A$ and $B$.[7][8] Additionally, a priori error bounds can be computed, thereby eliminating the need to monitor the residual error in implementation.
The ADI method can still be applied when the above assumptions are not met. The use of suboptimal shift parameters may adversely affect convergence,[1] and convergence is also affected by the non-normality of $A$ or $B$ (sometimes advantageously).[10] Krylov subspace methods, such as the Rational Krylov Subspace Method,[11] are observed to typically converge more rapidly than ADI in this setting,[1][3] and this has led to the development of hybrid ADI-projection methods.[3]
Shift-parameter selection and the ADI error equation
The problem of finding good shift parameters is nontrivial. This problem can be understood by examining the ADI error equation. After $K$ iterations, the error is given by
$X-X^{(K)}=\prod _{j=1}^{K}{\frac {(A-\alpha _{j}I)}{(A-\beta _{j}I)}}\left(X-X^{(0)}\right)\prod _{j=1}^{K}{\frac {(B-\beta _{j}I)}{(B-\alpha _{j}I)}}.$
Choosing $X^{(0)}=0$ results in the following bound on the relative error:
${\frac {\left\|X-X^{(K)}\right\|_{2}}{\|X\|_{2}}}\leq \|r_{K}(A)\|_{2}\|r_{K}(B)^{-1}\|_{2},\quad r_{K}(M)=\prod _{j=1}^{K}{\frac {(M-\alpha _{j}I)}{(M-\beta _{j}I)}}.$
where $\|\cdot \|_{2}$ is the operator norm. The ideal set of shift parameters $\{(\alpha _{j},\beta _{j})\}_{j=1}^{K}$ defines a rational function $r_{K}$ that minimizes the quantity $\|r_{K}(A)\|_{2}\|r_{K}(B)^{-1}\|_{2}$. If $A$ and $B$ are normal matrices and have eigendecompositions $A=V_{A}\Lambda _{A}V_{A}^{*}$ and $B=V_{B}\Lambda _{B}V_{B}^{*}$, then
$\|r_{K}(A)\|_{2}\|r_{K}(B)^{-1}\|_{2}=\|r_{K}(\Lambda _{A})\|_{2}\|r_{K}(\Lambda _{B})^{-1}\|_{2}$.
Near-optimal shift parameters
Near-optimal shift parameters are known in certain cases, such as when $\Lambda _{A}\subset [a,b]$ and $\Lambda _{B}\subset [c,d]$, where $[a,b]$ and $[c,d]$ are disjoint intervals on the real line.[7][8] The Lyapunov equation $AX+XA^{*}=C$, for example, satisfies these assumptions when $A$ is positive definite. In this case, the shift parameters can be expressed in closed form using elliptic integrals, and can easily be computed numerically.
More generally, if closed, disjoint sets $E$ and $F$, where $\Lambda _{A}\subset E$ and $\Lambda _{B}\subset F$, are known, the optimal shift parameter selection problem is approximately solved by finding an extremal rational function that attains the value
$Z_{K}(E,F):=\inf _{r}{\frac {\sup _{z\in E}|r(z)|}{\inf _{z\in F}|r(z)|}},$
where the infimum is taken over all rational functions of degree $(K,K)$.[8] This approximation problem is related to several results in potential theory,[12][13] and was solved by Zolotarev in 1877 for $E=[a,b]$ and $F=-E$.[14] The solution is also known when $E$ and $F$ are disjoint disks in the complex plane.[15]
Heuristic shift-parameter strategies
When less is known about $\sigma (A)$ and $\sigma (B)$, or when $A$ or $B$ are non-normal matrices, it may not be possible to find near-optimal shift parameters. In this setting, a variety of strategies for generating good shift parameters can be used. These include strategies based on asymptotic results in potential theory,[16] using the Ritz values of the matrices $A$, $A^{-1}$, $B$, and $B^{-1}$ to formulate a greedy approach,[17] and cyclic methods, where the same small collection of shift parameters are reused until a convergence tolerance is met.[17][10] When the same shift parameter is used at every iteration, ADI is equivalent to an algorithm called Smith's method.[18]
Factored ADI
In many applications, $A$ and $B$ are very large, sparse matrices, and $C$ can be factored as $C=C_{1}C_{2}^{*}$, where $C_{1}\in \mathbb {C} ^{m\times r},C_{2}\in \mathbb {C} ^{n\times r}$, with $r=1,2$.[1] In such a setting, it may not be feasible to store the potentially dense matrix $X$ explicitly. A variant of ADI, called factored ADI,[3][2] can be used to compute $ZY^{*}$, where $X\approx ZY^{*}$. The effectiveness of factored ADI depends on whether $X$ is well-approximated by a low rank matrix. This is known to be true under various assumptions about $A$ and $B$.[10][8]
ADI for parabolic equations
Historically, the ADI method was developed to solve the 2D diffusion equation on a square domain using finite differences.[4] Unlike ADI for matrix equations, ADI for parabolic equations does not require the selection of shift parameters, since the shift appearing in each iteration is determined by parameters such as the timestep, diffusion coefficient, and grid spacing. The connection to ADI on matrix equations can be observed when one considers the action of the ADI iteration on the system at steady state.
Example: 2D diffusion equation
The traditional method for solving the heat conduction equation numerically is the Crank–Nicolson method. This method results in a very complicated set of equations in multiple dimensions, which are costly to solve. The advantage of the ADI method is that the equations that have to be solved in each step have a simpler structure and can be solved efficiently with the tridiagonal matrix algorithm.
Consider the linear diffusion equation in two dimensions,
${\partial u \over \partial t}=\left({\partial ^{2}u \over \partial x^{2}}+{\partial ^{2}u \over \partial y^{2}}\right)=(u_{xx}+u_{yy})$
The implicit Crank–Nicolson method produces the following finite difference equation:
${u_{ij}^{n+1}-u_{ij}^{n} \over \Delta t}={1 \over 2(\Delta x)^{2}}\left(\delta _{x}^{2}+\delta _{y}^{2}\right)\left(u_{ij}^{n+1}+u_{ij}^{n}\right)$
where:
$\Delta x=\Delta y$
and $\delta _{p}^{2}$ is the central second difference operator for the p-th coordinate
$\delta _{p}^{2}u_{ij}=u_{ij+e_{p}}-2u_{ij}+u_{ij-e_{p}}$
with $e_{x}=(1,0)$ and $e_{y}=(0,1)$ (and $ij$ a shorthand for the lattice point $(i,j)$).
After performing a stability analysis, it can be shown that this method will be stable for any $\Delta t$.
A disadvantage of the Crank–Nicolson method is that the matrix in the above equation is banded with a band width that is generally quite large. This makes direct solution of the system of linear equations quite costly (although efficient approximate solutions exist, for example use of the conjugate gradient method preconditioned with incomplete Cholesky factorization).
The idea behind the ADI method is to split the finite difference equations into two, one with the x-derivative taken implicitly and the next with the y-derivative taken implicitly,
${u_{ij}^{n+1/2}-u_{ij}^{n} \over \Delta t/2}={\left(\delta _{x}^{2}u_{ij}^{n+1/2}+\delta _{y}^{2}u_{ij}^{n}\right) \over \Delta x^{2}}$
${u_{ij}^{n+1}-u_{ij}^{n+1/2} \over \Delta t/2}={\left(\delta _{x}^{2}u_{ij}^{n+1/2}+\delta _{y}^{2}u_{ij}^{n+1}\right) \over \Delta y^{2}}$
The system of equations involved is symmetric and tridiagonal (banded with bandwidth 3), and is typically solved using tridiagonal matrix algorithm.
It can be shown that this method is unconditionally stable and second order in time and space.[19] There are more refined ADI methods such as the methods of Douglas,[20] or the f-factor method[21] which can be used for three or more dimensions.
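For illustration, the splitting above can be implemented in a few lines of NumPy for a square grid with homogeneous Dirichlet boundary conditions. Dense solves are used for clarity; a practical implementation would apply the tridiagonal matrix algorithm in each half step:

```python
import numpy as np

def adi_diffusion_step(U, dt, h):
    """Advance u_t = u_xx + u_yy by one time step with the Peaceman-Rachford ADI split.

    U holds the interior grid values (zero Dirichlet boundaries), with h = dx = dy.
    """
    n = U.shape[0]
    r = dt / (2 * h**2)
    # 1D central second-difference operator on the interior points
    D = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    I = np.eye(n)
    # Half step 1: implicit in x, explicit in y
    U_half = np.linalg.solve(I - r * D, U + r * (U @ D))
    # Half step 2: implicit in y, explicit in x
    U_new = np.linalg.solve(I - r * D, (U_half + r * (D @ U_half)).T).T
    return U_new
```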
Generalizations
The usage of the ADI method as an operator splitting scheme can be generalized. That is, we may consider general evolution equations
${\dot {u}}=F_{1}u+F_{2}u,$
where $F_{1}$ and $F_{2}$ are (possibly nonlinear) operators defined on a Banach space.[22][23] In the diffusion example above we have $F_{1}={\partial ^{2} \over \partial x^{2}}$ and $F_{2}={\partial ^{2} \over \partial y^{2}}$.
Fundamental ADI (FADI)
Simplification of ADI to FADI
It is possible to simplify the conventional ADI method into the fundamental ADI (FADI) method, which retains similar operators on the left-hand sides of its update equations while being operator-free on the right-hand sides. This may be regarded as the fundamental (basic) scheme of the ADI method,[24][25] with no operators left to be reduced on the right-hand sides, unlike most traditional implicit methods, which usually have operators on both sides of their equations. The FADI method leads to simpler, more concise and more efficient update equations without degrading the accuracy of the conventional ADI method.
Relations to other implicit methods
Many classical implicit methods by Peaceman-Rachford, Douglas-Gunn, D'Yakonov, Beam-Warming, Crank-Nicolson, etc., may be simplified to fundamental implicit schemes with operator-free right-hand sides.[25] In its fundamental form, the FADI method with second-order temporal accuracy is closely related to the fundamental locally one-dimensional (FLOD) method, which can be upgraded to second-order temporal accuracy, such as for the three-dimensional Maxwell's equations[26][27] of computational electromagnetics. For two- and three-dimensional heat conduction and diffusion equations, both FADI and FLOD methods may be implemented in a simpler, more efficient and more stable manner than their conventional counterparts.[28][29]
References
1. Simoncini, V. (2016). "Computational Methods for Linear Matrix Equations". SIAM Review. 58 (3): 377–441. doi:10.1137/130912839. hdl:11585/586011. ISSN 0036-1445. S2CID 17271167.
2. Li, Jing-Rebecca; White, Jacob (2002). "Low Rank Solution of Lyapunov Equations". SIAM Journal on Matrix Analysis and Applications. 24 (1): 260–280. doi:10.1137/s0895479801384937. ISSN 0895-4798.
3. Benner, Peter; Li, Ren-Cang; Truhar, Ninoslav (2009). "On the ADI method for Sylvester equations". Journal of Computational and Applied Mathematics. 233 (4): 1035–1045. Bibcode:2009JCoAM.233.1035B. doi:10.1016/j.cam.2009.08.108. ISSN 0377-0427.
4. Peaceman, D. W.; Rachford Jr., H. H. (1955), "The numerical solution of parabolic and elliptic differential equations", Journal of the Society for Industrial and Applied Mathematics, 3 (1): 28–41, doi:10.1137/0103003, hdl:10338.dmlcz/135399, MR 0071874.
• Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 20.3.3. Operator Splitting Methods Generally". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-18.
5. Wachspress, Eugene L. (2008). "Trail to a Lyapunov equation solver". Computers & Mathematics with Applications. 55 (8): 1653–1659. doi:10.1016/j.camwa.2007.04.048. ISSN 0898-1221.
6. Lu, An; Wachspress, E.L. (1991). "Solution of Lyapunov equations by alternating direction implicit iteration". Computers & Mathematics with Applications. 21 (9): 43–58. doi:10.1016/0898-1221(91)90124-m. ISSN 0898-1221.
7. Beckermann, Bernhard; Townsend, Alex (2017). "On the Singular Values of Matrices with Displacement Structure". SIAM Journal on Matrix Analysis and Applications. 38 (4): 1227–1248. arXiv:1609.09494. doi:10.1137/16m1096426. ISSN 0895-4798. S2CID 3828461.
8. Golub, G.; Van Loan, C (1989). Matrix computations (Fourth ed.). Baltimore: Johns Hopkins University. ISBN 1421407949. OCLC 824733531.
9. Sabino, J (2007). Solution of large-scale Lyapunov equations via the block modified Smith method. PhD Diss., Rice Univ. (Thesis). hdl:1911/20641.
10. Druskin, V.; Simoncini, V. (2011). "Adaptive rational Krylov subspaces for large-scale dynamical systems". Systems & Control Letters. 60 (8): 546–560. doi:10.1016/j.sysconle.2011.04.013. ISSN 0167-6911.
11. Saff, E.B.; Totik, V. (2013-11-11). Logarithmic potentials with external fields. Berlin. ISBN 9783662033296. OCLC 883382758.{{cite book}}: CS1 maint: location missing publisher (link)
12. Gonchar, A.A. (1969). "Zolotarev problems connected with rational functions". Mathematics of the USSR-Sbornik. 7 (4): 623–635. Bibcode:1969SbMat...7..623G. doi:10.1070/SM1969v007n04ABEH001107.
13. Zolotarev, D.I. (1877). "Application of elliptic functions to questions of functions deviating least and most from zero". Zap. Imp. Akad. Nauk. St. Petersburg. 30: 1–59.
14. Starke, Gerhard (July 1992). "Near-circularity for the rational Zolotarev problem in the complex plane". Journal of Approximation Theory. 70 (1): 115–130. doi:10.1016/0021-9045(92)90059-w. ISSN 0021-9045.
15. Starke, Gerhard (June 1993). "Fejér-Walsh points for rational functions and their use in the ADI iterative method". Journal of Computational and Applied Mathematics. 46 (1–2): 129–141. doi:10.1016/0377-0427(93)90291-i. ISSN 0377-0427.
16. Penzl, Thilo (January 1999). "A Cyclic Low-Rank Smith Method for Large Sparse Lyapunov Equations". SIAM Journal on Scientific Computing. 21 (4): 1401–1418. Bibcode:1999SJSC...21.1401P. doi:10.1137/s1064827598347666. ISSN 1064-8275.
17. Smith, R. A. (January 1968). "Matrix Equation XA + BX = C". SIAM Journal on Applied Mathematics. 16 (1): 198–201. doi:10.1137/0116017. ISSN 0036-1399.
18. Douglas, J. Jr. (1955), "On the numerical integration of u_xx + u_yy = u_t by implicit methods", Journal of the Society for Industrial and Applied Mathematics, 3: 42–65, MR 0071875.
19. Douglas, Jim Jr. (1962), "Alternating direction methods for three space variables", Numerische Mathematik, 4 (1): 41–63, doi:10.1007/BF01386295, ISSN 0029-599X, S2CID 121455963.
20. Chang, M. J.; Chow, L. C.; Chang, W. S. (1991), "Improved alternating-direction implicit method for solving transient three-dimensional heat diffusion problems", Numerical Heat Transfer, Part B: Fundamentals, 19 (1): 69–84, Bibcode:1991NHTB...19...69C, doi:10.1080/10407799108944957, ISSN 1040-7790.
21. Hundsdorfer, Willem; Verwer, Jan (2003). Numerical Solution of Time-Dependent Advection-Diffusion-Reaction Equations. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-662-09017-6.
22. Lions, P. L.; Mercier, B. (December 1979). "Splitting Algorithms for the Sum of Two Nonlinear Operators". SIAM Journal on Numerical Analysis. 16 (6): 964–979. Bibcode:1979SJNA...16..964L. doi:10.1137/0716071.
23. Tan, E. L. (2007). "Efficient Algorithm for the Unconditionally Stable 3-D ADI-FDTD Method" (PDF). IEEE Microwave and Wireless Components Letters. 17 (1): 7–9. doi:10.1109/LMWC.2006.887239. hdl:10356/138245. S2CID 29025478.
24. Tan, E. L. (2008). "Fundamental Schemes for Efficient Unconditionally Stable Implicit Finite-Difference Time-Domain Methods" (PDF). IEEE Transactions on Antennas and Propagation. 56 (1): 170–177. arXiv:2011.14043. Bibcode:2008ITAP...56..170T. doi:10.1109/TAP.2007.913089. hdl:10356/138249. S2CID 37135325.
25. Tan, E. L. (2007). "Unconditionally Stable LOD-FDTD Method for 3-D Maxwell's Equations" (PDF). IEEE Microwave and Wireless Components Letters. 17 (2): 85–87. doi:10.1109/LMWC.2006.890166. hdl:10356/138296. S2CID 22940993.
26. Gan, T. H.; Tan, E. L. (2013). "Unconditionally Stable Fundamental LOD-FDTD Method with Second-Order Temporal Accuracy and Complying Divergence" (PDF). IEEE Transactions on Antennas and Propagation. 61 (5): 2630–2638. Bibcode:2013ITAP...61.2630G. doi:10.1109/TAP.2013.2242036. S2CID 7578037.
27. Tay, W. C.; Tan, E. L.; Heh, D. Y. (2014). "Fundamental Locally One-Dimensional Method for 3-D Thermal Simulation". IEICE Transactions on Electronics. E-97-C (7): 636–644. Bibcode:2014IEITE..97..636T. doi:10.1587/transele.E97.C.636. hdl:10220/20410.
28. Heh, D. Y.; Tan, E. L.; Tay, W. C. (2016). "Fast Alternating Direction Implicit Method for Efficient Transient Thermal Simulation of Integrated Circuits". International Journal of Numerical Modelling: Electronic Networks, Devices and Fields. 29 (1): 93–108. doi:10.1002/jnm.2049. hdl:10356/137201. S2CID 61039449.
Numerical methods for partial differential equations
Finite difference
Parabolic
• Forward-time central-space (FTCS)
• Crank–Nicolson
Hyperbolic
• Lax–Friedrichs
• Lax–Wendroff
• MacCormack
• Upwind
• Method of characteristics
Others
• Alternating direction-implicit (ADI)
• Finite-difference time-domain (FDTD)
Finite volume
• Godunov
• High-resolution
• Monotonic upstream-centered (MUSCL)
• Advection upstream-splitting (AUSM)
• Riemann solver
• Essentially non-oscillatory (ENO)
• Weighted essentially non-oscillatory (WENO)
Finite element
• hp-FEM
• Extended (XFEM)
• Discontinuous Galerkin (DG)
• Spectral element (SEM)
• Mortar
• Gradient discretisation (GDM)
• Loubignac iteration
• Smoothed (S-FEM)
Meshless/Meshfree
• Smoothed-particle hydrodynamics (SPH)
• Peridynamics (PD)
• Moving particle semi-implicit method (MPS)
• Material point method (MPM)
• Particle-in-cell (PIC)
Domain decomposition
• Schur complement
• Fictitious domain
• Schwarz alternating
• additive
• abstract additive
• Neumann–Dirichlet
• Neumann–Neumann
• Poincaré–Steklov operator
• Balancing (BDD)
• Balancing by constraints (BDDC)
• Tearing and interconnect (FETI)
• FETI-DP
Others
• Spectral
• Pseudospectral (DVR)
• Method of lines
• Multigrid
• Collocation
• Level-set
• Boundary element
• Method of moments
• Immersed boundary
• Analytic element
• Isogeometric analysis
• Infinite difference method
• Infinite element method
• Galerkin method
• Petrov–Galerkin method
• Validated numerics
• Computer-assisted proof
• Integrable algorithm
• Method of fundamental solutions
\begin{document}
\title{A Density Condition for Interpolation on the Heisenberg Group}
\author{Bradley Currey and Azita Mayeli}
\date{\today}
\maketitle
\begin{abstract} Let $N$ be the Heisenberg group. We consider left-invariant multiplicity free subspaces of $L^2(N)$. We prove a necessary and sufficient density condition in order that such subspaces possess the interpolation property with respect to a class of discrete subsets of $N$ that includes the integer lattice. We exhibit a concrete example of a subspace that has the interpolation property for the integer lattice, and we also prove a necessary and sufficient condition for shift invariant subspaces to possess a singly-generated orthonormal basis of translates.
\end{abstract}
{\footnotesize {Mathematics Subject Classification} (2000): 42C15, 92A20, 43A80.}
{\footnotesize
Keywords and phrases: \textit{The Heisenberg group, Heisenberg frame, Gabor frame, multiplicity free subspaces,
sampling spaces, the interpolation property}}
\section{introduction}\label{intro}
Let $\mathcal H$ be a Hilbert space of continuous functions on a topological space $X$ for which point evaluation $f \mapsto f(x)$ is continuous, let $\Gamma$ be a countable discrete subset of $X$, and let $p$ be the restriction mapping $ f \mapsto f|_\Gamma$ on $\mathcal H$. For the present work, the sampling problem is as follows: describe those pairs $(\mathcal H, \Gamma)$ for which $p$ is a constant multiple of an isometry of $\mathcal H$ into $\ell^2(\Gamma)$. If $p$ is surjective then we say that $(\mathcal H, \Gamma) $ has the interpolation property. Sampling and interpolation has been studied by various authors in various settings; three related examples are \cite{Pesen98}, \cite{F}, and \cite{FG}. In \cite{Pesen98} the author studies sampling on stratified Lie groups,
while some of the results in \cite{F} provide a characterization of left invariant sampling subspaces of $L^2(G)$ where $G$ is any locally compact unimodular Type I topological group, in terms of the notion of admissibility. As a consequence of the fundamental work of \cite{F}, it has been suspected that for the Heisenberg group, left-invariant sampling spaces cannot have the interpolation property with respect to lattice subgroups. The work of \cite{FG} studies more general (non-tight) sampling and includes ideas from both of the preceding articles.
Here we consider the interpolation property for a class of quasi-lattices $\Gamma_{\a,\b}, \a , \b > 0$ in the Heisenberg group $N$ that we also consider in \cite{CM08}. Let $\mathcal H$ be a left invariant subspace of $L^2(N)$ that is multiplicity free: the group Fourier transforms of functions in $\mathcal H$ have rank at most one. In an explicit version of the group Plancherel transform, the dual $\hat N$ is a.e. identified with ${\mathbb{R}}\setminus \{0\}$, the Plancherel measure on $\hat N$ becomes $d\mu = |\l|d\l$, and there is a measurable subset $E$ of ${\mathbb{R}}\setminus \{0\}$ such that $\mathcal H$ is naturally identified with $L^2(E\times {\mathbb{R}})$. With this identification, a left translation system $\{T_\gamma\psi : \gamma \in \Gamma_{\a,\b}\}$ where $\psi\in \mathcal H$ becomes a {\it field} over $E$ of Gabor systems in $L^2({\mathbb{R}})$. We use this identification to show that $(\mathcal H, \Gamma_{\a,\b})$ has the interpolation property exactly when $\mu(E) = 1/\a\b$.
The paper is organized as follows: after introducing some preliminaries, in Section \ref{group} we collect relevant results from \cite{F, FG} concerning admissibility and sampling for left invariant subspaces of $L^2(G)$, where $G$ is unimodular and type I. The point is that admissibility is necessary for sampling, and (Theorem \ref{samp-charac}) a left invariant subspace is a sampling space if and only if it is admissible and its convolution projection generates a tight frame. We also recall the general fact that such a subspace has the interpolation property if the afore-mentioned Parseval frame is actually orthonormal. In Section \ref{Heisenberg} we specialize to the Heisenberg group $N$ and direct our attention to the interpolation property for multiplicity free subspaces with respect to the discrete subsets $\Gamma_{\a,\b}$. In Theorem \ref{interpolMF} we characterize such spaces that have the interpolation property, and in Example \ref{mainEG} we give a concrete example of a multiplicity free subspace of $L^2(N)$ that has the interpolation property with respect to the integer lattice in $N$. Finally, with Theorem \ref{ONcharacter} we prove a necessary and sufficient condition for any shift invariant subspace to have the interpolation property.
\section{Sampling spaces for unimodular groups }\label{group}
Let $G$ be a locally compact unimodular, topological group that is type I
and choose a Haar measure $dx$ on $G$. For each $x\in G$ let $T_x$ be the unitary left translation operator on $L^2(G)$. Let $\hat G$ be the unitary dual of $G$, the set of equivalence classes of continuous unitary irreducible representations of $G$, endowed with the hull-kernel topology and the Plancherel measure $\mu$. As is well-known, for each $\l \in \hat G$, there is a continuous unitary irreducible representation $\pi_\l$ belonging to $\l$, acting in a Hilbert space $\mathcal L_\l$ with the following properties.
\noindent (1) For each $\phi \in L^1(G) \cap L^2(G)$, the weak operator integral $$ \pi_\l(\phi) := \int_N \phi(x) \pi_\l(x) dx $$ defines a trace-class
operator on $\mathcal L_\l$.
\noindent (2) The
group Fourier transform $$ \mathcal F : L^1(G) \cap L^2(G) \rightarrow \int_{\hat G}^\oplus \, \mathcal{HS}(\mathcal L_\l)\, d\mu(\l) $$
defined by $\phi \mapsto \{\pi_\l(\phi)\} := \{\hat\phi(\l)\}_{\l\in \hat G}$ satisfies $\| \mathcal F(\phi)\| = \|\phi\|_2$ and has dense range. (Here $\mathcal{HS}(\mathcal L_\l)$ denotes the Hilbert space of Hilbert-Schmidt operators on $\mathcal L_\l$.)
\noindent (3) For each $x\in G$, $\mathcal F(T_x\phi) = \pi_\l(x) \hat\phi(\l), \l \in \hat G$.
\noindent A closed subspace $\mathcal H$ of $L^2(G)$ is said to be \it left invariant
\rm if $T_x(\mathcal H) \subset \mathcal H$ holds for all $x \in G$. Let $\mathcal H$ be a left invariant subspace of $L^2(G)$, and let $P : L^2(G) \rightarrow \mathcal H$ be the orthogonal projection onto $\mathcal H$. Then there is a unique (up to $\mu$-a.e. equality) measurable field $\{\hat P_\l\}_{\l\in \hat G}$ of orthogonal projections where $\hat P_\l$ is defined on $\mathcal L_\l$, and so that
$$
\widehat{(P\phi)}(\l) = \hat\phi(\l) \hat P_\l
$$
holds for $\mu$-a.e. $\l \in \hat G$. Set $m_\mathcal H(\l) = \text{rank}(\hat P_\l)$. Then the spectrum of $\mathcal H$ is the set $\Sigma(\mathcal H) = \text{supp}(m_\mathcal H)$.
A left invariant subspace $\mathcal H$ of $L^2(G)$ is said to be multiplicity free if $m_\mathcal H(\l) \le 1$ a.e.; if $\mathcal H$ is left invariant and $m_\mathcal H(\l) = 1$ a.e. then we will say that $\mathcal H$ is multiplicity one.
Recall that $\psi \in \mathcal H$ is said to be admissible (with respect to the left regular representation) if the operator $V_\psi$ defined by $V_\psi(\phi) = \phi * \psi^*$ defines an isometry of $\mathcal H$ into $L^2(G)$. For convenience we recall \cite[Theorem 4.22]{F} for unimodular groups.
\begin{thm}\label{F-4.22}
Let $\mathcal H$ be a closed left invariant subspace of $L^2(G)$ with the associated measurable projection field $\{\hat P_\l \}$. Then $\mathcal H$ has admissible vectors if and only if the map $\l\mapsto \text{rank}(\hat P_\l)$ is finite and $\mu$-integrable. \end{thm} For an example of $\mathcal H$ and an admissible vector we refer the interested reader to \cite{M08}. Different examples have also been presented in this work.
Gleaning from results in \cite{F}, we have the following.
\begin{prop} \label{admissible} Let $\mathcal H$ be a closed left invariant subspace of $L^2(G)$ with $G$ unimodular. Then the following are equivalent.
\noindent
(a) $\mathcal H$ has an admissible vector.
\noindent
(b) There is a left invariant subspace $\mathcal K$ of $L^2(G)$ and $\eta \in\mathcal K$ such that $\phi \mapsto \phi * \eta^*$ is an isometric isomorphism of $\mathcal K$ onto $\mathcal H$.
\noindent
(c) There is a unique self-adjoint convolution idempotent $S\in \mathcal H$ such that $\mathcal H = L^2(G) * S$.
\noindent
(d) The function $m_\mathcal H$ is integrable over $\hat G$ with respect to Plancherel measure $\mu$.
\end{prop}
\begin{proof} Let (a) hold and $\psi$ be an admissible vector for $\mathcal H$. Then $V_\psi^* V_\psi$ is an isometry and hence the identity on $\mathcal H$. Take
$\mathcal K = V_\psi(\mathcal H)$. Then $V_\psi^*$ is an isometric isomorphism of $\mathcal K$ onto $\mathcal H$. To prove (b) , we only need to show that $V_\psi^*$ acts by $V_\psi^* f= f\ast \psi$. For this, let $f\in \mathcal H\ast \psi$ and $g\in L^1(G)\cap L^2(G)$. Then
\begin{align}\label{convolution-op}
\langle V_\psi^\ast f, g\rangle = \langle f, V_\psi g\rangle= \langle f, g\ast \psi^\ast \rangle= \langle f\ast \psi, g\rangle.
\end{align}
Now (b) follows by taking $\eta = \psi^*$.
Suppose that (b) holds. Define $V_\eta(\phi)=\phi\ast \eta^*$ for $\phi\in \mathcal K$. Then $V_\eta V_\eta^*$ is the orthogonal projection of $L^2(G)$ onto $\mathcal H$.
Since $V_\eta^*$ is bounded, then by an analogous computation in (\ref{convolution-op}) we have $V_\eta^* = V_{\eta^*}$, and hence $V_\eta V_\eta^*(\phi) = \phi * (\eta * \eta^*)$. Now set $S = \eta*\eta^* = V_\eta(\eta)$. Evidently, $S$ belongs to $ \mathcal H$, is self-adjoint, and is a convolution idempotent, and the projection onto $\mathcal H$ is given by convolution with $S$.
Suppose that (c) holds. Then $S$ itself is an admissible vector in $\mathcal H$, so (d) follows from Theorem \ref{F-4.22}. Finally, Theorem \ref{F-4.22} says that (d) implies (a).
\end{proof}
We will say that a left invariant subspace $\mathcal H$ is {\it admissible} if it satisfies one of the conditions of Proposition \ref{admissible}. The function $S$ of condition (c) is called the reproducing kernel for $\mathcal H$. Note that in this case the associated projection field $\{\hat P_\l\}$ is just the group Fourier transform of $S$.
\begin{definition}
Let $\Gamma$ be any countable discrete subset of $G$ and let $\mathcal{H}$ be a left invariant subspace of $L^2(G)$ consisting of continuous functions. We shall call $(\mathcal{H},\Gamma)$ a sampling pair if
there exist $S\in \mathcal{H}$ and $c = c_{\mathcal H,\Gamma} > 0$ such that for all $\phi\in \mathcal{H}$
\begin{align}\label{isometry}
\| \phi\|^2= \frac{1}{c} \sum_{\gamma\in \Gamma} | \phi(\gamma)|^2,
\end{align}
and
\begin{align}\label{sinc-equality}
\phi= \sum_\gamma \phi(\gamma)~ T_\gamma S.
\end{align}
where the sum (\ref{sinc-equality}) converges in $L^2$.
\end{definition}
Following \cite{F} we say that $S$ is a sinc-type function. It is well-known that the sum (\ref{sinc-equality}) above converges uniformly as well as in $L^2$ (see \cite[Remark 2.4]{FG}).
Recall that a system $\{\psi_j\}_{j\in J}$ of functions in a separable Hilbert space $\mathcal H$ is a tight frame for $\mathcal H$ if for some $c > 0$, $$
c \, \|g\|^2 = \sum_{j\in J} \, \left|\langle g, \psi_j\rangle \right|^2 $$ holds for every $g \in \mathcal H$. A Parseval frame is a tight frame for which $c = 1$. Now suppose that $\mathcal H$ is closed, left invariant and admissible with reproducing kernel $S$, and that $\Gamma$ is a countable discrete subset such that the relation (\ref{isometry}) holds for all $\phi\in\mathcal H$. Then the identity $\phi(x) = \phi * S(x) = \langle \phi, T_x S\rangle$ makes it clear that $\{T_\gamma S : \gamma \in \Gamma\}$ is a tight frame with frame bound $c$: indeed, (\ref{isometry}) then reads $\sum_{\gamma\in\Gamma} |\langle \phi, T_\gamma S\rangle|^2 = c\,\|\phi\|^2$. Hence $\mathcal H$ is a sampling space with sinc-type function $\frac{1}{c} S$ (see \cite[Corollary 2.3]{FG}).
To characterize sampling spaces, we only need to observe that every such subspace is necessarily admissible.
\begin{thm}\label{bdd-spect} Let $\mathcal{H}$ be a left invariant subspace consisting of continuous functions and suppose that for some countable discrete subset $\Gamma$ of $G$, (\ref{isometry}) holds for all $\phi \in \mathcal H$. Then $\mathcal{H}$ is admissible and hence a sampling space. \end{thm}
\begin{proof} This is an immediate consequence of \cite[Theorem 2.56]{F}; see also \cite[Theorem 2.2]{FG}. \end{proof}
Hence we have the following equivalent conditions for left invariant subspaces.
\begin{thm}\label{samp-charac} Let $\mathcal{H}$ be a left invariant subspace and let $\Gamma$ be a countable discrete subset of $G$. Then the following are equivalent.
\noindent (i) $\mathcal H$ is admissible and $\{T_\gamma S : \gamma\in \Gamma\}$ is a tight frame for $\mathcal H$ with frame bound $c$, where $S$ is its reproducing kernel.
\noindent (ii) $\mathcal H$ consists of continuous functions and the map $A: \mathcal{H}\rightarrow \ell^2(\Gamma)$ defined by $A(\phi)= \{\frac{1}{\sqrt{c}}\phi(\gamma)\}_{\gamma\in\Gamma}$ is an isometry. \end{thm}
Of course if $\mathcal H$ satisfies one of the conditions of the preceding theorem, then $(\mathcal{H},\Gamma)$ is a sampling pair with sinc-type function $\frac{1}{c}S$. Next we turn to the question of interpolation.
\begin{definition} We say that a sampling pair $(\mathcal{H}, \Gamma)$ has the interpolation property if the isometry map $A$ is also surjective. \end{definition}
The following is a consequence of Theorem \ref{samp-charac} and standard frame theory.
\begin{thm}\label{surj-onb} Let $(\mathcal{H}, \Gamma)$ be a sampling pair. Then there exists a sinc-type function $S$ for $\mathcal H$ such that $(\mathcal{H}, \Gamma)$ has the interpolation property if and only if $\{\frac{1}{\sqrt{c}}T_\gamma S : \gamma \in \Gamma\}$ is an orthonormal basis for $\mathcal{H}$. \end{thm}
\begin{proof} By Theorem \ref{samp-charac}, $\mathcal H$ is admissible, and denoting its convolution projection by $S$, we have that $\{\frac{1}{\sqrt{c}}T_\gamma S\}_{\gamma \in \Gamma}$ is a Parseval frame for $\mathcal H$ and the isometry $A$ is the associated analysis operator. If $\{\frac{1}{\sqrt{c}}T_\gamma S\}_{\gamma \in \Gamma}$ is an orthonormal basis then of course $A$ is surjective. On the other hand, if the isometry $A$ is surjective then $A$ is unitary, and if $\delta_\gamma$ denotes the canonical basis element in $\ell^2(\Gamma)$ then $\| \frac{1}{\sqrt{c}}T_\gamma S \| = \| A^*\delta_\gamma\| = \| \delta_\gamma\| = 1$; a Parseval frame consisting of unit vectors is an orthonormal basis.
\end{proof}
In the following section we describe a class of subspaces of the Heisenberg group that admit sampling with the interpolation property.
\section{The Heisenberg group and multiplicity free subspaces }\label{Heisenberg}
We now assume that $G=N$ is the Heisenberg group: as a topological space $N$ is identified with ${\mathbb{R}}^3$, and we let $N$ have the group operation $$ (x_1,x_2,x_3)\cdot (y_1,y_2,y_3) = (x_1 + y_1,x_2+y_2,x_3 + y_3+x_1y_2). $$ We recall some basic facts about harmonic analysis on $N$. Put $\L = {\mathbb{R}} \setminus \{0\}$. For $x \in N$, $\l \in \L$, we define the unitary operator $\pi_\l(x)$ on $L^2({\mathbb{R}})$ by $$ \Bigl(\pi_\l(x)f\Bigr)(t) = e^{2\pi i \l x_3} e^{-2\pi i \l x_2 t} f(t-x_1), \quad f \in L^2({\mathbb{R}}). $$ Then $x \mapsto \pi_\l(x)$ is an irreducible representation of $N$ (the Schr\"odinger representation), and for $\l \ne \l'$, the representations $\pi_\l$ and $\pi_{\l'}$ are inequivalent. With respect to the Plancherel measure on $\hat N$, almost every member of $\hat N$ is realized as above and the group Fourier transform takes the explicit form $$
\mathcal F : L^2(N) \rightarrow \int_\L^\oplus \mathcal{HS}\bigl(L^2({\mathbb{R}})\bigr) |\l| d\l. $$
Let $\mathcal H$ be a multiplicity free subspace of $L^2(N)$, $E = \Sigma(\mathcal H)$ the spectrum of $\mathcal H$, $P$ the projection onto $\mathcal H$, and let $\{\hat P_\l\}$ be the associated measurable field of projections. We have a measurable field $e = \{e_\l\}_{\l\in \L}$ where each $e_\l$ belongs to $L^2({\mathbb{R}})$, where $(\l \mapsto \|e_\l\|) = \bold 1_E$, and where $\hat P_\l = e_\l \otimes e_\l$ (i.e. $\mathcal K_\l = {\mathbb{C}} e_\l$) for $\l\in E$. Thus the image of $\mathcal H$ under the group Fourier transform is \begin{equation}\label{mult one int}
\hat{\mathcal H}= \int^\oplus_E \, L^2({\mathbb{R}}) \otimes e_\l \, |\l| d\l. \end{equation} where $e_\l$ is regarded as an element of $\overline{L^2({\mathbb{R}})}$. Hence $\mathcal H$ is isomorphic with \begin{align}\label{reduced-subspace}
\int^\oplus_E \, L^2({\mathbb{R}}) \, |\l|d\l \end{align} via the unitary isomorphism $V_e$ defined on $\mathcal H$ by $\{V_e\eta(\l) \}_{\l \in E}$ where $\eta \in \mathcal H$ and $$ V_e\eta(\l) = \hat\eta(\l)\bigl(e_\l\bigr), \quad {\rm a.e.\ } ~ \l\in E. $$
We identify the direct integral (\ref{reduced-subspace}) with $L^2(E\times{\mathbb{R}})$ in the obvious way, where it is understood that $E$ carries the measure $|\l|d\l$.
Note that if we write $\hat\eta(\l) = \{ f_\l\otimes e_\l\}_\l$, then $V_e\eta(\l) = f_\l$. For a fixed unit vector field $e=\{e_\l\}$ we will say that $V_e$ is the reducing isomorphism for $\mathcal H$ associated with the vector field $e$. Note that given a multiplicity-free subspace $\mathcal H$, the unit vector field $e =\{e_\l\}$ is essentially unique: if $e' = \{e'_\l\}$ is another measurable unit vector field for which (\ref{mult one int}) holds, then there is a measurable unitary complex-valued function $c(\l)$ on $E$ such that $e'_\l = c(\l)e_\l$ holds for a.e. $\l$. Finally, given any subset $E$ of $\L$ and measurable field $e = \{e_\l\}_{\l\in \L}$ with $(\l\mapsto \|e_\l\|) = \bold 1_E$, the subspace $$ \mathcal{H}_e = \{ \phi \in L^2(N):~ \text {Range}(\hat \phi(\l)^\ast)\subset {\mathbb{C}} e_\l , \text{ a.e. } \l\} $$ is multiplicity free with spectrum $E$ and associated vector field $e$.
Let $\Gamma$ be a countable discrete subset of $N$. If $V_e : \mathcal H \rightarrow L^2(E\times {\mathbb{R}})$ is a reducing isomorphism, and $\psi \in \mathcal H$ with $g = V_e\psi$, then the system $\mathcal T(\psi, \Gamma) = \{T_{(k,l,m)} \psi : (k,l,m)\in\Gamma\}$ is obviously equivalent with the system $ \widehat{ \mathcal T}(g, \Gamma) = \{\hat T_{k,l,m}g : (k,l,m)\in\Gamma\}$ through the isomorphism $V_e$, where $$ \hat T_{k,l,m}g(\l,t) = e^{2\pi i \l m}e^{-2 \pi i \l l t} \, g(\l,t- k). $$
In \cite{CM08}, the discrete subsets $\Gamma_{\a,\b} = \a \Bbb Z \times \b\Bbb Z \times \Bbb Z$ for positive integers $\a$ and $\b$ are considered and when $\Gamma = \Gamma_{\a,\b}$, then we denote the above function systems by $\mathcal T(\psi, \a,\b)$ and $ \widehat{ \mathcal T}(g, \a,\b)$, respectively. For $\l \in \L$ fixed, $\hat T_{k,l,0}$ defines a unitary (Gabor) operator on $L^2({\mathbb{R}})$ in the obvious way which we denote by $\hat T_{k,l}^\l$. For $u \in L^2({\mathbb{R}})$ set $\mathcal G(u,\a, \b,\l) = \{\hat T_{k,l}^\l u : (k,l,0) \in \Gamma_{\a,\b}\}$. We say that $g \in L^2(E\times {\mathbb{R}})$ is a Gabor field over $E$ with respect to $\Gamma_{\a,\b}$ if, for a.e. $\l \in E$, $\mathcal G(|\l|^{1/2} g(\l,\cdot),\a,\b,\l) $ is a Parseval frame for $L^2({\mathbb{R}})$. If $g$ is a Gabor field over $E$ with respect to $\Gamma_{\a,\b}$, then standard Gabor theory implies that $\| |\l|^{1/2} g(\l,\cdot) \|^2 = \a\b|\l| \le 1$.
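The last norm identity is just the standard norm relation for tight Gabor frames, recalled here for convenience: if $\mathcal G(u,a,b) = \{e^{2\pi i blt}\,u(t-ak) : k, l \in {\mathbb{Z}}\}$ is a Parseval frame for $L^2({\mathbb{R}})$, then $\|u\|^2 = ab$, and necessarily $ab \le 1$ by the density theorem. For fixed $\l$, the system $\mathcal G(u,\a,\b,\l)$ is a Gabor system with time step $\a$ and frequency step $\b|\l|$, so taking $u = |\l|^{1/2}g(\l,\cdot)$ gives $$ \bigl\| |\l|^{1/2} g(\l,\cdot)\bigr\|^2 = \a \cdot \b|\l| = \a\b|\l| \le 1. $$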
The following is an easy but significant extension of part of \cite[Proposition 2.3]{CM08}.
\begin{prop} \label{gabor fld} Let $E$ be a measurable subset of $\L$ and $g \in L^2(E\times {\mathbb{R}})$ such that $ \widehat{ \mathcal T}(g, \a,\b)$ is a Parseval frame for $L^2(E\times {\mathbb{R}})$. Then $g$ is a Gabor field over $E$ with respect to $\Gamma_{\a,\b}$.
\end{prop}
\begin{proof} Write $E = \cup E_j$ where each $E_j$ is translation congruent with a subset of $[0,1]$ and let $g_j = g|_{E_j}$. Then for each $j$, $ \widehat{ \mathcal T}(g_j, \a,\b)$ is a Parseval frame for $L^2(E_j\times {\mathbb{R}})$. Hence by \cite[Proposition 2.3]{CM08}, $g_j$ is a Gabor field over $E_j$, hence $g$ is a Gabor field over $E$.
\end{proof}
The ``only if'' part of \cite[Proposition 2.3]{CM08} is also true for any subset $E$ provided the orthogonality of certain coefficient operators holds.
\begin{prop} Let $E\subset \L$ and $g\in L^2(E\times {\mathbb{R}})$ such that $g$ is
a Gabor field for $L^2({\mathbb{R}})$ over $E$ for $\Gamma_{\a,\b}$. Write $E = \dot\cup_{j\in J} E_j$ where each $E_j$ is translation congruent with a subset of $[0,1]$ and let $g_j = g{\bf |}_{E_j}$.
For any $j$, let $C_j$ be the coefficient operator defined from $L^2(E\times {\mathbb{R}})$ into $\ell^2(\Gamma_{\a,\b})$ by \begin{align} C_j:~ f\rightarrow \{\langle \hat T_\gamma g_j, f\rangle \}_\gamma \end{align} and assume that for each $j \ne j'$, Range$(C_j) \subset $ Range$(C_{j'})^\perp$. Then $ \widehat{ \mathcal T}(g, \a,\b)$ is a Parseval frame for $L^2(E\times {\mathbb{R}})$. \end{prop}
\begin{proof} By \cite[Proposition 2.3]{CM08}, for any $j\in {\mathbb{Z}}$ the system $ \widehat{ \mathcal T}(g_j, \a,\b)$ is a Parseval frame for $L^2(E_j\times {\mathbb{R}})$. Therefore, with the orthogonality assumption, for any $f\in L^2(E\times {\mathbb{R}})$ one has \begin{align} \sum_\gamma \mid \langle \hat T_\gamma g, f\rangle\mid^2 = \sum_\gamma \mid \sum_j \langle \hat T_\gamma g_j, f \rangle\mid^2
=\sum_{j} \sum_\gamma \mid \langle \hat T_\gamma g_j , f \rangle \mid^2
= \parallel f\parallel^2, \end{align} and hence $ \widehat{ \mathcal T}(g, \a,\b)$ is a Parseval frame for $L^2(E\times {\mathbb{R}})$. \end{proof}
The above observations together with Theorem \ref{samp-charac} above now give the following.
\begin{thm}\label{e-sinc} Let $E$ be a measurable subset of $\L$ and let $e = \{e_\l\}_{\l\in \L}$ be a measurable field of unit vectors in $L^2({\mathbb{R}})$ such that $(\l\mapsto \|e_\l\|) = \bold 1_E$. The following are equivalent.
\noindent (i) $E$ has finite Plancherel measure and $\widehat{ \mathcal T}(\frac{1}{\sqrt{c}}e, \a,\b)$ is a Parseval frame for $L^2(E\times {\mathbb{R}})$.
\noindent (ii) $(\mathcal H_e, \Gamma_{\a,\b})$ is a sampling pair with the sinc-type function $S=\frac{1}{c}V_e^{-1}(e)$.
\noindent Moreover, if the above conditions hold, then $\frac{1}{\sqrt{c}}e$ is a Gabor field over $E$, and $E$ is included in the interval $[-1/\a\b,1/\a\b]$.
\end{thm}
We now have a precise density criterion for the interpolation property in this situation.
\begin{thm} \label{interpolMF} Let $\mathcal H$ be a multiplicity free subspace of $L^2(N)$ with $E = \Sigma(\mathcal H)$. Suppose that for some $\a, \b > 0$, $(\mathcal H , \Gamma_{\a,\b})$ is a sampling pair with $c = c_{\mathcal H, \Gamma_{\a,\b}}$. Then $c =1/ \a\b$. Moreover, $(\mathcal H , \Gamma_{\a,\b})$ has the interpolation property if and only if $\mu(E)= 1 / \a\b$. Hence if $(\mathcal H , \Gamma_{\a,\b})$ has the interpolation property, then $\a\b \le 1$.
\end{thm}
\begin{proof} By Theorem \ref{samp-charac}, $\mathcal H$ is admissible; let $S$ be the associated reproducing kernel, and let $V_e$ be a reducing isomorphism, where $e = \{e_\l\}$ is an $L^2({\mathbb{R}})$-vector field with $(\l \mapsto \|e_\l \|) = \bold 1_E$, so that $V_e(S)=e$. It follows from the above that $\{\frac{1}{\sqrt{c}} T_\gamma S \}_\gamma$ is a Parseval frame for $\mathcal H$, $\widehat{ \mathcal T}(\frac{1}{\sqrt{c}}e, \a,\b)$ is a Parseval frame for $L^2(E\times {\mathbb{R}})$, and $\frac{1}{\sqrt{c}} e$ is a Gabor field over $E$. Hence for a.e. $\l \in E$, we have $$
\||\l|^{1/2} \frac{1}{\sqrt{c}} e_\l \|^2 = \a\b |\l|, $$
and the relation $c = 1/ \a\b$ follows immediately. Now $\mathcal H$ has the interpolation property if and only if $\{\frac{1}{\sqrt{c}}T_\gamma S : \gamma \in \Gamma_{\a,\b}\}$ is an orthonormal basis for $\mathcal H$, if and only if $ \|\frac{1}{\sqrt{c}}S\|^2 = 1$. But $$
\|\frac{1}{\sqrt{c}}S\|^2 =\a\b \|S\|^2 = \a\b \|V_e(S)\|^2 = \a\b \int_E \, \|e_\l\|^2 |\l| d\l = \a\b \int_E |\l| d\l=\a\b\mu(E). $$
This proves the first part of the theorem. Now if $(\mathcal H , \Gamma_{\a,\b})$ has the interpolation property, then, since $E \subseteq [-1/\a\b,1/\a\b]$, we have
$$ 1/\a\b = \int_E |\l| d\l \le \int_{[-1/\a\b,1/\a\b]} |\l| d\l = 1/(\a\b)^2. $$
\end{proof}
We now construct an example of a sampling pair with the interpolation property. We assume that $\a = \b = 1$; note that in this case the interpolation property is equivalent with $E = [-1,1]$. In light of Theorems \ref{e-sinc} and \ref{interpolMF}, it is evident that in order to construct an example $(\mathcal H, \Gamma_{1,1})$ with $\mathcal H$ multiplicity free, it is enough to construct a measurable field of $L^2({\mathbb{R}})$-vectors $\{e_\l\}$ such that $(\l \rightarrow \|e_\l \|)={\bf 1}_{[-1,1]}$ and such that $e$ generates a Heisenberg frame for $L^2([-1,1]\times {\mathbb{R}})$. The following technical lemma is helpful in the construction of such a function $e$.
\begin{lemma} \label{tech} Let $e \in L^2([-1,1]\times {\mathbb{R}})$ such that $e$ is a Gabor field over $[-1,1]$ with respect to $\Gamma_{\a,\b}$, and such that the orthogonality condition \begin{equation}\label{orthog} \sum_{k,l}\, \langle f(\l-1,\cdot), e_{k,l,0}(\l-1,\cdot)\rangle \, \overline{\langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle} = 0 \end{equation}
holds for all $\l\in (0,1]$ and for all $f \in L^2([-1,1]\times {\mathbb{R}})$. Then the system $\widehat {\mathcal T}(e, \Gamma_{\a,\b})$ is a Parseval frame for $L^2([-1,1]\times {\mathbb{R}})$.
\end{lemma}
\begin{proof} Suppose that $e$ is a Gabor field satisfying (\ref{orthog}) and let $f \in L^2([-1,1]\times {\mathbb{R}})$. By Proposition 2.3 of \cite{CM08}, and the Parseval identity for Fourier series, we have $$ \begin{aligned}
\int_0^1 \|f(\l-1,\cdot)\|^2 |\l-1| d\l &= \sum_{k,l,m} \left| \int_0^1 \langle f(\l-1,\cdot),e_{k,l,m}(\l-1,\cdot)\rangle |\l-1| d\l \right|^2 \\
&= \sum_{k,l} \int_0^1 \left| \langle f(\l-1,\cdot),e_{k,l,0}(\l-1,\cdot)\rangle |\l-1| \right|^2 d\l \end{aligned} $$ and similarly, $$
\int_0^1 \|f(\l,\cdot)\|^2 |\l| d\l = \sum_{k,l} \int_0^1 \left| \langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle |\l| \right|^2 d\l. $$ Hence \begin{equation} \label{fnorm} \begin{aligned}
\|f\|^2 &= \int_0^1 \|f(\l-1,\cdot)\|^2 |\l-1| d\l + \int_0^1 \|f(\l,\cdot)\|^2 |\l| d\l \\
&= \int_0^1 \, \sum_{k,l} \left( \bigl| \langle f(\l-1,\cdot),e_{k,l,0}(\l-1,\cdot)\rangle |\l-1| \bigr|^2 + \bigl| \langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle |\l| \bigr|^2 \right) \, d\l. \end{aligned} \end{equation} But (\ref{orthog}) implies that for $\l \in (0,1]$, $$ \begin{aligned}
\sum_{k,l} & \left( \bigl| \langle f(\l-1,\cdot),e_{k,l,0}(\l-1,\cdot)\rangle |\l-1| \bigr|^2 + \bigl| \langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle |\l| \bigr|^2\right) \\
&=
\sum_{k,l} \left(\Bigl| \langle f(\l-1,\cdot),e_{k,l,0}(\l-1,\cdot)\rangle |\l-1| + \langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle |\l| \Bigr|^2\right). \end{aligned} $$ Combining the preceding with (\ref{fnorm}) and applying the Parseval identity for Fourier series again, we have $$ \begin{aligned}
\|f\|^2 &= \sum_{k,l} \, \int_0^1 \, \Bigl| \langle f(\l-1,\cdot),e_{k,l,0}(\l-1,\cdot)\rangle |\l-1| + \langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle |\l| \Bigr|^2 d\l \\
&= \sum_{k,l} \sum_m \, \Bigl| \int_0^1 \left( \langle f(\l-1,\cdot),e_{k,l,0}(\l-1,\cdot)\rangle |\l-1| + \langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle |\l| \right) e^{-2\pi i\l m } d\l \Bigr|^2 \\
&= \sum_{k,l,m} \, \left| \int_{-1}^1 \langle f(\l,\cdot),e_{k,l,m}(\l,\cdot)\rangle |\l| d\l\right|^2. \end{aligned} $$ This proves the claim. \end{proof}
By virtue of Lemma \ref{tech}, it is sufficient to construct a function $e$ with the properties in the preceding lemma. \begin{eg}\label{mainEG}
For $\l \in (0,1]$, put $$ e_\l = \bold 1_{\left[\frac{1}{\l} - 1, \frac{1}{\l}\right]} \qquad \text{ and } \qquad e_{\l-1} = \bold 1_{[-1,0]}. $$ Then $e$ defined by $e(\l, t)=e_\l(t)$ for $\l\in (0,1]$ and $e(\l,t)= \bold 1_{[-1,0]}(t)$ for $\l\in [-1,0)$ is a Gabor field over $[-1,1]$ with respect to $\Gamma_{1,1}$.
\end{eg}
\begin{proof} We compute that for any $f \in L^2([-1,1]\times{\mathbb{R}})$ and for $\l \in (0,1]$, $$ \begin{aligned} \langle f(\l-1,\cdot),e_{k,l,0}(\l-1,\cdot)\rangle &= \int_{\mathbb{R}} f(\l-1, t) e^{2\pi i (\l-1) l t}\bold 1_{[-1,0]}(t-k) dt \\ &= \int_{I_k^{\l-1}} \, \left( \left(\frac{1}{1-\l}\right) \, f\left(\l-1, \frac{s}{\l-1}\right) \right) e^{2\pi i l s} ds \end{aligned} $$ and similarly, $$ \begin{aligned} \langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle &= \int_{\mathbb{R}} f(\l,t) e^{2\pi i \l l t}\bold 1_{\left[\frac{1}{\l} - 1, \frac{1}{\l}\right]} (t-k) dt \\ &= \int_{I_k^{\l}} \, \left( \frac{1}{\l} f\left(\l, \frac{s}{\l}\right) \right) e^{2\pi i l s} ds \end{aligned} $$ where $I_k^{\l-1} = [-(1-\l)k, -(1-\l) k + (1-\l)]$ and $I_k^\l = [1 + \l k - \l, 1+ \l k]$. It is easily seen that for each $k$, $$ I_k^{\l-1} \cap I_k^\l = \emptyset \qquad \text{ and } \qquad (I_k^{\l-1} + k) \cup I_k^\l = [\l k , \l k + 1]. $$ Hence for each $k$, the sequences $\{ \langle f(\l-1,\cdot), e_{k,l,0}(\l-1,\cdot)\rangle : l \in \Bbb Z\}$ and $\{\langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle : l \in \Bbb Z \}$ are Fourier coefficients for orthogonal functions and we have $$ \sum_l\, \langle f(\l-1,\cdot), e_{k,l,0}(\l-1,\cdot)\rangle \, \overline{\langle f(\l,\cdot),e_{k,l,0}(\l,\cdot)\rangle} = 0. $$ Thus the equation (\ref{orthog}) holds for $e$.
\end{proof}
Since the vector field $e = \{e_\l\}$ is compactly supported, one does not expect that the inverse Fourier image is well localized. We show this explicitly in the following, where we compute the inverse group Fourier transform in terms of ordinary Fourier transforms. For a function $f \in L^1({\mathbb{R}})$, put $\hat f(s) = \int_{\mathbb{R}} f(t) e^{2\pi i st} dt$ and $\check f(s) = \int_{\mathbb{R}} f(t) e^{-2\pi i st} dt$.
\begin{eg} Let $ e = \{e_\l\}$ be the unit vector field from the preceding example, and let $S \in L^2(N)$ be the function for which $V_e(S) = e$. For each $x\in{\mathbb{R}}$ define the intervals $I_{x,\l}$ and $J_x$ by $$ I_{x,\l} = \left[\frac{1}{\l} - 1 ,\frac{1}{\l}\right] \bigcap \left(\left[\frac{1}{\l} - 1 ,\frac{1}{\l}\right] + x\right), \quad J_{x} = [-1,0] \cap ([-1,0] + x). $$
Then $S = S_0 + S_1$, where $S_0(x,y,z) = \check{F}_{x,y}(z)$ and $S_1(x,y,z) = \check{G}_{x,y}(z) $, and where $G_{x,y}(\l) =\l \bold 1_{[0,1]}(\l) \, \widehat{\bold 1}_{I_{x,\l}}(\l y)$ and $F_{x,y}(\l) = -\l \bold 1_{[-1,0]}(\l) \, \widehat{\bold 1}_{J_{x}}(\l y)$. In particular, $S$ vanishes outside the strip $U = \{(x,y,z) : |x| < 1\}$ and $S_0$ and $S_1$ are given by sinc-type expressions. For example, for $(x,y,z)\in U$ with $y \ne 0$ and $z \ne 0$, $$ S_0(x,y,z) = \begin{cases}
\frac{1}{2\pi i y} \left( \frac{e^{2\pi i (z-xy)}-1}{2\pi i (z-xy)} -
\frac{e^{2\pi i (z+y)}-1}{2\pi i (z+y)}\right),
& \text{ if } \ -1<x<0, ~z\neq xy, ~y\neq -z,\\
\frac{1}{2\pi i y} \left( \frac{e^{2\pi i z}-1}{2\pi i z} - \frac{e^{2\pi i (z+y(1-x))}-1}{2\pi i (z+y(1-x))} \right), & \text{ if } \ \ 0<x< 1,~ z\neq -y(1-x).
\end{cases} $$
\end{eg}
\begin{proof}
We have $$ \begin{aligned}
&S(x,y,z) = \int_\L \langle e_\l, \pi_\l(x,y,z)e_\l\rangle |\l| d\l \\ &= - \int_\L \bold 1_{[-1,0]}(\l) \langle{\bf 1}_{[-1,0]} , \pi_\l(x,y,z){\bf 1}_{[-1,0]} \rangle \l d\l +
\int_\L \bold 1_{[0,1]}(\l) \langle \bold 1_{\left[\frac{1}{\l} - 1, \frac{1}{\l}\right]} , \pi_\l(x,y,z) \bold 1_{\left[\frac{1}{\l} - 1, \frac{1}{\l}\right]} \rangle \l d\l \\
&= - \int_\L \bold 1_{[-1,0]}(\l) \int_{\mathbb{R}} \, e^{-2\pi i \l z} e^{2\pi i \l yt} {\bf 1}_{[-1,0]}(t) {\bf 1}_{[-1,0]}(t-x)dt \l d\l \\ &\hspace{1in}+ \int_\L \bold 1_{[0,1]}(\l) \int_{\mathbb{R}} e^{-2\pi i \l z} e^{2\pi i \l yt} {\bold 1}_{\left[\frac{1}{\l} - 1, \frac{1}{\l}\right]}(t){\bold 1}_{\left[\frac{1}{\l} - 1, \frac{1}{\l}\right]}(t-x)dt \l d\l \\ &= - \int_\L \bold 1_{[-1,0]}(\l) \, e^{-2\pi i \l z} \left( \int_{\mathbb{R}} \, e^{2\pi i \l yt} {\bf 1}_{J_x}(t) dt\right) \l d\l \\ &\hspace{1in}+ \int_\L \bold 1_{[0,1]}(\l) e^{-2\pi i \l z}\left( \int_{\mathbb{R}} e^{2\pi i \l yt} {\bold 1}_{I_{x,\l}}(t) dt\right) \l d\l \\ &= \int_\L \, F_{x,y}(\l) e^{-2\pi i \l z} d\l + \int_\L \, G_{x,y}(\l) e^{-2\pi i \l z} d\l. \end{aligned} $$ The explicit expression for $G$ is now an elementary calculation.
\end{proof}
We conclude this section with a necessary and sufficient condition on the generator of a Heisenberg system of translates to form an orthonormal basis of an arbitrary shift-invariant space. Let $g \in L^2(\L\times{\mathbb{R}})$ and define the closed subspace $\mathcal S(g,\a,\b)$ of $L^2(\L\times{\mathbb{R}})$ by $$ \mathcal S(g,\a,\b) = \overline{sp} \bigl(\widehat{\mathcal T}(g, \a, \b)\bigr). $$ For each $(\l,t) \in \L\times{\mathbb{R}}$ put \begin{align} \Theta^g_k(\l, t):= \sum_{l'\in\frac{1}{\b}\Bbb Z, l''\in \Bbb Z} g\left({\l - l''} , \frac{t-l'}{\l-l''}-k\right) \overline{g}\left({\l - l''} , \frac{t-l'}{\l-l''}\right) . \end{align}
Then we have the following
\begin{thm} \label{ONcharacter} $\widehat{\mathcal T}(g, \a, \b)$ is an orthonormal basis for $\mathcal S(g,\a,\b)$ if and only if $$ \Theta^g_k (\l, t)= \delta_k\quad a.e.~ (\l, t).$$ \end{thm}
\begin{proof} For convenience we consider the case $\a=\b=1$; the proof for general $\a$ and $\b$ can be adapted. For each $\gamma = (k,l,m) \in \Gamma_{1,1}$ the function
$$(\l,t) \mapsto e^{2\pi i\l m} e^{-2\pi i \l l t} g(\l, t-k) \overline{g}(\l,t)| \l |$$ is absolutely integrable and we can apply periodization and Fubini's theorem to calculate $$ \begin{aligned}
\langle \hat T_\gamma g, g\rangle &= \int_\L \int_{\mathbb{R}} e^{2\pi i\l m} e^{-2\pi i \l l t} g(\l, t-k) \overline{g}(\l,t)| \l | dt d\l \\ &= \int_\L \int_{\mathbb{R}} e^{2\pi i\l m} e^{-2\pi i l t} g(\l, t/\l-k) \overline{g}(\l , t/\l)dt d\l\\ &= \int_\L e^{2\pi i\l m} \, \sum_{l'\in {\mathbb{Z}}} \, \int_0^1 e^{-2\pi i l t} g(\l,(t-l')/\l-k) \overline{g}(\l,(t-l')/\l)~ dt d\l\\ &= \int_0^1 \int_0^1 e^{2\pi i\l m} e^{-2\pi i l t} \, \sum_{l''} \sum_{l'\in {\mathbb{Z}}} g\left(\l - l'', \frac{t-l'}{\l-l''}-k\right) \overline{g}\left(\l - l'', \frac{t-l'}{\l-l''}\right) dt d\l\\ &= \int_0^1 \int_0^1\, e^{2\pi i\l m} e^{-2\pi i l t} \, \Theta^g_k (\l, t) dt d\l \end{aligned} $$
Suppose that $\widehat{\mathcal T}(g, \a, \b)$ is an orthonormal basis for $\mathcal S(g,\a,\b)$. Note that $\Theta^g_k$ is a $(1,1)$-periodic integrable function on ${\mathbb{T}}\times {\mathbb{T}}$. If $k \ne 0$, then $\widehat{\Theta^g_k}(m,l) = 0 $ for all integers $m$ and $l$, and hence $\Theta^g_k \equiv 0$. If $k = 0$, then $\widehat{\Theta^g_0}(m,l) = 0 $ holds for all $(m,l) \ne 0$ while $\widehat{\Theta^g_0}(0,0) = 1 $. Hence $\Theta^g_0\equiv 1 $.
On the other hand, if $ \Theta^g_k (\l, t)= \delta_k\quad a.e.~ (\l, t)$, then the above reasoning can be reversed to show that the system $\widehat{\mathcal T}(g, \a, \b)$ is orthonormal.
\end{proof}
Bradley Currey,
{Department of Mathematics and Computer Science, Saint Louis University, St. Louis, MO 63103}\\
{ \footnotesize{E-mail address: \texttt{{ [email protected]}}}\\}
Azita Mayeli,
{Mathematics Department, City College of Technology, City University of New York, New York, USA }\\
\footnotesize{E-mail address: \texttt{{[email protected]}}}\\
\end{document} | arXiv |
MSC Classifications
MSC 2010: Game Theory, Economics, Social and Behavioral Sciences
91Dxx
20 results in 91Dxx
Interacting nonlinear reinforced stochastic processes: Synchronization or non-synchronization
Sequential methods
Mathematical sociology
Limit theorems
Special processes
Irene Crimaldi, Pierre-Yves Louis, Ida G. Minelli
Journal: Advances in Applied Probability , First View
Published online by Cambridge University Press: 01 August 2022, pp. 1-46
The rich-get-richer rule reinforces actions that have been frequently chosen in the past. What happens to the evolution of individuals' inclinations to choose an action when agents interact? Interaction tends to homogenize, while each individual dynamics tends to reinforce its own position. Interacting stochastic systems of reinforced processes have recently been considered in many papers, in which the asymptotic behavior is proven to exhibit almost sure synchronization. In this paper we consider models where, even if interaction among agents is present, absence of synchronization may happen because of the choice of an individual nonlinear reinforcement. We show how these systems can naturally be considered as models for coordination games or technological or opinion dynamics.
Corrigendum: Dynamics of a susceptible—infected—susceptible epidemic reaction—diffusion model
Parabolic equations and systems
Partial differential equations
Applications - Dynamical systems and ergodic theory
Keng Deng, Yixiang Wu
Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics , First View
Published online by Cambridge University Press: 30 March 2022, pp. 1-3
This corrigendum corrects a result in [1].
Modelling Burglary in Chicago using a self-exciting point process with isotropic triggering
Probabilistic methods, simulation and stochastic differential equations
CRAIG GILMOUR, DESMOND J. HIGHAM
Journal: European Journal of Applied Mathematics / Volume 33 / Issue 2 / April 2022
Published online by Cambridge University Press: 08 April 2021, pp. 369-391
Self-exciting point processes have been proposed as models for the location of criminal events in space and time. Here we consider the case where the triggering function is isotropic and takes a non-parametric form that is determined from data. We pay special attention to normalisation issues and to the choice of spatial distance measure, thereby extending the current methodology. After validating these ideas on synthetic data, we perform inference and prediction tests on public domain burglary data from Chicago. We show that the algorithmic advances that we propose lead to improved predictive accuracy.
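The self-exciting mechanism described in this abstract can be illustrated, in a much simplified form, by a purely temporal Hawkes process with an exponential kernel. The sketch below (Python, made-up parameter values) uses Ogata-style thinning; it is only a generic illustration of self-excitation and is not the non-parametric, spatial estimator developed in the paper.

import numpy as np

def simulate_hawkes(mu, alpha, beta, t_end, rng=np.random.default_rng(0)):
    """Simulate event times of a temporal Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha*beta*exp(-beta*(t - t_i)),
    via Ogata's thinning algorithm.  Illustrative only."""
    events, t = [], 0.0
    while True:
        # the intensity decays between events, so its current value is an upper bound
        lam_bar = mu + sum(alpha * beta * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t > t_end:
            break
        lam_t = mu + sum(alpha * beta * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:
            events.append(t)
    return events

print(len(simulate_hawkes(mu=0.5, alpha=0.6, beta=1.0, t_end=100.0)))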
Zipf's law for atlas models
Stochastic analysis
Ricardo T. Fernholz, Robert Fernholz
Journal: Journal of Applied Probability / Volume 57 / Issue 4 / December 2020
Published online by Cambridge University Press: 23 November 2020, pp. 1276-1297
A set of data with positive values follows a Pareto distribution if the log–log plot of value versus rank is approximately a straight line. A Pareto distribution satisfies Zipf's law if the log–log plot has a slope of $-1$. Since many types of ranked data follow Zipf's law, it is considered a form of universality. We propose a mathematical explanation for this phenomenon based on Atlas models and first-order models, systems of strictly positive continuous semimartingales with parameters that depend only on rank. We show that the stationary distribution of an Atlas model will follow Zipf's law if and only if two natural conditions, conservation and completeness, are satisfied. Since Atlas models and first-order models can be constructed to approximate systems of time-dependent rank-based data, our results can explain the universality of Zipf's law for such systems. However, ranked data generated by other means may follow non-Zipfian Pareto distributions. Hence, our results explain why Zipf's law holds for word frequency, firm size, household wealth, and city size, while it does not hold for earthquake magnitude, cumulative book sales, and the intensity of wars, all of which follow non-Zipfian Pareto distributions.
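As a quick illustration of the rank-size criterion described in this abstract, one can estimate the slope of the log-log rank plot by least squares; the short Python sketch below, run on toy data, is a rough diagnostic only and is not the Atlas-model analysis of the paper.

import numpy as np

def rank_size_slope(values):
    """Fit the slope of log(value) against log(rank).
    A slope near -1 is consistent with Zipf's law."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # sort descending
    ranks = np.arange(1, len(v) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(v), 1)
    return slope

# exact Zipf data v_k = C/k gives a slope of -1
print(rank_size_slope([1000.0 / k for k in range(1, 501)]))  # approximately -1.0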
First passage percolation on sparse random graphs with boundary weights
Lasse Leskelä, Hoa Ngo
Journal: Journal of Applied Probability / Volume 56 / Issue 2 / June 2019
Published online by Cambridge University Press: 30 July 2019, pp. 458-471
A large and sparse random graph with independent exponentially distributed link weights can be used to model the propagation of messages or diseases in a network with an unknown connectivity structure. In this article we study an extended setting where, in addition, the nodes of the graph are equipped with nonnegative random weights which are used to model the effect of boundary delays across paths in the network. Our main results provide approximative formulas for typical first passage times, typical flooding times, and maximum flooding times in the extended setting, over a time scale logarithmic with respect to the network size.
The degree-wise effect of a second step for a random walk on a graph
Combinatorial probability
Kenneth S. Berenhaut, Hongyi Jiang, Katelyn M. McNab, Elizabeth J. Krizay
In this paper we consider the degree-wise effect of a second step for a random walk on a graph. We prove that under the configuration model, for any fixed degree sequence the probability of exceeding a given degree threshold is smaller after two steps than after one. This builds on recent work of Kramer et al. (2016) regarding the friendship paradox under random walks.
THE FRIENDSHIP PARADOX FOR WEIGHTED AND DIRECTED NETWORKS
Kenneth S. Berenhaut, Hongyi Jiang
Journal: Probability in the Engineering and Informational Sciences / Volume 33 / Issue 1 / January 2019
Published online by Cambridge University Press: 18 September 2018, pp. 136-145
This paper studies the friendship paradox for weighted and directed networks, from a probabilistic perspective. We consolidate and extend recent results of Cao and Ross and Kramer, Cutler and Radcliffe, to weighted networks. Friendship paradox results for directed networks are given; connections to detailed balance are considered.
THE DEMON DRINK
MARK IAN NELSON, PETER HAGEDOORN, ANNETTE L. WORTHY
Journal: The ANZIAM Journal / Volume 59 / Issue 2 / October 2017
Published online by Cambridge University Press: 02 November 2017, pp. 135-154
We provide a qualitative analysis of a system of nonlinear differential equations that model the spread of alcoholism through a population. Alcoholism is viewed as an infectious disease and the model treats it within an SIR framework. The model exhibits two generic types of steady-state diagram. The first of these is qualitatively the same as the steady-state diagram in the standard SIR model. The second exhibits a backwards transcritical bifurcation. As a consequence of this, there is a region of bistability in which a population of problem drinkers can be sustained, even when the reproduction number is less than one. We obtain a succinct formula for this scenario when the transition between these two cases occurs.
A spectral method for community detection in moderately sparse degree-corrected stochastic block models
Social and behavioral sciences: general topics
Probability theory on algebraic and topological structures
Algorithms - Computer Science
Lennart Gulikers, Marc Lelarge, Laurent Massoulié
Journal: Advances in Applied Probability / Volume 49 / Issue 3 / September 2017
Print publication: September 2017
We consider community detection in degree-corrected stochastic block models. We propose a spectral clustering algorithm based on a suitably normalized adjacency matrix. We show that this algorithm consistently recovers the block membership of all but a vanishing fraction of nodes, in the regime where the lowest degree is of order log(n) or higher. Recovery succeeds even for very heterogeneous degree distributions. The algorithm does not rely on parameters as input. In particular, it does not need to know the number of communities.
Dynamics of a susceptible–infected–susceptible epidemic reaction–diffusion model
Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics / Volume 146 / Issue 5 / October 2016
We study a susceptible–infected–susceptible reaction–diffusion model with spatially heterogeneous disease transmission and recovery rates. A basic reproduction number $R_0$ is defined for the model. We first prove that there exists a unique endemic equilibrium if $R_0 > 1$. We then consider the global attractivity of the disease-free equilibrium and the endemic equilibrium for two cases. If the disease transmission and recovery rates are constants or the diffusion rate of the susceptible individuals is equal to the diffusion rate of the infected individuals, we show that the disease-free equilibrium is globally attractive if $R_0 < 1$, while the endemic equilibrium is globally attractive if $R_0 > 1$.
Respondent-driven sampling and an unusual epidemic
J. Malmros, F. Liljeros, T. Britton
Respondent-driven sampling (RDS) is frequently used when sampling from hidden populations. In RDS, sampled individuals pass on participation coupons to at most c of their acquaintances in the community (c = 3 being a common choice). If these individuals choose to participate, they in turn pass coupons on to their acquaintances, and so on. The process of recruiting is shown to behave like a new Reed–Frost-type network epidemic, in which 'becoming infected' corresponds to study participation. We calculate R0, the probability of a major 'outbreak', and the relative size of a major outbreak for c < ∞ in the limit of infinite population size and compare to the standard Reed–Frost epidemic. Our results indicate that c should often be chosen larger than in current practice.
Contagions in random networks with overlapping communities
Emilie Coupechoux, Marc Lelarge
Journal: Advances in Applied Probability / Volume 47 / Issue 4 / December 2015
We consider a threshold epidemic model on a clustered random graph model obtained from local transformations in an alternating branching process that approximates a bipartite graph. In other words, our epidemic model is such that an individual becomes infected as soon as the proportion of his/her infected neighbors exceeds the threshold q of the epidemic. In our random graph model, each individual can belong to several communities. The distributions for the community sizes and the number of communities an individual belongs to are arbitrary. We consider the case where the epidemic starts from a single individual, and we prove a phase transition (when the parameter q of the model varies) for the appearance of a cascade, i.e. when the epidemic can be propagated to an infinite part of the population. More precisely, we show that our epidemic is entirely described by a multi-type (and alternating) branching process, and then we apply Sevastyanov's theorem about the phase transition of multi-type Galton-Watson branching processes. In addition, we compute the entries of the mean progeny matrix corresponding to the epidemic. The phase transition for the contagion is given in terms of the largest eigenvalue of this matrix.
Galam's bottom-up hierarchical system and public debate model revisited
N. Lanchier, N. Taylor
This paper is concerned with the bottom-up hierarchical system and public debate model proposed by Galam (2008), as well as a spatial version of the public debate model. In all three models, there is a population of individuals who are characterized by one of two competing opinions, say opinion −1 and opinion +1. This population is further divided into groups of common size s. In the bottom-up hierarchical system, each group elects a representative candidate, whereas in the other two models, all the members of each group discuss at random times until they reach a consensus. At each election/discussion, the winning opinion is chosen according to Galam's majority rule: the opinion with the majority of representatives wins when there is a strict majority, while one opinion, say opinion −1, is chosen by default in the case of a tie. For the public debate models we also consider the following natural updating rule that we call proportional rule: the winning opinion is chosen at random with a probability equal to the fraction of its supporters in the group. The three models differ in term of their population structure: in the bottom-up hierarchical system, individuals are located on a finite regular tree, in the nonspatial public debate model, they are located on a complete graph, and in the spatial public debate model, they are located on the d-dimensional regular lattice. For the bottom-up hierarchical system and nonspatial public debate model, Galam studied the probability that a given opinion wins under the majority rule and, assuming that individuals' opinions are initially independent, making the initial number of supporters of a given opinion a binomial random variable. The first objective of this paper is to revisit Galam's result, assuming that the initial number of individuals in favor of a given opinion is a fixed deterministic number. Our analysis reveals phase transitions that are sharper under our assumption than under Galam's assumption, particularly with small population size. The second objective is to determine whether both opinions can coexist at equilibrium for the spatial public debate model under the proportional rule, which depends on the spatial dimension.
The naming game in language dynamics revisited
Nicolas Lanchier
Journal: Journal of Applied Probability / Volume 51 / Issue A / December 2014
In this article we study a biased version of the naming game in which players are located on a connected graph and interact through successive conversations in order to select a common name for a given object. Initially, all the players use the same word B except for one bilingual individual who also uses word A. Both words are attributed a fitness, which measures how often players speak depending on the words they use and how often each word is spoken by bilingual individuals. The limiting behavior depends on a single parameter, ϕ, denoting the ratio of the fitness of word A to the fitness of word B. The main objective is to determine whether word A can invade the system and become the new linguistic convention. From the point of view of the mean-field approximation, invasion of word A is successful if and only if ϕ > 3, a result that we also prove for the process on complete graphs relying on the optimal stopping theorem for supermartingales and random walk estimates. In contrast, for the process on the one-dimensional lattice, word A can invade the system whenever ϕ > 1.053, indicating that the probability of invasion and the critical value for ϕ strongly depend on the degree of the graph. The system on regular lattices in higher dimensions is also studied by comparing the process with percolation models.
How Clustering Affects Epidemics in Random Networks
Published online by Cambridge University Press: 22 February 2016, pp. 985-1008
Motivated by the analysis of social networks, we study a model of random networks that has both a given degree distribution and a tunable clustering coefficient. We consider two types of growth process on these graphs that model the spread of new ideas, technologies, viruses, or worms: the diffusion model and the symmetric threshold model. For both models, we characterize conditions under which global cascades are possible and compute their size explicitly, as a function of the degree distribution and the clustering coefficient. Our results are applied to regular or power-law graphs with exponential cutoff and shed new light on the impact of clustering.
ON DYNAMIC MONOPOLIES OF GRAPHS WITH PROBABILISTIC THRESHOLDS
HOSSEIN SOLTANI, MANOUCHEHR ZAKER
Journal: Bulletin of the Australian Mathematical Society / Volume 90 / Issue 3 / December 2014
Let $G$ be a graph and $\tau$ be an assignment of nonnegative thresholds to the vertices of $G$. A subset of vertices, $D$, is an irreversible dynamic monopoly of $(G, \tau )$ if the vertices of $G$ can be partitioned into subsets $D_0, D_1, \ldots, D_k$ such that $D_0=D$ and, for all $i$ with $0 \leq i \leq k-1$, each vertex $v$ in $D_{i+1}$ has at least $\tau (v)$ neighbours in the union of $D_0, D_1, \ldots, D_i$. Dynamic monopolies model the spread of influence or propagation of opinion in social networks, where the graph $G$ represents the underlying network. The smallest cardinality of any dynamic monopoly of $(G,\tau )$ is denoted by $\mathrm{dyn}_{\tau }(G)$. In this paper we assume that the threshold of each vertex $v$ of the network is a random variable $X_v$ such that $0\leq X_v \leq \deg _G(v)+1$. We obtain sharp bounds on the expectation and the concentration of $\mathrm{dyn}_{\tau }(G)$ around its mean value. We also obtain some lower bounds for the size of dynamic monopolies in terms of the order of graph and expectation of the thresholds.
MODELLING REGIONAL MIGRATION
ANGELA PEZIC
Journal: Bulletin of the Australian Mathematical Society / Volume 80 / Issue 1 / August 2009
Print publication: August 2009
Tails of Stopped Random Products: The Factoid and Some Relatives
Distribution theory - Probability
Stochastic processes
Anthony G. Pakes
The upper tail behaviour is explored for a stopped random product $\prod_{j=1}^{N}X_j$, where the factors are positive and independent and identically distributed, and N is the first time one of the factors occupies a subset of the positive reals. This structure is motivated by a heavy-tailed analogue of the factorial n!, called the factoid of n. Properties of the factoid suggested by computer explorations are shown to be valid. Two topics about the determination of the Zipf exponent in the rank-size law for city sizes are discussed.
Decentralized search on spheres using small-world Markov chains: expected hitting times and structural properties
Archis Ghate
We build a family of Markov chains on a sphere using distance-based long-range connection probabilities to model the decentralized message-passing problem that has recently gained significant attention in the small-world literature. Starting at an arbitrary source point on the sphere, the expected message delivery time to an arbitrary target on the sphere is characterized by a particular expected hitting time of our Markov chains. We prove that, within this family, there is a unique efficient Markov chain whose expected hitting time is polylogarithmic in the relative size of the sphere. For all other chains, this expected hitting time is at least polynomial. We conclude by defining two structural properties, called scale invariance and steady improvement, of the probability density function of long-range connections and prove that they are sufficient and necessary for efficient decentralized message delivery.
Technology diffusion by learning from neighbours
Operations research and management science
Kalyan Chatterjee, Susan H. Xu
Journal: Advances in Applied Probability / Volume 36 / Issue 2 / June 2004
In this paper, we consider a model of social learning in a population of myopic, memoryless agents. The agents are placed at integer points on an infinite line. Each time period, they perform experiments with one of two technologies, then each observes the outcomes and technology choices of the two adjacent agents as well as his own outcome. Two learning rules are considered; it is shown that under the first, where an agent changes his technology only if he has had a failure (a bad outcome), the society converges with probability 1 to the better technology. In the other, where agents switch on the basis of the neighbourhood averages, convergence occurs if the better technology is sufficiently better. The results provide a surprisingly optimistic conclusion about the diffusion of the better technology through imitation, even under the assumption of extremely boundedly rational agents. | CommonCrawl |
\begin{document}
\title[Quantum dynamics of an optomechanical system in the presence]{Quantum dynamics of an optomechanical system in the presence of photonic Bose-Einstein condensate}
\author{M Fani$^1$ and M H Naderi$^2$}
\address{$^1$Department of Physics, Faculty of Science, University of Isfahan, Hezar Jerib, 81746-73441, Isfahan, Iran} \address{$^2$Quantum Optics Group, Department of Physics, Faculty of Science, University of Isfahan, Hezar Jerib, 81746-73441, Isfahan, Iran} \ead{[email protected]}
\begin{indented} \item[]October 2016 \end{indented}
\begin{abstract} In this paper, we study theoretically the optomechanical interaction of an almost pure condensate of photons with an oscillating mechanical membrane in a micro-cavity. We show that in the Bogoliubov approximation, due to the large number of photons in the condensate phase, there is a linear strong effective coupling between the Bogoliubov mode of the photonic Bose-Einstein condensate (BEC) and the mechanical motion of the membrane which depends on the nonlinear photon-photon scattering potential. This coupling leads to the cooling of the mechanical motion, the normal mode splitting (NMS), the squeezing of the output field and the entanglement between the excited mode of the cavity and the mechanical mode. We show that, in one hand, the nonlinearity of the photon gas increases the degree of the squeezing of the output field of the micro-cavity and the efficiency of the cooling process at high temperatures. In the other hand, it reduces NMS in the displacement spectrum of the oscillating membrane and the degree of the optomechanical entanglement. In addition, the temperature of the photonic BEC can be used to control the above-mentioned phenomena. \end{abstract}
\pacs{42.50.Pq, 67.85.Jk, 42.50.Wk, 03.65.Ud}
\noindent{\it Keywords}: Photonic Bose-Einstein condensate, cavity optomechanics, ground-state cooling, optomechanical entanglement, field squeezing\\
\maketitle
\section{Introduction}
The interactions between light and matter have been studied and implemented in a wide variety of systems from cavity QED to solid-state systems for many years \cite{Auffeves}. One type of light-matter interaction which has attracted much attention over the past decade is the optomechanical (OM) coupling between radiation pressure and a mechanical oscillator (for a recent review, see, e.g., \cite{Aspelmeyer}). In an OM cavity, the electromagnetic field affects the mechanical motion of a movable mirror via radiation pressure, resulting in the OM coupling between the cavity field and the mechanical element. Because of the dependence of the cavity length on the intensity of the field, the OM interaction is intrinsically nonlinear \cite{Gong}. The OM coupling was first considered for trapping and controlling dielectric particles \cite{Ashkin} and later for detecting gravitational waves \cite{Braginsky}. However, the field of optomechanics has undergone rapid progress in recent years and is currently subject to intensive research investigations. Nowadays, the OM coupling can be realized in many different configurations with a wide range of mechanical frequencies (from $kHz$ to a few $GHz$) and of effective masses (from $pg$ to $kg$) \cite{Aspelmeyer}.
As a few examples of OM setups, we can mention Fabry-Perot cavities with a moving end mirror \cite{Metzger}, suspended dielectric membranes \cite{Thompson,Sankey}, photonic crystal cavities \cite{Eichenfield,Gavartin} and cold atoms trapped inside optical cavities \cite{Brennecke,Murch}. Furthermore, different applications have been considered for the OM systems such as high precision detection and measurement of small forces, displacements and masses \cite{Aspelmeyer}. Nevertheless, the feature that makes the OM system more interesting is the fact that it is one of the most promising candidates for exploring quantum effects in the mesoscopic and macroscopic scales \cite{Schliesser,Brooks}. The main obstacle to observing quantum behaviour at macroscopic scales comes from thermal noise. As a matter of fact, in order to prepare a macroscopic mechanical object in a quantum state it is necessary to cool it down to its motional ground state. In the OM setups, due to the finite lifetime of the photons, the radiation pressure force is non-conservative and it can provide an extra mechanical damping under certain circumstances which leads to the cooling of the mechanical element \cite{Aspelmeyer}. This cooling mechanism, which is referred to as back-action (or self) cooling in the literature \cite{Genes2}, has been studied theoretically \cite{Dantan,Genes3,Dobrindt} and it was experimentally realized in the regime of a few phonons \cite{Schliesser,Groblacher} and even close to the ground state \cite{Chan}. The possibility of ground-state cooling has also been predicted theoretically in the resolved-sideband regime \cite{Genes3,Marquardt} but it has not yet been achieved experimentally.
Optomechanical systems have attracted considerable attention in connection with their ability to generate entangled states of macroscopic objects. The radiation pressure-induced entanglement between two mirrors of a ring cavity was first proposed in \cite{Mancini}. After that, many other schemes have been proposed to generate entanglement between different subsystems in the standard OM as well as hybrid OM systems \cite{Vitali,Tian,Vitali2,Akram}. Aside from the generation of entanglement, the possibility of producing nonclassical states of both the mechanical motion and the cavity field has been investigated in various OM configurations \cite{Clerk,Purdy,Safavi-Naeini}. Another noticeable feature of OM systems is the phenomenon of normal mode splitting (NMS), which stems from the strong coupling of two degenerate modes with energy exchange taking place on a time scale faster than the decoherence of each mode \cite{Dobrindt}. The optomechanical NMS, which has been experimentally observed \cite{Teufel}, may be taken into account in those experiments that seek to demonstrate ground-state cooling of the mechanical oscillator \cite{Marquardt,Wilson-Rae}. In recent years, there has been an increasing interest in nonlinear hybrid OM cavities, where the nonlinearity is mainly contributed by nonlinear media, such as an optical Kerr medium \cite{Kumar}, an optical parametric amplifier (OPA) \cite{Huang2}, or a combination of both (Kerr-down conversion nonlinearity) \cite{Shahidani}. It has been shown that the Kerr nonlinearity shifts the cavity frequency and weakens the OM coupling \cite{Kumar}, the OPA leads to strong OM coupling via increasing the intensity of the cavity field \cite{Huang2}, and the Kerr-down conversion can lead to significant photon-phonon entanglement simultaneously with ground-state cooling of the oscillating mirror \cite{Shahidani}.
On the other hand, hybrid optical cavities containing an atomic Bose-Einstein condensate (BEC) have been identified as suitable candidates for realization of the OM coupling arising from the dispersive interaction of the BEC with the cavity field \cite{Brennecke2}. In such systems, the fluctuations of the atomic field, i.e., the Bogoliubov mode, play the role of the vibrational mode of the mechanical oscillator in an OM cavity \cite{Stamper-Kurn}. In comparison to the standard OM setups, OM cavities assisted by a BEC operate in a different regime and provide a strong OM coupling as well as a Kerr nonlinearity in the low-photon-number regime \cite{Gupta}. Furthermore, it has been shown \cite{Dalafi} that in the Bogoliubov approximation atomic collisions affect the dynamics of the system by shifting the energy of the excited mode and provide an atomic parametric amplifier interaction.
Despite the bosonic nature of photons, it was believed for a long time that the realization of \textit{photonic} BEC faces a fundamental obstacle. The problem lies in vanishing mass and chemical potential of photons which make it very difficult to cool a fixed number of photons such that they form a condensate. However, the BEC phase transition was observed experimentally for the first time at room temperature for photons trapped in a dye-filled optical microcavity with two curved mirrors at a small distance in comparison to their dimensions \cite{Klaers2,Klaers1}. The cavity mirrors provide a non-vanishing mass in the paraxial approximation \cite{Chiao} as well as a harmonic trapping potential for photons \cite{Klaers2,Klaers1}. In addition, photons can be thermalized via multiple absorption and re-emission by the dye molecules \cite{Klaers1}. In this way, the photon gas is formally equivalent to a two-dimensional trapped massive boson gas and so the photonic BEC phase transition is possible. It should be noted that in contrast to the laser, the condensation of photons is a thermal equilibrium phase transition.
Following the first successful experimental realization, the BEC of photons, as a new state of light, has attracted much attention in recent years. Besides further relevant experimental investigations \cite{Schmitt1,Schmitt2,Marelic}, various theoretical studies have been carried out to explain the equilibration and properties of the system in the framework of statistical mechanics \cite{Sobyanin,Klaers3,Weiss}, non equilibrium Green's function \cite{deLeeuw,deLeeuw2,Vanderwurff} and nonlinear Schr\"odinger equation \cite{Nyman,Strinati}. In addition, some other theoretical schemes have been proposed to achieve thermalization and BEC phase transition of a photon gas in a dilute non-degenerated atomic gas \cite{Kruchkov}, in a one-dimensional barrel optical microresonator filled with a dye solution \cite{Cheng}, in an optomechanical cavity with a segmented moving mirror \cite{Weitz}, and in a multimode hybrid atom-membrane optomechanical microcavity \cite{Fani}. Furthermore, the temperature-dependent decay rate \cite{Zhang} and enhanced dynamic stark shift \cite{Fan} of an atom interacting with a BEC of photons have been theoretically studied.
In spite of the various works which have been done on the photonic BEC, its interaction with matter systems has received less attention. Inspired by the existing studies on the interaction between atomic BECs and OM systems and also motivated by the similarities between photonic and atomic BECs, in this paper we consider the linear OM coupling of a BEC of photons with the mechanical oscillator in an OM cavity. We study the dynamical effects of the radiation pressure force induced by the photonic BEC. The cooling of the mechanical mode and the steady-state entanglement between the mechanical mode and the Bogoliubov mode of the BEC are investigated. The coherence of the photonic condensate causes an effective strong coupling between the collective excitations of the condensate and the mechanical modes. In addition, evidence of a weak photon-photon interaction in the photonic BEC has been observed experimentally \cite{Klaers2}. Actually, this interaction has a significant influence on the behaviour of the OM system, so it can provide a tool to extract information about the photonic BEC via measuring the properties of the OM system. We show that the photon-photon interaction and the BEC temperature can also be considered as control parameters to achieve quantum effects.
The remainder of this paper is organized as follows. In section \ref{secmodel}, we introduce the physical model of the system under consideration. In section \ref{secbogo}, by considering an almost pure condensate with weak two-body interaction, we apply the Bogoliubov approximation to the Hamiltonian of the system. We devote section \ref{secmotion} to the derivation of the equations of motion describing the system dynamics within the input-output formalism. In section \ref{secdis}, we study the cooling and the displacement spectrum of the mechanical mode. Section \ref{secinten} is devoted to the output intensity and quadrature squeezing spectra, and in section \ref{secentan}, the steady-state entanglement between the collective excitation mode of the photonic BEC and the mechanical mode is studied. Finally, we summarize our conclusions in section \ref{seccon}.
\section{Physical Model}\label{secmodel}
In this section we model the optomechanical interaction of an oscillating micromechanical element with a BEC of photons in a microcavity which is pumped by a coherent field (figure \ref{fig1}). The microcavity consists of two curved mirrors with curvature $R$ which are fixed at distance $L$ from each other. When $L \ll R$ the longitudinal mode number can be fixed and the cavity field is equivalent to an effectively massive two-dimensional photon gas which, due to the mirrors curvature, is confined in a harmonic trap with frequency $\omega_{t}=\frac{c}{\sqrt{R L/2}} $ where $c$ is the speed of light \cite{Klaers2,Klaers1}. Therefore, the frequency of each cavity mode in the paraxial approximation (i. e., the longitudinal wave number $k_{\parallel}$ to be much larger than the transverse wave number $k_{\perp}$) is given by \cite{Rexin} $\omega_{k} = c | \textbf{k} |$ where \begin{equation}\label{k}
\omega_{k} \simeq c({k_\parallel } + \frac{1}{2}\frac{{{k_ \bot }^2}}{{{k_\parallel }}} )= \frac{{2\pi n c}}{L} + ({2l + \left| m \right| + 1}) \omega _t, \end{equation}
with $l$ and $m$ being, respectively, the radial and the azimuthal quantum numbers. Thus, the transverse wave number is given by ${k_ \bot } = \frac{{2\pi n}}{L}\frac{\textit{s}}{{\sqrt {RL/2} }}$ where for the sake of brevity, we have defined $\textit{s}=2l + \left| m \right| +1$ which is also the degeneracy of the cavity modes. In addition, the fixed longitudinal mode number $n$ determines the cut-off frequency of the cavity, $\omega_{cut} = 2n\pi c/ L$, as well as the effective mass of photons, $m_{photon}=\hbar \omega_{cut} / c^2 $. The photon gas can be in thermal equilibrium with a reservoir at temperature $T$ with non-zero chemical potential and so it can undergo a BEC phase transition when the total number of photons in the cavity $N_t \simeq \frac{2 L P}{\hbar \omega_{cut} c} $ (with $P$ the pump power) is larger than the critical photon number $N_c \simeq \frac{\pi^2}{3} (\frac{k_B T}{\hbar \omega_{t}})^2$\cite{Klaers2}. In the experiment of photon BEC generation \cite{Klaers1} the phase transition occurred at the critical power of $(1.55 \pm 0.6)W$ corresponding to the critical photon number $N_c = (6.3 \pm 2.4)\times 10^4$ at room temperature ($k_B T/\hbar \omega_t \simeq 150$). In that experiment, the effective mass of photons and the trap frequency are $m_{photon}\simeq 6.7 \times 10^{-36} kg$ and $\omega _t \simeq 2\pi \times 4.1 \times 10^{10} Hz$, respectively. \begin{figure}
\caption{(Color online) Schematic diagram of an optomechanical microcavity of length $L$ composed of a vibrating membrane with damping rate $\gamma$ in the presence of a BEC of photons. The cavity is coupled to the external environment via one of its mirrors with decay rate $\kappa _{ex} $. Furthermore, the cavity field is driven with an input coherent field.}
\label{fig1}
\end{figure}
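For orientation, the threshold formulas quoted above can be evaluated directly. The following minimal Python sketch is purely illustrative: its only inputs are the physical constants and the values of $\omega_t$ and $T$ cited from \cite{Klaers1}, and it reproduces the order of magnitude of the reported critical photon number.
\begin{verbatim}
import numpy as np

# Illustrative evaluation of the BEC-threshold formulas quoted above.
hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J / K

omega_t = 2*np.pi*4.1e10   # trap frequency (rad/s), value quoted in the text
T = 300.0                  # photon-gas temperature (K)

x   = kB*T/(hbar*omega_t)      # reduced temperature k_B T/(hbar omega_t), ~150
N_c = (np.pi**2/3.0)*x**2      # critical photon number

print("k_B T/(hbar omega_t) =", round(x))
print("N_c =", N_c)            # ~7e4, compatible with (6.3 +/- 2.4)e4
\end{verbatim}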
As shown schematically in figure \ref{fig1}, the mechanical element is considered to be a semi-transparent, thin dielectric membrane inside the cavity which can vibrate at eigenfrequencies $\Omega_{q} = v_s \left| \textbf{q} \right| $, where $v_s$ is the sound velocity and $\textbf{q}$ is the associated two-dimensional wave vector of phonons. The Hamiltonian of the system in the grand canonical ensemble can be written as
\numparts \begin{eqnarray}
H = {H_f} + {H_m} + {H_I};\label{H1a}\\\nonumber {H_f} = \sum\limits_k {(\hbar {\delta _k} - \mu )a_k^\dagger {a_k}} \\ \,\,\,\,\,\,+ \frac{1}{2}\sum\limits_{k,k',q} {V(q)a_{k + q}^\dagger a_{k' - q}^\dagger {a_{k'}}{a_k}}
+ i\sum\limits_k {{\eta _k}(a_k^\dagger - {a_k})} ,\label{H1b}\\ {H_m} = \sum\limits_q {\hbar {\Omega _q}c_q^\dagger {c_q}} ,\label{H1c}\\ {H_I} = - \sum\limits_{k,q} {\hbar {g_k}(q)a_{k + q}^\dagger {a_k}({c_q} + c_{-q}^\dagger )} \label{H1d} , \end{eqnarray} \endnumparts where all the summations are taken over two-dimensional wave vectors. The term $H_f$ is the Hamiltonian of the photon gas with chemical potential $\mu$ and $a_k$ ($a_k^{\dagger}$) is the annihilation (creation) operator of a photon in the mode $\textbf{k}$. The second term in the Hamiltonian $H_f$ denotes the photon-photon scattering with the interaction potential $V(q)$, which can arise from the Kerr nonlinearity or thermal lensing in the cavity medium \cite{Klaers2}. The third term in the Hamiltonian of (\ref{H1b}) describes the cavity pumping by a coherent field with real amplitude $\eta_k$ and frequency $\omega_p$. The Hamiltonian $H_f$ has been written in the frame rotating with the pump frequency, so $\delta_k = \omega_k - \omega_p$. Equation (\ref{H1c}) is the free Hamiltonian of the mechanical oscillator, where $c_q$ ($c_q^{\dagger}$) is the annihilation (creation) operator of a phonon in the mode $\textbf{q}$. The Hamiltonian of (\ref{H1d}) represents the interaction of the membrane motion and the cavity field, where $g_k(q)$ is the linear OM coupling constant. This part of the total Hamiltonian describes the scattering of photons between the different cavity modes by the mechanical motion of the membrane. In the interaction Hamiltonian, due to the large free spectral range of the cavity, we have assumed that the membrane oscillations cannot induce transitions between modes with different longitudinal mode numbers.
In order to have a non-vanishing chemical potential and so a BEC of photons, it is necessary to have a thermalization mechanism in the micro-cavity, such as multiple absorption and reemission of photons by a dye solution \cite{Klaers2,Klaers1} or other proposed thermalization schemes \cite{Weitz,Fani,Hafezi}. Here, we have assumed that the thermalization process takes place much faster than the optomechanical coupling, which is consistent with typical experimental data \cite{Klaers1,Aspelmeyer}. Thus, during the optomechanical interaction the photon gas remains in thermal equilibrium, so the thermalization mechanism has not been included in the Hamiltonian of the system and it is also not shown in figure \ref{fig1}. However, we will take into account the effects of the thermalization process as a thermal reservoir for the cavity field (see section \ref{secmotion}). In addition, the cavity field is assumed to be coupled to an external reservoir via one of the end mirrors and we consider a vacuum input noise due to this coupling. It is also worth pointing out that, depending on the type of the microcavity, other kinds of mechanical elements (instead of the micromechanical membrane) might be considered, such as photonic crystal membranes \cite{Gavartin}, surface acoustic waves on a mirror \cite{Shi}, and segmented \cite{Weitz} or deformable \cite{Antoni} end mirrors. In this paper, we analyse our results based on experimentally feasible parameters for a micromechanical membrane setup, where the mechanical frequency is of the order of $MHz$, while the single-photon OM coupling is of the order of $kHz$ \cite{Sankey,Eichenfield}. Also the cavity decay rate is set to be a few $MHz$.
\section{Bogoliubov Approximation}\label{secbogo}
To proceed further, we now assume that the total number of photons is far above the BEC threshold, $N_t \gg N_c$, so there is a macroscopic number of photons ($N_0 \gg 1 $) in the ground state of the cavity field and the chemical potential is equal to the lowest energy of the cavity field. On the other hand, since the two-body interaction is weak in this system \cite{Klaers2} we can apply the Bogoliubov theory to the photon gas \cite{Chiao,Zhang}. Replacing $a_0$, $a^{\dagger}_0$ ($l,m=0$) by $\sqrt{N_0}$ and keeping only leading terms in $\sqrt{N_0}$ in each term of the Hamiltonians of equations (\ref{H1b}) and (\ref{H1d}), we obtain
\numparts \begin{eqnarray} {H_f} \simeq {E_0} + \sum\limits_{k \ne 0} {(\hbar {\delta _k} - \mu )a_k^\dagger {a_k}} \nonumber \\ \,\, + \frac{1}{2}{V_0} N_0 \sum\limits_{k \ne 0} {(a_{- k}^\dagger a_k^\dagger + {a_{-k}}{a_k})} + i\sum\limits_{k \ne 0} {{\eta _k}(a_k^\dagger - {a_k})} ,\label{H2a}\\ {H_I} \simeq - \sum\limits_{q \ne 0} {2 \hbar g(q) \sqrt{N_0}(a_q^\dagger + {a_{ - q}})({c_q} + c_{ - q}^\dagger )} ,\label{H2b} \end{eqnarray} \endnumparts
where $E_0$ is a constant and does not affect the dynamics of the system. Here, the indices $k$ and $q$ denote only the transverse wave numbers ($k_{\perp}$), so the summations are taken over $ \{ lm \} \neq \{ 00 \} $. In equations (\ref{H2a},\ref{H2b}) we have neglected the dependence of OM coupling on $k$, which is justified in the paraxial region.
Equation (\ref{H2a}) shows that in the Bogoliubov approximation the photon-photon interaction takes the form of a parametric coupling between the modes $\textbf{k} $ and $-\textbf{k}$ which can lead to photon squeezing. We have also assumed that the two-body interaction potential is constant \cite{Zhang} and it is given by $V_0=\zeta \hbar \omega_{t}/ 2\pi$ where $\zeta$ is the dimensionless interaction parameter whose experimental value is $\zeta \approx (7\pm 3)\times 10^{-4}$ \cite{Klaers2}. On the other hand, the Bogoliubov approximation leads to the linearization of the OM interaction Hamiltonian of equation (\ref{H2b}). This means that we have considered only the scattering of photons from/into the condensate phase by the OM coupling, and the scattering among excited modes has been neglected due to the small population of these modes.
Here, the important feature is that the coherence of the photonic BEC, i.e., the macroscopic ground-state population of the photon gas, not only provides a strong effective linear coupling among the excited modes of the cavity and the mechanical modes, but also causes a strong parametric interaction. The condensate photon number is given by $N_0 = N_t - \sum\limits_{k \ne 0} { \langle a_k^\dagger {a_k} \rangle } $ but, as we will see in section \ref{secdis}, the changes in the value of $N_0$ induced by the OM coupling and the photon-photon interaction are negligible. Therefore, in the numerical calculations we use the value of $N_0$ for the noninteracting photon gas, $N_0 = N_t - \sum\limits_{k \ne 0} { (e^{\hbar (\omega_k - \mu)/k_B T}-1)^{-1} } $.
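For the reader who wishes to reproduce the numerical results, the ideal-gas estimate of $N_0$ used above can be evaluated with a few lines of Python. This is only an illustrative sketch: the mode cut-off $s_{max}$ is an assumption made for the truncation of the sum, and the degeneracy of each level is $s$ itself, as noted in section \ref{secmodel}.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34
kB   = 1.380649e-23

def condensate_number(N_t, omega_t, T, s_max=2000):
    # Ideal-gas estimate N_0 = N_t - sum over excited modes, using the
    # excitation energies (s-1)*hbar*omega_t above the ground state and
    # the degeneracy s of each level (s = 2l + |m| + 1).
    s = np.arange(2, s_max + 1)
    occupation = s/(np.exp(hbar*omega_t*(s - 1)/(kB*T)) - 1.0)
    return N_t - occupation.sum()

omega_t = 2*np.pi*4.1e10
for T in (50*hbar*omega_t/kB, 150*hbar*omega_t/kB):  # temperatures used later
    print(T, condensate_number(1e6, omega_t, T))
\end{verbatim}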
It is well-known \cite{pitaevskii} that the Bogoliubov transformation, \begin{eqnarray}\label{bogo} {a_k} = {u_k}{b_k} + {v_k}b_{ - k}^\dagger , \nonumber \\ a_{k}^\dagger = {u_k}b_k^\dagger + {v_k}{b_{ - k}}, \end{eqnarray} with \begin{eqnarray}\label{uv} {u_k} = {[\frac{1}{2}(1 + \frac{{{N_0}{V_0}}}{{\hbar {\omega _k}}}){(1 + 2\frac{{{N_0}{V_0}}}{{\hbar {\omega _k}}})^{ - 1/2}} + \frac{1}{2}]^{1/2}},\nonumber \\ {v_k} = - {[\frac{1}{2}(1 + \frac{{{N_0}{V_0}}}{{\hbar {\omega _k}}}){(1 + 2\frac{{{N_0}{V_0}}}{{\hbar {\omega _k}}})^{ - 1/2}} - \frac{1}{2}]^{1/2}}, \end{eqnarray} diagonalizes the interaction term in the Hamiltonian $H_f$. Therefore, the total Hamiltonian of the system in terms of the operators $b_k, b^{\dagger}_k$ is given by \begin{eqnarray} H \simeq \sum\limits_{k \ne 0} {\hbar {\tilde{\delta} _k}b_k^\dagger {b_k}} +\sum\limits_{k\neq 0} {{\Omega _k}c_k^\dagger {c_k}} \nonumber \\ \,\,\, - \sum\limits_{k \ne 0} {\hbar {{\bar g}_k}(b_k^\dagger + {b_{ - k}})({c_k} + c_{ - k}^\dagger )} + i\sum\limits_{k \ne 0} {{{\bar \eta }_k}(b_k^\dagger - {b_k})} , \end{eqnarray} where the detuning is $\tilde{\delta}_k = \tilde{\omega}_k - \omega _p$, with the Bogoliubov dispersion relation ${{\tilde \omega }_k} = {\omega _k}{[1 + 2{N_0}{V_0}/\hbar {\omega _k}]}^{1/2}$. In addition, we have introduced the effective coupling strength $\bar g_k = 2 \sqrt{N_0} g(k)(u_k + v_k)$ and the real effective pump amplitude $\bar \eta _k = \eta _k (v_k - u_k)$. Increasing the interaction parameter $V_0$ causes $\bar{g_k}$ to decrease. Besides, for fixed value of $N_t$, increasing the temperature $T$ reduces $N_0$ slightly, but since $N_0\gg 1$, the effective coupling does not change by $T$, considerably.
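As an illustration of how the Bogoliubov factors enter the effective parameters, the following short Python sketch evaluates $u_k$, $v_k$, $\tilde \omega_k$, $\bar g_k$ and $\bar \eta_k$ from the expressions above. The numerical inputs, in particular taking $\omega_k$ of the order of $\omega_t$, are assumptions made only for this illustration.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34

def bogoliubov_parameters(omega_k, N0, V0, g, eta):
    # u_k, v_k, Bogoliubov frequency and effective coupling/pump amplitude,
    # as defined in the text.
    r = N0*V0/(hbar*omega_k)
    u = np.sqrt(0.5*(1 + r)/np.sqrt(1 + 2*r) + 0.5)
    v = -np.sqrt(0.5*(1 + r)/np.sqrt(1 + 2*r) - 0.5)
    omega_tilde = omega_k*np.sqrt(1 + 2*r)
    g_bar = 2*np.sqrt(N0)*g*(u + v)
    eta_bar = eta*(v - u)
    return u, v, omega_tilde, g_bar, eta_bar

# purely illustrative numbers
omega_t = 2*np.pi*4.1e10
V0 = 4e-4*hbar*omega_t/(2*np.pi)                  # zeta = 4e-4
print(bogoliubov_parameters(omega_t, 9.6e5, V0, 4.2e-7*omega_t, 0.0))
\end{verbatim}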
In order to more simplify the Hamiltonian, we first apply the unitary transformation $U = \exp \{ i\sum\limits_k {{{\bar \eta }_k}(b_k^\dagger + {b_k})} /{{\tilde \delta }_k}\} $, which yields ${b_k} \to {b_k} + i{{\bar \eta }_k}/{{\tilde \delta }_k}$ and so the elimination of the coherent pump term. Then by introducing the symmetric and antisymmetric modes, \numparts \begin{eqnarray}\label{sym} B_k^{s/a} = ({b_k} \pm {b_{ - k}})/\sqrt 2 ,\\ C_k^{s/a} = ({c_k} \pm {c_{ - k}})/\sqrt 2 , \end{eqnarray} \endnumparts the Hamiltonian can be written as \numparts \begin{eqnarray}\label{Hsaa} H &= {H_s} + {H_a};\\ {H_s} &= \sum\limits_{k > 0} {\hbar {\delta _k}B{{_k^s}^\dagger }B_k^s} + \sum\limits_{k > 0} {\hbar {\Omega _k}C{{_k^s}^\dagger }C_k^s} \nonumber \\
&- \sum\limits_{k > 0} {\hbar {{\bar g}_k}(B_k^s + B{{_k^s}^\dagger })(C_k^s + C{{_k^s}^\dagger })} ,\label{Hsab}\\ {H_a} &= \sum\limits_{k > 0} {\hbar {\delta _k}B{{_k^a}^\dagger }B_k^a} + \sum\limits_{k > 0} {\hbar {\Omega _k}C{{_k^a}^\dagger }C_k^a} \nonumber \\ & - \sum\limits_{k > 0} {\hbar {{\bar g}_k}(B_k^a - B{{_k^a}^\dagger })(C_k^a - C{{_k^a}^\dagger })} ,\label{Hsac} \end{eqnarray} \endnumparts where the constant terms have been omitted. In the Hamiltonian of the equation (\ref{Hsaa}) the symmetric and antisymmetric modes are completely decoupled. On the other hand, because of the similarities of the Hamiltonians $H_s$ and $H_a$ most of the physical results are similar to each other for these modes. Thus, in the following sections we consider only the symmetric modes and, for the sake of simplicity, we omit the index $s$ in the corresponding relations. In addition, It is obvious from the Hamiltonian of the equations (\ref{Hsab},\ref{Hsac}) that all the modes $k$ are decoupled and their evolutions are independent, so we also omit the index $k$. Therefore, in the following the symbol $Y$ is used instead of $Y^s_k$ where $Y$ is any of the operators or parameters of the system. On the other hand, by adjusting the position and the geometry of the membrane it is possible to select one of the normal modes of the membrane to interact effectively with the cavity field and the coupling of other modes can be neglected \cite{Biancofiore}. In the following, the numerical results are only presented for the mode given by $\textit{s}=2$, which is corresponding to the first excited mode of the cavity.
\section{Dynamics of The System}\label{secmotion}
To describe the dynamics of the system, we apply the input-output formalism \cite{Clerk2}. Using the Hamiltonian of equation (\ref{Hsab}), we can write the equations of motion as follows \numparts \begin{eqnarray}\label{motiona} \dot B &= - (i\tilde \delta + \kappa )B + i\bar g(C + {C^\dagger }) \nonumber \\
&+ \sqrt {2\kappa_{ex} } {B_{in}}+ \sqrt {2\kappa_{0} } F,\\ \dot C &= - (i\Omega + \gamma )C + i\bar g(B + {B^\dagger }) + {\xi _c},\label{motionb} \end{eqnarray} \endnumparts with $\kappa = \kappa_{ex} + \kappa_0$ being the total cavity decay rate, where $\kappa_{ex}$ is the cavity decay rate at the input mirror and $\kappa_0$ is the remaining loss rate. The damping rate of the membrane is denoted by $\gamma$ and the operator $B_{in}$ is the input noise of the symmetric Bogoliubov mode. Considering a vacuum input noise for the cavity field, $\langle a_{in} (t) a^{\dagger}_{in} (t') \rangle = \delta (t-t')$ and using equation (\ref{bogo}) and (\ref{sym}) we have \begin{equation}\label{bnoiset} \begin{array}{l} \langle {B_{in}^\dagger (t){B_{in}}(t')} \rangle = {v^2}\delta (t - t'),{\mkern 1mu} {\kern 1pt} {\mkern 1mu} \\ \langle {{B_{in}}(t)B_{in}^\dagger (t')} \rangle = {u^2}\delta (t - t'),\\ \langle {{B_{in}}(t){B_{in}}(t')} \rangle = - uv\delta (t - t'). \end{array} \end{equation} In addition, the thermalization process is considered as coupling to a thermal reservoir with associated noise operator $F$ whose nonzero correlation functions are given by, $\langle {F^\dagger (t){F}(t')} \rangle = {{\bar n}_{th}}\delta (t - t'), \langle {{F}(t)F^\dagger (t')} \rangle = ({{\bar n}_{th}} + 1)\delta (t - t')$ where ${\bar n}_{th} = (e^{\hbar \tilde{\omega}/k_B T}-1)^{-1}$. Therefore, the total noise operator for the Bogoliubov mode can be defined as $\xi_B =\sqrt {2\kappa_{ex} } {B_{in}}+ \sqrt {2\kappa_{0} } F $. Similarly, the mechanical element is in equilibrium with a reservoir at temperature $T_m$, so the noise operator $\xi _c$ satisfies $\langle {\xi _c^\dagger (t){\xi _c}(t')} \rangle = 2 \gamma {{\bar n}_c}\delta (t - t'), \langle {{\xi _c}(t)\xi _c^\dagger (t')} \rangle =2 \gamma ({{\bar n}_c} + 1)\delta (t - t')$ in which $\bar n_c=(e^{\hbar \Omega / k_B T_m}-1)^{-1}$. It should be noticed that equation (\ref{motionb}) is valid when $\Omega \gg \gamma$ which is justified for most OM systems \cite{Aspelmeyer}.
Defining the dimensionless quadratures \numparts \begin{eqnarray}\label{quadraturs} X = \frac{1}{{\sqrt 2 }}(B + {B^\dagger }),P = \frac{1}{{\sqrt 2 i}}(B - {B^\dagger }),\\ x = \frac{1}{{\sqrt 2 }}(C + {C^\dagger }),p = \frac{1}{{\sqrt 2 i}}(C - {C^\dagger }), \end{eqnarray} \endnumparts the equations of motion (\ref{motiona}) and (\ref{motionb}) can be written in the compact matrix form \begin{equation}\label{matrixeq} {\bf{\dot u}}(t) = {\bf{Au}}(t) + {\bf{n}}(t), \end{equation} with ${\bf{u}}(t) = {(X(t),P(t),x(t),p(t))^T}$ and the corresponding noise operator vector is ${\bf{n}}(t) = ({\xi _X, \xi _P, \xi _x, \xi _p)^T}$ where we have defined $\xi _X = (\xi_B+\xi_B^{\dagger})/\sqrt{2}$ , $\xi _P =-i (\xi_B-\xi_B^{\dagger})/\sqrt{2}$, $\xi_x =(\xi_c+\xi_c^{\dagger})/\sqrt{2}$, $\xi_p =-i (\xi_c-\xi_c^{\dagger})/\sqrt{2} $. The drift matrix ${\bf{A}}$ is given by \begin{equation}\label{drift} {\bf{A}} = \left( {\begin{array}{*{20}{c}} { - \kappa }&{\tilde \delta }&0&0\\ { - \tilde \delta }&{ - \kappa }&{2\bar g}&0\\ 0&0&{ - \gamma }&\Omega \\ {2\bar g}&0&{ - \Omega }&{ - \gamma } \end{array}} \right). \end{equation}
The system is stable when the real part of all the eigenvalues of the drift matrix $\bf{A}$ are negative. The stability condition can be checked by the Routh-Hurwitz criterion \cite{Gradshteyn}. For the stable system with Gaussian noises all the stationary properties of the system can be extracted from the Lyapunov equation \cite{Genes} \begin{equation}\label{lyap} {\bf{AV}} + {\bf{V}}{{\bf{A}}^{\bf{T}}} = - {\bf{D}}, \end{equation} where $\textbf{V}$ is the $4 \times 4$ stationary covariance matrix with components ${V_{ij}} = \frac{1}{2} \langle {{u_i}(\infty ){u_j}(\infty ) + {u_j}(\infty ){u_i}(\infty )} \rangle $ and the corresponding diffusion matrix $\bf D$ is given by ${\bf{D}} = Diag[\{ \kappa_{ex} (u-v)^2 +\kappa_0 (2{{\bar n}_{th}} + 1),\kappa_{ex} (u+v)^2 +\kappa_0 (2{{\bar n}_{th}} + 1),\gamma (2{{\bar n}_c} + 1),\gamma (2{{\bar n}_c} + 1)\} ]$. It should be noted that the covariance matrix $\bf{V}$ which is obtained from equation (\ref{lyap}) is associated to the symmetric Bogoliubov mode $B$. Using the Bogoliubov transformation of equation (\ref{bogo}), we obtain the following covariance matrix for the symmetric cavity mode, $A_k = (a_k + a_{-k})/\sqrt{2}$ in terms of the elements of $\bf{V}$ \begin{equation}\label{Vp} \resizebox{0.65\textwidth}{!}{${{\bf{V'}} = \left( {\begin{array}{*{20}{c}} {{{(u + v)}^2}{V_{11}}}&{({u^2} - {v^2}){V_{12}}}&{(u + v){V_{13}}}&{(u + v){V_{14}}}\\ {({u^2} - {v^2}){V_{21}}}&{{{(u - v)}^2}{V_{22}}}&{(u - v){V_{23}}}&{(u - v){V_{24}}}\\ {(u + v){V_{31}}}&{(u - v){V_{32}}}&{{V_{33}}}&{{V_{34}}}\\ {(u + v){V_{41}}}&{(u - v){V_{42}}}&{{V_{43}}}&{{V_{44}}} \end{array}} \right).}$} \end{equation}
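In practice, the steady-state covariance matrix can be obtained numerically. A minimal Python sketch of this procedure, using SciPy's continuous Lyapunov solver and checking stability through the eigenvalues of $\bf{A}$ (equivalent in outcome to the Routh-Hurwitz criterion), is given below; all parameter values fed to these functions are illustrative.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def drift_matrix(delta, Omega, kappa, gamma, g_bar):
    return np.array([[-kappa,   delta,  0.0,     0.0   ],
                     [-delta,  -kappa,  2*g_bar, 0.0   ],
                     [ 0.0,     0.0,   -gamma,   Omega ],
                     [ 2*g_bar, 0.0,   -Omega,  -gamma ]])

def diffusion_matrix(kappa_ex, kappa_0, gamma, u, v, n_th, n_c):
    return np.diag([kappa_ex*(u - v)**2 + kappa_0*(2*n_th + 1),
                    kappa_ex*(u + v)**2 + kappa_0*(2*n_th + 1),
                    gamma*(2*n_c + 1),
                    gamma*(2*n_c + 1)])

def steady_state(A, D):
    # Stability: all eigenvalues of A must have negative real parts.
    if np.max(np.linalg.eigvals(A).real) >= 0:
        raise ValueError("drift matrix is not stable")
    V = solve_continuous_lyapunov(A, -D)        # solves A V + V A^T = -D
    n_phonon = 0.5*(V[2, 2] + V[3, 3] - 1.0)    # <C^dagger C>_ss
    return V, n_phonon
\end{verbatim}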
\section{Displacement spectrum and cooling of the membrane}\label{secdis}
To study the dynamical effects of the optomechanical coupling on the mechanical oscillator in the presence of the Bogoliubov modes of a photonic BEC, we derive the mechanical susceptibility and the displacement spectrum of the oscillator. The symmetrized displacement spectrum of the mechanical oscillator is defined as \cite{Genes3} \begin{equation}\label{dispspedef} \resizebox{0.65\textwidth}{!}{${S_x}(\omega ) = \frac{1}{{4\pi }}\int {d\omega '{e^{ - i(\omega + \omega ')t}} \langle {x(\omega )x(\omega ') + x(\omega ')x(\omega )} \rangle } ,$} \end{equation} where $x(\omega)$, the Fourier transformation of $x(t)$, can be obtained by solving the time-domain equation of motion (\ref{matrixeq}) in the frequency domain. The Fourier transform of the time-domain operator $f(\tau)$ is defined by $f(\omega)=\frac{1}{\sqrt{2\pi}} \int_{ - \infty }^\infty {d\tau f(\tau ){e^{ - i\omega \tau }}} $. In this manner, one obtains $x(\omega)=\chi_m(\omega) F_m(\omega)$ where the mechanical susceptibility $\chi _m (\omega)$ is given by \begin{equation}\label{ki} \chi _m (\omega ) = \frac{\Omega }{{{\Omega _{eff}}^2 - {\omega ^2} - i\omega {\gamma _{eff}}}}, \end{equation} with the effective mechanical frequency, \begin{equation}\label{omegaeff} {\Omega _{eff}}^2 = {\gamma ^2} + {\Omega ^2} - \frac{{4{{\bar g}^2}\Omega \tilde \delta ({\kappa ^2} - {\omega ^2} + {{\tilde \delta }^2})}}{{{{({\kappa ^2} - {\omega ^2} + {{\tilde \delta }^2})}^2} + 4{\kappa ^2}{\omega ^2}}}, \end{equation} and the effective mechanical damping rate, \begin{equation}\label{gamaeff} {\gamma _{eff}} = 2\gamma + \frac{{8{{\bar g}^2}\Omega \tilde \delta \kappa }}{{{{({\kappa ^2} - {\omega ^2} + {{\tilde \delta }^2})}^2} + 4{\kappa ^2}{\omega ^2}}}. \end{equation} Equations (\ref{ki})-(\ref{gamaeff}) represent how the radiation pressure force modifies the dynamics of the mechanical mode.
Furthermore, $F_m(\omega)$ is the Fourier transform of the total force exerted on the mechanical mode, given by \begin{eqnarray}\label{Ft} {F_m}(\omega ) &= {\xi _p}(\omega ) + \frac{{\gamma - i\omega }}{\Omega } {\xi _x}(\omega ) + \frac{{2\bar g\tilde \delta }}{{{{(\kappa - i\omega )}^2} + {{\tilde \delta }^2}}} \xi_P \\ \nonumber
&+ \frac{{2\bar g\tilde \delta (\kappa - i\omega )}}{{{{(\kappa - i\omega )}^2} + {{\tilde \delta }^2}}} \xi_X, \end{eqnarray} Inserting the above equations into equation (\ref{dispspedef}) and using the following noise correlation functions in the frequency domain \numparts \begin{eqnarray}\label{noiseomega} \resizebox{0.64\textwidth}{!}{$\langle {{\xi_X}(\omega ){\xi_X}(\omega ')} \rangle =[\kappa_0 (2{\bar n}_{th}+1) + \kappa_{ex} (u-v)^2] \delta (\omega + \omega '), $}\\ \resizebox{0.64\textwidth}{!}{$\langle {{\xi_P}(\omega ){\xi_P}(\omega ')} \rangle = [\kappa_0 (2{\bar n}_{th}+1) + \kappa_{ex} (u+v)^2] \delta (\omega + \omega '),$}\\ \langle {{\xi_X}(\omega ){\xi_P}(\omega ')} \rangle = \langle {{\xi_P}(\omega ){\xi_X}(\omega ')} \rangle ^{*} = i \kappa \delta (\omega + \omega '),\\ \resizebox{0.64\textwidth}{!}{$\langle {{\xi _x}(\omega ){\xi _x}(\omega ')} \rangle = \langle {{\xi _p}(\omega ){\xi _p}(\omega ')} \rangle = \gamma (2{{\bar n}_c} + 1)\delta (\omega + \omega '),$}\\ \langle {{\xi _x}(\omega ){\xi _p}(\omega ')} \rangle = \langle {{\xi _p}(\omega ){\xi _x}(\omega ')} \rangle ^* = i\gamma \delta (\omega + \omega '), \label{noiseomege} \end{eqnarray} \endnumparts we obtain \begin{eqnarray}\label{Sx}
{S_x}&(\omega ) = \frac{1}{{4\pi }}|\chi (\omega ){|^2}\{ \gamma (2{{\bar n}_c} + 1)\frac{{{\Omega ^2} + {\gamma ^2} + {\omega ^2}}}{{{\Omega ^2}}} \nonumber \\
&+ \frac{{4{{\bar g}^2}{\kappa _0}(2{{\bar n}_{th}} + 1)({{\tilde \delta }^2} + {\kappa ^2} + {\omega ^2})}}{{{{({\kappa ^2} - {\omega ^2} + {{\tilde \delta }^2})}^2} + 4{\kappa ^2}{\omega ^2}}} \nonumber\\
&+ \frac{{4{{\bar g}^2}[{\kappa _{ex}}{{\tilde \delta }^2}{{(u + v)}^2} + {\kappa _{ex}}({\kappa ^2} + {\omega ^2}){{(u - v)}^2}]}}{{{{({\kappa ^2} - {\omega ^2} + {{\tilde \delta }^2})}^2} + 4{\kappa ^2}{\omega ^2}}}\} . \end{eqnarray} It is well-known that in the strong coupling regime ($2 \bar g > \kappa$) the displacement spectrum shows NMS \cite{Aspelmeyer}. The NMS occurs due to the fact that when the OM coupling is strong, instead of having two separate subsystems, i.e., an optical mode with frequency $\tilde{\delta}$ and a mechanical mode with frequency $\Omega$, the system consists of two mixed modes. The eigenfrequencies of the normal modes are given by the eigenvalues of the drift matrix $\bf A$. When $\gamma , \kappa \ll \Omega , \tilde{\delta} $ the eigenfrequencies are approximately \begin{equation}\label{normalmodes} \omega _ \pm ^2 \simeq \frac{1}{2}[({{\tilde \delta }^2} + {\Omega ^2}) \pm \sqrt {({{\tilde \delta }^2} - {\Omega ^2})^2 + 16\bar g^2 \Omega \tilde \delta } ]. \end{equation} The NMS can be observed as two well-resolved peaks in the displacement spectrum. The properties of the photonic BEC affect these peaks, so the measurement of the displacement spectrum of the membrane provides information about the photonic BEC. The normal modes (\ref{normalmodes}) depend on the BEC parameters in two ways. First, the effective detuning $\tilde{\delta}$ is a function of $N_0 V_0$ due to the Bogoliubov dispersion and, second, the effective OM coupling $\bar g$ depends on $N_0$ and $V_0$.
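For completeness, the displacement spectrum of equation (\ref{Sx}) and the approximate normal-mode frequencies of equation (\ref{normalmodes}) can be evaluated with the following illustrative Python helpers, which are a direct, unoptimized transcription of the formulas above.
\begin{verbatim}
import numpy as np

def normal_mode_frequencies(delta, Omega, g_bar):
    # approximate eigenfrequencies omega_{+-} given above
    s = delta**2 + Omega**2
    r = np.sqrt((delta**2 - Omega**2)**2 + 16*g_bar**2*Omega*delta)
    return np.sqrt(0.5*(s + r)), np.sqrt(0.5*(s - r))

def displacement_spectrum(w, delta, Omega, kappa, kappa_ex, kappa_0,
                          gamma, g_bar, u, v, n_th, n_c):
    # direct transcription of S_x(omega)
    den = (kappa**2 - w**2 + delta**2)**2 + 4*kappa**2*w**2
    Omega_eff_sq = gamma**2 + Omega**2 \
        - 4*g_bar**2*Omega*delta*(kappa**2 - w**2 + delta**2)/den
    gamma_eff = 2*gamma + 8*g_bar**2*Omega*delta*kappa/den
    chi = Omega/(Omega_eff_sq - w**2 - 1j*w*gamma_eff)
    thermal = gamma*(2*n_c + 1)*(Omega**2 + gamma**2 + w**2)/Omega**2
    cavity  = 4*g_bar**2*kappa_0*(2*n_th + 1)*(delta**2 + kappa**2 + w**2)/den
    backact = 4*g_bar**2*(kappa_ex*delta**2*(u + v)**2
                          + kappa_ex*(kappa**2 + w**2)*(u - v)**2)/den
    return np.abs(chi)**2*(thermal + cavity + backact)/(4*np.pi)
\end{verbatim}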
In figure \ref{fig2}, we have plotted the normalized displacement spectrum of the membrane at resonance $\tilde{\delta}=\Omega$ versus the normalized frequency $\omega/ \Omega$ for different values of the parameter $\zeta$ (figure \ref{fig2}(a)) and the temperature $T$ (figure \ref{fig2}(b)). With increasing the photon-photon interaction strength while the temperature is fixed, the effective OM coupling is weakened and consequently, the splitting of the normal modes decreases (figure \ref{fig2}(a)). On the other hand, figure \ref{fig2}(b) shows that by decreasing the temperature while keeping the photon-photon interaction strength fixed, the heights of the two peaks of the spectrum decrease, although their positions remain almost unchanged. Actually, by decreasing the temperature the depletion of the photon BEC will decrease. However, since $N_0 \gg 1$ this change does not shift the peaks considerably but due to the smaller population of the Bogoliubov mode (see figure \ref{fig5}), it diminishes the heights of the peaks. It should be noted that the resonance condition itself depends on both $V_0$ and $T$. The numerical value of the parameters are chosen to be compatible with the experiment \cite{Klaers1}. For example, $k_B T/\hbar \omega_t = 150$ corresponds to the room temperature photonic BEC while the membrane is pre-cooled to about $100mK$. \begin{figure}
\caption{(Color online) The normalized displacement spectrum of the mechanical mode versus the normalized frequency $\omega / \Omega$ for \textbf{(a) }$\zeta=4 \times 10^{-4}$ (blue solid line), $\zeta=10 \times 10^{-4}$ (red dashed line) and $k_B T=150 \hbar \omega_t $; \textbf{(b)} $k_B T=150 \hbar \omega_t$ (blue solid line), $k_B T=50 \hbar \omega_t$ (red dashed line) and $\zeta=4 \times 10^{-4}$ . The values of other parameters are $\tilde{\delta} = \Omega $, $\Omega = 7 \times 10^{-4} \omega_t, \kappa_{ex} = 10^{-5} \omega_t, \kappa_0 = 5 \kappa_{ex}, \gamma = 0.001 \kappa_{ex} , N_t = 10^6, g =4.2 \times 10^{-7} \omega_t$, and $ k_B T_m = 0.05 \hbar \omega_t. $ }
\label{fig2}
\end{figure}
As can be seen from equation (\ref{gamaeff}), the effective damping rate $\gamma_{eff}$ is larger than $2\gamma$ in the red-detuned regime ($\tilde{\delta} >0 $), so this extra damping can lead to the cooling of the membrane. The cooling occurs because, due to the finite lifetime of the photons, the radiation pressure force is non-conservative so that in the red-detuned regime it acts as a friction force for the mechanical element and causes the back-action cooling \cite{Aspelmeyer}. In addition, the back-action cooling is more efficient when the light and the mechanical mode are in resonance. To quantify the cooling process, the effective temperature $T_{eff}$ associated with the mechanical mode is defined by \cite{Clerk2} \begin{equation} \langle {{C^\dagger }C} \rangle {}_{ss} = \frac{1}{{{e^{\hbar {\Omega _{eff}}/{k_B}{T_{eff}}}} - 1}}, \end{equation} where $\Omega_{eff} = \Omega_{eff}(\omega=\Omega)$. In addition, in terms of the elements of the covariance matrix $\textbf{V}$, the steady-state phonon population is given by $\langle C^\dagger C \rangle_{ss}=(V_{33} + V_{44} -1)/2$, which can be calculated by solving the Lyapunov equation (\ref{lyap}).
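Numerically, the effective temperature follows from inverting the Bose factor once the steady-state phonon number is known, e.g. from the Lyapunov solution sketched at the end of section \ref{secmotion}. A minimal, purely illustrative Python helper reads:
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34
kB   = 1.380649e-23

def effective_temperature(n_ss, Omega_eff):
    # invert <C^dagger C>_ss = 1/(exp(hbar Omega_eff/(kB T_eff)) - 1)
    return hbar*Omega_eff/(kB*np.log(1.0 + 1.0/n_ss))

# usage with the Lyapunov solution sketched above:
#   V, n_ss = steady_state(A, D)
#   T_eff   = effective_temperature(n_ss, Omega_eff_at_resonance)
\end{verbatim}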
Figures \ref{fig3}(a) and \ref{fig3}(b) illustrate, respectively, the normalized effective mechanical damping and normalized effective mechanical frequency versus $\omega / \Omega$ for two values of the interaction parameter $\zeta$ assuming the resonance condition and keeping temperature fixed. Figure \ref{fig3}(a) shows that $\gamma _{eff}$ is considerably greater than $\gamma$ and it decreases by increasing the photon-photon interaction strength. The effective mechanical frequency decreases with increasing $\zeta$ (figure \ref{fig3}(b)). Since the figures are plotted at resonance, these changes arise from the decrease in the effective OM coupling.
\begin{figure}
\caption{(Color online) \textbf{(a)} The normalized effective mechanical damping rate and \textbf{(b)} the normalized effective mechanical frequency versus the normalized frequency $\omega / \Omega$ for $\zeta=4 \times 10^{-4}$ (blue solid line) and $\zeta=10 \times 10^{-4}$ (red dashed line). Here, we have set $k_B T=150 \hbar \omega_t $. The values of other parameters are the same as those in figure \ref{fig2}. }
\label{fig3}
\end{figure}
The steady-state phonon number and the associated effective temperature are, respectively, plotted in figures \ref{fig4}(a) and \ref{fig4}(b) versus the photonic BEC temperature for two values of $\zeta$. It is obvious that at lower temperatures the thermal noise is small and the cooling process is more efficient, such that for very low temperatures ground-state cooling, $\langle C^\dagger C\rangle_{ss}<1$, is even possible (figure \ref{fig4}(a)). On the other hand, by increasing the two-body interaction potential, in spite of the reduction of the effective mechanical damping rate (figure \ref{fig3}(a)), the phonon occupation number decreases. This result can be understood by noting that when the system operates in the strong OM coupling regime, the amplification of the OM coupling leads to an increase of the phonon occupancy \cite{Aspelmeyer}. However, in the system under consideration, increasing the photon-photon repulsion reduces the effective OM coupling, and thus the phonon occupancy decreases. Therefore, the mechanical cooling is more efficient in the interacting photon BEC, but as seen in figure \ref{fig4}, this is not the case when the photon BEC temperature is very low. The reason is that at low temperatures the thermal photon number $\bar n _{th}$ is negligible and in this limit the effect of the input noise correlations (equation (\ref{bnoiset})) (see also the diffusion matrix in section \ref{secmotion}) becomes significant. The photon-photon interaction amplifies the effects of this noise, which limits the cooling efficiency. The lower limit of the effective temperature ($\simeq 0.01 T_m$) corresponds to a temperature of about $1\,mK$ for the experimental setup introduced in \cite{Klaers1}.
\begin{figure}
\caption{(Color online) (a) The steady-state phonon occupation and (b) the normalized effective temperature versus the normalized temperature of the BEC of photon for (a) $\zeta=4 \times 10^{-4}$ (blue solid line) and $\zeta=10 \times 10^{-4}$ (red dashed line). The values of other parameters are the same as those in figure \ref{fig2}. }
\label{fig4}
\end{figure}
To end up this section, we consider the depletion of the BEC of photons. For this purpose, we have illustrated the temperature dependence of the steady-state number of photons in the excited mode of the cavity (figure \ref{fig5}), which in terms of the covariance matrix elements is given by $\langle A^\dagger A \rangle_{ss}=(V'_{11} + V'_{22} -1)/2$. As is seen, increasing the photon-photon interaction as well as the temperature leads to more population of the excited mode. The OM coupling also slightly increases the photon condensate depletion (see inset of figure \ref{fig5}). However, as is evident, the amount of depletion induced by both the photon-photon and OM interactions is negligibly small compared to $N_t$. Thus, one can safely neglect the effect of photon depletion in $N_0$ . \begin{figure}
\caption{(Color online) The steady-state population of the excited mode of the cavity versus the temperature for $\zeta=4 \times 10^{-4}$ (blue solid line), $\zeta=10 \times 10^{-4}$ (red dashed line) and $ g = 8.4 \times 10^{-7} \omega_t $ . The values of other parameters are the same as those in figure \ref{fig2}. In the inset, the dot-dashed blue line represents the corresponding photon population in the absence of OM coupling, $g=0$. }
\label{fig5}
\end{figure}
\section{Output intensity and quadrature squeezing spectra} \label{secinten}
The other measurable quantities of the system which can be examined to obtain information about the BEC of photons in the cavity and its OM interaction are the output intensity and quadrature squeezing spectra. In this section, we calculate these spectra to explore how they depend on the features of the photonic BEC. The output intensity spectrum is defined as the Fourier transform of the output field correlation function \cite{Clerk2}
\begin{equation}\label{soutdef} {S_{out}}(\omega ) = \frac{1}{{2\pi }}\int\limits_{ - \infty }^\infty {d\tau \langle {A_{out}^\dagger (t + \tau ){A_{out}}(t)} \rangle {e^{ - i\omega \tau }}} . \end{equation} where the output field $A_{out}$ is related to the input field $A_{in}$ via $A_{out} = \sqrt{2\kappa_{ex}} A - A_{in}$. Considering the correlation functions (\ref{noiseomega}-$e$), the symmetric output intensity spectrum is given by $ {S^s_{out}}(\omega ) = [{S_{out}}(\omega ) + {S_{out}}(-\omega )]/2$ with ${S_{out}}(\omega ) =\langle {A_{out}^\dagger}(\omega) A_{out}(-\omega) \rangle$. Solving the equation of motion (\ref{matrixeq}) in the frequency domain, we get
\begin{equation}\label{Bo}
B(\omega ) = {\alpha _1}(\omega )\xi_B + {\alpha _2}(\omega )\xi_B^\dagger + {\alpha _3}(\omega ) {\xi _c} + {\alpha _4}(\omega ) \xi _{c}^\dagger ,
\end{equation}
where
\numparts \begin{eqnarray}\label{alphas}
{\alpha _1}(\omega ) &=& {d^{ - 1}}\{ [{(\gamma - i\omega )^2} + {\Omega ^2}][i(\tilde \delta + \omega ) - \kappa ] \nonumber\\ &-& 2i{{\bar g}^2}\tilde \delta \Omega \} ,\\ {\alpha _2}(\omega ) &=& {d^{ - 1}}( - 2i{{\bar g}^2}\Omega ),\\
{\alpha _3}(\omega ) &=& {d^{ - 1}}[i\bar g(\tilde \delta + i\kappa + \omega )(\Omega + i\gamma + \omega )],\\
{\alpha _4}(\omega ) &=& {d^{ - 1}}[ - i\bar g(\tilde \delta + i\kappa + \omega )(\Omega - i\gamma + \omega )]; \end{eqnarray} \endnumparts with \begin{equation} d = - [{(\gamma - i\omega )^2} + {\Omega ^2}][{(\kappa - i\omega )^2} - {{\tilde \delta }^2}] + 4{{\bar g}^2}\tilde \delta \Omega . \end{equation} By applying the Bogoliubov transformation (\ref{bogo}), the relation $B^{\dagger}(\omega) = [B(-\omega)]^{\dagger}$, and the noise correlation functions (\ref{noiseomega}-$e$) we obtain, after some algebra, the output intensity spectrum as follows
\begin{equation}\label{s123} {S_{out}}(\omega ) = 2\kappa_{ex} \{ {u^2}{\beta _1}(\omega ) + {v^2}{\beta _2}(\omega ) + 2uv{\mathop{\rm Re}\nolimits} [{\beta _3}(\omega )]\} , \end{equation} where the functions $\beta _i$ are defined as \numparts \begin{eqnarray}\label{betas}
{\beta _1}(\omega ) &= [2\kappa_{ex} v^2+2\kappa_0 {{\bar n}_{th}}] |{\alpha _1}(-\omega ){|^2} \nonumber \\
&+ [2\kappa_{ex} u^2+2\kappa_0 ({{\bar n}_{th}} + 1)] |{\alpha _2}(-\omega ){|^2} \nonumber\\ &- 4\kappa_{ex} uv Re[\alpha_1(-\omega)^* \alpha_2(-\omega)] \nonumber \\
&+ 2\gamma {{\bar n}_c}|{\alpha _3}(-\omega ){|^2} + 2\gamma ({{\bar n}_c} + 1)|{\alpha _4}(-\omega ){|^2},\\
{\beta _2}(\omega ) &= [2\kappa_{ex} u^2+2\kappa_0 ({{\bar n}_{th}} + 1)] |{\alpha _1}( \omega ){|^2} \nonumber \\
&+ [2\kappa_{ex} v^2+2\kappa_0 {{\bar n}_{th}}] |{\alpha _2}( \omega ){|^2} \nonumber\\
&- 4\kappa_{ex} uv Re[\alpha_1(\omega) \alpha_2(\omega)^*] \nonumber \\
&+ 2\gamma ({{\bar n}_c} + 1)|{\alpha _3}( \omega ){|^2} + 2\gamma {{\bar n}_c}|{\alpha _4}( \omega ){|^2},\\ {\beta _3}(\omega ) &= [2{\kappa _{ex}}{u^2} + 2{\kappa _0}({{\bar n}_{th}} + 1)]{\alpha _1}(\omega ){\alpha _2}( - \omega ) \nonumber \\
&+ [2{\kappa _{ex}}{v^2} + 2{\kappa _0}{{\bar n}_{th}}]{\alpha _2}(\omega ){\alpha _1}( - \omega ) \nonumber \\ & - 2{\kappa _{ex}} uv [{\alpha _1}(\omega ){\alpha _1}( - \omega ) + {\alpha _2}(\omega ){\alpha _2}( - \omega )] \nonumber \\
&+ 2\gamma ({{\bar n}_c} + 1){\alpha _3}(\omega ){\alpha _4}( - \omega ) \nonumber \\
& + 2\gamma {{\bar n}_c}{\alpha _3}( - \omega ){\alpha _4}(\omega ). \end{eqnarray} \endnumparts
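The output intensity spectrum can thus be evaluated by a direct transcription of equations (\ref{alphas}), (\ref{s123}) and (\ref{betas}) into code. The following Python sketch is such a transcription (illustrative and unoptimized); the noise weights correspond to the correlation functions used above, and the symmetrized spectrum is obtained by averaging the result over $\pm\omega$.
\begin{verbatim}
import numpy as np

def alphas(w, delta, Omega, kappa, gamma, g_bar):
    d  = -(((gamma - 1j*w)**2 + Omega**2)*((kappa - 1j*w)**2 - delta**2)) \
         + 4*g_bar**2*delta*Omega
    a1 = (((gamma - 1j*w)**2 + Omega**2)*(1j*(delta + w) - kappa)
          - 2j*g_bar**2*delta*Omega)/d
    a2 = -2j*g_bar**2*Omega/d
    a3 =  1j*g_bar*(delta + 1j*kappa + w)*(Omega + 1j*gamma + w)/d
    a4 = -1j*g_bar*(delta + 1j*kappa + w)*(Omega - 1j*gamma + w)/d
    return a1, a2, a3, a4

def output_intensity_spectrum(w, delta, Omega, kappa, kappa_ex, kappa_0,
                              gamma, g_bar, u, v, n_th, n_c):
    a1m, a2m, a3m, a4m = alphas(-w, delta, Omega, kappa, gamma, g_bar)
    a1p, a2p, a3p, a4p = alphas(+w, delta, Omega, kappa, gamma, g_bar)
    wBdagB = 2*kappa_ex*v**2 + 2*kappa_0*n_th        # <xi_B^dag xi_B>
    wBBdag = 2*kappa_ex*u**2 + 2*kappa_0*(n_th + 1)  # <xi_B xi_B^dag>
    wBB    = -2*kappa_ex*u*v                         # <xi_B xi_B>
    b1 = (wBdagB*abs(a1m)**2 + wBBdag*abs(a2m)**2
          + 2*wBB*np.real(np.conj(a1m)*a2m)
          + 2*gamma*n_c*abs(a3m)**2 + 2*gamma*(n_c + 1)*abs(a4m)**2)
    b2 = (wBBdag*abs(a1p)**2 + wBdagB*abs(a2p)**2
          + 2*wBB*np.real(a1p*np.conj(a2p))
          + 2*gamma*(n_c + 1)*abs(a3p)**2 + 2*gamma*n_c*abs(a4p)**2)
    b3 = (wBBdag*a1p*a2m + wBdagB*a2p*a1m + wBB*(a1p*a1m + a2p*a2m)
          + 2*gamma*(n_c + 1)*a3p*a4m + 2*gamma*n_c*a3m*a4p)
    S = 2*kappa_ex*(u**2*b1 + v**2*b2 + 2*u*v*np.real(b3))
    return S   # symmetrize by averaging S(w) and S(-w) if needed
\end{verbatim}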
Furthermore, to investigate the squeezing properties of the output field we calculate the quadrature noise spectrum of the output field defined by \cite{Giovannetti}, \begin{equation}\label{Sphi} \resizebox{0.65\textwidth}{!}{${S_\phi }(\omega ) = \frac{1}{{4\pi }}\int_{ - \infty }^\infty {d\omega '{e^{i(\omega + \omega ')t}}\langle {{X_\phi }(\omega ){X_\phi }(\omega ') + {X_\phi }(\omega '){X_\phi }(\omega )} \rangle } , $}
\end{equation}
with ${X_\phi }(\omega ) = {e^{ - i\phi }}{A_{out}}(\omega ) + {e^{i\phi }}A_{out}^\dagger (\omega )$. When ${S_\phi }(\omega ) <1$ the output field is squeezed. By minimizing ${S_\phi }(\omega ) $ with respect to $\phi$, the optimized quadrature squeezing spectrum is obtained as
\begin{equation}\label{Sopt}
S_{opt}(\omega)=S_{out}^s(\omega)+\mathcal{C}^s_{AA^{\dagger}}(\omega)-2 | \mathcal{C}^s_{AA}(\omega) |^2,
\end{equation}
where the optimum $\phi$ satisfies $ e^{2i\phi _{opt}} = - \frac{\mathcal{C}^s_{AA^{\dagger}}(\omega)}{| \mathcal{C}^s_{AA}(\omega) | } $. In addition, the symmetric functions $\mathcal{C}^s_{AA^{\dagger}}(\omega)$ and $\mathcal{C}^s_{AA}(\omega)$ are given by \numparts \begin{eqnarray}\label{caas}
{\cal C}^{s}_{AA^{\dagger}}(\omega)&=\frac{1}{2} {[\cal C}_1(\omega)-{\cal C}_2(\omega)+{\cal C}_1(-\omega)-{\cal C}_2(-\omega)] \nonumber \\
&+1, \\
{\cal C}^s_{AA}(\omega)&=\frac{1}{2} [{\cal C}_3(\omega)-{\cal C}_4(\omega)+{\cal C}_3(-\omega)-{\cal C}_4(-\omega)], \end{eqnarray} \endnumparts with \numparts \begin{eqnarray}\label{cis}
{{\cal C}_1}(\omega ) &= 2\kappa_{ex} \{ {u^2}{\beta _2}(\omega ) + {v^2}{\beta _1}(\omega ) + 2uv{\mathop{\rm Re}\nolimits} [{\beta _3}(\omega )]\} ,\\ {{\cal C}_2}(\omega ) &= 4 \kappa_{ex} {\mathop{\rm Re}\nolimits}[\alpha_1(\omega)],\\ {{\cal C}_3}(\omega ) &=2\kappa_{ex} \{ {u^2}{\beta _3}(\omega ) + {v^2}{\beta _3}{(\omega )^*} + uv[{\beta _1}(\omega ) + {\beta _2}(\omega )] \} ,\\
{{\cal C}_4}(\omega ) &= 2{\kappa _{ex}}[{u^2}{\alpha _2}( - \omega ) - {v^2}{\alpha _2}{(\omega )^*} - uv{\alpha _1}( - \omega ) \nonumber \\ &+ uv{\alpha _1}{( \omega )^*}]. \end{eqnarray} \endnumparts
The output intensity and the quadrature squeezing spectra are plotted in figures \ref{fig6} and \ref{fig7}, respectively. Increasing the photon-photon interaction strength reduces the NMS in the output intensity spectrum by diminishing the effective OM coupling and, at the same time, increases the squeezing of the output field through the amplified parametric interaction (figure \ref{fig7}(a)). In addition, changing the temperature of the photonic BEC does not affect the splitting of the normal modes considerably. However, as the temperature decreases, the thermal noise is reduced and thus the depletion of the photonic BEC decreases (figure \ref{fig5}), which leads to an attenuation of the output intensity (figure \ref{fig6}(b)) and an enhancement of the squeezing of the output field (figure \ref{fig7}(b)). Figure \ref{fig7}(c) shows that in the absence of the OM coupling of the photonic BEC to the membrane ($g=0$), the microcavity output field exhibits quadrature squeezing even at room temperature. With increasing OM coupling the output-field quadrature squeezing is strengthened. \begin{figure}
\caption{(Color online) The output-field intensity spectrum versus the normalized frequency $\omega /\Omega$ for $ g = 8.4 \times 10^{-7} \omega_t $ and \textbf{(a)} $\zeta=4 \times 10^{-4}$ (blue solid line), $\zeta=10 \times 10^{-4}$ (red dashed line) and $k_B T=150 \hbar \omega_t $; \textbf{(b)} $k_B T=150 \hbar \omega_t$ (blue solid line), $k_B T=50 \hbar \omega_t$ (red dashed line) and $\zeta=4 \times 10^{-4}$. The values of other parameters are the same as those in figure \ref{fig2}. }
\label{fig6}
\end{figure} \begin{figure}
\caption{(Color online) The output-field quadrature squeezing spectrum versus the normalized frequency $\omega / \Omega$ for \textbf{(a)} $\zeta=4 \times 10^{-4}$ (blue solid line), $\zeta=10 \times 10^{-4}$ (red dashed line), $k_B T=150 \hbar \omega_t $ and $ g = 8.4 \times 10^{-7} \omega_t $; \textbf{(b)} $k_B T=150 \hbar \omega_t$ (blue solid line), $k_B T=50 \hbar \omega_t$ (red dashed line), $\zeta=4 \times 10^{-4}$ and $ g = 8.4 \times 10^{-7} \omega_t $; and \textbf{(c)} $ g = 8.4 \times 10^{-7} \omega_t $ (red dashed line), $ g = 0$ (green solid line), $\zeta=10 \times 10^{-4}$ and $k_B T=150 \hbar \omega_t$. The values of other parameters are the same as those in figure \ref{fig2}. }
\label{fig7}
\end{figure}
\section{The photon-phonon Entanglement}\label{secentan}
One of the most important quantum features of the OM systems is that the radiation pressure can lead to the steady-state entanglement between the subsystems. The entanglement measure which is usually used to quantify the entanglement of the bimodal Gaussian state is the logarithmic negativity \cite{Genes2}. Here, we are interested in the entanglement between the excited modes of the photonic BEC and the mechanical modes in the system under consideration and the logarithmic negativity is convenient for our purpose.
The logarithmic negativity is defined as \cite{Vidal} \begin{equation}
{\cal {E_N}} =\max \{ 0,-\ln 2{\eta ^ - }\} , \end{equation}
where \begin{equation}\label{eta} {\eta ^ - } = \frac{1}{{\sqrt 2 }}{[\Sigma{({\bf{V'}})} - \sqrt {\Sigma {{({\bf{V'}})}^2} - 4\det {\bf{V'}}} ]^{1/2}}, \end{equation} is the lowest symplectic eigenvalue of the partial transpose of the associated covariance matrix (\ref{Vp}). Here, $\Sigma (\bf{V}')= \det{\bf {V}_1} + \det{\bf {V}_2} - 2\det{\bf {V}_{\cal {C}}} $, where $\bf {V}_1$, $\bf {V}_2$ and $\bf {V}_{\cal {C}}$ are $2\times 2$ block matrices defined by \begin{equation} {\bf{V'}} = \left( {\begin{array}{*{20}{c}} {{{\bf{V}}_1}}&{{{\bf{V}}_{\cal {C}}}}\\ {{\bf{V}}_{\cal {C}}^\dagger }&{{{\bf{V}}_2}} \end{array}} \right). \end{equation} Therefore, by solving the Lyapunov equation (\ref{lyap}) one obtains the covariance matrix $\bf {V}'$ and hence the logarithmic negativity.
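In practice, since ${\cal {E_N}} > 0$ exactly when ${\eta ^ - } < 1/2$, a short rearrangement of equation (\ref{eta}) (for a physical covariance matrix in the present normalization) yields the equivalent entanglement criterion
\begin{equation}
{\cal {E_N}} > 0 \quad \Longleftrightarrow \quad 4\det {\bf{V'}} < \Sigma ({\bf{V'}}) - \frac{1}{4},
\end{equation}
which provides a convenient numerical check once $\bf {V}'$ has been obtained from equation (\ref{lyap}).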
In figure \ref{fig8} the logarithmic negativity is plotted versus the normalized detuning $(\tilde{\delta} - \Omega)/\Omega$ for different values of the photon-photon interaction strength. The figure shows that the degree of photon-phonon entanglement decreases as the photon-photon scattering becomes stronger, owing to the reduction of the effective OM coupling $\bar{g}$. In addition, for the system to be in an entangled state, the effective detuning should be adjusted close to the instability region. Furthermore, the plot of the logarithmic negativity versus the BEC temperature (figure \ref{fig9}) shows that there is a threshold temperature above which the thermal noise prevents the system from being entangled. As can be seen from figure \ref{fig9}, the threshold temperature depends on the nonlinearity induced by the photon-photon interaction; the weaker the nonlinearity of the photon gas, the higher the threshold temperature.
\begin{figure}
\caption{(Color online) The logarithmic negativity versus the normalized detuning $(\tilde{\delta} - \Omega)/\Omega$ for $\zeta=5 \times 10^{-4}$ (red dashed line), $\zeta=4 \times 10^{-4}$ (blue solid line), $\zeta=3 \times 10^{-4}$ (black dot-dashed line), $k_B T=1.5 \hbar \omega_t$, and $ g = 8.4 \times 10^{-7} \omega_t $. The values of other parameters are the same as those in figure \ref{fig2}. }
\label{fig8}
\end{figure}
\begin{figure}
\caption{(Color online) The logarithmic negativity versus the normalized temperature of the photon condensate for $\zeta=5 \times 10^{-4}$ (red dashed line), $\zeta=4 \times 10^{-4}$ (blue solid line), $\zeta=3 \times 10^{-4}$ (black dot-dashed line), $\tilde{\delta} - \Omega = -0.2\Omega $, and $ g = 8.4 \times 10^{-7} \omega_t $. The values of other parameters are the same as those in figure \ref{fig2}. }
\label{fig9}
\end{figure}
\section{Conclusions} \label{seccon} To summarize, we have considered a system consisting of a photon BEC well below threshold and an oscillating membrane in a microcavity. We have investigated the OM dynamics of the system by applying the Bogoliubov approximation to the photonic BEC. In this approximation, the macroscopic occupation of the ground state of the cavity not only leads to a strong coupling between the collective excitation mode of the photonic BEC and the mechanical motion but also gives rise to a strong parametric interaction in the photon gas.
The study of the radiation pressure back action cooling shows that by decreasing the temperature of the photonic BEC, it is possible to cool down the mechanical element to its ground state. Although a lower effective temperature can be achieved when the photon-photon interaction is stronger, this is no longer the case at low temperatures, where further cooling is prevented by the correlations of the input noise. Furthermore, the NMS can be observed in the displacement spectrum of the membrane as well as in the output intensity spectrum, and it decreases as the nonlinearity induced by the photon-photon interaction increases.
We have examined the field quadrature noise spectrum, which demonstrates that the output field from the microcavity can be quadrature squeezed. Based on our results, the amount of squeezing is enhanced when the photon-photon repulsion and the OM coupling increase and the BEC temperature decreases. We have also shown that a stationary entanglement can be established between the membrane and the collective excited mode of the photonic BEC when the temperature is below a threshold value and the nonlinear photon-photon interaction is weak enough.
\section*{References}
\end{document}
9.2E: Exercises for Infinite Series
[ "article:topic", "calcplot:yes", "license:ccbyncsa", "transcluded:yes" ]
In exercises 1 - 4, use sigma notation to write each expressions as an infinite series.
1) \(1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+⋯\)
\(\displaystyle \sum_{n=1}^∞\frac{1}{n}\)
2) \( 1−1+1−1+⋯\)
3) \( 1−\frac{1}{2}+\frac{1}{3}−\frac{1}{4}+...\)
\(\displaystyle \sum_{n=1}^∞\frac{(−1)^{n−1}}{n}\)
4) \( \sin 1+\sin \frac{1}{2}+\sin \frac{1}{3}+\sin \frac{1}{4}+⋯\)
In exercises 5 - 8, compute the first four partial sums \( S_1,…,S_4\) for the series having \( n^{\text{th}}\) term \( a_n\) starting with \( n=1\) as follows.
5) \( a_n=n\)
\( 1,3,6,10\)
6) \( a_n=1/n\)
7) \( a_n=\sin \frac{nπ}{2}\)
\( 1,1,0,0\)
8) \( a_n=(−1)^n\)
In exercises 9 - 12, compute the general term \( a_n\) of the series with the given partial sum \( S_n\). If the sequence of partial sums converges, find its limit \( S\).
9) \( S_n=1−\frac{1}{n}, \quad n≥2\)
\( a_n=S_n−S_{n−1}=\dfrac{1}{n−1}−\dfrac{1}{n}.\) Since \(\displaystyle S = \lim_{n\to\infty} S_n = \lim_{n\to\infty} \left(1−\frac{1}{n}\right) = 1,\) the series converges to \( S=1.\)
10) \( S_n=\dfrac{n(n+1)}{2}, \quad n≥1\)
11) \( S_n=\sqrt{n},\quad n≥2\)
\( a_n=S_n−S_{n−1}=\sqrt{n}−\sqrt{n−1}=\dfrac{1}{\sqrt{n−1}+\sqrt{n}}.\)
The series diverges because the partial sums are unbounded.
That is, \(\displaystyle \lim_{n\to\infty} S_n = \lim_{n\to\infty} \sqrt{n} = \infty.\)
12) \( S_n=2−\dfrac{n+2}{2^n},\quad n≥1\)
For each series in exercises 13 - 16, use the sequence of partial sums to determine whether the series converges or diverges.
13) \(\displaystyle \sum_{n=1}^∞\frac{n}{n+2}\)
\( S_1=1/3,\)
\( S_2=1/3+2/4>1/3+1/3=2/3,\)
\(S_3=1/3+2/4+3/5>3⋅(1/3)=1.\)
In general \( S_k>k/3,\) so the series diverges.
Note that the \(n^{\text{th}}\) Term Test for Divergence could also be used to prove that this series diverges.
14) \(\displaystyle \sum_{n=1}^∞(1−(−1)^n)\)
15) \(\displaystyle \sum_{n=1}^∞\frac{1}{(n+1)(n+2)}\) (Hint: Use a partial fraction decomposition like that for \(\displaystyle \sum_{n=1}^∞\frac{1}{n(n+1)}.)\)
\( S_1=1/(2\cdot 3)=1/6=2/3−1/2,\)
\( S_2=1/(2\cdot 3)+1/(3\cdot 4)=2/12+1/12=1/4=3/4−1/2,\)
\( S_3=1/(2\cdot 3)+1/(3\cdot 4)+1/(4\cdot 5)=10/60+5/60+3/60=3/10=4/5−1/2,\)
\( S_4=1/(2\cdot 3)+1/(3\cdot 4)+1/(4\cdot 5)+1/(5\cdot 6)=10/60+5/60+3/60+2/60=1/3=5/6−1/2.\)
The pattern is \( S_k=\dfrac{k+1}{k+2}−\dfrac{1}{2}.\)
Then \(\displaystyle \lim_{k\to\infty} S_k = \lim_{k\to\infty} \left( \dfrac{k+1}{k+2}−\dfrac{1}{2} \right) = \dfrac{1}{2},\) so the series converges to \( 1/2.\)
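Alternatively, following the hint, the partial fraction decomposition \(\displaystyle \frac{1}{(n+1)(n+2)}=\frac{1}{n+1}−\frac{1}{n+2}\) makes the sum telescope, giving \(\displaystyle S_k=\frac{1}{2}−\frac{1}{k+2},\) which again converges to \( 1/2.\)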
16) \(\displaystyle \sum_{n=1}^∞\frac{1}{2n+1}\) (Hint: Follow the reasoning for \(\displaystyle \sum_{n=1}^∞\frac{1}{n}.)\)
Suppose that \(\displaystyle \sum_{n=1}^∞a_n=1\), that \(\displaystyle \sum_{n=1}^∞b_n=−1\), that \( a_1=2\), and \( b_1=−3\). Use this information to find the sum of the indicated series in exercises 17 - 20.
17) \(\displaystyle \sum_{n=1}^∞(a_n+b_n)\)
\( \displaystyle \sum_{n=1}^∞(a_n+b_n) \quad = \quad \sum_{n=1}^∞ a_n + \sum_{n=1}^∞ b_n \quad = \quad 1 + (-1) \quad = \quad 0\)
18) \(\displaystyle \sum_{n=1}^∞(a_n−2b_n)\)
19) \(\displaystyle \sum_{n=2}^∞(a_n−b_n)\)
\(\displaystyle \sum_{n=2}^∞(a_n−b_n) \quad = \quad \sum_{n=2}^∞ a_n - \sum_{n=2}^∞ b_n \quad = \quad \left(\sum_{n=1}^∞ a_n - a_1\right) - \left(\sum_{n=1}^∞ b_n -b_1\right) \quad = \quad (1 - 2) - (-1 - (-3)) = -1 - 2 \quad = \quad -3\)
20) \(\displaystyle \sum_{n=1}^∞(3a_{n+1}−4b_{n+1})\)
In exercises 21 - 26, state whether the given series converges or diverges and explain why.
21) \(\displaystyle \sum_{n=1}^∞\frac{1}{n+1000}\) (Hint: Rewrite using a change of index.)
The series diverges: after a change of index it becomes \(\displaystyle \sum_{n=1001}^∞\frac{1}{n}\), a tail of the (divergent) harmonic series.
22) \(\displaystyle \sum_{n=1}^∞\frac{1}{n+10^{80}}\) (Hint: Rewrite using a change of index.)
23) \( 1+\frac{1}{10}+\frac{1}{100}+\frac{1}{1000}+⋯\)
This is a convergent geometric series, since \( r=\frac{1}{10}<1\)
24) \( 1+\frac{e}{π}+\frac{e^2}{π^2}+\frac{e^3}{π^3}+⋯\)
25) \( 1+\frac{π}{e^2}+\frac{π^2}{e^4}+\frac{π^3}{e^6}+\frac{π^4}{e^8}+⋯\)
This is a convergent geometric series, since \( r=π/e^2<1\)
26) \( 1−\sqrt{\frac{π}{3}}+\sqrt{\frac{π^2}{9}}−\sqrt{\frac{π^3}{27}}+⋯\)
For each \( a_n\) in exercises 27 - 30, write its sum as a geometric series of the form \(\displaystyle \sum_{n=1}^∞ar^n\). State whether the series converges and if it does, find the exact value of its sum.
27) \( a_1=−1\) and \( \dfrac{a_n}{a_{n+1}}=−5\) for \( n≥1.\)
\(\displaystyle \sum_{n=1}^∞5⋅(−1/5)^n\), converges to \( −5/6\)
28) \( a_1=2\) and \( \dfrac{a_n}{a_{n+1}}=1/2\) for \( n≥1.\)
29) \( a_1=10\) and \( \dfrac{a_n}{a_{n+1}}=10\) for \( n≥1\).
\(\displaystyle \sum_{n=1}^∞100⋅(1/10)^n,\) converges to \(\frac{100}{9}\)
30) \( a_1=\frac{1}{10}\) and \( a_n/a_{n+1}=−10\) for \( n≥1\).
In exercises 31 - 34, use the identity \(\displaystyle \frac{1}{1−y}=\sum_{n=0}^∞y^n\) (which is true for \(|y| < 1\)) to express each function as a geometric series in the indicated term.
31) \( \dfrac{x}{1+x}\) in \( x\)
\(\displaystyle x\sum_{n=0}^∞(−x)^n=\sum_{n=1}^∞(−1)^{n−1}x^n\)
32) \( \dfrac{\sqrt{x}}{1−x^{3/2}}\) in \( \sqrt{x}\)
33) \( \dfrac{1}{1+\sin^2x}\) in \(\sin x\)
\(\displaystyle \sum_{n=0}^∞(−1)^n\sin^{2n}(x)\)
34) \( \sec^2 x\) in \(\sin x\)
In exercises 35 - 38, evaluate the telescoping series or state whether the series diverges.
35) \(\displaystyle \sum_{n=1}^∞2^{1/n}−2^{1/(n+1)}\)
\( S_k=2−2^{1/(k+1)}→1\) as \( k→∞.\)
36) \(\displaystyle \sum_{n=1}^∞\frac{1}{n^{13}}−\frac{1}{(n+1)^{13}}\)
37) \(\displaystyle \sum_{n=1}^∞(\sqrt{n}−\sqrt{n+1})\)
\( S_k=1−\sqrt{k+1}\) diverges
38) \(\displaystyle \sum_{n=1}^∞(\sin n−\sin(n+1))\)
Express each series in exercises 39 - 42 as a telescoping sum and evaluate its \(n^{\text{th}}\) partial sum.
39) \(\displaystyle \sum_{n=1}^∞\ln\left(\frac{n}{n+1}\right)\)
\(\displaystyle \sum_{n=1}^∞[\ln n−\ln(n+1)],\)
\(S_k=−\ln(k+1)\)
40) \(\displaystyle \sum_{n=1}^∞\frac{2n+1}{(n^2+n)^2}\) (Hint: Factor denominator and use partial fractions.)
41) \(\displaystyle \sum_{n=2}^∞\frac{\ln\left(1+\frac{1}{n}\right)}{(\ln n)\ln(n+1)}\)
\( a_n=\frac{1}{\ln n}−\frac{1}{\ln(n+1)}\) and \( S_k=\frac{1}{\ln(2)}−\frac{1}{\ln(k+1)}→\frac{1}{\ln(2)}\)
42) \(\displaystyle \sum_{n=1}^∞\frac{(n+2)}{n(n+1)2^{n+1}}\) (Hint: Look at \( 1/(n2^n)\).
A general telescoping series is one in which all but the first few terms cancel out after summing a given number of successive terms.
43) Let \( a_n=f(n)−2f(n+1)+f(n+2),\) in which \( f(n)→0\) as \( n→∞.\) Find \(\displaystyle \sum_{n=1}^∞a_n\).
\(\displaystyle \sum_{n=1}^∞a_n=f(1)−f(2)\)
44) \( a_n=f(n)−f(n+1)−f(n+2)+f(n+3),\) in which \( f(n)→0\) as \( n→∞\). Find \(\displaystyle \sum_{n=1}^∞a_n\).
45) Suppose that \( a_n=c_0f(n)+c_1f(n+1)+c_2f(n+2)+c_3f(n+3)+c_4f(n+4),\) where \( f(n)→0\) as \( n→∞\). Find a condition on the coefficients \( c_0,…,c_4\) that make this a general telescoping series.
\( c_0+c_1+c_2+c_3+c_4=0\)
46) Evaluate \(\displaystyle \sum_{n=1}^∞\frac{1}{n(n+1)(n+2)}\) (Hint: \(\displaystyle \frac{1}{n(n+1)(n+2)}=\frac{1}{2n}−\frac{1}{n+1}+\frac{1}{2(n+2)}\))
47) Evaluate \(\displaystyle \sum_{n=2}^∞\frac{2}{n^3−n}.\)
\(\displaystyle \frac{2}{n^3−n}=\frac{1}{n−1}−\frac{2}{n}+\frac{1}{n+1},\)
\(S_n=(1−1+1/3)+(1/2−2/3+1/4) +(1/3−2/4+1/5)+(1/4−2/5+1/6)+⋯=1/2\)
48) Find a formula for \(\displaystyle \sum_{n=1}^∞\left(\frac{1}{n(n+N)}\right)\) where \( N\) is a positive integer.
49) [T] Define a sequence \(\displaystyle t_k=\sum_{n=1}^{k−1}(1/n)−\ln k\). Use the graph of \( 1/x\) to verify that \( t_k\) is increasing. Plot \( t_k\) for \( k=1…100\) and state whether it appears that the sequence converges.
\( t_k\) converges to \( 0.57721…\) (the Euler–Mascheroni constant). Here \( t_k\) is the total area of the pieces of the rectangles of height \( 1/n\) over the intervals \( [n,n+1]\), \( n=1,…,k−1\), that lie above the graph of \( 1/x\), so the sequence is increasing and bounded, hence convergent.
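A brief numerical check (an illustrative Python sketch, not part of the original exercise, using the summand \( 1/n\) as above):

```python
import math

# t_k = (1 + 1/2 + ... + 1/(k-1)) - ln(k) for k = 2, ..., 100.
# The values increase and level off near 0.5772 (the Euler-Mascheroni constant).
harmonic = 0.0
for k in range(2, 101):
    harmonic += 1.0 / (k - 1)      # harmonic now equals 1 + 1/2 + ... + 1/(k-1)
    t_k = harmonic - math.log(k)
    if k in (2, 10, 50, 100):
        print(k, round(t_k, 5))
```

The printed values (approximately \( 0.3069,\; 0.5264,\; 0.5672,\; 0.5722\)) increase toward \( 0.57721…\), consistent with the answer above.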
50) [T] Suppose that \( N\) equal uniform rectangular blocks are stacked one on top of the other, allowing for some overhang. Archimedes' law of the lever implies that the stack of \( N\) blocks is stable as long as the center of mass of the top \( (N−1)\) blocks lies at the edge of the bottom block. Let \( x\) denote the position of the edge of the bottom block, and think of its position as relative to the center of the next-to-bottom block. This implies that \( (N−1)x=\left(\frac{1}{2}−x\right)\) or \( x=1/(2N)\). Use this expression to compute the maximum overhang (the position of the edge of the top block over the edge of the bottom block.) See the following figure.
Each of the following infinite series converges to the given multiple of \( π\) or \( 1/π\).
In each case, find the minimum value of \( N\) such that the \( N\)th partial sum of the series accurately approximates the left-hand side to the given number of decimal places, and give the desired approximate value. To \( 15\) decimal places, \( π=3.141592653589793....\)
51) [T] \(\displaystyle π=−3+\sum_{n=1}^∞\frac{n2^nn!^2}{(2n)!},\) error \( <0.0001\)
\(N=22,\)
\(S_N=6.1415\)
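One illustrative way to find such an \( N\) numerically (a Python sketch, not part of the original text; it interprets the accuracy requirement as agreement with \( π\) in the first four decimal places):

```python
import math

# Accumulate S_N = sum_{n=1}^{N} n * 2^n * (n!)^2 / (2n)!  and stop when
# -3 + S_N matches pi in its first four decimal places.
def agrees_to(x, y, places=4):
    return math.floor(x * 10**places) == math.floor(y * 10**places)

S, N = 0.0, 0
while not agrees_to(-3 + S, math.pi):
    N += 1
    S += N * 2**N * math.factorial(N) ** 2 / math.factorial(2 * N)
print(N, S)   # expect N = 22 with S_N close to 6.1415
```

A plain tolerance test \( |(−3+S_N)−π|<10^{−4}\) gives nearly the same \( N\); the same loop, with the summand changed, applies to exercises 52–54.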
52) [T] \(\displaystyle \frac{π}{2}=\sum_{k=0}^∞\frac{k!}{(2k+1)!!}=\sum_{k=0}^∞\frac{2^kk!^2}{(2k+1)!},\) error \( <10^{−4}\)
53) [T] \(\displaystyle \frac{9801}{2π}=\frac{4}{9801}\sum_{k=0}^∞\frac{(4k)!(1103+26390k)}{(k!)^4396^{4k}},\) error \( <10^{−12}\)
\( N=3,\)
\(S_N=1.559877597243667...\)
54) [T] \(\displaystyle \frac{1}{12π}=\sum_{k=0}^∞\frac{(−1)^k(6k)!(13591409+545140134k)}{(3k)!(k!)^3640320^{3k+3/2}}\), error \( <10^{−15}\)
55) [T] A fair coin is one that has probability \( 1/2\) of coming up heads when flipped.
a. What is the probability that a fair coin will come up tails \( n\) times in a row?
b. Find the probability that a coin comes up heads for the first time after an even number of coin flips.
a. The probability of any given ordered sequence of outcomes for \( n\) coin flips is \( 1/2^n\).
b. The probability of coming up heads for the first time on the \( n\) th flip is the probability of the sequence \( TT…TH\) which is \( 1/2^n\). The probability of coming up heads for the first time on an even flip is \(\displaystyle \sum_{n=1}^∞1/2^{2n}\) or \( 1/3\).
56) [T] Find the probability that a fair coin is flipped a multiple of three times before coming up heads.
57) [T] Find the probability that a fair coin will come up heads for the second time after an even number of flips.
\(5/9\)
58) [T] Find a series that expresses the probability that a fair coin will come up heads for the second time on a multiple of three flips.
59) [T] The expected number of times that a fair coin will come up heads is defined as the sum over \( n=1,2,…\) of \( n\) times the probability that the coin will come up heads exactly \( n\) times in a row, or \( \dfrac{n}{2^{n+1}}\). Compute the expected number of consecutive times that a fair coin will come up heads.
\(\displaystyle E=\sum_{n=1}^∞\frac{n}{2^{n+1}}=1,\) as can be shown using summation by parts
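Alternatively, using the identity \(\displaystyle \sum_{n=1}^∞ nx^n=\frac{x}{(1−x)^2}\) for \( |x|<1\) with \( x=1/2\), we get \(\displaystyle E=\sum_{n=1}^∞\frac{n}{2^{n+1}}=\frac{1}{2}\sum_{n=1}^∞ n\left(\frac{1}{2}\right)^n=\frac{1}{2}\cdot\frac{1/2}{(1−1/2)^2}=1.\)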
60) [T] A person deposits \( $10\) at the beginning of each quarter into a bank account that earns \( 4%\) annual interest compounded quarterly (four times a year).
a. Show that the interest accumulated after \( n\) quarters is \( $10(\frac{1.01^{n+1}−1}{0.01}−n).\)
b. Find the first eight terms of the sequence.
c. How much interest has accumulated after \( 2\) years?
61) [T] Suppose that the amount of a drug in a patient's system diminishes by a multiplicative factor \( r<1\) each hour. Suppose that a new dose is administered every \( N\) hours. Find an expression that gives the amount \( A(n)\) in the patient's system after \( n\) hours for each \( n\) in terms of the dosage \( d\) and the ratio \( r\). (Hint: Write \( n=mN+k\), where \( 0≤k<N\), and sum over values from the different doses administered.)
The part of the first dose after \( n\) hours is \( dr^n\), the part of the second dose is \( dr^{n−N}\), and, in general, the part remaining of the \( m^{\text{th}}\) dose is \( dr^{n−mN}\), so \(\displaystyle A(n)=\sum_{l=0}^mdr^{n−lN}=\sum_{l=0}^mdr^{k+(m−l)N}=\sum_{q=0}^mdr^{k+qN}=dr^k\sum_{q=0}^mr^{Nq}=dr^k\frac{1−r^{(m+1)N}}{1−r^N},\;\text{where}\,n=k+mN.\)
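A quick sanity check of this closed form (an illustrative Python sketch with made-up values \( d=1\) mg/kg, \( r=0.9\), \( N=6\) hours):

```python
# Compare the direct sum of residual doses with the closed-form expression above.
d, r, N = 1.0, 0.9, 6

def amount_direct(n):
    # doses are given at hours 0, N, 2N, ..., and each decays by a factor r per hour
    return sum(d * r**(n - l * N) for l in range(n // N + 1))

def amount_formula(n):
    m, k = divmod(n, N)
    return d * r**k * (1 - r**((m + 1) * N)) / (1 - r**N)

for n in (0, 5, 6, 13, 40):
    print(n, amount_direct(n), amount_formula(n))  # the two values agree
```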
62) [T] A certain drug is effective for an average patient only if there is at least \( 1\) mg per kg in the patient's system, while it is safe only if there is at most \( 2\) mg per kg in an average patient's system. Suppose that the amount in a patient's system diminishes by a multiplicative factor of \( 0.9\) each hour after a dose is administered. Find the maximum interval \( N\) of hours between doses, and corresponding dose range \( d\) (in mg/kg) for this \( N\) that will enable use of the drug to be both safe and effective in the long term.
63) Suppose that \( a_n≥0\) is a sequence of numbers. Explain why the sequence of partial sums of \( a_n\) is increasing.
\( S_{N+1}=a_{N+1}+S_N≥S_N\)
64) [T] Suppose that \( a_n\) is a sequence of positive numbers and the sequence \( S_n\) of partial sums of \( a_n\) is bounded above. Explain why \(\displaystyle \sum_{n=1}^∞a_n\) converges. Does the conclusion remain true if we remove the hypothesis \( a_n≥0\)?
65) [T] Suppose that \( a_1=S_1=1\) and that, for given numbers \( S>1\) and \( 0<k<1\), one defines \( a_{n+1}=k(S−S_n)\) and \( S_{n+1}=a_{n+1}+S_n\). Does \( S_n\) converge? If so, to what? (Hint: First argue that \( S_n<S\) for all \( n\) and \( S_n\) is increasing.)
Since \( S>1, a_2>0,\) and since \( k<1, S_2=1+a_2<1+(S−1)=S\). If \( S_n>S\) for some \( n\), then there is a smallest \( n\). For this \( n, S>S_{n−1}\), so \( S_n=S_{n−1}+k(S−S_{n−1})=kS+(1−k)S_{n−1}<S\), a contradiction. Thus \( S_n<S\) and \( a_{n+1}>0\) for all \( n\), so \( S_n\) is increasing and bounded by \( S\). Let \(\displaystyle S_∗=\lim S_n\). If \( S_∗<S\), then \( δ=k(S−S_∗)>0\), but we can find \( n\) such that \( S_∗−S_n<δ/2\), which implies that \( S_{n+1}=S_n+k(S−S_n) >S_∗+δ/2\), contradicting the fact that \( S_n\) increases to \( S_∗\). Thus \( S_n→S.\)
66) [T] A version of von Bertalanffy growth can be used to estimate the age of an individual in a homogeneous species from its length if the annual increase in year \( n+1\) satisfies \( a_{n+1}=k(S−S_n)\), with \( S_n\) as the length at year \( n, S\) as a limiting length, and \( k\) as a relative growth constant. If \( S_1=3, S=9,\) and \( k=1/2,\) numerically estimate the smallest value of n such that \( S_n≥8\). Note that \( S_{n+1}=S_n+a_{n+1}.\) Find the corresponding \( n\) when \( k=1/4.\)
67) [T] Suppose that \(\displaystyle \sum_{n=1}^∞a_n\) is a convergent series of positive terms. Explain why \(\displaystyle \lim_{N→∞}\sum_{n=N+1}^∞a_n=0.\)
Let \(\displaystyle S_k=\sum_{n=1}^ka_n\) and \( S_k→L\). Then \( S_k\) eventually becomes arbitrarily close to \( L\), which means that \(\displaystyle L−S_N=\sum_{n=N+1}^∞a_n\) becomes arbitrarily small as \( N→∞.\)
68) [T] Find the length of the dashed zig-zag path in the following figure.
69) [T] Find the total length of the dashed path in the following figure.
\(\displaystyle L=\left(1+\frac{1}{2}\right)\sum_{n=1}^∞\frac{1}{2^n}=\frac{3}{2}\).
70) [T] The Sierpinski triangle is obtained from a triangle by deleting the middle fourth as indicated in the first step, by deleting the middle fourths of the remaining three congruent triangles in the second step, and in general deleting the middle fourths of the remaining triangles in each successive step. Assuming that the original triangle is shown in the figure, find the areas of the remaining parts of the original triangle after \( N\) steps and find the total length of all of the boundary triangles after \( N\) steps.
71) [T] The Sierpinski gasket is obtained by dividing the unit square into nine equal sub-squares, removing the middle square, then doing the same at each stage to the remaining sub-squares. The figure shows the remaining set after four iterations. Compute the total area removed after \( N\) stages, and compute the length the total perimeter of the remaining set after \( N\) stages.
At stage one a square of area \( 1/9\) is removed, at stage \( 2\) one removes \( 8\) squares of area \( 1/9^2\), at stage three one removes \( 8^2\) squares of area \( 1/9^3\), and so on. The total removed area after \( N\) stages is \(\displaystyle \sum_{n=0}^{N−1}\frac{8^n}{9^{n+1}}=\frac{1}{9}\cdot\frac{1−(8/9)^N}{1−8/9}=1−(8/9)^N→1\) as \(N→∞.\) The total perimeter of the remaining set after \( N\) stages is \(\displaystyle 4+4\sum_{n=0}^{N−1}\frac{8^n}{3^{n+1}},\) which tends to \( ∞\) as \( N→∞.\)
Gilbert Strang (MIT) and Edwin "Jed" Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
\begin{document}
\title{Well-Localized Operators on Matrix Weighted $L^2$ Spaces} \date{\today}
\author[K. Bickel]{Kelly Bickel$^\dagger$} \address{Kelly Bickel, Department of Mathematics\\ Bucknell University\\ 701 Moore Ave\\ Lewisburg, PA 17837} \email{[email protected]} \thanks{$\dagger$ Research supported in part by National Science Foundation DMS grants \# 0955432 and \#1448846.}
\author[B. D. Wick]{Brett D. Wick$^\ddagger$} \address{Brett D. Wick, School of Mathematics\\ Georgia Institute of Technology\\ 686 Cherry Street\\ Atlanta, GA USA 30332-0160} \email{[email protected]} \urladdr{www.math.gatech.edu/~wick} \thanks{$\ddagger$ Research supported in part by National Science Foundation DMS grant \# 0955432, by the Simons Foundation and by the Mathematisches Forschungsinstitut Oberwolfach.}
\maketitle
\begin{abstract} Nazarov-Treil-Volberg recently proved an elegant two-weight T1 theorem for ``almost diagonal'' operators that played a key role in the proof of the $A_2$ conjecture for dyadic shifts and related operators. In this paper, we obtain a generalization of their T1 theorem to the setting of matrix weights. Our theorem does differ slightly from the scalar results, a fact attributable almost completely to differences between the scalar and matrix Carleson Embedding Theorems. The main tools include a reduction to the study of well-localized operators, a new system of Haar functions adapted to matrix weights, and a matrix Carleson Embedding Theorem. \end{abstract}
\section{Introduction}
In this paper, the dimension $d$ is fixed and $L^2$ will denote $L^2(\mathbb{R}, \mathbb{C}^d)$, namely the set of vector-valued functions satisfying $$ \left\Vert f\right\Vert_{L^2}^2\equiv\int_{\mathbb{R}} \Vert f(x)\Vert^2_{\mathbb{C}^d}\,dx<\infty. $$ We will be primarily interested in \emph{matrix weights}, $d \times d$ positive definite matrix-valued functions with locally integrable entries. Given such a weight $W,$ let $L^2(W)$ be the set of functions satisfying $$ \left\Vert f\right\Vert_{L^2(W)}^2\equiv\int_{\mathbb{R}} \left \Vert W^{\frac{1}{2}}(x)f(x)\right \Vert^2_{\mathbb{C}^d}\,dx=\int_{\mathbb{R}} \left\langle W(x)f(x), f(x)\right\rangle_{\mathbb{C}^d}\,dx<\infty. $$ Given matrix weights $V$ and $W$, a natural question is:~when does a bounded operator $T$ mapping $L^2$ to itself extend to a bounded operator mapping $L^2(W)$ to $L^2(V)$ and what is the norm of $T$ as a map from $L^2(W)$ to $L^2(V)$?
If we consider the special one-dimensional case when $V=W=w$, this question has a classical answer. Indeed, a Calder\'on-Zygmund operator $T$ extends to a bounded operator on $L^2(w)$ if and only if $w$ is an $A_2$ Muckenhoupt weight, namely: \[
[ w ] _{A_2} \equiv \sup_{I} \left \langle w \rangle_I
\langle w^{-1}\right \rangle_I < \infty, \]
where the supremum is taken over all intervals $I$ and $\left \langle w \right \rangle_I \equiv \frac{1}{|I|} \int_I w(x) dx.$ In contrast, the question of the operator norm of $T$ on $L^2(w)$, and its sharp dependence on $[w]_{A_2},$ called the $A_2$ conjecture, remained open for decades. Lacey-Petermichl-Reguera made substantial progress on this question in \cite{lpr} by establishing the sharp bound for dyadic shifts and as a corollary, obtained new proofs of the bound for simple Calder\'on-Zygmund operators including the Hilbert transform, Riesz transforms, and Beurling transform. Their proof rested on an elegant two-weight T1 theorem due to Nazarov-Treil-Volberg \cite{ntv08} coupled with technical testing estimates.
Using a refined method of decomposing Calder\'on-Zygmund operators as sums of dyadic shifts and an improvement of the Lacey-Petermichl-Reguera estimates, Hyt\"onen resolved the $A_2$ conjecture in 2012 in \cite{h12} and showed \[
\| T \|_{L^2(w) \rightarrow L^2(w)} \lesssim [w]_{A_2} \] for all Calder\'on-Zygmund operators $T.$
We are interested in the analogue of the $A_2$ conjecture in the setting of matrix weights. However, due to complications arising in the matrix case, the current literature is less developed. Still, the boundedness of Calder\'on-Zygmund operators is known. In 1997, Treil-Volberg showed in \cite{vt97} that the Hilbert transform $H$ extends to a bounded operator on $L^2(W)$ if and only if $W$ is an $A_2$ matrix weight, i.e.~if and only if \[
\big [ W \big ] _{A_2} \equiv \sup_{I} \left \| \langle W \rangle_I^{\frac{1}{2}}
\langle W^{-1} \rangle_I^{\frac{1}{2}} \right \|^2 < \infty, \]
where $\| \cdot \|$ denotes the norm of the matrix acting on $\mathbb{C}^d$. Soon after, Nazarov-Treil \cite{nt97} extended this result to general (classical) Calder\'on-Zygmund operators and in the interim, the study of operators on matrix-weighted spaces has received a great deal of attention. See \cite{cg01, gold03, IKP, lt07, nptv02, vol97}. However, the question of the sharp dependence on $[W]_{A_2}$ is still open and this seems to be a very difficult problem. In \cite{bpw14}, the two authors with S. Petermichl showed that \[
\| H \|_{L^2(W) \rightarrow L^2(W)} \lesssim [W]_{A_2}^{\frac{3}{2}} \textnormal{log}\, [W]_{A_2}, \] for all $A_2$ weights $W$, but this bound is unlikely to be sharp.
Rather, a proof yielding a sharp estimate would likely follow, as in the scalar case, from the combination of (1) a sharp T1 theorem and (2) appropriate testing estimates. The goal of this paper is to establish the T1 theorem and specifically, obtain matrix generalizations of the two-weight T1 theorems of Nazarov-Treil-Volberg from \cite{ntv08} about ``almost diagonal'' operators including Haar multipliers and dyadic shifts. These generalizations are interesting in their own right because they give two-weight results for all pairs of matrix $A_2$ weights, which is a new development. It seems possible that, as in the scalar case, these T1 theorems will prove a robust tool for studying the dependence of operator norms on the $A_2$ characteristic. Before discussing the main results in more detail, we require several definitions.
\subsection{The Main Results}
Throughout the paper, $\mathcal{D}$ denotes the standard dyadic grid on $\mathbb{R}$ and $A \lesssim B$ means $A \le C(d) B$, where $C(d)$ is a (absolute) dimensional constant. For $I \in \mathcal{D}$, let $h_I$ be the standard Haar function defined by \[
h_I \equiv |I|^{-\frac{1}{2}} \left( \textbf{1}_{I_+} - \textbf{1}_{I_-} \right),\] where $I_+$ is the right half of $I$ and $I_-$ is the left half of $I$. To the dyadic grid $\mathcal{D}$, associate the unique binary tree where each $I$ is connected to its two children $I_-$ and $I_+.$ Given that dyadic tree, let $d_{\text{tree}}(I,J)$ denote the ``tree distance'' between $I$ and $J$, namely, the number of edges on the shortest path connecting $I$ and $J$. The ``almost diagonal'' operators of interest possess a band structure defined as follows:
\begin{definition} A bounded operator $T$ on $L^2$ is called a \emph{band operator with radius $r$} if $T$ satisfies \[ \left \langle T h_I e, h_J v \right \rangle_{L^2} = 0 \]
for all intervals $I, J \in \mathcal{D}$ with $d_{\text{tree}}(I,J) >r$ and vectors $e, v \in \mathbb{C}^d.$ \end{definition} Given a matrix weight $W$ and interval $I$ in $\mathcal{D}$, define the matrices:
\[ W(I) \equiv \int_I W(x) \ dx \ \text{ and } \ \left \langle W \right \rangle_I \equiv \frac{1}{|I|} \int_I W(x) \ dx=\frac{W(I)}{\left\vert I\right\vert}. \] In this paper, we will only consider weights $W$ with the property of being an $A_2$ weight, and without loss of generality, we can focus on the question of when a band operator $T$ extends to a bounded operator from $L^2(W^{-1})$ to $L^2(V)$ with norm $C$ for matrix weights $V, W.$ It is not hard to show that this occurs precisely when
\[ \left \| M_{V^{\frac{1}{2}}} T M_{W^{\frac{1}{2}}} \right \|_{L^2 \rightarrow L^2} = C. \] Indeed, writing $f = W^{\frac{1}{2}} g$ with $g \in L^2$, one has $\| f \|_{L^2(W^{-1})} = \| g \|_{L^2}$ and $\| T f \|_{L^2(V)} = \| V^{\frac{1}{2}} T W^{\frac{1}{2}} g \|_{L^2}$, so the two operator norms coincide. The main results of this paper are then the following theorems.
\begin{theorem} \label{thm:Band} Let $W, V$ be matrix $A_2$ weights and let $T$ be a band operator with radius $r$. Then $M_{V^{\frac{1}{2}}} T M_{W^{\frac{1}{2}}}$ extends to a bounded operator on $L^2$ if and only if \begin{eqnarray} \label{eqn:T}
\left \| T W \textbf{1}_I e \right \|_{L^2(V)} &\le A_1 \left \langle W(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \\ \label{eqn:dual}
\left \| T^* V \textbf{1}_I e \right \|_{L^2(W)} &\le A_2 \left \langle V(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \end{eqnarray}
for all intervals $I \in \mathcal{D}$ and vectors $e \in \mathbb{C}^d$. Furthermore,
\[ \left \| M_{V^{\frac{1}{2}}} T M_{W^{\frac{1}{2}}} \right \|_{L^2 \rightarrow L^2} \le 2^{2r} C(d) \left( A_1B(W) + A_2 B(V) \right), \]
where $C(d)$ is a dimensional constant and $B(W)$ and $B(V)$ are constants depending on $W$ and $V$ from an application of the matrix Carleson Embedding Theorem.
\end{theorem}
The definitions of the constants $B(W)$ and $B(V)$ are given in Theorem \ref{thm:CET2}, the matrix Carleson Embedding Theorem used in this paper, and discussed further in Remark \ref{rem:CET}. As in \cite{ntv08}, the conditions of Theorem \ref{thm:Band} can be relaxed slightly to yield the following result:
\begin{theorem} \label{thm:Band2} Let $W, V$ be matrix $A_2$ weights and let $T$ be a band operator with radius $r$. Then $M_{V^{\frac{1}{2}}} T M_{W^{\frac{1}{2}}}$ extends to a bounded operator on $L^2$ if and only if the following two conditions hold: \begin{itemize} \item[$(i)$] For all intervals $I \in \mathcal{D}$ and vectors $e \in \mathbb{C}^d$, \begin{eqnarray*}
\left \| \textbf{1}_I T W \textbf{1}_I e \right \|_{L^2(V)} &\le A_1 \left \langle W(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \\
\left \| \textbf{1}_IT^* V \textbf{1}_I e \right \|_{L^2(W)} &\le A_2 \left \langle V(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}}. \end{eqnarray*}
\item[$(ii)$] For all intervals $I, J \in \mathcal{D}$ satisfying $2^{-r} |I| \le |J| \le 2^r |I|$ and vectors $e, \nu \in \mathbb{C}^d$,
\[ \left| \left \langle T_W \textbf{1}_I e, \textbf{1}_J \nu \right \rangle_{L^2(V)} \right| \le
A_3 \left \langle W(I)e,e \right \rangle^{\frac{1}{2}}_{\mathbb{C}^d}
\left \langle V(J) \nu,\nu \right \rangle^{\frac{1}{2}}_{\mathbb{C}^d}.
\]
\end{itemize}
Furthermore,
\[ \left \| M_{V^{\frac{1}{2}}} T M_{W^{\frac{1}{2}}} \right \|_{L^2 \rightarrow L^2} \le 2^{2r} C(d) \left( A_1B(W) + A_2 B(V) +A_3\right), \]
where $C(d)$ is a dimensional constant and $B(W)$ and $B(V)$ are constants depending on $W$ and $V$ from an application of the matrix Carleson Embedding Theorem.
\end{theorem}
\begin{remark} An observant reader, and expert in the area, will notice that Theorems \ref{thm:Band} and \ref{thm:Band2} are strictly weaker than the results of Nazarov-Treil-Volberg \cite{ntv08} in two respects. First, our results are only proved for pairs $V,W$ of matrix $A_2$ weights and second, they introduce additional constants $B(V)$ and $B(W)$ in the norm estimates, which do not come from the testing conditions.
However, it is worth pointing out that both of these shortcomings are the direct result of differences between the scalar Carleson Embedding Theorem and the current matrix Carleson Embedding Theorem. In the scalar case, the Carleson Embedding Theorem holds for \textit{all} weights and the embedding constant is an absolute multiple of the constant obtained from the testing condition. In the matrix case, the current Carleson Embedding Theorem, Theorem \ref{thm:CET2}, is only known for matrix $A_2$ weights and the embedding constant is the testing constant times an additional constant $B(W)$, depending upon the weight $W$.
A careful reading of our paper reveals that, if one can improve the underlying matrix Carleson Embedding Theorem in these two respects, then our arguments will give T1 theorems with sharp constants that hold for all pairs of matrix weights. It then seems likely that these results could be used as a tool to approach the matrix $A_2$ conjecture, at least in the setting of dyadic shifts and related operators.
Indeed, the authors recently learned that Amalia Culiuc and Sergei Treil have obtained an improved Carleson Embedding Theorem for arbitrary matrix weights in the more general non-homogeneous setting. The two authors with Culiuc and Treil are currently investigating the behavior of well-localized operators in this more general setting.
It is also worth observing that related and interesting results are obtained by R. Kerr in \cite{rk11, rk14}. He shows that band operators on $L^2$ will be bounded from $L^2(W)$ to $L^2(V)$ if the matrix weights $V$ and $W$ are both in the matrix analogue of $A_{\infty}$ (denoted $A_{2,0}$) and satisfy a joint $A_2$ condition. \end{remark}
\begin{remark} \label{rem:L1loc} If the entries of $W,V$ are not locally square-integrable, i.e.~ not in $L^2_{loc}(\mathbb{R})$, one needs to be a little careful about interpreting the expressions on the left-hand sides of \eqref{eqn:T} and \eqref{eqn:dual} and the analogous expressions in Theorem \ref{thm:Band2}. This technicality can be handled in a way similar to that found in \cite{ntv08}. Indeed, observe that if $W, W'$ are matrix weights satisfying $W' \le W$, then
\[ \left \| M_{W'^{\frac{1}{2}}} T^* M_{V^{\frac{1}{2}}} \right \|_{L^2 \rightarrow L^2} \le \left \| M_{W^{\frac{1}{2}} }T^* M_{V^{\frac{1}{2}}} \right \|_{L^2 \rightarrow L^2} \] and taking adjoints gives
\[ \left \| M_{V^{\frac{1}{2}}} T M_{W'^{\frac{1}{2}}} \right \|_{L^2 \rightarrow L^2} \le \left \| M_{V^{\frac{1}{2}}} T M_{W^{\frac{1}{2}}} \right \|_{L^2 \rightarrow L^2}. \] Now, to interpret the first necessary condition appropriately, let $\{W_n\}$ be a sequence of matrix weights with entries in $L^2_{loc}(\mathbb{R})$ increasing to $W$. Then, the boundedness of $M_{V^{\frac{1}{2}}} T M_{W^{\frac{1}{2}}}$ implies that
\[ \left \| T W_n \textbf{1}_I e \right \|_{L^2(V)} \le C < \infty \] for some constant $C$ uniformly in $n$. It is not difficult to show that this implies $\left\{ M_{V^{\frac{1}{2}}} T W_n \textbf{1}_I e\right\}$ has a limit in $L^2$, which is independent of the sequence $\{W_n\}$ chosen. So, there is no ambiguity in calling this limit function $V^{\frac{1}{2}} T W \textbf{1}_Ie$ and interpreting the lefthand side of \eqref{eqn:T} as its $L^2$ norm. The dual expressions are interpreted analogously. We can similarly interpret the term in $(ii)$ from Theorem \ref{thm:Band2} as the inner product between $V^{\frac{1}{2}} T W \textbf{1}_Ie$ and $V^{\frac{1}{2}} \textbf{1}_J \nu$ in $L^2.$
To interpret the sufficient condition, fix any sequences $\{W_n\}$ and $\{V_n\}$ in $L^2_{loc}(\mathbb{R})$ increasing to $W$ and $V$ respectively. Conditions \eqref{eqn:T} and \eqref{eqn:dual} can be interpreted as the estimates \[ \begin{aligned}
\left \| T W_n \textbf{1}_I e \right \|_{L^2(V_n)} &\le A_1 \left \langle W_n(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \\
\left \| T^* V_n \textbf{1}_I e \right \|_{L^2(W_n)} &\le A_2 \left \langle V_n(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}}, \end{aligned} \]
which are uniform in $n$, $e$, and $I$. Then Theorem \ref{thm:Band} gives the bound for $\left\| M_{V_n^{\frac{1}{2}}} T M_{W_n^{\frac{1}{2}}} \right\|_{L^2\to L^2}$ which implies the desired bound for
$\left\| M_{V^{\frac{1}{2}}} T M_{W^{\frac{1}{2}}} \right\|_{L^2\to L^2}$. The analogous interpretations of the expressions in Theorem \ref{thm:Band2} should also be clear. \end{remark}
\subsection{Summary and Outline of the Paper}
The remainder of the paper consists of the proofs of Theorems \ref{thm:Band} and \ref{thm:Band2}. To outline the proof technique, assume that $W$, $V$ are matrix $A_2$ weights. It is not hard to show that $M_{V^{\frac{1}{2}}} T M_{W^{\frac{1}{2}}}: L^2 \rightarrow L^2$ is bounded with operator norm $C$ if and only if the operator
\[ T_W \equiv TM_{W}: L^2(W) \rightarrow L^2(V) \ \text{ satisfies } \ \|T_W\|_{L^2(W) \rightarrow L^2(V)} =C. \] Because $T$ is a band operator, $T_W$ will have a particularly nice structure. Following the language and proof strategy of Nazarov-Treil-Volberg \cite{ntv08}, we will show $T_W$ is well-localized. Section \ref{sec:WellLoc} contains the details of well-localized operators, their connections to band operators, and the analogues of Theorems \ref{thm:Band} and \ref{thm:Band2} for well-localized operators. We call these results Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}. These theorems will immediately imply our main results:~Theorems \ref{thm:Band} and \ref{thm:Band2}.
In Sections \ref{sec:HaarBasis} and \ref{sec:MCET}, the paper develops the tools needed to prove Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}. In Section \ref{sec:HaarBasis}, we define and outline the properties of a system of Haar functions adapted to a general matrix weight $W$. This system appears to be new in the context of matrix weights. We also require a matrix Carleson Embedding Theorem. We use the ideas of Treil-Volberg \cite{vt97} and Isralowitz-Kwon-Pott \cite{IKP} to obtain such a theorem with the best known constant. Details are given in Section \ref{sec:MCET}.
Section \ref{sec:proof} contains the proofs of Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}. The well-localized structure of $T_W$ makes $T_W$ amenable to separate analyses of its diagonal part and upper and lower triangular parts, which behave like nice paraproducts. We compute the norm by duality and as part of the argument, decompose the functions in question relative to weighted Haar bases adapted to $W$ and $V$ respectively. To control the upper and lower triangular pieces, we define associated paraproducts and show they are bounded using the testing hypothesis and matrix Carleson Embedding Theorem. We bound the diagonal pieces using the well-localized structure of $T_W$ coupled with properties of the system of Haar functions and the given testing conditions.
\section{Weighted Haar Basis} \label{sec:HaarBasis}
Let $W$ be a matrix weight, and let $\| \cdot \|$ denote the operator norm of a matrix on $\mathbb{C}^d$. In this section, we construct a set of disbalanced Haar functions adapted to $W$, which we denote $H_W$. First, fix $J \in \mathcal{D}$ and let $v^1_J, \dots, v^d_J$ be a set of orthonormal eigenvectors of the positive matrix: \begin{equation} \begin{aligned} \label{posmat} W(J_-)W(J_+)^{-1}W(J_-) + W(J_-) &= W(J_-)W(J_+)^{-1}W(J_-) + W(J_+)W(J_+)^{-1}W(J_-) \\ & = W(J)W(J_+)^{-1}W(J_-). \end{aligned} \end{equation} Furthermore, for $1 \le j \le d$, define the constant
\[ w_J^j \equiv \left \| \left( W(J)W(J_+)^{-1}W(J_-) \right)^{\frac{1}{2}} v^j_J \right \|. \] Since the matrix \eqref{posmat} is positive and $v^j_J$ is a normalized eigenvector, it follows that: \[ ( w_J^j)^{-1} v^j_J = \left( W(J)W(J_+)^{-1}W(J_-) \right)^{-\frac{1}{2}} v^j_J \qquad \forall 1 \le j \le d. \] \begin{definition} \label{def:weightHaar} For each $J \in \mathcal{D}$, define the \emph{vector-valued Haar functions on $J$ adapted to $W$} as follows: \begin{equation} \label{eqn:haarfunctions} h^{W,j}_J \equiv ( w_J^j)^{-1} \left( W(J_+)^{-1}W(J_-)v_J^j \textbf{1}_{J_+} - v_J^j \textbf{1}_{J_-} \right) \qquad \forall 1\le j \le d. \end{equation} If the constant function $\textbf{1}_{[0, \infty)} e$ is in $L^2(W)$ for any nonzero $e$ in $\mathbb{C}^d$, let $\{ e_1, \dots, e_{p_1} \}$ be an orthonormal basis of the subspace of $\mathbb{C}^d$ satisfying $\textbf{1}_{[0, \infty)} e \in L^2(W).$ Define \[ h_{1}^{W,i} \equiv c^i_1 \textbf{1}_{[0, \infty)} e_i \qquad \text{ for } i =1, \dots, p_1, \]
where $c^i_1$ is chosen so that $\| h_{1}^{W,i} \|_{L^2(W)} =1.$ Define the functions \[ h_{2}^{W,i} \equiv c^i_2 \textbf{1}_{(-\infty, 0]} \nu_i \qquad \text{ for } i =1, \dots, p_2, \] where $\{ \nu_1, \dots, \nu_{p_2} \}$ is an orthonormal basis of the subspace of $\mathbb{C}^d$ satisfying $\textbf{1}_{(-\infty, 0]} \nu \in L^2(W),$ in an analogous way. Define $H_W$, the \emph{system of Haar functions adapted to $W,$} by: \[H_W \equiv \left \{ h^{W,j}_J \right\} \cup \left \{ h_{k}^{W,i} \right\}.\] One should notice that if the constant functions $\textbf{1}_{ [0,\infty)} e$ and $\textbf{1}_{(-\infty, 0]} e$ are not in $L^2(W)$ for any nonzero $e \in \mathbb{C}^d$, then $H_W = \left \{ h^{W,j}_J \right\}$. \end{definition}
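To connect this construction with the classical one, note that in the scalar case $d=1$ (so that $W = w$ is a scalar weight and $v^1_J = 1$), a direct computation from \eqref{eqn:haarfunctions} gives
\[ h^{W,1}_J = \frac{1}{\sqrt{w(J)}} \left( \sqrt{\frac{w(J_-)}{w(J_+)}} \, \textbf{1}_{J_+} - \sqrt{\frac{w(J_+)}{w(J_-)}} \, \textbf{1}_{J_-} \right), \]
which is, up to sign conventions, the familiar disbalanced Haar function associated to the weight $w$. The system $H_W$ may thus be viewed as a matrix analogue of that classical construction.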
We now show that $H_W$ is an orthonormal basis of $L^2(W).$
\begin{lemma} The system $H_W$ is an orthonormal system in $L^2(W).$ \end{lemma}
\begin{proof} We first prove that the system $\left\{ h^{W,j}_J\right\}$ is orthogonal. Fix $h^{W,j}_J$ and $h^{W,i}_I.$ First, assume $I \ne J$. Then, one interval must be strictly contained in the other because otherwise, the inner product trivially vanishes by support conditions. Without loss of generality, assume $I \subsetneq J$. This implies that $h^{W,j}_J$ equals a constant vector on $I$, which we will denote by $e$. Then \[ \begin{aligned} \left \langle h^{W,i}_I, h^{W,j}_J \right \rangle_{{L^2(W)} } &=\int_{I} \left \langle W(x) h^{W,i}_I, e \right \rangle_{\mathbb{C}^d} dx \\ &= \int_I ( w_I^i)^{-1} \left \langle W(x) \left( W(I_+)^{-1}W(I_-)v_I^i \textbf{1}_{I_+} - v_I^i \textbf{1}_{I_-} \right), e \right \rangle_{\mathbb{C}^d} dx \\ & = ( w_I^i)^{-1} \left \langle W(I_+) W(I_+)^{-1}W(I_-)v_I^i, e \right \rangle_{\mathbb{C}^d} - ( w_I^i)^{-1} \left \langle W(I_-) v_I^i, e \right \rangle_{\mathbb{C}^d}\\
&=0. \end{aligned}
\] One should notice that the definition of $e$ played no role; in fact, the above arguments show that each $h^{W,j}_J$ has mean zero with respect to $W$. Now assume $I=J$ and $i \ne j$. Observe that: \[ \begin{aligned} &\left \langle h^{W,i}_J, h^{W,j}_J \right \rangle_{L^2(W)} =\int_{J} \left \langle W(x) h^{W,i}_J, h^{W,j}_J \right \rangle_{\mathbb{C}^d} dx \\ & = ( w_J^j)^{-1} ( w_J^i)^{-1} \int_J \left \langle W(x) \left( W(J_+)^{-1}W(J_-)v_J^i \textbf{1}_{J_+} - v_J^i \textbf{1}_{J_-} \right), W(J_+)^{-1}W(J_-)v_J^j \textbf{1}_{J_+} - v_J^j \textbf{1}_{J_-} \right \rangle_{\mathbb{C}^d} dx \\ & = (w_J^j)^{-1} ( w_J^i)^{-1} \left( \left \langle W(J_+) W(J_+)^{-1}W(J_-)v_J^i, W(J_+)^{-1}W(J_-)v_J^j \right \rangle_{\mathbb{C}^d} + \left \langle W(J_-) v_J^i, v_J^j \right \rangle_{\mathbb{C}^d} \right)\\ &= (w_J^j)^{-1} ( w_J^i)^{-1} \left \langle \left(W(J_-)W(J_+)^{-1}W(J_-) + W(J_-) \right)v_J^i, v_J^j \right \rangle_{\mathbb{C}^d} \\ &=0, \end{aligned} \] since $v^i_J$ and $v^j_J$ are orthonormal eigenvectors of $W(J_-)W(J_+)^{-1}W(J_-) + W(J_-)$. Since each $h^{W,j}_J$ has mean zero with respect to $W$ and since each $h^{W,j}_J$ is either supported in $(-\infty, 0]$ or $[0, \infty)$, it is clear that \[ \left \langle h^{W,j}_J, h_{k}^{W,i} \right \rangle_{L^2(W)} =0 \qquad \forall J \in \mathcal{D} \] and for all indices $i, j, k.$ By construction, it is also clear that $\left\{ h_{k}^{W,j} \right\}$ is an orthonormal set in $L^2(W).$ Finally, to see that $\left\{ h^{W,j}_J \right\}$ is normalized, fix $h^{W,j}_J$ and observe that \[ \begin{aligned} &\left \langle h^{W,j}_J, h^{W,j}_J \right \rangle_{{L^2(W)} } =(w_J^j)^{-2} \left \langle \left( W(J_-)W(J_+)^{-1}W(J_-) + W(J_-) \right) v_J^j, v_J^j \right \rangle_{\mathbb{C}^d} \\ & =\left \langle \left( W(J_-)W(J_+)^{-1}W(J_-) + W(J_-) \right) \left( W(J_-)W(J_+)^{-1}W(J_-) + W(J_-) \right)^{-1} v_J^j, v_J^j \right \rangle_{\mathbb{C}^d} \\ &=1, \end{aligned} \] using the properties of $v_J^j$ and the definition of $w_J^j$. This completes the proof. \end{proof}
\begin{lemma} The orthonormal system $H_W$ is complete in $L^2(W).$ \end{lemma}
\begin{proof} Fix $f$ in $L^2(W)$, and assume $f$ is orthogonal to every function in $H_W$. Specifically, $f$ is orthogonal to the set $\left\{ h^{W,j}_J\right\}.$ Then, for each $J \in \mathcal{D}$ and $j=1, \dots, d,$ \[ 0 = \left \langle f, h^{W,j}_J \right \rangle_{{L^2(W)} }. \] Multiplying by a constant gives: \[ \begin{aligned}
0 &= |J_-|^{-1} \left \langle W(J_+)^{-1}W(J_-)v_J^j \textbf{1}_{J_+} - v_J^j \textbf{1}_{J_-}, f \right \rangle_{{L^2(W)} } \\
&= |J_-|^{-1} \int_J \left \langle W(J_+)^{-1}W(J_-)v_J^j \textbf{1}_{J_+} - v_J^j \textbf{1}_{J_-}, W(x) f(x) \right \rangle_{\mathbb{C}^d} dx \\ & = \left \langle W(J_+)^{-1}W(J_-)v_J^j , \left \langle W f \right \rangle_{J_+} \right \rangle_{\mathbb{C}^d} - \left \langle v_J^j , \left \langle W f \right \rangle_{J_-} \right \rangle_{\mathbb{C}^d} \\ &= \left \langle v_J^j, W(J_-)W(J_+)^{-1} \left \langle W f \right \rangle_{J_+} - \left \langle W f \right \rangle_{J_-} \right \rangle_{\mathbb{C}^d}. \end{aligned} \] Since this holds for each $j$ and $v_J^1, \dots, v^d_J$ is an orthonormal basis of $\mathbb{C}^d$, we can conclude that \begin{equation} \label{eqn:averages} \left \langle W f \right \rangle_{J_-} = W(J_-)W(J_+)^{-1} \left \langle W f \right \rangle_{J_+}. \end{equation} Adding $\left \langle Wf \right \rangle_{J_+}$ to both sides gives \[ 2\left \langle W f \right \rangle_{J} = W(J_-)W(J_+)^{-1} \left \langle W f \right \rangle_{J_+} +\left \langle Wf \right \rangle_{J_+}= \left( W(J_-)W(J_+)^{-1} + W(J_+)W(J_+)^{-1} \right) \left \langle W f \right \rangle_{J_+}. \] Rearranging by factoring out $W(J_+)^{-1}$ on the right from the term in parentheses and using the definitions gives \[ \left \langle W \right \rangle_J^{-1} \left \langle Wf \right \rangle_J = \left \langle W \right \rangle_{J_+}^{-1} \left \langle Wf \right \rangle_{J_+}. \] Solving \eqref{eqn:averages} for $\left \langle W f \right \rangle_{J_+}$ and using analogous arguments, one can show: \[ \left \langle W \right \rangle_J^{-1} \left \langle Wf \right \rangle_J = \left \langle W \right \rangle_{J_-}^{-1} \left \langle Wf \right \rangle_{J_-}. \] Now fix any $x,y \in (0, \infty)$ and choose some dyadic interval $J_0$ so that $x,y \in J_0.$ Define two sequence of dyadic intervals: \[ \begin{aligned} J_0 &= I_0 \supsetneq I_1 \supsetneq I_2 \dots \supsetneq I_i \supsetneq I_{i+1} \dots \\ J_0 &= K_0 \supsetneq K_1 \supsetneq K_2 \dots \supsetneq K_k \supsetneq K_{k+1} \dots \end{aligned} \] such that each $I_i$ is a parent of $I_{i+1}$ and $x \in I_i$ for all $i$ and similarly, each $K_k$ is a parent of $K_{k+1}$ and $y$ is in each $K_k$. Our previous arguments imply that \[ \left \langle W \right \rangle_{I_i}^{-1} \left \langle Wf \right \rangle_{I_i} = \left \langle W \right \rangle_{J_0}^{-1} \left \langle Wf \right \rangle_{J_0} = \left \langle W \right \rangle_{K_k}^{-1} \left \langle Wf \right \rangle_{K_k} \quad \forall i,k \in \mathbb{N}. \] Now we can use the Lebesgue Differentiation Theorem to conclude that \[ W(x)^{-1} W(x) f(x) = W(y)^{-1} W(y) f(y) \] for almost every $x,y$ in $(0, \infty)$ and so $f(x) = f(y)$ for almost every $x,y$ in $[0, \infty)$. Analogous arguments imply $f$ must be constant on $(- \infty, 0]$. But, by assumption, $f$ is also orthogonal to the set $\{ h^{W,i}_k\}$, which implies $f$ is orthogonal to all of the nonzero constant functions supported on $[0, \infty)$ or $(- \infty, 0]$ in $L^2(W).$ Thus, we can conclude $f \equiv 0.$ \end{proof}
We require one additional fact about the weighted Haar system: \begin{lemma} \label{lem:haarbound} The orthonormal system $H_W$ satisfies \[ \begin{aligned}
\left \| W(J_-)^{\frac{1}{2}} h_J^{W,j}(J_-) \right \|_{\mathbb{C}^d} & \le C(d) \\
\left \| W(J_+)^{\frac{1}{2}} h_J^{W,j}(J_+) \right \|_{\mathbb{C}^d} & \le C(d) \\ \end{aligned} \] for all $J \in \mathcal{D}$ and $1 \le j \le d,$ where $h_J^{W,j}(J_\pm)$ is the constant value $h_{J}^{W,j}$ takes on $J_\pm$. \end{lemma} \begin{proof} We only prove the first inequality as the second is proved similarly. First, recall that $W(J)W(J_+)^{-1}W(J_-)$ is a positive matrix and hence, $W(J_-)^{-1}W(J_+)W(J)^{-1}$ is positive as well. Now, observe that \[ \begin{aligned}
\left \| W(J_-)^{\frac{1}{2}} h_J^{W,j} \left( J_- \right ) \right \|_{\mathbb{C}^d}^2
& \le \left \| W(J_-)^{\frac{1}{2}} \left( W(J) W(J_+)^{-1} W(J_-) \right)^{-\frac{1}{2}} \right \|^2 \\
& = \left \| W(J_-)^{\frac{1}{2}} W(J_-) ^{-1} W(J_+) W(J)^{-1} W(J_-)^{\frac{1}{2}} \right \| \\ & \le C(d) \ \text{Tr}\left(W(J_-)^{\frac{1}{2}} W(J_-) ^{-1} W(J_+) W(J)^{-1} W(J_-)^{\frac{1}{2}} \right) \\ &= C(d) \ \text{Tr}\left( W(J)^{-\frac{1}{2}} W(J_+) W(J)^{-\frac{1}{2}} \right) \\
& \le C(d) \left \| W(J)^{-\frac{1}{2}} W(J_+) W(J)^{-\frac{1}{2}} \right \| \\
& \le C(d) \left \| W(J)^{-\frac{1}{2}} W(J) W(J)^{-\frac{1}{2}} \right \| \\ & = C(d), \end{aligned} \] where we used the fact that trace and operator norm are equivalent (up to a dimensional constant) for positive matrices. This completes the proof. \end{proof}
\begin{remark} \label{rem:Expand} In the proofs of Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}, we will expand functions in $L^2(W)$ with respect to the basis $H_W.$ Specifically, if $f \in L^2(W)$, we can expand $f$ as \[ f = \sum_{\substack{ J \in \mathcal{D} \\ 1\le j \le d}} \left \langle f, h^{W,j}_J \right \rangle_{L^2(W)} h^{W,j}_J + \sum_{\substack{1 \le k \le 2 \\ 1\le j \le p_k}} \left \langle f, h^{W,j}_k \right \rangle_{L^2(W)} h^{W,j}_k.\] This means that for $K \in \mathcal{D}$, we can express the weighted average of $f$ on $K$ as \[ \begin{aligned}
\left \langle W \right \rangle^{-1}_K \left \langle W f \right \rangle _K &= \sum_{\substack{ J \in \mathcal{D} \\ 1\le j \le d}} \left \langle f, h^{W,j}_J \right \rangle_{L^2(W)} \left \langle W \right \rangle^{-1}_K \left \langle W h^{W,j}_J \right \rangle_K \\ & \ \ \ \ + \sum_{\substack{1 \le k \le 2 \\ 1\le j \le p_k}} \left \langle f, h^{W,j}_k \right \rangle_{L^2(W)} \left \langle W \right \rangle^{-1}_K \left \langle W h^{W,j}_k \right \rangle_{K} \\
&= \sum_{\substack{J: K \subsetneq J \\ 1\le j \le d}} \left \langle f, h^{W,j}_J \right \rangle_{L^2(W)} h^{W,j}_J(K) + \sum_{\substack{1 \le k \le 2 \\ 1\le j \le p_k}} \left \langle f, h^{W,j}_k \right \rangle_{L^2(W)} h^{W,j}_k(K), \end{aligned} \] where $h^{W,j}_J(K)$ is the constant value that $h^{W,j}_J$ takes on $K$ and $h^{W,j}_k(K)$ is the constant value that $h^{W,j}_k$ takes on $K$. Now, assume $f$ is compactly supported, so that we can find two dyadic intervals $I_1 \subset [0, \infty)$ and $I_2 \subset (-\infty, 0]$ such that $\text{supp}(f) \subseteq I_1 \cup I_2.$ For $I \in \mathcal{D}$, define the weighted expectation of $f$ on $I$ by
&=& \sum_{\substack{J:J \subseteq I_1 \cup I_2 \\ 1\le j \le d}} \left \langle f, h^{W,j}_J \right \rangle_{L^2(W)} h^{W,j}_J + \sum_{1 \le \ell \le 2} E^W_{I_{\ell}} f. \label{eqn:sum}
\end{eqnarray}
\end{remark}
\section{Matrix Carleson Embedding Theorem} \label{sec:MCET}
Let $W$ be a matrix weight such that for all positive semi-definite matrices $A$ and intervals $J \in\mathcal{D}$, there is a uniform constant $C$ satisfying
\begin{equation} \label{eqn:reverse} \frac{1}{|J|} \int_J \| A W(x) A \| dx \le C \left( \frac{1}{|J|} \int_J \| A W(x) A \|^{\frac{1}{2}} dx \right)^2. \end{equation} Define $[W]_{R_2}$ to be the smallest such constant $C$. Treil-Volberg's arguments in Lemma 3.5 and Lemma 3.6 in \cite{vt97} show that, if $W$ is an $A_2$ matrix weight, then \begin{equation} \label{eqn:RHconstant} [W]_{R_2} \le C(d) [W]_{A_2}. \end{equation} In Theorem 6.1 in \cite{vt97}, Treil-Volberg prove an embedding theorem for a specific sequence of positive semi-definite matrices. Their arguments generalize easily to arbitrary sequences of matrices, yielding the following matrix Carleson Embedding Theorem:
\begin{theorem} \label{thm:CET1} Let $W$ be a matrix weight satisfying \eqref{eqn:reverse} and let $\left\{A_I\right\}_{I\in\mathcal{D}}$ be a sequence of positive semi-definite $d\times d$ matrices. Then \[ \sum_{I\in\mathcal{D}} \left\langle A_I \left\langle f\right\rangle_I, \left\langle f\right\rangle_I\right\rangle_{\mathbb{C}^d} \le C_1 \left\Vert f\right\Vert_{L^2(W^{-1})}^2 \ \text{ if } \
\frac{1}{|J|} \sum_{I:I \subseteq J} \left \| \left \langle W \right \rangle^{\frac{1}{2}}_I A_I \left \langle W \right \rangle^{\frac{1}{2}}_I \right \| \le C_2 \ \ \forall J \in \mathcal{D}, \] where $C_1 = C_2 C(d) [W]_{R_2}$ and $C(d)$ is a dimensional constant. \end{theorem}
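For orientation, consider the scalar case $d=1$, writing $W = w$ and $A_I = a_I \ge 0$. Then $\left \| \left \langle w \right \rangle^{\frac{1}{2}}_I a_I \left \langle w \right \rangle^{\frac{1}{2}}_I \right \| = a_I \left \langle w \right \rangle_I$, so Theorem \ref{thm:CET1} says that the weighted embedding estimate
\[ \sum_{I\in\mathcal{D}} a_I \left| \left\langle f\right\rangle_I \right|^2 \le C_1 \left\Vert f\right\Vert_{L^2(w^{-1})}^2 \]
holds whenever $\frac{1}{|J|} \sum_{I:I \subseteq J} a_I \left \langle w \right \rangle_I \le C_2$ for all $J \in \mathcal{D}$.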
It should be noted that in \cite{IKP}, Isralowitz-Kwon-Pott obtained a more general version of Theorem \ref{thm:CET1}, which holds for all $A_p$ matrix weights.
\begin{remark} Treil-Volberg's arguments in \cite{vt97} actually establish a seemingly stronger result. Namely, they show that if $\{ B_I \}_{I\in\mathcal{D}}$ is a sequence of positive semi-definite matrices, then
\begin{equation} \label{eqn:TV} \sum_{I \in \mathcal{D}} \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} B_I\left \langle W \right \rangle_I^{-\frac{1}{2}} \right \| \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} \left \langle W^{\frac{1}{2}} g \right \rangle_I \right \|^2_{\mathbb{C}^d} \le C_1 \| g\|_{L^2}^2 \ \text{ if } \
\frac{1}{|J|}\sum_{I:I \subseteq J} \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} B_I \left \langle W \right \rangle_I^{-\frac{1}{2}} \right \| \le C_2,\end{equation} for all $J \in \mathcal{D}$. To recover Theorem \ref{thm:CET1} from \eqref{eqn:TV}, note that \[ \begin{aligned} \sum_{I \in \mathcal{D}} \left\langle \left \langle W \right \rangle_I^{-1} B_I\left \langle W \right \rangle_I^{-1} \left \langle W^{\frac{1}{2}} g \right \rangle_I , \left \langle W^{\frac{1}{2}} g \right \rangle_I \right \rangle_{\mathbb{C}^d}
&\le \sum_{I \in \mathcal{D}} \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} B_I\left \langle W \right \rangle_I^{-\frac{1}{2}} \right \| \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} \left \langle W^{\frac{1}{2}} g \right \rangle_I \right \|^2_{\mathbb{C}^d}. \end{aligned} \] If one is given $\left\{A_I\right\}_{I\in\mathcal{D}}$ and $f \in L^2(W^{-1})$, then pairing the above inequality with \eqref{eqn:TV} using $B_I \equiv \langle W \rangle_I A_I \langle W \rangle_I$ and $g \equiv W^{-\frac{1}{2}}f$ gives the inequalities in Theorem \ref{thm:CET1}.\end{remark}
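In more detail, with $B_I \equiv \left \langle W \right \rangle_I A_I \left \langle W \right \rangle_I$ and $g \equiv W^{-\frac{1}{2}}f$ as in the preceding remark, one can check that
\[ \left \langle W \right \rangle_I^{-1} B_I \left \langle W \right \rangle_I^{-1} = A_I, \qquad \left \langle W \right \rangle_I^{-\frac{1}{2}} B_I \left \langle W \right \rangle_I^{-\frac{1}{2}} = \left \langle W \right \rangle_I^{\frac{1}{2}} A_I \left \langle W \right \rangle_I^{\frac{1}{2}}, \qquad \left \langle W^{\frac{1}{2}} g \right \rangle_I = \left \langle f \right \rangle_I, \]
and $\| g \|_{L^2}^2 = \| f \|_{L^2(W^{-1})}^2.$ Thus the left-hand side of the inequality displayed in the remark equals $\sum_{I\in\mathcal{D}} \left\langle A_I \left\langle f\right\rangle_I, \left\langle f\right\rangle_I\right\rangle_{\mathbb{C}^d}$, the quantity dominating it is the left-hand side of \eqref{eqn:TV}, and the Carleson condition in \eqref{eqn:TV} becomes exactly the testing condition of Theorem \ref{thm:CET1}.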
Equation \eqref{eqn:TV} is proved via arguments similar to those used in \cite{nn86} to establish the standard Carleson Embedding Theorem. Specifically, Treil-Volberg define an associated embedding operator and show it is bounded using the Senichkin-Vinogradov Test:
\begin{theorem}[Senichkin-Vinogradov Test] \label{thm:SV} Let $\mathcal{Z}$ be a measure space, and let $k$ be a locally summable, nonnegative, measurable function on $\mathcal{Z} \times \mathcal{Z}$. If \[ \int_{\mathcal{Z}} k(s,t)k(s,x) \ ds \le C \left[ k(x,t) + k(t,x) \right] \quad a.e. \text{ on } \mathcal{Z},\] then for all nonnegative $g \in L^2(\mathcal{Z})$,
\[ \int_{\mathcal{Z}} \int_{\mathcal{Z}} k(s,t)g(s) g(t) \ ds dt \le 2C \| g \|^2_{L^2(\mathcal{Z})}. \] \end{theorem}
For the reader's convenience, we sketch the proof of \eqref{eqn:TV}. We focus on the first half of the proof, as the second half is given in detail in \cite{vt97}.
\begin{proof} First define $\mu_I \equiv \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} B_I\left \langle W \right \rangle_I^{-\frac{1}{2}} \right \| .$ Then, by assumption, $\{\mu_I \}_{I\in\mathcal{D}}$ is a scalar Carleson sequence with testing constant $C_2.$ Define the embedding operator $\mathcal{J}: L^2 \rightarrow \ell^2( \{ \mu_I \}, \mathbb{C}^d)$ by \[ \mathcal{J} f = \left\{ \left \langle W \right \rangle_I^{-\frac{1}{2}} \left \langle W^{\frac{1}{2}} f \right \rangle_I \right \}_{I \in \mathcal{D}}\] and observe that \eqref{eqn:TV} is equivalent to $\mathcal{J}$ having operator norm bounded by $\sqrt{C_1}.$ To prove the norm bound, one shows that the formal adjoint $\mathcal{J}^*: \ell^2(\{\mu_I\}, \mathbb{C}^d) \rightarrow L^2$ defined by
\[\mathcal{J}^* \{ \alpha_I \} \equiv \sum_{I \in \mathcal{D}} \frac{\mu_I}{|I|} \textbf{1}_I W^{\frac{1}{2}} \left \langle W \right \rangle_I^{-\frac{1}{2}} \alpha_I \qquad \forall \ \{\alpha_I \} \in \ell^2 \left(\{\mu_I\}, \mathbb{C}^d\right) \] has the desired norm bound. First observe that
\[ \mathcal{J} \mathcal{J}^* \{\alpha_I \} = \left\{ \left \langle W \right \rangle_J^{-\frac{1}{2}} \sum_{I \in \mathcal{D}} \frac{\mu_I}{|I|} \left \langle W \textbf{1}_I \right \rangle_J \left \langle W \right \rangle_I^{-\frac{1}{2}} \alpha_I \right \}_{J \in \mathcal{D}}. \] One can use this to immediately show that for any $\{ \alpha_I \}$ in $\ell^2( \{\mu_I\}, \mathbb{C}^d)$, \[ \begin{aligned}
\left \| \mathcal{J}^* \{ \alpha_I \} \right \|_{L^2}^2 &= \left \langle \mathcal{J} \mathcal{J}^* \{ \alpha_I \}, \{ \alpha_I \} \right \rangle_{\ell^2(\{\mu_I \}, \mathbb{C}^d)} \\
&= \sum_{J \in \mathcal{D}} \sum_{I: I \subseteq J} \frac{ \mu_I \mu_J}{|J|} \left \langle \left \langle W \right \rangle^{-\frac{1}{2}}_J \left \langle W \right \rangle^{\frac{1}{2}}_I \alpha_I, \alpha_J \right \rangle_{\mathbb{C}^d} + \sum_{I \in \mathcal{D}} \sum_{J: J \subsetneq I} \frac{ \mu_I \mu_J}{|I|} \left \langle \left \langle W \right \rangle^{\frac{1}{2}}_J \left \langle W \right \rangle^{-\frac{1}{2}}_I \alpha_I, \alpha_J \right \rangle_{\mathbb{C}^d}. \end{aligned} \] Now, for $K,L \in \mathcal{D}$, define $T_{LK}$ by
\[T_{LK} \equiv \frac{1}{|L|} \left \| \left \langle W \right \rangle^{\frac{1}{2}}_K \left \langle W \right \rangle^{-\frac{1}{2}}_L \right \|
= \frac{1}{|L|} \left \| \left \langle W \right \rangle^{-\frac{1}{2}}_L \left \langle W \right \rangle^{\frac{1}{2}}_K \right \| \]
if $K \subseteq L$ and $T_{LK} =0$ otherwise. By symmetry in the sums, it is easy to show that \begin{equation} \label{eqn:TVbound}
\left \| \mathcal{J}^* \{ \alpha_I \} \right \|^2_{L^2} \le 2 \sum_{J \in \mathcal{D}} \sum_{I: I \subseteq J} \mu_I \mu_J T_{JI} \|\alpha_I \|_{\mathbb{C}^d} \| \alpha_J \|_{\mathbb{C}^d}. \end{equation}
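Indeed, by the Cauchy-Schwarz inequality in $\mathbb{C}^d$, each term in the first double sum above satisfies
\[ \frac{ \mu_I \mu_J}{|J|} \left| \left \langle \left \langle W \right \rangle^{-\frac{1}{2}}_J \left \langle W \right \rangle^{\frac{1}{2}}_I \alpha_I, \alpha_J \right \rangle_{\mathbb{C}^d} \right| \le \frac{ \mu_I \mu_J}{|J|} \left \| \left \langle W \right \rangle^{-\frac{1}{2}}_J \left \langle W \right \rangle^{\frac{1}{2}}_I \right \| \|\alpha_I \|_{\mathbb{C}^d} \| \alpha_J \|_{\mathbb{C}^d} = \mu_I \mu_J T_{JI} \|\alpha_I \|_{\mathbb{C}^d} \| \alpha_J \|_{\mathbb{C}^d}, \]
and the second double sum obeys the same bound after exchanging the names of the summation indices $I$ and $J$; this yields \eqref{eqn:TVbound} with the factor $2$.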
Thus, the result will be proved if one can show that the righthand side of \eqref{eqn:TVbound} is bounded by $C_1 \|\{ \alpha_I\} \|^2_{\ell^2( \{ \mu_I \}, \mathbb{C}^d)}.$ This is where one uses the Senichkin-Vinogradov Test. Let $\mathcal{Z}$ be $\mathcal{D}$, the set of dyadic intervals, with point mass $\mu_I$ on each interval $I$. Then, $L^2(\mathcal{Z})$ is equivalent to $\ell^2(\{\mu_I\}, \mathbb{C}).$ Indeed, $\{\beta_I\}\in \ell^2(\{\mu_I\}, \mathbb{C})$ if and only if the function $\beta$ defined by $\beta(I) = \beta_I$ is in $L^2(\mathcal{Z})$. Moreover,
\[ \| \{\beta_I\} \|_{\ell^2(\mu_I, \mathbb{C})} = \| \beta \|_{L^2(\mathcal{Z})},\] so we can treat these as the same objects. Now, define the nonnegative function $k: \mathcal{Z} \times \mathcal{Z} \rightarrow \mathbb{R}^+$ by \[ k(K, L) \equiv \sum_{J \in \mathcal{D}} \sum_{I: I \subseteq J} T_{JI} \delta_I(K) \delta_J(L), \]
where $\delta_I(K)=1$ if $K=I$ and zero otherwise. Fix a sequence $\{\alpha_I\}\in \ell^2( \{\mu_I \}, \mathbb{C}^d)$. Then the sequence $\{a_I\}$ defined by $a_I \equiv \| \alpha_I \|_{\mathbb{C}^d}$ is a nonnegative sequence in $\ell^2(\{\mu_I \}, \mathbb{C})$ or equivalently, $a$ (defined by $a(I) = a_I$) is a nonnegative function in $L^2(\mathcal{Z})$, and the norms of the two sequences are equal. It is easy to show that
\[ \int_{\mathcal{Z}} \int_{\mathcal{Z}} k(K,L) a(K) a(L) \ dK dL = \sum_{J \in \mathcal{D}} \sum_{I: I \subseteq J} \mu_I \mu_J T_{JI} a_I a_J = \sum_{J \in \mathcal{D}} \sum_{I:I \subseteq J} \mu_I \mu_J T_{JI} \| \alpha_I \|_{\mathbb{C}^d} \| \alpha_J \|_{\mathbb{C}^d}, \] which is exactly the object we need to control. Indeed, if we can establish the conditions of the Senichkin-Vinogradov test with constant $C_1$, then the result will be proved. Let us first rewrite the desired conditions. The definition of $k$ implies that \[ \int_{\mathcal{Z}} k(K,J) k(K,J') \ dK = \sum_{I: I \subseteq J, J'} T_{JI} T_{J'I} \mu_I \qquad \forall \ J, J' \in \mathcal{D}. \] Again using the definition of $k$, we have \[ k(J, J') + k(J', J) = T_{JJ'} + T_{J'J} \qquad \forall \ J, J' \in \mathcal{D}. \] Since we only sum over dyadic $I \subseteq J \cap J'$, to have a nonzero sum, we must have $J \subseteq J'$ or $J' \subseteq J$. Without loss of generality, assume $J' \subseteq J.$ Then, to establish the conditions of the Senichkin-Vinogradov test, one must simply show: \[ \begin{aligned}
\sum_{I: I \subseteq J'} T_{JI} T_{J'I} \mu_I &= \sum_{I: I \subseteq J'} \mu_I \frac{1}{|J|} \left \| \left \langle W \right \rangle^{-\frac{1}{2}}_J \left \langle W \right \rangle^{\frac{1}{2}}_I \right \| \frac{1}{|J'|} \left \| \left \langle W \right \rangle^{-\frac{1}{2}}_{J'} \left \langle W \right \rangle^{\frac{1}{2}}_I \right \| \\
& \le C_1 \frac{1}{|J|}
\left \| \left \langle W \right \rangle^{-\frac{1}{2}}_J \left \langle W \right \rangle^{\frac{1}{2}}_{J'} \right \|. \end{aligned}
\] This inequality is proven in detail in \cite{vt97}. The proof uses simple results about matrix weights including the fact that all matrix $A_2$ weights satisfy a reverse H\"older estimate as in \eqref{eqn:reverse}. The reverse H\"older estimate is used to turn the sum of interest into a sum of averages of a function weighted by the constants $\mu_I.$ Since $\{\mu_I\}_{I\in\mathcal{D}}$ is a scalar Carleson sequence, one can use the scalar Carleson Embedding Theorem to complete the proof.
\end{proof}
Using Theorem \ref{thm:CET1} and ideas from \cite{IKP}, we now obtain the following Carleson Embedding Theorem. Its testing conditions are particularly well-suited to the objects appearing in the proofs of Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}, the well-localized analogues of Theorems \ref{thm:Band} and \ref{thm:Band2}.
\begin{theorem} \label{thm:CET2} Let $W$ be an $A_2$ weight and let $\{A_I\}_{I\in\mathcal{D}}$ be a sequence of positive semi-definite $d \times d$ matrices. Then \[ \sum_{I\in\mathcal{D}} \left\langle A_I \left\langle f\right\rangle_I, \left\langle f\right\rangle_I\right\rangle_{\mathbb{C}^d} \le C_1 \left\Vert f\right\Vert_{L^2(W^{-1})}^2 \ \text{ if } \ \
\frac{1}{|J|} \sum_{I: I \subseteq J} \left \langle W \right \rangle_I A_I \left \langle W \right \rangle_I \le C_2 \left \langle W \right \rangle_J \ \ \ \forall J \in \mathcal{D}, \] where $C_1 = C_2 C(d)[W]_{R_2} [W]_{A_2}.$ \end{theorem}
The existence of Theorem \ref{thm:CET2}, albeit with a different constant, is mentioned by Isralowitz-Kwon-Pott in the final remarks of \cite{IKP}. Indeed, according to these remarks, if one modifies their previous arguments and tracks all constants closely, one could obtain this Carleson Embedding Theorem with constant $C(d) [W]^2_{A_2}.$ However, in light of Equation $\eqref{eqn:RHconstant}$, our constant is very likely smaller than the one appearing in \cite{IKP}. As the details of the proof are not given in \cite{IKP} and we obtain a different constant, we include the proof here.
\begin{remark} \label{rem:CET} In Theorems \ref{thm:Band}, \ref{thm:Band2} and Theorems \ref{thm:WellLoc}, \ref{thm:WellLoc2}, the constants $B(W)$ and $B(V)$ appear. Since dimensional constants are already included in the statement of those theorems, it should be clear from Theorem \ref{thm:CET2} that \[ B(W) = [W]_{R_2}^{\frac{1}{2}} [W]_{A_2}^{\frac{1}{2}} \ \text{ and } \ B(V) = [V]_{R_2}^{\frac{1}{2}} [V]_{A_2}^{\frac{1}{2}}.\] \end{remark}
Now, to prove Theorem \ref{thm:CET2}, we need the decaying stopping tree from Isralowitz-Kwon-Pott. Specifically, fix $I \in \mathcal{D}$ and let $\mathcal{J}(I)$ be the collection of maximal dyadic $J \subseteq I$ such that
\[ \left \| \left \langle W \right \rangle_J^{-\frac{1}{2}} \left \langle W \right \rangle_I^{\frac{1}{2}} \right \|^2 > \lambda \ \ \text{ or } \ \ \left \| \left \langle W \right \rangle_J^{\frac{1}{2}} \left \langle W \right \rangle_I^{-\frac{1}{2}} \right \|^2 > \lambda, \] for $\lambda >1$ to be determined later. Set $\mathcal{F}(I)$ to be the collection of $J \subseteq I$ such that $J$ is not contained in any interval in $\mathcal{J}(I).$ It is clear that $I$ is always in $\mathcal{F}(I).$ Set $ \mathcal{J}^0(I) \equiv \{I \}.$ Inductively define $\mathcal{J}^j(I)$ and $\mathcal{F}^j(I)$ by \[ \mathcal{J}^j(I) = \bigcup_{J \in \mathcal{J}^{j-1}(I)} \mathcal{J}(J) \ \ \text { and } \ \ \mathcal{F}^j(I) = \bigcup_{J \in \mathcal{J}^{j-1}(I)} \mathcal{F}(J). \]
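By construction, every dyadic interval $J \subseteq I$ belongs to exactly one of the collections $\mathcal{F}(K)$, where $K$ runs over the stopping tree $\bigcup_{j \ge 0} \mathcal{J}^j(I)$; namely, one takes $K$ to be the smallest interval of the stopping tree containing $J$. In other words,
\[ \left\{ J \in \mathcal{D}: J \subseteq I \right\} = \bigcup_{j =1}^{\infty} \bigcup_{K \in \mathcal{J}^{j-1}(I)} \mathcal{F}(K), \]
with the union disjoint. This is the decomposition of the sum over subintervals used in the first step of the proof of Theorem \ref{thm:CET2} below.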
One can then prove the following lemma. \begin{lemma}[Lemma 2.1, \cite{IKP}] \label{lem:stopping} Given the stopping-tree set-up, if $\lambda = 4 C(d)[W]_{A_2},$ then
\[ \left | \bigcup_{J \in \mathcal{J}^j(I)} \mathcal{J}(J) \right| \le 2^{-j} |I| \quad \forall I \in \mathcal{D}. \] \end{lemma}
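Since the intervals in each collection $\mathcal{J}^{j}(I)$ are pairwise disjoint, Lemma \ref{lem:stopping} can be read as a packing estimate:
\[ \sum_{K \in \mathcal{J}^{j}(I)} |K| = \left| \bigcup_{K \in \mathcal{J}^{j}(I)} K \right| \lesssim 2^{-j} |I| \qquad \forall \ j \ge 0, \ I \in \mathcal{D}, \]
so that $\sum_{j \ge 1} \sum_{K \in \mathcal{J}^{j-1}(I)} |K| \lesssim |I|.$ This is the bound used in the final lines of the proof below.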
We can now provide the proof of Theorem \ref{thm:CET2}:
\begin{proof}[Proof of Theorem \ref{thm:CET2}] Using the equivalence, up to a dimensional constant, of norm and trace for positive semi-definite matrices, our hypothesis implies
\[ \sum_{I: I \subseteq K} \left \| \left \langle W \right \rangle_K^{-\frac{1}{2}} \left \langle W \right \rangle_I A_I \left \langle W \right \rangle_I \left \langle W \right \rangle_K^{-\frac{1}{2}} \right \| \lesssim C_2 |K| \quad \forall K \in \mathcal{D}. \] We will use this to obtain the testing condition from Theorem \ref{thm:CET1}. Specifically, fix $J \in \mathcal{D}$. Then
\[ \begin{aligned}
\frac{1}{|J|} &\sum_{I: I \subseteq J} \left \| \left \langle W \right \rangle^{\frac{1}{2}}_I A_I \left \langle W \right \rangle^{\frac{1}{2}}_I \right \| =
\frac{1}{|J|} \sum_{j=1}^{\infty} \sum_{K \in \mathcal{J}^{j-1}(J)} \sum_{I \in \mathcal{F}(K)} \left \| \left \langle W \right \rangle^{\frac{1}{2}}_I A_I \left \langle W \right \rangle^{\frac{1}{2}}_I \right \| \\
&\le \frac{1}{|J|} \sum_{j=1}^{\infty} \sum_{K \in \mathcal{J}^{j-1}(J)} \sum_{I \in \mathcal{F}(K)}
\left \| \left \langle W \right \rangle^{-\frac{1}{2}}_I \left \langle W \right \rangle^{\frac{1}{2}}_K \right \|
\left \| \left \langle W \right \rangle^{-\frac{1}{2}}_K \left \langle W \right \rangle_I A_I \left \langle W \right \rangle_I \left \langle W \right \rangle^{-\frac{1}{2}}_K \right \| \left \| \left \langle W \right \rangle^{\frac{1}{2}}_K \left \langle W \right \rangle^{-\frac{1}{2}}_I \right \| \\
& = \frac{1}{|J|} \sum_{j=1}^{\infty} \sum_{K \in \mathcal{J}^{j-1}(J)} \sum_{I \in \mathcal{F}(K)}
\left \| \left \langle W \right \rangle^{\frac{1}{2}}_K \left \langle W \right \rangle^{-\frac{1}{2}}_I \right \|^2
\left \| \left \langle W \right \rangle^{-\frac{1}{2}}_K \left \langle W \right \rangle_I A_I \left \langle W \right \rangle_I \left \langle W \right \rangle^{-\frac{1}{2}}_K \right \| \\
&\lesssim \frac{ [W]_{A_2} }{|J|} \sum_{j=1}^{\infty} \sum_{K \in \mathcal{J}^{j-1}(J)} \sum_{I \in \mathcal{F}(K)}
\left \| \left \langle W \right \rangle^{-\frac{1}{2}}_K \left \langle W \right \rangle_I A_I \left \langle W \right \rangle_I \left \langle W \right \rangle^{-\frac{1}{2}}_K \right \| \\
& \le\frac{ [W]_{A_2} }{|J|} \sum_{j=1}^{\infty} \sum_{K \in \mathcal{J}^{j-1}(J)} \sum_{I:I \subseteq K}
\left \| \left \langle W \right \rangle^{-\frac{1}{2}}_K \left \langle W \right \rangle_I A_I \left \langle W \right \rangle_I \left \langle W \right \rangle^{-\frac{1}{2}}_K \right \| \\
& \lesssim \frac{C_2 [W]_{A_2} }{|J|} \sum_{j=1}^{\infty} \sum_{K \in \mathcal{J}^{j-1}(J)} |K| \\
& \lesssim C_2 [W]_{A_2} \sum_{j=1}^{\infty} 2^{-j} \\
& = C_2 [W]_{A_2}. \end{aligned} \] In the fourth line from the top we use the stopping criteria, which introduces the value $[W]_{A_2}$. Pairing this estimate with Theorem \ref{thm:CET1} gives the desired result. \end{proof}
\begin{remark} As mentioned in \cite{IKP}, one can prove a version of Lemma \ref{lem:stopping} for $A_{2, \infty}$ weights using Lemma 3.1 in \cite{vol97}. Recall from \cite{vol97} that $W$ is an $A_{2, \infty}$ weight if there is some constant $C$ such that
\[ e^{ \displaystyle \tfrac{1}{|I|} \int_I \log \| W(t)^{-\frac{1}{2}} x\| dt} \le C \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} x \right \|, \quad \forall x \in \mathbb{C}^d, \ I \in \mathcal{D}.\]
Denote the smallest such $C$ by $[ W]_{A_{2,\infty}}$. As is shown in \cite{vol97}, if $W \in A_2$, then $W \in A_{2, \infty}$ with $[W]_{A_{2, \infty}} \le [W]_{A_2}.$ If one tracks the constant in Lemma 3.1 from \cite{vol97} and uses it in the proof of Lemma 2.1 in \cite{IKP}, one can obtain Lemma \ref{lem:stopping} with $\lambda =C(d) [W]_{A_{2,\infty}}^{2d}.$ Then the proof of Theorem \ref{thm:CET2} immediately shows that Theorem \ref{thm:CET2} also holds with constant $C_1 = C_2 C(d)[W]_{R_2} [W]_{A_{2, \infty}}^{2d}.$ \end{remark}
\section{Well-Localized Operators} \label{sec:WellLoc} We say an operator $T_W$ acts formally from $L^2(W)$ to $L^2(V)$ if the bilinear form \[ \left \langle T_W \textbf{1}_I e, \textbf{1}_J v \right \rangle_{{L^2(V)} } \] is well-defined for all $I,J\in\mathcal{D}$ and $e, v \in \mathbb{C}^d$. Then, the formal adjoint $T_V^*$ is defined by \[ \left \langle T_V^* \textbf{1}_I e, \textbf{1}_J v \right \rangle_{{L^2(W)} } \equiv \left \langle \textbf{1}_I e, T_W \textbf{1}_J v \right \rangle_{{L^2(V)} }.\] Given this, we can define:
\begin{definition} \label{def:wellloc} An operator $T_W$ acting (formally) from $L^2(W)$ to $L^2(V)$ is called \emph{$r$-lower triangular} if for all $ 1 \le j \le d$ and $I, J \in \mathcal{D}$ with $|J| \le 2 |I|$ and all $e\in\mathbb{C}^d$, $T_W$ satisfies \[ \left \langle T_W \textbf{1}_I e, h^{V,j}_J \right \rangle_{L^2(V)} =0\]
whenever $J \not \subset I^{(r+1)}$ or $|J| \le 2^{-r}|I|$ and $J \not \subset I.$ Here, $ \left\{ h^{V,j}_J \right\} $ is the set of $V$-weighted Haar functions on $J$ as defined in \eqref{eqn:haarfunctions} and $I^{(r+1)}$ is the $(r+1)^{th}$ ancestor of $I$. We say $T_W$ is \emph{well-localized with radius $r$} if both $T_W$ and its formal adjoint $T_V^*$ are $r$-lower triangular. \end{definition}
This definition of well-localized is slightly different from the one appearing in \cite{ntv08}. Indeed, to define lower triangular, Nazarov-Treil-Volberg only impose conditions on $T_W$ when $|J| \le |I|,$ rather than $|J| \le 2 |I|.$
Nevertheless, their ideas are clearly the correct ones and their definition is essentially correct; the difference is likely attributable to a typographical error. Still, after establishing the related proofs, we point out the necessity of having conditions for $|J| \le 2 |I|$ in Remark \ref{remark:definition}.
The main results about well-localized operators are the following two theorems, which are the well-localized analogues of Theorems \ref{thm:Band} and \ref{thm:Band2}:
\begin{theorem} \label{thm:WellLoc} Let $V,W$ be matrix $A_2$ weights, and assume $T_W$ is a well-localized operator of radius $r$ acting formally from $L^2(W)$ to $L^2(V).$ Then $T_W$ extends to a bounded operator from $L^2(W)$ to $L^2(V)$ if and only if \[ \begin{aligned}
\left \| T_W \textbf{1}_I e \right \|_{L^2(V)} &\le A_1 \left \langle W(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \\
\left \| T^*_V \textbf{1}_I e \right \|_{L^2(W)} &\le A_2 \left \langle V(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}}
\end{aligned}
\]
for all $I \in \mathcal{D}$ and $e \in \mathbb{C}^d$. Furthermore,
\[ \left \| T_W \right \|_{L^2(W) \rightarrow L^2(V)} \le 2^{2r}C(d) \left(A_1B(W) + A_2B(V) \right), \]
where $C(d)$ is a dimensional constant and $B(W)$ and $B(V)$ are constants depending on $W$ and $V$ from an application of the matrix Carleson Embedding Theorem.
\end{theorem}
\begin{theorem} \label{thm:WellLoc2} Let $V,W$ be matrix $A_2$ weights, and assume $T_W$ is a well-localized operator of radius $r$ acting formally from $L^2(W)$ to $L^2(V).$ Then $T_W$ extends to a bounded operator from $L^2(W)$ to $L^2(V)$
if and only if the following two conditions hold: \begin{itemize} \item[$(i)$] For all intervals $I \in \mathcal{D}$ and $e \in \mathbb{C}^d$, \begin{eqnarray*}
\left \| \textbf{1}_I T_W \textbf{1}_I e \right \|_{L^2(V)} &\le A_1 \left \langle W(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \\
\left \| \textbf{1}_IT^*_V \textbf{1}_I e \right \|_{L^2(W)} &\le A_2 \left \langle V(I) e, e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}}. \end{eqnarray*}
\item[$(ii)$] For all intervals $I, J$ in $\mathcal{D}$ satisfying $2^{-r} |I| \le |J| \le 2^r |I|$ and vectors $e, \nu$ in $\mathbb{C}^d$,
\[ \left| \left \langle T_W \textbf{1}_I e, \textbf{1}_J \nu \right \rangle_{L^2(V)} \right| \le
A_3 \left \langle W(I)e,e \right \rangle^{\frac{1}{2}}_{\mathbb{C}^d}
\left \langle V(J) \nu,\nu \right \rangle^{\frac{1}{2}}_{\mathbb{C}^d}.
\]
\end{itemize}
Furthermore,
\[ \left \| T_W \right \|_{L^2(W) \rightarrow L^2(V)} \le 2^{2r} C(d) \left( A_1B(W) + A_2 B(V) +A_3\right), \]
where $C(d)$ is a dimensional constant and $B(W)$ and $B(V)$ are constants depending on $W$ and $V$ from an application of the matrix Carleson Embedding Theorem.
\end{theorem}
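Note that the testing conditions of Theorem \ref{thm:WellLoc} imply conditions $(i)$ and $(ii)$ of Theorem \ref{thm:WellLoc2} with $A_3 = A_1$: condition $(i)$ is immediate since $\left \| \textbf{1}_I T_W \textbf{1}_I e \right \|_{L^2(V)} \le \left \| T_W \textbf{1}_I e \right \|_{L^2(V)}$ and similarly for $T^*_V$, while condition $(ii)$ follows from the Cauchy-Schwarz inequality:
\[ \left| \left \langle T_W \textbf{1}_I e, \textbf{1}_J \nu \right \rangle_{L^2(V)} \right| \le \left \| T_W \textbf{1}_I e \right \|_{L^2(V)} \left \| \textbf{1}_J \nu \right \|_{L^2(V)} \le A_1 \left \langle W(I)e,e \right \rangle^{\frac{1}{2}}_{\mathbb{C}^d} \left \langle V(J) \nu,\nu \right \rangle^{\frac{1}{2}}_{\mathbb{C}^d}. \]
In this sense, Theorem \ref{thm:WellLoc2} is the stronger of the two results.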
Theorems \ref{thm:Band} and \ref{thm:Band2} will follow immediately from these theorems once we establish the following lemma:
\begin{lemma} \label{lem:wlb} If $V,W$ are matrix weights whose entries are in $L^2_{loc}(\mathbb{R})$ and if $T$ is a band operator of radius $r$, then $T_W$ is a well-localized operator of radius $r$ acting formally from $L^2(W)$ to $L^2(V).$ \end{lemma}
\begin{proof} Assume $T: L^2 \rightarrow L^2$ is a band operator with radius $r$, and $W,V$ are matrix weights whose entries are in $L^2_{loc}$. Then the operators \[ T_W \equiv T M_W \ \text{ and } \ T_V^* \equiv T^* M_V \] act formally from $L^2(W)$ to $L^2(V)$ and $L^2(V)$ to $L^2(W)$ respectively since \[ \left \langle T W \textbf{1}_I e, V \textbf{1}_J \nu \right \rangle_{L^2} = \left \langle T_W \textbf{1}_I e, \textbf{1}_J \nu \right \rangle_{L^2(V)} \text{ and } \left \langle W \textbf{1}_I e, T^*V \textbf{1}_J \nu \right \rangle_{L^2} = \left \langle \textbf{1}_I e, T^*_V \textbf{1}_J \nu \right \rangle_{L^2(W)} \] are well-defined. To show $T_W$ is a well-localized operator with radius $r,$ by symmetry, it suffices to show that $T_W$ is $r$-lower triangular. First, fix an orthonormal basis $\{e_i \}_{i=1}^d$ of $\mathbb{C}^d$ and for $I \in \mathcal{D}$, define $H_I \equiv \{ h_I e_i \}_{1 \le i \le d}$. Then we can write \[ T = \sum_{I,J \in \mathcal{D}} T_{IJ} \ \text{ where } \ T_{IJ} : H_I \rightarrow H_J, \] and each $T_{IJ}$ is given by \[ T_{IJ} = \sum_{1 \le i,j \le d} \left \langle T h_I e_i, h_J e_j \right \rangle_{L^2} \left \langle \cdot, h_I e_i \right \rangle_{L^2} h_Je_j .\] Since the entries of $W$ are in $L^2_{loc}(\mathbb{R})$, then $W\textbf{1}_Ie$ is in $L^2$ and so, $T_W \textbf{1}_I e \equiv T W\textbf{1}_I e$ makes sense for each $I \in \mathcal{D}$ and $ e \in \mathbb{C}^d.$ Given $h^{V,j}_J$, a vector-valued Haar function on $J$ adapted to $V$, one can write:
\[ \left \langle T_W \textbf{1}_I e, h_J^{V,j} \right \rangle_{L^2(V)} = \left \langle T_W \textbf{1}_I e, V h_J^{V,j} \right \rangle_{L^2} \le \left \| T_W \textbf{1}_I e \right \|_{L^2} \left \| V h_J^{V,j} \right \|_{L^2} < \infty,
\] where the first term is bounded because $T$ is bounded on $L^2$ and the second term is bounded because $h_J^{V,j}$ is bounded and the entries of $V$ are in $L^2_{loc}(\mathbb{R}).$ Given that, we are justified in expanding $T$ with respect to the Haar basis to obtain \[ \begin{aligned} \left \langle T_W \textbf{1}_I e, h_J^{V,j} \right \rangle_{L^2(V)} &= \sum_{K,L \in \mathcal{D}} \left \langle T_{KL} W \textbf{1}_I e, h^{V,j}_J \right \rangle_{L^2(V)} \\ &= \sum_{K,L \in \mathcal{D}}\sum_{1 \le k,\ell \le d} \left \langle T h_K e_k, h_L e_{\ell} \right \rangle_{L^2} \left \langle W \textbf{1}_I e, h_K e_k \right \rangle_{L^2} \left \langle h_Le_{\ell}, h^{V,j}_J \right \rangle_{L^2(V)} . \end{aligned} \] Observe that $ \left \langle T_{KL} W \textbf{1}_I e, h^{V,j}_J \right \rangle_{L^2(V)} $ is zero if $d_{\text{tree}}(K,L) >r$, if $I \cap K = \emptyset,$ or if $L \not \subset J.$ So, we only need consider terms where $d_{\text{tree}}(K,L) \le r$, $I \cap K \ne \emptyset$, and $L \subseteq J.$
To show $T_W$ is $r$-lower triangular let $|J| \le 2 |I|$. First, assume that $J \not \subset I^{(r+1)}$ and by contradiction, assume there is a nonzero term $\left \langle T_{KL} W \textbf{1}_I e, h^{V,j}_J \right \rangle_{L^2(V)} $ in the above sum for some $K,L \in \mathcal{D}.$ By our previous assertions, we must have
\[ |K| \le 2^r |L| \le 2^r |J | \le 2^{r+1} |I|.\]
Since $I \cap K \ne \emptyset$, this implies that $K \subseteq I^{(r+1)}.$ Since $L \subseteq J$, $|L| \le 2 |I|$ and $L \not \subset I^{(r+1)}.$ But, this immediately implies that $d_{tree}(K,L) \ge r+1$, a contradiction.
Similarly, assume $|J| \le 2^{-r} |I|$ and $J \not \subset I$ and by contradiction, assume there is a nonzero term $\left \langle T_{KL} W \textbf{1}_I e, h^{V,j}_J \right \rangle_{L^2(V)} $ for some $K,L.$
Then $|L| \le 2^{-r} |I|$ and $L \not \subset I.$ Furthermore, since $d_{\text{tree}}(K,L) \le r$, this implies $|K| \le |I|$, so $K \subseteq I.$ But $|L| \le 2^{-r} |I|$, $L \not \subset I$, and $K \subseteq I$ implies that $d_{\text{tree}}(K,L) \ge r+1$, a contradiction.
Thus, $T_W$ is $r$-lower triangular and symmetric arguments give the result for $T_V^*$. This implies $T_W$ is well-localized with radius $r$. \end{proof}
\begin{remark} In Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}, one must interpret the testing conditions correctly when the matrix weights' entries are not in $L^2_{loc}(\mathbb{R})$. We already outlined the remedy for this problem in Remark \ref{rem:L1loc}. Similarly, one should notice that Lemma \ref{lem:wlb} only handles the case where the matrix weights have entries in $L^2_{loc}(\mathbb{R})$. Nevertheless, this result is sufficient to allow us to pass from Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2} to Theorems \ref{thm:Band} and \ref{thm:Band2}. This is easy to see since, as detailed in Remark \ref{rem:L1loc}, we interpret all statements about weights with locally integrable (but not necessarily square-integrable) entries in Theorems \ref{thm:Band} and \ref{thm:Band2} using limits of weights with entries in $L^2_{loc}(\mathbb{R})$. \end{remark}
\section{Proofs of Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}} \label{sec:proof}
\subsection{Paraproducts} \label{sec:paraproducts}
To prove Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}, we require several results about related paraproducts. As before, let $T_W$ be a well-localized operator of radius $r$ acting formally from $L^2(W)$ to $L^2(V)$ with formal adjoint $T^*_V.$ Using these operators, define the following paraproducts: \[ \begin{aligned}
\Pi^W f &\equiv \sum_{I \in \mathcal{D}} \sum_{\substack{ 1 \le j \le d \\ J \subseteq I : |J| = 2^{-r} |I|}} \left \langle T_W E^W_I f , h^{V,j}_J \right \rangle_{L^2(V)} h^{V,j}_J \\
\Pi^V g &\equiv \sum_{I \in \mathcal{D}} \sum_{\substack{ 1 \le j \le d \\ J \subseteq I : |J| = 2^{-r} |I|}} \left \langle T^*_V E^V_I g, h^{W,j}_J \right \rangle_{L^2(W)} h^{W,j}_J \end{aligned} \] for $f \in L^2(W)$ and $ g \in L^2(V)$. Recall that the $W$-weighted expectation of $f$ on $I$ is defined by $E^W_If \equiv \left \langle W \right \rangle_I^{-1} \left \langle Wf \right \rangle_I \textbf{1}_I.$ Now, observe that, as demonstrated by the following lemma, these paraproducts mimic the behavior of $T_W$ and $T^*_V$ respectively.
\begin{lemma} \label{lem:paraproduct} Let $I, J \in \mathcal{D}$ and let $\Pi^W$ be the paraproduct defined above using the well-localized operator $T_W$ with radius $r$ acting (formally) from $L^2(W)$ to $L^2(V)$. \begin{itemize}
\item[1.] If $|J| \ge 2^{-r} |I|$, then \[ \left \langle \Pi^W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} = 0 \qquad \forall \ 1 \le i,j \le d.\]
\item[2.] If $|J| < 2^{-r} |I|$, then \[ \left \langle \Pi^W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} = \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \qquad \forall \ 1 \le i,j \le d.\] If $J \not \subset I$, then both sides of the equality are zero. \end{itemize} Furthermore, analogous statements hold for the paraproduct $\Pi^V$ and formal adjoint $T^*_V.$ \end{lemma}
\begin{proof} First, observe that \[ \begin{aligned} \left \langle \Pi^W h_I^{W,i}, h_J^{V,j} \right \rangle_{L^2(V)} &=
\sum_{K \in \mathcal{D}} \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} } \left \langle T_W E^W_K h^{W,i}_I, h^{V,\ell}_L \right \rangle_{L^2(V)} \left \langle h^{V, \ell}_L, h^{V,j}_J \right \rangle_{L^2(V)} \\ &= \left \langle T_W E^W_{J^{(r)}} h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)}, \end{aligned} \]
where $J^{(r)}$ is the $r^{th}$ ancestor of $J$. Now assume $|J| \ge 2^{-r} |I|$ or $J \not \subset I.$ Then, either $I \subseteq J^{(r)}$ or $I \cap J^{(r)} = \emptyset.$ In either case, \[ E^W_{J^{(r)}} h_I^{W,i} =0,\]
so the corresponding inner product is zero. Now assume $|J| < 2^{-r} |I|$, so that $|J| \le 2^{-r} |I_-| =2^{-r} |I_+|$. If $J \not \subset I$, then $J \not \subset I_-, I_+$ and since $T_W$ is well-localized with radius $r$, \[ \left \langle T_W h_I^{W,i}, h_J^{V,j} \right \rangle_{L^2(V)} = \left \langle T_W h_I^{W,i}(I_-) \textbf{1}_{I_-}, h_J^{V,j} \right \rangle_{L^2(V)} + \left \langle T_W h_I^{W,i}(I_+) \textbf{1}_{I_+}, h_J^{V,j} \right \rangle_{L^2(V)} =0.\]
This gives equality if $J \not \subset I.$ Now assume $|J| < 2^{-r} |I|$ and $J \subseteq I.$ Then \[ \begin{aligned} \left \langle \Pi^W h_I^{W,i}, h_J^{V,j} \right \rangle_{L^2(V)} &= \left \langle T_W E^W_{J^{(r)}} h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \\ &= \left \langle T_W h^{W,i}_I \left(J^{(r)} \right) \textbf{1}_{J^{(r)}}, h^{V,j}_J \right \rangle_{L^2(V)} \\ &= \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)}, \end{aligned} \] since for all $I' \subset I \setminus J^{(r)}$, the tree distance $d_{\text{tree}}(I',J) >r$ and so \[ \left \langle T_W h^{W,i}_I \left(I' \right) \textbf{1}_{I'}, h_J^{V,j} \right \rangle_{L^2(V)} = 0.\] Analogous statements hold for $\Pi^V$, since it is defined using the operator $T^*_V$, which is also well-localized with radius $r$. \end{proof}
Now, we show that testing condition $(i)$ from Theorem \ref{thm:WellLoc2}, and hence the stronger testing condition from Theorem \ref{thm:WellLoc}, implies the boundedness of the paraproducts $\Pi^W$ and $\Pi^V.$ We state the result for $\Pi^W,$ but analogous arguments give the result for $\Pi^V.$
\begin{lemma} \label{lem:parbdd} Let $\Pi^W$ be the paraproduct defined above and assume that the well-localized operator $T_W$ satisfies: \[
\left \| \textbf{1}_I T_W \textbf{1}_I e \right \|_{L^2(V)} \le C \left \langle W(I) e,e \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \qquad \forall I \in \mathcal{D}, \ e \in \mathbb{C}^d.\] Then $\Pi^W$ is bounded from $L^2(W)$ to $L^2(V)$ and
\[ \left \| \Pi^W \right \|_{L^2(W) \rightarrow L^2(V)} \le C B(W), \] where $B(W)$ is the constant obtained from applying the matrix Carleson Embedding Theorem. \end{lemma}
\begin{proof} Fix $f \in L^2(W)$, which implies $Wf \in L^2(W^{-1})$, and observe that \[ \begin{aligned}
\left \| \Pi^W f \right \|_{L^2(V)}^2 &= \sum_{K \in \mathcal{D}} \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} } \left | \left \langle T_W E^W_K f, h^{V,\ell}_L \right \rangle_{L^2(V)} \right|^2 \\
& = \sum_{K \in \mathcal{D}} \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} }
\left | \left \langle E^W_K f, T_V^*h^{V,\ell}_L \right \rangle_{L^2(W)} \right|^2 \\
& =\sum_{K \in \mathcal{D}} \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} }
\left | \left \langle \left \langle W \right \rangle_K^{-1} \left \langle Wf \right \rangle_K , \alpha_{L, \ell} \right \rangle_{\mathbb{C}^d} \right|^2, \end{aligned} \] where we have set $\alpha_{L, \ell}$ to be the vector \[ \alpha_{L, \ell} \equiv \int_{L^{(r)}} W(x) T^*_V h^{V,\ell}_L(x) dx. \] And so, letting $(\alpha_{L, \ell})^*$ denote the $1 \times d$ adjoint row vector corresponding to $\alpha_{L, \ell}$, we have \[ \begin{aligned}
\left \| \Pi^W f \right \|_{L^2(V)}^2 &=
\sum_{K \in \mathcal{D}} \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} } \left \langle \alpha_{L, \ell} \left( \alpha_{L, \ell} \right)^* \left \langle W \right \rangle_K^{-1} \left \langle Wf \right \rangle_K , \left \langle W \right \rangle_K^{-1} \left \langle Wf \right \rangle_K \right \rangle_{\mathbb{C}^d} \\ &= \sum_{K \in \mathcal{D}} \left \langle A_K \left \langle Wf \right \rangle_K ,\left \langle Wf \right \rangle_K \right \rangle_{\mathbb{C}^d}, \end{aligned} \] where we have set
\[ A_K \equiv \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} } \left \langle W \right \rangle_K^{-1}\alpha_{L, \ell} \left( \alpha_{L, \ell} \right)^* \left \langle W \right \rangle_K^{-1}.\] This is exactly the setup where we can apply Theorem \ref{thm:CET2}. Specifically, we need to show that for all $J \in \mathcal{D}$, \[ \sum_{K \subseteq J} \left \langle W \right \rangle_K A_K \left \langle W \right \rangle_K \le C^2 W(J).\] To prove this matrix inequality, fix $e \in \mathbb{C}^d$ and observe that \[ \begin{aligned} \sum_{K \subseteq J} \left \langle \left \langle W \right \rangle_K A_K \left \langle W \right \rangle_K e,e
\right \rangle_{\mathbb{C}^d} &= \sum_{K \subseteq J} \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} } \left \langle\alpha_{L, \ell} \left( \alpha_{L, \ell} \right)^* e,e \right \rangle_{\mathbb{C}^d} \\
& = \sum_{K \subseteq J} \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} } \left| \left \langle\alpha_{L, \ell} ,e
\right \rangle_{\mathbb{C}^d} \right |^2 \\
& = \sum_{K \subseteq J} \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} }
\left | \left \langle h^{V, \ell}_L, T_W e \textbf{1}_K \right \rangle_{L^2(V)} \right|^2. \end{aligned} \]
Notice that as $T_W$ is $r$-lower triangular and $L \subseteq K$ with $|L| = 2^{-r}|K|$, we have that
\[ \left \langle h_L^{V,\ell}, T_W e \textbf{1}_{J \setminus K} \right \rangle_{L^2(V)} = \sum_{ I \subseteq J: I \ne K, |I| = |K|} \left \langle h_L^{V,\ell}, T_W e \textbf{1}_{I} \right \rangle_{L^2(V)} =0.\] This means that \[ \begin{aligned} \sum_{K \subseteq J} \left \langle \left \langle W \right \rangle_K A_K \left \langle W \right \rangle_K e,e \right \rangle_{\mathbb{C}^d}
&= \sum_{K \subseteq J} \sum_{\substack{ 1 \le \ell \le d \\ L \subseteq K: |L| = 2^{-r} |K|} }
\left| \left \langle h^{V, \ell}_L, T_W e \textbf{1}_J \right \rangle_{L^2(V)} \right|^2 \\
& \le \left \| \textbf{1}_J T_W e \textbf{1}_J \right \|^2_{L^2(V)} \\ &\le C^2 \left \langle W(J)e, e \right \rangle_{\mathbb{C}^d}. \end{aligned} \] Since $e \in \mathbb{C}^d$ was arbitrary, the matrix inequality follows, so we can apply Theorem \ref{thm:CET2} to obtain:
\[ \| \Pi^W f \|^2_{L^2(V)} = \sum_{K \in \mathcal{D}} \left \langle A_K \left \langle W f \right \rangle_K,
\left \langle W f \right \rangle_K \right \rangle_{\mathbb{C}^d} \le C^2 B(W)^2 \| W f \|^2_{L^2(W^{-1})} = C^2 B(W)^2 \|f \|^2_{L^2(W)},\] as desired. \end{proof}
\subsection{Small Lemmas}
In this subsection, we verify several small lemmas that are trivial in the scalar situation. As before, $T_W$ is a well-localized operator with radius $r$ that satisfies the testing conditions from Theorem \ref{thm:WellLoc} or \ref{thm:WellLoc2}.
\begin{lemma} \label{lem:weightedbdd} Let $T_W$ be a well-localized operator with radius $r$ acting (formally) from $L^2(W)$ to $L^2(V)$ that satisfies the testing condition from Theorem \ref{thm:WellLoc} with constant $A_1$. Then
\[ \left | \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \right | \le C(d) A_1 \qquad \forall I,J \in \mathcal{D}, 1 \le i,j \le d.\] Similarly, if $T_W$ satisfies the testing condition $(ii)$ from Theorem \ref{thm:WellLoc2} with constant $A_3$, then
\[ \left | \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \right | \le C(d) A_3 \qquad \forall I,J \in \mathcal{D} \text{ with } 2^{-r}|I| \le |J| \le 2^{r}|I|, \ 1 \le i,j \le d.\] \end{lemma} \begin{proof} For the first part of the lemma, we can use Cauchy-Schwarz to obtain: \[ \begin{aligned}
\left | \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \right | \le \left \| T_W h^{W,i}_I \right \|_{L^2(V)} \le \left\| T_W h^{W,i}_I(I_-) \textbf{1}_{I_-} \right \|_{L^2(V)} + \left \| T_W h^{W,i}_I(I_+) \textbf{1}_{I_+} \right \|_{L^2(V)}. \end{aligned} \] It suffices to prove the desired bound for one term in the sum, since the arguments are symmetric. Using the testing condition and Lemma \ref{lem:haarbound}, we have: \[ \begin{aligned}
\left \| T_W h_I^{W,i} \left( I_- \right ) \textbf{1}_{I_-} \right \|_{L^2(V)} &\le A_1 \left \langle W(I_-) h_I^{W,i} \left( I_- \right ), h_I^{W,i} \left( I_- \right ) \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \\
&= A_1\left \| W(I_-)^{\frac{1}{2}} h_I^{W,i} \left( I_- \right ) \right \|_{\mathbb{C}^d} \\ & \le C(d) A_1, \end{aligned} \] which completes the first part of the lemma. For the second part, we can write: \[ \begin{aligned}
\left | \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \right | &\le
\left | \left \langle T_W h^{W,i}_I (I_-) \textbf{1}_{I_-}, h^{V,j}_J(J_-) \textbf{1}_{J_-} \right \rangle_{L^2(V)} \right | + \left | \left \langle T_W h^{W,i}_I (I_-) \textbf{1}_{I_-}, h^{V,j}_J(J_+) \textbf{1}_{J_+} \right \rangle_{L^2(V)} \right | \\
& \ \ \ + \left | \left \langle T_W h^{W,i}_I (I_+) \textbf{1}_{I_+}, h^{V,j}_J(J_-) \textbf{1}_{J_-} \right \rangle_{L^2(V)} \right | + \left | \left \langle T_W h^{W,i}_I (I_+) \textbf{1}_{I_+}, h^{V,j}_J(J_+) \textbf{1}_{J_+} \right \rangle_{L^2(V)} \right |. \end{aligned}
\] By Lemma \ref{lem:haarbound} and testing hypothesis $(ii)$, we can conclude: \[ \begin{aligned}
\left | \left \langle T_W h^{W,i}_I (I_-) \textbf{1}_{I_-}, h^{V,j}_J(J_-) \textbf{1}_{J_-} \right \rangle_{L^2(V)} \right | & \le A_3 \left \langle W(I_-) h^{W,i}_I (I_-) , h^{W,i}_I (I_-) \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \left \langle V(J_-) h^{V,j}_J(J_-), h^{V,j}_J(J_-) \right \rangle_{\mathbb{C}^d}^{\frac{1}{2}} \\
& = A_3\left \| W(I_-)^{\frac{1}{2}} h_I^{W,i} \left( I_- \right ) \right \|_{\mathbb{C}^d} \left \| V(J_-)^{\frac{1}{2}} h_J^{V,j} \left( J_- \right ) \right \|_{\mathbb{C}^d} \\ &\le C(d) A_3. \end{aligned}\] The other three terms in the sum can be handled similarly. \end{proof}
\begin{lemma} \label{lem:expectbd} Let $f \in L^2(W)$. Then for all $I \in \mathcal{D}$,
\[ |I|^{\frac{1}{2}} \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} \left \langle Wf \right \rangle_I \right
\|_{\mathbb{C}^d} \le C(d) \left \| f \textbf{1}_I \right \|_{L^2(W)}. \] \end{lemma} \begin{proof} Using H\"older's inequality and the fact that $\left \langle W \right \rangle_I^{-\frac{1}{2}} W(x)\left \langle W \right \rangle_I^{-\frac{1}{2}}$ is positive a.e., we can compute \[ \begin{aligned}
|I| \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} \left \langle Wf \right \rangle_I \right
\|^2_{\mathbb{C}^d} & =|I|^{-1}\left \| \int_I \left \langle W \right \rangle_I^{-\frac{1}{2}} W(x) f(x) \ dx \right \|_{\mathbb{C}^d}^2 \\
&\le |I|^{-1} \left(\int_I \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} W(x) f(x) \right \|_{\mathbb{C}^d}dx \right)^2 \\
&\le |I|^{-1}\left(\int_I \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} W(x)^{\frac{1}{2}} \right \|^2 dx \right) \left( \int_I \left \| W(x)^{\frac{1}{2}} f(x) \right \|^2_{\mathbb{C}^d}dx \right) \\
& = \left(|I|^{-1} \int_I \left \| \left \langle W \right \rangle_I^{-\frac{1}{2}} W(x) \left \langle W \right \rangle_I^{-\frac{1}{2}}\right \| dx \right) \left \| f \textbf{1}_I \right \|^2_{L^2(W)} \\
& \le C(d) \left \| f \textbf{1}_I \right \|^2_{L^2(W)} \left \| |I|^{-1} \int_I \left \langle W \right \rangle_I^{-\frac{1}{2}} W(x)\left \langle W \right \rangle_I^{-\frac{1}{2}} dx \right \| \\
&= C(d) \left \| f \textbf{1}_I \right \|^2_{L^2(W)} , \end{aligned} \] which gives the needed inequality. \end{proof}
\subsection{Proofs of Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}}
We first prove Theorem \ref{thm:WellLoc}: \begin{proof} We prove $T_W$ extends to a bounded operator from $L^2(W)$ to $L^2(V)$ using duality. Specifically we show
\begin{equation} \label{eqn:opineq} \left\vert \left \langle T_W f, g \right \rangle_{L^2(V)}\right\vert \le C\| f \|_{L^2(W)} \| g \|_{L^2(V)}, \end{equation} for a fixed constant $C$ and all $f$ and $g$ in dense sets of $L^2(W)$ and $L^2(V)$ respectively. Without loss of generality, we can assume $f$ and $g$ are compactly supported and so, we can choose disjoint $I_1, I_2 \in \mathcal{D}$
such that $\text{supp}(f), \text{supp}(g) \subseteq I_1 \cup I_2$ and $|I_1 | = |I_2 | =2^m$, for some $m \in \mathbb{N}.$ Using \eqref{eqn:sum}, we can write \begin{eqnarray} \label{eqn:fdecomp}
f &=& f_1 + f_2 = \sum_{\substack{I: |I| \le 2^m \\ 1 \le i \le d}} \left \langle f, h^{W,i}_I \right \rangle_{L^2(W)} h^{W,i}_I + \sum_{k=1}^2 E^W_{I_k} f \\
g &=& g_1 + g_2 = \sum_{\substack{J: |J| \le 2^m \\ 1 \le j \le d}} \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} h^{V,j}_J + \sum_{\ell=1}^2 E^V_{I_\ell} g. \label{eqn:gdecomp} \end{eqnarray} Using these decompositions, it suffices to show
\[ \left\vert \left \langle T_W f_i, g_j \right \rangle_{L^2(V)}\right\vert \le C\| f \|_{L^2(W)} \| g \|_{L^2(V)} \qquad \forall 1 \le i,j\le 2. \] First, consider $f_1$ and $g_1$. Using Lemma \ref{lem:paraproduct}, we can write \[ \begin{aligned}
\left \langle T_W f_1, g_1 \right \rangle_{L^2(V)} &= \sum_{\substack{I: |I| \le 2^m \\ 1 \le i \le d}} \sum_{\substack{J: |J| \le 2^m \\ 1 \le j \le d}} \left \langle f, h^{W,i}_I \right \rangle_{L^2(W)} \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \\
& = \sum_{\substack{I: |I| \le 2^m \\ 1 \le i \le d}} \sum_{\substack{J: |J| \le 2^m \\ |J| <2^{-r} |I| \\ 1 \le j \le d}}\left \langle f, h^{W,i}_I \right \rangle_{L^2(W)} \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \\
&\ \ \ \ + \sum_{\substack{J: |J| \le 2^m \\ 1 \le j \le d}} \sum_{\substack{ I: |I| \le 2^m \\ |I| <2^{-r} |J| \\ 1 \le i \le d}} \left \langle f, h^{W,i}_I \right \rangle_{L^2(W)} \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \\
&\ \ \ \ + \sum_{\substack{I: |I| \le 2^m \\ 1 \le i \le d}} \sum_{\substack{J: |J| \le 2^m\\ 2^{-r}|I| \le |J| \le 2^{r} |I| \\ 1 \le j \le d}}\left \langle f, h^{W,i}_I \right \rangle_{L^2(W)} \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \\ & = \left \langle \Pi^W f_1, g_1 \right \rangle_{L^2(V)} + \left \langle f_1, \Pi^V g_1 \right \rangle_{L^2(W)} \\
& \ \ \ \ + \sum_{\substack{I: |I| \le 2^m \\ 1 \le i \le d}} \sum_{\substack{J: |J| \le 2^m \\ 2^{-r}|I| \le |J| \le 2^{r} |I| \\ 1 \le j \le d}}\left \langle f, h^{W,i}_I \right \rangle_{L^2(W)} \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)}. \end{aligned} \] Lemma \ref{lem:parbdd} implies that
\[ \left | \left \langle \Pi^W f_1, g_1 \right \rangle_{L^2(V)} \right | + \left | \left \langle f_1, \Pi^V g_1 \right
\rangle_{L^2(W)} \right| \le \left(A_1B(W) + A_2 B(V) \right) \| f \|_{L^2(W)} \| g \|_{L^2(V)}. \] So, we just need to bound the last sum. We first apply Cauchy-Schwarz and exploit symmetry in the sums to obtain: \begin{align}
& \sum_{\substack{I: |I| \le 2^m \\ 1 \le i \le d}} \sum_{\substack{J: |J| \le 2^m \\ 2^{-r}|I| \le |J| \le 2^{r} |I| \\ 1 \le j \le d}} \left | \left \langle f, h^{W,i}_I \right \rangle_{L^2(W)} \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \right | \nonumber \\ & \le
\label{eqn:diagonal2} \left( \sum_{\substack{I: |I| \le 2^m \\ 1 \le i \le d}} \sum_{\substack{J: |J| \le 2^m \\ 2^{-r}|I| \le |J| \le 2^{r} |I| \\ 1 \le j \le d}} \left | \left \langle f, h^{W,i}_I \right \rangle_{L^2(W)}\right| ^2 \left |\left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \right | \right)^{1/2} \\
& \ \ \times \left(
\sum_{\substack{J: |J| \le 2^m \\ 1 \le j \le d}} \sum_{\substack{I:|I| \le 2^m \\ 2^{-r}|J| \le |I| \le 2^{r} |J| \\ 1 \le i \le d}} \left | \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \right|^2 \left | \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \right | \right)^{1/2} \nonumber. \end{align}
Now, fix $I \in \mathcal{D}$. Since $T_W$ is well-localized, it is not hard to show that there are only finitely many $J$ satisfying $2^{-r}|I| \le |J| \le 2^{r} |I|$ such that \[ \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \ne 0.\]
Specifically, by Definition \ref{def:wellloc}, a nonzero pairing with $|J| \le |I|$ forces $J \subset I^{(r)}$, while a nonzero pairing with $|J| \ge 2|I|$ forces $I \subset J^{(r)}$; counting dyadic intervals scale by scale, the number of such $J$ is bounded by a fixed constant times $2^{2r}$. Similarly, if we fix $J$, there are only finitely many $I$ satisfying $2^{-r}|J| \le |I| \le 2^{r} |J|$ such that \[ \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} =\left \langle h^{W,i}_I, T_V^*h^{V,j}_J \right \rangle_{L^2(W)} \ne 0.\] The number of such $I$ is also bounded by a fixed constant times $2^{2r}$. Thus, we can use the testing conditions and Lemma \ref{lem:weightedbdd} to estimate \[
\eqref{eqn:diagonal2} \le A_1 2^{2r} C(d) \| f \|_{L^2(W)} \|g\|_{L^2(V)}. \] The other terms are much simpler. First observe that for each $k, \ell$: \[ \begin{aligned}
\left | \left \langle T_W E^W_{I_k} f , E^V_{I_{\ell}} g \right \rangle_{L^2(V)} \right | &\le \left \| T_W E^W_{I_k} f \right \|_{L^2(V)} \left \| \left \langle V \right \rangle_{I_{\ell}}^{-1} \left \langle V g \right \rangle_{I_{\ell}} \textbf{1}_{I_{\ell}} \right \|_{L^2(V)} \\
& \le A_1 \left \| W(I_k)^{\frac{1}{2}} \left \langle W \right \rangle_{I_k}^{-1} \left \langle W f \right \rangle_{I_k} \right \|_{\mathbb{C}^d} \left \| V(I_{\ell})^{\frac{1}{2}}\left \langle V \right \rangle_{I_{\ell}}^{-1} \left
\langle V g \right \rangle_{I_{\ell}} \right \|_{\mathbb{C}^d}\\
& = A_1 |I_k|^{\frac{1}{2}}
\left \| \left \langle W \right \rangle_{I_k}^{-\frac{1}{2}}
\left \langle Wf \right \rangle_{I_k} \right \|_{\mathbb{C}^d}
|I_{\ell}|^{\frac{1}{2}} \left \| \left \langle V \right \rangle_{I_{\ell}}^{-\frac{1}{2}} \left
\langle V g \right \rangle_{I_{\ell}} \right \|_{\mathbb{C}^d} \\
& \le A_1C(d) \|f \|_{L^2(W)} \| g \|_{L^2(V)} , \end{aligned} \] by Lemma \ref{lem:expectbd}. This immediately implies the desired bound for $\left \langle T_W f_2 , g_2 \right \rangle_{L^2(V)}.$ The mixed terms are similarly straightforward. Specifically, observe that
\[ \left | \left \langle T_W f_2 , g_1 \right \rangle_{L^2(V)} \right | \le \| g_1 \|_{L^2(V)} \sum_{k=1}^2 \left \| T_W E^W_{I_k}f\right \|_{L^2(V)} \le A_1 C(d) \|f \|_{L^2(W)} \| g \|_{L^2(V)} ,\] using the arguments that appeared in the previous bound. Similarly, \begin{eqnarray*}
\left | \left \langle T_W f_1 , g_2 \right \rangle_{L^2(V)} \right | = \left |\left \langle f_1 ,T^*_V g_2 \right \rangle_{L^2(W)} \right | & \le & \| f_1 \|_{L^2(W)} \sum_{\ell =1}^2 \left \| T_V^* E^V_{I_{\ell}} g \right \|_{L^2(W)}\\
& \le & A_2 C(d) \|f \|_{L^2(W)} \| g \|_{L^2(V)} , \end{eqnarray*} using Lemma \ref{lem:expectbd} and the testing condition on $T^*_V.$ This completes the proof. \end{proof}
We now turn to the proof of Theorem \ref{thm:WellLoc2}.
\begin{proof} This theorem is established in basically the same manner as Theorem \ref{thm:WellLoc}. We simply need to check that the weaker conditions $(i)$ and $(ii)$ in Theorem \ref{thm:WellLoc2} allow us to deduce the same estimates. As before, we establish boundedness by duality as in \eqref{eqn:opineq}, fix $f,g$ compactly supported in $I_1 \cup I_2$ with $|I_1|=|I_2|=2^m,$ and decompose \[ f = f_1 +f _2 \ \text{ and } g = g_1 + g_2 \] as in \eqref{eqn:fdecomp} and \eqref{eqn:gdecomp}. As before, \[ \begin{aligned} \left \langle T_W f_1, g_1 \right \rangle_{L^2(V)} &= \left \langle \Pi^W f_1, g_1 \right \rangle_{L^2(V)} + \left \langle f_1, \Pi^V g_1 \right \rangle_{L^2(W)} \\
& \ \ \ \ + \sum_{\substack{I: |I| \le 2^m \\ 1 \le i \le d}} \sum_{\substack{J: |J| \le 2^m \\ 2^{-r}|I| \le |J| \le 2^{r} |I| \\ 1 \le j \le d}}\left \langle f, h^{W,i}_I \right \rangle_{L^2(W)} \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)}. \end{aligned} \] The first two terms can be controlled by testing hypothesis $(i)$ and Lemma \ref{lem:parbdd}. For the sum, we can use Lemma \ref{lem:weightedbdd} and testing hypothesis $(ii)$ to conclude
\[ \left | \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \right | \le C(d) A_3. \] Since $T_W$ is still well-localized with radius $r$, we can use the strategy from the proof of Theorem \ref{thm:WellLoc} to immediately conclude:
\[ \left| \left \langle T_W f_1, g_1 \right \rangle_{L^2(V)} \right | \le 2^{2r} C(d) \left(A_1B(W) + A_2 B(V) +A_3 \right) \| f \|_{L^2(W)} \| g \|_{L^2(V)}. \]
The other terms are also straightforward. First observe that since $|I_k | = |I_{\ell}|$, assumption $(ii)$ paired with Lemma \ref{lem:expectbd} implies that for each $k, \ell$: \begin{align} \nonumber
\left | \left \langle T_W E^W_{I_k} f , E^V_{I_{\ell}} g \right \rangle_{L^2(V)} \right | & \le A_3
\left \| W(I_k)^{\frac{1}{2}} \left \langle W\right \rangle^{-1}_{I_k} \left \langle Wf \right \rangle_{I_k} \right \|_{\mathbb{C}^d}
\left \| V(I_{\ell})^{\frac{1}{2}} \left \langle V\right \rangle^{-1}_{I_{\ell}} \left \langle Vg \right \rangle_{I_{\ell}} \right \|_{\mathbb{C}^d} \nonumber \\
& = A_3 |I_k|^{\frac{1}{2}} \left \| \left \langle W \right \rangle_{I_k}^{-\frac{1}{2}} \left \langle Wf \right \rangle_{I_k} \right
\|_{\mathbb{C}^d}|I_{\ell}|^{\frac{1}{2}} \left \| \left \langle V \right \rangle_{I_{\ell}}^{-\frac{1}{2}} \left \langle Vg \right \rangle_{I_\ell} \right
\|_{\mathbb{C}^d} \nonumber \\
&\le A_3 C(d) \| f \|_{L^2(W)} \| g \|_{L^2(V)}. \label{eqn:est1} \end{align} This immediately gives the desired bound for $\left \langle T_W f_2, g_2 \right \rangle _{L^2(V)}$. The mixed terms require a bit more work. We consider $\left \langle T_W f_2, g_1 \right \rangle _{L^2(V)}$. The other term can be handled analogously. Observe that \begin{eqnarray} \nonumber
\left| \left \langle T_W f_2, g_1 \right \rangle _{L^2(V)} \right|
&\le& \displaystyle \sum_{k=1}^2 \sum_{\substack{J: |J| \le 2^m \\ 1 \le j \le d}} \left | \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_WE^W_{I_k} f, h^{V,j}_J \right \rangle_{L^2(V)} \right | \\
\label {eqn:sum1} &=&\displaystyle \sum_{k=1}^2 \sum_{\substack{J: J \subseteq I_k \\ 1 \le j \le d}} \left | \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_WE^W_{I_k} f, h^{V,j}_J \right \rangle_{L^2(V)} \right | \\
\label{eqn:sum2}&& \ \ \ \ +\displaystyle \sum_{k=1}^2 \sum_{\substack{J: |J| \le 2^m, J \not \subset I_k \\ 1 \le j \le d}} \left | \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_WE^W_{I_k} f, h^{V,j}_J \right \rangle_{L^2(V)} \right |. \end{eqnarray} We have to handle \eqref{eqn:sum1} and \eqref{eqn:sum2} separately. To handle \eqref{eqn:sum1}, simply use Cauchy-Schwarz, Lemma \ref{lem:expectbd}, and assumption $(i)$ to conclude \[ \begin{aligned}
\displaystyle \sum_{k=1}^2 \sum_{\substack{J: J \subseteq I_k \\ 1 \le j \le d}} \left | \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_WE^W_{I_k} f, h^{V,j}_J \right \rangle_{L^2(V)} \right |
&\le \sum_{k=1}^2 \left \| \textbf{1}_{I_k} T_WE^W_{I_k} f \right \|_{L^2(V)} \left \| \textbf{1}_{I_k} g \right \|_{L^2(V)} \\
& \le A_1 \| g \|_{L^2(V)} \sum_{k=1}^2 |I_k|^{\frac{1}{2}} \left \| \left \langle W \right \rangle_{I_k}^{-\frac{1}{2}} \left \langle Wf \right \rangle_{I_k} \right
\|_{\mathbb{C}^d} \\
& \le A_1 C(d) \| f \|_{L^2(W)} \| g \|_{L^2(V)}. \end{aligned} \] Now, consider \eqref{eqn:sum2}. Since $T_W$ is well-localized with radius $r$, one can easily show that for each $I_k$, there are at most a fixed constant times $2^{2r}$ intervals $J$ that satisfy \[ \left \langle T_WE^W_{I_k} f, h^{V,j}_J \right \rangle_{L^2(V)} \ne 0,\]
$|J| \le 2^m,$ and $J \not \subset I_k.$ Indeed, for the inner product to be nonzero, $J$ must satisfy $J \subset I_k^{(r+1)}$ and $|J| >2^{-r}|I_k|.$ Now, using assumption $(ii),$ Lemma \ref{lem:expectbd}, and Lemma \ref{lem:haarbound}, we can establish the following sequence: \[ \begin{aligned}\displaystyle \eqref{eqn:sum2}
&= \sum_{k=1}^2 \sum_{\substack{J: 2^{-r} |I_k| < |J| \le |I_k| \\ J \subset I_k^{(r+1)},\, J \not \subset I_k \\ 1 \le j \le d}} \left | \left \langle g, h^{V,j}_J \right \rangle_{L^2(V)} \left \langle T_WE^W_{I_k} f, h^{V,j}_J \right \rangle_{L^2(V)} \right | \\
& \le \| g \|_{L^2(V)} \sum_{k=1}^2 \sum_{\substack{J: 2^{-r} |I_k| < |J| \le |I_k| \\ J \subset I_k^{(r+1)},\, J \not \subset I_k \\ 1 \le j \le d}} \left | \left \langle T_WE^W_{I_k} f, h^{V,j}_J \right \rangle_{L^2(V)} \right |\\
& \le A_3 \| g \|_{L^2(V)} \sum_{k=1}^2 \sum_{\substack{J: 2^{-r} |I_k| < |J| \le |I_k| \\ J \subset I_k^{(r+1)},\, J \not \subset I_k \\ 1 \le j \le d}} |I_k|^{\frac{1}{2}} \left \| \left \langle W \right \rangle_{I_k}^{-\frac{1}{2}} \left \langle Wf \right \rangle_{I_k} \right
\|_{\mathbb{C}^d} \left \| V(J_-)^{\frac{1}{2}} h_J^{V,j}(J_-) \right \|_{\mathbb{C}^d} \\
&\ \ \ \ + A_3 \| g \|_{L^2(V)} \sum_{k=1}^2 \sum_{\substack{J: 2^{-r} |I_k| < |J| \le |I_k| \\ J \subset I_k^{(r+1)},\, J \not \subset I_k \\ 1 \le j \le d}} |I_k|^{\frac{1}{2}} \left \| \left \langle W \right \rangle_{I_k}^{-\frac{1}{2}} \left \langle Wf \right \rangle_{I_k} \right
\|_{\mathbb{C}^d}
\left \| V(J_+)^{\frac{1}{2}} h_J^{V,j}(J_+) \right \|_{\mathbb{C}^d} \\
& \le 2^{2r} C(d) A_3 \|g \|_{L^2(V)} \| f \|_{L^2(W)}, \end{aligned}
\] which completes the proof. \end{proof}
\begin{remark} \label{remark:definition} As mentioned earlier, our definition of well-localized is slightly different from the one appearing in \cite{ntv08}, where Nazarov-Treil-Volberg only impose conditions on $T_W$ when $|J| \le |I|,$ rather than $|J| \le 2 |I|.$ The difference is likely attributable to a typographical error and their ideas are essentially correct.
However, to see why imposing conditions on only $|J| \le |I|$ is not quite sufficient, let us consider the role of the well-localized property in the proofs of Theorems \ref{thm:WellLoc} and \ref{thm:WellLoc2}.
It is used to show that for each fixed $I$, there is at most a finite number of $J$ with $2^{-r}|I| \le |J| \le 2^{r} |I|$ such that
\[ \left | \left \langle T_W h^{W,i}_I, h^{V,j}_J \right \rangle_{L^2(V)} \right | \ne 0.\]
This allows one to control related sums given in \eqref{eqn:diagonal2}. However, the definition of well-localized given by Nazarov-Treil-Volberg is not quite enough for this, as it does not handle the case where $|I|=|J|.$ In this case, one would need control over terms such as
\[ \left | \left \langle T_W h^{W,i}_I(I_+) \textbf{1}_{I_+}, h^{V,j}_J \right \rangle_{L^2(V)} \right| \text{ or } \left | \left \langle h^{W,i}_I, T_V^* h^{V,j}_J(J_+)\textbf{1}_{J_+} \right \rangle_{L^2(W)} \right|, \]
which are not addressed in their definition of well-localized since $|I_+| <|J|$ and $|J_+| < |I|.$ This case is no longer a problem if we impose conditions on all $I,J$ with $|J| \le 2|I|$ as in Definition \ref{def:wellloc}. For an example of what can go wrong, fix $K_0 \in \mathcal{D}$ and a sequence $\{c_K\}$ in $\ell^2(\mathcal{D})$ all of whose terms are nonzero, and define the operator $T: L^2(\mathbb{R}) \rightarrow L^2(\mathbb{R})$ by
\[ T h_{K_0} \equiv \sum_{K: |K|=|K_0|} c_K h_K \ \text{ and } \ \ T h_L \equiv 0 \text{ for } L \ne K_0. \]
It is not difficult to show $T$ is well-localized (with radius $0$) from $L^2(\mathbb{R})$ to $L^2(\mathbb{R})$ according to the definition in \cite{ntv08}. Indeed, if $|J| \le |I|$, then \[ \left \langle T \textbf{1}_I, h_J \right\rangle_{L^2} = 0= \left \langle T^* \textbf{1}_I, h_J \right \rangle_{L^2}.\] To see these equalities, first write \[ \textbf{1}_I =\sum_{K: I \subsetneq K} \left \langle \textbf{1}_I, h_K \right \rangle_{L^2} h_K. \]
Thus, if $I$ is not strictly contained in $K_0$, then $T\textbf{1}_I =0.$ So, we can assume $I \subsetneq K_0.$ Then $|J| \le |I| < |K_0|$ so
\[\left \langle T \textbf{1}_I, h_J \right \rangle_{L^2} = \sum_{K:|K| = |K_0|} \left \langle \textbf{1}_I, h_{K_0} \right \rangle_{L^2} c_K \left \langle h_K, h_J \right \rangle_{L^2} =0.\]
Now consider $T^*.$ If $|J| \le |I|$ and $J \ne K_0$, then \[ \left \langle T^* \textbf{1}_I, h_J \right \rangle_{L^2} = \left \langle \textbf{1}_I, T h_J \right \rangle_{L^2} = \left \langle \textbf{1}_I, 0 \right \rangle_{L^2}=0 \] immediately. If $J =K_0$, then
\[ \left \langle T^* \textbf{1}_I, h_J \right \rangle_{L^2} = \sum_{K: K=|K_0|} \overline{c_K} \left \langle \textbf{1}_I, h_K \right \rangle_{L^2} =0,\]
since $|K_0|= |J| \le |I|$ implies $K \subseteq I$ or $K \cap I =0.$ However, for this operator $T$, \[ \left \langle T h_{K_0}, h_J \right \rangle_{L^2} =c_J \ne 0, \]
for all $J$ with $|J| = |K_0|$. Since there are infinitely many such $J$, we could not use the well-localized property to control the sums from \eqref{eqn:diagonal2} for this operator. \end{remark}
\begin{remark} In this paper, we only considered band operators defined on $L^2(\mathbb{R}, \mathbb{C}^d).$ However, we anticipate that these T1 theorems will generalize without substantial difficulty to band operators on $L^2(\mathbb{R}^n, \mathbb{C}^d)$. One must define a slightly more complicated Haar system, but in general, the tools and proof strategy seem to work without issue. \end{remark}
\end{document}
Uspekhi Matematicheskikh Nauk
1980, Volume 35, Issue 2(212)
Cluster expansions in lattice models of statistical physics and the quantum theory of fields
V. A. Malyshev 3
A survey of Linnik's large sieve and the density theory of zeros of $L$-functions
A. F. Lavrik 55
The actions of groups and Lie algebras on non-commutative rings
V. K. Kharchenko 67
On the sums of trigonometric series
I. N. Pak 91
Asymptotic representation of orthogonal polynomials
B. L. Golinskii 145
David Aleksandrovich Kveselava (obituary)
N. P. Vekua, A. A. Dorodnitsyn, V. D. Kupradze, M. A. Lavrent'ev, G. S. Litvinchuk, T. A. Èbanoidze 197
In the Moscow Mathematical Society
Communications of the Moscow Mathematical Society
A Cantor limit set
Yu. S. Barkovskii, G. M. Levin 201
The theory of residues in commutative algebras
M. M. Vinogradov 203
The Poincaré polynomial of the space of form-residues on a quasi-homogeneous complete intersection
V. V. Goryunov 205
Mobility and extension of fundamental sequences
S. Kotanov 207
Necessary optimality conditions in smoothly-convex problems with operator constraints
L. I. Krechetov 209
The absence of $L_2$-solutions for periodic partial differential equations
P. A. Kuchment 211
The boundary of a set of stable matrices
L. V. Levantovskii 213
An averaging principle and a theorem on large deviations for a family of extensions of a $Y$-flow
V. B. Minasyan 215
Classification of flags of foliations
N. M. Mishachev 217
Summability to $+\infty $ of Haar and Walsh series
N. B. Pogosyan 219
The theory of Jordan algebras with a minimum condition
A. M. Slin'ko 221
Rings whose quotient rings are all semi-injective
A. A. Tuganbaev 223
Cohomology of groups and algebras of flows
B. L. Feigin 225
A bound for the measure of the mutual transcendence of values of $E$-functions connected by arbitrary algebraic equations over $\mathbb{C}(z)$
A. B. Shidlovskii 227
Extensions of algebraic groups that are transitive on projective varieties
M. T. Èl'baradi 229
Mathematical Events in the USSR
Konstantin Ivanovich Babenko (on his sixtieth birthday)
L. R. Volevich, G. P. Voskresenskii, A. V. Zabrodin, A. N. Kolmogorov, O. A. Oleinik, V. M. Tikhomirov 231
Nikolai Aleksandrovich Shanin (on his sixtieth birthday)
S. Yu. Maslov, Yu. V. Matiyasevich, G. E. Mints, V. P. Orevkov, A. O. Slisenko 241
Georgii Dmitrievich Suvorov (on his sixtieth birthday)
P. P. Belinskii, V. I. Belyi, V. Ya. Gutlyanskii, M. A. Lavrent'ev, B. V. Shabat 247
Sessions of the Petrovskii Seminar on differential equations and problems of mathematical physics
N. V. Krylov, M. V. Safonov, V. P. Maslov, L. A. Bunimovich, Ya. G. Sinai, S. M. Kozlov, V. E. Zakharov, A. G. Aslanyan, D. G. Vasil'ev, V. B. Lidskii, R. I. Nigmatulin, V. M. Petkov 251
The Sixth Soviet–Czechoslovak Meeting on Applications of Methods of Function Theory and Functional Analysis to the Equations of Mathematical Physics and Computational Mathematics
J. Brilla, V. N. Maslennikova, V. S. Sarkisyan 257
Reviews and Bibliography
New books on mathematics
L. P. Kalinina 263
Correction to the paper: "The problem of mass transfer with a discontinuous cost function and a mass statement of the duality problem for convex extremal problems"
V. L. Levin, A. A. Milyutin 275 | CommonCrawl |
\begin{document}
\date{\today} \title{Computation on Elliptic Curves with Complex Multiplication} \author[P. Clark \and P. Corn \and A. Rice \and J. Stankewicz ]{Pete L. Clark \and Patrick Corn \and Alex Rice \and James Stankewicz }
\begin{abstract}
We give the complete list of possible torsion subgroups of elliptic curves with complex multiplication over number fields of degree 1-13. Additionally we describe the algorithm used to compute these torsion subgroups and its implementation.
\end{abstract} \maketitle \section{Introduction}
\subsection{The main results} The goal of this paper is to present a complete list of possible torsion subgroups of elliptic curves with complex multiplication over number fields of small degree. Our main tool is an algorithm whose input is a positive integer $d$. The output is a (necessarily finite) list of isomorphism classes of finite abelian groups $G$ such that $G$ is isomorphic to $E(K)[\operatorname{tors}]$ for some number field $K$ of degree $d$ and some elliptic curve $E$ defined over $K$ with complex multiplication.
Our algorithm requires a complete list of imaginary quadratic fields of class number $h$ for all integers $h$ which properly divide $d$. Fortunately, M. Watkins \cite{Watkins} has enumerated all imaginary quadratic fields with class number $h\le 100$, which would in theory allow us to run our algorithm for all $d \le 201$ (and for infinitely many other values of $d$, for instance all prime values).
We implemented our algorithm using the MAGMA programming language and ran it on Unix servers in the University of Georgia Department of Mathematics. The result, after doing some additional analysis, is a complete list of torsion subgroups for degree $d$ with $1\le d \le 13$. This list, for each degree $d$, is described in Section~\ref{lists}.$d$.
For $d=1$ these computations were first done by L. Olson in 1974 \cite{Olson}, whereas for $d=2$ and $3$ they are a special case of work of H. Zimmer and his collaborators over a ten year period from the late 1980’s to the late 1990’s \cite{Zimmer1}, \cite{Zimmer2}, \cite{Zimmer3}. We believe that our results are new for $4 \le d \le 13$.
This work was begun during a VIGRE research group led by Pete L. Clark and Patrick Corn and attended by Brian Cook, Steve Lane, Alex Rice, James Stankewicz, Nathan Walters, Stephen Winburn and Ben Wyser at the University of Georgia. Alex Rice, James Stankewicz, Nathan Walters and Ben Wyser were partially supported by NSF VIGRE grant DMS-0738586 during this work. James Stankewicz was also partially supported by the Van Vleck fund at Wesleyan University. Special thanks go to Jon Carlson, who offered use of his MAGMA server and invaluable support with coding in MAGMA. Thanks to Bianca Viray for a helpful discussion of the proof of Lemma \ref{SPECIALIZATION}. Thanks also go to Andrew Sutherland, whose interest in this project demanded that this paper be polished into publishable form.
\subsection{Connections to prior work} According to the celebrated \textbf{uniform boundedness theorem} of L. Merel \cite{Merel}, for any fixed $d \in \mathbf{Z}_{> 0}$, the supremum of the size of all rational torsion subgroups of all elliptic curves defined over all number fields of degree $d$ is finite.
In 1977, B. Mazur proved uniform boundedness for $d = 1$ (i.e., for elliptic curves $E_{/\mathbf{Q}}$) \cite{Mazur}. Moreover, Mazur gave a complete classification of the possible torsion subgroups:
\[ E(\mathbf{Q})[\operatorname{tors}]\in \begin{cases}\mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 1,\dots,10,12, \\ \mathbf{Z}/2\mathbf{Z} \oplus\mathbf{Z}/2m\mathbf{Z} & \text{for $m = 1, \ldots, 4$.} \end{cases} \]
Work of Kamienny \cite{Kamienny86}, \cite{Kamienny92} and of Kenku and Momose \cite{KenkuMomose} gives the following result when $K$ is a quadratic number field:
\[ E(K)[\operatorname{tors}] \in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m=1,\dots,16,18, \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2m\mathbf{Z} & \textrm{for } m = 1,\dots,6, \\ \mathbf{Z}/3\mathbf{Z} \oplus \mathbf{Z}/m\mathbf{Z} & \text{for $m=3,6,$} \\ \textrm{and} & \mathbf{Z}/4\mathbf{Z} \oplus \mathbf{Z}/4\mathbf{Z}. \end{cases} \]
This and similar subsequent enumeration results over varying number fields are to be understood in the following sense. First, for any quadratic field $K$ and any elliptic curve $E_{/K}$, the torsion subgroup of $E(K)$ is isomorphic to one of the groups listed. Second, for each of the groups $G$ listed, there exists at least one quadratic field $K$ and an elliptic curve $E_{/K}$ with $E(K)[\operatorname{tors}] \cong G$. A complete classification of torsion subgroups of elliptic curves over cubic fields is not yet known.
Further results come from focusing on particular classes of elliptic curves. Notably H. Zimmer and his collaborators have done extensive computations on torsion in elliptic curves with $j$-invariant in the ring of algebraic integers. In \cite{Zimmer1}, M\"uller, Stroher and Zimmer proved that in the case of integral $j$-invariant, if $K$ is a quadratic number field then \[ E(K)[\operatorname{tors}] \in \begin{cases} \mathbf{Z}/m\mathbf{Z} &\textrm{for } m=1,\dots,8,10, \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/m\mathbf{Z} &\textrm{for }m=2,4,6, \\ \textrm{and} &\mathbf{Z}/3\mathbf{Z} \oplus \mathbf{Z}/3\mathbf{Z}.\end{cases} \]
In \cite{Zimmer3} Peth\"o, Weis and Zimmer showed that if $E$ has integral $j$-invariant and $K$ is a cubic number field then \[ E(K)[\operatorname{tors}] \in \begin{cases} \mathbf{Z}/m\mathbf{Z} &\textrm{for } m=1,\dots,10,14, \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/m\mathbf{Z} & \textrm{for } m=2,4,6.\end{cases} \]
Here we study elliptic curves with complex multiplication. Such curves form a subclass of curves with integral $j$-invariant \cite[Theorem II.6.4]{SilvermanII}, so our results are subsumed by the above results for $d \le 3$; but, as we will see, the CM hypothesis allows us to extend our computations to higher values of $d$, up to $d=13$.
\section{Background}\label{background}
\subsection{Kubert-Tate normal form} The fundamental result on which our algorithm rests is the following elementary theorem, which gives a parameterization of all elliptic curves with an $N$-torsion point for $N \ge 4$.
\begin{theorem} \label{kubert} (Kubert) Let $E$ be an elliptic curve over a field $K$ and $P \in E(K)$ a point of order at least $4$. Then $E$ has an equation of the form \begin{equation} \label{knf} y^2 + (1-c)xy - by = x^3-bx^2 \end{equation} for some $b,c \in K$, and $P = (0,0)$. \end{theorem}
\begin{proof} This first appeared in \cite{Kubert}. See for instance \cite{Zimmer1}, \S3.\end{proof}
We will call the equation (\ref{knf}) the {\em Kubert-Tate normal form} of $E$, and our notation for a curve in Kubert-Tate normal form with parameters $b, c$ as above will be simply $E(b,c)$. The $j$-invariant of this elliptic curve is \begin{equation}\label{jinv} j(b,c) = \frac{(16b^2 + 8b(1-c)(c+2) + (1-c)^4)^3}{b^3(16b^2 - b(8c^2+20c-1) - c(1-c)^3)}. \end{equation}
\begin{remark} This form is unique for a given curve with a fixed point of order at least 4. In practice we use this to find elliptic curves with some primitive $N$-torsion point, so an elliptic curve $E$ may have several different Kubert-Tate normal forms (all defining isomorphic curves), depending on which torsion point we choose to send to $(0,0)$. \end{remark}
\begin{example} \label{Example1} Here are some small multiples of the point $(0,0)$ on $E(b,c)$: \[ [2](0,0) = (b,bc), \] \[ [3](0,0) = (c,b-c), \] \[ [4](0,0) = \left( \frac{b(b-c)}{c^2}, \frac{b^2(c^2+c-b)}{c^3} \right), \] \[ [5](0,0) = \left( \frac{bc(c^2+c-b)}{(b-c)^2}, \frac{bc^2(b^2-bc-c^3)}{(b-c)^3} \right), \] \[ [6](0,0) = \left(\frac{(b-c)(b^2 - bc - c^3)}{A^2}, \frac{c(b-c)^2(2b^2- b c(c-3) + c^{2})}{A^3} \right), \]
\[ [7](0,0) = \left( \frac{Abc((b-c)^2 + Ab)}{(b^2-bc-c^3)^2}, \frac{(Ab)^2((b-c)^3 + c^3A)}{(b^2-bc-c^3)^3} \right), \] where $A = b-c-c^2$. In particular we see that for $N \leq 3$, $(0,0)$ cannot be an $N$-torsion point on $E(b,c)$. \end{example}
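These multiples are straightforward to reproduce with any computer algebra system. The following sketch (in Python with SymPy, purely for illustration; our actual computations were carried out in MAGMA) implements the standard Weierstrass addition formulas, specialized to $E(b,c)$ over $\mathbf{Q}(b,c)$, and recovers the multiples of $(0,0)$ listed above.

\begin{verbatim}
# Illustrative sketch (SymPy), not the MAGMA code used for our computations:
# the Weierstrass addition formulas on y^2 + (1-c)xy - by = x^3 - bx^2,
# i.e. a1 = 1-c, a2 = -b, a3 = -b, a4 = a6 = 0, over the function field Q(b,c).
from sympy import symbols, cancel

b, c = symbols('b c')
a1, a2, a3, a4, a6 = 1 - c, -b, -b, 0, 0

def add(P, Q):
    """Add two affine points; None stands for the point at infinity O."""
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P
    x2, y2 = Q
    if cancel(x1 - x2) == 0:
        if cancel(y1 + y2 + a1*x2 + a3) == 0:    # Q = -P, so P + Q = O
            return None
        lam = cancel((3*x1**2 + 2*a2*x1 + a4 - a1*y1) / (2*y1 + a1*x1 + a3))
        nu  = cancel((-x1**3 + a4*x1 + 2*a6 - a3*y1) / (2*y1 + a1*x1 + a3))
    else:
        lam = cancel((y2 - y1) / (x2 - x1))
        nu  = cancel((y1*x2 - y2*x1) / (x2 - x1))
    x3 = cancel(lam**2 + a1*lam - a2 - x1 - x2)
    y3 = cancel(-(lam + a1)*x3 - nu - a3)
    return (x3, y3)

P, mult = (0, 0), {1: (0, 0)}
for n in range(2, 6):
    mult[n] = add(mult[n - 1], P)

print(mult[2])   # expect (b, b*c)
print(mult[3])   # expect (c, b - c)
print(mult[5])   # expect (b*c*(c^2+c-b)/(b-c)^2, ...)
\end{verbatim}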
\subsection{Modular curves} The affine modular curve $Y_1(N)$ for $N \ge 4$ is a fine moduli space for pairs $(E,P)$ where $E$ is an elliptic curve and $P$ is a point of exact order $N$ on $E$. We will search for CM-points on $Y_1(N)$ for various values of $N\ge 4$; that is, points over various number fields which correspond to CM elliptic curves with an $N$-torsion point (the $Y_1(N)$ for $1\le N\le 3$ are coarse moduli spaces and so will only give us the information we desire over an algebraically closed field). Kubert curves give a down-to-earth way of constructing a defining equation for $Y_1(N)$.
\begin{definition} Let $\mathbf{Q}(b,c)$ be a rational function field, and let $E_{/\mathbf{Q}(b,c)}$ denote the elliptic curve given by equation (\ref{knf}). If $N\ge 3$ is an integer, let $n_1,d_1,n_2,d_2\in \mathbf{Q}[b,c]$ be such that $(n_i,d_i) =1$, $d_i$ is monic, and $$x\left(\left[\left\lceil\dfrac{N}{2}\right\rceil -1\right](0,0)\right) = \dfrac{n_1(b,c)}{d_1(b,c)}, x\left(\left[\left\lfloor\dfrac{N}{2} \right\rfloor+1\right](0,0)\right) = \dfrac{n_2(b,c)}{d_2(b,c)}.$$
Then we let $f_N(b,c) = n_1d_2 - n_2d_1 \in \mathbf{Q}[b,c]$. \end{definition}
\begin{lemma} \label{SPECIALIZATION} Let $k$ be a field, let $Y_{/k}$ be an integral algebraic variety, let $q: A \rightarrow Y$ be a relative abelian variety, and let $y$ be a closed point of $Y$. Then the specialization map $\mathfrak{s}: A(K(Y)) \rightarrow A_y(k(y))$ is a group homomorphism. \end{lemma} \begin{proof} This result appears in \cite[p. 40]{Lang}. Lang's (wonderful) text is rather informally written: many results, including this one, are given there without proof or reference. For the convenience of the reader we give a proof. \\ Step 1: Suppose $Y$ is a nonsingular curve. Then $A_{/Y}$ is equal to the \textbf{N\'eron model} of its generic fiber, so the map $\mathfrak{s}$ is a homomorphism by \cite[Proposition I.2.8]{BLR}. \\ Step 2: Suppose $Y$ is a singular curve. Let $\pi: \tilde{Y} \rightarrow Y$ be its normalization, and let $\tilde{y}$ be a closed point of $\tilde{Y}$ with $\pi(\tilde{y}) = y$. Let $\tilde{A} = \pi^*(q) \rightarrow \tilde{Y}$ be the pullback of the family to $\tilde{Y}$. Then the fiber of $\tilde{A}$ over $\tilde{y}$ is canonically identified with the fiber of $A$ over $y$, and thus the specialization map $\tilde{s}: \tilde{A}(K(\tilde{Y})) \rightarrow A_{\tilde{y}}(k(\tilde{y}))$ is canonically identified with $\mathfrak{s}$. We have reduced to Step 1. \\ Step 3: In the general case we choose a chain of closed irreducible subvarieties $Y_0 = \{y\} \subset Y_1 \subset \ldots Y_d = Y$ containing $y$, with $\dim Y_i = i$. We apply Step 2 repeatedly, specializing from the generic point of $Y_i$ to the generic point of $Y_{i-1}$. \end{proof}
\begin{lemma} \label{PICKYLEMMA} If $b_0,c_0 \in \overline \mathbf{Q}$ and $E(b_0,c_0)$ is an elliptic curve given by equation (\ref{knf}) then the point $(0,0)$ on $E(b_0,c_0)$ is an $N$-torsion point if and only if $f_N(b_0,c_0) = 0$.\end{lemma}
\begin{proof} By Example \ref{Example1} we must have $N \geq 4$. \\ Step 1: Suppose $[N](0,0) = O$. We claim that $[\lceil\frac{N}{2}\rceil-1](0,0)$ and $[\lfloor \frac{N}{2} \rfloor + 1](0,0)$ are finite and have equal $x$-coordinates. Indeed, if $[\lceil \frac{N}{2} \rceil - 1](0,0) = O$, then $(0,0)$ is $(2-2\lceil \frac{N}{2} \rceil +N)$-torsion, i.e., $2$-torsion if $N$ is even and $1$-torsion if $N$ is odd, contradicting Example 1. A similar argument shows that $[\lfloor \frac{N}{2} \rfloor + 1](0,0)$ is finite. Moreover, we have \[ \left[\left\lceil \frac{N}{2} \right\rceil -1\right](0,0) + \left[\left\lfloor \frac{N}{2} \right\rfloor + 1\right](0,0) = \left[\left\lceil \frac{N}{2} \right\rceil + \left\lfloor \frac{N}{2} \right\rfloor\right](0,0) = [N](0,0) = O, \] so $[\lceil\frac{N}{2}\rceil-1](0,0)) = -[\lfloor \frac{N}{2} \rfloor + 1](0,0)$. As for any points $P,Q$ on a Weierstrass elliptic curve,
$P = \pm Q$ if and only if $x(P) = x(Q)$, this establishes the claim. Now, since $x([\lceil \frac{N}{2} \rceil - 1](0,0)) = x( [\lfloor \frac{N}{2} \rfloor + 1 ](0,0)) \in \overline \mathbf{Q}$, if $d_1(b_0,c_0) = 0$ then also $n_1(b_0,c_0) = 0$ and hence $f_N(b_0,c_0) = n_1(b_0,c_0)d_2(b_0,c_0) - n_2(b_0,c_0)d_1(b_0,c_0) = 0$. Similarly if $d_2(b_0,c_0) = 0$. Finally, if $d_1(b_0,c_0) d_2(b_0,c_0) \neq 0$, then \[ x(\left[\left\lceil \frac{N}{2} \right\rceil - 1\right](0,0)) = \frac{n_1(b_0,c_0)}{d_1(b_0,c_0)} = \frac{n_2(b_0,c_0)}{d_2(b_0,c_0)} = x( \left[\left\lfloor \frac{N}{2} \right\rfloor + 1 \right](0,0)), \] so $f_N(b_0,c_0) = 0$. \\ Step 2: Suppose that $f_N(b_0,c_0) = 0$. Thus there must be at least one irreducible factor $g(b,c)$ of $f_N(b,c)$ such that $g(b_0,c_0) = 0$. Then the affine curve $Z$ with coordinate ring $\mathbf{Q}[b,c]/\langle g(b,c) \rangle$ is irreducible; let $Y$ be obtained from $Z$ by removing the finite set of closed points on which the Kubert curve $E(b,c)$ becomes singular. Then $E(b,c)$ gives a relative elliptic curve over $Y$, and the elliptic curve $E(b_0,c_0)$ is its specialization at the closed point $(b_0,c_0)$. By construction, $[N](0,0) = O$ on the generic fiber of $Y$, so by Lemma \ref{SPECIALIZATION}, $[N](0,0) = O$ on $E(b_0,c_0)$.
\end{proof}
\begin{example}\label{Example2} We will make use of the explicit formulas of Example \ref{Example1}. \\ a) Let $N = 4$. Setting $x((0,0)) = x([3](0,0))$ gives $f_4(b,c) = c$. \\ b) Let $N = 5$. The condition that $(0,0)$ is a $5$-torsion point is $b-c = 0$. Setting $x([2](0,0)) = x([3](0,0))$ gives $f_5(b,c) = b-c$. \\ c) Let $N = 7$. The condition that $(0,0)$ is a $7$-torsion point is $b^2-bc-c^3 = 0$. Setting $x([3](0,0)) = x([4](0,0))$ gives $f_7(b,c) = b(b-c) - c\cdot c^2 = b^2 - bc - c^3$. \end{example}
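These polynomials can be generated mechanically from the $x$-coordinates of Example \ref{Example1}. A short SymPy sketch (again only illustrative; it is not the code used for the computations in this paper):

\begin{verbatim}
# Illustrative sketch: build f_N = n1*d2 - n2*d1 from the x-coordinates of
# [m](0,0) listed in Example 1.  The f_N are only determined up to sign here.
from sympy import symbols, S, together, fraction, expand, factor

b, c = symbols('b c')

# x([m](0,0)) for m = 1,...,4, copied from Example 1
x = {1: S(0), 2: b, 3: c, 4: b*(b - c)/c**2}

def f(N):
    m1, m2 = (N + 1)//2 - 1, N//2 + 1      # ceil(N/2)-1 and floor(N/2)+1
    n1, d1 = fraction(together(x[m1]))
    n2, d2 = fraction(together(x[m2]))
    return expand(n1*d2 - n2*d1)

print(factor(f(4)))   # -c, i.e. f_4 = c up to sign
print(factor(f(5)))   # b - c
print(factor(f(7)))   # b^2 - b*c - c^3 up to sign
\end{verbatim}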
As Examples \ref{Example1} and \ref{Example2} illustrate, the complexity of the rational functions giving the coordinates of $[N](0,0)$ increases rapidly with $N$. Our trick of computing $[\lceil \frac{N}{2} \rceil -1](0,0)$ and $[\lfloor \frac{N}{2} \rfloor + 1](0,0)$ instead is therefore critical for extending the range of our calculations.
In general, $f_N(b,c) = 0$ is not the defining equation for $Y_1(N)$ as $[N]P = O$ implies only that $P$ has order $d$ for some $d|N$. The polynomial $f_N(b,c)$ will have as irreducible factors defining equations for $Y_1(d)$ for $d \mid N$, $d > 3$. However, a simple Moebius inversion will furnish such an equation. Although we do not explicitly write down equations for $Y_1(N)$ in our algorithm, one could do so with relative ease. A more sophisticated version of this computation has been undertaken by Andrew Sutherland \cite{Sutherland}.
\begin{example} We computed $4(0,0)$ and $2(0,0)$ as part of our above examples, so $f_6(b,c) = b^2 - bc - bc^2$. The divisors of $6$ are $1,2,3$ and $6$. Thus by Moebius inversion, the equation for $Y_1(6)$ in the $(b,c)$-plane is $b-c-c^2$, a smooth plane curve. For higher $N$, $Y_1(N)$ is not naturally a plane curve -- e.g. the genus of $Y_1(N)$ will usually not be of the form $\frac{(d-1)(d-2)}{2}$ -- and so there will often be singularities in this plane model. \end{example}
We note in general that if $d \ge 3$ and $d\mid N$ then $f_d \mid f_N$. This is easy to see using the group law when $d\ge 4$. For $d=3$ we can see this by computing on the elliptic curve $E(b,c)$ over the function field ${\mathbf{Q}(b,c)}$. Namely, if $(x,y)$ is any nonidentity point it is possible to compute that $[3](x,y) \pm (0,0)$ has $x$-coordinate with $b$-adic valuation 1. Therefore $b$ but not $b^2$ divides $f_N(b,c)$ when $3\mid N$. Therefore performing a Moebius inversion on the $f_N$ furnishes a factorization $f_N = \prod_{\stackrel{d \mid N}{d \ge 3}} \phi_d$ where if $N\ge 4$ then $\phi_N(b,c)$ is a defining equation for $Y_1(N)$ in the $(b,c)$-plane.
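The inversion itself is a routine computation. The following SymPy sketch (illustrative only) recovers $\phi_6$ from $f_3$ and $f_6$ in the manner just described; the dictionary of $f_d$ is taken from the example above.

\begin{verbatim}
# Illustrative sketch: Moebius inversion phi_N = prod_{3 <= d | N} f_d^{mu(N/d)},
# shown for N = 6 with f_3 = b and f_6 = b^2 - bc - bc^2.
from sympy import symbols, factorint, div

b, c = symbols('b c')

def mobius(n):
    fac = factorint(n)
    if any(e > 1 for e in fac.values()):
        return 0
    return (-1)**len(fac)

f = {3: b, 6: b**2 - b*c - b*c**2}

def phi(N):
    num, den = 1, 1
    for d, fd in f.items():
        if N % d == 0:
            m = mobius(N // d)
            if m == 1:
                num = num * fd
            elif m == -1:
                den = den * fd
    q, r = div(num, den, b, c)
    assert r == 0                      # the division is exact
    return q

print(phi(6))   # expect b - c - c**2 (up to sign), the equation of Y_1(6)
\end{verbatim}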
\subsection{Complex multiplication and bounds on $j$-invariants} If $E(b,c)$ is a CM elliptic curve defined over a number field of degree $d$, then its $j$-invariant $j(b,c)$ must lie in a number field of degree dividing $d$. The degree of $\mathbf{Q}(j(b,c))$ is equal to the class number of End $E$, which is an order in an imaginary quadratic field.
\begin{theorem} (Heilbronn, 1934) \cite{Heilbronn} For any positive integer $d$, there are only finitely many imaginary quadratic fields with class number $d$. \end{theorem}
\begin{corollary} For any positive integer $d$, there are only finitely many imaginary quadratic orders $\mathcal{O}$ such that $h(\mathcal{O}) \le d$. \end{corollary}
\begin{proof} Since every quadratic order $\mathcal{O}$ must be of the form $\mathbf{Z} + f\mathcal{O}_K$, this follows from Gauss's class number formula \cite[Thm 7.24]{Cox}.
\end{proof}
\begin{example}\label{full7} We find the least possible degree of a number field $K$ over which an elliptic curve with $j$-invariant 0 can have a point of order $7$. If we have such a curve $E$, we can find a pair $(b,c)\in K^2$ such that $E \cong E(b,c)$. Since $j(b,c) = 0$, we have \begin{equation}\label{jzeq} 16b^2+8b(1-c)(c+2)+(1-c)^4 = 0, \end{equation}
and since $(0,0)$ is a nontrivial $7$-torsion point, we have
\begin{equation}\label{f7eq} b^2-bc-c^3 = 0. \end{equation}
The real solutions to Equation \ref{jzeq} in the $(b,c)$ affine plane may be seen in Figure \ref{jzerofig}, and Equation \ref{f7eq} in Figure \ref{f7fig}.
\begin{figure}
\caption{The real $(b,c)$ such that $E(b,c)$ has $j$-invariant 0.}
\label{jzerofig}
\end{figure}
\begin{figure}
\caption{The real $(b,c)$ such that $(0,0)$ is a $7$-torsion point on $E(b,c)$.}
\label{f7fig}
\end{figure}
The resultant of these two polynomials with respect to $c$ is \[ (b^2 + b + 1)(b^6 - 325b^5 + 5518b^4 + 3655 b^3 + 718 b^2 + 51 b + 1). \] The roots of this \textit{Kubert resultant} identify the intersection points of our two affine curves, as shown in Figure \ref{overlayfig}. We should note here that the first irreducible factor has no real roots. Instead, the $b$-coordinates of the intersection points we see are four of the six real roots of the second factor. In any case, looking at the first irreducible factor over $\mathbf{Q}$, we see that we can take $b = \zeta_3$.
\begin{figure}
\caption{The real $(b,c)$ such that $(0,0)$ is a $7$-torsion point on $E(b,c)$ with $j$-invariant 0.}
\label{overlayfig}
\end{figure}
We plug in $\zeta_3$ for $b$ in the above polynomials and compute the greatest common divisor, which is $c+1$. So the elliptic curve $E(\zeta_3,-1)$ has a $7$-torsion point over $\mathbf{Q}(\zeta_3)$. That is, on the curve \[ y^2 + 2xy - \zeta_3 y = x^3 - \zeta_3 x^2, \] the point $(0,0)$ is a $7$-torsion point.\footnote{The reader who prefers standard Weierstrass models may verify that the origin corresponds to the $7$-torsion point $(12(1-\zeta_3), -108 \zeta_3)$ on the isomorphic elliptic curve $y^2 = x^3 - (1296\zeta_3 + 6480)$.} Moreover, this curve acquires full $7$-torsion over the degree-12 cyclotomic field $\mathbf{Q}(\zeta_{21})$. \end{example}
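The computations in this example are easily reproduced in any computer algebra system. The following SymPy sketch (illustrative, not the MAGMA code we actually ran) forms the Kubert resultant and verifies that $(b,c) = (\zeta_3,-1)$ satisfies both equations.

\begin{verbatim}
# Illustrative sketch: the Kubert resultant for j = 0 and N = 7, and a check
# that (b, c) = (zeta_3, -1) lies on both curves.
from sympy import symbols, resultant, factor_list, expand, sqrt, Rational, I

b, c = symbols('b c')

jzero = 16*b**2 + 8*b*(1 - c)*(c + 2) + (1 - c)**4   # numerator of j(b,c) set to 0
f7    = b**2 - b*c - c**3                            # (0,0) is a point of order 7

res = resultant(jzero, f7, c)     # eliminate c
print(factor_list(res))           # irreducible factors of degree 2 and 6 in b
                                  # (up to a constant, cf. the displayed product)

zeta3 = Rational(-1, 2) + sqrt(3)*I/2                # a primitive cube root of unity
print(expand(jzero.subs({b: zeta3, c: -1})))         # 0
print(expand(f7.subs({b: zeta3, c: -1})))            # 0
\end{verbatim}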
We generalize the above construction as follows: Writing $j(b,c) = \dfrac{n_j(b,c)}{d_j(b,c)}$ as the quotient of two polynomials, we see that there is an elliptic curve $E(b,c)$ with $j$-invariant $j_0$ and $[N](0,0) =O$ if and only if $(b,c)$ satisfy the equations \begin{align} n_j(b,c) &= j_0 d_j(b,c) \label{curves} \\ f_N(b,c) &= 0 .\notag \end{align}
If there are only finitely many pairs $(j_0,N)$ that we have to check, then since the resultant of these equations with respect to $c$ is a one-variable polynomial in $b$, there are only finitely many elliptic curves $E(b,c)$ over a small-degree number field with $j$-invariant $j_0$ and with $(0,0)$ an $N$-torsion point. To determine if $\mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}$ with $n \mid N$ is a torsion subgroup of an elliptic curve over a small-degree number field, we need only check the $n$-th {\em division polynomial} \cite[Exercise 3.7]{Silverman} to see if $E(b,c)$ acquires an additional $n$-torsion point over a small-degree number field. There are finitely many $n\mid N$ and for each such $n$, there are algorithms to compute the $n$-th division polynomial.
In this way, we see how a rough algorithm for enumerating torsion subgroups of CM elliptic curves presents itself. Fix a degree $d$, so that we aim to tabulate CM torsion subgroups over number fields of degree $d$. By Heilbronn's theorem, there are only finitely many $j$-invariants of elliptic curves with Complex Multiplication over all number fields of degree at most $d$. By Merel's bound, we have only finitely many possible torsion subgroups to check. Since there are only finitely many $j_0$ and $N$, the procedure described above terminates for each $d$.
We note here that Merel's bound is quite large and often impractical. We mention it only to note that the above procedure terminates for any finite number of $j$-invariants, CM or not. In the CM case, we have much better bounds to consider.
\subsection{Possible torsion of CM elliptic curves} Let $E$ be an elliptic curve over a number field $F$ with CM. If $E(F)$ contains an $N$-torsion point, then the size of $N$ is severely restricted by the degree of $F$; the following theorems of Silverberg and Prasad-Yogananda can be used to give an explicit upper bound on $N$.
\begin{theorem}\label{SPYBounds} (Silverberg, Prasad-Yogananda) Let $E$ be an elliptic curve over a number field $F$ of degree $d$, and suppose that $E$ has CM by the order $\mathcal{O}$ in the imaginary quadratic field $K$. Let $e$ be the exponent of the torsion subgroup of $E(F)$. Then
(a) $\varphi(e) \le w(\mathcal{O}) d$.
(b) If $K \subseteq F$, then $\varphi(e) \le w(\mathcal{O}) d/2$.
(c) If $K \nsubseteq F$, then $\varphi(\# E(F)[\operatorname{tors}]) \le w(\mathcal{O}) d$. \end{theorem}
\begin{proof} See \cite{Silverberg}, \cite{PY}. It can be deduced from Silverberg's work that all above occurrences of $w(\mathcal{O})$ may be replaced with $w(\mathcal{O})/h(\mathcal{O})$.\end{proof}
We will refer henceforth to the bounds obtained from the above theorem and proof as the {\em SPY bounds}. Using merely the bound of part (a) and the well-known inequality $\sqrt{N} \le \phi(N)$ for $N\ge 7$, we see that we need only consider values of $N$ that are at most $w(\mathcal{O})^2d^2$. The SPY bounds also lead us to expect that the largest torsion subgroups occur when $w(\mathcal{O})$ is largest, namely when $j=0,1728$.
Any bound on the exponent above gives a bound on the size of the torsion subgroup. If the exponent of $E(F)[\operatorname{tors}]$ is at most $N$, since $E(\mathbf{C})[\operatorname{tors}] \cong (\mathbf{Q}/\mathbf{Z})^2$ \cite[Corollary V.1.1]{Silverman}, we have $\# E(F)[\operatorname{tors}] \le N^2$. In fact, there exist integers $n\mid N$ such that $E(F)[\operatorname{tors}]\cong \mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}$. Moreover in that case, the Weil Pairing \cite[\S III.8]{Silverman} shows that $F \supset \mathbf{Q}(\zeta_n)$ and thus $\varphi(n) \mid [F:\mathbf{Q}] =d$.
In the case that $E$ has CM by $\mathcal{O}$, note that $j(E) \in F$ so that $\mathbf{Q}(j(E)) \subset F$ and thus $h(\mathcal{O}) \mid [F:\mathbf{Q}] = d$. Therefore, let $\deg = d/h(\mathcal{O})$. The strengthening of the SPY bounds as noted in the proof of Theorem \ref{SPYBounds} implies that if $e$ is the exponent of $E(F)[\operatorname{tors}]$ then $\varphi(e) \le w(\mathcal{O})\deg$. Note also that if $\deg$ is odd, $K\not\subset F$ since $K$ and $\mathbf{Q}(j(E))$ are linearly disjoint.
If $\deg =2$ then we may assume that $j \ne 0,1728$ because the possible groups in that case have already been determined \cite{Zimmer1}. Thus $w(\mathcal{O}) =2$, hence either $E(F)[\operatorname{tors}]$ is among the 12 possible torsion subgroups $G$ such that $\varphi(\# G) \le 4$ or $F$ is the compositum of $\mathbf{Q}(j(E))$ with $K$, otherwise known as the ring class field of $\mathcal{O}$. In the latter case, we have the following.
\begin{theorem}(Parish) Let $\mathcal{O}$ be an imaginary quadratic order, $j$ the $j$-invariant of an elliptic curve with CM by $\mathcal{O}$, $L = \mathbf{Q}(j)$ and $H$ the ring class field of $\mathcal{O}$. If $E$ is an elliptic curve defined over $H$ with CM by $\mathcal{O}$, then $E(H)[\operatorname{tors}]$ contains only points of order $1,2,3,4,$ or $6$. Moreover, if $E$ is defined over $L$ then $E(L)[\operatorname{tors}]$ can only be isomorphic to one of $0,\mathbf{Z}/2\mathbf{Z},\mathbf{Z}/3\mathbf{Z},\mathbf{Z}/4\mathbf{Z},\mathbf{Z}/6\mathbf{Z},$ or $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}$. \end{theorem}
\begin{proof} \cite[\S VI]{Parish}. \end{proof}
Much finer information is available in Parish's paper. Except for $j= 0$ and $\mathbf{Z}/3\mathbf{Z}\oplus\mathbf{Z}/3\mathbf{Z}$, each torsion subgroup $G$ which is possible over a ring class field has $\varphi(\#G) \le 4$. Note that as a further consequence, if $E$ is an elliptic curve with CM by $\mathcal{O}$ over a number field $F$ and $[F:\mathbf{Q}] = h(\mathcal{O})$, then the only possible torsion subgroups are those found in degree 1. By the Weil pairing, if $E(F)[\operatorname{tors}] \cong \mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}$ with $n\mid N$ then $\varphi(n) \mid d$. In most cases though, $\varphi(n)\mid \deg$. We can however determine exactly the intersection of $\mathbf{Q}(\zeta_n)$ with $\mathbf{Q}(j(\mathcal{O}))$ and thus get closer to the ideal that $\varphi(n) \mid \deg$.
Let $H$ denote the ring class field of $\mathcal{O}$ and $G$ the maximal abelian sub-extension of $H$ over $\mathbf{Q}$, which is necessarily multi-quadratic \cite[\S 6]{Cox}. Hence $G'$, the intersection of $\mathbf{Q}(j(\mathcal{O}))$ with $G$, must also be multiquadratic. Since any abelian extension must be contained in some $\mathbf{Q}(\zeta_m)$ \cite[Theorem 8.8]{Cox}, the intersection of $\mathbf{Q}(\zeta_n)$ with $\mathbf{Q}(j(\mathcal{O}))$ must be contained in $G'$. This $G'$ may be numerically determined via discriminants, but it is not computationally difficult to simply list the discriminants of the quadratic subfields of $\mathbf{Q}(j(\mathcal{O}))$, which are all necessarily real. If $\Delta$ is a discriminant of a real quadratic field $K$, then $K \subset \mathbf{Q}(\zeta_n)$ if and only if $\Delta \mid n$ \cite[Example V.3.11]{Milne}. Finally, we may determine inductively that if $M$ is a multi-quadratic field extension of degree $2^m$, then the number of quadratic sub-extensions is $2^m -1$.
\begin{function}\label{DegQJCyc} (\textsf{CyclotomicIntersectionDegree}) Let $\mathcal{O}$ be an imaginary quadratic order and $n$ a positive integer. \begin{enumerate} \item Let $L = \{ \operatorname{disc}(K) : [K:\mathbf{Q}]=2, K \subset \mathbf{Q}(j(\mathcal{O}))\}$, the discriminants of the quadratic subfields of the multi-quadratic field $G'$ above. \item Let $M = \{ D : D \in L, D \mid n\}$, the discriminants of the quadratic subfields of $G' \cap \mathbf{Q}(\zeta_n)$. \item Return \# $M + 1$. \end{enumerate} \end{function}
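A sketch of this function in Python (for illustration only; the list of discriminants is assumed to have been computed beforehand, as in our MAGMA implementation):

\begin{verbatim}
# Illustrative sketch of Function (CyclotomicIntersectionDegree).
# quad_discs: discriminants of the quadratic subfields of Q(j(O)) (the field G'
# above), all real; n: a positive integer.  Returns the degree of G' \cap Q(zeta_n).
def cyclotomic_intersection_degree(quad_discs, n):
    # a real quadratic field of discriminant D lies in Q(zeta_n) iff D divides n
    M = [D for D in quad_discs if n % D == 0]
    return len(M) + 1

# e.g. Q(sqrt 2) has discriminant 8 and lies in Q(zeta_8), hence in Q(zeta_40):
# cyclotomic_intersection_degree([8], 40) == 2
\end{verbatim}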
These steps restrict the groups which could possibly occur as torsion subgroups of an elliptic curve with CM by $\mathcal{O}$. We combine these steps into a function, which takes as input an imaginary quadratic order $\mathcal{O}$, a degree $d$, and a list of integers $N$ which could be the exponent of a torsion subgroup of an elliptic curve $E$ over a number field $F$ with CM by $\mathcal{O}$. The output of this function is a list of finite abelian groups $G$ such that $E(F)[\operatorname{tors}] \cong G$ for an elliptic curve $E$ with CM by $\mathcal{O}$.
\begin{function}\label{PossGroups} (\textsf{PossibleGroups}) Let $d\in \mathbf{Z}_{>1}$, $\mathcal{O}$ an imaginary quadratic order such that $h(\mathcal{O}) \mid d$, and $L$ a list of positive integers $N$. \begin{enumerate} \item Set $L' = \{ \mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z} : N \in L, n \mid N\},$ $h = h(\mathcal{O})$, and $\deg = \dfrac{d}{h}$. \item If $\deg =1$ then remove $\mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}$ from $L'$ unless $n=N =2$ or $n=1$ and $N \in \{1,2,3,4,6\}$. \item If $\deg =2$ then remove $\mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}$ from $L'$ unless either ($\mathcal{O} \cong \mathbf{Z}[\zeta_3]$ and $(N,n) = (3,3)$) or $\varphi(Nn) \le 4$. \item If $\deg >1$ is odd then remove $\mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}$ from $L'$ unless $\varphi(Nn) \le w(\mathcal{O})\deg$ and $\varphi(n) \mid d$. \item If $\deg >2$ is even then remove $\mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}$ from $L'$ unless $\varphi(n) \mid \deg \times\textsf{CyclotomicIntersectionDegree}(\mathcal{O},n)$ (Function \ref{DegQJCyc}). \item Return $L'$. \end{enumerate} \end{function}
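The following Python sketch transcribes this function step by step (illustrative only; the encoding of $\mathbf{Z}/N\mathbf{Z}\oplus\mathbf{Z}/n\mathbf{Z}$ as a pair $(N,n)$ and the parameter names are ours, not part of our MAGMA implementation).

\begin{verbatim}
# Illustrative sketch of Function (PossibleGroups).  A group Z/NZ + Z/nZ is
# encoded as a pair (N, n) with n | N.  Inputs: the degree d, the class number h
# and unit count w of O, a flag for O = Z[zeta_3], the quadratic-subfield
# discriminants of Q(j(O)) (as in the previous sketch), and candidate exponents L.
from sympy import totient

def possible_groups(d, h, w, is_Z_zeta3, quad_discs, L):
    deg = d // h
    cid = lambda n: 1 + len([D for D in quad_discs if n % D == 0])
    out = []
    for N in L:
        for n in (m for m in range(1, N + 1) if N % m == 0):
            if deg == 1:
                keep = (N, n) == (2, 2) or (n == 1 and N in (1, 2, 3, 4, 6))
            elif deg == 2:
                keep = (is_Z_zeta3 and (N, n) == (3, 3)) or totient(N*n) <= 4
            elif deg % 2 == 1:                       # deg > 1 odd
                keep = totient(N*n) <= w*deg and d % totient(n) == 0
            else:                                    # deg > 2 even
                keep = (deg * cid(n)) % totient(n) == 0
            if keep:
                out.append((N, n))
    return out
\end{verbatim}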
In Function \ref{PossGroups}, you will of course get the best results when the list $L$ is made up of integers $N$ which can be an order of a torsion point on an elliptic curve $E$ with CM by $\mathcal{O}$. Necessarily then, $\varphi(N) \le \dfrac{w(\mathcal{O}) d}{h(\mathcal{O})} = w(\mathcal{O})\deg$ by the SPY bounds. We also have another tool for ruling out possible orders of torsion.
\begin{theorem}\label{CCSBounds} Let $\mathcal{O}$ be an imaginary quadratic order of discriminant $D$ and let $D_0$ be the discriminant of the field $K =\mathbf{Q}(\sqrt D)$, so that $D = f^2 D_0$. If $p \nmid D$ is an odd prime then let $\left(\dfrac{\cdot}{p}\right)$ denote the Legendre symbol at $p$. If $E$ is an elliptic curve over a number field of degree $d$ with CM by $\mathcal{O}$ with a point of order $p$ then we have the following. \begin{itemize} \item If $\left(\dfrac{D}{p}\right) = 1$ then $(p-1)h(\mathcal{O}_K) \mid 2dw(\mathcal{O}_K)$. \item If $\left(\dfrac{D}{p}\right) = -1$ then $(p^2-1)h(\mathcal{O}_K) \mid 2dw(\mathcal{O}_K)$. \end{itemize} \end{theorem} \begin{proof} This was directly proven for $D = D_0$ \cite[Theorem 2]{CCS}, and can be extended to the case $p\nmid D$ \cite[Proposition 25]{CCS}.\end{proof}
In this way, we can additionally remove large primes from the divisors of possible exponents. Starting from the list of integers $N$ with $\varphi(N) \le w(\mathcal{O})\deg$, we can then very quickly sieve out impossible torsion exponents. For $j=0$, performing the above procedure takes $\dfrac{1}{100}$ of one second to find $$[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 18, 20, 21, 24, 26, 28, 30, 36, 42 ]$$ as a list of possible torsion exponents over a number field of degree 2.
\begin{function}\label{PossibleN} (\textsf{PossibleExponents}) Let $\mathcal{O}$ be an imaginary quadratic order and let $\deg$ be a positive integer. \begin{enumerate} \item Let $L$ be the list of positive integers $N$ such that $\varphi(N) \le w(\mathcal{O}) \deg$. \item Let $L'$ be the set of integers $N\in L$ such that if $p\mid N$ is prime then $p$ satisfies the divisibility relations in Theorem \ref{CCSBounds} for $d = h(\mathcal{O})\deg.$ \item Return $L'$. \end{enumerate} \end{function}
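In Python this sieve might look as follows (an illustrative sketch; the parameter names are ours, with $D$, $h$, $w$ referring to $\mathcal{O}$ and $h_K$, $w_K$ to the maximal order $\mathcal{O}_K$, as in Theorem \ref{CCSBounds}).

\begin{verbatim}
# Illustrative sketch of Function (PossibleExponents).  D, h, w are the
# discriminant, class number and unit count of O; hK, wK those of the maximal
# order O_K; deg = d/h(O).
from sympy import totient, primefactors
from sympy.ntheory import legendre_symbol

def ccs_ok(p, D, hK, wK, d):
    """Divisibility conditions of Theorem (CCS) for a prime p."""
    if p == 2 or D % p == 0:
        return True                     # the theorem imposes no condition here
    if legendre_symbol(D, p) == 1:
        return (2*d*wK) % ((p - 1)*hK) == 0
    return (2*d*wK) % ((p**2 - 1)*hK) == 0

def possible_exponents(D, h, w, hK, wK, deg):
    d = h * deg
    # phi(N) >= sqrt(N/2), so phi(N) <= w*deg forces N <= 2*(w*deg)^2
    L = [N for N in range(1, 2*(w*deg)**2 + 1) if totient(N) <= w*deg]
    return [N for N in L
            if all(ccs_ok(p, D, hK, wK, d) for p in primefactors(N))]

# e.g. possible_exponents(-3, 1, 6, 1, 6, 2) should recover the degree-2 list
# for j = 0 quoted above (together with the trivial exponent 1).
\end{verbatim}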
Note however that this list is still far too large a list to use in Function \ref{PossGroups}. We apply a sieve to this list, using resultants as in Example \ref{full7}, where we showed that 7-torsion occurred over a number field of degree 2 for $j=0$. We note especially that the {\em Kubert Degree Sequence} for $j=0$ and $N=7$, or the sequence of degrees of irreducible factors of the resultant, is $[2,6]$. On the other hand, the Degree Sequence for $j=0$ and $N = 14$ is $[6,18]$. Therefore we may eliminate $14,28$, and $42$ from our list of possible torsion exponents because 14-torsion is not possible for $j=0$ over a number field of degree not divisible by 6. Computing this Degree Sequence takes $0.03$ seconds. If we recursively perform this sieve, it takes $0.24$ seconds to find that the torsion exponents which occur for $j=0$ over a number field of degree 2 are $$[ 2, 3, 4, 6, 7 ].$$ This may seem like a relatively short amount of time to be worried about, but for $j=0$ and a number field of degree 6 it takes $69.95$ seconds to find $[ 2, 3, 4, 6, 7, 9, 14, 19 ]$ as the list of torsion exponents. For degree 12 it takes over an hour. We describe this process, along with the adjustment we have to make for $\mathcal{O}$ with larger class numbers in the following function.
\begin{function}\label{FullResSieve} (\textsf{SievedTorsion}) Let $\mathcal{O}$ be an imaginary quadratic order and let $\deg$ be a positive integer. \begin{enumerate} \item Let $L = \textsf{PossibleExponents}(\mathcal{O},\deg)$. (Function \ref{PossibleN}) \item For $N \in \textsf{PossibleExponents}(\mathcal{O},\deg)$ such that $N \in L$ and $N \ge 4$: \begin{itemize} \item Let $DegSeq$ be the sequence of integers $Degree(f)h(\mathcal{O})$ where $f$ is an irreducible factor of the resultant corresponding to $N$-torsion on elliptic curves with CM by $\mathcal{O}$. \item Unless $m \mid h(\mathcal{O})\deg$ for some $m \in DegSeq$, remove all multiples of $N$ from $L$. \end{itemize} \item Return $L$. \end{enumerate} \end{function}
We structure our computation this way to minimize the number of times that we need to compute multivariate resultants. While straightforward and much quicker than computing torsion subgroups of elliptic curves, the computation of multivariate resultants is \textsf{NP}-Hard \cite{Res}. The memory demands for computing resultants over large degree number fields can also be quite substantial. All told, the longest computation of torsion subgroups occurred in degree 12. Computing the lists of possible torsion subgroups of CM elliptic curves over a number field of degree 12 for each possible quadratic order $\mathcal{O}$ using the above procedure took over 10 hours.
\section{Ruling out Torsion Subgroups of Elliptic Curves}
Suppose we are given a finite group $G \cong \mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}$ and we want to test whether it could be a torsion subgroup of an elliptic curve $E$ over a number field $F$ of degree dividing $d$ with CM by an imaginary quadratic order $\mathcal{O}$. If there is such an elliptic curve such that $E(F)[\operatorname{tors}]\cong G$, then we can find $b,c\in F$ such that $E \cong E(b,c)$, where the point $(0,0)$ is a point of order $N$. Conversely if we have $b,c\in F$ such that $E(b,c)$ has CM by $\mathcal{O}$ and $(0,0)$ is a point of order $N$, it is not necessarily the case that $E(b,c)(F)[\operatorname{tors}]\cong G$. The first and easiest way for this to fail is if $E(b,c)(F)[\operatorname{tors}] \supsetneq G$.
\begin{example} The resultant whose roots are the $b$ such that $(0,0)$ is a 5-torsion point on $E(b,c)$ with CM by $\mathbf{Z}[\zeta_4]$ is $$(x^2 +1)^2(x^4 - 18x^3 + 74x^2 + 18x + 1)^2.$$ However, any elliptic curve over a number field $F$ with CM by $\mathbf{Z}[\zeta_4]$ has a rational 2-torsion point for trivial reasons\footnote{For instance, put your elliptic curve in short Weierstrass form.}. Therefore if we search for $\mathbf{Z}/5\mathbf{Z}$ as a torsion subgroup over a degree 2 field, we find $\mathbf{Z}/10\mathbf{Z}$ as the torsion subgroup of $E(\zeta_4,\zeta_4)$.\end{example}
It of course may also happen that $G\subsetneq E(F)[\operatorname{tors}]$ and that they have the same exponent. The more typical situation is that $E(F)[\operatorname{tors}] \subset G$. In that case, we have to check to see if there is an extension field $L$ of $F$ of degree still dividing $d$ such that $E(L)[\operatorname{tors}] \cong G$.
To rule this out, there are many options. Of course, we may compute all elliptic curves with CM by $\mathcal{O}$ and with an $N$-torsion point using the Kubert resultant method of Example \ref{full7}, base extend each of these elliptic curves by roots of their $n$-th division polynomials and then compute the torsion subgroups of all those elliptic curves. For running time reasons, it is preferable to rule this out before ever computing an elliptic curve or especially a torsion subgroup. Although there are many ways to compute a torsion subgroup of an elliptic curve over a number field, almost all of them involve reducing an elliptic curve modulo various primes in order to take advantage of Schoof's algorithm \cite{Schoof}. Unfortunately, irreducible factors of Kubert resultants often have non-integral coefficients, making this process slow and un-supported in some computer algebra systems. Even for computer algebra systems like \texttt{magma v2-18.3} with robust support for elliptic curves over number fields given by non-integral polynomials, it can be very time- and memory-consuming to compute torsion subgroups over large degree number fields.
A crucial step is thus a variant of Step (5) of Function \ref{PossGroups}. If $\mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z} \cong E(L)[\operatorname{tors}]$ with $n\mid N$ for some field extension $L$ of $F$, then we must have $\mathbf{Q}(\zeta_n) \subset L$. We have already done almost everything needed to rule out numerically the possibility that there is some field $L$ of degree dividing $d$ which contains both $F$ and $\mathbf{Q}(\zeta_n)$. Now that we have computed $F$ explicitly via the Kubert resultant, we can compute the compositum of $F$ with $\mathbf{Q}(\zeta_n)$ and its degree over $\mathbf{Q}$. If this degree does not divide $d$, then we cannot base extend $F$ to $L$ and obtain $E(L)[\operatorname{tors}] \cong G$. Moreover, we have ruled $G$ out without computing any torsion on $E$.
\begin{example} Let $\mathcal{O} = \mathbf{Z}\left[\dfrac{1 + 3\sqrt{-11}}{2}\right]$, let $d=12$, and let $G \cong \mathbf{Z}/9\mathbf{Z} \oplus \mathbf{Z}/9\mathbf{Z}$. Since $h(\mathcal{O}) =2$, the Kubert Degree Sequence for $\mathcal{O}$ and $N = 9$ is $[6,12,54]$. An elliptic curve with CM by $\mathcal{O}$ over the number field defined by the first irreducible factor or a degree two extension thereof cannot have torsion subgroup $G$ by a quick standard computation. Just computing the torsion subgroup over the number field $F$ given by the second irreducible factor ran for several days before quitting due to a lack of memory. While we could not rule out $G$ as a torsion subgroup without computing with $F$, we found that the compositum of $F$ with $\mathbf{Q}(\zeta_9)$ has degree 36 over $\mathbf{Q}$ and therefore $G$ is not a torsion subgroup of an elliptic curve with CM by $\mathcal{O}$ over a number field of degree 12. \end{example}
We now describe the procedure for saying that a group $G$ which could have been produced by Functions \ref{PossibleN} and \ref{PossGroups} in fact cannot appear as the torsion subgroup of an elliptic curve $E$ with CM by $\mathcal{O}$ over a number field $L$ of degree dividing $d$. We describe this procedure as a function which either returns \textsf{True} if $G$ can be ruled out or \textsf{False} if $G$ can occur, along with an elliptic curve $E$ over a number field $L$ of degree dividing $d$.
\begin{function}\label{RuledOut}(\textsf{RuledOut}) Let $G \cong \mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}$, let $d$ be a positive integer and let $\mathcal{O}$ be an imaginary quadratic order. \begin{enumerate} \item Compute the Kubert Resultant whose roots are the $b\in \overline\mathbf{Q}$ such that $E(b,c)$ has CM by $\mathcal{O}$ and $(0,0)$ is a point of exact order $N$. Factor it as $\displaystyle \prod_{i=1}^g f_i$. \item\label{RuledOutBeginning} If $Degree(f_i)h(\mathcal{O}) \mid d$ then let $F_i$ denote the number field given by $f_i$, generated over $\mathbf{Q}(j(\mathcal{O}))$ by $b_i$. Let $c_i$ be the element of $F_i$ (or possibly an extension) such that $E(b_i,c_i)$ is our CM elliptic curve. Compute the compositum of $F_i$ with $\mathbf{Q}(\zeta_n)$ and let $d_i$ be its degree over $\mathbf{Q}$. \item\label{RuledOutError} If $c_i \not\in F_i$ then raise an error.
\item If $d_i \mid d$ then let $T_i$ be the torsion subgroup of $E(b_i,c_i)$. If $T_i \cong G$ then Return \textsf{False}, $E(b_i,c_i)_{F_i}$. \item If $[F_i:\mathbf{Q}] \ne d$ then it may be possible to base extend $E(b_i,c_i)$ to obtain $G$ as a torsion subgroup. \item If $T_i$ is a subgroup of $G$ with the same exponent then $T_i \cong \mathbf{Z}/N\mathbf{Z} \oplus \mathbf{Z}/n'\mathbf{Z}$ where $n' \mid n \mid N$. Compute the $n$-th division polynomial of $E(b_i,c_i)$, perform Moebius inversion to obtain a polynomial whose roots are $x$-coordinates of points of exact order $n$, and factor that polynomial as $\displaystyle\prod_{j=1}^m p_j$. \item If $Degree(p_j) \mid \dfrac{d}{[F_i:\mathbf{Q}]}$ then let $L_{i,j}$ be the number field given by $p_j$, generated over $F_i$ by the $x$-coordinate $a_j$. Let $g = y^2 + (a_j(1-c_i) - b_i)y + (b_ia_j^2 - a_j^3)$, the polynomial whose roots in $\overline \mathbf{Q}$ are the $y$-coordinates of the points on $E(b_i,c_i)$ with $x$-coordinate $a_j$. Let $n_g$ be the number of irreducible factors over $L_{i,j}$ of $g$ and let $e_j = Degree(p_j)\dfrac{2}{n_g}$. \item If $e_j \ne 1$ and $e_j \mid \dfrac{d}{[F_i:\mathbf{Q}]}$ then let $M_{i,j}$ be the field given by the polynomial $g$. Let $T_{i,j}$ be the torsion subgroup of the base change of $E(b_i,c_i)$ to $M_{i,j}$. \item\label{RuledOutEnding} If $T_{i,j} \cong G$ then Return \textsf{False}, $E(b_i,c_i)_{M_{i,j}}$. \item If for all possible $i$ and $j$ there is some ``If $\ldots$''' statement which begins one of Steps (\ref{RuledOutBeginning})-(\ref{RuledOutEnding}) besides Step \ref{RuledOutError} which is false, then Return \textsf{True}. \end{enumerate} \end{function}
We note that in Step \ref{RuledOut}(\ref{RuledOutError}), it is possible that $c_i \not \in F_i$ and thus we must have an error-raising statement. However, as one may intuit from Figure \ref{overlayfig}, the probability that two intersection points in the $(b,c)$-plane have the same $b$ value is zero by the properties of the Zariski topology. We now give an algorithm which produces all torsion subgroups of elliptic curves with CM over a number field of degree $d$.
\begin{algorithm}\label{FinalAlg} Let $d$ be a positive integer and $L$ a list of finite groups which we know to be torsion subgroups of CM elliptic curves over some number field of degree dividing $d$. \begin{enumerate} \item Create an associative array or dictionary $A$, indexed by imaginary quadratic orders $\mathcal{O}$ such that $h(\mathcal{O})\mid d$ and either $h(\mathcal{O}) =1$ or $h(\mathcal{O}) \ne d$. Let the $\mathcal{O}$-th entry of $A$ be $$\mathsf{PossibleGroups}\left(d,\mathcal{O},\mathsf{SievedTorsion}\left(\mathcal{O}, \dfrac{d}{h(\mathcal{O})}\right)\right).$$ \item\label{GroupsToBeRuledOut} Let $P$ be the union of all the sets $A(\mathcal{O})$ and let $R$ be $P - L$, the set of groups in $P$ which are not isomorphic to any element of $L$. \item Iterate over $G\in R$. \begin{itemize} \item If $\mathsf{RuledOut}(G,d,\mathcal{O})$ returns \textsf{True} for all $\mathcal{O}$ such that $G\in A(\mathcal{O})$, move onto the next group. \item If not, append $G$ to $L$ and go to Step (\ref{GroupsToBeRuledOut}). \end{itemize} \end{enumerate} \end{algorithm}
When Algorithm \ref{FinalAlg} is completed, $L$ is the complete list of possible torsion subgroups. If $d=2$, then Algorithm \ref{FinalAlg} takes 0.87 seconds to complete when starting with the list given by Zimmer, M\"uller and Stroher, and rules out only the group $\mathbf{Z}/5\mathbf{Z}$. If $d=12$, then if we start from Step (\ref{GroupsToBeRuledOut}) with a complete list $L$, the algorithm takes only 3.5 hours to complete for a total time of roughly 14 hours. Complete records of the ruling out computation may be found on \url{stankewicz.net/torsion}.
\section{Isomorphism classes of Torsion Subgroups of CM elliptic curves $E$}\label{lists}
\subsection{$K= \mathbf{Q}$}
$$E(\mathbf{Q})[\operatorname{tors}] \in \{0,\mathbf{Z}/2\mathbf{Z}, \mathbf{Z}/3\mathbf{Z}, \mathbf{Z}/4\mathbf{Z}, \mathbf{Z}/6\mathbf{Z}, \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}\}.$$
Examples of these are :
\begin{center} \begin{tabular}{ccc} Group & Elliptic Curve & $j$-invariant \\ 0 & $y^2 = x^3 +2$ & 0 \\ $\mathbf{Z}/2\mathbf{Z}$ & $y^2 = x^3 -1$ & 0 \\ $\mathbf{Z}/3\mathbf{Z}$ & $y^2 = x^3 + 16$ & 0 \\ $\mathbf{Z}/4\mathbf{Z}$ & $y^2 = x^3 + 4x$ & 1728\\ $\mathbf{Z}/6\mathbf{Z}$ & $y^2 = x^3 + 1$ & 0 \\ $\mathbf{Z}/2\mathbf{Z}\oplus\mathbf{Z}/2\mathbf{Z}$ & $y^2 = x^3 - 4x$ & 1728 \end{tabular}\end{center}
\subsection{$K$ is a number field of degree 2.}
$$E(K)[\operatorname{tors}] \in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 1,2,3,4,6,7,10, \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/m \mathbf{Z} & \textrm{for } m=2,4,6, \textrm{ and} \\ \mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/3\mathbf{Z}. &\end{cases}$$
The only subgroups which do not occur over $\mathbf{Q}$ are:
$$E(K)[\operatorname{tors}] \in \{\mathbf{Z}/7\mathbf{Z},\mathbf{Z}/10\mathbf{Z}, \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/4 \mathbf{Z},\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/6 \mathbf{Z}, \mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/3\mathbf{Z}\}$$
Examples of these are: \begin{center} \begin{tabular}{cccc} Group & Field Extension & Elliptic Curve & $j$-invariant \\ $\mathbf{Z}/7\mathbf{Z}$ & $\mathbf{Q}(\zeta_3)$ & $E(\zeta_3,-1)$ & 0\\ $\mathbf{Z}/10\mathbf{Z}$ & $\mathbf{Q}(\zeta_4)$ & $E(\zeta_4,\zeta_4)$ & 1728\\ $\mathbf{Z}/2\mathbf{Z}\oplus\mathbf{Z}/4\mathbf{Z}$ & $\mathbf{Q}(\zeta_4)$ & $y^2 = x^3 + 4x$ & 1728\\ $\mathbf{Z}/2\mathbf{Z}\oplus\mathbf{Z}/6\mathbf{Z}$ & $\mathbf{Q}(\zeta_3)$ & $y^2 = x^3 + 1$ & 0\\ $\mathbf{Z}/3\mathbf{Z}\oplus\mathbf{Z}/3\mathbf{Z}$ & $\mathbf{Q}(\zeta_3)$ & $y^2 = x^3 + 16$ & 0 \end{tabular}\end{center}
\subsection{$K$ is a number field of degree 3.}
$$E(K)[\operatorname{tors}] \in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 1,2,3,4,6,9,14, \\ \textrm{and }&\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2 \mathbf{Z}. \end{cases}$$
The only subgroups which do not occur over $\mathbf{Q}$ are:
$$E(K)[\operatorname{tors}] \in \{ \mathbf{Z}/9\mathbf{Z}, \mathbf{Z}/14\mathbf{Z} \}.$$
Examples of these are: \begin{center} \begin{tabular}{cccc} Group & Defining Polynomial & Elliptic Curve & $j$-invariant \\ \hline $\mathbf{Z}/9\mathbf{Z}$ & $b^3 - 99b^2 - 90b - 9$ & $E\left(b,\displaystyle\frac{-2b^2 + 318b - 75}{753}\right)$ & 0 \\ $\mathbf{Z}/14\mathbf{Z}$ & $b^3 + 5b^2 + 2/7b - 1/49$ & $E\left(b,\displaystyle\frac{133b^2 + 749b + 54}{167}\right)$ & -3375 \end{tabular}\end{center}
\subsection{$K$ is a number field of degree 4.}
$$E(K)[\operatorname{tors}]\in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 1,\dots,8,10,\\ &\hspace{1.5cm} 12,13,21 \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/m \mathbf{Z} & \textrm{for } m=2,4,6,8,10,\\ \mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/m\mathbf{Z} & \textrm{for } m=3,6,\\ \textrm{and } &\mathbf{Z}/4\mathbf{Z}\oplus \mathbf{Z}/4\mathbf{Z}. \end{cases}$$
The only subgroups which do not occur over $\mathbf{Q}$ or a number field of degree 2 are:
$$E(K)[\operatorname{tors}]\in \left\{\begin{array}{c}\mathbf{Z}/5\mathbf{Z}, \mathbf{Z}/8\mathbf{Z}, \mathbf{Z}/12\mathbf{Z},\mathbf{Z}/13\mathbf{Z}, \mathbf{Z}/21\mathbf{Z},\\ \mathbf{Z}/2\mathbf{Z}\oplus \mathbf{Z}/8\mathbf{Z}, \mathbf{Z}/2\mathbf{Z}\oplus \mathbf{Z}/10\mathbf{Z}, \mathbf{Z}/4\mathbf{Z}\oplus \mathbf{Z}/4\mathbf{Z}, \mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/6\mathbf{Z}\end{array}\right\}.$$
Examples of these are:
\hspace{-1cm}\begin{tabular}{cccc} Group & Field & Elliptic Curve & $j$ \\ \hline $\mathbf{Z}/5\mathbf{Z}$ & $\dfrac{\mathbf{Q}[b]}{(b^4 - 4b^3 + 46b^2 + 4b + 1)}$ & $E(b,b)$ & -32768 \\ $\mathbf{Z}/8\mathbf{Z}$ & $\dfrac{\mathbf{Q}[b]}{(b^4 + 2b^3 + b^2 - b - \dfrac{1}{8})}$ & $E\left(b,\displaystyle\frac{8b^3 + 36b^2 + 46b + 3}{13}\right)$ \footnotemark & 1728\\ $\mathbf{Z}/12\mathbf{Z}$ & $\dfrac{\mathbf{Q}[b]}{(b^4 - 10b^3 + 24b^2 - 16b - 2)}$ & $E\left(b,\displaystyle\frac{-6b^3 + 52b^2 - 70b - 9}{7}\right)$ & 0\\ $\mathbf{Z}/13\mathbf{Z}$ & $\dfrac{\mathbf{Q}[b]}{(b^4 + 4b^3 + 78b^2 + 13b + 1)}$ & $E\left(b,\displaystyle\frac{16b^3 + 44b^2 + 1354b + 45}{483}\right)$ & 0\\
$\mathbf{Z}/21\mathbf{Z}$ & $\dfrac{\mathbf{Q}[e]}{(e^{4} - e^{3} + 2 e + 1)}$ & $y^2 = x^3 - \left(\begin{array}{c}371952 e^{3} + \\3373488 e^{2} + \\ 3777840 e + \\ 1228608\end{array}\right)$ & 0\\ $\left(\dfrac{\mathbf{Z}/2\mathbf{Z}\oplus}{\mathbf{Z}/8\mathbf{Z}}\right)$ & $\dfrac{\mathbf{Q}[b]}{(b^4 - 4b^3 + 4b^2 - b - 1/8)}$ & $E\left(b,32b^3 - 108b^2 + 58b + 6\right)$ & 287496\\ $\left(\dfrac{\mathbf{Z}/2\mathbf{Z}\oplus}{\mathbf{Z}/10\mathbf{Z}}\right)$ & $\dfrac{\mathbf{Q}(\zeta_4)[x]}{(x^2 - \zeta_4x -\dfrac{\zeta_4}{2})}$ & $E(\zeta_4,\zeta_4)$ & 1728 \\ $\left(\dfrac{\mathbf{Z}/4\mathbf{Z}\oplus}{\mathbf{Z}/4\mathbf{Z}}\right)$ & $\dfrac{\mathbf{Q}[x]}{(x^4 + 1)}$ & $E(-1/8,0)$ & 1728 \\ $\left(\dfrac{\mathbf{Z}/3\mathbf{Z}\oplus}{\mathbf{Z}/6\mathbf{Z}}\right)$ & $\mathbf{Q}(\sqrt 3, \sqrt{-3})$ & $E\left(\dfrac{6\sqrt 3 + 10}{3},\dfrac{2\sqrt 3 + 3}{3}\right)$ & 1728 \end{tabular} \footnotetext{Although $j(\mathbf{Z}[2i]) = 287496$ and $j(\mathbf{Z}[i]) = 1728$ and so it would be reasonable to expect two curves over number fields of the same degree with those $j$-invariants and respective torsion subgroups $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/8\mathbf{Z}$ and $\mathbf{Z}/8\mathbf{Z}$ to be isogenous, this is not the case. Indeed the two fields are not isomorphic.}
\subsection{$K$ is a number field of degree 5.}
$$E(K)[\operatorname{tors}]\in \{0,\mathbf{Z}/2\mathbf{Z} ,\mathbf{Z}/3\mathbf{Z},\mathbf{Z}/4\mathbf{Z},\mathbf{Z}/6\mathbf{Z},\mathbf{Z}/11\mathbf{Z}, \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2 \mathbf{Z} \}$$
The only subgroup which does not occur over $\mathbf{Q}$ is $\mathbf{Z}/11\mathbf{Z}$, which occurs over the maximal real subfield of $\mathbf{Q}(\zeta_{11})$. This occurs for $j = -32768$ in $E(b,c)$ with the following quantities. Hereon, unless otherwise stated, elliptic curves will be given by the values of $b$ and $c$.
\begin{tabular}{ccc}
Field Extension & $b$ &$c$ \\ \hline
$\dfrac{\mathbf{Q}[e]}{(e^{5} - e^{4} - 4 e^{3} + 3 e^{2} + 3 e - 1)}$ & $-7 e^{4} - 2 e^{3} + 16 e^{2} - e - 1$ & $2 e^{3} - 4 e + 1$ \end{tabular}
\subsection{$K$ is a number field of degree 6.}
$$E(K)[\operatorname{tors}]\in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 1,2,3,4,6,7,9,10,\\ & \hspace{1.5cm}14,18,19,26, \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/m \mathbf{Z} & \textrm{for } m=2,4,6,14,\\ \mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 3,6,9,\\ \textrm{and } &\mathbf{Z}/6\mathbf{Z} \oplus \mathbf{Z}/6\mathbf{Z}. \end{cases}$$
The only subgroups which do not occur over $\mathbf{Q}$ or a number field of degree 2 or 3 are:
$$E(K)[\operatorname{tors}]\in \left\{\begin{array}{c}\mathbf{Z}/18\mathbf{Z},\mathbf{Z}/19\mathbf{Z},\mathbf{Z}/26\mathbf{Z} \\ \mathbf{Z}/2\mathbf{Z}\oplus \mathbf{Z}/14\mathbf{Z}, \mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/6\mathbf{Z},\mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/9\mathbf{Z}, \mathbf{Z}/6\mathbf{Z}\oplus\mathbf{Z}/6\mathbf{Z} \end{array} \right\}.$$
Examples of these are:
$\begin{array}{l|l} Group & \mathbf{Z}/18\mathbf{Z} \\ \hline j, Field & 8000,\mathbf{Q}[e]/(e^{6} - 2 e^{5} + 3 e^{4} - 2 e^{3} + 2 e^{2} + 1) \\ \hline b & \frac{1}{9}\left(\begin{array}{c}28 e^{5} - 79 e^{4} + 86 e^{3} - 30 e^{2} + 11 e - 31 \end{array}\right) \\ \hline c & \frac{1}{3}\left(\begin{array}{c}-8 e^{5} + 7 e^{4} - e^{3} - 8 e^{2} - 9 e - 5 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/19\mathbf{Z} \\ \hline j,Field & 0,\mathbf{Q}[e]/(e^{6} + e^{4} - e^{3} - 2 e^{2} + e + 1) \\ \hline b & 2 e^{5} - e^{4} + 2 e^{3} - 4 e^{2} + 2 \\ \hline c & 2 e^{5} - 2 e^{4} + 4 e^{3} - 4 e^{2} - 2 e + 3 \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/26\mathbf{Z} \\ \hline j, Field & 1728,\mathbf{Q}[e]/(e^{6} - e^{4} + 2 e^{3} - 2 e + 1) \\ \hline b & \frac{1}{13}\left(\begin{array}{c}53 e^{5} + 6 e^{4} - 70 e^{3} + 75 e^{2} + 5 e - 136 \end{array}\right) \\ \hline c & \frac{1}{13}\left(\begin{array}{c}-49 e^{5} - 44 e^{4} + 14 e^{3} - 78 e^{2} - 65 e + 46 \end{array}\right) \end{array}$
\begin{tabular}{cccc}
Group & Field Extension & Elliptic Curve & $j$ \\ $\left(\dfrac{\mathbf{Z}/2\mathbf{Z}\oplus}{\mathbf{Z}/14\mathbf{Z}}\right)$ & $\dfrac{\mathbf{Q}(\sqrt{-7})[b]}{(b^3 + 5b^2 + 2/7b - 1/49)}$ & $E\left(b,\displaystyle\frac{133b^2 + 749b + 54}{167}\right)$ & -3375 \\ $\left(\dfrac{\mathbf{Z}/3\mathbf{Z}\oplus}{\mathbf{Z}/6\mathbf{Z}}\right)$ & $\mathbf{Q}(\zeta_3,\sqrt[3]{-16})$ & $y^2 = x^3 + 16$ & 0\\ $\left(\dfrac{\mathbf{Z}/6\mathbf{Z} \oplus}{\mathbf{Z}/6\mathbf{Z}}\right)$ & $\mathbf{Q}(\zeta_3,\sqrt[3]{4})$ & $y^2 = x^3 + 1$ & 0\\ $\left(\dfrac{\mathbf{Z}/3\mathbf{Z}\oplus}{\mathbf{Z}/9\mathbf{Z}}\right)$ & $\dfrac{\mathbf{Q}(\zeta_3)[b]}{b^3 - 99b^2 - 90b - 9}$ & $E\left(b,\displaystyle\frac{-2b^2 + 318b - 75}{753}\right)$ & 0 \end{tabular}
\subsection{$K$ is a number field of degree 7.}
$$E(K)[\operatorname{tors}]\in \{0,\mathbf{Z}/2\mathbf{Z}, \mathbf{Z}/3\mathbf{Z}, \mathbf{Z}/4\mathbf{Z}, \mathbf{Z}/6\mathbf{Z}, \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}\}.$$
No subgroups occur in degree 7 which do not occur over $\mathbf{Q}$.
\subsection{$K$ is a number field of degree 8.}
$$E(K)[\operatorname{tors}]\in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 1,\dots,8,10,12,13,\\ &\hspace{1.5cm} 15,16,20,21,28,30,34,39, \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/m \mathbf{Z} & \textrm{for } m=2,4,6,8,10,12,16,20,\\ \mathbf{Z}/4\mathbf{Z} \oplus \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 4,8,12, \\ \mathbf{Z}/m\mathbf{Z}\oplus \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 3,5,6, \\\mathbf{Z}/m\mathbf{Z}\oplus \mathbf{Z}/2m\mathbf{Z} & \textrm{for } m = 3,5. \end{cases}$$
The only subgroups which do not occur over $\mathbf{Q}$ or a number field of degree 2 or 4 are:
$$E(K)[\operatorname{tors}]\in \left\{\begin{array}{c}\mathbf{Z}/15\mathbf{Z},\mathbf{Z}/16\mathbf{Z}, \mathbf{Z}/20\mathbf{Z},\mathbf{Z}/30\mathbf{Z},\mathbf{Z}/34\mathbf{Z},\mathbf{Z}/39\mathbf{Z} \\ \mathbf{Z}/2\mathbf{Z}\oplus \mathbf{Z}/12\mathbf{Z},\mathbf{Z}/2\mathbf{Z}\oplus \mathbf{Z}/16\mathbf{Z}, \mathbf{Z}/2\mathbf{Z}\oplus \mathbf{Z}/20\mathbf{Z} \\ \mathbf{Z}/6\mathbf{Z}\oplus \mathbf{Z}/6\mathbf{Z},\mathbf{Z}/4\mathbf{Z}\oplus \mathbf{Z}/8\mathbf{Z},\mathbf{Z}/4\mathbf{Z}\oplus \mathbf{Z}/12\mathbf{Z} \\ \mathbf{Z}/5\mathbf{Z} \oplus \mathbf{Z}/5\mathbf{Z},\mathbf{Z}/5\mathbf{Z}\oplus \mathbf{Z}/10\mathbf{Z}\end{array}\right\}.$$
We give examples of these; from here on, unless otherwise stated, fields of definition are specified only by a defining polynomial over $\mathbf{Q}$.
$\begin{array}{l|l} Group & \mathbf{Z}/15\mathbf{Z} \\ \hline j, Field & 0,e^{8} - 3 e^{7} - 2 e^{6} + 9 e^{5} - 6 e^{3} - 2 e^{2} - 3 e + 1 = 0 \\ \hline b & \frac{1}{61}\left(\begin{array}{c}599 e^{7} - 303 e^{6} - 1758 e^{5} + 786 e^{4}\\ + 1411 e^{3} + 632 e^{2} + 755 e - 307 \end{array}\right) \\ \hline c & \frac{1}{61}\left(\begin{array}{c}-8 e^{7} + 68 e^{6} + 8 e^{5} - 238 e^{4} + 28 e^{3} + 260 e^{2} + 50 e - 7 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/16\mathbf{Z} \\ \hline j, Field & 16581375,e^{8} - 2 e^{7} + 6 e^{6} - 9 e^{5} + 10 e^{4} - 8 e^{3} + 6 e^{2} - 3 e + 1 = 0 \\ \hline b & \frac{1}{2}\left(\begin{array}{c}-16 e^{7} + 131 e^{6} - 234 e^{5} + 268 e^{4} - 227 e^{3} + 175 e^{2} - 90 e + 28 \end{array}\right) \\ \hline c & \frac{1}{2}\left(\begin{array}{c}-e^{7} + 7 e^{6} - 15 e^{5} + 16 e^{4} - 14 e^{3} + 10 e^{2} - 6 e + 1 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/20\mathbf{Z} \\ \hline j,Field & 287496,e^{8} - 4 e^{7} + 6 e^{6} - 8 e^{4} + 8 e^{3} - 4 e + 2 = 0 \\ \hline b & 280 e^{7} - 876 e^{6} + 873 e^{5} + 873 e^{4} - 1553 e^{3} + 710 e^{2} + 762 e - 487 \\ \hline c & -25 e^{7} + 82 e^{6} - 88 e^{5} - 71 e^{4} + 154 e^{3} - 77 e^{2} - 66 e + 55 \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/30\mathbf{Z} \\ \hline j & j^2 + 191025j - 121287375 =0 \\ \hline Field & \begin{array}{c}e^{8} - 3 e^{7} - 2 e^{6} + 9 e^{5} - 6 e^{3} - 2 e^{2} - 3 e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{61}\left(\begin{array}{c}-1926 e^{7} + 7953 e^{6} - 4967 e^{5} - 12311 e^{4}\\ + 13878 e^{3} - 2797 e^{2} + 7188 e - 1853 \end{array}\right) \\ \hline c & \dfrac{1}{61}\left(\begin{array}{c}97 e^{7} - 367 e^{6} + 147 e^{5} + 583 e^{4} - 492 e^{3} + 172 e^{2} - 286 e + 62 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/34\mathbf{Z} \\ \hline j, Field & 1728,e^{8} + 4 e^{7} + 7 e^{6} + 8 e^{5} + 8 e^{4} + 6 e^{3} + 4 e^{2} + 2 e + 1 = 0 \\ \hline b & \frac{1}{17}\left(\begin{array}{c}-11 e^{7} - 34 e^{6} - 38 e^{5} - 16 e^{4} - 11 e^{3} - 11 e^{2} - e + 3 \end{array}\right) \\ \hline c & \frac{1}{17}\left(\begin{array}{c}9 e^{7} + 31 e^{6} + 59 e^{5} + 60 e^{4} + 50 e^{3} + 30 e^{2} + 25 e + 6 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/39\mathbf{Z} \\ \hline j,Field & 0,e^{8} - 2 e^{6} - 3 e^{5} + 3 e^{4} + 3 e^{3} - 2 e^{2} + 1 = 0 \\ \hline b & -12 e^{7} - 25 e^{6} + 7 e^{5} + 82 e^{4} + 77 e^{3} - 47 e^{2} - 95 e - 35 \\ \hline c & -8 e^{7} - 4 e^{6} + 18 e^{5} + 34 e^{4} - 14 e^{3} - 46 e^{2} + 13 \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/2\mathbf{Z} \oplus\mathbf{Z}/12\mathbf{Z} \\ \hline j, Field & 54000,e^{8} - 4 e^{7} + 2 e^{6} + 8 e^{5} - 8 e^{4} + 4 e^{3} - 16 e^{2} + 16 e - 2 = 0 \\ \hline b & \frac{1}{13}\left(\begin{array}{c}-1083 e^{7} + 2865 e^{6} + 1673 e^{5} - 6205 e^{4}\\ - 5 e^{3} - 4167 e^{2} + 11397 e - 1570 \end{array}\right) \\ \hline c & \frac{1}{13}\left(\begin{array}{c}-306 e^{7} + 805 e^{6} + 500 e^{5} - 1801 e^{4}\\ + 27 e^{3} - 1218 e^{2} + 3282 e - 453 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/2\mathbf{Z} \oplus\mathbf{Z}/16\mathbf{Z} \\ \hline j, Field & -3375,e^{8} + 3 e^{7} + 6 e^{6} + 8 e^{5} + 10 e^{4} + 9 e^{3} + 6 e^{2} + 2 e + 1 = 0 \\ \hline b & \frac{1}{2}\left(\begin{array}{c}20 e^{7} + 47 e^{6} + 79 e^{5} + 96 e^{4} + 110 e^{3} + 89 e^{2} + 26 e + 18 \end{array}\right) \\ \hline c & \frac{1}{2}\left(\begin{array}{c}7 e^{7} + 12 e^{6} + 22 e^{5} + 22 e^{4} + 30 e^{3} + 17 e^{2} + 7 e + 3 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/2\mathbf{Z}\oplus\mathbf{Z}/20\mathbf{Z} \\ \hline j,Field & 1728,e^{8} - 4 e^{7} + 6 e^{6} - 8 e^{4} + 8 e^{3} - 4 e + 2 = 0 \\ \hline b & -5 e^{7} + 17 e^{6} - 20 e^{5} - 12 e^{4} + 34 e^{3} - 21 e^{2} - 15 e + 13 \\ \hline c & -4 e^{7} + 13 e^{6} - 13 e^{5} - 13 e^{4} + 24 e^{3} - 8 e^{2} - 10 e + 7 \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/6\mathbf{Z} \oplus\mathbf{Z}/6\mathbf{Z} \\ \hline j, Field & -3375,e^{8} + 3 e^{7} + 4 e^{6} + 3 e^{5} + 3 e^{4} + 3 e^{3} + 4 e^{2} + 3 e + 1 = 0 \\ \hline b & \frac{1}{15}\left(132 e^{7} + 513 e^{6} + 681 e^{5} + 267 e^{4} + 143 e^{3} + 629 e^{2} + 602 e + 193 \right) \\ \hline c & \frac{1}{15}\left(\begin{array}{c}-21 e^{7} - 39 e^{6} - 3 e^{5} + 29 e^{4} - 39 e^{3} - 32 e^{2} + 9 e + 11 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/4\mathbf{Z} \oplus\mathbf{Z}/8\mathbf{Z} \\ \hline j, Field & -3375,e^{8} - e^{6} - 2 e^{5} + e^{4} + 8 e^{3} + 12 e^{2} + 8 e + 2 = 0 \\ \hline b & \frac{1}{11}\left(\begin{array}{c}103 e^{7} - 85 e^{6} - 43 e^{5} - 153 e^{4} + 222 e^{3} + 655 e^{2} + 663 e + 235 \end{array}\right) \\ \hline c & \frac{1}{11}\left(\begin{array}{c}63 e^{7} - 27 e^{6} - 64 e^{5} - 86 e^{4} + 114 e^{3} + 452 e^{2} + 534 e + 206 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/4\mathbf{Z} \oplus\mathbf{Z}/12\mathbf{Z} \\ \hline j, Field & 0,e^{8} - 2 e^{7} + 2 e^{6} - 2 e^{5} + 7 e^{4} - 10 e^{3} + 8 e^{2} - 4 e + 1 = 0 \\ \hline b & \frac{1}{11}\left(\begin{array}{c}-32 e^{7} + 66 e^{6} - 53 e^{5} + 57 e^{4} - 220 e^{3} + 320 e^{2} - 188 e + 115 \end{array}\right) \\ \hline c & \frac{1}{11}\left(\begin{array}{c}-28 e^{7} + 66 e^{6} - 56 e^{5} + 54 e^{4} - 198 e^{3} + 324 e^{2} - 214 e + 91 \end{array}\right) \end{array}$
$\begin{array}{l|l} \textrm{Group, Field} & \mathbf{Z}/5\mathbf{Z}\oplus \mathbf{Z}/5\mathbf{Z}, \mathbf{Q}(\zeta_{15}) \\ \hline j,b=c & (0,4\zeta_{15}^7 + 2\zeta_{15}^6 - 2\zeta_{15}^5 -2\zeta_{15}^3 + 4\zeta_{15} + 1)\end{array}$
$\begin{array}{l|l} \textrm{Group, Field} & \mathbf{Z}/5\mathbf{Z}\oplus \mathbf{Z}/10\mathbf{Z}, \mathbf{Q}(\zeta_{20}) \\ \hline j,b=c & (1728,\zeta_4)\end{array}$
\subsection{$K$ is a number field of degree 9.}
$$E(K)[\operatorname{tors}]\in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 1,2,3,4,6,9,14,18,19,27, \\ \textrm{and }& \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2 \mathbf{Z}. \end{cases}$$
The subgroups which do not occur over $\mathbf{Q}$ or a number field of degree 3 are:
$$E(K)[\operatorname{tors}]\in \{ \mathbf{Z}/18\mathbf{Z},\mathbf{Z}/19\mathbf{Z},\mathbf{Z}/27\mathbf{Z}\}.$$
Examples of these are:
$\begin{array}{l|l} Group & \mathbf{Z}/18\mathbf{Z} \\ \hline j & 54000 \\ \hline Field & \begin{array}{c}e^{9} - 3 e^{8} + 3 e^{7} - 6 e^{6} + 12 e^{5} - 3 e^{4} - 15 e^{3} + 15 e^{2} - 6 e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{3}\left(\begin{array}{c}217 e^{8} - 235 e^{7} + 202 e^{6} - 904 e^{5} + 841 e^{4}\\ + 971 e^{3} - 1364 e^{2} + 617 e - 113 \end{array}\right) \\ \hline c & \dfrac{1}{3}\left(\begin{array}{c}7 e^{8} - 8 e^{7} + 7 e^{6} - 31 e^{5} + 32 e^{4} + 29 e^{3} - 50 e^{2} + 25 e - 5 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/19\mathbf{Z} \\ \hline j & -884736 \\ \hline Field & \begin{array}{c}e^{9} - e^{8} - 8 e^{7} + 7 e^{6} + 21 e^{5} - 15 e^{4} - 20 e^{3} + 10 e^{2} + 5 e - 1 = 0\end{array} \\ \hline b & -31 e^{8} + 73 e^{7} + 112 e^{6} - 319 e^{5} - 80 e^{4} + 397 e^{3} - 26 e^{2} - 139 e + 22 \\ \hline c & \left(\begin{array}{c}-2 e^{8} + 16 e^{6} - 4 e^{5} - 38 e^{4} + 14 e^{3} + 26 e^{2} - 10 e + 1 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group\footnotemark & \mathbf{Z}/27\mathbf{Z} \\ \hline j & -12288000 \\ \hline Field & \begin{array}{c}e^{9} - 9 e^{7} + 27 e^{5} - 30 e^{3} + 9 e - 1 = 0\end{array} \\ \hline b & \left(\begin{array}{c}-4282 e^{8} - 507 e^{7} + 38492 e^{6} + 4523 e^{5} - 115156 e^{4}\\ - 13456 e^{3} + 126990 e^{2} + 14789 e - 36852 \end{array}\right) \\ \hline c & \left(\begin{array}{c}16 e^{8} + 2 e^{7} - 140 e^{6} - 18 e^{5} + 410 e^{4} + 54 e^{3} - 444 e^{2} - 58 e + 125 \end{array}\right) \end{array}$\footnotetext{The discriminant of the CM order is -27 and the SPY bounds are sharp here. Typically when the SPY bounds are sharp, $\gcd(\operatorname{disc}(\mathcal{O}),N) >1$.}
\subsection{$K$ is a number field of degree 10.}
$$E(K)[\operatorname{tors}]\in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 1,2,3,4,6,7,10,11,22,31,50, \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/m \mathbf{Z} & \textrm{for } m = 2,4,6,22, \\ \textrm{and }& \mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/3\mathbf{Z}.\end{cases}$$
The only subgroups which do not occur over $\mathbf{Q}$ or a number field of degree 2 or 5 are:
$$E(K)[\operatorname{tors}]\in \{\mathbf{Z}/22\mathbf{Z},\mathbf{Z}/31\mathbf{Z},\mathbf{Z}/50\mathbf{Z}, \mathbf{Z}/2\mathbf{Z}\oplus \mathbf{Z}/22\mathbf{Z}\}.$$
Examples of these are:
$\begin{array}{l|l} Group & \mathbf{Z}/22\mathbf{Z} \\ \hline j & 16581375 \\ \hline Field & \begin{array}{c}e^{10} - e^{9} + 2 e^{8} - 4 e^{7} + 5 e^{6} - 3 e^{5} + 3 e^{4} - 6 e^{3} + 7 e^{2} - 4 e + 1 = 0\end{array} \\ \hline b & \left(\begin{array}{c}-184243 e^{9} + 88117 e^{8} - 299927 e^{7} + 589670 e^{6} - 568675 e^{5}\\ + 223462 e^{4} - 395246 e^{3} + 908033 e^{2} - 759722 e + 279034 \end{array}\right) \\ \hline c & \left(\begin{array}{c}1905 e^{9} - 643 e^{8} + 3210 e^{7} - 5564 e^{6} + 5493 e^{5}\\ - 1825 e^{4} + 4191 e^{3} - 8721 e^{2} + 7124 e - 2426 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/31\mathbf{Z} \\ \hline j & 0 \\ \hline Field & \begin{array}{c}e^{10} + 2 e^{8} - 3 e^{7} + 3 e^{6} - 7 e^{5} + 8 e^{4} - 7 e^{3} + 7 e^{2} - 4 e + 1 = 0\end{array} \\ \hline b & \left(\begin{array}{c}43 e^{9} - 33 e^{8} + 45 e^{7} - 222 e^{6} + 133 e^{5}\\ - 335 e^{4} + 508 e^{3} - 326 e^{2} + 368 e - 252 \end{array}\right) \\ \hline c & \left(\begin{array}{c}24 e^{9} + 10 e^{8} + 50 e^{7} - 54 e^{6} + 44 e^{5}\\ - 148 e^{4} + 130 e^{3} - 104 e^{2} + 122 e - 43 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/50\mathbf{Z} \\ \hline j & 1728 \\ \hline Field & \begin{array}{c}e^{10} - 4 e^{9} + 9 e^{8} - 14 e^{7} + 15 e^{6} - 10 e^{5} + 3 e^{4} + 2 e^{3} - 2 e^{2} + 1 = 0\end{array} \\ \hline b & \left(\begin{array}{c}20 e^{8} - 59 e^{7} + 87 e^{6} - 79 e^{5} + 31 e^{4} + 17 e^{3} - 12 e^{2} + 4 e + 8 \end{array}\right) \\ \hline c & \left(\begin{array}{c}e^{9} - 6 e^{8} + 10 e^{7} - 10 e^{6} + 7 e^{5} - e^{4} - e^{3} - 2 e^{2} - 2 e - 1 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/2\mathbf{Z}\oplus\mathbf{Z}/22\mathbf{Z} \\ \hline j & -3375 \\ \hline Field & \begin{array}{c}e^{10} - e^{9} + 2 e^{8} - 4 e^{7} + 5 e^{6} - 3 e^{5} + 3 e^{4} - 6 e^{3} + 7 e^{2} - 4 e + 1 = 0\end{array} \\ \hline b & \left(\begin{array}{c}-14 e^{9} + 4 e^{8} - 22 e^{7} + 41 e^{6} - 35 e^{5}\\ + 9 e^{4} - 30 e^{3} + 63 e^{2} - 46 e + 10 \end{array}\right) \\ \hline c & \left(\begin{array}{c}-6 e^{9} + 5 e^{8} - 9 e^{7} + 23 e^{6} - 22 e^{5} + 10 e^{4} - 13 e^{3} + 33 e^{2} - 32 e + 11 \end{array}\right) \end{array}$
\subsection{$K$ is a number field of degree 11.}
$$E(K)[\operatorname{tors}]\in \{0,\mathbf{Z}/2\mathbf{Z}, \mathbf{Z}/3\mathbf{Z}, \mathbf{Z}/4\mathbf{Z}, \mathbf{Z}/6\mathbf{Z}, \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}\}.$$
No subgroups occur in degree 11 which do not occur over $\mathbf{Q}$.
\subsection{$K$ is a number field of degree 12.}
$$E(K)[\operatorname{tors}]\in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 1,\dots,10,12,13,14,\\ &\hspace{1.5cm} 18,19,21,26,28,37,42,57, \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/m \mathbf{Z} & \textrm{for } m=2,4,6,8,10,12,14,18,26,28,42,\\ \mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 3,6,9,12,18,21,\\ \mathbf{Z}/m\mathbf{Z}\oplus \mathbf{Z}/m\mathbf{Z} &\textrm{for } m= 4,6,7.\end{cases}$$
The only subgroups which do not occur over $\mathbf{Q}$ or a number field of degree 2, 3, 4, or 6 are:
$$E(K)[\operatorname{tors}] \in \begin{cases} \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 28, 37, 42, 57, \\\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/m \mathbf{Z} & \textrm{for } m = 12, 18, 26, 28, 42, \\ \mathbf{Z}/3\mathbf{Z}\oplus \mathbf{Z}/m\mathbf{Z} & \textrm{for } m = 12, 18,21, \\ \textrm{ and } &\mathbf{Z}/7\mathbf{Z}\oplus \mathbf{Z}/7\mathbf{Z}.\end{cases}$$
These are:
$\begin{array}{l|l} Group & \mathbf{Z}/28\mathbf{Z} \\ \hline j & 54000 \\ \hline Field & \begin{array}{c}e^{12} - 4 e^{11} + 8 e^{10} - 6 e^{9} - 7 e^{8} + 20 e^{7}\\ - 18 e^{6} - 4 e^{5} + 25 e^{4} - 8 e^{3} - 6 e^{2} + 2 e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{402}\left(\begin{array}{c}110 e^{11} - 95 e^{10} - 24 e^{9} - 47 e^{8} - 19 e^{7} - 232 e^{6}\\ - 119 e^{5} + 1480 e^{4} + 369 e^{3} - 1017 e^{2} + 149 e + 197 \end{array}\right) \\ \hline c & \dfrac{1}{402}\left(\begin{array}{c}116 e^{11} - 423 e^{10} + 853 e^{9} - 655 e^{8} - 768 e^{7} + 2238 e^{6}\\ - 1982 e^{5} - 766 e^{4} + 2806 e^{3} - 277 e^{2} - 441 e - 3\end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/37\mathbf{Z} \\ \hline j & 0 \\ \hline Field & \begin{array}{c}e^{12} - 4 e^{11} + 11 e^{10} - 21 e^{9} + 32 e^{8} - 40 e^{7}\\ + 45 e^{6} - 46 e^{5} + 40 e^{4} - 26 e^{3} + 12 e^{2} - 4 e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{37}\left(\begin{array}{c}-196 e^{11} + 657 e^{10} - 1789 e^{9} + 3384 e^{8} - 5292 e^{7} + 6890 e^{6}\\ - 7695 e^{5} + 7154 e^{4} - 4851 e^{3} + 2221 e^{2} - 773 e + 181 \end{array}\right) \\ \hline c & \dfrac{1}{37}\left(\begin{array}{c}24 e^{11} - 162 e^{10} + 432 e^{9} - 952 e^{8} + 1462 e^{7} - 1928 e^{6}\\ + 2090 e^{5} - 2060 e^{4} + 1630 e^{3} - 870 e^{2} + 294 e - 109 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/42\mathbf{Z}\footnotemark \\ \hline j & 54000 \\ \hline Field & \begin{array}{c}e^{12} - 4 e^{11} + 8 e^{10} - 11 e^{9} + 13 e^{8} - 14 e^{7}\\ + 15 e^{6} - 14 e^{5} + 7 e^{4} + 3 e^{3} - 5 e^{2} + e + 1 = 0\end{array} \\ \hline b & \left(\begin{array}{c}1416 e^{11} - 6140 e^{10} + 13362 e^{9} - 19953 e^{8} + 24872 e^{7} - 27802 e^{6}\\ + 30076 e^{5} - 29333 e^{4} + 19087 e^{3} - 1483 e^{2} - 7097 e + 4041 \end{array}\right) \\ \hline c & \left(\begin{array}{c}27 e^{11} - 138 e^{10} + 342 e^{9} - 563 e^{8} + 754 e^{7} - 896 e^{6}\\ + 1000 e^{5} - 1038 e^{4} + 847 e^{3} - 370 e^{2} - 7 e + 66 \end{array}\right) \end{array}$\footnotetext{Note that this elliptic curve is isogenous to the one with $j = 0$ and torsion $\mathbf{Z}/2\mathbf{Z}\oplus\mathbf{Z}/42\mathbf{Z}$.}
$\begin{array}{l|l} Group & \mathbf{Z}/57\mathbf{Z} \\ \hline j & 0 \\ \hline Field & \begin{array}{c}e^{12} - 2 e^{11} + 5 e^{10} - 10 e^{9} + 16 e^{8} - 22 e^{7}\\ + 30 e^{6} - 31 e^{5} + 28 e^{4} - 27 e^{3} + 19 e^{2} - 7 e + 1 = 0\end{array} \\ \hline b & \left(\begin{array}{c}18509 e^{11} - 25122 e^{10} + 76123 e^{9} - 135966 e^{8}\\ + 207749 e^{7} - 272291 e^{6} + 378051 e^{5} - 328154 e^{4}\\ + 303397 e^{3} - 302371 e^{2} + 154326 e - 27788 \end{array}\right) \\ \hline c & \left(\begin{array}{c}128 e^{11} - 62 e^{10} + 446 e^{9} - 532 e^{8} + 876 e^{7} - 986 e^{6}\\ + 1542 e^{5} - 670 e^{4} + 1136 e^{3} - 872 e^{2} + 18 e + 71 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/2\mathbf{Z}\oplus\mathbf{Z}/12\mathbf{Z} \\ \hline j & j^3 + 3491750j^2 - 5151296875j + 12771880859375 =0 \\ \hline Field & \begin{array}{c}e^{12} - 4e^{11} + 11e^{10} - 28e^9 + 63e^8 - 114e^7\\ + 161e^6 - 174e^5 + 141e^4 - 82e^3 + 33e^2 - 8e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{8}\left(\begin{array}{c}46e^{11} - 165e^{10} + 327e^9 - 914e^8 + 1949e^7 - 2883e^6\\ + 3279e^5 - 2583e^4 + 1240e^3 - 576e^2 + 169e - 34\end{array}\right) \\ \hline c & \dfrac{1}{8}\left(\begin{array}{c}86e^{11} - 295e^{10} + 760e^9 - 1947e^8 + 4217e^7 - 7168e^6\\ + 9344e^5 - 9004e^4 + 6265e^3 - 2894e^2 + 769e - 103\end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/2\mathbf{Z}\oplus\mathbf{Z}/18\mathbf{Z} \\ \hline j & 8000 \\ \hline Field & \begin{array}{c}e^{12} - 4e^{11} + 4e^{10} + 8e^9 - 25e^8 + 24e^7\\ + 4e^6 - 36e^5 + 46e^4 - 32e^3 + 14e^2 - 4e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{369}\left(\begin{array}{c}-1450e^{11} + 3898e^{10} + 304e^9 - 13733e^8 + 17660e^7 - 2419e^6\\ - 19740e^5 + 26634e^4 - 18553e^3 + 5681e^2 - 2148e + 155\end{array}\right) \\ \hline c & \dfrac{1}{123}\left(\begin{array}{c}1108e^{11} - 3354e^{10} + 756e^9 + 10638e^8 - 17099e^7 + 6109e^6\\ + 14805e^5 - 25449e^4 + 20608e^3 - 8740e^2 + 2841e - 680\end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/2\mathbf{Z}\oplus\mathbf{Z}/26\mathbf{Z} \\ \hline j & 0 \\ \hline Field & \begin{array}{c}e^{12} - 5e^{11} + 11e^{10} - 9e^9 - 7e^8 + 24e^7\\ - 21e^6 + 21e^4 - 25e^3 + 16e^2 - 6e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{91}\left(\begin{array}{c}-781 e^{11} + 3351 e^{10} - 5753 e^{9} + 969 e^{8} + 9554 e^{7} - 12544 e^{6}\\ + 1862 e^{5} + 8736 e^{4} - 11312 e^{3} + 6330 e^{2} - 1292 e - 19 \end{array}\right) \\ \hline c & \dfrac{1}{91}\left(\begin{array}{c}-198 e^{11} + 1336 e^{10} - 3552 e^{9} + 3848 e^{8} + 1816 e^{7} - 9198 e^{6}\\ + 8344 e^{5} + 1134 e^{4} - 8428 e^{3} + 8618 e^{2} - 4202 e + 785 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/28\mathbf{Z} \\ \hline j & -3375 \\ \hline Field & \begin{array}{c}e^{12} - 4 e^{11} + 5 e^{10} + 3 e^{9} - 11 e^{8} - 3 e^{7}\\ + 35 e^{6} - 47 e^{5} + 27 e^{4} - 4 e^{3} - e^{2} - e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{43}\left(\begin{array}{c}1252 e^{11} - 3557 e^{10} + 1151 e^{9} + 6917 e^{8} - 4746 e^{7} - 13843 e^{6}\\ + 26751 e^{5} - 17575 e^{4} + 2938 e^{3} + 964 e^{2} + 523 e - 674 \end{array}\right) \\ \hline c & \dfrac{1}{43}\left(\begin{array}{c}103 e^{11} - 501 e^{10} + 525 e^{9} + 659 e^{8} - 1439 e^{7} - 1039 e^{6}\\ + 4489 e^{5} - 4371 e^{4} + 1157 e^{3} + 216 e^{2} - 45 e - 219 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/2\mathbf{Z}\oplus \mathbf{Z}/42\mathbf{Z} \\ \hline j & 0 \\ \hline Field & \begin{array}{c}e^{12} - 4 e^{11} + 8 e^{10} - 11 e^{9} + 13 e^{8} - 14 e^{7}\\ + 15 e^{6} - 14 e^{5} + 7 e^{4} + 3 e^{3} - 5 e^{2} + e + 1 = 0\end{array} \\ \hline b & \left(\begin{array}{c}17 e^{11} - 40 e^{10} + 61 e^{9} - 71 e^{8} + 82 e^{7} - 81 e^{6}\\ + 95 e^{5} - 59 e^{4} - 12 e^{3} + 41 e^{2} - 6 e - 7 \end{array}\right) \\ \hline c & \left(\begin{array}{c}6 e^{11} - 18 e^{10} + 28 e^{9} - 34 e^{8} + 38 e^{7} - 40 e^{6}\\ + 44 e^{5} - 36 e^{4} + 20 e^{2} - 8 e - 5 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/3\mathbf{Z}\oplus\mathbf{Z}/12\mathbf{Z} \\ \hline j & 54000 \\ \hline Field & \begin{array}{c}e^{12} - 3e^{10} - 2e^9 + 12e^8 - 6e^7 - 3e^6\\ - 12e^5 + 36e^4 - 38e^3 + 21e^2 - 6e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{7843}\left(\begin{array}{c}-208660e^{11} - 118345e^{10} + 606681e^9 + 844271e^8\\ - 2044930e^7 - 126154e^6 + 712244e^5 + 3124663e^4\\ - 5460349e^3 + 4188689e^2 - 1329972e + 272478\end{array}\right) \\ \hline c & \dfrac{1}{7843}\left(\begin{array}{c}55805e^{11} + 25237e^{10} - 174599e^9 - 227269e^8\\ + 578726e^7 + 22958e^6 - 227941e^5 - 894570e^4\\ + 1575065e^3 - 1231382e^2 + 392162e - 82582\end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/3\mathbf{Z}\oplus\mathbf{Z}/18\mathbf{Z} \\ \hline j & 8000 \\ \hline Field & \begin{array}{c}e^{12} - 4 e^{11} + 12 e^{10} - 24 e^{9} + 38 e^{8} - 50 e^{7}\\ + 52 e^{6} - 48 e^{5} + 36 e^{4} - 24 e^{3} + 15 e^{2} - 6 e + 3 = 0\end{array} \\ \hline b & \dfrac{1}{53}\left(\begin{array}{c}-262 e^{11} + 1376 e^{10} - 4458 e^{9} + 10137 e^{8} - 17241 e^{7} + 23682 e^{6}\\ - 25526 e^{5} + 22112 e^{4} - 14766 e^{3} + 6941 e^{2} - 2382 e + 151 \end{array}\right) \\ \hline c & \dfrac{1}{53}\left(\begin{array}{c}131 e^{11} - 476 e^{10} + 1328 e^{9} - 2339 e^{8} + 3188 e^{7} - 3414 e^{6}\\ + 2481 e^{5} - 1304 e^{4} + 69 e^{3} + 319 e^{2} - 293 e + 110 \end{array}\right) \end{array}$
$\begin{array}{l|l} Group & \mathbf{Z}/3\mathbf{Z}\oplus\mathbf{Z}/21\mathbf{Z} \\ \hline j & 0 \\ \hline Field & \begin{array}{c}e^{12} - 6 e^{11} + 18 e^{10} - 35 e^{9} + 54 e^{8} - 72 e^{7}\\ + 84 e^{6} - 81 e^{5} + 66 e^{4} - 44 e^{3} + 21 e^{2} - 6 e + 1 = 0\end{array} \\ \hline b & \dfrac{1}{49}\left(\begin{array}{c}-4894 e^{11} + 30046 e^{10} - 87461 e^{9} + 154268 e^{8}\\ - 201926 e^{7} + 235109 e^{6} - 256228 e^{5} + 214939 e^{4}\\ - 117237 e^{3} + 38057 e^{2} - 7246 e + 352 \end{array}\right) \\ \hline c & \dfrac{1}{49}\left(\begin{array}{c}290 e^{11} - 1742 e^{10} + 4986 e^{9} - 8686 e^{8} + 11366 e^{7} - 13286 e^{6}\\ + 14434 e^{5} - 12010 e^{4} + 6618 e^{3} - 2248 e^{2} + 458 e - 41 \end{array}\right) \end{array}$
$\begin{array}{l|l} \textrm{Group, Field} & \mathbf{Z}/7\mathbf{Z}\oplus \mathbf{Z}/7\mathbf{Z}, \mathbf{Q}(\zeta_{21}) \\ \hline (j,b,c) & (0,\zeta_3, -1)\end{array}$
\subsection{$K$ is a number field of degree 13.}
$$E(K)[\operatorname{tors}]\in \{0,\mathbf{Z}/2\mathbf{Z}, \mathbf{Z}/3\mathbf{Z}, \mathbf{Z}/4\mathbf{Z}, \mathbf{Z}/6\mathbf{Z}, \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}\}.$$
No subgroups occur in degree 13 which do not occur over $\mathbf{Q}$.
\end{document}
Native name: українська гривня (Ukrainian)
[Images: 1000 hryvnias banknote; 1 hryvnia coin]
ISO 4217 code: UAH (numeric: 980)
Plurals: hryvni (nom. pl.), hryven (gen. pl.)
Symbol: ₴ or грн
Subunit: 1⁄100, the kopiyka (копійка); plurals kopiyky (nom. pl.), kopiyok (gen. pl.)
Banknotes frequently used: ₴20, ₴50, ₴100, ₴200, ₴500, ₴1,000; rarely used: ₴1, ₴2, ₴5, ₴10
Coins: 10, 50 kopiyok
User(s): Ukraine
Central bank: National Bank of Ukraine (www.bank.gov.ua/en)
Inflation: 9.52% (2021 y-o-y)[1][failed verification]
Source: NBU, 2019, May[2][failed verification]
The hryvnia or hryvnya (/(hə)ˈrɪvniə/ (hə-)RIV-nee-ə; Ukrainian: гривня [ˈɦrɪu̯nʲɐ], abbr.: грн hrn; sign: ₴; code: UAH) has been the national currency of Ukraine since 2 September 1996. The hryvnia is divided into 100 kopiyok. It is named after a measure of weight used in medieval Kievan Rus'.[3]
The currency of Kievan Rus' in the eleventh century was called grivna. The word is thought to derive from the Slavic griva; c.f. Ukrainian, Russian, Bulgarian, and Serbo-Croatian грива / griva, meaning "mane". It might have indicated something valuable worn around the neck, usually made of silver or gold; c.f. Bulgarian and Serbian grivna (гривна, "bracelet"). Later, the word was used to describe silver or gold ingots of a certain weight; c.f. Ukrainian hryvenyk (гривеник).
The nominative plural of hryvnia is hryvni (Ukrainian: гривні), while the genitive plural is hryven' (Ukrainian: гривень). In Ukrainian, the nominative plural form is used for numbers ending with 2, 3, or 4, as in dvi hryvni (дві гривні, "2 hryvni"), and the genitive plural is used for numbers ending with 5 to 9 and 0, for example sto hryven' (сто гривень, "100 hryven'"); for numbers ending with 1 the nominative singular form is used, for example dvadciat' odna hryvnia (двадцять одна гривня, "21 hryvnia"). An exception to this rule is numbers ending in 11, 12, 13 and 14, for which the genitive plural is also used, for example dvanadciat' hryven' (дванадцять гривень, "12 hryven'"). The singular for the subdivision is копійка (kopiika), the nominative plural is копійки (kopiiky) and the genitive is копійок (kopiiok).
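Because the rule depends only on the final digits of the number, it is easy to state programmatically. The following small Python helper is purely illustrative (not an established library function) and selects the transliterated form for a given count:

def hryvnia_form(n: int) -> str:
    """Return the form of 'hryvnia' that agrees with the integer n."""
    n = abs(n)
    if n % 100 in (11, 12, 13, 14):   # 11-14 always take the genitive plural
        return "hryven'"
    last = n % 10
    if last == 1:
        return "hryvnia"              # nominative singular
    if last in (2, 3, 4):
        return "hryvni"               # nominative plural
    return "hryven'"                  # genitive plural (0, 5-9)

# Examples: 21 -> hryvnia, 2 -> hryvni, 12 -> hryven', 100 -> hryven'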
Currency sign
The hryvnia sign is a cursive Ukrainian letter He (г), with a double horizontal stroke (₴), symbolizing stability, similar to that used in other currency symbols such as the yen (¥), euro (€), Indian rupee (₹), and Chinese yuan (¥ shares symbol with yen). The sign was encoded as U+20B4 in Unicode 4.1 and released in 2005.[4] It is now supported by most systems. In Ukraine, if the hryvnia sign is unavailable, the Cyrillic abbreviation "грн" is used (which can be transliterated as "hrn").
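For example, in any Unicode-aware environment the sign can be produced directly from its code point (illustrative Python):

print("\u20B4")            # ₴ (HRYVNIA SIGN, U+20B4)
print("\N{HRYVNIA SIGN}")  # the same character, selected by its Unicode name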
Kievan hryvnia of the 11th–12th centuries, reproduction by the National Bank of Ukraine
A 1917 100-karbovanets note of the Ukrainian National Republic, with text in three languages: Ukrainian, Polish and Yiddish
Main article: History of Ukrainian hryvnia
A currency called hryvnia was used in Kievan Rus'. Following its secession from the Russian Empire, the Ukrainian National Republic named its currency the hryvnia, a revised version of the Kievan Rus' hryvna; the notes were designed by Heorhiy Narbut. After the Soviet takeover, a karbovanets was briefly issued in 1919–1920 before being supplanted by the Soviet ruble. (A karbovanets was also issued during the Axis occupation of 1942–1945.)
The hryvnia replaced the third karbovanets (which had been introduced to circulate alongside the collapsing Soviet ruble) during the period 2–16 September 1996, at a rate of 1 hryvnia to 100,000 karbovantsiv.[5] The karbovanets had been subject to hyperinflation in the early 1990s following the collapse of the USSR.
To a large extent, the introduction of the hryvnia was secretive.[6] The hryvnia was introduced according to a Presidential Decree dated 26 August 1996 that was published on August 29. During the transition period, 2–16 September, both hryvnias and karbovanets were used, but merchants were required to give change only in hryvnias. All bank accounts were converted to hryvnias automatically. During the transition period, 97% of karbovanets were taken out of circulation, including 56% in the first five days of the currency reform.[6] After 16 September 1996, the remaining karbovanets could be exchanged for hryvnias in banks.
The hryvnia was introduced while Viktor Yushchenko was the chairman of the National Bank of Ukraine. However, the first banknotes issued bore the signature of the previous National Bank chairman, Vadym Hetman, who had resigned in 1993: the notes carried his signature because they had been printed as early as 1992 by the Canadian Bank Note Company, but their circulation was delayed until the hyperinflation in Ukraine was brought under control.
On 18 March 2014, following the Russian seizure of Crimea, the interim administration of the Republic of Crimea announced that the hryvnia would be dropped as the region's currency in April 2014.[7] The Russian ruble became the official currency in Crimea on 21 March 2014;[8] until 1 June 2014, the hryvnia could still be used, but only for cash payments.[8]
By contrast, the hryvnia remains the predominant currency in the conflict-affected raions of Donbas, i.e., in the separatist-held areas of Donetsk and Luhansk; this is largely due to a shortage of low-denomination Russian rubles in these regions.[9]
Main article: Coins of the Ukrainian hryvnia
No coins were issued for the first hryvnia.
Coins were first struck in 1992 for the new currency; these were not introduced until September 1996. Initially, coins valued between 1 and 50 kopiiky were issued. In March 1997, ₴1 coins were added. Since 2004, several commemorative ₴1 coins have been struck.
In October 2012, the National Bank of Ukraine announced that it was examining the possibility of withdrawing the 1 and 2 kopiyky coins from circulation.[10] The coins had become too expensive to produce relative to their face value. The 1 and 2 kopiyky coins were not produced after 2013 but remained in circulation until 1 October 2019.[11]
Also, on 26 October 2012, the National Bank of Ukraine announced it was considering the introduction of a ₴2 coin.[12]
Officially, as of 1 July 2016, 12.4 billion coins, with a face value of ₴1.4 billion were in circulation.[13]
On 1 October 2019, the 1, 2 and 5 kopiyky coins ceased to be legal tender. They can still be exchanged at banks, and cash payment totals are rounded to the nearest 10 kopiyok.[14]
Coins of the Ukrainian Hryvnia (1992–present)[15]

1 kopiyka: 16 mm, 1.5 g, stainless steel; plain edge; obverse: value and ornaments; reverse: Ukrainian trident; minted 1992–2016; issued 2 September 1996. Not issued since 1 July 2018;[16] the 1-, 2- and 5-kopiika coins were withdrawn from general circulation on 1 October 2019.[11]
2 kopiyky: 17.30 mm; 0.64 g (1992–1996), 1.8 g (2001–); aluminium (1992–1996), stainless steel (2001–); plain edge; same designs; minted 1992–2014; issued 2 September 1996; withdrawn as above.
5 kopiyok: 24 mm, 4.3 g, stainless steel; reeded edge; same designs; minted 1992–2015; issued 2 September 1996; withdrawn as above.
10 kopiyok: 16.3 mm, 1.7 g; brass (1992–1996), aluminium bronze (2001–); reeded edge; obverse: value and ornaments; reverse: Ukrainian trident; minted 1992–present; issued 2 September 1996; current.
25 kopiyok: 20.8 mm, 2.9 g; brass (1992–1996), aluminium bronze (2001–); edge with reeded and plain sectors; minted 1992–2016; issued 2 September 1996. Not issued since 1 July 2018;[16] ceased to be legal tender and withdrawn from circulation on 1 October 2020.[17][18]
50 kopiyok: 23 mm, 4.2 g; brass (1992–1996), aluminium bronze (2001–); minted 1992–present; issued 2 September 1996; current.
1 hryvnia: 26 mm; 7.1 g (1995, 1996), 6.9 g (2001–); brass (1995, 1996), aluminium bronze (2001–); edge inscription "ОДНА ГРИВНЯ" with the year of minting; minted 1995–2013; issued 12 March 1997; current, although a new design was introduced in 2018.
1 hryvnia (commemorative design): 26 mm, 6.8 g, aluminium bronze (2004–2016); plain edge with incuse lettering ("ОДНА · ГРИВНЯ · date of issue"); obverse: coat of arms of Ukraine, "УКРАЇНА 1 ГРИВНЯ" and the date of issue inside a decorative wreath; reverse: half-length figure of Volodymyr the Great holding a model church and staff; minted 2004–2016; issued 2004.
1 hryvnia (2018 design): 18.9 mm, 3.3 g, nickel-plated steel; reeded edge; obverse: coat of arms of Ukraine and ornaments; reverse: Volodymyr the Great of Kyiv; introduced 2018;[16] current.
2 hryvni: 20.2 mm, 4.0 g, nickel-plated steel; reeded edge; reverse: Yaroslav the Wise; introduced 2018;[16] current.
5 hryven: 22.1 mm, 5.2 g, nickel-plated steel; segmented edge (plain and reeded sections); reverse: Bohdan Khmelnytsky; introduced 2019; current.
10 hryven: 23.5 mm, 6.4 g, nickel-plated zinc alloy; reeded edge; reverse: Ivan Mazepa; introduced 2020;[16] current.
Main article: Banknotes of the Ukrainian hryvnia
In 1996, the first series of hryvnia banknotes was introduced into circulation by the National Bank of Ukraine. They were dated 1992 and were in denominations of 1, 2, 5, 10 and 20 hryvnias. The design of the banknotes was developed by Ukrainian artists Vasyl Lopata and Borys Maksymov.[19][20] The one hryvnia banknotes were printed by the Canadian Bank Note Company in 1992. The two, five and ten hryvnia banknotes were printed two years later. The banknotes were stored in Canada until they were put into circulation.[19]
Banknotes of the first series in denominations of 50 and 100 hryvnias also existed but were not introduced because these nominals were not needed in the economic crisis of the mid-1990s.
Also in 1996, the 1, 50, and 100 hryvnia notes of the second series were introduced, with the 1 hryvnia note dated 1994. The banknotes were designed and printed by Britain's De La Rue.[21] Since the opening of the Mint of the National Bank of Ukraine in cooperation with De La Rue in March 1994, all banknotes have been printed in Ukraine.[21]
Later, higher denominations were added. The 200 hryvnia notes of the second series were introduced in 2001, followed by the 500 hryvnia notes of the third series in 2006 and the 1000 hryvnia notes of the fourth series in 2019.
The 100 hryvnia denomination is quite common due to its moderately high value. Also common are the 200 and 500 hryvnia notes, as most Ukrainian ATMs dispense currency in these denominations.
In 2016, the NBU paper factory started producing banknote paper using flax instead of cotton.[22]
In 2019, the National Bank of Ukraine introduced a 1,000 hryvnia banknote, which was issued into circulation on 25 October 2019.[23] The new banknote was introduced as part of the National Bank of Ukraine's efforts to streamline the number of coin and banknote denominations in circulation. The 1, 2, 5 and 10 hryvnia banknotes remain legal tender alongside the equivalent coins, but are being gradually withdrawn from commercial circulation.
The National Bank of Ukraine introduced a revised 50 hryvnia banknote into circulation on 20 December 2019 and a revised 200 hryvnia banknote on 25 February 2020, completing the family of notes that began with the issuance of the 100 hryvnia banknote in 2015.
Denomination[1] | Main colour | Obverse | Reverse | Issued
₴1 | yellow-blue | Volodymyr the Great of Kiev (c. 958–1015), Prince of Novgorod and Grand Prince of Kiev, ruler of Kievan Rus' (980–1015) | Volodymyr I's Fortress Wall in Kiev | 22 May 2006; withdrawn 1 October 2020
₴2 | terracotta | Yaroslav the Wise (c. 978–1054), Prince of Novgorod and Grand Prince of Kiev, ruler of Kievan Rus' (1019–1054) | Saint Sophia Cathedral, Kyiv | 24 September 2004
₴5 | blue | Bohdan Khmelnytsky (c. 1595–1657), Hetman of Ukraine | A church in the village of Subotiv | 14 June 2004
₴10 | crimson | Ivan Mazepa (1639–1709), Hetman of Ukraine | The Holy Dormition Cathedral of the Kyiv Pechersk Lavra | 1 November 2004
₴20 | green | Ivan Franko (1856–1916), writer and politician | Lviv Theatre of Opera and Ballet | 25 September 2018; current
₴50 | violet | Mykhailo Hrushevskyi (1866–1934), historian and politician | The Tsentralna Rada building ("House of the Teacher" in Kyiv) | 20 December 2019; current
₴100 | olive | Taras Shevchenko (1814–1861), poet and artist | Taras Shevchenko National University of Kyiv | 9 March 2015; current
₴200 | pink | Lesya Ukrainka (1871–1913), poet and writer | Entrance Tower of Lutsk Castle | 25 February 2020; current
₴500 | brown | Hryhorii Skovoroda (1722–1794), philosopher and poet | National University of Kyiv-Mohyla Academy | 11 April 2016; current
₴1,000 | blue | Volodymyr Vernadskyi (1863–1945), historian, philosopher, naturalist and scientist | National Academy of Sciences of Ukraine | 25 October 2019; current
The official NBU exchange rate at the moment of introduction was UAH 1.76 per US dollar.[24]
Following the Asian financial crisis in 1998, the currency was devalued to UAH 5.6 = USD 1.00 in February 2000. Later, the exchange rate remained relatively stable at around 5.4 hryvnias for 1 US dollar and was fixed to 5.05 hryvnias for 1 US dollar from 21 April 2005 until 21 May 2008. In mid-October 2008 rapid devaluation began, in the course of a global financial crisis that hit Ukraine hard, with the hryvnia dropping 38.4% from UAH 4.85 for 1 US dollar on 23 September 2008 to UAH 7.88 for 1 US dollar on 19 December 2008.[25] After a period of instability, a new peg of 8 hryvnias per US dollar was established, remaining for several years. In 2012, the peg was changed to a managed float (much like that of the Chinese yuan) as the euro and other European countries' currencies weakened against the dollar due to the European debt crisis, and the value in mid-2012 was about ₴8.14 per dollar.[citation needed]
As from 7 February 2014, following political instability in Ukraine, the National Bank of Ukraine changed the hryvnia into a fluctuating/floating currency in an attempt to meet IMF requirements and to try to enforce a stable price for the currency in the Forex market.[26] In 2014 and 2015 the hryvnia lost about 70% of its value against the U.S. dollar, with the currency reaching a record low of ₴33 per dollar in February 2015.[27]
On 31 July 2019, the hryvnia to U.S. dollar exchange rate in the interbank foreign exchange market strengthened to ₴24.98 — the highest level in 3 years.[28]
Following the 2022 Russian invasion of Ukraine, the official exchange rate of hryvnia was fixed at ₴29.25 per 1 US dollar and ₴33.17 per 1 Euro. On July 21, 2022, it was devalued to ₴36.92 per 1 US dollar.[29]
The real exchange rate of the hryvnia to the U.S. dollar is approximately ₴40.58 per US dollar on the black market.[30]
Hryvnia exchange rate to US dollar (from 1996) and Euro (from 1999)
Economy of Ukraine
List of commemorative coins of Ukraine
List of currencies in Europe
Hryvnia
Grzywna (unit)
^ "Ukrania Hryvnia". Archived from the original on 8 March 2017.
^ "Archived". Archived from the original on 2016-05-06. Retrieved 2016-06-01.
^ Langer, Lawrence N. (2002). "Grivna". Historical Dictionary of Medieval Russia. Scarecrow Press. pp. 56–57. ISBN 9780810866188. Archived from the original on 2020-01-17. Retrieved 2022-03-02.
^ * Michael Everson's "Proposal to encode the HYRVNIA SIGN and CEDI SIGN in the UCS" (PDF). 23 April 2004. Archived (PDF) from the original on 3 October 2020. Retrieved 23 April 2004.
^ "National Bank of Ukraine". Bank.gov.ua. Archived from the original on 2 April 2019. Retrieved 11 February 2017.
^ a b "Volodymyr Matvienko. Autograph on Hryvnia" (in Ukrainian). Archived from the original on December 31, 2008.
^ "Ukrainian hryvnia to be dropped in April: Crimean gov't official". CCTV News America. 18 March 2014. Archived from the original on 23 April 2014. Retrieved 18 March 2014.
^ a b Crimea enters the rouble zone Archived 2014-11-29 at the Wayback Machine, ITAR-TASS (1 June 2014)
^ ""In theory, it is possible to pay with Ukrainian hryvnias, Russian rubles, US dollars, and euros in the DPR and the LPR. However, only the two former currencies are in common use. Their exchange rate has been fixed by the governments, and is 1:2 (one hryvnia is the equivalent of two rubles). However, there is a shortage of low denomination rubles, so the Ukrainian hryvnia is still the most popular means of payment."". Osw.waw.pl. 17 June 2015. Archived from the original on 8 November 2020. Retrieved 2 April 2019.
^ "НБУ в ближайшие месяцы рассмотрит вопрос о целесообразности использования 1-2-копеечных монет". Rbc.ua. Archived from the original on 2 April 2019. Retrieved 2 April 2019.
^ a b "NBU Streamlines Hryvnia Banknote and Coin Denominations". National Bank of Ukraine. 25 June 2019. Archived from the original on 22 October 2020. Retrieved 17 July 2019.
^ НБУ рассмотрит вопрос введения в обращение 2-гривневой монеты [RBK will consider the issuance of 2-hryvnia coin] (in Russian). RBK Ukraina. 26 October 2012. Archived from the original on 21 February 2014.
^ "Cash_Circulation". October 28, 2019. Archived from the original on 2019-10-28.
^ "Монетами 1, 2 та 5 копійок не можна розраховуватися з 1 жовтня 2019 року". Національний банк України. Archived from the original on 2021-04-16. Retrieved 2021-04-16.
^ "Розмінні й обігові монети". Bank.gov.ua. Archived from the original on 2 April 2019. Retrieved 2 April 2019.
^ a b c d "Національний банк презентував нові обігові монети". Bank.gov.ua. Archived from the original on 2018-08-30. Retrieved 2018-04-07.
^ "25-Kopiika Coins and Old Series Hryvnia Banknotes to Cease Being Legal Ten-der from 1 October 2020". National Bank of Ukraine. 2 Sep 2020. Archived from the original on 21 October 2020. Retrieved 19 October 2020.
^ "NBU to Withdraw 25.Kopiyka Coins and Hryvnia Banknotes 01 Designed before 2003 from Circulation, Effective 1 October 2020". National Bank of Ukraine. 30 Sep 2020. Archived from the original on 13 October 2020. Retrieved 19 October 2020.
^ a b Как появилась гривна [How hryvnia was born] (in Russian). Podrobnosti. 4 September 2006. Archived from the original on 13 March 2014.
^ "The man who designed Hryvnia". Zerkalo Nedeli (in Russian). Archived from the original on April 23, 2008.
^ a b "Hryvnia-Immigrant". Zerkalo Nedeli (in Ukrainian). Archived from the original on 2010-12-29.
^ "NBU Starts Printing Money from Flax – Незалежний АУДИТОР". N-auditor.com.ua. Archived from the original on 10 May 2017. Retrieved 11 February 2017.
^ Brand new 1,000-hryvnia banknote put into circulation on Oct 25 Archived 2019-10-25 at the Wayback Machine, UNIAN (25 October 2019)
^ "Результати пошуку". Bank.gov.ua. Archived from the original on 22 December 2012. Retrieved 11 February 2017.
^ National Bank of Ukraine Archived 2008-12-18 at the Wayback Machine, historical exchange rates
^ "7 лютого 2014 року Національний банк України вводить в обіг пам'ятну монету "Визволення Нікополя від фашистських загарбників"" [7 February 2014 the National Bank of Ukraine will issue commemorative coins "Nikopol Liberation from the Nazis"]. 7 February 2014. Archived from the original on 21 February 2014.
^ Ukraine teeters a few steps from chaos Archived 2019-05-19 at the Wayback Machine, BBC News (5 February 2016)
^ US dollar in Ukraine costs less than Hr 25 for the first time in 3 years Archived 2021-11-09 at the Wayback Machine Kyiv Post, July 31, 2019
^ Ukraine Devalues Hryvnia to Adjust to War-Time Economic Reality bloomberg.com, July 21, 2022
^ The real exchange rate of the hryvnia to U.S. dollar on black market https://usd.currencyrate.today/uah
Krause, Chester L.; Clifford Mishler (1991). Standard Catalog of World Coins: 1801–1991 (18th ed.). Krause Publications. ISBN 0873411501.
Cuhaj, George S., ed. (2006). Standard Catalog of World Paper Money: Modern Issues 1961–Present (12th ed.). Krause Publications. ISBN 0-89689-356-1.
Media related to Hryvnias at Wikimedia Commons
History of Hryvnia
National Bank of Ukraine announcement of Hryvnia Sign (in Ukrainian)
Proposed symbols for hryvnia during design competition (in Ukrainian)
Detailed Catalog of Ukrainian paper money
Pictures of hryvnia bills introduced in 1997
The first Ukrainian Money (1917–1922) Odessa Numismatics Museum
Ukraine monetary reform. Numismatics (in Russian)
List of coins of Ukraine (Numista)
Hyperprior
In Bayesian statistics, a hyperprior is a prior distribution on a hyperparameter, that is, on a parameter of a prior distribution.
As with the term hyperparameter, the prefix hyper is used to distinguish it from a prior distribution on a parameter of the model for the underlying system. Hyperpriors arise particularly in the use of hierarchical models.[1][2]
For example, if one is using a beta distribution to model the distribution of the parameter p of a Bernoulli distribution, then:
• The Bernoulli distribution (with parameter p) is the model of the underlying system;
• p is a parameter of the underlying system (Bernoulli distribution);
• The beta distribution (with parameters α and β) is the prior distribution of p;
• α and β are parameters of the prior distribution (beta distribution), hence hyperparameters;
• A prior distribution of α and β is thus a hyperprior.
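A minimal numerical sketch of the beta-Bernoulli example above (illustrative only: the data, the grid of hyperparameter values, and the uniform grid hyperprior are arbitrary choices):

import numpy as np
from scipy.stats import betabinom

n, k = 20, 14                       # observed: k successes in n Bernoulli trials
alphas = np.arange(0.5, 10.5, 0.5)  # grid of candidate hyperparameter values
betas = np.arange(0.5, 10.5, 0.5)
A, B = np.meshgrid(alphas, betas, indexing="ij")
hyperprior = np.full(A.shape, 1.0 / A.size)   # uniform hyperprior on the grid

# Marginal likelihood of the data for each (alpha, beta) is beta-binomial.
marginal = betabinom(n, A, B).pmf(k)

# Hyperposterior over (alpha, beta), by Bayes' rule on the grid.
hyperposterior = hyperprior * marginal
hyperposterior /= hyperposterior.sum()

# Posterior mean of p: average of the conjugate Beta posterior means,
# weighted by the hyperposterior.
print(np.sum(hyperposterior * (A + k) / (A + B + n)))

Here the hyperposterior is computed on a grid via Bayes' rule, and the posterior mean of p is the hyperposterior-weighted average of the conjugate Beta posterior means.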
In principle, one can iterate the above: if the hyperprior itself has hyperparameters, these may be called hyperhyperparameters, and so forth.
One can analogously call the posterior distribution on the hyperparameter the hyperposterior, and, if these are in the same family, call them conjugate hyperdistributions or a conjugate hyperprior. However, this rapidly becomes very abstract and removed from the original problem.
Purpose
Hyperpriors, like conjugate priors, are a computational convenience – they do not change the process of Bayesian inference, but simply allow one to more easily describe and compute with the prior.
Uncertainty
Firstly, use of a hyperprior allows one to express uncertainty in a hyperparameter: taking a fixed prior is an assumption, varying a hyperparameter of the prior allows one to do sensitivity analysis on this assumption, and taking a distribution on this hyperparameter allows one to express uncertainty in this assumption: "assume that the prior is of this form (this parametric family), but that we are uncertain as to precisely what the values of the parameters should be".
Mixture distribution
More abstractly, if one uses a hyperprior, then the prior distribution (on the parameter of the underlying model) is itself a mixture density: it is the weighted average of the various prior distributions (over different hyperparameters), with the hyperprior giving the weights. This adds additional possible distributions (beyond the parametric family one is using), because parametric families of distributions are generally not convex sets – as a mixture density is a convex combination of distributions, it will in general lie outside the family. For instance, the mixture of two normal distributions is not a normal distribution: if one takes different means (sufficiently distant) and mixes 50% of each, one obtains a bimodal distribution, which is thus not normal. In fact, the convex hull of normal distributions is dense in the set of all distributions, so in some cases one can approximate a given prior arbitrarily closely by using a family with a suitable hyperprior.
What makes this approach particularly useful is if one uses conjugate priors: individual conjugate priors have easily computed posteriors, and thus a mixture of conjugate priors is the same mixture of posteriors: one only needs to know how each conjugate prior changes. Using a single conjugate prior may be too restrictive, but using a mixture of conjugate priors may give one the desired distribution in a form that is easy to compute with. This is similar to decomposing a function in terms of eigenfunctions – see Conjugate prior: Analogy with eigenfunctions.
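A short sketch of this updating rule (the mixture components and the data are arbitrary, purely illustrative choices): each Beta component updates conjugately, and the mixture weights are re-weighted by each component's marginal likelihood of the data.

import numpy as np
from scipy.special import betaln

# Prior: 0.5*Beta(2, 8) + 0.5*Beta(8, 2), a bimodal prior outside the Beta family.
weights = np.array([0.5, 0.5])
a = np.array([2.0, 8.0])
b = np.array([8.0, 2.0])

k, n = 7, 10   # observed successes / trials

# Each component updates as an ordinary conjugate Beta prior...
a_post = a + k
b_post = b + (n - k)

# ...and the mixture weights are re-weighted by each component's marginal
# likelihood of the data (beta-binomial, up to a common factor that cancels).
log_marg = betaln(a + k, b + n - k) - betaln(a, b)
new_weights = weights * np.exp(log_marg)
new_weights /= new_weights.sum()

print(new_weights, a_post, b_post)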
Dynamical system
A hyperprior is a distribution on the space of possible hyperparameters. If one is using conjugate priors, then this space is preserved by moving to posteriors – thus as data arrives, the distribution changes, but remains on this space: as data arrives, the distribution evolves as a dynamical system (each point of hyperparameter space evolving to the updated hyperparameters), over time converging, just as the prior itself converges.
References
1. Ntzoufras, Ioannis (2009). "Bayesian Hierarchical Models". Bayesian Modelling using WinBUGS. Wiley. pp. 305–340. ISBN 978-0-470-14114-4.
2. McElreath, Richard (2020). "Models With Memory". Statistical Rethinking : A Bayesian Course with Examples in R and Stan. CRC Press. ISBN 978-0-367-13991-9.
Further reading
• Bernardo, J. M.; Smith, A. F. M. (2000). Bayesian Theory. New York: Wiley. ISBN 0-471-49464-X.
\begin{document}
\theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{identity}[theorem]{Identity}
\theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture}
\theoremstyle{remark} \newtheorem{remark}[theorem]{Remark}
\begin{center} \vskip 1cm{\LARGE\bf On a Two-Parameter Family of \vskip .08in Generalizations of Pascal's Triangle} \vskip 1cm \large Michael A. Allen\\ Physics Department\\ Faculty of Science\\ Mahidol University\\ Rama 6 Road\\ Bangkok 10400 \\ Thailand \\ \href{mailto:[email protected]}{\tt [email protected]} \\ \end{center}
\vskip .2 in
\begin{abstract} We consider a two-parameter family of triangles whose $(n,k)$-th entry (counting the initial entry as the $(0,0)$-th entry) is the number of tilings of $N$-boards (which are linear arrays of $N$ unit square cells for any nonnegative integer $N$) with unit squares and $(1,m-1;t)$-combs for some fixed $m=1,2,\dots$ and $t=2,3,\dots$ that use $n$ tiles in total of which $k$ are combs. A $(1,m-1;t)$-comb is a tile composed of $t$ unit square sub-tiles (referred to as teeth) placed so that each tooth is separated from the next by a gap of width $m-1$. We show that the entries in the triangle are coefficients of the product of two consecutive generalized Fibonacci polynomials each raised to some nonnegative integer power. We also present a bijection between the tilings of an $(n+(t-1)m)$-board with $k$ $(1,m-1;t)$-combs and the remaining cells filled with squares and the $k$-subsets of $\{1,\ldots,n\}$ such that no two elements of the subset differ by a multiple of $m$ up to $(t-1)m$. We can therefore give a combinatorial proof of how the number of such $k$-subsets is related to the coefficient of a polynomial. We also derive a recursion relation for the number of closed walks from a particular node on a class of directed pseudographs and apply it to obtain an identity concerning the $m=2$, $t=5$ instance of the family of triangles. Further identities of the triangles are also established, mostly via combinatorial proof. \end{abstract}
\section{Introduction}
In a recent paper \cite{AE22}, that we will henceforth refer to as AE22, we considered two one-parameter families of generalizations of Pascal's triangle. Regarding the triangles as lower triangular matrices, the members of both families have ones in the leftmost column and the repetition of 1 followed by $m-1$ zeros along the leading diagonal, where $m$ is a positive integer. In the case of the first family, the rest of the entries are obtained using Pascal's recurrence, i.e., $\tbinom{n}{k}_m=\tbinom{n-1}{k}_m+\tbinom{n-1}{k-1}_m$, where $\tbinom{n}{k}_m$ is the $(n,k)$-th entry (counting the first entry as being in row $n=0$ and column $k=0$) of the $m$-th triangle of the family. We showed that this is equivalent to the triangles being row-reversed $(1/(1-x^m),x/(1-x))$ Riordan arrays. A \textit{$(p(x),q(x))$ Riordan array} is an infinite lower triangular matrix whose $(n,k)$-th entry is the coefficient of $x^n$ in the series expansion of $p(x)(q(x))^k$ \cite{SGWW91,Bar=16}. The row-reversed version of a Riordan array has the entries up to and including the leading diagonal in each row placed in reverse order \cite{AE22}.
The main focus of AE22 was on a second family of triangles whose $(n,k)$-th entry (denoted by $\tchb{n}{k}_m$) is the number of ways to tile $N$-boards (which are linear arrays of $N\geq0$ unit square cells) using $k$ $(1,m-1)$-fences and $n-k$ squares (and thus $n$ tiles in total). A \textit{$(1,m-1)$-fence} is a tile composed of two unit-square sub-tiles separated by a gap of width $m-1$ \cite{Edw08,EA15}. The two families of triangles coincide for $m=1,2$ and the $m=1$ case is Pascal's triangle, i.e., $\tbinom{n}{k}_1=\tchb{n}{k}_1=\tbinom{n}{k}$ and $\tbinom{n}{k}_2=\tchb{n}{k}_2$ for all $n$ and $k$. We showed that for $j\ge0$, $k\ge0$, $m\ge1$, and $r=0,\ldots,m-1$, the entry $\tchb{mj+r-k}{k}_m$ is the coefficient of $x^k$ in $f_j^{m-r}(x)f_{j+1}^r(x)$, where in this instance the Fibonacci polynomial $f_n(x)$ is defined by $f_n(x)=f_{n-1}(x)+xf_{n-2}(x)+\delta_{n,0}$, $f_{n<0}(x)=0$, where $\delta_{i,j}$ is 1 if $i=j$ and zero otherwise. By first identifying a bijection between the tilings of an $(n+m)$-board with $k$ $(1,m-1)$-fences and $n+m-2k$ squares and the subsets of $\mathbb{N}_n=\{1,\ldots,n\}$ containing $k$ elements none of which differ from another element in the subset by $m$, we showed that the number of such subsets, $S^{(m)}(n,k)$, satisfies $S^{(m)}(n,k)=\tchb{n+m-k}{k}_m$.
Here we generalize the second family of triangles by considering the analogous $n$-tile tilings of $N$-boards with $(1,m-1;t)$-combs and squares for positive integer $m$ and $t=2,3,\ldots$. A \textit{$(w,g;t)$-comb} contains $t$ sub-tiles of dimensions $w\times1$ (referred to as \textit{teeth}) separated from one another by gaps of width $g$ \cite{AE-GenFibSqr}. A $(1,m-1;2)$-comb is evidently a $(1,m-1)$-fence and so the $t=2$ instances of the triangles we introduce here coincide with the second family of triangles in AE22.
After introducing the two-parameter family of triangles (along with a less compact version of the triangles which is used in some proofs) in \S\ref{s:fam}, we show how entries of the triangles are related to some generalized Fibonacci polynomials in \S\ref{s:poly}. Then in \S\ref{s:comb} we give a bijection between the tilings of an $(n+(t-1)m)$-board with $k$ $(1,m-1;t)$-combs and $n+(t-1)m-kt$ squares and the $k$-subsets of $\mathbb{N}_n$ such that no two elements of the subset differ by any element of the set $\{m,2m,\ldots,(t-1)m\}$. This enables us to relate the number of such subsets to coefficients of products of powers of two successive generalized Fibonacci polynomials. The remainder of the paper concerns finding identities satisfied by entries in the triangle. Most of the identities are obtained via the enumeration of metatiles with a certain length or number of tiles, which can be problematic if the metatiles contain an arbitrary number of tiles. A \textit{metatile} is a gapless grouping of tiles that completely covers a whole number of cells and cannot be split into smaller metatiles. In most cases, there are infinitely many possible metatiles and there have been various approaches to the enumeration problem: obtaining the symbolic representation of all the families of metatiles \cite{EA19}, obtaining a recursion relation for the number of metatiles of a certain length and thus expressing the number in terms of a known sequence \cite{EA20a}, identifying a bijection between the metatiles and a set of objects whose number is known \cite{AE-GenFibSqr}, and constructing a directed pseudograph (that we refer to as a digraph) to represent the placing of tiles \cite{EA15}. We will use the first and last of these approaches and these are described further in \S\ref{s:meta}. Recursion relations for numbers of tilings corresponding to a particular class of digraph are derived in the appendix and these are used to obtain identities for the $m=2$, $t=5$ triangle in \S\ref{s:idn}, where further identities concerning the triangles are also derived, mostly via combinatorial proof.
\section{The two-parameter family of triangles}\label{s:fam}
\begin{figure}
\caption{The start of a Pascal-like triangle (\seqnum{A354665} in the
OEIS \cite{Slo-OEIS}) whose
$(n,k)$-th entry, $\protect\tchb{n}{k}_{2,3}$, is
the number of $n$-tile tilings using $k$
$(1,1;3)$-combs (and $n-k$ squares). Entries in bold font (and those
in bold font in Figs.~\ref{f:m=2,t=4}--\ref{f:m=3,t=3}) are covered by
identities in \S\ref{s:idn}.}
\label{f:m=2,t=3}
\end{figure}
For $m=1,2,\ldots$ and $t=2,3,\ldots$, let $\tchb{n}{k}_{m,t}$ denote the number of $n$-tile tilings of $N$-boards that use $k$ $(1,m-1;t)$-combs (and $n-k$ squares). We choose that $\tchb{0}{0}_{m,t}=1$ and that $\tchb{n}{k<0}_{m,t}=\tchb{n}{k>n}_{m,t}=0$. As a $(1,0;t)$-comb is just a $t$-omino and the number of $n$-tile tilings using $k$ $t$-ominoes and $n-k$ squares is
simply $\tbinom{n}{k}$ for any $t$, we have $\tchb{n}{k}_{1,t\ge2}=\tbinom{n}{k}$ which is Pascal's triangle (\seqnum{A007318}). The triangles corresponding to $m=2,3,4,5$ with $t=2$ are \seqnum{A059259}, \seqnum{A350110}, \seqnum{A350111}, and \seqnum{A350112}, respectively \cite{AE22}. We show examples of the starts of triangles for combs with at least 3 teeth in Figs.~\ref{f:m=2,t=3}--\ref{f:m=3,t=3}.
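For readers who wish to reproduce small entries of these triangles, the following brute-force sketch (a Python illustration of ours rather than part of the construction, with function and variable names of our own choosing) counts the $n$-tile tilings directly from the definition: the board length is necessarily $n+(t-1)k$, and the leftmost empty cell is filled either by a square or by the leftmost tooth of a comb.
\begin{verbatim}
def chb(n_tiles, k_combs, m, t):
    # Number of n_tiles-tile tilings that use k_combs (1,m-1;t)-combs and
    # n_tiles-k_combs squares; the board length is n_tiles+(t-1)*k_combs.
    if not 0 <= k_combs <= n_tiles:
        return 0
    N = n_tiles + (t - 1) * k_combs
    def rec(occupied, combs, squares):
        if combs == 0 and squares == 0:
            return 1
        i = min(c for c in range(N) if c not in occupied)  # leftmost empty cell
        total = 0
        if squares:
            total += rec(occupied | {i}, combs, squares - 1)
        teeth = {i + j * m for j in range(t)}
        if combs and max(teeth) < N and not (teeth & occupied):
            total += rec(occupied | teeth, combs - 1, squares)
        return total
    return rec(set(), k_combs, n_tiles - k_combs)

# e.g., row n = 4 of the m = 2, t = 3 triangle:
# [chb(4, k, 2, 3) for k in range(5)]  ->  [1, 2, 4, 0, 1]
\end{verbatim}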
\begin{figure}
\caption{The start of a Pascal-like triangle (\seqnum{A354666}) with
entries $\protect\tchb{n}{k}_{2,4}$.}
\label{f:m=2,t=4}
\end{figure}
\begin{figure}
\caption{The start of a Pascal-like triangle (\seqnum{A354667}) with
entries $\protect\tchb{n}{k}_{2,5}$.}
\label{f:m=2,t=5}
\end{figure}
\begin{figure}
\caption{The start of a Pascal-like triangle (\seqnum{A354668}) with
entries $\protect\tchb{n}{k}_{3,3}$.}
\label{f:m=3,t=3}
\end{figure}
We can also create a triangle of $\tch{n}{k}_{m,t}$ where this denotes the number of tilings of an $n$-board that use $k$ $(1,m-1;t)$-combs (and therefore $n-kt$ squares) again with $\tch{0}{0}_{m,t}=1$. The two triangles are related via the following identity.
\begin{identity}\label{I:ch=chb} For $m\ge1$, $t\ge2$, and $n\ge k\ge0$, \[ \ch{n}{k}_{m,t}=\chb{n-(t-1)k}{k}{m,t}. \] \end{identity} \begin{proof} If a tiling contains $n-(t-1)k$ tiles of which $k$ are $(1,m-1;t)$-combs (and so $n-kt$ are squares), the total length is $n-kt+kt=n$. \end{proof}
We will refer to the ray of entries given by $\tchb{n-\mu k}{k}_{m,t}$ for $k=0,\ldots,\floor{n/(\mu+1)}$ as the $n$-th \textit{$(1,\mu)$-antidiagonal}. A $(1,1)$-antidiagonal is therefore what is normally referred to simply as an antidiagonal. As a consequence of Identity~\ref{I:ch=chb}, the $(1,t-1)$-antidiagonals of the $\tchb{n}{k}_{m,t}$ triangle are the rows of the $\tch{n}{k}_{m,t}$ triangle. In the rest of the paper we therefore only give identities for the $\tchb{n}{k}_{m,t}$ triangle as it is more `compact' in the sense that its rows contain fewer trailing zeros. However, as in AE22, some of the identities are more straightforward to prove by considering the tiling of an $n$-board, in which case we need to consider $\tch{n}{k}_{m,t}$. The following bijection (which is established in the proof of Theorem~2.1 in \cite{AE-GenFibSqr}) will be used in such proofs.
\begin{lemma}\label{L:bij} For $t\ge2$, $j\ge0$, and $r=0,\ldots,m$, where $m\ge1$, there is a bijection between the tilings of an $(mj+r)$-board using $k$ $(1,m-1;t)$-combs and $mj+r-kt$ squares and the tilings of an ordered $m$-tuple of $r$ $(j+1)$-boards followed by $m-r$ $j$-boards using $k$ $t$-ominoes and $mj+r-kt$ squares. \end{lemma}
\section{Relation of the triangles to polynomials}\label{s:poly}
For $t\geq2$ we define a $(1,t)$-bonacci polynomial as follows: \begin{equation}\label{e:f(x)} f^{(t)}_n(x)=f^{(t)}_{n-1}(x)+xf^{(t)}_{n-t}(x)+\delta_{n,0}, \quad f^{(t)}_{n<0}(x)=0. \end{equation} The $(1,2)$-bonacci polynomials $f^{(2)}_n(x)$ are the Fibonacci polynomials used in AE22. We refer to the sequence defined by \begin{equation}\label{e:f} f^{(t)}_n=f^{(t)}_{n-1}+f^{(t)}_{n-t}+\delta_{n,0}, \quad f^{(t)}_{n<0}=0, \end{equation} for $t\geq2$ as the $(1,t)$-bonacci numbers. The $t=2,\ldots,8$ cases are, respectively, the Fibonacci numbers (\seqnum{A000045}), the Narayana's cows sequence (\seqnum{A000930}) and sequences \seqnum{A003269}, \seqnum{A003520}, \seqnum{A005708}, \seqnum{A005709}, and \seqnum{A005710} in the OEIS.
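To make the recurrence concrete, the following short sketch (our own illustration, not part of the development; the function name is our choice) builds the coefficient lists of the polynomials $f^{(t)}_0(x),\ldots,f^{(t)}_n(x)$ directly from \eqref{e:f(x)}; summing a coefficient list recovers the corresponding $(1,t)$-bonacci number, in agreement with Lemma~\ref{L:sumcoeff} below.
\begin{verbatim}
def bonacci_polys(t, n_max):
    # Coefficient lists [c_0, c_1, ...] of f^{(t)}_0(x), ..., f^{(t)}_{n_max}(x),
    # built from f^{(t)}_n = f^{(t)}_{n-1} + x*f^{(t)}_{n-t} + delta_{n,0}.
    def add(p, q):
        return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                for i in range(max(len(p), len(q)))]
    polys = []
    for n in range(n_max + 1):
        p = [1] if n == 0 else list(polys[n - 1])   # f_{n-1}(x), plus delta at n = 0
        if n >= t:
            p = add(p, [0] + polys[n - t])          # + x * f_{n-t}(x)
        polys.append(p)
    return polys

# e.g., bonacci_polys(3, 6)[6] -> [1, 4, 1], i.e., f^{(3)}_6(x) = 1 + 4x + x^2,
# whose coefficient sum 6 is the corresponding Narayana's cows number.
\end{verbatim}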
\begin{lemma}\label{L:sumcoeff} The sum of the coefficients of $f^{(t)}_n(x)$ is $f^{(t)}_n(1)=f^{(t)}_n$. \end{lemma} \begin{proof} The sum of the coefficients of $f^{(t)}_n(x)$ can be expressed as $f^{(t)}_n(1)$. Putting $x=1$ into \eqref{e:f(x)} gives \eqref{e:f} with $f^{(t)}_n$ replaced by $f^{(t)}_n(1)$. \end{proof}
In the next lemma and theorem (which are generalizations of Lemma~13 and Theorem~14 in AE22) we employ the coefficient operator $[x^k]$ which denotes the coefficient of $x^k$ in the term it precedes.
\begin{lemma}\label{L:f=t} Let $f(t,n,k)=[x^k]f^{(t)}_n(x)$ and let $b(t,n,k)$ be the number of tilings of an $n$-board with squares and $t$-ominoes that use exactly $k$ $t$-ominoes. Then $f(t,n,k)=b(t,n,k)$ for all $n$ and $k$. \end{lemma}
\begin{proof} This follows from Theorem~10 of AE22. The metatiles are the square and $t$-omino. \end{proof}
\begin{theorem}\label{T:poly} For $j\ge0$, $k\ge0$, $m\ge1$, $t\ge2$, and $r=0,\ldots,m-1$, \begin{equation}\label{e:poly} \chb{mj+r-(t-1)k}{k}{m,t} =[x^k]\bigl(f^{(t)}_j(x)\bigr)^{m-r}\bigl(f^{(t)}_{j+1}(x)\bigr)^r. \end{equation} \end{theorem} \begin{proof} From Identity~\ref{I:ch=chb}, $\tchb{mj+r-(t-1)k}{k}_{m,t}=\tch{mj+r}{k}_{m,t}$. From Lemma~\ref{L:bij}, $\tch{mj+r}{k}_{m,t}$ equals the number of ways to tile an ordered $m$-tuple of $r$ $(j+1)$-boards followed by $m-r$ $j$-boards using $k$ $t$-ominoes (and $mj+r-kt$ squares). The number of such tilings of the $m$-tuple of boards is \[ \sum_{\substack{k_1\ge0,\,k_2\ge0,\,\ldots,\,k_m\ge0,\\k_1+k_2+\cdots+k_m=k}} \Biggl(\prod_{i=1}^r b(t,j+1,k_i)\Biggr) \Biggl(\prod_{i=r+1}^m b(t,j,k_i)\Biggr) \] in which the first product is omitted when $r=0$. The coefficient of $x^k$ in $\bigl(f^{(t)}_{j+1}(x)\bigr)^r\bigl(f^{(t)}_j(x)\bigr)^{m-r}$ is \begin{align*} &[x^k] \Biggl(\prod_{i=1}^r \sum_{k_i=0}^{\floor{(j+1)/t}}f(t,j+1,k_i)x^{k_i}\Biggr) \Biggl(\prod_{i=r+1}^m \sum_{k_i=0}^{\floor{j/t}}f(t,j,k_i)x^{k_i}\Biggr)\\ &\qquad= [x^k]\sum_{k_1\ge0,k_2\ge0,\ldots,k_m\ge0}\Biggl(\prod_{i=1}^r f(t,j+1,k_i)\Biggr) \Biggl(\prod_{i=r+1}^m f(t,j,k_i)\Biggr)x^{k_1+k_2+\cdots+k_m}\\ &\qquad= \sum_{\substack{k_1\ge0,\,k_2\ge0,\,\ldots,\,k_m\ge0,\\k_1+k_2+\cdots+k_m=k}} \Biggl(\prod_{i=1}^r f(t,j+1,k_i)\Biggr) \Biggl(\prod_{i=r+1}^m f(t,j,k_i)\Biggr). \end{align*} The result then follows from Lemma~\ref{L:f=t}. \end{proof}
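As a small illustrative check of \eqref{e:poly} (an example of ours, not needed for the proof), take $m=2$, $t=3$, $j=4$, $r=0$, and $k=2$. Since $f^{(3)}_4(x)=1+2x$, the right-hand side is $[x^2](1+2x)^2=4$. The left-hand side is $\tchb{4}{2}_{2,3}$, the number of $4$-tile tilings that use two $(1,1;3)$-combs; by Identity~\ref{I:ch=chb} these correspond to the tilings of an $8$-board with two combs and two squares, of which there are indeed four: the leftmost teeth of the two combs can occupy a pair of adjacent cells (in three ways) or cells 1 and 4, with the two squares filling the remaining cells.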
The following identity gives the sums of the $(1,t-1)$-antidiagonals of the $\tchb{n}{k}_{m,t}$ triangle. \begin{identity}\label{I:adiagsum} For $t\ge2$, $j\ge0$, $m\ge1$, and $r=0,\ldots,m-1$, \[ \sum_{k=0}^{\floor{(mj+r)/t}}\chb{mj+r-(t-1)k}{k}{m,t} =\bigl(f^{(t)}_j\bigr)^{m-r}\bigl(f^{(t)}_{j+1}\bigr)^r. \] \end{identity} \begin{proof} Summing \eqref{e:poly} over all permitted $k$ gives the sum of all coefficients of \[ F(x)=\bigl(f^{(t)}_j(x)\bigr)^{m-r}\bigl(f^{(t)}_{j+1}(x)\bigr)^r \] which is $F(1)$ and equals $\bigl(f^{(t)}_j\bigr)^{m-r}\bigl(f^{(t)}_{j+1}\bigr)^r$ by Lemma~\ref{L:sumcoeff}. \end{proof}
\section{Relation of the triangles to restricted combinations}\label{s:comb}
We now look at $S^{(m,t)}(n,k)$, the number of subsets of $\mathbb{N}_n$ of size $k$ such that the difference of any two elements of the subset does not equal any element in the set $\mathcal{Q}=\{m,2m,\ldots,(t-1)m\}$. For example, $S^{(2,3)}(5,0)=1$, $S^{(2,3)}(5,1)=5$, $S^{(2,3)}(5,2)=6$, and $S^{(2,3)}(5,k>2)=0$ since the possible subsets of $\mathbb{N}_5$ such that no two elements in the subset differ by 2 or 4 are $\{\}$, $\{1\}$, $\{2\}$, $\{3\}$, $\{4\}$, $\{5\}$, $\{1,2\}$, $\{2,3\}$, $\{3,4\}$, $\{4,5\}$, $\{1,4\}$, and $\{2,5\}$. There is a formula for $S^{(m,t)}(n,k)$ in terms of sums of products of binomial coefficients \cite{MS08}. Here we will show that $S^{(m,t)}(n,k)=\tchb{n+(t-1)(m-k)}{k}_{m,t}$ and hence obtain an expression for the number of subsets in terms of coefficients of products of $(1,t)$-bonacci polynomials which is a generalization of earlier results \cite{KL91c,AE22}. We first establish the following bijection.
\begin{lemma}\label{L:ksub} For $m,n\ge1$, $t\ge2$, and $k\ge0$, there is a bijection between the $k$-subsets of $\mathbb{N}_n$ such that all pairs of elements taken from a subset do not differ by an element from the set $\mathcal{Q}=\{m,2m,\ldots,(t-1)m\}$, and the tilings of an $(n+(t-1)m)$-board with $k$ $(1,m-1;t)$-combs and $n+(t-1)m-kt$ squares. \end{lemma} \begin{proof} We label the cells of the $(n+(t-1)m)$-board from 1 to $n+(t-1)m$. If a $k$-subset contains element $i$ then we place a comb so that its left tooth occupies cell $i$. Notice that if $i=n$ then the rightmost tooth occupies the final cell on the board. After placing combs corresponding to each element of the subset, the rest of the board is filled with squares of which there must be $n+(t-1)m-kt$. Conversely, the tiling of any $(n+(t-1)m)$-board with $k$ combs corresponds to a $k$-subset where no two elements differ by an element of $\mathcal{Q}$
since the remaining teeth of a comb whose leftmost tooth occupies cell $i$ lie on cells $i+m,i+2m,\ldots,i+(t-1)m$ which means none of these cells can be occupied by the leftmost tooth of another comb. \end{proof}
\begin{corollary}\label{C:S=chb} For $m,n\ge1$, $t\ge2$, and $k\ge0$, $S^{(m,t)}(n,k)=\tchb{n+(t-1)(m-k)}{k}_{m,t}$. \end{corollary} \begin{proof} From Lemma~\ref{L:ksub}, $S^{(m,t)}(n,k)=\tch{n+(t-1)m}{k}_{m,t}$. Identity~\ref{I:ch=chb} then gives the result. \end{proof}
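Both the example above and Corollary~\ref{C:S=chb} are easy to check numerically for small parameter values; the following sketch (again our own illustration, with names of our choosing) counts the restricted subsets by direct enumeration.
\begin{verbatim}
from itertools import combinations

def S(m, t, n, k):
    # Number of k-subsets of {1,...,n} in which no two elements differ by
    # any of m, 2m, ..., (t-1)m (direct enumeration; practical for small n).
    forbidden = {j * m for j in range(1, t)}
    return sum(1 for c in combinations(range(1, n + 1), k)
               if all(b - a not in forbidden for a in c for b in c if b > a))

# e.g., [S(2, 3, 5, k) for k in range(4)]  ->  [1, 5, 6, 0]
\end{verbatim}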
\begin{corollary}\label{C:A} For $m,n\ge1$, $t\ge2$, the sum of the elements in the $n$-th $(1,t-1)$-antidiagonal of $\tchb{n}{k}_{m,t}$ is the number of subsets of $\mathbb{N}_{n-(t-1)m}$
chosen so that no two elements of the subset differ by any member
of the set $\{m,2m,\ldots,(t-1)m\}$. \end{corollary} \begin{proof} The elements in the $(1,t-1)$-antidiagonal are, for $k\ge0$, $\tchb{n-(t-1)k}{k}_{m,t}=S^{(m,t)}(n-(t-1)m,k)$ by Corollary~\ref{C:S=chb}. Summing over all $k$ then gives the result. \end{proof}
The next two corollaries follow from Theorem~\ref{T:poly} and Identity~\ref{I:adiagsum}, respectively.
\begin{corollary} For $j,k\ge0$, $m\ge1$, $t\ge2$, and $r=0,\ldots,m-1$, \[ S^{(m,t)}(mj+r,k) =[x^k]\bigl(f^{(t)}_{j+t-1}(x)\bigr)^{m-r}\bigl(f^{(t)}_{j+t}(x)\bigr)^r. \] \end{corollary}
\begin{corollary} For $j\ge0$, $m\ge1$, $t\ge2$, and $r=0,\ldots,m-1$, the number of subsets of $\mathbb{N}_{mj+r}$ each of which lacks pairs of elements that differ by a multiple of $m$ up to $(t-1)m$ is $\bigl(f^{(t)}_{j+t-1}\bigr)^{m-r}\bigl(f^{(t)}_{j+t}\bigr)^r$. \end{corollary}
\section{Metatiles and digraphs}\label{s:meta}
The simplest metatiles when tiling with squares ($S$) and $(1,m-1;t)$-combs ($C$) are the \textit{free square} ($S$), what we will refer to as an \textit{$m$-comb} ($C^m$) which is $m$ interlocking combs with no gaps, and the \textit{filled comb} ($CS^{(m-1)(t-1)}$) which is a comb with all the gaps filled with squares. The $m=2$, $t=3$ instances of these are the first three metatiles depicted in Fig.~\ref{f:meta}(a).
\begin{figure}
\caption{Metatiles when tiling with squares and $(1,1;3)$-combs ($m=2$,
$t=3$). (a)~A 31-board tiled with all the metatiles containing less
than 6 tiles. Shaded (white) cells are occupied by squares
(combs). Bold lines indicate which teeth belong to the same comb.
Dashed lines show boundaries between metatiles. The symbolic
representation is above each metatile. (b)~The digraph for
generating metatiles.}
\label{f:meta}
\end{figure}
When $m=1$, the only metatiles are the two individual tiles themselves: a square and a comb, which, as the gaps are of zero width, is just a $t$-omino. When $m>1$, the only case when there is a finite number of metatiles is when $t=2$ \cite{EA21}. There are two cases when there is a single infinite sequence of metatiles: the $(m,t)=(3,2)$ case, which was dealt with in AE22, and when $m=2$ and $t=3$. In the latter case, the metatiles are $S$, $C^2$, and $CSC^jS$ for $j\ge0$, as illustrated in Fig.~\ref{f:meta}(a). This infinite sequence of metatiles is analogous to that found for the $(m,t)=(3,2)$ case \cite[\S6]{AE22}: $CS$ has a single remaining unit-width slot which can be filled either with an $S$, thus completing the metatile, or with the left tooth of a $C$ (to give $CSC$) which again results in a slot of unit width.
For a particular choice of types of tiles, a systematic way to generate all metatiles and, in the simpler cases, obtain finite-order recursion relations for the number of tilings is via a directed pseudograph (henceforth referred to as a \textit{digraph}) in which each arc represents the addition of a tile and each node represents the current state of the yet-to-be-completed metatile \cite{EA15,EA20}. Any such digraph contains a \textit{0 node} which represents the empty board or the completed metatile. The remaining nodes are named using binary strings: the $i$-th digit of the string is 0 (1) if the $i$-th cell, starting at the first unoccupied cell of the incomplete metatile and ending at its last occupied cell, is empty (filled). Thus all nodes (except the 0 node) start with 0 and end with 1. There is a bijection between each possible metatile and each path on the digraph which starts and finishes at the 0 node without visiting it in between. To obtain the symbolic representation of the metatile, one simply reads off the names of the arcs along the path and then simplifies the resulting expression by, for example, replacing $CC$ by $C^2$. The digraph for generating metatiles when tiling with squares and $(1,1;3)$-combs is shown in Fig.~\ref{f:meta}(b).
\section{Further identities}\label{s:idn}
We start by deriving identities that apply to all the triangles and later on obtain recursion relations for some particular instances of the triangles after constructing the corresponding metatile-generating digraphs. The following three identities arise from considering the simplest types of $n$-tile tilings.
\begin{identity}\label{I:chb=1} For $n\ge0$, $m\ge1$, and $t\ge2$, $\tchb{n}{0}_{m,t}=1$. \end{identity} \begin{proof} There is only one way to create an $n$-tile tiling without using any combs, namely, the all-square tiling. \end{proof}
\begin{identity} For $n\ge0$, $m\ge1$, and $t\ge2$, $\tchb{n}{n}_{m,t}=\delta_{n\bmod m,0}$. \end{identity} \begin{proof} The only way to tile without squares is the all $m$-comb tiling which can only occur if the number of tiles is a multiple of $m$. \end{proof}
\begin{identity} For $n,m\ge1$ and $t\ge2$, \[ \chb{n}{1}{m,t}= \begin{cases}0,&\text{if $n<(m-1)(t-1)+1$}; \\n-(m-1)(t-1),&\text{otherwise}.\end{cases} \] \end{identity} \begin{proof} Any $n$-tile tiling using exactly one $(1,m-1;t)$-comb must have a filled comb which itself contains $(m-1)(t-1)+1$ tiles. Thus there can be no $n$-tile tilings using 1 comb that use less than this number of tiles. If $n\ge (m-1)(t-1)+1$, the tiling consists of a filled comb and $n-(m-1)(t-1)-1$ free squares which gives a total of $n-(m-1)(t-1)$ metatile positions in which the filled comb can be placed. \end{proof}
The pattern of zeros seen in the triangles is a result of the following identity. \begin{identity} For $j\ge1$, $m,t\ge2$, $p=1,\ldots,m-1$, and $r=1-(t-2)p,\ldots,p$, \[ \chb{mj-r}{mj-p}{m,t}=0. \] \end{identity} \begin{proof} We first derive an expression for $K$, the maximum number of combs that can be used in the tiling of an $(mJ+R)$-board where $R=0,\ldots,m-1$. From Lemma~\ref{L:bij}, $K$ is also the maximum number of $t$-ominoes that can be used in the tiling of $R$ $(J+1)$-boards and $m-R$ $J$-boards. Then it is straightforward to show that \begin{equation}\label{e:K} K=\begin{cases}\displaystyle\frac{m\bigl(J-(J\bmod t)\bigr)}{t}, & \text{if $J\bmod t<t-1$};\\
\displaystyle\frac{m(J-t+1)}{t}+R, & \text{if $J\bmod t=t-1$}.\end{cases} \end{equation} From Identity~\ref{I:ch=chb}, \[ \chb{mj-r}{mj-p}{m,t}=\ch{tmj-r-(t-1)p}{mj-p}_{m,t}. \] Writing $tmj-r-(t-1)p$ in the form $mJ+R$, if $J=tj-s$ where $s=1,\ldots,t$ then $R=sm-r-(t-1)p$. The condition that $0\leq R<m$ gives $(s-1)m<r+(t-1)p\leq sm$. This condition is compatible with the minimum and maximum values $r+(t-1)p$ can take which are, respectively, 2 and $tm$. From \eqref{e:K} we find for $s>1$ that $K=m(j-1)$ which is always less than $mj-p$. When $s=1$, $K=mj-p-r-(t-2)p$. Since $r+(t-2)p\geq1$ we have $K<mj-p$ in this case as well. \end{proof}
The following identity explains the entries that appear at the vertical boundaries of the nonzero parts of the triangles and start with rising powers of ascending positive integers. Identities~\ref{I:j^p} and \ref{I:binom^p} reduce to Identity~24 of AE22 when $t=2$. \begin{identity}\label{I:j^p} For $j,m\ge1$, $t\ge2$, $s=0,\ldots,t-2$, and $r=0,\ldots,m$, \begin{align*} \chb{m(j+s-1)+r}{m(j-1)}{m,t}&=\binom{j+s-1}{s}^{m-r}\binom{j+s}{s+1}^r\\ &=\begin{cases} j^r, & \text{if $s=0$};\\ \biggl(\dfrac{j(j+1)\cdots(j+s-1)}{s!}\biggr)^m \biggl(\dfrac{j+s}{s+1}\biggr)^r,& \text{if $s>0$}. \end{cases} \end{align*} \end{identity} \begin{proof} From Identity~\ref{I:ch=chb}, \[ \chb{m(j+s-1)+r}{m(j-1)}{m,t}=\ch{m(t(j-1)+s)+r}{m(j-1)}_{m,t}. \] By Lemma~\ref{L:bij}, this is the number of ways to tile $m-r$ boards of length $t(j-1)+s$ and $r$ boards of length $t(j-1)+s+1$ with $m(j-1)$ $t$-ominoes (and $sm+r$ squares). As $s+1<t$, each of the $m$ boards always contains exactly $j-1$ $t$-ominoes. A board of length $t(j-1)+s$ has $s$ squares and so there are $j+s-1$ metatile positions in which to put the squares (the rest being filled by $t$-ominoes) and thus $\tbinom{j+s-1}{s}$ ways to tile it. Likewise, a board of length $t(j-1)+s+1$ has $j+s$ metatile positions and so there are $\tbinom{j+s}{s+1}$ ways to tile it. The result follows from the numbers of each type of board. \end{proof}
The next identity explains the rising powers of integers on non-vertical rays of entries at the boundaries of the nonzero parts of the triangles. \begin{identity}\label{I:binom^p} For $m,j\ge1$, $t\ge2$, and $p=0,\ldots,m$, \[ \chb{mj+(t-2)p}{mj-p}{m,t}=\binom{j+t-2}{t-1}^p. \] \end{identity} \begin{proof}
From Identity~\ref{I:ch=chb} we have \[ \chb{mj+(t-2)p}{mj-p}{m,t} =\ch{tmj-p}{mj-p}_{m,t}=\ch{m(jt-1)+m-p}{(m-p)j+p(j-1)}_{m,t}, \] which is also the number of ways to tile $m-p$ $jt$-boards and $p$ boards of length $jt-1$ using $(m-p)j+p(j-1)$ $t$-ominoes and $(t-1)p$ squares. The $jt$-boards are completely filled by $j$ $t$-ominoes and the $p$ $(jt-1)$-boards each have $j-1$ $t$-ominoes and $t-1$ squares. As on these $p$ shorter boards there are $j+t-2$ tiles in total, there are $\tbinom{j+t-2}{t-1}$ ways to tile each of them which leads to a total of $\tbinom{j+t-2}{t-1}^p$ tilings for the set of boards. \end{proof}
The following two identities are generalizations of Identities~25 and 26 in AE22. \begin{identity} For $j,m\ge1$ and $t\ge2$, \[ \chb{mj+t-1}{mj-1}{m,t}=m\binom{j+t-1}{t}. \] \end{identity} \begin{proof} From Identity~\ref{I:ch=chb}, $\tchb{mj+t-1}{mj-1}_{m,t}=\tch{tmj}{mj-1}_{m,t}$, which, from Lemma~\ref{L:bij}, is the number of ways to tile an $m$-tuple of $jt$-boards with $mj-1$ $t$-ominoes and $t$ squares. As the length of each board is a multiple of $t$, all the squares must lie on the same board. On such a board there are $j-1$ $t$-ominoes and $t$ squares making $j+t-1$ tiles in total. Hence there are $\tbinom{j+t-1}{t}$ possible ways to tile it. As there are $m$ possible boards on which to place all the squares, the result follows. \end{proof}
\begin{identity} For $t\ge2$ and $m,j\ge1$ provided $mj\geq2$, \[ \chb{mj+2(t-1)}{mj-2}{m,t}=\begin{cases} \displaystyle\binom{m}{2}, & \text{if $j=1, m>1$};\\ \displaystyle m\binom{j+2(t-1)}{2t}+\binom{m}{2}\binom{j+t-1}{t}^2, & \text{if $m,j>1$},\\ \displaystyle \binom{j+2(t-1)}{2t}, & \text{if $m=1, j>1$}.\\ \end{cases} \] \end{identity} \begin{proof} From Identity~\ref{I:ch=chb}, $\tchb{mj+2(t-1)}{mj-2}_{m,t}=\tch{tmj}{mj-2}_{m,t}$, which, from Lemma~\ref{L:bij}, is the number of ways to tile an $m$-tuple of $jt$-boards with $mj-2$ $t$-ominoes and $2t$ squares. If $j>1$, all $2t$ squares can be on the same $jt$-board which, with the $j-2$ $t$-ominoes on that board, makes $j-2+2t$ tiles in total and hence $\tbinom{j+2(t-1)}{2t}$ tilings of it. With $m$ boards to choose from, this gives the first term on the right-hand sides of the identity when $j>1$. Otherwise, if $m>1$, two of the boards have $t$ squares each. There are $\tbinom{j+t-1}{t}$ ways to tile each of those boards and $\tbinom{m}{2}$ ways to choose them. \end{proof}
The following identity is a generalization of the previous two. \begin{identity} For $s\ge1$, $t\ge2$, and $m,j\ge1$ provided $mj\ge s$, \[ \chb{mj+s(t-1)}{mj-s}{m,t}= \sum_{\substack{r_i\ge1;\\r_1+\cdots+r_p=s}} \!\!\!\binom{m}{p}\prod_{i=1}^p\binom{j+r_i(t-1)}{r_it}, \] where the sum is over compositions of $s$, $p$ is the number of parts of the composition, and $\tbinom{a}{b}$ is understood to equal zero if $a<b$. \end{identity} \begin{proof} From Identity~\ref{I:ch=chb}, $\tchb{mj+s(t-1)}{mj-s}_{m,t}=\tch{tmj}{mj-s}_{m,t}$, which, from Lemma~\ref{L:bij}, is the number of ways to tile an $m$-tuple of $jt$-boards with $mj-s$ $t$-ominoes and $st$ squares. We partition the squares into $p$ parts of sizes $r_it$ where $r_i\in\mathbb{Z}^+$ such that $r_1+\cdots+r_p=s$. A $jt$-board containing $r_it$ squares has $j-r_i$ $t$-ominoes and thus $j-r_i+r_it$ tiles in total and so $\tbinom{j+r_i(t-1)}{r_it}$ possible ways to tile it. There are $\binom{m}{p}$ ways to choose which of the $m$ boards have any squares. \end{proof}
In order to truly deserve to be called a Pascal-like triangle, a triangle ought to have a portion where Pascal's recurrence is obeyed. We now show that this is the case for our triangles by using a result from a study on restricted combinations \cite{MS08} to extend and prove Conjecture~30 of AE22. \begin{theorem} For integers $k\ge0$, $m\geq1$, $t\geq2$, and $n>(m-1)(t-1)k$, \begin{equation}\label{e:PRchb} \chb{n}{k}{m,t}=\chb{n-1}{k}{m,t}+\chb{n-1}{k-1}{m,t}. \end{equation} \end{theorem} \begin{proof} The result holds for $k=0$ since $\tchb{n\ge0}{0}_{m,t}=1$ by Identity~\ref{I:chb=1} and $\tchb{n<0}{k}_{m,t}=\tchb{n}{k<0}_{m,t}=0$ by definition. Mansour and Sun~\cite{MS08} give the result (Theorem~3.5 of their paper) which, when rewritten in our own notation, states that for any integers $m,k\geq1$ and $t\geq2$, \begin{equation}\label{e:Srr} S^{(m,t)}(N,k)=S^{(m,t)}(N-1,k)+S^{(m,t)}(N-t,k-1), \end{equation} provided that $N\geq m(t-1)(k-1)$. However, the condition for this relation between numbers of subsets to hold should read $N>m(t-1)(k-1)$ (personal communication with Mansour). By Corollary~\ref{C:S=chb}, $n=N+(t-1)(m-k)$ and we can rewrite \eqref{e:Srr} as \[ \chb{N+(t-1)(m-k)}{k}{m,t}= \chb{N+(t-1)(m-k)-1}{k}{m,t}+ \chb{N+(t-1)(m-k+1)-t}{k-1}{m,t} \] which reduces to \eqref{e:PRchb}. The corrected condition becomes $n-(t-1)(m-k)>m(t-1)(k-1)$ which gives the condition in our theorem. \end{proof}
We now turn to obtaining recursion relations for particular instances of the triangles. For all but the last triangle we consider, we require the following theorem which extends a result proved elsewhere for tilings of an $n$-board when the digraph has a common node \cite[Theorem~5.4 and Identity~5.5]{EA15} to also include $n$-tile tilings of boards.
\begin{theorem}\label{T:CN} For a digraph possessing a common node, let $l_{\mathrm{o}i}$ be the length of the $i$-th outer cycle and $k_{\mathrm{o}i}$ the number of combs it contains ($i=1,\ldots,N\rb{o}$), let $L_r$ be the length of the $r$-th inner cycle ($r=1,\ldots,N$) and let $K_r$ be the number of combs it contains, and let $l_{\mathrm{c}i}$ be the length of the $i$-th common circuit ($i=1,\ldots,N\rb{c}$) and let $k_{\mathrm{c}i}$ be the number of combs it contains. Then for all integers $n$ and $k$, \begin{align} \label{e:CNBn} B_n&=\delta_{n,0}+ \sum_{r=1}^N (B_{n-L_r}-\delta_{n,L_r})+ \sum_{i=1}^{N\rb{o}} \biggl(B_{n-l_{\mathrm{o}i}}-\sum_{r=1}^NB_{n-l_{\mathrm{o}i}-L_r}\biggr) +\sum_{i=1}^{N\rb{c}} B_{n-l_{\mathrm{c}i}},\\ B_{n,k}&=\delta_{n,0}\delta_{k,0}+ \sum_{r=1}^N (B_{n-L_r,k-K_r}-\delta_{n,L_r}\delta_{k,K_r})+ \sum_{i=1}^{N\rb{o}} \biggl(B_{n-l_{\mathrm{o}i},k-k_{\mathrm{o}i}} -\sum_{r=1}^NB_{n-l_{\mathrm{o}i}-L_r,k-k_{\mathrm{o}i}-K_r}\biggr)\nonumber\\ &\qquad\mbox{}+\sum_{i=1}^{N\rb{c}} B_{n-l_{\mathrm{c}i},k-k_{\mathrm{c}i}}, \label{e:CNBnk} \end{align} where $B_{n<0}=B_{n,k<0}=B_{n<k,k}=0$. If the lengths of the cycles and circuits are calculated as the number of tiles (the total contribution made to the number of cells occupied) then $B_n$ is the number of $n$-tile tilings (the number of tilings of an $n$-board) and $B_{n,k}$ is the number of such tilings that use $k$ combs. \end{theorem}
In the proofs of Identities \ref{I:rr23}, \ref{I:B23}, \ref{I:rr24}, \ref{I:B24}, \ref{I:rr42}, and \ref{I:B42} which use Theorem~\ref{T:CN} (and Identities \ref{I:rr25} and \ref{I:B25} which use Theorem~\ref{T:PCN}), the lengths of the cycles and circuits are the number of tiles they contain. In the proofs of Identities \ref{I:A23}, \ref{I:A24}, \ref{I:A42}, and \ref{I:A25}, the lengths of the cycles and circuits are the total number of cells that the tiles along the arcs occupy. An $S$ occupies 1 cell whereas a $C$ occupies $t$ cells. Thus if $L$ is the length of a cycle or circuit containing $K$ combs when finding the recursion relations for $n$-tile tilings then $L'=L+(t-1)K$ is the length of that cycle or circuit when the recursion relations are for the tilings of an $n$-board.
\begin{identity}\label{I:rr23} For all $n,k\in\mathbb{Z}$, \begin{multline}\label{e:rr23} \chb{n}{k}{2,3}=\delta_{n,0}\delta_{k,0}-\delta_{n,1}\delta_{k,1} +\chb{n-1}{k}{2,3}+\chb{n-1}{k-1}{2,3} -\chb{n-2}{k-1}{2,3}+\chb{n-2}{k-2}{2,3}\\ +\chb{n-3}{k-1}{2,3}-\chb{n-3}{k-3}{2,3}. \end{multline} \end{identity} \begin{proof} The digraph for tiling with squares and $(1,1;3)$-combs has a single inner cycle connecting the 01 node to itself by a $C$ (Fig.~\ref{f:meta}(b)). Hence 01 is the common node and $L_1=K_1=1$. There are 2 outer cycles ($S$ and $C^2$) and so $l_{\mathrm{o}1}=1$, $k_{\mathrm{o}1}=0$, and $l_{\mathrm{o}2}=k_{\mathrm{o}2}=2$. There is a single common circuit
($CS^2$) which gives $l_{\mathrm{c}1}=3$ and $k_{\mathrm{c}1}=1$. \end{proof}
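The recursion can be checked against the definition of the triangle for small $n$; the sketch below (an illustration of ours, not part of the proof, and a specialization of the sketch given in \S\ref{s:fam}) recomputes $\tchb{n}{k}_{2,3}$ by brute-force enumeration of the tilings and then verifies \eqref{e:rr23}.
\begin{verbatim}
def chb23(n, k):
    # tchb(n,k) for m = 2, t = 3: n-tile tilings using k (1,1;3)-combs.
    if not 0 <= k <= n:
        return 0
    N = n + 2 * k                                    # board length
    def rec(occ, squares, combs):
        if squares == 0 and combs == 0:
            return 1
        i = min(c for c in range(N) if c not in occ)
        tot = 0
        if squares:
            tot += rec(occ | {i}, squares - 1, combs)
        if combs and i + 4 < N and not ({i, i + 2, i + 4} & occ):
            tot += rec(occ | {i, i + 2, i + 4}, squares, combs - 1)
        return tot
    return rec(set(), n - k, k)

d = lambda a, b: 1 if a == b else 0
for n in range(8):
    for k in range(n + 1):
        rhs = (d(n, 0) * d(k, 0) - d(n, 1) * d(k, 1)
               + chb23(n - 1, k) + chb23(n - 1, k - 1) - chb23(n - 2, k - 1)
               + chb23(n - 2, k - 2) + chb23(n - 3, k - 1) - chb23(n - 3, k - 3))
        assert chb23(n, k) == rhs
\end{verbatim}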
\begin{identity}\label{I:B23} If $B_n$ is the sum of the $n$-th row of $\tchb{n}{k}_{2,3}$ then for all $n$, \[ B_n=\delta_{n,0}-\delta_{n,1}+2B_{n-1}, \] where $B_{n<0}=0$. \end{identity} \begin{proof} Sum each term in \eqref{e:rr23} over all $k$ or use \eqref{e:CNBn}. \end{proof} As defined above, $(B_n)_{n\ge0}=1,1,2,4,8,16,32,64,128,256,\ldots$ is \seqnum{A011782}.
\begin{identity}\label{I:A23} If $A_n$ is the sum of the $n$-th $(1,2)$-antidiagonal of $\tchb{n}{k}_{2,3}$ then for all $n$, \[ A_n=\delta_{n,0}-\delta_{n,3}+A_{n-1}+A_{n-3}-A_{n-4}+A_{n-5}+A_{n-6}-A_{n-9}, \] where $A_{n<0}=0$. \end{identity} \begin{proof} By Identity~\ref{I:ch=chb}, the $n$-th $(1,2)$-antidiagonal of $\tchb{n}{k}_{2,3}$ is the $n$-th row of $\tch{n}{k}_{2,3}$. From the definition of the latter triangle, $A_n$ is the number of tilings of an $n$-board using squares and $(1,1;3)$-combs and is given by \eqref{e:CNBn} (with $B_n$ replaced by $A_n$) applied to the same digraph as in the proof of Identity~\ref{I:rr23} but with the following changes made to the lengths: $L_1=3$, $l_{\mathrm{o}2}=6$, $l_{\mathrm{c}1}=5$. \end{proof} As defined above, $(A_n)_{n\ge0}=1,1,1,1,1,2,4,6,9,12,16,24,36,54,81,117,\ldots$ is \seqnum{A224809}. From Corollary~\ref{C:A}, $A_n$ is the number of subsets of $\mathbb{N}_{n-4}$ chosen so that no two elements differ by 2 or 4.
\begin{figure}
\caption{Digraph for generating metatiles when tiling with squares and
$(1,1;4)$-combs ($m=2$, $t=4$).}
\label{f:24}
\end{figure}
\begin{identity}\label{I:rr24} For all $n,k\in\mathbb{Z}$, \begin{multline}\label{e:rr24} \chb{n}{k}{2,4}=\delta_{n,0}\delta_{k,0}-\delta_{n,2}(\delta_{k,1} +\delta_{k,2}) +\chb{n-1}{k}{2,4}+\chb{n-2}{k-1}{2,4}+2\chb{n-2}{k-2}{2,4}\\ -\chb{n-3}{k-1}{2,4}-\chb{n-3}{k-2}{2,4} +\chb{n-4}{k-1}{2,4}+\chb{n-4}{k-2}{2,4} -\chb{n-4}{k-3}{2,4}-\chb{n-4}{k-4}{2,4}. \end{multline} \end{identity} \begin{proof} The digraph for tiling with squares and $(1,1;4)$-combs has 2 inner cycles ($SC$ and $C^2$) both of which pass through the 0101 and 01 nodes (Fig.~\ref{f:24}). We choose 0101 as the common node. We see that $L_1=L_2=2$, $K_1=1$, and $K_2=2$. There are 2 outer cycles ($S$ and $C^2$) and so $l_{\mathrm{o}1}=1$, $k_{\mathrm{o}1}=0$, and $l_{\mathrm{o}2}=k_{\mathrm{o}2}=2$. There are 2 common circuits: $CS\{S,C\}S$ where $X\{Y,Z\}$ means $XY$ and $XZ$. Hence $l_{\mathrm{c}1}=l_{\mathrm{c}2}=4$, $k_{\mathrm{c}1}=1$, and $k_{\mathrm{c}2}=2$. \end{proof}
\begin{identity}\label{I:B24} If $B_n$ is the sum of the $n$-th row of $\tchb{n}{k}_{2,4}$ then for all $n$, \[ B_n=\delta_{n,0}-2\delta_{n,2}+B_{n-1}+3B_{n-2}-2B_{n-3}, \] where $B_{n<0}=0$. \end{identity} \begin{proof} Sum each term in \eqref{e:rr24} over all $k$ or use \eqref{e:CNBn}. \end{proof} As defined above, $(B_n)_{n\ge0}=1,1,2,3,7,12,27,49,106,199,419,\ldots$ is \seqnum{A099163}.
The proofs of the following identity and Identities~\ref{I:A42} and \ref{I:A25} are analogous to that of Identity~\ref{I:A23}. We just need to find the modified lengths of the cycles and circuits in the digraph before using the theorem giving the recursion relation.
\begin{identity}\label{I:A24} If $A_n$ is the sum of the $n$-th $(1,3)$-antidiagonal of $\tchb{n}{k}_{2,4}$ then for all $n$, \[ A_n=\delta_{n,0}-\delta_{n,5}-\delta_{n,8}+ A_{n-1}+A_{n-5}-A_{n-6}+A_{n-7}+2A_{n-8}-A_{n-9}+A_{n-10}-A_{n-13}-A_{n-16}, \] where $A_{n<0}=0$. \end{identity} \begin{proof} We use the same digraph and associated parameters as in the proof of Identity~\ref{I:rr24} except that $L_1=5$, $L_2=8$, $l_{\mathrm{o}2}=8$, $l_{\mathrm{c}1}=7$, and $l_{\mathrm{c}2}=10$. \end{proof} As defined above, $(A_n)_{n\ge0}=1,1,1,1,1,1,1,2,4,6,9,12,16,20,25,35,\ldots$ is \seqnum{A224808}. From Corollary~\ref{C:A}, $A_n$ is the number of subsets of $\mathbb{N}_{n-6}$ chosen so that no two elements differ by 2, 4, or 6.
\begin{identity}\label{I:rr42} For all $n,k\in\mathbb{Z}$, \zmlg \begin{multline}\label{e:rr42} \chb{n}{k}{4,2}=\delta_{n,0}\delta_{k,0}-\delta_{n,2}\delta_{k,1} -\delta_{n,3}\delta_{k,2}-\delta_{n,4}\delta_{k,4} +\chb{n-1}{k}{4,2}+\chb{n-2}{k-1}{4,2} -\chb{n-3}{k-1}{4,2}+\chb{n-3}{k-2}{4,2}\\ +\chb{n-4}{k-1}{4,2}+\chb{n-4}{k-3}{4,2}+2\chb{n-4}{k-4}{4,2} +\chb{n-5}{k-2}{4,2}+2\chb{n-5}{k-3}{4,2}-\chb{n-5}{k-4}{4,2}\\ -\chb{n-6}{k-3}{4,2}-\chb{n-6}{k-5}{4,2} -\chb{n-7}{k-4}{4,2}-\chb{n-7}{k-5}{4,2}-\chb{n-7}{k-6}{4,2} -\chb{n-8}{k-7}{4,2}-\chb{n-8}{k-8}{4,2}. \end{multline} \rmlg \end{identity} \begin{proof} The digraph for tiling with squares and $(1,3;2)$-combs (which are also called $(1,3)$-fences) has 3 inner cycles all of which contain the nodes 001 and 01 (Fig.~\ref{f:42}). We choose 001 as the common node. The cycles, given as lists of arcs starting from 001, are $\{S,C\{S,C^2\}\}C$. Hence $L_i=2,3,4$ and $K_i=1,2,4$, respectively, for $i=1,2,3$. There are 5 outer cycles: $S,C^2\{S\{S,CS\},C\{S,C\}\}$. Thus $l_{\mathrm{o}i}=1,4,5,4,4$ and $k_{\mathrm{o}i}=0,2,3,3,4$, respectively, for $i=1,\ldots,5$. There are 8 common circuits: $C\{S,CSC^2\}\{S^2,C\{S^2,C\{S,CS\}\}\}$. Hence $l_{\mathrm{c}i}=4,5,5,6,7,8,8,9$ and $k_{\mathrm{c}i}=1,2,3,4,4,5,6,7$, respectively, for $i=1,\ldots,8$. \end{proof}
\begin{figure}
\caption{Digraph for generating metatiles when tiling with squares and
$(1,3;2)$-combs ($m=4$, $t=2$).}
\label{f:42}
\end{figure}
\begin{identity}\label{I:B42} If $B_n$ is the sum of the $n$-th row of $\tchb{n}{k}_{4,2}$ then for all $n$, \[ B_n=\delta_{n,0}-\delta_{n,2}-\delta_{n,3}-\delta_{n,4} +B_{n-1}+B_{n-2}+4B_{n-4}+2B_{n-5}-2B_{n-6}-3B_{n-7}-2B_{n-8}, \] where $B_{n<0}=0$. \end{identity} \begin{proof} Sum each term in \eqref{e:rr42} over all $k$ or use \eqref{e:CNBn}. \end{proof} As defined above, $(B_n)_{n\ge0}=1,1,1,1,5,12,21,34,70,155,318,610,\ldots$ has the generating function $(1-x-x^3)/((1-2x)(1-x^2)(1+2x^2+x^3+x^4))$.
\begin{identity}\label{I:A42} If $A_n$ is the sum of the $n$-th antidiagonal of $\tchb{n}{k}_{4,2}$ then for all $n$, \begin{multline*} A_n=\delta_{n,0}-\delta_{n,3}-\delta_{n,5}-\delta_{n,8}+ A_{n-1}+A_{n-3}-A_{n-4}+2A_{n-5}+2A_{n-7}+4A_{n-8}-2A_{n-9}\\ -2A_{n-11}-A_{n-12}-A_{n-13}-A_{n-15}-A_{n-16}, \end{multline*} where $A_{n<0}=0$. \end{identity} \begin{proof} We use the same digraph and associated parameters as in the proof of Identity~\ref{I:rr42} except that $L_1=3$, $L_2=5$, $L_3=8$, $l_{\mathrm{o}i}=6,8,7,8$ for $i=2,\ldots,5$, and for $i=1,\ldots,8$, $l_{\mathrm{c}i}=5,7,8,10,11,13,14,16$. \end{proof} As defined above, $(A_n)_{n\ge0}=1,1,1,1,1,2,4,8,16,24,36,54,81,135,225,\ldots$ (after removing the first four 1s) is \seqnum{A031923}. From Corollary~\ref{C:A}, $A_n$ is the number of subsets of $\mathbb{N}_{n-4}$ chosen so that no two elements differ by 4.
\begin{figure}
\caption{Digraph for generating metatiles when tiling with squares and
$(1,1;5)$-combs ($m=2$, $t=5$).}
\label{f:25}
\end{figure}
\begin{identity}\label{I:rr25} For all $n,k\in\mathbb{Z}$, \zmlg \begin{multline}\label{e:rr25} \chb{n}{k}{2,5}=\delta_{n,0}\delta_{k,0}-\delta_{n,1}\delta_{k,1} -\delta_{n,2}\delta_{k,2}+\delta_{n,3}(\delta_{k,3}-\delta_{k,1}) +\chb{n-1}{k}{2,5}+\chb{n-1}{k-1}{2,5}-\chb{n-2}{k-1}{2,5}\\ +2\chb{n-2}{k-2}{2,5} +\chb{n-3}{k-1}{2,5}-\chb{n-3}{k-2}{2,5}-2\chb{n-3}{k-3}{2,5} -\chb{n-4}{k-1}{2,5}\\ +\chb{n-4}{k-2}{2,5}+\chb{n-4}{k-3}{2,5}-\chb{n-4}{k-4}{2,5}+\chb{n-5}{k-1}{2,5} -2\chb{n-5}{k-3}{2,5}+\chb{n-5}{k-5}{2,5}. \end{multline} \rmlg \end{identity} \begin{proof} The digraph for tiling with squares and $(1,1;5)$-combs has 3 inner cycles but no common node (Fig.~\ref{f:25}). If the loop at the 0101 node were not present, the digraph would have a common node. Using terminology and notation we introduce in the appendix, the loop at 0101 is an errant loop and has length $L_0=1$ and number of combs $K_0=1$. We take the 010101 node as the pseudo-common node (we could have also chosen the 01 node instead). There are two common circuits, $CSCS$ and $CS^4$, the first of which is plain. Thus $l_{\mathrm{c}1}=l_{\mathrm{pc}1}=4$, $k_{\mathrm{c}1}=k_{\mathrm{pc}1}=2$, $l_{\mathrm{c}2}=5$, $k_{\mathrm{c}2}=1$, $N_{\mathrm{c}}=2$, and $N_{\mathrm{pc}}=1$. Of the other two inner cycles, $C^2$ is plain, $S^2C$ is not. Thus $L_1=2$, $K_1=2$, $L_2=3$, and $K_2=1$. The outer cycles are $S$ and $C^2$ and are both plain. Hence $l_{\mathrm{o}1}=1$, $k_{\mathrm{o}1}=0$, $l_{\mathrm{o}2}=k_{\mathrm{o}2}=2$, and $N_{\mathrm{o}}=2$. The identity then follows from applying \eqref{e:PCNBnk}. \end{proof}
\begin{identity}\label{I:B25} If $B_n$ is the sum of the $n$-th row of $\tchb{n}{k}_{2,5}$ then for all $n$, \[ B_n=\delta_{n,0}-\delta_{n,1}-\delta_{n,2}+2B_{n-1}+B_{n-2}-2B_{n-3}, \] where $B_{n<0}=0$. \end{identity} \begin{proof} Sum each term in \eqref{e:rr25} over all $k$ or use \eqref{e:PCNBn}. \end{proof} As defined above, $(B_n)_{n\ge0}=1,1,2,3,6,11,22,43,86,171,342,683,\ldots$ is \seqnum{A005578}.
\begin{identity}\label{I:A25} If $A_n$ is the sum of the $n$-th $(1,4)$-antidiagonal of $\tchb{n}{k}_{2,5}$ then for all $n$, \begin{multline*} A_n=\delta_{n,0}-\delta_{n,5}-\delta_{n,7}-\delta_{n,10}+\delta_{n,15}+ A_{n-1}+A_{n-5}-A_{n-6}+A_{n-7}-A_{n-8}+A_{n-9}\\+2A_{n-10} -A_{n-11}+A_{n-12}-2A_{n-15}+A_{n-16}-2A_{n-17}-A_{n-20}+A_{n-25} \end{multline*} where $A_{n<0}=0$. \end{identity} \begin{proof} We use the same digraph and associated parameters as in the proof of Identity~\ref{I:rr25} except that $L_0=5$, $L_1=10$, $L_2=7$, $l_{\mathrm{o}2}=10$, $l_{\mathrm{c}1}=l_{\mathrm{pc}1}=12$, and $l_{\mathrm{c}2}=9$. \end{proof} As defined above, $(A_n)_{n\ge0}=1,1,1,1,1,1,1,1,1,2,4,6,9,12,16,20,25,30,36,48,64,\ldots$
is \seqnum{A224811}. From Corollary~\ref{C:A}, $A_n$ is the number of subsets of $\mathbb{N}_{n-8}$ chosen so that no two elements differ by 2, 4, 6, or 8.
\section{Discussion} In this paper and AE22 we considered tiling-derived triangles whose entries were shown to be numbers of $k$-subsets of $\mathbb{N}_n$ such that no two elements of the subset differ by an element in a set $\mathcal{Q}$ of disallowed differences. In AE22, $\mathcal{Q}=\{m\}$ for fixed $m\in\mathbb{Z}^+$, whereas in the present paper, $\mathcal{Q}=\{m,2m,\ldots,(t-1)m\}$, where $t=2,3,\ldots$. One is then led to ask whether there is a correspondence between restricted combinations specified by other types of $\mathcal{Q}$ and tilings. When $\mathcal{Q}=\mathbb{N}_q$ for some $q\in\mathbb{Z}^+$, using the same ideas as in the proof of Lemma~\ref{L:ksub}, it is straightforward to show that there is a bijection between the tilings of an $(n+q)$-board using $k$ $(q+1)$-ominoes and squares and the $k$-subsets. However, the corresponding $n$-tile tiling triangles are just Pascal's triangle for any $q$. In order to obtain a tiling interpretation of restricted combinations with other classes of $\mathcal{Q}$, one needs a form of a tiling where some parts of the tiles are allowed to overlap with parts of other tiles. This will be explored in depth in another paper. Whether or not such tiling schemes can be used to generate further aesthetically pleasing families of number triangles remains to be seen.
From some of the entries in the OEIS that give the sums of the $(1,t-1)$-antidiagonals of the triangle (see \seqnum{A224809}, \seqnum{A224808}, and \seqnum{A224811}) it appears that the number of subsets of $\mathbb{N}_{n-(t-1)m}$ whose elements do not differ by an element of the set $\{m,2m,\ldots,(t-1)m\}$ is also the number of permutations $\pi$ of $\mathbb{N}_n$ such that $\pi(i)-i\in\{-m,0,(t-1)m\}$ for all $i\in\mathbb{N}_n$. This is indeed true in general as we will demonstrate combinatorially using combs and fences elsewhere.
In AE22 it was noted that the $\tchb{n}{k}_{1,2}$ and $\tchb{n}{k}_{2,2}$ triangles are row-reversed Riordan arrays and it was shown (in Corollary~37 of AE22) that the $\tchb{n}{k}_{m>2,2}$ triangles are not. From Theorem~35 of AE22, the $\tchb{n}{k}_{m\ge2,t\ge3}$ triangles are not row-reversed Riordan arrays since when tiling with $(1,m-1;t)$-combs and squares, the filled-comb metatile contains more than one square if $(m-1)(t-1)>1$. The same theorem tells us that, except for the $m=1$ cases, the triangles are also not Riordan arrays since there are metatiles containing more than one comb.
There are a number of types of tiling that lead to common-node-free digraphs that have only a few inner cycles. As far as we are aware, Theorem~\ref{T:PCN} is the first result giving recursion relations for a class of such cases. The theorem can be modified or generalized to cope with a wider variety of classes and we will present these results in future studies involving applications of tilings where instances of such digraphs arise.
\section{Appendix: Recursion relations for 3-inner-cycle digraphs with a
pseudo-common node}
{\allowdisplaybreaks For a digraph lacking a common node, we refer to an inner cycle that can be represented as a single arc linking a node $\mathcal{E}$ to itself as an \textit{errant loop} if the digraph would have a common node $\mathcal{P}$ if the errant loop arc were removed. The node $\mathcal{P}$ is then referred to as a \textit{pseudo-common
node}. Evidently, $\mathcal{E}$ and $\mathcal{P}$ cannot be the same node; if they were the same node, the original digraph would have a true common node. For a digraph with an errant loop, a \textit{common
circuit} is defined as two concatenated simple paths from the 0 node to $\mathcal{P}$ and from $\mathcal{P}$ to the 0 node. An outer cycle, inner cycle, or common circuit is said to be \textit{plain} if it does not include the errant loop node $\mathcal{E}$. See the proof of Identity~\ref{I:rr25} for examples.
We use the $N=2$ case of the following lemma in the proof of Theorem~\ref{T:PCN}.
\begin{lemma}\label{L:multinom} For positive integers $j_0,j_1,\ldots,j_N$ where $N\ge2$, \zmlg \begin{multline*} \binom{j_1+\cdots+j_N}{j_1,\ldots,j_N}\binom{j_0+j_N-1}{j_0} \!=\! \sum_{r=1}^{N-1}\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_r-1,\ldots} \Biggl(\!\binom{j_0+j_N-1}{j_0}-\binom{j_0+j_N-2}{j_0-1}\!\Biggr)\\ +\binom{j_1+\cdots+j_N}{j_1,\ldots,j_N}\binom{j_0+j_N-2}{j_0-1} +\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_N-1}\binom{j_0+j_N-2}{j_0}. \end{multline*} \end{lemma} \begin{proof} Using the result for multinomial coefficients that \[ \binom{j_1+\cdots+j_N}{j_1,\ldots,j_N} =\sum_{r=1}^{N}\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_r-1,\ldots,j_N}, \] we have \begin{align*} &\binom{j_1+\cdots+j_N}{j_1,\ldots,j_N}\binom{j_0+j_N-1}{j_0}\\ &\qquad\qquad=\Biggl(\sum_{r=1}^{N-1}\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_r-1,\ldots} +\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_N-1}\Biggr)\binom{j_0+j_N-1}{j_0}\\ &\qquad\qquad=\sum_{r=1}^{N-1}\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_r-1,\ldots}\binom{j_0+j_N-1}{j_0}\\ &\qquad\qquad\qquad+\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_N-1} \Biggl(\binom{j_0+j_N-2}{j_0}+\binom{j_0+j_N-1}{j_0-1}\Biggr)\\ &\qquad\qquad=\sum_{r=1}^{N-1}\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_r-1,\ldots}\binom{j_0+j_N-1}{j_0}+\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_N-1}\binom{j_0+j_N-2}{j_0}\\ &\qquad\qquad\qquad+\Biggl( \binom{j_1+\cdots+j_N}{j_1,\ldots,j_N} -\sum_{r=1}^{N-1}\binom{j_1+\cdots+j_N-1}{j_1,\ldots,j_r-1,\ldots,j_N} \Biggr)\binom{j_0+j_N-2}{j_0-1}, \end{align*} which gives the required result on rearranging. \end{proof}
\begin{theorem}\label{T:PCN} For a digraph with an errant loop of length $L_0$ containing $K_0$ combs, a plain inner cycle of length $L_1$ containing $K_1$ combs, a non-plain inner cycle of length $L_2$ containing $K_2$ combs, and outer cycles that are all plain and have length $l_{\mathrm{o}i}$ and contain $k_{\mathrm{o}i}$ combs for $i=1,\ldots,N\rb{o}$, let $l_{\mathrm{c}i}$ be the length of the $i$-th common circuit and let $k_{\mathrm{c}i}$ be the number of combs it contains ($i=1,\ldots,N\rb{c}$), and let $l_{\mathrm{pc}i}$ be the length of the $i$-th plain common circuit and let $k_{\mathrm{pc}i}$ be the number of combs it contains ($i=1,\ldots,N\rb{pc}$). Then for all integers $n$ and $k$, \zmlg \begin{align}\label{e:PCNBn} B_n&=\delta_{n,0} +\sum_{r=0}^2 (B_{n-L_r}-\delta_{n,L_r}) +\delta_{n,L_0+L_1}-B_{n-L_0-L_1}\nonumber\\ &\qquad+\sum_{i=1}^{N\rb{o}} \biggl(B_{n-l_{\mathrm{o}i}} +B_{n-l_{\mathrm{o}i}-L_0-L_1} -\sum_{r=0}^2B_{n-l_{\mathrm{o}i}-L_r}\biggr) +\sum_{i=1}^{N\rb{c}} B_{n-l_{\mathrm{c}i}} -\sum_{i=1}^{N\rb{pc}} B_{n-l_{\mathrm{pc}i}-L_0},\\ B_{n,k}&=\delta_{n,0}\delta_{k,0} +\sum_{r=0}^2 (B_{n-L_r,k-K_r}-\delta_{n,L_r}\delta_{k,K_r}) + \delta_{n,L_0+L_1}\delta_{k,K_0+K_1}-B_{n-L_0-L_1,k-K_0-K_1}\nonumber\\ &\qquad+\sum_{i=1}^{N\rb{o}} \biggl(B_{n-l_{\mathrm{o}i},k-k_{\mathrm{o}i}} +B_{n-l_{\mathrm{o}i}-L_0-L_1,k-k_{\mathrm{o}i}-K_0-K_1} -\sum_{r=0}^2B_{n-l_{\mathrm{o}i}-L_r,k-k_{\mathrm{o}i}-K_r}\biggr)\nonumber\\ &\qquad+\sum_{i=1}^{N\rb{c}} B_{n-l_{\mathrm{c}i},k-k_{\mathrm{c}i}} -\sum_{i=1}^{N\rb{pc}} B_{n-l_{\mathrm{pc}i}-L_0,k-k_{\mathrm{pc}i}-K_0}, \label{e:PCNBnk} \end{align} where $B_{n<0}=B_{n,k<0}=B_{n<k,k}=0$. If the lengths of the cycles and circuits are calculated as the number of tiles (the total contribution made to the number of cells occupied) then $B_n$ is the number of $n$-tile tilings (the number of tilings of an $n$-board) and $B_{n,k}$ is the number of such tilings that use $k$ combs.
\end{theorem} \begin{proof} To keep the algebra looking as simple as possible while retaining the essentials at the heart of the proof, we just prove the formula for $B_n$ when there is a single outer cycle, one plain common circuit, and one non-plain common circuit. Their respective lengths are $l\rb{o}$, $l\rb{pc}$, and $l\rb{npc}$. It is straightforward to modify the proof we give here to include the sums over outer cycles and common circuits. The proof of \eqref{e:PCNBnk} is entirely analogous.
Conditioning on the final metatile gives \zmlg \begin{multline}\label{e:condPCN} B_n=\delta_{n,0}+B_{n-l\rb{o}} +\sum_{j_1\ge0}B_{n-l\rb{pc}-j_1L_1} +\sum_{\substack{j_0,j_1\ge0,\\j_2\ge1}}\!\!\! \binom{j_1+j_2}{j_1}\binom{j_0+j_2-1}{j_0}B_{n-l\rb{pc}-j_0L_0-j_1L_1-j_2L_2}\\ +\sum_{e,j_1\ge0}B_{n-l\rb{npc}-eL_0-j_1L_1} +\sum_{\substack{e,j_0,j_1\ge0,\\j_2\ge1}}\!\!\! \binom{j_1+j_2}{j_1}\binom{j_0+j_2-1}{j_0}B_{n-l\rb{npc}-(j_0+e)L_0-j_1L_1-j_2L_2} \end{multline} with $B_{n<0}=0$. We now explain the origin of the four sums in \eqref{e:condPCN} while referring to the digraph in Fig.~\ref{f:25} (taking the 010101 node as $\mathcal{P}$) for examples of metatiles. The first sum is from metatiles obtained by taking the first part of the plain common circuit to $\mathcal{P}$, then following the plain inner cycle $j_1$ times, and then returning to the 0 node via the second half of the plain common circuit (e.g., $CSC^{2j_1}CS$ is the symbolic representation of the metatiles corresponding to the terms in the sum).
The second sum corresponds to metatiles with the same start and end as with the first sum but on reaching $\mathcal{P}$ the plain and non-plain inner cycles are executed $j_1$ and $j_2$ times, respectively, in any order but the non-plain inner cycle is executed at least once. The number of ways of choosing the order is $\tbinom{j_1+j_2}{j_1}$. The errant loop is also traversed a total of $j_0$ times. Each time the path reaches $\mathcal{E}$ (during an execution of the non-plain inner cycle), it can detour and traverse the errant loop any number of times. This is the origin of the $\tbinom{j_0+j_2-1}{j_0}$ factor which has $j_2-1$ rather than $j_2$ as the non-plain inner cycle must be started before the errant loop can be traversed. E.g., the metatiles corresponding to the terms in the sum when $j_0=j_1=j_2=1$ are $CS\{C^2SCS,SCSC^2\}CS$.
The third and fourth sums are analogous to the first and second but $\mathcal{P}$ is reached via the first half of the non-plain common circuit, and after the inner cycles have been traversed $j_r$ times (with $r=1$ in the third sum and $r=0,1,2$ in the fourth), the 0 node is returned to via the second half of the non-plain common circuit but the errant loop is executed an extra $e$ times when the path reaches $\mathcal{E}$. E.g., the metatiles corresponding to the terms in the third sum are $CSC^{2j_1}SC^eS^2$, and in the fourth sum when $j_0=j_1=j_2=1$ they are $CS\{C^2SCS,SCSC^2\}SC^eS^2$.
Representing \eqref{e:condPCN} by $E(n)$, we write down \[ E(n)-E(n-L_0)-E(n-L_1)+E(n-L_0-L_1)-E(n-L_2) \] and re-index the sums so that, where possible, the $B_{n-\alpha}$ inside the sums for any $\alpha$ appear the same as for $E(n)$ (e.g., $\sum_{j_1\ge0}B_{n-L_1-l\rb{pc}-j_1L_1}=\sum_{j_1\ge1}B_{n-l\rb{pc}-j_1L_1}$). This leaves \begin{multline}\label{e:subPCN} B_n-\sum_{r=0}^2B_{n-L_r}+B_{n-L_0-L_1}=\delta_{n,0}+B_{n-l\rb{o}} -\sum_{r=0}^2(\delta_{n,L_r}+B_{n-l\rb{o}-L_r})+\delta_{n,L_0+L_1} +B_{n-l\rb{o}-L_0-L_1}\\ +\sum_{j_1\ge0}\beta_{j_1L_1}-\sum_{j_1\ge1}\beta_{j_1L_1} -\sum_{j_1\ge0}\beta_{L_0+j_1L_1}+\sum_{j_1\ge1}\beta_{L_0+j_1L_1} -\sum_{j_1\ge0}\beta_{j_1L_1+L_2}\\ +\sum_{e,j_1\ge0}\hat{\beta}_{j_1L_1} -\sum_{\substack{e\ge0,\\j_1\ge1}}\hat{\beta}_{j_1L_1} -\sum_{\substack{e\ge1,\\j_1\ge0}}\hat{\beta}_{j_1L_1} +\sum_{e,j_1\ge1}\hat{\beta}_{j_1L_1} -\sum_{e,j_1\ge0}\hat{\beta}_{j_1L_1+L_2}\\ +\sum_{\substack{j_0,j_1\ge0,\\j_2\ge1}} \binom{j_1+j_2}{j_1}\binom{j_0+j_2-1}{j_0}\beta_\lambda -\sum_{\substack{j_0\ge0,\\j_1,j_2\ge1}} \binom{j_1+j_2-1}{j_1-1}\binom{j_0+j_2-1}{j_0}\beta_\lambda\\ -\sum_{\substack{j_0,j_2\ge1,\\j_1\ge0}} \binom{j_1+j_2}{j_1}\binom{j_0+j_2-2}{j_0-1}\beta_\lambda +\sum_{j_0,j_1,j_2\ge1} \binom{j_1+j_2-1}{j_1-1}\binom{j_0+j_2-2}{j_0-1}\beta_\lambda\\ -\sum_{\substack{j_0,j_1\ge0,\\j_2\ge2}} \binom{j_1+j_2-1}{j_1}\binom{j_0+j_2-2}{j_0}\beta_\lambda\\ +\text{the above 3 lines with $\beta_\lambda$ replaced by
$\hat{\beta}_\lambda$ and also summed over all $e\ge0$}, \end{multline} where $\beta_a=B_{n-l\rb{pc}-a}$, $\hat{\beta}_a=B_{n-l\rb{npc}-eL_0-a}$, and $\lambda=j_0L_0+j_1L_1+j_2L_2$. On rearranging \eqref{e:subPCN} it is immediately apparent where all but the last two sums in \eqref{e:PCNBn} come from. The first two sums in the second line of \eqref{e:subPCN} reduce to $\beta_0=B_{n-l\rb{pc}}$. The next two sums reduce to $-\beta_{L_0}=-B_{n-l\rb{pc}-L_0}$ which accounts for the final sum in \eqref{e:PCNBn}. The first four sums in the third line of \eqref{e:subPCN} reduce to $B_{n-l\rb{npc}}$ which when added to the $B_{n-l\rb{pc}}$ accounts for the penultimate sum in \eqref{e:PCNBn}.
We now complete the proof by showing that the remaining terms in \eqref{e:subPCN} cancel out. We regroup terms in each of the sums in the fourth, fifth, and sixth lines in \eqref{e:subPCN} to give \begin{subequations} \label{e:456} \begin{align} &\sum_{\substack{j_0,j_1\ge0,\\j_2\ge1}}\!\!\! \binom{j_1+j_2}{j_1}\binom{j_0+j_2-1}{j_0}\beta_\lambda=\!\! \sum_{\substack{j_0,j_1\ge1,\\j_2\ge2}}\!\!\! \binom{j_1+j_2}{j_1}\binom{j_0+j_2-1}{j_0}\beta_\lambda\nonumber\\ &\qquad+\sum_{j_0,j_1\ge1}\!\!\! \binom{j_1+1}{j_1}\beta_{j_0L_0+j_1L_1+L_2} +\sum_{\substack{j_0\ge1,\\j_2\ge2}}\! \binom{j_0+j_2-1}{j_0}\beta_{j_0L_0+j_2L_2} +\sum_{\substack{j_1\ge1,\\j_2\ge2}}\! \binom{j_1+j_2}{j_1}\beta_{j_1L_1+j_2L_2}\nonumber\\ &\qquad+\sum_{j_0\ge1}\beta_{j_0L_0+L_2} +\sum_{j_1\ge1}\binom{j_1+1}{j_1}\beta_{j_1L_1+L_2} +\sum_{j_2\ge2}\beta_{j_2L_2} +\beta_{L_2}, \label{e:a} \\ &\sum_{\substack{j_0\ge0,\\j_1,j_2\ge1}}\!\!\! \binom{j_1+j_2-1}{j_1-1}\binom{j_0+j_2-1}{j_0}\beta_\lambda=\!\! \sum_{\substack{j_0,j_1\ge1,\\j_2\ge2}}\!\!\! \binom{j_1+j_2-1}{j_1-1}\binom{j_0+j_2-1}{j_0}\beta_\lambda \nonumber\\ &\qquad+\sum_{j_0,j_1\ge1}\!\!\! \binom{j_1}{j_1-1}\beta_{j_0L_0+j_1L_1+L_2} +\sum_{\substack{j_1\ge1,\\j_2\ge2}}\! \binom{j_1+j_2-1}{j_1-1}\beta_{j_1L_1+j_2L_2} +\sum_{j_1\ge1}\binom{j_1}{j_1-1}\beta_{j_1L_1+L_2}, \label{e:b} \\ &\sum_{\substack{j_0,j_2\ge1,\\j_1\ge0}}\!\!\! \binom{j_1+j_2}{j_1}\binom{j_0+j_2-2}{j_0-1}\beta_\lambda=\!\! \sum_{\substack{j_0,j_1\ge1,\\j_2\ge2}}\!\!\! \binom{j_1+j_2}{j_1}\binom{j_0+j_2-2}{j_0-1}\beta_\lambda \nonumber\\ &\qquad+\sum_{j_0,j_1\ge1}\!\!\! \binom{j_1+1}{j_1}\beta_{j_0L_0+j_1L_1+L_2} +\sum_{\substack{j_0\ge1,\\j_2\ge2}}\! \binom{j_0+j_2-1}{j_0-1}\beta_{j_0L_0+j_2L_2} +\sum_{j_0\ge1}\beta_{j_0L_0+L_2}, \label{e:c} \\ &\sum_{j_0,j_1,j_2\ge1}\!\!\! \binom{j_1+j_2-1}{j_1-1}\binom{j_0+j_2-2}{j_0-1}\beta_\lambda=\!\! \sum_{\substack{j_0,j_1\ge1,\\j_2\ge2}}\!\!\! \binom{j_1+j_2-1}{j_1-1}\binom{j_0+j_2-2}{j_0-1}\beta_\lambda\nonumber\\ &\qquad+\sum_{j_0,j_1\ge1}\!\!\! \binom{j_1}{j_1-1}\beta_{j_0L_0+j_1L_1+L_2}, \label{e:d} \\ &\sum_{\substack{j_0,j_1\ge0,\\j_2\ge2}}\!\!\! \binom{j_1+j_2-1}{j_1}\binom{j_0+j_2-2}{j_0}\beta_\lambda=\!\! \sum_{\substack{j_0,j_1\ge1,\\j_2\ge2}}\!\!\! \binom{j_1+j_2-1}{j_1}\binom{j_0+j_2-2}{j_0}\beta_\lambda\nonumber\\ &\qquad+\sum_{\substack{j_0\ge1,\\j_2\ge2}}\! \binom{j_0+j_2-1}{j_0}\beta_{j_0L_0+j_2L_2} +\sum_{\substack{j_1\ge1,\\j_2\ge2}}\! \binom{j_1+j_2-1}{j_1}\beta_{j_1L_1+j_2L_2} +\sum_{j_2\ge2}\beta_{j_2L_2}. \label{e:e} \end{align} \end{subequations} We denote the $p$-th sum (or term) on the right-hand side of (\ref{e:456}$x$) by $x_p$ where $x$ is a--e. Then $\mathrm{a}_1-\mathrm{b}_1-\mathrm{c}_1+\mathrm{d}_1-\mathrm{e}_1=0$ by virtue of Lemma~\ref{L:multinom}, $\mathrm{a}_2$ cancels $\mathrm{c}_2$, $\mathrm{a}_3$ cancels $\mathrm{c}_3+\mathrm{e}_2$, $\mathrm{a}_4$ cancels $\mathrm{b}_3+\mathrm{e}_3$, $\mathrm{a}_5$ cancels $\mathrm{c}_4$, $\mathrm{a}_6-\mathrm{b}_4+\mathrm{a}_8=\sum_{j_1\ge0}\beta_{j_1L_1+L_2}$ and therefore cancels the last sum in the second line of \eqref{e:subPCN}, $\mathrm{a}_7$ cancels $\mathrm{e}_4$, and $\mathrm{b}_2$ cancels $\mathrm{d}_2$. The simplification works in the same way for the terms represented by the last line of \eqref{e:subPCN}. Denoting sums or terms in the corresponding set of equations by $\hat{x}_p$, $\hat{\mathrm{a}}_6-\hat{\mathrm{b}}_4+\hat{\mathrm{a}}_8 =\sum_{e,j_1\ge0}\hat{\beta}_{j_1L_1+L_2}$ and therefore cancels the last sum in the third line of \eqref{e:subPCN}.
The proof of \eqref{e:PCNBnk} proceeds in an analogous way. Again considering the case where there is a single outer cycle (with $k\rb{o}$ combs), a plain common circuit (with $k\rb{pc}$ combs), and a non-plain common circuit (with $k\rb{npc}$ combs), conditioning on the final metatile gives \begin{multline}\label{e:condPCNnk} B_{n,k}=\delta_{n,0}\delta_{k,0}+B_{n-l\rb{o},k-k\rb{o}} +\sum_{j_1\ge0}B_{n-l\rb{pc}-j_1L_1,k-k\rb{pc}-j_1K_1}\\ +\sum_{\substack{j_0,j_1\ge0,\\j_2\ge1}}\!\!\! \binom{j_1+j_2}{j_1}\binom{j_0+j_2-1}{j_0} B_{n-l\rb{pc}-\lambda,k-k\rb{pc}-\kappa} +\sum_{e,j_1\ge0}B_{n-l\rb{npc}-eL_0-j_1L_1,k-k\rb{npc}-eK_0-j_1K_1}\\ +\sum_{\substack{e,j_0,j_1\ge0,\\j_2\ge1}}\!\!\! \binom{j_1+j_2}{j_1}\binom{j_0+j_2-1}{j_0}B_{n-l\rb{npc}-eL_0-\lambda,k-k\rb{npc}-eK_0-\kappa} \end{multline} with $B_{n,k>n}=B_{n,k<0}=0$ and where $\kappa=j_0K_0+j_1K_1+j_2K_2$. Denoting \eqref{e:condPCNnk} by $E(n,k)$, writing down \[ E(n,k)-E(n-L_0,k-K_0)-E(n-L_1,k-K_1)+E(n-L_0-L_1,k-K_0-K_1)-E(n-L_2,k-K_2), \] and then proceeding in the same way as for the proof of \eqref{e:PCNBn} gives the required result. \end{proof} }
\hrule
\noindent 2010 {\it Mathematics Subject Classification}: Primary 11B39; Secondary 05A19, 05A15.
\noindent \emph{Keywords}: combinatorial proof, combinatorial identity, $n$-tiling, Pascal-like triangle, directed pseudograph, Fibonacci polynomial, restricted combination
\hrule
\sloppy \noindent (Concerned with sequences \seqnum{A000045}, \seqnum{A000930}, \seqnum{A003269}, \seqnum{A003520}, \seqnum{A005578}, \seqnum{A005708}, \seqnum{A005709}, \seqnum{A005710}, \seqnum{A007318}, \seqnum{A011782}, \seqnum{A031923}, \seqnum{A099163}, \seqnum{A224808}, \seqnum{A224809}, \seqnum{A224811}, \seqnum{A350110}, \seqnum{A350111}, \seqnum{A350112}, \seqnum{A354665}, \seqnum{A354666}, \seqnum{A354667}, and \seqnum{A354668})
\hrule
\end{document} | arXiv |
Investigating the psychometric properties of the Qiyas for L1 Arabic language test using a Rasch measurement framework
Amjed A. Al-Owidha1
This study investigated the psychometric properties of the recently developed Qiyas for L1 Arabic language test using a Rasch measurement framework.
Responses from 271 examinees were analyzed in this study. The test is hypothesized to involve one dominant factor that assesses four skills: reading comprehension, rhetorical expression, structure, and writing accuracy.
Fit statistics and reliability analysis, principal component analysis of Rasch residuals, and the results of differential item functioning supported the hypothesized structure of the Qiyas for L1 Arabic language test. However, the results of a person-item map analysis suggested that the content aspect validity of the Qiyas for L1 Arabic language test lacked representation to some extent.
The initial findings of the Rasch analysis indicated that the Qiyas for L1 Arabic language test maintains satisfactory psychometric properties. However, these findings should be interpreted with caution given the limitations of the sample population used. Continued investigation of the psychometric properties of the test is necessary to ensure its appropriate use as a tool of assessment for modern Arabic language.
The Qiyas for L1 Arabic language test is a standardized test recently developed by the National Center for Assessment (NCA) in Riyadh, Saudi Arabia. The lack of and need for a high-quality measurement tool that produces dependable estimations of the language skills of L1 speakers in the Arab world motivated the NCA to pioneer the development of such a test. The Qiyas for L1 Arabic language test is hypothesized to involve one dominant factor that assesses four skills: reading comprehension, rhetorical expression, structure, and writing accuracy. The test was developed primarily for the purpose of selection, where the intended population is people seeking jobs in schools, public relations, TV, radio stations, or other types of local or international communication that use modern Arabic as the main language. In addition, the test serves as a tool for selecting students for Arabic language programs in universities and/or programs that require higher language skills, like Islamic studies and law, and for diagnostic purposes, including placement of students at an appropriate level in university-level Arabic language programs and course waivers from some Arabic courses. Such language skills are expected to have been acquired by the targeted population throughout their education. The ultimate purpose of the Qiyas for L1 Arabic language test is to serve as a standardized tool that assesses modern standard Arabic language skills, not only in Saudi Arabia but throughout the Arab world. Because of its widespread use, it is critical that the NCA, as the developer and owner of this test, ensures that the Qiyas for L1 test maintains adequate psychometric properties.
One step of test construction and development is to check the quality of test items, to be sure they are functioning as expected. This process is called reliability and validity analysis. At this stage of development, test developers usually select and use stringent measurement models suited to the type of responses on the test. The purpose of this process is to ensure that the data under study are appropriately handled before validation takes place. The existing practice of the NCA in the field-testing stage involves the use of item response theory measurement models (IRT), in particular, a three-parameter logistic model that calibrates test items, generates item parameters, checks their appropriateness, and then utilizes the best item parameters to construct the test. This 3-IRT model requires a sample size of at least 1000 people to ensure sufficient and accurate stability in item parameter estimation (e.g., Lord 1980; Hutten 1981). Inaccuracy in item parameter estimation can affect the measurement invariance property of the IRT, which, in turn, would call into question test-score validation (de Jong and Stoyanova 1994). Such was the case for this specific Qiyas for L1 Arabic language test in its first field-testing implementation, which involved only 271 people; thus, a more robust and suitable IRT model is needed to validate this form. In this study, Rasch measurement was selected as the model of choice. One advantage of using this model over other IRT models is that it is usable and applicable with small sample sizes, while maintaining strong and restrictive assumptions. For instance, a sample size of between 25 and 50 subjects per response category is adequate to achieve stable and accurate item parameters when analyzing dichotomous data with the Rasch model (Linacre 1994). Rasch models have been used for validation purposes in the area of language testing since the early 1980s. For instance, De Jong used the Rasch model to assess the validity of a language test (De Jong 1983; McNamara and Knoch 2012). Nakamura (2007) also examined the psychometric properties of an in-house English placement test with the Rasch model. However, the NCA has not commonly used the Rasch model for test-validation purposes during the field-testing stage; to date, application of the model has been largely limited to in-house technical measurement reports, such as test equating and test bias. Accordingly, the purpose of this study was to illustrate the usability and applicability of the Rasch measurement framework during field testing. The objective of the study was to examine the psychometric properties of the field-testing version of the Qiyas for L1 Arabic language test using the Rasch model. Specifically, this study asked the following research question: Does the field-testing version of the Qiyas for L1 Arabic language test exhibit adequate psychometric properties according to the Rasch measurement framework?
To answer this question, quantitative analysis within the framework of Rasch measurement was conducted. The model is briefly introduced below, before the analysis is discussed.
The Rasch model
The Rasch model is an item-response model that provides a linear transformation of the ordinal raw scores to a linear logit scale (Boone, Staver, and Yale 2014). More specifically, Rasch measurement specifies the relationships between people and items on a test that measures one trait at a time, that is, the likelihood of a person's success increases as the person's level of the trait increases. Conversely, the likelihood of failure increases as the person's level of the trait decreases. With the Rasch model, only the interaction between the person's position on the underlying ability being measured by a test and item difficulty is modeled. The model is expressed mathematically as follows (De Ayala 2009):
$$ p\left({x}_j=1|\theta, {\delta}_j\right)=\frac{e^{\left(\theta -{\delta}_j\right)}}{1+e^{\left(\theta -{\delta}_j\right)}}, \qquad (1) $$
where p(xj = 1 | θ, δj) is the probability of a response of 1, θ is the person location, δj is the location of item j, and e is the base of the natural logarithm, approximately 2.7183. In other words, Eq. 1 states that the probability of a person giving a correct response of 1 on item j is a function of the distance between the person located at θ and the item located at δj.
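To make Eq. 1 concrete, the short sketch below evaluates the Rasch response probability for a few person-item distances. It is purely illustrative and is not part of the original analysis (which was carried out in Winsteps); the person and item locations shown are hypothetical.

```python
import math

def rasch_probability(theta, delta):
    """Probability of a correct response (x = 1) under the Rasch model (Eq. 1)."""
    return math.exp(theta - delta) / (1.0 + math.exp(theta - delta))

# Illustrative values: a person 1 logit above, at, and 1 logit below an item's difficulty.
for theta, delta in [(1.0, 0.0), (0.0, 0.0), (-1.0, 0.0)]:
    print(f"theta={theta:+.1f}, delta={delta:+.1f} -> P(correct)={rasch_probability(theta, delta):.3f}")
# Output is roughly 0.731, 0.500, and 0.269: the probability depends only on the
# distance theta - delta, as stated above.
```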
Unlike other IRT models that focus on fitting the data given the constraints of the model, the Rasch model focuses on constructing the variable of interest (Andrich 1988; Wright and Masters 1982; Wright and Stone 1979). According to this perspective, the Rasch model represents the standard by which one can create a test for measuring the variable of interest; thus, the test data must meet this standard. The Rasch standard has an additive measurement form, where adding one more unit (defined by the logit value) adds the same extra amount regardless of how much was already there (Linacre 2012a, 2012b). However, the Rasch model requires that certain assumptions be met; not meeting those assumptions can compromise the usefulness of the model. These assumptions include unidimensionality, local independence, parallel item characteristic curves (ICCs), and measurement invariance. Unidimensionality assumes that the test data measure one ability or trait at a time (e.g., verbal ability). This assumption is never completely met; however, the presence of a dominant trait that influences the performance of the examinee on a test is necessary and can be met. Local independence is a related assumption that states that when a specified ability (e.g., verbal ability) influencing the performance of the examinee on a test has been partialled out, the responses of the examinee to any pair of items are statistically independent. The assumption of local independence can be met as long as the complete ability space has been accurately specified. It has been previously shown that if the assumption of unidimensionality is met, unidimensionality and local independence can be viewed as interchangeable concepts (Lord 1980; Lord and Novick 1968).
Another important assumption, unique to and required only by the Rasch model, is that the ICCs of test items must not intersect. Restrictions on item discrimination have made it difficult for some test data to fit expectations of the Rasch model unless discrimination indices are chosen to be equal (Birnbaum 1968). Stage (1996) also found it difficult to fit the Rasch model to some test data. A study by Leeson and Fletcher (2003) has also shown evidence consistent with the findings of Birnbaum (1968) and Stage (1996). On the other hand, studies conducted by Wright and Panchapakesan (1969), Dinero and Haertel (1977), and Hambleton and Cook (1978) have indicated that the Rasch model is robust to heterogeneous item discrimination, that is, variation in item discrimination indices has little impact on the fit of the Rasch model. Measurement invariance is another property that is required by the Rasch model. This property assumes that a person's ability can be estimated independently of items on a particular test and that item indices can be estimated independently of the specific sample of people taking the test. Another feature of the Rasch model that distinguishes it from other unidimensional IRT models is that the total score provides sufficient statistics to model the data of interest. With the Rasch model, the total score contains all the information needed to estimate θi (Wright 1984; De Ayala 2009). The total score is sufficient for Rasch model estimation, but only if fit statistics conform to model expectations. Moreover, researchers can establish construct-related validity evidence if the data meet the requirements of the Rasch model. The Rasch model provides researchers with quality-control fit statistics and a principal component analysis of residuals (PCAR) that can be used to evaluate the internal structure of test scores. The outcome informs the researcher as to whether item responses follow a logical pattern. Items that do not fit a logical pattern are likely to be harmful to the construct under study and should be modified or deleted. Conversely, items that fit the logical pattern are likely to enhance the construct and should be retained.
Furthermore, differential item functioning (DIF) analysis within the Rasch measurement framework can be used to help detect bias or irrelevant factors. In the recently released Standards for Educational and Psychological Testing (American Educational Research Association 2014), the issue of "equivalence of the construct being assessed" was considered with respect to the importance of designing tests that produce scores that reflect only the ability that is being measured by the test, regardless of the identity of the subgroup that took it. Hence, the authors recommend the use of DIF analysis as an approach to investigate item bias. Differential item functioning occurs when examinees with the same level of ability from different subgroups (e.g., gender, language background, etc.) differ in their likelihood of answering an item correctly. Several DIF methods can detect item bias, including the Mantel-Haenszel test (Holland and Thayer 1988), logistic regression (Zumbo 1999), and IRT-based DIF methods (Thissen 1991; Thissen et al 1993; Wright and Stone 1979; Wright, Mead, and Draba 1976). The IRT-based DIF methods are relevant to this study, specifically Rasch-based DIF methods. According to Smith (2004), two approaches exist within the framework of the Rasch model: the separate t test approach and the between-group fit approach. The former is based on two separate t test calibrations of two or more subgroups of interest. The latter is based on a single calibration that involves one or more subgroups of interest. In this study, the former was used, and so it is briefly discussed below.
The t-statistic approach
The t-statistic approach is well documented in the Rasch model literature (Smith 2004). It is based on the differences between two separate calibrations of the same item of two or more subgroups of interest. Mathematically, the t-statistic approach is expressed as follows (Smith 2004):
$$ t=\frac{d_{i1}-d_{i2}}{\left(s_{i1}^2+s_{i2}^2\right)^{1/2}}, $$
where di1 is the difficulty of item i based on the first subpopulation, di2 is the difficulty of item i based on the second subpopulation, si1 is the standard error of the estimate for di1, and si2 is the standard error for di2. This method works only with pairwise comparisons; if there are more than two subpopulations included in the analysis, multiple comparisons must be made. A drawback of multiple comparisons based on one variable is that the type I error rate and the ability of this statistic to detect bias can be affected; however, this issue is of little concern here given that only pairwise comparisons were conducted. Furthermore, an essential requirement of this method is that any item that does not fit the Rasch model should be excluded. An item with poor fit can violate the fundamental assumptions of the Rasch standard that ICCs do not cross and that the lower asymptote of the ICCs must be zero (Smith 2004).
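As an illustration of the separate-calibration approach, the sketch below computes the DIF contrast, the t statistic, and an approximate two-sided p-value from the item difficulties and standard errors of two subgroup calibrations. The numbers are hypothetical, and the calculation is only a schematic stand-in for the Winsteps output used in this study.

```python
from math import sqrt, erfc

def dif_t_statistic(d1, s1, d2, s2):
    """Separate-calibration DIF statistic (Smith 2004): the item-difficulty contrast
    divided by its joint standard error, with a normal-approximation p-value."""
    contrast = d1 - d2
    t = contrast / sqrt(s1 ** 2 + s2 ** 2)
    p = erfc(abs(t) / sqrt(2))  # two-sided p-value under a normal approximation
    return contrast, t, p

# Hypothetical item calibrated separately for two subgroups (difficulties and SEs in logits).
contrast, t, p = dif_t_statistic(d1=0.85, s1=0.21, d2=0.10, s2=0.20)
print(f"contrast = {contrast:.2f} logits, t = {t:.2f}, p = {p:.3f}")
# A common Rasch convention (applied later in this paper) flags an item for DIF when
# |contrast| >= 0.5 logits and p <= 0.05.
```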
Data from the field-testing version of Qiyas for L1 Arabic language test used in this study were obtained from the NCA database. This test was administered by the NCA in January 2017 in Riyadh, Saudi Arabia. The test included binary-scored responses where 1 = correct and 0 = incorrect for 271 examinees of both genders (male and female).
The Qiyas for L1 Arabic language test is a newly developed standardized test designed to measure the extent of Arabic language skills in L1 speakers and classify them at the appropriate level. More specifically, it aims to measure Arabic language skills in L1 speakers starting at approximately 11th grade and continuing through to university graduates for educational and professional purposes. Table 1 defines the skills measured by the Qiyas for L1 Arabic test.
Table 1 Test skills and number of items for Qiyas L1 Arabic test
Table 1 shows that the Qiyas for L1 Arabic language test is composed of the following skills:
Reading comprehension: Reading passages are of various lengths, classified as short (40–50 words), medium (51–200 words), and long (more than 201 words). Items in this skill target higher-order reading comprehension abilities, including inference and understanding of subtle meaning; text analysis; synthesis and abridgement; and summary.
Rhetorical expression: Items in this skill target situational, stylistic, and figurative language use. Punning, hyperbole, and metaphor are figures of speech that are used to measure pragmatic uses of language. Speech ornaments are common in the Arabic language. They are used to measure the extent of stylistic appreciation of language.
Structure: Items in this skill target all forms of structural correctness, including correct syntactic constructions like predication, attribution, coordination, conjunction, and adjectival and adverbial constructions. Structural correctness is highly linked to actual language use, rather than the perceptual understanding of grammar.
Writing accuracy: Items in this skill target correct written communication, which is intimately related to writing technique and spelling.
In addition, the Qiyas for L1 Arabic language test was developed to estimate four levels of language attainment, as follows:
Below medium: Examinees at this level are able to write texts that show basic language structure but lack expository narrative writing skills. Examples of the latter skills include the development of ideas, cohesion, and organization. Basic spelling mistakes are rampant. Examinees at this level are able to determine the main messages of reading and listening passages. They are also able to recognize explicit and direct ideas and some common words. Examinees at this level are not expected to distinguish rhetorical expressions and/or comprehend their significance.
Medium: Examinees at this level are able to write texts that show some detailed language structure beyond basic. They may also demonstrate some expository narrative writing skills related to idea development, cohesion, and organization. Some fine spelling mistakes may be present. Examinees at this level are able to determine the main messages of reading and listening passages. They are also able to recognize explicit ideas and some inexplicit ideas, as well as the textual meanings of common words. Examinees at this level are able to distinguish easier rhetorical expressions and/or comprehend their significance.
High-medium: Examinees at this level are able to write texts that show correct basic, and some more detailed, language structure. They may also show greater expository narrative writing skills related to idea development, cohesion, and organization. Occasional spelling mistakes may be present. Examinees at this level are able to determine the main messages of reading and listening passages. They are also able to recognize explicit ideas and some inexplicit ideas, as well as the textual meanings of common words. Examinees at this level are able to distinguish a significant number of rhetorical expressions and comprehend their uses and/or significance.
High: Examinees at this level are able to write texts that demonstrate correct basic and detailed language structure. They also show optimal expository narrative writing skills related to idea development, cohesion, and organization. Examinees at this level utilize creative, persuasive, and polemic writing techniques. No spelling mistakes are present. Examinees at this level are able to determine inferred messages in reading and listening passages. They are also able to recognize explicit and implicit ideas and the textual meanings of some common words. Examinees at this level are able to distinguish almost all rhetorical expressions and comprehend their uses and/or significance.
To evaluate the psychometric properties of the Qiyas for L1 Arabic test, Winsteps® version 3.75.1 (Linacre 2012a, 2012b) was used. Two stages of Rasch analysis were carried out in this particular study. The first stage of analysis compared the suitability of Qiyas L1 test items against the Rasch standard. Various Rasch statistical indices were examined (e.g., fit statistics, person and item reliability indices, the person-item map, and point-measure correlation indices). The second stage of analysis investigated the structural aspect of the Qiyas for L1 Arabic language test using PCAR and DIF analysis, to examine whether irrelevant factors might be interfering with the main construct under study.
Fit statistics and reliability analysis
Qiyas for L1 Arabic language test data were fitted to the Rasch model. Table 2 shows that the overall mean infit and outfit were 1.00 and 1.02, respectively, with a mean standardized infit and outfit of 0.0 and 0.1, respectively. This result suggests that overall, the Qiyas for L1 Arabic language test data fit the Rasch model reasonably well. The extra 0.02 in the overall mean outfit represented a small amount of unmodeled noise in the Qiyas for L1 Arabic language test data. Table 2 also shows that the reliability of the Qiyas for L1 Arabic language test was 0.86, supporting the notion that the ordering of persons along the construct is replicable given similar items measuring the same trait, that is, the Qiyas for L1 Arabic language test had adequate test score reliability. The person separation index was 2.48. This index measures the spread of examinee scores along a logit interval scale. Separation greater than 1 suggests that the data are sufficiently broad in terms of position (Frantom and Green 2002).
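The person reliability and the person separation index in Table 2 are two expressions of the same information. The small check below (an illustrative calculation, not part of the Winsteps output) shows that a reliability of 0.86 corresponds to a separation of approximately 2.48, as reported.

```python
from math import sqrt

def separation_from_reliability(reliability):
    """Rasch separation index G implied by a reliability coefficient R: G = sqrt(R / (1 - R))."""
    return sqrt(reliability / (1.0 - reliability))

def reliability_from_separation(separation):
    """Inverse relation: R = G**2 / (1 + G**2)."""
    return separation ** 2 / (1.0 + separation ** 2)

print(round(separation_from_reliability(0.86), 2))  # about 2.48, matching Table 2
print(round(reliability_from_separation(2.48), 2))  # about 0.86
```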
Table 2 Overall person- and item-fit statistics and reliability analysis of the Qiyas for L1 Arabic language test
Table 2 also demonstrates that the item reliability index was 0.97. This indicates that the Qiyas for L1 Arabic language test items were reasonably well dispersed along the interval logit scale, suggesting an adequate breadth of position on the linear continuum from persons who were less skillful to more skillful in Arabic language proficiency. The person-item map, as depicted in Fig. 1, provides a clear picture of the linear continuum of the performance of persons in comparison to the Qiyas for L1 Arabic language test items. The left side represents people on the interval logit scale continuum. The upper left quadrant represents people who were more skillful in Arabic, whereas the lower left quadrant indicates people who were less skillful. The right side of the map represents Qiyas for L1 Arabic language test items. More difficult items are located closer to the top and easier items are located closer to the bottom of the graph. The letter "M" is the distribution mean for both items and persons, and "SD" is one standard deviation. The symbol "T" represents two standard deviation units. The mean of the item distribution was set to 0. Figure 1 indicates that the mean of the person distribution was 0.33 logits (about one-third SD) higher than the mean of the item distribution and that the person distribution was negatively skewed. This suggests that the Qiyas for L1 Arabic language test items were slightly easier for this group. Some gaps in the item location distribution exist in the map, particularly in the middle and at the top right side of the map. This finding indicates that people in the middle and upper levels of the distribution were not reasonably targeted by the Qiyas for L1 Arabic language test items, that is, the content aspect of the construct under study lacked some representation, potentially compromising the validity of the test (Messick 1989). Therefore, a future version of Qiyas for L1 should include more items that accurately represent the targeted group's ability level.
Person-item map for the Qiyas L1 Arabic test
The Qiyas for L1 Arabic language test items were examined using item mean square outfit statistics and point-measure correlations (see Appendix 1). The item mean square outfit ranged between 0.7 and 1.73. The item mean square outfit is a Rasch-based fit statistic, computed from standardized residuals, that is used for assessing item fit. The item mean square outfit statistic is relatively more sensitive to patterns of misfit far from the person trait level. The expected value of this index is 1. However, items that adequately fit the Rasch model are expected to have values between 0.7 and 1.30 (Wright and Linacre 1994). Table 6 (see Appendix 1) shows that only five out of 50 items in the Qiyas for L1 test failed to fit Rasch model expectations. These items were as follows: items 19 and 20 in rhetorical expression, items 26 and 28 in structure, and item 50 in writing accuracy. These five items should be inspected carefully and then either modified or removed from the Qiyas L1 test because they contain construct-irrelevant variance that threatens the structural validity of the construct (Messick 1989). Additionally, inspection of the point-measure correlation index indicates that all but two Qiyas for L1 test items were positively correlated with the construct. Point-measure correlations in the Rasch model are analogous to point-biserial correlations in classical test theory and describe how well each item contributes to the total test score. For example, Table 6 (see Appendix 1) shows that items 19 and 20 associated with rhetorical expression had point-measure correlations of 0.08 and 0.09, respectively. These small positive correlations suggest that these two particular items were either not functioning as intended or did not contribute adequately to the construct. A common practice in a Rasch investigation is to modify or delete any items with negative or close-to-zero point-measure correlations because they contradict the construct of interest (Linacre 1998; Bond and Fox 2007). Thus, all five items that misfit the Rasch standard, including those with close-to-zero point-measure correlations, were removed for further investigation of the internal structure of the Qiyas for L1 Arabic language test.
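For readers unfamiliar with these indices, the sketch below shows how an item's outfit mean square and point-measure correlation can be computed from a scored response matrix once person and item measures are available. The response matrix and measures here are simulated placeholders; the values in Appendix 1 come from Winsteps.

```python
import numpy as np

def rasch_p(theta, delta):
    """Expected score matrix under the Rasch model for persons (rows) and items (columns)."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :])))

def item_outfit_and_pmc(X, theta, delta):
    """Outfit mean square (mean squared standardized residual over persons) and
    point-measure correlation (correlation of item scores with person measures)."""
    P = rasch_p(theta, delta)
    var = P * (1.0 - P)                      # binomial variance of each response
    z2 = (X - P) ** 2 / var                  # squared standardized residuals
    outfit = z2.mean(axis=0)                 # unweighted mean over persons = outfit MNSQ
    pmc = np.array([np.corrcoef(X[:, i], theta)[0, 1] for i in range(X.shape[1])])
    return outfit, pmc

# Hypothetical example: 200 simulated persons, 5 items.
rng = np.random.default_rng(0)
theta = rng.normal(0.33, 1.0, 200)              # person measures (logits)
delta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # item difficulties (logits)
X = (rng.random((200, 5)) < rasch_p(theta, delta)).astype(float)
outfit, pmc = item_outfit_and_pmc(X, theta, delta)
print(np.round(outfit, 2), np.round(pmc, 2))    # outfits near 1.0, clearly positive PMCs
```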
Unidimensionality analysis
The structural aspect of validity of the Qiyas for L1 Arabic language test was further investigated using PCAR. The PCAR is a factor analysis of residuals, after the Rasch model is applied to the data. The factor analysis of these residuals is used to identify common variance shared among data that is unexplained by the Rasch model. If a dominant measure not explained by the Rasch model is found among the items, then it can be inferred that a dimension other than the intended dimension has interfered with the test data. This calls into question the structural aspect of validity of the measure. Linacre (2012a, 2012b) argued that a contrast in the residuals needs an eigenvalue of at least 2 (the strength of two items) before it can be considered a separate dimension. Simulation studies have shown that eigenvalues might reach 2 accidentally (Raîche 2005; Linacre 2012a, 2012b). Table 3 displays the results of the Rasch PCAR analysis after removing misfit items. The total variance explained by the Rasch model was 14.4%. The small percentage of explained variance could be due to narrow ranges of ability in examinees or the difficulty level of some items. In other words, similar abilities among examinees and equal difficulty of test items could have caused the small total explained variance observed. Table 4 shows that the first contrast remaining after the Qiyas for L1 Arabic language test data were fit to the Rasch model had a strength of 2.3 eigenvalue units out of 45 items. This contrast explained 3.8% of the variance, which exceeded the benchmark of 2 eigenvalue units suggested by Linacre (2012a, 2012b) and could be indicative of multidimensionality.
Table 3 PCARs of Qiyas for L1 Arabic language test data
Table 4 Factor loadings of Qiyas for L1 Arabic language test items that signify multidimensionality
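To make the PCAR procedure concrete, the following sketch performs a principal component analysis of standardized Rasch residuals with NumPy. It is a simplified illustration under assumed inputs (simulated responses plus known person and item measures), not a reproduction of the Winsteps computation; with data simulated from the Rasch model itself, the first-contrast eigenvalue should stay near the chance level of about 2.

```python
import numpy as np

def pcar_eigenvalues(X, theta, delta):
    """Eigenvalues (in item units) of the correlation matrix of standardized Rasch residuals.
    A first-contrast eigenvalue well above 2 is taken as a warning sign of a secondary dimension."""
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :])))
    Z = (X - P) / np.sqrt(P * (1.0 - P))     # standardized residuals, persons x items
    R = np.corrcoef(Z, rowvar=False)         # item-by-item correlation of residuals
    return np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues sum to the number of items

# Hypothetical data: 250 persons, 45 items, responses simulated from the Rasch model.
rng = np.random.default_rng(1)
theta = rng.normal(0.0, 1.0, 250)
delta = np.linspace(-2.0, 2.0, 45)
P = 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :])))
X = (rng.random((250, 45)) < P).astype(float)
print(round(pcar_eigenvalues(X, theta, delta)[0], 2))  # close to 2 for unidimensional data
```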
Inspection of Table 4 indicates that items 44 and 37 in writing accuracy and item 31 in structure all had large positive factor loadings of 0.40. The clustering of these three items is notable because it suggests that they have a common meaning that is different from the yardstick of Rasch measurement (Bond and Fox 2007). This finding is evidence of a secondary dimension with a small influence. In general, any item loading ≥ 0.40 should be investigated (Bond and Fox 2007).
Because the results indicated the presence of an influential three-item cluster on the Qiyas for L1 Arabic language test, two separate Rasch calibration analyses were conducted to determine if person measures were severely affected by the secondary dimension (Wright and Stone 1979; Linacre 2012a, 2012b). A confirmatory finding would indicate that those items should be either modified or removed from the Qiyas for L1 Arabic language test, because the construct runs a risk of being distorted by this irrelevant sub-dimension. The first Rasch analysis targeted only the three-item cluster with positive loadings (writing accuracy). The second Rasch calibration targeted items with negative loadings. If the Qiyas for L1 Arabic language test fits the Rasch standard, then the person measures should remain invariant, allowing for a reasonable number of errors, that is, the person measures obtained from the two calibrations should fall within the 95% two-sided confidence interval (Bond and Fox 2007; Linacre 2012a, 2012b).
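One simple way to operationalize this invariance criterion is to check, person by person, whether the difference between the two calibrated measures exceeds the joint error of measurement, and to correlate the two sets of measures. The sketch below uses hypothetical person measures and standard errors in place of the two Winsteps calibrations.

```python
import numpy as np

def invariance_check(m1, se1, m2, se2, z=1.96):
    """Count persons whose two calibrated measures differ by more than an approximate
    95% joint error band, i.e. |m1 - m2| > z * sqrt(se1**2 + se2**2), and report the
    correlation between the two sets of measures."""
    m1, se1, m2, se2 = map(np.asarray, (m1, se1, m2, se2))
    outside = np.abs(m1 - m2) > z * np.sqrt(se1 ** 2 + se2 ** 2)
    r = np.corrcoef(m1, m2)[0, 1]
    return int(outside.sum()), r

# Hypothetical person measures (logits) from the two calibrations and their standard errors.
m1 = [0.4, 1.1, -0.3, 0.8, 0.0]
se1 = [0.45, 0.50, 0.44, 0.47, 0.43]
m2 = [0.6, 0.9, -0.5, 1.0, 0.2]
se2 = [0.40, 0.42, 0.41, 0.43, 0.40]
n_outside, r = invariance_check(m1, se1, m2, se2)
print(n_outside, round(r, 2))  # ideally few or no persons outside the band, high correlation
```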
As depicted in Fig. 2, one or two person measures fell outside the specified 95% confidence interval, and the correlation between the two sets of person measures was 0.66. This result implies that the Qiyas L1 Arabic language test is unidimensional.
Cross-plot person measures for the Qiyas L1 Arabic test
DIF analysis
DIF analysis was also performed to determine whether any irrelevant factors interfered with the construct under study, and an analysis of test and item bias within the framework of the Rasch model was applied to the Qiyas for L1 Arabic language test data. A review of the literature of test bias (Crocker and Algina 1986) indicated that bias exists when test outcomes or results reflect irrelevant factors or characteristics outside the construct of interest (e.g., demographic variables). By this definition, bias would impair the construct validity by means of test score interpretation. Therefore, to investigate whether the Qiyas for L1 Arabic language test data produced assessment scores that reflected only the construct of interest, a Rasch analysis of uniform differential test and item functioning by gender (e.g., male versus female) was implemented. Items that misfit the Rasch model were first excluded. After removing five misfit items, a DIF analysis by gender was performed on the remaining 45 items to determine whether the Qiyas for L1 Arabic language test produced invariant scores in the cross-gender subgroup classification. Bond and Fox (2007) suggested that item measures obtained from two Rasch analyses must fall within the 95% two-sided confidence interval to be invariant. Thus, two separate Rasch analyses for male and female subgroups were conducted. The two item measures obtained from each analysis were then cross-plotted, as shown in Fig. 3.
DTF by gender for the Qiyas L1 Arabic language test
The dashed line in the middle of Fig. 3 represents the Rasch model best-fit line. The two curved solid lines surrounding the best-fit line represent the 95% confidence interval. The plot shows that, except for a few items, the majority of the Qiyas for L1 Arabic language test data fall within the 95% confidence interval and cluster around the Rasch best-fit line across male and female subgroups. This result implies that the Qiyas for L1 Arabic language test produces item measures that are invariant with respect to gender.
In the next step, DIF analysis at the item level was implemented to determine the specific items that exhibited bias across males and females. Linacre (2012a, 2012b) recommended two criteria: first, the probability associated with the item DIF should be small, that is, the DIF must be statistically significant, with p ≤ 0.05. Second, the DIF contrast must be at least 0.5 logits to merit a noticeable DIF difference. Test data were subjected to a uniform DIF analysis by gender (male versus female). Table 5 displays the results of the DIF analysis for items that exhibited a significant and noticeable DIF. Items 1, 6, 9, and 10 in reading comprehension, item 16 in rhetorical expression, and item 36 in writing accuracy were statistically significant at an alpha level of 0.05 and had DIF contrasts at or above the required 0.5 logits. The same findings are depicted in Fig. 4. The gendered responses on items 1, 6, and 36 with logit values of 0.85, 0.95, and 0.66, respectively, indicated that those items were more difficult for males than for females. Conversely, items 9, 10, and 16 with logit values of − 0.66, − 0.63, and − 1.00, respectively, were easier for males than for females. Those flagged items would benefit from further investigation by the test developers and subject-matter reviewers of the Qiyas for L1 Arabic test, to determine why they differed between males and females. Overall, the results of the uniform DIF analysis within the framework of the Rasch measurement model lent support to the structural aspect of validity of the Qiyas for L1 Arabic language test from the perspective of gender. The complete DIF analysis is given in Appendix 2.
Table 5 DIF analysis of the Qiyas for L1 Arabic language test at the item level
DIF analysis by gender
The purpose of this study was to investigate the psychometric properties of the field-testing version of the Qiyas for L1 Arabic language test using Rasch analysis. To this end, several Rasch quality-control fit-statistic indices were used. At the test level, the mean square outfit statistic of the Qiyas for L1 Arabic language test was 1.02, indicating that it fit the Rasch standard reasonably well. However, at the item level, mean square outfit statistics indicated that five out of 50 items on the Qiyas L1 Arabic language test were misfits, according to the Rasch standard. Those items would benefit from modification or should be deleted from the Qiyas for L1 test because they add irrelevant variance that could distort the precision of measurement of the construct and compromise the structural aspect of validity. The Rasch reliability estimate for the Qiyas for L1 Arabic language test was 0.86, indicating adequate test-score reliability. The person-item map of the Qiyas for L1 test indicated that the average person location is higher than the average item location by one-third SD, suggesting that the Qiyas for L1 Arabic language test items are slightly easier compared with the ability of those taking the test. In fact, the small mismatch between the locations of persons and items could be explained by the gaps at the top right and middle of the item locations. Those gaps clearly illustrate that there was a content representation deficiency in the Qiyas for L1 Arabic language test, which suggests that the construct of interest is under-represented. Construct under-representation could threaten the content aspect validity of the construct, for example, if not enough items target the average- and higher-ability groups. This issue of construct under-representation could be resolved by instructing the Qiyas for L1 Arabic language test developer to add more items targeting specified groups. This modification would likely improve the quality and content representation of the Qiyas for L1 Arabic language test. However, it should be emphasized that the addition of more items should be driven by practical applications of measurement. If important decisions are being made about performance on a scale, then the scale should have sufficient item coverage. If a particular group or subset of a group is being targeted, then that targeted group should have sufficient item coverage on the scale.
The structural aspect of the construct under study was further investigated using PCAR and DIF methods in the second-stage analysis, after removing unwanted items. Findings from the PCAR analysis lent support to the assumption of unidimensionality of the Qiyas for L1 Arabic language test for assessing four skills, whereas the DIF analysis flagged six items that exhibited significant DIF and merit further investigation. Those identified items should be reviewed because they contain construct-irrelevant variance that could alter the precision of measurement and threaten the structural aspect of validity of the construct. The results of the Rasch-based DIF analysis could inform the developers and content experts of the Qiyas for L1 Arabic language test in determining what caused those particular six items to differ between males and females, before concluding that they are biased items.
Overall, the findings of this study indicate that the field-testing version of the Qiyas for L1 Arabic language test has satisfactory psychometric properties. However, two limitations of the study should be noted. First, the data used in this study were not collected from real job applicants and other high school- and university-level students across Saudi Arabia. Instead, they were collected from students in high schools and with college-level education in Riyadh and may therefore not be representative of the intended population of the Qiyas for L1 Arabic language test. Thus, the generalizability of the findings of this study is limited; additional investigations using more representative samples are warranted. Second, the sample size used to conduct the DIF analysis was small. Smith (2004) noted that Rasch-based DIF methods lack the power to detect biased items of less than 0.5 logits when the sample size is smaller than 500 people in each subpopulation. Moreover, DIF studies commonly produce nonreplicable results (Linacre 2012a, 2012b). Consequently, it is possible that the six items flagged in this study would not be flagged in a different study. Follow-up studies using greater sample sizes are needed to confirm the results reported here.
Last, the findings of this study suggest new avenues for future research studies. First, taking into account the representative sample targeted in the Qiyas for L1 Arabic language test, it would be beneficial to cross-validate this study using different measurement models (e.g., classical test theory models, item response theory models, structural equation modeling) and compare the results of those models with those obtained using the Rasch measurement framework. Such comparisons would fill some gaps in knowledge associated with using a Rasch measurement framework alone. For instance, validity is typically viewed as a unitary concept that embodies all evidence that supports the intended interpretation of test scores for the proposed use (AERA, APA, and NCME, 2014). In this study, only psychometric features related to the structural and content aspects of validity were investigated. Validity arguments (Messick 1989) that include findings from different measurement and statistical models would add substantial value to the intended interpretation of the Qiyas for L1 Arabic language test score. Second, a standard-setting analysis is an integral component of test development, especially in testing situations related to education and licensing. Additional studies are needed to determine if the four-level categorization of language attainment, as specified by the Qiyas for L1 test developers, is reasonably defined in the field-testing version of the Qiyas for L1 Arabic language test.
Third, the ultimate objective of the Qiyas for L1 Arabic language test is to serve as a standardized tool that assesses modern standard Arabic language skills, not only in Saudi Arabia but throughout the Arab world. It would therefore be beneficial for the NCA to carry out cross-cultural studies of the Qiyas for L1 Arabic language test in other Arab countries. Such studies would strengthen the reliability and validity of the test and also broaden its usability, resulting in greater international recognition.
The overall aim of this study was to highlight the usability, applicability, and informative nature of the Rasch measurement framework in the field of language testing. The specific objective was to investigate the psychometric properties of the test during field-testing using a Rasch model. The initial findings of the Rasch analysis indicated that the Qiyas for L1 Arabic language test has satisfactory psychometric properties. However, this result should be interpreted with caution given the limitations of the sample population used. Thus, continued investigation of the psychometric properties of the test is necessary to ensure its appropriate use as a tool of assessment for modern Arabic language skills. Nevertheless, because the test data conformed to model expectations, developers of the Qiyas for L1 Arabic language test would likely benefit from these findings during field-testing for development and validation, particularly when the sample size is small. For instance, test developers could be guided by the results of the Rasch analysis in efforts to improve the effectiveness of the assessment tool by adding, removing, or modifying some items. Test developers and measurement practitioners would also benefit from using Rasch analysis to evaluate the psychometric features of the test to assess construct-related validity (e.g., structural and content aspects of validity of test scores); this assessment would support the interpretation of test scores.
DIF:
Differential item functioning
ICCs:
Item characteristic curves
IRT:
NCA:
National Center for Assessment
PCAR:
Principal component analysis of residuals
PMC:
Point-measure correlation
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Andrich, D (1988). Rasch models for measurement. Newbury Park, CA: Sage.
Birnbaum, A (1968). Some latent trait models and their use in inferring an examinee's ability. In FM Lord, MR Novick (Eds.), Statistical theories of mental tests. Reading, MA: Addison-Wesley.
Bond, TG, & Fox, CM (2007). Applying the Rasch model: fundamental measurement in the human sciences. Mahwah, NJ: Lawrence Erlbaum Associates.
Boone, WJ, Staver, JR, Yale, MS (2014). Rasch analysis in the human sciences. Dordrecht: Springer.
Crocker, L, & Algina, J (1986). Introduction to classical and modern test theory. Philadelphia: Harcourt Brace Jovanovich College Publishers.
De Ayala, RJ (2009). The theory and practice of item response theory. New York: Guilford Press.
De Jong, J.H.A.L. (1983). Focusing in on a latent trait: an attempt at construct validation by means of the Rasch model. In Van Weeren, J. (Ed.), Practice and problems in language testing 5. Non-classical test theory; final examinations in secondary schools. Papers presented at the International Language Testing Symposium (Arnhem, Netherlands, March 25–26, 1982) (pp. 11–35). Arnhem: Cito.
De Jong, J.H.A.L., & Stoyanova, F. (1994). Theory building: Sample size and data-model fit. Paper presented at the annual Language Testing Research Colloquium, Washington, DC.
Dinero, TE, & Haertel, E. (1977). Applicability of the Rasch model with varying item discriminations. Applied Psychological Measurement, 1(4), 581–592.
Frantom, C.G., & Green, K.E. (2002). Survey development and validation with the Rasch model. Paper presented at the international conference on questionnaire development, evaluation, and testing, Charleston, SC.
Hambleton, R.K., & Cook, L.L. (1978). Some results on the robustness of latent trait models. Paper presented at the annual meeting of the American Educational Research Association, Toronto, Ontario, Canada.
Holland, PW, & Thayer, DT (1988). Differential item performance and the Mantel-Haenszel procedure. In H Wainer, HI Braun (Eds.), Test validity, (pp. 129–145). Hillsdale, NJ: Lawrence Erlbaum Associates.
Hutten, LR (1981). The fit of empirical data to latent trait models (doctoral dissertation). Amherst, MA: University of Massachusetts.
Leeson, H, & Fletcher, R (2003). An investigation of fit: comparison of the 1-, 2-, 3-parameter IRT models to the Project asTTle data. Paper presented at the conference of the Australian Association for Research in Education, Auckland, New Zealand.
Linacre, JM. (1994). Sample size and item calibrations stability. Rasch Measurement Transactions, 7(4), 328.
Linacre, JM. (1998). Detecting multidimensionality: which residual works best. Journal of Outcome Measurement, 2(3), 266–283.
Linacre, JM (2012a). Winsteps® Rasch measurement computer program User's Guide. Beaverton, OR: Winsteps.com.
Linacre, JM (2012b). Winsteps® (Version 3.75.1) [Computer Software]. Beaverton, Oregon: Winsteps.com.
Lord, FM (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lord, FM, & Novick, MR (1968). Statistical theories of mental tests. Reading, MA: Addison-Wesley.
McNamara, T, & Knoch, U. (2012). The Rasch wars: the emergence of Rasch measurement in language testing. Language Testing, 29(4), 555–577.
Messick, S (1989). Validity. In RL Linn (Ed.), Educational measurement, (3rd ed., pp. 13–103). New York: Macmillan.
Nakamura, Y. (2007). A Rasch-based analysis of an in-house English placement test. http://hosted.jalt.org/pansig/2007/HTML/Nakamura.htm. Accessed 18 Feb 2018.
Raîche, G. (2005). Critical eigenvalue sizes in standardized residual principal components analysis. Rasch Measurement Transactions, 19(1), 1012.
Smith, RM (2004). Detecting item bias with the Rasch model. In EV Smith Jr., RM Smith (Eds.), Introduction to Rasch measurement, (pp. 391–418). Maple Grove, MN: JAM Press.
Stage, C. (1996). An attempt to fit IRT models to the DS subtest in the SweSAT. (Educational Measurement No 19). Umeå University, Department of Educational Measurement.
Thissen, D (1991). MUTLTILOG user's guide: multiple categorical item analysis and test scoring using item response theory (Version 6.0). Chicago: Scientific Software.
Thissen, D, Steinberg, L, Wainer, H (1993). Detection of differential item functioning using the parameters of item response models. In PW Holland, H Wainer (Eds.), Differential item functioning, (pp. 67–113). Hillsdale, NJ: Lawrence Erlbaum Associates.
Wright, BD (1984). MESA Research [Memorandum 41]. Hillsdale: Lawrence Erlbaum Associates http://www.rasch.org/memo41.htm. Accessed 18 Oct 2006.
Wright, BD, & Masters, GN (1982). Rating scale analysis. Chicago: MESA Press.
Wright, BD, & Panchapakesan, N. (1969). A procedure for sample-free item analysis. Educational and Psychological Measurement, 29(1), 23–48.
Wright, BD, & Stone, MH (1979). Best test design. Chicago: MESA Press.
Wright, BD, Mead, RJ, Draba, R (1976). Detecting and correcting test item bias with a logistic response model (Research Memorandum 22). Chicago: University of Chicago, MESA Psychometric Laboratory.
Zumbo, BD (1999). A handbook on the theory and methods of differential item functioning (DIF): logistic regression modeling as a unitary framework for binary and Likert-type (ordinal) item scores. Ottawa, ON: Directorate of Human Resources Research and Evaluation, Department of National Defense.
This research paper was supervised by the Department of Research and Studies at the National Center for Assessment, Saudi Arabia. Their cooperation was highly appreciated. Also, I sincerely express my deep gratitude to Dr. Abdul Rahman Shamrani, director of language testing at the National Center for Assessment, Riyadh, Saudi Arabia, for providing information with respect to test content and for his thoughtful review of the initial draft of this research paper.
The datasets supporting the conclusions of this article are available from the National Center for Assessment (NCA), Riyadh, Saudi Arabia. Copyright for the data also belongs to the NCA.
Department of General Studies, King Fahd University of Petroleum and Minerals, P.O. Box 18, Dhahran, 31261, Kingdom of Saudi Arabia
Amjed A. Al-Owidha
Amjed Al-Owidha is the sole contributor to this research paper. The author read and approved the final manuscript.
Correspondence to Amjed A. Al-Owidha.
Table 6 Summary of item measures, outfit indices, and PMCs of Qiyas for L1 Arabic language test data
Table 7 Summary of item bias analysis of Qiyas for L1 Arabic language test data by gender
Al-Owidha, A.A. Investigating the psychometric properties of the Qiyas for L1 Arabic language test using a Rasch measurement framework. Lang Test Asia 8, 12 (2018) doi:10.1186/s40468-018-0064-5
Language testing
Arabic language test
Rasch model | CommonCrawl |
\begin{document}
\date{October 29, 2020}
\begin{abstract} This paper lays the foundation for Plancherel theory on real spherical spaces $Z=G/H$, namely it provides the decomposition of $L^2(Z)$ into different series of representations via Bernstein morphisms.
These series are parametrized by subsets of spherical roots which determine the fine geometry of $Z$ at infinity. In particular, we obtain a generalization of the Maass-Selberg relations. As a corollary we obtain a partial geometric characterization of the discrete spectrum: $L^2(Z)_{\rm disc }\neq \emptyset$ if $\mathfrak{h}^\perp$ contains elliptic elements in its interior. \par In case $Z$ is a real reductive group or, more generally, a symmetric space our results retrieve the Plancherel formula of Harish-Chandra (for the group) as well as that of Delorme and van den Ban-Schlichtkrull (for symmetric spaces) up to the explicit determination of the discrete series for the inducing datum. \end{abstract}
\author[Delorme]{Patrick Delorme} \email{[email protected]} \address{Institut de Math\'ematiques de Marseille, UMR 7373 du CNRS, \\ Campus de Luminy, Case 907 - 13288 MARSEILLE Cedex 9}
\author[Knop]{Friedrich Knop} \email{[email protected]} \address{Department Mathematik, Emmy-Noether-Zentrum\\ FAU Erlangen-N\"urnberg, Cauerstr. 11, 91058 Erlangen}
\author[Kr\"otz]{Bernhard Kr\"{o}tz} \email{[email protected]} \address{Institut f\"ur Mathematik, Universit\"at Paderborn,\\ Warburger Stra\ss e 100, 33098 Paderborn}
\author[Schlichtkrull]{Henrik Schlichtkrull} \email{[email protected]} \address{University of Copenhagen, Department of Mathematics\\Universitetsparken 5, DK-2100 Copenhagen \O}
\maketitle
\section{Introduction} Our concern is with a homogeneous real spherical space $Z=G/H$. We assume that $Z$ is algebraic, i.e. there exists a connected reductive group $\algebraicgroup{G}$, defined over $\mathbb{R}$, and an algebraic subgroup $\algebraicgroup{H}\subset \algebraicgroup{G}$, defined over $\mathbb{R}$ as well, such that $G=\algebraicgroup{G}(\mathbb{R})$ and $H=\algebraicgroup{H}(\mathbb{R})$. Then $Z$ is a $G$-orbit of the variety $\algebraicgroup{ Z}(\mathbb{R})$ where $\algebraicgroup{ Z}=\algebraicgroup{G}/ \algebraicgroup{H}$. We denote by $z_0=eH\in Z\subset \algebraicgroup{ Z}(\mathbb{R})$ the standard base point and recall that $Z$ is called real spherical if there is a minimal parabolic subgroup $P\subset G$ such that $P\cdot z_0$ is open in $Z$.
\par The goal of this paper is to develop the basic Plancherel theory for $L^2(Z)$, i.e. to establish the foundational Bernstein-decomposition of $L^2(Z)$ into different series of representations. Although the main body of the text is written in terms of $Z$, we focus in this introduction on $\algebraicgroup{ Z}(\mathbb{R})$ and the Bernstein decomposition for $L^2(\algebraicgroup{ Z}(\mathbb{R}))$, for which our results are easier to state. On a technical level we obtain the information for $\algebraicgroup{ Z}(\mathbb{R})$ by collecting the data of all $G$-orbits in $\algebraicgroup{ Z}(\mathbb{R})$.
\par Real spherical varieties $\algebraicgroup{ Z}(\mathbb{R})$ have a well understood $G$-equivariant compactification theory, which is constructed out of the combinatorial data of $\algebraicgroup{ Z}$ originating from the local structure theorem. We recall from \cite{KKS} that attached to $\algebraicgroup{ Z}$ there is a torus $\algebraicgroup{ A}_\algebraicgroup{ Z}=\algebraicgroup{ A}/ \algebraicgroup{ A}\cap \algebraicgroup{H}$, homogeneous for a maximal split torus $\algebraicgroup{ A}$ of $\algebraicgroup{G}$ contained in $\algebraicgroup{ P}$. Let $A_Z$ be the identity component of $\algebraicgroup{ A}_\algebraicgroup{ Z}(\mathbb{R})$, and $\mathfrak{a}_Z$ its Lie algebra. Inside $\mathfrak{a}_Z$ one finds a co-simplicial cone $\mathfrak{a}_Z^-$, called the compression cone, which is a fundamental domain for a finite reflection group $W_Z$ \cite{KK}. In particular there is a set $S\subset \mathfrak{a}_Z^*$, of the so-called spherical roots, such that the faces of $\mathfrak{a}_Z^-$ are given by $\mathfrak{a}_I^-:=\mathfrak{a}_Z\cap \mathfrak{a}_I$ with $I\subset S$ and $\mathfrak{a}_I:= I^\perp\subset \mathfrak{a}_Z$. For the simplicity of exposition we assume in this introduction that $S$ is a basis of the character group $\Xi_Z\simeq \mathbb{Z}^n$ of the torus $\algebraicgroup{ A}_\algebraicgroup{ Z}$, the so-called wonderful case.
\par Now there exists a (wonderful) smooth $G$-equivariant compactification $\widehat \algebraicgroup{ Z}(\mathbb{R})$ of $\algebraicgroup{ Z}(\mathbb{R})$ featuring a stratification in $G$-manifolds, $$\widehat \algebraicgroup{ Z}(\mathbb{R})=\coprod_{I\subset S} \widehat \algebraicgroup{ Z}_I(\mathbb{R}),$$ parametrized by subsets $I\subset S$ of spherical roots \cite{KK} and with $\algebraicgroup{ Z}(\mathbb{R})=\widehat \algebraicgroup{ Z}_S(\mathbb{R})$. The strata $\widehat Z_I(\mathbb{R})$ for $I\subset S$ arise as follows. For every element $X$ in the relative interior $\mathfrak{a}_I^{--}$ of the face $\mathfrak{a}_I^-$ of $\mathfrak{a}_Z^-$, the radial limit $$\widehat z_{0,I}:=\lim_{t\to \infty} \exp(tX)\cdot z_0\in \widehat \algebraicgroup{ Z}(\mathbb{R})$$ exists and is independent of $X$. Then $\widehat H_I$, the $G$-stabilizer of $\widehat z_{0,I}$, is real algebraic, i.e. $\widehat H_I = \widehat \algebraicgroup{H}_I(\mathbb{R})$, and $\widehat Z_I(\mathbb{R}) := [\algebraicgroup{G}\cdot \widehat z_{0,I}](\mathbb{R})$ is the set of real points in the boundary orbit $\algebraicgroup{G} \cdot \widehat z_{0,I}$. The group $\widehat H_I$ acts on the normal space to the stratum $\widehat Z_I(\mathbb{R})$ at $\widehat z_{0, I}$. The kernel of this isotropy action defines an algebraic normal subgroup $\algebraicgroup{H}_I\triangleleft \widehat \algebraicgroup{H}_I$ with torus quotient $\algebraicgroup{ A}_I=\widehat \algebraicgroup{H}_I/ \algebraicgroup{H}_I$. The real spherical space $\algebraicgroup{ Z}_I(\mathbb{R}):=(\algebraicgroup{G}/ \algebraicgroup{H}_I)(\mathbb{R})$ is in fact canonically attached to $\algebraicgroup{ Z}(\mathbb{R})$, i.e. it does not depend on the particular compactification. Geometrically $\algebraicgroup{ Z}_I(\mathbb{R})$ is a deformation of $\algebraicgroup{ Z}(\mathbb{R})$ which approximates $\algebraicgroup{ Z}(\mathbb{R})$ asymptotically near the vertex $\widehat z_{0,I}$. We denote by $A_I$ the identity component of $\algebraicgroup{ A}_I(\mathbb{R})$ and note that its Lie algebra is $\mathfrak{a}_I$ defined above.
\par We assume now that $Z$ and hence also $\algebraicgroup{ Z}(\mathbb{R})$ is unimodular, i.e.~it carries a $G$-invariant positive Radon measure. As $\algebraicgroup{ Z}_I(\mathbb{R})$ is a deformation of $\algebraicgroup{ Z}(\mathbb{R})$ for each $I\subset S$, it follows that $\algebraicgroup{ Z}_I(\mathbb{R})$ carries a natural $G$-invariant measure as well. On $\algebraicgroup{ Z}_I(\mathbb{R})$ the group $G\times A_I$ acts from left times right. The left $G$-action defines a unitary representation $L$ of $G$ on $L^2(\algebraicgroup{ Z}_I(\mathbb{R}))$ given by $(L(g)f)(z)= f(g^{-1}\cdot z)$ for $g\in G$, $z\in \algebraicgroup{ Z}_I(\mathbb{R})$ and $f\in L^2(\algebraicgroup{ Z}_I(\mathbb{R}))$. The right action of $A_I$ on $\algebraicgroup{ Z}_I(\mathbb{R})$ defines a normalized unitary representation $\mathcal{R}(a_I) f(z)= a_I^{-\rho} f(z\cdot a_I)$ for $a_I\in A_I$ and $f,z$ as before. The decomposition of $L^2(\algebraicgroup{ Z}_I(\mathbb{R}))$ with respect to $\mathcal{R}$ yields the disintegration in unitary $G$-modules
$$ L^2(\algebraicgroup{ Z}_I(\mathbb{R}))=\int_{\widehat A_I} L^2(\algebraicgroup{ Z}_I(\mathbb{R}), \chi)\ d \chi$$ with $\widehat A_I$ the unitary character group of the non-compact torus $A_I$. The space $L^2(\algebraicgroup{ Z}_I(\mathbb{R}), \chi)$ is the space of square integrable densities with respect to $\chi$ and we denote by $L^2(\algebraicgroup{ Z}_I(\mathbb{R}),\chi)_{\rm d}$ the discrete spectrum of this unitary $G$-module. We define the twisted discrete spectrum of $L^2(\algebraicgroup{ Z}_I(\mathbb{R}))$ by $$L^2(\algebraicgroup{ Z}_I(\mathbb{R}))_{\rm td} := \int_{\widehat A_I} L^2(\algebraicgroup{ Z}_I(\mathbb{R}), \chi)_{\rm d}\ d \chi\, .$$
\par The main result of this work (see Theorem \ref{thm planch refined real points} where $\sB$ of \eqref{B} is denoted by
$B_{\mathbb{R},{\rm res}}$) is the construction of a $G$-equivariant surjective map \begin{equation}\label{B} \sB: \bigoplus_{I\subset S} L^2(\algebraicgroup{ Z}_I(\mathbb{R}))_{\rm td} \to L^2(\algebraicgroup{ Z}(\mathbb{R}))\end{equation}
such that source and image have equivalent Plancherel measures, i.e. belong to the same measure class. Further each $\sB_I:=\sB\big|_{L^2(\algebraicgroup{ Z}_I(\mathbb{R}))_{\rm td}}$ is a sum of partial isometries. The latter property translates into the Maass-Selberg relations, see Theorem \ref{eta-I continuous}, and will be explained in more detail below. The existence of such a map originates from ideas of J.~Bernstein, and accordingly we call $\sB$ the Bernstein morphism. Let us remark that in the main text we derive a more general (but more complicated to state) result, namely a Bernstein decomposition for $L^2(Z)$ (see Theorem \ref{thm planch} and Theorem \ref{thm planch refined}) from which we derive \eqref{B} by collecting the data for the various $G$-orbits in $\algebraicgroup{ Z}(\mathbb{R})$.
\par For absolutely spherical spaces of wavefront type over a p-adic field $k$ a Bernstein map for $L^2(\algebraicgroup{ Z}(k))$ with the same properties as above was constructed by Sakellaridis and Venkatesh in \cite{SV} under the assumption of certain properties of the discrete series, see \cite[Conjecture 9.4.6]{SV}. A novel point of view in \cite{SV}, which we have adopted, is the observation that the decomposition of $L^2(\algebraicgroup{ Z}(k))$ into the various series of representations is reflected in the boundary geometry of a smooth compactification $\widehat \algebraicgroup{ Z}(k)$ of $\algebraicgroup{ Z}(k)$. Another new insight of \cite{SV} is that no explicit knowledge of the discrete series is needed to derive the Bernstein decomposition: the bottom line is the existence of a spectral gap for the discrete series. Since a spectral gap theorem is established in full generality for real spherical spaces in \cite{KKOS}, we do not have to make any assumptions on the discrete spectrum as in \cite{SV}.
\par With the implementation of the Bernstein decomposition the Plancherel theorem for $L^2(\algebraicgroup{ Z}(\mathbb{R}))$ essentially reduces to the understanding of the twisted discrete spectrum for each $\algebraicgroup{ Z}_I(\mathbb{R})$, and the determination of $\ker \sB$. Since the Bernstein map is isospectral and surjective, it follows that the measure class of the Plancherel measure of $L^2(\algebraicgroup{ Z}(\mathbb{R}))$ is given by countably many copies of the Haar measures on the tori $A_I$.
\par Let us consider the example $Z=\algebraicgroup{ Z}(\mathbb{R})=G \times G /\operatorname{diag} G \simeq G$ of a real semisimple algebraic group $G$. Here the spherical roots $S$ are identified with the simple roots with respect to $\mathfrak{a}$, the Lie algebra of a maximal split torus $A$ of $G$. Recall that subsets $I\subset S$ parametrize the parabolic subgroups $P_I =L_I U_I$ of $G$. Then we have $H_I = \operatorname{diag}(L_I) (U_I \times \overline{ U_I})$ with $\overline{P_I} = L_I \overline{U_I}$ the parabolic opposite to $P_I$, and in particular $$\algebraicgroup{ Z}_I(\mathbb{R})= [G/ U_I \times G/\overline{U_I}]/ \operatorname{diag}(L_I)\, .$$ Write $L_I = M_I A_I$ as usual. Now, via induction by stages, we readily obtain
\begin{equation} \label{intro1}L^2(\algebraicgroup{ Z}_I(\mathbb{R}))_{\rm td} \underset{G\times G}{\simeq} \sum_{\sigma \in \widehat M_{I,{\rm disc}}} \int_{i\mathfrak{a}_I^*} \pi_{\sigma, \lambda}\otimes \pi_{\sigma, \lambda}^* \ d\lambda\,,\end{equation} where $\pi_{\sigma, \lambda}=\operatorname{Ind}_{P_I}^G (\lambda \otimes \sigma)$ is the unitarily induced representation of $G$ with respect to the unitary character of $A_I$ defined by $\lambda$, and $\sigma$ is a discrete series representation of $M_I$. Via basic intertwining theory we then group the occurring representations in \eqref{intro1} into equivalence classes and obtain Harish-Chandra's Plancherel formula up to the classification of the discrete spectrum of the inducing datum (see Section \ref{group case}). The same applies to the Plancherel theorem for symmetric spaces as obtained by Delorme \cite{Delorme} and van den Ban-Schlichtkrull \cite{vdBS}; we refer to Section \ref{section DBS} for the complete account.
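\par For orientation let us make the group case explicit in the simplest instance $G=\operatorname{SL}(2,\mathbb{R})$ (a sketch only; we do not keep track of the precise Plancherel densities, which require the normalizations of Section \ref{group case}). Here $S$ consists of one simple root, so the only subsets are $I=S$ and $I=\emptyset$. The stratum $I=S$ contributes the discrete series of $G$, while $I=\emptyset$ contributes via \eqref{intro1} the unitary principal series $\pi_{\sigma,\lambda}$ attached to the two characters $\sigma\in \widehat M_\emptyset=\{{\bf 1},\operatorname{sgn}\}$ of $M_\emptyset=\{\pm {\bf 1}\}$. The Bernstein decomposition then recovers, up to normalization of measures, the classical Plancherel formula for $\operatorname{SL}(2,\mathbb{R})$,
$$ L^2(\operatorname{SL}(2,\mathbb{R}))\ \underset{G\times G}{\simeq}\ \bigoplus_{\pi\ {\rm discrete\ series}} \pi\otimes \pi^*\ \oplus\ \bigoplus_{\sigma\in\{{\bf 1},\operatorname{sgn}\}} \int_{i\mathfrak{a}_\emptyset^*} \pi_{\sigma,\lambda}\otimes\pi_{\sigma,\lambda}^*\ d\lambda\,;$$
the complementary series does not occur, and the equivalences $\pi_{\sigma,\lambda}\simeq \pi_{\sigma,-\lambda}$ illustrate the grouping into equivalence classes mentioned above.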
\par As in the work of Harish-Chandra on the Plancherel theorem for a real reductive group, a constant term approximation \cite{HC1} lies at the heart of the proof. Let us explain this. A Harish-Chandra module $V$ endowed with a linear functional $\eta$, such that $\eta$ extends to a continuous $H$-invariant functional on the unique smooth moderate growth completion $V^\infty$, will be called a spherical pair and denoted $(V,\eta)$. The continuous dual of $V^\infty$ is denoted $V^{-\infty}$, and from \cite{KKS2} originates a natural linear map \begin{equation}\label{eta corresp} (V^{-\infty})^H \to (V^{-\infty})^{H_I}, \ \eta\mapsto \eta^I\, .\end{equation} Attached to $\eta$ are the generalized matrix coefficients $m_{v,\eta}(gH)=\eta(g^{-1}v)$ which define smooth functions on $Z$ for all $v\in V^\infty$. Likewise we obtain smooth functions $m_{v,\eta^I}$ on $Z_I:=G/H_I\subset \algebraicgroup{ Z}_I(\mathbb{R})$. An appropriate notion of temperedness for functions on a real spherical space was defined in \cite{KKSS2}, and accordingly $\eta$ is called tempered if all associated matrix coefficients are tempered functions. The map \eqref{eta corresp} then gives rise to a linear map of tempered functionals $$(V^{-\infty})_{\rm temp}^H \to (V^{-\infty})_{\rm temp}^{H_I}\, .$$ The constant term approximation \cite{DKS} measures the differences
$$ |m_{v,\eta}(g\exp(tX) H) - m_{v,\eta^I}(g\exp(tX)H_I)|$$ for $g\in \Omega$, a compact subset of $G$, and $t\to \infty$ for $X\in \mathfrak{a}_I^{--}$. We refer to Theorem \ref{loc ct temp} below for the detailed statement.
In the case of the group, Harish-Chandra obtained such an approximation in \cite{HC1} for a fixed representation. Using his strong results on the discrete series \cite{HC} it was made uniform for all tempered representations in \cite{HC2}. For real spherical spaces the uniformity of the constant term approximation is obtained in \cite{DKS} via the spectral gap theorem of \cite{KKOS} for the twisted discrete spectrum. \par
Let us mention that our constant term approximation is also uniform in the category of smooth vectors so that there is no need for expansion of functions in terms of $K$-types. On a geometric level this allows us to view $Z_I$ and $Z$ in terms of the orbit geometry of the minimal parabolic subgroup $P$. In more detail, we show that there is a natural injective map of open $P$-orbits $(P\backslash Z_I)_{\rm open} \to (P\backslash Z)_{\rm open}$. This in turn allows us to identify $Z_I$ inside $Z$, up to measure zero via the open $P$-orbits. We refer to Section \ref{main remainder} for the analytic implementation of this $P$-equivariant point of view. Let us point out that the auxiliary "exponential maps" of \cite{SV}, which allowed an identification of $\algebraicgroup{ Z}(k)$ and $\widehat \algebraicgroup{ Z}_I(k)$ near the vertex $\widehat z_{0,I}$, are no longer needed in our context of $P$-equivariant matching of $Z_I$ with $Z$ up to measure zero.
\par For almost all irreducible Harish-Chandra modules in the spectrum of $L^2(Z_I)$ the multiplicity space $(V^{-\infty})_{\rm temp}^{H_I}$ is a finite dimensional semisimple module for $\mathfrak{a}_I$ and accordingly every $\eta^I\in (V^{-\infty})_{\rm temp}^{H_I}$ decomposes into eigenvectors $$\eta^I=\sum_{\lambda \in \rho +i\mathfrak{a}_I^*}\eta^{I,\lambda}\, .$$ Our Maass-Selberg relations are then expressed in the form that $\eta\mapsto \eta^{I,\lambda}$ is a partial isometry, see Theorem \ref{eta-I continuous}. Notice that the $\eta^{I,\lambda}$ reflect the asymptotics of the matrix coefficients $m_{v,\eta}$ through the constant term approximation. Finally we define the Bernstein morphisms spectrally via the technique of tempered embedding developed in \cite[Sect. 9]{KKS2}.
\par As a corollary of the Bernstein decomposition we obtain a partial geometric characterization of the existence of
the discrete spectrum:
\begin{equation}\label{DS} \operatorname{int} \mathfrak{h}_{\rm ell}^\perp \neq \emptyset\quad \Rightarrow\quad L^2(Z)_{\rm d}\neq \emptyset\,, \end{equation} see Theorem \ref{thm discrete}. This formulation reflects the known geometric characterization for groups and symmetric spaces, going back to Harish-Chandra \cite{HC} and Flensted-Jensen \cite{FJ}. Actually we expect that the converse implication in \eqref{DS} holds as well, and we provide a geometric analogue of the expected equivalence via moment map geometry in Theorem \ref{thm moment discrete}.
\par{\it Acknowledgement:} We are grateful to Joseph Bernstein who
provided us with many useful remarks on a preliminary version of this article.
\section{Notions and Generalities}
Throughout this paper we use upper case Latin letters $A,B, C\ldots$ to denote Lie groups and write $\mathfrak{a}, {\mathfrak b}, \mathfrak{c},\ldots$ for their corresponding Lie algebras. If $G$ is a Lie group, then we denote by $G_0$ its identity component.
\par If $M$ is a set and $\sim$ is an equivalence relation on $M$, then we denote by $[m]$ the equivalence class of $m\in M$. Often the equivalence classes are the orbits of a group $G$ acting on $M$. More specifically if $X, Y$ are sets and $G$ is a group which acts on $X$ from the right and acts on $Y$ from the left, then we obtain a left $G$-action on $X\times Y$ by $g\cdot(x,y):= (x\cdot g^{-1}, g\cdot y)$ whose set of orbits we denote by $X\times_G Y$. We often abbreviate and simply write $[x,y]$ instead of $[(x,y)]$ to denote the equivalence class of $(x,y)$.
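\par For example, if $H<G$ is a closed subgroup, acting on $G$ by right translations and linearly on a vector space $W$, then this construction (with $X=G$, $Y=W$ and $G$ replaced by $H$) yields the total space $G\times_H W$ of the associated vector bundle over $G/H$, $[g,w]\mapsto gH$; this is the guise in which it will reappear in \eqref{normal identifier} below.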
\par Given a group $G$ and subgroup $H\subset G$ we use for $g\in G$ the notation $H_g:=gHg^{-1}$, i.e. $H_g$ is the $G$-stabilizer of the point $gH\in G/H$.
\par For a Lie algebra $\mathfrak{g}$ we write $\mathcal{U}(\mathfrak{g})$ for the universal enveloping algebra of $\mathfrak{g}_\mathbb{C}$. Further we denote by $\mathcal{Z}(\mathfrak{g})$ the center of $\mathcal{U}(\mathfrak{g})$.
\par If $\algebraicgroup{ Z}$ is an algebraic variety defined over $\mathbb{R}$ and $k\supset\mathbb{R}$ is a field, then we denote by $\algebraicgroup{ Z}(k)$ the set of $k$-points. Since we only consider fields $k=\mathbb{R},\mathbb{C}$ in this paper we abbreviate in the sequel and simply set $\algebraicgroup{ Z}:=\algebraicgroup{ Z}(\mathbb{C})$.
\par Let now $\algebraicgroup{G}$ be a connected reductive algebraic group defined over $\mathbb{R}$ and let $G:=\algebraicgroup{G}(\mathbb{R})$. As a general rule we use the following notation: if $\algebraicgroup{ R}$ is an algebraic subgroup of $\algebraicgroup{G}$ and defined over $\mathbb{R}$, then we set $R:= \algebraicgroup{ R}(\mathbb{R})$ and note that $R$ is a closed Lie subgroup of $G$. We regard $G\subset \algebraicgroup{G}$ and then $R=G\cap\algebraicgroup{ R}$. We let $\algebraicgroup{H}<\algebraicgroup{G}$ be an algebraic subgroup defined over $\mathbb{R}$, and define $H<G$ according to this rule. For intersections with $\algebraicgroup{H}$ we adopt the notation $\algebraicgroup{ R}_{\algebraicgroup{H}}:= \algebraicgroup{ R}\cap \algebraicgroup{H}$ and likewise $R_H:= R \cap H=\algebraicgroup{ R}_{\algebraicgroup{H}}(\mathbb{R})$.
Set $\algebraicgroup{ Z}:= \algebraicgroup{G}/ \algebraicgroup{H}$ and observe that $\algebraicgroup{ Z}$ is a smooth $\algebraicgroup{G}$-variety defined over $\mathbb{R}$. Set $Z:=G/H$ and observe that $Z$ is a $G$-orbit of $\algebraicgroup{ Z}(\mathbb{R})$. In general $\algebraicgroup{ Z}(\mathbb{R})$ is a finite union of $G$-orbits, but typically not equal to $Z$. For example if $\algebraicgroup{G}=\operatorname{SL}(n,\mathbb{C})$ and $\algebraicgroup{H}=\operatorname{SO}(n,\mathbb{C})$ then $\algebraicgroup{ Z}(\mathbb{R})\simeq \bigcup_{2k \leq n} \operatorname{SL}(n,\mathbb{R})/ \operatorname{SO}(n-2k, 2k)$ identifies with the real symmetric matrices with unit determinant, whereas $Z$ comprises the set of positive definite symmetric matrices therein. In particular, in this case $Z=G/H\subsetneq \algebraicgroup{ Z}(\mathbb{R})$. This shows, when taking real points of the principal bundle
\begin{equation} \label{Z-exact} {\bf1} \to \algebraicgroup{H} \to \algebraicgroup{G} \twoheadrightarrow \algebraicgroup{ Z}\end{equation} we have to proceed with care, as the functor of taking real points in (\ref{Z-exact}) is only left exact \begin{equation} \label{ZR-exact} {\bf1} \to H \to G \to\algebraicgroup{ Z}(\mathbb{R})\end{equation} and extends to a long exact sequence of pointed sets \cite[I.5.4, Prop. 36]{Serre} in Galois cohomology
\begin{equation} \label{ZR-exact long}
{\bf1} \to H \to G \to\algebraicgroup{ Z}(\mathbb{R})\to H^1 (\operatorname{Gal}(\mathbb{C}|\mathbb{R}), \algebraicgroup{H}) \to H^1(\operatorname{Gal}(\mathbb{C}|\mathbb{R}), \algebraicgroup{G})\, . \end{equation}
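\par As an elementary illustration of \eqref{ZR-exact long} (not needed in the sequel), take $\algebraicgroup{G}=\operatorname{GL}_1$ and $\algebraicgroup{H}=\mu_2=\{\pm 1\}$, so that $\algebraicgroup{ Z}\simeq \operatorname{GL}_1$ with quotient map the squaring map. Then $G=\mathbb{R}^\times$, $\algebraicgroup{ Z}(\mathbb{R})=\mathbb{R}^\times$ and the image of $G\to \algebraicgroup{ Z}(\mathbb{R})$ is $Z\simeq \mathbb{R}_{>0}$. The two $G$-orbits $\mathbb{R}_{>0}$ and $\mathbb{R}_{<0}$ in $\algebraicgroup{ Z}(\mathbb{R})$ are separated by their classes in $H^1(\operatorname{Gal}(\mathbb{C}|\mathbb{R}),\mu_2)\simeq \mathbb{Z}/2\mathbb{Z}$, while $H^1(\operatorname{Gal}(\mathbb{C}|\mathbb{R}),\operatorname{GL}_1)$ is trivial by Hilbert 90.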
In this context we recall from \cite[Prop. 13.1]{KK} that:
\begin{lemma} \label{Example exact} If $\algebraicgroup{G}$ is anisotropic over $\mathbb{R}$, i.e. $\algebraicgroup{G}(\mathbb{R})$ is compact, then \eqref{ZR-exact} is right exact. \end{lemma}
We denote by $z_0=\algebraicgroup{H}$ the standard base point of $\algebraicgroup{ Z}$ and observe the $G$-equivariant embedding
$$ Z \to \algebraicgroup{ Z}= \algebraicgroup{G}/ \algebraicgroup{H}, \ \ gH \mapsto g\algebraicgroup{H}= g\cdot z_0\,.$$
\par If $\algebraicgroup{ R}$ is a unipotent group, then note that $R$ is connected for the Euclidean topology. This is because unipotent groups $\algebraicgroup{ R}$ are isomorphic (as varieties) to their Lie algebras $\mathfrak{r}_\mathbb{C}$ via the algebraic exponential map, which is defined over $\mathbb{R}$; hence $R\simeq \mathfrak{r}$ is connected.
\subsection{Real spherical spaces and the local structure theorem}\label{subsection LST}
\par Let $\algebraicgroup{ P}<\algebraicgroup{G}$ be a parabolic subgroup of $\algebraicgroup{G}$ which is minimal with respect to being defined over $\mathbb{R}$. We denote by $\underline{N}$ the unipotent radical of $\algebraicgroup{ P}$.
\par We assume that $Z$ is {\it real spherical}, that is, the action of $P$ on $Z$ admits an open orbit. After replacing $P$ by a conjugate we will assume that $P\cdot z_0$ is open in $Z$. The local structure theorem (see \cite[Th. 2.3]{KKS} and \cite[Cor. 4.11]{KK}) asserts the existence of a parabolic subgroup $\algebraicgroup{ Q}\supset \algebraicgroup{ P} $ with Levi-decomposition $\algebraicgroup{ Q}=\algebraicgroup{ L} \ltimes \algebraicgroup{ U}$ defined over $\mathbb{R}$ such that one has
\begin{eqnarray} \label{lst1}\algebraicgroup{ P}\cdot z_0&=& \algebraicgroup{ Q} \cdot z_0\\ \label{lst2}\algebraicgroup{ Q}_\algebraicgroup{H} &=& \algebraicgroup{ L}_\algebraicgroup{H} \\ \label{lst3}\algebraicgroup{ L}_{\rm n} &\subset& \algebraicgroup{ L}_{\algebraicgroup{H}}\end{eqnarray} where $\algebraicgroup{ L}_{\rm n}$ is the unique connected normal $\mathbb{R}$-subgroup of $\algebraicgroup{ L}$ such that the Lie algebra $\mathfrak{l}_{\rm n}$ is the sum of all non-compact, non-abelian simple ideals of $\mathfrak{l}$.
\begin{rmk} \label{rmk choice of L} In addition to \eqref{lst1} - \eqref{lst3} we request from our choice of $\algebraicgroup{ L}$ that it is obtained via the constructive proof of the local structure theorem. In case that $\algebraicgroup{ Z}=\algebraicgroup{G}/\algebraicgroup{H}$ is quasi-affine, this means that there exists $\xi\in \mathfrak{h}^\perp \subset \mathfrak{g}$ such that $$\algebraicgroup{ L}= Z_{\algebraicgroup{G}}(\xi)= \{g \in \algebraicgroup{G}\mid \operatorname{Ad}^*(g) \xi= \xi\}\, .$$ In case $\algebraicgroup{ Z}$ is not quasi-affine one uses a quasi-affine cover (cone construction) to reduce to the quasi-affine case: extend $\algebraicgroup{G}$ to $\algebraicgroup{G}_1=\algebraicgroup{G}\times \mathbb{C}^\times$ and let $\psi: \algebraicgroup{H} \to \mathbb{C}^\times$ be a character defined over $\mathbb{R}$ which is obtained from a Chevalley embedding of $\algebraicgroup{ Z}$ into projective space which is defined over $\mathbb{R}$. With $\algebraicgroup{H}_1=\{ (h, \psi(h))\mid h\in \algebraicgroup{H}\}$ we obtain a real spherical subgroup $\algebraicgroup{H}_1\subset \algebraicgroup{G}_1$ such that $\algebraicgroup{ Z}_1=\algebraicgroup{G}_1/\algebraicgroup{H}_1$ is quasi-affine. The local structure theorem for $\algebraicgroup{ Z}_1$ then descends to a local structure theorem for $\algebraicgroup{ Z}$. \par With this choice of $\algebraicgroup{ L}$ it is then guaranteed that the slice $\algebraicgroup{ L}/\algebraicgroup{ L}_{\algebraicgroup{H}}$ can be extended to suitable compactifications of $\algebraicgroup{ Z}$ which will be used later in this text.\end{rmk}
In particular, we obtain from (\ref{lst1}) - (\ref{lst2}) via the obvious multiplication map
\begin{equation} \label{LST1} \algebraicgroup{ P}\cdot z_0 \simeq \algebraicgroup{ U} \times \algebraicgroup{ L} / \algebraicgroup{ L}_{\algebraicgroup H}\end{equation} an isomorphism of algebraic varieties defined over $\mathbb{R}$. If we take real points in (\ref{LST1}) we get
\begin{equation} \label{LST1R} [\algebraicgroup{ P}\cdot z_0](\mathbb{R}) \simeq U \times (\algebraicgroup{ L} / \algebraicgroup{ L}_{\algebraicgroup H})(\mathbb{R}).\end{equation} In the next step we wish to describe $(\algebraicgroup{ L}/\algebraicgroup{ L}_{\algebraicgroup H})(\mathbb{R})$ in more detail. For that let $\algebraicgroup{ A}\subset \algebraicgroup{ L}\cap \algebraicgroup{ P} $ be a maximal split torus and set $\algebraicgroup{ A}_{\algebraicgroup Z}:= \algebraicgroup{ A}/ \algebraicgroup{ A}_{\algebraicgroup{H}}$. We also view the torus $\algebraicgroup{ A}_{\algebraicgroup Z}$ as a subvariety of $\algebraicgroup{ Z}$. Further we define $A_Z$ to be the identity component of $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$.
The number $r:=\operatorname{rank}_\mathbb{R} Z:=\dim A_Z$ is an invariant of $Z$ and referred to as the {\it real rank of $Z$}.
\par Let $K$ be a maximal compact subgroup of $G$. Note that $K$ is algebraic, i.e. $K=\algebraicgroup{K}(\mathbb{R})$. Further we denote by $\mathfrak{z}(\mathfrak{g})$ the center of $\mathfrak{g}$, and we fix a non-degenerate $\operatorname{Ad}(G)$-invariant bilinear form $\kappa: \mathfrak{g}\times \mathfrak{g}\to\mathbb{R}$ which yields an orthogonal decomposition of the center $\mathfrak{z}(\mathfrak{g}) = (\mathfrak{z}(\mathfrak{g})\cap \mathfrak{a})\oplus (\mathfrak{z}(\mathfrak{g})\cap \mathfrak{k})$. In case $\mathfrak{g}$ is semi-simple, the Cartan-Killing form can be used for $\kappa$. It is a standing further requirement for $K$ that $\mathfrak{k} \perp \mathfrak{a}$. Then $\algebraicgroup{ M}:=Z_{\algebraicgroup{K}}(\algebraicgroup{ A})$, the centralizer of $\algebraicgroup{ A}$ in $\algebraicgroup{K}$, does not depend on the particular choice of $K$ with $\mathfrak{k} \perp \mathfrak{a}$.
\par Notice that $Z_{\algebraicgroup{G}}(\algebraicgroup{ A})$ is a Levi-subgroup of $\algebraicgroup{ P}$ and as such connected. Moreover we have $Z_{\algebraicgroup{G}}(\algebraicgroup{ A})= \algebraicgroup{ M} \algebraicgroup{ A}$. Notice that (\ref{lst3}) implies that $\algebraicgroup{ M} \algebraicgroup{ A}$ acts transitively on $\algebraicgroup{ L}/\algebraicgroup{ L}_{\algebraicgroup H}$.
\par In the next two paragraphs we recall some elementary facts from \cite[Sect. 1 and App. B]{DKS}. Define $$\widehat \algebraicgroup{ M}_{\algebraicgroup H}=\{ m\in \algebraicgroup{ M}\mid m\cdot z_0 \in \algebraicgroup{ A}_{\algebraicgroup Z}\}$$ and note that $\widehat \algebraicgroup{ M}_{\algebraicgroup H}$ is the isotropy group for the action of $\algebraicgroup{ M}$ on $ \algebraicgroup{ L}/ \algebraicgroup{ L}_{\algebraicgroup H} \algebraicgroup{ A}$. In particular, $\widehat\algebraicgroup{ M}_{\algebraicgroup H}$ is an algebraic subgroup of $\algebraicgroup{G}$ defined over $\mathbb{R}$. Moreover, $\widehat\algebraicgroup{ M}_{\algebraicgroup H}$ contains $\algebraicgroup{ M}_{\algebraicgroup H}$ as a normal subgroup such that $F_M:= \widehat \algebraicgroup{ M}_{\algebraicgroup H}/ \algebraicgroup{ M}_{\algebraicgroup H}\simeq \widehat M_H/ M_H$ is a finite $2$-group. Here $\widehat M_H=\widehat\algebraicgroup{ M}_\algebraicgroup{H}(\mathbb{R})\subset M$ by our notational conventions.
\par Now $\algebraicgroup{ L}/\algebraicgroup{ L}_{\algebraicgroup H}$ is homogeneous for $\algebraicgroup{ M}\algebraicgroup{ A}$ and thus
\begin{equation} \label{deco L} \algebraicgroup{ L}/ \algebraicgroup{ L}_{\algebraicgroup H} \simeq \algebraicgroup{ M} \times_{\widehat \algebraicgroup{ M}_{\algebraicgroup H}} \algebraicgroup{ A}_{\algebraicgroup Z} = \algebraicgroup{ M}/\algebraicgroup{ M}_{\algebraicgroup H} \times_{F_M} \algebraicgroup{ A}_{\algebraicgroup Z}\, .\end{equation} In particular, by \cite[Prop. B.2]{DKS} \begin{equation} \label{LST2} (\algebraicgroup{ L}/ \algebraicgroup{ L}_{\algebraicgroup H})(\mathbb{R}) \simeq M \times_{\widehat M_H} \algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R}) = M/M_H \times_{F_M} \algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R}) \end{equation} where $\simeq$ refers to an isomorphism of real algebraic varieties.
{}From (\ref{LST1}) and (\ref{LST2}) we obtain the following form of the local structure theorem, which we will use later on: \begin{equation} \label{LST3} [\algebraicgroup{ P} \cdot z_0](\mathbb{R})\simeq U \times \left[M/M_H \times_{F_M} \algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})\right]\, . \end{equation}
\par Recall that $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R}) \simeq (\mathbb{R}^\times)^r $, with $r=\operatorname{rank}_\mathbb{R}(Z)$, is a split torus viewed as a subvariety of $\algebraicgroup{ Z}(\mathbb{R})$. Set $$A_{Z,\mathbb{R}} := \algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R}) \cap Z\, .$$ Then it is clear that $A_Z\subset A_{Z,\mathbb{R}}\subset \algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$. In general however, $A_{Z,\mathbb{R}}$ is not a group, but carries only the structure of an $A_Z$-set (see Example \ref{ex SL3} below for $Z=\operatorname{SL}(3,\mathbb{R})/\operatorname{SO}(2,1)$). \par Let $F_\mathbb{R}=\{-1,1\}^r<\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})= (\mathbb{R}^\times)^r$ be the $2$-torsion subgroup of $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$. Since $A_Z$ is defined to be the identity component of $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$ we obtain the following isomorphism of groups
\begin{equation}\label{AF} \algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R}) = A_Z F_\mathbb{R} \simeq A_Z \times F_\mathbb{R}\, . \end{equation} Let $F\subset F_\mathbb{R}$ be the subset such that $A_{Z,\mathbb{R}}= A_Z F$, i.e. $F=F_\mathbb{R}\cap A_{Z,\mathbb{R}}$. Set $T_Z:=\exp_{\algebraicgroup{ A}}(i\mathfrak{a}_H^\perp)<\algebraicgroup{ A}$ and note that $F_\mathbb{R}\subset T_Z\cdot z_0$ as $T_Z\cdot z_0$ contains all torsion elements of $\algebraicgroup{ A}_{\algebraicgroup Z}$.
\par Since $F_M$ maps faithfully into $F_\mathbb{R}$ we view it in the sequel as a subgroup of $F_\mathbb{R}$. Note that $F_M\subset F$ and that $F_M$ acts on $F$.
With this terminology we obtain from (\ref{LST2}) that
\begin{equation} \label{LST4} Z \cap (\algebraicgroup{ L}/ \algebraicgroup{ L}_{\algebraicgroup H})(\mathbb{R}) \simeq M/M_H \times_{F_M} A_{Z,\mathbb{R}}\, ,\end{equation} and accordingly from \eqref{LST3}
\begin{equation} \label{LST5} Z \cap [\algebraicgroup{ P}\cdot z_0](\mathbb{R}) \simeq U \times \left[ M/M_H \times_{F_M} A_{Z,\mathbb{R}}\right]\, .\end{equation}
The set of open $P$-orbits in $Z$, resp.~$\algebraicgroup{ Z}(\mathbb{R})$, is an important geometric invariant and plays a dominant role in the harmonic analysis on $Z$, resp.~$\algebraicgroup{ Z}(\mathbb{R})$. For a symmetric space it is known from \cite{Mat1} that the open $P$-orbits are parametrized by the quotient of a Weyl group by a subgroup. Although no such parametrization is known in general we denote $$W_\mathbb{R}:= (P\backslash \algebraicgroup{ Z}(\mathbb{R}))_{\rm open} \quad\text{and} \quad W:=(P\backslash Z)_{\rm open}\, ,$$ motivated by the special case.
From \eqref{LST3} and (\ref{LST5}) we deduce:
\begin{lemma} \label{lemmaW1} The maps $$ F_M\backslash F_\mathbb{R} \to W_\mathbb{R}, \ \ \mathsf{t}=F_Mt \mapsto Pt$$ and $$ F_M\backslash F \to W, \ \ \mathsf{t}=F_Mt \mapsto Pt $$ are bijections. \end{lemma}
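\par Let us illustrate this in the example $\algebraicgroup{G}=\operatorname{SL}(2,\mathbb{C})$, $\algebraicgroup{H}=\operatorname{SO}(2,\mathbb{C})$, i.e.~the case $n=2$ of the symmetric matrices discussed above. In that realization $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})\simeq \mathbb{R}^\times$ consists of the matrices $\operatorname{diag}(s,s^{-1})$ with $s\in\mathbb{R}^\times$, whereas $A_{Z,\mathbb{R}}=A_Z\simeq \mathbb{R}_{>0}$ since only the positive definite ones lie in $Z$. Thus $F_\mathbb{R}=\{\pm1\}$, $F=\{1\}$ and $F_M$ is trivial (here $M=M_H=\{\pm {\bf 1}\}$), so by Lemma \ref{lemmaW1} there is a unique open $P$-orbit in $Z$, while $\algebraicgroup{ Z}(\mathbb{R})$ carries two, one inside the positive definite and one inside the negative definite $G$-orbit.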
It is often convenient to select representatives of $W$ in $G$. For any $\mathsf{t} \in F_M\backslash F$ we pick a representative $t\in F$ such that $\mathsf{t}=F_M t$. Then $Pt\in W$ and $t\in Z=G\cdot z_0$ implies that there is a lift $w=w(\mathsf{t})\in G$ of $t$ to $G$ such that $t =w\cdot z_0$. If $\mathcal{W}=\{w(\mathsf{t})\mid \mathsf{t} \in F_M\backslash F\}$, then the assignment $$\mathcal{W} \to W, \ \ w \mapsto Pw\cdot z_0$$ is a bijection. \par Let $w=w(\mathsf{t})\in\mathcal{W}$ and let $\tilde t \in T_Z$ be a lift of $t$, i.e. $\tilde t\cdot z_0 = t$. Then
\begin{equation}\label{th-deco} w= \tilde t h \end{equation} for some $h\in \algebraicgroup{H}$.
\subsection{Spherical roots and the compression cone}
Let $\Sigma=\Sigma(\mathfrak{g},\mathfrak{a})$ be the restricted root system for the pair $(\mathfrak{g},\mathfrak{a})$ and let
$$ \mathfrak{g} = \mathfrak{a} \oplus \mathfrak{m} \oplus \bigoplus_{\alpha\in \Sigma} \mathfrak{g}^\alpha$$ be the attached root space decomposition. Write $(\mathfrak{l} \cap \mathfrak{h})^{\perp_\mathfrak{l}} \subset \mathfrak{l}$ for the orthogonal complement of $\mathfrak{l} \cap \mathfrak{h}$ in $\mathfrak{l}$ with respect to $\kappa$. From $\mathfrak{g}=\mathfrak{q} +\mathfrak{h}=\mathfrak{u} \oplus (\mathfrak{l}\cap\mathfrak{h})^{\perp_\mathfrak{l}}\oplus \mathfrak{h}$ and $\mathfrak{g}=\mathfrak{q}\oplus \overline{\mathfrak{u}}$ we infer the existence of a linear map $ T:\overline{\mathfrak{u}}\to \mathfrak{u} \oplus (\mathfrak{l}\cap\mathfrak{h})^{\perp_\mathfrak{l}}$ such that $\mathfrak{h}=(\mathfrak{l}\cap \mathfrak{h}) \oplus \mathcal{G}(T)$ with $\mathcal{G}(T) \subset \overline{\mathfrak{u}} \oplus \mathfrak{u} \oplus (\mathfrak{l}\cap\mathfrak{h})^{\perp_\mathfrak{l}}$ the graph of $T$.
\par Set $\Sigma_\mathfrak{u}:=\Sigma(\mathfrak{u},\mathfrak{a})\subset \Sigma$. For $\alpha \in \Sigma_\mathfrak{u}$ and $X_{-\alpha} \in \mathfrak{g}^{-\alpha}$ let $T(X_{-\alpha})= \sum_{\beta\in \Sigma_\mathfrak{u}\cup\{0\}} X_{\alpha,\beta}$ with $X_{\alpha,\beta} \in \mathfrak{g}^\beta$ for $\beta\in \Sigma_\mathfrak{u}$ and $X_{\alpha, 0}\in (\mathfrak{l}\cap\mathfrak{h})^\perp$. Let $\mathcal{M}\subset \mathfrak{a}^*\backslash\{0\}$ be the additive semi-group generated by
$$\{ \alpha+\beta\mid \alpha\in \Sigma_\mathfrak{u}, \exists X_{-\alpha} : \ X_{\alpha,\beta}\neq 0\}\, .$$
Note that all elements of $\mathcal{M}$ vanish on $\mathfrak{a}_H$ so that we can view $\mathcal{M}$ as a subset of $\mathfrak{a}_Z^*$. A bit more precisely the elements of $\mathcal{M}$, seen as characters of $\algebraicgroup{ A}$, are trivial when restricted to $\algebraicgroup{ A}_\algebraicgroup{H}$ and therefore factor to characters of $\algebraicgroup{ A}_{\algebraicgroup Z}$. Thus if we denote by $\Xi_Z:= \operatorname{Hom} (\algebraicgroup{ A}_{\algebraicgroup Z}, \mathbb{C}^\times)\simeq \mathbb{Z}^r$ the character group, seen as a lattice in $\mathfrak{a}_Z^*$, we have $\mathcal{M}\subset \Xi_Z$.
\par Define $$\mathfrak{a}_{Z,E}:= \{X\in \mathfrak{a}_Z \mid (\forall \alpha \in \mathcal{M})\ \alpha(X)=0\}$$ and note that $\mathcal{M}$ belongs to $\mathfrak{a}_{Z,E}^\perp\subset \mathfrak{a}_Z^*$. Next, according to \cite[Cor. 9.7]{KK}, the convex cone $\mathbb{R}_{\geq 0} \mathcal{M}$ is simplicial in $\mathfrak{a}_{Z,E}^{\perp}$. Generators of this cone, suitably normalized, will be called spherical roots and denoted $S$.
\par The standard normalization of $S$ is that a generator $\sigma$ of $\mathbb{R}_{\geq 0} \mathcal{M}$ belongs to $S$ provided it is integral and indivisible, i.e. $\sigma \in \Xi_Z$ and $\frac1n \sigma \not \in \Xi_Z$ for all $n\geq 2$. \par Next we define the {\it compression cone} by
$$\mathfrak{a}_Z^-:=\{ X\in\mathfrak{a}_Z\mid (\forall \alpha\in S) \ \alpha(X)\leq 0\}.$$ \begin{rmk} The set of spherical roots $S$ and the associated co-simplicial compression cone $\mathfrak{a}_Z^-$ make up an algebro-geometric invariant of the real spherical space $Z$, see \cite{KK}. This is important for this article, as the Bernstein morphisms defined later have an inherent parametrization by subsets $I\subset S$, i.e. faces of $\mathfrak{a}_Z^-$.
Let us also mention that there is an alternative elementary approach to the compression cone as a fundamental domain of a finite Coxeter group, see \cite{KuitSayag}. \end{rmk} Let us define by $\mathfrak{a}_{Z,E}=\mathfrak{a}_Z^- \cap (-\mathfrak{a}_Z^-)$ the edge of $\mathfrak{a}_Z^-$ and record $$\# S = \dim \mathfrak{a}_Z/\mathfrak{a}_{Z,E}\, .$$
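\par For orientation: in the group case $Z=G\times G/\operatorname{diag} G$ from the introduction one has $\mathfrak{a}_Z\simeq \mathfrak{a}_0$, the Lie algebra of a maximal split torus of $G$, the spherical roots identify with the simple restricted roots, and the compression cone $\mathfrak{a}_Z^-$ identifies with the closure of a Weyl chamber (the antidominant one for suitable identifications); its faces correspond to the subsets $I\subset S$, i.e.~to the parabolic subgroups $P_I$ recalled in the introduction. For $G$ semisimple the edge is $\mathfrak{a}_{Z,E}=\{0\}$ and $\# S=r$.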
\par Following \cite[Sect. 3]{KKS2} we define for $I\subset S$ the boundary degeneration $\mathfrak{h}_I$ of $\mathfrak{h}$ by \begin{equation}\label{eq-hi1} \mathfrak{h}_I:= \mathfrak{l}\cap\mathfrak{h} \oplus \mathcal{G}(T_I)\end{equation} where \begin{equation} \label{eq-hi2} T_I(X_{-\alpha}):= \sum_{\alpha+\beta\in \mathbb{N}_0[I]} X_{\alpha,\beta}\, . \end{equation} Observe that $\mathfrak{h}_S=\mathfrak{h}$ and $\mathfrak{h}_\emptyset=\overline \mathfrak{u} \oplus \mathfrak{l}\cap\mathfrak{h}$.
We also set \begin{align} \mathfrak{a}_I&:=\{ X\in \mathfrak{a}_Z\mid (\forall \alpha \in I) \ \alpha(X)=0\} ,\\ \mathfrak{a}_I^-&:=\{ X\in \mathfrak{a}_I \mid (\forall \alpha\in S\backslash I) \ \alpha(X)\leq0\} ,\\ \mathfrak{a}_I^{--}&:=\{ X\in \mathfrak{a}_I \mid (\forall \alpha\in S\backslash I) \ \alpha(X)<0\} .\end{align} We recall from \cite[Sect. 3]{KKS2} that for all $X\in \mathfrak{a}_I^{--}$ \begin{equation}\label{I-compression} \mathfrak{h}_I=\lim_{t\to \infty} e^{t\operatorname{ad} X}\mathfrak{h}\, .\end{equation}
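\par As an elementary illustration of \eqref{I-compression} (with data specific to this example only), take $G=\operatorname{SL}(2,\mathbb{R})$ and $H=\operatorname{SO}(2)$, so that $Z=G/H$ is the hyperbolic plane. With the standard basis $E, F, X_0$ of $\mathfrak{sl}(2,\mathbb{R})$, where $X_0=\operatorname{diag}(1,-1)$, $[X_0,E]=2E$, $[X_0,F]=-2F$, and with the choice $\mathfrak{u}=\mathbb{R} E$, one has $\mathfrak{h}=\mathfrak{so}(2)=\mathbb{R}(E-F)$, $\mathfrak{a}_Z\simeq \mathfrak{a}=\mathbb{R} X_0$ (here $\mathfrak{a}_H=0$), $\mathfrak{l}\cap\mathfrak{h}=0$ and $\mathfrak{a}_Z^-=\mathbb{R}_{\leq 0}X_0$. For $X=-X_0\in \mathfrak{a}_\emptyset^{--}$ one computes $$ e^{t\operatorname{ad} X}\mathfrak{h}=\mathbb{R}\big(e^{-2t}E-e^{2t}F\big)=\mathbb{R}\big(e^{-4t}E-F\big)\ \longrightarrow\ \mathbb{R} F=\overline\mathfrak{u}=\mathfrak{h}_\emptyset \qquad (t\to\infty)\,,$$ in accordance with $\mathfrak{h}_\emptyset=\overline\mathfrak{u}\oplus \mathfrak{l}\cap\mathfrak{h}$.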
Notice that $\mathfrak{a}_Z=\mathfrak{a}/\mathfrak{a}_H$ is a quotient and not canonically a subalgebra of $\mathfrak{a}$. In general it is convenient and notation saving to identify $\mathfrak{a}_Z$ as a subalgebra of $\mathfrak{a}$ by means of the identification $\mathfrak{a}_Z\simeq \mathfrak{a}_H^{\perp_\mathfrak{a}}$. Then $\mathfrak{a}_I$ normalizes $\mathfrak{h}_I$ and we obtain with $$\widehat \mathfrak{h}_I:=\mathfrak{a}_I +\mathfrak{h}_I$$ a Lie subalgebra of $\mathfrak{g}$. It follows from (\ref{I-compression}) that $L_H$ normalizes each $\mathfrak{h}_I$. Further we define $A_Z=\exp(\mathfrak{a}_Z)\subset A$ as a connected subgroup of $A$ and set $A_Z^-:=\exp(\mathfrak{a}_Z^-)$.
\section{Equivariant smooth compactifications of $\algebraicgroup{ Z}(\mathbb{R})$}\label{section compact}
In this section we explain and recall the principles of $G$-equivariant compactification theory of $\algebraicgroup{ Z}(\mathbb{R})$ as developed in \cite [Sect. 7]{KK}.
The main idea is to use a partial toric completion of the torus $\algebraicgroup{ A}_{\algebraicgroup Z}$ via a fan $\mathcal{F}$ supported in all of $\mathfrak{a}_Z^-$ (in \cite{KK} these fans are called {\it complete}). Let us call this partial completion $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F})$.
\par Given a complete fan supported in $\mathfrak{a}_Z^-$, we inflate (\ref{LST1}) and form the $\algebraicgroup{ P}$-variety
$$ \algebraicgroup{ Z}_0(\mathcal{F}):= \algebraicgroup{ U} \times (\algebraicgroup{ L}/\algebraicgroup{ L}_{\algebraicgroup H} \times_{\algebraicgroup{ A}_{\algebraicgroup Z}} \algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F}))\, . $$ Now it is the content of \cite[Th. 7.1]{KK} that there exists a $\algebraicgroup{G}$-variety $\algebraicgroup{ Z}(\mathcal{F})$ of the form $ \algebraicgroup{ Z}(\mathcal{F})= \algebraicgroup{G}\cdot \algebraicgroup{ Z}_0(\mathcal{F})$ containing $\algebraicgroup{ Z}_0(\mathcal{F})$ as an open subset. Note that $\algebraicgroup{ Z}(\mathcal{F})(\mathbb{R})$ is compact by \cite[Cor. 7.12]{KK}. The compactifications $\algebraicgroup{ Z}(\mathcal{F})$ of $\algebraicgroup{ Z}$ just constructed are usually called {\it toroidal} as they originate from partial compactifications of the torus $\algebraicgroup{ A}_{\algebraicgroup Z}$.
\par For a cone $\mathcal{C}\in \mathcal{F}$ in the fan $\mathcal{F}$ we denote by $\operatorname{int} \mathcal{C}$ its relative interior, i.e. the interior with respect to $\mathfrak{a}_\mathcal{C}:=\operatorname{span}_\mathbb{R} \mathcal{C}\subset \mathfrak{a}_Z$. Now to every cone $\mathcal{C}\in \mathcal{F}$ corresponds a radial limit $\widehat z_\mathcal{C}\in \algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F})\subset \algebraicgroup{ Z}(\mathcal{F})$ defined as follows. The limit
$$ \widehat z_\mathcal{C}:= \lim_{s\to \infty} \exp(sX)\cdot z_0$$ exists for every $X\in \operatorname{int} \mathcal{C}$ and is independent of $X$. Moreover, the $\algebraicgroup{G}$-orbits in $\algebraicgroup{ Z}(\mathcal{F})$ are parametrized by the cones $\mathcal{C}\in\mathcal{F}$ by way of $\mathcal{C}\mapsto \widehat \algebraicgroup{ Z}_\mathcal{C}:=\algebraicgroup{G}\cdot \widehat z_\mathcal{C}$, see \cite[Cor. 7.5]{KK}.
\par Define $\algebraicgroup{ A}_\mathcal{C}\subset \algebraicgroup{ A}_{\algebraicgroup Z}$ as the torus which fixes $\widehat z_\mathcal{C}$ and note that its Lie algebra is given by the complexification of $\mathfrak{a}_\mathcal{C}$ defined above.
Hence if $I=I(\mathcal{C})$ is the set of spherical roots vanishing on $\mathcal{C}$, then $\mathfrak{a}_\mathcal{C}\subset \mathfrak{a}_I$. Further if we denote by $\widehat \algebraicgroup{H}_\mathcal{C} $ the $\algebraicgroup{G}$-stabilizer of $\widehat z_\mathcal{C}$, then we have the following relation for Lie algebras: \begin{equation} \label{formula hc} \widehat \mathfrak{h}_\mathcal{C}= \mathfrak{h}_I +\mathfrak{a}_\mathcal{C}\end{equation} with $\mathfrak{h}_I$ defined as in \eqref{I-compression}. In case $\algebraicgroup{ Z}(\mathcal{F})$ is smooth we provide a simple argument for \eqref{formula hc} below.
\par For our purpose we need that $\algebraicgroup{ Z}(\mathcal{F})$ is a smooth manifold. By the construction of $\algebraicgroup{ Z}(\mathcal{F})$ this is the case if and only if $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F})$ is smooth. Let us now provide a standard construction of a complete fan which yields a smooth partial completion $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F})$. For that we denote by $\Xi_Z=\operatorname{Hom}( \algebraicgroup{ A}_{\algebraicgroup Z}, \mathbb{C}^*)\simeq \mathbb{Z}^r$ the character group of $\algebraicgroup{ A}_{\algebraicgroup Z}$. Likewise we let $\Xi_Z^\vee= \operatorname{Hom}(\mathbb{C}^*, \algebraicgroup{ A}_{\algebraicgroup Z})$ be the co-character group and note the natural identification $\mathfrak{a}_Z\simeq \Xi_Z^\vee \otimes_\mathbb{Z}\mathbb{R}$.
\par Best results are obtained when $S$ is a
$\mathbb{Z}$-basis for the character lattice $\Xi_Z$. In this case the standard fan $\mathcal{F}_{\rm st}$ obtained by the faces of $\mathfrak{a}_Z^-$ is smooth and $\algebraicgroup{ Z}(\mathcal{F}_{\rm st})$ is the wonderful compactification of $\algebraicgroup{ Z}$ (see \cite[Definition 11.4]{KK}).
\begin{rmk} \label{rmk non smooth}In general $S$ is not a basis of $\Xi_Z$. This can happen for several natural reasons, for example if $\mathfrak{a}_{Z,E}\neq 0$, since then $\#S=\dim \mathfrak{a}_Z/\mathfrak{a}_{Z,E}< r=\operatorname{rank}_\mathbb{R}(Z)=\dim \mathfrak{a}_Z$. One might overcome this by passing from $\algebraicgroup{H}$ to $\algebraicgroup{H} \cdot \algebraicgroup{ A}_{Z,E}$. But even if $\#S =r$ it might happen that there is torsion, i.e. $\Xi_Z/ \mathbb{Z}[S]\neq 0$, which destroys smoothness of $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F})$ for $\mathcal{F}$ the fan generated by $\mathfrak{a}_Z^-$. \end{rmk}
One can overcome both issues indicated in Remark \ref{rmk non smooth} simultaneously by subdividing $\mathfrak{a}_Z^-$ into finitely many simple simplicial cones $C_1, \ldots, C_N$ such that \begin{itemize} \item $\mathfrak{a}_Z^- =\bigcup_{j=1}^N C_j$, \item $C_i \cap C_j$ is a common face of both $C_i$ and $C_j$ for all $1\leq i, j\leq N$, \item For each $1\leq j\leq N$ there exists a basis $(\psi_{ji})_{1\leq i\leq r}$ of $\Xi_Z$ such that
$C_j=\{ X\in \mathfrak{a}_Z\mid (\forall 1\leq i \leq r) \psi_{ji}(X)\leq 0\}$. \end{itemize}
The existence of such a decomposition is a standard fact of toric geometry, see \cite[Ch.~3]{KKMS}. Let us denote by $\mathcal{F}_i$ the fan generated by $C_i$, i.e.~the set of all faces of $C_i$. Then define the fan $\mathcal{F}:=\bigcup_{i=1}^N \mathcal{F}_i$. Notice that $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F})$ is smooth and is obtained by gluing together the various open pieces $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F}_i)\simeq \mathbb{C}^r$. From $\mathfrak{a}_Z^- =\bigcup_{j=1}^N C_j$ we obtain $\mathfrak{a}_I^{--}= \bigcup_{j=1}^N (C_j\cap \mathfrak{a}_I^{--})$. Now for every $I\subset S$ we let $J_I\subset\{ 1, \ldots, N\}$ be the set of indices $j$ for which $C_j\cap \mathfrak{a}_I^{--}\neq \emptyset $. Then $\mathfrak{a}_I^{--}= \bigcup_{j\in J_I} (C_j \cap \mathfrak{a}_I^{--})$. Note that in general $J_I$ is not a singleton as for example $J_\emptyset=\{1, \ldots, N\}$.
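\par A standard toric example (independent of the spherical context) illustrates the procedure: in $\mathbb{R}^2$ with lattice $\mathbb{Z}^2$ the cone spanned by $(1,0)$ and $(1,2)$ is simplicial but not smooth, since its generators span a sublattice of index $2$ (the associated affine toric variety is the quadric cone $\mathbb{C}^2/\{\pm 1\}$). Inserting the ray through $(1,1)$ subdivides it into the cones spanned by $(1,0),(1,1)$ and by $(1,1),(1,2)$; each of these is generated by a $\mathbb{Z}$-basis of $\mathbb{Z}^2$, so the resulting fan is smooth. The subdivision of $\mathfrak{a}_Z^-$ above is the analogous procedure, formulated dually in terms of the bases $(\psi_{ji})_i$ of $\Xi_Z$.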
\par We fix now a simplicial subdivision as above and the corresponding complete fan $\mathcal{F}$. To abbreviate notation we set $\widehat \algebraicgroup{ Z}_0:=\algebraicgroup{ Z}_0(\mathcal{F})$ and $\widehat \algebraicgroup{ Z}:=\algebraicgroup{ Z}(\mathcal{F})$. We denote by $\widehat Z$ the closure of $Z$ in $\widehat \algebraicgroup{ Z}(\mathbb{R})$ which is then a manifold with corners \cite[Sect. 14]{KK}.
\par For every $I\subset S$ we now fix a $j_I\in J_I$ and let $\mathfrak{c}_I^-= C_{j_I} \cap \mathfrak{a}_I^{-}\subset \mathfrak{a}_I^-$. We denote by $\mathfrak{c}_I^{--}$ the relative interior of $\mathfrak{c}_I^{-}$. We recall that $z_0=\algebraicgroup{H}\in\algebraicgroup{ Z}$ is the standard base point. Then for $X\in \mathfrak{c}_I^{--}$ the limit \begin{equation} \label{def limit z0I} \widehat z_{0,I}:=\lim_{s\to \infty} \exp(sX)\cdot z_0 \in \algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F}_{j_I})(\mathbb{R}) \subset \widehat \algebraicgroup{ Z}_0(\mathbb{R})\end{equation} exists and is independent of the choice of $X\in\mathfrak{c}_I^{--}$ (but depends on $j_I$).
\begin{rmk}\label{rmk face FI} Our choice of $j_I\in J_I$ yielding $\mathfrak{c}_I^-$ can also be seen in the following context. Set \begin{equation}\label{def FI}\mathcal{F}_I:=\{ \mathcal{C}\in \mathcal{F}\mid \mathfrak{a}_\mathcal{C}=\mathfrak{a}_I\}\, .\end{equation} Then our choice of $j_I\in J_I$ picks an element $\mathfrak{c}_I^-\in \mathcal{F}_I$ together with an index $1\leq j_I\leq N$ such that $\mathfrak{c}_I^-\subset C_{j_I}$. \end{rmk}
Let us denote by $\widehat \algebraicgroup{H}_I$ the stabilizer of $\widehat z_{0,I}$ in $\algebraicgroup{G}$. Note that $\widehat \algebraicgroup{H}_I$ is defined over $\mathbb{R}$. We claim that $\widehat H_I$ has Lie algebra \begin{equation} \label{Lie hat HI} \widehat \mathfrak{h}_I =\mathfrak{h}_I +\mathfrak{a}_I\end{equation} with $\mathfrak{h}_I$ defined in \eqref{eq-hi1}. In order to see that we note that the $G$-stabilizer of $z_t:=\exp(tX)\cdot z_0$ is $H_t:=\exp(tX) H \exp(-tX)$. Moreover the fact that $z_t\to \widehat z_{0,I}$ in the smooth manifold $\widehat \algebraicgroup{ Z}(\mathbb{R})$ implies that the stabilizer Lie algebra of the limit $\widehat z_{0,I}$ contains the limit $\mathfrak{h}_I$ of \eqref{I-compression}. Now the claim follows from \cite[Th. 7.3]{KK}. \par We define $\widehat \algebraicgroup{ Z}_I= \algebraicgroup{G} \cdot \widehat z_{0,I}\simeq \algebraicgroup{G}/ \widehat \algebraicgroup{H}_I$ . The next proposition shows that this definition is independent of the choice of $\mathfrak{c}_I^-\in \mathcal{F}_I$.
\begin{prop} \label{prop independence choice} We have $\widehat \algebraicgroup{H}_\mathcal{C}=\widehat \algebraicgroup{H}_I$ for all $\mathcal{C}\in \mathcal{F}_I$. Moreover, $\widehat\algebraicgroup{H}_I$ does not depend on the choice of the smooth complete fan $\mathcal{F}$ defining the smooth toroidal compactification $\widehat \algebraicgroup{ Z}=\algebraicgroup{ Z}(\mathcal{F})$ of $\algebraicgroup{ Z}$. In other words, for every $I\subset S$
the $\algebraicgroup{G}$-variety
$\widehat \algebraicgroup{ Z}_I=\algebraicgroup{G}/\widehat \algebraicgroup{H}_I$ is, up to $\algebraicgroup{G}$-isomorphism, canonically attached to the $\algebraicgroup{G}$-variety $\algebraicgroup{ Z}=\algebraicgroup{G}/\algebraicgroup{H}$.
\end{prop}
\begin{proof} We prove the first assertion by induction on $n=\# S$. We start with $n=0$, the case of horospherical varieties, see \cite[Sect. 8]{KK}. In this situation $\algebraicgroup{ A}$ normalizes $\algebraicgroup{H}$ and moreover $\algebraicgroup{H} = (\algebraicgroup{H} \cap \algebraicgroup{ L}) \algebraicgroup{ U}^{\rm opp}$ with $\algebraicgroup{ U}^{\rm opp}$ the opposite of $\algebraicgroup{ U}$. In particular, $\algebraicgroup{ A}_{\algebraicgroup Z}=\algebraicgroup{ A}/\algebraicgroup{ A}_{\algebraicgroup{H}}$ acts naturally on the right of $\algebraicgroup{ Z}=\algebraicgroup{G}/\algebraicgroup{H}$. By the construction of the toroidal compactification as the unique minimal $\algebraicgroup{G}$-extension of $\algebraicgroup{ Z}_0(\mathcal{F}) = [\algebraicgroup{ Q}/ \algebraicgroup{ Q}_{\algebraicgroup{H}}] \times_{\algebraicgroup{ A}_{\algebraicgroup Z}} \algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F})$ we obtain that $$\algebraicgroup{ Z}(\mathcal{F})= \algebraicgroup{G}/\algebraicgroup{H}\times_{\algebraicgroup{ A}_Z} \algebraicgroup{ A}_Z(\mathcal{F})$$ and hence $\widehat \algebraicgroup{H}_\mathcal{C}= \algebraicgroup{H} \algebraicgroup{ A}$ for all $\mathcal{C}\in \mathcal{F}_S$. \par Let now $n>0$ and $I\subset S$. We first treat the case for $I=S$. Then $\mathfrak{a}_S=\mathfrak{a}_{Z,E}$ and we note for all $\mathcal{C}\in \mathcal{F}_S$ the natural isomorphism $$ \algebraicgroup{ A}_{\algebraicgroup Z}\cdot \widehat z_\mathcal{C} \simeq \algebraicgroup{ A}_{\algebraicgroup Z}/\algebraicgroup{ A}_{Z,E}\, .$$ Hence we obtain that $$\algebraicgroup{ Q}\cdot \widehat z_\mathcal{C}= [\algebraicgroup{ Q}/\algebraicgroup{ Q}_{\algebraicgroup{H}}]\times_{\algebraicgroup{ A}_{\algebraicgroup Z}} [\algebraicgroup{ A}_{\algebraicgroup Z}\cdot \widehat z_\mathcal{C}]\simeq \algebraicgroup{ Q}/ (\algebraicgroup{ Q}\cap \algebraicgroup{H})\algebraicgroup{ A}_{Z,E}\, .$$ This means for the $\algebraicgroup{G}$-extension $\widehat\algebraicgroup{ Z}_\mathcal{C}$ of $\algebraicgroup{ Z}_0(\mathcal{C})=\algebraicgroup{ Q}\cdot \widehat z_\mathcal{C}$ $$\widehat \algebraicgroup{ Z}_\mathcal{C}\simeq \algebraicgroup{G}/\algebraicgroup{H}\algebraicgroup{ A}_{Z,E}\, , $$ i.e. $\widehat \algebraicgroup{H}_\mathcal{C}=\algebraicgroup{H} \algebraicgroup{ A}_{Z,E}$. \par Suppose now that $I\subsetneq S$ and let $\mathcal{C}, \mathcal{C}'\in \mathcal{F}_I$. We connect now $\mathcal{C}$ and $\mathcal{C}'$ in $\mathfrak{a}_I^-$ face to face, i.e. we find $\mathcal{C}_1, \dots \mathcal{C}_m\in \mathcal{F}_I$ such that
$I(\mathcal{C}\cap \mathcal{C}_1)=I$, $I(\mathcal{C}_i \cap \mathcal{C}_{i+1}) = I$ for $1\leq i\leq m-1$ and
$I(\mathcal{C}'\cap \mathcal{C}_m)=I$. Hence we may assume that $I(\mathcal{C}\cap \mathcal{C}')=I$. Set $\mathcal{C}_0:=\mathcal{C}\cap \mathcal{C}'$.
Set
$$\mathcal{F}(\mathcal{C}_0):=\{ \mathcal{C}\in \mathcal{F}\mid \mathfrak{a}_{\mathcal{C}_0}\subset \mathfrak{a}_\mathcal{C}\}$$
and note that $\mathcal{C}, \mathcal{C}' \in \mathcal{F}(\mathcal{C}_0)$. Set $\algebraicgroup{ Z}_0:= \algebraicgroup{G}\cdot \widehat z_{\mathcal{C}_0}\simeq \algebraicgroup{G}/ \widehat \algebraicgroup{H}_{\mathcal{C}_0}$ and note that
$\mathfrak{a}_{Z_0}=\mathfrak{a}_Z/ \mathfrak{a}_{\mathcal{C}_0}$. Moreover, $\mathcal{F}_0:=\mathcal{F}(\mathcal{C}_0)/ \mathfrak{a}_{\mathcal{C}_0}$ is a complete smooth fan for
$\algebraicgroup{ Z}_0$ featuring $\algebraicgroup{ Z}_0(\mathcal{F}_0)\subset \algebraicgroup{ Z}(\mathcal{F})$ as the Zariski closure of $\algebraicgroup{ Z}_0$ in $\algebraicgroup{ Z}(\mathcal{F})$. Now $S_0=S(\algebraicgroup{ Z}_0)=I\subsetneq S$
and we obtain by induction that
$\widehat \algebraicgroup{H}_\mathcal{C}=\widehat\algebraicgroup{H}_{\mathcal{C}'}$.
\par Finally we note that if $\mathcal{F}_1$ and $\mathcal{F}_2$ are smooth fans, then there exists a smooth fan $\mathcal{F}_3$
containing both $\mathcal{F}_1$ and $\mathcal{F}_2$, i.e. $\algebraicgroup{ Z}(\mathcal{F}_1), \algebraicgroup{ Z}(\mathcal{F}_2)\subset \algebraicgroup{ Z}(\mathcal{F}_3)$. This completes the proof
of the proposition.
\end{proof}
For the purpose of this paper our interest is not so much with $\widehat \algebraicgroup{ Z}_I$ but with the real $G$-orbit $\widehat Z_I=G \cdot \widehat z_{0,I}\simeq G/ \widehat H_I$. Note that $\widehat Z_I\subset \widehat Z$.
For $I\subset S$ we denote by $\algebraicgroup{ A}_I$ the subtorus of $\algebraicgroup{ A}_{\algebraicgroup Z}$ corresponding to $\mathfrak{a}_I\subset \mathfrak{a}_Z$. For our fixed $j =j_I\in J_I$ with regard to $\mathfrak{c}_I^-$ we now set
$\psi_i^I:= \psi_{ji}$ for $1\leq i \leq r$. Let $k:=r-|I|$. We may order the basis $(\psi_i^I)_{1\leq i\leq r}$ then in such a way that $\mathbb{Q}[I]=\mathbb{Q}[\psi^I_{k+1}, \ldots, \psi^I_r]$ and then
$$\mathfrak{c}_I^{--}=\{ X\in\mathfrak{a}_I^{-}\mid (\forall i\leq k)\ \psi_i^I(X)<0\}\, .$$ With the basis $(\psi^I_i)_i$ we identify $\algebraicgroup{ A}_{\algebraicgroup Z} $ with $(\mathbb{C}^\times)^r$ via
\begin{equation}\label{AZ-iso1} \algebraicgroup{ A}_{\algebraicgroup Z}\to (\mathbb{C}^\times)^r, \ \ a\mapsto (a^{\psi^I_i})_{1\leq i\leq r}\, .\end{equation} In these coordinates $A_I$ corresponds to the subgroup $(\mathbb{C}^\times)^{r-k}\simeq {\bf1} \times (\mathbb{C}^\times)^{r-k} \subset (\mathbb{C}^\times)^r$.
Let us denote by $({\bf e}_i^I)_{1\leq i\leq r}\subset \mathfrak{a}_Z$ the basis dual to $(\psi_i^I)_{1\leq i\leq r}$. We define the $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$-modules $V_I:=\bigoplus_{i \leq k} \mathbb{R} {\bf e}_i^I\simeq \mathfrak{a}_I$ and $V_I^\perp :=\bigoplus_{i>k} \mathbb{R} {\bf e}_i^I$, which are both diagonal with respect to the fixed basis $(\psi_i^I)_{1\leq i\leq r}$ of $\Xi_Z$. Via the coordinates of \eqref{AZ-iso1} we view $\algebraicgroup{ A}_{\algebraicgroup Z}$ as open subset of $V_\mathbb{C}=\mathbb{C}^r=\algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F}_{j_I})$ and obtain in particular that \begin{equation} \label{LST-coo} \algebraicgroup{ Z}_0(\mathcal{F}_{j_I})= \algebraicgroup{ U} \times [\algebraicgroup{ M}/ \algebraicgroup{ M}_{\algebraicgroup H}\times_{F_M} V_\mathbb{C}]\end{equation} where we view $F_M=\widehat M_H/M_H$ as a subgroup of $\{-1, 1\}^r$ acting on $V_\mathbb{C}$ by sign changes in the coordinates. Set $V=\mathbb{R}^r$. \begin{lemma} The real points of $ \algebraicgroup{ Z}_0(\mathcal{F}_{j_I})= \algebraicgroup{ U} \times [\algebraicgroup{ M}/ \algebraicgroup{ M}_{\algebraicgroup H}\times_{F_M} V_\mathbb{C}]$ are given by \begin{equation}\label{inf slice} \algebraicgroup{ Z}_0(\mathcal{F}_{j_I})(\mathbb{R})= U \times [M/M_H \times_{F_M} V]\, .\end{equation} \end{lemma}
\begin{proof} Let $x=(u,[m\algebraicgroup{ M}_\algebraicgroup{H}, v])\in \algebraicgroup{ Z}_0(\mathcal{F}_{j_I})$ where $u\in \algebraicgroup{ U}$, $m\in\algebraicgroup{ M}$ and $v\in V_\mathbb{C}$. Then $x$ is real if and only if $\bar x = x$, that is $$ (\bar u, [\bar m \algebraicgroup{ M}_\algebraicgroup{H}, \bar v])= (u,[m\algebraicgroup{ M}_\algebraicgroup{H}, v])$$ and in particular $u=\bar u$. Moreover, as $F_M$ has representatives in $\widehat M_H$, we obtain that $\bar m \algebraicgroup{ M}_\algebraicgroup{H} \in m\widehat M_H\algebraicgroup{ M}_{\algebraicgroup{H}}$. Now it follows from Lemma \ref{Example exact} that the polar map
$$ M \times_{M_H} \mathfrak{m}_H^\perp \to \algebraicgroup{ M}/ \algebraicgroup{ M}_\algebraicgroup{H}, \ \ [g,X]\mapsto g\exp(iX) \algebraicgroup{ M}_\algebraicgroup{H}$$ is a diffeomorphism. Hence if $y=m\algebraicgroup{ M}_\algebraicgroup{H}=[g,X]$ is such that $\bar y \in m\widehat M_H \algebraicgroup{ M}_H$ we obtain $\bar y=[g,-X]= [ g\widehat m^{-1} , \operatorname{Ad}(\widehat m)X]$ for some $\widehat m \in \widehat M_H$. But this gives $\widehat m \in M_H$ and thus $\bar y = y$, i.e. $X=0$. Therefore $y=m\algebraicgroup{ M}_\algebraicgroup{H}=[g,0]$ and we may choose $m=g\in M$. This yields in turn that $\bar v = v$ which concludes the proof of the lemma. \end{proof}
\par Let ${\bf e}_I:=\sum_{j=1}^k {\bf e}_j^I\in V_I$. Set $F_{M,I}:=F_M\cap \algebraicgroup{ A}_I$ and note that $F_{M,I}$ is the $F_M$-stabilizer of ${\bf e}_I\in V_I$. Further put $F_M^I:=F_M/ F_{M,I} $. Denote by $V_I^{\perp, \times}\subset V_I$ the subset with all coordinates non-zero and observe that $$ V_{I,\mathbb{C}}^{\perp, \times} =\algebraicgroup{ A}_{\algebraicgroup Z}\cdot {\bf e}_I\simeq \algebraicgroup{ A}_{\algebraicgroup Z}/ \algebraicgroup{ A}_I \, .$$ Then we obtain from \eqref{LST-coo} and \eqref{inf slice} the isomorphisms
\begin{eqnarray} \label{LST-Ic} \algebraicgroup{ P} \cdot \widehat z_{0,I}&\simeq &\algebraicgroup{ U} \times \big[[\algebraicgroup{ M}/ \algebraicgroup{ M}_\algebraicgroup{H} F_{M,I}] \times_{F_M^I} V_{I,\mathbb{C}}^{\perp, \times}\big ]\\ \notag &\simeq & \algebraicgroup{ U} \times \big[[\algebraicgroup{ M}/ \algebraicgroup{ M}_\algebraicgroup{H} F_{M,I}] \times_{F_M^I} [\algebraicgroup{ A}_{\algebraicgroup Z}/ \algebraicgroup{ A}_I]\big] \end{eqnarray} and \begin{eqnarray} \label{LST-I} [\algebraicgroup{ P} \cdot \widehat z_{0,I}](\mathbb{R})&\simeq &U \times \big[[M/ M_H F_{M,I}] \times_{F_M^I} V_I^{\perp, \times}\big ]\\ \notag &\simeq & U \times \big[[M/ M_H F_{M,I}] \times_{F_M^I} [\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})/ \algebraicgroup{ A}_I(\mathbb{R})]\big] \end{eqnarray} which are given in coordinates as $$ (u,[mM_HF_{M,I},v])=umv\cdot \widehat z_{0,I}\, .$$
\subsection{ Relatively open $P$-orbits in $\widehat Z$}\label{subsection rel open P-orbits}
The structure of the finite set of $G$-orbits in $\widehat Z$ is in general complicated and the $G$-orbits through the boundary points $\widehat z_{0,I}\in \widehat Z$ typically do not exhaust the $G$-orbits in $\widehat Z$ (see Example \ref{ex SL3} below).
\par In general, let us call a $P$-orbit $P\cdot \widehat z\subset \widehat Z$ {\it relatively open} provided $P\cdot \widehat z$ is open in the $G$-orbit $G\cdot \widehat z$. The goal of this subsection is to describe the set of all relatively open $P$-orbits in $\widehat Z$, denoted by $(P\backslash \widehat Z)_{\rm rel-op}$ in the sequel.
\par Recall from the end of Subsection \ref{subsection LST} the set $\mathcal{W}\subset G$ which parametrizes $(P\backslash Z)_{\rm open}$. In addition we recall that elements $w\in \mathcal{W}$ can be written as $w=\tilde th$ with $\tilde t \in T_Z=\exp(i\mathfrak{a}_H^\perp)\subset \algebraicgroup{ A}$ and $h\in \algebraicgroup{H}$ such that $t:=\tilde t \cdot z_0\in F$, where $F=F_\mathbb{R}\cap Z$ and $F_\mathbb{R}$ is the finite group of $2$-torsion points of $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})\subset \algebraicgroup{ Z}(\mathbb{R})$ (see Subsection \ref{subsection LST} for the notation).
For $w\in \mathcal{W}$ we now define the shifted base points: $$z_w:=w\cdot z_0= \tilde t \cdot z_0 =t\in F\subset Z\, .$$ Likewise for $I\subset S$ and $X\in \mathfrak{c}_I^{--}$ we define in analogy to \eqref{def limit z0I}
$$\widehat z_{w,I}:= \lim_{s\to \infty} \exp(sX)\cdot z_w = \tilde t \cdot \widehat z_{0,I}$$ and note that the second equality (immediate from the definitions) implies that $\widehat z_{w,I}$ is independent of the choice of $X\in \mathfrak{c}_I^{--}$. As $\tilde t \cdot \widehat z_{0,I}$ is independent of the chosen lift $\tilde t$ of $t$ we can define for $t\in F_\mathbb{R}$ $$t \cdot \widehat z_{0,I}:= \tilde t \cdot \widehat z_{0,I}\, .$$ Since the limit defining $\widehat z_{w,I}$ exists and $z_w\in Z$ we infer that $\widehat z_{w,I}\in \widehat Z$. Moreover, as $ \widehat z_{w,I} \in F_\mathbb{R} \cdot \widehat z_{0,I}$ with the notation defined above, we infer from the local structure theorem as recorded in \eqref{LST-I} that $P\cdot \widehat z_{w,I}$ is open in $G\cdot \widehat z_{w,I}$. With that we obtain in fact all relatively open $P$-orbits in the wonderful situation:
\begin{lemma} \label{lemma rel open}Suppose that $\widehat \algebraicgroup{ Z}$ is wonderful. Then the set of relatively open $P$-orbits in $\widehat Z$ is given by $$(P\backslash \widehat Z)_{\rm rel-op}=\{ P \cdot \widehat z_{w,I}\mid w\in \mathcal{W}, I\subset S\}\, .$$ \end{lemma}
\begin{proof} The inclusion $\supset$ was already seen above. In the wonderful situation the $\algebraicgroup{G}$-orbits in $\widehat \algebraicgroup{ Z}$ are precisely the $\algebraicgroup{G}\cdot \widehat z_{0,I}\simeq \algebraicgroup{G}/\widehat \algebraicgroup{H}_I$ for $I\subset S$ and accordingly every relatively open $P$-orbit in $\widehat \algebraicgroup{ Z}(\mathbb{R})$ lies in some $[\algebraicgroup{ P} \cdot \widehat z_{0,I}](\mathbb{R}) $. Hence any relatively open $P$-orbit in $\widehat Z$ is of the form $P t_1 \cdot \widehat z_{0,I}$ for some $t_1\in F_\mathbb{R}$ by \eqref{LST-I} and \eqref{AF}. Since $\widehat Z$ is $G$-invariant, and in particular $P$-invariant, it follows that $t_1 \cdot \widehat z_{0,I}\in \widehat Z$. Further the local structure theorem \eqref{LST-I} implies that $t_1 \cdot \widehat z_{0,I}\in \partial Z$ is approached by a curve in $Z$ of the form $\exp(sX) t_2 \in Z$ for some $t_2\in F$ and $X\in \mathfrak{a}_I^{--}=\mathfrak{c}_I^{--}$, for $s\to \infty $. In other words $t_1 \cdot \widehat z_{0,I} = \lim_{s\to \infty} \exp(sX) t_2 = t_2 \cdot \widehat z_{0,I}$. With Lemma \ref{lemmaW1} this concludes the proof. \end{proof}
\begin{rmk}\label{remark rel open} (a) In the wonderful case we have a stratification $\widehat \algebraicgroup{ Z}(\mathbb{R})= \coprod_{I\subset S} \widehat \algebraicgroup{ Z}_I(\mathbb{R})$ of $\widehat\algebraicgroup{ Z}(\mathbb{R})$ in real spherical $G$-manifolds with $P\cdot \widehat z_{w,I}\subset \widehat \algebraicgroup{ Z}_I(\mathbb{R})$ for each $w\in \mathcal{W}$. In particular if $I\neq J\subset S$ we have $P\cdot \widehat z_{w,I}\neq P\cdot \widehat z_{w',J}$ for all $w,w'\in \mathcal{W}$. However, for fixed $I$ it can and will happen that $P\cdot \widehat z_{w,I}=P\cdot \widehat z_{w',I}$ for some $w\neq w'$. The extremal case is $I=\emptyset$ where $\widehat z_\emptyset=\widehat z_{w,\emptyset}$ does not depend on $w\in \mathcal{W}$ at all.
\par (b) In case $\widehat \algebraicgroup{ Z}$ is not wonderful, the assertion in Lemma \ref{lemma rel open} needs to be modified as follows. For every cone $\mathcal{C}\in \mathcal{F}$ and $w\in \mathcal{W}$ let us define \begin{equation}\label{zwC} \widehat z_{w,\mathcal{C}}:= \lim_{s \to \infty} \exp(sX)\cdot z_w \end{equation} which does not depend on $X\in \operatorname{int} \mathcal{C}$. Recall that the $\algebraicgroup{G}$-orbits in the toroidal compactification $\widehat \algebraicgroup{ Z}$ are parametrized by $\mathcal{C}\in \mathcal{F}$ and explicitly given by $\algebraicgroup{G}\cdot \widehat z_\mathcal{C}$. Then for each $\mathcal{C}\in \mathcal{F}$ the relatively open $P$-orbits in $\partial Z$ contained in $\widehat Z_\mathcal{C}= \algebraicgroup{G} \cdot \widehat z_\mathcal{C}$ are given by the $P\cdot \widehat z_{w,\mathcal{C}}$ with $w\in \mathcal{W}$. \par (c) As every $G$-orbit in $\widehat Z_\mathcal{C}$ is open and contains an open $P$-orbit, we deduce from (b) that $$\widehat Z_\mathcal{C}=\bigcup_{w\in \mathcal{W}} G \cdot \widehat z_{w,\mathcal{C}}\,.$$
\end{rmk}
\section{Normal bundles to boundary orbits in a smooth compactification}
Let $\sX$ be a manifold and $\mathsf{Y}\subset \sX$ be a submanifold. We denote by $T\,\sX$ and $T\, \mathsf{Y}$ the associated tangent bundles of $\sX$ and $\mathsf{Y}$. The normal bundle of $\mathsf{Y}$ is then defined to be
$$N_\mathsf{Y}:= T\,\sX|_\mathsf{Y}/ T\,\mathsf{Y}\, .$$ Note that $N_\mathsf{Y} \to \mathsf{Y}$ is a vector bundle with fibers $(N_\mathsf{Y})_y=T_y\sX / T_y\mathsf{Y}$.
\par We are mainly interested in the case where $\sX$ is a smooth $G$-manifold for a Lie group $G$, and $\mathsf{Y}:=G\cdot y$ is a locally closed orbit. In this case we have a natural action of the stabilizer $G_y$ on $(N_\mathsf{Y})_y$ and
\begin{equation} \label{normal identifier}N_\mathsf{Y}= G \times_{G_y} (N_\mathsf{Y})_y\end{equation} reveals the $G$-structure of $N_\mathsf{Y}$.
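\par A toy example which already displays the pattern used below: let $G=\mathbb{R}^\times$ act on $\sX=\mathbb{R}$ by multiplication and take the closed orbit $\mathsf{Y}=\{0\}$ with base point $y=0$. Then $G_y=G$ and $(N_\mathsf{Y})_y=T_0\mathbb{R}=\mathbb{R}$, on which $G_y$ acts by multiplication, so that \eqref{normal identifier} reads $N_\mathsf{Y}=G\times_{G}\mathbb{R}=\mathbb{R}$. The kernel of this isotropy action is trivial and the quotient is $\mathbb{R}^\times$ itself; this is the simplest instance of the kernel--quotient structure $\algebraicgroup{H}_I\triangleleft \widehat \algebraicgroup{H}_I$ with torus quotient $\algebraicgroup{ A}_I$ that appears in Theorem \ref{thm normal} below.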
\subsection{Normal bundles to boundary orbits} After this interlude on normal bundles we return to our basic setting with $G$ a real reductive algebraic group, and let $\sX:=\widehat \algebraicgroup{ Z}(\mathbb{R})$ be a smooth $G$-equivariant compactification of $Z$ as constructed in Section \ref{section compact}.
Fix $I\subset S$ and let $\mathsf{Y}:=\widehat Z_I\subset \sX$ be a boundary orbit with base point $y:=\widehat z_{0,I}$. Recall the basis $(\psi_i^I)_{1\leq i\leq r}$ of $\Xi_Z$, its dual basis $({\bf e}_i^I)_i$ and $V_I:= \bigoplus_{i\leq k}\mathbb{R} {\bf e}_i^I$. By means of the basis it is often convenient to identify $V_I$ with $\mathbb{R}^k$
where $k=r-|I|$. Define $V_I^\times:= \bigoplus_{i\leq k} \mathbb{R}^\times {\bf e}_i^I $ and $V_I^0\subset V_I^\times$ by
$$V_I^0:= \bigoplus_{i\leq k} \mathbb{R}^+ {\bf e}_i^I \simeq (\mathbb{R}^+)^k\, .$$ Set $V:=V_I \oplus V_I^{\perp}$ and recall ${\bf e}_I=\sum_{j=1}^k {\bf e}_j^I\in V_I^0$.
\par Let $\mathcal{U}_M\subset M/ M_H$ be an open neighborhood of the base point $M_H\in M/ M_H$ such that $\mathcal{U}_M \cap \mathcal{U}_M\cdot x=\emptyset$ for $x\in F_M$, $x\neq 1$.
Recall that $V_I^{\perp,\times} =\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})/\algebraicgroup{ A}_I(\mathbb{R})$. According to (\ref{LST-I}) the mapping
$$ \Psi_1: U \times \mathcal{U}_ M \times V_I^{\perp, \times} \to [\algebraicgroup{ P}\cdot \widehat z_{0,I}](\mathbb{R}) =U\times\big[[ M/ M_HF_{M,I}]\times_{F_M^I} V_I^{\perp, \times} \big]$$ given by $$(u, mM_H, v)\mapsto (u, [mM_H F_{M,I}, v])$$ is a diffeomorphism onto an open subset of $[\algebraicgroup{ P}\cdot \widehat z_{0,I}](\mathbb{R})$ and hence also of $\widehat \algebraicgroup{ Z}_I(\mathbb{R})$. Set $$\mathcal{V}:= \Psi_1^{-1}(\mathsf{Y}) \, .$$ Thus we obtain two diffeomorphisms onto their images
$$\Psi_0: \mathcal{V} \to \mathsf{Y}=\widehat Z_I, \ \ (u,mM_H, a\algebraicgroup{ A}_I(\mathbb{R}))\mapsto uma\cdot y$$ and \begin{equation*} \Psi: \mathcal{V}\times V_I \to U \times [ M/M_H \times_{F_M} V] \subset \sX=\widehat \algebraicgroup{ Z}(\mathbb{R})\, ,\end{equation*} the latter one being given by $$(u,mM_H, a\algebraicgroup{ A}_I(\mathbb{R}), v_I)\to (u, [m, a\cdot {\bf e}_I+ v_I]).$$
\par Set $F_I:=F\cap \algebraicgroup{ A}_I(\mathbb{R})$ and note that $F_I$ identifies with a subset of $ \{ -1, 1\}^k$ upon the identification $\algebraicgroup{ A}_I(\mathbb{R})\simeq (\mathbb{R}^\times)^k$. From the definition of $\Psi$ we then get \begin{equation} \label{points to Z} \Psi^{-1}(Z)= \mathcal{V} \times F_I \cdot V_I^0\, .\end{equation} It is worth noting that \begin{equation} \label{normalization normal bundle} \Psi(y,{\bf e}_I)= z_0\, . \end{equation}
\par Since $\Psi$ is a diffeomorphism onto its image, we record the following transversality property:
\begin{equation} \label{tangent decomp} d\Psi(x,0) (0 \times V_I ) \oplus T_x\mathsf{Y} = T_x\sX\qquad (x\in \mathcal{V})\, .\end{equation}
In the sequel we use \eqref{tangent decomp} to identify the spaces $V_I\simeq (N_\mathsf{Y})_y$ for $y=\widehat z_{0,I}$. On $V_I=(N_\mathsf{Y})_y$ there is a natural linear action of $G_y=\widehat H_I$, the isotropy representation, which we call $$\rho: \widehat H_I \to \operatorname{GL}(V_I)\, .$$
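In concrete terms: for $g\in \widehat H_I$ the displacement $L_g: x\mapsto g\cdot x$ fixes $y$ and leaves $\mathsf{Y}$ invariant, so that its differential $dL_g(y)$ preserves $T_y\mathsf{Y}$ and descends to the quotient $(N_\mathsf{Y})_y$. Under the identification $(N_\mathsf{Y})_y\simeq V_I$ obtained from \eqref{tangent decomp} this induced map is $\rho(g)$; equivalently,
$$ \rho(g) v = \mathsf {P}\big(dL_g(y) v\big) \qquad (v\in V_I)\,,$$
where $\mathsf {P}: T_y\sX \to V_I$ denotes the projection along $T_y\mathsf{Y}$, a projection which will reappear in the proof of Lemma \ref{limit normal} below.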
The representation $\rho$ is algebraic, i.e. it originates from the complex isotropy representation
$$\underline{\rho}: \widehat\algebraicgroup{H}_I\to \operatorname{GL}(V_{I,\mathbb{C}})\, .$$ We write $\algebraicgroup{H}_I=\ker \underline \rho$ and note that $H_I=\algebraicgroup{H}_I(\mathbb{R})$ is given by $H_I=\ker \rho$. Observe that $\algebraicgroup{H}_I\triangleleft\widehat \algebraicgroup{H}_I$ and $H_I \triangleleft \widehat H_I$ are closed normal subgroups.
\begin{theorem} \label{thm normal}The following assertions hold: \begin{enumerate} \item \label{1one1} The Lie algebra of $H_I$ is given by $\mathfrak{h}_I$, as defined in \eqref{eq-hi1}. \item \label{2two2}$\widehat \algebraicgroup{H}_I/ \algebraicgroup{H}_I \simeq \algebraicgroup{ A}_I$. \end{enumerate} \end{theorem}
The proof of Theorem \ref{thm normal} will be prepared by several intermediate steps. The key is the following lemma and the techniques contained in its proof.
\begin{lemma} \label{limit normal}The Lie algebra of $H_I$ contains $\mathfrak{h}_I$, as defined by \eqref{eq-hi1}. \end{lemma}
\begin{proof} Let $Y\in \mathfrak{h}_I$; then $h_I:=\exp(Y)\in \widehat H_I$, as explained above Proposition \ref{prop independence choice}. We claim that $\rho(h_I)={\bf1}$.
For all $X\in \mathfrak{c}_I^{--} \cap \Xi_Z^\vee$ we consider the curve $$ \gamma_X: [0, 1] \to \sX=\widehat \algebraicgroup{ Z}(\mathbb{R}), \ \ s\mapsto \exp(-(\log s) X)\cdot z_0 ,$$ which connects $\widehat z_{0,I}$ to $z_0$. Note, that in coordinates of \eqref{AZ-iso1} we have $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathcal{F}_{j_I})(\mathbb{R})\simeq \mathbb{R}^k$ (with $j_I\in J_I$ the selected element for $\mathfrak{c}_I^-$), and $$\gamma_X(s) = (s^{m_1}, \ldots, s^{m_k}) \in V_I$$ for some $m_i\in \mathbb{N}$. Notice that all tuples of $m_i\in\mathbb{N}$ occur for some $X$. Hence $\gamma_X$ is differentiable with $\gamma_X(0)= y=\widehat z_{0,I}$ and $\gamma_X'(0)=(\delta_1, \ldots, \delta_k)$ with $\delta_i=1$ if $m_i=1$ and $\delta_i=0$ otherwise.
\par Since $\rho(h_I) (\gamma_X'(0)) = \frac{d}{ds}\big|_{s=0} h_I \gamma_X(s)$, the lemma will follow provided we can show that
$\frac{d}{ds}\big|_{s=0} h_I \gamma_X(s)=\gamma_X'(0)$ for all $X$ as above. Now for $h_I\in L\cap H$ this is clear and thus we may assume that $Y$ is of the form (see \eqref{eq-hi2}) $$Y= \sum_{\alpha\in \Sigma(\mathfrak{a}, \mathfrak{u})} (X_{-\alpha} + \sum_{\alpha +\beta \in \mathbb{N}_0[I]} X_{\alpha, \beta})\,.$$ Set now for $s>0$ $$Y_s:= \sum_{\alpha\in \Sigma(\mathfrak{a}, \mathfrak{u})} (X_{-\alpha} + \sum_{\beta} e^{- (\log s) (\alpha +\beta)(X)} X_{\alpha, \beta})\in \operatorname{Ad}(\gamma_X(s)) \mathfrak{h}\, .$$
Note that $Y_s\to Y$ for $s\to 0$. Likewise we set $h_{I,s}:=\exp(Y_s)$ and note $h_{I,s} \to h_I$. Now we use that $\mathcal{M}\subset \Xi_Z$ in order to conclude that $h_{I,s}$ is right differentiable at $s=0$. The Leibniz-rule yields
$$ \frac{d}{ds}\Big|_{s=0} h_{I,s} \gamma_X(s)= \frac{d}{ds}\Big|_{s=0} h_I \gamma_X(s) +
\underbrace{ \frac{d}{ds}\Big|_{s=0} h_{I,s} y}_{\in T_y \mathsf{Y}}$$ and thus we get
$$\frac{d}{ds}\Big|_{s=0} h_I \gamma_X(s) = \mathsf {P} \left(\frac{d}{ds}\Big|_{s=0} h_{I,s} \gamma_X(s)\right)$$ with $\mathsf {P}$ the projection $T_y \sX\to V_I$ along $T_y\mathsf{Y}$. Now observe that \begin{eqnarray*} h_{I,s} \gamma_X(s) &=&h_{I,s} \exp(-(\log s) X) \cdot z_0\\ &=& \exp(-(\log s) X)\underbrace{ \exp ((\log s) X)h_{I,s}
\exp(-(\log s) X)}_{\in H} \cdot z_0\\
& =& \gamma_X(s)\end{eqnarray*} and the lemma follows. \end{proof} \subsubsection{Normal curves and the proof of Theorem \ref{thm normal}} Recall the base points $\widehat z_{0,I}\in \widehat Z_I \subset \widehat Z$. Now $\widehat z_{0,I}\in \partial Z$ is a boundary point of $Z$ provided that $I\subsetneq S$ or $\mathfrak{a}_{Z,E}\neq 0$ -- in case $\mathfrak{a}_{Z,E}=\{0\}$ we have $\widehat Z_S= Z$ and $\widehat z_S=z_0$ is not a boundary point.
\par The proof of Lemma \ref{limit normal} contains an important concept, namely smooth curves in $Z$ which approach the point $\widehat z_{0,I}\in \widehat Z_I$ in normal direction. Let $X_I \in \mathfrak{a}_I^{--}$ correspond to $-{\bf e}_I\in V_I$ after the natural identification of $\mathfrak{a}_I$ with $V_I$. Then we saw that the curve
$$\gamma_I: [0,1]\to \widehat \algebraicgroup{ Z}(\mathbb{R}), \ \ s\mapsto \exp(-(\log s) X_I)\cdot z_0$$ is smooth with the following properties
\begin{itemize} \item $\gamma_I\big((0,1]\big) \subset Z$, \item $\gamma_I(0)=\widehat z_{0,I}$, \item $\gamma_I'(0)= {\bf e}_I \in V_I \subset T_{\widehat z_{0,I}} \widehat \algebraicgroup{ Z}(\mathbb{R})$. \end{itemize}
More generally, let $v\in V_{I,\mathbb{C}}^\times $. Then there exists a unique $a(v)\in \algebraicgroup{ A}_I$ such that $v= a(v)\cdot {\bf e}_I$. If we consider now $\algebraicgroup{ A}_I\subset \algebraicgroup{ Z}$, then we obtain a smooth curve
$$\gamma_v: [0,1]\to \widehat \algebraicgroup{ Z}, \ \ s\mapsto \exp(-(\log s) X_I)\cdot a(v) $$ such that \begin{itemize} \item $\gamma_v\big((0,1]\big) \subset \algebraicgroup{ Z}$, \item $\gamma_v(0)=\widehat z_{0,I}$, \item $\gamma_v'(0)= v \in V_{I,\mathbb{C}} \subset T_{\widehat z_{0,I}} \widehat \algebraicgroup{ Z}$. \end{itemize}
For $g\in \algebraicgroup{G}$ we now shift $\gamma_v$ by $g$, i.e. we set $$\gamma_{g,v}(s):= g\cdot \gamma_v(s) \in \widehat \algebraicgroup{ Z} \qquad s\in [0,\epsilon)\, .$$ Notice that $\gamma_{g,v}(0)= g\cdot \widehat z_{0,I}$ and $\gamma'_{g,v}(0) = dL_g(\widehat z_{0,I}) v \in T_{g\cdot \widehat z_{0,I}} \widehat \algebraicgroup{ Z}$, where $dL_g$ denotes the differential of the displacement $L_g(z)=g\cdot z$. If $v={\bf e}_I$ we simply set $\gamma_{g,I}:=\gamma_{g,v}$. Specifically we are interested when $g\in \widehat \algebraicgroup{H}_I$ so that $\gamma_{g, v}(0)= \widehat z_{0,I}$.
\par Next we recall the decomposition of the complex tangent spaces $$ T_{\widehat z_{0,I}} \widehat \algebraicgroup{ Z}= T_{\widehat z_{0,I}} \widehat \algebraicgroup{ Z}_I \oplus V_{I,\mathbb{C}}$$ which identifies $V_{I,\mathbb{C}}$ with the complex normal space to the boundary orbit $\widehat \algebraicgroup{ Z}_I =\algebraicgroup{G} \cdot \widehat z_{0,I}$ at the point $\widehat z_{0,I}$. We denote by $$\mathsf {P}: T_{\widehat z_{0,I}} \widehat \algebraicgroup{ Z} \to V_{I,\mathbb{C}}, \ \ u\mapsto u_{\rm n}$$ the projection of a tangent vector $u\in T_{\widehat z_{0,I}} \widehat \algebraicgroup{ Z}$ to its normal part $u_{\rm n}$. With this notation we then obtain from the definition of $\algebraicgroup{H}_I=\ker \rho$ that
\begin{equation} \label{character HI-0} \algebraicgroup{H}_I=\{ g \in \widehat\algebraicgroup{H}_I\mid (\forall v \in V_I^\times) \ [\gamma_{g,v}'(0)]_{\rm n}=v\} \,.\end{equation}
\begin{proof}[Proof of Theorem \ref{thm normal}] First recall that $\widehat \mathfrak{h}_I = \mathfrak{h}_I +\mathfrak{a}_I$ from \eqref{Lie hat HI}. As further $\algebraicgroup{ A}_{\algebraicgroup{H}}\subset \ker\underline{\rho}$ we see that $\underline{\rho}$ induces a representation of $\algebraicgroup{ A}_I$ on $V_{I,\mathbb{C}}$ which is given by the faithful standard representation $\underline{\rho}(a)(v)= a\cdot v$. In fact, if we denote by $\tilde a\in \algebraicgroup{ A}$ any lift of $a\in \algebraicgroup{ A}_{\algebraicgroup Z}$ for the projection $\pi:\algebraicgroup{ A}\to \algebraicgroup{ A}_Z$, then for $a\in\algebraicgroup{ A}_I$ we have $\rho(a)(v)= \tilde a \cdot \gamma_v'(0)= a\cdot v$. Notice that $\underline{\rho}(\algebraicgroup{ A}_I) \simeq \operatorname{diag}(k, \mathbb{C}^\times)$ within our identification $V_{I,\mathbb{C}}\simeq \mathbb{C}^k$. It follows in particular that $\mathfrak{a}_I \cap \operatorname{Lie}(H_I)=\{0\}$ and thus $\mathfrak{h}_I=\operatorname{Lie}(H_I)$ by Lemma \ref{limit normal}. This shows \eqref{1one1}.
\par Moving on to \eqref{2two2} we first observe that $\algebraicgroup{ P} \widehat\algebraicgroup{H} = \algebraicgroup{ P} \algebraicgroup{H}$ for any spherical subgroup $\algebraicgroup{H}$. In fact, since $\widehat\algebraicgroup{H}$ normalizes $\algebraicgroup{H}$, the set $\algebraicgroup{ P}\widehat\algebraicgroup{H}$ is the union of the double cosets $\algebraicgroup{ P}\widehat h\algebraicgroup{H}=\algebraicgroup{ P}\algebraicgroup{H}\widehat h$, $\widehat h\in \widehat\algebraicgroup{H}$, each of which is open. Since $\algebraicgroup{G}$ is connected, hence irreducible, any two open $(\algebraicgroup{ P},\algebraicgroup{H})$-double cosets intersect and therefore coincide, so that the identity $\algebraicgroup{ P} \widehat\algebraicgroup{H} = \algebraicgroup{ P} \algebraicgroup{H}$ follows. Equivalently, \begin{equation} \label{hat uH} \widehat \algebraicgroup{H} =(\algebraicgroup{ P}\cap\widehat\algebraicgroup{H})\algebraicgroup{H} \,.\end{equation}
We apply this to the spherical subgroup $\algebraicgroup{H}_I$. Now if $p \in \algebraicgroup{ P}\cap\widehat\algebraicgroup{H}_I$ then \begin{equation} \label{am fix} p\cdot \widehat z_{0,I}= \widehat z_{0,I}\, .\end{equation} Let $\widetilde \algebraicgroup{ A}_I:= \pi^{-1} (\algebraicgroup{ A}_I)$. Then \eqref{am fix} and the local structure theorem in the form of \eqref{LST-Ic} imply $p \in \algebraicgroup{ M}_{\algebraicgroup{H}} \widetilde \algebraicgroup{ A}_I\subset \algebraicgroup{H}_I \algebraicgroup{ A}_I$, and hence $\widehat\algebraicgroup{H}_I=\algebraicgroup{H}_I\algebraicgroup{ A}_I$ by \eqref{hat uH}.\end{proof}
In particular it follows from Theorem \ref{thm normal} that $\underline{\rho} (\widehat \algebraicgroup{H}_I)\simeq \operatorname{diag}(k, \mathbb{C}^\times)$ and thus for $g \in \widehat \algebraicgroup{H}_I$ that $\underline{\rho}(g)={\bf1}$ if and only if $\underline{\rho}(g)(v) =v$ for some $v\in V_I^\times$. Thus we obtain the following strengthening of \eqref{character HI-0} to
\begin{eqnarray}\label{character HI} \algebraicgroup{H}_I&=&\{ g \in \widehat\algebraicgroup{H}_I\mid [\gamma_{g,I}'(0)]_{\rm n}={\bf e}_I\}\, \\ \notag &=& \{ g \in \widehat\algebraicgroup{H}_I\mid [\gamma_{g,v}'(0)]_{\rm n}=v\} \quad(v\in V_I^\times)\, . \end{eqnarray}
\subsection{The part of the normal bundle which points to $Z$}\label{nb points to Z}
We denote by $A_I$ the identity component of $\algebraicgroup{ A}_I(\mathbb{R})$.
According to Theorem \ref{thm normal} there is the exact sequence \begin{equation} \label{exact1} {\bf1} \to \algebraicgroup{H}_I \to \widehat \algebraicgroup{H}_I\to \algebraicgroup{ A}_I\to {\bf1}\, .\end{equation} In (\ref{exact1}) we take real points, which is only left exact, and obtain
\begin{equation} \label{exact2} {\bf1} \to H_I \to \widehat H_I\to \algebraicgroup{ A}_I(\mathbb{R})\, .\end{equation}
The image of the last arrow in \eqref{exact2} is an open subgroup since taking real points is exact on the level of Lie algebras. We denote this open subgroup by $A(I)$ and record the exact sequence
\begin{equation} \label{exact3} {\bf1} \to H_I \to \widehat H_I\to A(I)\to {\bf1}\, .\end{equation} In particular, \begin{equation} A(I) = A_I F(I),\end{equation} where $F(I)<\{-1, 1\}^k \subset \algebraicgroup{ A}_I(\mathbb{R})$ is a subgroup of the $2$-torsion group $\{-1, 1\}^k$ of $\algebraicgroup{ A}_I(\mathbb{R})\simeq (\mathbb{R}^\times)^k $.
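The possible failure of surjectivity in \eqref{exact2} is the familiar phenomenon that taking real points is not right exact. A standard example, recorded here merely for illustration, is the squaring sequence of the multiplicative group $\mathbb{G}_m$ over $\mathbb{R}$,
$${\bf1}\to \mu_2 \to \mathbb{G}_m \xrightarrow{\;x\mapsto x^2\;} \mathbb{G}_m\to {\bf1}\,,$$
which on real points yields ${\bf1}\to\{\pm 1\}\to \mathbb{R}^\times \to \mathbb{R}^\times$ with image the open subgroup $\mathbb{R}^+$ of index $2$. The same mechanism underlies \eqref{exact3}: any open subgroup of $\algebraicgroup{ A}_I(\mathbb{R})\simeq (\mathbb{R}^\times)^k$ contains the identity component $A_I\simeq (\mathbb{R}^+)^k$ and is therefore of the form $A_I F'$ for a unique subgroup $F'$ of the $2$-torsion group $\{-1,1\}^k$; applied to $A(I)$ this gives the asserted equality $A(I)=A_IF(I)$.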
\begin{rmk} (a) The non-compact torus $A(I)\simeq \widehat H_I / H_I$ acts naturally on $Z_I=G/H_I$ from the right and thus commutes with the left $G$-action on $Z_I$. \par (b) Since $\algebraicgroup{ A}_I \subset \algebraicgroup{ A}_{\algebraicgroup Z}$ we obtain that $A(I)$ is naturally a subgroup of $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$. In particular we stress that it is not possible in general to realize $A(I)$ as a subgroup of $A=\algebraicgroup{ A}(\mathbb{R})\subset G$. \end{rmk}
We return to the normal bundle of the boundary orbit $\mathsf{Y}=\widehat Z_I$:
$$N_\mathsf{Y}= G \times_{G_y}V_I =G\times_{\widehat H_I} V_I\, .$$ From \eqref{exact3} we obtain that \begin{equation}\label{im rho} \rho(\widehat H_I){\bf e}_I =A(I)\cdot {\bf e}_I=F(I) \cdot V_I^0\, .\end{equation}
Recall the set $F_I= F\cap \algebraicgroup{ A}_I(\mathbb{R})\subset \{-1, 1\}^k$ with $F(I)\subset F_I$. We define an $A(I)$-stable open cone in $V_I$ by
\begin{equation} \label{chamber union} V_{Z,I} = F_I \cdot V_I^0= F_I\cdot(\mathbb{R}^+)^k\, ,\end{equation} and we define the cone-bundle
\begin{equation} \label{pointed normal bundle} N_\mathsf{Y}^Z:= G \times_{G_y} V_{Z,I}\end{equation} as part of the normal bundle $N_\mathsf{Y}$ which {\it points to} $Z$. To explain the term "points to $Z$" we recall the curves $\gamma_v$ and note that $\gamma_v\big((0,\epsilon)\big)\subset Z$ if and only if $a(v)\in A(I)$, that is $a(v)\cdot {\bf e}_I =v\in F_I V_I^0$.
\par Observe that the coset space $\sF_I:=F(I) \backslash F_I $ identifies with the orbit space $N_\mathsf{Y}^Z/G$. For every $\mathsf{t}=F(I)t\in \sF_I$ with $t\in F_I$ we now set
$$N_\mathsf{Y}^{Z,\mathsf{t}}:= G \times_{G_y} [F(I)t \cdot V_I^0] $$ and note that $$N_\mathsf{Y}^Z=\coprod_{\mathsf{t}\in \sF_I} N_\mathsf{Y}^{Z,\mathsf{t}}$$ is the disjoint decomposition into $G$-orbits.
\par Presently we do not have a good understanding of $F(I)$ and the coset space $\sF_I=F(I)\backslash F_I$, except when $G$ is complex, where $F_I=F(I)$ for all $I\subset S$. Here are two further instructive examples:
\begin{ex}\label{ex=SL2} (cf. \cite[Ex. 14.6]{KK}) (a) Let $G=\operatorname{SL}(2,\mathbb{R})$ and $H=\operatorname{SO}(1,1)$. We identify $Z=G/H$ with the one sheeted hyperboloid
$$Z=\{ (x_1, x_2, x_3)\in \mathbb{R}^3\mid x_1^2 - x_2^2 -x_3^2 = -1\}\, .$$ We note that $Z=\algebraicgroup{ Z}(\mathbb{R})$ and we embed $Z$ into the projective space $\mathbb{P}(\mathbb{R}^4)$. The closure of $Z$ in projective space is given by
$$\widehat Z=\{ [x_1, x_2, x_3, x_4] \in \mathbb{P}(\mathbb{R}^4) \mid x_1^2 + x_4^2 = x_2^2 + x_3^2\}\simeq \mathbb{S}^1 \times \mathbb{S}^1$$ and coincides with the wonderful compactification $\widehat \algebraicgroup{ Z}(\mathbb{R})$. In the identification $\widehat Z=\mathbb{S}^1\times \mathbb{S}^1$ from above, the unique closed $G$-orbit is given by $\mathsf{Y}= \{ {\bf1}\} \times \mathbb{S}^1$ and $$ \widehat Z= Z \cup \mathsf{Y}\, .$$ In particular both directions of the normal bundle $N_\mathsf{Y}$ point to $Z$. In our notation above this means that $F_I=\{ -1, 1\}$, $F(I)=\{1\}$ and $$N_\mathsf{Y}=N_\mathsf{Y}^Z= N_\mathsf{Y}^{Z, +1}\amalg N_\mathsf{Y}^{Z, -1}\, .$$ \par (b) The situation becomes different when we consider $G=\operatorname{SL}(2,\mathbb{R})$ with $H=\operatorname{SO}(2)$. We identify $Z=G/H$ with the upper component of the two sheeted hyperboloid $\algebraicgroup{ Z}(\mathbb{R})$, in formulae:
$$Z=\{ (x_1, x_2, x_3)\in \mathbb{R}^3\mid x_1^2 - x_2^2 -x_3^2 = 1, x_1>0\}\, ,$$ and $$\algebraicgroup{ Z}(\mathbb{R})=\{ (x_1, x_2, x_3)\in \mathbb{R}^3\mid x_1^2 - x_2^2 -x_3^2 = 1\}\, .$$ We emphasize that $\algebraicgroup{ Z}(\mathbb{R})$ has two connected components, one of them being $Z$. As before we view $\algebraicgroup{ Z}(\mathbb{R})$ in the projective space $\mathbb{P}(\mathbb{R}^4)$ and obtain the wonderful compactification $\widehat \algebraicgroup{ Z}(\mathbb{R})$ as the closure
$$\widehat \algebraicgroup{ Z}(\mathbb{R})=\{ [x_1, x_2, x_3, x_4] \in \mathbb{P}(\mathbb{R}^4) \mid x_1^2 = x_2^2 + x_3^2+x_4^2\}\simeq \mathbb{S}^2\, . $$ The unique closed orbit $\mathsf{Y}=\mathbb{S}^1$ is identified with the great circle $\mathbb{S}^1\subset\mathbb{S}^2$ which divides $\widehat \algebraicgroup{ Z}(\mathbb{R})$ into the two open $G$-orbits. In particular, only one direction of the normal bundle $N_\mathsf{Y}$ points to $Z$. We obtain that $F_I=F(I)=\{1\}$ with $$N_\mathsf{Y} \supsetneq N_\mathsf{Y}^Z\, .$$ By this we end Example \ref{ex=SL2}. \end{ex}
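For the convenience of the reader we indicate how the two projective closures in Example \ref{ex=SL2} may be verified. Embedding $\mathbb{R}^3\hookrightarrow \mathbb{P}(\mathbb{R}^4)$ via $(x_1,x_2,x_3)\mapsto [x_1,x_2,x_3,1]$ and homogenizing the defining equation of the hyperboloid gives in case (a)
$$x_1^2-x_2^2-x_3^2=-x_4^2\,, \quad\text{i.e.}\quad x_1^2+x_4^2=x_2^2+x_3^2\,,$$
and in case (b)
$$x_1^2-x_2^2-x_3^2=x_4^2\,, \quad\text{i.e.}\quad x_1^2=x_2^2+x_3^2+x_4^2\,.$$
In case (a) the locus $\{x_4\neq 0\}$ is precisely $Z$, in case (b) it is $\algebraicgroup{ Z}(\mathbb{R})$; in both cases the remaining locus $\{x_4=0\}$ is a circle and coincides with the closed orbit $\mathsf{Y}$ described above.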
Define
$$Z_I:=G/H_I$$ and write $z_{0,I}=H_I$ for its standard base point.
Let $\mathsf{t} \in \sF_I$ and fix a representative $t\in F_I$ so that $\mathsf{t} = F(I)t$. We then claim that
$$ G/H_I\to N_\mathsf{Y}^{Z,\mathsf{t}}, \ \ gH_I\mapsto [g, t\cdot {\bf e}_I]$$ defines a $G$-equivariant diffeomorphism for each $\mathsf{t}\in \sF_I$. In fact, with $A(I)\simeq F(I)V_I^0$ via $a\mapsto a\cdot {\bf e}_I$, this follows from:
\begin{equation} \label{N-ident} N_\mathsf{Y}^{Z,\mathsf{t}}\simeq G/H_I \times_{A(I)} (A(I) \cdot t \cdot {\bf e}_I) \simeq G/H_I \times_{A_I} V_I^0\simeq G/H_I\, .\end{equation}
\subsection{Speed of convergence} Next we wish to describe
a quantitative version of the fact that $\algebraicgroup{H}_I$ asymptotically preserves normal limits,
i.e.~of \eqref{character HI}.
For that recall the curves $\gamma_{g,v}$.
\begin{lemma}\label{speed lemma} Let $g\in \widehat \algebraicgroup{H}_I$ and $v\in V_I^\times$. Then there exists a smooth curve $[0,\epsilon)\to \algebraicgroup{ P}, \ s\mapsto p_s$ such that $$ \gamma_{g,v}(s)= p_s\cdot \gamma_v(s)\qquad (s\in [0,\epsilon)\,)$$ and: \begin{enumerate} \item \label{curve1}$p_0 \in \algebraicgroup{ A}_I$. \item\label{curve2} If $g\in \algebraicgroup{H}_I$ then $p_0={\bf1}$. \item \label{curve3} If $g\in \widehat H_I$ we can assume that $p_s\in P$. \end{enumerate} \end{lemma}
\begin{proof} Note that $g\cdot \widehat z_{0,I}=\widehat z_{0,I}$ by assumption, and hence $\gamma_{g,v}(s)\to \widehat z_{0,I}$ for $s\to 0^+$ in a smooth fashion. \par The local structure theorem gives us coordinates near $\widehat z_{0,I}$, see
\eqref{LST-Ic}. In particular, it implies that we can find a smooth curve $s\mapsto \tilde p_s\in \algebraicgroup{ P}$ such that $\tilde p_s\cdot \gamma_v(s) = \gamma_{g,v}(s)$. Note that $\tilde p_0\cdot \widehat z_{0,I} =\widehat z_{0,I}$ and hence $\tilde p_0 \in \algebraicgroup{ A}_I (\algebraicgroup{ P}\cap \algebraicgroup{H})$ by \eqref{LST-Ic}. With that we obtain an element $p_H\in \algebraicgroup{ P}\cap \algebraicgroup{H}$ such that $p_s:= \tilde p_s p_H$ satisfies \eqref{curve1}. Here we used the fact that $\gamma_v=\gamma_{p,v}$ for all $p\in \algebraicgroup{ P}\cap \algebraicgroup{H}$. \par We move on to \eqref{curve2}. For that we recall the decomposition of the tangent space $$T_{\widehat z_{0,I}} \widehat \algebraicgroup{ Z}= T_{\widehat z_{0,I}} \widehat \algebraicgroup{ Z}_I \oplus V_{I,\mathbb{C}}$$ and the normal part $u_{\rm n}\in V_{I,\mathbb{C}}$ of a tangent vector $u\in T_{\widehat z_{0,I}} \widehat \algebraicgroup{ Z}$. Now if $g\in \algebraicgroup{H}_I$, then by the definition of $\algebraicgroup{H}_I$ as the kernel of the isotropy representation, we obtain that $[\gamma_{g,v}'(0)]_{\rm n} = v$. On the other hand, using the identity $\gamma_{g,v}(s) = p_s\cdot \gamma_v(s)$ we obtain $[\gamma_{g,v}'(0)]_{\rm n}= p_0 \cdot v$. As $p_0\in \algebraicgroup{ A}_I$, this implies $p_0={\bf1}$. \par The last assertion \eqref{curve3} is proved using the real version of the argument for \eqref{curve1}. \end{proof}
Let $d_\algebraicgroup{G}$ be the distance function of a left invariant Riemannian metric on $\algebraicgroup{G}$. Then the quantitative version of \eqref{character HI} reads as follows:
\begin{cor} \label{lemma H_I limit} Let $X_I\in \mathfrak{c}_I^{--}\subset \mathfrak{a}_I^{--}$ correspond to $-{\bf e}_I\in V_I$ in the identification $V_I \simeq \mathfrak{a}_I$. Set $a_t:= \exp(tX_I)$ for $t\geq 0$. Let $h_I \in \algebraicgroup{H}_I$. Then there exist constants $C,\epsilon, t_0>0$ and for each $t\ge t_0$ an element $x_t \in \algebraicgroup{ P}$ such that $d_\algebraicgroup{G}(x_t, {\bf1}) \leq C e^{-\epsilon t}$ and \begin{equation} \label{normal-approx} h_I a_t\cdot z_0 = x_t a_t \cdot z_0\, . \end{equation} If furthermore $h_I\in H_I$, then we can choose $x_t\in P$. \end{cor} \begin{proof} Apply the lemma to $g=h_I$ and $v={\bf e}_I$. Set $x_t:= p_{e^{-t}}$ and use that $p_0={\bf1}$ and $s\mapsto p_s$ is right differentiable at $s=0$.
\end{proof}
\subsection{The intersection of $\algebraicgroup{H}_I$ with $\algebraicgroup{ L}$}
\par For later reference we record the following fact, which is more or less immediate from \eqref{LST-Ic}. Since it is crucial for the paper we include a detailed argument.
\begin{lemma}\label{equal L cap H} For all $I\subset S$ one has \begin{equation} \label{same L cap H} \algebraicgroup{ L} \cap \algebraicgroup{H} =\algebraicgroup{ L}\cap \algebraicgroup{H}_I \, .\end{equation} \end{lemma} \begin{proof} First note that $\algebraicgroup{ L}= \algebraicgroup{ M} \algebraicgroup{ A} \algebraicgroup{ L}_{\rm n}$ and from $\algebraicgroup{ L}_{\rm n} \subset \algebraicgroup{H}\cap \algebraicgroup{H}_I$ we obtain that $\algebraicgroup{H}\cap \algebraicgroup{ L}= \algebraicgroup{ L}_{\rm n}[(\algebraicgroup{ M}\algebraicgroup{ A})\cap \algebraicgroup{H}]$ and likewise $\algebraicgroup{H}_I \cap \algebraicgroup{ L} = \algebraicgroup{ L}_{\rm n}[(\algebraicgroup{ M}\algebraicgroup{ A})\cap \algebraicgroup{H}_I]$. Hence it suffices to show that $\algebraicgroup{H}\cap (\algebraicgroup{ M}\algebraicgroup{ A} )= \algebraicgroup{H}_I \cap (\algebraicgroup{ M}\algebraicgroup{ A})$. \par We first show that $\algebraicgroup{H}\cap (\algebraicgroup{ M}\algebraicgroup{ A}) \subset \algebraicgroup{H}_I \cap (\algebraicgroup{ M}\algebraicgroup{ A})$. For that we recall the isotropy representation $\rho$ which we view here as a representation of $\widehat \algebraicgroup{H}_I$ so that $\algebraicgroup{H}_I=\ker \rho$. Recall the curves $\gamma_X$ from the proof of Lemma \ref{limit normal}. Now for $g\in (\algebraicgroup{ M}\algebraicgroup{ A})\cap H$ we have $g\gamma_X(s) = \gamma_X(s)$ and thus $g\gamma_X'(0)= \gamma_X'(0)$. Hence $g\in \ker \rho= \algebraicgroup{H}_I$ and "$\subset$" is established. \par For the converse inclusion we first note that both $\algebraicgroup{H}\cap (\algebraicgroup{ M}\algebraicgroup{ A} )$ and $\algebraicgroup{H}_I \cap (\algebraicgroup{ M}\algebraicgroup{ A})$ are elementary algebraic groups (see \cite{KK} or \cite[Appendix B]{DKS} for the notion "elementary").
Together with $\mathfrak{l} \cap \mathfrak{h} = \mathfrak{l}\cap \mathfrak{h}_I$ (which we obtain from \eqref{eq-hi1} )
we infer that $\algebraicgroup{H}\cap (\algebraicgroup{ M}\algebraicgroup{ A} )$ and $\algebraicgroup{H}_I \cap (\algebraicgroup{ M}\algebraicgroup{ A})$ have the same Lie algebra, namely $[\mathfrak{m}_H + \mathfrak{a}_H]_\mathbb{C}$. Further as $\algebraicgroup{ M} \algebraicgroup{ A}$ is an elementary group we obtain $\algebraicgroup{H}_I \cap (\algebraicgroup{ M}\algebraicgroup{ A}) = (\algebraicgroup{ M} \cap \algebraicgroup{H}_I) (\algebraicgroup{ A}_{\algebraicgroup{H}_I})_0$, see \cite[Appendix B]{DKS}.
From $\mathfrak{a}\cap \mathfrak{h} =\mathfrak{a} \cap \mathfrak{h}_I$ again obtained from \eqref{eq-hi1} we derive $(\algebraicgroup{ A}_{\algebraicgroup{H}_I})_0=(\algebraicgroup{ A}_{\algebraicgroup{H}})_0$. Hence we only need to show that $\algebraicgroup{ M}\cap \algebraicgroup{H}_I \subset \algebraicgroup{ M}\cap \algebraicgroup{H}$. Let now $m \in \algebraicgroup{ M}\cap \algebraicgroup{H}_I$. In particular $m\in \widehat \algebraicgroup{H}_I$ fixes $\widehat z_{0,I}$ and thus we obtain from \eqref{LST-I} that $m \in \algebraicgroup{ M}_\algebraicgroup{H} F_{M,I}$. Hence we may assume that $m \in F_{M,I}$. From $\rho(m)={\bf1}$ we then obtain that $m\in F_{M,I}\subset \{-1, 1\}^k$ needs to have all coordinates to be $1$, i.e. $m={\bf1}$ and the proof is complete. \end{proof}
\subsection{The structure of $\algebraicgroup{ Z}_I(\mathbb{R})$}\label{structure of Z_I} From Theorem \ref{thm normal} and Proposition \ref{prop independence choice} we obtain:
\begin{lemma} For any $I\subset S$,
the $\algebraicgroup{G}$-isomorphism class of the variety $\algebraicgroup{ Z}_I$ is canonically attached to $\algebraicgroup{ Z}$, i.e.
independent of the particular smooth toroidal compactification $\widehat \algebraicgroup{ Z}=\algebraicgroup{ Z}(\mathcal{F})$ of $\algebraicgroup{ Z}$.
\end{lemma}
In particular, it follows that up to $G$-isomorphism $\algebraicgroup{ Z}_I(\mathbb{R})$ is canonically attached to $\algebraicgroup{ Z}(\mathbb{R})$. However, for $Z_I$ the situation is different. We recall the shifted base points $z_w=w\cdot z_0$ and $\widehat z_{w,I}$ from Subsection \ref{subsection rel open P-orbits}. For $I\subset S$ we then define the set of $G$-orbits
$$\sC_I:=\{ G\cdot \widehat z_{w,I}\mid w\in \mathcal{W}\},$$ and note that different orbits in $\sC_I$ may not be isomorphic, see Example \ref{ex SL3} below. In particular, the isomorphism class of $Z_I=G/H_I$ is not canonically attached to $Z$. In this sense only the collection of $G$-spaces $\{ G/(H_w)_I \mid w\in \mathcal{W}\}$
(where $(H_w)_I$ is obtained from the stabilizer $(\widehat{H_w})_I$ of $\widehat z_{w,I}$ in the same way as $H_I$ was obtained from $\widehat H_I$) is canonically attached to the $G$-space $Z=G/H$.
\begin{rmk}\label{rmk DI} In case $\widehat \algebraicgroup{ Z}$ is wonderful the set $\sC_I$ equals the set of $G$-orbits in $\partial Z \cap \widehat \algebraicgroup{ Z}_I(\mathbb{R})$. This follows from Lemma \ref{lemma rel open}. The general case is a bit more complicated, see Remark \ref{remark rel open} (b). Recall the boundary points $\widehat z_{w,\mathcal{C}}$ from \eqref{zwC}. Then \begin{eqnarray*}\sD_I&:=&\{ G \cdot \widehat z_{w,\mathcal{C}} \mid G\cdot \widehat z_{w,\mathcal{C}}\subset \widehat \algebraicgroup{ Z}_I(\mathbb{R}), w\in \mathcal{W}, \mathcal{C}\in \mathcal{F}\}\\ &=&\{G \cdot \widehat z_{w,\mathcal{C}}\mid w\in \mathcal{W}, \ \mathcal{C}\in \mathcal{F}_I\}\end{eqnarray*}
yields all $G$-orbits in $\partial Z\cap \widehat \algebraicgroup{ Z}_I(\mathbb{R})$. \end{rmk}
For $\mathsf{c}\in\sC_I$ we set
$$\mathcal{W}_\mathsf{c}:=\{w\in \mathcal{W}\mid G\cdot \widehat z_{w,I}=\mathsf{c}\}$$ and obtain the partition \begin{equation} \label{party W} \mathcal{W}=\coprod_{\mathsf{c} \in \sC_I} \mathcal{W}_\mathsf{c}\, .\end{equation} Given $\mathsf{c}\in\sC_I$ we choose a representative $w(\mathsf{c})\in \mathcal{W}_\mathsf{c}$. In case $\mathsf{c}= G\cdot \widehat z_{0,I}=\widehat Z_I$ we make the request that $w(\mathsf{c})={\bf1}$. We then define $$H_{I,\mathsf{c}}:= (H_{w(\mathsf{c})})_I.$$ We will see in Lemma \ref{lemma HI comp} that the $G$-conjugacy class of $H_{I,\mathsf{c}}$ is independent of the representative $w(\mathsf{c})$ used for its definition.
\begin{ex} \label{ex SL3} Consider $\algebraicgroup{ Z}= \operatorname{SL}(3,\mathbb{C})/\operatorname{SO}(3,\mathbb{C})$ which is defined over $\mathbb{R}$. We will use the identification $$\algebraicgroup{ Z}= \operatorname{Sym}(3\times 3, \mathbb{C})_{\det =1}$$ with $\operatorname{Sym}$ denoting the symmetric matrices. Hence $$\algebraicgroup{ Z}(\mathbb{R}) = G/ K\amalg G/H$$ consists of two $G$-orbits with $K=\operatorname{SO}(3,\mathbb{R})$ and $H=\operatorname{SO}(1,2)$, both real forms of $\algebraicgroup{H} = \operatorname{SO}(3,\mathbb{C})$. Our interest is here with $Z=G/H$. If we identify $\algebraicgroup{ A}_{\algebraicgroup Z}$ with the diagonal matrices in $\algebraicgroup{ Z}$, then $F_\mathbb{R}$, the $2$-torsion group of $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$, is given by
$$F_\mathbb{R}=\{ (1,1,1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)\} = \{t_0, t_1, t_2, t_3\}$$ which in this case parametrize the open $P$-orbits in $\algebraicgroup{ Z}(\mathbb{R})$ -- we have $F_M=\{1\}$ in this example. Notice that $t_0 \in G/K$ whereas $t_1, t_2, t_3\in Z=G/H$. In particular $F=\{t_1, t_2, t_3\}$ is not a group. Let us denote by $w_1, w_2, w_3\in G$ lifts of $t_i$ to $G$ so that $\mathcal{W}=\{ w_1, w_2, w_3\}$.
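Let us briefly justify the assertion on the $t_i$ by elementary linear algebra: in the model $\operatorname{Sym}(3\times 3, \mathbb{R})_{\det =1}$ the group $G=\operatorname{SL}(3,\mathbb{R})$ acts by $g\cdot X = gXg^\top$, and by Sylvester's law of inertia (adapted to determinant one) the signature is a complete invariant for this action. Hence the positive definite element $t_0={\bf1}$ lies in the orbit $G\cdot {\bf1}\simeq G/K$, whereas $t_1, t_2, t_3$ have signature $(1,2)$ and thus lie in the orbit $Z=G\cdot t_1\simeq G/H$, the stabilizer of $t_1$ being $H=\operatorname{SO}(1,2)$.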
In this case the spherical roots comprise a system of type $\sA_2$. With $I=\{\alpha_2\}$ we can take $a_t = \operatorname{diag}(t^{-2}, t, t)$ for our ray.
\par Our example $\algebraicgroup{ Z}$ has a wonderful compactification which is given by the closure of the image of its standard embedding into projective space $$ \algebraicgroup{ Z}=\operatorname{Sym}(3\times 3, \mathbb{C})_{\det =1}\to {\mathbb P}( \operatorname{Sym}(3\times 3, \mathbb{C})\times \operatorname{Sym}(3\times 3, \mathbb{C}))$$ $$X\mapsto \mathbb{C}\cdot (X,X^{-1}). $$
\par Note $H_{w_1}=H$ and an elementary calculation in the above model for $\widehat \algebraicgroup{ Z}$ yields
$$ H_I = (H_{w_1})_I = S (\operatorname{O}(1) \operatorname{O}(2)) U_I\quad\text{and} \quad (H_{w_2})_I = (H_{w_3})_I=S (\operatorname{O}(1) \operatorname{O}(1,1)) U_I$$ where $$U_I = \begin{pmatrix} 1 & & \\ * & 1 & \\ * & & 1\end{pmatrix}\subset G\, .$$ In particular, we see $H_{I,{\mathsf 1}}:=H_I$ is not conjugate to $H_{I,{\mathsf 2}}:= (H_{w_2})_I$.
\par We further note that $\widehat \algebraicgroup{ Z}_I(\mathbb{R})= \partial Z\cap \widehat \algebraicgroup{ Z}_I(\mathbb{R})$ consists of two $G$-orbits $\widehat Z_{I,{\mathsf 1}}= G/\widehat H_{I,{\mathsf 1}}$ and $\widehat Z_{I,{\mathsf 2}}= G/ \widehat H_{I,{\mathsf 2}}$, and accordingly $\sC_I \simeq \{{\mathsf 1} , {\mathsf 2}\}$ has two elements. Note that $\widehat Z_I(\mathbb{R})$ is $G$-isomorphic to the projective space of the rank two real symmetric matrices. Within this identification $\widehat Z_{I,{\mathsf 1}}\subset \widehat Z(\mathbb{R})$ consists of the rank two symmetric matrices (viewed projectively) with equal signature (i.e. $0++$ or $0--$), and $\widehat Z_{I,{\mathsf 2}}\subset \widehat Z(\mathbb{R})$ of the rank two symmetric matrices with signature $0+- $. Finally note that $\mathcal{W}_{\mathsf 1}=\{w_1\}$ and $\mathcal{W}_{\mathsf 2}=\{ w_2, w_3\}$. \end{ex}
\section{Open $P$-orbits on $Z_I$ and $Z$}\label{subsection WI}
Recall the set $\mathcal{W}\subset G$ of representatives for $W =(P\backslash Z)_{\rm open}$. Let $W_I =(P\backslash Z_I)_{\rm open}$, the set of open $P$-orbits in $Z_I$. The objective of this section is to obtain a good set $\mathcal{W}_I$ of representatives for $W_I$ which results in a natural injective map ${\bf m}: \mathcal{W}_I \to \mathcal{W}$ (or $W_I \to W$ if one wishes), and thus matches each open $P$-orbit in $Z_I$ with a particular open $P$-orbit in $Z$. This map is important for various constructions of the paper. \par In general the map ${\bf m}$ is not surjective and this originates from the fact that $Z_I$ is only one $G$-orbit in $\algebraicgroup{ Z}_I(\mathbb{R})$ which points to $Z$. We will show in this section that the $G$-orbits in $\algebraicgroup{ Z}_I(\mathbb{R})$ which point to $Z$ are given by
\begin{equation} \label{normal union} \widetilde Z_I :=\coprod_{\mathsf{c} \in \sC_I} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} Z_{I,\mathsf{c}, \mathsf{t}}\, . \end{equation} Here $Z_{I,\mathsf{c},\mathsf{t}}\simeq Z_{I,\mathsf{c}}= G/ H_{I,\mathsf{c}}$ for all $\mathsf{t} \in \sF_{I,\mathsf{c}}$ with $\sF_{I,\mathsf{c}}$ the set corresponding to $\sF_I=F_I/ F(I)$ when $Z_I$ is replaced by $Z_{I,\mathsf{c}}$. For every pair $\mathsf{c},\mathsf{t}$ this then leads to an injective matching map ${\bf m}_{\mathsf{c},\mathsf{t}}: \mathcal{W}_{I,\mathsf{c}} \to \mathcal{W}$ with $\mathcal{W}_{I,\mathsf{c}}$ a parameter set for $W_{I,\mathsf{c}}=(P\backslash Z_{I,\mathsf{c}})_{\rm open}$. The case of $\mathsf{c}=\mathsf{t}={\bf1}$ corresponds to the original map ${\bf m}={\bf m}_{{\bf1},{\bf1}}$. The decomposition \eqref{normal union} then leads to a partition \begin{equation} \label{W partition} \mathcal{W}=\coprod_{\mathsf{c}\in \sC_I} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c}, \mathsf{t}}(\mathcal{W}_{I,\mathsf{c}}) \end{equation} refining \eqref{party W}.
\par This section has several parts. It starts with the construction of the injective map ${\bf m}: \mathcal{W}_I \to \mathcal{W}$. For a better understanding of the matching map ${\bf m}$ we then illustrate the case where $Z$ is a symmetric space and relate ${\bf m}$ to Matsuki's description \cite{Mat1} of the open $P\times H$-double cosets in $G$ in terms of Weyl groups. After that we derive the general partition of $\mathcal{W}$ in terms of the ${\bf m}_{\mathsf{c},\mathsf{t}}$. This last part is a bit more technical and can be skipped at a first reading. \par Throughout this section $I\subset S$ is fixed.
\subsection{Relating $W_I$ to $W$}\label{subsection 5.1} We recall from Lemma \ref{lemmaW1} the natural bijection of $W_\mathbb{R}=(P\backslash \algebraicgroup{ Z}(\mathbb{R}))_{\rm open}$ with $F_M\backslash F_\mathbb{R}$ where $F_\mathbb{R}= \algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})_2$ denotes the $2$-torsion subgroup of $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$. On the other hand we recall from Lemma \ref{equal L cap H} that $\algebraicgroup{ L}\cap \algebraicgroup{H}= \algebraicgroup{ L} \cap \algebraicgroup{H}_I$. Intersecting this identity with $\algebraicgroup{ A}$ we obtain that $\algebraicgroup{ A}\cap \algebraicgroup{H}= \algebraicgroup{ A}\cap \algebraicgroup{H}_I$ and hence an identity of homogeneous spaces $$ \algebraicgroup{ A}_{\algebraicgroup Z}= \algebraicgroup{ A}/ \algebraicgroup{ A}\cap \algebraicgroup{H} = \algebraicgroup{ A}/ \algebraicgroup{ A}\cap \algebraicgroup{H}_I =\algebraicgroup{ A}_{\algebraicgroup{ Z}_I}\, .$$ In particular, $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$ and $\algebraicgroup{ A}_{\algebraicgroup{ Z}_I}(\mathbb{R})$ have the same $2$-torsion groups, namely $F_\mathbb{R}$. In addition
$\algebraicgroup{ L}\cap \algebraicgroup{H}= \algebraicgroup{ L} \cap \algebraicgroup{H}_I$ implies
that the two open $\algebraicgroup{ P}$-orbits $\algebraicgroup{ P}\cdot z_0 \subset \algebraicgroup{ Z}$ and $\algebraicgroup{ P}\cdot z_{0,I}$ carry canonically isomorphic local structure theorems, see \eqref{LST1}, \eqref{deco L} and \eqref{LST2}. Hence the group $F_M$ is identical in both cases and we obtain a natural bijection (the identity map) $${\bf m}_\mathbb{R}: W_{I,\mathbb{R}}=(P\backslash \algebraicgroup{ Z}_I(\mathbb{R}))_{\rm open} \to (P\backslash \algebraicgroup{ Z}(\mathbb{R}))_{\rm open}=W_\mathbb{R}\, .$$
\begin{rmk} On the one hand we have an identity of homogeneous spaces $\algebraicgroup{ A}_{\algebraicgroup Z}=\algebraicgroup{ A}_{\algebraicgroup{ Z}_I}$, but on the other hand we also view $\algebraicgroup{ A}_{\algebraicgroup Z}$ as a subvariety of $\algebraicgroup{ Z}$ and $\algebraicgroup{ A}_{\algebraicgroup{ Z}_I}$ as a subvariety of $\algebraicgroup{ Z}_I$. In the latter picture the identity of homogeneous spaces yields a natural identification of subvarieties of $\algebraicgroup{ Z}$ and $\algebraicgroup{ Z}_I$. \end{rmk}
\begin{prop} \label{prop m} One has ${\bf m}_\mathbb{R}(W_I)\subset W$. \end{prop}
In order to prove this proposition we first recall another natural map ${\bf m}: \mathcal{W}_I \to \mathcal{W}$ which first arose in \cite[Sect.~3]{KKS2}. We fix a set of representatives $\mathcal{W}_I\subset G$ of $W_I$ whose elements $w_I\in \mathcal{W}_I$ are of the form $w_I =\tilde t_I h_I $ where $h_I \in \algebraicgroup{H}_I$ and $\tilde t_I \in T_Z$. Upon the identification of varieties $\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})\simeq \algebraicgroup{ A}_{\algebraicgroup{ Z}_I}(\mathbb{R})$ we view $t_I:=\tilde t_I \cdot z_{0,I}=w_I\cdot z_{0,I}$ as an element of $F_\mathbb{R}$.
\par Let now $Pw_I\cdot z_{0,I}\in W_I$ be an open $P$-orbit in $Z_I$ with $w_I\in \mathcal{W}_I$. Next let $X\in \mathfrak{a}_I^{--}$ and set $$a_s:=\exp(sX)\in A_I^{--}\subset A \qquad (s>0)\, .$$ It follows from \cite[Lemma 3.9]{KKS2} that there exist $s_0=s_0(X)>0 $ and a unique $w=\tilde th\in \mathcal{W}$ such that \begin{equation} \label{eq-corr}P w_I a_s \cdot z_0 = Pw \cdot z_0\qquad (s\geq s_0)\, . \end{equation}
\begin{lemma} Given $w_I= \tilde {t_I} h_I\in\mathcal{W}_I$ as above, the element $w\in \mathcal{W}$ such that \eqref{eq-corr} holds
does not depend on the choice of $X\in \mathfrak{a}_I^{--}$. \end{lemma} \begin{proof} In order to record the possible dependence on $X$ we write $a_s(X)=\exp(sX)$ and $w(X)$ for the corresponding $w$. Now we recall the argument of \cite[Lemma 3.9]{KKS2}: For fixed $X$ we have $\lim_{s\to \infty} e^{s\operatorname{ad} X} \mathfrak{h}= \mathfrak{h}_I$ by \eqref{I-compression}. Thus there exists an $s_0(X)$ such that $\mathfrak{p} + \operatorname{Ad}(w_I) e^{s\operatorname{ad} X}\mathfrak{h} =\mathfrak{g}$ for all $s\geq s_0(X)$. In particular, we obtain that $Pw_I a_s(X) \cdot z_0$ is open for all $s\geq s_0(X)$. Since the limit \eqref{I-compression} is locally uniform in $X\in \mathfrak{a}_I^{--}$, it follows that $w(X)$ is locally constant. Since $\mathfrak{a}_I^{--}$ is connected, the lemma follows. \end{proof}
With this lemma we obtain in particular a natural map \begin{equation}\label{bfm} {\bf m}: \mathcal{W}_I \to \mathcal{W}, \ w_I\mapsto w={\bf m} (w_I)\, . \end{equation} With the identifications $W_I \simeq \mathcal{W}_I$ and $W\simeq \mathcal{W}$ we view ${\bf m}$ also as a map $W_I \to W$, which by slight abuse of notation is denoted as well by ${\bf m}$. \par Since the choice of $X\in \mathfrak{a}_I^{--}$ was irrelevant for the definition of ${\bf m}$ we may henceforth assume that $X=X_I\in \mathfrak{c}_I^{--}\subset \mathfrak{a}_I^{--}$ was such that it corresponds to $-{\bf e}_I\in V_I$ under the identification $V_I \simeq \mathfrak{a}_I$. \par Proposition \ref{prop m} will now follow from:
\begin{lemma} \label{lemma m} ${\bf m}_\mathbb{R} |_{W_I} = {\bf m}$.\end{lemma}
\begin{proof} Let $w_I=\tilde t_I h_I\in\mathcal{W}_I$ and $w={\bf m}(w_I)= \tilde t h$. From Lemma \ref{speed lemma} for $g=h_I \in \algebraicgroup{H}_I$ we obtain a $C^1$-curve $[0,1]\to \algebraicgroup{ P}, \ u\mapsto p_u$ with $p_u\to {\bf1}$ for $u\to 0^+$ and $$ h_I a_s \cdot z_0 = p_{e^{-s}} a_s \cdot z_0 \qquad (s\geq s_0)\, .$$ Hence with $ p'_s:= \tilde t_I p_{e^{-s}} \tilde t_I^{-1}$ we obtain that $$ w_I a_s\cdot z_0 = p_s' a_s \cdot t_I\in Z\, .$$ Since $a_s\cdot t_I \in Z$ and $p_s'\to {\bf1}$ for $s\to \infty$ we may assume that $p_s'\in P$ is real as well (use the local structure theorem of the form \eqref{LST3}). On the other hand the matching property \eqref{eq-corr} yields $$w_I a_s\cdot z_0 = p_s'' \cdot t$$ for some $p_s'' \in P$. Thus, as $a_s\in A\subset P$, we get $$ P\cdot t_I=P\cdot t.$$ This implies the lemma, and with that Proposition \ref{prop m}. \end{proof}
In the sequel we adjust $\mathcal{W}\subset G$ (by possibly multiplying the previous $w = \tilde t h$ by an element of $F_M$) in such a way that for each $w_I =\tilde t_I h_I\in\mathcal{W}_I$ one has \begin{equation}\label{t=t_I} t=t_I \qquad \text{when} \ w=\tilde t h = {\bf m}(w_I)\, .\end{equation} We note that this adjustment of $\mathcal{W}$ depends on our fixed choice of $\mathcal{W}_I$ and hence on $I$.
\par \begin{rmk} \label{rmk ZZI}Notice that we typically have ${\bf m}(\mathcal{W}_I)\subsetneq \mathcal{W}$ as the example of $Z=\operatorname{SL}(2,\mathbb{R})/\operatorname{SO}(1,1)$ with $I=\emptyset$ already shows (cf. Example \ref{ex=SL2}\,(a)). Here we have $H_\emptyset=M \overline N$ and $\widehat H_\emptyset= MA \overline N$ and thus $\mathcal{W}_\emptyset =\{{\bf1}\}$ while $\mathcal{W}=\{{\bf1},w\}$ has two elements. \end{rmk}
\begin{prop}\label{prop cr1} {\rm (Consistency relations for stabilizers)} Let $w_I \in \mathcal{W}_I$ and $w={\bf m}(w_I)\in \mathcal{W}$. Then \begin{equation} \label{ConsisT1} (H_w)_I = (H_I)_{w_I}\,.\end{equation} \end{prop}
\begin{proof} Recall from \eqref{character HI} that $H_I$ is the subgroup of $G$ which asymptotically preserves the curves $\gamma_v$ in normal direction, i.e. is the group of elements $g\in G$ with $g\cdot [\gamma_{v}'(0)]_{\rm n}= [\gamma_{v}'(0)]_{\rm n}= v$. Hence $(H_I)_{w_I}\subset G$ is the group of elements $g\in G$ with $g\cdot [\gamma_{w_I, v}'(0)]_{\rm n}= [\gamma_{w_I, v}'(0)]_{\rm n}=t_I \cdot v$. On the other hand we can characterize $(H_w)_I$ as follows: define the curve $$\sigma_{w,v}(s):= \tilde a(v) \exp(-(\log s) X_I) \cdot z_w = \tilde a(v) \exp(-(\log s) X_I) \cdot t$$ where $\tilde a(v) \in \algebraicgroup{ A}$ is any lift of $a(v)\in \algebraicgroup{ A}_I$ with respect to the projection $\pi: \algebraicgroup{ A}\to \algebraicgroup{ A}_Z$. Then $(H_w)_I$ is the group of elements $g\in G$ with $g\cdot [\sigma_{w,v}'(0)]_{\rm n}= [\sigma_{w,v}'(0)]_{\rm n}=t\cdot v$. As $t=t_I$, the desired equality of groups follows. \end{proof}
\subsection{Symmetric spaces}\label{Subsection m symmetric} The nature of the map ${\bf m}$ becomes quite clear in the special case where $Z$ is a symmetric space. In this special situation we can make the matching map explicit in terms of certain Weyl groups.
\par For this subsection $Z=G/H$ is symmetric, that is, there exists an involution $\tau: \algebraicgroup{G}\to \algebraicgroup{G}$, defined over $\mathbb{R}$, such that $\algebraicgroup{H}$ is an open subgroup of the $\tau$-fixed point group $\algebraicgroup{G}^\tau$. We choose our maximal anisotropic group $\algebraicgroup{K} \subset \algebraicgroup{G}$ in such a way that the Cartan involution $\theta$, which defines $\algebraicgroup{K}$, commutes with $\tau$. By slight abuse of notation we use $\tau$ and $\theta$ for the induced derived involutions on $\mathfrak{g}$ as well.
\subsubsection{The adapted parabolic}\label{Subsubsection adapted} We write $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{k}^\perp$, resp. $\mathfrak{g}= \mathfrak{h}\oplus \mathfrak{h}^\perp$, for the decomposition of $\mathfrak{g}$ into eigenspaces of $\theta$, resp. $\tau$, with eigenvalues $+1$ and $-1$. We let $\mathfrak{a}_Z\subset \mathfrak{h}^\perp \cap \mathfrak{k}^\perp$ be a maximal abelian subspace and extend $\mathfrak{a}_Z$ to a maximal abelian subspace $\mathfrak{a}\subset \mathfrak{k}^\perp$. Now, according to Rossmann, the root system $\Sigma=\Sigma(\mathfrak{g},\mathfrak{a})$ restricts to a root system
$$\Sigma_Z=\Sigma|_{\mathfrak{a}_Z}\backslash \{0\}$$ on $\mathfrak{a}_Z$. The Weyl group of $\Sigma_Z$ is denoted by $\mathsf{W}=\mathsf{W}_Z$.
\par Let $\Sigma_Z^+\subset \Sigma_Z$ be a positive system, and let $\Sigma^+\subset \Sigma$ be a positive system such that $\Sigma^+|_{\mathfrak{a}_Z}\backslash \{0\}= \Sigma_Z^+$. Then $PH\subset G$ is open for the minimal parabolic subgroup $P=MAN$, for which $\mathfrak{n}$ is the sum of the positive root spaces. The adapted parabolic $Q=LU\supset P$ is then characterized by $L=Z_G(\mathfrak{a}_Z)$. It is the unique minimal $\theta\tau$-stable parabolic subgroup of $G$ containing $P$.
\subsubsection{The deformations $H_I$} The spherical roots $S\subset\mathfrak{a}_Z^*$ are given by the simple roots in $\Sigma_Z$ with respect to $\Sigma_Z^+$. Hence for any $I\subset S$ we obtain parabolic subgroups $P_I \supset Q$ with $L_I=Z_G(\mathfrak{a}_I)$. As before we realize $\mathfrak{a}_I\subset \mathfrak{a}$ so that $A_I=\exp(\mathfrak{a}_I)$ becomes a subgroup of $A$. Then $L_I=M_I A_I\simeq M_I \times A_I$ for a unique $\tau$-stable subgroup $M_I \subset L_I$.
Now the deformations $H_I$ are given by
$$ H_I = (M_I \cap H) \overline{U_I}$$ with $M_I \cap H\subset M_I$ a symmetric subgroup, i.e. $M_I/ M_I\cap H\subset G/H$ is a symmetric subspace. Note that the $H_w$, $w\in \mathcal{W}$, can be treated on the same footing, i.e. $(H_w)_I = (M_I \cap H_w) \overline{U_I}$ and $M_I \cap H_w\subset M_I$ a symmetric subgroup. As seen in Example \ref{ex SL3} the subgroups $M_I\cap H$ and $M_I\cap H_w$ are not necessarily conjugate in $M_I$.
\subsubsection{Open double cosets}\label{open double} For later reference in Section \ref{section DBS} (where we derive the Plancherel formula for symmetric spaces) we consider here both $(P\backslash Z_I)_{\rm open}$ and $(P_I\backslash Z)_{\rm open}$ together.
Recall that for symmetric spaces the set $W=(P\backslash Z)_{\rm open}$ allows a description in terms of Weyl groups. For that we identify $\mathsf{W}=\mathsf{W}_Z\simeq
[N_K(\mathfrak{a})\cap N_K(\mathfrak{a}_Z)]/ M$ and define a subgroup of $\mathsf{W}$ by
$\mathsf{W}_H= [N_{K\cap H}(\mathfrak{a})\cap N_{K\cap H}(\mathfrak{a}_Z)]/ M$. Then Matsuki \cite{Mat1} has shown that
\begin{equation}\label{Matsuki PZ} \mathsf{W}/\mathsf{W}_H \to (P\backslash Z)_{\rm open}, \ \ w\mathsf{W}_H \mapsto PwH \end{equation} is a bijection. In particular, $W\simeq \mathsf{W}/\mathsf{W}_H$.
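As a simple consistency check, not needed in the sequel, consider the Riemannian case $H=K$, i.e. $\tau=\theta$: then $\mathfrak{a}_Z=\mathfrak{a}$, hence $\mathsf{W}_H=N_K(\mathfrak{a})/M=\mathsf{W}$, and \eqref{Matsuki PZ} reduces to the statement that $Z=G/K$ carries a unique open $P$-orbit, in accordance with the Iwasawa decomposition $G=PK$.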
When applied to the symmetric space $M_I/(M_I\cap H)$, Matsuki's result becomes \begin{equation}\label{Matsuki M_I} \mathsf{W}(I)/ (\mathsf{W}(I)\cap \mathsf{W}_H)\simeq(( P\cap M_I)\backslash M_I / (M_I \cap H))_{\rm open}. \end{equation} Now \begin{equation}\label{Pz to PIz} (P\backslash Z)_{\rm open} \to (P_I\backslash Z)_{\rm open}, \ \ Pz\mapsto P_Iz \end{equation} is surjective. It follows from \eqref{Matsuki M_I} that the composition of \eqref{Matsuki PZ} with \eqref{Pz to PIz} factorizes to a bijection (see also \cite{Mat2}) $$ \mathsf{W}(I)\backslash \mathsf{W}/ \mathsf{W}_H\to (P_I \backslash Z)_{\rm open}$$ where $\mathsf{W}(I)<\mathsf{W}=\mathsf{W}_Z$ is the subgroup generated by the reflections $s_\alpha$ for $\alpha \in I$.
In particular we obtain an action of $\mathsf{W}(I)$ on $\mathcal{W}\simeq \mathsf{W}/\mathsf{W}_H$ and record:
\begin{lemma} \label{Lemma Mats1}For $I\subset S$ the following assertions hold: \begin{enumerate} \item \label{onemats}$(P_I\backslash Z)_{\rm open}\simeq\mathsf{W}(I)\backslash \mathcal{W}$. \item \label{twomats} $(P\backslash Z_I)_{\rm open}\simeq \mathcal{W}_I\simeq\mathsf{W}(I)/(\mathsf{W}(I)\cap \mathsf{W}_H)$. \end{enumerate} \end{lemma} \begin{proof} The first assertion we have just shown. For the second, recall first that $H_I= (M_I \cap H )\overline {U_I}$. Hence the Bruhat decomposition yields that $$ (P\backslash Z_I)_{\rm open } \simeq (( P\cap M_I)\backslash M_I / (M_I \cap H))_{\rm open}\, ,$$ so that \eqref{twomats} follows from \eqref{Matsuki M_I}. \end{proof}
\begin{lemma} \label{lemma 58}Upon identifying $\mathsf{W}(I)/ \mathsf{W}(I)\cap \mathsf{W}_H$ with $\mathcal{W}_I$ and $\mathsf{W}/ \mathsf{W}_H$ with $\mathcal{W}$, the map ${\bf m}: \mathcal{W}_I \to \mathcal{W}$ corresponds to the natural inclusion map $\mathsf{W}(I)/(\mathsf{W}(I)\cap \mathsf{W}_H) \hookrightarrow \mathsf{W}/\mathsf{W}_H$.\end{lemma}
\begin{proof} We recall the construction of the map ${\bf m}$ via the limits of the double cosets $Pw_Ia_s H$. So let $w_I\in \mathsf{W}(I)$ and observe that $\mathsf{W}(I)$ keeps $\mathfrak{a}_I$ pointwise fixed, so that $w_I a_s w_I^{-1}= a_s\in A\subset P$. Thus we have $Pw_I a_s H= P a_s w_I H=Pw_I H$ and the lemma follows. \end{proof}
Also of later relevance are the open $H\times \overline P_I$-double cosets in $G$ which we treat here as well. Since the anti-involution $$G\to G, \ \ g\mapsto g^{-\theta}:=\theta(g^{-1})$$ leaves $H$ invariant and maps $P_I$ to its opposite $\overline{P_I}$, we obtain a bijection of double cosets
$$ P_I\backslash G/H \to H\backslash G/ \overline {P_I} , \ \ P_IgH\mapsto H g^{-\theta} \overline{P_I}\, .$$
With Lemma \ref{Lemma Mats1} we thus obtain a bijection \begin{equation}\label{Matsuki Pbar} \mathsf{W}(I)\backslash \mathcal{W} \to (H\backslash G/\overline{P_I})_{\rm open}, \ \ \mathsf{W}(I)w \mapsto H w^{-\theta}\overline{P_I }\, . \end{equation}
\subsection{Relating $W_I$ to $\widehat W_I$} We now return to the setup of a general real spherical space. In this subsection we provide some complementary material on the relation of $W_I$ to $\widehat W_I:=(P\backslash \widehat Z_I)_{\rm open}$. This will lead to a better geometric understanding of what to come next.
Recall that $A_I$ is the identity component of $A(I)= A_I F(I)\simeq A_I \times F(I)$. Notice that $A(I)$ acts naturally on $Z_I=G/H_I$ from the right and thus induces an action of $A(I)$ on $W_I=(P\backslash Z_I)_{\rm open}$. The following lemma is then a consequence of the fact that the connected group $A_I$ acts trivially on the finite set $W_I$.
\begin{lemma} \label{lemma WWI}The natural map $$ W_I= (P\backslash Z_I)_{\rm open}\to \widehat W_I =(P\backslash \widehat Z_I)_{\rm open}, \ \ PwH_I\mapsto Pw\widehat H_I$$ is surjective and induces an isomorphism $W_I/ F(I)\simeq \widehat W_I$. \end{lemma}
\begin{proof} Let $P w\widehat H_I\subset G$ be open for some $w\in G$. We first show that $PwH_I A_I= PwH_I$. According to \eqref{th-deco} applied to the real spherical space $\widehat Z_I$ we may write $w=\tilde t\widehat h$ with $\tilde t\in T_Z$ and $\widehat h\in \widehat \algebraicgroup{H}_I$. Since $\widehat \algebraicgroup{H}_I =\exp_{\algebraicgroup{G}}(\mathfrak{a}_{I,\mathbb{C}}) \algebraicgroup{H}_I$ by (\ref{exact1}), we have $\widehat h= \tilde t_I h $ with $h\in \algebraicgroup{H}_I $ and $\tilde t_I\in \exp_{\algebraicgroup{G}} (\mathfrak{a}_{I, \mathbb{C}}) $. Let $a\in A_I\subset A$. Then $$(aw)^{-1} w a = (a\tilde t\tilde t_I h)^{-1} \tilde t \tilde t_I h a = h^{-1}a^{-1} h a \, .$$ Now we observe that $a w \in G$ and $a^{-1}ha \in \algebraicgroup{H}_I$. Hence $(aw)^{-1} w a\in H_I$, and thus $Pw H_I a=PwaH_I= PwH_I$ as claimed.
Since $\widehat H_I = H_I A_IF(I)$ we obtain that $Pw \widehat H_I = \bigcup_{t \in F(I)} P w H_I t$. In particular $PwH_I$ is open. Hence the map $W_I \to \widehat W_I$ is onto. The last assertion also follows. \end{proof}
\par Similar to $W\simeq F_M\backslash F $ (see Lemma \ref{lemmaW1}) we obtain with $$F_I^\perp := F \algebraicgroup{ A}_I(\mathbb{R})/ \algebraicgroup{ A}_I(\mathbb{R})\subset \algebraicgroup{ A}_\algebraicgroup{ Z}(\mathbb{R})/\algebraicgroup{ A}_I(\mathbb{R})$$ that $$\widehat W_I \simeq F_M\backslash F_I^\perp$$
as a consequence of \eqref{LST-I}. We further recall that we view $F_I\subset \{-1,1\}^r \cap V_I \subset V$ and accordingly the group $F_M\cap F_I= F_M \cap F(I)$ acts on $F_I$. Thus we obtain an exact sequence of pointed sets $$ (F_M\cap F(I)) \backslash F_I \hookrightarrow F_M\backslash F \twoheadrightarrow F_M\backslash F_I^\perp $$ or, equivalently, $$ (F_M \cap F(I))\backslash F_I \hookrightarrow W \twoheadrightarrow \widehat W_I\, .$$ From the injectivity of ${\bf m}$ and Lemma \ref{lemma WWI} we thus obtain the commutative diagram:
\begin{equation} \label{small 2-group diagram} \xymatrix{ (F_M\cap F(I)) \backslash F_I \ar@{^{(}->}[r]& W\ar@{>>}[r] &\widehat W_I\\ \ar@{^{(}->}[u] \ar@{^{(}->}[r] (F_M\cap F(I))\backslash F(I) & \ar@{>>}[r]\ar@{^{(}->}[u]^{{\bf m}} W_I& \ar@2{-}[u]\widehat W_I} \end{equation}
\begin{rmk}\label{rmk W does not split} Let us emphasize that the upper horizontal sequence in \eqref{small 2-group diagram} is exact in the category of pointed sets, but not in the category of sets, i.e. we do not have $W\simeq \widehat W_I \times (F_M\cap F(I))\backslash F_I$ as sets, see Example \ref{ex SL3 continued} below. \end{rmk}
This phenomenon disappears if we consider $W_\mathbb{R}$ and $\widehat W_{I,\mathbb{R}}= (P\backslash \widehat \algebraicgroup{ Z}_I(\mathbb{R}))_{\rm open}$ instead of $W$ and $\widehat W_I$. In more detail, recall the basis $(\psi_i^I)_{1\leq i\leq r}$ by means of which we get a decomposition (see \eqref{AZ-iso1}) \begin{equation}\label{AI torus deco} \algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})= \underbrace{\algebraicgroup{ A}_I(\mathbb{R})}_{\simeq (\mathbb{R}^\times)^k} \times \underbrace{\algebraicgroup{ A}_I^\perp(\mathbb{R})}_{\simeq (\mathbb{R}^\times)^{r-k}} \simeq (\mathbb{R}^\times)^r \end{equation} analogous to the decomposition $V= V_I \oplus V_I^{\perp}$. In particular, $F_\mathbb{R}$, the $2$-torsion subgroup of
$\algebraicgroup{ A}_{\algebraicgroup Z}(\mathbb{R})$, decomposes as $F_\mathbb{R}=F_{I,\mathbb{R}}\times F_{I,\mathbb{R}}^\perp$ in self-explanatory notation. Hence any $t\in F_\mathbb{R}$ decomposes as $t= t^{\|} t^\perp$ with $t^{\|} \in F_{I,\mathbb{R}}$ and $t^\perp \in F_{I,\mathbb{R}}^\perp$.
In this situation we obtain that the map $$F_\mathbb{R} \to F_{I,\mathbb{R}}^\perp, \ \ t \mapsto t^\perp$$ induces an epimorphism $$W_\mathbb{R}\simeq F_M\backslash F_\mathbb{R}\to \widehat W_{I,\mathbb{R}} \simeq F_M\backslash F_{I,\mathbb{R}}^\perp, \ \ F_M t \mapsto F_M\cdot t^\perp\, ,$$ leading to a decomposition $$ W_\mathbb{R} \simeq \widehat W_{I,\mathbb{R}} \times (F_{I,\mathbb{R}} \cap F_M)\backslash F_{I,\mathbb{R}}\, .$$
\subsection{The fine partition of $W$ with respect to $I$}\label{fine partition}
Our next goal is to explore the issue of non-surjectivity of ${\bf m}$.
We recall from Subsection \ref{structure of Z_I} the set $\sC_I$, the partition $$ \mathcal{W}=\coprod_{\mathsf{c} \in \sC_I} \mathcal{W}_\mathsf{c}\, $$ and the groups $H_{I,\mathsf{c}}$ for $\mathsf{c}\in \sC_I$. Thus the understanding of $\mathcal{W}$ with respect to $I$ comes down to understanding the various $\mathcal{W}_\mathsf{c}$. Once we have fixed $\mathsf{c}$ we will see below that we obtain a natural geometric splitting $\mathcal{W}_\mathsf{c}\simeq \widehat \mathcal{W}_\mathsf{c}\times (F_M\cap F_I) \backslash F_I$, contrary to what happens for $\mathcal{W}$ (see Remark \ref{rmk W does not split} and \eqref{splitting W}).
For expository reasons we start with $\mathsf{c}={\bf1}\in\sC_I$, by which we mean $\mathsf{c}=\widehat Z_I=G\cdot \widehat z_{0,I}$. Thereupon we consider the other cases by replacing $H_I$ with $H_{I,\mathsf{c}}$ and adding a further index $\mathsf{c}$ to the notation.
\begin{rmk} Even in case $\sC_I=\{{\bf1}\}$ and $\mathcal{W}=\mathcal{W}_{\bf1}$ it can happen that ${\bf m}(\mathcal{W}_I)\subsetneq \mathcal{W}$. As we will see below this is related to the set $\sF_I=F(I)\backslash F_I$ originating from the normal bundle geometry in Subsection \ref{nb points to Z}. \end{rmk}
\subsubsection{The case $\mathsf{c}={\bf1}$}\label{subsection c=1} Recall that $w\in \mathcal{W}_{\bf1}$ means that $\widehat z_{w,I}\in \widehat Z_I$. Let $W_{\bf1}=\{P \cdot z_w\mid w \in \mathcal{W}_{\bf1}\}$. Let $F_{\bf1}:=\{t\in F\mid P\cdot t\in W_{\bf1}\}\subset F_\mathbb{R}$. Then we can describe $F_{\bf1}$ and thus $\mathcal{W}_{\bf1}\simeq F_M\backslash F_{\bf1}$ geometrically as follows.
Recall from \eqref{AI torus deco}
that any $t\in F_\mathbb{R}$ decomposes as $t= t^{\|} t^\perp$ with $t^{\|} \in F_{I,\mathbb{R}}$ and $t^\perp \in F_{I,\mathbb{R}}^\perp$.
Let $w\in \mathcal{W}_{\bf1}$, write it as $w=\tilde th$, and decompose
$\tilde t=\tilde t^{\|} \tilde t^\perp$ such that $\tilde t^{\|} \cdot z_0 = t^{\|}$ and $\tilde t^\perp \cdot z_0=t^\perp$. Consider the curve $s\mapsto a_s \cdot z_w= a_s\cdot t $ where $a_s=\exp(sX)$ with $X\in\mathfrak{c}_I^{--}$. Then, as $t^{\|}\in \algebraicgroup{ A}_I(\mathbb{R})$ fixes $\widehat z_{0,I}$, we obtain in the limit for $s\to \infty$ that $\tilde t\cdot \widehat z_{0,I}= t^\perp \cdot \widehat z_{0,I}$, and as $w\in \mathcal{W}_{\bf1}$ this limit belongs to an open $P$-orbit of $\widehat Z_I$.
Furthermore the coordinate $t^{\|}\in F_{I,\mathbb{R}}$ tells us in which direction we approach the limit $t^\perp\cdot \widehat z_{0,I}$, i.e. in which component of the cone $V_{Z,I}=F_I V_I^0$ we approach the limit. With $F_{I,{\bf1}}^\perp:= F_{\bf1} \algebraicgroup{ A}_I(\mathbb{R})/ \algebraicgroup{ A}_I(\mathbb{R}) \simeq F_{\bf1} F_{I,\mathbb{R}}/ F_{I,\mathbb{R}}$ we obtain the following.
\begin{lemma}\label{lemma F-product} By restriction the map $t \mapsto (t^{\|},t^\perp)$ yields a bijection $F_{\bf1} \simeq F_I\times F_{I,{\bf1}}^\perp$. \end{lemma}
\begin{proof} First we claim that $F_I=F_{\bf1}\cap \algebraicgroup{ A}_I(\mathbb{R})$. The inclusion $\supset$ is clear since by definition $F_I=F\cap \algebraicgroup{ A}_I(\mathbb{R})$. Conversely, each $t\in F_I$ corresponds to a $w=\tilde th\in \mathcal{W}$ with $t\in \algebraicgroup{ A}_I(\mathbb{R})$. Then $\widehat z_{w,I}=\widehat z_{0,I}$, and hence $t\in F_{\bf1}$ as claimed.
In particular it follows that $(t^{\|},t^\perp)\in F_I\times F_{I,{\bf1}}^\perp$ for all $t\in F_{\bf1}$. Since $t\mapsto (t^{\|},t^\perp)$ is injective by its definition in \eqref{AI torus deco}, it remains to see that $t=t^{\|}t^\perp\in F_{\bf1}$
for all pairs $(t^{\|},t^\perp)\in F_I\times F_{I,{\bf1}}^\perp$. Since $ t^{\|}\in \algebraicgroup{ A}_I(\mathbb{R})$ we know that $t\cdot \widehat z_{0,I} =t^\perp\cdot \widehat z_{0,I}\in \widehat Z_I$ which is the limit of the curve $\gamma(s)= a_s \cdot t$
for $s\to \infty$. The coordinate $t^{\|}\in F_I$ shows that $\gamma$ approaches the limit $t\cdot \widehat z_{0,I}$ in a direction pointing to $Z$ (see also the end of the proof of Lemma \ref{lemma rel open} for a more formal argument). Hence $t\in F$ and $Pt\in W_{\bf1}$. \end{proof}
Lemma \ref{lemma F-product} implies the splitting \begin{equation} \label{splitting W}\mathcal{W}_{\bf1}\simeq \widehat \mathcal{W}_I\times (F_M\cap F(I)) \backslash F_I\end{equation} and we can rephrase Lemma \ref{lemma WWI} as:
\begin{lemma} We have ${\bf m}(\mathcal{W}_I)\subset \mathcal{W}_{\bf1}$ and under the identification \eqref{splitting W} we have \begin{equation}\label{splitting WI} {\bf m}(\mathcal{W}_I)\simeq \widehat \mathcal{W}_I \times (F_M\cap F(I))\backslash F(I) \end{equation} \end{lemma}
From \eqref{splitting W} and \eqref{splitting WI} we obtain that \begin{equation} \label{splitting WI1} \mathcal{W}_{\bf1} \simeq {\bf m}(\mathcal{W}_I) \times \sF_I \end{equation} with $\sF_I=F(I)\backslash F_I$.
\begin{rmk}\label{remark normal bundle} It is instructive for the following to recall from Subsection \ref{nb points to Z} the part $N_\mathsf{Y}^Z= \coprod_{\mathsf{t}\in \sF_I} N_\mathsf{Y}^{Z, \mathsf{t}}$ of the normal bundle $N_\mathsf{Y}$ which points to $Z$. Here $$Z_{I,\mathsf{t}}:=N_\mathsf{Y}^{Z,\mathsf{t}}\simeq G/H_I$$ by the isomorphism \eqref{N-ident}, and $\sF_I=F(I)\backslash F_I$ parametrizes the components of $N_\mathsf{Y}^Z$. \end{rmk} \par Note that $F_{I,\mathbb{R}}$ is a $\mathbb{Z}_2$-vector space and thus we can find a splitting $F_{I,\mathbb{R}} = F(I)\oplus F_{I,\mathbb{R}}^0$ of vector spaces. In particular, we obtain $F_I = F(I) \oplus F_I^0$ for a subset $F_I^0\subset F_{I,\mathbb{R}}^0$. Consequently the map
$$ F_I^0\to \sF_I,\ \ t\mapsto \mathsf{t}:= tF(I)$$ is a bijection. \par Now, using the isomorphism \eqref{splitting WI1} and the identification $\sF_I\simeq F_I^0$ we obtain injective maps $${\bf m}_\mathsf{t}: \mathcal{W}_I \to \mathcal{W}_{\bf1}\simeq {\bf m}(\mathcal{W}_I) \times F_I^0, \ \ w_I \mapsto ({\bf m}(w_I), t)$$ which yields the partition \begin{equation} \label{partition} \mathcal{W}_{\bf1}= \coprod_{\mathsf{t}\in \sF_I} {\bf m}_{\mathsf{t}} (\mathcal{W}_I)\, .\end{equation}
Let us explain the map ${\bf m}_\mathsf{t}$ more geometrically using the normal bundle, see Remark \ref{remark normal bundle}. The subset ${\bf m}_{\mathsf{t}} (\mathcal{W}_I)\subset \mathcal{W}_{\bf1}$ corresponds to those $w=\tilde t_wh\in \mathcal{W}_{\bf1}$ for which the curve $s\mapsto a_s\cdot z_w= a_s t_w \cdot z_0$ approaches the boundary point $\widehat z_{w,I}= t_w\cdot \widehat z_{0,I}=t_w^\perp\cdot \widehat z_{0,I}$ in direction of $t F(I) V_I^0\subset V_{Z,I}$. Let us emphasize that our initial map ${\bf m}$ corresponds then to the case where $\mathsf{t}=F(I)$ is the identity coset.
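\begin{rmk} For purely combinatorial orientation, consider the following hypothetical configuration (not tied to a specific space): suppose that $F_I\simeq \mathbb{Z}_2\times \mathbb{Z}_2$ and that $F(I)$ is one of the two $\mathbb{Z}_2$-factors. Then $\sF_I=F(I)\backslash F_I\simeq \mathbb{Z}_2$, and since ${\bf m}$ is injective, \eqref{splitting WI1} and \eqref{partition} give the count $$ |\mathcal{W}_{\bf1}| = |\sF_I|\cdot |\mathcal{W}_I| = 2\, |\mathcal{W}_I|\, ,$$ with $\mathcal{W}_{\bf1}$ the disjoint union of ${\bf m}(\mathcal{W}_I)$ and ${\bf m}_{\mathsf{t}}(\mathcal{W}_I)$ for $\mathsf{t}\neq F(I)$ the nontrivial coset. \end{rmk}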
\par Recall that $\mathsf{t} \in \sF_I$ corresponds to a unique $t \in F_I^0$. Further we let $\tilde t\in T_Z$ be a lift of $t$, i.e. $\tilde t \cdot z_0 =t$. We assume that $t={\bf 1}$ in case $\mathsf{t}= F(I)$.
\begin{rmk} Let $w_I=\tilde t_I h_I$ and $w_1={\bf m}(w_I)= \tilde t_I h\in \mathcal{W}$. Then note that $$ {\bf m}_{\mathsf{t} }(w_I) = \tilde t_M \tilde t \tilde t_I \tilde h $$ for some $\tilde t_M \in F_M $ and some $\tilde h \in \algebraicgroup{H}$, both depending on the choice of representatives for $w:= {\bf m}_{\mathsf{t} }(w_I)\in \mathcal{W}$. Thus by changing $w={\bf m}_{\mathsf{t}} (w_I)\in \mathcal{W}$ to $\tilde t_M^{-1} w h'\in G$ for a suitable $h'\in\algebraicgroup{H}$ we may assume that the compatibility conditions $${\bf m}_\mathsf{t}(w_I)=\tilde t \tilde t_I h''$$ hold for some $h''\in \algebraicgroup{H}$. In particular, we have $${\bf m}_\mathsf{t}(w_I)\cdot z_0= \tilde t\, {\bf m} (w_I)\cdot z_0=t t_1$$
for all $\mathsf{t}\in \sF_I$ and $w_I\in \mathcal{W}_I$, where $t_1= w_1\cdot z_0=\tilde t_I\cdot z_0$.
\par Notice that this correction of the choice of $\mathcal{W}$ (by harmless left displacements by elements of $F_M$) with respect to $\mathcal{W}_I$ depends on $I$. In general it does not seem possible to make a consistent choice of $\mathcal{W}$ which is valid for all $I$ simultaneously. \end{rmk}
\par
Recall the notation $H_g=gHg^{-1}$ for a subgroup $H$ of a group $G$ and $g\in G$. Then note that $\algebraicgroup{H}_{\tilde t}$ is defined over $\mathbb{R}$ and that $H_{\tilde t}:= (\algebraicgroup{H}_{\tilde t})(\mathbb{R})$ is conjugate to $H$, as $t\in Z$. Likewise we define $z_{t,I}:=\tilde t \cdot z_{0,I}\in \algebraicgroup{ Z}_I(\mathbb{R})$ and note that the $G$-stabilizer of $z_{t,I}$ is $H_I$, as we have $(H_{\tilde t})_I = H_I$ as a consequence of the fact that $\tilde t$ fixes the vertex $\widehat z_{0,I}$.
We then obtain the following extension of the consistency relations from Proposition \ref{prop cr1}:
\begin{lemma} \label{lemma HI comp} Let $w_I\in \mathcal{W}_I$, $\mathsf{t} \in \sF_I$ and $w= {\bf m}_\mathsf{t} (w_I)\in \mathcal{W}_{\bf1}$. Then \begin{equation} \label{WWI2} (H_w)_I = (H_I)_{w_I} \, . \end{equation} In particular, $(H_w)_I$ only depends on $w_I$ and is independent of $\mathsf{t}$. \end{lemma}
\begin{proof} For $w_I =\tilde t_I h_I$ we have $w_1:={\bf m}(w_I)=\tilde t_I h'$ for some $h'\in \algebraicgroup{H}$. Hence ${\bf m}_\mathsf{t} (w_I) = \tilde t \tilde t_I h$ for some $h\in \algebraicgroup{H}$. We further have $$(H_w)_I= \big((H_{w_1})_{\tilde t}\big)_I=(H_{w_1})_I$$ and now Proposition \ref{prop cr1} applies. \end{proof}
\subsubsection{The general decomposition of $\mathcal{W}$ } In general we obtain a partition
\begin{equation} \label{full deco W} \mathcal{W}= \coprod_{\mathsf{c}\in \sC_I} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c},\mathsf{t}} (\mathcal{W}_{I,\mathsf{c}})\end{equation} where $\mathcal{W}_{I,\mathsf{c}}$ are the open $P$-orbits for $Z_{I,\mathsf{c}}:= G/H_{I,\mathsf{c}}$ parametrized as in the previous section with $H_I$ replaced by $H_{I,\mathsf{c}}$. The set $\sF_{I,\mathsf{c}}$ is then $\sF_I$, but for $H_I$ replaced by $H_{I,\mathsf{c}}$. We define ${\bf m}_{\mathsf{c}, \mathsf{t}} $ similarly. Regarding our choices $w(\mathsf{c})\in \mathcal{W}$ which defined $H_{I,\mathsf{c}}$ we normalize ${\bf m}_{\mathsf{c},{\bf1}}$ such that ${\bf m}_{\mathsf{c},{\bf1}}({\bf1})=w(\mathsf{c})$.
\begin{rmk} If we let $F_\mathsf{c}\subset F$ correspond to $\mathcal{W}_\mathsf{c}\subset \mathcal{W}$ we define as before $F_{I,\mathsf{c}}:= F_\mathsf{c} \cap \algebraicgroup{ A}_I(\mathbb{R})$ and $F_{I,\mathsf{c}}^\perp:= F_\mathsf{c} F_{I,\mathbb{R}}/ F_{I,\mathbb{R}}$. As in Lemma \ref{lemma F-product} we then obtain \begin{itemize} \item $F_{I,\mathsf{c}}=F_I$.
\item $F_\mathsf{c}\simeq F_{I,\mathsf{c}} \times F_{I,\mathsf{c}}^\perp$ under $t\mapsto (t^{\|},t^\perp)$. \end{itemize} The first item tells us that $F_{I,\mathsf{c}}$ is independent of $\mathsf{c}$. However $F(I)_\mathsf{c}$ does depend on $\mathsf{c}$ as Example \ref{ex SL3 continued} below shows. In particular the dependence of
$\sF_{I,\mathsf{c}}= F(I)_\mathsf{c}\backslash F_I$ on $\mathsf{c}$ is caused by the $\mathsf{c}$-dependence of $F(I)_\mathsf{c}$ only. \end{rmk}
Further we denote by $z_{0,I,\mathsf{c}} = H_{I,\mathsf{c}}$ the standard base point of $Z_{I,\mathsf{c}}$, and state the general version of \eqref{WWI2}: let $\mathsf{c} \in \sC_I$ and $\mathsf{t} \in \sF_{I,\mathsf{c}}$ such that $w={\bf m}_{\mathsf{c},\mathsf{t}}(w_{I,\mathsf{c}})\in \mathcal{W}_\mathsf{c}$ for some $w_{I,\mathsf{c}}\in \mathcal{W}_{I,\mathsf{c}}$. Then $(H_w)_I$ does not depend on $\mathsf{t}$ and \begin{equation} \label{WWI2 general} (H_w)_I = (H_{I,\mathsf{c}})_{w_{I,\mathsf{c}}} \qquad (w= {\bf m}_{\mathsf{c},\mathsf{t}}(w_{I,\mathsf{c}}))\, .\end{equation}
If we define $w(\mathsf{c},\mathsf{t}):= {\bf m}_{\mathsf{c},\mathsf{t}}({\bf1})\in \mathcal{W}$ and set $H_{I,\mathsf{c},\mathsf{t}} :=(H_{w(\mathsf{c},\mathsf{t})})_I$ and $Z_{I,\mathsf{c},\mathsf{t}}:= G/H_{I,\mathsf{c},\mathsf{t}}$, then $H_{I,\mathsf{c},\mathsf{t}}=H_{I,\mathsf{c}}$ by \eqref{WWI2 general}, hence $Z_{I,\mathsf{c},\mathsf{t}}= Z_{I,\mathsf{c}}$, and the decomposition \eqref{normal union} follows.
\begin{ex} \label{ex SL3 continued}We continue Example \ref{ex SL3} of $Z=\operatorname{SL}(3,\mathbb{R})/ \operatorname{SO}(1,2)$ with $\mathcal{W}=\{w_1, w_2, w_3\}$ and $H_{w_1}=H$. We chose $I=\{\alpha_2\}$ and obtained $\sC_I=\{{\mathsf 1},{\mathsf 2}\}$ with $\mathcal{W}_{\mathsf 1}=\{w_1\}$ and $\mathcal{W}_{\mathsf 2}=\{w_2, w_3\}$. Further we had $H_{I,{\mathsf 1}}= H_I=S(\operatorname{O}(1)\operatorname{O}(2)) U_I$ and $H_{I,{\mathsf 2}}= (H_{w_2})_I = (H_{w_3})_I=S(\operatorname{O}(1)\operatorname{O}(1,1)) U_I$. \par Next we claim that $\mathcal{W}_{I,{\mathsf 1}}=\{{\bf1}\}$ and $\mathcal{W}_{I,{\mathsf 2}}=\{{\bf1}\}$, i.e.~both sets consist of a single element. This follows from the fact that the open $P$-orbits in $G/H_{I,j}$ are induced: if we denote by $G_I\simeq \operatorname{GL}(2,\mathbb{R})$ the Levi for the parabolic defined by $I$, then the open $P$-orbits on $G/H_{I,j}$ correspond to the open $P\cap G_I$-orbits in $\operatorname{GL}(2,\mathbb{R})/\operatorname{O}(2)$ respectively $\operatorname{GL}(2,\mathbb{R})/ \operatorname{O}(1,1)$. Both cases feature only one open orbit for $P\cap G_I$, which establishes our claim.
\par Finally we determine $\sF_{I,{\mathsf 1}}$ and $\sF_{I,{\mathsf 2}}$. Since $F=\{t_1, t_2, t_3\}$ with $t_i t_j=t_k$ for all $i, j,k$ pairwise different, we readily deduce that $F_{I,\mathbb{R}}=F_{I,{\mathsf 1}}=F_{I,{\mathsf 2}}\simeq \mathbb{Z}_2$ is a group. Recall that we described $\widehat H_{I,{\mathsf 1}}$ and $\widehat H_{I,{\mathsf 2}}$ already in Example \ref{ex SL3}. From that we deduce that $\widehat H_{I,{\mathsf 1}}/ H_{I,{\mathsf 1}}\simeq A_I$ is connected and thus $F(I)_{\mathsf 1}=\{{\bf1}\}$. In particular, $\sF_{I,{\mathsf 1}}\simeq \mathbb{Z}_2$.
On the other hand we have $$u=\begin{pmatrix} 1 & 0&0 \\ 0& 0& 1\\ 0& -1 &0\end{pmatrix}\in \widehat H_{I,{\mathsf 2}}$$ as it preserves the diagonal quadratic form $(0,1,-1)$ projectively (i.e. up to sign). Since $u\not \in H_{I,{\mathsf 2}}$ and commutes with $A_I=\{ \operatorname{diag} (t^{-2}, t, t): t>0\}$ we thus have $F(I)_{\mathsf 2}\simeq \mathbb{Z}_2$. In particular, $\sF_{I,{\mathsf 2}}=\{{\bf1}\}$. \end{ex}
\begin{rmk} The above example shows that the group $A(I)=A_I \times F(I)$ is sensitive to the orbit type in $\sC_I$. More explicitly, we do not have $A(I)\simeq \widehat H_{I,\mathsf{c}}/ H_{I,\mathsf{c}}$ for all $\mathsf{c}\in \sC_I$. \end{rmk}
\section{Abstract Plancherel theorem and tempered representations}\label{Section AbsPlanch}
This section has several parts. We begin with a brief recall on Banach representations and their
smooth vectors, followed by a recap of smooth completions of Harish-Chandra modules.
Then we turn our attention to the abstract Plancherel theorem for real spherical spaces. In fact there is not much difference to the case of a general unimodular homogeneous space, and ``real spherical'' only enters via finite multiplicities. Finally we recall the basic tempered theory for homogeneous spaces, initiated by Bernstein \cite{B} in a general setup, and then made concrete for real spherical spaces
in \cite{KKSS2}.
\subsection{Generalities on Banach representations and their smooth vectors} We begin with a few facts on Banach representations of a Lie group $G$.
By a Banach (or a Fr\'echet) representation of a Lie group $G$ we understand a continuous linear action
$$G \times E \to E, \ \ (g,v) \mapsto \pi(g) v\, $$ on a Banach (or Fr\'echet) space $E$. As customary we use the symbolic pair $(\pi, E)$ to denote the representation. Sometimes we abbreviate and use $g\cdot v$ instead of $\pi(g)v$.
\par Let now $(\pi, E)$ be a Banach representation. Further we fix a norm $p$ which induces the topology on $E$. In case $E$ is a Hilbert space and $p$ originates from the defining scalar product, we say that $p$ is the Hermitian norm on $E$. As the space $E$ does not necessarily allow an action of the Lie algebra we pass to the subspace $E^\infty\subset E$ of smooth vectors. Here $v\in E$ is called smooth provided the $E$-valued orbit map $f_v: G \to E, \ \ g\mapsto \pi(g)v$ is smooth. In this sense we obtain a $G$-invariant subspace $E^\infty\subset E$ which is dense in $E$. The space $E^\infty$ carries a Fr\'echet topology for which the $G$-action is smooth. For further reference we briefly recall a few standard possibilities of how to define this Fr\'echet topology. To begin with let $\mathcal{B}:=\{ X_1, \ldots, X_n\}$ be an ordered basis of $\mathfrak{g}$. For a multi-index $\alpha\in \mathbb{N}_0^n$ we set ${\bf X}^\alpha:=X_1^{\alpha_1}\cdot \ldots \cdot X_n^{\alpha_n}\in \mathcal{U}(\mathfrak{g})$. For each $k\in \mathbb{N}_0$ we now define a norm on $E^\infty$ by
$$p_{\mathcal{B}, k}(v):= \Big(\sum_{\alpha\in \mathbb{N}_0^n \atop |\alpha|\leq k} p( {\bf X}^\alpha\cdot v)^2 \Big)^{1\over 2} \qquad (v \in E^\infty)\, .$$ Notice that $p_{\mathcal{B}, k}$ is Hermitian in case $p$ is Hermitian. If $\mathcal{C}$ is any other choice of ordered basis we note that there exist constants $C_k =C_k(\mathcal{B}, \mathcal{C})>0$, depending on $\mathcal{B}$ and $\mathcal{C}$ but not on the space $E$ and its norm, such that ${1\over C_k} p_{\mathcal{B}, k} \leq p_{\mathcal{C}, k} \leq C_k p_{\mathcal{B}, k}$ for all $k\in \mathbb{N}_0$. In particular the locally convex topology on $E^\infty$ induced from the family $(p_{\mathcal{B}, k})_{k\in \mathbb{N}_0}$ does not depend on the particular choice of $\mathcal{B}$. In the sequel we fix a basis $\mathcal{B}$, set $p_k:=p_{\mathcal{B}, k}$, and refer to $p_k$ as a $k$-th Sobolev norm of $p$. We denote by $E_k$ the completion of $E^\infty$ with respect to the norm $p_k$. Note that $G$ leaves $E_k$ invariant and defines a Banach representation $(\pi_k, E_k)$ of $G$. It follows that the Fr\'echet representation $(\pi^\infty, E^\infty)$ is of moderate growth (see \cite[Lemma 2.10]{BK}). \par A second possibility to define the Fr\'echet structure is by Laplace Sobolev norms. Let \begin{equation} \label{def Delta}\Delta:= - (X_1^2 + \ldots + X_n^2)\in \mathcal{U}(\mathfrak{g})\end{equation} be a Laplace element attached to the basis $\mathcal{B}$, and set \begin{equation}\label{def DeltaR} \Delta_R=\Delta+R^2\cdot{\bf1}\end{equation} for $R\in\mathbb{R}$. We recall the following from \cite[Cor. 3.3, Rem. 3.4]{GK}.
\begin{lemma} Let $(\pi, E)$ be a Banach representation of a unimodular Lie group $G$. Then there exists a constant $R_E\geq 0$ such that for all $R>R_E$ the operator $$ d\pi(\Delta_R): E^\infty\to E^\infty$$ is an isomorphism of Fr\'echet spaces. Moreover, one can take $R_E=0$ in case $(\pi,E)$ is unitary. \end{lemma}
From now on we assume that $G$ is a unimodular Lie group. For a Banach representation $(\pi, E)$ and fixed $R>R_E$, we define Laplace Sobolev norms of even order for any $k\in \mathbb{Z}$ by
\begin{equation} \label{def Laplace} ^\Delta p_{2k}(v):= p( \Delta_R^k v) \qquad (v\in E^\infty)\, .\end{equation} Strictly speaking $^\Delta p_{2k}$ depends on $R>R_E$ but we suppress this in the notation. In case $(\pi, E)$ is unitary we use $R=1$ and thus $^\Delta p_{2k}(v) = p(\Delta_1^k v)$.
For $k\geq 0$, it is clear that $^\Delta p_{2k} \leq c_k\cdot p_{2k}$ for a constant $c_k>0$ which is independent of $p$ and $E$.
Further, for $k\geq 0$ \cite[Prop. 4.12]{GK} yields constants $C_k >0$ only depending on $\mathcal{B}$ and not on $E$ or $p$ such that \begin{equation} \label{Laplace bound} p_{2k} (v)\leq C_k\cdot {^\Delta p}_{2k +n^* }(v) \qquad (v\in E^\infty)\, \end{equation} where
\begin{equation}\label{defi nstar} n^*=\min\{k\in 2\mathbb{N} \mid 1+\dim G\le k\} \end{equation}
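\begin{rmk} To fix ideas, and leaving aside the standing assumptions on $G$ for a moment, consider the translation representation of $G=\mathbb{R}$ on $E=L^2(\mathbb{R})$, with the single basis element of $\mathcal{B}$ acting (up to a sign, which does not affect the norms) by $f\mapsto f'$. Then $E^\infty=\bigcap_{k\in\mathbb{N}_0} H^k(\mathbb{R})$ and $$ p_{\mathcal{B},k}(f)= \Big(\sum_{j=0}^k \|f^{(j)}\|_{L^2}^2\Big)^{1\over 2}, \qquad {}^\Delta p_{2k}(f)= \Big\|\Big(R^2-\tfrac{d^2}{dx^2}\Big)^k f\Big\|_{L^2}\, ,$$ i.e. the classical Sobolev norms. On the Fourier transform side the norms $p_{\mathcal{B},2k}$ and ${}^\Delta p_{2k}$ correspond to the comparable weights $\sum_{j\leq 2k}\xi^{2j}$ and $(R^2+\xi^2)^{2k}$, so in this illustration the two families of norms are even equivalent, in accordance with the general comparisons $^\Delta p_{2k}\leq c_k\, p_{2k}$ and \eqref{Laplace bound}. \end{rmk}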
\par For the rest of this section we require that $G$ is real reductive and $G<\operatorname{GL}(m,\mathbb{R})$ for some $m$. In this situation we take the basis $\mathcal{B}=\{X_1, \ldots, X_n\}$ such that the Laplace element $\Delta$ as defined in \eqref{def Delta} satisfies $$\Delta= -\mathcal{C}_G +2\mathcal{C}_K$$ with $\mathcal{C}_G$ and $\mathcal{C}_K$ appropriate Casimir elements (unique if $\mathfrak{g}$ and $\mathfrak{k}$ are semisimple).
\begin{lemma}\label{unitary Laplace inequality2} Assume $(\pi,E)$ is irreducible and unitary and let $p$ be any continuous $K$-invariant Hermitian norm on $E^\infty$. Let $R>0$. Then for each $k\in\mathbb{N}$ there exists a constant $C=C(k,R)>0$, independent of $p$ and $\pi$, such that $$ p( \Delta_R^kv)\leq C p(\Delta_1^k v) \qquad (v\in E^\infty)\, .$$ \end{lemma}
\begin{proof} It suffices to prove this for $k=1$. Notice that any $v\in E^\infty$ admits a convergent expansion $v=\sum_{\tau\in \widehat K} v_\tau$ in $K$-types which is orthogonal with respect to any $K$-invariant Hermitian norm on $E^\infty$. Since $\Delta_R$ is $K$-invariant, the norm $p(\Delta_R\cdot)$ is $K$-invariant and Hermitian. Hence it suffices to show that $p(\Delta_R v)\leq C p(\Delta_1 v)$ for $v$ belonging to a $K$-type $E[\tau]$. Then both $\mathcal{C}_G$ and $\mathcal{C}_K$ act by scalars on $E[\tau]$. Hence $\Delta_R v=(c_\tau+R^2) v$ for some scalar $c_\tau$, which has to be
$\ge 0$ since the representation $\pi$ is unitary: use $\langle \Delta v, v\rangle \geq 0$ for all
$v\in E^\infty$ and $\langle\cdot, \cdot\rangle$
a unitary inner product on $E$. Then
$$p(\Delta_R v)=(c_\tau+R^2)p(v)\le C(c_\tau+1)p(v)=Cp(\Delta_1 v)$$ for $C=\max\{1,R^2\}$ and the lemma follows. \end{proof}
\subsection{Smooth completions of Harish-Chandra modules and spherical pairs}
We move on to Harish-Chandra modules and their canonical smooth completions. A useful reference for the following summary might be \cite{BK}.
\par If $V$ is a complex vector space and $p$ is a norm on $V$, then we denote by $V_p$ the Banach completion of the normed space $(V,p)$.
\par Let $V$ be a Harish-Chandra module (with regard to a fixed choice of a maximal compact subgroup $K$ of $G$). A norm $p$ on $V$ is called $G$-continuous provided the infinitesimal action of $\mathfrak{g}$ on $V$ exponentiates to a Banach representation of $G$ on $V_p$. Note that every Harish-Chandra module admits a $G$-continuous norm, as a consequence of the Casselman embedding theorem.
\par The Casselman-Wallach globalization theorem asserts that the space of smooth vectors $V_p^\infty$ is independent of the particular $G$-continuous norm $p$, i.e.~if $q$ is another $G$-continuous norm, then the identity map $V\to V$ extends to a $G$-equivariant isomorphism of Fr\'echet spaces $V_p^\infty \to V_q^\infty$. Stated differently, up to $G$-isomorphism of Fr\'echet spaces, there is a unique Fr\'echet completion $V^\infty$ of $V$ such that the $G$-action on $V^\infty$ is smooth and of moderate growth.
\par We extend $\mathfrak{a}$ to an abelian subalgebra $\mathfrak{j} = \mathfrak{a} +i\mathfrak{t}\subset \mathfrak{g}_\mathbb{C}$ with $\mathfrak{t}\subset \mathfrak{m}$ a maximal torus. Note that $\mathfrak{j}_\mathbb{C}$ is a Cartan subalgebra of $\mathfrak{g}_\mathbb{C}$ for which the roots are real valued on $\mathfrak{j}$, i.e.
$\Sigma(\mathfrak{g}_\mathbb{C}, \mathfrak{j}_\mathbb{C}) \subset \mathfrak{j}^*$. We denote by $\mathsf{W}_\mathfrak{j} = \mathsf{W}(\mathfrak{g}_\mathbb{C},\mathfrak{j}_\mathbb{C})$ the corresponding Weyl group and let $\rho_\mathfrak{j}\in \mathfrak{j}^*$ be a half-sum with $\rho_\mathfrak{j}|_\mathfrak{a}=\rho$, where $\rho$ is the half sum defined by $\mathfrak{n}$.
\par Assume now that $V$ is an irreducible Harish-Chandra module and denote by $\mathcal{Z}(\mathfrak{g})$ the center of $\mathcal{U}(\mathfrak{g})$. By the Schur-Dixmier lemma the elements of $\mathcal{Z}(\mathfrak{g})$ act by scalars on $V$ and we thus obtain an algebra morphism $\chi_V: \mathcal{Z}(\mathfrak{g}) \to \mathbb{C}$, the infinitesimal character of $V$. Via the Harish-Chandra isomorphism we identify $\mathcal{Z}(\mathfrak{g})\simeq S(\mathfrak{j}_\mathbb{C})^{\mathsf{W}_\mathfrak{j}}$, and consequently we may identify $\chi_V$ with an element of $\mathfrak{j}_\mathbb{C}^*/ \mathsf{W}_\mathfrak{j}$.
Let $V$ be an irreducible Harish-Chandra module and $V^\infty$ its canonical smooth completion. Further let $V^{-\infty}:= ({V^\infty})'$ be the continuous dual of $V^{\infty}$ and let $\eta\in (V^{-\infty})^H$ be an $H$-fixed element. We refer to $(V,\eta)$ as a {\it spherical pair} provided $\eta\neq 0$.
Let now $(V,\eta)$ be a spherical pair and $v\in V^\infty$. We form the generalized matrix coefficient
$$m_{v,\eta}(g\cdot z_0):= \eta( g^{-1}\cdot v) \qquad (g\in G)$$ which is a well-defined smooth function on $Z$, since $\eta$ is $H$-fixed.
\subsection{Abstract Plancherel theory}\label{subs APt} We denote by $\widehat G$ the unitary dual of $G$ and pick for every equivalence class $[\pi]$ a representative $(\pi, \mathcal{H}_\pi)$, i.e. $\mathcal{H}_\pi$ is a Hilbert space and $\pi: G \to U(\mathcal{H}_\pi)$ is an irreducible unitary representation in the equivalence class of $[\pi]$. We denote by $(\overline \pi, \mathcal{H}_{\overline \pi})$ the dual representation. We recall the $G$-equivariant antilinear equivalence $$\mathcal{H}_\pi \to \mathcal{H}_{\overline \pi}, \ \ v \mapsto \overline v:=\langle \cdot , v\rangle_{\mathcal{H}_\pi}$$ which induces the $G$-equivariant antilinear isomorphism:
$$\mathcal{H}_\pi^{-\infty} \to \mathcal{H}_{\overline \pi}^{-\infty}, \ \ \eta\mapsto \overline \eta; \ \overline \eta(\overline v) :=\overline{ \eta(v)}\, $$ and a linear embedding $\mathcal{H}_\pi^{\infty}\hookrightarrow \mathcal{H}_{\overline\pi}^{-\infty}$.
In this context we recall the mollifying map
$$ C_c^\infty(G) \otimes \mathcal{H}_{\overline \pi}^{-\infty} \to \mathcal{H}_\pi^\infty \subset\mathcal{H}_{\overline\pi}^{-\infty}, \ \ f\otimes \overline \eta \mapsto \overline\pi(f)\overline \eta:=\int_G f(g) \overline \eta (\overline \pi(g)^{-1}\cdot) \ dg\, . $$ The mollifying map restricted to $H$-invariants induces a map $$ C_c^\infty(G/H) \otimes (\mathcal{H}_{\overline \pi}^{-\infty})^H \to \mathcal{H}_\pi^\infty\, ,$$ $$ F\otimes \overline \eta \mapsto \overline\pi(F)\overline \eta:=\int_{G/H} F(gH) \overline \eta (\overline \pi(g)^{-1}\cdot) \ d(gH)\, . $$
The abstract Plancherel Theorem for the unimodular real spherical space $Z=G/H$ asserts the following (see \cite{Penney}, \cite{vanDijk}, or \cite[Section 8]{KS2}) : There exists a Radon measure $\mu$ on $\widehat G$ and for every $[\pi]\in\widehat G$ a Hilbert space $\mathcal{M}_{\pi} \subset (\mathcal{H}_{\pi}^{-\infty})^H $, depending measurably on $[\pi]$, (note that $(\mathcal{H}_{\pi}^{-\infty})^H$ is finite dimensional \cite{KO}, \cite{KS1}), such that with the induced Hilbert space structure on $\operatorname{Hom}(\mathcal{M}_{\overline \pi}, \mathcal{H}_\pi) \simeq \mathcal{M}_{\pi}\otimes \mathcal{H}_\pi$ the Fourier transform
$$ \mathcal{F}: C_c^\infty(Z) \to \int_{\widehat G}^\oplus \operatorname{Hom}(\mathcal{M}_{\overline \pi}, \mathcal{H}_\pi) \ d\mu(\pi) $$ $$ F\mapsto \mathcal{F}(F)= (\mathcal{F}(F)_\pi)_{\pi \in \widehat G}; \ \mathcal{F}(F)_\pi(\overline\eta):= \overline \pi(F) \overline \eta\in \mathcal{H}_\pi^\infty$$ extends to a unitary $G$-isomorphism from $L^2(Z)$ onto $\int_{\widehat G}^\oplus \operatorname{Hom}(\mathcal{M}_{\overline \pi}, \mathcal{H}_\pi) \ d\mu(\pi)$.
Moreover the measure class of $\mu$ is uniquely determined by $Z$ and we call $\mu$ a {\it Plancherel measure} for $Z$. Also uniquely determined, for almost all $\pi$, are the {\it multiplicity subspaces} $\mathcal{M}_{\pi} \subset (\mathcal{H}_{\pi}^{-\infty})^H$, together with their inner products up to positive scalars.
Note that by definition \begin{equation}\label{inner product with matrix coefficient} \langle F, m_{v,\eta} \rangle_{L^2(Z)}= \langle \mathcal{F}(F)_\pi(\bar\eta), v\rangle
\qquad (F\in C_c^\infty(Z))\, , \end{equation} for all $\eta\in \mathcal{M}_{\pi}, v\in \mathcal{H}^\infty_\pi$, and furthermore one has the {\it Parseval formula}
\begin{equation}\label{abstract Plancherel} \| F\|^2_{L^2(Z)} = \int_{\widehat G} \mathsf {H}_\pi (F) \ d \mu(\pi) \qquad (F\in C_c^\infty(Z))\, ,\end{equation} where $\mathsf {H}_\pi$ denotes the Hermitian form on $C_c^\infty(Z)$ defined by
\begin{equation} \label{Hermitian sum} \mathsf {H}_\pi(F)= \sum_{j=1}^{m_\pi} \|\overline \pi(F) \overline \eta_j \|_{\mathcal{H}_\pi}^2\end{equation} for $\overline \eta_1, \ldots, \overline \eta_{m_\pi}$ an orthonormal basis of $\mathcal{M}_{\overline \pi}$. Observe that $\mathsf {H}_\pi(F)$ is the Hilbert-Schmidt norm squared of the operator $\mathcal{F}(F)_\pi: \mathcal{M}_{\overline \pi} \to \mathcal{H}_\pi$ and hence does not depend on the choice of the particular orthonormal basis.
\begin{rmk} (Normalization of Plancherel measure) As mentioned, only the measure class $[\mu]$ of $\mu$ is unique. With a choice of Plancherel measure $\mu\in [\mu]$ we pin down uniquely the $G$-invariant Hermitian forms $\mathsf {H}_\pi$ on $\mathcal{H}_\pi\otimes \mathcal{M}_\pi$ for almost all $\pi$. In particular, together with a choice of an inner product on $\mathcal{H}_\pi$ (unique up to scalar by Schur's Lemma) we pin down the scalar product on $\mathcal{M}_\pi$ uniquely. \par Typically the $\mathcal{H}_\pi$ are induced representations with a preferred inner product, but in practice there are several meaningful choices for the inner product on the multiplicity space (see Section \ref{group case} and Section \ref{section DBS}). A different choice of inner product on $\mathcal{M}_\pi$ then leads to a rescaling of $\mu$ in its measure class. \end{rmk}
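\begin{rmk} For orientation only, and again leaving aside the standing assumptions on $G$, consider $G=Z=\mathbb{R}$ and $H=\{0\}$. Then $\widehat G\simeq \mathbb{R}$ via the unitary characters $\pi_\xi(x)=e^{ix\xi}$ on $\mathcal{H}_{\pi_\xi}=\mathbb{C}$, all multiplicity spaces are one dimensional, $\mathcal{F}(F)_{\pi_\xi}$ is essentially the classical Fourier transform $\widehat F(\xi)=\int_\mathbb{R} F(x)\, e^{-ix\xi}\ dx$, and the Parseval formula \eqref{abstract Plancherel} becomes the classical Plancherel theorem $$ \|F\|^2_{L^2(\mathbb{R})} = \int_\mathbb{R} |\widehat F(\xi)|^2 \ \frac{d\xi}{2\pi}\, .$$ Rescaling the inner products on the multiplicity lines rescales the Plancherel measure $\frac{d\xi}{2\pi}$ within the Lebesgue measure class, in accordance with the preceding remark. \end{rmk}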
\begin{rmk} \label{F-inverse}{\rm (Fourier inversion)} Let $f\in C_c^\infty(Z)$ be of the form
$f= (F^**F)^H$ where $F\in C_c^\infty(G)$, $F^*(g)=\overline {F(g^{-1})}$, and the upper index $H$ denotes the right $H$-average of $F^* * F$. Then $f(z_0)=\|F^H\|^2_{L^2(Z)}$. Hence we deduce from the Parseval formula \eqref{abstract Plancherel} for all $f\in C_c^\infty(Z)$ the inversion formula
\begin{equation} f(z_0) = \int_{\widehat G} \sum_{i=1}^{m_\pi} \Theta_\pi^i(f) \ d\mu(\pi) \end{equation} where $\Theta_\pi^i$ is the {\it spherical character}, i.e. the left $H$-invariant distribution
$$ \Theta_\pi^i(f) = \eta_i(\overline \pi(f) \overline \eta_i) \qquad (f\in C_c^\infty(Z))\, .$$ \end{rmk}
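\begin{rmk} For the reader's convenience we sketch the elementary computation behind the identity $f(z_0)=\|F^H\|^2_{L^2(Z)}$ used above; it only uses the compatible normalization $\int_G \phi(g)\ dg=\int_{G/H}\int_H \phi(gh)\ dh\ d(gH)$ of the (unimodular) Haar measures. With $F^H(gH)=\int_H F(gh)\ dh$ we compute \begin{align*} \|F^H\|^2_{L^2(Z)} &= \int_{G/H} \int_H\int_H \overline{F(gh_1)}\, F(gh_2)\ dh_1\, dh_2\ d(gH)\\ &= \int_{G/H}\int_H \int_H \overline{F(gh_1)}\, F(gh_1h)\ dh\, dh_1\ d(gH) = \int_G \int_H \overline{F(g)}\, F(gh)\ dh\, dg\\ &= \int_H (F^**F)(h)\ dh = (F^**F)^H(z_0)=f(z_0)\, , \end{align*} where we substituted $h_2=h_1 h$ and used $(F^**F)(x)=\int_G \overline{F(g)}\, F(gx)\ dg$. \end{rmk}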
\subsection{Tempered norms} We recall the standard tempered norms on $Z$. Using the weight functions $\mathbf {w}$ and $\mathbf{v}$ from \cite{KKSS2} Sections 3 and 4, the following norms on $C_c^\infty(Z)$ are attached to a parameter $N\in \mathbb{R}$:
\begin{align*}
q_N(f) &:= \sup_{z\in Z} |f(z)| \, \mathbf{v}(z)^\frac12 ( 1 +\mathbf {w}(z))^N\, ,\\
p_N(f) &:= \left(\int_Z |f(z)|^2 (1 + \mathbf {w}(z))^N \ dz\right)^\frac12 . \end{align*}
Note that the norm $p_N$ is $G$-continuous, $K$-invariant, and Hermitian. We recall that the two families of Sobolev norms $q_{N;k}$ and $p_{N;k}$ for $(N,k)\in\mathbb{R}\times\mathbb{N}_0$ define the same topology on $C_c^\infty(Z)$, and that specifically for $k> {\dim G\over 2}$ one has the inequality
\begin{equation} \label{Sob comparison} q_N(f) \leq C p_{N;k}(f) \qquad (f\in C_c^\infty(Z)) \end{equation} for a constant $C$ only depending on $k$ and $N$ (see \cite[Lemma 9.5]{KS2} and its proof).
\par We denote by $L_{N;k}^2(Z)$ the completion of $C_c^\infty(Z)$ with respect to $p_{N;k}$. We wish to define $L_{N;k}^2(Z)$ and $p_{N;k}$ as well for $k\in -\mathbb{N}$, and we do that by duality. Given the invariant measure on $Z$, the dual $L_N^2(Z)'$ is canonically isometrically isomorphic to $L_{-N}^2(Z)$ via the equivariant bilinear pairing $$L_N^2(Z)\times L_{-N}^2(Z)\to \mathbb{C}, \ \ (f, g) \mapsto \int_Z f(z) g(z)\ dz\, .$$ This leads to the definition
\begin{equation} \label{negative space} L_{N;-k}^2(Z):= L_{-N; k}^2(Z)' \qquad (k\in \mathbb{N})\end{equation} with
\begin{equation}\label{negative norms} p_{N;-k} (f) :=\sup_{\phi \in L_{-N;k}^2(Z)\atop p_{-N;k}(\phi)\leq 1} \left|\int_Z f(z) \phi(z) \ dz\right| \, .\end{equation}
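Let us also record, as a quick sketch, why the above pairing realizes $L_N^2(Z)'\simeq L_{-N}^2(Z)$ isometrically: by the Cauchy-Schwarz inequality $$ \Big|\int_Z f(z)\, g(z)\ dz\Big| = \Big|\int_Z \big[f(z)(1+\mathbf {w}(z))^{N\over 2}\big]\, \big[g(z)(1+\mathbf {w}(z))^{-{N\over 2}}\big]\ dz\Big| \leq p_N(f)\, p_{-N}(g)\, ,$$ and for fixed $f$ equality is attained for $g=\overline f\, (1+\mathbf {w})^{N}$, so that the dual norm of $p_N$ with respect to this pairing is precisely $p_{-N}$.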
\subsection{ Negative Sobolev norms}The definition of the negative Sobolev norms $p_{N;-k}$ for the norm $p_N$ fits into a general pattern which we recall in this Subsection. Given a Banach representation $(\pi, E)$ and a $G$-continuous norm $p$ on $E$ we define the dual norm $p'$ of $p$ on the continuous dual $E'$ as usual:
$$p'(\lambda)=\sup_{p(v)\leq 1} |\lambda(v)|\qquad (\lambda\in E').$$ In the sequel we assume that $p$ is a Hermitian norm. This guarantees in particular that the dual action of $G$ on $E'$ is continuous, i.e. $(\pi', E')$ is a representation. Further we retrieve $p$ from $p'$ via $p= (p')'$. For any $k\in \mathbb{N}_0$ we write $p'_k:=(p')_k$ for the $k$-th Sobolev norm of the dual norm $p'$ and define the negative Sobolev norm $p_{-k}$ of $p$ by \begin{equation}\label{def Sob negative} p_{-k}(v) := (p'_k)'(v) \qquad (v\in E)\, .\end{equation} Recall that we define Laplace Sobolev norms $^\Delta p_{2k}$ for all integers $k\in \mathbb{Z}$.
\begin{lemma} \label{lemma Sob negative} Let $(\pi, E)$ be a Hilbert representation of $G$ and $p$ a corresponding Hermitian norm. Then for all $k\in \mathbb{N}_0$ there exists a constant $C_k>0$ such that $$ ^\Delta p_{-2k-n^*} (v) \leq C_k p_{-2k}(v) \qquad (v\in E^\infty)\, .$$ \end{lemma}
\begin{proof} In view of the definition of the negative Sobolev norm $p_{-2k}$ in \eqref{def Sob negative} this follows from \eqref{Laplace bound} applied to the dual norm $p'$ and the observation that $$(^\Delta p'_{2k})' = {}^\Delta p_{-2k}$$ for all $k\in\mathbb{N}_0$. \end{proof}
\begin{lemma}\label{lemma Sobolev norms inequality} Let $(V,\eta)$ be a spherical pair where $V=V_\pi$ is the Harish-Chandra module of a unitary irreducible representation $\pi$, and let $N\in\mathbb{R}$ be such that $p_{N}(m_{v,\eta})<\infty$ for all $v\in V^ \infty$. Then for each $2k> n^*$ there exists a constant $C>0$, depending on $k$ but not on $(V,\eta)$ and $N$, such that \begin{equation}\label{Sobolev norms inequality}
p_{N}(m_{v,\eta}) \le C p_{N; -2k+n^*}(m_{\Delta_1^k v,\eta}) \qquad(v\in V^\infty). \end{equation} \end{lemma}
\begin{proof} In general we have for all $f\in E^\infty =L^2_N(Z)^\infty$ and fixed $R>R_E$ $$ p_N(f)= p_N(\Delta_R^{-k} \Delta_R^k f) = {^\Delta p}_{N; -2k}(\Delta_R^k f)\, .$$ Upon applying Lemma \ref{lemma Sob negative} we obtain that $$ p_N(f) \leq C p_{N; -2k + n^*}(\Delta_R^k f)\, .$$ Specifically for $f=m_{v,\eta}$ we arrive at $$ p_{N}(m_{v,\eta}) \le C p_{N; -2k+n^*}(m_{\Delta_R^k v,\eta}) \qquad(v\in V^\infty)\, .$$ Now $q(v):= p_{N; -2k +n^*} (m_{v,\eta})$ defines a $K$-invariant continuous Hermitian norm on $V^\infty$ and thus we may replace $R$ by $1$ according to Lemma \ref{unitary Laplace inequality2}. \end{proof}
\subsection{Tempered pairs} We now define
\begin{equation} \label{def NZ} N_Z:=2 \operatorname{rank}_\mathbb{R} Z +1\qquad k_Z:={\frac12 \dim\mathfrak{g}}.\end{equation} Then for all $N\geq N_Z$ and $k > k_Z$ it follows from \cite[Prop. 9.6]{KS2} combined with \cite[Th. 1.5]{B} that for $\mu$-almost all $[\pi]\in \widehat G$, the $\pi$-Fourier transform
$$\mathcal{F}_\pi: C_c^\infty(Z) \to \operatorname{Hom} (\mathcal{M}_{\overline \pi}, \mathcal{H}_\pi)$$ extends continuously to $L_{N;k}^2(Z)$ and that the corresponding inclusion \begin{equation} \label{HS11} L_{N;k}^2(Z) \to \int_{\widehat G}^\oplus \operatorname{Hom}(\mathcal{M}_{\overline \pi}, \mathcal{H}_\pi)\ d\mu(\pi)\end{equation} is Hilbert-Schmidt (in the sequel HS for short).
We wish to make this fact a bit more concrete in the context of the Hermitian forms
$\mathsf {H}_\pi$. For that purpose we fix $N$ and $k$ as above and denote by $\| \mathsf {H}_\pi\|_{{\rm HS}, N; k}$ the HS-norm of the operator $F\otimes \bar\eta\mapsto\bar\pi(F)\bar\eta$ from $L^2_{N;k}(Z) \otimes\mathcal{M}_{\bar\pi}$ to $\mathcal{H}_\pi$, that is
$$ \| \mathsf {H}_\pi\|^2_{{\rm HS}, N; k} := \sum_{n\in \mathbb{N}} \mathsf {H}_\pi(F_n)$$ for any orthonormal basis $(F_n)_{n\in \mathbb{N}}$ of $L_{N;k}^2(Z)$. The fact that \eqref{HS11} is HS then translates into the {\it a priori bound} \begin{equation} \label{global a-priori}
\int_{\widehat G} \| \mathsf {H}_\pi\|^2_{{\rm HS},N; k} \ d\mu(\pi)<\infty.\end{equation}
By \eqref{inner product with matrix coefficient} we further infer \begin{equation}\label{HiSch estimate} \sum_{j=1}^{m_\pi} p_{-N;-k}(m_{v,\eta_j})^2=
\sum_{j=1}^{m_\pi} \sup_{F\in C_c^\infty(Z)\atop p_{N;k}(F)\le 1} |\langle \mathcal{F}(F)_\pi(\bar\eta_j), v\rangle|^2
\le \|\mathsf {H}_\pi\|^2_{{\rm HS}, N; k}\, \|v\|_{\mathcal{H}_\pi}^2 \end{equation} for $\mu$-almost all $[\pi]\in\widehat G$, all $v\in\mathcal{H}_\pi^\infty$, and $\eta_1,\dots,\eta_{m_\pi}$ an orthonormal basis of $\mathcal{M}_\pi$.
Hence it follows from (\ref{global a-priori}) that
\begin{equation} \label{global a-priori 2}
\int_{\widehat G} \,\sup_{\eta\in\mathcal{M}_\pi\atop \|\eta\|\le 1}\, \sup_{v\in\mathcal{H}_\pi^\infty\atop \|v\|\le 1}\, p_{-N; -k} (m_{v,\eta})^2 \ d\mu(\pi)< \infty\, .\end{equation} Consequently $p_{-N;-k}(m_{v,\eta})<\infty$ for $N\geq N_Z$ and $k> k_Z$, for all $v\in\mathcal{H}_\pi^\infty$, $\eta\in\mathcal{M}_\pi$, and $\mu$-almost all $[\pi]$ .
In particular, for any $k$ with $2k - n^* > k_Z$ and $N\geq N_Z$ we obtain from Lemma \ref{lemma Sob negative} that \begin{align*} p_{-N}(m_{v,\eta}) &= p_{-N}( \Delta_R^{-k} \Delta_R^k m_{v,\eta}) = {^\Delta p}_{-N; 2k}( \Delta_R^k m_{v,\eta})\\ &\leq C p_{-N; -2k + n^*} ( m_{\Delta_R^kv,\eta})<\infty\end{align*} for all $v\in \mathcal{H}_\pi^\infty$ and $\mu$-almost all $[\pi]$.
\begin{definition}\label{defi temp pair} (cf. \cite[Def. 5.3]{KKSS2} and \cite[Sect. 3.3]{DKS}) Let $(V,\eta)$ be a spherical pair. We say that $\eta$ is {\it tempered} or $(V,\eta)$ is a {\it tempered pair} provided that
$$ p_{-N}(m_{v,\eta})<\infty \qquad (v\in V^\infty)$$ for some $N\in\mathbb{R}$. \end{definition}
The tempered functionals make up a subspace of $(V^{-\infty})^H$ which we denote by $(V^{-\infty})^H_{\rm temp}$. We conclude that $\mathcal{M}_\pi\subset (V^{-\infty})^H_{\rm temp}$ for almost all $\pi$.
\begin{rmk} (a) (About the inclusion $(V^{-\infty})^H_{\rm temp}\subset (V^{-\infty})^H$). For a tempered pair $(V,\eta)$ the inclusion $\{0\}\neq (V^{-\infty})^H_{\rm temp}\subset (V^{-\infty})^H$ can be strict. This already appears for the rank one symmetric spaces $Z= \operatorname{SO}_0(1,n)/\operatorname{SO}_0(1,n-1)$ when $n\geq 4$, in which case there exists an irreducible Harish-Chandra module which has multiplicity one in $L^p(Z)$ for $p\le n-1$ and multiplicity two for $p>n-1$. For details of this example we refer to \cite{KrKS}. \par (b) (Tempered Frobenius reciprocity). If we denote by $C^\infty_{\rm temp}(Z) =\bigcup_{N\in \mathbb{R}} L^2_{N}(Z)^\infty$ the $G$-module of smooth functions of moderate growth on $Z$, then we recall from \cite[3.10]{DKS} the following variant of Frobenius reciprocity for Harish-Chandra modules $V$: $$ \operatorname{Hom} (V^\infty, C^\infty_{\rm temp}(Z)) \simeq (V^{-\infty})_{\rm temp}^H$$ with $\operatorname{Hom}$ referring to continuous morphisms of $G$-modules. \par (c) (About the inclusion $\mathcal{M}_\pi \subset (V^{-\infty})^H$). For symmetric spaces one has equality \begin{equation} \label{mult equal} \mathcal{M}_\pi= (V^{-\infty})^H_{\rm temp} \quad\text{ for almost all $\pi$}\, .\end{equation} This was established by forming wave packets, which was a central technical step in the proof of the Plancherel formula for symmetric spaces. Since we follow another approach towards the Plancherel formula in this article, the equality \eqref{mult equal} together with an explicit description of $(V^{-\infty})^H$ is not an issue in the underlying treatment. However, we do expect that in general $\mathcal{M}_\pi =(V^{-\infty})^H_{\rm temp}$ for almost all $\pi$. \end{rmk}
\section{Constant term approximations}\label{section: ct}
In this section we review the constant term approximation of \cite{DKS} which is a central technical tool for this paper. In fact, by using our geometric results from Section \ref{structure of Z_I} on the stabilizer $H_I$, and our combinatorial results on the open $P$-orbits of Section \ref{subsection WI}, we are able to refine slightly the results from \cite{DKS}. \par Recall from \eqref{full deco W} that the set of open $P$-orbits $\mathcal{W}$ of $Z$ admits a combinatorial decomposition $\mathcal{W}=\coprod_{\mathsf{c}\in \sC_I} \coprod_{\mathsf{t}\in \sF_{I,\mathsf{c}} } {\bf m}_{\mathsf{c}, \mathsf{t}}(\mathcal{W}_{I,\mathsf{c}})$. For the sake of readability we first consider the part ${\bf m}(\mathcal{W}_I)\subset \mathcal{W}$ corresponding to $\mathsf{c}=\mathsf{t}={\bf1}$ and treat the notationally heavier case later.
\subsection{Notation} Let $V$ be an irreducible Harish-Chandra module with smooth completion $V^\infty$ and dual $V^{-\infty}$.
We recall that $(V^{-\infty})^H$ is a finite dimensional space for any real spherical subgroup $H\subset G$. Also we recall that $A_I$ normalizes $H_I$. Hence for any $I\subset S$ we obtain an action of $A_I$ on $(V^{-\infty})^{H_I}$ by $a_I\cdot\xi= \xi(a_I^{-1}\cdot)$ for $\xi\in(V^{-\infty})^{H_I}$. Accordingly we can decompose $\xi$ into generalized eigenvectors:
$$ \xi= \sum_{\lambda\in \mathfrak{a}_{I,\mathbb{C}}^*} \xi^{\lambda},$$ where $\xi^\lambda$ has generalized eigenvalue $\lambda$. We set \begin{equation}\label{defi generalized eigenvalues} \mathcal{E}_{\xi}:=\{ \lambda\in \mathfrak{a}_{I,\mathbb{C}}^*\mid \xi^{\lambda}\neq 0\}\, . \end{equation}
For $\eta\in(V^{-\infty})^{H}$ and $w\in \mathcal{W}$ we set $\eta_w:=w\cdot \eta$ and note that $\eta_w$ is $H_w$-fixed.
\subsection{Base points from ${\bf m}(\mathcal{W}_I)$} We recall from \eqref{bfm} the injective map ${\bf m}: \mathcal{W}_I \to \mathcal{W}$. Let now $w_I\in \mathcal{W}_I$ and $w={\bf m}(w_I)$. Then, given $\xi \in (V^{-\infty})^{H_I}$ we note that $\xi_{w_I}= w_I \cdot \xi$ is fixed under $(H_I)_{w_I}=(H_w)_I$, see \eqref{ConsisT1}. Moreover $A_I$ normalizes $(H_I)_{w_I}$ and we obtain from (a slight adaptation of) \cite[Lemma 6.2]{KKS2} that $(\xi^{\lambda})_{w_I}$ is a generalized eigenvector for the $\mathfrak{a}_I$-action with the same spectral value $\lambda$.
We recall that $\rho|_{\mathfrak{a}_H}=0$ by the requirement that $Z$ is unimodular, see \cite[Lemma 4.2]{KKSS2}. This allows us to consider $\rho$ as a functional on $\mathfrak{a}_Z=\mathfrak{a}/\mathfrak{a}_H$ as well. In the sequel if not stated otherwise we take $N=N_Z$ (see \eqref{def NZ}).
\begin{theorem} \label{loc ct temp} {\rm(Constant term approximation)} Let $Z=G/H$ be a unimodular real spherical space and $I\subset S$. Then for all irreducible Harish-Chandra modules $V$ there exists a unique linear map $$ (V^{-\infty})^H_{\rm temp} \to (V^{-\infty})^{H_I}_{\rm temp}, \ \ \eta\mapsto \eta^I$$ with the following property. For all compact sets $\Omega\subset G$ and $\mathcal{C}_I\subset \mathfrak{a}_I^{--}$ there exist $k\in \mathbb{N}$, $\epsilon>0$, and $C>0$, such that
\begin{equation} \label{cta}|m_{v,\eta}(g a_I w\cdot z_0) - m_{v,\eta^I} (g a_I w_I\cdot z_{0,I})| \leq C a_I^{(1+\epsilon)\rho} p_{-N;k} (m_{v,\eta})\end{equation} for all $\eta\in (V^{-\infty})^H_{\rm temp}$, $v\in V^{\infty}$, $g\in \Omega$, $a_I\in A_I^{--}$ with $\log a_I \in \mathbb{R}_{\geq 0}\mathcal{C}_I$, and $w={\bf m}(w_I)\in {\bf m}(\mathcal{W}_I)\subset \mathcal{W}$. The constants $k$, $\epsilon$, and $C$ can be chosen independently of $V$.
Moreover, with $\chi_V\in \mathfrak{j}_\mathbb{C}^*/\mathsf{W}_\mathfrak{j}$ the infinitesimal character of $V$ one has
\begin{equation}\label{exponents} \mathcal{E}_{\eta^I}\subset (\rho|_{\mathfrak{a}_I} + i\mathfrak{a}_I^*) \cap (\rho - \mathsf{W}_\mathfrak{j} \cdot \chi_V)|_{\mathfrak{a}_I}\,, \end{equation} where $\mathcal{E}_{\eta^I}$ is defined by \eqref{defi generalized eigenvalues}. Finally there is the consistency relation \begin{equation} \label{consist}(\eta_w)^I= (\eta^I)_{w_I} \qquad (w={\bf m}(w_I)\in W)\, .\end{equation} \end{theorem}
The constant term assignment $$ (V^{-\infty})^H_{\rm temp} \to (V^{-\infty})^{H_I}_{\rm temp}, \ \ \eta\mapsto \eta^I$$ is typically neither injective nor surjective. Let us illustrate that in two examples before giving the proof of the theorem.
\begin{ex} (a) Let $H=K$ be a maximal compact subgroup of $G$ and $I=\emptyset$. Then $H_\emptyset = M\overline N$. Now let $V$ be a $K$-spherical tempered Harish-Chandra module. Then $\dim V^K=1$. However, for generic $V$ we have $\dim (V^{-\infty})^{M\overline N}= |\mathsf{W}_\mathfrak{a}|$ with $\sW_\mathfrak{a}$ the Weyl group of the restricted root system $\Sigma(\mathfrak{g}, \mathfrak{a})$. This shows that the constant term assignment is typically not surjective. \par\noindent (b) Tempered pairs $(V,\eta)$ of the twisted discrete series can be characterized by the vanishing of the constant term assignments for $I\neq S$, see \cite[Th. 5.12]{DKS}. In particular, if $(V,\eta)$ belongs to the discrete series of $Z$, then we have $\eta^I=0$ for all $I\neq S$. Hence the constant term assignment is typically not injective. \end{ex}
\begin{proof} The existence of an $\eta^I\in(V^{-\infty})^{H_I}_{\rm temp}$ satisfying \eqref{cta}, \eqref{exponents} and \eqref{consist} is proved in \cite{DKS}, with the exception that the invariance of $\eta^I$ is only shown for the identity component of $H_I$. More precisely, \eqref{cta} with $H_I$ replaced by $(H_I)_0$ is \cite[Th. 7.10]{DKS}, with the caveat that in \cite{DKS} the norms to bound the right hand side of \eqref{cta} are Sobolev norms of $q_{-N}$ and not of $p_{-N}$. However, the passage between $q_{-N}$ and $p_{-N}$ is justified by the comparison of Sobolev norms in \eqref{Sob comparison} which is valid for any $N\in \mathbb{R}$. The inclusion of exponents \eqref{exponents} is part of the general theory in \cite{DKS} and the consistency relation in \eqref{consist} is \cite[Prop. 5.7]{DKS}.
\par We turn to the uniqueness of the map $\eta\to \eta^I$. We recall that $(V^{-\infty})^{H_I}$ is a finite dimensional $A_I$-module and thus
\eqref{exponents} implies that for any fixed $g\in G$ and $v\in V^\infty $ the map $$A_I \ni a_I\mapsto m_{v,\eta^I}(ga_I \cdot z_{0,I})= m_{v,a_I\cdot \eta^I}(g \cdot z_{0,I})$$ is an exponential polynomial with normalized unitary exponents, and hence it is unique as a constant term approximation of $m_{v,\eta}(ga_I\cdot z_0)$, see Remark \ref{rmk unique approx} below. In particular, $\eta^I$ is then uniquely determined by the approximation property \eqref{cta}. \par Finally we will show that $\eta^I$ is in fact $H_I$-invariant for all $\eta\in(V^{-\infty})^H_{\rm temp}$. We do this for the case of $w_I=w={\bf1}$, the more general case being an easy adaptation. We recall Lemma \ref{lemma H_I limit} and the notation used therein.
\par Let $X_I\in \mathfrak{c}_I^{--}$ be the element corresponding to $-{\bf e}_I$ under the identification $\mathfrak{a}_I\simeq V_I$. Set $a_t:=\exp(tX_I)$ for $t\geq 0$. First notice that both $m_{v,\eta^I} (gh_I a_t\cdot z_{0,I})$ and $m_{v,\eta^I} (gx_t a_t\cdot z_{0,I})$ approximate $$m_{v,\eta}(gh_Ia_t \cdot z_0) = m_{v,\eta}(gx_t a_t\cdot z_0)$$ via \eqref{cta}, and thus we get
\begin{equation} \label{inv1} a_t^{-\rho} |m_{v,\eta^I}(gh_Ia_t\cdot z_{0,I})- m_{v,\eta^I}
(gx_ta_t \cdot z_{0,I})| \leq C e^{-\epsilon t}\end{equation} for some $C,\epsilon >0$. On the other hand, the coefficients of the exponential polynomial $$a_I\mapsto a_I^{-\rho} m_{v,\eta^I}(gx_ta_I\cdot z_{0,I})=a_I^{-\rho}m_{(gx_t)^{-1}v, a_I\cdot \eta^I}(z_{0,I})$$ with unitary exponents depend smoothly on $gx_t$. Hence it follows, after possibly shrinking $\epsilon$,
from \eqref{normal-approx} that
\begin{equation} \label{inv2}|a^{-\rho} m_{v,\eta^I} (gx_t a\cdot z_{0,I})- a^{-\rho} m_{v,\eta^I} (ga\cdot z_{0,I})|\leq C e^{-\epsilon t}\end{equation} for all $a\in A_I$. Now the $H_I$-invariance of $\eta^I$ follows from combining (\ref{inv1}) and (\ref{inv2}) together with the before mentioned uniqueness. \end{proof}
\begin{rmk}\label{rmk unique approx} (Uniqueness of the constant term) Let $f(a)$ be a function on $A_I$ and $$F(a) = a^\rho \sum_{\lambda\in\mathcal{E}} q_\lambda(\log a) a^\lambda\qquad (a \in A_I) $$ an exponential polynomial with unitary exponents, i.e. $\mathcal{E}\subset i\mathfrak{a}_I^*$ is finite and the $q_\lambda$ are polynomial functions on $\mathfrak{a}_I$. If there exists an $\epsilon>0$ such that
\begin{equation} \label{unique approx} |f(a) - F(a)| \leq C a^{(1+\epsilon)\rho}\qquad (a \in A_I^-)\, ,\end{equation} then $F$ is the unique exponential polynomial with normalized unitary exponents having the approximation property \eqref{unique approx}. This is a consequence of the following basic lemma, which we record without proof.\end{rmk}
\begin{lemma} \label{lemma basic ineq} Let $\Lambda\subset\mathbb{R}$ be a finite set and for each $\lambda\in\Lambda$ let $q_\lambda\in\mathbb{C}[t]$ be a polynomial. If there exist constants $\epsilon,C>0$ such that
$$\Big|\,\sum_{\lambda\in\Lambda} q_\lambda(t)e^{i\lambda t}\,\Big| < C e^{-\epsilon t}\qquad (t\geq 0)$$ then $q_\lambda=0$ for all $\lambda\in\Lambda$. \end{lemma}
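Although we follow the practice of recording Lemma \ref{lemma basic ineq} without proof, we sketch one standard argument for the convenience of the reader. Assume that not all $q_\lambda$ vanish, let $d$ be the maximal degree of the $q_\lambda$, and pick $\mu\in\Lambda$ such that $q_\mu$ has degree $d$ with leading coefficient $c\neq 0$. Averaging against $e^{-i\mu t}$ yields $$ \frac1T\int_0^T \Big(\sum_{\lambda\in\Lambda} q_\lambda(t)\, e^{i\lambda t}\Big)\, e^{-i\mu t}\ dt = \frac{c}{d+1}\, T^d + \mathrm{O}(T^{d-1}) \qquad (T\to\infty)\, ,$$ since the term with $\lambda=\mu$ contributes $\frac1T\int_0^T q_\mu(t)\ dt$ while for $\lambda\neq\mu$ integration by parts gives $\frac1T\int_0^T q_\lambda(t)\, e^{i(\lambda-\mu)t}\ dt = \mathrm{O}(T^{d-1})$. On the other hand the assumed bound gives $\big|\frac1T\int_0^T(\cdots)\ dt\big|\leq \frac{C}{T}\int_0^T e^{-\epsilon t}\ dt\leq \frac{C}{\epsilon T}\to 0$, contradicting $c\neq 0$ and $d\geq 0$. Hence all $q_\lambda=0$.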
\subsection{General base points}\label{subsection all base points}
So far we have treated the constant term approximation through the base points $z_w=w\cdot z_0$ for $w\in {\bf m}(\mathcal{W}_I)$. The general case is obtained by adapting the notation to the partition $\mathcal{W}=\coprod_{\mathsf{c}\in \sC_I} \coprod_{\mathsf{t}\in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c},\mathsf{t}} (\mathcal{W}_{I,\mathsf{c}})$ from \eqref{full deco W}.
\par For $\mathsf{c}\in\sC_I$ and $\mathsf{t} \in \sF_{I,\mathsf{c}}$ we define $w(\mathsf{c},\mathsf{t}):={\bf m}_{\mathsf{c}, \mathsf{t}}({\bf1})\in \mathcal{W}$ and set $z_{\mathsf{c}, \mathsf{t}}= w(\mathsf{c}, \mathsf{t})\cdot z_0$. Further we set $w(\mathsf{c})={\bf m}_{\mathsf{c}, {\bf1}}({\bf1})\in \mathcal{W}$ and $z_\mathsf{c}= w(\mathsf{c})\cdot z_0$. Let $H_{\mathsf{c}, \mathsf{t}}$ and $H_\mathsf{c}$ denote the $G$-stabilizers of $z_{\mathsf{c},\mathsf{t}}$ and $z_\mathsf{c}$ respectively.
\par Define for $\eta\in(V^{-\infty})^{H}$ accordingly $\eta_{\mathsf{c}, \mathsf{t}}:= w(\mathsf{c},\mathsf{t})\cdot \eta$. Notice that $\eta_{\mathsf{c},\mathsf{t}}^I$ is invariant under $(H_{\mathsf{c},\mathsf{t}})_I$. From \eqref{WWI2 general} we infer further that $(H_{\mathsf{c},\mathsf{t}})_I=H_{I,\mathsf{c}}$ does not depend on $\mathsf{t}$.
\par As before we obtain that $A_I$ normalizes $(H_{I,\mathsf{c}, \mathsf{t}})_{w_I}=(H_{I,\mathsf{c}})_{w_I}$, so that $A_I$ acts naturally on $(H_{I,\mathsf{c}})_{w_I}$-invariant distribution vectors $\xi$ and yields generalized eigenspace decompositions $\xi= \sum_{\lambda\in \mathfrak{a}_{I,\mathbb{C}}^*} \xi^\lambda$. Within the introduced terminology the general case of the constant term approximation then reads as follows:
\begin{theorem} \label{loc ct temp2} {\rm(Constant term approximation - general version)} Let $Z=G/H$ be a unimodular real spherical space and $I\subset S$. Fix $\mathsf{c}\in \sC_I$ and $\mathsf{t}\in \sF_{I,\mathsf{c}}$. Then for all irreducible Harish-Chandra modules $V$ there exists a unique linear map
$$ (V^{-\infty})^H_{\rm temp} \to (V^{-\infty})^{H_{I,\mathsf{c}}}_{\rm temp}, \ \ \eta\mapsto \eta_{\mathsf{c}, \mathsf{t}}^I$$ with the following property: There exist constants $\epsilon>0$ and $k\in \mathbb{N}$ such that for all compact subsets $\mathcal{C}_I\subset \mathfrak{a}_I^{--}$ and $\Omega\subset G$ there exists a constant $C>0$ such that
\begin{equation} \label{cta2}|m_{v,\eta}(g a_I w\cdot z_0) - m_{v,\eta_{\mathsf{c},\mathsf{t}}^I} (g a_I w_{I,\mathsf{c}}\cdot z_{0, I, \mathsf{c}})| \leq C a_I^{(1+\epsilon)\rho} p_{-N;k} (m_{v,\eta})\end{equation} for all $\eta\in (V^{-\infty})^H_{\rm temp}$, $v\in V^{\infty}$, $g\in \Omega$, $a_I\in A_I^{--}$ with $\log a_I \in \mathbb{R}_{\geq 0}\mathcal{C}_I$, and $w={\bf m}_{\mathsf{c},\mathsf{t}}(w_{I,\mathsf{c}})\in {\bf m}_{\mathsf{c},\mathsf{t}}(\mathcal{W}_{I,\mathsf{c}})\subset \mathcal{W}$. The constants $\epsilon$, $k$, and $C$ can all be chosen independently of $V$.
Moreover, with $\chi_V\in \mathfrak{j}_\mathbb{C}^*/\mathsf{W}_\mathfrak{j}$ the infinitesimal character of $V$ one has
\begin{equation}\label{exponents2} \mathcal{E}_{\eta_{\mathsf{c},\mathsf{t}}^I}\subset (\rho|_{\mathfrak{a}_I} + i\mathfrak{a}_I^*) \cap (\rho - \mathsf{W}_\mathfrak{j} \cdot \chi_V)|_{\mathfrak{a}_I}. \end{equation} Finally there is the consistency relation \begin{equation} \label{consist2}(\eta_w)^I= (\eta_{\mathsf{c},\mathsf{t}}^I)_{w_{I,\mathsf{c}}} \qquad (w={\bf m}_{\mathsf{c},\mathsf{t}}(w_{I,\mathsf{c}})\in W)\, .\end{equation} \end{theorem}
\begin{proof} By replacing $H_{I,\mathsf{c}}$ with $H_I$ we may assume that $\mathsf{c}={\bf1}$. Let then
$w\in {\bf m}_\mathsf{t}(\mathcal{W}_I)$. The passage to $\mathsf{t}={\bf1}$ is obtained via the material in Subsection \ref{subsection c=1} and via the further base point shift $z_0\to z_\mathsf{t}$. By this we obtain a reduction to Theorem \ref{loc ct temp}. \end{proof}
\section{The main remainder estimate}\label{main remainder}
In this section we derive an important uniform estimate which is the key technical tool for the results in the next section. The estimate is based on the constant term approximation of Section \ref{section: ct}.
\subsection{Adjustment of Haar measures}\label{measures}
We assume that $Z=G/H$ carries a $G$-invariant measure. Then, according to \cite[Lemma 3.12]{KKS2}, the same holds for $Z_I:=G/H_I$.
Since $L\cap H=L\cap H_I$ by Lemma \ref{equal L cap H}, we see that the $P$-orbits through $z_0$ and $z_{0,I}$ are isomorphic as homogeneous spaces for $Q$, i.e. \begin{equation}\label{Q-orbit iso} P\cdot z_0= Q\cdot z_0\simeq Q/ L\cap H\simeq Q\cdot z_{0,I} = P \cdot z_{0,I}\, . \end{equation}
\noindent We fix the normalizations of the $G$-invariant measures on $Z$ and $Z_I$ such that on these open pieces they coincide with a common Haar measure on $Q/L\cap H$, and we denote these measures on $Z$ and $Z_I$ by $dz$ and $dz_I$, respectively.
\subsection{Right action by $A(I)$}\label{right action}
\par As $A(I)$ normalizes $H_I$ we obtain a right action of $A(I)$ on functions $f$ on $Z_I$ given by
$$ (R(a_I) f) (g\cdot z_{0,I}):= f( g a_I \cdot z_{0,I}) \qquad (g\in G, a_I \in A(I))\, .$$
\begin{lemma}\label{lemma ZI-int} Let $f\in L^1(Z_I)$ and $a_I\in A(I)$. Then
\begin{equation} \label{int ZI} \int_{Z_I} (R(a_I)f)(z_I) \ dz_I = |a_I^{2\rho}| \int_{Z_I} f(z_I) \ dz_I\, \end{equation}
In particular, the normalized action $ f\mapsto |a_I^{-\rho}| R(a_I) f$ of $A(I)$ is unitary on $L^2(Z_I)$. \end{lemma}
\begin{proof} First note that $|a^\rho|=1$ for all $a\in T_Z=\exp(i\mathfrak{a}_H^\perp)\subset \algebraicgroup{ A}$. Since elements of $F(I)$ have finite order it is sufficient to consider $a_I \in A_I\subset A(I)$. The first assertion then follows from \cite[Lemma 8.4]{KKS2}, and the second assertion is a consequence of the first. \end{proof}
Fix an element $X\in \mathfrak{a}_I^{--}$ and set $a_t:=\exp(tX)$ for $t\in\mathbb{R}$. Let $f\in L^2(Z_I)$ and define \begin{equation} \label{defi f_t} f_t(z):= a_t^{\rho} (R(a_t^{-1})f)(z),\quad (z\in Z_I). \end{equation} Notice that the assignment $f\mapsto f_t$ is $G$-equivariant and unitary by Lemma \ref{lemma ZI-int}. In particular
\begin{equation} \label{match1} \| f_t\|_{L^2(Z_I)}= \|f\|_{L^2(Z_I)} \qquad(t\in\mathbb{R})\end{equation} and, in case $f$ is smooth, \begin{equation} \label{match equivariant} L_u f_t = (L_u f)_t \qquad (u \in \mathcal{U}(\mathfrak{g}))\, .\end{equation}
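Explicitly, \eqref{match1} amounts to the following one-line computation, where Lemma \ref{lemma ZI-int} is applied to $|f|^2\in L^1(Z_I)$ and $a_I=a_t^{-1}$ (note that $a_t^{\pm 2\rho}>0$): $$ \|f_t\|^2_{L^2(Z_I)} = a_t^{2\rho} \int_{Z_I} |(R(a_t^{-1})f)(z_I)|^2\ dz_I = a_t^{2\rho}\, a_t^{-2\rho} \int_{Z_I} |f(z_I)|^2\ dz_I = \|f\|^2_{L^2(Z_I)}\, .$$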
\subsection{Matching of functions}\label{matching functions} We recall from Section \ref{subsection WI} the injective map ${\bf m}: \mathcal{W}_I \to \mathcal{W}$ which matches the open $Q$-orbit $Qw_I\cdot z_{0,I}=Pw_I\cdot z_{0,I}$ in $Z_I$ with the open $Q$-orbit $Qw\cdot z_0=Pw\cdot z_0$ in $Z$ where $w={\bf m}(w_I)$. As in \eqref{Q-orbit iso} we have
\begin{equation} \label{QwwI} Qw\cdot z_0\simeq Q/ L\cap H\simeq Qw_I \cdot z_{0,I}\, .\end{equation}
Given a smooth function $f$ on $Z_I$ with compact support in $Q\mathcal{W}_I \cdot z_{0,I}\subset Z_I$ we define via \eqref{QwwI} a `matching' smooth function $F=\Phi(f)$ on $Z$ with compact support in $Q{\bf m}(\mathcal{W}_I) \cdot z_0\subset Z$ by \begin{equation}\label{match fF} F(q{\bf m}(w_I)\cdot z_0):= f(qw_I \cdot z_{0,I})\qquad (q\in Q)\, . \end{equation} Observe that the space spanned by the smooth functions on $Z_I$ with compact support contained in the union of the open $Q$-orbits $Q\mathcal{W}_I\cdot z_{0,I}$ is dense in $L^2(Z_I)$.
\par
Since the invariant measures on $Z$ and $Z_I$ coincide on the open $Q$-orbits we get
\begin{equation*} \|\Phi(f)\|_{L^2(Z)}= \|f\|_{L^2(Z_I)}.\end{equation*} Together with \eqref{match1} this implies for the function $f_t$ defined in \eqref{defi f_t}
\begin{equation} \label{match2} \|\Phi(f_t)\|_{L^2(Z)}=\|f\|_{L^2(Z_I)}\end{equation} for all $t\in \mathbb{R}$.
The main result of this section now reads as follows. Let $N=N_Z$ be as in \eqref{def NZ}.
\begin{theorem}\label{matching comparison} {\rm (Main remainder estimate)} There exists $\epsilon>0$ with the following property. Let $\Omega\subset Q$ be a compact set. Then for every $s\in\mathbb{R}$ there exist $C>0$ and $m\in \mathbb{N}$ such that for all $f\in C_c^\infty(Z_I)$ with $\operatorname{supp} f \subset \Omega \mathcal{W}_I \cdot z_{0,I}$, all tempered pairs $(V,\eta)$, and all $v\in V^\infty$ the following equality holds
$$ \langle \Phi(f_t), m_{v,\eta}\rangle_{L^2(Z)} = \langle f_t, m_{v,\eta^I}\rangle_{L^2(Z_I)} + R(t)\qquad (t\ge 0)\, ,$$ with the remainder bounded by
$$| R(t)| \leq C e^{ -t \epsilon} \, p_{-N; -s}(m_{v,\eta}) \, p_{N;m}(\Phi(f))\,.$$
\end{theorem}
Before giving the proof we observe the following corollary. Recall from (\ref{Hermitian sum}) the Hermitian forms $\mathsf {H}_\pi$ on $C_c^\infty(Z)$. We fix an orthonormal basis $\eta_1, \ldots, \eta_{m_\pi}$ of $\mathcal{M}_\pi$ and define a preliminary Hermitian form $\mathsf {H}_\pi^{I, \rm pre}$ on $C_c^\infty(Z_I)$ by
\begin{equation} \label{Hermitian sum I-side} \mathsf {H}_\pi^{I, \rm pre}(f)= \sum_{j=1}^{m_\pi} \|\overline \pi(f) \overline \eta_j^I \|_{\mathcal{H}_\pi}^2 \qquad (f\in C_c^\infty(Z_I))\,. \end{equation} Notice that $\mathsf {H}_\pi^{I, \rm pre}$ is independent of the particular choice of the orthonormal basis $\eta_1, \ldots, \eta_{m_\pi}$, being the Hilbert-Schmidt norm squared of the linear map $$\mathcal{M}_{\overline\pi} \to \mathcal{H}_\pi,\ \ \overline \eta\mapsto\overline\pi(f)\overline\eta^I\, .$$
We derive from Theorem \ref{matching comparison} and the global a priori bound
\eqref{global a-priori} that:
\begin{cor} \label{main cor} Let $\epsilon>0$ be as in Theorem \ref{matching comparison} and let $f\in C_c^\infty(Z_I)$ with support in $ Q\mathcal{W}_I \cdot z_{0,I}$. Then there exists a constant $C>0$ such that
$$\| f \|^2_{L^2(Z_I)} = \int_{\widehat G} \mathsf {H}_\pi^{I, \rm pre} (f_t) \ d\mu(\pi) + R(t)$$
with $|R(t)| \leq C e^{-\epsilon t}$ for all $t\geq 0$. \end{cor}
\begin{proof} We first observe that by \eqref{match2} and \eqref{abstract Plancherel}-\eqref{Hermitian sum} \begin{equation}\label{first observe}
\| f \|^2_{L^2(Z_I)} = \int_{\widehat G} \mathsf {H}_\pi(\Phi(f_t)) \, d\mu(\pi) =
\int_{\widehat G} \sum_{j=1}^{m_\pi} \|\overline \pi(\Phi(f_t))\overline \eta_j\|_{\mathcal{H}_\pi}^2\, d\mu(\pi). \end{equation} Hence we need to estimate the integral over $\pi\in\widehat G$ of $$ \sum_{j=1}^{m_\pi} \Big(
\|\overline \pi(\Phi(f_t))\overline \eta_j\|_{\mathcal{H}_\pi}^2 - \|\overline \pi(f) \overline \eta_j^I \|_{\mathcal{H}_\pi}^2 \Big)\, . $$ Using the identity $a^2-b^2=2a(a-b)-(a-b)^2$ together with Cauchy-Schwarz and \eqref{first observe}, we see that it suffices to show \begin{equation}\label{suffices to show} \bigg[\int_{\widehat G} \sum_{j=1}^{m_\pi} \Big(
\|\overline \pi(\Phi(f_t))\overline \eta_j\|_{\mathcal{H}_\pi} - \|\overline \pi(f) \overline \eta_j^I \|_{\mathcal{H}_\pi} \Big)^2\, d\mu(\pi) \bigg]^{1/2} \le C e^{-\epsilon t}\,. \end{equation}
From the dense inclusion $\mathcal{H}_\pi^\infty \subset \mathcal{H}_\pi$ and \eqref{inner product with matrix coefficient} we obtain that $$
\|\overline \pi(\Phi(f_t))\overline \eta_j\|_{\mathcal{H}_\pi}
= \sup_{v\in \mathcal{H}_\pi^\infty\atop \|v\|=1} \langle \overline \pi(\Phi(f_t))\overline \eta_j, v\rangle_{\mathcal{H}_\pi}
= \sup_{v\in \mathcal{H}_\pi^\infty\atop \|v\|=1} \langle \Phi(f_t), m_{v,\eta_j}\rangle_{L^2(Z)} $$ and similarly $$
\|\overline \pi(f)\overline \eta_j^I\|_{\mathcal{H}_\pi}
= \sup_{v\in \mathcal{H}_\pi^\infty\atop \|v\|=1} \langle f, m_{v,\eta_j^I}\rangle_{L^2(Z_I)}\, . $$ Let $s>k_Z$ (see \eqref{def NZ}). Now application of Theorem \ref{matching comparison} implies for all $t>0$
$$ \Big|\|\overline \pi(\Phi(f_t))\overline \eta_j\|_{\mathcal{H}_\pi}-
\|\overline \pi(f)\overline \eta_j^I\|_{\mathcal{H}_\pi} \Big| \leq C e^{-t\epsilon} \sup_{v\in \mathcal{H}_\pi^\infty
\atop \|v\|=1} p_{-N; -s}(m_{v,\eta_j})\, ,$$ where $C>0$ depends on $f$, but not on $t$ or $\pi$. Hence \eqref{suffices to show} follows from \eqref{HiSch estimate} and \eqref{global a-priori}. \end{proof}
\subsection{Comparing Haar measures}\label{matching measures}
In the proof of Theorem \ref{matching comparison} we will assume for simplicity that $\operatorname{supp} f\subset \Omega\cdot z_{0,I}$. The general case is obtained using the following observation. Recall that the Haar measures of $Z$ and $Z_I$ are both adjusted to agree with a fixed Haar measure of $Q/Q_H$ on the $Q$-orbits through $z_0$ and $z_{0,I}$.
Recall from the local structure theorem that \begin{equation}\label{PQ-corr}Qw\cdot z_0\simeq Q/ Q_H\simeq U \times L/L_H\end{equation} and by \eqref{QwwI} likewise $Qw_I\cdot z_{0,I} \simeq Q/ Q_H$. We claim that the Haar measures of $Z$ and $Z_I$ coincide on every open $Q$-orbit with the fixed normalized measure on $Q/ Q_H$. Let us verify this for $Z$, the proof for $Z_I$ being analogous. We first implement the Haar measure on $Q/Q_H$
via a density $|\omega_Z|$ obtained from a top degree differential form $\omega_Z\in \bigwedge^{\rm top} (\mathfrak{q}/ \mathfrak{q}\cap \mathfrak{h})^*$. As usual we decompose $w=\tilde th$ with $\tilde t\in T_Z$ and $h\in \algebraicgroup{H}$, see \eqref{th-deco}. Then $\operatorname{Ad}(\tilde t)$ preserves $(\mathfrak{q}/ \mathfrak{q}\cap \mathfrak{h})_\mathbb{C}$ and thus acts on $\bigwedge^{\rm top} (\mathfrak{q}/ \mathfrak{q}\cap \mathfrak{h})_\mathbb{C}^*$ by a unit scalar. Since the scalar has to be real, it equals $\pm 1$ and the claim follows.
\subsection{Matching derivatives}\label{comparison of derivatives}
Before we can give the proof of Theorem \ref{matching comparison} we need the following lemma.
\begin{lemma} \label{lemma match2} Let $\Omega\subset Q$ be a compact subset. Then the following assertions hold: \begin{enumerate} \item \label{lemma match2a} Let $u\in \mathcal{U}(\mathfrak{g})$. There exist $u_1, \ldots, u_k\in \mathcal{U}(\mathfrak{q})$ with $\deg u_j \leq \deg u$ and a constant $C=C(\Omega, u)$ such that \begin{equation}\label{comparison with L_u}
\big|[\Phi(L_u(f_t))- L_u (\Phi(f_t))](z)\big| \leq C
\underset{\sigma\in S\backslash I}\max a_t^\sigma \,\sum_{j=1}^k \left| L_{u_j} (\Phi(f_t))(z)\right| \end{equation} for all $f\in C_c^\infty(Z_I)$ with support in $\Omega \mathcal{W}_I \cdot z_{0,I}$, and all $z\in Z$, $t\ge 0$. \item \label{lemma match2b}Let $p_0$ denote the $L^2$-norm on $L^2(Z)$. Then for every $k\in\mathbb{N}_0$ there exists a constant $C=C(\Omega, k)>0$ such that \begin{equation}\label{approximately unitary} p_{0;k}(\Phi(f_t)) \leq C p_{0;k}(\Phi(f))\end{equation} for all $f\in C_c^\infty(Z_I)$ with support in $\Omega \mathcal{W}_I \cdot z_{0,I}$ and $t\ge 0$. \end{enumerate}
\end{lemma}
\begin{proof} Since the map $$ \Phi: C_c^\infty (Q\mathcal{W}_I\cdot z_{0,I})\to C_c^\infty(QW\cdot z_0), \ \ f\mapsto \Phi(f)$$ is $Q$-equivariant we have \begin{equation}
\label{Y-inv} \Phi(L_Yf)= L_Y\Phi(f) \end{equation} for all $Y\in\mathfrak{q}$.
For simplicity we consider the case $\operatorname{supp} f\subset \Omega\cdot z_{0,I}$. We first calculate $L_X(\Phi(f_t))(qa_t \cdot z_0)$ and $\Phi(L_X (f_t))(qa_t \cdot z_0)$ for $X\in\mathfrak{g}$.
\par For that we recall that $\mathfrak{g}=\overline \mathfrak{u}+\mathfrak{q}$ is a direct sum. More generally for all $q\in Q$ the sum $\mathfrak{g}=\operatorname{Ad}(q) \overline \mathfrak{u}+\mathfrak{q}$ is direct. Accordingly we can decompose any $X\in \mathfrak{g}$ as $$ X=\sum_{\alpha, k} c_{\alpha, k} (q) \operatorname{Ad}(q)X_{-\alpha}^k + \sum_j d_j(q) X_j$$ where $(X_{-\alpha}^k)_k$ is a basis of $\mathfrak{g}^{-\alpha}$, $\alpha\in \Sigma_\mathfrak{u}$, and $(X_j)_j$ is a basis of $\mathfrak{q}$. The coefficients $c_{\alpha, k}(q), d_j(q)\in \mathbb{R}$ depend smoothly on $q$.
\par Recall that $X_{-\alpha}^k+ \sum_\beta X_{\alpha, \beta}^k\in \mathfrak{h} $ by \eqref{eq-hi2} with $I=S$. Thus we get for every smooth function $F$ on $Z$ and every $q\in Q$, $a\in A_Z$ that
\begin{equation*} L_X F(qa\cdot z_0)=\sum_j d_j(q) L_{X_j}F(qa\cdot z_0) -\sum_{\alpha,\beta, k} c_{\alpha, k}(q) a^{\alpha+\beta} L_{\operatorname{Ad}(q) X_{\alpha,\beta}^k} F(qa\cdot z_0)\,. \end{equation*} By expanding each $\operatorname{Ad}(q) X_{\alpha,\beta}^k$ in terms of the $X_j$ we can rephrase this identity as \begin{equation}\label{LYa} L_X F(qa\cdot z_0)=\sum_j \Big[ d_j(q) -\sum_{\alpha,\beta} c_{j,\alpha,\beta}(q) a^{\alpha+\beta}\Big]L_{X_j} F(qa\cdot z_0) \end{equation} with coefficients $c_{j,\alpha,\beta}$ depending smoothly on $q$.
On the other hand by \eqref{eq-hi2} we also have $X_{-\alpha}^k+ \sum_{\alpha +\beta \in \langle I\rangle} X_{\alpha, \beta}^k\in \mathfrak{h}_I$ which then similarly yields for every smooth function $f$ on $Z_I$
\begin{equation*} L_X f(qa\cdot z_{0,I})=\sum_j \Big[ d_j(q) -\sum_{\alpha,\beta\atop \alpha +\beta \in \langle I\rangle} c_{j,\alpha,\beta}(q) a^{\alpha+\beta} \Big] L_{X_j} f(qa\cdot z_{0,I}) \end{equation*} with exactly the same coefficients as before, but for fewer $\alpha$ and $\beta$. We apply $\Phi$ to this equation with $f$ replaced by $f_t$. With \eqref{Y-inv} this gives
\begin{equation}\label{LYb} \Phi(L_X f_t)(qa\cdot z_{0})=\sum_j \Big[d_j(q)
-\sum_{\alpha,\beta\atop \alpha +\beta \in \langle I\rangle} c_{j,\alpha,\beta}(q) a^{\alpha+\beta} \Big] L_{X_j} (\Phi(f_t))(qa\cdot z_{0})\, . \end{equation}
From this equation we subtract \eqref{LYa} with $F=\Phi(f_t)$. With $a=a_t$ we obtain \begin{equation}\label{LYc} [\Phi(L_X (f_t))-L_X (\Phi(f_t))] (qa_t \cdot z_0)=
\sum_j c_j(q, t)[L_{X_j} (\Phi(f_t))(qa_t\cdot z_0)] \end{equation} with coefficients $c_j(q,t)$, each being a linear combination $\sum_\mu c_\mu(q) a_t^{\mu}$ of functions $a_t^\mu$ with $\mu\in \langle S\rangle\setminus \langle I\rangle$, and with coefficients $c_\mu\in C^\infty(Q)$ supported in $\Omega$. In particular \eqref{comparison with L_u} follows for $\deg u=1$.
\par We now prove by induction on $\deg u$ that \begin{equation}\label{Lu2} [L_u (\Phi(f_t))- \Phi(L_u (f_t))] (qa_t\cdot z_0)=
\sum_j c_j(q, t)[L_{u_j} (\Phi(f_t))(qa_t\cdot z_0)] \end{equation} for some $u_j\in \mathcal{U}(\mathfrak{g})$ with $\deg u_j\leq \deg u$ and coefficients $c_j(q,t)$ of the same type as required in \eqref{LYc}. Note that the set of coefficients of this type is stable under differentiation by elements from $\mathfrak{q}$.
Let $u=Xv$ with $X\in \mathfrak{g}$ and $\deg v<\deg u$. We write \begin{align*} L_u (\Phi(f_t))&-\Phi(L_u (f_t))=\\ &L_X \big[ L_v (\Phi(f_t))-\Phi(L_v f_t) \big]+\big[L_X \Phi(L_v f_t)-\Phi(L_X(L_v f_t)) \big]. \end{align*} For the first term we apply \eqref{LYa} to $L_X$ in order to replace the differentiation with $X\in\mathfrak{g}$ by differentiation with the $X_j\in\mathfrak{q}$. We then apply the induction hypothesis \eqref{Lu2} to $\big[L_v (\Phi(f_t))-\Phi(L_v f_t)\big]$. After the differentiations by $X_j$ we then obtain for the first term an expression of the required form. For the second term we apply \eqref{LYc} with $f_t$ replaced by $(L_vf)_t=L_vf_t$. This gives $$\sum_j c_j(q, t)[L_{X_j} (\Phi(L_v f_t))(qa_t\cdot z_0)].$$ Once more we apply the induction hypothesis to $v$, which allows us to replace this expression by $$\sum_j c_j(q, t)[L_{X_j} L_v (\Phi(f_t))(qa_t\cdot z_0)]$$ at the cost of additional terms. Since all these terms have the required form this completes the proof of \eqref{Lu2}.
In order to complete the proof of \eqref{comparison with L_u} we need to replace the $u_j\in\mathcal{U}(\mathfrak{g})$ in \eqref{Lu2} by elements from $\mathcal{U}(\mathfrak{q})$. By induction on the degree, similar to the one before, we obtain from \eqref{LYa} for every $u\in\mathcal{U}(\mathfrak{g})$ a set of elements $u_1, \ldots, u_n \in \mathcal{U}(\mathfrak{q})$ with $\deg u_j\leq \deg u$ such that \begin{equation}\label{Lu} L_u \Phi(f_t)(qa_t\cdot z_0)=\sum_j e_j(q,t) L_{u_j}\Phi(f_t)(qa_t \cdot z_0), \end{equation} with coefficients $e_j(q,t)$, each being a linear combination $\sum_\mu c_\mu(q) a_t^{\mu}$ of functions $a_t^\mu$ with $\mu\in \langle S\rangle$, and with coefficients $c_\mu\in C^\infty(Q)$ supported in $\Omega$. This finally implies \eqref{comparison with L_u} and with that the proof of \eqref{lemma match2a} has been completed.
\par For \eqref{lemma match2b} we note that \eqref{Lu} and \eqref{Y-inv} imply:
$$p_0(L_u \Phi(f_t))\leq C_u \sum_j p_0 (\Phi(L_{u_j} (f_t))).$$ If we denote by $q_0$ the $L^2$-norm on $L^2(Z_I)$ we obtain from \eqref{match equivariant} and \eqref{match2}
$$p_0(\Phi(L_{u_j} f_t))=q_0(L_{u_j} f) =p_0(\Phi(L_{u_j}f))=p_0(L_{u_j}( \Phi(f))).$$ Combining this with the preceding inequality, \eqref{lemma match2b} follows. \end{proof}
\subsection{Proof of Theorem \ref{matching comparison}}
\begin{proof} In view of the consistency relations $w_I \cdot \eta^I= (\eta_{{\bf m}(w_I)})^I$ for all $w_I\in \mathcal{W}_I$ (see \eqref{consist}), the assertion readily reduces to the case where $\operatorname{supp} f \subset Q\cdot z_{0,I}$. We assume this in the sequel.
Recall that $a_t=\exp(tX)$ with $X\in\mathfrak{a}_I^{--}$ fixed. For simplicity we assume again that $\operatorname{supp} f \subset \Omega\cdot z_{0,I}$, and then $\operatorname{supp} \Phi(f_t)\subset \Omega a_t \cdot z_0$.
\par Recall the Laplace element $\Delta_1\in \mathcal{U}(\mathfrak{g})$ from \eqref{def DeltaR}. In what follows we will apply the Sobolev inequality of Lemma \ref{lemma Sobolev norms inequality} to $V$, and for this we observe (see Theorem \ref{lead lemma 2} below) that $V$ is unitarizable since $(V,\eta)$ is tempered.
In the sequel we write $\langle\cdot, \cdot\rangle$ for $\langle\cdot, \cdot\rangle_{L^2(Z)}$ and $\langle\cdot, \cdot\rangle_I$ for $\langle\cdot, \cdot\rangle_{L^2(Z_I)}$ to save notation. \par Let $n\in \mathbb{N}$, to be specified at the end of the proof. It will depend on $s$, but apart from that only on the space $Z$. We start with the identity $v = \Delta_1^n \Delta_1^{-n} v$ which yields
\begin{equation} \label{start id0} \langle \Phi(f_t), m_{v,\eta}\rangle = \langle L_{\Delta_1^n} \Phi(f_t), m_{\Delta_1^{-n}v,\eta}\rangle\,. \end{equation}
Next we have to address the subtle point that $\Phi(L_{\Delta_1^n} f_t)$ does not necessarily equal $L_{\Delta_1^n} \Phi(f_t)$. However from Lemma \ref{lemma match2}\eqref{lemma match2a} we obtain constants $\epsilon>0$, $C>0$, and elements $u_j\in \mathcal{U}(\mathfrak{g})$ of degree $\leq 2n$ such that for all $f$ supported by $\Omega\cdot z_{0,I}$
\begin{equation}\label{F-t-esti} | L_{\Delta_1^n} \Phi(f_t) (z) - \Phi(L_{\Delta_1^n} f_t)(z) |\leq C e^{-t\epsilon}
\sum_{j} |L_{u_j} (\Phi(f_t))(z)| \qquad (z\in Z, t\ge 0).\end{equation} We rewrite (\ref{start id0}) as \begin{equation} \label{start id1} \langle \Phi(f_t), m_{v,\eta}\rangle= \langle \Phi(L_{\Delta_1^n} f_t) , m_{\Delta_1^{-n}v,\eta}\rangle +R_1(t)\end{equation} with $R_1(t) = \langle L_{\Delta_1^n} \Phi(f_t) -\Phi(L_{\Delta_1^n} f_t), m_{\Delta_1^{-n}v,\eta}\rangle$. We claim, after shrinking $\epsilon$ to ${\epsilon\over 2}$, that for $2n>n^*$ where $n^*$ is the even integer given by \eqref{defi nstar} \begin{equation}\tag{R1}\label{rem1}
|R_1(t)| \leq C e^{-t\epsilon} p_{N;2n}(\Phi(f)) p_{-N; -2n + n^*} (m_{v,\eta})\, \end{equation} with a constant $C>0$ that depends on $\Omega$ and $n$, but not on $f$. From (\ref{F-t-esti}) and Cauchy-Schwarz we obtain
\begin{equation} \label{R11} |R_1(t)|\leq C e^{-t\epsilon} p_{N; 2n} (\Phi(f_t)) p_{-N} (m_{\Delta_1^{-n}v,\eta})\, .\end{equation}
We obtain from \cite[Prop. 3.4 (2)]{KKSS2} that $|\mathbf {w}(z)|\leq C ( 1+t)$ for all $ z\in \operatorname{supp} \Phi(f_t)$ for a constant $C$ only depending on
$\Omega$. Hence it follows with Lemma \ref{lemma match2} \eqref{lemma match2b} that
\begin{eqnarray} \notag p_{N;2n}(\Phi(f_t)) &\leq& C(1+t)^{N\over 2} p_{0;2n}(\Phi(f_t)) \\
\label{R12}&\leq& C (1+t)^{N\over 2} p_{0;2n}(\Phi(f)) \leq C (1+t)^{N\over 2} p_{N;2n}(\Phi(f))\end{eqnarray} with positive constants $C$ (possibly not equal to each other). Note that these constants $C$ depend on $n$.
Furthermore it follows from \eqref{Sobolev norms inequality} that for $2n>n^*$
\begin{equation} \label{R13} p_{-N}(m_{\Delta_1^{-n}v,\eta}) \leq C p_{-N; -2n+ n^*} (m_{v,\eta})\, . \end{equation} If we insert (\ref{R12}) and (\ref{R13}) into (\ref{R11}) we obtain the claim (\ref{rem1}) by noting that $(1+t)^{N\over 2} e^{-{\epsilon\over 2}t}$ is bounded for all $t\ge 0$.
We move on with the identity (\ref{start id1}) and wish to analyze $\langle \Phi(L_{\Delta_1^n}f_t) , m_{\Delta_1^{-n}v,\eta}\rangle$ further. By the definitions of $\Phi$ and $f_t$ \begin{eqnarray}\notag \langle \Phi(L_{\Delta_1^n} f_t) , m_{\Delta_1^{-n}v,\eta}\rangle &=&
\int_{Q/Q_H} (L_{\Delta_1^n} f)(qa_t\cdot z_{0,I}) a_t^{\rho}\overline{m_{\Delta_1^{-n} v,\eta}(q\cdot z_0)} \ d(qQ_H) \\ \label{longeq} &=& \int_{Q/Q_H} (L_{\Delta_1^n} f)(q\cdot z_{0,I}) a_t^{-\rho} \overline{m_{\Delta_1^{-n} v,\eta}(qa_t\cdot z_0)} \ d(qQ_H) \,. \end{eqnarray} Likewise \begin{equation}\label{longeq2} \langle L_{\Delta_1^n} f_t , m_{\Delta_1^{-n}v,\eta^I}\rangle_I
= \int_{Q/Q_H} (L_{\Delta_1^n} f)(q\cdot z_{0,I}) a_t^{-\rho} \overline{m_{\Delta_1^{-n} v,\eta^I}(qa_t\cdot z_{0,I})} \ d(qQ_H)\,. \end{equation}
Next we wish to replace $m_{\Delta_1^{-n} v,\eta}$ by the constant term approximation $m_{\Delta_1^{-n}v,\eta^I}$ via Theorem \ref{loc ct temp}. We then obtain constants $\epsilon>0, k\in \mathbb{N}$, depending only on $Z$, and a constant $C>0$ depending also on $\Omega$ and $n$, such that with $l:=k + n^*$ one has for all $q\in \Omega$ and all $v\in V^\infty$
\begin{eqnarray} \notag | m_{\Delta_1^{-n} v,\eta}(qa_t\cdot z_0) - m_{\Delta_1^{-n}v,\eta^I} (qa_t \cdot z_{0,I})| &\leq & C a_t^{(1+\epsilon)\rho} p_{-N;k} (m_{\Delta_1^{-n}v, \eta})\\ \label{longeq3} &\leq & C a_t^{(1+\epsilon)\rho} p_{-N;l-2n} (m_{v, \eta})\, .\end{eqnarray} In the passage to the second line of \eqref{longeq3} we used \eqref{R13}.
Now note that \eqref{match equivariant} implies $$ \langle (L_{\Delta_1^n} f)_t, m_{\Delta_1^{-n}v,\eta^I}\rangle_I= \langle f_t, m_{v,\eta^I}\rangle_I,$$ and thus if we insert the bound (\ref{longeq3}) into the difference between (\ref{longeq}) and \eqref{longeq2}, we obtain the identity
\begin{equation} \langle \Phi(L_{\Delta_1^n} f_t) , m_{\Delta_1^{-n}v,\eta}\rangle = \langle f_t, m_{v,\eta^I}\rangle_I +R_2(t)\end{equation} with
\begin{equation} \label{R22}|R_2(t)|\leq C e^{-t\epsilon}
p_{-N;l-2n} (m_{v,\eta}) \| L_{\Delta_1^n} f\|_{L^2(Z_I)} \sqrt {\operatorname{vol}_{Z_I} (\Omega \cdot z_{0,I})} \, .\end{equation} Now, as in \eqref{Lu} we convert derivatives, $$L_{\Delta_1^n}f (q\cdot z_{0,I})= \sum c_j(q) L_{u_j} f (q\cdot z_{0,I})$$
with $u_j\in \mathcal{U}(\mathfrak{q})$ of $\deg u_j \leq 2n$ and smooth coefficients $c_j$. Hence
$$\| L_{\Delta_1^n} f\|_{L^2(Z_I)} \leq C p_{0; 2n}(\Phi(f))\leq C p_{N;2n}(\Phi(f))$$ with constants $C$ depending only on $\Omega$. Hence we obtain
\begin{equation}\tag{R2} \label{rem2}|R_2(t)|\leq C e^{-t\epsilon} p_{-N;l-2n} (m_{v,\eta}) p_{N;2n} (\Phi(f)) \, . \end{equation} Now the theorem follows from the two remainder estimates \eqref{rem1} and \eqref{rem2}, by choosing the number $n$ such that $m=2n \ge s+k+ n^*$. \end{proof}
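For orientation we record the elementary bookkeeping behind the final choice of $n$, using that the Sobolev norms increase with their order, i.e. $p_{-N;r}\leq p_{-N;r'}$ for $r\leq r'$ (this monotonicity is implicit in the estimates above). With $l=k+n^*$ the orders appearing in \eqref{rem1} and \eqref{rem2} satisfy
$$ -2n+n^*\ \leq\ l-2n\ =\ k+n^*-2n \qquad\text{and}\qquad k+n^*-2n\leq -s\ \Longleftrightarrow\ 2n\geq s+k+n^*\, ,$$
so that for $m=2n\geq s+k+n^*$ both remainder terms are dominated by $C e^{-t\epsilon}\, p_{-N;-s}(m_{v,\eta})\, p_{N;m}(\Phi(f))$, as asserted in Theorem \ref{matching comparison}.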
\subsection{Matching with respect to $\widetilde Z_I$} \label{subsection full match} We conclude this section with a slight extension of the preceding results, when we consider instead of $Z_I$ the union of all $G$-orbits in $\algebraicgroup{ Z}_I(\mathbb{R})$ which point to $Z$, i.e. the space $\widetilde Z_I=\coprod_{\mathsf{c} \in \sC_I} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} Z_{I,\mathsf{c}, \mathsf{t}}$ from \eqref{normal union} which gives rise to the full partition $\mathcal{W}=\coprod_{\mathsf{c}\in \sC_I}\coprod_{\mathsf{t}\in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c},\mathsf{t}}(\mathcal{W}_{I,\mathsf{c}})$ from \eqref{W partition}.
Observe that $f\in C_c^\infty (\widetilde Z_I)$ corresponds to a family $f=(f_{\mathsf{c},\mathsf{t}})_{\mathsf{c},\mathsf{t}}$ with $f_{\mathsf{c},\mathsf{t}}\in C_c^\infty (Z_{I,\mathsf{c},\mathsf{t}})$ and $Z_{I,\mathsf{c},\mathsf{t}} =Z_{I,\mathsf{c}}$ as homogeneous spaces. Suppose now that $\operatorname{supp} f_{\mathsf{c},\mathsf{t}} \subset Q w_{I,\mathsf{c}}\cdot z_{0,I,\mathsf{c}}\subset Z_{I,\mathsf{c}}=Z_{I,\mathsf{c},\mathsf{t}}$ for all $\mathsf{c},\mathsf{t}$. With \eqref{W partition} the function $f$ can then be matched with a function $F=\Phi(f)\in C_c^\infty(Z)$ by requiring $$F(q{\bf m}_{\mathsf{c},\mathsf{t}}(w_{I,\mathsf{c}})\cdot z_0)=f_{\mathsf{c},\mathsf{t}}(q w_{I,\mathsf{c}}\cdot z_{0,I,\mathsf{c}})\qquad (q\in Q)\, .$$ Then Corollary \ref{main cor} extends to all $f\in C_c^\infty(\widetilde Z_I)$ with $\operatorname{supp} f_{\mathsf{c},\mathsf{t}}\subset Q\mathcal{W}_{I,\mathsf{c}} \cdot z_{0,I,\mathsf{c}}$, and yields constants $C,\epsilon>0$ such that
\begin{equation} \label{full L2} \| f \|^2_{L^2(\widetilde Z_I)} = \int_{\widehat G} \sum_{\mathsf{c},\mathsf{t}} \mathsf {H}_{\pi,\mathsf{c},\mathsf{t}}^{I, \rm pre} ((f_{\mathsf{c},\mathsf{t}})_t) \ d\mu(\pi) + R(t)\end{equation}
with $|R(t)| \leq C e^{-\epsilon t}$ for all $t\geq 0$. Here $\mathsf {H}_{\pi,\mathsf{c},\mathsf{t}}^{I, \rm pre}$ refers to $\mathsf {H}_\pi^{I, \rm pre} $ for $Z_I$ replaced by $Z_{I,\mathsf{c},\mathsf{t}}$; explicitly
\begin{equation} \label{H-c-t} \mathsf {H}_{\pi,\mathsf{c},\mathsf{t}}^{I, \rm pre} (f_{\mathsf{c},\mathsf{t}})= \sum_{j=1}^{m_\pi} \|\overline \pi(f_{\mathsf{c},\mathsf{t}})((\overline \eta_j)_{\mathsf{c},\mathsf{t}}^I) \|_{\mathcal{H}_\pi}^2 \qquad (f_{\mathsf{c},\mathsf{t}} \in C_c^\infty(Z_{I,\mathsf{c},\mathsf{t}}))\, .\end{equation}
\section{Induced Plancherel measures} In this section we show that the Plancherel measure of $L^2(Z_I)$ is induced from the Plancherel measure of $L^2(Z)$ in a natural manner, see Theorem \ref{Plancherel induced} below. A consequence thereof is a certain variant of the Maass-Selberg relations as recorded in Theorem \ref{eta-I continuous}. Statements and approach are largely motivated by the reasoning in Sakellaridis-Venkatesh \cite[Sect.~11.1-11.4] {SV}, which originates from ideas of Joseph Bernstein. The main technical ingredient is our remainder estimate of Corollary~\ref{main cor}.
Given a point $[\pi]\in \widehat G$ we denote by $\mathcal{U}_{[\pi]}$ the neighborhood filter of $[\pi]$ in $\widehat G$. Let $I\subset S$ and recall from \eqref{Hermitian sum I-side} the definition of the Hermitian form $\mathsf {H}_\pi^{I, \rm pre}$. Attached to the Plancherel measure $\mu$ we define its $I$-support by
\begin{equation} \label{def supp mu I} \operatorname{supp}^I (\mu):=\{ [\pi]\in \widehat G\mid (\forall U \in \mathcal{U}_{[\pi]}) \ \mu(\{ [\sigma] \in U: \mathsf {H}_\sigma^{I,\rm pre} \neq 0\})>0\}\, .\end{equation} We denote by $\mu^I$ the restriction of $\mu$ to $\operatorname{supp}^I(\mu)$. In the sequel we let $(\pi,\mathcal{H}_\pi)$ be such that $[\pi]\in \operatorname{supp}^I(\mu)$. Define
\begin{equation} \label{def M-pi-I} \mathcal{M}_\pi^I :=\operatorname{span}\{ a\cdot \eta^I: \eta\in \mathcal{M}_\pi, a\in A_I\} \subset (\mathcal{H}_\pi^{-\infty})_{\rm temp}^{H_I}\end{equation} where the latter inclusion is part of Theorem \ref{loc ct temp}.
The elements $\xi\in \mathcal{M}_\pi^I$ decompose into generalized eigenvectors for the $A_I$-action, \begin{equation} \label{decomp etaI} \xi=\sum_{\lambda\in \mathcal{E}_\xi}\xi^\lambda, \end{equation} and we recall from \eqref{exponents} that the generalized eigenvalues $\lambda$ satisfy \begin{equation} \label{exponents normalized unitary} \mathcal{E}_\xi\subset
(\rho- \mathsf{W}_\mathfrak{j}\cdot \chi_\pi)|_{\mathfrak{a}_I} \cap (\rho|_{\mathfrak{a}_I} +i\mathfrak{a}_I^*)\, .\end{equation} It will be seen later that the $A_I$-action is semisimple for almost all $\pi \in \operatorname{supp}^I(\mu)$.
\par Recall that the conjugation $\mathcal{H}_{\pi}^{-\infty}\to \mathcal{H}_{\overline \pi}^{-\infty}, \eta\mapsto \overline \eta$ is a $G$-equivariant isomorphism of topological vector spaces. The conjugation map induces an antilinear $A_I$-equivariant isomorphism $\mathcal{M}_\pi^I\simeq \mathcal{M}_{\overline \pi}^I$. In particular, $\mathcal{M}_\pi^I$ is semisimple if and only if $\mathcal{M}_{\overline\pi}^I$ is semisimple.
\subsection{Averaging} What follows is motivated by the techniques of \cite[Sect. 10]{SV}. Let $X\in \mathfrak{a}_I^{--}$ and set $a_t=\exp(tX)$ as usual. Throughout this section we let $(\pi, \mathcal{H}_\pi)$ be a representation occurring in $\operatorname{supp}^I(\mu)$. We recall the notation $f_t$ from \eqref{defi f_t}.
\begin{lemma}\label{averaging lemma} {\rm (Averaging Lemma)} Let $X\in \mathfrak{a}_I^{--}$. Then the following assertions hold: \begin{enumerate} \item \label{One I}Suppose that $\mathcal{M}_\pi^I$ is $X$-semisimple. Then we have for all $f\in C_c^\infty(Z_I)$ and $\xi\in \mathcal{M}_\pi^I$ that \begin{eqnarray}\notag
\lim_{n\to \infty} {1\over n} \sum_{t=n+1}^{2n} \|\pi (f_t)\xi\|^2& =&
\sum_{\lambda \in \mathcal{E}_{\xi}} \| \pi(f)\xi^\lambda\|^2 + \\ \label{limit 1} &+& 2 \operatorname{Re} \sum_{\lambda\neq \lambda' \in \mathcal{E}_{\xi}\atop (\lambda - \lambda')(X)\in 2\pi i\mathbb{Z}} \langle \pi(f)\xi^\lambda, \pi(f)\xi^{\lambda'}\rangle\, .\end{eqnarray} In particular, if $(\lambda - \lambda')(X) \not \in 2\pi i \mathbb{Z}$ for all $\lambda, \lambda'\in \mathcal{E}_{\xi}$ with $\lambda\neq \lambda'$, we have
\begin{equation}\label{limit 2} \lim_{n\to \infty} {1\over n} \sum_{t=n+1}^{2n} \|\pi (f_t)\xi\|^2 =
\sum_{\lambda \in \mathcal{E}_{\xi}} \| \pi(f)\xi^\lambda\|^2\, .\end{equation} \item \label{Two II} Suppose that $\mathcal{M}_\pi^I$ is not $X$-semisimple. Then for every $\xi\in\mathcal{M}_\pi^I$ which does not belong to the sum of eigenspaces for $X$ we have \begin{equation}\label{limit 3} \lim_{n\to \infty}
{1\over n} \sum_{t=n+1}^{2n} \|\pi (f_t)\xi\|^2=\infty \end{equation}
for every $f \in C_c^\infty(Z_I)$ for which $\pi(f)|_{\mathcal{M}_{\pi}^I}$ is injective. \end{enumerate} \end{lemma}
\begin{proof} \eqref{One I} Assume $\mathcal{M}_{\pi}^I$ is diagonalizable for $X$ and let $\xi\in \mathcal{M}_ \pi^I$. It then follows from \eqref{decomp etaI} and \eqref{exponents normalized unitary} that $$\pi(a_t) \xi= \sum_{\lambda \in \rho +i\mathfrak{a}_I^*} a_t^{\lambda} \xi^\lambda\, .$$ In particular we obtain from Lemma \ref{lemma ZI-int} for all $f\in C_c^\infty(Z_I)$ and $t\ge 0$ that $$ \pi (f_t) \xi= \sum_{\lambda \in \rho + i\mathfrak{a}_I^*} a_t^{\lambda -\rho} \pi(f) \xi^\lambda$$ and thus
\begin{eqnarray*} \|\pi(f_t)\xi\|^2 &=&
\| \sum_{\lambda\in \rho +i\mathfrak{a}_I^*} a_t^{\lambda-\rho} \pi(f)\xi^\lambda\|^2\\
&=& \sum_{\lambda\in \rho+ i\mathfrak{a}_I^*} \|\pi(f)\xi^\lambda\|^2 + 2 \operatorname{Re} \sum_{ \lambda, \lambda'\in \mathcal{E}_{\xi}\atop \lambda\neq \lambda'} a_t^{\lambda - \lambda' } \langle \pi(f)\xi^\lambda, \pi(f)\xi^{\lambda'}\rangle\, .\end{eqnarray*} Now for any $\gamma\in \mathbb{R}\backslash 2\pi\mathbb{Z}$ we have $\lim_{n\to \infty} {1\over n}\sum_{t=n+1}^{2n} e^{i t\gamma}=0$ and \eqref{One I} follows. \par For \eqref{Two II} we remark that
with the mentioned assumption on $\xi$ we have for some $m\in\mathbb{N}$ and each $\lambda\in \rho + i\mathfrak{a}_I^*$ that \begin{equation}\label{apply f} \pi(f_t)\xi^\lambda= a_t^{\lambda-\rho} \pi(f) \sum_{j=0}^m \frac{t^j}{j!}\xi^{\lambda,j}\end{equation} where $\xi^{\lambda,0}=\xi^\lambda,\xi^{\lambda,1},\dots,\xi^{\lambda,m}\in \mathcal{M}_ \pi^I$. Moreover we can assume $\xi^{\lambda, m}\neq 0$ for some $\lambda$. Now \eqref{limit 3} becomes a simple matter of polynomial asymptotics: set $$ \xi_t^{\rm top}:= \sum_\lambda a_t^{\lambda -\rho} \xi^{\lambda, m} \qquad (t\geq 0)$$
and note that $|a_t^{\lambda- \rho}|=1$ implies that the vectors $\xi_t^{\rm top}$, $t\geq 0$, stay away from $0$ in the finite dimensional space $\mathcal{M}_\pi^I$. Thus we obtain from \eqref{apply f} and the injectivity of $\pi(f)|_{\mathcal{M}_\pi^I}$ that
$$ \|\pi(f_t) \xi\|\sim t^{m}\|\pi(f)\xi_t^{\rm top}\| \, $$ from which \eqref{limit 3} follows. \end{proof}
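For completeness we record the elementary averaging fact used in the proof of \eqref{One I}: for $\gamma\in \mathbb{R}\setminus 2\pi\mathbb{Z}$ the geometric sum gives
$$ \Big|{1\over n} \sum_{t=n+1}^{2n} e^{it\gamma}\Big| = {1\over n}\, \Big| e^{i(n+1)\gamma}\, {e^{in\gamma}-1\over e^{i\gamma}-1}\Big| \leq {2\over n\, |e^{i\gamma}-1|}\ \longrightarrow\ 0 \qquad (n\to \infty)\, ,$$
which is applied to $\gamma=-i(\lambda-\lambda')(X)\in \mathbb{R}$ for the off-diagonal terms.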
Suppose that $X\in \mathfrak{a}_I^{--}$ is such that $(\lambda - \lambda')(X) \not \in 2\pi i \mathbb{Z}$ for all $\lambda, \lambda'\in \mathcal{E}_\xi$, $\xi \in \mathcal{M}_{\overline \pi}^I$, with $\lambda\neq \lambda'$. Then we obtain from \eqref{Hermitian sum I-side} and Lemma \ref{averaging lemma} that
\begin{equation} \label{LIMIT HI-PRE} \lim_{n\to \infty} {1\over n} \sum_{t=n+1}^{2n} \mathsf {H}_\pi^{I, \rm pre}(f_t) = \begin{cases}
\sum\limits_{j=1}^{m_\pi}\sum\limits_{\lambda\in \mathcal{E}_{\overline \eta_j^I}} \|\overline \pi (f) \overline \eta_j^{I,\lambda}\|^2 & \text{if $\mathcal{M}_\pi^I$ is $X$-semisimple}\\
\infty & \text{if $\mathcal{M}_\pi^I$ is not $X$-semisimple and}\\
& \text{$\overline \pi(f)|_{\mathcal{M}_{\overline \pi}^I}$ is injective} \end{cases} \end{equation} where $\eta_1, \ldots, \eta_{m_\pi}$ is an orthonormal basis for $\mathcal{M}_\pi$.
This motivates the following definition of $\mathsf {H}_\pi^I$. In case $\mathcal{M}_\pi^I$ is a semisimple $A_I$-module we set
\begin{equation} \label{I-Hermitian form} \mathsf {H}_\pi^I (f) :=\sum_{j=1}^{m_\pi}
\sum_{\lambda\in \mathcal{E}_{\overline \eta_j^I}} \|\overline \pi (f) \overline \eta_j^{I,\lambda}\|^2 \qquad (f \in C_c^\infty(Z_I))\, ,\end{equation} and otherwise $\mathsf {H}_\pi^I:=0$. Observe that the Hermitian form $\mathsf {H}_\pi^I$ is left $G$-invariant, and normalized-right $A_I$-invariant. Set
$$ \operatorname{supp}_{ \rm fin}^I(\mu):=\{ [\pi] \in \operatorname{supp}^I(\mu)\mid \mathcal{M}_\pi^I \ \text{is $\mathfrak{a}_I$-semisimple}\}\, .$$
\subsubsection{Mollifying on multiplicity spaces} Throughout this subsection we let $V$ be an irreducible Harish-Chandra module and $V^\infty$ its unique $SF$-completion. Let $\mathcal{S}(G)$ be the Schwartz algebra of rapidly decreasing functions on $G$ (see \cite{BK}) and recall the following variant of the Casselman-Wallach theorem: if $0\neq v\in V$, then \begin{equation}\label{cas-wa}\mathcal{S}(G)*v =V^\infty\end{equation} by \cite[Th. 8.1]{BK}, where for $f\in \mathcal{S}(G)$ and $v\in V^\infty$ we use the standard notation
$$ f * v = \int_G f(g) g\cdot v \ dg$$ with the right hand side being a convergent integral in the Fr\'echet space $V^\infty$. Assertion \eqref{cas-wa} can be strengthened further as follows. Let $\widetilde V$ be the Harish-Chandra module dual to $V$. Then we first record the mollifying property $\mathcal{S}(G)* \widetilde V^{-\infty} \subset V^\infty$ which in view of \eqref{cas-wa} strengthens to \begin{equation}\label{cas-wa2}\mathcal{S}(G)*\eta =V^\infty \qquad (0\neq \eta \in\widetilde V^{-\infty})\end{equation} In fact, choose first a left $K$-finite function $f\in C_c^\infty(G)$ such that $0\neq f*\eta \in V$ and then apply \eqref{cas-wa} with $\mathcal{S}(G)*C_c^\infty(G)\subset \mathcal{S}(G)$. Let now $H\subset G$ be any closed unimodular subgroup of $G$. Then we define $\mathcal{S}(G/H)$ as the space of right $H$-averages of functions $F\in \mathcal{S}(G)$, i.e. $f \in \mathcal{S}(G/H)$ if and only if there exists an $F\in \mathcal{S}(G)$ such that $$f(gH)=F^H(g):= \int_H F(gh) \ dh\qquad (g\in G)\, .$$ With that we can define for $\eta \in (\widetilde V^{-\infty})^H$ and $f=F^H \in \mathcal{S}(G/H)$: $$f*\eta:= F*\eta$$ as the right hand side of this equation is independent of the particular lift $F$ of $f$. Then we have the following generalization of \eqref{cas-wa2}.
\begin{lemma} \label{lem molly} Let $H\subset G$ be a closed unimodular subgroup and let $E\subset (\widetilde V^{-\infty})^H$ be a finite dimensional subspace. Then the map $$\Phi_E: \mathcal{S}(G/H) \to \operatorname{Hom}(E, V^\infty), \ \ f\mapsto \left(\eta\mapsto f*\eta\right)$$ is continuous and surjective. Moreover $E$ is uniquely determined by $\ker \Phi_E$. \end{lemma}
\begin{proof} First of all it is clear that $\Phi_E$ is continuous. Next we observe that the statement reduces to $H=\{{\bf1}\}$: indeed, every $f\in \mathcal{S}(G/H)$ is of the form $F^H$ with $F\in \mathcal{S}(G)$ and $f*\eta=F*\eta$ for all $\eta\in E$, so that $\Phi_E$ on $\mathcal{S}(G/H)$ has the same image as the corresponding map on $\mathcal{S}(G)$, and the kernel of either map determines the kernel of the other. We will therefore assume $H=\{{\bf1}\}$ from now on.
Notice that $\Phi_E$ is an $\mathcal{S}(G)$-module morphism with $\mathcal{S}(G)$ acting on $\operatorname{Hom}(E, V^\infty)$ on the target $V^\infty$, i.e. for $f\in \mathcal{S}(G)$ and $T\in \operatorname{Hom}(E, V^\infty)$ we set $(f*T)(\eta):= f* (T(\eta))$.
\par Suppose that $\Phi_E$ were not surjective. Then $\operatorname{im}\Phi_E\subset \operatorname{Hom}(E, V^\infty)$ would be a proper $\mathcal{S}(G)$-invariant subspace. Upon the identification $ \operatorname{Hom}(E, V^\infty)= E^* \otimes V^\infty$ we then derive from the fact that $V^\infty$ is an algebraically simple module for $\mathcal{S}(G)$ (a consequence of \eqref{cas-wa}) that $\operatorname{im}\Phi_E= F^\perp \otimes V^\infty$ for a subspace $0\neq F\subset E$. This then means
$$\operatorname{im}\Phi_E=\{ T\in \operatorname{Hom}(E, V^\infty)\mid T|_F=0\}$$ which contradicts the fact that $\mathcal{S}(G)*F=V^\infty\neq \{0\}$ as $F\neq 0$. \par Finally from $\mathcal{S}(G)/\ker \Phi_E \simeq E^* \otimes V^\infty$ we obtain the asserted uniqueness. Indeed, suppose that $\ker \Phi_{E_1} = \ker \Phi_{E_2}$. Then $\ker\Phi_{E_i}= \ker \Phi_{E_1+E_2}$ for $i=1,2$ and thus $\dim (E_1+E_2)^* =\dim E_i^*$ for $i=1,2$, i.e. $E_1=E_2$. \end{proof}
We apply Lemma \ref{lem molly} to the Hermitian forms $\mathsf {H}_\pi^I$ of \eqref{I-Hermitian form} as follows. Let $E=\mathcal{M}_{\overline \pi}^I$.
\begin{cor} Let $[\pi]\in \operatorname{supp}_{ \rm fin}^I(\mu)$. There exists a unique Hermitian form $\mathsf {H}$ on $$\operatorname{Hom}(\mathcal{M}_{\overline \pi}^I, \mathcal{H}_\pi^\infty)\simeq \mathcal{H}_\pi^\infty \otimes \mathcal{M}_\pi^I$$ for which $\mathsf {H}(\Phi_E(f))=\mathsf {H}_\pi^I(f)$ for all $f\in \mathcal{S}(G/H_I)$. This form is $G$-invariant and positive definite. \end{cor}
\begin{proof} Clearly $ f\in\ker\Phi_E \Rightarrow \mathsf {H}_\pi^I(f)=0$. Moreover, since $[\pi]\in \operatorname{supp}_{ \rm fin}^I(\mu)$ we have
$$E=\mathcal{M}_{\overline \pi}^I=\operatorname{span}\{ {\overline\eta}_j^{I,\lambda}\mid 1\leq j\leq m_\pi, \lambda \in \rho|_{\mathfrak{a}_I} +i\mathfrak{a}_I^*\}$$ from which we deduce the converse implication. \end{proof}
We use the symbol $\mathsf {H}_\pi^I$ also for the form $\mathsf {H}$ introduced in the corollary. Now a variant of Schur's Lemma implies that $\mathsf {H}_\pi^I$ viewed as a form on $\mathcal{H}_\pi^\infty \otimes \mathcal{M}_\pi^I$ is given by
\begin{equation} \label{FORM11} \mathsf {H}_\pi^I(v\otimes \xi) =\langle v,v\rangle_{\mathcal{H}_\pi} \langle \xi, \xi\rangle_{\mathcal{M}_\pi^I}\end{equation} for a unique Hilbert inner product $\langle \cdot, \cdot\rangle_{\mathcal{M}_\pi^I}$ on $\mathcal{M}_\pi^I$.
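For the reader's orientation we sketch the Schur argument (suppressing the continuity issues): for fixed $\xi, \xi'\in \mathcal{M}_\pi^I$ the pairing $(v,v')\mapsto \mathsf {H}(v\otimes \xi, v'\otimes \xi')$ is a $G$-invariant sesquilinear form on the irreducible unitary representation $\mathcal{H}_\pi^\infty$ and hence a multiple of the inner product, say
$$ \mathsf {H}(v\otimes \xi, v'\otimes \xi')= c(\xi,\xi')\, \langle v,v'\rangle_{\mathcal{H}_\pi} \qquad (v,v'\in \mathcal{H}_\pi^\infty)\, .$$
The coefficient $c(\xi,\xi')$ is then a Hermitian form on $\mathcal{M}_\pi^I$, positive definite since $\mathsf {H}$ is, and $\langle \cdot,\cdot\rangle_{\mathcal{M}_\pi^I}:=c(\cdot,\cdot)$ is the inner product appearing in \eqref{FORM11}.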
We conclude this intermediate subsection with a simple observation for later use.
\begin{lemma}\label{molly injective} Keep the assumptions of Lemma \ref{lem molly} and let $(f_n)_{n\in \mathbb{N}}$ be a Dirac-sequence in $C_c^\infty(G/H)$. Then there exists an $N=N(E)$ such that the map $$ \Phi_E(f_n):\, E \to V^\infty , \ \ \eta \mapsto f_n *\eta$$ is injective for all $n\geq N$. \end{lemma}
\begin{proof} This is a special case of a more general fact. Let $X$ be a locally convex topological vector space and $E\subset X$ a finite dimensional subspace. Let $T_n: E\to X$ be a family of linear continuous maps with $\lim_{n\to \infty} T_n(x)=x$ for all $x\in E$. We claim that there exists $N\in\mathbb{N}$ such that $T_n$ is injective for all $n\geq N$. To see that we choose a closed complement to $E$ and obtain a continuous projection $p_E: X\to E$. With $S_n:= p_E \circ T_n$ we then obtain a sequence $S_n \in \operatorname{End}(E)$ such that $S_n\to {\bf1}$. This proves the claim. The lemma follows with $X=\widetilde V^{-\infty}$ and $T_n(x)= f_n*x$. \end{proof}
\subsection{Induced Plancherel measure} The following theorem was largely motivated by \cite[Th. 11.3]{SV}.
\begin{theorem}\label{Plancherel induced} {\rm (Induced Plancherel measure)} For all $f\in C_c^\infty(Z_I)$ one has
\begin{equation} \label{PT I-norm}\| f\|_{L^2(Z_I)}^2= \int_{\operatorname{supp}^I(\mu)} \mathsf {H}_\pi^I (f) \ d\mu (\pi)\, .\end{equation} In particular, the Plancherel measure $\mu_I$ of $L^2(Z_I)$ is equivalent to $\mu$ restricted to $ \operatorname{supp}^I(\mu)$, and $\mathcal{M}_\pi^I$ as defined in \eqref{def M-pi-I} and equipped with the Hermitian form obtained from \eqref{FORM11} provides a multiplicity space for $\mu_I$-almost all $\pi$. In other words \begin{equation} \label{Planch ZI} L^2(Z_I) \underset{G \times A_I}\simeq \int_{\operatorname{supp}^I(\mu)} \mathcal{H}_\pi \otimes \mathcal{M}_\pi^I \ d\mu(\pi)\,,\end{equation} with the just described inner product on $\mathcal{M}_\pi^I$, is a Plancherel decomposition for $Z_I$. Finally, the complement of $\operatorname{supp}_{ \rm fin}^I(\mu)$ in $\operatorname{supp}^I(\mu)$ is a null set. \end{theorem}
\begin{proof} It is sufficient to prove this identity for test functions $f$ with support in $P \mathcal{W}_I \cdot z_{0,I}$ because $P \mathcal{W}_I \cdot z_{0,I}$ exhausts $Z_I$ up to measure zero. Let such a test function $f$ be given. \par Fix $X\in \mathfrak{a}_I^{--}$. It follows from the exponential decay of $R(t)$ in Corollary \ref{main cor} that
\begin{equation} \label{Fatou}
{1\over n} \sum_{t=n+1}^{2n} \int_{\widehat G} \mathsf {H}_\pi^{I, \rm pre}(f_t) \ d\mu(\pi) \to \|f\|_{L^2(Z_I)}^2 \end{equation} as $n\to \infty$. Define $$\mathsf {H}_\pi^{I, X-{\rm inv}} (f):=\lim_{n\to \infty} {1\over n} \sum_{t=n+1}^{2n} \mathsf {H}_\pi^{I, \rm pre}(f_t)\in[0,\infty ]\, .$$ Then \eqref{Fatou} and Fatou's lemma imply \begin{equation} \label{Fatou1}
\int_{\widehat G} \mathsf {H}_\pi^{I, X-{\rm inv}} (f)\, d\mu(\pi) \le \|f\|_{L^2(Z_I)}^2 < \infty. \end{equation} Next set $$\widehat G_X:=\{ [\pi] \in \widehat G \mid \mathcal{M}_\pi^I\neq\{0\} \ \text{and $\mathcal{M}_\pi^I$ is $X$-semisimple} \}\, .$$ By choosing a Dirac sequence $f_1, f_2, \ldots$ of $C_c^\infty (Z_I)$ which is supported in $P\cdot z_{0,I} $ we obtain from Lemma \ref{molly injective} for each $[\pi]\in\widehat G$
that $\overline \pi(f_j)|_{\mathcal{M}_{\overline \pi}^I}$ is injective for some $j$. Hence by countable additivity it follows from \eqref{Fatou1} together with \eqref{LIMIT HI-PRE} and the definition of $\operatorname{supp}^I(\mu)$ in \eqref{def supp mu I} that $\mu(\operatorname{supp}^I(\mu) \backslash \widehat G_X)=0$. Further for $[\pi]\in \widehat G_X$ we have $\mathsf {H}_\pi^{I, X-{\rm inv}} (f)<\infty$ and from \eqref{limit 1} we infer
\begin{eqnarray}\notag \mathsf {H}_\pi^{I, X-{\rm inv}} (f) &=& \sum_{j=1}^{m_\pi}
\sum_{\lambda\in \rho+ i\mathfrak{a}_I^*} \|\overline\pi(f)\overline\eta_j^{I,\lambda}\|^2 +\\ \label{X inv} &+&\sum_{j=1}^{m_\pi} \, 2 \operatorname{Re} \!\!\! \sum_{\lambda\neq \lambda' \in \mathcal{E}_{\overline \eta_j^I}\atop (\lambda - \lambda')(X)\in 2\pi i\mathbb{Z}} \!\!\! \langle \overline\pi(f)\overline\eta_j^{I,\lambda}, \overline\pi(f)\overline\eta_j^{I,\lambda'}\rangle\, . \end{eqnarray} Next we define
$$\widehat G_{X, \rm reg}:=\{ [\pi] \in \widehat G_X\mid (\forall \lambda\neq \lambda'\in (\rho-\mathsf{W}_\mathfrak{j}\chi_\pi)|_{\mathfrak{a}_I}): \ (\lambda- \lambda')(X)\not \in 2\pi i\mathbb{Z}\}$$ and deduce from \eqref{Fatou1}, \eqref{X inv}, and \eqref{I-Hermitian form} that
\begin{equation} \label{limit 5} \|f\|_{L^2(Z_I)}^2 \geq \int_{\widehat G_{X, \rm reg}} \mathsf {H}_\pi^I(f) \ d\mu(\pi) + \int_{\widehat G_X \backslash \widehat G_{X, \rm reg}} \mathsf {H}_\pi^{I, X-{\rm inv}} (f) \ d\mu(\pi) \, .\end{equation} Now we start iterating \eqref{limit 5} with finitely many $X\in \mathfrak{a}_I^{--}$. More precisely, let $X_1:=X$ and set $X_2:=\sqrt{2} X_1$. Now the iteration of \eqref{limit 5} starts with $a_t:=\exp(tX_2)$ while observing
$ \|f\|_{L^2(Z_I)}^2= \|f_t\|_{L^2(Z_I)}^2$ and taking weighted averages as before. Another application of Fatou's Lemma then yields
\begin{eqnarray*} \|f\|_{L^2(Z_I)}^2 &\geq & \int_{\bigcup_{j=1}^2\widehat G_{X_j, \rm reg}} \mathsf {H}_\pi^I(f) \ d\mu(\pi)+\\ & + & \int_{\left(\bigcap_{j=1}^2\widehat G_{X_j}\right) \backslash \left(\bigcup_{j=1}^2 \widehat G_{X_j, \rm reg}\right)} \mathsf {H}_\pi^{I, \{X_1,X_2\}-{\rm inv}} (f) \ d\mu(\pi) \end{eqnarray*} with \begin{eqnarray}\notag \mathsf {H}_\pi^{I, \{X_1,X_2\}-{\rm inv}} (f) &=& \sum_{j=1}^{m_\pi}
\sum_{\lambda\in \rho+ i\mathfrak{a}_I^*} \|\overline\pi(f)\overline\eta_j^{I,\lambda}\|^2 +\\ \label{X12 inv} &+&\sum_{j=1}^{m_\pi} 2 \operatorname{Re} \sum_{ \lambda, \lambda'\in \mathcal{E}_{\overline \eta_j^I}\atop \lambda\neq \lambda', (\lambda-\lambda')(X_1)=0} \langle \overline\pi(f)\overline\eta_j^{I,\lambda}, \overline\pi(f)\overline\eta_j^{I,\lambda'}\rangle \end{eqnarray} as a result of making \eqref{X inv} also invariant under $X_2$. Here we used that $(\lambda - \lambda')(X_i)\in 2\pi i\mathbb{Z}$ for $i=1,2$ means $(\lambda-\lambda')(X_i)=0$.
Next take $X_3\in \mathfrak{a}_I^{--}$ linearly independent to $X_1$ and then $X_4:=\sqrt{2} X_3$. This we continue until $X_1, X_3, \ldots, X_{2m-1}$ is a basis of $\mathfrak{a}_I$ contained in $\mathfrak{a}_I^{--}$.
Notice that iterating \eqref{X12 inv} yields that $$\mathsf {H}_\pi^{I, \{X_1,\ldots, X_{2m}\}-{\rm inv}} (f) = \mathsf {H}_\pi^I (f)$$ and we finally arrive at
\begin{equation} \label{limit 6} \|f\|_{L^2(Z_I)}^2 \geq \int_{\widehat G} \mathsf {H}_\pi^I(f) \ d\mu(\pi)\end{equation} together with the fact $\mu(\operatorname{supp}^I(\mu) \backslash \operatorname{supp}_{\rm fin}^I(\mu))=0$ as
$\operatorname{supp}_{\rm fin}^I(\mu)=\bigcap_j\widehat G_{X_j}$. \par To conclude the proof we observe for $X=X_1$ and any $[\pi]\in\widehat G$ that
$\|\overline\pi(f_t)\overline\eta^I\| \leq \sum_{\lambda\in \mathcal{E}_{\overline \eta^I}} \|\overline \pi(f)\overline\eta^{I,\lambda}\|$ and thus
$$\|\overline\pi(f_t)\overline\eta^I\|^2 \leq |\mathsf{W}_\mathfrak{j}| \sum_{\lambda\in \mathcal{E}_{\overline \eta^I}} \|\overline \pi(f)\overline\eta^{I,\lambda}\|^2$$ as $|\mathcal{E}_{\overline \eta^I}|\leq |\mathsf{W}_\mathfrak{j}|$. Summing over $ t$ and the $\overline \eta_j^I$ this implies via \eqref{X inv} for all $[\pi]\in \operatorname{supp}_{\rm fin}^I(\mu)$ that
$${1\over n} \sum_{t=n+1}^{2n} \mathsf {H}_\pi^{I, \rm pre}(f_t)\leq |\mathsf{W}_\mathfrak{j}| \mathsf {H}_\pi^I (f) $$ for all $n>0$. Thus by \eqref{Fatou1} and dominated convergence we can interchange limit and integral in \eqref{Fatou} and obtain actual equality in \eqref{Fatou1}:
\begin{equation} \label{limit 4} \int_{\widehat G} \mathsf {H}_\pi^{I, X-{\rm inv}} (f) \ d\mu(\pi) = \|f\|_{L^2(Z_I)}^2 \, . \end{equation} The just described iteration applied to \eqref{limit 4} then yields
$$\int_{\widehat G} \mathsf {H}_\pi^I (f) \ d\mu(\pi) = \|f\|_{L^2(Z_I)}^2 $$ and finishes the proof of the theorem. \par The final statements follow from uniqueness of the Plancherel measure together with \eqref{FORM11}. \end{proof}
\subsection{Extension to $\widetilde Z_I$} In view of Section \ref{subsection full match} we can extend Theorem \ref{Plancherel induced} to all $f\in C_c^\infty(\widetilde Z_I)$:
\begin{equation} \label{PT I-norm2}\| f\|_{L^2(\widetilde Z_I)}^2= \sum_{\mathsf{c},\mathsf{t}} \int_{\operatorname{supp}^{I,\mathsf{c},\mathsf{t}}(\mu)} \mathsf {H}_{\pi,\mathsf{c},\mathsf{t}}^I (f_{\mathsf{c},\mathsf{t}}) \ d\mu (\pi)\end{equation} where we put an extra index $\mathsf{c},\mathsf{t}$ when we consider objects, initially defined for $Z_I$, now for $Z_{I,\mathsf{c},\mathsf{t}}$. Let us further denote by $\mathcal{M}_{\pi,\mathsf{c},\mathsf{t}}^I \subset(\mathcal{H}_\pi^{-\infty})^{H_{I,\mathsf{c}}} $ the Hilbert space $\mathcal{M}_\pi^I$ (with the inner product obtained from \eqref{FORM11}), but for $Z_I$ replaced by $Z_{I,\mathsf{c},\mathsf{t}}=Z_{I,\mathsf{c}}$. We then form the direct sum of Hilbert spaces $$ \widetilde \mathcal{M}_\pi^I=\bigoplus_{\mathsf{c},\mathsf{t}} \mathcal{M}_{\pi,\mathsf{c},\mathsf{t}}^I \, ,$$ and equip this space with the diagonal action of $A_I$, i.e.~for $\xi=(\xi_{\mathsf{c},\mathsf{t}})_{\mathsf{c},\mathsf{t}}\in \widetilde \mathcal{M}_\pi^I$ we have $a\cdot \xi= (a\cdot \xi_{\mathsf{c},\mathsf{t}})_{\mathsf{c},\mathsf{t}}$. Then we obtain the following extension of \eqref{Planch ZI} to \begin{equation} \label{Planch ZI extended } L^2(\widetilde Z_I) \underset{G \times A_I}\simeq \int_{\widehat G} \mathcal{H}_\pi \otimes \widetilde \mathcal{M}_\pi^I \ d\mu(\pi)\end{equation}
\subsection{The Maass-Selberg relations} The multiplicity spaces $\widetilde \mathcal{M}_\pi^I$ are $A_I$-semisimple for $\mu$-almost all $[\pi]$ and thus admit a direct sum decomposition $\widetilde \mathcal{M}_\pi^I = \bigoplus_{\lambda\in \rho+ i\mathfrak{a}_I^*} \widetilde \mathcal{M}_\pi^{I,\lambda}$ with $$\widetilde \mathcal{M}_\pi^{I,\lambda}=\{\xi \in \widetilde \mathcal{M}_\pi^I\mid (\forall a\in A_I) \ a\cdot \xi= a^\lambda \xi\}\, .$$ Since the normalized right action of $A_I$ on $L^2(Z_I)$ is unitary it follows that the Hermitian structure on $\widetilde \mathcal{M}_\pi^I$ is such that this decomposition of $\widetilde \mathcal{M}_\pi^I$ is orthogonal for $\mu$-almost all $[\pi]$.
\begin{theorem} \label{eta-I continuous} {\rm (Maass-Selberg relations)}
Let $\lambda\in\rho|_{\mathfrak{a}_I} + i\mathfrak{a}_I^*$. Then for almost all $[\pi] \in \bigcup_{\mathsf{c},\mathsf{t}} \operatorname{supp}^{I,\mathsf{c},\mathsf{t}}_{\rm fin}(\mu)$ the map
$$ \mathsf{I}^\lambda: \mathcal{M}_\pi \to \widetilde \mathcal{M}_\pi^{I,\lambda}, \ \ \eta\mapsto (\eta_{\mathsf{c},\mathsf{t}}^{I,\lambda})_{\mathsf{c},\mathsf{t}}$$
is a surjective partial isometry, i.e.~its Hermitian adjoint is an isometry. \end{theorem}
\begin{proof} Let us denote by $\langle\cdot, \cdot\rangle$ the scalar product on $\widetilde \mathcal{M}_\pi^I$. By definition it is given by \eqref{FORM11} (summed over all $\mathsf{c},\mathsf{t}$) for almost all $[\pi]$. Now summation of \eqref{I-Hermitian form} over all $\mathsf{c},\mathsf{t}$ implies for all
$x\in \widetilde\mathcal{M}_\pi^I$ that
\begin{equation} \label{M-equation}\|x\|^2_{\widetilde \mathcal{M}_\pi^I}
= \sum_{\mathsf{c},\mathsf{t}}\sum_{j=1}^{m_\pi} \sum_{\lambda\in \mathcal{E}_{\eta_j^I}} |\langle x, (\eta_j)_{\mathsf{c},\mathsf{t}}^{I,\lambda}\rangle|^2 \, .\end{equation} In particular, for $x\in \widetilde\mathcal{M}_\pi^{I,\lambda}$ this is condition (\ref{ONB1}) so that Lemma \ref{lemma ONB} applies. \end{proof}
\begin{rmk} Of particular interest is the case of a multiplicity one space, i.e. where we have $\dim \mathcal{M}_\pi \leq 1$ for almost all $\pi \in \operatorname{supp}\mu$. This is for instance satisfied in the group case $Z= G \times G/ \operatorname{diag} G\simeq G$, for complex symmetric spaces, and in the Riemannian situation $Z=G/K$.
\par For a symmetric space the condition that $\dim \mathcal{M}_\pi \leq 1$ for almost all $\pi$ implies $\mathcal{W}=\{{\bf1}\}$. To see that we first observe that there are $|\mathcal{W}|$-many open $H$-orbits $\mathcal{O}\subset G/Q$, each isomorphic to $H/ L_H$ as a unimodular
$H$-space. Integration over these open $H$-orbits yields at least $|\mathcal{W}|$-many tempered functionals for representations $\pi$ with generic parameters in the most-continuous spectrum of $Z$, say $\eta_{\pi, w}$ for $w\in \mathcal{W}$. Now there is a subtle point that a priori we only have $\mathcal{M}_\pi \subset (V_\pi^{-\infty})^H_{\rm temp}$. But forming wave packets finally yields that these $\eta_{\pi,w}$ indeed contribute a.e. to the $L^2$-spectrum. For this one needs an estimate of $\eta_{\pi,w}$ which is locally uniform with respect to $\pi$. For the case of a symmetric space $Z$ such an estimate is given in \cite{vdBII} Thm.~9.1. The statement follows.
\par The statement above implies that $\mathcal{M}_\pi^I=\widetilde \mathcal{M}_\pi^I$. Our Maass-Selberg relations in Theorem
\ref{eta-I continuous} then assert for $\eta\in \mathcal{M}_\pi$ with $\|\eta\|=1$ that $(\eta^{I,\lambda})_{\lambda}$ is an orthonormal basis of $\mathcal{M}_\pi^I=\widetilde \mathcal{M}_\pi^I$ (where we only count those $\lambda$ for which $\mathcal{M}_\pi^{I,\lambda}\neq \{ 0\}$). In particular, for the group case this leads to the Maass-Selberg relations of Harish-Chandra \cite{HC3}, p.~146.
\end{rmk}
We finish this section with an elementary lemma about finite dimensional Hilbert spaces. It was used for Theorem \ref{eta-I continuous} above.
\begin{lemma}\label{lemma ONB} Let $J: \mathcal{M}\to \mathcal{N}$ be a linear map between two finite dimensional Hilbert spaces. Assume that for some orthonormal basis $\eta_1, \ldots, \eta_n$ of $\mathcal{M}$ one has
\begin{equation} \label{ONB1} \langle x, x\rangle= \sum_{j=1}^n |\langle x, J\eta_j\rangle|^2 , \qquad (x\in \mathcal{N}).\end{equation} Then the adjoint of $J$ is an isometry. \end{lemma}
\begin{proof} It follows from \eqref{ONB1} that
$ \|x\|^2 = \sum_{j=1}^n |\langle J^*x, \eta_j\rangle |^2 = \|J^*x\|^2$. \end{proof}
\section{Spectral Radon transforms and twisted discrete spectrum}
The constant term assignments $$\mathcal{M}_\pi\ni \eta\mapsto \eta^I\in \mathcal{M}_\pi^I$$ give rise to a spectral Radon transform ${\mathsf R}_I : L^2(Z) \to L^2(Z_I)$, which is the topic of this section. With the help of this transform we can characterize the twisted discrete series $L^2(Z)_{\rm td}$ of $L^2(Z)$ spectrally. The section starts with a brief review of the twisted discrete series, see also \cite{KKOS} and \cite[Sect. 9]{KKS}.
\subsection{Twisted discrete series}\label{Subsection twisted} Let us denote by $L^2(Z)_{\rm d}$ the discrete spectrum of $L^2(Z)$, i.e. the direct sum of all irreducible subspaces. Now in case $\mathfrak{a}_{Z,E}\neq \{0\}$, it is easy to see that $L^2(Z)_{\rm d}=\{0\}$, see \cite[Lemma 3.3]{KKOS}. In particular, for $I\subsetneq S$ we have $L^2(G/H_I)_{\rm d}=\{0\}$ as $\mathfrak{a}_{Z_I,E}=\mathfrak{a}_I\neq \{0\}$.
Recall that the subspace $\mathfrak{a}_{Z,E}=\mathfrak{a}_S\subset \mathfrak{a}_Z$ normalizes $\mathfrak{h}$ and gives rise to the subalgebra $\widehat \mathfrak{h}= \mathfrak{h} +\mathfrak{a}_{Z,E}$. Hence $A_{Z,E}:=A_S\subset A$ normalizes $H$ and acts unitarily on $L^2(G/H)$ via the normalized right regular action
$$({\mathcal R}(a)f)(gH)= a^{-\rho} f(gaH) \qquad (g\in G, a \in A_{Z,E}, f\in L^2(Z)).$$ Disintegration of $L^2(G/H)$ with respect to the right action of $A_{Z,E}$ then yields the unitary equivalence of $G$-modules
\begin{equation} \label{central decomposition} L^2(Z)= \int_{\widehat A_{Z,E}} L^2(G/\widehat H, \chi) \ d\chi,\end{equation} where $\widehat A_{Z,E}$ denotes the unitary dual of the abelian Lie group $A_{Z,E}$, and for each unitary character $\chi: A_{Z,E}\to {\mathbb S}^1$ the $G$-module $L^2(G/\widehat H, \chi)$ is a certain Hilbert space of densities explained in \cite[Sect. 8]{KKS2} or \cite[Sect. 3.2] {KKOS}. A spherical pair $(V,\eta)$ which embeds into some $L^2(G/\widehat H, \chi)$ will be referred to as a representation of the {\it twisted discrete series of $Z$.} Further we denote by $L^2(G/\widehat H, \chi)_{\rm d}$ the discrete spectrum and define the {\it twisted discrete series} by \begin{equation}\label{defin td} L^2(Z)_{\rm td} = \int_{\widehat A_{Z,E}} L^2(G/\widehat H, \chi)_{\rm d} \ d\chi \end{equation} made more rigorous in Subsection \ref{char td} below.
\subsection{Spectral Radon transforms} \label{SRT}
For $w\in \mathcal{W}$ we set $Z_w=G/H_w$. Note that $$ L^2(Z)\to L^2(Z_w), \ \ f\mapsto \left(gH_w\mapsto f(gwH)\right)$$ is a unitary equivalence of $G$-representations. Hence the abstract Plancherel formula for $L^2(Z)$ induces one for $L^2(Z_w)$ with the same Plancherel measure and isometries
$$ \mathcal{M}_\pi \to \mathcal{M}_{\pi,w},\ \ \eta\mapsto \eta_w\,. $$
For every $I\subset S$ and $w\in \mathcal{W}$ we set $Z_{I,w}:= G/ (H_w)_I$ and keep in mind that for fixed $I$, the various $(H_w)_I $ need not be $G$-conjugate (cf. Example \ref{ex SL3}).
\par Now given $\eta\in \mathcal{M}_\pi$ and $w\in \mathcal{W}$ we note that $\eta_w^I=(w\cdot \eta)^I$ is fixed by $(H_w)_I$ and we use notation $\mathcal{M}_{\pi,w}^I$ for $\mathcal{M}_\pi^I$ with respect to $(H_w )_I$. In the sequel we assume that $[\pi]\in \operatorname{supp}\mu \subset \widehat G$ is {\it generic}, that is $\mathcal{M}_{\pi,w}^I$ is $\mathfrak{a}_I$-semisimple for all $I\subset S$ and $w\in \mathcal{W}$. By Theorem \ref{Plancherel induced} with $H$ replaced by $H_w$ we obtain that the complement of the generic elements is a null set with respect to $\mu$. We endow $\mathcal{M}_{\pi,w}^I$ with the Hilbert space structure induced from $\mathcal{M}_\pi$ via Theorem \ref{Plancherel induced}.
\par Our concern is with the {\it spectral Radon transforms} induced from the constant term maps: $${\mathsf r}_{\pi,I,w} : \mathcal{M}_\pi \to \mathcal{M}_{\pi,w}^I, \ \ {\mathsf r}_{\pi,I,w}(\eta)=\eta_w^I$$ and for $J\subset I$ their transitions:
\begin{equation} \label{transit1} {\mathsf r}_{\pi,J,w}^I: \mathcal{M}_{\pi,w}^I\to \mathcal{M}_{\pi,w}^J, \ \ {\mathsf r}_{\pi,J,w}^I(\xi)=\xi^J\, .\end{equation}
We recall the transitivity of the constant terms \cite[Prop. 6.1]{DKS}:
\begin{lemma}\label{lemma transitive} Let $\eta \in \mathcal{M}_\pi$ and $w\in \mathcal{W}$. Then for all $J\subset I$ one has $$ (\eta_w^I)^J = \eta_w^J\, .$$ \end{lemma}
The transitivity of the constant term maps then reflects in
\begin{equation} \label{transfer1} {\mathsf r}_{\pi,J,w}^I\circ {\mathsf r}_{\pi,I,w} = {\mathsf r}_{\pi,J,w} \qquad (J\subset I)\, . \end{equation}
\par Recall that ${\mathsf r}_{\pi, I,w}$ is a sum of at most $|\mathsf{W}_\mathfrak{j}|$-many partial isometries by the Maass-Selberg relations in Theorem \ref{eta-I continuous}. Hence we obtain
\begin{equation} \label{bb-bound} \|{\mathsf r}_{\pi, I,w}\|\leq |\mathsf{W}_\mathfrak{j}| \, . \end{equation}
\begin{defprop} \label{prop Radon} Let $I\subset S$ and $w\in\mathcal{W}$. The operator field $$(\operatorname{id}_{\mathcal{H}_\pi} \otimes {\mathsf r}_{\pi,I,w})_{\pi\in \widehat G }: \quad
\mathcal{H}_\pi \otimes \mathcal{M}_\pi \to \mathcal{H}_\pi \otimes \mathcal{M}_{\pi,w}^I$$ is measurable and induces a $G$-equivariant continuous map $$ {\mathsf R}_{I,w}: L^2(Z)\simeq \int_{\widehat G}^\oplus \mathcal{H}_\pi \otimes \mathcal{M}_\pi\ d\mu(\pi) \to L^2(Z_{I,w})\simeq \int_{\widehat G}^\oplus \mathcal{H}_\pi \otimes \mathcal{M}_{\pi,w}^I \ d\mu(\pi) $$ Moreover
\begin{equation} \label{r-bound} \|{\mathsf R}_{I,w}\|\leq |\mathsf{W}_\mathfrak{j}|\, .\end{equation} We call ${\mathsf R}_{I,w}$ the {\rm spectral Radon transform at $(I,w)$.} \end{defprop} \begin{proof} Since the ${\mathsf r}_{\pi, I,w}$ reflect the pointwise convergent asymptotics of matrix coefficients, the operator field is measurable. With the upper bound in \eqref{bb-bound} we then obtain that ${\mathsf R}_{I,w}$ is defined and continuous with norm bound \eqref{r-bound}. By definition ${\mathsf R}_{I,w}$ is then $G$-equivariant, completing the proof. \end{proof}
With \eqref{transit1} we obtain spectrally defined Radon transforms: \begin{equation} \label{transit2} {\mathsf R}_{J,w}^I: L^2(Z_{I,w})\to L^2(Z_{J,w})\qquad (J\subset I) \end{equation} which then by \eqref{transfer1} satisfy \begin{equation} \label{transfer2} {\mathsf R}_{J,w}= {\mathsf R}_{J,w}^I\circ {\mathsf R}_{I,w} \qquad (J\subset I) \end{equation}
Putting the data of the various $(I,w)$ together, we arrive at the {\it (full) spectral Radon transform} $$ {\mathsf R}= \oplus_{I,w} {\mathsf R}_{I,w}: L^2(Z) \to \bigoplus_{I\subset S} \bigoplus_{w\in \mathcal{W}}L^2(Z_{I,w})\, . $$
\subsection{Characterization of the twisted discrete spectrum} \label{char td} Next we want to define $L^2(Z)_{\rm td}$ rigorously in terms of the spectral Radon transforms. Set
\begin{equation} \label{multi twist} \mathcal{M}_{\pi, \rm td}=\{ \xi \in \mathcal{M}_\pi\mid \exists \chi \in \widehat A_{Z,E}\ \forall v\in V_\pi^\infty: \ m_{v,\xi} \in L^2(\widehat Z,\chi)_{\rm d}\}\end{equation} and likewise we define $\mathcal{M}_{\pi, w, {\rm td}}^I$ for $w\in \mathcal{W}$ and $I\subset S$.
\par Then \begin{equation} \label{defini twist} L^2(Z)_{\rm td}:= \bigcap_{w\in \mathcal{W}\atop I\subsetneq S} \ker {\mathsf R}_{I,w}\end{equation} defines a closed $G$-invariant subspace of $L^2(Z)$.
Next we need a reformulation of the characterization of the twisted discrete series from \cite[Sect. 8]{KKS2} in the more suitable language of constant terms \cite[Th. 5.12]{DKS}, namely:
\begin{lemma}\label{vanishing ct} Let $\eta\in \mathcal{M}_\pi$. Then the following are equivalent: \begin{enumerate} \item $\eta\in \mathcal{M}_{\pi,{\rm td}}$. \item $\eta_w^I=0$ for all $w\in \mathcal{W}$ and $I\subsetneq S$. \end{enumerate} \end{lemma}
With the characterization in Lemma \ref{vanishing ct} we arrive at:
\begin{prop} We have \begin{equation} \label{deco twist}L^2(Z)_{\rm td} \simeq \int_{\widehat G}^\oplus \mathcal{H}_\pi\otimes \mathcal{M}_{\pi, \rm td} \ d\mu(\pi)\, .\end{equation} In particular $L^2(Z)_{\rm td}\subset L^2(Z)$ is invariant under the normalized right regular representation $\mathcal{R}$ of $A_{Z,E}$. \end{prop}
\begin{proof} Both assertions follow from Lemma \ref{vanishing ct} and the involved definitions \eqref{multi twist} and \eqref{defini twist}. \end{proof}
Since $L^2(Z)_{\rm td}$ is $A_{Z,E}$-invariant we obtain from \eqref{deco twist} a rigorous definition of \eqref{defin td} with $L^2(\widehat Z,\chi)_{\rm d}$ equal to the $\chi$-spectral part of $L^2(Z)_{\rm td}$ under $\mathcal{R}$.
\subsection{Restriction to the twisted discrete spectrum} Applying the preceding theory with $L^2(Z)$ replaced by $L^2(Z_{I,w})$ we obtain orthogonal projections
$$\operatorname{pr}_{I,w,{\rm td}}: L^2(Z_{I,w})\to L^2(Z_{I,w})_{\rm td}$$ and define $R_{I,w}:= \operatorname{pr}_{I,w, \rm td} \circ \ {\mathsf R}_{I,w}$. Note that $$R_{I,w}: L^2(Z)\to L^2(Z_{I,w})$$ is a continuous $G$-equivariant map. The {\it restricted spectral Radon transform} is then defined to be $$ R= \oplus_{I,w} R_{I,w}: L^2(Z) \to \bigoplus_{I\subset S} \bigoplus_{w\in \mathcal{W}}L^2(Z_{I,w})_{\rm td}\, . $$
\section{Bernstein morphisms}\label{B-morphisms} We define the {\it Bernstein morphism} $B$ as the Hilbert space adjoint $R^*$ of the restricted spectral Radon transform $R$. With $B_{I,w}:= R_{I,w}^*$ we then have
$$ B: \bigoplus_{I\subset S} \bigoplus_{w\in \mathcal{W}}L^2(Z_{I,w})_{\rm td}\to L^2(Z), \ \ (f_{I,w})_{I, w} \mapsto \sum_{I,w} B_{I,w}(f_{I,w})\, .$$ The main result of this section then is:
\begin{theorem} {\rm (Plancherel Theorem -- Bernstein decomposition)} \label{thm planch} The Bernstein morphism is a continuous surjective $G$-equivariant linear map. Moreover, $B$ is isospectral, that is, image and source have Plancherel measure in the same measure class. \end{theorem}
After some technical preparations we give the proof of Theorem \ref{thm planch}. Then, after applying the material on open $P$-orbits developed in Section \ref{subsection WI} we derive in Theorem \ref{thm planch refined} a refined Bernstein decomposition, which agrees with the partition $\mathcal{W}=\coprod_{\mathsf{c} \in \sC_I} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c}, \mathsf{t}}(\mathcal{W}_{I,\mathsf{c}})$ from \eqref{full deco W}. \par Finally, by adding up the refined Bernstein decompositions for the various $G$-orbits in $\algebraicgroup{ Z}(\mathbb{R})$ we obtain in Theorem \ref{thm planch refined real points} the statement for $L^2(\algebraicgroup{ Z}(\mathbb{R}))$ which is in full analogy to the p-adic statement of Sakellaridis-Venkatesh \cite[Cor.~11.6.2]{SV}.
\subsection{Proof of Theorem \ref{thm planch}}
Denote by ${\mathcal P}(S)$ the power set of $S$. With regard to $\eta\in \mathcal{M}_\pi$ we call a pair
$(I,w)\in {\mathcal P}(S) \times \mathcal{W}$ admissible provided that $\eta_w^I\neq 0$. Finally we call an $\eta$-admissible pair $(I,w)$ {\it optimal} provided that the cardinality $|I|$ is minimal, i.e. we have $\eta_{w'}^J=0$ for all $w'\in \mathcal{W}$ and $J\subsetneq I$. Notice that, by definition, for every $\eta\neq 0$ there exists an $\eta$-optimal pair $(I,w)$.
The embedding theory of tempered representations into twisted discrete series from \cite[Sect. 9] {KKS2} then comes down to:
\begin{theorem}\label{lead lemma 2} Let $0\neq \eta\in \mathcal{M}_\pi$ and $(I,w)$ be an $\eta$-optimal pair. Then $\eta_w^I\in \mathcal{M}_{\pi,w,\rm td}^I$. \end{theorem}
\begin{proof} Let $(I,w)$ be $\eta$-optimal. Applying a base point shift we may assume that $w={\bf1}$. According to Lemma \ref{vanishing ct} applied to $Z_I$ we need to show that $(w_I \cdot \eta^I)^J=0$ for all $w_I \in \mathcal{W}_I$ and $J\subsetneq I$. Let ${\bf m}(w_I)=w\in\mathcal{W}$. By the consistency relations
\eqref{consist} we have $w_I \cdot \eta^I = \eta_w^I$. Thus, by the transitivity of the constant term we have
$$(w_I \cdot \eta^I)^J=\eta_w^J=0$$
by the minimality of $|I|$. The theorem follows. \end{proof}
\par For each $[\pi]\in\widehat G$, $I\subset S$ and $w\in \mathcal{W}$ let us denote by $\xi \mapsto\xi_{\rm td}$ the orthogonal projection $\mathcal{M}_{\pi, w}^I \to \mathcal{M}_{\pi,w, {\rm td}}^I$.
With that we define a linear map between finite dimensional Hilbert spaces by $$ {\mathsf r}_\pi=\oplus {\mathsf r}_{\pi,I,w, \rm td}:\mathcal{M}_\pi\to \bigoplus_{I\subset S}\bigoplus_{w\in \mathcal{W}}
\mathcal{M}_{\pi, w,{\rm td}}^I,\ \ \eta\mapsto (\eta_{w, \rm td}^I)_{I, w} $$ with $\eta_{w, \rm td}^I:=(\eta_w^I)_{\rm td}$.
\begin{rmk} Since $\xi\mapsto \xi_{\rm td}$ is $A_I$-equivariant, we have the orthogonal decomposition $\mathcal{M}_{\pi,w, {\rm td}}^I=\bigoplus_{\mu\in \rho|_{\mathfrak{a}_I}+ i\mathfrak{a}_I^*}\mathcal{M}_{\pi,w, {\rm td}}^{I,\mu}$. Thus every $\xi \in \mathcal{M}_{\pi, w, {\rm td}}^I$ decomposes as $\xi=\sum \xi^\mu$
with $\xi^\mu\in \mathcal{M}_{\pi,w, {\rm td}}^{I,\mu}$ for $\mu \in \rho|_{\mathfrak{a}_I} + i\mathfrak{a}_I^*$ by \eqref{exponents}. \end{rmk}
For any $\lambda\in \mathcal{E}_{\eta^I} \subset \rho|_{\mathfrak{a}_I} +i\mathfrak{a}_I^*$ (cf. \eqref{exponents}) we denote by ${\mathsf r}_{\pi, I,w, {\rm td}, \lambda}$ the map ${\mathsf r}_{\pi,I,w, \rm td}$ followed by orthogonal projection to the $\lambda $-coordinate $\mathcal{M}_{\pi, w, {\rm td}}^{I, \lambda}$ of $\mathcal{M}_{\pi, w, {\rm td}}^I$.
Theorem \ref{lead lemma 2} then yields the following key technical lemma:
\begin{lemma} \label{properties bpi} The following assertions hold: \begin{enumerate} \item \label{eins 1}${\mathsf r}_\pi$ is injective. \item \label{zwei 2}For all $I\subset S, w\in \mathcal{W}$ and $\lambda\in \mathcal{E}_\pi$ the map $${\mathsf r}_{\pi, I,w, {\rm td}, \lambda}:\mathcal{M}_\pi\to \mathcal{M}_{\pi,w, {\rm td}}^{I,\lambda}, \ \ \eta \mapsto \eta_{w,\rm td}^{I, \lambda}$$ is a surjective partial isometry. \item \label{drei 3}The assignment $\pi \mapsto {\mathsf r}_\pi$ is measurable. \end{enumerate} \end{lemma} \begin{proof} Let $0\neq \eta \in \mathcal{M}_\pi$. According to Theorem \ref{lead lemma 2} we find an $\eta$-optimal pair $(I, w)$ such that $\eta_{w, {\rm td}}^I\neq 0$, establishing \eqref{eins 1}. Having shown \eqref{eins 1}, assertion \eqref{zwei 2} is obtained from the Maass-Selberg relations in Theorem \ref{eta-I continuous}: we replace $H$ by $H_w$ and observe that $\eta\mapsto \eta_w$ establishes an isomorphism of $\mathcal{M}_\pi\to \mathcal{M}_{\pi,w}$ with $\mathcal{M}_{\pi,w}$ referring to $\mathcal{M}_\pi$ with $H$ replaced by $H_w$.
Finally (\ref{drei 3}) is by the definition of the measurable structures involved (see Section \ref{Section AbsPlanch} and Proposition \ref{prop Radon}): The family of maps $${\mathsf r}_{\pi,I,w}: \mathcal{M}_\pi\to \mathcal{M}_{\pi,w}^I,\ \ \eta \mapsto \eta_w^I,$$ as well as the projection to discrete parts ${\mathsf r}_{\pi,I,w, \rm td}$ are measurable. \end{proof}
We now define $$b_\pi: \bigoplus_{I\subset S}\bigoplus_{w\in \mathcal{W}} \mathcal{M}_{\pi,w,{\rm td}}^I \to \mathcal{M}_{\pi}$$ to be the adjoint of ${\mathsf r}_{\overline \pi}$ and note that $b_\pi$, being the adjoint of an injective morphism, is surjective. Notice that the Bernstein morphism $$B: \bigoplus_{I\subset S}\bigoplus_{w\in \mathcal{W}} L^2(Z_{I,w})_{\rm td} \to L^2(Z)$$ is defined spectrally by the operator field $(b_\pi)_{\pi \in \operatorname{supp} \mu}$. \begin{rmk} \label{Remark isometric} (Decomposition of $B$ into isometries) For $I\subset S$ and $w\in \mathcal{W}$ we denote by $B_{I,w}$ the restriction of $B$ to $L^2(Z_{I,w})_{\rm td}$.
We claim that there is an orthogonal decomposition $$L^2(Z_{I,w})_{\rm td}=\bigoplus_{u \in \mathsf{W}_\mathfrak{j}} L^2(Z_{I,w})_{{\rm td,u}}$$
such that every restriction $B_{I,w,u}:= B|_{L^2(Z_{I,w})_{{\rm td}, u}}$ is an isometry. To construct such a decomposition we choose for every $[\pi]\in \operatorname{supp}(\mu)$ with infinitesimal character $\chi_\pi\in \mathfrak{j}_\mathbb{C}^*/ \mathsf{W}_\mathfrak{j}$ a representative $\lambda_\pi\in \mathfrak{j}_\mathbb{C}^*$, i.e.~$\chi_\pi = \mathsf{W}_\mathfrak{j} \cdot \lambda_\pi$. Let us denote by
$$\mathsf {P}_{u}([\pi]): \mathcal{M}_{\pi,w}^I \to \mathcal{M}_{\pi,w}^{I, (\rho - u \cdot \lambda_\pi)|_{\mathfrak{a}_I}}$$ the orthogonal projection. We require the choice of $\lambda_\pi\in\chi_\pi$ to be such that the operator field $$ \operatorname{supp}(\mu)\ni [\pi]\mapsto \mathsf {P}_u([\pi])\in \operatorname{End}(\mathcal{M}_{\pi,w}^I)$$ is measurable. With
$$L^2(Z_{I,w})_{{\rm td},u}:=\int_{\widehat G}^\oplus \mathcal{H}_\pi \otimes \mathcal{M}_{\pi,w,\rm td}^{I,(\rho - u \cdot \lambda_\pi)|_{\mathfrak{a}_I}}\ d\mu(\pi)$$ we then obtain an orthogonal decomposition $L^2(Z_{I,w})_{\rm td}=\bigoplus_{u \in \mathsf{W}_\mathfrak{j}} L^2(Z_{I,w})_{{\rm td, u}}$ for which $B_{I,w,u}$ is an isometry by Lemma \ref{properties bpi}\eqref{zwei 2}. \end{rmk}
The final piece of information we need for the proof of Theorem \ref{thm planch} is the following elementary result of functional analysis whose proof we omit.
\begin{lemma}\label{Hilbert lemma} Let $\mathcal{H}=\int_X^\oplus \mathcal{H}_x \ d\mu(x)$ be a direct integral of Hilbert spaces. Let further $\mathcal{K}=\int_X^\oplus \mathcal{K}_x \ d\mu(x)$ and $\mathcal{L}=\int_X^\oplus \mathcal{L}_x \ d\mu(x)$ be closed decomposable subspaces of $\mathcal{H}$. Suppose that $\mathcal{K}_x +\mathcal{L}_x\subset \mathcal{H}_x$ is closed for every $x\in X$. Then $\mathcal{K} +\mathcal{L}\subset \mathcal{H}$ is closed. \end{lemma} \begin{proof}[Proof of Theorem \ref{thm planch}] The surjectivity of the $b_\pi$ together with Theorem \ref{Plancherel induced} shows that $B$ is an isospectral $G$-morphism with dense image. To see that $B$ is surjective we note that by Remark \ref{Remark isometric} $B$ is a finite sum of isometries, each one of which has closed range. Since the multiplicity spaces $\mathcal{M}_\pi$ are finite dimensional, the fiberwise sums of these ranges are closed, and hence the image of $B$ is closed by repeated application of Lemma \ref{Hilbert lemma}. Being closed and dense, the image is all of $L^2(Z)$, i.e.~$B$ is surjective. \end{proof}
\begin{rmk} In case $\mathcal{W}=\{{\bf1}\}$, i.e.~there is only one open $P$-orbit, the Bernstein decomposition becomes considerably simpler, as the summation over $\mathcal{W}$ in the domain of $B$ disappears. We recall that $\mathcal{W}=\{{\bf1}\}$ is satisfied for reductive groups $G \simeq G\times G/ G$, for complex spherical spaces, and for Riemannian symmetric spaces. \end{rmk}
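Explicitly, for $\mathcal{W}=\{{\bf1}\}$ the Bernstein morphism of Theorem \ref{thm planch} thus takes the form $$B: \bigoplus_{I\subset S} L^2(Z_{I})_{\rm td}\to L^2(Z)$$ with $Z_I=Z_{I,{\bf1}}=G/H_I$.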
\subsection{Refinement of the Bernstein morphisms}\label{Subsection refine B} In the definition of the Bernstein morphism a certain over-parametrization takes place in the domain. This will now be remedied via the partition $\mathcal{W}=\coprod_{\mathsf{c} \in \sC_I} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c}, \mathsf{t}}(\mathcal{W}_{I,\mathsf{c}})$ from \eqref{full deco W}. We recall the corresponding terminology from Subsection \ref{subsection all base points}.
For $\eta\in \mathcal{M}_\pi$, $\mathsf{c}\in \sC_I$, $\mathsf{t} \in \sF_{I,\mathsf{c}}$ we recall the functional $\eta_{\mathsf{c},\mathsf{t}}= w(\mathsf{c}, \mathsf{t})\cdot \eta$ from Subsection \ref{subsection all base points}. Further we set $\eta_{\mathsf{c},\mathsf{t}}^I:= (\eta_{\mathsf{c},\mathsf{t}})^I$ and given $w_{I,\mathsf{c}}\in \mathcal{W}_{I,\mathsf{c}}$ we define the functional $(\eta_{\mathsf{c},\mathsf{t}}^I)_{w_{I,\mathsf{c}}}:= w_{I,\mathsf{c}}\cdot \eta_{\mathsf{c},\mathsf{t}}^I $. Likewise for $\mu \in \mathfrak{a}_{I,\mathbb{C}}^*$ we set $(\eta_{\mathsf{c},\mathsf{t}}^{I,\mu})_{w_{I,\mathsf{c}}}:=w_{I,\mathsf{c}}\cdot \eta_{\mathsf{c},\mathsf{t}}^{I,\mu} $. \par Every $w\in \mathcal{W}$ can be written uniquely as $w={\bf m}_{\mathsf{c},\mathsf{t}}(w_{I,\mathsf{c}})$ for $\mathsf{c}\in \sC_I$, $\mathsf{t}\in \sF_{I,\mathsf{c}}$ and $w_{I,c}\in \mathcal{W}_{I,\mathsf{c}}$. In this context we recall from \eqref{consist2} the consistency relation
\begin{equation} \label{Iw2}\eta_w^{I,\lambda}=(\eta_{\mathsf{c},\mathsf{t}}^{I,\lambda})_{w_{I,c}} \qquad (w={\bf m}_{\mathsf{c},\mathsf{t}}(w_{I,c})\in \mathcal{W}, \lambda\in \mathcal{E}_{\eta^I})\, . \end{equation}
Recall $H_{I,\mathsf{c}}=(H_{w(\mathsf{c})})_I$ equals $H_{I,\mathsf{c},\mathsf{t}}= (H_{w(\mathsf{c},\mathsf{t})})_I$. Hence $(\eta_{\mathsf{c},\mathsf{t}}^{I,\lambda})_{w_{I,c}}$ is fixed by $(H_{I,\mathsf{c}})_{w_{I,c}}$. On the other hand $\eta_w^{I,\lambda}$ is fixed under $(H_w)_I$. We recall from \eqref{WWI2 general} that the two groups are in fact equal:
$$ (H_w)_I = (H_{I,\mathsf{c}})_{w_{I,c}}\, . $$
In this context it is worth recording the following extension of \cite[Th. 5.12]{DKS}:
\begin{prop}The following assertions are equivalent for $\eta\in \mathcal{M}_\pi$: \begin{enumerate} \item \label{1S} $\eta\in \mathcal{M}_{\pi, \rm td}$. \item \label{2S} For all $I\subsetneq S$ and $\mathsf{c}\in \sC_I, \mathsf{t}\in \sF_{I,\mathsf{c}}$ one has $\eta_{\mathsf{c},\mathsf{t}}^I =0$. \end{enumerate} \end{prop}
\begin{proof} Let $w\in \mathcal{W}$ and write it as $w={\bf m}_{\mathsf{c}, \mathsf{t}}(w_{I,c})$. We recall
\eqref{consist} which asserts that $\eta_w^I = w_{I,c}\cdot \eta_{\mathsf{c},\mathsf{t}}^I$. In particular $\eta_w^I=0$ if and only if $\eta_{\mathsf{c},\mathsf{t}}^I=0$ and the proposition follows from Lemma \ref{vanishing ct}.
\end{proof}
For $\mathsf{c}\in \sC_I$ and $\mathsf{t} \in \sF_{I,\mathsf{c}}$ we set $Z_{I,\mathsf{c},\mathsf{t}}= Z_{I, w(\mathsf{c},\mathsf{t})}$ and note that $Z_{I,\mathsf{c},\mathsf{t}}=G/H_{I,\mathsf{c}}$ is independent of $\mathsf{t}\in \sF_{I,\mathsf{c}}$ by Lemma \ref{lemma HI comp}. The following is then a refined version of the Bernstein decomposition, taking the fine partition \eqref{full deco W} of $\mathcal{W}$ into account.
\begin{theorem} {\rm (Plancherel Theorem -- Bernstein decomposition refined)} \label{thm planch refined} The restricted Bernstein morphism
$$B_{\rm res}: \bigoplus_{I\subset S} \bigoplus_{\mathsf{c} \in \sC_I}\bigoplus_{\mathsf{t} \in \sF_{I,\mathsf{c}}} L^2(Z_{I, \mathsf{c},\mathsf{t}})_{\rm td} \to L^2(Z)$$ is surjective. \end{theorem}
\begin{proof} Given the proof of Theorem \ref{thm planch} this comes down to the fact that the map $$\tilde {\mathsf r}_\pi: \mathcal{M}_\pi \to \bigoplus_{I\subset S} \bigoplus_{\mathsf{c} \in \sC_I} \bigoplus_{\mathsf{t} \in \sF_{I,\mathsf{c}}} \mathcal{M}_{\pi, w(\mathsf{c},\mathsf{t}), \rm td}^I, \ \ \eta\mapsto (\eta_{\mathsf{c}, \mathsf{t}, \rm td }^I)_{I, \mathsf{c}, \mathsf{t}}$$ obtained from ${\mathsf r}_\pi$ by restricting the target remains injective. Now we recall the proof of Lemma \ref{properties bpi} \eqref{eins 1} and let $0\neq \eta \in \mathcal{M}_\pi$ with $\eta_{w, {\rm td}}^I\neq 0$ for an $\eta$-optimal pair $(I,w)$. Let $w={\bf m}_{\mathsf{c},\mathsf{t}}(w_{I,c})$ for $w_{I,c}\in \mathcal{W}_{I,\mathsf{c}}$ and $\mathsf{t}\in \sF_{I,\mathsf{c}}$. Then the consistency relation \eqref{Iw2} yields $\eta_{w, {\rm td}}^I =(w_{I,c}\cdot \eta_{\mathsf{c},\mathsf{t}}^I)_{\rm td}$ and thus $\eta_{\mathsf{c},\mathsf{t},\rm td}^I\neq 0$, establishing the injectivity of $\tilde {\mathsf r}_\pi$. The theorem follows. \end{proof}
\subsection{Bernstein decomposition for $L^2(\algebraicgroup{ Z}(\mathbb{R}))$}
Recall that $Z=G/H$ is only one $G$-orbit of $\algebraicgroup{ Z}(\mathbb{R})$. To obtain the Bernstein decomposition of $L^2(\algebraicgroup{ Z}(\mathbb{R}))$ we just need to add the data of the various $G$-orbits in $\algebraicgroup{ Z}(\mathbb{R})$. We recall $W_\mathbb{R}=(P\backslash \algebraicgroup{ Z}(\mathbb{R}))_{\rm open}\simeq F_\mathbb{R}/ F_M$ and choose representatives $\mathcal{W}_\mathbb{R}\subset G$ for $W_\mathbb{R}$ as we did with $\mathcal{W}$ for $W$. For $w\in \mathcal{W}_\mathbb{R}$ we set $Z_{I,w}:= G/ (H_w)_I$ with $(H_w)_I$ the real points of the $\mathbb{R}$-algebraic group $ (\algebraicgroup{H}_w)_I$. Notice that the $G$-orbit decomposition of $\algebraicgroup{ Z}(\mathbb{R})$ yields a natural partition of $W_\mathbb{R}$ by selecting for a given $G$-orbit in $\algebraicgroup{ Z}(\mathbb{R})$ the open $P$-orbits it contains. Summing up the Bernstein morphism of all $G$-orbits then yields a $G$-morphism:
$$ B_\mathbb{R} : \bigoplus_{I\subset S} \bigoplus_{w\in \mathcal{W}_\mathbb{R}} L^2(Z_{I,w})_{\rm td} \to L^2(\algebraicgroup{ Z}(\mathbb{R})) .$$ We then obtain from Theorem \ref{thm planch}:
\begin{theorem} {\rm (Plancherel Theorem for $L^2(Z(\mathbb{R}))$ -- Bernstein decomposition)} \label{thm planch real points} The Bernstein morphism $B_\mathbb{R}$ is a continuous surjective isospectral $G$-equivariant linear map. \end{theorem}
Recall from the beginning of Section \ref{subsection WI} that $W_{I,\mathbb{R}}=(P\backslash \algebraicgroup{ Z}_I(\mathbb{R}))_{\rm open}$ and $W_\mathbb{R}$ are canonically isomorphic.
In particular we obtain a generalization of \eqref{full deco W} to
$$ \mathcal{W}_\mathbb{R} = \mathcal{W}_{I,\mathbb{R}}= \coprod_{\mathsf{c}\in \sC_{I,\mathbb{R}}} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c},\mathsf{t}} (\mathcal{W}_{I,\mathsf{c}})$$ with $\sC_{I,\mathbb{R}}:=\{ G\cdot \widehat z_{w,I}\mid w\in W_\mathbb{R}\}$ etc.
The finer results in Theorem \ref{thm planch refined} then yield the refined restricted Bernstein morphism \begin{equation} \label{Bernstein real forms} B_{\rm \mathbb{R}, res} : \bigoplus_{I\subset S} L^2(\algebraicgroup{ Z}_I(\mathbb{R}))_{\rm td}\to L^2(\algebraicgroup{ Z}(\mathbb{R}))\end{equation} with the same properties as in Theorem \ref{thm planch refined}:
\begin{theorem} {\rm (Plancherel Theorem for $L^2(Z(\mathbb{R}))$ -- Bernstein decomposition refined)} \label{thm planch refined real points} The restriction $B_{\mathbb{R}, {\rm res}}$ of the Bernstein morphism $B_\mathbb{R}$ is a continuous surjective isospectral $G$-equivariant linear map. \end{theorem}
\section{Elliptic elements and discrete series}
As a consequence of the Bernstein decomposition in Theorem \ref{thm planch} we obtain in Theorem \ref{thm discrete} a general criterion for the existence of a discrete spectrum in $L^2(G/H)$ for a unimodular real spherical space $G/H$. The main additional tool is a theorem of \cite{HW}, by which the wave front set of the left regular representation of a unimodular homogeneous space $G/H$ is determined as the closure of $\operatorname{Ad}(G)\mathfrak{h}^\perp$.
\subsection{Existence of discrete spectrum}
As usual, we call an element $X\in\mathfrak{g}$ semisimple provided $\operatorname{ad} X$ is a semisimple operator. Equivalently, $X\in\mathfrak{g}$ is semisimple if and only if its centralizer $\mathfrak{z}_\mathfrak{g}(X)$ is a reductive subalgebra.
An element $X\in \mathfrak{g}_\mathbb{C}$ is called {\it elliptic} if $\operatorname{ad} X$ is semisimple with purely imaginary eigenvalues. If $E\subset \mathfrak{g}_\mathbb{C}$ we denote by $E_{\rm ell}$ the subset of $E$ consisting of elliptic elements. More generally we call an element $X\in \mathfrak{g}_\mathbb{C}$ {\it weakly elliptic} if $\operatorname{spec} (\operatorname{ad} X)\subset i\mathbb{R}$ and denote by $E_{\rm w-ell}$ the corresponding subset of $E\subset \mathfrak{g}_\mathbb{C}$.
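For instance, in $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{R})$ the rotation generator $\begin{pmatrix} 0&1\\ -1&0\end{pmatrix}$ is elliptic, as its adjoint is semisimple with spectrum $\{0,\pm 2i\}$; the nilpotent element $\begin{pmatrix} 0&1\\ 0&0\end{pmatrix}$ is weakly elliptic but not elliptic; and the split element $\begin{pmatrix} 1&0\\ 0&-1\end{pmatrix}$ is neither, its adjoint having spectrum $\{0,\pm 2\}$.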
\begin{theorem} \label{thm discrete} Let $Z=G/H$ be a unimodular real spherical space. Suppose that $\operatorname{int} \mathfrak{h}_{\rm w-ell}^\perp\neq \emptyset$. Then $H=\widehat H$ is reductive and $L^2(Z)_{\rm d}\neq \{0\}$. \end{theorem}
Here $\operatorname{int} \mathfrak{h}_{\rm w-ell}^\perp$ refers to the interior of $\mathfrak{h}_{\rm w-ell}^\perp$, in the vector space topology of $\mathfrak{h}^\perp$. The proof is given in the course of the next two subsections.
\begin{rmk} In case $Z=G$ is a reductive group, or more generally $Z=G/H$ is a symmetric space, Theorem \ref{thm discrete} comes down to the existence theorems of Harish-Chandra \cite{HC} and Flensted-Jensen \cite{FJ} about discrete series. It is due to Harish-Chandra that $L^2(G)_{\rm d} \neq \{0\}$ if $\mathfrak{g}$ admits a compact Cartan subalgebra. Flensted-Jensen generalized that to symmetric spaces by showing $L^2(G/H)_{\rm d}\neq \{0\}$ if there exists a compact abelian subspace $\mathfrak{t} \subset \mathfrak{g} \cap \mathfrak{h}^\perp$ with $\dim \mathfrak{t} = \operatorname{rank} \algebraicgroup{G}/\algebraicgroup{H}$. \end{rmk}
\begin{rmk} For the twisted discrete series an appropriate generalization of Theorem \ref{thm discrete} reads \begin{equation} \operatorname{int} \widehat \mathfrak{h}^\perp_{\rm w-ell}\neq \emptyset \quad\Rightarrow \quad (\forall \chi \in \widehat A_{Z,E})\ L^2(G/\widehat H, \chi)_{\rm d}\neq \{0\}\end{equation} and will presumably follow from results on wavefront sets of induced representations more general than those obtained in \cite{HW}. \end{rmk}
\subsection{The geometry of elliptic elements}
To prepare the way for the proof of Theorem \ref{thm discrete} we establish some foundational material on elliptic elements in $\mathfrak{h}^\perp$, and show that if the weakly elliptic elements in $\mathfrak{h}^\perp$ have non-empty interior, then $\mathfrak{h}$ is reductive in $\mathfrak{g}$.
\par We consider the $H$-module $\mathfrak{h}^\perp\subset \mathfrak{g}$ and recall the canonical isomorphism $(\mathfrak{g}/\mathfrak{h})^*\simeq \mathfrak{h}^\perp$. In the sequel we view $\mathfrak{a}_Z\simeq \mathfrak{a}_H^{\perp_\mathfrak{a}}$ as a subspace of $\mathfrak{a}$ and likewise we view $\mathfrak{m}_Z=\mathfrak{m}/\mathfrak{m}_H \simeq \mathfrak{m}_H^{\perp_\mathfrak{m}}$ as a subspace of $\mathfrak{m}$.
\begin{lemma} $(\mathfrak{l}\cap\mathfrak{h})^\perp=\mathfrak{h}^\perp\oplus \mathfrak{u}$ \end{lemma}
\begin{proof} Clearly $\mathfrak{h}^\perp + \mathfrak{u}\subset(\mathfrak{l}\cap\mathfrak{h})^\perp $. Moreover $\mathfrak{h}^\perp\cap\mathfrak{u}=\{0\}$ because $\kappa(\mathfrak{u}, \mathfrak{q})=\{0\}$ and $\mathfrak{g}=\mathfrak{h}+\mathfrak{q}$. The lemma now follows from $\dim \mathfrak{h}=\dim(\mathfrak{l}\cap\mathfrak{h})+\dim\mathfrak{u}$. \end{proof}
Let $T_0: (\mathfrak{l}\cap\mathfrak{h})^\perp\to \mathfrak{u}$ be minus the projection along $\mathfrak{h}^\perp$. It follows that
\begin{equation} \label{T0 orthogonal} (\mathfrak{a}_Z +\mathfrak{m}_Z)^0:=\{ X + T_0(X): X \in \mathfrak{a}_Z+\mathfrak{m}_Z\} \subset \mathfrak{h}^\perp\, .\end{equation} Similarly we set $\mathfrak{b}^0:=\{ X + T_0(X): X \in \mathfrak{b}\}$ for $\mathfrak{b}\subset \mathfrak{a}_Z +\mathfrak{m}_Z$ a subspace.
The following lemma is motivated by \cite[Th. 5.4 and Cor. 7.2]{Knop} and \cite[Th. 5 and Th. 6]{Panyushev}.
\begin{lemma} \label{lemma H perp polar}Let $Z=G/H$ be a real spherical space for which there exists an $X_0\in \mathfrak{a}_Z\cap \mathfrak{h}^\perp$ such that
$\alpha(X_0)<0$ for all $\alpha\in \Sigma_\mathfrak{u}$. Then the canonical map $$\Phi: H \times (\mathfrak{a}_Z+\mathfrak{m}_Z)^0\to \mathfrak{h}^\perp, \ \ (h, X) \mapsto \operatorname{Ad}(h)X$$ is generically submersive. \end{lemma}
\begin{proof} We first note that $(\mathfrak{a}_Z +\mathfrak{m}_Z)^0 + [\mathfrak{h}, X]\subset \mathfrak{h}^\perp$ for all $X\in (\mathfrak{a}_Z +\mathfrak{m}_Z)^0$, and that $\Phi$ is generically submersive if and only if there is equality for some $X\in (\mathfrak{a}_Z +\mathfrak{m}_Z)^0$. We will show that \begin{equation} \label{gen surjective} (\mathfrak{a}_Z +\mathfrak{m}_Z)^0 + [\mathfrak{h}, X_0]=\mathfrak{h}^\perp .\end{equation}
For $t>0$ we set $a_t:=\exp(tX_0)$. By conjugation \eqref{gen surjective} is then equivalent to \begin{equation} \label{gen surjective with t} (\mathfrak{a}_Z +\mathfrak{m}_Z)^0_t+[\mathfrak{h}_t,X_0]=\mathfrak{h}_t^\perp\, ,\end{equation} where $ (\mathfrak{a}_Z+\mathfrak{m}_Z)^0_t:= \operatorname{Ad}(a_t)(\mathfrak{a}_Z +\mathfrak{m}_Z)^0$ and $\mathfrak{h}_t:=\operatorname{Ad}(a_t)\mathfrak{h}$. Now note that by \eqref{T0 orthogonal} we have for $t\to\infty$ that $(\mathfrak{a}_Z+\mathfrak{m}_Z)_t^0 \to \mathfrak{a}_Z +\mathfrak{m}_Z$ in the Grassmannian of subspaces. Moreover $\mathfrak{h}_t\to \mathfrak{h}_\emptyset=\mathfrak{l}\cap \mathfrak{h} + \overline \mathfrak{u}$ by \eqref{I-compression}.
On the other hand $(\mathfrak{h}_\emptyset)^\perp = \mathfrak{a}_Z+\mathfrak{m}_Z + \overline \mathfrak{u}$. As $[X_0,\overline \mathfrak{u}]=\overline \mathfrak{u}$ we obtain \begin{equation*} \label{gen surjective limit} \mathfrak{a}_Z +\mathfrak{m}_Z + [\mathfrak{h}_\emptyset, X_0]=(\mathfrak{h}_\emptyset)^\perp \, ,\end{equation*} that is, \eqref{gen surjective with t} holds in the limit. Hence it holds for $t$ sufficiently large. \end{proof}
In analogy to \cite[Sect.~3]{Knop2} we call $Z=G/H$ {\it non-degenerate} provided that an element $X_0$ as in Lemma \ref{lemma H perp polar} exists, and {\it degenerate} otherwise. Flag varieties $Z=G/P$ with $P$ a parabolic subgroup of $G$ are degenerate. But in many cases $Z$ is non-degenerate as the following example shows.
\begin{ex}\label{ex-quasiaffine} (cf. \cite[Lemma 3.1]{Knop2}) Every quasi-affine real spherical space is non-degenerate. Indeed, the constructive proof of the local structure theorem, see \cite[Section 2.1]{KKS}, yields an $X_0\in \mathfrak{a}_Z\cap \mathfrak{h}^\perp$ such that $\mathfrak{l}=\mathfrak{z}_\mathfrak{g}(X_0)$. Moreover, this element can be chosen such that $\alpha(X_0)<0$ for all roots $\alpha\in \Sigma_\mathfrak{u}$. \end{ex} The following lemma was communicated to us by B. Harris. \begin{lemma} \label{lem Harris} Let $\algebraicgroup{G}$ be an algebraic group defined over $\mathbb{R}$ and $\algebraicgroup{H}\subset \algebraicgroup{G}$ be an algebraic subgroup defined over $\mathbb{R}$ as well. Suppose that $Z=G/H$ is unimodular. Then $Z$ is quasi-affine, i.e. $\algebraicgroup{ Z}=\algebraicgroup{G}/\algebraicgroup{H}$ is a quasi-affine variety. \end{lemma}
\begin{proof} Clearly $Z$ is unimodular if and only if $\algebraicgroup{ Z}$ is unimodular. We assume first that $\algebraicgroup{H}$ is connected and treat the general case at the end. We recall the following transitivity result, see \cite[Lemma 1.1]{Suk} and \cite[Th.4]{BHM}: If there is a tower $\algebraicgroup{H} \subset \algebraicgroup{H}_1\subset \algebraicgroup{G}$ of subgroups, such that $\algebraicgroup{H}_1/\algebraicgroup{H}$ and $\algebraicgroup{G}/\algebraicgroup{H}_1$ are both quasi-affine, then $\algebraicgroup{G}/\algebraicgroup{H}$ is quasi-affine. Now for $d:=\dim \algebraicgroup{H}$ and $X_1,\ldots, X_d$ a basis of $\mathfrak{h}$ consider $v_1:=X_1\wedge\ldots\wedge X_d \in \bigwedge^d \mathfrak{g}_\mathbb{C}$. As $\algebraicgroup{H}$ is supposed to be unimodular and connected, we see that $\algebraicgroup{H}$ fixes $v_1$. Let $\algebraicgroup{H}_1$ be the stabilizer of $v_1$ in $\algebraicgroup{G}$. Then $$ \algebraicgroup{G}/\algebraicgroup{H}_1 \to \bigwedge^d\mathfrak{g}_\mathbb{C}, \ \ g\algebraicgroup{H}_1\mapsto g\cdot v_1$$ is injective and exhibits $\algebraicgroup{G}/\algebraicgroup{H}_1$ as quasi-affine. Moreover, as $\algebraicgroup{H}\subset \algebraicgroup{H}_1$ is normal, $\algebraicgroup{H}_1/\algebraicgroup{H}$ is affine and the transitivity result of above applies. This shows the lemma for $\algebraicgroup{H}=\algebraicgroup{H}_0$ connected. As $F:=\algebraicgroup{H}/ \algebraicgroup{H}_0$ is finite and acts freely on $\algebraicgroup{ Z}_0=\algebraicgroup{G}/\algebraicgroup{H}_0$ the quotient $\algebraicgroup{ Z}=\algebraicgroup{G}/\algebraicgroup{H} \simeq \algebraicgroup{ Z}_0/F$ is geometric and quasi-affine as well (average polynomial function over $F$). \end{proof}
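As a simple illustration of the lemma, the unimodular space $Z=\operatorname{SL}(2,\mathbb{R})/N$, with $N$ the group of upper triangular unipotent matrices, is indeed quasi-affine: the orbit map $g\mapsto g e_1$, with $e_1\in \mathbb{R}^2$ the first standard basis vector, identifies it with $\mathbb{R}^2\setminus\{0\}$.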
It is interesting to record the following (cf. \cite[Th.3.2]{Knop2}):
\begin{lemma} \label{lemma non-deg}Let $Z=G/H$ be a non-degenerate real spherical space. Then the set $\mathfrak{h}^\perp_{\rm ss}$ of semisimple elements in $\mathfrak{h}^\perp$ has non-empty Zariski-open interior in $\mathfrak{h}^\perp$. \end{lemma}
\begin{proof} Since $\mathfrak{g}_{\rm ss}$ has Zariski-open interior in $\mathfrak{g}$, it suffices to check that there is a non-empty open set of semisimple elements in $\mathfrak{h}^\perp$. Now $X_0$ is semisimple and for all elements $X_1\in \mathfrak{a}_Z+\mathfrak{m}_Z$ sufficiently close to $X_0$ we have in addition that $X_1+ \mathfrak{u}=\operatorname{Ad}(U)X_1$ by \cite[Lemma 2.6]{KKS}. In view of \eqref{T0 orthogonal} this implies that all elements $X_1 + T_0(X_1)$ are semisimple and belong to $(\mathfrak{a}_Z+\mathfrak{m}_Z)^0$. With Lemma \ref{lemma H perp polar} we conclude the proof. \end{proof}
\begin{cor}\label{cor non-deg} Let $Z=G/H$ be a non-degenerate real spherical space and $E\subset \mathfrak{h}^\perp$. Then the following are equivalent: \begin{enumerate} \item $\operatorname{int} E_{\rm ell}\neq \emptyset$. \item $\operatorname{int} E_{\rm w-ell}\neq \emptyset$. \end{enumerate} \end{cor}
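Indeed, elliptic elements are weakly elliptic, so we only need to show that the second condition implies the first: by Lemma \ref{lemma non-deg} the semisimple elements contain a non-empty Zariski-open, hence dense, subset of $\mathfrak{h}^\perp$, and its intersection with $\operatorname{int} E_{\rm w-ell}$ is a non-empty open set consisting of semisimple weakly elliptic, i.e.~elliptic, elements of $E$.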
\begin{lemma} \label{lemma elliptic} The following assertions hold: \begin{enumerate} \item \label{eins I} $\left[\operatorname{Ad}(\algebraicgroup{H}) (\mathfrak{a}_Z+ \mathfrak{m}_Z)_\mathbb{C}^0\right]_{\rm w-ell} = \operatorname{Ad}(\algebraicgroup{H})\big((\mathfrak{a}_Z +\mathfrak{m}_Z)_\mathbb{C}^0 \cap \mathfrak{z}(\mathfrak{g}_\mathbb{C})+ i\mathfrak{a}_Z^0 +\mathfrak{m}_Z^0 \big)$. \item \label{zwei II} Suppose that $Z$ is non-degenerate and assume that $\operatorname{int} \mathfrak{h}_{\rm w-ell}^\perp\neq \emptyset$. Then $$ \operatorname{int} \big (\mathfrak{h}^\perp \cap \operatorname{Ad}(\algebraicgroup{H})(\mathfrak{z}(\mathfrak{g})+ i\mathfrak{a}_Z^0 +\mathfrak{m}_Z^0) \big)\neq \emptyset\, .$$ \end{enumerate} \end{lemma}
\begin{proof} For (\ref{eins I}) we first observe that it suffices to show
$$\left[(\mathfrak{a}_Z+ \mathfrak{m}_Z)_\mathbb{C}^0\right]_{\rm w-ell} = (\mathfrak{a}_Z +\mathfrak{m}_Z)_\mathbb{C}^0 \cap \mathfrak{z}(\mathfrak{g}_\mathbb{C})+i\mathfrak{a}_Z^0 +\mathfrak{m}_Z^0 $$
Let $Y\in (\mathfrak{a}_Z+ \mathfrak{m}_Z)_\mathbb{C}$ and $X= Y + T_0(Y) \in (\mathfrak{a}_Z+ \mathfrak{m}_Z)_\mathbb{C}^0$ as in \eqref{T0 orthogonal}. Then
$$\operatorname{spec} (\operatorname{ad} X) = \operatorname{spec} (\operatorname{ad} Y)\, .$$ Hence $X$ is weakly elliptic if and only if $Y\in (\mathfrak{a}_Z+\mathfrak{m}_Z)_\mathbb{C}\cap \mathfrak{z}(\mathfrak{g}_\mathbb{C})+ i\mathfrak{a}_Z +\mathfrak{m}_Z$, that is, if and only if $X\in (\mathfrak{a}_Z+\mathfrak{m}_Z)_\mathbb{C}^0\cap \mathfrak{z}(\mathfrak{g}_\mathbb{C})+ i\mathfrak{a}_Z^0 +\mathfrak{m}_Z^0$.
For (\ref{zwei II}) we note that $\operatorname{Ad}(\algebraicgroup{H})(\mathfrak{a}_Z+\mathfrak{m}_Z)_\mathbb{C}^0$ is defined over $\mathbb{R}$ and Zariski dense in $\mathfrak{h}_\mathbb{C}^\perp$ as a consequence of Lemma \ref{lemma H perp polar}. Now (\ref{zwei II}) follows from (\ref{eins I}). \end{proof}
Recall the edge $\mathfrak{a}_{Z,E}\subset \mathfrak{a}_Z$ and recall that $\mathfrak{a}_{Z,E}\subset \mathfrak{n}_\mathfrak{g}(\mathfrak{h})$, with $\mathfrak{n}_\mathfrak{g}(\mathfrak{h})$ the normalizer of $\mathfrak{h}$ in $\mathfrak{g}$.
\begin{lemma} \label{lemma no-elliptic} Let $Z$ be a non-degenerate real spherical space. If $\mathfrak{a}_{Z,E}\neq \{0\}$, then $\operatorname{int} \mathfrak{h}_{\rm w-ell}^\perp =\emptyset$. \end{lemma}
\begin{proof} Let $\mathfrak{a}_Z= \mathfrak{a}_{Z,E} \oplus \mathfrak{a}_{Z,S}$ be the orthogonal decomposition. Recall $\widehat \mathfrak{h}=\mathfrak{h}+\mathfrak{a}_{Z,E}$ with $[\mathfrak{a}_{Z,E}, \mathfrak{h}]\subset \mathfrak{h}$. Define $\mathfrak{a}_{Z,E}^0\subset \mathfrak{h}^\perp$ as below \eqref{T0 orthogonal}. Then since $\mathfrak{a}_{Z,E}^0 \cap \widehat \mathfrak{h}^\perp=\{0\}$ we obtain by dimension count
\begin{equation} \label{h hat perp} \mathfrak{h}^\perp=\widehat\mathfrak{h}^\perp \oplus \mathfrak{a}_{Z,E}^0\, .\end{equation} Next we claim \begin{equation} \label{center perp} \operatorname{Ad}(h)X - X \in \widehat \mathfrak{h}_\mathbb{C}^\perp \qquad (h\in \algebraicgroup{H}, X\in \mathfrak{a}_{Z,E}^0)\, .\end{equation} In fact, as $\algebraicgroup{H}$ is connected it suffices to show that $\kappa(e^{\operatorname{ad} Y} X, U) =\kappa(X,U)$ for all $Y\in \mathfrak{h}_\mathbb{C}$ and $U\in\mathfrak{a}_{Z,E}$. By the invariance of the form $\kappa$ this is then implied by $e^{-\operatorname{ad} Y}U \in U+\mathfrak{h}_\mathbb{C}$ as $[\mathfrak{a}_{Z,E}, \mathfrak{h}_\mathbb{C}]\subset \mathfrak{h}_\mathbb{C}$. \par Suppose $\operatorname{int} \mathfrak{h}_{\rm w-ell}^\perp \neq \emptyset$. According to Lemma \ref{lemma elliptic} we thus find some subset $\mathcal{O}\subset i\mathfrak{a}_{Z,E}^0 + i \mathfrak{a}_{Z,S}^0+ \mathfrak{m}_Z^0$ such that $\operatorname{Ad}(\algebraicgroup{H})\mathcal{O} \cap \mathfrak{h}^\perp$ is open and non-empty.
Let $X= i X_1 + iX_2 + Y\in \mathcal{O}$ with $X_1 \in \mathfrak{a}_{Z,E}^0, X_2\in \mathfrak{a}_{Z,S}^0, Y\in \mathfrak{m}_Z^0$ and let $h\in \algebraicgroup{H}$ be such that $\operatorname{Ad}(h)X\in\mathfrak{h}^\perp$. With \eqref{center perp} we get $$\operatorname{Ad}(h) X= iX_1 + \underbrace{(\operatorname{Ad}(h)(iX_1) - iX_1)}_{\in \widehat \mathfrak{h}_\mathbb{C}^\perp}+ \underbrace{\operatorname{Ad}(h) (iX_2 + Y)}_{\in \widehat \mathfrak{h}_\mathbb{C}^\perp}\in (i\mathfrak{a}_{Z,E}^0+\widehat\mathfrak{h}_\mathbb{C}^\perp)\cap \mathfrak{h}^\perp\, .$$ From \eqref{h hat perp} we then deduce $X_1=0$. Hence $\mathcal{O}\subset i\mathfrak{a}_{Z,S}^0+\mathfrak{m}_Z$. Now as $\mathfrak{a}_{Z,E}^0\neq \{0\}$ we have $$\dim \mathfrak{h}/ \mathfrak{l} \cap \mathfrak{h} + \dim \mathfrak{a}_{Z,S} + \dim \mathfrak{m}_Z <\dim \mathfrak{h}^\perp= \dim \mathfrak{g}/\mathfrak{h}$$ and therefore $\operatorname{Ad}(\algebraicgroup{H})(\mathfrak{a}_{Z,S}^0 + \mathfrak{m}_Z^0)_\mathbb{C}\subset \mathfrak{h}_\mathbb{C}^\perp$ has empty interior, a contradiction. This concludes the proof. \end{proof}
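For an illustration consider the horospherical space $Z=\operatorname{SL}(2,\mathbb{R})/N$, for which the compression cone is all of $\mathfrak{a}_Z$, so that $\mathfrak{a}_{Z,E}=\mathfrak{a}_Z=\mathfrak{a}\neq\{0\}$, and $\mathfrak{h}^\perp=\mathfrak{a}+\mathfrak{n}$. An element $Y+U$ with $Y\in\mathfrak{a}$ and $U\in\mathfrak{n}$ satisfies $\operatorname{spec}(\operatorname{ad}(Y+U))=\operatorname{spec}(\operatorname{ad} Y)\subset\mathbb{R}$, so the weakly elliptic elements of $\mathfrak{h}^\perp$ are exactly the elements of $\mathfrak{n}$, which indeed have empty interior in $\mathfrak{h}^\perp$.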
\begin{prop}\label{prop elliptic} Let $Z=G/H$ be a unimodular real spherical space. Suppose that $\operatorname{int} \mathfrak{h}_{\rm w-ell}^\perp\neq \emptyset$ where the interior is taken in $\mathfrak{h}^\perp$. Then $\mathfrak{h}$ is reductive in $\mathfrak{g}$. \end{prop}
\begin{proof} First we note that $Z$ is non-degenerate as $Z$ is assumed to be unimodular (see Lemma \ref{lem Harris} and Example \ref{ex-quasiaffine}). We argue by contradiction and assume that $\mathfrak{h}$ is not reductive. Then \cite[Cor.~9.10]{KK} implies that $\mathfrak{a}_{Z,E}\neq \{0\}$. Now the assertion follows from Lemma \ref{lemma no-elliptic}. \end{proof}
\begin{cor}\label{cor elliptic} Let $\mathfrak{h}$ be a real spherical unimodular subalgebra and $I\subsetneq S$. Then $\operatorname{int} \big( (\mathfrak{h}_I^\perp)_{\rm w-ell} \big) = \emptyset$. \end{cor}
\begin{proof} Since $\mathfrak{h}_I$ is a proper deformation of $\mathfrak{h}$ it cannot be reductive in $\mathfrak{g}$. Hence the assertion follows from Proposition \ref{prop elliptic}. \end{proof}
\subsection{Proof of Theorem \ref{thm discrete}}
\begin{proof} The first assertion, that $H=\widehat H$ is reductive in $G$, follows from Proposition \ref{prop elliptic}. In particular $L^2(Z)_{\rm td}=L^2(Z)_{\rm d}$.
We recall that to every unitary representation $(\pi, E)$ of $G$ one attaches a wave-front set $\operatorname{WF}(\pi)$ which is an $\operatorname{Ad}(G)$-invariant closed cone in $\mathfrak{g}^*\simeq \mathfrak{g}$. If $Z=G/H$ is a unimodular homogeneous space, then the wavefront set of the left regular representation of $G$ on $L^2(G/H)$ was determined in \cite[Thm 2.1]{HW} as
\begin{equation}\label{WF-HW} \operatorname{WF}(L^2(G/H))= \operatorname{cl}(\operatorname{Ad}(G) \mathfrak{h}^\perp)\end{equation} with $\operatorname{cl}$ referring to the closure.
For the second assertion we compare wavefront sets of unitary $G$-representations. Recall that unitary representations with disintegration in the same measure class have the same wavefront sets. Hence we obtain from Theorem \ref{thm planch} that \begin{equation} \label{WF1} \operatorname{WF}(L^2(Z)) \subset \operatorname{WF}(L^2(Z)_{\rm d}) \cup \bigcup_{I\subsetneq S}\bigcup_{\mathsf{c} \in \sC_I} \operatorname{WF}(L^2(Z_{I,\mathsf{c}})) \, .\end{equation} On the other hand, we obtain from \eqref{WF-HW} that
\begin{equation} \label{WF2} \operatorname{WF}(L^2(Z_{I,\mathsf{c}}))=\operatorname{cl}(\operatorname{Ad}(G) \mathfrak{h}_{I,\mathsf{c}}^\perp)\qquad (I \subset S,\mathsf{c}\in \sC_I)\, .\end{equation}
Let $Y:=\operatorname{Ad}(G)\mathfrak{h}^\perp\subset \mathfrak{g}$ and observe that $Y$ is the image of the algebraic map $$\Phi: G \times \mathfrak{h}^\perp \to \mathfrak{g}, \ \ (g,X)\mapsto \operatorname{Ad}(g)X\, .$$ In particular, it follows that $\dim \operatorname{cl}(Y) \backslash Y <\dim Y$. Likewise we have for $Y_{I,\mathsf{c}}=\operatorname{Ad}(G) \mathfrak{h}_{I,\mathsf{c}}^\perp$ that $\dim \operatorname{cl}(Y_{I,\mathsf{c}}) \backslash Y_{I,\mathsf{c}} < \dim Y_{I,\mathsf{c}}$. By assumption and Cor.~\ref{cor non-deg} the elliptic elements $Y_{\rm ell}$ have non-empty interior
in $Y$. Since $\dim \operatorname{cl}(Y) \backslash Y <\dim Y$ we also obtain that $Y_{\rm ell}$ has non-empty interior $\operatorname{int}_{\operatorname{cl}(Y)}( Y_{\rm ell} )$
in $\operatorname{cl}(Y)$. On the other hand it follows from Corollary \ref{cor elliptic} that $Y_{I,\mathsf{c}, {\rm ell}}$ has no interior in $Y_{I,\mathsf{c}}$ when $I\neq S$. From $\dim \operatorname{cl}(Y_{I,\mathsf{c}}) \backslash Y_{I,\mathsf{c}} < \dim Y_{I,\mathsf{c}}$ we thus infer that $(\operatorname{cl} (Y_{I,\mathsf{c}}) )_{\rm ell}$ has empty interior in $\operatorname{cl}(Y_{I,\mathsf{c}})$.
\par From \eqref{WF-HW} and \eqref{WF1} we obtain
$$\emptyset \neq \operatorname{int}_{\operatorname{cl}(Y)}( Y_{\rm ell} ) \subset \operatorname{WF}(L^2(Z)_{\rm d}) \cup \bigcup_{I\subsetneq S, \mathsf{c}} [ \operatorname{int}_{\operatorname{cl}(Y)}( Y_{\rm ell} ) \cap \operatorname{WF}(L^2(Z_{I,\mathsf{c}})) ], $$ and since $Y_{I,\mathsf{c}}\subset \operatorname{cl}(Y)$ it follows from \eqref{WF2} that $$ \operatorname{int}_{\operatorname{cl}(Y)}( Y_{\rm ell} ) \cap \operatorname{WF}(L^2(Z_{I,\mathsf{c}})) \subset \operatorname{int}_{\operatorname{cl}(Y_{I,\mathsf{c}})} ( \operatorname{cl}(Y_{I,\mathsf{c}})_{\rm ell} )=\emptyset $$ for all $I\neq S$ and $\mathsf{c}$. Hence $L^2(Z)_{\rm d}\neq 0$. \end{proof}
\subsection{An example}
\begin{ex} \label{ex non-symmetric ell} We now give two examples of series of non-symmetric real spherical spaces $Z=G/H$ for which $\operatorname{int} \mathfrak{h}_{\rm ell}^\perp \neq \emptyset$. \par (a) Let $Z=G/H=\operatorname{SO}(n, n+1)/\operatorname{GL}(n,\mathbb{R})$ for $n\geq 2$. We realize $\mathfrak{g} =\mathfrak{so}(n,n+1)$ as matrices of the form $$X=\begin{pmatrix} A & B & v \\ C & -A^T & w \\ -w^T & -v^T& 0\end{pmatrix}$$ with $v,w\in \mathbb{R}^n$, $A, B, C\in \operatorname{Mat}_{n\times n} (\mathbb{R})$ subject to $B^T, C^T = -B, -C$. Then $\mathfrak{h}$ consists of the matrices $X\in\mathfrak{g}$ with $B,C, v, w=0$. First we consider the case where $n=2m$ is even. For ${\bf t}=(t_1, \ldots, t_m)\in \mathbb{R}^m$ we let $D_{\bf t} =\operatorname{diag}( D_{t_1}, \ldots ,D_{t_m})\in \operatorname{Mat}_{n\times n}(\mathbb{R})$ with $D_{t_i}= \begin{pmatrix} 0 & t_i \\ -t_i & 0\end{pmatrix}$. Further for ${\bf s}\in \mathbb{R}^m$ we set $v_{\bf s}=(s_1, s_1, s_2, s_2, \ldots, s_m,s_m)^T\in\mathbb{R}^n$. Now consider the $n$-dimensional non-abelian subspace
$$\mathfrak{t}^0:= \left\{ \begin{pmatrix} 0 & D_{\bf t} & v_{\bf s}\\ -D_{\bf t}& 0 & v_{\bf s} \\ -v_{\bf s}^T& - v_{\bf s}^T & 0\end{pmatrix} \mid {\bf s}, {\bf t} \in \mathbb{R}^m, \right\}\subset \mathfrak{h}^\perp_{\rm ell}\, .$$ It is then easy to see that the $H$-stabilizer of a generic element $X\in\mathfrak{t}^0$ is trivial with $[\mathfrak{h}, X] + \mathfrak{t}^0 =\mathfrak{h}^\perp$. Thus the polar map $H\times \mathfrak{t}^0\to \mathfrak{h}^\perp$ is generically dominant and therefore $\operatorname{int}\mathfrak{h}_{\rm ell}^\perp \neq \emptyset$. For $n=2m+1$ odd we modify $\mathfrak{t}^0$ as follows. We consider $D_{\bf t}$ now as $n\times n$-matrix via the left upper corner embedding. For ${\bf s}\in \mathbb{R}^{m+1}$ we further set $v_{\bf s}=(s_1, s_1, \ldots, s_m, s_m, s_{m+1})\in \mathbb{R}^n$ and define
$$\mathfrak{t}^0:= \left\{ \begin{pmatrix} 0 & D_{\bf t} & v_{\bf s}\\ -D_{\bf t}& 0 & v_{\bf s} \\ -v_{\bf s}^T& - v_{\bf s}^T & 0\end{pmatrix} \mid {\bf s}\in \mathbb{R}^{m+1}, {\bf t} \in \mathbb{R}^m \right\}\subset \mathfrak{h}^\perp_{\rm ell}\, .$$ We now complete the arguments as in the even case. \par (b) Next we consider the cases $Z=G/H=\operatorname{SU}(n,n+1)/\operatorname{Sp}(2n,\mathbb{R})$ for $n\geq 2$. Here $\mathfrak{g}=\mathfrak{su}(n,n+1)$ is realized as the trace-free matrices of the form $$X=\begin{pmatrix} A & B & v \\ C & -A^* & w \\ -w^* & -v^*& d\end{pmatrix}$$ with $v,w\in \mathbb{C}^n$, $A, B, C\in \operatorname{Mat}_{n\times n} (\mathbb{C})$ subject to $B^*, C^* = -B, -C$, and $d\in i\mathbb{R}$. Further we realize $\mathfrak{h}\simeq \mathfrak{sp}(2n,\mathbb{R})$ as the subalgebra $$\mathfrak{h}=\{ X\in \mathfrak{g}\mid A\in \operatorname{Mat}_{n\times n}(\mathbb{R}), B,C\in i \operatorname{Mat}_{n\times n} (\mathbb{R}), v=w=0, d=0\}$$
For ${\bf t}=(t_1, \ldots, t_n)\in \mathbb{C}^n$ we let $E_{\bf t}=\operatorname{diag}(t_1, \ldots, t_n)\in \operatorname{Mat}_{n\times n}(\mathbb{C})$ and consider $$\mathfrak{t}^0:= \left\{ X= \begin{pmatrix} E_{i\bf t} & 0 & {\bf s} \\ 0& E_{i\bf t} & {\bf s} \\ -{\bf s}^T& -{\bf s}^T & d\end{pmatrix} \mid {\bf s}, {\bf t}\in \mathbb{R}^n, \operatorname{tr}(X)=0\right\}\subset \mathfrak{h}^\perp_{\rm ell}\, .$$ Now proceed as in (a) and obtain that $\operatorname{int} \mathfrak{h}^\perp_{\rm ell}\neq \emptyset$. \end{ex}
\begin{cor} For $Z=\operatorname{SU}(n,n+1)/ \operatorname{Sp}(2n,\mathbb{R})$ and $Z=\operatorname{SO}(n, n+1)/\operatorname{GL}(n,\mathbb{R})$, $n\geq 2$, we have $L^2(Z)_{\rm d}\neq \{0\}$. \end{cor}
\begin{proof} In Example \ref{ex non-symmetric ell} we have shown $\operatorname{int} \mathfrak{h}_{\rm ell}^\perp\neq \emptyset$. Apply Theorem \ref{thm discrete}. \end{proof}
\section{Moment maps and elliptic geometry}\label{section moment} We expect that Theorem \ref{thm discrete} gives in fact an equivalence: $L^2(Z)_{\rm d}\neq \{0\}$ if and only if $\operatorname{int} \mathfrak{h}^\perp_{\rm w-ell}\neq \emptyset$. This section is devoted to the following theorem, which gives a geometric version of this expected equivalence.
\begin{theorem} \label{thm moment discrete} Let $Z$ be a non-degenerate real spherical space with a strictly convex compression cone, i.e.~$\mathfrak{a}_{Z,E}=\{0\}$. Then the following statements are equivalent: \begin{enumerate}
\item \label{EINS}$\operatorname{cl}(\operatorname{Ad}(G)\mathfrak{h}^\perp) = \bigcup_{I\subsetneq S}\bigcup_{\mathsf{c} \in \sC_I} \operatorname{Ad}(G)\mathfrak{h}_{I,\mathsf{c}}^\perp$. \item \label{ZWEI}$\operatorname{int} \mathfrak{h}^\perp_{\rm ell}=\emptyset$. \end{enumerate} \end{theorem}
\begin{rmk} \label{rmk thm moment discrete} (a) From Corollary \ref{cor elliptic} we obtain that
$$\operatorname{int}_{\operatorname{cl}(\operatorname{Ad}(G)\mathfrak{h}^\perp)} [\operatorname{Ad}(G)\mathfrak{h}_{I,\mathsf{c}}^{\perp}]_{\rm ell} =\emptyset$$ for all $I\subsetneq S$ (see also the proof of Theorem \ref{thm discrete}). Hence we get \eqref{EINS} $\Rightarrow$ \eqref{ZWEI}, which is the geometric equivalent of Theorem \ref{thm discrete}.
\par (b) Note that \eqref{ZWEI} is equivalent to $\operatorname{int} \mathfrak{h}^\perp_{\rm w-ell}=\emptyset$ by the assumption of non-degeneracy (see Cor.~\ref{cor non-deg}). \par (c) For fixed $I\subset S$ we recall $$\{ \mathfrak{h}_{I,\mathsf{c}}: \mathsf{c} \in \sC_I\} = \{ (\mathfrak{h}_w)_I: w \in \mathcal{W}\}\, .$$ \end{rmk}
The goal of this section is to prove Theorem \ref{thm moment discrete}. The proof is obtained via new insights into the geometry of the moment map of the Hamiltonian $G$-action on the co-tangent bundle $T^*Z$.
\subsection{The moment map}
In this subsection $Z=G/H$ is a general algebraic homogeneous space attached to a reductive group $G=\algebraicgroup{G}(\mathbb{R})$ and an algebraic subgroup $H=\algebraicgroup{H}(\mathbb{R})$.
In the sequel we identify $\mathfrak{g}^*$ with $\mathfrak{g}$ via our non-degenerate $\operatorname{Ad}(G)$-invariant form $\kappa$. In this sense we also have $(\mathfrak{g}/\mathfrak{h})^*\simeq \mathfrak{h}^\perp\subset \mathfrak{g}$ and we can view the co-tangent bundle $T^*Z$ of $Z$ as $T^*Z=G\times_H \mathfrak{h}^\perp$. Recall that the $G$-action on $T^*Z$ is Hamiltonian with corresponding $G$-equivariant moment map given by
$$m: T^*Z \to \mathfrak{g}, \ \ [g,X]\mapsto \operatorname{Ad}(g)X\, .$$ Now for $X\in \mathfrak{h}^\perp$ the stabilizer in $G$ of $\xi:=[{\bf1}, X]\in T^*Z$ is $G_\xi= Z_H(X)$ whereas the stabilizer of $X=m(\xi)\in\mathfrak{g}$ is $G_{m(\xi)}=Z_G(X)$. It is then a general fact about the geometry of moment maps (see \cite[p.190]{GS}), that for the Lie algebras of $Z_H(X)$ and $Z_G(X)$ one has \begin{equation} \label{centralizer normal} \mathfrak{z}_\mathfrak{h}(X)\triangleleft \mathfrak{z}_\mathfrak{g}(X) \qquad (X\in \mathfrak{h}^\perp)\, .\end{equation} Let us call an element $X\in \mathfrak{h}^\perp$ {\it generic}, provided that $\dim \mathfrak{z}_\mathfrak{h}(X)$ is minimal. Then it follows from
\cite[Th. 26.5]{GS} that
\begin{equation} \label{centralizer abelian} \mathfrak{z}_\mathfrak{g}(X)/\mathfrak{z}_\mathfrak{h}(X) \ \text {is abelian for $X\in \mathfrak{h}^\perp$ generic}\, .\end{equation} A somewhat sharper version of \eqref{centralizer abelian} is:
\begin{lemma} \label{lemma moment torus} \cite[Satz 8.1]{Knop} Assume that $\algebraicgroup{ Z}=\algebraicgroup{G}/\algebraicgroup{H}$ is an algebraic homogeneous space defined over $\mathbb{R}$ attached to a connected reductive group $\algebraicgroup{G}$. Then for $X$ in a dense open subset of $\mathfrak{h}^\perp$ one has \begin{enumerate} \item $Z_\algebraicgroup{H}(X) \triangleleft Z_\algebraicgroup{G}(X)$. \item $Z_\algebraicgroup{G}(X)/Z_\algebraicgroup{H}(X)$ is a torus. \end{enumerate} In particular, $Z_G(X)/Z_H(X)$ is an abelian reductive Lie group.\end{lemma}
\subsection{Ellipticity relative to $Z$}
Moment map geometry suggests notions of ellipticity and weak ellipticity of elements $X\in \mathfrak{h}^\perp$ which are more intrinsic to $Z$. \par Let us call an element $X\in \mathfrak{h}^\perp$ {\it weakly $Z$-elliptic} provided that $Z_G(X)/Z_H(X)$ is compact. A weakly $Z$-elliptic element $X\in \mathfrak{h}^\perp$ will be called {\it $Z$-elliptic} if in addition $X$ is semisimple.
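For example, in the group case $Z=G'\times G'/\operatorname{diag}(G')$ of Section \ref{group case}, with $\kappa$ the product of an invariant form on $\mathfrak{g}'$ with itself, one has $\mathfrak{h}^\perp=\{(X,-X)\mid X\in\mathfrak{g}'\}$ and, for $\xi=[{\bf1},(X,-X)]$, $$G_{m(\xi)}/G_\xi=\big(Z_{G'}(X)\times Z_{G'}(X)\big)/\operatorname{diag}\big(Z_{G'}(X)\big)\simeq Z_{G'}(X)\, .$$ Hence $(X,-X)$ is weakly $Z$-elliptic if and only if $Z_{G'}(X)$ is compact; this holds, for instance, when $X$ is a regular element of a compact Cartan subalgebra of $\mathfrak{g}'$.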
\begin{lemma} \label{lemma w-Z-ell} Let $X$ be a generic weakly $Z$-elliptic element and let $(Z_G(X))_0= L_X \ltimes U_X$ be a Levi-decomposition with $L_X$ reductive. Let $L_{H,X}:=L_X \cap H$. Then $(Z_H(X))_0= L_{H,X}\ltimes U_X$ and there exists a compact torus $T_X$ in the center $Z(L_X)$ of $L_X$ such that $L_X= L_{H,X} T_X$ and $\mathfrak{l}_X =\mathfrak{l}_{H,X} \oplus \mathfrak{t}_X$ orthogonal. Moreover, \begin{equation} \label{X in T_X} X \in \mathfrak{t}_X +\mathfrak{u}_X \end{equation} and $X\in \mathfrak{t}_X$ if $X$ is semisimple. In particular, \begin{enumerate} \item\label{11} Every generic weakly $Z$-elliptic element is weakly elliptic. \item\label{12} Every generic $Z$-elliptic element is elliptic. \end{enumerate} \end{lemma}
\begin{proof} Let $G_1:=(Z_G(X))_0$ and $G_2:=(Z_H(X))_0$. Then by \eqref{centralizer normal} and \eqref{centralizer abelian}, $G_2\triangleleft G_1$ is a normal subgroup such that $G_1/G_2$ is compact, connected and abelian, i.e.~a compact torus. Furthermore $G_3:= G_2 U_X$ is a closed normal subgroup such that $G_1/G_3 = L_X/ L_X \cap G_3$ is a compact torus. This implies that $L_X = (G_3\cap L_X) T_X$ with $T_X$ an (infinitesimally) complementing compact torus in the center of $L_X$. It then follows that $G_2\cap L_X = G_3\cap L_X$, as there are no algebraic morphisms of a reductive group to a unipotent group. Now the compactness of $G_1/G_2$ implies that $U_X\subset G_2$ as well. Furthermore, since $\mathfrak{l}_X$ and $\mathfrak{l}_{X,H}$ are both algebraic Lie algebras we see that $\mathfrak{t}_X=\mathfrak{l}_{H,X}^{\perp_{\mathfrak{l}_X}}$ is the orthogonal complement. \par Finally we decompose $X\in \mathfrak{h}^\perp\cap \mathfrak{z}_\mathfrak{g}(X)$ as $X=X_0 +X_1$ with $X_0\in \mathfrak{l}_X$ and $X_1\in \mathfrak{u}_X$. Then $X\in \mathfrak{h}^\perp$ implies that $X_0\in\mathfrak{t}_X$, that is \eqref{X in T_X}. If we further observe $X$ is semisimple if and only if $\mathfrak{z}_\mathfrak{g}(X)=\mathfrak{l}_X$ is reductive, then we see that the remaining statements of the lemma are consequences of \eqref{X in T_X}. \end{proof}
\begin{rmk} Notice that $X=0$ is semisimple and elliptic but not weakly $Z$-elliptic unless $Z=G/H$ is compact. To see an example of a generic weakly $Z$-elliptic element which is not $Z$-elliptic, i.e. not semisimple, consider $H=N$ for an $\mathbb{R}$-split group $G$. Then $\mathfrak{h}^\perp =\mathfrak{a} +\mathfrak{n}$. Now for a regular nilpotent element $X\in \mathfrak{n}$ we have $Z_G(X)=Z_N(X)$ and thus $X$ is generic and weakly $Z$-elliptic. \end{rmk}
Our notion of non-degeneracy for real spherical spaces now generalizes to all algebraic homogeneous spaces $Z=G/H$ as follows. We call $Z=G/H$ {\it non-degenerate} provided that $m(T^*Z)$ contains a Zariski dense open set of semisimple elements. We recall from \cite[Sect.~3] {Knop2} that all quasi-affine homogeneous spaces are non-degenerate.
\begin{prop} Let $Z=G/H$ be a non-degenerate homogeneous space. Then the following assertions are equivalent: \begin{enumerate} \item \label{eins-aa}$\operatorname{int} \mathfrak{h}_{\rm ell}^\perp\neq \emptyset$. \item \label{zwei-bb}$\operatorname{int} \mathfrak{h}_{\rm Z-ell}^\perp\neq \emptyset$. \end{enumerate} \end{prop}
\begin{proof} Here we prove \eqref{eins-aa}$\Rightarrow$\eqref{zwei-bb}, as the converse implication follows immediately from Lemma~\ref{lemma w-Z-ell}\eqref{12} (in fact without assuming non-degeneracy).
Since $Z$ is non-degenerate, the image $m(\alpha)$ is semisimple for $\alpha=[g,X]$ in a dense open subset of $T^*Z=G\times_H\mathfrak{h}^\perp$. For those $\alpha$, the centralizer $\algebraicgroup{ L}(\alpha):=Z_\algebraicgroup{G}(m(\alpha))$ of $m(\alpha)$ is a Levi subgroup of $\algebraicgroup{G}$ which is defined over $\mathbb{R}$. Since there are only finitely many conjugacy classes of such subgroups, there is a dense open subset $\mathcal{T}$ of $T^*Z$ such that for each $\alpha\in\mathcal{T}$, $\algebraicgroup{ L}(\alpha)$ is a Levi subgroup of $\algebraicgroup{G}$ and the $G$-conjugacy class of its real points $L(\alpha)$ is locally constant on $\mathcal{T}$. \par Let $\mathcal{T}_0$ be a connected component of $\mathcal{T}$, and let $\alpha_0\in\mathcal{T}_0$ and $\algebraicgroup{ L}=\algebraicgroup{ L}(\alpha_0)$. Then $L(\alpha)$ is
$G$-conjugate to $L$ for all $\alpha\in\mathcal{T}_0$. Moreover, $\mathcal{T}_0$ is a Hamiltonian $G$-manifold with moment map $m|_{\mathcal{T}_0}:\mathcal{T}_{0}\to\mathfrak{g}$.
\par Set $\mathcal{T}_{00}:=m_{\mathcal{T}_0}^{-1}(\mathfrak{l})$. Then it follows from the Cross Section Theorem (cf. \cite[Th. 2.4.1]{GS2}) that $\mathcal{T}_{00}$ is a Hamiltonian $L$-manifold with moment map
$m|_{\mathcal{T}_{00}}:\mathcal{T}_{00}\to\mathfrak{l}$ the restriction of $m$ to $\mathcal{T}_{00}$.
\par Note that $Z_G(m(\alpha_0))=L(\alpha_0)=L$. In particular $m(\alpha_0)\in \mathfrak{z}(\mathfrak{l})$ is regular. As $m(\mathcal{T}_{00})\subset \mathfrak{l}$ we thus find an open neighborhood $U_0$ of $\alpha_0$ in $\mathcal{T}_{00}$ such that $L(\alpha)=Z_G(m(\alpha))\subset L$ for all $\alpha\in U_0$. On the other hand we know that $L(\alpha)$ is conjugate to $L$. Thus in fact $L(\alpha)=L$ for $\alpha\in U_0$. Hence by passing to a dense open subset of $\mathcal{T}_{00}$ we may assume that $L(\alpha)=L$ for all $\alpha\in \mathcal{T}_{00}$. Since $G_\alpha\subset G_{m(\alpha)}=L$ we then have $G_\alpha=L_\alpha$ with $L_\alpha$ the stabilizer of $\alpha\in \mathcal{T}_{00}$ in $L$.
\par Let $\mathfrak{c}\subset\mathfrak{l}$ be the $\mathbb{R}$-span of $m(\mathcal{T}_{00})$. We claim that \begin{equation} \label{center moment} \mathfrak{c}\subset \mathfrak{z}(\mathfrak{l})\end{equation} with $\mathfrak{z}(\mathfrak{l})$ the center of $\mathfrak{l}$. In fact, we have just seen that $G_{m(\alpha)}=L$ for all $\alpha\in \mathcal{T}_{00}$. Thus $m(\alpha)\in \mathfrak{z}(\mathfrak{l})$ for all $\alpha\in \mathcal{T}_{00}$.
\par Next we recall the basic equivariant property for the derivative of the moment map \cite[eq. (26.2)] {GS}:
\begin{equation} \label{moment equivariant} \kappa (d m(\alpha)(v), X) = \Omega_\alpha (\widetilde X_\alpha, v) \qquad (\alpha \in \mathcal{T}_{00}, v \in T_\alpha \mathcal{T}_{00}, X\in \mathfrak{l})\end{equation} where $\Omega$ is the symplectic form on $\mathcal{T}_{00}$, $\widetilde X$ is the vector field on $\mathcal{T}_{00}$ associated to $X$ and $T_\alpha \mathcal{T}_{00}$ is the tangent space at $\alpha$. Let $\mathfrak{g}_\alpha=\mathfrak{l}_\alpha$ be the Lie algebra of the stabilizer $G_\alpha=L_\alpha$ of $\alpha\in \mathcal{T}_{00}$. We claim that $\mathfrak{c}^{\perp_\mathfrak{l}} \subset \mathfrak{g}_\alpha$. To see that we first note that $dm(\alpha)(v)\in \mathfrak{c} $ by the definition of $\mathfrak{c}$. Hence we derive from \eqref{moment equivariant} that $\Omega_\alpha(\widetilde X_\alpha, v)=0$ for all $v$ if $X\perp \mathfrak{c}$. Since $\Omega$ is non-degenerate one obtains $\widetilde X_\alpha=0$. Hence $\mathfrak{c}^{\perp_\mathfrak{l}}$ acts with vanishing vector fields on $\mathcal{T}_{00}$ and thus $\mathfrak{c}^{\perp_\mathfrak{l}}\subset \mathfrak{g}_\alpha=\mathfrak{l}_\alpha$. \par Notice that the claim implies in particular that the $L$-action on $\mathcal{T}_{00}$ factors through the group $C:=L/ \langle \exp(\mathfrak{c}^{\perp_\mathfrak{l}})\rangle$ with Lie algebra $\mathfrak{c}$.
On the other hand, by passing to a further open dense subset of $\mathcal{T}_{00}$ we may assume that $C_\alpha= G_{m(\alpha)}/ G_\alpha=L/G_\alpha$ is a real form of a complex torus for all $\alpha\in \mathcal{T}_{00}$, see Lemma \ref{lemma moment torus}. Notice that $C_\alpha$ is a quotient of $\mathfrak{c}$ and likewise the Lie algebra $\mathfrak{c}_\alpha$ of $C_\alpha$ is a quotient of $\mathfrak{c}$.
Via the non-degenerate form $\kappa$ we realize $\mathfrak{c}_\alpha$ as a subalgebra of $\mathfrak{c}\subset \mathfrak{l}$ and note that $C_\alpha$ is compact if and only if $\mathfrak{c}_\alpha$ consists of elliptic elements. Further $m(\alpha)\in \mathfrak{c}_\alpha$.
\par From $\mathcal{T}_0=G\cdot \mathcal{T}_{00}$ we obtain that for all $\xi$ in a dense open subset of $\mathcal{T}_0$ the element $m(\xi)$ is elliptic if and only if $G_{m(\xi)}/ G_\xi$ is compact.
Finally, every $\alpha\in T^*Z$ is in the $G$-orbit of an element $\xi= [{\bf1},X]$ with $X\in\mathfrak{h}^\perp$ for which we recall $G_{m(\xi)}/G_\xi= Z_G(X)/Z_H(X)$. Now the implication \eqref{eins-aa}$\Rightarrow$\eqref{zwei-bb} follows from $m([{\bf1},X])=X$. \end{proof}
\subsection{The logarithmic tangent bundle} Let $Z\hookrightarrow\widehat Z$ be a compactification corresponding to a complete fan $\mathcal{F}$ as in Section \ref {section compact}. In particular we recall that $\widehat Z$ was constructed as the closure of $Z$ in the smooth toroidal compactification $\widehat \algebraicgroup{ Z}(\mathbb{R})$ of $\algebraicgroup{ Z}(\mathbb{R})$ attached to $\mathcal{F}$.
\par According to \cite[Cor.~12.3]{KK}, there is a unique $G$-equivariant morphism $\phi:\widehat \algebraicgroup{ Z}(\mathbb{R})\to \operatorname{Gr}(\mathfrak{g})$ into the Grassmannian of $\mathfrak{g}$ with $\phi(z_0)=\mathfrak{h}^\perp$. Let $\mathcal E\to\operatorname{Gr}(\mathfrak{g})$ be the tautological vector bundle. Then the \emph{logarithmic cotangent bundle
of $\widehat \algebraicgroup{ Z}(\mathbb{R})$} is defined by $T^{\log} \widehat \algebraicgroup{ Z}(\mathbb{R}):=\phi^*\mathcal E$. Concretely \[
T^{\log}\widehat \algebraicgroup{ Z}(\mathbb{R})=\{(z,X)\in\widehat \algebraicgroup{ Z}(\mathbb{R})\times\mathfrak{g}\mid
X\in\phi(z)\}. \] Then $T^{\log}\widehat \algebraicgroup{ Z}(\mathbb{R})$ is a smooth $G$-manifold containing $T^*\algebraicgroup{ Z}(\mathbb{R})$ as a dense open subset. It comes with a projection to the first factor \[
p:T^{\log}\widehat \algebraicgroup{ Z}(\mathbb{R})\to\widehat \algebraicgroup{ Z}(\mathbb{R}), \ \ (z,X)\mapsto z \] making it into a vector bundle. On the other hand, the second projection \[
m:T^{\log}\widehat \algebraicgroup{ Z}(\mathbb{R})\to\mathfrak{g}, \ \ (z,X)\mapsto X \] is called the \emph{logarithmic moment map} since it restricts to the moment map on $T^*Z$. Since $\widehat \algebraicgroup{ Z}(\mathbb{R})$ is compact, the logarithmic moment map is proper in the Hausdorff topology.
Next we recall from Section \ref{section compact} that each cone $\mathcal{C}\in\mathcal{F}$ corresponds to a $\algebraicgroup{G}$-orbit $\widehat \algebraicgroup{ Z}_\mathcal{C}=\algebraicgroup{G}\cdot \widehat z_{\mathcal{C}}\subset\widehat \algebraicgroup{ Z}$. We have defined $A_\mathcal{C}\subset A_Z$ to be the subtorus with Lie algebra $\mathfrak{a}_\mathcal{C}= \operatorname{span}_\mathbb{R} \mathcal{C}$. Moreover for $I=I(\mathcal{C})$ the set of spherical roots vanishing on $\mathcal{C}$, we have $\mathfrak{a}_\mathcal{C}\subset \mathfrak{a}_I$ and $\widehat \mathfrak{h}_\mathcal{C}= \mathfrak{h}_I +\mathfrak{a}_\mathcal{C}$. Also recall $\widehat \algebraicgroup{ Z}_\mathcal{C} \simeq \algebraicgroup{G}/\algebraicgroup{ A}_\mathcal{C} \algebraicgroup{H}_I=\algebraicgroup{ Z}_I/\algebraicgroup{ A}_\mathcal{C}$. Next we recall from Remark \ref{remark rel open}(c) that \begin{equation} \label{real ZC}\widehat Z \cap \widehat \algebraicgroup{ Z}_\mathcal{C}(\mathbb{R})=\bigcup_{w\in \mathcal{W}} G \cdot \widehat z_{w,\mathcal{C}}\, .\end{equation}
\par Set $T^{\log}:= p^{-1}(\widehat Z)$. For all $\mathcal{C}\in \mathcal{F}$ we define $T^{\log}_\mathcal{C}:=p^{-1}(\widehat\algebraicgroup{ Z}_\mathcal{C}\cap \widehat Z)$ and note that $T^{\log} =\coprod_{\mathcal{C}\in \mathcal{F}} T_\mathcal{C}^{\log}$. Furthermore, for
$I\subset S$ we put $T_I^{\log}:= \bigcup_{\mathcal{C}\in \mathcal{F}\atop I=I(\mathcal{C})} T_\mathcal{C}^{\log}$.
Since $m(p^{-1}(\widehat z_{w,\mathcal{C}}))=\phi(\widehat z_{w,\mathcal{C}})=(\mathfrak{h}_w)_I^\perp$ for all $\mathcal{C}\in \mathcal{F}$ with $I(\mathcal{C})=I$ we obtain with Remark \ref{rmk thm moment discrete}(c) and \eqref{real ZC} that
\begin{equation} \label{log moment} m(T^{\log}_I)=\bigcup_{\mathsf{c}\in \sC_I}\operatorname{Ad}(G)\mathfrak{h}_{I,\mathsf{c}}^\perp\, . \end{equation}
\subsection{Proof of Theorem \ref{thm moment discrete}}
As mentioned in Remark \ref{rmk thm moment discrete}(a) we only need to show \eqref{ZWEI} $\Rightarrow$ \eqref{EINS}. Let $\alpha\in T^*Z$ be generic. Then $m(\alpha)$ is not elliptic by assumption. Hence the torus $A_\alpha:=G_{m(\alpha)}/ G_\alpha$ is not compact and therefore contains a 1-parameter subgroup
$\mu: \mathbb{R}^\times \hookrightarrow \algebraicgroup{ A}_\alpha(\mathbb{R})$. Consider the orbit
$A_\alpha\cdot \alpha\subset T^*Z$. Since its projection into $Z$ is closed (being a
flat) also $A_\alpha\cdot \alpha$ is closed in $T^*Z$. The limit
$\alpha_0:=\lim_{t\to0^+}\mu(t)\alpha$ exists in $T^{\log}$ since $m$
is proper. Since $\alpha_0\not\in T^*Z$ we have
$\alpha_0\in T^{\log}_I $ for some $I\neq S$ (here we used that the compression cone is strictly convex,
which implies that $T^{\log}_S= T^*Z$).
Hence
$$ m(\alpha)=\operatorname{Ad}(\mu(t)) (m(\alpha))= m\left(\lim_{t\to 0^+} \mu(t) \alpha\right)= m(\alpha_0)
\in m(T^{\log}_I)=\bigcup_{\mathsf{c}\in \sC_I}\operatorname{Ad}(G)\mathfrak{h}_{I,\mathsf{c}}^\perp
$$ by \eqref{log moment}. Thus we obtain for $\alpha\in T^*Z$ generic that \begin{equation}\label{dense generic} m(\alpha)\in \bigcup_{I\subsetneq S}\bigcup_{\mathsf{c}\in \sC_I} \operatorname{Ad}(G)\mathfrak{h}_{I,c}^\perp\, . \end{equation} Since the right hand side in \eqref{dense generic} consists of all proper deformations of $\operatorname{Ad}(G)\mathfrak{h}^\perp$, hence is closed in $\operatorname{cl}(\operatorname{Ad}(G) \mathfrak{h}^\perp)$, we obtain \eqref{EINS} from \eqref{dense generic} and the density of the generic elements. \qed
\section{Harish-Chandra's group case}\label{group case}
In this section we apply the results of this paper to derive Harish-Chandra's formula for the Plancherel measure for a real reductive group \cite{HC3}. The Plancherel measure naturally contains the formal degrees of the discrete series representations of the various inducing data. The formal degrees were computed by Harish-Chandra in \cite{HC}; their explicit knowledge is treated as a black box in what follows.
\par We are considering a real reductive group $G'$ together with its both-sided symmetries $G=G'\times G'$, by which $G'$ gets identified with $Z=G/H$ where $H=\operatorname{diag}(G')\subset G$ is the diagonal subgroup. Let us recall that the topological assumption on $G'$ is that $G'=\algebraicgroup{G}'(\mathbb{R})$ for a reductive algebraic group $\algebraicgroup{G}'$ which is assumed to be connected. If $P'=M'A'N'\subset G'$ is a minimal parabolic subgroup of $G'$ and $\overline P'$ is its opposite, then we obtain with $P=P' \times \overline{P'}\subset G$ a minimal parabolic subgroup of $G$ with $PH\subset G$ open and dense as consequence of the Bruhat decomposition. In particular $\mathcal{W}=\{{\bf1}\}$.
Next note that $\mathfrak{a}= \mathfrak{a}'\times \mathfrak{a}'$, $\mathfrak{a}_H= \operatorname{diag}(\mathfrak{a}')$ and $\mathfrak{a}_Z=\mathfrak{a}_H^{\perp_\mathfrak{a}}$ is the anti-diagonal $$\mathfrak{a}_Z=\{ (X,-X)\mid X\in \mathfrak{a}'\}\, .$$ The assignment $$\mathfrak{a}'\to \mathfrak{a}_Z, \ \ X\mapsto \frac12(X, -X)$$ gives a natural identification. If we denote by $\Sigma'=\Sigma(\mathfrak{a}',\mathfrak{g}')\subset (\mathfrak{a}')^*\backslash \{0\}$ the (possibly non-reduced) root system for the pair $(\mathfrak{a}',\mathfrak{g}')$, and further by $\Phi'\subset \Sigma'$ the set of simple roots determined by the positive roots $\Sigma'(\mathfrak{a}',\mathfrak{n}')$, then the set of spherical roots $S\subset \mathfrak{a}_Z^*$ naturally identifies with $\Phi'$.
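For orientation we record the simplest instance of this dictionary; the example is only illustrative and not used in the sequel. For $G'=\operatorname{SL}(2,\mathbb{R})$ take $\mathfrak{a}'=\mathbb{R} X_0$ with $X_0=\operatorname{diag}(1,-1)$ and $\Phi'=\{\alpha\}$, $\alpha(X_0)=2$. Then
$$\mathfrak{a}_Z=\mathbb{R}\,(X_0,-X_0)\subset \mathfrak{a}'\times \mathfrak{a}'$$
and, under the identification $X\mapsto \frac12(X,-X)$, the unique spherical root of $Z=G/H$ corresponds to $\alpha$. Note also that $\Sigma'$ may genuinely be non-reduced, e.g. of type $BC_1$ for $G'=\operatorname{SU}(2,1)$, while $\Phi'$ still consists of a single simple root.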
\subsection{The abstract Plancherel Theorem for $L^2(Z)$} Here we specialize the abstract Plancherel theory of Section \ref{subs APt} to the case at hand. Recall that
$$(L, L^2(Z))\simeq \left(\int_{\widehat G} \pi \otimes \operatorname{id} \ d\mu(\pi), \int_{\widehat G} \mathcal{H}_\pi\otimes \mathcal{M}_\pi \ d\mu(\pi)\right)$$ with $\mathcal{M}_\pi\subset (\mathcal{H}_\pi^{-\infty})^H$.
Now any $\pi \in \widehat G$ has the form $\pi=\pi_1\otimes \pi_2$ with $\pi_i\in \widehat G'$. Further, since $\mathcal{H}_{\pi_i}^\infty$ is a nuclear Fr\'echet space (as a consequence of Harish-Chandra's admissibility theorem) we have $\mathcal{H}_\pi^\infty=\mathcal{H}_{\pi_1}^\infty \widehat\otimes \mathcal{H}_{\pi_2}^\infty \simeq \operatorname{Hom} (\mathcal{H}_{\pi_1}^{-\infty}, \mathcal{H}_{\pi_2}^\infty)$ together with $\mathcal{H}_\pi^{-\infty}=\mathcal{H}_{\pi_1}^{-\infty} \widehat\otimes \mathcal{H}_{\pi_2}^{-\infty}\simeq \operatorname{Hom} (\mathcal{H}_{\pi_1}^\infty, \mathcal{H}_{\pi_2}^{-\infty})$. Thus $$(\mathcal{H}_\pi^{-\infty})^H \simeq \operatorname{Hom}_{G'}(\mathcal{H}_{\pi_1}^\infty, \mathcal{H}_{\pi_2}^{-\infty})\, .$$ We then claim \begin{equation} \label{HC-claim1} \dim (\mathcal{H}_\pi^{-\infty})^H\leq 1\end{equation} and \begin{equation} \label{HC-claim2} (\mathcal{H}_\pi^{-\infty})^H\neq \{0\} \iff \pi_2\simeq \overline \pi_1\end{equation} with $\overline \pi_1$ the dual representation of $\pi_1$. \par We first show "$\Rightarrow$" of \eqref{HC-claim2} and assume
that $(\mathcal{H}_\pi^{-\infty})^H\neq \{0\}$.
This means that $\operatorname{Hom}_{G'}(\mathcal{H}_{\pi_1}^\infty, \mathcal{H}_{\pi_2}^{-\infty})\neq \{0\}$. On the level of Harish-Chandra modules this yields $\operatorname{Hom}_{\mathfrak{g}'}(V_{\pi_1}, V_{\overline \pi_2} )\neq \{0\}$ and thus $\pi_2\simeq\overline \pi_1$. The same reasoning also shows \eqref{HC-claim1}.
\par To see the converse in \eqref{HC-claim2}, we first supply some useful notation. Given a Hilbert space $\mathcal{H}$ we denote by ${\mathcal B}_2(\mathcal{H})$ the Hilbert space of Hilbert-Schmidt operators and note that ${\mathcal B}_2(\mathcal{H})\simeq \mathcal{H}\widehat \otimes \overline{\mathcal{H}}$ with $\widehat \otimes$ the tensor product in the category of Hilbert spaces and $\overline{\mathcal{H}}$ the dual of $\mathcal{H}$. Further we denote by ${\mathcal B}_1(\mathcal{H})\subset {\mathcal B}_2(\mathcal{H})$ the space of trace-class operators.
\par Given a unitary representation $(\pi, \mathcal{H}_\pi)$ of $G'$, we set $\mathcal{H}_\Pi={\mathcal B}_2(\mathcal{H}_\pi)$ and obtain a unitary representation $(\Pi, \mathcal{H}_\Pi)$ of $G=G'\times G'$ by
$$\Pi(g_1', g_2')T = \pi(g_1')\circ T \circ \pi(g_2')^{-1}\qquad (g_1',g_2'\in G', T\in \mathcal{H}_\Pi={\mathcal B}_2(\mathcal{H}_\pi))\, .$$ Notice that $\Pi\simeq \pi\otimes\overline{\pi}$ under the isomorphism ${\mathcal B}_2(\mathcal{H}_\pi)\simeq \mathcal{H}_\pi\widehat \otimes \overline{\mathcal{H}_\pi}$, and that the HS-norm on ${\mathcal B}_2(\mathcal{H}_\pi)$ is unchanged when the Hilbert norm defining the Hilbertian structure of $\mathcal{H}_\pi$ is rescaled by a positive constant, i.e. it only depends on the positive scaling class of this norm.
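The following finite dimensional special case, which we include only as an illustration, makes the identification $\Pi\simeq \pi\otimes\overline\pi$ completely explicit. If $(\pi,\mathbb{C}^n)$ is a unitary representation of $G'$, then ${\mathcal B}_2(\mathbb{C}^n)$ is the full matrix algebra with inner product $\langle S,T\rangle_{\rm HS}=\operatorname{tr}(ST^*)$, and the linear map
$$ \mathbb{C}^n\otimes \overline{\mathbb{C}^n}\to {\mathcal B}_2(\mathbb{C}^n), \qquad v\otimes \overline{w}\mapsto \big(u\mapsto \langle u,w\rangle\, v\big)$$
intertwines $\pi\otimes\overline\pi$ with $\Pi$. Rescaling the inner product on $\mathbb{C}^n$ by a positive constant changes neither adjoints nor traces, so $\langle\cdot,\cdot\rangle_{\rm HS}$ is indeed unaffected.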
Let us assume from now on that $(\pi, \mathcal{H}_\pi)$ is irreducible. We remind that Harish-Chandra's basic admissibility theorem implies $$\mathcal{H}_\Pi^\infty \subset {\mathcal B}_1(\mathcal{H}_\pi)\, .$$ Together with \eqref{HC-claim1} we thus obtain that
$$ (\mathcal{H}_\Pi^{-\infty})^H= \mathbb{C} \operatorname{tr}_\pi$$ with $\operatorname{tr}_\pi$ denoting the restriction of the trace on ${\mathcal B }_1(\mathcal{H}_\pi)$ to $\mathcal{H}_\Pi^\infty$. In particular, this completes the proof of \eqref{HC-claim2}.
\par From \eqref{HC-claim2} we then deduce $$\operatorname{supp} \mu\subset \{ [\Pi]\mid [\pi]\in \widehat{G'}\}\simeq \widehat G'$$ and $$\mathcal{M}_\Pi= \mathbb{C} \operatorname{tr}_\pi\,\qquad ([\pi]\in\operatorname{supp}\mu)\, .$$ \par As the Hilbert-Schmidt norm on $\mathcal{H}_\Pi={\mathcal B}_2(\mathcal{H}_\pi)$ is independent of the particular $G'$-invariant Hilbert norm on $\mathcal{H}_\pi$ we obtain a natural Hilbert space structure on the one-dimensional space $\mathcal{M}_\Pi$
by requiring $\|\operatorname{tr}_\pi\|=1$. Then the natural left-right representation $L=L'\otimes R'$ of $G=G'\times G'$ on $L^2(Z)$ decomposes as
$$(L'\otimes R', L^2(Z)) \underset{G'\times G'}\simeq\left(\int_{\widehat {G'}}^\oplus \Pi \ d\mu(\pi), \int_{\widehat {G'}}^\oplus \mathcal{H}_\Pi \ d\mu(\pi)\right)\, .$$
\subsection{The Plancherel Theorem for $L^2(Z_I)_{\rm td}$}
\par We recall from Theorem \ref{thm planch} the Bernstein decomposition $$L^2(Z)= \sum_{I\subset S} B_I(L^2(Z_I)_{\rm td})\, .$$
\par For $I\subset S\simeq \Phi'$ we obtain a standard parabolic $P_I'= M_I' A_I' N_I'\supset P'$ and the deformation $H_I$ of $H$ as $$H_I = \operatorname{diag} (M_I' A_I') (\overline{N_I'} \times N_I')$$ with $$\widehat H_I = \operatorname{diag} (M_I') ( A_I'\overline{N_I'} \times A_I'N_I')\, .$$ Next we describe $L^2(Z_I)_{\rm td}$. As in Subsection \ref{Subsection twisted} we decompose every $f\in L^2(Z_I)$ as an $A_I$-Fourier integral $$f= \int_{i\mathfrak{a}_I^*} f_\lambda \ d \lambda$$ where $f_\lambda \in L^2(\widehat Z_I, \lambda)$ is given by $$ f_\lambda(g)= \int_{A_I} a^{-\rho - \lambda} f(gaH_I) \ da \qquad (g\in G)\, .$$ If we denote by $$\xi_\lambda: L^2(\widehat Z_I, \lambda)^\infty\to \mathbb{C}, \ \ f\mapsto f({\bf1})$$ the evaluation at ${\bf1}$, and write $L_\lambda$ for the left regular representation of $G$ on $L^2(\widehat Z_I,\lambda)$, then we can rewrite the Fourier inversion in terms of spherical characters (as in Remark \ref{F-inverse}) \begin{equation} \label{Four1} f(z_{0,I}) = \int_{i\mathfrak{a}_I^*} \xi_\lambda (L_{-\lambda} (f) \xi_{-\lambda}) \ d \lambda\end{equation} with $L_{-\lambda}= \overline {L_\lambda}$ the dual representation and $\xi_{-\lambda}= \overline{\xi_\lambda}$. Next note that we have by induction in stages
$$ L^2(\widehat Z_I, -\lambda)=\operatorname{Ind}_{\widehat H_I}^G (\lambda)\simeq \operatorname{Ind}_{\overline {P_I'}\times P_I'}^{G'\times G'} (L^2(M_I')\otimes \lambda)\, .$$ Thus $L^2(\widehat Z_I, -\lambda)_{\rm d}$ is induced from the discrete series of $M_I'$. \par In more detail, let $(\sigma, \mathcal{H}_\sigma)$ be a discrete series representation of $M_I'$ and $\lambda\in i(\mathfrak{a}_I')^*$. Then we denote by $\operatorname{Ind}_{\overline {P_I'}}^{G'}(\sigma\otimes\lambda)$ the Hilbert space of measurable functions $f: G'\to \mathcal{H}_\sigma$ with the transformation property $$ f(g' m_I' a_I' \overline{n_I'}) = \sigma(m_I')^{-1} (a_I')^{-\lambda +\rho'} f(g') \qquad (g'\in G', m_I' a_I' \overline{n_I'}\in \overline{P_I'})$$ and endowed with the inner product (whose convergence is an additional requirement on the functions)
$$ \langle f_1, f_2\rangle =\int_{K'} \langle f_1(k'), f_2(k')\rangle_\sigma \ dk'$$ where $K'\subset G'$ is a maximal compact subgroup of $G'$ with $\mathfrak{k}'\perp \mathfrak{a}'$. The left regular representation of $G'$ on $\mathcal{H}_{\sigma,\lambda}:=\operatorname{Ind}_{\overline {P_I'}}^{G'}(\sigma\otimes\lambda)$ is then unitary and denoted by $\pi_{\sigma,\lambda}=\operatorname{ind}_{\overline {P_I'}}^{G'}(\sigma\otimes\lambda)$. Let us denote by $d(\sigma)$ the formal degree of the discrete series representation of $M_I'$ (with respect to a chosen Haar measure $dm_I'$), i.e. the positive number for which we have
\begin{equation} \label{formal deg} d(\sigma) \int_{M_I'} \langle \sigma(m_I') u,u'\rangle \overline{\langle \sigma(m_I') v,v'\rangle} \ dm_I' = \langle u,v\rangle \overline{ \langle u',v' \rangle }\end{equation} for all $v,v',u,u'\in \mathcal{H}_\sigma$.
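As a consistency check, suppose in addition that $M_I'$ is compact and that $dm_I'$ is normalized to have total mass one (an assumption made only for this illustration). Then the classical Schur orthogonality relations give
$$ \int_{M_I'} \langle \sigma(m_I') u,u'\rangle \overline{\langle \sigma(m_I') v,v'\rangle} \ dm_I' = \frac{1}{\dim \sigma}\, \langle u,v\rangle \overline{ \langle u',v' \rangle }\,, $$
so that \eqref{formal deg} yields $d(\sigma)=\dim \sigma$ in this case.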
We now define a $G'\times G'$-equivariant linear map
$$\Phi_{\sigma,\lambda}: \operatorname{Ind}_{\overline {P_I'}}^{G'}(\sigma\otimes\lambda)\widehat \otimes \operatorname{Ind}_{P_I'}^{G'}(\overline \sigma\otimes(-\lambda))\to L^2(\widehat Z_I, - \lambda)_{\rm d}$$ by $$ \Phi_{\sigma,\lambda}(f_1 \otimes f_2)(g_1', g_2'):= ( f_1(g_1'), f_2(g_2'))_\sigma \,,$$ with $( \cdot,\cdot)_\sigma$ referring here to the natural bilinear pairing of $\sigma$ with its dual representation $\overline \sigma$. The square integrability of the image follows from the fact that the norm for $f\in L^2(\widehat Z_I, - \lambda)$ can be computed by means of the Haar measures on $K'$ and $M_I'$ (with the latter properly normalized) as
$$ \|f\|_{L^2(\widehat Z_I,-\lambda)}^2= \int_{K'}\int_{K'}\int_{M_I'} |f(k_1'm_I', k_2')|^2 \ dm_I' \ dk_1' \ dk_2'\, .$$
In fact, with \eqref{formal deg} this calculation shows that $d(\sigma)^{1/2} \Phi_{\sigma,\lambda}$ is isometric.
With the operator $\sum_{\sigma} \int \Phi_{\sigma,\lambda}\, d_\sigma\lambda$ we thus obtain a unitary $G$-equivalence
\begin{equation} \label{planch Z_I} L^2(Z_I)_{\rm td} \underset{G'\times G'}\simeq \underset{\sigma\in \widehat {M_I'}_{\rm disc}}{\widehat\bigoplus} \int_{i(\mathfrak{a}_I')^*}^\oplus \operatorname{Ind}_{\overline {P_I'}}^{G'}(\sigma\otimes\lambda)\widehat \otimes \operatorname{Ind}_{P_I'}^{G'}(\overline \sigma\otimes(-\lambda))\ d_\sigma \lambda \end{equation} where $$ d_\sigma \lambda = d(\sigma)\,d\lambda $$ with $d\lambda$ the Lebesgue-measure on the Euclidean space $i(\mathfrak{a}_I')^*$, suitably normalized.
\par For any $I\subset S$ we now denote by $\mu^{I,\rm td}$ the restriction of the Plancherel measure $\mu$ to the closed subspace $\operatorname{im} B_I\subset L^2(Z)$.
From Theorem \ref{Plancherel induced} and the uniqueness of the measure class of the Plancherel measure for $L^2(Z_I)$ we obtain that: \begin{itemize} \item $\operatorname{supp} \mu^{I,\rm td}=\{[\Pi_{\sigma,\lambda}]\mid [\pi_{\sigma,\lambda}]\in \widehat G', \sigma\in \widehat {M_I'}_{\rm disc}, \lambda\in i(\mathfrak{a}_I')^*\}$, \item $\operatorname{ind}_{P_I'}^{G'}(\overline \sigma\otimes(-\lambda))$ is isomorphic to $\pi_{\sigma,\lambda}^*=\operatorname{ind}_{\overline P_I'}^{G'}(\sigma\otimes \lambda)^*$ for $\mu^{I,\rm td}$-almost all parameters $(\sigma,\lambda)$. \end{itemize}
Next we move to the subtle point on how to identify $\operatorname{Ind}_{P_I'}^{G'}(\overline \sigma\otimes(-\lambda))$ with the dual representation of $\operatorname{Ind}_{\overline {P_I'}}^{G'}(\sigma\otimes\lambda)$. For that we first remark that the pairing
\begin{equation}\label{natural dual} \operatorname{Ind}_{\overline {P_I'}}^{G'}(\sigma\otimes\lambda) \times \operatorname{Ind}_{\overline {P_I'}}^{G'}(\overline \sigma\otimes(-\lambda))
\to \mathbb{C}, \ \ (f_1,f_2)\mapsto \int_{K'} (f_1(k'), f_2(k'))_\sigma \ dk'\end{equation}
is $G'$-equivariant. Thus the dual representation of $\pi_{\sigma,\lambda}=\operatorname{ind}_{\overline {P_I'}}^{G'}(\sigma\otimes\lambda)$ is unitarily equivalent to $\pi_{\overline \sigma, -\lambda}=\operatorname{ind}_{\overline {P_I'}}^{G'}(\overline \sigma\otimes(-\lambda))$.
\par Next, we consider the long intertwining operator \begin{equation} \label{def A} \mathcal{A}_{\sigma,\lambda}: \operatorname{Ind}_{\overline {P_I'}}^{G'}(\overline \sigma\otimes(-\lambda))\to \operatorname{Ind}_{P_I'}^{G'}(\overline \sigma\otimes(-\lambda))\end{equation} \begin{equation} \label{R abs} \mathcal{A}_{\sigma,\lambda}(f)(g') = \int_{N_I'} f(g' n_I') \ dn_I' \qquad (g'\in G')\, .\end{equation} Clearly, $\mathcal{A}_{\sigma,\lambda}(f)$ is defined near $g'={\bf1}$ for functions $f$ with compact support in the non-compact picture, i.e. $\operatorname{supp} f \subset \Omega \overline{P_I'}$ for $\Omega\subset N_I'$ compact. By standard techniques of meromorphic continuation in the $\lambda$-variable, summarized in the following remark, we obtain that $\mathcal{A}_{\sigma,\lambda}$ is defined for generic $\lambda\in i(\mathfrak{a}_I')^*$.
\begin{rmk} \label{analyt cont}Let us briefly recall the basic constructions leading to the definition of $\mathcal{A}_{\sigma,\lambda}$ in terms of meromorphic continuation (originally obtained in \cite{K-SII}). In the first step one embeds the irreducible representation $\overline \sigma$ of $M_I'$ into a minimal principal series representation of $M_I'$ via the Casselman subrepresentation theorem. In formulae, we consider $\overline \sigma$ as a subrepresentation of $\operatorname{ind}_{M_I'\cap \overline P'}^{M_I'} (\overline \sigma_{M'}\otimes \lambda_0)$, where $\overline \sigma_{M'}\in \widehat M'$ and $\lambda_0 \in (\mathfrak{a}'\cap \mathfrak{m}_I')_\mathbb{C}^*$. Via induction in stages we then obtain that $\pi_{\overline \sigma,-\lambda}$ is a subrepresentation of the minimal principal series $\operatorname{ind}_{\overline P'}^{G'} (\overline \sigma_{M'} \otimes \mu )$ where $\mu =\lambda_0 -\lambda$. It is important to note that
$\mu|_{\mathfrak{a}'_I}=-\lambda$ for this initial parameter $\mu$. In the sequel $\overline \sigma_{M'}\in \widehat M'$ will be fixed, but we will allow the parameter $\mu\in (\mathfrak{a}')_\mathbb{C}^*$ to vary. For $\operatorname{Re} \mu$ in a certain open cone this then leads to an intertwining operator
$$ \mathcal{A}(\mu): \operatorname{Ind}_{\overline {P'}}^{G'}(\overline \sigma_{M'}\otimes \mu)\to \operatorname{Ind}_{(\overline P'\cap M_I')A'_IN_I' }^{G'}(\overline \sigma_{M'}\otimes \mu)$$ given by absolutely convergent integrals as in \eqref{R abs}.
\par In the second step, via Gindikin-Karpelevic change of variable (i.e. by using a minimal string of parabolics in the terminology of \cite[Sect. 4]{K-SII}), one obtains that the intertwining operator is a product of rank one intertwiners $\mathcal{A}_\alpha(\mu)$ attached to indivisible roots $\alpha\in \Sigma(\mathfrak{a}', \mathfrak{n}_I')$. For these rank one operators one has well known explicit formulae which show that they admit a meromorphic continuation via Bernstein's $p^\lambda$. In this regard it is important to note that the $\mu$-dependence of $\mathcal{A}_\alpha(\mu)$ is in fact only a dependence on $\mu_\alpha=\mu(\alpha^\vee)\in \mathbb{C}$. Moreover, regardless of $\sigma_{M'}$, the operator $\mathcal{A}_\alpha(\mu)$ is defined and invertible provided that $\mu_\alpha\not \in {1\over N} \mathbb{Z}$ for an $N\in \mathbb{N}$ only depending on $G'$, see \cite[Prop. B.1]{KKOS} which was based on \cite[Th. 1.1]{SpVo}.
\par If we now use that the roots $\alpha$ do not vanish identically on $\mathfrak{a}_I'$, we obtain that $\mathcal{A}_{\sigma,\lambda}$, as in \eqref{def A}, is defined and invertible for generic $\lambda \in i (\mathfrak{a}_I')^*$. More precisely, we define $\mathcal{A}_{\sigma,\lambda}$ as the restriction of $\mathcal{A}(\mu)$ to the subrepresentation $\operatorname{Ind}_{\overline {P_I'}}^{G'}(\overline \sigma\otimes(-\lambda))$. \end{rmk}
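For orientation we spell out the prototype of the rank one analysis in Remark \ref{analyt cont} in the simplest case; the normalization below is only indicative, since the precise relation between the exponent $s$ and the parameter $\mu_\alpha$ depends on the chosen conventions. For $G'=\operatorname{SL}(2,\mathbb{R})$, evaluating the intertwining integral \eqref{R abs} on the spherical vector of a minimal principal series in the non-compact picture at $g'={\bf1}$ reduces to
$$ \int_\mathbb{R} (1+x^2)^{-s} \ dx = \sqrt{\pi}\, \frac{\Gamma\left(s-\tfrac12\right)}{\Gamma(s)}\,, $$
with $s$ an affine function of $\mu_\alpha$. The integral converges for $\operatorname{Re} s>\tfrac12$, the right hand side provides the meromorphic continuation, and its zeros and poles lie in countably many values of $s$; this illustrates why $\mathcal{A}_\alpha(\mu)$ is defined and invertible away from countably many values of $\mu_\alpha$.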
The operator $\mathcal{A}_{\sigma,\lambda}$ is $G'$-equivariant and continuous, and hence we obtain from Schur's Lemma that \begin{equation} \label{defi tau} \mathcal{A}_{\sigma,\lambda}^*\circ \mathcal{A}_{\sigma,\lambda} = \tau(\sigma,\lambda) \operatorname{id} \end{equation} for a number $\tau(\sigma,\lambda)\in [0,\infty]$ which is positive for generic $\lambda\in i(\mathfrak{a}_I')^*$. Here $\mathcal{A}_{\sigma,\lambda}^*$ is the Hilbert adjoint to $\mathcal{A}_{\sigma,\lambda}$. This implies in particular for all $f\in \mathcal{H}_{\overline \sigma, -\lambda}= \operatorname{Ind}_{\overline {P_I'}}^{G'}(\overline \sigma\otimes(-\lambda))$ the following norm identity
\begin{equation} \label{inter norm} \|\mathcal{A}_{\sigma,\lambda} f\|^2 = \tau(\sigma,\lambda) \|f\|^2\, .\end{equation}
\begin{rmk} The numbers $\tau(\sigma,\lambda)$ are computable via rank one reduction, see Remark \ref{analyt cont} above. \end{rmk}
Recall that ${\mathcal B}_2(\mathcal{H}_{\sigma,\lambda})\simeq \mathcal{H}_{\sigma,\lambda}\otimes \overline{\mathcal{H}_{\sigma,\lambda}}$ and from \eqref{natural dual} that $\mathcal{H}_{\overline \sigma, -\lambda}=\overline{\mathcal{H}_{\sigma,\lambda}}$ is the natural (isometric) dual of $\mathcal{H}_{\sigma,\lambda}$. By combining \eqref{planch Z_I} and \eqref{inter norm} we thus obtain that the operator $$\sum_\sigma\int \Phi_{\sigma,\lambda}\circ(\operatorname{id}_{\mathcal{H}_{\sigma,\lambda}}\otimes \mathcal{A}_{\sigma,\lambda}) \,\mu(\sigma,\lambda)\, d\lambda$$ provides a unitary $G$-equivalence
\begin{equation} \label{planch Z_I re} L^2(Z_I)_{\rm td} \underset{G'\times G'}\simeq \underset{\sigma\in \widehat {M_I'}_{\rm disc}}{\widehat\bigoplus} \int_{i(\mathfrak{a}_I')^*}^\oplus {\mathcal B}_2(\mathcal{H}_{\sigma,\lambda}) \ \mu(\sigma,\lambda) d\lambda\, \end{equation} where
\begin{equation} \label{def mu} \mu(\sigma,\lambda):= \frac{d(\sigma)}{\tau(\sigma,\lambda)}.\end{equation}
Next we want to keep track of the implied isomorphism in \eqref{planch Z_I re} with more suitable language. For that we define a one-dimensional Hilbert space
$\mathbb{C}_{\sigma,\lambda} = \mathbb{C} \xi_{\sigma,\lambda}\subset({\mathcal B}_2(\mathcal{H}_{\sigma,\lambda}) ^{-\infty})^{H_I}$ with $\|\xi_{\sigma,\lambda}\|=1$ and where $\xi_{\sigma,\lambda}$ is defined by
\begin{equation} \label{def xi sigma lambda} \xi_{\sigma,\lambda}(f_1\otimes f_2)= \left( f_1(e) ,\mathcal{A}_{\sigma,\lambda}(f_2)(e)\right)_\sigma \qquad (f_1 \in \mathcal{H}_{\sigma,\lambda}^ \infty, f_2 \in \mathcal{H}_{\overline \sigma, -\lambda}^\infty)\, .\end{equation}
In this regard we note for $g=(g_1', g_2')\in G$ that
\begin{equation} \label{definition xi sigma} \Phi_{\sigma,\lambda}(f_1\otimes \mathcal{A}_{\sigma,\lambda}(f_2))(g) = \xi_{\sigma,\lambda}( \Pi_{\sigma, \lambda}(g^{-1})(f_1\otimes f_2))\end{equation} so that with the extended notation \begin{equation} \label{planch Z_I Re} L^2(Z_I)_{\rm td} \underset{G'\times G'}\simeq \underset{\sigma\in \widehat {M_I'}_{\rm disc}}{\widehat\bigoplus} \int_{i(\mathfrak{a}_I')^*}^\oplus {\mathcal B}_2(\mathcal{H}_{\sigma,\lambda})\otimes \mathbb{C}_{\sigma,\lambda} \ \mu(\sigma,\lambda) d \lambda\, . \end{equation} we keep track also of the isomorphism from right to left. In view of \eqref{Four1} and the orthogonality relations for the discrete series, this isomorphism is the inverse of the Fourier transform
$$F_{\rm td}\mapsto \overline \Pi_{\sigma,\lambda}(F)\overline \xi_{\sigma,\lambda}\in {\mathcal B}_2(\mathcal{H}_{\sigma,\lambda})^\infty$$ for $F\in C_c^\infty(Z_I)$, see \eqref{Four1} and Remark \ref{F-inverse}. Here $F_{\rm td}$ refers to the orthogonal projection of $F\in C_c^\infty(Z_I)\subset L^2(Z_I)$ to $L^2(Z_I)_{\rm td}$.
\subsubsection{Grouping into irreducibles}
The $G=G'\times G'$-representation in \eqref{planch Z_I Re} is not multiplicity free, as different $\Pi_{\sigma,\lambda}$ can yield equivalent representations. These equivalences are induced by Weyl group orbits. More precisely, let $\mathsf{W}'$ be the Weyl group of $\Sigma'$. Then
$$ \mathsf{W}_I':=\{ w|_{\mathfrak{a}_I'}\mid w \in \mathsf{W}',\ w(\mathfrak{a}_I')=\mathfrak{a}_I'\}$$ is a subquotient of $\mathsf{W}'$ and a finite subgroup of the orthogonal group of $\mathfrak{a}_I'$.
\begin{rmk}\label{str WI} (Structure of $\mathsf{W}_I'$) In general we are not aware of a criterion for subsets $I\subset \Phi'=S$ which characterizes those for which $\mathsf{W}_I'$ is a reflection group. Nevertheless we can describe a fundamental domain for the action of $\mathsf{W}_I'$ as a union of simplicial cones as follows. \par For $\alpha\in \Phi'$ we denote by $s_\alpha\in \mathsf{W}'$ the corresponding simple reflection and recall that $$\mathsf{W}'(I):=\langle s_\alpha\mid \alpha\in I\rangle$$ is naturally a reflection group on $$\mathfrak{a}'(I):=\operatorname{span}_\mathbb{R}\{ \alpha^\vee \mid \alpha\in I\}$$ with simple roots given by $I$. Note that $\mathfrak{a}'=\mathfrak{a}'(I) \oplus \mathfrak{a}_I'$ is an orthogonal decomposition. Next we recall the set $D_I'\subset \mathsf{W}'$ of distinguished representatives for $\mathsf{W}'/ \mathsf{W}'(I)$, namely with $$D'_I:=\{w \in \mathsf{W}'\mid w(I)\subset (\Sigma')^+\}$$ we obtain a bijection $$D_I' \to \mathsf{W}'/\mathsf{W}'(I),\ \ w\mapsto [w]=w\mathsf{W}'(I)$$ with $w$ the unique minimal length representative of $[w]$.
\par For $I, J\subset S$ set $$\mathsf{W}'(I,J)=\{ w \in \mathsf{W}'\mid w (J)=I\}\, .$$ We claim that the map
$$R: \mathsf{W}'(I,I)\to \mathsf{W}_I', \ \ w\mapsto w|_{\mathfrak{a}_I'}$$ is an isomorphism of groups. Let us first show that $R$ is defined. In fact, if $w\in \mathsf{W}'(I,I)$, then $w(I)=I$ implies that $w$ preserves $\mathfrak{a}'(I)$ and hence its orthogonal complement $\mathfrak{a}_I'$. Hence $R$ is defined. Let us show now that $R$ is injective and assume
$w|_{\mathfrak{a}_I'}= \operatorname{id}$. In particular $w$ fixes the face $$(\mathfrak{a}_I')^-:=\{ X\in \mathfrak{a}_I'\mid (\forall \alpha\in S\backslash I) \ \alpha(X)\leq 0\}$$ of $(\mathfrak{a}')^-$, the closure of the Weyl chamber $(\mathfrak{a}')^{--}$. Hence Chevalley's Lemma implies that $w\in \mathsf{W}'(I)$, thus $w={\bf1}$ as $w(I)=I$. Finally we show that $R$ is surjective. Let $w\in \mathsf{W}'$ such that $w(\mathfrak{a}_I')=\mathfrak{a}_I'$. Hence $w(\mathfrak{a}'(I))=\mathfrak{a}'(I)$. From the description of $\mathsf{W}'/\mathsf{W}'(I)\simeq D_I'$ we find $w_1 \in \mathsf{W}'(I)$ such that $ww_1(I)\subset (\Sigma')^+$. Note that $ww_1(\mathfrak{a}_I')=\mathfrak{a}_I'$ holds as well. Hence $ww_1(I)\subset (\Sigma')^+ \cap \operatorname{span}_\mathbb{R} I \subset \mathsf{W}'(I)\cdot I $. In particular we find $w_2 \in \mathsf{W}'(I)$ such that $w_2 w w_1(I)=I$. Hence $w_2ww_1 \in
\mathsf{W}'(I,I)$. Since $w_1|_{\mathfrak{a}_I'}= w_2|_{\mathfrak{a}_I'}=\operatorname{id}$, the surjectivity of $R$ follows.
\par Recall that subsets $I, J$ are called associated provided that $\mathsf{W}'(I,J)\neq \emptyset$. This defines an equivalence relation $I\sim J$ among subsets of $S$. Next we record the tiling
\begin{equation} \label{tiling} \mathfrak{a}_I' = \bigcup_{J\sim I} \bigcup_{w \in \mathsf{W}'(I,J)} w (\mathfrak{a}_J')^-\end{equation} meaning that the cones on the right hand side cover $\mathfrak{a}_I'$ and that their interiors $w (\mathfrak{a}_J')^{--}$ are pairwise disjoint. Now note that $\mathsf{W}_I'\simeq \mathsf{W}'(I,I)$ acts on each $\mathsf{W}'(I,J)$ from the left. We pick for each orbit $[w] = \mathsf{W}_I' w \subset \mathsf{W}'(I,J)$ a representative $w$ (of minimal length). Then the cone \begin{equation}\label{fund1} C_I':= \bigcup_{J\sim I} \bigcup_{[w]\in \mathsf{W}_I'\backslash \mathsf{W}'(I,J)} w (\mathfrak{a}_J')^-\end{equation} is a fundamental domain for the action of $\mathsf{W}_I'$ on $\mathfrak{a}_I'$. \end{rmk}
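The following two elementary special cases, which are easily verified from the definitions and are included only for illustration, may help to digest \eqref{fund1}. For $I=\emptyset$ one has $\mathfrak{a}_\emptyset'=\mathfrak{a}'$ and $\mathsf{W}_\emptyset'=\mathsf{W}'$, the only subset associated to $I$ is $J=\emptyset$, and \eqref{fund1} returns the single chamber $C_\emptyset'=(\mathfrak{a}')^-$. For $\Sigma'$ of type $A_2$ with $S=\{\alpha_1,\alpha_2\}$ and $I=\{\alpha_1\}$ one finds $\mathsf{W}'(I,I)=\{{\bf1}\}$, so that $\mathsf{W}_I'$ is trivial, whereas $J=\{\alpha_2\}$ is associated to $I$ with $|\mathsf{W}'(I,J)|=1$; the line $\mathfrak{a}_I'$ is then tiled by the two half-lines $(\mathfrak{a}_I')^-$ and $w(\mathfrak{a}_J')^-$ for the unique $w\in \mathsf{W}'(I,J)$, and \eqref{fund1} gives $C_I'=\mathfrak{a}_I'$, as it should for a trivial group $\mathsf{W}_I'$.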
Let $C_I^*$ be a fundamental domain for the dual action of $\mathsf{W}_I'$ on $(\mathfrak{a}_I')^*$, constructed as in \eqref{fund1}. Let $w\in \mathsf{W}_I'$. Then for $\lambda\in i (\mathfrak{a}_I')^*$ we define
$\lambda_w= w \cdot \lambda= \lambda(w^{-1}\cdot )$. Likewise one defines
$\sigma_w$ by $\sigma_w(m_I') = \sigma(w^{-1} m_I' w)$ where we tacitly allowed ourselves to identify
$w\in \mathsf{W}_I'\simeq (N_{K'}(\mathfrak{a}_I') \cap N_{K'}(\mathfrak{a}'))/ M' $ with a lift to $K'$ which normalizes $M_I'$.
\begin{lemma}\label{Lemma Ldt1} Let $w\in \mathsf{W}_I'$. Then for generic $\lambda\in i (\mathfrak{a}_I')^*$, the representation $\pi_{\sigma,\lambda}$ is equivalent to $\pi_{\sigma_w, \lambda_w}$. \end{lemma}
\begin{proof} We use the intertwining operator
\begin{equation} \label{def A2} \mathcal{A}_w: \operatorname{Ind}_{\overline{P_I}'}^{G'}(\sigma\otimes \lambda)\to \operatorname{Ind}_{w^{-1}\overline{P_I}'w}^{G'}(\sigma\otimes \lambda)\end{equation}
\begin{equation} \label{R abs2} \mathcal{A}_w(f)(g') = \int_{w^{-1} \overline{N_I}'w/ w^{-1}\overline{N_I}'w \cap \overline{N_I}'} f(g' x) \ dx \qquad (g'\in G')\end{equation} which is, as a product of rank one intertwiners, generically defined by Remark \ref{analyt cont}. Composing $\mathcal{A}_w$ with the right shift by $w$, i.e. setting \begin{equation}\label{R inter} \mathcal{R}_w(f)(g'):= (\mathcal{A}_w f)(g'w)\,,\end{equation} then yields the desired equivalence between $\pi_{\sigma,\lambda}$ and $\pi_{\sigma_w, \lambda_w}$. \end{proof}
The next lemma is a generic form of the Langlands Disjointness Theorem (see \cite[Th. 14.90]{Knapp}) for which we provide an elementary proof.
\begin{lemma} \label{Lemma Ldt2} Let $\sigma, \sigma'\in \widehat {M_I'}_{\rm disc}$ and $\lambda, \lambda' \in iC_I^*$ be such that the unitary representations $\pi_{\sigma,\lambda}$ and $\pi_{\sigma',\lambda'}$ are equivalent. Then for generic $\lambda$ one has $\lambda=\lambda'$ and $\sigma=\sigma'$.\end{lemma}
\begin{proof} We first show that $\lambda=\lambda'$ for generic $\lambda$. For that we consider the infinitesimal characters of $\pi_{\sigma, \lambda}$ and $\pi_{\sigma', \lambda'}$. For the discrete series $\sigma$ a fairly elementary and short proof that their infinitesimal characters are real is given in \cite{KKOS}. Now, from the standard formulae for infinitesimal characters of induced representations, see \cite[Prop. 8.22]{Knapp}, we deduce from $\pi_{\sigma, \lambda}\simeq \pi_{\sigma', \lambda'}$ that \begin{equation} \label{orbit Wj} \mathsf{W}'_{\mathfrak{j}'}\cdot (\mu_\sigma + \lambda) = \mathsf{W}'_{\mathfrak{j}'}\cdot (\mu_{\sigma'} +\lambda')\, . \end{equation} Here $\mathfrak{j}' = \mathfrak{a}'+\mathfrak{t}'$ is a Cartan subalgebra of $\mathfrak{g}'$ which extends the maximal split torus $\mathfrak{a}'\subset \mathfrak{g}'$ by a maximal torus $\mathfrak{t}'\subset \mathfrak{m}'$. Further, $\mu_{\sigma}, \mu_{\sigma'}\in (i \mathfrak{t}' + \mathfrak{a}'\cap \mathfrak{m}_I')^*$ are representatives of the infinitesimal character for $\sigma$, resp. $\sigma'$. Note that $\mathsf{W}'_{\mathfrak{j}'}$ leaves the real form $\mathfrak{j}_\mathbb{R}':= \mathfrak{a}'+i\mathfrak{t}'$ of $\mathfrak{j}_\mathbb{C}'$ invariant. Hence comparing the imaginary parts (with respect to $\mathfrak{j}_\mathbb{R}'$) in \eqref{orbit Wj} yields for generic $\lambda, \lambda'\in iC_I^*$ that $\lambda=\lambda'$.
\par Finally we show that $\sigma$ is equivalent to $\sigma'$. Let $F$ be a finite dimensional representation of $G'$ with strictly dominant highest weight $\Lambda$ and highest weight vector fixed by $M_I'$. Hence $\Lambda \in (\mathfrak{a}_I')^*$. For $\lambda$ generic the translation functor moves the representations $\pi_{\sigma, \lambda} $ to $\pi_{\sigma, \lambda +\Lambda} $ and $\pi_{\sigma', \lambda} $ to $\pi_{\sigma', \lambda +\Lambda}$, see \cite[proof of Lemma 10.2.7]{Wal2}.
\par We conclude that $\pi_{\sigma, \lambda }$ is equivalent to $\pi_{\sigma', \lambda }$ also for a parameter $\lambda$ with $ \operatorname{Re} \lambda$ sufficiently dominant. This allows us to apply Langlands' Lemma \cite[Lemma 3.12]{L} for the asymptotics of $\langle \pi_{\sigma,\lambda}(m_I' a'_t)f_1, f_2\rangle$ for $f_1, f_2 \in \mathcal{H}_{\sigma,\lambda}^\infty$, $m_I'\in M_I'$ and $a_t'=\exp(tX')$ for $X'\in (\mathfrak{a}_I')^{--}$:
$$\lim_{t\to \infty} (a_t')^{\lambda -\rho'}\langle \pi_{\sigma, \lambda} (a_t' m_I') f_1, f_2\rangle = \langle \sigma(m_I')[f_1({\bf1})], \mathcal{A}_{\sigma,\lambda}(f_2)({\bf1})\rangle_\sigma,$$ see also \eqref{Lang Lemma} below. Notice that with $f_1, f_2\in \mathcal{H}_{\sigma,\lambda}^\infty$ the vectors $f_1({\bf1}), \mathcal{A}_{\sigma,\lambda}(f_2)({\bf1})$ run over all pairs of vectors in $\mathcal{H}_\sigma^\infty$. The same holds for $\sigma'$, and we obtain that the unitary representations $\sigma$ and $\sigma'$ feature the same (smooth) matrix coefficients. \par Now we recall the Gelfand-Naimark-Segal construction which asserts for an irreducible unitary representation $\pi$ of a locally compact group $G$ on a Hilbert space $\mathcal{H}$ that one can recover $\pi$ from a single matrix coefficient $g \mapsto \langle \pi(g)v, v\rangle$ for $v\in \mathcal{H}$, $v\neq 0$. Consequently $\sigma$ and $\sigma'$ are equivalent, concluding the proof of the lemma. \end{proof}
\begin{rmk} \label{Remark only real} Let us stress that the only property of discrete series used in the preceding proof of Lemma \ref{Lemma Ldt2} was that infinitesimal characters are real.\end{rmk}
\par By applying Lemma \ref{Lemma Ldt1} and Lemma \ref{Lemma Ldt2} to the disintegration formula \eqref{planch Z_I Re} we obtain the grouping in inequivalent irreducibles, i.e. the Plancherel formula for $L^2(Z_I)_{\rm td}$:
\begin{equation} \label{planch Z_I irred} L^2(Z_I)_{\rm td} \underset{G'\times G'}\simeq \sum_{\sigma\in \widehat {M_I'}_{\rm disc}}\int_{iC_I^*} {\mathcal B}_2(\mathcal{H}_{\sigma,\lambda}) \otimes \mathcal{M}^I_{\sigma,\lambda} \,\, \mu(\sigma,\lambda) d\lambda\,,\end{equation} where $$\mathcal{M}^I_{\sigma,\lambda}:=\mathcal{M}_{\Pi_{\sigma,\lambda}}^I = ({\mathcal B}_2(\mathcal{H}_{\sigma,\lambda})^{-\infty})_{\rm temp}^{H_I}$$ is the multiplicity space. Moreover, for generic $\lambda$ we have also seen that \begin{equation} \label{m-sigma}\mathcal{M}_{\sigma,\lambda}^I \simeq \bigoplus_{w\in \mathsf{W}_I'} \mathbb{C}_{\sigma_w, \lambda_w} \end{equation} as $\mathfrak{a}_I$-module. In particular, we obtain that
\begin{equation} \label{Spec M-sigma} \operatorname{spec}_{\mathfrak{a}_I'} \mathcal{M}_{\sigma,\lambda}^I= \rho'|_{\mathfrak{a}_I'} - \mathsf{W}_I'\cdot \lambda\, . \end{equation}
\subsection{The Maass-Selberg relations} From Theorem \ref{Plancherel induced} we obtain that the multiplicity space $\mathcal{M}^I_{\sigma,\lambda}$ is endowed with the Hilbert space structure induced from the one dimensional space $\mathcal{M}_{\sigma,\lambda}:=\mathcal{M}_{\Pi_{\sigma,\lambda}}= \mathbb{C}\operatorname{tr}_{\pi_{\sigma,\lambda}}$.
Set $\eta_{\sigma,\lambda}:= \operatorname{tr}_{\pi_{\sigma,\lambda}} \in \mathcal{M}_{\sigma,\lambda}$ and recall from \eqref{decomp etaI} the orthogonal decomposition
$$\eta_{\sigma,\lambda}^I= \sum_{\xi \in (\rho-\mathsf{W}_\mathfrak{j} \chi)|_{\mathfrak{a}_I}} \eta_{\sigma,\lambda}^{I, \xi} $$ with $\chi$ the infinitesimal character of $\Pi_{\sigma,\lambda}$.
Upon our identification of $\mathfrak{a}_I$ with $\mathfrak{a}_I'$ we obtain for $\lambda$ generic from \eqref{Spec M-sigma} that $\eta_{\sigma,\lambda}^{I,\xi}\neq 0$ if and only if $\xi \in \rho' - \mathsf{W}_I'\cdot \lambda$, and accordingly $$\eta_{\sigma,\lambda}^I= \sum_{w\in \mathsf{W}_I'} \eta_{\sigma,\lambda}^{I, \rho' - w\lambda} \, .$$
Further, our Maass-Selberg relations in Theorem \ref{eta-I continuous} give
\begin{equation} \label{MS group} 1= \|\eta_{\sigma,\lambda}\|= \|\eta_{\sigma,\lambda}^{I,\xi}\|_{\mathcal{M}_{\sigma,\lambda}^I} \end{equation} for any $\xi$ with $\eta_{\sigma,\lambda}^{I,\xi}\neq 0$.
In order to proceed we need an elementary result on the asymptotics of the matrix coefficient $$ \eta( \Pi_{\sigma,\lambda}(g) (f_1 \otimes \langle \cdot, f_2\rangle)) = \langle \pi_{\sigma,\lambda}(g_1^{-1})f_1, \pi_{\sigma,\lambda}(g_2^{-1})f_2\rangle \qquad ( f_1, f_2 \in \mathcal{H}_{\sigma,\lambda})$$ for $g = a=( \sqrt{a'}, \sqrt{a'}^{-1})\in A_I^{--}$ with $a'\in (A_I')^{--}$. In other words we are interested in the asymptotics of
$$ a' \mapsto \langle \pi_{\sigma, \lambda} ((a')^{-1})f_1, f_2\rangle $$ for $a'=a'_t=\exp(tX')$ with $X'\in (\mathfrak{a}_I')^{--}$ and $t\to\infty$. Then we have the following variant, observed in \cite{KKOS}, of \cite[Lemma 3.12]{L}.
\begin{lemma}\label{Lemma KKOS} Let $\lambda \in i\mathfrak{a}_I^*$ and suppose that $f_1, f_2 \in \mathcal{H}_{\sigma,\lambda}^\infty$ are such that $\operatorname{supp} f_i \subset \Omega \overline{P_I'}$ for some $\Omega\subset N_I'$ compact. Then \begin{equation}\label{Lang Lemma} \lim_{t\to \infty}(a_t')^{\lambda -\rho'} \langle \pi_{\sigma, \lambda} ((a_t')^{-1})f_1, f_2\rangle = \langle f_1({\bf1}), \mathcal{A}_{\sigma,\lambda} (f_2)({\bf1})\rangle_\sigma\, .\end{equation}
\end{lemma}
\begin{proof} We use the non-compact model for $\pi_{\sigma,\lambda}$ and realize $f_1, f_2$ as $\sigma$-valued functions on $N_I'$:
$$ \langle \pi_{\sigma, \lambda} ((a'_t)^{-1})f_1, f_2\rangle =(a_t')^{-\lambda+\rho'} \int_{N_I'} \langle f_1( a_t'n_I' (a_t')^{-1}), f_2(n_I')\rangle_\sigma \ dn_I'\, .$$ Observe that $$a'_t \Omega (a'_t)^{-1} \underset{t\to \infty}{\to} \{{\bf1}\}$$
for all $\Omega\subset N_I'$ compact. By the compactness of supports we are allowed to interchange limit and integral and the asserted formula follows. \
\end{proof}
The Maass-Selberg relations \eqref{MS group} then yield the following key-identity:
\begin{lemma}\label{functionals identical} For generic $\lambda \in i C_I^*$ we have $\xi_{\sigma,\lambda}= \eta_{\sigma,\lambda}^{I,\rho'-\lambda}$ together with $\mathbb{C}_{\sigma,\lambda} \subset \mathcal{M}_{\sigma,\lambda}^I$ as Hilbert spaces. \end{lemma}
\begin{proof} First note that $\xi_{\sigma,\lambda}$ and $\eta_{\sigma,\lambda}^{I,\rho'-\lambda}$ have to be multiples of each other as they have the same $\mathfrak{a}_I$-weight. Let us show that this multiple is indeed $1$ by computing the asymptotics of the matrix coefficient: Recall that for $a=( \sqrt{a'}, \sqrt{a'}^{-1})\in A_I^{--}$ with $a'\in (A_I')^{--}$ we have
$$ \eta( \Pi_{\sigma,\lambda}(a) (f_1 \otimes \langle \cdot, f_2\rangle)) = \langle \pi_{\sigma, \lambda} ((a')^{-1})f_1, f_2\rangle\, . $$ Now for $f_1, f_2$ as in Lemma \ref{Lemma KKOS} we obtained in \eqref{Lang Lemma} $$\langle \pi_{\sigma, \lambda} ((a')^{-1})f_1, f_2\rangle\sim(a')^{\rho'-\lambda} \langle f_1(e), \mathcal{A}_{\sigma,\lambda}(f_2)(e)\rangle_\sigma\, .$$ Comparing with \eqref{def xi sigma lambda} we then indeed get that $\xi_{\sigma,\lambda}= \eta_{\sigma,\lambda}^{I,\rho'-\lambda}$.
\par Finally, as $\|\xi_{\sigma,\lambda}\|=1$ we obtain from the Maass-Selberg relations \eqref{MS group} that $\mathbb{C}_{\sigma,\lambda} \subset \mathcal{M}_{\sigma,\lambda}^I$ as Hilbert spaces. This completes the proof of the lemma. \end{proof}
\subsection{The Plancherel Theorem for $L^2(Z)$} From the fact that source and target of the Bernstein morphism have equivalent Plancherel measures we obtain \begin{equation} \label{supp union} \operatorname{supp} \mu =\bigcup_{I\subset S} \operatorname{supp} \mu^{I,\rm td}\end{equation} with $$\operatorname{supp} \mu^{I, \rm td}= \{ [\Pi_{\sigma, \lambda}]\in \widehat G\mid \lambda\in iC_I^*, \sigma \in \widehat {M_I'}_{\rm disc}\}$$
In the union \eqref{supp union} a certain overcounting takes place, which will be taken care of in the next lemma:
\begin{lemma}\label{lemma disjoint supports} Let $I, J\subset S$. Then the following assertions hold: \begin{enumerate} \item\label{111} If $I$ and $J$ are associated, i.e.~there exists a $w\in \mathsf{W}'$ such that $w(I)=J$, then $\operatorname{supp} \mu^{I,\rm td}=\operatorname{supp} \mu^{J,\rm td}$. \item\label{222} Otherwise $\operatorname{supp} \mu^{I,\rm td}\cap \operatorname{supp} \mu^{J,\rm td}$ has $\mu$-measure zero. \end{enumerate} \end{lemma} \begin{proof} \eqref{111} Basic intertwining theory (assuming no particular knowledge of the discrete spectrum) as used above implies that $$\operatorname{spec} L^2(Z_I)_{\rm td}=\operatorname{spec} L^2(Z_J)_{\rm td}\subset \widehat G$$ if $I$ and $J$ are associated.
\eqref{222} As the infinitesimal characters for the discrete series of $M_I'$ and $M_J'$ are real (see \cite{KKOS}), we obtain that the infinitesimal characters of the induced representations in $L^2(\widehat Z_I, \lambda_I)_{\rm d}$ and $L^2(\widehat Z_J, \lambda_J)_{\rm d}$ for generic $\lambda_I \in i\mathfrak{a}_I^*$, $\lambda_J \in i\mathfrak{a}_J^*$ are different if $I$ and $J$ are not associated, see \eqref{orbit Wj} and the text following it. \end{proof}
We are now ready to phrase the Plancherel theorem of Harish-Chandra in terms of the Bernstein morphism. For this let $$\mathcal{H}_I: = \sum_{\sigma\in \widehat {M_I'}_{\rm disc}}\int_{iC_I^*} {\mathcal B}_2(\mathcal{H}_{\sigma,\lambda}) \otimes \mathbb{C}{\xi_{\sigma,\lambda}} \ \mu(\sigma,\lambda) d\lambda, $$ viewed as a subspace of $L^2(Z_I)_{\rm td}$ as in \eqref{planch Z_I irred}.
Let $B'_I$ be the restriction of $B_I$ to $\mathcal{H}_I$. Select a family $\mathcal{I}$ of representatives of subsets of $S$ modulo association and set $$B':=\bigoplus _{I \in \mathcal{I}}B'_I\, .$$
\begin{theorem} \label{planch HC} The map $$B': \bigoplus_{I\in \mathcal {I}} \mathcal{H}_I \to L^2( Z)$$ is a bijective isometry, hence the inverse of a Plancherel isomorphism. In particular we obtain the explicit Parseval-formula: \begin{equation} \label{Parseval} \Vert f\Vert_{L^2(Z)}^2 = \sum_{I\in \mathcal {I}} \sum_{\sigma\in \widehat {M_I'}_{\rm disc}} \int_{iC_I^*} \Vert \pi_{\sigma, \lambda}(f)\Vert_{\rm HS}^2\ \mu(\sigma,\lambda) d\lambda\end{equation}
for all $f\in C_c^\infty(Z)$. \end{theorem} \begin{proof} By Lemma \ref{lemma disjoint supports} both sides have the same support in $\widehat G$ and moreover have multiplicity one. Next $B_I'$ is isometric by Lemma \ref{functionals identical} and the spectral definition of the Bernstein morphism (compare also to Remark \ref{Remark isometric}). Since for different $I\neq J\in \mathcal {I}$ the spectral supports are disjoint by Lemma \ref{lemma disjoint supports}, the images of the various $B_I'$ are orthogonal. The theorem follows. \end{proof}
To obtain the original Parseval formula of Harish-Chandra in its standard form we unwind \eqref{Parseval} via $i(\mathfrak{a}_I')^*= \mathsf{W}_I' \cdot iC_I^*$ and average over association classes:
\begin{equation} \label{HC Parseval} \Vert f\Vert_{L^2(Z)}^2 = \sum_{I\subset S} { 1 \over | [I]| \cdot | \mathsf{W}'_I|} \sum_{\sigma\in \widehat {M_I'}_{\rm disc}}\int_{i(\mathfrak{a}_I')^*} \Vert \pi_{\sigma, \lambda}(f)\Vert_{\rm HS}^2\ \mu(\sigma,\lambda) d\lambda \end{equation} where $[I]$ is the equivalence class of $I\subset S$ under association.
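To make the averaging in \eqref{HC Parseval} explicit in the simplest case (this is purely illustrative and ignores the normalization of measures): for $G'=\operatorname{SL}(2,\mathbb{R})$ and $I=\emptyset$ one has $M_\emptyset'=M'\simeq\{\pm{\bf 1}\}$, $\mathsf{W}_\emptyset'=\mathsf{W}'\simeq \mathbb{Z}/2\mathbb{Z}$ and $[\emptyset]=\{\emptyset\}$. Since $\sigma_w=\sigma$ and, by Lemma \ref{Lemma Ldt1}, $\pi_{\sigma,\lambda}\simeq \pi_{\sigma,-\lambda}$ for generic $\lambda$ and the non-trivial element $w\in \mathsf{W}'$, every principal series representation is counted twice when $\lambda$ runs over all of $i(\mathfrak{a}')^*$, and this double counting is compensated exactly by the factor ${1\over |[\emptyset]|\cdot |\mathsf{W}_\emptyset'|}={1\over 2}$.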
\begin{rmk} Regarding the knowledge about representations of the discrete series, let us stress that in the above derivation of the Plancherel formula for a real reductive group we only used the results of \cite{KKOS} on the infinitesimal characters of discrete series. These are valid for general real spherical spaces and, when specialized to the group case, are comparably soft and elementary as opposed to the use of Harish-Chandra's difficult classification of the discrete series. \par As a byproduct of his classification of the discrete series Harish-Chandra obtained the following beautiful geometric characterization of the discrete spectrum \begin{equation} \label{HC disc}L^2(G)_{\rm d}\neq \{0\} \iff \hbox{$\mathfrak{g}$ contains a compact Cartan subalgebra}\,.\end{equation} Let us emphasize once more that we obtained "$\Leftarrow$" in this paper in the full generality of real spherical spaces, see Theorem \ref{thm discrete}. \par For a general real spherical space a description of the (twisted) discrete spectrum in terms of parameters is currently out of reach. Therefore, regarding the discrete spectrum of a real spherical space, the emphasis is on obtaining "$\Rightarrow$" of \eqref{HC disc} in general. Now for the group case, there is an economical way to obtain that: one first characterizes the discrete spectrum as cusp forms and then relates cusp forms to orbital integrals, see the account of Wallach \cite[Ch. 7]{Wal1}. This idea, as well as all other known methods for the group case, fails to generalize to a real spherical space.
\par Finally, along with the parameters of the discrete series, Harish-Chandra also determined their formal degrees. In the group case we saw that there is a canonical normalization of the one-dimensional space of $H$-invariant functionals $\mathcal{M}_\pi=\mathbb{C}\operatorname{tr}_\pi$, namely by the trace. Now for a general real spherical space the space of $H$-invariant functionals $\mathcal{M}_{\pi, {\rm td}}$ for a (twisted) discrete series is no longer one-dimensional, nor is it clear whether there is a canonical normalization of the inner product on $\mathcal{M}_{\pi, {\rm td}}$. The only known general result beyond the group case is the case of the holomorphic discrete series on a symmetric space \cite{Kr}. \end{rmk}
\section{The Plancherel formula for symmetric spaces} \label{section DBS}
In this section we apply the Bernstein decomposition to symmetric spaces and derive the Plancherel formula of Delorme \cite{Delorme} and van den Ban-Schlichtkrull \cite{vdBS}. The account is rather parallel to the group case. The only extra tool needed is the description of a generic basis of $H$-invariant distribution vectors for induced representations in terms of open $H$-orbits on real flag varieties $G/P$, see \cite{vdB}, \cite{CD}.
For this section $Z=G/H$ is symmetric and we use the notation and results from Subsection \ref{Subsection m symmetric}.
\subsection{Normalization of discrete series} \label{normal disc}
This small paragraph is valid for a general unimodular real spherical space $Z=G/H$. Let $[\pi]\in \widehat G$ and $(\pi, \mathcal{H})$ be a unitary model of $[\pi]$. We write $\mathcal{M}_{\pi, {\rm d}}\subset (\mathcal{H}^{-\infty})^H$ for the subspace of those $\eta$ for which $m_{v,\eta}\in L^2(Z)$ for all $v\in \mathcal{H}^\infty$. We define an inner product on $\mathcal{M}_{\pi,{\rm d}}$ by requiring that the Schur orthogonality relations hold true:
\begin{equation} \label{SW-ortho}\int_Z m_{v,\eta} (z) \overline {m_{v', \eta'}(z)} \ dz = \langle v, v'\rangle_{\mathcal{H}} \langle \eta, \eta'\rangle_{\mathcal{M}_{\pi, {\rm d}}}\end{equation} Notice that the norm on $\mathcal{M}_{\pi, {\rm d}}$ depends on the unitary norm of $\mathcal{H}$ which is only unique up to positive scalar.
\begin{rmk} Given a pair of normalizations of $\langle\cdot, \cdot\rangle _{\mathcal{H}}$ and $\langle\cdot, \cdot\rangle_{\mathcal{M}_{\pi, {\rm d}}}$ one obtains a notion of formal degree $d(\pi)$ analogous to \eqref{formal deg} by requiring $$ d(\pi)\int_Z m_{v,\eta} (z) \overline {m_{v', \eta'}(z)} \ dz =
\langle v, v'\rangle_{\mathcal{H}} \langle \eta, \eta'\rangle_{\mathcal{M}_{\pi, {\rm d}}}\, . $$ The normalization of $\langle \cdot, \cdot\rangle_{\mathcal{M}_{\pi, {\rm d}}}$ by \eqref{SW-ortho} therefore amounts to setting $d(\pi)=1$. Without a canonical normalization of $\langle \cdot, \cdot\rangle_{\mathcal{M}_{\pi, {\rm d}}}$ this is the best we can offer. \end{rmk}
\subsection{The Plancherel formula for $L^2(Z_I)_{\rm td}$} \label{ZI planch 15} Recall that $H_I= (M_I \cap H)\overline {U_I}$ is contained in $\overline {P_I}=M_I \overline {U_I}$ with $\overline {P_I}/ H_I\simeq M_I/M_I\cap H$. Hence $L^2(Z_I)$ is parabolically induced from $L^2(M_I/ M_I\cap H)$, and we obtain \begin{equation} \label{Z_I symm} L^2(Z_I)_{\rm td} \simeq \sum_{\sigma \in \widehat M_I} \int_{i \mathfrak{a}_I^*}^\oplus \mathcal{H}_{\sigma,\lambda}\otimes \mathcal{M}_{\sigma, {\rm d}} \, d\lambda\,.\end{equation} Here $\mathcal{H}_{\sigma, \lambda}= \operatorname{Ind}_{\overline P_I}^G (\sigma\otimes \lambda)$ and $\mathcal{M}_{\sigma, {\rm d}}$ is the space of $M_I\cap H$-invariant functionals on $\mathcal{H}_\sigma^\infty$, which are square integrable for the symmetric space $M_I/ M_I\cap H$, as defined in Subsection \ref{normal disc} for $G/H$ and $\pi=\sigma$. \par The space $\mathcal{H}_{\sigma,\lambda}\otimes \mathcal{M}_{\sigma, {\rm d}}$ embeds into $L^2(\widehat Z_I,{ - \lambda})_{\rm d}$ isometrically (by our normalization of the discrete series) by $$ \Phi_{\sigma, \lambda}: \mathcal{H}_{\sigma,\lambda}\otimes \mathcal{M}_{\sigma, {\rm d}} \to L^2(\widehat Z_I,{ -\lambda})_{\rm d}$$ defined by linear extension and completion of $$\Phi_{\sigma,\lambda}(f \otimes \zeta)(gH_I) := \zeta (f(g))\qquad (f\in \mathcal{H}_{\sigma,\lambda}^\infty, \zeta\in\mathcal{M}_{\sigma,\rm d}, g\in G)\, .$$ For $\zeta \in \mathcal{M}_{\sigma, {\rm d}}$ let us also define an $H_I$-invariant functional $\xi_{\sigma,\lambda, \zeta}$ on $\mathcal{H}_{\sigma, \lambda}^\infty$ via \begin{equation} \label{def xi sigma eta} \xi_{\sigma,\lambda, \zeta}(f):= \zeta(f(e))\qquad (f\in \mathcal{H}_{\sigma,\lambda}^\infty)\end{equation} and record $$ \Phi_{\sigma, \lambda}(f\otimes\zeta)(gH_I) = \xi_{\sigma,\lambda, \zeta}(\pi_{\sigma, \lambda}(g^{-1}) f) \qquad (g\in G)\, .$$
\par Recall the little Weyl group $\mathsf{W}=\mathsf{W}_Z$ of the restricted roots system $\Sigma(\mathfrak{g}, \mathfrak{a}_Z)$ from Subsection \ref{Subsubsection adapted}.
The decomposition \eqref{Z_I symm} is not yet the Plancherel formula for $L^2(Z_I)_{\rm td}$, since it is not a grouping into irreducibles as different $\pi_{\sigma,\lambda}$ may yield equivalent representations. Similar to the group case this possibility is governed by the subquotient
$$\mathsf{W}_I=\{ w|_{\mathfrak{a}_I}\mid w\in \mathsf{W}, \ w(\mathfrak{a}_I)=\mathfrak{a}_I\}$$ of $\mathsf{W}=\mathsf{W}_Z$ and a cone $C_I^*\subset \mathfrak{a}_I^*$ as fundamental domain for the dual action of $\mathsf{W}_I$ (see Remark \ref{str WI}.) As in the group case we identify elements $w\in \mathsf{W}_I$ with lifts to $K$ which normalize $M_I$.
\par As in Lemma \ref{Lemma Ldt1} and \eqref{R inter} we obtain for every $w\in \mathsf{W}_I$, $\sigma\in\widehat M_I$ and generic $\lambda\in i\mathfrak{a}_I^*$ a $G$-intertwiner $$ \mathcal{R}_w: \mathcal{H}_{\sigma, \lambda} \to \mathcal{H}_{\sigma_w, \lambda_w}\, $$ with $\sigma_w$ and $\lambda_w$ defined as before. Next in full analogy to Lemma \ref{Lemma Ldt2} we obtain:
\begin{lemma} For $\lambda, \lambda'\in iC_I^*$ generic and $\sigma, \sigma'$ in the discrete series of $L^2(M_I/M_I\cap H)$ (i.e. both $\mathcal{M}_{\sigma, \rm d}$ and $\mathcal{M}_{\sigma', \rm d}$ are non-zero) one has $$ \pi_{\sigma, \lambda}\simeq \pi_{\sigma', \lambda'}\quad \iff \quad \lambda = \lambda' \ \text{and} \ \sigma\simeq \sigma'\, .$$ \end{lemma} \begin{proof} We recall that the proof of Lemma \ref{Lemma Ldt2} only requires that the infinitesimal characters of the inducing data $\sigma$ and $\sigma'$ are real, see Remark \ref{Remark only real}. By \cite{KKOS} this is the case in the current situation as well. \par Next we recall that the root system $\Sigma_Z= \Sigma(\mathfrak{g},\mathfrak{a}_Z)\subset \mathfrak{a}_Z^*$ is obtained from the root system $\Sigma(\mathfrak{g}_\mathbb{C},\mathfrak{j}_\mathbb{C})\subset \mathfrak{j}_\mathbb{R}^*$ as the non-vanishing restrictions. In particular, the faces $\mathfrak{a}_I^-$ of $\mathfrak{a}_Z^-$ are contained in the faces $\mathfrak{j}_\mathbb{R}^-$ with respect to our aligned positive systems. This allows us now to argue as in \eqref{orbit Wj} and conclude that $\lambda=\lambda'$. \par The rest of the argument is then fully analogous. \end{proof}
By grouping equivalent representations in \eqref{Z_I symm} we then obtain the Plancherel formula \begin{equation} \label{Z_I symm Re} L^2(Z_I)_{\rm td} \simeq \sum_{\sigma \in \widehat M_I} \int_{i C_I^*}^\oplus \mathcal{H}_{\sigma,\lambda}\otimes \mathcal{M}_{\sigma, \lambda}^I \ d\lambda\end{equation} with generic multiplicity space $\mathcal{M}_{\sigma,\lambda}^I=\mathcal{M}_{\sigma,\lambda, {\rm td}}^I$ of dimension
\begin{equation} \label{dimc1}\dim \mathcal{M}_{\sigma,\lambda}^I= |\mathsf{W}_I| \cdot \dim \mathcal{M}_{\sigma, \rm d}\,.\end{equation} For $w\in \mathsf{W}_I$ let us denote by $\mathcal{M}_{\sigma_w,\rm d}$ the space $\mathcal{M}_{\sigma, \rm d}$ with $M_I \cap H$ replaced by $w(M_I\cap H)w^{-1}= M_I \cap H_w$ and $\sigma$ replaced by $\sigma_w$.
Since $L^2(M_I/ M_I\cap H)_{\rm d}\simeq L^2(M_I/M_I \cap H_w)_{\rm d}$ we infer that $\mathcal{M}_{\sigma, \rm d}$ and $\mathcal{M}_{\sigma_w, \rm d}$ are canonically isomorphic. Now for each $w\in \mathsf{W}_I$ and $\zeta \in\mathcal{M}_{\sigma_w,\rm d}$ we can define an $H_I$-invariant functional of $\mathfrak{a}_I$-weight $\rho-\lambda_w$ via
$$ \xi_{\sigma_w,\lambda_w, \zeta}: \mathcal{H}_{\sigma,\lambda}^\infty \to \mathbb{C}, \ \ f\mapsto \zeta( (\mathcal{R}_w f)({\bf1}))\, .$$ This functional yields an embedding of $\mathcal{H}_{\sigma,\lambda}^\infty$ into $L^2(Z_I)_{\rm td}$, i.e. $ \xi_{\sigma_w,\lambda_w, \zeta}\in \mathcal{M}_{\sigma,\lambda, \rm td}^{I,\rho-\lambda_w}$. Moreover, by varying $\zeta$ we obtain for each $w\in \mathsf{W}_I$ a linear injection
\begin{equation} \label{inclusion 15} \mathcal{M}_{\sigma_w,\rm d}\to \mathcal{M}_{\sigma,\lambda, \rm td}^{I,\rho-\lambda_w}, \ \ \zeta\mapsto \xi_{\sigma_w,\lambda_w, \zeta}\, .\end{equation} We now count dimensions. With $\mathcal{M}_{\sigma,\lambda}^I= \bigoplus_{\mu \in \rho + i\mathfrak{a}_I^*}\mathcal{M}_{\sigma,\lambda, \rm td}^{I, \mu}$ and \eqref{dimc1} we obtain for generic $\lambda$ that the inclusion \eqref{inclusion 15} is an isomorphism, that is
\begin{equation} \label{formula 1}\mathcal{M}_{\sigma,\lambda,{\rm td}}^{I,\rho-\lambda_w}= \{ \xi_{\sigma_w,\lambda_w, \zeta} \mid \zeta\in \mathcal{M}_{\sigma_w, \rm d}\}\,, \quad (w\in \mathsf{W}_I). \end{equation}
\subsection{Support of the Plancherel measure}\label{support 15} Previously we defined for $\sigma\in \widehat M_I$ and $w\in \mathsf{W}_I$ the multiplicity space $\mathcal{M}_{\sigma_w, \rm d}$. We now also need a notion for every $w\in \mathcal{W}$. For $w\in \mathcal{W}$ we write $\mathcal{M}_{\sigma, w,{\rm d}}$ for the space of $M_I\cap H_w$-invariant functionals on $\mathcal{H}_\sigma^\infty$, which are square integrable for the symmetric space $M_I/ M_I \cap H_w \simeq w^{-1} M_I w/ w^{-1}M_I w \cap H$.
Then, by the isospectrality of the Bernstein morphism we obtain that
\begin{equation} \label{support symmetric} \operatorname{supp} \mu =\left\{[\pi_{\sigma, \lambda}]\in \widehat G\, \Bigg| \begin{aligned} & I\subset S, \ \lambda\in iC_I^*, \\ & \sigma\in \widehat M_I \ \text{s.t.} \ \exists \ w\in \mathcal{W}:\ \mathcal{M}_{\sigma, w, {\rm d}}\neq \{0\}\end{aligned}\right\}\, .\end{equation}
Let us introduce the notion that $\sigma\in \widehat M_I$ is {\it cuspidal} provided $\mathcal{M}_{\sigma,w, {\rm d}}\neq \{0\}$ for some $w\in \mathcal{W}$.
\subsection{Generic dimension of multiplicity spaces} To abbreviate matters let us set $\mathcal{M}_{\sigma, \lambda}= \mathcal{M}_{\pi_{\sigma, \lambda}}$ for $[\pi_{\sigma, \lambda}]\in \operatorname{supp} \mu$. The next goal is to obtain a precise description of $\mathcal{M}_{\sigma, \lambda}$ for generic $\lambda$. This is related to the geometry of open $H\times \overline{P_I}$-double cosets in $G$ which we recall from Section \ref{open double}. From Lemma \ref{Lemma Mats1} there is an action of $\mathsf{W}(I)$ on $\mathcal{W}$ with identifications $(P_I\backslash Z)_{\rm open}\simeq\mathsf{W}(I)\backslash \mathcal{W}$ and $(P\backslash Z_I)_{\rm open}\simeq \mathsf{W}(I)/\mathsf{W}(I)\cap \mathsf{W}_H$.
For what is to come we need to interpret the quotient $\mathsf{W}(I)\backslash \mathcal{W}$ in terms of the geometric decomposition $\mathcal{W}=\coprod_{\mathsf{c}\in \sC_I} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c},\mathsf{t}}(\mathcal{W}_{I,\mathsf{c}})$ from \eqref{full deco W}.
\begin{lemma} \label{lemma fine W} With regard to $\mathcal{W}=\coprod_{\mathsf{c}\in \sC_I} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c},\mathsf{t}} (\mathcal{W}_{I,\mathsf{c}})$ the group $\mathsf{W}(I)$ acts transitively on each subset ${\bf m}_{\mathsf{c},\mathsf{t}} (\mathcal{W}_{I,\mathsf{c}})\subset \mathcal{W}$ and induces a natural bijection \begin{equation} \mathsf{W}(I)\backslash \mathcal{W} \simeq \coprod_{\mathsf{c}\in\sC_I} \sF_{I,\mathsf{c}}\, .\end{equation}
\end{lemma}
\begin{proof} Let us fix $\mathsf{c},\mathsf{t}$, and to save notation, assume first $\mathsf{c}=\mathsf{t}={\bf1}$. Then $Z_{I,\mathsf{c},\mathsf{t}}=Z_I$ and $\mathcal{W}_I=\mathcal{W}_{I,{\bf1}}$. Lemma \ref{Lemma Mats1}\eqref{twomats} implies that $\mathsf{W}(I)$ acts transitively on $\mathcal{W}_I \simeq W_I =(P\backslash Z_I)_{\rm open}$. We claim that this holds for every $Z_{I,\mathsf{c}}\simeq Z_{I,\mathsf{c},\mathsf{t}}$, i.e. $\sW(I)$ acts transitively on $(P\backslash Z_{I,\mathsf{c},\mathsf{t}})_{\rm open }$. To see that we recall the identifications $\mathsf{W} \simeq W \simeq F_M\backslash F_\mathbb{R}$ with $F_\mathbb{R}$ the $2$-torsion subgroup of $\algebraicgroup{ A}_Z(\mathbb{R})$. Further we need the splitting $F_\mathbb{R} = F_{I,\mathbb{R}}\times F_{I,\mathbb{R}}^\perp$ derived from \eqref{AI torus deco}. Now the $\mathsf{W}(I)$-orbits on $\mathcal{W}\simeq F_M \backslash F_\mathbb{R}$ correspond exactly to the $F_{I,\mathbb{R}}^\perp$-orbits on $F_M\backslash F_\mathbb{R}$. Now the claim follows from the definition of $Z_{I,\mathsf{c},\mathsf{t}}$ and the fact that $(P\backslash Z_{I,\mathsf{c}, \mathsf{t}})_{\rm open}\simeq \mathcal{W}_{I,\mathsf{c}}$ is mapped under ${\bf m}_{\mathsf{c},\mathsf{t}}$ in a $\mathsf{W}(I)$-equivariant way into $\mathcal{W}$, see Lemma \ref{lemma 58} applied to $Z_{\mathsf{c},\mathsf{t}}= G/H_{w(\mathsf{c},\mathsf{t})}\simeq Z=G/H$.
\par The reasoning above implies further that the $\mathsf{W}(I)$-action on $\mathcal{W}$ preserves each piece of the decomposition $\mathcal{W}= \coprod_{\mathsf{c}\in \sC_I} \coprod_{\mathsf{t} \in \sF_{I,\mathsf{c}}} {\bf m}_{\mathsf{c},\mathsf{t}} (\mathcal{W}_{I,\mathsf{c}})$ and acts trivially on the index sets $\sF_{I,\mathsf{c}}$. The lemma follows. \end{proof}
\subsubsection{The description of $\mathcal{M}_{\sigma,\lambda}$} We wish to describe $H$-invariant functionals on the induced representation $\mathcal{H}_{\sigma,\lambda} = \operatorname{Ind}_{\overline{P_I}}^G (\sigma\otimes \lambda)$ in terms of the open $H\times \overline{P_I}$-double cosets in $G$. Recall from \eqref{Matsuki Pbar} the bijection $$ \mathsf{W}(I)\backslash \mathcal{W} \to (H\backslash G/ \overline{P_I})_{\rm open}, \ \ \mathsf{W}(I) w \mapsto Hw^{-\theta} \overline{P_I}\, .$$ Now we define for each $[w]= \mathsf{W}(I)w\in \mathsf{W}(I)\backslash \mathcal{W}$ a subspace
$$\mathcal{H}_{\sigma,\lambda}^\infty[w]=\{ f\in \mathcal{H}_{\sigma,\lambda}^\infty\mid \operatorname{supp} f \subset H w^{-\theta} \overline{P_I}\}$$ and for each $\eta \in (\mathcal{H}_{\sigma,\lambda}^{-\infty})^H$ we define the restrictions
$$\eta[w]:= \eta|_{\mathcal{H}_{\sigma,\lambda}^\infty[w]}\, .$$ These functionals now have a straightforward description. Notice that $\eta[w]$ only depends on the double coset $H w^{-\theta} \overline{P_I}$. This allows us to replace $\mathsf{W}(I)\backslash \mathcal{W}$ by $\mathsf{W}(I)\backslash \mathsf{W}$; moreover, since elements $w\in \mathsf{W}$ have representatives in $K$, we have $w^{-\theta} = w^{-1}$ for $w\in \mathsf{W}$.
Let now $w\in \mathsf{W}$. Notice that the $H$-stabilizer of the point $w^{-1}\overline{P_I}\in G/\overline{P_I}$ is given by the (symmetric) subgroup $H\cap w^{-1} M_I w$ of $w^{-1} M_I w$. Allowing a slight conflict with previous notation we let $\sigma_w$ be the representation of $w^{-1} M_I w$ obtained from $\sigma$ via the group isomorphism $M_I \simeq w^{-1} M_I w$.
Frobenius reciprocity then associates to each $\eta$ and $[w]$ a unique distribution vector $$\zeta_\eta[w]\in (\mathcal{H}_{\sigma_w}^{-\infty})^{H\cap w^{-1} M_I w}$$ such that $$\eta[w](f) = \int_{H/ H\cap w^{-1} M_I w} \zeta_\eta[w] (f(hw^{-1})) \ dh( H\cap w^{-1} M_I w)\qquad (f\in \mathcal{H}_{\sigma,\lambda}^\infty[w])\, .$$
For each $[w]\in \mathsf{W}(I)\backslash \mathsf{W}$ we pick a representative $w\in \mathsf{W}$. As $\mathsf{W}(I)$ normalizes $M_I$ via inner automorphisms, it follows that $\sigma_w$ depends only on $[w]$, up to equivalence. Set
\begin{equation}\label{def V sigma} V(\sigma):=\bigoplus_{[w]\in \mathsf{W}(I)\backslash \mathsf{W}} (\mathcal{H}_{\sigma_w}^{-\infty})^{H\cap w^{-1}M_Iw}\end{equation} and consider then the evaluation map \begin{equation}\label{j-map} {\rm ev}_{\sigma,\lambda}: (\mathcal{H}_{\sigma,\lambda}^{-\infty})^H \to V(\sigma) , \ \ \eta\mapsto (\zeta_\eta[w])_{w\in \mathsf{W}(I)\backslash \mathsf{W}}\end{equation} This map is a bijection for generic $\lambda$ by \cite[Thm.~5.10]{vdB} for the case of $P_I=Q$ and \cite[Thm.~3]{CD} in general. Sometimes it is useful to indicate the choice of the parabolic $\overline P_I$ above $M_I A_I$ which was used in the definition of the induced representation $\mathcal{H}_{\sigma,\lambda}= \operatorname{Ind}_{\overline P_I}^G (\sigma\otimes\lambda)$. Then we write ${\rm ev}_{\overline{P_I},\sigma, \lambda}$ instead of ${\rm ev}_{\sigma, \lambda}$. Further, for $\lambda$ generic we recall the standard notation of \cite{vdB} and \cite{CD}: \begin{equation} \label{def j-map}j(\overline{P_I}, \sigma, \lambda, \zeta):={\rm ev}_{\overline{P_I}, \sigma, \lambda}^{-1}(\zeta)\in (\mathcal{H}_{\sigma, \lambda}^{-\infty})^H\qquad (\zeta\in V(\sigma))\, .\end{equation}
Next we define a subspace of $V(\sigma)$ by $$ V(\sigma)_2:=\bigoplus_{[w]\in \mathsf{W}(I)\backslash \mathsf{W}} \mathcal{M}_{\sigma_w, \rm d} $$ with $\mathcal{M}_{\sigma_w, \rm d}\subset (\mathcal{H}_{\sigma_w}^{-\infty})^{H\cap w^{-1}M_Iw}$ referring to $\mathcal{M}_{\sigma, \rm d}\subset (\mathcal{H}_{\sigma}^{-\infty})^{H\cap M_I}$ for $M_I$ replaced by $w^{-1}M_Iw$.
In the sequel we assume that $\lambda \in i\mathfrak{a}_I^*$ is generic, i.e. $j(\overline{P_I}, \sigma, \lambda, \zeta)$ is defined (the obstruction is a countable set of hyperplanes) and the representation $\pi_{\sigma,\lambda}$ is a generic member of $\operatorname{supp}^{I,\rm td}\mu \subset \operatorname{supp} \mu$, see Subsection \ref{SRT}. Recall that we require $\sigma$ to be cuspidal, as defined in Section \ref{support 15}.
The main result of this subsection then is:
\begin{theorem}\label{CD tempered} Let $\sigma$ be cuspidal. Then for Lebesgue-almost all $\lambda\in i\mathfrak{a}_I^*$ the image of $\mathcal{M}_{\sigma,\lambda}$ by ${\rm ev}_{\sigma,\lambda}$ is $V(\sigma)_2$, i.e. $${\rm ev}_{\sigma,\lambda}: \mathcal{M}_{\sigma,\lambda}\to V(\sigma)_2$$ is a bijection. In particular we have \begin{equation} \label{vdB-CD} \dim \mathcal{M}_{\sigma, \lambda} = \sum_{ [w]\in \mathsf{W}(I)\backslash \mathsf{W}} \dim \mathcal{M}_{\sigma_w, {\rm d}}\, .\end{equation} \end{theorem}
The proof of this theorem will be prepared by several lemmas. The first lemma is valid for a general unimodular real spherical space $Z=G/H$ with Plancherel measure $\mu$. In the sequel we consider $L^2(Z)$ as a unitary module for $G\times A_{Z,E}$ and recall that the twisted discrete spectrum $L^2(Z)_{\rm td} \subset L^2(Z)$ is a $G\times A_{Z,E}$-invariant subspace. Define the {\it essentially continuous spectrum} by $L^2(Z)_{\rm ec} := L^2(Z)_{\rm td}^\perp$. We write $\mu^{\rm td}$ and $\mu^{\rm ec}$ for the Plancherel measures of $L^2(Z)_{\rm td}$ and $L^2(Z)_{\rm ec}$.
\begin{lemma}\label{lemma 151} Let $Z=G/H$ be a unimodular real spherical space with Plancherel measure $\mu$. Then $$ \mu^{\rm ec}(\operatorname{supp} \mu^{\rm td})=0, $$ i.e. the Plancherel support of $L^2(Z)_{\rm td}$ is negligible with respect to $\mu^{\rm ec}$. \end{lemma} \begin{proof} The proof goes by comparing the infinitesimal characters of the representations occurring in $\mu^{\rm td}$ and $\mu^{\rm ec}$. For that we recall that the map $$ \Phi: \widehat G \to \mathfrak{j}_\mathbb{C}^*/\mathsf{W}_\mathfrak{j}, \ \ \pi\mapsto \chi_\pi$$ is continuous. Next, the Bernstein decomposition of $L^2(Z)$ implies that $\mu^{\rm ec}$ is equivalent to $\sum_{w\in \mathcal{W}}\sum_{I\subsetneq S} \mu^{I,w, \rm td}$ with $\mu^{I,w,\rm td}$ the Plancherel measure of $L^2(Z_{I,w})_{\rm td}$. In this regard we note moreover that $\mu^{I,w, \rm td}$ is built up from the Lebesgue measure on $i\mathfrak{a}_I^*$ and a counting measure over each fiber $\lambda\in i\mathfrak{a}_I^*$. Now the main result of \cite{KKOS} asserts that for each pair $I,w$ there is a $\mathsf{W}_\mathfrak{j}$-invariant lattice $\Lambda=\Lambda(I,w)\subset \mathfrak{j}_\mathbb{R}^*$ such that $$ \Phi(\operatorname{supp} \mu^{I,w,{\rm td}}) \subset \left[\bigcup_{s\in \mathsf{W}_\mathfrak{j}} (\Lambda + i\operatorname{Ad}(s)\mathfrak{a}_I^*)\right]/\mathsf{W}_\mathfrak{j}\, .$$ Now the continuity of $\Phi$ and the aforementioned structure of the various $\mu^{I,w,\rm td}$ with regard to Lebesgue measures imply the lemma. \end{proof}
A further important ingredient in the proof of Theorem \ref{CD tempered} is the long intertwiner, which we also used in the treatment of the group case \eqref{def A}: $$\mathcal{A}_{\sigma,\lambda}: \operatorname{Ind}_{P_I}^G( \sigma\otimes \lambda)\to \operatorname{Ind}_{\overline{P_I}}^{G}(\sigma\otimes \lambda)$$ $$ \mathcal{A}_{\sigma,\lambda}(u)(g) = \int_{\overline {N_I}} u(g \overline{n_I}) \ d\overline{n_I} \qquad (g\in G)$$ which is defined near $g={\bf1}$ for all $u$ with $\operatorname{supp} u \subset\Omega P_I$ for $\Omega\subset\overline{ N_I}$ compact, and for general $u$ and $g$ by meromorphic continuation with respect to $\lambda$. \par We wish to compute the asymptotics of $m_{v,\eta}$ for $\eta\in \mathcal{M}_{\pi, \sigma}$ for certain test vectors $v\in \mathcal{H}_{\sigma,\lambda}^\infty$. More precisely, let $u \in\operatorname{Ind}_{P_I}^G( \sigma\otimes \lambda)^\infty$ with $\operatorname{supp} u \subset \Omega P_I$ with $\Omega$ as above. Our test vectors $v$ are then given by $v= \mathcal{A}_{\sigma,\lambda}(u)$. Now with \begin{equation}\label{display 152} \tilde \eta= \eta\circ \mathcal{A}_{\sigma,\lambda}\, \end{equation} we obtain the tautological identity
$$ m_{v, \eta}(g) = m_{u, \tilde \eta}(g)\, .$$ The advantage of using the opposite representative $\operatorname{Ind}_{P_I}^G(\sigma\otimes\lambda)$ for $[\pi_{\sigma,\lambda}]$ is that it allows us to compute the asymptotics of $m_{u, \tilde \eta}(a)$ for $a=a_t=\exp(tX)\in A_I^{--}$ on rays to infinity. More precisely, we have the following symmetric space analogue of Lemma \ref {Lemma KKOS}. Let \begin{equation}\label{defi zeta} \tilde \zeta=\operatorname{ev} _{P_I,\sigma,\lambda}(\tilde \eta) \in V(\sigma). \end{equation}
\begin{lemma} \label{lemma 15a}With the notation introduced above we have for all $m_I\in M_I$:
\begin{equation} \label{D-limit} \lim_{t\to\infty} a_t^{\lambda - \rho} m_{u,\tilde \eta}(m_I a_t)= \tilde\zeta_{[{\bf1}]}(\sigma(m_I^{-1}) (\mathcal{A}_{\sigma,\lambda}(u)({\bf1}))) = \tilde \zeta_{[{\bf1}]}(\sigma(m_I^{-1}) (v({\bf1}))). \end{equation} for the $[{\bf1}]$-component $\tilde \zeta_{[{\bf1}]}\in (\mathcal{H}_\sigma^{-\infty})^{M_I\cap H}$ of $\tilde\zeta$. \end{lemma}
\begin{proof} It is sufficient to prove the assertion for $m_I={\bf1}$. Next, since $a_t^{-1} \Omega a_t \to \{{\bf1}\}$ we may assume in addition that $\operatorname{supp} u \subset \Omega P_I \cap H P_I$. Hence $$m_{u,\tilde \eta}(a_t) =m_{u,\tilde \eta[{\bf1}]}(a_t)= ( \pi_{\sigma,\lambda}(a_t)(\tilde \eta[{\bf1}]))(u)$$ by the support condition of $u$. It is then easy to verify (see the proof of \cite[Lemme 16]{Delorme2}) that $$\lim_{t\to \infty} a_t^{\lambda - \rho} \left(\pi_{\sigma,\lambda}(a_t)(\tilde \eta[{\bf1}])\right)=\tilde \zeta_{[{\bf1}]} \cdot d\overline n_I$$ as a distribution. The lemma follows. \end{proof}
Note that for generic $\lambda$ the intertwiner $\mathcal{A}_{\sigma,\lambda}$ induces a natural linear isomorphism $$ b_{\sigma, \lambda}: V(\sigma) \to V(\sigma),$$ defined by $$ b_{\sigma,\lambda}(\xi)= \operatorname{ev}_{P_I,\sigma, \lambda} \left( j(\overline {P_I}, \sigma,\lambda,\xi)\circ \mathcal{A}_{\sigma,\lambda}\right)\,.$$
In this regard we recall from \cite[Th. 2]{Delorme2}: \begin{lemma}\label{petit B} For generic $\lambda$ one has \begin{equation} \label{claim15} b_{\sigma,\lambda}(V(\sigma)_2) = V(\sigma)_2\, .\end{equation} \end{lemma}
\begin{proof}[Proof of Theorem \ref{CD tempered}] Let $\eta\in (\mathcal{H}_{\sigma,\lambda}^{-\infty})^H$ and $\zeta=\zeta_\eta={\rm ev}_{\sigma,\lambda}(\eta)\in V(\sigma)$. The task is to show that $\zeta\in V(\sigma)_2$ if and only if $\eta\in \mathcal{M}_{\sigma,\lambda}$. Recall that $\zeta= (\zeta_{[w]})_{[w]\in \mathsf{W}(I)\backslash \mathsf{W}}$ is a tuple in accordance with the definition of $V(\sigma)$ in \eqref{def V sigma}.
\par Assume first that $\eta\in \mathcal{M}_{\sigma,\lambda}$. The proof goes by comparing two different expressions for the constant term $m_{v,\eta^I}$ for certain test vectors $v\in \mathcal{H}_{\sigma,\lambda}^\infty $. According to Lemma \ref{lemma 151} applied to $Z=Z_I$ we may assume that $\mathcal{M}_{\sigma,\lambda}^{I} = \mathcal{M}_{\sigma,\lambda, {\rm td}}^{I}$.
Hence $\eta^{I,\rho-\lambda}\in \mathcal{M}_{\sigma,\lambda}^{I,\rho-\lambda} = \mathcal{M}_{\sigma,\lambda, {\rm td}}^{I,\rho-\lambda}$. By \eqref{formula 1} we then have $\eta^{I,\rho-\lambda}=\xi_{\sigma,\lambda,\zeta'}$ for some $\zeta'\in \mathcal{M}_{\sigma, {\rm d}}$, that is, \begin{equation} \label{EQ1} \eta^{I,\rho-\lambda}(\pi(m_I)v)= \zeta'(\sigma(m_I^{-1}) (v({\bf1}))) \qquad (v\in \mathcal{H}_{\sigma,\lambda}^\infty, m_I \in M_I)\, .\end{equation}
\par On the other hand we can compute the asymptotics via Lemma \ref{lemma 15a}. Comparing \eqref{D-limit} with \eqref{EQ1} and using Theorem \ref{loc ct temp} yields $$ \zeta'(\sigma(m_I^{-1})(v({\bf1})))= \tilde \zeta_{[{\bf1}]} (\sigma(m_I^{-1})(v({\bf1}))) \qquad (m_I\in M_I)$$ for our test vectors $v=\mathcal{A}(u)$. Thus we have \begin{equation} \label{identity 15} m_{\zeta', v({\bf1})}= m_{\tilde \zeta_{[{\bf1}]}, v({\bf1})}\end{equation} as functions on $M_I/ M_I\cap H$. We claim that $\tilde \zeta_{[{\bf1}]}\in \mathcal{M}_{\sigma,\rm d}$ and $\tilde \zeta_{[{\bf1}]}=\zeta'$. To see that we first observe that there exists at least one $v$ with $v({\bf1})\neq 0$. This is because $v({\bf1})\neq 0$ translates into $\int_{\overline{N_I}} u( {\overline n_I}) \ d{\overline n_I}\neq 0$ which can obviously be achieved for one of our test vectors $u$. Now recall that $\zeta'\in \mathcal{M}_{\sigma,\rm d}$. Hence \eqref{identity 15} implies that $\tilde \zeta_{[{\bf1}]}\in \mathcal{M}_{\sigma, {\rm d}}$, since for $\tilde \zeta_{[{\bf1}]}$ to yield an embedding into $L^2(M_I/M_I\cap H)$ only one non-zero matrix coefficient $m_{\tilde\zeta_{[{\bf1}]}, v({\bf1})}$ has to be square integrable. With that $\zeta'=\tilde \zeta_{[{\bf1}]}$ follows from the orthogonality relations \eqref{SW-ortho} and \eqref{identity 15}: For $\zeta_0=\zeta'-\tilde\zeta_{[{\bf1}]}$ we have
$$0=\|m_{\zeta_0, v({\bf1})}\|_{L^2(M_I/ M_I\cap H_I)}^2=\|v({\bf1})\|_{\mathcal{H}_\sigma}^2 \|\zeta_0\|_{\mathcal{M}_{\sigma, \rm d}}^2\, .$$
\par Next we let $w\in \mathsf{W}\simeq \mathcal{W}$ vary. Analogous reasoning via transport of structure $Z\to Z_w$ yields that $\tilde \zeta_{[w]}\in \mathcal{M}_{\sigma_w,\rm d}$. Thus we arrive at \begin{equation} \label{display 151}\tilde\zeta:= (\tilde \zeta_{[w]})_{[w]\in \mathsf{W}(I)\backslash \mathsf{W}} \in V(\sigma)_2\,.\end{equation}
\par Now observe that $b_{\sigma,\lambda}(\zeta)=\tilde\zeta\in V(\sigma)_2$ in view of \eqref{display 152}, \eqref{defi zeta}, and \eqref{display 151}, and from Lemma \ref{petit B} it then follows that $\zeta\in V(\sigma)_2$, i.e. we have shown the implication $\operatorname{ev}_{\overline P_I, \sigma, \lambda}(\mathcal{M}_{\sigma,\lambda})\subset V(\sigma)_2$ of the theorem.
\par To complete the proof of the theorem it remains to establish the converse inclusion $\operatorname{ev}_{\overline P_I, \sigma, \lambda}(\mathcal{M}_{\sigma,\lambda})\supset V(\sigma)_2$. For that let $\zeta=(\zeta_{[w]})_{[w]}\in V(\sigma)_2$. Forming wave packets via $\eta=j(\overline P_I, \sigma, \lambda, \zeta)$ for varying $\lambda$, we finally deduce from \cite[Thm.~4]{Delorme2} that $j(\overline P_I, \sigma, \lambda, \zeta)$ contributes to the $L^2$-spectrum of $Z$. Hence $\eta=j(\overline P_I, \sigma, \lambda, \zeta)\in \mathcal{M}_{\sigma,\lambda}$ for Lebesgue-almost all $\lambda$, completing the proof of the theorem. \end{proof}
In the course of the proof of Theorem \ref{CD tempered} we have shown the following identity:
\begin{lemma} \label{lemma scatter} Let $\lambda$ be generic and $\eta\in \mathcal{M}_{\sigma,\lambda}$ such that $\eta=j(\overline{P_I}, \sigma, \lambda, \zeta)$ for some $\zeta\in V(\sigma)_2$. Then $\tilde \eta = \eta\circ \mathcal{A}_{\sigma, \lambda}$ is of the form $\tilde \eta=j(P_I, \sigma, \lambda, \tilde \zeta)$ for a unique $\tilde \zeta\in V(\sigma)_2$ and \begin{equation} \label{scatter id} \eta^{I,\rho-\lambda}= \xi_{\sigma, \lambda, \tilde \zeta_{[{\bf1}]}}\end{equation} with $\tilde \zeta_{[{\bf1}]}= \tilde \eta[{\bf1}][{\bf1}]\in \mathcal{M}_{\sigma, \rm d}$ and $\xi_{\sigma, \lambda, \tilde \zeta_{[{\bf1}]}}$ defined as in \eqref{def xi sigma eta}. \end{lemma}
Let us now transport the structure from $Z=G/H$ to $Z_w=G/H_w$ for $w\in \mathcal{W}$ and write $j_w$ and $V(\sigma)_w$ for the $j$-map \eqref{def j-map} and multiplicity space for $Z_w$. Note that $V(\sigma)_w\simeq V(\sigma)$ by permutation of coordinates.
Then $\eta= j(\overline P_I, \sigma, \lambda,\zeta)$ for $\zeta=(\zeta_{[u]})_{[u]\in \mathsf{W}(I)\backslash \mathcal{W}}\in V(\sigma)$ will be moved to $\eta_w$ which then can be written as $\eta_w=j_w(\overline P_I, \sigma, \lambda, \zeta^w)$ for some $\zeta^w= (\zeta^w_{[u]})_{[u]\in \mathsf{W}(I)\backslash \mathcal{W}}\in V(\sigma)_w$. By the construction of $j$-maps which relates invariant functionals to open $H$-orbits we then obtain from $\eta_w= \eta\circ w^{-1}$ the transition relations \begin{equation}\label{transit zeta} \zeta^w_{[{\bf1}]}= \zeta_{[w]} \,. \end{equation}
\begin{theorem} \label{Th. 15.8}For generic $\lambda\in iC_I^*$ and $\pi=\pi_{\sigma,\lambda}$ for $\sigma\in \widehat M_I$ cuspidal the map
\begin{equation} \label{D1} \mathcal{M}_\pi \to \bigoplus_{ [w]\in \mathsf{W}(I)\backslash\mathcal{W} } \mathcal{M}_{\pi, w, {\rm td}}^{I,\rho-\lambda}, \ \ \eta\mapsto (\eta_{w}^{I,\rho-\lambda})_{[w]\in \mathsf{W}(I)\backslash \mathcal{W}}\end{equation} is a bijective isometry. \end{theorem} \begin{proof} First note that both target and source have the same dimension by Theorem \ref{CD tempered} and equation \eqref{formula 1} applied to all spaces $Z_w$ via transport of structure. Now Lemma \ref{lemma scatter} together with \eqref{transit zeta} implies that the map is bijective. Finally, that the map is an isometry follows from the Maass-Selberg relations from Theorem \ref{eta-I continuous} -- for that we use $\mathsf{W}(I)\backslash \mathcal{W}\simeq \bigcup_{\mathsf{c}\in \sC_I} \sF_{I,\mathsf{c}}$ from Lemma \ref{lemma fine W}. \end{proof}
\subsection{The Plancherel formula}
As in the group case we now select a subset $\mathcal {I}\subset S$ of representatives for the
association classes. Let us describe in terms of \eqref{D1} the
inner product on the multiplicity space $\mathcal{M}_\pi$ for $[\pi]=[\pi_{\sigma, \lambda}]$,
where $\sigma$ is cuspidal with respect to $M_I$ and $I\in \mathcal {I}$.
For that observe that the map
$$\mathcal{M}_{\sigma,\rm d} \to \mathcal{M}_{\pi}^{I, \rho - \lambda}, \ \ \zeta\mapsto \xi_{\sigma,\lambda, \zeta}$$
is a linear isometry by \eqref{formula 1} if we require that the Plancherel measure for $L^2(Z_I)_{\rm td}$ is
the Lebesgue measure $d\lambda$ times the counting measure of the discrete series, i.e. if we adopt
the normalization \eqref{Z_I symm Re}.
\par Via Theorem \ref{Th. 15.8} we can now
normalize the Plancherel measure $\mu$ such that
we have an isometric
isomorphism:
$$\bigoplus_{ [w]\in \mathsf{W}(I)\backslash\mathcal{W} } \mathcal{M}_{\pi, w, {\rm td}}^{I,\rho-\lambda}\simeq
\bigoplus_{ [w]\in \mathsf{W}(I)\backslash\mathcal{W} } \mathcal{M}_{\sigma, w, \rm d}$$
where $\mathcal{M}_{\sigma, w, \rm d}$ refers to the $M_I \cap H_w$-invariant
square integrable functionals of the symmetric space $M_I/ M_I \cap H_w \simeq w^{-1}M_I w/ w^{-1}M_I w \cap H$.
We now define
$$\mathcal{H}_I = \bigoplus_{[w]\in \mathsf{W}(I)\backslash \mathcal{W}} \sum_{\sigma \in \widehat M_I} \int_{i C_I^*}^\oplus \mathcal{H}_{\sigma,\lambda}\otimes \mathcal{M}_{\sigma,w, {\rm d}} \ d\lambda$$ considered as a subspace of $L^2(Z_I)_{\rm td}$. Let $B_I'$ be the restriction of $B$ to $\mathcal{H}_I$. Then, with Theorem \ref{Th. 15.8} and the same reasoning as in the group case, we obtain the Plancherel formula for symmetric spaces:
\begin{theorem} {\rm (Plancherel formula for symmetric spaces)} Let $Z=G/H$ be a symmetric space and let its Plancherel measure be normalized by unit asymptotics. Then $$B'=\bigoplus_{I\in \mathcal {I}} B_I': \bigoplus_{I\in \mathcal {I}} \mathcal{H}_I \to L^2(Z)$$ is a bijective $G$-equivariant isometry and is the inverse of the Fourier transform. \end{theorem}
\end{document}
After a wonderful evening at the restaurant, it was time to go home. Leha, as a true gentleman, offered Noora a lift, and the girl gladly agreed. Suddenly a problem appeared: Leha could not find his car in the huge parking lot near the restaurant. So he decided to turn to the watchman for help.
Leha wants to make $q$ requests to the watchman, which can help him find his car. Every request is represented by five integers $x_1, y_1, x_2, y_2, k$. The watchman has to consider all cells $(x, y)$ of the matrix such that $x_1 \leq x \leq x_2$ and $y_1 \leq y \leq y_2$, and if the number of the car in cell $(x, y)$ does not exceed $k$, increase the answer to the request by the number of the car in cell $(x, y)$. For each request Leha asks the watchman to tell him the resulting sum. Since the sum can turn out to be quite large, the watchman should report it modulo $10^9 + 7$.
However the requests seem to be impracticable for the watchman. Help the watchman to answer all Leha's requests.
The first line contains one integer $q$ ($1 \leq q \leq 10^4$) — the number of Leha's requests.
Each of the following $q$ lines contains five integers $x_1, y_1, x_2, y_2, k$ ($1 \leq x_1 \leq x_2 \leq 10^9, 1 \leq y_1 \leq y_2 \leq 10^9, 1 \leq k \leq 2 \cdot 10^9$) — parameters of Leha's requests.
Print exactly $q$ lines — in the first line print the answer to the first request, in the second — the answer to the second request and so on.
Phiclust: a clusterability measure for single-cell transcriptomics reveals phenotypic subpopulations
Maria Mircea,
Mazène Hochane,
Xueying Fan,
Susana M. Chuva de Sousa Lopes,
Diego Garlaschelli &
Stefan Semrau (ORCID: orcid.org/0000-0002-4245-2246)
Genome Biology volume 23, Article number: 18 (2022)
The ability to discover new cell phenotypes by unsupervised clustering of single-cell transcriptomes has revolutionized biology. Currently, there is no principled way to decide whether a cluster of cells contains meaningful subpopulations that should be further resolved. Here, we present phiclust (ϕclust), a clusterability measure derived from random matrix theory that can be used to identify cell clusters with non-random substructure, testably leading to the discovery of previously overlooked phenotypes.
Unsupervised clustering methods [1,2,3,4] are integral to most single-cell RNA-sequencing (scRNA-seq) analysis pipelines [5], as they can reveal distinct cell phenotypes. Importantly, all existing clustering algorithms have adjustable parameters that have to be chosen carefully to reveal the true biological structure of the data. If the data is over-clustered, many clusters are driven purely by technical noise and do not reflect distinct biological states. If the data is under-clustered, subtly distinct phenotypes might be grouped with others and will thus be overlooked. Furthermore, most analysis pipelines rely on qualitative assessment of clusters based on prior knowledge, which can hinder the discovery of new phenotypes.
To assess the quality of a clustering quantitatively and help choose optimal parameters, some measures of clustering quality and clusterability have been proposed [6, 7], most of which are not directly applicable to scRNA-seq data. For example, some existing methods rely on multimodality of the expression matrix, which is not always justified for scRNA-seq data, especially when considering highly dynamic systems. Other methods have input parameters, such as the optimal number of dimensions for dimensionality reduction, that cannot be easily determined. Also, general methods do not explicitly account for uninformative sources of variability, related to cell cycle progression or the stress response, for example, which can be important confounders. In the context of scRNA-seq, one of the most widely used measures is the silhouette coefficient [8]. This measure requires the choice of a distance metric to compute the similarity between cells. Notwithstanding its usefulness, it cannot be excluded that a partition of random noise obtains a high silhouette coefficient, indicating high clustering quality. Other measures based on distance metrics or the fit of probability densities suffer from similar issues and often only provide binary results instead of a quantitative score [7]. A different approach is pursued by ROGUE [9], a recently developed tool to assess clustering quality specifically in scRNA-seq data. ROGUE applies the concept of entropy on a per-gene basis to quantify the mixing of cell types. While a clear improvement over existing methods, ROGUE depends on a challenging step of selecting informative genes to explain the differences between cell types. It also assumes a particular noise distribution and requires the careful choice of an adjustable parameter.
Here, we present phiclust (ϕclust), a new clusterability measure for scRNA-seq data that addresses some of the shortcomings of existing methods. This measure is based on the angle ϕ between vectors representing the noise-free signal and the measured, noisy signal, respectively. We consider clusterability to be the theoretically achievable agreement with the unknown ground truth clustering, for a given signal-to-noise ratio. (Below, we will describe in detail how we define "signal" and "noise" in this context.) Importantly, our measure can estimate the level of achievable agreement without knowledge of the ground truth. High clusterability (phiclust close to 1) means that multiple phenotypic subpopulations are present and that clustering algorithms should be able to distinguish them. Low clusterability (phiclust close to 0) means that the noise is too strong for even the best possible clustering algorithm to find any clusters accurately. If phiclust equals 0, the observed variability within a cluster is consistent with random noise. Any subclusters of such a cluster still have a phiclust of 0, which prevents over-clustering of random noise. Instead of assuming a certain noise distribution or relying on a selection of informative genes, our measure can be applied to arbitrary types of random noise and includes all genes in the analysis. This is made possible by certain universal properties of random matrix theory (RMT) [10], which has been applied successfully in finance [11], physics [12] and recently also scRNA-seq data analysis [13].
Below, we will use results of RMT on the singular value decomposition (SVD) of a single-cell gene expression matrix, where rows correspond to genes and columns correspond to cells. To get an intuitive understanding of RMT, it is useful to first consider the cell-cell correlation matrix, calculated from the gene expression profiles. We start from the null hypothesis that the data does not contain any structure and is produced by a random process. In the context of single-cell transcriptomics, "structure" means multiple, distinguishable clusters of cells, or phenotypes. RMT can predict, what the correlation matrix looks like, if the entries of the gene expression matrix are samples of random variables that are independent and identically distributed. Trivially, the diagonal elements of this correlation matrix are all equal to 1. The off-diagonal elements are not exactly 0, however, despite the absence of any meaningful structure in the data. Only in the limit of measuring an infinite number of (random) genes would the off-diagonal elements become identically 0, and the correlation matrix would become the identity matrix. In that case, the only eigenvalue of the correlation matrix is 1. RMT describes the properties of a correlation matrix for a finite ratio of cells and genes. These correlation matrices are, in a sense, distributed "around" the identity matrix, which corresponds to an eigenvalue spectrum distributed around 1. Although the individual entries of the correlation matrix fluctuate from realization to realization, RMT shows that the eigenvalue spectrum is robust (a so-called "self-averaging" property) and an analytical expression for it can be obtained [14]. Likewise, RMT predicts that the singular value distribution of a purely random matrix is closely approximated by the Marchenko-Pastur (MP) distribution. This result holds true irrespective of the distribution of the random variable. This universal property of random matrices allows us to apply RMT to gene expression matrices obtained by scRNA-seq. Of course, any biologically interesting scRNA-seq measurement should contain structure, usually in the form of cell clusters. RMT allows us to regard singular values lying above the MP distribution as evidence for the rejection of the null hypothesis (i.e., the absence of structure in the data). The MP distribution is characterized by sharp upper and lower limits for the singular values of a random matrix but is strictly valid only in the limit of infinite numbers of genes and cells (while keeping the cell-gene ratio fixed). For finite matrices, the largest and smallest singular values are distributed around those sharp limits, which is described by the Tracy-Widom distribution [15].
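This universality is easy to check numerically. The sketch below (an illustration written for this text, not code from the phiclust package; the matrix dimensions, the Poisson rate and all variable names are arbitrary choices) draws a pure-noise genes-by-cells matrix from either a Gaussian or a Poisson model, standardizes the entries and verifies that the singular values stay below the upper edge of the MP support, irrespective of the noise distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_cells = 2000, 300              # arbitrary dimensions, genes x cells
gamma = n_cells / n_genes                 # aspect ratio (< 1 here)
mp_upper = 1 + np.sqrt(gamma)             # upper edge of the MP support

def rescaled_singular_values(noise):
    """Standardize entries to mean 0 / variance 1 and rescale so that the
    MP support for the singular values becomes [1 - sqrt(gamma), 1 + sqrt(gamma)]."""
    z = (noise - noise.mean()) / noise.std()
    return np.linalg.svd(z / np.sqrt(n_genes), compute_uv=False)

for name, noise in [
    ("gaussian", rng.normal(size=(n_genes, n_cells))),
    ("poisson", rng.poisson(lam=2.0, size=(n_genes, n_cells)).astype(float)),
]:
    s = rescaled_singular_values(noise)
    print(f"{name}: largest singular value {s[0]:.3f}, MP upper edge {mp_upper:.3f}")
    # For pure noise the largest singular value stays close to (and, up to
    # Tracy-Widom fluctuations, below) the MP edge for both noise models.
```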
As explained above, the presence of structure manifests itself as singular values above the MP distribution (i.e., the prediction for a purely random matrix). Qualitatively, the magnitude of those outlying singular values corresponds to the magnitude of the differences between clusters. We can understand this relationship, if we assume that the measured gene expression matrix is the sum of a random matrix (the "noise") and a matrix of noise-free gene expression profiles (the "signal"); see Fig. 1a. The bigger the difference in gene expression between phenotypes, the larger the magnitude of the non-zero singular values of the signal matrix. If the number of non-zero singular values (i.e., the rank of the signal matrix) is small compared to the dimensions of the matrix, low-rank perturbation theory [16] is applicable. This theory allows us to calculate the singular values of the measured gene expression matrix from the singular values of the signal matrix. Remarkably, knowledge of the complete signal matrix is not required for this calculation.
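The effect of a low-rank signal on the spectrum can be illustrated in the same way. In the sketch below (again purely illustrative, with arbitrarily chosen cluster sizes, numbers of differentially expressed genes and signal strengths), a rank-one "two-phenotype" pattern is added to the noise matrix, and the number of singular values exceeding the MP edge is counted for increasing signal strength.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_cells = 2000, 300
gamma = n_cells / n_genes
mp_upper = 1 + np.sqrt(gamma)

noise = rng.normal(size=(n_genes, n_cells)) / np.sqrt(n_genes)

# Rank-one "signal": two phenotypes that differ in 100 hypothetical genes.
labels = np.repeat([0.0, 1.0], n_cells // 2)        # ground-truth cluster labels
gene_pattern = np.zeros(n_genes)
gene_pattern[:100] = 1.0                            # differentially expressed genes

for strength in [0.0, 0.02, 0.08]:                  # increasing signal-to-noise
    signal = strength * np.outer(gene_pattern, labels - labels.mean())
    s = np.linalg.svd(noise + signal, compute_uv=False)
    n_outliers = int(np.sum(s > mp_upper))
    print(f"strength {strength}: largest singular value {s[0]:.3f}, "
          f"values above the MP edge: {n_outliers}")
```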
Phiclust is a proxy for the theoretically achievable adjusted rand index (tARI). a Scheme illustrating the rationale behind phiclust. b Singular value distributions of simulated data sets with 5 clusters and different levels of noise; Red: low signal-to-noise, Green: high signal-to-noise. The MP distribution is indicated by a solid blue line, the TW threshold is indicated by a red solid line, and significant singular values are highlighted with asterisks. Inserts show UMAPs of the data. The data set with a higher signal-to-noise ratio has more significant singular values and those singular values are bigger. c Value of the largest singular value versus phiclust for simulated data. Arrows indicate where the examples from b are located. The relationship between the largest singular values and phiclust only depends on the dimensions of the expression matrix. Simulations with different cell-to-gene ratios are shown in different colors. d Phiclust versus theoretically achievable ARI (tARI). Red data points: simulated data sets with two clusters. The number of differentially expressed (DE) genes was varied; the log fold change between clusters was fixed. Green data points: simulated data sets with two clusters. The mean log fold change between clusters was varied; the number of differentially expressed genes was fixed. Blue data points: two synthetic clusters were created by weighted averages of cells from two clusters in the PBMC data set. Cluster weights were varied. The grey dashed line indicates identity. Inset: UMAP of PBMC data set with the two clusters used indicated by red solid circles. e scRNA-seq of mixtures of RNA extracted from three different cell lines. Each data point is a mixture. For each mixture, the entries of the first two singular vectors are plotted. Colors indicate different ratios of contributions from the three cell lines. f First two singular vectors of the cluster indicated by a black solid ellipse in e. The amount of mRNA per mixture [pg] is indicated in color. g Normalized total counts per mixture versus first singular vector of the cluster shown in f. Linear regression (dashed line) is used to regress out the correlation with the total counts. Grey area indicates standard deviation
phiclust is meant to help identify non-random (or deterministic) structure. At the level of a complete data set, for example of a complex tissue, clusters are typically easily discernible. However, if we zoom in on a single cluster, it is much more difficult to decide, whether the variability within that cluster corresponds to meaningful sub-structure (such as the presence of multiple phenotypes) or is consistent with random noise. Below, we will precisely define a notion of clusterability, based on the adjusted rand index, and show that it strongly correlates with phiclust. Furthermore, we will demonstrate that our measure compares favorably to the silhouette coefficient and ROGUE on simulated data and experimental data sets with known ground truth. (See Table S1 for a list of all used simulated and experimental data sets.) Finally, we will apply phiclust to scRNA-seq measurements of complex tissues and obtain new biological insights, which we validate with follow-up measurements.
Phiclust is derived from first principles and does not have free parameters
To derive phiclust, we considered the measured gene expression matrix as a random matrix perturbed by the unobserved, noise-free gene expression profiles (Fig. 1a). This is the exact opposite of the conventional approach, which considers random noise as a perturbation to a deterministic signal. Note that, in our approach, the random matrix contains both the biological variability within a phenotype as well as the technical variability (which is due to limited RNA capture and conversion efficiency, for example). Our point of view allows us to leverage well-established results from RMT [13, 17] and perturbation theory [16].
Figure S1 illustrates the basic principles that were applied to derive phiclust. RMT predicts that the SVD of a random noise matrix results in normal distributed singular vectors and a distribution of singular values that is closely approximated by the MP distribution, if the matrix is large enough (Additional file 1: Fig. S1a, left column). Here, we consider the noise-free gene expression profiles of the cells in various phenotypes, as the "signal" that perturbs the random matrix and thus its singular value distribution. Since biological and technical variability are lumped into the random matrix, expression profiles are identical for cells that belong to the same phenotype. For example, in the case of two distinct phenotypes, the signal matrix has only one non-zero singular value (Additional file 1: Fig. S1a, middle column). The observed (or measured) gene expression matrix is obtained as the sum of the random noise matrix and the noise-free gene expression profiles (Additional file 1: Fig. S1a, right column). The singular value distribution of the measured expression matrix has exactly one singular value above the upper limit that the theory predicts for a purely random matrix, the Tracy-Widom (TW) threshold. The outlying singular value and its associated singular vector correspond to the deterministic component of the measured expression matrix. The distribution of the remaining singular values (the "bulk") is still closely approximated by the MP distribution. Importantly, as the perturbation becomes larger, the value of the outlying singular value also increases (Additional file 1: Fig. S1b). A larger perturbation means more distinct and therefore more easily clusterable phenotypes (compare the singular vectors in the middle row of Figs. S1a and b). The basic idea of phiclust is to use the magnitude of the outlying singular values to quantify clusterability.
Due to the universality of RMT, all described principles are independent of the particular noise distribution (see Additional file 1: Fig. S1a-b for normal distributed noise and Additional file 1: Fig. S1c-d for Poisson distributed noise). SVD of appropriately preprocessed real data sets therefore leads to singular value distributions with the same shape as obtained in simulations: a bulk closely approximated by the MP distribution and one or multiple outlying values. We found that data preprocessing has to comprise normalization and log-transformation, as well as gene-wise and cell-wise scaling (Additional file 1: Fig. S2a-d). SVD of raw data or log-transformed, normalized data typically results in a largest outlying singular value that is much larger than all others (Additional file 1: Fig. S2a,b). The corresponding singular vector reflects a global trend in the data and is called "market mode" in the context of stock market analysis [11, 18]. Here, we call it "transcriptome mode", since it corresponds to an expression trend that is present across all cells, irrespective of cell type (such as, for example, high expression of particular cytoskeletal genes or essential enzymes and low expression of certain membrane receptors or transcription factors). The transcriptome mode is obviously not informative for clustering. Scaling shifts its singular value to 0, which effectively removes it from further analysis (Additional file 1: Fig. S2c,d). We tested for all data sets used in this study, whether the bulk of the singular value distribution of each cluster deviates significantly from the MP distribution after the described preprocessing (Kolmogorov-Smirnov test, Additional file 1: Fig. S2e). For reasonably large clusters (> 50 cells), we only found one example of a (marginally significant) deviation from the MP distribution.
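A minimal version of this preprocessing could look as follows (a sketch under assumptions: the normalization target, the pseudocount and the exact order of the scaling steps are choices made here for illustration and may differ from the published pipeline).

```python
import numpy as np

def preprocess(counts, pseudocount=1.0, target_sum=1e4):
    """counts: genes x cells matrix of raw counts (dense here for simplicity;
    unexpressed genes should be filtered beforehand in a real analysis)."""
    # 1. depth normalization per cell and log transform
    depth = counts.sum(axis=0, keepdims=True)
    x = np.log(counts / depth * target_sum + pseudocount)
    # 2. gene-wise scaling (center and scale each gene across cells)
    x = (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-8)
    # 3. cell-wise scaling (center and scale each cell across genes);
    #    together, steps 2 and 3 shift the "transcriptome mode" towards zero
    x = (x - x.mean(axis=0, keepdims=True)) / (x.std(axis=0, keepdims=True) + 1e-8)
    return x

# toy usage with random counts, purely to show the call signature
rng = np.random.default_rng(2)
counts = rng.poisson(lam=1.0, size=(2000, 300)).astype(float)
s = np.linalg.svd(preprocess(counts) / np.sqrt(2000), compute_uv=False)
print("five largest singular values:", np.round(s[:5], 3))
```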
We next wanted to confirm, for real data, that the remaining outlying singular values reflect the strength of the signal, i.e., differences between the phenotypes. To that end, we extracted the gene expression profiles from two clusters in an experimental single-cell RNA-seq data set and added, as additional signal, a matrix with one non-zero singular value. As to be expected, SVD of the combined data results in one additional singular value, which increases with the strength of the perturbation (Additional file 1: Fig. S2f-g). See Table S2 for a list of all outlying singular values of experimentally measured expression matrices as well as the corresponding signal matrices. All in all, these tests show that the basic principles of random matrix theory and perturbation theory are applicable to real single-cell RNA-seq data.
So far, we have shown that the values of the outlying singular values are, qualitatively, related to the differences between phenotypes. However, their magnitudes are difficult to interpret. Phiclust is derived from the outlying singular values and can be interpreted as a measure of clusterability, as we will show in the next section. More specifically, phiclust is defined as the squared cosine of the angle between the leading singular vector of the measured gene expression matrix and the corresponding singular vector of the unobserved, noise-free expression matrix. Low-rank perturbation theory is able to predict this angle using only the dimensions of the measured gene expression matrix and its singular values, but without knowledge of the noise-free expression profiles. See Additional file 2 for a detailed derivation. If the noise level is low compared to the signal, this angle will be small, since the measured gene expression matrix is then very similar to the noise-free signal. This would result in phiclust close to 1. As the level of noise increases, for a fixed signal, the singular vectors of the measured expression matrix and the noise-free signal become increasingly orthogonal and phiclust approaches 0. To illustrate the calculation of phiclust, we simulated data sets with realistic noise structure using the Splatter package [19] (Fig. 1b,c). As to be expected, increasing the number of genes that are differentially expressed between clusters makes the clusters more easily separable and leads to larger singular values outside of the MP distribution (Fig. 1b). By construction, this results in higher values of phiclust (Fig. 1c). Please refer to Table S2 for the numerical values of the outlying singular values in the simulated expression matrices as well as the corresponding signal matrix.
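The sketch below illustrates this idea in a simulation where the noise-free signal is known. It compares the squared cosine between the leading cell-singular vectors of the noisy and the noise-free matrices with a value predicted from the observed singular value alone; the closed-form expressions used here are the standard spiked-model (low-rank perturbation) limits and should be read as assumptions for illustration, since the exact formulas and normalizations of the published derivation are given in the paper's Additional file 2.

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_cells = 2000, 300
beta = n_cells / n_genes                 # aspect ratio; cells are the smaller dimension

# noise-free signal: two phenotypes separated along one cell-singular vector
labels = np.repeat([0.0, 1.0], n_cells // 2)
v = labels - labels.mean()
v /= np.linalg.norm(v)                   # cell-singular vector of the signal
u = np.zeros(n_genes)
u[:100] = 1.0
u /= np.linalg.norm(u)                   # gene-singular vector of the signal
theta = 1.5                              # signal strength (arbitrary choice)

observed = theta * np.outer(u, v) + rng.normal(size=(n_genes, n_cells)) / np.sqrt(n_genes)
U, s, Vt = np.linalg.svd(observed, full_matrices=False)

# squared cosine measured against the known ground truth (only possible in simulation)
phi_empirical = np.dot(Vt[0], v) ** 2

# prediction from the observed singular value alone (assumed spiked-model limits)
y = s[0]
theta2 = (y**2 - beta - 1 + np.sqrt((y**2 - beta - 1) ** 2 - 4 * beta)) / 2
phi_predicted = (theta2**2 - beta) / (theta2**2 + beta * theta2)

print(f"empirical cos^2: {phi_empirical:.3f}, predicted from the spectrum: {phi_predicted:.3f}")
```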
We would like to stress at this point that phiclust is derived from universal properties of perturbed random matrices, which can be considered first principles. By contrast, many other measures are developed based on empirical observations and justified post hoc by their usefulness. Phiclust is calculated using only the SVD and the dimensions of the expression matrix. Thus, it does not have any free, adjustable parameters, which would have to be chosen by the user or learned from the data.
Phiclust is a proxy for clusterability
To show that phiclust is a proxy for clusterability, we have to make the concept of clusterability more precise and quantifiable. We adopted the Adjusted Rand Index (ARI) [20] as a well-established measure for the agreement between an empirically obtained clustering and the ground truth. Next, we will argue that perfect agreement with the ground truth (ARI = 1) is not achievable in the presence of noise, even with the best conceivable clustering algorithm.
Take, for instance, the simplest possible case of two cell types, A and B. Without any noise (technical or biological), expression profiles within a cell type are identical and the data can be clustered perfectly. Correspondingly, the singular vector of the expression matrix has only two different entries (Additional file 1: Fig. S3a, left). Therefore, it is easy to find a threshold that discriminates between the two cell types. In the presence of noise, however, there is a chance that the measured expression profile of a cell from cell type A looks more like cell type B and is therefore clustered with other cells from cell type B and vice versa. Correspondingly, the entries of the singular vector are now spread by the noise and can overlap (Additional file 1: Fig. S3a, right). Even if we use the best possible threshold to discriminate between the two cell types, some cells will be necessarily misclassified, if the distributions overlap.
This type of error is unavoidable (or irreducible) and known as Bayes error rate [21] in the context of statistical classification. From the overlap of the singular vector entries, we can calculate the Bayes error rate or, equivalently, a theoretically achievable ARI (tARI, see also Additional file 2). Of course, this is only possible for data with known ground truth. We first used simulated data to show empirically that commonly used clustering methods are not able to exceed the tARI (Additional file 1: Fig. S3b,c). It therefore quantifies our notion of clusterability: With increased noise, tARI decreases and it is more challenging even for the best conceivable clustering algorithm to distinguish the difference between phenotypes. Importantly, phiclust is strongly correlated with the tARI (Fig. 1d) and thus allows us to estimate clusterability without knowing the ground truth.
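As a toy illustration of this irreducible error, the snippet below (illustrative only; the means and standard deviation are made-up values, and the conversion of the overlap into a theoretically achievable ARI is described in the paper's Additional file 2) models the singular-vector entries of two equally sized clusters as Gaussians and computes the Bayes-optimal threshold together with the resulting misclassification rate.

```python
from scipy.stats import norm

# assumed toy model: singular-vector entries of the two clusters are Gaussian
mu_a, mu_b, sd = -0.05, 0.05, 0.04        # made-up example values
threshold = (mu_a + mu_b) / 2             # Bayes-optimal for equal priors and variances

# probability that a cell from A falls on B's side of the threshold, and vice versa
bayes_error = (0.5 * norm.sf(threshold, loc=mu_a, scale=sd)
               + 0.5 * norm.cdf(threshold, loc=mu_b, scale=sd))
print(f"irreducible misclassification rate: {bayes_error:.3f}")
```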
So far, we have assumed additive noise (i.e., the measured gene expression is the sum of a random matrix and the noise-free expression matrix). Low-rank perturbation theory also makes a prediction for multiplicative noise (i.e., the measured gene expression is the product of a random matrix and the noise-free expression matrix). In that case, phiclust still scales approximately linearly with the tARI, but its dynamic range depends somewhat on the cell-to-gene ratio (Additional file 1: Fig. S3d). To our knowledge, the noise generating mechanisms at work in scRNA-seq have not been pinpointed comprehensively. Therefore, we will continue to assume additive noise, noting that our approach can be easily adapted to multiplicative noise.
To test the relationship between phiclust and the tARI in experimentally measured data, we used an scRNA-seq data set of peripheral blood mononuclear cells (PBMCs) [22]. We chose two very distinct cell types and created new clusters as weighted, linear combinations of expression profiles from the two cell types. This approach allowed us to precisely control the difference between the newly created clusters, while maintaining the experimentally observed noise structure (Additional file 1: Fig. S3e). Also for this data, phiclust strongly correlates with the tARI (Fig. 1d). As an alternative to the tARI, we also calculated the theoretically achievable silhouette coefficient [8] (tSIL), which considers the distances between the best possible clusters (Additional file 1: Fig. S4 a-c). For a large range of simulation parameters, the tSIL has a smaller dynamic range than the tARI, which makes it less useful overall for assessing clusterability. In contrast to phiclust, ROGUE [9] does not show collinearity with the tARI (Additional file 1: Fig. S4d). Therefore, ROGUE seems to implement a notion of clusterability that is distinct from our point of view.
Confounder regression removes unwanted variability
To further characterize the performance of phiclust on experimental data sets with known ground truth, we used a measurement of purified RNA from 3 cell types, mixed at different ratios [23] (Fig. 1e). We noticed a significant correlation between the amount of input RNA and the entries of the first singular vector of individual clusters (Fig. 1f). This might be explained by lowly expressed genes not being well-represented in the low-input libraries and the resulting differences in the expression profiles. In any case, the amount of input RNA seemed to be a confounding factor that could lead to high values of phiclust, even in the absence of meaningful subclusters. Correspondingly, we found a correlation between the singular vector entries and the number of total counts, despite normalization of the data (Fig. 1g). This is consistent with the finding that total counts are a confounding factor in scRNA-seq data that cannot be eliminated by normalization using one single scaling factor per cell [22, 24]. Different groups of genes scale differently with the total counts per cell. Therefore, a correlation with the total counts remains even after normalization.
More generally, there are various experimental and biological factors that drive artefactual or irrelevant variability in single-cell RNA-seq data [22, 25]. We therefore introduced a regression step that removes the influence of any nuisance variables, such as the number of total counts per cell, ribosomal gene expression, mitochondrial gene expression, or cell cycle phase (see also Additional file 2). More specifically, we first regress the entries of a singular vector on one or multiple confounders. The fraction of variance explained by all confounder is then given by the adjusted R2 (coefficient of determination) of the linear regression. Since the squared singular values can also be interpreted as the amount of variance explained, we correct them by multiplying with 1 − the adjusted R2 found in the confounder regression. (See Table S2 for a list of the uncorrected and corrected singular values for both simulated and experimental expression matrices.) The corrected singular values are then used to calculate phiclust.
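In code, the confounder regression could be sketched as follows (illustrative; the function and variable names are invented here, and the published package may implement the regression and the correction differently). Each significant cell-singular vector is regressed on the per-cell confounders, and the squared singular value is scaled by the fraction of variance the confounders leave unexplained.

```python
import numpy as np

def adjusted_r2(singular_vector, confounders):
    """Fraction of variance in one cell-singular vector explained by confounders.

    singular_vector: (n_cells,) entries of a significant singular vector
    confounders:     (n_cells, p) per-cell covariates, e.g. total counts,
                     mitochondrial fraction, ribosomal fraction, cell-cycle scores
    """
    n, p = confounders.shape
    design = np.column_stack([np.ones(n), confounders])          # add intercept
    coef, *_ = np.linalg.lstsq(design, singular_vector, rcond=None)
    residuals = singular_vector - design @ coef
    r2 = 1 - residuals.var() / singular_vector.var()
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def corrected_singular_value(s, singular_vector, confounders):
    """Scale the squared singular value by the unexplained variance fraction."""
    r2_adj = max(0.0, adjusted_r2(singular_vector, confounders))
    return s * np.sqrt(1 - r2_adj)
```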
Interestingly, the relative influence of the confounders considered in this study varied substantially between data sets (Additional file 1: Fig. S5a). For example, cell stress is a relevant confounder only in the kidney data set. This is likely related to the cell dissociation procedure, which is necessarily more aggressive for kidney tissue, compared to the other samples: bone marrow mononuclear cells (BMNCs) and purified RNA, extracted from cell cultures. Total counts and ribosomal gene expression explain most of the artefactual variance in BMNCs. This might be explained by high variability in the metabolic state of the cells. In Table S2 we list the R2 values of each considered confounder for each cluster. For real scRNA-seq data sets, confounder regression can lead to a significant reduction of phiclust (Additional file 1: Fig. S5b, see Table S2 for the numerical values.) It is therefore an important part of the method.
Confounder regression can also help to analyze data sets that are not made up of regular clusters but contain irregularly shaped continua of gene expression. For example, in developmental and stem cell biology, we commonly observe differentiation paths, which are large clusters with gradually changing expression profiles. Uncorrected phiclust values are high for such paths, which suggests meaningful subpopulations (S5c,d). Depending on the biological question, it might in fact be desirable to cluster differentiation paths, for example, to separate a stem cell state from a differentiated cell type. For other applications, it could be preferable to treat a differentiation path as one cluster. In that case, we can use pseudotime approaches [26] to infer a temporal order of the gene expression profiles and use the inferred pseudotime in the confounder regression. If all observed variability is explained by developmental dynamics, phiclust is reduced to 0 and thus no sub-clustering is suggested (S5c,d).
Phiclust has high sensitivity for the detection of sub-structure
After correction for unwanted variability, we compared the performance of phiclust with other clusterability measures in the RNA mixture data set (Fig. 1e). Phiclust successfully indicated the presence or absence of subclusters for all tested combinations of the 7 original mixtures (Additional file 1: Fig. S6). By contrast, ROGUE only indicated the presence of substructure when the merged clusters were very clearly distinguishable (Additional file 1: Fig. S6 b,c). The silhouette coefficient was qualitatively similar to phiclust but its dynamic range was much smaller (Additional file 1: Fig. S6, middle row). This might become critical in the case of highly similar phenotypes, which is precisely where phiclust might have an advantage. An example for this can be seen in Additional file 1: Fig. S6b: the silhouette coefficients in the pure cluster are very similar to the merged clusters (which were composed of two original clusters). To compare phiclust with the silhouette coefficient in more detail, we carried out additional simulations (Additional file 1: Fig. S7). First, we simulated 3 clusters and subsequently merged two of them. While phiclust clearly distinguished the merged cluster from the pure cluster, the silhouette coefficients were similar for both. Increasing the fraction of genes that are differentially expressed between the merged cluster increased the difference in silhouette coefficient, but only gradually (Additional file 1: Fig. S7b). By contrast, phiclust jumped to values close to 1 for the merged cluster for very small fractions of differentially expressed genes (around 0.03). It is therefore the more sensitive measure. The silhouette coefficient strongly depends on the number of principal components used in dimensionality reduction (Additional file 1: Fig. S7c), as well as the metric for distances between expression profiles (Additional file 1: Fig. S7d). Phiclust does not depend on such user-defined parameters. Most importantly, the silhouette coefficient cannot answer the question, whether an identified cluster contains meaningful substructure, as it requires partitioning into at least 2 sub-clusters. We simulated a cluster without any substructure and all variability was purely random (Additional file 1: Fig. S7e). The silhouette coefficient was maximal for a k-means clustering with k = 2, which might prompt a user to conclude (wrongly) that there are 2 sub-clusters present. Phiclust, which does not require any further partitioning of the cluster, was 0, indicating correctly that the observed variability was consistent with random noise. All in all, these comparisons indicate that phiclust is a sensitive measure, which detects differences between highly similar phenotypes.
Genes responsible for the detected substructure can be identified
In full analogy to the reasoning outlined so far, our approach can also be used to characterize variability in gene space, for which we defined the conjugate measure g-phiclust (see Additional file 2 for the derivation). Above, we considered only the right singular vectors, where each entry corresponds to a cell in the data set. We therefore also call them "cell-singular vectors." In the simplest case of well separated clusters, entries in the cell singular vectors indicate the membership of a cell in a cluster or a group of clusters. For the left singular vectors, each entry corresponds to a gene. Therefore, we also call them "gene-singular vector." The squared cosine of the angle between the leading gene-singular vector in the measured gene expression matrix and the corresponding gene-singular vector of the noise-free signal matrix is g-phiclust. As for phiclust, data sets with higher signal-to-noise ratios are characterized by higher values of g-phiclust (Additional file 1: Fig. S8a). "Signal" and "noise" are defined exactly as above: "noise" is a random matrix and the "signal" is a low-rank matrix consisting of noise-free expression profiles, where the strength of the signal (or difference between the clusters) corresponds to the magnitude of the non-zero singular values. A g-phiclust close to 0 would indicate that all observed differential gene expression can be explained by random noise. Larger values of g-phiclust indicate less overlap of the gene expression profiles between phenotypes. We therefore expect to find a bigger number of significantly differentially expressed (DE) genes and/or larger fold changes between phenotypes. We confirmed by simulations that genes with larger absolute entries in a gene-singular vector contribute more to the differences between the clusters separated along the corresponding cell-singular vector (Additional file 1: Fig. S8b-d): For example, if two clusters (A and B) are separated along a cell-singular vector and cells in cluster A are characterized by positive entries, the genes with large positive entries in the corresponding gene-singular vector will be mostly expressed in cluster A. We call these "variance driving" genes. Our approach thus not only identifies relevant substructure in a cell cluster but can also reveal the genes responsible for it. In contrast to differential expression tests, the variance driving genes can be obtained before clustering and might help the user interpret the observed variability and make an informed decision on whether it is useful to sub-cluster the data. If the variance driving genes have enriched biological features (such as being involved in the same signaling pathway or cellular function), we can take that as additional evidence for biologically meaningful sub-population.
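Extracting variance-driving genes from the SVD is then straightforward. The sketch below (illustrative, with assumed argument names) ranks genes by the absolute value of their entries in the gene-singular vector belonging to a chosen significant component; the sign of an entry indicates on which side of the corresponding cell-singular vector the gene is predominantly expressed.

```python
import numpy as np

def variance_driving_genes(expr, gene_names, component=0, n_top=20):
    """expr: preprocessed genes x cells matrix (see the preprocessing sketch above).

    Returns the genes with the largest absolute entries in the gene-singular
    vector of the chosen component, together with their signed entries.
    """
    U, s, Vt = np.linalg.svd(expr, full_matrices=False)
    loadings = U[:, component]
    order = np.argsort(np.abs(loadings))[::-1][:n_top]
    return [(gene_names[i], float(loadings[i])) for i in order]
```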
Application of phiclust to a BMNC data set drives the discovery of biologically meaningful sub-clusters
The most important application of phiclust, in our opinion, is to prioritize clusters for further sub-clustering and follow-up studies. For a complex tissue with dozens of clusters, it is not feasible to sub-cluster all of them and try to validate all resulting subpopulations. This is particularly inefficient, if many subclusters are in fact just driven by random noise. High values of phiclust nominate those clusters that likely have deterministic structure and are therefore worthwhile to be scrutinized experimentally in more detail.
To demonstrate the application of phiclust and g-phiclust, we analyzed scRNA-seq measurements of complex tissues. In a data set of bone marrow mononuclear cells (BMNCs) [27], we calculated phiclust for the clusters reported by the authors (Fig. 2a,b). For all clusters, except the red blood cell (RBC) progenitor cluster, the bulk of the singular value distribution was well-described by the MP distribution. (In the RBC progenitors, we found several singular values below the lower limit of the MP distribution. These outliers did not influence the further analysis since we are only interested in singular vectors above the upper limit.) The first cell-singular vectors of all clusters were significantly correlated with several confounding factors (see Fig. 2d for RBC progenitors and Fig. 2e for MAIT cells). After correction for these confounding factors, phiclust corresponded well with a visual inspection of the cluster UMAPs (Fig. 2b): Where obvious clusters were present, phiclust was highest, while homogeneous, structure-less clusters resulted in a phiclust of 0. Reassuringly, many progenitor cell types received a high phiclust (indicating possible substructure) in agreement with the known higher variability in these cell types. Ranking existing clusters by g-phiclust resulted in a very similar order (Additional file 1: Fig. S9a).
Application of phiclust to a BMNC data set drives the discovery of biologically meaningful sub-clusters. a UMAP of BMNC data set. b Phiclust for the BMNC data set. Error bars indicate the uncertainty obtained by resampling the noise. Inset: UMAP of clusters with low, intermediate, and high values of phiclust. c Singular value distribution, MP distribution (red line), and TW threshold (green line) of clusters with low, intermediate, and high values of phiclust. Significant singular values are highlighted with asterisks. In the gdT cluster, the singular vectors corresponding to the outlying singular values had normal distributed entries and were thus not significant. d First three graphs: first singular vector of the red blood cell progenitor cluster in the BMNC data set versus normalized total counts per cell, normalized expression of ribosomal genes, and normalized expression of mitochondrial genes. Rightmost graph: second singular vector versus normalized G2M score. The dashed line indicates the linear regression and the grey area indicates the standard deviation. e Left: UMAP of the MAIT cell cluster in BMNC data set. The color indicates the normalized total counts per cell. Middle: singular value distribution, MP distribution (red line), and TW threshold (green line) for the MAIT cell cluster. The only significant singular value is indicated by an asterisk. Right: normalized total counts per cell versus the singular vector associated with the significant singular value (here: first singular vector) in the MAIT cluster
To confirm the presence of relevant substructure, we subclustered the two original clusters with the highest phiclust (Additional file 1: Fig. S9 b-e). In the RBC progenitors, we identified 4 subsets that correspond to different stages of differentiation, ranging from erythroid precursors to highly differentiated RBCs, as identified previously [28]. In the dendritic cell (DC) progenitor cluster, two subclusters were identified, which correspond to precursors of classical or plasmacytoid DCs, respectively [29]. For both examples, the variance-driving genes found in the gene-singular vectors were localized to their corresponding clusters (Additional file 1: Fig. S9 c,d) and overlapped strongly with differentially expressed genes found after subclustering (see Table S3).
Phiclust reveals subpopulations in a fetal human kidney data set that can be confirmed experimentally
As a second example of our approach, we analyzed a fetal human kidney data set we published previously [30]. In our original analysis, we were forced to merge several clusters, since we were unsure if the observed variability was just noise. We hence wanted to use phiclust to find previously overlooked subpopulations. As for BMNCs, phiclust corresponded well with a qualitative assessment of cluster variability (Fig. 3a): Clusters with visible sub-clusters had the highest values of phiclust. Ordering the clusters by g-phiclust resulted in a similar ranking as phiclust (Additional file 1: Fig. S10a).
Phiclust reveals subpopulations in a human fetal kidney data set that can be confirmed experimentally. a Force-directed graph layout and phiclust for the fetal kidney data set. Error bars indicate the uncertainty obtained by resampling the noise. Inset: UMAP of clusters with low, intermediate, and high values of phiclust. b UMAPs of the UBCD, SSBpr, and ICa clusters. Left: colors indicate sub-clusters. Right: colors indicate the log-normalized gene expression of the two indicated genes. One gene follows the red color spectrum, the other gene the green color spectrum. Absence of color indicates low expression in both genes, yellow indicates co-expression of both genes. c–e Immunostainings of week 15 fetal kidney sections. c UPK1A and KRT7 are expressed in the urothelial cells of the developing ureter (upper panel) and absent from the tubules in the adjacent inner medulla (lower panel). d PECs express CLDN1 and CAV2 (upper panel), as well as CLDN1 at the capillary loop stage and later stages (lower panel). MAFB staining is found in podocytes and their precursors in the SSB (lower panel). e CLDN11 and POSTN are expressed in interstitial cells visualized by immunostaining (upper panel). SULT1E1 is expressed in the interstitial cells surrounding the ureter (marked by UPK1A) and the tubule in the inner medulla (lower panel). Scale bars: 100 μm
Subclustering of the cluster with the highest phiclust, ureteric bud/collecting duct (UBCD), revealed a subset of cells with markers of urothelial cells (UPK1A, KRT7) (Fig. 3b, Additional file 1: Fig. S10b-e). Immunostaining of these two genes, together with a marker of the collecting system (CDH1), in week 15 fetal human kidney sections confirmed the presence of the urothelial subcluster (Fig. 3c, Additional file 1: Fig. S11a).
Another cell type that we did not find in our original analysis is the parietal epithelial cells (PECs). They could now be identified within the SSBpr cluster (S-shaped body proximal precursor cells) (Fig. 3b, Additional file 1: Fig. S10b-e). To reveal these cells in situ, we stained for AKAP12 and CAV2, which were among the top differentially expressed genes in this subcluster (Table S4), together with CLDN1, a known marker of PECs, and MAFB, a marker of the neighboring podocytes (Fig. 3d, Additional file 1: Fig. S11b). Besides the PECs and proximal tubule precursor cells, SSBpr also contained a few cells that were misclassified in the original analysis, indicating the additional usefulness of phiclust as a means to identify clustering errors.
Further analysis of a cluster of interstitial cells (ICa) revealed multiple subpopulations (Fig. 3b, Additional file 1: Fig. S10b-e). Immunostaining showed that a POSTN-positive population is found mostly in the cortex, often surrounding blood vessels, whereas a SULT1E1-positive population is located in the inner medulla and papilla, often surrounding tubules (Fig. 3e, Additional file 1: Fig. S11c). CLDN11, another gene identified by analysis of the gene-singular vectors (Additional file 1: Fig. S10b-e), was found mostly in the medulla, but also in the outermost cortex. A more detailed, biological interpretation of the results can be found in Additional file 3.
Here, we presented phiclust, a clusterability measure that can help detect subtly different phenotypes in scRNA-seq data. Universal properties of the underlying theory make it possible to apply phiclust to arbitrary noise distributions, and the noise can be additive or multiplicative. Empirically, we find that the bulk of the singular value distribution of measured expression matrices is well-approximated by the MP distribution. This supports the assumption that the noise is generated by independent and identically distributed random processes.
While most of the technical and biological noise can likely be considered random, there are also known systematic errors and unwanted, confounding factors (such as the efficiency of RNA recovery, cell cycle phase, etc.). Therefore, regressing out uninformative, deterministic factors is an important part of the method.
The approach underlying phiclust also allows us to identify the genes that are most relevant for the biological interpretation of the observed variability. We found these genes to overlap strongly with differentially expressed genes identified after sub-clustering. The g-phiclust measure, a conjugate to phiclust, quantifies how distinguishable the expression profiles of different phenotypes are in the presence of noise.
The most important application of phiclust is the nomination of clusters for sub-clustering and subsequent experimental validation. All clusters that were nominated in the fetal kidney data set turned out to have subpopulations that could be validated by experiments: rare urothelial cells, which differ from nearby clusters in only a few genes; PECs and subtypes of interstitial cells, which had distinct spatial distributions.
There are several other methods that attempt to detect the presence of meaningful information in single-cell RNA-seq data. Below, we will compare phiclust to some of the most popular examples: the silhouette coefficient, ROGUE, robust PCA, the dip test, and ZINB-WaVE.
The silhouette coefficient is a popular tool to assess clustering quality. In contrast to phiclust, this coefficient requires a (sub-)clustering, and it cannot be used to decide whether a cluster contains meaningful variability and should be sub-clustered further. As demonstrated, using the silhouette coefficient can lead to over-clustering of random noise as well as missing the presence of subtly different phenotypes. Likewise, phiclust appeared to be more sensitive than ROGUE, an entropy-based clusterability measure. Neither ROGUE nor the silhouette coefficient scales linearly with the tARI, which we introduced as an upper limit to the achievable agreement of an empirical clustering with the ground truth.
Robust PCA [31, 32] decomposes a measured expression matrix into a sparse component and a low-rank component. Under the assumption that noise is sparse, the sparse component is identified with random noise. In our opinion, there is no reason to assume that the noise in scRNA-seq data is sparse or sparser than the measured expression matrix itself. Likely, every non-zero gene expression measurement was corrupted by noise. Additionally, the remaining low-rank component cannot be identified as the noise-free signal. It is fundamentally impossible to reconstruct the noise-free signal from the measured expression because the noise is created by a random process. The low-rank component is therefore only a (noisy) approximation of the noise-free signal. Given the fundamental limit to signal reconstruction, the best thing we can do is quantify the closeness between signal and measured expression, as implemented by phiclust. In robust PCA, the low-rank matrix is often further subjected to dimensionality reduction, where it is difficult to determine the correct number of dimensions. phiclust does not require any dimensionality reduction and uses all available data.
The dip test [7], a method aimed at detecting the presence of clusters, tests whether there are multiple modes in the data. It can be applied directly to the distribution of distances between expression profiles or to a low-dimensional representation of the data, such as principal component scores. The dip test will miss relevant variability if it does not manifest itself as separate modes, which can easily occur, for example, in the case of differentiation paths. It also provides only a binary result (modes present or not), whereas phiclust is a continuous measure and does not require the presence of modes.
ZINB-WaVE [24] performs dimensionality reduction based on a zero-inflated negative binomial distribution and is similar to principal component analysis if no additional covariates are added to the model. ZINB-WaVE acknowledges the fact that principal components are prone to correlate with nuisance parameters, even after normalization. The problem is circumvented by adding such parameters as covariates to the model, which is similar to the confounder regression used for phiclust. However, the user has to choose the number of dimensions to use, and currently there is no principled way to determine the optimal number. phiclust does not have any such adjustable parameters.
We hope that this manuscript will bring renewed awareness to random noise as a factor that imposes hard limits on clustering and identification of differentially expressed genes. We hope that quantitative measures of clusterability, such as phiclust, can play an important role in making single-cell RNA-seq analysis more reproducible and robust.
Before applying the method to simulated or measured single-cell RNA-seq data sets, several preprocessing steps are necessary. The raw counts are first normalized and log-transformed. Next, the expression matrix is standardized, first gene-wise, then cell-wise. These steps ensure the proper agreement of the bulk of the singular value distribution with the MP distribution (Additional file 1: Fig. S2). See also Additional file 2, Section 3.1.
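As an illustration, here is a minimal Python sketch of these preprocessing steps (a simplification in NumPy; the function name, the size-factor scaling, and the small constant added to the standard deviations are our own choices, not the exact implementation of the phiclust R package):

import numpy as np

def preprocess(counts):
    # counts: genes x cells matrix of raw counts for one cluster
    totals = counts.sum(axis=0, keepdims=True)             # total counts per cell
    norm = counts / totals * np.median(totals)              # library-size normalization
    logged = np.log1p(norm)                                 # log-transformation
    # standardize gene-wise (rows), then cell-wise (columns)
    z = (logged - logged.mean(axis=1, keepdims=True)) / (logged.std(axis=1, keepdims=True) + 1e-12)
    z = (z - z.mean(axis=0, keepdims=True)) / (z.std(axis=0, keepdims=True) + 1e-12)
    return z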
Phiclust
To derive phiclust, we assume that the expression matrix \( \overset{\sim }{X} \), measured by scRNA-seq, can be written as the sum of a random matrix X, which contains random biological variability and technical noise, and a signal matrix P, which contains the unobserved expression profiles of each cell:
$$ \tilde{X}=X+P $$
Note that in this decomposition, cells that belong to the same cell type (or phenotype) have identical expression profiles in the signal matrix P. Below, we will show that multiplicative noise can be treated analogously.
We apply SVD to obtain the singular values, as well as the right and left singular vectors of \( \overset{\sim }{X} \). The left singular vectors span gene-space and the right singular vectors span cell-space. Hence, we call them gene-singular vectors and cell-singular vectors, respectively. If we use the term "singular vector" it is implied to mean cell-singular vector.
Considering the signal matrix P as a perturbation to the random matrix X, we can apply results from both random matrix theory and low-rank perturbation theory. Random matrix theory [33, 34] predicts that the singular value distribution of X is a Marchenko-Pastur (MP) distribution [17, 18, 35], which coincides with the bulk of the singular value distribution [11,12,13] of \( \overset{\sim }{X} \). The singular values of \( \overset{\sim }{X} \) above the values predicted by the MP distribution characterize the signal matrix P. Since the agreement with the MP distribution holds strictly only for infinite matrices, we use two additional concepts to identify relevant singular values exceeding the range defined by the MP distribution. The Tracy-Widom [15, 35] (TW) distribution describes the probability that a singular value of a finite matrix exceeds the MP bound. Additionally, since singular vectors of a random matrix are normally distributed, relevant singular vectors have to be significantly different from normal [13]. To test for normality, we used the Shapiro-Wilk test.
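The identification of relevant components can be sketched in Python as follows (a simplification: the matrix is assumed to be the standardized genes x cells matrix divided by the square root of the number of genes, so that the MP upper edge for the squared singular values is (1 + sqrt(c))^2; the finite-size Tracy-Widom correction is omitted here, and only the Shapiro-Wilk filter is shown):

import numpy as np
from scipy.stats import shapiro

def candidate_signal_components(z, alpha=0.05):
    n_genes, n_cells = z.shape
    c = n_cells / n_genes                                  # cell-to-gene ratio
    u, s, vt = np.linalg.svd(z / np.sqrt(n_genes), full_matrices=False)
    mp_edge = (1 + np.sqrt(c)) ** 2                        # MP upper bound for squared singular values
    relevant = []
    for i, gamma in enumerate(s):
        if gamma ** 2 > mp_edge and shapiro(vt[i]).pvalue < alpha:
            # keep only components whose cell-singular vector deviates from normality
            relevant.append(i)
    return s, relevant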
We apply low-rank perturbation theory [16] to calculate the singular values (θi) of P from the outlying singular values (γi) of the measured expression matrix \( \overset{\sim }{X\ } \):
$$ {\theta}_i\left({\gamma}_i\right)=\sqrt{\frac{2c}{{\gamma_i}^2-\left(c+1\right)-\sqrt{{\left({\gamma_i}^2-\left(c+1\right)\right)}^2-4c}}} $$
where c is the cell-to-gene ratio, i.e., the total number of cells divided by the total number of genes.
The values of θi are then used to obtain the angles αi between the singular vectors of \( \overset{\sim }{X} \) and P. These angles are conveniently expressed in terms of their squared cosine as
$$ {\phi}_i=\mathit{\cos}{\left({\alpha}_i\right)}^2=1-\frac{c\left(1+{\theta}_i^2\right)}{\theta_i^2\left({\theta}_i^2+c\right)} $$
The leading singular vector of the measured expression matrix, which has the largest singular value, has the smallest angle to its unperturbed counterpart. The squared cosine of this smallest angle is then used as a measure of clusterability:
$$ {\phi}_{clust}=\mathit{\cos}{\left(\min_i{\alpha}_i\right)}^2=\max_i\mathit{\cos}{\left({\alpha}_i\right)}^2=\max_i{\phi}_i,\qquad {\alpha}_i\in \left[0,\pi /2\right]. $$
For a detailed derivation of phiclust, see Additional File 2, Sections 2.1–2.4.
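In code, the computation of phiclust from the outlying singular values is a direct transcription of the two formulas above; the following Python sketch (function and variable names are ours) assumes that c and the outlying singular values refer to the same scaling convention as the MP fit:

import numpy as np

def signal_singular_value(gamma, c):
    # theta_i as a function of an outlying singular value gamma_i (additive noise model)
    a = gamma ** 2 - (c + 1)
    return np.sqrt(2 * c / (a - np.sqrt(a ** 2 - 4 * c)))

def squared_cosine(theta, c):
    # phi_i: squared cosine of the angle between measured and signal cell-singular vectors
    return 1 - c * (1 + theta ** 2) / (theta ** 2 * (theta ** 2 + c))

def phiclust(outlying_singular_values, c):
    thetas = [signal_singular_value(g, c) for g in outlying_singular_values]
    return max((squared_cosine(t, c) for t in thetas), default=0.0)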
Uncertainty of phiclust
The uncertainties of the phiclust values are estimated using a sampling approach. The basic idea is to approximate the signal matrix P and add new realizations of the noise matrix by sampling from a random distribution. The uncertainty is then obtained from the phiclust values calculated for this ensemble of sampled matrices. First, we decompose a simulated or measured expression matrix \( \overset{\sim }{X} \) into a noise matrix Xr and a matrix Xs that contains deterministic structure. Xs is constructed from the relevant singular vectors, which were identified as described in the previous section. Note that Xs contains noise and is thus different from the signal matrix P. To create an approximation Ps of the signal matrix P, we replace the singular values γi used in the construction of Xs with the singular values θi of P, calculated using low-rank perturbation theory as shown in the previous section. The entries of the noise matrix Xr have a mean of 0 and a standard deviation of 1, as a result of preprocessing. Since the results of RMT are valid irrespective of the particular noise distribution, we can create additional realizations of the noise matrix by sampling from a normal distribution with mean 0 and standard deviation 1. By adding sampled noise matrices to the approximated signal matrix Ps, we can create an ensemble of matrices with the same singular value spectrum as the original measured expression matrix but different realizations of the noise. The uncertainty for positive and negative deviations from the mean is then calculated as the standard deviation for at least 50 sampled matrices. See Additional file 2, Section 2.4.3 for a detailed description.
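A condensed Python sketch of this resampling scheme, reusing the helper functions from the sketches above (the separate treatment of positive and negative deviations is simplified here to a single standard deviation):

import numpy as np

def phiclust_uncertainty(z, relevant, n_samples=50, seed=0):
    rng = np.random.default_rng(seed)
    n_genes, n_cells = z.shape
    c = n_cells / n_genes
    mp_edge = (1 + np.sqrt(c)) ** 2
    u, s, vt = np.linalg.svd(z / np.sqrt(n_genes), full_matrices=False)
    # approximate signal matrix Ps: relevant components, with theta_i in place of gamma_i
    thetas = np.array([signal_singular_value(s[i], c) for i in relevant])
    p_s = (u[:, relevant] * thetas) @ vt[relevant, :]
    values = []
    for _ in range(n_samples):
        noise = rng.standard_normal((n_genes, n_cells)) / np.sqrt(n_genes)
        s_new = np.linalg.svd(p_s + noise, compute_uv=False)
        outlying = s_new[s_new ** 2 > mp_edge]
        values.append(phiclust(outlying, c))
    return np.mean(values), np.std(values)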
Test for deviation from the MP distribution
To validate the use of the MP distribution, we test whether the bulk of the measured singular value distribution deviates significantly from the MP distribution. Singular values are considered to be part of the bulk if they are located below the MP upper bound and not associated with the transcriptome mode. We sample 1000 values from the MP distribution using the RMTstat R package (V 0.3) and subsequently test for similarity with the Kolmogorov-Smirnov test [36]. The resulting p values are adjusted for multiple hypothesis testing with the Benjamini-Hochberg procedure [37].
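A minimal Python analogue of this check (the original uses the RMTstat R package; here MP-distributed values are drawn by inverse-CDF sampling from the MP density with unit variance and ratio c, assuming c <= 1, and the two-sample Kolmogorov-Smirnov test from SciPy is used):

import numpy as np
from scipy.stats import ks_2samp

def sample_mp(c, size, rng, grid=10000):
    a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    x = np.linspace(a, b, grid)
    pdf = np.sqrt(np.maximum((b - x) * (x - a), 0.0)) / (2 * np.pi * c * np.maximum(x, 1e-12))
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(rng.uniform(size=size), cdf, x)

def bulk_vs_mp(bulk_eigenvalues, c, seed=0):
    # bulk_eigenvalues: squared singular values below the MP upper bound,
    # excluding the transcriptome mode
    rng = np.random.default_rng(seed)
    return ks_2samp(bulk_eigenvalues, sample_mp(c, 1000, rng))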
Confounder regression
scRNA-seq data contains various confounding factors that drive uninformative variability. These emerge either from technical issues (such as the varying efficiency of transcript recovery, which cannot be fully eliminated by normalization) or from biological factors (such as cell cycle phase, metabolic state, or stress). To account for these factors, a regression step, inspired by current gene expression normalization methods [22, 25], is included. We perform a linear regression by using each relevant singular vector as a response variable and the confounding factors as covariates. This is a valid approach because the singular vectors of the measured expression matrix contain normally distributed noise. The amount of variance explained by the nuisance parameters is then given by the value of the adjusted R2 (coefficient of determination) of this linear regression. To relate the regression result to the singular values, we consider the squared singular values (= eigenvalues) which correspond to the variance explained by the corresponding singular vectors/eigenvectors. Squared singular values are corrected by multiplication with (1 − adjusted R2) to retrieve the fraction of variance not explained by nuisance parameters. The square root of the result is the corrected singular value. See also Additional file 2, Section 3.2. For Additional file 1: Fig. S5a, each nuisance parameter was regressed on individually to compare the influence of each factor.
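The regression and the correction of the singular values could look roughly as follows in Python (ordinary least squares with an adjusted R2; a simplified stand-in for the actual implementation):

import numpy as np

def correct_singular_values(singular_values, cell_singular_vectors, confounders):
    # cell_singular_vectors: rows are the relevant cell-singular vectors
    # confounders: cells x n_confounders matrix of nuisance covariates
    n_cells, n_cov = confounders.shape
    design = np.column_stack([np.ones(n_cells), confounders])
    corrected = []
    for gamma, v in zip(singular_values, cell_singular_vectors):
        coef, *_ = np.linalg.lstsq(design, v, rcond=None)
        ss_res = np.sum((v - design @ coef) ** 2)
        ss_tot = np.sum((v - v.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        adj_r2 = 1.0 - (1.0 - r2) * (n_cells - 1) / (n_cells - n_cov - 1)
        # keep only the fraction of variance not explained by the nuisance parameters
        corrected.append(np.sqrt(max(gamma ** 2 * (1.0 - adj_r2), 0.0)))
    return np.array(corrected)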
Multiplicative noise
To model multiplicative noise, we use a rectangular random noise matrix X with the same dimensions as the measured expression matrix \( \overset{\sim }{X} \) and a square signal matrix P whose number of rows and columns is equal to the number of measured genes. The measured expression matrix \( \overset{\sim }{X} \) is then modeled as:
$$ \overset{\sim }{X}={\left(I+P\right)}^{\frac{1}{2}}\ X, $$
where I denotes the identity matrix. Importantly, the bulk of the singular value distribution of the measured expression matrix \( \overset{\sim }{X} \) still follows the MP distribution in this model. The singular values of the signal matrix P are calculated from the outlying singular values of \( \overset{\sim }{X} \) by:
$$ {\uptheta}_{\mathrm{i}}=\frac{2c}{\uplambda_{\mathrm{i}}-c-1-\sqrt{\left({\uplambda}_{\mathrm{i}}-a\right)\left({\uplambda}_{\mathrm{i}}-b\right)}} $$
with \( \mathrm{a},\mathrm{b}={\left(1\pm \sqrt{c}\right)}^2 \). The squared cosine of the angles between the corresponding singular vectors of the measured expression matrix and the signal matrix are then calculated as:
$$ {\upphi}_i^{mult}=\frac{1}{\uptheta_{\mathrm{i}}}\frac{{\uptheta_{\mathrm{i}}}^2-c}{\uptheta_{\mathrm{i}}\left(c+1\right)+2c} $$
As before, the largest of these values is taken to be the clusterability measure. More information on multiplicative perturbation can be found in [38].
Theoretically achievable clustering quality
To construct a Bayes classifier [21], which achieves the minimal error rate, we need to know the ground truth clustering. Hence, we used data simulated with Splatter [19], containing two clusters. For each ground truth cluster, we fit a multidimensional Gaussian to the corresponding entries of the singular vectors (see Additional file 1: Fig. S3a). We only consider singular vectors with singular values larger than predicted by the MP distribution. For the fit, we use the mclust [39] R package (V 5.4.6). We then construct a classifier by assigning a cell to the cluster for which it has the highest value of the fitted Gaussian distribution. This classifier is thus approximately a Bayes classifier (for a true Bayes classifier, we would need to know the exact distributions of the singular vector entries). The ARI [20] calculated based on this classification is thus approximately the best theoretically achievable ARI (tARI).
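A Python sketch of this construction (the Gaussian fits are done here with plain NumPy/SciPy rather than with mclust, and ground_truth is assumed to be a NumPy array of cluster labels):

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.metrics import adjusted_rand_score

def theoretically_achievable_ari(scores, ground_truth):
    # scores: cells x k matrix of entries of the relevant cell-singular vectors
    labels = np.unique(ground_truth)
    models = [multivariate_normal(scores[ground_truth == lab].mean(axis=0),
                                  np.cov(scores[ground_truth == lab], rowvar=False),
                                  allow_singular=True)
              for lab in labels]
    densities = np.column_stack([m.pdf(scores) for m in models])
    bayes_labels = labels[np.argmax(densities, axis=1)]
    return adjusted_rand_score(ground_truth, bayes_labels)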
We also tested the silhouette coefficient [8] as a potential alternative to the ARI for quantifying our notion of clusterability. The silhouette coefficient was calculated on the first singular vector using Euclidean distances. In Additional file 1: Fig. S4, the silhouette coefficient averaged over all cells is reported. The theoretically achievable silhouette coefficient tSIL is defined as the silhouette coefficient of the Bayes classification described in the previous paragraph. The calculation of tARI and tSIL is described in more detail in Additional file 2, section 2.5.
Clustering methods
For the validation of the tARI and tSIL, several clustering methods were used on simulated data with two clusters. Seurat clustering [1] was performed with the Seurat R package with 10 principal components (PCs) and 20 nearest neighbors. Three different resolution parameters were used: 0.1, 0.6, and 1.6. Scanpy clustering [2] was performed with the scanpy python package with 10 PCs and 20 nearest neighbors. Three different resolution parameters were used: 0.1, 0.6, and 1.6. Hierarchical clustering [4] was performed on the first 10 PCs and Euclidean distances. The hierarchical tree was built with the Ward linkage and the tree was cut at a height where 2 clusters could be identified. K-means [3] was performed on the first 10 PCs using Euclidean distances and two centers. TSCAN [40] was calculated on the first 10 PCs. In Additional file 1: Fig. S7, k-means clustering was performed on the first 3 principal components and using Euclidean distances.
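For orientation, an approximate Python equivalent of two of these baselines (k-means and Ward hierarchical clustering on the first 10 principal components, using scikit-learn; the graph-based Seurat/Scanpy clustering is not reproduced here):

from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering

def baseline_clusterings(log_expression, n_pcs=10, n_clusters=2, seed=0):
    # log_expression: cells x genes matrix of log-normalized expression
    pcs = PCA(n_components=n_pcs, random_state=seed).fit_transform(log_expression)
    kmeans_labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(pcs)
    ward_labels = AgglomerativeClustering(n_clusters=n_clusters, linkage="ward").fit_predict(pcs)
    return kmeans_labels, ward_labels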
Clusterability measures
ROGUE [9] is an entropy-based clusterability measure. A null model is defined under the assumption of Gamma-Poisson distributed gene expression and its differential entropy is then compared to the actual differential entropy of the gene expression profile. For the RNA-mix data set, ROGUE (V 1.0) was used with 1 sample (see Fig S6), "UMI" platform, and a span of 0.6. For the simulated data sets, ROGUE was used with k = 10 (Additional file 1: Fig. S4 d). The silhouette coefficient was calculated with the cluster R package (V 2.1.0) using Euclidean distances in the space of the relevant singular vectors. The reported values for the silhouette coefficients are average values per cluster. The confidence intervals given in Additional file 1: Fig. S6 and S7 are standard deviations of its values per cluster.
Variance driving genes
Genes that drive the variance in the significant singular vectors can be used to explore the biological information in the sub-structures. Genes with large positive or negative entries in a gene-singular vector are localized in cells with high positive or negative entries in the corresponding cell-singular vector. It is also possible to assess the signal-to-noise ratio for the genes by calculating the squared cosine of the angle between the gene singular vectors of the measured expression matrix \( \overset{\sim }{X} \) and the gene singular vectors of the signal matrix P, given by
$$ {\phi}_i^g=\mathit{\cos}{\left({\overset{\sim }{\alpha}}_i\right)}^2=1-\frac{\left(c+{\theta}_i^2\right)}{\theta_i^2\left({\theta}_i^2+1\right)}, $$
where c is the cell-to-gene ratio. The largest of the \( {\phi}_i^g \)is then called \( {\phi}_{clust}^g \), the gene phiclust (g-phiclust). See Additional file 2, section 2.4 for a more detailed discussion.
The simulated data sets in Additional file 1: Fig. S1 comprised 201 cells and 350 genes. The random noise matrix was sampled from a normal distribution with mean 0 and variance 1 in panels a and b, or from a Poisson distribution with parameter 1 in panels c and d. The rank 1 signal matrix was constructed from one cell-singular vector and one gene-singular vector. The cell-singular vector consisted of 67 entries equal to \( 1/\sqrt{N_{cell}} \) and all other entries equal to \( -1/\sqrt{N_{cell}} \) , where Ncell is the number of cells. The gene-singular vector consisted of 200 entries equal to \( 1/\sqrt{N_{gene}} \) and the rest equal to \( -1/\sqrt{N_{gene}} \) , where Ngene is the number of genes. The signal matrix was then created by matrix multiplication of the gene-singular vector and the transposed cell-singular vector times the singular value θ (θ = 2 in a,c and θ = 5 in b,d). In Additional file 1: Fig. S2f, g a rank 1 signal matrix was created similarly as described above. The cell-singular vector with a number of entries matching the number of cells in the cluster was constructed as before. The gene-singular vector was drawn from a normal distribution and subsequently normalized to unit length. The rank 1 signal matrix was then added to the preprocessed expression matrix of the indicated cluster.
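The rank-1 construction used in panels a and b can be reproduced in a few lines of Python (same dimensions and singular value as described above; the seed is arbitrary):

import numpy as np

def simulate_rank1(theta=2.0, n_genes=350, n_cells=201, n_cells_up=67, n_genes_up=200, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_genes, n_cells))          # normal noise, mean 0, variance 1
    cell_vec = np.full(n_cells, -1 / np.sqrt(n_cells))
    cell_vec[:n_cells_up] = 1 / np.sqrt(n_cells)
    gene_vec = np.full(n_genes, -1 / np.sqrt(n_genes))
    gene_vec[:n_genes_up] = 1 / np.sqrt(n_genes)
    signal = theta * np.outer(gene_vec, cell_vec)             # rank-1 signal matrix
    return noise + signal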
The remaining simulated data sets were produced with the splatter [19] R package (V 1.10.1). The parameters used for the simulation are shown in Table S1. For Fig. 1c,d, Additional file 1: Fig. S3b-e, Additional file 1: Fig. S4, and Additional file 1: Fig. S8a, the simulations for each parameter were performed 50 times, each with a different seed. The results were averaged over the 50 runs. Confounder regression was performed for the total number of transcripts per cell.
PBMC data [22] was downloaded from the 10x genomics website (https://cf.10xgenomics.com/samples/cell/pbmc3k/pbmc3k_filtered_gene_bc_matrices.tar.gz). For the calculation of the tARI, clustering with Scanpy, TSCAN, k-means, and hierarchical clustering, preprocessing was performed with the scanpy python package (V 1.4.6) following the provided pipeline (https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html) for the filtering of cells and genes, normalization, and log-transformation as well as cluster annotation. For the clustering with Seurat, the provided Seurat pipeline was used (https://satijalab.org/seurat/archive/v3.2/pbmc3k_tutorial.html) for preprocessing, such as cell and gene filtering, normalization, log-transformation, and cluster annotation using the Seurat R package (V 3.1.5). CD8 T cells and B cells were extracted from the data, and each cluster was standardized gene-wise and cell-wise before the calculation of the SVD. To remove any sub-structure in these clusters and before the reconstruction of the matrices from the SVD, singular values above the MP distribution were moved into the bulk, and the transcriptome mode (i.e., the singular vector that would have the largest singular value without normalization, see Additional file 2) was moved above the MP distribution. Then, two synthetic clusters containing 150 cells each were created from the cleaned-up original clusters. For cluster 1, a weighted average of a randomly picked B cell with expression profile cB and a randomly picked CD8 T cell with expression profile cCD8 T was calculated according to: c1 = α ∙ cB + (1 − α) ∙ cCD8 T. For cluster 2, the weights were flipped: c2 = (1 − α) ∙ cB + α ∙ cCD8 T. α was chosen in a range from 0 to 1. α close to 0.5 produced highly similar clusters, while α close to 0 or 1 produced maximally different clusters (see Fig S3e). For each value of α, the procedure was repeated 50 times, each with a different seed for selecting 300 cells per cell type, and the results were averaged.
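The mixing step itself is simple; a Python illustration (expression profiles are rows of the cleaned-up matrices; whether the same randomly picked cells are reused for both synthetic clusters is a detail left open in the description above, and they are reused here):

import numpy as np

def mix_clusters(b_cells, cd8_t_cells, alpha, n_cells=150, seed=0):
    # b_cells, cd8_t_cells: cells x genes arrays of cleaned-up expression profiles
    rng = np.random.default_rng(seed)
    idx_b = rng.integers(0, b_cells.shape[0], size=n_cells)
    idx_t = rng.integers(0, cd8_t_cells.shape[0], size=n_cells)
    cluster1 = alpha * b_cells[idx_b] + (1 - alpha) * cd8_t_cells[idx_t]
    cluster2 = (1 - alpha) * b_cells[idx_b] + alpha * cd8_t_cells[idx_t]
    return cluster1, cluster2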
RNA-mix data [23] was downloaded from the provided GitHub page. The data were normalized with the R scran package (V 1.14.6) and then log-transformed. Confounder regression was performed for the total number of transcripts, average mitochondrial gene expression, and average ribosomal gene expression. Two different merged clusters were created from the provided RNA mixtures as shown in Additional file 1: Fig. S6.
The bone marrow mononuclear cell data set (BMNC) [27] was downloaded from the R package SeuratData (bmcite, V 0.2.1). Normalization and calculation of the G2M score [41] were performed with the Seurat R package (V 3.1.5). Confounder regression was performed for the log-transformed total number of transcripts, cell cycle score, and average expression of: mitochondrial genes and ribosomal genes (list obtained from the HGNC website).
For the fetal kidney data set [30], the same preprocessing and normalization was used as reported previously (scran R package [42]). The data was then log-transformed and the G2M score was calculated with the Seurat R package. Confounder regression was performed for the log-transformed total number of transcripts, G2M scores, and the average expression of: mitochondrial genes, ribosomal genes, and stress-related genes [43].
Uniform Manifold Approximation and Projections [44] (UMAPs) for individual clusters were calculated with the R package umap (V 0.2.7.0) on the first 10 PCs, 20 nearest neighbors, min_dist = 0.3, and Euclidean distances. The umap for BMNC data was calculated with the Seurat R package using 2000 highly variable genes (hvg), d = 50, k = 50, min.dist = 0.6, and metric = cosine. For the fetal kidney data set, a force-directed graph layout was calculated using the scanpy python package. The graph was constructed using 100 nearest neighbors, 50 PCs, and the ForceAtlas2 layout for visualization.
Differential expression test
Differentially expressed genes within the sub-clusters found in Additional file 1: Fig. S9 and Additional file 1: Fig. S10 were calculated with the function findMarkers of the scran R package on log-transformed normalized counts. Genes with a false discovery rate below 0.05 were selected and then sorted by log2 fold change. In Figures S9e and S10e, genes with the top 20 highest/lowest values in the gene singular vectors are listed and colored blue if they correspond to the top 20 DE genes.
A human fetal kidney (female) at week 15 of gestation was used for immunofluorescence using the same procedure as reported previously [30]. The following primary antibodies were used: rabbit anti-UPK1A (1:35, HPA049879, Atlas Antibodies), mouse anti-KRT7 (1:200, #MA5-11986, Thermo Fisher Scientific), rabbit anti-CDH1 (1:50, SC-7870, Santa Cruz), rabbit anti-CLDN1 (1:100, #717800, Thermo Fisher Scientific), goat anti-CAV2 (1:100, AF5788-SP, R&D Systems), mouse anti-AKAP12 (1:50, sc-376740, Santa Cruz), rabbit anti-CLDN11 (1:50, HPA013166, SIGMA Aldrich), mouse anti-POSTN (1:100, sc-398631, Santa Cruz), and goat anti-SULT1E1 (1:50, AF5545-SP, R&D Systems). The secondary antibodies were all purchased from Invitrogen and diluted to 1:500: Alexa Fluor 594 donkey anti-mouse (A21203), Alexa Fluor 594 donkey anti-rabbit (A21207), Alexa Fluor 647 donkey anti-mouse (A31571), Alexa Fluor 647 donkey anti-rabbit (A31573), and Alexa Fluor 647 donkey anti-goat (A21447). The sections were imaged on a Nikon Ti-Eclipse epifluorescence microscope equipped with an Andor iXON Ultra 888 EMCCD camera (Nikon, Tokyo, Japan).
All sequencing data sets were obtained from publicly available resources. The BMNC data can be downloaded with the R package SeuratData, named "bmcite." The fetal kidney data is available in the GEO database under the accession number GSE114530. The PBMC data can be downloaded at https://cf.10xgenomics.com/samples/cell/pbmc3k/pbmc3k_filtered_gene_bc_matrices.tar.gz and the RNA-mix data is available at https://github.com/LuyiTian/sc_mixology, named "mRNAmix_qc".
Satija R, Farrell JA, Gennert D, Schier AF, Regev A. Spatial reconstruction of single-cell gene expression data. Nat Biotechnol. 2015;33(5):495–502. https://doi.org/10.1038/nbt.3192.
Wolf FA, Angerer P, Theis FJ. SCANPY: large-scale single-cell gene expression data analysis. Genome Biol. 2018;19(1):15. https://doi.org/10.1186/s13059-017-1382-0.
Hartigan JA, Wong MA. Algorithm AS 136: A K-means clustering algorithm. Appl Stat. 1979;28(1):100. https://doi.org/10.2307/2346830.
Murtagh F, Legendre P. Ward's hierarchical agglomerative clustering method: which algorithms implement ward's criterion? J Classif. 2014;31(3):274–95. https://doi.org/10.1007/s00357-014-9161-z.
Kiselev VY, Andrews TS, Hemberg M. Challenges in unsupervised clustering of single-cell RNA-seq data. Nat Rev Genet. 2019;20(5):273–82. https://doi.org/10.1038/s41576-018-0088-9.
Ackerman M, Ben-David S. Clusterability: a theoretical study. In: van Dyk, David; Welling M, editor. Proc. Twelth Int. Conf. Artif. Intell. Stat., vol. 5, PMLR; 2009, p. 1–8.
Adolfsson A, Ackerman M, Brownstein NC. To cluster, or not to cluster: An analysis of clusterability methods. Pattern Recognit. 2019;88:13–26. https://doi.org/10.1016/j.patcog.2018.10.026.
Rousseeuw PJ. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math. 1987;20:53–65. https://doi.org/10.1016/0377-0427(87)90125-7.
Liu B, Li C, Li Z, Wang D, Ren X, Zhang Z. An entropy-based metric for assessing the purity of single cell populations. Nat Commun. 2020;11(1):1–13. https://doi.org/10.1038/s41467-020-16904-3.
Paul D, Aue A. Random matrix theory in statistics: a review. J Stat Plan Inference. 2014;150:1–29. https://doi.org/10.1016/J.JSPI.2013.09.005.
Potters M, Bouchaud J-P, Laloux L. Financial applications of random matrix theory: old laces and new pieces. Acta Physiol Pol. 2005;35:2767–84.
Bohigas O, Giannoni MJ, Schmit C. Characterization of chaotic quantum spectra and universality of level fluctuation laws. Phys Rev Lett. 1984;52(1):1–4. https://doi.org/10.1103/PhysRevLett.52.1.
Aparicio L, Bordyuh M, Blumberg AJ, Rabadan R. A random matrix theory approach to denoise single-cell data. Patterns. 2020;1(3):100035. https://doi.org/10.1016/j.patter.2020.100035.
Livan G, Novaes M, Vivo P. Introduction to Random Matrices Theory and Practice. Switzerland: Springer; 2018.
Tracy CA, Widom H. Level-spacing distributions and the Airy kernel. Commun Math Phys. 1994;159(1):151–74. https://doi.org/10.1007/BF02100489.
Benaych-Georges F, Nadakuditi RR. The singular values and vectors of low rank perturbations of large rectangular random matrices. J Multivar Anal. 2012;111:120–35. https://doi.org/10.1016/j.jmva.2012.04.019.
Wigner EP. Characteristic vectors of bordered matrices with infinite dimensions. Ann Math. 1955;62:548–64.
Macmahon M, Garlaschelli D. Community detection for correlation matrices. Phys Rev X. 2015;021006(2):1–34. https://doi.org/10.1103/PhysRevX.5.021006.
Zappia L, Phipson B, Oshlack A. Splatter: Simulation of single-cell RNA sequencing data. Genome Biol. 2017;18(1):174. https://doi.org/10.1186/s13059-017-1305-0.
Gates AJ, Ahn Y-Y. The impact of random models on clustering similarity. vol. 18. 2017.
Fukunaga K. Introduction to statistical pattern recognition. 2nd ed. San Diego: Elsevier; 1990. https://doi.org/10.1016/c2009-0-27872-x.
Hafemeister C, Satija R. Normalization and variance stabilization of single-cell RNA-seq data using regularized negative binomial regression. Genome Biol. 2019;20(1):296. https://doi.org/10.1186/s13059-019-1874-1.
Tian L, Dong X, Freytag S, Lê Cao KA, Su S, JalalAbadi A, et al. Benchmarking single cell RNA-sequencing analysis pipelines using mixture control experiments. Nat Methods. 2019;16(6):479–87. https://doi.org/10.1038/s41592-019-0425-8.
Risso D, Perraudeau F, Gribkova S, Dudoit S, Vert J-P. A general and flexible method for signal extraction from single-cell RNA-seq data. Nat Commun. 2018;9:1–17. https://doi.org/10.1038/s41467-017-02554-5.
Grün D. Revealing dynamics of gene expression variability in cell state space. Nat Methods. 2020;17(1):45–9. https://doi.org/10.1038/s41592-019-0632-3.
Saelens W, Cannoodt R, Todorov H, Saeys Y. A comparison of single-cell trajectory inference methods. Nat Biotechnol. 2019;37(5):547–54. https://doi.org/10.1038/s41587-019-0071-9.
Stuart T, Butler A, Hoffman P, Hafemeister C, Papalexi E, Mauck WM, et al. Comprehensive integration of single-cell data. Cell. 2019;177:1888–1902.e21. https://doi.org/10.1016/j.cell.2019.05.031.
Mello FV, Land MGP, Costa ES, Teodósio C, Sanchez ML, Bárcena P, et al. Maturation-associated gene expression profiles during normal human bone marrow erythropoiesis. Cell Death Dis. 2019;5(1):69. https://doi.org/10.1038/s41420-019-0151-0.
Villani AC, Satija R, Reynolds G, Sarkizova S, Shekhar K, Fletcher J, et al. Single-cell RNA-seq reveals new types of human blood dendritic cells, monocytes, and progenitors. Science. 2017:356. https://doi.org/10.1126/science.aah4573.
Hochane M, van den Berg PR, Fan X, Bérenger-Currias N, Adegeest E, Bialecka M, et al. Single-cell transcriptomics reveals gene expression dynamics of human fetal kidney development. PLoS Biol. 2019;17(2):e3000152. https://doi.org/10.1371/journal.pbio.3000152.
Adamson B, Norman TM, Jost M, Cho MY, Nuñez JK, Chen Y, et al. A multiplexed single-cell CRISPR screening platform enables systematic dissection of the unfolded protein response. Cell. 2016;167:1867–1882.e21. https://doi.org/10.1016/j.cell.2016.11.048.
Candès EJ, Li X, Ma Y, Wright J. Robust principal component analysis? J ACM. 2011;58:37. https://doi.org/10.1145/1970392.1970395.
Bullett S, Fearn T, Smith F, Smolyarenko IE. An introduction to random matrix theory. Adv Tech Appl Math. 2016:139–71. https://doi.org/10.1142/9781786340238_0005.
Mingo James A, Speicher R. Free probability and random matrices. 1st ed. Springer New York LLC; 2017.
Bun J, Bouchaud J, Potters M. Cleaning large correlation matrices: tools from random matrix theory. Phys Rep. 2017;666:1–109. https://doi.org/10.1016/j.physrep.2016.10.005.
Kendall K, George M. Kolmogorov–Smirnov test. Concise Encycl Stat. 2008:283–7. https://doi.org/10.1007/978-0-387-32833-1_214.
Haynes W. Benjamini–Hochberg method. Encycl Syst Biol. 2013:78–8. https://doi.org/10.1007/978-1-4419-9863-7_1215.
Benaych-Georges F, Nadakuditi RR. The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. Adv Math (N Y). 2011;227(1):494–521. https://doi.org/10.1016/j.aim.2011.02.007.
Scrucca L, Fop M, Murphy TB, Raftery AE. Mclust 5: Clustering, classification and density estimation using Gaussian finite mixture models. RJ. 2016;8(1):289–317. https://doi.org/10.32614/rj-2016-021.
Ji Z, Ji H. TSCAN: Pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis. Nucleic Acids Res. 2016;44(13):e117. https://doi.org/10.1093/nar/gkw430.
Tirosh I, Izar B, Prakadan SM, Wadsworth MH, Treacy D, Trombetta JJ, et al. Dissecting the multicellular ecosystem of metastatic melanoma by single-cell RNA-seq. Science. 2016;352:189–96. https://doi.org/10.1126/science.aad0501.
Lun ATL, McCarthy DJ, Marioni JC. A step-by-step workflow for low-level analysis of single-cell RNA-seq data with Bioconductor. F1000Res. 2016;5:2122. https://doi.org/10.12688/f1000research.9501.2.
van den Brink SC, Sage F, Vértesy Á, Spanjaard B, Peterson-Maduro J, Baron CS, et al. Single-cell sequencing reveals dissociation-induced gene expression in tissue subpopulations. Nat Methods. 2017;14(10):935–6. https://doi.org/10.1038/nmeth.4437.
McInnes L, Healy J, Melville J. UMAP: Uniform manifold approximation and projection for dimension reduction. J Open Source Softw. 2018;3(29):861. https://doi.org/10.21105/joss.00861.
Mircea M, Hochane M, Fan X, Chuva de Sousa Lopes SM, Garlaschelli D, Semrau S. Phiclust: a clusterability measure for single-cell transcriptomics reveals phenotypic subpopulations. Zenodo. 2021. https://doi.org/10.5281/ZENODO.5785793.
Guhr SSO, Sachs M, Wegner A, Becker JU, Meyer TN, Kietzmann L, et al. The expression of podocyte-specific proteins in parietal epithelial cells is regulated by protein degradation. Kidney Int. 2013;84(3):532–44. https://doi.org/10.1038/ki.2013.115.
Wang P, et al. Dissecting the global dynamic molecular profiles of human fetal kidney development by single-cell RNA sequencing. Cell Rep. 2018;24:3554–3567.e3.
We thank Ahmed Mahfouz for valuable feedback on the manuscript and the staff of Gynaikon, Rotterdam, as well as the anonymous tissue donors for the human fetal material.
The R package phiclust is available under the GNU General Public License V3.0 at Github https://github.com/semraulab/phiclust and Zenodo https://zenodo.org/record/5785793#.Ybs5wn3MK3I [45].
Andrew Cosgrove was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
The review history is available as Additional File 4.
M.M, S.S., and D.G. were supported by the Netherlands Organisation for Scientific Research (NWO/OCW, www.nwo.nl), as part of the Frontiers of Nanoscience (NanoFront) program. We acknowledge funding by an NWO/OCW Vidi grant (016.Vidi.189.007) for M.H. and S.S., by China Scholarship Council (CSC 201706320328) to X.F, and by a European Research Council Consolidator Grant OVOGROWTH (ERC-CoG-2016-725722) to S.C.S.L. This work was carried out on the Dutch national infrastructure with the support of SURF Foundation.
Leiden Institute of Physics, Leiden University, Leiden, The Netherlands
Maria Mircea, Diego Garlaschelli & Stefan Semrau
Leiden Academic Center for Drug Research, Leiden University, Leiden, The Netherlands
Mazène Hochane
Department of Anatomy and Embryology, Leiden University Medical Center, Leiden, The Netherlands
Xueying Fan & Susana M. Chuva de Sousa Lopes
Networks Unit, IMT School for Advanced Studies, Lucca, Italy
Diego Garlaschelli
S.S., M.H., D.G., and M.M. developed the clusterability measure. M.M. designed and implemented the algorithms. S.S., M.H., and M.M. analyzed and interpreted the results. S.M.S.L. provided the fetal kidney samples. X.F. sectioned and performed the immunostaining of the fetal kidney samples. M.H. imaged the kidney samples and interpreted the imaging results. S.S., M.H., and M.M. wrote the manuscript with contributions from D.G. All authors have read and approved the final version of the manuscript.
Correspondence to Stefan Semrau.
The collection and use of human material in this study was approved by the Medical Ethics Committee from the Leiden University Medical Center (P08.087), and the study was conducted in accordance with the Declaration of Helsinki. The gestational age was determined by ultrasonography, and the tissue was obtained from women undergoing elective abortion. The material was donated with written informed consent. Questions about the human material should be directed to S. M. Chuva de Sousa Lopes ([email protected]).
The authors declare no competing interests.
Supplementary figures: Fig. S1 to Fig. S11.
Contains further details on the mathematical derivation of phiclust.
Contains a more detailed interpretation of the sub-clustering results for the human fetal kidney [46, 47].
Review history.
Table S1 Information on all scRNA-seq data sets and parameters for simulated data sets.
Table S2 Tables with information on nuisance parameters.
Table S3 Results of differential expression tests for BMNC data.
Table S4 Results of differential expression tests for kidney data.
Mircea, M., Hochane, M., Fan, X. et al. Phiclust: a clusterability measure for single-cell transcriptomics reveals phenotypic subpopulations. Genome Biol 23, 18 (2022). https://doi.org/10.1186/s13059-021-02590-x
Clusterability
scRNA-seq
Random matrix theory | CommonCrawl |
The European Physical Journal C
May 2013, 73:2439
Counting muons to probe the neutrino mass spectrum
Carolina Lujan-Peschard
Giulia Pagliaroli
Francesco Vissani
Regular Article - Theoretical Physics
The experimental evidence that θ_13 is large opens new opportunities to identify the neutrino mass spectrum. We outline a possibility to investigate this issue by means of conventional technology. The ideal set-up turns out to be a long-baseline experiment: the muon neutrino beam, with 10^20 protons on target, has an average energy of 6 (8) GeV; the neutrinos, after propagating 6000 (8000) km, are observed by a 1 Mton muon detector with a muon energy threshold of 2 GeV. The expected number of muon events is about 1000, and the difference between the two neutrino spectra is sizeable, about 30 %. This allows the identification of the mass spectrum just by counting muon tracks. The signal events are well characterized experimentally by their time and direction of arrival, and 2/3 of them are in a region with little atmospheric neutrino background, namely, between 4 GeV and 10 GeV. The distances from CERN to Baikal Lake and from Fermilab to KM3NET, or ANTARES, fit in the ideal range.
Atmospheric Neutrino · Matter Effect · Neutrino Energy · Normal Hierarchy · Inverted Hierarchy
We thank G. Battistoni, A. Capone, R. Coniglione, P. Coyle, G.V. Domogatsky, S. Galatà, M. Goodman, P. Lipari, S. Ragazzi, G. Riccobene, F. Terranova, A. Varaschin, L. Votano and an anonymous Referee of EPJC for useful discussions. FV is grateful to the Organizers and the Participants in the Orca meeting at Paris for precious feedback [58].
Appendix: Remarks on the matter effect
In this appendix, we examine the oscillations by employing some simplifying assumptions, in order to obtain a qualitative understanding of the results in Sect. 2: we consider oscillations with a single scale, and we also consider oscillations in constant matter density. In fact, the second hypothesis is rather inaccurate in the conditions in which we are interested, and we need (and we use) a more accurate evaluation of the oscillation probabilities for the actual calculations. However, a qualitative discussion based on simple-minded analytical results complements usefully the numerical results discussed in the main text.
We try to go immediately to the main point, postponing derivations and refinements. Under suitable assumptions the probability that a muon converts into an electron is simply,
$$ P_{\mu e}=\sin^2\theta_{23} \sin^22 \widetilde{\theta_{13}} \sin^2\widetilde{\varphi} \quad \mbox{with }\widetilde{\varphi} =\frac{\widetilde{\varDelta m^2} L}{4 E} . $$
(A.1)
Most of the results in which we are interested follow from the above simple formula. In Eq. (A.1), we introduced the usual matter-modified mixing angle and squared-mass-difference
The sign of Δ is a matter of convention; the ratio between the matter and vacuum terms is
where G_F is the Fermi coupling and we identify Δm^2 with \(\varDelta m^{2}_{23}\). Now, instead, the sign is important: it is plus for normal hierarchy and minus for inverted hierarchy. Considering the average matter density of the Earth ρ=5.5 g/cm^3 and Y_e=1/2, we get n_e=1.7×10^24 e−/cm^3 for the electronic density. Thus, the characteristic length of MSW theory is,
$$ L_* \equiv\frac{1}{\sqrt{2} G_F n_e}\sim1000\mbox{ km.} $$
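As a rough cross-check of this number (our own back-of-the-envelope evaluation in Python, using PDG values of the Fermi constant and of ħc; all symbols below are ours):

import numpy as np

GF = 1.166e-5                 # Fermi constant, GeV^-2 (natural units)
hbar_c = 0.1973e-15           # GeV * m
n_e = 1.7e24 * 1e6            # electron density in m^-3 (1.7e24 per cm^3)
V = np.sqrt(2) * GF * hbar_c ** 3 * n_e    # matter potential in GeV
L_star = hbar_c / V                        # characteristic MSW length in meters
print(L_star / 1e3)                        # about 9e2 km, i.e. of order 1000 km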
We see that, for normal hierarchy, the maximum of P_μe is obtained when: (1) Δ is as small as possible, in order to maximize \(\sin2\widetilde{\theta_{13}}\); moreover, (2) the phase of propagation is \(\widetilde{\varphi} \sim\pi/2\). These conditions are met when the neutrino energy and the propagation distance are,
In the case of inverted hierarchy, the matter effect depresses P_μe, which becomes negligible.
In principle, one could check this simple prediction concerning P_μe; however, it is practically easier to study muons rather than electrons. Let us therefore consider the survival probability P_μμ, focusing again on the normal hierarchy case. We want the local maximum of P_μμ, resulting from P_μτ and P_μe, to be as small as possible. Thus, we are interested in the case when the minimum of P_μτ occurs in the vicinity of the energy identified in Eq. (A.5). When the phase of oscillation of P_μτ is close to the vacuum phase, the condition Δm^2 L/(2E_max)=2π gives L∼6000 km. This suggests that the distance that amplifies the matter effect on P_μμ is between 6000 and 9000 km, which does not disagree severely with the quantitative conclusions of the precise numerical analysis of Sect. 2.
Finally, we collect more arguments and technical remarks concerning the matter effect. Let us write in full generality the amplitude of three flavor neutrino oscillations
where a_ij depend upon \(\varDelta m^{2}_{23}\), θ_13, \(\varDelta m^{2}_{12}\), θ_12, and H=±1 (the type of mass hierarchy, normal/inverted): see in particular Eqs. (1), (3), (5), (6) of [41]. By solving the evolution equations numerically, we calculate the complex numbers a_ij and therefore the amplitudes and the probabilities. At this level, there is no approximation (except the numerical ones).
When the "solar" \(\varDelta m^{2}_{12}\) is set to zero—i.e., when its effects are negligible—the only non-zero off-diagonal elements a_ij in Eq. (A.6) are a_13 and a_31. The CP-violating phase δ drops out from the probabilities \(P_{\ell\ell'}=|\mathcal{A}_{\ell'\ell}|^{2}\), which moreover become symmetric, P_ℓℓ′=P_ℓ′ℓ for each ℓ,ℓ′=e,μ,τ. Therefore, in this approximation we have three independent probabilities and all the others are fixed. We can choose, e.g.,
so that, e.g., P_ee = 1 − P_μe − P_τe = |a_11|^2. From these formulas we obtain
where \(\hat{\varphi}\) is a (rapidly varying) phase factor. Two important remarks are in order:
The last equation shows that P_μe is large in the region where P_ee is small, and that P_μτ remains close to zero in the first non-trivial minimum near \(\hat{\varphi}=2\pi\), even when P_ee ≈ 0.3–0.4 due to the matter effect.
The sign of Δm^2 controls the sign of the vacuum Hamiltonian; therefore, switching between the two mass hierarchies or switching between neutrinos and antineutrinos has the same effect; e.g., \(P_{{e}{\mu}}(\mbox{IH})=P_{\bar{e}\bar{\mu}}(\mbox{NH})\).
The first remark is consistent with our numerical findings that P_μe is amplified and P_μτ does not deviate strongly from its behavior in vacuum under the conditions that are relevant for our discussion.
Proceeding further with the approximations, and considering at this point the case of constant matter density, we obtain simple and closed expressions. For the case of normal mass hierarchy, they read
$$ \begin{aligned}[c] & a_{13}=a_{31}=-i \sin\widetilde{\varphi} \sin2 \widetilde{\theta_{13}}, \\ &a_{11}=\cos\widetilde{\varphi} + i \sin\widetilde{\varphi} \cos2 \widetilde{\theta_{13}}=a_{33}^*, \\ &a_{22}=\cos\widetilde{\varphi}' + i \sin\widetilde{ \varphi}' \end{aligned} $$
$$ \widetilde{\varphi}'=\frac{{\varDelta m^2} L}{4 E} (1+\varepsilon). $$
(A.10)
From Eqs. (A.7) and (A.9), we recover the expression of Eq. (A.1), used in the above discussion. In the approximation of constant matter density, the phase \(\hat{\varphi}\) entering the expression of the probability P_μτ is given by \(\sqrt{P_{ee}}\cos\hat{\varphi}\equiv\cos\widetilde{\varphi}\cos\widetilde{\varphi}'- \sin\widetilde{\varphi}\sin\widetilde{\varphi}'\cos2\widetilde {\theta _{13} }\). This is close to the vacuum phase when ε is large or small in comparison to 1: in fact, we have \(\cos2\widetilde{\theta_{13} }\sim\pm1\) and \(\widetilde{\varphi}\sim\pm \varDelta m^{2} L/(4 E) (1-\varepsilon)\) from Eq. (A.2), so that \(\cos\hat{\varphi}\sim \cos[\varDelta m^{2} L/(2 E)]\).
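To make the qualitative statements above concrete, here is a short Python sketch that evaluates Eq. (A.1) in the constant-density approximation; the matter-modified mixing angle and squared-mass difference are taken from the standard two-flavor MSW expressions (their explicit form is not reproduced in this appendix), and the numerical values of Δm^2, sin^2θ_23, sin^2 2θ_13 and L_* are round illustrative inputs rather than the exact values used in the paper:

import numpy as np

def p_mu_e(E_GeV, L_km, hierarchy=+1, dm2=2.4e-3, sin2_th23=0.5,
           sin2_2th13=0.09, L_star_km=1000.0):
    # Eq. (A.1) with matter-modified parameters in the constant-density approximation
    phi_vac = 1.267 * dm2 * L_km / E_GeV                     # vacuum phase Delta m^2 L / (4E)
    s2 = sin2_2th13
    c2 = np.sqrt(1.0 - s2)                                   # cos(2 theta_13), first octant
    eps = hierarchy * E_GeV / (2 * 1.267 * dm2 * L_star_km)  # ratio of matter to vacuum term
    scale = np.sqrt((c2 - eps) ** 2 + s2)                    # Delta m^2 (matter) / Delta m^2
    sin2_2th13_matter = s2 / scale ** 2                      # matter-modified sin^2(2 theta_13)
    return sin2_th23 * sin2_2th13_matter * np.sin(scale * phi_vac) ** 2

# 6 GeV neutrinos after 6000 km: sizeable for normal, negligible for inverted hierarchy
print(p_mu_e(6.0, 6000.0, hierarchy=+1), p_mu_e(6.0, 6000.0, hierarchy=-1))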
B.T. Cleveland et al., Astrophys. J. 496, 505 (1998)
K.S. Hirata et al. (Kamiokande-II Collaboration), Phys. Rev. D 44, 2241 (1991) [E-ibid. D 45, 2170 (1992)]
S. Fukuda et al. (Super-Kamiokande Collaboration), Phys. Rev. Lett. 86, 5651 (2001)
W. Hampel et al. (GALLEX Collaboration), Phys. Lett. B 447, 127 (1999)
J.N. Abdurashitov et al. (SAGE Collaboration), Phys. Rev. C 60, 055801 (1999)
M. Altmann et al. (GNO Collaboration), Phys. Lett. B 616, 174 (2005)
Q.R. Ahmad et al. (SNO Collaboration), Phys. Rev. Lett. 89, 011301 (2002)
C. Arpesella et al. (Borexino Collaboration), Phys. Rev. Lett. 101, 091302 (2008)
Y. Fukuda et al. (Kamiokande Collaboration), Phys. Lett. B 335, 237 (1994)
Y. Fukuda et al. (Super-Kamiokande Collaboration), Phys. Rev. Lett. 81, 1562 (1998)
M. Ambrosio et al. (MACRO Collaboration), Phys. Lett. B 434, 451 (1998)
W.W.M. Allison et al. (Soudan-2 Collaboration), Phys. Lett. B 449, 137 (1999)
M.H. Ahn et al. (K2K Collaboration), Phys. Rev. Lett. 90, 041801 (2003)
D.G. Michael et al. (MINOS Collaboration), Phys. Rev. Lett. 97, 191801 (2006)
S. Abe et al. (KamLAND Collaboration), Phys. Rev. Lett. 100, 221803 (2008)
N. Agafonova et al. (OPERA Collaboration), Phys. Lett. B 691, 138 (2010)
K. Abe et al. (T2K Collaboration), Phys. Rev. D 85, 031103 (2012)
B. Pontecorvo, Sov. Phys. JETP 6, 429 (1957)
B. Pontecorvo, Sov. Phys. JETP 26, 984 (1968)
Z. Maki, M. Nakagawa, S. Sakata, Prog. Theor. Phys. 28, 870 (1962)
L. Wolfenstein, Phys. Rev. D 17, 2369 (1978)
S.P. Mikheev, A.Yu. Smirnov, Sov. J. Nucl. Phys. 42, 913 (1985) [Yad. Fiz. 42, 1441 (1985)]
V.D. Barger, K. Whisnant, S. Pakvasa, R.J.N. Phillips, Phys. Rev. D 22, 2718 (1980)
P. Langacker, J.P. Leveille, J. Sheiman, Phys. Rev. D 27, 1228 (1983)
G.V. Dass, K.V.L. Sarma, Phys. Rev. D 30, 80 (1984)
A.Yu. Smirnov, Neutrino masses and mixing, hep-ph/9611465, presented at the 28th International Conference on High-Energy Physics (ICHEP 96), Warsaw, Poland
G.L. Fogli, E. Lisi, A. Marrone, A. Palazzo, A.M. Rotunno, Phys. Rev. Lett. 101, 141801 (2008)
K. Abe et al. (T2K Collaboration), Phys. Rev. Lett. 107, 041801 (2011). arXiv:1304.0841
F.P. An et al. (DAYA-BAY Collaboration), Phys. Rev. Lett. 108, 171803 (2012)
J.K. Ahn et al. (RENO Collaboration), Phys. Rev. Lett. 108, 191802 (2012)
Y. Abe et al. (Double Chooz Collaboration), Phys. Rev. D 86, 052008 (2012)
See discussion at the nuTURN conference website: http://nuturn2012.lngs.infn.it/
M. Blennow, T. Schwetz, J. High Energy Phys. 1208, 058 (2012) [E.-ibid. 1211, 098 (2012)]
A. Ghosh, T. Thakore, S. Choubey, arXiv:1212.1305 [hep-ph]
E.K. Akhmedov, S. Razzaque, A.Yu. Smirnov, arXiv:1205.7071 [hep-ph]
B. Bajc, F. Nesti, G. Senjanovic, F. Vissani, in Proceedings of 17th La Thuile conference, ed. by M. Greco. Frascati Physics Series, vol. 30 (2003), pp. 103–143
A. Geiser (MONOLITH Collaboration), Nucl. Instrum. Methods A 472, 464 (2000)
T. Tabarelli de Fatis, Eur. Phys. J. C 24, 43 (2002)
D. Indumathi (INO Collaboration), Pramana 63, 1283 (2004)
N.K. Mondal (INO Collaboration), Pramana 79, 1003 (2012)
G. Battistoni, A. Ferrari, C. Rubbia, P.R. Sala, F. Vissani, hep-ph/0604182
J. Tang, W. Winter, J. High Energy Phys. 1202, 028 (2012). arXiv:1110.5908 [hep-ph]
S.K. Agarwalla, P. Hernandez, J. High Energy Phys. 1210, 086 (2012). arXiv:1204.4217 [hep-ph]
K. Dick, M. Freund, P. Huber, M. Lindner, Nucl. Phys. B 588, 101 (2000). hep-ph/0006090
A.M. Dziewonski, D.L. Anderson, Phys. Earth Planet. Inter. 25, 297 (1981)
G.L. Fogli et al., Phys. Rev. D 86, 013012 (2012)
S.R. Dugad, F. Vissani, Phys. Lett. B 469, 171 (1999)
P. Lipari, M. Lusignoli, F. Sartogo, Phys. Rev. Lett. 74, 4384 (1995)
C.H. Llewellyn Smith, Phys. Rep. 3, 261 (1972)
L. Alvarez-Ruso, S.K. Singh, M.J. Vicente Vacas, Phys. Rev. C 57, 2693 (1998)
M. Glück, E. Reya, A. Vogt, Z. Phys. C 67, 433 (1995)
D.S. Ayres et al. (NOνA Collaboration), hep-ex/0503053. http://www-fnal.nova.gov
D.J. Koskinen, Mod. Phys. Lett. A 26, 2899 (2011)
http://agenda.infn.it/conferenceDisplay.py?confId=5510
J. Beringer et al. (Particle Data Group), Phys. Rev. D 86, 010001 (2012). Sect. 29
http://www.daftlogic.com/projects-google-maps-distance-calculator.htm
S. Kopp, M. Bishai, M. Dierckxsens, M. Diwan, A.R. Erwin, D.A. Harris, D. Indurthy, R. Keisler et al., Nucl. Instrum. Methods A 568, 503 (2006)
https://indico.in2p3.fr/conferenceOtherViews.py?view=standard&confId=8261
© Springer-Verlag Berlin Heidelberg and Società Italiana di Fisica 2013
1. INFN, Laboratori Nazionali del Gran Sasso, Assergi (AQ), Italy
2. Departamento de Fisica, DCeI, Universidad de Guanajuato, León, México
3. Gran Sasso Science Institute (INFN), L'Aquila, Italy
Lujan-Peschard, C., Pagliaroli, G. & Vissani, F. Eur. Phys. J. C (2013) 73: 2439. https://doi.org/10.1140/epjc/s10052-013-2439-1
Received 12 February 2013
Revised 03 May 2013
EPJC is an open-access journal funded by SCOAP3 and licensed under CC BY 4.0 | CommonCrawl |
\begin{document}
\title[On Constructive Connectives and Systems] {On Constructive Connectives and Systems}
\author[A.~Avron]{Arnon Avron} \address{School of Computer Science, Tel Aviv University, Israel} \email{\{aa,orilahav\}@post.tau.ac.il}
\author[O.~Lahav]{Ori Lahav} \address{\vskip-6 pt}
\keywords{sequent calculus, cut-elimination, nonclassical logics, non-deterministic semantics, Kripke semantics} \subjclass{F.4.1, I.2.3.}
\begin{abstract} Canonical inference rules and canonical systems are defined in the framework of {\em non-strict} single-conclusion sequent systems, in which the succeedents of sequents can be empty. Important properties of this framework are investigated, and a general non-deterministic Kripke-style semantics is provided.
This general semantics is then used to provide a constructive (and very natural), sufficient and necessary {\em coherence} criterion for the validity of the strong cut-elimination theorem in such a system. These results suggest new syntactic and semantic characterizations of basic constructive connectives.
\end{abstract}
\maketitle
\section{Introduction}
There are two traditions concerning the definition and characterization of logical connectives. The better known one is the semantic tradition, which is based on the idea that an $n$-ary connective $\diamond$ is defined by the conditions which make a sentence of the form ${\diamond (\varphi_1 ,\dots, \varphi_n)}$ {\em true}. The other is the proof-theoretic tradition (originated from \cite{gentzen69} --- see e.g. \cite{handbook-sundholm} for discussions and references). This tradition implicitly divides the connectives into {\em basic} connectives and compound connectives, where the latter are defined in terms of the basic ones. The meaning of a basic connective, in turn, is determined by a set of derivation rules which are associated with it. Here one usually has in mind a natural deduction or a sequent system,
in which every logical rule is an introduction rule (or perhaps an elimination rule, in the case of natural deduction) of some unique connective. However, it is well-known that not every set of rules can be taken as a definition of a basic connective. A minimal requirement is that whenever some sentence involving exactly one basic connective is provable, then it has a proof which involves no other connectives. In ``normal" sequent systems, in which every rule except cut has the subformula property, this condition is guaranteed by a cut-elimination theorem. Therefore only sequent systems for which such a theorem obtains are considered as useful for defining connectives.
In \cite{AL05} the semantic and the proof-theoretic traditions were shown to be equivalent for a large family of what may be called semi-classical connectives (which includes all the classical connectives, as well as many others). In these papers multiple-conclusion canonical (= `ideal') propositional rules and systems were defined in precise terms. A simple coherence criterion for the non-triviality of such a system was given, and it was shown that a canonical system is coherent if and only if it admits cut-elimination. Semi-classical connectives were characterized using canonical rules in coherent canonical systems. In addition, each of these connectives was given a semantic characterization. This characterization uses two-valued {\em non-deterministic} truth-tables -- a
natural generalization of the classical truth-tables. Moreover, it was shown there how to translate a semantic definition of a connective to a corresponding proof-theoretic one, and vice-versa.\footnote{It might be interesting to note that every connective in this framework can be viewed as basic.}
In this paper we attempt to provide similar characterizations for the class of {\em basic constructive connectives}.
What exactly is a constructive connective? Several different answers to this question have been given in the literature, each adopting either of the traditions described above (but not both!). Thus in \cite{McCullough} McCullough gave a purely semantic characterization of constructive connectives, using a generalization of the Kripke-style semantics for intuitionistic logic.
On the other hand Bowen suggested in \cite{Bowen} a quite natural proof-theoretic criterion for (basic) constructivity: an $n$-ary connective $\diamond$, defined by a set of sequent rules, is constructive if
whenever a sequent of the form ${\Rightarrow\diamond (\varphi_1 ,\dots, \varphi_n)}$ is provable, then it has a proof ending with an application of one of the right-introduction rules for $\diamond$.
In what follows we generalize and unify the syntactic and the semantic approaches by adapting the ideas and methods used in \cite{AL05}. The crucial observation on which our theory is based is that every connective of a ``normal" {\em single-conclusion} sequent system that admits cut-elimination is necessarily constructive according to Bowen's criterion (because without using cuts, the only way to derive ${ \Rightarrow \diamond (\varphi_1 ,\dots, \varphi_n)}$ in such a system is to prove first the premises of one of its right-introduction rules). This indicates that only single-conclusion sequent rules are useful for defining constructive connectives. In addition, for defining {\em basic} connectives, only {\em canonical} derivation rules (in a sense similar to that used in \cite{AL05}) should be used. Therefore, our proof-theoretic characterization of {\em basic} constructive connectives is done using cut-free single-conclusion canonical systems. These systems are the natural constructive counterparts of the multiple-conclusion canonical systems of \cite{AL05}. On the other hand, McCullough's work suggests that an appropriate counterpart of the semantics of non-deterministic truth-tables
should be given by a non-deterministic generalization of Kripke-style semantics.
General single-conclusion canonical rules and systems were first introduced and investigated in \cite{AL10}. A general non-deterministic Kripke-style semantics for such systems was also developed there, and a constructive necessary and sufficient {\em coherence} criterion for their non-triviality was provided. Moreover: it was shown that a system of this kind admits a strong form of cut-elimination iff it is coherent. However, \cite{AL10} dealt only with {\em strict} single-conclusion systems, in which the succeedents of sequents contain {\em exactly} one formula. Unfortunately, in such a framework it is impossible to have canonical rules even for a crucial connective like intuitionistic negation.
To solve this, we move here to Gentzen's original (non-strict) single-conclusion framework, in which the succeedents of sequents contain {\em at most} one formula. There is a price to pay, though, for this extension of the framework. As we show below, in this more general framework we lose the equivalence between the simple coherence criterion of \cite{AL05,AL10} and non-triviality, as well as the equivalence proved there between simple cut-elimination and strong cut-elimination. Hence the theory needs some major changes.
In the rest of this paper we first redefine the notions of a canonical inference rule and a canonical system in the framework of non-strict single-conclusion sequent systems.
Then we turn to the semantic point of view, and present a corresponding general non-deterministic Kripke-style semantics. We show that every canonical system induces a class of non-deterministic Kripke-style frames, for which it is strongly sound and complete. This general semantics is then used to show that a canonical system ${\bf G}$ is coherent iff it admits a strong form of non-triviality, and this happens iff the strong cut-elimination theorem is valid for ${\bf G}$.
Taken together, the results of this paper suggest that a basic constructive connective is a connective that can be defined using a set of canonical rules in a coherent (non-strict) single-conclusion sequent system. We show that this class is broader than that suggested in \cite{McCullough}, and includes connectives that cannot be expressed by the four basic intuitionistic connectives. Examples include the ``converse non-implication" and ``not-both" connectives from \cite{Bowen}, as well as the weak implication of primal intuitionistic logic from \cite{gu09}. These connectives were left out by McCullough's deterministic semantic characterization because their semantics is strictly non-deterministic.
\section{Preliminaries}
In what follows $\mathcal{L}$ is a propositional language, ${\mathcal{F}}$ is its set of wffs, $p,q$ denote atomic formulas, $\psi,\varphi,\theta$ denote arbitrary formulas (of $\mathcal{L}$), $\mathcal{T}$ and $\mathcal{U}$ denote subsets of ${\mathcal{F}}$, $\Gamma,\Delta,\Sigma,\Pi$ denote {\em finite} subsets of ${\mathcal{F}}$, and $E,F$ denote subsets of ${\mathcal{F}}$ with at most one element. We assume that the atomic formulas of $\mathcal{L}$ are $p_1,p_2,\ldots$ (in particular: $p_1 ,\dots, p_n$ are the first $n$ atomic formulas of $\mathcal{L}$).
\begin{notation} For convenience we sometimes discard parentheses for sets, and write e.g. just $\psi$ instead of $\{\psi\}$. We also employ other standard abbreviations, like $\Gamma,\Delta$ instead of $\Gamma \cup \Delta$. \end{notation}
\begin{defi} \label{tcr} A {\em Tarskian consequence relation} ({\em tcr} for short) for $\mathcal{L}$ is a binary relation $\vdash$ between sets of formulas of $\mathcal{L}$ and formulas of $\mathcal{L}$ that satisfies the following conditions: \begin{tabbing} \ \ \ \ \=
{\it Strong} {\it Reflexivity}: \ \ \ \= if $\varphi \in \mathcal{T}$ then $\mathcal{T} \vdash \varphi$.\\
\> {\it Monotonicity}: \> if $\mathcal{T} \vdash \varphi$ and $\mathcal{T} \subseteq \mathcal{T}^\prime$ then $\mathcal{T}^\prime \vdash \varphi$. \\ \index{cut}
\> {\it Transitivity} {(\it cut)}: \> if $\mathcal{T} \vdash \psi$ and $\mathcal{T}, \psi \vdash \varphi$ then $\mathcal{T} \vdash \varphi$. \end{tabbing} \end{defi}
\noindent In the non-strict framework, it is natural to extend Definition \ref{tcr} as follows:
\begin{defi} \label{etcr} An {\em Extended Tarskian consequence relation} ({\em etcr} for short) for $\mathcal{L}$ is a binary relation $\vdash$ between sets of formulas of $\mathcal{L}$ and singletons or empty sets of formulas of $\mathcal{L}$ that satisfies the following conditions: \begin{tabbing} \ \ \ \ \=
{\it Strong} {\it Reflexivity}: \ \ \ \= if $\varphi \in \mathcal{T}$ then $\mathcal{T} \vdash \varphi$.\\
\> {\it Monotonicity}: \> if $\mathcal{T} \vdash E$, $\mathcal{T} \subseteq \mathcal{T}^\prime$, and $E\subseteq E^\prime$, then $\mathcal{T}^\prime \vdash E^\prime$. \\ \index{cut}
\> {\it Transitivity} {(\it cut)}: \> if $\mathcal{T} \vdash \psi$ and $\mathcal{T}, \psi \vdash E$ then $\mathcal{T} \vdash E$. \end{tabbing} \end{defi}
\noindent Intuitively, ``$\mathcal{T} \vdash\ $" means that $\mathcal{T}$ is inconsistent (i.e. $\mathcal{T} \vdash \varphi$ for every formula $\varphi$).
\begin{defi} \label{substitution} An {\em $\mathcal{L}$-substitution} is a function $\sigma : {\mathcal{F}} \to {\mathcal{F}}$, such that for every $n$-ary connective $\diamond$ of $\mathcal{L}$, we have: $\sigma(\diamond(\psi_1 ,\dots, \psi_n))=\diamond(\sigma(\psi_1) ,\dots, \sigma(\psi_n))$. Obviously, a substitution is determined by the values it assigns to atomic formulas. A substitution is extended to sets of formulas in the obvious way:
$\sigma(\mathcal{T})=\{\sigma(\varphi){\ |\ } \varphi\in\mathcal{T}\}$ (in particular, $\sigma(\emptyset)=\emptyset$). \end{defi}
\begin{defi} \label{etcr properties} An etcr $\vdash$ for $\mathcal{L}$ is {\it structural} if for every $\mathcal{L}$-substitution $\sigma$ and every $\mathcal{T}$ and $E$, if $\mathcal{T}\vdash E$ then $\sigma(\mathcal{T}) \vdash \sigma(E)$. $\vdash$ is {\em finitary} iff the following condition holds for every $\mathcal{T}$ and $E$: if $\mathcal{T} \vdash E$ then there exists a finite $\Gamma \subseteq \mathcal{T}$ such that $\Gamma \vdash E$. $\vdash$ is {\em consistent} (or {\em non-trivial}) if $p_1\not\vdash p_2$. \end{defi}
It is easy to see that there are exactly {\em four} inconsistent structural etcrs in any given language: $\mathcal{T}\vdash E$ for every $\mathcal{T}$ and $E$; $\mathcal{T}\vdash E$ for every $E$ and nonempty $\mathcal{T}$; $\mathcal{T}\vdash E$ for every $\mathcal{T}$ and nonempty $E$; and $\mathcal{T}\vdash E$ for every nonempty $\mathcal{T}$ and nonempty $E$. These etcrs are obviously trivial, so we exclude them from our definition of an {\em extended logic}:
\begin{defi} \label{elogic} A propositional {\it extended logic} is a pair $\tup{\mathcal{L},\vdash}$, where $\mathcal{L}$ is a propositional language, and $\vdash$ is an etcr for $\mathcal{L}$ which is structural, finitary, and consistent. \end{defi}
Sequents, which are the main tool for introducing extended logics, are defined as follows:
\begin{defi} A {\em non-strict single-conclusion sequent} is an expression of the form $(\Gamma\Rightarrow E)$ where $\Gamma$ and $E$ are finite sets of formulas, and $E$ is either a singleton or empty. A {\em non-strict single-conclusion Horn clause} is a non-strict single-conclusion sequent which consists of atomic formulas only. \end{defi}
\begin{convention} From now on, by ``{\em sequent (clause)}" we shall mean ``non-strict single-conclusion sequent (Horn clause)". \end{convention}
\begin{defi} A sequent of the form $(\Gamma\Rightarrow\{\varphi\})$ is called {\em definite}. A sequent of the form $(\Gamma\Rightarrow\emptyset)$ is called {\em negative}. \end{defi}
\begin{notation} We mainly use $s$ to denote a sequent and $\mathcal{S}$ to denote a set of sequents. We usually omit the outermost parentheses of sequents to improve readability. For convenience, we shall denote a sequent of the form $\Gamma\Rightarrow\emptyset$ by $\Gamma\Rightarrow\ $, and a sequent of the form $\Gamma\Rightarrow\{\varphi\}$ by $\Gamma\Rightarrow\varphi$. \end{notation}
\section{Canonical Systems}
The following definitions formulate in exact terms the structure of sequent rules (and systems) that can be used to define basic constructive connectives. We first define right-introduction rules and their applications, and then deal with left-introduction rules.
\begin{defi} \label{canonical right-introduction rule} \ \begin{enumerate}[(1)] \item A {\em single-conclusion canonical right-introduction rule} for a connective $\diamond$ of arity $n$ is an expression of the form: \begin{center} ${\{\Pi_i \Rightarrow E_i\}_{1 \leq i \leq m} / \ \Rightarrow \diamond(p_1 ,\dots, p_n)}$ \end{center} \sloppy \noindent where $m\geq 0$, and ${\Pi_i\cup E_i \subseteq \{p_1 ,\dots, p_n\}}$ for every $1 \leq i \leq m$.
The clauses $\Pi_i \Rightarrow E_i$ ($1 \leq i \leq m$) are the {\em premises} of the rule, while ${\Rightarrow \diamond(p_1 ,\dots, p_n)}$ is its {\em conclusion}. \item An {\em application} of the rule $\{\Pi_i \Rightarrow E_i\}_{1 \leq i \leq m} / \ \Rightarrow \diamond(p_1 ,\dots, p_n)$ is any inference step of the form: \[\dera{\{\Gamma,\sigma(\Pi_i)\Rightarrow \sigma(E_i) \}_{1 \leq i \leq m}} {\Gamma\Rightarrow \sigma(\diamond(p_1 ,\dots, p_n))}\] where $\Gamma$ is a finite set of formulas and $\sigma$ is an $\mathcal{L}$- substitution. \end{enumerate} \end{defi}
A canonical right-introduction rule may have negative premises (negative sequents serving as premises). Obviously, in applications of such a rule, a right context formula cannot be added to its negative premises. Left-introduction rules are somewhat more complicated, since in their applications it is possible to add a right context formula to the negative premises and to the conclusion. However, in the general case there might also be negative premises which do {\em not} allow such an addition of a right context.
Accordingly, in what follows we split the set of premises of a canonical left-introduction rule into two sets: {\em hard} premises which do not allow right context, and {\em soft} premises, which do allow it.
\sloppy
\begin{defi} \label{canonical left-introduction rule} \ \begin{enumerate}[(1)] \item A {\em single-conclusion canonical left-introduction rule} for a connective $\diamond$ of arity $n$ is an expression of the form: \begin{center} ${\tup{\{\Pi_i \Rightarrow E_i\}_{1 \leq i \leq m},\{\Sigma_j \Rightarrow \}_{1 \leq j \leq k}} / \ \diamond(p_1 ,\dots, p_n)\Rightarrow}$ \end{center} where $m,k\geq 0$, $\Pi_i\cup E_i\subseteq\{p_1 ,\dots, p_n\}$ for $1 \leq i \leq m$, and $\Sigma_j\subseteq\{p_1 ,\dots, p_n\}$ for $1 \leq j \leq k$. The clauses
$\Pi_i \Rightarrow E_i$ ($1 \leq i \leq m$) are called the {\em hard premises} of the rule,
$\Sigma_j \Rightarrow$ ($1 \leq j \leq k$) are its {\em soft premises}, and $\diamond(p_1 ,\dots, p_n) \Rightarrow$ is its {\em conclusion}. \item An {\em application} of the rule \\ $\tup{\{\Pi_i \Rightarrow E_i\}_{1 \leq i \leq m},\{\Sigma_i \Rightarrow \}_{1 \leq i \leq k}} / \diamond(p_1 ,\dots, p_n)\Rightarrow$ is any inference step of the form: \[\derb{\{\Gamma,\sigma(\Pi_i) \Rightarrow \sigma(E_i)\}_{1 \leq i \leq m}}
{\{\Gamma,\sigma(\Sigma_i) \Rightarrow E\}_{1 \leq i \leq k}}
{\Gamma, \sigma(\diamond(p_1 ,\dots, p_n))\Rightarrow E}\] where $\Gamma\Rightarrow E$ is an arbitrary sequent, and $\sigma$ is an $\mathcal{L}$- substitution. \end{enumerate} \end{defi}
\fussy
\begin{rem} Note that the definite premises of a canonical left-introduction rule are all hard premises, as they do not allow the addition of a right context. \end{rem}
\begin{convention} From now on, by ``{\em canonical right-introduction (left-introduction) rule}" we shall mean ``single-conclusion canonical right-introduction (left-introduction) rule". \end{convention}
\begin{exas} \label{canonical rules examples} We give some examples for canonical rules. \begin{enumerate}[\hbox to8 pt{
}]
\item\noindent{\hskip-12 pt\bf Implication:}\ The two usual rules for implication are: $$\tup{\{\Rightarrow p_1\}, \{p_2\Rightarrow\}} \ / \ p_1 \supset p_2 \Rightarrow \mbox{\ \ \ and \ \ \ } \{p_1 \Rightarrow p_2 \} \ / \ \ \Rightarrow p_1 \supset p_2$$ Applications of these rules have the form: \[ \derb{\Gamma\Rightarrow \psi}{\Gamma,\varphi \Rightarrow E} {\Gamma,\psi \supset \varphi \Rightarrow E} \ \ \ \ \ \dera{\Gamma,\psi \Rightarrow \varphi} {\Gamma \Rightarrow \psi \supset \varphi} \] \item\noindent{\hskip-12 pt\bf Absurdity:}\ In intuitionistic logic there is no right-introduction rule for the absurdity constant $\perp$, and there is exactly one left-introduction rule for it: \begin{center} $\tup{\emptyset,\emptyset} \ / \ \perp \ \Rightarrow \ $ \end{center} Applications of this rule provide new {\em axioms}: \begin{center} $\Gamma,\perp \ \Rightarrow E$ \end{center} \item\noindent{\hskip-12 pt\bf Negation:}\ Unlike in \cite{AL10}, in this new framework it is possible to handle negation as a basic connective, using the following standard rules:
\begin{center} $\tup{\{\Rightarrow p_1 \}, \emptyset} \ / \ \neg p_1 \Rightarrow \ $ \ and \ $\{p_1 \Rightarrow \} \ / \ \Rightarrow \neg p_1 \ $ \end{center} Applications of these rules have the form: \[ \dera{\Gamma \Rightarrow \psi} {\Gamma,\neg \psi \Rightarrow E} \ \ \ \ \ \dera{\Gamma,\psi \Rightarrow } {\Gamma \Rightarrow \neg \psi} \] \item\noindent{\hskip-12 pt\bf Semi-Implication:}\ In \cite{gu09} $\leadsto$ was introduced using the following two rules: $$\tup{\{\Rightarrow p_1 \}, \{p_2\Rightarrow\}} \ / \ p_1 \leadsto p_2 \Rightarrow \mbox{\ \ \ and \ \ \ } \{\Rightarrow p_2 \} \ / \ \ \Rightarrow p_1 \leadsto p_2$$ Applications of these rules have the form: \[ \derb{\Gamma\Rightarrow \psi}{\Gamma,\varphi \Rightarrow E} {\Gamma,\psi \leadsto \varphi \Rightarrow E} \ \ \ \ \ \dera{\Gamma \Rightarrow \varphi} {\Gamma \Rightarrow \psi \leadsto \varphi} \]
\item\noindent{\hskip-12 pt\bf Affirmation:}\
Let the connective $\vartriangleright$ be defined using the following rules: \begin{center} $\tup{\emptyset, \{p_1 \Rightarrow\}} \ / \ \vartriangleright p_1 \Rightarrow \ $ \ and \ $\{\Rightarrow p_1\} \ / \ \Rightarrow \vartriangleright p_1 \ $ \end{center} Applications of these rules have the form: \[ \dera{\Gamma, \varphi \Rightarrow E} {\Gamma,\vartriangleright \varphi \Rightarrow E} \ \ \ \ \ \dera{\Gamma \Rightarrow \varphi} {\Gamma \Rightarrow \vartriangleright \varphi} \] \item\noindent{\hskip-12 pt\bf Weak Affirmation:}\
Let the connective $\blacktriangleright$ be defined using the following rules:
\begin{center} $\tup{\{p_1 \Rightarrow\},\emptyset} \ / \ \blacktriangleright p_1 \Rightarrow \ $ \ and \ $\{\Rightarrow p_1\} \ / \ \Rightarrow \blacktriangleright p_1 \ $ \end{center} Applications of these rules have the form: \[ \dera{\Gamma, \varphi \Rightarrow } {\Gamma,\blacktriangleright \varphi \Rightarrow E} \ \ \ \ \ \dera{\Gamma \Rightarrow \varphi} {\Gamma \Rightarrow \blacktriangleright \varphi} \] Note that the left-introduction rule for $\blacktriangleright$ includes one hard negative premise, to which no right context can be added.
As a result, $\blacktriangleright \varphi \Rightarrow \varphi$ is not provable. \item\noindent{\hskip-12 pt\bf Bowen's connectives:}\ In \cite{Bowen}, Bowen introduced
an extension of the basic intuitionistic calculus with two new intuitionistic connectives\footnote{He also presented a ``neither-nor" connective, which we do not describe here, since this connective can be expressed by the four basic intuitionistic connectives.}. He defined these connectives by the following canonical rules: \begin{center} $\tup{\{p_2 \Rightarrow p_1 \},\emptyset} \ / \ p_1 \not\subset p_2 \Rightarrow$ \ and \ $\{(p_1 \Rightarrow),(\Rightarrow p_2)\} \ / \ \Rightarrow p_1 \not\subset p_2 \ $
$\tup{\{(\Rightarrow p_1),(\Rightarrow p_2)\}, \emptyset} / p_1 \mid p_2 \Rightarrow$ \ and \ $\{p_1 \Rightarrow \} / \Rightarrow p_1 \mid p_2 \ $ \ \ $\{p_2 \Rightarrow \} / \Rightarrow p_1 \mid p_2 \ $ \end{center} Applications of these rules have the form: \[ \dera{\Gamma, \psi \Rightarrow \varphi} {\Gamma,\varphi \not\subset \psi\Rightarrow E} \ \ \ \ \ \derb{\Gamma, \varphi \Rightarrow}{\Gamma \Rightarrow \psi} {\Gamma \Rightarrow \varphi \not\subset \psi} \] \[ \derb{\Gamma \Rightarrow \varphi}{\Gamma \Rightarrow \psi} {\Gamma,\varphi \mid \psi\Rightarrow E} \ \ \ \ \ \dera{\Gamma, \varphi \Rightarrow} {\Gamma \Rightarrow \varphi \mid \psi} \ \ \dera{\Gamma, \psi \Rightarrow} {\Gamma \Rightarrow \varphi \mid \psi} \]
\end{enumerate} \end{exas}
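\noindent To make the schematic character of canonical rules and their applications fully concrete, the following small Python sketch (given for illustration only; it is not part of the formal development, and all names in it are ad hoc) instantiates the left-introduction rule for $\supset$ with a substitution and a context, exactly as prescribed by the definition of an application above.
\begin{verbatim}
# Illustrative sketch: instantiating a canonical left-introduction rule.
# A hard premise is a pair (Pi, E) of tuples of atoms, with E of length <= 1;
# a soft premise is just the tuple Sigma.  Formulas are plain strings.

def subst(sigma, atoms):
    return tuple(sigma[a] for a in atoms)

def apply_left_rule(hard, soft, name, n, sigma, Gamma, E):
    """Premises and conclusion of an application with context Gamma => E."""
    premises  = [(Gamma + subst(sigma, Pi), subst(sigma, Ei)) for (Pi, Ei) in hard]
    premises += [(Gamma + subst(sigma, Si), E) for Si in soft]
    principal = name + "(" + ",".join(sigma["p%d" % i] for i in range(1, n + 1)) + ")"
    return premises, (Gamma + (principal,), E)

# The rule for implication:  <{ => p1 }, { p2 => }> / p1 -> p2 =>
hard  = [((), ("p1",))]          # hard premise:   => p1
soft  = [("p2",)]                # soft premise:  p2 =>
sigma = {"p1": "q", "p2": "r"}   # sigma(p1) = q, sigma(p2) = r
prem, concl = apply_left_rule(hard, soft, "imp", 2, sigma, ("s",), ("t",))
print(prem)     # [(('s',), ('q',)), (('s', 'r'), ('t',))]
print(concl)    # (('s', 'imp(q,r)'), ('t',))
\end{verbatim}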
\begin{defi} \label{canonical system} A non-strict single-conclusion sequent system is called {\em canonical} if it satisfies the following conditions: \begin{enumerate}[(1)] \item Its axioms are the sequents of the form $\varphi\Rightarrow\varphi$. \item Weakening and cut are among its rules. Applications of these rules have the form: \[ \dera{\Gamma \Rightarrow E} {\Gamma, \Delta \Rightarrow E} \ \ \dera{\Gamma \Rightarrow } {\Gamma \Rightarrow \psi} \ \ \ \ \ \derb{\Gamma \Rightarrow \varphi}{\Delta,\varphi \Rightarrow E} {\Gamma,\Delta\Rightarrow E} \] \item Each of its other rules is either a canonical right-introduction rule or a canonical left-introduction rule. \end{enumerate} \end{defi}
\begin{convention} From now on, by {\em canonical system} we shall mean ``non-strict single-conclusion canonical system". \end{convention}
\begin{defi} \label{syntactic tcr} Let ${{\bf G}}$ be a canonical system, and let $\mathcal{S} \cup \{s\}$ be a set of sequents. $\mathcal{S}\vdash_{{\bf G}}^{seq} s$ iff there exists a derivation in ${{\bf G}}$ of $s$ from $\mathcal{S}$. The sequents of $\mathcal{S}$ are called the {\em assumptions} (or {\em non-logical axioms}) of such a derivation. \end{defi}
\begin{defi} The etcr $\vdash_{{\bf G}}$ which is induced by a canonical system ${{\bf G}}$ is defined by: $\mathcal{T}\vdash_{{\bf G}}E$ iff there exists a finite $\Gamma\subseteq\mathcal{T}$ such that $\vdash_{{\bf G}}^{seq}\Gamma\Rightarrow E$. \end{defi}
\begin{prop} \label{structural finitary} $\vdash_{{\bf G}}$ is a structural and finitary etcr for every canonical system ${\bf G}$. \qed \end{prop}
\begin{prop} \label{reduction1} $\mathcal{T}\vdash_{{\bf G}} E$ iff $\{\Rightarrow\psi{\ |\ }\psi\in\mathcal{T}\}\vdash_{{\bf G}}^{seq}\Rightarrow E$. \qed \end{prop}
We leave the easy proofs of the last two propositions to the reader.
\section{Consistency and Coherence}
Consistency (or non-triviality) is a crucial property of a deductive system. The goal of this section is to find a constructive criterion for it in the framework of canonical systems.
\begin{defi} \label{consistency} A canonical system ${{\bf G}}$ is called {\em consistent} iff $\not\vdash_{{\bf G}}^{seq} p_1 \Rightarrow p_2$. \end{defi}
\begin{prop} \label{consistent proposition} A canonical system ${\bf G}$ is consistent iff $\vdash_{{\bf G}}$ is consistent. \qed \end{prop}
In multiple-conclusion canonical systems (\cite{AL05}), as well as in strict single-conclusion canonical systems (\cite{AL10}), consistency is equivalent to coherence. Roughly speaking, a coherent system is a system in which the rules cannot lead to new conflicts: the conclusions of two rules can contradict each other only if their joint set of premises is already inconsistent. Next we adapt this criterion to the present case:
\begin{defi} \label{coherent-connective} A set ${\mathcal{R}}$ of canonical rules for an $n$-ary connective $\diamond$ is called {\em coherent} iff $S_1\cup S_2\cup S_3$ is classically inconsistent whenever ${\mathcal{R}}$ contains both $\tup{S_1,S_2}/\diamond(p_1 ,\dots, p_n) \Rightarrow \ $ and $S_3/\ \Rightarrow \diamond(p_1 ,\dots, p_n)$. \end{defi}
\begin{rem} It is known that a set of clauses is classically inconsistent iff the empty clause can be derived from it using only cuts. \end{rem}
\begin{exa} Every connective introduced in Example \ref{canonical rules examples}, has a coherent set of rules. For example, for the two rules for implication we have $S_1=\{\ \Rightarrow p_1\}$, $S_2=\{p_2\Rightarrow\ \}$, $S_3=\{p_1\Rightarrow p_2\}$, and $S_1\cup S_2\cup S_3$ is the classically inconsistent set ${\{(\ \Rightarrow p_1) , (p_2\Rightarrow\ ), (p_1\Rightarrow p_2)\}}$ (from which the empty sequent can be derived using two cuts). For the two rules for semi-implication we have $S_1=\{\ \Rightarrow p_1\}$, $S_2=\{p_2\Rightarrow\ \}$, $S_3=\{\ \Rightarrow p_2\}$, and $S_1\cup S_2\cup S_3$ is the classically inconsistent set ${\{(\ \Rightarrow p_1) , (p_2\Rightarrow\ ), (\ \Rightarrow p_2)\}}$ (from which the empty sequent can be derived using one cut). \end{exa}
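\noindent By the last remark, checking coherence amounts to closing a finite set of clauses under cuts and looking for the empty clause. The following small Python sketch (given for illustration only; it is not part of the formal development, and all names in it are ad hoc) implements this check and reproduces the two computations of the example above.
\begin{verbatim}
from itertools import product

def clause(ant, suc=()):
    """A clause Gamma => E, where E is empty or a singleton."""
    return (frozenset(ant), frozenset(suc))

def cut_closure(clauses):
    """Close a set of clauses under cut:
       from (Gamma => p) and (Delta, p => E) derive (Gamma, Delta => E)."""
    clauses = set(clauses)
    while True:
        new = set()
        for (g1, e1), (g2, e2) in product(clauses, repeat=2):
            for p in e1:
                if p in g2:
                    c = (g1 | (g2 - {p}), e2)
                    if c not in clauses:
                        new.add(c)
        if not new:
            return clauses
        clauses |= new

def inconsistent(clauses):
    """Classical inconsistency = the empty clause is derivable by cuts alone."""
    return clause(()) in cut_closure(clauses)

# Implication:       S1 u S2 u S3 = { => p1 , p2 => , p1 => p2 }
print(inconsistent({clause((), ("p1",)), clause(("p2",)),
                    clause(("p1",), ("p2",))}))          # True: coherent
# Semi-implication:  S1 u S2 u S3 = { => p1 , p2 => , => p2 }
print(inconsistent({clause((), ("p1",)), clause(("p2",)),
                    clause((), ("p2",))}))               # True: coherent
\end{verbatim}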
\begin{defi} \label{coherence} A canonical system ${{\bf G}}$ is called {\em coherent} iff for each connective $\diamond$, the set of rules in ${\bf G}$ for $\diamond$ is coherent. \end{defi}
Unfortunately, the next example shows that in the present case coherence is {\em not} necessary for consistency.
\begin{exa} \label{G with circle} Let ${{\bf G}}$ be a canonical system for a language which includes
a single unary connective $\circ$, having the following rules: \begin{center} $\tup{\emptyset,\{p_1 \Rightarrow \}} \ / \ \circ p_1 \Rightarrow \ $ \ and \ $\{p_1 \Rightarrow \} \ / \ \ \Rightarrow \circ p_1 $ \end{center} Applications of these rules have the form: \[ \dera{\Gamma, \varphi \Rightarrow E} {\Gamma,\circ \varphi \Rightarrow E} \ \ \ \ \ \dera{\Gamma, \varphi \Rightarrow } {\Gamma \Rightarrow \circ \varphi} \] Obviously, ${{\bf G}}$ is not coherent. However, it can easily be proved (using induction) that the only sequents provable in ${\bf G}$ from no assumptions are the sequents of the form $\Gamma\Rightarrow\psi$, where $\circ^n\psi\in\Gamma$ for some $n\geq 0$ (here $\circ^0\psi=\psi$ and $\circ^{n+1}\psi=\circ\circ^{n}\psi$). In particular, $p_1\Rightarrow p_2$ is not provable in ${\bf G}$ from no assumptions, and so ${\bf G}$ is consistent.
\end{exa}
To overcome this difficulty, we define a stronger notion of consistency, and show that in the context of non-strict canonical systems, the coherence criterion is equivalent to this stronger notion.
\begin{defi} \label{strong consistency} A canonical system ${{\bf G}}$ is called {\em strongly consistent} iff ${(\Rightarrow p_1),(p_2\Rightarrow)\not\vdash_{{\bf G}}^{seq}\Rightarrow}$. \end{defi}
\begin{prop} \label{strong consistency -> consistency} Every strongly consistent canonical system is also consistent. \end{prop} \proof Let ${{\bf G}}$ be an inconsistent canonical system. Then $\vdash_{{\bf G}}^{seq} p_1 \Rightarrow p_2$. Using the assumptions $(\Rightarrow p_1),(p_2\Rightarrow)$ and two cuts we get $(\Rightarrow p_1),(p_2\Rightarrow)\vdash_{{\bf G}}^{seq} \Rightarrow$. \qed
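\noindent For the reader's convenience, the two cuts used in this proof can be displayed explicitly; here $p_1\Rightarrow p_2$ is provable by the inconsistency assumption, and the two remaining leaves are the assumptions:
$$\infer[cut]{\Rightarrow} {
\infer[cut] { \Rightarrow p_2} {
{\Rightarrow p_1} &
{p_1 \Rightarrow p_2} }
& { p_2\Rightarrow} }$$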
\noindent The following derivation, from the assumption $p_2\Rightarrow\ $, shows that the system from Example \ref{G with circle} is not strongly consistent, and so strong consistency is indeed strictly stronger than consistency.
$$\infer[cut]{\Rightarrow} {
{ p_2\Rightarrow}
& \infer[cut] { \Rightarrow p_2} {
\infer[\circ\Rightarrow] {\circ p_2 \Rightarrow p_2} {p_2 \Rightarrow p_2} &
\infer[\Rightarrow\circ] { \Rightarrow \circ p_2} {p_2 \Rightarrow } } }$$
\noindent We note that strong consistency is a very natural demand from a system: in strongly inconsistent systems it suffices to have one provable sequent of the form $\psi\Rightarrow$, and one provable sequent of the form $\Rightarrow\varphi$, to make every sequent provable (substitute $\varphi$ for $p_1$ and $\psi$ for $p_2$ in a derivation witnessing strong inconsistency, replace its assumptions by the given proofs, and then use weakening).
\begin{thm} \label{strong consistency -> coherence} Every strongly consistent canonical system is coherent. \end{thm} \proof Let ${{\bf G}}$ be an incoherent canonical system. This means that ${{\bf G}}$ includes two rules $\tup{S_1,S_2}/\diamond(p_1 ,\dots, p_n) \Rightarrow \ $ and $S_3/\ \Rightarrow \diamond(p_1 ,\dots, p_n)$, such that the set of clauses $S_1\cup S_2\cup S_3$ is classically satisfiable. Let $v$ be an assignment in $\{t,f\}$ that satisfies all the clauses in $S_1\cup S_2\cup S_3$. Define a substitution $\sigma$ by: \[\sigma(p)= \left\{ \begin{array}{ll} p_1 & v(p)=t\\ p_2 & v(p)=f \end{array} \right.\]
Since $v$ satisfies all the clauses in $S_1\cup S_2\cup S_3$, for every ${\Pi\Rightarrow E\in S_1\cup S_2\cup S_3}$ we have $p_2 \in \sigma(\Pi)$ or $p_1 \in \sigma(E)$. Hence, every element of ${\sigma(S_1\cup S_2\cup S_3)}$ can be derived from ${(\Rightarrow p_1),(p_2\Rightarrow)}$ by weakening. Now by applying the rules ${\tup{S_1,S_2}/\diamond(p_1 ,\dots, p_n) \Rightarrow \ }$ and ${S_3/\ \Rightarrow \diamond(p_1 ,\dots, p_n)}$ to these sequents we get proofs from ${(\Rightarrow p_1),(p_2\Rightarrow)}$ of the sequents ${\Rightarrow \sigma(\diamond(p_1 ,\dots, p_n))}$ and ${\sigma(\diamond(p_1 ,\dots, p_n))\Rightarrow}$. That ${(\Rightarrow p_1),(p_2\Rightarrow)\vdash_{{\bf G}}^{seq} \ \Rightarrow \ }$ then follows using a cut. \qed
\noindent The last theorem implies that coherence is a necessary demand from any acceptable canonical system ${{\bf G}}$.
In the sequel (Corollary \ref{equivalences}) we show that coherence is also sufficient to ensure strong consistency.
\begin{rem} Our coherence criterion can be proved to be equivalent (for fully-structural sequent systems) to the {\em reductivity} criterion defined in \cite{CT06}. However, in the framework of \cite{CT06} a connective essentially has infinitely many introduction rules, while our framework makes it possible to convert these infinite sets of rules into finite ones. \end{rem}
\section{Semantics for Canonical Systems} \label{semantic section}
In this section we generalize Kripke semantics for intuitionistic logic to arbitrary coherent canonical systems. For this we use {\em non-deterministic} Kripke frames and semiframes.
\begin{defi} \label{persistent} Let $\tup{W,\leq}$ be a nonempty partially ordered set. Let $\mathcal{U}$ be a set of formulas. A function $v:W \times \mathcal{U}\to \{t,f\}$ is called {\em persistent} iff for every $a\in W$ and $\varphi \in \mathcal{U}$,
$v(a,\varphi)=t$ implies that $v(b,\varphi)=t$ for every $b\in W$ such that $a \leq b$. \end{defi}
\begin{defi} \label{semiframe} Let $\mathcal{U}$ be a set of formulas closed under subformulas. A {\em $\mathcal{U}$-semiframe} is a triple $\mathcal{W}=\tup{W, \leq,v}$ such that: \begin{enumerate}[(1)] \item $\tup{W,\leq}$ is a nonempty partially ordered set. \item $v$ is a persistent function from $W \times {\mathcal{U}}$ to $\{t,f\}$. \end{enumerate} When $\mathcal{U}=\mathcal{F}$ a $\mathcal{U}$-semiframe is also called an {\em $\mathcal{L}$-frame}. \end{defi}
\begin{rem} \label{analytic remark} To understand the need to consider semiframes, we note that to be useful and effective, a denotational semantics of a propositional logic should be {\em analytic}.
This means that in order to determine whether a sequent $s$ follows from a set $\mathcal{S}$ of sequents, it should be sufficient to consider {\em partial} valuations, defined only on the set of subformulas of the formulas in ${\mathcal{S}}\cup\{s\}$. In the present case, such partial valuations are provided by semiframes. \end{rem}
\begin{defi} \label{sequent-satisfaction} Let ${\mathcal{W}=\tup{W, \leq,v}}$ be a $\mathcal{U}$-semiframe. \begin{enumerate}[(1)] \item A sequent ${\Gamma\Rightarrow E}$ is {\em locally true} in $a\in W$ iff $\Gamma\cup E\subseteq\mathcal{U}$, and either ${v(a,\psi)=f}$ for some ${\psi\in\Gamma}$, or $E=\{\varphi\}$ and ${v(a,\varphi)=t}$. \item A sequent is {\em true} (or {\em absolutely true}) in $a\in W$ iff it is locally true in every $b\geq a$. \item ${\mathcal{W}}$ is a {\em model} of a sequent $s$ iff $s$ is true in every $a\in W$ (equivalently, if $s$ is locally true in every $a\in W$). It is a model of a set $\mathcal{S}$ of sequents if it is a model of every $s\in{\mathcal{S}}$. \item ${\mathcal{W}}$ is a {\em model} of a formula $\varphi$ iff $v(a,\varphi)=t$ for every $a\in W$. It is a model of a theory $\mathcal{T}$ if it is a model of every $\varphi\in\mathcal{T}$. \end{enumerate} \end{defi}
\begin{rem} From the point of view of local truth, a sequent is understood according to its classical interpretation as a disjunction (either one of the formulas in its left side is ``false" or its right side is ``true"). On the other hand, the notion of absolute truth is based on viewing a sequent as expressing a real (constructive) entailment between its two sides. Note that because of the persistence condition, for sequents of the form $\Rightarrow\varphi$ there is no difference between local truth in $a$ and absolute truth in $a$. Obviously, ${\mathcal{W}}$ is a model of such a sequent iff it is a model of $\varphi$. \end{rem}
Persistence is the only general condition which is satisfied by the semantics of every coherent canonical system. In addition, to every specific canonical system corresponds a set of constraints which are directly related to its set of canonical rules. The idea is that a canonical rule for a connective $\diamond$ imposes restrictions on the truth-values that can be assigned to $\diamond$-formulas. Next we describe these restrictions.
\begin{defi} \label{substitution satisfaction, fulfil and respect} Let $\mathcal{W}=\tup{W,\leq,v}$ be a $\mathcal{U}$-semiframe. \begin{enumerate}[(1)] \item An $\mathcal{L}$-substitution $\sigma$ {\em (locally) satisfies} a sequent $\Gamma\Rightarrow E$ in $a\in W$ iff $\sigma(\Gamma)\Rightarrow\sigma(E)$ is (locally) true in $a$\footnote{When $E=\emptyset$, recall that $\sigma(\emptyset)=\emptyset$.}. \item An $\mathcal{L}$-substitution {\em fulfils} a canonical right-introduction rule in $a \in W$ (with respect to $\mathcal{W}$) iff it satisfies in $a$ every premise of the rule. \item An $\mathcal{L}$-substitution {\em fulfils} a canonical left-introduction rule in $a \in W$ (with respect to $\mathcal{W}$) iff it satisfies in $a$ every hard premise of the rule, and locally satisfies in $a$ every soft premise of the rule. \item Let $r$ be a canonical rule for an $n$-ary connective $\diamond$. $\mathcal{W}$ {\em respects} $r$ iff for every $a\in W$ and every substitution $\sigma$: if $\sigma$ fulfils $r$ in $a$ and $\sigma(\diamond(p_1 ,\dots, p_n))\in\mathcal{U}$ then $\sigma$ locally satisfies the conclusion of $r$ in $a$. \end{enumerate} \end{defi}
\noindent Note that absolute truth is used for premises of right introduction rules, as well as for hard premises of left introduction rules. Local truth is used only for soft premises of left introduction rules. This is the main difference between this semantics and the one described in \cite{AL10} for the strict framework. In \cite{AL10}, the difference between absolute and local truth corresponds to the syntactic distinction between definite and negative sequents (absolute truth is used for definite premises, and local truth is used for negative premises). In the present case, since negative sequents may also serve as premises of right introduction rules and as hard premises of left introduction rules, this syntactic distinction is irrelevant for the definition of the semantics.
\begin{rem} Because of the persistence condition, a definite sequent of the form $\Rightarrow \psi$ is satisfied in $a$ by $\sigma$ iff $v(a,\sigma(\psi))=t$. \end{rem}
\begin{exas} We describe the semantic effects of some rules from Example \ref{canonical rules examples}. \begin{enumerate}[\hbox to8 pt{
}] \item\noindent{\hskip-12 pt\bf Negation:}\
An $\mathcal{L}$-frame $\mathcal{W}=\tup{W, \leq,v}$ respects the rule $(\neg\Rightarrow)$ if ${v(a,\neg\psi)=f}$ whenever $v(a,\psi)=t$. Because of the persistence condition, if ${v(b,\neg\psi)=f}$ for some $b\geq a$ then ${v(a,\neg\psi)=f}$. And so, $\mathcal{W}$ respects $(\neg\Rightarrow)$ if ${v(a,\neg\psi)=f}$ whenever $v(b,\psi)=t$ for some $b \geq a$. It respects $(\Rightarrow \neg)$ if $v(a,\neg\psi)=t$ whenever $v(b,\psi)=f$ for every $b \geq a$. Hence the two rules together impose exactly the well-known Kripke semantics for intuitionistic negation.
\item\noindent{\hskip-12 pt\bf Implication:}\ An $\mathcal{L}$-frame $\mathcal{W}=\tup{W, \leq,v}$ respects the rule $(\supset\Rightarrow)$ iff for every $a\in W$, $v(a,\varphi\supset\psi)=f$ whenever $v(b,\varphi)=t$ for every $b \geq a$ and $v(a,\psi)=f$ (the latter -- because $\psi\Rightarrow$ is an instance of a soft premise). Because of the persistence condition, this is equivalent to $v(a,\varphi\supset\psi)=f$ whenever $v(a,\varphi)=t$ and $v(a,\psi)=f$. Again by the persistence condition, $v(a,\varphi\supset\psi)=f$ iff $v(b,\varphi\supset\psi)=f$ for some $b\geq a$. Hence, we get: $v(a,\varphi\supset\psi)=f$ whenever there exists $b\geq a$ such that $v(b,\varphi)=t$ and $v(b,\psi)=f$. $\mathcal{W}$ respects $(\Rightarrow\supset)$ iff for every $a\in W$, $v(a,\varphi\supset\psi)=t$ whenever for every $b\geq a$, either $v(b,\varphi)=f$ or $v(b,\psi)=t$. Hence the two rules together impose exactly the well-known Kripke semantics for intuitionistic implication (\cite{Kr65}). It is easy to verify that the same applies to conjunction and disjunction, using the usual rules for these connectives.
\item\noindent{\hskip-12 pt\bf Semi-Implication:}\ An $\mathcal{L}$-frame $\mathcal{W}=\tup{W, \leq,v}$ respects the rule $(\leadsto\Rightarrow)$ under the same conditions it respects $(\supset\Rightarrow)$. $\mathcal{W}$ respects $(\Rightarrow\leadsto)$ iff for every $a\in W$, ${v(a,\varphi\leadsto\psi)=t}$ whenever $v(a,\psi)=t$ (recall that this is equivalent to $v(b,\psi)=t$ for every $b\geq a$). Note that in this case the two rules for $\leadsto$ do not always determine the value assigned to $\varphi\leadsto\psi$: if $v(a,\psi)=f$, and there is no $b\geq a$ such that $v(b,\varphi)=t$ and $v(b,\psi)=f$, then $v(a,\varphi\leadsto\psi)$ is free to be either $t$ or $f$. So the semantics of this connective is {\em non-deterministic}.
\item\noindent{\hskip-12 pt\bf Converse Non-Implication:}\ An $\mathcal{L}$-frame ${\mathcal{W}=\tup{W, \leq,v}}$ respects the rule $(\not\subset\Rightarrow)$ provided that $v(a,\varphi\not\subset\psi)=f$ whenever for every $b \geq a$ either $v(b,\varphi)=t$ or $v(b,\psi)=f$. Because of the persistence condition, this is equivalent to $v(a,\varphi\not\subset\psi)=f$ if either there exists some $b \geq a$ such that $v(b,\varphi)=t$, or if $v(b,\psi)=f$ for every $b \geq a$. It respects $(\Rightarrow \not\subset)$ if $v(a,\varphi\not\subset\psi)=t$ whenever $v(b,\varphi)=f$ and $v(b,\psi)=t$ for every $b \geq a$. Because of the persistence condition, this is equivalent to $v(a,\varphi\not\subset\psi)=t$ whenever $v(a,\psi)=t$ and $v(b,\varphi)=f$ for every $b \geq a$. This implies that $v(a,\varphi\not\subset\psi)$ is free when $v(b,\varphi)=f$ for every $b \geq a$, $v(a,\psi)=f$, and there exists $b \geq a$ such that $v(b,\psi)=t$. For example, consider the following two $\{p_1,p_2,p_1\not\subset p_2\}$-semiframes:
\begin{center}
\setlength{\unitlength}{0.00033333in}
\begingroup\makeatletter\ifx\SetFigFont\undefined \gdef\SetFigFont#1#2#3#4#5{
\reset@font\fontsize{#1}{#2pt}
\fontfamily{#3}\fontseries{#4}\fontshape{#5}
\selectfont} \fi\endgroup {\renewcommand{30}{30} \begin{picture}(6000,2389)(0,-10)
\put(802,1830){\ellipse{566}{566}}
\put(4267,1815){\ellipse{566}{566}}
\put(3990,1005){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_1=f$}}}}
\put(3990,570){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_2=t$}}}}
\put(3660,135){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_1\not\subset p_2=t$}}}}
\put(15,105){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_1\not\subset p_2=f$}}}}
\put(345,540){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_2=f$}}}}
\put(345,975){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_1=f$}}}}
\put(4449.500,2063.559){\arc{541.556}{3.3895}{7.5171}} \path(4215.385,2250.392)(4187.000,2130.000)(4268.700,2222.871)
\path(3863.990,1799.041)(3984.000,1829.000)(3864.010,1859.041) \path(3984,1829)(1085,1830)
\put(579.500,2093.559){\arc{541.556}{1.9077}{6.0353}} \path(760.300,2252.871)(842.000,2160.000)(813.615,2280.392) \end{picture} } \ \ \ \ \ \begin{picture}(6000,2389)(0,-10)
\put(802,1830){\ellipse{566}{566}}
\put(4267,1815){\ellipse{566}{566}}
\put(3990,1005){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_1=f$}}}}
\put(3990,570){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_2=t$}}}}
\put(3660,135){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_1\not\subset p_2=t$}}}}
\put(15,105){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_1\not\subset p_2=t$}}}}
\put(345,540){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_2=f$}}}}
\put(345,975){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}$p_1=f$}}}}
\put(4449.500,2063.559){\arc{541.556}{3.3895}{7.5171}} \path(4215.385,2250.392)(4187.000,2130.000)(4268.700,2222.871)
\path(3863.990,1799.041)(3984.000,1829.000)(3864.010,1859.041) \path(3984,1829)(1085,1830)
\put(579.500,2093.559){\arc{541.556}{1.9077}{6.0353}} \path(760.300,2252.871)(842.000,2160.000)(813.615,2280.392) \end{picture}
\end{center}
\noindent While there is no difference between these two semiframes with respect to atomic formulas, the truth-values they assign to $p_1\not\subset p_2$ in one of the two worlds are different. Now both semiframes respect the two rules of $\not\subset$ (a small computational check of this is given right after these examples). Hence the semantics of this connective is non-deterministic.\footnote{Note that no semantic characterizations for ``converse non-implication" and ``not both" were presented in \cite{Bowen}, where these connectives were first introduced.}
\item\noindent{\hskip-12 pt\bf Not Both:}\ An $\mathcal{L}$-frame $\mathcal{W}=\tup{W, \leq,v}$ respects the rule $(\mid\Rightarrow)$ if ${v(a,\varphi\mid\psi)=f}$ whenever $v(b,\varphi)=t$ and $v(b,\psi)=t$ for every $b \geq a$. Because of the persistence condition, this is equivalent to $v(a,\varphi\mid\psi)=f$ whenever $v(b,\psi)=v(b,\varphi)=t$ for some $b \geq a$. It respects $(\Rightarrow \mid)_1$ if $v(a,\varphi\mid\psi)=t$ whenever $v(b,\varphi)=f$ for every $b \geq a$. It respects $(\Rightarrow \mid)_2$ if $v(a,\varphi\mid\psi)=t$ whenever $v(b,\psi)=f$ for every $b \geq a$. This implies that $v(a,\varphi\mid\psi)$ is free when there exist $b_1,b_2\geq a$ such that $v(b_1,\varphi)=v(b_2,\psi)=t$, but there does not exist $b\geq a$ such that $v(b,\varphi)=v(b,\psi)=t$ (this is possible because the order relation does not have to be linear). Again, the induced semantics is non-deterministic. \item\noindent{\hskip-12 pt\bf Affirmation:}\ An $\mathcal{L}$-frame $\mathcal{W}=\tup{W, \leq,v}$ respects the rule $(\vartriangleright\Rightarrow)$ if ${v(a,\vartriangleright\psi)=f}$ whenever ${v(a,\psi)=f}$. It respects ${(\Rightarrow \vartriangleright)}$ if $v(a,\vartriangleright\psi)=t$ whenever ${v(a,\psi)=t}$. This means that for every ${a \in W}$, $v(a,\vartriangleright\psi)$ simply equals ${v(a,\psi)}$. \item\noindent{\hskip-12 pt\bf Weak Affirmation:}\ An $\mathcal{L}$-frame $\mathcal{W}=\tup{W, \leq,v}$ respects the rule $(\blacktriangleright\Rightarrow)$ if ${v(a,\blacktriangleright\psi)=f}$ whenever $v(b,\psi)=f$ for every $b \geq a$. It respects $(\Rightarrow \blacktriangleright)$ if $v(a,\blacktriangleright\psi)=t$ whenever $v(b,\psi)=t$ for every $b\geq a$. Because of the persistence condition, this is equivalent to $v(a,\blacktriangleright\psi)=t$ whenever $v(a,\psi)=t$. This implies that $v(a,\blacktriangleright\psi)$ is free when $v(a,\psi)=f$ and $v(b,\psi)=t$ for some $b \geq a$. Again, we obtain non-deterministic semantics.
\end{enumerate} \end{exas}
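\noindent The following small Python sketch (given for illustration only; it is not part of the formal development, and all names in it are ad hoc, with \verb|nc| standing for $p_1\not\subset p_2$) encodes the two semiframes drawn in the item on converse non-implication, taking world $0$ to be the left world and world $1$ the right one, with $0\leq 1$. It verifies that both semiframes are persistent and respect the two rules for $\not\subset$, although they assign different values to $p_1\not\subset p_2$ in world $0$.
\begin{verbatim}
LEQ = {(0, 0), (0, 1), (1, 1)}           # the partial order: 0 <= 1
def above(a):
    return [b for b in (0, 1) if (a, b) in LEQ]

def persistent(v):
    return all(not v[a, x] or v[b, x]
               for x in ("p1", "p2", "nc") for a in (0, 1) for b in above(a))

def respects_not_subset(v):
    """v maps (world, formula) to a truth value; 'nc' is p1 not-subset p2."""
    for a in (0, 1):
        # (not-subset =>): hard premise  p2 => p1  absolutely true at a
        if all((not v[b, "p2"]) or v[b, "p1"] for b in above(a)) and v[a, "nc"]:
            return False
        # (=> not-subset): premises  p1 =>  and  => p2  absolutely true at a
        if all(not v[b, "p1"] for b in above(a)) and \
           all(v[b, "p2"] for b in above(a)) and not v[a, "nc"]:
            return False
    return True

base = {(0, "p1"): False, (0, "p2"): False, (1, "p1"): False, (1, "p2"): True}
frame1 = {**base, (0, "nc"): False, (1, "nc"): True}    # first semiframe
frame2 = {**base, (0, "nc"): True,  (1, "nc"): True}    # second semiframe
for v in (frame1, frame2):
    print(persistent(v), respects_not_subset(v))         # True True (twice)
\end{verbatim}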
\begin{defi} \label{G-legal} Let ${{\bf G}}$ be a canonical system. A $\mathcal{U}$-semiframe is {\em ${{\bf G}}$-legal} iff it respects all the canonical rules of ${{\bf G}}$. \end{defi}
\indent We can now give the definition of the semantic relations induced by a canonical system:
\begin{defi} Let ${{\bf G}}$ be a coherent canonical system, and let $\mathcal{S} \cup \{s\}$ be a set of sequents. ${\mathcal{S}\vDash_{\bf G}^{seq} s}$ iff every ${{\bf G}}$-legal $\mathcal{L}$-frame which is a model of $\mathcal{S}$ is also a model of ${s}$. \end{defi}
\begin{defi} Let ${{\bf G}}$ be a coherent canonical system. The semantic etcr $\vDash_{\bf G}$ between {\em formulas} which is induced by ${{\bf G}}$ is defined by: $\mathcal{T}\vDash_{\bf G} E$ iff every ${{\bf G}}$-legal $\mathcal{L}$-frame which is a model of $\mathcal{T}$ is also a model of $E$. \end{defi}
\noindent Again we have:
\begin{prop} \label{reduction2} $\mathcal{T}\vDash_{{\bf G}} E$ iff $\{\Rightarrow\psi{\ |\ }\psi\in\mathcal{T}\}\vDash_{{\bf G}}^{seq}\Rightarrow E$. \qed \end{prop}
\section{Soundness, Completeness, Cut-elimination} In this section we show that the syntactic and semantic consequence relations between sequents which are induced by a given coherent canonical system are identical. In addition, we present a semantic proof of cut-elimination for arbitrary coherent canonical systems. There are many similarities between the proofs of this section and the corresponding proofs in \cite{AL10}. However, the proofs in \cite{AL10} correspond to different definitions, and so, for the sake of completeness, we include here the full proofs.
\begin{thm} \label{soundness} Every coherent canonical system ${{\bf G}}$ is strongly sound with respect to the semantics of ${{\bf G}}$-legal frames. In other words: If $\mathcal{S}\vdash_{{\bf G}}^{seq}s$ then $\mathcal{S}\vDash_{{\bf G}}^{seq}s$. \end{thm}
\sloppy \proof Assume that ${\mathcal{S}\vdash_{\bf G}^{seq}s}$, and ${\mathcal{W}=\tup{W, \leq,v}}$ is a ${{\bf G}}$-legal model of $\mathcal{S}$. We show that $s$ is locally true in every ${a\in W}$. Since the axioms of ${{\bf G}}$ and the assumptions of $\mathcal{S}$ trivially have this property, and the cut and weakening rules obviously preserve it, it suffices to show that the property of being locally true in every $a\in W$ is also preserved by applications of the logical rules of ${{\bf G}}$. \begin{enumerate}[$\bullet$] \item Suppose ${\Gamma\Rightarrow \sigma(\diamond(p_1 ,\dots, p_n))}$ is derived from ${\{\Gamma,\sigma(\Pi_i) \Rightarrow \sigma(E_i)\}_{1 \leq i \leq m}}$ using the rule ${r=\{\Pi_i \Rightarrow E_i\}_{1 \leq i \leq m} / \Rightarrow \diamond(p_1 ,\dots, p_n)}$. Assume that all the premises of this application have the required property. We show that so does its conclusion. Let ${a\in W}$. If ${v(a,\psi)=f}$ for some ${\psi\in\Gamma}$, then obviously ${\Gamma\Rightarrow \sigma(\diamond(p_1 ,\dots, p_n))}$ is locally true in ${a}$. Assume otherwise. Then the persistence condition implies that ${v(b,\psi)=t}$ for every ${\psi\in\Gamma}$ and ${b\geq a}$. Thus our assumption concerning the sequents ${\{\Gamma,\sigma(\Pi_i) \Rightarrow \sigma(E_i)\}_{1 \leq i \leq m}}$ entails that for every ${b\geq a}$ and ${1 \leq i \leq m}$, either ${v(b,\psi)=f}$ for some ${\psi\in\sigma(\Pi_i)}$, or $E_i=\{q_i\}$ (i.e. $E_i$ is not empty) and ${v(b,\sigma(q_i))=t}$. It follows that for ${1 \leq i \leq m}$, ${\Pi_i \Rightarrow E_i}$ is satisfied in ${a}$ by $\sigma$. Thus, $\sigma$ fulfils $r$ in $a$. Since ${\mathcal{W}}$ respects $r$, it follows that ${v(a,\sigma(\diamond(p_1 ,\dots, p_n)))=t}$. \item
Now we deal with left-introduction rules.
Suppose ${\Gamma, \sigma(\diamond(p_1 ,\dots, p_n))\Rightarrow E}$ is derived from ${\{\Gamma,\sigma(\Pi_i) \Rightarrow \sigma(E_i)\}_{1 \leq i \leq m}}$ and ${\{\Gamma,\sigma(\Sigma_i) \Rightarrow E\}_{1 \leq i \leq k}}$, using the left-introduction rule ${r=\tup{\{\Pi_i \Rightarrow E_i\}_{1 \leq i \leq m}, \{\Sigma_i \Rightarrow \}_{1 \leq i \leq k}}/\diamond(p_1 ,\dots, p_n)\Rightarrow}$. Assume that all the premises of this application have the required property. We show that so does its conclusion. Let ${a\in W}$. If ${v(a,\psi)=f}$ for some ${\psi\in\Gamma}$ or ${E=\{\theta\}}$ and ${v(a,\theta)=t}$, then we are done. Assume otherwise. Then $E$ is either empty or ${E=\{\theta\}}$ and ${v(a,\theta)=f}$, and (by the persistence condition) ${v(b,\psi)=t}$ for every ${\psi\in\Gamma}$ and ${b\geq a}$. Thus our assumption concerning the sequents ${\{\Gamma,\sigma(\Pi_i) \Rightarrow \sigma(E_i)\}_{1 \leq i \leq m}}$ entails that for every ${b\geq a}$ and ${1 \leq i \leq m}$, either ${v(b,\psi)=f}$ for some ${\psi\in\sigma(\Pi_i)}$, or ${E_i=\{q_i\}}$ and ${v(b,\sigma(q_i))=t}$. This immediately implies that the hard premises of $r$ are satisfied in ${a}$ by $\sigma$. Since $E$ is either empty or ${E=\{\theta\}}$ and ${v(a,\theta)=f}$, our assumption concerning ${\{\Gamma,\sigma(\Sigma_i) \Rightarrow E\}_{1 \leq i \leq k}}$ entails that for every ${1 \leq i \leq k}$, ${v(a,\psi)=f}$ for some ${\psi\in\sigma(\Sigma_i)}$. Hence the soft premises of $r$ are locally satisfied in ${a}$ by $\sigma$. Thus, $\sigma$ fulfils $r$ in $a$. Since ${\mathcal{W}}$ respects $r$, it follows that ${v(a,\sigma(\diamond(p_1 ,\dots, p_n)))=f}$. \qed\end{enumerate} \fussy
For the converse, we define {\em $\mathcal{S}$-proofs} and prove the following key result.
\begin{defi} \label{S-proof} Let ${\mathcal{S}}$ be a set of sequents. A proof $P$ in a canonical system is called an {\em $\mathcal{S}$-proof} iff the cut formula of every cut in $P$ occurs in $\mathcal{S}$. \end{defi}
\begin{thm} \label{key} Let ${{\bf G}}$ be a coherent canonical system in $\mathcal{L}$, and let $\mathcal{S}\cup \{s\}$ be a set of sequents in $\mathcal{L}$. Then either there is an $\mathcal{S}$-proof of $s$ from $\mathcal{S}$ in ${\bf G}$, or there is a ${{\bf G}}$-legal $\mathcal{L}$-frame which is a model of $\mathcal{S}$, but not a model of $s$. \end{thm} \proof Assume that ${s=\Gamma_0\Rightarrow E_0}$ does not have an $\mathcal{S}$-proof from $\mathcal{S}$ in ${{\bf G}}$. We construct a ${{\bf G}}$-legal $\mathcal{L}$-frame ${\mathcal{W}}$ which is a model of $\mathcal{S}$ but not of ${s}$. Let ${\mathcal{U}}$ be the set of subformulas of ${\mathcal{S}\cup \{s\}}$. Given a subset $E$ of $\mathcal{U}$ which is either a singleton or empty, call a theory ${\mathcal{T}\subseteq\mathcal{U}}$ {\em $E$-maximal} if there is no finite ${\Gamma\subseteq\mathcal{T}}$ such that ${\Gamma\Rightarrow E}$ has an $\mathcal{S}$-proof from $\mathcal{S}$, but every proper extension ${\mathcal{T}^\prime\subseteq\mathcal{U}}$ of $\mathcal{T}$ contains such a finite subset $\Gamma$. Obviously, if ${\Gamma\cup E \subseteq\mathcal{U}}$ and ${\Gamma\Rightarrow E}$ has no $\mathcal{S}$-proof from $\mathcal{S}$, then ${\Gamma}$ can be extended to a theory ${\mathcal{T}\subseteq\mathcal{U}}$ which is $E$-maximal. In particular: ${\Gamma_0}$ can be extended to an ${E_0}$-maximal theory ${\mathcal{T}_0}$.
Now let ${\mathcal{W}=\tup{W,\subseteq,v}}$, where: \begin{enumerate}[$\bullet$] \item ${W}$ is the set of all extensions of ${\mathcal{T}_0}$ in ${\mathcal{U}}$ which are $E$-maximal for some ${E\subseteq\mathcal{U}}$ (recall that $E$ is either singleton or empty). \item ${v}$ is defined inductively as follows. For atomic formulas:
\[v(\mathcal{T}, p)= \left\{ \begin{array}{ll} t & p\in\mathcal{T}\\ f & p\not\in\mathcal{T} \end{array} \right.\] Suppose ${v(\mathcal{T}, \psi_i)}$ has been defined for every ${\mathcal{T}\in W}$ and ${1 \leq i \leq n}$. \\We let ${v(\mathcal{T}, \diamond(\psi_1 ,\dots, \psi_n))=t}$ iff at least one of the following holds with respect to the semiframe constructed so far: \begin{enumerate}[(1)] \item There exists a right-introduction rule for $\diamond$ which is fulfilled in $\mathcal{T}$ by a substitution $\sigma$ such that ${\sigma(p_i)=\psi_i}$ (${1\leq i\leq n}$). \item
\sloppy
${\diamond(\psi_1 ,\dots, \psi_n) \in \mathcal{T}}$, and there do not exist ${\mathcal{T}^\prime\in W}$ and a left-introduction rule $r$ for $\diamond$, such that ${\mathcal{T}\subseteq\mathcal{T}^\prime}$, and $r$ is fulfilled in ${\mathcal{T}^\prime}$ by a substitution $\sigma$ such that ${\sigma(p_i)=\psi_i}$ (${1\leq i\leq n}$).
\fussy \end{enumerate} \end{enumerate}
\noindent First we prove that ${\mathcal{W}}$ is an $\mathcal{L}$-frame: \begin{enumerate}[$\bullet$]
\item ${W}$ is not empty because ${\mathcal{T}_0\in W}$.
\item We prove by structural induction that ${v}$ is persistent:\\ For atomic formulas ${v}$ is trivially persistent since the order is ${\subseteq}$.\\ Assume that ${v}$ is persistent for ${\psi_1 ,\dots, \psi_n}$. We prove its persistence for ${\diamond(\psi_1 ,\dots, \psi_n)}$. So assume that ${v(\mathcal{T},\diamond(\psi_1 ,\dots, \psi_n))=t}$ and ${\mathcal{T}\subseteq\mathcal{T}^*}$. By the definition of $v$ there are two possibilities: \begin{enumerate}[(1)] \item There exists a right-introduction rule for $\diamond$ which is fulfilled in $\mathcal{T}$ by a substitution $\sigma$ such that ${\sigma(p_i)=\psi_i}$ (${1\leq i\leq n}$). This is also trivially true in ${\mathcal{T}^*}$, and so ${v(\mathcal{T}^*,\diamond(\psi_1 ,\dots, \psi_n))=t}$. \item
\sloppy ${\diamond(\psi_1 ,\dots, \psi_n) \in \mathcal{T}}$, and there do not exist ${\mathcal{T}^\prime\in W}$ and a left-introduction rule $r$ for $\diamond$, such that ${\mathcal{T}\subseteq\mathcal{T}^\prime}$,
and $r$ is fulfilled in ${\mathcal{T}^\prime}$ by a substitution $\sigma$ such that ${\sigma(p_i)=\psi_i}$ (${1\leq i\leq n}$). Then ${\diamond(\psi_1 ,\dots, \psi_n) \in \mathcal{T}^*}$ (since ${\mathcal{T}\subseteq\mathcal{T}^*}$), and there cannot exist ${\mathcal{T}^\prime\in W}$ and a left-introduction rule $r$ for $\diamond$, such that ${\mathcal{T}^*\subseteq\mathcal{T}^\prime}$, and $r$ is fulfilled in ${\mathcal{T}^\prime}$ by such a substitution $\sigma$ (otherwise the same would hold for $\mathcal{T}$). Hence ${v(\mathcal{T}^*,\diamond(\psi_1 ,\dots, \psi_n))=t}$ in this case too.
\fussy
\end{enumerate} \end{enumerate}
\noindent Next we prove that ${\mathcal{W}}$ is ${{\bf G}}$-legal: \begin{enumerate}[(1)] \item The right-introduction rules are directly respected by the first
condition in the definition of $v$. \item Let $r$ be a left-introduction rule for $\diamond$, and let $\mathcal{T}\in W$. Suppose that $r$ is fulfilled in $\mathcal{T}$ by a substitution $\sigma$, such that ${\sigma(p_i)=\psi_i}$ (${1\leq i\leq n}$). Then neither of the conditions under which $v(\mathcal{T},\diamond(\psi_1 ,\dots, \psi_n))=t$ can hold: \begin{enumerate}[(a)] \item The second condition explicitly excludes the option that $r$ is fulfilled by $\sigma$ (in any ${\mathcal{T}^\prime\in W}$ such that
${\mathcal{T}\subseteq\mathcal{T}^\prime}$, including $\mathcal{T}$ itself). \item The first condition cannot be met because the coherence of ${\bf G}$ does not allow the sets of premises (of a right-introduction rule and a left-introduction rule for the same connective) to be locally satisfied together. Hence the two rules cannot be both fulfilled by the same substitution in the same element of $W$. To see this, assume by way of contradiction that ${S_1}$ and ${S_2}$ are the sets of premises of a left-introduction rule for $\diamond$, ${S_3}$ is the set of premises of a right-introduction rule for $\diamond$, and there exists ${\mathcal{T}\in W}$ in which the three sets of premises are locally satisfied by a substitution $\sigma$ such that ${\sigma(p_i)=\psi_i}$ (${1\leq i\leq n}$). Let ${u}$ be an assignment in ${\{t,f\}}$ in which ${u(p_i)=v(\mathcal{T},\psi_i)}$. Since $\sigma$ locally satisfies in $\mathcal{T}$ the three sets of premises, ${u}$ classically satisfies ${S_1}$, ${S_2}$ and ${S_3}$. This contradicts the coherence of ${{\bf G}}$. \end{enumerate} It follows that $v(\mathcal{T},\diamond(\psi_1 ,\dots, \psi_n))=f$, as required. \end{enumerate}
\noindent It remains to prove that ${\mathcal{W}}$ is a model of $\mathcal{S}$ but not of ${s}$. For this we first prove that the following hold for every ${\mathcal{T}\in W}$ and every formula ${\psi\in\mathcal{U}}$:
\begin{enumerate}[\bf(a):] \item If ${\psi \in \mathcal{T}}$ then ${v(\mathcal{T},\psi)=t}$. \item If $\mathcal{T}$ is ${\{\psi\}}$-maximal then ${v(\mathcal{T},\psi)=f}$. \end{enumerate}
\noindent We prove {\bf (a)} and {\bf (b)} together by a simultaneous induction on the complexity of $\psi$. For atomic formulas they easily follow from the definition of $v$, and the fact that ${p\Rightarrow p}$ is an axiom. For the induction step, assume that {\bf (a)} and {\bf (b)} hold for ${\psi_1 ,\dots, \psi_n\in\mathcal{U}}$. We prove them for ${\diamond(\psi_1 ,\dots, \psi_n)\in\mathcal{U}}$.
\begin{enumerate}[$\bullet$] \item Assume that ${\diamond(\psi_1 ,\dots, \psi_n)\in\mathcal{T}}$, but ${v(\mathcal{T},\diamond(\psi_1 ,\dots, \psi_n))=f}$.
By the definition of $v$, since ${\diamond(\psi_1 ,\dots, \psi_n)\in\mathcal{T}}$ there should
exist ${\mathcal{T}^\prime\in W}$, ${\mathcal{T}\subseteq\mathcal{T}^\prime}$, and a
left-introduction rule, ${r=\tup{\{\Pi_i \Rightarrow E_i\}_{1 \leq i \leq
m},\{\Sigma_i \Rightarrow \}_{1 \leq i \leq k}} / \diamond(p_1 ,\dots, p_n) \Rightarrow}$,
fulfilled in ${\mathcal{T}^\prime}$ by a substitution $\sigma$ such that
${\sigma(p_i)=\psi_i}$ (${1\leq i\leq n}$). As $\sigma$ locally
satisfies in ${\mathcal{T}^\prime\!}$ every sequent in ${\{\Sigma_i\Rightarrow\}_{1
\leq i \leq k}}$, then for every ${1 \leq i \leq k}$ there
exists ${\psi_{j_i}\in\sigma(\Sigma_i)}$ with
${v(\mathcal{T}^\prime,\psi_{j_i})=f}$. By the induction hypothesis this
implies that for every ${1 \leq i\leq k}$, there exists
${\psi_{j_i}\in\sigma(\Sigma_i)}$ such that
${\psi_{j_i}\notin\mathcal{T}^\prime}$. Let $E$ be the set for which
${\mathcal{T}^\prime}$ is maximal. Then for every ${1 \leq i \leq k}$ there
is a finite ${\Delta_i\subseteq\mathcal{T}^\prime}$ such that ${\Delta_i,\psi_{j_i}\Rightarrow E}$
has an $\mathcal{S}$-proof from $\mathcal{S}$, and therefore
${\Delta_i,\sigma(\Sigma_i)\Rightarrow E}$ has such a proof. This in turn
implies that there must exist ${1 \leq i_0 \leq m}$ such that
${\Gamma,{\sigma(\Pi_{i_0})\Rightarrow \sigma(E_{i_0})}}$ has no $\mathcal{S}$-proof
from $\mathcal{S}$ for any finite ${\Gamma\subseteq\mathcal{T}^\prime}$. Indeed, if such a
proof exists for every ${1 \leq i \leq m}$, we would use the $k$
proofs of ${\Delta_i,\sigma(\Sigma_i)\Rightarrow E}$ for ${1 \leq i \leq k}$,
the $m$ proofs for ${\Gamma_i,\sigma(\Pi_i)\Rightarrow \sigma(E_i)}$ for ${1
\leq i \leq m}$, some trivial weakenings, and the
left-introduction rule $r$ to get an $\mathcal{S}$-proof from $\mathcal{S}$ of the
sequent ${\cup_{i=1}^{i=k}\Delta_i,\cup_{i=1}^{i=m}\Gamma_i,\diamond(\psi_1 ,\dots, \psi_n)\Rightarrow
E}$. Since ${\diamond(\psi_1 ,\dots, \psi_n)\in\mathcal{T}}$, this would contradict the
$E$-maximality of ${\mathcal{T}^\prime}$. Using this ${i_0}$, extend
${\mathcal{T}^\prime\cup\sigma(\Pi_{i_0})}$ to a ${\sigma(E_{i_0})}$-maximal
theory ${\mathcal{T}^{\prime\prime}}$. By the induction hypothesis,
${v(\mathcal{T}^{\prime\prime},\psi)=t}$ for every
${\psi\in\sigma(\Pi_{i_0})}$, and if ${E_{i_0}=\{q\}}$
(i.e. $E_{i_0}$ is not empty) then
${v(\mathcal{T}^{\prime\prime},\sigma(q))=f}$. Since
${\mathcal{T}^\prime\subseteq\mathcal{T}^{\prime\prime}}$, this contradicts the fact that
$\sigma$ satisfies ${\Pi_{i_0}\Rightarrow E_{i_0}}$ in ${\mathcal{T}^\prime}$.
\item Assume that $\mathcal{T}$ is ${\{\diamond(\psi_1 ,\dots, \psi_n)\}}$-maximal, but that ${v(\mathcal{T},\diamond(\psi_1 ,\dots, \psi_n))=t}$. Obviously, ${\diamond(\psi_1 ,\dots, \psi_n)\notin\mathcal{T}}$ (because ${\diamond(\psi_1 ,\dots, \psi_n)\Rightarrow\diamond(\psi_1 ,\dots, \psi_n)}$ is an axiom). Hence there exists a right-introduction rule, ${r=\{\Pi_i \Rightarrow E_i\}_{1 \leq i \leq m} / \Rightarrow \diamond(p_1 ,\dots, p_n)}$, which is fulfilled in ${\mathcal{T}}$ by a substitution $\sigma$ such that ${\sigma(p_i)=\psi_i}$ (${1\leq i\leq n}$). As in the previous case, there must exist ${1 \leq i_0 \leq m}$ such that ${\Gamma,\sigma(\Pi_{i_0})\Rightarrow \sigma(E_{i_0})}$ has no $\mathcal{S}$-proof from $\mathcal{S}$ for any finite ${\Gamma\subseteq\mathcal{T}}$ (if such a proof exists for every ${1 \leq i \leq m}$ with finite ${\Gamma_i\subseteq\mathcal{T}}$ then we could have an $\mathcal{S}$-proof from $\mathcal{S}$ of ${\cup_{i=1}^{i=m}\Gamma_i\Rightarrow\diamond(\psi_1 ,\dots, \psi_n)}$ using the ${m}$ proofs of ${\Gamma_i,\sigma(\Pi_i)\Rightarrow \sigma(E_i)}$, some weakenings and $r$). Using this ${i_0}$, extend ${\mathcal{T}\cup\sigma(\Pi_{i_0})}$ to a ${\sigma(E_{i_0})}$-maximal theory ${\mathcal{T}^\prime}$. By the induction hypothesis ${v(\mathcal{T}^\prime,\psi)=t}$ for every ${\psi\in\sigma(\Pi_{i_0})}$, and if ${E_{i_0}=\{q\}}$ (i.e. $E_{i_0}$ is not empty) then ${v(\mathcal{T}^\prime,\sigma(q))=f}$. Since ${\mathcal{T}\subseteq\mathcal{T}^\prime}$, this contradicts the fact that $\sigma$ satisfies ${\Pi_{i_0}\Rightarrow E_{i_0}}$ in $\mathcal{T}$. \end{enumerate}
\noindent Next we note that {\bf (b)} can be strengthened as follows:
\begin{enumerate}[\hbox to8 pt{
}] \item[(c)] If ${\psi\in\mathcal{U}}$, ${\mathcal{T}\in W}$ and there is no finite ${\Gamma\subseteq\mathcal{T}}$ such that ${\Gamma\Rightarrow\psi}$ has an $\mathcal{S}$-proof from $\mathcal{S}$, then ${v(\mathcal{T},\psi)=f}$. \end{enumerate} \noindent Indeed, under these conditions $\mathcal{T}$ can be extended to a $\{\psi\}$-maximal theory $\mathcal{T}^\prime$. Now $\mathcal{T}^\prime\in W$, $\mathcal{T}\subseteq\mathcal{T}^\prime$, and by {\bf (b)}, $v(\mathcal{T}^\prime,\psi)=f$. Hence also ${v(\mathcal{T},\psi)=f}$.
Now {\bf (a)} and {\bf (b)} together imply that ${v(\mathcal{T}_0,\psi)=t}$ for every ${\psi\in\Gamma_0\subseteq\mathcal{T}_0}$, and if $E_0={\{\theta\}}$ (i.e. $E_0$ is not empty) then ${v(\mathcal{T}_0,\theta)=f}$. Hence ${\mathcal{W}}$ is not a model of ${s}$. We end the proof by showing that ${\mathcal{W}}$ is a model of $\mathcal{S}$. So let ${\psi_1 ,\dots, \psi_n\Rightarrow E\in \mathcal{S}}$ and let ${\mathcal{T}\in W}$, where $\mathcal{T}$ is $F$-maximal. Assume by way of contradiction that ${\psi_1 ,\dots, \psi_n\Rightarrow E}$ is not locally true in $\mathcal{T}$. Therefore, ${v(\mathcal{T},\psi_i)=t}$ for $1\leq i\leq n$. By {\bf (c)}, for every $1\leq i\leq n$ there is a finite ${\Gamma_i\subseteq\mathcal{T}}$ such that ${\Gamma_i\Rightarrow\psi_i}$ has an $\mathcal{S}$-proof from $\mathcal{S}$. Now, there are two cases: \begin{enumerate}[(1)] \item Assume ${E=\{\theta\}}$. Since ${\psi_1 ,\dots, \psi_n\Rightarrow \theta}$ is not locally true in $\mathcal{T}$, ${v(\mathcal{T},\theta)=f}$. This implies (by {\bf (a)}) that ${\theta \notin \mathcal{T}}$. Since $\mathcal{T}$ is $F$-maximal, it follows that there is a finite ${\Delta\subseteq\mathcal{T}}$ such that ${\Delta,\theta\Rightarrow F}$ has an $\mathcal{S}$-proof from $\mathcal{S}$. Now from ${\Gamma_i\Rightarrow\psi_i}$ ($1\leq i\leq n$), ${\Delta,\theta\Rightarrow F}$, and ${\psi_1 ,\dots, \psi_n\Rightarrow\theta}$ one can infer $\Gamma_1,\dots,\Gamma_n,\Delta\Rightarrow F$ by $n+1$ $\mathcal{S}$-cuts (on $\psi_1 ,\dots, \psi_n$ and $\theta$). Hence, $\Gamma_1,\dots,\Gamma_n,\Delta\Rightarrow F$ has an $\mathcal{S}$-proof from $\mathcal{S}$. \item Assume $E$ is empty. $\Gamma_1,\dots,\Gamma_n\Rightarrow $ follows from the sequents ${\Gamma_i\Rightarrow\psi_i}$ ($1\leq i\leq n$) and ${\psi_1 ,\dots, \psi_n\Rightarrow }$ by $n$ $\mathcal{S}$-cuts (on $\psi_1 ,\dots, \psi_n$). Using weakening (if $F$ is not empty), it follows that $\Gamma_1,\dots,\Gamma_n\Rightarrow F$ has an $\mathcal{S}$-proof from $\mathcal{S}$. \end{enumerate} In both cases we showed an $\mathcal{S}$-proof from $\mathcal{S}$ of a sequent of the form $\Gamma \Rightarrow F$, where $\Gamma\subseteq\mathcal{T}$. This contradicts the $F$-maximality of $\mathcal{T}$. \qed
\begin{rem} This proof suggests that weakening on the right side of sequents can be limited to apply only to negative sequents of the set of assumptions of the derivation. Recall that by Proposition \ref{reduction1}, $\mathcal{T}\vdash_{{\bf G}} E$ iff $\{\Rightarrow\psi{\ |\ }\psi\in\mathcal{T}\}\vdash_{{\bf G}}^{seq}\Rightarrow E$. Thus if one is only interested in consequence relations
between formulas, there are no negative sequents in the set of assumptions, and so the right weakening rule is superfluous. \end{rem}
\sloppy \begin{thm}[Soundness and Completeness] \label{completeness} Every coherent canonical system ${{\bf G}}$ is strongly sound and complete with respect to the semantics of ${{\bf G}}$-legal frames. In other words: \begin{enumerate}[(1)] \item $\mathcal{S}\vdash_{{\bf G}}^{seq}s$ iff $\mathcal{S}\vDash_{{\bf G}}^{seq}s$. \item $\mathcal{T}\vdash_{{\bf G}} E$ iff $\mathcal{T}\vDash_{{\bf G}} E$. \end{enumerate} \end{thm}
\fussy \proof ($1$) is immediate from Theorem \ref{key} and Theorem \ref{soundness}. ($2$) follows from ($1$) using the reductions given in Proposition \ref{reduction1} and Proposition \ref{reduction2}. \qed
\begin{cor}[Compactness] Let ${{\bf G}}$ be a coherent canonical system. If $\mathcal{S}\vDash^{seq}_{{{\bf G}}}s$ then there exists a finite $\mathcal{S}^{\prime}\subseteq\mathcal{S}$ such that $\mathcal{S}^{\prime}\vDash^{seq}_{{{\bf G}}}s$. \qed \end{cor}
We use Theorem \ref{key} to prove a general cut-elimination theorem.
\begin{defi} \label{cut elimination} Let $s$ be a sequent, ${\mathcal{S}}$ be a set of sequents, and ${{\bf G}}$ be a canonical system. \begin{enumerate}[(1)] \item ${{\bf G}}$ admits cut-elimination iff whenever ${\vdash_{\bf G}^{seq}s}$, there exists a proof of ${s}$ without cuts (i.e. there exists a $\emptyset$-proof). \item (\cite{Av93}) ${{\bf G}}$ admits strong cut-elimination iff whenever ${\mathcal{S}\vdash_{\bf G}^{seq}s}$, there exists an $\mathcal{S}$-proof of ${s}$ from $\mathcal{S}$. \end{enumerate} \end{defi}
Notice that cut-elimination is a special case of strong cut-elimination with an empty $\mathcal{S}$. Also notice that by cut-elimination we mean here just the existence of proofs without (certain forms of) cuts, rather than an algorithm to transform a given proof to a cut-free one (for the assumption-free case the term {\em cut-admissibility} is sometimes used).
\begin{thm}[General Strong Cut-Elimination Theorem] \label{cut-elimination} Every coherent canonical system ${{\bf G}}$ admits strong cut-elimination. \end{thm} \proof Assume ${\mathcal{S}\vdash_{\bf G}^{seq}s}$. By Theorem \ref{completeness}, ${\mathcal{S}\vDash_{\bf G}^{seq}s}$, and so there does not exist a ${{\bf G}}$-legal $\mathcal{L}$-frame which is a model of $\mathcal{S}$, but not a model of $s$. By Theorem \ref{key}, there is an $\mathcal{S}$-proof of $s$ from $\mathcal{S}$. \qed
\begin{rem} \label{hyper-cut} In \cite{Av93}, a strengthening of the cut-elimination theorem was suggested for Gentzen's original systems for classical logic. The notion of a {\em hyper-resolution} rule (or {\em hyper-cut} rule) was defined, and it was proven that this special kind of cuts is the only one needed in derivations of a sequent from a non-empty set of sequents. Following the proof of Theorem \ref{key}, we can show the same in the present case. Let {\em hyper-cut}$_1$ and {\em hyper-cut}$_2$ be the rules which allow the following two derivations: \[ \derc{\psi_1 ,\dots, \psi_n\Rightarrow \theta}{\Gamma_1\Rightarrow\psi_1 \ \ \ldots \ \ \Gamma_n\Rightarrow\psi_n}{\Delta,\theta\Rightarrow F} {\Gamma_1 ,\dots, \Gamma_n,\Delta\Rightarrow F} \] \[ \derb{\psi_1 ,\dots, \psi_n\Rightarrow}{\Gamma_1\Rightarrow\psi_1 \ \ \ldots \ \ \Gamma_n\Rightarrow\psi_n} {\Gamma_1 ,\dots, \Gamma_n\Rightarrow} \] Call $\psi_1 ,\dots, \psi_n \Rightarrow E$, where $E=\{\theta\}$ in the first derivation and empty in the second, the {\em nucleus} of the rule. The last theorem can be strengthened as follows: if ${\mathcal{S}\vdash_{\bf G}^{seq}s}$, then there exists a proof of $s$ from $\mathcal{S}$, which uses only axioms, canonical rules, weakenings and hyper-cuts with elements of $\mathcal{S}$ as nuclei. \end{rem}
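\noindent For instance, for $n=1$ and $E=\{\theta\}$, an application of {\em hyper-cut}$_1$ with nucleus $\psi_1\Rightarrow\theta$ derives $\Gamma_1,\Delta\Rightarrow F$ from $\Gamma_1\Rightarrow\psi_1$ and $\Delta,\theta\Rightarrow F$; it thus amounts to two consecutive ordinary cuts (first on $\psi_1$, then on $\theta$), both of whose cut formulas occur in the nucleus.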
\begin{cor} \label{equivalences} The conditions below are equivalent for a canonical system ${{\bf G}}$: \begin{enumerate}[(1)] \item ${{\bf G}}$ is strongly consistent. \item ${{\bf G}}$ is coherent. \item ${{\bf G}}$ admits strong cut-elimination. \end{enumerate} \end{cor} \proof (1) implies (2) by Theorem \ref{strong consistency -> coherence}. (2) implies (3) by Theorem \ref{cut-elimination}. Finally, in a canonical system the only sequents which are provable from $\{(\Rightarrow p_1),(p_2\Rightarrow)\}$ using only cuts on $p_1$ or $p_2$ are: axioms, sequents of the form $\Gamma\Rightarrow p_1$, sequents of the form $\Gamma,p_2\Rightarrow E$, and sequents that contain a non-atomic formula. Thus there is no way to derive $\Rightarrow$ from $\{(\Rightarrow p_1),(p_2\Rightarrow)\}$, using only cuts on $p_1$ or $p_2$. Hence (3) implies (1). \qed
\begin{cor} If ${{\bf G}}$ is a coherent canonical system in $\mathcal{L}$ then $\tup{\mathcal{L},\vDash_{{\bf G}}}$ (or equivalently $\tup{\mathcal{L},\vdash_{{\bf G}}}$) is an extended logic. \qed \end{cor}
\subsection{Strict Canonical Systems}
In \cite{AL10} {\em strict} single-conclusion canonical systems were investigated. These systems are canonical systems, in which derivations can only contain definite sequents. Now we show that the results of \cite{AL10} about these systems can be derived from results of the present paper. For this purpose, we concentrate on a smaller set of canonical systems, for which we are able to strengthen Corollary \ref{equivalences}.
\begin{defi} \label{definite system} A canonical system is called {\em definite} if its right-introduction rules have only definite clauses as premises, and its left-introduction rules have only definite clauses as hard premises. \end{defi}
\begin{exa} Every canonical system in which the set of logical rules is a subset of the set of rules for $\supset,\perp,\leadsto,\vartriangleright$ (of Example \ref{canonical rules examples}) is definite. \end{exa}
\begin{cor} \label{equivalences definite} The conditions below are equivalent for a definite canonical system ${{\bf G}}$: \begin{enumerate}[(1)] \item ${{\bf G}}$ is strongly consistent. \item ${{\bf G}}$ is coherent. \item ${{\bf G}}$ admits strong cut-elimination. \item ${{\bf G}}$ admits cut-elimination. \item ${{\bf G}}$ is consistent. \end{enumerate} \end{cor} \proof (1),(2),(3) are equivalent by Corollary \ref{equivalences} for every canonical system. (3) trivially implies (4). (4) implies (5), since in a canonical system there is no way to derive $p_1\Rightarrow p_2$ without using cuts. Finally, a proof similar to that of Theorem 1 in \cite{AL10} (or Theorem \ref{strong consistency -> coherence} of this paper) shows that (5) implies (2). \qed
\begin{rem} Strong cut-elimination and cut-elimination are not equivalent in the general case. To see this, consider the system ${\bf G}$ given in Example \ref{G with circle}. As explained there, a sequent $\Gamma\Rightarrow E$ can be proved in ${\bf G}$ from no assumptions iff it is of the form $\Gamma\Rightarrow\psi$, where $\circ^n\psi\in\Gamma$ for some $n\geq 0$. It is easy to see that every sequent of this form can be proved without using cuts, and so ${\bf G}$ admits cut-elimination. However, ${\bf G}$ does not admit strong cut-elimination. For example, one must apply cut on $\circ p$ to derive the empty sequent from the sequent $p_1\Rightarrow\ $. \end{rem}
To derive results about {\em strict} canonical systems, we prove the following lemma.
\begin{lem} \label{strict lemma} Let ${{\bf G}}$ be a definite canonical system, and let $\mathcal{S} \cup \{s\}$ be a set of definite sequents. If there exists a proof $P$ of $s$ from $\mathcal{S}$ in ${\bf G}$, then there also exists a proof $P'$ of $s$ from $\mathcal{S}$ in which every sequent is a definite sequent, and every cut formula in $P'$ also serves as a cut-formula in $P$. \end{lem} \proof It is easy to see that starting from definite assumptions, the only way one can produce a negative sequent in a definite canonical system is by an application of a left-introduction rule of the form: \[\derb{\{\Gamma,\sigma(\Pi_i) \Rightarrow \sigma(E_i)\}_{1 \leq i \leq m}}
{\{\Gamma,\sigma(\Sigma_i) \Rightarrow \}_{1 \leq i \leq k}}
{\Gamma, \sigma(\diamond(p_1 ,\dots, p_n))\Rightarrow}\] Since ${\bf G}$ is definite, the sequent inferred in steps of this kind cannot be used in the rest of the proof, unless right weakening is applied on a descendant of this sequent. Applying the same weakening before steps of this kind will turn the sequent into a definite one, keeping the rest of the proof valid. Finally, this modification does not affect the set of cut-formulas used in the proof. \qed
Now define a new {\em strict} provability relation $\vdash_{{\bf G}}^{seq_1}$ for definite canonical systems. $\vdash_{{\bf G}}^{seq_1}$ is defined as in Definition \ref{syntactic tcr}, except that it allows only {\em definite} sequents in proofs. By Lemma \ref{strict lemma} it immediately follows that a definite system admits cut-elimination with respect to $\vdash_{{\bf G}}^{seq_1}$, iff it admits cut-elimination with respect to $\vdash_{{\bf G}}^{seq}$. The same applies to strong cut-elimination and consistency. Therefore for definite canonical systems, Corollary \ref{equivalences definite} ensures that coherence, cut-elimination, strong cut-elimination, and consistency\footnote{Note that strong consistency is trivial in this case, since the empty sequent is not allowed to appear in derivations.} are equivalent also with respect to $\vdash_{{\bf G}}^{seq_1}$.
\section{Analyticity and Decidability}
In this section we show that the semantics of ${{\bf G}}$-legal frames is analytic in the intuitive sense described in Remark \ref{analytic remark}.
\begin{thm}[Analyticity] \label{analycity} Let $\mathcal{U}_1,\mathcal{U}_2$ be sets of formulas closed under subformulas, such that $\mathcal{U}_1\subset\mathcal{U}_2$. Let ${{\bf G}}$ be a coherent canonical system for $\mathcal{L}$. The semantics of ${{\bf G}}$-legal frames is {\em analytic} in the following sense: If ${\mathcal{W}_1=\tup{W,\leq,v_1}}$ is a ${{\bf G}}$-legal $\mathcal{U}_1$-semiframe, then $v_1$ can be extended to a function $v_2$ so that ${\mathcal{W}_2=\tup{W, \leq,v_2}}$ is a ${{\bf G}}$-legal $\mathcal{U}_2$-semiframe. \end{thm} \proof Similar to the proof of Theorem 6 from \cite{AL10}.
\qed
\begin{rem} In particular, the last theorem shows that every ${{\bf G}}$-legal $\mathcal{U}$-semiframe, can be extended to a ${{\bf G}}$-legal $\mathcal{L}$-frame. \end{rem}
The following two theorems are consequences of Theorem \ref{analycity} and the soundness and completeness theorems.
\begin{thm}[Conservativity] \label{conservativity} Let ${\bf G_1}$ be a coherent canonical system in a language $\mathcal{L}_1$, and let ${\bf G_2}$ be a coherent canonical system in a language $\mathcal{L}_2$. Assume that $\mathcal{L}_2$ is an extension of $\mathcal{L}_1$ by some set of connectives, and that ${\bf G_2}$ is obtained from ${\bf G_1}$ by adding to the latter canonical rules for connectives in $\mathcal{L}_2-\mathcal{L}_1$. Then ${\bf G_2}$ is a conservative extension of ${\bf G_1}$ (i.e.: if all sequents in $\mathcal{S}\cup \{s\}$ are in $\mathcal{L}_1$ then $\mathcal{S}\vdash_{\bf G_1}^{seq} s$ iff $\mathcal{S}\vdash_{\bf G_2}^{seq} s$). \end{thm} \proof Suppose that ${\mathcal{S}}\not\vdash_{\bf G_1}^{seq} s$. Then there is a ${\bf G_1}$-legal model $\mathcal{W}$ of $\mathcal{S}$ which is not a model of $s$. Since the set of formulas of $\mathcal{L}_1$ is a subset of the set of formulas of $\mathcal{L}_2$ which is closed under subformulas, Theorem \ref{analycity} implies that $\mathcal{W}$ can be extended to a ${\bf G_2}$-legal model of $\mathcal{S}$ which is not a model of $s$. Hence ${\mathcal{S}}\not\vdash_{\bf G_2}^{seq} s$. \qed
\begin{thm}[Decidability] \label{decidability} Let ${{\bf G}}$ be a coherent canonical system. Then ${{\bf G}}$ is strongly decidable: Given a finite set $\mathcal{S}$ of sequents, and a sequent $s$, it is decidable whether $\mathcal{S}\vdash_{{{\bf G}}}^{seq}s$ or not. \end{thm} \proof Let ${\mathcal{U}}$ be the set of subformulas in $\mathcal{S}\cup\{s\}$. From Theorem \ref{analycity} and the proof of Theorem \ref{key} it easily follows that in order to decide whether ${\mathcal{S}\vdash_{{{\bf G}}}^{seq}s}$ it suffices to check all triples of the form $\tup{W,\subseteq,v^\prime}$ where $W\subseteq 2^\mathcal{U}$ and ${v^\prime: W \times \mathcal{U} \to \{t,f\}}$, and see if any of them is a ${{\bf G}}$-legal $\mathcal{U}$-semiframe which is a model of $\mathcal{S}$ but not a model of $s$. \qed
\begin{rem} The last two theorems can also be proved directly from the cut-elimination theorem. \end{rem}
Strong conservativity and strong decidability of $\vdash_{{\bf G}}$ and $\vDash_{{\bf G}}$ are easy corollaries of the previous theorems and the reductions given in Proposition \ref{reduction1} and Proposition \ref{reduction2}.
\section{Conclusions and Further Work}
Now we present our answer to the question from the introduction: ``what is a basic constructive connective?''.
\begin{quote} {\em A basic constructive connective is a connective defined by a set of rules in some coherent canonical system.} \end{quote}
\noindent Theorem \ref{cut-elimination} ensures that the proof-theoretic criterion for constructivity, described in the introduction, is met. Theorem \ref{conservativity} ensures that a set of rules for some connective can indeed be seen as a definition of that connective, because it shows that in coherent canonical systems the same set of rules defines the same connective regardless of the rules for the other connectives.
In Section \ref{semantic section}, the proof-theoretic characterization of basic constructive connectives was matched by a (non-deterministic) Kripke-style semantics. This semantics is modular, allowing one to separate the semantic effect of each derivation rule. However, we did not provide there an independent semantic characterization of (basic) constructive connectives. We leave this issue to future work. Another future goal is to extend our results to first-order logic, and to identify constructive quantifiers as well (for semi-classical quantifiers this was done in \cite{AZ08}).
\end{document} | arXiv |
How many faces does the resulting polyhedron have?
Take a regular tetrahedron of edge one.
Also take a square-based pyramid, whose edges are all of length one (therefore the side faces are equilateral triangles of the same size as the faces of the tetrahedron).
Glue a face of the tetrahedron to a triangular face of the pyramid so that their edges match up.
Considering the volume taken up by the two pieces as a single polyhedron, how many faces does it have?
geometry puzzle polyhedra
IanF1
I think I recall that this was an SAT question many years ago, and a student who had been marked wrong on it challenged the "obvious" answer to the question and won. – David K Nov 6 '14 at 19:32
I actually got it from an Arthur C Clarke novel, "the Ghost from the Grand Banks", where a child prodigy does the same thing. – IanF1 Nov 6 '14 at 19:34
The incident I'm thinking of occurred in the early 1980s, so Clarke may well have gotten the idea from that. – David K Nov 6 '14 at 19:46
Some details I have since found: this question was on the October 1980 PSAT and was challenged successfully by Daniel Lowen. The story was reported in early 1981. See this account or this. – David K May 20 '15 at 18:22
@DavidK excellent! Thanks for digging that out. The proof involving two pyramids side by side in your second link is very elegant and would have been the answer I accepted here :-) – IanF1 May 20 '15 at 19:54
I visualized the solution Nick provided:
(I found it because I had used this arrangement to build boat models with a magnet game)
Judith L
This figure makes it immediately evident that the triangles are coplanar (hence, the faces "merge down"). Because the front facing faces of both pyramids are obviously coplanar, and the pink face of the subtended tetrahedron is obviously on the same plane (its three vertices lie on the faces of the pyramids). No formulas or calculations required! – Nicolas Miari Aug 22 '16 at 12:57
Recall that the regular tetrahedron is self-dual: it is its own dual polyhedron, thus for a regular tetrahedron of edge length 2, consider its compound with its own dual such that both tetrahedra share the same circumradius. The resulting compound is known as the stella octangula. The intersection of the two tetrahedra (i.e., the region of space common to both) is a regular octahedron of edge length 1, and half of this octahedron as bisected by a plane perpendicular to a fourfold axis, forms the aforementioned square pyramid. This pyramid, upon which a smaller regular tetrahedron of edge length 1 is attached, is the figure of interest. But from this description, it becomes immediately obvious that two of the three faces of the small tetrahedron are coplanar with two of the triangular faces of the square pyramid, thus there are only 5 distinct faces to this polyhedron.
Explanatory figure taken from the MathWorld link above:
Here we see that the coplanarity is evident.
heropup
Much more elegant (obviously!!) than my own attempt. +1 and will accept it unless anyone somehow makes it even plainer anytime soon. – IanF1 Nov 6 '14 at 19:07
An equivalent explanation can be given by considering the first stellation of a regular octahedron, which is equivalent to adding eight regular tetrahedra to each octahedral face, resulting in the stella octangula; and from this, again the coplanarity immediately follows. The only subtlety is to establish that stellation produces a regular tetrahedron; this follows from the fact that joining the midpoints of the 6 edges of a regular tetrahedron results in a regular octahedron. – heropup Nov 6 '14 at 19:12
Let $ABCD$ be the base square of the pyramid, $S$ its tip, and $M$ the midpoint of $BS$. The segment $BS$ is a hinge connecting two equilateral triangles; therefore the plane of the triangle $\triangle:=AMC$ intersects $BS$ orthogonally. It follows that the angle $\alpha:=\angle(AMC)$ is the angle between two adjacent walls of the pyramid. Using the cosine theorem one obtains $$\cos\alpha={{3\over4}+{3\over4}-2\over 2\cdot {\sqrt{3}\over2}\cdot{\sqrt{3}\over2}}=-{1\over3}\ .$$ The angle $\beta$ between two faces of the tetrahedron is the angle at the tip of an isosceles triangle with sides ${\sqrt{3}\over2}$, ${\sqrt{3}\over2}$, and $1$; so $$\cos\beta={{3\over4}+{3\over4}-1\over 2\cdot {\sqrt{3}\over2}\cdot{\sqrt{3}\over2}}={1\over3}\ .$$ It follows that $\alpha+\beta=\pi$. Therefore the resulting solid does not have $5+4-2=7$ faces, as expected, but only $5$ of them: the base square and one triangular side wall of the pyramid, two rhombi composed of a side wall of the pyramid and a facet of the tetrahedron, and one facet of the tetrahedron.
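(As a quick numerical check: $\alpha=\arccos(-1/3)\approx 109.47^\circ$ and $\beta=\arccos(1/3)\approx 70.53^\circ$, which indeed add up to $180^\circ$.)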
Christian Blatter
This property is the basis of the "octahedral-tetrahedral" tiling in three dimensions. See en.wikipedia.org/wiki/Tetrahedral-octahedral_honeycomb. – Oscar Lanzi Dec 30 '16 at 17:01
I believe that it would be the number of faces of the tetrahedron ($4$), plus the number of faces of the pyramid ($5$), minus the two faces that got glued together since they will not be on the outside surface of the resulting polyhedron, so $7$ faces total.
Mike Pierce
I'm afraid this is an instance where the obvious solution is not the correct one. – IanF1 Nov 6 '14 at 6:38
@IanF1 ...oh... if you are getting at what I think you are, then the definition of face becomes important. If there are still edges there along where the two original polyhedra are glued together, then would we get rid of those edges just because the incident faces are flush with each other (in the same plane)? – Mike Pierce Nov 6 '14 at 6:42
I've edited the question, does that resolve the definition problem? – IanF1 Nov 6 '14 at 6:47
@IanF1, that edit definitely makes it clear. This is a pretty cool question. I've worked in topology more recently than in solid geometry, so that subtlety completely got by me. – Mike Pierce Nov 6 '14 at 6:51
@IanF1: I don't understand why this answer is not the correct one. May you expand on your comment? – Taladris Nov 6 '14 at 8:00
One way to think about this problem is as follows:
In your mind, take 2 pyramids, and place them together on a flat surface, square side down, with the edges touching. Note that two adjacent sides of each pyramid are parallel to the corresponding sides of the other pyramid, and in fact lie in the same planes.
Draw a line between the tips.
Think of the length of that line. It spans half of the square base of each pyramid, so its length equals the side of the square base.
The shape formed from this line and the closest side of each pyramid is the tetrahedron described (the other 3 dimensions are defined by the triangular sides of the pyramids, which are also equal to the square base). The two added sides are in the same plane as those of both pyramids, forming a single shape with 6 sides.
Now take off one of the pyramids. Its two sides are now replaced with the one side where it was connected to the tetrahedron. Thus, 5 sides.
Nice, except that even the solid in step 4 has only 5 faces i think. But yes this is a very intuitive way of thinking about it – IanF1 Dec 12 '15 at 10:28
I tried to do this with vectors and angles at first, but it got a bit hand-wavy and I couldn't nail down the details. Any alternative answer which does this more elegantly would be very welcome.
Let's work out the Cartesian coordinates of all the vertices.
For the sake of convenience and symmetry let's double the edge length to 2 and place the base of the pyramid at vertices: $A=(-1, -1, 0)$, $B=(-1, 1, 0)$, $C=(1, 1, 0)$, $D=(1, -1, 0)$
Then the apex of the pyramid will be at $E=(0, 0, h)$ for some value of $h$ which we can find with 3D Pythagoras:
$2^2 = DE^2 = 1^2 + 1^2 + h^2$
$4 = 2+h^2$
$h = \sqrt2$
Now place the tetrahedron against face CDE, and call its fourth vertex F, located at (x, y, z).
By symmetry (equidistant from C and D) we can see that y must be 0; let's calculate x and z.
3D Pythagoras again gives:
$2^2 = EF^2 = x^2 + 0^2+ (z-\sqrt2)^2$ (1)
$2^2 = CF^2 = (x-1)^2 + 1^2 + z^2$ (2)
Rearranging (1):
$x = \sqrt{4 - (z-\sqrt2)^2}$
Substituting in (2) and rearranging (a messy exercise left to the reader!), it turns out the only sensible solution is $z = \sqrt2 = h, x = 2$
So EF is parallel to BC and B, C, E, F are coplanar. Instead of two triangular faces, these vertices form a single parallelogram face. Similarly for A, D, E, F.
So the resulting solid is a pentahedron comprising one square face, two triangular faces (one from the pyramid, one from the tetrahedron) and two parallelograms.
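For anyone who wants to double-check numerically, here is a small Python/numpy sketch (just a sanity check, using the coordinates above); it verifies that the five relevant edges all have length 2 and that B, C, E, F are coplanar:

    import numpy as np

    B = np.array([-1.0, 1.0, 0.0])
    C = np.array([1.0, 1.0, 0.0])
    E = np.array([0.0, 0.0, np.sqrt(2)])  # apex of the pyramid
    F = np.array([2.0, 0.0, np.sqrt(2)])  # fourth vertex of the tetrahedron

    # every edge of the two faces meeting along CE has length 2
    for P, Q in [(B, C), (B, E), (C, E), (E, F), (C, F)]:
        assert abs(np.linalg.norm(P - Q) - 2.0) < 1e-9

    # B, C, E, F are coplanar: the scalar triple product vanishes
    print(np.dot(np.cross(C - B, E - B), F - B))  # 0.0 up to rounding

So the pyramid face BCE and the tetrahedron face CEF really do merge into the single parallelogram face claimed above.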
Not the answer you're looking for? Browse other questions tagged geometry puzzle polyhedra or ask your own question.
Geometric puzzles similar to the two-part tetrahedron?
An unbounded convex polyhedron realizing the primes?
traversing faces of a polyhedron Hamiltonian Tour?
Tracing the faces of a convex polyhedron from edges and vertices
Asymptotic bounds on the number of faces needed to construct a polyhedron of a certain genus
Find the edge angle of a dodecahedron using spherical trigonometry?
What is the dihedral angle formed by the faces of a tetrahedron
Convex polyhedra with 6 faces
Upper bound on number of triangular faces of a convex polyhedron?
Height of Irregular Pyramid Given it's Faces
Polyhedron with faces that are not flat | CommonCrawl |
\begin{document}
\begin{abstract} Let $m,k$ be fixed positive integers. Determining the generating function for the number of tilings of an $m\times n$ rectangle by $k\times 1$ rectangles is a long-standing open problem to which the answer is only known in certain special cases. We give an explicit formula for this generating function in the case where $m<2k$. This result is used to obtain the generating function for the number of tilings of an $m\times n \times k$ box with $k\times k\times 1$ bricks. \end{abstract} \maketitle
\section{Introduction and main results}
We consider the problem of enumerating the number of tilings of an $m\times n$ rectangle with $k\times 1$ tiles. An example of such a tiling for $k=3,m=5$ and $n=6$ is shown in Figure \ref{fig:example}. \begin{figure}
\caption{Tiling a $5\times 6$ rectangle with $3\times 1$ tiles.}
\label{fig:example}
\end{figure} It is known by a theorem of Klarner~\cite[Thm. 5]{MR248643} that such tilings exist if and only if $k$ divides either $m$ or $n$. Let $h_k(m,n)$ denote the number of such tilings. A beautiful result of Kasteleyn \cite{MR153427} and Temperley-Fisher \cite{MR136398} gives an explicit formula for $h_2(m,n)$: \begin{equation}
\label{eq:dimers}
h_2(m,n)= \prod_{j=1}^{\lceil \frac{m}{2}\rceil} \prod_{k=1}^{\lceil \frac{n}{2}\rceil}\left( 4 \cos^2\frac{j\pi}{m+1}+4\cos^2\frac{k\pi}{n+1}\right). \end{equation} For arbitrary integers $m,n$ no such formula is known for $h_k(m,n)$ for any integer $k\geq 3$. However, for fixed values of $m$ and $k$, the generating function $H_{k,m}(x)=\sum_{n\geq 0}h_k(m,n)x^n$ can be obtained by using the transfer-matrix method (see Stanley \cite[Sec. 4.7]{Stanley2012}) which expresses the number of tilings as the number of walks between two vertices in a suitably defined digraph. Klarner and Pollack \cite{MR588907} computed $H_{2,m}(x)$ for $m\leq 8$ and
gave an algorithm to compute polynomials $P_m$ and $Q_m$ such that $H_{2,m}(x)=P_m(x)/Q_m(x)$. Hock and McQuistan \cite{MR739603} found recurrences for $h_2(m,n)$ in terms of $n$ for each integer $m\leq 10$. Stanley~\cite{MR798013} used the explicit formula \eqref{eq:dimers} above to determine the degrees of the polynomials $P_m(x)$ and $Q_m(x)$ and prove several properties of the polynomials $P_m(x)$ and $Q_m(x)$.
By using the transfer-matrix approach Mathar \cite{mathar2014} derived several generating functions which enumerate tilings of the $m\times n$ rectangle with $a\times b$ tiles for fixed values of $a,b$. One of the drawbacks of the transfer-matrix approach is that it involves the evaluation of large determinants for large values of the parameters.
For $k\geq 3$, there does not appear to have been substantial progress on the general problem of computing $H_{k,m}(x)$. In this paper we compute $H_{k,m}(x)$ for arbitrary positive integers $m$ and $k$ satisfying $m<2k$. The cases where $m\leq k$ are somewhat trivial but the case $k<m<2k$ is more interesting and we prove (see Theorem \ref{th:main1}) that the generating function is surprisingly simple: \begin{align}
\sum_{n\geq 0}h_k(m,n)x^n= \frac{(1-x^k)^{k-1}}{(1-x^k)^k - (m-k+1)x^k}.\label{eq:hk} \end{align} The generating function above allows for recursive computation of $h_k(m,n)$ and asymptotic estimates for large values of $n$ can be obtained by considering the smallest positive root of the denominator (see Flajolet and Sedgewick \cite[Ch. IV]{MR2483235}). In Theorem \ref{th:vert} we obtain a refinement of Equation \eqref{eq:hk} by proving that if $b_k(m,n,r)$ denotes the number of tilings in which precisely $r$ tiles are vertical, then $$ \sum_{n,r\geq 0}b_k(m,n,r)x^ny^r= \frac{(1-x^k)^{k-1}}{(1-x^k)^k - (m-k+1)x^ky^k}. $$
Under the same constraints on $m$, our results can also be used to derive the generating function (see Theorem \ref{th:3d}) for $\bar{h}_k(m,n)$, the number of tilings of an $m\times n\times k$ cuboid with $k\times k\times 1$ bricks: \begin{align*}
\sum_{n\geq 0}\bar{h}_k(m,n)x^n = \frac{(1-2x^k)^{k-1}}{(1-2x^k)^{k-1}[1 - (m-k+2)x^k] - (m-k+1)x^k}. \end{align*} In this case we also obtain a refined multivariate generating function which accounts for tilings with a specific number of tiles in a given orientation. More precisely, we prove (see Corollary \ref{cor:vert3d}) that if $d_k(m,n,r,s)$ denotes the number of tilings of an $m\times n\times k$ cuboid with $k\times k\times 1$ bricks which contain precisely $r$ bricks parallel to the $yz$-plane and $s$ bricks parallel to the $xy$-plane, then
\begin{align*}
\sum_{n,r,s\geq 0}d_k(m,n,r,s)x^ny^rz^s=\hspace*{3in} \nonumber\\ \frac{(1-x^k-x^kz^k)^{k-1}}{\left(1-x^k-(m-k+1)x^kz^k\right)(1-x^k-x^kz^k)^{k-1}-(m-k+1)x^ky^k}.
\end{align*} The key idea of our approach is to enumerate fault-free tilings which are explained in the next section. \section{Fault-free tilings} Fix, throughout this paper, a positive integer $k\geq 2$. It will be convenient to introduce a coordinate system with the origin at the bottom left corner of the $m\times n$ rectangle such that a side of length $m$ of the rectangle is along the $y$-axis. Each of the unit squares, or cells, of the $m\times n$ rectangle is then represented by a pair $(r,s)$ corresponding to the coordinates of its top right corner. For instance the cells $(1,1)$ and $(3,4)$ are shaded in Figure \ref{fig:coordinates}. \begin{figure}
\caption{The cells $(1,1)$ and $(3,4)$ of a $4\times 5$ rectangle.}
\label{fig:coordinates}
\end{figure}
A tiling of the $m\times n$ rectangle is said to have a \emph{fault} at $x=a$ for some $1\leq a\leq n-1$ if the line $x=a$ does not intersect the interior of any tile. For instance, the tiling in Figure \ref{fig:fault} has only one fault at $x=1$ while the tiling in Figure \ref{fig:nofault} has no faults; such a tiling is called \emph{fault-free}. It is easily seen that if a tiling has $l$ faults then it can be decomposed uniquely into $l+1$ fault-free tilings. Let $a(m,n)$ denote the number of fault-free tilings of an $m\times n$ rectangle with $k\times 1$ tiles. The following well-known result \cite[Thm. 2.2.1.3]{MR3409342} relates all tilings $h(m,n)=h_k(m,n)$ to fault-free tilings. \begin{figure}\label{fig:fault}
\label{fig:nofault}
\end{figure} \begin{lemma}
\label{lem:gfs}
If $H(x)=\sum_{n\geq 0}h(m,n)x^n$ and $A(x)=\sum_{n\geq 0}a(m,n)x^n$, then
\begin{align*}
H(x)=\frac{1}{1-A(x)}.
\end{align*} \end{lemma} \begin{proof}
Condition on the least positive integer $l$ such that the line $x=l$ does not intersect the interior of any tile; the number of tilings for a given $l$ is $a(m,l)h(m,n-l)$. Summing over possible values of $l$, we obtain
\begin{align*} h(m,n)=\sum_{l=1}^n a(m,l)h(m,n-l) \quad (n\geq 1, \; h(m,0)=1).
\end{align*}
In terms of generating functions, the recurrence above reads $H(x)=A(x)H(x)+1$ which is equivalent to the statement of the lemma. \end{proof}
For $m<k$ it is clear that $h(m,n)=1$ when $n$ is a multiple of $k$ and 0 otherwise. For $m=k$ we have the following proposition. \begin{proposition}\label{prop:mequalsk} For $k>1$, we have
\begin{align*}
\sum_{n\geq 0 }h(k,n)x^n=\frac{1}{1-x-x^k}.
\end{align*} \end{proposition} \begin{proof}
Here $a(k,1)=1=a(k,k)$ while $a(k,n)=0$ for $n\notin \{1,k\}$ (see Figure \ref{fig:only2}). Thus the generating function for fault-free tilings is $A(x)=x+x^k$ and the proposition follows from Lemma~\ref{lem:gfs}.
\begin{figure}
\caption{All possible fault-free tilings for \(m = k = 3\).}
\label{fig:only2}
\end{figure} \end{proof} \begin{remark}
\label{rem:compositions}
It follows from Proposition \ref{prop:mequalsk} that tilings of a $k\times n$ rectangle with $k\times 1$ tiles are in bijection with compositions of $n$ (i.e. tuples $(n_1,\ldots,n_r)$ of positive integers with $\sum n_i=n$) in which all parts are equal to 1 or $k$. We will require this fact later on. \end{remark}
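For instance, for $k=2$ Proposition \ref{prop:mequalsk} gives $\sum_{n\geq 0}h(2,n)x^n=1/(1-x-x^2)$, so the number of domino tilings of a $2\times n$ rectangle is the Fibonacci number $F_{n+1}$ (with $F_1=F_2=1$); correspondingly, the compositions of Remark \ref{rem:compositions} are just the compositions of $n$ into parts equal to $1$ or $2$.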
We now consider the case $k<m<2k$. The following lemma is the key to computing fault-free tilings in this case. \begin{lemma}
\label{lem:main} Suppose $k<m<2k$ and $n>k$. In any fault-free tiling of an $m\times n$ rectangle by $k\times 1$ tiles there exist $k$ contiguous rows such that all tiles in the remaining $m-k$ rows are horizontal. \end{lemma}
\begin{proof}
A cell $(i,j)$ is said to be in column $i$ and row $j$. Consider a fault-free tiling of the $m\times n$ rectangle for $n>k$. Not all cells in column 1 are covered by horizontal tiles since this would create a fault at $x=k$. It follows that there is precisely one vertical tile in the first column (see Figure \ref{fig:placed}); suppose the topmost cell of this tile is $(1,b)$.
\begin{figure}
\caption{Vertical tile in the first column.}
\label{fig:placed}
\end{figure}
We claim that the $m-k$ rows that do not intersect this vertical tile consist of only horizontal tiles. If not, then consider the leftmost vertical tile that intersects one of the aforementioned rows and suppose its top cell is $(a',b')$ where $b'\neq b$. By reflecting through a horizontal line if necessary, we may assume without loss of generality that $b'>b$ (see Figure \ref{fig:contra}). \begin{figure}
\caption{The contradiction obtained.}
\label{fig:contra}
\end{figure} By the minimality of $a'$, it follows that $a'=k\ell+1$ for some positive integer $\ell$. Moreover, the first $k\ell$ cells in each row that does not intersect the vertical tile in the first column are covered by $\ell$ horizontal tiles. Now consider the tile covering the cell $(k\ell,b)$. If this tile were vertical, there would be a fault at $x=k\ell$ and, therefore, this tile must be horizontal. By the same reasoning, the tile covering the cell $(k(\ell-1),b)$ must be horizontal. Continuing this line of reasoning, it is clear that the tile covering the cell $(k,b)$ must be horizontal which is impossible since this tile would then cover $(1,b)$ which is already covered by the vertical tile in the first column. This proves the claim and the lemma. \end{proof} The idea used to prove Lemma \ref{lem:main} can also be used to show that for $n>k$, any fault-free tiling of an $m\times n$ rectangle in which each tile has dimensions $k\times j$ for some $1\leq j\leq k$ has $m-k$ rows consisting of only horizontal tiles but we do not require this stronger result here.
\begin{proposition}
\label{prop:fft}
Let $k>1$ be a positive integer and suppose $k<m<2k$. The number of fault-free tilings of an $m\times k\ell$ rectangle is given by
\begin{align*}
a(m,k\ell)=
\begin{cases}
m-k+2 & \ell=1,\\
(m-k+1){k+\ell-3 \choose k-2} & \ell \geq 2.
\end{cases}
\end{align*} \end{proposition} \begin{proof}
For an $m\times k$ rectangle there is one fault-free tiling in which all tiles are horizontal. Any other fault-free tiling of this rectangle contains precisely $m-k$ horizontal tiles and $k$ vertical tiles; there are $m-k+1$ such tilings. Thus $a(m,k)=m-k+2$.
Now consider a fault-free tiling of an $m\times k\ell$ rectangle. Such a tiling has $m-k$ horizontal rows by Lemma \ref{lem:main}. If these $m-k$ rows are removed, what remains is a tiling of a $k\times k\ell$ rectangle in which no faults occur at $x=jk$ for $j\geq 1$; denote by $N$ the number of tilings so obtained. By Remark~\ref{rem:compositions}, these tilings are in bijection with compositions $(n_1,\ldots,n_r)$ of $k\ell$ where each $n_i\in \{1,k\}$ and such that $k$ does not divide $\sum_{i=1}^sn_i$ for each $s<r$. This condition on the partial sums implies $n_1=1=n_r$. Further, the number of $n_i$'s equal to 1 must be a multiple of $k$ but the condition on the partial sums ensures that no more than $k$ of them can equal 1. It follows that precisely $k$ of the $n_i$ are 1 and hence $r=k+\ell-1$. Since precisely $k-2$ of the $n_i(2\leq i\leq r-1)$ are equal to 1 and all these choices are possible, we obtain $N={k+\ell-3 \choose k-2}.$ On the other hand, the number of fault-free tilings of an $m\times k\ell$ rectangle that yield a given $k\times k\ell$ rectangle upon removing the $m-k$ horizontal rows is clearly $m-k+1$. Therefore
\begin{align*}
a(m,k\ell)&=(m-k+1){k+\ell-3 \choose \ell-1}. \qedhere
\end{align*} \end{proof}
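For example, for $k=2$ and $m=3$ the proposition gives $a(3,2)=3$ and $a(3,2\ell)=2$ for every $\ell\geq 2$: in the latter case a fault-free tiling has one vertical domino in the first column and one in the last column of a pair of adjacent rows, horizontal dominoes elsewhere, and the only freedom is the choice of this pair of rows.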
\begin{corollary}
\label{cor:fft} Under the hypotheses of Proposition \ref{prop:fft} every fault-free tiling of an $m\times k\ell$ rectangle ($\ell>1$) by $k\times 1$ tiles contains precisely $k$ vertical tiles. \end{corollary} \begin{remark}
\label{rem:blocks} It is clear from the proof of Proposition \ref{prop:fft} that each fault-free tiling of an $m\times k\ell$ rectangle ($\ell>1; k<m<2k$) contains precisely $\ell-1$ `blocks', where a block is defined as a collection of $k$ contiguous horizontal $k\times 1$ tiles, one on top of the other. \end{remark}
\begin{theorem}
\label{th:main1} Suppose $k<m<2k$ and $h_k(m,n)$ denotes the number of tilings of an $m\times n$ rectangle with $k\times 1$ tiles. Then
\begin{align*} \sum_{n\geq 0}h_k(m,n)x^n= \frac{(1-x^k)^{k-1}}{(1-x^k)^k - (m-k+1)x^k}.
\end{align*} \end{theorem} \begin{proof}
By Proposition~\ref{prop:fft}, it follows that the generating function for fault-free tilings is given by
\begin{align}
A(x)&=\sum_{\ell\geq 1}a(m,k\ell)x^{k\ell}\nonumber \\
&=(m-k+2)x^k+(m-k+1)\sum_{\ell\geq 2}{k+\ell-3 \choose \ell-1}x^{k\ell} \nonumber \\
&=x^k + \frac{(m-k+1)x^k}{(1-x^k)^{k-1}}.\label{eq:a}
\end{align} The theorem now follows from Lemma \ref{lem:gfs}. \end{proof}
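As a quick illustration (and a sanity check of Theorem \ref{th:main1}), the numbers $h_k(m,n)$ can be read off from the power series expansion of this rational function. The short Python sketch below, which uses the \texttt{sympy} library and is only meant as an illustration (the function name is ours), returns for $k=2$, $m=3$ the values $1,0,3,0,11,0,41,0,153$, whose nonzero entries are terms of \href{http://oeis.org/A001835}{A001835} (cf. Table \ref{tab:oeis} below).
\begin{verbatim}
from sympy import symbols, series

x = symbols('x')

def h_coefficients(k, m, N):
    # coefficients h_k(m, 0), ..., h_k(m, N), assuming k < m < 2k
    num = (1 - x**k)**(k - 1)
    den = (1 - x**k)**k - (m - k + 1) * x**k
    expansion = series(num / den, x, 0, N + 1).removeO()
    return [expansion.coeff(x, n) for n in range(N + 1)]

print(h_coefficients(2, 3, 8))  # [1, 0, 3, 0, 11, 0, 41, 0, 153]
\end{verbatim}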
Several OEIS sequences which correspond to the generating function in Theorem \ref{th:main1} are shown in Table \ref{tab:oeis}.
\begin{center}
\begin{table}[h]
\begin{tabular}{|c|c|c|}
\hline
$k$ & $m$ & \text{OEIS entry}\\
\hline
2 & 3 &\href{http://oeis.org/A001835}{A001835}\\
3& 4 & \href{http://oeis.org/A049086}{A049086}\\
3& 5& \href{http://oeis.org/A236576}{A236576} \\
4& 5 & \href{http://oeis.org/A236579}{A236579}\\
4& 6 & \href{http://oeis.org/A236580}{A236580}\\
4& 7 & \href{http://oeis.org/A236581}{A236581}\\
\hline
\end{tabular}
\caption{OEIS entries}
\label{tab:oeis}
\end{table}
\end{center} Theorem \ref{th:main1} may be viewed as a special case of the following more general result.
\begin{theorem}\label{th:vert}
Suppose $k<m<2k$ and let $b_k(m,n,r)$ denote the number of $k\times 1$ tilings of an $m\times n$ rectangle which contain precisely $r$ vertical tiles. Then
\begin{align}
\sum_{n,r\geq 0}b_k(m,n,r)x^ny^r= \frac{(1-x^k)^{k-1}}{(1-x^k)^k - (m-k+1)x^ky^k}.\label{eq:bk}
\end{align}
\end{theorem}
\begin{proof} Consider the number of vertical tiles in fault-free tilings of an $m\times k\ell$ rectangle by $k\times 1$ tiles. For $\ell=1$, there is one such tiling with no vertical tiles and $m-k+1$ tilings with precisely $k$ vertical tiles. By Corollary \ref{cor:fft} each such fault-free tiling for $\ell>1$ contains precisely $k$ vertical tiles. Therefore the generating function for $b_k(m,n,r)$ is $$\frac{1}{1-A(x,y)},$$ where, $A(x,y)$ can be computed from the expression for $A(x)$ in Equation \eqref{eq:a} as \begin{align*}
A(x,y)&=x^k + \frac{(m-k+1)x^ky^k}{(1-x^k)^{k-1}}.\qedhere \end{align*} \end{proof} \section{Brick tilings of a cuboid} The results of the previous section can be used to derive the generating function for the number of tilings of an $m\times n\times k$ cuboid with $k\times k\times 1$ bricks. In order to prove the result we will require the following theorem on tilings of an $m\times n$ rectangle with $k\times 1$ and $k\times k$ tiles. Fault-free tilings for $m\leq k$ are easily enumerated, so we consider the case $m>k$. \begin{theorem}
\label{th:main2}
Suppose $k<m<2k$ and $h'(m,n)$ denotes the number of tilings of an $m\times n$ rectangle with $k\times 1$ tiles and $k\times k$ tiles. Then
\begin{align*} \sum_{n\geq 0}h'(m,n)x^n = \frac{(1-2x^k)^{k-1}}{(1-2x^k)^{k-1}[1 - (m-k+2)x^k] - (m-k+1)x^k}.
\end{align*} \end{theorem} \begin{proof}
Let $a'(m,n)$ denote the number of fault-free tilings of an $m\times n$ rectangle with $k\times 1$ and $k\times k$ tiles. For fault-free tilings of an $m\times k$ rectangle, we have
\begin{enumerate}
\item 1 tiling with only horizontal $k\times 1$ tiles;
\item $m-k+1$ tilings containing a $k\times k$ tile;
\item $m-k+1$ tilings containing precisely $k$ vertical $k\times 1$ tiles.
\end{enumerate}
Thus $a'(m,k)=2m-2k+3$. Now suppose $n=k\ell$ with $\ell>1$.
Let $T'_k(m,n)$ be the set of all fault-free tilings of an $m\times n$ rectangle by $k\times 1$ and $k\times k$ tiles. Denote by $T_k(m,n)$ the subset of $T'_k(m,n)$ consisting of fault-free tilings of an $m\times n$ rectangle with only $k\times 1 $ tiles. To each tiling $T'\in T'_k(m,n)$ we can associate a new tiling by replacing each $k\times k$ tile in $T'$ by $k$ horizontal $k\times 1$ tiles (see Figure~\ref{fig:replace}); it is easily seen that this new tiling is fault-free, and therefore lies in $T_k(m,n)$. \begin{figure}
\caption{A fault-free tiling and its image under $\Psi$.}
\label{fig:replace}
\end{figure}
This correspondence gives a map $\Psi :T'_k(m,n)\to T_k(m,n)$ which is clearly surjective. In fact the fibers of $\Psi$ are of cardinality $2^{\ell-1}$. To see this note that each $T\in T_k(m,n)$ has precisely $\ell-1$ blocks as defined in Remark \ref{rem:blocks}. The tilings in $\Psi^{-1}(T)$ are precisely those which can be obtained from $T$ by optionally replacing each of the blocks in $T$ by a $k\times k$ tile, for a total of $2^{\ell-1}$ choices. In summary, we have \begin{align*}
a'(m,k\ell)=
\begin{cases}
2m-2k+3 & \ell=1,\\
2^{\ell-1}(m-k+1){\ell+k-3 \choose \ell-1} & \ell>1.
\end{cases} \end{align*} The generating function for fault-free tilings is therefore \begin{align*}
A(x) = (m-k+2)x^k + (m-k+1) \frac{x^k}{(1-2x^k)^{k-1}}. \end{align*} The theorem now follows from Lemma \ref{lem:gfs}. \end{proof}
The next result extends the above theorem by accounting for the number of tiles of a given type and orientation. \begin{theorem} \label{th:vert2}
Suppose $k<m<2k$ and let $c_k(m,n,r,s)$ denote the number of tilings of an $m\times n$ rectangle with $k\times 1$ and $k\times k$ tiles which contain precisely $r$ vertically placed $k\times 1$ tiles and $s$ square tiles. Then
\begin{align}
\sum_{n,r,s\geq 0}c_k(m,n,r,s)x^ny^rz^s=\hspace*{3in} \nonumber\\ \frac{(1-x^k-x^kz)^{k-1}}{\left(1-x^k-(m-k+1)x^kz\right)(1-x^k-x^kz)^{k-1}-(m-k+1)x^ky^k}.\label{eq:bigeq}
\end{align}
\end{theorem}
\begin{proof}
We argue as in the proof of Theorem \ref{th:vert}. If $C(x,y,z)$ denotes the generating function for $c_k(m,n,r,s)$, then
$$ C(x,y,z)=\frac{1}{1-A(x,y,z)}, $$ where $A(x,y,z)$ is computed from the discussion in the proof of Theorem \ref{th:main2} as \begin{align*}
A(x,y,z)&= x^k(1+(m-k+1)z+(m-k+1)y^k)\\
&\qquad+\sum_{\ell\geq 2}(m-k+1){\ell+k-3 \choose \ell-1}x^{k\ell}y^k(1+z)^{\ell-1}\\
&=x^k(1+(m-k+1)z)\\
&\qquad +\sum_{\ell\geq 1}(m-k+1){\ell+k-3 \choose \ell-1}y^kx^{k\ell}(1+z)^{\ell-1}\\
&=x^k(1+(m-k+1)z)+\frac{(m-k+1)x^ky^k}{(1-x^k-x^kz)^{k-1}}.\qedhere \end{align*} \end{proof}
Note that if we set $z=0$ in Theorem~\ref{th:vert2} then we obtain Theorem~\ref{th:vert} while the substitution $y=z=1$ yields Theorem \ref{th:main2}. \begin{theorem}
\label{th:3d}
Suppose $k<m<2k$ and let $\bar{h}(m,n)$ denote the number of tilings of an $m\times n\times k$ cuboid with $k\times k\times 1$ bricks. Then
\begin{align*} \sum_{n\geq 0}\bar{h}(m,n)x^n = \frac{(1-2x^k)^{k-1}}{(1-2x^k)^{k-1}[1 - (m-k+2)x^k] - (m-k+1)x^k}.
\end{align*} \end{theorem} \begin{proof}
Consider a cuboid in the positive octant with a corner at the origin and with its sides of length $n,m$ and $k$ along the $x,y$ and $z$-axes respectively. Let $\tau$ be a tiling of this cuboid by $k\times k\times 1$ bricks. The bricks of the tiling which touch the $xy$-plane determine a tiling $\tau'$ of an $m\times n$ rectangle with $k\times 1$ and $k\times k$ tiles as shown in Figure \ref{fig:3dproj}.
\begin{figure}
\caption{A tiling $\tau'$ for $m=5,n=12$ and $k=3$.}
\label{fig:3dproj}
\end{figure}
In fact $\tau$ is uniquely determined by $\tau'$ as follows. The tiles in $\tau'$ can be seen to correspond to the projections of the bricks in $\tau$ onto the $xy$-plane: each $k\times 1$ tile of $\tau'$ is the projection of a single brick of $\tau$ while each $k\times k$ tile of $\tau'$ is the projection of precisely $k$ bricks in $\tau$. This gives a one-one correspondence between tilings of the cuboid by $k\times k\times 1$ bricks and tilings of an $m\times n$ rectangle by tiles of size $k\times 1$ and $k\times k$. The result now follows from Theorem~\ref{th:main2}. \end{proof} Replacing $z$ by $z^k$ in the generating function \eqref{eq:bigeq}, the following result is obtained. \begin{corollary}\label{cor:vert3d}
Suppose $k<m<2k$ and let $d_k(m,n,r,s)$ denote the number of tilings of an $m\times n\times k$ cuboid with $k\times k\times 1$ bricks which contain precisely $r$ bricks parallel to the $yz$-plane and $s$ bricks parallel to the $xy$-plane. Then
\begin{align*}
\sum_{n,r,s\geq 0}d_k(m,n,r,s)x^ny^rz^s=\hspace*{3in} \nonumber\\ \frac{(1-x^k-x^kz^k)^{k-1}}{\left(1-x^k-(m-k+1)x^kz^k\right)(1-x^k-x^kz^k)^{k-1}-(m-k+1)x^ky^k}.
\end{align*} \end{corollary}
\end{document} | arXiv |
\begin{document}
\title[secant variety] {Partially symmetric tensors and the non-defectivity of secant varieties of products with a projective line as a factor} \author{Edoardo Ballico} \address{Dept. of Mathematics\\
University of Trento\\ 38123 Povo (TN), Italy} \email{[email protected]} \thanks{The author is a member of GNSAGA of INdAM (Italy).} \subjclass[2010]{14N05; 15A69} \keywords{secant variety; Segre-Veronese varieties; partially symmetric tensors}
\begin{abstract} We prove (with a mild restriction on the multidegrees) that all secant varieties of Segre-Veronese varieties with $k>2$ factors, $k-2$ of them being $\enm{\mathbb{P}}^1$, have the expected dimension. This is equivalent to compute the dimension of the set of all partially symmetric tensors with a fixed rank and the same format. The proof uses the case $k=2$ proved by Galuppi and Oneto. Our theorem is an easy consequence of a theorem proved here for arbitrary projective varieties with a projective line as a factor and with respect to complete linear systems. \end{abstract}
\maketitle \section{Introduction} Let $Y$ be an integral projective variety over an algebraically closed field with characteristic $0$ and $\enm{\cal{L}}$ a very ample line bundle of $Y$ such that $h^1(\enm{\cal{L}})=0$. Set $\alpha:= h^0(\enm{\cal{L}})$. For any positive integer $z$ and for any integral and non-degenerate variety $W\subset \enm{\mathbb{P}}^r$ let $\sigma _z(W)$ denote the $z$-secant variety of $W$, i.e. the closure of the union of all linear spaces spanned by $z$ points of $W$. Many practical linear algebra problems and applications use secant varieties or, at least, their dimensions (\cite{l}). For instance, if $\sigma _{z-1}(W)\ne \enm{\mathbb{P}}^r$, then the integer $\dim \sigma _z(W)$ is the dimension of the set of all $q\in \enm{\mathbb{P}}^r$ with $W$-rank $z$. If we take as $W$ a multiprojective space, then the $W$-rank decompositions are the partially symmetric tensor decompositions with the minimal number of addenda. If we take $W=\enm{\mathbb{P}}^n$, then the $W$-ranks correspond to the additive decompositions of forms in $n+1$ variables with a minimal number of addenda.
We see $Y\subset \enm{\mathbb{P}}^{\alpha -1}$ as embedded variety by the complete linear system $|\enm{\cal{L}}|$. Our assumptions are satisfied if $Y$ is not secant defective, i.e. if $\dim \sigma_z(Y) =\max\{\alpha -1,z(\dim Y+1)\}$ for all positive integer $z$. In our main result (Theorem \ref{i1}) we only require that $\sigma _z(Y)$ has dimension $(\dim Y+1)z-1$ for $z=\lfloor \alpha/(\dim Y+1)\rfloor$, i.e. for the largest integer $z$ such that $z(\dim Y +1)\le \alpha$. But the thesis is that all the secant varieties of the embedding of $X$ described in Theorems \ref{minus} and \ref{i1} have the expected dimension.
Segre-Veronese varieties are related to partially symmetric tensors and hence their secant varieties (or at least their dimensions) are an active topic of research with several papers devoted to the study of the dimensions of their secant varieties (\cite{AB09a,AB09b,cgg1,go,lmr,lp}). As a corollary of our results we prove the following result in which we only use the secant non-defectivity of almost all Segre-Veronese varieties with $2$-factors (\cite{go}).
\begin{theorem}\label{minus} Take $X:= \enm{\mathbb{P}}^{n_1}\times \enm{\mathbb{P}}^{n_2}\times (\enm{\mathbb{P}}^1)^{k-2}$, $k\ge 3$, embedded by the complete linear system $\enm{\cal{O}}_X(d_1,\dots ,d_k)$ with $d_i\ge 2$ for all $i$, $d_1\ge 3$ and $d_2\ge 3$. Then this embedding of $X$ is not secant defective. \end{theorem}
We prove a more interesting result, which applies to all varieties with a projective line as a factor and with respect to complete linear systems (Theorem \ref{i1}).
Let $Y$ be an integral projective variety. Set $X:= Y\times \enm{\mathbb{P}}^1$. Let $\pi _1: X\to Y$ and $\pi_2: X\to \enm{\mathbb{P}}^1$ denote the $2$ projections. For any $\enm{\cal{L}}\in \mathrm{Pic}(Y)$ and any $\enm{\cal{R}}\in \mathrm{Pic}(\enm{\mathbb{P}}^1)$ set $\enm{\cal{L}}\boxtimes \enm{\cal{R}}:= \pi _1^\ast(\enm{\cal{L}})\otimes \pi_2^\ast(\enm{\cal{R}})$. Obviously, $\mathrm{Pic}(\enm{\mathbb{P}}^1) \cong \enm{\mathbb{Z}}\enm{\cal{O}}_{\enm{\mathbb{P}}^1}(1)$. Set $\enm{\cal{L}}[t]:= \enm{\cal{L}}\boxtimes \enm{\cal{O}}_{\enm{\mathbb{P}}^1}(t)$. Every line bundle on $X$ is of the form $\enm{\cal{L}}[t]$ for a uniquely determined $\enm{\cal{L}}\in \mathrm{Pic}(Y)$ and a unique $t\in \enm{\mathbb{Z}}$ (\cite[Proposition 3]{fu}). Now assume $t\ge 0$ and $\alpha:= h^0(\enm{\cal{L}}) >0$. The K\"{u}nneth formula gives $h^0(\enm{\cal{L}}[t]) =(t+1)\alpha$ and $h^1(\enm{\cal{L}}[t]) = (t+1)h^1(\enm{\cal{L}})$. From now on we also assume $h^1(\enm{\cal{L}})=0$.
\begin{theorem}\label{i1} Let $Y$ be an integral projective variety. Fix an integer $t\ge 2$. Set $X:= Y\times \enm{\mathbb{P}}^1$. Let $\enm{\cal{L}}$ be a very ample line bundle on $Y$ with $h^1(\enm{\cal{L}})=0$. Set $n:= \dim X$, $\alpha := h^0(\enm{\cal{L}})$ and $e_1:= \lfloor \alpha/n\rfloor$. Assume $n\ge 3$, $\alpha > n^2$ and that the $e_1$-th secant variety of $(Y,\enm{\cal{L}})$ has the expected dimension. Then the pair $(X,\enm{\cal{L}}[t])$ is not secant defective. \end{theorem}
We assume $n\ge 3$ in Theorem \ref{i1} because if $n=t=2$ no lower bound on $\alpha$ may work, as shown by Example \ref{ex1}.
Theorem \ref{minus} is an easy consequence of Theorem \ref{i1}.
Our tools work even if $(Y,\enm{\cal{L}})$ is secant defective, at the cost of adding conditions on $t$ and/or $\alpha$. As an example we prove one case in which we only assume that $\sigma_{\lfloor \alpha/n\rfloor-1}(Y)$ has the expected dimension (Theorem \ref{i1.0}). We also see that even for $t=1$ we may get non-trivial results (Proposition \ref{u1}).
We often use the Differential Horace Lemma (\cite{ah1,ah}) and an inductive procedure (the Horace Method), but from top to bottom, with smaller and smaller zero-dimensional schemes to be handled. This approach may be considered as a controlled asymptotic tool which does not require the low cases to start the inductive procedure (\cite{bb,bbcs}, \cite[Lemma 3]{c}). A long and detailed explanation of this method is contained in \cite[pp. 1005-1008]{bb}; see in particular the diagram of logical implications in \cite[p. 1058]{bb}. Sometimes the low cases may then be proved, e.g. with a computer assisted proof (\cite{bbcs,d}). However, a standard use of this tool would only give a very weak result (e.g. Theorem \ref{i1} only for $t\ge \dim Y+2$, and hence the inductive proof of Theorem \ref{minus} for $k\ge 4$ would require very large $t$).
For our proof of Theorem \ref{i1} the key part is the proof of the case $t=2$. Then the cases $t>2$ have a short inductive proof using Lemmas \ref{a2}, \ref{a3} and \ref{a4.0}.
Our method also works for some non-complete linear systems (see Remark \ref{fin1}).
The author has no conflict of interests. \section{Preliminaries} Let $W$ be a projective variety and $D$ an effective Cartier divisor of $W$. For any $p\in W_{\reg}$ let $(2p,W)$ denote the closed subscheme of $W$ with $(\enm{\cal{I}}_{p,W})^2$ as its ideal sheaf. We have $\deg ((2p,W)) =\dim W+1$ and $(2p,W)_{\red} =\{p\}$. For any finite set $S\subset W_{\reg}$ set $(2S,W):= \cup_{p\in S} (2p,W)$. We often write $2p$ and $2S$ instead of $(2p,X)$ and $(2S,X)$. For any zero-dimensional scheme $Z\subset W$ let $\Res_D(Z)$ denote the residual scheme of $Z$ with respect to $D$, i.e. the closed subscheme of $W$ with $\enm{\cal{I}}_Z:\enm{\cal{I}}_D$ as its ideal sheaf. We have $\deg (Z) =\deg (\Res_D(Z)) +\deg (Z\cap D)$, $\Res_D(Z)\subseteq Z$, $\Res_D(Z) =Z$ if $Z\cap D=\emptyset$, $\Res_D(Z) =\emptyset$ if $Z\subset D$ and $\Res_D(Z) =\Res_D(A)\cup \Res_D(B)$ if $Z=A\cup B$ and $A\cap B=\emptyset$. If $p\in D_{\reg}\cap W_{\reg}$, then $\Res_D((2p,W)) = \{p\}$ and $(2p,W)\cap D =(2p,D)$. For any line bundle $\enm{\cal{R}}$ on $W$ there is an exact sequence \begin{equation}\label{eqp1}
0 \to \enm{\cal{I}}_{\Res_D(Z)}\otimes \enm{\cal{R}}(-D)\to \enm{\cal{I}}_Z\otimes \enm{\cal{R}} \to \enm{\cal{I}}_{Z\cap D,D}\otimes \enm{\cal{R}}_{|D} \to 0 \end{equation}
of coherent sheaves on $W$ which we call the {\it residual sequence of $D$}. Fix a positive integer $z$. By the Terracini Lemma (\cite[Cor. 1.11]{a}, \cite[5.3.1.1]{l}) the integer $\dim \sigma _z(W)$ is the dimension of the linear span of the zero-dimensional scheme $(2S,W)$, where $S$ is a general subset of $W$ of cardinality $z$. Thus if the embedding of $W$ is induced by the complete linear system $|\enm{\cal{R}}|$, then $\dim \sigma _z(W) =h^0(\enm{\cal{R}})-1-h^0(\enm{\cal{I}} _{(2S,W)}\otimes \enm{\cal{R}})$. Hence $\dim \sigma _z(W) = z(\dim W+1)-1$ if and only if $h^1(\enm{\cal{I}} _{(2S,W)}\otimes \enm{\cal{R}}) =h^1(\enm{\cal{R}})$.
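For instance, let $W\subset \enm{\mathbb{P}}^4$ be the rational normal curve, i.e. the image of $\enm{\mathbb{P}}^1$ embedded by $|\enm{\cal{O}}_{\enm{\mathbb{P}}^1}(4)|$. For a general $S\subset W$ with $\#S =2$ we have $\enm{\cal{I}}_{(2S,W)}\otimes \enm{\cal{O}}_{\enm{\mathbb{P}}^1}(4)\cong \enm{\cal{O}}_{\enm{\mathbb{P}}^1}$, so $h^1(\enm{\cal{I}}_{(2S,W)}\otimes \enm{\cal{O}}_{\enm{\mathbb{P}}^1}(4))=0$ and $\dim \sigma _2(W) =5-1-1 =3 =2(\dim W+1)-1$.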
Now we describe the so-called Differential Horace Lemma (\cite{ah1,ah}).
\begin{remark}\label{dh1} Let $E\subset W$ be a zero-dimensional scheme. Fix $i\in \{0,1\}$ and an integer $g>0$. Let $D$ be an integral divisor of $W$. Let $F\subset D$ be a general subset of $D$ with $\#F=g$. Suppose you want to prove that $h^i(\enm{\cal{I}}_{E\cup (2S,W)}\otimes \enm{\cal{R}})=0$ for a general $S\subset W$ such that $\#S =g$.
It is sufficient to prove that $h^i(D,\enm{\cal{I}} _{(E\cap D)\cup F}\otimes \enm{\cal{R}}_{|D})=0$ and $h^i(W,\enm{\cal{I}}_{\Res_D(E)\cup (2F,D)}\otimes \enm{\cal{R}}(-D))=0$ (\cite{ah1,ah}). \end{remark}
Suppose there is a line bundle $\enm{\cal{R}}$ on $W$ such that the embedding $W\subset \enm{\mathbb{P}}^r$ is induced by the complete linear system $|\enm{\cal{R}}|$. We call the secant varieties of the embedded variety $W\subset \enm{\mathbb{P}}^r$ the secant varieties of the pair $(W,\enm{\cal{R}})$.
\begin{remark}\label{ovv1} Let $W\subset \enm{\mathbb{P}}^r$ be an integral and non-degenerate variety. Set $n:= \dim W$, $z_1:= \lfloor (r+1)/(n+1)\rfloor$ and $z_2:=\lceil (r+1)/(n+1)\rceil$. Suppose that $\sigma _{z_1}(W)$ and $\sigma _{z_2}(W)$ have the expected dimension. Since $(n+1)z_2\ge r+1$, $\sigma _x(W) =\enm{\mathbb{P}}^r$ for all $x\ge z_2$. Either $z_1=z_2$ or $z_1=z_2-1$. Let $S\subset W_{\reg}$ be a general subset such that $\#S=z_1$. Since $\dim \sigma _{z_1}(W) =(n+1)z_1-1$, the Terracini Lemma gives that the linear span of $(2S,W)$ has dimension $(n+1)z_1-1$, i.e. the scheme $(2S,W)$ is linearly independent. Hence $(2A,W)$ is linearly independent for any $A\subset S$. The Terracini Lemma gives $\dim \sigma _y(W) =(n+1)y-1$ for all $1\le y<z_1$ (a similar statement is proved in \cite[Prop. 2.1(i)]{a1}). Thus $W$ is not secant defective. Note that $z_2$ is the minimal integer $z$ such that $(\dim W+1)z \ge r+1$. Thus to prove that $W$ is not secant defective it is sufficient to test all positive integers $z$ such that $(\dim W+1)z\le r+1+\dim W$. \end{remark}
\section{The proofs and related results} Set $X:= Y\times \enm{\mathbb{P}}^1$ and $n:= \dim X$. We fix $o\in \enm{\mathbb{P}}^1$ and set $H:= Y\times \{o\}$. The set $H$ is an effective Cartier divisor of $X$ and $H\cong Y$.
For all positive integers $t$ and $z$ call $A(t,z)$ the following statement:
\quad $A(t,z)$: We have $h^0(\enm{\cal{I}} _{2S}\otimes \enm{\cal{L}}[t]) =\max \{\alpha(t+1)-(n+1)z,0\}$ for a general $S\subset X$ such that $\#S =z$.
By the Terracini Lemma $A(t,z)$ is true if and only if the $z$-secant variety of the pair $(X,\enm{\cal{L}}[t])$ has the expected dimension. Since $h^1(\enm{\cal{L}}[t])=0$, $A(t,z)$ is equivalent to $h^1(\enm{\cal{I}} _{2S}\otimes \enm{\cal{L}}[t]) =\max \{0, (n+1)z -\alpha(t+1)\}$ for a general $S\subset X$ such that $\#S =z$.
We say that $A(t)$ is true if $A(t,z)$ is true for all $z\in \{\lfloor (t+1)\alpha/(n+1)\rfloor,\lceil (t+1)\alpha/(n+1)\rceil\}$. Remark \ref{ovv1} shows that $A(t)$ is true if and only if $A(t,z)$ is true for all positive integers $z$.
Write $\alpha = ne_1+f_1$ with $e_1, f_1$ integers and $0\le f_1\le n-1$, i.e. set $e_1:= \lfloor \alpha/n\rfloor$ and $f_1:= \alpha -ne_1$.
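For instance, if $n=3$ and $\alpha =16$, then $e_1 =5$ and $f_1 =1$.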
The following example is well-known (\cite[p. 1457]{lp}). \begin{example}\label{ex1} Fix a positive integer $a$, $t=2$ and $(Y,\enm{\cal{L}})=(\enm{\mathbb{P}}^1,\enm{\cal{O}} _{\enm{\mathbb{P}}^1}(2a))$. We have $n=2$ and $\alpha =2a+1$, and $(X,\enm{\cal{L}}[2])$ is secant defective, the $(2a+1)$-secant variety being the only defective one. Indeed, a general $S\subset \enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1$ with $\#S =2a+1$
is contained in the singular locus of a unique $D\in |\enm{\cal{O}}_{\enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1}(2a,2)|$, the double curve $2T$ with $T$ the unique element of $|\enm{\cal{I}}_S(a,1)|$. \end{example}
\begin{remark}\label{aa2} It is very important for our proof that $f_1\le e_1$. Since $f_1\le n-1$, it is sufficient to assume $e_1\ge n-1$. If $\dim \sigma_{e_1}(Y) =ne_1-1$, it is sufficient to assume $\alpha \ge n(n-1)$, which is quite mild. \end{remark}
For all positive integers $t$ and $z$ we call $B(t,z)$ the following statement:
\quad $B(t,z)$: Either $z< e_1+f_1$ or $h^0(\enm{\cal{I}}_E\otimes \enm{\cal{L}}[t-1]) =\max \{0,t\alpha -(n+1)(z-e_1-f_1)-nf_1-e_1\}$, where $E\subset X$ is a general union of $z-e_1-f_1$ double points of $X$, $f_1$ double points of $H$ and $e_1$ points of $H$.
For all $t\ge 2$ we call $C(t,z)$ the following statement:
\quad $C(t,z)$: We have $h^0(\enm{\cal{I}}_{W}\otimes \enm{\cal{L}}[t-2])\le \max \{0,(t-1)\alpha -\deg (W)\}$, where $W$ is a general union of $\max \{0,z-e_1-f_1\}$ double points of $X$.
Note that $C(2,z)$ is true if and only if $z\le e_1+f_1$.
We say that $B(t)$ (resp. $C(t)$) is true if $B(t,z)$ (resp. $C(t,z)$) is true for all $z$.
Using the Differential Horace Lemma it is quite easy to get $A(t)$ if we know $B(t)$. A key step is to get $B(t)$ knowing that $C(t)$ is true. Indeed, for $z\le \lceil h^0(\enm{\cal{L}}[t])/(n+1)\rceil$ the integer $z-e_1-f_1$ is usually much smaller than $\lfloor h^0(\enm{\cal{L}}[t-2])/(n+1)\rfloor$ and hence it should be ``easy'' to prove that $h^1(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[t-2])=0$. But of course $t-2<t$, and so to use this strategy we need to prove that $\lfloor h^0(\enm{\cal{L}}[t-2])/(n+1)\rfloor - (z-e_1-f_1)$ is very large (depending on $t$). For $t=2$ we also need another trick. Then we prove $C(3)$. Then Lemmas \ref{a2}, \ref{a3}, \ref{a4.0} give the case $t\ge 4$ by induction on $t$.
\begin{lemma}\label{a2} Assume that the $e_1$-secant variety of $(Y,\enm{\cal{L}})$ has dimension $ne_1-1$. If $t\ge 2$ and $B(t,z)$ is true, then $A(t,z)$ is true. \end{lemma}
\begin{proof} Let $Z\subset X$ be a general union of $z$ double points.
First assume $z\ge e_1+f_1$. Let $Z'\subset X$ be a general union of $z-e_1-f_1$ double points. Fix a general $S\subset H$ such that $\#S =e_1+f_1$ and write $S=S'\cup S''$ with $\#S' =e_1$ and $\#S''=f_1$. Set $A:= (2S',H)\cup S''$ and $B:= S'\cup (2S'',H)$. Note that $\deg (A)=\alpha$. Since $\dim \sigma _{e_1}(Y) =e_1n-1$, $h^1(H,\enm{\cal{I}} _{(2S',H)}\otimes \enm{\cal{L}}[t]_{|H}) =0$. Since $S''$ is general in $H\cong Y$ and $\deg (A)=\alpha$,
$h^i(H,\enm{\cal{I}} _{A,H}\otimes \enm{\cal{L}}[t]_{|H}) =0$, $i=0,1$. The Differential Horace Lemma (Remark \ref{dh1}) gives $h^i(\enm{\cal{I}} _Z\otimes \enm{\cal{L}}[t]) =h^i(\enm{\cal{I}}_{Z'\cup S'\cup (2S'',H)}\otimes \enm{\cal{L}}[t-1])$ for $i=0,1$. Thus $B(t,z)$ implies $A(t,z)$ if $z\ge e_1+f_1$.
Now assume $z\le e_1+f_1-1$. The proof of the case $z\ge e_1+f_1$ works taking $Z'= \emptyset$, $\#S' =\min \{e_1,z\}$ and $\#S'' = z-\#S'$. \end{proof} \begin{lemma}\label{a3} Assume $\alpha >n^2$, $t\ge 3$ and that the $e_1$-secant variety of $(Y,\enm{\cal{L}})$ has dimension $ne_1-1$. Take $z\le \lceil (t+1)\alpha /(n+1)\rceil$. If $C(t,z)$ is true and either $z\le e_1+f_1$ or $A(t-2,z-e_1-f_1)$ is true, then $B(t,z)$ and $A(t,z)$ are true. \end{lemma}
\begin{proof} By Remark \ref{ovv1} it is sufficient to check all positive integers $z$ such that $(n+1)z \le (t+1)\alpha +n$. By Lemma \ref{a2} it is sufficient to prove $B(t,z)$. By the definition of $B(t,z)$ we may assume $z\ge e_1+f_1$. Let $W\subset X$ be a general union of $z-e_1-f_1$ double points. Take a general $S\subset H$ such that $\#S =e_1+f_1$ and write $S =S'\cup S''$ with $\#S'= e_1$ and $\#S''=f_1$. By assumption $h^0(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[t-2])\le \max \{0,(t-1)\alpha -\deg(W)\}$, i.e. either $h^0(\enm{\cal{I}} _W\otimes \enm{\cal{L}}[t-2])=0$ or $h^1(\enm{\cal{I}} _W\otimes \enm{\cal{L}}[t-2]) =0$. Assume that $B(t,z)$ fails. Hence $h^1(\enm{\cal{I}} _{W\cup (2S'',H)\cup S'}\otimes \enm{\cal{L}}[t-1]) >0$.
\quad (a) Assume $h^1(\enm{\cal{I}} _W\otimes \enm{\cal{L}}[t-2]) =0$. Since $f_1\le e_1$, Remark \ref{ovv1} gives $\dim \sigma _{f_1}(Y) =nf_1-1$. Thus $h^1(H,\enm{\cal{I}} _{(2S'',H),H}\otimes \enm{\cal{L}}[t-1]_{|H})=0$. Thus the residual exact sequence of $H$
gives $h^1(\enm{\cal{I}} _{W\cup (2S'',H)}\otimes \enm{\cal{L}}[t-1]) =0$. Let $a$ be the maximal integer such that $h^1(\enm{\cal{I}} _{W\cup (2S'',H)\cup A}\otimes \enm{\cal{L}}[t-1]) =0$, where $A$ is a general subset of $H$ with cardinality $a$. By assumption $a<e_1$. Take a general $p\in H$. The definition of $a$ gives $h^1(\enm{\cal{I}} _{W\cup (2S'',H)\cup A\cup \{p\}}\otimes \enm{\cal{L}}[t-1]) >0$. Since $h^1(\enm{\cal{I}} _{W\cup (2S'',H)\cup A}\otimes \enm{\cal{L}}[t-1]) =0$, we see that $H$ is in the base locus of $|\enm{\cal{I}} _{W\cup (2S'',H)\cup A}\otimes \enm{\cal{L}}[t-1]|$, i.e. (since $\Res_H(W\cup (2S'',H)\cup A) =W$ and $h^1(\enm{\cal{I}} _W\otimes \enm{\cal{L}}[t-2]) =0$), $h^0(\enm{\cal{I}} _{W\cup (2S'',H)\cup A}\otimes \enm{\cal{L}}[t-1]) =(t-1)\alpha -\deg (W)$. We get $a+nf_1=\alpha$, which is false because $a<e_1$, $e_1\ge f_1$ (Remark \ref{aa2}) and $\alpha =ne_1+f_1$.
\quad (b) Assume $h^0(\enm{\cal{I}} _W\otimes \enm{\cal{L}}[t-2]) =0$. We get $(n+1)(z-e_1-f_1)\ge (t-1)\alpha$. By assumption $(n+1)z \le (t+1)\alpha +n$. Hence $2\alpha \le (n+1)e_1+(n+1)f_1+n$. We have $ne_1+f_1 =\alpha$. Thus $\alpha\le nf_1+n\le n^2$, a contradiction.\end{proof}
\begin{lemma}\label{a4.0} Fix an integer $t\ge 3$ and assume $A(t-2)$. Then $C(t)$ is true. \end{lemma}
\begin{proof} Fix a positive integer $z$ for which we want to prove $C(t,z)$. If $z<e_1+f_1$, then $C(t,z)$ is true. Now assume $z\ge e_1+f_1$ and let $W\subset X$ be a general union of $z-e_1-f_1$ double points of $X$. If $\deg (W)\ge h^0(\enm{\cal{L}}[t-2])$, we get $h^0(\enm{\cal{I}} _W\otimes \enm{\cal{L}}[t-2])=0$ by $A(t-2)$. If $\deg (W) <h^0(\enm{\cal{L}}[t-2])$, then $h^1(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[t-2])=0$, i.e. $h^0(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[t-2])=(t-1)\alpha -\deg(W)$. Thus $C(t,z)$ is true.\end{proof}
\begin{proposition}\label{u1} Fix a positive integer $z$ such that $n(z-e_1)+e_1\le \alpha$ and assume that the $e_1$-secant variety of $(Y,\enm{\cal{L}})$ has dimension $ne_1-1$. Then the $z$-secant variety of the pair $(X,\enm{\cal{L}}[1])$ has dimension $z(n+1)-1$. \end{proposition}
\begin{proof} It is sufficient to do the cases with $z\ge e_1$. Take a general $(A,B)\subset H\times H$ such that $\#A=z-e_1$ and $\#B =e_1$. By assumption $\#A+\#B \le \alpha$. Since the $e_1$-secant variety of $(Y,\enm{\cal{L}})$ has dimension $ne_1-1$ and $e_1$ is a non-negative integer,
$h^1(H,\enm{\cal{I}}_{(2A,H)\cup B,H}\otimes \enm{\cal{L}})=0$. By the Differential Horace Lemma (Remark \ref{dh1}) to prove the proposition it is sufficient to prove that $h^1(\enm{\cal{I}}_{(2B,H)}\otimes \enm{\cal{L}})=0$. Since $(2B,H) \subset H$ and $h^1(\enm{\cal{L}}[-1]) =0$ by the K\"{u}nneth formula, we have $h^1(\enm{\cal{I}}_{(2B,H)}\otimes \enm{\cal{L}})=h^1(H,\enm{\cal{I}}_{(2B,H),H}\otimes \enm{\cal{L}}_{|H})=0$ (because the $e_1$-secant variety of $(Y,\enm{\cal{L}})$ has dimension $ne_1-1$).\end{proof}
\begin{proof}[Proof of Theorem \ref{i1}:] By Remark \ref{ovv1} it is sufficient to test all positive integers $z$ such that $(n+1)z\le (t+1)\alpha +n$.
\quad {\bf Outline of the proof:} We start with the proof of some numerical inequalities. Recall that $B(t,z)$ implies $A(t,z)$ for $t\ge 2$ (Lemma \ref{a2}). We first prove the theorem for $t=2$ without using Lemma \ref{a3} (step (a)). Then we take $t=3$ and prove $C(3)$ and $B(3)$ (step (b)). Then for $t\ge 4$ we prove the theorem by induction on $t$, using that $A(t-2)$ is already proved.
Let $S\subset H$ (resp. $S'\subset H$, resp. $S''$) be a general subset of $H$ with cardinality $e_1$ (resp. $f_1$, resp. $e_1-f_1$). Let $Z\subset X$ be a general union of $z$ double points of $X$. By the Terracini Lemma it is sufficient to prove that either $h^0(\enm{\cal{I}}_Z\otimes \enm{\cal{L}}[t])=0$ or $h^1(\enm{\cal{I}}_Z\otimes \enm{\cal{L}}[t])=0$. Since $f_1\le n-1$, $f_1+ne_1 = \alpha$ and $\alpha \ge n^2-1$, we have $f_1\le e_1$. Thus the $f_1$-secant variety of the pair $(Y,\enm{\cal{L}})$ has dimension $f_1n-1$ (Remark \ref{ovv1}). Note that $h^i(Y,\enm{\cal{I}} _{(2A,Y)\cup B}\otimes \enm{\cal{L}})=0$, $i=0,1$, where $A$ is a general subset of $Y$ with cardinality $e_1$ and $B$ is a general subset of $Y$ with cardinality $f_1$.
Set $\Delta := (t+1)\alpha +n -z(n+1)$ and $w:= z-(t-1)e_1-f_1$.
\quad {\bf Claim 1:} We have \begin{equation}\label{eqnn5} \Delta + z \ge tf_1+e_1+n \end{equation}
\quad {\bf Proof of Claim 1:} We have $\Delta +(n+1)z = (t+1)\alpha +n$. Thus $(n+1)\Delta +(n+1)z = n\Delta +(t+1)\alpha +n$. Since $\alpha =ne_1+f_1$, to prove Claim 1
it is sufficient to prove that $(t+1)ne_1+(t+1)f_1 - n\ge (n+1)tf_1+(n+1)e_1+n(n+1)$,
i.e. $f_n(t) \ge 0$, where $f_n(t):= (t+1)ne_1 +(t+1)f_1-(n+1)tf_1-(n+1)e_1-n(n+2)$. We have $f_n(2) =(2n-1)ne_1+(1-2n)f_1-n(n+2)$. Since $f_1\le e_1$, $f_n(2) \ge (2n-1)(n-1)e_1-n(n+2)$. Since $\alpha \ge n^2$, $e_1\ge n$.
Since $n\ge 3$, $f_n(2)\ge 0$.
The derivative $f'_n(t)$ of $f_n(t)$ is $ne_1-(n+1)f_1$. Since $f_1\le n-1$, $f'_n(t)\ge 0$ if $e_1\ge n$, i.e. if $\alpha \ge n^2$.
\quad {\bf Claim 2:} We have $nf_1+n(w-e_1)+e_1\le \alpha$.
\quad {\bf Proof of Claim 2:} By Remark \ref{aa2} we may assume $w\ge e_1$, i.e. $z\ge te_1+f_1$. Since $w=z-(t-1)e_1-f_1$, $w-e_1=z-te_1-f_1$. Thus $nf_1+n(w-e_1) +e_1 = (n+1)z-z -t\alpha +tf_1 +e_1$. Since $(n+1)z =(t+1)\alpha +n-\Delta$, we have $nf_1+n(w-e_1) +e_1 \le \alpha$ by \eqref{eqnn5}.
\quad (a) Assume $t=2$. We assume $z\ge e_1+f_1$, since the case $z< e_1+f_1$ only requires a small modification and is never one of the critical cases $z\in \{\lfloor 3\alpha /(n+1)\rfloor, \lceil 3\alpha/(n+1)\rceil\}$. Set $w:= z-e_1-f_1$. We write $Z=E\cup Z'$ with $E\cap Z' =\emptyset$, $E$ a general union of $w$ double points and $Z'$ a general union of $e_1+f_1$ double points. We apply the Differential Horace Lemma (Remark \ref{dh1}) $e_1+f_1$ times to the connected components of $Z'$, with respect to the general subset $S\cup S'$ of $H$. To prove that $h^0(\enm{\cal{I}}_Z\otimes \enm{\cal{L}}[2]) =\max \{0,3\alpha -(n+1)z\}$ it is sufficient to prove that $h^0(\enm{\cal{I}}_{E\cup S\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =\max \{0,2\alpha -(n+1)(z-e_1-f_1) - nf_1 -e_1\}$. Since $e_1\ge n \ge (n+1)z-3\alpha$ and $S$ is general in $H$, it is sufficient to prove that $h^1(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =0$ and that $h^0(\enm{\cal{I}}_E\otimes \enm{\cal{L}}[0]) \le \max \{0,h^0(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) -e_1\}$. The last inequality is critical for the proof (see Claim 4) and it fails in Example \ref{ex1} with $n=2$, $z=2a+1$, $e_1=a$, $f_1=1$, because $h^0(\enm{\cal{I}}_E\otimes \enm{\cal{L}}[0]) =1$ and (assuming $h^1(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =0$) $h^0(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) -e_1 =4a+2-3a-2-a=0$.
\quad {\bf Claim 3:} Either $h^1(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =0$ or $h^0(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =0$.
\quad {\bf Proof of Claim 3:} Take a general $S_3\subset H$ such that $\#S_3 = w-e_1$ and a general $S_4\subset H\setminus S$ such that $\#S_4 =e_1$. By the Differential Horace Lemma to prove Claim 3 it is sufficient to prove that $h^1(\enm{\cal{I}} _{2S_4,H}\otimes \enm{\cal{L}}[0]) =0$ and either $h^1(H,\enm{\cal{I}} _{(2S',H)\cup (2S_3,H)\cup S_4}\otimes \enm{\cal{L}}[1]_{|H})=0$ or $h^0(H,\enm{\cal{I}} _{(2S',H)\cup (2S_3,H)\cup S_4}\otimes \enm{\cal{L}}[1]_{|H})=0$. We have $h^1(\enm{\cal{I}} _{2S_4,H}\otimes \enm{\cal{L}}[0]) =h^1(H,\enm{\cal{I}}_{(2S_4,H)}\otimes \enm{\cal{L}}) =0$ (Remark \ref{aa2}). We have $h^1(H,\enm{\cal{I}} _{(2S',H)\cup (2S_3,H)}\otimes \enm{\cal{L}}[1]_{|H}) =0$, because $\#S'+\#S_3 = w-e_1+f_1\le e_1$
by Claim 2. Since $S_4$ is general in $H$, either $h^1(H,\enm{\cal{I}} _{(2S',H)\cup (2S_3,H)\cup S_4}\otimes \enm{\cal{L}}[1]_{|H})=0$ or $h^0(H,\enm{\cal{I}} _{(2S',H)\cup (2S_3,H)\cup S_4}\otimes \enm{\cal{L}}[1]_{|H})=0$.
If $h^0(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =0$, then $h^0(\enm{\cal{I}}_Z\otimes \enm{\cal{L}}[2]) =0$, proving the theorem in this case. Thus we may assume $h^1(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =0$, i.e. $h^0(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =2\alpha -\deg (E) -nf_1$.
\quad {\bf Claim 4:} $h^0(\enm{\cal{I}}_E\otimes \enm{\cal{L}}[0]) \le \max \{0,h^0(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) -e_1\}$.
\quad {\bf Proof of Claim 4:} We saw that we may assume $h^0(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =2\alpha -\deg (E) -nf_1$. Since $E$ is a general union of a certain number, $\gamma$, of double points of $X$, $h^0(\enm{\cal{I}}_E\otimes \enm{\cal{L}}[0]) = h^0(Y,\enm{\cal{I}} _U\otimes \enm{\cal{L}})$, where $U$ is a general union of $\gamma$ double points of $Y$. First assume $\gamma \le e_1$. The assumption on $(Y,\enm{\cal{L}})$ gives $h^0(Y,\enm{\cal{I}} _U\otimes \enm{\cal{L}}) =\alpha -n\gamma$ (Remark \ref{ovv1}). We have $\deg (E) =(n+1)\gamma$. Thus in this case it is sufficient to check that $\alpha \ge \gamma +nf_1$. Since $\gamma \le e_1$, it is sufficient to use that $ne_1+f_1\ge e_1+nf_1$ (Remark \ref{aa2}). Now assume $\gamma >e_1$. A general union $\Gamma$ of $e_1$ double points of $Y$ and $f_1$ points of $Y$ satisfies $h^i(Y,\enm{\cal{I}}_\Gamma \otimes \enm{\cal{L}})=0$, $i=0,1$. Thus $h^0(Y,\enm{\cal{I}}_U\otimes \enm{\cal{L}}) =0$ if $\gamma \ge e_1+f_1$, while if $e_1<\gamma <e_1+f_1$, then $h^0(Y,\enm{\cal{I}} _U\otimes \enm{\cal{L}})\le f_1-(\gamma -e_1)$. If $e_1<\gamma <e_1+f_1$ we have $h^0(\enm{\cal{I}}_{E\cup (2S',H)}\otimes \enm{\cal{L}}[1]) =2\alpha -(n+1)\gamma-nf_1 =\alpha -\gamma -(n-1)f_1 \ge \alpha -e_1-nf_1-1$. It is sufficient to have $\alpha \ge e_1+(n+1)f_1-2$. Since $\alpha =ne_1+f_1$, it is sufficient to have $(n-1)e_1+2 \ge nf_1$, i.e. $n(n-1)e_1+2n\ge n^2f_1$. Since $f_1\le (n-1)$, it is sufficient to have $ne_1 +2n/(n-1) \ge n^2$, which is true for $\alpha >n^2$.
Claims 3 and 4 prove the case $t=2$.
\quad (b) In this step we prove $C(3,z)$. We may assume $z\ge e_1+f_1$. Let $W\subset X$ be a general union of $w':= z-e_1-f_1$ double points. We need to prove that $h^0(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[1]) \le \max \{0,2\alpha -\deg(W)\}$. If $w'\ge 2e_1+f_1$, using twice the Differential Horace Lemma with respect to $H$ we get $h^0(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[1]) =0$. If $w'\le e_1$ using once the usual Horace Lemma we get $h^1(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[1])=0$. If $e_1\le w' \le 2e_1$, using twice the usual Horace Lemma we get $h^1(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[1])\le 2e_1-w'$, which is enough to get $C(3,z)$. If $w' =2e_1+1$ we get $h^0(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[1]) =0$ in the following way. We first degenerate $W$ to $W'\cup W''$ with $W'\cap W''=\emptyset$, $W'$ a general union of $e_1$ double points of $X$ and $W''$ a general union of $e_1+1$ double points of $X$ with $S_5:= W''_{\red}\subset H$. Since $(Y,\enm{\cal{L}})$ is not secant defective, $h^0(H,\enm{\cal{I}} _{W''\cap H,H}\otimes \enm{\cal{L}}[1]_{|H})=0$. Thus the residual exact sequence of $H$ gives
$h^0(\enm{\cal{I}}_W\otimes \enm{\cal{L}}[1])\le h^0(\enm{\cal{I}}_{W'\cup S_5}\otimes \enm{\cal{L}}[0])$. We have $h^0(\enm{\cal{I}}_{W'\cup S_5}\otimes \enm{\cal{L}}[0])=h^0(H,\enm{\cal{I}}_{W'\cap H\cup S_5}\otimes \enm{\cal{L}}[0]_{|H}) =0$, because $e_1\ge f_1$.
\quad (c) Step (b) and Lemma \ref{a3} prove $A(3)$. Thus we may assume $t\ge 4$ and that $A(x)$ is true for $2\le x <t$. By Lemma \ref{a3} it is sufficient to prove $C(t)$. Since $A(t-2)$ is true, Lemma \ref{a4.0} gives $C(t)$.\end{proof}
Sometimes we get $(X,\enm{\cal{L}}[t])$ not secant defective even if $\dim \sigma _{e_1}(Y) \le ne_1-2$, but we need to add other conditions. We give the following example in which we only assume that the $(e_1-1)$-secant variety of the pair $(Y,\enm{\cal{L}})$ has dimension $n(e_1-1)-1$.
\begin{theorem}\label{i1.0} Fix integers $t\ge 2$ and $z>0$. Assume $n\ge 3$, $\alpha \ge 2n^2+4n$ and $\dim \sigma _{e_1-1}(Y)=n(e_1-1)-1$. Then the $z$-secant variety of $(X,\enm{\cal{L}}[t])$ has the expected dimension. \end{theorem}
\begin{proof} The proof is very similar to the one of Theorem \ref{i1}, just using the integers $g_1:= e_1-1$ and $h_1:= f_1+n$ instead of $e_1$ and $f_1$. The stronger assumption on $\alpha$ comes from the inequality $h_1\le 2n-1$ instead of the inequality $f_1\le n-1$. \end{proof}
\begin{proof}[Proof of Theorem \ref{minus}:] We have $n:= \dim X =n_1+n_2+k-2$.
First assume $k=3$. We apply Theorem \ref{i1} to $Y:= \enm{\mathbb{P}}^{n_1}\times \enm{\mathbb{P}}^{n_2}$ with $\alpha =\binom{n_1+d_1}{n_1}\binom{n_2+d_2}{n_2}$. Since $\alpha$ is an increasing function of $d_1$ and $d_2$, it is sufficient to check that $\alpha > n^2$ if $d_1=d_2=3$. In this case $\alpha -n^2-1 =\binom{n_1+3}{3}\binom{n_2+3}{3} -(n_1+n_2+1)^2-1$. Taking the derivatives with respect to $n_1$ and $n_2$ we see that it is sufficient to check the case $n_1=n_2=1$. In this case $n=3$ and $\alpha =16$.
Now assume $k>3$ and that $Y:= \enm{\mathbb{P}}^{n_1}\times \enm{\mathbb{P}}^{n_2}\times (\enm{\mathbb{P}}^1)^{k-3}$ is not secant defective. We apply Theorem \ref{i1} to $Y$ with $\alpha =\binom{n_1+d_1}{n_1}\binom{n_2+d_2}{n_2}(d_3+1)\cdots (d_{k-1}+1)\ge 3^{k-3}\binom{n_1+d_1}{n_1}\binom{n_2+d_2}{n_2}$. We immediately get $\alpha > n^2$. \end{proof}
\begin{remark}\label{fin1} Fix a non-degenerate embedding $Y\subset \enm{\mathbb{P}}^r$ for which we know that many secant varieties have the expected dimension. Set $\enm{\cal{L}}:= \enm{\cal{O}}_Y(1)$ and $X:= Y\times \enm{\mathbb{P}}^1$. Let $V\subset H^0(\enm{\cal{O}}_Y(1))$ denote the image of the restriction map $H^0(\enm{\cal{O}}_{\enm{\mathbb{P}}^r}(1)) \to H^0(\enm{\cal{L}})$. For any integer $t>0$ let $V[t]$ denote the image of $V\otimes H^0(\enm{\cal{O}}_{\enm{\mathbb{P}}^1}(t))$ in $H^0(\enm{\cal{L}}[t])$. We have $\dim V[t] = (r+1)(t+1)$ and we may use the proofs of this paper with $r+1$ instead of $\alpha$. Hence, under the assumptions of one of our results we get the non-defectivity of the secant varieties with respect to the embedding induced by the linear subspace $V[t]$ of dimension $(r+1)(t+1)$ of $H^0(\enm{\cal{L}}[t])$. \end{remark}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\end{document} | arXiv |
Seed production, seed dispersal and seedling establishment of two afromontane tree species in and around a church forest: implications for forest restoration
Abrham Abiyu1,
Demel Teketay2,
Gerhard Glatzel3 and
Georg Gratzer3
Forest Ecosystems 2016, 3:16
Accepted: 7 July 2016
Seed production, seed dispersal and seedling establishment are relevant life phases of plants. Understanding these processes and their patterns is essential to recognize vegetation dynamics and to apply it to forest restoration.
For Olea europaea and Schefflera abyssinica, fecundity was estimated using randomized branch sampling. Seed dispersal and seedling establishment were monitored using spatially explicit seed traps and plots. Dispersal functions were calibrated applying inverse modeling.
O. europaea produced more seeds and had longer dispersal distances compared to S. abyssinica. Correlations between observed and predicted number of recruits were statistically significant. Seedlings of the two species showed different niche requirements.
The studied species were recruitment-limited due to low dispersal activity or lack of suitable microsites. Restoration relying on natural regeneration should overcome these limitations by increasing disperser visitation and reducing biotic and abiotic stresses.
Keywords: Schefflera abyssinica, Recruitment limitation
Deforestation is almost a synonym for Ethiopian forest history. Small patches of forests that surround churches, hence called church forests, are the only remnants after past deforestation in the northern part of the highlands (Nyssen et al. 2004, 2009). Church forests are located as islands in a matrix of a highly degraded agricultural landscape (Wassie et al. 2005). They represent hotspots of biodiversity and are critical conservation areas in the Afromontane region (Aerts et al. 2016). Ecosystem restoration with the objective of watershed and riparian repair functions as well as livelihood and landscape diversification is a national priority in current land management in these degraded highlands (Nyssen et al. 2004).
The most appropriate strategy to restore degraded agricultural landscapes involves the reconstruction of native plant communities through re-colonization of native flora, reforestation or afforestation (Stanturf et al. 2014). Restoration can be achieved using tree species (active restoration), relying on natural re-colonization, or through assisted natural regeneration (passive restoration). Both approaches have been discussed as restoration options to re-vegetate the highlands of Ethiopia (Abiyu et al. 2011). In the past, reforestation programs with plantation species were successful due to the fast growth of such species and their well-known management practices (Zanne and Chapman 2001; Lemenih and Teketay 2006). While these plantations improve soil fertility (Lemma et al. 2006; Abiyu et al. 2011), they do not have any positive effect on biodiversity (Chazdon 2008) and therefore represent an incomplete first step of a restoration process (Stanturf et al. 2001). Assisted natural regeneration and natural regeneration rank highest on the restoration ladder (sensu Chazdon 2008) with the highest ranks in biodiversity and ecosystem services at the lowest associated cost. These approaches, however, have the highest requirements in terms of spatial distribution of residual vegetation or biological legacies (Bannister et al. 2014). In order to achieve cost-efficient and timely restoration, a combination of mixed passive and active restoration strategies are favored in certain situations (Bannister et al. 2014). Thus, strategies and methods of forest restoration may follow different paradigms depending on stakeholder objectives, regional climate and degree of degradation (Jacobs et al. 2015).
Fruit production, seed dispersal, germination and seedling establishment are critical life phases in plants (Harper 1997). A clear understanding of these issues is crucial to recognize patterns of colonization and extinction, and directional changes in species composition or succession over time, which have strong relevance for ecosystem restoration (Hobbs et al. 2007). Information on relevant life phases of Afromontane plant species is scarce. In the absence of this information there are no means to identify the risks of failure of management measures and the means to overcome them. This information helps to understand filters of succession that determine the floristic composition of the secondary forest resulting from a passive restoration treatment. An understanding of these key processes allows the prediction of the likely outcome of any silvicultural measure in terms of vitality, density, survival, growth and quality of regeneration at an appropriate time (Wagner et al. 2010).
Studies on the regeneration ecology of trees in the Afromontane forest regions in Ethiopia have characterized propagule pools, particularly soil seed banks and seedling banks, thus focusing on storage effects of reproductive potentials (Teketay and Granström 1995, 1997). Soil seed banks have little potential as a source of tree regeneration in the Afromontane region given the lack of available seeds due to soil erosion or the failure to form persistent soil seed banks owing to the physiological requirements of seeds (Teketay and Granström 1995, 1997). Apart from a study carried out for Leptonychia usambarensis in Tanzania (Cordeiro et al. 2009), no studies have been conducted to characterize seed dispersal of tree species in Afromontane forests. Some studies conducted so far acknowledge the importance of this information for East African Afromontane forests (Aerts et al. 2006). In principle, recruitment limitation can be caused by limited seed availability or by constraints on seedling establishment (Clark et al. 1999, 2007; Nathan et al. 2000; Muller-Landau et al. 2002). Dispersal mediated recruitment limitation is especially severe with fleshy fruited trees (Rey and Alcántara 2000; Zywiec et al. 2013). While these limitations may promote coexistence in plant communities at stand levels (Herrera and Jordano 1981; Herrera et al. 1998; Abrams 2003), they severely inhibit ecosystem restoration (Holl 1998, 1999, 2008).
Ecosystem restoration in general requires knowledge on seed dispersal characteristics of targeted tree species and an understanding of constraints to seedling establishment and survival. The objective of this study was to compare the similarities and differences between the seed dispersal, seed rain and seedling establishment patterns of two selected native tree species in an Afromontane church-forest of the northern Ethiopian Highlands.
The research was conducted in and around Tara-Gedam church forest in northwestern Ethiopia (12°08′35′′N and 37°44′29′′E), situated in steep mountain terrain. The northern and northwestern boundaries of the forest are defined by steep hill slopes, the western boundary by a highway. The southern and the southeastern boundaries delimit it towards a matrix of farm and grazing land settled by subsistence farmers. Leaving a diversity of trees, which have positive interaction with crops and livestock scattered in the landscape (parkland agroforestry), is a typical land use system. The elevation ranges from 2175 to 2390 m. The mean annual minimum temperature is 13 °C and the maximum 27 °C. The mean annual rainfall is 1085 mm and occurs from June to September. The composition of the forest consists of the following native trees and shrubs: Olea europaea L. ssp. cuspidata (Wall. ex G. Don) Cif., Schefflera abyssinica (Hochst. ex A. Rich.) Harms, Albizia schimperiana Oliv., Ekebergia capensis Sparrm., Croton macrostachyus Hochst. ex Del., Acacia negrii Pic.-Serm., Apodytes dimidiata E. Mey. ex Arn., Nuxia congesta R. Br. ex Fresen., Schrebera alata (Hochst.) Welw., Grewia ferruginea Hochst. ex A. Rich., Vernonia amygdalina Del., Calpurnia aurea (Ait.) Benth., Carrisa spinarum L., Dovyalis abyssinica (A. Rich.) Warb., Bersama abyssinica Fresen., Rhus glutinosa A. Rich., Clausena anisata (Willd.) Benth., Osyris quadripartita Decn., Maesa lanceolata Forssk. and Myrsine africana L. Eucalyptus camaldulensis Dehnh. is the dominant non-native tree species in the matrix of degraded land beyond the church forest.
Two native tree species, i.e., Olea europaea L. ssp. cuspidata (Wall. ex G. Don) Cif. (from here on referred to as O. europaea) and Schefflera abyssinica (Hochst. ex A. Rich.) Harms (S. abyssinica) were selected for this study, considering their dominance as upper canopy trees in forest remnants and also for their timber and non-timber importance to local people. The study was carried out from February to November 2009 and from January to October 2010.
O. europaea is an evergreen tree or shrub, reaching a height of up to 18 m. The flowers and fruits are clustered on auxiliary panicles. Fruits are dark violet when ripe, one seeded and 0.5 to 1 cm in diameter. Flowering and fruiting time is between September and January. Dispersal is predominantly by frugivorous birds. S. abyssinica is a deciduous tree species reaching up to a height of 30 m. The flowers and fruits are clustered at the end of 30 to 40 cm racemes. The multi-seeded fruits are red and 0.26 to 0.31 cm long. Flowering and fruiting time is from March to May. Dispersal is mostly carried out by fruit eating birds (Bamps 1989). In our study area, it starts its life as an epiphyte and becomes an upper canopy tree when fully grown (Abiyu et al. 2013).
Seed trap placement for dispersal distance estimation
Seed traps were used to sample seed rain inside and outside the forest patch. The seed traps were circular baskets with a 60 cm radius, made from bamboo and lined with plastic sheets. Three 1.5-m high stakes were used to raise the seed traps above the ground to avoid predation by rodents (Additional file 1). However, not all traps were at similar height above the ground, given differences caused by pit preparation.
Traps were arranged inside the forest and in the matrix outside the forest. In the matrix outside the forest, traps were arranged along transects, under trees and along water bodies and drainage lines (Fig. 1). The total number of traps set inside and outside the forest was 388. The coordinates (x, y) of each trap and potential seed source tree were established. The traps were set up in November 2009 and were monitored during the fruit setting period up to July 2010. Seeds from traps were collected once a week, counted at the spot, bagged, labeled and transported to the laboratory. This routine was repeated until no more seeds were recovered from the traps.
Fig. 1 Map showing the study area, the location of transects and seed traps. Vertical lines indicate UTM northing and horizontal lines indicate UTM easting
Regeneration study
Next to each trap, 1 m2 (1 m × 1 m) plots were prepared, where seeds were marked with toothpicks and monitored for emerging seedlings. These plots were stratified into: (i) open habitats, (ii) stones/boulders, (iii) trees/shrubs and (iv) banks and the immediate neighborhood of water bodies (springs, ponds and creeks).
While established plants and germinants are rare in the studied habitats, tree establishment was observed in areas where soil had been freshly exposed by human activity. In addition, and in order to validate seed dispersal kernels against plant distribution, we laid out a further set of 34 square plots (2 m × 2 m) along additional transects running from seed sources into the open habitats, perpendicular to the first set of transects. Habitats for these plots were classified into road side boulders (12 plots), stone bunds on farms (10 plots) and open spaces between these habitats (12 plots).
Modeling seed dispersal: fecundity and distance
The approach developed by Ribbens et al. (1994) is widely used in modeling seed dispersal (Loiselle et al. 1996; Wada and Ribbens 1997; LePage et al. 2000; Canham and Uriarte 2006). It models both components of dispersal functions, i.e., the amount of seeds produced and the dispersal distance. Poisson distribution functions are used to predict the seedling recruitment patterns using the following equation:
$$ R=\left[STR\times \left(\frac{dbh}{45}\right)^{\beta}\right]\times \frac{1}{n}\times \left[e^{-D\times m^{\theta}}\right] $$
where R is the predicted number of recruits (seeds and seedlings) and STR (standard total recruitment) denotes the number of fruits produced by a tree of standard dbh (diameter at breast height). The parameter β modifies STR as a power function of the actual dbh observed.
The second part of the equation represents the mean density of recruits in a trap located at distance m from the parent tree, where n is a normalizer ensuring that the area under the second portion of the equation is equal to 1, θ determines the shape of the distribution and D determines the steepness of decline in the number of recruits as the distance from the parent tree increases. Therefore, the number of recruits (R) predicted for trap i, given T trees, is:
$$ R_i=\sum_{j=1}^T\left[STR\times \left(\frac{dbh_j}{45}\right)^{\beta}\right]\times \frac{1}{n}\times \left[e^{-D\times m_{ij}^{\theta}}\right] $$
where m_ij is the distance from the i-th sample trap to the j-th tree.
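To make the model concrete, the following minimal sketch (not part of the original analysis, which used RECRUITS 3.1) evaluates equation 2 for a single trap given mapped source trees. The function and variable names are ours, the normalizer n is obtained by numerically integrating the distance term over the plane, and trap area and edge effects are ignored.

```python
import numpy as np
from scipy.integrate import quad


def kernel_normalizer(D, theta):
    # n: area under exp(-D * r**theta) over the plane (polar integration),
    # so that the distance term of the model integrates to 1.
    value, _ = quad(lambda r: 2.0 * np.pi * r * np.exp(-D * r ** theta), 0.0, np.inf)
    return value


def predicted_recruits(trap_xy, tree_xy, tree_dbh, STR, beta, D, theta, ref_dbh=45.0):
    """Expected number of recruits in one trap, summed over all mapped trees
    (equation 2); trap_xy is an (x, y) pair, tree_xy an array of shape (T, 2),
    tree_dbh an array of length T. Parameter names follow the text."""
    n = kernel_normalizer(D, theta)
    dist = np.hypot(tree_xy[:, 0] - trap_xy[0], tree_xy[:, 1] - trap_xy[1])
    fecundity = STR * (tree_dbh / ref_dbh) ** beta
    return float(np.sum(fecundity * np.exp(-D * dist ** theta) / n))
```

With fitted parameter values such a function could be used to map the expected seed density around any mapped source tree.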
Fecundity estimation
We used randomized branch sampling (RBS) for the initial estimation of fecundity or standard total recruitment (STR) (Gregoire et al. 1995). RBS is a multistage sampling procedure, which defines a unidirectional random walk along the natural branching pattern within the crown of a tree. In this procedure, the most important points are defining the nodes (points where branching begins), segments (inter-nodal points) and paths (series of successive segments). A path is selected starting from the first node. The following segment is chosen by randomized selection weighted by the relative cross sectional area (CSA) of the radiating segments on the node. The thicker the segment, the more likely its selection. Eight individual trees of both species were randomly selected, since the church limited destructive sampling. From each individual tree, a limb (segment) was chosen from a pair or couple of limbs (segments) according to the probability proportional to its CSA. The number of seeds on the chosen limb (segment) was counted. This process continued until a terminal limb (segment), with a predetermined specific size was reached. The terminal limb circumference was between 4 and 6 cm. Fecundity then is defined as the total number of fruit counts at the final stage of the selected limb divided by the product of probabilities along the path (Jessen 1955; Cancino and Saborowski 2005, 2007; Peter et al. 2010). The total number of seeds per tree varied from 7,000 to 110,000 for both species.
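A minimal sketch of the RBS estimator along a single path is given below; the nested-dictionary representation of the branching structure, the field names and the terminal circumference threshold are illustrative assumptions rather than part of the field protocol.

```python
import random


def rbs_fecundity_estimate(tree, terminal_circumference=5.0):
    """One randomized-branch-sampling path through a tree crown.
    `tree` is assumed to be a nested dict: {'circumference': float,
    'fruit_count': int, 'children': [child nodes]}."""
    node = tree
    path_probability = 1.0
    while node['children'] and node['circumference'] > terminal_circumference:
        # selection probability proportional to cross-sectional area,
        # i.e. to the squared circumference of each radiating segment
        weights = [child['circumference'] ** 2 for child in node['children']]
        index = random.choices(range(len(weights)), weights=weights)[0]
        path_probability *= weights[index] / sum(weights)
        node = node['children'][index]
    # unbiased estimator: fruits counted on the terminal segment divided by
    # the product of the selection probabilities along the path
    return node['fruit_count'] / path_probability
```

Averaging such single-path estimates over several independent random paths per tree reduces the sampling variance of the fecundity estimate.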
Thus, the initial value of STR was set at 50,000 and the fixed dbh was set at 45 cm based on the average value of sampled trees. Initial values for the parameters were made to fluctuate and wander up to 40 % of their original values, without fixing them, as suggested by E. Ribbens (personal communications).
Models were evaluated using maximum likelihood estimation, resulting in a likelihood function, which states the probability of obtaining a set of observations. In our case, maximum likelihood based dispersal functions determined the probability to observe a certain number of seeds, given a particular set of parameters. RECRUITS 3.1 (Ribbens 2002) was used to calibrate the dispersal function (equation 2), using the Metropolis algorithm to find a combination of parameter values that are likely to produce the observed values.
Mean dispersal distance (MDD), percentage of strange recruits or percentage strangers (PS), observed-predicted correlation and confidence intervals were calculated for the statistical evaluation of the model. RECRUITS calculates MDD by distributing the expected seeds on a circle around a parent tree and determining the average distance from the tree. The observed-predicted correlation is the measure of spatial association between observed and model predicted values; this method uses product-moment correlation coefficients between observed values and expected means for every species. Approximate bivariate 95 % confidence intervals were fitted for estimates of STR and MDD values using the inverse likelihood ratio test. The percentage strangers (PS) calculates the amount of seeds contributed by trees outside the observation distance or the amount of seeds contributed from unmapped trees in the vicinity of the traps inside the observation distance.
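As an illustration of two quantities involved in the calibration, the sketch below computes a Poisson negative log-likelihood for observed trap counts and a mean dispersal distance implied by the fitted kernel. Both are simplified stand-ins for the internal computations of RECRUITS (whose exact MDD definition, based on distributing expected seeds on circles, may differ), and all names are ours.

```python
import numpy as np
from scipy.integrate import quad


def poisson_negative_log_likelihood(observed, expected):
    """Negative Poisson log-likelihood of the trap counts, up to the
    log-factorial constant; a sketch of the fitting criterion."""
    expected = np.clip(np.asarray(expected, dtype=float), 1e-12, None)
    observed = np.asarray(observed, dtype=float)
    return float(np.sum(expected - observed * np.log(expected)))


def mean_dispersal_distance(D, theta):
    """Mean seed-to-parent distance under the radial kernel exp(-D * r**theta),
    integrating over the plane (an assumption, not the RECRUITS definition)."""
    num, _ = quad(lambda r: 2.0 * np.pi * r * r * np.exp(-D * r ** theta), 0.0, np.inf)
    den, _ = quad(lambda r: 2.0 * np.pi * r * np.exp(-D * r ** theta), 0.0, np.inf)
    return num / den
```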
The recruitment functions (equation 2) were cross-validated by omitting every third trap, computing a new model and using the new model to predict traps omitted from the new model (Ribbens et al. 1994).
Descriptive statistics were used to visualize the number of seeds reaching different traps located in the various habitats. For the 2 m × 2 m plots along the transects, one-way ANOVA with unequal sample sizes was used to test the hypothesis of differences between the various habitats (microsites) in the emergence of seedlings.
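A minimal sketch of such a test, with invented placeholder counts rather than the study data, could look as follows:

```python
from scipy.stats import f_oneway

# One-way ANOVA with unequal group sizes: seedling counts per plot grouped by
# habitat. The counts below are placeholders, not data from this study.
roadside_boulders = [3, 5, 2, 4, 6, 1, 3, 2, 5, 4, 3, 2]   # 12 plots
stone_bunds = [2, 4, 1, 3, 2, 5, 3, 2, 4, 1]               # 10 plots
open_spaces = [0, 1, 0, 2, 1, 0, 1, 0, 2, 1, 0, 1]         # 12 plots

F, p = f_oneway(roadside_boulders, stone_bunds, open_spaces)
print(f"F = {F:.2f}, p = {p:.3f}")
```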
The amounts of seeds produced per parent tree varied between the two species. The STR value denoting the seed production for the standardized parental size was higher for O. europaea (36,219) than for S. abyssinica (28,221) (Table 1).
Table 1 Maximum likelihood estimates of the parameters used in the dispersal function
Rows: Olea europaea (median dispersal distance 200 m) and Schefflera abyssinica (median dispersal distance 100 m). Columns: STRa, Db, Beta, Theta, MDDc, Rd
Beta and Theta are parameters for the Poisson distribution function
*Significant at p = 0.05
aSTR is the standard total recruitment
bD the speed of the decline in the number of recruits
cMDD the mean dispersal distance
dR the observed-predicted correlation
The standard total recruitment (STR) values show seed production potential and hence fecundity. The total seed production values for both species studied are within the range of estimates of fruit production reported for tropical trees in Africa, e.g., 12,000–72,000 reported from Cameroon (Norghauer and Newbery 2015). Clark et al. (2005) estimated the fecundity of trees in the Dja Reserve, Cameroon, at between 25,000 and 65,000 for bird dispersed seeds and from 22,000 to 79,000 for mammal dispersed seeds.
Seed dispersal profiles of the two species showed that, up to a dispersal distance of 55 m, seed densities were higher for S. abyssinica than for O. europaea, while the latter species had higher seed densities than S. abyssinica at longer distances (Fig. 2).
Fig. 2 Seed dispersal profiles for Schefflera abyssinica and Olea europaea ssp. cuspidata
The difference in the dispersal profile of the two tree species may be a reflection of the reproductive ability of their mother trees, their seed disperser activity and differential requirements of the seeds and seedlings (Rey and Alcántara 2000). S. abyssinica is represented by few large and sparsely scattered trees in the forest (Abiyu et al. 2013). It produces lipid and protein rich fruits (Saracco et al. 2005) with attractive colors which can be tracked by specialist and generalist frugivorous birds, probably with short gut retention time, dropping the seeds close to the seed source. O. europaea trees are distributed abundantly in the study area. Their fruits are mostly tracked by specialized frugivorous birds, which may disperse the seeds over longer distances.
The median distance was 200 m for O. europaea and 100 m for S. abyssinica. The mean dispersal distance (MDD) was 191 m for O. europaea and 92 m for S. abyssinica (Table 1). The percentage of strangers was higher for O. europaea (46 %) than for S. abyssinica (40 %).
Dispersal distances for both species fall within the range of long-distance dispersal (Norghauer and Newbery 2015), even though they are shorter than those reported in other studies from the tropics. Clark et al. (2005) found dispersal distances of up to 473 m for animal-dispersed species. Farther distances (316 m) were also reported (Godoy and Jordano 2001). In general, seeds of animal dispersed tree species have longer dispersal distances than wind dispersed ones (Ribbens et al. 1994).
The numbers of predicted and observed recruits are shown for both species (Fig. 3). The correlation coefficient, indicating the intensity of association or spatial congruence between observed and predicted data sets, was 0.38 for O. europaea and 0.39 for S. abyssinica (Table 1). These values were also statistically significant when cross-validated, i.e., 0.55 for O. europaea and 0.35 for S. abyssinica. The highest number of dispersed seeds was located near their source trees (Fig. 4).
Fig. 3 Predicted (continuous lines) and observed (dotted lines) seed density of a Olea europaea ssp. cuspidata and b Schefflera abyssinica
Fig. 4 Seed density of Olea europaea seedlings on artificially created structures as a function of distance from the seed source
Despite the arrival of seeds, there was no seedling recruitment on the 1 m2 plots located near to the seed traps. Seeds of S. abyssinica were found in traps placed under fruiting trees of its own species, while no seeds were found in traps placed in other habitats. The highest average number of seeds of O. europaea per trap was recorded in traps placed close to springs (681) followed by traps adjacent to bigger stones (183), under trees/shrubs (22) and on open sites (<1) (Table 2).
Table 2 Mean number of seeds recorded in seed traps placed in different habitats for Olea europaea ssp. cuspidata and Schefflera abyssinica (the numbers in brackets indicate the standard deviation of the mean)
Columns: Habitat for trap location, Number of traps with seeds, Mean number of seeds per trap. Habitats: open sites, stones/boulders, trees/shrubs, springs/water bodies
No seedling recruitment from the ground was recorded for S. abyssinica in the 2 m × 2 m plots. There were no statistically significant (F = 2.29, p = 0.12) differences in the number of seedlings of O. europaea among the various habitats. However, a large number of O. europaea seedlings were found on human-made structures such as between boulders excavated during road construction and on stone bunds constructed for soil and water conservation (Fig. 4). On these selected habitats, seedling density diminished with increasing distance from the seed source.
Except on certain human-made structures, seed dispersal was not accompanied by successful germination. Two conditions, non-random arrival and survival in predictable locations, should be met for directed seed dispersal to occur. Therefore, its contribution is minimal for vegetation dynamics in the study area. This may arise mainly from the uncoupling of different selective forces acting on seeds and seedlings (Rey and Alcántara 2000) or from ontogenetic shifts in stress and resource requirements in various microhabitats (Jordano and Herrera 1995; Schupp 1995).
O. europaea and S. abyssinica are recruitment-limited in our studied landscape. Although there are several mother trees at the population level for the production of recruits for O. europaea, this species is recruitment-limited due to low dispersal activity as well as its failure to germinate and establish after dispersal. Disperser birds do not take seeds of O. europaea to microsites. On the other hand, S. abyssinica is recruitment-limited because there are few mother trees at the population level and too few available microsites such as branch forks and stem wounds. These available microsites can be saturated easily (Abiyu et al. 2013).
The location of restoration sites in relation to the seed source critically affects restoration success (Holl 1998, 1999, 2008). Our study corroborates the need to maintain patches of forests as islands and stepping stones for biodiversity conservation and restoration. Church forests in northern Ethiopia are remnants of the original tree population (Aerts et al. 2016). Currently, these forests are facing strong anthropogenic pressure, and dispersal of propagules from these remnants is crucial to form viable populations at the patch level. Likewise, restoration of areas devoid of their natural vegetation in the landscape needs seed input from the remnant forests. Seed dispersal is a filter for succession and affects the quality of regeneration in the context of passive restoration. This raises the importance of other landscape elements and biological legacies, such as trees on hedges and homesteads, to support restoration as seed sources. In the absence of these supplementary seed sources, deliberate introduction of these two species may be needed in highly degraded areas.
The median dispersal distance for both species is less than 200 m. For successful dispersal and recruitment of O. europaea and S. abyssinica in degraded ecosystems in the Ethiopian highlands, the potential mother trees should be located not farther than the median dispersal distance. In the absence of seed sources within the indicated distance, reliance on natural regeneration may not be a suitable restoration strategy.
The authors thank the Commission for Development Studies (KEF), ÖAD and the International Foundation for Science (IFS) for financial support to A.A. Molla Addisu and Fikirte Shewatatek assisted in data collection and the late Sinatyehu Bayeh provided valuable comments on the arrangement of the seed traps. Eric Ribbens provided support with the RECRUITS software, Menale Wondie provided GIS support and Andras Darabant provided English language editing and correction support. We thank the reviewers who helped us to improve an earlier version of the manuscript.
Authors' Contribution
All authors conceived the study. All authors helped to draft the manuscript. All authors read and approved the final manuscript.
Additional file 1: Seed traps. (PPTX 3274 kb)
Amhara Agricultural Research Institute, P. O. Box 527, Bahir Dar, Ethiopia
Department of Crop Science and Production, Botswana University of Agriculture and Natural Resources, Private Bag 0027, Gaborone, Botswana
Institute of Forest Ecology, University of Natural Resources and Life Sciences, Peter Jordan Straße 82 A-1190, Vienna, Austria
References
Abiyu A, Gratzer G, Teketay D, Glatzel G, Aerts R (2013) Epiphytic recruitment of Schefflera abyssinica (A. Rich) Harms. and the role of microsites in affecting tree community structure in remnant forests in northwest Ethiopia. SINET: Ethiopian J Sci 36(1):41–44
Abiyu A, Lemenih M, Gratzer G, Aerts R, Teketay D, Glatzel G (2011) Status of native woody species diversity and soil characteristics in an exclosure and in plantations of Eucalyptus globulus and Cupressus lusitanica in Northern Ethiopia. Mt Res Dev 31(2):144–152
Abrams MD (2003) Where has all the white oak gone? Bioscience 53(10):927–939
Aerts R, van Overtveld K, November E, Wassie A, Abiyu A, Demissew S, Daye DD, Giday K, Haile M, TewoldeBerhan S, Teketay D, Teklehaimanot Z, Binggeli P, Deckers J, Friis I, Gratzer G, Hermy M, Heyn M, Honnay O, Paris M, Frank JS, Muys B, Bongers F, Healey JR (2016) Conservation of the Ethiopian church forests: Threats, opportunities and implications for their management. Sci Total Environ 551:404–414
Aerts R, Maes W, November E, Negussie A, Hermy M, Muys B (2006) Restoring dry Afromontane forest using bird and nurse plant effects: Direct sowing of Olea europaea ssp. cuspidata seeds. Forest Ecol Manage 230(1–3):23–31
Bamps R (1989) Araliaceae. In: Hedberg I, Edwards S (eds) Flora of Ethiopia, vol 3, Addis Abeba University. Addis Abeba & Uppsala University, Uppsala, Addis Abeba, pp 537–543
Bannister JR, Wagner S, Donoso PJ, Bauhus J (2014) The importance of seed trees in the dioecious conifer Pilgerodendron uviferum for passive restoration of fire disturbed southern bog forests. Aust Ecol 39(2):204–213
Cancino J, Saborowski J (2005) Comparison of randomized branch sampling with and without replacement at the first stage. Silva Fennica 39(2):201–216
Cancino J, Saborowski J (2007) Improving RBS estimates–effects of the auxiliary variable, stratification of the crown, and deletion of segments on the precision of estimates. J For Sci 53(7):320–333
Canham CD, Uriarte M (2006) Analysis of neighborhood dynamics of forest ecosystems using likelihood methods and modeling. Ecol Appl 16(1):62–73
Chazdon RL (2008) Beyond deforestation: restoring forests and ecosystem services on degraded lands. Science 320(5882):1458–1460
Clark C, Poulsen J, Bolker B, Connor E, Parker V (2005) Comparative seed shadows of bird-, monkey-, and wind-dispersed trees. Ecology 86(10):2684–2694
Clark C, Poulsen J, Levey D, Osenberg C (2007) Are plant populations seed limited? A critique and meta-analysis of seed addition experiments. Am Nat 170(1):128–142
Clark J, Beckage B, Camill P, Cleveland B, HilleRisLambers J, Lichter J, McLachlan J, Mohan J, Wyckoff P (1999) Interpreting recruitment limitation in forests. Am J Bot 86(1):1–16
Cordeiro NJ, Ndangalasi HJ, McEntee JP, Howe HF (2009) Disperser limitation and recruitment of an endemic African tree in a fragmented landscape. Ecology 90(4):1030–1041
Godoy JA, Jordano P (2001) Seed dispersal by animals: exact identification of source trees with endocarp DNA microsatellites. Mol Ecol 10(9):2275–2283
Gregoire TG, Valentine HT, Furnival GM (1995) Sampling methods to estimate foliage and other characteristics of individual trees. Ecology 76(4):1181–1194
Harper JL (1997) Population biology of plants. Academic, London
Herrera CM, Jordano P (1981) Prunus mahaleb and birds: the high-efficiency seed dispersal system of a temperate fruiting tree. Ecol Monogr 51(2):203–218
Herrera CM, Jordano P, Guitián J, Traveset A (1998) Annual variability in seed production by woody plants and the masting concept: reassessment of principles and relationship to pollination and seed dispersal. Am Nat 152(4):576–594
Hobbs RJ, Jentsch A, Temperton VM (2007) Restoration as a process of assembly and succession mediated by disturbance. In: Walker LR, Walker J, Hobbs RJ (eds) Linking restoration and ecological succession. Springer, New York, pp 150–167
Holl KD (1998) Do bird perching structures elevate seed rain and seedling establishment in abandoned tropical pasture? Restor Ecol 6(3):253–261
Holl KD (1999) Factors limiting tropical rain forest regeneration in abandoned pasture: seed rain, seed germination, microclimate, and soil. Biotropica 31(2):229–242
Holl KD (2008) Are there benefits of bat roosts for tropical forest restoration? Conserv Biol 22(5):1090
Jacobs DF, Oliet JA, Aronson J, Bolte A, Bullock JM, Donoso PJ, Landhäusser SM, Madsen P, Peng S, Rey-Benayas JM (2015) Restoring forests: What constitutes success in the twenty-first century? New Forests 46(5–6):601–614
Jessen RJ (1955) Determining the fruit count on a tree by randomized branch sampling. Biometrics 11(1):99–109
Jordano P, Herrera CM (1995) Shuffling the offspring: uncoupling and spatial discordance of multiple stages in vertebrate seed dispersal. Écoscience 2:230–237
Lemenih M, Teketay D (2006) Changes in soil seed bank composition and density following deforestation and subsequent cultivation of a tropical dry Afromontane forest in Ethiopia. Trop Ecol 47(1):1–12
Lemma B, Kleja DB, Nilsson I, Olsson M (2006) Soil carbon sequestration under different exotic tree species in the southwestern highlands of Ethiopia. Geoderma 136(3–4):886–898
LePage PT, Canham CD, Coates KD, Bartemucci P (2000) Seed abundance versus substrate limitation of seedling recruitment in northern temperate forests of British Columbia. Can J For Res 30(3):415–427
Loiselle BA, Ribbens E, Vargas O (1996) Spatial and temporal variation of seed rain in a tropical lowland wet forest. Biotropica 28(1):82–95
Muller-Landau HC, Wright SJ, Calderón O, Hubbell SP, Foster RB (2002) Assessing recruitment limitation: concepts, methods and case-studies from a tropical forest. In: Levey DJ, Silva WR, Galetti M (eds) Seed dispersal and frugivory: ecology, evolution and conservation. CABI Publishing, Wallingford, pp 35–53
Nathan R, Safriel UN, Noy-Meir I, Schiller G (2000) Spatiotemporal variation in seed dispersal and recruitment near and far from Pinus halepensis trees. Ecology 81(8):2156–2169
Norghauer JM, Newbery DM (2015) Tree size and fecundity influence ballistic seed dispersal of two dominant mast-fruiting species in a tropical rain forest. For Ecol Manage 338:100–113
Nyssen J, Haile M, Naudts J, Munro N, Poesen J, Moeyersons J, Frankl A, Deckers J, Pankhurst R (2009) Desertification? Northern Ethiopia re-photographed after 140 years. Sci Total Environ 407(8):2749–2755
Nyssen J, Poesen J, Moeyersons J, Deckers J, Haile M, Lang A (2004) Human impact on the environment in the Ethiopian and Eritrean highlands—a state of the art. Earth Sci Rev 64(3–4):273–320
Peter H, Otto E, Hubert S (2010) Leaf area of beech (Fagus sylvatica L.) from different stands in eastern Austria studied by randomized branch sampling. Eur J For Res 129(3):401–408. doi:10.1007/s10342-009-0345-8
Rey PJ, Alcántara JM (2000) Recruitment dynamics of a fleshy fruited plant (Olea europaea): connecting patterns of seed dispersal to seedling establishment. J Ecol 88(4):622–633
Ribbens E (2002) RECRUITS 3.1 Operating manual. Department of Biology. Western Illinois University, Macomb
Ribbens E, Silander JA, Pacala SW (1994) Seedling recruitment in forests: calibrating models to predict patterns of tree seedling dispersion. Ecology 75(6):1794–1806. doi:10.2307/1939638
Saracco JF, Collazo JA, Groom MJ, Carlo TA (2005) Crop size and fruit neighborhood effects on bird visitation to fruiting Schefflera morototoni trees in Puerto Rico. Biotropica 37(1):81–87View ArticleGoogle Scholar
Schupp EW (1995) Seed-seedling conflicts, habitat choice, and patterns of plant recruitment. Am J Bot 82:399–409View ArticleGoogle Scholar
Stanturf JA, Palik BJ, Dumroese RK (2014) Contemporary forest restoration: A review emphasizing function. For Ecol Manage 331:292–323View ArticleGoogle Scholar
Stanturf JA, Schoenholtz SH, Schweitzer CJ, Shepard JP (2001) Achieving restoration success: myths in bottomland hardwood forests. Restor Ecol 9(2):189–200View ArticleGoogle Scholar
Teketay D, Granström A (1995) Soil seed banks in dry Afromontane forests of Ethiopia. J Veg Sci 6(6):777–786View ArticleGoogle Scholar
Teketay D, Granström A (1997) Germination ecology of forest species from the highlands of Ethiopia. J Trop Ecol 13(6):805–831View ArticleGoogle Scholar
Wada N, Ribbens E (1997) Japanese maple (Acer palmatum var. matsumurae, Aceraceae) recruitment patterns: seeds, seedlings, and saplings in relation to conspecific adult neighbors. Am J Bot 84(9):1294–1300PubMedView ArticleGoogle Scholar
Wagner S, Collet C, Madsen P, Nakashizuka T, Nyland RD, Sagheb-Talebi K (2010) Beech regeneration research: from ecological to silvicultural aspects. For Ecol Manage 259(11):2172–2182View ArticleGoogle Scholar
Wassie A, Teketay D, Powell N (2005) Church forests provide clues to restoring ecosystems in the degraded highlands of Northern Ethiopia. J Ecol Rest 23(2):131–132Google Scholar
Zanne AE, Chapman CA (2001) Expediting reforestation in tropical grasslands: distance and isolation from seed sources in plantations. Ecol Appl 11:1610–1621View ArticleGoogle Scholar
Zywiec M, Holeksa J, Wesolowska M, Szewczyk J, Zwijacz-Kozica T, Kapusta P (2013) Sorbus aucuparia regeneration in a coarse-grained spruce forest–a landscape scale. J Veg Sci 24(4):735–743View ArticleGoogle Scholar | CommonCrawl |
Journal of NeuroEngineering and Rehabilitation
Validity of shoe-type inertial measurement units for Parkinson's disease patients during treadmill walking
Myeounggon Lee1,
Changhong Youm2,
Jeanhong Jeon2,
Sang-Myung Cheon3 and
Hwayoung Park1
Journal of NeuroEngineering and Rehabilitation 2018; 15:38
Accepted: 7 May 2018
When examining participants with pathologies, a shoe-type inertial measurement unit (IMU) system with sensors mounted on both the left and right outsoles may be more useful for analysis and provide better stability for the sensor positions than previous methods using a single IMU sensor or attached to the lower back and a foot. However, there have been few validity analyses of shoe-type IMU systems versus reference systems for patients with Parkinson's disease (PD) walking continuously with a steady-state gait in a single direction. Therefore, the purpose of this study is to assess the validity of the shoe-type IMU system versus a 3D motion capture system for patients with PD during 1 min of continuous walking on a treadmill.
Seventeen participants with PD successfully walked on a treadmill for 1 min. The shoe-type IMU system and a motion capture system comprising nine infrared cameras were used to collect the treadmill walking data with participants moving at their own preferred speeds. All participants took anti-parkinsonian medication at least 3 h before the treadmill walk. An intraclass correlation coefficient analysis and the associated 95% confidence intervals were used to evaluate the validity of the resultant linear acceleration and spatiotemporal parameters for the IMU and motion capture systems.
The resultant linear accelerations, cadence, left step length, right step length, left step time, and right step time showed excellent agreement between the shoe-type IMU and motion capture systems.
The shoe-type IMU system provides reliable data and can be an alternative measurement tool for objective gait analysis of patients with PD in a clinical environment.
Inertial measurement unit
Spatiotemporal parameter
Gait analysis is a robust method for investigating many health-related factors and has been utilized to determine overall health and predict the cognitive decline, risk of falling, quality of life, and lifespan of patients [1]. Patients with Parkinson's disease (PD), which is a progressive neurodegenerative disorder, experience gait disturbances such as a shuffling gait, reduced step length, reduced gait speed, and delayed gait initiation [2, 3]. These factors may increase the risk of falling [4]. Therefore, gait analysis of patients with PD is being actively studied to investigate the progression of neurodegenerative diseases [5].
In general, the gait of patients with PD is analyzed with objective evaluation methods using measurement devices [6]. Motion capture systems and instrumented walkway systems are considered the gold standard for gait analysis and are usually employed because they provide precise measurements for spatiotemporal and kinematic variables [5, 7]. However, these systems tend to be expensive and require a huge amount of laboratory space, extended post-processing, and skilled technicians. Furthermore, they can generally only capture a small number of consecutive steps in a small capture volume, which can limit the averaged step data, and it is doubtful whether the collected data are similar to natural walking patterns in daily life [8]. Thus, they may be difficult to utilize in a clinical environment [3, 7–11]. Consequently, various researchers have proposed inertial measurement unit (IMU) systems as an alternative method for gait analysis [3, 7, 9, 11–13]. An IMU system comprises tri-axial accelerometers and gyroscopes. It is a wearable device that can be miniaturized [10, 12, 14]. IMU systems have the advantages of being relatively inexpensive, small, lightweight, and requiring a relatively small amount of laboratory space compared to conventional systems [10, 11, 13]. IMU systems can evaluate objective measurements of spatiotemporal gait parameters quickly and easily in a clinical environment [7].
Previous studies have conducted validity analyses of IMU systems versus motion capture systems by using healthy adults [11, 12, 14, 15] and reported excellent agreement during gait-related tasks between spatiotemporal parameters [11, 12, 14] and linear accelerations [15]. Several researchers have performed validity analyses of IMU systems versus reference systems by using a motion capture system and instrumented walkway system with participants with pathologies [5, 7, 13, 16, 17], and good to excellent agreement during gait-related tasks was found for the spatiotemporal parameters [5, 7, 13, 16, 17]. However, most previous studies conducted validity analyses by using a single IMU sensor [5, 7, 16, 17]. A single sensor attached to the lower back is used in the inverted pendulum model, where walking is carried out in a straight line at a constant pace [5]. However, participants with PD have sustained gait disturbances such as a decreased step length and shuffling step, which may lead to estimation errors such as longer or shorter time variables, more or fewer gait cycles, and an incorrect step length [5, 13]. Moreover, participants with PD have higher accelerations along the anteroposterior, mediolateral, and vertical axes than healthy controls, and these results may be due to the difficulties that participants with PD have with walking smoothly [18]. Thus, increased acceleration may cause overestimation of the vertical displacement of the COM, which may lead to overestimation of the step length [7]. Trojaniello et al. [16] suggested that using IMU sensors attached to each lower limb bilaterally when analyzing participants with pathologies with a gait disturbance may increase the detection accuracy of gait-related events such as the heel strike (HS) and toe off (TO), and it may provide precise and accurate data for participants with PD [5, 13, 16].
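To make the link between vertical COM displacement and step-length estimation concrete, the short sketch below uses the inverted-pendulum relation commonly applied to single lower-back sensors; this specific formula is an assumption added for illustration and is not given in the text.

import numpy as np

def pendulum_step_length(leg_length_m, vertical_com_excursion_m):
    # Inverted-pendulum estimate of step length from vertical COM excursion h
    # and leg length l: 2 * sqrt(2*l*h - h^2). An overestimated h (as may occur
    # with the jerky accelerations of a PD gait) directly inflates the estimate.
    h = vertical_com_excursion_m
    return 2.0 * np.sqrt(2.0 * leg_length_m * h - h ** 2)

print(pendulum_step_length(0.85, 0.02))  # ~0.37 m
print(pendulum_step_length(0.85, 0.03))  # ~0.45 m when h is overestimated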
The position of the IMU sensor is also important for increased data accuracy during gait analysis. Previous studies attached the IMU sensor to the skin above the fourth to fifth lumbar vertebra [5, 7], on the left and right lateral malleolus [13], or to the shoe [11, 17]. However, a sensor attached to the body may measure skin motion artifacts as well as dynamic acceleration due to a tilted position from conditions such as lumbar lordosis and sensor position inaccuracy, which may in turn influence the calculation accuracy of the spatiotemporal variables [8]. Joo et al. [12] reported that a shoe-type IMU system and motion capture system indicated excellent agreement for the cadence (ICC = 0.998) and step length (ICC = 0.970) of 1 min of treadmill walking by healthy young and older adults. They suggested that a shoe-type IMU system mounted on both the left and right outsoles may be a more effective and objective way to detect gait-related events and evaluate spatiotemporal gait parameters [10, 12]. However, few studies have assessed the validity of shoe-type IMU systems versus motion capture systems during continuous walking by participants with PD. Furthermore, most previous studies conducted validity analyses along a walkway with limited space and used repeated measurement methods such as multiple walking trials [5, 13, 16] or continued walking and turning at a point until the time limit [7, 17]. These methods may have limited applicability because of the averaging of several trials of data, and it remains to be seen whether the averaged data can represented the real walking patterns of participants [8]. Thus, a validity analysis of participants with PD walking continuously with a steady-state gait in a single direction may provide meaningful results. The purpose of this study was to assess the validity of the shoe-type IMU system versus a 3D motion capture system for patients with PD during 1 min of continuous treadmill walking. The working hypothesis was that the shoe-type IMU system with two sensors and the motion capture system would indicate excellent agreement for the resultant linear acceleration, cadence, step length, and step time.
An a priori power analysis was conducted to determine the minimum sufficient sample size for an effect size of 0.5, power of 80%, and significance of 0.05. Based on this analysis, 26 participants were required. Thirty patients with PD were recruited who met the UK Brain Bank criteria for PD diagnosis [19] and various Hoehn and Yahr (H&Y) stages of severity [20].
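The a priori sample-size computation mentioned above can be sketched as follows; this is an illustrative reconstruction assuming a t-test-based design in statsmodels, and since the exact test family behind the reported figure of 26 participants is not stated, the output of this sketch need not reproduce that number.

from statsmodels.stats.power import TTestPower

# Assumed inputs taken from the text: effect size = 0.5, alpha = 0.05, power = 0.80
analysis = TTestPower()
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                  alternative='two-sided')
print(round(n_required))  # minimum sample size under these illustrative assumptions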
The patients were from the outpatient clinic of a medical center. The criteria for inclusion in the study were as follows: (a) diagnosed with idiopathic PD, (b) H&Y stage 1–3, (c) taking anti-parkinsonian medication, (d) a Mini-Mental State Exam (MMSE) score of more than 24 points, and (e) no medical history of orthopedic surgery, neurosurgery, or neurophysiology within 6 months prior to the study. Nine participants were excluded because they did not successfully complete the 1-min treadmill walking activity, and four participants did not attend the treadmill walking test. Consequently, 17 participants with PD successfully carried out 1 min of treadmill walking in this study (Fig. 1, Table 1).
Flow diagram illustrating inclusion criteria for participants
Clinical and demographic characteristics
Participants with PD
(Males = 8; Females = 9)
Age (yr)
158.7 ± 9.4
Body mass (kg)
BMI (kg/m2)
Self-preferred walking speed (km/h)
MDS-UPDRS total (score)
68.0 ± 14.2
MDS-UPDRS part III (score)
H&Y stage
MMSE (score)
Duration of disease (yr)
Levodopa equivalent dose (mg)
643.24 ± 306.55
m ± sd mean and standard deviation, BMI Body mass index, H&Y Hoehn and Yahr, MMSE Mini-mental state examination, MDS-UPDRS Modified Movement Disorder Society version of the unified Parkinson's disease rating scale, PD Parkinson's disease
The severity of PD was assessed in terms of the H&Y stage, which was charted from unilateral involvement of the disability (stage 1), bilateral involvement of the disability without impairment of balance (stage 2), bilateral involvement of the disability with postural instability (stage 3), severe disability but still able to walk and stand without assistance (stage 4), and wheelchair-bound or bedridden unless aided (stage 5) [20]. The modified Movement Disorder Society version of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) has four evaluation parts. Part III of the MDS-UPDRS, which ranges from 0 (no motor symptoms) to 132 (severe motor symptoms) [5], was applied in the motor examination. All participants read and signed an informed consent form approved by the institutional review board of Dong-A University (IRB number: 2–104,709-AB-N-01-201,606-HR-025-04).
Shoe-type IMU sensor-based gait analysis systems (DynaStab™, JEIOS, South Korea) consisting of shoe-type data loggers (Smart Balance® SB-1, JEIOS, South Korea) and a data acquisition system were utilized. The shoe-type data logger included an IMU sensor (IMU-3000™, InvenSense, USA) that could measure tri-axial acceleration (up to ±6 g) and tri-axial angular velocities (up to ±500°/s) along three orthogonal axes [10, 12]. The IMU sensors were installed in both outsoles of the shoes, and the data were transmitted wirelessly to a data acquisition system via Bluetooth®. The shoe sizes were adapted to individuals, and a range of shoe sizes was available from 225 mm to 280 mm. The local coordinate system of the IMU sensors was established with anteroposterior, mediolateral, and vertical directions (Fig. 2).
Local coordinate system of the shoe-type IMU system
The motion capture system was composed of nine infrared cameras (MX-T10, Vicon, UK) and one acquisition system (MX-Giganet, Vicon, UK) for treadmill walking. The orientation of the global coordinate system was set behind the left side of the treadmill along the mediolateral (X-axis), anteroposterior (Y-axis), and vertical directions. The motion capture volume was set at 2.0 m (width) × 2.5 m (length) × 2.5 m (height) (Fig. 3). The speed of the belt on the treadmill (LGT7700M, LEXCO, South Korea) could be controlled from 0.5 km/h to 20 km/h in increments of 0.1 km/h. During data collection, both the IMU systems and motion capture systems sampled data at 100 Hz.
Motion capture system. a Treadmill walking. b Experimental setup
Experimental procedures
All participants took anti-parkinsonian medication at least 3 h prior to treadmill walking. Before the treadmill walking, all participants with PD were assessed by a PD specialist in terms of the MDS-UPDRS, H&Y stage, MMSE, and duration of disease. All participants performed a warmup protocol comprising a stretching program and treadmill walking practice at their self-preferred speed for 10 min with a professional exercise trainer. The self-preferred speed was defined as the speed at which a participant was able to move with a stable gait without any support while walking on a treadmill. The treadmill speed was gradually increased until the self-preferred speed of the participants was reached.
After each PD patient completed the warmup procedure, their bodies were measured to obtain the values for their models. The body height and weight were measured with a stadiometer and body weight scale. The widths of the shoulders, elbows, wrists, and knees and the ankle and hand thicknesses were measured with a caliper. The length of the leg was measured from the anterior superior iliac spine to the medial malleolus with a tape measure. After the body measurements were completed, all participants were asked to wear Lycra shirts and shorts, and IMU sensors were mounted in the shoes. Thirty-nine spherical reflective markers, each with a 14 mm diameter, were attached to the participants in accordance with the plug-in gait full-body model (Vicon, Oxford, UK) [21]. All markers were attached to bony landmarks on the participants with double-sided tape and Kinesio tape for stability.
In the treadmill walking test, all participants were first asked to walk on the treadmill at their self-preferred speed. The participants walked approximately 30–60 s from the start of the gait in order to maintain a steady-state gait at their self-preferred speed. The steady-state gait was defined as that when the participant maintained a stable gait movement at a constant speed. The treadmill walking test may be more useful for collecting steady-state gait data than the over ground walking test. The latter allows the acceleration and deceleration of the participants during the gait initiation and termination phases; thus, these were excluded from the analysis [22]. The treadmill walking test was concluded to be more suitable than the over ground walking test because this study was mainly focused on the validity of the shoe-type IMU system versus a 3D motion capture system. When the participant exhibited a steady-state gait, an operator collected the treadmill walking data for 1 min.
The IMU system and motion capture system data were filtered by using a second-order Butterworth low-pass filter with a cutoff frequency of 10 Hz [10, 12]. The data were simultaneously collected from both measurement systems and synchronized based on the timing of the HS and TO by using MATLAB® (2012a, MathWorks Inc., USA) (Fig. 4).
Data from a representative PD participant. The figure shows the resultant linear accelerations during treadmill walking with the shoe-type IMU system (DynaStab) and motion capture system (Vicon). The red line shows the motion capture system data; the blue line shows the IMU system data; the green line shows the root mean square error (RMSE) between the IMU and motion capture systems; the red circles show the timing of HS; and the blue circles show the timing of TO. a heel strike event. b toe off event
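For illustration, the zero-phase low-pass filtering step described above can be sketched as follows; the use of SciPy and of forward-backward filtering is an assumption made here and is not a description of the authors' MATLAB implementation.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0  # sampling rate of both systems (Hz)
FC = 10.0   # cut-off frequency (Hz)

# Second-order Butterworth low-pass filter; filtfilt applies it forward and
# backward so that the filtered signal has no phase lag.
b, a = butter(2, FC / (FS / 2.0), btype='low')

def lowpass(signal):
    return filtfilt(b, a, np.asarray(signal, dtype=float))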
The variables of the IMU system were compared to those of the motion capture system for the resultant linear accelerations and spatiotemporal parameters during the 1-min treadmill walking test. The resultant linear accelerations from the IMU system were calculated as the net accelerations along the X, Y, and Z axes for the left and right shoes individually (Eq. 1). For the motion capture system, the double differential of the heel marker's position along the X, Y, and Z axes was used to calculate the accelerations. The marker of the heel was located closest to the IMU sensor in order to minimize the phase difference between the two systems. The resultant linear accelerations were calculated using the net accelerations along the X, Y, and Z axes for the left and right shoes individually (Eq. 1).
$$ \mathrm{Resultant}\ \mathrm{linear}\ \mathrm{acceleration}=\sqrt{x_{acce}^2+{y}_{acce}^2+{z}_{acce}^2} $$
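A minimal sketch of Eq. 1 for both systems, assuming the tri-axial IMU accelerations and the heel-marker trajectory are available as NumPy arrays sampled at 100 Hz (all names are illustrative):

import numpy as np

def resultant_acceleration_imu(ax, ay, az):
    # Net acceleration over the three orthogonal sensor axes (Eq. 1)
    return np.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def resultant_acceleration_marker(heel_xyz, fs=100.0):
    # Double differentiation of the heel-marker position (N x 3 array, in metres)
    dt = 1.0 / fs
    velocity = np.gradient(heel_xyz, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return np.linalg.norm(acceleration, axis=1)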
The error between the IMU system (aIMU(R)) and motion capture system (aMC(R)) was derived by averaging the root mean square error (RMSE) over the total signal for the resultant linear acceleration [7] (Eq. 2). The %RMSE was defined as the averaged ratio of the RMSE value [23].
$$ \mathrm{RMSE}=\sqrt{\frac{1}{N}\sum \limits_{k=1}^N{\left({a}_{IMU}(R)-{a}_{MC}(R)\right)}^2} $$
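A corresponding sketch of Eq. 2; the normalisation used for the %RMSE is assumed here to be the mean of the reference (motion capture) signal, which is only one plausible reading of the definition in [23]:

import numpy as np

def rmse(a_imu, a_mc):
    # Root mean square error between the two resultant accelerations (Eq. 2)
    a_imu, a_mc = np.asarray(a_imu), np.asarray(a_mc)
    return np.sqrt(np.mean((a_imu - a_mc) ** 2))

def percent_rmse(a_imu, a_mc):
    # Assumption: RMSE expressed as a percentage of the mean reference signal
    return 100.0 * rmse(a_imu, a_mc) / np.mean(a_mc)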
The gait-related events for both the IMU and motion capture systems were defined such that the HS was when the resultant linear acceleration reached the maximum value, and the TO was when the resultant linear acceleration reached the second maximum value during a gait cycle (Fig. 4). The spatiotemporal parameters for both the IMU system and motion capture system were calculated as follows. (a) The cadence (step/min) was calculated as the total number of steps during 1 min. (b) The step length was defined as the product of the walking speed and step time. (c) The step time was defined as the period between the HS of one foot to the subsequent HS of the contralateral foot.
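The event detection and the spatiotemporal definitions above can be sketched as follows; the minimum peak spacing passed to the peak detector is an illustrative guess for slow treadmill walking, not a value reported in the paper:

import numpy as np
from scipy.signal import find_peaks

FS = 100.0  # sampling rate (Hz)

def heel_strikes(resultant_acc, fs=FS):
    # HS taken as the dominant acceleration peak of each gait cycle
    peaks, _ = find_peaks(resultant_acc, distance=int(0.6 * fs))
    return peaks

def spatiotemporal(hs_left, hs_right, speed_kmh, duration_s=60.0, fs=FS):
    hs_all = np.sort(np.concatenate([hs_left, hs_right]))
    step_times = np.diff(hs_all) / fs               # s, alternating left/right steps
    cadence = len(hs_all) * 60.0 / duration_s       # steps per minute
    speed_cm_s = speed_kmh * 100000.0 / 3600.0      # treadmill belt speed in cm/s
    step_lengths = speed_cm_s * step_times          # cm: step length = speed x step time
    return cadence, step_times, step_lengths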
All statistical analyses were performed by using SPSS for Windows (version 20.0, SPSS Inc., Chicago, IL). The Shapiro–Wilk test was used to determine whether the data had a normal distribution. An intraclass correlation coefficient (ICC (2,1); two-way random single measures) analysis and the associated 95% confidence intervals (CIs) were used to assess the validity of the resultant linear acceleration and spatiotemporal parameters of the IMU system to that of the motion capture system. The limits of agreement (LOA) was calculated according to the Bland–Altman plots to show the differences between two systems [24]. The statistical significance level was set at 0.05.
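For illustration, the agreement statistics can be computed as sketched below, assuming the Shrout-Fleiss form of ICC(2,1) and the usual 1.96-SD Bland-Altman limits; this is not the SPSS procedure used by the authors.

import numpy as np

def icc_2_1(ratings):
    # ratings: (n subjects or samples) x (k systems) array; two-way random
    # effects, absolute agreement, single measures (Shrout-Fleiss ICC(2,1)).
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

def bland_altman_loa(a, b):
    # Bias and 95% limits of agreement between two paired measurement series
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias - half_width, bias + half_width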
In the validity analysis for the shoe-type IMU system and motion capture system, the resultant linear accelerations for the 17 participants with PD indicated excellent agreement for both the left (ICC = 0.973, 95% CI = 0.973–0.974, p < 0.001) and right (ICC = 0.971, 95% CI = 0.971–0.972, p < 0.001) shoes during the 1-min treadmill walking. The resultant linear accelerations for each participant indicated excellent agreement for both the left (ICC range = 0.961–0.985) and right (ICC range = 0.944–0.982) shoes (Table 2). Figures 5 and 6 show the difference between the two systems for the resultant linear accelerations of the left and right shoes during 1-min treadmill walking by all PD participants and a representative participant, respectively. Most of the differences fell between the upper and lower LOA for both shoes (range = 93.02–94.01%).
Results of the validity analysis. The table presents the resultant linear acceleration for each participant with PD when measured with the shoe-type IMU system versus the motion capture system
Left shoe
Right shoe
ICC (2,1)
95% CI of ICC
(min, max)
RMSE (m/s2)
Percent RMSE (%)
95% CI of ICC (min, max)
Participant 1
0.981, 0.983
Participant 10
M ± SD
0.971 ± 0.009
6.87 ± 1.56
*: results of the correlation analysis, p < 0.001; CI confidence interval, ICC intraclass correlation coefficient; RMSE root mean square error
Bland–Altman plots of the all participants. The figure shows the resultant linear accelerations for both the left and right shoes during 1 min of treadmill walking
Bland–Altman plots of the representative participant. The figure shows the resultant linear accelerations for both the left and right shoes during 1 min of treadmill walking
All spatiotemporal parameters were normally distributed. The results for the cadence (ICC = 1.000, p < 0.001), left step length (ICC = 0.990, p < 0.001), right step length (ICC = 0.999, p < 0.001), left step time (ICC = 0.993, p < 0.001), and right step time (ICC = 0.993, p < 0.001) showed excellent agreement (Table 3). The difference between the two systems for all spatiotemporal parameters was illustrated by Bland–Altman plots (Figs. 7, 8). The spatiotemporal parameters for each participant indicated moderate to excellent agreement for both the left (ICC range = 0.734–0.997) and right (ICC range = 0.731–0.997) shoes (Tables 4 and 5).
Validity analysis of the spatiotemporal parameters. The table presents the results for the shoe-type IMU system and motion capture system
IMU system
Motion capture system
95% CI of ICC min, max
Cadence (steps/min)
108.24
Left step length (cm)
22.11
Right step length (cm)
22.53
Left step time (s)
0.57
Right step time (s)
0.57
m ± sd: mean and standard deviation; *: results of the correlation analysis, p < 0.001; CI confidence interval, ICC intraclass correlation coefficient, IMU inertial measurement unit
Bland–Altman plots of the all participants. The figure shows the spatiotemporal parameters for both the left and right shoes during 1 min of treadmill walking
Bland–Altman plots of the representative participant. The figure shows the spatiotemporal parameters for both the left and right shoes during 1 min of treadmill walking
Results of the validity analysis of the left shoe. The table presents the spatiotemporal parameters for each participant with PD when measured with the shoe-type IMU system versus the motion capture system
Step length (ICC)
95% CI min, max (cm)
Step time (ICC)
95% CI min, max (s)
*: results of the correlation analysis, p < 0.001 CI confidence interval, ICC intraclass correlation coefficient, IMU inertial measurement unit
Results of the validity analysis of the right shoe. The table presents the spatiotemporal parameters for each participant with PD when measured with the shoe-type IMU system versus the motion capture system
*results of the correlation analysis, p < 0.001; CI confidence interval, ICC intraclass correlation coefficient, IMU inertial measurement unit
The resultant linear acceleration, cadence, step length, and step time were hypothesized to show excellent agreement between the shoe-type IMU system and motion capture system for the 1-min treadmill walking by patients with PD. IMU systems have previously been compared with reference systems such as the motion capture system using healthy adults [11, 12, 14, 15], and the two systems have shown excellent agreement for the stride length, foot clearance, stride velocity, and turning angle (r range = 0.91–0.99) [11]; cadence (ICC = 0.998) and step length (ICC = 0.970) [12]; heel clearance, foot clearance, and foot angle (r range = 0.92–0.99) [14]; and linear accelerations along the anteroposterior, mediolateral, and vertical axes (ICC range = 0.75–0.94) [15]. Furthermore, several researchers have used participants with pathologies to determine the validity of IMU and reference systems such as the instrumented walkway system [5, 13, 16] and motion capture system [7, 17]. The step time (ICC = 0.981) and step length (ICC = 0.869) [5] and linear acceleration along the vertical axis (ICC > 0.75, p < 0.001) [7] showed good to excellent agreement, and the stride time [13, 16], step time (range = 4–9%) [13, 16], stride velocity, and stride length [25] showed low error values.
In the current study, the resultant linear accelerations for the left and right shoes indicated excellent agreement between the shoe-type IMU system and motion capture system. Furthermore, the resultant linear accelerations for each participant with PD indicated excellent agreement between the two systems for both the left (ICC range = 0.961–0.985) and right (ICC range = 0.944–0.982) shoes. In addition, the cadence, left step length, right step length, left step time, and right step time indicated excellent agreement between the two systems. This is similar to the results obtained in previous studies [5, 7, 11, 12, 14–17]. Furthermore, previous studies have reported an RMSE for the linear accelerations along the vertical axis for the two systems of 1.21 ± 1.11 m/s2 (10.2 ± 9.3%), which is relatively small [7]. In this study, the RMSE values for the left and right shoes were 1.99 ± 0.63 m/s2 (6.87 ± 1.56%) and 2.01 ± 0.58 m/s2 (7.00 ± 1.39%), respectively. These are similar to the results of previous studies [7]. The resultant linear accelerations indicated that most of the differences were between the upper and lower LOA for the left and right shoes (range = 93.02–94.01%). Therefore, these results suggest that shoe-type IMU systems can provide reliable data for gait analysis of participants with PD.
The foot is the initial segment that makes contact with the ground during walking, and the HS and TO events are defined as those when the foot makes contact with the ground and is taken off the ground, respectively [10, 12]. The shoe-type IMU system used in this study mounted sensors in the outsole of the shoe beneath the back of each foot.
This may be able to maintain stable sensor positions without hindering movement compared to the use of double-sided tape or Velcro straps. Furthermore, the advantages of this system include not only the ability to measure data on the left and right sides but also on both sides concurrently during gait-related tasks. These advantages make the estimation of HS and TO events more accurate [12]. To maximize these advantages of the shoe-type IMU system, data were not averaged by repeating several trials; instead, the real-time data of participants with PD were collected as they walked with a steady-state gait in a single direction for 1 min for analysis. Thus, the results of this study may provide more meaningful and reliable data than previous methods. This may be useful for assessing characteristics related to pathologies such as asymmetry and variability during gait-related tasks. This requires the sensors to be attached to the lower limbs [5]. Therefore, the shoe-type IMU system may be useful for analyzing patients with PD because it may provide more reliable data during gait-related tasks in a clinical environment.
We determined that there was excellent agreement between the shoe-type IMU system and motion capture system during the 1-min treadmill walking by participants with PD, but this study had several limitations. First, 30 participants with PD were recruited, but only 17 performed the 1-min treadmill walking successfully. The causes of these results remain unclear; the gait characteristics of elderly adults may change at various walking speeds, or these results may have been affected by the relatively small sample size. If more participants had been recruited, the agreement between the two systems might have been more meaningful. In addition, the results were similar to those of previous studies that compared shoe-type IMU systems with motion capture systems for 1-min treadmill walking by young, middle-aged, and older adults [12]. Treadmill walking may be a useful task for a validity analysis of the two systems because more steps under the steady-state gait condition can be acquired than with the participants walking along a walkway. However, this task may not be easy for participants with gait disturbances and more severe impairments (e.g., more than H&Y stage 3) to perform successfully. Some patients with PD indicated moderate ICC values (range = 0.734–0.821) for the spatiotemporal parameters due to the delayed timing of HS events in the IMU system compared to the motion capture system, even when synchronizing the data between the two systems. These results may be related to the gait characteristics of patients with PD, such as tremor and shuffling steps; thus, these factors may affect the resultant linear acceleration in the IMU system. Finally, additional gait-related tasks such as turning, changing direction, and walking on irregular surfaces should be considered for gait analysis under realistic environmental conditions.
In this study, the shoe-type IMU system and motion capture system exhibited excellent agreement for the resultant linear accelerations, cadence, step length, and step time. These results suggest that the shoe-type IMU system, which is relatively low in cost, small, and lightweight while providing reliable data, can be an alternative method for the gait analysis of patients with PD. Therefore, the shoe-type IMU system mounted on the outsole of shoes provides reliable data, and it may be useful for objective gait analysis of patients with PD in a clinical environment.
CI:
Confidence intervals
COM:
Center of mass
H&Y:
Hoehn and Yahr stage
HS:
Heel strike
ICC:
Intraclass correlation coefficient
IMU:
Inertial measurement unit
LOA:
Limits of agreement
MDS-UPDRS:
The modified movement disorder society version of the unified Parkinson's disease rating scale
MMSE:
Mini-mental state exam
Resultant linear acceleration
RMSE:
Root mean square error
TO:
Toe off
This study was supported by the Dong-A University research fund.
All data generated or analyzed during this study are included in this published article [and its Additional files 1, 2, 3, 4 and 5].
ML designed the study, the acquisition, analysis, and interpretation of data, and the draft of the manuscript. CY designed the study and provided significant feedback regarding the analysis of the study and revisions to the manuscript. JJ designed the study and aided in the interpretation of the results. SC designed the study, recruited participants, and aided in the interpretation of the results. HP designed the study and the acquisition, analysis, and interpretation of data. All authors read and approved the final manuscript.
All participants read and signed an informed consent form approved by the institutional review board from Dong-A University (IRB number: 2–104,709-AB-N-01-201,606-HR-025-04).
Additional file 1: IRB number. (PDF 60 kb)
Additional file 2: Raw data - resultant linear accelerations. (XLSX 9196 kb)
Additional file 3: Raw data - spatiotemporal parameters. (XLSX 158 kb)
Additional file 4: List of levodopa-equivalent dose. (XLSX 10 kb)
Additional file 5: Raw data - total resultant linear accelerations. (XLSX 15774 kb)
Biomechanics Laboratory, College of Health Sciences, Dong-A University, Hadan 2-dong, Saha-gu, Busan, Republic of Korea
Department of Health Care and Science, College of Health Sciences, Dong-A University, 37 Nakdong-Daero 550 beon-gil, Hadan 2-dong, Saha-gu, Busan, Republic of Korea
Department of Neurology, School of Medicine, Dong-A University, Dongdaesin-dong 3-ga, Seo-gu, Busan, Republic of Korea
Godfrey A, Del Din S, Barry G, Mathers JC, Rochester L. Instrumenting gait with an accelerometer: a system and algorithm examination. Med Eng Phys. 2015;37(4):400–7.
Klucken J, Barth J, Kugler P, Schlachetzki J, Henze T, Marxreiter F, et al. Unbiased and mobile gait analysis detects motor impairment in Parkinson's disease. PLoS One. 2013;8(2):e56956.
Yoneyama M, Kurihara Y, Watanabe K, Mitoma H. Accelerometry-based gait analysis and its application to Parkinson's disease assessment—part 2: a new measure for quantifying walking behavior. IEEE Trans Neural Syst Rehabil Eng. 2013;21(6):999–1005.
Keus SHJ, Bloem BR, Van Hilten JJ, Ashburn A, Munneke M. Effectiveness of physiotherapy in Parkinson's disease: the feasibility of a randomised controlled trial. Parkinsonism Relat D. 2007;13(2):115–21.
Del Din S, Godfrey A, Rochester L. Validation of an accelerometer to quantify a comprehensive battery of gait characteristics in healthy older adults and Parkinson's disease: toward clinical and at home use. IEEE J Biomed Health Inform. 2016;20(3):838–47.
Esser P, Dawes H, Collett J, Feltham MG, Howells K. Assessment of spatiotemporal gait parameters using inertial measurement units in neurological populations. Gait Posture. 2011;34(4):558–60.
Esser P, Dawes H, Collett J, Feltham MG, Howells K. Validity and inter-rater reliability of inertial gait measurements in Parkinson's disease: a pilot study. J Neurosci Meth. 2012;205(1):177–81.
Hartmann A, Luzi S, Murer K, de Bie RA, de Bruin ED. Concurrent validity of a trunk tri-axial accelerometer system for gait analysis in older adults. Gait Posture. 2009;29(3):444–8.
Bauer CM, Rast FM, Ernst MJ, Kool J, Oetiker S, Rissanen SM, et al. Concurrent validity and reliability of a novel wireless inertial measurement system to assess trunk movement. J Electromyogr Kinesiol. 2015;25(5):782–90.
Kim YK, Joo JY, Jeong SH, Jeon JH, Jung DY. Effects of walking speed and age on the directional stride regularity and gait variability in treadmill walking. J Mech Sci Technol. 2016;30(6):2899–906.
Mariani B, Hoskovec C, Rochat S, Büla C, Penders J, Aminian K. 3D gait assessment in young and elderly subjects using foot-worn inertial sensors. J Biomech. 2010;43(15):2999–3006.
Joo JY, Kim YK, Park JY. Reliability of 3D-inertia measurement unit based shoes in gait analysis. Korean J Sports Biomech. 2015;25(1):123–30.
Trojaniello D, Cereatti A, Pelosin E, Avanzino L, Mirelman A, Hausdorff JM, Della CU. Estimation of step-by-step spatiotemporal parameters of normal and impaired gait using shank-mounted magneto-inertial sensors: application to elderly, hemiparetic, parkinsonian and choreic gait. J Neuroeng Rehabil. 2014;11(1):152.
Kanzler CM, Barth J, Rampp A, Schlarb H, Rott F, Klucken J, Eskofier BM. Inertial sensor based and shoe size independent gait analysis including heel and toe clearance estimation. In: 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC). New York: IEEE; 2015. p. 5424–7.
Cole MH, Van Den Hoorn W, Kavanagh JK, Morrison S, Hodges PW, Smeathers JE, Kerr GK. Concurrent validity of accelerations measured using a tri-axial inertial measurement unit while walking on firm, compliant and uneven surfaces. PLoS One. 2014;9(5):e98395.
Trojaniello D, Ravaschio A, Hausdorff JM, Cereatti A. Comparative assessment of different methods for the estimation of gait temporal parameters using a single inertial sensor: application to elderly, post-stroke, Parkinson's disease and Huntington's disease subjects. Gait Posture. 2015;42(3):310–6.
Mariani B, Jiménez MC, Vingerhoets FJ, Aminian K. On-shoe wearable sensors for gait and turning assessment of patients with Parkinson's disease. IEEE Trans Biomed Eng. 2013;60(1):155–8.
Sekine M, Akay M, Tamura T, Higashi Y, Fujimoto T. Fractal dynamics of body motion in patients with Parkinson's disease. J Neural Eng. 2004;1(1):8.
Gelb DJ, Oliver E, Gilman S. Diagnostic criteria for Parkinson disease. Arch Neurol. 1999;56(1):33–9.
Goetz CG, Poewe W, Rascol O, Sampaio C, Stebbins GT, Counsell C, et al. Movement Disorder Society task force report on the Hoehn and Yahr staging scale: status and recommendations. Mov Disord. 2004;19(9):1020–8.
Bolink SAAN, Naisas H, Senden R, Essers H, Heyligers IC, Meijer K, Grimm B. Validity of an inertial measurement unit to assess pelvic orientation angles during gait, sit–stand transfers and step-up transfers: comparison with an optoelectronic motion capture system. Med Eng Phys. 2016;38(3):225–31.
Almarwani M, Van Swearingen JM, Perera S, Sparto PJ, Brach JS. Challenging the motor control of walking: gait variability during slower and faster pace walking conditions in younger and older adults. Arch Gerontol Geriatr. 2016;66:54–61.
Mayagoitia RE, Nene AV, Veltink PH. Accelerometer and rate gyroscope measurement of kinematics: an inexpensive alternative to optical motion analysis systems. J Biomech. 2002;35(4):537–42.
Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res. 1999;8(2):135–60.
Salarian A, Russmann H, Vingerhoets FJ, Dehollain C, Blanc Y, Burkhard PR, Aminian K. Gait assessment in Parkinson's disease: toward an ambulatory system for long-term monitoring. IEEE Trans Biomed Eng. 2004;51(8):1434–43.
\begin{document}
\title{{
\bf Weight modules of quantized Weyl algebras}}
\author{Vyacheslav Futorny, Laurent Rigal, Andrea Solotar} \date{} \maketitle
\begin{abstract} We develop a general framework for studying relative weight representations for certain pairs consisting of an associative algebra and a commutative subalgebra. Using these tools we describe projective and simple weight modules for quantum Weyl algebras for generic values of deformation parameters. We consider two quantum versions: one by Maltsiniotis and the other one by Akhavizadegan and Jordan. \end{abstract}
\noindent 2010 MSC: 17B37, 16D30, 16D60, 16P40, 16S32, 16S36
\noindent Keywords: Weyl algebra, localisation, simple weight modules
\tableofcontents
\section*{Introduction and notation} There is a natural interest in the representation theory of Weyl algebras, both of finite and infinite rank, as the simplest deformations of polynomial rings with numerous applications in different areas of mathematics and physics. A systematic study of \emph{weight} representations of Weyl algebras was undertaken in \cite{BBF} following the results of \cite{Bl}, \cite{Ba}, \cite{BO}, \cite{DGO}, \cite{BB}, among others. Generalizations of Weyl algebras --infinite rank, generalized, modular, quantum etc.-- were studied in many papers; see for example \cite{Ba}, \cite{M}, \cite{MT}, \cite{H}, \cite{FGM}, \cite{C}, \cite{FI}, \cite{AJ}, \cite{LMZ}.
Representations of interest in all the papers mentioned above are those on which a certain commutative subalgebra acts in a locally finite way. General \emph{Harish-Chandra} categories of such representations for a pair $(R, A)$ consisting of an associative algebra $A$ and a commutative subalgebra $R$ were introduced and studied in \cite{DFO1}. These categories play a very important role in representation theory, in particular in the study of Gelfand-Tsetlin modules for the Lie algebra $gl_n$ \cite{DFO2}, \cite{O}.
In this paper we consider a general framework for subcategories of Harish-Chandra categories consisting of \emph{relative weight} representations for a pair $(R, A)$ on which $R$ is diagonalizable.
The pairs $(R, A)$ of interest to us are of so-called \emph{strongly-free type}. For such pairs we describe projective and simple relative weight representations in Section 1. This approach allows us to address weight representations of the above-mentioned algebras in a systematic way. We hope that the technique developed here will be useful in the study of the representation theory of many types of algebras.
As an application we focus in this paper on the representations of quantum Weyl algebras, motivated by their strong connections with representations of quantized universal enveloping algebras --see \cite{LZZ}, \cite{FKZ}-- and quantum affine Lie algebras \cite{FHW}. We consider two versions of quantum Weyl algebras. The first one was introduced by G. Maltsiniotis (see \cite{M}) in his attempt to develop a quantum differential calculus on the standard deformation of the affine space. Though its definition makes it a very natural ring of skew differential operators on the quantum affine space (where classical partial derivatives are replaced by so-called Jackson operators, which are multiplicative analogues), it has the drawback of being a non-simple algebra, hence losing a major property of the classical Weyl algebra. This observation led to the search for a simple localisation of this algebra and was the original motivation for the works \cite{AJ} and \cite{J} of M. Akhavizadegan and D. Jordan. In these works, the authors introduced a slightly different deformation of the Weyl algebra, technically easier to handle, and then showed that the two quantum analogues actually share a common simple localisation which, in a sense, is a better quantum analogue of the classical Weyl algebra. These algebras depend on an $n$-tuple of deformation parameters $\bar{q}=(q_{1}, \ldots, q_n)\in(\Bbbk^\ast)^n$ and an $n \times n$ skew symmetric matrix $\Lambda=(\lambda_{ij})$ for some $n$.
Denote by $B_{n}^{\bar{q},\Lambda}$ the common simple localisation of both versions of quantum Weyl algebras above. For a certain commutative Laurent polynomial subalgebra $R^{\circ}$ of $B_{n}^{\bar{q},\Lambda}$, the pair ($R^{\circ}, B_{n}^{\bar{q},\Lambda}$) is of strongly-free type. We apply the above results to describe the projective $B_{n}^{\bar{q},\Lambda}$-modules in Section 3. In Section 4 we obtain one of our main results: a classification of simple relative weight $B_{n}^{\bar{q},\Lambda}$-modules for generic values of the deformation parameters and the special matrix $\Lambda$ with all entries equal to $1$ (Theorem \ref{thm-simple}). In order to deal with the general matrix $\Lambda$ we use Zhang twists \cite{Z} and show in Section 5 how the general case can be reduced to the special case when all entries of the matrix $\Lambda$ are equal to $1$. This leads in Section 6 to our main result: the classification and construction of simple weight $B_{n}^{\bar{q},\Lambda}$-modules. We note that the case of the rank two quantum Weyl algebra and arbitrary deformation parameters was considered in [\cite{H}, Theorem 6.14].
We fix a field $\Bbbk$. Whenever the symbol $\otimes$ is used, it stands for the tensor product over $\Bbbk$ of $\Bbbk$-vector spaces, while $M \otimes_A N$ will denote the tensor product of a right module $M$ and a left module $N$ over a ring $A$ and if $A$ is a $\Bbbk$-algebra, ${\rm Aut}_\Bbbk(A)$ denotes its group of $\Bbbk$-algebra automorphisms.
We will denote by $\mathbb{N}$ the set of non negative integers and by $\mathbb{N}^{\ast}$ the subset of positive integers.
\section*{Acknowledgments} The authors were supported by the MathAMSud grant "Representations, homology and Hopf algebra". V.F. was partially supported by the CNPq grant (304467/2017-0) and by the CNPq grant (200783/2018-1). A.S. was partially supported by PIP-CONICET 11220150100483CO and UBACyT 20020170100613BA. She is a research member of CONICET (Argentina).
\section{A general setting for weight categories} \label{section-weight-categories}
The aim of this section is to study the following context, which turns out to appear very often in representation theory. Consider a $\Bbbk$-algebra $A$ containing a commutative subalgebra $R$. One may consider (left) $A$-modules on which the subalgebra $R$ acts in a semisimple way. That is, $A$-modules $M$ which can be decomposed as a direct sum of subspaces indexed by characters of $R$ on which any element of $R$ acts by scalar multiplication by its image under the relevant character. Clearly, such {\em generalised eigenspaces} must be $R$-submodules and, if the connection between $A$ and $R$ is close enough, such a decomposition can provide a good understanding of the $A$-module structure of $M$.
We are particularly interested in the case in which $(A,R)$ is a {\em strongly-free} extension. Roughly speaking, this means that there is a subset ${\bf b}$ of $A$ which is a basis of $A$ both as a right and as a left $R$-module, with the additional property that the commutation relations between $R$ and the elements of ${\bf b}$ are controlled by automorphisms of the $\Bbbk$-algebra $R$.
In the first subsection, we recall useful results on weight modules over a commutative $\Bbbk$-algebra. These are modules on which $R$ acts in a semisimple way --in the above sense. In the second subsection, given a pair $(A,R)$ consisting of a $\Bbbk$-algebra $A$ and a commutative subalgebra $R$, we introduce the notion of relative weight $A$-module, that is, $A$-modules which are weight modules relative to $R$. In particular, we show that if the extension $R \subseteq A$ is {\em strongly-free}, then the natural induction from $R$ to $A$ preserves projective modules. This provides a natural process to construct $A$-modules which are projective in the category of relative weight $A$-modules, as we show in Subsection 3. Subsection 4 is then devoted to the study of the simple objects in the above category. It turns out that they are all quotients of the aforementioned projective objects. The last subsection is of a more technical nature: we focus on tensor products of strongly-free extensions and study how the projective and simple objects discussed above behave in this case.
\subsection{Weight modules over commutative algebras}
In this subsection, unless otherwise specified, $R$ denotes a commutative $\Bbbk$-algebra.
\begin{subdefinition} -- A {\em weight} of $R$ is a morphism of $\Bbbk$-algebras from $R$ to $\Bbbk$. The set of weights of $R$ will be denoted $\widehat{R}$. \end{subdefinition}
Let ${\rm Max}(R)$ be the set of maximal ideals of $R$. There is a map \[ \begin{array}{rcl} \widehat{R} & \longrightarrow & {\rm Max}(R) \cr \phi & \mapsto & \ker(\phi). \end{array} \] The link between a weight of $R$ and its kernel will be useful in the sequel. We make it clear in the following remark.
\begin{subremark} -- \rm \label{poids-comme-projecteurs} Recall the natural morphism of algebras $\Bbbk \longrightarrow R$, $\lambda \mapsto \lambda.1_R$. \begin{enumerate} \item Notice that, for all $r\in R$ and for all $\phi \in \widehat{R}$, the element $r-\phi(r).1_R$ belongs to $\ker(\phi)$. From this, it follows at once that \[ R = \Bbbk.1_R \oplus \ker(\phi). \] In particular, the map $\widehat{R} \longrightarrow {\rm Max}(R)$ mentioned above is injective. Of course, it need not be surjective. \item Let now $\sigma$ be an automorphism of the $\Bbbk$-algebra $R$. \begin{enumerate} \item It is straightforward to verify that $\ker(\phi)=\sigma(\ker(\phi\circ\sigma))$. \item It follows, using the first item, that \[ \sigma(\ker(\phi)) = \ker(\phi) \Longleftrightarrow \phi\circ\sigma=\phi. \] \end{enumerate} \end{enumerate} \end{subremark}
\begin{subremark} -- \rm \label{action-automorphismes-sur-poids} It is clear that there is a right action of the group ${\rm Aut}_\Bbbk(R)$ of $\Bbbk$-algebra automorphisms of $R$ on $\widehat{R}$ as follows~: \[ \begin{array}{rcl} \widehat{R} \times {\rm Aut}_\Bbbk(R) & \longrightarrow & \widehat{R} \cr (\phi,\sigma) & \mapsto & \phi\circ\sigma. \end{array} \] \end{subremark}
\begin{subexample} -- \rm The case where $R$ is a polynomial algebra or a Laurent polynomial algebra will be of particular importance. \begin{enumerate} \item Let $R=\Bbbk[z_1,\dots,z_n]$, with $n\in{\mathbb N}^\ast$. There is a commutative diagram \[ \xymatrix@!C{ \widehat{R} \ar@{->}[rr]^{1:1}\ar@{->}[dr]^{} & & \Bbbk^n\ar@{->}[dl]^{} \\
& {\rm Max}(R)& \\ } \] where the horizontal map sends $\phi\in\widehat{R}$ to the $n$-tuple $(\phi(z_1),\dots,\phi(z_n))$, while the left hand side map sends $\phi\in\widehat{R}$ to its kernel and the right hand side map sends $(\alpha_1,\dots,\alpha_n)$ to $\langle z_1-\alpha_1,\dots,z_n-\alpha_n \rangle$. \\ Clearly, the horizontal map is bijective, while the other two are injective (see Remark \ref{poids-comme-projecteurs}). \item There is a similar diagram for the Laurent polynomial algebra in the indeterminates $z_1,\dots,z_n$, $n\in{\mathbb N}^\ast$, replacing $\Bbbk$ by $\Bbbk^\ast$. \end{enumerate} \end{subexample}
\begin{subdefinition} -- Let $M$ be an $R$-module. Given $\phi\in\widehat{R}$, we say that $m\in M$ is an {\em element of weight} $\phi$ if, for all $r\in R$, $r.m = \phi(r)m$. We say that an element $m\in M$ is a {\em weight element} if there exists $\phi\in\widehat{R}$ such that $m$ is an element of weight $\phi$. Further, for $\phi\in\widehat{R}$, we set \[ M(\phi) = \{m\in M \,|\, r.m=\phi(r)m, \, \forall r\in R\}. \] \end{subdefinition}
\begin{subremark} -- \rm Let $f: M \longrightarrow N$ be a morphism of $R$-modules. For all $\phi\in\widehat{R}$, $f(M(\phi)) \subseteq N(\phi)$. \end{subremark}
\begin{subproposition} -- \label{somme-directe} Let $M$ be an $R$-module. \begin{enumerate} \item For all $\phi\in\widehat{R}$, $M(\phi)$ is an $R$-submodule of $M$. \item Let $n \in{\mathbb N}^\ast$. If $\phi_1,\dots,\phi_n$ are pairwise distinct elements of $\widehat{R}$ and, for $1 \le i \le n$, $m_i\in M(\phi_i)$, then $\sum_{1\le i \le n} m_i = 0$ if and only if $m_i=0$, for all $1 \le i \le n$. That is, the sum $\sum_{\phi\in\widehat{R}} M(\phi)$ is direct. \end{enumerate} \end{subproposition}
\noindent {\it Proof.$\;$} The proof of the first item is straightforward. Let $n \in{\mathbb N}^\ast$. We prove the second statement by induction on $n$, the case $n=1$ being trivial. Assume the result holds for some $n\in{\mathbb N}^\ast$. Suppose now that $\phi_1,\dots,\phi_{n+1}$ are pairwise distinct elements of $\widehat{R}$ and, for $1 \le i \le n+1$, consider elements $m_i\in M(\phi_i)$ such that $\sum_{1\le i \le {n+1}} m_i = 0$. For all $r \in R$, we have that \[0 = r.\left(\sum_{1\le i \le {n+1}} m_i\right) = \sum_{1\le i \le {n+1}} r.m_i = \sum_{1\le i \le {n+1}} \phi_i(r).m_i. \] It follows that, for all $r \in R$, \[ \sum_{1\le i \le n} (\phi_{n+1}(r)-\phi_i(r)).m_i = 0. \] The inductive hypothesis yields that, for all $r\in R$ and $1 \le i \le n$, $(\phi_{n+1}(r)-\phi_i(r)).m_i=0$. But, the $\phi_i$'s being pairwise distinct, we have $m_1= \dots = m_n=0$ and therefore $m_{n+1}=0$. This finishes the induction. The result is thus proved.\qed
\begin{subdefinition} -- Let $M$ be an $R$-module. For $\phi\in\widehat{R}$, we will call $M(\phi)$ the {\em weight subspace} of $M$ of weight $\phi$. Further, we say that $M$ is a {\em weight module} provided it is the (direct) sum of its weight subspaces \[ M = \bigoplus_{\phi\in\widehat{R}} M(\phi) \] We denote by $R-{\sf wMod}$ the full subcategory of $R - {\sf Mod}$ whose objects are the weight $R$-modules. \end{subdefinition}
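The following small example is included only as an illustration of the preceding definition.

\begin{subexample} -- \rm Let $R=\Bbbk[z]$ and $\alpha\in\Bbbk$, and let $\phi_\alpha\in\widehat{R}$ be the weight determined by $\phi_\alpha(z)=\alpha$. The quotient $M=R/\langle z-\alpha\rangle$ is a weight module: every element of $M$ has weight $\phi_\alpha$, so that $M=M(\phi_\alpha)$. By contrast, $R$ regarded as a module over itself is not a weight module: if $f\in R\setminus\{0\}$ satisfied $z.f=\phi(z)f$ for some $\phi\in\widehat{R}$, comparing degrees would give a contradiction, so that $R(\phi)=0$ for every $\phi\in\widehat{R}$. \end{subexample}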
\begin{subproposition} -- \label{sous-objets-et-poids} Let $M$ be an $R$-module and let $L$ be an $R$-submodule of $M$. \begin{enumerate} \item For all $\phi\in\widehat{R}$, $L(\phi)=L \cap M(\phi)$. \item If $M$ is a weight module, then so is $L$. \end{enumerate} \end{subproposition}
\noindent {\it Proof.$\;$} The first item is obvious. Next, suppose $m$ is a non zero element of $L$. There exists $n\in{\mathbb N}^\ast$ and pairwise distinct elements $\phi_1,\dots,\phi_n$ of $\widehat{R}$ such that \begin{equation}\label{m} m = \sum_{1\le i \le n} m_i \end{equation} with $m_i\in M(\phi_i)\setminus\{0\}$ for all $1 \le i \le n$. So, \begin{equation}\label{truc} \forall r\in R, \qquad r.m - \phi_n(r) m = \sum_{1 \le i \le n} (\phi_i(r)-\phi_n(r)) m_i \in L. \end{equation} Further, if $n > 1$, then for all $1 \le i \le n-1$, we may choose $r_i \in R$ such that $\phi_i(r_i) \neq \phi_n(r_i)$, and hence $r_i.m - \phi_n(r_i) m \in L\setminus\{0\}$ by Proposition \ref{somme-directe}.
We are now ready to prove the second item. Suppose $L$ is not a weight submodule. This means that there exists a non zero element in $L$ such that at least one of its weight summands is not in $L$. Among such elements, let us choose $m$ with a minimal number $n$ of non zero weight summands. Write $m$ as in (\ref{m}) in such a way that $m_n \notin L$. Notice in addition that we must have $n > 1$. Due to the minimality of $n$, the argument above based on (\ref{truc}) shows that, for all $1 \le i \le n-1$, $m_i \in L$. It follows that $m_n \in L$. This is a contradiction. Hence $L$ must be a weight submodule.\qed
Next, we consider the case where $R$ is a tensor product of commutative algebras, that is $R= R_1\otimes \dots \otimes R_s$ for some $s\in {\mathbb N}^\ast$, where $R_1,\dots,R_s$ are commutative $\Bbbk$-algebras.
\begin{subproposition} \label{weight-in-tpa}
Let $s\in{\mathbb N}^\ast$, let $R_1,\dots,R_s$ be commutative $\Bbbk$-algebras and consider the $\Bbbk$-algebra $R = R_1\otimes\dots\otimes R_s$. \begin{enumerate} \item For $1 \le i \le s$, let $\phi_i\in\widehat{R_i}$. The $\Bbbk$-linear map \[ \begin{array}{ccrcl} \phi_1 \dots \phi_s & : & R & \longrightarrow & \Bbbk \cr
& & x_1 \otimes\dots\otimes x_s & \mapsto & \phi_1(x_1) \dots \phi_s(x_s) \end{array} \] is a morphism of $\Bbbk$-algebras. Furthermore, we have that \[ \ker(\phi_1 \dots \phi_s) = W \] where \[ W=\ker(\phi_1) \otimes R_2 \otimes\dots\otimes R_s + R_1 \otimes \ker(\phi_2) \otimes\dots\otimes R_s + \dots + R_1 \otimes\dots\otimes R_{s-1} \otimes \ker(\phi_s). \] \item The map \[ \begin{array}{ccrcl}
& & \widehat{R_1} \times\dots\times \widehat{R_s} & \longrightarrow & \widehat{R} \cr
& & (\phi_1 , \dots , \phi_s) & \mapsto & \phi_1 \dots \phi_s \end{array} \] is bijective. \end{enumerate} \end{subproposition}
\noindent {\it Proof.$\;$} In order to prove the first statement, set $\phi=\phi_1 \dots \phi_s$. It is easy to see that $W \subseteq \ker(\phi)$. According to Remark \ref{poids-comme-projecteurs}, given $1 \le i \le s$, $R_i = \Bbbk.1_{R_i} \oplus \ker(\phi_i)$. Hence, $R = \Bbbk.1_R \oplus W$. It follows that if $r$ is an element of $R$, then there exist $\lambda\in\Bbbk$ and $w \in W$ such that $r= \lambda.1_R + w$, and, since $W \subseteq \ker(\phi)$, we have $\lambda=\phi(r)$. It is now clear that if $r\in\ker(\phi)$, then $r \in W$.\\ The previous point gives rise to a map \[ \begin{array}{ccrcl}
& & \widehat{R_1} \times\dots\times \widehat{R_s} & \longrightarrow & \widehat{R} \cr
& & (\phi_1 , \dots , \phi_s) & \mapsto & \phi_1 \dots \phi_s \end{array} . \] Now, for $1 \le i \le s$, identify $R_i$ with its canonical image in $R$ under the injective algebra morphism $r \mapsto 1 \otimes\dots\otimes 1\otimes r \otimes 1 \otimes\dots\otimes 1$ ($r$ in $i$-th position). We may define the map \[ \begin{array}{ccrcl}
& & \widehat{R} & \longrightarrow & \widehat{R_1} \times\dots\times \widehat{R_s} \cr
& & \phi & \mapsto & (\phi_{|R_1}, \dots ,\phi_{|R_s}). \end{array} \] The two preceding maps are inverse to each other, hence bijective. \qed
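For instance, taking $R_i = \Bbbk[z_i]$ for $1 \le i \le s$, so that $R \cong \Bbbk[z_1,\dots,z_s]$, the above bijection recovers the identification of $\widehat{\Bbbk[z_1,\dots,z_s]}$ with $\Bbbk^s$ by evaluation of the indeterminates, as in the example above.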
\subsection{Relative weight modules} \label{sous-section-rwm}
In this subsection, we fix a pair $(R,A)$ where $A$ is a $\Bbbk$-algebra and $R$ a commutative $\Bbbk$-subalgebra of $A$.
Any $A$-module $M$ is an $R$-module by restriction of scalars. Hence, if $\phi\in\widehat{R}$, we may speak of elements of $M$ of weight $\phi$.
Let $A-{\sf Mod}$ be the category of left $A$-modules. We denote by $A-{\sf wMod}$ the full subcategory of $A-{\sf Mod}$ whose objects are weight $R$-modules. We call such modules \emph{relative weight} modules. We have the following restriction of scalars functors
\[ \xymatrix@!C{ A-{\sf Mod} \ar@{->}[rrr]^{\underline{\rm Res}} &&& R-{\sf Mod}\\ & & & \\ A-{\sf wMod} \ar@{->}[uu]^{can.} \ar@{->}[rrr]^{\underline{\rm Res}} &&& R-{\sf wMod}\ar@{->}[uu]^{can.}\\ } \] Of course, there is also an induction functor $\underline{\rm Ind} = A \otimes_R -$ from $R-{\sf Mod}$ to $A-{\sf Mod}$. In order to be able to induce weight modules from $R$ to $A$, we need the pair $(R,A)$ to satisfy some additional hypotheses.
\begin{subdefinition} -- \label{definition-strongly-free} We say that the pair $(R,A)$ is of {\em strongly-free type} if there exists a subset ${\bf b}$ of $A$ such that: \begin{enumerate} \item $1\in {\bf b}$; \item ${\bf b}$ is a basis of $A$ as a left $R$-module; \item $\forall b \in {\bf b}$, there exists an algebra automorphism $\sigma_b$ of $R$ such that, $\forall r\in R$, $rb=b\sigma_b(r)$. \end{enumerate} \end{subdefinition}
\begin{subremark} -- \rm If the pair $(R,A)$ is of strongly-free type, then there exists a subset of $A$ which is a basis of $A$ both as a left and as a right $R$-module. \end{subremark}
\begin{sublemma} -- \label{induced-of-weight} Assume the pair $(R,A)$ is of strongly free type. Let $\phi\in\widehat{R}$ and suppose $M$ is an $R$-module. Then, \begin{enumerate} \item for all $m\in M(\phi)$, for all $b\in{\bf b}$, $b \otimes_R m \in (\underline{\rm Ind}(M))(\phi\circ\sigma_b)$; \item if $M$ is a weight $R$-module, then $\underline{\rm Ind}(M)$ is an object of $A-{\sf wMod}$. \end{enumerate} \end{sublemma}
\noindent {\it Proof.$\;$} For the first item, given $m\in M(\phi)$, $b\in{\bf b}$ and $r\in R$, we have \[ r.(b \otimes_R m) = (rb) \otimes_R m = (b\sigma_b(r)) \otimes_R m = b \otimes_R \sigma_b(r).m = (\phi\circ\sigma_b)(r)\, b \otimes_R m. \] The second item follows from the first since ${\bf b}$ generates $A$ as a left $R$-module.\qed\\
As a consequence of Lemma \ref{induced-of-weight},
there is a commutative diagram for the induction functors: \[ \xymatrix@!C{ A-{\sf Mod} &&& \ar@{->}[lll]^{\underline{\rm Ind}} R-{\sf Mod}\\ & & & \\ A-{\sf wMod} \ar@{->}[uu]^{can.} &&& R-{\sf wMod} \ar@{->}[lll]^{\underline{\rm Ind}} \ar@{->}[uu]^{can.}\\ } \]
\begin{subcorollary} -- \label{paire-adjointe} Assume the pair $(R,A)$ is of strongly free type. The pair $(\underline{\rm Ind},\underline{\rm Res})$ is an adjoint pair of functors between $A-{\sf wMod}$ and $R-{\sf wMod}$. In particular, if $M$ is a relative weight $A$-module and $N$ is a weight $R$-module, then there is an isomorphism of abelian groups as follows: \[ {\sf Hom}_{A-{\sf wMod}}(\underline{\rm Ind}(N), M) \cong {\sf Hom}_{R-{\sf wMod}}(N,\underline{\rm Res}(M)). \] \end{subcorollary}
\noindent {\it Proof.$\;$} It is well known that the pair $(\underline{\rm Ind},\underline{\rm Res})$ is an adjoint pair of functors between $A-{\sf Mod}$ and $R-{\sf Mod}$. The proof follows immediately from this fact. \qed
\begin{subcorollary} -- \label{ind-et-projectifs} Assume the pair $(R,A)$ is of strongly free type. The functor $\underline{\rm Ind} \, : \, R-{\sf wMod} \longrightarrow A-{\sf wMod}$ preserves projective objects. \end{subcorollary}
\noindent {\it Proof.$\;$} This is an immediate consequence of Corollary \ref{paire-adjointe} since $\underline{\rm Res}$ is exact.\qed
\subsection{A class of projective weight $A$-modules}
Let $R$ be a commutative $\Bbbk$-algebra. Consider $\phi\in\widehat{R}$. The $R$-module $R/\ker(\phi)$ is clearly an object of $R-{\sf wMod}$. More precisely, we have that any element of this module is of weight $\phi$, that is $R/\ker(\phi)=\left(R/\ker(\phi)\right)(\phi)$. In the sequel, we will denote it by $\Bbbk_\phi$.
\begin{sublemma} -- \label{kn-projectif} Given $\phi\in\widehat{R}$, the module $\Bbbk_\phi$ is a projective object in the category $R-{\sf wMod}$. \end{sublemma}
\noindent {\it Proof.$\;$} Consider the diagram \[ \xymatrix@!C{
& \Bbbk_\phi \ar@{->}[d]^{f} & \\ N_1 \ar@{->}[r]^{\pi} & N_2 \ar@{->}[r] & 0\\ } \] in $R-{\sf wMod}$ with exact horizontal row. Using Prop. \ref{somme-directe}, we get that there is an element $z$ of weight $\phi$ in $N_1$ such that $\pi(z)=f(\overline{1_R})$. Since $\Bbbk_\phi$ is a one dimensional $\Bbbk$-vector space, there is a unique $\Bbbk$-linear map $g \, : \, \Bbbk_\phi \longrightarrow N_1$ sending $\overline{1_R}$ to $z$. Thus, the diagram \[ \xymatrix@!C{
& \Bbbk_\phi \ar@{->}[d]^{f}\ar@{->}[dl]^{g} & \\ N_1 \ar@{->}[r]^{\pi} & N_2 \ar@{->}[r] & 0\\ } \] is commutative. It is easy to see that $g$ is actually a map in $R-{\sf wMod}$, since $z$ has weight $\phi$. (Here, we use Remark \ref{poids-comme-projecteurs}, point 1.) Hence, $\Bbbk_\phi$ is projective. \qed\\
For the rest of this section, let $(R,A)$ be a pair as in the introduction to section \ref{sous-section-rwm}, assume $(R,A)$ is of strongly-free type and let ${\bf b}$ be a basis of the $R$-module $A$ as in Definition \ref{definition-strongly-free}.\\
Let $\phi\in\widehat{R}$. Set $P_\phi = \underline{\rm Ind}(\Bbbk_\phi)$. We already know that $P_\phi$ is an object of $A-{\sf wMod}$.
\begin{subcorollary} -- For any $\phi\in\widehat{R}$, the module $P_\phi$ is a projective object in the category $A-{\sf wMod}$. \end{subcorollary}
\noindent {\it Proof.$\;$} This is immediate by Lemma \ref{kn-projectif} and Corollary \ref{ind-et-projectifs}. \qed\\
Now we collect some properties of the $A$-modules $P_\phi$, $\phi\in\widehat{R}$, that will be useful later.
\begin{subremark} -- \rm\label{structure-P} {\bf Structure of $P_\phi$, $\phi\in\widehat{R}$}\\ Recall that we assume $(R,A)$ is of strongly-free type and let ${\bf b}$ be a basis of the $R$-module $A$ as in Definition \ref{definition-strongly-free}. Fix $\phi\in\widehat{R}$. For all $r\in R$, we put $\bar{r} = r + \ker(\phi) \in\Bbbk_\phi$.
\begin{enumerate} \item Fix $b\in{\bf b}$. We have that $Rb=bR \subseteq A$. Further, $Rb$ is a direct summand of $A$ both as a left and as a right $R$-submodule. Now, consider the following subset of $P_\phi$: \[ b \otimes_R \Bbbk_\phi :=\{b\otimes_R \bar{r}, \, r\in R \} \subseteq P_\phi. \] Clearly, $b \otimes_R \Bbbk_\phi$ is the left $R$-submodule of $P_\phi$ generated by $b \otimes_R \bar{1}$ as well as the $\Bbbk$-subspace of $P_\phi$ generated by $b \otimes_R \bar{1}$ (see Remark \ref{poids-comme-projecteurs}). On the other hand, it is the image of the natural left $R$-linear map \[ bR \otimes_R \Bbbk_\phi \stackrel{\subseteq \otimes_R {\rm id}}{\longrightarrow} P_\phi . \] But, $bR$ being a direct summand of $A$ as a right $R$-submodule, the above map must be injective. Hence, it identifies $bR \otimes_R \Bbbk_\phi$ and $b \otimes_R \Bbbk_\phi$ as left $R$-modules.
In addition, there are obvious $\Bbbk$-linear maps as follows: \begin{equation}\label{2-applications} \begin{array}{rcl} \Bbbk_\phi & \longrightarrow & bR \otimes_R \Bbbk_\phi \cr \bar{s} & \mapsto & b \otimes_R \bar{s} \end{array}, \quad \begin{array}{rcl} bR \otimes_R \Bbbk_\phi & \longrightarrow & \Bbbk_\phi \cr br \otimes_R \bar{s} & \mapsto & r\bar{s} \end{array} \end{equation} which are inverse to each other. As a consequence, $bR \otimes_R \Bbbk_\phi$ is a one dimensional $\Bbbk$-vector space with basis $b \otimes_R \bar{1}$.
At this stage, it is interesting to notice that the maps (\ref{2-applications}) are not left $R$-linear. However, if we denote by $^{\sigma_b}\Bbbk_\phi$ the left $R$ module obtained by twisting the left action of $R$ on $\Bbbk_\phi$ by the automorphism $\sigma_b$, then the maps (\ref{2-applications}) induce a left $R$-linear isomorphism \[ bR \otimes_R \Bbbk_\phi \longrightarrow \, ^{\sigma_b}\Bbbk_\phi. \] \item We are now in position to give a very explicit description of $P_\phi$. Indeed, by standard results, we have an isomorphism of left $R$-modules as follows: \[ P_\phi = A \otimes_R \Bbbk_\phi = \left( \bigoplus_{b\in{\bf b}} bR \right) \otimes_R \Bbbk_\phi \cong \bigoplus_{b\in{\bf b}} bR \otimes_R \Bbbk_\phi. \] Thus, taking into account the first point, we end up with a $\Bbbk$-vector space isomorphism \[ P_\phi \cong \bigoplus_{b\in{\bf b}} \Bbbk . b \otimes_R \bar{1}, \] where, for all $b\in{\bf b}$, $\dim_\Bbbk (\Bbbk . b \otimes_R \bar{1})=1$. In particular, $P_\phi$ is nonzero and $\{b \otimes_R \bar{1}, b\in {\bf b}\}$ is a basis of $P_\phi$ as a $\Bbbk$-vector space. \item The weight structure of $P_\phi$, however, is not yet completely clear. Indeed, we have that, \[ \forall b\in{\bf b}, \quad \Bbbk.b \otimes_R \bar{1} \subseteq (P_\phi)(\phi\circ\sigma_b). \] But, the above inclusion may be strict. Actually, it is the case if and only if there exists $b'\in{\bf b}$, $b\neq b'$, such that $\phi\circ\sigma_b=\phi\circ\sigma_{b'}$. We will come back to this problem in Remark \ref{generic-P} \end{enumerate} \end{subremark}
\subsection{Simple objects in $A - {\sf wMod}$}
All along this section, we fix a pair $(R,A)$ as in the introduction to section \ref{sous-section-rwm}, assume $(R,A)$ is of strongly-free type and let ${\bf b}$ be a basis of the $R$-module $A$ as in Definition \ref{definition-strongly-free}. Our aim is to study the simple objects of $A - {\sf wMod}$.
\begin{subdefinition} -- An object $S$ in $A-{\sf wMod}$ is {\em simple} provided it is non zero and it has no nontrivial weight $A$-submodule. \end{subdefinition}
\begin{subremark} -- \rm By Proposition \ref{sous-objets-et-poids}, an object $M$ in $A - {\sf wMod}$ is simple if and only if it is simple as an object of $A - {\sf Mod}$. \end{subremark}
The following result shows that any simple object in $A - {\sf wMod}$ arises as a simple quotient of some $P_\phi$, with $\phi\in\widehat{R}$.
\begin{subproposition} -- \label{ubiquite-des-P} Given a simple object $S$ of $A-{\sf wMod}$, there exists $\phi\in\widehat{R}$ such that $S$ is isomorphic in $A-{\sf wMod}$ to a simple quotient of $P_\phi$. \end{subproposition}
\noindent {\it Proof.$\;$} Let $S$ be a simple object of $A-{\sf wMod}$. Since $S$ is not zero, there exists $\phi\in\widehat{R}$ such that $S(\phi)\neq 0$. Let $x$ be a non zero element in $S(\phi)$. There exists a non zero $R$-linear map $R \longrightarrow S$ such that $1 \mapsto x \in S(\phi)$. But, since $x \in S(\phi)$, the above map induces a nonzero $R$-linear map $\Bbbk_\phi \longrightarrow S$. This shows that ${\sf Hom}_{R-{\sf wMod}}(\Bbbk_\phi,S) \neq 0$. It follows from Corollary \ref{paire-adjointe} that ${\sf Hom}_{A-{\sf wMod}}(P_\phi,S) \neq 0$. Hence, there exists a map $\varphi \, : \, P_\phi \longrightarrow S$ in $A-{\sf wMod}$ which is non zero. But, of course, the image of $\varphi$ is a non zero subobject of $S$, so $\varphi$ must be surjective. The rest is clear. \qed\\
Proposition \ref{ubiquite-des-P} shows the importance of understanding the nonzero simple quotients of the modules $P_\phi$. The following remark deals with the easiest case.
\begin{subremark} -- \rm \label{generic-P} Recall the hypotheses of the beginning of this section. Notice that, clearly, $\sigma_1={\rm id}$. Fix $\phi\in\widehat{R}$. Assume, further, that the map ${\bf b} \longrightarrow \widehat{R}$, given by $b \mapsto \phi\circ\sigma_b$ is injective (see Remark \ref{structure-P}). It follows from Remark \ref{structure-P} that the weight subspace of $P_\phi$ of weight $\phi$ is: \[ (P_\phi)(\phi) = \Bbbk. 1 \otimes_R (1+\ker(\phi)). \] Let $M$ be a strict weight $A$-submodule of $P_\phi$. Since $P_\phi$ is generated by its weight subspace $(P_\phi)(\phi)$ of weight $\phi$, and since the latter is a one dimensional $\Bbbk$-vector space, we must have $(P_\phi)(\phi) \cap M = \{0\}$, which leads to: \[ M \subseteq \bigoplus_{\psi\neq\phi} (P_\phi)(\psi). \] It follows that the sum of all strict submodules of $P_\phi$ in the category $A-{\sf wMod}$ is again a strict submodule. That is, there is a strict submodule of $P_\phi$ in $A-{\sf wMod}$ maximum among strict submodules of $P_\phi$ in $A-{\sf wMod}$. We denote this maximum submodule by $N_\phi$. Of course, the corresponding quotient \[ S_\phi := P_\phi/N_\phi \] is a simple object in $A-{\sf wMod}$ and it is the unique simple quotient of $P_\phi$ in $A-{\sf wMod}$. Note that the above applies to $P_\phi$ seen as an object of $A-{\sf Mod}$, by Proposition \ref{sous-objets-et-poids}. \end{subremark}
\subsection{Tensor product of strongly-free extensions} \label{TP-of-SFE}
Let $n\in{\mathbb N}^\ast$. For each $i$, $1 \le i \le n$, let $(R_i,A_i)$ be a pair of strongly-free type and let ${\bf b}_i$ be a subset of $A_i$ as in Definition \ref{definition-strongly-free}. Let \[ R = \bigotimes_{1 \le i \le n} R_i, \qquad A = \bigotimes_{1 \le i \le n} A_i \] and identify $R$ canonically with its image in $A$. (In the sequel, we will make extensive use of Proposition \ref{weight-in-tpa}.)
Suppose in addition that, for $1 \le i \le n$, $M_i$ is an object of $A_i-{\sf wMod}$. Clearly, $\bigotimes_{1 \le i \le n} M_i$ is an object of $A-{\sf wMod}$.
\begin{sublemma} -- The pair $(R,A)$ is of strongly-free type. \end{sublemma}
\noindent {\it Proof.$\;$} It is clear that the subset ${\bf b} = \bigotimes_{1 \le i \le n} {\bf b}_i$ of $A$ satisfies the conditions of Definition \ref{definition-strongly-free}. \qed
\begin{subremark} -- \rm \label{corps-residuel-et-pt} Retain the notation above and, for all $1 \le i \le n$, let $\phi_i$ be an element of $\widehat{R_i}$. Set $\phi=\phi_1 \dots \phi_n$. Recall the description of $\ker(\phi)$ given in Proposition \ref{weight-in-tpa}. There is an obvious surjective morphism \[ R \longrightarrow \bigotimes_{1 \le i \le n} \Bbbk_{\phi_i} \] in $R-{\sf Mod}$ whose kernel contains $\ker(\phi)$. Hence, there is a surjective morphism in $R-{\sf Mod}$~: \[ \Bbbk_\phi \longrightarrow \bigotimes_{1 \le i \le n} \Bbbk_{\phi_i} \] which must be an isomorphism since its source and target are both one dimensional vector spaces over $\Bbbk$. \end{subremark}
The following theorem provides a description of the projective objects $P_\phi$ of $A-{\sf wMod}$ as tensor products of projectives.
\begin{subtheorem} -- \label{iso-P-tenseur-P} Using the previous notation, let $\phi_i$ be an element of $\widehat{R_i}$ for all $1 \le i \le n$, and put $\phi=\phi_1 \dots \phi_n$. There is an isomorphism \[ P_\phi \cong \bigotimes_{1\le i \le n} P_{\phi_i} \] in $A-{\sf wMod}$. \end{subtheorem}
\noindent {\it Proof.$\;$} It is enough to show that there is an isomorphism in $A-{\sf Mod}$: \[ P_\phi \cong \bigotimes_{1\le i \le n} P_{\phi_i}. \] This is a standard fact; indeed, by classical results, we have an isomorphism of $\Bbbk$-vector spaces \[ \begin{array}{rcl} \bigotimes_{1 \le i \le n} P_{\phi_i} & = & \bigotimes_{1 \le i \le n} \left( A_i \otimes_{R_i} \Bbbk_{\phi_i}\right) \\ & \cong & \left(\bigotimes_{1 \le i \le n} A_i \right) \otimes_R \left( \bigotimes_{1 \le i \le n}\Bbbk_{\phi_i}\right) \\ & \cong & \left(\bigotimes_{1 \le i \le n} A_i \right) \otimes_R \Bbbk_\phi \\ & \cong & A \otimes_R \Bbbk_\phi \\ & = & P_\phi \end{array} \] where the isomorphism of the second row is given by \[ \otimes_{1 \le i \le n} \left( a_i \otimes_{R_i} k_i\right) \mapsto \left(\otimes_{1 \le i \le n} a_i \right) \otimes_R \left(\otimes_{1 \le i \le n} k_i \right) \] and the isomorphism of the third row is given by Remark \ref{corps-residuel-et-pt}.\qed\\
Now we analyse simple modules in $A-{\sf wMod}$.
\begin{sublemma} -- \label{shurian-simple} If, for $1 \le i \le n$, $S_i$ is a simple object in $A_i -{\sf wMod}$ such that $\dim_\Bbbk (S_i)(\phi)\le 1$ for any $\phi \in \widehat{R_i}$, then $S_1\otimes \ldots \otimes S_n$ is a simple object in $A -{\sf wMod}$. \end{sublemma}
\noindent {\it Proof.$\;$} Suppose that $\dim_\Bbbk (S_i)(\phi)\le 1$ for any $\phi \in \widehat{R_i}$, $i=1, \ldots, n$. It follows from Proposition \ref{weight-in-tpa} that the weight spaces of the $A$-module $S_{1} \otimes\dots\otimes S_{n}$ are also at most one dimensional $\Bbbk$-vector spaces, generated by elementary tensors. Now, let $W$ be a nonzero (weight) submodule of $S_{1} \otimes\dots\otimes S_{n}$. Then, $W$ contains a nonzero weight element which, by the above, must be an elementary tensor. Let $w = w_1 \otimes\dots\otimes w_n$ be such an element. For all $i$ such that $1 \le i \le n$, $w_i$ is a non zero element of $S_{i}$ which, hence, generates the $A_i$-module $S_{i}$. Therefore, $w$ generates the $A$-module $S_{1} \otimes\dots\otimes S_{n}$ and $W=S_{1} \otimes\dots\otimes S_{n}$. The statement is proved. \qed
\begin{subremark} -- \rm If we suppose that ${\rm End}_{A_i}(S)=\Bbbk$ for any simple object $S$ in $A_i -{\sf wMod}$, $i=1, \ldots, n-1$, then for any collection of simple objects $S_i$ in $A_i -{\sf wMod}$, $i=1, \ldots, n$, the $A$-module $S_1\otimes \ldots \otimes S_n$ is simple. Indeed, under this hypothesis, we have that $A_i$ is Schurian and tensor-simple \cite{B}, which implies the statement. \end{subremark}
Let $S$ be a simple object in $A-{\sf wMod}$. By Proposition \ref{ubiquite-des-P}, there exists $\phi\in\widehat{R}$ such that $S$ is a quotient of $P_{\phi}$, that is, $S\simeq P_{\phi}/N$ for some submodule $N$ of $P_{\phi}$. We have the following result.
\begin{subcorollary} -- \label{iso-L-tenseur-L} Let $\phi\in\widehat{R}$ be such that all weight subspaces of $P_{\phi}$ have dimension at most $1$. For $i=1,\dots,n$, let $\phi_i \in \widehat{R_i}$ be such that $\phi=\phi_1 \dots \phi_n$. \begin{enumerate} \item The map ${\bf b} \to \widehat{R}$ given by $b \mapsto \phi\circ \sigma_b$ is injective. Further, the module $P_{\phi}$ has a unique maximal strict submodule $N_{\phi}$ and a unique simple quotient $S_\phi = P_{\phi}/N_{\phi}$. \item For all $1 \le i \le n$, the map ${\bf b}_i \to \widehat{R_i}$ given by $b \mapsto \phi_i\circ \sigma_b$ is injective. Further, the module $P_{\phi_i}$ has a unique maximal strict submodule $N_{\phi_i}$ and a unique simple quotient $S_{\phi_i} = P_{\phi_i}/N_{\phi_i}$. \item There is an isomorphism \[ S_\phi \cong S_{\phi_1} \otimes \dots \otimes S_{\phi_n} \] in $A - {\sf wMod}$. \end{enumerate}
\end{subcorollary}
\noindent {\it Proof.$\;$}
We start by recalling that such $\phi_i$'s exist due to Proposition \ref{weight-in-tpa}. \begin{enumerate} \item This follows immediately from Remarks \ref{structure-P} and \ref{generic-P}. \item We know from Theorem \ref{iso-P-tenseur-P} that there is an isomorphism $\iota: P_{\phi}\to P_{\phi_1}\otimes \dots \otimes P_{\phi_n}$. Hence, since all weight spaces of $P_{\phi}$ have dimension at most $1$, by Proposition \ref{weight-in-tpa}, the same must hold for $P_{\phi_i}$, for $i=1,\dots, n$. The statement follows as in the first point. \item Observe that, for all $i$, $S_{\phi_i}$ must also have weight spaces of dimension at most one. Thus, the hypotheses of Lemma \ref{shurian-simple} are satisfied, so that $S_{\phi_1}\otimes \dots \otimes S_{\phi_n}$ is a simple object in $A-{\sf wMod}$.
Now, consider the following submodule of $P_{\phi_1}\otimes \dots \otimes P_{\phi_n}$: \[ X = N_{\phi_1} \otimes P_{\phi_2} \otimes\dots\otimes P_{\phi_n} + P_{\phi_1} \otimes N_{\phi_2} \otimes\dots\otimes P_{\phi_n} + \dots + P_{\phi_1} \otimes\dots\otimes P_{\phi_{n-1}} \otimes N_{\phi_n}. \] We have an obvious surjective map \[ P_{\phi}\simeq P_{\phi_1}\otimes \dots \otimes P_{\phi_n} \twoheadrightarrow S_{\phi_1}\otimes \dots \otimes S_{\phi_n} \] whose kernel contains $\iota^{-1}(X)$, hence a surjective map \[ P_{\phi}/\iota^{-1}(X) \twoheadrightarrow S_{\phi_1}\otimes \dots \otimes S_{\phi_n}. \] We now proceed to show that the latter must be an isomorphism. Indeed, taking Proposition \ref{weight-in-tpa} into account, the hypotheses on the dimensions of weight spaces prove that the weights of $S_{\phi_1}\otimes \dots \otimes S_{\phi_n}$ are the products $\psi_1\dots \psi_n$, where $\psi_i$ is a weight of $P_{\phi_i}$ that is not a weight of $N_{\phi_i}$. Analogously, the weights of $X$ are the products $\psi_1\dots \psi_n$, where $\psi_i$ is a weight of $P_{\phi_i}$ and, for some $j\in \{1, \dots, n\}$, $\psi_j$ is a weight of $N_{\phi_j}$.
It follows that both $P_{\phi}/\iota^{-1}(X)$ and $S_{\phi_1}\otimes \dots \otimes S_{\phi_n}$ have the same set of weights. Since, in addition, these modules have one dimensional weight spaces, the surjective map must be an isomorphism. But, as noted above, $S_{\phi_1}\otimes \dots \otimes S_{\phi_n}$ is simple, which implies $N_{\phi} = \iota^{-1}(X)$. This proves the statement.\qed \end{enumerate}
\section{Quantum Weyl algebras} \label{section-qwa}
In this section we will consider two versions of quantum Weyl algebras. For a short account of the description of these quantum Weyl algebras as rings of skew differential operators, see Section \ref{classification}. In the first subsection, we recall Maltsiniotis' quantum Weyl algebra. In the second, we introduce the version of Akhavizadegan and Jordan. In the last subsection, we recall the isomorphism between suitable localisations of these algebras.
For this, we need to introduce quantum integers. The reader is referred to \cite{AJ} and \cite{J} for a detailed exposition. Some complements may be found in \cite{R1}. \\
Let $q\in\Bbbk^\ast$. For any integer $i\in{\mathbb N}^\ast$, we let \[ (i)_q = 1 +q + \dots + q^{i-1}, \] and extend this definition by $(0)_q=0$. Further, for $j\in{\mathbb Z}\setminus{\mathbb N}$, we set \[ (j)_q = -q^j(-j)_q = -\left(q^j + q^{j+1} + \dots + q^{-1}\right). \] These scalars will be called {\em quantum integers associated to} $q$ or, for simplicity, $q$-{\em integers}.
\begin{remark} -- \rm \begin{enumerate} \item If $q=1$, then, for all $i\in{\mathbb Z}$, $(i)_q=i$. \item If $q\neq 1$, then, \[ \forall i\in{\mathbb Z} \quad\quad (i)_q = \displaystyle\frac{1-q^i}{1-q} . \] \end{enumerate} \end{remark}
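For instance, with $q\neq 1$ and $j=-2$, the above definitions give \[ (-2)_q = -q^{-2}(2)_q = -q^{-2}(1+q) = -\left(q^{-2}+q^{-1}\right) = \displaystyle\frac{1-q^{-2}}{1-q}, \] in accordance with the closed formula of the remark above.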
Recall that an $n \times n$ matrix $(\lambda_{ij})$ with entries in $\Bbbk^\ast$ is called skew-symmetric provided $\lambda_{ii}=1$ and $\lambda_{ij}=\lambda_{ji}^{-1}$, for all $1 \le i,j \le n$.
\subsection{The quantum Weyl algebra of G.~Maltsiniotis}
\begin{subdefinition} -- Let $n\in{\mathbb N}^\ast$. To any $n$-tuple $\bar{q}=(q_{1} ,...,q_n)\in(\Bbbk^\ast)^n$ and any $n \times n$ skew-symmetric matrix $\Lambda=(\lambda_{ij}) \in M_{n}(\Bbbk)$, we associate the $\Bbbk$-algebra $A_n^{\bar{q},\Lambda}$ with generators $y_1,...,y_n,x_1 ,...,x_n$ and relations: \begin{equation}\label{awqm-1} \begin{array}{lll} \forall (i,j)\in{\mathbb N}^2, \; 1 \le i<j\le n, & x_{i} x_{j}=\lambda _{ij} q_{i} x_{j} x_{i}, & y_{i} y_{j}=\lambda _{ij} y_{j} y_{i}, \cr & x_{i} y_{j}=\lambda _{ij}^{-1} y_{j} x_{i}, & y_{i} x_{j}=\lambda _{ij}^{-1} q_{i}^{-1} x_{j} y_{i}. \end{array} \end{equation} \begin{equation}\label{awqm-2} \begin{array}{l} \forall i\in{\mathbb N}, \; 1 \le i \le n,\; x_{i} y_{i} -q_{i}y_{i} x_{i} =1+\sum _{j=1}^{i-1} (q_{j}-1)y_{j} x_{j}. \end{array} \end{equation} Further, we put $z_0=1$ and, for $1 \le i \le n$, \[ z_i =x_iy_i - y_ix_i. \] \end{subdefinition}
The family $\{z_1,\dots,z_n\}$ will play a central role in computations.
\begin{subremark} -- \rm It is easy to see that for all $i$, $1\le i \le n$: \begin{equation} \begin{array}{l} \qquad z_{i}=1+\sum_{j=1}^{i} (q_{j}-1)y_{j} x_{j} \qquad\mbox{and}\qquad z_{i}=z_{i-1}+(q_{i}-1)y_{i}x_{i}. \end{array} \end{equation} As a consequence, relations (\ref{awqm-2}) read: \\ \begin{equation} \begin{array}{l} \forall i, 1\le i \le n, \;\; x_{i} y_{i} -q_{i} y_{i} x_{i} =z_{i-1} \end{array} \end{equation} \end{subremark}
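For instance, taking $i=1$ in relation (\ref{awqm-2}) gives $x_1y_1 - q_1y_1x_1 = 1$, so that $z_1 = x_1y_1 - y_1x_1 = 1 + (q_1-1)y_1x_1$, which is the case $i=1$ of the formulas above.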
\begin{subproposition} -- \label{maltsiniotis} The algebra $A_n^{\bar{q},\Lambda}$ is a noetherian integral domain. In addition, the family $\{y_{1}^{j_{1}}x_{1}^{i_{1}}...y_{n}^{j_{n}}x_{n}^{i_{n}},\; (j_{1},i_{1},...,j_{n},i_{n}) \in {\mathbb N}^{2n}\}$ is a PBW basis of $A_n^{\bar{q},\Lambda}$. \end{subproposition}
\noindent {\it Proof.$\;$} See \cite[Remarque 2.1.3]{R1} and \cite[1.9]{AJ}. \qed
\begin{subproposition}\label{maltsiniotis1} -- For all $1 \le i \le n$, $z_{i}$ is a normal element of $A_n^{\bar{q},\Lambda}$. More precisely, we have: \begin{itemize} \item for all $i, j$, $1 \le i < j \le n, \qquad z_{i}x_{j}=x_{j}z_{i} \qquad \mbox{and}\qquad z_{i}y_{j}=y_{j}z_{i},$ \item for all $i, j$, $1 \le j \le i \le n, \qquad z_{i}x_{j}=q_{j}^{-1}x_{j}z_{i} \qquad \mbox{and} \qquad z_{i}y_{j}=q_{j}y_{j}z_{i},$ \item for all $i, j$, $1 \le i,j \le n, \qquad z_{i}z_{j}=z_{j}z_{i}.$ \end{itemize} \end{subproposition}
\noindent {\it Proof.$\;$} See \cite[p.~285]{J}. \qed\\
As pointed out above, the multiplicative set generated by the elements $z_i$, $1 \le i \le n$, consists of normal elements. It follows that it is an Ore set and that we may form the corresponding localisation.
\begin{subdefinition} -- We denote by $B_{n}^{\bar{q},\Lambda}$ the localisation of $A_{n}^{\bar{q}, \Lambda}$ at the multiplicative set generated by the elements $z_i$, $1 \le i \le n$. \end{subdefinition}
Since $A_{n}^{\bar{q},\Lambda}$ is an integral domain, there is a canonical injective morphism of $\Bbbk$-algebras \[ A_{n}^{\bar{q},\Lambda} \stackrel{can.inj.}{\longrightarrow} B_{n}^{\bar{q},\Lambda}. \] In the sequel, we will often identify an element of $A_{n}^{\bar{q},\Lambda}$ with its image in $B_{n}^{\bar{q},\Lambda}$ under the above canonical injection.
Denote by $R$ (resp. $R^\circ$) the $\Bbbk$-subalgebra of $A_n^{\bar{q},\Lambda}$ (resp. $B_n^{\bar{q},\Lambda}$) generated by $z_1,\dots,z_n$ (resp. $z_1,\dots,z_n$ and their inverses). Hence, $R$ and $R^\circ$ are commutative $\Bbbk$-algebras.
\begin{subproposition}\label{maltsiniotis2} -- Assume $q_i\neq 1$ for all $1 \le i \le n$. The elements $z_1,\dots,z_n$ are algebraically independent in $R$. Moreover, the set \[ {\bf b}=\left\{y_{1}^{j_{1}}x_{1}^{i_{1}}...y_{n}^{j_{n}}x_{n}^{i_{n}},\; (j_{1},i_{1},...,j_{n},i_{n}) \in {\mathbb N}^{2n}, \; i_kj_k=0, \, \forall 1 \le k \le n\right\} \] is a basis of $A_n^{\bar{q},\Lambda}$ regarded as a left or as a right $R$-module. Further, ${\bf b}$ is a basis of $B_n^{\bar{q},\Lambda}$ both as a left and as a right $R^\circ$-module. \end{subproposition}
\noindent {\it Proof.$\;$} We leave the detailed proof to the interested reader. All the statements follow from a careful use of the above relations and Proposition \ref{maltsiniotis}. \qed
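To illustrate the statement in the case $n=1$: the set ${\bf b}$ then consists of the monomials $x_1^{i}$, $i\in{\mathbb N}$, together with the $y_1^{j}$, $j\in{\mathbb N}^\ast$, and the excluded monomial $y_1x_1$ does lie in $R$, since $y_1x_1 = (q_1-1)^{-1}(z_1-1)$.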
\subsection{The quantum Weyl algebra of M. Akhavizadegan \& D. Jordan}
\begin{subdefinition} -- Let $n\in{\mathbb N}^\ast$. To any $n$-tuple $\bar{q}=(q_{1} ,...,q_n)\in(\Bbbk^\ast)^n$ and any $n \times n$ skew-symmetric matrix $\Lambda=(\lambda_{ij}) \in M_{n}(\Bbbk)$, we associate the $\Bbbk$-algebra ${\mathcal A}_n^{\bar{q},\Lambda}$ with generators $y_1,...,y_n,x_1 ,...,x_n$ and relations: \begin{equation}\label{awqmAJ-1} \begin{array}{lll} \forall (i,j)\in{\mathbb N}^2, \; 1 \le i<j\le n, & x_{i} x_{j}=\lambda _{ij} x_{j} x_{i}, & y_{i} y_{j}=\lambda _{ij} y_{j} y_{i}, \cr & x_{i} y_{j}=\lambda _{ij}^{-1} y_{j} x_{i}, & y_{i} x_{j}=\lambda _{ij}^{-1} x_{j} y_{i}. \end{array} \end{equation} \begin{equation}\label{awqmAJ-2} \begin{array}{l} \forall i\in{\mathbb N}, \; 1 \le i \le n,\; x_{i} y_{i} -q_{i}y_{i} x_{i} = 1. \end{array} \end{equation} Let us also define $z_0=1$ and, for $i$ such that $1 \le i \le n$, \[ z_i =x_iy_i - y_ix_i. \] \end{subdefinition}
As before, these elements will play a crucial role.
\begin{subremark} -- \rm \label{rm4relations} It is easy to see that the four relations in (\ref{awqmAJ-1}) actually hold for all $(i,j)\in{\mathbb N}^2$ such that $1 \le i \neq j \le n$ and that: \[ \forall i, \qquad 1 \le i \le n, \qquad z_{i}=1+(q_i-1)y_i x_i . \] \end{subremark}
Results analogous to Propositions \ref{maltsiniotis}, \ref{maltsiniotis1} and \ref{maltsiniotis2} also hold in this case.
\begin{subproposition} -- The algebra ${\mathcal A}_n^{\bar{q},\Lambda}$ is a noetherian integral domain. In addition, the family $\{y_{1}^{j_{1}}x_{1}^{i_{1}}...y_{n}^{j_{n}}x_{n}^{i_{n}},\; (j_{1},i_{1},...,j_{n},i_{n}) \in {\mathbb N}^{2n}\}$ is a $\Bbbk$-basis of ${\mathcal A}_n^{\bar{q},\Lambda}$. \end{subproposition}
\noindent {\it Proof.$\;$} See \cite[1.9]{AJ}. \qed\\
In particular, the family $\{z_1,\dots, z_n\}$ verifies properties similar to those of the previous subsection.
\begin{subproposition}\label{z-et-xy} -- For all $1 \le i \le n$, $z_{i}$ is a normal element of ${\mathcal A}_n^{\bar{q},\Lambda}$. More precisely, we have: \begin{itemize} \item for all $i, j$, $1 \le i \neq j \le n, \qquad z_{i}x_{j}=x_{j}z_{i} \qquad \mbox{and}\qquad z_{i}y_{j}=y_{j}z_{i},$ \item for all $i$, $1 \le i \le n, \qquad z_{i}x_{i}=q_{i}^{-1}x_{i}z_{i} \qquad \mbox{and} \qquad z_{i}y_{i}=q_{i}y_{i}z_{i},$ \item for all $i, j$, $1 \le i,j \le n, \qquad z_{i}z_{j}=z_{j}z_{i}.$ \end{itemize} \end{subproposition}
\noindent {\it Proof.$\;$} See \cite[p. 286]{AJ}. \qed\\
As pointed out before, the multiplicative set generated by all the $z_i$'s consists of normal elements. Hence, it is an Ore set and thus we may form the corresponding localisation.
\begin{subdefinition} -- We denote by ${\mathcal B}_{n}^{\bar{q},\Lambda}$ the localisation of ${\mathcal A}_{n}^{\bar{q}, \Lambda}$ at the multiplicative set generated by the elements $z_i$, $1 \le i \le n$. \end{subdefinition}
Notice that, again, the fact that ${\mathcal A}_{n}^{\bar{q},\Lambda}$ is an integral domain implies that the canonical morphism of $\Bbbk$-algebras \[ {\mathcal A}_{n}^{\bar{q},\Lambda} \stackrel{can.inj.}{\longrightarrow} {\mathcal B}_{n}^{\bar{q},\Lambda} \] is injective, and we will identify any element of ${\mathcal A}_{n}^{\bar{q},\Lambda}$ with its image in ${\mathcal B}_{n}^{\bar{q},\Lambda}$ under this injection.
Denote by $R$ (resp. $R^\circ$) the $\Bbbk$-subalgebra of ${\mathcal A}_n^{\bar{q},\Lambda}$ (resp. ${\mathcal B}_n^{\bar{q},\Lambda}$) generated by $z_1,\dots,z_n$ (resp. $z_1,\dots,z_n$ and their inverses).
\begin{subproposition} -- \label{R-basis-of-A} Assume $q_i\neq 1$ for all $1 \le i \le n$. The elements $z_1,\dots,z_n$ are algebraically independent in $R$. Further, the set \[ {\bf b}=\{y_{1}^{j_{1}}x_{1}^{i_{1}}...y_{n}^{j_{n}}x_{n}^{i_{n}},\; (j_{1},i_{1},...,j_{n},i_{n}) \in {\mathbb N}^{2n}, \; i_kj_k=0, \, \forall 1 \le k \le n\} \] is a basis of ${\mathcal A}_n^{\bar{q},\Lambda}$ regarded as a left and as a right $R$-module. Moreover, ${\bf b}$ is also a basis of ${\mathcal B}_n^{\bar{q},\Lambda}$ as a left and as a right $R^\circ$-module. \end{subproposition}
\noindent {\it Proof.$\;$} It is similar to the proof of Prop. \ref{maltsiniotis2}. \qed
\subsection{A common localisation for quantum Weyl algebras}
We finish this section by recalling a result which states that the localisations of both versions of quantum Weyl algebras are isomorphic.
\begin{subtheorem} -- Let $n\in{\mathbb N}^\ast$. For any $n$-tuple $\bar{q}=(q_{1} ,...,q_n)\in(\Bbbk^\ast)^n$ and any $n \times n$ skew symmetric matrix $\Lambda=(\lambda_{ij}) \in M_{n}(\Bbbk)$, the map defined by: \[ \begin{array}{ccrcl} \theta & : & {\mathcal B}_n^{\bar{q},\Lambda} & \longrightarrow & B_n^{\bar{q},\Lambda} \cr
& & y_i & \mapsto & y_i \cr
& & x_i & \mapsto & z_{i-1}^{-1}x_i \cr
& & z_i & \mapsto & z_{i-1}^{-1}z_i \end{array} \] is an isomorphism of $\Bbbk$-algebras.
\end{subtheorem}
\noindent {\it Proof.$\;$} See p. 287 of \cite{AJ}.\qed
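As a quick consistency check: by Proposition \ref{maltsiniotis1}, $z_{i-1}$ commutes with $x_i$ and $y_i$ in $B_n^{\bar{q},\Lambda}$, so that, using the equality $x_iy_i - q_iy_ix_i = z_{i-1}$ noted above, \[ \theta(x_i)\theta(y_i) - q_i\theta(y_i)\theta(x_i) = z_{i-1}^{-1}\left(x_iy_i - q_iy_ix_i\right) = z_{i-1}^{-1}z_{i-1} = 1, \] in accordance with the defining relation (\ref{awqmAJ-2}) of ${\mathcal A}_n^{\bar{q},\Lambda}$.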
\section{Weight modules for quantum Weyl algebras}
In this section, we fix an $n$-tuple $\bar{q}=(q_{1} ,...,q_n)\in(\Bbbk^\ast)^n$ and an $n \times n$ skew symmetric matrix $\Lambda=(\lambda_{ij}) \in M_{n}(\Bbbk)$ and we associate to this data the algebra ${\mathcal B}_n^{\bar{q},\Lambda}$ introduced in Section \ref{section-qwa}. We consider the context of Section \ref{section-weight-categories} for the pair $(R^\circ,{\mathcal B}_n^{\bar{q},\Lambda})$, where $R^\circ$ is the $\Bbbk$-subalgebra of ${\mathcal B}_n^{\bar{q},\Lambda}$ generated by the $z_i$'s and their inverses. As a consequence of Proposition \ref{R-basis-of-A}, $R^\circ$ is a commutative Laurent polynomial $\Bbbk$-algebra in the indeterminates $z_1,\dots,z_n$.
As we will see, the situation will heavily depend on whether or not some $q_i$ is a root of unity in $\Bbbk$. We will say that the $n$-tuple $\bar{q}$ is {\em generic} whenever, for each integer $i$, $1 \le i \le n$, $q_i$ is not a root of unity.
In the first subsection, we show that the pair $(R^\circ,{\mathcal B}_n^{\bar{q},\Lambda})$ is indeed a strongly-free extension. We also consider the action of the canonical generators of ${\mathcal B}_n^{\bar{q},\Lambda}$ on the basis attached to this extension. From that basis arises in a natural way an action of the group ${\mathbb Z}^n$ on the weights. We investigate this action in the second subsection and show, for example, that it is free if and only if no $q_i$ is a root of unity. As we discussed in the first section, each character $\phi$ of $R^\circ$ gives rise to a projective weight ${\mathcal B}_n^{\bar{q},\Lambda}$-module $P_\phi$. The structure of these modules is the subject of the last subsection. We investigate their weights in connection with the above group action and the dimensions of their weight spaces, characterise their simplicity, and begin to discuss whether two such modules are isomorphic.
\subsection{Basic results on the pair $(R^\circ,{\mathcal B}_n^{\bar{q},\Lambda})$}
By Proposition \ref{R-basis-of-A}, \[ {\bf b}=\left\{y_{1}^{j_{1}}x_{1}^{i_{1}}...y_{n}^{j_{n}}x_{n}^{i_{n}},\; (j_{1},i_{1},...,j_{n},i_{n}) \in {\mathbb N}^{2n}, \; i_kj_k=0, \, \forall k, 1 \le k \le n\right\} \] is a basis of ${\mathcal B}_n^{\bar{q},\Lambda}$ regarded either as a left or as a right $R^\circ$-module. It will be convenient to write this basis in a slightly different way. Let $k=(k_1,\dots,k_n)\in{\mathbb Z}^n$. We denote \[ b_k = \prod_{1 \le i \le n} b_{k,i} \qquad \mbox{where, for $1 \le i \le n$}, \qquad b_{k,i} = \left\{ \begin{array}{ccc} x_i^{k_i} & \mbox{if} & k_i \ge 0, \cr y_i^{-k_i} & \mbox{if} & k_i \le 0 . \end{array} \right. \] With this notation, we have \[ {\bf b}=\{b_k, \, k\in{\mathbb Z}^n\}. \] Given $k\in{\mathbb Z}^n$, let $\sigma_k$ be the $\Bbbk$-algebra automorphism of $R^\circ$ such that \[ \forall i, \ 1 \le i \le n, \quad\sigma_k(z_i) = q_i^{-k_i}z_i. \] From the relations in Proposition \ref{z-et-xy}, we get that \[ \forall\, k\in{\mathbb Z}^n, \quad \forall r\in R^\circ, \quad rb_k=b_k\sigma_k(r). \] In the notation of Definition \ref{definition-strongly-free}, this means that, for all $k\in{\mathbb Z}^n$, $\sigma_k=\sigma_{b_k}$. These facts afford the following lemma.
\begin{sublemma} -- The pair $(R^\circ,{\mathcal B}_n^{\bar{q},\Lambda})$ is of strongly-free type. \end{sublemma}
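For instance, when $n=1$ and $k=(2)$, Proposition \ref{z-et-xy} gives $z_1b_k = z_1x_1^{2} = q_1^{-1}x_1z_1x_1 = q_1^{-2}x_1^{2}z_1 = b_k\sigma_k(z_1)$, as expected.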
We finish this subsection by examining how multiplication by the generators modifies the elements of ${\bf b}$. From now on, we let $\{e_1,\dots,e_n\}$ stand for the canonical basis of ${\mathbb Z}^n$.
\begin{sublemma} -- \label{relations-x-y-et-b} Assume $q_i\neq 1$ for all $1 \le i \le n$. Given $k\in{\mathbb Z}^n$ and $i$ such that $1 \le i \le n$, \[ x_ib_k = \left\{ \begin{array}{lll} \left(\displaystyle\prod_{1 \le j < i} \lambda_{ij}^{k_j} \right) b_{k+e_i} & \mbox{if} & k_i \ge 0, \cr \left(\displaystyle\prod_{1 \le j < i} \lambda_{ij}^{k_j} \right) b_{k+e_i}\displaystyle\frac{q_i^{-k_i}z_i-1}{q_i-1} & \mbox{if} & k_i < 0 \cr \end{array} \right. \] and \[ y_ib_k = \left\{ \begin{array}{lll} \left(\displaystyle\prod_{1 \le j < i} \lambda_{ij}^{k_j} \right)^{-1} b_{k-e_i}\displaystyle\frac{q_i^{-k_i+1}z_i-1}{q_i-1} & \mbox{if} & k_i > 0, \cr \left(\displaystyle\prod_{1 \le j < i} \lambda_{ij}^{k_j} \right)^{-1} b_{k-e_i} & \mbox{if} & k_i \le 0. \end{array} \right. \] \end{sublemma}
\noindent {\it Proof.$\;$} This is straightforward using relations (\ref{awqmAJ-1}) and (\ref{awqmAJ-2}), together with the equalities of Remark \ref{rm4relations} and Proposition \ref{z-et-xy}.\qed
\subsection{Action of ${\mathbb Z}^n$ on weights arising from the basis ${\bf b}$}\label{action-B}
Recall from Remark \ref{action-automorphismes-sur-poids} the right action of ${\rm Aut}_\Bbbk(R^\circ)$ on $\widehat{R^\circ}$. On the other hand, we have group morphisms as follows \[ \epsilon: {\mathbb Z}^n \rightarrow (\Bbbk^\ast)^n \qquad \quad (k_1,\dots,k_n) \mapsto (q_1^{-k_1},\dots,q_n^{-k_n}) \] and \[ \delta: (\Bbbk^\ast)^n \rightarrow {\rm Aut}_\Bbbk(R^\circ) \qquad (\lambda_1,\dots,\lambda_n) \mapsto\left( z_i \mapsto \lambda_i z_i \right)_{1 \le i \le n}. \]
Now, composing these maps gives rise to a left action of $(\Bbbk^\ast)^n$ on $\widehat{R^\circ}$ and a left action of ${\mathbb Z}^n$ on $\widehat{R^\circ}$ given by \[ (\Bbbk^\ast)^n \stackrel{\delta}{\longrightarrow} {\rm Aut}_\Bbbk(R^\circ) \stackrel{}{\longrightarrow} {\mathfrak S}(\widehat{R^\circ}) \qquad\mbox{and}\qquad {\mathbb Z}^n \stackrel{\epsilon}{\longrightarrow} (\Bbbk^\ast)^n \stackrel{\delta}{\longrightarrow} {\rm Aut}_\Bbbk(R^\circ) \stackrel{}{\longrightarrow} {\mathfrak S}(\widehat{R^\circ}), \] where ${\mathfrak S}(\widehat{R^\circ})$ is the group of permutations of the set $\widehat{R^\circ}$. Alternatively, the latter may be described as follows: \begin{equation} \begin{array}{ccrcl}
& & {\mathbb Z}^n \times \widehat{R^\circ} & \longrightarrow & \widehat{R^\circ} \cr
& & (k,\phi) & \mapsto & \phi \circ \sigma_k. \end{array} \end{equation}
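Explicitly, for $k=(k_1,\dots,k_n)\in{\mathbb Z}^n$ and $\phi\in\widehat{R^\circ}$, the weight $k.\phi=\phi\circ\sigma_k$ is determined by \[ (\phi\circ\sigma_k)(z_i) = q_i^{-k_i}\phi(z_i), \qquad 1 \le i \le n. \]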
These actions verify some easily proven properties that we list in the following lemma.
\begin{sublemma} -- Using the notation above, the following holds: \begin{enumerate} \item the left action of $(\Bbbk^\ast)^n$ on $\widehat{R^\circ}$ is faithful; \item the morphism $\epsilon \, : {\mathbb Z}^n \longrightarrow (\Bbbk^\ast)^n$ is injective if and only if $\bar{q}$ is generic; \item the left action of ${\mathbb Z}^n$ on $\widehat{R^\circ}$ is faithful if and only if $\bar{q}$ is generic. \end{enumerate} \end{sublemma}
Fixing $\phi\in \widehat{R^\circ}$, we obtain a map ${\mathbb Z}^n \to \widehat{R^\circ}$.
\begin{sublemma} -- \label{genericity-injectivity} Let $\phi\in\widehat{R^\circ}$. The map ${\mathbb Z}^n \to {\mathbb Z}^n.\phi\subseteq\widehat{R^\circ}$ sending $k$ to $\phi\circ\sigma_k$ is injective if and only if $\bar{q}$ is generic. \end{sublemma}
\noindent {\it Proof.$\;$} Let $k\in{\mathbb Z}^n$. By definition, given $i$ such that $1 \le i \le n$, we know that $\phi\circ\sigma_k(z_i)=q_i^{-k_i}\phi(z_i)$. But, $z_i$ being an invertible element of ${\mathcal B}_n^{\bar{q},\Lambda}$, $\phi(z_i)$ is necessarily non zero. Hence, for $k,l\in{\mathbb Z}^n$, $\phi\circ\sigma_k=\phi\circ\sigma_l$ if and only if $q_i^{l_i-k_i}=1$ for all $1 \le i \le n$, and such a pair with $k\neq l$ exists if and only if some $q_i$ is a root of unity. The result follows. \qed
\begin{subcorollary} -- \label{generic-and-free} The following conditions are equivalent: \begin{enumerate} \item $\bar{q}$ is generic; \item there exists $\phi\in\widehat{R^\circ}$ such that ${\rm stab}_{{\mathbb Z}^n}(\phi)=\{0_{{\mathbb Z}^n}\}$; \item for all $\phi\in\widehat{R^\circ}$, ${\rm stab}_{{\mathbb Z}^n}(\phi)=\{0_{{\mathbb Z}^n}\}$, in other words the action of ${\mathbb Z}^n$ on $\widehat{R^\circ}$ is free. \end{enumerate} \end{subcorollary}
\noindent {\it Proof.$\;$} This is an immediate consequence of Lemma \ref{genericity-injectivity}.\qed
\subsection{Structure of $P_\phi$ for $\phi\in\widehat{R^\circ}$}\label{base-de-P}
Let $\phi\in\widehat{R^\circ}$. Recall that $P_\phi = {\mathcal B}_n^{\bar{q},\Lambda} \otimes_{R^\circ} \Bbbk_\phi$, where $\Bbbk_\phi=R^\circ/\ker(\phi)$. Given $k\in{\mathbb Z}^n$, set \[ v_k = b_k \otimes_{R^\circ} \overline{1} . \] By Remark \ref{structure-P}, the set $\{v_k,\, k\in{\mathbb Z}^n\}$ is a $\Bbbk$-basis of $P_\phi$ and \begin{equation}\label{inclusion-weight} \forall k\in{\mathbb Z}^n, \quad \Bbbk v_k \subseteq (P_\phi)(\phi\circ\sigma_k) . \end{equation} Hence, $P_\phi$ is a weight module of ${\mathcal B}_n^{\bar{q},\Lambda}$ and the set of weights of $P_\phi$ is the ${\mathbb Z}^n$-orbit of $\phi$, that is ${\mathbb Z}^n.\phi$.
\begin{subproposition} -- \label{generic-weight-decomposition} Let $\phi\in\widehat{R^\circ}$. If $\bar{q}$ is generic, then, for all $k\in{\mathbb Z}^n, \quad \Bbbk v_k = (P_\phi)(\phi\circ\sigma_k)$. In particular, the weight decomposition of the weight ${\mathcal B}_n^{\bar{q},\Lambda}$-module $P_\phi$ is: \[ P_\phi = \bigoplus_{k\in{\mathbb Z}^n} \Bbbk v_k = \bigoplus_{k\in{\mathbb Z}^n} (P_\phi)(\phi\circ\sigma_k). \] \end{subproposition}
\noindent {\it Proof.$\;$} Given $k\in{\mathbb Z}^n$, we know from the previous discussion that $\Bbbk v_k \subseteq (P_\phi)(\phi\circ\sigma_k)$. The result is thus an obvious consequence of Lemma \ref{genericity-injectivity}. \qed
\begin{subremark} -- \rm \label{generic-P-weyl} We may actually strengthen Proposition \ref{generic-weight-decomposition} as follows. Let $\phi\in\widehat{R^\circ}$. The following statements are equivalent:\\ (i) $\bar{q}$ is generic; \\ (ii) the map ${\bf b} \longrightarrow \widehat{R^\circ}$, $b \mapsto \phi \circ \sigma_b$, is injective;\\ (iii) the weight spaces of $P_\phi$ all have dimension at most one.\\ This follows from the above discussion, taking Lemma \ref{genericity-injectivity} and the inclusion (\ref{inclusion-weight}) into account. \end{subremark}
\begin{subremark} -- \rm \label{N-phi} Supposing that $\bar{q}$ is generic, the hypotheses of Remark \ref{generic-P} are fulfilled. Hence, we know that, for all $\phi\in\widehat{R^\circ}$, the module $P_\phi$ has a maximum strict weight ${\mathcal B}_n^{\bar{q},\Lambda}$-submodule $N_\phi$ and a unique simple quotient $S_\phi=P_\phi/N_\phi$. \end{subremark}
We continue by giving explicit expressions for the action of $x_i$ and $y_i$ on basis vectors of $P_\phi$. Recall that $\{e_1,\dots,e_n\}$ is the canonical basis of ${\mathbb Z}^n$.
\begin{sublemma} -- \label{action-xy} Assume $q_i\neq 1$ for all $1 \le i \le n$. Let $\phi\in\widehat{R^\circ}$. Let $1 \le i \le n$ and let $k\in{\mathbb Z}^n$~: \begin{enumerate} \item the action of $x_i$ on $v_k$ is given by~: \begin{equation} \label{action-x} x_i . v_k = \left\{ \begin{array}{rcc} \left(\displaystyle\prod_{1 \le j < i} \lambda_{ij}^{k_j} \right)v_{k+e_i} & \mbox{if} & k_i \ge 0, \cr \displaystyle\frac{q_i^{-k_i}\phi(z_i)-1}{q_i-1} \left(\displaystyle\prod_{1 \le j < i} \lambda_{ij}^{k_j} \right)v_{k+e_i} & \mbox{if} & k_i < 0 ; \end{array} \right. \end{equation} \item the action of $y_i$ on $v_k$ is given by~: \begin{equation} \label{action-y} y_i . v_k = \left\{ \begin{array}{rcc} \displaystyle\frac{q_i^{-k_i+1}\phi(z_i)-1}{q_i-1} \left(\displaystyle\prod_{1 \le j < i} \lambda_{ij}^{k_j} \right)^{-1}v_{k-e_i} & \mbox{if} & k_i > 0, \cr \left(\displaystyle\prod_{1 \le j < i} \lambda_{ij}^{k_j} \right)^{-1}v_{k-e_i} & \mbox{if} & k_i \le 0. \end{array} \right. \end{equation} \end{enumerate} \end{sublemma}
\noindent {\it Proof.$\;$} It is an immediate consequence of Lemma \ref{relations-x-y-et-b}. \qed\\
It is clear after Lemma \ref{action-xy} that a key feature of $P_\phi$ depends on whether the scalars appearing in relations (\ref{action-x}) and (\ref{action-y}) may vanish or not, which motivates the next definition.
\begin{subdefinition} -- Let $\phi\in\widehat{R^\circ}$. We define the {\em complexity} of $\phi$ as the set \[ {\rm comp}(\phi) = \{i \in\{1,\dots,n\} \,|\, \phi(z_i) \in \langle q_i \rangle\}, \] where $\langle q_i \rangle$ denotes the subgroup of $\Bbbk^\ast$ generated by $q_i$. \end{subdefinition}
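For instance, when $n=1$ and $\phi(z_1)=q_1^{-1}$, formula (\ref{action-x}) gives $x_1.v_{-1} = \frac{q_1\phi(z_1)-1}{q_1-1}\,v_{0} = 0$; accordingly, $1\in{\rm comp}(\phi)$.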
\begin{subremark} -- \rm Any two elements of $\widehat{R^\circ}$ in the same ${\mathbb Z}^n$-orbit have the same complexity. \end{subremark}
Recall that, by Proposition \ref{sous-objets-et-poids}, any submodule of $P_\phi$ is a weight submodule. The next proposition shows that weights with empty complexity give rise to simple modules.
\begin{subproposition} -- \label{objets-de-niveau-0} Suppose that $\bar{q}$ is generic. If $\phi\in\widehat{R^\circ}$, the following are equivalent: \begin{enumerate} \item the complexity of $\phi$ is empty, \item $P_\phi$ is a simple object both of ${\mathcal B}_n^{\bar{q},\Lambda}-{\sf Mod}$ and of ${\mathcal B}_n^{\bar{q},\Lambda}-{\sf wMod}$. \end{enumerate} \end{subproposition} \noindent {\it Proof.$\;$} By Proposition \ref{sous-objets-et-poids}, it is enough to consider the simplicity of $P_\phi$ as an object of ${\mathcal B}_n^{\bar{q},\Lambda}-{\sf wMod}$.
Suppose $M$ is a non zero subobject of $P_\phi$ in the category ${\mathcal B}_n^{\bar{q},\Lambda}-{\sf wMod}$. By Proposition \ref{generic-weight-decomposition}, it must contain one of the $v_k$'s with $k\in{\mathbb Z}^n$. This in turn implies, using Lemma \ref{action-xy}, that if the complexity of $\phi$ is empty, then $v_0\in M$ and thus $M=P_\phi$.
Conversely, suppose the complexity of $\phi$ is not empty and let $i$ be an integer belonging to ${\rm comp}(\phi)$. Thus, $\phi(z_i) = q_i^\ell$ for some $\ell\in{\mathbb Z}$. Suppose first that $\ell < 0$. It follows from Lemma \ref{action-xy} that $x_i.v_k=0$ whenever $k_i=\ell$. As a consequence, $\oplus_{k, k_i \le \ell} \Bbbk v_k$ is a strict ${\mathcal B}_n^{\bar{q},\Lambda}$-submodule of $P_\phi$. Suppose now that $\ell \ge 0$. In this case, it follows from Lemma \ref{action-xy} that $y_i.v_k=0$ whenever $k_i=\ell+1$. Consequently, $\oplus_{k, k_i \ge \ell+1} \Bbbk v_k$ is a strict ${\mathcal B}_n^{\bar{q},\Lambda}$-submodule of $P_\phi$. Hence, in all cases, $P_\phi$ is not simple. \qed\\
In the next statement, we analyse the link between $P_\phi$ and $P_{\phi\circ\sigma_{e_\ell}}$ for an integer $\ell$ such that $1 \le \ell \le n$. Recall that $e_i$ stands for the $i$-th vector of the canonical basis of ${\mathbb Z}^n$.
\begin{subproposition} -- \label{iso-des-P-phi} Suppose $q_i\neq 1$ for all $1 \le i \le n$. Fix an integer $\ell$ such that $1 \le \ell \le n$ and $\phi\in\widehat{R^\circ}$. \begin{enumerate} \item If $(\lambda_k)_{k\in{\mathbb Z}^n} \in \Bbbk^{({\mathbb Z}^n)}$ satisfies the following conditions: \begin{enumerate} \item $\lambda_k = \lambda_{k+e_\ell}$, if $k_\ell \neq -1$; \item $\lambda_k = \displaystyle\frac{\phi(z_\ell)-1}{q_\ell-1}\lambda_{k+e_\ell}$, if $k_\ell = -1$; \item $\lambda_k = \lambda_{k+e_i}$, if $i<\ell$; \item $\lambda_k = \lambda_{i\ell}^{-1}\lambda_{k+e_i}$, if $i>\ell$, \end{enumerate} then, the $\Bbbk$-linear map \[ \begin{array}{ccrcl}
& & P_{e_\ell.\phi} & \longrightarrow & P_\phi \cr
& & w_k & \mapsto & \lambda_kv_{k+e_\ell}, \end{array} \] where $(w_k)_{k\in{\mathbb Z}^n}$ and $(v_k)_{k\in{\mathbb Z}^n}$ are respectively the canonical bases of $P_{e_\ell.\phi}$ and $P_\phi$ --as defined at the beginning of this subsection-- is a morphism in ${\mathcal B}_n^{\bar{q},\Lambda}-{\sf wMod}$. \item Assume in addition that $\phi(z_\ell) \neq 1$. Then, $P_{e_\ell.\phi}$ is isomorphic to $P_\phi$ in ${\mathcal B}_n^{\bar{q},\Lambda}-{\sf wMod}$. \end{enumerate} \end{subproposition}
\noindent {\it Proof.$\;$} The first item is a lengthy straightforward verification.
The second item follows directly from the first one. \qed
\section{Examples: the case where all the entries of $\Lambda$ equal $1$} \label{section-exemples}
As will be shown in Section \ref{QWa-et-ZT}, the study of simple weight modules of the extension $(R^\circ,{\mathcal B}_n^{\bar{q},\Lambda})$ may be reduced to the case where the skew-symmetric matrix $\Lambda$ has all its entries equal to $1$. For this reason we concentrate, in the present section, on this special case. It is easier to handle since, under this hypothesis, ${\mathcal B}_n^{\bar{q},\Lambda}$ is a tensor product of $n$ algebras of the form ${\mathcal B}_1^{q_i}$, as made precise below.
Hence, we first investigate the case where $n=1$. We give a complete and explicit description of the projective weight modules $P_\phi$ attached to each character $\phi$ of $R^\circ$. From this we deduce a complete description of the simple weight modules and classify them. The second subsection comes back to the case of an arbitrary integer $n$. It is shown that simple weight modules in this case may be explicitly described using simple weight modules arising in the case $n=1$. This is illustrated in the third subsection, where the case $n=2$ is studied in full detail, using intuitive graphs to describe the modules $P_\phi$.
In this section, we suppose that $\bar{q}$ is generic. We also assume that all the entries of the skew-symmetric matrix $\Lambda$ equal $1$. Under this last hypothesis, there is an isomorphism of $\Bbbk$-algebras \[ \begin{array}{ccrcl} & & {\mathcal B}_n^{\bar{q},\Lambda} & \longrightarrow & {\mathcal B}_1^{q_1} \otimes\dots\otimes {\mathcal B}_1^{q_n} \end{array} \] that sends each $x_i$ to the elementary tensor $1 \otimes\dots\otimes x_1 \otimes\dots\otimes 1$ and each $y_i$ to $1 \otimes\dots\otimes y_1 \otimes\dots\otimes 1$, with $x_1$ and $y_1$ in the $i$-th position; here, for $q\in\Bbbk^\ast$, we put ${\mathcal B}_1^q={\mathcal B}_1^{\bar{q},\Lambda}$ with $\bar{q}=(q)$ and $\Lambda=(1)$. This isomorphism clearly induces in turn the isomorphism \[ \begin{array}{ccrcl} & & \Bbbk[z_1^{\pm 1},\dots,z_n^{\pm 1}] & \longrightarrow & \Bbbk[z_1^{\pm 1}] \otimes\dots\otimes \Bbbk[z_1^{\pm 1}]. \end{array} \] We are now ready to apply the results of Proposition \ref{weight-in-tpa} and Subsection \ref{TP-of-SFE}.
From now on, we denote by ${\bf 1}$ the unique element of $\widehat{\Bbbk[z_1^{\pm 1},\dots,z_n^{\pm 1}]}$ that sends $z_1,\dots,z_n$ to $1$.
\subsection{The case $n=1$}\label{exemple-n=1}
The observations in the Introduction show that one can build simple weight modules of ${\mathcal B}_n^{\bar{q},\Lambda}$ from those appearing in the case $n=1$. So we start by studying this case. In this subsection, we suppose $n=1$.
\begin{subremark} -- \rm \label{complexite-orbite-1} Let $\phi\in \widehat{R^\circ}$. It is clear that the following statements are equivalent: \begin{enumerate} \item ${\rm comp}(\phi) = \{1\}$; \item
${\bf 1} \in {\mathbb Z}.\phi$; \item
${\mathbb Z}.\phi={\mathbb Z}.{\bf 1}$. \end{enumerate} \end{subremark}
Fix now an element $\phi\in\widehat{R^\circ}$. As a consequence of Proposition \ref{generic-weight-decomposition} and Corollary \ref{generic-and-free}, the set of weights of $P_\phi$ is ${\mathbb Z}.\phi$, which is equipotent to ${\mathbb Z}$, and each weight space of $P_\phi$ is one dimensional over $\Bbbk$. \\
According to Proposition \ref{objets-de-niveau-0}, $P_\phi$ is simple if and only if the complexity of $\phi$ is empty, that is, if and only if $\phi$ is not in the orbit of ${\bf 1}$ under the action of ${\mathbb Z}$ (cf. Remark \ref{complexite-orbite-1}). We thus analyse these two cases.
In order to visualize the action under consideration, we represent it by a graph as follows. The vectors of the canonical basis of $P_\phi$ are displayed horizontally along a line and constitute the set of vertices of the graph. By Lemma \ref{action-xy}, $x_1$ (resp. $y_1$) acts on a given $v_\alpha$ sending it to a scalar multiple of $v_{\alpha +1}$ (resp. $v_{\alpha -1}$). In case the corresponding scalar is nonzero, we draw an arrow, labeled by the corresponding generator, from $v_\alpha$ to $v_{\alpha +1}$ (resp. $v_{\alpha -1}$); in case it is zero, we do not draw any arrow. A careful observation allows one to ``see'' in this graph the unique maximal submodule and the corresponding simple quotient.
\begin{subexample} -- \label{exemple-hors-1} \rm Suppose $\phi\not\in{\mathbb Z}.{\bf 1}$ or, equivalently, ${\rm comp}(\phi)=\emptyset$. \begin{enumerate} \item By Proposition \ref{objets-de-niveau-0}, $P_\phi$ is a simple ${\mathcal B}_1^{\bar{q},\Lambda}$-weight module. Its corresponding scheme is as follows: \tiny \[ \xymatrix@!C{ \dots & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & v_{\alpha-1} \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & v_\alpha \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & v_{\alpha+1} \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & v_{\alpha+2} \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & \ar@/^/[l]^{y_1} & \dots\\ } \] \normalsize where we have added in the scheme the action of $z_1$ for the sake of completeness. \item By Proposition \ref{iso-des-P-phi}, for all $\Psi\in{\mathbb Z}.\phi$, $P_\phi$ and $P_\Psi$ are isomorphic in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf wMod}$. \end{enumerate} \end{subexample}
Recall from Remark \ref{N-phi} the submodule $N_\phi$ of $P_\phi$.
\begin{subexample} -- \label{exemple-1} \rm Suppose $\phi\in {\mathbb Z}.{\bf 1}$. \begin{enumerate} \item First case: $\phi\in (-{\mathbb N}).{\bf 1}$. In this case there exists a unique non negative integer $\alpha$ such that $\phi=(-\alpha){\bf 1}$, or equivalently such that $\phi(z_1) = q_1^\alpha$. \begin{enumerate} \item The action of $x_1$, $y_1$, $z_1$ can be pictured as follows: \tiny \[ \xymatrix@!C{ \dots & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & v_{\alpha-1} \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & v_\alpha \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & v_{\alpha+1} \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} & v_{\alpha+2} \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & \ar@/^/[l]^{y_1} & \dots\\ } \] \normalsize \item Applying Proposition \ref{sous-objets-et-poids}, one gets at once that \[ N_\phi = \bigoplus_{k\ge \alpha +1} \Bbbk.v_k. \] In particular the set of weights of $N_\phi$ is ${\mathbb N}^\ast.{\bf 1}$. \item It follows that the set of weights of $S_\phi$ is $(-{\mathbb N}).{\bf 1}$. \end{enumerate} \item Second case: $\phi\in {\mathbb N}^\ast.{\bf 1}$. In this case there exists a unique positive integer $\beta$ such that $\phi=\beta.{\bf 1}$, or equivalently such that $\phi(z_1) = q_1^{-\beta}$. \begin{enumerate} \item The action of $x_1$, $y_1$, $z_1$ can be pictured as follows: \tiny \[ \xymatrix@!C{ \dots & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} & v_{-\beta-1} \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & v_{-\beta} \ar@(ul,ur)^{z_1} \ar@/^/[l]^{y_1} & v_{-\beta+1} \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & \ar@(ul,ur)^{z_1} \ar@/^/[r]^{x_1} \ar@/^/[l]^{y_1} & \ar@/^/[l]^{y_1} & \dots } \] \normalsize \item Arguing as in the first case, $N_\phi = \bigoplus_{k\le -\beta} \Bbbk.v_k$. Hence, the set of weights of $N_\phi$ is $(-{\mathbb N}).{\bf 1}$. \item As a consequence, the set of weights of $S_\phi$ is ${\mathbb N}^\ast.{\bf 1}$. \end{enumerate} \item By Proposition \ref{iso-des-P-phi}, we get that: \begin{enumerate} \item for all $\rho,\psi\in(-{\mathbb N}).{\bf 1}$, $P_\rho$ and $P_\psi$ are isomorphic in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf wMod}$; \item for all $\rho,\psi\in{\mathbb N}^\ast.{\bf 1}$, $P_\rho$ and $P_\psi$ are isomorphic in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf wMod}$. \item Now, consider $\rho\in(-{\mathbb N}).{\bf 1}$ and $\psi\in{\mathbb N}^\ast.{\bf 1}$. From the analysis of the first two cases, it follows that the sets of weights of $N_\rho$ and $N_\psi$ are ${\mathbb N}^\ast.{\bf 1}$ and $(-{\mathbb N}).{\bf 1}$, respectively. Since we are in the generic setting, these sets are not equal. Hence, $N_\rho$ and $N_\psi$ are not isomorphic, implying that $P_\rho$ and $P_\psi$ are not isomorphic, neither. Using again both previous cases, the sets of weights of $S_\rho$ and $S_\psi$ are $(-{\mathbb N}).{\bf 1}$ and ${\mathbb N}^\ast.{\bf 1}$, respectively. Thus, $S_\rho$ and $S_\psi$ are not isomorphic. \end{enumerate} \end{enumerate} \end{subexample}
The next two theorems provide a description in terms of orbits of the isomorphism classes of projective and simple weight modules, respectively.
\begin{subtheorem} -- Let $\phi,\psi\in \widehat{R^\circ}$. \begin{enumerate} \item If the modules $P_\phi$ and $P_\psi$ are isomorphic, either in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf wMod}$ or in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf Mod}$, then ${\mathbb Z}.\phi = {\mathbb Z}.\psi$. \item If ${\mathbb Z}.\phi = {\mathbb Z}.\psi \neq{\mathbb Z}.{\bf 1}$, then $P_\phi$ and $P_\psi$ are isomorphic, both in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf wMod}$ and in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf Mod}$. \item If ${\mathbb Z}.\phi = {\mathbb Z}.\psi = {\mathbb Z}.{\bf 1}$, then $P_\phi$ and $P_\psi$ are isomorphic if and only if both $\phi$ and $\psi$ belong to $(-{\mathbb N}).{\bf 1}$ or they both belong to ${\mathbb N}^\ast.{\bf 1}$. \end{enumerate} \end{subtheorem}
\noindent {\it Proof.$\;$} The proof of the first item is clear since ${\mathbb Z}.\phi$ and ${\mathbb Z}.\psi$ are the sets of weights of $P_\phi$ and $P_\psi$, respectively. The other statements have already been proved.\qed
\begin{subtheorem} -- \label{iso-des-S} Let $\phi,\psi\in \widehat{R^\circ}$. \begin{enumerate} \item If $S_\phi$ and $S_\psi$ are isomorphic, either in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf wMod}$ or in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf Mod}$, then ${\mathbb Z}.\phi = {\mathbb Z}.\psi$. \item If ${\mathbb Z}.\phi = {\mathbb Z}.\psi \neq{\mathbb Z}.{\bf 1}$, then $S_\phi$ and $S_\psi$ are isomorphic. \item If ${\mathbb Z}.\phi = {\mathbb Z}.\psi = {\mathbb Z}.{\bf 1}$, then $S_\phi$ and $S_\psi$ are isomorphic if and only if both $\phi$ and $\psi$ belong to $(-{\mathbb N}).{\bf 1}$ or both of them belong to ${\mathbb N}^\ast.{\bf 1}$. \end{enumerate} \end{subtheorem}
\noindent {\it Proof.$\;$} Again the proof of the first item is clear since, by the previous results, the sets of weights of $S_\phi$ and $S_\psi$ are, respectively, subsets of ${\mathbb Z}.\phi$ and ${\mathbb Z}.\psi$. All the rest has already been proved.\qed
\begin{subcorollary} -- \label{iso-S-via-poids} Let $\phi,\psi\in\widehat{R^\circ}$. The following are equivalent: \begin{enumerate} \item $S_\phi$ and $S_\psi$ are isomorphic either in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf wMod}$ or in ${\mathcal B}_1^{\bar{q},\Lambda}-{\sf Mod}$; \item $S_\phi$ and $S_\psi$ have the same set of weights. \end{enumerate} \end{subcorollary}
\noindent {\it Proof.$\;$} Clearly, the first statement implies the second one. Now, let $w(S_\phi)$ and $w(S_\psi)$ be the sets of weights of $S_\phi$ and $S_\psi$, respectively, and suppose $w(S_\phi)=w(S_\psi)$. There are different possible cases:\\ {\em First case:} $\phi\notin{\mathbb Z}{\bf 1}$. Example \ref{exemple-hors-1} shows that $S_\phi=P_\phi$ and ${\mathbb Z}\phi=w(S_\phi)=w(S_\psi)$. But $\psi\in w(S_\psi)$, hence ${\mathbb Z}\phi={\mathbb Z}\psi$ and Theorem \ref{iso-des-S} proves that $S_\phi$ and $S_\psi$ are isomorphic.\\ {\em Second case:} $\phi\in(-{\mathbb N}){\bf 1}$. By Example \ref{exemple-1}, $(-{\mathbb N}){\bf 1}=w(S_\phi)=w(S_\psi)$. Hence, $\psi\in(-{\mathbb N}){\bf 1}$ and ${\mathbb Z}\phi={\mathbb Z}{\bf 1}$. Finally, Theorem \ref{iso-des-S} proves that $S_\phi$ and $S_\psi$ are isomorphic.\\ {\em Third case:} $\phi\in{\mathbb N}^\ast{\bf 1}$. The same argument as in the second case applies to prove that $S_\phi$ and $S_\psi$ are isomorphic.\qed
\subsection{Simple weight modules for arbitrary $n$}
Recall from the introduction of this section that ${\mathcal B}_n^{\bar{q},\Lambda}$ is identified with the $n$-fold tensor product of algebras ${\mathcal B}_1^{q_1} \otimes\dots\otimes {\mathcal B}_1^{q_n}$ and that, likewise, $R^\circ$ identifies with the tensor product of $n$ copies of $\Bbbk[z_1^{\pm 1}]$. In the rest of this section, we will freely use these identifications.
Notice further that we are in position to apply Proposition \ref{weight-in-tpa}.
Let $\phi\in\widehat{R^\circ}$. By Proposition \ref{weight-in-tpa}, under the above identification, there is a unique $n$-tuple $(\phi_1,\dots,\phi_n)$ of elements of $\widehat{\Bbbk[z_1^{\pm 1}]}$ such that $\phi=\phi_1\dots\phi_n$.
We are interested in two weight ${\mathcal B}_n^{\bar{q},\Lambda}$-modules, namely $S_\phi$ and $S_{\phi_1} \otimes\dots\otimes S_{\phi_n}.$
\begin{sublemma} -- \label{tenseur-S-simple} Let $(\phi_1,\dots,\phi_n)$ be an $n$-tuple of elements of $\widehat{\Bbbk[z_1^{\pm 1}]}$. The ${\mathcal B}_n^{\bar{q},\Lambda}$-module $S_{\phi_1} \otimes\dots\otimes S_{\phi_n}$ is simple. \end{sublemma}
\noindent {\it Proof.$\;$} Since $\bar{q}$ is generic, according to Proposition \ref{generic-weight-decomposition}, for any $i$, $1 \le i \le n$, the weight spaces of the ${\mathcal B}_1^{q_i}$-module $P_{\phi_i}$ are at most one-dimensional $\Bbbk$-subspaces; hence, the same holds for the quotient $S_{\phi_i}$. Therefore, the statement follows from Lemma \ref{shurian-simple}.\qed\\
\begin{subtheorem} -- \label{thm-simple} Suppose $\Lambda$ is the skew-symmetric $n\times n$-matrix with all entries equal to $1$ and suppose $\bar{q}$ is generic. \begin{enumerate} \item For every $n$-tuple $(\phi_1,\dots,\phi_n)$ of elements of $\widehat{\Bbbk[z_1^{\pm 1}]}$, the ${\mathcal B}_n^{\bar{q},\Lambda}$-module $S_{\phi_1} \otimes\dots\otimes S_{\phi_n}$ is a simple weight module and any simple weight ${\mathcal B}_n^{\bar{q},\Lambda}$-module is of that form. \item Let $(\phi_1,\dots,\phi_n)$ and $(\psi_1,\dots,\psi_n)$ be elements of $\left(\widehat{\Bbbk[z_1^{\pm 1}]}\right)^n$. The ${\mathcal B}_n^{\bar{q},\Lambda}$-modules $S_{\phi_1} \otimes\dots\otimes S_{\phi_n}$ and $S_{\psi_1} \otimes\dots\otimes S_{\psi_n}$ are isomorphic as weight modules if and only if, for all $1 \le i \le n$, $S_{\phi_i}$ and $S_{\psi_i}$ are isomorphic in ${\mathcal B}_1^{q_i} -{\sf wMod}$. \end{enumerate} \end{subtheorem}
\noindent {\it Proof.$\;$} The proof of the first item follows from Lemma \ref{tenseur-S-simple}, Proposition \ref{ubiquite-des-P},
and Corollary \ref{iso-L-tenseur-L}. We now prove the second statement. The {\em if} part of the equivalence is evident. Suppose now that $S_{\phi_1} \otimes\dots\otimes S_{\phi_n}$ and $S_{\psi_1} \otimes\dots\otimes S_{\psi_n}$ are isomorphic. Given $i$ such that $1 \le i \le n$, let $w(S_{\phi_i})$ and $w(S_{\psi_i})$ be the sets of weights of $S_{\phi_i}$ and $S_{\psi_i}$, respectively. It is clear that the set of weights of $S_{\phi_1} \otimes\dots\otimes S_{\phi_n}$ is the image under the bijective map $\left(\widehat{\Bbbk[z_1^{\pm 1}]}\right)^n \longrightarrow \widehat{R^\circ}$, $(\chi_1,\dots,\chi_n) \mapsto \chi_1\dots\chi_n$ --as defined in the second item of Proposition \ref{weight-in-tpa}-- of the subset $w(S_{\phi_1})\times\dots\times w(S_{\phi_n})$. Of course, the same holds for $S_{\psi_1} \otimes\dots\otimes S_{\psi_n}$. Hence, these images must be equal since $S_{\phi_1} \otimes\dots\otimes S_{\phi_n}$ and $S_{\psi_1} \otimes\dots\otimes S_{\psi_n}$ are isomorphic. But the latter map is bijective. So, for $1 \le i \le n$, $w(S_{\phi_i})=w(S_{\psi_i})$. Corollary \ref{iso-S-via-poids} yields the desired conclusion. \qed
\subsection{The case $n=2$}
In this short section we concentrate on the case $n=2$. Our aim is to illustrate how the case of an arbitrary $n$ can be understood on the basis of the case $n=1$. We do this by using graphs attached to the modules under consideration, following the approach of Section \ref{exemple-n=1}. The action of $x_1$ is displayed horizontally from left to right, the action of $y_1$ is displayed horizontally from right to left, the action of $x_2$ is displayed vertically from bottom to top and the action of $y_2$ is displayed vertically from top to bottom.
\begin{subexample} -- \rm Let $\phi \in \widehat{\Bbbk[z_1^{\pm 1},z_2^{\pm 1}]}$ and suppose ${\rm comp}(\phi)=\emptyset$.
In this case, the action of $x_1$, $y_1$, $x_2$, $y_2$ can be pictured as follows:
\tiny \[ \xymatrix@!C{ &&&&&&&&&&& \cr &&&&&&&&&&& \cr
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
\ar[rrrrrrrrrrrr] & & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
&&&&&&&&&&& \cr &&&&&& \ar[uuuuuuuuuuuu]&&&&&} \] \normalsize It is clear that in this graph, there is a path going from any vertex to the vertex $v_0$. It follows that any non-trivial submodule of $P_\phi$ must contain $v_0$ and hence must coincide with $P_\phi$. So, $P_\phi$ is simple. \end{subexample}
\begin{subexample} -- \rm Let $\phi \in \widehat{\Bbbk[z_1^{\pm 1},z_2^{\pm 1}]}$ and suppose ${\rm comp}(\phi)=\{1\}$. In this case, there exists an integer $\alpha_1$ such that $\phi(z_1) = q_1^{\alpha_1}$.
Let us suppose that $\alpha_1 \in {\mathbb N}$. The action of $x_1$, $y_1$, $x_2$, $y_2$ can be pictured as follows: \\
\tiny \[ \xymatrix@!C{ &&&&&&&&&&& \cr &&&&&&&&&&& \cr
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & v_\alpha \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
\ar[rrrrrrrrrrrr] & & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
&&&&&&&&&&& \cr &&&&&& \ar[uuuuuuuuuuuu]&&&&&} \] \normalsize
The graph above is clearly divided into two parts, delimited by the vertical line intersecting the $x$-axis at $x=\alpha_1 +1/2$. From any vertex located on the left-hand side of this line, there is a path to $v_0$. Hence, any submodule containing one such vertex must coincide with $P_\phi$. In addition, the $\Bbbk$-vector space with basis the vertices located to the right of this line is a proper submodule of $P_\phi$. It follows that this subspace must be the maximum among non-trivial submodules. That is, we have: \[
N_\phi = \bigoplus_{\{ k| k_1> \alpha_1\} } \Bbbk.v_k . \]
We let the interested reader deal with the case where $\alpha_1 < 0$. Clearly, the case where ${\rm comp}(\phi)=\{2\}$ is similar, interchanging horizontal and vertical directions. \end{subexample}
\begin{subexample} -- \rm Let $\phi \in \widehat{\Bbbk[z_1^{\pm 1},z_2^{\pm 1}]}$ and suppose ${\rm comp}(\phi)=\{1,2\}$. In this case, there exist integers $\alpha_1$ and $\alpha_2$ such that $\phi(z_1) = q_1^{\alpha_1}$, and $\phi(z_2) = q_2^{\alpha_2}$.
Let us suppose that $\alpha_1, \alpha_2 \in {\mathbb N}$. Then, the action of $x_1$, $y_1$, $x_2$, $y_2$ can be pictured as follows, where $\alpha=(\alpha_1,\alpha_2)$: \\
\tiny \[ \xymatrix@!C{ &&&&&&&&&&& \cr &&&&&&&&&&& \cr
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & v_\alpha \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
\ar[rrrrrrrrrrrr] & & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
& \dots & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \circ \ar@/^/[r]^{} \ar@/^/[l]^{} \ar@/^/[u]^{} \ar@/^/[d]^{} & \dots & \\
&&&&&&&&&&& \cr &&&&&& \ar[uuuuuuuuuuuu]&&&&&} \] \normalsize
In this case, the graph is again divided into two parts, delimited by the vertical line intersecting the $x$-axis at $x=\alpha_1 +1/2$ and the horizontal line intersecting the $y$-axis at $y = \alpha_2 + 1/2$.
From any vertex located on the left-hand side of the vertical line and below the horizontal one, there is a path to $v_0$. Hence, any submodule containing one such vertex must coincide with $P_\phi$. In addition, the $\Bbbk$-vector space with basis the vertices located either to the right of the vertical line or above the horizontal one is a proper submodule of $P_\phi$. It follows that this subspace must be the maximum among non-trivial submodules. That is, we have: \[
N_\phi = \bigoplus_{\{k \,|\, k_1 > \alpha_1 \text{ or } k_2 > \alpha_2\} } \Bbbk.v_k . \]
\end{subexample}
\section{Quantized Weyl algebras and Zhang twists} \label{QWa-et-ZT}
The aim of this section is to show how weight modules of the extension $(R^\circ,{\mathcal B}_n^{\bar{q},\Lambda})$ may be described using their analogues for the special case where $\Lambda$ has all its entries equal to $1$. The method that we use to reduce the general case to this special case is based on the notion of Zhang twist as introduced in \cite{Z}.
We first recall this notion, then show that ${\mathcal B}_n^{\bar{q},\Lambda}$ is indeed a Zhang twist of ${\mathcal B}_n^{\bar{q},(1)}$ and finally establish that all simple weight modules over ${\mathcal B}_n^{\bar{q},\Lambda}$ are in fact obtained as Zhang twists of simple weight modules over ${\mathcal B}_n^{\bar{q},(1)}$.\\
Throughout this section, we fix $\bar{q}=(q_1,\dots,q_n) \in (\Bbbk^\ast)^n$ and a skew-symmetric $n \times n$ matrix $\Lambda=(\lambda_{ij})$. It is clear from the definition of ${\mathcal A}_n^{\bar{q},\Lambda}$ that it is ${\mathbb Z}^n$-graded by \[ \deg(x_i) = e_i \qquad\mbox{and}\qquad \deg(y_i) = -e_i . \] From Proposition \ref{R-basis-of-A}, we get that the homogeneous part of degree $0$ with respect to this grading reduces to \[ R = \Bbbk[z_1,\dots,z_n]; \] the subalgebra of ${\mathcal A}_n^{\bar{q},\Lambda}$ generated by $z_i$, $1 \le i \le n$, the latter being algebraically independent.
\subsection{Some $\Bbbk$-algebra automorphisms of ${\mathcal A}_n^{\bar{q},(1)}$} \label{twisting-automorphisms}
Denote by $(1)$ the $n \times n$ skew-symmetric matrix with all entries equal to $1$. It is straightforward to check that, for $1 \le i \le n$, there are $\Bbbk$-algebra automorphisms $\tau_i$ of ${\mathcal A}_n^{\bar{q},(1)}$ such that \[ \begin{array}{ccrcl} \tau_i & : & {\mathcal A}_n^{\bar{q},(1)} & \longrightarrow & {\mathcal A}_n^{\bar{q},(1)} \cr & & x_j & \mapsto & \left\{ \begin{array}{rll} x_j & \mbox{whenever} & 1 \le j \le i, \cr \lambda_{ij}^{-1}x_j & \mbox{whenever} & i < j \le n. \cr \end{array} \right. \cr & & y_j & \mapsto & \left\{ \begin{array}{rll} y_j & \mbox{whenever} & 1 \le j \le i, \cr \lambda_{ij} y_j & \mbox{whenever} & i < j \le n. \cr \end{array} \right. \cr \end{array} \] Clearly, the above automorphisms pairwise commute. Hence, there is a morphism of groups as follows \[ \begin{array}{ccrcl}
& & {\mathbb Z}^n & \longrightarrow & {\rm Aut}_\Bbbk\left({\mathcal A}_n^{\bar{q},(1)}\right) \cr
& & e_i & \mapsto & \tau_i \end{array} \] It is also clear that the automorphisms $\tau_i$ are homogeneous of degree $0$ with respect to the ${\mathbb Z}^n$-grading of ${\mathcal A}_n^{\bar{q},(1)}$ defined in this section.
From now on, given $g\in{\mathbb Z}^n$, we let $\tau_g$ denote the $\Bbbk$-algebra automorphism -- ${\mathbb Z}^n$-homogeneous of degree $0$ -- that is the image of $g$ under the above group morphism. More precisely, if $g=(g_1,\dots,g_n)\in{\mathbb Z}^n$, we have that \[ \tau_g = \tau_1^{g_1} \dots \tau_n^{g_n}. \]
\subsection{Twisting of ${\mathcal A}_n^{\bar{q},(1)}$}\label{twisting-A} This subsection relies on Zhang twists as defined in \cite{Z}. However, we are interested in left modules rather than in right ones. So, the convenient context for us is that of \cite{RZ}; we will follow it. We will thus consider left twists (see \cite[Def. 1.2.1]{RZ}).
Clearly, $\tau=(\tau_g)_{g\in{\mathbb Z}^n}$ is a normalised left twisting system of ${\mathcal A}_n^{\bar{q},(1)}$ in the sense of \cite{RZ}. Thus, we may associate to $\tau$ and ${\mathcal A}_n^{\bar{q},(1)}$ the graded $\Bbbk$-algebra
${}^{\tau}\left({\mathcal A}_n^{\bar{q},(1)}\right)$. As a graded $\Bbbk$-vector space, ${}^{\tau}\left({\mathcal A}_n^{\bar{q},(1)}\right) = {\mathcal A}_n^{\bar{q},(1)}$, but the algebra structure is given by a new associative product, which we denote $\ast$, defined by: \[ y \ast z = \tau_g(y)z \] whenever $z$ is a homogeneous element of degree $g$ of the vector space ${\mathcal A}_n^{\bar{q},(1)}$ and $y$ is any element of ${\mathcal A}_n^{\bar{q},(1)}$. This new product has the same unit as the original one and $\left({}^{\tau}\left({\mathcal A}_n^{\bar{q},(1)}\right),\ast\right)$ is a ${\mathbb Z}^n$-graded $\Bbbk$-algebra with respect to the original grading.
At this stage, we have the following result.
\begin{subtheorem} --\label{TheoremA} There is a $\Bbbk$-algebra isomorphism ${\mathcal A}_n^{\bar{q},\Lambda} \longrightarrow {}^{\tau}\left({\mathcal A}_n^{\bar{q},(1)}\right)$ such that $x_i \mapsto x_i$ and $y_i \mapsto y_i$. \end{subtheorem}
\noindent {\it Proof.$\;$} We first check that there is a $\Bbbk$-algebra homomorphism as desired.
Let $1 \le i < j \le n$. We have that: \[ x_i \ast x_j - \lambda_{ij}x_j \ast x_i = \tau_j(x_i) x_j - \lambda_{ij} \tau_i(x_j) x_i = x_i x_j - \lambda_{ij}\lambda_{ji}x_j x_i = x_i x_j - x_jx_i = 0 ; \] \[ y_i \ast y_j - \lambda_{ij}y_j \ast y_i = \tau_j^{-1}(y_i) y_j - \lambda_{ij}\tau_i^{-1}(y_j) y_i = y_i y_j - \lambda_{ij}\lambda_{ji} y_j y_i = y_i y_j - y_jy_i = 0 ; \] \[ x_i \ast y_j - \lambda_{ji}y_j \ast x_i = \tau_j^{-1}(x_i) y_j - \lambda_{ji}\tau_i(y_j)x_i = x_i y_j - \lambda_{ji}\lambda_{ij}y_j x_i = x_i y_j - y_jx_i = 0 ; \] \[ y_i \ast x_j - \lambda_{ji}x_j \ast y_i = \tau_j(y_i) x_j - \lambda_{ji}\tau_i^{-1}(x_j) y_i = y_i x_j - \lambda_{ji}\lambda_{ij}x_j y_i = y_i x_j - x_jy_i = 0 . \] Now, given $1 \le i \le n$: \[ x_i \ast y_i - q_i y_i \ast x_i - 1 = \tau_i^{-1}(x_i) y_i - q_i \tau_i(y_i) x_i - 1 = x_i y_i - q_i y_i x_i - 1 = 0 . \] This establishes the existence of such a $\Bbbk$-algebra homomorphism. Note that this homomorphism sends the PBW basis of the $\Bbbk$-vector space ${\mathcal A}_n^{\bar{q},\Lambda}$ to the PBW basis of the $\Bbbk$-vector space ${\mathcal A}_n^{\bar{q},(1)}$. As a consequence, it is an isomorphism.\qed
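The following short Python sketch is not part of the argument; it merely illustrates the left twisting mechanism on a simplified, purely commutative toy model. Twisting the Laurent polynomial ring $\Bbbk[x_1^{\pm 1},\dots,x_n^{\pm 1}]$, graded by ${\mathbb Z}^n$, by the obvious analogues of the automorphisms $\tau_i$ of Subsection \ref{twisting-automorphisms} produces monomials satisfying $x_i \ast x_j = \lambda_{ij}\, x_j \ast x_i$ for $i < j$, which is the behaviour exploited in the proof above. All function names are ours and the numerical matrix entries are arbitrary.
\begin{verbatim}
# Toy model: left Zhang twist of the commutative Laurent polynomial ring,
# graded by Z^n.  A monomial c * x^e is stored as (c, e) with e a tuple in Z^n.
# lam is a multiplicatively skew-symmetric matrix: lam[i][j] * lam[j][i] == 1.

def tau(g, mono, lam):
    # tau_g = tau_1^{g_1} ... tau_n^{g_n}: tau_i scales x_j by lam[i][j]^{-1}
    # for j > i and fixes x_j for j <= i.
    coeff, exps = mono
    n = len(exps)
    for i in range(n):
        for j in range(i + 1, n):
            coeff *= lam[i][j] ** (-g[i] * exps[j])
    return (coeff, exps)

def star(a, b, lam):
    # Left twisted product a * b = tau_g(a) . b, where g = degree of b.
    cb, eb = b
    ca, ea = tau(eb, a, lam)
    return (ca * cb, tuple(u + v for u, v in zip(ea, eb)))

lam = [[1.0, 0.37], [1.0 / 0.37, 1.0]]           # n = 2, lambda_{12} = 0.37
x1, x2 = (1.0, (1, 0)), (1.0, (0, 1))
lhs, rhs = star(x1, x2, lam), star(x2, x1, lam)  # x1*x2 and x2*x1
assert lhs[1] == rhs[1] and abs(lhs[0] - lam[0][1] * rhs[0]) < 1e-12
print(lhs, rhs)                                  # x1*x2 = lambda_{12} x2*x1
\end{verbatim}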
\subsection{Twisting of ${\mathcal B}_n^{\bar{q},(1)}$.}\label{twisting-B} In this subsection, we extend the results of Subsection \ref{twisting-A} to the localisation ${\mathcal B}_n^{\bar{q},(1)}$ of ${\mathcal A}_n^{\bar{q},(1)}$.
Let $1 \le i \le n$. In the notation of paragraph \ref{twisting-automorphisms}, it is clear that $\tau_i(z_j)=z_j$, for all $1 \le j \le n$. It follows that the automorphism $\tau_i$ extends to an automorphism --still denoted $\tau_i$-- of ${\mathcal B}_n^{\bar{q},(1)}$. Hence, we get a morphism of groups \[ \begin{array}{ccrcl}
& & {\mathbb Z}^n & \longrightarrow & {\rm Aut}_\Bbbk\left({\mathcal B}_n^{\bar{q},(1)}\right) \cr
& & e_i & \mapsto & \tau_i. \end{array} \]
The same construction as in Subsection \ref{twisting-A} leads to the following result.
\begin{subtheorem} -- There is a $\Bbbk$-algebra isomorphism ${\mathcal B}_n^{\bar{q},\Lambda} \longrightarrow {}^{\tau}\left({\mathcal B}_n^{\bar{q},(1)}\right)$ such that $x_i \mapsto x_i$ and $y_i \mapsto y_i$. \end{subtheorem}
\subsection{The ${\mathbb Z}^n$-grading of $P_\phi$}
Let $\phi\in\widehat{\Bbbk[z_1^{\pm 1},\dots,z_n^{\pm 1}]}$. \\ Recall from Subsection \ref{base-de-P} that $P_{\phi}$ has a $\Bbbk$-basis $\{v_k,\, k\in{\mathbb Z}^n\}$. Consider the ${\mathbb Z}^n$-grading of the $\Bbbk$-vector space $P_\phi$ whose $k$-th component is $\Bbbk v_k$. We know, by Lemma \ref{action-xy}, that this grading is also a grading of the ${\mathcal B}_n^{\bar{q},\Lambda}$-module $P_\phi$. Hence, we can define the twist ${}^\tau (P_\phi)$ of the ${\mathcal B}_n^{\bar{q},(1)}$-module $P_\phi$. See \cite{RZ} for the definition of the twist of a module. It is worth mentioning at this point the following general fact: for any ${\mathbb Z}^n$-graded ${\mathcal B}_n^{\bar{q},(1)}$-module $M$, $M$ and ${}^\tau M$ have the same lattice of ${\mathbb Z}^n$-graded submodules. Using Theorem \ref{TheoremA}, we get that ${}^\tau (P_\phi)$ is naturally endowed with a structure of ${\mathcal B}_n^{\bar{q},\Lambda}$-module, by restriction of scalars.\\
Now, suppose that $\bar{q}$ is generic. By Prop. \ref{generic-weight-decomposition}, the weight decomposition of $P_\phi$ is \[ P_\phi = \bigoplus_{k\in{\mathbb Z}^n} \Bbbk.v_k \] and each line $\Bbbk.v_k$ is the weight space of $P_\phi$ of weight $\phi\circ\sigma_k$. That is: the weight decomposition of $P_\phi$ and its ${\mathbb Z}^n$-grading coincide. On the other hand, any submodule of $P_\phi$ is a weight submodule --cf. Proposition \ref{sous-objets-et-poids}. It follows that any submodule or quotient of $P_\phi$ is ${\mathbb Z}^n$-homogeneous. As a consequence, the quotient $S_\phi$ is again ${\mathbb Z}^n$-graded and we can define its twist ${}^\tau (S_\phi)$. Using Theorem \ref{TheoremA}, we get that ${}^\tau (S_\phi)$ is naturally endowed with a structure of ${\mathcal B}_n^{\bar{q},\Lambda}$-module, by restriction of scalars.
\subsection{An isomorphism of ${\mathcal B}_n^{\bar{q},\Lambda}$-modules}\label{iso-after-twist}
In this subsection we want to consider simultaneously ${\mathcal B}_n^{\bar{q},(1)}$-modules and ${\mathcal B}_n^{\bar{q},\Lambda}$-modules. Hence, to avoid any ambiguity, we introduce the following notation. For all $\phi\in\widehat{\Bbbk[z_1^{\pm 1},\dots,z_n^{\pm 1}]}=R^\circ$, let \[ P_\phi^{(1)} = {\mathcal B}_n^{\bar{q},(1)} \otimes_{R^\circ} \Bbbk_\phi \qquad\mbox{and}\qquad P_\phi^{\Lambda} = {\mathcal B}_n^{\bar{q},\Lambda} \otimes_{R^\circ} \Bbbk_\phi. \]
Likewise, we denote by $\{v_k^{(1)},\,k\in{\mathbb Z}^n\}$ the $\Bbbk$-basis of $P_\phi^{(1)}$ introduced in Subsection \ref{base-de-P} and by $\{v_k^{\Lambda},\,k\in{\mathbb Z}^n\}$ the corresponding $\Bbbk$-basis of $P_\phi^{\Lambda}$. Let us consider the $\Bbbk$-vector space morphism $\iota_\phi$ such that \[ \begin{array}{ccrcl} \iota_\phi & : & {}^\tau\left(P_\phi^{(1)}\right) & \longrightarrow & P_\phi^{\Lambda} \cr
& & v_k^{(1)}& \mapsto & v_k^{\Lambda}. \end{array} \] A direct application of Lemma \ref{action-xy} gives the following result.
\begin{subtheorem} -- The map $\iota_\phi$ is an isomorphism of ${\mathcal B}_n^{\bar{q},\Lambda}$-modules. \end{subtheorem}
Suppose now that $\bar{q}$ is generic. Analogously, we denote by $S_\phi^\Lambda$ the unique simple quotient of $P_\phi^\Lambda$ and by $S_\phi^{(1)}$ the unique simple quotient of $P_\phi^{(1)}$. Using the previous theorem, we have the following composition of ${\mathcal B}_n^{\bar{q},\Lambda}$-linear maps \[ {}^\tau \left(P_\phi^{(1)}\right) \stackrel{\iota_\phi}{\longrightarrow} P_\phi^{\Lambda} \stackrel{}{\longrightarrow} S_\phi^{\Lambda}, \] where the second map is the canonical projection. This composition is surjective, hence its kernel is a maximal proper ${\mathcal B}_n^{\bar{q},\Lambda}$-submodule of ${}^\tau \left(P_\phi^{(1)}\right)$. But the lattice of submodules of the ${}^{\tau}\left({\mathcal B}_n^{\bar{q},(1)}\right)$-module ${}^\tau \left(P_\phi^{(1)}\right)$ coincides with the lattice of submodules of the ${\mathcal B}_n^{\bar{q},(1)}$-module $P_\phi^{(1)}$. Hence, the kernel of the above map is $N_\phi^{(1)}$ --using an obvious notation. As a consequence, we get that $S_\phi^\Lambda$ is isomorphic to ${}^\tau \left(P_\phi^{(1)}\right)/N_\phi^{(1)}$ as a ${\mathcal B}_n^{\bar{q},\Lambda}$-module. That is, we have proved the following theorem.
\begin{subtheorem} -- There is an isomorphism of ${\mathcal B}_n^{\bar{q},\Lambda}$-modules between $S_\phi^\Lambda$ and ${}^\tau\left(S_\phi^{(1)}\right)$. \end{subtheorem}
We finish this section with an easy but useful observation.
\begin{subremark} -- \rm Let $M$ be a ${\mathbb Z}^n$-graded ${\mathcal B}_n^{\bar{q},(1)}$-module. Then, for all $\phi \in \widehat{\Bbbk[z_1^{\pm 1},\dots,z_n^{\pm 1}]}$, we have an equality \[ ({}^\tau M)(\phi) = M(\phi) . \] This is obvious since the automorphisms of the twisting system $\tau$ all act trivially on $z_1,\dots,z_n$.
It follows that the set of weights of ${}^\tau M$ coincides with the set of weights of $M$. \end{subremark}
\section{Classification of simple weight modules of ${\mathcal B}_n^{\bar{q},\Lambda}$ in the generic case} \label{classification}
In this final section, we collect the results of the previous ones to give an explicit description and a classification of the simple weight modules of the extension $(R^\circ,{\mathcal B}_n^{\bar{q},\Lambda})$ when $\bar{q}$ is generic.
We also show that the natural representation of ${\mathcal B}_n^{\bar{q},\Lambda}$ by skew differential operators on the appropriate quantum affine space is a simple weight module.\\
Throughout this section, we fix $\bar{q}=(q_1,\dots,q_n)\in(\Bbbk^\ast)^n$, which we assume to be generic, and a skew-symmetric matrix $\Lambda=(\lambda_{ij})$ with entries in $\Bbbk^\ast$. We let $\tau$ be the normalised left twisting system associated to $\Lambda$, as defined in Sections \ref{twisting-A} and \ref{twisting-B}.
As proven in Section \ref{twisting-B}, there is an algebra isomorphism ${\mathcal B}_n^{\bar{q},\Lambda} \longrightarrow {}^{\tau}\left({\mathcal B}_n^{\bar{q},(1)}\right)$ such that $x_i \mapsto x_i$ and $y_i \mapsto y_i$. Notice that, under the above isomorphism, the subalgebra of ${\mathcal B}_n^{\bar{q},\Lambda}$ generated by $z_1,\dots,z_n$ and their inverses is in one-to-one correspondence with the subalgebra of ${}^\tau\left({\mathcal B}_n^{\bar{q},(1)}\right)$ generated by $z_1,\dots,z_n$. Further, the latter subalgebra is in one-to-one correspondence with the subalgebra of ${\mathcal B}_n^{\bar{q},(1)}$ generated by $z_1,\dots,z_n$ by means of the identity map. Both subalgebras are Laurent polynomial algebras in $z_1,\dots,z_n$. By abuse of notation, we denote them all by $R^\circ$. \\
We keep the notation of Section \ref{iso-after-twist}.
\paragraph{\em Construction of simple weight modules.} Let $S$ be a simple weight module of ${\mathcal B}_n^{\bar{q},\Lambda}$.\\
By Proposition \ref{ubiquite-des-P}, there exists $\phi\in\widehat{R^\circ}$ such that \[ S \cong S_\phi^{\Lambda}. \] On the other hand, we deduce from Section \ref{iso-after-twist} that \[ S_\phi^{\Lambda} \cong {}^\tau\left( S_\phi^{(1)}\right) \] and $S_\phi^{\Lambda}$ and $S_\phi^{(1)}$ have the same set of weights.
Further, by Theorem \ref{thm-simple} (see also Corollary \ref{iso-L-tenseur-L}), if $(\phi_1,\dots,\phi_n)$ is the unique $n$-tuple of elements of $\widehat{\Bbbk[z_1^{\pm 1}]}$ such that $\phi=\phi_1 \dots \phi_n$, then \[ S_\phi^{(1)} \cong S_{\phi_1}^{(1)} \otimes\dots\otimes S_{\phi_n}^{(1)}. \] It is clear, in addition, that the set of weights of $S_{\phi_1}^{(1)} \otimes\dots\otimes S_{\phi_n}^{(1)}$ is the image under the map of Proposition \ref{weight-in-tpa} (2) of $w\left(S_{\phi_1}^{(1)}\right) \times\dots\times w\left(S_{\phi_n}^{(1)}\right)$, where, for $1 \le i \le n$, $w\left(S_{\phi_i}^{(1)}\right)$ denotes the set of weights of $S_{\phi_i}^{(1)}$. These sets of weights are described in Examples \ref{exemple-hors-1} and \ref{exemple-1} according to whether or not $\phi_i$ lies in the ${\mathbb Z}$-orbit of ${\bf 1}$. \\
Denote by ${\mathcal B}_n^{\bar{q},\Lambda}-{\sf wSimple}$ the set of simple weight ${\mathcal B}_n^{\bar{q},\Lambda}$-modules -- note that the previous discussion shows this collection is actually a set-- and by $\sim$ the equivalence relation on this set defined by isomorphism. We deduce from the arguments above that there is a surjective map \[ \kappa_n \, : \, \widehat{R^\circ} \longrightarrow {\mathcal B}_n^{\bar{q},\Lambda}-{\sf wSimple} \longrightarrow ({\mathcal B}_n^{\bar{q},\Lambda}-{\sf wSimple})/\sim \] that sends any $\phi\in \widehat{R^\circ}$ to the isomorphism class of the ${\mathcal B}_n^{\bar{q},\Lambda}$-module ${}^\tau \left(S_\phi^{(1)}\right)$.
\paragraph{\em Isomorphisms between simple weight modules.} Our final aim is to describe the fibres of the map $\kappa_n$.
Let $\phi,\psi\in \widehat{R^\circ}$. Since the weight decomposition and the ${\mathbb Z}^n$-grading coincide for $S_\phi^{(1)}$ and $S_\psi^{(1)}$, the ${\mathcal B}_n^{\bar{q},(1)}$-modules $S_\phi^{(1)}$ and $S_\psi^{(1)}$ are isomorphic if and only if the ${\mathcal B}_n^{\bar{q},\Lambda}$-modules ${}^\tau\left(S_\phi^{(1)}\right)$ and ${}^\tau\left(S_\psi^{(1)}\right)$ are isomorphic. But the former occurs if and only if $S_\phi^{(1)}$ and $S_\psi^{(1)}$ have the same set of weights (see, in particular, Corollary \ref{iso-S-via-poids} and Theorem \ref{thm-simple}). It follows from this that $\phi$ and $\psi$ have the same image under $\kappa_n$ if and only if the corresponding simple modules $S_\phi^{(1)}$ and $S_\psi^{(1)}$ have the same set of weights.
\begin{example} -- \rm {\bf The natural representation of ${\mathcal B}_n^{\bar{q},\Lambda}$.} As already discussed, ${\mathcal B}_n^{\bar{q},\Lambda}$ may be seen as a ring of $q$-difference operators on quantum spaces. We now describe this representation and show it is a simple weight module. For this, we follow \cite[Sec. 4.1]{R2}. Let $E_n^\Lambda$ be the $\Bbbk$-algebra generated by $y_i$, $1 \le i \le n$, subject to the relations $y_iy_j=\lambda_{ij} y_jy_i$, for all $1 \le i < j \le n$. This is a noetherian integral domain and we denote by $F_n^\Lambda$ its skew-field of fractions. It is easy to check that, for $1 \le i \le n$, there is a $\Bbbk$-algebra automorphism as follows \[ \begin{array}{ccrcl} \xi_i & : & F_n^\Lambda & \longrightarrow & F_n^\Lambda \cr
& & y_j & \mapsto & q_i^{\delta_{ij}}y_j \end{array} \] and that this automorphism restricts to an automorphism of $E_n^\Lambda$. Further, for $1 \le i \le n$, we consider the following $\Bbbk$-vector space endomorphisms of $F_n^\Lambda$: \[ \begin{array}{ccrcl} m_i & : & F_n^\Lambda & \longrightarrow & F_n^\Lambda \cr
& & f & \mapsto & y_i f \end{array} \quad\mbox{and}\quad \begin{array}{ccrcl} \partial_i & : & F_n^\Lambda & \longrightarrow & F_n^\Lambda \cr
& & f & \mapsto & (q_i-1)^{-1}y_i^{-1} (\xi_i(f)-f) \end{array} \] It is easy to see that, actually, the operators $m_i$ and $\partial_i$ stabilise $E_n^\Lambda$. It turns out that we have a $\Bbbk$-algebra morphism as follows: \[ \begin{array}{ccrcl}
& & {\mathcal B}_n^{\bar{q},\Lambda} & \longrightarrow & {\rm End}_\Bbbk(F_n^\Lambda) \cr
& & x_i & \mapsto& \partial_i \cr
& & y_i & \mapsto& m_i \cr
& & z_i & \mapsto& \xi_i \cr \end{array} \] (see \cite[Prop. 4.1.3]{R2}) which defines an action of ${\mathcal B}_n^{\bar{q},\Lambda}$ on $F_n^\Lambda$ and, by restriction, an action of ${\mathcal B}_n^{\bar{q},\Lambda}$ on $E_n^\Lambda$.
For $k=(k_1,\dots,k_n)\in{\mathbb N}^n$, let $y^k=y_1^{k_1} \dots y_n^{k_n} \in E_n^\Lambda$. As is well known, the family $\{y^k,\, k\in{\mathbb N}^n\}$ is a basis of the vector space $E_n^\Lambda$. Further, for $k\in{\mathbb N}^n$, it is easy to see that $y^k$ is a weight vector of weight $-k{\bf 1}$, in the notation of Section \ref{action-B} and of the introduction to Section \ref{section-exemples}. It is also easy to verify that the ${\mathcal B}_n^{\bar{q},\Lambda}$-module $E_n^\Lambda$ is simple.
But, on the other hand, we know that the set of weights of $S_{\bf 1}^{\Lambda}$ is $(-{\mathbb N})^n{\bf 1}$. This shows that $E_n^\Lambda$ and $S_{\bf 1}^{\Lambda}$ are two simple weight modules of ${\mathcal B}_n^{\bar{q},\Lambda}$ with the same set of weights. Hence, they are isomorphic. \end{example}
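For the reader who wishes to experiment, the following Python sketch (ours, not taken from \cite{R2}) checks numerically, in the simplest case $n=1$ where $E_1$ is the ordinary polynomial ring $\Bbbk[y]$, that the operators $\xi$, $m$ and $\partial$ defined in the example above satisfy $\partial\, m - q\, m\, \partial = {\rm id}$, that is, the defining relation $xy - qyx = 1$ of ${\mathcal B}_1^{q}$. The numerical value of $q$ and the test polynomial are arbitrary.
\begin{verbatim}
# Numerical check of the q-difference representation in the case n = 1.
# A polynomial f = sum_k c_k y^k is stored as its coefficient list
# [c_0, c_1, ...].  With xi(f)(y) = f(q*y), m(f) = y*f and
# d(f) = ((q-1)*y)^(-1) * (xi(f) - f), one must have
# d(m(f)) - q * m(d(f)) = f, i.e. the relation x y - q y x = 1.

q = 1.7                        # an arbitrary scalar standing in for a generic q

def xi(f):                     # f(y) |-> f(q y)
    return [c * q ** k for k, c in enumerate(f)]

def m(f):                      # multiplication by y
    return [0.0] + list(f)

def d(f):                      # ((q-1) y)^{-1} (xi(f) - f)
    g = [a - b for a, b in zip(xi(f), f)]   # xi(f) - f, constant term is 0
    return [c / (q - 1.0) for c in g[1:]]   # divide by (q-1) and shift down by y

f = [2.0, -1.0, 0.5, 3.0]      # 2 - y + 0.5 y^2 + 3 y^3
lhs = [a - q * b for a, b in zip(d(m(f)), m(d(f)))]
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, f))
print(lhs)                     # coincides with the coefficient list of f
\end{verbatim}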
\footnotesize \noindent V. F.: \\Departamento de Matem\'atica\\ Universidade de S\~ao Paulo, S\~ao Paulo, Brasil.\\ {\tt [email protected]}
\noindent L. R.: \\Laboratoire Analyse, G\'eom\'etrie et Applications LAGA, UMR 7539 du CNRS,\\ Institut Galil\'ee,\\ Universit\'e Paris Nord \\ Villetaneuse, France.\\ {\tt [email protected]}
\noindent A. S.: \\IMAS-CONICET y Departamento de Matem\'atica,
Facultad de Ciencias Exactas y Naturales,\\
Universidad de Buenos Aires, \\Ciudad Universitaria, Pabell\'on 1\\ 1428, Buenos Aires, Argentina. \\{\tt [email protected]}
\end{document}
In the triangle, $\angle A=\angle B$. What is $x$? [asy]
draw((.5,0)--(3,2)--(0,1)--cycle);
label("$A$",(.5,0),S);
label("$B$",(0,1),W);
label("$C$",(3,2),NE);
label("$3x-1$",(1.75,1),SE);
label("$2x+2$",(1.5,1.5),NNW);
label("$x+4$",(.25,.5),WSW);
[/asy]
Since $\angle A=\angle B$, we know that $\triangle ABC$ is isosceles with the sides opposite $A$ and $B$ equal. Therefore, $$2x+2 = 3x-1.$$ Solving this equation gives $x=\boxed{3}$.
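A quick symbolic check of the final equation (the import and variable name are ours):

    # Solve 2x + 2 = 3x - 1 for x.
    import sympy as sp
    x = sp.symbols('x')
    print(sp.solve(sp.Eq(2 * x + 2, 3 * x - 1), x))  # [3]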
Equidistribution theorem
In mathematics, the equidistribution theorem is the statement that the sequence
a, 2a, 3a, ... mod 1
is uniformly distributed on the circle $\mathbb {R} /\mathbb {Z} $, when a is an irrational number. It is a special case of the ergodic theorem where one takes the normalized angle measure $\mu ={\frac {d\theta }{2\pi }}$.
History
While this theorem was proved in 1909 and 1910 separately by Hermann Weyl, Wacław Sierpiński and Piers Bohl, variants of this theorem continue to be studied to this day.
In 1916, Weyl proved that the sequence a, 22a, 32a, ... mod 1 is uniformly distributed on the unit interval. In 1937, Ivan Vinogradov proved that the sequence pn a mod 1 is uniformly distributed, where pn is the nth prime. Vinogradov's proof was a byproduct of the odd Goldbach conjecture, that every sufficiently large odd number is the sum of three primes.
George Birkhoff, in 1931, and Aleksandr Khinchin, in 1933, proved that the generalization x + na, for almost all x, is equidistributed on any Lebesgue measurable subset of the unit interval. The corresponding generalizations for the Weyl and Vinogradov results were proven by Jean Bourgain in 1988.
Specifically, Khinchin showed that the identity
$\lim _{n\to \infty }{\frac {1}{n}}\sum _{k=1}^{n}f((x+ka){\bmod {1}})=\int _{0}^{1}f(y)\,dy$
holds for almost all x and any Lebesgue integrable function ƒ. In modern formulations, it is asked under what conditions the identity
$\lim _{n\to \infty }{\frac {1}{n}}\sum _{k=1}^{n}f((x+b_{k}a){\bmod {1}})=\int _{0}^{1}f(y)\,dy$
might hold, given some general sequence bk.
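As a purely illustrative numerical sketch (not part of the original article), the Python snippet below estimates the ergodic average on the left-hand side of the first identity for bk = k, with the arbitrary choices f(y) = y^2, x = 0.3 and a = sqrt(2), and compares it with the integral of f over [0, 1], namely 1/3; the names are ours.

    # Ergodic average (1/n) * sum_{k=1..n} f((x + k a) mod 1) versus 1/3.
    import math

    def f(y):
        return y * y

    a = math.sqrt(2.0)   # an irrational rotation number
    x = 0.3              # an arbitrary starting point
    for n in (10, 1000, 100000):
        avg = sum(f((x + k * a) % 1.0) for k in range(1, n + 1)) / n
        print(n, avg, abs(avg - 1.0 / 3.0))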
One noteworthy result is that the sequence 2ka mod 1 is uniformly distributed for almost all, but not all, irrational a. Similarly, for the sequence bk = 2ka, for every irrational a, and almost all x, there exists a function ƒ for which the sum diverges. In this sense, this sequence is considered to be a universally bad averaging sequence, as opposed to bk = k, which is termed a universally good averaging sequence, because it does not have the latter shortcoming.
A powerful general result is Weyl's criterion, which shows that equidistribution is equivalent to having a non-trivial estimate for the exponential sums formed with the sequence as exponents. For the case of multiples of a, Weyl's criterion reduces the problem to summing finite geometric series.
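A small numerical illustration of Weyl's criterion (again ours, not part of the article): for the sequence ka mod 1 the normalized exponential sums (1/N) Σ_{k≤N} exp(2πi h k a) are finite geometric series and should tend to 0 for every nonzero integer h, which is exactly what the criterion requires.

    # Weyl sums for the sequence k*a mod 1 with a = sqrt(2); for each fixed
    # nonzero integer h the normalized sum should shrink as N grows.
    import cmath, math

    a = math.sqrt(2.0)
    for h in (1, 2, 3):
        for N in (100, 10000):
            s = sum(cmath.exp(2j * math.pi * h * k * a) for k in range(1, N + 1)) / N
            print(h, N, abs(s))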
See also
• Diophantine approximation
• Low-discrepancy sequence
• Dirichlet's approximation theorem
• Three-gap theorem
References
Historical references
• P. Bohl, (1909) Über ein in der Theorie der säkularen Störungen vorkommendes Problem, J. reine angew. Math. 135, pp. 189–283.
• Weyl, H. (1910). "Über die Gibbs'sche Erscheinung und verwandte Konvergenzphänomene". Rendiconti del Circolo Matematico di Palermo. 330: 377–407. doi:10.1007/bf03014883. S2CID 122545523.
• W. Sierpinski, (1910) Sur la valeur asymptotique d'une certaine somme, Bull Intl. Acad. Polonaise des Sci. et des Lettres (Cracovie) series A, pp. 9–11.
• Weyl, H. (1916). "Ueber die Gleichverteilung von Zahlen mod. Eins". Math. Ann. 77 (3): 313–352. doi:10.1007/BF01475864. S2CID 123470919.
• Birkhoff, G. D. (1931). "Proof of the ergodic theorem". Proc. Natl. Acad. Sci. U.S.A. 17 (12): 656–660. Bibcode:1931PNAS...17..656B. doi:10.1073/pnas.17.12.656. PMC 1076138. PMID 16577406.
• Ya. Khinchin, A. (1933). "Zur Birkhoff's Lösung des Ergodensproblems". Math. Ann. 107: 485–488. doi:10.1007/BF01448905. S2CID 122289068.
Modern references
• Joseph M. Rosenblatt and Máté Weirdl, Pointwise ergodic theorems via harmonic analysis, (1993) appearing in Ergodic Theory and its Connections with Harmonic Analysis, Proceedings of the 1993 Alexandria Conference, (1995) Karl E. Petersen and Ibrahim A. Salama, eds., Cambridge University Press, Cambridge, ISBN 0-521-45999-0. (An extensive survey of the ergodic properties of generalizations of the equidistribution theorem of shift maps on the unit interval. Focuses on methods developed by Bourgain.)
• Elias M. Stein and Rami Shakarchi, Fourier Analysis. An Introduction, (2003) Princeton University Press, pp 105–113 (Proof of Weyl's theorem based on Fourier analysis)
What is $62_7+34_5$ when expressed in base 10?
After converting both numbers to base 10, we add the values. We get $62_7=6\cdot7^1+2\cdot7^0=42+2=44$ and $34_5=3\cdot5^1+4\cdot5^0=15+4=19$. The sum is $44+19=\boxed{63}$.
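A one-line check using Python's built-in base-aware integer parser:

    # int(s, b) parses the string s as a base-b numeral.
    print(int("62", 7), int("34", 5), int("62", 7) + int("34", 5))  # 44 19 63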
Continuity and Differentiability
1 Prove that the function $f(x) = 5x - 3$ is continuous at $x = 0$, at $x = -3$ and at $x = 5$.
The given function is $f(x) = 5x - 3$.
At $x = 0$: $f(0) = 5 \cdot 0 - 3 = -3$ and $\lim\limits_{x \to 0} f(x) = \lim\limits_{x \to 0} (5x - 3) = -3$, so $\lim\limits_{x \to 0} f(x) = f(0)$. Therefore, $f$ is continuous at $x = 0$.
At $x = -3$: $f(-3) = 5 \cdot (-3) - 3 = -18$ and $\lim\limits_{x \to -3} f(x) = \lim\limits_{x \to -3} (5x - 3) = -18$, so $\lim\limits_{x \to -3} f(x) = f(-3)$. Therefore, $f$ is continuous at $x = -3$.
At $x = 5$: $f(5) = 5 \cdot 5 - 3 = 22$ and $\lim\limits_{x \to 5} f(x) = \lim\limits_{x \to 5} (5x - 3) = 22$, so $\lim\limits_{x \to 5} f(x) = f(5)$. Therefore, $f$ is continuous at $x = 5$.
2 Examine the continuity of the function $f(x)=2x^2 -1 $ at $x=3$
The given function is $f(x) = 2x^2 - 1$.
At $x = 3$: $f(3) = 2 \cdot 3^2 - 1 = 17$ and $\lim\limits_{x \to 3} f(x) = \lim\limits_{x \to 3} (2x^2 - 1) = 2 \cdot 3^2 - 1 = 17$, so $\lim\limits_{x \to 3} f(x) = f(3)$. Thus, $f$ is continuous at $x = 3$.
3 Examine the following functions for continuity.
a) $f(x) = x - 5$
b) $f(x) = \dfrac{1}{x-5}, \ x \neq 5$
c) $f(x) = \dfrac{x^2-25}{x+5}, \ x \neq -5$
d) $f(x) = |x - 5|$
The given function is $f(x) =|x-5|=\begin{cases} 5-x & \quad \text{if } x<5\\ x-5 & \quad \text{if } x\geq 5 \end{cases}$$\\$ This function $f $ is defined at all points of the real line.$\\$ Let $c$ be a point on a real line. Then,$c<5 O r c=5 Or c>5$$\\$ case $I:c<5$$\\$ Then, $f(c)=5-c\\ \lim\limits_{x \to c}f(x)=\lim\limits_{x\to c}(5-x)=5-c\\ \therefore \lim\limits_{x \to c} f(x)=f(c)$$\\$ Therefore,$f$ is continuous at all real numbers less than $5$.$\\$ case $II: c=5$$\\$ Then ,$ f(c)=f(5)=(5-5)=0\\ \lim\limits_{x \to 5^-}f(x)=\lim\limits_{x \to 5} (5-x)=(5-5)=0\\ \lim\limits_{x \to 5^+}f(x)=\lim\limits_{x \to 5}(x-5)=0\\ \therefore \lim\limits_{x \to c^-}f(x)=\lim\limits_{x\to c^+}f(x)=f(c)$$\\$ Therefore ,$ f$ is contitinuous at $ x=5$$\\$
a) The given function is $f (x) = x - 5$$\\$ It is evident that $f$ is defined at every real number k and its value at $k$ is $k -5 .$$\\$ It is also observed that$ \lim\limits _{x \to k}f(x)=\lim\limits_{x \to k}(x-5)=k=k-5=f(k)\\ \therefore \lim\limits _{x\to k}f(x)=f(k)$$\\$ Hence ,$f$ is continuous at every real number and therefore, it is a continuous function.
$c)$ The given function is $ f(x) =\dfrac{x^2-25}{x+5},x\neq -5$$\\$ For any real number $c \neq - 5$ , we obtain $\\$ $ \lim\limits_{x \to c}(x) =\lim\limits_{x \to c}(\dfrac{x62-25}{x+5})=\lim\limits_{x \to c}\dfrac{(x+5)(x-5)}{x+5} \\ =\lim\limits_{x \to c}(x-5) =(c-5)$$\\$ Also ,$ f(c) =\dfrac{(c+5)(c-5)}{c+5} =c(c-5)(as c \neq 5)\\ \therefore \lim\limits_{x\to c}f(x)=f(c)$$\\$ Hence $f$ is continuous at every point in the domain of $f$ and therefore. It is continuous function.
case $III: c>5$$\\$ Then,$f(c)=f(5) =c-5\\ \lim\limits_{x\to c}f(x)=\lim\limits_{x \to c}f(x-5)=c-5\\ \therefore \lim\limits_{x \to c}f(x)=f(c)$$\\$ Therefore, $f$ is continuous at real numbers greater than $5$.$\\$ Hence, $f$ is continuous at every real number and therefore, it is a continuous function.
$ b) $The given function is $f(x)=\dfrac{1}{x-5},x \neq 5$$\\$ for any real number $k \neq 5$ , we obtain$\\$ $ \lim\limits _{x \to k}f(x)=\lim\limits_{x\to k}\dfrac{1}{x-5}=\dfrac{1}{k-5}$$\\$ Also,$f(k) =\dfrac{1}{k-5} \ \ \ \ \ \ \ \ \ \ \ \ \ (As k\neq 5)\\ \therefore \lim\limits _{x\to k}f(x)=f(k)$$\\$ Hence, $f $ is continuous at every point in the domain of $f$ and therefore, it is a continuous function.
4 Prove that the function $f(x) = x^n$ is continuous at $x = n$, where $n$ is a positive integer.
The given function is $f(x) = x^n$. It is evident that $f$ is defined at all positive integers $n$, and its value at $n$ is $n^n$. Then, $\lim\limits_{x \to n} f(x) = \lim\limits_{x \to n} x^n = n^n$, so $\lim\limits_{x \to n} f(x) = f(n)$. Therefore, $f$ is continuous at $x = n$, where $n$ is a positive integer.
5 Is the function $f$ defined by $f(x)=\begin{cases} x & \quad \text{if } x \leq 1\\ 5 & \quad \text{if } x > 1 \end{cases}$ continuous at $x = 0$? At $x = 1$? At $x = 2$?
The given function is $f(x)=\begin{cases} x & \quad \text{if } x \leq 1\\ 5 & \quad \text{if } x > 1 \end{cases}$.
At $x = 0$: $f$ is defined at $0$ and $f(0) = 0$. Then $\lim\limits_{x \to 0} f(x) = \lim\limits_{x \to 0} x = 0$, so $\lim\limits_{x \to 0} f(x) = f(0)$. Therefore, $f$ is continuous at $x = 0$.
At $x = 1$: $f$ is defined at $1$ and $f(1) = 1$. The left-hand limit of $f$ at $x = 1$ is $\lim\limits_{x \to 1^-} f(x) = \lim\limits_{x \to 1^-} x = 1$, and the right-hand limit is $\lim\limits_{x \to 1^+} f(x) = \lim\limits_{x \to 1^+} 5 = 5$. Since $\lim\limits_{x \to 1^-} f(x) \neq \lim\limits_{x \to 1^+} f(x)$, $f$ is not continuous at $x = 1$.
At $x = 2$: $f$ is defined at $2$ and $f(2) = 5$. Then $\lim\limits_{x \to 2} f(x) = \lim\limits_{x \to 2} 5 = 5$, so $\lim\limits_{x \to 2} f(x) = f(2)$. Therefore, $f$ is continuous at $x = 2$.
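A quick numerical look (ours, not part of the NCERT text) at the behaviour of $f$ near the three points; the jump at $x = 1$ and the matching values at $x = 0$ and $x = 2$ are visible directly:

    # Evaluate f just below, at, and just above each test point.
    def f(t):
        return t if t <= 1 else 5

    eps = 1e-6
    for p in (0, 1, 2):
        print(p, f(p - eps), f(p), f(p + eps))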
6 Find all points of discontinuity of $f$, where $f$ is defined by $f(x)= \begin{cases} 2x+3 & \quad \text{if } x \leq 2\\ 2x-3 & \quad \text{if } x > 2 \end{cases}$
The given function is $f(x)= \begin{cases} 2x+3 & \quad \text{if } x \leq 2\\ 2x-3 & \quad \text{if } x > 2 \end{cases}$. It is defined at all points of the real line. Let $c$ be a point on the real line; then three cases arise: (i) $c < 2$, (ii) $c > 2$, (iii) $c = 2$.
Case (i): $c < 2$. Then $f(c) = 2c + 3$ and $\lim\limits_{x \to c} f(x) = \lim\limits_{x \to c} (2x+3) = 2c + 3$, so $\lim\limits_{x \to c} f(x) = f(c)$. Therefore, $f$ is continuous at all points $x$ such that $x < 2$.
Case (ii): $c > 2$. Then $f(c) = 2c - 3$ and $\lim\limits_{x \to c} f(x) = \lim\limits_{x \to c} (2x-3) = 2c - 3$, so $\lim\limits_{x \to c} f(x) = f(c)$. Therefore, $f$ is continuous at all points $x$ such that $x > 2$.
Case (iii): $c = 2$. The left-hand limit of $f$ at $x = 2$ is $\lim\limits_{x \to 2^-} f(x) = \lim\limits_{x \to 2^-} (2x+3) = 2 \cdot 2 + 3 = 7$, and the right-hand limit is $\lim\limits_{x \to 2^+} f(x) = \lim\limits_{x \to 2^+} (2x-3) = 2 \cdot 2 - 3 = 1$. These do not coincide, so $f$ is not continuous at $x = 2$.
Hence, $x = 2$ is the only point of discontinuity of $f$.
7 Find all points of discontinuity of $f$ , where $f$ is defined by$\\$ $f(x)=\begin{cases} |x|+3 & \quad \text{if } x\leq -3\\ -2x & \quad \text{if } -3 < x < 3\\ 6x+2& \quad \text{if } x \geq 3 \end{cases}$
The given function $f$ is $f(x)=\begin{cases} |x|+3 & \quad \text{if } x\leq -3\\ -2x & \quad \text{if } -3 < x < 3\\ 6x+2& \quad \text{if } x \geq 3 \end{cases}$$\\$ The given function $f$ is defined at all the points of the real line. Let $c$ be a point on the real line.$\\$ Case I.$\\$ If $ c < -3,$ then $f(c)=-c+3$$\\$ $\lim\limits_{x \to c} f(x)=\lim\limits_{x \to c}(-x+3)=-c+3\\ \therefore \lim\limits_{x \to c} f(x)=f(c)$$\\$ Therefore, $f$ is continuous at all points $x$ such that $x < -3$$\\$ Case II.$\\$ If $c=-3,$ then $ f(-3)=-(-3)+3=6\\ \lim\limits_{x \to -3^-} f(x)=\lim\limits_{x \to -3^-}(-x+3)=-(-3)+3=6\\ \lim\limits_{x \to -3^+} f(x) =\lim\limits_{x \to -3^+} (-2x) =-2\times(-3)=6\\ \therefore \lim\limits_{x \to -3 } f(x)=f(-3)$$\\$ Therefore, $f$ is continuous at $x=- 3$$\\$ Case III.$\\$ If $ -3 < c < 3$, then $f(c)=-2c $ and$\\$ $\lim\limits_{x \to c} f(x)=\lim\limits_{x \to c}(-2x)=-2c\\ \therefore \lim\limits_{x \to c} f(x)=f(c)$$\\$ Therefore, $f$ is continuous in $(- 3,3 )$ .$\\$ Case IV.$\\$ If $c = 3,$ then the left hand limit of $f$ at $x = 3$ is$\\$ $\lim\limits_{x \to 3^-} f(x)=\lim\limits_{x \to 3^-} (-2x)=-2\times 3=-6$$\\$ The right hand limit of $f$ at $x =3$ is$\\$ $\lim\limits_{x \to 3^+} f(x) =\lim\limits_{x \to 3^+} (6x+2)=6\times 3+2=20$$\\$ It is observed that the left and right hand limits of $f$ at $x = 3 $ do not coincide. Therefore, $f$ is not continuous at $x = 3$$\\$ Case V.$\\$ If $c > 3$ , then $f ( c)=6 c + 2$ and $\\$ $\lim\limits_{x \to c} f(x) =\lim\limits_{x \to c} (6x+2)=6c+2\\ \therefore \lim\limits_{x \to c} f(x)= f(c)$$\\$ Therefore, $f $ is continuous at all points $x$ such that $x > 3$$\\$ Hence, $x = 3$ is the only point of discontinuity of $f$ .
8 Find all points of discontinuity of $f$ , where $f$ is defined by $f ( x )= \begin{cases} \dfrac{|x|}{x} & \quad \text{if } x\neq 0\\ 0 & \quad \text{if } x=0 \end{cases}$
The given function $f$ is defined by $f(x)=\begin{cases} \dfrac{|x|}{x} & \quad \text{if } x\neq 0\\ 0 & \quad \text{if } x=0 \end{cases}$$\\$ It is known that $x < 0 \Rightarrow |x| =-x $ and $ x > 0 \Rightarrow |x| =x$$\\$ Therefore, the given function can be rewritten as $\\$ $f(x)= \begin{cases} \dfrac{|x|}{x}=\dfrac{-x}{x}=-1 & \quad \text{if } x < 0\\ 0 & \quad \text{if } x=0\\ \dfrac{|x|}{x}=\dfrac{x}{x}=1 & \quad \text{if } x > 0 \end{cases}$$\\$ The given function $f$ is defined at all the points of the real line.$\\$ Let $c$ be a point on the real line.$\\$ Case I.$\\$ If $c < 0,$ then $f(c)=-1$$\\$ $\lim\limits_{x \to c}f(x)=\lim\limits_{x \to c}(-1)=-1\\ \therefore \lim\limits_{x \to c} f(x) =f(c)$$\\$ Therefore, $f$ is continuous at all points $x < 0$$\\$ Case II.$\\$ If $ c=0,$ then the left hand limit of $f$ at $x = 0$ is$\\$ $\lim\limits_{x \to 0^-} f(x)= \lim\limits_{x \to 0^-}(-1)=-1$$\\$ The right hand limit of $f$ at $x = 0$ is$\\$ $\lim\limits_{x \to 0^+} f(x)= \lim\limits_{x \to 0^+}(1)=1$$\\$ It is observed that the left and right hand limits of $f$ at $x = 0$ do not coincide. Therefore, $f$ is not continuous at $x = 0$$\\$ Case III. $\\$ If $ c > 0,$ then $f(c)=1$$\\$ $\lim\limits_{x \to c}f(x)=\lim\limits_{x \to c} (1) =1\\ \therefore \lim\limits_{x \to c}f(x)=f(c)$$\\$ Therefore, $f$ is continuous at all points $x$ such that $x > 0$$\\$ Hence, $x = 0$ is the only point of discontinuity of $f$ .
9 Find all points of discontinuity of $f $, where $f$ is defined by $f (x)=\begin{cases} \dfrac{x}{|x|} & \quad \text{if }x < 0\\ -1 & \quad \text{if } x \geq 0 \end{cases}$
The given function $f$ is $f(x)=\begin{cases} \dfrac{x}{|x|} & \quad \text{if }x < 0\\ -1 & \quad \text{if } x \geq 0 \end{cases}$$\\$ It is known that $ x < 0 \Rightarrow |x| =- x$. Therefore, the given function can be rewritten as$\\$ $f(x)=\begin{cases} \dfrac{x}{|x|}=\dfrac{x}{-x}=-1 & \quad \text{if }x < 0\\ -1 & \quad \text{if } x \geq 0 \end{cases}$$\\$ $\Rightarrow f(x)=-1$ for all $ x \in R$$\\$ Let $c$ be any real number. Then, $\\$ $\lim\limits_{x \to c} f(x)=\lim\limits_{x\to c}(-1)=-1$$\\$ Also, $ f(c)=-1= \lim\limits_{x \to c}f(x)$$\\$ Therefore, the given function is a continuous function. Hence, the given function has no point of discontinuity.
10 Find all the points of discontinuity of $f$ , where $f$ is defined by $f(x)=\begin{cases} x+1 & \quad \text{if } x \geq 1\\ x^2+1 & \quad \text{if } x < 1 \end{cases}$
The given function $f$ is $f(x)=\begin{cases} x+1 & \quad \text{if } x \geq 1\\ x^2+1 & \quad \text{if } x < 1 \end{cases}$ $\\$ The given function $f$ is defined at all the points of the real line. Let $c$ be a point on the real line.$\\$ Case I.$\\$ If $ c < 1, $ then $f(c)=c^2+1$ and $\\$ $\lim\limits_{x \to c} f(x)=\lim\limits_{x \to c}(x^2+1)=c^2+1\\ \therefore \lim\limits_{x \to c} f(x)=f(c)$$\\$ Therefore, $ f$ is continuous at all points $x$ such that $x < 1$$\\$ Case II.$\\$ If $c = 1$ , then $f (c)= f ( 1)= 1 + 1 = 2$. The left hand limit of $f$ at $x = 1$ is$\\$ $\lim\limits_{x \to 1^-} f(x)=\lim\limits_{x \to 1^-}(x^2+1)=1^2+1=2$$\\$ The right hand limit of $f$ at $x = 1$ is$\\$ $\lim\limits_{x \to 1^+}f(x)=\lim\limits_{x \to 1^+}(x+1)=1+1=2\\ \therefore \lim\limits_{x \to 1} f(x)=f(1)$$\\$ Therefore, $ f$ is continuous at $x = 1$$\\$ Case III.$\\$ If $ c> 1$, then $ f(c)=c+1$$\\$ $\lim\limits_{x \to c} f(x)=\lim\limits_{x \to c}(x+1)=c+1\\ \therefore \lim\limits_{x \to c} f(x)=f(c)$$\\$ Therefore, $ f$ is continuous at all points $x$ such that $x > 1$$\\$ Hence, the given function $f$ has no points of discontinuity.
\begin{document}
\title{On the size of maximal intersecting families} \author{Dmitrii Zakharov \thanks{Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA, Email: {\tt [email protected]}.}} \date{}
\maketitle
\begin{abstract}
We show that an $n$-uniform maximal intersecting family has size at most $e^{-n^{0.5+o(1)}}n^n$. This improves a recent bound by Frankl \cite{F}. The Spread Lemma of Alweiss, Lovett, Wu and Zhang \cite{Al} plays an important role in the proof. \end{abstract}
\section{Introduction}
A family $\mathcal F$ of finite sets is called \emph{intersecting} if any two sets from $\mathcal F$ have a non-empty intersection. A family $\mathcal F$ is called $n$-uniform if every member of $\mathcal F$ has cardinality $n$. Suppose that $\mathcal F$ is an $n$-uniform intersecting family which is {\it maximal}, i.e. for any $n$-element set $F \not \in \mathcal F$ the family $\mathcal F \cup \{F\}$ is not intersecting. Note that the ground set of $\mathcal F$ is not fixed here, so we allow $F$ to have some elements which do not belong to the support of $\mathcal F$. In 1973, Erd{\H o}s and Lov{\' a}sz \cite{EL} asked how large such a family $\mathcal F$ can be. Another way to phrase this question is to ask for the largest size of an $n$-uniform intersecting family $\mathcal F$ such that $\tau(\mathcal F) = n$. Here, $\tau(\mathcal F)$ denotes the \emph{covering number} of the family $\mathcal F$, that is, the minimum size of a set $T$ which intersects any member of $\mathcal F$. It is easy to see that any such family $\mathcal F$ is contained in a maximal intersecting family and any maximal intersecting family $\mathcal F$ satisfies $\tau(\mathcal F) = n$. A related question about the \emph{minimal} size of an $n$-uniform intersecting family $\mathcal F$ with $\tau(\mathcal F)=n$ was famously solved by Kahn \cite{Kahn}.
In \cite{EL}, Erd{\H o}s and Lov{\' a}sz proved the first non-trivial upper bound $n^n$ on the size of a maximal $n$-uniform intersecting family, and they also constructed such a family of size $[(e -1)n!]$ and conjectured this to be best possible (see also Section \ref{sec4} for the construction). However, 20 years later Frankl, Ota and Tokushige \cite{FOT} gave a new construction of size roughly $(n/2)^n$. The upper bound $n^n$ was improved to $(1 - 1/e +o(1))n^n$ in 1994 by Tuza \cite{Tu}. In 2011, Cherkashin \cite{Ch} obtained a bound $|\mathcal F| = O(n^{n-1/2})$ and then in 2017 Arman and Retter \cite{AR} improved this further to $(1+o(1))n^{n-1}$. The best currently known upper bound was obtained in 2019 by Frankl \cite{F}: \begin{equation}\label{fr}
|\mathcal F| \leqslant e^{-c n^{1/4}} n^n. \end{equation} Frankl \cite{F} also stated that it is possible to modify the argument and improve the exponent in (\ref{fr}) from $1/4$ to $1/3$. In this paper we provide an even stronger improvement of (\ref{fr}):
\begin{theorem}\label{el} Let $\mathcal F$ be an $n$-uniform maximal intersecting family. Then \begin{equation}\label{meq}
|\mathcal F| \leqslant e^{-n^{1/2 + o(1)}} n^n. \end{equation} \end{theorem}
Frankl, Ota and Tokushige conjecture in \cite{FOT} that $|\mathcal F| \leqslant (\alpha n)^n$ should hold for any maximal intersecting family and some absolute constant $\alpha < 1$. The methods of the present paper do not seem to be sufficient to prove this conjecture.
To prove Theorem \ref{el}, we consider a more general problem of estimating the number of minimal coverings of an arbitrary intersecting family. Given a family $\mathcal F$, a set $T$ is called {\em a minimal covering of} $\mathcal F$ if $T \cap F \neq \emptyset$ holds for any $F \in \mathcal F$ ($T$ covers $\mathcal F$) but this condition does not hold for any proper subset $T' \subset T$ ($T$ is minimal). The minimum size of a covering of $\mathcal F$ is called the {\em covering number} and denoted $\tau(\mathcal F)$. Let $\mathcal T(\mathcal F)$ denote the family of all minimal coverings $T$ of a family $\mathcal F$. For technical reasons it is convenient to restrict attention to the subfamily $\mathcal T_{\leqslant n}(\mathcal F) \subset \mathcal T(\mathcal F)$ of all minimal coverings of $\mathcal F$ of size at most $n$ (where $n$ will be taken equal to the uniformity of $\mathcal F$). For a not necessarily uniform family $\mathcal G$ and $\lambda > 0$ we define its {\em weight $w_\lambda(\mathcal G)$} as follows: \begin{equation*}
w_\lambda(\mathcal G) = \sum_{G \in \mathcal G} \lambda^{-|G|}. \end{equation*}
If $\mathcal F$ is an $n$-uniform maximal intersecting family, then $\tau(\mathcal F) = n$ and so any element $F \in \mathcal F$ is a minimal covering of $\mathcal F$. That is, $\mathcal F \subset \mathcal T(\mathcal F)$ and so \begin{equation}\label{weightbd1}
w_\lambda(\mathcal T(\mathcal F)) \geqslant \lambda^{-n} |\mathcal F| \end{equation} holds for any $\lambda >0$. On the other hand, the classical encoding procedure of Erd{\H o}s and Lov\'asz \cite{EL} actually shows that any $n$-uniform family $\mathcal F$ satisfies \begin{equation}\label{weightbd2} w_n(\mathcal T(\mathcal F)) \leqslant 1. \end{equation}
By putting (\ref{weightbd1}) and (\ref{weightbd2}) together, we recover the upper bound $|\mathcal F| \leqslant n^n$. Note that the inequality (\ref{weightbd2}) is actually tight for arbitrary $n$-uniform families:
\begin{example} Let $\mathcal F = \{F_1, \ldots, F_k\}$ be a collection of $k$ pairwise disjoint $n$-element sets $F_1, \ldots, F_k$; then clearly $$ \mathcal T(\mathcal F) = \{T = \{x_1, \ldots, x_k\}:~~ x_i \in F_i, ~i=1, \ldots, k\} $$
and so $w_n(\mathcal T(\mathcal F)) = |\mathcal T(\mathcal F)| n^{-k} = 1$. \end{example}
However, the family $\mathcal F$ in this example is very far from being intersecting. This suggests that perhaps one can improve (\ref{weightbd2}) provided that $\mathcal F$ is an intersecting family. Another obstruction comes from the case when $\mathcal F$ has small covering number: \begin{example}
Let $K_1, \ldots, K_k$ be pairwise disjoint $(n-k+1)$-element sets and let $\mathcal F$ be the family of sets of the form $F = K_i \cup T$ where $|T \cap K_j| = 1$ for all $j = 1, \ldots, k$. Then $\mathcal F$ is intersecting, $\tau(\mathcal F) = \min\{k, n-k+1\}$ and $w_n(\mathcal T(\mathcal F)) \geqslant \frac{(n-k+1)^{k}}{n^k} \gtrsim e^{- \frac{k^2}{n} }$. \end{example}
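To see the last lower bound, note that (as long as $n-k+1 \geqslant 2$) every transversal $T = \{x_1, \ldots, x_k\}$ with $x_i \in K_i$ is a minimal covering of $\mathcal F$: it clearly covers $\mathcal F$, and after removing $x_i$ a set $K_i \cup T'$ with $T'$ avoiding $x_1, \ldots, x_k$ is left uncovered. There are $(n-k+1)^k$ such transversals, each of size $k$, so already $$ w_n(\mathcal T(\mathcal F)) \geqslant (n-k+1)^k n^{-k} = \Big(1 - \frac{k-1}{n}\Big)^k \gtrsim e^{-\frac{k^2}{n}}. $$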
So the bound in (\ref{weightbd2}) is essentially tight for $n$-uniform intersecting families $\mathcal F$ with covering number $\tau(\mathcal F) \lesssim n^{1/2}$. Our main result states that if $\mathcal F$ is intersecting and the covering number $\tau(\mathcal F)$ is large enough, then we indeed can win over (\ref{weightbd2}) by a significant amount (below we write $c_n(\mathcal A) = w_n(\mathcal T_{\leqslant n}(\mathcal A))$ for an $n$-uniform family $\mathcal A$):
\begin{theorem}\label{main} For all $\varepsilon > 0$ and sufficiently large $n > n_0(\varepsilon)$ we have the following. Let $\mathcal A$ be an intersecting $n$-uniform family. Then \begin{equation}\label{maine} c_n(\mathcal A) \leqslant e^{1- \frac{\tau(\mathcal A)^{1.5-\varepsilon}}{n}}. \end{equation} \end{theorem}
Note that this gives a substantial improvement over (\ref{weightbd2}) provided that $\tau(\mathcal A) > n^{2/3+\varepsilon}$. By applying Theorem \ref{main} to a maximal intersecting family $\mathcal F$ and using (\ref{weightbd1}), Theorem \ref{el} follows.
We now turn to explain the main ideas of the proof of Theorem \ref{main}. In what follows, we use the notation $c_\lambda(\mathcal A) = w_\lambda(\mathcal T_{\leqslant n}(\mathcal A))$ for an $n$-uniform family $\mathcal A$ and $\lambda > 0$.
Fix $\varepsilon > 0$. Using induction, we are going to show that for any $n > n_0(\varepsilon)$ and any $n$-uniform intersecting family $\mathcal A$ we have \begin{equation}\label{induction}
c_n(\mathcal A) \leqslant \lambda^{\tau(\mathcal A)-n/2}, \end{equation} where $\lambda = e^{-\frac{1}{n^{0.5+\varepsilon}}}$. This is much weaker than what is claimed in (\ref{maine}) for $n^{2/3} \leqslant \tau(\mathcal A) \leqslant n/2$ but gives the same result when $\tau(\mathcal A)$ is close to $n$. By writing down the inductive statement (\ref{induction}) more carefully, one can recover (\ref{maine}) in the full range of parameters, see Section \ref{sec3} for details.
If $\tau(\mathcal A) \leqslant n/2$ then (\ref{induction}) follows from (\ref{weightbd2}) (which we will prove later) so we may assume that $\tau(\mathcal A) \geqslant n/2$. For the purpose of induction, we may assume that (\ref{induction}) holds for all $n$-uniform intersecting families of size strictly smaller than $\mathcal A$. The following proposition is at the core of our inductive approach:
\begin{prop}\label{prop1}
Let $\lambda = e^{-\frac{1}{n^{0.5+\varepsilon}}}$. If there exists a subfamily $\mathcal G \subset \mathcal A$ such that $c_{\lambda n}(\mathcal G) \leqslant 1$ then $c_{n}(\mathcal A) \leqslant \lambda^{\tau(\mathcal A)-n/2}$. \end{prop}
Roughly speaking, Proposition \ref{prop1} tells us that if we can find a subfamily $\mathcal G \subset \mathcal A$ which is `difficult' to cover then we can use it for the induction step and get a bound on $c_n(\mathcal A)$ in terms of $c_n(\mathcal A')$ for some proper subfamilies $\mathcal A'\subset \mathcal A$. The idea of finding special subfamilies in $\mathcal A$ to bound the number of minimal covers also appears in a somewhat different form in \cite{F}.
Proposition \ref{prop1} puts rather strict limitations on how a potential minimal family $\mathcal A$ contradicting (\ref{induction}) might look like. The first key observation (also originating from \cite{F}) is that all pairwise intersections of sets in $\mathcal A$ are either very small or very large.
Indeed, let $A_1, A_2 \in \mathcal A$ be a pair of sets such that $ |A_1 \cap A_2| = k$ for some $k$. Observe that $$ c_{\lambda n}(\{A_1, A_2\}) = \frac{k}{\lambda n} + \frac{(n-k)^2}{\lambda^2 n^2}, $$
and so we have $c_{\lambda n}(\{A_1, A_2\}) \leqslant 1$ for any $k \in [\sqrt{n}, n-\sqrt{n}]$. So by Proposition \ref{prop1}, unless $\mathcal A$ satisfies (\ref{induction}), for any pair $A_1, A_2 \in \mathcal A$ we either have $|A_1 \cap A_2| \leqslant \sqrt{n}$ or $|A_1 \cap A_2| \geqslant n-\sqrt{n}$.
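As a quick sanity check of the last claim (with implied constants depending only on $\varepsilon$): the function $g(k) = \frac{k}{\lambda n} + \frac{(n-k)^2}{\lambda^2 n^2}$ is a quadratic in $k$ with positive leading coefficient, so on the interval $[\sqrt n, n-\sqrt n]$ it is maximized at one of the endpoints. Since $\lambda = e^{-n^{-0.5-\varepsilon}} = 1 - O(n^{-0.5-\varepsilon})$, both endpoint values satisfy $$ g(\sqrt n),\ g(n - \sqrt n) \leqslant 1 - \frac{1}{\sqrt n} + O(n^{-0.5-\varepsilon}) + O(n^{-1}) \leqslant 1 $$ once $n$ is large enough in terms of $\varepsilon$.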
Let $k = \sqrt{n}$. The above property allows us to write $\mathcal A$ as a union \begin{equation}\label{decompK} \mathcal A = \mathcal K_1 \cup \ldots \cup \mathcal K_N, \end{equation}
where for any $i, j = 1, \ldots, N$ and $K_i \in \mathcal K_i$ and $K_j \in \mathcal K_j$ we have $|K_i \cap K_j| \geqslant n-k$ if $i= j$ and $|K_i \cap K_j| \leqslant k$ otherwise. This decomposition step is actually quite robust and works for any $k < n/3$; so if one were to prove (\ref{induction}) with $\lambda < 1-c$ for a small constant $c$, then one may still assume that, say, $|A_1 \cap A_2| \not \in [0.1 n, 0.9 n]$ holds for all $A_1, A_2 \in \mathcal A$, and so we have (\ref{decompK}) with $k = 0.1n$.
The decomposition (\ref{decompK}) has the following properties:
\paragraph{Each family $\mathcal K_i$ has a core of size $n-5k$.} That is, there exists a set $K_i$ of size $n-5k$ such that $K_i \subset A$ for any $A \in \mathcal K_i$. Note that we only know that $|A_1\cap A_2| \geqslant n-k$ for any $A_1, A_2 \in \mathcal K_i$ and so a priori the sets in $\mathcal K_i$ do not have to have a large common intersection. However, if $|\bigcap \mathcal K_i|\leqslant n-5k$ then we can take $\mathcal G = \mathcal K_i$ in Proposition \ref{prop1}:
\begin{lemma}\label{ker}
Let $k \leqslant n/10$. Let $\mathcal K$ be an $n$-uniform family. Suppose that there is an $(n-k)$-element set $K$ such that we have $|F \cap K| \geqslant n-2k$ for every $F \in \mathcal K$. Then we either have $c_{n-k}(\mathcal K) \leqslant 1$ or $|\bigcap \mathcal K| \geqslant n-5k$. \end{lemma}
The idea is to use the Lubell--Yamamoto--Meshalkin inequality to control the possible intersections of a minimal cover of $\mathcal K$ with the set $K$ above. This step is also quite flexible and can be employed if one were to prove (\ref{induction}) with $\lambda=1-c$ (and $k \approx c n$).
\paragraph{Each family $\mathcal K_i$ is small.} Namely, we have $|\mathcal K_i| \leqslant {\tau(\mathcal A) + 2k \choose 2k}$ for all $i$. We say that a family $\mathcal F$ is $\tau$-critical if removing any set from $\mathcal F$ reduces $\tau(\mathcal F)$. The family $\mathcal A$ in question is $\tau$-critical: if not, then for some proper $\mathcal A' \subset \mathcal A$ we have $\tau(\mathcal A') = \tau(\mathcal A)$. But then by the induction assumption we get $$ c_n(\mathcal A) \leqslant c_n(\mathcal A') \leqslant \lambda^{\tau(\mathcal A')-n/2} = \lambda^{\tau(\mathcal A)-n/2}. $$ Here we also use a simple monotonicity property $c_n(\mathcal A) \leqslant c_n(\mathcal A')$ which we prove in the next section.
So we can apply the following simple lemma:
\begin{lemma}\label{kerup}
Let $\mathcal A$ be a $\tau$-critical $n$-uniform family (that is, removing any element from $\mathcal A$ reduces $\tau(\mathcal A)$) and let $\mathcal K \subset \mathcal A$ be a subfamily such that $|\bigcap \mathcal K| \geqslant n-k$ for some $k \geqslant 0$.
Then $|\mathcal K| \leqslant {\tau(\mathcal A) + k \choose k}$. \end{lemma}
\begin{proof} Denote $K = \bigcap \mathcal K$.
By $\tau$-criticality of $\mathcal A$, for any set $A \in \mathcal K$ there is a covering $T_A$ of $\mathcal A \setminus \{A\}$ of size less than $\tau(\mathcal A)$ which does not intersect $A$. Note that $T_A$ does not intersect $K$ and so it is a covering of the family $(\mathcal K \setminus \{A\}) \setminus K$. Thus, the system of pairs of sets $(A\setminus K, T_A)_{A \in \mathcal K}$ satisfies the Bollob{\'a}s's Two Families theorem \cite[Page 113, Theorem 8.8]{J} and so $|\mathcal K| \leqslant {\tau(\mathcal A) + k \choose k}$. \end{proof}
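For the reader's convenience, we recall the version of the theorem used here: if $(A_i, B_i)$, $i = 1, \ldots, m$, are pairs of finite sets with $A_i \cap B_i = \emptyset$ for every $i$ and $A_i \cap B_j \neq \emptyset$ for every $i \neq j$, then $\sum_{i=1}^{m} {|A_i|+|B_i| \choose |A_i|}^{-1} \leqslant 1$; in particular, if $|A_i| \leqslant a$ and $|B_i| \leqslant b$ for all $i$, then $m \leqslant {a+b \choose a}$. Above it is applied to the pairs $(A \setminus K, T_A)$, $A \in \mathcal K$, with $|A \setminus K| \leqslant k$ and $|T_A| \leqslant \tau(\mathcal A) - 1$.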
\paragraph{The number of families $N$ is small.} Namely, we may assume that $N \leqslant n^{C}$ holds for some constant $C$. This is the part of the proof where we rely on the Spread Lemma of Alweiss--Lovett--Wu--Zhang \cite{Al}. Namely, we have the following:
\begin{lemma}\label{small}
Let $\mathcal A$ be an $n$-uniform family where $n$ is sufficiently large. Let $\mathcal B \subset \mathcal A$ be a subfamily such that $|B_1 \cap B_2| \leqslant k$ for all distinct $B_1, B_2 \in \mathcal B$. If $k \leqslant \frac{n}{10^4\log n}$ then one of the following 2 possibilities holds: \begin{enumerate}
\item We have $|\mathcal B| \leqslant n^{C}$ for some absolute constant $C$.
\item There is a proper subfamily $\mathcal A' \subset \mathcal A$ such that $$ 2^{\tau(\mathcal A)}c_n(\mathcal A) \leqslant 2^{\tau(\mathcal A')} c_n(\mathcal A'). $$ \end{enumerate} \end{lemma}
Note that this lemma has a mild restriction $k \lesssim \frac{n}{\log n}$. This means that the best possible bound in (\ref{induction}) using Lemma \ref{small} has $\lambda = 1- \frac{c}{\log n}$ (corresponding to a bound of the form $|\mathcal F| \leqslant e^{-\frac{cn}{\log n}} n^n$ for maximal intersecting families). So even though this is not enough to prove an exponential bound in the Erd{\H o}s--Lov\'asz problem, this is by far not the main bottleneck of the argument.
The proof of Lemma \ref{small} is based on the following idea. Let $p = \frac{C \log n}{n}$ and consider a random set ${\bf U}$ where each element of the ground set is included in ${\bf U}$ independently with probability $p$. If the family $\mathcal T_{\leqslant n}(\mathcal A)$ is not $\frac{n}{2}$-spread then one can check that the second option of the lemma holds. Otherwise, by the Spread Lemma (see Lemma \ref{rspread} below), with probability at least $0.9$ there exists an element $T \in \mathcal T_{\leqslant n}(\mathcal A)$ such that $T \subset {\bf U}$. On the other hand, a routine second moment computation shows that if $N$ is large enough and sets $B_1, \ldots, B_N$ have small pairwise intersections, then with probability at least $0.9$ there exists $i \in [N]$ so that $B_i$ is disjoint from ${\bf U}$. So with probability at least $0.8$ there is a covering $T \subset {\bf U}$ of $\mathcal A$ and a set $B_i \in \mathcal A$ disjoint from ${\bf U}$. In particular, $T \cap B_i = \emptyset$ with positive probability which contradicts the definition of a covering.
\paragraph{Conclusion: $\mathcal A$ is small.} We conclude from the above observations that the family $\mathcal A$ itself must be small: \begin{equation}\label{Asmall}
|\mathcal A| \leqslant |\mathcal K_1|+\ldots+|\mathcal K_N| \leqslant N {\tau(\mathcal A)+5k\choose 5k} \leqslant n^{6k}. \end{equation} Once we know that the family $\mathcal A$ is small, we can start exploiting the fact that $\tau(\mathcal A)$ is large. In fact, we show that $\mathcal A$ cannot be too `clustered' around a few elements of the ground set since otherwise we can find a covering of $\mathcal A$ of size less than $\tau(\mathcal A)$ by sampling a random set according to the degree distribution of $\mathcal A$. A careful execution of this idea results in the following lemma: \begin{lemma}\label{dec} Let $n\geqslant 1$ and $m, t \geqslant 1$. Let $\mathcal A$ be an $n$-uniform family of size at most $e^{m}$ and $\tau(\mathcal A) \geqslant t$. Then, for every $l \geqslant 1$, there is a subfamily $\mathcal A' \subset \mathcal A$ such that $\tau(\mathcal A \setminus \mathcal A') \leqslant t/2$ and for every $i = 1, \ldots, l$ we have \begin{equation}\label{deceq}
\mathbb E_{A_1, \ldots, A_i \in \mathcal A'} |A_1 \cap \ldots \cap A_i| \leqslant C_l \left(\frac{m}{t} \right)^{i-1} n, \end{equation} where $C_l \ll 2^{l^2}$ depends only on $l$ and the average is taken over all $A_1, \ldots, A_i \in \mathcal A'$ chosen uniformly and independently. \end{lemma}
That is, we can remove a few sets from $\mathcal A$ and obtain the property that the $l$-wise intersections of sets in $\mathcal A$ are very small on average. Note that in our case $m \sim k \log n \sim \sqrt{n} \log n$ and $t = \tau(\mathcal A) \geqslant n/2$, so that $$ C_l \left(\frac{m}{t} \right)^{l-1} n \lesssim n^{2- l/2}, $$ that is, almost all $l$-wise intersections of sets from $\mathcal A$ are empty for constant $l$. Let $r = n^{0.5-\varepsilon/2}$ and $l = 10\varepsilon^{-1}$ and sample a uniformly random subfamily $\mathcal B = \{B_1, \ldots, B_r\} \subset \mathcal A'$, where $\mathcal A'$ is given by Lemma \ref{dec}. Then by (\ref{deceq}) and the union bound, with positive probability all $l$-wise intersections of sets in $\mathcal B$ are empty. We remark that the family $\mathcal B$ is a natural generalization of `brooms' used by Frankl in \cite{F}; the advantage of our approach is that we can find (generalized) brooms of size $\sim n^{1/2}$ whereas Frankl could only construct brooms of size $\sim n^{1/4}$.
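As a back-of-the-envelope check of this last step (ignoring lower-order factors): by Markov's inequality, the probability that $l$ independent uniform members of $\mathcal A'$ have a common element is at most $\mathbb E|A_1 \cap \ldots \cap A_l| \lesssim n^{2 - l/2}$, while the number of $l$-tuples in $\mathcal B$ is at most $r^l = n^{(0.5-\varepsilon/2)l} = n^{l/2 - 5}$ for $l = 10\varepsilon^{-1}$. Hence the expected number of $l$-tuples of sets from $\mathcal B$ with a non-empty common intersection is at most $n^{l/2-5} \cdot n^{2 - l/2 + o(1)} = n^{-3+o(1)} = o(1)$, which is what the union bound needs.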
The final step of the proof is to show that one can take $\mathcal G = \mathcal B$ in Proposition \ref{prop1}: \begin{lemma}\label{lbodeg} Let $n \geqslant 1$ and $r \geqslant 2l$ be such that $r^2 \leqslant l^3 n$. Let $\mathcal B$ be an $n$-uniform intersecting family of size $r$ such that every $l$ distinct sets from $\mathcal B$ have an empty intersection. Then for $k \leqslant \frac{r}{20 l^3}$ we have \begin{equation*}
c_{n - k}(\mathcal B) \leqslant 1. \end{equation*} \end{lemma}
The proof of this lemma crucially uses the intersecting property of the family $\mathcal B$. In fact, this is the only place in the argument where we really use the fact that the initial family is intersecting. The construction of a large bounded degree family $\mathcal B$ and Lemma \ref{lbodeg} appear to be the main bottlenecks of the argument and are the reason for the resulting bound of $e^{-n^{0.5+\varepsilon}}n^n$ for maximal intersecting families.
The proof of Lemma \ref{lbodeg} is based on a more careful analysis of the classical Erd{\H o}s--Lov\'asz encoding procedure: the intersecting property and bounded degree of $\mathcal B$ ensure that there is enough `overlap' between sets $B_i$ which makes the encoding more efficient. This completes the proof of Theorem \ref{main}. The next section contains all the proofs of the lemmas which appeared in this outline and in Section \ref{sec3} we formally deduce Theorem \ref{main}. Section \ref{sec4} contains some final remarks and questions.
\section{Proving auxiliary results}\label{sec2}
\subsection{Minimal covers}\label{sec21}
Fix $n \in \mathbb N$ and let $\mathcal A$ be a finite family of sets of size at most $n$. For $\lambda > 0$, we define the \emph{weight} $w_\lambda(\mathcal A)$ of $\mathcal A$ by the following expression: \begin{equation*}
w_\lambda(\mathcal A) = \sum_{A \in \mathcal A} \lambda^{-|A|}. \end{equation*}
The parameter $\lambda$ will be usually taken to be $\lambda = n$ or $\lambda = n - k$ for a relatively small number $k$. The following characteristic of a family will be crucial for our arguments. Recall that a covering of $\mathcal A$ is a set $T$ intersecting all members of $\mathcal A$ and a covering $T$ is minimal if any proper subset $T' \subset T$ does not cover $\mathcal A$. We denote $\tau(\mathcal A)$ the minimum size of a covering of $\mathcal A$. We denote $\mathcal T_{\leqslant n}(\mathcal A)$ the family of all minimal covers of $\mathcal A$ of size at most $n$. For $\lambda > 0$, put $$
c_\lambda(\mathcal A) = w_\lambda(\mathcal T_{\leqslant n}(\mathcal A)) = \sum_{T \in \mathcal T_{\leqslant n}(\mathcal A)} \lambda^{-|T|}. $$
We remark that the family $\mathcal T_{\leqslant n}(\mathcal F)$ was also introduced in \cite{Tu} to prove the bound $|\mathcal F| \leqslant (1 - e^{-1} + o(1))n^n$. We have the following basic monotonicity result:
\begin{obs}\label{mon} For any family $\mathcal F$ and all $\lambda \leqslant \mu$ we have $$ c_\mu(\mathcal F) \leqslant \left( \frac{\lambda}{\mu} \right)^{\tau(\mathcal F)} c_\lambda(\mathcal F). $$ \end{obs}
\begin{proof} Indeed, since every minimal covering $T$ of $\mathcal F$ has size at least $\tau(\mathcal F)$ we have $$
\mu^{-|T|} \leqslant \left( \frac{\lambda}{\mu} \right)^{\tau(\mathcal F)} \lambda^{-|T|}. $$ Summing over all $T \in \mathcal T_{\leqslant n}(\mathcal F)$ gives the desired inequality. \end{proof}
Let $X$ be the ground set of $\mathcal A$. For $S \subset X$ we denote by $\mathcal A(\bar S)$ the set of elements of $\mathcal A$ which do not intersect $S$. The following lemma lies at the foundation of our arguments.
\begin{lemma}\label{lm} For any subfamily $\mathcal A'\subset \mathcal A$ of any family $\mathcal A$ and for any $\lambda > 1$ we have \begin{equation}\label{ineqmain}
c_\lambda(\mathcal A) \leqslant \sum_{T' \in \mathcal T_{\leqslant n}(\mathcal A')} \lambda^{-|T'|} c_\lambda(\mathcal A(\bar T')), \end{equation} In particular, we have $$ c_\lambda(\mathcal A) \leqslant c_\lambda(\mathcal A') \max_{T' \in \mathcal T_{\leqslant n}(\mathcal A')} c_\lambda(\mathcal A(\bar T')). $$ \end{lemma}
\begin{proof}
Each minimal covering $T \in \mathcal T_{\leqslant n}(\mathcal A)$ contains a minimal covering $T' \subset T$ of $\mathcal A'$. Moreover, by the minimality of $T$, the set $T\setminus T'$ is a minimal covering of the family $\mathcal A(\bar T')$. So each term $\lambda^{-|T|}$ on the left hand side of (\ref{ineqmain}) corresponds to at least one term $\lambda^{-|T'|} \lambda^{-|T\setminus T'|}$ on the right hand side of (\ref{ineqmain}) (there could be more than one way to choose $T'$). This proves (\ref{ineqmain}). \end{proof}
In particular, we have:
\begin{cor}[Tuza, \cite{Tu}]\label{st} For any $n$-uniform family $\mathcal F$ we have $c_n(\mathcal F) \leqslant 1$. \end{cor}
This bound was also proved in \cite{Tu}, similar ideas appear in \cite{G}.
\begin{proof}
Note that if $|\mathcal F| \leqslant 1$ then the proposition holds. If $|\mathcal F| \geqslant 2$ then choose a proper non-empty subfamily $\mathcal F' \subset \mathcal F$ and apply the second part of Lemma \ref{lm}. The statement now follows by induction. \end{proof}
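For instance, in the base case $\mathcal F = \{F\}$ with $|F| = n$, the minimal coverings of $\mathcal F$ are exactly the $n$ singletons $\{x\}$, $x \in F$, so $c_n(\mathcal F) = n \cdot n^{-1} = 1$.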
The basic idea behind the proof of Theorem \ref{el} is to apply Lemma \ref{lm} to various subfamilies $\mathcal F'$ with small $c_n(\mathcal F')$ and use induction to estimate the terms $c_n(\mathcal F(\bar T'))$. More precisely, we will use the following consequence of Lemma \ref{lm}.
\begin{lemma}\label{crlm}
Let $f: \mathbb R\rightarrow \mathbb R$ be a differentiable convex function. Let $\mathcal F$ be an $n$-uniform family such that for any proper subfamily $\mathcal A \subset \mathcal F$ we have $c_n(\mathcal A) \leqslant e^{- f(\tau(\mathcal A))}$. Let $\lambda = e^{-f'(\tau(\mathcal F))}$ and suppose that there exists a non-empty family $\mathcal F' \subset \mathcal F$ such that $c_{\lambda n}(\mathcal F') \leqslant 1$. Then $c_n(\mathcal F) \leqslant e^{-f(\tau(\mathcal F))}$. \end{lemma}
\begin{proof}
By Lemma \ref{lm} applied to $\mathcal F' \subset \mathcal F$ we have \begin{equation*}
c_n(\mathcal F) \leqslant \sum_{T \in \mathcal T_{\leqslant n}(\mathcal F')} n^{-|T|} c_n(\mathcal F(\bar T)) \leqslant \sum_{T \in \mathcal T_{\leqslant n}(\mathcal F')} n^{-|T|} e^{-f(\tau(\mathcal F(\bar T)))}. \end{equation*}
We have $\tau(\mathcal F(\bar T)) \geqslant \tau(\mathcal F) - |T|$ and so by convexity $f(\tau(\mathcal F(\bar T))) \geqslant f(\tau(\mathcal F)) - |T| f'(\tau(\mathcal F))$, which leads to $$
c_n(\mathcal F) \leqslant \sum_{T \in \mathcal T_{\leqslant n}(\mathcal F')} n^{-|T|} e^{f'(\tau(\mathcal F)) |T| - f(\tau(\mathcal F))} = c_{\lambda n}(\mathcal F') e^{- f(\tau(\mathcal F))} \leqslant e^{- f(\tau(\mathcal F))}, $$ completing the proof. \end{proof}
Note that Proposition \ref{prop1} from the proof outline above follows from this lemma with $f(t) = - (t-n/2) \log \lambda$.
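Indeed, this $f$ is linear and hence convex, $f'(t) = -\log \lambda$ so that $e^{-f'(\tau(\mathcal F))} = \lambda$ and $e^{-f(\tau(\mathcal F))} = \lambda^{\tau(\mathcal F) - n/2}$, and the assumption $c_n(\mathcal A) \leqslant e^{-f(\tau(\mathcal A))}$ for proper subfamilies $\mathcal A \subset \mathcal F$ is exactly the inductive hypothesis (\ref{induction}).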
\subsection{Large intersections}\label{sec22}
In this section we study families $\mathcal K$ in which every pair of sets has ``large" intersection.
\begin{lemma}\label{ker2}
Let $k \leqslant n/10$. Let $\mathcal K$ be an $n$-uniform family. Suppose that there is an $(n-k)$-element set $K$ such that we have $|F \cap K| \geqslant n-2k$ for every $F \in \mathcal K$. Then we either have $c_{n-k}(\mathcal K) \leqslant 1$ or $|\bigcap \mathcal K| \geqslant n-5k$. \end{lemma}
\begin{proof}
Let $K' = \bigcap \mathcal K$ and $R = K \setminus K'$ and let $u = |K' \setminus K|$. Note that $u \in [0, k]$ since $|F \setminus K| \leqslant k$ for all $F \in \mathcal K$. Denote by $\mathcal A$ the family of all sets $F \setminus K'$ for $F \in \mathcal K$. By the definition of $\mathcal A$ we have $\tau(\mathcal A) \geqslant 2$. Note also that for any $A \in \mathcal A$ we have $|A \setminus R| \leqslant k-u$ (since $A$ has size at most $k$ and contains the $u$-element set $K'\setminus K$).
Note that a minimal covering $T$ of the family $\mathcal K$ is either contained in $K'$ and $|T| = 1$ or $T \cap K' = \emptyset$. In the latter case $T$ is obviously a minimal covering of $\mathcal A$. Thus, we have \begin{equation}\label{ckca}
c_\lambda(\mathcal K) = \frac{|K'|}{\lambda} + c_\lambda(\mathcal A). \end{equation}
Let $\mathcal T_1 \subset \mathcal T_{\leqslant n}(\mathcal A)$ be the family of minimal coverings $T$ of $\mathcal A$ which are subsets of $R$. Let $\mathcal T_2 = \mathcal T_{\leqslant n}(\mathcal A) \setminus \mathcal T_1$. We will estimate weights of $\mathcal T_1$ and $\mathcal T_2$ separately.
Note that $\mathcal T_1 \subset 2^R$ and observe that $T' \not\subset T$ for any distinct $T, T' \in \mathcal T_1$ (i.e. $\mathcal T_1$ is an antichain in $2^R$).
\begin{prop}\label{sp}
Suppose that $\mathcal T \subset 2^R$ is an antichain such that every element of $\mathcal T$ has size at least $t$. If $\lambda \geqslant |R|$ then $$
\sum_{T \in \mathcal T} \lambda^{-|T|} \leqslant \lambda^{-t} {|R| \choose t}. $$ \end{prop}
This statement also appears in \cite{F0}.
\begin{proof}
Note that for any $s \geqslant t$ we have ${|R| \choose s} \leqslant {|R| \choose t} \lambda^{s-t}$ and so by the Lubell–Yamamoto–Meshalkin inequality \cite[Page 112, Theorem 8.6]{J}: $$
\sum_{T \in \mathcal T} \lambda^{-|T|} \leqslant \sum_{T \in \mathcal T} \lambda^{-t}{|R| \choose t} / {|R| \choose |T|} = \lambda^{-t}{|R| \choose t} \sum_{T \in \mathcal T} \frac{1}{{|R| \choose |T|}} \leqslant \lambda^{-t} {|R| \choose t}. $$ \end{proof}
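For the first inequality in the proof above, note that for $s \geqslant t$ and $\lambda \geqslant |R|$ we have $$ {|R| \choose s} \Big/ {|R| \choose t} = \prod_{j=t+1}^{s} \frac{|R|-j+1}{j} \leqslant |R|^{s-t} \leqslant \lambda^{s-t}, $$ that is, ${|R| \choose s} \leqslant {|R| \choose t} \lambda^{s-t}$.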
By Proposition \ref{sp}, for every $\lambda \geqslant |R|$ the $\lambda$-weight of $\mathcal T_1$ is at most \begin{equation}\label{c1}
w_\lambda(\mathcal T_1) \leqslant \lambda^{-\tau(\mathcal A)} {|R| \choose \tau(\mathcal A)} \leqslant \frac{(|R|/\lambda)^{\tau(\mathcal A)}} {\tau(\mathcal A)!}. \end{equation} Now we estimate the weight of $\mathcal T_2$. Let $\mathcal S \subset 2^R$ be the family of all sets $S \subset R$ such that $S$ {\it does not} cover $\mathcal A$. Then the weight of $\mathcal T_2$ is bounded by the following expression: \begin{equation}\label{exp}
w_\lambda(\mathcal T_2) \leqslant \sum_{S \in \mathcal S} \lambda^{-|S|} c_\lambda(\mathcal A(\bar S) \setminus R). \end{equation} Indeed, the contribution of an element $T \in \mathcal T_2$ on the left hand side is accounted by the term corresponding to $S = T\cap R \in \mathcal S$ on the right hand side (since $T\setminus R$ is a minimal covering of the family $\mathcal A(\bar S)\setminus R$). Here the family $\mathcal A(\bar S) \setminus R$ consists of all sets of the form $A \setminus R$ where $A \in \mathcal A$ does not intersect $S$. Every element in $\mathcal A(\bar S)\setminus R$ has cardinality at most $k-u$ and so by Observation \ref{mon} and Corollary \ref{st} applied to $\mathcal A(\bar S)\setminus R$ for every $\lambda \geqslant k-u$ we have \begin{equation}\label{eql}
c_\lambda(\mathcal A(\bar S)\setminus R) \leqslant \left(\frac{k-u}{\lambda}\right)^{\tau(\mathcal A(\bar S)\setminus R)} c_{k-u}(\mathcal A(\bar S)\setminus R) \leqslant \left(\frac{k-u}{\lambda}\right)^{\tau(\mathcal A(\bar S)\setminus R)}. \end{equation}
Let $S \in \mathcal S$. Note that we have the following lower bound on $\tau(\mathcal A(\bar S)\setminus R)$: \begin{equation*}
\tau(\mathcal A(\bar S)\setminus R) \geqslant \max \{ 1, \tau(\mathcal A)-|S|\}. \end{equation*} Using this lower bound, (\ref{exp}) and (\ref{eql}) we obtain an upper bound on the weight of $\mathcal T_2$ for $\lambda \geqslant k$: \begin{align}\label{wc2}
w_\lambda(\mathcal T_2) \leqslant \sum_{s = 0}^{\tau(\mathcal A)-1} \lambda^{-s} {|R| \choose s} \left(\frac{k-u}{\lambda}\right)^{\tau(\mathcal A) - s} + \sum_{s = \tau(\mathcal A)}^{|R|} \lambda^{-s} {|R| \choose s}\left(\frac{k-u}{\lambda}\right). \end{align}
Now we combine all obtained inequalities to prove Lemma \ref{ker}. Suppose that $|K'| = |\bigcap \mathcal K| < n-5k$; we need to show that $c_{n-k}(\mathcal K) \leqslant 1$ holds. We have $$
|K\cup K'| = |K'| + |K\setminus K'|= |K| + |K'\setminus K|, $$
so that $|K'| = n-k + u - r$ holds. In particular, by the assumption $|K'|< n-5k$ we have $n-k \geqslant r > u+4k$.
Denote $t = \tau(\mathcal A) \geqslant 2$, $r = |R|$, $\rho = \frac{r}{n-k}$ and $\delta = \frac{k-u}{n-k}$. We can use (\ref{c1}) and (\ref{wc2}) with $\lambda = n-k$ and get: \begin{align*}
w_{n-k}(\mathcal T_1) &\leqslant \frac{\rho^{t}}{t!},\\
w_{n-k}(\mathcal T_2) &\leqslant \sum_{s=0}^{t-1} \frac{\rho^s \delta^{t-s}}{s!} + \sum_{s=t}^{r} \frac{\rho^s \delta}{s!}. \end{align*}
By (\ref{ckca}), formula $|K'| = n-k+u-r$ and decomposition $\mathcal T_{\leqslant n}(\mathcal K) = \mathcal T_1\cup \mathcal T_2$: \begin{align*} c_{n-k}(\mathcal K) &\leqslant \frac{n-k+u-r}{n-k} + w_{n-k}(\mathcal T_1) + w_{n-k}(\mathcal T_2) \leqslant\\
& \leqslant \frac{n-k+u-r}{n-k} + \frac{\rho^t}{t!} + \sum_{s=0}^{t-1} \frac{\rho^s \delta^{t-s}}{s!} + \sum_{s=t}^{r} \frac{\rho^s \delta}{s!}. \end{align*} Both $\rho$ and $\delta$ are between 0 and 1 so it is easy to see that the second line is the largest when $t=2$, i.e. $$ c_{n-k}(\mathcal K) \leqslant \frac{n-k+u-r}{n-k} + \frac{\rho^2}{2} + \delta^2 + \frac{\delta \rho}{2} + \sum_{s=2}^{r} \frac{\rho^s \delta}{s!} \leqslant \frac{n-k+u-r}{n-k} + \frac{\rho}{2} + 2\delta, $$ where in the last transition we used $0\leqslant \delta, \rho \leqslant 1$ to group the last 3 terms together and replace $\rho^2$ by $\rho$. Recalling $\rho = \frac{r}{n-k}$ and $\delta = \frac{k-u}{n-k}$ we get $$ c_{n-k}(\mathcal K) \leqslant \frac{n+ k - u - r/2}{n-k} \leqslant \frac{n+k - r/2}{n-k} \leqslant 1, $$ since $r \geqslant 4k$ and $u \geqslant 0$. \end{proof}
\subsection{Small intersections}
In this section we show that in some cases it is possible to estimate the size of a subfamily $\mathcal B \subset \mathcal A$ provided that elements of $\mathcal B$ have very small pairwise intersections.
\begin{lemma}\label{small2}
Let $\mathcal A$ be an $n$-uniform family where $n$ is sufficiently large. Let $\mathcal B \subset \mathcal A$ be a subfamily such that $|B_1 \cap B_2| \leqslant k$ for all distinct $B_1, B_2 \in \mathcal B$. If $k \leqslant \frac{n}{10^4\ln n}$ then one of the following 2 possibilities holds: \begin{enumerate}
\item We have $|\mathcal B| \leqslant n^{C}$ for some absolute constant $C$.
\item There is a proper subfamily $\mathcal A' \subset \mathcal A$ such that $$ 2^{\tau(\mathcal A)}c_n(\mathcal A) \leqslant 2^{\tau(\mathcal A')} c_n(\mathcal A'). $$ \end{enumerate} \end{lemma}
To prove this lemma we will need a result on $R$-spread families which was recently used to substantially improve the upper bound in the Erd{\H o}s-Rado Sunflower problem \cite{Al}, \cite{R}. We will use a variant of this result proved in \cite[Corollary 7]{Ta}. Let $\bf C$ be a random set, that is a probability distribution on $2^X$ for some finite ground set $X$. For $R \geqslant 1$ we say that $\bf C$ is {\it an $R$-spread random set} if for every set $S \subset X$ the probability that $\bf C$ contains $S$ is at most $R^{-|S|}$.
\begin{lemma}[\cite{Ta}] \label{rspread} Let $R > 1$, $\delta \in (0, 1)$ and $m \geqslant 1$. Let $\bf C$ be an $R$-spread random subset of a finite set $X$. Let ${\bf W} \subset X$ be a random set independent from $\bf C$ and such that each $x \in X$ belongs to $\bf W$ with independent probability $1 - (1-\delta)^m$. Then there exists a random set $\bf C'$ with the same distribution as $\bf C$ and such that \begin{equation*}
\mathbb E |{\bf C}' \setminus {\bf W}| \leqslant \left( \frac{5}{\log_2 R\delta} \right)^m \mathbb E |{\bf C}|. \end{equation*} \end{lemma}
We will in fact only need the following corollary of this result.
\begin{cor}\label{cors} In the notations of Lemma \ref{rspread}, let $\mathcal C \subset 2^X$ be the support of the random set $\bf C$. Then the probability that a random set $\bf W$ of density $1 - (1-\delta)^m$ contains an element of $\mathcal C$ is at least \begin{equation*}
\mathbb P(\exists C \in \mathcal C:~ C \subset {\bf W}) \geqslant 1 - \left( \frac{5}{\log_2 R\delta} \right)^m \mathbb E |{\bf C}|. \end{equation*} \end{cor}
\begin{proof}[Proof of Lemma \ref{small}]
Denote by $X$ the ground set of $\mathcal A$.
Put $C = 2048$, $R=\frac{n}{2}$ and $m = \lceil \log_2{n}+10\rceil$. Let $\delta = \frac{C}{n}$ and let ${\bf U} \subset X$ be a subset of $X$ of density $(1-\delta)^m$. Let ${\bf T} \in \mathcal T_{\leqslant n}(\mathcal A)$ be a random set with distribution \begin{equation}\label{distri}
\mathbb P({\bf T} = T) = \frac{n^{-|T|}}{c_n(\mathcal A)}, \end{equation} where $T \in \mathcal T_{\leqslant n}(\mathcal A)$ and such that ${\bf T}$ is independent from $\bf U$.
Let us suppose that the random set $\bf T$ is not $R$-spread. By definition, this means that there is a non-empty set $S \subset X$ such that $$
\mathbb P(S \subset {\bf T}) \geqslant R^{-|S|} = \left( \frac{2}{n} \right)^{|S|}. $$
Let $\mathcal A' = \mathcal A(\bar S)$ be the family of $A \in \mathcal A$ such that $A \cap S = \emptyset$. Note that $\tau(\mathcal A')\geqslant \tau(\mathcal A) - |S|$. Note that if a covering $T \in \mathcal T_{\leqslant n}(\mathcal A)$ satisfies $S \subset T$ then $T\setminus S$ is a minimal covering of the family $\mathcal A'$. Thus, $$
\sum_{T \in \mathcal T_{\leqslant n}(\mathcal A):~ S \subset T} n^{-|T|} = n^{-|S|} \sum_{T \in \mathcal T_{\leqslant n}(\mathcal A): ~S \subset T} n^{-|T \setminus S|} \leqslant n^{-|S|} c_n(\mathcal A'). $$ By (\ref{distri}), the left hand side of this inequality equals to $c_n(\mathcal A) \mathbb P(S \subset {\bf T})$. We conclude \begin{align*}
c_n(\mathcal A') n^{-|S|} \geqslant c_n(\mathcal A) \mathbb P(S \subset {\bf T}) \geqslant c_n(\mathcal A) 2^{|S|} n^{-|S|},\\
c_n(\mathcal A') \geqslant c_n(\mathcal A) 2^{|S|} \geqslant c_n(\mathcal A) 2^{\tau(\mathcal A) - \tau(\mathcal A')}. \end{align*} This implies the second alternative of Lemma \ref{small2}. So we may assume that $\bf T$ is $R$-spread.
By Corollary \ref{cors} (applied to ${\bf W} = X\setminus {\bf U}$), we have the following estimate on the probability that there is a covering $T \in \mathcal T_{\leqslant n}(\mathcal A)$ which does not intersect $\bf U$: $$
\mathbb P(\exists T \in \mathcal T_{\leqslant n}(\mathcal A):~ T \cap {\bf U} = \emptyset) \geqslant 1 - \left( \frac{5}{\log_2 R\delta} \right)^m \mathbb E |{\bf T}| \geqslant 1 - 2^{-m} n > 0.9, $$ here we used the fact that $R\delta = 1024$, $m > \log_2 n + 9$ and that every element of $\mathcal T_{\leqslant n}(\mathcal A)$ has size at most $n$.\footnote{In fact, this is the only place in the argument where we need this restriction on the sizes of the coverings.} We conclude that if we take a random set $\bf U$ of density $(1-\delta)^m$ then with probability at least $0.9$ there is a $T \in \mathcal T_{\leqslant n}(\mathcal A)$ which does not intersect $\bf U$.
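To spell out the numbers in the last estimate: $R\delta = \frac{n}{2} \cdot \frac{2048}{n} = 1024$ and $\log_2 1024 = 10$, so $\left(\frac{5}{\log_2 R\delta}\right)^m = 2^{-m} < 2^{-\log_2 n - 9} = \frac{1}{512 n}$, and since $\mathbb E|{\bf T}| \leqslant n$ we indeed get $2^{-m} \mathbb E|{\bf T}| < \frac{1}{512} < 0.1$.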
Let us now show that with probability at least $0.5$ the set $\bf U$ contains an element of $\mathcal B$, provided that $\mathcal B$ is large enough. Since by definition of $\mathcal T_{\leqslant n}(\mathcal A)$ every $T \in \mathcal T_{\leqslant n}(\mathcal A)$ intersects every set from $\mathcal B$, this will lead to a contradiction if $|\mathcal B|$ is large enough.
Note that an element $A$ of $\mathcal B$ is contained in $\bf U$ with probability \begin{equation}\label{rh}
(1-\delta)^{m n} = e^{n \log_2 n (-\frac{C}{n} + O(n^{-2}))} = n^{-C/\ln 2 + o(1)}, \end{equation} provided that $n$ is sufficiently large. Denote $\rho = (1-\delta)^{n m}$. For $A \in \mathcal B$ denote by $\xi_A$ the indicator of the event that $A \subset \bf U$ and by $\xi$ the sum of $\xi_A$ over $\mathcal B$. Hence, we have $\mathbb E \xi_A = \rho$ for every $A \in \mathcal B$ and $$
\mathbb E \xi = |\mathcal B| \rho. $$ By Chebyshev's inequality (see, for instance, \cite[Page 303, (21.2)]{J}), it is enough to show that ${\rm Var}\, \xi < (\mathbb E \xi)^2 / 2$, where ${\rm Var}\, \xi$ denotes the variance of the random variable $\xi$. Let us estimate the correlations $(\mathbb E \xi_A \xi_{A'} - \rho^2)$ for $A \neq A'$. It is clear that $$
\mathbb E \xi_A \xi_{A'} = (1-\delta)^{m|A\cup A'|} \leqslant (1-\delta)^{2 m n - m\frac{n}{10^4 \ln n} } = \rho^{2 - \frac{1}{10^4 \ln n}}. $$ By (\ref{rh}), we have $$ \rho^{-\frac{1}{10^4 \ln n}} = \left(n^{\frac{C + o(1)}{\ln 2} }\right)^{ \frac{1}{10^4 \ln n}} = 2^{\frac{C}{10^4} + o(1)} < 1.4 $$ provided that $n$ is large enough. We conclude that the variance of $\xi$ is at most $$
0.4 \rho^2 |\mathcal B|^2 + \rho |\mathcal B| $$
which is less than $(\mathbb E\xi)^2/2$ if $|\mathcal B| > 10/\rho$. Therefore, provided that $|\mathcal B| \geqslant n^{3000} > 10/\rho$, with probability at least $0.5$ the random set $\bf U$ contains an element of $\mathcal B$ and with probability at least $0.9$ it does not intersect an element of $\mathcal T_{\leqslant n}(\mathcal A)$. But these two events cannot happen simultaneously. This is a contradiction and Lemma \ref{small2} is proved. \end{proof}
\subsection{Moments of the degree function}
In this section we show that if we have an $n$-uniform family $\mathcal A$ such that $\tau(\mathcal A)$ is ``large" but $|\mathcal A|$ is ``small" then the $l$-wise intersections of sets from $\mathcal A$ are very small on average. More precisely, we will prove the following:
\begin{lemma}\label{dec2} Let $n\geqslant 1$ and $m, t \geqslant 1$. Let $\mathcal A$ be an $n$-uniform family of size at most $e^{m}$ and $\tau(\mathcal A) \geqslant t$. Then, for every $l \geqslant 1$, there is a subfamily $\mathcal A' \subset \mathcal A$ such that $\tau(\mathcal A \setminus \mathcal A') \leqslant t/2$ and for every $i = 1, \ldots, l$ we have \begin{equation*}
\mathbb E_{A_1, \ldots, A_i \in \mathcal A'} |A_1 \cap \ldots \cap A_i| \leqslant C_l \left(\frac{m}{t} \right)^{i-1} n, \end{equation*} where $C_l \ll 2^{l^2}$ depends only on $l$ and the average is taken over all $A_1, \ldots, A_i \in \mathcal A'$ chosen uniformly and independently. \end{lemma}
Let $X$ denote the ground set of an $n$-uniform family $\mathcal F$. For a function $f: X \rightarrow \mathbb R_+$ and $S \subset X$ we denote by $f(S)$ the sum $\sum_{x\in S}f(x)$.
\begin{obs}\label{sa} For any non-zero function $f: X \rightarrow \mathbb R_+$ and any family $\mathcal F$ on $X$ we have \begin{equation}\label{dis}
\sum_{F \in \mathcal F} \left(1 - \frac{f(F)}{f(X)} \right)^{\tau(\mathcal F)-1} \geqslant 1. \end{equation} In particular, for any $f: X \rightarrow \mathbb R_+$ there always exists $F \in \mathcal F$ such that $$
f(F) \leqslant f(X) (1 - |\mathcal F|^{-1/(\tau(\mathcal F)-1)}). $$ \end{obs}
\begin{proof} Put $t = \tau(\mathcal F) - 1$ and let $x_1, \ldots, x_t \in X$ be a sequence of independent random elements of $X$, each sampled according to the probability distribution proportional to $f$. Then the left hand side of (\ref{dis}) is the expectation of the number of sets $F \in \mathcal F$ which are not covered by the set $\{x_1, \ldots, x_t\}$. Since $\tau(\mathcal F) > t$, this number is always positive and (\ref{dis}) follows. \end{proof}
The following variant of this observation will be slightly more convenient to use.
\begin{cor}\label{mu}
Let $f_1, \ldots, f_l: X \rightarrow \mathbb R_+$ be arbitrary non-zero functions and $\mathcal F$ be an arbitrary family on $X$. Then there exists $F \in \mathcal F$ such that $f_i(F) \leqslant f_i(X) l (1 - |\mathcal F|^{-1/(\tau(\mathcal F)-1)})$ for any $i = 1, \ldots, l$. \end{cor}
\begin{proof} Apply Observation \ref{sa} to $f(x) = \sum_{i = 1}^l \frac{f_i(x)}{f_i(X)}$. \end{proof}
For a family $\mathcal F$ on the ground set $X$ let $d_\mathcal F: X \rightarrow \mathbb R_+$ be the degree function of the family $\mathcal F$, that is, if $x \in X$ then $d_\mathcal F(x)$ equals to the number of sets $F \in \mathcal F$ which contain $x$. Let $d_\mathcal F^l: X \rightarrow \mathbb R_+$ denote the $l$-th power of $d_\mathcal F$, i.e. $d_\mathcal F^l(x) = (d_\mathcal F(x))^l$. By abusing notation, we also denote by $d^l_\mathcal F$ the number $d^l_\mathcal F(X)$.
\begin{obs} For any family $\mathcal F$ and any $l \geqslant 1$ we have the following identity $$
d_\mathcal F^l |\mathcal F|^{-l} = \mathbb E_{F_1, \ldots, F_l \in \mathcal F} |F_1 \cap \ldots \cap F_l|, $$ where $F_1, \ldots, F_l$ are taken from $\mathcal F$ uniformly and independently. \end{obs}
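This is a double count: $d_\mathcal F^l = \sum_{x \in X} d_\mathcal F(x)^l = \sum_{x \in X} \#\{(F_1, \ldots, F_l) \in \mathcal F^l:~ x \in F_1 \cap \ldots \cap F_l\} = \sum_{(F_1, \ldots, F_l) \in \mathcal F^l} |F_1 \cap \ldots \cap F_l|$; dividing by $|\mathcal F|^l$ gives the identity.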
Applying Corollary \ref{mu} to functions $d_\mathcal F^1, \ldots, d_\mathcal F^l$ we obtain the following result.
\begin{lemma}\label{in} Let $l, t \geqslant 1$, let $\mathcal F \subset \mathcal A$ be a subfamily of a family $\mathcal A$ such that $\tau(\mathcal A \setminus \mathcal F) \geqslant t+1$. Then there exists $A \in \mathcal A \setminus \mathcal F$ such that the following holds. Denote $\mathcal F' = \mathcal F \cup \{A\}$, then for any $i = 1, \ldots, l$ we have: \begin{equation}\label{req}
d_{\mathcal F'}^i \leqslant d_\mathcal F^i + \left(\frac{l \log |\mathcal A|}{t}\right) 2^i d_\mathcal F^{i-1} + n. \end{equation} \end{lemma}
\begin{proof} For $i = 1, \ldots, l$ let \begin{equation}\label{f}
f_i(x) = \sum_{j = 1}^{i-1} {i \choose j} d_\mathcal F^j(x). \end{equation} Apply Corollary \ref{mu} to functions $f_1, \ldots, f_l$ and the family $\mathcal A \setminus \mathcal F$. Then there exists $A \in \mathcal A\setminus \mathcal F$ such that for every $i = 1, \ldots, l$ we have \begin{equation}\label{ineq}
f_i(A) \leqslant f_i(X) l (1 - |\mathcal A \setminus \mathcal F|^{-1/(\tau(\mathcal A\setminus \mathcal F)-1)}) \leqslant f_i(X) l(1 - |\mathcal A|^{-1/t}) \leqslant f_i(X) \frac{l \log |\mathcal A|}{t}, \end{equation} by the standard inequality $1-e^{-x} \leqslant x$. But note that for $\mathcal F' = \mathcal F \cup \{A\}$ by (\ref{f}) we have \begin{equation}\label{dmd}
d_{\mathcal F'}^i - d_{\mathcal F}^i = \sum_{x \in X} d_{\mathcal F'}^i(x) - d_\mathcal F^i(x) = \sum_{x \in A} (d_{\mathcal F}(x) + 1 )^i - d_\mathcal F^i(x) = f_i(A) + n. \end{equation} Note that $d_\mathcal F^i$ is monotone increasing in $i$ and so $$ f_i(X) = \sum_{j = 1}^{i-1} {i \choose j} d_\mathcal F^j \leqslant 2^i d_\mathcal F^{i-1}. $$ The bound (\ref{req}) now follows from (\ref{ineq}) and (\ref{dmd}). \end{proof}
Now we are ready to prove Lemma \ref{dec2}.
\begin{proof}[Proof of Lemma \ref{dec2}]
Let $X$ denote the ground set of $\mathcal A$ and put $\gamma = 2l m /t$.
Let $\mathcal F \subset \mathcal A$ be a maximal subfamily in $\mathcal A$ such that for every $i = 1, \ldots, l$ we have \begin{equation}\label{max}
d_\mathcal F^i \leqslant 2^{i^2}\gamma^{i-1} n |\mathcal F|^i + 2^{i^2}n|\mathcal F|. \end{equation}
Note that if $|\mathcal F| = 1$ then (\ref{max}) clearly holds and so $\mathcal F$ is well-defined. To prove Lemma \ref{dec} it is clearly enough to show that any such $\mathcal F$ satisfies $\tau(\mathcal A \setminus \mathcal F) \leqslant t / 2$. Indeed, in this case we have $\tau(\mathcal F) \geqslant t/2$ and, in particular, $|\mathcal F| \geqslant t/2$.
Then $\gamma |\mathcal F| \geqslant m \geqslant 1$ and, therefore, the first term in (\ref{max}) dominates the second one.
Now we show that it is impossible to have $\tau(\mathcal A \setminus \mathcal F) \geqslant t/2 + 1$. Indeed, in this case we can apply Lemma \ref{in} to the pair $\mathcal F \subset \mathcal A$ and obtain a family $\mathcal F' = \mathcal F \cup \{ A \}$ such that (\ref{req}) holds for $i = 1, \ldots, l$ and with $t/2$ instead of $t$. Note that $l \log |\mathcal A|/(t/2) \leqslant \gamma$. On the other hand, the maximality of $\mathcal F$ implies that there is some $i \in \{2, \ldots, l\}$ ($i \neq 1$ because otherwise (\ref{max}) holds automatically) such that $$
d^i_{\mathcal F'} > 2^{i^2} \gamma^{i-1}n(|\mathcal F|+1)^i + 2^{i^2}n(|\mathcal F|+1) \geqslant 2^{i^2}\gamma^{i-1} n|\mathcal F|^i + 2^{i^2}\gamma^{i-1}n|\mathcal F|^{i-1} + 2^{i^2}n|\mathcal F| + 2^{i^2}n. $$ On the other hand, from (\ref{req}) we get \begin{align*}
d^i_{\mathcal F'} \leqslant d_\mathcal F^i + \gamma 2^i d_\mathcal F^{i-1} + n \leqslant (2^{i^2}\gamma^{i-1} n|\mathcal F|^i + 2^{i^2}n|\mathcal F|) + \gamma 2^i( 2^{(i-1)^2}\gamma^{i-2} n|\mathcal F|^{i-1} + 2^{(i-1)^2} n |\mathcal F|) + n. \end{align*} Combining these two inequalities and cancelling same terms we get $$
(2^{i^2} - 2^{i^2 - i + 1})\gamma^{i-1}n|\mathcal F|^{i-1} - \gamma 2^{i^2 - i + 1} n|\mathcal F| + (2^{i^2} - 1)n < 0. $$
So if we let $x = \gamma |\mathcal F|$ then, after dividing by $2^{i^2}n$, we obtain \begin{equation}\label{ss} 2^{-i+1}x > (1 - 2^{-i+1})x^{i-1}+\frac{1}{2}. \end{equation} Recall that $i \geqslant 2$. So if $x \geqslant 1$ then the first term on the right hand side (\ref{ss}) is greater than $2^{-i+1}x$. If $x < 1$ then the second term is greater than $2^{-i+1}x$. In both cases we arrive at a contradiction. Lemma \ref{dec} is proved. \end{proof}
\subsection{Bounded degree families}\label{sec25}
In this section we consider intersecting families of bounded degree. In fact, this is essentially the only place in the paper where we use the fact that the family is intersecting. The idea to consider low degree families in the Erd{\H o}s--Lov{\' a}sz problem also appears in \cite[Section 2]{F}.
\begin{lemma}\label{lbodeg2} Let $n \geqslant 1$ and $r \geqslant 2l$ be such that $r^2 \leqslant l^3 n$. Let $\mathcal B$ be an $n$-uniform intersecting family of size $r$ such that every $l$ distinct sets from $\mathcal B$ have an empty intersection. Then \begin{equation}\label{lel}
c_n(\mathcal B) \leqslant e^{-\frac{r^2}{10 l^3 n}}. \end{equation} \end{lemma}
\begin{proof}[Proof of Lemma \ref{lbodeg2}]
In order to prove this lemma, we need to recall the classical Erd{\H o}s--Lov{\' a}sz encoding procedure which they used to obtain the bound $|\mathcal F| \leqslant n^n$ for the size of an $n$-uniform maximal intersecting family. Denote $\mathcal B = \{F_1, \ldots, F_r\}$.
\paragraph{Procedure.} Let $T \in \mathcal T_{\leqslant n}(\mathcal B)$ and $S \subset T$ be a proper subset. From the pair $(T, S)$ we construct a new pair $(T, S')$ as follows. Let $i \in [r]$ be the minimum number so that $F_i \cap S = \emptyset$. Pick arbitrary $x \in F_i \cap T$ and let $S' = S \cup \{x\}$.
So if we apply this procedure to any $T \in \mathcal T_{\leqslant n}(\mathcal B)$ and $S = \emptyset$ then we will obtain a sequence of sets of the form: \begin{equation}\label{seq}
\emptyset = S_0 \subset S_1 \subset \ldots \subset S_{|T|} = T. \end{equation}
Note that the sequence $(S_0, \ldots, S_{|T|})$ is not determined uniquely by $T$ since there may be an ambiguity in the choice of $x \in F_i \cap T$ during the procedure. Let $\mathcal T_1 \subset \mathcal T_{\leqslant n}(\mathcal B)$ be the family of sets $T$ such that the sequence $(S_0, \ldots, S_{|T|})$ is determined uniquely by $T$. In other words, at each step we have an equality $|F_i \cap T| = 1$. Let $\mathcal T_2 = \mathcal T_{\leqslant n}(\mathcal B) \setminus \mathcal T_1$.
Now we denote by $\mathcal J$ the set of all sequences $(S_0, S_1, \ldots, S_k)$ which may occur during the procedure starting from some $T \in \mathcal T_{\leqslant n}(\mathcal B)$ and $S = \emptyset$. Let $\mathcal J = \mathcal J_1 \cup \mathcal J_2$ be the decomposition arising from the decomposition $\mathcal T_{\leqslant n}(\mathcal B) = \mathcal T_1 \cup \mathcal T_2$. The weight $w(\bar S)$ of a sequence $\bar S = (S_0, \ldots, S_k)$ is defined to be $n^{-|S_k|}$. The standard Erd{\H o}s--Lov{\' a}sz \cite{EL} argument shows that the weight $w(\mathcal J)$ of the family $\mathcal J$ is always at most $1$. We omit the proof since it is very similar in spirit to the proof of Lemma \ref{lm} and Corollary \ref{st}.
On the other hand, we can bound weights of families $\mathcal T_1$ and $\mathcal T_2$ in terms of weights of $\mathcal J_1$ and $\mathcal J_2$ as follows: \begin{equation}\label{cj}
c_n(\mathcal B) = w_n(\mathcal T_{\leqslant n}(\mathcal B)) = w_n(\mathcal T_1) + w_n(\mathcal T_2) \leqslant w(\mathcal J_1) + \frac{1}{2} w(\mathcal J_2) \leqslant \frac{w(\mathcal J_1) + 1}{2}. \end{equation}
So it is enough to obtain a good upper bound on $w(\mathcal J_1)$. For $T \in \mathcal T_1$ we denote by $S_i(T)$ the $i$-th element of the sequence of $T$ in the process (which is defined uniquely for elements of $\mathcal T_1$). We denote by $A_i(T) \in \mathcal B$ the element of $\mathcal B$ which was picked at step $i-1$ of the process. In particular, $S_{i-1}(T) \cap A_i(T) = \emptyset$ and $|S_{i}(T) \cap A_i(T)| = 1$. We denote by $x_i(T)$ the unique element in the intersection $S_{i}(T) \cap A_i(T)$.
The uniqueness of the sequence $\bar S(T)$ implies that for any $j < i$ we have $$ x_i(T) \not \in A_j(T). $$
Indeed, otherwise at step $j$ we may have picked the element $x_i(T)$ instead of $x_j(T)$ and thus formed a different sequence $(S_0', \ldots, S'_{|T|})$ which corresponds to the covering $T$. We conclude that $$ x_i(T) \in A_i(T) \setminus \bigcup_{j < i} A_j(T) =: Y_i(T). $$ Since the family $\mathcal B$ is intersecting and does not contain $l$-wise intersections we have the following upper bound on the size of $Y_i(T)$: $$
|Y_i(T)| \leqslant n - \frac{i-1}{l}. $$ For $q \geqslant 0$ and a given sequence $\bar S = (S_0 \subset S_1 \subset \ldots \subset S_q)$ we denote by $\mathcal J_1(\bar S)$ the family of sequences from $\mathcal J_1$ which start from $\bar S$.
\begin{obs}\label{obb}
For any sequence $\bar S = (S_0, S_1, \ldots, S_{i-1})$ which is a part of the sequence of some $T\in \mathcal T_1$ such that $|T| > i$ we have $$ w(\mathcal J_1(\bar S)) \leqslant \frac{1}{n} \sum_{x \in Y_i(T)} w(\mathcal J_1(\bar S, S_{i-1} \cup \{x\})). $$ \end{obs} \begin{proof} Indeed, the observation says that a sequence $\bar S$ can be extended only by the elements of the set $Y_i(T)$ and, therefore, its weight is bounded by the sum of the weights of all possible extensions. \end{proof} For $q \geqslant 0$ let \begin{equation}\label{sup}
f(q) = \max_{\bar S = (S_0, S_1, \ldots, S_q)} w(\mathcal J_1(\bar S)). \end{equation} The following proposition will finish the proof. Note that $\tau(\mathcal B) \geqslant r/l$ because any element $x \in X$ covers at most $l$ sets from $\mathcal B$.
\begin{prop}\label{recu} For any $q \in [0, r/l]$ we have $$ f(q) \leqslant \prod_{i = q}^{[r/l]-1} \left (1 - \frac{i-1}{n l}\right). $$ \end{prop}
\begin{proof} The proof is by induction. The base case $q = [r/l]$ states that $f([r/l]) \leqslant 1$ which we already know by the Erd{\H o}s--Lov{\' a}sz argument.
For the induction step, let $T \in \mathcal T_1$ be a covering on which the maximum in (\ref{sup}) is attained. Now apply Observation \ref{obb} and the induction hypothesis to conclude that $$
f(q) \leqslant \frac{1}{n} |Y_i(T)| f(q+1) \leqslant \left(1 - \frac{i-1}{nl} \right)f(q+1), $$ where $T$ corresponds to a maximizer of the supremum on the left hand side. \end{proof}
Substituting $q = 0$ in Proposition \ref{recu} we get $$ w(\mathcal J_1) = f(0) \leqslant \prod_{i = 1}^{[r/l]-1} (1 - \frac{i-1}{n l}) \leqslant \left( 1 - \frac{r}{2n l^2} \right)^{r/2l} \leqslant e^{ - \frac{r^2}{4l^3 n}}. $$ Let $y = \frac{r^2}{l^3 n}$. By assumption we have $y \leqslant 1$ and so we have the following elementary inequality: $e^{-y/4}+1 \leqslant 2 e^{-y/10}$. By (\ref{cj}), the desired inequality (\ref{lel}) follows. \end{proof}
The following simple corollary will be more convenient to combine with Lemma \ref{crlm} in the proof of Theorem \ref{el}.
\begin{cor}\label{corbd} Let $n \geqslant 1$ and $r \geqslant 2l$ be such that $r^2 \leqslant l^3 n$. Let $\mathcal B$ be an $n$-uniform intersecting family of size $r$ such that every $l$ distinct sets from $\mathcal B$ have an empty intersection. Then for $k \leqslant \frac{r}{20 l^3}$ we have \begin{equation*}
c_{n - k}(\mathcal B) \leqslant 1. \end{equation*} \end{cor}
\begin{proof}
Note that any minimal covering of $\mathcal B$ has size at most $|\mathcal B| = r$. So for any $\lambda \leqslant 1$ we have $$ c_{\lambda n}(\mathcal B) \leqslant \lambda^{-r} c_n(\mathcal B). $$ By Lemma \ref{lbodeg2}, if we let $\lambda = e^{- \frac{r}{10 l^3 n}}$ then $c_{\lambda n}(\mathcal B) \leqslant 1$. Now if $k \leqslant \frac{r}{20 l^3}$ then $$ \frac{n-k}{n} \geqslant 1 - \frac{r}{20 l^3 n} \geqslant e^{-\frac{r}{10 l^3 n}}, $$ which implies that $c_{n-k}(\mathcal B) \leqslant 1$. \end{proof}
\section{Proof of Theorem \ref{main}}\label{sec3}
In this section we put all developed machinery together to prove Theorem \ref{main}. We restate the theorem below for convenience.
\begin{theorem}\label{main2} For all $\varepsilon > 0$ and sufficiently large $n > n_0(\varepsilon)$ we have the following. Let $\mathcal A$ be an intersecting $n$-uniform family. Then \begin{equation}\label{maine2} c_n(\mathcal A) \leqslant e^{1- \frac{\tau(\mathcal A)^{1.5-\varepsilon}}{n}}. \end{equation} \end{theorem}
Now we begin the proof of Theorem \ref{main}. Fix $n > n_0(\varepsilon)$ and suppose that there exists an intersecting family $\mathcal A$ which violates (\ref{maine}). Let $\mathcal A$ be any such family of minimal possible size. In particular, $\mathcal A$ is a $\tau$-critical family and $\tau(\mathcal A) > n^{2/3}$ because otherwise the right hand side of (\ref{maine2}) is greater than 1 and so we are done by Corollary \ref{st}.
By the minimality of $\mathcal A$, for any proper subfamily $\mathcal A' \subset \mathcal A$ we have \begin{equation}\label{apr}
c_n(\mathcal A') \leqslant e^{1- \frac{\tau(\mathcal A')^{1.5-\varepsilon}}{n}}. \end{equation} We are going to apply Lemma \ref{crlm} to various subfamilies of $\mathcal A$ and $f(t) = t^{1.5-\varepsilon} - 1$. Let $\lambda = e^{-f'(\tau(\mathcal A))} = e^{-(1.5-\varepsilon) \frac{\tau(\mathcal A)^{0.5-\varepsilon}}{n}}$ and $k =\sqrt{\tau(\mathcal A)}$.
\begin{prop}\label{pgap} For any $A_1, A_2 \in \mathcal A$ we have \begin{equation}\label{gap}
|A_1 \cap A_2| \not \in [k, n-k]. \end{equation} \end{prop}
\begin{proof}
Suppose that there are some $A_1, A_2 \in \mathcal A$ such that $|A_1 \cap A_2| = x \in [k, n-k]$. Denote $\mathcal A' = \{A_1, A_2\}$ and note that \begin{equation}\label{e2}
c_{n-k/3}(\mathcal A') = \frac{x}{n-k/3} + \frac{(n-x)^2}{(n-k/3)^2} \leqslant 1, \end{equation} where the latter inequality holds for every $x \in [k, n-k]$ and any $k \leqslant 0.1 n$.
Since $\frac{n-k/3}{n} \leqslant \lambda$ for sufficiently large $n$, by Lemma \ref{crlm} applied to $\mathcal A'$ we deduce that (\ref{apr}) holds for $\mathcal A$ as well. This is however a contradiction to our initial assumption that $\mathcal A$ does not satisfy (\ref{maine2}). \end{proof}
Now we define a relation $\sim$ on $\mathcal A$ as follows: two sets $A_1$, $A_2 \in \mathcal A$ are equivalent if $|A_1 \cap A_2| \geqslant n/2$. Then Proposition \ref{pgap} implies that $\sim$ is an equivalence relation on $\mathcal A$. Let \begin{equation}\label{ecd}
\mathcal A = \mathcal K_1 \cup \ldots \cup \mathcal K_N \end{equation}
be the equivalence class decomposition on $\mathcal A$ corresponding to $\sim$. This means that for every $i = 1, \ldots, N$ and any $F_1, F_2 \in \mathcal K_i$ we have $|F_1 \cap F_2| \geqslant n-k$ and for any $i \neq j$ and $F_1 \in\mathcal K_i$ and $F_2 \in \mathcal K_j$ we have $|F_1 \cap F_2| \leqslant k$.
\begin{prop}\label{pr2}
For every $i = 1, \ldots, N$ we have $|\bigcap \mathcal K_i| \geqslant n-5k$. \end{prop}
\begin{proof}
Suppose that $|\bigcap \mathcal K_i| < n-5k$ for some $i$. Let $F \in \mathcal K_i$ be an arbitrary set from $\mathcal K_i$ and let $K \subset F$ be any subset of size $(n-k)$. Lemma \ref{ker2} applied to the family $\mathcal K_i$ and the set $K$ implies that $c_{n-k}(\mathcal K_i) \leqslant 1$. So Lemma \ref{crlm} applied to $\mathcal K_i$ implies that $\mathcal A$ satisfies (\ref{apr}), a contradiction. \end{proof}
\begin{prop}
We have $|\mathcal A| \leqslant n^{6k}$. \end{prop}
\begin{proof}
Indeed, by Lemma \ref{kerup}, Proposition \ref{pr2} and $\tau$-criticality of $\mathcal A$ we have $|\mathcal K_i| \leqslant {\tau(\mathcal A)+5k \choose 5k}$ for any $i = 1, \ldots, N$.
Now let $A_i \in \mathcal K_i$ be arbitrary representatives. Note that $|A_i \cap A_j| \leqslant k$ for any $i \neq j$. Obviously $k \ll \frac{n}{\log n}$, so by Lemma \ref{small2} we either have $N \leqslant n^{C'}$ or there is a proper subfamily $\mathcal A' \subset \mathcal A$ such that $$ 2^{\tau(\mathcal A)}c_n(\mathcal A) \leqslant 2^{\tau(\mathcal A')} c_n(\mathcal A') \leqslant 2^{\tau(\mathcal A')} C \lambda^{\tau(\mathcal A')}, $$ which immediately implies (\ref{maine}). This implies that we in fact have $N \leqslant n^{C'}$ and so $$
|\mathcal A| \leqslant n^{C'} {\tau(\mathcal A)+5k \choose 5k} \leqslant n^{6k}, $$ provided that $n$ is large enough. \end{proof}
Denote $m = \log n^{6k} = 6k \log n$ and let $l = 10\varepsilon^{-1}$. By Lemma \ref{dec2}, there is a subfamily $\mathcal A' \subset \mathcal A$ with $\tau(\mathcal A\setminus \mathcal A') \leqslant \tau(\mathcal A)/2$ such that for every $i = 2, \ldots, l$ we have \begin{equation}\label{expec}
\mathbb E_{A_1, \ldots, A_i \in \mathcal A'} |A_1 \cap \ldots \cap A_i| \leqslant C_l \left( \frac{m}{\tau(\mathcal A)} \right)^{i-1}n, \end{equation} for some new constant $C_l \ll 2^{l^2}$. Let $r = n^{-\varepsilon} \frac{\tau(\mathcal A)}{m}$ (note that $r \gg 1$ since $\tau(\mathcal A) \geqslant n^{2/3}$ by assumption).
Sample uniformly and independently sets $B_1, \ldots, B_r \in \mathcal A'$ and form a random family $\mathcal B = \{B_1, \ldots, B_r\}$. Applying (\ref{expec}) to all $l$-element intersections in $\mathcal B$ we get \begin{equation*}
\mathbb E \sum_{S \in {[r] \choose l}} \left|\bigcap_{i \in S} B_i\right| \leqslant C_l {r \choose l} \left( \frac{m}{\tau(\mathcal A)}\right)^{l-1}n \leqslant C_l n^{1- \varepsilon l} \frac{\tau(\mathcal A)}{m} \leqslant n^{2-\varepsilon l} < 1 \end{equation*} for sufficiently large $n$.
So there exists an $r$-element family $\mathcal B \subset \mathcal A'$ such that all $l$-wise intersections of sets from $\mathcal B$ are empty. By Corollary \ref{corbd}, for $h = \frac{r}{20l^3}$ we have $c_{n-h}(\mathcal B) \leqslant 1$. But $$ \frac{n-h}{n} \leqslant 1 - \frac{\tau(\mathcal A)}{20l^3 m n^{1+\varepsilon}} \leqslant 1 -\frac{\tau(\mathcal A)}{k n} \leqslant \lambda, $$ by the choice of $k = \sqrt{\tau(\mathcal A)}$ and $\lambda = e^{-(1.5-\varepsilon) \frac{\tau(\mathcal A)^{0.5-\varepsilon}}{n}}$ and sufficiently large $n$. So by Lemma \ref{crlm} applied to $\mathcal F' = \mathcal B$ we have (\ref{maine2}). Theorem \ref{main} is proved.
\section{Remarks}\label{sec4}
Let us describe a construction of a maximal intersecting family which generalizes examples from \cite{EL} and \cite{FOT}. Let $G$ be a tournament on the vertex set $\{1, \ldots, m\}$ and let $K_1, \ldots, K_m$ be a sequence of disjoint non-empty sets. Let $\mathcal K_i$ be the family of all sets $F$ such that $K_i \subset F$ and for $i \neq j$: \begin{align*}
|F \cap K_j| = \begin{cases}
1,\text{ if $(i, j) \in G$,}\\
0, \text{ if $(i, j) \not \in G$.}
\end{cases} \end{align*}
It is clear from this definition that the family $\mathcal F = \mathcal K_1 \cup \ldots \cup \mathcal K_m$ is intersecting. Let $d_i$ be the outdegree of the vertex $i$ and let $n > \max d_i$. If we let $|K_i| = n-d_i$ then the family $\mathcal F$ is $n$-uniform and intersecting: every $F \in \mathcal K_i$ consists of $K_i$ together with exactly one point from each $K_j$ with $(i,j) \in G$, so $|F| = (n-d_i) + d_i = n$.
It is not difficult to characterize all minimal coverings of $\mathcal F$. First, observe that if $T$ is a minimal covering of $\mathcal F$ then $|T \cap K_i| \in \{0, 1, |K_i|\}$. Then for every minimal covering $T$ we can define two sets $A, B \subset [m]$, namely, $A$ is the set of all $i$ such that $|T \cap K_i| = 1$ and $B$ is the set of all $i$ such that $K_i \subset T$. Now the fact that $T$ is a covering is equivalent to the assertion that $A \cup B \cup N_{in}(B) = [m]$, where $N_{in}(B)$ denotes the set of all vertices of $G$ from which there is an edge to $B$.
\paragraph{Example 1.} If we let $G$ be the linearly ordered complete directed graph then $\mathcal F$ coincides with the family constructed by Erd{\H o}s--Lov{\' a}sz \cite{EL}. In this case $\tau(\mathcal F) = n$ and $|\mathcal F|$ is approximately $n!$.
\paragraph{Example 2.} Let $G$ be the graph on the vertex set $\mathbb Z_{2t-1} \cup \{v\}$, where for $i, j \in \mathbb Z_{2t-1}$ there is an edge from $i$ to $j$ whenever $j-i \in [1, t-1]$, and the vertex $v$ has outdegree $2t-1$. In this case we have $n = 2t$, $\tau(\mathcal F) = n$ and $|\mathcal F|$ is approximately $\left(\frac{n}{2} \right)^{n}$. Note that the main contribution to the size of $\mathcal F$ comes from the family $\mathcal K_v$ corresponding to the vertex $v$.\footnote{The construction of a maximal intersecting family of size $(n/2)^n$ for odd $n$ is a bit more delicate, see \cite{FOT}.}
It is not hard to see that the construction in the second example gives the maximum size of $\mathcal F$ among all constructions of this type.
The construction above and the decomposition (\ref{ecd}) which we used in the proof of Theorem \ref{el} suggest considering the following special class of families.
Let $K_1, \ldots, K_n$ be disjoint sets such that $|K_i| = n-a_i$. Let $\mathcal K_i$ be an $n$-uniform family of sets containing $K_i$. Let $\mathcal F = \mathcal K_1 \cup \ldots \cup \mathcal K_n$ and suppose that $\tau(\mathcal F) = n$ and $\mathcal F$ is intersecting.
\begin{conj}[\cite{K}]\label{kconj}
In the situation described above we have $\sum_{i = 1}^n a_i \geqslant {n \choose 2}$. Moreover, the condition that $|K_i| = n-a_i$ can be replaced by the condition that $\mathcal K_i$ is $(|K_i|+a_i)$-uniform. \end{conj}
Note that, if true, Conjecture \ref{kconj} is best possible: we can take $G$ to be a graph whose outdegrees are precisely $a_1, \ldots, a_n$ and then use the construction of an $n$-uniform family $\mathcal F$ described above. One can easily produce sequences of degrees $a_1, \ldots, a_n$ such that the corresponding graph exists and $\tau(\mathcal F) = n$.
Note that if there is a counterexample to Conjecture 1 such that, say, $a_i \sim n^{1-\varepsilon}$ for some $\varepsilon > 0$ and every $i = 1, \ldots, n$, then one can construct a very large maximal intersecting family as follows.
Let $\mathcal F_0 = \mathcal K_1 \cup \ldots \cup \mathcal K_n$ be an $n$-uniform family such that $\tau(\mathcal F_0) = n$. Then any set $F$ such that $|F \cap K_i| = 1$, for every $i$, is a minimal covering of $\mathcal F_0$. Denote the family of all such sets by $\mathcal F'_1$. We have $$
c_n(\mathcal F_0) \geqslant n^{-n}|\mathcal F'_1| = n^{-n} \prod_{i = 1}^n (n - a_i) \sim (1 - n^{-\varepsilon})^n \sim e^{- n^{1-\varepsilon}}. $$ Moreover, if we let $\mathcal F_1$ be an $(n+1)$-uniform family of sets $F \cup \{x_0\}$, where $F \in \mathcal F'_1$ and $x_0$ is a ``new'' element of the ground set, then $\mathcal F = \mathcal F_0 \cup \mathcal F_1$ is an intersecting family of $n$- and $(n+1)$-element sets such that $\tau(\mathcal F) = n$ and each member of $\mathcal F$ is a minimal covering of $\mathcal F$. The family $\mathcal F$ has size at least $e^{-n^{1-\varepsilon}} n^n$ and so it essentially contradicts the conjecture of Frankl--Ota--Tokushige \cite{FOT}.
In the setting of Conjecture \ref{kconj} we were only able to prove the lower bound $\sum_{i=1}^n a_i \gg n^{3/2}$ but any improvement seems to require new ideas.
\paragraph{Acknowledgements.} I thank Andrey Kupavskii for valuable discussions and for telling me about Conjecture \ref{kconj}. I thank P\'eter Frankl, Stijn Cambie and the anonymous referee for useful comments on an earlier version of this paper.
\end{document} | arXiv |
May 2011, 5(2): 245-266. doi: 10.3934/amc.2011.5.245
Canonization of linear codes over $\mathbb{Z}_4$
Thomas Feulner
Department of Mathematics, University of Bayreuth, 95440 Bayreuth
Received April 2010; Revised October 2010; Published May 2011
Two linear codes $C, C' \leq \mathbb{Z}_4^n$ are equivalent if there is a permutation $\pi \in S_n$ of the coordinates and a vector $\varphi \in \{1,3\}^n$ of column multiplications such that $(\varphi; \pi) C = C'$. This generalizes the notion of code equivalence of linear codes over finite fields.
In a previous paper, the author has described an algorithm to compute the canonical form of a linear code over a finite field. In the present paper, an algorithm is presented to compute the canonical form as well as the automorphism group of a linear code over $\mathbb{Z}_4$. This solves the isomorphism problem for $\mathbb{Z}_4$-linear codes. An efficient implementation of this algorithm is described and some results on the classification of linear codes over $\mathbb{Z}_4$ for small parameters are discussed.
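To make the equivalence concrete, here is a minimal illustrative sketch (our own addition, not taken from the paper or its implementation; the function names, the ordering convention for applying $\pi$ and $\varphi$, and the toy generators are invented for the example) of how a pair $(\varphi; \pi)$ acts on $\mathbb{Z}_4$-codewords:

```python
# Illustrative sketch only: the action (phi; pi) on Z4-codewords, where phi has
# entries in {1, 3} (the units of Z4) and pi permutes the n coordinates.
from itertools import product

def act(phi, pi, word):
    """Permute the coordinates by pi, then multiply coordinate i by phi[i] mod 4."""
    return tuple((phi[i] * word[pi[i]]) % 4 for i in range(len(word)))

def z4_span(generators, n):
    """All Z4-linear combinations of the generator words (brute force, small n only)."""
    code = set()
    for coeffs in product(range(4), repeat=len(generators)):
        code.add(tuple(sum(a * g[i] for a, g in zip(coeffs, generators)) % 4
                       for i in range(n)))
    return code

# Toy example: a Z4-linear code of length 3 and an equivalent image of it.
C = z4_span([(1, 2, 3), (0, 2, 2)], n=3)
phi, pi = (3, 1, 3), (2, 0, 1)
C_image = {act(phi, pi, w) for w in C}
print(len(C), len(C_image))   # a monomial map is a bijection, so the sizes agree
```

The image $\{(\varphi;\pi)c : c \in C\}$ is again a $\mathbb{Z}_4$-linear code, which is what makes canonization under this group action meaningful.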
Keywords: canonization, group action, representative, coding theory, isometry, automorphism group, $\mathbb{Z}_4$-linear code.
Mathematics Subject Classification: Primary: 05E20; Secondary: 20B25, 94B0.
Citation: Thomas Feulner. Canonization of linear codes over $\mathbb{Z}_4$. Advances in Mathematics of Communications, 2011, 5(2): 245-266. doi: 10.3934/amc.2011.5.245
| CommonCrawl |
Organic Compounds: Definition & Examples (2023)
What did you do this morning?
You probably got up, showered and put on some clothes, perhaps made from cotton or acrylic. You then might have sipped at a coffee whilst eating a slice of toast spread thickly with butter and jam. After that, you might have travelled to work or school, perhaps by car or bus, both fuelled by petrol or diesel. At some point, you sat down, pulled out your phone or computer and started reading this article.
What do these activities have in common? They all involve organic compounds. From the material of your clothes and the food you eat to the fuel for your car and the retina in your eyes, organic compounds are everywhere.
This article is about organic compounds in chemistry.
We'll start by defining organic compounds before looking at the different types of organic compounds.
You'll learn terms such as saturated and alicyclic.
After that, we'll explore organic compound nomenclature and ways of representing these molecules using formulae.
Finally, we'll look at isomerism.
Organic compounds definition
Organic compounds are molecules that are made up of carbon covalently bonded to other atoms, most commonly hydrogen, oxygen, and nitrogen.
There are hundreds of different organic compounds. In fact, thousands - perhaps even millions. They are all based on carbon atoms, covalently bonded to other elements. These are the two fundamental ideas behind organic compounds.
To tell the truth, there is no fixed definition of an organic compound, and some carbon-based molecules are in fact not organic compounds. These include carbonates, cyanides, and carbon dioxide. The reasons behind their exclusion are mostly historic, instead of being based on any defining feature. Structures such as graphite and diamond are also excluded from the group. Because they are made from just one element, they don't count as compounds.
Carbon in organic compounds
Organic molecules are all based on the element carbon. Making up the backbone of all the organic compounds in the world is a big task, but carbon successfully rises to the occasion. But what makes it so versatile?
Well, carbon has two properties in particular that make it so good at forming molecules and compounds:
Its tetravalency.
Its small size.
Tetravalency
Take a look at carbon's electron configuration, shown below.
Fig. 1 - Carbon's electron configuration
You can see that carbon has six electrons. Two are found in an inner shell, whilst four are found in its outer shell (also known as its valence shell). These four outer shell electrons make carbon a tetravalent atom. Atoms tend to want to have full outer shells of electrons, and in carbon's case, this means having eight valence electrons. To achieve a full outer shell, the atom needs to form four covalent bonds. It's not fussy about who it bonds with - it is just as happy bonding with oxygen as it is with nitrogen. This means that carbon forms compounds with a range of different elements, and we'll look at examples of organic molecules featuring both oxygen and nitrogen later.
You know that there are other atoms that have four electrons in their outer shell, such as silicon. Why aren't they as versatile and prevalent as carbon?
It's because carbon is a small atom. Its diminutive size means multiple carbon atoms can fit together easily in complicated structures. We say that it is good at catenation - when atoms of the same element join up in long chains.
The combination of small size and tetravalency means the possible arrangements of carbon atoms, covalently bonded both to each other and to other elements, are practically infinite. This is why we have so many different organic compounds.
Bonding in organic compounds
Organic compounds are joined together using covalent bonds.
A covalent bond is a bond formed by a shared pair of electrons.
Covalent bonds are formed when two atoms each offer up an electron to form a shared pair. The atoms are held together by the electrostatic attraction between their positive nuclei and these negative electrons. This is why most of the elements found in organic compounds are non-metals - they're the ones that can form covalent bonds.
There are a couple of exceptions to this rule - you can find some metals in organic compounds:
Firstly, transition metals can bond to organic compounds using ligand reactions. The two bond together with a dative covalent bond, using a lone pair of electrons from the organic compound. You can read more about this in Transition Metals.
Secondly, beryllium, a group 2 metal, can also form covalent bonds. You'll find out why in the article Group 2.
Types of organic compounds
In this next section, we're going to look at different types of organic compounds and ways of classifying them. We can do this in different ways.
The easiest way to group organic molecules is by their functional group.
We can also distinguish between aliphatic, aromatic, and alicyclic compounds.
Another useful label is saturated or unsaturated.
First, we'll take a look at functional groups.
Functional groups in organic compounds
A species' functional group is the particular group of atoms responsible for its chemical reactions.
The easiest way to distinguish organic compounds is by their functional group. This is the atom or combination of atoms that makes a compound react in a certain way. Carboxylic acids contain the carboxyl functional group, often written as \(-COOH\), whereas amines contain - you guessed it - the amine functional group, or \(-NH_2\).
Types of functional groups
You'll come across the following functional groups when looking at organic compounds.
Family name Functional group Prefix/suffix
Alkane \(C-C\) -ane
Alkene \(C=C\) -ene
Alkyne \( C\equiv C\) -yne
Alcohol \(R-OH\) -ol or hydroxy-
Halogenoalkane \(R-X\) Varying halo- prefix (fluoro-, chloro-, bromo-, iodo-); -ane
Aldehyde \(R-CHO\) -al
Ketone \(R-CO-R\) -one
Carboxylic acid \(R-COOH\) -oic acid
Ester \(R-COO-R\) -oate
Amine \( -NH_2 \) -amine or amino-
We explore all of these groups in more detail in the article Functional Groups.
Wondering what the prefixes and suffixes are for? We use them to name organic compounds, as you'll find out in IUPAC Nomenclature.
Homologous series
Molecules with the same functional group react in very similar ways. Because of that, we tend to group them together in a homologous series.
A homologous series is a group of organic molecules with the same functional group, but different carbon chain lengths.
A homologous series has some fixed properties.
All members can be represented by a general formula. This is a formula that expresses the basic ratio of different atoms in a molecule. We'll explore it in more depth in just a second.
Members all have the same functional group, as we mentioned above.
Members differ only by the number and arrangement of \(-CH_2\) groups in their carbon chain.
All members have the same chemical properties and undergo the same reactions. However, they might have different physical properties.
Aliphatic, aromatic, and alicyclic compounds
Organic molecules can also be classified as aliphatic, aromatic, or alicyclic.
Aliphatic compounds are based on carbon chains full of \(-CH_2\) groups. They don't feature any benzene rings, and can have long straight chains or form cyclic rings. Aliphatic compounds with cyclic rings are called alicyclic compounds.
In contrast, aromatic compounds contain benzene rings with delocalised pi electrons. We represent these rings using a hexagon with a circle in the middle.
Want to find out more about the wonders of benzene? Head over to Aromatic Chemistry, where all will be explained!
Saturated and unsaturated compounds
A third way of labelling organic compounds is using the terms saturated and unsaturated.
Saturated compounds contain only single \(C-C\) bonds.
Unsaturated compounds contain one or more double \(C=C\) bonds or triple \( C\equiv C\) bonds.
You might remember from earlier that a \(C=C\) double bond is the functional group found in alkenes. This makes all alkenes unsaturated compounds. The \( C\equiv C\) triple bond, however, is the functional group found in alkynes. Once again, this makes all alkynes unsaturated.
Biological organic compounds
In biology, you'll probably come across four main groups of organic compounds that are fundamental to life. These are carbohydrates, lipids, proteins, and nucleic acids. We won't go into them here - they're much too important for that! However, you can find out more in the articles dedicated to these molecules: Carbohydrates, Lipids, Proteins, and Nucleic Acids.
Naming organic compounds
Now that we know more about the different types of organic compounds, we can have a look at naming them. The practice of naming organic compounds is known as nomenclature. The official nomenclature system was created by the International Union of Pure and Applied Chemistry (IUPAC), which is the system you need to know for your exams.
To name a molecule, you use the following:
A root name, to show the length of the molecule's longest carbon chain.
Prefixes and suffixes, to show any functional groups and side chains (known as substituents).
Numbers, known as locants, to show the position of functional groups and side chains.
For example, take the molecule 2-bromopropane. The root name -prop- tells us that this molecule is based on a propane chain, which is three carbon atoms long. The suffix -ane indicates that it is an alkane, whilst the prefix bromo- lets us know that this molecule has an additional bromine atom, and so is in fact a halogenoalkane. How about the number 2? That shows that the bromine atom is attached to the second carbon atom in the chain.
Fig. 2 - 2-bromopropane
Nomenclature is a complicated topic, and so we've created a whole article specially dedicated to solving its mysteries. Head over to IUPAC Nomenclature for more.
Organic compound formulae
Let's now focus our attention on ways of representing organic compounds. We do this using chemical formulae. There are a few different types you need to know about. These include:
General formula
Molecular formula
Structural formula
Displayed formula
Skeletal formula
One formula, two formulae - formula is the singular, and formulae is the plural. Don't get them mixed up!
Let's start with general formulae.
Organic compound general formulae
A general formula is a formula that shows the basic ratio of atoms in a compound or molecule. It can be applied to a whole homologous series.
If you want to represent a whole family of compounds with the same functional group, you can use a general formula. They're useful because they can be applied to all the members of a homologous series.
General formulae express the numbers of atoms of each element in a compound in terms of \(n\) . For example, all alkanes have the general formula \(C_nH_{2n+2}\) . The formula tells us that if an alkane has n carbon atoms, it will have \(2n+2\) hydrogen atoms. This means that once we know the number of carbon atoms in an alkane, we can always find out its number of hydrogen atoms - you double the carbon number and add 2. Of course, we can go backwards as well - subtracting 2 from the number of hydrogens and then halving the result gives you the number of carbons. The general formula works for all of the alkanes in the alkane homologous series, from the very small to the very large.
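If it helps to see the "double it and add 2" rule as a calculation, here is a tiny illustrative snippet (our own addition, not part of the original article; the function names are made up):

```python
# Illustrative only: the alkane general formula CnH(2n+2).
def alkane_hydrogens(carbons):
    """Hydrogen count for an alkane with the given number of carbon atoms."""
    return 2 * carbons + 2

def alkane_carbons(hydrogens):
    """Reverse direction: subtract 2 from the hydrogen count, then halve."""
    return (hydrogens - 2) // 2

print(alkane_hydrogens(4))   # 10, so a 4-carbon alkane is C4H10 (butane)
print(alkane_carbons(10))    # 4
```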
Organic compound molecular formulae
General formulae are good at representing a whole family of compounds, but they aren't good at specifying an individual compound. We can do this in several ways. The first way of representing a specific compound is by using its molecular formula.
A molecular formula is a formula that shows the actual number of atoms of each element in a compound.
Let's say that we have an alkane with four carbon atoms. From the general formula, we know that it has \( (2\times 4) + 2 = 10\) hydrogen atoms. Its molecular formula is therefore \(C_4H_{10}\)
Organic compound structural formulae
There's a problem when we only rely on molecular formulae to represent molecules: different molecules can have the same molecular formula. You'll see more of this when we look at isomerism later on. A different type of formula we can use is a structural formula.
A structural formula is a shorthand representation of the structure and arrangement of atoms in a molecule, without showing every bond.
When writing structural formulae, we move along the molecule from one end to the other, writing out each carbon and the groups attached to it separately.
Here's an example. Take the molecular formula \( C_3H_6O\). This could represent multiple different compounds - for example, propanal or propanone.
Propanal has the structural formula \( CH_3CH_2CHO\). This tells us that it has a \( -CH_3\) group, bonded to a \( -CH_2-\) group, bonded to a \( -CHO\) group. In contrast, propanone has the structural formula \( CH_3COCH_3\). This tells us that it has a \( -CH_3\) group, bonded to a \( -CO-\) group, bonded to a \( -CH_3\) group. Do you notice the slight difference?
Fig. 3 - Structural formulae
Organic compound displayed formulae
If we want to show all of the bonds in a compound, we use its displayed formula. Displayed formulae often come in handy when drawing reaction mechanisms.
Displayed formulae show every atom and bond in a molecule.
In displayed formulae, we represent bonds using straight lines. A single straight line tells us that we have a single bond, whereas a double straight line tells us we have a double bond. Although they can be a pain to draw out, displayed formulae are useful because they give us important information about a molecule's unique structure, bonding, and arrangement of atoms.
For example, ethanol has the structural formula \( CH_3CH_2OH\) and the following displayed formula:
Fig. 4 - Displayed formula of ethanol
In this example, we've drawn all the bonds as if the molecule were flat on the page. However, bonds aren't like that in real life. If we want to show a bond sticking out of the page, we use a wedged line. If we want to show a bond protruding backwards into the page, we use a dashed line. Here's an example using methane.
Fig. 5 - Drawing 3D chemical molecules
Organic compound skeletal formulae
The final type of formula we'll look at is the skeletal formula.
Skeletal formulae are another type of formula that act as a shorthand representation of a molecule, showing some aspects of its structure and bonding. It omits certain atoms and bonds in order to simplify the diagram.
Drawing displayed formulae over and over again takes a lot of time. This is where skeletal formulae come in handy. They're an easy way of showing a molecule's structure and bonding without drawing every atom and bond. As in displayed formulae, you represent bonds using straight lines. However, you leave out carbon atoms. You represent these missing carbons using the vertices of the lines, assuming that there is a carbon atom at every unlabelled vertex, junction, or end of a line. You also omit carbon-hydrogen bonds. Instead, you assume that each carbon atom forms exactly four covalent bonds, and that any bonds that aren't shown are carbon-hydrogen bonds.
Sound confusing? Let's take a look at an example. We've already seen the displayed formula of ethanol, \( CH_3CH_2OH\) . Here's how it translates into a skeletal formula.
Fig. 6 - Skeletal formula of ethanol
Isomerism in organic compounds
We've learnt about types of organic compounds and the different formulae we can use to represent them. Finally, let's look at isomerism.
Isomers are molecules with the same molecular formula, but different arrangements of atoms.
Do you remember how earlier we mentioned that molecular formulae aren't that helpful, as one molecular formula can represent multiple different molecules? Well, this is why. Isomers contain exactly the same number of atoms of each element, but the atoms are arranged differently.
There are two main types of isomerism in chemistry: structural isomerism and stereoisomerism.
Structural isomerism
Structural isomers are molecules with the same molecular formula but different structural formulae.
Let's revisit propanal and propanone. As we discovered, they both have the same molecular formula: \( C_3H_6O\) . However, they have different structural formulae. Propanal has the structural formula \( CH_3CH_2CHO\) , and propanone has the structural formula \( CH_3COCH_3\) . This makes them structural isomers.
Structural isomerism can be further split into three subtypes:
Chain isomers differ in the arrangement of their carbon chain. For example, one isomer might be straight, whilst the other might be branched.
Functional group isomers have different functional groups. Propanal and propanone are great examples of this - the first is an aldehyde, the second is a ketone.
Position isomers differ in their placement of the functional group on their carbon chain. For example, propan-1-ol and propan-2-ol are both isomers with the same molecular formula, \(C_3H_8O\) and the same functional group, an \(-OH\) group. But whilst in propan-1-ol the functional group is found on carbon 1, in propan-2-ol, the functional group is found on carbon 2.
Fig. 7 - Position isomerism in propanol
Another type of isomerism is stereoisomerism. If you thought structural isomers were similar, you better brace yourself - stereoisomers are even more alike!
Stereoisomers have both the same molecular formula and the same structural formula, but different arrangements of atoms in space.
To identify stereoisomers, you need to look at a molecule's displayed formula. Remember, this is a formula that shows every atom and bond. It also shows the arrangement of atoms and bonds; this is where stereoisomers differ.
Once again, there are a couple of subtypes of stereoisomerism:
E-Z isomers differ in their arrangement of atoms or groups around a \( C=C\) double bond. You'll find E-Z isomerism in alkenes such as but-2-ene.
Optical isomers differ in their arrangement of 4 different atoms or groups around a central carbon atom. They form non-superimposable, mirror-image molecules of each other.
For more examples of structural and stereoisomerism in action, take a look at Isomerism.
Organic Compounds - Key takeaways
Carbon is suitable for organic compounds because of its small size, tetravalency, and ability to catenate.
Organic compounds have different functional groups. Molecules with the same functional group form a homologous series. These all have the same chemical properties, can be represented by a general formula, and differ only in the number and arrangement of \(-CH_2\) groups in their carbon chain.
Organic compounds can be classified as aliphatic, aromatic, or alicyclic. They can also be saturated or unsaturated.
We name organic compounds using IUPAC nomenclature. Names include a root name to indicate the length of the longest carbon chain, prefixes and suffixes to indicate the functional groups and side chains present, and locants to show the position of these functional groups and side chains.
Organic compounds are represented using formulae. Types of formulae include general, molecular, structural, displayed, and skeletal.
Organic compounds can show isomerism. Isomers are molecules with the same molecular formula but different arrangements of atoms. Structural isomers differ in their structural formulae, whereas stereoisomers have the same structural formula but different spatial arrangements of atoms and bonds.
| CommonCrawl |
Narendra Karmarkar
Narendra Krishna Karmarkar (born circa 1956) is an Indian mathematician. Karmarkar developed Karmarkar's algorithm. He is listed as an ISI highly cited researcher.[2]
Narendra Krishna Karmarkar
Born: circa 1956, Gwalior, Madhya Pradesh, India
Alma mater: IIT Bombay (B.Tech), Caltech (MS), University of California, Berkeley (PhD)
Known for: Karmarkar's algorithm
Scientific career
Fields: Mathematics, computing science
Institutions: Bell Labs
Thesis: Coping with NP-Hard Problems (1983)
Doctoral advisor: Richard M. Karp[1]
He invented one of the first provably polynomial time algorithms for linear programming, which is generally referred to as an interior point method. The algorithm is a cornerstone in the field of linear programming. He published his famous result in 1984 while he was working for Bell Laboratories in New Jersey.
Biography
Karmarkar received his B.Tech in Electrical Engineering from IIT Bombay in 1978, an MS from the California Institute of Technology in 1979,[3] and a PhD in Computer Science from the University of California, Berkeley in 1983 under the supervision of Richard M. Karp.[4] Karmarkar was a post-doctoral research fellow at IBM Research (1983), Member of Technical Staff and fellow at the Mathematical Sciences Research Center, AT&T Bell Laboratories (1983–1998), professor of mathematics at M.I.T. (1991), at the Institute for Advanced Study, Princeton (1996), and Homi Bhabha Chair Professor at the Tata Institute of Fundamental Research in Mumbai from 1998 to 2005. He was the scientific advisor to the chairman of the Tata Group (2006–2007). During this time, he was funded by Ratan Tata to scale up the supercomputer he had designed and prototyped at TIFR. The scaled-up model ranked ahead of a supercomputer in Japan at that time and achieved the best ranking India had ever achieved in supercomputing. He was the founding director of Computational Research Labs in Pune, where the scaling-up work was performed. He continues to work on his new architecture for supercomputing.
Work
Karmarkar's algorithm
Main article: Karmarkar's algorithm
Karmarkar's algorithm solves linear programming problems in polynomial time. These problems are represented by a set of linear constraints involving a set of variables. The previously dominant method, the simplex method, treats the feasible region as a high-dimensional solid with vertices and approaches the solution by traversing from vertex to vertex along its edges. Karmarkar's method instead approaches the solution by moving through the interior of this solid. Consequently, complex optimization problems are solved much faster using Karmarkar's algorithm. A practical example of this efficiency is the solution to a complex problem in communications network optimization, where the solution time was reduced from weeks to days. His algorithm thus enables faster business and policy decisions. Karmarkar's algorithm has stimulated the development of several interior-point methods, some of which are used in current implementations of linear-program solvers.
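To illustrate the interior-point idea in code, the sketch below is our own and is not Karmarkar's original algorithm or any AT&T implementation: it shows the simpler affine-scaling relative of his method, and the function name, parameters, and toy problem are invented for the example.

```python
# Illustrative sketch only (not Karmarkar's original algorithm): one of the
# simplest interior-point schemes, primal affine scaling, for the LP
#     minimize c^T x   subject to   A x = b,  x > 0.
import numpy as np

def affine_scaling(A, b, c, x, alpha=0.5, tol=1e-8, max_iter=200):
    """x must be strictly positive and satisfy A @ x = b (a strictly interior point)."""
    x = x.astype(float)
    assert np.all(x > 0) and np.allclose(A @ x, b)
    for _ in range(max_iter):
        D2 = np.diag(x ** 2)                      # rescale by the current iterate
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)
        r = c - A.T @ w                           # reduced costs (w estimates the dual)
        dx = -D2 @ r                              # descent direction in the scaled space
        if np.linalg.norm(dx) < tol:
            break                                 # no further progress possible
        neg = dx < 0
        if not neg.any():
            break                                 # objective is unbounded below
        step = alpha * np.min(-x[neg] / dx[neg])  # stay strictly inside x > 0
        x = x + step * dx
    return x

# Toy problem: minimize -x1 - x2 subject to x1 + x2 + s = 1, all variables >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -1.0, 0.0])
x0 = np.array([0.2, 0.2, 0.6])                    # strictly interior starting point
print(affine_scaling(A, b, c, x0))                # tends towards [0.5, 0.5, 0.0]
```

Karmarkar's actual method additionally applies a projective rescaling at every step and decreases a logarithmic potential function, which is what yields the provable polynomial running time.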
Galois geometry
After working on the interior-point method, Karmarkar worked on a new architecture for supercomputing, based on concepts from finite geometry, especially projective geometry over finite fields.[5][6][7][8]
Current investigations
Currently, he is synthesizing these concepts with some new ideas he calls sculpturing free space (a non-linear analogue of what has popularly been described as folding the perfect corner).[9] This approach allows him to extend this work to the physical design of machines. He is now publishing updates on his recent work,[10] including an extended abstract.[11] This new paradigm was presented at IVNC, Poland on 16 July 2008,[12] and at MIT on 25 July 2008.[13] Some of his recent work is published at IEEE Xplore.[14] He delivered a lecture on his ongoing work at IIT Bombay in September 2013.[15] He gave a four-part series of lectures at FOCM 2014 (Foundations of Computational Mathematics)[16] titled "Towards a Broader View of Theory of Computing". First part of this lecture series is available at Cornell archive.[17]
Awards
• The Association for Computing Machinery awarded him the prestigious Paris Kanellakis Award in 2000 for his work on polynomial-time interior-point methods for linear programming for "specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing".
• Srinivasa Ramanujan Birth Centenary Award for 1999, presented by the Prime Minister of India.
• Distinguished Alumnus Award, Indian Institute of Technology, Bombay, 1996.
• Distinguished Alumnus Award, Computer Science and Engineering, University of California, Berkeley (1993).
• Fulkerson Prize in Discrete Mathematics given jointly by the American Mathematical Society & Mathematical Programming Society (1988)
• Fellow of Bell Laboratories (since 1987).
• Texas Instruments Founders' Prize (1986).
• Marconi International Young Scientist Award (1985).
• Golden Plate Award of the American Academy of Achievement, presented by former U.S. president (1985).[18][19]
• Frederick W. Lanchester Prize of the Operations Research Society of America for the Best Published Contributions to Operations Research (1984).
• President of India gold medal, I.I.T. Bombay (1978).
References
1. Narendra Karmarkar at the Mathematics Genealogy Project.
2. Thomson ISI. "Karmarkar, Narendra K., ISI Highly Cited Researchers". Archived from the original on 23 March 2006. Retrieved 20 June 2009.
3. "Eighty-Fifth Annual Commencement" (PDF). California Institute of Technology. 8 June 1979. p. 13.
4. Narendra Karmarkar at the Mathematics Genealogy Project
5. Karmarkar, Narendra (1991). "A new parallel architecture for sparse matrix computation based on finite projective geometries". Proceedings of the 1991 ACM/IEEE conference on Supercomputing – Supercomputing '91. pp. 358–369. doi:10.1145/125826.126029. ISBN 0897914597. S2CID 6665759.
6. Karmarkar, N. K., Ramakrishnan, K. G. "Computational results of an interior point algorithm for large scale linear programming". Mathematical Programming. 52: 555–586 (1991).
7. Amruter, B. S., Joshi, R., Karmarkar, N. K. "A Projective Geometry Architecture for Scientific Computation". Proceedings of International Conference on Application Specific Array Processors, IEEE Computer Society, p. 6480 (1992).
8. Karmarkar, N. K. "A New Parallel Architecture for Scientific Computation Based on Finite Projective Geometries". Proceeding of Mathematical Programming, State of the Art, p. 136148 (1994).
9. Angier, Natalie (3 December 1984). "Folding the Perfect Corner". Time Magazine. Archived from the original on 4 December 2008. Retrieved 12 July 2008.
10. Karmarkar, Narendra (11 July 2008). "Narendra Karmarkar's recent research". punetech.com. Retrieved 12 July 2008.
11. Karmarkar, Narendra (11 July 2008). "Massively Parallel Systems and Global Optimization" (PDF). punetech.com Narendra Karmarkar's recent work. Retrieved 12 July 2008.
12. Karmarkar, Narendra (14 July 2008). "Vacuum nanoelectronics devices from the perspective of optimization theory" (PDF). punetech.com Narendra Karmarkar's recent work. Retrieved 14 July 2008.
13. Karmarkar, Narendra. "Seminar on Massively Parallel Systems and Global Optimization". Computation Research in Boston. Retrieved 12 July 2008.
14. http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=5166089&isYear=2009 .
15. Karmarkar, Narendra. "Advanced Algorithmic Approach to Optimization". Research in India. Retrieved 26 September 2003.
16. "Focm 2014".
17. Karmarkar, Narendra (2014). "Towards a Broader View of Theory of Computing". arXiv:1412.3335 [cs.NA].
18. "Golden Plate Awardees of the American Academy of Achievement". www.achievement.org. American Academy of Achievement.
19. "Whiz kids rub elbows with right stuff" (PDF). Rocky Mountain News. 30 June 1985.
External links
• Distinguished Alumnus 1996 IIT Bombay
• Flashback: An Interior Point Method for Linear Programming IIT Bombay Heritage Fund
• Karmarkar function in Scilab
| Wikipedia |
\begin{document}
\title{Rainbow simplices in triangulations of manifolds} \author{ Luis Montejano}
\maketitle
\begin{abstract} Given a coloration of the vertices of a triangulation of a manifold, we give homological conditions on the chromatic complexes under which it is possible to obtain a rainbow simplex. \end{abstract} \section{Introduction and preliminaries} Consider a simplicial complex $\mathsf{K}$ which is a triangulation of an $n$-dimensional manifold and whose vertices are partitioned into $n+1$ subsets $V_0,\ldots,V_n$. Following the spirit of the Sperner lemma, the purpose of this paper is to obtain conditions that allow us to ensure the existence of a rainbow simplex, that is, an $n$-simplex of $\mathsf{K}$ with exactly one vertex in each $V_i$. In particular, we will be interested in giving homological conditions on the chromatic complexes $\mathsf{K}_{\{i\}}$, where we denote by $\mathsf{K}_{\{i\}}$ the subcomplex of $\mathsf{K}$ generated by the vertices of $V_i$.
Throughout this paper, we use reduced homology with coefficients in an arbitrary field and, if no confusion arises, we shall not distinguish between a simplicial complex and its topological realization. For instance, if $\mathsf{L}$ is a subcomplex of the simplicial complex $\mathsf{K}$, in this paper we shall denote by $\mathsf{K}\setminus \mathsf{L}$ the space $|\mathsf{K}|\setminus |\mathsf{L}|$.
Meshulam's lemma~\cite[Proposition 1.6]{Mh2} and~\cite[Theorem 1.5]{Mh1} is a Sperner-lemma type result, dealing with coloured simplicial complexes and rainbow simplices, and in which the classical boundary condition of the Sperner lemma is replaced by an acyclicity condition. It is an important result from topological combinatorics with several applications in combinatorics, such as the generalization of Edmonds' intersection theorem by Aharoni and Berger~\cite{AB} and many other results in which obtaining a system of distinct representatives is relevant, such as Hall's theorem for hypergraphs \cite{AH}. Following this spirit, Meunier and Montejano \cite{MM} generalized Meshulam's lemma, obtaining the following result, which is the main tool in this paper to obtain rainbow simplices.
Consider a simplicial complex $\mathsf{K}$ whose vertex set is partitioned into $V_0,\ldots,V_n$. For $S\subseteq \{0,1,\dots n\}$, we denote by $\mathsf{K}_S$ the subcomplex of $\mathsf{K}$ induced by the vertices in $\bigcup_{i\in S}V_i$. Suppose that for every nonempty $S\subseteq \{0,1,\dots n\}$, \begin{equation}\label{eq1}
\widetilde{H}_{|S|-2}(\mathsf{K}_S)=0. \end{equation}
Then there exists a rainbow simplex $\sigma$ in $\mathsf{K}$. See \cite[Theorem 4]{M} and \cite{MM} for a proof.
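For orientation, here is a minimal illustration of the statement in the simplest case (not needed in what follows): if $n=1$, so that the vertices of $\mathsf{K}$ are partitioned into $V_0$ and $V_1$, condition (\ref{eq1}) asks that both classes be nonempty (the case $|S|=1$) and that $\mathsf{K}$ be connected (the case $|S|=2$); a path joining a vertex of $V_0$ to a vertex of $V_1$ must then contain an edge with one endpoint in each class, which is the promised rainbow simplex.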
We summarize below what we need about PL topology in this paper. See, for example, the book of Rourke and Sanderson \cite{RS}.
Let $\{U_1, U_2\}$ be a partition of the vertices $V(\mathsf{K})$ of the simplicial complex $\mathsf{K}$ and let $<U_1>$ and $<U_2>$ be the subcomplexes of $\mathsf{K}$ induced by $U_1$ and $U_2$, respectively. Let $N(<U_i>,\mathsf{K}')$ be the derived neighborhoods of $<U_i>$ in $\mathsf{K}$ as subcomplexes of the first barycentric subdivision $\mathsf{K}'$, $i=1,2$. Hence: \begin{itemize} \item $<U_i>$ is a strong deformation retract of $N(<U_i>,\mathsf{K}')$, $i=1,2$, and \item $<U_1>$ is a strong deformation retract of $\mathsf{K} \setminus <U_2>$. \end{itemize}
\section{Rainbow simplices in triangulations of $2$ and $3$-manifolds}
Our first result deals with $3$-colorations of triangulations of surfaces.
\begin{theorem} \label{thmsurface} Consider a simplicial complex $\mathsf{K}$ which is a triangulation of a $2$-dimensional manifold and whose vertices are partitioned into $3$ nonempty subsets $V_0,V_1,V_2$. If for every $i=0,1,2$, $$\widetilde H_1(\mathsf{K}_{\{i\}}, \mathsf{K}_{\{i\}}\cap\partial\mathsf{K})=0,$$
then $\mathsf{K}$ admits a rainbow triangle. \end{theorem}
\begin{proof} The condition $\widetilde H_1(\mathsf{K}_{\{i\}}, \mathsf{K}_{\{i\}}\cap\partial\mathsf{K})=0$ implies that every component $\mathsf{L}$ of $\mathsf{K}_{\{i\}}$ is contractible and the intersection of $\mathsf{L}$ with the boundary of $\mathsf{K}$ is either empty or contractible. Of course, we may assume that $\mathsf{K}$ is connected. Let us assume first that $\mathsf{K}$ is the triangulation of a simply connected surface without boundary. In order to find a rainbow triangle in $\mathsf{K}$, by (\ref{eq1}), it will be enough to prove that $\mathsf{K}_S$ is connected, for every subset $S\subset\{0,1,2\}$ of size two. Assume $S=\{1,2\}$ and let $p$ and $q$ be two points in $\mathsf{K}_{\{1,2\}}$. By hypothesis, $N(\mathsf{K}_{\{0\}},\mathsf{K}')$ is a countable collection $\{D_i\}$ of pairwise disjoint topological disks embedded in the surface $\mathsf{K}$. Let $f_i:\mathbb{B}^2\to D_i$ be homeomorphisms and let $\{x_i=f_i(0)\}$ be the collection of centers of all these disks. Since $\mathsf{K}$ is connected, there is an arc $\Gamma$ joining $p$ and $q$ in $\mathsf{K}$. Furthermore, by transversality, we may assume without loss of generality that this arc $\Gamma$ does not intersect the collection of centers $\{x_i\}$. Moreover, use the radial structure of the disks $\{D_i\}$, given by the homeomorphisms $\{f_i\}$, to push the arc $\Gamma$ outside $\mathsf{K}_{\{0\}}$. Since $\mathsf{K}_{\{1,2\}}$ is a strong deformation retract of $\mathsf{K}\setminus \mathsf{K}_{\{0\}}$, we can deform the arc $\Gamma$ to an arc from $p$ to $q$ inside $\mathsf{K}_{\{1,2\}}$, thus proving the connectivity of $\mathsf{K}_{\{1,2\}}$ as we wished.
Suppose now that $\mathsf{K}$ is the triangulation of a $2$-dimensional non simply connected manifold. Taking the universal cover of this surface we obtain a simply connected simplicial complex $\widetilde \mathsf{K}$ that inherits from $\mathsf{K}$ a $3$-coloration $\widetilde V_0, \widetilde V_1, \widetilde V_2$ on its vertices. That is, there is a simplicial map $\pi:\widetilde\mathsf{K}\to \mathsf{K}$ which is a universal cover, where $\widetilde V_i=\pi^{-1}(V_i)$, $i=0,1,2$. Note that if $\mathsf{L}$ is contractible, then $\pi^{-1}(\mathsf{L})$ is a countable union of pairwise disjoint contractible subcomplexes of $\widetilde\mathsf{K}$. Therefore, the fact that every component $\mathsf{L}$ of $\mathsf{K}_{\{i\}}$ is contractible implies that every component of $\widetilde \mathsf{K}_{\{i\}}$ is contractible. Consequently, by the first part of the proof, there is a rainbow simplex in $\widetilde \mathsf{K}$ and, since $\pi$ sends rainbow simplices of $\widetilde \mathsf{K}$ into rainbow simplices of $\mathsf{K}$, we obtain our desired rainbow triangle.
The proof of the theorem for triangulations of surfaces with boundary is completely similar, except that, for a component $\mathsf{L}$ of $\mathsf{K}_{\{i\}}$ such that $\mathsf{L}$ and $\mathsf{L}\cap\partial \mathsf{K}$ are nonempty contractible spaces, we use a homeomorphism $$f_i:\big(\mathbb{B}^2\cap\{(x,y) \in \mathbb{R}^2 \mid y\geq 0\}, [-1,1]\times\{0\}\big)\to \big(N(\mathsf{L}, \mathsf{K}'), N(\mathsf{L}\cap \partial \mathsf{K}, \partial \mathsf{K}')\big)$$ in such a way that the center $x_i=f_i(0)$ lies in $\partial\mathsf{K}$, the boundary of $\mathsf{K}$. \end{proof}
For triangulations of $3$-dimensional manifolds, we have the following theorem.
\begin{theorem} \label{thmthree} Consider a simplicial complex $\mathsf{K}$ which is a triangulation of a $3$-dimensional manifold whose vertices are partitioned into $4$ subsets $V_0,V_1,V_2,V_3$. Suppose that \begin{enumerate} \item $\widetilde H_2(\mathsf{K})=0$, \item for $i=0,\dots,3$, $\mathsf{K}_{\{i\}}$ is contractible and the intersection of $\mathsf{K}_{\{i\}}$ with the boundary of $\mathsf{K}$ is either empty or contractible, \item for every pair of integers $0\leq i <j\leq 3$, there is a $1$-dimensional simplex with one vertex in $V_i$ and the other in $V_j$. \end{enumerate} Then $\mathsf{K}$ admits a rainbow tetrahedron. \end{theorem}
\begin{proof} As in the proof of Theorem \ref{thmsurface}, we may assume without loss of generality that $\mathsf{K}$ is a triangulation of a connected, simply connected $3$-dimensional manifold without boundary. Since $\widetilde H_2(\mathsf{K})=0$, in order to get a rainbow simplex of $\mathsf{K}$ it is enough to prove that \begin{itemize} \item $\widetilde{H}_{1}(\mathsf{K}_S)=0$, for every $S\subset \{0,1,2,3\}$ of size $3$, and \item $\widetilde{H}_{0}(\mathsf{K}_S)=0$, for every $S\subset \{0,1,2,3\}$ of size $2$. \end{itemize}
Indeed, we shall prove that for every $S\subset \{0,1,2,3\}$ of size $3$, $\mathsf{K}_S$ is simply connected and for every $S\subset \{0,1,2,3\}$ of size $2$, $\mathsf{K}_S$ is connected. Assume that $S=\{1,2,3\}$. Let $\alpha:\mathbb{S}^1\to \mathsf{K}_{\{1,2,3\}}$ be a continuous map. Since $\mathsf{K}$ is simply connected, there is a map $\beta:\mathbb{B}^2\to \mathsf{K}$ extending $\alpha$. Since $\mathsf{K}_{\{0\}}$ is contractible, its derived neighborhood $N(\mathsf{K}_{\{0\}},\mathsf{K}')$ is a $3$-dimensional ball. Assume that for a parametrization $f:\mathbb{B}^3 \to N(\mathsf{K}_{\{0\}},\mathsf{K}')$, the point $x=f(0)$ is its center. As in the proof of Theorem \ref{thmsurface}, by transversality, we may assume without loss of generality that the point $x$ does not lie in $\beta(\mathbb{B}^2)$. Moreover, use the radial structure of the $3$-ball $N(\mathsf{K}_{\{0\}},\mathsf{K}')$, given by the homeomorphism $f$, to push $\beta(\mathbb{B}^2)$ outside $\mathsf{K}_{\{0\}}$. Since $\mathsf{K}_{\{1,2,3\}}$ is a strong deformation retract of $\mathsf{K}\setminus \mathsf{K}_{\{0\}}$, the map $\beta:\mathbb{B}^2\to \mathsf{K}$ is homotopic to a map whose image lies inside $\mathsf{K}_{\{1,2,3\}}$ and of course extends the map $\alpha$. This proves that $\mathsf{K}_{\{1,2,3\}}$ is simply connected. Finally, given two different integers $0\leq i <j\leq 3$, the connectivity of $\mathsf{K}_{\{i,j\}}$ follows from the connectivity of $\mathsf{K}_{\{i\}}$ and $\mathsf{K}_{\{j\}}$ plus the existence of a $1$-dimensional simplex with one vertex in $V_i$ and the other in $V_j$. This completes the proof of this theorem.
\end{proof}
\section {Rainbow simplices in triangulations of $n$-dimensional manifolds}
\begin{theorem} \label{thm4surface} Consider a simplicial complex $\mathsf{K}$ which is a triangulation of a $4$-dimensional closed manifold and whose vertices are partitioned into $5$ subsets $V_0, \dots,V_4$. If \begin{enumerate} \item $\widetilde H_2(\mathsf{K})=\widetilde H_3(\mathsf{K})=0$, \item for every pair of integers $0\leq i <j\leq 4$ there is a $1$-dimensional simplex with one vertex in $V_i$ and the other in $V_j$, \item for every $i=0,\dots,4$, the subcomplex $\mathsf{K}_{\{i\}}$ is contractible and has as a regular neighborhood a $4$-ball, \item for every $S\subset\{0,\dots,4\}$ of size $2$, $\mathsf{K}_S$ has as a regular neighborhood a handlebody, \end{enumerate}
then $\mathsf{K}$ admits a rainbow $4$-simplex.
\end{theorem}
\begin{proof} As in the proof of Theorem \ref{thmsurface}, we may assume without loss of generality that $\mathsf{K}$ is a triangulation of a connected, simply connected $4$-dimensional closed manifold. The strategy is to get a rainbow simplex by proving (\ref{eq1}). Since $\widetilde H_3(\mathsf{K})=0$, we have that $\widetilde{H}_{3}(\mathsf{K}_S)=0$, for every $S\subset \{0,\dots,4\}$ of size $5$. Let us prove that $\widetilde{H}_{2}(\mathsf{K}_S)=0$, for every $S\subset \{0,\dots,4\}$ of size $4$. Assume $S=\{1,2,3,4\}$ and let $\sigma^2$ be a $2$-cycle of $\widetilde{H}_{2}(\mathsf{K}_S)$. Since $\widetilde{H}_{2}(\mathsf{K})=0$, there is a $3$-chain $\sigma^3$ in $\mathsf{K}$ whose boundary is $\sigma^2$. Since $\mathsf{K}_{\{0\}}$ is contractible, its derived neighborhood $N(\mathsf{K}_{\{0\}},\mathsf{K}')$ is a $4$-dimensional ball; let $f:\mathbb{B}^4 \to N(\mathsf{K}_{\{0\}},\mathsf{K}')$ be a homeomorphism and denote by $x=f(0)$ its center. As in the proof of Theorem \ref{thmsurface}, by transversality, we may assume without loss of generality that the point $x$ does not lie in $\sigma^3$. Moreover, use the radial structure of the $4$-ball $N(\mathsf{K}_{\{0\}},\mathsf{K}')$, given by the homeomorphism $f$, to push $\sigma^3$ outside $\mathsf{K}_{\{0\}}$. Since $\mathsf{K}_{S}$ is a strong deformation retract of $\mathsf{K}\setminus \mathsf{K}_{\{0\}}$, $\sigma^3$ is homotopic to a chain of the complex $\mathsf{K}_{S}$ whose boundary is $\sigma^2$. Therefore $\widetilde{H}_{2}(\mathsf{K}_S)=0$, for every $S\subset \{0,\dots,4\}$ of size $4$. Let us prove now that $\widetilde{H}_{1}(\mathsf{K}_S)=0$, for every $S\subset \{0,\dots,4\}$ of size $3$. Assume $S=\{2,3,4\}$ and let $\delta^1$ be a $1$-cycle of $\widetilde{H}_{1}(\mathsf{K}_S)$. Since $\widetilde{H}_{1}(\mathsf{K})=0$, there is a $2$-chain $\delta^2$ in $\mathsf{K}$ whose boundary is $\delta^1$. Moreover, since the derived neighborhood $N(\mathsf{K}_{\{0,1\}},\mathsf{K}')$ is a handlebody, there is a $1$-dimensional subpolyhedron $L^1$ with the property that $L^1$ is a strong deformation retract of $N(\mathsf{K}_{\{0,1\}},\mathsf{K}')$. By transversality, we may assume that $\delta^2$ does not intersect $L^1$. Moreover, we can use the radial structure of $N(\mathsf{K}_{\{0,1\}},\mathsf{K}')\setminus L^1$ to push $\delta^2$ outside $\mathsf{K}_{\{0,1\}}$. Since $\mathsf{K}_{S}$ is a strong deformation retract of $\mathsf{K}\setminus \mathsf{K}_{\{0,1\}}$, $\delta^2$ is homotopic to a chain of the complex $\mathsf{K}_{S}$ whose boundary is $\delta^1$. Therefore $\widetilde{H}_{1}(\mathsf{K}_S)=0$, for every $S\subset \{0,\dots,4\}$ of size $3$. Finally, given two different integers $0\leq i <j\leq 4$, the connectivity of $\mathsf{K}_{\{i,j\}}$ follows from the connectivity of $\mathsf{K}_{\{i\}}$ and $\mathsf{K}_{\{j\}}$ plus the existence of a $1$-dimensional simplex with one vertex in $V_i$ and the other in $V_j$. This completes the proof of Theorem \ref{thm4surface}.
\end{proof}
The same ideas as in our previous theorems can be applied to obtain the following theorem.
\begin{theorem} \label{thmnsurface} Consider a simplicial complex $\mathsf{K}$ which is a triangulation of an $n$-dimensional closed manifold and whose vertices are partitioned into $n+1$ subsets $V_0, \dots,V_n$. If \begin{enumerate} \item $\widetilde H_2(\mathsf{K})=\dots =\widetilde H_{n-1}(\mathsf{K})=0$, \item for every $S\subset\{0,\dots,n\}$ of size $i+1$, there is an $i$-dimensional complex $L^i$ such that $L^i$ is a strong deformation retract of $N(\mathsf{K}_S, \mathsf{K}')$, $0\le i\le n-2$, \end{enumerate} then $\mathsf{K}$ admits a rainbow $n$-simplex.
\end{theorem}
Finally, we shall use Alexander duality to prove the following theorem for colored triangulations of spheres.
\begin{theorem} \label{thmsphere} Consider a simplicial complex $\mathsf{K}$ which is a triangulation of the $n$-dimensional sphere and whose vertices are partitioned into $n+1$ subsets $V_0, \dots,V_n$. If for every $S\subset\{0,\dots,n\}$ with
$1\le |S|\le n-1$,
$$ \widetilde H_{|S|}(K_S)=0,$$ then $\mathsf{K}$ admits a rainbow $n$-simplex. \end{theorem}
\begin{proof}
Let $S\subset [n]=\{0,1,\dots,n\}$ be nonempty and proper. Then $\mathsf{K}\setminus\mathsf{K}_S$ has the homotopy type of $\mathsf{K}_{[n]\setminus S}$. By Alexander duality, $\widetilde H_{|S|-2}(\mathsf{K}_S)=\widetilde H_{n+1-|S|}(\mathsf{K}_{[n]\setminus S})=0$. Consequently, by (\ref{eq1}), $\mathsf{K}$ admits a rainbow $n$-simplex.
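In more detail, the duality step can be spelled out as follows (with coefficients in a field, the homology and cohomology of $\mathsf{K}_S$ have the same dimension): for every nonempty proper $S\subset [n]$,
$$\widetilde H_{|S|-2}(\mathsf{K}_S)\cong \widetilde H^{\,|S|-2}(\mathsf{K}_S)\cong \widetilde H_{\,n-(|S|-2)-1}(\mathsf{K}\setminus \mathsf{K}_S)= \widetilde H_{\,n+1-|S|}(\mathsf{K}\setminus \mathsf{K}_S)\cong \widetilde H_{\,n+1-|S|}(\mathsf{K}_{[n]\setminus S}),$$
where the second isomorphism is Alexander duality in $\mathsf{K}\cong \mathbb{S}^n$ and the last one uses the homotopy equivalence mentioned above.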
\end{proof}
\end{document} | arXiv |
\begin{document}
\begin{abstract} Our aim in this paper is to introduce and study a mathematical model for the description of traveling sand dunes. We use a surface flow process of sand under the effect of wind and gravity. We model this phenomenon by a nonlinear diffusion-transport equation coupling the transport of sand due to the wind with the avalanches due to gravity and the repose angle. The avalanche flow is governed by the surface evolution model, and we use a nonlocal term to handle the transport of sand facing the wind.
\end{abstract}
\subjclass{74S05, 65M99, 35K65}
\keywords{Sand dunes, evolution surface model, nonlocal equation, diffusion-transport equation}
\maketitle
\section{Introduction}
The diffusion-transport equation of the type
\begin{equation}\label{geneq}
\partial _t u = \nabla \cdot \Big( m\: \nabla u - V\: u \Big)+ f
\end{equation}
governs the spatio-temporal dynamics of a density $u(x,t) $ of particles. The first term on the right hand side describes random motion, and the parameter $m$, corresponding to a diffusion coefficient, is connected to the rate of random exchanges between the particles at the position $x$ and neighboring positions. The second term describes transport with velocity $V,$ and is connected to the transport of particles along the vector field $V.$
Our aim here is to show how one can use this type of equation to describe the movement of traveling sand dunes; the so-called Barchans.
A Barchan is a dune of the shape of a crescent lying in the direction of the wind. It arises where the supply of sand is low and under unidirectional winds.
The wind rolls the sand up the back slope of the dune to the ridge and causes small avalanches on the steep slopes of the front. This moves the dune forward. Such dunes (Barchans) move in the desert at speeds depending on their size and on the strength of the wind. In one year, they advance from a few meters to a few tens of meters. A wind of $\displaystyle 25 km/h$ is enough to set off the preceding process and move the dune forward. Our main aim is to show how one can couple the action of the wind and the action of gravity in the equation (\ref{geneq}) to fashion a model for traveling sand dunes like Barchans.
Recall that the surface evolution model is a very useful model for the description of the dynamics of granular matter under the effect of gravity. In \cite{Pri1} (see also \cite{ArEvWu}, and \cite{DuIg1}) the authors show that this toy model gives a simple way to study the dynamics of the sandpile from both a theoretical and a numerical point of view. In particular, the connection of this model with a stochastic model for sandpiles (cf. \cite{EvRe}) shows how it is able to handle the global dynamics of a granular matter structure using simply the repose angle. In this paper, we show how one can use the surface evolution model in the equation (\ref{geneq}) to describe the dynamics of traveling sand dunes. There is a wide literature concerning mathematical and physical studies of dunes (cf. \cite{Bagnold}, \cite{KoLa1}, \cite{AnClDo1,AnClDo2}, \cite{SaKrHe} and \cite{KrSaHe}). We do believe that this is an intricate phenomenon with many determining parameters. We are certainly neglecting some of them. Nevertheless, as in the case of the sandpile, we do believe that simple models (like toy models) may help to encode complex phenomena related to granular matter structures.
In the following section, we give some preliminaries and present our model for a traveling sand dune. Section 3 is devoted to the main results of existence and uniqueness of a weak solution.
\section{Preliminaries and modeling}
\begin{figure}
\caption{ Barchan}
\label{Barchan}
\end{figure}
The simplest and most well-known type of dune is the Barchan. In general, it has the form of a hill with a Luff side and a Lee side separated by a crest (cf. Figure \ref{Barchan}). On the Lee side, sand is taken up by the wind into a moving layer, transported up to the crest and passed to the other side; the Luff side. Neglecting the effects of precipitation and swirls, on the Luff side the dynamics of the sand is generated by the action of gravity on the sand arriving at the crest.
Though there are many speculations and experimental observations on the evolution of the shape, the height and the distribution of dunes, there is no universal model for the study of the motion of sand dunes. There is a large literature on this subject (cf. \cite{Bagnold}, \cite{KoLa1}, \cite{AnClDo1,AnClDo2}, \cite{SaKrHe} and \cite{KrSaHe}). Some mathematical models treat the dunes as aerodynamic objects with an adequately smooth shape to let the air flow by with the least effort (cf. \cite{KoLa1}). The so-called BCRE models (cf. \cite{BCRE}) use the conservation of mass and the repose angle to build a system of two coupled differential equations for the height of the topography $h$ and the amount of mobile particles $R.$ The particles are all supposed to move with the same velocity $u.$ Other simplified and realistic physical models (cf. \cite{AnClDo1,AnClDo2}, \cite{SaKrHe} and \cite{KrSaHe}) use the mass and momentum conservation in the presence of erosion and external forces to derive coupled differential equations to study the evolution of the morphology of dunes. Among other things, these models focus on the way in which a sand movement could be constructed from wind data (the choice of a formula linking wind velocity to sand movement, the choice of a threshold velocity for sand movement, e.g. Bagnold, etc.). Keeping in mind that the driving force for the Barchans is the wind, our aim here is to introduce and study a simple model which combines the effect of the wind on the Lee side with the avalanches generated by the repose angle. More precisely, we introduce and study a new mathematical model in the form of the diffusion-transport equation (\ref{geneq}) for the evolution of the morphology of a Barchan under the effect of a unidirectional wind.
Let us denote by $u=u(t,x,y)$ the height of the dune at time $t\geq 0$ and at the position $\displaystyle (x,y) $ in the plane $\mathbb{R}^N$ ($N=2$ in practice). Then, $u$ can be described by (\ref{geneq}), where the term $-m\: \nabla u$ is connected to the net flux of the avalanches of sand resulting from the action of gravity and the repose angle. The term $uV$ is connected to the transport of the sand under the action of the wind up to the crest. Note here that $f\equiv 0$, since we are assuming that there is no source of sand.
Thanks to \cite{Pri1} (see also \cite{ArEvWu} and \cite{EvRe}) we know that the avalanches can be governed by a nonstandard diffusion parameter $m$ (unknown) that is connected to the subgradient constraint in the following way
$$ m\geq 0,\ \vert \nabla u\vert \leq \lambda,\ m\: (\vert \nabla u\vert - \lambda )=0, $$
where $ \theta =\arctan (\lambda)$ is the repose angle of the sand. This is the consequence of the fact that the inertia is neglected, the surface flow is directed towards the steepest descent, the surface slope of the sandpile cannot exceed the repose angle $\theta$ of the material and
there is no pouring over the parts of angle less than $\theta.$
As to the action of the wind, the resulting phenomenon is a transport of $u$ in the direction of the wind. Indeed, the wind proceeds by taking up the sand into a moving layer and transporting it up to the crest. This creates a ripping current of sand concentrated on the Lee side, facing the wind. To handle the way a sand movement could be constructed from wind data, we use the nonlocal interactions between the positions of the dune facing the wind. We assume that the velocity $V=(V_1,0)$ of the transported layer is induced at a site $x$ by the net effects of the slope of all particles at various sites $y$ around $x.$ More precisely, we consider $V_1$ in the form
$$ V_1= \mathcal{H} (K\star \partial_x u) = \mathcal{H} \left( \int_{B(x,r)} K(x-y)\: \partial_x u (y,t)\: dy\right) , $$
where the kernel $K$ associates a strength of interaction per unit density with the distance $x-y$
between any two sites over some finite domain $B(x,r).$ Taking the average in this form may give more weight to information about particles that are closer, or to those that are farther away. Moreover, since the transport needs to be restricted to the region facing the wind, we assume that the function $\mathcal{H} \: :\: \mathbb{R}\to \mathbb{R}^+$ is a phenomenological parameter which vanishes in the region $(-\infty,0).$
Thus, we consider the following model to describe the evolution of the morphology of a Barchan under the effect of a unidirectional wind (in the direction $ (1,0)$) : \begin{equation}\label{nlmodel} \left\{ \begin{array}{ll} \left. \begin{array}{l}
\displaystyle \partial_t u -\nabla\cdot ({ m} \:\nabla u) + \partial_x \Big( u \: \mathcal{H} (K\star \partial_x u) \Big) =0
\\ \\
\displaystyle \vert \nabla u\vert \leq \lambda, \ { m}\geq 0,\ { m}\: (\vert \nabla u\vert -\lambda)=0
\end{array} \right\} & \quad\hbox{ in }(0,\infty)\times\mathbb{R}^N\\ \\
u(0,x)=u_{0}(x) &\quad\hbox{ for }x\in \mathbb{R}^N. \end{array}\right. \end{equation}
\begin{rem}
\begin{enumerate}
\item In (\ref{nlmodel}), we are assuming that the flux due to the wind, i.e. the quantity of sand transported per unit of time through a fixed vertical line, depends on the speed of the wind and on the angle of the position. Moreover, thanks to the assumption on $\mathcal H$ (vanishing in $(-\infty,0)$), the action of the wind is null whenever the slope is not facing the wind. To be more general, it is possible to assume that this flux depends also on the height, i.e. we can assume that
$$V_1= \: \beta(u) \: \mathcal{H} (K\star \partial_x u),$$
where $\displaystyle \beta \: :\: \mathbb{R}^+\to \mathbb{R}^+$ is a continuous function.
\item We see in the formal model (\ref{nlmodel}) that $\mathcal H$ is null for negative values and strictly positive on $\mathbb{R}^+.$ So, formally, $\mathcal H=\chi_{\mathbb{R}^+}.$ Here, for technical reasons, we consider a continuous approximation of this kind of profile by assuming that $\mathcal H$ is a Lipschitz continuous function on $\mathbb{R}$. For instance, one can take $$ \displaystyle \mathcal{H} (r) =1-\frac{1}{\sqrt{\pi}} \int^{-r/\sqrt{\varepsilon}}_{-1/\varepsilon} e^{-z^2} dz , $$
where $0<\varepsilon<1$ is a given fixed parameter.
\item Given the assumptions on $\mathcal H,$ one sees that the crest, which corresponds here to the region where $u$ changes its monotonicity, constitutes a free boundary separating the region of avalanches from the region of wind erosion of sand. Indeed, the transport term $uV$ disappears in the region where $u$ is nonincreasing.
\item It is possible to refine the properties of $\mathcal H$ to better describe the movement of sand facing the wind. For instance, if we assume that the grains move more and more slowly whenever they face a steep slope, then $\mathcal H $ can be assumed to be a nondecreasing Lipschitz continuous function. A typical example may be given by
$$ \mathcal H (r)=\frac{r^+}{ \sqrt{1+r^2}},\quad \hbox{ for any } r\in \mathbb{R}. $$ In this paper, we just assume that $\mathcal H $ is a Lipschitz continuous function. Concrete assumptions on $\mathcal H $, and also on $\gamma$ and $K$, will be discussed in forthcoming papers.
\item Replacing $K$ by $K_\sigma$ in (\ref{nlmodel}) where $K_\sigma \in \ensuremath{\mathcal{D}}(\mathbb{R}^N)$ is a smoothing kernel satisfying
\begin{itemize}
\item $\displaystyle \int_{\mathbb{R}^N} K_\sigma (x) \: dx=1$
\item $\displaystyle K_\sigma (x) \to \delta_x$ as $\sigma \to 0$, $\delta_x$ is the Dirac function at the point $x$,
\end{itemize} and letting formally $\sigma \to 0 $ in (\ref{nlmodel}), we obtain the following PDE :
\begin{equation}\label{pdemodel} \left\{ \begin{array}{ll} \left. \begin{array}{l}
\displaystyle \partial_t u -\nabla\cdot ({ m} \:\nabla u) + \partial_x \Big( \gamma (u) \: \mathcal{H} ( \partial_x u) \Big) =0
\\ \\
\displaystyle \vert \nabla u\vert \leq \lambda,\ \exists\ { m}\geq 0,\ { m}\: (\vert \nabla u\vert -\lambda)=0
\end{array} \right\} & \quad\hbox{ in }(0,\infty)\times\mathbb{R}^N\\ \\
u(0,x)=u_{0}(x) &\quad\hbox{ for }x\in \mathbb{R}^N. \end{array}\right. \end{equation} Since $\mathcal H $ is assumed to be nondecreasing, one sees that the term $ \partial_x \Big( \gamma (u) \: \mathcal{H} ( \partial_x u) \Big)$ is anti-diffusive and may create an obstruction to the existence of a solution (a formal expansion making this explicit is given right after this remark). It is not clear to us whether (\ref{pdemodel}) is well posed in this case.
\end{enumerate}
\end{rem}
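To make the anti-diffusive character pointed out in item (5) of the previous remark explicit, one can expand formally (assuming enough smoothness)
$$\partial_x \Big( \gamma (u) \: \mathcal{H} ( \partial_x u) \Big)=\gamma'(u)\, \mathcal{H} ( \partial_x u)\, \partial_x u+\gamma(u)\, \mathcal{H}' ( \partial_x u)\, \partial_{xx} u ,$$
so that (\ref{pdemodel}) contains the second order term $-\gamma(u)\, \mathcal{H}' ( \partial_x u)\, \partial_{xx} u$ on its right hand side; since $\gamma\geq 0$ and $\mathcal{H}$ is nondecreasing, this term acts as a backward diffusion.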
\section{Existence and uniqueness}
To study the model (\ref{nlmodel}), we restrict ourselves to $\Omega\subset \mathbb{R}^N$, a bounded open domain with Lipschitz boundary $\partial \Omega$ and outer unit normal $\eta$. Consider the following nonlocal equation with Dirichlet boundary condition: $$\eqno (E) \label{nlmodelb} \left\{ \begin{array}{ll} \left. \begin{array}{l}
\displaystyle \partial_t u -\nabla\cdot ({ m} \:\nabla u) + \partial_x \Big( \gamma (u) \: \mathcal{H} (K\star \partial_x u) \Big) =f
\\ \\
\displaystyle \vert \nabla u\vert \leq \lambda,\ \exists\ { m}\geq 0,\ { m}\: (\vert \nabla u\vert -\lambda)=0
\end{array} \right\} & \mbox{ in}\,\Omega_T:=(0,T)\times \Omega \\ \\
u=0 & \mbox{ in}\,\,\Sigma_T:=(0,T)\times \partial \Omega\, \\\\ u(0,x)=u_0(x) & \mbox{ in}\,\, \Omega, \end{array} \right. $$
where $u_0$ describes the initial shape of the dune. Here and throughout the paper, we assume that
\begin{itemize} \item $\mathcal{H} \: :\: \mathbb{R}\to \mathbb{R}^+$ is a Lipschitz continuous function. \item $K$ is a given regular Kernel compactly supported in $B(0,r),$ for a given parameter $r>0.$ \item $\displaystyle \gamma \: :\: \mathbb{R}^+\to \mathbb{R}^+$ is a Lipschitz continuous function with $\displaystyle \gamma (0)=0$.
\end{itemize}
Our main results concern existence and uniqueness of a solution. As usual for the main differential operator governing the PDE (E), we use a notion of variational solution. We consider $\mathcal C_0(\Omega),$ the set of continuous functions vanishing on the boundary. For any $0\leq \alpha\leq 1,$ we consider $$ \displaystyle \mathcal C_0^{0,\alpha}(\Omega)=\Big\{ u\in \mathcal C_0(\Omega)\: :\: u(x)-u(y)\leq C \vert x-y\vert^\alpha \hbox{ for any }x,y\in \overline \Omega \Big\} ,$$ endowed with the natural norm $$\Vert u\Vert _{\mathcal C_0^{0,\alpha}(\Omega)} = \sup_{x\in \Omega} \vert u(x)\vert + \sup_{x\neq y\in \Omega} \frac{\vert u(x)-u(y)\vert}{\vert x-y\vert^\alpha} . $$ We denote by $$Lip= \mathcal C^{0,1}(\Omega)\quad \hbox{ and }Lip_0= Lip\cap \mathcal C_0(\Omega) .$$ Then, we denote by
$$ \displaystyle Lip_1=\Big\{ u\in \mathcal C_0(\Omega)\: :\: u(x)-u(y)\leq \vert x-y\vert \hbox{ for any }x,y\in \Omega \Big\} .$$
The topological dual space of $Lip_0$ will be denoted by $Lip^*_0$ and is endowed with the natural dual norm, and we denote by $\langle .,.\rangle$ the duality bracket. It is clear that, for any $\xi\in Lip_1,$ we have
$$ \vert \xi(x)\vert \leq \delta_\Omega, \quad \hbox{ for any }x\in \Omega,$$
where $\delta_\Omega$ denotes the diameter of the domain $\Omega.$
Recall that the notion of solution for the problems of the type (\ref{nlmodelb}) is not standard in general. The problem presents two specific difficulties. The first one is related to the main operator governing the equation : $-\nabla \cdot (m\nabla u)$ with $m\geq 0$ and $m(\vert \nabla u\vert -1)=0$. And the second one is connected to the regularity of the term $\partial _t u$.
Concerning the main operator governing the equation, recall that $u$ is Lipschitz and, in general, even in the case where $\mathcal{H} \equiv 0,$ $m $ is singular. So, the term $m\nabla u$ is not well defined in general and needs to be specified. Handling the PDE with the operator in divergence form requires the use of the notion of tangential gradient with respect to a measure (cf. \cite{BBS,BBS97,BCJ}). Nevertheless, to avoid all the technicality related to this approach, we use here the notion of truncated-variational solution (that we call simply variational solution) to handle the problem. Indeed, the following lemma paves the way for this alternative. For any $k>0,$ the real function $T_k$ denotes the usual truncation given by \begin{equation}\label{trunc} \displaystyle T_k(r)=\max(\min(r,k),-k),\quad \hbox{ for any }r\in \mathbb{R}. \end{equation}
\begin{lem}\label{ldefsol}
Let $\eta \in Lip^*_0$ and $u\in Lip_1.$ If there exists $m\in L^1(\Omega)$ such that $m\geq 0$, $m(\vert \nabla u\vert -1)=0$ a.e. in $\Omega$ and $\nabla \cdot (m\nabla u)=\eta $ in $\mathcal D'(\Omega),$ then,
$$\langle \eta ,T_k(u-\xi) \rangle \geq 0, \quad \hbox{ for any } \xi\in Lip_1 \hbox{ and }k> 0. $$
\end{lem}
The proof of this lemma is simple; we leave it as an exercise for the interested reader. Let us notice that the converse part remains true if one takes $m$ to be a measure and the gradient to be the tangential gradient with respect to $m.$ Other equivalent formulations may be found in \cite{Igmonge1}.
This being said, one sees that using the notion of variational solution in (E) formally generates the quantity $ \langle u_t ,T_k(u-\xi)\rangle.$ Since in general $\partial_t u$ is not necessarily a Lebesgue function, we use the following (formal) integration by parts formula in the definition of the solution: $$\langle \partial_ t u, T_k(u-\xi) \rangle = \frac{d}{dt} \int_{\Omega} \int_0^{u(t)} T_k (s -\xi) ds dx.$$ Observe that, letting $k\to\infty,$ the last formula turns into
$$\langle \partial_ t u, u-\xi\rangle =\frac{1}{2}
\frac{d}{dt} \int_{\Omega} \vert u-\xi\vert^2\: dx .$$ This is the common term for the standard notion of variational solution. It is noteworthy, however, that the truncation operation here is an important ingredient to get uniqueness (see the uniqueness proof and the remark below).
The considerations above lead to the following definition of variational solutions:
\begin{defi}\label{defweaksol}
Let $\displaystyle f\in L^{1}( \Omega_T)$ and $u_0\in Lip_1.$ A variational solution of (E) is a function $u\in L^{\infty}(0,T, C_0(\Omega))$ such that $u(t) \in Lip_1$ for a.e. $t\in [0,T]$, and for every $\xi \in Lip_1$ and every $k>0$,
\begin{equation}\label{varform}
\frac{d}{dt} \int_{\Omega} \int_0^{u(t)} T_k (s -\xi) ds { d}x - \int_{\Omega} \gamma(u)\: \mathcal{H} ( \partial_x K \ast u ) \partial_xT_k (u-\xi) dx
\le \int_{\Omega} f\: T_k(u-\xi) \: dx
\end{equation}
in $\mathcal D'([0,T)).$ \end{defi}
In other words, $u\in L^{\infty}((0,T), C^0(\Omega))$ such that $u(t) \in Lip_1$ for a.e. $t\in [0,T]$ is a variational solution of (E) if for every $\xi \in Lip_1$, every $k>0$ and every $\sigma \in C^1([0, T), \mathbb{R}_+)$ one has $$- \int_0^T\!\!\! \int_{\Omega} \dot{\sigma}(t) \int_0^{u(t)} T_k (s -\xi) ds dx dt - \int_0^T\!\!\! \int_{\Omega} {\sigma}(t) \gamma(u(t))\: \mathcal{H} ( \partial_x K \ast u(t) ) \partial_xT_k (u(t)-\xi) dx dt $$ $$
\leq \sigma(0) \int_{\Omega} \int_0^{u_0} T_k (s -\xi) ds d x + \int_0^T\!\!\! \int_{\Omega} \sigma(t) f (t)\: T_k(u(t)-\xi) d t d x.$$
\begin{rem}\label{rsol} \begin{enumerate}
\item It is also possible to define a solution as a function $u\in L^{\infty}((0,T), C^0(\Omega))$, with $\partial_t u \in L^1(0,T; Lip^*_0)$ and $u(t) \in Lip_1$ for a.e. $t\in [0,T]$, $u(0)=u_0$
and
\begin{equation}\label{wf01}
\begin{array}{c} \langle \partial_ t u, T_k(u-\xi) \rangle - \int_{\Omega} \gamma(u)\: \mathcal{H} ( \partial_x K \ast u ) \partial_xT_k (u-\xi) dx \\
\le \int_{\Omega} f\: T_k(u-\xi) \: dx ,\quad \hbox{ for a.e. }t\in (0,T),
\end{array}
\end{equation}
where $\langle .,.\rangle$ denotes the duality bracket in $Lip^*_0.$ However, one can prove that if $u$ satisfies (\ref{wf01}), then it is also a variational solution in the sense of Definition \ref{defweaksol}. Indeed, if $\partial_t u \in L^1(0,T; Lip^*_0)$, one can prove rigorously that (\ref{wf01}) yields
$$ \int _0^T \langle \partial_ t u(t), T_k(u(t)-\xi) \rangle \sigma(t)\: dt =
- \int_0^T \int_{\Omega} \dot{\sigma}(t) \int_0^{u(t)} T_k (s -\xi) ds dx dt $$
$$- \sigma(0) \int_{\Omega} \int_0^{u_0} T_k (s -\xi) ds d x ,$$
for any $\xi \in Lip_1$ and $\sigma \in C^1([0, T), \mathbb{R}).$
\item Notice that a similar notion of solution has been used in \cite{AgCaIg} for a different problem, using the so-called W1-JKO scheme, where W1 refers to the Wasserstein distance $W_1.$ \end{enumerate} \end{rem}
\begin{thm}\label{theo0} Let $\displaystyle f\in L^{1}( \Omega_T)$ and $u_0\in Lip_1.$ The problem (E) has a unique variational solution $u$.
\end{thm}
To prove this theorem, we see that Lemma \ref{ldefsol} implies that the operator $u\in Lip_1\to -\nabla \cdot (m\nabla u)$, with non-negative $m$ satisfying $m(\vert \nabla u\vert -1)=0,$ may be represented in $L^2(\Omega),$ by the sub-differential operator $\partial \displaystyle\mathbbm{I}_{{Lip_1}}$ of the indicator function $ \displaystyle\mathbbm{I}_{{Lip_1}}\: :\: L^2(\Omega) \to [0,\infty],$
$$\displaystyle \displaystyle\mathbbm{I}_{{Lip_1}} (z)= \left\{ \begin{array}{ll}
0\quad &\hbox{ if } z\in {Lip_1} \\ \\
\infty & \hbox{ otherwise .}\end{array}
\right. $$ In particular, this implies that the equation (E) is formally of the type \begin{equation}\label{Etype} \displaystyle \frac{du}{dt}+\partial \displaystyle\mathbbm{I}_{{Lip_1}} u_{}\ni \mathcal T(u) +f\quad \hbox{ in }(0,T), \end{equation} where
$\mathcal T\: :\: {Lip_1}\subset L^2(\Omega) \to L^2(\Omega)$ is given by $$\mathcal T(u)= - \partial_x \Big( \gamma(u) \mathcal{H}( K \star \partial_x u )\Big),\quad \hbox{ for any }u\in {Lip_1}. $$
Recall that in the case where $\mathcal{H}\equiv 0,$ the phenomenon corresponds simply to the sandpile problem, where the dynamics is completely governed by the following nonlinear evolution equation: \begin{equation}\label{dyn1} \left\{ \begin{array}{ll} \displaystyle \frac{du}{dt}+\partial \displaystyle\mathbbm{I}_{{Lip_1}} u\ni f\quad &\hbox{ in }(0,T)\\ \\ \displaystyle u(0)=u_0, \end{array} \right. \end{equation} in $L^2(\Omega).$
For the proof of Theorem \ref{theo0}, we begin with the following results concerning (\ref{dyn1}) which will be useful.
\begin{pro} \label{pV0}
For any $f\in L^2(\Omega_T)$ and $u_0\in Lip_1,$ there exists a unique solution of the problem (\ref{dyn1}), in the sense that $u\in W^{1,\infty}(0,T; L^2(\Omega)),$
$u(0)=u_0$ and $$ \displaystyle f(t)- \frac{du(t)}{dt}\in \partial \displaystyle\mathbbm{I}_{{Lip_1}} u(t) \quad \hbox{ for a.e. }t\in (0,T).$$
Moreover, we have
\begin{enumerate}
\item \label{P1} $u\in L^\infty(0,T;\mathcal C_0^{0,\alpha}(\Omega))\cap L^p(0,T;W^{1,p}(\Omega)),$ for any $0\leq \alpha<1$ and $1\leq p<\infty,$ and $u(t)\in {Lip_1}$ for a.e. $t\in [0,T).$
\item \label{P2} $\partial_t u\in L^1(0,T;Lip^*_0)$ and we have \begin{equation}\label{Lip'}
\Vert \partial_t u\Vert_{L^1(0,T;Lip^*_0)} \leq 2\: \delta_\Omega \Vert f\Vert_{L^1(0,T;Lip^*_0)} + \frac{1}{2} \int u_{0}^2.
\end{equation}
\end{enumerate}
\end{pro}
\noindent {\textbf{Proof:}} The existence of a solution $u\in W^{1,\infty}(0,T; L^2(\Omega))$ follows by standard theory of evolution problems governed by sub-differential operators (cf. \cite{Br}). By definition of the solution, we know that $u(t)\in {Lip_1}$ and $\vert u(t) \vert \leq \delta_\Omega$ in $\Omega$, for any $t\in [0,T).$ Using the fact that $ {Lip_1}$ is compactly embedded in $\mathcal C_0^{0,\alpha}(\Omega),$ we deduce that $u\in L^\infty(0,T;\mathcal C_0^{0,\alpha}(\Omega))$. Thus (\ref{P1}). Let us prove (\ref{P2}). For any $\xi\in Lip_1,$ we see that testing with $-\xi$ and letting $k\to\infty,$ we have $$ \int_\Omega \left( f(t)-\partial_t u(t) \right) (u(t)+\xi)\: dx \geq 0,\quad \hbox{ for any } t\in [0,T).$$ This implies that $$\int_\Omega \partial_t u(t) \: \xi \: dx \leq \int_\Omega f(t)(u(t)+\xi)\: dx -\frac{1}{2}\frac{d}{dt} \int_\Omega u(t)^2\: dx .$$ Integrating over $(0,T),$ we get \begin{eqnarray*} \int_0^T\!\! \int_\Omega \partial_t u (t)\: \xi \: dtdx &\leq& \int_0^T\!\! \int_\Omega f\:( \xi + u) + \frac{1}{2} \int u_{0}^2 \: dx\\ \\
&\leq& 2\: \delta_\Omega \Vert f\Vert_{L^1(0,T;Lip^*_0)} + \frac{1}{2} \int u_{0}^2\: dx.
\end {eqnarray*} Since $\xi$ is arbitrary in $Lip_1,$ we deduce (\ref{Lip'}).
\qed
Now, coming back to the problem (\ref{Etype}), thanks to the assumptions on $\mathcal{H},$ $K$ and $\gamma,$ the operator $\mathcal T$ is well defined, and for any $z\in L^2(0,T;W^{1,2}(\Omega)),$ we have $\mathcal T (z)\in L^2(Q).$ So, given $u_0\in {Lip_1},$ thanks to Proposition \ref{pV0}, the sequence $(u_n)_{n\in\mathbb{N}} $ given by \begin{equation}\label{dyn1n}
\displaystyle \frac{du_{n+1}}{dt}+\partial \displaystyle\mathbbm{I}_{{Lip_1}} u_{n+1}\ni \mathcal T(u_n) +f\quad \hbox{ in }(0,T),\: \hbox{ for }n=0,1,2 ...
\end{equation}
is well defined in $W^{1,\infty}(0,T; L^2(\Omega)) \cap L^p(0,T;W^{1,p}(\Omega)),$ for any $1\leq p<\infty.$ Moreover, we have
\begin{lem}\label{ln} \begin{enumerate} \item $u_n$ is a bounded sequence in $L^\infty(0,T;\mathcal C_0^{0,\alpha}(\Omega)),$ for $0\leq \alpha<1.$
\item $\partial_t u_n$ is a bounded sequence in $L^1(0,T;Lip^*_0).$
\end{enumerate}
\end{lem}
\noindent {\textbf{Proof:}} \begin{enumerate} \item Thanks to Proposition \ref{pV0}, we know that $u_n\in L^\infty(0,T;\mathcal C_0^{0,\alpha}(\Omega)),$ $0\leq \alpha<1,$ and $u_n(t)\in {Lip_1}$ for a.e. $t\in [0,T).$ This implies that $u_n$ is bounded in $L^\infty(0,T;\mathcal C_0^{0,\alpha}(\Omega)).$
\item Thanks again to Proposition \ref{pV0}, we have \begin{eqnarray*}
\Vert \partial_t u_{n+1}\Vert_{L^1(0,T;Lip^*_0)} &\leq & 2 \: \delta_\Omega \:\Vert \mathcal T (u_n)\Vert_{L^1(0,T;Lip^*_0)} +2 \: \delta_\Omega \:\Vert f \Vert_{L^1(0,T;Lip^*_0)} + \frac{1}{2} \int u_{0}^2\\ \\ &\leq& 2\: \delta_\Omega \:\left \Vert \partial_x \Big( \gamma(u_n) \mathcal{H}( K \star \partial_x u_n )\Big) \right \Vert_{L^1(0,T;Lip^*_0)} +2 \: \delta_\Omega \:\Vert f \Vert_{L^1(0,T;Lip^*_0)} \\ \\ & & + \frac{1}{2} \int u_{0}^ 2\\ \\ &\leq& 2 \delta_\Omega \: \left \Vert \gamma(u_n) \mathcal{H}( K \star \partial_x u_n ) \right \Vert_{L^1(Q_T)} +2 \delta_\Omega \:\Vert f \Vert_{L^1(0,T;Lip^*_0)} + \frac{1}{2} \int u_{0}^ 2 .
\end{eqnarray*} Using the fact that $\gamma$ and $\mathcal H$ are Lipschitz continuous and that $ u_n(t)\in {Lip_1},$ we deduce that there exists $C$ (independent of $n$) such that $$ \Vert \partial_t u_{n+1}\Vert_{L^1(0,T;Lip^*_0)} \leq C. $$ Thus the result of the lemma.
\end{enumerate} \qed
\noindent {\bf{Proof of Theorem \ref{theo0} :}}
\noindent \underline{\bf{Existence :}} First assume that $f\in L^2(\Omega_T).$ Let us consider the sequence $(u_n)_{n\in \mathbb{N}}$ as given by Lemma \ref{ln}. Since the embedding $\mathcal C_0^{0,\alpha}(\Omega)$ into $\mathcal C_0(\Omega)$ is compact and the embedding of $\mathcal C_0(\Omega)$ into $Lip^*_0$ is continuous, by using Lemma 9 of \cite{simon}, we can conclude that, by taking a subsequence if necessary, $u_n$ converges to $u$ in $L^1(0,T; \mathcal C_0(\Omega)),$ and we have $u(t)\in {Lip_1},$ for a.e. $t\in [0,T).$
Since, for a.e. $t\in [0,T)$ and any $\xi\in Lip_1,$ $u_{n+1}(t)-T_k(u_{n+1}(t) -\xi)\in Lip_1,$ (\ref{dyn1n}) implies that $$\int_\Omega \frac{\partial u_{n+1}(t)}{\partial t}\: T_k(u_{n+1} (t) -\xi) - \int_\Omega \gamma(u_n) \mathcal{H}( K \star \partial_x u_n )\: \partial_x T_k(u_{n+1} (t) -\xi) $$ $$\leq \int f\: T_k(u_{n+1} (t) -\xi) ,$$ so that $$ \frac{d}{dt} \int_{\Omega} \int_0^{u_{n+1}(t)} T_k (s -\xi) ds dx - \int_{\Omega} \gamma(u_{n})\: \mathcal{H} ( \partial_x K \ast u_{n} ) \partial_x T_k(u_{n+1}-\xi) dx
$$ $$\le \int_{\Omega} f\: T_k(u_{n+1}-\xi) \: dx $$ in $\mathcal D'([0,T).$ Then letting $n\to \infty,$ and using the convergence of $u_n$ in $L^1(0,T; \mathcal C_0(\Omega))$ and Lebesgue dominated convergence theorem, we obtain (\ref{varform}). Now for $\displaystyle f\in L^{1}( \Omega_T)$ we consider $\displaystyle f_m\in L^{2}( \Omega_T)$ such that $ \displaystyle f_m \to f $ in $\displaystyle L^{1}( \Omega_T)$ and the sequence $(u_m)_{m\in \mathbb{N}}$ given by \begin{equation}\label{dyn1m} \displaystyle \frac{du_{m}}{dt}+\partial \displaystyle\mathbbm{I}_{{Lip_1}} u_{m} +\partial_x\big( \gamma(u_m) \mathcal{H}( K \star \partial_x u_m ) \big)\ni f_m\quad \hbox{ in }(0,T),\: \hbox{ for }m=0,1,2 ... \end{equation} in the following sense \begin{equation}\label{varformm} \frac{d}{dt} \int_{\Omega} \int_0^{u_m(t)} T_k (s -\xi) ds dx - \int_{\Omega} \gamma(u_m)\: \mathcal{H} ( \partial_x K \star u_m ) \partial_xT_k (u_m-\xi) dx \le \int_{\Omega} f_m\: T_k(u_m-\xi) \: dx \end{equation} in $\mathcal D'([0,T)$ for any $\xi \in Lip_1$ and $k>0$. Similarly as in lemme \ref{ln}, we have \begin{eqnarray*}
\Vert \partial_t u_{m}\Vert_{L^1(0,T;Lip^*_0)} \leq2 \delta_\Omega\: \left \Vert \gamma(u_m) \mathcal{H}( K \star \partial_x u_m ) \right \Vert_{L^1(\Omega_T)} +2\delta_\Omega \:\Vert f_m \Vert_{L^1(0,T;Lip^*_0)} + \frac{1}{2} \int u_{0}^ 2 dx \end{eqnarray*} and $u_m \to u $ in $L^1(0,T; \mathcal C_0(\Omega))$. Letting $m\to \infty,$ and using the dominated convergence theorem, the proof of the existence is finished.\\
\\ \\
\noindent \underline{\bf{Uniqueness :}} Now, to prove the uniqueness, let $\displaystyle u_{1}$ and $u_{2}$
be two solutions of $(E)$ in the sense of (\ref{varform}). We have
\begin{equation}
\frac{d}{dt} \int_{\Omega} \int_0^{u_i(t)} T_n (s -\xi) ds dx - \int_{\Omega} \gamma(u_i)\: \mathcal{H} ( \partial_x K \ast u_i ) \partial_xT_n (u_i-\xi) dx
\le \int_{\Omega} f\: T_n(u_i-\xi) \: dx
\end{equation}
To double variables, we consider $u_1=u_1(t)$ and $u_2=u_2(s),$ for any $s,t\in [0,T).$ Using the fact that $u_1=u_1(t)$ is a solution and setting $\xi =u_2(s) $, which is considered constant with respect to $t$, we have
$$ \frac{d}{dt} \int_{\Omega} \int_0^{u_1(t)} T_n (r -u_2(s)) dr dx - \int_{\Omega} \gamma(u_1(t))\: \mathcal{H} ( \partial_x K \ast u_1(t)) \partial_xT_n (u_1(t)-u_2(s)) dx $$ $$
\le \int_{\Omega} f(t)\: T_n(u_1(t)-u_2(s)) \: dx .$$ In the same way, taking $u_2=u_2(s)$ is a solution and setting $\xi =u_1(t)$, we have
$$ \frac{d}{ds} \int_{\Omega} \int_0^{u_2(s)} T_n (r -u_1(t)) dr dx - \int_{\Omega} \gamma(u_2(s))\: \mathcal{H} ( \partial_x K \ast u_2(s)) \partial_xT_n (u_2(s)-u_1(t)) dx $$ $$
\le \int_{\Omega} f(s)\: T_n(u_2(s)-u_1(t)) \: dx . $$
Dividing by $n,$ and adding the two equations, we obtain
$$ \frac{1}{n} \frac{d}{dt} \int_{\Omega} \int_0^{u_1(t)} T_n (r -u_2(s)) dr dx +
\frac{1}{n} \frac{d}{ds} \int_{\Omega} \int_0^{u_2(s)} T_n(r -u_1(t)) dr dx $$
$$ \leq \frac{1}{n} \int_{\Omega} \Big\{ \gamma(u_1(t))\: \mathcal{H} ( \partial_x K \ast u_1(t)) -\gamma(u_2(s))\: \mathcal{H} ( \partial_x K \ast u_2(s)) \Big\} \partial_xT_n (u_1(t)-u_2(s))\: dx $$
$$+ \frac{1}{n} \int_{\Omega} ( f(t) -f(s)) \: T_n(u_1(t)-u_2(s)) \: dx $$
Let us rewrite the term
$$ I_n:= \frac{1}{n}\int_{\Omega} \Big\{ \gamma(u_1(t))\: \mathcal{H} ( \partial_x K \ast u_1(t)) -\gamma(u_2(s))\: \mathcal{H} ( \partial_x K \ast u_2(s)) \Big\} \partial_xT_n (u_1(t)-u_2(s))\: dx $$
as \begin{equation} I_n =:I_n^1 +I_n^2, \end{equation}
with
$$I_n^1:= \frac{1}{n}\: \int_{\Omega} (\gamma(u_1(t))- \gamma( u_2(s))) \partial_x\: T_n ( u_1(t)- u_2 (s) ) \mathcal{H} ( \partial_x K \ast u_1(t) ) \:dx $$
and
$$ I_n^2:= \frac{1}{n}\: \int_{\Omega} \gamma(u_2(s))\Big(\mathcal{H} ( \partial_x K \ast u_1(t) )- \mathcal{H} ( \partial_x K \ast u_2(s) )\Big) \partial_x\: T_n ( u_1(t)- u_2(s) ) \:dx .$$
Recall that $u_i(t)\in Lip_1$ for any $t\in [0,T),$ and that $\gamma$ and $\mathcal H $ are Lipschitz continuous. So, there exists a constant $c^\star>0$ (independent of $n$) such that
\begin{equation}\label{Ex445}
I_n^1 \leq c^\star\: \int_{\Omega} |u_1(t)- u_2(s)| \:dx .
\end{equation}
Integrating by parts in $I_n^2,$ we obtain
\begin{equation}\label{Ex5}
\begin{array}{l l }
\displaystyle I_n^2
& \displaystyle= - \frac{1}{n}\: \int_{\Omega} \gamma^\prime(u_2(s)) \partial_x u_2(s)\Big(\mathcal{H} ( \partial_x K \ast u_1(t) )- \mathcal{H} ( \partial_x K \ast u_2(s) )\Big) \: T_n ( {u_1(t)- u_2(s)} ) \:dx \\ \\
& \displaystyle - \frac{1}{n}\: \int_{\Omega} \Big\{ \gamma(u_2(s)) \Big( \big( \partial_{x^2} K \ast u_1(t)\big) \mathcal{H}^\prime ( \partial_x K \ast u_1(t) ) \\
&
\hspace*{4cm}
- \big(\partial_{x^2} K \ast u_2(s)\big) \mathcal{H}^\prime ( \partial_x K \ast u_2(s) )\Big) \: T_n ( {u_1(t)- u_2(s)} )\Big\} \:dx .
\end{array}
\end{equation}
The first term of $I_n^2$ satisfies
\begin{equation}\label{Ex556}
\begin{array}{l} \displaystyle \frac{1}{n}\: \Big| \int_{\Omega} \gamma^\prime(u_2(s)) \partial_x u_2(s)\Big(\mathcal{H} ( \partial_x K \ast u_1(t) )- \mathcal{H} ( \partial_x K \ast u_2(s) )\Big) \: T_n ( {u_1(t)- u_2(s)} ) \:dx \Big| \\ \\
\hspace*{3cm} \displaystyle
\leq c^{\star\star}\: \int_{\Omega} |u_1(t)- u_2(s)| \:dx .
\end{array}
\end{equation}
As to the second term that we denote here by $I_n^{2'}$
$$\displaystyle I_n^{2'} :=\frac{1}{n}\: \int_{\Omega} \gamma(u_2(s)) \Big\{ \big( \partial_{x^2} K \ast u_1(t)\big) \mathcal{H}^\prime ( \partial_x K \ast u_1(t) )- $$ $$\hspace*{5cm}\big(\partial_{x^2} K \ast u_2(s)\big) \mathcal{H}^\prime ( \partial_x K \ast u_2(s) )\Big\} \: T_n ( u_1(t)- u_2(s) ) \:dx ,$$
we have
\begin{equation}\label{Ex66}
\begin{array}{l l }
\displaystyle |I_n^{2'}| \leq c^{\star\star \star}\: \int_{\Omega} |u_1(t)- u_2(s)| \:dx .
\end{array}
\end{equation}
From (\ref{Ex445}), (\ref{Ex556}) and (\ref{Ex66}), we obtain
$$ \vert I_n\vert \leq \: C\: \int_{\Omega} |u_1(t)- u_2(s)| \:dx ,$$
so that, for any $n>0,$ we have
$$ \frac{d}{dt} \int_{\Omega} \int_0^{u_1(t)} T_n (r -u_2(s)) dr dx +
\frac{d}{ds} \int_{\Omega} \int_0^{u_2(s)} T_n(r -u_1(t)) dr dx $$
$$ \leq \: C\: \int_{\Omega} |u_1(t)- u_2(s)| \:dx + \int_{\Omega} ( f(t) -f(s)) \: T_n(u_1(t)-u_2(s)) \: dx . $$
Letting $n\to 0,$ we get
$$ \frac{d}{dt} \int_{\Omega} \int_0^{u_1(t)} \mbox{sign}_0 (r -u_2(s)) dr dx +
\frac{d}{ds} \int_{\Omega} \int_0^{u_2(s)} \mbox{sign}_0(r -u_1(t)) dr dx $$
$$ \leq \: C\: \int_{\Omega} |u_1(t)- u_2(s)| \:dx + \int_{\Omega} ( f(t) -f(s)) \: \mbox{sign}_0(u_1(t)-u_2(s)) \: dx . $$
Thus $$
\displaystyle \frac{d}{dt} \int_{\Omega} | u_1(t)-u_2(s)| \:dx + \frac{d}{ds} \int_{\Omega} | u_1(t)-u_2(s)| \:dx \leq
\displaystyle C\: \int_{\Omega} |u_1(t)- u_2(s)| \:dx $$ $$+ \int_{\Omega} \vert f(t) -f(s) \vert \: dx.
$$
Now, de-doubling variables $t$ and $s,$ we get
\begin{equation}\label{inter2}\frac{d}{dt} \int_{\Omega } \vert u_1(t)-u_2(t)\vert dx
\le C \int_{\Omega} \vert u_1-u_2\vert dx \quad \hbox{ in }\mathcal{D}'(0,T)
\end{equation}
and the uniqueness follows by Gronwall Lemma.
\qed
\begin{rem}
One can see from the proof of Theorem \ref{theo0} that it is possible to prove the result of existence of a variational solution for any $f\in L^1(0,T,\left( \mathcal C_0^{0,\alpha}(\Omega)\right)^*),$ with $0\leq \alpha<1,$ where $\left( \mathcal C_0^{0,\alpha}(\Omega)\right)^*$ denotes the topological dual space of $\mathcal C_0^{0,\alpha}(\Omega).$ More precisely, for any $f\in L^1(0,T, \left( \mathcal C_0^{0,\alpha}(\Omega)\right)^* )$ and $u_0\in Lip_1,$ the problem (E) has a variational solution in the sense that $u\in L^{\infty}(0,T, C_0(\Omega))$, $u(t) \in Lip_1$ for a.e. $t\in [0,T]$, and for every $\xi \in Lip_1$ and every $k>0$, $$\frac{d}{dt} \int_{\Omega} \int_0^{u(t)} T_k (s -\xi) ds { d}x - \int_{\Omega} \gamma(u(t))\: \mathcal{H} ( \partial_x K \ast u(t) ) \partial_xT_k (u(t)-\xi) dx $$ $$ \le \langle f(t), T_k(u(t)-\xi) \rangle, \quad \hbox{ in } \mathcal D'([0,T)).$$ This allows us in particular to consider situations where we have some singular source terms of the type $$f(t)=\sum_n \left( \delta_{x_n} - \delta_{y_n} \right), $$ where $x_n$ and $y_n$ are sequences in $\mathbb{R}^d,$ satisfying $$\sum_n \vert x_n-y_n\vert ^\alpha <\infty. $$ However, the uniqueness is not clear if one weakens the assumption $f\in L^1(\Omega_T).$
\end{rem}
\end{document} | arXiv |
Which experimental procedures influence the apparent proximal femoral stiffness? A parametric study
Morteza Amini1,2,
Andreas Reisinger1,2,
Lena Hirtler3 &
Dieter Pahr1,2
Experimental validation is the gold standard for the development of FE predictive models of bone. Employing multiple loading directions could improve this process. To capture the correct directional response of a sample, the effect of all influential parameters should be systematically considered. This study aims to determine the impact of common experimental parameters on the proximal femur's apparent stiffness.
To that end, a parametric approach was taken to study the effects of: repetition, pre-loading, re-adjustment, re-fixation, storage, and μCT scanning as random sources of uncertainties, and loading direction as the controlled source of variation in both stance and side-fall configurations. Ten fresh-frozen proximal femoral specimens were prepared and tested with a novel setup in three consecutive sets of experiments. The neutral state and 15-degree abduction and adduction angles in both stance and fall configurations were tested for all samples and parameters. The apparent stiffness of the samples was measured using load-displacement data from the testing machine and validated against marker displacement data tracked by DIC cameras.
Among the sources of uncertainties, only the storage cycle affected the proximal femoral apparent stiffness significantly. The random effects of setup manipulation and intermittent μCT scanning were negligible. The 15∘ deviation in loading direction had a significant effect comparable in size to that of switching the loading configuration from neutral stance to neutral side-fall.
According to these results, comparisons between the stiffness of the samples under various loading scenarios can be made if there are no storage intervals between the different load cases on the same samples. These outcomes could be used as guidance in defining a highly repeatable and multi-directional experimental validation study protocol.
Validation of numerical predictive and monitoring models is carried out against experimental results. This validation process determines how accurate a model can mimic reality [1, 2]. Image-based methods, such as CT-based finite element (FE) models, have become state-of-the-art in biomechanical bone research, with clinical use cases [3–7]. Using these models, the risk of fracture in patients with underlying conditions, such as osteoporosis, can be non-invasively estimated to guide treatment efforts and lessen the consequent immobilization burden.
The most prevalent site in orthopedic biomechanics studies is the hip or proximal femur. Hip fractures account for the majority of fracture-related disabilities [8, 9]. Osteoporosis and fall are the two leading causes of hip fractures [10, 11]. Many biomechanical experiments done on femur are based on single tests per sample [12–18]. To measure the strength of the bone, samples are loaded until failure. FE models validated against such experimental data are at risk of being biased towards those specific experimental load cases.
There has been growing evidence that considering multiple loading directions in the experimental and numerical studies might improve the predictive ability of FE models [5, 19–21]. In one study [19], using nonlinear CT-FE models, the effect of loading direction on the fracture load and location of the proximal femur was investigated. However, the employed FE model was only validated against a single stance load case [22]. In other studies [23, 24], samples were loaded in multiple fall load cases to determine the accuracy of the FE-predicted strains related to the side-way fall incidents. Although one FE technique [23] was previously validated against quasi-axial load cases as well [2, 7, 25], the two studies were done on different sample groups, one for stance and one for the fall. To our knowledge, there are no studies with multiple loading directions applied in both stance and fall configurations on the same samples.
In order to perform multiple mechanical tests on each sample, structural damages should be avoided through non-destructive loading regimes. This imposes two restrictions: First, the load amplitude should be restricted to much lower values compared to fracture loads. Second, a surrogate measure for the sample strength should be employed. A widely used criterion to address the former is conducting non-destructive tests on the femur by applying a fraction (75%) of the donor's body weight as the maximum load [26]. With regards to the latter, the apparent stiffness, a frequently reported surrogate bone strength measure, can be used as the outcome variable [27]. Despite the above-mentioned restrictive circumstances, using elastic structural metrics (i.e., apparent stiffness) might still seem like a step backward. However, if the apparent stiffness shows significant alterations, it will point to the direction of a change in the tested material itself, which in turn would affect the strength [28, 29].
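As an illustration only (this is not part of the protocol reported here), the apparent stiffness is commonly estimated as the slope of a straight-line fit to the quasi-linear portion of the load-displacement curve recorded by the testing machine. In the minimal sketch below, the force and displacement arrays and the 25-75% fit window are assumptions made for the example rather than values taken from this study:

```python
import numpy as np

def apparent_stiffness(displacement_mm, force_n, window=(0.25, 0.75)):
    """Slope (N/mm) of a least-squares line fitted to the portion of the
    loading curve lying between window[0]*peak force and window[1]*peak force."""
    displacement_mm = np.asarray(displacement_mm, dtype=float)
    force_n = np.asarray(force_n, dtype=float)
    f_peak = force_n.max()
    # keep only the central, quasi-linear part of the loading ramp
    mask = (force_n >= window[0] * f_peak) & (force_n <= window[1] * f_peak)
    slope, _intercept = np.polyfit(displacement_mm[mask], force_n[mask], deg=1)
    return slope
```

The same quantity can be recomputed from the marker displacements tracked by the DIC cameras, which is how a machine-based stiffness value can be cross-checked against an optically measured one.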
To compare the apparent stiffness of the samples under different loading conditions, other unwanted experimental sources of alteration in sample stiffness must be determined. These uncertainties often arise from simplifications introduced during the experimental procedures and can be investigated using a parametric approach. This study design is a powerful tool to isolate the influence of all players and compare their relative significance systematically [30, 31]. Most studies have only reported the repeatability measures of the experiments via repeating each test case multiple times [2, 32] or by re-orienting and re-installing the setup between each test [26]. Others have studied the effect of FE modeling methodological determinants on predicted femoral strength [3]. To our knowledge, there are no studies investigating the effect of typical experimental parameters on the structural properties of the samples using a systematic approach.
The aim of this study was to parametrically determine the influence of common steps involved in an experimental validation study on the apparent stiffness of the proximal femur under multiple loading directions, namely: repetition, pre-loading, re-adjustment, re-fixation, storage, μCT scanning, and loading direction (15∘ deviation from neutral alignment) in stance and side-fall configurations. To our knowledge, this is the first study in which both stance and fall configurations have been tested on every sample. These results could add an aggregated reference to the currently scattered pool of data required for planning experimental protocols with reduced uncertainties affecting the measured structural properties of bone samples.
Methods & materials
Ten proximal femoral samples from five donors (Table 1) were harvested and kept frozen at −23∘C (Center for Anatomy and Cell Biology, Medical University of Vienna). The specimens originated from voluntary body donations for scientific and teaching purposes to the Center (according to a protocol accepted by the ethics committee of Karl Landsteiner University of Health Sciences). Samples were screened for the absence of any pathological disease. All procedures were performed in accordance with relevant guidelines.
Table 1 Information on donors of the samples
Experimental setup
To perform a parametric study, we developed a new femoral experimental setup based on two main criteria: 1. possess fully defined boundary conditions, and 2. provide the means for fast multiple non-destructive tests in stance and fall configurations with variable loading direction. The test setup was comprised of the following main components: Alignment setup, embedding components, testing apparatus, scan chambers, and DIC setup.
Alignment setup: A custom-made alignment setup was used to maintain the femur's physiological neutral stance alignment in the final prepared sample. It was comprised of two cross lasers, in-house 3D printed holder devices, and manufactured POM (Polyoxymethylene) supports (Fig. 1). The intact femur was laid down on two distal and proximal supports. The bone was axially tilted until the neck axis, passing through the femoral head center and middle of the femoral neck, was coincident with the horizontal line of a cross laser (Fig. 1a). Metal spacers with various thicknesses were placed between the condyles of the femur and the distal support to maintain the torsional alignment. Then, the proximal support was elevated using a screw mechanism so that the femoral head center and the distal mid-condylar line were coincident with the horizontal laser line (Fig. 1b). Finally, using the second cross laser mounted above the dissection table, a 3∘ adduction angle with the reference line of the alignment supports was formed. The femoral head center and the femoral intercondylar fossa, hence the mechanical axis, were aligned with the laser line (Fig. 1c). Once all angles were determined, a custom-made C-shaped device was used to fix the alignment on the proximal portion of the bone making it ready for the cutting and potting steps. This device was fixed 105 mm below the bounding plane coincident to the most proximal point of the femoral head (Fig. 1e). The proximal portion of the bone was cut using a bone saw 45 mm below the fixed device, resulting in a total sample length of 150 mm (Fig. 1f). The total sample length was restricted by the maximum field of view from the μCT scanner, which was necessary for future studies using the micro-FE models of the samples.
The neutral stance alignment of the femur was achieved by using a custom-made alignment setup. The intact femur was a) tilted around the shaft axis until the neck axis was horizontal, b) leveled so that the femoral head center and the distal mid-condylar line were coincident, and c) tilted in the adduction direction until its mechanical axis was 3∘ beyond the reference line. These steps were achieved using d) two cross lasers, and e) manufactured POM supports. The aligned sample was f) cut to a sample size of 150 mm, measured from the femoral head along the shaft axis
Embedding components: The proximal femoral samples were held in their neutral stance alignment on the potting block, and their shaft was embedded in a 50 mm diameter cylinder. A 5-mm gap was left between the bottom of the sample and block to account for uneven cutting surfaces. Two pins were attached to the walls of the potting block to prevent the shaft from rotating in the holder (Fig. 2a). The trochanter and the head were embedded in spherical segments to provide defined boundary conditions at the contact points and avoid local crushing. The alignment was done using 3D printed adapters (Fig. 2). Using printed holders and pins, 4-marker clusters were later attached to the femoral head, major trochanter, and shaft of the samples for displacement tracking.
The proximal portion of samples was potted at (a) shaft, and (b) head and trochanter locations using Polyurethane (PU) casting resin and a custom-made setup. (c) The posterior femoral neck area was sprayed with an airbrush to create a speckle pattern (trial tests). Three marker clusters containing four markers each were placed on the femoral head, trochanter, and shaft of the prepared samples
Testing apparatus: A 25 kN load cell with 6 degrees of freedom (DOF) (Hottinger Baldwin Messtechnik (HBM) GmbH, Germany) was mounted on the 30 kN electro-mechanical axial testing machine (Z030, ZwickRoell Ulm, Germany). Hardened iron disks were manufactured and used with ring ball-bearings to apply a purely axial load (Fig. 3). A rotating milling machine table was equipped with a hinge bearing to hold the shaft of the sample and allow for 5 degrees of freedom for the sample alignment (X, Y, Rx, Ry, Rz). An additional uni-axial 25 kN load cell (see Fig. 3) was used in the fall configuration to support the head while the trochanter was loaded (HBM, Germany). In the stance configuration, the shaft block was fixed on the rotating table. The abduction/adduction angle was adjusted on the table. The table was fixed on the testing machine. In the fall configuration, the shaft block was free to rotate in the abduction/adduction direction while the head was resting on the support load cell. The table was fixed on the machine as well (Fig. 3).
The testing apparatus allowed for multiple loading directions in both stance and fall configurations
Scan chambers: μCT: A custom-made chamber was manufactured using POM (Polyoxymethylene) and Plexiglas (Fig. 4). The cylindrical chamber was 15 cm in diameter with an inner height of 17 cm. A clamping mechanism was fitted in the chamber so that the femur could stand upright and be fixed to avoid movement artifacts during the scan. A sealing cap with a pressure valve ensured that the sample did not dehydrate during the scan while heated air could escape the chamber.
a) μCT chamber from POM and Plexiglass to fix the sample in neutral alignment and keep it hydrated throughout the scanning process, b) CT chamber from polypropylene plastic fitted with embedding base, and holders to accommodate a pair of samples submerged in saline solution and in neutral alignment
CT: To mimic the clinical conditions in which two legs with surrounding soft tissues are present in scanner's field of view, we modified a rectangular translucent storage box (polypropylene) using embedding material, 3D printed adapters, and ready-made PVC holders. Each pair of samples were fixed side-by-side, in their neutral alignment, 20 cm apart, and fully submerged in saline solution (Fig. 4).
DIC setup: Digital Image Correlation (DIC) system (ARAMIS 3D Camera, GOM GmbH, Braunschweig, Germany) with two CCD cameras was used for optical displacement tracking. The 6-megapixel cameras were 150 mm apart and positioned at a perpendicular distance of 350 mm from the sample, capturing images at a 10 Hz rate from a measurement volume of 160 x 130 x 95 mm (LxWxD). All measurements were done according to the manufacturer's standard protocol (GOM GmbH, Braunschweig, Germany). The system was calibrated before the beginning of each session using the standard calibration plate and according to the manufacturer's protocol, keeping the calibration deviation below 0.05 pixels. Clusters of markers comprised of four markers (GOM GmbH, Braunschweig, Germany) attached to a 3D printed holder were placed on the head, trochanter, and shaft of the samples in order to measure the apparent stiffness of the bone (Fig. 2). Additional markers were placed on the loading plate and holder block in order to be able to measure the apparent stiffness of the full specimen as well (Fig. 5). All markers remained within the measurement volume at all times. There was approximately 1 μm of displacement noise in the marker displacement data at zero load.
Stiffness of the machine (Km) and the embedding (Ke) could be calculated based on the stiffness of the whole assembly (Kz), the bone (Kb), and full specimen (Ks) measured using markers (depicted in blue). These values were used to verify the methodology and the outcome variable
While all steps involved in our experimental validation study are described below, not all of the acquired data are relevant to this manuscript's scope and hence not all are presented here (e.g., scan data acquired for the FE modeling phase reported elsewhere). The following parameters were tested for stance and side-fall configurations:
Repetition: Repeating a test five times without touching any parts of the setup or sample.
Pre-loading: Testing a load case once right after fixing the sample into the desired place and alignment and comparing the results with the average of the immediately following five repetitions, without touching the sample or setup in between.
Re-adjustment: Distorting the sample configuration and placement on the machine and re-adjusting it back to the initial condition without taking the sample out of the setup.
Re-fixation: Taking out the sample and distorting the setup adjustments, then putting everything back to their initial condition.
Storage: Storing samples in a −23∘C freezer for four weeks.
μCT scanning: μCT scanning the sample.
Loading direction: Tilting the samples by ±15 degrees about the abduction-adduction axis from their neutral stance or fall alignments.
Three sets of biomechanical testing were carried out according to the following plan (Fig. 6). For each test, samples were loaded up to 75% of the donor's body weight (BW) to avoid any damage or destruction [26]. Loading was applied at a 5 mm/sec rate. There was a minimum 1-minute pause between consecutive tests (3 or 6 minutes for tests requiring switching between the direction or configuration of the sample, respectively):
Fresh frozen samples were taken from the freezer one-by-one, and the excess muscle and fat tissue was cut from their proximal half using a knife and scalpel. The periosteum was carefully scraped from the femoral neck region as well as the shaft using a bone scraper. The greater trochanter surface was scraped to remove cartilaginous tissues, but the femoral head cartilage was kept intact to avoid damaging the thin cortex at that region. The clean sample was then aligned, cut, and embedded in a span of 3 hours, and stored back in the freezer until all samples were processed. The process was done on frozen samples since soft tissue removal was easier than on thawed bone, and the bone marrow could be sealed inside the bone, avoiding large air cavities in the bone for better scan qualities.
1. Fresh frozen specimens aligned, cut, and potted in neutral stance position, 2. Clinical CT scan of the samples (data not presented, acquired for another study), 3. The first set of mechanical tests (set I): 10 samples x 2 configurations x 3 directions x 2 tests per load case (* a third test per load case was done on one sample for re-fixation), collecting: apparent stiffness and surface strain data, evaluating: re-fixation and re-adjustment, 4. Samples stored at −23∘C for four weeks, 5. The second set of tests (set II): 10 samples x 2 configurations x 3 directions x 1 test per load case, collecting: apparent stiffness and marker data, evaluating: storage, 6. μCT scan of the full sample (5 hours per scan), 7. The third set of tests (set III): 10 samples x 2 configurations x 3 directions x 6 repeated tests per load case, collecting: apparent stiffness and marker data, evaluating: repetition, pre-loading, μCT scanning, and loading configuration and direction
On the day of the CT scanning, all samples were submerged in 0.9% PBS (Phosphate-buffered saline) solution filled plastic bags and placed in the vacuum-desiccator (Trivac D8B; OC Oerlikon Management AG, Pfäffikon, Switzerland) for 30 minutes at room temperature to thaw and extract the air bubbles. The samples were then carefully transferred in their submerged condition into their corresponding room-temperature PBS-filled CT chambers and fixed. After scanning (Toshiba Aquilion Prime, res: 0.625x0.625x0.25 mm3), samples were wrapped in soaked towels and stored at −23∘C for one week. CT data is necessary for the CT-based FE modeling phase following this study. Given the relatively short scan time (13 secs compared to 5 hrs for μCT) and ideal submerged sample conditions throughout the process, it was not included as a parameter in the study (to reduce the number of test rounds).
Set I: Frozen samples were thawed in a room temperature PBS solution bath for 4 hours. The posterior femoral neck area was pat dried and degreased using ethanol pads. Speckle pattern was sprayed. Each sample was tested in stance and side-fall configurations for each of the three loading directions (neutral, 15∘ abduction, 15∘ adduction). These six load cases were repeated twice to examine the effect of re-adjustment for all samples. Additionally, for one sample all 6 load cases were repeated once more to check the re-fixation effect (due to the time consuming nature of removing the sample from the setup for every test, this parameter was limited to only 1 sample).
Samples were wrapped in soaked towels and stored for four weeks in a −23∘C freezer.
Set II: Frozen samples were fully thawed in room temperature PBS bath. Marker clusters were pinned on the head, trochanter, and shaft of the sample on the posterior side. Each sample was tested in stance and side-fall configurations for each of the three loading directions. Tests were done only once. Markers were tracked using stereo cameras.
μCT scanning: Samples were taken out of the testing setup, submerged in PBS solution bath for re-hydration, clamped in the scan chamber, with soaked towels placed at the bottom of the sealed (with a pressure valve) chamber to avoid dehydration. The full sample length was scanned in a 5-hour session (Skyscan 1173, Bruker, Belgium) (field of view: 120 x 150 mm, resolution: 30 μm, voltage: 130 kV, current: 60 mA, exposure: 580 ms, filter: Al 1.0 mm).
Set III: It was done immediately after the μCT scanning and on the same day as set II. Each sample was tested in stance and side-fall configurations for all three loading directions. Each test was repeated 6 times (with a one-minute resting period in between) to examine the repeatability of the tests as well as the pre-loading effect. Samples were frozen at the end.
A total of 180 tests (and an additional 66 and 300 repetitions in set I and set III, respectively) were performed. Data was captured at 100 Hz and 10 Hz by the testing machine and stereo cameras, respectively.
Collected raw data comprised: axial and shear loads as well as moments at the load introduction site, femoral head support force in side-fall load cases, vertical displacement of the machine head, and marker displacements at the five locations of: loading plate, femoral head, major trochanter, shaft, and holder block. The support load, shear loads, and moments were used to check the boundary conditions of the setup. Analytically, we should expect zero shear forces at the load introduction location to match the free horizontal translation DoFs. Furthermore, the maximum moment at the loading plate should match the values calculated using the maximum load and the distance between the center of the contact surface of the femoral head/trochanter cap (in stance and side-fall configurations, respectively) and the center of the load cell. A significant difference between the experimental and analytic results would point to an unwanted bending moment on the femoral head, negating the free rotational DoFs. Finally, the reaction force at the support plate in the side-fall configuration is calculated using the distances between the loading and support contact points and the shaft bearing axis. A significant deviation from this value would contradict the free rotation DoF at the shaft (Fig. 3a).
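As a minimal sketch of these static consistency checks (Python; the body weight and lever-arm distances below are hypothetical placeholders, not measured values from this study):

F_max = 0.75 * 80 * 9.81             # peak load: 75% BW of an assumed 80 kg donor, in N

# Stance: expected moment at the load cell = peak load x horizontal offset between
# the head-cap contact centre and the load-cell centre (offset assumed here).
offset_contact_to_loadcell = 0.010   # m, assumed
M_expected = F_max * offset_contact_to_loadcell            # N*m

# Side-fall: with the shaft free to rotate about the bearing axis, moment balance
# about that axis gives the expected reaction at the head support.
d_load_to_bearing = 0.060            # trochanter loading point to bearing axis, m, assumed
d_support_to_bearing = 0.090         # head support point to bearing axis, m, assumed
R_support_expected = F_max * d_load_to_bearing / d_support_to_bearing

print(f"expected plate moment: {M_expected:.1f} N*m, "
      f"expected support reaction: {R_support_expected:.1f} N")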
Outcome variables: The apparent stiffness was defined as the slope of the linear section of the load-displacement curve (Fig. 7). Based on some preliminary tests, the linear section was defined between 200N and 400N. The load came from the axial component recorded by the 6 DOF load cell. Depending on the stiffness measurement criteria, displacement was based on (Fig. 5):
Sample stiffness(Kz): the moving head of the mechanical testing machine.
a) Load-displacement plots from the testing machine showing the full loading and unloading cycles for all six load cases of a sample. b) The overall stiffness of the sample for each test was defined as the slope of the linear section of the plot, which was set between 200 N and 400 N for all samples
Bone stiffness(Kb): the relative vertical displacement of the femoral head and shaft (for stance) or trochanter and femoral head (for fall) markers.
The stiffness of the embedding segments and the machine components (Ke and Km, respectively) were calculated using the spring theory.
Markers were tracked by the DIC cameras and load-displacement plots were evaluated. Tests with highly linear load-displacement plots (R2>0.95) were considered for further analysis. Kz was validated against the corresponding Kb data and used as the main outcome variable (Fig. 8).
The 75% BW peak load resulted in relatively small displacements in the bone. Depending on the configuration, direction, and stiffness of the sample, the recorded values for marker displacements had different qualities. The example plots show instances with R2 values of the regression line of (a) 1.00 (0.9997), (b) 0.98, and (c) 0.02 corresponding to stance abduction, stance adduction, and fall abduction load cases of one sample, respectively. Applying a R2>0.95 threshold for the marker load-displacement data to be considered viable, apparent stiffness measures of Kb and Kz showed good correlation in (d) stance (R2 = 0.92) and (e) fall (R2 = 0.82) configurations
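A minimal sketch of this stiffness evaluation in Python (assuming the load and displacement histories have already been extracted as arrays; the function and variable names are illustrative, not taken from the study's processing scripts):

import numpy as np
from scipy import stats

def apparent_stiffness(load_N, disp_mm, lo=200.0, hi=400.0, r2_min=0.95):
    """Slope of the load-displacement curve within the 200-400 N window (N/mm).
    The R^2 threshold mirrors the viability criterion used for the marker data."""
    load_N, disp_mm = np.asarray(load_N, float), np.asarray(disp_mm, float)
    window = (load_N >= lo) & (load_N <= hi)
    fit = stats.linregress(disp_mm[window], load_N[window])
    r2 = fit.rvalue ** 2
    return (fit.slope if r2 >= r2_min else None), r2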
Statistical Analysis: To report the tests' repeatability, we used the coefficient of variation (CV%), obtained by dividing the standard deviation by the average of the five repeated measurements of each load case in set III:
$$CV\% = \frac{Standard\:Deviation}{Average}\times 100 $$
Based on the normality test on the data sets, either the Wilcoxon signed-rank test or Student's t-test was used to determine the significance of the difference between paired groups, with a significance threshold set at 0.05. To ensure the validity of the t-tests, we imposed a minimum requirement of 6 data points per parameter. Where data from set III was involved, the average of the five repetitions was considered for the analysis. For the re-fixation parameter, where only one case was tested multiple times, a percent difference (%Diff) between the average of the five repetitions before and after re-fixing the sample was reported.
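A sketch of the corresponding analysis steps using SciPy (the array names are placeholders for paired stiffness measurements before and after a given intervention; this is not the study's actual code):

import numpy as np
from scipy import stats

def cv_percent(repeats):
    """Coefficient of variation of repeated stiffness measurements, in percent."""
    repeats = np.asarray(repeats, dtype=float)
    return 100.0 * repeats.std(ddof=1) / repeats.mean()

def paired_comparison(before, after, alpha=0.05):
    """Student's paired t-test if the paired differences pass a Shapiro-Wilk
    normality check, otherwise the Wilcoxon signed-rank test."""
    diff = np.asarray(after, float) - np.asarray(before, float)
    _, p_normal = stats.shapiro(diff)
    if p_normal > alpha:
        return stats.ttest_rel(before, after)
    return stats.wilcoxon(before, after)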
Boundary conditions: No substantial peak shear forces (< 2N) at the load introduction site were recorded across all tests. The calculated and measured support reaction forces for the fall configuration were matching. These findings validated the defined boundary conditions of the setup.
Repeatability parameters: The average repetition error (CV%) in Kz measurements for all load cases was 1.53% (95% confidence interval [0.32, 2.75]). Re-fixation, Pre-loading, and Re-adjustment did not affect the Kz significantly (Table 2).
Table 2 Effect of tested parameters on the apparent stiffness (Kz) of the proximal femoral samples
Sample manipulation effects: The storage cycle significantly reduced the Kz of the samples (p-value < 0.01, avg. % Diff ≈ 25%). The Kz showed no significant effect from performing μCT scanning on the samples (p-value =0.92).
Test configurations: The measured Kz for the neutral stance alignment was 27% larger compared to the neutral fall (p < 0.01). The samples were, on average, 33% and 13% less stiff when abducted for 15 degrees in the stance and fall cases, respectively (p < 0.02) (Table 2). Deviation of the load direction by 15 degrees in the adduction direction did not alter the apparent stiffness significantly. The overall trend of stiffness values for each load configuration can be seen in Fig. 9.
(a) Apparent stiffness of the samples (Kz) under different load conditions. Stiffness values are the average of 6 repeated measurements per load case in set III. Horizontal dashed lines represent the average value of the load case with corresponding color. (b) The average stiffness of the 10 samples per loading condition with error bars representing one standard deviation (SD) and following the same color code
Stiffness measurement: To validate the Kz values, which are based on the machine displacement data, we plotted load-displacement graphs using marker tracking data (Fig. 8). Among all load cases, the stance-abduction configuration produced highly linear results (R2>0.95) for all ten samples, followed by stance-neutral with 7 viable data points. There were a total of 11 fall configuration tests meeting this criterion as well. There was a strong correlation with an R2 ≈ 0.92 and 0.82 between the machine and marker data for stance and fall, respectively (Fig. 8(d,e)). Furthermore, the sample stiffness, Kz, was decomposed into its constituent elements using the spring theory (Fig. 5). The bone and full specimen values, Kb and Ks, were measured using marker tracking. The machine and embedding stiffness values, Km and Ke, were calculated accordingly (Fig. 10):
$$\begin{array}{*{20}l} \frac{1}{K_{\text{e}}} = \frac{1}{K_{\text{s}}} - \frac{1}{K_{\text{b}}}\\ \frac{1}{K_{\text{m}}} = \frac{1}{K_{\text{z}}} - \frac{1}{K_{\text{s}}} \end{array} $$
The average apparent stiffness of test assembly constituents in the stance-abduction load case. The only variable element across the tests were the samples. Components are: Full testing assembly (Kz), full specimen comprised of the bone and embedding caps (Ks), the bone segment between the femoral head and shaft markers (Kb), testing machine including the tilting table and load cells (Km), and embedding material (Ke)
Looking at the percent coefficients of variation (CV%), the variation in Kz appears to stem mostly from the relatively high variability of Kb, with the mechanical components showing fairly consistent stiffness measurements. In the given load case, the Kz was roughly one third of the Kb, influenced largely by the Ke.
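As a numerical sketch of this series-spring decomposition (Python; the stiffness values are illustrative round numbers, not the measured averages from Fig. 10):

def series_complement(k_total, k_known):
    """Stiffness of the remaining series element: 1/k_rem = 1/k_total - 1/k_known."""
    return 1.0 / (1.0 / k_total - 1.0 / k_known)

# Illustrative values in N/mm (the overall assembly is softer than the bone alone).
Kz, Ks, Kb = 900.0, 1200.0, 2700.0
Ke = series_complement(Ks, Kb)   # embedding caps: 1/Ke = 1/Ks - 1/Kb
Km = series_complement(Kz, Ks)   # machine, table and load cells: 1/Km = 1/Kz - 1/Ks
print(round(Ke), round(Km))      # 2160 3600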
In this study, we aimed at using a systematic approach to find which experimental parameters affect the apparent stiffness of the proximal femur. Our results indicate that among the sources of uncertainty, the storage of the samples significantly alters their apparent stiffness. Moreover, the controlled parameter, i.e., loading direction, also has a significant effect on the apparent stiffness of the proximal femoral samples. Other sources of random effects pertaining to the repeatability of the mechanical tests proved negligible.
According to our findings, a freezing and storage cycle would significantly affect the apparent stiffness (p < 0.05). A 25% alteration in the Kz was measured. A cycle included storing wrapped and sealed samples in a −23∘C freezer for four weeks. This could potentially mean that comparisons between stiffness measurements before and after a storage cycle might be critically compromised and should ideally be avoided in experimental validation studies. Our results are different from previous reports on the subject. Some earlier research on the effect of storage methods on trabecular bone [33] or skull [34] reported no change in the stiffness of their specimens after several freezing and storage cycles. The inconsistencies between these outcomes might have roots in the sample choices. We have tested full proximal femur samples, meaning the embedding caps and cartilage layers were also frozen and tested attached to the bone, while the aforementioned studies used bare bone samples. Among possible pathways for the observed structural behavior, incomplete thawing or dehydration of the samples during the tests could be readily rejected given the insignificant difference between set II and set III and the samples' hydrated conditions throughout storage and tests. Another hypothesis is that the freezing cycle might affect the embedding-cartilage or cartilage-bone conjunctions rather than the bone structure. To check the effect of similar storage cycles on the embedding material, three groups of 10 standard tensile test specimens (DIN ISO 527-2 b) were produced using a 3D printed cast. They were stored a) in a dry container, b) submerged in a 0.9% PBS bath, and c) wrapped in soaked paper towels and sealed in the −23∘C freezer for 10 days. Afterwards, frozen samples were thawed in a room-temperature PBS bath and all samples were tested to measure their tensile E-modulus (DIN ISO 527-2). Student's t-test was used for comparing the groups. There was no significant difference between the dry-frozen and wet-frozen groups. There was an 8% decrease in E-modulus of the wet samples compared to the dry group (Fig. 11). Since, according to the average stiffness values in Fig. 10, a 40% change in Ke would be required to account for the observed 25% change in the Kz after the storage cycle, the embedding material seems an unlikely candidate. Based on the available data from these experiments, we are unable to confidently pinpoint the mechanism through which the stiffness of the samples was affected.
Effect of storage methods on the Polyurethane (PU) resin tensile E-modulus. Ten standard (DIN ISO 527-2) tensile samples per storage method were produced. They were stored either in a dry container (dry), submerged in a 0.9% PBS solution bath (wet), or wrapped in soaked towels and sealed in a −23∘C freezer (frozen) for ten days. The average tensile E-modulus of each group was acquired through standard tensile tests. Student's t-test was used for comparison between the means (significance at p < 0.05)
There was a significant alteration in the Kz following tilting of the loading direction by 15∘. The significantly lower stiffness of the samples in 15∘ abduction load cases is in line with reported lower fracture loads under similar conditions [22–24]. An interesting observation is that switching the loading configuration between neutral stance and neutral fall affects the Kz to the same extent as the 15∘ abduction. More importantly, this effect is comparable to that of the storage cycle. Aside from its more obvious implications for multi-directional mechanical testing, it is also noteworthy that sample misalignment of this range might significantly jeopardize the mechanical test outcomes.
Our results show that all other experimental sources of uncertainty, i.e., re-fixation, re-adjustment, pre-loading, and μCT imaging, had insignificant effects on the Kz. In other words, replacing the samples or reassembling the testing setup is safe for the stiffness measurements. Furthermore, in the absence of standard protocols for pre-loading regimes and the possible damage they might induce in the sample, using a well-constrained setup with fully defined boundary conditions could reduce the effect of initial maladjustment between the sample and setup components, as the common reason for pre-loading cycles, and potentially alleviate the need for them. Lastly, since the popularity of μCT imaging as a strong tool for hierarchical tracking of the effectiveness of in-vivo and in-vitro studies is constantly on the rise, this result could be taken as a safety indicator in terms of preservation of the structural integrity of scanned bones. It should be noted that the negligible scanning effect comes despite the 5-hour-long scanning time and a moderate rise in the temperature of the chamber and sample.
Choosing the apparent sample stiffness, Kz, as the main outcome variable instead of more prevalent measures of strength is adequately justifiable. Direct measurement of the bone strength, a.k.a. failure load, involves destroying a sample per test. The non-destructive alternative outcome variable to characterize a mechanical structure is its apparent stiffness (K) [27]. A strong correlation has been shown between the stiffness and the strength of bone samples [28, 29, 35, 36]. The apparent stiffness of the bone is calculated based on the deformation of the region of interest. Strain gauges can only measure local strains, and their preparation requires substantial time and treatment of the site with possible structural damage. Full-field surface strain measurements with DIC techniques are favorable alternatives. However, with the selected maximum load threshold of 75% BW [26], at the chosen region of interest of the posterior femoral neck, the noise levels proved to be high enough to prevent us from having a viable strain measurement. The same limitation resulted in limiting the number of successful marker tracking measurements and the resultant Kb values. This is in line with the reported results regarding the better performance of the DIC method in higher loading regimes and fracture tests on longer samples and at superior or inferior regions of the femoral neck, compared to < 1 BW loading cases [13, 37]. Nevertheless, there was a significant correlation between the Kb and Kz for all viable tests spanning across all load cases to merit relying on the statistical analyses of the Kz (R2 ≈ 0.92 and 0.82 for stance and fall, respectively) (Fig. 8). Given the higher sensitivity of the stiffness to structural alterations compared to the strength, which can be inferred from the lower predictive ability of models for K [14, 29, 38, 39], conclusions deduced for parameters with significant effects could even be considered "conservative".
There are limitations in this study that require discussion. The sample size of 10 specimens from 5 donors is relatively small. Although smaller sample sizes have been used in various studies [13, 22–24, 40], it might limit our ability to generalize the outcomes of this study to broader cases. Furthermore, pure isolation of the effects of single parameters proved to be challenging. In between the testing sets, the samples had to be taken out of the setup and put back in, resulting in the potential compound influence of the re-fixation and re-adjustment parameters in addition to those of the storage and μCT scanning. However, the order-of-magnitude difference between the effect size of the storage parameter compared to those two and the similarly insignificant effect of the μCT scanning leads us to deem our derived conclusions unchallenged by the interaction effect. Finally, the 5 mm/s loading rate is not representative of physiological side-fall scenarios. Although repeatedly used in relevant studies [23, 24, 41], the effect size of different studied parameters could differ under higher rates and requires further investigation.
In conclusion, the loading direction, as well as intermediary storage of the frozen samples, affect the apparent stiffness of proximal femoral samples significantly. Using a highly repeatable parametric approach, we showed that the random effects of setup manipulation and intermittent μCT scanning are negligible. For multi-directional validation of FE models, a similar testing setup could be effectively used if there are no storage intervals between the different load cases on the same samples.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Viceconti M, Olsen S, Nolte L-P, Burton K. Extracting clinically relevant data from finite element simulations. Clin Biomech. 2005; 20(5):451–4.
Taddei F, Cristofolini L, Martelli S, Gill H, Viceconti M. Subject-specific finite element models of long bones: an in vitro evaluation of the overall accuracy. J Biomech. 2006; 39(13):2457–67.
Qasim M, Farinella G, Zhang J, Li X, Yang L, Eastell R, Viceconti M. Patient-specific finite element estimated femur strength as a predictor of the risk of hip fracture: the effect of methodological determinants. Osteoporos Int. 2016; 27(9):2815–22.
Varghese B, Short D, Hangartner T. Development of quantitative computed-tomography-based strength indicators for the identification of low bone-strength individuals in a clinical environment. Bone. 2012; 50(1):357–63.
Viceconti M, Qasim M, Bhattacharya P, Li X. Are ct-based finite element model predictions of femoral bone strengthening clinically useful?. Curr Osteoporos Rep. 2018; 16(3):216–23.
Benca E, Synek A, Amini M, Kainberger F, Hirtler L, Windhager R, Mayr W, Pahr DH. Qct-based finite element prediction of pathologic fractures in proximal femora with metastatic lesions. Sci Rep. 2019; 9(1):1–9.
Schileo E, Taddei F, Malandrino A, Cristofolini L, Viceconti M. Subject-specific finite element models can accurately predict strain levels in long bones. J Biomech. 2007; 40(13):2982–9.
Cummings SR, Melton LJ. Epidemiology and outcomes of osteoporotic fractures. Lancet. 2002; 359(9319):1761–7. https://doi.org/10.1016/S0140-6736(02)08657-9.
WHO. Scientific group on the assessment of osteoporosis at primary health care level. 2007. Report of the World Health Organization: http://www.who.int/chp/topics/Osteoporosis.pdf.
Berry SD, Miller RR. Falls: epidemiology, pathophysiology, and relationship to fracture. Curr Osteoporos Rep. 2008; 6(4):149–54.
Parkkari J, Kannus P, Palvanen M, Natri A, Vainio J, Aho H, Vuori I, Järvinen M. Majority of hip fractures occur as a result of a fall and impact on the greater trochanter of the femur: a prospective controlled hip fracture study with 206 consecutive patients. Calcif Tissue Int. 1999; 65(3):183–7.
Katz Y, Lubovsky O, Yosibash Z. Patient-specific finite element analysis of femurs with cemented hip implants. Clin Biomech. 2018; 58:74–89. https://doi.org/10.1016/j.clinbiomech.2018.06.012.
Katz Y, Yosibash Z. New insights on the proximal femur biomechanics using digital image correlation. J Biomech. 2020:109599. https://doi.org/10.1016/j.jbiomech.2020.109599.
Enns-Bray WS, Ariza O, Gilchrist S, Widmer Soyka RP, Vogt PJ, Palsson H, Boyd SK, Guy P, Cripton PA, Ferguson SJ, Helgason B. Morphology based anisotropic finite element models of the proximal femur validated with experimental data. Med Eng Phys. 2016; 38(11):1339–47. https://doi.org/10.1016/j.medengphy.2016.08.010.
Helgason B, Gilchrist S, Ariza O, Chak J, Zheng G, Widmer R, Ferguson S, Guy P, Cripton PA. Development of a balanced experimental–computational approach to understanding the mechanics of proximal femur fractures. Med Eng Phys. 2014; 36(6):793–9.
Gilchrist S, Nishiyama K, De Bakker P, Guy P, Boyd S, Oxland T, Cripton P. Proximal femur elastic behaviour is the same in impact and constant displacement rate fall simulation. J Biomech. 2014; 47(15):3744–9.
Fujii M. Experimental study on the mechanism of femoral neck fractures. Nihon Seikeigeka Gakkai zasshi. 1987; 61(5):531–41.
Pinilla T, Boardman K, Bouxsein M, Myers E, Hayes W. Impact direction from a fall influences the failure load of the proximal femur as much as age-related bone loss. Calcif Tissue Int. 1996; 58(4):231–5.
Bessho M, Ohnishi I, Matsumoto T, Ohashi S, Matsuyama J, Tobita K, Kaneko M, Nakamura K. Prediction of proximal femur strength using a ct-based nonlinear finite element method: differences in predicted fracture load and site with changing load and boundary conditions. Bone. 2009; 45(2):226–31.
Falcinelli C, Schileo E, Balistreri L, Baruffaldi F, Bordini B, Viceconti M, Albisinni U, Ceccarelli F, Milandri L, Toni A, et al. Multiple loading conditions analysis can improve the association between finite element bone strength estimates and proximal femur fractures: a preliminary study in elderly women. Bone. 2014; 67:71–80.
Keyak J, Sigurdsson S, Karlsdottir G, Oskarsdottir D, Sigmarsdottir A, Kornak J, Harris T, Sigurdsson G, Jonsson B, Siggeirsdottir K, et al. Effect of finite element model loading condition on fracture risk assessment in men and women: the ages-reykjavik study. Bone. 2013; 57(1):18–29.
Bessho M, Ohnishi I, Matsuyama J, Matsumoto T, Imai K, Nakamura K. Prediction of strength and strain of the proximal femur by a ct-based finite element method. J Biomech. 2007; 40(8):1745–53. https://doi.org/10.1016/j.jbiomech.2006.08.003.
Grassi L, Schileo E, Taddei F, Zani L, Juszczyk M, Cristofolini L, Viceconti M. Accuracy of finite element predictions in sideways load configurations for the proximal human femur. J Biomech. 2012; 45(2):394–9.
Zani L, Cristofolini L, Juszczyk MM, Grassi L, Viceconti M. A new paradigm for the in vitro simulation of sideways fall loading of the proximal human femur. J Mech Med Biol. 2014; 14(01):1450005.
Schileo E, Taddei F, Cristofolini L, Viceconti M. Subject-specific finite element models implementing a maximum principal strain criterion are able to estimate failure risk and fracture location on human femurs tested in vitro. J Biomech. 2008; 41:356–67.
Zani L, Erani P, Grassi L, Taddei F, Cristofolini L. Strain distribution in the proximal human femur during in vitro simulated sideways fall. J Biomech. 2015; 48(10):2130–43.
Basso T, Klaksvik J, Syversen U, Foss OA. Biomechanical femoral neck fracture experiments–a narrative review. Injury. 2012; 43(10):1633–9.
Patton DM, Bigelow EMR, Schlecht SH, Kohn DH, Bredbenner TL, Jepsen KJ. The relationship between whole bone stiffness and strength is age and sex dependent. J Biomech. 2019; 83:125–33.
Dall'Ara E, Eastell R, Viceconti M, Pahr D, Yang L. Experimental validation of dxa-based finite element models for prediction of femoral strength. J Mech Behav Biomed Mater. 2016; 63:17–25.
Amini M, Reisinger A, Pahr D. Effect of selected scan parameters on QCT-based BMD estimations of a femur In: Thurner PJ, Pahr D, Hellmich Ch, editors. Book of Abstracts of the 25th Congress of the European Society of Biomechanics (ESB 2019). Vienna: TU Verlag: 2019. p. 314.
Amini M, Reisinger A, Pahr DH. Influence of processing parameters on mechanical properties of a 3d-printed trabecular bone microstructure. J Biomed Mater Res B Appl Biomater. 2020; 108(1):38–47. https://doi.org/10.1002/jbm.b.34363. http://arxiv.org/abs/https://onlinelibrary.wiley.com/doi/pdf/10.1002/jbm.b.34363.
Trabelsi N, Yosibash Z, Wutte C, Augat P, Eberle S. Patient-specific finite element analysis of the human femur–a double-blinded biomechanical validation. J Biomech. 2011; 44(9):1666–72.
Linde F, Sørensen HCF. The effect of different storage methods on the mechanical properties of trabecular bone. J Biomech. 1993; 26(10):1249–52.
Torimitsu S, Nishida Y, Takano T, Koizumi Y, Hayakawa M, Yajima D, Inokuchi G, Makino Y, Motomura A, Chiba F, Iwase H. Effects of the freezing and thawing process on biomechanical properties of the human skull. Legal Med. 2014; 16(2):102–5. https://doi.org/10.1016/j.legalmed.2013.11.005.
Cody DD, Gross GJ, Hou FJ, Spencer HJ, Goldstein SA, Fyhrie DP. Femoral strength is better predicted by finite element models than qct and dxa. J Biomech. 1999; 32(10):1013–20.
Op Den Buijs J, Dragomir-Daescu D. Validated finite element models of the proximal femur using two-dimensional projected geometry and bone density. Comput Methods Prog Biomed. 2011; 104(2):168–74. https://doi.org/10.1016/j.cmpb.2010.11.008. 7th IFAC Symposium on Modelling and Control in Biomedical Systems.
Grassi L. Femoral strength prediction using finite element models. Lund: Department of Biomedical Engineering, Lund University; 2016.
Iori G, Schneider J, Reisinger A, Heyer F, Peralta L, Wyers C, Gräsel M, Barkmann R, Glüer CC, van den Bergh JP, Pahr D, Raum K. Large cortical bone pores in the tibia are associated with proximal femur strength. PLOS ONE. 2019; 14(4):1–18. https://doi.org/10.1371/journal.pone.0215405.
Dragomir-Daescu D, Op Den Buijs J, McEligot S, Dai Y, Entwistle RC, Salas C, Melton LJ, Bennet KE, Khosla S, Amin S. Robust qct/fea models of proximal femur stiffness and fracture load during a sideways fall on the hip. Ann Biomed Eng. 2011; 39(2):742–55.
Haider IT, Speirs AD, Frei H. Effect of boundary conditions, impact loading and hydraulic stiffening on femoral fracture strength. J Biomech. 2013; 46(13):2115–21. https://doi.org/10.1016/j.jbiomech.2013.07.004.
Cristofolini L, Conti G, Juszczyk M, Cremonini S, Sint Jan SV, Viceconti M. Structural behaviour and strain distribution of the long bones of the human lower limbs. J Biomech. 2010; 43(5):826–35. https://doi.org/10.1016/j.jbiomech.2009.11.022.
We would like to acknowledge the contribution of all donors to the life sciences. We thank Univ.-Prof. Dr. Johannes Streicher for the expert advice on bone alignment. We thank Lukas Warnung for manufacturing the setup and supporting us during the experiments. We appreciate the contribution of Nedaa Amraish in performing full-field DIC measurements. The authors acknowledge TU Wien Bibliothek for financial support through its Open Access Funding Program.
This study was co-funded by the Karl Landsteiner University of Health Sciences and Lower Austrian Research and Education Corporation (NFB, ID: SC16-009).
Institute of Lightweight Design and Structural Biomechanics, TU Wien, Getreidemarkt 9, Vienna, 1060, Austria
Morteza Amini, Andreas Reisinger & Dieter Pahr
Division Biomechanics, Karl Landsteiner University of Health Sciences, Dr.-Karl-Dorrek-Straße 30, Krems an der Donau, 3500, Austria
Center for Anatomy and Cell Biology, Medical University of Vienna, Währinger Straße 13, Vienna, 1090, Austria
Lena Hirtler
Morteza Amini
Andreas Reisinger
Dieter Pahr
M.A. wrote the main manuscript text and conducted the experiments. M.A., A.R. and D.P. designed the study. L.H. contributed to the study design and ethics proposal, and harvested the samples. All authors reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Dieter Pahr.
The specimens originated from voluntary body donations for scientific and teaching purposes to the Center for Anatomy and Cell Biology of Medical University of Vienna with written informed consent of the donors, and according to protocol accepted by the ethics committee of Karl Landsteiner University of Health Sciences. All procedures were performed in accordance with relevant guidelines.
We declare that the authors have no competing interests as defined by BMC, or other interests that might be perceived to influence the results and/or discussion reported in this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Amini, M., Reisinger, A., Hirtler, L. et al. Which experimental procedures influence the apparent proximal femoral stiffness? A parametric study. BMC Musculoskelet Disord 22, 815 (2021). https://doi.org/10.1186/s12891-021-04656-0
Parametric study
Apparent stiffness | CommonCrawl |
Yuriy Drozd
Yuriy Drozd (Ukrainian: Юрій Анатолійович Дрозд; born October 15, 1944) is a Ukrainian mathematician working primarily in algebra. He is a Corresponding Member of the National Academy of Sciences of Ukraine and head of the Department of Algebra and Topology at the Institute of Mathematics of the National Academy of Sciences of Ukraine.
Yuriy Drozd
Born: 15 October 1944
Kyiv, Ukrainian SSR
Nationality: Ukrainian
Alma mater: Taras Shevchenko National University of Kyiv,
Steklov Institute of Mathematics
Awards: State Prize of Ukraine in Science and Technology
Scientific career
Fields: mathematics, algebra, representation theory, algebraic geometry
Institutions: Institute of Mathematics of NAS of Ukraine
Doctoral advisor: Igor Shafarevich
Doctoral students: Volodymyr Mazorchuk
Biography
Yuriy Drozd graduated from Kyiv University in 1966 and completed his postgraduate studies at the Institute of Mathematics of the National Academy of Sciences of Ukraine in 1969. His PhD dissertation On Some Questions of the Theory of Integral Representations (1970) was supervised by Igor Shafarevich.
From 1969 to 2006 Drozd worked at the Faculty of Mechanics and Mathematics at Kyiv University (at first as lecturer, then as associate professor and full professor). From 1980 to 1998 he headed the Department of Algebra and Mathematical Logic. Since 2006 he has been the head of the Department of Algebra and Topology (until 2014 - the Department of Algebra) of the Institute of Mathematics of the National Academy of Sciences of Ukraine.
His doctoral students include Volodymyr Mazorchuk.
References
• Mathematics Genealogy Project.
• Institute of Mathematics of the National Academy of Sciences of Ukraine.
• Personal site.
• Oberwolfach Photo Collection.
External links
• Yuriy Drozd, Introduction to Algebraic Geometry (course lecture notes, University of Kaiserslautern).
• Yuriy Drozd, Vector Bundles over Projective Curves.
• Yuriy Drozd, General Properties of Surface Singularities.
• Drozd, Yuriy; Kirichenko, Vladimir (1994). Finite-Dimensional Algebras. Springer. ISBN 978-3-642-76244-4.
Authority control
International
• ISNI
• VIAF
National
• Germany
• Czech Republic
• Poland
Academics
• Google Scholar
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• Scopus
• zbMATH
Other
• Encyclopedia of Modern Ukraine
• IdRef
| Wikipedia |
Lipschitz domain
In mathematics, a Lipschitz domain (or domain with Lipschitz boundary) is a domain in Euclidean space whose boundary is "sufficiently regular" in the sense that it can be thought of as locally being the graph of a Lipschitz continuous function. The term is named after the German mathematician Rudolf Lipschitz.
Definition
Let $n\in \mathbb {N} $. Let $\Omega $ be a domain of $\mathbb {R} ^{n}$ and let $\partial \Omega $ denote the boundary of $\Omega $. Then $\Omega $ is called a Lipschitz domain if for every point $p\in \partial \Omega $ there exists a hyperplane $H$ of dimension $n-1$ through $p$, a Lipschitz-continuous function $g:H\rightarrow \mathbb {R} $ over that hyperplane, and reals $r>0$ and $h>0$ such that
• $\Omega \cap C=\left\{x+y{\vec {n}}\mid x\in B_{r}(p)\cap H,\ -h<y<g(x)\right\}$
• $(\partial \Omega )\cap C=\left\{x+y{\vec {n}}\mid x\in B_{r}(p)\cap H,\ g(x)=y\right\}$
where
${\vec {n}}$ is a unit vector that is normal to $H,$
$B_{r}(p):=\{x\in \mathbb {R} ^{n}\mid \|x-p\|<r\}$ is the open ball of radius $r$,
$C:=\left\{x+y{\vec {n}}\mid x\in B_{r}(p)\cap H,\ -h<y<h\right\}.$
In other words, at each point of its boundary, $\Omega $ is locally the set of points located above the graph of some Lipschitz function.
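For instance (a standard illustration, not taken from the cited references), the open unit square $\Omega =(0,1)^{2}\subset \mathbb {R} ^{2}$ is a Lipschitz domain: away from the corners its boundary is locally the graph of a constant function, while near a corner such as $p=(0,0)$, after rotating the coordinates by $45^{\circ }$, the boundary is locally the graph of $g(x)=|x|$, which is Lipschitz continuous with constant $1$. By contrast, a planar domain with a cusp, such as $\{(x,y)\mid 0<x<1,\ 0<y<x^{2}\}$, fails to be a Lipschitz domain: at the cusp point no choice of hyperplane and Lipschitz function represents the boundary locally, since the domain does not even contain a fixed interior cone with vertex at that point.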
Generalization
A more general notion is that of weakly Lipschitz domains, which are domains whose boundary is locally flattable by a bilipschitz mapping. Lipschitz domains in the sense above are sometimes called strongly Lipschitz by contrast with weakly Lipschitz domains.
A domain $\Omega $ is weakly Lipschitz if for every point $p\in \partial \Omega ,$ there exists a radius $r>0$ and a map $l_{p}:B_{r}(p)\rightarrow Q$ such that
• $l_{p}$ is a bijection;
• $l_{p}$ and $l_{p}^{-1}$ are both Lipschitz continuous functions;
• $l_{p}\left(\partial \Omega \cap B_{r}(p)\right)=Q_{0};$
• $l_{p}\left(\Omega \cap B_{r}(p)\right)=Q_{+};$
where $Q$ denotes the unit ball $B_{1}(0)$ in $\mathbb {R} ^{n}$ and
$Q_{0}:=\{(x_{1},\ldots ,x_{n})\in Q\mid x_{n}=0\};$
$Q_{+}:=\{(x_{1},\ldots ,x_{n})\in Q\mid x_{n}>0\}.$
A (strongly) Lipschitz domain is always a weakly Lipschitz domain, but the converse is not true. An example of a weakly Lipschitz domain that fails to be a strongly Lipschitz domain is given by the two-bricks domain.[1]
Applications of Lipschitz domains
Many of the Sobolev embedding theorems require that the domain of study be a Lipschitz domain. Consequently, many partial differential equations and variational problems are defined on Lipschitz domains.
References
1. Werner Licht, M. "Smoothed Projections over Weakly Lipschitz Domains", arXiv, 2016.
• Dacorogna, B. (2004). Introduction to the Calculus of Variations. Imperial College Press, London. ISBN 1-86094-508-2.
| Wikipedia |
Regular tuning
Among alternative guitar-tunings, regular tunings have equal musical intervals between the paired notes of their successive open strings.
Regular tunings
For regular guitar-tunings, the distance between consecutive open-strings is a constant musical-interval, measured by semitones on the chromatic circle. The chromatic circle lists the twelve notes of the octave.
Basic information
Aliases: Uniform tunings
All-interval tunings
Advanced information
Advantages: Provides new material for improvisation by advanced guitarists
Disadvantages: Makes it difficult to play music written for standard tuning.
Regular tunings (semitones)
Trivial (0)
Minor thirds (3)
Major thirds (4)
All fourths (5)
Augmented fourths (6)
New standard (7, 3)
All fifths (7)
Minor sixths (8)
Guitar tunings
Guitar tunings assign pitches to the open strings of guitars. Tunings can be described by the particular pitches that are denoted by notes in Western music. By convention, the notes are ordered from lowest to highest. The standard tuning defines the string pitches as E, A, D, G, B, and E. Between the open-strings of the standard tuning are three perfect-fourths (E–A, A–D, D–G), then the major third G–B, and the fourth perfect-fourth B–E.
In contrast, regular tunings have constant intervals between their successive open-strings:
• 3 semitones (minor third): Minor-thirds, or Diminished tuning
• 4 semitones (major third): Major-thirds or Augmented tuning,
• 5 semitones (perfect fourth): All-fourths tuning,
• 6 semitones (augmented fourth, tritone, or diminished fifth): Augmented-fourths tuning,
• 7 semitones (perfect fifth): All-fifths tuning
For the regular tunings, chords may be moved diagonally around the fretboard, as well as vertically for the repetitive regular tunings (minor thirds, major thirds, and augmented fourths). Regular tunings thus often appeal to new guitarists and also to jazz-guitarists, as they facilitate key transpositions without requiring a completely new set of fingerings for the new key. On the other hand, some conventional major/minor system chords are easier to play in standard tuning than in regular tuning.[1] Left-handed guitarists may use the chord charts from one class of regular tunings for its left-handed tuning; for example, the chord charts for all-fifths tuning may be used for guitars strung with left-handed all-fourths tuning.
The class of regular tunings has been named and described by Professor William Sethares. Sethares's 2001 chapter Regular tunings (in his revised 2010–2011 Alternate tuning guide) is the leading source for this article.[1] This article's descriptions of particular regular-tunings use other sources also.
Standard and alternative guitar-tunings: A review
In standard tuning, the C-major chord has three shapes because of the irregular major-third between the G- and B-strings.
This summary of standard tuning also introduces the terms for discussing alternative tunings.
Standard
Standard tuning has the following open-string notes:
E2–A2–D3–G3–B3–E4.
In standard tuning, the separation of the second (B) and third (G) strings is a major-third interval, which has a width of four semitones.
Standard tuning | open | 1st fret (index) | 2nd fret (middle) | 3rd fret (ring) | 4th fret (little)
1st string | E4 | F4 | F♯4/G♭4 | G4 | G♯4/A♭4
2nd string | B3 | C4 | C♯4/D♭4 | D4 | D♯4/E♭4
3rd string | G3 | G♯3/A♭3 | A3 | A♯3/B♭3 | B3
4th string | D3 | D♯3/E♭3 | E3 | F3 | F♯3/G♭3
5th string | A2 | A♯2/B♭2 | B2 | C3 | C♯3/D♭3
6th string | E2 | F2 | F♯2/G♭2 | G2 | G♯2/A♭2
Chromatic note progression
The irregularity has a price. Chords cannot be shifted around the fretboard in the standard tuning E–A–D–G–B–E, which requires four chord-shapes for the major chords. There are separate chord-forms for chords having their root note on the third, fourth, fifth, and sixth strings.[2]
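A short sketch in Python (the helper names are illustrative) makes the irregularity explicit by computing the semitone gap between successive open strings of standard tuning:

NOTE_INDEX = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
              'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def pitch_number(note, octave):
    """MIDI-style pitch number: 12 semitones per octave."""
    return 12 * (octave + 1) + NOTE_INDEX[note]

standard = [('E', 2), ('A', 2), ('D', 3), ('G', 3), ('B', 3), ('E', 4)]
pitches = [pitch_number(n, o) for n, o in standard]
gaps = [hi - lo for lo, hi in zip(pitches, pitches[1:])]
print(gaps)   # [5, 5, 5, 4, 5] -- the lone major third falls between G3 and B3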
Alternative
Alternative ("alternate") tuning refers to any open-string note-arrangement other than standard tuning. Such alternative tuning arrangements offer different chord voicing and sonorities. Alternative tunings necessarily change the chord shapes associated with standard tuning, which eases the playing of some, often "non-standard", chords at the cost of increasing the difficulty of some traditionally-voiced chords. As with other scordatura tuning, regular tunings may require re-stringing the guitar with different string gauges. For example, all-fifths tuning has been difficult to implement on conventional guitars, due to the extreme high pitch required from the top string. Even a common approximation to all-fifths tuning, new standard tuning, requires a special set of strings.
Properties
Chords can be shifted diagonally in regular tunings, such as major-thirds (M3) tuning.
With standard tuning, and with all tunings, chord patterns can be moved twelve frets down, where the notes repeat in a higher octave.
For the standard tuning, there is exactly one interval of a third between the second and third strings, and all the other intervals are fourths. Working around the irregular third of standard tuning, guitarists have to memorize chord-patterns for at least three regions: The first four strings tuned in perfect fourths; two or more fourths and the third; and one or more initial fourths, the third, and the last fourth.
In contrast, regular tunings have constant intervals between their successive open-strings. In fact, the class of each regular tuning is characterized by its musical interval as shown by the following list:
• 3 semitones (minor third): Minor-thirds tuning,
• 4 semitones (major third): Major-thirds tuning,
• 5 semitones (perfect fourth): All-fourths tuning,
• 6 semitones (augmented fourth, tritone, or diminished fifth): Augmented-fourths tuning,
• 7 semitones (perfect fifth): All-fifths tuning
The regular tunings whose number of semitones s divides 12 (the number of notes in the octave) repeat their open-string notes (raised one octave) after 12/s strings: For example,
• having three semitones in its interval, minor-thirds tuning repeats its open notes after four (12/3) strings;
• having four semitones in its interval, major-thirds tuning repeats its open notes after three (12/4) strings;
• having six semitones in its interval, augmented-fourths tuning repeats its notes after two (12/6) strings.
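A short Python sketch (illustrative names; sharps are used for the enharmonic flats) generates the open-string pitch classes of a regular tuning and shows after how many strings the notes repeat:

NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def regular_tuning(root, step, n_strings=6):
    """Open-string note names of a regular tuning with `step` semitones
    between successive strings, starting from pitch class `root`."""
    start = NOTES.index(root)
    return [NOTES[(start + i * step) % 12] for i in range(n_strings)]

print(regular_tuning('C', 3))   # ['C', 'D#', 'F#', 'A', 'C', 'D#']  minor thirds (D# = E-flat)
print(regular_tuning('E', 4))   # ['E', 'G#', 'C', 'E', 'G#', 'C']   major thirds
print(regular_tuning('E', 5))   # ['E', 'A', 'D', 'G', 'C', 'F']     all fourths

# When the step divides 12, the open notes repeat after 12/step strings:
for step in (3, 4, 6):
    print(step, 'semitones -> repeats after', 12 // step, 'strings')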
Regular tunings have symmetrical scales all along the fretboard. This makes it simpler to translate chords into new keys. For the regular tunings, chords may be moved diagonally around the fretboard.
The shifting of chords is especially simple for the regular tunings that repeat their open strings, in which case chords can be moved vertically: Chords can be moved three strings up (or down) in major-thirds tuning,[3] and chords can be moved two strings up (or down) in augmented-fourths tuning. Regular tunings thus appeal to new guitarists and also to jazz-guitarists, whose improvisation is simplified by regular intervals.
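The vertical shift in a repetitive tuning can be checked in a few lines (Python; the three-note shape below is hypothetical): moving every note of a shape up three strings in major-thirds tuning raises each pitch by 3 × 4 = 12 semitones, i.e., one octave.

def shape_pitches(open_pitches, shape):
    """Pitches (in semitones) of a fretted shape given as (string, fret) pairs."""
    return [open_pitches[s] + f for s, f in shape]

# Major-thirds tuning on six strings: 4 semitones between successive open strings.
m3_open = [40 + 4 * i for i in range(6)]      # 40 is an arbitrary reference pitch
shape   = [(0, 1), (1, 0), (2, 2)]            # a hypothetical three-note chord shape

low  = shape_pitches(m3_open, shape)
high = shape_pitches(m3_open, [(s + 3, f) for s, f in shape])
print([h - l for l, h in zip(low, high)])     # [12, 12, 12]: the same chord, one octave higher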
Particular conventional chords are more difficult to play
On the other hand, particular traditional chords may be more difficult to play in a regular tuning than in standard tuning. It can be difficult to play conventional chords especially in augmented-fourths tuning and all-fifths tuning,[1] in which the wide (tritone and perfect-fifth) intervals require hand stretching. Some chords that are conventional in folk music are difficult to play even in all-fourths and major-thirds tunings, which do not require more hand-stretching than standard tuning.[4] On the other hand, minor-thirds tuning features many barre chords with repeated notes,[5] properties that appeal to beginners.
Frets covered by the hand
The chromatic scale climbs from one string to the next after a number of frets that is specific to each regular tuning. The chromatic scale climbs after exactly four frets in major-thirds tuning, so reducing the extensions of the little and index fingers ("hand stretching").[6] For other regular tunings, the successive strings have intervals that are minor thirds, perfect fourths, augmented fourths, or perfect fifths; thus, the fretting hand covers three, five, six, or seven frets respectively to play a chromatic scale. (Of course, the lowest chromatic-scale uses the open strings and so requires one less fret to be covered.)
Examples
The following regular tunings are discussed by Sethares, who also mentions other regular tunings that are difficult to play or have had little musical interest, to date.
Minor thirds
C2–E♭2–G♭2–A2–C3–E♭3,[7][8] or
B2–D3–F3–A♭3–B3–D4[9]
In each minor-thirds (m3) tuning, every interval between successive strings is a minor third. Thus each repeats its open-notes after four strings. In the minor-thirds tuning beginning with C, the open strings contain the notes (C, E♭, G♭) of the diminished C triad.[7]
Minor-thirds tuning features many barre chords with repeated notes,[5] properties that appeal to acoustic guitarists and to beginners. Doubled notes have different sounds because of differing "string widths, tensions and tunings, and [they] reinforce each other, like the doubled strings of a twelve string guitar add chorusing and depth," according to William Sethares.[7]
Achieving the same range as a standard-tuned guitar using minor-thirds tuning would require a nine-string guitar (e.g. E♭2–G♭2–A2–C3–E♭3–G♭3–A3–C4–E♭4).
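The string counts quoted in this article can be checked with a line of arithmetic; the sketch below (Python, illustrative only) computes how many strings a regular tuning needs so that its open strings span at least the 24 semitones (two octaves, E2 to E4) spanned by the open strings of standard tuning.

```python
import math

STANDARD_OPEN_SPAN = 24   # semitones from E2 to E4, the open-string span of standard tuning

def strings_needed(interval, span=STANDARD_OPEN_SPAN):
    """Smallest number of strings whose open notes, spaced `interval` apart, span at least `span` semitones."""
    return math.ceil(span / interval) + 1

print("minor thirds:", strings_needed(3))   # 9 strings, as stated above
print("major thirds:", strings_needed(4))   # 7 strings (compare the following section)
print("all fourths:", strings_needed(5))    # 6 strings, like the standard guitar
```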
Major thirds
Chords vertically shift.
In major-thirds tuning, chords are inverted by raising notes by three strings on the same frets. The inversions of a C major chord are shown.[10]
Major-thirds tuning repeats its notes after three strings.
Major-thirds tuning is a regular tuning in which the musical intervals between successive strings are each major thirds.[11][12] Like minor-thirds tuning (and unlike all-fourths and all-fifths tuning), major-thirds tuning is a repetitive tuning; it repeats its octave after three strings, which again simplifies the learning of chords and improvisation;[13] similarly, minor-thirds tuning repeats itself after four strings while augmented-fourths tuning repeats itself after two strings.
Neighboring the standard tuning is the major-thirds tuning that has the open strings
E2–G♯2–B♯2–E3–G♯3–B♯3 (or F♭2–A♭2–C3–F♭3–A♭3–C4).[4]
With six strings, major-thirds tuning has a smaller range than standard tuning; with seven strings, the major-thirds tuning covers the range of standard tuning on six strings.[11][12] With the repetition of three open-string notes, each major-thirds tuning provides the guitarist with many options for fingering chords. Indeed, the fingering of two successive frets suffices to play pure major and minor chords, while the fingering of three successive frets suffices to play seconds, fourths, sevenths, and ninths.[11][14]
Even greater range is possible with guitars with eight strings.[4][15]
Major-thirds tuning was introduced in 1964 by the American jazz-guitarist Ralph Patt to facilitate his style of improvisation.[4][16]
All fourths
E2–A2–D3–G3–C4–F4
This tuning is like that of the lowest four strings in standard tuning.[17][18] Consequently, of all the regular tunings, it is the closest approximation to standard tuning, and thus it best allows the transfer of a knowledge of chords from standard tuning to a regular tuning. Jazz musician Stanley Jordan plays guitar in all-fourths tuning; he has stated that all-fourths tuning "simplifies the fingerboard, making it logical".[19]
For all-fourths tuning, all twelve major chords (in the first or open positions) are generated by two chords, the open F major chord and the D major chord. The regularity of chord-patterns reduces the number of finger positions that need to be memorized.[20]
The left-handed involute of an all-fourths tuning is an all-fifths tuning. All-fourths tuning is based on the perfect fourth (five semitones), and all-fifths tuning is based on the perfect fifth (seven semitones). Consequently, chord charts for all-fifths tunings may be used for left-handed all-fourths tuning.[21]
Augmented fourths
C2–F♯2–C3–F♯3–C4–F♯4 and B1–F2–B2–F3–B3–F4 etc.
Between the all-fifths and all-fourths tunings are augmented-fourth tunings, which are also called "diminished-fifths" or "tritone" tunings. It is a repetitive tuning that repeats its notes after two strings. With augmented-fourths tunings, the fretboard has greatest symmetry.[22] In fact, every augmented-fourths tuning lists the notes of all the other augmented-fourths tunings on the frets of its fretboard. Professor Sethares wrote that
"The augmented-fourth interval is the only interval whose inverse is the same as itself. The augmented-fourths tuning is the only tuning (other than the 'trivial' tuning C–C–C–C–C–C) for which all chords-forms remain unchanged when the strings are reversed. Thus the augmented-fourths tuning is its own 'lefty' tuning."[23]
Of all the augmented-fourths tunings, the C2–F♯2–C3–F♯3–C4–F♯4 tuning is the closest approximation to the standard tuning, and its fretboard is displayed next:
Tritone:[23] Each fret displays the open strings of an augmented-fourths tuning
• 1st string: F♯4 (open), G4, G♯4, A4, A♯4, B4 (frets 1–5)
• 2nd string: C4 (open), C♯4, D4, D♯4, E4, F4
• 3rd string: F♯3 (open), G3, G♯3, A3, A♯3, B3
• 4th string: C3 (open), C♯3, D3, D♯3, E3, F3
• 5th string: F♯2 (open), G2, G♯2, A2, A♯2, B2
• 6th string: C2 (open), C♯2, D2, D♯2, E2, F2
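The grid above can be generated mechanically; a minimal Python sketch (illustrative only; the numbering convention C0 = 0 and the helper names are ours) prints the note at each string and fret of the C2 augmented-fourths tuning.

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(pitch):
    """Scientific pitch notation for an absolute pitch measured in semitones above C0."""
    return NOTE_NAMES[pitch % 12] + str(pitch // 12)

def fretboard(lowest_open, interval, strings=6, frets=5):
    """Rows run from the 1st (highest) string down to the lowest; columns are the open string and frets 1..frets."""
    rows = []
    for s in reversed(range(strings)):
        open_pitch = lowest_open + s * interval
        rows.append([note_name(open_pitch + f) for f in range(frets + 1)])
    return rows

C2 = 2 * 12   # C2 corresponds to pitch 24 in this numbering
for row in fretboard(C2, 6):   # the augmented-fourths tuning C2-F#2-C3-F#3-C4-F#4
    print("  ".join(row))
```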
An augmented-fourths tuning "makes it very easy for playing half-whole scales, diminished 7 licks, and whole tone scales," stated guitarist Ron Jarzombek.[24]
All fifths: "Mandoguitar"
C2–G2–D3–A3–E4–B4
All-fifths tuning is a tuning in intervals of perfect fifths like that of a mandolin, cello or violin; other names include "perfect fifths" and "fifths".[25] Consequently, classical compositions written for violin or cello may be adapted to all-fifths tuning more easily than to standard tuning.
When he was asked whether tuning in fifths facilitates "new intervals or harmonies that aren't readily available in standard tuning", Robert Fripp responded, "It's a more rational system, but it's also better sounding—better for chords, better for single notes." To build chords, Fripp uses "perfect intervals in fourths, fifths and octaves", so avoiding minor thirds and especially major thirds,[26] which are sharp in equal temperament tuning (in comparison to thirds in just intonation). It is a challenge to adapt conventional guitar-chords to new standard tuning, which is based on all-fifths tuning.[27] Some closely voiced jazz chords become impractical in NST and all-fifths tuning.[28]
All-fifths tuning has a wide range, which can make it difficult to implement on a conventional guitar. The high B4 requires a taut, thin string, and consequently is prone to breaking. This can be ameliorated by using a guitar with a shorter scale length, by shifting to a different key, or by shifting the tuning down a fifth. All-fifths tuning was used by the jazz-guitarist Carl Kress.
The left-handed involute of an all-fifths tuning is an all-fourths tuning. All-fifths tuning is based on the perfect fifth (seven semitones), and all-fourths tuning is based on the perfect fourth (five semitones). Consequently, chord charts for all-fifths tunings are used for left-handed all-fourths tuning.[21]
All-fifths tuning has also been approximated by the tunings of Kei Nakano's Through the Looking Glass guitar,[29] which he has played since 2015. This tuning acts as a mirror image of the tunings of all kinds of string instruments, including the guitar, and it can be adapted to any other guitar tuning: an instrument set up this way for a conventional right-handed guitar can generally be used as a left-handed guitar, and vice versa.
New standard tuning
All-fifths tuning has been approximated with tunings that avoid the high B4 or the low C2. The B4 has been replaced with a G4 in the new standard tuning (NST) of King Crimson's Robert Fripp. The original version of NST was all-fifths tuning, but in the 1980s Fripp was never able to attain the high B4 of all-fifths tuning. He could attain A4, but the strings' lifetimes were too short. Experimenting with a G string, Fripp succeeded: "Originally, seen in 5ths. all the way, the top string would not go to B. so, as on a tenor banjo, I adopted an A on the first string. These kept breaking, so G was adopted."[30] In 2012, Fripp experimented with an A string (gauge 0.007);[31][32] if successful, the experiment could lead to "the NST 1.2", CGDAE-A, according to Fripp.[31] Fripp's NST has been taught in Guitar Craft courses.[33][34] Guitar Craft and its successor Guitar Circle have taught Fripp's tuning to three thousand students.[35]
Extreme intervals
For regular tunings, intervals wider than a perfect fifth or narrower than a minor third have, thus far, had limited interest.
Wide intervals
Two regular-tunings based on sixths, having intervals of minor sixths (eight semitones) and of major sixths (nine semitones), have received scholarly discussion.[36] The chord charts for minor-sixths tuning are useful for left-handed guitarists playing in major-thirds tuning; the chord charts for major-sixths tuning, for left-handed guitarists playing in minor-thirds tuning.[21]
The regular tunings with minor-seventh (ten semitones) or major-seventh (eleven semitones) intervals would make conventional major/minor chord-playing very difficult, as would octave intervals.[21]
Narrow intervals
There are regular-tunings that have as their intervals either zero semi-tones (unison), one semi-tone (minor second), or two semi-tones (major second). These tunings tend to increase the difficulty in playing the major/minor system chords of conventionally tuned guitars.[21]
The "trivial" class of unison tunings (such as C3–C3–C3–C3–C3–C3) are each their own left-handed tuning.[21] Unison tunings are briefly discussed in the article on ostrich tunings. Having exactly one note, unison tunings are also ostrich tunings, which have exactly one pitch class (but may have two or more octaves, for example, E2, E3, and E4'); non-unison ostrich tunings are not regular.
Left-handed involution
See also: Interval inversion, chord inversion, and involution (mathematics)
The class of regular tunings is preserved under the involution from right-handed to left-handed tunings, as observed by William Sethares.[21] The present discussion of left-handed tunings is of interest to musical theorists, mathematicians, and left-handed persons, but may be skipped by other readers.
For left-handed guitars, the ordering of the strings reverses the ordering of the strings for right-handed guitars. For example, the left-handed involute of the standard tuning E–A–D–G–B–E is the "lefty" tuning E–B–G–D–A–E. Similarly, the "left-handed" involute of the "lefty" tuning is the standard ("righty") tuning.[21]
The reordering of open-strings in left-handed tunings has an important consequence. The chord fingerings for the right-handed tunings must be changed for left-handed tunings. However, the left-handed involute of a regular tuning is easily recognized: it is another regular tuning. Thus the chords for the involuted regular-tuning may be used for the left-handed involute of a regular tuning.
For example, the left-handed version of all-fourths tuning is all-fifths tuning, and the left-handed version of all-fifths tuning is all-fourths tuning. In general, the left-handed involute of the regular tuning based on the interval with $n$ semitones is the regular tuning based on its involuted interval with $12-n$ semitones: All-fourths tuning is based on the perfect fourth (five semitones), and all-fifths tuning is based on the perfect fifth (seven semitones), as mentioned previously.[21] The following table summarizes the lefty-righty pairings discussed by Sethares.[21]
Left-handed tunings[21] (right-handed tuning → its left-handed involute):
• Minor thirds → Major sixths
• Major thirds → Minor sixths
• All fourths → All fifths
• Augmented fourths → Diminished fifths
• All fifths → All fourths
• Minor sixths → Major thirds
• Major sixths → Minor thirds
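These pairings are simply the involution s ↦ 12 − s applied to each tuning's interval; a minimal check (Python, illustrative, with the interval names hard-coded by us):

```python
INTERVAL_NAMES = {3: "minor thirds", 4: "major thirds", 5: "all fourths",
                  6: "augmented fourths (tritone)", 7: "all fifths",
                  8: "minor sixths", 9: "major sixths"}

def left_handed_involute(semitones):
    """Interval of the left-handed involute of a regular tuning: the inverted interval."""
    return 12 - semitones

for s in sorted(INTERVAL_NAMES):
    # 6 semitones is self-inverse: the inverted tritone is a diminished fifth,
    # which is enharmonically the same 6-semitone interval.
    print(INTERVAL_NAMES[s], "->", INTERVAL_NAMES[left_handed_involute(s)])
```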
The left-handed involute of a left-handed involute is the original right-handed tuning. The left-handed version of the trivial tuning C–C–C–C–C–C is also C–C–C–C–C–C. Among non-trivial tunings, only the class of augmented-fourths tunings is fixed under the lefty involution.[21][22]
Summary
The principal regular tunings have their properties summarized below. Each tuning is listed with its interval (number of semitones), its repetition, its advantages and disadvantages, its left-handed involution,[21] and notable guitarists. Every regular tuning facilitates learning and improvisation; none of them uses standard tuning's open chords.
• Major thirds: major third (4 semitones); repeats after 3 strings. Advantages: chromatic scale on four successive frets, hence reduced hand-stretching (major and minor chords are played on 2 successive frets; seconds, fourths, sevenths, and ninths on 3).[14] Disadvantages: smaller range (without 7 strings); only three open notes. Left-handed involution: minor-sixths tuning. Guitarist: Ralph Patt.
• All fourths: perfect fourth (5 semitones); non-repetitive.[37] Advantages: uses chords from the lowest 4 strings of standard tuning; same tuning as the bass guitar. Disadvantage: difficult to play folk chords. Left-handed involution: all-fifths tuning. Guitarist: Stanley Jordan.
• Augmented fourths: tritone (6 semitones); repeats after 2 strings. Advantage: symmetry ("left-handed"). Disadvantage: only 2 open notes. Left-handed involution: augmented-fourths tuning. Guitarist: Shawn Lane.
• All fifths: perfect fifth (7 semitones); non-repetitive.[37] Advantages: wide scope facilitates ensemble playing and single-note picking (rather than conventional chords); natural for all-fifths music (violin, cello, mandolin). Disadvantages: very difficult to play conventional chord-voicings; requires extreme (light and heavy) strings. Left-handed involution: all-fourths tuning. Guitarists: Carl Kress, Kei Nakano.
Notes
1. Sethares (2001)
2. Denyer (1992, p. 119)
3. Griewank (2010, p. 3)
4. Patt, Ralph (14 April 2008). "The major 3rd tuning". Ralph Patt's jazz web page. ralphpatt.com. Cited by Sethares (2011) and Griewank (2010, p. 1). Retrieved 10 June 2012.
5. Sethares (2001, pp. 54–55)
6. Griewank (2010, p. 9)
7. Sethares (2001, pp. 54)
8. "ACD#F#AC: Minor thirds (m3)". Guitar tunings database. 3 February 2013. Retrieved 4 February 2013.
9. "G#BDFG#B: Minor thirds (m3)". Guitar tunings database. 3 February 2013. Retrieved 4 February 2013.
10. Kirkeby (2012, "Fretmaps, major chords: Major Triads")
11. Sethares (2001, pp. 56)
12. Griewank (2010)
13. Kirkeby, Ole (1 March 2012). "Major thirds tuning". m3guitar.com. Cited by Sethares (2011) and Griewank (2010, p. 1). Archived from the original on 11 April 2015. Retrieved 10 June 2012.
14. Griewank (2010, p. 2)
15. In the table, the last row is labeled the "7th string" so that the low C tuning can be displayed without needing another table; the term "7th string" does not appear in the sources.
Similarly, the terms "-1st string" and "0th string" do not appear in the sources, which do discuss guitars having seven-eight strings.
16. Griewank (2010, p. 1)
17. Sethares (2001, pp. 58–59)
18. Bianco, Bob (1987). Guitar in Fourths. New York City: Calliope Music. ISBN 0-9605912-2-2. OCLC 16526869.
19. Ferguson (1986, p. 76): Ferguson, Jim (1986). "Stanley Jordan". In Casabona, Helen; Belew, Adrian (eds.). New directions in modern guitar. Guitar Player basic library. Hal Leonard Publishing Corporation. pp. 68–76. ISBN 0881884235.
20. Sethares (2001, p. 52)
21. Sethares (2001, p. 53)
22. Sethares (2001, "The augmented fourths tuning" 60–61)
23. Sethares (2001, "The augmented fourths tuning", p. 60)
24. Turner, Steve (30 December 2005). "Interview with Ron Jarzombek". RonJarzombek.com. Retrieved 23 May 2012.
25. Sethares (2001, "The mandoguitar tuning" 62–63)
26. Mulhern (1986): Mulhern, Tom (January 1986). "On the discipline of craft and art: An interview with Robert Fripp". Guitar Player. 20: 88–103. Retrieved 8 January 2013.
27. Musicologist Eric Tamm wrote that despite "considerable effort and search I just could not find a good set of chords whose sound I liked" for rhythm guitar. (Tamm 2003, Chapter 10: Postscript)
28. Sethares (2001, "The mandoguitar tuning", pp. 62–63)
29. "日本特許第6709929号 【発明の名称】弦楽器 【特許権者】中野 圭". patents.google.com. Retrieved 30 June 2023.
30. Fripp, Robert (5 February 2010). "Robert Fripp's diary: Friday, 5th February 2010". Discipline Global Mobile, DGM Live!. Archived from the original on 11 November 2013.
31. Fripp, Robert (22 April 2012). "Robert Fripp's diary: Sunday, 22nd April 2012". Discipline Global Mobile, DGM Live!. Archived from the original on 11 November 2013.
32. Octave4Plus of Gary Goodman
33. Tamm, Eric (2003) [1990], Robert Fripp: From crimson king to crafty master (Progressive Ears ed.), Faber and Faber (1990), ISBN 0-571-16289-4, archived from the original on 26 October 2011, retrieved 25 March 2012 Zipped Microsoft Word Document
34. Zwerdling, Daniel (5 September 1998). "California Guitar Trio". All Things Considered (NPR Weekend ed.). Washington DC: National Public Radio. Retrieved 25 March 2012.
35. Fripp (2011, p. 3): Fripp, Robert (2011). Pozzo, Horacio (ed.). Seven Guitar Craft themes: Definitive scores for guitar ensemble. "Original transcriptions by Curt Golden", "Layout scores and tablatures: Ariel Rzezak and Theo Morresi" (First limited ed.). Partitas Music. ISMN 979-0-9016791-7-7. DGM Sku partitas001.
36. Sethares (2001, pp. 64–67)
37. No repetition occurs in six strings; repetition occurs after 12 strings.
References
• Denyer, Ralph (1992). "Playing the guitar ('How the guitar is tuned', pp. 68–69, and 'Alternative tunings', pp. 158–159)". The guitar handbook. Special contributors Isaac Guillory and Alastair M. Crawford (Fully revised and updated ed.). London and Sydney: Pan Books. pp. 65–160. ISBN 0-330-32750-X.
• Griewank, Andreas (1 January 2010), Tuning guitars and reading music in major thirds, Matheon preprints, vol. 695, Berlin, Germany: DFG research center "MATHEON, Mathematics for key technologies" Berlin, MSC-Classification 97M80 Arts. Music. Language. Architecture. Postscript file and Pdf file, archived from the original on 8 November 2012
• Sethares, Bill (2001). "Regular tunings". Alternate tuning guide (PDF). Madison, Wisconsin: University of Wisconsin; Department of Electrical Engineering. pp. 52–67. Retrieved 19 May 2012.
• Sethares, Bill (10 January 2009) [2001]. Alternate tuning guide (PDF). Madison, Wisconsin: University of Wisconsin; Department of Electrical Engineering. Retrieved 19 May 2012.
• Sethares, William A. (18 May 2012). "Alternate tuning guide". Madison, Wisconsin: University of Wisconsin; Department of Electrical Engineering. Retrieved 8 December 2012.
Further reading
• Allen, Warren (22 September 2011) [30 December 1997]. "WA's encyclopedia of guitar tunings". Archived from the original on 13 July 2012. Retrieved 27 June 2012. (Recommended by Marcus, Gary (2012). Guitar zero: The science of learning to be musical. Oneworld. p. 234. ISBN 9781851689323.)
• Sethares, William A. (12 May 2012). "Alternate tuning guide: Interactive". Retrieved 27 June 2012. Uses Wolfram Cdf player.
• Weissman, Dick (2006). Guitar Tunings: A Comprehensive Guide. Routledge. ISBN 9780415974417. LCCN 0415974410.
External links
The Wikibook Guitar has a page on the topic of: Alternative tunings
Major thirds
• Professors Andreas Griewank and William Sethares each recommend discussions of major-thirds tuning by two jazz-guitarists (Sethares 2011, "Regular tunings"; Griewank 2010, p. 1):
• Ole Kirkeby for 6- and 7-string guitars: Charts of intervals, major chords, and minor chords, and recommended string gauges.
• Ralph Patt for 6-, 7-, and 8-string guitars: Charts of scales, chords, and chord-progressions.
All fourths
• Yahoo group for all-fourths tuning
New standard tuning
• Courses in New Standard Tuning are offered by Guitar Circle, the successor of Guitar Craft:
• Guitar Circle of Europe
• Guitar Circle of Latin America
• Guitar Circle of North America
Guitar tunings
General
• Standard
• DADGAD
• Nashville
Open (Slide and slack-key guitar)
Each open tuning is listed in its repetitive, overtones, and other (often most popular) forms:
• Open A: repetitive A-C♯-E-A-C♯-E; overtones A-A-E-A-C♯-E; other E-A-C♯-E-A-E
• Open B: repetitive B-D♯-F♯-B-D♯-F♯; overtones B-B-F♯-B-D♯-F♯; other B-F♯-B-F♯-B-D♯
• Open C: repetitive C-E-G-C-E-G; overtones C-C-G-C-E-G; other C-G-C-G-C-E
• Open D: repetitive D-F♯-A-D-F♯-A; overtones D-D-A-D-F♯-A; other D-A-D-F♯-A-D
• Open E: repetitive E-G♯-B-E-G♯-B; overtones E-E-B-E-G♯-B; other E-B-E-G♯-B-E
• Open F: repetitive F-A-C-F-A-C; overtones F-F-C-F-A-C; other C-F-C-F-A-F
• Open G: repetitive G-B-D-G-B-D; overtones G-G-D-G-B-D; other D-G-D-G-B-D
Regular (semitones)
• Unison (0)
• Minor thirds (3)
• Major thirds (4)
• All fourths (5)
• Augmented fourths (6)
• New standard (7, 3)
• All fifths (7)
Repetitive (open pitches)
• Trivial (1)
• Augmented fourths (2)
• Major thirds (3)
• English open-C (3)
• Russian open-G (3)
• Minor thirds (4)
Miscellaneous
• Terz
• Bass guitar
• Steel guitar (C6, E9)
• Other instruments
• Musical tuning
• William Sethares
• List
• Category
\begin{document}
\begin{abstract} We show that there is an essentially unique $S$-algebra structure on the Morava $K$-theory spectrum $K(n)$, while $K(n)$ has uncountably many $MU$ or $\hE{n}$-algebra structures. Here $\hE{n}$ is the $K(n)$-localized Johnson-Wilson spectrum. To prove this we set up a spectral sequence computing the homotopy groups of the moduli space of $A_\infty$ structures on a spectrum, and use the theory of $S$-algebra $k$-invariants for connective $S$-algebras found in \cite{DuSh} to show that all the uniqueness obstructions are hit by differentials. \end{abstract}
\address{Department of Mathematics \\ University of Chicago \\ Chicago IL 60637}
\subjclass{55N22, 55S35, 18D50} \keywords{S-algebra, Morava K-theory, moduli space}
\title{Uniqueness of Morava $K$-theory}
\section{Introduction}
We study the moduli space of $S$-algebra structures on the Morava $K$-theory spectrum $K(n)$. Recall that given a prime $p$ and an integer $n \geq 1$, $K(n)$ is the spectrum carrying the Honda formal group of height $n$ over $\bF_p$, and that $K(n)_* \cong \bF_p[v_n,v_n^{-1}]$ with $|v_n|=2p^n-2$. Robinson \cite{Ro88} found that there are uncountably many ways to build an $A_\infty$ structure on $K(n)$, but he did not ask if these $A_\infty$ structures might all be equivalent. The point is that there are two distinct definitions of the moduli space of $S$-algebra structures, and in this paper we use the version where we allow automorphisms of the underlying spectrum. We prove the following:
\begin{mainthm} \label{thm:main1} There is an essentially unique $S$-algebra structure on $K(n)$, in the sense that the moduli space of $S$-algebra structures on $K(n)$ is connected. \end{mainthm}
This should be compared to the situation where we study the moduli space of $R$-algebra structures on $K(n)$ for some other commutative $S$-algebra $R$:
\begin{mainthm} \label{thm:main2} Let $R=MU$ or $R=\hE{n}$. Then there are uncountably many $R$-algebra structures on $K(n)$, in the sense that the moduli space of $R$-algebra structures on $K(n)$ has uncountably many path components. \end{mainthm}
\noindent If $BP$ is a commutative $S$-algebra, Theorem \ref{thm:main2} remains true with $R=BP$.
We will use two approaches to study the moduli space of $S$-algebra or $R$-algebra structures on a spectrum $A$. For our first approach, we use the equivalence between $S$-algebras and $A_\infty$ ring spectra, and study how to build an $A_\infty$ structure on $A$ by induction on the $A_m$ structure. This is the approach taken by Robinson \cite{Ro88} and later by the author \cite{AnTHH}. We need to modify this approach slightly to get the right notion of equivalence of $A_\infty$ structures; this amounts to allowing maps $(A,\phi) \to (A,\psi)$ of $A_\infty$ ring spectra where the underlying map $A \to A$ of spectra is not the identity but merely a weak equivalence.
We will define the appropriate moduli space of $S$-algebra structures on $A$, which we denote by $B\mathscr{A}^S(A)$, and set up a spectral sequence $\{E_r^{s,t}\}$ which contains the obstructions to $B\mathscr{A}^S(A)$ being nonempty and, given a basepoint, computes the homotopy groups of this space. The spectral sequence is similar to the one found in \cite{Re97} based on derived functors of derivations.
Using this approach, the uniqueness obstructions for $K(n)$ lie in $E_\infty^{s,s}$ for $s \geq 1$. On the $E_2$ term, everything in positive filtration is concentrated in even total degree, so every class in $E_2^{s,s}$ for $s \geq 1$ is a permanent cycle. But $E_1^{0,1}$ is large, in fact $E_1^{0,1}$ is closely related to the Morava stabilizer group, and there are potential differentials $d_r : E_r^{0,1} \to E_r^{r,r}$ for all $r \geq 1$ killing the uniqueness obstructions.
This should be compared to the situation for the Morava $E$-theory spectrum $E_n$. See \cite{Re97} for a spectral sequence which computes the space of $A_\infty$ structures on $E_n$ and \cite{GoHo} for a spectral sequence which computes the space of $E_\infty$ structures on $E_n$. In both cases the $E_2$ term is trivial in positive filtration, so there is no need to compute any differentials.
The other approach, which works only if $A$ is connective, is to study how to build $A$ as a Postnikov tower in the category of $S$-algebras. For this we use a result of Dugger and Shipley \cite{DuSh} which tells us that the set of ways to build $P_m A$ from $P_{m-1} A$ as an $S$-algebra can be calculated using $THH_S^{m+2}(P_{m-1} A; H\pi_m A)$.
These topological Hochschild cohomology groups can be calculated when $A=k(n)$ is connective Morava $K$-theory, and this lets us identify the uniqueness obstructions for building $k(n)$ as an $S$-algebra. Once again the obstructions are nontrivial, but something interesting happens. Each of the obstructions we found using the first approach also lives in the $E_2$ term of the canonical spectral sequence converging to $THH_S^*(P_{m-1} k(n); H\bF_p)$ for some $m$, but in every case the obstruction is killed by a differential. Hence the corresponding $S$-algebra structures on $P_m k(n)$ are equivalent, and this equivalence can be lifted first to $k(n)$ and then to $K(n)$.
We emphasize that both approaches are necessary to prove Theorem \ref{thm:main1}. Using only the first approach is insufficient because we do not know how to calculate the differentials in the spectral sequence converging to $\pi_* B\mathscr{A}^S(K(n))$ directly. Using only the second approach is insufficient because the connective Morava $K$-theory spectrum $k(n)$ does \emph{not} have a unique $S$-algebra structure. While the obstructions we found in the first approach are killed in the spectral sequence converging to $THH_S^*(P_{m-1} k(n);H\bF_p)$ for suitable $m$, there are other uniqueness obstructions here and we do not have a direct way to show that those obstructions become trivial when inverting $v_n$.
\subsection{Organization} In \S \ref{s:moduli_space} we define the moduli space of $A_\infty$ structures on a spectrum $A$ and construct a spectral sequence converging to the homotopy groups of this moduli space. Because we need to allow maps of $A_\infty$ ring spectra which commute with the operad structure only up to homotopy and higher homotopies, we use a certain multicategory with $r$ colors to define $(r-1)$-fold composites, and as a result the moduli space is (the geometric realization of) an $\infty$-category, regarded as a simplicial set.
In \S \ref{s:BKSSKn} we compute the $E_2$ term of this spectral sequence for $K(n)$, with $\hE{n}$, $MU$ and $S$ as the ground ring in \S \ref{ss:BKSShEn}, \ref{ss:BKSSMU} and \ref{ss:BKSSS} respectively. In the first two cases the spectral sequence collapses at the $E_2$ term, in the last case there are potential differentials. Counting the classes that are left in $E_2^{s,s}$ with $\hE{n}$ or $MU$ as the ground ring then proves Theorem \ref{thm:main2}.
In \S \ref{s:k_invariants} we recall the theory of $k$-invariants for connective $S$-algebras, which live in topological Hochschild cohomology, due to Dugger and Shipley \cite{DuSh} and discuss the relationship with additive $k$-invariants.
In \S \ref{s:kforkn} we compute the relevant topological Hochschild cohomology groups for Postnikov sections of connective Morava $K$-theory, with $\BP{n}_p$, $MU$ and $S$ as the ground ring in \S \ref{ss:k_invBPn}, \ref{ss:k_invMU} and \ref{ss:k_invS} respectively. The calculation with $\BP{n}_p$ as the ground ring requires optimistic assumptions about the commutativity of the multiplication on $\BP{n}_p$; we include it because it is parallel to the situation of $K(n)$ as an $\hE{n}$-algebra and it gives a clearer conceptual picture of what is going on.
In \S \ref{s:proof} we put the pieces together to prove Theorem \ref{thm:main1}.
Finally, in \S \ref{s:2periodic} we discuss the moduli space of $S$-algebra structures on the $2$-periodic version $K_n$ of Morava $K$-theory. In this case we do not have a unique $S$-algebra structure on $K_n$, but we conjecture that there are only finitely many.
\section{The moduli space of $A_\infty$ structures} \label{s:moduli_space} Recall that in a good category of spectra, such as \cite{EKMM}, any $A_\infty$ ring spectrum can be replaced with a weakly equivalent $S$-algebra. Moreover, the functor from the multicategory describing $n$-fold composition of $A_\infty$ ring spectra to the multicategory describing $n$-fold composition of $S$-algebras is a weak equivalence on all $Hom$ sets, and this implies that the moduli space of $A_\infty$ structures on $A$ we define below, which only depends on the homotopy type of $A$, is weakly equivalent to the moduli space of $S$-algebra structures on $A$.
Other approaches to studying the moduli space of $A_\infty$ structures on a spectrum $A$, such as the one found in \cite{Re97}, assume that $A$ comes with a fixed homotopy commutative multiplication. At $p=2$ the Morava $K$-theory spectrum $K(n)$ does not have a homotopy commutative multiplication \cite{Na02}, and in any case we prefer to fix as little data as possible, so instead of following \cite{Re97} we will set up a similar spectral sequence based on the obstruction theory in \cite{Ro88} and \cite{AnTHH}.
We take an $A_\infty$ ring spectrum to mean an algebra over the Stasheff associahedra operad $\cK=\{K_n\}_{n \geq 0}$. For $0 \leq n \leq \infty$ an $A_n$ structure on $X$ is a compatible family of maps \[ (K_m)_+ \wedge X^{(m)} \to X \] for $m \leq n$, where $X^{(m)}$ denotes the $m$-fold smash product of $X$ with itself. If we work in the category of $R$-modules for some commutative $S$-algebra $R$, all smash products are over $R$.
Using only maps $X \to Y$ of $A_n$ ring spectra which commute strictly with the operad action is too restrictive, so following Boardman and Vogt \cite{BoVo73} we define a map of $A_n$ ring spectra to be a family of maps $(L_m)_+ \wedge X^{(m)} \to Y$ for $m \leq n$, where $L_m$ is a certain polyhedron of dimension $m-1$. Here $L_m$ can be defined in terms of the $W$-construction on the multicategory (colored operad, colored PRO) with two objects $0$ and $1$ and $Hom(\epsilon_1,\ldots,\epsilon_n;\epsilon)$ a point if $\epsilon_1+\ldots+\epsilon_n \leq \epsilon$ and empty otherwise, or more concretely as a certain space of metric trees with two colors. We think of $(L_m)_+ \wedge X^{(m)} \to Y$ as a homotopy between the maps $(K_m)_+ \wedge X^{(m)} \to X \to Y$ and $(K_m)_+ \wedge X^{(m)} \to (K_m)_+ \wedge Y^{(m)} \to Y$.
As observed in \cite[Ch.\ 4]{BoVo73}, while it is possible to ``compose'' the maps we just defined, composition is not associative. Instead, we get an $\infty$-category (quasi-category, restricted Kan complex) of $A_n$ ring spectra encoding the various ways of composing multiple maps, where an $r$-simplex is a ``composite of $r-1$ maps'' defined in terms of a multicategory with $r$ colors. This is not actually a problem for us, because we can take the geometric realization of an $\infty$-category just as easily as we can take the geometric realization of (the nerve of) a category.
If $R$ is a commutative $S$-algebra and $A$ is an $R$-module, let $B\mathscr{A}^R_n(A)$ be the moduli space of $A_n$ structures on $A$ in the category of $R$-modules. To be precise, we let $B\mathscr{A}^R_n(A)$ be the geometric realization of the $\infty$-category $\sA_n^R(A)$ defined as follows. An object ($0$-simplex) in $\sA_n^R(A)$ is a pair $(X, \phi)$ where $X$ is weakly equivalent to $A$ and $\phi=\{\phi_m\}_{0 \leq m \leq n}$ is an $A_n$ structure on $X$ in the category of $R$-modules. For convenience we will assume $X$ is cofibrant as an $R$-module. A morphism ($1$-simplex) $(X,\phi) \to (Y,\psi)$ is a map $X \to Y$ of $A_n$ ring spectra, where the underlying map $X \to Y$ of spectra is a weak equivalence. An $r$-simplex is defined similarly, as in \cite[Definition 4.7]{BoVo73}. A choice of weak equivalence $X \to A$ is not part of the data. Some care is needed to make sure that we end up with a small ($\infty$-)category, which we need to apply geometric realization; we refer the reader to \cite{DwKa84} for one possible solution.
A general argument due to Dwyer and Kan \cite{DwKa84} shows that the moduli space $B\mathscr{A}_n^R(A)$ decomposes as \[ B\mathscr{A}_n^R(A) \simeq \coprod_{[X]} BAut_{\sA_n^R(A)}(X), \] where the coproduct runs over one representative from each path component of $B\mathscr{A}_n^R(A)$ and $Aut_{\sA_n^R(A)}(X)$ is the topological monoid of self-equivalences of a cofibrant-fibrant model for $X$.
In particular, an $A_1$ structure consists only of the unit map $R \to A$ and the identity map $A \to A$, and an automorphism of $A$ as an $A_1$-algebra is a unit-preserving weak equivalence $A \to A$ of $R$-modules. Let $Aut_R(A)_1$ denote the space of unit-preserving $R$-module automorphisms of a cofibrant-fibrant model of $A$. Then $B\mathscr{A}_1^R(A) \simeq B Aut_R(A)_1$.
Given a tower of fibrations \[ \ldots \to X_n \to X_{n-1} \to \ldots \to X_0\] with inverse limit $X$, recall \cite[Ch IX, \S 4]{BoKa74} that we get a ``fringed'' spectral sequence (called ``the (extended) homotopy spectral sequence'' in loc.\ cit.) \[ E_1^{s,t}=\pi_{t-s} F_s \Longrightarrow \pi_{t-s} X, \] where $F_s$ is the fiber of $X_s \to X_{s-1}$. This is not quite a spectral sequence in the usual sense, for the following reasons. First, $X$ might be empty, and the spectral sequence only exists as long as we can lift a given basepoint up the tower. The terms $E_1^{s,s+1}$ on the superdiagonal, contributing to $\pi_1 X$, are in general nonabelian, and the terms $E_1^{s,s}$ on the diagonal, contributing to $\pi_0 X$, are only sets. The fringing refers to the lack of negative dimensional terms to receive differentials.
This spectral sequence has good convergence properties, it converges completely as long as there are no $\lim^1$ terms \cite[Lemma IX.5.4]{BoKa74}.
Also recall \cite{Bo89} that if the tower of fibrations comes from the Tot-tower of a (simple, fibrant) cosimplicial space, the above spectral sequence has (some) negative dimensional terms. In particular $E_1^{s,s-1}$ exists and serves both as the target of differentials from the diagonal and as the place where obstructions to lifting a basepoint up the tower lie.
In our case the $n$'th space in the tower of fibrations will be the space $B\mathscr{A}^R_{n+1}(A)$, and although this tower does not come from a cosimplicial space we will describe sets $E_1^{s,s-1}$ containing the obstructions to lifting a basepoint up the tower. Moreover, the only nonabelian group on the superdiagonal is $E_1^{0,1}$ and while $E_1^{s,s}$ is not a group, it is a torsor over an abelian group that can be described in the same way as $E_1^{s,t}$ for $t-s \geq 1$.
We wish to identify the fiber of $B\mathscr{A}^R_{n+1}(A) \to B\mathscr{A}^R_n(A)$ with the space of extensions of a given $A_n$ structure on $A$ to an $A_{n+1}$ structure. If $B\mathscr{A}^R_n(A)$ was the classifying space of a category we could use Quillen's Theorem B \cite{Qu73}. Instead we use the following version, with notation from \cite{Lu08}:
\begin{lemma} \label{lem:Jacob} Suppose $F : \cC \to \cD$ is a map of $\infty$-categories with the property that for every $f : d \to d'$ in $\cD$ the maps \[ \cC \times_{\cD} \cD_{d/} \overset{\simeq}{\leftarrow} \cC \times_{\cD} \cD_{f/} \overset{\simeq}{\to} \cC \times_{\cD} \cD_{d'/} \] are weak equivalences. Then the homotopy fiber of $\cC \to \cD$ is weakly equivalent to $\cC \times_{\cD} \cD_{d/}$. \end{lemma}
\begin{proof}[Sketch proof] The homotopy fiber of $\cC \to \cD$ is the fiber of \[ p : \cC \times_{\cD} Fun(\Delta^1,\cD) \to \cD. \] The hypotheses imply that the inverse image of any $0$-simplex or $1$-simplex in $\cD$ is weakly equivalent to $\cC \times_{\cD} \cD_{d/}$, and the case for a general simplex in $\cD$ follows. \end{proof}
Let $\bar{A}$ denote the cofiber of the unit map $R \to A$ (assuming $A$ is cofibrant) and let $\bigvee^n \! A$ denote the ``fat wedge'' \[\bigvee\! {}^n \! A = \bigvee_{1 \leq i \leq n} A^{(i-1)} \wedge R \wedge A^{(n-i)}.\] Then the canonical map $\bigvee^n \! A \rightarrow A^{(n)}$ is a cofibration, with cofiber $\bar{A}^{(n)}$.
Now consider the forgetful functor $F : \sA_{n+1}^R(A) \to \sA_n^R(A)$. Given $(X,\phi) \in \sA_n^R(A)$ and $(Y,\psi) \in \sA_n^R(A)_{(X,\phi)/}$, the fiber over \[ (Y,\psi) \in \sA_{n+1}^R(A) \times_{\sA_n^R(A)} \sA_n^R(A)_{(X,\phi)/} \] is the space of extensions of the $A_n$ structure $\psi$ on $Y$ to an $A_{n+1}$ structure.
An $A_{n+1}$ structure on $Y$ extending $\psi$ is a map \[ m_{n+1} : (K_{n+1})_+ \wedge Y^{(n+1)} \rightarrow Y \] satisfying two conditions. First, $m_{n+1}$ is determined by $\psi$ on $(\partial K_{n+1})_+ \wedge Y^{(n+1)}$, and second, $m_{n+1}$ is determined by the unitality condition on $(K_{n+1})_+ \wedge \bigvee^{n+1} Y$.
The cofiber of the map \[ (\partial K_{n+1})_+ \wedge Y^{(n+1)} \!\!\!\!\!\!\!\! \coprod_{(\partial K_{n+1})_+ \wedge \bigvee^{n+1} \! Y} \!\!\!\!\!\!\!\! (K_{n+1})_+ \wedge \bigvee \! {}^{n+1} Y \rightarrow (K_{n+1})_+ \wedge Y^{(n+1)} \] is $\Sigma^{n-1} \bar{Y}^{n+1}$, and hence the space of extensions of $\psi$ to an $A_{n+1}$ structure is weakly equivalent to $Hom(\Sigma^{n-1} \bar{Y}^{(n+1)}, Y)$, which is weakly equivalent to \[ Hom(\Sigma^{n-1} \bar{A}^{(n+1)}, A).\]
Similarly, given $f : (X,\phi) \to (Y,\psi)$ in $\sA_n^R(A)$ and an element $(Z,\xi) \in \sA_n^R(A)_{f/}$, the fiber over $(Z,\xi)$ is the space of extensions of the $A_n$ structure $\xi$ on $Z$, and the maps in Lemma \ref{lem:Jacob} are clearly weak equivalences. Hence we can conclude that the fiber of $F : B\mathscr{A}_{n+1}^R(A) \to B\mathscr{A}_n^R(A)$ is the space of extensions of a given $A_n$ structure to an $A_{n+1}$ structure, as we wanted.
\begin{thm} There is a spectral sequence $\{E_r^{s,t}\}$ with $E_1^{s,t}$ defined for $s \geq 0$ and $t-s \geq -1$ converging to $\pi_{t-s} B\mathscr{A}^R(A)$ with the obstructions to $B\mathscr{A}^R(A)$ being nonempty on the subdiagonal $t-s=-1$. We have $E_1^{0,-1}=\varnothing$, $E_1^{0,0}=0$, $E_1^{0,1} \cong \pi_0 Aut_R(A)_1$ and \[ E_1^{s,t} \cong [\Sigma^{t-1} \bar{A}^{(s+1)},A] \] otherwise. Here $E_1^{s,t}$ is a group for $t-s \geq 1$, a torsor over the corresponding group for $t-s=0$, and a set for $t-s=-1$. \end{thm}
\begin{proof} From the obstruction theory developed in \cite{AnTHH} we conclude that we get a tower of fibrations \begin{equation*}
B\mathscr{A}^R(A) \simeq B\mathscr{A}_\infty^R(A) \to \ldots \to B\mathscr{A}^R_2(A) \to B\mathscr{A}^R_1(A), \end{equation*} and the spectral sequence is the one associated with this tower.
The above discussion identifies $E_1^{s,t}$ for $t-s \geq 0$. The obstruction theory in \cite{AnTHH} also identifies the obstruction to extending an $A_n$ structure to an $A_{n+1}$ structure with an element in $E_1^{n,n-1}$. \end{proof}
We would like to compare this to topological Hochschild cohomology, in particular to the $E_2$ term of the topological Hochschild cohomology spectral sequence, because that is something we can compute. Let $\{\tilde{E}_r^{p,q}\}$ be the spectral sequence with $E_1$-term $\tilde{E}_1^{p,q}=\pi_q F_S(\bar{A}^{(p)},A)$ converging to $\pi_{q-p}THH_R(A)$ if $A$ is an $R$-algebra so that topological Hochschild cohomology is defined.
\begin{thm} Suppose $B\mathscr{A}^R(A)$ is nonempty, and choose an $R$-algebra structure on $A$. Then \[ E_2^{s,t} \cong \tilde{E}_2^{s+1,t-1} \] for $s \geq 2$ and $t-s\geq -1$. This isomorphism of $E_2$ terms is an isomorphism of abelian groups for $t-s \geq 1$, of torsors for $t-s=0$ and of sets for $t-s=-1$. \end{thm}
\begin{proof} The $E_1$-terms are isomorphic for $s \geq 1$ and $t-s \geq -1$, and the argument for why the $d_1$ differential on $E_1^{*,*}$ is isomorphic to the Hochschild differential is contained in \cite{Ro88} or \cite{AnTHH}. \end{proof}
\section{The spectral sequence for Morava $K$-theory} \label{s:BKSSKn} In this section we prove Theorem \ref{thm:main2} by explicitly calculating the $E_\infty$ term of the spectral sequence converging to $\pi_* B\mathscr{A}^R(K(n))$ for $R=\hE{n}$ and $R=MU$. We also calculate the $E_2$ term for $R=S$.
\subsection{Ground ring $R=\hE{n}$} \label{ss:BKSShEn} Let $R=\hE{n}$ be the $K(n)$-localization of the Johnson-Wilson spectrum, with homotopy groups \[ \hE{n}_* = \bZ_{(p)}[v_1,\ldots,v_{n-1}, v_n^{\pm 1}]^\wedge_I.\] Here $I=(p,v_1,\ldots,v_{n-1})$ and $(-)^\wedge_I$ denotes $I$-completion. Then $\hE{n}$ can be given the structure of a commutative $S$-algebra \cite{RoWh02}, and $K(n) \simeq \hE{n}/I$. As in \cite{AnTHH} we find that the spectral sequence converging to $\pi_*THH_{\hE{n}}(K(n))=THH^{-*}_{\hE{n}}(K(n))$ collapses at the $E_2$ term (there are interesting extensions) because everything is concentrated in even total degree, with \[ \tilde{E}_2=\tilde{E}_\infty = K(n)_*[q_0,\ldots,q_{n-1}]. \] Here $q_i$ is in filtration $1$ and total homological degree $-2p^i$. There can obviously be no $\lim^1$ terms, so the spectral sequence converges completely.
This gives us the positive filtration part of the spectral sequence converging to $\pi_* B\mathscr{A}^{\hE{n}}(K(n))$. In particular, there are no obstructions to the existence of an $A_\infty$ structure on $K(n)$, and the part contributing to $\pi_0 B\mathscr{A}^{\hE{n}}(K(n))$ is the homological degree $-2$ part of $K(n)_*[q_0,\ldots,q_{n-1}]$ of degree at least $2$ in the $q_i$'s. We also know that \[ \pi_* F_{\hE{n}}(K(n),K(n)) \cong \Lambda_{K(n)_*}(Q_0,\ldots,Q_{n-1}), \] where $Q_i$ is the Bockstein corresponding to $v_i$, and the degree $0$ part of this is $\bF_p$. Only one of these $p$ maps commutes with the unit $\hE{n} \to K(n)$, so we find that $E_1^{0,1}=E_2^{0,1}=0$. Hence there are no possible differentials originating from $E_2^{0,1}$. Everything in positive filtration is concentrated in even total degree, so the spectral sequence collapses at the $E_2$ term with infinitely many classes on the diagonal. This proves Theorem \ref{thm:main2} for $R=\hE{n}$.
\subsection{Ground ring $R=MU$} \label{ss:BKSSMU} A similar argument shows that $K(n)$ has uncountably many $MU$-algebra structures. We first consider the connective Morava $K$-theory spectrum $k(n)$ with $k(n)_*=\bF_p[v_n]$. We choose $x_i$ such that $MU_*=\bZ[x_1,x_2,\ldots]$ and \[ k(n)=MU/(p,x_1,\ldots,x_{p^n-2},x_{p^n},\ldots).\] We can also choose these generators in such a way that $x_{p^i-1}$ maps to $v_i$ for $0 \leq i \leq n$ and $x_j$ maps to $0$ otherwise, under a suitable map $MU \to \hE{n}$ (which can be chosen to be $H_\infty$, though it is an open question whether or not it can be chosen to be $E_\infty$).
In this case, $E_1^{0,1}$ is nontrivial, but not large enough to kill all the obstructions. To be more precise, the $E_2$ term for topological Hochschild cohomology of $k(n)$ with ground ring $MU$ looks like \[ \tilde{E}_2^{*,*} = k(n)_*[\tilde{q}_0,\tilde{q}_1,\ldots,\tilde{q}_{p^n-2},\tilde{q}_{p^n},\ldots] \] with $\tilde{q}_i$ in filtration $1$ and total homological degree $-2i-2$.
The term $E_1^{0,1}$ consists of infinite sums $1+\sum v_I Q_I$, where $v_I \in k(n)_*$ is in the appropriate degree and $Q_I=Q_{i_1} \ldots Q_{i_k}$ is a product of Bocksteins. Here $Q_i$ is the Bockstein corresponding to $x_i$ in $MU_*$, or to $\tilde{q}_i$ in $\tilde{E}_2^{*,*}$.
Similarly, the term $E_1^{1,1}$ consists of infinite sums $1 \wedge 1 + \sum v_{IJ} Q_I \wedge Q_J$. The $d_1$ differential $d_1 : E_1^{0,1} \to E_1^{1,1}$ is given by \[ d_1(v_{ij} Q_i Q_j) = v_{ij} Q_i \wedge Q_j - v_{ij} Q_j \wedge Q_i,\] and more generally $d_1(v_I Q_I)$ is given by the sum of all ways to write $I=J \cup K$ of $\pm v_I Q_J \wedge Q_K$. In particular, $d_1$ is injective, so $E_2^{0,1}=0$ is trivial.
We also find that $v_{ij} Q_i \wedge Q_j = v_{ij} Q_j \wedge Q_i$ in $E_2^{1,1}$, and as in \cite[Theorem 3.9]{AnTHH}, the kernel of $d_1 : E_1^{1,1} \to E_1^{2,1}$ picks out the homotopy associative multiplications, and this identifies $E_2^{1,1}$ with $\tilde{E}_2^{2,0}$. Again there can be no $\lim^1$ terms, so the spectral sequence converges completely. This gives a complete description of all the $A_\infty$ structures on $k(n)$ as an $MU$-module. We get the same result for $K(n)$:
\begin{lemma} \label{lem:BRkntoBRKnwe} The canonical map $B\mathscr{A}^{MU}(k(n)) \to B\mathscr{A}^{MU}(K(n))$ is a weak equivalence. \end{lemma}
\begin{proof} This is clear because \[ \tilde{E}_2^{*,*}(K(n)) \cong v_n^{-1} \tilde{E}_2^{*,*}(k(n)),\] and these groups are isomorphic in the degrees contributing to $E_2^{*,*}(k(n))$ and $E_2^{*,*}(K(n))$, and the same holds for $E_2^{0,*}$. \end{proof}
This proves Theorem \ref{thm:main2} for $R=MU$. If $BP$ is a commutative $S$-algebra then the same argument shows that $K(n)$ has uncountably many $BP$-algebra structures.
\subsection{Ground ring $R=S$} \label{ss:BKSSS} By \cite{AnTHH}, we have an equivalence $THH_{\hE{n}}(K(n)) \to THH_S(K(n))$ (which is visible on $\tilde{E}_2$), and this shows that the $E_2$ term of the spectral sequence converging to $\pi_* B\mathscr{A}^S(K(n))$ is isomorphic to the $E_2$ term of the spectral sequence converging to $\pi_* B\mathscr{A}^{\hE{n}}(K(n))$ in filtration $s \geq 2$. If $p$ is odd this also gives an isomorphism in filtration $s=1$; if $p=2$ there is a possible differential $d_1 : E_1^{0,1} \to E_1^{1,1}$ killing the class $v_n q_{n-1}^2$.
As in \cite{Ra04}, let
\[ \Sigma(n) = K(n)_* \otimes_{BP_*} BP_*BP \otimes_{BP_*} K(n)_* \cong K(n)_*[t_1,t_2,\ldots]/(v_n t_i^{p^n}-v_n^{p^i} t_i) \]
be the $n$'th Morava Stabilizer algebra. Here $|t_i|=2(p^i-1)$. Recall \cite{Na02} that, for any choice of multiplication on $K(n)$, we have \begin{equation*} \label{eq:K(n)_*K(n)} K(n)_* K(n) \cong \Sigma(n) \otimes \Lambda(\alpha_0,\ldots,\alpha_{n-1}) \end{equation*}
as a ring for $p$ odd, while $\alpha_i^2=t_{i+1}$ for $0 \leq i \leq n-2$ and $\alpha_{n-1}^2=t_n+v_n$ for $p=2$.\footnote{If the reader prefers a unified description of $K(n)_*K(n)$ at all primes it is the above ring with $p$-fold Massey products ($2$-fold Massey products being products) $\langle \alpha_i,\ldots,\alpha_i \rangle = t_{i+1}$ for $0 \leq i \leq n-2$ and $\langle \alpha_{n-1},\ldots,\alpha_{n-1} \rangle = t_n+v_n$ with no indeterminacy.} Here $|\alpha_i|=2p^i-1$.
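As an illustration (this specialization is immediate from the formulas above and is not spelled out there), for $n=1$ and $p$ odd we have
\[ \Sigma(1) = K(1)_*[t_1,t_2,\ldots]/(v_1 t_i^{p}-v_1^{p^i} t_i), \qquad K(1)_*K(1) \cong \Sigma(1) \otimes \Lambda(\alpha_0), \]
with $|t_i|=2(p^i-1)$ and $|\alpha_0|=1$.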
Also recall that if we consider $K(n)_* K(n)^{op}$ instead we get the same result except that we get to replace the relation $\alpha_{n-1}^2=t_n+v_n$ by $\alpha_{n-1}^2=t_n$ at $p=2$.\footnote{Or replace $\langle \alpha_{n-1},\ldots,\alpha_{n-1} \rangle = t_n+v_n$ with $\langle \alpha_{n-1}, \ldots,\alpha_{n-1} \rangle=v_n$ at any prime.}
We have that \begin{multline*}
K(n)^*K(n) \cong Hom_{K(n)_*}(K(n)_*K(n),K(n)_*) \\ \cong Hom_{K(n)_*}(\Sigma(n),K(n)_*) \otimes \Lambda(Q_0,\ldots,Q_{n-1}),
\end{multline*} where $Q_i$ is the Bockstein dual to $\alpha_i$. In particular this means that \[ E_1^{0,1} \cong \big[ Hom_{K(n)_*}(\Sigma(n),K(n)_*) \otimes \Lambda(Q_0,\ldots,Q_{n-1}) \big]_1^\times, \] which is large enough to potentially kill all the uniqueness obstructions.
Again there can be no $\lim^1$ terms, so the spectral sequence converges completely. This is clear in positive filtration, for the groups in filtration $0$ this relies on observing that $E_1^{0,t}$ is $p$-torsion.
At $p=2$, a result by Nassau \cite{Na02} gives us our first differential. He shows that if $\phi$ is one multiplication ($A_2$ structure) on $K(n)$ and $\phi^{op}$ is the other, the automorphism $\Xi$ of $K(n)$ given by $t_n \mapsto v_n$ is an antiautomorphism of the multiplication. Hence $\phi$ and $\phi^{op}$ are in the same path component in $B\mathscr{A}^S_2(K(n))$. The difference $\phi-\phi^{op}$ is represented by $v_n q_{n-1}^2$, so $d_1(\Xi)=v_n q_{n-1}^2$.
\section{$S$-algebra $k$-invariants} \label{s:k_invariants} For connective spectra we can build the $S$-algebra structure by induction on the Postnikov sections. Given a connective spectrum $A$, let $P_m A$ denote the Postnikov section of $A$ with homotopy groups only up to (and including) degree $m$. If $R$ is a connective commutative $S$-algebra then Postnikov sections can be defined in the category of $R$-algebras, so if $A$ is an $R$-algebra then this gives an $R$-algebra structure on $P_m A$ as well. Conversely, $B\mathscr{A}^R(A)=\underset{\longleftarrow}{\lim} \, B\mathscr{A}^R(P_m A)$, so we can understand $B\mathscr{A}^R(A)$ by understanding $B\mathscr{A}^R(P_m A)$ for all $m$.
A theory of $k$-invariants for connective $R$-algebras has been developed by Dugger and Shipley \cite{DuSh}. Suppose $C$ is an $R$-algebra with homotopy groups only up to degree $m-1$, and suppose $M$ is a $\pi_0 C$ module. Let $\cM(C,(M,m))$ be the category of Postnikov extensions of $C$ of type $(M,m)$. The objects are $R$-algebras $Y$ together with a map $Y \to C$ satisfying $\pi_i Y=0$ for $i>m$, $\pi_m Y \cong M$ and $P_{m-1} Y \simeq C$. The morphisms are maps over $C$ inducing an isomorphism on homotopy.
\begin{theorem} (Dugger-Shipley, \cite[Proposition 1.5]{DuSh}) With $\mathcal{M}(C,(M,m))$ as above, \begin{equation*} \pi_0 \cM(C,(M,m)) \cong THH^{m+2}_R(C;M)/Aut(M). \end{equation*} \end{theorem}
Now suppose $C=P_{m-1} A$, $M=\pi_m A$, and we want to make sure that $Y \in \cM(C,(M,m))$ has the homotopy type of $P_m A$. Then $Y$ has to have the correct additive $k$-invariant, which is a map $C \to \Sigma^{m+1} HM$. Recall that the topological Hochschild cohomology spectral sequence converging to $THH^*_R(C;M)$ has $\tilde{E}_1^{s,t}=[\Sigma^t C^{(s)},M]$, contributing to $\pi_{t-s} THH_R(C;M)=THH^{s-t}_R(C;M)$. In particular, the additive $k$-invariant of $Y$ is an element in $\tilde{E}_1^{1,-m-1}$, contributing to $THH^{m+2}_R(C;M)$.
If the additive $k$-invariant $k_m$ is trivial then $Y \simeq C \vee \Sigma^m HM$ as a spectrum, and $Y$ always has at least one $S$-algebra structure, namely the square zero extension. If $k_m$ is non-trivial, it might or might not survive the topological Hochschild cohomology spectral sequence. If $d_r(k_m)=y \neq 0$ then $y$ represents the obstruction to extending the $S$-algebra structure on $C$ to an $S$-algebra structure on $Y$. If $k_m$ survives then $Y$ has at least one $S$-algebra structure.
\section{$k$-invariants for Morava $K$-theory} \label{s:kforkn} Again we study the moduli problem over each ground ring separately. First we use $BP \langle n \rangle_p$, which has homotopy groups \[ (BP \langle n \rangle_p)_* = \bZ_p[v_1,\ldots,v_n] \] and is the appropriate connective version of $\hE{n}$, as the ground ring, assuming it can be given the structure of a commutative $S$-algebra. Then we use $MU$, and finally we use the sphere spectrum $S$. First we recall the following change-of-rings result:
\begin{lemma}[{\cite[Corollary 2.5]{AHL}}] \label{lemma:changeofcoeff} Suppose $A \to B$ is a map of $S$-algebras and $M$ is an $A-B$-bimodule, given an $A-A$-bimodule structure by pullback. Then there is a spectral sequence \[ \tilde{E}_2^{*,*}=Ext_{\pi_* B \wedge_R A}^{**}(B_*,M_*) \Longrightarrow \pi_* THH_R(A;M).\] \end{lemma}
In particular, when $B=M=H\bF_p$ we get a spectral sequence \[ \tilde{E}_2^{*,*}=Ext_{H_*^R(A;\bF_p)}(\bF_p,\bF_p) \Longrightarrow \pi_* THH_R(A;H\bF_p),\] where $H_*^R(A;\bF_p)$ denotes $\pi_* A \wedge_R H\bF_p$.
Since $k(n)$ has homotopy in degrees that are multiples of $2p^n-2$, let $q=2p^n-2$. Each additive $k$-invariant $k_m \in H_R^{mq+1}(P_{(m-1)q} k(n);\bF_p)$ is nontrivial; this follows by considering $H_*(P_{mq} k(n);\bF_p)$, which is different from $H_*(P_{(m-1)q} k(n) \vee \Sigma^{mq} H\bF_p;\bF_p)$.
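For example (an illustrative specialization of the statement above), for $p=2$ and $n=1$ we have $q=2$, so the additive $k$-invariant $k_m$ lies in $H_R^{2m+1}(P_{2(m-1)} k(1);\bF_2)$.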
\subsection{Ground ring $BP \langle n \rangle_p$} \label{ss:k_invBPn} Since we are planning to use Lemma \ref{lemma:changeofcoeff}, we start by calculating the $\BP{n}_p$ homology of the Postnikov sections of $k(n)$.
\begin{prop} The $\BP{n}_p$-homology of $H\bF_p$, $P_{mq} k(n)$ and $k(n)$ is as follows: \begin{enumerate} \item $H_*^{\BP{n}_p}(H\bF_p;\bF_p)=\Lambda_{\bF_p}(\alpha_0,\ldots,\alpha_n)$, \item $H_*^{\BP{n}_p}(P_{mq} k(n);\bF_p)=\Lambda_{\bF_p}(\alpha_0,\ldots,\alpha_{n-1},a_{m+1})$, \item $H_*^{\BP{n}_p}(k(n);\bF_p)=\Lambda_{\bF_p}(\alpha_0,\ldots,\alpha_{n-1})$. \end{enumerate} Here $\alpha_i$ is in degree $2p^i-1$ and $a_{m+1}$ is in degree $(m+1)q+1$, $a_1=\alpha_n$. \end{prop}
\begin{proof} This is clear, using that we can write \begin{eqnarray*} H\bF_p & = & \BP{n}_p/(p,v_1,\ldots,v_n), \\ P_{mq} k(n) & = & \BP{n}_p/(p,v_1,\ldots,v_{n-1},v_n^{m+1}), \\ k(n) & = & \BP{n}/(p,v_1,\ldots,v_{n-1}). \end{eqnarray*} \end{proof}
\begin{prop} Assuming that $\BP{n}_p$ is a commutative $S$-algebra, topological Hochschild cohomology of $H\bF_p$, $P_{mq} k(n)$ and $k(n)$ over $\BP{n}_p$ with coefficients in $H \bF_p$ is as follows: \begin{enumerate} \item $THH^*_{\BP{n}_p}(H\bF_p; H\bF_p) \cong \bF_p[q_0,\ldots,q_n]$, \item $THH^*_{\BP{n}_p}(P_{mq} k(n); H\bF_p) \cong \bF_p[q_0,\ldots,q_{n-1}, b_{m+1}]$, \item $THH^*_{\BP{n}_p}(k(n); H\bF_p) \cong \bF_p[q_0,\ldots,q_{n-1}]$. \end{enumerate} Here $q_i$ is in cohomological degree $2p^i$ and $b_{m+1}$ is in degree $(m+1)q+2$, $b_1=q_n$. \end{prop}
\begin{proof} We use Lemma \ref{lemma:changeofcoeff}. In each case there can be no differentials, because the $E_2$ term is concentrated in even total degree. \end{proof}
The additive $k$-invariant of $k(n)$ dictates that we choose the $k$-invariant in \[ THH^{mq+2}_{\BP{n}_p}(P_{(m-1)q} k(n);H\bF_p) \] as $b_m+f(q_0,\ldots,q_{n-1})$ where $f$ has degree at least $2$ in the $q_i$'s.
Next we compare this with the moduli space of $\hE{n}$-algebra structures on $K(n)$.
\begin{lemma} \label{lem:fowdk} Assuming that $\BP{n}_p$ is a commutative $S$-algebra, the canonical maps \[ B\mathscr{A}^{\BP{n}_p}(k(n)) \to B\mathscr{A}^{\BP{n}_p}(K(n)) \to B\mathscr{A}^{\hE{n}}(K(n)) \] are weak equivalences. \end{lemma}
\begin{proof} This is similar to the proof of Lemma \ref{lem:BRkntoBRKnwe}. \end{proof}
Now we can compare the two methods of studying the set of equivalence classes of $\BP{n}_p$-algebra structures on $k(n)$. We find that in the spectral sequence converging to $\pi_* B\mathscr{A}^{\BP{n}_p}(k(n))$, each uniqueness obstruction is represented by a class \[ v_n^m f(q_0,\ldots,q_{n-1}) \] for some $m \geq 1$, where $f(q_0,\ldots,q_{n-1})$ has homological degree $-mq-2$. If $f(q_0,\ldots,q_{n-1})=q_{i_1} \cdots q_{i_j}$ has degree $j$ in the $q_i$'s this represents changing the $A_j$ structure by the map $\Sigma^{j-2} k(n)^{(j)} \to k(n)$ given by first applying \[ Q_f = Q_{i_1} \wedge \ldots \wedge Q_{i_j} \] and then multiplying the factors and multiplying by $v_n^m$.
On the other hand, we can interpret the polynomial $f(q_0,\ldots,q_{n-1})$ as being an element of $THH^{mq+2}(P_{(m-1)q} k(n); H\bF_p)$, represented in the topological Hochschild cohomology spectral sequence by the composite \[ (P_{(m-1)q} k(n))^{(j)} \overset{Q_f}{\to} \Sigma^{mq-j+2} (P_{(m-1)q} k(n))^{(j)} \to \Sigma^{mq-j+2} H\bF_p.\]
\begin{lemma} Given a uniqueness obstruction $v_n^m f(q_0,\ldots,q_{n-1})$ of degree $j$ in the $q_i$'s represented by $v_n^m Q_f : \Sigma^{j-2} k(n)^{(j)} \to k(n)$, we get a commutative diagram as follows: \[ \xymatrix{ & \Sigma^{j-2} (P_{(m-1)q} k(n))^{(j)} \ar[r]^-{Q_f} & \Sigma^{mq} H\bF_p \ar[d] \\ \Sigma^{j-2} k(n)^{(j)} \ar[r]^-{v_n^m Q_f} \ar[ur] & k(n) \ar[r] & P_{mq} k(n) \ar[d] \\ & & P_{(m-1)q} k(n) } \] \end{lemma}
\begin{proof} Consider the following commutative diagram: \[ \xymatrix{ \Sigma^{j-2} (P_{mq} k(n))^{(j)} \ar[r]^-{Q_f} \ar[d] & \Sigma^{mq} P_{mq} k(n) \ar[d] \ar[r]^-{v_n^m} & P_{mq} k(n) \ar[d] \\ \Sigma^{j-2} (P_{(m-1)q} k(n))^{(j)} \ar[r]^-{Q_f} & \Sigma^{mq} P_{(m-1)q} k(n) \ar[r]^-{v_n^m=0} \ar@{.>}[ur] & P_{(m-1)q} k(n) } \] This gives us a map $\Sigma^{j-2} (P_{(m-1)q} k(n))^{(j)} \to P_{mq} k(n)$, and this map is trivial on $P_{(m-1)q} k(n)$ so it factors through $\Sigma^{mq} H\bF_p$. \end{proof}
The upshot of this is that we can translate from obstructions in the spectral sequence converging to $\pi_* B\mathscr{A}^{\BP{n}}(k(n))$, which by Lemma \ref{lem:fowdk} and the equivalence between topological Hochschild cohomology over $\hE{n}$ and $S$ are the obstructions in the spectral sequence converging to $\pi_* B\mathscr{A}^S(K(n))$, to obstructions in the spectral sequence converging to $THH_{\BP{n}}^*(P_{(m-1)q} k(n);H\bF_p)$.
\subsection{Ground ring $MU$} \label{ss:k_invMU} Next we do the same with $MU$ as the ground ring. If we knew that $\BP{n}_p$ was a commutative $S$-algebra then this section would not be necessary. The corresponding results are as follows:
\begin{prop} The $MU$-homology of $H\bF_p$, $P_{mq} k(n)$ and $k(n)$ is as follows: \begin{enumerate} \item $H_*^{MU}(H\bF_p;\bF_p) \cong \Lambda_{\bF_p}(\tilde{\alpha}_0,\tilde{\alpha}_1,\ldots)$, \item $H_*^{MU}(P_{mq} k(n);\bF_p) \cong \Lambda_{\bF_p}(\tilde{\alpha}_i \, : \, i \neq p^n-1, a_{m+1})$, \item $H_*^{MU}(k(n);\bF_p) \cong \Lambda_{\bF_p}(\tilde{\alpha}_i \, : \, i \neq p^n-1)$. \end{enumerate} Here $\tilde{\alpha}_i$ is in degree $2i+1$ and $a_{m+1}$ is in degree $(m+1)q+1$, $a_1=\tilde{\alpha}_{p^n-1}$. \end{prop}
\begin{proof} This is clear, using that we can write \begin{eqnarray*} H\bF_p & = & MU/(p,x_1,\ldots,x_{p^n-2}, x_{p^n-1}, x_{p^n}, \ldots), \\ P_{mq} k(n) & = & MU/(p,x_1,\ldots,x_{p^n-2}, x_{p^n-1}^m, x_{p^n},\ldots), \\ k(n) & = & MU/(p,x_1,\ldots,x_{p^n-2}, x_{p^n}, \ldots). \end{eqnarray*} \end{proof}
Now we can calculate topological Hochschild cohomology:
\begin{prop} Topological Hochschild cohomology of $\bF_p$, $P_{mq} k(n)$ and $k(n)$ over $MU$ with coefficients in $H\bF_p$ is as follows: \begin{enumerate} \item $THH_{MU}^*(H\bF_p;H\bF_p) \cong \bF_p[\tilde{q}_0,\tilde{q}_1,\ldots]$, \item $THH_{MU}^*(P_{mq} k(n);H\bF_p) \cong \bF_p[\tilde{q}_i \, : \, i \neq p^n-1, b_{m+1}]$, \item $THH_{MU}^*(k(n);H\bF_p) \cong \bF_p[\tilde{q}_i \, : \, i \neq p^n-1]$. \end{enumerate} Here $\tilde{q}_i$ is in cohomological degree $2i+2$ and $b_{m+1}$ is in degree $(m+1)q+2$, $b_1=\tilde{q}_{p^n-1}$. \end{prop}
\begin{proof} Again this follows from Lemma \ref{lemma:changeofcoeff}. \end{proof}
Recall from Lemma \ref{lem:BRkntoBRKnwe} that $B\mathscr{A}^{MU}(k(n)) \to B\mathscr{A}^{MU}(K(n))$ is a weak equivalence. Just as with $\BP{n}_p$ as the ground ring, we can translate from obstructions in the spectral sequence converging to $\pi_* B\mathscr{A}^S(K(n))$ to obstructions in the spectral sequence converging to $THH^*_{MU}(P_{(m-1)q} k(n);H\bF_p)$. In this case, only the classes in $\bF_p[\tilde{q}_0, \tilde{q}_{p-1},\ldots,\tilde{q}_{p^{n-1}-1}]$ correspond to obstructions in the spectral sequence converging to $\pi_* B\mathscr{A}^S(K(n))$.
By this we mean that the $MU$-algebra $k$-invariant for building $P_{mq} k(n)$ from $P_{(m-1)q} k(n)$ lives in $THH^{mq+2}(P_{(m-1)q} k(n);H\bF_p)$ and looks like $b_m + f(\tilde{q}_i \, : \, i \neq p^n-1)$ where $f$ has degree at least $2$ in the $\tilde{q}_i$'s. This corresponds to the uniqueness obstruction $v_n^m f(\tilde{q}_i \, : \, i \neq p^n-1)$ in the $E_2$-term of the spectral sequence converging to $\pi_* B\mathscr{A}^{MU}(K(n))$, and the canonical map $B\mathscr{A}^{MU}(K(n)) \to B\mathscr{A}^S(K(n))$ induces a map on $E_2$-terms, under which $\tilde{q}_{p^i-1}$ maps to $q_i$. This is clear, because both $\tilde{q}_{p^i-1}$ and $q_i$ are represented by the Bockstein corresponding to $v_i$.
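As a quick consistency check on this identification (not needed for the argument), the degrees match: $|\tilde{q}_{p^i-1}| = 2(p^i-1)+2 = 2p^i = |q_i|$.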
\subsection{Ground ring $S$} \label{ss:k_invS} Finally we do the same with $S$ as the ground ring. Let $\bar{A}_*$ denote the dual Steenrod algebra with $\bar{\tau}_n$ missing, or with $\bar{\xi}_{n+1}$ missing but with $\bar{\xi}_{n+1}^2$ present if $p=2$. In the following we will state all results at odd primes and leave the standard modifications, replacing $\bar{\tau}_i$ with $\bar{\xi}_i$ and $\bar{\xi}_i$ with $\bar{\xi}_i^2$ at $p=2$, to the reader.
\begin{prop} The mod $p$ homology of $\bF_p$, $P_{mq} k(n)$ and $k(n)$ is as follows: \begin{enumerate} \item $H_*(\bF_p;\bF_p) \cong A_*$, \item $H_*(P_{mq} k(n);\bF_p) \cong \bar{A}_* \otimes \Lambda_{\bF_p}(a_{m+1})$, \item $H_*(k(n);\bF_p) \cong \bar{A}_*$. \end{enumerate} Here $a_{m+1}$ is in degree $(m+1)q+1$, $a_1=\bar{\tau}_n$. \end{prop}
\begin{proof} Only part $2$ is not well known. Consider the long exact sequence obtained by taking the mod $p$ homology of the (co)fiber sequence \[ \Sigma^{mq} H\bF_p \to P_{mq} k(n) \to P_{(m-1)q} k(n) \to \Sigma^{mq+1} H\bF_p. \] By induction we have $H_*(P_{(m-1)q} k(n);\bF_p) \cong \bar{A}_* \otimes \Lambda_{\bF_p}(a_m)$, and the map to $H_*(\Sigma^{mq+1} H\bF_p; \bF_p)$ is determined by being $\bar{A}_*$-linear and that $1 \mapsto 0$ and $a_m \mapsto \Sigma^{mq+1} 1$. The result follows by combining the kernel and cokernel of this map. \end{proof}
\begin{thm} Topological Hochschild cohomology of $H\bF_p$, $P_{mq} k(n)$ and $k(n)$ with coefficients in $H\bF_p$ is as follows: \begin{enumerate} \item $THH_S^*(H\bF_p;H\bF_p) \cong P_p(\delta \bar{\tau}_0,\delta \bar{\tau}_1,\ldots)$, \item $THH_S^*(P_{mq} k(n);H\bF_p) \cong \Lambda(\delta \bar{\xi}_{n+1}) \otimes P_p(\delta \bar{\tau}_i \, : \, i \neq n) \otimes \bF_p[b_{m+1}]$, \item $THH_S^*(k(n);H\bF_p) \cong \Lambda(\delta \bar{\xi}_{n+1}) \otimes P_p(\delta \bar{\tau}_i \, : \, i \neq n)$. \end{enumerate} \end{thm}
\begin{proof} The first part is dual to B\"okstedt's original calculation of topological Hoch\-schild homology of $\bF_p$ \cite{Bo2}. For 2, consider the spectral sequence \[ E_2 = \Lambda(\delta \bar{\xi}_i \, : \, i \geq 1) \otimes \bF_p[\delta \bar{\tau}_i \, : \, i \neq n] \otimes \bF_p[b_{m+1}] \Longrightarrow THH_S^*(P_{mq} k(n);\bF_p) \] from Lemma \ref{lemma:changeofcoeff}. The map $P_{mq} k(n) \to H\bF_p$ induces a map on topological Hoch\-schild cohomology in the opposite direction, inducing differentials $d_{p-1}(\delta \bar{\xi}_{i+1}) = (\delta \bar{\tau}_i)^p$ for $i \neq n$.
The class $b_{m+1}$ is the next additive $k$-invariant for $k(n)$, and because we know that $k(n)$ can be given an $S$-algebra structure, $b_{m+1}$ has to survive the spectral sequence. The class $\delta \bar{\xi}_{n+1}$ survives for degree reasons, so each generator is a permanent cycle. Using the multiplicative structure, the spectral sequence collapses at the $E_p$ term and part 2 of the theorem follows. Part 3 is similar. \end{proof}
We note that the $p$'th powers of $\delta \bar{\tau}_i$ for $0 \leq i \leq n-1$ all die, and we make the following simple but crucial observation:
\begin{lemma} \label{lem:noobinrightdeg} Consider the $k$-invariant for $k(n)$ in $THH_S^{mq+2}(P_{(m-1)q} k(n); \bF_p)$. There are no polynomials $f(\delta \bar{\tau}_0,\ldots,\delta \bar{\tau}_{n-1}) \in P_p(\delta \bar{\tau}_0,\ldots,\delta \bar{\tau}_{n-1})$ in this degree. \end{lemma}
\begin{proof} This is clear because the element in highest degree is $(\delta \bar{\tau}_0)^{p-1} \cdots (\delta \bar{\tau}_{n-1})^{p-1}$ in degree $2p^n-2$, which is less than $mq+2$. \end{proof}
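Spelled out, the degree count in the proof is the elementary estimate \[ \sum_{i=0}^{n-1} (p-1)\cdot 2p^i \;=\; 2(p^n-1) \;=\; 2p^n-2 \;<\; 2p^n \;=\; q+2 \;\leq\; mq+2 \qquad (m \geq 1), \] recalling that $|\delta\bar{\tau}_i| = 2p^i$ and $q = 2p^n-2$.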
Of course the generators $\delta \bar{\tau}_i$ are related to the generators $q_i$ and $\tilde{q}_j$ from the previous sections:
\begin{lemma} \label{lem:gensarethesame} The canonical map \[ THH_{MU}^*(P_{mq} k(n);\bF_p) \to THH_S^*(P_{mq} k(n);\bF_p), \] maps $\tilde{q}_j$ to $\delta \bar{\tau}_i$ if $p^i-1=j$ and $0$ otherwise.
Similarly, if $\BP{n}_p$ is a commutative $S$-algebra, the canonical map \[ THH_{\BP{n}_p}^*(P_{mq} k(n);\bF_p) \to THH_S^*(P_{mq} k(n);\bF_p) \] maps $q_i$ to $\delta \bar{\tau}_i$. \end{lemma}
\begin{proof} This follows by the description of all of the $E_2$-terms in terms of Bocksteins. \end{proof}
\section{Proof of Theorem \ref{thm:main1}} \label{s:proof} We are now in a position to prove Theorem \ref{thm:main1}. As we have seen, each uniqueness obstruction looks like $v_n^m f(q_0,\ldots,q_{n-1})$ for some $m \geq 1$ and some monomial $f(q_0,\ldots,q_{n-1})$, and we can find these uniqueness obstructions in the corresponding spectral sequence converging to $THH_{MU}^*(P_{(m-1)q} k(n);H\bF_p)$.
In the corresponding spectral sequence converging to $THH_S^*(P_{(m-1)q} k(n);\bF_p)$, $f(q_0,\ldots,q_{n-1})$ is killed by a differential, which means that the corresponding $S$-algebra structures on $P_{mq} k(n)$ are equivalent. By considering the pullback square \[ \xymatrix{ PB \simeq k(n) \ar[d] \ar[r] & k(n) \ar[d] \\ P_{mq} k(n) \ar[r] & P_{mq} k(n) } \] of $S$-algebras, we see that the equivalence can be lifted to $k(n)$. Now we can invert $v_n$ by $K(n)$-localizing, so this gives an equivalence between the corresponding $S$-algebra structures on $K(n)$ as well.
We claim that this is enough to conclude that the obstructions are also killed in the spectral sequence converging to $\pi_* B\mathscr{A}^S(K(n))$. To see this, consider $k(n)$ and $K(n)$ as $MU$-modules, and consider the following commutative diagram: \[ \xymatrix{ B\mathscr{A}^{MU}(k(n)) \ar[r] \ar[d]^\simeq & B\mathscr{A}^S(k(n)) \ar[d] \\ B\mathscr{A}^{MU}(K(n)) \ar[r] & B\mathscr{A}^S(K(n)) } \] We showed in Lemma \ref{lem:BRkntoBRKnwe} that $B\mathscr{A}^{MU}(k(n)) \to B\mathscr{A}^{MU}(K(n))$ is a weak equivalence, and we understand the $E_2$ terms of the spectral sequences converging to the homotopy groups of all the spaces in the diagram except for $B\mathscr{A}^S(k(n))$. The spectral sequences converging to $\pi_* B\mathscr{A}^{MU}(k(n))$ and $\pi_* B\mathscr{A}^{MU}(K(n))$ collapse, and from the $E_2$ terms we can read off that the map $\pi_0 B\mathscr{A}^{MU}(k(n)) \to \pi_0 B\mathscr{A}^S(K(n))$ is surjective.
In $\pi_0 B\mathscr{A}^{MU}(k(n))$, there are classes that map surjectively onto the $E_2$ term of the spectral sequence converging to $\pi_* B\mathscr{A}^S(K(n))$, all of which are hit by differentials in the spectral sequence converging to $THH_S^*(P_{(m-1)q} k(n); H\bF_p)$ for some $m$ (Lemmas \ref{lem:noobinrightdeg} and \ref{lem:gensarethesame}); hence the same must happen in the spectral sequence converging to $\pi_* B\mathscr{A}^S(K(n))$.
Our argument would be simplified by the existence of a commutative $S$-algebra structure on $\BP{n}_p$, in which case it follows that all the uniqueness obstructions for building $k(n)$ as a $\BP{n}_p$-algebra are hit by differentials in the spectral sequence converging to $THH_S^*(P_{(m-1)q} k(n);H\bF_p)$ for some $m$. In particular, when $n=1$ using $\ell_p$ instead of $MU$ gives a simpler argument.
\section{$2$-periodic Morava $K$-theory} \label{s:2periodic} There is a $2$-periodic version of Morava $K$-theory, given by \[ K_n=E_n/(p,u_1,\ldots,u_{n-1}), \] where $E_n$ is the Morava $E$-theory spectrum associated to a formal group of height $n$ over a perfect field $k$ of characteristic $p$. The spectrum $E_n$ is a commutative $S$-algebra \cite{GoHo}, and $K_n$ has homotopy groups \[ (K_n)_* \cong k[u,u^{-1}] \]
with $|u|=2$. We can also ask about the space of $S$-algebra structures on $K_n$. When $p=2$ and $n=1$, $K_n=K(n)$; if $p>2$ or $n>1$, the author \cite{AnTHH} found that $THH_S(K_n)$ varies over the moduli space of $S$-algebra structures on $K_n$, so there can be no unique $S$-algebra structure on $K_n$.
\begin{conj} There are only finitely many $S$-algebra structures on $K_n$, in the sense that the moduli space of $S$-algebra structures on $K_n$ has finitely many components. \end{conj}
\begin{proof}[Outline of possible proof] The spectral sequence converging to $\pi_* B\mathscr{A}^S(K_n)$ is very similar to the one converging to $\pi_* B\mathscr{A}^S(K(n))$, but now each of the $n$ polynomial generators is in degree $-2$ instead of degree $-2p^i$ for $0 \leq i \leq n-1$.
If we try to build the connective version $k_n$ using its Postnikov tower, we need to understand the topological Hochschild cohomology spectral sequence. Since $H_*(k_n;\bF_p) \cong H_*(k(n);\bF_p) \otimes P_{p^n-1}(u)$, and similarly for the Postnikov sections, we get some extra classes in the $E_2$ term. Assuming that these classes are permanent cycles, we find that we have more choices than before. To build $P_2 k_n$ from $Hk$, we need a class in \[ THH_S^4(Hk;Hk),\] and for $p$ odd we are free to choose $(\delta \bar{\tau}_0)^2$. If $p=2$ and $n>1$ we can choose $\delta \bar{\xi}_2$. In each case this corresponds to a noncommutative multiplication. Next, to build $P_4 k_n$ from $P_2 k_n$ we need a class in $THH^6_S(P_2 k_n;Hk)$. If $p>3$ we can choose the class we need for $u$ to square to something nontrivial plus $(\delta \bar{\tau}_0)^3$; if $n \geq 2$ and $p=2$ or $p=3$ there are similar choices.
However, assuming that the additional classes do not change the behavior of the spectral sequence, the $p$'th powers of $\delta \bar{\tau}_0, \ldots, \delta \bar{\tau}_{n-1}$ still die, so for $m$ sufficiently large there are no such classes in $THH_S^{2m+2}(P_{2m-2} k_n;Hk)$. \end{proof}
\end{document} | arXiv |
Abstract: The paper is devoted to the applications of the theory of dynamical systems to the theory of transport phenomena in metals in the presence of strong magnetic fields. More precisely, we consider the connection between the geometry of the trajectories of dynamical systems arising at the Fermi surface in the presence of an external magnetic field and the behavior of the conductivity tensor in a metal in the limit $\omega _B\tau \to \infty $. We describe the history of the question and investigate special features of such behavior in the case of the appearance of trajectories of the most complex type on the Fermi surface of a metal. | CommonCrawl |
Information and Communication Technology | springerprofessional.de
2015 | Book
Third IFIP TC 5/8 International Conference, ICT-EurAsia 2015, and 9th IFIP WG 8.9 Working Conference, CONFENIS 2015, Held as Part of WCC 2015, Daejeon, Korea, October 4-7, 2015, Proceedings
Book series: Lecture Notes in Computer Science
Editors: Ismail Khalil, Erich Neuhold, A Min Tjoa, Li Da Xu, Ilsun You
Publisher: Springer International Publishing
Print ISBN: 978-3-319-24314-6
Electronic ISBN: 978-3-319-24315-3
This book constitutes the refereed proceedings of the Third IFIP TC 5/8 International Conference on Information and Communication Technology, ICT-EurAsia 2015, with the collocation of AsiaARES 2015 as a special track on Availability, Reliability and Security, and the 9th IFIP WG 8.9 Working Conference on Research and Practical Issues of Enterprise Information Systems, CONFENIS 2015, held as part of the 23rd IFIP World Computer Congress, WCC 2015, in Daejeon, Korea, in October 2015. The 35 revised full papers presented were carefully reviewed and selected from 84 submissions. The papers have been organized in the following topical sections: networks and systems architecture; teaching and education; authentication and profiling; data management and information advertising; applied modeling and simulation; network security; dependable systems and applications; multimedia security; cryptography; big data and text mining; and social impact of EIS and visualization.
Networks and System Architecture
Reducing Keepalive Traffic in Software-Defined Mobile Networks with Port Control Protocol
User applications, such as VoIP, have problems traversing NAT gateways or firewalls. To mitigate these problems, applications send keepalive messages through the gateways. The interval of sending keepalives is often unnecessarily short, which increases the network load, especially in mobile networks. Port Control Protocol (PCP) allows the applications to traverse the gateways and to optimize the interval. This paper describes the deployment of PCP in software-defined networks (SDN) and proposes a method to measure keepalive traffic reduction in mobile networks using PCP. The proposed solution extends the battery life of mobile devices and reduces the traffic overhead in WCDMA networks.
Kamil Burda, Martin Nagy, Ivan Kotuliak
A SDN Based Method of TCP Connection Handover
Today, TCP is the go-to protocol for building resilient communication channels on the Internet. Without much overstatement, it can be said that it runs the majority of communication on the planet. Its success only highlights the fact that it also has some drawbacks, of which one of the oldest ones is the inability to hand over running connections between participating hosts. This paper introduces a method that relies on the advantages of Software Defined Networks to overcome this limitation.
Andrej Binder, Tomas Boros, Ivan Kotuliak
IP Data Delivery in HBB-Next Network Architecture
Digital television enables IP data delivery using various protocols. Hybrid television (HbbTV) enhances digital television with application delivery. HBB-Next is an architecture which enhances HbbTV with additional features; however, it does not specify IP data delivery even though it has access to both the broadcast and the broadband channel. This paper proposes an architecture and protocols for IP data delivery in HBB-Next. To achieve this goal we designed a new node (Application Data Handler - ADH) in the HBB-Next architecture and new communication protocols (Application Data Handler Control Protocol - ADHCP, and Hybrid Encapsulation Protocol - HEP) for data transmission. We created a Stochastic Petri Net (SPN) model of the designed protocols and implemented them in the ns2 network simulator to verify our solution. Results of the SPN model simulation and the ns2 network simulation are discussed, and the HEP protocol is compared to existing encapsulation protocols used in DVB systems.
Roman Bronis, Ivan Kotuliak, Tomas Kovacik, Peter Truchly, Andrej Binder
Syn Flood Attack Detection and Type Distinguishing Mechanism Based on Counting Bloom Filter
The presented work focuses on the proposal, implementation and evaluation of a new method for the detection and type identification of SYN flood (DoS) attacks. The method allows distinguishing the type of detected SYN flood attack – random, subnet or fixed. Based on a Counting Bloom filter, the attack detection and identification algorithm is proposed, implemented and evaluated in the KaTaLyzer network traffic monitoring tool. Proof of correctness of the approach for TCP SYN flood attack detection and type identification is provided – both in practical and theoretical manners. In practice, a new module for KaTaLyzer is implemented, TCP attacks are detected and identified, and the network administrator is notified about them in real-time.
Tomáš Halagan, Tomáš Kováčik, Peter Trúchly, Andrej Binder
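A minimal sketch of the Counting Bloom filter idea for tracking half-open TCP handshakes might look as follows. This is only an illustration, not the authors' KaTaLyzer module; the hash construction, counter sizes, the detection threshold and the subnet heuristic are all assumptions made for the example.

import hashlib

class CountingBloomFilter:
    def __init__(self, size=8192, hashes=4):
        self.size = size
        self.hashes = hashes
        self.counters = [0] * size

    def _indexes(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for idx in self._indexes(key):
            self.counters[idx] += 1

    def remove(self, key):
        for idx in self._indexes(key):
            if self.counters[idx] > 0:
                self.counters[idx] -= 1

    def count(self, key):
        # Estimated count; collisions can only make it too large, never too small.
        return min(self.counters[idx] for idx in self._indexes(key))

class SynFloodDetector:
    """Tracks half-open TCP handshakes and guesses the SYN flood type."""
    def __init__(self, threshold=1000):
        self.cbf = CountingBloomFilter()
        self.threshold = threshold
        self.half_open = 0
        self.sources = set()          # kept only to guess the attack type

    def on_syn(self, src, dst):
        self.cbf.add((src, dst))
        self.half_open += 1
        self.sources.add(src)

    def on_ack(self, src, dst):
        if self.cbf.count((src, dst)) > 0:
            self.cbf.remove((src, dst))
            self.half_open -= 1

    def status(self):
        if self.half_open < self.threshold:
            return "no attack"
        if len(self.sources) == 1:
            return "fixed SYN flood"
        # Crude /24 heuristic for IPv4 dotted-quad source addresses.
        prefixes = {s.rsplit(".", 1)[0] for s in self.sources}
        return "subnet SYN flood" if len(prefixes) == 1 else "random SYN flood"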
Integrating Mobile OpenFlow Based Network Architecture with Legacy Infrastructure
UnifyCore is a concept of SDN centric, OpenFlow based and access agnostic network architecture, which changes the way networks are being built today. It is designed in a way, so present access technologies can be easily integrated in it. It provides set of architectural components and rules, which help to easily decouple components of the access technology and put their functionalities into UnifyCore building blocks. This simplifies the overall network architecture and allows the use of common transport core for all access technologies. First proof of concept built on UnifyCore is the GPRS network, which is a challenge for SDN, since it does not have split user and control plane transport. In this paper we introduce and explain features that allow fully SDN UnifyCore to be integrated with existing legacy network infrastructure (switches/routers).
Martin Nagy, Ivan Kotuliak, Jan Skalny, Martin Kalcok, Tibor Hirjak
Making Computer Science Education Relevant
In addition to algorithm- or concept-oriented training of problem solving by computer programming, introductory computer science classes may contain programming projects on themes that are relevant for young people. The motivation for theme-driven programmers is not to practice coding but to create a digital artefact related to a domain they are interested in and they want to learn about. Necessary programming concepts are learned on the way ("diving into programming"). This contribution presents examples of theme-driven projects, which are related to text mining and web cam image processing. The development and learning process is supported by metaphorical explanations of programming concepts and algorithmic ideas, experiments with simple programming statements, stories and code fragments.
Michael Weigend
Analyzing Brain Waves for Activity Recognition of Learners
Understanding the states of learners at a lecture is expected to be useful for improving the quality of the lecture. This paper tries to recognize the activities of learners from their brain wave data in order to estimate those states. In analyses of brain wave data, generally, certain particular frequency bands are considered as the features. The authors considered other bands of higher and lower frequencies to compensate for the coarseness of simple electroencephalographs. They conducted an experiment of recognizing two activities of five subjects with the brain wave data captured by a simple electroencephalograph. They applied a support vector machine to 8-dimensional vectors which correspond to eight bands of the brain wave data. The results show that considering multiple bands yielded high accuracy compared with the usual features.
Hiromichi Abe, Kazuya Kinoshita, Kensuke Baba, Shigeru Takano, Kazuaki Murakami
Authentication and Profiling
A Multi-factor Biometric Based Remote Authentication Using Fuzzy Commitment and Non-invertible Transformation
Biometric-based authentication systems offer more undeniable benefits to users than traditional authentication systems. However, biometric features seem to be very vulnerable, being easily affected by different attacks, especially those happening over the transmission network. In this work, we propose a novel multi-factor biometric-based remote authentication protocol. This protocol is not only resistant against attacks on the network but also protects biometric templates stored in the server's database, thanks to the combination of fuzzy commitment and non-invertible transformation technologies. The notable feature of this work compared to previous biometric-based remote authentication protocols is its ability to defend against insider attacks: the server's administrator is incapable of utilizing information saved in the database by the client to impersonate him/her and deceive the system. In addition, the performance of the system is maintained with the support of random orthonormal projection, which reduces computational complexity while preserving accuracy.
Thi Ai Thao Nguyen, Dinh Thanh Nguyen, Tran Khanh Dang
Profiler for Smartphone Users Interests Using Modified Hierarchical Agglomerative Clustering Algorithm Based on Browsing History
Nowadays, the smartphone has become a lifestyle for many people in the world and an indispensable part of their lives. Smartphones provide many applications to support human activity, one of which is the web browser. People spend much time browsing to find useful information they are interested in, yet it is not easy to find the particular pieces of information they want. In this paper, a user profiler is presented as a way of providing smartphone users with their interests based on their browsing history. We propose a Modified Hierarchical Agglomerative Clustering algorithm that uses filtering of category groups in a server-based application to automatically build interest profiles of smartphone users from their browsing history. Experimental results show that the proposed algorithm can measure the degree of a smartphone user's interest based on the browsing history of web browser applications, provides smartphone user interest profiles, and also outperforms the C4.5 algorithm in execution time at all levels of memory utilization.
Priagung Khusumanegara, Rischan Mafrur, Deokjai Choi
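A bare-bones agglomerative clustering routine (single linkage, pure Python) conveys the family of algorithms being modified; the authors' Modified Hierarchical Agglomerative Clustering with category-group filtering is not reproduced here, and the distance function and cluster count are placeholders.

def single_linkage_clustering(points, distance, num_clusters):
    # Start with every point in its own cluster and repeatedly merge the
    # two clusters whose closest members are nearest to each other.
    clusters = [[p] for p in points]
    while len(clusters) > num_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(distance(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

# Example with one-dimensional "interest scores":
data = [0.1, 0.15, 0.8, 0.85, 0.9]
print(single_linkage_clustering(data, lambda a, b: abs(a - b), 2))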
Data Management and Information Advertising
Strength of Relationship Between Multi-labeled Data and Labels
Collected data must be organized properly to be utilized well, and classification of data is one of the efficient methods. Individual data or an object is classified into categories and annotated with labels of those categories. Giving ranks to labels of objects in order to express how close objects are to the categories enables us to use objects more precisely. When target objects are identified by a set of labels $\mathcal{L}$, there are various strengths of relationship between objects and $\mathcal{L}$. This paper proposes criteria for objects with two rank labels, primary and secondary labels, such as a label relates to $\mathcal{L}$, a primary label relates to $\mathcal{L}$, every primary label relates to $\mathcal{L}$, and every label relates to $\mathcal{L}$. The strongest criterion which an object satisfies is the level of the object, expressing the degree of the strength of relationship between the object and $\mathcal{L}$. The results for two rank objects are extended to k rank objects.
Masahiro Kuzunishi, Tetsuya Furukawa
Online Ad-fraud in Search Engine Advertising Campaigns
Prevention, Detection and Damage Limitation
Search Engine Advertising has grown strongly in recent years and amounted to about USD 60 billion in 2014. Based on real-world data from online campaigns of 28 companies, we analyse the incident of a hacked campaign account. We describe the damage that occurred, i.e. (1) follow-up consequences of unauthorized access to the account of the advertiser, and (2) limited availability of short-term online campaigns. This contribution aims at raising awareness of the threat of hacking incidents during online marketing campaigns, and provides suggestions as well as recommendations for damage prevention, damage detection and damage limitation.
Andreas Mladenow, Niina Maarit Novak, Christine Strauss
Applied Modeling and Simulation
Markov Chain Solution to the 3-Tower Problem
The 3-tower problem is a 3-player gambler's ruin model where two players are involved in a zero information, even-money bet during each round. The probability that each player accumulates all the money has a trivial solution. However, the probability of each player getting ruined first is an open problem. In this paper, the 3-tower problem recursions are modeled as a directed multigraph with loops, which is used to construct a Markov chain. The solution leads to exact values, and results show that, unlike in other models where the first ruin probabilities depend only on the proportion of chips of each player, the probabilities obtained by this model depend on the number of chips each player holds.
Guido David
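A small sketch of the Markov-chain computation is given below. It assumes that in each round one of the three pairs of players is chosen uniformly at random for the even-money bet; the paper's exact formulation via a directed multigraph with loops is not reproduced, so this only illustrates the absorbing-chain setup.

import itertools
import numpy as np

def ruin_first_probability(chips, player=0):
    n = sum(chips)
    # Transient states: every player still holds at least one chip.
    states = [s for s in itertools.product(range(1, n), repeat=3) if sum(s) == n]
    index = {s: k for k, s in enumerate(states)}
    pairs = [(0, 1), (0, 2), (1, 2)]

    A = np.eye(len(states))        # builds (I - Q) for the transient part
    b = np.zeros(len(states))
    for s, k in index.items():
        for i, j in pairs:
            for winner, loser in ((i, j), (j, i)):
                w = (1 / 3) * (1 / 2)
                nxt = list(s)
                nxt[winner] += 1
                nxt[loser] -= 1
                nxt = tuple(nxt)
                if min(nxt) == 0:              # someone is ruined this round
                    if nxt[player] == 0:
                        b[k] += w              # our player is ruined first
                else:
                    A[k, index[nxt]] -= w
    x = np.linalg.solve(A, b)
    return x[index[tuple(chips)]]

# Sanity check: with equal stacks the answer is 1/3 by symmetry.
print(ruin_first_probability((3, 3, 3)))       # ~0.3333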
Fitness Function in ABC Algorithm for Uncapacitated Facility Location Problem
We study the fitness function of the artificial bee colony algorithm applying to solve the uncapacitated facility location problem. Our hypothesis is that the fitness function in the artificial bee colony algorithm is not necessarily suitable for specific optimization problems. We carry out experiments to examine several fitness functions for the artificial bee colony algorithm to solve the uncapacitated facility location problem and show the conventional fitness function is not necessarily suitable.
Yusuke Watanabe, Mayumi Takaya, Akihiro Yamamura
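For concreteness, the conventional fitness mapping used in the ABC algorithm for a minimisation objective is usually the one below; the rank-based alternative that follows is only an illustrative assumption, not necessarily one of the fitness functions examined in the paper.

def abc_fitness(objective_value):
    # Conventional ABC fitness for a minimisation problem.
    if objective_value >= 0:
        return 1.0 / (1.0 + objective_value)
    return 1.0 + abs(objective_value)

def rank_fitness(objective_values):
    # A possible alternative: fitness depends only on the ranking of solutions.
    order = sorted(range(len(objective_values)), key=lambda i: objective_values[i])
    n = len(objective_values)
    fitness = [0.0] * n
    for rank, i in enumerate(order):
        fitness[i] = (n - rank) / n
    return fitness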
Comparative Study of Monte-Carlo Tree Search and Alpha-Beta Pruning in Amazons
The game of Amazons is a combinatorial game sharing some properties of both chess and Go. We study programs which play Amazons with strategies based on Monte-Carlo Tree Search and a classical search algorithm, Alpha-Beta pruning. We execute several experiments to investigate the effect of increasing the number of searches in a Monte-Carlo Tree Search program. We show that increasing the number of searches is not an efficient method to strengthen the program for Amazons. On the other hand, augmenting the algorithms with a choice of several evaluation functions has a great influence on playing strength.
Hikari Kato, Szilárd Zsolt Fazekas, Mayumi Takaya, Akihiro Yamamura
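For reference, the classical Alpha-Beta pruning that the paper compares against MCTS has the generic shape below; the game interface (legal_moves, apply, evaluate, is_over) and the evaluation function are placeholders, not the authors' Amazons-specific implementation.

import math

def alphabeta(state, game, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    if depth == 0 or game.is_over(state):
        return game.evaluate(state)
    if maximizing:
        value = -math.inf
        for move in game.legal_moves(state):
            value = max(value, alphabeta(game.apply(state, move), game,
                                         depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cut-off
                break
        return value
    else:
        value = math.inf
        for move in game.legal_moves(state):
            value = min(value, alphabeta(game.apply(state, move), game,
                                         depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:          # alpha cut-off
                break
        return value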
Can We Securely Use CBC Mode in TLS1.0?
Currently, TLS1.0 is one of the most widely deployed protocol versions for SSL/TLS. In TLS1.0, there are only two choices for the bulk encryption, i.e., RC4 or block ciphers in the CBC mode, which have been criticized to be insecure.
In this paper, we explore the current status of the CBC mode in TLS1.0 and prove theoretically that the current version of the (patched) CBC mode in TLS1.0 satisfies indistinguishability, which implies that it is secure against BEAST type of attacks.
Takashi Kurokawa, Ryo Nojima, Shiho Moriai
Key Agreement with Modified Batch Rekeying for Distributed Group in Cognitive Radio Networks
Cognitive radio networks have received more research interest in recent years as they can provide a favourable solution to spectrum scarcity problem prevailing in the wireless systems. This paper presents a new key agreement protocol called 'TKTOFT' with modified batch rekeying algorithm for distributed group oriented applications in cognitive radio networks by integrating a ternary key tree and an one way function. It is inferred from the experimental results that TKTOFT outperforms the existing one way function based protocol both in terms of computation and communication overhead. Hence, TKTOFT is suited for establishing secure and quick group communication in dynamic groups in cognitive radio networks.
N. Renugadevi, C. Mala
Secure Mobility Management for MIPv6 with Identity-Based Cryptography
Mobile IPv6 is an improvement of the original IPv6 protocol, and provides mobility support for IPv6 nodes. However, the security of mobility management is one of the most important issues for MIPv6. Traditional MIPv6 uses IPSec to protect the mobility management, while the dependence on the mechanism of the pre-shared key or certificate limits its applicability. This paper proposes an improved scheme for the original method based on IBC, to protect the mobility management signaling for MIPv6.
Nan Guo, Fangting Peng, Tianhan Gao
Investigation of DDoS Attacks by Hybrid Simulation
At present, protection against distributed "denial of service" (DDoS) attacks is one of the important tasks. The paper considers a simulation environment for DDoS attacks of different types using a combination of a simulation approach and real software-hardware testbeds. In the paper we briefly describe the system architecture and a series of experiments for DDoS attack simulation on the transport and application levels. The experimental results are provided, and an analysis of these results is performed.
Yana Bekeneva, Konstantin Borisenko, Andrey Shorov, Igor Kotenko
Dependable Systems and Applications
Secure Database Using Order-Preserving Encryption Scheme Based on Arithmetic Coding and Noise Function
Order-preserving symmetric encryption (OPE) is a deterministic encryption scheme whose encryption function preserves the numerical order of the plaintexts. This allows comparison operations to be applied directly on encrypted data when, for example, decryption takes too much time or the cryptographic key is unknown. That is why it is successfully used in cloud databases, where effective range queries can be performed on the ciphertexts. This paper presents an order-preserving encryption scheme based on arithmetic coding. In the first part we review the principles of arithmetic coding, which form the basis of the algorithm, as well as the changes that were made. Then we describe the noise function approach, which makes the algorithm cryptographically stronger, and show modifications that can be made to obtain an order-preserving hash function. Finally, we analyze the resulting vulnerability to chosen-plaintext attacks.
Sergey Krendelev, Mikhail Yakovlev, Maria Usoltseva
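As a toy illustration of the order-preserving idea only: a keyed, strictly increasing noise function immediately yields an order-preserving mapping. This is not the paper's arithmetic-coding construction and is not secure as written; the key handling and increment range are arbitrary assumptions.

import hashlib

def _keyed_increment(key: bytes, i: int) -> int:
    digest = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return 1 + int.from_bytes(digest[:4], "big") % 1000    # always positive

def ope_encrypt(key: bytes, plaintext: int) -> int:
    # Ciphertext = sum of positive pseudo-random increments up to the plaintext,
    # hence strictly increasing in the plaintext (order is preserved).
    return sum(_keyed_increment(key, i) for i in range(plaintext + 1))

key = b"demo-key"
values = [3, 17, 17, 42]
ciphers = [ope_encrypt(key, v) for v in values]
assert all((a < b) == (ca < cb)
           for (a, ca) in zip(values, ciphers)
           for (b, cb) in zip(values, ciphers))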
An Approach for Evaluating Softgoals Using Weight
The resolution of conflicts among non-functional requirements is a difficult problem during the analysis of non-functional requirements. To mitigate the problem, the weighted softgoal is proposed, based on the Softgoal Interdependency Graph (SIG), which helps engineers resolve conflicts among non-functional requirements. Evaluation results of applying weighted SIGs to develop non-functional requirements and choose among alternative design decisions are also shown.
Shuichiro Yamamoto
An Efficient Unsavory Data Detection Method for Internet Big Data
With the explosion of information technologies, the volume and diversity of data in cyberspace are growing rapidly; meanwhile, unsavory data are harming the security of the Internet. How to detect unsavory data in Internet big data based on their inner semantic information is therefore of growing importance. In this paper, we propose the i-Tree method, an intelligent semantics-based unsavory data detection method for Internet big data. Firstly, the Internet big data are mapped into a high-dimensional feature space, represented as high-dimensional points in that space. Secondly, to solve the "curse of dimensionality" problem of the high-dimensional feature space, the principal component analysis (PCA) method is used to reduce its dimensionality. Thirdly, in the newly generated feature space, we cluster the data objects, transform the data clusters into regular unit hyper-cubes, and create a one-dimensional index for the data objects based on the idea of a multi-dimensional index. Finally, we realize semantics-based detection for a given unsavory data object using a similarity search algorithm, and the experimental results show that our method achieves much better efficiency.
Peige Ren, Xiaofeng Wang, Hao Sun, Fen Xu, Baokang Zhao, Chunqing Wu
Identification of Corrupted Cloud Storage in Batch Auditing for Multi-Cloud Environments
In cloud storage services, users can store their data in remote cloud servers. Due to new and challenging security threats toward outsourced data, remote data integrity checking has become a crucial technology in cloud storage services. Recently, many integrity checking protocols have been proposed. Several protocols support batch auditing, but they do not support efficient identification when batch auditing fails. In this paper, we propose a new identification method for the corrupted cloud in multi-cloud environments without requiring any repeated auditing processes.
Sooyeon Shin, Seungyeon Kim, Taekyoung Kwon
Face Recognition Performance Comparison Between Real Faces and Pose Variant Face Images from Image Display Device
Face recognition technology, unlike other biometric methods, is conveniently accessible with the use of only a camera. Consequently, it has created enormous interest in a variety of applications, including face identification, access control, security, surveillance, smart cards, law enforcement, and human-computer interaction. However, face recognition systems are still not robust enough, especially in unconstrained environments, and recognition accuracy is still not acceptable. In this paper, to measure the performance reliability of face recognition systems, we expand the performance comparison test between real faces and face images from the recognition perspective and verify the adequacy of performance test methods using an image display device.
Mi-Young Cho, Young-Sook Jeong
A Lossless Data Hiding Strategy Based on Two-Dimensional Side-Match Predictions
The histogram-based reversible data hiding (RDH) scheme generates a one-dimensional (1D) histogram distribution. In this article, based on a two-dimensional (2D) histogram distribution, a framework for reversible data hiding is proposed using two side-match predictors, called Forward side-match (FSM) and Backward side-match (BSM). First, for each predicted pixel value, we use the two side-match predictors to obtain two prediction error distributions. A slope meter is computed by differencing the two distributions. Then, a two-dimensional histogram is generated by composing the BSM distribution and the slope meter. Based on the 2D histogram, more specified spaces can be found to enhance the performance. The experimental results demonstrate that our proposed scheme achieves better performance in terms of both marked image quality and embedding capacity than conventional works.
Chi-Yao Weng, Sheng-Jie Wang, Shiuh-Jeng Wang
Secure Image Deduplication in Cloud Storage
With the great development of cloud computing in recent years, the explosive increase of image data, the mass of information storage, and the application demands for high availability of data, network backup is facing an unprecedented challenge. Image deduplication technology is proposed to reduce the storage space and costs. To protect the confidentiality of the image, the notion of convergent encryption has been proposed. In the deduplication system, the image is encrypted/decrypted with a convergent encryption key which is derived by computing the hash value of the image content. This means that identical image copies will generate the same ciphertext, which is used to check for duplicate image copies. Security analysis shows that this system is secure.
Han Gang, Hongyang Yan, Lingling Xu
Hybrid Encryption Scheme Using Terminal Fingerprint and Its Application to Attribute-Based Encryption Without Key Misuse
Internet services make sharing digital contents faster and easier but at the same time raise the issue of illegal copying and distribution of those contents. Many public key encryption schemes address this issue. However, the secret key is not completely protected, i.e., these kinds of encryption methods do not prevent illegal copying and distribution of secret keys. In this paper, we propose a hybrid encryption scheme that employs terminal fingerprints. This scheme is a template to avoid such misuse of secret keys, and can be applied to, for example, attribute-based encryption schemes. The terminal fingerprint information is used to create a second encryption key and secret key. Since the terminal fingerprint is assumed to be unchangeable and unknowable, we ensure that our secret keys are valid in the terminal where they were created.
Chunlu Chen, Hiroaki Anada, Junpei Kawamoto, Kouichi Sakurai
Differential Fault Attack on LEA
LEA is a symmetric block cipher proposed in 2014. It uses ARX design and its main advantage is the possibility of a fast software implementation on common computing platforms.
In this paper we propose a Differential Fault Analysis attack on LEA. By injecting random bit faults in the last round and in the penultimate round, we were able to recover the secret key by using 258 faulty encryptions on average. If the position of faults is known, then only 62 faulty encryptions are needed in order to recover the key, which surpasses the results achieved so far.
Dirmanto Jap, Jakub Breier
A Secure Multicast Key Agreement Scheme
Wu et al. proposed a key agreement scheme to securely deliver a group key to group members. Their scheme utilized a polynomial to deliver the group key. When membership is dynamically changed, the system refreshes the group key by sending a new polynomial. We observe that, in this situation, Wu et al.'s scheme is vulnerable to a differential attack, because these polynomials have a linear relationship. We exploit a hash function and a random number to solve this problem. The secure multicast key agreement (SMKA) scheme proposed in this paper prevents not only the differential attack but also the subgroup key attack. The modification reinforces the robustness of the scheme.
Hsing-Chung Chen, Chung-Wei Chen
Efficient Almost Strongly Universal Hash Function for Quantum Key Distribution
Extended Abstract
Quantum Key Distribution (QKD) technology, based on principles of quantum mechanics, can generate unconditional security keys for communication parties. Information-theoretically secure (ITS) authentication, the compulsory procedure of QKD systems, avoids the man-in-the-middle attack during the security key generation. The construction of hash functions is the paramount concern within the ITS authentication. In this extended abstract, we proposed a novel Efficient NTT-based ε-Almost Strongly Universal (ε-ASU) Hash Function. The security of our NTT-based ε-ASU hash function meets a bound of the form (… + 1)/2^(… − 2). With ultra-low computational amounts of construction and hashing procedures, our proposed NTT-based ε-ASU hash function is suitable for QKD systems.
Bo Liu, Baokang Zhao, Chunqing Wu, Wanrong Yu, Ilsun You
Big Data and Text Mining
DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics
We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange operator allocation, query operator scheduling, operator push-down, and materialization strategies, etc. The experiments over the TPC-H dataset verified the effectiveness of our system.
Yanchen Liu, Fang Cao, Masood Mortazavi, Mengmeng Chen, Ning Yan, Chi Ku, Aniket Adnaik, Stephen Morgan, Guangyu Shi, Yuhu Wang, Fan Fang
Incorporating Big Data Analytics into Enterprise Information Systems
Big data analytics has received widespread attention for enterprise development and enterprise information systems (EIS). However, how can it enhance the development of EIS? How can it be incorporated into EIS? Both are still big issues. This paper addresses these two issues by proposing an ontology of a big data analytics. This paper also examines incorporation of big data analytics into EIS through proposing BABES: a model for incorporating big data analytics services into EIS. The proposed approach in this paper might facilitate the research and development of EIS, business analytics, big data analytics, and business intelligence as well as intelligent agents.
Zhaohao Sun, Francisca Pambel, Fangwei Wang
Analytical Platform Based on Jbowl Library Providing Text-Mining Services in Distributed Environment
The paper presents Jbowl, a Java software library for data and text analysis, and various research activities performed and implemented on top of the library. The paper describes the various analytical services for text and data mining implemented in Jbowl, as well as numerous extensions aimed at addressing the evolving trends in data and text analysis and its usage in various tasks in areas such as big data analysis, distributed computing and parallelization. We also present the complex analytical platform built on top of the library, integrating the distributed computing analytical methods with a graphical user interface, visualization methods and resource management capabilities.
Martin Sarnovský, Peter Butka, Peter Bednár, František Babič, Ján Paralič
Social Impact of EIS and Visualization
Corporate Social Responsibility in Social Media Environment
The paper describes corporate social responsibility (CSR) communication on Facebook and Twitter – how companies use social media to accomplish their CSR communication goals. On a sample of ten global companies with the best CSR reputation, the research tracks their social media activity, as well as posts, likes and comments of their customers. Observed companies on average dedicate about 1/10 of their social media communication bandwidth to CSR topics, mainly on Facebook. CSR topics do not seem to be of much interest to readers (CSR posts are mostly ignored), but at least user sentiment related to CSR messages has proven to be mostly positive. CSR on social networks is well established; leading CSR companies use this communication channel extensively.
Antonín Pavlíček, Petr Doucek
Usage of Finance Information Systems in Developing Countries: Identifying Factors During Implementation that Impact Use
An explorative study of factors affecting the implementation and use of finance information systems (FISs) in developing countries is presented. The result is based on a field study investigating the implementation of a finance information system at Makerere University, Uganda. Current literature suggests that implementing information systems (ISs) successfully is challenging, especially in developing countries. The research question addressed is: What factors during implementation impact use of FISs in developing countries? Empirical data was gathered through face-to-face interviews with stakeholders involved in the implementation project. Analysis was done as a within-case analysis and supports the finding of nine factors that are of specific importance in developing countries. The findings can help decision-makers in guiding implementation processes of large enterprise systems, especially in the accounting and finance management disciplines, in developing countries.
David Kiwana, Björn Johansson, Sven Carlsson
Software Model Creation with Multidimensional UML
The aim of the paper is to present the advantages of the transformation of Use Cases to object layers and their visualization in 3D space to reduce complexity. Our work moves selected UML diagrams from two-dimensional to multidimensional space for better visualization and readability of the structure or behaviour.
Our general scope is to exploit layers for particular components or modules, time and author versions, particular object types (GUI, Business services, DB services, abstract domain classes, role and scenario classes), patterns and anti-patterns in the structure, aspects in the particular layers for solving crosscutting concerns and anti-patterns, alternative and parallel scenarios, pessimistic, optimistic and daily use scenarios.
We successfully apply a force-directed algorithm to create a more convenient automated class diagram layout. In addition to this algorithm, we introduce semantics by adding a weight factor to the force calculation process.
Lukáš Gregorovič, Ivan Polasek, Branislav Sobota
Backmatter
| CommonCrawl |
Results for 'references'
Bibliography: Preferences in Decision Theory in Philosophy of Action
News from the National Reference Center for Bioethics Literature (NRCBL) and the National Information Resource on Ethics and Human Genetics (NIREHG). National Reference Center for Bioethics Literature - 2007 - Kennedy Institute of Ethics Journal 17 (4):399-403.
Genetics in Philosophy of Biology
Why Definite Descriptions Really Are Referring Terms. John-Michael Kuczynski (University of California, Santa Barbara) - 2005 - Grazer Philosophische Studien 68 (1):45-79.
Attributive and Referential Uses of Descriptions in Philosophy of Language
Russell's Theory of Descriptions in Philosophy of Language
Reference. Barbara Abbott - 2010 - Oxford University Press.
This book introduces the most important problems of reference and considers the solutions that have been proposed to explain them. Reference is at the centre of debate among linguists and philosophers and, as Barbara Abbott shows, this has been the case for centuries. She begins by examining the basic issue of how far reference is a two place (words-world) or a three place (speakers-words-world) relation. She then discusses the main aspects of the field and the issues associated with them, including (...) those concerning proper names; direct reference and individual concepts; the difference between referential and quantificational descriptions; pronouns and indexicality; concepts like definiteness and strength; and noun phrases in discourse. Professor Abbott writes with exceptional verve and wit. She presupposes no technical knowledge or background and presents issues and analyses from first principles, illustrating them at every stage with well-chosen examples. Her book is addressed in the first place to advanced undergraduate and graduate students in linguistics and philosophy of language, but it will also appeal to students and practitioners in computational linguistics, cognitive psychology, and anthropology. All will welcome the clarity this guide brings to a subject that continues to challenge the leading thinkers of the age. (shrink)
Reference in Philosophy of Language
The Reference Book. John Hawthorne & David Manley - 2012 - Oxford University Press.
This book critically examines some widespread views about the semantic phenomenon of reference and the cognitive phenomenon of singular thought. It begins with a defense of the view that neither is tied to a special relation of causal or epistemic acquaintance. It then challenges the alleged semantic rift between definite and indefinite descriptions on the one hand, and names and demonstratives on the other—a division that has been motivated in part by appeals to considerations of acquaintance. Drawing on recent work (...) in semantics, the book explores a more unified account of all four types of expression, according to which none of them paradigmatically fits the profile of a referential term. On the proposed framework, all four involve existential quantification but admit of uses that exhibit many of the traits associated with reference—a phenomenon that is due to the presence of what we call a 'singular restriction' on the existentially quantified domain. The book concludes by drawing out some implications of the proposed semantic picture for the traditional categories of reference and singular thought. (shrink)
Causal Theories of Reference in Philosophy of Language
Complex Demonstratives in Philosophy of Language
De Re Belief in Philosophy of Mind
Determiners, Misc in Philosophy of Language
Frege's Puzzle in Philosophy of Language
Indefinite Descriptions in Philosophy of Language
Kripke's Puzzle About Belief in Philosophy of Language
Millian Theories of Names in Philosophy of Language
Quantifier Restriction in Philosophy of Language
Russellian Theories of Attitude Ascriptions in Philosophy of Language
Singular Propositions in Philosophy of Language
The Synthetic A Priori in Epistemology
Reference and Essence. Nathan U. Salmon - 1981 - Princeton, New Jersey: Princeton University Press.
Salmon's book is considered by some to be a classic in the philosophy of language movement known variously as the New Theory of Reference or the Direct Reference Theory, as well as in the metaphysics of essentialism that is related to this philosophy of language.
Essence and Essentialism, Misc in Metaphysics
Truth, Misc in Philosophy of Language
Vagueness and Indeterminacy in Philosophy of Language
Direct Reference: From Language to Thought. Francois Recanati - 1993 - Blackwell.
This volume puts forward a distinct new theory of direct reference, blending insights from both the Fregean and the Russellian traditions, and fitting the general theory of language understanding used by those working on the pragmatics of natural language.
Russellian and Direct Reference Theories of Meaning in Philosophy of Language
Reference Without Referents. Mark Sainsbury - 2005 - Oxford, England and New York, NY, USA: Clarendon Press.
Reference is a central topic in philosophy of language, and has been the main focus of discussion about how language relates to the world. R. M. Sainsbury sets out a new approach to the concept, which promises to bring to an end some long-standing debates in semantic theory. Lucid and accessible, and written with a minimum of technicality, Sainsbury's book also includes a useful historical survey. It will be of interest to those working in logic, mind, and metaphysics as well (...) as essential reading for philosophers of language. (shrink)
Descriptive Theories of Names in Philosophy of Language
Empty Names in Philosophy of Language
Theories of Reference, Misc in Philosophy of Language
Reference and Consciousness. J. Campbell - 2002 - Oxford University Press.
John Campbell investigates how consciousness of the world explains our ability to think about the world; how our ability to think about objects we can see depends on our capacity for conscious visual attention to those things. He illuminates classical problems about thought, reference, and experience by looking at the underlying psychological mechanisms on which conscious attention depends.
Attention and Consciousness in Philosophy of Mind
Attention, Misc in Philosophy of Mind
Consciousness and Content, Misc in Philosophy of Mind
Joint Attention in Philosophy of Mind
Salience in Philosophy of Mind
The Nature of Attention in Philosophy of Mind
The Objects of Perception in Philosophy of Mind
Fixing Reference. Imogen Dickie - 2015 - Oxford University Press.
Imogen Dickie develops an account of aboutness-fixing for thoughts about ordinary objects, and of reference-fixing for the singular terms we use to express them. Extant discussions of this topic tread a weary path through descriptivist proposals, causalist alternatives, and attempts to combine the most attractive elements of each. The account developed here is a new beginning. It starts with two basic principles, the first of which connects aboutness and truth, and the second of which connects truth and justification. These principles (...) combine to yield a third principle connecting aboutness and justification. Dickie uses the principle to explain how the relations to objects that enable us to think about them--perceptual attention; understanding of proper names; grasp of descriptions--do their aboutness-fixing and thought-enabling work. The book includes discussions of the nature of singular thought and the relation between thought and consciousness. (shrink)
Consciousness and Intentionality in Philosophy of Mind
Intention and Knowledge in Philosophy of Action
Names, Misc in Philosophy of Language
Perceptual Justification in Philosophy of Mind
Reference and Reflexivity. John Perry - 2001 - Center for the Study of Language and Inf.
Following his recently expanded _The Problem of the Essential Indexical and Other Essays,_ John Perry develops a reflexive-referential' account of indexicals, demonstratives and proper names. On these issues the philosophy of language in the twentieth century was shaped by two competing traditions, descriptivist and referentialist. Oddly, the classic referentialist texts of the 1970s by Kripke, Donnellan, Kaplan and others were seemingly refuted almost a century earlier by co-reference and no-reference problems raised by Russell and Frege. Perry's theory, borrowing ideas from (...) both traditions as well as from Burks and Reichenbach, diagnoses the problems as stemming from a fixation on a certain kind of content, coined referential or fully incremental. Referentialist tradition is portrayed as holding that indexicals contribute content that involves individuals without identifying conditions on them; descriptivist tradition is portrayed as holding that referential content does not explain all of the identifying conditions conveyed by names and indexicals. Perry reveals a coherent and structured family of contents — from reflexive contents that place conditions on their actual utterance to fully incremental contents that place conditions only on the objects of reference — reconciling the legitimate insights of both traditions. (shrink)
Aspects of Reference in Philosophy of Language
Semantics in Philosophy of Language
Theories of Reference in Philosophy of Language
Reference Without Referents. R. M. Sainsbury (ed.) - 2005 - Oxford, England and New York, NY, USA: Oxford University Press UK.
Reference is a central topic in philosophy of language, and has been the main focus of discussion about how language relates to the world. R. M. Sainsbury sets out a new approach to the concept, which promises to bring to an end some long-standing debates in semantic theory.There is a single category of referring expressions, all of which deserve essentially the same kind of semantic treatment. Included in this category are both singular and plural referring expressions, complex and non-complex referring (...) expressions, and empty and non-empty referring expressions. Referring expressions are to be described semantically by a reference condition, rather than by being associated with a referent. In arguing for these theses, Sainsbury's book promises to end the fruitless oscillation between Millian and descriptivist views. Millian views insist that every name has a referent, and find it hard to give a good account of names which appear not to have referents, or at least are not known to do so, like ones introduced through error, ones where it is disputed whether they have a bearer and ones used in fiction. Descriptivist theories require that each name be associated with some body of information. These theories fly in the face of the fact names are useful precisely because there is often no overlap of information among speakers and hearers. The alternative position for which the book argues is firmly non-descriptivist, though it also does not require a referent. A much broader view can be taken of which expressions are referring expressions: not just names and pronouns used demonstratively, but also some complex expressions and some anaphoric uses of pronouns.Sainsbury's approach brings reference into line with truth: no one would think that a semantic theory should associate a sentence with a truth value, but it is commonly held that a semantic theory should associate a sentence with a truth condition, a condition which an arbitrary state of the world would have to satisfy in order to make the sentence true. The right analogy is that a semantic theory should associate a referring expression with a reference condition, a condition which an arbitrary object would have to satisfy in order to be the expression's referent.Lucid and accessible, and written with a minimum of technicality, Sainsbury's book also includes a useful historical survey. It will be of interest to those working in logic, mind, and metaphysics as well as essential reading for philosophers of language. (shrink)
The reference class problem is your problem too.Alan Hájek - 2007 - Synthese 156 (3):563--585.details
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference (...) class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all "no-theory" theories of probability - accounts that leave quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a "metaphysical" and an "epistemological" reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains. (shrink)
Direct Inference Principles in Philosophy of Probability
Interpretation of Probability in Philosophy of Probability
Reference and Existence: The John Locke Lectures.Saul A. Kripke - 2013 - New York: Oxford University Press.details
Reference and Existence, Saul Kripke's John Locke Lectures for 1973, can be read as a sequel to his classic Naming and Necessity. It confronts important issues left open in that work -- among them, the semantics of proper names and natural kind terms as they occur in fiction and in myth; negative existential statements; the ontology of fiction and myth. In treating these questions, he makes a number of methodological observations that go beyond the framework of his earlier book -- (...) including the striking claim that fiction cannot provide a test for theories of reference and naming. In addition, these lectures provide a glimpse into the transition to the pragmatics of singular reference that dominated his influential paper, " Speaker's Reference and Semantic Reference " -- a paper that helped reorient linguistic and philosophical semantics. Some of the themes have been worked out in later writings by other philosophers -- many influenced by typescripts of the lectures in circulation -- but none have approached the careful, systematic treatment provided here. The virtuosity of Naming and Necessity -- the colloquial ease of the tone, the dazzling, on-the-spot formulations, the logical structure of the overall view gradually emerging over the course of the lectures -- is on display here as well. (shrink)
Fictional Characters in Aesthetics
Philosophy of Language, General Works in Philosophy of Language
Arbitrary reference.Wylie Breckenridge & Ofra Magidor - 2012 - Philosophical Studies 158 (3):377-400.details
Two fundamental rules of reasoning are Universal Generalisation and Existential Instantiation. Applications of these rules involve stipulations such as 'Let n be an arbitrary number' or 'Let John be an arbitrary Frenchman'. Yet the semantics underlying such stipulations are far from clear. What, for example, does 'n' refer to following the stipulation that n be an arbitrary number? In this paper, we argue that 'n' refers to a number—an ordinary, particular number such as 58 or 2,345,043. Which one? We do (...) not and cannot know, because the reference of 'n' is fixed arbitrarily. Underlying this proposal is a more general thesis: Arbitrary Reference : It is possible to fix the reference of an expression arbitrarily. When we do so, the expression receives its ordinary kind of semantic-value, though we do not and cannot know which value in particular it receives. Our aim in this paper is defend AR. In particular, we argue that AR can be used to provide an account of instantial reasoning, and we suggest that AR can also figure in offering new solutions to a range of difficult philosophical puzzles. (shrink)
Aspects of Reference, Misc in Philosophy of Language
Quantifiers in Philosophy of Language
Reference and definite descriptions.Keith S. Donnellan - 1966 - Philosophical Review 75 (3):281-304.details
Definite descriptions, I shall argue, have two possible functions. 1] They are used to refer to what a speaker wishes to talk about, but they are also used quite differently. Moreover, a definite description occurring in one and the same sentence may, on different occasions of its use, function in either way. The failure to deal with this duality of function obscures the genuine referring use of definite descriptions. The best known theories of definite descriptions, those of Russell and Strawson, (...) I shall suggest, are both guilty of this. Before discussing this distinction in use, I will mention some features of these theories to which it is especially relevant. (shrink)
Realism, reference & perspective.Carl Hoefer & Genoveva Martí - 2020 - European Journal for Philosophy of Science 10 (3):1-22.details
This paper continues the defense of a version of scientific realism, Tautological Scientific Realism, that rests on the claim that, excluding some areas of fundamental physics about which doubts are entirely justified, many areas of contemporary science cannot be coherently imagined to be false other than via postulation of radically skeptical scenarios, which are not relevant to the realism debate in philosophy of science. In this paper we discuss, specifically, the threats of meaning change and reference failure associated with the (...) Kuhnian tradition, which depend on a descriptivist approach to meaning, and we argue that descriptivism is not the right account of the meaning and reference of theoretical terms. We suggest that an account along the lines of the causal-historical theory of reference provides a more faithful picture of how terms for unobservable theoretical entities and properties come to refer; we argue that this picture works particularly well for TSR. In the last section we discuss how our account raises concerns specifically for perspectival forms of scientific realism. (shrink)
General Philosophy of Science, Misc in General Philosophy of Science
Reference in arithmetic.Lavinia Picollo - 2018 - Review of Symbolic Logic 11 (3):573-603.details
Self-reference has played a prominent role in the development of metamathematics in the past century, starting with Gödel's first incompleteness theorem. Given the nature of this and other results in the area, the informal understanding of self-reference in arithmetic has sufficed so far. Recently, however, it has been argued that for other related issues in metamathematics and philosophical logic a precise notion of self-reference and, more generally, reference is actually required. These notions have been so far elusive and are surrounded (...) by an aura of scepticism that has kept most philosophers away. In this paper I suggest we shouldn't give up all hope. First, I introduce the reader to these issues. Second, I discuss the conditions a good notion of reference in arithmetic must satisfy. Accordingly, I then introduce adequate notions of reference for the language of first-order arithmetic, which I show to be fruitful for addressing the aforementioned issues in metamathematics. (shrink)
Paradoxes in Logic and Philosophy of Logic
Reference and Generality.P. T. Geach - 1962 - Ithaca: Cornell University Press.details
Truth in Philosophy of Language
III-Reference by Abstraction.ØYstein Linnebo - 2012 - Proceedings of the Aristotelian Society 112 (1pt1):45-71.details
Frege suggests that criteria of identity should play a central role in the explanation of reference, especially to abstract objects. This paper develops a precise model of how we can come to refer to a particular kind of abstract object, namely, abstract letter types. It is argued that the resulting abstract referents are 'metaphysically lightweight'.
Epistemology of Mathematics in Philosophy of Mathematics
Mathematical Neo-Fregeanism in Philosophy of Mathematics
Person Reference in Interaction: Linguistic, Cultural and Social Perspectives.N. J. Enfield & Tanya Stivers (eds.) - 2007 - Cambridge University Press.details
How do we refer to people in everyday conversation? No matter the language or culture, we must choose from a range of options: full name ('Robert Smith'), reduced name ('Bob'), description ('tall guy'), kin term ('my son') etc. Our choices reflect how we know that person in context, and allow us to take a particular perspective on them. This book brings together a team of leading linguists, sociologists and anthropologists to show that there is more to person reference than meets (...) the eye. Drawing on video-recorded, everyday interactions in nine languages, it examines the fascinating ways in which we exploit person reference for social and cultural purposes, and reveals the underlying principles of person reference across cultures from the Americas to Asia to the South Pacific. Combining rich ethnographic detail with cross-linguistic generalizations, it will be welcomed by researchers and graduate students interested in the relationship between language and culture. (shrink)
Philosophy of Linguistics, Miscellaneous in Philosophy of Language
Plural Reference and Reference to a Plurality. Linguistic Facts and Semantic Analyses.Friederike Moltmann - 2016 - In Massimiliano Carrara, Alexandra Arapinis & Friederike Moltmann (eds.), Unity and Plurality. Logic, Philosophy, and Semantics. Oxford: Oxford University Press. pp. 93-120.details
This paper defends 'plural reference', the view that definite plurals refer to several individuals at once, and it explores how the view can account for a range of phenomena that have been discussed in the linguistic literature.
Descriptions in Philosophy of Language
Mereology in Metaphysics
Philosophy of Language, Miscellaneous in Philosophy of Language
Plural Quantification in Philosophy of Language
Speaker Meaning and Linguistic Meaning in Philosophy of Language
Dangerous Reference Graphs and Semantic Paradoxes.Landon Rabern, Brian Rabern & Matthew Macauley - 2013 - Journal of Philosophical Logic 42 (5):727-765.details
The semantic paradoxes are often associated with self-reference or referential circularity. Yablo (Analysis 53(4):251–252, 1993), however, has shown that there are infinitary versions of the paradoxes that do not involve this form of circularity. It remains an open question what relations of reference between collections of sentences afford the structure necessary for paradoxicality. In this essay, we lay the groundwork for a general investigation into the nature of reference structures that support the semantic paradoxes and the semantic hypodoxes. We develop (...) a functionally complete infinitary propositional language endowed with a denotation assignment and extract the reference structural information in terms of graph-theoretic properties. We introduce the new concepts of dangerous and precarious reference graphs, which allows us to rigorously define the task: classify the dangerous and precarious directed graphs purely in terms of their graph-theoretic properties. Ungroundedness will be shown to fully characterize the precarious reference graphs and fully characterize the dangerous finite graphs. We prove that an undirected graph has a dangerous orientation if and only if it contains a cycle, providing some support for the traditional idea that cyclic structure is required for paradoxicality. This leaves the task of classifying danger for infinite acyclic reference graphs. We provide some compactness results, which give further necessary conditions on danger in infinite graphs, which in conjunction with a notion of self-containment allows us to prove that dangerous acyclic graphs must have infinitely many vertices with infinite out-degree. But a full characterization of danger remains an open question. In the appendices we relate our results to the results given in Cook (J Symb Log 69(3):767–774, 2004) and Yablo (2006) with respect to more restricted sentences systems, which we call $\mathcal{F}$ -systems. (shrink)
Liar Paradox in Logic and Philosophy of Logic
Logic and Information in Philosophy of Computing and Information
Singular Reference in Fictional Discourse?Manuel García-Carpintero - 2019 - Disputatio 11 (54):143-177.details
Singular terms used in fictions for fictional characters raise well-known philosophical issues, explored in depth in the literature. But philosophers typically assume that names already in use to refer to "moderatesized specimens of dry goods" cause no special problem when occurring in fictions, behaving there as they ordinarily do in straightforward assertions. In this paper I continue a debate with Stacie Friend, arguing against this for the exceptionalist view that names of real entities in fictional discourse don't work there as (...) they do in simple-sentence assertions, but rather as fictional names do. (shrink)
Nonreferring Expressions in Philosophy of Language
Ontology of Literature in Aesthetics
Ramsey, Reference and Reductionism.Huw Price - manuscriptdetails
This is an unpublished piece from July 1998. It discusses the use of semantic notions such as reference in the Canberra Plan, the question whether this use creates a problematic circularity if the Canberra Plan is applied to the semantic notions themselves, and the relation of this question to Putnam's model-theoretic argument. I used some of the ideas in later papers such as (Price 2004, 2009) and (Menzies & Price, 2009), but the bulk of discussion of the relation of my (...) concern to Putnam's argument (and to responses to Putnam by others) never made it into print. (shrink)
On referring.Peter F. Strawson - 1950 - Mind 59 (235):320-344.details
Incompleteness of Descriptions in Philosophy of Language
P. F. Strawson in 20th Century Philosophy
Temporal reference in Paraguayan Guaraní, a tenseless language.Judith Tonhauser - 2011 - Linguistics and Philosophy 34 (3):257-303.details
This paper contributes data from Paraguayan Guaraní (Tupí-Guaraní) to the discussion of how temporal reference is determined in tenseless languages. The empirical focus of this study is on finite clauses headed by verbs inflected only for person/number information, which are compatible only with non-future temporal reference in most matrix clause contexts. The paper first explores the possibility of accounting for the temporal reference of such clauses with a phonologically empty non-future tense morpheme, along the lines of Matthewson's (Linguist Philos 29:673–713, (...) 2006 ) analysis of a similar phenomenon in St'át'imcets (Salish). This analysis is then contrasted with one according to which temporal reference is not constrained by tense in Paraguayan Guaraní, but only by context and temporal adverbials. A comparison of the two analyses, both of which are couched in a dynamic semantic framework, suggests empirical and theoretical advantages of the tenseless analysis over the tensed one. The paper concludes with a discussion of cross-linguistic variation of temporal reference in tensed and tenseless languages. (shrink)
Temporal Expressions in Philosophy of Language
Reference and Response.Louis deRosset - 2011 - Australasian Journal of Philosophy 89 (1):19-36.details
A standard view of reference holds that a speaker's use of a name refers to a certain thing in virtue of the speaker's associating a condition with that use that singles the referent out. This view has been criticized by Saul Kripke as empirically inadequate. Recently, however, it has been argued that a version of the standard view, a /response-based theory of reference/, survives the charge of empirical inadequacy by allowing that associated conditions may be largely or even entirely implicit. (...) This paper argues that response-based theories of reference are prey to a variant of the empirical inadequacy objection, because they are ill-suited to accommodate the successful use of proper names by pre-school children. Further, I argue that there is reason to believe that normal adults are, by and large, no different from children with respect to how the referents of their names are determined. I conclude that speakers typically refer /positionally/: the referent of a use of a proper name is typically determined by aspects of the speaker's position, rather than by associated conditions present, however implicitly, in her psychology. (shrink)
Descriptive Theories of Reference in Philosophy of Language
Reference, Understanding, and Communication.Ray Buchanan - 2013 - Australasian Journal of Philosophy 92 (1):55-70.details
Brian Loar [1976] observed that, even in the simplest of cases, such as an utterance of (1): 'He is a stockbroker', a speaker's audience might misunderstand her utterance even if they correctly identify the referent of the relevant singular term, and understand what is being predicated of it. Numerous theorists, including Bezuidenhout [1997], Heck [1995], Paul [1999], and Récanati [1993, 1995], have used Loar's observation to argue against direct reference accounts of assertoric content and communication, maintaining that, even in these (...) simple cases, the propositional contribution of a referring expression must be more than just its referent. -/- I argue here that, while Loar's observation is correct, the conclusion he and others have sought to draw from it simply does not follow. Rather, his observation helps to remind us of an important Gricean insight into the nature of communicative acts—including acts of speaker-reference—namely, that there is more to understanding a communicative act than merely entertaining what a speaker is intending to communicate thereby. Once we remember this insight, we see that the phenomenon to which Loar is calling our attention should actually be expected given the direct reference theorist's assumptions, together with independently plausible Gricean principles concerning how we make our referential intentions manifest in communication. More generally, the Gricean strategy for explaining the challenge posed by Loar cases suggests a novel way to account for certain crucial anti-direct reference intuitions—one requiring no modification of the original theory (e.g., no invocation of 'descriptive enrichments' as in Soames [2002]), thereby allowing for a direct reference account of what is asserted in utterances of 'simple sentences' such as (1). (shrink)
Assertion, Misc in Philosophy of Language
Linguistic Communication in Philosophy of Language
Singular Reference Without Singular Thought.Filipe Martone - 2016 - Manuscrito 39 (1):33-60.details
In this paper I challenge the widespread assumption that the conditions for singular reference are more or less the same as the conditions for singular thought. I claim that we refer singularly to things without thinking singularly about them more often than it is usually believed. I first argue that we should take the idea that singular thought is non-descriptive thought very seriously. If we do that, it seems that we cannot be so liberal about what counts as acquaintance; only (...) perception will do. I also briefly discuss and reject semantic instrumentalism. Finally, I argue that while singular reference is cheap, singular thought comes only at a price. (shrink)
Knowledge by Acquaintance in Epistemology
Self-reference and self-awareness.Sydney S. Shoemaker - 1968 - Journal of Philosophy 65 (October):555-67.details
Immunity to Error through Misidentification in Philosophy of Mind
Visual Reference and Iconic Content.Santiago Echeverri - 2017 - Philosophy of Science 84 (4):761-781.details
Evidence from cognitive science supports the claim that humans and other animals see the world as divided into objects. Although this claim is widely accepted, it remains unclear whether the mechanisms of visual reference have representational content or are directly instantiated in the functional architecture. I put forward a version of the former approach that construes object files as icons for objects. This view is consistent with the evidence that motivates the architectural account, can respond to the key arguments against (...) representational accounts, and has explanatory advantages. I draw general lessons for the philosophy of perception and the naturalization of intentionality. (shrink)
Perception and Reference in Philosophy of Mind
Science of Perception in Philosophy of Mind
The Experience of Objects in Philosophy of Mind
Predicate reference.Fraser MacBride - 2006 - In Barry C. Smith (ed.), The Oxford Handbook of Philosophy of Language. Oxford University Press. pp. 422--475.details
Whether a predicate is a referential expression depends upon what reference is conceived to be. Even if it is granted that reference is a relation between words and worldly items, the referents of expressions being the items to which they are so related, this still leaves considerable scope for disagreement about whether predicates refer. One of Frege's great contributions to the philosophy of language was to introduce an especially liberal conception of reference relative to which it is unproblematic to suppose (...) that predicates are referring expressions. According to this liberal conception, each significant expression in a language has its own distinctive semantic role or power, a power to effect the truth-value of the sentences in which it occurs. (shrink)
Frege: Functions and Concepts, Misc in 20th Century Philosophy
Logic and Philosophy of Logic, Misc in Logic and Philosophy of Logic
Predicates, Misc in Philosophy of Language
Property Nominalism in Metaphysics
W. V. O. Quine in 20th Century Philosophy
Reference and contingency.Gareth Evans - 1979 - The Monist 62 (2):161-189.details
'A logical theory may be tested by its capacity for dealing with puzzles, and it is a wholesome plan, in thinking about logic, to stock the mind with as many puzzles as possible, since these serve much the same purpose as is served by experiments in physical science.' This paper is an attempt to follow Russell's advice by using a puzzle about the contingent a priori to test and explore certain theories of reference and modality. No one could claim that (...) the puzzle is of any great philosophical importance by itself, but to understand it, one has to get clear about certain aspects of the theory of reference; and to solve it, one has to think a little more deeply than one is perhaps accustomed about what it means to say that a statement is contingent or necessary. (shrink)
Apriority and Necessity in Epistemology
Self-reference, Phenomenology, and Philosophy of Science.Steven James Bartlett - 1980 - Methodology and Science: Interdisciplinary Journal for the Empirical Study of the Foundations of Science and Their Methodology 13 (3):143-167.details
The paper begins by acknowledging that weakened systematic precision in phenomenology has made its application in philosophy of science obscure and ineffective. The defining aspirations of early transcendental phenomenology are, however, believed to be important ones. A path is therefore explored that attempts to show how certain recent developments in the logic of self-reference fulfill in a clear and more rigorous fashion in the context of philosophy of science certain of the early hopes of phenomenologists. The resulting dual approach is (...) applied to several problems in the philosophy of science: on the one hand, to proposed rejections of scientific objectivity, to the doctrine of radical meaning variance, and to the Quine-Duhem thesis, and or. the other, to an analysis of hidden variable theory in quantum mechanics. (shrink)
Copenhagen Interpretation in Philosophy of Physical Science
Philosophy of Physics, General Works in Philosophy of Physical Science
Sense, Reference, and Philosophy.Jerrold J. Katz - 2003 - Oxford University Press.details
Sense, Reference, and Philosophy develops the far-reaching consequences for philosophy of adopting non-Fregean intensionalism, showing that long-standing problems in the philosophy of language, and indeed other areas, that appeared intractable can now be solved. Katz proceeds to examine some of those problems in this new light, including the problem of names, natural kind terms, the Liar Paradox, the distinction between logical and extra-logical vocabulary, and the Raven paradox. In each case, a non-Fregean intentionalism provides a philosophically more satisfying solution.
Names in Philosophy of Language
Reference to numbers in natural language.Friederike Moltmann - 2013 - Philosophical Studies 162 (3):499 - 536.details
A common view is that natural language treats numbers as abstract objects, with expressions like the number of planets, eight, as well as the number eight acting as referential terms referring to numbers. In this paper I will argue that this view about reference to numbers in natural language is fundamentally mistaken. A more thorough look at natural language reveals a very different view of the ontological status of natural numbers. On this view, numbers are not primarily treated abstract objects, (...) but rather 'aspects' of pluralities of ordinary objects, namely number tropes, a view that in fact appears to have been the Aristotelian view of numbers. Natural language moreover provides support for another view of the ontological status of numbers, on which natural numbers do not act as entities, but rather have the status of plural properties, the meaning of numerals when acting like adjectives. This view matches contemporary approaches in the philosophy of mathematics of what Dummett called the Adjectival Strategy, the view on which number terms in arithmetical sentences are not terms referring to numbers, but rather make contributions to generalizations about ordinary (and possible) objects. It is only with complex expressions somewhat at the periphery of language such as the number eight that reference to pure numbers is permitted. (shrink)
Mathematical Nominalism in Philosophy of Mathematics
Number Theory in Philosophy of Mathematics
Numbers in Philosophy of Mathematics
Numerical Expressions in Philosophy of Language
Reference, Misc in Philosophy of Language
Tropes in Metaphysics
Reference and Existence: the John Locke Lectures for 1973.Saul A. Kripke - 2013 - New York: Oxford University Press.details
Reference fiction, and omission.Samuel Murray - 2018 - Synthese 195 (1):235-257.details
In this paper, I argue that sentences that contain 'omission' tokens that appear to function as singular terms are meaningful while maintaining the view that omissions are nothing at all or mere absences. I take omissions to be fictional entities and claim that the way in which sentences about fictional characters are true parallels the way in which sentences about omissions are true. I develop a pragmatic account of fictional reference and argue that my fictionalist account of omissions implies a (...) plausible account of the metaphysics of omissions. (shrink)
Action Sentences in Philosophy of Action
Ontology, Misc in Metaphysics
Referent tracking and its applications.Werner Ceusters & Barry Smith - 2007 - In Proceedings of the Workshop WWW2007 Workshop i3: Identity, Identifiers, Identification (Workshop on Entity-Centric Approaches to Information and Knowledge Management on the Web), Banff, Canada. CEUR.details
Referent tracking (RT) is a new paradigm, based on unique identification, for representing and keeping track of particulars. It was first introduced to support the entry and retrieval of data in electronic health records (EHRs). Its purpose is to avoid the ambiguity that arises when statements in an EHR refer to disorders or other entities on the side of the patient exclusively by means of compound descriptions utilizing general terms such as 'pimple on nose' or 'small left breast tumor'. In (...) this paper, we describe the theoretical foundations of this paradigm and show how it is being applied to the solution of analogous problems of ambiguous identification in the fields of digital rights management, corporate memories and decision algorithms. (shrink)
Identity in Metaphysics
Universals in Metaphysics
Reference and Generality: An Examination of Some Medieval and Modern Theories.Peter Thomas Geach - 1962 - Ithaca, NY and London: Cornell University Press.details
Medieval Logic in Medieval and Renaissance Philosophy
Direct reference, psychological explanation, and Frege cases.Susan Schneider - 2005 - Mind and Language 20 (4):423-447.details
In this essay I defend a theory of psychological explanation that is based on the joint commitment to direct reference and computationalism. I offer a new solution to the problem of Frege Cases. Frege Cases involve agents who are unaware that certain expressions corefer (e.g. that 'Cicero' and 'Tully' corefer), where such knowledge is relevant to the success of their behavior, leading to cases in which the agents fail to behave as the intentional laws predict. It is generally agreed that (...) Frege Cases are a major problem, if not the major problem, that this sort of theory faces. In this essay, I hope to show that the theory can surmount the Frege Cases. (shrink)
Externalism and Computation in Philosophy of Mind
Psychological Explanation in Philosophy of Cognitive Science
Direct Reference and Singular Propositions.Matthew Davidson - 2000 - American Philosophical Quarterly 37 (3):285-300.details
Most direct reference theorists about indexicals and proper names have adopted the thesis that singular propositions about physical objects are composed of physical objects and properties.1 There have been a number of recent proponents of such a view, including Scott Soames, Nathan Salmon, John Perry, Howard Wettstein, and David Kaplan.2 Since Kaplan is the individual who is best known for holding such a view, let's call a proposition that is composed of objects and properties a K-proposition. In this paper, I (...) will attempt to show that a direct reference view about the content of proper names and indexicals leads very naturally to the position that all singular propositions about physical objects are K-propositions.3 Then, I will attempt to show that this view of propositions is false. I will spend the bulk of the paper on this latter task. My goal in the paper, then, is to show that adopting the direct reference thesis comes at a cost problems the view has with problems such as opacity and the significance of some identity statements; it comes at even more of a cost). (shrink)
Reference and Description: The Case Against Two-Dimensionalism.Scott Soames - 2005 - Princeton: Princeton University Press.details
In this book, Scott Soames defends the revolution in philosophy led by Saul Kripke, Hilary Putnam, and David Kaplan against attack from those wishing to revive ..
Conceivability, Imagination, and Possibility in Metaphysics
Reference to Abstract Objects in Discourse.Nicholas Asher - 1993 - Dordrecht, Boston, and London: Kluwer.details
This volume is about abstract objects and the ways we refer to them in natural language. Asher develops a semantical and metaphysical analysis of these entities in two stages. The first reflects the rich ontology of abstract objects necessitated by the forms of language in which we think and speak. A second level of analysis maps the ontology of natural language metaphysics onto a sparser domain--a more systematic realm of abstract objects that are fully analyzed. This second level reflects the (...) commitments of real metaphysics. The models for these commitments assign truth conditions to natural language discourse. Annotation copyright by Book News, Inc., Portland, OR. (shrink)
Discourse Coherence in Philosophy of Language
Pronouns and Anaphora in Philosophy of Language
Direct Reference and Definite Descriptions.Genoveva Marti - 2008 - Dialectica 62 (1):43-57.details
According to Donnellan the characteristic mark of a referential use of a definite description is the fact that it can be used to pick out an individual that does not satisfy the attributes in the description. Friends and foes of the referential/attributive distinction have equally dismissed that point as obviously wrong or as a sign that Donnellan's distinction lacks semantic import. I will argue that, on a strict semantic conception of what it is for an expression to be a genuine (...) referential device, Donnellan is right: if a use of a definite description is referential, it must be possible for it to refer to an object independently of any attributes associated with the description, including those that constitute its conventional meaning. (shrink)
Russellian and Direct Reference Theories, Misc in Philosophy of Language
Reference in the Land of the Rising Sun: A Cross-cultural Study on the Reference of Proper Names.Justin Sytsma, Jonathan Livengood, Ryoji Sato & Mineki Oguchi - 2015 - Review of Philosophy and Psychology 6 (2):213-230.details
A standard methodology in philosophy of language is to use intuitions as evidence. Machery, Mallon, Nichols, and Stich challenged this methodology with respect to theories of reference by presenting empirical evidence that intuitions about one prominent example from the literature on the reference of proper names vary between Westerners and East Asians. In response, Sytsma and Livengood conducted experiments to show that the questions Machery and colleagues asked participants in their study were ambiguous, and that this ambiguity affected the responses (...) given by Westerners. Sytsma and Livengood took their results to cast doubt on the claim that the current evidence indicates that there is cross-cultural variation in intuitions about the Gödel case. In this paper we report on a new cross-cultural study showing that variation in intuitions remains even after controlling for the ambiguity noted by Sytsma and Livengood. (shrink)
Experimental Philosophy: Reference in Metaphilosophy
Multiple reference and vague objects.Giovanni Merlo - 2017 - Synthese 194 (7):2645-2666.details
Kilimanjaro is an example of what some philosophers would call a 'vague object': it is only roughly 5895 m tall, its weight is not precise and its boundaries are fuzzy because some particles are neither determinately part of it nor determinately not part of it. It has been suggested that this vagueness arises as a result of semantic indecision: it is because we didn't make up our mind what the expression "Kilimanjaro" applies to that we can truthfully say such things (...) as "It is indeterminate whether this particle is part of Kilimanjaro". After reviewing some of the limitations of this approach, I will propose an alternative account, based on a new semantic relation—multiple reference—capable of holding in a one-many pattern between a term and several objects in the domain. I will explain how multiple reference works, what differentiates it from plural reference and how it might be used to accommodate at least some aspects of our ordinary discourse about vague objects. (shrink)
Demonstrative reference and definite descriptions.Howard K. Wettstein - 1981 - Philosophical Studies 40 (2):241--257.details
A distinction is developed between two uses of definite descriptions, the "attributive" and the "referential." the distinction exists even in the same sentence. several criteria are given for making the distinction. it is suggested that both russell's and strawson's theories fail to deal with this distinction, although some of the things russell says about genuine proper names can be said about the referential use of definite descriptions. it is argued that the presupposition or implication that something fits the description, present (...) in both uses, has a different genesis depending upon whether the description is used referentially or attributively. this distinction in use seems not to depend upon any syntactic or semantic ambiguity. it is also suggested that there is a distinction between what is here called "referring" and what russell defines as denoting. definite descriptions may denote something, according to his definition, whether used attributively or referentially. (shrink)
Demonstratives, Misc in Philosophy of Language
Reference and Modality.Leonard Linsky - 1971 - London: Oxford University Press.details
1. Reference and modality by W. V. O. Quine.--2. Modality and description by A. F. Smullyan.--3. Extensionality by R. B. Marcus.--4. Quantification into causal contexts by D. Føllesdal.--5. Semantical considerations on modal logic by S. A. Kripke.--6. Essentialism and quantified modal logic by T. Parsons.--7. Reference, essentialism, and modality by L. Linsky.--8. Quantifiers and propositional attitudes by W. V. O. Quine.--9. Quantifying in by D. Kaplan.--10. Semantics for propositional attitudes by J. Hintikka.--11. On Carnap's analysis of statements of assertion and (...) belief by A. Church.--Bibliography (p. [173]-175). (shrink) | CommonCrawl |
\begin{document}
\title{General higher-order majorization-minimization algorithms for (non)convex optimization}
\begin{abstract}
Majorization-minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function so that along the iterations the objective function decreases. Such a simple principle makes it possible to solve a large class of optimization problems, even nonconvex and nonsmooth ones. We propose a general higher-order majorization-minimization algorithmic framework for minimizing an objective function that admits an approximation (surrogate) such that the corresponding error function has a higher-order Lipschitz continuous derivative. We present convergence guarantees for our new method for general optimization problems with (non)convex and/or (non)smooth objective functions. For convex (possibly nonsmooth) problems we provide global sublinear convergence rates, while for problems with a uniformly convex objective function we obtain locally faster superlinear convergence rates. We also prove global stationary point guarantees for general nonconvex (possibly nonsmooth) problems, and under the Kurdyka-Lojasiewicz property of the objective function we derive local convergence rates ranging from sublinear to superlinear for our majorization-minimization algorithm. Moreover, for unconstrained nonconvex problems we derive convergence rates in terms of first- and second-order optimality conditions. \end{abstract}
\begin{keywords}
(Non)convex optimization, majorization-minimization, higher-order methods, convergence rates. \LaTeX \end{keywords}
\begin{AMS} 90C25, 90C06, 65K05. \end{AMS}
\section{Introduction} The principle of successively minimizing upper bounds of the objective function is often called \textit{majorization-minimization} \cite{LanHun:00, Mai:15, RazHon:13}. Most techniques, e.g., gradient descent, utilize convex quadratic majorizers based on first-order oracles in order to guarantee that the majorizer is easy to minimize. Despite the empirical success of first-order majorization-minimization algorithms to solve difficult optimization problems, the convergence speed of such methods is known to slow down close to saddle points or in ill-conditioned landscapes \cite{Nes:04,Pol:87}. Higher-order methods are known to be less affected by these problems \cite{BriGar:17, Nes:19, cartis2017,cartis:2020}.
In this work, we focus our attention on higher-order majorization-minimization methods to find (local) minima of (potentially nonsmooth and nonconvex) objective functions. At each iteration, these algorithms construct and optimize a local (Taylor) model of the objective using higher-order derivatives with an additional step length penalty term that depends on how well the model approximates the real objective.
\noindent \textbf{Contributions}. This paper provides an algorithmic framework based on the notion of higher-order upper bound approximations of the (non)convex and/or (non)smooth objective function, leading to a \textit{general higher-order majorization-minimization} algorithm, which we call GHOM. Then, we present convergence guarantees for the GHOM algorithm for general optimization problems when the upper bounds approximate the objective function up to an error that is $p \geq 1$ times differentiable and has a Lipschitz continuous $p$ derivative; we call such upper bounds \textit{higher-order surrogate} functions. More precisely, on general (possibly nonsmooth) convex problems our general higher-order majorization-minimization (GHOM) algorithm achieves a global sublinear convergence rate for the function values. When we apply our method to optimization problems with a uniformly convex objective function, we obtain faster local superlinear convergence rates in multiple criteria: function values, distance of the iterates to the optimal point, and minimal norms of subgradients. Then, on general (possibly nonsmooth) nonconvex problems we prove for GHOM global asymptotic stationary point guarantees and convergence rates in terms of first-order optimality conditions. We also characterize the convergence rate of the GHOM algorithm locally in terms of function values under the Kurdyka-Lojasiewicz (KL) property of the nonconvex objective function. Our results show that the convergence behavior of GHOM ranges from sublinear to superlinear depending on the parameter of the underlying KL geometry. Moreover, on smooth unconstrained nonconvex problems we derive convergence rates in terms of first- and second-order optimality conditions. In Table 1 we summarize the main convergence results of this paper.
\begin{small}
\begin{table}
\begin{center}
\begin{tabular}{ |p{0.9cm}| p{3.2cm}| p{0.55cm}| p{3.8cm}| p{0.55cm}|}
\hline
\multirow{2}{*}{} & \multicolumn{2}{|c|}{ \begin{footnotesize} convex \end{footnotesize}} & \multicolumn{2}{|c|}{ \begin{footnotesize} nonconvex \end{footnotesize}}\\
\cline{2-5}
& \multicolumn{1}{|c|}{ \begin{footnotesize} convergence rates \end{footnotesize}} & \multicolumn{1}{|c|}{ \begin{footnotesize} Theorem \end{footnotesize}} & \multicolumn{1}{|c|}{ \begin{footnotesize} convergence rates \end{footnotesize}} & \multicolumn{1}{|c|}{ \begin{footnotesize} Theorem \end{footnotesize} }\\
\hline
& \begin{footnotesize}{$f$ admits higher-order surrogate, Definition \ref{def:sur}: \(\displaystyle f(x_k) - f^* \leq \mathcal{O}\left(k^{-p}\right) \) } \end{footnotesize} & \ref{th:conv-conv} & \begin{footnotesize} $f$ admits higher-order surrogate, Definition \ref{def:sur}: $ S(x_k):=~\text{dist}(0,\partial f(x_k))$ converges to $0$, \end{footnotesize} & \ref{th:nonconv-gen} \\
\begin{footnotesize} global \end{footnotesize} & & & \begin{footnotesize} $\min\limits_{i=1:k} S(x_{i}) \leq \mathcal{O}\left(k^{\frac{-p}{p+1}}\right)$ \end{footnotesize} & \ref{th:noncf1f2} \\
& & & & \\
& & & \begin{footnotesize} $f$ is $p$ smooth: sublinear in first- \& second-order optimality conditions \end{footnotesize} & \ref{th:soc} \\
\hline
& \begin{footnotesize} $f$ is uniformly convex: superlinear in $f(x_k) - f^*$ and $\|x_k-x^*\|$, \end{footnotesize} & \ref{th:conv-conv_super0}, \ref{th:conv-conv_super} & \begin{footnotesize} $f$ has KL: sublinear, linear or superlinear in function values depending on KL parameter \end{footnotesize} & \ref{th:nonconv-gen-kl} \\
\begin{footnotesize} local \end{footnotesize} & & & & \\
& \begin{footnotesize} superlinear in $S(x_k)$ \end{footnotesize} & \ref{th:conv-conv_super_grad} & & \\
\hline
\end{tabular}
\caption{Summary of the convergence rates obtained in this paper for GHOM. Global rates are derived for general objective function $f$ that admits a higher-order surrogate as in Definition \ref{def:sur} and local rates are obtained under additional uniform convexity or KL property, respectively.}
\end{center}
\end{table} \end{small}
\noindent Besides providing a \textit{unifying framework} for the design and analysis of higher-order majorization-minimization methods, in special cases, where complexity bounds are known for some particular higher-order tensor algorithms, our convergence results recover the existing bounds. More precisely, our convergence results recover for $p=1$ the convergence bounds for the (proximal) gradient \cite{Mai:15,AttBol:09,RazHon:13} and Gauss-Newton \cite{DruPaq:19, Pau:16} type algorithms from the literature. For $p>1$ and convex composite objective functions (see Example \ref{expl:5}), we recover the global convergence results from \cite{Nes:19Inxt} and the local convergence rates from \cite{DoiNes:2019}. For $p>1$ and nonconvex unconstrained optimization problems (see Example \ref{expl:4}), we recover the convergence results from \cite{BriGar:17,cartis2017} and for problems with simple constraints we obtain similar rates as in \cite{cartis:2020}. However, for other examples (such as, Examples \ref{expl:8}, \ref{expl:6}, in the convex case; Examples \ref{expl:8}, \ref{expl:5}, \ref{expl:6} and the second part of \ref{expl:7}, in the nonconvex case) our convergence results seem to be new. In fact our unifying algorithmic framework is inspired in part by the recent work on higher-order Taylor-based methods for convex optimization \cite{Nes:19, Nes:19Inxt}, but it yields a more general update rule and is also appropriate for nonconvex optimization. Note that there is a major difference between the Taylor expansion and the model approximation based on a general majorization-minimization framework. Taylor expansion is unique. Conversely, majorization-minimization approach may admit many upper bound models for a given objective function and every model leads to a different optimization method. Another major difference between our approach and the existing works such as \cite{AttBol:09, BriGar:17, DruPaq:19, Nes:19, Nes:19Inxt, cartis2017,cartis:2020} is that we assume Lipschitz continuity of the $p$ derivative of the error function, while the other papers assume directly the Lipschitz continuity of the $p$ derivative of the objective function. Hence, our convergence proofs are very different from the existing works.
\subsection{Related work} \textbf{Higher-order methods} are popular due to their performance in dealing with ill conditioning and fast rates of convergence \cite{BriGar:17, CarGou:11, cartis2017, cartis:2020, GraNes:19, NesPol:06, Nes:19, Nes:19Inxt}. For example, in \cite{Nes:19} the following unconstrained convex problem was considered: \begin{equation}\label{sota:eq1}
f^* = \min_{x \in \mathbb{R}^n} f(x), \end{equation} where $f$ is convex, $p$ times continuously differentiable and with the $p$ derivative Lipschitz continuous of constant $L_p^f$ (see Section \ref{sec:notation} for a precise definition). Then, Nesterov proposed in \cite{Nes:19} the following higher-order Taylor-based iterative method for finding an optimal solution of the convex problem \eqref{sota:eq1}: \begin{equation}
\label{sota:eq2}
x_{k+1} \!=\! \argmin_{y \in \mathbb{R}^n} g(y;x_k) \left(\!:= \!\sum_{i=0}^{p} \!\frac{1}{i !} \nabla^{i} f(x_k)[y \!-\! x_k]^{i} \!+\! \frac{M_p}{(p+1)!}\|y \!-\! x_k\|^{p+1} \! \right), \end{equation} where $M_p \geq L_p^f$. Hence, at each iteration one needs to construct and optimize a local Taylor model of the objective with an additional regularization term that depends on how well the model approximates the real objective. This is a natural extension of the cubic regularized Newton's method extensively analyzed e.g., in \cite{NesPol:06, CarGou:11, ZhoWan:18}. Under the above settings \cite{Nes:19} proves the following convergence rate in function values for \eqref{sota:eq2}: \[ f\left(x_{k}\right)-f^{*} \leq \mathcal{O}\left(\frac{1}{k^p}\right) \quad \forall k \geq 1. \] \noindent Extensions of this method to composite convex problems and to objective functions with Holder continuous higher-order derivatives have been given in \cite{Nes:19Inxt} and \cite{GraNes:19}, respectively, inexact variants were analyzed e.g., in \cite{NesDoi:20} and local superlinear convergence results were given recently in \cite{DoiNes:2019}.
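\noindent For illustration only, and as a direct instantiation of \eqref{sota:eq2} rather than a new scheme, for $p=2$ the model minimized at each iteration is the cubic regularization of Newton's method:
\begin{equation*}
g(y;x_k) = f(x_k) + \langle \nabla f(x_k), y - x_k\rangle + \frac{1}{2} \nabla^{2} f(x_k)[y-x_k]^{2} + \frac{M_2}{6}\|y-x_k\|^{3},
\end{equation*}
with $M_2 \geq L_2^f$, in which case the rate above reads $f(x_k)-f^{*} \leq \mathcal{O}\left(1/k^2\right)$.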
\noindent Further, for the unconstrained nonconvex case \cite{BriGar:17,cartis2017} provide convergence rates for a similar algorithm. The basic assumptions are that $f$ is $p$ times continuously differentiable, has the $p$ derivative smooth with constant $L_p^f$, and is bounded below. For example, \cite{cartis2017} proposes an adaptive regularization algorithm (called AR$p$), which requires building a higher-order model based on an appropriate regularization of the $p$ Taylor approximation, i.e., a model of the form $g(y;x_k)$ in \eqref{sota:eq2}.
Basically, at each iteration the AR$p$ algorithm approximately computes a (local) minimum of the model $g(y;x_k)$ that must satisfy certain second-order optimality conditions: \begin{equation*}
x_{k+1} \approx \argmin_{y \in \mathbb{R}^n} g(y; x_k). \end{equation*}
For the AR$p$ algorithm \cite{cartis2017} proves the best known convergence rate for this class of problems ($f$ nonconvex with $p$ derivative smooth) in terms of the first- and second-order optimality conditions: \begin{align*}
\min_{i=1:k} \max & \left(- \lambda_{\min}^{\frac{p+1}{p-1}}(\nabla^2 f(x_{i})),\;
\| \nabla f(x_{i})\|_{*}^{\frac{p+1}{p}} \right) \leq\mathcal{O}\left( \max \left( \left(\frac{1}{k}\right)^{\frac{p}{p+1}}, \, \left(\frac{1}{k}\right)^{\frac{p-1}{p+1}} \right)\right). \end{align*} Extensions of these results to smooth optimization problems with simple constraints were given recently in \cite{cartis:2020}. Furthermore, several studies demonstrate that special geometric properties of the objective function, such as the gradient dominance condition or the Kurdyka-Lojasiewicz (KL) property \cite{BolDan:07}, can enable faster convergence of the AR$p$ algorithm (e.g., when $p=2$), see for example \cite{NesPol:06,ZhoWan:18}. Notably, all these approaches require optimizing a $(p+1)$-th order polynomial, which is known to be a difficult problem. Recently, \cite{Nes:19, Nes:19Inxt} introduced an implementable method for convex functions (see also Lemma \ref{lema:conv} below). In particular, for $p=2$ and $p=3$ there are efficient optimization methods for minimizing the corresponding Taylor-based upper approximation \eqref{sota:eq2}, see e.g., \cite{CarGou:11, CarDuc:16, GraNes:19sub, Nes:19, Nes:19Inxt}.
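\noindent For instance, substituting $p=2$ (the cubic regularization case) in the AR$p$ bound displayed above gives
\begin{equation*}
\min_{i=1:k} \max \left(- \lambda_{\min}^{3}(\nabla^2 f(x_{i})),\; \| \nabla f(x_{i})\|_{*}^{3/2} \right) \leq \mathcal{O}\left( \max \left( k^{-\frac{2}{3}}, \, k^{-\frac{1}{3}} \right)\right),
\end{equation*}
a direct specialization of the displayed rate rather than a new result.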
\noindent \textbf{Majorization-minimization algorithms} approximate at each iteration the objective function by a majorizing function that is easy to minimize \cite{BolPau:16, LanHun:00}. Most techniques, e.g., gradient descent, utilize convex quadratic majorizers in order to guarantee that the model at each iteration is easy to minimize. The framework of first-order majorization-minimization methods, i.e., methods that use only gradient information to build the upper model, has been widely analyzed in the literature over the last two decades, see e.g., \cite{BolPau:16, HunLan:04, LanHun:00, Mai:15, RazHon:13}. \textit{However, to the best of our knowledge, there are no results on the convergence behavior of general higher-order majorization-minimization algorithms, i.e., methods that use higher-order derivatives to build the upper model.} In this paper we provide a general framework for the design of higher-order majorization-minimization algorithms and derive a complete convergence analysis for them (global and local convergence rates) covering a large class of optimization problems: convex or nonconvex, smooth or nonsmooth.
\noindent \textbf{Content}. The paper is organized as follows: Section \ref{sec:notation} presents notation and preliminaries; in Section \ref{sect:ghom} we define our higher-order majorization-minimization framework and the corresponding algorithm; in Section \ref{sec:conv} we derive global and local convergence results for our scheme in the convex settings, while the convergence analysis for nonconvex problems is given in Section~\ref{sec:non-conv}.
\subsection{Notations and preliminaries} \label{sec:notation} We denote a finite-dimensional real vector space by $\mathbb{E}$ and by $\mathbb{E}^*$ its dual space, composed of linear functions on $\mathbb{E}$. Using a self-adjoint positive-definite operator $D: \mathbb{E} \rightarrow \mathbb{E}^*$ (notation $D = D^{*} \succ 0$), we can endow these spaces with \textit{conjugate Euclidean norms}, see also \cite{Nes:19}:
$$\|x\|=\langle D x, x\rangle^{1 / 2}, \quad x \in \mathbb{E}, \quad\|h\|_{*}=\left\langle h, D^{-1} h\right\rangle^{1 / 2}, \quad h \in \mathbb{E}^{*}\text{.} $$
\noindent Let $H$ be a $p$ multilinear form on $\mathbb{E}$. The value of $H$ in $h_{1}, \ldots, h_{p} \in \mathbb{E}$ is denoted $H \left[h_{1}, \ldots, h_{p}\right] .$ The abbreviation $H[h]^{p}$ is used when $h_{1}=\cdots=h_{p}=h$ for some $h \in \mathbb{E}$. The norm of $H$ is defined in the standard way: \[
\|H\|:=\max _{\left\|h_{1}\right\|=\cdots=\left\|h_{p}\right\|=1}\left|H\left[h_{1}, \ldots, h_{p}\right]\right|\text{.} \] If the form $H$ is symmetric, it is known that the maximum in the above definition can be achieved when all the vectors are the same: \[
\|H\|=\max _{\|h\|=1}\left|H[h]^{p}\right| \text{.} \]
\begin{definition}
Let $\psi: \mathbb{E} \rightarrow \mathbb{R}$ be a $p$ times continuously differentiable function, with $p \geq 1$. Then, the $p$ derivative is Lipschitz continuous if there exists $L_p^{\psi} > 0$ such that the following relation holds:
\begin{equation} \label{eq:1}
\| \nabla^p \psi(x) - \nabla^p \psi(y) \| \leq L_p^{\psi} \| x-y \| \quad \forall x,y \in \mathbb{E}.
\end{equation} \end{definition}
\noindent We denote the Taylor approximation of $\psi$ around $x$ of order $p$ by: $$ T_p^{\psi}(y;x)= \psi(x) + \sum_{i=1}^{p} \frac{1}{i !} \nabla^{i} \psi(x)[y-x]^{i} \quad \forall y \in \mathbb{E}. $$
\noindent It is known that if \eqref{eq:1} holds, then the residual between function value and its Taylor approximation can be bounded \cite{Nes:19}: \begin{equation}\label{eq:TayAppBound}
|\psi(y) - T_p^{\psi}(y;x) | \leq \frac{L_p^{\psi}}{(p+1)!} \|y-x\|^{p+1} \quad \forall x,y \in \mathbb{E}. \end{equation}
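\noindent For example, for $p=1$ inequality \eqref{eq:TayAppBound} is simply the classical descent lemma:
\begin{equation*}
|\psi(y) - \psi(x) - \langle \nabla \psi(x), y-x\rangle | \leq \frac{L_1^{\psi}}{2} \|y-x\|^{2} \quad \forall x,y \in \mathbb{E}.
\end{equation*}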
\noindent If $p \geq 2$, we also have the following inequalities valid for all $ x,y \in \mathbb{E}$: \begin{align} \label{eq:TayAppG1}
&\| \nabla \psi(y) - \nabla T_p^{\psi}(y;x) \|_* \leq \frac{L_p^{\psi}}{p!} \|y-x \|^p, \\
\label{eq:TayAppG2}
&\|\nabla^2 \psi(y) - \nabla^2 T_p^{\psi}(y;x) \| \leq \frac{L_p^{\psi}}{(p-1)!} \| y-x\|^{p-1}. \end{align}
\noindent Next, we provide several examples of functions that have known Lipschitz continuous $p$ derivatives (also called $p$ \textit{smooth} functions), see Appendix for proofs.
\begin{example}
\label{expl:1}
For the power of Euclidean norm $\psi_{p+1}(x)= \left\|x-x_{0}\right\|^{p+1}$, with $p \geq 1$, the Lipschitz continuous condition \eqref{eq:1} holds with $L_{p}^{\psi}=(p+1)!$. \end{example}
\begin{example}
\label{expl:2}
For given $a_{i} \in \mathbb{E}^{*}, 1 \leq i \leq m,$ consider the log-sum-exp function:
\[ \psi(x)=\log \left(\sum_{i=1}^{m} e^{\left\langle a_{i}, x\right\rangle}\right), \quad x \in \mathbb{R}^n. \]
Then, for the Euclidean norm $\|x\|=\langle D x, x\rangle^{1 / 2}$ for $x \in \mathbb{R}^n$ and $D:=\sum_{i=1}^{m} a_{i} a_{i}^{*}$ (assuming $D \succ 0,$ otherwise we can reduce the dimensionality of the problem), the Lipschitz continuous condition \eqref{eq:1} holds with $L_{1}^{\psi}=1$ for $p=1$, $L_{2}^{\psi}=2$ for $p=2$ and $L_{3}^{\psi}=4$ for $p=3$. Note that for $m=2$ and $a_1 =0$, we recover the general expression of the logistic regression function, which is a loss function widely used in machine learning \cite{Mai:15}. \end{example}
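\noindent As a brief sketch of the computation behind the case $p=1$ in Example \ref{expl:2} (the full argument is in the Appendix; the notation $\pi_i$ is introduced only for this remark): writing $\pi_i(x) = e^{\langle a_i, x\rangle}/\sum_{j=1}^{m} e^{\langle a_j, x\rangle}$, one has
\begin{equation*}
\nabla \psi(x) = \sum_{i=1}^{m} \pi_i(x)\, a_i, \qquad \nabla^2 \psi(x) = \sum_{i=1}^{m} \pi_i(x)\, a_i a_i^{*} - \nabla \psi(x) \left(\nabla \psi(x)\right)^{*} \preceq \sum_{i=1}^{m} a_i a_i^{*} = D,
\end{equation*}
so that $\nabla^2 \psi(x)[h]^2 \leq \langle D h, h\rangle = \|h\|^2$ for all $h \in \mathbb{E}$, which is consistent with $L_1^{\psi}=1$ in the norm induced by $D$.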
\begin{example}
\label{expl:3}
For a $p$ times differentiable function $\psi$, its $p$ Taylor approximation $T_p^{\psi}(\cdot;x)$ at any point $x$ has the $p$ derivative Lipschitz with constant $L_p^{T_p^{\psi}} = 0$. Moreover, its $p$ Taylor approximation also has the $p-1$ derivative Lipschitz, with $L_{p-1}^{T_p^{\psi}} = \|\nabla^p \psi(x)\|$. \end{example}
\noindent Finally, in the convex case Nesterov proved in \cite{Nes:19} a remarkable result which states that an appropriately regularized Taylor approximation of a convex function is a convex multivariate polynomial. \begin{lemma} \cite{Nes:19}
\label{lema:conv}
Assume $\psi$ is a convex and $p \geq 2$ times differentiable function having the $p$ derivative Lipschitz with constant $L_p^{\psi}$. Then, the regularized Taylor approximation:
\begin{equation*}
g(y,x) = T_p^{\psi}(y;x) + \frac{M_p}{(p+1)!}\| y-x\|^{p+1}
\end{equation*}
is also a convex function in $y$ provided that $M_p \geq pL_p^{\psi}$. \end{lemma} As discussed in the introduction section, in higher-order tensor methods one usually needs to minimize at each iteration a regularized higher-order Taylor approximation. Thus, for the convex case, we have at our disposal a large number of powerful methods \cite{CarGou:11, CarDuc:16, GraNes:19sub, Nes:19Inxt} for finding the solution of the corresponding subproblem at each iteration. Further, let us introduce the class of uniformly convex functions \cite{Nes:19Inxt,NesPol:06}, which will play a key role in the local convergence analysis of our algorithm in the convex setting. We denote $\bar{\mathbb{R}} = \mathbb{R} \cup \{\infty\}$ and for a given proper function $\psi : \mathbb{E} \to \bar{\mathbb{R}}$ its domain is $\text{dom} \, \psi = \{x \in \mathbb{E}: \psi(x) < \infty\}$.
\begin{definition}
\noindent A function $\psi : \mathbb{E} \to \bar{\mathbb{R}}$ is \textit{uniformly convex} of degree $q \geq 2$ if there exists a positive constant $\sigma_{q}>0$ such that:
\begin{equation}
\label{eq:unifConv}
\psi(y) \geq \psi(x)+\left\langle \psi^{x}, y-x\right\rangle+\frac{\sigma_{q}}{q}\|x-y\|^{q} \quad \forall x, y \in \text{dom} \, \psi,
\end{equation}
where $\psi^{x}$ is an arbitrary vector from the subdifferential $\partial \psi(x)$ at $x$.
\end{definition}
\noindent Minimizing both sides of \eqref{eq:unifConv}, we also get \cite{Nes:19Inxt}: \begin{align}\label{eq:probConv}
\psi^{*} &=\min _{y \in \mathbb{E}} \psi(y) \geq \psi(x)-\frac{q-1}{q}\left(\frac{1}{\sigma_{q}}\right)^{\frac{1}{q-1}}\left\|\psi^{x}\right\|_{*}^{\frac{q}{q-1}} \quad \forall x \in \text{dom} \, \psi. \end{align}
\noindent Note that for $q = 2$ in \eqref{eq:unifConv} we recover the usual definition of a strongly convex function. Moreover, \eqref{eq:probConv} for $q = 2$ is the main property used when analyzing the convergence behavior of first-order methods \cite{NecNes:16}. One important class of uniformly convex functions is given next (see e.g., \cite{Nes:19Inxt}).
\begin{example}
\label{expl:uc}
For $q \geq 2$ let us consider the convex function $ \psi(x) = \frac{1}{q} \| x - \bar{x}\|^q, $
where $\bar{x}$ is given. Then, $\psi$ is uniformly convex of degree $q$ with $\sigma_q = 2^{2-q}$. \end{example}
\noindent For nonconvex functions we have a more general notion than uniform convexity, called the Kurdyka-Lojasiewicz (KL) property, which captures a broad spectrum of the local geometries that a nonconvex function can have \cite{BolDan:07}.
\begin{definition}
\label{def:kl}
\noindent A proper and lower semicontinuous function $\psi: \mathbb{E} \to \bar{\mathbb{R}}$ satisfies \textit{Kurdyka-Lojasiewicz (KL)} property if for every compact set $\Omega \subseteq \text{dom} \, \psi$ on which $\psi$ takes a constant value $ \psi^*$ there exist $\delta, \epsilon >0$ such that one has:
\begin{equation*}
\kappa' (\psi(x) - \psi^*) \cdot \text{dist}(0, \partial \psi(x)) \geq 1 \quad \forall x\!: \text{dist}(x, \Omega) \leq \delta, \; \psi^* < \psi(x) < \psi^* + \epsilon,
\end{equation*}
where $\partial \psi(x)$ is the (limiting) subdifferential of $\psi$ at $x$ and $\kappa: [0,\epsilon] \to \mathbb{R}$ is a concave differentiable function satisfying $\kappa(0) = 0$ and $\kappa'>0$. \end{definition} Denote for any $x \in \text{dom} \, \psi$:
$$S(x) = \text{dist}(0, \partial \psi(x)) \; \left(:= \inf_{\psi^x \in \partial \psi(x)} \| \psi^x\|_* \right).$$ If $\partial \psi(x) = \emptyset$, we set $S(x) = \infty$. Note that $\partial \psi(x)$ is a closed set when $\psi$ is a convex function. For a nonconvex function $\psi$, $\partial \psi(x)$ denotes the limiting subdifferential of $\psi$ at $x$, see e.g., \cite{Roc:70} for the definition. When $ \kappa$ takes the form $\kappa (t) = \sigma_q^{\frac{1}{q}} \frac{q}{q-1} t^{\frac{q-1}{q}}$, with $q >1$ and $\sigma_q>0$ (which is our interest here), the KL property establishes the following local geometry of the nonconvex function $\psi$ around a compact set~$\Omega$: \begin{equation}
\label{eq:kl}
\psi(x) - \psi^* \leq \sigma_q S(x)^q \quad \forall x\!: \; \text{dist}(x, \Omega) \leq \delta, \; \psi^* < \psi(x) < \psi^* + \epsilon. \end{equation}
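\noindent For instance, if $\psi$ is strongly convex with constant $\sigma_2 > 0$ (i.e., uniformly convex of degree $q=2$), then \eqref{eq:probConv} yields $\psi(x) - \psi^* \leq \frac{1}{2\sigma_2} S(x)^2$ for all $x \in \text{dom} \, \psi$, so \eqref{eq:kl} holds globally with $q = 2$ (and constant $\frac{1}{2\sigma_2}$ in place of $\sigma_q$).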
\noindent Note that the relevant aspect of the KL property is when $\Omega$ is a subset of critical points for $\psi$, i.e. $\Omega \subseteq \{x: 0 \in \partial \psi (x) \}$, since it is easy to establish the KL property when $\Omega$ is not related to critical points.
\begin{example}
\label{expl:kl}
The KL property holds for a large class of functions including semi-algebraic functions (e.g., real polynomial functions), vector or matrix (semi)norms (e.g., $\|\cdot\|_p$, with $p \geq 0$ rational number), logarithm functions, exponential functions and uniformly convex functions, see \cite{BolDan:07} for a comprehensive list. \end{example} Finally, for a function $\psi$, we denote its sublevel set at a given $x_0 $ by: $$ \mathcal{L}_\psi(x_0) = \{x \in \mathbb{E}:\; \psi(x) \leq \psi(x_0)\}. $$
\section{General higher-order majorization-minimization algorithm} \label{sect:ghom} In what follows, we study the following general optimization problem: \begin{equation}
\label{eq:optpb}
\min_{x \in \text{dom} f } f(x), \end{equation} where $f: \mathbb{E} \rightarrow \bar{\mathbb{R}}$ is a proper lower semicontinuous (non)convex function and $\text{dom} f $ is a nonempty closed convex set. We assume that a solution $x^*$ exists for problem \eqref{eq:optpb}, hence the optimal value is finite and $f$ is bounded from below by some $f^* > -\infty$. In the convex case we consider $f^* = f(x^*)$ (the optimal value). Since $f$ is extended valued, it allows the inclusion of constraints. Note that even in this general context Fermat's rule remains valid: if $x^* \in \text{dom} f $ is a (local) minimum of $f$, then $0 \in \partial f(x^*)$, where $\partial f(x^*)$ denotes the (limiting) subdifferential of $f$ at $x^*$ \cite{Roc:70}. Moreover, in our convergence analysis below we assume that the sublevel set $\mathcal{L}_f(x_0)$ is bounded. Then, there exists $R > 0$ such that: $$
\|x-x^* \| \leq R \quad \forall x\in \mathcal{L}_f(x_0). $$ These basic assumptions are standard in the literature, see e.g., \cite{CarGou:11, cartis:2020, Nes:19, BriGar:17, NesPol:06}. The main approach we adopt in solving the optimization problem \eqref{eq:optpb} is to use a class of functions that approximates well the objective function $f$ but are easier to minimize. We call this class \textit{higher-order surrogate} functions. The main properties of our surrogate function are defined next:
\begin{definition}
\label{def:sur}
Given $f: \mathbb{E} \rightarrow \bar{\mathbb{R}}$ and $p \geq 1$, we call an extended valued function $g(\cdot;x): \mathbb{E} \rightarrow \bar{\mathbb{R}}$ a \textit{$p$ higher-order surrogate} of $f$ at $x \in \text{dom} f $ if $\text{dom} \, g(\cdot;x) = \text{dom} f $ and it has the properties:
\begin{enumerate}
\item[(i)] the surrogate function is bounded from below by the original function
$$g(y;x) \geq f(y) \quad \forall y \in\text{dom} f. $$
\item[(ii)] the error function $h(y;x) = g(y;x) - f(y)$, with $\text{dom} f \subseteq \text{int}(\text{dom}\, h)$, is $p$ times differentiable and has the $p$ derivative smooth with Lipschitz constant~$L_p^h$ on $\text{dom} f $.
\item[(iii)] the derivatives of the error function $h$ satisfy
$$\nabla^i h(x;x) = 0 \quad \forall i =0:p,$$
where $i=0$ means that $h(x;x) = 0$, or equivalently $g(x;x) = f(x)$.
\end{enumerate} \end{definition}
\noindent Note that \cite{Mai:15} provided a similar definition, but only for a first-order surrogate function and used it in the context of stochastic optimization. Next, we give several nontrivial examples of higher-order surrogate functions (for details see the Appendix). \begin{example} \label{expl:8} \textit{(proximal functions).} For a general (possibly nonsmooth and nonconvex) function $f: \mathbb{E} \to \bar{\mathbb{R}}$, one can consider for any $M_p >0$ the following $p \geq 1$ higher-order surrogate function:
$$ g(y;x) = f(y) + \frac{M_p}{(p+1)!} \| y-x \|^{p+1} \quad \forall x,y \in \text{dom}f.$$
In this case, the error function $h(y;x) = g(y;x) -f(y)$ has the Lipschitz constant $L_p^h = M_p $ on $\mathbb{E}$. Indeed, the first property of the surrogate is immediate. Next we need to prove that the error function has the $p$ derivative Lipschitz, i.e., $ h(y;x) = g(y;x) -f(y) = \frac{M_p}{(p+1)!} \| y-x \|^{p+1}$, which according to Example \ref{expl:1} has the $p$ derivative Lipschitz with constant $L_{p}^{h}=M_p$. For the last property of a surrogate, we notice that $\nabla^i( \| y-x \|^{p+1})_{|_{\substack{y=x}}} = 0$ for all $i =0:p$. Thus, $\nabla^i h(x;x) = 0 \;\; \forall i =0:p$. \end{example}
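\noindent As a simple illustration, the following Python sketch performs one minimization of this proximal surrogate under the assumed choices $f(y) = |y|$ and $p=1$, for which the surrogate minimizer is the classical soft-thresholding operator; the numerical values of $x$ and $M_1$ below are arbitrary.
\begin{verbatim}
import numpy as np

# Minimal sketch (assumed choices: f(y) = |y|, p = 1) of one minimization of
# the proximal surrogate g(y; x) = f(y) + (M_1/2) (y - x)^2.  For f = |.| the
# minimizer is the soft-thresholding operator with threshold 1/M_1.

def prox_abs(x, M):
    t = 1.0 / M
    return np.sign(x) * max(abs(x) - t, 0.0)

x, M = 2.3, 4.0
y = prox_abs(x, M)
print("surrogate minimizer:", y)
print("descent g(y;x) <= g(x;x) holds:",
      abs(y) + 0.5 * M * (y - x) ** 2 <= abs(x))
\end{verbatim}
The descent condition \eqref{eq:descGhom} holds here by construction, since $y$ minimizes $g(\cdot;x)$ and $g(x;x) = f(x)$.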
\begin{example}
\label{expl:4}
\textit{(smooth derivative functions).} For a function $f: \mathbb{E} \to \mathbb{R}$ that is $p \geq 1$ times differentiable and with the $p$ derivative Lipschitz with constant $L_p^f$ on $\mathbb{E}$, one can consider the following $p$ higher-order surrogate function:
$$ g(y;x) =T_p^f(y;x) + \frac{M_p}{(p+1)!} \| y-x \|^{p+1} \quad \forall x,y \in \mathbb{E}, $$
where $ M_p \geq L_p^f$. In this case, the error function $h$ has $L_p^h = M_p + L_p^f$. \end{example}
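\noindent The majorization property (i) from Definition \ref{def:sur} can be checked numerically. The Python sketch below does this for the assumed one-dimensional function $f(x)=\log(1+x^2)$ with $p=2$ and an assumed (conservative) constant $M_2 = 10 \geq L_2^f$; both choices are made only for illustration.
\begin{verbatim}
import numpy as np

# Minimal numerical sketch (illustrative only, p = 2): for f(x) = log(1 + x^2)
# in one dimension we form g(y; x) = T_2^f(y; x) + (M_2/3!) |y - x|^3 and
# check the majorization property g(y; x) >= f(y) at many sampled points.

f   = lambda x: np.log(1.0 + x ** 2)
df  = lambda x: 2.0 * x / (1.0 + x ** 2)
d2f = lambda x: 2.0 * (1.0 - x ** 2) / (1.0 + x ** 2) ** 2

M2 = 10.0   # assumed upper bound on L_2^f (the Lipschitz constant of f'')

def surrogate(y, x):
    taylor = f(x) + df(x) * (y - x) + 0.5 * d2f(x) * (y - x) ** 2
    return taylor + (M2 / 6.0) * np.abs(y - x) ** 3

rng = np.random.default_rng(1)
x = 0.5
ys = rng.uniform(-2.0, 2.0, size=1000)
print("majorization holds at all sampled points:",
      np.all(surrogate(ys, x) >= f(ys)))
\end{verbatim}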
\begin{remark}
\label{rmk:01}
If the function $f$ additionally satisfies $f(y) \geq T_{p}^{f}(y;x)$ for all $y$, then we can improve the Lipschitz constant for the error function $h$ from Example \ref{expl:4} to $L_p^h = M_p$. Indeed, using the third property of a $p$ higher-order surrogate function, we have $T_p^h(y;x) =0$ for all $y$ and thus:
\begin{align}
|h(y;x) - T_p^h(y;x) | &\!= |g(y;x) - f(y)| =\left|T_p^f(y;x) + \frac{M_p}{(p+1)!} \|y-x\|^{p+1} \! -f(y) \right| \nonumber\\
& = \left|f(y) - T_p^f(y;x) - \frac{M_p}{(p+1)!} \|y-x\|^{p+1} \right|\label{eq:21}.
\end{align}
We observe that the regularization term $\frac{M_p}{(p+1)!} \|y-x\|^{p+1}$ is nonnegative and, due to the additional condition $f(y) \geq T_p^f(y;x)$, we also have that $f(y)- T_p^f(y;x)\geq 0$. Hence, using $|a-b | \leq \max\{a,\, b\}$ for any two nonnegative scalars $a$ and $b$, from \eqref{eq:21} we further get:
\begin{align*}
& |h(y;x) - T_p^h(y;x) | \leq \max \left(f(y) - T_p^f(y;x), \,\frac{M_p}{(p+1)!} \|y-x\|^{p+1} \right) \\
& \overset{\eqref{eq:TayAppBound}}{\leq} \max \left( \frac{L_p^f}{(p+1)!} \| y-x\|^{p+1}, \,\frac{M_p}{(p+1)!} \|y-x\|^{p+1} \right) = \frac{M_p}{(p+1)!} \|y-x\|^{p+1}.
\end{align*} Therefore, in this case $L_p^h = M_p$. Functions that satisfy the additional condition from Remark \ref{rmk:01} are, e.g., convex functions for $p=1$ (since in this case convexity of $f$ automatically gives $f(y) \geq T_{1}^{f}(y;x)$) or quadratic functions for $p=2$ (since in this case $f(y) = T_2^f(y;x)$). Recall that if $f$ is a convex function and $M_p$ is chosen conveniently, then the higher-order surrogate function $g$ from Example \ref{expl:4} is also convex (see Lemma \ref{lema:conv}). \end{remark}
\begin{example}
\label{expl:5}
\textit{(composite functions).} Let $f_1: \mathbb{E} \rightarrow \bar{\mathbb{R}}$ be a proper closed convex function and let $f_2: \mathbb{E} \to \bar{\mathbb{R}}$ have the $p \geq 1$ derivative smooth with constant $L_p^{f_2}$ on $\text{dom} f_1 \subset \text{int}(\text{dom} f_2)$. Then, for the composite function $f= f_1 +f_2$ one can take the following $p$ higher-order surrogate:
$$
g(y;x) = f_1(y) + T_p^{f_2}(y;x) +\frac{M_p}{(p+1)!} \|y -x\|^{p+1} \quad \forall x,y \in \text{dom}f \;\; (:=\text{dom}f_1),
$$
where $M_p \geq L_p^{f_2}$. Moreover, the error function $h$ has Lipschitz constant $L_p^h =M_p + L_p^{f_2}$. If additionally $f_2$ satisfies $f_2(y) \geq T_p^{f_2}(y;x)$ for all $y$, then the Lipschitz constant for the error function $h$ can be improved to $L_p^h = M_p$ (see Remark \ref{rmk:01}). \end{example}
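\noindent For $p=1$ this surrogate reduces to the classical proximal-gradient model. A minimal Python sketch of one surrogate minimization is given below under the assumed choices $f_2(y) = \frac{1}{2}\|By-b\|^2$ and $f_1(y) = \lambda \|y\|_1$; the data $B$, $b$ and the parameter $\lambda$ are arbitrary illustrative values.
\begin{verbatim}
import numpy as np

# Minimal sketch (p = 1) of the composite surrogate, under the assumed choices
# f_2(y) = 0.5 ||B y - b||^2 (smooth part) and f_1(y) = lam ||y||_1 (nonsmooth
# part).  Minimizing g(y; x) = f_1(y) + T_1^{f_2}(y; x) + (M/2) ||y - x||^2
# reduces to a proximal-gradient step with step size 1/M.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def composite_surrogate_step(B, b, lam, M, x):
    grad_f2 = B.T @ (B @ x - b)            # gradient of the smooth part at x
    return soft_threshold(x - grad_f2 / M, lam / M)

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 5))            # arbitrary illustrative data
b = rng.standard_normal(8)
lam, x = 0.1, np.zeros(5)
M = np.linalg.norm(B, 2) ** 2              # M >= L_1^{f_2} = ||B||_2^2

print("one surrogate minimization:", composite_surrogate_step(B, b, lam, M, x))
\end{verbatim}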
\begin{example}
\label{expl:6}
\textit{(bounded derivative functions).}
We consider a function $f: \mathbb{E} \rightarrow \mathbb{R}$ that is $p+1$ times continuously differentiable, where $p \geq 1$. Assume that the $p+1$ derivative of $f$ is upper bounded by a constant symmetric $p+1$ multilinear form $\mathcal{H}$, i.e. $\langle \nabla^{p+1}f(x)[h]^p, h\rangle \leq \langle \mathcal{H} [h]^p, h\rangle \; \; \forall h,\,x \in \mathbb{E}$ (notation: $\nabla^{p+1}f(x) \preccurlyeq \mathcal{H}$). In this case one can consider the following $p$ higher-order surrogate function:
$$
g(y;x) = T_p^f(y;x) + \frac{1}{(p+1)!} \langle \mathcal{H}[y-x]^p, y-x \rangle \quad \forall x,y \in \mathbb{E}.
$$
Moreover, the error function $h$ has the Lipschitz constant $L_p^h =\|\mathcal{H}\| + L_p^f$. Note that the class of functions considered in this example is different from that of Example \ref{expl:4}, since one can find functions with the $p$ derivative smooth whose $p+1$ derivative does not exist everywhere. \end{example}
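\noindent A minimal Python sketch of this surrogate for $p=1$ is given below for the logistic loss $f(x) = \sum_{i} \log(1+e^{-b_i \langle a_i, x\rangle})$, whose Hessian satisfies $\nabla^2 f(x) \preccurlyeq \mathcal{H} := \frac{1}{4}\sum_{i} a_i a_i^T$; each surrogate minimization is then a single linear solve with the fixed matrix $\mathcal{H}$. The data below are arbitrary illustrative values and a small diagonal shift is added only for numerical safety.
\begin{verbatim}
import numpy as np

# Minimal sketch (p = 1, illustrative only): for the logistic loss
# f(x) = sum_i log(1 + exp(-b_i <a_i, x>)) the Hessian satisfies
# grad^2 f(x) <= H := (1/4) A^T A for all x, so minimizing the surrogate
# g(y; x) = T_1^f(y; x) + 0.5 <H (y - x), y - x> is a single linear solve.

def logistic_grad(A, b, x):
    s = 1.0 / (1.0 + np.exp(b * (A @ x)))   # equals sigmoid(-b_i <a_i, x>)
    return -A.T @ (b * s)

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 4))            # arbitrary illustrative data
b = np.sign(rng.standard_normal(30))
H = 0.25 * A.T @ A + 1e-8 * np.eye(4)       # small shift only for numerical safety

x = np.zeros(4)
for _ in range(50):                          # surrogate (majorization) steps
    x = x - np.linalg.solve(H, logistic_grad(A, b, x))
print("gradient norm after 50 surrogate steps:",
      np.linalg.norm(logistic_grad(A, b, x)))
\end{verbatim}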
\begin{example}
\label{expl:7}
\textit{(composition of functions).} Let $F: \mathbb{R}^n \to \mathbb{R}^m$ be a differentiable function having Lipschitz continuous Jacobian matrix with constant $L_1^F$, $\phi: \mathbb{R}^m \to \mathbb{R}$ be a differentiable function having the first derivative smooth with constant $L_1^\phi$ and $f_1 : \mathbb{R}^n \to \bar{\mathbb{R}}$ be a proper closed convex function with $ \text{dom} \, f_1$ compact convex set. Then, for the function $f(x) = f_1 (x) + \phi(F(x))$ one can take the first-order surrogate:
\begin{align}
\label{eq:comp1}
g(y;x) = f_1(y) + \phi(F(x) + \nabla F(x)(y-x)) +\frac{M}{2} \|y -x\|^{2} \quad \forall x,y \in \text{dom}f_1,
\end{align}
where $M \geq L_0^\phi L_1^F$ and for the error function the Lipschitz constant is
$$L_1^h =M + L_1^F \max\limits_{y \in \text{dom} \, f_1} \| \nabla \phi(F(y)) \| + 2 L_1^\phi (\max\limits_{y \in \text{dom} \, f_1} \| \nabla F(y) \| )^2 .$$
If $F$ is simple, e.g., separable of the form $F(x) = [F_1(x_1) \cdots F_n(x_n)]$, then one can take the following first-order surrogate:
\begin{align}
\label{eq:comp2}
\bar g(y;x) \!=\! f_1(y) \!+\! \phi(F(x)) \!+\! \langle \nabla \phi(F(x)), \!F(y) \!-\! F(x) \rangle \!+\! \frac{M}{2} \| F(y) \!-\! F(x) \|^{2},
\end{align}
where $M \geq L_1^\phi$ and $L_1^h = 2 L_1^F \max\limits_{y \in \text{dom} \, f_1} \| \nabla \phi(F(y)) \| + 2 M L_1^F \max\limits_{y \in \text{dom} \, f_1} \|F(y)\| + (M+ L_1^\phi) (\max\limits_{y \in \text{dom} \, f_1} \| \nabla F(y) \|)^2$. Note that in both cases minimizing $g$ is easy since either $g$ is convex or $F$ has a simple structure (e.g., separable). Moreover, the second surrogate function \eqref{eq:comp2} resembles the augmented Lagrangian method of multipliers \cite{Pol:87,Roc:70}. \end{example}
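\noindent A minimal Python sketch of the form of the first surrogate \eqref{eq:comp1} is given below under the assumed choices $f_1 = 0$ and $\phi(u) = \frac{1}{2}\|u\|^2$, in which case each surrogate minimization is a linear least-squares problem (a Levenberg--Marquardt-type step). Note that this simplified setting ignores the compactness assumption on $\text{dom}\, f_1$, and the map $F$, the starting point and the value $M=1$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

# Minimal sketch of the first surrogate above, under the assumed choices
# f_1 = 0 and phi(u) = 0.5 ||u||^2, so that minimizing
#   g(y; x) = 0.5 ||F(x) + J(x)(y - x)||^2 + (M/2) ||y - x||^2
# is a linear least-squares problem (a Levenberg-Marquardt-type step).
# The map F below is an arbitrary smooth illustrative choice.

def F(x):
    return np.array([x[0] ** 2 + x[1] - 1.0, x[0] + x[1] ** 2 - 1.0])

def J(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def surrogate_step(x, M):
    Jx, Fx = J(x), F(x)
    # normal equations: (J^T J + M I) d = -J^T F(x), then y = x + d
    d = np.linalg.solve(Jx.T @ Jx + M * np.eye(2), -Jx.T @ Fx)
    return x + d

x = np.array([2.0, 2.0])
for _ in range(10):
    x = surrogate_step(x, M=1.0)
print("approximate solution of F(x) = 0:", x, " residual:", F(x))
\end{verbatim}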
\noindent The reader can find other examples of higher-order surrogate functions depending on the structure of the objective function $f$ in \eqref{eq:optpb} and we believe that this paper opens a window of opportunity for higher-order algorithmic research. In the following we define our General Higher-Order Majorization-Minimization (GHOM) algorithm:
\begin{algorithm}
\caption{Algorithm GHOM}
\label{alg:buildtree}
\begin{algorithmic}
\STATE{Given $x_0 \in \text{dom} \, f$ and $p \geq 1$, for $k\geq 0$ do:}
\STATE{ 1. Compute the $p$ surrogate function $g(y;x_k)$ of $f$ near $x_k$}
\STATE{ 2. Compute a stationary point $x_{k+1}$ of the subproblem:
\begin{equation}
\label{eq:sp}
\min_{y \in \text{dom} g} g(y;x_k),
\end{equation} satisfying the following descent property \begin{equation}
\label{eq:descGhom}
g(x_{k+1};x_k) \leq g(x_k;x_k) \quad \forall k \geq 0. \end{equation} }
\end{algorithmic} \end{algorithm}
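\noindent To fix ideas, the following Python sketch instantiates GHOM in one dimension with the $p=2$ surrogate of Example \ref{expl:4} (a cubic-regularized second-order model); the test function, the constant $M_2$ and the grid-search inner solver are assumptions made only for this illustration and are not part of the algorithm above.
\begin{verbatim}
import numpy as np

# Minimal one-dimensional sketch of algorithm GHOM with the p = 2 surrogate of
# Example expl:4, g(y; x) = T_2^f(y; x) + (M_2/3!) |y - x|^3, applied to the
# assumed test function f(x) = log(1 + x^2) + 0.1 x^2.  The inner subproblem
# is solved by a dense grid search over a local interval, which is enough for
# this illustration; since the zero step is in the grid, the descent
# condition g(x_{k+1}; x_k) <= g(x_k; x_k) holds automatically.

f   = lambda x: np.log(1.0 + x ** 2) + 0.1 * x ** 2
df  = lambda x: 2.0 * x / (1.0 + x ** 2) + 0.2 * x
d2f = lambda x: 2.0 * (1.0 - x ** 2) / (1.0 + x ** 2) ** 2 + 0.2

def ghom(x0, M2=10.0, iters=20, radius=2.0):
    x = x0
    for _ in range(iters):
        d = np.append(np.linspace(-radius, radius, 20001), 0.0)  # steps y - x
        g = (f(x) + df(x) * d + 0.5 * d2f(x) * d ** 2
             + (M2 / 6.0) * np.abs(d) ** 3)
        x = x + d[np.argmin(g)]
    return x

x_star = ghom(x0=3.0)
print("approximate minimizer:", x_star)      # expected to be close to 0
print("gradient at the last iterate:", df(x_star))
\end{verbatim}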
\noindent
Note that in the convex case we can use very efficient methods from convex optimization to find the global solution $x_{k+1}$ of the subproblem at each iteration, see e.g., \cite{CarGou:11, CarDuc:16, GraNes:19sub, Nes:19Inxt} and also Lemma \ref{lema:conv}. In the nonconvex case, our convergence analysis below requires only the computation of a stationary point for the subproblem \eqref{eq:sp} satisfying the descent \eqref{eq:descGhom}. Note that almost all nonconvex optimization algorithms are able to identify stationary points of nonconvex problems. Moreover, in our convergence analysis below we can relax the stationary point condition, that is we can require $x_{k+1}$ to satisfy $\| g^{x_{k+1}} \| \leq \theta \| x_{k+1} - x_k \|^p$ for some $\theta >0$, where $g^{x_{k+1}} \in \partial g(x_{k+1};x_k)$. For simplicity of the exposition, however, we assume below that $x_{k+1}$ is a stationary point of the subproblem \eqref{eq:sp}.
\section{Convergence analysis of GHOM for convex optimization} \label{sec:conv} In this section we analyze the global and local convergence of algorithm GHOM under various assumptions on the convexity of the objective function $f$. Note that when the function $f$ is convex we assume that $x_{k+1}$ is a global minimum of the $p$ higher-order surrogate $g$ at~$x_k$. For the particular surrogate given in Example \ref{expl:5} a similar convergence analysis has been given in \cite{Nes:19Inxt}, but \cite{Nes:19Inxt} requires $M_p \geq p L_p^{f_2}$, while in our analysis it is sufficient to have $M_p \geq L_p^{f_2}$. Moreover, to the best of our knowledge, the other examples of surrogate functions (Examples \ref{expl:8}, \ref{expl:6}, \ref{expl:7}) have not been investigated in the literature.
\subsection{Global sublinear convergence of GHOM} \label{sec:conv-conv} In this section we derive a global rate of convergence of order $\mathcal{O}(1/k^p)$ for GHOM in terms of function values when \eqref{eq:optpb} is a general convex optimization problem.
\begin{theorem}
\label{th:conv-conv}
Consider the optimization problem \eqref{eq:optpb}. Suppose the objective function $f$ is proper, lower semicontinuous, convex and admitting at each point $x \in \text{dom} \, f$ a $p \geq 1$ higher-order surrogate function $g(\cdot;x)$ as given in Definition \ref{def:sur}. Then, the sequence $(x_k)_{k \geq 0}$ generated by algorithm GHOM has the following global sublinear convergence rate:
\begin{align}
\label{theq:conv-conv}
f(x_{k}) - f(x^*) \leq \frac{L_p^h R^{p+1}}{p! \left(1+\frac{k}{p+1}\right)^{p}}.
\end{align} \end{theorem}
\begin{proof} Since $h(y;x)$, defined as the error between the $p$ higher-order surrogate $g(y;x)$ and the function $f$, has the $p$ derivative smooth with Lipschitz constant $L_p^h$, from \eqref{eq:TayAppBound} we get:
$$
h(y;x) \leq T_p^h(y;x) + \frac{L_p^h}{(p+1)!} \| y-x\|^{p+1} \quad \forall x,y \in \text{dom} \, f.
$$
However, based on the condition (iii) from Definition \ref{def:sur} the Taylor approximation of $h$ of order $p$ at $x$ satisfies:
$$
T_p^h(y;x) = h(x;x)+\sum_{i=1}^{p} \frac{1}{i !} \nabla^{i} h(x;x)[y-x]^{i} = 0.
$$
Therefore, we obtain:
\begin{align*}
h(y;x) = g(y;x) -f(y) &\leq T_p^h(y;x) + \frac{L_p^h}{(p+1)!} \| y-x\|^{p+1},
\end{align*}
which implies that
\begin{align}\label{eq:5}
g(y;x) & \leq f(y) + \frac{L_p^h}{(p+1)!} \| y-x\|^{p+1} \quad \forall x,y \in \text{dom} \, f.
\end{align}
Since the surrogate function $g$ at $x_k$ satisfies $\text{dom} \, g(\cdot;x_k) = \text{dom} f $, it is bounded from below by $f$ and $x_{k+1}$ is a global minimum of $g$, we further get:
\begin{align}
\label{eq:main}
f(x_{k+1}) & \!\leq\! g(x_{k+1};x_k) \!=\! \!\min_{y \in \text{dom} \, f} \!g(y;x_k) \!\!\overset{\eqref{eq:5}}{\leq}\!\! \min_{y \in \text{dom} f} \!(f(y) \!+\! \frac{L_p^h}{(p+1)!} \| y-x_k\|^{p+1}).
\end{align}
Since we assume $f$ to be convex, we can choose $ y = x_k + \alpha(x^* -x_k)$, with $\alpha \in [0,\, 1]$, and obtain further:
\begin{align}
f(x_{k+1}) & \leq \min_{\alpha\in [0,\, 1]} f(x_k + \alpha(x^* -x_k)) + \frac{L_p^h}{(p+1)!} \| x_k + \alpha(x^* -x_k)-x_k\|^{p+1} \nonumber \\
& \leq \min_{\alpha\in [0,\, 1]} f(x_k) - \alpha(f(x_k) - f(x^*)) + \frac{L_p^h}{(p+1)!} \alpha^{p+1}\| x_k-x^*\|^{p+1}. \label{eq:main0}
\end{align}
Let us show that GHOM is a descent algorithm. Indeed, from \eqref{eq:descGhom} we have:
\begin{equation*}
f(x_k) = g(x_k;x_k) \geq g(x_{k+1};x_{k}) \geq f(x_{k+1})\quad \forall k \geq 0.
\end{equation*}
Hence, all the iterates $x_k$ are in the level set $\mathcal{L}_f(x_0)$ and thus satisfy $\|x_k-x^* \| \leq R$ for all $k \geq 0$. Subtracting the optimal value on both sides of \eqref{eq:main0} and recalling the fact that the sublevel set of $f$ at $x_0$ is assumed bounded, we obtain:
\begin{equation} \label{eq:6}
f(x_{k+1}) -f(x^*) \leq \min_{\alpha\in [0,\, 1]} (1 - \alpha)(f(x_k) - f(x^*)) + \frac{L_p^h R^{p+1}}{(p+1)!} \alpha^{p+1}.
\end{equation}
For simplicity, we denote $\Delta_{k} = f(x_{k}) -f(x^*)$. We consider two cases in \eqref{eq:6} (see also \cite{Nes:19, Nes:19Inxt}):\\
First case: if $\Delta_k > \frac{L_p^h R^{p+1}}{p!}$, then the optimal point is $\alpha^*=1$ and $\Delta_{k+1} \leq \frac{L_p^h R^{p+1}}{(p+1)!}.$\\
Second case: if $\Delta_k \leq \frac{L_p^h R^{p+1}}{p!}$, then the optimal point is $\alpha^* = \sqrt[p]{\frac{\Delta_k p!}{R^{p+1} L_p^h}}$ and we obtain:
\begin{equation*}
\begin{aligned}
\Delta_{k+1} &\leq \Delta_{k} \left(1-c \Delta_k^{\frac{1}{p}} \right), \quad
\Delta_{k+1}^{-\frac{1}{p}} \geq \Delta_k^{-\frac{1}{p}} \left( 1 - c \Delta_k^{\frac{1}{p}} \right)^{-\frac{1}{p}}, \\
\Delta_{k+1}^{-\frac{1}{p}} &\geq \Delta_k^{-\frac{1}{p}} \left( 1+ \frac{c}{p} \Delta_k^{\frac{1}{p}} \right) = \Delta_k^{-\frac{1}{p}} + \frac{c}{p},\\
\end{aligned}
\end{equation*}
where $c = \frac{p}{p+1} \sqrt[p]{\frac{p!}{R^{p+1} L_p^h}}$ and the last inequality follows from $(1-t)^{-1/p} \geq 1+ t/p $ for $t \in [0,\, 1)$, see e.g., \cite{Pol:87}. We now apply recursively the previous inequalities, starting with $k = 1$. If $\Delta_0\geq \frac{L_p^h R^{p+1}}{p!}$, we are in the first case and then $\Delta_1 \leq \frac{L_p^h R^{p+1}}{(p+1)!}$. Then, we will subsequently be in the second case for all $k \geq 2$ and
$$\Delta_k \leq \Delta_1 \left(1+ \frac{(k-1)c}{p} \Delta_1^{\frac{1}{p}} \right)^{-p} \leq \frac{L_p^h R^{p+1}}{(p+1)!} \left( 1+\frac{k-1}{p+1} (p+1)^{\frac{1}{p}} \right)^{-p}. $$ Otherwise, if $\Delta_0\leq \frac{L_p^h R^{p+1}}{p!}$, then we are in the second case and obtain:
$$\Delta_k \leq \Delta_0 \left( 1+ \frac{kc}{p} \Delta_0^{\frac{1}{p}} \right)^{-p} \leq \frac{L_p^h R^{p+1}}{p!} \left( 1+\frac{k}{p+1} \right)^{-p}. $$
These prove the statement of the theorem. \end{proof}
\noindent Note that the convergence results from \cite{Nes:19, Nes:19Inxt} assume Lipschitz continuity of the $p$ derivative of the objective function $f$, while Theorem \ref{th:conv-conv} assumes Lipschitz continuity of the $p$ derivative of the error function $h=g-f$. Hence, our proof is different from \cite{Nes:19, Nes:19Inxt}. Moreover, our convergence rate \eqref{theq:conv-conv} recovers the usual convergence rates $\mathcal{O}(1/k^p)$ of higher-order Taylor-based methods in the unconstrained convex case \cite{Nes:19} (Example \ref{expl:4}) and composite convex case \cite{Nes:19Inxt} (Example \ref{expl:5}), respectively. Therefore, Theorem \ref{th:conv-conv} provides a unified convergence analysis for higher-order majorization-minimization algorithms that covers, in particular, (composite) convex problems, under possibly more general assumptions than in \cite{Nes:19, Nes:19Inxt}. In fact, there is a major difference between the Taylor expansion approach from \cite{Nes:19, Nes:19Inxt} and the model approximation based on our general majorization-minimization approach: the Taylor expansion yields a unique approximation model around a given point, while in the majorization-minimization approach one may consider many upper-bound models, and every model leads to a different optimization method.
\subsection{Local superlinear convergence of GHOM } \noindent Next, by assuming uniform convexity on $f$, we prove that GHOM can achieve faster rates locally. More precisely we get \textit{local superlinear} convergence rates for GHOM in several optimality criteria: function values, distance of the iterates to the optimal point and in the norm of minimal subgradients. For this we first need some auxiliary results.
\begin{lemma}\label{lema:2}
Let the assumptions of Theorem \ref{th:conv-conv} hold. Then, there exist subgradients $ f^{x_{k+1}} \in \partial f(x_{k+1})$, where the sequence $(x_{k})_{k\geq 0}$ is generated by the algorithm GHOM, such that the following relation holds:
\begin{equation}\label{eq:26}
\| f^{x_{k+1}}\|_* \leq \frac{L_p^h}{p!} \| x_{k+1} -x_k \|^p \quad \forall k \geq 0.
\end{equation} \end{lemma}
\begin{proof} From the definition of the $p$ higher-order surrogate function, we know that the error function $h$ has the $p$ derivative Lipschitz with constant $L_p^h$. Thus, we have the inequality \eqref{eq:TayAppG1} for $y =x_{k+1}$, that is:
\begin{equation*}
\|\nabla h(x_{k+1};x_k) - \nabla T_p^h(x_{k+1}; x_k) \|_* \leq \frac{L_p^h}{p!} \|x_{k+1} - x_k \|^{p}.
\end{equation*}
Since the Taylor approximation of $h$ at $x_k$ of order $p$, $T_p^h(y; x_k)$, is zero, we get:
\begin{align*}
\|\nabla h(x_{k+1};x_k) \|_* \leq \frac{L_p^h}{p!} \|x_{k+1} - x_k \|^{p}.
\end{align*}
Further, from the optimality of the point $x_{k+1}$ we have that $0 \in \partial g(x_{k+1};x_k)$. Thus, since the error function $h(y;x_k) = g(y;x_k) - f(y)$ is differentiable, we obtain from calculus rules \cite{Roc:70} that:
$$ -\nabla h(x_{k+1};x_k) \in \partial f(x_{k+1}).$$
Returning with this relation in the previous inequality, we obtain \eqref{eq:26} by simply defining $f^{x_{k+1}} = -\nabla h(x_{k+1};x_k)$. \end{proof}
\begin{lemma} \label{lema:1}
Let $f$ be a proper lower semicontinuous convex function admitting a $p \geq 1$ higher-order surrogate function $g(y;x)$ at each point $x \in \text{dom} f$ as given in Definition \ref{def:sur} such that the error function $h = g-f$ is convex. Then, there exist subgradients $ f^{x_{k+1}} \in \partial f(x_{k+1})$, where the sequence $(x_{k})_{k\geq 0}$ is generated by the algorithm GHOM, such that the following relation holds:
\begin{equation}\label{eq:27}
\langle f^{x_{k+1}}, x_k -x_{k+1} \rangle \geq 0 \quad \forall k \geq 0.
\end{equation} \end{lemma}
\begin{proof}
Since the error function $h$ is $p \geq 2$ differentiable satisfying $h(y;x_k) \geq 0$ and $h(x_k;x_k) = 0$, then using further the convexity of $h(\cdot;x_k)$, we have:
\begin{align*}
0 = h(x_k;x_k) & \overset{ h \; \text{convex}}{\geq} h(x_{k+1};x_k) + \langle \nabla h(x_{k+1};x_k), x_k -x_{k+1} \rangle \\
& \overset{ h \geq 0}{\geq} \langle \nabla h(x_{k+1};x_k), x_k -x_{k+1} \rangle.
\end{align*}
From Lemma \ref{lema:2} we have $ -\nabla h(x_{k+1};x_k) \in \partial f(x_{k+1})$. We get our statement by defining $f^{x_{k+1}} = -\nabla h(x_{k+1};x_k)$.
\end{proof}
\noindent Now we are ready to prove the superlinear convergence of the GHOM algorithm in function values for general uniformly convex objective functions.
\begin{theorem}
\label{th:conv-conv_super0}
Let $f$ be a uniformly convex function of degree $q \geq 2$ with constant $\sigma_{q}$, admitting a $p \geq 1$ higher-order surrogate function $g(\cdot;x)$ at each point $x \in \text{dom} f$ as given in Definition \ref{def:sur} such that the error function $h = g-f$ is convex. Then, the sequence $(x_k)_{k \geq 0}$ generated by algorithm GHOM has the following convergence rate in function values:
\begin{align}
\label{theq:conv-conv_super0}
f\left(x_{k+1}\right) \!-\! f(x^*) \!\leq\! (q \!-\! 1) q^{\frac{p-q+1}{q-1}} \!\! \left(\frac{1}{\sigma_{q}}\right)^{\frac{p+1}{q-1}} \!\! \left(\frac{L_{p}^h}{p !}\right)^{\frac{q}{q-1}} \!\!\! \left(f\left(x_{k}\right) \!-\! f(x^*) \right)^{\frac{p}{q-1}}.
\end{align} \end{theorem}
\begin{proof}
For any $k \geq 0$ and $f^{x_{k+1}} \in \partial f(x_{k+1})$, we have:
\begin{align*}
f(x_k) - f(x^*) &\geq f(x_{k}) - f(x_{k+1}) \\
& \overset{\eqref{eq:unifConv}}{\geq} \langle f^{x_{k+1}}, x_k-x_{k+1}\rangle+\frac{\sigma_{q}}{q}\|x_k-x_{k+1}\|^{q}\\
& \overset{\eqref{eq:27}}{\geq} \frac{\sigma_{q}}{q}\|x_k-x_{k+1}\|^{q} \overset{\eqref{eq:26}}{\geq}\frac{\sigma_{q}}{q} \left( \frac{p!}{L_p^h}\| f^{x_{k+1}}\|_* \right)^{\frac{q}{p}} \\
& \overset{\eqref{eq:probConv}}{\geq} \frac{\sigma_{q}}{q} \left( \frac{p!}{L_p^h} \right)^{\frac{q}{p}}
\left(\frac{q \sigma_{q}^{\frac{1}{q-1}}}{q-1}\left(f\left(x_{k+1}\right)-f(x^*)\right)\right)^{\frac{q-1}{p}},
\end{align*}
which proves the statement of the theorem. \end{proof}
\noindent Note that if $p>q-1$, then GHOM has local superlinear convergence rate since $\frac{p}{q-1}>1$ and from Theorem \ref{th:conv-conv} we have $f\left(x_{k}\right)-f(x^*) \to 0$. E.g., if $q = 2$ (strongly convex function) and $p = 2$, then the local rate of convergence is quadratic. If $q = 2$ and $p = 3$, then the local rate of convergence is cubic. If $q=2$ and $p=1$ we recover the usual linear convergence rate, etc. Note that by choosing appropriately $M_p$ and $\mathcal{H}$ in Examples \ref{expl:8}, \ref{expl:4}, \ref{expl:5} and \ref{expl:6}, we indeed obtain error functions $h$ that are convex. However, if we remove the convexity assumption on $h$ we can still prove local superlinear convergence for GHOM in function values, but the rate is slightly worse. This result is stated~next.
\begin{theorem}
\label{th:conv-conv_super}
Let $f$ be a uniformly convex function of degree $q \in [2,p+1)$ with constant $\sigma_{q}$, admitting a $p$ higher-order surrogate function $g(\cdot;x)$ at each point $x \in \text{dom} f$ as given in Definition \ref{def:sur}. Then, the sequence $(x_k)_{k \geq 0}$ generated by algorithm GHOM has the following local superlinear convergence rate:
\begin{align}
\label{theq:conv-conv_super}
f(x_{k+1}) - f(x^*) & \leq \frac{L_p^h}{(p+1)!} \left( \frac{q}{\sigma_q} \right)^{\frac{p+1}{q}} \!\! (f(x_k) - f(x^*))^{\frac{p+1}{q} }.
\end{align} \end{theorem}
\begin{proof}
If $f$ is uniformly convex, then it has a unique optimal point $x^*$. Moreover, from \eqref{eq:main}, we have:
\begin{align}
f(x_{k+1}) & \leq \min_{y \in \text{dom} f} (f(y) + \frac{L_p^h}{(p+1)!} \| y-x_k\|^{p+1}) \nonumber \\
& \overset{y=x^*}{\leq} f(x^*) + \frac{L_p^h}{(p+1)!} \| x_k - x^*\|^{p+1}.
\label{eq:relatii1}
\end{align}
On the other hand, since $f$ is uniformly convex of degree $q$, then using
\eqref{eq:unifConv} for $f$ and the fact that $0 \in \partial f(x^*)$, we get:
\begin{align}
\label{eq:relatii2}
f(x_k) - f(x^*) \geq \frac{\sigma_{q}}{q}\|x_k - x^*\|^{q}.
\end{align}
Combining the inequalities \eqref{eq:relatii1} and \eqref{eq:relatii2}, we further obtain:
\begin{align*}
f(x_{k+1}) - f(x^*) & \leq \frac{L_p^h}{(p+1)!} \| x_k - x^*\|^{p+1} \leq \frac{L_p^h}{(p+1)!} \left( \frac{q}{\sigma_q} (f(x_k) - f(x^*) ) \right)^{\frac{p+1}{q}} \\
& = \left( \frac{L_p^h}{(p+1)!} \left( \frac{q}{\sigma_q} \right)^{\frac{p+1}{q}} \!\! (f(x_k) - f(x^*) )^{\frac{p+1}{q} -1} \right ) (f(x_k) - f(x^*)).
\end{align*}
If $q<p+1$, we have that $\beta_k = \frac{L_p^h}{(p+1)!} \left( \frac{q}{\sigma_q} \right)^{\frac{p+1}{q}} \!\! (f(x_k) - f(x^*) )^{\frac{p+1}{q} -1} $ converges to zero since from Theorem \ref{th:conv-conv} we have $f\left(x_{k}\right)-f(x^*) \to 0$, thus proving the local superlinear convergence in function values of the sequence $(x_k)_{k \geq 0}$ generated by GHOM. \end{proof}
\begin{remark}
\noindent Note that using the inequalities \eqref{eq:relatii1} and \eqref{eq:relatii2} in the convergence rates \eqref{theq:conv-conv_super0} and \eqref{theq:conv-conv_super}, respectively, we immediately obtain local superlinear convergence also for $\|x_k - x^*\|$. Since the derivations are straightforward, we omit them. \end{remark}
\noindent Finally, we show local superlinear convergence for the sequence of minimal norms of subgradients of $f$, which we recall that we denoted by: \begin{equation*}
S(x_k) := \text{dist}(0, \partial f(x_k)) \; \left( = \inf_{f^{x_k} \in \partial f(x_k)} \|f^{x_k}\|_{*} \right). \end{equation*}
\begin{theorem}
\label{th:conv-conv_super_grad}
Under the assumptions of Theorem \ref{th:conv-conv_super} the sequence $(x_k)_{k \geq 0}$ generated by GHOM has the following convergence rate:
\begin{align*}
S(x_{k+1}) \leq \frac{L_p^h}{ p!} \left(\frac{q}{\sigma_q}\right)^{\frac{p}{q-1}} S(x_{k})^{\frac{p}{q-1}}.
\end{align*} \end{theorem}
\begin{proof}
Since GHOM is a descent method and $f$ is uniformly convex, we have:
\begin{equation*}
f(x_k) \overset{\eqref{eq:main}}{\geq} f(x_{k+1}) \overset{\eqref{eq:unifConv}}{\geq} f(x_k) + \left\langle f^{x_k}, x_{k+1} - x_k \right\rangle + \frac{\sigma_{q}}{q} \| x_{k+1} - x_k \|^{q},
\end{equation*}
where $f^{x_k} \in \partial f(x_k)$. Using the Cauchy-Schwarz inequality we further get:
\begin{align*}
0 \!\geq\! \left\langle f^{x_k}, x_{k+1} \!-\! x_k \right\rangle \!+\! \frac{\sigma_{q}}{q} \| x_{k+1} \!-\! x_k \|^{q} \!\geq\! - \| f^{x_k}\|_* \| x_{k+1} \!-\! x_k \| \!+\! \frac{\sigma_{q}}{q} \| x_{k+1} \!-\! x_k \|^{q},
\end{align*}
or equivalently
\begin{equation}
\label{eq:gradrel1}
\| f^{x_k}\|_* \geq \frac{\sigma_{q}}{q} \| x_{k+1} - x_k \|^{q-1} \quad \forall k \geq 0.
\end{equation}
Now, since $\partial f(x_k)$ is a compact set, taking $f^{x_k}$ such that $\| f^{x_k}\|_* = S(x_k)$ and then using Lemma \ref{lema:2}, we get:
\begin{align*}
S(x_{k+1}) & \leq \| f^{x_{k+1}}\|_* \overset{\eqref{eq:26}}{\leq} \frac{L_p^h}{p!} \| x_{k+1} -x_k \|^p \overset{\eqref{eq:gradrel1}}{ \leq} \frac{L_p^h}{p!} \left( \frac{q}{\sigma_{q}} \| f^{x_k}\|_* \right)^{\frac{p}{q-1}} \\
&= \frac{L_p^h}{p!} \left( \frac{q}{\sigma_{q}} \right)^{\frac{p}{q-1}} \left( S(x_k) \right)^{\frac{p}{q-1}},
\end{align*}
which proves the statement of the theorem.
\end{proof}
\noindent In the next section (see Theorem \ref{th:nonconv-gen}) we prove that the sequence of (sub)gradients generated by the GHOM algorithm converges to zero globally for convex problems, i.e. $S(x_k) \to 0$ as $k \to \infty$. Hence, since for $q \in [2,p+1)$ we have $\frac{p}{q-1}>1$, we conclude from Theorem \ref{th:conv-conv_super_grad} that the sequence of minimal norms of subgradients generated by GHOM also has a local superlinear convergence rate in the convex case.
\section{ Convergence analysis of GHOM for nonconvex optimization} \label{sec:non-conv} In this section we analyze the convergence behavior of algorithm GHOM for general or structured nonconvex optimization. Note that in the nonconvex case we only assume that $x_{k+1}$ is a stationary point of the subproblem \eqref{eq:sp} satisfying the descent \eqref{eq:descGhom}.
\subsection{Global convergence of GHOM for general nonconvex optimization} We assume a general (possibly nonconvex) proper lower semicontinuous objective function $f: \mathbb{E} \to \bar{ \mathbb{R} }$. Recall that even in this general setting a necessary first-order optimality condition for a (local) optimum $x^*$ of $f$ is to have $ 0 \in \partial f (x^*)$. Now we are ready to analyze the convergence behavior of GHOM under these general settings. We first derive some auxiliary result:
\begin{lemma}
\label{lemma:nabla2}
Let $\tilde h$ be a $p \geq 2$ times differentiable function with the $p$ derivative smooth with constant $L_p^{\tilde h}$. For any $x, y \in \text{dom} f$ and scalar $M_p^{\tilde h} \geq L_p^{\tilde h}$ let us define:
\[ H = \nabla^2 \tilde h (x) + \sum_{i=3}^p \frac{1}{(i-1)!} \nabla^i \tilde h (x) [y-x]^{i-2} + \frac{M_p^{\tilde h}}{p!} \|y-x\|^{p-1} D. \]
Then, we have the following bounds on the matrix $H$:
\begin{align*}
\int_{0}^1 \nabla^2 & \tilde h (x + \tau(y-x)) d\tau \preceq H \\
& \preceq \int_{0}^1 \left( \nabla^2 \tilde h (x + \tau(y-x)) + \frac{M_p^{\tilde h} + L_p^{\tilde h}}{(p-1)!} \tau^{p-1} \|y-x\|^{p-1} D \right) \! d\tau. \end{align*} \end{lemma}
\begin{proof}
We note that:
\begin{align*}
H & = \int_{0}^1 \left( \nabla^2 \tilde h (x) + \sum_{i=3}^p \frac{\tau^{i-2}}{(i-2)!} \nabla^i \tilde h (x) [y-x]^{i-2} + \frac{M_p^{\tilde h} \tau^{p-1}}{(p-1)!} \|y-x\|^{p-1} D \right) \! d\tau \\
& = \int_{0}^1 \left( \nabla^2 T_p^{\tilde h} (x + \tau(y-x);x) + \frac{M_p^{\tilde h} \tau^{p-1}}{(p-1)!} \|y-x\|^{p-1} D \right) \! d\tau \\
& \overset{\eqref{eq:TayAppG2}}{\preceq} \int_{0}^1 \left( \nabla^2 \tilde h (x + \tau(y-x)) + \frac{M_p^{\tilde h} + L_p^{\tilde h}}{(p-1)!} \tau^{p-1} \|y-x\|^{p-1} D \right) \! d\tau.
\end{align*}
Similarly, we have:
\begin{align*}
H & \overset{\eqref{eq:TayAppG2}}{\succeq} \int_{0}^1 \left( \nabla^2 \tilde h (x + \tau(y-x)) + \frac{M_p^{\tilde h} - L_p^{\tilde h}}{(p-1)!} \tau^{p-1} \|y-x\|^{p-1} D \right) \! d\tau \\
& \succeq \int_{0}^1 \nabla^2 \tilde h (x + \tau(y-x)) d\tau,
\end{align*}
since $M_p^{\tilde h} \geq L_p^{\tilde h}$ and $D \succeq 0$. These conclude our statement. \end{proof}
\begin{theorem}
\label{th:nonconv-gen}
Let us assume that the objective function $f: \mathbb{E} \to \bar{\mathbb{R}}$ is proper, lower semicontinuous, (possibly nonconvex) and admits a $p \geq 1$ higher-order surrogate function $g(\cdot;x)$ at any $x \in \text{dom} f$ as given in Definition \ref{def:sur}. Then, $\left(f(x_{k})\right)_{k \geq 0}$ monotonically decreases and the sequence $\left(x_{k}\right)_{k \geq 0}$ is bounded and satisfies the asymptotic stationary point condition $S(x_k) \to 0$ as $k \to \infty$. \end{theorem}
\begin{proof}
Using the properties of the surrogate (see Definition \ref{def:sur}) and the descent property \eqref{eq:descGhom}, we obtain:
\begin{equation*}
f(x_k) = g(x_k;x_k) \geq g(x_{k+1};x_{k}) \geq f(x_{k+1})\quad \forall k \geq 0.
\end{equation*}
This relation guarantees that $(f(x_k))_{k>0}$ is a nonincreasing sequence and thus convergent, since $f$ is assumed to be bounded from below by $f^*$. Moreover, since we also assume the level set ${\cal L}_f(x_0)$ bounded (see Section \ref{sect:ghom}), it follows that the sequence generated by GHOM, $(x_k)_{k\geq 0} \subset {\cal L}_f(x_0)$, is also bounded. Using further the definition of the error function $h$, we get:
\begin{align}
\label{eq: hf}
0 \leq h(x_{k+1};x_k) = g(x_{k+1};x_k) -f(x_{k+1}) \leq f(x_k) -f(x_{k+1}).
\end{align}
Telescoping the previous relation for $k = 0:\infty$, we get:
\begin{equation*}
0 \leq \sum_{k = 0}^{\infty} h(x_{k+1};x_k) \leq f(x_0) -f^* < \infty.
\end{equation*}
Therefore, the nonnegative terms of the series, $(h(x_{k+1}; x_k))_{k \geq 0}$, necessarily converge to 0. Since $g = h+ f$ is also proper and lower semicontinuous, from $x_{k+1}$ being a stationary point w.r.t. $g$ we have $0 \in \partial g(x_{k+1};x_k)$, the (limiting) subdifferential of $g(\cdot;x_k)$ at $x_{k+1}$. Since $h$ is differentiable, from calculus rules we also have that:
\begin{equation}
\label{eq:25}
- \nabla h(x_{k+1}; x_k) \in \partial f(x_{k+1}).
\end{equation}
For proving the asymptotic stationary point condition we consider two cases: $p \geq 2$ and $p=1$. For $p \geq 2$ let us define the function $\tilde{h}(y) = h(y;x_k)$, which, according to the Definition \ref{def:sur}, has the $p$ derivative smooth. Further, let us choose the point:
$$y_{k+1} = \argmin_{y \in \mathbb{E}} T_p^{\tilde{h}}(y;x_{k+1}) + \frac{M_p^h}{(p+1)!} \|y-x_{k+1} \|^{p+1},$$
where $M_p^h > L_p^h$. It is important to note that in practice we do not need to compute $y_{k+1}$. Using the (global) optimality of $y_{k+1}$, we have:
\begin{align*}
&T_p^{\tilde{h}}(y_{k+1};x_{k+1}) + \frac{M_p^h}{(p+1)!} \| y_{k+1}-x_{k+1}\|^{p+1} \\ & \leq T_p^{\tilde{h}}(x_{k+1};x_{k+1}) + \frac{M_p^h}{(p+1)!} \| x_{k+1}-x_{k+1}\|^{p+1} \\
&= h(x_{k+1};x_k) + \sum_{i=1}^{p} \frac{1}{i !} \nabla^{i} h(x_{k+1};x_k)[x_{k+1}-x_{k+1}]^{i} = h(x_{k+1};x_k).
\end{align*}
Moreover, writing explicitly the left term of the previous inequality, we get:
\begin{align*}
& h(x_{k+1};x_k) + \sum_{i=1}^{p} \frac{1}{i!} \nabla^{i} h(x_{k+1};x_k)[y_{k+1}-x_{k+1}]^{i} + \frac{M_p^h}{(p+1)!} \| y_{k+1}-x_{k+1}\|^{p+1} \\
& \leq h(x_{k+1};x_k).
\end{align*}
Thus, we have:
\begin{equation}\label{eq:24}
\sum_{i=1}^{p} \frac{1}{i!} \nabla^{i} h(x_{k+1};x_k)[y_{k+1}-x_{k+1}]^{i} + \frac{M_p^h}{(p+1)!} \| y_{k+1}-x_{k+1}\|^{p+1} \leq 0.
\end{equation}
From relation \eqref{eq:TayAppBound} we also obtain:
\begin{equation*}
\tilde{h}(y) = h(y;x_k) \leq T_p^{\tilde{h}}(y;x_{k+1}) + \frac{L_p^h}{(p+1)!} \| y-x_{k+1}\|^{p+1} \quad \forall y .
\end{equation*}
We rewrite this relation for our chosen point $y_{k+1}$:
\begin{align*}
& h(y_{k+1};x_k) \leq T_p^{\tilde{h}}(y_{k+1};x_{k+1}) + \frac{L_p^h}{(p+1)!} \| y_{k+1}-x_{k+1}\|^{p+1} \\
& = h(x_{k+1};x_k) \!+\! \sum_{i=1}^{p} \! \frac{1}{i!} \nabla^{i} h(x_{k+1};x_k)[y_{k+1} \!-\! x_{k+1}]^{i} \!+\! \frac{L_p^h}{(p+1)!} \| y_{k+1}-x_{k+1}\|^{p+1} \\
&\overset{\eqref{eq:24}}{\leq}h(x_{k+1};x_k) - \frac{M_p^h -L_p^h}{(p+1)!} \| y_{k+1}-x_{k+1}\|^{p+1}.
\end{align*}
Recalling that $h$ is nonnegative, then it follows that:
\begin{align}
\label{eq: hy}
0 \leq h(x_{k+1};x_k) - \frac{M_p^h -L_p^h}{(p+1)!} \| y_{k+1}-x_{k+1}\|^{p+1}.
\end{align}
This leads to:
\begin{equation*}
h(x_{k+1};x_k) \geq \frac{M_p^h -L_p^h}{(p+1)!} \| y_{k+1}-x_{k+1}\|^{p+1} \geq 0.
\end{equation*}
Since $(h(x_{k+1}; x_k))_{k \geq 0}$ converges to 0, then necessarily $(y_{k+1} - x_{k+1})_{k \geq 0}$ converges to $0$ and is bounded, since $h$ is continuous and $(x_{k})_{k \geq 0}$ is bounded. Consequently, the sequence $(y_{k})_{k \geq 0}$ is also bounded. Moreover, from the optimality conditions for $y_{k+1}$, we have:
\begin{align}
\label{eq:optcondy}
\nabla h(x_{k+1};x_{k}) + H_{k+1}[y_{k+1}-x_{k+1}] = 0,
\end{align}
where we denote the matrix
\begin{align}
\label{eq:hk+1}
H_{k+1} & = \nabla^2 h(x_{k+1};x_k) + \sum_{i=3}^{p} \frac{1}{(i-1)!} \nabla^i h(x_{k+1};x_k)[y_{k+1}-x_{k+1}]^{i-2} \\
& \qquad + \frac{M_p^h}{p!} \| y_{k+1} -x_{k+1}\|^{p-1} D. \nonumber
\end{align}
From Lemma \ref{lemma:nabla2} we have that:
\begin{align}
\label{eq:Hyx}
& \int_{0}^1 \nabla^2 h (x_{k+1} + \tau(y_{k+1}-x_{k+1});x_k) d\tau \preceq H_{k+1} \\
& \preceq \int_{0}^1 \! \left( \nabla^2 h (x_{k+1} \!+\! \tau(y_{k+1} \!-\! x_{k+1});x_k) \!+\! \frac{M_p^{h} \!+\! L_p^{h}}{(p-1)!} \tau^{p-1} \|y_{k+1} \!-\! x_{k+1}\|^{p-1} \!D \! \right) \! d\tau. \nonumber
\end{align}
Since the sequences $(x_{k})_{k>0}$ and $(y_{k})_{k>0}$ are bounded and $h$ is $p \geq 2$ times continuously differentiable, $ \nabla^{2} h(x_{k+1} + \tau(y_{k+1}-x_{k+1}); x_k)$ is bounded for $\tau \in [0, 1]$. Moreover, $y_{k+1} -x_{k+1} \rightarrow 0$ as $k \to \infty$ and it is bounded. Therefore, $H_{k+1}$ is bounded and consequently from \eqref{eq:optcondy} it follows that
\begin{align}
\label{eq:gradh0}
\nabla h(x_{k+1};x_{k}) \to 0 \; \text{as} \; k \to \infty.
\end{align}
\noindent For the case $p=1$ we can just take $y_{k+1} = x_{k+1} - \frac{1}{L_1^h} \nabla h (x_{k+1};x_k)$. Then, using that $h(\cdot;x_k)$ has the gradient Lipschitz with constant $L_1^h$, we obtain \cite{Nes:04}:
\[ 0 \leq h(y_{k+1};x_k) \leq h(x_{k+1}; x_k) - \frac{1}{2 L_1^h} \| \nabla h(x_{k+1};x_k) \|^2_*, \]
which further yields
\[ \frac{1}{2 L_1^h} \| \nabla h(x_{k+1};x_k) \|^2_* \leq h(x_{k+1};x_k) \to 0 \; \text{as} \; k \to \infty, \]
since we have already proved that the sequence $h(x_{k+1};x_k)$ converges to zero. Therefore, \eqref{eq:gradh0} also holds in the case $p=1$. Finally, using \eqref{eq:gradh0} in \eqref{eq:25}, it follows that:
\begin{align*}
0 \leq S(x_{k+1}) = \inf_{f^{x_{k+1}} \in \partial f(x_{k+1})} \| f^{x_{k+1}} \|_* \leq \| \nabla h(x_{k+1};x_{k}) \|_* \to 0 \; \text{as} \; k \to \infty,
\end{align*}
i.e. the sequence $\left(x_{k}\right)_{k \geq 0}$ satisfies the asymptotic stationary point condition for the general nonconvex problem \eqref{eq:optpb}. \end{proof}
\noindent Note that the main difficulty in the previous proof is to handle $h$ having the $p \geq 2$ derivative smooth. We overcome this difficulty by introducing a new sequence $(y_k)_{k \geq 0}$ and proving that it has similar properties as the sequence $(x_k)_{k \geq 0}$ generated by GHOM. The previous result proves only asymptotic convergence for $S(x_k)$. Therefore, in the nonconvex setting the requirements from Definition \ref{def:sur} on the surrogate function $g$ do not seem to be rich enough to enable convergence rates. However, it is well-known that for unconstrained smooth problems the surrogate from Example \ref{expl:4} allows one to derive convergence rates $\mathcal{O}(k^{-\frac{p}{p+1}})$ for $\|\nabla f(x_k)\|$, see e.g., \cite{cartis2017}. At a closer look one can notice that the surrogate $g$ of Example \ref{expl:4} induces the following inequality on the error function $h$: \begin{align*}
h(y;x) = T_p^f(y;x) + \frac{M_p }{(p+1)!} \|y - x\|^{p+1} - f(y) \geq \frac{M_p - L_p^f}{(p+1)!} \|y - x\|^{p+1} \quad \forall x, y \in \mathbb{E}, \end{align*} where $M_p > L_p^f$. In fact, using the same reasoning as before, it is easy to see that such a relation holds for all the surrogate functions from Examples \ref{expl:8}, \ref{expl:4}, \ref{expl:5} and \ref{expl:6}. Hence, if we additionally assume that for some $\Delta >0$ our surrogate function satisfies the following inequality in terms of the error function: \begin{align}
\label{eq:unifh}
\Delta \|y - x\|^{p+1} \leq h(y;x) \; \left( := g(y;x) - f(y) \right) \quad \forall x,y \in \text{dom} f, \end{align} then we can strengthen the results from Theorem \ref{th:nonconv-gen}, i.e. we get convergence rates. \begin{theorem}
\label{th:noncf1f2}
Let the assumptions from Theorem \ref{th:nonconv-gen} hold. Additionally, assume that our surrogate function satisfies along the iterations of GHOM the relation \eqref{eq:unifh}. Then, the sequence $(x_{k})_{k\geq0}$ generated by GHOM satisfies the following convergence rate in terms of first-order optimality conditions:
\begin{equation*}
\min_{i=1:k} S(x_{i}) \leq \frac{L_p^{h}}{p!}\left(\frac{(f(x_0) - f^*)}{k \cdot \Delta}\right)^{\frac{p}{p+1}}.
\end{equation*} \end{theorem}
\begin{proof} Since the error function $h$ has the $p$ derivative Lipschitz, we have:
\[ \|\nabla h(x_{k+1};x_k) - \nabla T_p^h(x_{k+1};x_k) \|_* \leq \frac{L_p^h}{p!} \| x_{k+1} - x_k \|^p. \] Using that $\nabla T_p^h(x_{k+1};x_k) = 0$ (according to Definition \ref{def:sur} (iii)), we get:
\[ \|\nabla h(x_{k+1};x_k) \|_* \leq \frac{L_p^h}{p!} \| x_{k+1} - x_k \|^p. \] Combining this relation with our assumptions, we further obtain: \begin{align*}
& f(x_k) - f(x_{k+1}) \overset{\eqref{eq: hf}}{\geq} h(x_{k+1};x_k) \overset{\eqref{eq:unifh}}{\geq} \Delta \|x_{k+1} - x_k\|^{p+1} \\
& = \Delta \left( \|x_{k+1} - x_k\|^{p} \right)^{\frac{p+1}{p}} \geq \Delta \left( \frac{p!}{L_p^h} \|\nabla h(x_{k+1};x_k) \|_* \right)^{\frac{p+1}{p}}. \end{align*} Telescoping from $i=0:k-1$ the previous inequality and using that $- \nabla h(x_{k+1};x_k) \in \partial f(x_{k+1})$ (see \eqref{eq:25}), we further get: \begin{align*}
f(x_{0}) -f^* &\geq \Delta \sum_{i=0}^{k-1} \left( \frac{p!}{L_p^h} \|\nabla h(x_{i+1};x_i) \| _* \right)^{\frac{p+1}{p}} \\
&\overset{\eqref{eq:25}}{\geq} \Delta \left( \frac{p!}{L_p^h} \right)^{\frac{p+1}{p}} \sum_{i=0}^{k-1} S(x_{i+1})^{\frac{p+1}{p}} \\
&\geq k \cdot \Delta \left( \frac{p!}{L_p^h} \right)^{\frac{p+1}{p}} \min_{i=0:k-1} S(x_{i+1})^{\frac{p+1}{p}}. \end{align*} After rearranging the terms, we obtain our statement. \end{proof}
\noindent Note that convergence rates similar to those in Theorem \ref{th:noncf1f2} have been obtained in \cite{cartis2017} for unconstrained problems and recently for problems with simple constraints \cite{cartis:2020} using the surrogate of Example \ref{expl:4}. The result of Theorem \ref{th:noncf1f2} is more general as it covers more complicated objective functions (e.g., general composite models) and many other types of surrogate functions (see Examples \ref{expl:8}, \ref{expl:4}, \ref{expl:5} and \ref{expl:6}). Further, for nonconvex unconstrained problems we can also derive convergence rates for the sequence $(x_k)_{k\geq 0}$ generated by GHOM in terms of first- and second-order optimality criteria. We use the notation $a^+ = \max(a, 0)$.
\begin{theorem}
\label{th:soc}
Let $f: \mathbb{R}^n \to \mathbb{R}$ be a $p>1$ times differentiable function with smooth $p$ derivative having Lipschitz constant $L_p^f$. In this case, according to Example \ref{expl:4}, we can consider the surrogate function:
$$g(y;x) = T_p^f(y;x) + \frac{M_p}{(p+1)!} \| y-x\|^{p+1},$$
with $M_p > L_p^f$. Additionally assume that $x_{k+1}$ is a local minimum of subproblem \eqref{eq:sp}. Then, the sequence $(x_{k})_{k\geq0}$ generated by GHOM satisfies the following convergence rate in terms of first and second-order optimality conditions:
\begin{align*}
\min_{i=1:k} & \max \! \left( \! \left(- \lambda_{\min}^+ (\nabla^2 f(x_{i})) \right)^{\frac{p+1}{p-1}}\!,
p^{\frac{p+1}{p}} \lambda_{\max}^{\frac{p+1}{p-1}} (D) \! \left( \! \frac{M_p +L_p^f}{(p-1)!} \! \right)^{\frac{p+1}{p(p-1)}} \!\! \| \nabla f(x_{i})\|_{*}^{\frac{p+1}{p}} \! \right) \\
& \leq \frac{p(p+1)}{((p-1)!)^{\frac{2}{p-1}}} \cdot \frac{(M_p+L_p^f)^{\frac{p+1}{p-1}}}{M_p -L_p^f} \cdot \frac{\lambda_{\max}^{\frac{p+1}{p-1}} (D)}{k} \cdot (f(x_0) - f^*).
\end{align*} \end{theorem}
\begin{proof}
From the descent property \eqref{eq:descGhom}, we have: $g(x_k; x_k) \geq g(x_{k+1};x_k)$. Further, taking into account the property $(iii)$ of the surrogate function, we get:
\begin{align*}
f(x_k) \geq f(x_k) +\sum_{i=1}^{p} \frac{1}{i !} \nabla^{i} f(x_k)[x_{k+1}-x_k]^{i} + \frac{M_p}{(p+1)!} \|x_{k+1}-x_k\|^{p+1}. \end{align*}
If we define $r_{k+1} = \|x_{k+1}-x_k\|$, then we further get:
\begin{equation}
\label{eq:11}
-\frac{M_p}{(p+1)!} r_{k+1}^{p+1} \geq \sum_{i=1}^{p} \frac{1}{i !} \nabla^{i} f(x_k)[x_{k+1}-x_k]^{i}.
\end{equation}
In view of relation \eqref{eq:TayAppBound}, we have:
\begin{align*}
& f(x_{k+1}) \leq T_p^f(x_{k+1}; x_k) + \frac{L_p^f}{(p+1)!} r_{k+1}^{p+1} \\
& \leq f(x_k) +\! \sum_{i=1}^{p} \! \frac{1}{i !} \nabla^{i} f(x_k)[x_{k+1} \!-x_k]^{i} \!+ \frac{L_p^f}{(p+1)!} r_{k+1}^{p+1} \!\overset{\eqref{eq:11}}{\leq}\! f(x_k) \!- \frac{M_p - L_p^f}{(p+1)!} r_{k+1}^{p+1}.
\end{align*}
Hence, we obtain the following descent relation:
\begin{equation} \label{eq:14}
f(x_k) - f(x_{k+1}) \geq \frac{M_p -L_p^f}{(p+1)!} r_{k+1}^{p+1}.
\end{equation}
Further, from the optimal conditions for $x_{k+1}$ we obtain:
\begin{equation}\label{eq:9}
\nabla g(x_{k+1};x_k) = \nabla T_p^f(x_{k+1};x_k) + \frac{M_p}{p!}\| x_{k+1}-x_k\|^{p -1} D (x_{k+1} -x_k) = 0.
\end{equation}
Using inequality \eqref{eq:TayAppG1} for $f$, we get:
\begin{equation*}
\| \nabla f(x_{k+1}) - \nabla T_p^f(x_{k+1}; x_k) \|_{*} \leq \frac{L_p^f}{p!} \| x_{k+1} - x_k\|^p.
\end{equation*}
This yields the following relation:
\begin{align*}
\| \nabla f(x_{k+1})\|_{*} &\leq \|\nabla T_p^f(x_{k+1}; x_k) \|_{*} + \frac{L_p^f}{p!} r_{k+1}^p \\
& \overset{\eqref{eq:9}}{\leq} \| -\frac{M_p}{p!} r_{k+1}^{p-1} D (x_{k+1} - x_k)\|_{*} + \frac{L_p^f}{p!} r_{k+1}^{p}\leq \frac{M_p + L_p^f}{p!} r_{k+1}^p
\end{align*}
or equivalently
\begin{equation}\label{eq:10}
\left(\frac{p!}{M_p + L_p^f}\right)^{\frac{p+1}{p}} \| \nabla f(x_{k+1})\|_{*}^{\frac{p+1}{p}} \leq r_{k+1}^{p+1}.
\end{equation}
Moreover, since we assume that $x_{k+1}$ is a local minimum of $g(\cdot; x_k)$, we have:
$$ \nabla^2 g(x_{k+1}; x_{k}) \succcurlyeq 0. $$
Computing explicitly the above expression, we obtain:
\begin{equation} \label{eq:13}
\nabla^2 T_p^f(x_{k+1}; x_k) + \frac{M_p(p-1)}{p!} r_{k+1}^{p-3} D (x_{k+1} -x_k) (x_{k+1} -x_k)^T D + \frac{M_p}{p!} r_{k+1}^{p-1} D \succcurlyeq 0.
\end{equation}
However, from \eqref{eq:TayAppG2} we have:
\begin{equation*}
\nabla^2 f(x_{k+1}) +\frac{L_p^f}{(p-1)!} r_{k+1}^{p-1} D \succcurlyeq \nabla^2 T_p^f(x_{k+1};x_k).
\end{equation*}
Returning with the above relation in \eqref{eq:13} we get:
\begin{equation*}
\nabla^2 f(x_{k+1}) +\frac{M_p + pL_p^f}{p!} r_{k+1}^{p-1} D + \frac{M_p(p-1)}{p!} r_{k+1}^{p-3} D (x_{k+1} -x_k) (x_{k+1} -x_k)^T D \succcurlyeq 0.
\end{equation*}
Rearranging the terms, we obtain:
\begin{align*}
- \nabla^2 f(x_{k+1}) & \preccurlyeq \frac{M_p + pL_p^f}{p!} r_{k+1}^{p-1} D + \frac{M_p(p-1)}{p!} r_{k+1}^{p-3}D (x_{k+1} -x_k) (x_{k+1} -x_k)^T D \\
&\preccurlyeq \frac{M_p + pL_p^f}{p!} r_{k+1}^{p-1} D + \frac{M_p(p-1)}{p!} r_{k+1}^{p-3} r_{k+1}^{2} D \preccurlyeq \frac{M_p+L_p^f}{(p-1)!} r_{k+1}^{p-1} D.
\end{align*}
Taking the maximum eigenvalue, we obtain:
\begin{align*}
\lambda_{\max} (-\nabla^2 f(x_{k+1})) \leq \frac{M_p+L_p^f}{(p-1)!} r_{k+1}^{p-1} \lambda_{\max} (D) \;\; \text{or, equivalently,} \;\; -\lambda_{\min} (\nabla^2 f(x_{k+1})) \leq \frac{M_p+L_p^f}{(p-1)!} r_{k+1}^{p-1} \lambda_{\max} (D).
\end{align*}
Finally, using the notation $a^+ = \max(a, 0)$ and the fact that if $a \leq b$ for some $b \geq 0$, then also $a^+ \leq b$, the above inequality yields:
\begin{equation}\label{eq:15}
\left(\frac{(p-1)!}{(M_p+L_p^f) \lambda_{\max} (D)}\right)^{\frac{p+1}{p-1}} (- \lambda_{\min}^+ (\nabla^2 f(x_{k+1})))^{\frac{p+1}{p-1}} \leq r_{k+1}^{p+1}.
\end{equation}
By combining \eqref{eq:10} and \eqref{eq:15} we get the following compact form:
\begin{align*}
& \zeta_{k+1} := \max \Bigg( \left(\frac{(p-1)!}{(M_p+L_p^f) \lambda_{\max} (D)}\right)^{\frac{p+1}{p-1}} (- \lambda_{\min}^+ (\nabla^2 f(x_{k+1})))^{\frac{p+1}{p-1}},\\
& \qquad \qquad \qquad \left(\frac{p!}{M_p + L_p^f}\right)^{\frac{p+1}{p}} \| \nabla f(x_{k+1})\|_{*}^{\frac{p+1}{p}} \Bigg) \leq r_{k+1}^{p+1}.
\end{align*}
Telescoping \eqref{eq:14}, we get:
$$ f(x_0) - f^* \geq \frac{M_p -L_p^f}{(p+1)!} \sum_{i=0}^{k-1}r_{i+1}^{p+1} \geq \frac{M_p -L_p^f}{(p+1)!} \sum_{i=0}^{k-1} \zeta_{i+1} \geq \frac{k(M_p -L_p^f)}{(p+1)!} \min_{i=1:k} \zeta_{i}. $$
Rearranging the terms we get the statement of the theorem. \end{proof}
\noindent Note that \cite{BriGar:17, CarGou:11, cartis:2020} obtain convergence results similar to Theorem \ref{th:soc} for their Taylor-based algorithms. However, the update $x_{k+1}$ in these papers must usually satisfy some second-order optimality conditions. E.g., in \cite{CarGou:11} the authors require: $ \max \left(0, -\lambda_{\min}(\nabla^2 g(x_{k+1}; x_k))\right) \leq \theta \| x_{k+1} - x_k \|^{p-1} $ for some $\theta >0$ and $\| \nabla g(x_{k+1}; x_k)\| \leq \theta \| x_{k+1} - x_k\|^p $. In the previous theorem we do not require such optimality conditions on $x_{k+1}$ and thus our proof is different from \cite{BriGar:17, CarGou:11, cartis:2020}.
\subsection{Local convergence of GHOM under the KL property} The KL property was first analyzed in detail in \cite{BolDan:07} and then widely applied to analyze the convergence behavior of various first-order \cite{AttBol:09,BolDan:07} and second-order \cite{FraGar:15,ZhoWan:18} algorithms for nonconvex optimization. \textit{However, to the best of our knowledge there are no studies analyzing the convergence rate of higher-order majorization-minimization algorithms under the KL property}. The main difficulty comes from the fact that $f$ satisfies the KL property while the error function $h$ is assumed smooth, and it is hard to establish connections between them. In the next theorem we connect the geometric property of the nonconvex function $f$ with the smoothness property of $h$ and establish local convergence of GHOM in the full parameter regime of the KL property. \\
\noindent Let us denote the set of limit points of the sequence $(x_k)_{k \geq 0}$ generated by algorithm GHOM by $\Omega(x_0)$. \begin{lemma}
\label{lemma:kl}
Let the assumptions from Theorem \ref{th:nonconv-gen} hold. Assume also that either $f$ is continuous or $x_{k+1}$ is a local minimum of subproblem \eqref{eq:sp}. Then, $\Omega(x_0)$ is compact set and $f$ is constant on $\Omega(x_0)$. \end{lemma}
\begin{proof}
From Theorem \ref{th:nonconv-gen} we have that the sequence $(x_k)_{k \geq 0}$ is bounded, hence the set of limit points $\Omega(x_0)$ is also bounded. Closedness of $\Omega(x_0)$ also follows by observing that $\Omega(x_0)$ can be viewed as an intersection of closed sets, i.e. $\Omega(x_0) = \cap_{j \geq 0} \overline{ \cup_{k \geq j} \{x_k\} }$. Hence, $\Omega(x_0)$ is a compact set and $\text{dist}(x_k, \Omega(x_0)) \to 0$ as $k \to \infty$. Further, let us also show that $f(\Omega(x_0))$ is constant. From Theorem \ref{th:nonconv-gen} we have that $\left(f(x_{k})\right)_{k \geq 0}$ monotonically decreases and since $f$ is assumed bounded from below by $f^* > -\infty$, it converges, let us say to $ f_* > -\infty$, i.e. $f(x_{k}) \to f_*$ as $k \to \infty$. On the other hand, let $x_*$ be a limit point of the sequence $(x_k)_{k \geq 0}$. This means that there is a subsequence $(x_{k_j})_{j \geq 0}$ such that $ x_{k_j} \to x_*$ as $j \to \infty$. If $f$ is continuous, then $\lim_{j \to \infty} f(x_{k_j}) = f(x_*)$ and $ f(x_*) = \lim_{j \to \infty} f(x_{k_j}) = f_*$. If $x_{k+1}$ is a local minimum of \eqref{eq:sp}, then from the lower semicontinuity of $f$ we always have: $$\liminf_{j \to \infty} f(x_{k_j}) \geq f(x_*).$$ Since we assume $x_{k_j}$ to be a local minimum of $g(\cdot; x_{k_j-1})$, there exists $\delta_j >0$ such that $ g(x_{k_j}; x_{k_j-1}) \leq g(y; x_{k_j-1}) $ for all $\|y - x_{k_j} \| \leq \delta_j$. As $ x_{k_j} \to x_*$, there exists $j_0$ such that for all $j \geq j_0$ we have $\|x_* - x_{k_j} \| \leq \delta_j$ and consequently $ g(x_{k_j}; x_{k_j-1}) \leq g(x_*; x_{k_j-1}) $. Since $g = h+ f$, we get $ h(x_{k_j}; x_{k_j-1}) + f( x_{k_j}) \leq h(x_*; x_{k_j-1}) + f(x_*)$ for all $j \geq j_0$. Using that $h(\cdot; x_{k_j-1})$ is continuous and taking $\limsup$, we get: $$\limsup_{j \to \infty} f(x_{k_j}) \leq f(x_*).$$ Hence, we get $\lim_{j \to \infty} f(x_{k_j}) = f(x_*)$ and $ f(x_*) = \lim_{j \to \infty} f(x_{k_j}) = f_*$. In conclusion, we have $ f(\Omega(x_0)) = f_*$. \end{proof}
\noindent From the previous lemma we note that all the conditions of the KL property from Definition \ref{def:kl} are satisfied.
\begin{theorem}
\label{th:nonconv-gen-kl}
Let the objective function $f$ be proper and lower semicontinuous (possibly nonconvex), satisfy the KL property \eqref{eq:kl} for some $q>1$ and admit a $p \geq 1$ higher-order surrogate function $g(\cdot;x)$ at any $x \in \text{dom} f$ as given in Definition \ref{def:sur}. Assume also that either $f$ is continuous or $x_{k+1}$ is a local minimum of subproblem \eqref{eq:sp}. Then, the sequence $(x_k)_{k \geq 0}$ generated by GHOM satisfies:
\begin{enumerate}
\item If $q > p+1$, then $f(x_k)$ converges locally to $f_*$ at a superlinear rate.
\item If $q = p+1$, then $f(x_k)$ converges locally to $f_*$ at a linear rate.
\item If $q < p+1$, then $f(x_k)$ converges locally to $f_*$ at a sublinear rate.
\end{enumerate} \end{theorem}
\begin{proof}
Note that the set of limit points $\Omega(x_0)$ of the sequence $(x_k)_{k \geq 0}$ generated by GHOM is compact, $\text{dist}(x_k, \Omega(x_0)) \to 0$ as $k \to \infty$ and $f(\Omega(x_0))$ is constant taking value $f_*$ (see Lemma \ref{lemma:kl}). Moreover, we have that $\left(f(x_{k})\right)_{k \geq 0}$ monotonically decreases and converges to $ f_*$. Then, for any $\delta, \epsilon >0$ there exists $k_0$ such that:
\[ x_k \in \{x: \text{dist}(x, \Omega(x_0)) \leq \delta, f_* < f(x) < f_* + \epsilon \} \quad k \geq k_0. \]
Hence, all the conditions of the KL property from Definition \ref{def:kl} are satisfied and we can exploit the KL inequality \eqref{eq:kl}. Combining \eqref{eq: hf} with \eqref{eq: hy} we get:
\begin{align}
\label{eq:hfy}
\frac{M_p^h - L_p^h}{(p+1)!} \| y_{k+1} - x_{k+1}\|^{p+1} \leq f(x_k) - f(x_{k+1}) \quad \forall k \geq 0,
\end{align} for any fixed $M_p^h > L_p^h$. From $x_{k+1}$ being a stationary point of subproblem \eqref{eq:sp}, we have $0 \in \partial g(x_{k+1};x_k)$ (Fermat's rule \cite{Roc:70}) and from $h= g-f$ we get $- \nabla h(x_{k+1};x_k) \in \partial f(x_{k+1})$ . Then, using \eqref{eq:optcondy} and the definition of $H_{k+1}$ from \eqref{eq:hk+1},
we obtain:
\begin{align}
\label{eq:nfyx}
S(x_{k+1}) & = \text{dist} (0, \partial f(x_{k+1})) \leq \| \nabla h(x_{k+1};x_{k}) \|_* = \| H_{k+1}(y_{k+1}-x_{k+1}) \|_* \nonumber \\
& \leq c \cdot \| y_{k+1}-x_{k+1} \| \quad \forall k \geq 0,
\end{align}
where $c = \max_{k \geq 0} \| H_{k+1}\| < \infty$ (see \eqref{eq:Hyx}). From KL property \eqref{eq:kl}, we further get:
\begin{align*}
f(x_{k+1}) - f_* & \leq \sigma_q S(x_{k+1})^q \overset{\eqref{eq:nfyx} }{\leq} \sigma_q c^q \| y_{k+1}-x_{k+1} \|^q \\
& \overset{\eqref{eq:hfy} }{\leq} \sigma_q c^q \left( \frac{(p+1)!}{M_p^h - L_p^h} \right)^{\frac{q}{p+1}} (f(x_k) - f(x_{k+1}))^{\frac{q}{p+1}} \quad \forall k \geq k_0.
\end{align*}
Let us denote $\Delta_k = f(x_k) - f_*$ and $C = \sigma_q c^q \left( \frac{(p+1)!}{M_p^h - L_p^h} \right)^{\frac{q}{p+1}} $. Then, we obtain:
\begin{align}
\label{eq:lkyx0}
\Delta_{k+1} \leq C (\Delta_k - \Delta_{k+1})^{\frac{q}{p+1}} \quad \forall k \geq k_0.
\end{align}
If we denote $ \Delta_k = C^{\frac{p+1}{p+1-q}} \tilde \Delta_k $, we further get the recurrence:
\begin{align}
\label{eq:lkyx}
\tilde \Delta_{k+1}^{\frac{p+1}{q}} \leq \tilde \Delta_k - \tilde \Delta_{k+1} \quad \forall k \geq k_0.
\end{align}
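Indeed, this substitution can be verified directly: plugging $\Delta_k = C^{\frac{p+1}{p+1-q}} \tilde \Delta_k$ into \eqref{eq:lkyx0} and using the identity $\frac{p+1}{p+1-q} = 1 + \frac{q}{p+1} \cdot \frac{p+1}{p+1-q}$, we obtain:
\begin{align*}
C^{\frac{p+1}{p+1-q}} \tilde \Delta_{k+1} \leq C \cdot C^{\frac{q}{p+1} \cdot \frac{p+1}{p+1-q}} \left( \tilde \Delta_k - \tilde \Delta_{k+1} \right)^{\frac{q}{p+1}} = C^{\frac{p+1}{p+1-q}} \left( \tilde \Delta_k - \tilde \Delta_{k+1} \right)^{\frac{q}{p+1}} \quad \forall k \geq k_0,
\end{align*}
and dividing both sides by $C^{\frac{p+1}{p+1-q}}$ and raising the resulting inequality to the power $\frac{p+1}{q}$ yields \eqref{eq:lkyx}.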
We distinguish the following cases:\\
\textit{Case (i):} $q > p+1$. Then, from \eqref{eq:lkyx} we have:
\[ \tilde \Delta_{k+1} \leq \frac{1}{ 1 + \tilde \Delta_{k+1}^{\frac{p+1}{q} -1}} \tilde \Delta_k, \]
and since $f(x_k) \to f_*$ it follows that $\tilde \Delta_{k+1} \to 0$ and hence $\tilde \Delta_{k+1}^{\frac{p+1}{q} -1} \to \infty$ as $\frac{p+1}{q} -1 <0$. Therefore, in this case $f(x_k)$ locally converges to $f_*$ at a superlinear rate. \\
\textit{Case (ii):} $q = p+1$. Then, from \eqref{eq:lkyx0} we have:
\[ (1 +C) \Delta_{k+1} \leq C \Delta_{k}, \]
hence in this case $f(x_k)$ locally converges to $f_*$ at a linear rate. \\
\textit{Case (iii):} $q < p+1$. Then, from \eqref{eq:lkyx} we have:
\[ \tilde \Delta_{k+1}^{1 + \frac{p+1-q}{q}} \leq \tilde \Delta_k - \tilde \Delta_{k+1}, \; \text{with} \; \frac{p+1-q}{q} >0, \]
and from Lemma 11 in \cite{Nes:19Inxt} we have for some constant $\alpha>0$:
\[ \tilde \Delta_{k} \leq \frac{ \tilde \Delta_{k_0} }{ (1 + \alpha (k-k_0))^ \frac{q}{p+1-q}} \quad \forall k \geq k_0. \] Therefore, in this case $f(x_k)$ locally converges to $f_*$ at a sublinear rate. \end{proof}
\section{Conclusions} This paper has explored the convergence behaviour of higher-order majorization-minimization algorithms for minimizing (non)convex functions that admit a surrogate model such that the corresponding error function has a $p \geq 1$ higher-order Lipschitz continuous derivative. Under these settings we derived global convergence results for our algorithm in terms of function values or first-order optimality conditions. Faster local rates of convergence were established under uniform convexity or KL property of the objective function. Moreover, for unconstrained nonconvex problems we derived convergence rates in terms of first- and second-order optimality conditions. As a future direction, it is interesting to extend these results to the stochastic settings, e.g., minimizing finite sum objective functions using a stochastic variant of algorithm GHOM.
\section{Appendix} \label{apendix} In this appendix, for completeness and to facilitate a better understanding of our theory, we provide the proofs of some known results.
\subsection{Smoothness verification} \noindent \textit{Proof of Example \ref{expl:1}:} This result was given in \cite{NesRod:20}. For consistency we also provide a proof. Note that we have the following polynomial description of the $q$ derivative of $f$: \begin{equation} \label{eq:ex1-1}
\nabla^q f_{p+1}(x)[h]^q = \| x -x_0 \|^{p+1-q} g_{q,p+1}(\tau_h(x)), \end{equation} where $h \in \mathbb{E}$ is an arbitrary unit vector and \[ \tau_{h}(x):=\left\{\begin{array}{ll}
\frac{\langle D( x-x_0) , h\rangle}{\|x-x_0\|}, & \text { if } x \neq x_0 \\
0, & \text { if } x=x_0. \end{array}\right. \] The polynomial $g_{q,p+1}$ is a combination of the previous polynomial $g_{q-1,p+1}$ and its derivative $g^{'}_{q-1,p+1}$:
$$g_{q,p+1}(\tau):=\left(1-\tau^{2}\right) g_{q-1, p+1}^{\prime}(\tau)+(p-q+2) \tau g_{q-1, p+1}(\tau)\quad \forall q\geq 1.$$ When $q =0, \, g_{q, p+1}(\tau) $ is set to 1. For \eqref{eq:1} to hold it is sufficient to show that:
$$\left|\nabla^{p+1} f_{p+1}(x)[h]^{p+1}\right| \leq(p+1) ! \quad \forall\, x,h \in \mathbb{E}. $$ Considering \eqref{eq:ex1-1}, we have: $$ \nabla^{p+1} f_{p+1}(x)[h]^{p+1} = g_{p+1,p+1}(\tau_h(x)). $$
From the Cauchy-Schwarz inequality we obtain that $\left|\tau_{h}(x)\right| \leq 1$ and therefore: $$
\left|\nabla^{p+1} f_{p+1}(x)[h]^{p+1}\right| = \left| g_{p+1,p+1}(\tau_h(x))\right| \leq \max_{\tau \in [-1,1]}\left|g_{p+1, p+1} (\tau)\right|. $$ However, by induction we can easily prove that (see also Proposition 4.5 in \cite{NesRod:20}): $$
\max_{[-1,1]}\left|g_{p+1, p+1}\right| = \prod_{i=0}^{p}(p+1-i) = (p+1)!, $$ which concludes the statement of Example \ref{expl:1}.
\noindent \textit{Proof of Example \ref{expl:2}:} This result has been proved in \cite{NesDoi:20}. Let us denote for simplicity $\kappa(x)=\sum_{i=1}^{m} e^{\left\langle a_{i}, x\right\rangle} .$ Then, for all $x \in \mathbb{E}$ and $h \in \mathbb{E},$ we have: \begin{align*}
& \langle\nabla f(x), h\rangle =\frac{1}{\kappa(x)} \sum_{i=1}^{m} e^{\left\langle a_{i}, x\right\rangle}\left\langle a_{i}, h\right\rangle, \\
&\left\langle\nabla^{2} f(x) h, h\right\rangle =\frac{1}{\kappa(x)} \sum_{i=1}^{m} e^{\left\langle a_{i}, x\right\rangle}\left(\left\langle a_{i}, h\right\rangle-\langle\nabla f(x), h\rangle\right)^{2} \leq \sum_{i=1}^{m}\left\langle a_{i}, h\right\rangle^{2}=\|h\|^{2}. \end{align*}
Taking maximum over $\|h\|=1$ in the previous expression we get that $\| \nabla^{2} f(x) \| \leq 1$, hence $L_1^f=1$. Similarly, for $p=2$ we have: \begin{align*}
\nabla^{3} f(x)[h]^{3} & =\frac{1}{\kappa(x)} \sum_{i=1}^{m} e^{\left\langle a_{i}, x\right\rangle}\left(\left\langle a_{i}, h\right\rangle-\langle\nabla f(x), h\rangle\right)^{3} \\
& \leq\left\langle\nabla^{2} f(x) h, h\right\rangle \max _{1 \leq i, j \leq m}\left\langle a_{i}-a_{j}, h\right\rangle \leq 2\|h\|^{3}. \end{align*}
Taking again maximum over $\|h\|=1$ in the previous expression we obtain that $\| \nabla^{3} f(x) \| \leq 2$, hence $L_2^f=2$. Finally, for $p=3$ we have: \begin{align*}
\nabla^{4} f(x)[h]^{4} & =\frac{1}{\kappa(x)} \sum_{i=1}^{m} e^{\left\langle a_{i}, x\right\rangle}\left(\left\langle a_{i}, h\right\rangle-\langle\nabla f(x), h\rangle\right)^{4}-3\left\langle\nabla^{2} f(x) h, h\right\rangle^{2} \\
& \leq \nabla^{3} f(x)[h]^{3} \max _{1 \leq i, j \leq m}\left\langle a_{i}-a_{j}, h\right\rangle \leq 4\|h\|^{4}. \end{align*}
Proceeding as before, i.e. taking maximum over $\|h\|=1$ in the previous expression, we get that $\| \nabla^{4} f(x) \| \leq 4$, hence $L_3^f=4$. These prove the statements of Example \ref{expl:2}.
\noindent \textit{Proof of Example \ref{expl:3}:} Since $\nabla^p T_p^f(y;x) = \nabla^p f(x)$ (i.e. the $p$ derivative is constant for all $y$), we have: \begin{equation*}
\|\nabla^p T_p^f(y;x) -\nabla^p T_p^f(z;x) \| = \|\nabla^p f(x) -\nabla^p f(x) \| = 0 \leq L_p^{T_p^f} \| y-z\| \quad \forall y,z. \end{equation*}
for any $L_p^{T_p^f} \geq 0$. Moreover, the $p$ Taylor approximation of $f$ has also the $p-1$ derivative Lipschitz with constant $L_{p-1}^{T_p^f} = \|\nabla^p f(x)\|$. These prove the statements of Example \ref{expl:3}.
\noindent \textit{Proof of Lemma \ref{lema:conv}:} This key result has been proved in \cite{Nes:19}. Note that for any $p \geq 2$ we have: \begin{equation} \label{eq:4}
\nabla^{2} (\frac{1}{p}\|x\|^p) =(p-2)\|x\|^{p-4} D x x^{*} D+\|x\|^{p-2} D \succcurlyeq \|x\|^{p-2} D. \end{equation} Fixing an arbitrary $x$ and $y$ from $\text{dom}\,f$, we have for any direction $d \in \mathbb{E}$ the following: \begin{align*}
\langle (\nabla^2 f(y) -\nabla^2 T_p^f(y;x))d,d\rangle & \leq \|\nabla^2 f(y) -\nabla^2 T_p^f(y;x) \| \|d \|^2 \\
& \overset{\eqref{eq:TayAppG2}}{\leq} \frac{L_p^f}{(p-1)!} \|y-x\|^{p-1} \|d\|^2. \end{align*} This implies that: \begin{equation}\label{eq:3}
\nabla^2 f(y) -\nabla^2 T_p^f(y;x) \preccurlyeq \frac{L_p^f}{(p-1)!} \|y-x\|^{p-1} D. \end{equation} Further, from the convexity of $f$ and $M_p \geq pL_p^f$, we get: \begin{equation*}
\begin{aligned}
0 \preccurlyeq \nabla^2 f(y) &\overset{\eqref{eq:3}}{\preccurlyeq} \nabla^2 T_p^f(y;x) + \frac{L_p^f}{(p-1)!} \|y-x\|^{p-1}D\\
& \overset{\eqref{eq:4}}{\preccurlyeq} \nabla^2 T_p^f(y;x) + \frac{pL_p^f}{(p+1)!} \nabla^2 ( \|y-x\|^{p+1}) \\
& \preccurlyeq \nabla^2 T_p^f(y;x) + \frac{M_p}{(p+1)!} \nabla^2 ( \|y-x\|^{p+1})= \nabla^2 g(y;x).
\end{aligned} \end{equation*} Thus, $g(y;x)$ is convex in $y$. This proves the statement of Lemma \ref{lema:conv}.
\subsection{Surrogate properties verification}
\noindent\textit{Verification of Example \ref{expl:8}:}
The first property of the surrogate function is straightforward. Next we need to verify that the error function has the $p$ derivative Lipschitz:
$$
h(y;x) = g(y;x) - f(y) = \frac{M_p}{(p+1)!} \| y-x \|^{p+1},
$$
which according to Example \ref{expl:1} has the $p$ derivative Lipschitz with constant $L_{p}^{h}=M_p$.
For the last property of a surrogate function, we notice that $\nabla^i( \| y-x \|^{p+1})_{|_{\substack{y=x}}} = 0$ for all $i =0:p$. Thus, we have that $\nabla^i h(x;x) = 0 \;\; \forall i =0:p$, i.e. condition (iii) from Definition \ref{def:sur}.
\noindent\textit{Verification of Example \ref{expl:4}:} In order to prove condition (i) from Definition \ref{def:sur} we use \eqref{eq:TayAppBound}, i.e.: \begin{equation*}
\begin{aligned}
f(y) &\leq T_p^f(y;x) + \frac{L_p^f}{(p+1)!} \| y-x \|^{p+1}\\
& \leq T_p^f(y;x) + \frac{M_p}{(p+1)!} \| y-x \|^{p+1} = g(y;x),
\end{aligned} \end{equation*} where the last inequality holds since $M_p \geq L_p^f $. The second property of a $p$ higher-order surrogate function requires that the error function $h(y;x) = g(y;x) -f(y) $ has the $p$ derivative smooth, where: $$
h(y;x) = T_p^f(y;x) + \frac{M_p}{(p+1)!} \| y-x \|^{p+1} - f(y). $$
We observe that the first term, $T_p^f$, has the $p$ derivative smooth with Lipschitz constant $0$ due to Example $\ref{expl:3}$. The $p$ derivative smoothness of the second term is covered by Example \ref{expl:1} and has the Lipschitz constant equal to $M_p$. The last term, $f$, has $p$ derivative Lipschitz with constant $L_p^f$ according to our assumption. Since $h$ is a sum of these three functions, then it is also $p$ derivative smooth with Lipschitz constant $L_p^h = M_p + L_p^f$. Hence, condition (ii) from Definition \ref{def:sur} also holds. For the last property of a surrogate function, we notice that $\nabla^i T_p^f(y;x)_{|_{\substack{y=x}}} = \nabla^i f(x)$ and $\nabla^i( \| y-x \|^{p+1})_{|_{\substack{y=x}}} = 0$ for all $i =0:p$. Thus, we have that $\nabla^i h(x;x) = 0 \;\; \forall i =0:p$, i.e. condition (iii) from Definition \ref{def:sur}.
\noindent\textit{Verification of Example \ref{expl:5}:} From relation \eqref{eq:TayAppBound} and $M_p \geq L_p^{f_2}$, we have: \begin{equation} \label{eq:expl5-1}
f_2(y) \leq T_p^{f_2}(y;x)+ \frac{L_p^{f_2}}{(p+1)!} \|y-x\|^{p+1}\leq T_p^{f_2}(y;x) +\frac{M_p}{(p+1)!} \|y -x\|^{p+1}. \end{equation} Adding $f_1(y)$ on both sides of \eqref{eq:expl5-1} we obtain: \begin{equation*}
f(y) = f_1(y) +f_2(y) \leq f_1(y)+ T_p^{f_2}(y;x) +\frac{M_p}{(p+1)!} \|y -x\|^{p+1}= g(y;x), \end{equation*} which leads to condition (i) from Definition \ref{def:sur}. Further, for proving the second property of a surrogate function we write explicitly the expression of $h$: \begin{align*}
h(y;x) & = f_1(y) + T_p^{f_2}(y;x) +\frac{M_p}{(p+1)!} \|y -x\|^{p+1} - f_1(y) -f_2(y) \\
& = T_p^{f_2}(y;x) +\frac{M_p}{(p+1)!} \|y -x\|^{p+1}- f_2(y). \end{align*} We observe that the first term, $T_p^{f_2}$, has the $p$ derivative smooth with Lipschitz constant $0$ according to Example $\ref{expl:3}$. The $p$ derivative smoothness of the second term is covered by Example \ref{expl:1} and has the Lipschitz constant equal to $M_p$. The last term, $f_2$, has the $p$ derivative Lipschitz with constant $L_p^{f_2}$ according to our assumption. Since $h$ is a sum of these three functions, then it has also the $p$ derivative smooth with the Lipschitz constant $L_p^h = M_p + L_p^{f_2}$. Hence, property (ii) from Definition \ref{def:sur} also holds. For the last property of a surrogate function, we write explicitly the expression of $\nabla^i h(x;x)$ for $i=0:p$: \begin{align*}
\nabla^i h(x;x) & = \nabla^i T_p^{f_2}(y;x)_{|_{\substack{y=x}}} +\frac{M_p}{(p+1)!} \nabla^i(\|y -x\|^{p+1})_{|_{\substack{y=x}}} -\nabla^i f_2(x) \\
& = \nabla^i T_p^{f_2}(y;x)_{|_{\substack{y=x}}} - \nabla^i f_2(x) = 0. \end{align*}
The last equality is given by the fact that $\nabla^i T_p^{f_2}(y;x)_{|_{\substack{y=x}}} = \nabla^i f_2(x)$ for all $i=0:p$. This proves that the property (iii) from Definition \ref{def:sur} also holds.
\noindent\textit{Verification of Example \ref{expl:6}:} From Taylor's theorem we have that there $\exists\, t \in [0,\, 1]$ such that: $$ f(y) = T_p^{f}(y;x) + \frac{1}{(p+1)!} \langle \nabla^{p+1} f(x + t(y-x)) [y-x]^p, y-x\rangle, $$ and using that $\nabla^{p+1}f(x) \preccurlyeq H$ we obtain: \begin{equation*}
\begin{aligned}
f(y)
& \leq T_p^{f}(y;x) + \frac{1}{(p+1)!} \langle H [y-x]^p, y-x\rangle = g(y;x).
\end{aligned} \end{equation*} Thus, the first property of a higher-order surrogate function holds. Further, we write explicitly the error function $h$ as: $$ h(y;x) = T_p^f(y;x) + \frac{1}{(p+1)!} \langle H[y-x]^p, y-x \rangle - f(y). $$
We observe that the first term, $T_p^{f}$, is $p$ derivative smooth with Lipschitz constant $0$ due to Example $\ref{expl:3}$. The $p$ derivative smoothness of the second term is covered by Example \ref{expl:1} and has the Lipschitz constant equal to $\|H\|$. The last term, $f$, has the $p$ derivative Lipschitz with constant $L_p^{f}$ according to our assumption (boundedness of the $p+1$ derivative of $f$ implies Lipschitz continuity of the $p$ derivative of $f$). Since $h$ is a sum of these three functions, then $h$ has also the $p$ derivative smooth with Lipschitz constant $L_p^h = \| H\| + L_p^{f}$. For the last property of a surrogate function, we write the expression of $\nabla^i h(x;x)$ explicitly: \begin{equation*}
\begin{aligned}
\nabla^i h(x;x) &= \nabla^i T_p^{f}(y;x)_{|_{\substack{y=x}}} + \frac{1}{(p+1)!} \nabla^i(\langle H[y-x]^p, y-x \rangle)_{|_{\substack{y=x}}} - \nabla^i f(x)\\
&= \nabla^i T_p^{f}(y;x)_{|_{\substack{y=x}}} - \nabla^i f(x) = 0 \quad \forall i =0:p,
\end{aligned} \end{equation*}
since $\nabla^i T_p^f(y;x)_{|_{\substack{y=x}}} = \nabla^i f(x)$ and $\nabla^i (\langle H[y-x]^p, y-x \rangle)_{|_{\substack{y=x}}} = 0$ for all $i =0:p$. Thus, the last property of a higher-order surrogate function (i.e. condition (iii) from Definition \ref{def:sur}) also holds.
\noindent\textit{Verification of Example \ref{expl:7}:} Since $\phi$ is convex over $\mathbb{R}^m$ it follows that it is Lipschitzian relative to any compact set from $\mathbb{R}^m$ (see e.g., Theorem 10.4 in \cite{Roc:70}). Then, there exists $0 < L_0^\phi < \infty$ such that: \begin{align*}
\phi(F(y)) - \phi(F(x) + \nabla F(x)(y-x)) & \leq L_0^\phi \| F(y) - F(x) - \nabla F(x)(y-x) \| \\
& \leq L_0^\phi \frac{L_1^F}{2} \|y -x\|^{2} \quad \forall x,y \in \text{dom} \, f_1, \end{align*} where the second inequality follows from $\nabla F$ being Lipschitz. This proves that the surrogate function \eqref{eq:comp1} satisfies $g(y;x) \geq f(y)$. The Lipschitz property of the Jacobian matrix $\nabla F$ implies immediately that the second surrogate \eqref{eq:comp2} also majorizes $f$. The other properties follow from the simple observations: \begin{align*}
& \nabla h(y;x) = \nabla F(x) \nabla \phi (F(x) + \nabla F(x)(y-x)) + M(y-x) - \nabla F(y) \nabla \phi (F(y)), \\
& \nabla \bar h(y;x) = \nabla F(y) \nabla \phi (F(x)) + M \nabla F(y) (F(y)-F(x)) - \nabla F(y) \nabla \phi (F(y)), \end{align*} respectively.
\section*{Acknowledgments}
\noindent The research leading to these results has received funding from the NO Grants 2014–2021, under project ELO-Hyp, contract nr. 24/2020.
\end{document}
Environmental Sciences Europe
Research | Open Access | Published: 26 February 2019
Impact of silver nanoparticles (AgNP) on soil microbial community depending on functionalization, concentration, exposure time, and soil texture
Anna-Lena Grün1,6,
Werner Manz1,
Yvonne Lydia Kohl2,
Florian Meier3,
Susanne Straskraba4,
Carsten Jost5,
Roland Drexel3 &
Christoph Emmerling (ORCID: orcid.org/0000-0002-1286-7504)6
Environmental Sciences Europe, volume 31, Article number: 15 (2019)
Increasing exposure to engineered inorganic nanoparticles currently takes place in both terrestrial and aquatic ecosystems worldwide. Although harmful effects of AgNP on the soil bacterial community are already known, information about the impact of the factors functionalization, concentration, exposure time, and soil texture on the expression of AgNP effects is still scarce. Hence, in this study, three soils of different grain size were exposed for up to 90 days to bare and functionalized AgNP at concentrations ranging from 0.01 to 1.00 mg/kg soil dry weight. Effects on the soil microbial community were quantified by various biological parameters, including 16S rRNA gene, photometric, and fluorescence analyses.
Multivariate data analysis revealed significant effects of AgNP exposure for all factors and factor combinations investigated. Analysis of individual factors (silver species, concentration, exposure time, soil texture) in the unifactorial ANOVA explained the largest part of the variance compared to the error variance. In-depth analysis of factor combinations revealed even better explanation of variance. For the biological parameters assessed in this study, the matching of soil texture and silver species, and the matching of soil texture and exposure time were the two most relevant factor combinations. The factor AgNP concentration contributed to a lower extent to the effect expression compared to silver species, exposure time and the physico–chemical composition of soil.
The factors functionalization, concentration, exposure time, and soil texture significantly impacted the effect expression of AgNP on the soil microbial community. Especially long-term exposure scenarios are strongly needed for the reliable environmental impact assessment of AgNP exposure in various soil types.
The production volume and the application fields of silver nanoparticles (AgNP) have increased continuously over the last decade owing to their unique properties such as high surface-to-volume ratio, high chemical reactivity, and specific optical properties [1,2,3]. Apart from the initial medical utilization, AgNP are nowadays used in households, industry and agriculture, for example for water purification, plant growth promotion and textile cleaning [4, 5]. In consequence, their emission into the environment during all stages of the life cycle, including production, product use, disposal and weathering, is unavoidable [6]. The exact extent of the release is unknown due to the lack of reliable and robust analytical methods for detecting trace concentrations of AgNP in complex matrices [7]. Thus, several scientists have modeled the fate and concentrations of AgNP in the environment and named the soil compartment as one of the main sinks of AgNP released into the environment [2, 6, 8,9,10,11,12,13,14]. For Europe, an annual AgNP increase of 0.6 t and 2.09 t was calculated for soils and sediments, respectively [8].
Today, information about the impact of AgNP on the soil microbiome is still scarce, although microbial communities are important and sensitive targets for determining the environmental hazards of AgNP [15]. Recently, we registered significant negative effects on soil microbial biomass (− 38.0%), bacterial ammonia oxidizers (− 17.0%), and the beta-Proteobacteria population (− 14.2%) after 1-year exposure to 0.01 mg AgNP/kg in a loamy soil, while Acidobacteria (44.0%), Actinobacteria (21.1%) and Bacteroidetes (14.6%) were significantly stimulated [16, 17]. Therefore, a detrimental disturbance of soil ecosystem functions, such as nitrification, organic carbon transformation and chitin degradation, could be assumed.
Numerous studies documented the differing physico–chemical and concomitant toxicological behavior of AgNP in dependence of the soil type. As a function of pH, ionic strength, temperature, amount of dissolved ions and of natural organic matter, oxygen concentration, grain size distribution and others [18,19,20,21], AgNP could undergo various physico–chemical transformations such as reduction, oxidation, aggregation, dissolution, complexation and further secondary reactions [21,22,23,24]. Consequently, these transformations in turn affect the toxicity mechanism of AgNP as well as their bioavailability. For example, in comparative studies with different soil types, Schlich and Hund-Rinke [19] as well as Rahmatpour et al. [25] showed that AgNP caused lower toxicity in soils with higher clay content due to the AgNP immobilization by heteroaggregation with clay particles [19, 23, 25].
In addition, the AgNP species itself may significantly impact its environmental behavior. Their size, shape, surface-coating agent, charge and stability are only a few of the properties by which AgNP can differ [1]. Today, extensive functionalization strategies are available to modify the surface chemistry of a variety of engineered nanoparticles (NP) [26]. Those coatings are used to stabilize NP against aggregation when stable suspensions are required for product functionality or for improved delivery of the product. The coating may also provide other functionalities, such as biocompatibility or targeting of specific cells in biomedical applications [27]. For example, AgNP can be coated with citrate or polyvinylpyrrolidone to increase their stability [28], modified with ATP to act as a selective antibiotic [29] or equipped with COOH– and NH2-groups to affect their surface charge in terms of their function in imaging and drug delivery [30]. Once released into the environment, the surface functionalization of AgNP significantly determines its physico–chemical fate, its bioavailability and its toxicity [22, 26]. For example, Wu et al. [31] observed in a nanocosm experiment that polyethylene glycol-coated AgNP had the highest overall toxicity, followed by silica-coated AgNP, and lastly aminated silica-coated AgNP, due to their different dissolution rates and thus stability.
In addition to the soil type and the AgNP functionalization, several studies documented a significant impact of the exposure time on the toxicity of AgNP in soils [16, 32,33,34]. By a statistically significant regression and correlation analysis between silver toxicity and exposure time, we recently confirmed loamy soils as a sink for silver nanoparticles and their concomitant silver ions due to ageing processes of the silver species and their slow return to the biological soil system [17].
Considering the predicted increase of AgNP input into the soil environment, the known toxicity of AgNP to the soil microbial community as well as the variable fate of AgNP in the soil compartment, the aim of this study was to give a more holistic view of the impact of AgNP exposure on the soil microbial community in dependence of the factors AgNP functionalization, AgNP concentration, exposure time, and soil texture. The study was conducted with a long-term incubation period of 90 days using three soil textures (loam, clay, sand) and two differently charged AgNP at concentrations in a range of 0.01–1.00 mg AgNP/kg soil. We quantified the effects on several biological parameters: the microbial biomass, the abundance of bacteria, the enzymatic activity as well as marker genes for selected processes of the inorganic nitrogen cycle and for selected higher bacterial taxa. Furthermore, we used AgNO3 as a control to determine the effect of Ag+ ions on the AgNP results. Based on our preceding observation that the nitrate content of 36.5% in the AgNO3 compound might also have effects on the microbial community [16], we used NO3− as a further control. This experiment was restricted to the loamy soil. Here, we analysed the impact of charged and uncharged AgNP as well as the effects of AgNO3 and NO3− on the microbial community.
Silver nanoparticles and controls
Two differently functionalized AgNP were used, Ag10-COOH functionalized with carboxy groups and Ag10-NH2 functionalized with amino groups. The AgNP were synthesized by a ligand exchange starting from hydrophobic silver particles (Ag-HPB) and the addition of toluene and mercaptopropionic acid or cysteamine hydrochloride in MeOH to receive the final Ag10-COOH or Ag10-NH2 colloidal solutions. The concentration of the stock solutions were 180 mg/L for Ag10-COOH and 21 mg/L for Ag10-NH2. Size, shape and nanoparticle surface charge (ζ-potential) of AgNP were analysed by transmission electron microscopy (Philips CM 12, Netherlands), dynamic light scattering (DLS, Zetasizer Nano S, Malvern Instruments Ltd., UK), asymmetrical flow field-flow fractionation (AF4, AF2000 MT, Postnova Analytics GmbH, Germany) and Laser-Doppler-microelectrophoresis (Malvern Zetasizer Nano-ZS, Malvern Instruments Ltd., UK).
Methods of synthesis and particle characterization can be found in detail in Additional file 1.
Additionally to the analysis of the stock solution, ζ-potential and hydrodynamic diameter of the AgNP were determined at different pH values (pH 4, pH 7.4 and pH 10). Prior to the measurements the stock solution was diluted in pure water (Millipore) with the pH-values 4, 7.4 or 10 to a concentration of 10 µg/mL, vortexed for 10 s, incubated for 1 h or 24 h under permanent rotation (100 rpm at 37 °C) and vortexed for 30 s prior to analysis via Zetasizer Nano-ZS.
Silver nitrate (AgNO3) was used as a positive control. Silver concentrations in the AgNO3 controls were the same as those in the AgNP treatments. As a further control, NO3− was used in the form of KNO3. The nitrate concentrations in the KNO3 controls were the same as those in the AgNO3 treatments.
Test soils
Three soil textures were selected: a silty sand, a loamy clay and a silty loam. Approximately 20 kg of each soil was sampled from the A-horizon (0–30 cm depth) in spring 2015 near Trier, Germany. Land-use was forest for the sandy soil and arable field for the clayey and the loamy soil. The soils had a clay content of 0–5% (sandy), 45–65% (clayey), and 25–35% (loamy), respectively. After sampling, the soils were thoroughly sieved to < 2 mm and stored at 6 °C until further use. Characteristic soil parameters are listed in Table 1.
Table 1 Characterization of the test soils
Before starting the experiment, the soil was moistened and incubated at 18 °C for 7 days. The application of the test materials was performed in petri dishes, each filled with soil equivalent to 25 g dry weight.
AgNP test solutions were prepared immediately before use: AgNP stock solutions were sonicated at 42 W/L for 15 min and gradually diluted with UPW. Then, 1 mL of the agent species Ag10-COOH, Ag10-NH2, AgNO3 or KNO3 solutions, at different concentrations, was added in small drops onto the soil surface to obtain final concentrations of 0.01, 0.10 and 1.00 mg/kg dry weight. Negative controls only received an application of UPW. Soil water content after the addition of the test solutions was on average 22.0% (sand), 19.4% (clay) and 18.5% (loam), which was equivalent to 42.6%, 41.4%, and 37.4% WHCmax, respectively. For each soil texture, agent species, concentration, and day, separate samples in different soil dishes were prepared as 4 replicates (e.g. 4 × 0.01 mg Ag10-COOH kg−1 sand for day 1). Subsequently, the soils were extensively mixed by stirring with a spoon, transferred to plastic containers (Centrifuge Tubes, 50 mL, VWR, Darmstadt, Germany) and sealed with Parafilm®. They were incubated at 15.1 (± 1.8 °C) in the dark for 1, 14, 28, and 90 days. Water evaporation was determined gravimetrically and then compensated by the addition of UPW. Samples were finally stored at − 20 °C. For analyses, samples were defrosted by incubation overnight at 6 °C. Each replicate was analysed for the effect expressions of the target variables leucine aminopeptidase activity, microbial biomass as well as of functional and taxonomic genes.
Furthermore, results of our previous studies [16, 17] of the effect assessment of AgPure, with an average size of 20 nm and polyacrylate stabilization, were used to calculate the impact of the five agent species AgPure, Ag10-COOH, Ag10-NH2, AgNO3, and KNO3 at concentrations of 0.01; 0.1; and 1.0 mg Ag/kg soil after exposure of 1, 14, 28, and 90 days in the silty loam soil on the same target variables.
Analysis of biological parameters
Leucine aminopeptidase activity
Leucine aminopeptidase (EC 3.4.11.1; LAP) activity was investigated according to Marx et al. [35], with modifications [36]. Briefly, 1 mol/L L-leucine-7-AMC was used as substrate for LAP, and 7-amino-4-methylcoumarin [37] was used as a standard. Incubation of the soil slurry with the substrate was performed at 30 °C. Measurements of fluorescence were performed after 0 and 2 h using a Victor Multilabel Plate Reader (Perkin Elmer, Germany; excitation wavelength: 355 nm, emission wavelength: 460 nm). LAP activity was calculated as substrate turnover per g dry soil and hour.
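As an illustration of this calculation, the following minimal Python sketch converts raw fluorescence readings into LAP activity via an AMC standard curve; all numbers and the assay geometry are hypothetical placeholder values, not data from this study.

import numpy as np

# Hypothetical AMC standard curve: fluorescence units versus pmol AMC per well.
amc_pmol = np.array([0, 50, 100, 200, 400])
amc_fluorescence = np.array([150, 5200, 10300, 20700, 41500])
slope = np.polyfit(amc_pmol, amc_fluorescence, 1)[0]   # fluorescence units per pmol AMC

# Fluorescence of one soil-slurry well at 0 h and after 2 h of incubation at 30 °C.
f_0h, f_2h = 1400.0, 9600.0
dry_soil_per_well_g = 0.005       # assumed amount of dry soil per well
incubation_h = 2.0

released_amc_pmol = (f_2h - f_0h) / slope
lap_activity = released_amc_pmol / dry_soil_per_well_g / incubation_h
print(f"LAP activity: {lap_activity:.1f} pmol AMC per g dry soil per h")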
The potential contribution of AgNP to the total fluorescence signal was checked for each AgNP concentration: AgNP were added to autoclaved soil, the same assay procedure was conducted, and the resulting fluorescence signal was compared to the fluorescence intensity of the pure autoclaved soil. The AgNP used did not exhibit autofluorescence (data not shown).
DNA extraction and microbial biomass measurements
DNA extraction and purification were performed using the Genomic DNA from soil kit (Macherey–Nagel, Düren, Germany) according to the manufacturer's instructions and stored at − 20 °C. For the measurement of microbial biomass, 10 µL of DNA was transferred into the well of a 96-well microplate and shaken for 5 s before absorbance was measured at 260 nm [38] using Victor Multilabel Plate Reader (Perkin Elmer, Germany).
Quantitative detection of functional and taxonomic genes
16S rRNA genes were used as a proxy to quantify the abundance of bacteria, as described by Bach et al. [39]. The abundance of bacteria harboring the nifH gene was used as a marker for the potential to fix nitrogen and measured according to Rösch et al. [40]. To analyze the effects of the different silver materials on the ammonia-oxidizing bacteria, the amoA primer system described by Rotthauwe et al. [41] was used. To quantify the abundance of taxon-specific 16S rRNA gene copy numbers, qPCR assays for Acidobacteria [42, 43], Actinobacteria [43, 44], alpha-Proteobacteria [45], Bacteroidetes [43, 46], and beta-Proteobacteria [47, 48] were performed. Detailed descriptions of the used assays are given by Grün and Emmerling [17] and Grün et al. [16].
All qPCR reactions were conducted on a thermal cycler equipped with an optical module (Analytik Jena, Jena, Germany). All samples were run in triplicate wells. Single qPCR reactions were prepared in a total volume of 20 µL. The InnuMix SYBR-Green qPCR Master-Mix was purchased from Analytik Jena (Jena, Germany). Primer concentrations were 10 pmol/µL, and amplification specificity was assessed by melting curve analysis and gel electrophoresis on a 1.5% agarose gel after qPCR. Standard curves were based on cloned PCR products from the respective genes [43].
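For illustration, gene copy numbers are typically back-calculated from the measured Cq values via the standard curve. The following minimal Python sketch shows such a conversion with hypothetical example values; the dilution and soil factors are assumptions, not the exact protocol of this study.

import numpy as np

# Hypothetical standard curve: Cq values measured for a 10-fold dilution
# series of the cloned PCR product (copies per reaction).
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
std_cq = np.array([13.1, 16.5, 19.9, 23.3, 26.7])

# Linear fit: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1          # amplification efficiency
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

# Convert sample Cq values (triplicate wells) to copies per reaction,
# then to copies per g dry soil via assumed extraction/dilution factors.
sample_cq = np.array([21.8, 21.9, 21.7])
copies_per_rxn = 10 ** ((sample_cq.mean() - intercept) / slope)
dna_eluate_ul, template_ul, soil_g = 50.0, 2.0, 0.25   # assumed values
copies_per_g_dry_soil = copies_per_rxn * (dna_eluate_ul / template_ul) / soil_g
print(f"{copies_per_g_dry_soil:.3e} copies per g dry soil")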
All data were processed using IBM SPSS Statistics for Windows, Version 23.0 (IBM Corp., Armonk, USA). The obtained biological values of qPCR (gene copy number per kg dry soil), leucine aminopeptidase activity (substrate turnover per g dry soil and hour) and the measurement of microbial biomass (ng DNA per g dry soil) of the negative controls (0.00 mg Ag/kg) of the 4 sample replicates were averaged for each biological parameter and day. Subsequently, the relative variation of each silver-treated sample of one concentration and one sampling date was calculated as follows:
$${\text{Relative}}\,{\text{variation}}\,(\% )\, = \,\frac{{{\text{biological}}\,{\text{value}}\,{\text{of}}\,{\text{treated}}\,{\text{sample}}}}{{{\text{averaged}}\,{\text{biological}}\,{\text{value}}\,{\text{of}}\,{\text{untreated}}\,{\text{samples}}}}\, \times \,100$$
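A minimal Python/pandas sketch of this calculation could look as follows; column names and example values are hypothetical and only illustrate the formula above, not the actual SPSS workflow.

import pandas as pd

# Hypothetical long-format table: one row per replicate measurement.
df = pd.DataFrame({
    "soil":      ["loam"] * 8,
    "species":   ["control"] * 4 + ["Ag10-COOH"] * 4,
    "conc":      [0.00] * 4 + [0.01] * 4,
    "day":       [1] * 8,
    "parameter": ["16S rRNA"] * 8,
    "value":     [2.1e9, 2.0e9, 2.2e9, 1.9e9, 1.8e9, 1.7e9, 1.9e9, 1.8e9],
})

# Mean of the untreated controls (0.00 mg Ag/kg) per soil, day and parameter.
ctrl_mean = (df[df["conc"] == 0.00]
             .groupby(["soil", "day", "parameter"])["value"]
             .mean()
             .rename("ctrl_mean"))

# Relative variation (%) of each treated replicate against that control mean.
treated = df[df["conc"] > 0.00].join(ctrl_mean, on=["soil", "day", "parameter"])
treated["relative_variation"] = treated["value"] / treated["ctrl_mean"] * 100
print(treated[["species", "conc", "relative_variation"]])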
In the following, the relative variation was set as the target variable. Exposure time (1, 14, 28, 90 days), concentration (0.01; 0.1; 1.0 mg Ag/kg soil), soil texture (loam, sand, clay) and silver species (Ag10-COOH, Ag10-NH2, AgNO3) were set as factors with different factor levels (e.g. Ag10-COOH, Ag10-NH2, AgNO3).
For the effect assessment of a factor level on the target variable, the mean of a target variable (e.g. leucine aminopeptidase) at a distinct factor level (e.g. Ag10-COOH) was calculated. Within a factor, the target variable was also pre-evaluated for a normal distribution by the Shapiro–Wilk test and variance homogeneity by the Levene test.
To test the influence of the four factors as well as the influence of the factor combinations on the target variable, multi-factorial ANOVA was performed. Here, tests of between-subject effects provided information about significant relationships. By means of the Bonferroni post hoc test, the significance of group mean differences within a factor was calculated.
Finally, a multivariate analysis of variance (MANOVA) was performed to simultaneously evaluate the influence of the four factors on the 10 dependent target variables. The factors were set as independent variables, whereas the relative variations of the biological parameters were set as dependent variables. The test statistic was computed by Pillai's trace.
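For readers without SPSS, a comparable analysis can be sketched with open-source tools. The following minimal Python example uses statsmodels on synthetic placeholder data; the data generation is purely illustrative and does not reproduce the results reported below.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.multivariate.manova import MANOVA

# Synthetic stand-in for the real data set: relative variations (%) of three
# example parameters for every factor-level combination with 4 replicates.
rng = np.random.default_rng(0)
levels = pd.MultiIndex.from_product(
    [["Ag10-COOH", "Ag10-NH2", "AgNO3"],      # silver species
     [0.01, 0.10, 1.00],                      # concentration (mg/kg)
     [1, 14, 28, 90],                         # exposure time (days)
     ["loam", "clay", "sand"],                # soil texture
     range(4)],                               # replicate
    names=["species", "conc", "time", "texture", "rep"],
).to_frame(index=False)
data = levels.assign(
    lap=100 + rng.normal(0, 5, len(levels)),
    biomass=100 + rng.normal(0, 5, len(levels)),
    amoA=100 + rng.normal(0, 5, len(levels)),
)

# Multi-factorial ANOVA for one target variable, including all interactions.
model = smf.ols("lap ~ C(species) * C(conc) * C(time) * C(texture)", data=data).fit()
print(anova_lm(model, typ=2))          # tests of between-subject effects

# MANOVA over several dependent variables; Pillai's trace appears in mv_test().
mv = MANOVA.from_formula(
    "lap + biomass + amoA ~ C(species) * C(conc) * C(time) * C(texture)", data=data
)
print(mv.mv_test())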
Characterization of the AgNP suspensions prior to application into soils
Characteristics of the Ag10-COOH stock solution in water were as follows: hydrodynamic diameter (DLS, z-average): 85.3 ± 2.9 nm; polydispersity index (PDI) (DLS) = 0.234 ± 0.031; ζ-potential: − 41.4 ± 1.1 mV (in UPW). AF4-UV-DLS measurement revealed an average hydrodynamic diameter of 18.1 ± 0.5 nm at the UV peak maximum. Over the main UV peak the hydrodynamic diameter ranged from 18 nm to around 28 nm. Larger particle sizes with a hydrodynamic diameter up to around 118 nm were detected as well but larger particle fractions were low in concentration based on the corresponding UV signal (Fig. 1a).
Fig. 1 AF4-UV-DLS measurements of Ag10-COOH (a) and Ag10-NH2 (b)
Characteristics of the Ag10-NH2 stock solution in water were as follows: hydrodynamic diameter (DLS, z-average): 62.1 ± 3.1 nm; polydispersity index (PDI) (DLS) = 0.379 ± 0.072; ζ-potential: + 39.2 ± 0.2 mV (in UPW). Average hydrodynamic diameter at the UV peak maximum obtained from AF4-UV-DLS measurement: 82.3 nm ± 3.1 nm with a size distribution from around 8 nm to 142 nm (Fig. 1b).
The analysis of the ζ-potential and hydrodynamic diameter of the AgNP under different pH values resulted in pH-dependent characteristics of the NP (Fig. 2). The hydrodynamic diameter was 197 ± 13.9 nm after 1 h incubation in a pH 4 solution and 169 ± 9.2 nm in a pH 7.4 solution for Ag10-NH2 (Fig. 2a). There was no difference in the results after 1 h and 24 h incubation. Under alkaline conditions (pH 10), the hydrodynamic diameter increased to 601 ± 85 nm (1 h) or 1911 ± 475 nm (24 h). Starting from the stock solution, with a positive surface charge of + 39.2 ± 0.2 mV, the surface charge decreased with increasing pH value (Fig. 2b). After 1 h incubation of Ag10-NH2 in aqueous pH 4 solution, the ζ-potential was + 36.9 ± 2.5 mV and decreased to − 7.4 ± 1.5 mV after 1 h incubation in a matrix with pH 10. But there was no significant difference between the results after 1 h and 24 h incubation.
Fig. 2 pH-dependent hydrodynamic diameter and ζ-potential of Ag10-COOH (a, b) and Ag10-NH2 (c, d). N = 6
Ag10-COOH behaved contrarily. After 1 h incubation, the hydrodynamic diameter of Ag10-COOH at pH 4, 7.4 and 10 was comparable. After 24 h the size increased significantly at pH 4, but not at pH 7.4 and 10 (Fig. 2c). The ζ-potential largely did not vary significantly between the different pH values. Only after incubation for 24 h at pH 10 was there a significant shift of the surface charge to more negative values (− 54.1 ± 2.9 mV) (Fig. 2d).
Transmission electron microscopy revealed an average diameter of 5.7 ± 2.3 nm for Ag10-COOH (Fig. 3a). Their distribution on the grid was not homogeneous but rather arranged in clusters. Between the dark grey and black single particles there was a lighter colored layer with very small objects in it, which could be a residue from the production process. The single particles were roundish to oval in shape. The Ag10-NH2 NP showed an average diameter of 6.2 ± 3.4 nm (Fig. 3b). They were not distributed homogeneously on the grid and agglomerated into secondary particles with a diameter of over 100 nm. Beside these particles there were areas with single nanoparticles which had a roundish to oval shape and a plain surface.
Fig. 3 Transmission electron microscope (TEM) image of Ag10-COOH (a) and Ag10-NH2 (b) in water
Impact of silver species, exposure time, concentration, soil texture and their combinations on biological parameters
Results of multi-factorial ANOVA revealed significant main effects of the factors silver species, concentration, exposure time and soil texture on the relative variation of the bacterial phyla Actinobacteria, Acidobacteria, alpha-Proteobacteria, Bacteroidetes and beta-Proteobacteria (Table 2). Comparable significant results were observed for the leucine aminopeptidase (LAP) activity, the microbial biomass, the abundance of all bacteria (16S rRNA), as well as for the ammonia-oxidizing bacteria (amoA) (Table 2). In case of the relative variation of the free-living nitrogen-fixing bacteria (nifH) only the factors and factor levels of exposure time and soil texture caused significant effects (Table 2). Silver species and concentration provoked no significant impacts on nifH (Table 2). The interaction effects of the factor combinations were predominantly significant for all variables (Table 2).
Table 2 Results of the (M)ANOVA including the main factors silver species, concentration, time, and soil texture, N = 4320
The considered main factors as well as their interactions could explain at least 81.5% (R2 = 0.815) of the variance of the respective target variables; in the case of beta-Proteobacteria even 93.9% (R2 = 0.939, Table 2).
Results of multivariate analysis revealed significant main effects for all factors and factor combinations (Table 2). MANOVA scored the factor combination soil texture × silver species as the strongest option (F = 53.2), followed by the combination of soil texture × exposure time (F = 46.8) (Table 2). Nevertheless, also the main factors silver species (F = 44.2), exposure time (F = 43.9) and soil texture (F = 46.5) could broadly explain the variance compared to the error variance itself (Table 2). The main factor concentration could elucidate the least variance (F = 12.7) (Table 2).
In Fig. 4 the influence of the highest scored factor combinations silver species × soil texture and exposure time × soil texture on the relative variation of the biological parameters is highlighted.
Fig. 4 Impact of factor combinations on the entirety of the relative variation of the biological parameters. a Silver species × soil texture, b exposure time × soil texture. The error bars represent the 95% confidence interval. N = 4320. The relative variations (%) were set as dependent variable. The factors were set as independent variable. To calculate the impact of the factor combination silver species × soil texture on the relative variation of the biological parameters, the relative variations were unconnected to the factors concentration and exposure time (a). To calculate the impact of the factor combination exposure time × soil texture on the relative variation of the biological parameters, the relative variations were unconnected to the factors concentration and silver species (b)
The silver species Ag10-COOH caused similar effects in the loamy (101.2%) and the clayey (99.4%) soil, whereas Ag10-COOH diminished the relative variation of the biological parameters significantly in pairwise comparison to the sandy soil (93.3%; p = 0.000) (Fig. 4a). The effects of Ag10-NH2 were also very similar between the loamy (95.8%) and the clayey soil (98.0%), causing no significant differences. The pairwise comparison of clayey and sandy (94.0%) soil exhibited significant differences in the biological parameters (p = 0.003), while between sandy and loamy soil no differences could be detected. The AgNO3 control slightly stimulated the entirety of the relative variation of the biological parameters in the loamy (101.2%) and sandy (103.6%) soil. In contrast, AgNO3 caused a significant decrease of the relative variation (89.1%, p = 0.000) in the clayey soil. The order of increasing toxicity was for the loam Ag10-COOH = AgNO3 < Ag10-NH2, for the clayey soil Ag10-COOH = Ag10-NH2 < AgNO3 and for the sandy soil AgNO3 < Ag10-NH2 = Ag10-COOH.
As shown in Fig. 4b, a 1-day exposure to the silver species at different concentrations led to a clear distinction between the effect in the sandy soil relative to the clayey (p = 0.029) as well as the loamy soil (p = 0.001). Here, the sandy soil exhibited the lowest toxicity. During the mid-term exposure of 14 and 28 days, this trend reversed and the silver treatments proved to be more toxic in the sandy soil compared to the loamy soil (p ≤ 0.05). However, after 90 days of silver exposure, an increase in the relative deviation of the biological variables from their untreated controls from the sandy to the clayey to the loamy soil could be observed (Fig. 4b). Here, significant pairwise differences were only found between the sandy and the loamy soil (p = 0.000), as well as between the sandy and the clayey soil (p = 0.001).
Impact of agent species, exposure time, concentration and their combinations on biological parameters in a loamy soil
Results of multi-factorial ANOVA revealed significant main effects of the factors agent species and exposure time on the relative variation of all biological target variables in the loamy soil (Table 3). In the case of the main factor concentration, only LAP activity, microbial biomass, amoA, Actinobacteria, Acidobacteria, and beta-Proteobacteria were significantly affected by the factor levels. The main factor exposure time was able to explain the largest part of the variance compared to the error variance for all target variables except for Acidobacteria and beta-Proteobacteria (Table 3). For these, the highest F-values were created by the main factor agent species. The main factor concentration could elucidate the least variance.
Table 3 Results of the (M)ANOVA including the main factors agent species, concentration, and time, N = 2400
The majority of significant mean differences in a factor were found for Ag10-COOH and AgPure, 0.01 and 0.10 mg/kg silver as well as 0.01 and 1.00 mg/kg silver, and 1 day and 14 days, 14 days and 90 days as well as 28 days and 90 days (Table 3). No factor combination achieved higher variance explanations than the single main factors (Table 3).
Based on the results of the MANOVA, all factors and factor combinations revealed significant main effects (Table 3). The main factor exposure time clarified the largest amount of variance by a F-value of 56.0, followed by the main factor agent species (F = 18.8) (Table 3). The main factor concentration could elucidate the least variance again. The means of the highest scored individual factor levels of exposure time and agent species are visualized in Fig. 5.
Fig. 5 Impact of the factors exposure time (a) and agent species (b) in a loamy soil. The error bars represent the 95% confidence interval. N = 2400. The relative variations (%) were set as dependent variable. The factors were set as independent variable. To calculate the effect of the exposure time on the relative variation of the biological parameters, the relative variations were calculated unconnected to the factors concentration and agent species (a). For the calculation of the effect expression of the factor agent species, the relative variations were unconnected to the factors concentration and exposure time (b)
The relative variation of the biological parameters in the presence of the different silver and nitrate concentrations changed significantly with the exposure time (p = 0.000) (Fig. 5a). After a decrease of the relative variation on day 1, the relative variation increased by 14.5% to 109.4% on day 14 (p = 0.000). Within the mid-term exposure, a similar, but negative change of 10.8% could be observed at day 28 (p = 0.000). At the end of the experiment, a further, but not significant decline of the relative variation of the biological parameters could be observed (Fig. 5a).
Considering the entirety of the relative variation of the biological parameters, the means of AgPure (96.4%) and Ag10-NH2 (95.8%) showed no significant pairwise disparity in their effect characteristics (Fig. 5b). The same could be stated for the means of Ag10-COOH (101.2%), AgNO3 (101.2%) and KNO3 (104.2%) (Fig. 5b). In contrast, AgPure and Ag10-NH2 each differed significantly from Ag10-COOH, AgNO3 and KNO3 in their pairwise comparisons (p ≤ 0.007).
The results of the factor combinations of the multivariate multi-factorial analysis revealed lower F-values than for the main factors agent species and exposure time individually. However, the factor combination agent species × exposure time was scored as the strongest option (F = 14.5) within the factor combination possibilities (Table 3). Here, after 1 day of exposure, Ag10-NH2 (89.8%) was the agent species with the highest influence on the relative variation of the biological parameters in the loamy soil, followed by Ag10-COOH (92.3%) (Fig. 6). AgPure and AgNO3 showed only small influences on the relative variation. Between AgNO3 and KNO3 no significant pairwise difference could be observed. At day 14, all agent species provoked an increase of the relative variation. The biological parameters were stimulated by 1.8% (AgPure) to 16.0% (AgNO3) (p = 0.000). Again, between AgNO3 and KNO3 no significant pairwise difference could be observed. After 4 weeks of exposure, the stimulating effect of all agent species weakened (Fig. 6). In the case of Ag10-COOH and KNO3 a decrease of the relative variation of 7.3% and 5.8%, respectively, could be observed. The strongest effects on the relative variation of the biological parameters due to the different agent species were created at day 90 (Fig. 6). The agent species Ag10-COOH and KNO3 provoked significant stimulations of the biological parameters of 12.5% and 15.1%, respectively (Table 4). In contrast, AgPure, AgNO3 and Ag10-NH2 significantly diminished the relative variation by 14.8, 16.3 and 17.8%, respectively (Table 4).
Fig. 6 Impact of the factor combination exposure time × agent species in a loamy soil. The error bars represent the 95% confidence interval. N = 240. The relative variations (%) were set as dependent variable. The factors were set as independent variable. To calculate the impact of the factor combination exposure time × agent species on the relative variation of the biological parameters, the relative variations were unconnected to the factor concentration
Table 4 Pairwise comparisons (p) of the post-hoc test, N = 2400
The soil microbial community is responsible for several soil ecosystem services, such as the recycling of organic matter, soil fertility and structure, the biogeochemical nutrient cycles, toxin degradation and pathogen control [49,50,51,52,53]. Bacteria in particular are the main performers of functional processes, which are integral for the maintenance of healthy soil environments [54].
In this study, we used the DNA content of soil samples as a proxy for the microbial biomass and the abundance of 16S rRNA genes as an indicator of bacterial abundance. Furthermore, we measured LAP activity as well as the gene abundance of nifH and amoA as proxies for the nitrogen cycle, which drives many ecosystem activities in soils, including plant production [40, 41, 55]. Finally, we documented the response of Acidobacteria, Actinobacteria, alpha-Proteobacteria, Bacteroidetes and beta-Proteobacteria as representatives of the main soil bacterial phyla. Although phylogenetic diversity [56], non-cultivability [57] and functional redundancy [58] still make it difficult to link members of bacterial phyla in soils with their function, some specific soil functions could be assigned to specific soil bacteria [41, 59,60,61,62,63].
Freezing and defrosting are known to influence the structure and function of microbial communities [64, 65]. Nevertheless, this procedure was chosen given the large number of samples to be processed. Since this method was applied to all samples, the results are comparable and allow statistical inferences about the influence of the factors silver/agent species, concentration, exposure time and soil texture.
Silver/agent species
The main factor silver species caused significant effects on almost all of the investigated biological parameters in the three test soils (Table 2), whereas the main factor agent species caused significant effects on all investigated biological parameters in the loamy soil (Table 3).
In a simultaneous consideration of all biological parameters by the MANOVAs, an increase in toxicity in the order KNO3 (104.2%) = Ag10-COOH (98.0%/101.2%) = AgNO3 (97.95%/101.2%) < AgPure (96.4%) = Ag10-NH2 (95.9%/95.8%) could be observed, whereby between Ag10-COOH and AgNO3 no significant pairwise difference could be calculated (Additional file 1: Fig. S1). Values of the main factor agent species are given as the second value in parentheses.
The coating of NP is crucial for their reactivity and physico–chemical transformations, such as the dissolution rate, the availability of surface areas, the surface charge, the aggregation rate and the long-term stability [27, 66, 67]. Analysis of the ζ-potential of the AgNP under different pH values (Fig. 2b, d) indicated no significant impact of pH on the ζ-potential of the Ag10-COOH particles. The high negative ζ-potentials of − 33.9 ± 1.8 mV at pH 4 and − 39.4 ± 1.5 mV at pH 7.4 of the Ag10-COOH indicate high stability of these nanoparticles. The COOH-coating could minimize Ag+ ion release as well as direct contact of the AgNP with microorganisms or soil compartments such as clay particles or natural organic matter due to its function as a physical barrier [27, 31, 66]. The investigation of Long et al. [68] confirmed this hypothesis: they examined the Ag+ release and toxicity of differently coated AgNP, which were very similar to our Ag10-COOH particles, and observed only a slight release of Ag+ and a lower associated toxicity to Escherichia coli. In addition, the negative surface charge could lead to an attachment of soil cations such as Ca2+, Mg2+ or K+ onto the Ag10-COOH particles and an increase of their physico–chemical barrier.
Furthermore, the coating of AgNP plays a crucial role in determining their cellular uptake mechanism [26], where the negative surface charge of Ag10-COOH indicates a low affinity to negatively charged membranes of microorganisms. Nevertheless, Ag10-COOH induced both adverse and advantageous effects in view of the investigated individual biological parameters. Here, individual defense strategies [28, 69] as well as species dependent toxicological susceptibility [17, 52, 70] of the biological parameters might be the underlying reasons.
In contrast, the ζ-potential of Ag10-NH2 decreased with increasing pH in our experiment (Fig. 2b) and consequently a decrease of stability in the test soils could be deduced. As a consequence of the missing physico–chemical barrier, a high availability of AgNP surface area as well as a high release rate of Ag+ ions is to be expected. Both the mean of the relative variation of all biological parameters (95.9%/95.8%) as well as the means of LAP activity (96.5%/95.1%), 16S rRNA (95.7%/92.0%), amoA (92.6%/92.9%), Acidobacteria (89.9%/78.7%) and Bacteroidetes (93.5%/96.6%) (Tables 2, 3) underlined the toxic effects of Ag10-NH2. Moreover, the positive surface charge of Ag10-NH2 could have led to a strong association with the negatively charged membranes of microorganisms [71, 72] and could affect the surface tension of the membrane, resulting in increased pore formation [26].
AgPure showed similar harmful effects as Ag10-NH2, which could also be attributed to the release of Ag+ ions. Their low particle concentration, their polyacrylate stabilization and the high pH value of the soil could have prevented initial aggregation and agglomeration of AgPure [16, 17, 73]. In addition, the high concentration of divalent cations, such as Ca2+ and Mg2+, in the loamy soil (Table 1) could promote AgPure and Ag10-NH2 dissolution, resulting in the displacement of Ag+ ions from the nanoparticle surface and thus toxicity [74].
Surprisingly, the AgNO3 control showed low toxicity, with average relative variations of 97.95% and 101.2%. The use of silver nitrate as a control is ubiquitous in toxicological studies with AgNP to estimate the influence of dissolved Ag+ ions from AgNP [1, 4, 75, 76]. Here, the toxicity order of the MANOVAs suggests a high release of Ag+ ions from Ag10-COOH (98.0%/101.2%) relative to Ag10-NH2 (95.9%/95.8%) and AgPure (96.4%) and thus a high direct toxic impact of the Ag10-NH2 and AgPure NP themselves. However, an individual view on the biological parameters with regard to the ANOVAs yielded a different picture. The bacterial abundance (16S rRNA) and the abundance of Actinobacteria were stimulated by Ag10-COOH exposure and diminished by AgNO3, whereas LAP activity, microbial biomass and Acidobacteria were stimulated or not influenced by AgNO3 and diminished by Ag10-COOH (Table 2). These individual responses of the biological parameters explain the majority of significant mean differences found between Ag10-COOH and AgNO3 within the main factor silver species. By the averaging in the MANOVA, these important observations are lost, which could lead to misleading conclusions. Based on the individual ANOVAs (Tables 2, 3), as well as the particle and soil characterizations, the hypothesis of a high release of Ag+ by Ag10-COOH should be rejected; a high Ag+ release should rather be attributed to Ag10-NH2 and AgPure.
In detail, the results presented in Tables 2 and 3 revealed no effects or even stimulatory effects of AgNO3 in the case of LAP, amoA, Acidobacteria and Bacteroidetes, whereas the AgNP used caused predominantly lower relative variations of these biological parameters. Similar observations were documented in short-term studies dealing with the effect of low concentrations of AgNP and AgNO3 on organisms related to the nitrogen cycle, where AgNO3 might cause lower or even stimulating effects relative to the AgNP. Quite recently, we observed stimulating effects of AgNO3 on ammonia-oxidizing and nitrogen-fixing bacteria after short-term exposure, whereas AgNP led to their decrease [16]. Schlich et al. [77] documented a significant stimulation of nitrite production of up to 19.4% after a 7-day exposure to 0.19 mg/kg AgNO3, whereas AgNP caused an inhibition. Yang et al. [78] found that 2.5 µg/L of Ag+ as AgNO3 upregulated the ammonia monooxygenase genes amoA1 and amoC2 by 2.1- and 3.3-fold, respectively. Furthermore, Choi et al. [79], Masrahi et al. [24] and Liang et al. [80] observed lower effects on microbial nitrification due to AgNO3 relative to AgNP after short-term exposure. In view of the very similar effect expressions of AgNO3 (101.2%) and KNO3 (104.2%), we suspect that the nitrate contained in the AgNO3 control might affect the biological parameters just as much as the silver itself. LAP activity and amoA are proxies for the nitrogen cycle. Because of the spatial structure imposed by soil particles, which results in local variations in oxygen availability over small distances [81], both aerobic and anaerobic conditions can be found in the same soil sample. Thus, the AgNO3 control could act as a substrate for nitrate-reducing bacteria such as gamma-Proteobacteria or Acidobacteria [63, 82, 83], which are less sensitive to AgNP or to Ag+ in the form of AgNO3 [78, 84, 85]; their increase would be promoted by nitrate utilization via denitrification and/or dissimilatory nitrate reduction [81, 83], providing new nitrogen sources for the ammonia-oxidizing and nitrogen-fixing bacteria.
The increase in the abundance of Bacteroidetes might be the result of harboring silver resistance genes [69] and of a simultaneously increased availability of carbon from the decline of silver-sensitive microbes, which promoted the Bacteroidetes, known as r-strategists [56]. Nevertheless, the average factor expressions of AgNO3 in the case of Actinobacteria (90.8%/88.7%) and beta-Proteobacteria (94.0%/94.3%) (Tables 2, 3) still indicated harmful effects. Consequently, there could be an interplay of stimulating and detrimental effects, which might originate from the agent itself but also from consequential shifts of the microbial community and nutrient availability.
The F-values of the biological parameters in both MANOVAs indicated that the main factors silver species and agent species explain large parts of the variance compared to the error variance (Tables 2, 3). With respect to the three different AgNP used, our results confirmed a distinctive impact of the AgNP functionalization on their fate and toxicity in the soil environment. The general applicability of AgNO3 as a suitable positive control should be the subject of further investigation.
The main factor concentration likewise caused almost consistently significant effects on the investigated biological parameters in the three test soils (Table 2). The majority of significant mean differences within this factor were found between 0.01 and 1.00 mg/kg silver. The toxicity of the silver species increased predominantly with increasing concentration (Additional file 1: Fig. S2), as already observed in a variety of recent soil studies [5, 24, 86, 87].
The concentration of AgNP strongly influences their physico–chemical transformation. Once released into the environment, the AgNP concentration is crucial for dissolution and aggregation mechanisms [28]. Merrifield et al. [63] documented that AgNP homoaggregation is insignificant at realistic environmental concentrations, whereas dissolution and also heteroaggregation processes are more likely. Based on the low test concentrations of 0.01–1.00 mg/kg silver, dissolution appeared to be the most probable transformation type in our study. The dissolution hypothesis was supported by the on average high pH values of the soils, which could have prevented initial aggregation and agglomeration of AgNP [73], as well as by the on average high concentrations of divalent cations such as Ca2+ and Mg2+ (Table 1), which promote AgNP dissolution and result in the displacement of Ag+ ions from the nanoparticle surface [74]. Furthermore, the high ζ-potentials of + 36.8 and + 39.2 mV for Ag10-NH2 as well as of − 33.9 and − 39.4 mV for Ag10-COOH at pH 4 and 7.4, respectively, indicated high interparticular repulsive forces, which diminish the aggregation probability due to high stability [88]. However, according to Klitzke et al. [14], a decrease of the ζ-potential after AgNP exposure to soil solution has to be assumed.
A direct interaction of the microbes with the differently charged AgNP is also conceivable. However, the relative variation of the biological parameters differed only by 2.0% between 0.01 and 1.00 mg/kg silver. This low effect expression can be attributed to the low bioavailability of the silver species as well as to bacterial resistance mechanisms. The released silver ions could bind to clay particles [19, 33] or organic material [20, 89, 90] and thus become less bioavailable. In addition, the likelihood that a microbe encounters a silver particle or ion is generally low at the applied low test concentrations. In the event of an encounter, bacteria possess various common defense mechanisms to escape the toxic influence of silver, such as a thick peptidoglycan membrane as the first line of defense [91], efflux systems that extrude heavy metal ions [4, 92, 93] and the production of extracellular proteins and exopolysaccharides [94] that neutralize small amounts of toxic compounds [95, 96]. Furthermore, some bacteria harbor specific silver resistance genes, which encode a periplasmic silver-specific binding protein (SilE), silver efflux pumps (P-type ATPase) and a membrane potential-dependent cation/proton antiporter (SilCBA) [4].
Nevertheless, compared to the F-values of the other main factors, the factor concentration explained the least variance of the mean values considering all variables (Tables 2, 3). Based on our results, the main factor concentration at environmentally relevant levels therefore has only a very weak influence on the AgNP effect characteristics with respect to the microbial community in soils.
The main factor exposure time caused consistently significant effects on the investigated biological parameters in the three test soils after silver exposure (Table 2), with the majority of significant mean differences between the factor levels 1 day and 28 days. With the exception of the relative variations of the Bacteroidetes and the ammonia-oxidizing bacteria (amoA), all biological parameters were negatively affected at day 1 (Table 2). This observation was rather unusual, because several studies reported a high and fast sensitivity of ammonia-oxidizing bacteria towards AgNP and AgNO3 [24, 77, 78]. Apart from that, we recently observed a tolerance of amoA-harboring bacteria after short-term exposure to AgNP [16]. Furthermore, Schlich et al. [77] and Samarajeewa et al. [86] observed stimulatory effects of ionic and nanoparticulate silver on ammonia-oxidizing bacteria, which could be ascribed to hormesis-like responses to low silver concentrations. By contrast, the stimulation of Bacteroidetes due to silver addition was in agreement with previous observations [52, 69, 70, 97, 98]; Bacteroidetes are known to exhibit silver resistance genes [78].
As the exposure time increased up to day 28, the injury to the microbial community decreased (Additional file 1: Fig. S3). It is likely that the short-term effects after 1 day resulted from the initial release of bioavailable Ag+ by AgNP and AgNO3, causing toxic effects on the microbial community. Dissolution experiments [99, 100] verified the fast dissolution of AgNP in soils. With increasing exposure time, the silver species became less bioavailable due to interactions with organic matter, clay minerals or pedogenic oxides [19, 33]. Furthermore, emerging mechanisms such as self-protection [4, 91,92,93, 95, 96], resistance [4], resilience [58] and/or cryptic growth [101] might also be possible explanations for the limited effects on the microbial community in the soils.
The results of the ANOVA using only the data of the loamy soil documented a similar trend (Table 3). Here, too, all biological parameters were negatively affected at day 1, with the exception of the relative variations of the Bacteroidetes and the ammonia-oxidizing bacteria (amoA). Owing to the high clay content (approximately 30%) of the loamy soil and its high content of organic carbon (2.9%) (Table 1), the retention of AgNP and Ag+ ions set in earlier and the toxicity of the agents already decreased at day 14 (Table 3) [14, 20, 102, 103].
On day 90, an impairment of the microbial community relative to day 28 could be observed in both ANOVAs (Tables 2, 3). Similar trends were observed in our previous studies [16, 17]. As AgNP and Ag+ ions gradually aged, they slowly returned to the biological soil system as a continuous supply of silver agents. Diez-Ortiz et al. [33] also documented a progressive increase in AgNP toxicity with time and assumed a time-dependent increase of silver in soil pore water due to slow dissolution.
The F-values of the biological parameters in both MANOVAs indicated that the main factor exposure time explains a large part of the variance compared to the error variance (Tables 2, 3). Especially when the factor soil texture was excluded from the MANOVA, the factor exposure time yielded the maximum F-value (Table 3). These results strongly underline the significance of the exposure time for AgNP ecotoxicity investigations, attributable to the changes in silver bioavailability and speciation during long-term experiments in soils.
Soil texture
The main factor soil texture also caused significant effects on all investigated biological parameters in the three test soils (Table 2). The majority of significant mean differences were found between the loamy soil and the clayey soil. This was notable, because their investigated soil characteristics such as pH, TOC, TN and CEC were very similar. The most distinct difference lay in their grain size distributions of clay (45–65% vs. 25–35%) and sand (5–40% vs. 25–45%) (Table 1). Recent studies have shown a correlation between the clay content of soils and the toxicity of AgNP: a smaller grain size resulted in a lower toxicity of AgNP [19, 23, 25]. Indeed, this held for the sandy soil, with a mean of 97.0%, and the loamy soil, with a mean of 99.4% relative abundance of all biological parameters (Additional file 1: Fig. S4). However, the mean relative abundance of the clayey soil (95.5%) illustrated the opposite situation: the smaller grain size of the clayey soil was associated with a higher toxicity of silver (Additional file 1: Fig. S4). Hence, the particle size distribution cannot be the only determinant of the silver toxicity. Schlich and Hund-Rinke [19] documented that the highest AgNP toxicity was associated with more acidic soils, whereas the lowest toxicity was associated with more alkaline soils. They supposed that the soil pH value influences AgNP dissolution and the release of ions [19]. A similar pattern could be observed for the loamy soil (pH = 7.2) and the sandy soil (pH = 3.2) in this study (Table 2). At the same time, the results of the clayey soil with a pH value of 7.4 again seemed to disagree with this hypothesis.
The results of the factor combination silver species × soil texture (Fig. 4a) might elucidate the crux of the observed mean silver toxicity order loamy soil < sandy soil < clayey soil. In case of Ag10-NH2 and Ag10-COOH, the loamy and the clayey soil exhibited lower toxicity compared to the sandy soil (Fig. 4a). This confirmed the correlation between the grain size distribution of soils and the toxicity of AgNP as well as the association of higher AgNP toxicity with more acidic soils [19, 23, 25]. In contrast, AgNO3 caused a strong decrease of the entirety of the relative variations of the biological parameters in the clayey soil (Fig. 4a), which had an overwhelming influence on the averages in the MANOVA of the silver species. This indicates a completely different fate of AgNO3 compared to AgNP in the test soils.
Soil texture × silver species
The factor combination soil texture × silver species yielded the highest F-value in the MANOVA considering the entirety of the relative variation of the biological parameters (Table 2).
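As an illustration of what such an interaction F-value captures, the following hedged sketch (hypothetical Python/statsmodels code on synthetic data, not the study's own analysis script) fits a factorial model in which a silver agent is assumed to be harmful mainly in one soil; the interaction term then receives its own row and F-value in the ANOVA table.

import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
soil = np.tile(np.repeat(["sandy", "loamy", "clayey"], 20), 2)   # 120 observations
silver = np.repeat(["AgNP", "AgNO3"], 60)
# Assumed interaction: AgNO3 depresses the response mainly in the clayey soil.
effect = np.where((soil == "clayey") & (silver == "AgNO3"), -8.0, 0.0)
response = 100.0 + effect + rng.normal(0, 3, 120)
df = pd.DataFrame({"soil": soil, "silver": silver, "response": response})

model = ols("response ~ C(soil) * C(silver)", data=df).fit()
# Rows of the resulting table: C(soil), C(silver), C(soil):C(silver) and Residual,
# each with its own sum of squares and F-value.
print(anova_lm(model, typ=2))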
With regard to the investigated AgNP, it could be observed that their toxicity was lower in the loamy and clayey soil than in the sandy soil (Fig. 4a). In case of Ag10-NH2, the high availability of soil cations in the loamy and the clayey soil (Table 1) might have promoted the dissolution of this less stable AgNP [14], resulting in the release of Ag+ ions. These ions were bound to clay particles and became less bioavailable [33]. The absence of an effect of the AgNO3 control, used as a measure of Ag+ impact, in the loamy soil might confirm this hypothesis. Furthermore, the high content of natural organic matter, derived from the land use of the loam and clay locations, could have reduced the toxicological effects of the Ag10-NH2 due to inhibition of oxidation [104]. In this case, positively charged Ag10-NH2 particles were adsorbed by negatively charged organic matter, which then dominated the surface properties, leading to higher steric stability and thus lower bioavailability [18].
In contrast, the sandy soil exhibited opposite soil properties and thus a different toxicity pattern (Table 1). Although the low amount of soil cations reduced the dissolution affinity of the AgNP, the low pH of 3.2 and the resulting change in the protonation of the amino groups might have decreased the stability of Ag10-NH2. Under aerobic conditions, AgNP oxidation followed and Ag+ ions were released [104]. The small clay content of the sandy soil (0.0–5.0%) did not provide enough capacity for Ag+ retention, resulting in direct harmful interactions of Ag+ with the microorganisms, such as interactions with enzymes of the respiratory chain, increased DNA mutation frequencies or morphological changes of the cell wall and membrane [105]. However, the stimulating effects of the AgNO3 control on the biological parameters in the sandy soil disagreed with the Ag+ dissolution theory (Fig. 4a). Therefore, it might be that the Ag10-NH2 particles themselves caused the negative effects in the sandy soil. Positively charged NP have been observed to strongly associate with membranes, which leads to a higher cellular uptake [26].
The lower toxicity of the Ag10-COOH relative to the Ag10-NH2 particles was generally based on their higher stability and their surface barrier, which reduced Ag+ dissolution. Furthermore, their negative surface charge promoted the attachment of soil cations in the loamy and the clayey soil and could have prevented direct interactions with the negatively charged membranes of microorganisms [26, 27, 31, 66, 68]. Analogously to Ag10-NH2, Ag10-COOH showed its strongest toxicity in the sandy soil (Fig. 4a). Here, interactions with soil cations as well as with natural organic material seemed less probable considering the soil and AgNP characteristics (Table 1). In addition to a low oxidation of Ag10-COOH and the concomitantly lower Ag+ release, a direct interaction of these AgNP with microorganisms can be assumed. An internalization of negatively charged nanoparticles could occur through nonspecific binding and clustering of the particles on cationic sites of the plasma membrane and subsequent endocytosis, or by direct diffusion through the cell membrane [106]. Here, Ag10-COOH could act as a Trojan horse: metabolization processes in food vacuoles and lysosomes allow the uncoating of AgNP and enable the direct release of Ag+ ions into the cytoplasm, causing intracellular damage [32, 107].
However, it is always a challenge to accurately differentiate which proportion of the toxicity stems from the ionic form and which from the nanoform [76]. To resolve this question, AgNO3 was used as a measure of the Ag+ impact in our study. The results of the silver nitrate exposure in the loamy soil confirmed the low contribution of Ag+ ions to the AgNP toxicity (Fig. 4a); the Ag+ ions seemed to be bound to clay particles or other soil compartments and became less bioavailable. Conspicuously, AgNO3 caused the highest toxicity in the clayey soil and a slight stimulation in the sandy soil. In fact, Schlich and Hund-Rinke [19] also observed a lower toxicity of AgNO3 compared to AgNP in a sandy soil (RefeSol 04A) when investigating the potential ammonium oxidation, but no stimulation. The high toxicity of AgNO3 in the clayey soil resulted from a decrease of the bacterial taxa, whereas LAP activity, nifH and amoA were hardly influenced or even stimulated (data not shown). In the case of the proxies for the nitrogen cycle, we suspect that the nitrate component of the AgNO3 control contributed substantially to the results. The reasons for the detrimental effect on the bacterial community structure, however, were difficult to identify. Possibly, interactions with existing soil contaminants could be a reason for the high AgNO3 toxicity. The clay location was farmed and treated with herbicides and pesticides, and AgNO3 might have bound to such contaminants, thus creating a new toxic agent. For example, Uwizeyimana et al. [108] indicated that pesticide and metal mixtures negatively affected earthworms. Detailed studies on the impact of nanoparticle-pesticide combinations on the microbial community in soils are currently lacking, but they are necessary to perform a comprehensive risk assessment of AgNP in soils.
Soil texture × exposure time
The factor combination soil texture × exposure time revealed the second highest F-value of 46.8 in the MANOVA (Table 2). With regard to the results in Fig. 4b, it could be clearly observed that the ranges of the effect characteristics of the individual soils differed considerably from each other across the four examination dates. While the effect levels in the sandy soil on days 1, 14, 28 and 90 spanned only 6.3% (lowest value of 93.7% on day 14, highest value of 100.0% on day 90), the spans for the loamy and the clayey soil were 21.5% and 20.3%, respectively. This indicates that the loamy and the clayey soil are more complex soil systems than the sandy soil with respect to the interplay of physico–chemical and biological interactions and transformations of the silver species. Whereas the effect expressions of the biological parameters in the sandy soil were indicative of consistent conditions, those of the loamy and the clayey soil suggested a variety of temporal changes in the complex soil-time framework.
In the case of the sandy soil, it can be assumed that there were consistently few interactions of the silver species with the soil characteristics, causing similar effect expressions throughout. In addition, the indigenous microbial community might show no observable adaptation ability over time.
In contrast, the relative variations of the biological parameters in the loamy and the clayey soil not only depended on the exposure time but also differed in effect strength from each other, although the soil characteristics were very similar (Table 1). For example, on day 14 it can be assumed for the loamy soil that the silver species were transformed, possibly bound to the clay particles or capped by organic matter [33, 104], resulting in a decrease of toxicity and a stimulation of the biological parameters owing to the microbial defense arsenal [4, 91,92,93, 95, 96] (Fig. 4b). Given the similar soil characteristics, the same should hold for the clayey soil. However, the biological parameters were diminished by 14.0% in the clayey soil at day 14, suggesting divergent indigenous microbial communities with different capacities to resist and adapt to silver emission. Interactions with the herbicides and pesticides of the clayey soil might also have led to the creation of a new toxicant, which provoked the harmful effect. At day 28, the effect expressions of the loamy and the clayey soil converged, and no effects or stimulating effects were documented (Fig. 4b). Here, ageing of the silver species and their return to the biological soil system could have reduced the stimulating effects on the biological parameters in the loamy soil. Conversely, the ageing of the silver species or of the new toxicants in the clayey soil, together with the defense mechanisms developed by the microbial community after 4 weeks of exposure, caused a stimulation by 6.3% (Fig. 4b).
Based on our data, however, we can only speculate about these events. More detailed investigations are necessary to reveal the time-dependent fate of the silver bioavailability in different soils, as well as the time-dependent responses of the microbial community to these silver species. For this purpose, the development of reliable and robust analytical methods for detecting specific silver species at trace concentrations in complex matrices is a fundamental requirement. Next, batch experiments can reveal time-dependent silver transformations and the concomitant effects on the microbial community. Furthermore, it would be useful to further deepen the soil characterization to evaluate possible effects of anthropogenic residues from e.g. herbicides or pesticides on the ecotoxicological potential of silver nanoparticles.
Agent species × exposure time
Regarding the MANOVA with the main factors agent species, concentration and exposure time in the loamy soil, the factor exposure time was able to explain the largest part of the variance. The factor combinations achieved lower F-values; nevertheless, the factor combination agent species × exposure time still achieved the highest F-value (14.5) among the factor combinations. Furthermore, the results of the pairwise comparisons of the post hoc test (Table 4) as well as the ambiguous role of the AgNO3 control as a measure of Ag+ dissolution gave reason to pay attention to this interaction.
Short-term exposure led only to small differences between the investigated agents AgPure, Ag10-COOH, Ag10-NH2, AgNO3 and KNO3 (Fig. 6). Ag10-COOH diminished the relative variations of the biological parameters by 7.7%. Based on its higher stability and the short exposure time, physico–chemical transformation can largely be excluded, which leads to the assumption of a direct harmful interaction of these AgNP with microorganisms. In contrast, Ag10-NH2 was shown to be a strongly unstable AgNP owing to its surface functionalization. Here, a higher dissolution of Ag+ ions as the toxicological agent might be a probable explanation for the decrease of the biological parameters by 10.2% (Fig. 6). However, considering the low effects of the AgNO3 control (97.95%), the Ag+ dissolution theory seemed less likely; therefore, a direct interaction can be assumed as well. AgPure caused on average no effects (100.7%) on day 1 and thus proved to be inert to physico–chemical transformations after short-term exposure. The KNO3 control diminished the biological parameters on average by 6.3%, but without a significant difference to AgNO3 (Fig. 6, Table 4). This slightly adverse effect might be the result of NO2− accumulation due to nitrate reduction [109, 110].
After 14 days of exposure, the differences between the effect characteristics of the agent species changed. Ag10-NH2 (111.2%), AgNO3 (116.0%) and KNO3 (112.0%) stimulated the biological parameters significantly, whereas AgPure (101.8%) and Ag10-COOH (107.0%) caused only low stimulatory effects. In the case of the silver species, their bioavailability as well as that of the released Ag+ ions could have been reduced at this time point owing to interactions with organic matter, clay minerals or pedogenic oxides [14, 20, 102, 103]. Furthermore, emerging self-protection mechanisms, such as the production of extracellular proteins or polysaccharides by the soil microbiome, could neutralize toxic ions or cap AgNP [95, 96]. In addition, resilience mechanisms [58, 78, 101] might be possible explanations for the limited effects on the microbial community in the soil at day 14. The increase of the biological parameters due to AgNO3 and KNO3 exposure might result from an increase of nitrate reduction and the concomitant increase of the nitrogen supply for the microbiome [81, 83]. Owing to the synergy of microbial silver resistance mechanisms and the nitrogen substrate source, AgNO3 increased the relative variation of the biological parameters. As already mentioned, several authors documented stimulation or minor negative effects on microorganisms related to the nitrogen cycle after exposure to low concentrations of AgNO3 in short-term experiments [16, 24, 77,78,79,80].
Starting at day 28, ageing of the silver species and their slow return to the biological soil system represented a continuous supply of bioavailable silver, reducing the stimulating effects on the biological parameters (Fig. 6). An increase in AgNP toxicity with time can be linked to a time-dependent increase of silver in soil pore water due to dissolution [33, 111]. However, the defense arsenal of the bacteria was still sufficient to resist the silver toxicity at day 28. After 3 months of exposure, there seemed to be a shock load of silver in the case of AgPure (85.2%), Ag10-NH2 (82.4%) and AgNO3 (83.8%) for which the bacterial community was not immediately prepared. Small-scale bioavailability, chemical alterations and possible transformations (oxidation, reduction, dissolution, sulfidation) of AgNP and Ag+ [23, 24, 33, 90, 112, 113] in the loamy soil are possible physicochemical causes for the abrupt toxicity. Furthermore, it might be assumed that after the short- and mid-term adaptation to the silver contamination as well as the positioning of the silver species in the soil system, the bacterial population might have lost members with silver tolerance and was unexpectedly shocked by the return of the silver toxicant at day 90, resulting in strong reductions of the biological parameters [17].
In contrast, Ag10-COOH caused a significant stimulation of the investigated parameters by 12.5%, confirming its high stability. With prolonged exposure time, the likelihood of bonds between the negatively charged Ag10-COOH and soil cations increased, and with it their physico–chemical barrier, resulting in lower bioavailability. Together with the presumably low number of free Ag+ ions in the case of Ag10-COOH, this explains why no toxicity was observed.
Based on the lack of significant differences between AgNO3 and KNO3 at days 1 and 14, it could not be determined whether the effects were caused only by the Ag+ of the AgNO3 control or also by the NO3−, as in the KNO3 control. Only at the exposure times of 28 and 90 days could significantly different effect characteristics of AgNO3 and KNO3, differing by 13.0% (p = 0.000) and 31.5% (p = 0.000) (Fig. 6, Table 4), respectively, be observed. This led us to suspect that only at these late time points did the nitrate contained in the AgNO3 control no longer strongly influence the results of the AgNO3 control. Consequently, especially in the case of biological parameters related to the nitrogen cycle, such as LAP activity or the abundance of amoA-harboring bacteria, the use of AgNO3 as a proxy for Ag+ release in AgNP short-term effect assessments could be deceptive. There is an urgent need for further research into the suitability of AgNO3 as a measure of Ag+ release. For example, batch experiments investigating all steps within the soil nitrogen cycle (in particular nitrogen fixation, nitrification, denitrification and dissimilatory nitrate reduction to ammonium) after short- and long-term exposure to AgNO3 and NO3− would help to resolve in detail at which step the nitrate released from the AgNO3 control presents an advantage for the microbial community and at which step the harmful influence of the silver predominates.
The impacts of the factors silver species, agent species, concentration, exposure time and soil texture on the relative variation of 10 biological parameters, quantified by 16S rRNA qPCR as well as by fluorometric and photometric techniques, were analysed. The AgNP used were characterized in detail by electron microscopy, dynamic light scattering and asymmetrical flow field-flow fractionation. Analyses of variance revealed the factors silver species, exposure time and soil texture as the most relevant determinants for the effect expressions of the biological parameters. Furthermore, the factor combinations soil texture × silver species and soil texture × exposure time explained an even larger part of the variance of the biological parameters.
Overall, the presented results demonstrate the importance of considering several factors in the effect assessment of AgNP. Based on our study, the interaction soil texture × silver species is the most significant factor combination for the environmental fate and toxicity of AgNP in soils.
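As background to the qPCR-based abundance measures referred to above, the following generic sketch (an assumed, simplified workflow in Python, not the quantification pipeline actually used in this study) shows how gene copy numbers such as 16S rRNA or amoA abundances are commonly back-calculated from Ct values via a log-linear standard curve; all numbers are invented for illustration.

import numpy as np

# Hypothetical standard curve: serial dilutions with known copy numbers and measured Ct values.
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
std_ct = np.array([13.1, 16.5, 19.9, 23.3, 26.7])

# Fit Ct = slope * log10(copies) + intercept; a slope near -3.32 corresponds to ~100% efficiency.
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0

def ct_to_copies(ct):
    # Back-calculate copy numbers of unknown samples from their Ct values.
    return 10.0 ** ((ct - intercept) / slope)

sample_ct = np.array([18.2, 21.7])
print(round(efficiency, 2), ct_to_copies(sample_ct))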
AgNP: silver nanoparticle(s)
AF4: asymmetrical flow field-flow fractionation
Ag-HPB: hydrophobic silver particle(s)
DLS: dynamic light scattering
MPA: mercaptopropionic acid
NP: nanoparticle(s)
PDI: polydispersity index
UPW: ultrapure water
Reidy B, Haase A, Luch A, Dawson K, Lynch I (2013) Mechanisms of silver nanoparticle release, transformation and toxicity: a critical review of current knowledge and recommendations for future studies and applications. Materials 6:2295–2350
Sun TY, Mitrano DM, Bornhöft NA, Scheringer M, Hungerbühler K, Nowack B (2017) Envisioning Nano release dynamics in a changing world: using dynamic probabilistic modeling to assess future environmental emissions of engineered nanomaterials. Environ Sci Technol 51:2854–2863
Abbasi E, Milani M, Fekri Aval S, Kouhi M, Akbarzadeh A, Tayefi Nasrabadi H, Nikasa P, Joo SW, Hanifehpour Y, Nejati-Koshki K (2016) Silver nanoparticles: synthesis methods, bio-applications and properties. Crit Rev Microbiol 42:173–180
Pareek V, Gupta R, Panwar J (2018) Do physico–chemical properties of silver nanoparticles decide their interaction with biological media and bactericidal action? A review. Mater Sci Eng C 90:739–749
Hänsch M, Emmerling C (2010) Effects of silver nanoparticles on the microbiota and enzyme activity in soil. J Plant Nutr Soil Sci 173:554–558
Gottschalk F, Kost E, Nowack B (2013) Engineered nanomaterials in water and soils: a risk quantification based on probabilistic exposure and effect modeling. Environ Toxicol Chem 32:1278–1287
Nowack B, Baalousha M, Bornhöft N, Chaudhry Q, Cornelis G, Cotterill J, Gondikas A, Hassellöv M, Lead J, Mitrano DM (2015) Progress towards the validation of modeled environmental concentrations of engineered nanomaterials by analytical measurements. Environ Sci Nano 2:421–428
Sun TY, Gottschalk F, Hungerbühler K, Nowack B (2014) Comprehensive probabilistic modelling of environmental emissions of engineered nanomaterials. Environ Pollut 185:69–76
Gottschalk F, Sonderer T, Scholz RW, Nowack B (2009) Modeled environmental concentrations of engineered nanomaterials (TiO2, ZnO, Ag, CNT, fullerenes) for different regions. Environ Sci Technol 43:9216–9222
Gottschalk F, Nowack B (2011) The release of engineered nanomaterials to the environment. J Environ Monit 13:1145–1155
Benn TM, Westerhoff P (2008) Nanoparticle silver released into water from commercially available sock fabrics. Environ Sci Technol 42:4133–4139
Sun TY, Conroy G, Donner E, Hungerbühler K, Lombi E, Nowack B (2015) Probabilistic modelling of engineered nanomaterial emissions to the environment: a spatio-temporal approach. Environ Sci Nano 2:340–351
Dale A, Casman E, Lowry G, Lead J, Viparelli E, Baalousha M (2015) Modeling nanomaterial environmental fate in aquatic systems. Environ Sci Technol 49:2587
Klitzke S, Metreveli G, Peters A, Schaumann GE, Lang F (2014) The fate of silver nanoparticles in soil solution—sorption of solutes and aggregation. Sci Total Environ 535:54–60
Holden PA, Schimel JP, Godwin HA (2014) Five reasons to use bacteria when assessing manufactured nanomaterial environmental hazards and fates. Curr Opin Biotechnol 27:73–78
Grün A-L, Straskraba S, Schulz S, Schloter M, Emmerling C (2018) Long-term effects of environmentally relevant concentrations of silver nanoparticles on microbial biomass, enzyme activity, and functional genes involved in the nitrogen cycle of loamy soil. J Environ Sci 69:12–22
Grün A-L, Emmerling C (2018) Long-term effects of environmentally relevant concentrations of silver nanoparticles on major soil bacterial phyla of a loamy soil. Environ Sci Eur 30:1–13
Cornelis G, Hund-Rinke K, Kuhlbusch T, Van den Brink N, Nickel C (2014) Fate and bioavailability of engineered nanoparticles in soils: a review. Crit Rev Environ Sci Technol 44:2720–2764
Schlich K, Hund-Rinke K (2015) Influence of soil properties on the effect of silver nanomaterials on microbial activity in five soils. Environ Pollut 196:321–330
Settimio L, McLaughlin MJ, Kirby JK, Langdon KA, Janik L, Smith S (2015) Complexation of silver and dissolved organic matter in soil water extracts. Environ Pollut 199:174–184
Bundschuh M, Filser J, Lüderwald S, McKee MS, Metreveli G, Schaumann GE, Schulz R, Wagner S (2018) Nanoparticles in the environment: where do we come from, where do we go to? Environ Sci Eur 30:1–17
Lowry GV, Gregory KB, Apte SC, Lead JR (2012) Transformations of nanomaterials in the environment. Environ Sci Technol 46:6893–6899
Pachapur VL, Larios AD, Cledón M, Brar SK, Verma M, Surampalli R (2016) Behavior and characterization of titanium dioxide and silver nanoparticles in soils. Sci Total Environ 563:933–943
Masrahi A, VandeVoort AR, Arai Y (2014) Effects of silver nanoparticle on soil-nitrification processes. Arch Environ Con Tox 66:504–513
Rahmatpour S, Shirvani M, Mosaddeghi MR, Nourbakhsh F, Bazarganipour M (2017) Dose-response effects of silver nanoparticles and silver nitrate on microbial and enzyme activities in calcareous soils. Geoderma 285:313–322
Saei AA, Yazdani M, Lohse SE, Bakhtiary Z, Serpooshan V, Ghavami M, Asadian M, Mashaghi S, Dreaden EC, Mashaghi A (2017) Nanoparticle surface functionality dictates cellular and systemic toxicity. Chem Mat 29:6578–6595
Louie SM, Tilton RD, Lowry GV (2016) Critical review: impacts of macromolecular coatings on critical physicochemical processes controlling environmental fate of nanomaterials. Environ Sci Nano 3:283–310
Lead JR, Batley GE, Alvarez PJ, Croteau MN, Handy RD, McLaughlin MJ, Judy JD, Schirmer K (2018) Nanomaterials in the environment: behavior, fate, bioavailability, and effects—an updated review. Environ Toxicol Chem 27:2029–2063
Datta LP, Chatterjee A, Acharya K, De P, Das M (2017) Enzyme responsive nucleotide functionalized silver nanoparticles with effective antimicrobial and anticancer activity. New J Chem 41:1538–1548
Fröhlich E (2012) The role of surface charge in cellular uptake and cytotoxicity of medical nanoparticles. Int J Nanomed 7:1–15
Wu F, Harper BJ, Harper SL (2017) Differential dissolution and toxicity of surface functionalized silver nanoparticles in small-scale microcosms: impacts of community complexity. Environ Sci Nano 4:359–372
Grün A-L, Scheid P, Hauröder B, Emmerling C, Manz W (2017) Assessment of the effect of silver nanoparticles on the relevant soil protozoan genus Acanthamoeba. J Plant Nutr Soil Sci 180:602–613
Diez-Ortiz M, Lahive E, George S, Ter Schure A, Van Gestel CA, Jurkschat K, Svendsen C, Spurgeon DJ (2015) Short-term soil bioassays may not reveal the full toxicity potential for nanomaterials; bioavailability and toxicity of silver ions (AgNO3) and silver nanoparticles to earthworm Eisenia fetida in long-term aged soils. Environ Pollut 203:191–198
Zhai Y, Hunting ER, Wouterse M, Peijnenburg W, Vijver MG (2016) Silver nanoparticles, ions and shape governing soil microbial functional diversity: nano shapes micro. Front Microbiol 7:1–9
Marx MC, Wood M, Jarvis SC (2001) A microplate fluorimetric assay for the study of enzyme diversity in soils. Soil Biol Biochem 33:1633–1640
Ernst G, Henseler I, Felten D, Emmerling C (2009) Decomposition and mineralization of energy crop residues governed by earthworms. Soil Biol Biochem 41:1548–1554
Kreuzer K, Adamczyk J, Iijima M, Wagner M, Scheu S, Bonkowski M (2006) Grazing of a common species of soil protozoa (Acanthamoeba castellanii) affects rhizosphere bacterial community composition and root architecture of rice (Oryza sativa L.). Soil Biol Biochem 38:1665–1672
Töwe S, Kleineidam K, Schloter M (2010) Differences in amplification efficiency of standard curves in quantitative real-time PCR assays and consequences for gene quantification in environmental samples. J Microbiol Methods 82:338–341
Bach H-J, Tomanova J, Schloter M, Munch J (2002) Enumeration of total bacteria and bacteria with genes for proteolytic activity in pure cultures and in environmental samples by quantitative PCR mediated amplification. J Microbiol Methods 49:235–245
Rösch C, Mergel A, Bothe H (2002) Biodiversity of denitrifying and dinitrogen-fixing bacteria in an acid forest soil. Appl Environ Microbiol 68:3818–3829
Rotthauwe J-H, Witzel K-P, Liesack W (1997) The ammonia monooxygenase structural gene amoA as a functional marker: molecular fine-scale analysis of natural ammonia-oxidizing populations. Appl Environ Microbiol 63:4704–4712
Barns SM, Takala SL, Kuske CR (1999) Wide distribution and diversity of members of the bacterial kingdom Acidobacterium in the environment. Appl Environ Microbiol 65:1731–1737
Muyzer G, De Waal EC, Uitterlinden AG (1993) Profiling of complex microbial populations by denaturing gradient gel electrophoresis analysis of polymerase chain reaction-amplified genes coding for 16S rRNA. Appl Environ Microbiol 59:695–700
Stach JE, Maldonado LA, Ward AC, Goodfellow M, Bull AT (2003) New primers for the class Actinobacteria: application to marine and terrestrial environments. Environ Microbiol 5:828–841
Bacchetti De Gregoris T, Aldred N, Clare AS, Burgess JG (2011) Improvement of phylum- and class-specific primers for real-time PCR quantification of bacterial taxa. J Microbiol Methods 86:351–356
Manz W, Amann R, Ludwig W, Vancanneyt M, Schleifer K-H (1996) Application of a suite of 16S rRNA-specific oligonucleotide probes designed to investigate bacteria of the phylum cytophaga-flavobacter-bacteroides in the natural environment. Microbiol 142:1097–1106
Overmann J, Coolen MJ, Tuschak C (1999) Specific detection of different phylogenetic groups of chemocline bacteria based on PCR and denaturing gradient gel electrophoresis of 16S rRNA gene fragments. Arch Microbiol 172:83–94
Lane D (1991) 16S/23S rRNA sequencing. In: Stackebrandt E, Goodfellow M (eds) Nucleic acid techniques in bacterial systematics. Wiley, England
Kuramae EE, Yergeau E, Wong LC, Pijl AS, van Veen JA, Kowalchuk GA (2012) Soil characteristics more strongly influence soil bacterial communities than land-use type. FEMS Microbiol Ecol 79:12–24
Kallenbach CM, Frey SD, Grandy AS (2016) Direct evidence for microbial-derived soil organic matter formation and its ecophysiological controls. Nat Commun 7:1–10
Ma W, Jiang S, Assemien F, Qin M, Ma B, Xie Z, Liu Y, Feng H, Du G, Ma X (2016) Response of microbial functional groups involved in soil N cycle to N, P and NP fertilization in Tibetan alpine meadows. Soil Biol Biochem 101:195–206
McGee CF, Storey S, Clipson N, Doyle E (2018) Concentration-dependent responses of soil bacterial, fungal and nitrifying communities to silver nano and micron particles. Environ Sci Pollut Res 25:18693–18704
Emmerling C, Schloter M, Hartmann A, Kandeler E (2002) Functional diversity of soil organisms-a review of recent research activities in Germany. J Plant Nutr Soil Sci 165:408–420
Rincon-Florez VA, Carvalhais LC, Schenk PM (2013) Culture-independent molecular tools for soil and rhizosphere microbiology. Diversity 5:581–612
Zhang X, Liu W, Schloter M, Zhang G, Chen Q, Huang J, Li L, Elser JJ, Han X (2013) Response of the abundance of key soil microbial nitrogen-cycling genes to multi-factorial global changes. PLoS ONE 8:e76500
Fierer N, Bradford MA, Jackson RB (2007) Toward an ecological classification of soil bacteria. Ecology 88:1354–1364
Torsvik V, Øvreås L (2002) Microbial diversity and function in soil: from genes to ecosystems. Curr Opin Microbiol 5:240–245
Allison SD, Martiny JB (2008) Resistance, resilience, and redundancy in microbial communities. PNAS 105:11512–11519
Ventura M, Canchaya C, Tauch A, Chandra G, Fitzgerald GF, Chater KF, van Sinderen D (2007) Genomics of Actinobacteria: tracing the evolutionary history of an ancient phylum. Microbiol Mol Biol Rev 71:495–548
Zhang Y, Cong J, Lu H, Li G, Qu Y, Su X, Zhou J, Li D (2014) Community structure and elevational diversity patterns of soil Acidobacteria. J Environ Sci 26:1717–1724
Li X, Rui J, Xiong J, Li J, He Z, Zhou J, Yannarell AC, Mackie RI (2014) Functional potential of soil microbial communities in the maize rhizosphere. PLoS ONE 9:e112609
Ward NL, Challacombe JF, Janssen PH, Henrissat B, Coutinho PM, Wu M, Xie G, Haft DH, Sait M, Badger J (2009) Three genomes from the phylum Acidobacteria provide insight into the lifestyles of these microorganisms in soils. Appl Environ Microbiol 75:2046–2056
Kielak AM, Barreto CC, Kowalchuk GA, van Veen JA, Kuramae EE (2016) The ecology of Acidobacteria: moving beyond genes and genomes. Front Microbiol 7:1–16
Sharma S, Szele Z, Schilling R, Munch JC, Schloter M (2006) Influence of freeze-thaw stress on the structure and function of microbial communities and denitrifying populations in soil. Appl Environ Microbiol 72:2148–2154
Feng X, Nielsen LL, Simpson MJ (2007) Responses of soil organic matter and microorganisms to freeze–thaw cycles. Soil Biol Biochem 39:2027–2037
Liu C, Leng W, Vikesland PJ (2018) Controlled evaluation of the impacts of surface coatings on silver nanoparticle dissolution rates. Environ Sci Technol 52:2726–2734
Badawy AME, Luxton TP, Silva RG, Scheckel KG, Suidan MT, Tolaymat TM (2010) Impact of environmental conditions (pH, ionic strength, and electrolyte type) on the surface charge and aggregation of silver nanoparticles suspensions. Environ Sci Technol 44:1260–1266
Long Y-M, Hu L-G, Yan X-T, Zhao X-C, Zhou Q-F, Cai Y, Jiang G-B (2017) Surface ligand controls silver ion release of nanosilver and its antibacterial activity against Escherichia coli. Int J Nanomed 12:1–14
Yang Y, Quensen J, Mathieu J, Wang Q, Wang J, Li M, Tiedje JM, Alvarez PJ (2014) Pyrosequencing reveals higher impact of silver nanoparticles than Ag+ on the microbial community structure of activated sludge. Water Res 48:317–325
McGee C, Storey S, Clipson N, Doyle E (2017) Soil microbial community responses to contamination with silver, aluminium oxide and silicon dioxide nanoparticles. Ecotoxicology 26:449–458
Cho EC, Xie J, Wurm PA, Xia Y (2009) Understanding the role of surface charges in cellular adsorption versus internalization by selectively removing gold nanoparticles on the cell surface with a I2/KI etchant. Nano Lett 9:1080–1084
Villanueva A, Cañete M, Roca AG, Calero M, Veintemillas-Verdaguer S, Serna CJ, del Puerto MM, Miranda R (2009) The influence of surface functionalization on the enhanced internalization of magnetic nanoparticles in cancer cells. Nanotechnology 20:1–9
Wang D, Jaisi DP, Yan J, Jin Y, Zhou D (2015) Transport and retention of polyvinylpyrrolidone-coated silver nanoparticles in natural soils. Vadose Zone J 14:vzj2015.01.0007
Li X, Lenhart JJ, Walker HW (2010) Dissolution-accompanied aggregation kinetics of silver nanoparticles. Langmuir 26:16690–16698
Maillard J-Y, Hartemann P (2013) Silver as an antimicrobial: facts and gaps in knowledge. Crit Rev Microbiol 39:373–383
McShan D, Ray PC, Yu H (2014) Molecular toxicity mechanism of nanosilver. J Food Drug Anal 22:116–127
Schlich K, Klawonn T, Terytze K, Hund-Rinke K (2013) Hazard assessment of a silver nanoparticle in soil applied via sewage sludge. Environ Sci Eur 25:1–14
Yang Y, Wang J, Xiu Z, Alvarez PJ (2013) Impacts of silver nanoparticles on cellular and transcriptional activity of nitrogen-cycling bacteria. Environ Toxicol Chem 32:1488–1494
Choi O, Deng KK, Kim N-J, Ross L, Surampalli RY, Hu Z (2008) The inhibitory effects of silver nanoparticles, silver ions, and silver chloride colloids on microbial growth. Water Res 42:3066–3074
Liang Z, Das A, Hu Z (2010) Bacterial response to a shock load of nanosilver in an activated sludge treatment system. Water Res 44:5432–5438
Giles ME, Morley NJ, Baggs EM, Daniell TJ (2012) Soil nitrate reducing processes-drivers, mechanisms for spatial variation, and significance for nitrous oxide production. Front Microbiol 3:407–423
Ji B, Yang K, Zhu L, Jiang Y, Wang H, Zhou J, Zhang H (2015) Aerobic denitrification: A review of important advances of the last 30 years. Biotechnol Bioprocess Eng 20:643–651
Tiedje JM (1988) Ecology of denitrification and dissimilatory nitrate reduction to ammonium. Biol Anaerob Microor 717:179–244
Asadishad B, Chahal S, Akbari A, Cianciarelli V, Azodi M, Ghoshal S, Tufenkji N (2018) Amendment of agricultural soil with metal nanoparticles: effects on soil enzyme activity and microbial community composition. Environ Sci Technol 52:1908–1918
Panáček A, Kvítek L, Smékalová M, Večeřová R, Kolář M, Röderová M, Dyčka F, Šebela M, Prucek R, Tomanec O (2018) Bacterial resistance to silver nanoparticles and how to overcome it. Nat Nanotechnol 13:65
Samarajeewa AD, Velicogna JR, Princz JI, Subasinghe RM, Scroggins RP, Beaudette LA (2017) Effect of silver nano-particles on soil microbial growth, activity and community diversity in a sandy loam soil. Environ Pollut 220:504–513
He S, Feng Y, Ni J, Sun Y, Xue L, Feng Y, Yu Y, Lin X, Yang L (2016) Different responses of soil microbial metabolic activity to silver and iron oxide nanoparticles. Chemosphere 147:195–202
Metreveli G, Frombold B, Seitz F, Grün A, Philippe A, Rosenfeldt RR, Bundschuh M, Schulz R, Manz W, Schaumann GE (2016) Impact of chemical composition of ecotoxicological test media on the stability and aggregation status of silver nanoparticles. Environ Sci Nano 3:418–433
Jacobson AR, McBride MB, Baveye P, Steenhuis TS (2005) Environmental factors determining the trace-level sorption of silver and thallium to soils. Sci Total Environ 345:191–205
Levard C, Hotze EM, Lowry GV, Brown GE Jr (2012) Environmental transformations of silver nanoparticles: impact on stability and toxicity. Environ Sci Technol 46:6900–6914
Tripathi DK, Tripathi A, Singh S, Singh Y, Vishwakarma K, Yadav G, Sharma S, Singh VK, Mishra RK, Upadhyay R (2017) Uptake, accumulation and toxicity of silver nanoparticle in autotrophic plants, and heterotrophic microbes: a concentric review. Front Microbiol 8:1–16
Jung WK, Koo HC, Kim KW, Shin S, Kim SH, Park YH (2008) Antibacterial activity and mechanism of action of the silver ion in Staphylococcus aureus and Escherichia coli. Appl Environ Microbiol 74:2171–2178
Nies DH (1999) Microbial heavy-metal resistance. Appl Microbiol Biotechnol 51:730–750
Keesstra SD, Bouma J, Wallinga J, Tittonell P, Smith P, Cerdà A, Montanarella L, Quinton JN, Pachepsky Y, van der Putten WH (2016) The significance of soils and soil science towards realization of the United nations sustainable development goals. Soil 2:111–128
Wu B, Wang Y, Lee Y-H, Horst A, Wang Z, Chen D-R, Sureshkumar R, Tang YJ (2010) Comparative eco-toxicities of nano-ZnO particles under aquatic and aerosol exposure modes. Environ Sci Technol 44:1484–1489
Sudheer Khan S, Bharath Kumar E, Mukherjee A, Chandrasekaran N (2011) Bacterial tolerance to silver nanoparticles (SNPs): aeromonas punctata isolated from sewage environment. J Basic Microbiol 51:183–190
Juan W, Kunhui S, Zhang L, Youbin S (2017) Effects of silver nanoparticles on soil microbial communities and bacterial nitrification in suburban vegetable soils. Pedosphere 27:482–490
Ma Y, Metch JW, Vejerano EP, Miller IJ, Leon EC, Marr LC, Vikesland PJ, Pruden A (2015) Microbial community response of nitrifying sequencing batch reactors to silver, zero-valent iron, titanium dioxide and cerium dioxide nanomaterials. Water Res 68:87–97
Cornelis G, Kirby JK, Beak D, Chittleborough D, McLaughlin MJ (2010) A method for determination of retention of silver and cerium oxide manufactured nanoparticles in soils. Environ Chem 7:298–308
Shoults-Wilson WA, Reinsch BC, Tsyusko OV, Bertsch PM, Lowry GV, Unrine JM (2011) Role of particle size and soil type in toxicity of silver nanoparticles to earthworms. Soil Sci Soc Am J 75:365–377
Postgate JR (1967) Viability measurements and the survival of microbes under minimum stress. Adv Microb Physiol 1:1–23
Cornelis G, Doolette Madeleine Thomas C, McLaughlin MJ, Kirby JK, Beak DG, Chittleborough D (2012) Retention and dissolution of engineered silver nanoparticles in natural soils. Soil Sci Soc Am J 76:891–902
Sagee O, Dror I, Berkowitz B (2012) Transport of silver nanoparticles (AgNPs) in soil. Chemosphere 88:670–675
Liu J, Hurt RH (2010) Ion release kinetics and particle persistence in aqueous nano-silver colloids. Environ Sci Technol 44:2169–2175
Marambio-Jones C, Hoek EM (2010) A review of the antibacterial effects of silver nanomaterials and potential implications for human health and the environment. J Nanopart Res 12:1531–1551
Verma A, Stellacci F (2010) Effect of surface properties on nanoparticle–cell interactions. Small 6:12–21
Park E-J, Yi J, Kim Y, Choi K, Park K (2010) Silver nanoparticles induce cytotoxicity by a Trojan-horse type mechanism. Toxicol Vitro 24:872–878
Uwizeyimana H, Wang M, Chen W, Khan K (2017) The eco-toxic effects of pesticide and heavy metal mixtures towards earthworms in soil. Environ Toxicol Pharmacol 55:20–29
Stein LY, Arp DJ (1998) Loss of ammonia monooxygenase activity in Nitrosomonas europaea upon exposure to nitrite. Appl Environ Microbiol 64:4098–4102
Bollag J-M, Henninger NM (1978) Effects of nitrite toxicity on soil bacteria under aerobic and anaerobic conditions. Soil Biol Biochem 10:377–381
Das P, Barua S, Sarkar S, Chatterjee SK, Mukherjee S, Goswami L, Das S, Bhattacharya S, Karak N, Bhattacharya SS (2018) Mechanism of toxicity and transformation of silver nanoparticles: inclusive assessment in earthworm-microbe-soil-plant system. Geoderma 314:73–84
Cornelis G, Pang L, Doolette C, Kirby JK, McLaughlin MJ (2013) Transport of silver nanoparticles in saturated columns of natural soils. Sci Total Environ 463:120–130
Gunsolus IL, Mousavi MP, Hussein K, Bühlmann P, Haynes CL (2015) Effects of humic and fulvic acids on silver nanoparticle stability, dissolution, and toxicity. Environ Sci Technol 49:8078–8086
JC designed and manufactured the Ag10-NH2 and Ag10-COOH nanoparticles. SS performed and evaluated the transmission electron microscopy of AgNP. KY, MF, and DR performed the ζ-potential and size measurements of AgNP as well as the asymmetrical flow field-flow fractionation (AF4) of AgNP. GAL and EC designed and performed the soil experiments. GAL, EC and MW analysed the data and wrote the manuscript. All authors read and approved the final manuscript.
We thank Elvira Sieberger (University of Trier) for her excellent laboratory support and assistance.
All datasets on which the conclusions of the paper rely are presented in the main manuscript (Additional files 1 and 2).
The study was financially supported by the BMBF (Grant No. 03X0150) and the German Research Foundation (DFG Research unit FOR 1536, MA 3273/3-2).
Department of Biology, Institute for Integrated Natural Sciences, University of Koblenz-Landau, Universitätsstr. 1, 56070, Koblenz, Germany
Anna-Lena Grün
& Werner Manz
Fraunhofer Institute for Biomedical Engineering IBMT, Joseph-von-Fraunhofer-Weg 1, 66280, Sulzbach, Germany
Yvonne Lydia Kohl
Postnova Analytics GmbH, Max-Planck Straße 14, 86899, Landsberg, Germany
Florian Meier
& Roland Drexel
Institute of Molecular Biosciences, J.W. Goethe University, Max-von-Laue-Strase 9, 60438, Frankfurt am Main, Germany
Susanne Straskraba
PlasmaChem GmbH, Schwarzschildstraße 10, 12489, Berlin, Germany
Carsten Jost
Department of Soil Science, Faculty of Regional and Environmental Science, University of Trier, Campus II, 54286, Trier, Germany
& Christoph Emmerling
Correspondence to Christoph Emmerling.
Additional file 1. Characterization of AgNP.
Additional file 2. Raw data.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Silver nanoparticles
Soil microbial community
Functional diversity
J. Arthur Seebach Jr.
J. Arthur Seebach Jr. (May 17, 1938 – December 3, 1996) was an American mathematician.
Seebach studied Greek language as an undergraduate, making it a second major with mathematics.
Seebach studied with A. I. Weinzweig at Northwestern University. He earned a Ph.D. with the thesis Cones and Homotopy in Categories. Seebach began to teach at Saint Olaf College in Northfield, Minnesota in 1965. He, his wife Linda A. Seebach, and Lynn A. Steen wrote an expository article "What is a Sheaf".[1] The paper showed that a sheaf is useful in analysis, algebra, and geometry when considering germs of holomorphic functions, local rings, and differential forms. J. Arthur also wrote "Injectives and Homotopy".[2]
In 1971 Seebach and Steen took over the Book Reviews section of American Mathematical Monthly, including Telegraphic Reviews which ran for several pages every month. The massive effort was eventually distributed over some 50 mathematicians at Saint Olaf, Carleton, and Macalester Colleges. Telegraphic Reviews, in telegraphic style, was started by Kenneth O. May in 1965 and provided an American posting of new publications before the digital age.
Seebach and Steen conducted research in a 1967 summer school with students investigating the independence of conditions on topological spaces. They summarized their work in Counterexamples in Topology (1978).
In 1975 Seebach and Steen became co-editors of Mathematics Magazine. Steen wrote:
Arthur’s sense of whimsy, his love of puns, and his proclivity for obscure connections totally transformed the image of Mathematics Magazine. Cover art, viewed as radical at the time, has since been emulated...
Seebach welcomed the rise of computers when he assembled a Heathkit H8. In 1986 he became editor of Mathematical Notes in American Mathematical Monthly.
Beyond mathematics, Seebach sang with the Bach Society of Minnesota. The craftsmanship of the Studebaker automobile appealed to Seebach, and he operated a side-business in Studebaker parts, drove several of the cars, and published a newsletter for fellow aficionados. His newsletter experience was of value to the Mathematical Association of America when it began its own newsletter.
Seebach died in 1996 from complications of diabetes.
References
1. J.A. Seebach, Linda A. Seebach & Lynn A. Steen (1970) "What is a Sheaf", American Mathematical Monthly 77:681–703 MR0263073
2. J.A. Seebach (1972) "Injectives and Homotopy", Illinois Journal of Mathematics 16:446–53, link from Project Euclid, MR0300276
• Lynn Arthur Steen (1997) "In Memoriam: J. Arthur Seebach Jr.", Mathematics Magazine 70: 78–79.
External links
• J. Arthur Seebach Jr. at the Mathematics Genealogy Project
| Wikipedia |
December 2015, 20(10): 3435-3459. doi: 10.3934/dcdsb.2015.20.3435
Realizing subexponential entropy growth rates by cutting and stacking
Frank Blume 1,
Department of Mathematics, John Brown University, 2000 W. University St, Siloam Springs, AR 72761, United States
Received: October 2014; Revised: March 2015; Published: September 2015.
We show that for any concave positive function $f$ defined on $[0,\infty)$ with $\lim_{x\rightarrow\infty}f(x)/x=0$ there exists a rank one system $(X_f,T_f)$ such that $\limsup_{n\rightarrow\infty} H(\alpha_0^{n-1})/f(n)\ge 1$ for all nontrivial partitions $\alpha$ of $X_f$ into two sets and that there is one partition $\alpha$ of $X_f$ into two sets for which the limit superior of $H(\alpha_0^{n-1})/f(n)$ is equal to one whenever the condition $\lim_{x\rightarrow\infty}\ln x/f(x)=0$ is satisfied. Furthermore, for each system $(X_f,T_f)$ we also identify the minimal entropy growth rate in the limit inferior.
Keywords: Ergodic theory, entropy growth rate, cutting and stacking, zero-entropy system, rank-one system.
Mathematics Subject Classification: Primary: 28D20; Secondary: 27A9.
Citation: Frank Blume. Realizing subexponential entropy growth rates by cutting and stacking. Discrete & Continuous Dynamical Systems - B, 2015, 20 (10) : 3435-3459. doi: 10.3934/dcdsb.2015.20.3435
| CommonCrawl |
William Feller
William "Vilim" Feller (July 7, 1906 – January 14, 1970), born Vilibald Srećko Feller, was a Croatian–American mathematician specializing in probability theory.
William Feller
Born
Vilibald Srećko Feller
July 7, 1906
Zagreb, Austro-Hungarian Monarchy (now Croatia)
Died
January 14, 1970 (aged 63)
New York City, US
Nationality
Croatian–American
Alma mater
University of Zagreb
University of Göttingen
Known for
Feller process
Feller's coin-tossing constants
Feller-continuous process
Feller's paradox
Feller's theorem
Feller–Pareto distribution
Feller–Tornier constant
Feller–Miyadera–Phillips theorem
Proof by intimidation
Stars and bars
Awards
National Medal of Science (USA) in Mathematical, Statistical, and Computational Sciences (1969)
Scientific career
Fields
Mathematician
Institutions
University of Kiel
University of Copenhagen
University of Stockholm
University of Lund
Brown University
Cornell University
Princeton University
Doctoral advisor
Richard Courant
Doctoral students
Patrick Billingsley
George Forsythe
Robert Kurtz
Henry McKean
Lawrence Shepp
Hale Trotter
Benjamin Weiss
David A. Freedman
Influences
Stanko Vlögel
Signature
Early life and education
Feller was born in Zagreb to Ida Oemichen-Perc, a Croatian–Austrian Catholic, and Eugen Viktor Feller, son of a Polish–Jewish father (David Feller) and an Austrian mother (Elsa Holzer).[1]
Eugen Feller was a famous chemist and created Elsa fluid named after his mother. According to Gian-Carlo Rota, Eugen Feller's surname was a "Slavic tongue twister", which William changed at the age of twenty.[2] This claim appears to be false. His forename, Vilibald, was chosen by his Catholic mother for the saint day of his birthday.[3]
Work
Feller held a docent position at the University of Kiel beginning in 1928. Because he refused to sign a Nazi oath,[4] he fled the Nazis and went to Copenhagen, Denmark in 1933. He also lectured in Sweden (Stockholm and Lund).[5] As a refugee in Sweden, Feller reported being troubled by increasing fascism at the universities. He reported that the mathematician Torsten Carleman would offer his opinion that Jews and foreigners should be executed.[6]
Finally, in 1939 he arrived in the U.S., where he became a citizen in 1944 and was on the faculty at Brown and Cornell. In 1950 he became a professor at Princeton University.
The works of Feller are contained in 104 papers and two books on a variety of topics such as mathematical analysis, theory of measurement, functional analysis, geometry, and differential equations in addition to his work in mathematical statistics and probability.
Feller was one of the greatest probabilists of the twentieth century. He is remembered for his championing of probability theory as a branch of mathematical analysis in Sweden and the United States. In the middle of the 20th century, probability theory was popular in France and Russia, while mathematical statistics was more popular in the United Kingdom and the United States, according to the Swedish statistician, Harald Cramér.[7] His two-volume textbook on probability theory and its applications was called "the most successful treatise on probability ever written" by Gian-Carlo Rota.[8] By stimulating his colleagues and students in Sweden and then in the United States, Feller helped establish research groups studying the analytic theory of probability. In his research, Feller contributed to the study of the relationship between Markov chains and differential equations, where his theory of generators of one-parameter semigroups of stochastic processes gave rise to the theory of "Feller operators".
Results
Numerous topics relating to probability are named after him, including Feller processes, Feller's explosion test, Feller Brownian motion, and the Lindeberg–Feller theorem. Feller made fundamental contributions to renewal theory, Tauberian theorems, random walks, diffusion processes, and the law of the iterated logarithm. Feller was among those early editors who launched the journal Mathematical Reviews.
Notable books
• An Introduction to Probability Theory and its Applications, Volume I, 3rd edition (1968); 1st edn. (1950);[9] 2nd edn. (1957)[10]
• An Introduction to Probability Theory and its Applications, Volume II, 2nd edition (1971)
Recognition
In 1949, Feller was named a Fellow of the American Statistical Association.[11] He was elected to the American Academy of Arts and Sciences in 1958, the United States National Academy of Sciences in 1960, and the American Philosophical Society in 1966.[12][13][14] Feller won the National Medal of Science in 1969. He was president of the Institute of Mathematical Statistics.
See also
• Feller condition
• Beta distribution
• Compound Poisson distribution
• Gillespie algorithm
• Kolmogorov equations
• Poisson point process
• Stability (probability)
• St. Petersburg paradox
• Stochastic process
References
1. Zubrinic, Darko (2006). "William Feller (1906-1970)". Croatianhistory.net. Accessed 3 July 2018.
2. Rota, Gian-Carlo (1996). Indiscrete Thoughts. Birkhäuser. ISBN 0-8176-3866-0.
3. O'Connor, John J.; Robertson, Edmund F., "William Feller", MacTutor History of Mathematics Archive, University of St Andrews
4. "Biography of William Feller". History of William Feller. Retrieved 2006-06-27.
5. Siegmund-Schultze, Reinhard (2009). Mathematicians fleeing from Nazi Germany: Individual fates and global impact. Princeton, New Jersey: Princeton University Press. pp. xxviii+471. ISBN 978-0-691-14041-4. MR 2522825.
6. (Siegmund-Schultze 2009, p. 135)
7. Preface to his Mathematical Methods of Statistics.
8. Page 199: Indiscrete Thoughts.
9. Wolfowitz, J. (1951). "Review: An introduction to probability theory and its applications, Vol. I, 1st ed., by W. Feller" (PDF). Bull. Amer. Math. Soc. 57 (2): 156–159. doi:10.1090/s0002-9904-1951-09491-4.
10. "Review: An introduction to probability theory and its applications, Vol. I, 2nd ed., by W. Feller" (PDF). Bull. Amer. Math. Soc. 64 (6): 393. 1958. doi:10.1090/s0002-9904-1958-10252-9.
11. "View/Search Fellows of the ASA". American Statistical Association. Retrieved 2016-07-22.
12. "William Feller". American Academy of Arts & Sciences. Retrieved 2022-09-29.
13. "William Feller". www.nasonline.org. Retrieved 2022-09-29.
14. "APS Member History". search.amphilsoc.org. Retrieved 2022-09-29.
External links
Wikiquote has quotations related to William Feller.
• William Feller at the Mathematics Genealogy Project
• A biographical memoir by Murray Rosenblatt
• Croatian Giants of Science - in Croatian
• O'Connor, John J.; Robertson, Edmund F., "William Feller", MacTutor History of Mathematics Archive, University of St Andrews
• "Fine Hall in its golden age: Remembrances of Princeton in the early fifties" by Gian-Carlo Rota. Contains a section on Feller at Princeton.
• Feller Matriculation Form giving personal details
United States National Medal of Science laureates
Behavioral and social science
1960s
1964
Neal Elgar Miller
1980s
1986
Herbert A. Simon
1987
Anne Anastasi
George J. Stigler
1988
Milton Friedman
1990s
1990
Leonid Hurwicz
Patrick Suppes
1991
George A. Miller
1992
Eleanor J. Gibson
1994
Robert K. Merton
1995
Roger N. Shepard
1996
Paul Samuelson
1997
William K. Estes
1998
William Julius Wilson
1999
Robert M. Solow
2000s
2000
Gary Becker
2003
R. Duncan Luce
2004
Kenneth Arrow
2005
Gordon H. Bower
2008
Michael I. Posner
2009
Mortimer Mishkin
2010s
2011
Anne Treisman
2014
Robert Axelrod
2015
Albert Bandura
Biological sciences
1960s
1963
C. B. van Niel
1964
Theodosius Dobzhansky
Marshall W. Nirenberg
1965
Francis P. Rous
George G. Simpson
Donald D. Van Slyke
1966
Edward F. Knipling
Fritz Albert Lipmann
William C. Rose
Sewall Wright
1967
Kenneth S. Cole
Harry F. Harlow
Michael Heidelberger
Alfred H. Sturtevant
1968
Horace Barker
Bernard B. Brodie
Detlev W. Bronk
Jay Lush
Burrhus Frederic Skinner
1969
Robert Huebner
Ernst Mayr
1970s
1970
Barbara McClintock
Albert B. Sabin
1973
Daniel I. Arnon
Earl W. Sutherland Jr.
1974
Britton Chance
Erwin Chargaff
James V. Neel
James Augustine Shannon
1975
Hallowell Davis
Paul Gyorgy
Sterling B. Hendricks
Orville Alvin Vogel
1976
Roger Guillemin
Keith Roberts Porter
Efraim Racker
E. O. Wilson
1979
Robert H. Burris
Elizabeth C. Crosby
Arthur Kornberg
Severo Ochoa
Earl Reece Stadtman
George Ledyard Stebbins
Paul Alfred Weiss
1980s
1981
Philip Handler
1982
Seymour Benzer
Glenn W. Burton
Mildred Cohn
1983
Howard L. Bachrach
Paul Berg
Wendell L. Roelofs
Berta Scharrer
1986
Stanley Cohen
Donald A. Henderson
Vernon B. Mountcastle
George Emil Palade
Joan A. Steitz
1987
Michael E. DeBakey
Theodor O. Diener
Harry Eagle
Har Gobind Khorana
Rita Levi-Montalcini
1988
Michael S. Brown
Stanley Norman Cohen
Joseph L. Goldstein
Maurice R. Hilleman
Eric R. Kandel
Rosalyn Sussman Yalow
1989
Katherine Esau
Viktor Hamburger
Philip Leder
Joshua Lederberg
Roger W. Sperry
Harland G. Wood
1990s
1990
Baruj Benacerraf
Herbert W. Boyer
Daniel E. Koshland Jr.
Edward B. Lewis
David G. Nathan
E. Donnall Thomas
1991
Mary Ellen Avery
G. Evelyn Hutchinson
Elvin A. Kabat
Robert W. Kates
Salvador Luria
Paul A. Marks
Folke K. Skoog
Paul C. Zamecnik
1992
Maxine Singer
Howard Martin Temin
1993
Daniel Nathans
Salome G. Waelsch
1994
Thomas Eisner
Elizabeth F. Neufeld
1995
Alexander Rich
1996
Ruth Patrick
1997
James Watson
Robert A. Weinberg
1998
Bruce Ames
Janet Rowley
1999
David Baltimore
Jared Diamond
Lynn Margulis
2000s
2000
Nancy C. Andreasen
Peter H. Raven
Carl Woese
2001
Francisco J. Ayala
George F. Bass
Mario R. Capecchi
Ann Graybiel
Gene E. Likens
Victor A. McKusick
Harold Varmus
2002
James E. Darnell
Evelyn M. Witkin
2003
J. Michael Bishop
Solomon H. Snyder
Charles Yanofsky
2004
Norman E. Borlaug
Phillip A. Sharp
Thomas E. Starzl
2005
Anthony Fauci
Torsten N. Wiesel
2006
Rita R. Colwell
Nina Fedoroff
Lubert Stryer
2007
Robert J. Lefkowitz
Bert W. O'Malley
2008
Francis S. Collins
Elaine Fuchs
J. Craig Venter
2009
Susan L. Lindquist
Stanley B. Prusiner
2010s
2010
Ralph L. Brinster
Rudolf Jaenisch
2011
Lucy Shapiro
Leroy Hood
Sallie Chisholm
2012
May Berenbaum
Bruce Alberts
2013
Rakesh K. Jain
2014
Stanley Falkow
Mary-Claire King
Simon Levin
Chemistry
1960s
1964
Roger Adams
1980s
1982
F. Albert Cotton
Gilbert Stork
1983
Roald Hoffmann
George C. Pimentel
Richard N. Zare
1986
Harry B. Gray
Yuan Tseh Lee
Carl S. Marvel
Frank H. Westheimer
1987
William S. Johnson
Walter H. Stockmayer
Max Tishler
1988
William O. Baker
Konrad E. Bloch
Elias J. Corey
1989
Richard B. Bernstein
Melvin Calvin
Rudolph A. Marcus
Harden M. McConnell
1990s
1990
Elkan Blout
Karl Folkers
John D. Roberts
1991
Ronald Breslow
Gertrude B. Elion
Dudley R. Herschbach
Glenn T. Seaborg
1992
Howard E. Simmons Jr.
1993
Donald J. Cram
Norman Hackerman
1994
George S. Hammond
1995
Thomas Cech
Isabella L. Karle
1996
Norman Davidson
1997
Darleane C. Hoffman
Harold S. Johnston
1998
John W. Cahn
George M. Whitesides
1999
Stuart A. Rice
John Ross
Susan Solomon
2000s
2000
John D. Baldeschwieler
Ralph F. Hirschmann
2001
Ernest R. Davidson
Gábor A. Somorjai
2002
John I. Brauman
2004
Stephen J. Lippard
2005
Tobin J. Marks
2006
Marvin H. Caruthers
Peter B. Dervan
2007
Mostafa A. El-Sayed
2008
Joanna Fowler
JoAnne Stubbe
2009
Stephen J. Benkovic
Marye Anne Fox
2010s
2010
Jacqueline K. Barton
Peter J. Stang
2011
Allen J. Bard
M. Frederick Hawthorne
2012
Judith P. Klinman
Jerrold Meinwald
2013
Geraldine L. Richmond
2014
A. Paul Alivisatos
Engineering sciences
1960s
1962
Theodore von Kármán
1963
Vannevar Bush
John Robinson Pierce
1964
Charles S. Draper
Othmar H. Ammann
1965
Hugh L. Dryden
Clarence L. Johnson
Warren K. Lewis
1966
Claude E. Shannon
1967
Edwin H. Land
Igor I. Sikorsky
1968
J. Presper Eckert
Nathan M. Newmark
1969
Jack St. Clair Kilby
1970s
1970
George E. Mueller
1973
Harold E. Edgerton
Richard T. Whitcomb
1974
Rudolf Kompfner
Ralph Brazelton Peck
Abel Wolman
1975
Manson Benedict
William Hayward Pickering
Frederick E. Terman
Wernher von Braun
1976
Morris Cohen
Peter C. Goldmark
Erwin Wilhelm Müller
1979
Emmett N. Leith
Raymond D. Mindlin
Robert N. Noyce
Earl R. Parker
Simon Ramo
1980s
1982
Edward H. Heinemann
Donald L. Katz
1983
Bill Hewlett
George Low
John G. Trump
1986
Hans Wolfgang Liepmann
Tung-Yen Lin
Bernard M. Oliver
1987
Robert Byron Bird
H. Bolton Seed
Ernst Weber
1988
Daniel C. Drucker
Willis M. Hawkins
George W. Housner
1989
Harry George Drickamer
Herbert E. Grier
1990s
1990
Mildred Dresselhaus
Nick Holonyak Jr.
1991
George H. Heilmeier
Luna B. Leopold
H. Guyford Stever
1992
Calvin F. Quate
John Roy Whinnery
1993
Alfred Y. Cho
1994
Ray W. Clough
1995
Hermann A. Haus
1996
James L. Flanagan
C. Kumar N. Patel
1998
Eli Ruckenstein
1999
Kenneth N. Stevens
2000s
2000
Yuan-Cheng B. Fung
2001
Andreas Acrivos
2002
Leo Beranek
2003
John M. Prausnitz
2004
Edwin N. Lightfoot
2005
Jan D. Achenbach
2006
Robert S. Langer
2007
David J. Wineland
2008
Rudolf E. Kálmán
2009
Amnon Yariv
2010s
2010
Shu Chien
2011
John B. Goodenough
2012
Thomas Kailath
Mathematical, statistical, and computer sciences
1960s
1963
Norbert Wiener
1964
Solomon Lefschetz
H. Marston Morse
1965
Oscar Zariski
1966
John Milnor
1967
Paul Cohen
1968
Jerzy Neyman
1969
William Feller
1970s
1970
Richard Brauer
1973
John Tukey
1974
Kurt Gödel
1975
John W. Backus
Shiing-Shen Chern
George Dantzig
1976
Kurt Otto Friedrichs
Hassler Whitney
1979
Joseph L. Doob
Donald E. Knuth
1980s
1982
Marshall H. Stone
1983
Herman Goldstine
Isadore Singer
1986
Peter Lax
Antoni Zygmund
1987
Raoul Bott
Michael Freedman
1988
Ralph E. Gomory
Joseph B. Keller
1989
Samuel Karlin
Saunders Mac Lane
Donald C. Spencer
1990s
1990
George F. Carrier
Stephen Cole Kleene
John McCarthy
1991
Alberto Calderón
1992
Allen Newell
1993
Martin David Kruskal
1994
John Cocke
1995
Louis Nirenberg
1996
Richard Karp
Stephen Smale
1997
Shing-Tung Yau
1998
Cathleen Synge Morawetz
1999
Felix Browder
Ronald R. Coifman
2000s
2000
John Griggs Thompson
Karen Uhlenbeck
2001
Calyampudi R. Rao
Elias M. Stein
2002
James G. Glimm
2003
Carl R. de Boor
2004
Dennis P. Sullivan
2005
Bradley Efron
2006
Hyman Bass
2007
Leonard Kleinrock
Andrew J. Viterbi
2009
David B. Mumford
2010s
2010
Richard A. Tapia
S. R. Srinivasa Varadhan
2011
Solomon W. Golomb
Barry Mazur
2012
Alexandre Chorin
David Blackwell
2013
Michael Artin
Physical sciences
1960s
1963
Luis W. Alvarez
1964
Julian Schwinger
Harold Urey
Robert Burns Woodward
1965
John Bardeen
Peter Debye
Leon M. Lederman
William Rubey
1966
Jacob Bjerknes
Subrahmanyan Chandrasekhar
Henry Eyring
John H. Van Vleck
Vladimir K. Zworykin
1967
Jesse Beams
Francis Birch
Gregory Breit
Louis Hammett
George Kistiakowsky
1968
Paul Bartlett
Herbert Friedman
Lars Onsager
Eugene Wigner
1969
Herbert C. Brown
Wolfgang Panofsky
1970s
1970
Robert H. Dicke
Allan R. Sandage
John C. Slater
John A. Wheeler
Saul Winstein
1973
Carl Djerassi
Maurice Ewing
Arie Jan Haagen-Smit
Vladimir Haensel
Frederick Seitz
Robert Rathbun Wilson
1974
Nicolaas Bloembergen
Paul Flory
William Alfred Fowler
Linus Carl Pauling
Kenneth Sanborn Pitzer
1975
Hans A. Bethe
Joseph O. Hirschfelder
Lewis Sarett
Edgar Bright Wilson
Chien-Shiung Wu
1976
Samuel Goudsmit
Herbert S. Gutowsky
Frederick Rossini
Verner Suomi
Henry Taube
George Uhlenbeck
1979
Richard P. Feynman
Herman Mark
Edward M. Purcell
John Sinfelt
Lyman Spitzer
Victor F. Weisskopf
1980s
1982
Philip W. Anderson
Yoichiro Nambu
Edward Teller
Charles H. Townes
1983
E. Margaret Burbidge
Maurice Goldhaber
Helmut Landsberg
Walter Munk
Frederick Reines
Bruno B. Rossi
J. Robert Schrieffer
1986
Solomon J. Buchsbaum
H. Richard Crane
Herman Feshbach
Robert Hofstadter
Chen-Ning Yang
1987
Philip Abelson
Walter Elsasser
Paul C. Lauterbur
George Pake
James A. Van Allen
1988
D. Allan Bromley
Paul Ching-Wu Chu
Walter Kohn
Norman Foster Ramsey Jr.
Jack Steinberger
1989
Arnold O. Beckman
Eugene Parker
Robert Sharp
Henry Stommel
1990s
1990
Allan M. Cormack
Edwin M. McMillan
Robert Pound
Roger Revelle
1991
Arthur L. Schawlow
Ed Stone
Steven Weinberg
1992
Eugene M. Shoemaker
1993
Val Fitch
Vera Rubin
1994
Albert Overhauser
Frank Press
1995
Hans Dehmelt
Peter Goldreich
1996
Wallace S. Broecker
1997
Marshall Rosenbluth
Martin Schwarzschild
George Wetherill
1998
Don L. Anderson
John N. Bahcall
1999
James Cronin
Leo Kadanoff
2000s
2000
Willis E. Lamb
Jeremiah P. Ostriker
Gilbert F. White
2001
Marvin L. Cohen
Raymond Davis Jr.
Charles Keeling
2002
Richard Garwin
W. Jason Morgan
Edward Witten
2003
G. Brent Dalrymple
Riccardo Giacconi
2004
Robert N. Clayton
2005
Ralph A. Alpher
Lonnie Thompson
2006
Daniel Kleppner
2007
Fay Ajzenberg-Selove
Charles P. Slichter
2008
Berni Alder
James E. Gunn
2009
Yakir Aharonov
Esther M. Conwell
Warren M. Washington
2010s
2011
Sidney Drell
Sandra Faber
Sylvester James Gates
2012
Burton Richter
Sean C. Solomon
2014
Shirley Ann Jackson
Authority control
International
• FAST
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Germany
• Israel
• United States
• Sweden
• Latvia
• Japan
• Czech Republic
• Australia
• Croatia
• Netherlands
• Poland
Academics
• CiNii
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
• Trove
Other
• SNAC
• IdRef
| Wikipedia |
\begin{definition}[Definition:Symmetric Matrix]
Let $\mathbf A$ be a square matrix over a set $S$.
$\mathbf A$ is '''symmetric''' {{iff}}:
:$\mathbf A = \mathbf A^\intercal$
where $\mathbf A^\intercal$ is the transpose of $\mathbf A$.
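For example (an illustrative instance added here, not part of the original entry), the real square matrix:
:$\mathbf A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 4 \\ 3 & 4 & 6 \end{pmatrix}$
satisfies $\mathbf A = \mathbf A^\intercal$, and so is '''symmetric'''.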
\end{definition} | ProofWiki |
Chemistry study guide.
What is matter?
Anything that has mass and takes up space.
Matter vs. Not Matter - give 3 examples of things that are matter and 3 examples of things that are not matter.
Matter: Desk, chair, pencil
Non-Matter: Hearing, love, sight
Describe the atoms in a solid, liquid, and gas.
The atoms in a solid are tightly packed together and don't move much, if at all.
The atoms in a liquid are fairly tightly packed together but move freely past each other.
The atoms in a gas are loosely packed together and move freely past each other.
Atom, element, molecule, compound, mixture - define each and give 1 example of each.
An atom is a single particle of a single element. (An atom of gold.)
An element is the most basic form of matter; multiple atoms of the same element chemically bonded together still count as that element. (A large gold bar.)
A molecule is two or more atoms chemically bonded together. (Salt: NaCl)
A compound is two or more different elements chemically bonded together. (H2O)
A mixture is two or more substances that are not chemically bonded together. (Tomato soup)
How do you write a formula for a compound?
You write each element's symbol followed by a number showing how many atoms of it are in the compound, and then repeat the process for all elements in the compound. (C6H12O6)
How are atoms and molecules related?
Atoms make up molecules.
Explain the difference between elements & compounds.
An element contains only one type of atom and can even be a single atom, while a compound must contain two or more different types of atoms bonded together.
Explain the difference between compounds and mixtures.
Compounds are chemically bonded while mixtures, as a whole, are not.
Types of Mixtures - name and define the two types of mixtures.
Homogenous mixture: A mixture that is uniform and looks the same throughout.
Heterogeneous mixture: A mixture that is not uniform; its components are not evenly distributed throughout.
Classify each picture below (particle diagrams not reproduced here) as being either:
A: Compound
B: Mixture of elements
C: Mixture of elements and compounds
D: Mixture of compounds
E: Element
F: Element
Describe the structure of an atom. Where are the protons, neutrons, and electrons located?
The protons and neutrons are located in the nucleus while the electrons orbit around the nucleus.
What type of charge do protons, neutrons, and electrons have?
Protons (+)
Neutrons (None)
Electrons (-)
Use your periodic table to fill in the table below:
Element | Symbol | Atomic number | Mass number | Protons | Neutrons | Electrons
Aluminum | Al | 13 | 27 | 13 | 14 | 13
Zinc | Zn | 30 | 65 | 30 | 35 | 30
Mass vs. Weight -What is the difference between mass and weight?
Mass is constant and never changes while weight differs based on gravity.
How much liquid is shown in the picture below (image not reproduced here)? Make sure to use the appropriate units.
You are walking along a rocky path and you see a metallic, gold-colored rock. You estimate the mass to be 10 g. and the volume to be 2 cm3. The density of gold is 19.32 g/cm3. Is your rock gold? Explain why or why not.
No, your rock isn't gold because its density is much lower than that of gold.
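Worked check (added for clarity, using the numbers given in the question): density = mass ÷ volume = 10 g ÷ 2 cm3 = 5 g/cm3, which is far below 19.32 g/cm3, so the rock cannot be gold.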
| CommonCrawl |
Joint source and relay precoding for generally correlated MIMO with full and partial CSIT
Nguyen A. Vinh (ORCID: orcid.org/0000-0003-2048-3153)1,
Nguyen N. Tran1 &
Nguyen H. Phuong1
EURASIP Journal on Wireless Communications and Networking, volume 2017, Article number: 43 (2017)
In this paper, we jointly design linear source and relay precoders for two-hop MIMO relaying. The involved channels encounter spatially correlated fading, the source data symbols are mutually correlated, and the noises are colored not only at the destination but also at the relay. Two different scenarios of channel state information (CSI) are assumed to be available at the transmitters: full CSI of both hops (the full CSIT) and full CSI of the source-relay hop only plus the covariance information of the relay-destination hop (the partial CSIT). First, with the full CSIT, we derive optimal precoders by maximizing the instantaneous mutual information (MI). Secondly, with the partial CSIT, we derive suboptimal precoders by maximizing the average MI. For both CSIT cases, we propose an iterative algorithm that performs power allocation iteratively and alternately between the source antennas and the relay antennas. Its simplified version, in which the power allocation is performed separately between the source antennas and the relay antennas, is also developed. Simulation results show that our proposed precoding schemes with the full CSIT provide significantly higher capacity than the existing schemes. Besides, the proposed schemes with the partial CSIT also perform well, especially when the channels are spatially correlated at the transmit sides and at medium-to-high signal-to-noise ratios (SNRs), while they require much lower computational complexity and less feedback overhead.
Relaying, in which signal transmission from the source to the destination is done with the aid of one or multiple intermediate relays, has received much attention due to its ability to enhance transmission reliability and extend coverage for wireless communication systems [1–6]. There are different relaying strategies, which are categorized based on how relays process the received signals from the source, typically including amplify-and-forward (AF) and decode-and-forward (DF). The AF scheme is also known as non-regenerative relaying, and the DF scheme is called regenerative relaying [5, 6]. In regenerative relaying, a relay decodes the received signal and forwards its encoded version to the destination. In non-regenerative relaying, a relay simply amplifies and forwards the received signal to the destination. In general, non-regenerative relaying introduces shorter delay and is less complex than regenerative relaying. Besides, multiple-input multiple-output (MIMO) techniques are well-known to provide spatial diversity and multiplexing gains to wireless links [7]. Thus, it is straightforward to find many studies on non-regenerative MIMO relay systems, e.g., [1–6].
When the channel state information (CSI) is available to transmit nodes (CSIT), precoding is applicable to non-regenerative MIMO relay systems for further system performance improvement. In such a precoding scheme, the covariance matrices of the transmitted signals, or equivalently the source and relay precoders, are designed to optimize performance metrics such as the mutual information (MI) between the source and the destination or the mean-squared error (MSE) of the detected symbols. Relay precoder designs were derived for maximizing the capacity of two-hop relay systems in [1, 2] with instantaneous CSI of both links known at the relay. These designs were extended to joint source and relay precoding schemes in [3, 4] when the source has instantaneous CSI of both links as well. In [4], the optimal structure of the source and relay precoders that decouples the compound relaying channel into independent sub-channels was found, and an iterative algorithm to properly allocate power over these sub-channels was also developed. The result of [4] was successfully extended to the multicarrier case in [8].
To provide the transmit nodes with the instantaneous CSI of all links for the precoder designs of [1–6, 8], the relay and the destination need to feed back instantaneous CSI of the source-relay and the relay-destination links to the source and the relay through feedback channels. This requires a large amount of signaling overhead. Furthermore, it is infeasible to obtain exact CSI of the relay-destination link at transmitters in a situation where the destination moves rapidly. Besides, in practical communication systems, the rate of the feedback channels is commonly limited. Therefore, the assumption that partial information such as the mean and covariance of the relay-destination channel is available at the transmitters might be more reasonable. As such, partial CSI is considered in the works [5, 6, 9–18]. When having full CSI of the source-relay link and covariance information of the relay-destination link at the relay, relay precoders are designed for maximizing the MI [5, 6] and for minimizing the MSE [9, 10] of two-hop relay systems having a transmit-side spatially correlated relay-destination channel. To improve the system performance, joint source and relay precoders are proposed in [11] when the source also has full CSI of the source-relay link and partial information of the relay-destination link. In [12], an asymptotic MI for large-sized multi-hop relay systems having a large number of antennas is derived. The equi-powered source and relay precoder structures are also obtained with covariance information to maximize this asymptotic MI. Robust joint designs of linear relay precoders and destination equalizers for MIMO relay systems in the presence of noisy or outdated CSI (i.e., imperfect CSI) can be found in [13–18].
In practice, communication systems often encounter some interference. Such interference causes the receiver noise, which would otherwise be white and dominated by thermal noise, to become colored [19–25], and degrades the communication performance. Co-channel interference (CCI), which comes from nearby interferers using the same frequency as the receiver, is a common interference type. In cellular mobile networks, the CCI comes from frequency reuse; for example, a receiver at the edge of a cell may encounter undesired signals that come from transmitters in neighbour cells using the same frequency band. Another example is when a receiver in the macro network is within the coverage range of a femtocell; it may then be impacted by CCI from undesired transmitters in this femtocell [26]. The aforementioned works on the precoder design [1–6, 8–18] assumed that the receiver noise is white and the source signals are independent. The case of colored noise has been taken into account in the training signal designs for MIMO point-to-point systems in [21, 22, 27] and MIMO relay systems in [23–25]. Instead of colored noise, the relay precoder that maximizes the average capacity of a two-hop relay system where the destination lies close to some interferers was designed based on covariance information of the interferers-destination channels and the relay-destination channel in [26]. The papers [28–31] considered general MIMO relay systems having spatially correlated channels, colored noises, and mutually correlated source signals. Note that mutually correlated source signals arise from encoding operations on the bit stream including channel coding, modulation, and space-time coding at a transmitter [7, 32]. In [28, 29], the optimal structure of the relay precoder that maximizes the MI of the generally correlated two-hop MIMO relay systems was obtained by using full CSI of the two links and covariance matrices of the correlated source signals and the colored noises known at the relay. The papers [30, 31] are devoted to the case of generally correlated multi-hop MIMO relay systems. With the full CSI of the hops and the signal and noise covariance matrices at the transmitters (the full CSIT), the source and relay precoders were designed asymptotically by either maximizing the individual MI of each hop or minimizing the individual soft mean-squared error (MSE) of estimated signals of each hop.
In this paper, we investigate generally correlated two-hop MIMO relay systems with mutually correlated source symbols, spatially correlated channels and colored noises. For the system capacity maximization, we propose joint designs of source and relay precoders in two cases of the full CSIT (like [28–31]) and the partial CSIT. The partial CSIT denotes full CSI of the source-relay link and covariance information of the relay-destination link and the source signal and noise covariance matrices known to the transmitters. First, with the full CSIT, the optimal structure of the source and relay precoders is derived by maximizing the instantaneous MI between the source and the destination. By the obtained source and relay precoders and the destination equalizer, the compound relaying channel is shown to be decomposed into parallel sub-channels. We design an iterative algorithm to perform power allocation iteratively and alternatively between the source antennas and the relay antennas. To reduce the computational complexity of the iterative algorithm, we develop its simplified version in which power allocation is carried out separately between the source antennas and the relay antennas. Next, with the partial CSIT, the optimal structure of the source and relay precoders is also derived by maximizing an upper bound of the average MI between the source and the destination. Again, an iterative algorithm and a simplified algorithm for source and relay power allocation are developed as well.
Overall, the following are key contributions of this paper:
This paper extends the relay precoding with the full CSIT in [28, 29] to the joint source and relay precoding with the full and partial CSIT.
This paper develops the simplified precoding strategy based on the full CSIT in [31] to the iterative and simplified strategies based on the full CSIT as well as the partial CSIT.
This paper is a generalization of [4] from the joint design of source and relay precoding with the full CSIT for the system case of white i.i.d. channels, white noises, white source symbols to those with the full and partial CSIT for the system case of spatially correlated channels, colored noises, correlated source symbols.
The proposed joint precoding schemes in this paper include the relay precoding scheme with the partial CSIT for the system case of a transmit-side spatially correlated channel, white noises, and independent source symbols in [5, 6] as a special case.
The proposed joint precoding schemes in this paper are shown by numerical simulations to provide higher capacity than the existing schemes in [4] and [28, 29, 31].
The rest of the paper is organized as follows. The system model and the precoding design problem formulation are introduced in Section 2. The derivation of the joint designs of source and relay precoders with the full CSIT and those with the partial CSIT are presented in Section 3 and in Section 4, respectively. The performance of the proposed joint precoder designs is demonstrated by numerical simulations in Section 5. Finally, some conclusions are drawn in Section 6.
Notation: A boldface upper case letter is used for a matrix, and a boldface lower case letter for a vector. An \( N\times N \) identity matrix is denoted by \( \mathbf{I}_{N} \). Sometimes, we omit the index \( N \) when the identity matrix size is clear. We use \( (\cdot)^{H}, (\cdot)^{-1}, |\cdot|, \text{tr}(\cdot) \) for the conjugate transpose, the pseudo-inverse, the determinant, and the trace of a matrix, respectively. For a matrix \( \mathbf{A} \), the operator \( \text{vec}(\mathbf{A}) \) vectorizes \( \mathbf{A} \) by stacking the columns of \( \mathbf{A} \) into a column vector. The notations \( \mathbf{A}\succeq 0, \mathbf{B}\succ 0 \) imply that the matrices \( \mathbf{A}, \mathbf{B} \) are, respectively, positive semi-definite and positive definite. For a scalar \( z \), \( [z]^{+} \) is a short form of \( \max(z,0) \). \( \mathbf{H} \sim \mathcal{CN}(\mathbf{Z},\mathbf{\Theta} \otimes \mathbf{\Omega}) \) denotes a matrix-variate complex Gaussian distribution with mean \( \mathrm{E}(\mathbf{H})=\mathbf{Z} \) and covariance \( \mathrm{E}\big(\text{vec}((\mathbf{H}-\mathbf{Z})^{T})\,\text{vec}((\mathbf{H}-\mathbf{Z})^{T})^{H}\big)=\mathbf{\Theta} \otimes \mathbf{\Omega} \) [33].
System model and design problem formulation
We consider a non-regenerative three-node two-hop MIMO relay system without the direct link between the source and the destination, as depicted in Fig. 1. The source, relay and destination have M,K and N antennas, respectively. The half-duplex mode is assumed to be how the system operates. Each signal transmission from the source to the destination takes two time slots to complete.
A non-regenerative three-node two-hop MIMO relay system
In the first time slot, the source node multiplies the signal vector \( \mathbf{x} \in \mathbb{C}^{M\times 1} \) by a source precoding matrix \( \mathbf{B} \in \mathbb{C}^{M \times M} \). Here, the signal \( \mathbf{x} \) contains mutually correlated data symbols with covariance matrix \( \mathrm{E}(\mathbf{x}\mathbf{x}^{H})=\mathbf{R}_{x}=\mathbf{\Psi}_{x} \) known to the three terminals, since \( \mathbf{x} \) arises from encoding operations on the baseband signals [7]. The matrix \( \mathbf{\Psi}_{x}\succeq 0 \) denotes a correlation matrix with unit elements on its diagonal. The source precoding matrix \( \mathbf{B} \) has the power constraint \( \text{tr}(\mathbf{B}\mathbf{R}_{x}\mathbf{B}^{H})\leq p_{1} \), where \( p_{1} \) is the allowed maximum transmit power at the source. Then, the resulting signal is transmitted to the relay node through the source-relay channel \( \mathbf{H}_{1} \in \mathbb{C}^{M \times K} \). The received signal at the relay \( \mathbf{y}_{1} \in \mathbb{C}^{K \times 1} \) is given by \( \mathbf{y}_{1}=\mathbf{H}_{1}\mathbf{B}\mathbf{x}+\mathbf{n}_{1} \), where \( \mathbf{n}_{1} \in \mathbb{C}^{K \times 1} \) is the colored Gaussian noise vector at the relay with zero mean and covariance matrix \( \mathrm{E}\left(\mathbf{n}_{1} \mathbf{n}_{1}^{H}\right) =\mathbf{R}_{n_{1}} = \sigma_{1}^{2} \mathbf{\Psi}_{n_{1}} \), and \( \mathbf{\Psi}_{n_{1}} \succ 0 \) is a correlation matrix with \( \text{tr}(\mathbf{\Psi}_{n_{1}}) = K \).
In the second time slot, at the relay, the received signal \( \mathbf{y}_{1} \) is multiplied by a relay precoding matrix \( \mathbf{F} \in \mathbb{C}^{K \times K} \) satisfying the power constraint \( \text{tr}\Big(\mathbf{F} \left(\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x} \mathbf{B}^{H} \mathbf{H}_{1}^{H} + \mathbf{R}_{n_{1}}\right) \mathbf{F}^{H}\Big) \leq p_{2} \), where \( p_{2} \) is the allowed maximum transmit power at the relay. After that, the resulting signal \( \mathbf{F}\mathbf{y}_{1} \) is forwarded to the destination through the relay-destination channel \( \mathbf{H}_{2} \in \mathbb{C}^{K \times N} \). Therefore, the received signal at the destination \( \mathbf{y} \in \mathbb{C}^{N \times 1} \) is
$$ \mathbf{y} = \mathbf{H}_{2} \mathbf{F} \mathbf{y}_{1} + \mathbf{n}_{2} = \mathbf{H}_{2} \mathbf{F} \mathbf{H}_{1} \mathbf{B} \mathbf{x} + \mathbf{H}_{2} \mathbf{F} \mathbf{n}_{1} + \mathbf{n}_{2}, $$
where \( \mathbf{n}_{2} \in \mathbb{C}^{N \times 1} \) is the colored Gaussian noise vector at the destination with zero mean and covariance matrix \( \mathrm{E}(\mathbf{n}_{2} \mathbf{n}_{2}^{H}) =\mathbf{R}_{n_{2}} = \sigma_{2}^{2} \mathbf{\Psi}_{n_{2}} \), and \( \mathbf{\Psi}_{n_{2}} \succ 0 \) is a correlation matrix with \( \text{tr}(\mathbf{\Psi}_{n_{2}}) = N \). The channel matrices \( \mathbf{H}_{1},\mathbf{H}_{2} \) are generated based on the Kronecker model [34] as \( \mathbf{H}_{i}=\mathbf{\Omega}_{i}^{1/2} \mathbf{H}_{w,i}\mathbf{\Theta}_{i}^{1/2}, i = 1,2, \) where the elements of \( \mathbf{H}_{w,i} \) are i.i.d. zero-mean and unit-variance circularly symmetric complex Gaussian random variables, and \( \mathbf{\Omega}_{i} \) and \( \mathbf{\Theta}_{i} \) are positive definite transmit and receive covariance matrices of \( \mathbf{H}_{i} \), and thereby, \( \mathbf{H}_{i} \sim \mathcal{CN}(\mathbf{0},\mathbf{\Theta}_{i} \otimes \mathbf{\Omega}_{i}).\)
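As a concrete illustration of the Kronecker model above, the following Python/NumPy sketch draws one realization of a spatially correlated channel and of a colored noise vector. It is only a minimal sketch: the exponential correlation profiles, the antenna numbers, and the receive-by-transmit orientation of the generated matrices (chosen here so that the matrix products in (1) are dimensionally consistent) are illustrative assumptions, not values taken from the paper.

import numpy as np

def exp_corr(n, rho):
    # Assumed exponential correlation profile: [R]_{ij} = rho^{|i-j|}
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def sqrtm_psd(R):
    # Hermitian square root of a positive semi-definite matrix via eigendecomposition
    w, V = np.linalg.eigh(R)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def kron_channel(n_rx, n_tx, R_rx, R_tx, rng):
    # Kronecker model: H = R_rx^{1/2} H_w R_tx^{1/2}, with H_w having i.i.d. CN(0, 1) entries
    Hw = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)
    return sqrtm_psd(R_rx) @ Hw @ sqrtm_psd(R_tx)

rng = np.random.default_rng(0)
M, K, N = 4, 4, 4                                                  # assumed antenna numbers
H1 = kron_channel(K, M, exp_corr(K, 0.5), exp_corr(M, 0.7), rng)   # source-relay channel realization
H2 = kron_channel(N, K, exp_corr(N, 0.5), exp_corr(K, 0.7), rng)   # relay-destination channel realization
Rn1 = 0.1 * exp_corr(K, 0.4)                                       # assumed colored relay-noise covariance
n1 = sqrtm_psd(Rn1) @ ((rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2.0))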
We assume that \(\phantom {\dot {i}\!}\text {rank}\left (\mathbf {B} \mathbf {R}_{x}^{\frac {1}{2}}\right)= \text {rank}\left (\mathbf {F}\mathbf {R}_{n_{1}}^{\frac {1}{2}}\right) = L \) and \(\phantom {\dot {i}\!} L \leq \min (r_{1},r_{2}), r_{1} = \text {rank}\left (\mathbf {R}_{n_{1}}^{-\frac {1}{2}} \mathbf {H}_{1}\right),\) \( r_{2} = \text {rank} \left (\mathbf {R}_{n_{2}}^{-\frac {1}{2}} \mathbf {H}_{2}\right) \) such that at most L independent substreams of source symbols is active in each transmission. Like [21–25, 27–31], we assume that the three terminals know \(\phantom {\dot {i}\!}\mathbf {R}_{n_{1}} \) and \( \mathbf {R}_{n_{2}}.\) An interesting example that relates to how to obtain these noise covariance matrices is given in [26] where the problem of designing the relay precoder in a three-node relay system in the presence of some interferers near the destination by using the covariance information of the interferers-destination channels is addressed. Specifically, the received signal at the destination is \(\mathbf {y} = \mathbf {H}_{2} \mathbf {F} \mathbf {H}_{1} \mathbf {x} + \mathbf {H}_{2} \mathbf {F} \mathbf {n}_{w,1} + \sum _{j = 1}^{J}{\mathbf {H}_{I_{j}} \mathbf {x}_{I_{j}}} + \mathbf {n}_{w,2},\) where n w,1,n w,2 are the relay and destination white Gaussian noise vectors with covariance matrices \( \mathbf {R}_{n_{w,1}}= \sigma _{1}^{2} \mathbf {I}_{K},\mathbf {R}_{n_{w,2}}= \sigma _{2}^{2} \mathbf {I}_{N}. \) It is assumed that the destination knows covariance matrices \(\mathbf {\Theta }_{I_{j}}\) of the interferers-destination channels \( \mathbf {H}_{I_{j}}= \mathbf {H}_{w,j}\mathbf {\Theta }_{I_{j}}^{1/2}\) by the training signals that are friendly shared by the interferers I j , and then the relay also knows \( \mathbf {\Theta }_{I_{j}}\) by feedback from the destination. It is valid to equivalently view \( \mathbf {n}_{2} \triangleq \sum _{j = 1}^{J}{\mathbf {H}_{I_{j}} \mathbf {x}_{I_{j}}} + \mathbf {n}_{w,2} \) as the colored noise vector at the destination. The destination can easily compute the destination-colored noise covariance matrix \(\mathbf {R}_{n_{2}} = \mathrm {E}\left (\mathbf {n}_{2} \mathbf {n}_{2}^{H}\right)\) by the covariance matrices \( \mathbf {\Theta }_{I_{j}} \) and \(\mathbf {R}_{n_{w,2}}.\) The relay has \( \mathbf {R}_{n_{2}} \) by feedback from the destination. The source also has \( \mathbf {R}_{n_{2}} \) by feedback from the relay. In a similar situation to the destination where the relay lies near some interferers such that \(\mathbf {y} = \mathbf {H}_{2} \mathbf {F} \mathbf {H}_{1} \mathbf {x} + \mathbf {H}_{2} \mathbf {F} \left (\sum _{j = 1}^{J}{\mathbf {H}_{I_{j}}^{'} \mathbf {x}_{I_{j}}^{'}} + \mathbf {n}_{w,1}\right) + \sum _{j = 1}^{J}{\mathbf {H}_{I_{j}} \mathbf {x}_{I_{j}}} + \mathbf {n}_{w,2},\) the relay can also have the covariance matrix \(\phantom {\dot {i}\!}\mathbf {R}_{n_{1}} = \mathrm {E}\left (\mathbf {n}_{1} \mathbf {n}_{1}^{H}\right)\) of the relay colored noise \( \mathbf {n}_{1} \triangleq \sum _{j = 1}^{J}{\mathbf {H}_{I_{j}}^{'} \mathbf {x}_{I_{j}}^{'}} + \mathbf {n}_{w,1} \) by covariance matrices of the interferers-relay channels and \( \mathbf {R}_{n_{w,1}}.\) Then, \(\phantom {\dot {i}\!}\mathbf {R}_{n_{1}} \) is fed back to the source, and fed forward to the destination by the relay. 
In other words, the three nodes all know \( \mathbf{R}_{n_{1}} \) besides \( \mathbf{R}_{n_{2}} \). Besides \( \mathbf{R}_{x}, \mathbf{R}_{n_{1}} \), and \( \mathbf{R}_{n_{2}} \), the destination is also assumed to have \( \mathbf{H}_{2}\mathbf{F}\mathbf{H}_{1}\mathbf{B} \) through a channel estimation method (e.g., [35–37]). The instantaneous MI between the source and the destination [31] is given by
$$ \begin{aligned} \mathcal{I}(\mathbf{B},\mathbf{F}) &= \frac{1}{2} \log_{2} \left|\mathbf{I}_{M} + \mathbf{R}_{x}^{\frac{H}{2}}\mathbf{B}^{H} \mathbf{H}_{1}^{H} \mathbf{F}^{H} \mathbf{H}_{2}^{H}\right.\\ &\quad\times\left.\left(\mathbf{H}_{2} \mathbf{F} \mathbf{R}_{n_{1}}\mathbf{F}^{H} \mathbf{H}_{2}^{H} + \mathbf{R}_{n_{2}} \right)^{-1} \mathbf{H}_{2} \mathbf{F} \mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}} \right|, \end{aligned} $$
where a factor 1/2 is due to the transmission duration of two time slots.
With the use of the linear MMSE equalizer [38]
$$ \begin{aligned} \mathbf{G} &= \mathbf{R}_{x} \mathbf{B}^{H} \mathbf{H}_{1}^{H} \mathbf{F}^{H} \mathbf{H}_{2}^{H}\left(\mathbf{H}_{2} \mathbf{F} \mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}\mathbf{B}^{H} \mathbf{H}_{1}^{H} \mathbf{F}^{H} \mathbf{H}_{2}^{H}\right.\\ &\quad\left.+ \mathbf{H}_{2} \mathbf{F} \mathbf{R}_{n_{1}} \mathbf{F}^{H} \mathbf{H}_{2}^{H} + \mathbf{R}_{n_{2}} \right)^{-1} \end{aligned} $$
at the destination for signal detection as \( \hat {\mathbf {x}} = \mathbf {G} \mathbf {y},\) the source-destination MI has an interesting relation with the MSE matrix \( \mathbf {M} \triangleq \mathrm {E}\left ((\hat {\mathbf {x}} - \mathbf {x})(\hat {\mathbf {x}} - \mathbf {x})^{H}\right) \) by [2]:
$$ \mathcal{I}(\mathbf{B},\mathbf{F}) = - \frac{1}{2} \log_{2}{|\mathbf{M}|}, $$
$$ \begin{aligned} \mathbf{M} &=\left(\mathbf{I}_{M} + \mathbf{R}_{x}^{\frac{H}{2}}\mathbf{B}^{H} \mathbf{H}_{1}^{H} \mathbf{F}^{H} \mathbf{H}_{2}^{H}\right.\\ &\quad\times\left.\left(\mathbf{H}_{2} \mathbf{F} \mathbf{R}_{n_{1}}\mathbf{F}^{H} \mathbf{H}_{2}^{H} + \mathbf{R}_{n_{2}} \right)^{-1} \mathbf{H}_{2} \mathbf{F} \mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)^{-1}. \end{aligned} $$
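The consistency of (2)–(5) is easy to check numerically. The short Python sketch below builds the effective signal and noise terms for arbitrary (randomly chosen) precoders and channels, forms the MMSE equalizer of (3) and its error covariance, and verifies that the MI of (2) equals the right-hand side of (4); all matrices here are placeholders chosen only so that the identity can be tested, and the receive-by-transmit channel orientation is an assumption of the sketch.

import numpy as np

rng = np.random.default_rng(1)
Ms, K, N = 3, 4, 5                                        # assumed antenna numbers
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2.0)

H1, H2 = crandn(K, Ms), crandn(N, K)                      # placeholder channels
B, F = crandn(Ms, Ms), crandn(K, K)                       # arbitrary precoders, only for the check
C = crandn(Ms, Ms)
Rx = C @ C.conj().T + np.eye(Ms)                          # placeholder positive definite covariances
Rn1, Rn2 = 0.2 * np.eye(K), 0.3 * np.eye(N)

A = H2 @ F @ H1 @ B                                       # overall signal matrix
Rn = H2 @ F @ Rn1 @ F.conj().T @ H2.conj().T + Rn2        # overall noise covariance
# MI of (2), using the determinant identity |I + X Y| = |I + Y X|
MI = 0.5 * np.linalg.slogdet(np.eye(N) + np.linalg.solve(Rn, A @ Rx @ A.conj().T))[1] / np.log(2.0)
# MMSE equalizer of (3) and its error covariance E = Rx - G A Rx
G = Rx @ A.conj().T @ np.linalg.inv(A @ Rx @ A.conj().T + Rn)
E = Rx - G @ A @ Rx
# The normalized MSE matrix M of (5) satisfies |M| = |E| / |Rx|, so (4) becomes:
MI_from_mse = -0.5 * (np.linalg.slogdet(E)[1] - np.linalg.slogdet(Rx)[1]) / np.log(2.0)
print(np.allclose(MI, MI_from_mse))                       # expected output: True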
In the following sections, we address the problem of jointly designing the source precoder B and the relay precoder F for the three-node relay system in the absence of the direct link in Fig. 1. In practice, this system case commonly occurs when the source is so far from the destination that they cannot directly exchange information, and thus the direct link is negligible. This case was also considered in [1–6, 8–11]. It is noteworthy that user cooperation among all involved nodes is well known to yield valuable spatial diversity, and thus to enhance capacity and mitigate detection errors. However, these benefits only materialize in a situation where the distance between the source and the destination is short enough for them to communicate directly with each other through the direct link. An efficient way to tackle the problem of serious loss on the direct link is the additional use of multiple cascaded relays to form a multi-hop relay system. Enhancing the system capacity with precoding schemes that exploit available CSIT was reported in [12, 30, 31]. We propose the iterative and simplified methods of jointly optimizing B and F under the mutual-information-maximizing criterion in different CSIT scenarios in Section 3 and Section 4. We also propose to extend the simplified design with the full CSIT to the case of multi-hop systems in Section 3.
Joint source and relay precoding with the full CSIT
Optimal structures for source and relay precoders
In this section, we assume that the source and the relay have \( \mathbf{H}_{1}, \mathbf{H}_{2}, \mathbf{R}_{x}, \mathbf{R}_{n_{1}} \), and \( \mathbf{R}_{n_{2}} \) (the full CSIT). In practice, the relay can estimate \( \mathbf{H}_{1} \) by the training signals sent from the source, and the destination can estimate \( \mathbf{H}_{2} \) by the training signals sent from the relay. The relay has \( \mathbf{H}_{2} \) by feedback from the destination, and the source has \( \mathbf{H}_{1} \) and \( \mathbf{H}_{2} \) by feedback from the relay. Note that feeding \( \mathbf{H}_{2} \) back from the destination to the source is unusual due to the poor condition of the source-destination direct link [39], which is also assumed in this paper. With the full CSIT, we jointly design \( \mathbf{B} \) and \( \mathbf{F} \) to maximize \( \mathcal{I}(\mathbf{B},\mathbf{F}) \) under the source and relay power constraints. This design issue can be formulated as:
$${} {{\begin{aligned} \max_{\mathbf{B},\mathbf{F}} & \quad \mathcal{I}(\mathbf{B},\mathbf{F}) \\ \mathrm{s.t.} & \quad \text{tr}\left(\mathbf{B} \mathbf{R}_{x} \mathbf{B}^{H}\right) \leq p_{1}, \\ & \quad \text{tr}\left(\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}} \mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x} \mathbf{B}^{H} \mathbf{H}_{1}^{H} \mathbf{R}_{n_{1}}^{-\frac{H}{2}} + \mathbf{I}_{K}\right) \mathbf{R}_{n_{1}}^{\frac{H}{2}}\mathbf{F}^{H}\right) \leq p_{2}. \end{aligned}}} $$
Let us define the following singular value decompositions (SVDs):
$$ \mathbf{R}_{n_{1}}^{-\frac{1}{2}} \mathbf{H}_{1} = \mathbf{U}_{1} \mathbf{\Lambda}_{1}^{\frac{1}{2}} \mathbf{V}_{1}^{H}, $$
$$ \mathbf{R}_{n_{2}}^{-\frac{1}{2}} \mathbf{H}_{2} = \mathbf{U}_{2} \mathbf{\Lambda}_{2}^{\frac{1}{2}} \mathbf{V}_{2}^{H}, $$
where \( \mathbf{U}_{1}, \mathbf{U}_{2}, \mathbf{V}_{1} \), and \( \mathbf{V}_{2} \) are unitary matrices, and \( \mathbf{\Lambda}_{1} \) and \( \mathbf{\Lambda}_{2} \) are diagonal matrices of non-negative eigenvalues in descending order. In order to attain a maximum MI in Problem (6), \( \mathbf{B} \) and \( \mathbf{F} \) are optimally structured as:
$$ \mathbf{B} = \mathbf{V}_{1} \mathbf{\Lambda}_{b}^{\frac{1}{2}} \mathbf{R}_{x}^{-\frac{1}{2}}, $$
$$ \mathbf{F} = \mathbf{V}_{2} \mathbf{\Lambda}_{f}^{\frac{1}{2}} \mathbf{U}_{1}^{H} \mathbf{R}_{n_{1}}^{-\frac{1}{2}}, $$
where \( \mathbf{\Lambda}_{b} \) and \( \mathbf{\Lambda}_{f} \) are \( M\times M \) and \( K\times K \) diagonal matrices of non-negative entries with up to \( L \) positive elements.
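To make these structures concrete, the sketch below shows how one might assemble B and F from the SVDs of the noise-whitened channels once the diagonal loading matrices Λ_b and Λ_f have been chosen (for instance by the power-allocation algorithms discussed in this paper, which are not reproduced here). The helper functions, the receive-by-transmit channel orientation, and the uniform loading mentioned in the usage comment are illustrative assumptions only.

import numpy as np

def inv_sqrtm(R):
    # R^{-1/2} for a Hermitian positive definite matrix R
    w, V = np.linalg.eigh(R)
    return (V / np.sqrt(w)) @ V.conj().T

def precoders_from_svd(H1, H2, Rx, Rn1, Rn2, lam_b, lam_f):
    # H1, H2 are taken here in receive-by-transmit orientation (K x M and N x K)
    # SVDs of the noise-whitened channels, as in (7)-(8)
    U1, s1, V1h = np.linalg.svd(inv_sqrtm(Rn1) @ H1)   # Rn1^{-1/2} H1 = U1 diag(s1) V1^H
    U2, s2, V2h = np.linalg.svd(inv_sqrtm(Rn2) @ H2)   # Rn2^{-1/2} H2 = U2 diag(s2) V2^H
    # Optimal structures (9)-(10) for given eigenmode loadings lam_b (length M) and lam_f (length K)
    B = V1h.conj().T @ np.diag(np.sqrt(lam_b)) @ inv_sqrtm(Rx)
    F = V2h.conj().T @ np.diag(np.sqrt(lam_f)) @ U1.conj().T @ inv_sqrtm(Rn1)
    return B, F

# Usage note (illustrative): since B Rx B^H = V1 diag(lam_b) V1^H, choosing lam_b with
# sum(lam_b) <= p1 (e.g., lam_b = np.full(M, p1 / M)) satisfies the source power constraint.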
Applying the matrix inversion lemma [40] \( (\mathbf{A}+\mathbf{B}\mathbf{C}\mathbf{D})^{-1}=\mathbf{A}^{-1}-\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}\mathbf{A}^{-1}\mathbf{B}+\mathbf{C}^{-1})^{-1}\mathbf{D}\mathbf{A}^{-1} \) and \( \mathbf{B}^{H}(\mathbf{B}\mathbf{C}\mathbf{B}^{H}+\mathbf{I})^{-1}\mathbf{B}=\mathbf{C}^{-1}-(\mathbf{C}\mathbf{B}^{H}\mathbf{B}\mathbf{C}+\mathbf{C})^{-1} \) to (5) leads to
$${} {{\begin{aligned} \mathbf{M} &= \mathbf{I}_{M} - \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)^{H} \left(\mathbf{R}_{n_{2}}^{-\frac{1}{2}} \mathbf{H}_{2}\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}}\right)^{H} \left[\left({\vphantom{\left(\mathbf{R}_{n_{2}}^{-\frac{1}{2}} \mathbf{H}_{2}\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}}\right)^{H} + \mathbf{I}_{N}}}\mathbf{R}_{n_{2}}^{-\frac{1}{2}} \mathbf{H}_{2}\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}}\right)\right.\\ &\quad\times\left[\left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right) \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)^{H}+\mathbf{I}_{K}\right]\\ &\quad\times\left. \left(\mathbf{R}_{n_{2}}^{-\frac{1}{2}} \mathbf{H}_{2}\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}}\right)^{H} + \mathbf{I}_{N} \right]^{-1} \left(\mathbf{R}_{n_{2}}^{-\frac{1}{2}} \mathbf{H}_{2}\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}}\right) \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right). \end{aligned}}} $$
Let us define
$$ \tilde{\mathbf{H}}_{1} = \mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}, $$
$$ \tilde{\mathbf{H}}_{2} =\mathbf{R}_{n_{2}}^{-\frac{1}{2}} \mathbf{H}_{2}\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}}. $$
M in (11) is rewritten more compactly as
$${} \mathbf{M} = \mathbf{I}_{M} - \tilde{\mathbf{H}}_{1}^{H} \tilde{\mathbf{H}}_{2}^{H} \left(\tilde{\mathbf{H}}_{2} \left(\tilde{\mathbf{H}}_{1}\tilde{\mathbf{H}}_{1}^{H}+\mathbf{I}_{K} \right) \tilde{\mathbf{H}}_{2}^{H} +\mathbf{I}_{N} \right)^{-1} \tilde{\mathbf{H}}_{2} \tilde{\mathbf{H}}_{1}. $$
Let us define the eigenvalue decompositions (EDs) as
$$\tilde{\mathbf{H}}_{1} \tilde{\mathbf{H}}_{1}^{H} = \mathbf{U}_{\tilde{H}_{1}} \mathbf{\Lambda}_{\tilde{H}_{1}} \mathbf{U}_{\tilde{H}_{1}}^{H}, $$
$$\tilde{\mathbf{H}}_{2} \left(\tilde{\mathbf{H}}_{1}\tilde{\mathbf{H}}_{1}^{H} + \mathbf{I}_{K}\right) \tilde{\mathbf{H}}_{2}^{H} = \mathbf{U}_{\tilde{H}_{2}} \mathbf{\Lambda}_{\tilde{H}_{2}} \mathbf{U}_{\tilde{H}_{2}}^{H}, $$
where \( \mathbf {\Lambda }_{\tilde {H}_{1}}, \mathbf {\Lambda }_{\tilde {H}_{2}} \) are the corresponding diagonal matrices of eigenvalues in descending order.
After some manipulations, we get
$$ \tilde{\mathbf{H}}_{1} = \mathbf{U}_{\tilde{H}_{1}} \mathbf{\Lambda}_{\tilde{H}_{1}}^{\frac{1}{2}} \mathbf{X}_{1}, $$
$$ \tilde{\mathbf{H}}_{2} = \mathbf{U}_{\tilde{H}_{2}} \mathbf{\Lambda}_{\tilde{H}_{2}}^{\frac{1}{2}} \mathbf{X}_{2} \mathbf{U}_{\tilde{H}_{1}} (\mathbf{\Lambda}_{\tilde{H}_{1}} + \mathbf{I}_{K})^{-\frac{1}{2}} \mathbf{U}_{\tilde{H}_{1}}^{H}, $$
where \( \mathbf{X}_{1} \) is an \( M\times M \) unitary matrix with \( \mathbf{X}_{1} \mathbf{X}_{1}^{H} = \mathbf{I}_{M} \), and \( \mathbf{X}_{2} \) is an \( M\times K \) semi-unitary matrix with \( \mathbf{X}_{2} \mathbf{X}_{2}^{H} = \mathbf{I}_{M} \). It is easy to find that they do not affect the source and relay power constraints.
$$\begin{array}{@{}rcl@{}} \mathbf{M} &=& \mathbf{I}_{M} - \mathbf{X}_{1}^{H} \mathbf{\Lambda}_{\tilde{H}_{1}}^{\frac{1}{2}} (\mathbf{\Lambda}_{\tilde{H}_{1}} + \mathbf{I}_{M})^{-\frac{1}{2}} \mathbf{U}_{\tilde{H}_{1}}^{H} \mathbf{X}_{2}^{H} \mathbf{\Lambda}_{\tilde{H}_{2}}^{\frac{1}{2}}\\ &&\times (\mathbf{\Lambda}_{\tilde{H}_{2}} +\mathbf{I}_{M})^{-1} \mathbf{\Lambda}_{\tilde{H}_{2}}^{\frac{1}{2}} \mathbf{X}_{2} \mathbf{U}_{\tilde{H}_{1}} (\mathbf{\Lambda}_{\tilde{H}_{1}} + \mathbf{I}_{M})^{-\frac{1}{2}} \mathbf{\Lambda}_{\tilde{H}_{1}}^{\frac{1}{2}} \mathbf{X}_{1}\\ &\triangleq & \mathbf{I}_{M} - \mathbf{\Gamma}. \end{array} $$
Next, we consider the following properties:
For any Hermitian matrix \( \mathbf{A} \) with main diagonal vector \( d(\mathbf{A}) \) and eigenvalue vector \( \lambda(\mathbf{A}) \), it holds that \( d(\mathbf{A})\prec\lambda(\mathbf{A}) \) [41].
For \( m \) \( N\times N \) complex matrices \( \mathbf{A}_{1},\mathbf{A}_{2},\ldots,\mathbf{A}_{m} \), each with singular values arranged in descending order, the vector of singular values of the product \( \mathbf{B}=\mathbf{A}_{1}\mathbf{A}_{2}\cdots\mathbf{A}_{m} \) is weakly majorized by the Schur (element-wise) product of the vectors of singular values of these matrices, that is, \( \sigma(\mathbf{B})\prec_{w}\sigma(\mathbf{A}_{1})\odot\sigma(\mathbf{A}_{2})\odot\ldots\odot\sigma(\mathbf{A}_{m}) \) [41].
Applying the above properties to Γ, we have
$$ d(\mathbf{\Gamma}) \prec \lambda(\mathbf{\Gamma}) \prec_{w} d(\tilde{\mathbf{\Gamma}}), $$
where \( \tilde {\mathbf {\Gamma }} = \mathbf {\Lambda }_{\tilde {H}_{1}} (\mathbf {\Lambda }_{\tilde {H}_{1}} + \mathbf {I}_{M})^{-1} \mathbf {\Lambda }_{\tilde {H}_{2}} (\mathbf {\Lambda }_{\tilde {H}_{2}} +\mathbf {I}_{M})^{-1}. \) Since − log2(d(I M −Γ)) is Schur-convex and increases with d(Γ), we get \( -\log _{2}\big (d(\mathbf {I}_{M} - \mathbf {\Gamma })\big) \leq -\log _{2}\big (d(\mathbf {I}_{M} - \tilde {\mathbf {\Gamma }})\big). \) This leads to
$${} \mathcal{I} \leq -\frac{1}{2}\log_{2}\big(\mathbf{I}_{M} - \mathbf{\Lambda}_{\tilde{H}_{1}} (\mathbf{\Lambda}_{\tilde{H}_{1}} + \mathbf{I}_{M})^{-1} \mathbf{\Lambda}_{\tilde{H}_{2}} (\mathbf{\Lambda}_{\tilde{H}_{2}} +\mathbf{I}_{M})^{-1}\big), $$
where the inequality holds because a real-valued function f satisfies x≺ w y⇒f(x)≤f(y) if f is Schur-convex and increasing [41], and the maximum of the MI is attained when X 1 and X 2 are chosen as X 1=I M and \( \mathbf {X}_{2} = \mathbf {U}_{\tilde {H}_{1}}^{H}.\) Note that the upper bound itself does not depend on X 1 and X 2.
Let us consider the source and relay transmit power constraints. From (7), (12), and (15), it is easy to compute \( \mathbf {B} \mathbf {R}_{x}^{\frac {1}{2}} \) as
$$\mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}} = \mathbf{V}_{1} \mathbf{\Lambda}_{1}^{-\frac{1}{2}} \mathbf{U}_{1}^{H} \mathbf{U}_{\tilde{H}_{1}} \mathbf{\Lambda}_{\tilde{H}_{1}}^{\frac{1}{2}} \mathbf{X}_{1}.$$
The source transmit power can be rewritten as
$$\begin{array}{@{}rcl@{}} \text{tr}\left(\mathbf{B} \mathbf{R}_{x} \mathbf{B}^{H}\right) &=&\text{tr}\left(\mathbf{V}_{1} \mathbf{\Lambda}_{1}^{-\frac{1}{2}} \mathbf{U}_{1}^{H} \mathbf{U}_{\tilde{H}_{1}} \mathbf{\Lambda}_{\tilde{H}_{1}} \mathbf{U}_{\tilde{H}_{1}}^{H} \mathbf{U}_{1} \mathbf{\Lambda}_{1}^{-\frac{1}{2}}\mathbf{V}_{1}^{H} \right) \\ &=&\text{tr}\left(\mathbf{U}_{\tilde{H}_{1}} \mathbf{\Lambda}_{\tilde{H}_{1}} \mathbf{U}_{\tilde{H}_{1}}^{H} \mathbf{U}_{1} \mathbf{\Lambda}_{1}^{-1}\mathbf{U}_{1}^{H}\right)\\ &\geq&\text{tr}\left(\mathbf{\Lambda}_{\tilde{H}_{1}} \mathbf{\Lambda}_{1}^{-1}\right), \end{array} $$
where the inequality holds because, for any two N×N positive semidefinite matrices A and B with eigenvalues λ i (A) and λ i (B) arranged in descending order, \( \text {tr}(\mathbf {A} \mathbf {B}) \geq \sum _{i}^{N}{\lambda _{i}(\mathbf {A})\lambda _{N+1-i}(\mathbf {B})} \).
Obviously, in (19), the source transmit power is independent of X 1, and the minimum of the source power is achieved when \( \mathbf {U}_{\tilde {H}_{1}}^{H} = \mathbf {U}_{1}.\) Besides, because we also have X 1=I M as proved above, B can be obtained as
$$\mathbf{B} = \mathbf{V}_{1} \mathbf{\Lambda}_{1}^{-\frac{1}{2}} \mathbf{\Lambda}_{\tilde{H}_{1}}^{\frac{1}{2}}\mathbf{R}_{x}^{-\frac{1}{2}}.$$
By setting \( \mathbf {\Lambda }_{b}^{\frac {1}{2}} = \mathbf {\Lambda }_{1}^{-\frac {1}{2}} \mathbf {\Lambda }_{\tilde {H}_{1}}^{\frac {1}{2}}, \) we have the optimal B as shown in (9).
Similarly, from (8), (13), and (16), \( \mathbf {F} \mathbf {R}_{n_{1}}^{\frac {1}{2}} \) is readily obtained as
$$\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}} = \mathbf{V}_{2} \mathbf{\Lambda}_{2}^{-\frac{1}{2}} \mathbf{U}_{2}^{H} \mathbf{U}_{\tilde{H}_{2}} \mathbf{\Lambda}_{\tilde{H}_{2}}^{\frac{1}{2}} \mathbf{X}_{2} \mathbf{U}_{\tilde{H}_{1}} \left(\mathbf{\Lambda}_{\tilde{H}_{1}} + \mathbf{I}_{K} \right)^{-\frac{1}{2}} \mathbf{U}_{\tilde{H}_{1}}^{H}.$$
The relay transmit power can be rewritten as
$$\begin{array}{@{}rcl@{}} &&{}\text{tr}\left(\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}} \mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x} \mathbf{B}^{H} \mathbf{H}_{1}^{H} \mathbf{R}_{n_{1}}^{-\frac{H}{2}} + \mathbf{I}_{K}\right) \mathbf{R}_{n_{1}}^{\frac{H}{2}}\mathbf{F}^{H}\right)\\ &=&\text{tr}\left(\mathbf{U}_{\tilde{H}_{2}} \mathbf{\Lambda}_{\tilde{H}_{2}} \mathbf{U}_{\tilde{H}_{2}}^{H} \mathbf{U}_{2} \mathbf{\Lambda}_{2}^{-1}\mathbf{U}_{2}^{H}\right)\\ &\geq&\text{tr}\left(\mathbf{\Lambda}_{\tilde{H}_{2}} \mathbf{\Lambda}_{2}^{-1}\right), \end{array} $$
It can be observed that X 2 does not affect the relay transmit power. Similar to (19), the equality in (20) holds when \( \mathbf {U}_{\tilde {H}_{2}}^{H} = \mathbf {U}_{2}.\) Besides, since we also have \( \mathbf {X}_{2} = \mathbf {U}_{\tilde {H}_{1}}^{H} \) as proved above, F can be calculated as
$$\mathbf{F} = \mathbf{V}_{2} \mathbf{\Lambda}_{2}^{-\frac{1}{2}} \mathbf{\Lambda}_{\tilde{H}_{2}}^{\frac{1}{2}} (\mathbf{\Lambda}_{\tilde{H}_{1}} + \mathbf{I}_{K})^{-\frac{1}{2}} \mathbf{U}_{\tilde{H}_{1}}^{H}\mathbf{R}_{n_{1}}^{-\frac{1}{2}}.$$
By setting \( \mathbf {\Lambda }_{f}^{\frac {1}{2}} = \mathbf {\Lambda }_{2}^{-\frac {1}{2}} \mathbf {\Lambda }_{\tilde {H}_{2}}^{\frac {1}{2}} (\mathbf {\Lambda }_{\tilde {H}_{1}} + \mathbf {I}_{K})^{-\frac {1}{2}}, \) we have the optimal F as shown in (10). □
One can see from (9) and (10) that the optimal pair of B and F generalizes the solution of [4], derived for relay systems with i.i.d. channels, independent source signals, and white noise, to relay systems with spatially correlated channels, correlated source signals, and colored noise. It also extends the relay-only precoding (ROP) of [28, 29], which uses the same F and G as our design but fixes \( \mathbf {B} = \sqrt {p_{1}/M} \mathbf {I}_{M} \). Like the existing designs with the full CSIT (e.g., [1–6, 8]), these optimal structures of B and F are independent of the type of channel fading. B whitens the source signal streams, then loads the source power and beamforms the resulting parallel streams across the eigenvectors V 1, while F whitens the relay colored noise and loads the relay power across the eigenvectors \( \mathbf {U}_{1}^{H} \) and V 2. In this way, the equivalent end-to-end MIMO channel in the presence of B, F, and G is separated into at most L independent subchannels (or eigenmodes), as illustrated in Fig. 2. This implies that there is no longer interference among the signal streams, which enhances the system capacity. This channel separation is mathematically expressed by
$$ \hat{\mathbf{x}} = \mathbf{\Delta}\mathbf{\Lambda}_{2}^{\frac{1}{2}} \mathbf{\Lambda}_{f}^{\frac{1}{2}} \mathbf{\Lambda}_{1}^{\frac{1}{2}} \mathbf{\Lambda}_{b}^{\frac{1}{2}} \bar{\mathbf{x}} + \mathbf{\Delta}\mathbf{\Lambda}_{2}^{\frac{1}{2}} \mathbf{\Lambda}_{f}^{\frac{1}{2}} \bar{\mathbf{n}}_{1} + \mathbf{\Delta}\bar{\mathbf{n}}_{2}, $$
Fig. 2 An illustration of the equivalent end-to-end MIMO channel separation
where \( \mathbf {\Delta } \triangleq \mathbf {\Lambda }_{b}^{\frac {1}{2}} \mathbf {\Lambda }_{1}^{\frac {1}{2}} \mathbf {\Lambda }_{f}^{\frac {1}{2}} \mathbf {\Lambda }_{2}^{\frac {1}{2}} \left (\mathbf {\Lambda }_{2}^{\frac {1}{2}} \mathbf {\Lambda }_{f}^{\frac {1}{2}} \mathbf {\Lambda }_{1}^{\frac {1}{2}} \mathbf {\Lambda }_{b} \mathbf {\Lambda }_{1}^{\frac {1}{2}} \mathbf {\Lambda }_{f}^{\frac {1}{2}} \mathbf {\Lambda }_{2}^{\frac {1}{2}} + \mathbf {\Lambda }_{2}^{\frac {1}{2}} \mathbf {\Lambda }_{f} \mathbf {\Lambda }_{2}^{\frac {1}{2}} + \mathbf {I}_{N} \right)^{-1} \) is the diagonal matrix of non-negative diagonal elements, at most L of which, δ 1,…,δ L , are positive, \(\mathbf {G} = \mathbf {R}_{x}^{\frac {1}{2}} \mathbf {\Delta },\) and \( \bar {\mathbf {n}}_{1} \triangleq \mathbf {U}_{1}^{H} \mathbf {R}_{n_{1}}^{-\frac {1}{2}}\mathbf {n}_{1}, \bar {\mathbf {n}}_{2} \triangleq \mathbf {U}_{2}^{H} \mathbf {R}_{n_{2}}^{-\frac {1}{2}}\mathbf {n}_{2} \) and \( \bar {\mathbf {x}} \triangleq \mathbf {R}_{x}^{-\frac {1}{2}}\mathbf {x} \) are white since \( \mathrm {E}\left (\bar {\mathbf {n}}_{1} \bar {\mathbf {n}}_{1}^{H}\right) = \mathbf {I}_{K}, \mathrm {E}\left (\bar {\mathbf {n}}_{2}\bar {\mathbf {n}}_{2}^{H}\right) = \mathbf {I}_{N} \) and \(\mathrm {E}\left (\bar {\mathbf {x}} \bar {\mathbf {x}}^{H}\right) = \mathbf {I}_{M}. \)
The remaining task is to allocate power across the subchannels, which is equivalent to solving the following problem of optimizing Λ b and Λ f :
$$ \begin{aligned} \max_{\mathbf{b},\mathbf{f}} & \quad \frac{1}{2} \sum\limits_{l=1}^{L}{\log_{2}\left(1 + \frac{\lambda_{2,l} {f}_{l} \lambda_{1,l} {b}_{l} }{1 + \lambda_{2,l} {f}_{l}} \right)} \\ \text{s.t.} & \quad \left\{ \begin{array}{l} \sum\limits_{l=1}^{L}{{b}_{l}} \leq p_{1} \; \text{and} \; \sum\limits_{l=1}^{L}{(\lambda_{1,l} {b}_{l} + 1){f}_{l}} \leq p_{2},\\ {b}_{l} \geq 0 \; \text{and} \; (\lambda_{1,l} {b}_{l} + 1){f}_{l} \geq 0, \quad l = 1,\ldots,L. \end{array}\right. \end{aligned} $$
Here, we define \( \mathbf {b} \triangleq (b_{1}, \ldots, b_{L})^{T} \), \( \mathbf {f} \triangleq (f_{1}, \ldots, f_{L})^{T} \), and λ 1,l , λ 2,l , b l and f l are the l-th main diagonal elements of Λ 1, Λ 2, Λ b and Λ f , respectively.
Set \( \mathbf {v} \triangleq (v_{1}, \ldots, v_{L})^{T} \) and v l =(λ 1,l b l +1)f l . Problem (22) can be rewritten as:
$${} {{\begin{aligned} \max_{\mathbf{b},\mathbf{v} \geq 0} & \quad \mathcal{I}(\mathbf{b},\mathbf{v}) = \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(\frac{1 + \lambda_{1,l} {b}_{l} + \lambda_{1,l} {b}_{l} \lambda_{2,l} {v}_{l} + \lambda_{2,l} {v}_{l}}{1 + \lambda_{1,l} {b}_{l} + \lambda_{2,l} {v}_{l}} \right)}\\ \mathrm{s.t.} & \quad \sum_{l=1}^{L}{{b}_{l}} \leq p_{1} \text{\; and \;} \sum_{l=1}^{L}{{v}_{l}} \leq p_{2}. \end{aligned}}} $$
Once v l is found, f l can be easily computed as f l =v l /(λ 1,l b l +1). An optimal solution to Problem (23) cannot be obtained directly, since the problem is non-concave in b and v. Therefore, in the following Sections 3.2 and 3.3, we design an iterative algorithm and a simplified algorithm to find b and v.
Iterative power allocation algorithm
In this section, we develop a numerical method based on the alternating optimization technique [3, 4, 8] to find b and v. Note that b and v enter (23) symmetrically. Hence, if either b or v is kept fixed, Problem (23) reduces to a standard concave optimization problem. Specifically, when b is fixed, it collapses to the problem of optimizing v given by
$$\begin{array}{*{20}l}{} \max_{\mathbf{v} \geq 0} & \quad \mathcal{I}(\mathbf{v}) = \frac{1}{2}\!\! \sum_{l=1}^{L}{\log_{2}\!\!\left(\frac{1 + \lambda_{1,l} {b}_{l} + \lambda_{1,l} {b}_{l} \lambda_{2,l} {v}_{l} + \lambda_{2,l} {v}_{l}}{1 + \lambda_{1,l} {b}_{l} + \lambda_{2,l} {v}_{l}} \right)} \end{array} $$
$$\begin{array}{*{20}l} {}\mathrm{s.t.} & \quad \sum_{l=1}^{L}{{v}_{l}} \leq p_{2}. \end{array} $$
For the obtained v, it reduces to the problem of optimizing b given by
$$\begin{array}{*{20}l}{} \max_{\mathbf{b} \geq 0} & \quad\!\!\! \mathcal{I}(\mathbf{b}) = \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(\frac{1 + \lambda_{1,l} {b}_{l} + \lambda_{1,l} {b}_{l} \lambda_{2,l} {v}_{l} + \lambda_{2,l} {v}_{l}} {1 + \lambda_{1,l} {b}_{l} + \lambda_{2,l} {v}_{l}} \right)} \end{array} $$
$$\begin{array}{*{20}l} {}\mathrm{s.t.} & \quad \!\!\!\sum_{l=1}^{L}{{b}_{l}} \leq p_{1}. \end{array} $$
We present how to obtain v from Problems (24)–(25). Let us consider the function
$$f({v}_{l}) = \log_{2}\left(\frac{1 + \lambda_{1,l} {b}_{l} + \lambda_{1,l} {b}_{l} \lambda_{2,l} {v}_{l} + \lambda_{2,l} {v}_{l}}{1 + \lambda_{1,l} {b}_{l} + \lambda_{2,l} {v}_{l}} \right). $$
Since it has the second derivative
$${} {\begin{aligned} \frac{d^{2}f({v}_{l})}{d{v}_{l}^{2}} &= -\frac{\lambda_{2,l}^{2}}{2 \ln2} \left(\frac{1}{(1 + \lambda_{2,l} {v}_{l})^{2}}- \frac{1}{(1 + \lambda_{1,l} {b}_{l} + \lambda_{2,l} {v}_{l})^{2}} \right) \leq 0, \end{aligned}} $$
it is concave on 0≤v l ≤p 2. It follows that the objective function \( \mathcal {I}(\mathbf {v}) = \frac {1}{2} \sum _{l=1}^{L}{f(v_{l})} \) is also concave on the same range of v l . In addition, the constraint functions are clearly convex. Hence, Problem (24)–(25) is a standard concave optimization problem [42], and its optimal solution v can be found via the Lagrange multiplier method as follows.
Let us introduce the Lagrange function as
$$\mathcal{L} = \mathcal{I}(\mathbf{v}) + \nu \left(\sum_{l=1}^{L}{{v}_{l}} - p_{2}\right) - \sum_{l=1}^{L}{\gamma_{l} v_{l} }. $$
The concave function \( \mathcal {I}(\mathbf {v}) \) achieves its global maximum when the Karush-Kuhn-Tucker (KKT) conditions [42], which are necessary and sufficient here and are listed below, are satisfied:
$$ -v_{l} \leq 0, \; \gamma_{l} \ge 0 \text{\; and \;} \gamma_{l} v_{l} = 0, $$
$$ \nu \ge 0, \; \nu \left(\sum_{l=1}^{L}{{v}_{l}} - p_{2}\right) = 0, $$
$$\begin{array}{*{20}l} \frac{\partial \mathcal{L}}{\partial v_{l}} &= \frac{\lambda_{2,l}}{2 \ln{2}} \left(\frac{1}{1 + \lambda_{2,l} {v}_{l}} - \frac{1}{1 + \lambda_{1,l} {b}_{l} + \lambda_{2,l} {v}_{l}} \right)\\ &\quad+ \nu -\gamma_{l} = 0. \end{array} $$
Solving the system of Eqs. (28)–(30) yields the optimum water-filling v as
$$ v_{l} = \left[\sqrt{\left(\frac{\lambda_{1,l}}{2 \lambda_{2,l}}b_{l}\right)^{2} + \frac{\lambda_{1,l}}{ \lambda_{2,l}} b_{l} \mu_{v}} - \frac{\lambda_{1,l}}{2 \lambda_{2,l}}b_{l} - \frac{1}{\lambda_{2,l}} \right]^{+}, $$
where the Lagrange multiplier \( \mu _{v} \triangleq 1/\nu \ln {2} \) satisfies
$${} \sum_{l=1}^{L} {\left[\sqrt{\left(\frac{\lambda_{1,l}}{2 \lambda_{2,l}}b_{l}\right)^{2} + \frac{\lambda_{1,l}} { \lambda_{2,l}} b_{l} \mu_{v}} - \frac{\lambda_{1,l}}{2 \lambda_{2,l}}b_{l} - \frac{1}{\lambda_{2,l}} \right]^{+}} = p_{2}. $$
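In practice, the water level μ v can be found by a simple one-dimensional search, since the left-hand side of (32) is non-decreasing in μ v . The following Python/NumPy sketch (variable names and toy data are illustrative assumptions, not the paper's code) does this by bisection:

```python
# Sketch of solving (31)-(32): given the current source loading b, find the
# water level mu_v by bisection so that the relay budget p2 is met, then
# recover v_l from (31).  Assumes at least one lam1[l]*b[l] > 0.
import numpy as np

def relay_waterfill(lam1, lam2, b, p2, tol=1e-9):
    a = lam1 * b / (2.0 * lam2)                 # lambda_{1,l} b_l / (2 lambda_{2,l})

    def v_of_mu(mu):
        # Eq. (31): v_l = [ sqrt(a^2 + (lam1/lam2) b mu) - a - 1/lam2 ]^+
        return np.maximum(np.sqrt(a**2 + lam1 * b * mu / lam2) - a - 1.0 / lam2, 0.0)

    lo, hi = 0.0, 1.0
    while v_of_mu(hi).sum() < p2:               # enlarge the bracket to cover p2
        hi *= 2.0
    while hi - lo > tol:                        # bisection: sum(v) increases with mu
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if v_of_mu(mid).sum() < p2 else (lo, mid)
    mu_v = 0.5 * (lo + hi)
    return v_of_mu(mu_v), mu_v

# Toy example with L = 4 eigenmodes.
lam1 = np.array([2.0, 1.5, 0.8, 0.3])
lam2 = np.array([1.8, 1.0, 0.6, 0.2])
b = np.ones(4)
v, mu_v = relay_waterfill(lam1, lam2, b, p2=4.0)
print("v =", np.round(v, 4), " sum =", round(float(v.sum()), 4))
```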
Since Problem (26)–(27) has the same form as Problem (24)–(25), its optimal water-filling solution b can be inferred as:
$$ b_{l} = \left[\sqrt{\left(\frac{\lambda_{2,l}}{2 \lambda_{1,l}}v_{l}\right)^{2} + \frac{\lambda_{2,l}} { \lambda_{1,l}} v_{l} \mu_{b}} - \frac{\lambda_{2,l}}{2 \lambda_{1,l}}v_{l} - \frac{1}{\lambda_{1,l}} \right]^{+}, $$
where the Lagrange multiplier μ b satisfies
$${} \sum_{l=1}^{L} {\left[\sqrt{\left(\frac{\lambda_{2,l}}{2 \lambda_{1,l}}v_{l}\right)^{2} + \frac{\lambda_{2,l}} { \lambda_{1,l}} v_{l} \mu_{b}} - \frac{\lambda_{2,l}}{2 \lambda_{1,l}}v_{l} - \frac{1}{\lambda_{1,l}} \right]^{+}} = p_{1}. $$
Intuitively, (31) and (33) show that F loads more power onto the eigenmodes of the relay-destination link λ 2,l that are weaker than the modified eigenmodes b l λ 1,l of the source-relay link and less power onto those that are stronger, while B loads more power onto the eigenmodes of the source-relay link λ 1,l that are weaker than the modified eigenmodes v l λ 2,l of the relay-destination link and less power onto those that are stronger. Here, the goal is to obtain as many optimal patterns of pairing b l λ 1,l with v l λ 2,l as possible to further enhance the system capacity. This coordination of B and F is repeated until the desired system capacity is achieved. The iterative procedure is summarized in Table 1. The computational complexity of the iterative design with the full CSIT comes from performing two SVDs in (7) and (8) with \( 2 \times \mathcal {O}(L^{3})\) operations (where we take L=N=M=K for simplicity), finding the roots v and b in (31) and (33) with \( 2 \times \mathcal {O}(L)\) operations, and searching for the optimal patterns of pairing b l λ 1,l with v l λ 2,l with \( 2 \times \mathcal {O}(L!)\) operations. Hence, a total of \( 2 \times \mathcal {O}(L^{3}) + \mathcal {N} \times (2 \times \mathcal {O}(L)+ 2 \times \mathcal {O}(L!))\) operations is required, where \( \mathcal {N} \) is the number of iterations needed to complete the iterative design.
Table 1 An iterative algorithm to derive b and v
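Since Table 1 only lists the steps, a minimal Python/NumPy sketch of the alternating procedure is given below. Because b and v play symmetric roles in (23), one generic water-filling routine serves both updates (31)–(32) and (33)–(34); the initialization, stopping rule, and toy eigenvalues are assumptions for illustration only.

```python
# Illustrative sketch of the alternating power allocation of Table 1.
import numpy as np

def waterfill(lam_own, lam_other, other, budget, tol=1e-9):
    """Generic modified water-filling of the form (31) or (33)."""
    a = lam_other * other / (2.0 * lam_own)

    def x_of_mu(mu):
        return np.maximum(np.sqrt(a**2 + lam_other * other * mu / lam_own)
                          - a - 1.0 / lam_own, 0.0)

    lo, hi = 0.0, 1.0
    while x_of_mu(hi).sum() < budget:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if x_of_mu(mid).sum() < budget else (lo, mid)
    return x_of_mu(0.5 * (lo + hi))

def mutual_info(lam1, lam2, b, v):
    # Objective of (23).
    num = 1.0 + lam1 * b + lam1 * b * lam2 * v + lam2 * v
    den = 1.0 + lam1 * b + lam2 * v
    return 0.5 * np.sum(np.log2(num / den))

def alternate(lam1, lam2, p1, p2, n_iter=50, eps=1e-6):
    b = np.full(lam1.size, p1 / lam1.size)       # start from equal source loading
    mi_old = -np.inf
    for _ in range(n_iter):
        v = waterfill(lam2, lam1, b, p2)         # update v for fixed b, (31)-(32)
        b = waterfill(lam1, lam2, v, p1)         # update b for fixed v, (33)-(34)
        mi = mutual_info(lam1, lam2, b, v)
        if mi - mi_old < eps:                    # stop when the MI stops improving
            break
        mi_old = mi
    return b, v, mi

lam1 = np.array([2.0, 1.5, 0.8, 0.3])
lam2 = np.array([1.8, 1.0, 0.6, 0.2])
b, v, mi = alternate(lam1, lam2, p1=4.0, p2=4.0)
print("b =", np.round(b, 3), "v =", np.round(v, 3), "MI =", round(float(mi), 3))
```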
In the design process, besides the covariance matrices \(\mathbf {R}_{x}, \mathbf {R}_{n_{1}}\), and \( \mathbf {R}_{n_{2}} \), the relay needs the estimated CSI H 1 and the destination-relay feedback CSI H 2, while the source needs the relay-source feedback CSI H 1 and H 2. With each such set of full CSIT, for a fixed b, the relay computes v and feeds it back to the source. The source updates b with the received v and then feeds the updated b forward to the relay. This updating is repeated alternately between the relay and the source until \( \mathcal {I}(\mathbf {b},\mathbf {v}) \) converges to a desired value. Due to the computational burden of such an iterative procedure, its output v and b, and thus the precoders B and F, may be outdated relative to the current propagation conditions. In practical wireless communications systems, codebook and limited-feedback schemes are often utilized to efficiently mitigate the overhead and the design complexity. The idea behind these techniques is that the receiver first quantizes the estimated CSI and feeds back the resulting index to the transmitter. The transmitter then picks the desired precoder from a codebook, which is a set of precoders designed offline beforehand using various CSIT sets [39].
Simplified power allocation algorithm
In this section, we develop a simplified algorithm that allows the source and relay power to be loaded separately. First, consider the inequality:
$$ \frac{(1 + x)(1 + y)}{1 + x + y} \leq (1 + x)(1 + y), $$
where x and y are two non-negative scalars. Applying inequality (35) to the objective function of Problem (23) yields the upper bound
$$ \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(1 + \lambda_{1,l} {b}_{l}\right)} + \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(1 + \lambda_{2,l} {v}_{l}\right)}. $$
In (23), let us replace the objective function with its upper bound. Consequently, Problem (23) splits into two concave optimization problems:
$$ \max_{\mathbf{v} \geq 0} \; \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(1 + \lambda_{2,l} {v}_{l}\right)} \quad \mathrm{s.t.} \; \sum_{l=1}^{L}{{v}_{l}} \leq p_{2}, $$
$$ \max_{\mathbf{b} \geq 0} \; \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(1 + \lambda_{1,l} {b}_{l}\right)} \quad \mathrm{s.t.} \; \sum_{l=1}^{L}{{b}_{l}} \leq p_{1}. $$
The corresponding optimal water-filling solutions v and b to Problems (37) and (38) are, respectively, given by [20]:
$$ {v}_{l} = \left[\mu_{v} - \frac{1}{\lambda_{2,l}}\right]^{+},\quad \sum_{l=1}^{L}{\left[\mu_{v} - \frac{1}{\lambda_{2,l}}\right]^{+}} = p_{2}, $$
$$ {b}_{l} = \left[\mu_{b} - \frac{1}{\lambda_{1,l}}\right]^{+},\quad \sum_{l=1}^{L}{\left[\mu_{b} - \frac{1}{\lambda_{1,l}}\right]^{+}} = p_{1}. $$
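The solutions (39) and (40) are classical single-constraint water-filling and can be computed without iteration by checking which eigenmodes are active, as in the following Python/NumPy sketch (toy eigenvalues assumed):

```python
# Sketch of the classical water-filling in (39)-(40).
import numpy as np

def classic_waterfill(lam, p):
    """Return x with x_l = [mu - 1/lam_l]^+ and sum(x) = p."""
    inv = np.sort(1.0 / lam)                 # strongest eigenmodes first
    for k in range(lam.size, 0, -1):         # try k active modes, largest k first
        mu = (p + inv[:k].sum()) / k
        if mu > inv[k - 1]:                  # all k candidate modes are active
            break
    return np.maximum(mu - 1.0 / lam, 0.0), mu

lam2 = np.array([1.8, 1.0, 0.6, 0.2])
v, mu_v = classic_waterfill(lam2, p=4.0)     # relay loading, Eq. (39)
lam1 = np.array([2.0, 1.5, 0.8, 0.3])
b, mu_b = classic_waterfill(lam1, p=4.0)     # source loading, Eq. (40)
print("v =", np.round(v, 3), "b =", np.round(b, 3))
```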
Intuitively, (39) and (40) show that F loads more power onto the weaker eigenmodes λ 2,l of the relay-destination link and less power onto the stronger ones, while B loads more power onto the weaker eigenmodes λ 1,l of the source-relay link and less power onto the stronger ones. In the design process, the relay needs \( \mathbf {R}_{x}, \mathbf {R}_{n_{1}}, \mathbf {R}_{n_{2}}, \mathbf {H}_{1} \), and H 2, while the source needs \( \mathbf {R}_{x}, \mathbf {R}_{n_{1}} \), and H 1. With each such set of full CSIT, the relay computes v and feeds it back to the source, and then the source calculates b with the received v. Because \( \mathbf {R}_{n_{2}} \) and H 2 need not be fed back to the source, the simplified design with the full CSIT saves a large amount of signaling overhead compared to the iterative counterpart. In terms of computational complexity, this simplified design requires \( 2 \times (\mathcal {O}(L^{3}) + \mathcal {O}(L)+ \mathcal {O}(L!)) \) operations, so it is much simpler than the iterative counterpart. Because v and b are calculated separately, the simplified scheme may give lower capacity than the iterative scheme. Nevertheless, we can expect its capacity performance to be comparable to that of the iterative counterpart, especially at medium-to-high SNRs. This is mainly because in inequality (35), when x,y→∞, then 1+x+y≪(1+x)(1+y), or equivalently, when b,v→∞ (i.e., \( p_{1}/\sigma _{1}^{2}, p_{2}/\sigma _{2}^{2} \rightarrow \infty \)), \( \mathcal {I}(\mathbf {b},\mathbf {v}) \) approaches its upper bound. Interestingly, this simplified design can also be extended to multi-hop systems, as presented below.
Let us extend inequality (35) to Z≥2 non-negative scalars x 1,…,x Z . The resulting inequality is
$$ \frac{\prod_{i=1}^{Z}{(1 + x_{i})}}{1 + \sum_{i=1}^{Z}{x_{i}}} \leq \prod_{i=1}^{Z}{(1 + x_{i})}. $$
By this inequality, an upper bound on the MI of a Z-hop system can be derived as
$$ \frac{1}{Z} \sum_{l=1}^{L}{\log_{2}\Big(1 + \lambda_{1,l} {b}_{l}\Big)} + \frac{1}{Z} \sum_{i=2}^{Z}{\sum_{l=1}^{L}{\log_{2}\Big(1 + \lambda_{i,l} {v}_{l}\Big)}}. $$
Similar to the two-hop system case, the entries b l of the diagonal matrix Λ b of the source precoding matrix
$$\mathbf{B} = \mathbf{V}_{1} \mathbf{\Lambda}_{b}^{\frac{1}{2}} \mathbf{V}_{1}^{H} \mathbf{R}_{x}^{-\frac{1}{2}}$$
and the entries f i,l of the diagonal matrix Λ f,i of the relay precoding matrices
$$\mathbf{F}_{i} = \mathbf{V}_{i} \mathbf{\Lambda}_{f,i}^{\frac{1}{2}} \mathbf{U}_{i}^{H} \mathbf{R}_{n_{i}}^{-\frac{1}{2}}, i = \{2,\ldots,Z\} $$
can be found by solving the corresponding optimization problems given by:
$$ \max \; \frac{1}{Z} \sum_{l=1}^{L}{\log_{2}\left(1 + \lambda_{1,l} {b}_{l}\right)} \quad \mathrm{s.t.} \; \sum_{l=1}^{L}{{b}_{l}} \leq p_{1}, $$
$$ \max \; \frac{1}{Z} \sum_{l=1}^{L}{\log_{2}\left(1 + \lambda_{i,l} {v}_{i,l}\right)} \quad \mathrm{s.t.} \; \sum_{l=1}^{L}{{v}_{i,l}} \leq p_{i}. $$
Here, V i ,U i come from the SVDs of \( \mathbf {R}_{n_{i}}^{-\frac {1}{2}} \mathbf {H}_{i} = \mathbf {U}_{i} \mathbf {\Lambda }_{i}^{\frac {1}{2}} \mathbf {V}_{i}^{H} \). Solving Problems (43) and (44) yields [20]
$${b}_{l} = \left[\nu - \frac{1}{\lambda_{1,l}}\right]^{+}, \; \sum_{l=1}^{L}{\left[\nu - \frac{1}{\lambda_{1,l}}\right]^{+}} = p_{1}, $$
$$ {v}_{i,l} = \left[\mu_{i} - \frac{1}{\lambda_{i,l}}\right]^{+}, \quad \sum_{l=1}^{L}{\left[\mu_{i} - \frac{1}{\lambda_{i,l}}\right]^{+}} = p_{i}, \quad {f}_{i,l} = v_{i,l}/(\lambda_{1,l} {b}_{l} + 1). $$
Again, it is easy to see that with the coordination of the precoders B, F i and a corresponding linear MMSE equalizer at the destination, the Z-hop MIMO relay channel is also decoupled. Notably, this extended design with the full CSIT gives the same solution as the MMI asymptotic precoder design of [30, 31], where the same problem was studied. This shows the flexibility of the proposed design methods.
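A compact Python/NumPy sketch of this multi-hop extension is given below: each hop runs its own classical water-filling over its whitened-channel eigenvalues and power budget, and the relay loadings then follow the last relation above. All eigenvalues and budgets are toy assumptions.

```python
# Illustrative per-hop loading for the Z-hop extension, cf. (43)-(44).
import numpy as np

def classic_waterfill(lam, p):
    inv = np.sort(1.0 / lam)
    for k in range(lam.size, 0, -1):
        mu = (p + inv[:k].sum()) / k
        if mu > inv[k - 1]:
            break
    return np.maximum(mu - 1.0 / lam, 0.0)

Z = 3                                                   # number of hops
lam = [np.array([2.0, 1.5, 0.8, 0.3]),                  # hop 1 (source-relay)
       np.array([1.8, 1.0, 0.6, 0.2]),                  # hop 2
       np.array([1.2, 0.9, 0.5, 0.1])]                  # hop 3
p = [4.0, 4.0, 4.0]                                     # per-node power budgets

b = classic_waterfill(lam[0], p[0])                     # source loading
f = [classic_waterfill(lam[i], p[i]) / (lam[0] * b + 1.0)   # relay loadings
     for i in range(1, Z)]
print("b =", np.round(b, 3))
for i, fi in enumerate(f, start=2):
    print("f_%d =" % i, np.round(fi, 3))
```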
Theorem 1 below concludes the main results on the joint design of source and relay precoders with the full CSIT.
Theorem 1
The instantaneous mutual information \( \mathcal {I}(\mathbf {B},\mathbf {F}) \) attains its maximum under the power constraints tr(B R x B H)≤p 1 and \(\phantom {\dot {i}\!} \text {tr}\Big (\mathbf {F} \left (\mathbf {H}_{1} \mathbf {B} \mathbf {R}_{x} \mathbf {B}^{H} \mathbf {H}_{1}^{H} + \mathbf {R}_{n_{1}} \right) \mathbf {F}^{H}\Big) \leq p_{2} \) when B and F are of the optimal structures as \( \mathbf {B} = \mathbf {V}_{1} \mathbf {\Lambda }_{b}^{\frac {1}{2}} \mathbf {R}_{x}^{-\frac {1}{2}} \) and \(\phantom {\dot {i}\!} \mathbf {F} = \mathbf {V}_{2} \mathbf {\Lambda }_{f}^{\frac {1}{2}} \mathbf {U}_{1}^{H} \mathbf {R}_{n_{1}}^{-\frac {1}{2}} \). Here, V 1,U 1 and V 2 are unitary matrices of \( \mathbf {R}_{n_{1}}^{-\frac {1}{2}} \mathbf {H}_{1} = \mathbf {U}_{1} \mathbf {\Lambda }_{1}^{\frac {1}{2}} \mathbf {V}_{1}^{H} \) and \( \mathbf {R}_{n_{2}}^{-\frac {1}{2}} \mathbf {H}_{2} = \mathbf {U}_{2} \mathbf {\Lambda }_{2}^{\frac {1}{2}} \mathbf {V}_{2}^{H} \), and Λ b and Λ f are diagonal matrices of non-negative entries which can be determined alternately by the iterative algorithm (Section 3.2) or separately by the simplified algorithm (Section 3.3).
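For concreteness, the Python/NumPy sketch below assembles B and F according to Theorem 1 from the two whitened-channel SVDs. The channels, covariances, and diagonal loadings are toy assumptions; in the actual design the loadings would come from the algorithm of Section 3.2 or 3.3.

```python
# Illustrative assembly of the optimal precoder structures of Theorem 1.
import numpy as np

def inv_sqrtm(R):
    """Inverse square root of a Hermitian positive-definite matrix."""
    w, U = np.linalg.eigh(R)
    return (U / np.sqrt(w)) @ U.conj().T

rng = np.random.default_rng(1)
M = K = N = 4
H1 = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
H2 = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
Rx, Rn1, Rn2 = np.eye(M), np.eye(K), np.eye(N)          # toy covariances

# Whitened-channel SVDs: R_{n1}^{-1/2} H1 = U1 L1^{1/2} V1^H,
#                        R_{n2}^{-1/2} H2 = U2 L2^{1/2} V2^H.
U1, _, V1h = np.linalg.svd(inv_sqrtm(Rn1) @ H1)
_, _, V2h = np.linalg.svd(inv_sqrtm(Rn2) @ H2)

lam_b = np.array([2.0, 1.0, 0.5, 0.0])                  # assumed source loading
lam_f = np.array([1.5, 0.8, 0.2, 0.0])                  # assumed relay loading

B = V1h.conj().T @ np.diag(np.sqrt(lam_b)) @ inv_sqrtm(Rx)                    # Theorem 1
F = V2h.conj().T @ np.diag(np.sqrt(lam_f)) @ U1.conj().T @ inv_sqrtm(Rn1)     # Theorem 1
print("tr(B Rx B^H) =", float(np.real(np.trace(B @ Rx @ B.conj().T))))
```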
Joint source and relay precoding with partial CSIT
Suboptimal structures for source and relay precoders
In Section 3, we obtained the precoder designs with the full CSIT. However, it is difficult for the relay and the source to obtain H 2 when the destination moves rapidly. This is basically because a large amount of signalling overhead is needed to feed back H 2, while the feedback channels in practical wireless systems are commonly rate-limited. For these reasons, in this section, we assume that the source and the relay have R x , \(\mathbf {R}_{n_{1}}\), \(\mathbf {R}_{n_{2}}\), H 1, and only the covariance information Θ 2 and Ω 2 of H 2 (the partial CSIT). With the partial CSIT, we jointly design B and F to maximize \( \mathrm {E_{H_{2}}}\big (\mathcal {I}(\mathbf {B},\mathbf {F})\big) \) under the source and relay transmit power constraints. However, \( \mathrm {E_{H_{2}}}\big (\mathcal {I}(\mathbf {B},\mathbf {F})\big) \) is intractable to compute exactly, because the expectation over H 2 does not admit a closed form in the unknowns B and F. Instead, we propose to use an upper bound of it, which is derived below.
Applying the matrix inversion lemma to (5) gives
$$\begin{array}{*{20}l}{} \mathbf{M} &= \mathbf{W}^{-1} + \mathbf{W}^{-1} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)^{H}\\ &\quad\times\left(\mathbf{R}_{n_{1}}^{\frac{H}{2}}\mathbf{F}^{H}\mathbf{H}_{2}^{H} \mathbf{R}_{n_{2}}^{-1} \mathbf{H}_{2}\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}} + \mathbf{I}_{K}- \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)\right.\\ &\left. \quad \times \mathbf{W}^{-1} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)^{H} \right)^{-1} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right) \mathbf{W}^{-1}, \end{array} $$
where \( \mathbf {W} \triangleq \left (\mathbf {R}_{n_{1}}^{-\frac {1}{2}}\mathbf {H}_{1} \mathbf {B} \mathbf {R}_{x}^{\frac {1}{2}}\right)\left (\mathbf {R}_{n_{1}}^{-\frac {1}{2}}\mathbf {H}_{1} \mathbf {B} \mathbf {R}_{x}^{\frac {1}{2}}\right)^{H}+\mathbf {I}_{K}.\)
Since the function \( f(\mathbf {X})=\mathbf {X}^{-1} \) is convex in X [43], M in (45) is convex in \( \mathbf {H}_{2}^{H} \mathbf {R}_{n_{2}}^{-1} \mathbf {H}_{2} \) for a given H 1. By Jensen's inequality [44] and the property that, for any matrix H with distribution \( \mathbf {H} \sim \mathcal {CN}(\mathbf {0},\mathbf {\Theta } \otimes \mathbf {\Omega }) \), \( \mathrm {E}_{\mathbf {H}}\left (\mathbf {H} \mathbf {A} \mathbf {H}^{H}\right)=\text {tr}\left (\mathbf {A} \mathbf {\Theta }^{T}\right)\mathbf {\Omega } \) and \( \mathrm {E}_{\mathbf {H}}\left (\mathbf {H}^{H} \mathbf {A} \mathbf {H}\right)=\text {tr}\left (\mathbf {\Omega } \mathbf {A}\right)\mathbf {\Theta }^{T} \) [33], we have
$${} \begin{aligned} \mathrm{E_{H_{2}}}\big(\mathbf{M}\big) &\succeq \mathbf{W}^{-1} + \mathbf{W}^{-1} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)^{H}\\ &\quad\times \left(\mathbf{R}_{n_{1}}^{\frac{H}{2}}\mathbf{F}^{H}\mathrm{E_{H_{2}}}\left(\mathbf{H}_{2}^{H} \mathbf{R}_{n_{2}}^{-1} \mathbf{H}_{2}\right)\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}} + \mathbf{I}_{K}\right.\\ &\left. \quad - \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right) \mathbf{W}^{-1} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)^{H} \right)^{-1}\\ &\quad\times \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right) \mathbf{W}^{-1}\\ &= \mathbf{W}^{-1} + \mathbf{W}^{-1} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)^{H}\\ &\quad\times \left(\mathbf{R}_{n_{1}}^{\frac{H}{2}}\mathbf{F}^{H}\text{tr}\left(\mathbf{\Omega}_{2}\mathbf{R}_{n_{2}}^{-1}\right) \mathbf{\Theta}_{2}\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}} + \mathbf{I}_{K} \right.\\ &\left.\quad - \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right) \mathbf{W}^{-1} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right)^{H} \right)^{-1}\\ &\quad\times \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}\right) \mathbf{W}^{-1}\\ & \triangleq \mathbf{M}_{L}. \end{aligned} $$
As a result, the upper bound \( \mathcal {\dot {I}}_{erg}(\mathbf {B},\mathbf {F}) \) of \( \mathrm {E_{H_{2}}}\left (\mathcal {I}(\mathbf {B},\mathbf {F})\right) \) is found as
$$\begin{aligned} \mathrm{E_{H_{2}}}\left(\mathcal{I}(\mathbf{B},\mathbf{F})\right) &\leq -\frac{1}{2}\log_{2}\left(\mathrm{E_{H_{2}}}(\mathbf{M})\right)\\ &\leq -\frac{1}{2}\log_{2}(\mathbf{M}_{L}) \triangleq \mathcal{\dot{I}}_{erg}(\mathbf{B},\mathbf{F}). \end{aligned} $$
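The expectation identity from [33] used in (46) can be verified with a quick Monte Carlo experiment. In the Python/NumPy sketch below (illustrative only), H is drawn from the Kronecker model with real symmetric Θ and Ω, so the transpose in the identity is immaterial:

```python
# Monte Carlo check of E_H(H^H A H) = tr(Omega A) Theta^T for
# H ~ CN(0, Theta (x) Omega), generated as H = Omega^{1/2} H_w Theta^{1/2}.
import numpy as np

def sqrtm_psd(R):
    w, U = np.linalg.eigh(R)
    return (U * np.sqrt(np.maximum(w, 0.0))) @ U.conj().T

rng = np.random.default_rng(2)
N = K = 4
r = 0.5
Theta = r ** np.abs(np.subtract.outer(np.arange(K), np.arange(K)))   # transmit corr.
Omega = r ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))   # receive corr.
A = np.diag(rng.uniform(0.5, 2.0, size=N))                           # any weighting

acc = np.zeros((K, K), dtype=complex)
trials = 20000
for _ in range(trials):
    Hw = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
    H = sqrtm_psd(Omega) @ Hw @ sqrtm_psd(Theta)
    acc += H.conj().T @ A @ H
empirical = acc / trials
theory = np.trace(Omega @ A) * Theta.T
print("max deviation:", float(np.abs(empirical - theory).max()))     # small for many trials
```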
The design problem now is relaxed to
$${} {{\begin{aligned} \max_{\mathbf{B},\mathbf{F}} & \quad \mathcal{\dot{I}}_{erg}(\mathbf{B},\mathbf{F}) \\ \mathrm{s.t.} & \quad \text{tr}(\mathbf{B} \mathbf{R}_{x} \mathbf{B}^{H}) \leq p_{1}, \\ & \quad \text{tr}\left(\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}} \left(\mathbf{R}_{n_{1}}^{-\frac{1}{2}} \mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x} \mathbf{B}^{H} \mathbf{H}_{1}^{H} \mathbf{R}_{n_{1}}^{-\frac{H}{2}} + \mathbf{I}_{K}\right) \mathbf{R}_{n_{1}}^{\frac{H}{2}}\mathbf{F}^{H}\right) \leq p_{2}. \end{aligned}}} $$
Let us define the following SVDs:
$$ \mathbf{R}_{n_{1}}^{-\frac{1}{2}} \mathbf{H}_{1} = \mathbf{U}_{1} \mathbf{\Lambda}_{1}^{\frac{1}{2}} \mathbf{V}_{1}^{H}, $$
$$ \text{tr}\left(\mathbf{\Omega}_{2}\mathbf{R}_{n_{2}}^{-1}\right)^{\frac{1}{2}}\mathbf{\Theta}_{2}^{\frac{1}{2}}= \mathbf{U}_{\theta} \mathbf{\Lambda}_{\theta}^{\frac{1}{2}} \mathbf{V}_{\theta}^{H}, $$
where U 1, U θ , V 1, and V θ are unitary matrices, and Λ 1 and Λ θ are diagonal matrices of non-negative eigenvalues in descending order. In order to achieve a maximum of \( \mathcal {\dot {I}}_{erg}(\mathbf {B},\mathbf {F}) \) in Problem (47), B and F should be of the forms:
$$ \mathbf{B} = \mathbf{V}_{1}\, [\!\mathbf{\Lambda}_{b}^{(p)}]^{\frac{1}{2}} \mathbf{R}_{x}^{-\frac{1}{2}}, $$
$$ \mathbf{F} = \mathbf{V}_{\theta}\,[\!\mathbf{\Lambda}_{f}^{(p)}]^{\frac{1}{2}} \mathbf{U}_{1}^{H} \mathbf{R}_{n_{1}}^{-\frac{1}{2}}, $$
where \( \mathbf {\Lambda }_{b}^{(p)} \) and \( \mathbf {\Lambda }_{f}^{(p)} \) are M×M and K×K diagonal matrices of non-negative entries with up to L positive elements.
To verify this, let us define
$$\begin{array}{*{20}l} \tilde{\mathbf{H}}_{1} &= \mathbf{R}_{n_{1}}^{-\frac{1}{2}}\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x}^{\frac{1}{2}}, \end{array} $$
$$\begin{array}{*{20}l} \mathbf{H}_{\theta} &=\text{tr}(\mathbf{\Omega}_{2}\mathbf{R}_{n_{2}}^{-1})^{\frac{1}{2}}\mathbf{\Theta}_{2}^{\frac{1}{2}}\mathbf{F} \mathbf{R}_{n_{1}}^{\frac{1}{2}}. \end{array} $$
Again, by the matrix inversion lemma, M L in (46) is simplified to
$$ \mathbf{M}_{L} = \mathbf{I}_{M} - \tilde{\mathbf{H}}_{1}^{H} \mathbf{H}_{\theta}^{H} (\mathbf{H}_{\theta} (\tilde{\mathbf{H}}_{1}\tilde{\mathbf{H}}_{1}^{H}+\mathbf{I}_{K}) \mathbf{H}_{\theta}^{H} +\mathbf{I}_{N})^{-1} \mathbf{H}_{\theta} \tilde{\mathbf{H}}_{1}. $$
Clearly, M L has the same form as M in (14). Therefore, the proof of the optimal source and relay precoder structures with the full CSIT in (9) and (10) presented in Section 3 can be reused to derive those with the partial CSIT in (50) and (51). □
In [5, 6], ROP schemes were obtained using the partial CSIT for two-hop relay systems with only transmit-side spatially correlated channels, white noises, and independent source symbols. These schemes are actually included in our proposed joint precoding with the partial CSIT as special cases. In fact, by substituting R x =I M , \( \mathbf {R}_{n_{1}} =\sigma _{n_{1}}^{2} \mathbf {I}_{K} \), \( \mathbf {R}_{n_{2}} =\sigma _{n_{2}}^{2} \mathbf {I}_{N} \), and Ω 2=I N into (50) and (51), the source and relay precoding reduce to the ROP in [5, 6]. Since the relay eigen-beamformer directions V θ do not match the relay-destination subchannel directions V 2, the obtained partial-CSIT precoders are clearly suboptimal compared to the full-CSIT precoders. Therefore, the system capacity enhancement relies heavily on how the power is allocated across the source and relay antennas. This task is equivalent to solving the following problem of optimizing \( \mathbf {\Lambda }_{b}^{(p)} \) and \( \mathbf {\Lambda }_{f}^{(p)} \):
$${} {{\begin{aligned} \max_{\mathbf{b}^{(p)},\mathbf{f}^{(p)}} & \quad \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\Big(1 + \frac{\gamma \lambda_{\theta,l} {f}_{l}^{(p)} \lambda_{1,l} {b}_{l}^{(p)}}{1 + \gamma \lambda_{\theta,l} {f}_{l}^{(p)}} \Big)}\\ \mathrm{s.t.} & \quad \left\{ \begin{array}{l} \sum\limits_{l=1}^{L}{ {b}_{l}^{(p)}} \leq p_{1} \text{\; and \;} \sum\limits_{l=1}^{L}{\left(\lambda_{1,l} {b}_{l}^{(p)} + 1\right){f}_{l}^{(p)}} \leq p_{2}, \\ {b}_{l}^{(p)} \geq 0 \text{\; and \;} (\lambda_{1,l} {b}_{l}^{(p)} + 1){f}_{l}^{(p)} \geq 0, \quad l = 1,\ldots,L, \end{array} \right. \end{aligned}}} $$
where \(\gamma \triangleq \text {tr}\left (\mathbf {\Omega }_{2}\mathbf {R}_{n_{2}}^{-1}\right)^{\frac {1}{2}}, \mathbf {b}^{(p)} \triangleq \left (b_{1}^{(p)}, \ldots, b_{L}^{(p)}\right)^{T} \) is the diagonal vector of \( \mathbf {\Lambda }_{b}^{(p)},\) and \( \mathbf {f}^{(p)} \triangleq \left (f_{1}^{(p)}, \ldots, f_{L}^{(p)}\right)^{T} \) is the diagonal vector of \( \mathbf {\Lambda }_{f}^{(p)},\)
Let \( \mathbf {v}^{(p)} \triangleq \left (v_{1}^{(p)}, \ldots, v_{L}^{(p)}\right)^{T} \) and \( v_{l}^{(p)} = \left (\lambda _{1,l} {b}_{l}^{(p)} + 1\right){f}_{l}^{(p)} \). The optimization problem (55) now can be rewritten as:
$${} {{\begin{aligned} \max_{\mathbf{b}^{(p)},\mathbf{v}^{(p)} \geq 0} & \quad \mathcal{\dot{I}}_{erg}\left(\mathbf{b}^{(p)},\mathbf{v}^{(p)}\right) = \frac{1}{2} \sum_{l=1}^{L}{\log_{2}}\\ &\quad\times{\left(\frac{1 + \lambda_{1,l} {b}_{l}^{(p)} + \lambda_{1,l} {b}_{l}^{(p)} \gamma \lambda_{\theta,l} {v}_{l}^{(p)} + \gamma \lambda_{\theta,l} {v}_{l}^{(p)} }{1 + \lambda_{1,l} {b}_{l}^{(p)} + \gamma \lambda_{\theta,l} {v}_{l}^{(p)}} \right)}\\ \mathrm{s.t.} & \quad \sum_{l=1}^{L}{{b}_{l}^{(p)}} \leq p_{1} \text{\; and \;} \sum_{l=1}^{L}{{v}_{l}^{(p)}} \leq p_{2}. \end{aligned}}} $$
Once \( v_{l}^{(p)} \) is found, \( f_{l}^{(p)} \) is straightforward to calculate as \( {f}_{l}^{(p)} = v_{l}^{(p)}/(\lambda _{1,l} {b}_{l}^{(p)} + 1) \). Directly solving Problem (56) is intractable due to its non-concavity in b (p) and v (p). Therefore, we deal with this problem by an iterative algorithm in Section 4.2 and by a simplified algorithm in Section 4.3.
Iterative power allocation algorithm
Interestingly, b (p) and v (p) enter (56) symmetrically. Therefore, if either b (p) or v (p) is kept fixed, Problem (56) becomes a standard concave optimization problem. Specifically, for a given b (p), it collapses to the problem of optimizing v (p) given by
$${} {{\begin{aligned} \max_{\mathbf{v}^{(p)} \geq 0} & \quad \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(\frac{1 + \lambda_{1,l} {b}_{l}^{(p)} + \lambda_{1,l} {b}_{l}^{(p)} \gamma \lambda_{\theta,l} {v}_{l}^{(p)} + \gamma \lambda_{\theta,l} {v}_{l}^{(p)} }{1 + \lambda_{1,l} {b}_{l}^{(p)} + \gamma \lambda_{\theta,l} {v}_{l}^{(p)}} \right)} \end{aligned}}} $$
$${} {{\begin{aligned} \mathrm{s.t.} & \quad \sum_{l=1}^{L}{{v}_{l}^{(p)}} \leq p_{2}. \end{aligned}}} $$
For the obtained v (p), it reduces to the problem of optimizing b (p) given by
$${} {{\begin{aligned} \max_{\mathbf{b}^{(p)} \geq 0} & \quad \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(\frac{1 + \lambda_{1,l} {b}_{l}^{(p)} + \lambda_{1,l} {b}_{l}^{(p)} \gamma \lambda_{\theta,l} {v}_{l}^{(p)} + \gamma \lambda_{\theta,l} {v}_{l}^{(p)} }{1 + \lambda_{1,l} {b}_{l}^{(p)} + \gamma \lambda_{\theta,l} {v}_{l}^{(p)}} \right)} \end{aligned}}} $$
$${} {{\begin{aligned} \mathrm{s.t.} & \quad \sum_{l=1}^{L}{{b}_{l}^{(p)}} \leq p_{1}. \end{aligned}}} $$
Similar to Section 3.2, an iterative algorithm that finds b (p) and v (p) alternately is also developed, as summarized in Table 2.
Table 2 An iterative algorithm to derive b (p) and v (p)
In each iteration of this algorithm, F and B coordinate to search for as many optimal patterns of pairing \( \lambda _{1,l} {b}_{l}^{(p)} \) with \( \gamma \lambda _{\theta,l} {v}_{l}^{(p)} \) as possible. In this way, the interference due to the mismatch between the relay beamformer V θ and the eigenvectors V 2 of the relay-destination link decreases, and the system capacity increases after each iteration. As a result, the overall system capacity increases when the algorithm terminates. The complexity of the iterative precoder design with the partial CSIT is nearly the same as that of the iterative precoder design with the full CSIT, with a total of \( 2 \times \mathcal {O}(L^{3}) + \mathcal {N} \times (2 \times \mathcal {O}(L)+ 2 \times \mathcal {O}(L!))\) operations. In the design process, besides R x , \(\mathbf {R}_{n_{1}}\), \(\mathbf {R}_{n_{2}}\), and H 1, the source and the relay only need the covariance information Θ 2 and Ω 2 of H 2. Since these covariance matrices change much more slowly than the channel realization H 2, a large amount of signaling overhead and design complexity is saved compared to the iterative design with the full CSIT. These benefits broaden the applicability of the iterative design with the partial CSIT in practical communications systems, especially when it is realized by the codebook and limited-feedback techniques [39] discussed in Section 3.2.
Simplified power allocation algorithm
In this section, to reduce the computational complexity of the iterative algorithm in Section 4.2, we develop a simplified counterpart. By inequality (35), an upper bound of the objective function of Problem (56) can be derived as
$$ \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(1 + \lambda_{1,l} {b}_{l}^{(p)}\right)} + \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(1 + \gamma \lambda_{\theta,l} {v}_{l}^{(p)}\right)}. $$
In (56), let us replace the objective function with its upper bound. As a result, Problem (56) can be decoupled into two concave optimization problems given by:
$$ \max_{\mathbf{v}^{(p)} \geq 0} \; \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(1 + \gamma \lambda_{\theta,l} {v}_{l}^{(p)}\right)} \quad \mathrm{s.t.} \; \sum_{l=1}^{L}{{v}_{l}^{(p)}} \leq p_{2}, $$
$$ \max_{\mathbf{b}^{(p)} \geq 0} \; \frac{1}{2} \sum_{l=1}^{L}{\log_{2}\left(1 + \lambda_{1,l} {b}_{l}^{(p)}\right)} \quad \mathrm{s.t.} \; \sum_{l=1}^{L}{{b}_{l}^{(p)}} \leq p_{1}. $$
Again, the optimal water-filling solutions v (p) and b (p) to the respective problems (62) and (63) are given by [20]:
$$\begin{array}{*{20}l} {v}_{l}^{(p)} = \left[\mu_{v} - \frac{1}{\gamma \lambda_{\theta,l}}\right]^{+},\quad \sum_{l=1}^{L}{\left[\mu_{v} - \frac{1}{\gamma \lambda_{\theta,l}}\right]^{+}} = p_{2},\end{array} $$
$$\begin{array}{*{20}l} {b}_{l}^{(p)} = \left[\mu_{b} - \frac{1}{\lambda_{1,l}}\right]^{+},\quad \sum_{l=1}^{L}{\left[\mu_{b} - \frac{1}{\lambda_{1,l}}\right]^{+}} = p_{1}. \end{array} $$
It is obvious from (64) and (65) that F loads more power onto the weaker eigenmodes γ λ θ,l of the transmit covariance matrix of the relay-destination link and less power onto the stronger ones, while B loads more power onto the weaker eigenmodes λ 1,l of the source-relay link and less power onto the stronger ones. The relay needs R x , \(\mathbf {R}_{n_{1}}\), \(\mathbf {R}_{n_{2}}\), H 1, Θ 2, and Ω 2, while the source only needs R x , \(\mathbf {R}_{n_{1}}\), and H 1. With each such set of partial CSIT, the relay computes v (p) and feeds it back to the source, and then the source calculates b (p) with the received v (p). Obviously, the simplified design with the partial CSIT requires less signaling overhead than both the iterative design with the partial CSIT and the simplified design with the full CSIT. Besides, its computational complexity is the same as that of the simplified design with the full CSIT and much lower than that of the iterative design with the partial CSIT. Despite its simplicity, the simplified design with the partial CSIT works well, especially at high SNRs. This is mainly because in inequality (35), when x,y→∞, then 1+x+y≪(1+x)(1+y), or equivalently, when b (p),v (p)→∞ (i.e., \( p_{1}/\sigma _{1}^{2}, p_{2}/\sigma _{2}^{2} \rightarrow \infty \)), \( \mathcal {\dot {I}}_{erg}(\mathbf {b}^{(p)},\mathbf {v}^{(p)}) \) approaches its upper bound.
Theorem 2 below summarizes the main results on jointly designing source and relay precoders with the partial CSIT.
Theorem 2
The average mutual information \(\mathrm {E_{H_{2}}}\big (\mathcal {I}(\mathbf {B},\mathbf {F})\big) \) achieves its maximum under the power constraints tr(B R x B H)≤p 1 and \( \text {tr}\left (\mathbf {F} (\mathbf {H}_{1} \mathbf {B} \mathbf {R}_{x} \mathbf {B}^{H} \mathbf {H}_{1}^{H} + \mathbf {R}_{n_{1}}) \mathbf {F}^{H}\right) \leq p_{2} \) when B and F are suboptimally constructed as \( \mathbf {B} = \mathbf {V}_{1} \left [\mathbf {\Lambda }_{b}^{(p)}\right ]^{\frac {1}{2}} \mathbf {R}_{x}^{-\frac {1}{2}} \) and \( \mathbf {F} = \mathbf {V}_{\theta } \left [\mathbf {\Lambda }_{f}^{(p)}\right ]^{\frac {1}{2}} \mathbf {U}_{1}^{H} \mathbf {R}_{n_{1}}^{-\frac {1}{2}} \). Here, U 1, U θ , V 1, and V θ are unitary matrices of \( \mathbf {R}_{n_{1}}^{-\frac {1}{2}} \mathbf {H}_{1} = \mathbf {U}_{1} \mathbf {\Lambda }_{1}^{\frac {1}{2}} \mathbf {V}_{1}^{H} \) and \(\gamma ^{\frac {1}{2}}\mathbf {\Theta }_{2}^{\frac {1}{2}}= \mathbf {U}_{\theta } \mathbf {\Lambda }_{\theta }^{\frac {1}{2}} \mathbf {V}_{\theta }^{H}\), and \( \mathbf {\Lambda }_{b}^{(p)} \) and \( \mathbf {\Lambda }_{f}^{(p)} \) are diagonal matrices of non-negative entries which can be determined alternately by the iterative algorithm (Section 4.2) or separately by the simplified algorithm (Section 4.3).
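Analogously to the full-CSIT case, the suboptimal structures of Theorem 2 can be assembled as in the Python/NumPy sketch below (channels, covariances, and loadings are toy assumptions). Note that the scalar factor γ only rescales the singular values and does not change V θ .

```python
# Illustrative assembly of the partial-CSIT precoder structures of Theorem 2.
import numpy as np

def inv_sqrtm(R):
    w, U = np.linalg.eigh(R)
    return (U / np.sqrt(w)) @ U.conj().T

def sqrtm_psd(R):
    w, U = np.linalg.eigh(R)
    return (U * np.sqrt(np.maximum(w, 0.0))) @ U.conj().T

rng = np.random.default_rng(3)
M = K = N = 4
H1 = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
Rx, Rn1, Rn2 = np.eye(M), np.eye(K), np.eye(N)
r_t = 0.95
Theta2 = r_t ** np.abs(np.subtract.outer(np.arange(K), np.arange(K)))   # transmit corr. of H2
Omega2 = np.eye(N)

gamma = np.real(np.trace(Omega2 @ np.linalg.inv(Rn2)))          # tr(Omega_2 R_{n2}^{-1})
U1, _, V1h = np.linalg.svd(inv_sqrtm(Rn1) @ H1)
_, _, Vth = np.linalg.svd(np.sqrt(gamma) * sqrtm_psd(Theta2))   # U_theta, L_theta, V_theta

lam_b = np.array([2.0, 1.0, 0.5, 0.0])                          # assumed loadings
lam_f = np.array([1.5, 0.8, 0.2, 0.0])

B = V1h.conj().T @ np.diag(np.sqrt(lam_b)) @ inv_sqrtm(Rx)                   # Theorem 2
F = Vth.conj().T @ np.diag(np.sqrt(lam_f)) @ U1.conj().T @ inv_sqrtm(Rn1)    # Theorem 2
print("tr(B Rx B^H) =", float(np.real(np.trace(B @ Rx @ B.conj().T))))
```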
Simulation results
In this section, the proposed iterative and simplified precoding designs with the full CSIT and with the partial CSIT are evaluated in terms of system capacity by numerical simulations. The considered relay system has the transmit and receive covariance matrices Θ i and Ω i , i={1,2} in the corresponding Toeplitz forms [6, 12] as
$$ \mathbf{\Theta}_{i} (n,m) = r_{t}^{|n-m|},\; n,m = \{1,\ldots,M\}, $$
$$ \mathbf{\Omega}_{i} (n,m) = r_{r}^{|n-m|},\; n,m = \{1,\ldots,K\}, $$
where the correlation coefficients r t and r r satisfy r t ,r r ∈(0,1]. Like [2, 5, 6], the source power p 1 includes the source-relay path-loss, and the relay power p 2 includes the relay-destination path-loss. \( \mathrm {SNR_{1}} \triangleq p_{1}/\sigma _{1}^{2} \) is defined as the first-hop SNR, and \( \mathrm {SNR_{2}} \triangleq p_{2}/\sigma _{2}^{2} \) as the second-hop SNR. The source, relay, and destination nodes all have four antennas (M=K=N=4). The correlation matrices of the source signals and of the relay and destination noises are chosen as
$$\mathbf{\Psi}_{x} = \left[ \begin{array}{cccc} 1.0000 & -0.5715 & 0.7742 & -0.6025\\ -0.5715 & 1.0000 & -0.6905 & 0.5062\\ 0.7742 & -0.6905 & 1.0000 & -0.6387\\ -0.6025 & 0.5062 & -0.6387 & 1.0000\\ \end{array} \right], $$
$$\mathbf{\Psi}_{n_{1}} = \left[ \begin{array}{cccc} 1.0000 & 0.2637 & -0.1939 & 0.3348\\ 0.2637 & 1.0000 & -0.1141 & 0.1194\\ -0.1939 & -0.1141 & 1.0000 & 0.2085\\ 0.3348 & 0.1194 & 0.2085 & 1.0000\\ \end{array} \right], $$
$$\mathbf{\Psi}_{n_{2}} = \left[ \begin{array}{cccc} 1.0000 & 0.6709 & -0.3419 & 0.5644\\ 0.6709 & 1.0000 & -0.6666 & 0.5062\\ -0.3419 & -0.6666 & 1.0000 & -0.4288\\ 0.5644 & 0.5062 & -0.4288 & 1.0000\\ \end{array} \right]. $$
The noise correlation matrices are chosen randomly such that the noises are sufficiently colored but do not completely dominate the other channel effects. The first condition ensures that the noises have a "colored" effect on the system, while the second maintains the practical meaning of the wireless model. The matrix \( \mathbf {\Psi }_{n_{1}} \) has the eigenvalue vector [1.5000, 1.2000, 0.8000, 0.5000], \( \mathbf {\Psi }_{n_{2}} \) has the eigenvalue vector [2.6000, 0.7000, 0.5000, 0.2000], and Ψ x has the eigenvalue vector [2.9000, 0.5000, 0.4000, 0.2000]. The Matlab command used to generate \( \mathbf {\Psi }_{n_{1}} \) randomly is gallery('randcorr', [1.5000, 1.2000, 0.8000, 0.5000]) (see footnote 1). As in previous references [28–31, 45–47], the colored noise vector \( \mathbf {n}_{1} \sim \mathcal {CN}(\mathbf {0},\sigma ^{2}_{1}\mathbf {\Psi }_{n_{1}})\) is generated as \( \mathbf {n}_{1} = \mathbf {\Psi }_{n_{1}}^{\frac {1}{2}} \mathbf {n}_{w,1},\) where \( \mathbf {n}_{w,1} \sim \mathcal {CN}(\mathbf {0},\sigma ^{2}_{1}\mathbf {I}_{K})\) is a white noise vector. The matrix \( \mathbf {\Psi }_{n_{1}}^{\frac {1}{2}} \) acts as the digital coloring filter, and it exists uniquely since \( \mathbf {\Psi }_{n_{1}} \) is positive semidefinite [19, 40]. Ψ x , x and \( \mathbf {\Psi }_{n_{2}}, \mathbf {n}_{2} \) are generated in the same way as \( \mathbf {\Psi }_{n_{1}} \) and n 1. Note that in [21–25], the covariance matrix of the relay colored noise is generated via a first-order autoregressive filter as \(\mathbf {R}_{n_{1}}(i,j) = \alpha _{1} r_{1} \eta _{1}^{|i-j|},\) where r 1 is a normalization factor that keeps \(\text {tr}(\mathbf {R}_{n_{1}}) = \alpha _{1} K \) and α 1 denotes the interference power from the neighboring interferers. Compared with our colored-noise simulation method, α 1 plays the role of \( \sigma ^{2}_{1} \), while \( r_{1} \eta _{1}^{|i-j|} \) plays the role of \( \mathbf {\Psi }_{n_{1}}(i,j).\)
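A minimal Python/NumPy sketch of these simulation ingredients is given below: the exponential Toeplitz correlation model for Θ i and Ω i , and colored-noise generation by filtering white noise with the matrix square root of a correlation matrix. Only the correlation structure and the listed Ψ n1 come from the text; the remaining values are illustrative.

```python
# Sketch of the simulation setup: Toeplitz spatial correlation and colored noise.
import numpy as np

def exp_toeplitz(r, n):
    """Correlation matrix with (i, j) entry r^{|i-j|}, r in (0, 1]."""
    idx = np.arange(n)
    return r ** np.abs(np.subtract.outer(idx, idx))

def sqrtm_psd(R):
    """Unique positive semidefinite matrix square root."""
    w, U = np.linalg.eigh(R)
    return (U * np.sqrt(np.maximum(w, 0.0))) @ U.conj().T

Theta1 = exp_toeplitz(0.5, 4)     # e.g., transmit correlation with r_t = 0.5
Omega1 = exp_toeplitz(0.5, 4)     # e.g., receive correlation with r_r = 0.5

Psi_n1 = np.array([[ 1.0000,  0.2637, -0.1939,  0.3348],
                   [ 0.2637,  1.0000, -0.1141,  0.1194],
                   [-0.1939, -0.1141,  1.0000,  0.2085],
                   [ 0.3348,  0.1194,  0.2085,  1.0000]])

rng = np.random.default_rng(4)
sigma1 = 1.0
n_w = np.sqrt(sigma1**2 / 2) * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
n_1 = sqrtm_psd(Psi_n1) @ n_w     # colored noise with covariance sigma1^2 Psi_n1
print(np.round(n_1, 3))
```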
Figures 3 and 4 show the system capacity performance of the proposed iterative and simplified precoding designs with the full CSIT in Section 3. As pointed out in Section 3.3, the precoding in [30, 31] yields the same design result as the simplified precoding, so its results are not shown in this section. The iterative precoding in [4], the ROP in [28, 29], and the naive amplify-and-forward (NAF) scheme [4] with \( \mathbf {B} = \sqrt {p_{1}/\text {tr}(\mathbf {R}_{x})}\mathbf {I}_{M} \) and
$$\mathbf{F} = \sqrt{\frac{p_{2}}{\text{tr}\left(\mathbf{H}_{1} \mathbf{B} \mathbf{R}_{x} \mathbf{B}^{H} \mathbf{H}_{1}^{H} + \mathbf{R}_{n_{1}}\right)}} \mathbf{I}_{K} $$
are chosen as benchmarks. Note that all the considered schemes are based on the full CSIT. As discussed in Section 3.1, their precoder structures do not depend on the type of spatially correlated channel fading, and thus the spatial correlations r t =r r =0.5 are chosen for all the involved channels.
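For reference, the NAF baseline precoders can be formed as in the following short Python/NumPy sketch (toy channel and covariances assumed):

```python
# Sketch of the NAF baseline: identity-shaped precoders scaled to the budgets.
import numpy as np

rng = np.random.default_rng(5)
M = K = 4
p1 = p2 = 10.0
H1 = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
Rx, Rn1 = np.eye(M), np.eye(K)

B = np.sqrt(p1 / np.trace(Rx)) * np.eye(M)
relay_in = H1 @ B @ Rx @ B.conj().T @ H1.conj().T + Rn1
F = np.sqrt(p2 / np.real(np.trace(relay_in))) * np.eye(K)
relay_pow = np.real(np.trace(F @ relay_in @ F.conj().T))
print("relay transmit power =", round(float(relay_pow), 3), "(equals p2)")
```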
Fig. 3 A performance comparison of the precoding schemes with the full CSIT for SNR 2=20dB and r t =r r =0.5
Fig. 4 A performance comparison of the precoding schemes with the full CSIT for SNR 1=SNR 2=SNR and r t =r r =0.5
All the examined precoding schemes provide substantial capacity gains over the NAF scheme. Although the ROP in [28, 29] uses the relay precoder alone, it performs better than the iterative precoding in [4], which employs both the source and relay precoders. This is mainly because knowledge of the signal, relay, and destination noise correlation matrices is not taken into account in designing the iterative precoding in [4]. The proposed iterative precoding delivers the largest gain, while the proposed simplified precoding stays close to it over nearly the whole SNR 1 range. There are clear performance gaps between our proposed designs and the other two precoding schemes. To clarify these gaps, the equal-power precoding with the full CSIT, which has the same optimal structures as the proposed full-CSIT designs but allocates power equally across the source antennas and across the relay antennas, is used as a reference. Results reveal that it offers higher capacity than the ROP in [28, 29] and the iterative precoding in [4] at nearly all SNRs shown (specifically, at SNR1 values of 5 dB and higher in Fig. 3 and at SNR values of 10 dB and higher in Fig. 4), and even provides capacity identical to the proposed simplified precoding at high SNRs. This implies that the optimality of the precoder structures alone contributes a significant portion of the capacity enhancement. A well-designed water-filling power allocation adds further capacity, especially in the low-SNR regime.
Besides the signal, relay, and destination noise correlation matrices, the performance of the precoder structures and the power allocation schemes depends on the available CSIT types and on the quality of the constituent channels: SNR1, SNR2, and the spatial correlations. This is shown in Figs. 5, 6, 7, and 8, which present performance comparisons among all the proposed precoding designs with the full CSIT as well as with the partial CSIT. As mentioned in Section 4.1, the ROP schemes in [5, 6], designed for two-hop relay systems with transmit-side spatially correlated channels, white noises, and independent symbols, are included in the proposed partial-CSIT precoding designs as special cases; hence, they provide no additional information as performance references and their behaviours are not shown in Figs. 5, 6, 7, and 8. Figure 5 shows the capacity of systems with SNR1=SNR2=SNR and r t =r r =0.3, and Fig. 6 shows the capacity of systems with SNR1=SNR2=SNR and r t =0.95,r r =0.3. The correlation coefficient r t =0.95 represents a strong correlation among the transmit antennas since the corresponding correlation matrix has the eigenvalues [3.7568, 0.1627, 0.0506, 0.0300] with the very large condition number 125.2267. It can be observed that when r t increases from 0.3 to 0.95, the capacity gains of the proposed precoding designs over the NAF scheme increase, and the capacity gaps between the full-CSIT-based designs and the partial-CSIT-based designs decrease. The partial-CSIT-based designs provide higher capacity than the equal-power precoding at low-to-medium SNRs, and these higher-capacity regions are further enlarged as the transmit correlation increases.
Fig. 5 A performance comparison of the proposed precoding designs for SNR 1=SNR 2=SNR, r t =r r =0.3
Fig. 6 A performance comparison of the proposed precoding designs for SNR 1=SNR 2=SNR, r t =0.95,r r =0.3
Fig. 7 A performance comparison of the proposed precoding designs for SNR 2=12dB, r t =0.95,r r =0.3
How the schemes based on the partial CSIT behave for r t =0.95,r r =0.3 is revealed more clearly in Figs. 7 and 8. Figure 7 plots capacity as a function of SNR 1 with SNR 2=12dB, and Fig. 8 plots capacity as a function of SNR 2 with SNR 1=12dB. The results in these figures reveal even larger capacity gains of the precoding techniques over the NAF technique. Among the considered techniques, the iterative precoding based on the full CSIT still provides the best capacity gain. Besides, the capacity gaps among all the precoding schemes also increase, especially for the ones based on the full CSIT. Figure 7 indicates that when SNR2=12dB, both of our designs with the partial CSIT outperform the equal-power precoding, and more interestingly, the iterative design with the partial CSIT yields higher capacity than the simplified design with the full CSIT over the low-to-medium SNR1 range, up to 15 dB. Figure 8 indicates that when SNR1=12dB, the iterative design with the partial CSIT gives the same performance as the simplified design with the full CSIT at low-to-medium SNR2 levels but higher performance at higher SNR2 levels. Meanwhile, the simplified design with the partial CSIT not only extends the capacity gaps over the equal-power precoding but also brings its capacity curve closer to that of the simplified design with the full CSIT, especially at low and high SNR2 levels.
These observations verify the effectiveness of the water-filling-type power allocation strategies of the partial-CSIT-based designs, especially when the involved channels are in medium-SNR environments and are strongly affected by spatial correlation at the transmit sides, regardless of the mismatch between the relay eigen-beamformer directions (V θ ) and the relay-destination subchannel directions (V 2) mentioned in Section 4.1.
Conclusions
In summary, we developed iterative and simplified methods for jointly designing source and relay precoders with the full CSIT and with the partial CSIT for general correlated dual-hop MIMO relay systems without the direct link under the MMI criterion. These general systems have spatially correlated channels, mutually correlated source signals, and colored noises. We showed that the optimal source and relay precoders obtained with the full CSIT, together with the destination equalizer, decouple the equivalent end-to-end MIMO channel into orthogonal SISO subchannels. We also extended the simplified precoder design with the full CSIT to the multi-hop relay system case. Simulation results showed that the proposed joint precoder designs with the full CSIT provide higher capacity than the existing designs. Also, the proposed joint precoder designs with the partial CSIT work well, especially when the channels are strongly correlated at the transmit sides and at medium-to-high SNRs, while requiring much lower computational complexity and less feedback overhead. In future work, we will consider the relay system case where the distance between the source and the destination is short enough for them to communicate with each other directly, so that the direct link is taken into account, and we will propose joint source and relay precoding schemes that exploit the CSI of both the compound relaying link and the direct link to maximize the system capacity.
1 Given a K×1 vector x of non-negative elements summing to K, one can always create a K×K real symmetric positive semidefinite matrix, i.e., a correlation matrix A, as A=U H diag(x)U, where U is a suitably chosen unitary matrix. The resulting correlation matrix A has unit elements on its main diagonal and eigenvalues given by x [41].
References
X Tang, Y Hua, Optimal design of non-regenerative MIMO wireless relays. IEEE Trans. Wireless Commun.6(4), 1398–1407 (2007).
O Munoz, J Vidal, A Agustin, Linear transceiver design in nonregenerative relays with channel sate information. IEEE Trans. Signal Process.55(6), 2593–2604 (2007).
R Mo, YH Chew, Precoder design for non-regenerative MIMO relay systems. IEEE Trans. Wireless Commun.8(10), 5041–5049 (2009).
Z Fang, Y Hua, JC Koshy, in Fourth IEEE Workshop on Sensor Array Multi-channel Signal Processing. Joint source and relay optimization for a non-regenerative MIMO relay (Waltham, 2006), pp. 239–243.
C Jeong, H-M Kim, Precoder design of non-regenerative relays with covariance feedback. IEEE Commun. Lett.13(12), 920–922 (2009).
C Jeong, B Seo, SR Lee, H-M Kim, I-M Kim, Relay precoding for non-regenerative MIMO relay systems with partial CSI feedback. IEEE Trans. Wireless Commun.11(5), 1698–1711 (2012).
D Gesbert, M Shafi, D-S Shiu, PJ Smith, A Naguib, From theory to practice: an overview of MIMO space-time coded wireless systems. IEEE J. Selected Areas Commun.21(3), 281–302 (2003).
Y Rong, X Tang, Y Hua, A unified framework for optimizing linear nonregenerative multicarrier MIMO relay communication systems. IEEE Trans. Signal Process.57(12), 4837–4851 (2009).
D-H Kim, HM Kim, in 21st Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications. MMSE precoder design for a non-regenerative MIMO relay with covariance feedback, (2010), pp. 461–464. doi:10.1109/PIMRC.2010.5671893.
L Gopal, Y Rong, Z Zang, in The 17th Asia Pacific Conference on Communications. Joint MMSE transceiver design in non-regenerative MIMO relay systems with covariance feedback, (2011), pp. 290–294. doi:10.1109/APCC.2011.6152821.
L Gopal, Y Rong, Z Zang, in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring). MMSE based transceiver design for MIMO relay systems with mean and covariance feedback, (2013), pp. 1–5. doi:10.1109/VTCSpring.2013.6692635.
N Fawaz, K Zarifi, M Debbah, D Gesbert, Asymptotic capacity and optimal precoding in MIMO multi-hop relay networks. IEEE Trans. Inform. Theory. 57(4), 2050–2069 (2011).
C Xing, S Ma, Y-C Wu, Robust joint design of linear relay precoder and destination equalizer for dual-hop amplify-and-forward MIMO relay systems. IEEE Trans. Signal Process.58(4), 2273–2283 (2010).
B Zhang, Z He, K Niu, L Zhang, Robust linear beamforming for MIMO relay broadcast channel with limited feedback. IEEE Signal Process. Lett.17(2), 209–212 (2010).
W Xu, X Dong, W-S Lu, MIMO relaying broadcast channels with linear precoding and quantized channel state information feedback. IEEE Trans. Signal Process.58(10), 5233–5245 (2010).
Y Rong, Robust design for linear non-regenerative MIMO relays with imperfect channel state information. IEEE Trans. Signal Process.59(5), 2455–2460 (2011). doi:10.1109/TSP.2011.2113376.
Z Wang, W Chen, J Li, Efficient beamforming for MIMO relaying broadcast channel with imperfect channel estimation. IEEE Trans. Veh. Technol.61(1), 419–426 (2012).
Y Cai, RC de Lamare, L-L Yang, M Zhao, Robust MMSE precoding based on switched relaying and side information for multiuser MIMO relay systems. IEEE Trans. Veh. Technol.64(12), 5677–5687 (2015).
C Chien, Digital Radio Systems on A Chip: A System Approach (Kluwer Academic, 2001).
A Scaglione, GB Giannakis, S Barbarossa, Redundant filterbank precoders and equalizers. Part I: unification and optimal designs. IEEE Trans. Signal Process.47(7), 1988–2006 (1999).
M Biguesh, S Gazor, MH Shariat, Optimal training sequence for MIMO wireless systems in colored environments. IEEE Trans. Signal Process.57(8), 3144–3153 (2009).
Y Liu, TF Wong, WW Hager, Training signal design for estimation of correlated MIMO channels with colored interference. IEEE Trans. Signal Process.55(4), 1486–1497 (2007).
R Wang, M Tao, H Mehrpouyan, Y Hua, Channel estimation and optimal training design for correlated MIMO two-way relay systems in colored environment (2014). http://arxiv.org/abs/1407.5161.
R Wang, M Tao, H Mehrpouyan, Y Hua, Channel estimation and optimal training design for correlated MIMO two-way relay systems in colored environment. IEEE Trans. Wire. Commun.14(5), 2684–2699 (2015).
R Wang, H Mehrpouyan, M Tao, Y Hua, in IEEE Glob. Commun. Conf. Optimal training design and individual channel estimation for MIMO two-way relay systems in colored environment (Austin, 2014), pp. 3561–3566.
C Jeong, HM Kim, HK Song, IM Kim, Relay precoding for non-regenerative MIMO relay systems with partial CSI in the presence of interferers. IEEE Trans. Wireless Commun.11(4), 1521–1531 (2012). doi:10.1109/TWC.2012.020812.111246.
NN Tran, HD Tuan, HH Nguyen, Training signal and precoder designs for OFDM under colored noise. IEEE Trans. Vehic. Technol.57(6), 3911–3917 (2008).
NA Vinh, NN Tran, NH Phuong, Optimal precoding design for non-regenerative dual-hop correlated relaying MIMO. Electron. Lett.51(20), 1613–1615 (2015).
NA Vinh, NN Tran, NH Phuong, DL Khoa, in 2015 NAFOSTED Conference on Information and Computer Science. Optimally non-regenerative relaying for general dual-hop correlated MIMO channels (Hochiminh, 2015), pp. 300–304.
NN Tran, S Ci, in 2010 IEEE Global Telecommunications Conference (GLOBECOM 2010). Asymptotic capacity and precoding design for correlated multi-hop MIMO channels (Miami, 2010), pp. 1–5.
NN Tran, S Ci, HX Nguyen, CMI analysis and precoding designs for correlated multi-hop MIMO channels. EURASIP Journal on Wireless Communications and Networking, (127) (2015).
M Vu, A Paulraj, MIMO wireless linear precoding using CSIT to improve link performance. IEEE Signal Process. Mag.24(5), 86–105 (2007).
AK Gupta, DK Nagar, Matrix Variate Distributions (Chapman and Hall/CRC, USA, 1999).
D-S Shiu, GJ Foschini, MJ Gans, JM Kahn, Fading correlation and its effect on the capacity of multielement antenna systems. IEEE Trans. Commun.48(3), 502–513 (2000).
NN Tran, HH Nguyen, HD Tuan, DE Dodds, Training designs for amplify-and-forward relaying with spatially correlated antennas. IEEE Trans. Vehic. Technol.61:, 2864–2870 (2012).
S Sun, Y Jing, Channel training design in amplify-and-forward MIMO relay networks. IEEE Trans. Wire. Commun.10(10), 920–922 (2011).
J-S Sheu, J-K Lain, W-H Wang, On channel estimation of orthogonal frequency-division multiplexing amplify-and-forward cooperative relaying systems. IET Commun.7(4), 325–334 (2013).
SM Kay, Fundamentals of Statistical Signal Processing, Volume 1: Estimation Theory (Prentice Hall, New Jersey, 1993).
DJ Love, RW Heath, VKN Lau, D Gesbert, BD Rao, M Andrews, An overview of limited feedback in wireless communication systems. IEEE J. Selected Areas Commun.26(8), 1341–1365 (2008). doi:10.1109/JSAC.2008.081002.
KB Petersen, MS Pedersen, The Matrix Cookbook (Technical University of Denmark, 2012).
AW Marshall, I Olkin, BC Arnold, Inequalities: Theory of Majorization and Its Applications - Second Edition (Springer, New York, 2011).
S Boyd, L Vandenberghe, Convex Optimization (Cambridge University Press, New York, 2004).
E Jorswieck, H Boche, Majorization and Matrix-Monotone Functions in Wireless Communications (Hanover, MA: Now Publishers, 2007).
RA Horn, CR Johnson, Matrix Analysis (Cambridge University Press, New York, 1985).
NN Tran, HD Tuan, HH Nguyen, Training signal and precoder designs for OFDM under colored noise. IEEE Trans. Veh. Techno.57(6), 3911–3917 (2008).
NN Tran, HX Nguyen, Optimal SP training for spatially correlated MIMO channels under coloured noises. Electron. Lett.51:, 247–249 (2015).
G Panci, S Colonnese, P Campisi, G Scarano, Blind equalization for correlated input symbols: a bussgang approach. IEEE Trans. Signal Process.53(5), 1860–1869 (2005).
Faculty of Electronics and Telecommunications, University of Science, Vietnam National University, Ho Chi Minh City, Vietnam
Nguyen A. Vinh, Nguyen N. Tran & Nguyen H. Phuong
Correspondence to Nguyen N. Tran.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Vinh, N.A., Tran, N.N. & Phuong, N.H. Joint source and relay precoding for generally correlated MIMO with full and partial CSIT. J Wireless Com Network 2017, 43 (2017). https://doi.org/10.1186/s13638-017-0826-2
MIMO relaying
Precoding design
Full and partial CSI
Spatially correlated channel
Mutually correlated source signal | CommonCrawl |
\begin{document}
\title{The classification of $SU(2)^2$ biquotients of rank 3 Lie groups}
\author{Jason DeVito and Robert L. DeYeso III}
\date{}
\maketitle
\begin{abstract}
We classify all compact simply connected biquotients of the form $G/\!\!/ SU(2)^2$ for $G =SU(4), SO(7), Spin(7)$, or $G = \mathbf{G}_2\times SU(2)$. In particular, we show there are precisely $2$ inhomogeneous reduced biquotients in the first and last case, and $10$ in the middle cases.
\end{abstract}
\section{Introduction}\label{sec:intro}
Geometrically, a biquotient is any manifold diffeomorphic to the quotient of a Riemannian homogeneous space $G/H$ by a free isometric action of a subgroup $K\subseteq \operatorname{Iso}(G/H)$; the resulting quotient is denoted $K\backslash G/H$. Biquotients also have a purely Lie theoretic description: if $ U$ is a compact Lie group, then any $f=(f_1,f_2):U\rightarrow G\times G$ defines an action of $U$ on $G$ via $u\ast g= f_1(u) \, g\, f_2(u)^{-1}$. If this action is free, the resulting quotient $G/\!\!/ U$ is called a biquotient.
In general, biquotients are not even homotopy equivalent to homogeneous spaces. Nevertheless, one may often compute their geometry and topology, making them a prime source of examples.
If $G$ is a compact Lie group equipped with a bi-invariant metric, then $U$ acts isometrically, and thus $G/\!\!/ U$ inherits a metric. By O'Neill's formula \cite{On1}, the resulting metric on $G/\!\!/ U$ has non-negative sectional curvature. In addition, until the recent example of Grove, Verdiani and Ziller \cite{GVZ}, and independently Dearicott \cite{De}, all known examples of positively curved manifolds were constructed as biquotients \cite{AW, Ber,Es1, Baz1,Wa}. Further, almost all known examples of quasi-positively curved and almost positively curved manifolds are constructed as biquotients. See \cite{DDRW,D1,Ta1,KT,EK,Ke1,Ke2,PW2,W,Wi} for these examples.
Biquotients were first discovered by Gromoll and Meyer \cite{GrMe1} when they exhibited an exotic sphere as a biquotient.
In his Habilitation, Eschenburg \cite{Es2} classified all biquotients $G/\!\!/ U$ with $G$ compact simply connected of rank $2$. We partially extend his classification when $G$ has rank $3$. Using the well known classification of simple Lie groups together with the low dimensional exceptional isomorphisms, one easily sees that, up to cover, the complete list of rank $3$ semi-simple Lie groups is $SU(2)^3$, $\mathbf{G}_2\times SU(2)$, $SU(3)\times SU(2)$, $Sp(2)\times SU(2)$, $Sp(3)$, $SU(4)$, and $SO(7)$.
In his thesis, the first author \cite{D1} classified all compact simply connected biquotients of dimension at most $7$; these include all examples of the form $G_1\times SU(2)/SU(2)^2$ where $G_1$ has rank $2$, except $G_1 =\mathbf{G}_2$. In addition, the authors, together with Ruddy and Wesner \cite{DDRW}, have classified all biquotients of the form $Sp(3) /\!\!/ SU(2)^2$.
\begin{theorem}\label{main} For $U = SU(2)^2$, $G = SU(4), SO(7)$, or $\mathbf{G}_2\times SU(2)$, there are, respectively, precisely $2$, $10$ and $2$ reduced inhomogeneous biquotients of the form $G/\!\!/ U$. Table \ref{table:su2class} lists them all. Further, a biquotient action of $U$ on $Spin(7)$ is effectively free iff it is the lift of an effectively free biquotient action of $U$ on $SO(7)$.
\end{theorem}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
G & Left image & Right image\\
\hline
\hline
$SU(4)$ & $\text{diag}(A,A)$ & $\text{diag}(B,I_2)$ \\
\hline
$SU(4)$ & $\text{diag}(A,A)$ & $\text{diag}(\pi(B), 1)$ \\
\hline
\hline
\hline
$SO(7)$ & $\text{diag} (\pi(A), I_4)$ & $\text{diag}(\pi(B), i(A))$ \\
\hline
$SO(7)$ & $\text{diag}(\pi(A), I_4)$ & $\text{diag}(i(B), I_3)$ \\
\hline
$SO(7)$ & $\text{diag}(\pi(A), I_4)$ & $\text{diag}(\pi(B), \pi(B), 1)$ \\
\hline
$SO(7)$ & $\text{diag}(\pi(A), I_4)$ & $ \text{diag}(\pi(B), i(B))$ \\
\hline
$SO(7)$ & $\text{diag}(\pi(A), I_4)$ & $ \text{diag}(\pi(A,B), \pi(A))$ \\
\hline
$SO(7)$ & $\text{diag}(\pi(A), I_4)$ & $\text{diag}(\pi(A,B), \pi(B))$ \\
\hline
$SO(7)$ & $\text{diag}(\pi(A),I_4)$ & $ B_{max} $ \\
\hline
$SO(7)$ & $\text{diag}(\pi(A), i(B))$ & $ \text{diag}(\pi(A), \pi(A), 1)$ \\
\hline
$SO(7)$ & $\text{diag}(\pi(A), \pi(B), 1)$ & $\text{diag}(\pi(A), i(A))$\\
\hline
$SO(7)$ & $\text{diag}(A_{Ber},1 , 1)$ & $ \text{diag}(\pi(B), i(B))$ \\
\hline
\hline
\hline
$\mathbf{G}_2\times SU(2)$ & $(\pi(A,B), A)$ & $(I, A)$\\
\hline
$\mathbf{G}_2\times SU(2)$ & $(\pi(A,B),B)$ & $(I,B)$ \\
\hline
\end{tabular}
\caption{Biquotients of the form $G/\!\!/ SU(2)^2$ with $G$ rank $3$ and simple or $G = \mathbf{G}_2\times SU(2)$} \label{table:su2class}
\end{center}
\end{table}
In Table \ref{table:su2class}, $I_k$ denotes the $k\times k$ identity matrix, and $\pi$, depending on the number of arguments, denotes either of the canonical double covers $SU(2)\rightarrow SO(3)$ and $SU(2)^2\rightarrow SO(4)$. The notation $i(A)$ refers to the standard inclusion $i:SU(2)\rightarrow SO(4)$ obtained by identifying $\mathbb{C}^2$ with $\mathbb{R}^4$. The notation $B_{max}$ refers to the unique maximal $SO(3)\subseteq SO(7)$, and the notation $A_{Ber}$ refers to the maximal $SO(3)$ in $SO(5)$, whose quotient $B^7 = SO(5)/SO(3)$ is the positively curved Berger space \cite{Ber}. The term \textit{reduced} refers to the condition that $U$ not act transitively on any factor of $G$, and only applies in the case of $\mathbf{G}_2\times SU(2)$. Non-reduced biquotients of the form $\mathbf{G}_2\times SU(2)/\!\!/ SU(2)^2$ are all diffeomorphic to biquotients of the form $\mathbf{G}_2/\!\!/ SU(2)$, and are classified in \cite{KZ}.
It is not a priori clear, but follows from this classification that all biquotients of the form $SO(7)/\!\!/ SU(2)^2$ are simply connected.
The lift of an effectively free action on a connected smooth manifold to a connected cover is effectively free, but in general, the cover admits effectively free actions which do not induce effectively free actions on the base. Our main tool for understanding homomorphisms from $SU(2)^2$ into $Spin(7)$ is utilizing a concrete description, using the octonions and Clifford algebras, of the spin representation $Spin(7)\rightarrow SO(8)$.
The rest of the paper is organized as follows. Section 2 is devoted to background material on biquotient actions, representation theory, and the octonions. In Section 3, we prove Theorem \ref{main} in regard to $G=SO(7)$ and $Spin(7)$. In Section 4, we prove Theorem \ref{main} when $G = SU(4)$ or $\mathbf{G}_2\times SU(2)$.
We would like to thank the anonymous referee for suggesting an alternate approach which significantly simplified Section \ref{oct}.
\section{Background: Biquotients and Representation Theory}\label{sec:background}
In this section, we cover the necessary background for proving Theorem \ref{main}.
\subsection{Biquotients}
As mentioned in the introduction, given compact Lie groups $U$ and $G$ with $f=(f_1,f_2):U\rightarrow G\times G$ a homomorphism, $U$ naturally acts on $G$ via $u \ast g = f_1(u)\, g\, f_2(u)^{-1}$.
A simple criterion to determine when a biquotient action is effectively free is given by the following proposition.
\begin{proposition}\label{free} A biquotient action of $U$ on $G$ is effectively free if and only if for any $(u_1,u_2)\in f(U)$, if $u_1$ is conjugate to $u_2$ in $G$, then $u_1 = u_2\in Z(G)$.
\end{proposition}
Since every element of a Lie group $U$ is conjugate to an element in its maximal torus $T_U$, it follows that a biquotient action of $U$ on $G$ is effectively free if and only if the action is effectively free when restricted to $T_U$.
To begin classifying biquotient actions, we note that, as mentioned in \cite{DeV1}, replacing $(f_1, f_2)$ in any one of the following three ways will give an equivalent action: $(\phi\circ f_1, \phi \circ f_2)$ for $\phi$ an automorphism of $G$, $(f_1 \circ \psi, f_2 \circ \psi)$ for $\psi$ an automorphism of $U$, or $(C_{g_1} \circ f_1, C_{g_2}\circ f_2)$ where $C_{g_i}$ denotes conjugation by $g_i$.
Hence, we may classify all biquotient actions of $U$ on $G$ by classifying the conjugacy classes of images of homomorphisms from $U$ into $G\times G$ and then checking each of these to see if the induced action is effectively free. Combining this with Proposition \ref{free}, it follows that if $f(T_U)\subseteq T_{G\times G}$, then the action of $U$ on $G$ is (effectively) free if and only if the induced action of $T_U$ on $T_{G\times G}$ is (effectively) free.
We also note the following proposition, found in \cite{KZ}.
\begin{proposition}Suppose $U$ acts on $G_1$ and $G_2$ and that the action on $G_2$ is transitive with isotropy group $U_0$. Suppose further that the diagonal action of $U$ on $G_1\times G_2$ is effectively free. Then the action of $U_0$ on $G_1$ is effectively free and the quotients $(G_1\times G_2)/\!\!/ U$ and $G_1/\!\!/ U_0$ are canonically diffeomorphic.
\end{proposition}
So we see that, in terms of classifying biquotients, we may assume our biquotients are reduced -- that is, $U$ does not act transitively on any simple factor of $G$.
\
Since the restriction of any effectively free action to a subgroup is effectively free, we have the following simple lemma which we will make repeated use of.
\begin{lemma}\label{restest} Suppose $f=(f_1,f_2):U = SU(2)^2 \rightarrow G^2$ defines an effectively free action of $U$ on $G$. Then the restriction of $f$ to both factors of $U$, as well as to the diagonal $SU(2)$ in $U$, must define an effectively free action of $SU(2)$ on $G$.
\end{lemma}
We conclude this subsection with a proposition relating effectively free actions on a connected cover with effectively free actions on a connected base space.
\begin{proposition}\label{easyhalf}
Suppose $\pi:\tilde{M}\rightarrow M$ is an equivariant covering of smooth connected $G$-manifolds. If the $G$ action on $M$ is effectively free, then it is effectively free on $\tilde{M}$ as well. Conversely, if the deck group is a subset of $G\subseteq Diff(M)$ and the action on $\tilde{M}$ is effectively free, it is also effectively free on $M$.
\end{proposition}
\begin{proof}
Suppose there is a $g\in G$ and a $p\in \tilde{M}$ with $g\ast p = p$. Then $g\ast \pi(p) = \pi(g\ast p) =\pi(p)$, so $g$ fixes $\pi(p)$. Since the action on $M$ is effectively free, $g$ must act trivially on all of $M$. This implies that, for any $q\in \tilde{M}$, $\pi(g\ast q) = \pi(q)$; that is, multiplication by $g$ is an element of the deck group of the covering. Since $g$ fixes $p$, it must thus fix all of $\tilde{M}$.
\
Conversely, suppose the $G$ action on $\tilde{M}$ is effectively free and the deck group is a subgroup of $G$. If $\pi(p) = g\ast \pi(p) = \pi(g\ast p)$ for some $g\in G$, $p\in \tilde{M}$, then we see that $p = \mu(g\ast p)$ for some $\mu$ in the deck group. Writing $\mu = g_1\in G$, we have $p = (g_1 g) \ast p$. Since the action on $\tilde{M}$ is effectively free, we conclude that $g_1 g$ fixes every point of $\tilde{M}$. Then for any $q\in\tilde{M}$, $\pi(q) = \pi(g_1 g \ast q) = \pi(g\ast q) = g\ast \pi(q)$, that is, $g$ fixes $M$ pointwise.
\end{proof}
We note that if the hypothesis on the deck group is omitted, then the converse of Proposition \ref{easyhalf} is not true in general for biquotients, though it is for homogeneous actions. For example, the biquotient action of $SU(2) = Sp(1)$ on $Sp(3)$ given by $p\ast A = \text{diag}(p,1,1) A \text{diag}(1,p,p)^{-1}$ is free, but the induced action on $Sp(3)/\{\pm I\}$ is not effectively free because the element $-1\in Sp(1)$ fixes $[I]\in Sp(3)/\{\pm I\}$ but does not fix $[\text{diag}(R(\theta), 1)]$ unless $\theta$ is an integral multiple of $\pi$. In this case, the deck group element centralizes the $G$ action.
On the other hand, we will see that the hypothesis on the deck group is not necessary in general: there is a unique biquotient of the form $Spin(7)/\!\!/ SU(2)^2$ for which $SU(2)^2\subseteq Diff(Spin(7))$ does not contain the deck group, but the induced $SU(2)^2$ action on $SO(7)$ is still effectively free.
\subsection{Representation theory} \label{Reptheory}
Our homomorphisms $f:U\rightarrow G\times G$ will be constructed via representation theory. The following information can all be found in \cite{FH}. Recall that a representation of $U$ is a homomorphism $\rho:U\rightarrow Gl(V)$ for some complex vector space $V$. It is well known that if $U$ is a compact semi-simple Lie group, then $\rho(U)$ is conjugate to a subgroup of $SU(V)$ and that $\rho$ is completely reducible -- every such $\rho$ is a direct sum of irreducible representations. The representation $\rho$ is called orthogonal if the image is conjugate to a subgroup of the standard $SO(n)\subseteq SU(n)$. If $V = \mathbb{C}^{2n}$ and the image of $\rho$ is conjugate to a subgroup of the standard $Sp(n)\subseteq SU(2n)$, then $\rho$ is called symplectic. If $\rho$ is neither orthogonal nor symplectic, it is called complex.
We note that an irreducible representation is complex iff it is not isomorphic to its conjugate representation. Recall the following well known proposition.
\begin{proposition}\label{symp} A representation $\rho$ is orthogonal (symplectic) if and only if $\rho \cong \bigoplus_i (\psi_i\oplus \overline{\psi}_i)\oplus\ \bigoplus_j \phi_j,$ where each $\phi_j$ is orthogonal (symplectic) and $\overline{\psi}_i$ denotes the conjugate representation of $\psi_i$.
\end{proposition}
Since we are interested in the case $U = SU(2)^2$, we note that the irreducible representations of a product of compact Lie groups are always given as outer tensor products of irreducible representations of the factors. We also recall that an outer tensor product of two irreducible representations is orthogonal if the two factors are either both orthogonal or both symplectic, and the outer tensor product is symplectic if and only if one of the representations is symplectic and the other is orthogonal.
As mentioned in the previous subsection, we need to classify the conjugacy classes of images of homomorphisms from $U$ to $G\times G$. To relate this problem to representation theory, we use the following theorem of Mal'cev.
\begin{theorem}\label{Mal'cev}Let $f_1, f_2:U\rightarrow G$ with $$G\in\{ SU(n), Sp(n), SO(2n+1)\}$$ be interpreted as complex, symplectic, or orthogonal representations. If the representations are equivalent, then the images are conjugate in $G$.
\end{theorem}
The irreducible representations for every compact simply connected simple Lie group have been completely classified. For $SU(2)$, we have the following proposition.
\begin{proposition}\label{sp1irrep} For each $n \geq 1$, $SU(2)$ has a unique irreducible representation of dimension $n$. When $n$ is even this representation is symplectic, and when $n$ is odd this representation is orthogonal.
\end{proposition}
We will use the standard notation $\phi_i:SU(2)\rightarrow SU(i+1)$ to denote the unique $i+1$-dimensional irreducible representation of $SU(2)$ and $\phi_{ij}$ to denote $\phi_i\otimes \phi_j:U\rightarrow SU( (i+1)(j+1)).$ We note that, if $S^1 = \{\text{diag}(z, \overline{z}): z\in \mathbb{C}\text{ and }|z|=1\}$ denotes the standard maximal torus of $SU(2)$, then $\phi_i(S^1) = \{\text{diag}(z^i, z^{i-2},..., z^{-i})\}\subseteq SU(i+1)$.
Some of the lower dimensional $\phi_i$ are more commonly known. The representation $\phi_1:SU(2)\rightarrow SU(2)$ is the inclusion map, $\phi_2:SU(2)\rightarrow SO(3)\subseteq SU(3)$ is the canonical double cover map, and $\phi_4:SU(2)\rightarrow SO(5)\subseteq SU(5)$ is the Berger embedding of $SO(3)$ into $SO(5)$. The representation $\phi_6$ has image $B_{max}\subseteq SO(7)$, mentioned just after Theorem \ref{main}. The representation $\phi_{11}:SU(2)^2\rightarrow SO(4)\subseteq SU(4)$ is the canonical double cover map.
Proposition \ref{sp1irrep} implies that all irreducible representations of both $SU(2)$ and $U$ are either orthogonal or symplectic, so $\phi_i = \overline{\phi}_i$. Then Proposition \ref{symp} implies that a representation of either $SU(2)$ or $U$ is orthogonal (symplectic) iff every irreducible symplectic (orthogonal) subrepresentation appears with even multiplicity.
\subsection{The Octonions and Clifford Algebras}\label{oct}
The octonions $\mathbb{O}$, sometimes referred to as the Cayley numbers, are an $8$-dimensional non-associative normed division algebra over $\mathbb{R}$. The octonions are alternative, meaning that the subalgebra generated by any two elements of $\mathbb{O}$ is associative. In fact, if $x,y\in \mathbb{O}$, then the algebra generated by $x$ and $y$ is isomorphic to either $\mathbb{R}$, $\mathbb{C}$, or $\mathbb{H}$. All of the necessary background may be found in \cite{GVZ}. We will follow the conventions in \cite{Ke2}.
Viewing $\mathbb{O} = \mathbb{H} + \mathbb{H}l$, where $\mathbb{H}$ denotes the division algebra of quaternions, the multiplication is defined by $$(a+bl)\cdot (c+dl) = (ac-\overline{d}b) +(da + b\overline{c})l .$$
We use the canonical basis $$\{e_0 = 1, e_1 = i, e_2 = j, e_3 = k, e_4 = l, e_5 = il, e_6 = jl, e_7=kl\},$$ which we declare to be orthonormal. Then the multiplication table is given in Table \ref{table:cayleymult}, where the entries are of the form $(\text{row}) \cdot(\text{column}).$
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\cline{2-8}
\multicolumn{1}{c|}{} & $e_1$ & $e_2$ & $e_3$ & $e_4$ & $e_5$ & $e_6$ & $e_7$\\ \hline
\hline
$e_1$ & $-1$ & $e_3$ & $-e_2$ & $e_5$ & $-e_4$ & $-e_7$ & $e_6$ \\ \hline
$e_2$ & $-e_3$ & $-1$ & $e_1$ & $e_6$ & $e_7$ & $-e_4$ & $-e_5$ \\ \hline
$e_3$ & $e_2$ & $-e_1$ & $-1$ & $e_7$ & $-e_6$ & $e_5$ & $-e_4$\\ \hline
$e_4$ & $-e_5$ & $-e_6$ & $-e_7$ & $-1$ & $e_1$ & $e_2$ & $e_3$\\ \hline
$e_5$ & $e_4$ & $-e_7$ & $e_6$ & $-e_1$ & $-1$ & $-e_3$ & $e_2$\\ \hline
$e_6$ & $e_7$ & $e_4$ & $-e_5$ & $-e_2$ & $e_3$ & $-1$ & $-e_1$\\ \hline
$e_7$ & $-e_6$ & $e_5$ & $e_4$ & $-e_3$ & $-e_2$ & $e_1$ & $-1$\\ \hline
\end{tabular}
\caption{Multiplication table for octonions}\label{table:cayleymult}
\end{center}
\end{table}
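As a sample verification of Table \ref{table:cayleymult} against the multiplication formula above, take $a = i$, $b = 0$, $c = 0$, and $d = 1$, so that $$e_1\cdot e_4 = (i + 0\,l)\cdot(0 + 1\,l) = (i\cdot 0 - \overline{1}\cdot 0) + (1\cdot i + 0\cdot \overline{0})l = il = e_5,$$ in agreement with the $(e_1, e_4)$ entry of the table.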
We use the octonions to construct an explicit embedding of $Spin(7)$ into $SO(8)$. We will use this embedding to understand biquotients of the form $Spin(7)/\!\!/ SU(2)^2$.
\
Consider the Clifford algebra $$Cl_8 \cong \mathbb{R}^{16\times 16} =\left\{ \begin{bmatrix} A & B\\ C&D\end{bmatrix}: A,B,C,D\in \mathbb{R}^{8\times 8}\right\}.$$ We linearly embed $\mathbb{O}$ into $Cl_8$ using left multiplication: if $L_x\in \mathbb{R}^{8\times 8}$ denotes left multiplication by $x\in \mathbb{O}$, then we identify $x\in \mathbb{O}$ with $\widehat{x} = \begin{bmatrix} 0 & -L_{\overline{x}}\\ L_x & 0\end{bmatrix}$. A simple calculation shows $\widehat{x}\widehat{y} = \begin{bmatrix} -L_{\overline{x}} L_y & \\ & -L_x L_{\overline{y}} \end{bmatrix}$. Polarizing the identity $L_{\overline{x}} L_x = \langle x, x\rangle I_8$, which follows from the fact that $\mathbb{O}$ is normed and alternative, we conclude $L_{\overline{x}} L_y + L_{\overline{y}} L_x = 2\langle x,y\rangle I_8$. From here, a simple calculation shows \begin{equation}\label{cliffcommute} \widehat{x}\widehat{y} + \widehat{y}\widehat{x} = -2\langle x,y\rangle I_{16}. \end{equation}
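Spelled out, the polarization step is the computation $$2\langle x,y\rangle I_8 = \left(\langle x+y,x+y\rangle - \langle x,x\rangle - \langle y,y\rangle\right)I_8 = L_{\overline{x+y}}\,L_{x+y} - L_{\overline{x}}L_x - L_{\overline{y}}L_y = L_{\overline{x}}L_y + L_{\overline{y}}L_x,$$ where the last equality uses the linearity of $x\mapsto L_x$.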
We let $S^7 \subseteq \mathbb{O}$ denote the unit length octonions. For $v\in S^7$, we note that $\widehat{v}\in O(16)$, which follows from the fact that $L_v:\mathbb{O}\rightarrow \mathbb{O}$ is an isometry. Further, we also point out that $\widehat{v} \widehat{v} = \begin{bmatrix} -L_{\overline{v}} L_v & \\ & -L_v L_{\overline{v}}\end{bmatrix} = -I_{16}$. It follows that $\widehat{v}^{-1} = -\widehat{v}$.
\begin{proposition}\label{reflection} If $v\in S^7$, then conjugation by $\widehat{v}$ fixes $\widehat{v}$ and acts as $-1$ on $\widehat{v}^\bot\subseteq \widehat{\mathbb{O}}$.
\end{proposition}
\begin{proof} For $x\in \mathbb{O}$, decompose it as $x = \lambda v + x^\bot$ with $\lambda\in\mathbb{R}$ and $x^\bot$ orthogonal to $v$. Then, using \eqref{cliffcommute}, we have \begin{align*} \widehat{v}\widehat{x} \widehat{v}^{-1} &= -\widehat{v} \widehat{x} \widehat{v} \\ &= -\lambda \widehat{v}^3 - \widehat{v} \widehat{x^\bot} \widehat{v}\\ &= \lambda \widehat{v} - \widehat{v}\left(-2\langle x^\bot, v\rangle I_{16} - \widehat{v}{x^\bot}\right)\\ &= \lambda\widehat{v} + \widehat{v}^2 \widehat{x^\bot}\\ &= \lambda \widehat{v} - \widehat{x^\bot}.\end{align*} The result follows.
\end{proof}
Consider $S^6\subseteq S^7\subseteq \mathbb{O}$ consisting of the unit length purely imaginary octonions. Let $H$ denote the subgroup of $Cl_8$ generated by pairs of elements in $\widehat{S^6}$. Since $\widehat{S^7}\subseteq O(16)$, it follows that $H\subseteq O(16)$. Further, since $H$ is generated by $$H' = \left\{\widehat{v}\widehat{w} = \begin{bmatrix} L_v L_w & 0 \\ 0 & L_v L_w\end{bmatrix}: v,w\in S^6\right\},$$ we see that, in fact, $H$ is naturally a subgroup of $\Delta O(8)\subseteq O(8)\times O(8)\subseteq O(16)$.
In fact, $H$ is connected, so is a subgroup of $\Delta SO(8)\subseteq \Delta O(8)$. To see this, recall that the group generated by a path connected subset containing the identity is path connected, and then simply note that $I = \widehat{i}\, \widehat{-i}\in H'$.
Now, consider the map $\pi:H\rightarrow SO(\widehat{Im\mathbb{O}}) = SO(7)$ given by $\pi(h)\widehat{y} = h \widehat{y} h^{-1}$.
\begin{proposition}\label{Hidentity} The map $\pi$ is a double cover, so $H$ is isomorphic to $Spin(7)$.
\end{proposition}
\begin{proof}
We first show that $\pi$ has image contained in $SO(7)$. For $h = \widehat{v}\widehat{w}\in H'$, we see that, by Proposition \ref{reflection}, conjugation by $h$ corresponds to a reflection along the $w$ axis followed by a reflection along the $v$ axis. This fixes $\operatorname{span}\{\widehat{v},\widehat{w}\}^\bot\subseteq \widehat{\mathbb{O}}$ and rotates the plane spanned by $\widehat{v}$ and $\widehat{w}$ by twice the angle between $\widehat{v}$ and $\widehat{w}$. Since $\widehat{1}\in \operatorname{span}\{\widehat{v},\widehat{w}\}^\bot$ for any $\widehat{v},\widehat{w} \in \widehat{S^6}$, $\pi(h)$ really is an element of $SO(7)$, so $\pi(H')\subseteq SO(7)$. It follows that $\pi(H)\subseteq SO(7)$.
Also, $\pi$ is surjective. This follows because $SO(7)$ is generated by pairs of reflections.
Finally, we note that $\ker \pi$ consists of precisely two elements. To see this, note that $\widehat{i}\,\widehat{i} = -I_{16}\in H'$ and $\widehat{i}\,\widehat{-i} = I_{16}\in H'$, but $\pi(-I_{16}) = \pi(I_{16})= I$. Thus, $\ker \pi$ contains at least $2$ elements. On the other hand, because $H$ is connected and $\pi_1(SO(7))\cong \mathbb{Z}/2\mathbb{Z}$, $\ker \pi$ contains at most $2$ elements.
\end{proof}
With this description of $H = Spin(7)\subseteq SO(8)$, we now work towards identifying the maximal torus $T_H = T^3\subseteq Spin(7)\subseteq SO(8)$ and the projection $\pi:T_H\rightarrow T^3\subseteq SO(7)$.
\begin{proposition}\label{Hinso8} Suppose $v,w\in S^6\subseteq Im\mathbb{O}$ are independent with angle $\alpha$ between them and let $\mathbb{H}_{vw}$ denote the subalgebra of $\mathbb{O}$ generated by $v$ and $w$. Let $0\neq x\in \mathbb{H}_{vw}^\bot\subseteq \mathbb{O}$. Then $L_v L_w$ preserves each of the $2$-planes
\begin{center}\begin{tabular}{lcl} $P_1 = \operatorname{span}\{1, vw \}$ & & $P_2 = P_1^\bot\subseteq \mathbb{H}_{vw}$ \\ $P_3 = L_x P_1$ & & $P_4 = L_x P_2$\end{tabular} \end{center} and rotates each $P_i$ through an angle $\alpha$.
\end{proposition}
\begin{proof}
Since $v$ and $w$ are purely imaginary and independent, $\mathbb{H}_{vw}\cong \mathbb{H}$. In particular, on $P_1, P_2\subseteq \mathbb{H}_{vw}$, $L_v L_w = L_{vw}$. Because the automorphism group of $\mathbb{H}$ acts transitively on the set of pairs of orthogonal unit length purely imaginary quaternions, we may assume $v = i$, $w = \cos(\alpha) i + \sin(\alpha) j$, and so, $vw = -\cos(\alpha) + \sin(\alpha) k$. Now a simple calculation shows that $L_{vw}$ rotates $P_1 =\operatorname{span}\{ 1, k\}$ and $P_2 = \operatorname{span}\{ i,j\}$ by an angle of $\alpha$.
For $P_3$ and $P_4$, since $x$ and $v$ are orthogonal, we see from \eqref{cliffcommute} that $\widehat{v}\widehat{x} = - \widehat{x}\widehat{v}$. The top left $8\times 8$ block of the equation $\widehat{v}\widehat{x} = -\widehat{x}\widehat{v}$ is $-L_{\overline{v}}L_x = L_{\overline{x}} L_v$. Since $v$ and $x$ are purely imaginary, this shows $L_v$ and $L_x$ anti-commute. Similarly, $L_w$ and $L_x$ anti-commute.
It follows that $L_vL_w P_3 = L_v L_w L_x P_1 = L_x L_v L_w P_1$ is a rotation by angle $\alpha$ as well, and similarly for $P_4 = L_x P_2$.
\end{proof}
For $n=1,2,3$, we set $v = e_{2n-1}$ and $w= -(\cos(\theta)e_{2n-1} +\sin(\theta) e_{2n})$, so $\widehat{v}\widehat{w}\in H$. From the proof of Proposition \ref{Hidentity}, we know that $\pi(\widehat{v}\widehat{w}) = \pi(-\widehat{v} \cdot \widehat{w})$ rotates $e_{2n-1}$ towards $e_{2n}$ by an angle $2\theta$. On the other hand, using Proposition \ref{Hinso8}, we may compute the matrix form of $L_v L_w \in H\subseteq SO(8)$.
We work this out in detail when $n = 3$, so $$v = e_5 = il \text{ and } w = -(\cos(\gamma)(il) + \sin(\gamma)(jl)).$$ Here, $P_1 = \operatorname{span}\{1, k\}$, and $L_v L_w 1 = \cos(\gamma) + \sin(\gamma) k$, so $L_v L_w$ rotates $1$ towards $k$.
On $P_2 = \operatorname{span}\{il, jl\}$, we compute \begin{align*} L_v L_w (il) &= -(il)(\cos(\gamma)(il)^2 + \sin(\gamma)(jl)(il)) \\ &= -(il)(-\cos(\gamma) +\sin(\gamma)k) = \cos(\gamma)(il) - \sin(\gamma)(il)k\\ &= \cos(\gamma)(il) -\sin(\gamma)(jl),\end{align*} so $L_vL_w$ rotates $il$ towards $-jl$.
On $P_3 = i\operatorname{span}\{1,k\} = \operatorname{span}\{i, j\}$, we compute \begin{align*} L_v L_w i &= -(il)(\cos(\gamma)(il)i + \sin(\gamma)(jl)i)\\ &= -(il)(\cos(\gamma)l + \sin(\gamma)kl)\\ &= -\cos(\gamma) (il)l -\sin(\gamma)(il)(kl)\\ &= \cos(\gamma)i -\sin(\gamma)j.\end{align*} Thus, $L_vL_w$ rotates $i$ towards $-j$.
Finally, on $P_4 = i\operatorname{span}\{il , jl\} = \operatorname{span}\{l, kl \}$, we have \begin{align*} L_v L_w(l) &= -(il)(\cos(\gamma)(il)l + \sin(\gamma)(jl)l)\\ &= -(il)(-\cos(\gamma)i -\sin(\gamma)j)\\ &= \cos(\gamma)(il)i + \sin(\gamma)(il)j\\ &= \cos(\gamma) l -\sin(\gamma)kl.\end{align*} Thus, $L_v L_w$ rotates $l$ towards $-kl$.
Putting this all together, it follows that when $$v = il \text{ and } w = -(\cos(\gamma) (il) + \sin(\gamma)(jl)),$$ that $L_v L_w$ has the matrix form $$ A_3 = \begin{bmatrix} \cos\gamma & & & -\sin\gamma & & & & \\ & \cos \gamma & \sin\gamma & & & & &\\ & -\sin\gamma & \cos\gamma & & & & & \\ \sin\gamma & & &\cos\gamma & & & & \\ & & & & \cos\gamma & & & \sin\gamma \\ & & & & & \cos\gamma & \sin\gamma & \\ & & & & & -\sin\gamma & \cos\gamma & \\ & & & & -\sin\gamma & & & \cos\gamma \end{bmatrix}.$$
We let $R(\theta)$ denote the standard rotation matrix, $R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}$, and we use the shorthand $R(\theta_1,..., \theta_k)$ to denote the block diagonal matrix $\text{diag}(R(\theta_1), R(\theta_2), ..., R(\theta_k) )$ or $(R(\theta_1), ..., R(\theta_k),1)$ as appropriate. In this notation, we have now proven that $\pi(A_3) = R(0,0,2\gamma)\in SO(7)$.
In an analogous fashion, one can show that when $v = i$ and $w = -(\cos(\alpha) i + \sin(\alpha) j)$, then $L_v L_w$ has the matrix form $$A_1 = \begin{bmatrix} \cos\alpha & & & \sin\alpha & & & & \\ & \cos \alpha & \sin\alpha & & & & &\\ & -\sin\alpha & \cos\alpha & & & & & \\ -\sin\alpha & & &\cos\alpha & & & & \\ & & & & \cos\alpha & & & -\sin\alpha \\ & & & & & \cos\alpha & \sin\alpha & \\ & & & & & -\sin\alpha & \cos\alpha & \\ & & & & \sin\alpha & & & \cos\alpha \end{bmatrix}$$ with $\pi(A_1) = R(2\alpha, 0,0)$. In addition, when $v = k$ and $w = -(\cos(\beta) k + \sin(\beta)l)$, then $L_v L_w$ has the matrix form $$A_2 = \begin{bmatrix} \cos\beta & & & & & & & \sin\beta\\ & \cos\beta & & & & & \sin\beta & \\ & & \cos\beta & & & -\sin\beta & & \\ & & & \cos\beta & \sin\beta & & & \\ & & & -\sin\beta & \cos\beta & & & \\ & & \sin\beta & & & \cos\beta & & \\ & -\sin\beta & & & & & \cos \beta & \\ -\sin\beta & & & & & & & \cos\beta \end{bmatrix}$$ with $\pi(A_2) = R(0,2\beta,0)$.
One can easily verify that the $A_i$ matrices commute, so they can be simultaneously conjugated to the standard maximal torus of $SO(8)$. In fact, if we set $B = \begin{bmatrix} 0&0&0&1&0&0&0&1\\ 1&0&0&0&-1&0&0&0\\ 0&0&0&1&0&0&0&-1 \\ 1&0&0&0&1&0&0&0\\ 0&1&0&0&0&-1&0 &0\\ 0&0&-1&0&0&0&1&0\\ 0&1&0&0&0&1&0&0\\ 0&0&-1&0&0&0&-1&0 \end{bmatrix},$ it is easy to verify that $BA_1 A_2 A_3 B^{-1} = R(\theta_1,\theta_2,\theta_3,\theta_4)$ with \begin{align*} \theta_1 = \alpha+\beta - \gamma & & \theta_2 = \alpha-\beta -\gamma & \\ \theta_3 = \alpha-\beta + \gamma & & \theta_4 = \alpha+\beta+\gamma &.\end{align*} Thus, we have proven the following.
\begin{proposition} Up to conjugacy, the maximal torus of $H$, $T_H\subseteq H= Spin(7)\subseteq SO(8)$ is given as $R(\theta_1,\theta_2,\theta_3,\theta_4)$ with $\theta_i$ as above. In addition, the projection $\pi:H\rightarrow SO(7)$ maps $R(\theta_1,\theta_2,\theta_3,\theta_4)$ to $R(2\alpha, 2\beta, 2\gamma)$.
\end{proposition}
\
Because checking whether a biquotient is effectively free reduces to checking conjugacy of elements in a maximal torus, we must determine when two elements of $T_{Spin(7)}\subseteq Spin(7)\subseteq SO(8)$ are conjugate in $Spin(7)$. To that end, recall the maximal torus of $SO(8)$ consists of elements of the form $R(\lambda_1, \lambda_2, \lambda_3, \lambda_4)$ and the Weyl group $W_{SO(8)}$ acts by arbitrary permutations of the $\lambda_i$ together with an even number of sign changes of the $\lambda_i$.
We also note that, with the $\theta_i$ defined as above, $\theta_1 + \theta_3 = \theta_2 + \theta_4$ (both sides equal $2\alpha$). The Weyl group of $Spin(7)$, $W_{Spin(7)}$, acts as arbitrary permutations of $(\alpha, \beta, \gamma)$ as well as an arbitrary number of sign changes.
\begin{proposition}\label{spinconjugate} Two elements of $T_H\subseteq H\subseteq SO(8)$, defined by the equation $\theta_1+\theta_3 = \theta_2+\theta_4$, are conjugate in $Spin(7)$ iff there is an element in $W_{SO(8)}$ which preserves $T$ and maps the first element to the second.
\end{proposition}
\begin{proof}
We first note that the action of every element of $W_{Spin(7)}$ on $T$ is the restriction of the action of some element in $W_{SO(8)}$ which preserves $T$. To see this, note it is enough to check it on a generating set of $W_{Spin(7)}$. The element which interchanges $\alpha$ and $\beta$ is the restriction of the element of $W_{SO(8)}$ which interchanges $\theta_2$ and $\theta_3$ and negates them both, so preserves $T$. Similarly, interchanging $\beta$ and $\gamma$ corresponds to swapping $\theta_1$ and $\theta_3$. Finally, the element of $W_{Spin(7)}$ which negates $\beta$ corresponds to simultaneously interchanging $\theta_1$ and $\theta_2$ and interchanging $\theta_3$ and $\theta_4$. It is easy to see that these elements generate all of $W_{Spin(7)}$, and so this proves the ``only if'' direction.
To prove the ``if'' direction, we first note that $|W_{Spin(7)}| = 48$, so at least $48$ elements of $W_{SO(8)}$ preserve $T$. We now prove there are no more.
Consider the action of $W_{SO(8)}$ on the set of all rank $3$ sub-tori of the maximal torus of $SO(8)$. Because $W_{SO(8)}$ acts as an arbitrary permutation of the $\theta_i$, the orbit of the maximal torus of $Spin(7)\subseteq SO(8)$ contains at least the $3$ tori defined by the equations $\theta_1 + \theta_3 = \theta_2 + \theta_4$, $\theta_1 + \theta_2 = \theta_3 + \theta_4$, and $\theta_1+\theta_4 = \theta_2+\theta_3$. In addition, because $W_{SO(8)}$ also acts by changing an even number of signs, the torus defined by the equations $-\theta_1-\theta_3 = \theta_2 + \theta_4 $ is also in the orbit. By the orbit-stabilizer theorem, we have $|\text{Orbit}||\text{Stabilizer}| = |W_{SO(8)}| = 192$. Since we have already shown the stabilizer has a subgroup of size $48$ and that the order of the orbit is at least $4$, it follows that the order of the stabilizer is precisely $48$.
\end{proof}
\section{\texorpdfstring{Biquotients of the form $SO(7)/\!\!/ SU(2)^2$ and $Spin(7)/\!\!/ SU(2)^2$}{Biquotients of the form SO(7)/SU(2)SU(2) and Spin(7)/SU(2)SU(2)}}
In this section, we prove Theorem \ref{main} in the case of $G = SO(7)$ and $G = Spin(7)$.
We begin by listing all homomorphisms, up to equivalence, from $SU(2)$ and $SU(2)^2$ into $SO(7)$. Because $SU(2)$ is simply connected, every such homomorphism lifts to $Spin(7)$.
To determine which give rise to effectively free biquotient actions, we first classify all effectively free biquotient actions of $SU(2)$ on $Spin(7)$, finding precisely $12$. We then check directly that all $12$ of these descend to effectively free actions of $SU(2)$ on $SO(7)$.
We use the classification of biquotients of the form $Spin(7)/\!\!/ SU(2)$ and Lemma \ref{restest} to classify biquotients of the form $Spin(7)/\!\!/ SU(2)^2$. It follows from the classification that, with one exception, for each effectively free biquotient action of $SU(2)^2$ on $Spin(7)$, either the point $(-I,I)$ or $(I,-I)\in H^2\subseteq SO(8)^2$ is in the image of $SU(2)^2$. Proposition \ref{easyhalf} then implies that each of these actions descends to an effectively free action of $SU(2)^2$ on $SO(7)$. The exceptional case is easily checked.
\
Classifying homomorphisms from $SU(2)$ into $SO(7)$ is simply classifying all orthogonal $7$-dimensional representations of $SU(2)$. To begin with, we note there is a natural bijection between partitions of $7$ and $7$-dimensional representations of $SU(2)$. As mentioned in Section 2, Proposition \ref{symp} implies that a sum of representations of $SU(2)$ is orthogonal if and only if each symplectic $\phi_i$, that is, those with $i$ odd, appears with even multiplicity. Thus, we seek partitions in which every even number appears an even number of times. Compiling these, we obtain Table \ref{table:so7homo}.
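For instance, the partition $7 = 3 + 2 + 2$ qualifies, since its only even part, $2$, appears twice, and it yields $2\phi_1 + \phi_2$; the partition $7 = 4 + 3$ does not, since the even part $4$ appears only once, reflecting the fact that the symplectic representation $\phi_3$ cannot appear with odd multiplicity in an orthogonal representation.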
\begin{table}[ht]
\caption{Nontrivial homomorphisms from $SU(2)$ into $SO(7)$}\label{table:so7homo}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Representation & Image of $T_{SU(2)}\subseteq SO(7)$\\
\hline
$4\phi_0 + \phi_2$ & $R(2\theta,0,0) $\\
$3\phi_0 + 2\phi_1$ & $R(\theta,\theta, 0)$\\
$2\phi_0 + \phi_4$ & $R(4\theta,2\theta,0) $\\
$\phi_0 + 2\phi_2$ & $R(2\theta,2\theta,0)$\\
$2\phi_1 + \phi_2$ & $R(2\theta, \theta, \theta) $\\
$\phi_6$ & $R(6\theta, 4\theta, 2\theta) $\\
\hline
\end{tabular}
\end{center}
\end{table}
We recorded the image of the maximal torus, which is actually a subset of $SO(6)$, as this will be essential for determining whether a given biquotient action is effectively free or not.
For each entry in Table \ref{table:so7homo}, after lifting to $Spin(7)$ and including this into $SO(8)$, we obtain an $8$-dimensional orthogonal representation of $SU(2)$.
For example, one can easily verify that if one chooses $\alpha = \theta, \beta = \gamma = 0$, then the image of the maximal torus of $SU(2)$ is, in this case, $R( \alpha, \alpha,\alpha,\alpha)\in Spin(7)\subseteq SO(8)$ which projects to $\text{diag}(R(2\alpha), 1,1,1,1,1)\in SO(7)$. Hence, the lift of $4\phi_0 + \phi_2$ is, up to conjugacy, $4\phi_1$. Continuing in this fashion, we obtain Table \ref{table:su2spin}.
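Concretely, substituting $(\alpha,\beta,\gamma) = (\theta, 0, 0)$ into the formulas for the $\theta_i$ gives $$(\theta_1,\theta_2,\theta_3,\theta_4) = (\alpha+\beta-\gamma,\ \alpha-\beta-\gamma,\ \alpha-\beta+\gamma,\ \alpha+\beta+\gamma) = (\theta,\theta,\theta,\theta),$$ which is exactly the entry recorded in the first row of Table \ref{table:su2spin}.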
\begin{table}[ht]
\caption{Lifts of homomorphisms from $SU(2)$ to $SO(7)$ into $Spin(7)\subseteq SO(8)$}\label{table:su2spin}
\begin{center}
\begin{tabular}{|c|c||c|c|c|c|}
\hline
Name &Representation & $\alpha$ & $\beta$ & $\gamma$ & Image of $T_{SU(2)}\subseteq H\subseteq SO(8)$ \\
\hline\hline
$A$ & $4\phi_0 + \phi_2$ & $\theta$ & $0$ & $0$ & $R(\theta,\theta,\theta,\theta) $ \\
$B$ & $3\phi_0 + 2\phi_1$ & $\frac{1}{2}\theta$ & $\frac{1}{2}\theta$ & $0$ & $R(\theta, 0,0,\theta)$ \\
$C$ & $2\phi_0 + \phi_4$ & $2\theta$ & $\theta$ & $0$ & $R(3\theta, \theta, \theta, 3\theta)$ \\
$D$ & $\phi_0 + 2\phi_2$ & $\theta$ & $\theta$ & $0$ & $R(2\theta, 0,0,2\theta) $ \\
$E$ & $2\phi_1 + \phi_2$ & $\frac{1}{2}\theta$ & $\frac{1}{2}\theta$ & $\theta$ & $R(0,-\theta,\theta,2\theta) $\\
$F$ & $\phi_6$ & $\theta$ & $2\theta$ & $3\theta$ & $ R(0,-4\theta, 2\theta, 6\theta) $\\
\hline
\end{tabular}
\end{center}
\end{table}
In a similar fashion, one can classify all homomorphisms, up to equivalence, from $U = SU(2)^2$ to $SO(7)$. Recalling that the dimension of $\phi_{ij}$ is $(i+1)(j+1)$, one first tabulates a list of all representations of $U$ of total dimension $7$. Since orthogonal representations correspond to those representations for which every $\phi_{ij}$ with $i$ and $j$ of different parities appears with even multiplicity, one easily finds all $7$-dimensional orthogonal representations of $SU(2)^2$. Table \ref{table:so7partition} records all of the representations with finite kernel up to interchanging the two $SU(2)$ factors; those with infinite kernel are a composition $SU(2)^2\rightarrow SU(2)\rightarrow SO(7)$ where the first map is one of the two projection maps and the second is listed in Table \ref{table:su2spin}.
\begin{table}[ht]
\caption{Immersions from $SU(2)^2$ into $SO(7)$ and $Spin(7)\subseteq SO(8)$}\label{table:so7partition}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Representation & Image of $T_U\subseteq SO(7)$ & Lift $T_U\subseteq H\subseteq SO(8)$ \\
\hline
$3\phi_{00} + \phi_{11}$ & $R(\theta + \phi, \theta -\phi, 0) $ & $ R(\theta, \phi,\phi,\theta) $\\
$\phi_{00} + \phi_{20} + \phi_{02}$ & $R(2\theta, 2\phi, 0) $ & $ R(\theta + \phi, \theta - \phi, \theta - \phi, \theta + \phi) $\\
$2\phi_{10} + \phi_{02}$ & $R(2\phi, \theta, \theta) $ & $ R(\phi, \phi-\theta, \phi, \theta + \phi) $\\
$\phi_{11} + \phi_{02}$ & $R(2\phi , \theta + \phi, \theta - \phi) $ & $ R(\theta + \phi, 0, \phi - \theta, 2\phi) $\\
\hline
\end{tabular}
\end{center}
\end{table}
We now investigate which pairs of homomorphisms give rise to effectively free actions. We begin with the homomorphisms with domain $SU(2)$, found in Table \ref{table:so7homo}.
\begin{proposition}\label{so7class}
Suppose $f=(f_1,f_2):SU(2)\rightarrow Spin(7)^2\subseteq SO(8)^2$ with both $f_1$ and $f_2$ nontrivial. Then $f$ induces an effectively free biquotient action of $SU(2)$ on $Spin(7)$ if and only if up to order, we have $$(f_1,f_2) \in \{(A,B), (A,D), (A,E), (A,F), (C,E), (D,E) \}$$ \end{proposition}
\begin{proof}
Recall that a biquotient action defined by $(f_1, f_2)$ is effectively free if and only if for all $z \in T_{SU(2)}$, if $f_1(z)$ is conjugate to $f_2(z)$ in $G$, then $f_1(z) = f_2(z) \in Z(G)$. It immediately follows that $f_1$ and $f_2$ must be distinct, and that if either $f_1$ or $f_2$ is the trivial homomorphism, then the action is automatically free, which accounts for $6$ homogeneous examples. This leaves $\binom{6}{2} = 15$ pairs of homomorphisms to check. We present a few of the relevant calculations.
As shown in Proposition \ref{spinconjugate}, two elements in $T\subseteq Spin(7)$ are conjugate in $Spin(7)$ iff they are conjugate in $SO(8)$ by an element which preserves $T$. We recall that the Weyl group acts on an element $R(\theta_1,...,\theta_4)$ in the maximal torus of $SO(8)$ by arbitrary permutations and an even number of sign changes.
For the pair $(D, E)$, the two images of the maximal tori are $R(2\theta, 0,0,2\theta)$ and $R(0,-\theta, \theta, 2\theta)$. If, up to permutations and an even number of sign changes, these are equal, then we must either have $\theta = 0$ or $2\theta = 0$. Of course, the first case only arises when both matrices are the identity matrix, so we focus on the second case. If $2\theta = 0$, the first matrix becomes $I$, which then forces the second to be $I$ as well. Hence, these two elements are conjugate iff they are both the identity, and so the action is effectively free.
On the other hand, the pair $(C,D)$ does not give rise to an effectively free action. The images of the two maximal tori are $R(3\theta, \theta, \theta, 3\theta)$ and $R(2\theta,0,0,2\theta)$. Choosing $\theta = 2\pi /3$, we obtain the matrices $$R(0, 2\pi/3, 2\pi/3,0) \text{ and } R(4\pi/3, 0,0,4\pi/3) = R(-2\pi/3, 0,0,-2\pi/3).$$ Consider the element $w\in W_{SO(8)}$ with $w(\theta_1,\theta_2,\theta_3, \theta_4) = -(\theta_2, \theta_1, \theta_4, \theta_3)$. Then one easily sees that $w$ maps the first torus element to the second and $w$ preserves the maximal torus of $Spin(7)$. By Proposition \ref{spinconjugate}, these two torus elements are conjugate in $Spin(7)$. Since they are not in $Z(Spin(7)) = \{\pm I\}\subseteq SO(8)$, it follows that this action is not effectively free.
\end{proof}
One can easily verify that each pair listed in Proposition \ref{so7class} descends to an effectively free action on $SO(7)$. For example, for the pair $(D,E)$, the image of the maximal torus in $SO(7)$ consists of matrices of the form $R(2\theta, 2\theta, 0)$ and $R(2\theta, \theta,\theta)$. If these matrices are conjugate in $SO(7)$, then either $\theta = 0$, which forces both matrices to be the identity, or $2\theta = 0$. Substituting this in to the first matrix gives the identity matrix, which then forces the second matrix to be the identity. Hence, these matrices are conjugate iff they are both the identity, so the action is free.
\
We now classify all biquotients of the form $Spin(7) /\!\!/ SU(2)^2$. Our main tool is Lemma \ref{restest} which asserts that for any $f=(f_1,f_2):SU(2)^2\rightarrow Spin(7)^2$, if $(f_1,f_2)$ defines an effectively free action, then after restricting $f$ to a maximal torus $T^2\subseteq SU(2)^2$ with parameters $\theta$ and $\phi$, setting $\theta=0$, $\phi=0$, or $\theta=\phi$ must result in a pair of homomorphisms from Proposition \ref{so7class}.
\begin{proposition} Suppose $(f_1,f_2):SU(2)^2\rightarrow Spin(7)^2$ defines an effectively free action of $SU(2)^2$ on $Spin(7)$ with $f_1$ nontrivial. Then either $f_2$ is trivial, or, up to interchanging $f_1$ and $f_2$, the pair $(f_1,f_2)$ is a lift of a pair in Table \ref{table:su2class}.
\end{proposition}
\begin{proof}
A homomorphism $f:U=SU(2)^2\rightarrow G^2$ is nothing but a pair of homomorphisms $f_i:U\rightarrow G$. We break the proof into cases depending on whether each $f_i$ has finite kernel or infinite kernel. If $f_i$ has finite kernel, it is, up to interchanging the two $SU(2)$ factors, the lift of a representation found in Table \ref{table:so7partition}. On the other hand, if $f_i$ has infinite kernel (but is non-trivial), it is given as the lift of a projection onto either factor composed with an entry in Table \ref{table:su2spin}.
To facilitate the use of Lemma \ref{restest}, we record in Table \ref{table:restriction}, for each representation in Table \ref{table:so7partition}, the restriction of this representation to the three natural $SU(2)$ subgroups.
\begin{table}[h]
\caption{Restrictions of $SU(2)^2\rightarrow Spin(7)$ to $SU(2)$ subgroups}\label{table:restriction}
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
Representation & $\theta = 0$ & $\phi=0$ & $\theta = \phi$\\
\hline
$3\phi_{00} + \phi_{11}$ & $B$ & $B$ & $A$\\
$\phi_{00} + \phi_{20} + \phi_{02}$ & $A$ & $A$ & $D$\\
$2\phi_{10} + \phi_{02}$ & $A$ & $B$ & $E$\\
$\phi_{11} + \phi_{02}$ & $E$ & $B$ & $D$ \\
\hline
\end{tabular}
\end{center}
\end{table}
We first handle the case where both $f_i$ have finite kernel. The first two representations in Table \ref{table:restriction} are symmetric in $\theta$ and $\phi$, but the last two are not. It follows that, in addition to checking all $\binom{4}{2} = 6$ pairs of representations in Table \ref{table:restriction}, we must also check $(2\phi_{10} + \phi_{02}, \phi_{11} + \phi_{20})$. With the exception of $(3\phi_{00} + \phi_{11}, \phi_{00} + \phi_{20} + \phi_{02})$, the remaining $6$ pairs have at least one non-effectively free restriction, that is, at least one restriction which is not found in Proposition \ref{so7class}. In the exceptional case, one easily sees that $\theta = \frac{2\pi}{5} $ and $\phi = \frac{4\pi}{5}$ gives rise to a non-central conjugacy in $Spin(7)$, so the induced action is not effectively free. This completes the case where both $f_i$ have finite kernel.
So, we assume $f_2$ has infinite kernel. This implies that one of the restrictions is the trivial representation, while the other two must be the same. This immediately rules out the case $f_1 = 3\phi_{00} + \phi_{11}$, because the only pair in Proposition \ref{so7class} containing $B$ is $(A,B)$, which then forces the $\theta =\phi$ restriction to be $(A,A)$, so the induced action is not effectively free.
Similarly, if $f_1 = \phi_{00} + \phi_{20} + \phi_{02}$, then $f_2 = E$ is the only possibility which is not ruled out by Lemma \ref{restest}. In this case, one has $R(\theta+\phi, \theta - \phi, \theta-\phi, \theta+\phi)$ and $R(0,-\theta,\theta,2\theta)$. If these are conjugate, then $\theta = \pm \phi \pmod{2\pi}$, either case of which gives rise to the pair $(D,E)$. Hence, by Proposition \ref{so7class}, this action is effectively free.
If $f_1$ is one of the remaining two representations with finite kernel, then the $\phi=0$ restriction forces $f_2$ to either depend only on $\phi$, or $f_2 = A$ with the variable $\theta$. In the latter case, one easily verifies that both choices of $f_1$ give rise to effectively free biquotients, so we now assume $f_2$ depends only on $\phi$. If $f_1 = 2\phi_{10} + \phi_{02}$, then Proposition \ref{so7class} implies $f_2 = D$, and if $f_1 = \phi_{11} + \phi_{02}$, then $f_2 = A$. One easily verifies that both of these give rise to effectively free actions.
\
We have now handled the case where $f_1$ has finite kernel, so we may assume $f_1$ and $f_2$ both have infinite kernel. The $\phi =\theta$ restriction implies that the only cases which can possibly give rise to effectively free actions are those given in Proposition \ref{so7class}, where we assume $f_1$ only depends on $\theta$, while $f_2$ depends only on $\phi$. One easily sees that the first five entries give rise to effectively free actions.
For example, for $(C,E)$, the two images of the maximal tori are given by $R(3\theta, \theta,\theta,3\theta)$ and $R(0,-\phi , \phi ,2\phi )$. They are not even conjugate in $SO(8)$ unless $\theta = 0$, in which case conjugacy forces both matrices to be the identity.
The last entry, $(D,E)$, does not give rise to an effectively free action. The images of the maximal torus are $R(2\theta,0,0,2\theta)$ and $R(0,-\phi, \phi, 2\phi)$. Setting $\theta = \pi/2$ and $\phi = \pi$ gives a non-central conjugacy in $Spin(7)$.
\end{proof}
We note that, as a corollary to the proof, for $9$ of the $10$ effectively free biquotients, the map $(f_1,f_2):SU(2)^2\rightarrow Spin(7)^2\subseteq SO(8)^2$ defining the action has either $(-I,I)$ or $(I,-I)$ in its image, so Proposition \ref{easyhalf} implies each of these descends to a biquotient of the form $SO(7)/\!\!/ SU(2)^2$. The exceptional case is given by the pair $(2\phi_{10} + \phi_{02}, \phi_{00} + 2\phi_{02})$. As a representation into $SO(7)$, the two images of the maximal tori are $R(\theta, \theta, 2\phi)$ and $R(2\phi, 2\phi, 0)$. If these are conjugate in $SO(7)$, then either $\theta = 0$ or $2\phi = 0$. Either case forces the other case to hold, so these matrices are only conjugate when both are the identity. It follows that this action of $SU(2)^2$ on $SO(7)$ is effectively free, finishing the proof of Theorem \ref{main} in the case of $G = SO(7)$ or $G = Spin(7)$.
\section{\texorpdfstring{$G = SU(4)$ or $G = \mathbf{G}_2\times SU(2)$}{G = SU(4) or G = G2 x SU(2)}}
In this section, we complete the proof of Theorem \ref{main} in the case of $G = SU(4)$ and $G = \mathbf{G}_2\times SU(2)$.
We begin with the case $G = SU(4)$. We note that there are, up to equivalence, precisely 5 homomorphisms from $SU(2)$ to $SU(4)$, corresponding to the $5$ partitions of $4$. These are listed in Table \ref{table:su4homo}. Similarly, up to equivalence, there are precisely $2$ homomorphisms $SU(2)^2\rightarrow G$ with finite kernel: $\phi_{10} + \phi_{01}$ and $\phi_{11}$. The image of $T_{SU(2)^2}$ is, respectively, $\text{diag}(z,\overline{z},w,\overline{w})$ and $\text{diag}(zw,\overline{z}\overline{w}, z\overline{w}, \overline{z} w)$.
\begin{table}[ht]
\caption{Homomorphisms from $SU(2)$ into $SU(4)$}\label{table:su4homo}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Representation & Image of $T_{SU(2)}$\\
\hline
$4\phi_0$ & $\text{diag}(1,1,1,1)$\\
$2\phi_0 + \phi_1$ & $\text{diag}(z,\overline{z},1,1)$\\
$\phi_0 + \phi_2$ & $\text{diag}(z^2,1,\overline{z}^2,1)$\\
$\phi_3$ & $\text{diag}(z,z^3,\overline{z},\overline{z}^3)$\\
$2\phi_1$ & $\text{diag}(z,\overline{z},z,\overline{z})$\\
\hline
\end{tabular}
\end{center}
\end{table}
As in the proof of the case $G = SO(7)$, we first classify all effectively free actions of $SU(2)$ on $SU(4)$. The proof is carried out just as in Proposition \ref{so7class}, with the only change being that two diagonal matrices in $SU(4)$ are conjugate iff the entries are the same, up to order.
\begin{proposition}\label{su4class}
Suppose $f=(f_1,f_2):SU(2)\rightarrow G^2$ with $f_1$ nontrivial. Then $f$ induces an effectively free biquotient action of $SU(2)$ on $G$ if and only if either $f_2$ is trivial or, up to interchanging $f_1$ and $f_2$, $(f_1,f_2)$ is equivalent to $$(2\phi_0 + \phi_1, 2\phi_1) \text{ or }(\phi_0 + \phi_2, 2\phi_1).$$
\end{proposition}
We may now use Lemma \ref{restest} to complete the proof of Theorem \ref{main} in the case of $G = SU(4)$.
\begin{proposition}Table \ref{table:su2class} lists all of the inhomogeneous biquotients of the form $SU(4)/\!\!/ SU(2)^2$.
\end{proposition}
\begin{proof}
As in the proof of Theorem \ref{main} in the case of $G = SO(7)$, we only consider $(f_i, f_j)$ with $f_i\neq f_j$, and neither $f_i$ the trivial map. Taking into account the symmetry of interchanging $z$ and $w$, we reduce the number to $15$ pairs to check. Finally, Lemma \ref{restest} reduces this number down to $3$, of which only one does not give rise to an effectively free action. We now provide the computations for these three cases.
Consider the action induced by $(\phi_{10} + \phi_{01}, \phi_{11})$, with maximal torus given by $$(\text{diag}(z, \overline{z}, w, \overline{w}), \text{diag}(zw, \overline{zw}, z\overline{w}, \overline{z}w)).$$ We set $w = z^3$ and choose $z$ to be a nontrivial $5$th root of unity. Then the image on the left is $(z,z^4, z^3, z^2)$ while the image on the right is $(z^4, z, z^3, z^2)$. These two matrices are clearly conjugate in $SU(4)$, but neither is an element of $Z(SU(4)) = \{\pm I\}$.
On the other hand, the action given by $(\text{diag}(z, \overline{z}, 1, 1), \text{diag}(w, \overline{w}, w, \overline{w}))$ is effectively free because if two such matrices are conjugate, we must have $w=1$, which then forces $z=1$.
Finally, the action given by $(\text{diag}(z, \overline{z}, z, \overline{z}), \text{diag}(w^2, 1, \overline{w}^2, 1))$ is effectively free since to be conjugate we require $z=1$, which then forces $w=1$.
\end{proof}
\
We now complete the proof of Theorem \ref{main} in the last case, when $G = \mathbf{G}_2\times SU(2)$.
As mentioned in Section \ref{sec:background}, up to conjugacy, the identity map $SU(2)\rightarrow SU(2)$ is the unique non-trivial homomorphism. It follows that, up to equivalence, there are precisely two non-trivial biquotient actions of $SU(2)$ on itself: left multiplication, and conjugation. In particular, for any $U$ action on $SU(2)$, at least one $SU(2)$ factor of $U$ acts trivially.
The left multiplication action is clearly transitive, and thus, gives rise to a non-reduced biquotient. If $SU(2)$ acts either trivially or by conjugation on itself, then $I$ is a fixed point of the action. It follows that if neither factor of $U = SU(2)^2$ acts transitively on the $SU(2)$ factor of $G$, then, in fact, $U$ must act freely on $\mathbf{G}_2$.
As shown in \cite{KZ}, there is a unique effectively free biquotient action of $U$ on $\mathbf{G}_2$, giving rise to the exceptional symmetric space $\mathbf{G}_2/SO(4)$. It follows that there are precisely three non-reduced biquotients of the form $\mathbf{G}_2\times SU(2)/\!\!/ U$: either $U$ acts trivially on the $SU(2)$ factor of $G$ (giving the homogeneous space $(\mathbf{G}_2/SO(4))\times SU(2)$), or one of the two $SU(2)$ factors of $U$ acts by conjugation on the $SU(2)$ factor of $G$, giving the last two entries. We note that, as mentioned in \cite{KZ}, the two normal $SU(2)$s in $SO(4)$ have different Dynkin indices in $\mathbf{G_2}$. It follows that these two actions are not equivalent.
\end{document}
Chapter 6: Sheaves on Spaces
6.22 Continuous maps and abelian sheaves
Let $f : X \to Y$ be a continuous map. We claim there are functors
\begin{eqnarray*} f_* : \textit{PAb}(X) & \longrightarrow & \textit{PAb}(Y) \\ f_* : \textit{Ab}(X) & \longrightarrow & \textit{Ab}(Y) \\ f_ p : \textit{PAb}(Y) & \longrightarrow & \textit{PAb}(X) \\ f^{-1} : \textit{Ab}(Y) & \longrightarrow & \textit{Ab}(X) \end{eqnarray*}
with similar properties to their counterparts in Section 6.21. To see this we argue in the following way.
Each of the functors will be constructed in the same way as the corresponding functor in Section 6.21. This works because all the colimits in that section are directed colimits (but we will work through it below).
First off, given an abelian presheaf $\mathcal{F}$ on $X$ and an abelian presheaf $\mathcal{G}$ on $Y$ we define
\begin{eqnarray*} f_*\mathcal{F}(V) & = & \mathcal{F}(f^{-1}(V)) \\ f_ p\mathcal{G}(U) & = & \mathop{\mathrm{colim}}\nolimits _{f(U) \subset V} \mathcal{G}(V) \end{eqnarray*}
as abelian groups. The restriction mappings are the same as the restriction mappings for presheaves of sets (and they are all homomorphisms of abelian groups).
The assignments $\mathcal{F} \mapsto f_*\mathcal{F}$ and $\mathcal{G} \to f_ p\mathcal{G}$ are functors on the categories of presheaves of abelian groups. This is clear, as (for example) a map of abelian presheaves $\mathcal{G}_1 \to \mathcal{G}_2$ gives rise to a map of directed systems $\{ \mathcal{G}_1(V)\} _{f(U) \subset V} \to \{ \mathcal{G}_2(V)\} _{f(U) \subset V}$ all of whose maps are homomorphisms and hence gives rise to a homomorphism of abelian groups $f_ p\mathcal{G}_1(U) \to f_ p\mathcal{G}_2(U)$.
The functors $f_*$ and $f_ p$ are adjoint on the category of presheaves of abelian groups, i.e., we have
\[ \mathop{\mathrm{Mor}}\nolimits _{\textit{PAb}(X)}(f_ p\mathcal{G}, \mathcal{F}) = \mathop{\mathrm{Mor}}\nolimits _{\textit{PAb}(Y)}(\mathcal{G}, f_*\mathcal{F}). \]
To prove this, note that the map $i_\mathcal {G} : \mathcal{G} \to f_* f_ p\mathcal{G}$ from the proof of Lemma 6.21.3 is a map of abelian presheaves. Hence if $\psi : f_ p\mathcal{G} \to \mathcal{F}$ is a map of abelian presheaves, then the corresponding map $\mathcal{G} \to f_*\mathcal{F}$ is the map $f_*\psi \circ i_\mathcal {G} : \mathcal{G} \to f_* f_ p \mathcal{G} \to f_* \mathcal{F}$, which is also a map of abelian presheaves. For the other direction we point out that the map $c_\mathcal {F} : f_ p f_* \mathcal{F} \to \mathcal{F}$ from the proof of Lemma 6.21.3 is a map of abelian presheaves as well (since it is made out of restriction mappings of $\mathcal{F}$ which are all homomorphisms). Hence given a map of abelian presheaves $\varphi : \mathcal{G} \to f_*\mathcal{F}$ the map $c_\mathcal {F} \circ f_ p\varphi : f_ p\mathcal{G} \to \mathcal{F}$ is a map of abelian presheaves as well. Since these constructions $\psi \mapsto f_*\psi $ and $\varphi \mapsto c_\mathcal {F} \circ f_ p\varphi $ are inverse to each other as constructions on maps of presheaves of sets we see they are also inverse to each other on maps of abelian presheaves.
If $\mathcal{F}$ is an abelian sheaf on $X$, then $f_*\mathcal{F}$ is an abelian sheaf on $Y$. This is true because of the definition of an abelian sheaf and because this is true for sheaves of sets, see Lemma 6.21.1. This defines the functor $f_*$ on the category of abelian sheaves.
We define $f^{-1}\mathcal{G} = (f_ p\mathcal{G})^\# $ as before. Adjointness of $f_*$ and $f^{-1}$ follows formally as in the case of presheaves of sets. Here is the argument:
\begin{eqnarray*} \mathop{\mathrm{Mor}}\nolimits _{\textit{Ab}(X)}(f^{-1}\mathcal{G}, \mathcal{F}) & = & \mathop{\mathrm{Mor}}\nolimits _{\textit{PAb}(X)}(f_ p\mathcal{G}, \mathcal{F}) \\ & = & \mathop{\mathrm{Mor}}\nolimits _{\textit{PAb}(Y)}(\mathcal{G}, f_*\mathcal{F}) \\ & = & \mathop{\mathrm{Mor}}\nolimits _{\textit{Ab}(Y)}(\mathcal{G}, f_*\mathcal{F}) \end{eqnarray*}
Lemma 6.22.1. Let $f : X \to Y$ be a continuous map.
1. Let $\mathcal{G}$ be an abelian presheaf on $Y$. Let $x \in X$. The bijection $\mathcal{G}_{f(x)} \to (f_ p\mathcal{G})_ x$ of Lemma 6.21.4 is an isomorphism of abelian groups.
2. Let $\mathcal{G}$ be an abelian sheaf on $Y$. Let $x \in X$. The bijection $\mathcal{G}_{f(x)} \to (f^{-1}\mathcal{G})_ x$ of Lemma 6.21.5 is an isomorphism of abelian groups.
Proof. Omitted. $\square$
Given a continuous map $f : X \to Y$ and sheaves of abelian groups $\mathcal{F}$ on $X$, $\mathcal{G}$ on $Y$, the notion of an $f$-map $\mathcal{G} \to \mathcal{F}$ of sheaves of abelian groups makes sense. We can just define it exactly as in Definition 6.21.7 (replacing maps of sets with homomorphisms of abelian groups) or we can simply say that it is the same as a map of abelian sheaves $\mathcal{G} \to f_*\mathcal{F}$. We will use this notion freely in the following. The group of $f$-maps between $\mathcal{G}$ and $\mathcal{F}$ will be in canonical bijection with the groups $\mathop{\mathrm{Mor}}\nolimits _{\textit{Ab}(X)}(f^{-1}\mathcal{G}, \mathcal{F})$ and $\mathop{\mathrm{Mor}}\nolimits _{\textit{Ab}(Y)}(\mathcal{G}, f_*\mathcal{F})$.
Composition of $f$-maps is defined in exactly the same manner as in the case of $f$-maps of sheaves of sets. In addition, given an $f$-map $\mathcal{G} \to \mathcal{F}$ as above, the induced maps on stalks
\[ \varphi _ x : \mathcal{G}_{f(x)} \longrightarrow \mathcal{F}_ x \]
are abelian group homomorphisms.
March 2014, 34(3): 1041-1060. doi: 10.3934/dcds.2014.34.1041
Bernstein-type approximation of set-valued functions in the symmetric difference metric
Shay Kels 1 and Nira Dyn 1
1 School of Mathematical Sciences, Tel-Aviv University, Ramat-Aviv, Tel-Aviv, Israel
Received November 2012 Revised February 2013 Published August 2013
We study the approximation of univariate and multivariate set-valued functions (SVFs) by the adaptation to SVFs of positive sample-based approximation operators for real-valued functions. To this end, we introduce a new weighted average of several sets and study its properties. The approximation results are obtained in the space of Lebesgue measurable sets with the symmetric difference metric.
In particular, we apply the new average of sets to adapt to SVFs the classical Bernstein approximation operators, and show that these operators approximate continuous SVFs. The rate of approximation of Hölder continuous SVFs by the adapted Bernstein operators is studied and shown to be asymptotically equal to the one for real-valued functions. Finally, the results obtained in the metric space of sets are generalized to metric spaces endowed with an average satisfying certain properties.
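To make the classical construction being adapted concrete, the following minimal NumPy sketch evaluates the ordinary real-valued Bernstein operator $B_n f(x) = \sum_{k=0}^{n} f(k/n) \binom{n}{k} x^k (1-x)^{n-k}$ on $[0,1]$. It illustrates only the scalar case, not the set-valued adaptation studied in the paper, and the function names and the test function are our own illustrative choices.

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at points x in [0, 1]."""
    x = np.asarray(x, dtype=float)
    return sum(f(k / n) * comb(n, k) * x**k * (1.0 - x)**(n - k)
               for k in range(n + 1))

# Example: approximate the Hoelder (indeed Lipschitz) continuous function |x - 1/2|
f = lambda t: abs(t - 0.5)
xs = np.linspace(0.0, 1.0, 201)
sup_error = np.max(np.abs(bernstein(f, 64, xs) - f(xs)))  # sup-norm error on the grid
```

Increasing n makes sup_error decrease, in line with the classical approximation rates for Hölder continuous functions to which the paper compares its set-valued results.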
Keywords: symmetric difference metric, average of sets, set-valued functions, Bernstein approximation.
Mathematics Subject Classification: Primary: 26E25, 41A6.
Citation: Shay Kels, Nira Dyn. Bernstein-type approximation of set-valued functions in the symmetric difference metric. Discrete & Continuous Dynamical Systems - A, 2014, 34 (3) : 1041-1060. doi: 10.3934/dcds.2014.34.1041
Richardson–Lucy deconvolution
The Richardson–Lucy algorithm, also known as Lucy–Richardson deconvolution, is an iterative procedure for recovering an underlying image that has been blurred by a known point spread function. It was named after William Richardson and Leon B. Lucy, who described it independently.[1][2]
Not to be confused with Modified Richardson iteration.
Description
When an image is produced using an optical system and detected using photographic film or a charge-coupled device, for instance, it is inevitably blurred, with an ideal point source not appearing as a point but being spread out into what is known as the point spread function. Extended sources can be decomposed into the sum of many individual point sources, thus the observed image can be represented in terms of a transition matrix p operating on an underlying image:
$d_{i}=\sum _{j}p_{i,j}u_{j}\,$
where $u_{j}$ is the intensity of the underlying image at pixel $j$ and $d_{i}$ is the detected intensity at pixel $i$. In general, a matrix whose elements are $p_{i,j}$ describes the portion of light from source pixel j that is detected in pixel i. In most good optical systems (or in general, linear systems that are described as shift invariant) the transfer function p can be expressed simply in terms of the spatial offset between the source pixel j and the observation pixel i:
$p_{i,j}=P(i-j)$
where $P(\Delta i)$ is called a point spread function. In that case the above equation becomes a convolution. This has been written for one spatial dimension, but of course most imaging systems are two dimensional, with the source, detected image, and point spread function all having two indices. So a two dimensional detected image is a convolution of the underlying image with a two dimensional point spread function $P(\Delta x,\Delta y)$ plus added detection noise.
In order to estimate $u_{j}$ given the observed $d_{i}$ and a known $P(\Delta i_{x},\Delta j_{y})$, the following iterative procedure is employed in which the estimate of $u_{j}$ (called ${\hat {u}}_{j}^{(t)}$) for iteration number t is updated as follows:
${\hat {u}}_{j}^{(t+1)}={\hat {u}}_{j}^{(t)}\sum _{i}{\frac {d_{i}}{c_{i}}}p_{ij}$
where
$c_{i}=\sum _{j}p_{ij}{\hat {u}}_{j}^{(t)}.$
It has been shown empirically that if this iteration converges, it converges to the maximum likelihood solution for $u_{j}$.[3]
Writing this more generally for two (or more) dimensions in terms of convolution with a point spread function P:
${\hat {u}}^{(t+1)}={\hat {u}}^{(t)}\cdot \left({\frac {d}{{\hat {u}}^{(t)}\otimes P}}\otimes P^{*}\right)$
where the division and multiplication are element wise, $\otimes $ indicates a 2D convolution, and $P^{*}$ is the flipped point spread function.
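The update above translates almost directly into code. The sketch below is only a minimal NumPy/SciPy illustration of the iteration, not the implementation used by any particular software package; the flat initial estimate, the fixed iteration count and the small eps guard against division by zero are our own choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(d, psf, iterations=30, eps=1e-12):
    """Iteratively de-blur the observed image d given the known point spread function psf."""
    u = np.full(d.shape, d.mean(), dtype=float)   # initial estimate u^(0)
    psf_mirror = psf[::-1, ::-1]                  # P*, the flipped point spread function
    for _ in range(iterations):
        c = fftconvolve(u, psf, mode="same")      # c = u^(t) convolved with P
        u = u * fftconvolve(d / (c + eps), psf_mirror, mode="same")
    return u
```

In practice a ready-made routine such as skimage.restoration.richardson_lucy from scikit-image can be used instead of hand-rolled code.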
In problems where the point spread function $p_{ij}$ is not known a priori, a modification of the Richardson–Lucy algorithm has been proposed, in order to accomplish blind deconvolution.[4]
Derivation
In the context of fluorescence microscopy, the probability of measuring a set of number of photons (or digitalization counts proportional to detected light) $\mathbf {m} =[m_{0},...,m_{K}]$ for expected values $\mathbf {E} =[E_{0},...,E_{K}]$ for a detector with $K+1$ pixels is given by
$P(\mathbf {m} \vert \mathbf {E} )=\prod _{i}^{K}\mathrm {Poisson} (E_{i})=\prod _{i}^{K}{\frac {{E_{i}}^{m_{i}}e^{-E_{i}}}{m_{i}!}}$
it is convenient to work with $\ln(P)$ since in the context of maximum likelihood estimation we want to find the position of the maximum of the likelihood function and we are not interested in its absolute value.
$\ln(P(m\vert E))=\sum _{i}^{K}\left[(m_{i}\ln E_{i}-E_{i})-\ln(m_{i}!)\right]$
Again since $\ln(m_{i}!)$ is a constant, it will not give any additional information regarding the position of the maximum, so let's consider
$\alpha (m\vert E)=\sum _{i}^{K}\left[m_{i}\ln E_{i}-E_{i}\right]$
where $\alpha $ is something that shares the same maximum position as $P(m\vert E)$. Now let's consider that $E$ comes from a ground truth $x$ and a measurement $\mathbf {H} $ which we assume to be linear. Then
$\mathbf {E} =\mathbf {H} \mathbf {x} $
where a matrix multiplication is implied. We can also write this in the form
$E_{m}=\sum _{n}^{K}H_{mn}x_{n}$
where we can see how $H$, mixes or blurs the ground truth.
It can also be shown that the derivative of an element of $\mathbf {E} $, $(E_{i})$ with respect to some other element of $x$ can be written as:
${\frac {\partial E_{i}}{\partial x_{j}}}=H_{ij}$
(1)
Tip: it's easy to see this by writing a matrix $\mathbf {H} $ of say (5 x 5) and two arrays $\mathbf {E} $ and $\mathbf {x} $ of 5 elements and checking it. This last equation can be interpreted as how much one element of $\mathbf {x} $, say element $i$, influences the other elements $j\neq i$ (and of course the case $i=j$ is also taken into account). For example, in a typical case an element of the ground truth $\mathbf {x} $ will influence nearby elements in $\mathbf {E} $ but not the very distant ones (a value of $0$ is expected on those matrix elements).
Now, the key and arbitrary step: we don't know $x$ but we want to estimate it with ${\hat {\mathbf {x} }}$, let's call ${\hat {\mathbf {x} _{old}}}$ and ${\hat {\mathbf {x} _{new}}}$ the estimated ground truths while we are using the RL algorithm, where the hat symbol is used to distinguish ground truth from estimator of the ground truth
${\hat {x}}_{new}={\hat {x}}_{old}+\lambda {\frac {\partial \ \alpha (m\vert E(x))}{\partial x}}|_{{\hat {x}}_{old}}$
(2)
Where ${\frac {\partial }{\partial x}}$ stands for a $K$-dimensional gradient. If we work on the derivative of $\alpha (m\vert E(x))$ we get
${\frac {\partial \ \alpha (m\vert E(x))}{\partial x_{j}}}={\frac {\partial }{\partial x_{j}}}\sum _{i}^{K}\left[m_{i}\ln E_{i}-E_{i}\right]=\sum _{i}^{K}\left[{\frac {m_{i}}{E_{i}}}{\frac {\partial }{\partial x_{j}}}E_{i}-{\frac {\partial }{\partial x_{j}}}E_{i}\right]=\sum _{i}^{K}{\frac {\partial E_{i}}{\partial x_{j}}}\left[{\frac {m_{i}}{E_{i}}}-1\right]$
and if we now use (1) we get
${\frac {\partial \ \alpha (m\vert E(x))}{\partial x_{j}}}=\sum _{i}^{K}H_{ij}\left[{\frac {m_{i}}{E_{i}}}-1\right]$
But we can also note that ${H}_{ji}^{T}=H_{ij}$ by definition of transpose matrix. And hence
${\frac {\partial \ \alpha (m\vert E(x))}{\partial x_{j}}}=\sum _{i}^{K}{H}_{ji}^{T}\left[{\frac {m_{i}}{E_{i}}}-1\right]$
(3)
Then if we consider $j$ spanning all the elements from $1$ to $K$ this equation can be rewritten in its vectorial form
${\frac {\partial \ \alpha (m\vert \mathbf {E} (x))}{\partial x}}={\mathbf {H} ^{T}}\left[{\frac {\mathbf {m} }{\mathbf {E} }}-\mathbf {1} \right]$
where $\mathbf {H} ^{T}$ is a matrix and $m$, $E$ and $\mathbf {1} $ are vectors. Let's now propose the following arbitrary and key step
$\lambda ={\frac {{\hat {\mathbf {x} }}_{old}}{\mathbf {H} ^{T}\mathbf {1} }}$
(4)
where $\mathbf {1} $ is a vector of ones of size $K$ (same as $m$, $E$ and $x$) and the division is element-wise. Using (3) and (4) we can rewrite (2) as
${\hat {\mathbf {x} }}_{new}={\hat {\mathbf {x} }}_{old}+\lambda {\frac {\partial \alpha (\mathbf {m} \vert \mathbf {E} (x))}{\partial x}}={\hat {\mathbf {x} }}_{old}+{\frac {{\hat {\mathbf {x} }}_{old}}{{\mathbf {H} ^{T}}\mathbf {1} }}{\mathbf {H} ^{T}}\left[{\frac {\mathbf {m} }{\mathbf {E} }}-\mathbf {1} \right]={\hat {\mathbf {x} }}_{old}+{\frac {{\hat {\mathbf {x} }}_{old}}{\mathbf {H} ^{T}\mathbf {1} }}\mathbf {H} ^{T}{\frac {\mathbf {m} }{\mathbf {E} }}-{\hat {\mathbf {x} }}_{old}$
which yields
${\hat {\mathbf {x} }}_{new}={\hat {\mathbf {x} }}_{old}\mathbf {H} ^{T}\left({\frac {\mathbf {m} }{\mathbf {E} }}\right)/\mathbf {H} ^{T}\mathbf {1} $
(5)
Where division refers to element-wise matrix division and $\mathbf {H} ^{T}$ operates as a matrix but the division and the product (implicit after ${\hat {\mathbf {x} }}_{old}$) are element-wise. Also, $\mathbf {E} =E({\hat {\mathbf {x} }}_{old})=\mathbf {H} {\hat {x}}_{old}$ can be calculated because we assume
- The initial guess ${\hat {\mathbf {x} }}_{0}$ is known (and is typically set to be the experimental data)
- The measurement function $\mathbf {H} $ is known
On the other hand $\mathbf {m} $ is the experimental data. Therefore, equation (5) applied successively, provides an algorithm to estimate our ground truth $\mathbf {x} _{new}$ by ascending (since it moves in the direction of the gradient of the likelihood) in the likelihood landscape. It has not been demonstrated in this derivation that it converges and no dependence on the initial choice is shown. Note that equation (2) provides a way of following the direction that increases the likelihood but the choice of the log-derivative is arbitrary. On the other hand equation (4) introduces a way of weighting the movement from the previous step in the iteration. Note that if this term was not present in (5) then the algorithm would output a movement in the estimation even if $\mathbf {m} =E({\hat {\mathbf {x} }}_{old})$. It's worth noting that the only strategy used here is to maximize the likelihood at all cost, so artifacts on the image can be introduced. It is worth noting that no prior knowledge on the shape of the ground truth $\mathbf {x} $ is used in this derivation.
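For completeness, the matrix form of equation (5) is a one-line update. The toy sketch below only restates that equation; the variable names and the eps safeguard against division by zero are our own additions.

```python
import numpy as np

def rl_matrix_step(x_old, H, m, eps=1e-12):
    """One Richardson-Lucy update in the matrix form of equation (5):
    x_new = x_old * H^T(m / E) / (H^T 1), with E = H x_old."""
    E = H @ x_old                                        # expected counts E = H x_old
    correction = H.T @ (m / (E + eps))                   # H^T (m / E)
    normalisation = H.T @ np.ones_like(m, dtype=float)   # H^T 1
    return x_old * correction / normalisation
```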
Software
• RawTherapee (since v.2.3)
See also
• Deconvolution
• Wiener filter (deconvolution in the presence of additive noise)
References
1. Richardson, William Hadley (1972). "Bayesian-Based Iterative Method of Image Restoration". Journal of the Optical Society of America. 62 (1): 55–59. Bibcode:1972JOSA...62...55R. doi:10.1364/JOSA.62.000055.
2. Lucy, L. B. (1974). "An iterative technique for the rectification of observed distributions". Astronomical Journal. 79 (6): 745–754. Bibcode:1974AJ.....79..745L. doi:10.1086/111605.
3. Shepp, L. A.; Vardi, Y. (1982), "Maximum Likelihood Reconstruction for Emission Tomography", IEEE Transactions on Medical Imaging, 1 (2): 113–22, doi:10.1109/TMI.1982.4307558, PMID 18238264
4. Fish D. A.; Brinicombe A. M.; Pike E. R.; Walker J. G. (1995), "Blind deconvolution by means of the Richardson–Lucy algorithm" (PDF), Journal of the Optical Society of America A, 12 (1): 58–65, Bibcode:1995JOSAA..12...58F, doi:10.1364/JOSAA.12.000058, S2CID 42733042, archived from the original (PDF) on 2019-01-10
Recent questions without an upvoted answer
There are three boxes. One contains apples, another contains oranges and the last one contains both apples and oranges. All three are known to be incorrectly labelled. If you are permitted to open just one box and then pull out and inspect only one fruit, ... boxes? The box labelled 'Apples'. The box labelled 'Apples and Oranges'. The box labelled 'Oranges'. Cannot be determined.
asked Feb 27, 2017 in Verbal Ability by Arjun (5.4k points)
$X$ is a $30$ digit number starting with the digit $4$ followed by the digit $7$. Then the number $X^{3}$ will have $90$ digits $91$ digits $92$ digits $93$ digits
asked Feb 27, 2017 in Numerical Ability by Arjun (5.4k points)
The number of roots of $e^{x}+0.5x^{2}-2=0$ in the range $[-5, 5]$ is $0$ $1$ $2$ $3$
GATE2017-2-GA-10
An air pressure contour line joins locations in a region having the same atmospheric pressure. The following is an air pressure contour plot of a geographical region. Contour lines are shown at $0.05$ bar intervals in this plot. If a possibility of a thunderstorm is given ... drops over a region, which of the following regions is most likely to have a thunderstorm? $P$ $Q$ $R$ $S$
A $25$ kVA, $400$ V, $\Delta$-connected, $3$-phase, cylindrical rotor synchronous generator requires a field current of $5$ A to maintain the rated armature current under short-circuit condition. For the same field current, the open-circuit voltage is ... to terminal voltage), when the generator delivers the rated load at $0.8$ pf leading, at rated terminal voltage is _________.
asked Feb 27, 2017 in new by Arjun (5.4k points)
numerical-answers
If the primary line voltage rating is $3.3$kV (Y side) of a $25$kVA, $Y-\Delta$ transformer (the per phase turns ratio is $5:1$), then the line current rating of the secondary side (in Ampere) is _______.
Consider the system described by the following state space representation ... $y(t)$ at $t=1$ sec (rounded off to three decimal places) is ___________.
A $10 \frac{1}{2}$ digit timer counter possesses a base clock of frequency $100$ MHz. When measuring a particular input, the reading obtained is the same in: Frequency mode of operation with a gating time of one second and Period mode of operation (in the $\times 10$ns scale). The frequency of the unknown input (reading obtained) in Hz is _______.
In the circuit shown in the figure, the diode used is ideal. The input power factor is ________. (Give the answer up to two decimal places).
For the synchronous sequential circuit shown below, the output $Z$ is zero for the initial conditions $Q_{A}, Q_{B}, Q_{C}= Q'_{A}, Q'_{B}, Q'_{C}=100$ The minimum number of clock cycles after which the output $Z$ would again become zero is _________.
In the circuit shown all elements are ideal and the switch $S$ is operated at $10$kHz and $60 \%$ duty ratio. The capacitor is large enough so that the ripple across it is negligible and at steady state acquires a voltage as shown. The peak current in amperes drawn from the $50$ V DC source is _______.(Give the answer up to one decimal place.)
Consider an overhead transmission line with $3$-phase, $50$ Hz balanced system with conductors located at the vertices of an equilateral triangle of length $D_{ab}=D_{bc}=D_{ca}=1m$ as shown in figure below. The resistance of the conductors are ... the effect of ground, the magnitude of positive sequence reactance in $\Omega/km$ (rounded off to three decimal places) is _________.
A $3$-phase, $50$Hz generator supplies power of $3$MW at $17.32$kV to a balanced $3$-phase inductive load through an overhead line. The per phase line resistance and reactance are $0.25 \Omega$ and $3.925 \Omega$ respectively. If the voltage at the generator terminal is $17.87$kV, the power factor of the load is ________.
Two generating units rated $300$MW and $400$MW have governor speed regulation of $6 \%$ and $4 \%$ respectively from no load to full load. Both the generating units are operating in parallel to share a load of $600$MW. Assuming free governor action, the load shared by the larger unit is _________ MW.
A $3$-phase, $2$-pole, $50$Hz, synchronous generator has a rating of $250$ MVA, $0.8$ pf lagging. The kinetic energy of the machine at synchronous speed is $1000$MJ. The machine is running steadily at synchronous speed and delivering $60$ ... removed, assuming the acceleration is constant for $10$ cycles, the value of the power angle after $5$ cycles is __________ electrical degrees.
A thin soap bubble of radius $R=1$ cm, and thickness $a=3.3 \mu m (a << R)$, is at a potential of $1$ V with respect to a reference point at infinity. The bubble bursts and becomes a single spherical drop of a soap (assuming ... , of the resulting single spherical drop with respect to the same reference point at infinity is _________. (Give the answer up to two decimal places.)
A cascade system having the impulse responses $h_{1}(n)= \{ \underset{\uparrow}{1}-1 \}$ and $h_{2}(n)= \{ \underset{\uparrow}{1}-1, 1 \}$ is shown in the figure below, where symbol $\uparrow$ denotes the time origin. The input sequence $x(n)$ for which the cascade system produces an ... $x(n)= \{\underset{\uparrow}{1}-1, 1, 1, 1 \}$ $x(n)= \{\underset{\uparrow}{1}-1, 2, 2, 1 \}$
A $220$ V, $10$ kW, $900$ rpm separately excited $DC$ motor has an armature resistance $R_{a}=0.02 \Omega$. When the motor operates at rated speed and with rated terminal voltage, the electromagnetic torque developed by the motor is $70$Nm. Neglecting the rotational losses of the machine, the current drawn by the motor from the $220$V supply is $34.2$ A $30$ A $22$ A $4.84$ A
The root locus of the feedback control system having the characteristic equation $s^{2}+6Ks+2s+5=0$ where $K>0$, enters into the real axis at. $s=-1$ $s=-\sqrt{5}$ $s=-5$ $s=\sqrt{5}$
asked Feb 27, 2017 in Others by Arjun (5.4k points)
The range of $K$ for which all the roots of the equation $s^{3}+3s^{2}+2s+K=0$ are in the left half of the complex $s$-plane is $0 < K < 6$ $0 < K < 16$ $6 < K < 36$ $6 < K < 16$
Which of the following systems has maximum peak overshoot due to a unit step input? $\frac{100}{s^{2}+10s+100} \\$ $\frac{100}{s^{2}+15s+100} \\ $ $\frac{100}{s^{2}+5s+100} \\$ $\frac{100}{s^{2}+20s+100}$
For the circuit shown in the figure below, it is given that $V_{CE}=\frac{V_{CC}}{2}$. The transistor has $\beta=29$ and $V_{BE}=0.7 V$ when the $B-E$ junction is forward biased. For this circuit, the value of $\frac{R_{B}}{R}$ is $43$ $92$ $121$ $129$
For the circuit shown below, assume that the $OPAMP$ is ideal. Which one of the following is TRUE? $v_{0}=v_{s}$ $v_{0}=1.5 v_{s}$ $v_{0}=2.5 v_{s}$ $v_{0}=5 v_{s}$
The figure below shows a half-bridge voltage source inverter supplying an RL-load with $R=40 \Omega$ and $L=(\frac{0.3}{\pi})H$. The desired fundamental frequency of the load voltage is $50$Hz. The switch control signals of the converter are generated using sinusoidal pulse width ... value of $DC$ source voltage $V_{DC}$ in volts is. $300\sqrt{2}$ $500$ $500\sqrt{2}$ $1000\sqrt{2}$
A person decides to toss a fair coin repeatedly until he gets a head. He will make at most $3$ tosses. Let the random variable $Y$ denote the number of heads. The value of $\text{var} \{Y\}$, where $\text{var} \{ \cdot \}$ denotes the variance, equals $\frac{7}{8} \\$ $\frac{49}{64} \\$ $\frac{7}{64} \\$ $\frac{105}{64}$
For the network given in the figure below, the Thevenin's voltage $V_{ab}$ is $-1.5$V $-0.5$V $0.5$V $1.5$V
A $120$ V DC shunt motor takes $2$A at no load. It takes $7$A on full load while running at $1200$ rpm. The armature resistance is $0.8 \Omega$, and the shunt field resistance is $240 \Omega$. The no load speed, in rpm, is _________.
A star-connected, $12.5$kW, $208$ V (line), $3$-phase, $60$Hz squirrel cage induction motor has following equivalent circuit parameters per phase referred to the stator: $R_{1}=0.3 \Omega, R_{2}=0.3\Omega, X_{1}=0.41\Omega, X_{2}=0.41\Omega$. Neglect ... . The starting current (in Ampere) for this motor when connected to an $80$ V (line), $20$ Hz, $3$-phase AC source is _________.
For the given $2$-port network, the value of transfer impedance $z_{21}$ in ohms is _________.
The initial charge in the $1$ F capacitor present in the circuit shown is zero. The energy in joules transferred from the $DC$ source until steady state condition is reached equals _________. (Give the answer up to one decimal place.)
The mean square value of the given periodic waveform $f(t)$ is __________.
The nominal- $\pi$ circuit of a transmission line is shown in the figure. Impedance $Z=100 \angle 80^{\circ} \Omega$ and reactance $X=3300 \Omega$. The magnitude of the characteristic impedance of the transmission line, in $\Omega$, is __________. (Give the answer up to one decimal place.)
In a load flow problem solved by Newton-Raphson method with polar coordinates, the size of the Jacobian is $100 \times 100$. If there are $20$PV buses in addition to $PQ$ buses and a slack bus, the total number of buses in the system is ______.
Let $ g(x)= \begin{cases} -x & \ x \leq 1 \\ x+1 & \ x \geq 1 \end{cases}$ and $ f(x)= \begin{cases} 1-x & \ x \leq 0 \\ x^{2} & \ x > 0 \end{cases}$. Consider the composition of $f$ and $g$, i.e., $(f {\circ} g) (x) = f (g(x))$. The number of discontinuities in $(f {\circ} g) (x)$ present in the interval $(-\infty, 0)$ is: $0$ $1$ $2$ $4$
The value of the contour integral in the complex plane $\oint \frac{z^{3}-2z+3}{z-2} dz$ along the contour $\mid z \mid =3$, taken counter- clockwise is $-18 \pi i$ $0$ $14 \pi i$ $48 \pi i$
The eigenvalues of the matrix given below are $\begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & -3 & -4 \end{bmatrix}$ $(0, -1, -3)$ $(0, -2, -3)$ $(0, 2, 3)$ $(0, 1, 3)$
For the balanced $Y-Y$ connected $3$-phase circuit shown in the figure below, the line-line voltage is $208$ V rms and the total power absorbed by the load is $432$ W at a power factor of $0.6$ leading. The approximate value of the impedance $Z$ is $33 \angle -53.1^{\circ}\Omega $ $60 \angle 53.1^{\circ}\Omega $ $60 \angle -53.1^{\circ}\Omega $ $180 \angle -53.1^{\circ}\Omega $
In the circuit shown below, the value of capacitor $C$ required for maximum power to be transferred to the load is $1$ nF $1 \mu F$ $1$ mF $10$ mF
The output $y(t)$ of the following system is to be sampled, so as to reconstruct it from its samples uniquely. The required minimum sampling rate is $1000$ samples/s $1500$ samples/s $2000$ samples/s $3000$ samples/s
A phase-controlled, single-phase, full-bridge converter is supplying a highly inductive $DC$ load. The converter is fed from a $230$ V, $50$ Hz, AC source. The fundamental frequency in Hz of the voltage ripple on the $DC$ side is $25$ $50$ $100$ $300$
Improving protein-ligand binding site prediction accuracy by classification of inner pocket points using local features
Radoslav Krivák1 & David Hoksza1
Journal of Cheminformatics volume 7, Article number: 12 (2015) Cite this article
Protein-ligand binding site prediction from a 3D protein structure plays a pivotal role in rational drug design and can be helpful in drug side-effects prediction or elucidation of protein function. Embedded within the binding site detection problem is the problem of pocket ranking – how to score and sort candidate pockets so that the best scored predictions correspond to true ligand binding sites. Although there exist multiple pocket detection algorithms, they mostly employ a fairly simple ranking function leading to sub-optimal prediction results.
We have developed a new pocket scoring approach (named PRANK) that prioritizes putative pockets according to their probability of binding a ligand. The method first carefully selects pocket points and labels them by physico-chemical characteristics of their local neighborhood. A Random Forests classifier is subsequently applied to assign a ligandability score to each of the selected pocket points. The ligandability scores are finally merged into the resulting pocket score to be used for prioritization of the putative pockets. Using multiple datasets, the experimental results demonstrate that the application of our method as a post-processing step greatly increases the quality of the prediction of Fpocket and ConCavity, two state-of-the-art protein-ligand binding site prediction algorithms.
The positive experimental results show that our method can be used to improve the success rate, validity and applicability of existing protein-ligand binding site prediction tools. The method was implemented as a stand-alone program that currently contains support for Fpocket and ConCavity out of the box, but is easily extendible to support other tools. PRANK is made freely available at http://siret.ms.mff.cuni.cz/prank.
Accurate prediction of ligand-binding sites, often simply called pockets, from a 3D protein structure plays a pivotal role in rational drug design [1,2] and can be helpful in drug side-effects prediction [3] and elucidation of protein function [4]. Ligand-binding sites are usually found in deep protein surface cavities, but it should be emphasized that not all binding sites are found in deep cavities. Although empirical studies show that the actual ligand-binding sites tend to coincide with the largest and deepest pocket on the protein's surface [5,6], there exist cases where ligands are found binding to rather exposed shallow clefts [7,8].
A plethora of pocket detection methods, employing a variety of different strategies, is currently available. These include purely geometric methods, energetic methods and methods that make use of evolutionary conservation (see below). All these methods take a protein structure as an input and produce an ordered list of putative pockets, which represent the locations on the protein surface where ligands are expected to bind. Not all reported pockets usually correspond to true binding sites, but it is expected that entries at the top of the ordered list correspond to regions with the highest probability of being a true binding site. Although it is not unusual for one protein to have more than one ligand-binding site, the number of putative pockets predicted by pocket detection methods tends to be much higher than the number of actual known positives. The accuracy of a pocket prediction method is then evaluated by its ability to yield the true (experimentally confirmed) binding sites among the top-n putative pockets on its output (where n is usually taken to be 1, 3 or 5).
As the list of predicted pockets contains false positives, ordering of the pockets, i.e. pocket ranking, plays an important role and substantially contributes to the overall accuracy of the prediction method. More importantly, correct pocket ranking is of practical utility: it helps to prioritize subsequent efforts concerned with the predicted pockets, such as molecular docking or virtual screening.
While many ligand-binding site detection approaches employ complex and inventive algorithms to locate the pockets, the final ranking is often done by a simple method such as ordering by size or scoring pockets by a linear combination of few pocket descriptors. In the present study we are introducing a novel pocket ranking algorithm based on machine learning that can be used as a post-processing step after the application of a pocket prediction method and thus improve its accuracy. We demonstrate that applying this re-ordering step substantially improves identification success rates of two pocket prediction methods, Fpocket [9] and ConCavity [10], on several previously introduced datasets.
Pocket detection approaches
In the last few years, we have been able to observe increased interest in the field of pocket detection indicated by a number of recently published reviews [2,11,12], as well as by the influx of new detection methods. The pocket detection algorithms can be categorized based on the main strategy they adopt in the process of binding site identification. Those strategies and their representative methods shall be briefly reviewed in the following paragraphs.
Geometry based methods
The geometrical methods focus mainly on the algorithmic side of the problem of finding concave pockets and clefts on the surface of a 3D structure. Some methods are purely geometrical (LIGSITE [13], LIGSITEcs [14], PocketPicker [5]), while others make use of additional physico-chemical information like polarity or charge (MOE SiteFinder [15], Fpocket [9]).
Energy based methods
The energy based methods build on the approximation of binding potentials or binding energies [16]. They place various probes on the grid points around the protein's surface and calculate interaction energies of those points with the use of underlying force field software. That results in higher computational demands of these methods [17]. Representative examples of the energy based methods include Q-SiteFinder [18], SiteHound [8], dPredGB [19] or the method by Morita et al. [20].
Evolutionary and threading based methods
The sequence-based evolutionary conservation approaches are based on the presumption that functionally important residues are preferentially conserved during evolution because natural selection acts on function [21]. In LIGSITEcsc [14], a sequence conservation measure of neighboring residues was used to re-rank top-3 putative pockets calculated by LIGSITEcs, which led to an improved success rate (considering the top-1 pocket). In ConCavity [10], unlike in LIGSITEcsc, the sequence conservation information is used not only to re-rank pockets, but it is also integrated directly into the pocket detection procedure. An example of an evolutionary based method which takes into account the structural information is FINDSITE [22,23]. It is based on the observation that even distantly homologous proteins usually have similar folds and bind ligands at similar locations. Thus, ligand-bound structural templates are first selected from the database of already known protein-ligand complexes by a threading (fold recognition) algorithm. The threading algorithm used is not based only on sequence similarity, but also combines various scoring functions designed to match structurally related target/template pairs [24]. The homologous structures found are subsequently aligned with the target protein by a global structural alignment algorithm. Positions of ligands on superimposed template structures are then clustered into consensus binding sites.
Consensus methods
The consensus methods are essentially meta approaches combining results of other methods. The prominent example is MetaPocket [25]. The recently introduced updated version, MetaPocket 2.0 [26], aggregates predicted sites of 8 different algorithms (among them the aforementioned LIGSITEcs, Q-SiteFinder, Fpocket and ConCavity) by taking top 3 sites from each method. The authors demonstrated that MetaPocket performed better than any of the individual methods alone.
Ranking algorithms
Given that every pocket identification algorithm is basically a heuristic it needs to incorporate a scoring function providing a measure of confidence in given prediction. A simple strategy for scoring putative pockets, one that is probably most commonly used, is ordering pockets by a single descriptor — like size (volume), pocket depth, surface area or the overall hydrophobicity. Another strategy for scoring pockets is to combine several pocket descriptors. Fpocket, for example, uses a linear combination of 5 such descriptors which parameters were optimized on a training dataset. The same approach was also successfully applied in recent druggability prediction methods [27,28]. In ConCavity, the ranking procedure considers overall pocket evolutionary conservation score that is projected onto pocket grid probes. One study that focused solely on ranking of pockets previously found by other pocket detection algorithms introduced an approach based on amino acid composition and relative ligand binding propensities of different amino acids termed PLB index [29] (we compare our proposed method with PLB index in results section).
It has been suggested that pocket identification and pocket ranking are independent tasks and therefore should be evaluated separately [30].
It seems that pocket detection methods that have achieved the highest success rates in the aforementioned benchmark are those with more sophisticated ranking algorithms. It has also been suggested that the total coverage (i.e. identification success rate considering all predicted pockets without regard to the ordering) of many algorithms is actually close to 100% [30]. While our experiments do not support such a strong claim they, nevertheless, show that there is indeed a big difference between success rate with regards to top 1, top 3 binding sites and the total coverage. Therefore, there is room for improvement by introducing a more precise and sophisticated ranking algorithm that would rank the identified true pockets higher than the false ones.
Performance of existing methods
Considering that the goal of our method is to increase the performance of the existing state of the art methods we have to raise a question regarding their actual performance. It has been acknowledged that the field of ligand-binding site prediction lacks standardized and widely accepted benchmarking datasets and guidelines [30,31]. In the studies introducing the individual methods, their performance was usually compared to a couple of existing methods with (somewhat expectedly) favorable results, reporting success rates around 90% regarding the top 3 and 70% considering the top 1 predicted sites. The latest review [31] represents the first independent attempt to systematically assess the performance of the pocket detection methods, although only a limited set of 8 representative methods has been considered. It has challenged the previously reported high success rates of the pocket prediction programs. With the exception of FINDSITE, identification success rates of all methods on the new dataset were considerably lower than previously reported (closer to 50% rather than the often reported 70% for top 1 prediction). FINDSITE achieved clearly the best results, but only with the help of a comprehensive threading library that contained proteins highly similar to those from the benchmarking dataset. It was demonstrated that when those were removed from the library, success rates of FINDSITE dropped to the level of other methods [31].
We are introducing here a new pocket ranking method PRANK that can be used to increase the performance of existing pocket prediction methods. Thus the input of the method is a list of predicted putative pockets and its goal is to prioritize the list in such a way that the true pockets appear at the top of that list. PRANK is a machine learning method which is based on predicting ligandability of specific pocket points near the pocket surface. These points represent possible locations of contact atoms of a putative ligand. By aggregating predictions of those points PRANK outputs a score to be used for the re-ranking of the putative pockets. Thus, unlike previous studies that applied machine learning in the context of protein binding site prediction [32-37], we focused on the classification of inner pocket points rather than the classification of exposed amino acid residues or whole pockets. The following list outlines the PRANK method (see also Figure 1):
Sampling inner pocket points from Connolly surface of the protein.
Calculating feature descriptors of the sampled points based on their local chemical neighborhood.
Computing property vectors of chosen protein's solvent exposed atoms.
Projecting distance weighted properties of the adjacent protein atoms onto the sampled inner pocket points.
Computing additional inner pocket points specific features.
Predicting ligandability of the sampled inner pocket points by random forests classifier using their feature vectors.
Aggregating predictions into the final pocket score.
Figure 1. Flowchart of the PRANK pocket ranking approach.
Individual steps are described in greater detail in following sections. For the visualization of classified pocket points see Figure 2.
Figure 2. Visualization of inner pocket points. (a) Displayed is protein 1AZM from DT198 dataset bound to one ligand (magenta). Fpocket predicted 13 pockets that are depicted as colored areas on the protein surface. To rank these pockets, the protein was first covered with evenly spaced Connolly surface points (probe radius 1.6 Å) and only the points adjacent to one of the pockets were retained. Color of the points reflects their ligandability (green = 0…red = 0.7) predicted by Random Forest classifier. PRANK algorithm rescores pockets according to the cumulative ligandability of their corresponding points. Note that there are two clusters of ligandable points in the picture, one located in the upper dark-blue pocket and the other in the light-blue pocket in the middle. The light-blue pocket, which is in fact the true binding site, contains more ligandable points and therefore will be ranked higher. (b) Detailed view of the binding site with ligand and inner pocket points.
Pocket representation
To represent a pocket, PRANK first computes a set of its inner points by selecting evenly spaced points on the Connolly surface [38] that lie at a distance of at most 4 Å from the closest heavy pocket atom. This method of choosing points to represent a pocket is similar to the one used by Morita et al. [20], although we deliberately use only one Connolly surface layer with an optimized probe radius of 1.6 Å. Thus PRANK utilizes only points in a relatively short belt around the pocket surface as the bonding between ligand and protein takes place in this area.
Next, PRANK assigns a feature vector to each of the inner points. The feature vector is built in two steps: first, it calculates feature vectors for specific pocket atoms (AFVs) which are then aggregated into feature vectors of the inner points (IFVs).
The AFVs are computed only for pocket atoms located in the atomic neighborhood of any inner point. The atomic neighborhood of point P is defined as:
$$ \mathrm{A}(P) = \left\{\text{heavy solvent-exposed protein atoms within an 8 Å radius around } P\right\} \qquad (1) $$
The features forming the AFVs include two types of features: residue level features and atomic level features. The residue level features are characteristics of residues inherited by their constituent atoms. Such features include, e.g., physico-chemical properties of standard amino acids or hydropathy index of amino acids [39]. The atomic levels features are specific to individual atoms meaning that different atoms within one amino acid can have different values of those features. Examples of such features are physico-chemical properties of individual amino acid atoms adopted from VolSite druggability prediction study [40] or statistical ligand-binding propensities of amino acid atoms [41] (see Additional file 1: Listings for the complete feature list).
To calculate the feature vector of an inner pocket point (IFV), the AFVs from its atomic neighborhood are aggregated using a simple aggregation function and concatenated with a vector of features computed specifically for that point from its local neighborhood. These inner point features include the number of H-bond donors and acceptors, the B-factor of structure atoms or the protrusion index [42]. The following aggregation function is used to project the pocket atoms' feature vectors onto the inner points:
$$ \textrm{IFV}(P) = \sum_{\mathrm{A}_{i}\in \mathrm{A}(P)} \textrm{AFV}\left(\mathrm{A}_{i}\right) \cdot w(\text{dist}(P,\mathrm{A}_{i})) \quad || \quad \text{FV}(P), \qquad (2) $$
where FV is the vector of the inner point specific features and w is a distance weight function:
$$ w(d) = 1 - d/8. \qquad (3) $$
We evaluated several types of weight functions with different parameters (among them quadratic, Gaussian and sigmoid), but in the end we selected the present simple linear function which had produced the best results in the cross-validation experiments.
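A minimal sketch of the aggregation in Equations 2 and 3 is shown below; the array layout, the function name and the use of NumPy are our illustrative assumptions and not PRANK's actual implementation.

```python
import numpy as np

def inner_point_feature_vector(p, atom_coords, atom_fvs, point_fv, cutoff=8.0):
    """Project distance-weighted atom feature vectors (AFVs) onto the inner pocket
    point p (Equation 2) and concatenate the point-specific features FV(p)."""
    dists = np.linalg.norm(atom_coords - p, axis=1)   # distances to protein atoms
    mask = dists <= cutoff                            # atomic neighborhood A(p), 8 A radius
    weights = 1.0 - dists[mask] / cutoff              # w(d) = 1 - d/8 (Equation 3)
    weighted_sum = (atom_fvs[mask] * weights[:, None]).sum(axis=0)
    return np.concatenate([weighted_sum, point_fv])
```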
It also needs to be emphasized that all of the features included in the vectors are local, which means that they are calculated only based on the immediate spatial neighborhood of the points. No regard is taken to the shape and properties of the whole pocket or protein. Although the 8 Å cutoff radius by which we define chemical neighborhood can encompass considerable part of the whole pocket, immediate surrounding atoms have more influence thanks to the fact that we weight their contribution by distance (see Equation 3). Inner pocket points from different parts of the pocket can therefore have very different feature vectors. We propose that this locality has some positive impact on the generalization ability of the model.
One possible negative implication of considering only local features could be that they are not sufficient to account for the ligand-binding quality of certain regions of the protein surface, since some ligand positions could be fixed by a few relatively distant non-covalent bonds. However, our results show that in spite of that concern our local approach leads to practical improvements.
Classification-based ligandability prediction
Similarly to other studies that have tried to predict whether exposed residues of a protein are ligand binding or not, we used a machine learning approach to predict the ligandability of inner pocket points. The ligandability prediction is a binary classification problem for supervised learning. Training datasets of inner pocket points were generated as follows. For a given protein dataset with candidate pockets (e.g., the CHEN11 dataset with Fpocket predictions) we merged all sampled inner pocket points and labeled as positive those located within a 2.5 Å distance of any ligand atom. The resulting point datasets were highly imbalanced in terms of positives and negatives, since most of the candidate pockets and their points were not true ligand binding sites (e.g., the CHEN11-Fpocket dataset contained 451,104 negative and 30,166 positive points, a roughly 15:1 ratio). Compensation techniques such as oversampling, undersampling and cost-sensitive learning are sometimes applied in such scenarios, but in our experiments they only led to a notable degradation of the generalization ability of the trained classifier (i.e., performance on other datasets). The size of the point dataset depends on the density of the points sampled from the Connolly surface of a protein. The numerical algorithm employed to calculate the Connolly surface [43] is parametrized by an integer tessellation level. Our algorithm uses level 2 by default, as higher levels increase the number of points geometrically but do not improve the results.
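As an illustration of this labeling step, the following sketch turns sampled inner pocket points into a labeled training set. Only the 2.5 Å rule comes from the text; the array layout and function name are hypothetical.

```python
import numpy as np

def build_training_set(points, point_features, ligand_coords, positive_cutoff=2.5):
    """Label sampled inner pocket points for supervised learning.

    points         -- (M, 3) coordinates of all sampled inner pocket points
    point_features -- (M, F) their feature vectors (IFVs)
    ligand_coords  -- (L, 3) coordinates of all ligand atoms of the protein
    Returns the feature matrix X and labels y (1 = ligandable, 0 = not).
    """
    # distance from every point to its nearest ligand atom
    d = np.linalg.norm(points[:, None, :] - ligand_coords[None, :, :], axis=2).min(axis=1)
    y = (d <= positive_cutoff).astype(int)   # positive if within 2.5 Å of any ligand atom
    return point_features, y
```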
After preliminary experiments with several machine learning methods we decided to adopt Random Forests [44] as our predictive modelling tool of choice. A Random Forest is an ensemble of trees created using bootstrap samples of the training data and random feature selection in tree induction [45]. In comparison with other machine learning approaches, Random Forests are characterized by outstanding speed (in both the learning and execution phases) and generalization ability [44]. Additionally, Random Forests are robust to the presence of a large number of irrelevant variables, do not require their prior scaling [37], and can cope with complex interaction structures as well as highly correlated variables [46]. The ability of Random Forests to handle correlated variables comes in handy in our case because, for example, features such as hydrophobicity and hydrophilicity are obviously related.
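A minimal training sketch is given below, written against scikit-learn for brevity; the actual implementation uses Weka's Random Forest from Java/Groovy, so the class and parameter names here are scikit-learn's, not the paper's.

```python
from sklearn.ensemble import RandomForestClassifier

def train_point_classifier(X, y):
    """Train a Random Forest on labeled inner point feature vectors.

    No oversampling, undersampling or cost-sensitive weighting is applied,
    since such compensation degraded generalization in our experiments.
    """
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(X, y)
    return clf
```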
To assess the performance of a classifier, three statistics are commonly reported: precision, recall (also called sensitivity) and the Matthews Correlation Coefficient (MCC). MCC is often used to describe the performance of a binary classifier by a single number in scenarios with imbalanced datasets, where predictive accuracy is not an informative assessment index. MCC values range from +1 (perfect prediction), through 0 (random prediction), to −1 (inverse prediction). The performance statistics are calculated as shown below; TP, TN, FP and FN stand for true positive, true negative, false positive, and false negative predictions.
$$ \text{precision} = \frac{\text{TP}}{\text{TP}+\text{FP}} $$
$$ \text{recall} = \frac{\text{TP}}{\text{TP}+\text{FN}} $$
$$ \text{MCC} = \frac{\text{TP} \times \text{TN} - \text{FP} \times \text{FN}} {\sqrt{(\text{TP} + \text{FP}) (\text{TP} + \text{FN}) (\text{TN} + \text{FP}) (\text{TN} + \text{FN})} } $$
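These statistics can be computed directly from the confusion matrix counts; a small sketch (function and variable names are ours):

```python
import math

def precision_recall_mcc(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return precision, recall, mcc
```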
Scoring function
Once the classifier is trained it can be used within PRANK's scoring function to rescore the putative pockets. To do so we utilize the histogram of class probabilities returned by the Random Forest classifier for every sampled inner pocket point. Since our problem is binary (a point either belongs to a true binding site or it does not), the histogram is an ordered pair $[P_{0}, P_{1}]$. The score is then the sum of the squared predicted positive class probabilities of all inner pocket points:
$$ \textrm{PScore} = \sum_{i} \left(P_{1}(V_{i})\right)^{2} $$
Squaring the probabilities puts more emphasis on the points with probability close to 1. Originally, we experimented with a mean-probability-based pocket score in which PScore was divided by the number of inner points. However, we found that the cumulative score consistently gives better results. We attribute this to the fact that the size of a correctly predicted pocket can deviate slightly from the true pocket while it should still be recognized as a true pocket. For an oversized predicted pocket that contains a true binding site, dividing by the number of points would decrease its score.
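In code, the rescoring step reduces to a few lines; continuing the hypothetical sketches above, with clf the trained classifier and pocket_points_fv the IFVs of one putative pocket:

```python
def pocket_score(clf, pocket_points_fv):
    """Cumulative PRANK-style score of one putative pocket.

    pocket_points_fv -- (M, F) feature vectors of the pocket's inner points
    """
    p1 = clf.predict_proba(pocket_points_fv)[:, 1]  # positive class probability per point
    return float((p1 ** 2).sum())                   # sum of squared probabilities
```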
The higher the PScore of a putative pocket, the higher the probability that it is a true pocket. The very last step therefore reorders the putative pockets in decreasing order of their PScores.
Optimization of parameters
Apart from the hyperparameters of the classifier, our method is parameterized by a number of additional parameters that influence various steps of the algorithm, from sampling inner pocket points to calculating and aggregating the features. Since many parameters have an impact on experiment running times and optimizing all parameters at once would be too costly, we optimized the default values of these parameters by linear search, and in some cases by grid search (optimizing two parameters at once). Parameters were optimized with regard to the performance on the CHEN11 dataset (see the datasets section), considering averaged results of repeated independent runs of 5-fold cross-validation. The optimized parameters included, for example, the probe radius of the Connolly surface (1.6 Å), the ligand distance threshold used to label positive and negative points (2.5 Å) and the choice of the weight function in the inner point feature vector building step.
Implementation and efficiency
Our software is implemented in Groovy and Java with the help of the machine learning framework Weka [47] and the bioinformatics libraries BioJava [48] and the Chemistry Development Kit (CDK) [49]. Points on the Connolly surface are calculated by a fast numerical algorithm [43] implemented in CDK.
Rescoring is implemented in a parallel fashion with a configurable number of worker threads and can therefore make use of all of the system's processor cores. In our experience, the running times of our rescoring step were generally lower than the running times of the pocket prediction methods themselves, even on a single thread.
To show that application of PRANK is beneficial irrespective of the test set, we investigated its ability to increase the prediction accuracy on several diverse datasets. The following list briefly introduces those datasets.
CHEN11 – This dataset includes 251 proteins and 476 ligands which were used to benchmark pocket detection methods in a recent comparative review [31]. It was designed with the intention to non-redundantly cover all SCOP families of ligand binding proteins from the PDB. It can be considered a "hard" dataset, as most methods performed rather poorly on it.
ASTEX – Astex Diverse set [50] is a collection of 85 proteins that was introduced as a benchmarking dataset for molecular docking methods.
UB48 – UB48 [14] contains a set of 48 proteins in a bound and unbound state. It has been the most widely used dataset for comparing pocket detection methods. Since it contains mainly small globular proteins with one stereotypical large binding site, it can be seen as a rather "easy" dataset.
DT198 – a dataset of 198 drug-target complexes [26].
MP210 – a benchmarking dataset of 210 proteins in bound state introduced in the MetaPocket study [25].
For each dataset we generated predictions using two algorithms, Fpocket and ConCavity, which we use as model examples in our re-ranking experiments. Fpocket was used with its default parameters in version 1.0a. ConCavity can be run in two modes depending on whether it makes use of sequence conservation information or not. To execute it in the conservation mode, it needs to be provided with pre-calculated residue scores. For this we relied on the pre-computed sequence conservation files available online at the ConCavity website [51]. However, for several proteins from our datasets the conservation files were not available. For these proteins we executed ConCavity with the conservation option turned off. A list of the affected proteins is provided in Additional file 1: Listings. Except for the conservation switch, ConCavity was run with default parameters.
Table 1 shows statistics of the individual datasets together with the average number of pockets predicted per protein by Fpocket and ConCavity. Evidently, Fpocket produces more putative pockets than ConCavity. This number alone, however, is not conclusive, since it may include incorrectly identified pockets. The table therefore also shows the total coverage (percentage of identified pockets), which is clearly in favor of Fpocket. The higher number of putative pockets and the higher coverage make Fpocket a better target for a re-ranking algorithm.
Table 1 Datasets statistics
To evaluate binding site predictions we followed the evaluation methodology introduced in [31]. Unlike previous studies, it uses a ligand-centric rather than a protein-centric approach to calculate success rates. In the ligand-centric approach, for a method to be 100% successful on a protein it must identify a binding pocket on that protein for every relevant ligand in the dataset, whereas the protein-centric approach only requires every protein to have at least one identified binding site. A binding site is considered successfully identified if at least one pocket (of all predicted pockets, or from the top of the list) passes a chosen detection criterion (see below).
Furthermore, instead of reporting success rates for the Top-1 or Top-3 predicted pockets, we report results for Top-n and Top-(n+2) cutoffs, where n is the number of known ligand-binding sites of the protein containing the evaluated binding site. This adjustment was made to accommodate proteins with more than one known binding site (the CHEN11 dataset, also introduced in [31], contains on average more than 2 binding sites per protein, see Table 1). Specifically, if a protein contains two binding sites, then Top-1 reporting is clearly insufficient to distinguish methods which returned a correctly identified pocket in the first position of their result set but differ in the second position. For this reason, using the Top-n and Top-(n+2) cutoffs is more suitable for the ligand-centric evaluation approach.
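A sketch of this ligand-centric Top-n evaluation is given below. The data structures are hypothetical, and passes_criterion stands for either the DCA or the DCC test defined in the next subsection.

```python
def top_n_success_rate(proteins, extra=0):
    """Fraction of ligands whose binding site is found among the top-ranked pockets.

    proteins -- list of dicts with keys:
                'ligands' : known binding sites (one entry per relevant ligand)
                'pockets' : predicted pockets, already sorted by rank/score
    extra    -- 0 for Top-n, 2 for Top-(n+2)
    """
    hits, total = 0, 0
    for prot in proteins:
        n = len(prot['ligands'])              # number of known binding sites
        top = prot['pockets'][:n + extra]
        for ligand in prot['ligands']:
            total += 1
            if any(passes_criterion(pocket, ligand) for pocket in top):
                hits += 1
    return hits / total
```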
Pocket detection criteria
Since a predicted pocket does not need to match the real pocket exactly, we need a criterion defining when the prediction is correct. When evaluating PRANK we adopted the following two criteria.
DCA is defined as the minimal distance between the center of the predicted pocket and any atom of the ligand. A binding site is then considered correctly predicted if DCA is not greater than a chosen threshold, usually 4 Å. It is the most commonly used detection criterion and has been utilized in virtually all previous studies.
DCC is defined as the distance between the center of the predicted pocket and the center of the ligand. It was introduced in the Findsite study [22] to compensate for the size of the ligand.
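Both criteria reduce to simple distance computations; a minimal sketch with coordinates as NumPy arrays (names are ours, and the ligand center is taken here as the geometric center of its atoms):

```python
import numpy as np

def dca(pocket_center, ligand_atoms):
    """Distance from the pocket center to the closest ligand atom."""
    return float(np.linalg.norm(ligand_atoms - pocket_center, axis=1).min())

def dcc(pocket_center, ligand_atoms):
    """Distance from the pocket center to the center of the ligand."""
    return float(np.linalg.norm(ligand_atoms.mean(axis=0) - pocket_center))

def passes_criterion(pocket_center, ligand_atoms, threshold=4.0, criterion=dca):
    return criterion(pocket_center, ligand_atoms) <= threshold
```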
In several studies, criteria based on the volume overlap of pocket and ligand were used in addition to the standard criteria. However, since our method does not change the shape of the predicted pockets, the inclusion of a volume overlap based criterion would not influence the resulting pocket ordering. Therefore, we did not include any such criterion in our evaluation.
To demonstrate PRANK's ability to increase the quality of prediction of a pocket prediction method (Fpocket and ConCavity) we performed two types of tests. First, we used the CHEN11 dataset for cross-validation experiments and second, we trained our prediction model on the whole CHEN11 dataset and used this model to evaluate our method on the rest of the datasets. The same model is also distributed as the default model in our software package. The reason to train the final model on the CHEN11 dataset is its structural diversity and the fact that it was compiled to include all known ligands for the given proteins. The cross-validation results show the viability of our modelling approach on a difficult dataset (CHEN11), and the evaluation of the final model on the remaining datasets attests to the generalization ability and out-of-the-box applicability of our software.
The results, including the performance statistics of the classifier, are summarized in Table 2. The Top-n column displays the success rate of the particular method (Fpocket or ConCavity) when PRANK is not involved, while the Rescored column shows the success rate when PRANK was utilized as a post-processing step. It should be emphasized that, since PRANK's goal is not to discover any new pockets, the maximum achievable success rate is bounded from above by the total coverage of the native prediction method, as displayed in the All column. In other words, the difference between Top-n and All represents the possible improvement margin, i.e., the highest nominal improvement in success rate for the Top-n cutoff that can be achieved by optimal reordering of the candidate pockets. Thus, the Improvement column shows the nominal improvement of PRANK, while the %possible column shows the achieved percentage of the possible improvement margin. Finally, the last three columns show the statistics related to PRANK's underlying Random Forest classifier itself.
Table 2 Rescoring Fpocket and ConCavity predictions with PRANK: cross-validation results on CHEN11 dataset and the results of the final prediction model (trained on CHEN11-Fpocket) for all datasets
The results clearly show that the application of PRANK, using the DCA pocket detection criterion with a 4 Å threshold, considerably outperformed the native ranking methods of Fpocket and ConCavity on all the evaluation datasets. In most cases more than 50% of the possible improvement (the %possible column) was achieved. Translated into absolute numbers, this means that in some cases using PRANK can boost the overall prediction performance of a method by up to 20% (the Improvement column) with respect to the absolute achievable maximum.
We also conducted experiments showing how PRANK behaves when the distance threshold in the DCA pocket detection criterion varies. The results, carried out on the CHEN11 dataset, demonstrate that the improvement brought by PRANK is essentially independent of the threshold used (see Figure 3). Finally, to explore PRANK's qualities in greater detail, Figure 4 displays the success rates for different distance thresholds and different Top-N cutoffs on the CHEN11-Fpocket dataset.
Rescoring Fpocket predictions on the CHEN11 dataset. Success rates of Fpocket compared with results rescored by PRANK on the CHEN11 dataset considering Top-n, Top-(n+2) and all pockets (total coverage). Identification success is measured by the DCA criterion for a range of integer cutoff distances. Displayed results for rescored pockets are averaged over ten independent 5-fold cross-validation runs.
Detailed results. Table and heatmap showing success rates [%] of Fpocket predictions for the original and rescored output lists of pockets, together with the nominal improvements made by the PRANK rescoring algorithm on the CHEN11 dataset (measured by the DCA and DCC criteria for different integer cutoff distances). For the DCA criterion the biggest improvements were achieved around the meaningful 4–6 Å cutoff distances. Displayed results are averaged over ten independent 5-fold cross-validation runs. The four columns in each group show success rates calculated considering progressively more predicted pockets ranked at the top (where n is the number of known ligand-binding sites of the protein containing the evaluated binding site). For proteins with just one binding site these correspond to the Top-1, Top-3 and Top-5 cutoffs that were commonly used to report results in previous ligand-binding site prediction studies.
Furthermore, we compared the performance of PRANK against two simpler pocket ranking methods: the PLB index, which is based on amino acid composition [29], and simple ordering of pockets by volume, which serves as a baseline. The PLB index was originally developed to rescore pockets of MOE SiteFinder [15]. We reimplemented the method and used it to rescore pockets found by Fpocket and ConCavity. The results of the comparison are summarized in Table 3. Using PRANK to rescore Fpocket outperforms both ranking methods on all datasets, while for ConCavity predictions PRANK is outperformed only in individual cases: by volume ranking on the Astex dataset and by the PLB index on the U(B)48 datasets. The improvement from applying PRANK is more significant when rescoring the outputs of Fpocket than of ConCavity. This can be attributed to the fact that ConCavity predicts, on average, fewer putative pockets than Fpocket (see Table 1). The lower margin then allows even a simple method to yield relatively good performance, since the possibility of error is lower as well. We can conclude that PRANK is better at prioritizing long lists of pockets that contain many false positives and therefore gives more stable results. All results are summarized in Additional file 2: Tables.
Table 3 PRANK vs. simpler rescoring methods
Although we believe that the overall performance of the PRANK method is good enough, the performance of the underlying prediction model itself can be considered less satisfactory (see the last three columns in Table 2). In a few cases the classifier achieved a precision of less than 0.5, which means that more than half of all the predicted positives were predicted incorrectly. Despite that, reordering pockets according to the new scores led to improvements. This is possible, first, because even predictions deemed false positives (not within a 2.5 Å distance of the ligand) can actually be points from true pockets and thus contribute to their score. Secondly, because of the particular way we calculate the final pocket score (see Equation 7), even the predictions labeled as negative (having a P1 probability lower than 0.5) contribute to the score to some extent.
Methods based on evolutionary conservation (such as ConCavity and LIGSITEcsc) are biased towards binding sites with biological ligands (meaning ligands that have a biological function, i.e., 'are supposed to bind there') and can therefore miss pockets that are not evolutionarily conserved but are still ligandable with respect to their physico-chemical properties. Those are perhaps the most interesting pockets, because among them we can find novel binding sites for which synthetic ligands can be designed. Our method, on the other hand, is based only on local geometric and physico-chemical features of points near the protein surface and is therefore, we believe, not prone to such bias.
It can be argued that, since our model is trained on a particular dataset, it is biased towards the binding sites in this dataset. This is an inherent potential issue of all methods based on machine learning from examples. However, we believe that by training a classifier to predict the ligandability of pocket points (which represent a local chemical neighborhood rather than the whole pocket) we provided a way to achieve sufficient generalization and therefore the ability to correctly predict the ligandability of novel sites.
While our rescoring method leads to significant improvements of the final success rates of binding site predictions, the performance of the classifier itself is less satisfactory (see Table 2). Here we try to outline possible reasons. Several indicators point to the fact that the training data we are dealing with in the classification phase are very noisy.
This can be due to two main reasons: one is related to the feature extraction and the other, more fundamental, has to do with completeness (or rather incompleteness) of the available experimental data.
Regarding the feature extraction, it is possible that (a) our feature set is not comprehensive enough and/or (b) we somehow dilute our feature vectors in the aggregation step mixing positives and negatives. While we cannot rule out the possibility that either could be the case, it is practically impossible to prove such a conclusion.
As for the available experimental data, on the other hand, it is easy to see how their inherent incompleteness could be contributing to the noisiness of our datasets. If we establish some region on a protein's surface as a true ligand-binding site, this, by definition, means that there is an experimentally confirmed 3D structure of the complex available and thus there exists a ligand which binds at exactly that place. All positives in our datasets are therefore correctly labeled.
What about negatives? Negatives, in our case, are practically everything else, or more precisely, all other points within the putative pockets. Hence, we can ask the following question: if a point near the protein surface is labeled as negative, does that mean that no ligand could bind at that place (because of its unfavorable physico-chemical properties), or do we simply not have a crystal structure in which such an event happens? We have no means of giving a definite answer to this question, but we suppose that some points are incorrectly labeled as negatives because of the inherent lack of complete experimental data (complete in the sense of confirming or ruling out binding with all possible ligands).
The dataset that was used to train our final classification model (CHEN11) had been constructed in a way that makes the presence of false negatives less likely, by including all known PDB ligands for the proteins present in the dataset. It is possible that it would prove better to work with much more narrowly defined negatives, that is, to take our negatives only from the putative pockets for which no ligand has been found despite a deliberate effort. However, this approach would have its own problems, since examples of such cases are quite rare [30,52] and, although they exist, they do not cover the structural diversity of the whole PDB the way the CHEN11 dataset does. Moreover, there are known cases where a ligand has been found for pockets that were previously deemed unligandable [53]. Another source of more reliable negatives could be proteins deemed unligandable by physical fragment screens [54]. Nonetheless, as it could be quite interesting to see the effect this would have on the performance of our method, we leave it for future research.
We introduced PRANK, a novel method to be used as a post-processing step for any pocket identification method, providing a rescoring mechanism to prioritize the predicted putative pockets. Since pocket prediction tools output many false positive results, a subsequent prioritization step can greatly boost their performance. PRANK is based on machine learning, which provides the ability to predict the ligandability of specific pocket points. The predictions are combined into a score for a given putative pocket, which is then used in the re-ranking phase. As demonstrated on multiple datasets using the examples of Fpocket and ConCavity, the method consistently increases the performance of the pocket detection methods by correct prioritization of the putative sites. PRANK is distributed as a freely available tool currently capable of working with the outputs of Fpocket and ConCavity, but it can easily be adapted to process the output of basically any pocket prediction tool. We believe that we have addressed a previously neglected problem of pocket scoring and thus the introduced method and the accompanying software present a valuable addition to the array of publicly available cheminformatics tools. PRANK is freely available at http://siret.ms.mff.cuni.cz/prank.
a Although a beta of Fpocket version 2.0 was available, we decided to use version 1.0 since it consistently yielded better results.
Zheng X, Gan L, Wang E, Wang J. Pocket-based drug design: Exploring pocket space. AAPS J. 2013; 15(1):228–41.
Pérot S, Sperandio O, Miteva M, Camproux A, Villoutreix B. Druggable pockets and binding site centric chemical space: a paradigm shift in drug discovery. Drug Discovery Today. 2010; 15(15-16):656–67.
Xie L, Xie L, Bourne PE. Structure-based systems biology for analyzing off-target binding. Curr Opin Struct Biol. 2011; 21(2):189–99.
Konc J, Janežič D. Binding site comparison for function prediction and pharmaceutical discovery. Curr Opin Struct Biol. 2014; 25:34–9.
Weisel M, Proschak E, Schneider G. Pocketpicker: analysis of ligand binding-sites with shape descriptors. Chem Cent J. 2007; 1(1):7.
Sotriffer C, Klebe G. Identification and mapping of small-molecule binding sites in proteins: computational tools for structure-based drug design. Il Farmaco. 2002; 57(3):243–51.
Nisius B, Sha F, Gohlke H. Structure-based computational analysis of protein binding sites for function and druggability prediction. J Biotechnol. 2012; 159(3):123–34.
Ghersi D, Sanchez R. EasyMIFS and SiteHound: a toolkit for the identification of ligand-binding sites in protein structures. Bioinf (Oxford, England). 2009; 25(23):3185–6.
Le Guilloux V, Schmidtke P, Tuffery P. Fpocket: An open source platform for ligand pocket detection. BMC Bioinf. 2009; 10(1):168.
Capra JA, Laskowski RA, Thornton JM, Singh M, Funkhouser TA. Predicting protein ligand binding sites by combining evolutionary sequence conservation and 3d structure. PLoS Comput Biol. 2009; 5(12):1000585.
Henrich S, Outi S, Huang B, Rippmann F, Cruciani G, Wade R. Computational approaches to identifying and characterizing protein binding sites for ligand design. J Mol Recognit: JMR. 2010; 23(2):209–19.
Leis S, Schneider S, Zacharias M. In silico prediction of binding sites on proteins. Curr Med Chem. 2010; 17(15):1550–62.
Hendlich M, Rippmann F, Barnickel G. LIGSITE: automatic and efficient detection of potential small molecule-binding sites in proteins. J Mol Graphics Modell. 1997; 15(6):359–63, 389.
Huang B, Schroeder M. Ligsitecsc: predicting ligand binding sites using the connolly surface and degree of conservation. BMC Struct Biol. 2006; 6(1):19.
Labute P, Santavy M. Locating Binding Sites in Protein Structures. http://www.chemcomp.com/journal/sitefind.htm. Accessed 2013-07-16.
Hajduk PJ, Huth JR, Tse C. Predicting protein druggability. Drug Discovery Today. 2005; 10(23-24):1675–82.
Schmidtke P, Axel B, Luque F, Barril X. MDpocket: open-source cavity detection and characterization on molecular dynamics trajectories. Bioinf (Oxford, England). 2011; 27(23):3276–85.
Laurie A, Jackson R. Q-SiteFinder: an energy-based method for the prediction of protein-ligand binding sites. Bioinf (Oxford, England). 2005; 21(9):1908–16.
Schneider S, Zacharias M. Combining geometric pocket detection and desolvation properties to detect putative ligand binding sites on proteins. J Struct Biol. 2012; 180(3):546–50.
Morita M, Nakamura S, Shimizu K. Highly accurate method for ligand-binding site prediction in unbound state (apo) protein structures. Proteins. 2008; 73(2):468–79.
Roy A, Zhang Y. Recognizing protein-ligand binding sites by global structural alignment and local geometry refinement. Struct (London, England:1993). 2012; 20(6):987–97.
Brylinski M, Skolnick J. A threading-based method (FINDSITE) for ligand-binding site prediction and functional annotation. Proc Nat Acad Sci USA. 2008; 105(1):129–34.
Skolnick J, Brylinski M. FINDSITE: a combined evolution/structure-based approach to protein function prediction. Briefings Bioinf. 2009; 10(4):378–91.
Skolnick J, Kihara D, Zhang Y. Development and large scale benchmark testing of the PROSPECTOR_3 threading algorithm. Proteins. 2004; 56(3):502–18.
Huang B. MetaPocket: a meta approach to improve protein ligand binding site prediction. Omics: J integrative Biol. 2009; 13(4):325–30.
Zhang Z, Li Y, Lin B, Schroeder M, Huang B. Identification of cavities on protein surface using multiple computational approaches for drug binding site prediction. Bioinf (Oxford, England). 2011; 27(15):2083–8.
Schmidtke P, Barril X. Understanding and predicting druggability. a high-throughput method for detection of drug binding sites. J Med Chem. 2010; 53(15):5858–67.
Krasowski A, Muthas D, Sarkar A, Schmitt S, Brenk R. Drugpred: a structure-based approach to predict protein druggability developed using an extensive nonredundant data set. J Chem Inf Model. 2011; 51(11):2829–42.
Soga S, Shirai H, Kobori M, Hirayama N. Use of amino acid composition to predict ligand-binding sites. J Chem Inf Model. 2007; 47(2):400–6. PMID: 17243757.
Schmidtke P. Protein-ligand binding sites Identification, characterization and interrelations. PhD thesis, University of Barcelona (September 2011).
Chen K, Mizianty M, Gao J, Kurgan L. A critical comparative assessment of predictions of protein-binding sites for biologically relevant organic compounds. Struct (London, England: 1993). 2011; 19(5):613–21.
Fariselli P, Pazos F, Valencia A, Casadio R. Prediction of protein–protein interaction sites in heterocomplexes with neural networks. Eur J Biochemistry/FEBS. 2002; 269(5):1356–61.
Bordner AJ. Predicting small ligand binding sites in proteins using backbone structure. Bioinf (Oxford, England). 2008; 24(24):2865–71.
Sikic M, Tomic S, Vlahovicek K. Prediction of protein-protein interaction sites in sequences and 3d structures by random forests. PLoS Computational Biol. 2009; 5(1):1000278.
Zhou H-X, Shan Y. Prediction of protein interaction sites from sequence profile and residue neighbor list. Proteins: Struct Funct Bioinf. 2001; 44(3):336–43.
Xiong Y, Xia J, Zhang W, Liu J. Exploiting a reduced set of weighted average features to improve prediction of dna-binding residues from 3d structures. PloS one. 2011; 6(12):28440.
Nayal M, Honig B. On the nature of cavities on protein surfaces: application to the identification of drug-binding sites. Proteins. 2006; 63(4):892–906.
Connolly M. Solvent-accessible surfaces of proteins and nucleic acids. Science. 1983; 221(4612):709–13.
Kyte J, Doolittle RF. A simple method for displaying the hydropathic character of a protein. Journal of Molecular Biology. 1982; 157(1):105–32.
Desaphy J, Azdimousa K, Kellenberger E, Rognan D. Comparison and druggability prediction of protein-ligand binding sites from pharmacophore-annotated cavity shapes. J Chem Inf Model. 2012; 52(8):2287–99.
Khazanov NA, Carlson HA. Exploring the composition of protein-ligand binding sites on a large scale. PLoS Comput Biol. 2013; 9(11):1003321.
Pintar A, Carugo O, Pongor S. Cx, an algorithm that identifies protruding atoms in proteins. Bioinformatics. 2002; 18(7):980–4.
Eisenhaber F, Lijnzaad P, Argos P, Sander C, Scharf M. The double cubic lattice method: Efficient approaches to numerical integration of surface area and volume and to dot surface contouring of molecular assemblies. Journal of Computational Chemistry. 1995; 16(3):273–84.
Breiman L. Random forests. Machine Learning. 2001; 45(1):5–32.
Svetnik V, Liaw A, Tong C, Culberson JC, Sheridan RP, Feuston BP. Random forest: a classification and regression tool for compound classification and qsar modeling. Journal of chemical information and computer sciences. 2003; 43(6):1947–58.
Boulesteix A-L, Janitza S, Kruppa J, König IR. Overview of random forest methodology and practical guidance with emphasis on computational biology and bioinformatics. Wiley Interdisciplinary Rev: Data Min Knowledge Discovery. 2012; 2(6):493–507.
Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH. The weka data mining software: an update. ACM SIGKDD Explorations Newsletter. 2009; 11(1):10–8.
Prlic A, Yates A, Bliven SE, Rose PW, Jacobsen J, Troshin PV, et al. BioJava: an open-source framework for bioinformatics in 2012. Bioinf (Oxford, England). 2012; 28(20):2693–5.
Steinbeck C, Han Y, Kuhn S, Horlacher O, Luttmann E, Willighagen E. The chemistry development kit (cdk): An open-source java library for chemo- and bioinformatics. J Chem Inf Comput Sci. 2003; 43(2):493–500. PMID: 12653513.
Hartshorn M, Verdonk M, Chessari G, Brewerton S, Mooij W, Mortenson P, et al. Diverse, high-quality test set for the validation of protein-ligand docking performance. J Med Chem. 2007; 50(4):726–41.
ConCavity Website. http://compbio.cs.princeton.edu/concavity/.
Hajduk PJ, Huth JR, Fesik SW. Druggability indices for protein targets derived from nmr-based screening data. J Med Chem. 2005; 48(7):2518–25.
Filippakopoulos P, Qi J, Picaud S, Shen Y, Smith WB, Fedorov O, et al. Selective inhibition of BET bromodomains. Nature. 2010; 468(7327):1067–73.
Hajduk PJ. Sar by nmr: putting the pieces together. Mol Interventions. 2006; 6(5):266–72.
This work was supported by the Czech Science Foundation (GA CR) project 14-29032P.
Department of Software Engineering, Charles University in Prague, Prague, Czech Republic
Radoslav Krivák & David Hoksza
Radoslav Krivák
David Hoksza
Correspondence to Radoslav Krivák or David Hoksza.
Both authors proposed the ranking algorithm, based on the pocket representation conceived by DH. RK proposed the machine learning approach, designed and implemented the algorithm, and performed the experiments. The manuscript was written by RK and DH. Both authors read and approved the final manuscript.
Additional file 1
Listings. Document that contains supplementary listings: (1) the complete list of properties of feature vectors used to represent inner points and (2) the lists of proteins by dataset for which ConCavity was run with the conservation mode switched off.
Additional file 2
Tables. Excel file that contains the data used to produce the tables and figures in this article.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Krivák, R., Hoksza, D. Improving protein-ligand binding site prediction accuracy by classification of inner pocket points using local features. J Cheminform 7, 12 (2015). https://doi.org/10.1186/s13321-015-0059-5
Ligand binding site
Protein pocket
Binding site prediction
Pocket score
Molecular recognition
\begin{definition}[Definition:Philosophical Element/Water]
'''Water''' was one of the original four elements that were postulated by the Pythagoreans to compose the Universe.
It had its natural place about the rim of the surface of Earth.
\end{definition}
\begin{definition}[Definition:Propositional Expansion/Existential Quantifier]
Suppose our universe of discourse consists of the objects $\mathbf X_1, \mathbf X_2, \mathbf X_3, \ldots$ and so on.
Let $\exists$ be the existential quantifier.
What $\exists x: \map P x$ means is:
:At least one of $\mathbf X_1, \mathbf X_2, \mathbf X_3, \ldots$ has property $P$.
This means:
:Either $\mathbf X_1$ has property $P$, or $\mathbf X_2$ has property $P$, or $\mathbf X_3$ has property $P$, or ...
This translates into propositional logic as:
:$\map P {\mathbf X_1} \lor \map P {\mathbf X_2} \lor \map P {\mathbf X_3} \lor \ldots$
This expression of $\exists x$ as a disjunction is known as the '''propositional expansion''' of $\exists x$.
The propositional expansion for the existential quantifier can exist in actuality only when the number of objects in the universe is finite.
If the universe is infinite, then the propositional expansion can exist only conceptually, and the existential quantifier cannot be eliminated.
Category:Definitions/Quantifiers
\end{definition}
What are the essential characteristics of asset prices?
I think the question has already been asked about stylized facts of asset returns; this question regards the essential characteristics and normative assumptions used to evaluate asset prices. I.e., given that the economic value of a generic asset is its discounted expected utility, what are some assumptions by which an economic stakeholder may assess a claim's worth?
Price is an expressed belief of value.
The efficient market hypothesis (EMH): the market acts as a price discovery mechanism in which market prices reflect participants' capital-weighted expectation. It should be difficult to prove that the market price is not "correct". "Price is what you pay -- value is what you get" applies only in cases where the market is not efficient.
The fundamental theorem of asset pricing (FTAP), which posits (i) a risk-neutral measure equivalent to the real-world probability measure, which can only be rigorously demonstrated given (ii) complete markets.
FTAP's corollary to EMH: in an efficient marketplace, any price which reflects a $\mathbb P$ (i.e., "actuarial" and/or "real-world") expectation that does not have a different $\mathbb Q$ ("risk-neutral") measure can be considered an efficient price. I.e., any "no-arbitrage" price is permitted under EMH.
Asset prices cannot be negative (or can they???). Since maximum loss is (typically) constrained to principal invested, asset prices cannot theoretically be negative -- but, in practice, investors may assign them negative values (vis-a-vis, the "drag" on value whereby the inclusion of an asset causes a portfolio to be valued less than it if were dis-included).
Corollary of requirement that prices be supported over the domain $\left[0, \infty \right]$: price paid determines both expected return as well as maximum loss (à la Seth Klarman's synthesis regarding Warren Buffet-esque "Margin of Safety").
The fair price of any generic asset is equal to the expected net present value of the discounted cash flows that it is expected to generate.
Time value of money (TVM): Time is money. Time has monetary value which can be expressed as a utility function. Utility is usually interchangeably expressed as a discount factor or an interest rate which represents an expected and/or required rate of return based on an investor's intertemporal preferences regarding consumption and risk. Rational utility should always be a monotonically decreasing function with respect to time -- i.e., "a dollar today is always worth more than a dollar at any time in the future". Therefore, discount rates cannot be negative (or can they???). Also, a discounting function need not be an exponential/geometric (i.e., normative) function, continuous, symmetrical, or time-invariant (see the short discounting sketch after this list).
Modigliani-Miller's postulates on (i) the value of a firm and (ii) the irrelevance of capital structure inform the intuition that -- under a broad range of regulatory frameworks -- capital structuring decisions are not a major factor in determining an asset's enterprise value.
Corollary to MM II: it is simpler to price the firm's underlying assets in totality (and then allocate value to claims in order of seniority) vice value each class of claim individually, vis-a-vis "in order to value a company's stock, one must first value the company itself" (attribution needed).
The Arbitrage Pricing Theory's (APT) statement that asset prices are reflexively a transformed function of returns.
The Capital Asset Pricing Model's (CAPM) application of APT which states that asset prices are a function of diversifiable and non-systemic risk under a mean-variance framework.
Equity is analogous to a long call option on a firm's value; debt is analogous to short put option on a firm's value. A position which is long equity and long debt is a synthetic long position on the firm's underlying assets.
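As a concrete (and purely illustrative) rendering of the TVM/NPV points above, here is a minimal discounting sketch; the exponential discount factor is just one admissible choice, any monotonically decreasing discount function could be substituted, and all numbers and names are made up:

```python
def present_value(cash_flows, discount, times=None):
    """Discounted value of a stream of cash flows.

    cash_flows -- amounts received at the given times
    discount   -- function t -> discount factor, monotonically decreasing in t
    times      -- payment times in years (defaults to 1, 2, 3, ...)
    """
    if times is None:
        times = range(1, len(cash_flows) + 1)
    return sum(cf * discount(t) for cf, t in zip(cash_flows, times))

# Example: a 3-year bond-like claim paying 5, 5 and 105 at a flat 4% required return.
rate = 0.04
pv = present_value([5, 5, 105], lambda t: (1 + rate) ** -t)
```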
Good responses should add depth to and/or expand upon those characteristics already identified. I also would appreciate any relevant references including compendia and/or primers.
I appreciate your thoughts and references.
\begin{document}
\preprint{ } \title[quantum information storage]{Raman scheme for adjustable bandwidth quantum memory} \author{J.-L. Le Gou\"{e}t} \affiliation{Laboratoire Aim\'{e} Cotton, CNRS UPR3321, Univ. Paris Sud, b\^atiment 505, campus universitaire, 91405 Orsay, France} \author{P. R. Berman} \affiliation{Michigan Center for Theoretical Physics, and Physics Department, University of Michigan, Ann Arbor, Michigan \ \ 48109-1040} \keywords{quantum information, storage} \pacs{42.50.Ex,42.50.Md,03.67.-a}
\begin{abstract} We propose a scenario of quantum memory for light based on Raman scattering. The storage medium is a vapor and the different spectral components of the incoming signal are stored in different atomic velocity classes. One uses appropriate pulses to reverse the resulting Doppler phase shift and to regenerate the signal, without distortion, in the backward direction. The different stages of the protocol are detailed and the recovery efficiency is calculated in the semi-classical picture. Since the memory bandwidth is determined by the Raman transition Doppler width, it can be adjusted by changing the angle of the signal and control beams. The optical depth also depends on the beam angle. As a consequence the available optical depth can be optimized, depending on the needed bandwidth. The predicted recovery efficiency is close to 100$\%$ for large optical depth.
\end{abstract} \volumeyear{year} \volumenumber{number} \issuenumber{number} \eid{identifier} \date{\today} \startpage{1} \endpage{ } \maketitle
\section{Introduction}
The storage of quantum information in an atomic ensemble has received a great deal of attention over the past ten years or so. Protocols based on electromagnetically induced transparency (EIT) have been investigated by many groups both theoretically \cite{fleisch,fleisch2} and experimentally, leading to storage and retrieval demonstrations of both discrete \cite{chan,eisa} and continuous \cite{appel,honda,cvik} quantum variables. Although successful, the EIT storage scheme suffers from time-bandwidth product limitations. In EIT-based protocols, the reduction of group velocity is used to spatially confine the input signal within the boundaries of the storage medium. Simultaneously, the signal spectrum must not exceed the bandwidth of the transparency window associated with EIT. Since the group velocity is proportional to the width of the transparency window, it follows that the larger the storage bandwidth, the larger the group velocity, and hence the shorter the temporal profile the memory can accommodate. This protocol has been demonstrated using level schemes in which inhomogeneous broadening does not play a significant role. Extension to inhomogeneously broadened systems does not improve the time-bandwidth product capabilities.
On the other hand, inhomogeneous broadening can be of critical importance in other protocols for storing quantum information. For example, Doppler broadening determines the storage bandwidth when a signal pulse is totally absorbed in an optically dense medium \cite{mois}. While the temporal components of the input signal are distributed in atomic state coherence along the axial direction in EIT, the spectral components of the input signal are spread over the inhomogeneous frequency distribution of the atoms in the absorption protocol. Just as in EIT, information is stored in a long lifetime Raman coherence in the absorption protocol, but the maximum duration of the input signal is independent of the storage bandwidth, being ultimately limited only by the inverse homogeneous line width. The resulting time-bandwidth product capacity, given by the ratio of the inhomogeneous and homogeneous widths, is reminiscent of the photon-echo based storage techniques that were developed in the past \cite{lin}. However, unlike some of those classical light storage schemes, the proposal in Ref. \cite{mois}, that we shall refer to as the MK protocol, is restricted to systems where the inhomogeneous broadening is provided by the Doppler effect.
A variant of the MK protocol has been proposed for solids in which the inhomogeneous broadening is linked to stochastic variations in atomic transition frequency that depend on an absorbing center's position in a host medium \cite{nils, kraus}. Although there are several proof of principle experiments of this absorption-type protocol \cite{hetet1,hetet2,alex}, all such experiments have involved classical input fields. It should be stressed that the large time-bandwidth product capacity is lost when the Doppler shift is replaced by a more stochastic source of inhomogeneous broadening. In order to keep control of the inhomogeneous phase shift, one selects a narrow spectral group of atoms at the beginning. An external field is used to spread this initial ensemble over the desired bandwidth. Hence the bandwidth is increased at the expense of the available optical density, i.e. at the expense of the capacity to trap the optically carried information within the material.
A key feature in the MK scenario is that the totally absorbed signal is restored without amplification. This contrasts with previous photon echo investigations where large retrieval efficiency results from strong amplification in an inverted medium \cite{aza,corn,tsa,wan1,wan2}. Therefore, unlike these earlier works, the MK protocol is free from the noise associated with spontaneous and stimulated emission. This is closely related to the fact that the control fields do not interact with highly populated states. As a consequence few atoms are promoted to the upper electronic level. Another consequence is that, as the control fields interact with quasi-empty states, they are neither attenuated nor distorted as they travel through the active medium.
Quite surprisingly, the original MK scheme has not been demonstrated experimentally. A possible issue in atomic vapors is the short lifetime of the active optical transition upper level. Indeed, to combine a large optical density with the absence of collisions one has to work on strong lines with short upper level lifetime. As a consequence, some operations have to be carried out on a nanosecond time scale. Specifically, one needs nanosecond $\pi$-pulses to convert optical dipoles into Raman coherences and conversely. In addition, the input signal duration is limited to a few nanoseconds.
In this paper, we propose a Raman variant of the MK scheme that circumvents these limitations. Direct excitation of ground state coherence avoids the introduction of rapidly decaying quantities, yet retains the other advantages of the MK protocol. Quantum memories based on Raman scattering have been proposed in the past \cite{kozhe,nunn,hetet2}. However, previous proposals did not fully examine the dynamics of the system, focusing on steady-state conditions, \cite{kozhe} or they considered a situation in which the temporal signal profile is mapped into a spatial distribution of the atomic ground state coherence \cite{nunn,hetet2}. In our scenario, the spectro-temporal features of the signal are stored in the \textit{spectral distribution} of the ground state coherence. The experimental investigation presented in ref. \cite{hetet2} is actually very close to our situation, but the authors resort to a reversible magnetic field gradient to cover the signal spectrum and to reverse the atomic phase, which ultimately leads to a spatial mapping of the signal. In our case we use optical pulses to reverse the atomic phase.
Our objective in this paper is to describe the underlying physics of the Raman protocol, a goal that can be achieved within the confines of a theory in which all radiation fields are treated classically. The paper is arranged as follows: after presenting a picture of the overall process in section II, we develop a theory for each step of the protocol in sections III-V. In section VI we discuss the range of applicability of the storage method.
\section{Outline of the storage and retrieval procedure}
\begin{figure}
\caption{(Color online) Level scheme and protocol steps. (a) the weak input signal, combined with a strong control field, drives the Raman transition $a-c$. (b) the Doppler phase build-up is stopped by conversion of the coherence $\rho_{ac}$ into $\rho_{ad}$. This is accomplished by $\pi$-pulse excitation of the Raman transition $c-d$. (c) the coherence $\rho_{ac}$ is recovered with the help of a second Raman $\pi$-pulse. (d) a backward propagating control field creates a coherence $\rho_{ab}$ which allows for the restoration of the signal pulse, propagating in the backward direction. }
\label{fig1a}
\end{figure}
The protocol consists of the four steps shown schematically in Fig. \ref{fig1a}. We consider an ensemble of four-level atoms. The four levels can be magnetic state sublevels of the same or different ground state hyperfine levels. The atoms are prepared initially in state $\left\vert {a}\right\rangle .$
\subsubsection{Stage 1}
In the first stage, a quasi-monochromatic control field and the input pulse drive Raman transitions between levels $a$ and $c$. Each field is off-resonant for optical excitation of level $b$, but the difference of the field frequencies is close to that of the Raman transition. The control and input fields have a relative propagation vector $\mathbf{K}$ that leads to a Doppler shift $\mathbf{K\cdot v}$ associated with the two-photon Raman transition. As a consequence the bandwidth that can be absorbed in this Raman process is on the order of $Ku$, where $u$ is a characteristic atomic speed. Of critical importance is that all frequency components of the signal field are depleted in an identical fashion (no pulse distortion) if the bandwidth is less than the inhomogeneous width. The medium is optically dense so the signal pulse is totally attenuated. In other words, as a result of stimulated Raman scattering, the signal pulse energy is totally transferred to the control field. Each atom has a negligibly small population in state $\left\vert {c}\right\rangle $ and the entire population stored in level $c$ is assumed to be small as well, assuming the signal pulse is weak. The control field is turned off following the depletion of the signal pulse. In contrast to the MK protocol in which an optical coherence is created in the first stage, the signal field is transferred directly to a Raman coherence in our protocol that is immune to spontaneous emission decay.
\subsubsection{Stage 2}
The Raman coherence dephases following excitation as a result of the inhomogeneous broadening. As in a photon echo experiment, this dephasing can be reversed by the application of a second pair of pulses. However, if we were to use a two-pulse echo process, the Raman coherence excited by the signal would be affected by velocity changing collisions over the \textit{entire} storage time between the two pulses. Instead, if we use a three-pulse echo configuration, the effect of collisions during most of the storage time can be suppressed. The second pulse pair, in effect, freezes the Doppler phase created by the first pulse pair. As in the MK protocol, this phase reversal must be carried out with a Raman pulse that leaves the population in level $a$ unchanged. This is a crucial condition to ensure uniform illumination by the control fields; if any control field is resonant with a transition originating in level $a$, the field will be strongly absorbed in the optically dense medium and unsuitable for this protocol \cite{pop}. All these requirements are satisfied if one applies a Raman $\pi$ pulse between levels $c$ and $d$, following the excitation of the $a-c$ Raman coherence. The relative $\mathbf{K}$ vector of the two fields is the same as that in stage 1, so the net effect of the $\pi$ pulse is to convert the $a-c$ to $a-d$ coherence, while freezing the Doppler phase evolution of this Raman coherence, just as in a stimulated photon echo. The net result is that the original pulse information is now stored in a Raman coherence that is, for the most part, "protected" from the effects of velocity-changing collisions.
\subsubsection{Stage 3}
To prepare the system for the retrieval stage, a $\pi$ pulse is sent into the medium at some later time to restore the $a-c$ coherence. This Raman pulse has its relative $\mathbf{K}$ vector reversed and prepares the atoms with a spatial Raman coherence that allows for retrieval of the signal in stage 4.
\subsubsection{Stage 4}
A control pulse is sent into the sample that is identical to the initial control pulse, but with its propagation vector \textit{reversed}. The first three stages produce a Raman coherence that allows a field to build up in a direction opposite to that of the input signal pulse. In other words, the depletion of the signal field is reversed and the original signal pulse is restored as it exits the sample propagating in a direction opposite to that of the original signal pulse. In principle, the pulse can be restored with close to 100\% fidelity. In the context of creating a functional quantum memory device, splitting the phase reversal in two steps (stages 2 and 3) reduces the waiting time between the read-out pulse and the emission of the restored signal, making the retrieved data available more rapidly after the read-out decision.
We now describe in detail how each of these steps can be achieved.
\section{Storage step}
Storage corresponds to the mapping of the optically carried information into atomic Raman coherence, accompanied by attenuation of the input signal field.
\subsection{Buildup of a Raman atomic superposition state}
The input signal is depleted by stimulated Raman scattering in a three-level $\Lambda$-system. The weak pulse to be stored, combined with the control field, resonantly excites the Raman transition $\left\vert {a}\right\rangle -\left\vert {c}\right\rangle $. Both fields are tuned off resonance from the optical transition to upper level $\left\vert {b}\right\rangle $. Storage is performed into the superposition of the ground substates $\left\vert {a}\right\rangle $ and $\left\vert {c}\right\rangle $. Since the atoms are prepared initially in state $\left\vert {a}\right\rangle $, the medium is transparent to the control field that uniformly illuminates all the active atoms. Assuming that the control field is constant during the signal pulse, we write the electric field of the control field as the plane wave \begin{equation} E_{2}(\mathbf{r},t)=\mathcal{A}_{2}e^{i\mathbf{k}_{2}.\mathbf{r}-i\omega_{2} t}+c.c \label{cont} \end{equation} where the envelope $\mathcal{A}_{2}$ is a time- and space-independent parameter. The control field wave vector and frequency are denoted $\mathbf{k}_{2}$ and $\omega_{2}$, respectively.
The signal pulse Rayleigh range is assumed to be much larger than the storage material length $L$ to insure that its diameter does not vary significantly as it propagates in the medium. The electric field of the signal field can be expressed as \begin{equation} E_{1}(\mathbf{r},t)=\mathcal{A}_{1}(\mathbf{r},t)e^{i\mathbf{k}_{1} .\mathbf{r}-i\omega_{1}t}+c.c., \label{sigf} \end{equation} where $\mathcal{A}_{1}(\mathbf{r},t)$ is the envelope, $\mathbf{k}_{1}$ the propagation vector, and $\omega_{1}$ the carrier frequency of this field. The spatial dependence of $\mathcal{A}_{1}(\mathbf{r},t)$ reflects the radial distribution of the field and its attenuation along direction $\mathbf{k}_{1} $. When $L/c$ is not much smaller than the pulse duration, retardation also contributes to the field envelope spatial dependence. The Rabi frequencies associated with the signal and control fields are denoted by \begin{subequations} \label{rabi} \begin{align} \Omega_{1}(\mathbf{r},t) & =-\mu_{ba}\mathcal{A}_{1}(\mathbf{r} ,t)/\hbar;\label{rabia}\\ \Omega_{2} & =-\mu_{bc}\mathcal{A}_{2}/\hbar, \label{rabib} \end{align} where $\mu_{ba}$ and $\mu_{bc}$ are optical dipole moment matrix elements. It is assumed that $k_{1}\approx k_{2}\equiv k$.
In perturbation theory with the population of level $a$ set equal to unity, the coupled equations for the optical and Raman coherences are: \end{subequations} \begin{subequations} \label{eqs} \begin{align} \dot{\tilde{\rho}}_{ab;m} & =(i\Delta_{1}-\gamma_{ab})\tilde{\rho} _{ab;m}+i\Omega_{1}^{\ast}\left[ \mathbf{r}_{m}(t),t\right] e^{-i\mathbf{k} _{1}.\mathbf{r}_{m}(t)}+i\tilde{\rho}_{ac;m}\Omega_{2}^{\ast}e^{-i\mathbf{k} _{2}.\mathbf{r}_{m}(t)}\\ \dot{\tilde{\rho}}_{ac;m} & =[i(\Delta_{1}-\Delta_{2})-\gamma_{ac} ]\tilde{\rho}_{ac;m}+i\tilde{\rho}_{ab;m}\Omega_{2}e^{i\mathbf{k} _{2}.\mathbf{r}_{m}(t)} \label{Raman_coherence_0} \end{align} where $\Delta_{1}$ and $\Delta_{2}$ are the atom-field detunings for each optical transition, $\gamma_{ab}$ is the decay rate for the $a-b$ coherence, $\gamma_{ac}$ is the decay rate for the $a-c$ coherence, $\tilde{\rho} _{ab;m}=\rho_{ab;m}e^{-i\omega_{1}t}$ and $\tilde{\rho}_{ac;m}=\rho _{ac;m}e^{-i(\omega_{1}-\omega_{2})t}$. These equations give the time evolution of the density matrix elements for atom $m$, located at $\mathbf{r}_{m}(t)$ at time $t$. The manner in which the spatial phases of the field are imprinted on the atoms is readily apparent in Eqs. (\ref{eqs}). Under the assumption that $\Omega_{1},\Omega_{1}^{-1}d\Omega_{1} /dt,\gamma_{ab},ku<<\Delta_{1}$, where $u$ is the most probable atomic speed, the optical coherence adiabatically follows the field variations and can be written as: \end{subequations} \begin{equation} \tilde{\rho}_{ab;m}=-\Omega_{1}^{\ast}\left[ \mathbf{r}_{m}(t),t\right] e^{-i\mathbf{k}_{1}.\mathbf{r}_{m}(t)}/\Delta_{1}-\tilde{\rho}_{ac;m} \Omega_{2}^{\ast}e^{-i\mathbf{k}_{2}.\mathbf{r}_{m}(t)}/\Delta_{1}. \label{optical_coherence} \end{equation} Substituting this expression into Eq. \ref{Raman_coherence_0}, one obtains \begin{equation} \dot{\tilde{\rho}}_{ac;m}=\left[ i\left( \Delta_{1}-\Delta_{2} -\frac{\left\vert \Omega_{2}\right\vert ^{2}}{\Delta_{1}}\right) -\gamma _{ac}\right] \tilde{\rho}_{ac;m}-i\frac{\Omega_{1}^{\ast}\left[ \mathbf{r}_{m}(t),t\right] \Omega_{2}e^{i\mathbf{K}.\mathbf{r}_{m}(t)} }{\Delta_{1}} \end{equation} where \[ \mathbf{K}=\mathbf{k}_{2}-\mathbf{k}_{1}. \] The detuning $\Delta_{1}-\Delta_{2}$ can be adjusted to cancel the light shift $\left\vert \Omega_{2}\right\vert ^{2}/\Delta_{1}$. Finally, the Raman coherence can be expressed as: \begin{equation} \tilde{\rho}_{ac;m}(t)=-i\frac{\Omega_{2}}{\Delta_{1}}\int_{-\infty} ^{t}dt^{\prime}\Omega_{1}^{\ast}\left[ \mathbf{r}_{m}(t^{\prime}),t^{\prime }\right] e^{i\mathbf{K}.\mathbf{r}_{m}(t^{\prime})} \end{equation} where it has been assumed that any decay of $\tilde{\rho}_{ac;m}$ during the signal pulse can be neglected. The density matrix element can be expressed in terms of the atom position at time $t$. Indeed, if collisions do not change the atomic velocity $\mathbf{v}_{m}$ during the signal pulse, the position at $t^{\prime}$ can be expressed as $\mathbf{r}_{m}(t^{\prime})=\mathbf{r} _{m}(t)-\mathbf{v}_{m}(t-t^{\prime})$, so that: \begin{equation} \tilde{\rho}_{ac;m}(t)=-i\frac{\Omega_{2}}{\Delta_{1}}e^{i\mathbf{K} .\mathbf{r}_{m}(t)}\int_{-\infty}^{t}dt^{\prime}\Omega_{1}^{\ast}\left[ \mathbf{r}_{m}(t)-\mathbf{v}_{m}(t-t^{\prime}),t^{\prime}\right] e^{-i\mathbf{K}.\mathbf{v}_{m}(t-t^{\prime})} \label{Raman_coherence} \end{equation}
\subsection{Stimulated Raman scattering of the input signal}
It is assumed that, in the absence of the control field, the scattering of the probe field is negligible owing to the large detuning $\Delta_{1}$. As a consequence, the contribution to the polarization resulting from the first term in Eq. (\ref{optical_coherence}) can be neglected. On the other hand, the control field intensity is sufficiently large to allow for significant stimulated Raman scattering, resulting in a loss of signal field intensity as the signal field propagates in the medium. The Raman contribution to $\tilde{\rho}_{ab;m}$ is given by the second term on the right hand side of Eq. (\ref{optical_coherence}). To determine the modification of the signal field, we need to calculate the polarization associated with the $\tilde{\rho }_{ab;m}$ coherence. We write the macroscopic polarization as \[ P(\mathbf{r},t)=P_{+}(\mathbf{r},t)e^{i\mathbf{k}_{1}.\mathbf{r-}i\omega_{1} t}+P_{+}^{\ast}(\mathbf{r},t)e^{-\left( i\mathbf{k}_{1}.\mathbf{r-} i\omega_{1}t\right) } \]
In going over to a macroscopic polarization, we assume that, at any position $\mathbf{r}$ in the medium, one can define a slice of thickness $l<<2\pi/k$ in the $\mathbf{k}_{1}$ direction, containing many atoms. The macroscopic polarization at $(\mathbf{r},t)$ is obtained by combining the contributions from all the atoms within the slice. Those atoms satisfy the condition $\left\vert \mathbf{\hat{k}}_{1}.\mathbf{s}_{m}(t)\right\vert \leq l/2$, where $\mathbf{\hat{k}}_{1}$ is a unit vector in the $\mathbf{k}_{1}$ direction and $\mathbf{s}_{m}(t)=\mathbf{r}-\mathbf{r}_{m}(t)$. Therefore the positive frequency component $P_{+}(\mathbf{r},t)$ is given by \begin{equation} P_{+}(\mathbf{r},t)=\frac{\mu_{ab}}{\delta V}e^{-i\mathbf{k}_{1}.\mathbf{r} }\displaystyle\sum_{\substack{m\\\left\vert \mathbf{\hat{k}}_{1} .\mathbf{s}_{m}(t)\right\vert \leq l/2}}\tilde{\rho}_{ba;m}\left( \mathbf{r},t\right) , \label{pol} \end{equation} where $\delta V$ represents the slice volume. Combining Eqs. (\ref{optical_coherence}), (\ref{Raman_coherence}), (\ref{pol}), and (\ref{rabia}), we find \begin{equation} P_{+}(\mathbf{r},t)=i\frac{\left\vert \mu_{ab}\right\vert ^{2}\left\vert \Omega_{2}\right\vert ^{2}}{\delta V\hbar\Delta_{1}^{2}}\displaystyle\sum _{\substack{m\\\left\vert \mathbf{\hat{k}}_{1}.\mathbf{s}_{m}(t)\right\vert \leq l/2}}\int_{-\infty}^{t}dt^{\prime}\mathcal{A}_{1}\left[ \mathbf{r} -\mathbf{v}_{m}(t-t^{\prime}),t^{\prime}\right] e^{i\mathbf{K}.\mathbf{v} _{m}(t-t^{\prime})}. \label{pp} \end{equation} As was noted above, the contribution to $\tilde{\rho}_{ba;m}\left( \mathbf{r},t\right) $ from the first term in Eq. (\ref{optical_coherence}) has been neglected since it adiabatically follows the field and vanishes for times greater than the pulse duration.
The atoms are uniformly distributed in space, with density $N$, and their normalized velocity distribution is represented by $W(\mathbf{v})$. Replacing the discrete sum by an integral, according to \[ \frac{1}{\delta V}\displaystyle\sum_{\substack{m\\\left\vert \mathbf{\hat{k} }_{1}.\mathbf{s}_{m}(t)\right\vert \leq l/2}}\rightarrow N\int d^{3} vW(\mathbf{v}), \] enables us to transform Eq. (\ref{pp}) into \begin{equation} P_{+}(\mathbf{r},t)=i\frac{\left\vert \mu_{ab}\right\vert ^{2}\left\vert \Omega_{2}\right\vert ^{2}}{\hbar\Delta_{1}^{2}}N\int d^{3}vW(\mathbf{v} )\int_{-\infty}^{t}dt^{\prime}\mathcal{A}_{1}\left[ \mathbf{r}-\mathbf{v} (t-t^{\prime}),t^{\prime}\right] e^{i\mathbf{K}.\mathbf{v}(t-t^{\prime})}. \end{equation} Provided the input signal spectral width $\delta_{s}$ is smaller than $Ku$, $\mathcal{A}_{1}\left[ \mathbf{r}-\mathbf{v}(t-t^{\prime}),t^{\prime}\right] $ can be taken out of the integral over $t^{\prime}$ and evaluated at $t^{\prime}=t$. In this limit the polarization $P_{+}(\mathbf{r},t)$ reduces to: \begin{subequations} \label{initial_polarization} \begin{align} P_{+}(\mathbf{r},t) & =i\frac{\left\vert \mu_{ab}\right\vert ^{2}\left\vert \Omega_{2}\right\vert ^{2}}{\hbar\Delta_{1}^{2}}\mathcal{A}_{1}(\mathbf{r} ,t)N\int dvW(v)\int_{0}^{\infty}d\tau e^{iKv\tau}\\ & =\frac{i}{k}\frac{N\pi\left\vert \mu_{ab}\right\vert ^{2}W(0)}{\hbar} \frac{k\left\vert \Omega_{2}\right\vert ^{2}}{K\Delta_{1}^{2}}\mathcal{A} _{1}(\mathbf{r},t) \end{align} where $W(v)$ represents the one-dimensional velocity distribution.
This is the key result of this section. Owing to the large inhomogeneous width, the polarization is proportional to the field amplitude and depends locally on this amplitude. In other words, the polarization does not depend on the value of the field amplitude at earlier times as it would in the case of homogeneous broadening. The electric susceptibility $\chi_{R}$ is defined by $P_{+}(\mathbf{r},t)=\epsilon_{0}\chi_{R}\mathcal{A}_{1}(\mathbf{r},t)$ with the intensity absorption coefficient $\alpha_{R}$ given by $\alpha _{R}=k\mathrm{Im}(\chi_{R})$, which leads to \end{subequations} \begin{equation} \alpha_{R}=\frac{k\left\vert \Omega_{2}\right\vert ^{2}}{K\Delta_{1}^{2} }\alpha_{0} \label{Raman_absorption} \end{equation} where $\alpha_{0}$ represents the linear absorption coefficient on the inhomogeneously broadened transition $a-b$. Provided $\delta_{s}<Ku$, the signal propagates without distortion, which implies that all the signal spectral components are uniformly attenuated and stored in the atomic ensemble. The spectral components are mapped into Raman coherence in atoms whose velocities span an interval of order $\delta_{s}/K$ along the direction $\mathbf{K=k}_{2}-\mathbf{k}_{1}$. The parameter $K$ should be adjusted in such a way that $Ku/\delta_{s}$ is larger than unity, to provide a sufficiently large bandwidth to store the signal pulse, but not much larger, since the signal depletion and Raman storage vary inversely with $K$.
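As a rough numerical illustration of this trade-off, the short script below evaluates the ratio $\alpha_{R}/\alpha_{0}=(k/K)\left\vert \Omega_{2}/\Delta_{1}\right\vert ^{2}$ of Eq. (\ref{Raman_absorption}) together with the Raman Doppler width $Ku$ for a few beam geometries; all parameter values are assumed for illustration only.
\begin{verbatim}
# Illustrative evaluation of Eq. (Raman_absorption); all numbers are assumed.
import numpy as np
Omega2 = 2*np.pi*20e6     # control-field Rabi frequency (rad/s), assumed
Delta1 = 2*np.pi*1e9      # one-photon detuning (rad/s), assumed
ku     = 1e9              # optical Doppler width (s^-1), assumed
for K_over_k in (0.005, 0.01, 0.05):
    depth_ratio = (1.0/K_over_k)*(Omega2/Delta1)**2   # alpha_R/alpha_0
    bandwidth   = K_over_k*ku                         # Raman Doppler width Ku
    print(K_over_k, depth_ratio, bandwidth)
\end{verbatim}
Decreasing $K/k$ increases the Raman optical depth at the expense of the storage bandwidth, as discussed above.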
\section{Freezing the Doppler phase}
The input signal illuminates the storage medium during a time interval centered at $t_{1}$. The control field is turned off following the signal pulse. At some later time, the Raman coherence, given by Eq. (\ref{Raman_coherence}), can be expressed as: \begin{equation} \tilde{\rho}_{ac;m}(t)=-i\mathrm{e}^{i\mathbf{K}.\mathbf{r}_{m}(t)-\left( i\mathbf{K}.\mathbf{v}_{m}+\gamma_{ac}\right) (t-t_{1})}\mathcal{R} _{m}\label{Raman_coherence_2} \end{equation} where \begin{equation} \mathcal{R}_{m}=\frac{\Omega_{2}}{\Delta_{1}}\int_{-\infty}^{\infty} d\tau\Omega_{1}^{\ast}\left[ \mathbf{r}_{m}(t_{1}),t_{1}+\tau\right] \mathrm{e}^{i\mathbf{K}.\mathbf{v}_{m}\tau} \label{rm} \end{equation} and it was assumed that $\Omega_{1}\left[ \mathbf{r}-\mathbf{v}\tau _{p},t\right] \approx\Omega_{1}\left[ \mathbf{r},t\right] $, where $\tau_{p}$ is the signal pulse duration. There is a build-up of Doppler phase associated with the Raman coherence that grows linearly as a function of time following the interaction with the signal pulse. Although this phase could be reversed at a later time, it is best to nip it in the bud to prevent any deterioration from velocity-changing collisions. To accomplish this task one can send in a Raman $\pi$ pulse having the same $\mathbf{K}$ vector that transfers the amplitude from state $\left\vert {c}\right\rangle $ to an auxiliary level $\left\vert {d}\right\rangle $, or, equivalently, converts coherence $\tilde{\rho}_{ac}$ into $\tilde{\rho}_{ad}$.
\begin{figure}
\caption{(Color online) Schematic representation of the timing sequence for the entire protocol, showing the relative duration of the different pulses, and the various atomic quantities involved at the different stages.}
\label{fig2a}
\end{figure}
Let $\Omega_{3}(t)$ and $\Omega_{4}(t)$ denote the Rabi frequencies on transitions $\left\vert {c}\right\rangle -\left\vert {b}\right\rangle $ and $\left\vert {b}\right\rangle -\left\vert {d}\right\rangle $, respectively. The radial extension of these fields is assumed to be larger than that of the signal pulse. Moreover, these fields are not attenuated as they propagate through the storage material since they interact with quasi-empty levels. Hence their Rabi frequency space dependence can be omitted. Both fields are detuned by the same amount $\Delta$ from resonance with their respective assigned transition. Therefore the $\left\vert {c}\right\rangle -\left\vert {d}\right\rangle $ two-photon transition is resonantly excited. The Rabi frequency of the equivalent two-level system is given by $\Omega_{3}(t)\Omega_{4}^{\ast}(t)/\Delta$. The two-level approximation holds provided $\left\vert \tilde{\Omega}_{3,4}(\Delta\pm\delta_{s})\right\vert ^{2}<<1$, where $\tilde{\Omega}(\Delta)$ represents the time-to-frequency Fourier transform of $\Omega(t)$. Of course one must take care that none of those fields can excite state $\left\vert {a}\right\rangle $, where all the atomic population is concentrated. With these assumptions, one finds that $\rho_{ac}$ and $\rho_{ad}$ evolve according to \begin{subequations} \label{pi_pulse} \begin{align} \dot{\tilde{\rho}}_{ad;m} & =i\frac{\left\vert \Omega_{\pi}\right\vert ^{2} }{\Delta}\mathrm{e}^{-i\phi_{m}(t)}\tilde{\rho}_{ac;m}-\gamma_{ad}\tilde{\rho }_{ad;m}\\ \dot{\tilde{\rho}}_{ac;m} & =i\frac{\left\vert \Omega_{\pi}\right\vert ^{2} }{\Delta}\mathrm{e}^{i\phi_{m}(t)}\tilde{\rho}_{ad;m}-\gamma_{ac}\tilde{\rho }_{ac;m}, \end{align} where $\tilde{\rho}_{ad;m}=\rho_{ad;m}\mathrm{e}^{-i(\omega_{3}-\omega_{4})t} $, $\phi_{m}(t)=(\mathbf{k}_{3}-\mathbf{k}_{4}).\mathbf{r}_{m}(t)$ and, for simplicity, we have set $\Omega_{3}(t)=\Omega_{4}(t)=\Omega_{\pi}(t)$ and neglected a light shift that is the same for levels $c$ and $d$. The pulses, having duration $\tau_{\pi}$, are applied at a time centered about $t=t_{2}$; their duration is assumed to be sufficiently short, $\delta_{s}\tau_{\pi}<<1$, to resonantly excite all the atoms that were excited by the signal pulse (recall that atoms in the velocity range $\left\vert \mathbf{K\cdot v}\right\vert \leq\left\vert \delta_{s}\right\vert $ are excited in stage 1). The timing diagram of the whole protocol is displayed in Fig. \ref{fig2a}, showing the relative durations of the different pulses.
In terms of the coherence components at time $t_{2}^{-}$ just before the Raman $\pi$ pulse is applied, Eq. (\ref{pi_pulse}) can be solved, for $t>t_{2}$, as: \end{subequations} \begin{subequations} \label{Doppler_freeze} \begin{align} \tilde{\rho}_{ac;m}(t) & =\mathrm{e}^{-\gamma_{ac}(t-t_{2})}\cos\left( \frac{\Theta}{2} \right) \tilde{\rho}_{ac;m}(t_{2}^{-})+i\mathrm{e}^{i\phi _{m}(t_{2})-\gamma_{ad}(t-t_{2})}\sin\left( \frac{\Theta}{2} \right) \tilde{\rho}_{ad;m}(t_{2}^{-})\\ \tilde{\rho}_{ad;m}(t) & =i\mathrm{e}^{-i\phi_{m}(t_{2})-\gamma_{ad} (t-t_{2})}\sin\left( \frac{\Theta}{2} \right) \tilde{\rho}_{ac;m}(t_{2} ^{-})+\mathrm{e}^{-\gamma_{ad}(t-t_{2})}\cos\left( \frac{\Theta}{2} \right) \tilde{\rho}_{ad;m}(t_{2}^{-}), \end{align} where $\Theta=2\int dt\left\vert \Omega_{\pi}(t)\right\vert ^{2}/\Delta$, assuming that $\gamma_{ac}\tau_{\pi},\gamma_{ad}\tau_{\pi}<<1$. Total conversion from $\rho_{ac}$ to $\rho_{ad}$ requires that $\Theta=\pi$. With initial conditions given by Eq. (\ref{Raman_coherence_2}) and $\tilde{\rho }_{ad;m}(t_{2}^{-})=0$ one finds that \end{subequations} \begin{equation} \tilde{\rho}_{ad;m}(t)=\mathrm{e}^{i(\mathbf{k}_{4}-\mathbf{k}_{3} +\mathbf{K}).\mathbf{r}_{m}(t_{2})-\gamma_{ad}(t-t_{2})}\mathrm{e}^{-\left( i\mathbf{K}.\mathbf{v}_{m}+\gamma_{ac}\right) t_{12}}\mathcal{R} _{m},\label{stopped_dephasing} \end{equation} where $t_{ij}$ is the time interval between the $j$th and $i$th pairs of pulses. With $\mathbf{k}_{3}-\mathbf{k}_{4}=\mathbf{K}$, there is no build-up of Doppler phase following this second Raman pulse \cite{doppler} - the Doppler phase has been frozen.
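As a simple consistency check of Eqs. (\ref{pi_pulse}) and (\ref{Doppler_freeze}), the two coupled equations can be integrated numerically for a smooth pulse of area $\Theta=\pi$, neglecting decay and the spatial phase; in the short script below the pulse shape and all numbers are illustrative assumptions.
\begin{verbatim}
# Minimal numerical check: a pulse of area Theta = pi fully converts rho_ac into rho_ad.
import numpy as np
tau_pi = 1.0                                   # pulse duration (arbitrary units)
t = np.linspace(-3*tau_pi, 3*tau_pi, 20001)
dt = t[1] - t[0]
w = np.exp(-(t/tau_pi)**2)                     # |Omega_pi(t)|^2/Delta, Gaussian shape
w *= np.pi/(2*np.sum(w)*dt)                    # normalize the area so that Theta = pi
rho_ac, rho_ad = 1.0 + 0j, 0.0 + 0j            # all coherence initially in rho_ac
for wi in w:                                   # Euler integration of Eqs. (pi_pulse)
    rho_ad, rho_ac = rho_ad + 1j*wi*rho_ac*dt, rho_ac + 1j*wi*rho_ad*dt
print(abs(rho_ac), abs(rho_ad))                # expect approximately 0 and 1
\end{verbatim}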
\section{Signal recovery}
To prepare for signal retrieval at some later time, one sends in another Raman $\pi$ pulse, centered at time $t=t_{3}$, to restore the $\tilde{\rho}_{ac;m}$ coherence. This Raman pulse consists of two fields having propagation vectors $\mathbf{k}_{4}^{\prime}$ and $\mathbf{k}_{3}^{\prime}$ that drive the $\left\vert {d}\right\rangle -\left\vert {b}\right\rangle $ and $\left\vert {b}\right\rangle -\left\vert {c}\right\rangle $ transitions, respectively. The atom-field dynamics is again described by Eqs.(\ref{pi_pulse}), with $\phi _{m}(t)$ replaced by $\phi_{m}^{\prime}(t)=(\mathbf{k}_{3}^{\prime} -\mathbf{k}_{4}^{\prime}).\mathbf{r}_{m}(t).$ To reverse the Doppler phase acquired in the time interval $t_{12}$, we choose the propagation vectors such that $\mathbf{k}_{3}^{\prime}-\mathbf{k}_{4}^{\prime}=-\mathbf{K}$\textbf{, }which leads to a Raman coherence for $t>t_{3}$ given by \begin{align} \tilde{\rho}_{ac;m}(t) & =i\mathrm{e}^{i\phi_{m}^{\prime}(t_{3} )-i\mathbf{K}.\mathbf{v}_{m}t_{12}-\gamma_{ac}(t-t_{3}+t_{12})-\gamma _{ad}t_{23}}\mathcal{R}_{m}\nonumber\\ & =i\mathrm{e}^{-i\mathbf{K}.\mathbf{r}_{m}(t)+i\mathbf{K}.\mathbf{v} _{m}(t-t_{3})-i\mathbf{K}.\mathbf{v}_{m}t_{12}-\gamma_{ac}(t-t_{3} +t_{12})-\gamma_{ad}t_{23}}\mathcal{R}_{m}.\label{rac} \end{align}
To restore the signal one directs a quasi-monochromatic control field having frequency $\omega_{2}$ into the medium in the backward direction, with $\mathbf{k}_{2}^{\prime\prime}=-\mathbf{k}_{2}$. This field should be switched on somewhat before time $t=t_{3}+t_{12}$, with the same Rabi frequency $\Omega_{2}$ as the initial coupling beam. The restored field can be written as \[ E(\mathbf{r},t)=\mathcal{A}(\mathbf{r},t)\mathrm{e}^{i\mathbf{k}_{1} ^{\prime\prime}.\mathbf{r-}i\omega_{1}t}+\mathrm{c}.\mathrm{c}. \] We now argue that, only if $\mathbf{k}_{1}^{\prime\prime}=-\mathbf{k}_{1}$, can a phase-matched signal be generated. To see this, we return to Eq. (\ref{rac}) and analyze the phase factor in this equation. Although $\mathcal{R}_{m}$ contains a Doppler phase factor [see Eq. (\ref{rm})], it is of order $Ku\tau$, where $\tau$ is the signal pulse duration, and is small compared to the other Doppler phases appearing in Eq. (\ref{rac}), since we assume that $t_{12}\gg\tau$. When the control field is sent into the medium, the Raman coherence (\ref{rac}) gives rise to an optical coherence on the $a-b$ transition that is responsible for the generation of the restored field. From Eq. (\ref{optical_coherence}), one can deduce that the optical coherence $\tilde{\rho}_{ab;m}$ varies as \begin{align*} & \mathrm{e}^{i\mathbf{k}_{2}.\mathbf{r}_{m}(t)-i\mathbf{K}.\mathbf{r} _{m}(t)+i\mathbf{K}.\mathbf{v}_{m}(t-t_{3})-i\mathbf{K}.\mathbf{v}_{m}t_{12} }\\ & =\mathrm{e}^{i\mathbf{k}_{1}.\mathbf{r}_{m}(t)+i\mathbf{K}.\mathbf{v} _{m}(t-t_{3}-t_{12})} \end{align*} This expression implies that a phase-matched signal can propagate only in the $\mathbf{k}_{1}^{\prime\prime}=-\mathbf{k}_{1}$ direction and that this signal can be nonvanishing (owing to the average over atomic velocities) only for times $t\approx t_{3}+t_{12}$. Thus, in what follows we assume that the restored signal field has propagation vector $\mathbf{k}_{1}^{\prime\prime }=-\mathbf{k}_{1}$.
The existence of the restored field must be taken into account in the expression for the Raman coherence. Indeed, this field also combines with the control field to drive Raman transitions between levels $a$ and $c$. Adding this contribution to Eq. (\ref{rac}), one obtains the Raman coherence \begin{equation} \begin{split} \tilde{\rho}_{ac;m}(t) & =i\mathrm{e}^{-i\mathbf{K}.\mathbf{r} _{m}(t)+i\mathbf{K}.\mathbf{v}_{m}(t-t_{3}-t_{12})-2\gamma_{ac}t_{12} -\gamma_{ad}t_{23}}\mathcal{R}_{m}\\ & +i\frac{\mu_{ab}\Omega_{2}}{\hbar\Delta_{1}}\int_{-\infty}^{t}dt^{\prime }\mathcal{A}\left[ \mathbf{r}_{m}(t^{\prime}),t^{\prime}\right] \mathrm{e}^{i\mathbf{K^{\prime\prime}}.\mathbf{r}_{m}(t^{\prime})}. \end{split} \label{rephased_coherence} \end{equation}
Substituting this equation into Eq. (\ref{optical_coherence}) one arrives at an expression for the optical coherence, \begin{equation} \begin{split} \tilde{\rho}_{ab;m}(t) & =i\frac{\mu_{ab}\left\vert \Omega_{2}\right\vert ^{2}}{\hbar\Delta_{1}^{2}}\mathrm{e}^{i\mathbf{k}_{1}.\mathbf{r}_{m}(t)}\\ & \times\left( \mathrm{e}^{-\gamma_{ac}\left( t-t_{3}-t_{12}\right) -\gamma_{ad}t_{23}}\int_{-\infty}^{\infty}d\tau\mathcal{A}_{1}^{\ast}\left[ \mathbf{r}_{m}(t_{1}),t_{1}+\tau\right] \mathrm{e}^{i\mathbf{K} .\mathbf{v}_{m}(t-t_{3}-t_{12}+\tau)}\right. \\ & \left. -\int_{-\infty}^{t}dt^{\prime}\mathcal{A}^{\ast}\left[ \mathbf{r}_{m}(t^{\prime}),t^{\prime}\right] \mathrm{e}^{i\mathbf{K} .\mathbf{v}_{m}(t-t^{\prime})}\right) , \end{split} \label{final_optical_coherence} \end{equation} that can be used to calculate the macroscopic polarization density, which is written as \[ P_{s}(\mathbf{r},t)=P_{+s}(\mathbf{r},t)\mathrm{e}^{-i\mathbf{k} _{1}.\mathbf{r-}i\omega_{1}t}+P_{+s}^{\ast}(\mathbf{r},t)\mathrm{e}^{-\left( -i\mathbf{k}_{1}.\mathbf{r-}i\omega_{1}t\right) }, \] incorporating the fact that the signal is phase-matched in the $-\mathbf{k} _{1}$ direction only. The polarization density is comprised of two components. One of them represents the source term that gives rise to the restored signal. This contribution originates from the first term on the right hand side of Eq. (\ref{final_optical_coherence}) and is given by \begin{equation} \begin{split} P_{+s}^{(1)}(\mathbf{r},t) & =-i\frac{\left\vert \mu_{ab}\right\vert ^{2}\left\vert \Omega_{2}\right\vert ^{2}N}{\hbar\Delta_{1}^{2}} \mathrm{e}^{-\gamma_{ac}\left( t-t_{3}+t_{12}\right) -\gamma_{ad}t_{23}}\\ & \times\int d\mathbf{v}W(\mathbf{v})\int_{-\infty}^{\infty}d\tau \mathcal{A}_{1}\left[ \mathbf{r}(t_{1}),t_{1}+\tau\right] \mathrm{e} ^{-i\mathbf{K}.\mathbf{v}(t-t_{3}-t_{12}+\tau)}, \end{split} \end{equation} where $\mathcal{A}_{1}\left[ \mathbf{r}(t_{1}),t^{\prime}\right] \approx\mathcal{A}_{1}\left[ \mathbf{r}-\mathbf{v}t_{13},t^{\prime}\right] $. The polarization density depends on the field experienced by the participating atoms at their positions when they interacted with the signal field.
For the field to be restored, the condition $\mathcal{A}_{1}\left[ \mathbf{r}_{m}(t_{1}),t^{\prime}\right] \approx\mathcal{A}_{1}\left[ \mathbf{r},t^{\prime}\right] $ must hold. This condition is valid provided the distance travelled by the atoms in time $t_{13}$ is much smaller than the beam diameter, the spatial width of the input signal envelope, and the absorption length $\alpha_{R}^{-1}$. Since $\delta_{s}<<Ku$, one can take $\mathcal{A}_{1}\left[ \mathbf{r},t^{\prime}\right] $ out of the integral over $t^{\prime}$ and evaluate it at time $t^{\prime}=t_{1}-(t-t_{3}-t_{12})$, which leads to \begin{equation} \begin{split} P_{+s}^{(1)}(\mathbf{r},t) & =-i\frac{\left\vert \mu_{ab}\right\vert ^{2}\left\vert \Omega_{2}\right\vert ^{2}N}{\hbar\Delta_{1}^{2}} \mathrm{e}^{-\gamma_{ac}\left( t-t_{3}+t_{12}\right) -\gamma_{ad}t_{23}}\\ & \times\mathcal{A}_{1}\left[ \mathbf{r},t_{1}-(t-t_{3}-t_{12})\right] \int dvW(v)\int_{-\infty}^{\infty}d\tau\mathrm{e}^{-iKv\tau} \end{split} \end{equation} Using Eqs. (\ref{initial_polarization}) and (\ref{Raman_absorption}), one can re-express this in terms of $\alpha_{R}$ as \begin{equation} P_{+s}^{(1)}(\mathbf{r},t)=-2i\epsilon_{0}\frac{\alpha_{R}}{k}\mathcal{A} _{1}\left[ \mathbf{r},t_{1}-(t-t_{3}-t_{12})\right] \mathrm{e} ^{-2\gamma_{ac}t_{12}-\gamma_{ad}t_{23}} \end{equation} The amplitude $\mathcal{A}_{1}\left[ \mathbf{r},t_{1}-(t-t_{3}-t_{12} )\right] $ is nonvanishing only for $t\approx t_{3}+t_{12},$ reflecting the fact that Doppler rephasing occurs only for such times. We have used this fact to set $e^{-\gamma_{ac}(t-t_{3}-t_{12})}\approx e^{-2\gamma_{ac}t_{12}}.$
The input signal was attenuated as it traveled through the storage medium. Taking into account the propagation and attenuation of the input field $A_{1}\left[ \mathbf{r},t\right] $, we can write $P_{+}^{(1)}(r,t)$ in terms of the input field at $z=0$ as \begin{equation} P_{+s}^{(1)}(z,t)=-2i\epsilon_{0}\frac{\alpha_{R}}{k}\mathcal{A}_{1}\left[ 0,t_{1}-(t-t_{3}-t_{12})-z/c\right] \mathrm{e}^{-\alpha_{R}z/2} \mathrm{e}^{-2\gamma_{ac}t_{12}-\gamma_{ad}t_{23}}, \label{P1} \end{equation} the $z$ axis being directed along $\mathbf{k}_{1}$.
The contribution to the polarization density from the second term on the right hand side of Eq. (\ref{final_optical_coherence}) corresponds simply to depletion of the restored field as a result of Raman transitions from level $a$ to $c$ . This component can be written as \begin{equation} P_{+s}^{(2)}(z,t)=i\epsilon_{0}\frac{\alpha_{R}}{k}\mathcal{A}(z,t).\label{P2} \end{equation} Finally, the restored field is a solution of the linearized Maxwell equation: \begin{equation} -\partial_{z}\mathcal{A}(z,t)+\frac{1}{c}\partial_{t}\mathcal{A} (z,t)=i\frac{k}{2\epsilon_{0}}\left[ P_{+s}^{(1)}(z,t)+P_{+s}^{(2)} (z,t)\right] \label{P3} \end{equation} Making the change of variables $z^{\prime}=z$, $t^{\prime}=t+z/c$, and combining Eqs. (\ref{P1}) and (\ref{P3}), one obtains \begin{equation} \partial_{z^{\prime}}\mathcal{A}^{\prime}(z^{\prime},t^{\prime})=-\alpha _{R}\mathcal{A}_{1}\left[ 0,t_{1}-(t^{\prime}-t_{3}-t_{12})\right] \mathrm{e}^{-\alpha_{R}z^{\prime}/2}\mathrm{e}^{-2\gamma_{ac}t_{12} -\gamma_{ad}t_{23}}+\frac{1}{2}\alpha_{R}\mathcal{A}^{\prime}(z^{\prime },t^{\prime}), \end{equation} where $\mathcal{A}^{\prime}(z^{\prime},t^{\prime})=\mathcal{A}(z^{\prime },t^{\prime}-z^{\prime}/c)$. Solving this equation with the initial condition $\mathcal{A}^{\prime}(L,t^{\prime})=0,$ we find \begin{equation} \mathcal{A}(z,t)=\mathcal{A}_{1}\left[ z,t_{1}-(t-t_{3}-t_{12})\right] \mathrm{e}^{-2\gamma_{ac}t_{12}-\gamma_{ad}t_{23}}\left( 1-\mathrm{e} ^{-\alpha_{R}(L-z)}\right) .\label{restored_field} \end{equation} For $\alpha_{R}L\gg1$, the pulse is totally restored at time $t=t_{3}+t_{12}$ , neglecting decay of the Raman coherences. The latter equation represents the main result of the paper, showing the absence of distortion of the retrieved signal and its variation as a function of the optical depth $\alpha_{R}L$.
One can also verify that the atoms return to their initial state as the signal field is restored. Substituting Eq. (\ref{restored_field}) into Eq. (\ref{rephased_coherence}) and using Eq. (\ref{rm}), one obtains \begin{equation} \begin{split} & \tilde{\rho}_{ac;m}(t)=-i\frac{\mu_{ab}\Omega_{2}}{\hbar\Delta_{1} }\mathrm{e}^{-i\mathbf{K}.\mathbf{r}_{m}(t)+i\mathbf{K}.\mathbf{v}_{m} (t-t_{3}-t_{12})-\gamma_{ac}(2t_{23}+t_{12})}\\ & \times\left( \int_{-\infty}^{\infty}dt^{\prime}\mathcal{A}_{1}^{\ast }(z,t^{\prime})\mathrm{e}^{i\mathbf{K}.\mathbf{v}_{m}(t^{\prime}-t_{1} )}-\left( 1-\mathrm{e}^{-\alpha_{R}(L-z)}\right) \int_{t_{1}-(t-t_{3} -t_{12})}^{\infty}dt^{\prime}\mathcal{A}_{1}^{\ast}(z,t^{\prime} )\mathrm{e}^{i\mathbf{K}.\mathbf{v}_{m}(t^{\prime}-t_{1})}\right) . \end{split} \end{equation} For times $\left( t-t_{3}-t_{12}\right) >0$ the lower limit on the second integral can be replaced by $-\infty$ and this term cancels the first integral. In other words, the coherence vanishes once the signal field is restored, provided $\alpha_{R}L\gg1$.
Following the emission of the restored signal pulse, there remains in the medium a Raman coherence given by \begin{equation} \tilde{\rho}_{ac;m}(t)=-i\frac{\mu_{ab}\Omega_{2}}{\hbar\Delta_{1}} \mathrm{e}^{-i\mathbf{K}.\mathbf{r}_{m}(t)+i\mathbf{K}.\mathbf{v}_{m} (t-t_{3}-t_{12})-\gamma_{ac}(t_{23}+2t_{12})}\mathrm{e}^{-\alpha_{R}(L-z)} \int_{-\infty}^{\infty}dt^{\prime}\mathcal{A}_{1}^{\ast}(z,t^{\prime })\mathrm{e}^{i\mathbf{K}.\mathbf{v}_{m}(t^{\prime}-t_{1})}. \end{equation} Comparing this expression with Eq. (\ref{Raman_coherence}), and noting that the population in state $\left\vert {a}\right\rangle $ is approximately equal to unity, one can conclude that the population left in state $\left\vert {c}\right\rangle $ is \begin{subequations} \begin{align} \rho_{cc;m}[(t_{3}+t_{12})^{+}] & =\left\vert \tilde{\rho}_{ac;m}(t_{1} ^{+})\right\vert ^{2}\left[ 1-\mathrm{e}^{-2\gamma_{ac}(t_{23}+2t_{12} )}\right] +\left\vert \tilde{\rho}_{ac;m}[(t_{3}+t_{12})^{+}]\right\vert ^{2}\\ & =\left\vert \tilde{\rho}_{ac;m}(t_{1}^{+})\right\vert ^{2}\left[ 1-\mathrm{e}^{-2\gamma_{ac}(t_{23}+2t_{12})}\left( 1-\mathrm{e}^{-2\alpha _{R}(L-z)}\right) \right] . \end{align} Finally, summing over $z$, one finds that the fraction $\eta$ of the initially excited atoms left in state $\left\vert {c}\right\rangle $ is given by \end{subequations} \begin{equation} \eta=1-\mathrm{e}^{-2\gamma_{ac}(t_{23}+2t_{12})}\left( 1-\mathrm{e} ^{-\alpha_{R}L}\right) . \end{equation}
If decay of the Raman coherence can be neglected, $\eta=\mathrm{e}^{-\alpha_{R}L}$. During the storage process, a fraction $\eta_{in}=(1-\mathrm{e}^{-\alpha_{R}L})$ of the incoming signal radiation is used to excite the atoms to level $c$, while the remaining signal passes through without being scattered. From the part that is stored, a fraction is lost at retrieval, even in the absence of relaxation. Indeed, the restored field amplitude is $(1-\mathrm{e}^{-\alpha_{R}L})$ times the incoming one, according to Eq. (\ref{restored_field}). Therefore one recovers a fraction $W_{out}/W_{in}=(1-\mathrm{e}^{-\alpha_{R}L})^{2}$ of the incoming energy. The difference $\eta_{in}-W_{out}/W_{in}\cong\mathrm{e}^{-\alpha_{R}L}$ corresponds to the fraction $\eta$ of the excited atoms that is left in level $c$. To summarize, with a finite length material, information is lost in equal amounts at storage and retrieval, the storage loss being associated with incomplete depletion of the input field and the retrieval loss with the excited state population $\rho_{cc}$ that is left in the medium.
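This energy balance can be made explicit with a short numerical illustration (the values of $\alpha_{R}L$ are arbitrary), tabulating the stored fraction, the recovered fraction and the fraction of excited atoms left in level $c$ when decay is neglected:
\begin{verbatim}
# Storage/retrieval budget versus Raman optical depth, neglecting decay.
import numpy as np
for aL in (0.5, 1.0, 2.0, 5.0):
    x = np.exp(-aL)
    eta_in  = 1.0 - x        # fraction of the incoming energy that is stored
    w_ratio = (1.0 - x)**2   # recovered fraction W_out/W_in
    eta     = x              # fraction of the excited atoms left in level c
    print(aL, eta_in, w_ratio, eta_in - w_ratio, eta)
\end{verbatim}
For $\alpha_{R}L\gg1$ the difference $\eta_{in}-W_{out}/W_{in}$ indeed approaches $\mathrm{e}^{-\alpha_{R}L}$.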
\section{Discussion}
We have shown that nearly 100$\%$ recovery efficiency can be reached with the proposed quantum storage Raman scheme. The memory bandwidth, determined by the Raman transition Doppler broadening, can be tuned continuously from $0$ to $2ku$. Since the atoms are not excited to the upper electronic level, the control pulse duration is not limited by the upper level lifetime, but only by the inverse signal bandwidth.
Our motivation in this work was to circumvent the time scale conditions imposed by the short upper level lifetime, since it is not easy to produce large area, large waist coherent pulses on the nanosecond time scale. The pulses to be considered here, whether input signal or control pulses, will most likely last for tens or hundreds of nanoseconds. This corresponds to a spectral width much smaller than the Doppler width $ku$, typically of order $10^{9}~\mathrm{s}^{-1}$ or higher. The available bandwidth of the Raman scheme is determined by $Ku$, where $K/k$ can be expressed in terms of the angle $\theta=(\mathbf{k}_{1},\mathbf{k}_{2})$ as $K/k=2\sin(\theta/2)$. According to Eq. (\ref{restored_field}), the Raman optical depth $\alpha_{R}L$ should be as large as possible. Since $\alpha_{R}L$ is inversely proportional to $K/k$, the value of $K/k$ should be matched to the bandwidth of the input pulse. For an input field spectral width of order $10$ MHz, we are led to a nearly co-propagating configuration with $\theta$ of order 10 mrad.
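For orientation, the beam angle corresponding to a given storage bandwidth follows directly from $K/k=2\sin(\theta/2)$; in the short estimate below, $ku$ and the target Raman width are assumed values.
\begin{verbatim}
# Illustrative beam-angle estimate; ku and the target Raman width are assumed.
import numpy as np
ku = 1e9                             # optical Doppler width (s^-1), assumed
target_Ku = 1e7                      # desired Raman Doppler width (s^-1), assumed
K_over_k = target_Ku/ku
theta = 2*np.arcsin(K_over_k/2)      # from K/k = 2 sin(theta/2)
print(K_over_k, 1e3*theta, "mrad")   # about 10 mrad for these numbers
\end{verbatim}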
The memory lifetime is limited by the atomic motion. As noted in Section IV, the distance travelled by the atoms during the entire process should be much smaller than the beam diameter and the absorption length $\alpha_{R}^{-1}$. With a typical average speed of a few hundred m/s, a sample size of a few centimeters therefore limits the memory lifetime to a few tens of microseconds.
A critical issue in a multilevel system is the ability to selectively drive the target transitions. Field polarization can be used to provide the selectivity if the angle between the fields is small (or close to $\pi$); in this limit, one can use circularly polarized fields, as shown in Fig. \ref{fig3a}. The ground state manifold consists of $F=1$ and $F=2$ hyperfine states and the $\Lambda$-systems of the Raman protocol involve magnetic substates of these levels. Since the angle between the beams is assumed to be of the order of 10 mrad, the fields can be taken to be cross-polarized, to a first approximation.
\begin{figure}
\caption{(Color online) Example of a possible excitation scheme. The Raman transitions connect the Zeeman sublevels of $F=1$, $F=2$ hyperfine levels. Each Raman transition involves a $\sigma_{+}$ and a $\sigma_{-}$ polarized beam. }
\label{fig3a}
\end{figure}
Optical pumping by the strong control field determines the uncoupled initial state ($F=1$, $m=-1$). In step (a), fields 1 and 2 create a Raman coherence $\rho_{1,-1:1,1}$, where the notation is $\rho_{F,m:F^{\prime},m^{\prime}}$. The Doppler dephasing is then stopped by a $\pi$-pulse, composed of two beams propagating along $\mathbf{k}_{3}$ and $\mathbf{k}_{4}$ in such a way that $\mathbf{k}_{4}-\mathbf{k}_{3}=\mathbf{k}_{1}-\mathbf{k}_{2}$ [step (b)]. These fields convert the coherence $\rho_{1,-1:1,1}$ into $\rho_{1,-1:2,-1}$. The process in which fields 4 and 3 drive Raman transitions between states $\left\vert 1,-1\right\rangle $ and $\left\vert 1,1\right\rangle $ is suppressed, provided the hyperfine frequency separation is much larger than the inverse pulse duration. In step (c) a Raman $\pi$ pulse having $\mathbf{k}_{4}^{\prime}-\mathbf{k}_{3}^{\prime}=-\left( \mathbf{k}_{4}-\mathbf{k}_{3}\right) $ restores the $\rho_{1,-1:1,1}$ coherence and prepares the system in such a fashion that the subsequent application of a control pulse having propagation vector $-\mathbf{k}_{2}$ restores the input signal field propagating in the $-\mathbf{k}_{1}$ direction at time $t=t_{3}+t_{12}$.
\section{Conclusion}
We have proposed a Raman-based quantum memory scenario that operates with close to 100\% efficiency. This protocol is well suited to intermediate time scales, with signal durations of order 100 ns, a time range that is well adapted to experiments based on high spectral purity continuous-wave laser sources. We have shown that the spectral components are stored in different atomic velocity classes and have explained how to optimize the optical depth depending on the needed storage bandwidth. Since we have assumed that the signal field is weak and since spontaneous emission is negligible in this protocol, we expect the results to be unchanged if the classical input field is replaced by a quantized, pulsed radiation field.
\acknowledgments We are pleased to acknowledge stimulating discussions with E. Giacobino, G. Leuchs, J. Laurat and T. Chaneli\`ere. PRB would like to thank l'Institut Francilien de recherche sur les atomes froids (IFRAF) for helping to support his visit to Laboratoire Aim\'{e} Cotton and for the hospitality shown to him during his visit.
\end{document} | arXiv |
Dowker space
In the mathematical field of general topology, a Dowker space is a topological space that is T4 but not countably paracompact. They are named after Clifford Hugh Dowker.
The non-trivial task of providing an example of a Dowker space (and therefore also proving their existence as mathematical objects) helped mathematicians better understand the nature and variety of topological spaces.
Equivalences
Dowker showed, in 1951, the following:
If X is a normal T1 space (that is, a T4 space), then the following are equivalent:
• X is a Dowker space
• The product of X with the unit interval is not normal.[1]
• X is not countably metacompact.
Dowker conjectured that there were no Dowker spaces, and the conjecture was not resolved until Mary Ellen Rudin constructed one in 1971.[2] Rudin's counterexample is a very large space (of cardinality $\aleph _{\omega }^{\aleph _{0}}$). Zoltán Balogh gave the first ZFC construction of a small (cardinality continuum) example,[3] which was better behaved than Rudin's. Using PCF theory, M. Kojman and S. Shelah constructed a subspace of Rudin's Dowker space of cardinality $\aleph _{\omega +1}$ that is also Dowker.[4]
References
1. Dowker, C. H. (1951). "On countably paracompact spaces" (PDF). Can. J. Math. 3: 219–224. doi:10.4153/CJM-1951-026-2. Zbl 0042.41007. Retrieved March 29, 2015.
2. Rudin, Mary Ellen (1971). "A normal space X for which X × I is not normal" (PDF). Fundam. Math. Polish Academy of Sciences. 73 (2): 179–186. doi:10.4064/fm-73-2-179-186. Zbl 0224.54019. Retrieved March 29, 2015.
3. Balogh, Zoltan T. (August 1996). "A small Dowker space in ZFC" (PDF). Proc. Amer. Math. Soc. 124 (8): 2555–2560. doi:10.1090/S0002-9939-96-03610-6. Zbl 0876.54016. Retrieved March 29, 2015.
4. Kojman, Menachem; Shelah, Saharon (1998). "A ZFC Dowker space in $\aleph _{\omega +1}$: an application of PCF theory to topology" (PDF). Proc. Amer. Math. Soc. American Mathematical Society. 126 (8): 2459–2465. doi:10.1090/S0002-9939-98-04884-9. Retrieved March 29, 2015.
| Wikipedia |
The average of five numbers is $10.6$. Four of the numbers are 10, 4, 5 and 20. What is the value of the fifth number?
The sum of the numbers is $5(10.6)=53$. The sum of the four given numbers is $10+4+5+20=39$. So the fifth number is $53-39=\boxed{14}$. | Math Dataset |
August 2007, pages 807-1039
Special Issue - Bioinformatics: From Molecules to Systems
pp 807-807 August 2007
Alok Bhattacharya
pp 809-825 August 2007 Articles
Theoretical analysis of noncanonical base pairing interactions in RNA molecules
Dhananjay Bhattacharyya Siv Chand Koripella Abhijit Mitra Vijay Babu Rajendran Bhabdyuti Sinha
Noncanonical base pairs in RNA have strong structural and functional implications but are currently not considered for secondary structure predictions. We present results of comparative ab initio studies of stabilities and interaction energies for the three standard and 24 selected unusual RNA base pairs reported in the literature. Hydrogen added models of isolated base pairs, with heavy atoms frozen in their 'away from equilibrium' geometries, built from coordinates extracted from NDB, were geometry optimized using HF/6-31G** basis set, both before and after unfreezing the heavy atoms. Interaction energies, including BSSE and deformation energy corrections, were calculated, compared with respective single point MP2 energies, and correlated with occurrence frequencies and with types and geometries of hydrogen bonding interactions. Systems having two or more N-H…O/N hydrogen bonds had reasonable interaction energies which correlated well with respective occurrence frequencies and highlighted the possibility of some of them playing important roles in improved secondary structure prediction methods. Several of the remaining base pairs with one N-H…O/N and/or one C-H…O/N interactions respectively, had poor interaction energies and negligible occurrences. High geometry variations on optimization of some of these were suggestive of their conformational switch like characteristics.
Mechanism of DNA–binding loss upon single-point mutation in p53
Jon D Wright Carmay Lim
Over 50% of all human cancers involve p53 mutations, which occur mostly in the sequence-specific DNA-binding central domain (p53c), yielding little/non-detectable affinity to the DNA consensus site. Despite our current understanding of protein-DNA recognition, the mechanism(s) underlying the loss in protein-DNA binding affinity/specificity upon single-point mutation are not well understood. Our goal is to identify the common factors governing the DNA-binding loss of p53c upon substitution of Arg 273 to His or Cys, which are abundant in human tumours. By computing the free energies of wild-type and mutant p53c binding to DNA and decomposing them into contributions from individual residues, the DNA-binding loss upon charge/noncharge-conserving mutation of Arg 273 was attributed not only to the loss of DNA phosphate contacts, but also to longer-range structural changes caused by the loss of the Asp 281 salt-bridge. The results herein and in previous works suggest that Asp 281 plays a critical role in the sequence-specific DNA-binding function of p53c by
orienting Arg 273 and Arg 280 in an optimal position to interact with the phosphate and base groups of the consensus DNA, respectively, and
helping to maintain the proper DNA–binding protein conformation.
Incorporating evolution of transcription factor binding sites into annotated alignments
Abha S Bais Steffen Grossmann Martin Vingron
Identifying transcription factor binding sites (TFBSs) is essential to elucidate putative regulatory mechanisms. A common strategy is to combine cross-species conservation with single sequence TFBS annotation to yield ``conserved TFBSs". Most current methods in this field adopt a multi-step approach that segregates the two aspects. Again, it is widely accepted that the evolutionary dynamics of binding sites differ from those of the surrounding sequence. Hence, it is desirable to have an approach that explicitly takes this factor into account. Although a plethora of approaches have been proposed for the prediction of conserved TFBSs, very few explicitly model TFBS evolutionary properties, while additionally being multi-step. Recently, we introduced a novel approach to simultaneously align and annotate conserved TFBSs in a pair of sequences. Building upon the standard Smith-Waterman algorithm for local alignments, SimAnn introduces additional states for profiles to output extended alignments or annotated alignments. That is, alignments with parts annotated as gaplessly aligned TFBSs (pair-profile hits) are generated. Moreover, the pair-profile related parameters are derived in a sound statistical framework.
In this article, we extend this approach to explicitly incorporate evolution of binding sites in the SimAnn framework. We demonstrate the extension in the theoretical derivations through two position-specific evolutionary models, previously used for modelling TFBS evolution. In a simulated setting, we provide a proof of concept that the approach works given the underlying assumptions, as compared to the original work. Finally, using a real dataset of experimentally verified binding sites in human-mouse sequence pairs, we compare the new approach (eSimAnn) to an existing multi-step tool that also considers TFBS evolution.
Although it is widely accepted that binding sites evolve differently from the surrounding sequences, most comparative TFBS identification methods do not explicitly consider this. Additionally, prediction of conserved binding sites is carried out in a multi-step approach that segregates alignment from TFBS annotation. In this paper, we demonstrate how the simultaneous alignment and annotation approach of SimAnn can be further extended to incorporate TFBS evolutionary relationships. We study how alignments and binding site predictions interplay at varying evolutionary distances and for various profile qualities.
Identification and annotation of promoter regions in microbial genome sequences on the basis of DNA stability
Vetriselvi Rangannan Manju Bansal
Analysis of various predicted structural properties of promoter regions in prokaryotic as well as eukaryotic genomes had earlier indicated that they have several common features, such as lower stability, higher curvature and less bendability, when compared with their neighboring regions. Based on the difference in stability between neighboring upstream and downstream regions in the vicinity of experimentally determined transcription start sites, a promoter prediction algorithm has been developed to identify prokaryotic promoter sequences in whole genomes. The average free energy (E) over known promoter sequences and the difference (D) between E and the average free energy over the entire genome (G) are used to search for promoters in the genomic sequences. Using these cutoff values to predict promoter regions across entire Escherichia coli genome, we achieved a reliability of 70% when the predicted promoters were cross verified against the 960 transcription start sites (TSSs) listed in the Ecocyc database. Annotation of the whole E. coli genome for promoter region could be carried out with 49% accuracy. The method is quite general and it can be used to annotate the promoter regions of other prokaryotic genomes.
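A minimal Python sketch of such a sliding-window stability scan is given below; the dinucleotide energy table, the window width and the cutoff values standing in for E and D are illustrative placeholders, not the published parameters of the method.

    def window_energy(seq, table, width=100):
        # average dinucleotide free energy in sliding windows of the given width
        energies = []
        for i in range(len(seq) - width):
            win = seq[i:i + width]
            e = sum(table.get(win[j:j + 2], 0.0) for j in range(width - 1)) / (width - 1)
            energies.append(e)
        return energies

    def candidate_promoters(seq, table, e_cut, d_cut, width=100):
        # flag window starts that are less stable (higher energy) than both cutoffs
        energies = window_energy(seq, table, width)
        genome_avg = sum(energies) / len(energies)
        return [i for i, e in enumerate(energies)
                if e > e_cut and (e - genome_avg) > d_cut]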
Parsing regulatory DNA: General tasks, techniques, and the PhyloGibbs approach
Rahul Siddharthan
In this review, we discuss the general problem of understanding transcriptional regulation from DNA sequence and prior information. The main tasks we discuss are predicting local regions of DNA, cis-regulatory modules (CRMs) that contain binding sites for transcription factors (TFs), and predicting individual binding sites. We review various existing methods, and then describe the approach taken by PhyloGibbs, a recent motif-finding algorithm that we developed to predict TF binding sites, and PhyloGibbs-MP, an extension to PhyloGibbs that tackles other tasks in regulatory genomics, particularly prediction of CRMs.
Evolutionary insights from suffix array-based genome sequence analysis
Anindya Poddar Nagasuma Chandra Madhavi Ganapathiraju K Sekar Judith Klein-Seetharaman Raj Reddy N Balakrishnan
Gene and protein sequence analyses, central components of studies in modern biology are easily amenable to string matching and pattern recognition algorithms. The growing need of analysing whole genome sequences more efficiently and thoroughly, has led to the emergence of new computational methods. Suffix trees and suffix arrays are data structures, well known in many other areas and are highly suited for sequence analysis too. Here we report an improvement to the design of construction of suffix arrays. Enhancement in versatility and scalability, enabled by this approach, is demonstrated through the use of real-life examples.
The scalability of the algorithm to whole genomes renders it suitable to address many biologically interesting problems. One example is the evolutionary insight gained by analysing unigrams, bi-grams and higher n-grams, indicating that the genetic code has a direct influence on the overall composition of the genome. Further, different proteomes have been analysed for the coverage of the possible peptide space, which indicate that as much as a quarter of the total space at the tetra-peptide level is left un-sampled in prokaryotic organisms, although almost all tri-peptides can be seen in one protein or another in a proteome. Besides, distinct patterns begin to emerge for the counts of particular tetra and higher peptides, indicative of a 'meaning' for tetra and higher n-grams.
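The peptide-space coverage computation described above can be sketched with a plain hash set (the suffix-array machinery of the toolkit is not reproduced, and the toy sequences are placeholders):

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def kmer_coverage(proteins, k=4):
        # fraction of the 20**k possible peptides of length k seen in the sequences
        seen = set()
        for seq in proteins:
            for i in range(len(seq) - k + 1):
                kmer = seq[i:i + k]
                if all(c in AMINO_ACIDS for c in kmer):
                    seen.add(kmer)
        total = len(AMINO_ACIDS) ** k
        return len(seen), total, len(seen) / total

    print(kmer_coverage(["MKTAYIAKQR", "MKTLLLTLVV"], k=2))  # toy input, not a real proteome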
The toolkit has also been used to demonstrate the usefulness of identifying repeats in whole proteomes efficiently. As an example, 16 members of one COG, coded by the genome of Mycobacterium tuberculosis H37Rv have been found to contain a repeating sequence of 300 amino acids.
A method for computing the inter-residue interaction potentials for reduced amino acid alphabet
Abhinav Luthra Anupam Nath Jha G K Ananthasuresh Saraswathi Vishveswara
Inter-residue potentials are extensively used in the design and evaluation of protein structures. However, dealing with all (20×20) interactions becomes computationally difficult in extensive investigations. Hence, it is desirable to reduce the alphabet of 20 amino acids to a smaller number. Currently, several methods of reducing the residue types exist; however a critical assessment of these methods is not available. Towards this goal, here we review and evaluate different methods by comparing with the complete (20×20) matrix of Miyazawa-Jernigan potential, including a method of grouping adopted by us, based on multi dimensional scaling (MDS). The second goal of this paper is the computation of inter-residue interaction energies for the reduced amino acid alphabet, which has not been explicitly addressed in the literature until now. By using a least squares technique, we present a systematic method of obtaining the interaction energy values for any type of grouping scheme that reduces the amino acid alphabet. This can be valuable in designing the protein structures.
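For an unweighted least-squares fit, the reduced interaction energy assigned to a pair of residue groups is simply the average of the full-matrix entries over that block; the short sketch below uses an assumed grouping and a random stand-in for the Miyazawa-Jernigan matrix.

    import numpy as np

    def reduced_energy_matrix(E, groups, alphabet="ACDEFGHIKLMNPQRSTVWY"):
        # block averages of the 20x20 matrix E, one value per pair of residue groups
        idx = {aa: i for i, aa in enumerate(alphabet)}
        m = len(groups)
        e = np.zeros((m, m))
        for a, ga in enumerate(groups):
            for b, gb in enumerate(groups):
                e[a, b] = np.mean([E[idx[x], idx[y]] for x in ga for y in gb])
        return e

    E_full = np.random.rand(20, 20); E_full = (E_full + E_full.T) / 2   # stand-in matrix
    groups = ["AVLIMC", "FWYH", "STNQ", "KR", "DE", "GP"]               # assumed grouping
    print(reduced_energy_matrix(E_full, groups))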
Protein mechanics: a route from structure to function
Richard Lavery Sophie Sacquin-Mora
In order to better understand the mechanical properties of proteins, we have developed simulation tools which enable these properties to be analysed on a residue-by-residue basis. Although these calculations are relatively expensive with all-atom protein models, good results can be obtained much faster using coarse-grained approaches. The results show that proteins are surprisingly heterogeneous from a mechanical point of view and that functionally important residues often exhibit unusual mechanical behaviour. This finding offers a novel means for detecting functional sites and also potentially provides a route for understanding the links between structure and function in more general terms.
Protein local conformations arise from a mixture of Gaussian distributions
Ashish V Tendulkar Babatunde Ogunnaike Pramod P Wangikar
The classical approaches for protein structure prediction rely either on homology of the protein sequence with a template structure or on ab initio calculations for energy minimization. These methods suffer from disadvantages such as the lack of availability of homologous template structures or intractably large conformational search space, respectively. The recently proposed fragment library based approaches first predict the local structures, which can be used in conjunction with the classical approaches of protein structure prediction. The accuracy of the predictions is dependent on the quality of the fragment library. In this work, we have constructed a library of local conformation classes purely based on geometric similarity. The local conformations are represented using Geometric Invariants, properties that remain unchanged under transformations such as translation and rotation, followed by dimension reduction via principal component analysis. The local conformations are then modeled as a mixture of Gaussian probability distribution functions (PDF). Each one of the Gaussian PDF's corresponds to a conformational class with the centroid representing the average structure of that class. We find 46 classes when we use an octapeptide as a unit of local conformation. The protein 3-D structure can now be described as a sequence of local conformational classes. Further, it was of interest to see whether the local conformations can be predicted from the amino acid sequences. To that end, we have analyzed the correlation between sequence features and the conformational classes.
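A minimal sketch of this kind of pipeline is shown below; feature extraction is not included, X stands for an array of geometric-invariant vectors, and the array sizes and mixture settings are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def cluster_fragments(X, n_pca=10, n_classes=46, seed=0):
        # reduce the geometric-invariant vectors, then fit a mixture of Gaussians
        Z = PCA(n_components=n_pca).fit_transform(X)
        gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                              random_state=seed).fit(Z)
        return gmm.predict(Z), gmm.means_   # class labels and class centroids

    labels, centroids = cluster_fragments(np.random.randn(500, 24), n_classes=5)  # toy data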
Exploring conformational space using a mean field technique with MOLS sampling
P Arun Prasad V Kanagasabai J Arunachalam N Gautham
The computational identification of all the low energy structures of a peptide given only its sequence is not an easy task even for small peptides, due to the multiple-minima problem and combinatorial explosion. We have developed an algorithm, called the MOLS technique, that addresses this problem, and have applied it to a number of different aspects of the study of peptide and protein structure. Conformational studies of oligopeptides, including loop sequences in proteins have been carried out using this technique. In general the calculations identified all the folds determined by previous studies, and in addition picked up other energetically favorable structures. The method was also used to map the energy surface of the peptides. In another application, we have combined the MOLS technique, using it to generate a library of low energy structures of an oligopeptide, with a genetic algorithm to predict protein structures. The method has also been applied to explore the conformational space of loops in protein structures. Further, it has been applied to the problem of docking a ligand in its receptor site, with encouraging results.
Analysis on sliding helices and strands in protein structural comparisons: A case study with protein kinases
V S Gowri K Anamika S Gore N Srinivasan
Protein structural alignments are generally considered as 'golden standard' for the alignment at the level of amino acid residues. In this study we have compared the quality of pairwise and multiple structural alignments of about 5900 homologous proteins from 718 families of known 3-D structures. We observe shifts in the alignment of regular secondary structural elements (helices and strands) between pairwise and multiple structural alignments. The differences between pairwise and multiple structural alignments within helical and 𝛽-strand regions often correspond to 4 and 2 residue positions respectively. Such shifts correspond approximately to "one turn" of these regular secondary structures. We have performed manual analysis explicitly on the family of protein kinases. We note shifts of one or two turns in helix-helix alignments obtained using pairwise and multiple structural alignments. Investigations on the quality of the equivalent helix-helix, strand-strand pairs in terms of their residue side-chain accessibilities have been made. Our results indicate that the quality of the pairwise alignments is comparable to that of the multiple structural alignments and, in fact, is often better. We propose that pairwise alignment of protein structures should also be used in formulation of methods for structure prediction and evolutionary analysis.
Use of secondary structural information and C𝛼-C𝛼 distance restraints to model protein structures with MODELLER
Boojala V B Reddy Yiannis N Kaznessis
Protein secondary structure predictions and amino acid long range contact map predictions from the primary sequence of proteins have been explored to aid in modelling protein tertiary structures. In order to evaluate the usefulness of secondary structure and 3D-residue contact prediction methods to model protein structures we have used the known Q3 (alpha-helix, beta-strands and irregular turns/loops) secondary structure information, along with residue-residue contact information as restraints for MODELLER. We present here results of our modelling studies on 30 best resolved single domain protein structures of varied lengths. The results show that it is very difficult to obtain useful models even with 100% accurate secondary structure predictions and accurate residue contact predictions for up to 30% of residues in a sequence. The best models that we obtained for proteins of lengths 37, 70, 118, 136 and 193 amino acid residues are of RMSDs 4.17, 5.27, 9.12, 7.89 and 9.69, respectively. The results show that one can obtain better models for proteins which have a high percentage of alpha-helix content. This analysis further shows that the MODELLER restraint optimization program can be useful only if we have truly homologous structure(s) as a template, where it derives numerous restraints almost identical to the templates used. This analysis also clearly indicates that even if we satisfy several true residue-residue contact distances, up to 30% of the sequence length, with fully known secondary structural information, we end up predicting model structures far from their corresponding native structures.
ARC: Automated Resource Classifier for agglomerative functional classification of prokaryotic proteins using annotation texts
Muthiah Gnanamani Naveen Kumar Srinivasan Ramachandran
Functional classification of proteins is central to comparative genomics. The need for algorithms tuned to enable integrative interpretation of analytical data is felt globally. The availability of a general, automated software with built-in flexibility will significantly aid this activity. We have prepared ARC (Automated Resource Classifier), which is an open source software meeting the user requirements of flexibility. The default classification scheme based on keyword match is agglomerative and directs entries into any of the 7 basic non-overlapping functional classes: Cell wall, Cell membrane and Transporters ($\mathcal{C}$), Cell division ($\mathcal{D}$), Information ($\mathcal{I}$), Translocation ($\mathcal{L}$), Metabolism ($\mathcal{M}$), Stress($\mathcal{R}$), Signal and communication($\mathcal{S}$) and 2 ancillary classes: Others ($\mathcal{O}$) and Hypothetical ($\mathcal{H}$). The keyword library of ARC was built serially by first drawing keywords from Bacillus subtilis and Escherichia coli K12. In subsequent steps, this library was further enriched by collecting terms from archaeal representative Archaeoglobus fulgidus, Gene Ontology, and Gene Symbols. ARC is 94.04% successful on 6,75,663 annotated proteins from 348 prokaryotes. Three examples are provided to illuminate the current perspectives on mycobacterial physiology and costs of proteins in 333 prokaryotes. ARC is available at http://arc.igib.res.in.
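The keyword-match step can be sketched as follows; the keyword lists and the subset of classes shown are invented placeholders, not the actual ARC keyword library.

    CLASS_KEYWORDS = {                      # illustrative subset of the basic classes
        "I": ["polymerase", "ribosom", "helicase", "transcription"],   # Information
        "M": ["dehydrogenase", "synthase", "kinase", "reductase"],     # Metabolism
        "C": ["transporter", "permease", "membrane", "cell wall"],     # Cell envelope
        "R": ["heat shock", "chaperone", "oxidative stress"],          # Stress
    }

    def classify(annotation):
        text = annotation.lower()
        if not text.strip() or "hypothetical" in text:
            return "H"                      # Hypothetical
        for cls, keywords in CLASS_KEYWORDS.items():
            if any(k in text for k in keywords):
                return cls
        return "O"                          # Others

    print(classify("NADH dehydrogenase subunit"))   # -> "M"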
Synonymous codon usage in different protein secondary structural classes of human genes: Implication for increased non-randomness of GC3 rich genes towards protein stability
Pamela Mukhopadhyay Surajit Basak Tapash Chandra Ghosh
The relationship between synonymous codon usage and different protein secondary structural classes was investigated using 401 Homo sapiens proteins extracted from the Protein Data Bank (PDB). A simple Chi-square test was used to assess the significance of deviation of the observed and expected frequencies of 59 codons at the level of individual synonymous families in the four different protein secondary structural classes. It was observed that synonymous codon families show non-randomness in codon usage in the four different secondary structural classes. However, when the genes were classified according to their GC3 levels there was an increase in non-randomness in the high GC3 group of genes. The non-randomness in codon usage was further tested among the same protein secondary structures belonging to four different protein folding classes of the high GC3 group of genes. The results show that in each of the protein secondary structural units there exists some synonymous family that shows a class-specific codon usage pattern. Moreover, there is an increased non-random behaviour of synonymous codons in the sheet structure of all secondary structural classes in the high GC3 group of genes. Biological implications of these results have been discussed.
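The test itself amounts to a chi-square comparison of codon counts across structural classes; a minimal sketch with invented counts (rows for helix, sheet, turn and coil; columns for the four synonymous codons of one family) is:

    from scipy.stats import chi2_contingency

    observed = [[120, 80, 60, 40],    # helix
                [ 90, 70, 50, 30],    # sheet
                [ 60, 50, 40, 20],    # turn
                [ 80, 60, 45, 35]]    # coil
    chi2, p, dof, expected = chi2_contingency(observed)
    print(chi2, p, dof)               # a small p would indicate class-specific codon usage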
Cytoview: Development of a cell modelling framework
Prashant Khodade Samta Malhotra Nirmal Kumar M Sriram Iyengar N Balakrishnan Nagasuma Chandra
The biological cell, a natural self-contained unit of prime biological importance, is an enormously complex machine that can be understood at many levels. A higher-level perspective of the entire cell requires integration of various features into coherent, biologically meaningful descriptions. There are some efforts to model cells based on their genome, proteome or metabolome descriptions. However, there are no established methods as yet to describe cell morphologies, capture similarities and differences between different cells or between healthy and disease states. Here we report a framework to model various aspects of a cell and integrate knowledge encoded at different levels of abstraction, with cell morphologies at one end to atomic structures at the other. The different issues that have been addressed are ontologies, feature description and model building. The framework describes dotted representations and tree data structures to integrate diverse pieces of data and parametric models enabling size, shape and location descriptions. The framework serves as a first step in integrating different levels of data available for a biological cell and has the potential to lead to development of computational models in our pursuit to model cell structure and function, from which several applications can flow out.
Sub classification and targeted characterization of prophage-encoded two-component cell lysis cassette
K V Srividhya S Krishnaswamy
Bacteriophage induced lysis of host bacterial cell is mediated by a two component cell lysis cassette comprised of holin and lysozyme. Prophages are integrated forms of bacteriophages in bacterial genomes providing a repertoire for bacterial evolution. Analysis using the prophage database (http://bicmku.in:8082) constructed by us showed 47 prophages were associated with putative two component cell lysis genes. These proteins cluster into four different subgroups. In this process, a putative holin (essd) and endolysin (ybcS), encoded by the defective lambdoid prophage DLP12 was found to be similar to two component cell lysis genes in functional bacteriophages like p21 and P1. The holin essd was found to have a characteristic dual start motif with two transmembrane regions and C-terminal charged residues as in class II holins. Expression of a fusion construct of essd in Escherichia coli showed slow growth. However, under appropriate conditions, this protein could be over expressed and purified for structure function studies. The second component of the cell lysis cassette, ybcS, was found to have an N-terminal SAR (Signal Arrest Release) transmembrane domain. The construct of ybcS has been over expressed in E. coli and the purified protein was functional, exhibiting lytic activity against E. coli and Salmonella typhi cell wall substrate. Such targeted sequence-structure-function characterization of proteins encoded by cryptic prophages will help understand the contribution of prophage proteins to bacterial evolution.
The p53-MDM2 network: from oscillations to apoptosis
Indrani Bose Bhaswar Ghosh
The p53 protein is well-known for its tumour suppressor function. The p53-MDM2 negative feedback loop constitutes the core module of a network of regulatory interactions activated under cellular stress. In normal cells, the level of p53 proteins is kept low by MDM2, i.e. MDM2 negatively regulates the activity of p53. In the case of DNA damage, the p53-mediated pathways are activated leading to cell cycle arrest and repair of the DNA. If repair is not possible due to excessive damage, the p53-mediated apoptotic pathway is activated bringing about cell death. In this paper, we give an overview of our studies on the p53-MDM2 module and the associated pathways from a systems biology perspective. We discuss a number of key predictions, related to some specific aspects of cell cycle arrest and cell death, which could be tested in experiments.
Type 2 diabetes mellitus: phylogenetic motifs for predicting protein functional sites
Ashok Sharma Tanuja Rastogi Meenakshi Bhartiya A K Shasany S P S Khanuja
Diabetes mellitus, commonly referred to as diabetes, is a medical condition associated with abnormally high levels of glucose (or sugar) in the blood. Keeping this in view, we demonstrate the identification of phylogenetic motifs (PMs) in type 2 diabetes mellitus that very likely correspond to protein functional sites. In this article, we have identified PMs for all the candidate genes for type 2 diabetes mellitus. Glycine 310 remains conserved for glucokinase and the potassium channel KCNJ11. Isoleucine 137 was conserved for the insulin receptor and the regulatory subunit of a phosphorylating enzyme, whereas the residues valine, leucine and methionine were highly conserved for the insulin receptor. Occurrence of proline was very high for the calpain 10 gene and glucose transporter
The next step in biology: A periodic table?
Pawan K Dhar
Systems biology is an approach to explain the behaviour of a system in relation to its individual components. Synthetic biology uses key hierarchical and modular concepts of systems biology to engineer novel biological systems. In my opinion, the next step in biology is to use molecule-to-phenotype data obtained with these approaches and integrate them in the form of a periodic table. A periodic table in biology would provide a chassis to classify, systematize and compare the diversity of component properties vis-a-vis system behaviour. Using such a periodic table, it could be possible to compute higher-level interactions from component properties. This paper examines the concept of building a bio-periodic table using the protein fold as the fundamental unit.
Modularized study of human calcium signalling pathway
Losiana Nayak Rajat K De
Signalling pathways are complex biochemical networks responsible for regulation of numerous cellular functions. These networks function by serial and successive interactions among a large number of vital biomolecules and chemical compounds. For deciphering and analysing the underlying mechanism of such networks, a modularized study is quite helpful. Here we propose an algorithm for modularization of the calcium signalling pathway of H. sapiens. The idea that "a node whose function is dependent on the maximum number of other nodes tends to be the center of a subnetwork" is used to divide a large signalling network into smaller subnetworks. Inclusion of node(s) into subnetwork(s) depends on the outdegree of the node(s). Here the outdegree of a node refers to the number of relations of the considered node lying outside the constructed subnetwork. Node(s) having more than c relations lying outside the expanding subnetwork have to be excluded from it. Here c is a specified variable based on user preference, which is finally fixed during adjustment of the created subnetworks, so that certain biological significance can be conferred on them.
Gene ordering in partitive clustering using microarray expressions
Shubhra Sankar Ray Sanghamitra Bandyopadhyay Sankar K Pal
A central step in the analysis of gene expression data is the identification of groups of genes that exhibit similar expression patterns. Clustering and ordering genes into homogeneous groups using gene expression data has been shown to be useful in functional annotation, tissue classification, regulatory motif identification, and other applications. Although there is a rich literature on gene ordering in the hierarchical clustering framework for gene expression analysis, to the best knowledge of the authors there is no work addressing and evaluating the importance of gene ordering in a partitive clustering framework. Outside the framework of hierarchical clustering, different gene ordering algorithms are applied on the whole data set, and the domain of partitive clustering is still unexplored with gene ordering approaches. A new hybrid method is proposed for ordering genes in each of the clusters obtained from a partitive clustering solution, using microarray gene expressions. Two existing algorithms for optimally ordering cities in the travelling salesman problem (TSP), namely FRAG_GALK and Concorde, are hybridized individually with a self-organizing map (SOM) to show the importance of gene ordering in the partitive clustering framework. We validated our hybrid approach using yeast and fibroblast data and showed that it improves the quality of the partitive clustering solution by identifying subclusters within big clusters, grouping functionally correlated genes within clusters, minimizing the summation of gene expression distances, and maximizing the biological gene ordering using MIPS categorization. Moreover, the new hybrid approach finds a comparable or sometimes superior biological gene order in less computation time than that obtained by optimal leaf ordering in the hierarchical clustering solution.
Analysis of breast cancer progression using principal component analysis and clustering
G Alexe G S Dalgin S Ganesan C DeLisi G Bhanot
We develop a new technique to analyse microarray data which uses a combination of principal components analysis and consensus ensemble 𝑘-clustering to find robust clusters and gene markers in the data. We apply our method to a public microarray breast cancer dataset which has expression levels of genes in normal samples as well as in three pathological stages of disease; namely, atypical ductal hyperplasia or ADH, ductal carcinoma in situ or DCIS and invasive ductal carcinoma or IDC. Our method averages over clustering techniques and data perturbation to find stable, robust clusters and gene markers. We identify the clusters and their pathways with distinct subtypes of breast cancer (Luminal, Basal and Her2+). We confirm that the cancer phenotype develops early (in early hyperplasia or ADH stage) and find from our analysis that each subtype progresses from ADH to DCIS to IDC along its own specific pathway, as if each was a distinct disease.
\begin{definition}[Definition:Pólya's Urn]
'''Pólya's urn''' is an implementation of an urn such that, when a ball of a particular colour is taken out and examined, that ball is replaced and another of the same colour is added.
{{NamedforDef|George Pólya|cat = Pólya}}
Category:Definitions/Probability Theory
\end{definition}
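To make the definition concrete, a minimal simulation of the process is sketched below (starting from one ball of each of two colours, an arbitrary choice); it is illustrative only and is not part of the ProofWiki entry.

```python
import random

def polya_urn(steps, urn=("red", "blue")):
    """Simulate Pólya's urn: each drawn ball is replaced together with one more of its colour."""
    urn = list(urn)  # starting contents: one ball of each colour by default
    for _ in range(steps):
        ball = random.choice(urn)  # draw a ball and examine its colour
        urn.append(ball)           # the ball goes back in, plus one more of the same colour
    return urn

urn = polya_urn(1000)
print(urn.count("red") / len(urn))  # proportion of red balls after 1000 draws
```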
Studying driving behavior and risk perception: a road safety perspective in Egypt
Islam Sayed, Hossam Abdelgawad & Dalia Said (ORCID: orcid.org/0000-0002-4351-5456)
Roadway safety research indicates a correlation between drivers' behavior, demographics, and the local environment on the one hand, and risk perception and roadway crashes on the other. This research examines these issues in an Egyptian context by addressing three groups: private car drivers, truck drivers, and public transportation drivers. A Driver Behavior Questionnaire (DBQ) was developed to capture information about drivers' behavior, personal characteristics, risk perception, and involvement in crashes. Risk perception was captured subjectively by exposing participants to various visual scenarios representing specific local conditions and asking them to rank their perception of each situation from a safety perspective.
Results indicated that the human factor, in particular failure to keep a safe following distance, was a major cause of crashes. The analyzed data was used to predict expected crash frequency based on personal attributes, such as age, driving experience, personality traits, and driving behavior, using negative binomial models. The study recommends the DBQ technique, combined with risk perception scenarios, as a way to understand drivers' characteristics and behaviors and to collect information on the crashes they experience.
Practically, the study findings provide a series of recommendations to the local authorities: introducing a traffic management and noise control act; raising awareness of driving etiquette; setting and enforcing driving-hours regulations; and considering specific training programs for beginner drivers.
World Health Organization (WHO) statistics for road crashes worldwide show that road crashes cause between 1.25 and 1.35 million deaths per year and are the leading cause of death among young people [1]. The number of road crashes has decreased in developed countries due to several interventions. However, this is not the case in developing countries, which account for 90% of worldwide road crash fatalities. The USA rate of road fatalities was 1.3 persons per 10,000 vehicles in 2016 [2], while the rate in Egypt was 11 road fatalities per 10,000 vehicles [1]. An equally startling statistic is that there are 4 deaths in Egypt per 100 km of road [3], while the corresponding rates in the UK and USA are 0.47 and 0.92 deaths, respectively [1]. These statistics indicate an alarming number of fatalities in Egypt, resulting in a heavy toll on Egyptian welfare and the economy at large.
Accordingly, there is a strong need to study the relationship between drivers' behavior, risk perception, and roadway crashes. This research therefore tackles roadway crash issues in an Egyptian context by addressing three groups: general drivers, truck drivers, and public transportation drivers, with the aim of enhancing the safety of Egyptian roads and improving Egyptian driving behaviors. The Driver Behavior Questionnaire (DBQ) technique is adopted to capture information about drivers, their behavior, and their risk perception. The collected data was then used to predict the expected crash frequency using negative binomial models.
This research endeavors to investigate and quantify the impact of human behavior on road crashes in Egypt and to better understand driver conduct relevant to traffic safety. Specifically, the objective is to map the relationship between drivers' demographic characteristics, their history of traffic violations and crashes, and their level of risk perception while driving along Egyptian roads. In the absence of an accurate and representative government roadway crash database, this research attempts to connect these dots by means of surveys and driver interviews, and ultimately to model the association between these variables as a step towards improving traffic safety in Egypt. What further exacerbates the matter is that research studies on the types and reasons for the negative behavior of drivers in Egypt are quite sporadic.
With the above objectives in mind, this paper is structured as follows. The background studies section summarizes relevant work on risk perception, drivers' behavior, driver demographics, and personality traits and their relation to roadway crashes; it also discusses techniques used to study driver behavior, including the Driver Behavior Questionnaire (DBQ), its structure, fields of interest, and different ways to collect data. The "Methods" section presents the research methodology and the steps followed to study drivers' negative behaviors, risk perception, and their relationship to traffic crashes and violations with respect to demographic factors. The results are then presented in descriptive analysis and crash prediction modeling subsections, followed by a section devoted to a discussion of the modeling results and key findings. Finally, the paper wraps up with conclusions, limitations, and potential for future research.
According to the background studies, about 90–95% of traffic incidents result from human actions. Thus, it is reasonable to infer that a crash is likely the fault of the driver and not the fault of the vehicle. Dahlen et al. (2012) [4] showed that aggressiveness in driving increases the risk of crashes and physical injuries. The study associated anger, which interferes with judgment and coordination behind the wheel, with lower driving performance and an increased likelihood of a crash. Aggressive behavior is the intention to harm or injure other drivers or pedestrians in any emotional or physical way. Alonso et al. (2019) [5] found that the perception of anger, aggressiveness, and risky behavior changes with the sociodemographic characteristics of the participants, and that people's attitudes and behaviors towards road safety are a reflection of their perception. Tao et al. (2017) [6] found that personality traits and driving experience played a role in predicting the risk of traffic crashes. Regev et al. (2018) [7] studied the relationship between exposure, age, gender, and time of driving in the UK and showed that both low and high exposure (that is, time behind the wheel) are associated with elevated risk. In Egypt, Elshamly et al. (2017) [8] found that fatigue due to long driving hours and lack of sleep is the likeliest cause of truck crashes. These findings highlight the important role played by human factors in the risk of crash involvement among drivers. Vanlaar et al. (2006) [9] validated an empirical model of drivers' perceptions of the causes of road accidents, and of differences in perception between participants, using face-to-face interviews in 23 countries in which respondents rated 15 causes of road accidents on a six-point ordinal scale. The model showed no relevant differences between the participants from the 23 countries; however, driving under the influence of alcohol or drugs was perceived as the most significant variable in causing road accidents, followed by mobile phone use.
Several methods have been used to examine driver behavior and characteristics, including GPS tracking devices, the Mobile-Sensor-Platform for Intelligent Recognition of Aggressive Driving (MIROAD), and virtual reality systems. Other methods involve the use of the Driver Behavior Questionnaire (DBQ) technique. This research adopts the DBQ method to collect demographic information, driving behavior, and drivers' crash history, and to estimate the level of risk perception.
Reason et al. [10] developed the Driver Behavior Questionnaire (DBQ) to measure drivers' actions behind the wheel. The DBQ is one of the most widely used instruments for measuring self-reported driving behaviors. The DBQ method has been used for studies in China [11], Canada [12], Denmark [13], Latvia [14], Qatar [15], and other countries. The content and structure of the studies varied with the study scope, objective, and participants.
Despite the popularity of the DBQ, this research is the first attempt to implement the DBQ with Egyptian drivers. The survey is divided into four sections: (A) demographic characteristics, (B) the driver's traffic violations and crashes, (C) driver behavior, and (D) risk perception.
As stated earlier, the research objective is to study drivers' negative behaviors, risk perception, and their relationship to traffic crashes. In the absence of advanced technologies (such as simulators) for in-depth behavioral studies that simulate real driving, this study adopted the DBQ technique and developed a survey form to collect the required data on demographics, crash history, traffic violations, behaviors, perception, and personality traits. The data was then analyzed descriptively and statistically, and different statistical models were derived, tested, and compared. Accordingly, the best model was selected to predict the number of crashes from the relevant variables.
Survey form design
As survey design is critical to achieving the study's objectives, a comparative literature review was synthesized to determine how previous researchers designed their DBQ instruments. The research team then summarized the most relevant studies to capture the most significant reported variables, categorized by demographics, crash history, traffic violations, behaviors, perception, and personality traits. This is illustrated in Table 1. The comparative literature review resulted in identifying 58 variables gleaned from previous studies (see Table 1), categorized as very significant (VS); correlated but not very significant (C); or not significant nor correlated at all (NS). Although the research team identified the most significant variables from state-of-the-art research as a starting point, knowledge of the local context and exploratory interviews with drivers accentuated the need to consider additional factors/questions, such as driving under stress, which seemed relevant in the Egyptian context. A 35-question instrument was developed, of which 17 questions focused on driver behavior, while the rest were designed to capture the other factors associated with roadway safety, such as the driver's characteristics, previous traffic violations, and crash history. Most of the questions were on a 5-point Likert-type scale (1 = never, 2 = rarely, 3 = sometimes, 4 = most of the time, and 5 = always). The questions were divided into four sections, each with a set of related variables, as detailed in the following subsections:
Table 1 Comparative literature review and significance of reported variables*
Demographic characteristics
Questions in this section included the age, gender, driving experience, number of daily driving hours and trips, and education level, to study the relationship between each factor and traffic crashes. Gleaning participants' age and gender is important for studying the relationship between drivers' age group, their driving behavior, and the occurrence of traffic crashes.
The education level and driving experience are used to study the effect on risky driving behavior, traffic violations, traffic crashes, and risk perception. As the literature reveals that driving experience plays an important role in road safety, the instrument addressed the number of hours of daily driving and the number of trips per day to measure the exposure of the drivers to potential roadway incidents.
Traffic violations and crash history
This section of the questionnaire was designed to elicit information about the driver's traffic violation and crash history, specifically the cause and number of these crashes in the previous 3 years, to serve as a basis for the statistical and descriptive analysis to investigate the relationship between traffic crashes, driving behavior, risk perception, and demographic factors.
Driving behavior
This section has 17 questions measuring participants' risky and aggressive behavior while driving; such behavior could include speeding, tailgating, distracted driving, and failure to wear a seatbelt. Other questions investigated the participant's reaction when experiencing aggressive or inappropriate behavior from other drivers or roadway users.
Risk perception
The participant's level of risk perception was captured using a specific technique. The research team mapped several selected roads to glean real-life footage of various combinations of traffic conditions, driver population, day/night conditions, etc. Participants were then exposed to selected scenes from real-life situations captured on different roads in and around Cairo. These images depicted traffic violations, aggressive behavior, traffic safety issues, and road geometry concerns in 10 scenarios typical of driving on Egyptian roads [for example: overloading a vehicle, picking up/dropping off passengers along an urban highway, pedestrians crossing in front of traffic on the highway, and heavy trucks driving in the left lane, to name a few]. The objective of this part was to investigate the relationship of driver risk perception, aggressive behavior, and crash history with relevant demographic factors. The selected scenarios were presented to the participants to evaluate from a safety perspective and their own awareness while driving, by rating each scenario/situation on a scale from 1 to 5, 1 being very safe and 5 being very dangerous.
The study depended on data collection by online questionnaires and field survey forms. It was expected that significant parameters for the drivers in Egypt may vary from those found in similar studies in other countries. The difference was related to road conditions, driver behavior, safety warrants, culture and habits, traffic laws, and enforcement.
The sample size calculation indicated a required sample of 385, assuming that the population size exceeds 1 million, the margin of error is at most 0.05, and the confidence level is 95%.
$$ \mathrm{Sample\ size}=\frac{\dfrac{Z^2\, P\left(1-P\right)}{e^2}}{1+\dfrac{Z^2\, P\left(1-P\right)}{e^2 N}} $$
where N is the population size, e is the margin of error, P is the expected sample proportion (taken as 0.5 for the most conservative estimate), and Z is the Z-score corresponding to the chosen confidence level (1.96 for 95%).
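As a quick sanity check on the figure quoted above, the formula can be evaluated directly. The short script below is an illustrative sketch only (it assumes P = 0.5, the most conservative choice, and Z = 1.96 for 95% confidence); it is not part of the study's tooling.

```python
def sample_size(N, e=0.05, Z=1.96, P=0.5):
    """Required sample size for a finite population (Cochran's formula with correction)."""
    n0 = (Z ** 2) * P * (1 - P) / (e ** 2)  # infinite-population estimate (~384.16 here)
    return n0 / (1 + n0 / N)                # finite-population correction

# Example: any population well above one million drivers
print(sample_size(N=8_600_000))  # ~384.1, which rounds up to the 385 quoted above
```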
The data collection process started with a pilot survey on a limited scale to ensure the terminology was clear. A second trial involving 35 participants proved the soundness of the questionnaire. Then, the survey was published on multiple communication channels in early 2019. Researchers conducted personal interviews with drivers at factory loading stations, public transportation main terminals, and waiting areas around key attractions (e.g., shopping malls, cinemas, hospitals), resulting in 883 completed interviews with 515 private car drivers, 82 taxi drivers, 110 public bus drivers, 124 truck drivers, and 52 public transit drivers. After eliminating surveys with incomplete responses, the researchers had data from 824 participants.
The data collection process stopped at this number as it exceeded the minimum sample size. In fact, since the survey collected over 824 valid responses, the confidence level can be raised to 99%. The survey targeted drivers across Egypt. With roughly 8.6 million registered vehicles in Egypt, and the number of drivers possibly reaching three times the number of registered vehicles, it is practically impossible to survey the entire population of drivers; the sample size calculation, in any case, converges to an almost fixed value beyond a certain population size. In addition, due to geographical constraints in targeting truck, taxi, and public transportation drivers, the field interviews only captured drivers from Cairo and Giza. These drivers were all male, which is expected to result in an over-representation of males in the sample. Another point to consider is that the sample frame would carry a bias due to the online communication channels used in the data collection; all of these points are revisited in the limitations section at the end of the paper.
It is worth noting that approval and consent were part of the survey design, and the research and questionnaire did not use any personal data; participation was voluntary and anonymous. Confidentiality and the scientific value of the data were emphasized, highlighting that the data would be used only for research purposes, in order to encourage participants to provide sincere answers to all questions; some drivers were initially afraid to participate, thinking that the data might be shared with the traffic police. The data was then collected and initially wrangled through descriptive analysis to explore and cluster the participants according to driver categories. Each category was described separately and illustrated by graphs and figures showing the distributions of answers among the survey variables, as presented in the results section below. The data was integrated into a logical format for further processing with the Statistical Package for the Social Sciences (SPSS®, version 22) and Minitab software. The analyzed data and variables were then used to produce predictive models and estimate the expected number of crashes based on the drivers' characteristics, as presented in the modeling section that follows the descriptive analysis results discussed in the next section.
The researchers first analyzed the data and variables using descriptive analysis to identify the significant variables, as shown below.
Descriptive analysis
The descriptive analysis of the questionnaire was based on the participants' data and covers the reported crashes and their causes, the driving behavior questions and the number of crashes related to these behaviors, and the risk perception ratings and percentages for each scenario based on the participants' opinions. Age and gender data, driving experience, education, and the number of daily trips for each driver are also presented.
The demographic analysis results show that 74.4% of the participants were males (including the bus, taxi, and truck drivers, who made up over 30% of the sample), and 70.7% had a university undergraduate or post-graduate degree. The results also show that 57.6% had five or more years of driving experience; of these, 52.5% were taxi, truck, or microbus drivers. Figure 1 indicates that the majority of participants (57.28%) were involved in one to three crashes in the last 3 years, categorized by the following variables, with the dominant level given in brackets: age [26–40 years], gender [male], years of driving experience [> 10 years], and education [university or post-graduate degree]. As shown in Fig. 1a, the greatest percentage of participants involved in the 1–3 crashes category were 26–40 years old, and more than 30% of this category have more than 10 years of driving experience. As shown in Fig. 1c, 189 participants (48.8%) with 10 or more years of experience were taxi, truck, or microbus drivers, and 188 of these 189 participants (99.47%) usually drove 3–8 h per day. Therefore, these drivers had longer exposure times, increasing their probability of being involved in crashes.
Fig. 1 Demographic variables and number of crashes
In this sample, 74.4% are males, as can be concluded from Fig. 1b, and more than 40% of the participants involved in the 1–3 crashes category were males; females made up only 12% of this category. It should be noted that 25.6% of the sample (211 participants) were female drivers; 71 of these held a post-graduate degree, and the rest had graduated from a university with an undergraduate degree.
The average crash frequency for the demographic variables that were deemed significant is shown in Fig. 2, holding all other variables fixed. In the demographic dimension, the age category was inversely proportional to the number of crashes. As shown in Fig. 2b, there was a relationship between exposure and the number of traffic crashes for public transportation and truck drivers who drove at least 3 h a day. The same is true of the general drivers: those who drove more than 3 h per day tended to have more crashes.
Fig. 2 Data trend for the mean crashes versus demographic variables and trips
Number and reasons for crashes
The participants were asked to report the number and cause of vehicular crashes they experienced in the previous 3 years. The number of crashes ranged widely from 0 to 16, with a reported mean of 2.04 and a median of 1.00 crashes per participant. The reasons behind the reported crashes are summarized in Fig. 3. Participants believed their crashes were caused primarily (18.7%) by tailgating, the failure to keep a sufficiently safe distance between their car and the car in front, followed by sudden swerving of their car or the car in front (16.36%). The third most likely cause (14.71%) was distracted driving, caused by mobile phones or eating.
Fig. 3 Causes of reported crashes
Driver behavior data
Driver behavior data show that 56.4% of the participants exceeded the posted speed limit, while 15.0% of the respondents said they always overtake from the right-hand side. In addition, 40.1% said they use phones while driving, and 61.7% of drivers said they express anger or aggressiveness by using the headlight beam or honking the horn. Keeping a safe following distance of more than 18.0 m while driving at a speed of 80 km/h was respected by only 35.5% of the drivers. Finally, the survey results indicated that a significant 25.48% of the participants drove in the opposite direction of traffic. While the authors acknowledge that this percentage is considerably higher than in any other country, it is noteworthy that 45% of respondents are public transportation and truck drivers; most of those drivers received only primary or preparatory education and do not always abide by driving rules. Therefore, this result was not a complete surprise, especially in the Greater Cairo Region, where researchers interviewed these drivers. Figure 4 shows the variability in the average crash frequency for the different levels of driving behavior. For example, drivers who regularly use the horn or the high beam aggressively (Fig. 4a), tend to drive in the opposite direction (Fig. 4b), or tend to tailgate the front vehicle (Fig. 4c) are more likely to be involved in crashes. In other words, hostility on the highway is more likely to result in a roadway crash. It was also found that seatbelt use was inversely proportional to the average number of crashes (Fig. 4d); that is, drivers who tend to use seatbelts were less likely to have been involved in a crash in the last 3 years.
Fig. 4 Data trend for the average mean crash frequency for each variable
As previously discussed in the DBQ setup, respondents were asked to rate specific scenes from real-life situations captured on different roads based on their perception of the risky behavior in the scenario. These are the scenes deemed as the most dangerous by various types of drivers. They depicted traffic violations, aggressive behavior, traffic safety issues, and road geometry concerns in 10 scenarios typically witnessed on Egyptian roads. The 10 captured scenarios were: a typical cross-section with no depicted risk, improper pavement marking or median variable width, illegal pickup/drop off of public transportation, illegal/unsafe loading of heavy trucks, night driving with no lights, trucks driving on the fast (left) lane, an illegal pedestrian crossing on busy highways, illegal pickup/drop off on highways, dangerous means of transportation on top of goods on trucks, and driving against traffic.
The participants rated the situation on a scale from 1 to 5, 1 being very safe and 5 being very dangerous. The respondents were grouped into three different driver categories: truck drivers, public transportation (bus and taxi) drivers, and passenger car drivers. The differences in the results between the three categories, presented in Fig. 5, provide interesting insights into how different groups of drivers perceive risk in different ways.
Fig. 5 Level of risk perception among driver types
The results showed that 93.5% of the truck drivers rated a pedestrian illegally crossing a road with high-speed traffic as the most dangerous situation. Illegal picking up or dropping off on the highway came second (86% rated this situation as very dangerous). Dangerous means of transporting passengers and cargo came third, with 82% of the truck drivers rating it as a very hazardous act. These three situations were perceived as the highest risk, as they are probably the main reasons for heavy vehicle crashes on Egyptian roads and pose the greatest danger to truck drivers. The results also showed that these participants did not perceive driving in the fast lane as a dangerous act for truck drivers, nor did they perceive driving in the opposite direction of traffic or illegal and dangerous truck loading as particularly dangerous.
Public transportation drivers had a slightly different view. A total of 92.9% agreed with the truck drivers that an illegal pedestrian crossing was the most dangerous, followed by an unsafe means of transportation (85%), and lastly were trucks driving in the fast or left lane (82.4%). These three situations were representative of the risky situations that public transportation drivers may encounter on roads in Egypt, and they increase the probability that these drivers will be involved in a crash. The public transportation drivers found that it is very dangerous for trucks to travel in the left lane, but not more dangerous than illegal pedestrian crossing. Also, they rated the illegal picking up and dropping off as a normal scenario because they, unlike truck drivers, do this regularly.
Passenger car drivers agreed with the public transportation drivers on the first and second most dangerous situations: illegal pedestrian crossing (94.6%) and dangerous means of transportation (90.9%). Passenger car drivers reported that the third most dangerous situation was truck drivers traveling in the fast lane (87.2%); this represents one of the most severe types of crashes on Egypt's highways, in which heavy vehicles collide with private cars and/or pedestrians. Regardless of the class of drivers, all participants rated the exposure of vulnerable pedestrians crossing illegally as the most dangerous situation (93.9%), followed by dangerous means of transport (86.6%) and truck drivers driving in the left lane (79.3%).
It is noteworthy that 25% of the participants said they might drive in the wrong direction to reach their destination faster and have done this at least once. Moreover, only 36% of the participants felt that driving against traffic was a very dangerous act. This reflects the general perception in Egypt that this is an acceptable way to drive given traffic conditions.
Modeling procedure for the probability of crashes
After the data were initially wrangled to explore the results, it was essential to conduct statistical analysis and modeling to characterize the drivers' behavior. Because the outcome of interest is a crash count, count regression techniques were utilized. In SPSS, the data and variables were analyzed to produce predictive models that estimate the number of crashes likely to occur based on the demographic factors, driver behavior, and risk perception. The dependent variable is the number of traffic crashes experienced, and the independent variables are the driver behavior, risk perception, and demographic variables, all of which are categorical. Regression models were applied to each cluster of variables to investigate the critical factors affecting the probability of crash occurrence within that cluster. For example, the demographic variables were examined against the number of crashes to determine how those variables affected the probability of crash occurrence. Each cluster was studied separately; then, all clusters were investigated together against the number of crashes.
Various regression analyses were conducted: linear regression (LR), negative binomial regression (NBR), and Poisson regression (PR). A series of tests and analyses were carried out to assess the most suitable model for these data. Crashes are count data and are usually modeled using Poisson and negative binomial regression models; rare-event count data such as crash occurrence tend to fit the Poisson distribution well (Washington et al. 2020 [23]). However, one requirement of the Poisson distribution is that the mean of the count data equals its variance, which is not the case in this research: the variance is significantly larger than the mean, implying that the data are over-dispersed. In many cases, over-dispersed count data are successfully modeled using the negative binomial distribution (Washington et al. 2020 [23]).
In the first step, data and codes were reviewed again to investigate the data distribution. Each of the collected variables is categorical, except for the car crash count, which is measured as the number of crashes. Three tests were run, PR, NBR, and LR, to assess the predictive capability of the variables. According to the initial results of the three models, where all the constructs are included, the NBR appears to be most appropriate in that it was able to extract a higher number of predictors. The variables were clustered in three dimensions: (1) driver behavior, (2) risk perception, and (3) demographic variables.
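To make this selection step concrete, the sketch below shows how such a comparison is typically run in Python with statsmodels. It is illustrative only: the file name and column names (`crashes`, `age`, `daily_trips`, `daily_hours`) are hypothetical placeholders, not the study's actual variable coding.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("dbq_responses.csv")  # hypothetical file, one row per respondent

# Overdispersion check: a variance far above the mean of the crash count
# favours the negative binomial specification over the Poisson one.
print("mean:", df["crashes"].mean(), "variance:", df["crashes"].var())

formula = "crashes ~ age + daily_trips + daily_hours"
poisson_fit = smf.glm(formula, data=df, family=sm.families.Poisson()).fit()
negbin_fit = smf.glm(formula, data=df, family=sm.families.NegativeBinomial()).fit()

# Lower information criteria indicate the better-fitting specification.
for name, fit in [("Poisson", poisson_fit), ("Negative binomial", negbin_fit)]:
    print(name, "AIC:", round(fit.aic, 1))
```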
Model A: demographic variables model
The correlation analysis for the demographic variables is shown in Table 2. It indicates a strong positive correlation between age and driving experience. The two variables most positively correlated with the number of crashes are the number of driving hours and the number of daily trips, while age is negatively correlated with the number of crashes. The internal consistency of the demographic variables was assessed with Cronbach's alpha reliability test [24]. The Cronbach's alpha scores for the initial and final trials of the demographic variables are shown in Table 4a. Using all of the drivers' demographic variables results in a low reliability (alpha = 0.319), so it was necessary to progressively drop variables from the model until an acceptable level of reliability was attained. Dropping the gender, trip purpose, and education variables resulted in a reasonably acceptable alpha coefficient of 0.735. The model parameter estimation is presented in Table 5a, indicating that, at a 5% significance level, only the age, number of daily driving hours, and number of daily trips variables can be retained to explain the predicted crash frequency.
Table 2 Driver demographics variables correlation analysis matrix
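For readers unfamiliar with the reliability measure used above, Cronbach's alpha for a scale of k items is defined as follows; this standard formula is included for reference and is not taken from the study itself.

$$ \alpha =\frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}{\sigma}_i^2}{{\sigma}_T^2}\right) $$

where k is the number of items, the numerator sums the variances of the individual items, and the denominator is the variance of the total score formed by summing all items; values closer to 1 indicate higher internal consistency.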
Model B: driver behavior variables model
The correlation analysis for the drivers' behavior variables, shown in Table 3, indicates a strong positive correlation between speeding and illegal overtaking, mobile phone use, and lane changing. Speeding is strongly and positively correlated with improper overtaking and frequent lane changing, behaviors that indicate an aggressive driving attitude. As with Model A above, the Cronbach's alpha reliability test initially produced an alpha coefficient of 0.532 when all the driver behavior variables were included. Dropping the drug impact, red-light running, and illegal lane-changing variables resulted in a reasonably acceptable reliability level of 0.720, as shown in Table 4b.
Table 3 Driver behavior variables correlation analysis matrix
Table 4 Cronbach's Alpha Test for Internal Consistency and Sensitivity Analysis
The model parameter estimation is presented in Table 5b. It indicates that, at a 5% significance level, only the exceeding the speed limit and driving in the opposite direction variables among the driver behavior factors can be retained to explain the predicted crash frequency. In addition, the keeping a safe following distance, using a seatbelt while driving, and honking the horn or using the high beam variables are significant at a 1% or higher significance level, resulting in the significant model shown in Table 5b, together with goodness-of-fit tests that assess how strongly and precisely this model predicts the number of crashes.
Table 5 Model Parameters Estimation
Model C: combined demographics and drivers' behavior model
Combining all the variables from both the demographic and behavioral dimensions results in a mixed model that estimates the predicted crash frequency as a function of both dimensions. The model parameter estimation is presented in Table 5c, indicating that, at a 5% significance level or less, the following variables are significant: age, number of daily driving hours, number of daily trips, honking the horn/using the high beam, and driving in the opposite direction. The variable that contributes the most to reducing the predicted frequency of crashes is age, indicating that older drivers are less likely to be involved in a crash; of course, this conclusion holds only up to a certain age and is a function of the participants' age groups. On the other hand, the behavioral components of the model exhibit a higher contribution to the predicted crash frequency than the demographic ones.
Model D: adjusted demographics and drivers' behavior model
Using all the significant and logical variables retained from all the previous models and considering Cronbach's alpha reliability measures and the correlation matrix results in a mixed model that estimates the predicted crash frequency as a function of only the significant and logical variables. The model parameter estimation is presented in Table 5d, indicating that, at 5% significance level or less, the following variables are significant: age, number of daily driving hours, number of daily trips, honking the horn/using high beams, driving in opposite direction, and keeping a safe following distance.
Age and keeping a safe following distance are the variables that contribute the most to reducing the predicted frequency of crashes. On the other hand, the driving in opposite direction variable contributes greatly to predicted crash frequency compared to the other variables.
Model comparison
The model parameter estimates presented in the modeling tables indicate that all four models are significant. However, the model that best represents the sample was identified by comparing all models against well-known comparison measures, starting with the goodness of fit assessed by the chi-square statistic. The chi-square test checks the goodness of fit under the null hypothesis and can be used to determine whether two or more categorical random variables, such as age and accidents, are independent of each other; it is also used to compare the log-likelihoods of regression models under the null hypothesis, and hence to compare different models and identify the one that best fits the data. Two further measures, Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC), are mathematical methods for evaluating how well a model fits the data it was generated from. In statistics, the AIC is used to compare possible models and determine which one best fits the data, while the BIC estimates the probability of a model being true, so a lower BIC means that a model is considered more likely to be the true one. Both criteria are based on various assumptions and asymptotic approximations.
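For reference, the two criteria are defined as follows, where k is the number of estimated parameters, n the number of observations, and L the maximized value of the likelihood function; these standard definitions are added here for clarity and are not part of the original tables.

$$ \mathrm{AIC}=2k-2\ln L,\qquad \mathrm{BIC}=k\ln n-2\ln L $$

Lower values of either criterion indicate a better trade-off between goodness of fit and model complexity.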
Model goodness of fit
All the models were assessed for goodness of fit. The results showed that Model D, the adjusted demographics and drivers' behavior model, best represented the sample: its Omnibus test (likelihood ratio chi-square) value was 99.235 with 6 degrees of freedom and a significance of p = 0.000, its deviance was 720.037 on 816 degrees of freedom, with R2 = 0.912, and its Pearson chi-square was 976.801 on 816 degrees of freedom, with R2 = 1.097.
Bayesian Information Criteria (BIC)
All the models were compared using the Bayesian Information Criterion (BIC). The results indicated that Model D had the lowest BIC, equal to 3430.237; Model C had a BIC of 3484.213, Model B of 3455.174, and Model A of 3453.663. This means that Model D is the most plausible model, as its BIC value is the lowest.
Akaike Information Criteria (AIC)
Reviewing all the models in terms of the Akaike Information Criterion (AIC), Model D, the adjusted demographics and drivers' behavior model, had the lowest and therefore best value, equal to 3392.524; Model C was equal to 3422.174, Model B to 3422.174, and Model A to 3430.093. The Consistent Akaike's Information Criterion (CAIC) for Model D was also the lowest among all models, at 3438.237.
Model testing
Testing the models showed that Model D is preferred by both the BIC and the AIC. When testing the models against the McFadden pseudo R2 goodness-of-fit value, Model D (R2 = 0.912) was found to be a better fit for the data because it falls within the accepted values (0.4 and 0.9). Conclusively, the best model found in this research is Model D, defined by the following equation:
$$ \mathrm{Predicted}\ \mathrm{Crash}\ \mathrm{Count}=\exp \left( Int+{\beta}_1{x}_1+{\beta}_2{x}_2+\cdots +{\beta}_k{x}_k\right) $$
Predicted Crash Count = exp(0.685) × exp(0.096 × Honking Horn/Using High Beam) × exp(0.106 × Driving in Opposite Direction) × exp(−0.066 × Keeping Safe Following Distance) × exp(−0.116 × Age) × exp(0.075 × # of Daily Trips) × exp(0.045 × # of Daily Driving Hours)
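As an illustration of how the fitted equation would be applied, the snippet below evaluates it for one hypothetical driver profile. The numeric coding assumed for each predictor (Likert-style levels for the behavioral items and category indices for age and exposure) is a guess for demonstration purposes and may not match the study's actual coding.

```python
import math

# Coefficients taken from Model D (adjusted demographics and drivers' behavior model)
COEFFICIENTS = {
    "horn_or_high_beam": 0.096,
    "driving_opposite_direction": 0.106,
    "safe_following_distance": -0.066,
    "age": -0.116,
    "daily_trips": 0.075,
    "daily_driving_hours": 0.045,
}
INTERCEPT = 0.685

def predicted_crash_count(profile):
    """Evaluate exp(intercept + sum of coefficient * predictor value)."""
    linear_term = INTERCEPT + sum(COEFFICIENTS[name] * value
                                  for name, value in profile.items())
    return math.exp(linear_term)

# Hypothetical profile: frequent horn/high-beam use, some wrong-way driving,
# rarely keeps a safe gap, youngest age group, several short trips per day.
driver = {
    "horn_or_high_beam": 4,
    "driving_opposite_direction": 2,
    "safe_following_distance": 2,
    "age": 1,
    "daily_trips": 4,
    "daily_driving_hours": 3,
}
print(round(predicted_crash_count(driver), 2))
```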
The DBQ technique, combined with risk perception scenarios, can be used as an enabling tool to understand drivers' characteristics and behaviors and collect information on the crashes they experience, especially in cases where a structured periodic crash database is largely missing.
The key conclusions of this research can be summarized as follows:
Participants stated that their crashes were primarily attributed to tailgating and failure to keep a safe gap, while the modeling results added that horn honking, use of high beams, and driving toward oncoming traffic also are aggressive factors contributing to the predicted number of crashes.
Regardless of driver type (private car, public transportation, or truck drivers), all participants said that the most dangerous behavior was when pedestrians illegally crossed a busy highway. Dangerous means of transport by cargo and passengers and trucks illegally driving in the left or fast lane were considered the second and third most hazardous.
The variables that contribute the most to reducing the predicted frequency of crashes are age and keeping a safe following distance.
Behavioral components of the model exhibit a greater contribution to predicted crash frequency than do the demographic ones.
In practice, this research has the potential to support the Ministry of Transport and traffic police responsible for law enforcement on the following directions:
Introduction of Anti-Car-Honking Ordinance and Traffic Management and Noise Control Act to enforce traffic control, maintain traffic order and ensure traffic safety.
Raise awareness of driving etiquette rules to avoid the "flashing to dazzle" effect and consider including informative material in the driving test exam.
Setting and enforcing driving-hours regulations, as research evidence relates fatigue to crashes and this was clearly shown in the modeling results herein, especially since the majority of participants who reported long driving hours are truck drivers, followed by public transportation (bus and taxi) drivers.
Consider specific education and training programs for beginning drivers, including behind-the-wheel driver education that addresses tailgating, driving in the opposite direction, and seatbelt use. Additionally, consider driver improvement schools for young offenders, as a non-trivial number of respondents were involved in 1–3 accidents in the last 3 years and the number of crashes decreases as the age group increases.
The survey was performed with special care to avoid response patterns as much as possible. One of the biggest limitations of this study is the self-reported online data, which can be associated with social desirability bias or poor understanding of the questionnaire. During the data collection process, geographical constraints arose, as the field interviews only captured drivers from Cairo and Giza. Also, the passenger car drivers recruited via social media introduce some limitations in terms of age and educational categories, as most hold university or post-graduate degrees; the sample therefore appears biased towards higher educational levels. In addition, because the field interviews with truck, taxi, and public transportation drivers involved only males, males represent 74% of the total sample, which we believe over-represents male drivers; however, no official statistics report the number or percentage of female drivers in Egypt. As this bias could not be controlled, and due to time and effort limitations, the research had to accept it and mitigate it as described in the modeling section. The collected data was also limited to 2019 only. Furthermore, the risk perception data collected could not be modeled due to unresolved issues, and the number of collisions reported by each participant could not be verified owing to several constraints, such as the anonymity agreement and the unavailability of government data.
Negative binomial regression was used for the model after it outperformed Poisson regression in the comparison. Four models were presented, and based on the model testing and comparison presented above, only one was recommended for use in the formula predicting the number of crashes.
Opportunities for future research could include the following:
Harnessing the potential of emerging tools, like driving simulation, and initiating programs like naturalistic driving—even at a modest scale.
Adopting a structured equation model [SEM] further extends the modeling effort presented in this paper by quantitatively studying multivariable relationships between measurement variables and latent variables.
All the materials, including but not limited to the descriptive analysis, tables, figures, statistical models, and equations, are included in the manuscript. In addition, all the relevant raw data, Excel sheets, questionnaire forms, collected data, and SPSS files are freely available from the corresponding author on reasonable request to any researchers who wish to use them for non-commercial purposes, while preserving the confidentiality and anonymity of the collected data.
CAPMAS:
Egyptian Central Agency for Public Mobilization and Statistics
DBQ:
Driver Behavior Questionnaire
MIROAD:
Mobile Intelligent Recognition of Aggressive Driving
LR:
Linear regression analysis
Negative binomial regression analysis
Poisson regression analysis
SEM:
Structured equation model
World Health Organization. Global status report on road safety 2018: summary. No. WHO/NMH/NVI/18.20. World Health Organization, 2018.
Janstrup KH (2017) Road Safety Annual Report 2017. Technical University of Denmark: Lyngby, Denmark.
Central Agency for Public Mobilization and Statistics. CAPMAS (2018), DDI-EGY-CAPMAS-Road-2018.
Dahlen ER, Edwards BD, Tubré T, Zyphur MJ, Warren CR (2012) Taking a look behind the wheel: An investigation into the personality predictors of aggressive driving. Accid Anal Prev 45:1–9. https://doi.org/10.1016/j.aap.2011.11.012
Alonso F, Esteban C, Montoro L, Serge A (2019) Conceptualization of aggressive driving behaviors through a perception of aggressive driving scale (PAD). Transp Res F: Traffic Psychol Behav 60:415–426. https://doi.org/10.1016/j.trf.2018.10.032
Tao D, Zhang R, Qu X (2017) The role of personality traits and driving experience in self-reported risky driving behaviors and accident risk among Chinese drivers. Accid Anal Prev 99(Pt A):228–235. https://doi.org/10.1016/j.aap.2016.12.009
Regev S, Rolison JJ, Moutari S (2018) Crash risk by driver age, gender, and time of day using a new exposure methodology. J Saf Res 66:131–140. https://doi.org/10.1016/j.jsr.2018.07.002
Elshamly AF, Abd El-Hakim R, Afify HA (2017) Factors affecting accidents risks among truck drivers in Egypt. In: MATEC Web of Conferences, vol 124, p 04009 EDP Sciences
Vanlaar W, Yannis G (2006) Perception of road accident causes. Accid Anal Prev 38(1):155–161. https://doi.org/10.1016/j.aap.2005.08.007
Reason J, Manstead A, Stradling S, Baxter J, Campbell K (1990) Errors and violations on the roads: a real distinction? Ergonomics 33(10-11):1315–1332. https://doi.org/10.1080/00140139008925335
Zhang H, Qu W, Ge Y, Sun X, Zhang K (2017) Effect of personality traits, age and sex on aggressive driving: psychometric adaptation of the Driver Aggression Indicators Scale in China. Accid Anal Prev 103:29–36. https://doi.org/10.1016/j.aap.2017.03.016
Cordazzo ST, Scialfa CT, Bubric K, Ross RJ (2014) The driver behaviour questionnaire: A north American analysis. J Saf Res 50:99–107. https://doi.org/10.1016/j.jsr.2014.05.002
Martinussen LM, Hakamies-Blomqvist L, Møller M, Özkan T, Lajunen T (2013) Age, gender, mileage and the DBQ: the validity of the Driver Behavior Questionnaire in different driver groups. Accid Anal Prev 52:228–236. https://doi.org/10.1016/j.aap.2012.12.036
Perepjolkina V, Renge V (2011) Drivers' Age, Gender, Driving Experience, and Aggressiveness as Predictors of Aggressive Driving Behaviour. Signum Temporis 4(1):62–72. https://doi.org/10.2478/v10195-011-0045-2
Bener A, Özkan T, Lajunen T (2008) The driver behaviour questionnaire in Arab gulf countries: Qatar and United Arab Emirates. Accid Anal Prev 40(4):1411–1417. https://doi.org/10.1016/j.aap.2008.03.003
Al Naser NB, Hawas YE, Maraqa MA (2013) Characterizing driver behaviors relevant to traffic safety: a multistage approach. J Transp Saf Secur 5(4):285–313. https://doi.org/10.1080/19439962.2013.766291
Sümer N, Lajunen T, Özkan T (2002) Sürücü davranislarinin kaza riskindeki rolü: ihlaller ve hatalar (The role of driver behaviour in accident risk: violations and errors). In: International Traffic and Road Safety Congress & Fair
Blockey PN, Hartley LR (1995) Aberrant driving behaviour: errors and violations. Ergonomics 38(9):1759–1771. https://doi.org/10.1080/00140139508925225
Mesken J, Lajunen T, Summala H (2002) Interpersonal violations, speeding violations and their relation to accident involvement in Finland. Ergonomics 45(7):469–483. https://doi.org/10.1080/00140130210129682
Gueho L, Granie MA, Abric JC (2014) French validation of a new version of the Driver Behavior Questionnaire (DBQ) for drivers of all ages and level of experiences. Accid Anal Prev 63:41–48. https://doi.org/10.1016/j.aap.2013.10.024
Şimşekoğlu Ö, Nordfjærn T, Rundmo T (2012) Traffic risk perception, road safety attitudes, and behaviors among road users: a comparison of Turkey and Norway. J Risk Res 15(7):787–800. https://doi.org/10.1080/13669877.2012.657221
Ali EK, El-Badawy SM, Shawaly EA (2014) Young drivers behavior and its influence on traffic incidents. J Traffic Logist Eng 2(1):45–51. https://doi.org/10.12720/jtle.2.1.45-51
Washington S, Karlaftis MG, Mannering F, Anastasopoulos P (2020) Statistical and econometric methods for transportation data analysis. Chapman and Hall/ CRC press. https://doi.org/10.1201/9780429244018
Cortina JM (1993) What is coefficient alpha? An examination of theory and applications. J Appl Psychol 78(1):98–104. https://doi.org/10.1037/0021-9010.78.1.98
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Public Works Department, Traffic and Highway Engineering, Cairo University, Giza, Egypt
Islam Sayed, Hossam Abdelgawad & Dalia Said
All the authors confirm contribution to the paper as follows: study conception and design: DS and HA. Data collection: Sayed. Analysis and interpretation of results: IS, DS, and HA. Draft manuscript preparation: IS, DS, and HA. Manuscript review: DS and HA. All authors reviewed the results and approved the final version of the manuscript.
IS is a senior highway engineer and is interested in road safety, driver behavior, risk perception, crash analysis, autonomous vehicles, and smart city research. Sayed is currently working at RAK Municipality. He is responsible for planning and designing the strategic Megaprojects of the Emirate of Ras Al Khaimah. This is after working for Parsons Corporation for more than 5 years. Sayed was part of the designing team responsible for the roads and infrastructure design of some of Dubai's signature projects like EXPO 2020, Palm Deira, Dubai One Way system, Dubai Design District, Health care city, and a lot more.
HA is an Associate Professor at Cairo University, Traffic and Highway Engineering, and also a Director of Urban Transport Technologies at SETS. He completed his Ph.D. in ITS from the University of Toronto. Much of his professional and academic experience has been accumulated in Egypt, the Middle East, and in Canada in Traffic management, Transportation planning, operations, modeling, and optimization; ITS specifications, technical requirements, and functional testing; Smart Mobility Systems Concepts, Vision, and Strategy; Data analytics and visualization, data-driven innovation in transportation and spatial data management; and Roadway safety audits, operational reviews, speed management and traffic calming measures.
DS is an Associate Professor at Cairo University. She completed her Ph.D. at Carleton University, Canada in 2008. She has taken part and led in several research projects in Canada and Egypt related to Traffic Safety on Highways, Driver Behaviour and its Relation to Geometric Design of Highways, and Using New Technologies for Capturing Driver Behaviour Parameters. She has received several prestigious scholarships and awards during her studies including awards by the Transportation Association of Canada, and the National Science and Engineering Research Council of Canada. She is also a Professional Engineer and is involved in several strategic transportation projects.
Correspondence to Dalia Said.
Sayed, I., Abdelgawad, H. & Said, D. Studying driving behavior and risk perception: a road safety perspective in Egypt. J. Eng. Appl. Sci. 69, 22 (2022). https://doi.org/10.1186/s44147-021-00059-z
Accepted: 13 December 2021
Drivers' behavior
Driver demographics
Roadway crashes | CommonCrawl |
\begin{document}
\title{DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers} \thispagestyle{empty}
\begin{abstract} This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem. \end{abstract}
\begin{keywords} Multi-agent network, decentralized optimization, Alternating Direction Method of Multipliers. \end{keywords}
\section{Introduction}\label{sec_Introduction}
Decentralized algorithms are used to solve optimization problems where components of the objective are available at different nodes of a network. Nodes access their local cost functions only but try to minimize the aggregate cost by exchanging information with their neighbors. Specifically, consider a variable $\tbx\in\reals^p$ and a connected network containing $n$ nodes each of which has access to a local cost function $f_i:\reals^p\to\reals$. The nodes' goal is to find the optimal argument of the global cost function $ \sum_{i=1}^{n}f_i(\tbx)$,
\begin{equation}\label{original_optimization_problem1}
\tbx^*
\ =\ \argmin_{\tbx}\sum_{i=1}^{n} f_i(\tbx). \end{equation}
Problems of this form arise in, e.g., decentralized control \cite{Bullo2009,Cao2013-TII,LopesEtal8}, wireless communication \cite{Ribeiro10,Ribeiro12}, sensor networks \cite{Schizas2008-1,KhanEtal10,cRabbatNowak04}, and large scale machine learning \cite{bekkerman2011scaling,Tsianos2012-allerton-consensus,Cevher2014}. In this paper we assume that the local costs $f_i$ are twice differentiable and strongly convex.
Different algorithms exist to solve \eqref{original_optimization_problem1} in a decentralized manner, and they can be divided into two major categories: those that operate in the primal domain and those that operate in the dual domain. Among primal domain algorithms, decentralized (sub)gradient descent (DGD) methods are well studied \cite{Nedic2009,Jakovetic2014-1,YuanQing}. They can be interpreted as either a mix of local gradient descent steps with successive averaging or as a penalized version of \eqref{original_optimization_problem1} with a penalty term that encourages agreement between adjacent nodes. This latter interpretation has been exploited to develop the network Newton (NN) methods that attempt to approximate the Newton step of this penalized objective in a distributed manner \cite{NN-part1,NN-part2}. The methods that operate in the dual domain consider a constraint that enforces equality between nodes' variables. They then ascend on the dual function to find optimal Lagrange multipliers, with the solution of \eqref{original_optimization_problem1} obtained as a byproduct \cite{Schizas2008-1,BoydEtalADMM11,Shi2014-ADMM,rabbat2005generalized}. Among dual domain methods, the decentralized implementation of the alternating directions method of multipliers (ADMM), known as DADMM, has proven to be very efficient with respect to convergence time \cite{Schizas2008-1,BoydEtalADMM11,Shi2014-ADMM}.
A fundamental distinction between primal methods such as DGD and NN and dual domain methods such as DADMM is that the former compute local gradients and Hessians at each iteration while the latter minimize local pieces of the Lagrangian at each step -- this is necessary since the gradient of the dual function is determined by Lagrangian minimizers. Thus, iterations in dual domain methods are, in general, more costly because they require solution of a convex optimization problem. However, dual methods also converge in a smaller number of iterations because they compute approximations to $\tbx^*$ instead of descending towards $\tbx^*$. Having complementary advantages, the choice between primal and dual methods depends on the relative cost of computation and communication for specific problems and platforms. Alternatively, one can think of developing methods that combine the advantages of ascending in the dual domain without requiring solution of an optimization problem at each iteration. This can be accomplished by the decentralized linearized ADMM (DLM) algorithm \cite{cQingRibeiroADMM14,ling2014dlm}, which replaces the minimization of a convex objective required by ADMM with the minimization of a first order linear approximation of the objective function. This yields per-iteration problems that can be solved with a computational cost akin to the computation of a gradient and a method with convergence properties closer to DADMM than DGD.
If a first order approximation of the objective is useful, a second order approximation should decrease convergence times further. The decentralized quadratically approximated ADMM (DQM) algorithm that we propose here minimizes a quadratic approximation of the Lagrangian minimization of each ADMM step. This quadratic approximation requires computation of local Hessians but results in an algorithm with convergence properties that are: (i) better than the convergence properties of DLM; (ii) asymptotically identical to the convergence behavior of DADMM. The technical contribution of this paper is to prove that (i) and (ii) are true from both analytical and practical perspectives.
We begin the paper by discussing solution of \eqref{original_optimization_problem1} with DADMM and its linearized version DLM (Section \ref{sec_preliminaries}). Both of these algorithms perform updates on dual and primal auxiliary variables that are identical and computationally simple. They differ in the manner in which principal primary variables are updated. DADMM solves a convex optimization problem and DLM solves a regularized linear approximation. We follow with an explanation of DQM that differs from DADMM and DLM in that it minimizes a quadratic approximation of the convex problem that DADMM solves exactly and DLM approximates linearly (Section \ref{sec:DQM}). We also explain how DQM can be implemented in a distributed manner (Proposition \ref{update_system_prop} and Algorithm \ref{algo_DQM}). Convergence properties of DQM are then analyzed (Section \ref{sec:Analysis}) where linear convergence is established (Theorem \ref{DQM_convergence} and Corollary \ref{convergence_of_primal}). Key in the analysis is the error incurred when approximating the exact minimization of DADMM with the quadratic approximation of DQM. This error is shown to decrease as iterations progress (Proposition \ref{error_vector_proposition}) faster than the rate that the error of DLM approaches zero (Proposition \ref{error_vector_proposition_2}). This results in DQM having a guaranteed convergence constant strictly smaller than the DLM constant that approaches the guaranteed constant of DADMM for large iteration index (Section \ref{sec:rate_comparison}). We corroborate analytical results with numerical evaluations in a logistic regression problem (Section \ref{sec:simulations}). We show that DQM does outperform DLM and show that convergence paths of DQM and DADMM are almost identical (Section \ref{comparison}). Overall computational cost of DQM is shown to be smaller, as expected.
\myparagraph{\bf Notation} Vectors are written as $\bbx\in\reals^n$ and matrices as $\bbA\in\reals^{n\times n}$. Given $n$ vectors $\bbx_i$, the vector
$\bbx=[\bbx_1;\ldots;\bbx_n]$ represents a stacking of the elements of each individual $\bbx_i$. We use $\|\bbx\|$ to denote the Euclidean norm of vector $\bbx$ and $\|\bbA\|$ to denote the Euclidean norm of matrix $\bbA$. The gradient of a function $f$ at point $\bbx$ is denoted as $\nabla f(\bbx)$ and the Hessian is denoted as $\nabla^2 f(\bbx)$. We use $\sigma(\bbB)$ to denote the singular values of matrix $\bbB$ and $\lambda(\bbA)$ to denote the eigenvalues of matrix $\bbA$.
\section{Distributed Alternating Directions Method of Multipliers} \label{sec_preliminaries}
Consider a connected network with $n$ nodes and $m$ edges where the set of nodes is $\mathcal{V}=\{1, \dots, n\}$ and the set of ordered edges $\mathcal{E}$ contains pairs $(i,j)$ indicating that $i$ can communicate to $j$. We restrict attention to symmetric networks in which $(i,j)\in\mathcal{E}$ if and only if $(j,i)\in\mathcal{E}$ and define node $i$'s neighborhood as the set $\mathcal{N}_i=\{j\mid (i,j)\in \mathcal{E}\}$. In problem \eqref{original_optimization_problem1} agent $i$ has access to the local objective function $f_{i}(\tbx)$ and agents cooperate to minimize the global cost $\sum_{i=1}^nf_{i}(\tbx)$. This specification is more naturally formulated by defining variables $\bbx_i$ representing the local copies of the variable $\tbx$. We also define the auxiliary variables $\bbz_{ij}$ associated with edge $(i,j)\in \mathcal{E}$ and rewrite \eqref{original_optimization_problem1} as
\begin{alignat}{2}\label{original_optimization_problem2}
\{\bbx_i^*\}_{i=1}^n :=\
&\argmin_{\bbx} &&\ \sum_{i=1}^{n}\ f_{i}(\bbx_{i}), \\ \nonumber
&\st &&\ \bbx_{i}=\bbz_{ij},\ \bbx_{j}=\bbz_{ij}, \
\text{for all\ } (i, j)\in\ccalE . \end{alignat}
The constraints $\bbx_{i}=\bbz_{ij}$ and $ \bbx_{j}=\bbz_{ij}$ enforce that the variable $\bbx_i$ of each node $i$ is equal to the variables $\bbx_j$ of its neighbors $j\in \ccalN_i$. This condition in association with network connectivity implies that a set of variables $\{\bbx_1,\dots,\bbx_n\}$ is feasible for problem \eqref{original_optimization_problem2} if and only if all the variables $\bbx_i$ are equal to each other, i.e., if $\bbx_1=\dots=\bbx_n$. Therefore, problems \eqref{original_optimization_problem1} and \eqref{original_optimization_problem2} are equivalent in the sense that for all $i$ and $j$ the optimal arguments of \eqref{original_optimization_problem2} satisfy $\bbx_i^*=\tbx^*$ and $\bbz_{ij}^*=\tbx^*$, where $\tbx^*$ is the optimal argument of \eqref{original_optimization_problem1}.
To write problem \eqref{original_optimization_problem2} in a matrix form, define $\bbA_s\in \reals^{mp\times np} $ as the block source matrix which contains $m\times n$ square blocks $(\bbA_s)_{e,i}\in \reals^{p\times p}$. The block $(\bbA_s)_{e,i}$ is not identically null if and only if the edge $e$ corresponds to $e=(i,j)\in \ccalE$ in which case $(\bbA_s)_{e,i}=\bbI_{p}$. Likewise, the block destination matrix $\bbA_d\in \reals^{mp\times np} $ contains $m\times n$ square blocks $(\bbA_d)_{e,i}\in \reals^{p\times p}$. The square block $(\bbA_d)_{e,i}=\bbI_p$ when $e$ corresponds to $e=(j,i)\in \ccalE$ and is null otherwise. Further define $\bbx:=[\bbx_1;\dots;\bbx_n]\in \reals^{np}$ as a vector concatenating all local variables $\bbx_i$, the vector $\bbz:=[\bbz_1;\dots;\bbz_m]\in \reals^{mp}$ concatenating all auxiliary variables $\bbz_e=\bbz_{ij}$, and the aggregate function $f:\reals^{np}\to\reals$ as $f(\bbx):= \sum_{i=1}^{n}f_i(\bbx_i)$. We can then rewrite \eqref{original_optimization_problem2} as
\begin{equation}\label{original_optimization_problem3}
\bbx^* := \argmin_{\bbx}f(\bbx), \quad
\st\ \bbA_s\bbx-\bbz=\bb0, \
\bbA_d\bbx-\bbz=\bb0. \end{equation}
Define now the matrix $\bbA=[\bbA_s;\bbA_d]\in \reals^{2mp\times np}$ which stacks the source and destination matrices, and the matrix $\bbB=[-\bbI_{mp};-\bbI_{mp}]\in \reals^{2mp\times mp}$ which stacks two negative identity matrices of size $mp$ to rewrite \eqref{original_optimization_problem3} as
\begin{align}\label{original_optimization_problem4}
\bbx^* := \argmin_{\bbx}f(\bbx), \quad
\st\ \bbA\bbx+\bbB\bbz=\bb0. \end{align}
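To make the block structure of $\bbA_s$, $\bbA_d$, $\bbA$, and $\bbB$ concrete, the following Python/NumPy sketch builds these matrices from an ordered edge list. The dense-matrix representation and the function name are illustrative assumptions rather than part of the formulation; in practice a sparse representation would be preferred.

\begin{verbatim}
import numpy as np

def build_constraint_matrices(edges, n, p):
    """Build A_s, A_d, A = [A_s; A_d] and B = [-I; -I] from an ordered
    edge list [(i, j), ...] over n nodes with variables in R^p."""
    m = len(edges)
    A_s = np.zeros((m * p, n * p))
    A_d = np.zeros((m * p, n * p))
    for e, (i, j) in enumerate(edges):
        # block (A_s)_{e,i} = I_p picks the source node of edge e = (i, j)
        A_s[e*p:(e+1)*p, i*p:(i+1)*p] = np.eye(p)
        # block (A_d)_{e,j} = I_p picks the destination node of edge e = (i, j)
        A_d[e*p:(e+1)*p, j*p:(j+1)*p] = np.eye(p)
    A = np.vstack([A_s, A_d])
    B = np.vstack([-np.eye(m * p), -np.eye(m * p)])
    return A_s, A_d, A, B
\end{verbatim}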
DADMM is the application of ADMM to solve \eqref{original_optimization_problem4}. To develop this algorithm introduce Lagrange multipliers $\bbalpha_e=\bbalpha_{ij}$ and $\bbbeta_e=\bbbeta_{ij}$ associated with the constraints $\bbx_{i}=\bbz_{ij}$ and $\bbx_{j}=\bbz_{ij}$ in \eqref{original_optimization_problem2}, respectively. Define $\bbalpha:=[\bbalpha_1;\dots;\bbalpha_m]$ as the concatenation of the multipliers $\bbalpha_e$ which yields the multiplier of the constraint $\bbA_s\bbx-\bbz=\bb0$ in \eqref{original_optimization_problem3}. Likewise, the corresponding Lagrange multiplier of the constraint $\bbA_d\bbx-\bbz=\bb0$ in \eqref{original_optimization_problem3} can be obtained by stacking the multipliers $\bbbeta_e$ to define $\bbbeta:=[\bbbeta_1;\dots;\bbbeta_m]$. Grouping $\bbalpha$ and $\bbbeta$ into $\bblambda:=[\bbalpha;\bbbeta]\in \reals^{2mp}$ leads to the Lagrange multiplier $\bblambda$ associated with the constraint $\bbA\bbx+\bbB\bbz=\bb0$ in \eqref{original_optimization_problem4}. Using these definitions and introducing a positive constant $c>0$ we write the augmented Lagrangian of \eqref{original_optimization_problem4} as
\begin{equation}\label{lagrangian}
\ccalL(\bbx,\bbz,\bblambda) := f(\bbx)+\bblambda^{T}\left(\bbA\bbx+\bbB\bbz\right)+ \frac{c}{2}\left\|\bbA\bbx+\bbB\bbz\right\|^2. \end{equation}
The idea of ADMM is to minimize the augmented Lagrangian $\ccalL(\bbx,\bbz,\bblambda)$ with respect to $\bbx$, followed by minimizing the updated Lagrangian with respect to $\bbz$, and finish each iteration with an update of the multiplier $\bblambda$ using dual ascent. To be more precise, consider the time index $k \in \naturals$ and define $\bbx_k$, $\bbz_k$, and $\bblambda_k$ as the iterates at step $k$. At this step, the augmented Lagrangian is minimized with respect to $\bbx$ to obtain the iterate
\begin{equation}\label{ADMM_x_update}
\bbx_{k+1}= \argmin_{\bbx} f(\bbx)+\bblambda_k^{T}\left(\bbA\bbx+\bbB\bbz_k\right)+ \frac{c}{2}\left\|\bbA\bbx+\bbB\bbz_k\right\|^2. \end{equation}
Then, the augmented Lagrangian is minimized with respect to the auxiliary variable $\bbz$ using the updated variable $\bbx_{k+1}$ to obtain
\begin{align}\label{ADMM_z_update} \bbz_{k+1}= \argmin_{\bbz} &\ f(\bbx_{k+1}) \\ \nonumber
&+\bblambda_k^{T}\left(\bbA\bbx_{k+1}+\bbB\bbz\right)
+ \frac{c}{2}\left\|\bbA\bbx_{k+1}+\bbB\bbz\right\|^2 . \end{align}
After updating the variables $\bbx$ and $\bbz$, the Lagrange multiplier $\bblambda_k$ is updated through the dual ascent iteration
\begin{equation}\label{ADMM_lambda_update} \bblambda_{k+1}=\bblambda_{k}+c\left(\bbA\bbx_{k+1}+\bbB\bbz_{k+1}\right). \end{equation}
The DADMM algorithm is obtained by observing that the structure of the matrices $\bbA$ and $\bbB$ is such that \eqref{ADMM_x_update}-\eqref{ADMM_lambda_update} can be implemented in a distributed manner \cite{Schizas2008-1,BoydEtalADMM11,Shi2014-ADMM}.
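As a point of reference, the sketch below implements one DADMM iteration \eqref{ADMM_x_update}-\eqref{ADMM_lambda_update} in centralized form, using a generic solver for the $\bbx$-minimization and the closed-form solution of the quadratic $\bbz$-minimization. It is only meant to expose the cost structure of the iteration; the actual DADMM implementation is distributed, and the solver choice and function names are assumptions.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def dadmm_step(f, x_prev, z_prev, lam_prev, A, B, c):
    """One (centralized, reference) DADMM iteration."""
    # x-update: exact minimization of the augmented Lagrangian over x
    def aug_lagrangian(x):
        r = A @ x + B @ z_prev
        return f(x) + lam_prev @ r + 0.5 * c * (r @ r)
    x_next = minimize(aug_lagrangian, x_prev).x
    # z-update: quadratic in z, solved from its first-order condition
    z_next = np.linalg.solve(B.T @ B,
                             -(B.T @ lam_prev) / c - B.T @ (A @ x_next))
    # dual ascent step on the multipliers
    lam_next = lam_prev + c * (A @ x_next + B @ z_next)
    return x_next, z_next, lam_next
\end{verbatim}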
The updates for the auxiliary variable $\bbz$ and the Lagrange multiplier $\bblambda$ are not costly in terms of computation time. However, updating the primal variable $\bbx$ can be expensive as it entails the solution of an optimization problem [cf. \eqref{ADMM_x_update}]. The DLM algorithm avoids this cost with an inexact update of the primal variable iterate $\bbx_{k+1}$. This inexact update relies on approximating the aggregate function value $f(\bbx_{k+1})$ in \eqref{ADMM_x_update} through a regularized linearization of the aggregate function $f$ in a neighborhood of the current variable $\bbx_{k}$. This regularized approximation takes the form $f(\bbx)\approx f(\bbx_k)+\nabla f(\bbx_k)^T(\bbx-\bbx_k)+(\rho/2)\|\bbx-\bbx_k\|^2$ for a given positive constant $\rho>0$. Consequently, the update formula for the primal variable $\bbx$ in DLM replaces the DADMM exact minimization in \eqref{ADMM_x_update} by the minimization of the quadratic form
\begin{align}\label{DLM_x_update}
\bbx_{k+1}= \argmin_{\bbx}\ & f(\bbx_k)+\nabla f(\bbx_k)^T(\bbx-\bbx_k)+\frac{\rho}{2}\|\bbx-\bbx_k\|^2 \nonumber \\ & \quad
+\bblambda_k^{T}\left(\bbA\bbx+\bbB\bbz_k\right)+ \frac{c}{2}\left\|\bbA\bbx+\bbB\bbz_k\right\|^2. \end{align}
The first order optimality condition for \eqref{DLM_x_update} implies that the updated variable $\bbx_{k+1}$ satisfies
\begin{equation}\label{DLM_optimality_cond} \nabla f(\bbx_{k})+\rho (\bbx_{k+1}-\bbx_{k})+\bbA^T\bblambda_{k}+c \bbA^{T} \left( \bbA\bbx_{k+1}+\bbB \bbz_{k} \right)=\bb0. \end{equation}
According to \eqref{DLM_optimality_cond}, the updated variable $\bbx_{k+1}$ can be computed by inverting the positive definite matrix $\rho\bbI+c\bbA^T\bbA$. This update can also be implemented in a distributed manner.
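In matrix form, \eqref{DLM_optimality_cond} gives $(\rho\bbI+c\bbA^T\bbA)\bbx_{k+1}=\rho\bbx_k-\nabla f(\bbx_k)-\bbA^T\bblambda_k-c\bbA^T\bbB\bbz_k$. The following short sketch, written for a centralized matrix representation and therefore only illustrative, makes this explicit.

\begin{verbatim}
import numpy as np

def dlm_primal_step(grad_f, x_k, z_k, lam_k, A, B, c, rho):
    """DLM primal update from its first-order optimality condition:
    solve (rho*I + c*A^T A) x_{k+1} = rho*x_k - grad f(x_k)
                                      - A^T lam_k - c*A^T B z_k."""
    lhs = rho * np.eye(x_k.size) + c * (A.T @ A)
    rhs = rho * x_k - grad_f(x_k) - A.T @ lam_k - c * (A.T @ (B @ z_k))
    return np.linalg.solve(lhs, rhs)
\end{verbatim}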
The sequence of variables $\bbx_k$ generated by DLM converges linearly to the optimal argument $\bbx^*$ \cite{cQingRibeiroADMM14}. Although this is the same rate as DADMM, the linear convergence constant of DLM is smaller than that of DADMM (see Section \ref{sec:rate_comparison}), and can be much smaller depending on the condition number of the local functions $f_i$ (see Section \ref{comparison}). To close the gap between these constants we can use a second order approximation of \eqref{ADMM_x_update}. This is the idea of DQM that we introduce in the following section.
\section{DQM: Decentralized Quadratically Approximated ADMM}\label{sec:DQM}
DQM uses a local quadratic approximation of the primal function $f(\bbx)$ around the current iterate $\bbx_k$. If we let $\bbH_{k}:=\nabla^2 f(\bbx_{k})$ denote the primal function Hessian evaluated at $\bbx_k$ the quadratic approximation of $f$ at $\bbx_k$ is $f(\bbx) \approx f(\bbx_k)+\nabla f(\bbx_k)^T(\bbx-\bbx_k)+(1/2)(\bbx-\bbx_k)^T\bbH_k(\bbx-\bbx_k)$. Using this approximation in \eqref{ADMM_x_update} yields the DQM update that we therefore define as
\begin{align}\label{DQM_x_update}
\bbx_{k+1}:= \argmin_{\bbx} f(&\bbx_k)
+\nabla f(\bbx_k)^T(\bbx-\bbx_k) \\ \nonumber
& +\frac{1}{2}(\bbx-\bbx_k)^T\bbH_k(\bbx-\bbx_k) \\ \nonumber
& +\bblambda_k^{T}\left(\bbA\bbx+\bbB\bbz_k\right)
+ \frac{c}{2}\left\|\bbA\bbx+\bbB\bbz_k\right\|^2 . \end{align}
Comparison of \eqref{DLM_x_update} and \eqref{DQM_x_update} shows that in DLM the quadratic term $(\rho/2)\|\bbx_{k+1}-\bbx_{k}\|^2$ is added to the first-order approximation of the primal objective function, while in DQM the second order approximation of the primal objective function is used to reach a more accurate approximation for $f(\bbx)$. Since \eqref{DQM_x_update} is a quadratic program, the first order optimality condition yields a system of linear equations that can be solved to find $\bbx_{k+1}$,
\begin{align}\label{DQM_update_1}
\nabla f(\bbx_{k})+ \bbH_{k}(\bbx_{k+1}\!-\!\bbx_{k})
+\bbA^T\!\bblambda_{k}+c \bbA^{T}\! \left( \bbA\bbx_{k+1}
+\bbB \bbz_{k} \right)=\bb0. \end{align}
This update can be solved by inverting the matrix $\bbH_k+c\bbA^T\bbA$ which is invertible if, as we are assuming, $f(\bbx)$ is strongly convex.
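For comparison with the DLM step sketched earlier, a centralized sketch of the DQM primal update follows; it differs only in that the regularization matrix $\rho\bbI$ is replaced by the Hessian $\bbH_k$. Again, the dense-matrix form and the function names are illustrative assumptions.

\begin{verbatim}
import numpy as np

def dqm_primal_step(grad_f, hess_f, x_k, z_k, lam_k, A, B, c):
    """DQM primal update: solve (H_k + c*A^T A) x_{k+1} =
       H_k x_k - grad f(x_k) - A^T lam_k - c*A^T B z_k."""
    H_k = hess_f(x_k)
    lhs = H_k + c * (A.T @ A)
    rhs = H_k @ x_k - grad_f(x_k) - A.T @ lam_k - c * (A.T @ (B @ z_k))
    return np.linalg.solve(lhs, rhs)
\end{verbatim}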
The DADMM updates in \eqref{ADMM_z_update} and \eqref{ADMM_lambda_update} are used verbatim in DQM, which is therefore defined by recursive application of \eqref{DQM_update_1}, \eqref{ADMM_z_update}, and \eqref{ADMM_lambda_update}. It is customary to consider the first order optimality conditions of \eqref{ADMM_z_update} and to reorder terms in \eqref{ADMM_lambda_update} to rewrite the respective updates as
\begin{align}\label{DQM_update_2}
\bbB^T\bblambda_{k} + c \bbB^T\left( \bbA\bbx_{k+1}+\bbB\bbz_{k+1} \right) &=\bbzero,
\nonumber\\
\bblambda_{k+1}-\bblambda_{k}-c\left( \bbA\bbx_{k+1}+\bbB\bbz_{k+1} \right) &=\bbzero. \end{align}
DQM is then equivalently defined by recursive solution of the system of linear equations in \eqref{DQM_update_1} and \eqref{DQM_update_2}. This system, as is the case of DADMM and DLM, can be reworked into a simpler form that reduces communication cost. To derive this simpler form we assume a specific structure for the initial vectors $\bblambda_{0}=[\bbalpha_0;\bbbeta_0]$, $\bbx_0$, and $\bbz_0$ as introduced in the following assumption.
\begin{assumption}\label{initial_val_assum} Define the oriented incidence matrix as $\bbE_o:=\bbA_s-\bbA_d$ and the unoriented incidence matrix as $\bbE_u:=\bbA_s+\bbA_d$. The initial Lagrange multipliers $\bbalpha_0$ and $\bbbeta_0$, and the initial variables $\bbx_0$ and $\bbz_0$ are chosen such that:
\begin{enumerate}[(a)] \item The multipliers are opposites of each other, $\bbalpha_0=-\bbbeta_0$. \item The initial primal variables satisfy $\bbE_u\bbx_0=2\bbz_0$. \item The initial multiplier $\bbalpha_0$ lies in the column space of $\bbE_o$. \end{enumerate} \end{assumption}
Assumption \ref{initial_val_assum} is minimally restrictive. The only non-elementary condition is (c) but that can be satisfied by $\bbalpha_0=\bbzero$. Nulling all other variables, i.e., making $\bbbeta_0=\bbzero$, $\bbx_0=\bbzero$, and $\bbz_0=\bbzero$ is a trivial choice to comply with conditions (a) and (b) as well. An important consequence of the initialization choice in \eqref{initial_val_assum} is that if the conditions in Assumption \ref{initial_val_assum} are true at time $k=0$ they stay true for all subsequent iterations $k>0$ as we state next.
\begin{lemma}\label{lag_var_lemma} Consider the DQM algorithm as defined by \eqref{DQM_update_1}-\eqref{DQM_update_2}. If Assumption \ref{initial_val_assum} holds, then for all $k\geq 0$ the Lagrange multipliers $\bbalpha_k$ and $\bbbeta_k$, and the variables $\bbx_k$ and $\bbz_k$ satisfy: \begin{enumerate}[(a)] \item The multipliers are opposites of each other, $\bbalpha_k=-\bbbeta_k$. \item The primal variables satisfy $\bbE_u\bbx_k=2\bbz_k$. \item The multiplier $\bbalpha_k$ lies in the column space of $\bbE_o$. \end{enumerate}
\end{lemma}
\begin{myproof} See Appendix \ref{app_lag_var_lemma}. \end{myproof}
The validity of (c) in Lemma \ref{lag_var_lemma} is important for the convergence analysis of Section \ref{sec:Analysis}. The validity of (a) and (b) means that maintaining multipliers $\bbalpha_k$ and $\bbbeta_k$ is redundant because they are opposites and that maintaining variables $\bbz_k$ is also redundant because they can be computed as $\bbz_k = \bbE_u\bbx_k/2$. It is then possible to replace \eqref{DQM_update_1}-\eqref{DQM_update_2} by a simpler system of linear equations as we explain in the following proposition.
\begin{proposition}\label{update_system_prop} Consider the DQM algorithm as defined by \eqref{DQM_update_1}-\eqref{DQM_update_2} and define the sequence $\bbphi_{k}:=\bbE_{o}^T\bbalpha_k$. Further define the unoriented Laplacian as $\bbL_u:=(1/2)\bbE_u^T\bbE_u$, the oriented Laplacian as $\bbL_o=(1/2)\bbE_o^T\bbE_o$, and the degree matrix as $\bbD:=(\bbL_u+\bbL_o)/2$. If Assumption \ref{initial_val_assum} holds true, the DQM iterates $\bbx_k$ can be generated as
\begin{align}\label{x_update_formula}
\bbx_{k+1} &= (2c\bbD+\bbH_k)^{-1}
\left[(c\bbL_u+\bbH_k)\bbx_k
-\nabla f(\bbx_{k})-\bbphi_k \right], \nonumber\\
\bbphi_{k+1} &= \bbphi_k +c \bbL_o\bbx_{k+1}. \end{align}
\end{proposition}
\begin{myproof} See Appendix \ref{app_update_system}. \end{myproof}
Proposition \ref{update_system_prop} states that by introducing the sequence of variables $\bbphi_{k}$, the DQM primal iterates $\bbx_k$ can be computed through the recursive expressions in \eqref{x_update_formula}. These recursions are simpler than \eqref{DQM_update_1}-\eqref{DQM_update_2} because they eliminate the auxiliary variables $\bbz_k$ and reduce the dimensionality of $\bblambda_k$ -- twice the number of edges -- to that of $\bbphi_k$ -- the number of nodes. Further observe that if \eqref{x_update_formula} is used for implementation we don't have to make sure that the conditions of Assumption \ref{initial_val_assum} are satisfied. We just need to pick $\bbphi_{0}:=\bbE_{o}^T\bbalpha_0$ for some $\bbalpha_0$ in the column space of $\bbE_o$ -- which is not difficult, we can use, e.g., $\bbphi_0=\bbzero$. The role of Assumption \ref{initial_val_assum} is to state conditions for which the expressions in \eqref{DQM_update_1}-\eqref{DQM_update_2} are an equivalent representation of \eqref{x_update_formula} that we use for convergence analyses.
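A compact way to visualize the recursion in \eqref{x_update_formula} is the following centralized sketch, which forms the Laplacians and the degree matrix from the incidence matrices and iterates the two updates; the initialization $\bbphi_0=\bbzero$ guarantees that $\bbphi_k$ stays in the column space of $\bbE_o^T$. The function names are illustrative.

\begin{verbatim}
import numpy as np

def dqm_recursion(grad_f, hess_f, E_o, E_u, x0, c, num_iters):
    """Matrix form of the DQM recursion stated in the proposition above."""
    L_u = 0.5 * E_u.T @ E_u        # unoriented Laplacian
    L_o = 0.5 * E_o.T @ E_o        # oriented Laplacian
    D = 0.5 * (L_u + L_o)          # degree matrix
    x, phi = x0.copy(), np.zeros_like(x0)
    for _ in range(num_iters):
        H = hess_f(x)
        x = np.linalg.solve(2 * c * D + H,
                            (c * L_u + H) @ x - grad_f(x) - phi)
        phi = phi + c * (L_o @ x)
    return x
\end{verbatim}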
The structure of the primal objective function Hessian $\bbH_{k}$, the degree matrix $\bbD$, and the oriented and unoriented Laplacians $\bbL_o$ and $\bbL_u$ make distributed implementation of \eqref{x_update_formula} possible. Indeed, the matrix $2c\bbD+\bbH_k$ is block diagonal and its $i$-th diagonal block is given by $2cd_{i}\bbI+\nabla^2f_{i}(\bbx_{i})$ which is locally available for node $i$. Likewise, the inverse matrix $(2c\bbD+\bbH_k)^{-1}$ is block diagonal and locally computable since the $i$-th diagonal block is $(2cd_{i}\bbI+\nabla^2f_{i}(\bbx_{i}))^{-1}$. Computations of the products $\bbL_u\bbx_{k}$ and $\bbL_o\bbx_{k+1}$ can be implemented in a decentralized manner as well, since the Laplacian matrices $\bbL_u$ and $\bbL_o$ are block neighbor sparse in the sense that the ${(i,j)}$-th block is not null if and only if nodes $i$ and $j$ are neighbors or $j=i$. Therefore, nodes can compute their local parts for the products $\bbL_u\bbx_{k}$ and $\bbL_o\bbx_{k+1}$ by exchanging information with their neighbors. By defining components of the vector $\bbphi_k$ as $\bbphi_k:=[\bbphi_{1,k},\dots,\bbphi_{n,k}]$, the update formula in \eqref{x_update_formula} for the individual agents can then be written block-wise as
\begin{align}\label{x_local_update_formula} \bbx_{i,k+1}=\ & \left(2cd_{i}\bbI+\nabla^2f_{i}(\bbx_{i,k})\right)^{-1}\Big[ cd_{i}\bbx_{i,k}+c\sum_{j\in \ccalN_i}\bbx_{j,k} \nonumber \\ &\qquad +\nabla^2f_{i}(\bbx_{i,k})\bbx_{i,k}-\nabla f_i(\bbx_{i,k})-\bbphi_{i,k} \Big], \end{align}
where $\bbx_{i,k}$ corresponds to the iterate of node $i$ at step $k$. Notice that the definition $\bbL_u:=(1/2)\bbE_u^T\bbE_u=(1/2)(\bbA_s+\bbA_d)^T(\bbA_s+\bbA_d)$ is used to simplify the $i$-th component of $c\bbL_u\bbx_{k}$ as $c\sum_{j\in \ccalN_i}(\bbx_{i,k}+\bbx_{j,k})$ which is equivalent to $cd_{i}\bbx_{i,k}+c\sum_{j\in \ccalN_i}\bbx_{j,k} $. Further, using the definition $\bbL_o=(1/2)\bbE_o^T\bbE_o=(1/2)(\bbA_s-\bbA_d)^T(\bbA_s-\bbA_d)$, the $i$-th component of the product $c\bbL_o\bbx_{k+1}$ in \eqref{phi_local_update_formula} can be simplified as $c\sum_{j\in \ccalN_i}(\bbx_{i,k+1}-\bbx_{j,k+1})$. Therefore, the second update formula in \eqref{x_update_formula} can be locally implemented at each node $i$ as
\begin{equation}\label{phi_local_update_formula} \bbphi_{i,k+1} =\bbphi_{i,k} +c \sum_{j\in \ccalN_i} \left(\bbx_{i,k+1} -\bbx_{j,k+1}\right). \end{equation}
The proposed DQM method is summarized in Algorithm \ref{algo_DQM}. The initial value for the local iterate $\bbx_{i,0}$ can be any arbitrary vector in $\reals^p$. The initial vector $\bbphi_{i,0}$ should be in column space of $\bbE_o^T$. To guarantee satisfaction of this condition, the initial vector is set as $\bbphi_{i,0}=\bb0$. At each iteration $k$, updates of the primal and dual variables in \eqref{x_local_update_formula} and \eqref{phi_local_update_formula} are computed in Steps 2 and 4, respectively. Nodes exchange their local variables $\bbx_{i,k}$ with their neighbors $j\in \ccalN_i$ in Step 3, since this information is required for the updates in Steps 2 and 4.
\begin{algorithm}[t]{\small \caption{DQM method at node $i$}\label{algo_DQM} \begin{algorithmic}[1] { \REQUIRE Initial local iterates $\bbx_{i,0}$ and $\bbphi_{i,0}$.
\FOR {$k=0,1,2,\ldots$}
\STATE Update the local iterate $\bbx_{i,k+1}$ as
\begin{align} \bbx_{i,k+1}=\ & \left(2cd_{i}\bbI+\nabla^2f_{i}(\bbx_{i,k})\right)^{-1}\bigg[ cd_{i}\bbx_{i,k}+c\sum_{j\in \ccalN_i}\bbx_{j,k} \nonumber \\ &\qquad\qquad+\nabla^2f_{i}(\bbx_{i,k})\bbx_{i,k}-\nabla f_i(\bbx_{i,k})-\bbphi_{i,k} \bigg].\nonumber \end{align}
\STATE Exchange iterates $\bbx_{i,k+1}$ with neighbors $\displaystyle{j\in \mathcal{N}_i}$.
\STATE Update local dual variable $\bbphi_{i,k+1} $ as \\
$\displaystyle{ \qquad \bbphi_{i,k+1} =\bbphi_{i,k} +c \sum_{j\in \ccalN_i} \left(\bbx_{i,k+1} -\bbx_{j,k+1}\right).} $
\ENDFOR} \end{algorithmic}}\end{algorithm}
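A per-node sketch of Steps 2 and 4 of Algorithm \ref{algo_DQM} is given below. For readability, the dictionary \texttt{x} collects the iterates of all nodes; in a truly decentralized run node $i$ would only store its own iterate and the iterates received from its neighbors in Step 3. The sketch and its function names are illustrative assumptions.

\begin{verbatim}
import numpy as np

def local_primal_update(x, phi_i, i, neighbors, grad_fi, hess_fi, c):
    """Step 2 at node i: x[j] holds the iterate of node j obtained in the
    previous exchange; neighbors is the list of indices in N_i."""
    d_i = len(neighbors)
    H_i = hess_fi(x[i])
    lhs = 2 * c * d_i * np.eye(x[i].size) + H_i
    rhs = (c * d_i * x[i] + c * sum(x[j] for j in neighbors)
           + H_i @ x[i] - grad_fi(x[i]) - phi_i)
    return np.linalg.solve(lhs, rhs)

def local_dual_update(x_new, phi_i, i, neighbors, c):
    """Step 4 at node i, after exchanging the updated iterates."""
    return phi_i + c * sum(x_new[i] - x_new[j] for j in neighbors)
\end{verbatim}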
DADMM, DQM, and DLM occupy different points in a tradeoff curve of computational cost per iteration and number of iterations needed to achieve convergence. The computational cost of each DADMM iteration is large in general because it requires solution of the optimization problem in \eqref{ADMM_x_update}. The cost of DLM iterations is minimal because the solution of \eqref{DLM_optimality_cond} can be reduced to the inversion of a block diagonal matrix; see \cite{ling2014dlm}. The cost of DQM iterations is larger than the cost of DLM iterations because they require evaluation of local Hessians as well as inversion of the matrices $2cd_{i}\bbI+\nabla^2f_{i}(\bbx_{i,k})$ to implement \eqref{x_local_update_formula}. But the cost is smaller than the cost of DADMM iterations except in cases in which solving \eqref{ADMM_x_update} is easy. In terms of the number of iterations required until convergence, DADMM requires the least and DLM the most. The foremost technical conclusions of the convergence analysis presented in the following section are: (i) convergence of DQM is strictly faster than convergence of DLM; (ii) asymptotically in the number of iterations, the per iteration improvements of DADMM and DQM are identical. It follows from these observations that DQM achieves target optimality in a number of iterations similar to DADMM but with iterations that are computationally cheaper.
\section{Convergence Analysis} \label{sec:Analysis}
In this section we show that the sequence of iterates $\bbx_k$ generated by DQM converges linearly to the optimal argument $\bbx^*=[\tbx^*;\dots;\tbx^*]$. As a byproduct of this analysis we also obtain a comparison between the linear convergence constants of DLM, DQM, and DADMM. To derive these results we make the following assumptions.
\begin{assumption}\label{network_eigenvalues} The network is such that any singular value of the unoriented incidence matrix $\bbE_u$, defined as $\sigma(\bbE_u)$, satisfies $0<\gamma_u\leq\sigma(\bbE_u)\leq\Gamma_u$ where $\gamma_u$ and $\Gamma_u$ are constants; the smallest non-zero singular value of the oriented incidence matrix $\bbE_o$ is $\gamma_o>0$. \end{assumption}
\begin{assumption}\label{convexity_assumption} The local objective functions $f_i(\bbx)$ are twice differentiable and the eigenvalues of their local Hessians $ \nabla^2 f_i(\bbx)$ are bounded within positive constants $m$ and $M$ where $0<m\leq M<\infty$ so that for all $\bbx\in \reals^p$ it holds
\begin{equation}\label{local_hessian_eigenvlaue_bounds} m\bbI\ \preceq\ \nabla^2 f_i(\bbx)\ \preceq\ M\bbI. \end{equation}
\end{assumption}
\begin{assumption}\label{Lipschitz_assumption} The local Hessians $\nabla^2 f_i(\bbx)$ are Lipschitz continuous with constant $L$ so that for all $\bbx, \hbx \in \reals^p$ it holds
\begin{equation}
\left\| \nabla^2 f_i(\bbx)-\nabla^2 f_i(\hbx) \right\| \ \leq\ L\ \| \bbx- \hbx \|. \end{equation}
\end{assumption}
The eigenvalue bounds in Assumption \ref{network_eigenvalues} are measures of network connectivity. Note that the assumption that all the singular values of the unoriented incidence matrix $\bbE_u$ are positive implies that the graph is non-bipartite. The conditions imposed by assumptions \ref{convexity_assumption} and \ref{Lipschitz_assumption} are typical in the analysis of second order methods; see, e.g., \cite[Chapter 9]{boyd}. The lower bound for the eigenvalues of the local Hessians $\nabla^2 f_i(\bbx)$ implies strong convexity of the local objective functions $f_{i}(\bbx)$ with constant $m$, while the upper bound $M$ for the eigenvalues of the local Hessians $\nabla^2 f_i(\bbx)$ is tantamount to Lipschitz continuity of local gradients $\nabla f_i(\bbx)$ with Lipschitz constant $M$. Further note that as per the definition of the aggregate objective $f(\bbx):= \sum_{i=1}^{n}f_i(\bbx_i)$, the Hessian $\bbH(\bbx):=\nabla^2f(\bbx)\in \reals^{np\times np}$ is block diagonal with $i$-th diagonal block given by the $i$-th local objective function Hessian $\nabla^2f_i(\bbx_i)$. Therefore, the bounds for the local Hessians' eigenvalues in \eqref{local_hessian_eigenvlaue_bounds} also hold for the aggregate function Hessian. Thus, we have that for any $\bbx\in \reals^{np}$ the eigenvalues of the Hessian $\bbH(\bbx)$ are uniformly bounded as
\begin{equation}\label{aggregate_hessian_eigenvlaue_bounds} m\bbI\ \preceq\ \bbH(\bbx)\ \preceq\ M\bbI. \end{equation}
Assumption \ref{Lipschitz_assumption} also implies an analogous condition for the aggregate function Hessian $\bbH(\bbx)$ as we show in the following lemma.
\begin{lemma}\label{Hessian_Lipschitz_countinous} Consider the definition of the aggregate function $f(\bbx):= \sum_{i=1}^{n}f_i(\bbx_i)$. If Assumption \ref{Lipschitz_assumption} holds true, the aggregate function Hessian $\bbH(\bbx) := \nabla^2f(\bbx)$ is Lipschitz continuous with constant $L$. I.e., for all $\bbx,\hbx\in \reals^{np}$ we can write
\begin{equation}\label{H_Lipschitz_claim}
\left\|\bbH(\bbx)-\bbH(\hbx)\right\| \leq L \| \bbx-\hbx\| . \end{equation}
\end{lemma}
\begin{myproof} See Appendix \ref{app_Hessian_Lipschitz}. \end{myproof}
DQM can be interpreted as an attempt to approximate the primal update of DADMM. Therefore, we evaluate the performance of DQM by studying a measure of the error of the approximation in the DQM update relative to the DADMM update. In the primal update of DQM, the gradient $\nabla f(\bbx_{k+1})$ is estimated by the approximation $\nabla f(\bbx_{k})+ \bbH_k(\bbx_{k+1}-\bbx_{k})$. Therefore, we can define the DQM error vector $\bbe_{k}^{DQM}$ as
\begin{equation}\label{DQM_error} \bbe_{k}^{DQM}:=\nabla f(\bbx_{k})+ \bbH_k(\bbx_{k+1}-\bbx_{k})-\nabla f(\bbx_{k+1}). \end{equation}
Based on the definition in \eqref{DQM_error}, the approximation error of DQM vanishes when the difference of two consecutive iterates $\bbx_{k+1}-\bbx_{k}$ approaches zero. This observation is formalized in the following proposition by introducing an upper bound for the error vector norm $\|\bbe_k^{DQM}\|$ in terms of the difference norm $\|\bbx_{k+1}-\bbx_{k}\|$.
\begin{proposition}\label{error_vector_proposition}
Consider the DQM method as introduced in \eqref{DQM_update_1}-\eqref{DQM_update_2} and the error $\bbe_k^{DQM}$ defined in \eqref{DQM_error}. If Assumptions \ref{initial_val_assum}-\ref{Lipschitz_assumption} hold true, the DQM error norm $\| \bbe_k^{DQM}\|$ is bounded above by
\begin{equation}\label{bound_for_DQM_error}
\left\|\bbe_{k}^{DQM} \right\| \leq \min\left\{2M\|\bbx_{k+1}-\bbx_{k} \|,\frac{L}{2}\|\bbx_{k+1}-\bbx_{k} \|^2\right\}. \end{equation}
\end{proposition}
\begin{myproof} See Appendix \ref{app_error_bound}. \end{myproof}
Proposition \ref{error_vector_proposition} asserts that the error norm $\|\bbe_{k}^{DQM} \| $ is bounded above by the minimum of a linear and a quadratic term of the iterate difference norm
$\|\bbx_{k+1}-\bbx_{k} \|$. Hence, the approximation error vanishes as the sequence of iterates $\bbx_k$ converges. We will show in Theorem \ref{DQM_convergence} that the sequence
$\|\bbx_{k+1}-\bbx_k\|$ converges to zero which implies that the error vector $\bbe_k^{DQM}$ converges to the null vector $\bb0$. Notice that after a number of iterations the term
$(L/2)\|\bbx_{k+1}-\bbx_{k}\|$ becomes smaller than $2M$, which implies that the upper bound in \eqref{bound_for_DQM_error} can be simplified as $(L/2)\|\bbx_{k+1}-\bbx_{k} \|^2$ for sufficiently large $k$. This is important because it implies that the error vector norm $ \|\bbe_{k}^{DQM} \|$ eventually becomes proportional to the quadratic term $\|\bbx_{k+1}-\bbx_k\|^2$ and, as a consequence, it vanishes faster than the term
$\|\bbx_{k+1}-\bbx_k\|$.
Utilize now the definition in \eqref{DQM_error} to rewrite the primal variable DQM update in \eqref{DQM_update_1} as \begin{align} \nabla f(\bbx_{k+1})+\bbe_{k}^{DQM}+\bbA^T\bblambda_{k}+c \bbA^{T}\! \left( \bbA\bbx_{k+1}\!+\!\bbB \bbz_{k} \right)&=\bb0.\label{joint_1} \end{align}
Comparison of \eqref{joint_1} with the optimality condition for the DADMM update in \eqref{ADMM_x_update} shows that they coincide except for the gradient approximation error term $\bbe_{k}^{DQM}$. The DQM and DADMM updates for the auxiliary variables $\bbz_k$ and the dual variables $\bblambda_k$ are identical [cf. \eqref{ADMM_z_update}, \eqref{ADMM_lambda_update}, and \eqref{DQM_update_2}], as already observed.
Further let the pair $(\bbx^*,\bbz^*)$ stand for the unique solution of \eqref{original_optimization_problem2} with uniqueness implied by the strong convexity assumption and define $\bbalpha^*$ as the unique optimal multiplier that lies in the column space of $\bbE_o$ -- see Lemma 1 of \cite{cQingRibeiroADMM14} for a proof that such optimal dual variable exists and is unique. To study convergence properties of DQM we modify the system of DQM equations defined by \eqref{DQM_update_2} and \eqref{joint_1}, which is equivalent to the system \eqref{DQM_update_1} -- \eqref{DQM_update_2}, to include terms that involve differences between current iterates and the optimal arguments $\bbx^*$, $\bbz^*$, and $\bbalpha^*$. We state this reformulation in the following lemma.
\begin{lemma}\label{equalities_for_optima_lemma} Consider the DQM method as defined by \eqref{DQM_update_1}-\eqref{DQM_update_2} and its equivalent formulation in \eqref{DQM_update_2} and \eqref{joint_1}. If Assumption \ref{initial_val_assum} holds true, then the optimal arguments $\bbx^*$, $\bbz^*$, and $\bbalpha^*$ satisfy
\begin{align}
\nabla f(\bbx_{k+1})-\nabla f(\bbx^*)+\bbe_{k}^{DQM}
+ \bbE_o^T(\bbalpha_{k+1}-\bbalpha^*) \nonumber \\
-c\bbE_u^T \left( \bbz_{k}-\bbz_{k+1} \right)
& = \bb0, \label{relation1} \\
2(\bbalpha_{k+1}-\bbalpha_k)-{c}\bbE_o(\bbx_{k+1}-\bbx^*)
&=\bb0, \label{relation2} \\
\bbE_u(\bbx_{k}-\bbx^*)-2(\bbz_k-\bbz^*)
&=\bb0.\label{relation3} \end{align}
\end{lemma}
\begin{myproof} See Appendix \ref{app_equalities_for_optima}. \end{myproof}
With the preliminary results in Lemmata \ref{Hessian_Lipschitz_countinous} and \ref{equalities_for_optima_lemma} and Proposition \ref{error_vector_proposition} we can state our convergence results. To do so, define the energy function $V: \reals^{mp}\times \reals^{mp}\to\reals$ as
\begin{equation}\label{eqn_energy_function}
V(\bbz,\bbalpha):=c\|\bbz-\bbz^{*}\|^2+\frac{1}{c}\|\bbalpha-\bbalpha^*\|^2. \end{equation}
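Expressed in code (an illustrative helper with assumed names, not part of the algorithm), this energy is simply:

\begin{verbatim}
import numpy as np

def energy(z, alpha, z_star, alpha_star, c):
    """V(z, alpha) = c*||z - z*||^2 + (1/c)*||alpha - alpha*||^2."""
    return (c * np.sum((z - z_star) ** 2)
            + np.sum((alpha - alpha_star) ** 2) / c)
\end{verbatim}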
The energy function $V(\bbz,\bbalpha)$ captures the distances of the variables $\bbz_{k}$ and $\bbalpha_{k}$ to the respective optimal arguments $\bbz^{*} $ and $\bbalpha^*$. To simplify notation we further define the variable $\bbu\in \reals^{2mp}$ and matrix $\bbC\in \reals^{2mp\times 2mp}$ as
\begin{equation}\label{C_u_definitions} \bbu:= \left[ \begin{array}{rr} \bbz \\ \bbalpha \end{array} \right], \quad \bbC:= \left[ \begin{array}{rr} c\bbI_{mp} & \bb0 \\ \bb0 & (1/c)\bbI_{mp} \end{array} \right]. \end{equation}
Based on the definitions in \eqref{C_u_definitions}, the energy function in \eqref{eqn_energy_function} can be alternatively written $V(\bbz,\bbalpha) = V(\bbu) = \|\bbu-\bbu^*\|_{\bbC}^2$, where $\bbu^*=[\bbz^*;\bbalpha^*]$. The energy sequence $V(\bbu_k)=\|\bbu_{k}-\bbu^*\|_\bbC^2$ converges to zero at a linear rate as we state in the following theorem.
\begin{thm}\label{DQM_convergence} Consider the DQM method as defined by \eqref{DQM_update_1}-\eqref{DQM_update_2}, let the constant $c$ be such that $c> 4M^2/({m\gamma_u^2})$, and define the sequence of non-negative variables $\zeta_k$ as
\begin{equation}\label{zeta_DQM}
\zeta_k:=\min\left\{\frac{L}{2}\|\bbx_{k+1}-\bbx_{k} \|, 2M\right\}. \end{equation}
Further, consider arbitrary constants $\mu,\mu'>1$ and a sequence of constants $\eta_k\in({\zeta_k}/{m},{c\gamma_u^2}/{\zeta_k})$. If Assumptions \ref{initial_val_assum}-\ref{Lipschitz_assumption} hold true, then the sequence $\|\bbu_{k}-\bbu^*\|_{\bbC}^2$ generated by DQM satisfies
\begin{equation}\label{DQM_linear_claim}
\|\bbu_{k+1}-\bbu^*\|_{\bbC}^2\ \leq\ \frac{1}{1+\delta_k} \|\bbu_{k}-\bbu^*\|_{\bbC}^2\ , \end{equation}
where the sequence of positive scalars $\delta_k$ is given by
\begin{equation}\label{DLM_DQM_delta} \delta_k=\min \Bigg\{ \frac{(\mu-1)(c\gamma_u^2-\eta_k\zeta_k)\gamma_o^{2}}
{\mu\mu'( c\Gamma_u^2\gamma_u^2+4\zeta_k^2/c(\mu'-1))} , \frac{ m-{\zeta_k}/{\eta_k} }{{ c}\Gamma_u^2/4 +\mu M^2/c\gamma_o^{2}} \Bigg\}. \end{equation}
\end{thm}
\begin{myproof} See Appendix \ref{linear_convg_app}. \end{myproof}
Notice that $\delta_k$ is a decreasing function of $\zeta_k$ and that $\zeta_k$ is bounded above by $2M$. Therefore, if we substitute $\zeta_k$ by $2M$ in \eqref{DLM_DQM_delta}, the inequality in \eqref{DQM_linear_claim} is still valid. This substitution implies that the sequence $\|\bbu_{k}-\bbu^*\|_{\bbC}^2$ converges linearly to zero with a coefficient not larger than $1/(1+\delta)$, where $\delta$ is the value of $\delta_k$ in \eqref{DLM_DQM_delta} evaluated at $\zeta_k=2M$. The more generic definition of $\zeta_k$ in \eqref{zeta_DQM} is important for the rate comparisons in Section \ref{sec:rate_comparison}. Observe that in order to guarantee that $\delta_k>0$ for all $k\geq 0 $, $\eta_k$ is chosen from the interval $({\zeta_k}/{m},{c\gamma_u^2}/{\zeta_k})$. This interval is non-empty since the constant $c$ is chosen as $c> {4M^2}/({m\gamma_u^2})\geq {\zeta_k^2}/({m\gamma_u^2})$.
The linear convergence in Theorem \ref{DQM_convergence} is for the vector $\bbu_k$ which includes the auxiliary variable $\bbz_k$ and the multipliers $\bbalpha_k$. Linear convergence of the primal variables $\bbx_k$ to the optimal argument $\bbx^*$ follows as a corollary that we establish next.
\begin{corollary}\label{convergence_of_primal}
Under the assumptions in Theorem \ref{DQM_convergence}, the sequence of squared norms $\|\bbx_k-\bbx^*\|^2$ generated by the DQM algorithm converges R-linearly to zero, i.e.,
\begin{equation}\label{r_linear_cliam}
\|\bbx_k-\bbx^*\|^2\leq \frac{4}{c\gamma_u^2}\|\bbu_k-\bbu^*\|_\bbC^2. \end{equation}
\end{corollary}
\begin{myproof}
Notice that according to \eqref{relation3} we can write $\|\bbE_u(\bbx_k-\bbx^*)\|^2=4\|\bbz_k-\bbz^*\|^2$. Since $\gamma_u$ is the smallest singular value of $\bbE_u$, we obtain that $\|\bbx_k-\bbx^*\|^2\leq({4}/{\gamma_u^2})\|\bbz_k-\bbz^*\|^2.$ Moreover, according to the relation $\|\bbu_k-\bbu^*\|_\bbC^2=c\|\bbz_k-\bbz^{*}\|^2+({1}/{c})\|\bbalpha_k-\bbalpha^*\|^2$ we can write $c\|\bbz_k-\bbz^*\|^2\leq\|\bbu_k-\bbu^*\|_\bbC^2.$ Combining these two inequalities yields the claim in \eqref{r_linear_cliam}. \end{myproof}
As per Corollary \ref{convergence_of_primal}, convergence of the sequence $\bbx_k$ to $\bbx^*$ is dominated by a linearly decreasing sequence. Notice that the sequence of squared norms $\|\bbx_k-\bbx^*\|^2$ need not be monotonically decreasing as the energy sequence $\|\bbu_{k+1}-\bbu^*\|_{\bbC}^2$ is.
\subsection{Convergence rates comparison}\label{sec:rate_comparison}
Based on the result in Corollary \ref{convergence_of_primal}, the sequence of iterates $\bbx_k$ generated by DQM converges. This observation implies that the sequence $\|\bbx_{k+1}-\bbx_k\|$ approaches zero. Hence, the sequence of scalars $\zeta_k$ defined in \eqref{zeta_DQM} converges to 0 as time passes, since $\zeta_k$ is bounded above by $(L/2)\|\bbx_{k+1}-\bbx_k\|$. Using the fact that $\lim_{k\to \infty }\zeta_k=0$ to compute the limit of $\delta_k$ in \eqref{DLM_DQM_delta}, and further letting $\mu'\rightarrow1$ in the resulting limit, we have that
\begin{align}\label{DQM_delta} \lim_{k\to \infty} \delta_k=\min &\Bigg\{ \frac{ (\mu-1)\gamma_o^2}{ {\mu\Gamma_u^2}} ,\ \frac{ m}{c\Gamma_u^2/4 +\mu M^2/c\gamma_o^{2}} \Bigg\}. \end{align}
Notice that the limit of $\delta_k$ in \eqref{DQM_delta} is identical to the constant of linear convergence for DADMM \cite{Shi2014-ADMM}. Therefore, we conclude that as time passes the constant of linear convergence for DQM approaches the one for DADMM.
To compare the convergence rates of DLM, DQM and DADMM we define the error of the gradient approximation for DLM as
\begin{equation}\label{DLM_error} \bbe_{k}^{DLM}=\nabla f(\bbx_{k})+ \rho(\bbx_{k+1}-\bbx_{k})-\nabla f(\bbx_{k+1}), \end{equation}
which is the difference between the exact gradient $\nabla f(\bbx_{k+1})$ and the DLM gradient approximation $\nabla f(\bbx_{k})+ \rho(\bbx_{k+1}-\bbx_{k})$. Similar to the result in Proposition \ref{error_vector_proposition} for DQM, we can show that the DLM error vector norm $\| \bbe_k^{DLM}\|$ is bounded by a multiple of $\|\bbx_{k+1}-\bbx_{k} \|$.
\begin{proposition}\label{error_vector_proposition_2}
Consider the DLM algorithm with updates in \eqref{ADMM_z_update}-\eqref{DLM_x_update} and the error vector $\bbe_k^{DLM}$ defined in \eqref{DLM_error}. If Assumptions \ref{initial_val_assum}-\ref{Lipschitz_assumption} hold true, the DLM error vector norm $\| \bbe_k^{DLM}\|$ satisfies
\begin{equation}\label{bound_for_DLM_error}
\left\|\bbe_{k}^{DLM} \right\| \leq (\rho+M)\|\bbx_{k+1}-\bbx_{k} \|. \end{equation}
\end{proposition}
\begin{myproof} See Appendix \ref{app_error_bound}. \end{myproof}
The result in Proposition \ref{error_vector_proposition_2} differs from Proposition \ref{error_vector_proposition} in that the DLM error $\|\bbe_{k}^{DLM}\|$ vanishes at a rate of $\|\bbx_{k+1}-\bbx_{k}\|$ whereas the DQM error $\|\bbe_{k}^{DQM}\|$
eventually becomes proportional to $\|\bbx_{k+1}-\bbx_k\|^2$. This results in DLM failing to approach the convergence behavior of DADMM as we show in the following theorem.
\begin{thm}\label{DLM_convergence}
Consider the DLM method as introduced in \eqref{ADMM_z_update}-\eqref{DLM_x_update}. Assume that the constant $c$ is chosen such that $c> (\rho+M)^2/({m\gamma_u^2})$. Moreover, consider $\mu,\mu'>1$ as arbitrary constants and $\eta$ as a positive constant chosen from the interval $((\rho+M)/{m},{c\gamma_u^2}/{(\rho+M)})$. If Assumptions \ref{initial_val_assum}-\ref{Lipschitz_assumption} hold true, then the sequence $\|\bbu_{k}-\bbu^*\|_{\bbC}^2$ generated by DLM satisfies
\begin{equation}\label{DLM_linear_claim}
\|\bbu_{k+1}-\bbu^*\|_{\bbC}^2\ \leq\ \frac{1}{1+\delta} \|\bbu_{k}-\bbu^*\|_{\bbC}^2\ , \end{equation}
where the scalar $\delta$ is given by
\begin{equation}\label{DLM_delta} \delta\!=\!\min\! \Bigg\{\! \frac{(\mu-1)(c\gamma_u^2-\eta(\rho\!+\!M))\gamma_o^{2}}
{\mu\mu'( c\Gamma_u^2\gamma_u^2\!+\!4(\rho\!+\!M)^2\!/c(\mu'\!-\!1))} , \frac{ m-{(\rho\!+\!M)}/{\eta} }{{ c}\Gamma_u^2/4 \!+\!\mu M^2\!/c\gamma_o^{2}} \!\Bigg\}. \end{equation}
\end{thm}
\begin{myproof} See Appendix \ref{linear_convg_app}. \end{myproof}
Based on the result in Theorem \ref{DLM_convergence}, the sequence
$\|\bbu_{k+1}-\bbu^*\|_{\bbC}^2$ generated by DLM converges linearly to 0. This result is similar to the convergence properties of DQM as shown in Theorem \ref{DQM_convergence}; however, the constant of linear convergence $1/(1+\delta)$ in \eqref{DLM_linear_claim} is larger than the constant $1/(1+\delta_k)$ that DQM approaches in \eqref{DQM_delta}, i.e., DLM contracts more slowly than DQM.
\section{Numerical analysis}\label{sec:simulations}
In this section we compare the performances of DLM, DQM and DADMM in solving a logistic regression problem. Consider a training set of points whose classes are known; the goal is to find the classifier that minimizes the logistic loss. Let $q$ be the number of training points available at each node of the network. Therefore, the total number of training points is $nq$. The training set $\{\bbs_{il}, y_{il}\}_{l=1}^q$ at node $i$ contains $q$ pairs of $(\bbs_{il}, y_{il})$, where $\bbs_{il}$ is a feature vector and $y_{il}\in \{-1,1\}$ is the corresponding class. The goal is to estimate the probability $\Pc{y=1\mid \bbs}$ of having label $y=1$ for a given feature vector $\bbs$ whose class is not known. Logistic regression models this probability as $\Pc{y=1\mid \bbs}=1/(1+\exp(-\bbs^T\tbx))$ for a linear classifier $\tbx$ that is computed based on the training samples. It follows from this model that the maximum log-likelihood estimate of the classifier $\tbx$ given the training samples $\{\{\bbs_{il},y_{il}\}_{l=1}^q\}_{i=1}^n$ is
\begin{align}\label{eqn_logistic_regrssion_max_likelihood}
\tbx^* \ :=\ \argmin_{\tbx\in \reals^p } \ \sum_{i=1}^n \sum_{l=1}^{q}
\log \Big[1+\exp(-y_{il}\bbs_{il}^T\tbx)\Big]. \end{align}
The optimization problem in \eqref{eqn_logistic_regrssion_max_likelihood} can be written in the form \eqref{original_optimization_problem1}. To do so, simply define the local objective functions $f_i$ as
\begin{equation}
f_i(\tbx) = \sum_{l=1}^{q} \log \Big[1+\exp(-y_{il}\bbs_{il}^T\tbx)\Big]. \end{equation}
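The local gradients and Hessians that DQM needs at each node follow directly from this expression. A possible NumPy implementation, with illustrative function names and assuming \texttt{S\_i} stacks the $q$ feature vectors of node $i$ row-wise and \texttt{y\_i} holds the corresponding labels, is:

\begin{verbatim}
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def local_logistic(S_i, y_i):
    """Return f_i, grad f_i, and the Hessian of f_i for the local
    logistic loss at one node (S_i has shape (q, p), y_i in {-1,+1}^q)."""
    def f(x):
        return np.sum(np.log1p(np.exp(-y_i * (S_i @ x))))
    def grad(x):
        return -(S_i.T @ (y_i * sigmoid(-y_i * (S_i @ x))))
    def hess(x):
        w = sigmoid(y_i * (S_i @ x)) * sigmoid(-y_i * (S_i @ x))
        return (S_i * w[:, None]).T @ S_i
    return f, grad, hess
\end{verbatim}

The Hessian above is positive semidefinite; if strong convexity in the sense of Assumption \ref{convexity_assumption} is required uniformly, a small quadratic regularizer can be added to each $f_i$.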
We define the optimal argument for decentralized optimization as $\bbx^*=[\tbx^*;\dots;\tbx^*]$. Note that the reference (ground truth) logistic classifiers $\tbx^*$ for all the experiments in this section are pre-computed with a centralized method.
\subsection{Comparison of DLM, DQM, and DADMM}\label{comparison}
\begin{figure}
\caption{Relative error ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}$ of DADMM, DQM, and DLM versus number of iterations for a random network with $n=10$ nodes and connectivity ratio $r_c=0.4$. The convergence paths of DQM and DADMM are almost identical, and both outperform DLM.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{Relative error ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}$ of DADMM, DQM, and DLM versus CPU runtime for the setting of Fig. \ref{fig2}. DQM requires the least runtime to reach a given accuracy.}
\label{fig3}
\end{figure}
We compare the convergence paths of the DLM, DQM, and DADMM algorithms for solving the logistic regression problem in \eqref{eqn_logistic_regrssion_max_likelihood}. Edges between the nodes are randomly generated with the connectivity ratio $r_c$. Observe that the connectivity ratio $r_c$ is the probability of two nodes being connected.
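For concreteness, the random networks used here can be sampled as in the following sketch; each unordered pair of nodes is connected with probability $r_c$ and both ordered edges are added, matching the symmetric-network formulation. The function name and seed handling are assumptions, and connectivity of the sampled graph should be checked separately.

\begin{verbatim}
import numpy as np

def random_network_edges(n, r_c, seed=0):
    """Sample an Erdos-Renyi style graph with connectivity ratio r_c and
    return the ordered edge list containing both (i, j) and (j, i)."""
    rng = np.random.default_rng(seed)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < r_c:
                edges.append((i, j))
                edges.append((j, i))
    return edges
\end{verbatim}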
In the first experiment we set the number of nodes as $n=10$ and the connectivity ratio as $r_{\mathrm{c}}=0.4$. Each agent holds $q=5$ samples and the dimension of feature vectors is $p=3$. Fig. \ref{fig2} illustrates the relative errors
${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}$ for DLM, DQM, and DADMM versus the number of iterations. Notice that the parameter $c$ for the three methods is optimized by $c_\mathrm{ADMM}=0.7$, $c_\mathrm{DLM}=5.5$, and $c_\mathrm{DQM}=0.7$. The convergence path of DQM is almost identical to the convergence path of DADMM. Moreover, DQM outperforms DLM by orders of magnitude. To be more precise, the relative errors
${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}$ for DQM and DADMM after $k=300$ iterations are below $10^{-9}$, while for DLM the relative error after the same number of iterations is $5\times 10^{-2}$. Conversely, achieving accuracy
${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}=10^{-3}$ for DQM and DADMM requires $91$ iterations, while DLM requires $758$ iterations to reach the same accuracy. Hence, the number of iterations that DLM requires to achieve a given accuracy is roughly $8$ times the number required by DQM.
\begin{figure}
\caption{Relative error ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}$ of DADMM, DQM, and DLM versus number of iterations for a random network of size $n=100$. The performances of DQM and DADMM are still similar. DLM is impractical in this setting. }
\label{fig222}
\end{figure}
\begin{figure}
\caption{Relative error ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}$ of DADMM, DQM, and DLM versus CPU runtime for the large scale problem with $n=100$ nodes. DQM has the best runtime performance among the three methods.}
\label{fig333}
\end{figure}
Observe that the computational complexity of DQM is lower than that of DADMM. Therefore, DQM outperforms DADMM in terms of convergence time, i.e., the number of operations required until convergence. This phenomenon is shown in Fig. \ref{fig3}, which compares the relative errors of DLM, DQM, and DADMM versus CPU runtime. According to Fig. \ref{fig3}, DADMM achieves the relative error ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}=10^{-10}$ after running for $3.6$ seconds, while DLM and DQM require $1.3$ and $0.4$ seconds, respectively, to achieve the same accuracy.
We also compare the performances of DLM, DQM, and DADMM in a larger scale logistic regression problem by setting size of network $n=100$, number of sample points at each node $q=20$, and dimension of feature vectors $p=10$. We keep the rest of the parameters as in Fig. \ref{fig2}. Convergence paths of the relative errors ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}$ for DLM, DQM, and DADMM versus the number of iterations are illustrated in Fig. \ref{fig222}. Different choices of parameter $c$ are considered for these algorithms and the best for each is chosen for the final comparison. The optimal choices of parameter $c$ for DADMM, DLM, and DQM are $c_\mathrm{ADMM}=0.68$,
$c_\mathrm{DLM}=12.3$, and $c_\mathrm{DQM}=0.68$, respectively. The results for the large scale problem in Fig. \ref{fig222} are similar to the results in Fig. \ref{fig2}. We observe that DQM performs as well as DADMM, while both outperform DLM. To be more precise, DQM and DADMM after $k=900$ iterations reach the relative error ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}=3.4\times10^{-7}$, while the relative error of DLM after the same number of iterations is $2.9\times 10^{-1}$. Conversely, achieving the accuracy ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}=0.3$ for DQM and DADMM requires $52$ iterations, while DLM requires $870$ iterations to reach the same accuracy. Hence, in this setting the number of iterations that DLM requires to achieve a specific accuracy is $16$ times more than the one for DQM. These numbers show that the advantages of DQM relative to DLM are more significant in large scale problems.
Notice that in large scale logistic regression problems we expect larger condition number for the objective function $f$. In these scenarios we expect to observe a poor performance by the DLM algorithm that only operates on first-order information. This expectation is satisfied by comparing the relative errors of DLM, DQM, and DADMM versus runtime for the large scale problem in Fig. \ref{fig333}. In this case, DLM is even worse than DADMM that has a very high computational complexity. Similar to the result in Fig. \ref{fig222}, DQM has the best performance among these three methods.
\subsection{Effect of the regularization parameter $c$}\label{dif_para}
The parameter $c$ has a significant role in the convergence of DADMM. Likewise, choosing $c$ properly is critical for the convergence of DQM. We study the effect of $c$ by tuning this parameter for a fixed network and training set. We use all the parameters in Fig. \ref{fig2} and compare the performance of the DQM algorithm for the values $c=0.2$, $c=0.4$, $c=0.8$, and $c=1$. Fig. \ref{eps:Par_Tun} illustrates the convergence paths of the DQM algorithm for these choices of the parameter $c$. The best performance among them is achieved for $c=0.8$. The comparison of the plots in Fig. \ref{eps:Par_Tun} shows that increasing or decreasing the parameter $c$ does not necessarily lead to faster convergence. We can interpret $c$ as the stepsize of DQM, whose optimal choice may vary for problems with different network sizes, network topologies, condition numbers of the objective functions, etc.
\begin{figure}
\caption{Relative error ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}$ of DQM for parameters $c=0.2$, $c=0.4$, $c=0.8$, and $c=1$ when the network is formed by $n=10$ nodes and the connectivity ratio is $r_c=0.4$. The best performance belongs to $c=0.8$.}
\label{eps:Par_Tun}
\end{figure}
\begin{figure}
\caption{Relative error ${\|\bbx_k-\bbx^*\|}/{\|\bbx_0-\bbx^*\|}$ of DQM for random graphs with different connectivity ratios $r_c$. The linear convergence of DQM accelerates as the connectivity ratio increases.}
\label{eps:Net_Top}
\end{figure}
\subsection{Effect of network topology}\label{sec:Net_Top}
According to \eqref{DLM_DQM_delta}, the constant of linear convergence for DQM depends on the bounds for the singular values of the oriented and unoriented incidence matrices $\bbE_o$ and $\bbE_u$. These bounds are related to the connectivity ratio of the network. We study how the network topology affects the convergence speed of DQM. We use different values of the connectivity ratio to generate random graphs with different numbers of edges. In this experiment we use the connectivity ratios $r_c=\{0.2,0.3,0.4,0.6\}$ to generate the networks. The rest of the parameters are the same as in Fig. \ref{fig2}. Notice that since the connectivity parameters of these graphs are different, the optimal choices of $c$ for these graphs are also different. The convergence paths of DQM for the connectivity ratios $r_c=\{0.2,0.3,0.4,0.6\}$ are shown in Fig. \ref{eps:Net_Top}. The optimal choices of the parameter $c$ for these graphs are $c_\mathrm{0.2}=0.28$, $c_\mathrm{0.3}=0.25$, $c_\mathrm{0.4}=0.31$, and $c_\mathrm{0.6}=0.28$, respectively. Fig. \ref{eps:Net_Top} shows that the linear convergence of DQM accelerates as the connectivity ratio of the graph increases.
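The random networks in this experiment can be generated as in the following sketch, which samples each possible edge independently with probability $r_c$ and builds the incidence matrices and Laplacians that appear in the DQM updates. The helper name and the sampling scheme are our own assumptions, not the authors' code.
\begin{verbatim}
# Sketch: random graph with connectivity ratio r_c, oriented/unoriented
# incidence matrices E_o, E_u and Laplacians L_o = E_o^T E_o / 2,
# L_u = E_u^T E_u / 2 (written for p = 1). Connectedness of the sampled
# graph should be checked separately, since the analysis assumes it.
import numpy as np

def random_graph(n=10, r_c=0.4, seed=0):
    rng = np.random.default_rng(seed)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < r_c]
    m = len(edges)
    E_o = np.zeros((m, n))   # edge e = (i, j): +1 at i, -1 at j
    E_u = np.zeros((m, n))   # edge e = (i, j): +1 at i, +1 at j
    for e, (i, j) in enumerate(edges):
        E_o[e, i], E_o[e, j] = 1.0, -1.0
        E_u[e, i], E_u[e, j] = 1.0, 1.0
    L_o, L_u = 0.5 * E_o.T @ E_o, 0.5 * E_u.T @ E_u
    return E_o, E_u, L_o, L_u
\end{verbatim}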
\section{Conclusions} A decentralized quadratically approximated version of the alternating direction method of multipliers (DQM) is proposed for solving decentralized optimization problems in which components of the objective function are available at different nodes of a network. DQM minimizes a quadratic approximation of the convex problem that DADMM solves exactly at each step, and hence reduces the computational complexity of DADMM. Under mild assumptions, linear convergence of the sequence generated by DQM is proven. Moreover, the constant of linear convergence for DQM approaches that of DADMM asymptotically. Numerical results for a logistic regression problem verify the analytical results that the convergence paths of DQM and DADMM are similar for large iteration indices, while the computational complexity of DQM is significantly smaller than that of DADMM.
\begin{appendices}
\section{Proof of Lemma \ref{lag_var_lemma}\label{app_lag_var_lemma}}
According to the update for the Lagrange multiplier $\bblambda$ in \eqref{DQM_update_2}, we can substitute $\bblambda_k$ by $\bblambda_{k+1}-c\left( \bbA\bbx_{k+1}+\bbB\bbz_{k+1} \right)$. Applying this substitution into the first equation of \eqref{DQM_update_2} leads to
\begin{equation}\label{shajarian0} \bbB^T\bblambda_{k+1}=\bb0. \end{equation}
Observing the definitions $\bbB=[-\bbI_{mp};-\bbI_{mp}]$ and $\bblambda=[\bbalpha;\bbbeta]$, and the result in \eqref{shajarian0}, we obtain $\bbalpha_{k+1}=-\bbbeta_{k+1}$ for $k\geq0$. Considering the initial condition $\bbalpha_{0}=-\bbbeta_{0}$, we obtain that $\bbalpha_{k}=-\bbbeta_{k}$ for $k\geq0$ which follows the first claim in Lemma \ref{lag_var_lemma}.
Based on the definitions $\bbA=[\bbA_s;\bbA_d]$, $\bbB=[-\bbI_{mp};-\bbI_{mp}]$, and $\bblambda=[\bbalpha;\bbbeta]$, we can split the update for the Lagrange multiplier $\bblambda$ in \eqref{ADMM_lambda_update} as
\begin{align} \bbalpha_{k+1}&=\bbalpha_{k}+c[\bbA_s\bbx_{k+1}-\bbz_{k+1}] \label{sorry1},\\ \bbbeta_{k+1}&=\bbbeta_{k}+c[\bbA_d\bbx_{k+1}-\bbz_{k+1}] \label{sorry2}. \end{align}
Observing the result that $\bbalpha_{k}=-\bbbeta_k$ for $k\geq0$, summing up the equations in \eqref{sorry1} and \eqref{sorry2} yields
\begin{equation}\label{sorry3} (\bbA_s+\bbA_d)\bbx_{k+1}=2\bbz_{k+1}. \end{equation}
Considering the definition of the unoriented incidence matrix $\bbE_u=\bbA_s+\bbA_d$, we obtain that $\bbE_u \bbx_k=2\bbz_k$ holds for $k>0$. According to the initial condition $\bbE_u \bbx_0=2\bbz_0$, we can conclude that the relation $\bbE_u \bbx_k=2\bbz_k$ holds for $k\geq 0$.
Subtract the update for $\bbbeta_k$ in \eqref{sorry2} from the update for $\bbalpha_k$ in \eqref{sorry1} and consider the relation $\bbbeta_k=-\bbalpha_k$ to obtain
\begin{equation}\label{sorry4} \bbalpha_{k+1}=\bbalpha_{k}+\frac{c}{2}(\bbA_s-\bbA_d)\bbx_{k+1}. \end{equation}
Substituting $\bbA_s-\bbA_d$ in \eqref{sorry4} by $\bbE_o$ implies that
\begin{equation}\label{update_alpha} \bbalpha_{k+1}=\bbalpha_{k}+\frac{c}{2}\bbE_o\bbx_{k+1}. \end{equation}
Hence, if $\bbalpha_k$ lies in the column space of matrix $\bbE_o$, then $\bbalpha_{k+1}$ also lies in the column space of $\bbE_o$. According to the third condition of Assumption \ref{initial_val_assum}, $\bbalpha_0$ satisfies this condition, therefore $\bbalpha_k$ lies in the column space of matrix $\bbE_o$ for all $k\geq0$.
\section{Proof of Proposition \ref{update_system_prop}\label{app_update_system}}
The update for the multiplier $\bblambda$ in \eqref{ADMM_lambda_update} implies that we can substitute $\bblambda_k$ by $\bblambda_{k+1}-c( \bbA\bbx_{k+1}+\bbB\bbz_{k+1} )$ to simplify \eqref{DQM_update_1} as
\begin{equation}\label{javad} \nabla f(\bbx_{k})+ \bbH_{k}(\bbx_{k+1}-\bbx_{k})+\bbA^T\bblambda_{k+1}+c \bbA^{T} \bbB\left( \bbz_{k}-\bbz_{k+1} \right)=\bb0. \end{equation}
Considering the first result of Lemma \ref{lag_var_lemma} that $\bbalpha_k=-\bbbeta_k$ for $k\geq0$ in association with the definition $\bbA=[\bbA_s;\bbA_d]$ implies that the product $\bbA^T\bblambda_{k+1}$ is equivalent to
\begin{equation}\label{chand} \bbA^T\bblambda_{k+1}=\bbA_s^T\bbalpha_{k+1}+\bbA_d^T\bbbeta_{k+1}=(\bbA_s-\bbA_d)^T\bbalpha_{k+1}. \end{equation}
According to the definition $\bbE_o:=\bbA_s-\bbA_d$, the right hand side of \eqref{chand} can be simplified as
\begin{equation}\label{chand2} \bbA^T\bblambda_{k+1}=\bbE_o^T\bbalpha_{k+1}. \end{equation}
Based on the structures of the matrices $\bbA$ and $\bbB$, and the definition $\bbE_u:=\bbA_s+\bbA_d$, we can simplify $\bbA^T\bbB$ as \begin{equation}\label{chand3} \bbA^T\bbB=-\bbA_s^T-\bbA_d^T=-\bbE_u^T. \end{equation}
Substituting the results in \eqref{chand2} and \eqref{chand3} into \eqref{javad} leads to
\begin{equation}\label{javad2} \nabla f(\bbx_{k})+ \bbH_{k}(\bbx_{k+1}-\bbx_{k})+\bbE_o^T\bbalpha_{k+1}+c \bbE_u^{T}\left( \bbz_{k+1}-\bbz_{k} \right)=\bb0. \end{equation}
The second result in Lemma \ref{lag_var_lemma} states that $\bbz_k=\bbE_u\bbx_{k}/2$. Multiplying both sides of this equality by $\bbE_u^{T}$ from left we obtain that $\bbE_u^{T} \bbz_{k}=\bbE_u^{T}\bbE_u \bbx_{k}/2$ for $k\geq0$. Observing the definition of the unoriented Laplacian $\bbL_u:=\bbE_u^{T}\bbE_u/2$, we obtain that the product $\bbE_u^{T} \bbz_{k}$ is equal to $\bbL_u\bbx_k$ for $k\geq0$. Therefore, in \eqref{javad2} we can substitute $ \bbE_u^{T}\left( \bbz_{k+1}-\bbz_{k} \right)$ by $\bbL_u(\bbx_{k+1}-\bbx_k)$ and write
\begin{equation}\label{javad3} \nabla f(\bbx_{k})+ \left(\bbH_{k}+c\bbL_u\right)(\bbx_{k+1}-\bbx_{k})+\bbE_o^T\bbalpha_{k+1}=\bb0. \end{equation}
Observe that the new variables $\bbphi_k$ are defined as $\bbphi_k:=\bbE_o^T\bbalpha_{k}$. Multiplying both sides of \eqref{update_alpha} by $\bbE_o^T$ from the left hand side and considering the definition of oriented Laplacian $\bbL_o=\bbE_o^T\bbE_o/2$ follows the update rule of $\bbphi_{k}$ in \eqref{x_update_formula}, i.e.,
\begin{equation}\label{update_phi} \bbphi_{k+1}=\bbphi_{k}+c\bbL_o\bbx_{k+1}. \end{equation}
According to the definition $\bbphi_k=\bbE_o^T\bbalpha_{k}$ and the update formula in \eqref{update_phi}, we can conclude that $\bbE_o^T\bbalpha_{k+1}=\bbphi_{k+1}=\bbphi_{k}+c\bbL_o\bbx_{k+1}$. Substituting $\bbE_o^T\bbalpha_{k+1}$ by $\bbphi_{k}+c\bbL_o\bbx_{k+1}$ in \eqref{javad3} yields
\begin{equation}\label{javad4} \nabla f(\bbx_{k})+ \left(\bbH_{k}+c\bbL_u\right)(\bbx_{k+1}-\bbx_{k})+\bbphi_{k}+c\bbL_o\bbx_{k+1}=\bb0. \end{equation}
Observing the definition $\bbD=(\bbL_u+\bbL_o)/2$ we rewrite \eqref{javad4} as
\begin{equation}\label{javad5}
\left(\bbH_{k}+2c\bbD\right)\bbx_{k+1}=\left(\bbH_{k}+c\bbL_u\right)\bbx_k-\nabla f(\bbx_{k})-\bbphi_{k}. \end{equation}
Multiplying both sides of \eqref{javad5} by $ \left(\bbH_{k}+2c\bbD\right)^{-1}$ from the left hand side yields the first update in \eqref{x_update_formula}.
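The resulting recursion can be summarized, for the scalar case $p=1$, by the following sketch; it is our own illustrative implementation of \eqref{x_update_formula}, not code from the paper, and it assumes oracles for the aggregate gradient and the (block-)diagonal Hessian.
\begin{verbatim}
# Sketch of the DQM recursion (p = 1):
#   x_{k+1}   = (H_k + 2cD)^{-1} [ (H_k + c L_u) x_k - grad f(x_k) - phi_k ]
#   phi_{k+1} = phi_k + c L_o x_{k+1}
import numpy as np

def dqm(grad, hess, Lo, Lu, x0, c=0.7, iters=500):
    D = 0.5 * (Lo + Lu)
    x, phi = x0.copy(), np.zeros_like(x0)  # phi_0 = 0 is consistent with the
                                           # initialization assumption
    for _ in range(iters):
        H = hess(x)                        # diagonal aggregate Hessian when p = 1
        x = np.linalg.solve(H + 2 * c * D, (H + c * Lu) @ x - grad(x) - phi)
        phi = phi + c * Lo @ x
    return x
\end{verbatim}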
\section{Proof of Lemma \ref{Hessian_Lipschitz_countinous}}\label{app_Hessian_Lipschitz}
Consider two arbitrary vectors $\bbx:=[\bbx_1;\dots;\bbx_n] \in \reals^{np}$ and $\hbx:=[\hbx_1;\dots;\hbx_n] \in \reals^{np}$. Since the aggregate function Hessian is block diagonal where the $i$-th diagonal block is given by $\nabla^2f_{i}(\bbx_i)$, we obtain that the difference of Hessians $\bbH(\bbx)-\bbH(\hbx)$ is also block diagonal where the $i$-th diagonal block $\bbH(\bbx)_{ii}-\bbH(\hbx)_{ii}$ is
\begin{equation}\label{matrix_difference} \bbH(\bbx)_{ii}-\bbH(\hbx)_{ii}= \nabla^2 f_{i}(\bbx_{i})-\nabla^2 f_{i}(\hbx_{i}) . \end{equation}
Consider an arbitrary vector $\bbv\in \reals^{np}$ and partition it into $n$ blocks $\bbv_{i}\in \reals^p$ of $p$ components each, i.e., $\bbv:=[\bbv_1;\dots;\bbv_n]$. Observing the relation for the difference $\bbH(\bbx)-\bbH(\hbx)$ in \eqref{matrix_difference}, the symmetry of the matrices
$\bbH(\bbx)$ and $\bbH(\hbx)$, and the definition of the spectral norm of a matrix, $\|\bbA\|=\sqrt{\lambda_{\max}(\bbA^T\bbA)}$, we obtain that the squared difference norm
$\|\bbH(\bbx)-\bbH(\hbx)\|^2$ can be written as
\begin{align}\label{inner_product_differnece}
\left\|\bbH(\bbx)-\bbH(\hbx)\right\|^2 \!
&= \max_{\bbv} \frac{\bbv^T[\bbH(\bbx)-\bbH(\hbx)]^2\bbv}{\|\bbv\|^2} \\
&= \max_{\bbv} \frac{\sum_{i=1}^n \bbv_{i}^T \left[\nabla^2 f_{i}(\bbx_{i})-\nabla^2 f_{i}(\hbx_{i})\right]^2 \bbv_{i}} {\|\bbv\|^2}\nonumber \end{align}
Using the Cauchy-Schwarz inequality we can write
\begin{equation}\label{couchy_result} \bbv_{i}^T\! \left[\nabla^2 f_{i}(\bbx_{i})\!-\!\nabla^2 f_{i}(\hbx_{i})\right]^2\! \bbv_{i} \leq
\left\|\nabla^2 f_{i}(\bbx_{i})\!-\!\nabla^2 f_{i}(\hbx_{i})\right\|^2 \! \|\bbv_{i}\|^2 \end{equation}
Substituting the upper bound in \eqref{couchy_result} into \eqref{inner_product_differnece} implies that the squared norm $\left\|\bbH(\bbx)-\bbH(\hbx)\right\|^2$ is bounded above as
\begin{equation}\label{norm_difference_2}
\left\|\bbH(\bbx)-\bbH(\hbx)\right\|^2\leq
\max_{\bbv} \frac{\sum_{i=1}^n\left\|\nabla^2 f_{i}(\bbx_{i})-\nabla^2 f_{i}(\hbx_{i})\right\|^2 \|\bbv_{i}\|^2}{\|\bbv\|^2}. \end{equation}
Observe that Assumption 3 states that the Hessians of the local objective functions, $\nabla^2 f_{i}(\bbx_{i})$, are Lipschitz continuous with constant $L$, i.e., $\|\nabla^2 f_{i}(\bbx_{i})-\nabla^2 f_{i}(\hbx_{i})\|\leq L \|\bbx_{i}-\hbx_{i}\|$. Using this inequality, we can replace $\|\nabla^2 f_{i}(\bbx_{i})-\nabla^2 f_{i}(\hbx_{i})\|$ in the upper bound of \eqref{norm_difference_2} by $L \|\bbx_{i}-\hbx_{i}\|$, which yields
\begin{equation}\label{alef}
\left\|\bbH(\bbx)-\bbH(\hbx)\right\|^2\leq
\max_{\bbv} \frac{L^2\sum_{i=1}^n\left\|\bbx_{i}-\hbx_{i}\right\|^2 \|\bbv_{i}\|^2}{\sum_{i=1}^n\|\bbv_{i}\|^2}. \end{equation}
Note that for any sequences of scalars such as $a_{i}$ and $b_i$, the inequality $\sum_{i=1}^n a_{i}^2b_i^2\leq (\sum_{i=1}^n a_{i}^2)(\sum_{i=1}^n b_{i}^2)$ holds. If we divide both sides of this relation by $\sum_{i=1}^n b_{i}^2$ and set $a_i=\|\bbx_{i}-\hbx_{i}\|$ and $b_i=\|\bbv_{i}\|$, we obtain
\begin{equation}\label{alef_2}
\frac{\sum_{i=1}^n\left\|\bbx_{i}-\hbx_{i}\right\|^2
\|\bbv_{i}\|^2}{\sum_{i=1}^n\|\bbv_{i}\|^2}
\leq \sum_{i=1}^n\left\|\bbx_{i}-\hbx_{i}\right\|^2. \end{equation}
Combining the two inequalities in \eqref{alef} and \eqref{alef_2} leads to
\begin{equation}\label{alef_3}
\left\|\bbH(\bbx)-\bbH(\hbx)\right\|^2
\leq \max_{\bbv} L^2 \sum_{i=1}^n\left\|\bbx_{i}-\hbx_{i}\right\|^2 . \end{equation}
Since the right hand side of \eqref{alef_3} does not depend on $\bbv$ we can eliminate the maximization with respect to $\bbv$. Further, note that according to the structure of vectors $\bbx$ and $\hbx$, we can write
$\left\|\bbx-\hbx\right\|^2=\sum_{i=1}^n\left\|\bbx_{i}-\hbx_{i}\right\|^2$. These two observations in association with \eqref{alef_3} imply that
\begin{equation}\label{rangarang}
\left\|\bbH(\bbx)-\bbH(\hbx)\right\|^2 \leq L^2
\left\|\bbx-\hbx\right\|^2. \end{equation}
Taking the square root of both sides of \eqref{rangarang} yields \eqref{H_Lipschitz_claim}.
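As a quick sanity check of this bound, one can verify it numerically on random block-diagonal Hessians; the sketch below is our own illustration and uses the fact, exploited in the proof, that the spectral norm of a block-diagonal matrix equals the largest spectral norm of its blocks.
\begin{verbatim}
# Sketch: numerically check ||H(x) - H(xh)|| <= L ||x - xh|| for a
# block-diagonal aggregate Hessian whose blocks are L-Lipschitz.
import numpy as np

def check_block_lipschitz(hessians, x_blocks, xh_blocks, L):
    # hessians[i](v): p x p Hessian of the local function f_i at v
    lhs = max(np.linalg.norm(Hi(xi) - Hi(xhi), 2)
              for Hi, xi, xhi in zip(hessians, x_blocks, xh_blocks))
    rhs = L * np.sqrt(sum(np.linalg.norm(xi - xhi) ** 2
                          for xi, xhi in zip(x_blocks, xh_blocks)))
    return lhs <= rhs + 1e-12
\end{verbatim}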
\section{Proofs of Propositions \ref{error_vector_proposition} and \ref{error_vector_proposition_2}}\label{app_error_bound}
The fundamental theorem of calculus implies that the difference of gradients $\nabla f(\bbx_{k+1})-\nabla f(\bbx_k)$ can be written as
\begin{equation}\label{grad_dif} \nabla f(\bbx_{k+1})-\nabla f(\bbx_k)=\int_{0}^1 \bbH(s\bbx_{k+1}+(1-s)\bbx_{k})(\bbx_{k+1}-\bbx_{k})\ ds. \end{equation}
Taking norms of both sides of \eqref{grad_dif} and using the fact that the norm of an integral is bounded above by the integral of the norm, we obtain \begin{equation}\label{grad_dif_norm}
\|\nabla f(\bbx_{k+1})-\nabla f(\bbx_k)\|\!\leq\!\!\int_{0}^1\!\!\!\! \|\bbH(s\bbx_{k+1}+(1-s)\bbx_{k})(\bbx_{k+1}-\bbx_{k})\| ds. \end{equation}
The upper bound $M$ for the eigenvalues of the Hessians as in \eqref{aggregate_hessian_eigenvlaue_bounds}, implies that $\| \bbH\left(s\bbx+(1-s)\hbx\right)(\bbx-\hbx)\|\leq M\|\bbx-\hbx\|$. Substituting this upper bound into \eqref{grad_dif_norm} leads to
\begin{equation}\label{grad_dif_norm_2}
\|\nabla f(\bbx_{k+1})-\nabla f(\bbx_k)\|\leq M \|\bbx_{k+1}-\bbx_{k}\|. \end{equation}
The error vector norm $\|\bbe_{k}^{DLM}\|$ in \eqref{DLM_error} is bounded above as
\begin{equation}\label{grad_dif_norm_dlm}
\|\bbe_{k}^{DLM}\|\leq \|\nabla f(\bbx_{k+1})-\nabla f(\bbx_k)\|+\rho\|\bbx_{k+1}-\bbx_{k}\|.
\end{equation}
By substituting the upper bound for $\|\nabla f(\bbx_{k+1})-\nabla f(\bbx_k)\|$ in \eqref{grad_dif_norm_2} into \eqref{grad_dif_norm_dlm}, the claim in \eqref{bound_for_DLM_error} follows.
To prove \eqref{bound_for_DQM_error}, we first show that $ \|\bbe_{k}^{DQM}\|\leq2M\|\bbx_{k+1}-\bbx_{k}\|$ holds. Observe that the norm of the error vector $\bbe_{k}^{DQM}$ defined in \eqref{DQM_error} can be upper bounded using the triangle inequality as
\begin{equation}\label{grad_dif_norm_dqm}
\|\bbe_{k}^{DQM}\|\leq \|\nabla f(\bbx_{k+1})-\nabla f(\bbx_k)\|+\|\bbH_k(\bbx_{k+1}-\bbx_{k})\|.
\end{equation} Based on the Cauchy-Schwarz inequality and the upper bound $M$ for the eigenvalues of Hessians as in
\eqref{aggregate_hessian_eigenvlaue_bounds}, we obtain $\|
\bbH_k(\bbx_{k+1}-\bbx_k)\|\leq M\|\bbx_{k+1}-\bbx_k\|$. Further, as mentioned in \eqref{grad_dif_norm_2} the difference of gradients $\|\nabla f(\bbx_{k+1})-\nabla f(\bbx_k)\|$ is upper bounded by $M \|\bbx_{k+1}-\bbx_{k}\|$. Substituting these upper bounds for the terms in the right hand side of \eqref{grad_dif_norm_dqm} yields
\begin{equation}\label{bound_1_dif_grad_dqm}
\|\bbe_{k}^{DQM}\|\leq2M\|\bbx_{k+1}-\bbx_{k}\|.
\end{equation}
The next step is to show that $ \|\bbe_{k}^{DQM}\|\leq(L/2)\|\bbx_{k+1}-\bbx_{k}\|^2$. Adding and subtracting the integral $\int_{0}^1\bbH(\bbx_k)(\bbx_{k+1}-\bbx_{k})\ ds$ to the right hand side of \eqref{grad_dif} results in
\begin{align}\label{grad_dif_new} &\nabla f(\bbx_{k+1})-\nabla f(\bbx_k) =\int_{0}^1\bbH(\bbx_k)(\bbx_{k+1}-\bbx_{k})\ ds \nonumber \\ &+ \int_{0}^1 \left[\bbH(s\bbx_{k+1}+(1-s)\bbx_{k})-\bbH(\bbx_k)\right](\bbx_{k+1}-\bbx_{k})\ ds. \end{align} First observe that the integral $\int_{0}^1\bbH(\bbx_k)(\bbx_{k+1}-\bbx_{k})\ ds$ can be simplified as $\bbH(\bbx_k)(\bbx_{k+1}-\bbx_{k})$. Observing this simplification and regrouping the terms yield
\begin{align}\label{grad_dif_new2} &\nabla f(\bbx_{k+1})-\nabla f(\bbx_k) -\bbH(\bbx_k)(\bbx_{k+1}-\bbx_{k})= \nonumber \\ & \int_{0}^1 \left[\bbH(s\bbx_{k+1}+(1-s)\bbx_{k})-\bbH(\bbx_k)\right](\bbx_{k+1}-\bbx_{k})\ ds. \end{align}
Taking norms of both sides of \eqref{grad_dif_new2}, using the fact that the norm of an integral is bounded above by the integral of the norm, and applying the Cauchy-Schwarz inequality lead to
\begin{align}\label{grad_dif_new3}
&\|\nabla f(\bbx_{k+1})-\nabla f(\bbx_k) -\bbH(\bbx_k)(\bbx_{k+1}-\bbx_{k})\|\leq \\
& \int_{0}^1 \left\|\bbH(s\bbx_{k+1}+(1-s)\bbx_{k})-\bbH(\bbx_k)\right\|\|\bbx_{k+1}-\bbx_{k}\| ds. \nonumber \end{align}
Lipschitz continuity of the Hessian as in \eqref{H_Lipschitz_claim} implies that $\left\|\bbH(s\bbx_{k+1}+(1-s)\bbx_{k})-\bbH(\bbx_k)\right\| \leq sL\|\bbx_{k+1}-\bbx_{k}\|$. By substituting this upper bound into the integral in \eqref{grad_dif_new3} and substituting the left hand side of \eqref{grad_dif_new3} by $\|\bbe_{k}^{DQM}\|$ we obtain
\begin{equation}\label{grad_dif_new4}
\|\bbe_{k}^{DQM}\|\leq \int_{0}^1 sL\|\bbx_{k+1}-\bbx_{k}\|^2 ds. \end{equation}
Simplification of the integral in \eqref{grad_dif_new4} follows
\begin{equation}\label{grad_dif_new44}
\|\bbe_{k}^{DQM}\|\leq \frac{L}{2}\|\bbx_{k+1}-\bbx_{k}\|^2. \end{equation}
The results in \eqref{bound_1_dif_grad_dqm} and \eqref{grad_dif_new44} follow the claim in \eqref{bound_for_DQM_error}.
\section{Proof of Lemma \ref{equalities_for_optima_lemma}\label{app_equalities_for_optima}}
In this section we first introduce an equivalent version of Lemma \ref{equalities_for_optima_lemma} for the DLM algorithm. Then, we show the validity of both lemmata in a general proof.
\begin{lemma}\label{equalities_for_optima_lemma_2} Consider DLM as defined by \eqref{ADMM_z_update}-\eqref{DLM_x_update}. If Assumption \ref{initial_val_assum} holds true, then the optimal arguments $\bbx^*$, $\bbz^*$, and $\bbalpha^*$ satisfy
\begin{align}
\nabla f(\bbx_{k+1})-\nabla f(\bbx^*)+\bbe_{k}^{DLM}
+ \bbE_o^T(\bbalpha_{k+1}-\bbalpha^*) \nonumber \\
-c\bbE_u^T \left( \bbz_{k}-\bbz_{k+1} \right)
& = \bb0, \label{relation111} \\
2(\bbalpha_{k+1}-\bbalpha_k)-{c}\bbE_o(\bbx_{k+1}-\bbx^*)
&=\bb0, \label{relation222} \\
\bbE_u(\bbx_{k}-\bbx^*)-2(\bbz_k-\bbz^*)
&=\bb0.\label{relation333} \end{align}
\end{lemma}
Notice that the claims in Lemmata \ref{equalities_for_optima_lemma} and \ref{equalities_for_optima_lemma_2} are identical except in the error term of the first equalities. To provide a general framework to prove the claim in these lemmata we introduce $\bbe_k$ as the general error vector. By replacing $\bbe_k$ with $\bbe_k^{DQM}$ we obtain the result of DQM in Lemma \ref{equalities_for_optima_lemma} and by setting $\bbe_k=\bbe_k^{DLM}$ the result in Lemma \ref{equalities_for_optima_lemma_2} follows. We start with the following Lemma that captures the KKT conditions of optimization problem \eqref{original_optimization_problem4}.
\begin{lemma}\label{KKT}
Consider the optimization problem \eqref{original_optimization_problem4}. The optimal Lagrange multiplier $\bbalpha^*$, primal variable $\bbx^*$ and auxiliary variable $\bbz^*$ satisfy the following system of equations \begin{align}\label{KKT_cliam} \nabla f(\bbx^*)+\bbE_{o}^T\bbalpha^*=\bb0,\quad \bbE_{o}\bbx^*=\bb0,\quad \bbE_u\bbx^*=2\bbz^*. \end{align} \end{lemma}
\begin{myproof} First observe that the KKT conditions of the decentralized optimization problem in \eqref{original_optimization_problem4} are given by
\begin{equation}\label{KKT1} \nabla f(\bbx^*)+\bbA^T{\bblambda^*}=\bb0,\quad \bbB^T{\bblambda^*}=\bb0,\quad \bbA\bbx^*+\bbB\bbz^*=\bb0. \end{equation}
Based on the definitions of the matrix $\bbB=[-\bbI_{mp};-\bbI_{mp}]$ and the optimal Lagrange multiplier $\bblambda^*:=[\bbalpha^*;\bbbeta^*]$, we obtain that $\bbB^T{\bblambda^*}=\bb0$ in \eqref{KKT1} is equivalent to $\bbalpha^*=-\bbbeta^*$. Considering this result and the definition $\bbA=[\bbA_s;\bbA_d]$, we obtain
\begin{equation}\label{shepesh} \bbA^T{\bblambda^*}=\bbA_s^T\bbalpha^*+\bbA_d^T\bbbeta^*=(\bbA_s-\bbA_d)^T\bbalpha^*. \end{equation} The definition $\bbE_o:=\bbA_s-\bbA_d$ implies that the right hand side of \eqref{shepesh} can be simplified as $\bbE_o^T\bbalpha^*$ which shows $\bbA^T{\bblambda^*}=\bbE_o^T\bbalpha^*$. Substituting $\bbA^T{\bblambda^*}$ by $\bbE_o^T\bbalpha^*$ into the first equality in \eqref{KKT1} follows the first claim in \eqref{KKT_cliam}.
Decompose the KKT condition $\bbA\bbx^*+\bbB\bbz^*=\bb0$ in \eqref{KKT1} based on the definitions of $\bbA$ and $\bbB$ as \begin{equation}\label{fast1} \bbA_s\bbx^*-\bbz^*=\bb0,\quad \bbA_d\bbx^*-\bbz^*=\bb0. \end{equation} Subtracting the second equality in \eqref{fast1} from the first implies that $(\bbA_s-\bbA_d)\bbx^*=\bb0$, which in view of the definition $\bbE_o=\bbA_s-\bbA_d$ yields the second equation in \eqref{KKT_cliam}. Summing up the equalities in \eqref{fast1} yields $(\bbA_s+\bbA_d)\bbx^*=2\bbz^*$. This observation in association with the definition $\bbE_u=\bbA_s+\bbA_d$ yields the third equation in \eqref{KKT_cliam}. \end{myproof}
\textbf{Proofs of Lemmata \ref{equalities_for_optima_lemma} and \ref{equalities_for_optima_lemma_2}}: First note that the results in Lemma \ref{lag_var_lemma} are also valid for DLM \cite{ling2014dlm}. Now, consider the first order optimality condition for primal updates of DQM and DLM in \eqref{DQM_update_1} and \eqref{DLM_optimality_cond}, respectively. Further, recall the definitions of error vectors $\bbe_k^{DQM}$ and $\bbe_k^{DLM}$ in \eqref{DQM_error} and \eqref{DLM_error}, respectively. Combining these observations we obtain that
\begin{equation}\label{whatever} \nabla f(\bbx_{k+1})+\bbe_{k}+\bbA^T\bblambda_{k}+c \bbA^{T} \left( \bbA\bbx_{k+1}+\bbB \bbz_{k} \right)=\bb0. \end{equation}
Notice that by setting $\bbe_{k}=\bbe_{k}^{DQM}$ we obtain the update for the primal variable of DQM; likewise, setting $\bbe_{k}=\bbe_{k}^{DLM}$ yields the update of DLM.
Observe that the relation $\bblambda_{k}=\bblambda_{k+1}-c( \bbA\bbx_{k+1}+\bbB\bbz_{k+1} ) $ holds for both DLM and DQM according to the update formulas for the Lagrange multiplier in \eqref{ADMM_lambda_update} and \eqref{DQM_update_2}. Substituting $\bblambda_k$ by $\bblambda_{k+1}-c( \bbA\bbx_{k+1}+\bbB\bbz_{k+1} ) $ in \eqref{whatever} yields
\begin{equation}\label{whatever2} \nabla f(\bbx_{k+1})+\bbe_{k}+\bbA^T\bblambda_{k+1}+c \bbA^{T} \bbB\left( \bbz_{k}-\bbz_{k+1} \right)=\bb0 \end{equation}
Based on the result in Lemma \ref{lag_var_lemma}, the components of the Lagrange multiplier $\bblambda=[\bbalpha;\bbbeta]$ satisfy $\bbalpha_{k+1}=-\bbbeta_{k+1}$. Hence, the product $\bbA^T{\bblambda_{k+1}}$ can be simplified as $\bbA_s^T\bbalpha_{k+1}-\bbA_d^T\bbalpha_{k+1}=\bbE_o^T\bbalpha_{k+1}$ considering the definition that $\bbE_o=\bbA_s-\bbA_d$. Furthermore, note that according to the definitions we have that $\bbA=[\bbA_s;\bbA_d]$ and $\bbB=[-\bbI;-\bbI]$ which implies that $\bbA^T\bbB=-(\bbA_s+\bbA_d)^T=-\bbE_u^T$. By making these substitutions into \eqref{whatever2} we can write
\begin{align}\label{phone} \nabla f(\bbx_{k+1})+\bbe_{k}+\bbE_o^T\bbalpha_{k+1}-c\bbE_u^T \left( \bbz_{k}-\bbz_{k+1} \right)&=\bb0. \end{align}
The first result in Lemma \ref{KKT} is equivalent to $\nabla f(\bbx^*)+\bbE_{o}^T\bbalpha^*=\bb0$. Subtracting both sides of this equation from the relation in \eqref{phone} follows the first claim of Lemmata \ref{equalities_for_optima_lemma} and \ref{equalities_for_optima_lemma_2}.
We proceed to prove the second and third claims in Lemmata \ref{equalities_for_optima_lemma} and \ref{equalities_for_optima_lemma_2}. The update formula for $\bbalpha_k$ in \eqref{update_alpha} and the second result in Lemma \ref{KKT} that $\bbE_o\bbx^*=\bb0$ imply that the second claims of Lemmata \ref{equalities_for_optima_lemma} and \ref{equalities_for_optima_lemma_2} are valid. Further, the result in Lemma \ref{lag_var_lemma} guarantees that $\bbE_u\bbx_{k}=2\bbz_k$. This result in conjunction with the result in Lemma \ref{KKT} that $\bbE_u\bbx^*=2\bbz^*$ leads to the third claims of Lemmata \ref{equalities_for_optima_lemma} and \ref{equalities_for_optima_lemma_2}.
\section{Proofs of Theorems \ref{DQM_convergence} and \ref{DLM_convergence}}\label{linear_convg_app}
To prove Theorems \ref{DQM_convergence} and \ref{DLM_convergence} we first establish a sufficient condition for the claims in these theorems, and then prove the theorems by verifying this condition. To do so, we use the general coefficient $\beta_k$, which equals $\zeta_k$ for the DQM algorithm and $\rho+M$ for the DLM method. These definitions and the results in Propositions \ref{error_vector_proposition} and \ref{error_vector_proposition_2} imply that
\begin{equation}\label{new_realtion}
\|\bbe_k\|\leq \beta_k \|\bbx_{k+1}-\bbx_k\|, \end{equation}
where $\bbe_k$ is $\bbe_k^{DQM}$ in DQM and $\bbe_k^{DLM}$ in DLM. The sufficient condition of Theorems \ref{DQM_convergence} and \ref{DLM_convergence} is studied in the following lemma.
\begin{lemma}\label{equi_cond_linea_convg}
Consider the DLM and DQM algorithms as defined in \eqref{ADMM_z_update}-\eqref{DLM_x_update} and \eqref{DQM_update_1}-\eqref{DQM_update_2}, respectively. Further, consider a sequence $\delta_k$ of positive scalars. If Assumptions \ref{initial_val_assum}-\ref{Lipschitz_assumption} hold true, then the sequence $\|\bbu_{k}-\bbu^*\|_\bbC^2$ converges linearly as
\begin{equation}\label{zarbe}
\|\bbu_{k+1}-\bbu^*\|_{\bbC}^2\ \leq\ \frac{1}{1+\delta_k} \|\bbu_{k}-\bbu^*\|_{\bbC}^2, \end{equation} if the following inequality holds true,
\begin{align}\label{claim_2}
&\! \beta_k\|\bbx_{k+1}\!-\!\bbx^*\|\|\bbx_{k+1}\!-\!\bbx_k\|\!+\!\delta_k c \|\bbz_{k+1}\!-\!\bbz^*\|^2
\!+\!\frac{\delta_k}{c}\|\bbalpha_{k+1}\!-\!\bbalpha^*\|^2 \nonumber \\ &
\leq m\|\bbx_{k+1}\!-\!\bbx^*\|^2+c \|\bbz_{k+1}\!-\!\bbz_k\|^2+\frac{1}{c}\|\bbalpha_{k+1}\!-\!\bbalpha_k\|^2. \end{align}
\end{lemma}
\begin{myproof}
Proving linear convergence of the sequence $\|\bbu_k-\bbu^*\|_\bbC^2$ as mentioned in \eqref{zarbe} is equivalent to showing that
\begin{equation}\label{linear_convg}
\delta_k \|\bbu_{k+1}-\bbu^* \|_\bbC^2\leq \|\bbu_{k}-\bbu^* \|_\bbC^2-\|\bbu_{k+1}-\bbu^* \|_\bbC^2.
\end{equation}
According to the definition $\|\bba\|_\bbC^2:=\bba^T\bbC\bba$ we can show that
\begin{align}\label{mirror}
2(\bbu_{k}-\bbu_{k+1})^T\bbC(\bbu_{k+1}-\bbu^*)&= \|\bbu_{k}-\bbu^* \|_\bbC^2- \|\bbu_{k+1}-\bbu^*\|_\bbC^2 \nonumber\\
&\qquad - \|\bbu_{k}-\bbu_{k+1} \|_\bbC^2. \end{align}
The relation in \eqref{mirror} shows that the right hand side of \eqref{linear_convg} can be substituted by $2(\bbu_{k}-\bbu_{k+1})^T\bbC(\bbu_{k+1}-\bbu^*)+\|\bbu_{k}-\bbu_{k+1} \|_\bbC^2$. Applying this substitution into \eqref{linear_convg} leads to
\begin{equation}\label{working}
\delta_k \|\bbu_{k+1}-\bbu^* \|_\bbC^2 \leq 2(\bbu_{k}-\bbu_{k+1})^T\bbC(\bbu_{k+1}-\bbu^*)+ \|\bbu_{k}-\bbu_{k+1} \|_\bbC^2
\end{equation}
This observation implies that to prove the linear convergence claimed in \eqref{zarbe}, it suffices to show that the inequality in \eqref{working} is satisfied.
We proceed by finding a lower bound for the term $2(\bbu_{k}-\bbu_{k+1})^T\bbC(\bbu_{k+1}-\bbu^*)$ in \eqref{working}. By regrouping the terms in \eqref{phone} and multiplying both sides of equality by $(\bbx_{k+1}-\bbx^*)^T$ from the left hand side we obtain that the inner product $(\bbx_{k+1}-\bbx^*)^T(\nabla f(\bbx_{k+1})-\nabla f(\bbx^*))$ is equivalent to
\begin{align}\label{glad} &(\bbx_{k+1}-\bbx^*)^T(\nabla f(\bbx_{k+1})-\nabla f(\bbx^*))= \nonumber\\ &\qquad \qquad -(\bbx_{k+1}-\bbx^*)^T\bbe_{k}-(\bbx_{k+1}-\bbx^*)^T\bbE_o^T(\bbalpha_{k+1}-\bbalpha^*)\nonumber\\ &\qquad \qquad+ c(\bbx_{k+1}-\bbx^*)^T\bbE_u^T ( \bbz_{k}-\bbz_{k+1} ). \end{align}
Based on \eqref{relation2}, we can substitute $(\bbx_{k+1}-\bbx^*)^T\bbE_o^T(\bbalpha_{k+1}-\bbalpha^*)$ in \eqref{glad} by $(2/c)(\bbalpha_{k+1}-\bbalpha_{k})^T(\bbalpha_{k+1}-\bbalpha^*)$. Further, the result in \eqref{relation3} implies that the term $c(\bbx_{k+1}-\bbx^*)^T\bbE_u^T \left( \bbz_{k}-\bbz_{k+1} \right)$ in \eqref{glad} is equivalent to $2c \left( \bbz_{k}-\bbz_{k+1}\right)^T (\bbz_{k+1}-\bbz^*)$. Applying these substitutions into \eqref{glad} leads to \begin{align}\label{you} &(\bbx_{k+1}\!-\!\bbx^*)^T(\nabla f(\bbx_{k+1})-\nabla f(\bbx^*))= -(\bbx_{k+1}\!-\!\bbx^*)^T\bbe_{k} \\ & \!\!+\frac{2}{c}(\bbalpha_{k}-\bbalpha_{k+1})^T(\bbalpha_{k+1}-\bbalpha^*)\!+\! 2c \left( \bbz_{k}-\bbz_{k+1}\right)^T\!(\bbz_{k+1}-\bbz^*).\nonumber \end{align}
Based on the definitions of matrix $\bbC$ and vector $\bbu$ in \eqref{C_u_definitions}, the last two summands in the right hand side of \eqref{you} can be simplified as
\begin{align}\label{money} &\frac{2}{c}(\bbalpha_{k}-\bbalpha_{k+1})^T(\bbalpha_{k+1}-\bbalpha^*)+2c \left( \bbz_{k}-\bbz_{k+1}\right)^T (\bbz_{k+1}-\bbz^*)\nonumber\\ &\qquad =2(\bbu_{k}-\bbu_{k+1})^T\bbC(\bbu_{k+1}-\bbu^*). \end{align}
Considering the simplification in \eqref{money} we can rewrite \eqref{you} as
\begin{align}\label{fine} &(\bbx_{k+1}-\bbx^*)^T(\nabla f(\bbx_{k+1})-\nabla f(\bbx^*))
\\ &\qquad \quad =-(\bbx_{k+1}-\bbx^*)^T\bbe_{k}+2(\bbu_{k}-\bbu_{k+1})^T\bbC(\bbu_{k+1}-\bbu^*). \nonumber \end{align}
Observe that the objective function $f$ is strongly convex with constant $m$ which implies the inequality
$m\|\bbx_{k+1}-\bbx^*\|^2\leq (\bbx_{k+1}-\bbx^*)^T(\nabla f(\bbx_{k+1})-\nabla f(\bbx^*))$ holds true. Considering this inequality from the strong convexity of objective function $f$ and the simplification for the inner product $(\bbx_{k+1}-\bbx^*)^T(\nabla f(\bbx_{k+1})-\nabla f(\bbx^*))$ in \eqref{fine}, the following inequality holds \begin{equation}\label{strong_convexity_result}
m\|\bbx_{k+1}-\bbx^*\|^2 +(\bbx_{k+1}-\bbx^*)^T\bbe_{k}\leq2(\bbu_{k}-\bbu_{k+1})^T\bbC(\bbu_{k+1}-\bbu^*). \end{equation}
Substituting the lower bound for the term $2(\bbu_{k}-\bbu_{k+1})^T\bbC(\bbu_{k+1}-\bbu^*)$ in \eqref{strong_convexity_result} into \eqref{working}, it follows that the following condition is sufficient to have \eqref{zarbe}, \begin{align}\label{new_claim_for_linear_convg}
\delta_k \|\bbu_{k+1}-\bbu^* \|_\bbC^2& \leq m\|\bbx_{k+1}-\bbx^*\|^2 +(\bbx_{k+1}-\bbx^*)^T\bbe_{k}\nonumber \\ &\qquad + \|\bbu_{k}-\bbu_{k+1} \|_\bbC^2. \end{align}
We emphasize that inequality \eqref{new_claim_for_linear_convg} implies the linear convergence result in \eqref{zarbe}. Therefore, our goal is to show that if \eqref{claim_2} holds, the relation in \eqref{new_claim_for_linear_convg} is also valid and consequently the result in \eqref{zarbe} holds. According to the definitions of matrix $\bbC$ and vector $\bbu$ in \eqref{C_u_definitions}, we can substitute $ \|\bbu_{k+1}-\bbu^* \|_\bbC^2$ by $c \|\bbz_{k+1}-\bbz^*\|^2+({1}/{c})\|\bbalpha_{k+1}-\bbalpha^*\|^2$ and $\|\bbu_{k}-\bbu_{k+1} \|_\bbC^2$ by $c \|\bbz_{k+1}-\bbz_k\|^2+({1}/{c})\|\bbalpha_{k+1}-\bbalpha_k\|^2$. Making these substitutions into \eqref{new_claim_for_linear_convg} yields
\begin{align}\label{imp_inequality}
&\delta_k c \|\bbz_{k+1}-\bbz^*\|^2+\frac{\delta_k}{c}\|\bbalpha_{k+1}-\bbalpha^*\|^2 \leq m\|\bbx_{k+1}-\bbx^*\|^2 \\
&\qquad +(\bbx_{k+1}-\bbx^*)^T\bbe_{k}+c \|\bbz_{k+1}-\bbz_k\|^2+\frac{1}{c}\|\bbalpha_{k+1}-\bbalpha_k\|^2.\nonumber \end{align}
The inequality in \eqref{new_realtion} implies that $-\|\bbe_k\|$ is lower bounded by $-\beta_{k}\|\bbx_{k+1}-\bbx_k\|$. This lower bound, in conjunction with the fact that the inner product of two vectors is not smaller than the negative of the product of their norms, leads to
\begin{equation}\label{pen2}
(\bbx_{k+1}-\bbx^*)^T\!\bbe_{k}\geq-\beta_k\|\bbx_{k+1}-\bbx^*\|\|\bbx_{k+1}-\bbx_k\| . \end{equation}
Substituting $(\bbx_{k+1}-\bbx^*)^T\bbe_{k}$ in \eqref{imp_inequality} by its lower bound in \eqref{pen2} leads to a sufficient condition for \eqref{imp_inequality} as in \eqref{claim_2}, i.e.,
\begin{align}\label{claim_2324}
&\! \beta_k\|\bbx_{k+1}\!-\!\bbx^*\|\|\bbx_{k+1}\!-\!\bbx_k\|\!+\!\delta_k c \|\bbz_{k+1}\!-\!\bbz^*\|^2
\!+\!\frac{\delta_k}{c}\|\bbalpha_{k+1}\!-\!\bbalpha^*\|^2 \nonumber \\ &
\leq m\|\bbx_{k+1}\!-\!\bbx^*\|^2+c \|\bbz_{k+1}\!-\!\bbz_k\|^2+\frac{1}{c}\|\bbalpha_{k+1}\!-\!\bbalpha_k\|^2. \end{align}
Observe that if \eqref{claim_2324} holds true, then \eqref{imp_inequality} and its equivalence \eqref{new_claim_for_linear_convg} are valid and as a result the inequality in \eqref{zarbe} is also satisfied. \end{myproof}
According to the result in Lemma \ref{equi_cond_linea_convg}, the sequence $\|\bbu_{k}-\bbu^*\|_\bbC^2$ converges linearly as mentioned in \eqref{zarbe} if the inequality in \eqref{claim_2} holds true. Therefore, in the following proof we show that for
\begin{equation}\label{general_delta} \delta_k=\min \Bigg\{ \frac{(\mu-1)(c\gamma_u^2-\eta_k\beta_k)\gamma_o^{2}}
{\mu\mu'( c\Gamma_u^2\gamma_u^2+4\beta_k^2/c(\mu'-1))} , \frac{ m-{\beta_k}/{\eta_k} }{{ c}\Gamma_u^2/4 +\mu M^2/c\gamma_o^{2}} \Bigg\}, \end{equation}
the inequality in \eqref{claim_2} holds and consequently \eqref{zarbe} is valid.
\textbf{Proofs of Theorems \ref{DQM_convergence} and \ref{DLM_convergence}}: We show that if the constant $\delta_k$ is chosen as in \eqref{general_delta}, then the inequality in \eqref{claim_2} holds true.
To do this we first find an upper bound for
$\beta_k\|\bbx_{k+1}-\bbx^*\|\|\bbx_{k+1}-\bbx_k\| $ in terms of the quantities on the right hand side of \eqref{claim_2}. Observing the result of Lemma \ref{lag_var_lemma} that $\bbE_u\bbx_k=2\bbz_{k}$ at times $k$ and $k+1$, we can write \begin{equation}\label{salt} \bbE_u(\bbx_{k+1}-\bbx_{k})=2(\bbz_{k+1}-\bbz_{k}). \end{equation}
The singular values of $\bbE_u$ are bounded below by $\gamma_u$. Hence, equation \eqref{salt} implies that $\|\bbx_{k+1}-\bbx_{k}\|$ is upper bounded by
\begin{equation}\label{flower}
\|\bbx_{k+1}-\bbx_{k}\|\leq \frac{2}{\gamma_u} \|\bbz_{k+1}-\bbz_{k}\|. \end{equation}
Multiplying both sides of \eqref{flower} by $\beta_k\|\bbx_{k+1}-\bbx^*\|$ yields
\begin{equation}\label{flower2}
\beta_k\|\bbx_{k+1}-\bbx^*\|\|\bbx_{k+1}-\bbx_{k}\|\leq \frac{2\beta_k}{\gamma_u}\|\bbx_{k+1}-\bbx^*\| \|\bbz_{k+1}-\bbz_{k}\|. \end{equation}
Notice that for any vectors $\bba$ and $\bbb$ and positive constant $\eta_{k}>0$ the inequality $2\|\bba\|\|\bbb\|\leq (1/\eta_{k})\|\bba\|^2+\eta_{k}\|\bbb\|^2 $ holds true. By setting $\bba=\bbx_{k+1}-\bbx^*$ and $\bbb=(1/\gamma_u)(\bbz_{k+1}-\bbz_{k})$, the inequality $2\|\bba\|\|\bbb\|\leq (1/\eta_{k})\|\bba\|^2+\eta_{k}\|\bbb\|^2 $ becomes
\begin{equation}\label{milad}
\frac{2}{\gamma_u}\|\bbx_{k+1}-\bbx^*\| \|\bbz_{k+1}-\bbz_{k}\|\leq
\frac{1}{\eta_{k}}\|\bbx_{k+1}-\bbx^*\|^2+\frac{\eta_{k} }{\gamma_u^2} \|\bbz_{k+1}-\bbz_{k}\|^2. \end{equation}
Substituting the upper bound for $ ({2}/{\gamma_u})\|\bbx_{k+1}-\bbx^*\| \|\bbz_{k+1}-\bbz_{k}\|$ in \eqref{milad} into \eqref{flower2} yields
\begin{equation}\label{flower3}
\beta_k\|\bbx_{k+1}-\bbx^*\|\|\bbx_{k+1}-\bbx_{k}\|\!\leq \!
\frac{\beta_k}{\eta_{k}}\|\bbx_{k+1}-\bbx^*\|^2+\frac{\eta_{k}\beta_k }{\gamma_u^2}
\|\bbz_{k+1}-\bbz_{k}\|^2. \end{equation}
Notice that inequality \eqref{flower3} provides an upper bound for
$\beta_k\|\bbx_{k+1}-\bbx^*\|\|\bbx_{k+1}-\bbx_{k}\|$ in \eqref{claim_2} in terms of the quantities $\|\bbx_{k+1}-\bbx^*\|^2$ and $\|\bbz_{k+1}-\bbz_{k}\|^2$, which appear on the right hand side of \eqref{claim_2}. The next step is to bound the other two terms on the left hand side of \eqref{claim_2} in terms of the quantities on its right hand side, namely $\|\bbx_{k+1}-\bbx^*\|^2$, $\|\bbz_{k+1}-\bbz_{k}\|^2$, and $\|\bbalpha_{k+1}-\bbalpha_k\|^2$. We start with
$\|\bbz_{k+1}-\bbz^*\|^2$. The relation in \eqref{relation3} and the upper bound $\Gamma_u$ for the singular values of the matrix $\bbE_u$ yield
\begin{equation}\label{second_term_bound}
\delta_k c \|\bbz_{k+1}-\bbz^*\|^2 \leq \frac{\delta_k c\Gamma_u^2}{4} \|\bbx_{k+1}-\bbx^*\|^2. \end{equation}
The next step is to bound $(\delta_k/c)\|\bbalpha_{k+1}-\bbalpha^*\|^2$ in terms of the quantities on the right hand side of \eqref{claim_2}. First, note that for any vectors $\bba$, $\bbb$, and $\bbc$, and constants $\mu$ and $\mu'$ larger than 1, i.e., $\mu,\mu'>1$, we can write
\begin{align}\label{kahn}
(1-\frac{1}{\mu'})(1-\frac{1}{\mu})\|\bbc\|^2 &\leq
\|\bba+\bbb+\bbc\|^2+(\mu'-1)\|\bba\|^2\nonumber\\
&\qquad + (\mu-1)(1-\frac{1}{\mu'})\|\bbb\|^2 . \end{align}
Set $\bba=c\bbE_u^T ( \bbz_{k}-\bbz_{k+1})$, $\bbb=\nabla f(\bbx^*)-\nabla f(\bbx_{k+1})$, and $\bbc=\bbE_o^T(\bbalpha^*-\bbalpha_{k+1})$. By choosing these values and observing equality \eqref{relation1} we obtain $\bba+\bbb+\bbc=\bbe_{k}$. Hence, by making these substitutions for $\bba$, $\bbb$, $\bbc$, and $\bba+\bbb+\bbc$ into \eqref{kahn} we can write
\begin{align}\label{beats}
(1-\frac{1}{\mu'})(1-\frac{1}{\mu})&\|\bbE_o^T(\bbalpha_{k+1}-\bbalpha^*)\|^2
\leq \|\bbe_{k}\|^2 \\
&+(\mu'-1)\|c\bbE_u^T ( \bbz_{k}-\bbz_{k+1})\|^2 \nonumber\\
&+(\mu-1)(1-\frac{1}{\mu'})\|\nabla f(\bbx_{k+1})-\nabla f(\bbx^*)\|^2 \nonumber. \end{align}
Notice that according to the result in Lemma \ref{lag_var_lemma}, the Lagrange multiplier $\bbalpha_{k}$ lies in the column space of $\bbE_o$ for all $k\geq0$. Further, recall that the optimal multiplier $\bbalpha^*$ also lies in the column space of $\bbE_o$. These observations show that $\bbalpha^*-\bbalpha_{k+1}$ is in the column space of $\bbE_o$. Hence, there exists a vector $\bbr\in \reals^{np}$ such that $\bbalpha^*-\bbalpha_{k+1}=\bbE_o\bbr$. This relation implies that $\|\bbE_o^T(\bbalpha_{k+1}-\bbalpha^*)\|^2$ can be written as $\|\bbE_o^T\bbE_o\bbr\|^2=\bbr^T(\bbE_o^T\bbE_o)^2\bbr$. Observe that since the eigenvalues of the matrix $(\bbE_o^T\bbE_o)^2$ are the squares of the eigenvalues of the matrix $\bbE_o^T\bbE_o$, we can write $\bbr^T(\bbE_o^T\bbE_o)^2\bbr \geq \gamma_o^2\bbr^T\bbE_o^T\bbE_o\bbr$, where $\gamma_o$ is the smallest non-zero singular value of the oriented incidence matrix $\bbE_o$. Observing this inequality and the relation $\bbalpha^*-\bbalpha_{k+1}=\bbE_o\bbr$ we can write
\begin{equation}\label{paper100}
\left\|\bbE_o^T(\bbalpha_{k+1}-\bbalpha^*)\right\|^2 \geq \gamma_o^2 \| \bbalpha_{k+1}-\bbalpha^* \|^2. \end{equation}
Observe that the error norm $\|\bbe_{k}\|$ is bounded above by $\beta_{k}\|\bbx_{k+1}-\bbx_k\|$ as in \eqref{new_realtion} and the norm $\|c\bbE_u^T ( \bbz_{k}-\bbz_{k+1})\|^2$ is upper bounded by $ c^2\Gamma_u^2\|\bbz_{k}-\bbz_{k+1}\|^2$ since all the singular values of the unoriented matrix $\bbE_u$ are smaller than $\Gamma_u$. Substituting these upper bounds and the lower bound in \eqref{paper100} into \eqref{beats} implies
\begin{align}\label{coffee}
& (1-\frac{1}{\mu'})(1-\frac{1}{\mu})\gamma_o^2\|\bbalpha_{k+1}-\bbalpha^*\|^2
\leq \beta_k^2 \|\bbx_{k+1}-\bbx_k\|^2 \\
&+\!(\mu'-1)c^2\Gamma_u^2\| \bbz_{k}-\bbz_{k+1}\|^2 \!+\!(\mu-1)(1-\frac{1}{\mu'})M^2\|\bbx_{k+1}\!-\!\bbx^*\|^2 \nonumber \end{align}
Considering the result in \eqref{flower}, $\|\bbx_{k+1}-\bbx_{k}\|$ is upper bounded by $ ({2}/{\gamma_u} )\|\bbz_{k+1}-\bbz_{k}\|$. Therefore, we can substitute $\|\bbx_{k+1}-\bbx_{k}\|$ in the right hand side of \eqref{coffee} by its upper bound $ ({2}/{\gamma_u} )\|\bbz_{k+1}-\bbz_{k}\|$. Making this substitution, dividing both sides by $ (1-{1}/{\mu'})(1-{1}/{\mu})\gamma_o^2$, and regrouping the terms lead to
\begin{align}\label{third_term_bound}
\|\bbalpha_{k+1}-\bbalpha^*\|^2& \leq
\frac{\mu M^2}{\gamma_o^2}\|\bbx_{k+1}-\bbx^*\|^2 \\ &\!\!\!\!\!\!\!\!\!\!\!
+ \left[\frac{4\mu\mu'\beta_k^2}{\gamma_u^2\gamma_o^2(\mu-1)(\mu'-1)}\!+\!\frac{\mu\mu'c^2\Gamma_u^2}{(\mu-1)\gamma_o^2}\right] \!\! \| \bbz_{k}-\bbz_{k+1}\|^2.\nonumber \end{align}
Considering the upper bounds for $\beta_k\|\bbx_{k+1}-\bbx^*\|\|\bbx_{k+1}-\bbx_k\|$, $\|\bbz_{k+1}-\bbz^*\|^2$, and $\|\bbalpha_{k+1}-\bbalpha^*\|^2$ in \eqref{flower3}, \eqref{second_term_bound}, and \eqref{third_term_bound}, respectively, we obtain that if the inequality
\begin{align}\label{final} & \left[ \frac{\beta_k}{\eta_{k}} +\frac{\delta_k c\Gamma_u^2}{4} +\frac{\delta_k\mu M^2}{c\gamma_o^2} \right]
\|\bbx_{k+1}-\bbx^*\|^2 + \\ & \left[ \frac{4\delta_k\mu\mu'\beta_k^2}{c\gamma_u^2\gamma_o^2(\mu-1)(\mu'-1)} +\frac{\delta_k\mu\mu'c\Gamma_u^2}{(\mu-1)\gamma_o^2} +\frac{\eta_{k}\beta_k}{\gamma_u^2} \right]
\|\bbz_{k+1}-\bbz_k\|^2\nonumber\\ &
\qquad \leq m\|\bbx_{k+1}-\bbx^*\|^2+c \|\bbz_{k+1}-\bbz_k\|^2+\frac{1}{c}\|\bbalpha_{k+1}-\bbalpha_k\|^2.\nonumber \end{align}
holds true, \eqref{claim_2} is satisfied. Hence, the last step is to show that for the specific choice of $\delta_k$ in \eqref{general_delta} the result in \eqref{final} is satisfied. In order to make sure that \eqref{final} holds, it is sufficient to show that the coefficients of $\|\bbx_{k+1}-\bbx^*\|^2$ and $\|\bbz_{k+1}-\bbz_k\|^2$ in the left hand side of \eqref{final} are smaller than the ones in the right hand side. Hence, we should verify the validity of inequalities
\begin{align} &\qquad\qquad \qquad \frac{\beta_k}{\eta_{k}} +\frac{\delta_k c\Gamma_u^2}{4} +\frac{\delta_k\mu M^2}{c\gamma_o^2}
\leq
m, \label{hat1}\\ &\frac{4\delta_k\mu\mu'\beta_k^2}{c\gamma_u^2\gamma_o^2(\mu-1)(\mu'-1)} +\frac{\delta_k\mu\mu'c\Gamma_u^2}{(\mu-1)\gamma_o^2} +\frac{\eta_{k}\beta_k}{\gamma_u^2}
\leq c.\label{hat2} \end{align}
Considering the inequality for $\delta_k$ in \eqref{general_delta} we obtain that \eqref{hat1} and \eqref{hat2} are satisfied. Hence, if $\delta_k$ satisfies the condition in \eqref{general_delta}, then \eqref{final} and consequently \eqref{claim_2} are satisfied. Recalling the result of Lemma \ref{equi_cond_linea_convg} that inequality \eqref{claim_2} is a sufficient condition for the linear convergence in \eqref{zarbe}, we conclude that the linear convergence holds. By setting $\beta_k=\zeta_k$ we obtain that the linear convergence of DQM in Theorem \ref{DQM_convergence} is valid and the linear convergence coefficient in \eqref{general_delta} simplifies to \eqref{DLM_DQM_delta}. Moreover, setting $\beta_k=\rho+M$ yields the linear convergence of DLM as in Theorem \ref{DLM_convergence} with the linear constant in \eqref{DLM_delta}.
\end{appendices}
\end{document} | arXiv |
\begin{document}
\title{Diagonals of separately continuous maps with values in box products} \author{Olena Karlova} \author{Volodymyr Mykhaylyuk}
\maketitle
\begin{abstract}
We prove that if $X$ is a paracompact connected space and $Z=\prod_{s\in S}Z_s$ is a product of a family of equiconnected metrizable spaces endowed with the box topology, then for every Baire-one map $g:X\to Z$ there exists a separately continuous map $f:X^2\to Z$ such that $f(x,x)=g(x)$ for all $x\in X$.
\end{abstract}
\section{Introduction} Let $X$, $Y$ be topological spaces and $C(X,Y)=B_0(X,Y)$ be the collection of all continuous maps between $X$ and $Y$. For $n\geq 1$ we say that a map $f:X\to Y$ belongs to {\it the $n$-th Baire class} if $f$ is a pointwise limit of a sequence of maps $f_k:X\to Y$ from the $(n-1)$-th Baire class. By ${\rm B}_n(X,Y)$ we denote the collection of all maps of the $n$-th Baire class between $X$ and $Y$.
For a map $f:X\times Y\to Z$ and a point $(x,y)\in X\times Y$ we write $f^x(y)=f_y(x)=f(x,y)$. By $CB_n(X\times Y,Z)$ we denote the collection of all mappings $f:X\times Y\to Z$ which are continuous with respect to the first variable and belong to the $n$-th Baire class with respect to the second one. If $n=0$, then we use the symbol $CC(X\times Y,Z)$ for the class of all separately continuous maps. Now let $CC_0(X\times Y,Z)=CC(X\times Y,Z)$ and for $n\ge 1$ let $CC_n(X\times Y,Z)$ be the class of all maps $f:X\times Y\to Z$ which are pointwise limits of sequences of maps from $CC_{n-1}(X\times Y,Z)$.
Let $f:X^2\to Y$ be a map. Then the map $g:X\to Y$ defined by $g(x)=f(x,x)$ is called {\it a diagonal of $f$.}
Investigations of diagonals of separately continuous functions $f:X^n\to\mathbb R$ were started in the classical works of R.~Baire \cite{Baire}, H.~Lebesgue \cite{Leb1, Leb2} and H.~Hahn \cite{Ha}, who proved that the diagonals of separately continuous functions of $n$ real variables are exactly the functions of the $(n-1)$-th Baire class. On the other hand, separately continuous mappings with values in equiconnected spaces have been intensively studied starting from \cite{B}. A brief survey of further developments of these investigations can be found in \cite{KMS1}. At the same time, little is known about the possibility of extending a ${\rm B}_1$-function from the diagonal of $X^2$ to a separately continuous function on the whole of $X^2$ when the range space is not metrizable. We are aware of only one paper in this direction~\cite{KMS1}, where maps are considered with values in a space $Z$ belonging to a wide class of spaces which contains metrizable equiconnected spaces and strict inductive limits of sequences of closed locally convex metrizable subspaces.
Here we continue investigations in this direction and study maps with values in products of topological spaces endowed with the box topology. The main result of our paper is the following. \begin{theorem}
Let $X$ be a paracompact connected space, $Z=\prod_{s\in S}Z_s$ be a product of a family of equiconnected metrizable spaces endowed with the box topology and $g:X\to Z$ be a Baire-one map. Then there exists a separately continuous map $f:X^2\to Z$ with the diagonal $g$. \end{theorem}
\section{The case of countable box-products}
We consider a family $(X_s)_{s\in S}$ of topological spaces and put $X_S=\prod_{s\in S}X_s$. For $x\in X_S$ and $T\subseteq S$ we define $x|_T$ as the point from $X_T$ with coordinates $(y_s)_{s\in T}$ such that $y_s=x_s$ for all $s\in T$.
The product $X_S$ endowed with the {\it box topology} generated by the family of all boxes $\prod_{s\in S}G_s$, where $(G_s)_{s\in S}$ is a family of open subsets of $X_s$ for every $s\in S$, is called {\it the box product} and is denoted by $\Box_{s\in S} X_s$.
For a fixed point $a=(a_s)_{s\in S}\in X_S$ we consider the set $$ \sigma(a)=\{(x_s)_{s\in S}: \{s\in S:x_s\ne a_s\} \,\,\mbox{is finite}\}. $$ If $\sigma(a)$ is endowed with the box topology, then $\sigma(a)$ is called {\it the small box product} of $(X_s)_{s\in S}$ and is denoted by $\boxdot_{s\in S}X_s$ or simply by $\boxdot$ when no confusion will arise.
Further, for all $n\in\omega$ we put \begin{gather*}
\boxdot_n=\{(x_s)_{s\in S}:|\{s\in S:x_s\ne a_s\}|=n\},\\
\boxdot_{\le n}=\bigcup_{k\le n}\boxdot_k \end{gather*} and for a finite subset $T\subseteq S$ let \begin{gather*}
\boxdot_{T}=\{(x_s)_{s\in S}:x_s=a_s \,\, \Leftrightarrow s\in S\setminus T\}. \end{gather*} Obviously, \begin{gather*}
\boxdot=\bigsqcup_{n\in\omega}\boxdot_n \,\,\,\mbox{and}\,\,\, \boxdot_n=\bigsqcup_{T\subseteq S, |T|=n} \boxdot_{T}. \end{gather*}
The facts below follow easily from the definition of the box topology and we omit their proof. For another properties of the box topology see \cite{Williams}. \begin{proposition}\label{properties} Let $(X_s:s\in S)$ be a family of topological spaces. \begin{enumerate} \item Each $\boxdot_{s\in S}X_s$ is a closed subspace of $\Box_{s\in S}X_s$ whenever each $X_s$ is a $T_1$-space.
\item The space $\boxdot_{T}$ is homeomorphic to the finite product $\prod_{t\in T}X_t$ for any finite set $T\subseteq S$.
\item Each $\boxdot_{T}$ is clopen in $\boxdot_n$, where $n=|T|$.
\item Let $X_s$ be a $T_1$-space for every $s\in S$ and $(x_n)_{n\in\omega}$ be a sequence of points from $X=\prod_{s\in S}X_s$. Then $(x_n)_{n\in\omega}$ converges to a point $x\in X$ in the box-topology if and only if $(x_n)_{n\in\omega}$ converges to $x$ in the product topology and there
exists a number $k\in\omega$ and a finite set $T\subseteq S$ such that $\{x_n:n\ge k\}\subseteq \boxdot_{T}$, where $\boxdot_{T}$ is the subspace of $\boxdot=\sigma(x)$. \end{enumerate} \end{proposition}
The last property implies the next fact. \begin{proposition}\label{prop:cont_with_neigh}
Let $X$ be a first countable space, $(Y_s:s\in S)$ be a family of $T_1$-spaces and $f:X\to \Box_{s\in S}Y_s$ be a continuous map. Then for every $x\in X$ there exist an open neighborhood $U$ of $x$ and a finite set $T\subseteq S$ such that $f(z)|_{S\setminus T}=f(x)|_{S\setminus T}$ for all $z\in U$. In particular, $f(U)\subseteq \sigma(f(x))$. \end{proposition}
Recall that a topological space $X$ is {\it functionally Hausdorff} if for every $x,y\in X$ with $x\ne y$, there exists a continuous function $f:X\to [0,1]$ such that $f(x)\ne f(y)$.
\begin{proposition}\label{pr:7.0}
Let $(X_s:s\in S)$ be a family of Hausdorff spaces and $X\subseteq \Box_{s\in S}X_s$ be a connected subspace. If \begin{enumerate}
\item $X$ is path-connected, or
\item every $X_s$ is functionally Hausdorff, \end{enumerate}
then there exists $x^*\in X$ such that $X\subseteq \sigma(x^*)$. \end{proposition}
\begin{proof} We fix points $x=(x_s)_{s\in S}$ and $y=(y_s)_{s\in S}$ from $X$ and show that $x$ differs from $y$ in only finitely many coordinates. \begin{enumerate} \item Let $\varphi:[0,1]\to X$ be a continuous function such that $\varphi(0)=x$ and $\varphi(1)=y$. Proposition~\ref{prop:cont_with_neigh} implies that for every $t\in [0,1]$ there exists an open neighborhood $U_t$ of $t$ such that $\varphi(U_t)\subseteq \sigma(\varphi(t))$. Let points $t_1,\dots, t_n\in [0,1]$ be such that $[0,1]\subseteq \bigcup_{i=1}^{n}U_{t_i}$. Without loss of generality we may assume that the set $\{t_1,\dots, t_n\}$ is minimal and $t_1< t_2<\dots <t_n$. Take $\tau_1\in U_{t_1}\cap U_{t_2}$,\dots, $\tau_{n-1}\in U_{t_{n-1}}\cap U_{t_n}$ and notice that $\sigma(\varphi(\tau_1))=\dots=\sigma(\varphi(\tau_{n-1}))$. Therefore, $\varphi([0,1])\subseteq\sigma(\varphi(\tau_1))$.
\item Assume that the set $\{s\in S: x_s\ne y_s\}$ is infinite and show that there exists a clopen set $U\subseteq \Box_{s\in S}X_s$ such that $x\in U$ and $y\not\in U$. We take a countable subset $$T=\{t_n:n\in \omega\}\subseteq \{s\in S: x_s\ne y_s\},$$ where all $t_n$ are distinct.
Since each $X_s$ is functionally Hausdorff, for every $s\in T$ one can choose a continuous function $f_s:X_s\to [0,1]$ such that $f_s(x_s)=0$ and $f_s(y_s)=1$. For every $s\in T$ we denote $I_s=[0,1]$ and define a continuous map $f:\Box_{s\in S}X_s\to \Box_{s\in T}I_s$, $f((z_s)_{s\in S})=(f_s(z_s))_{s\in T}$ for all $z=(z_s)_{s\in S}\in \Box_{s\in S}X_s$. Note that the set \begin{gather*} V=\{(u_s)_{s\in T}\in[0,1]^T:u_{t_n}\to 0\} \end{gather*} is clopen in $\Box_{s\in T}I_s$. It remains to put $$ U=f^{-1}(V). $$ Indeed, $f(x)\in V$ and $f(y)\not\in V$, so $U$ is a clopen set with $x\in U$ and $y\not\in U$; hence $U\cap X$ is a nonempty proper clopen subset of the connected space $X$, a contradiction.\end{enumerate} \end{proof}
Let $X$ be a topological space and $\Delta=\{(x,x):x\in X\}$. A set $A\subseteq X$ is called {\it equiconnected in $X$} if there exists a continuous mapping $\lambda:((A\times A)\cup \Delta)\times [0,1]\to X$ such that $\lambda(A\times A\times [0,1])\subseteq A$, $\lambda(x,y,0)=\lambda(y,x,1)=x$ for all $x,y\in A$ and $\lambda(x,x,t)=x$ for all $x\in X$ and $t\in [0,1]$. A space is {\it equiconnected} if it is equiconnected in itself. Notice that any topological vector space is equiconnected, where the mapping $\lambda$ is defined by $\lambda(x,y,t)=(1-t)x+t y$.
\begin{proposition}\label{pr:7.1}
Let $(X_s)_{s\in S}$ be a family of equiconnected spaces $(X_s,\lambda_s)$. Then each small box-product $\boxdot_{s\in S}X_s$ is equiconnected. \end{proposition}
\begin{proof} For $z=(z_s)_{s\in S}, w=(w_s)_{s\in S}\in \boxdot_{s\in S}X_s$ and $t\in[0,1]$ we put \begin{equation}\label{eq:7.1}
\lambda(z,w,t)=(\lambda_s(z_s,w_s,t))_{s\in S}.
\end{equation} It is easy to see that the space $(\boxdot_{s\in S}X_s,\lambda)$ is equiconnected. \end{proof}
A covering $(X_n:n\in\omega)$ of a topological space $X$ is said to be {\it sequentially absorbing} if for any convergent sequence $(x_n)_{n\in\omega}$ there exists $k\in\omega$ such that $\{x_n:n\in\omega\}\subseteq X_k$. Let us observe that $(\boxdot_{\le n}:n\in\omega)$ is a sequentially absorbing covering of $\boxdot$ by Proposition~\ref{properties}(4).
A topological space $X$ is said to be {\it strongly $\sigma$-metrizable} if it has a sequentially absorbing covering (which is called {\it a stratification of $X$}) by metrizable subspaces. A stratification $(X_n)_{n=1}^{\infty}$ of a space $X$ is said to be {\it perfect} if for every $n\in\mathbb N$ there exists a continuous mapping $\pi_n:X\to X_n$ with $\pi_n(x)=x$ for every $x\in X_n$. Notice that according to \cite{BB} every strongly $\sigma$-metrizable space $X$ is {\it super $\sigma$-metrizable}, that is there exists a covering $(X_n:n\in\omega)$ of $X$ by closed subspaces $X_n$ such that every compact subset of $X$ is contained in some $X_n$. A stratification $(X_n)_{n=1}^{\infty}$ of an equiconnected strongly $\sigma$-metrizable space $X$ is {\it compatible with $\lambda$} if $\lambda(X_n\times X_n\times[0,1])\subseteq X_n$ for every $n\in\mathbb N$.
\begin{proposition}\label{pr:7.2} Let $(X_n)_{n=1}^{\infty}$ be a sequence of metrizable equiconnected spaces $(X_n,\lambda_n)$, $a\in\prod\limits_{n=1}^{\infty}X_n$ and $Z=\sigma(a)=\boxdot_{n\in \omega}X_n$. Then there exists $\lambda:Z\times Z\times [0,1]\to Z$ such that $(\boxdot_{n\in \omega}X_n,\lambda)$ is a strongly $\sigma$-metrizable equiconnected space with a perfect stratification $(Z_n)_{n=1}^{\infty}$ compatible with $\lambda$. \end{proposition}
\begin{proof} For any $z=(z_n)_{n=1}^{\infty}$, $w=(w_n)_{n=1}^{\infty}\in Z$ and $t\in[0,1]$ we put $$ \lambda(z,w,t)=(\lambda_n(z_n,w_n,t))_{n=1}^{\infty} $$ and notice that the space $(\boxdot_{n\in \omega}X_n,\lambda)$ is equiconnected.
For every $n\in\mathbb N$ let $$ Z_n=\{(z_k)_{k=1}^{\infty}\in Z:(z_k=a_k)(\forall k>n)\}. $$
The space $\boxdot_{n\in \omega}X_n$ is strongly $\sigma$-metrizable with the stratification $(Z_n)_{n=1}^{\infty}$. Since $\lambda(Z_n\times Z_n\times[0,1])\subseteq Z_n$, the stratification $(Z_n)_{n=1}^{\infty}$ is compatible with $\lambda$. Moreover, for every $n\in\mathbb N$ the map $\pi_n:Z\to Z_n$ is continuous, where $\pi_n((z_k)_{k=1}^\infty)=(w_k)_{k=1}^\infty$ and $w_k=z_k$ for $k\leq n$, $w_k=a_k$ for $k>n$. Hence, the stratification $(Z_n)_{n=1}^{\infty}$ is perfect. \end{proof}
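To make the stratification and the retractions $\pi_n$ concrete, the following toy sketch (our own illustration, taking $X_n=\mathbb R$ and $a=(0,0,\dots)$) represents a point of the small box product by the finitely many coordinates in which it differs from $a$; it is not part of the proof.
\begin{verbatim}
# Toy illustration (ours): points of sigma(a) with X_n = R and a = (0,0,...),
# stored as the finitely many coordinates differing from a; pi_n is the
# retraction onto Z_n and lam the coordinatewise equiconnecting map.
A = 0.0  # base coordinate

def make_point(coords):
    # coords: dict {index: value}; drop coordinates equal to the base point
    return {k: v for k, v in coords.items() if v != A}

def pi(n, z):
    # pi_n keeps the first n coordinates and resets the rest to a
    return {k: v for k, v in z.items() if k <= n}

def lam(z, w, t):
    # lambda(z, w, t)_k = (1 - t) z_k + t w_k, applied coordinatewise
    keys = set(z) | set(w)
    return make_point({k: (1 - t) * z.get(k, A) + t * w.get(k, A) for k in keys})
\end{verbatim}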
Theorem 6 from \cite{KMS1} implies the following result.
\begin{theorem}\label{cor:7.1} Let $X$ be a topological space, $S$ be a countable set, $(Z_s)_{s\in S}$ be a sequence of metrizable equiconnected spaces, $n\in\mathbb N$ and $g\in {\rm B}_{n-1}(X,\boxdot_{s\in S}Z_s)$. Then there are a separately continuous map $f:X^n\to \boxdot_{s\in S}Z_s$ and a map $h\in CB_{n-1}(X\times X,\boxdot_{s\in S}Z_s)\cap CC_{n-1}(X\times X,\boxdot_{s\in S}Z_s)$ both with the diagonal $g$. \end{theorem}
\section{The case of uncountable box-products}
\begin{theorem}\label{cor:7.2}
Let $X$ be a Lindel\"{o}f first countable space, $(Z_s)_{s\in S}$ be a family of metrizable equiconnected spaces $(Z_s,\lambda_s)$, $n\in\mathbb N$ and $g\in {\rm B}_{n-1}(X,\boxdot_{s\in S}Z_s)$. Then there are a separately continuous map $f:X^n\to \boxdot_{s\in S}Z_s$ and a map $h\in CB_{n-1}(X\times X,\boxdot_{s\in S}Z_s)\cap CC_{n-1}(X\times X,\boxdot_{s\in S}Z_s)$ both with the diagonal $g$. \end{theorem}
\begin{proof} Inductively for $m\in\{0,\dots, n-1\}$ we choose families $(g_\alpha:\alpha\in\mathbb N^m)$ of maps $g_\alpha\in B_{n-m-1}(X,\boxdot_{s\in S}Z_s)$ such that \begin{equation}\label{eq:7.0} g_\alpha(x)=\lim\limits_{k\to\infty}g_{\alpha,k}(x) \end{equation} for all $x\in X$, $0\leq m\leq n-2$ and $\alpha\in\mathbb N^m$, where $g_\alpha=g$ for a single element $\alpha\in \mathbb N^0$.
Fix $\alpha\in\mathbb N^{n-1}$. For every $x\in X$ we apply Proposition~\ref{prop:cont_with_neigh} to the continuous map $g_\alpha\in B_0(X,\boxdot_{s\in S}Z_s)$ and take an open neighborhood $U(\alpha, x)$ of $x$ and a finite set $S(\alpha, x)\subseteq S$ such that
$$
g_\alpha(z)|_{S\setminus S({\alpha,x})}=a|_{S\setminus S({\alpha,x})}
$$ for all $z\in U(\alpha,x)$.
Since $X$ is Lindel\"{o}f, we choose a countable set $A_\alpha\subseteq X$ such that $X\subseteq \bigcup\limits_{x\in A_\alpha}U(\alpha,x)$.
Consider the countable set $$S_0=\bigcup\limits_{\alpha\in \mathbb N^{n-1} }\bigcup\limits_{x\in A_\alpha}S(\alpha,x).$$ Notice that $g_{\alpha}(x)|_{S\setminus S_0}=a|_{S\setminus S_0}$ for all $x\in X$. Then $$g(x)|_{S\setminus S_0}=a|_{S\setminus S_0}$$ for all $x\in X$ according to (\ref{eq:7.0}).
It remains to apply Theorem~\ref{cor:7.1} for $S=S_0$. \end{proof}
\begin{lemma}\label{l:7.4} Let $X$ be a collectionwise normal space, $(F_n)_{n\in\omega}$ be an increasing sequence of closed subsets of $X$, $A=\bigcup_{n\in\omega}F_n$,
$(C_i:i\in I)$ be a partition of $A$ by its clopen subsets. Then there exists a sequence $(\mathscr U_n)_{n\in\omega}$ of discrete families $\mathscr U_n=(U_{n,i}:i\in I)$ of functionally open subsets of $X$ such that $F_n\cap C_i\subseteq U_{n,i}$ for all $n\in\omega$ and $i\in I$. \end{lemma}
\begin{proof} Fix $n\in\omega$ and put $F_{n,i}=F_n\cap C_i$ for all $i\in I$. Notice that $(F_{n,i}:i\in I)$ is a discrete family of closed subsets of $X$. Therefore, there exists a discrete family $(V_{n,i}:i\in I)$ of open subsets of $X$ such that $F_{n,i}\subseteq V_{n,i}$ for all $i\in I$. Since $X$ is normal, for every $i\in I$ there exists a functionally open set $U_{n,i}\subseteq X$ such that $F_{n,i}\subseteq U_{n,i}\subseteq V_{n,i}$. \end{proof}
\begin{proposition}\label{pr:7.3} Let $X$ be a paracompact space, $(Z_s)_{s\in S}$ be a family of equiconnected metrizable spaces $(Z_s,\lambda_s)$, $a\in Z_S$ and $g\in {\rm B}_1(X,\boxdot_{s\in S}Z_s)$. Then there exist a sequence of continuous maps $g_n:X\to \boxdot_{s\in S}Z_s$ and a sequence of functionally open sets $W_n\subseteq X^2$ such that \begin{enumerate} \item $\{(x,x):x\in X\}\subseteq W_n$ for all $n\in\omega$;
\item $\lim\limits_{n\to\infty}g_n(x_n)=g(x)$ for every $x\in X$ and for any sequence $(x_n)_{n\in\omega}$ of points $x_n\in X$ satisfying $(x_n,x)\in W_{n}$ for all $n\in\omega$. \end{enumerate} \end{proposition}
\begin{proof} Let $\boxdot=\boxdot_{s\in S}Z_s$ and $(h_n)_{n\in\omega}$ be a sequence of continuous maps $h_n:X\to \boxdot$ which converges to $g$ pointwise on $X$. For every $s\in S$ we fix a metric $|\cdot - \cdot|_s$ on the space $Z_s$ which generates its topology. For a finite set $R\subseteq S$ and points $z=(z_s)_{s\in S}, w=(w_s)_{s\in S}\in \boxdot$ we put $$|z-w|_R=\max_{s\in R}|z_s-w_s|_s.$$ Moreover, for any $r\in S$ we define the function $\pi_r:\boxdot\to Z_r$, $\pi_r((z_s)_{s\in S})=z_r$; and for any finite set $R\subseteq S$ we define the function $\pi_R:\boxdot\to \boxdot$ as follows: $\pi_R((z_s)_{s\in S})=(w_s)_{s\in S}$, where $w_s=z_s$ for $s\in R$ and $w_s=a_s$ for $s\in S\setminus R$.
We show that there exists a partition $(A_k:k\in\omega)$ of $X$ by functionally $F_\sigma$ sets $A_k$ such that \begin{gather}\label{conditionA}
\forall k\in\omega\,\,\, \exists n_k, m_k\in\omega : |\{s\in S:\,(\exists n\geq n_k)\,(\pi_s(h_n(x))\ne a_s)\}|=m_k\,\,\, \forall x\in A_k. \end{gather} For every $n\leq k$ we define a continuous function $\varphi_{m,n,k}:X\to [0,1]$ by the rule $$
\varphi_{m,n,k}(x)=\sup_{|T|\leq m+1}\inf_{s\in T}\max\{|\pi_s(h_n(x))-a_s|_s,\dots ,|\pi_s(h_k(x))-a_s|_s\}.
$$
Notice that $\varphi_{m,n,k}(x)=0$ if and only if $$|\{s\in S:\,(\exists i\in [n,k])\,(\pi_s(h_i(x))\ne a_s)\}|\leq m.$$ We put
$$
X_{m,n}= \bigcap\limits_{k\geq n}\varphi_{m,n,k}^{-1}(0)
$$
for all $m,n\in\omega$ and notice that the set $X_{m,n}$ is functionally closed in $X$ and $x\in X_{m,n}$ if and only if $$|\{s\in S:\,(\exists i\geq n)\,(\pi_s(h_i(x))\ne a_s)\}|\leq m.$$ Since the sequence $(h_n)_{n=1}^\infty$ converges to $g$ pointwise on $X$, $X=\bigcup\limits _{m,n\in\omega}X_{m,n}$ by Proposition \ref{properties}~(4).
Let $\phi:\omega\to \omega^2$ be a bijection such that $\phi^{-1}(m_1,n_1)<\phi^{-1}(m_2,n_2)$ if $m_1+n_1<m_2+n_2$. Now we put $$ A_k=X_{\phi(k)}\setminus \left(\bigcup_{i<k} X_{\phi(i)}\right) $$ for every $k\in\omega$. Then every set $A_k$ is functionally $F_\sigma$ as a difference of functionally closed sets and $$
|\{s\in S:\,(\exists n\geq n_k)\,(\pi_s(h_n(x))\ne a_s)\}|=m_k $$ for every $x\in A_k$, where $(m_k,n_k)=\phi(k)$.
For every $k\in\omega$ we take an increasing sequence $(F_{k,m})_{m\in \omega}$ of functionally closed subsets of $X$ such that $A_k=\bigcup_{m\in\omega} F_{k,m}$. Moreover, for every $m\in\omega$ we choose a family $(G_{k,m}:0\leq k\leq m)$ of functionally open sets such that $F_{k,m}\subseteq G_{k,m}$ for all $0\leq k\leq m$ and $G_{i,m}\cap G_{j,m}=\emptyset$ for all $0\leq i<j\leq m$.
For every $k\in\omega$ and for any set $R\subseteq S$ with $|R|=m_k$ we put \begin{gather*} V_{k,R}=\{x\in A_k: \{s\in S:\,(\exists n\geq n_k)\,(\pi_s(h_n(x))\ne a_s)\}=R\} \end{gather*} and show that $V_{k,R}$ is clopen in $A_k$. Let $x'\in A_k\setminus V_{k,R}$ and $$ R^\prime=\{s\in S:\,(\exists n\geq n_k)\,(\pi_s(h_n(x'))\ne a_s)\}.
$$ Since $x'\in A_k$, $|R^\prime|=m_k=|R|$. On the other hand, $x'\not\in V_{k,R}$. Therefore, $R^\prime\ne R$ and there exists $s\in R^\prime\setminus R$. We choose $n\geq n_k$ such that $\pi_s(h_n(x'))\ne a_s$. Since $h_n$ is continuous, the set $$ U^\prime=\{x\in X:\pi_s(h_n(x))\ne a_s\} $$ is an open neighborhood of $x'$ in $X$. Moreover, $U^\prime\cap V_{k,R}=\emptyset$. Thus, $V_{k,R}$ is closed in $A_k$.
Now let $x_0\in V_{k,R}$. For every $r\in R$ we choose a function $f_r\in\{h_n:n\geq n_k\}$ such that $\pi_r(f_r(x_0))\ne a_r$. Since all functions $f_r$ are continuous, there exists a neighborhood $U_0$ of $x_0$ in $X$ such that $\pi_r(f_r(x))\ne a_r$ for every $r\in R$ and $x\in U_0$. Therefore, $U_0\cap A_k\subseteq V_{k,R}$ and $V_{k,R}$ is open in $A_k$.
Moreover, condition~(\ref{conditionA}) implies that $$
\bigsqcup_{|R|=m_k}V_{k,R}=A_k. $$
Since every paracompact space is collectionwise normal, we apply Lemma~\ref{l:7.4} and find for every $n\geq k$ a discrete family $\mathscr U_{k,n}=(U_{k,n,R}:|R|=m_k)$ of functionally open subsets $U_{k,n,R}\subseteq G_{k,n}$ of $X$ such that $$ C_{k,n,R}=F_{k,n}\cap V_{k,R}\subseteq U_{k,n,R} $$
for all $R\subseteq S$ with $|R|=m_k$. Let us observe that the family $$
\mathscr U_{n}=\bigsqcup_{0\leq k\leq n}\mathscr U_{k,n}= (U_{k,n,R}:0\leq k\leq n\,,|R|=m_k) $$
is discrete in $X$ for all $n\in\omega$. Let $\varphi_{k,n,R}:X\to [0,1]$ be a continuous function such that $C_{k,n,R}=\varphi^{-1}_{k,n,R}(0)$ and $X\setminus U_{k,n,R}=\varphi^{-1}_{k,n,R}(1)$, $k,n\in\omega$ and $R\subseteq S$ with $|R|=m_k$.
Now for every $n\in\omega$ we define a continuous map $g_n:X\to Z$, $$ g_n(x)=\left\{\begin{array}{ll}
\lambda(\pi_R(h_n(x)),a,\varphi_{k,n,R}(x)), & 0\leq k\leq n, |R|=m_k, x\in U_{k,n,R} \\
a, & x\in X\setminus (\bigcup_{U\in \mathscr U_n}U),
\end{array}
\right. $$ where the function $\lambda$ is defined by (\ref{eq:7.1}).
Let us construct a sequence $(W_n:n\in\omega)$ of functionally open sets in $X^2$. For every $n\in\omega$ we consider a functionally closed set $$
C_n=\bigsqcup_{0\leq k\leq n,\,|R|=m_k}C_{k,n,R}.
$$
For every $x\in C_n$ we choose $k\leq n$ and $R\subseteq S$ with $|R|=m_k$ such that $x\in C_{k,n,R}$. Since the map $g_n$ is continuous, we can take a functionally open neighborhood $W_n(x)\subseteq U_{k,n,R}$ of $x$ such that $|g_n(x')-g_n(x'')|_R\leq \frac{1}{n}$ for any $x',x''\in W_n(x)$. For every $x\in X\setminus C_n$ we put $W_n(x)=X\setminus C_n$. Since $X$ is paracompact, there exists a locally finite refinement $(O_{\gamma,n}:\gamma\in \Gamma_n)$ of $(W_n(x):x\in X)$ such that each $O_{\gamma,n}$ is functionally open (see \cite[Theorem 5.1.9]{Eng}). Now we put
$$
W_n=\bigcup_{\gamma\in \Gamma_n}O_{\gamma,n}\times O_{\gamma,n}
$$
and notice that $W_n$ is a functionally open subset of $X^2$, being a locally finite union of functionally open sets.
Clearly, $(W_n)_{n\in\omega}$ satisfies condition 1) of the Proposition.
We check condition 2). Fix $x\in X$ and a sequence $(x_n)_{n\in\omega}$ of points $x_n\in X$ such that $(x_n,x)\in W_{n}$ for all $n\in\omega$. Take $k\in\omega$ such that $x\in A_k$. Let $R\subseteq S$ be a finite set with $|R|=m_k$ and $i\geq \max\{k,n_k\}$ be a number such that $x\in F_{k,i}\cap V_{k,R}=C_{k,i,R}$. Notice that $x\in C_{k,j,R}$ for all $j\geq i$. In particular, $x\in C_j$.
Let $n\geq i$. Since $(x_n,x)\in W_n$, there are $\gamma\in \Gamma_n$ and $y\in X$ such that $x,x_n\in O_{\gamma,n}\subseteq W_n(y)$. Notice that $y\in C_n$, because $x\in C_n$. We choose $k'\leq n$ and $R'\subseteq S$ such that $|R'|=m_{k'}$ and $y\in C_{k',n,R'}$. Since $x\in W_n(y)\subseteq U_{k',n, R'}$, $x\in U_{k,n, R}$ and $\mathscr U_n$ is discrete, we have $k'=k$ and $R'=R$. Hence, $x,x_n\in W_n(y)\subseteq U_{k,n, R}$ and $|g_n(x)-g_n(x_n)|_R\leq \frac{1}{n}$.
Since $n\geq n_k$, condition~(\ref{conditionA}) implies that $\pi_R(h_n(x))=h_n(x)$. Since $x\in C_{k,n,R}$, $\varphi_{k,n,R}(x)=0$. It follows from the definition of $g_n$ that $g_n(x)=\pi_R(h_n(x))=h_n(x)$. Moreover, since $x_n\in W_n(y)\subseteq U_{k,n,R}$, $\pi_R(g_n(x_n))=g_n(x_n)$.
Let $g(x)=(z_s)_{s\in S}$, $g_n(x)=(z_{n,s})_{s\in S}$ and $g_n(x_n)=(w_{n,s})_{s\in S}$ for all $n\in\mathbb N$. If $s\in S\setminus R$, then $w_{n,s}=z_{n,s}=a_s$ for all $n\geq i$. If $s\in R$, then $|z_{n,s}-w_{n,s}|_s\leq \frac{1}{n}$ for all $n\geq i$. Therefore, $$ \lim\limits_{n\to\infty}g_n(x_n)=\lim\limits_{n\to\infty}g_n(x)=\lim\limits_{n\to\infty}h_n(x)=g(x), $$ which completes the proof. \end{proof}
Now we need the following general construction of separately continuous maps with the given diagonal from \cite{MSF}.
\begin{theorem}\label{th:1.1} Let $X$ be a topological space, $Z$ be a Hausdorff space, $(Z_1,\lambda)$ be an equiconnected subspace of $Z$, $g:X\to Z$, $(G_n)_{n=0}^{\infty}$ and $(F_n)_{n=0}^{\infty}$ be sequences of functionally open sets $G_n$ and functionally closed sets $F_n$ in $X^2$, let $(\varphi_n)_{n=1}^{\infty}$ be a sequence of separately continuous functions $\varphi_n:X^2\to [0,1]$, $(g_n)^{\infty}_{n=1}$ be a sequence of continuous mappings $g_n:X\to Z_1$ satisfying the conditions \begin{enumerate}
\item[1)] $G_0=F_0=X^2$ and $\Delta=\{(x,x):x\in X\}\subseteq G_{n+1}\subseteq F_n\subseteq G_n$ for every $n\in\mathbb N$;
\item[2)] $X^2\setminus G_n\subseteq\varphi_n^{-1}(0)$ and $F_n\subseteq \varphi_n^{-1}(1)$ for every $n\in\mathbb N$;
\item[3)] $\lim\limits_{n\to\infty}\lambda(g_n(x_n),g_{n+1}(x_n),t_n)=g(x)$ for arbitrary $x\in X$, any sequence $(x_n)^{\infty}_{n=1}$ of points $x_n\in X$ with $(x_n,x)\in F_{n-1}$ for all $n\in\mathbb N$, and any sequence $(t_n)_{n=1}^{\infty}$ of points $t_n\in[0,1]$. \end{enumerate} Then the mapping $f:X^2\to Z$,
\begin{equation*}
f(x,y)=\left\{\begin{array}{ll}
\lambda(g_n(x),g_{n+1}(x),\varphi_n(x,y)), & (x,y)\in F_{n-1}\setminus F_n\\
g(x), & (x,y)\in E=\bigcap\limits_{n=1}^{\infty} G_n
\end{array}
\right.
\end{equation*} is separately continuous. \end{theorem}
\begin{theorem}\label{th:7.4}
Let $X$ be a paracompact space, $(Z_s)_{s\in S}$ be a family of equiconnected metrizable spaces $(Z_s,\lambda_s)$, $a\in Z_S$ and $g\in {\rm B}_1(X,\boxdot_{s\in S}Z_s)$. Then there exists a separately continuous map $f:X^2\to \boxdot_{s\in S}Z_s$ with the diagonal $g$. \end{theorem}
\begin{proof} We use Proposition~\ref{pr:7.3} and choose a sequence $(g_n)_{n\in\omega}$ of continuous maps $g_n:X\to \boxdot_{s\in S}Z_s$ and a sequence $(W_n)^{\infty}_{n=1}$ of functionally open subsets of $X^2$ which satisfy conditions 1) and 2) of Proposition~\ref{pr:7.3}. Let $G_0=F_0=X^2$. Paracompactness of $X$ implies that we can choose sequences $(G_n)_{n=1}^{\infty}$ and $(F_n)_{n=1}^{\infty}$ of functionally open and functionally closed sets such that $$ \{(x,x):x\in X\}\subseteq G_{n+1}\subseteq F_n\subseteq G_n\subseteq \bigcap\limits_{k=1}^{n+2}W_k $$ for every $n\in\omega$. Now we take a sequence $(\varphi_n)_{n=1}^{\infty}$ of continuous functions $\varphi_n:X^2\to [0,1]$ with $X^2\setminus G_n=\varphi_n^{-1}(0)$ and $F_n= \varphi_n^{-1}(1)$ for every $n\in\mathbb N$. It remains to apply Theorem~\ref{th:1.1}. \end{proof}
A topological space $X$ is {\it strongly countably dimensional} if there exists a sequence $(X_n)_{n=1}^{\infty}$ of sets $X_n\subseteq X$ such that $X=\bigcup_{n=1}^{\infty}X_n$ and ${\rm dim} X_n< n$ for every $n\in\mathbb N$, where by ${\rm dim\,}Y$ we denote the \v{C}ech-Lebesgue dimension of $Y$.
\begin{corollary}\label{cor:7.5} Let $X$ be a connected strongly countably dimensional metrizable space, $(Z_s)_{s\in S}$ be a family of metrizable equiconnected spaces $(Z_s,\lambda_s)$ and $g:X\to \Box_{s\in S}Z_s$. Then the following conditions are equivalent: \begin{enumerate} \item[(i)] $g\in B_{1}(X,Z)$;
\item[(ii)] there exists a separately continuous map $f:X^2\to Z$ with the diagonal $g$. \end{enumerate} \end{corollary}
\begin{proof} $(i)\Rightarrow (ii)$. Let $X\ne\emptyset$ and $(g_n)_{n=1}^{\infty}$ be a sequence of continuous maps $g_n:X\to \Box_{s\in S}Z_s$ which converges to $g$ pointwise on $X$. According to Proposition \ref{pr:7.0}, for every $n\in\mathbb N$ there exists $a_n\in Z_S$ such that $g_n(X)\subseteq \sigma(a_n)$. We fix a point $x_0\in X$ and put $a=g(x_0)$. Since $\lim\limits_{n\to\infty}g_n(x_0)=g(x_0)$, there exists $n_0\in\mathbb N$ such that $g_n(x_0)\in \sigma(a)$ for every $n\geq n_0$. Thus, $\sigma(a_n)=\sigma(a)$ for every $n\geq n_0$. Therefore, $g(X)\subseteq \sigma(a)$ and $g\in {\rm B_1}(X,\boxdot_{s\in S}Z_s)$. It remains to use Theorem~\ref{th:7.4}.
$(ii)\Rightarrow (i)$. It is easy to see that according to Proposition \ref{pr:7.0}, for every separately continuous map $f:X^2\to \Box_{s\in S}Z_s$ there exists $a\in Z_S$ such that $f(X^2)\subseteq \sigma(a)$. By Corollary 5.4 from \cite{KMM} $f$ is a Baire-one map and, therefore, so is $g$. \end{proof}
\begin{remark}
{\rm
\begin{enumerate}
\item We use only paracompactness and connectedness of $X$ in the proof of implication $(i)\Rightarrow (ii)$.
\item In all previous results concerning the construction of a separately continuous map with a given Baire-one diagonal, the properties of the range spaces allowed the case of a topological domain space $X$ to be reduced to the metrizable case.
\end{enumerate} } \end{remark}
The last remark motivates the following question.
\begin{question}
Let $X$ be a topological space, $(Z_s)_{s\in S}$ be a family of equiconnected metrizable spaces $(Z_s,\lambda_s)$, $a\in Z_S$ and $g\in {\rm B}_1(X,\boxdot_{s\in S}Z_s)$. Does there exist a separately continuous map $f:X^2\to \boxdot_{s\in S}Z_s$ with the diagonal $g$? \end{question}
\end{document}
Determination of the Appropriate Kernel Structure in Electroencephalography Analysis of Alcoholic Subjects
Omer Akgun
Department of Computer Engineering, Marmara University, Istanbul 34722, Turkey
[email protected]
Alcoholism is one of the major health problems in the world. The organ most affected by alcohol is the brain. It has been shown that alcohol causes neuronal loss in the brain and reduces brain blood flow and oxygen use. Electroencephalography (EEG) is a method that measures the instantaneous electrical activity of the brain, and valuable information can be obtained by observing the biological effects of alcohol through EEG. As methods of signal processing and analysis have evolved, EEG signals have attracted the attention of researchers in this field. In this study, time-frequency analysis methods were applied to EEG signals obtained from normal and alcoholic subjects. For this purpose, the Cohen class of distributions was examined. Ambiguity function analysis, which underlies this class, was applied to the signals. The widely used Wigner-Ville distribution, obtained from the kernel structure of the class, was then examined. Since its resolution proved inadequate, new time-frequency distributions obtained by convolution with 4 types of kernel functions (nonseparable, separable, Doppler independent, lag independent) were analyzed. As a result, it was shown that the resolution of time-frequency distributions can be improved with proper kernel functions, and the changes that alcohol causes in brain function were thereby revealed.
alcoholic, EEG, ambiguity function, Wigner Ville distribution, nonseparable kernel, separable kernel, Doppler independent kernel, lag independent kernel
One of the most common abnormalities seen in alcoholics is the shrinkage of the brain. Drinking alcohol causes a significant reduction in brain weight. In cortical neuron counts, neuron loss has been shown to be significant, especially in the frontal cortex [1, 2].
Studies conducted on chronic alcoholics have shown that the frontal cortex is one of the regions where blood flow and oxygen use decrease and the regional decrease in brain blood flow is the greatest [1, 3].
In alcoholics, when the amount of thiamine in the brain decreases, the transketolase enzyme cannot complete its task and glucose cannot pass through the citric acid cycle, being broken down into lactic acid instead. Therefore, enough energy cannot be obtained from glucose, which is the most important energy source of the brain.
GABA (Gamma-Aminobutyric acid) is the most important inhibitory neurotransmitter of the central nervous system and performs hyperpolarization by increasing the passage of chlorine into neurons. Alcohol depresses neurons by stimulating GABA, forming a complex with the GABA receptor, and inhibiting the release of noradrenaline from the locus coeruleus. This event can explain the depressing effect of alcohol by GABA mechanism [1, 4, 5].
Electroencephalography (EEG) is considered valuable as a non-invasive electrophysiological method in the investigation of the biological aspect of alcoholism [6]. EEG is an examination method in which spontaneous electrical activity of the brain is recorded through electrodes. This examination reflects the functional state of the brain at the time rather than its structural features. Therefore, despite improvements in structural imaging methods (CT, MRI), it still maintains its importance. In particular, in clinical pictures where there is no pathological evidence reflected by structural examination methods, the importance of the EEG increases even more [7].
The postsynaptic potentials that constitute the source of the EEG are collected in the cortex, spread through the structures surrounding the brain to the scalp with hair and recorded from the scalp via metal electrodes. The location of each electrode covered with a conductive material is determined by standard measurements performed on nasion, inion, right and left preauricular points and they are placed according to the international 10-20 system (Figure 1) [7-9].
Figure 1. Layout of electrodes in EEG
The EEG enables the examination of changes in cognitive activity at the millisecond level. As neurons process information and communicate with one another, they generate electrical discharges with a time resolution of milliseconds. When considered in terms of the oscillative approach, these electrical discharges that recur over time are called EEG frequencies. Traditionally, the band between 0.5-3.5 Hz is called delta (δ), between 3.5-7 Hz theta (θ), between 8-14 Hz alpha (α), between 15-30 Hz beta (β), and between 30-48 Hz gamma (γ). The traditional approach states that delta occurs during sleep, theta during superficial sleep, alpha when the eyes are closed but the subject is not sleeping, and beta when the eyes are open, the subject is awake, and during a cognitive task or muscle activity. The oscillative approach also examines these frequencies according to the lower frequencies. According to the oscillative approach, various sensory and cognitive functions are associated with these frequencies. For example, delta is associated with memory, theta with attention, alpha with memory and attention, beta with cortical arousal, and gamma with the processing of sensory and cognitive information [10, 11].
The most common findings in alcoholics are increases in theta and beta activity in the EEG. Findings on alpha activity are not consistent. Changes in beta activity amplitude have also frequently been observed in close relatives of alcoholics. These findings indicate that some biological factors play a role in the development of alcoholism [6]. Studies have reported a relationship between alcohol dependence and low-voltage EEG. In another study, delta and theta slowing, not observed in individuals with normal EEG and with or without accompanying alpha slowing, was identified in alcohol addicts. It was suggested that delta and theta slowing might be a specific indicator of a disorder in brain function. It is still not clear whether the decreased alpha activity seen in alcohol addicts develops as a result of alcohol addiction, or whether it is a risk factor in the development of alcohol addiction [12-14].
As the techniques of signal analysis have developed, EEG signals have attracted the attention of experts in this field and become the subject of a variety of research. In order to infer detailed information from signals whose frequency changes over time, it is necessary to examine these signals in both the time and frequency domains simultaneously, not just in one of them. Criteria such as how the frequency of a signal changes over time, the speed of change, and time-frequency bandwidths give us detailed information about the characteristics of the signal. Examining both domains at the same time (that is, time-frequency signal processing), which allows us to learn more about the signal, is a fundamental research topic for all these application areas [15, 16]. In such analyses, the Short-Time Fourier Transform (STFT) and Wigner-Ville Distribution (WVD) are generally used. STFT is a linear and relatively simple transformation. However, in STFT good resolution requires an appropriate window selection, and high resolution cannot be achieved on both the time and frequency axes at the same time. On the other hand, WVD is a distribution that provides fairly high resolution in addition to its many other good features. However, due to its quadratic structure, it contains cross-terms alongside the main signal components that we want to identify [17, 18]. These cross-terms degrade the identifiability of the signal. For this reason, the class of distributions called the Cohen class, which is a generalization of WVD, is used. The goal in such distributions is to suppress cross-terms by designing the kernel of the Cohen class distribution and to obtain a time-frequency distribution with high resolution [17, 19]. Cohen class distributions, such as the Wigner-Ville and Choi-Williams distributions, are defined in a weighted integral form, and these weighting functions are called kernels [20]. In this study, using WVD, the effect of 4 types of kernels (nonseparable, separable, Doppler independent, lag independent) on cross-terms was observed.
2. Mathematical Basis and Application
The data used in the study was taken from the UCI KDD Archive. One-second recordings at a 256 Hz sampling frequency were obtained from 64 electrodes placed on the scalps of control and alcoholic individuals. The control and alcoholic subjects were shown a picture as a stimulus while the recordings were taken [21]. In this study, the signals obtained from the C4 electrode (Figure 1) were analyzed.
In Figure 2, the basic approach used in the study is given as a block diagram. The signals taken from the subjects via EEG are transferred to the computer environment. Then, with the MATLAB program, the distributions with NSK (Nonseparable Kernel), SK (Separable Kernel), DI (Doppler-independent kernel) and LI (Lag-independent kernel) were obtained, and the resulting distributions were analyzed.
Figure 2. Block diagram of the study
2.1 EEG signal with running minimum and maximum
(a) Control subject
(b) Alcoholic subject
Figure 3. EEG graphs with running Min and Max
The EEG graph of the control subject oscillates within a minimum-maximum range of (18, -22). Compared to the control subject, the EEG graph of the alcoholic subject shows oscillations of higher amplitude, in the range of 30 and -20. The most noticeable difference from the control is the marked increase in the number of peaks. In other words, the period of the alcoholic subject's signal becomes shorter (Figure 3).
2.2 Spectral amplitudes of the EEG signal
In the EEG amplitude spectrum of the control subject, two main components, at the origin and at 25 Hz, are present with high amplitudes (approximately 1180 and 650 units). The components terminate at around 55 Hz. The most notable feature of the EEG amplitude spectrum of the alcoholic subject is that a large number of components spread over the entire frequency plane. In addition, the amplitudes of the components are much lower than in the control (maximum 280 units). Briefly, the control signal is a regular signal with a bandwidth of about 50 Hz, while the alcoholic signal has a complex spectrum that spreads over the whole plane (Figure 4).
Figure 4. EEG amplitude spectrums
2.3 Cohen distribution, ambiguity function and WVD
The most useful time-frequency distributions (TFDs) are those called quadratic or bilinear (QTFDs). The main member of this class is the WVD, and all other TFDs (such as Choi-Williams, Zhao-Atlas-Marks, Born-Jordan) are smoothed versions of the WVD. All these TFDs are members of Cohen's bilinear class. A Cohen-class distribution is a two-dimensional Fourier transform (Eq. (1)) of a weighted version of the symmetric ambiguity function (AF) of the signal to be analyzed.
$C(t, f)=\iint_{-\infty}^{\infty} A(v, \tau) \Phi(v, \tau) e^{-j 2 \pi v t-j 2 \pi f \tau} d v d \tau$ (1)
The AF is defined as follows.
$A(v, \tau)=\int_{-\infty}^{\infty} x\left(u+\frac{\tau}{2}\right) x^{*}\left(u-\frac{\tau}{2}\right) e^{j v u} d u$ (2)
In Eqs. (1) and (2), t is time, f is frequency, τ is the time lag, v is the frequency lag (Doppler), and u is an additional integration time variable. The weighting function Φ(v,τ) is called the kernel of the distribution.
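To make Eq. (2) concrete, the following is a minimal Python/NumPy sketch of a discrete approximation of the AF. It is illustrative only (the analyses in this study were performed in MATLAB), the half-sample lag τ/2 is approximated by integer lags, and the e^{jvu} convention of Eq. (2) is realized with an inverse DFT over u.

```python
import numpy as np

def ambiguity_function(x):
    """Discrete approximation of the symmetric ambiguity function of Eq. (2).

    x is assumed to be a complex-valued (analytic) signal of length N.
    Rows index the frequency lag (Doppler) v, columns index the time lag tau.
    Illustrative O(N^2) implementation only.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    lags = np.arange(-(N // 2), N // 2 + 1)
    A = np.zeros((N, len(lags)), dtype=complex)
    u = np.arange(N)
    for i, tau in enumerate(lags):
        # instantaneous autocorrelation x(u + tau) x*(u - tau), zero outside the record
        r = np.zeros(N, dtype=complex)
        valid = (u + tau >= 0) & (u + tau < N) & (u - tau >= 0) & (u - tau < N)
        r[valid] = x[u[valid] + tau] * np.conj(x[u[valid] - tau])
        # the e^{jvu} kernel of Eq. (2) corresponds to an inverse DFT over u
        A[:, i] = np.fft.ifft(r) * N
    return A
```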
Figure 5. Ambiguity functions of the EEG signals
In the AF of the control subject, there are three bar formations extending along the time axis in the range of 100-160 Hz. The AF of the alcoholic subject, on the other hand, is notable for its small, multi-part formations. Here, the control subject's AF shows a uniform distribution, while the alcoholic AF spreads as a distorted distribution (Figure 5.a, b). AFs are symmetric functions.
The properties of a bilinear TFD are determined by its kernel function. Because the AF is a bilinear function of the signal, unintended cross-terms occur. This causes the resolution of the TFD to decrease and its interpretation to become difficult. To prevent this and suppress unwanted terms, a kernel is selected for weighting the AF.
The kernel of WVD, which is the simplest and most important of the Cohen class bilinear TFDs, is Φ(v,τ)=1 and is expressed as in Eq. (3) [22-24].
$W V D(t, f)=\int_{-\infty}^{\infty} x\left(t+\frac{\tau}{2}\right) x^{*}\left(t-\frac{\tau}{2}\right) e^{-j 2 \pi f \tau} d \tau$ (3)
The WVD does not produce cross-terms when the signal x(t) in Eq. (3) is single-component, whereas when there is a multi-component signal such as x(t)=s1(t)+s2(t), the WVD expression becomes as follows due to its quadratic structure.
$W V D_{x}(t, f)=W V D_{s_{1}}(t, f)+W V D_{s_{2}}(t, f)+$$2 \operatorname{Re}\left\{W V D_{s_{1}, s_{2}}(t, f)\right\}$ (4)
The $2 \operatorname{Re}\left\{W V D_{s_{1}, s_{2}}(t, f)\right\}$ component in Eq. (4) distorts the intelligibility of the distribution by producing a cross-term [15, 25, 26].
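The following is a minimal Python/NumPy sketch of a discrete approximation of the WVD of Eq. (3); it is illustrative only and is not the MATLAB implementation used here. Applied to a two-component test signal, it reproduces the cross-term of Eq. (4) midway between the two auto-terms; the test frequencies in the commented usage lines are arbitrary.

```python
import numpy as np

def wigner_ville(x):
    """Discrete approximation of the WVD of Eq. (3) for an analytic signal x.

    Rows index time, columns index frequency; the half-sample lag tau/2 is
    approximated by integer lags. Illustrative implementation only.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)          # keep n + tau and n - tau inside the record
        taus = np.arange(-tau_max, tau_max + 1)
        r = np.zeros(N, dtype=complex)
        r[taus % N] = x[n + taus] * np.conj(x[n - taus])
        W[n, :] = np.fft.fft(r).real
    return W

# Two-component test signal, as in Eq. (4): auto-terms appear at f1 and f2 and an
# oscillating cross-term appears midway between them.
# fs, f1, f2 = 256.0, 25.0, 60.0
# t = np.arange(256) / fs
# x = np.exp(2j * np.pi * f1 * t) + np.exp(2j * np.pi * f2 * t)
# W = wigner_ville(x)
```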
Figure 6. WVD of the EEG signals
In the WVD of the control subject, three illuminated regions extend along the time axis in the region of 0.10 and 22 Hz. In the WVD of the alcoholic subject, on the other hand, there are multi-part illuminated regions that spread over the entire plane. It appears that the cross-terms mentioned above distort the resolution of the WVD and mask the main components. Although the AF, which lies within the structure of the WVD, shows itself in the distribution, the resolution of the main components is poor (Figure 6).
2.4 TFDs with kernel structure
To suppress cross-terms, a new time-frequency distribution is created by convolving the WVD with a kernel function (Eq. (5)).
$\rho_{x}(t, f)=W V D_{x}(t, f) * * \gamma(t, f)$ (5)
In Eq. (5), γ(t,f) is the kernel function of the WVD and ** denotes two-dimensional convolution [27-29]. In addition, Eq. (5) can be expressed in any of three other domains (time-lag (t, τ), Doppler-frequency (v, f), and Doppler-lag (v, τ)) (Figure 7) [30].
The kernel structures used in quadratic TFDs are shown in Figure 8 [30].
$g(t, \tau)$ denotes the nonseparable kernels, the most general form of kernel structure. They are filter structures showing an exponential distribution (Eq. (6)) [31]. In this study, the function $\frac{\sqrt{\pi \sigma}}{|\tau|} e^{-\pi^{2} \sigma t^{2} / \tau^{2}}$ was used as a nonseparable kernel.
Figure 7. The transition of time-frequency presentation with other dimensions
Figure 8. Kernel types used in QTFDs
Figure 9. EEG signals' TFD with nonseparable kernel
In the control subject's TFD with the nonseparable kernel, the 3 main components masked in the WVD (Figure 6(a)) have become observable. In the alcoholic subject's TFD, some high-frequency components that were completely masked in the WVD have become perceivable, although their resolution is still very poor (Figure 9).
A simple way to design kernel filters for QTFDs is to consider the special case of a separable kernel (Eqns. (6)-(9)) [30]. In practice, in a separable kernel design, the positivity property is neglected for the sake of higher resolution and so that the TFD can be interpreted as a (t, f) distribution of energy. A nonconstant separable kernel can be designed as an LI or a DI kernel (a TFD without WVD, with or without amplitude scaling), or as a separable kernel that is neither LI nor DI [32].
In this study, the function $|\tau|^{\beta} \cosh ^{-2 \beta} t$ was used as a separable kernel.
$g(v, \tau)=G_{1}(v) g_{2}(\tau)$ (6)
$G_{1}(v)=\mathcal{F}\left\{g_{1}(t)\right\}$ (7)
$G_{2}(f)=\mathcal{F}\left\{g_{2}(\tau)\right\}$ (8)
$\rho_{x}(t, f)=g_{1}(t) * W V D_{x}(t, f) * G_{2}(f)$ (9)
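A minimal Python sketch of the smoothing in Eq. (9) is given below, assuming SciPy is available; it is illustrative only, and the window choices in the commented usage lines are arbitrary and are not the kernel functions used in this study. Note that the DI and LI special cases discussed below correspond to smoothing along only one of the two axes.

```python
import numpy as np
from scipy.signal import fftconvolve

def separable_kernel_tfd(wvd, g1_t, G2_f):
    """Smooth a WVD with a separable kernel, following Eq. (9):
    rho(t, f) = g1(t) convolved with WVD(t, f) along time, then with G2(f)
    along frequency.

    wvd  : (n_time, n_freq) array, e.g. the output of a discrete WVD routine
    g1_t : 1-D smoothing window applied along the time axis
    G2_f : 1-D smoothing window applied along the frequency axis
    """
    smoothed_t = fftconvolve(wvd, g1_t[:, None], mode="same")   # time-axis smoothing
    return fftconvolve(smoothed_t, G2_f[None, :], mode="same")  # frequency-axis smoothing

# Illustrative usage with normalized Hann windows (arbitrary lengths):
# g1 = np.hanning(31); g1 /= g1.sum()
# G2 = np.hanning(15); G2 /= G2.sum()
# rho = separable_kernel_tfd(W, g1, G2)
```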
Figure 10. EEG signals' TFD with separable kernel
The three main components at low frequencies were observed very clearly in the control subject's TFD with the separable kernel. The main components that spread across the entire plane in the TFD of the alcoholic subject and emerged especially at high frequencies were able to be monitored with high resolution (Figure 10).
The Doppler-independent (DI) kernel is a special case of the separable kernel obtained using a constant $G_{1}(v)$ (Eqns. (10)-(13)) [30]. A TFD with a DI kernel provides the realness, time marginal, time support and IF features. However, the DI kernel does not display the frequency marginal, frequency support, or spectral delay characteristics, despite smoothing along the frequency axis [32].
In this study, the function $\delta(t) w(\tau)$ was used as the DI kernel, where $w(\tau)$ is a real and even window function.
$G_{1}(v)=1$ (10)
$g(v, \tau)=g_{2}(\tau)$ (11)
$g_{1}(t)=\delta(t)$ (12)
$\rho_{x}(t, f)=G_{2}(f) * W V D_{x}(t, f)$ (13)
Figure 11. EEG signals' TFD with Doppler-independent kernel
In the control subject's TFD with the DI kernel, the main components at low frequencies are seen at a lower resolution than in the TFD with the separable kernel. The same is true, again with lower resolution, for the TFD of the alcoholic subject (Figure 11).
The Lag-independent (LI) kernel is another special case of the separable kernel, obtained using a constant $g_{2}(\tau)$ (Eqns. (14)-(17)) [30]. A TFD with an LI kernel can fulfil the realness, frequency marginal, frequency support and spectral delay features. However, the LI kernel does not provide the time marginal, time support, or IF features, despite smoothing along the time axis [32].
In this study, the function $\frac{\cosh ^{-2 \beta} t}{\int_{-\infty}^{\infty} \cosh ^{-2 \beta} \xi d \xi}$ was used as the LI kernel.
$g_{2}(\tau)=1$ (14)
$g(v, \tau)=G_{1}(v)$ (15)
$\rho_{x}(t, f)=g_{1}(t) * W V D_{x}(t, f)$ (17)
While in the control subject's TFD with the LI kernel the resolution of the 3 low-frequency main components is slightly better than with the DI kernel structure, it is much poorer than with the separable kernel structure. The same holds for the TFD of the alcoholic subject (Figure 12).
In Table 1, the TFDs used in the study are presented together with kernel functions and their performance.
The success of the methods used in this study was evaluated according to how well the main components in the distributions could be resolved. Accordingly, the most successful resolution was observed in the separable kernel distribution. Three main components centred at (0, 225), (0.08, 50) and (0.09, 195) in the control subject, and more than 10 major components distributed over the whole plane in the alcoholic subject (e.g., components with central coordinates (0.2, 172), (0.35, 73), ...), can be detected with good resolution compared to the other kernel distributions (Figure 10).
Figure 12. EEG signals' TFD with lag-independent kernel
Table 1. TFDs used in the study
Kernel type: Kernel Function; Resolution Performance
Nonseparable Kernel: $\frac{\sqrt{\pi \sigma}}{|\tau|} e^{-\pi^{2} \sigma t^{2} / \tau^{2}}$
Separable Kernel: $|\tau|^{\beta} \cosh ^{-2 \beta} t$
DI Kernel: $\delta(t) w(\tau)$
LI Kernel: $\frac{\cosh ^{-2 \beta} t}{\int_{-\infty}^{\infty} \cosh ^{-2 \beta} \xi d \xi}$
Alcoholism is one of the most common health problems. The brain is one of the organs it damages most. It is possible to detect these biological changes by using EEG. In this study, EEG signals were obtained from normal and alcoholic subjects, time-frequency analyses were applied to these signals, and the differences were determined with the best possible resolution.
First, the EEG signals with running minimum and maximum were examined. It was determined that the excursions of the alcoholic subject's signal were larger and that its period was much shorter than normal. In contrast to the normal signal, whose amplitude spectrum consists of two main components, the alcoholic subject has numerous spectral components spread over the entire plane.
The Cohen class TFD structure was used in this study. This structure consists of the AF and a kernel function. When the ambiguity functions of the signals were examined, the normal signal showed a smooth bar structure, whereas alcohol showed its effect through a multi-part structure.
Kernel functions are filter structures that affect the resolution of the TFD. The structure in which the kernel function in the Cohen relation equals 1 is the WVD, which is widely used and has many versions. In the WVD analysis of the signals, the main components at low frequencies in the control subject and the components spreading over the entire plane in the alcoholic subject are barely noticeable, with very poor resolution.
To solve the resolution problem, four different kernels (nonseparable, separable, DI, LI) were tested for the WVD structure. In the nonseparable TFD analysis, the main components (3 main components at low frequencies) for the control subject became more noticeable. For the alcoholic subject, on the other hand, the resolution was worse than normal. In the separable TFD analysis, cross-terms were better suppressed, and the main components for the normal and alcoholic subjects were clearly revealed. The TFD of the control subject's EEG had 3 main components at low frequencies, and the TFD of the alcoholic subject's EEG had many major components, especially at high frequencies, that spread across the entire plane. The effect of alcohol was seen in these components that spread all over the plane. In this study, it was also found that in the DI and LI TFD structures the resolution was poor compared to the separable TFD structure. Consequently, it can be said that in the TFD analysis of normal and alcoholic subjects' EEGs, the optimal structure is a TFD with a separable kernel.
[1] Eggleton, M.G. (1941). The effect of alcohol on the central nervous system. British Journal of Psychology, 32(1): 52-61.
[2] Zahr, N.M., Pfefferbaum, A. (2017). Alcohol's effects on the brain: Neuroimaging results in humans and animal models. Alcohol Research: Current Reviews, 38(2): 1-24.
[3] Erdogan, E., Vardar, G., Altun, D., Fırat, M.F. (2018). Assessment of regional cerebral blood flow in patients with early and late onset alcohol dependence: SPECT study. Journal of Surgery and Medicine, 2(3): 257-261. https://doi.org/10.28982/josam.420428
[4] Prisciandaro, J.J., Schacht, J.P., Brenner, H.M., Anton, R.F., Prescot, A.P., Renshaw, P.F., Brown, T.R. (2019). Intraindividual changes in brain GABA, glutamate, and glutamine during monitored abstinence from alcohol in treatment-naive individuals with alcohol use disorder. Addiction Biology, e12810. https://doi.org/10.1111/adb.12810
[5] Koulentaki, M., Kouroumalis, E. (2018). GABA(A) receptor polymorphisms in alcohol use disorder in the GWAS era. Psychopharmacology, 235(6): 1845-1865. https://doi.org/10.1007/s00213-018-4918-4
[6] Karaaslan, M.F., Orhan, F.O. (2005). EEG and ERP in alcoholism. Turkiye Klinikleri J Int Med Sci., 1(47): 18-27.
[7] Baykan, B., Altındağ, E., Available online: http://www.itfnoroloji.org/semi2/eeg.htm, accessed on 28 February 2020.
[8] Akbari, H., Esmaili, S.S. (2020). A novel geometrical method for discrimination of normal, interictal and ictal EEG signals. Traitement du Signal, 37(1): 59-68. https://doi.org/10.18280/ts.370108
[9] Milnik, V. (2009). Instruction of electrode placement to the international 10-20-system. Neurophysiologie-Labor, 31(1): 1-35. https://doi.org/10.1016/j.neulab.2008.12.002
[10] Bayazıt, T.O., Bozkurt, M.A., Güneş, M.E., Hatipoğlu, S.S. (2018). The neuroelectric activity of the brain in extracorporeal circulation: Preliminary results. Medical Journal of Bakirkoy, 14(4): 427-432. https://doi.org/10.4274/BTDMJB.20180728081335
[11] Barry, R.J., De Blasio, F.M., Karamacoska, D. (2019). Data-driven derivation of natural EEG frequency components: An optimised example assessing resting EEG in healthy ageing. Journal of Neuroscıence Methods, 321: 1-11. https://doi.org/10.1016/j.jneumeth.2019.04.001
[12] Yavaş, G., Arıkan, Z., Bilir, E. A preliminary study on EEG disorder in children of alcohol addicts. Gazi University Faculty of Medicine, Department of Psychiatry, Available online: www.alopsikolog.net, accessed on 28 February 2020.
[13] Gopan, K.G., Sinha, N., Jayagopi, D.B. (2019). Alcoholic EEG analysis using Riemann geometry based framework. 27th European Signal Processing Conference (EUSIPCO), A Coruna, Spain. https://doi.org/10.23919/EUSIPCO.2019.8902506
[14] Cao, R., Deng, H., Wu, Z., Liu, G., Guo, H., Xiang, J. (2017). Decreased synchronization in alcoholics using EEG. IRBM, 38(2): 63-70. https://doi.org/10.1016/j.irbm.2017.02.002
[15] Aldırmaz, S. (2012). New communication and adaptive system designs by time frequency distributions and fractional fourier transform. PhD. Thesis, Yildiz Technical University, Istanbul, Turkey.
[16] Boashash, B. (2015). Time-frequency signal analysis and processing. Academic Press, London, UK.
[17] Deprem, Z., Cetin, A.E. (2015). Kernel estimation for time-frequency distributions using epigraph set of L1-NORM. 23rd European Signal Processing Conference (EUSIPCO), Nice, France, pp. 1491-1495. https://doi.org/10.1109/EUSIPCO.2015.7362632
[18] Flandrin, P. (2018). Explorations in Time-Frequency Analysis. Cambridge University Press, Cambridge, UK.
[19] Orovic, L., Stankovic, S., Jokanovic, B. (2013). A suitable hardware realization for the Cohen class distributions. IEEE Trans. Circuits and Systems II, 60(9): 607-611. https://doi.org/10.1109/TCSII.2013.2273724
[20] Zhang, B., Sato, S. (1994). A time-frequency distribution of Cohen's class with a compound kernel and its application to speech signal processing. IEEE Transactions on Signal Processing, 42(1): 54-64. https://doi.org/10.1109/78.258121
[21] Available online: http://kdd.ics.uci.edu/databases/eeg/eeg.data.html accessed on 28 February 2020.
[22] Thomas, M., Jacob, R., Lethakumary, B. (2012). Comparison of WVD based time-frequency distributions. International Conference on Power, Signals, Controls and Computation Power, (EPSCICON), Thrissur, Kerala, India. https://doi.org/10.1109/EPSCICON.2012.6175242
[23] Thomas, M., Lethakumary, B., Jacob, R. (2012). Performance comparison of multi-component signals using WVD and Cohen's class variants. International Conference on Computing, Electronics and Electrical Technologies (ICCEET), Kumaracoil, India, pp. 717-722. https://doi.org/10.1109/ICCEET.2012.6203869
[24] Wang, H.B., Long, J.B., Zha, D.F. (2013). Pseudo Cohen time-frequency distributions in infinite variance noise environment. Applied Mechanics and Materials, 475(1): 253-258. https://doi.org/10.4028/www.scientific.net/AMM.475-476.253
[25] Aiordachioaie, D., Popescu, T.D. (2017). A method to detect and filter the cross terms in the Wigner-Ville distribution. International Symposium on Signals, Circuits and Systems (ISSCS), Iasi, Romania. https://doi.org/10.1109/ISSCS.2017.8034878
[26] Jabczyński, J. Gontar, P., Gorajek, L. (2020). Wigner transform approach to dynamic-variable partially coherent laser beam characterization. Bulletin of the Polish Academy of Sciences: Technical Sciences, 68(1): 141-146. https://doi.org/10.24425/bpasts.2020.131840
[27] Mesbah, M., O'Toole, J.M., Colditz, P.B., Boashash, B. (2012). Instantaneous frequency based newborn EEG seizure characterisation. Eurasip Journal on Advances in Signal Processing. https://doi.org/10.1186/1687-6180-2012-143
[28] O'Toole, J.M., Mesbah, M., Boashash, B., Colditz, P. (2007). A new neonatal seizure detection technique based on the time-frequency characteristics of the electroencephalogram. 9th International Symposium on Signal Processing and Its Applications, ISSPA, Sharjah, United Arab Emirates. https://doi.org/10.1109/ISSPA.2007.4555347
[29] O'Toole, J.M., Mesbah, M., Boashash, B. (2007). A computationally efficient implementation of quadratic time-frequency distributions. 9th International Symposium on Signal Processing and Its Applications, ISSPA, Sharjah, United Arab Emirates. https://doi.org/10.1109/ISSPA.2007.4555346
[30] Boashash, B. (2015). Theory and Design of High-Resolution Quadratic TFDs. In Time-Frequency Signal Analysis and Processing: A Comprehensive Reference; B. Boashash, Elsevier Inc., London, UK.
[31] Boashash, B. (2015). Heuristic Formulation of Time-Frequency Distributions. In Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, Elsevier Inc., London, UK.
[32] Boashash, B., Putland, G.R. (2015). Design of high-resolution quadratic TFDs with separable kernels. Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, B. Boashash, Elsevier Inc., London, UK.
Further Hyperbolic Functions
Characteristics of hyperbolas
Domain and range of hyperbolas
Graphing hyperbolas with translations
Find the equation of a hyperbola
Manipulation of Hyperbolic Equations
Applications of hyperbolas
If a hyperbola is translated horizontally or vertically from the center, the parameters $a$, $b$, and $c$ still have the same meaning. However, we must take into account that the center of the hyperbola has moved.
Graph of a hyperbola centred at $\left(h,k\right)$
Given the following definitions for $h$ and $k$,
$h$ denotes the translation in the horizontal direction from $\left(0,0\right)$.
$k$ denotes the translation in the vertical direction from $\left(0,0\right)$.
and remembering the values,
$a$ is the distance from the center to a vertex
$c$ is the distance from the center to a focus
$b$ can be found using the relationship $b^2=c^2-a^2$
the table below summarizes the characteristics of a translated hyperbola in both orientations:
Horizontal Major Axis:
Standard form: $\frac{\left(x-h\right)^2}{a^2}-\frac{\left(y-k\right)^2}{b^2}=1$
Center: $\left(h,k\right)$
Foci: $\left(h+c,k\right)$ and $\left(h-c,k\right)$
Vertices: $\left(h+a,k\right)$ and $\left(h-a,k\right)$
Transverse axis: $y=k$
Conjugate axis: $x=h$
Asymptotes: $y-k=\pm\frac{b}{a}(x-h)$

Vertical Major Axis:
Standard form: $\frac{\left(y-k\right)^2}{a^2}-\frac{\left(x-h\right)^2}{b^2}=1$
Center: $\left(h,k\right)$
Foci: $\left(h,k+c\right)$ and $\left(h,k-c\right)$
Vertices: $\left(h,k+a\right)$ and $\left(h,k-a\right)$
Transverse axis: $x=h$
Conjugate axis: $y=k$
Asymptotes: $y-k=\pm\frac{a}{b}(x-h)$
Essentially, the information is the same as for the central hyperbola, but the values of $h$ and $k$ are added to the $x$ and $y$-values (respectively) for each characteristic.
Once we establish certain information about the hyperbola, we can use the relationships summarized in the tables to determine the graph of the hyperbola.
The hyperbola $\frac{x^2}{25}-\frac{y^2}{100}=1$ when translated $3$ units to the right and $4$ units up will be described by the equation $\frac{\left(x-3\right)^2}{25}-\frac{\left(y-4\right)^2}{100}=1$. Find the centre, vertices, foci and asymptotes of the translated hyperbola. Then draw the graph of the hyperbola.
What is the centre of the hyperbola?
Think: The equation of the hyperbola is in the form $\frac{\left(x-h\right)^2}{a^2}-\frac{\left(y-k\right)^2}{b^2}=1$ and the centre will be $\left(h,k\right)$.
Do: The centre can just be read off from the equation as $\left(3,4\right)$.
Reflect: The values of $h$ and $k$ appear after a minus sign in the standard form of the equation. If the equation were in the form $\frac{\left(x+3\right)^2}{a^2}\ldots$ then $h$ would equal $-3$.
What are the vertices of the hyperbola?
Think: For a hyperbola centred at $\left(h,k\right)$ and oriented horizontally, the vertices will be $a$ units horizontally in both directions from the centre, $\left(h\pm a,k\right)$.
Do: From the equation $a^2=25$ so $a=5$. The vertices will be at $\left(3\pm5,4\right)$, or written separately, $\left(-2,4\right),\left(8,4\right)$.
Reflect: Notice that the translated centre and the vertices all share the same $y$-component. They will always lie on the transverse axis.
Hyperbola centred at the origin and translated hyperbola with its vertices and centre marked
What are the equations of the asymptotes of the hyperbola?
Think: For a hyperbola of the form above the asymptotes are in the form $y=\pm\frac{b}{a}\left(x-h\right)+k$. We have found the values of $a$, $h$ and $k$. We need to find $b$ and substitute in the values.
Do: From the equation $b^2=100$ so $b=10$. The asymptotes will be $y=\pm\frac{10}{5}\left(x-3\right)+4$, which simplifies to $y=\pm2\left(x-3\right)+4$.
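What are the foci of the hyperbola?
Think: The foci lie on the transverse axis, $c$ units horizontally from the centre in both directions, where $c$ can be recovered from the relationship $b^2=c^2-a^2$, so $c^2=a^2+b^2$.
Do: $c^2=25+100=125$, so $c=\sqrt{125}=5\sqrt{5}$. The foci are therefore $\left(3\pm5\sqrt{5},4\right)$, or approximately $\left(-8.18,4\right)$ and $\left(14.18,4\right)$.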
Draw the graph of the hyperbola.
Think: The graph of the hyperbola is horizontally oriented, and will have a pair of asymptotes that intersect at the centre $\left(3,4\right)$.
Do: The graph of the hyperbola is given below.
Graph of a hyperbola $\frac{\left(x-3\right)^2}{25}-\frac{\left(y-4\right)^2}{100}=1$
The graph of the hyperbola $\frac{\left(y-k\right)^2}{a^2}-\frac{\left(x-h\right)^2}{b^2}=1$ is the same graph as the graph of $\frac{y^2}{a^2}-\frac{x^2}{b^2}=1$, except the center is at $\left(h,k\right)$ rather than at the origin.
Given the graph of $\frac{y^2}{16}-\frac{x^2}{4}=1$, find the graph of $\frac{\left(y-5\right)^2}{16}-\frac{x^2}{4}=1$.
The graph of $\frac{x^2}{16}-\frac{y^2}{9}=1$ is given below. Consider the translated hyperbola described by the equation $\frac{x^2}{16}-\frac{\left(y-2\right)^2}{9}=1$.
What are the coordinates of the centre of the translated hyperbola?
What are the coordinates of the vertices of the translated hyperbola?
What are the coordinates of the foci of the translated hyperbola?
What are the equations of the asymptotes of the translated hyperbola?
Select the graph of the translated hyperbola, $\frac{x^2}{16}-\frac{\left(y-2\right)^2}{9}=1$.
The graph of $\frac{x^2}{16}-\frac{y^2}{9}=1$ is given below. Consider the translated hyperbola described by the equation $\frac{\left(x+3\right)^2}{16}-\frac{\left(y-7\right)^2}{9}=1$.
What are the coordinates of the translated hyperbola's focus?
Select the graph of the translated hyperbola, $\frac{\left(x+3\right)^2}{16}-\frac{\left(y-7\right)^2}{9}=1$.
Methodology article
SABRE: a method for assessing the stability of gene modules in complex tissues and subject populations
Casey P. Shannon1,6,
Virginia Chen1,6,
Mandeep Takhar2,
Zsuzsanna Hollander1,6,
Robert Balshaw1,3,
Bruce M. McManus1,4,6,
Scott J. Tebbutt1,5,6,
Don D. Sin5,6 &
Raymond T. Ng1,2,6
BMC Bioinformatics volume 17, Article number: 460 (2016)
Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules, from whole transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how sensitive the outputs of these computational methods are to the input sample set, a property we refer to as stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically-relevant cohort, using two different gene network module discovery algorithms.
The stability of modules increased as sample size increased, and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permuted gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases.
The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.
Gene network inference (GNI) from whole transcriptome expression data is a fundamental and challenging task in computational systems biology, with potentially important applications in translational research. Often, a major aim is to identify sets of coordinately expressed genes, termed network modules, and to study their properties and how they change across conditions [1–6]. These gene network modules may represent novel, context specific, functional biological units, and identifying and studying such modules has become an important tool of systems biology [7–10]. Although diverse computational and statistical approaches have been devised to identify such modules [11–18], their performance behavior is still not fully understood, particularly in complex human tissues. This situation is understandable – objective assessment is challenging in this context. Nevertheless, these tools are being applied to the study of complex diseases [19, 20], and evaluating the performance, as well as understanding the limitations of state-of-the-art methods in this context, is important. This is particularly true for translational research where studies tend to be smaller, and are often performed in complex tissues and/or highly heterogeneous patient populations. While it is challenging to directly assess the accuracy of inferred networks derived by these methods in real-world expression datasets, the resulting gene modules have properties that may be objectively measured in such data.
One such property is module reproducibility. A comprehensive review of various means of assessing module reproducibility was published by Langfelder et al. [21]. In that study, the authors outlined two broad categories of module preservation statistics: cross-tabulation and network topology derived (including density, connectivity, and separability), and chose to focus on the latter as a means of assessing module reproducibility, in part because of the difficulty of assessing cross-tabulation module preservation statistics in the absence of an independent validation dataset. Re-sampling approaches, such as cross-validation or bootstrapping, have previously been used to overcome this, however [21–23]. Cross-tabulation reproducibility measures are of particular interest because they can be readily applied to a broad range of gene module identification strategies, including popular clustering approaches [24–26] that do not yield a nodes-and-edges network structure and, therefore, cannot be assessed by network topological metrics.
Here, we introduce a flexible strategy for assessing the reproducibility of gene modules that does not rely on network topology and leverages bootstrap re-sampling in place of an independent validation dataset. Even though the idea of using bootstrap re-sampling to evaluate module stability is not new (e.g., [21–23]), a systematic approach to summarize the results obtained across many re-samplings into an easy-to-interpret criterion of module stability, has yet to be described. We demonstrate that the proposed procedure can provide a useful measure of module stability – how sensitive a module's gene membership is to changes in the set of samples that were used to identify it – in a large (n > 200), highly relevant clinical dataset: gene expression profiling of peripheral whole blood, a highly heterogeneous tissue, in chronic obstructive pulmonary disease (COPD), a complex human disease. COPD is a progressive disease, characterized by non-reversible loss of lung function and sporadic worsening of symptoms (shortness of breath, cough, etc.) termed acute exacerbations of COPD (AECOPD). These exacerbations lead to substantial morbidity and mortality [27]. Additionally, some patients experience exacerbation episodes more frequently, and there is interest in understanding the molecular basis, if any, of this phenotype. Since most exacerbations are associated with bacterial or viral respiratory tract infections [28], immune system monitoring using whole blood, transcriptome-wide, gene expression profiling could provide useful insights [25]. We explored this notion by identifying gene network modules from this gene expression data using three popular strategies: weighted gene co-expression analysis (WGCNA) [11], a method based on the partial least squares regression (PLS) technique described by Pihur, Datta and Datta [18], and a clustering-based approach described by Chaussabel et al. [24].
The proposed procedure is more formally defined below (cf. Methods section). Briefly, the ability to recover similar modules across many re-sampled datasets reflects module stability. We propose to estimate gene module stability by looking for concordance between a reference module set derived from the entire data and a large number of comparator module sets derived from re-sampled datasets. Concordance can be determined using some similarity measure; we propose a variation on the Jaccard similarity coefficient, a statistic commonly used to assess the similarity of sets. The stability of a particular module can be estimated by inspecting the distribution of similarity scores across a large number of repeated re-samplings. A schematic representation of the procedure, termed SABRE (Similarity Across Bootstrap RE-samplings) is shown in Additional file 1: Figure S1.
To facilitate the prioritizing of modules, we introduce a criterion that summarizes the distribution of similarity measures obtained for a particular reference module across bootstrap re-samplings, and explore some useful properties of this criterion. We hypothesized that modules identified by various algorithms, such as WGCNA and the approaches described by Pihur, Datta and Datta, or Chaussabel and others, vary with respect to their stability and that this quality may provide a useful means of prioritizing specific modules for further study. Moreover, changes in module stability across conditions may suggest loss of regulation, which could be of biological interest.
We obtained PAXgene blood samples from 238 patients with chronic obstructive pulmonary disease (COPD), who were enrolled in the Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints (ECLIPSE) study [29]. These patients were clinically stable at the time of blood collection and their demographics are summarized in Additional file 2: Table S1.
RNA extraction and microarray processing
Blood samples for all subjects and timepoints were collected in PAXgene tubes and stored at −80 °C until analysis. Total RNA was extracted using PAXgene Blood RNA Kits (QIAGEN Inc., Germantown, MD, USA), and integrity and concentration determined using an Agilent 2100 BioAnalyzer (Agilent Technologies Inc., Santa Clara, CA, USA). Affymetrix Human Gene 1.1 ST (Affymetrix, Inc., Santa Clara, CA, USA) microarrays were processed at the Scripps Research Institute Microarray Core Facility (San Diego, CA, USA) in order to assess whole transcriptome expression. The microarrays were checked for quality using the RMAExpress software [30] (v1.1.0). All microarrays that passed quality control were background corrected and normalized using quantile normalization (as in RMA) [30] and summarized using a factor analysis model (factor analysis for robust microarray summarization [FARMS]) [31], via the 'farms' R package. FARMS includes an objective feature filtering technique that uses the multiple probes measuring the same target transcript as repeated measures to quantify the signal-to-noise ratio of that specific probe set. Informative probe sets, as identified by FARMS (2512), were used for all downstream analyses. Limiting the feature space in this manner had the additional benefit of speeding up identification of gene network modules in the next step, an important consideration given the proposed bootstrapping procedure.
Identification of gene modules
We identified network modules using three different approaches: weighted gene co-expression network analysis (WGCNA) [11], via the 'WGCNA' R package [32], a method based on the partial least squares regression (PLS) technique described by Pihur, Datta and Datta [18], and a k-means clustering-based approach to identifying sets of coordinately expressed transcripts described by Chaussabel and others [24]. In WGCNA, the unsigned gene-gene correlation matrix was weighted to produce a network with approximately scale-free topology, characteristic of biological systems [32]. Average linkage hierarchical clustering was then applied to this weighted gene co-expression network, and the resulting dendrogram cut at a height of 0.15 to produce modules with a minimum size of 50. The total number of modules identified in this manner was not fixed. We adopted a similar strategy to identify modules using the approach described by Pihur and others, this time applying average linkage clustering to the gene-gene interaction matrix output from the PLS procedure and cutting the resulting dendrogram at a height of 0.15, as before. The Chaussabel approach employs the k-means clustering algorithm to identify co-clustering genes. This algorithm uses a top-down strategy in which genes are randomly divided into a predetermined number of clusters. Genes are then iteratively re-assigned to their nearest cluster by some distance function (Euclidean distance, in this case), and cluster centers re-computed under the new configuration. This process is repeated until the algorithm converges. The number of clusters, or gene modules, was necessarily fixed in this case. We used the elbow criterion to determine the point at which the inclusion of additional clusters no longer provides a large increase in the proportion of variance explained. Because the initial centers are arbitrarily chosen, the solution is unlikely to be the minimal sum of squares of all possible partitions. Rather, a local minimum is returned where any further reassignment of a gene from one cluster to another will not reduce the within-cluster sum of squares. To address this, we used 10 random initial configurations and retained the solution with the minimal sum of squares. Although this provides a minimum over several partitions, it does not guarantee a global minimum solution.
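As an illustration only (not the published analysis code), the R sketch below outlines the two clustering steps described above, assuming expr is a samples × genes expression matrix; the WGCNA soft-thresholding and PLS weighting steps are omitted, and a plain correlation-based dissimilarity is used as a stand-in for the weighted networks.

## Shared step for the WGCNA- and PLS-derived networks (illustrative stand-in):
## average-linkage clustering of a gene-gene dissimilarity, dendrogram cut at 0.15,
## keeping only clusters with at least 50 genes.
dissim  <- 1 - abs(cor(expr))
dendro  <- hclust(as.dist(dissim), method = "average")
labels  <- cutree(dendro, h = 0.15)
modules <- split(colnames(expr), labels)
modules <- modules[lengths(modules) >= 50]

## Chaussabel-style step: k-means on genes with 10 random initial configurations;
## the number of centers (here 20) would be chosen with the elbow criterion.
km         <- kmeans(t(expr), centers = 20, nstart = 10)
km_modules <- split(colnames(expr), km$cluster)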
Assessing the stability of gene modules
All three approaches were applied as described to all 238 peripheral whole blood expression profiles to identify modules of coordinately expressed transcripts. These are termed the reference module sets for each approach. To assess stability of all modules within the reference module sets, bootstrap re-sampling was used to generate 1000 random sets of 238 subjects. All three network module identification approaches were applied to all re-samplings to produce 1000 bootstrapped sets of modules, in each case.
In order to describe how we propose to assess the stability of the reference modules, we must first introduce a few related concepts. We start with the concept of module accuracy, defined as follows: consider a reference module q, a set of genes of size |q|, and a test module q′, and let |q ∩ q′| denote the number of genes in common between the two. Accuracy is then defined as:
$$ \mathrm{Accuracy}_q = \frac{\left| q \cap q' \right|}{\left| q \right|} \in \left[0, 1\right] \tag{1} $$
We instead propose to use the closely related Jaccard similarity coefficient, a statistic commonly used to assess the similarity of sets:
$$ \mathrm{Jaccard}_{qq'} = \frac{\left| q \cap q' \right|}{\left| q \cup q' \right|} \in \left[0, 1\right] \tag{2} $$
Comparison of a reference module to a test module that is an exact subset should be considered a perfect match from a stability standpoint (the similarity measure used should return 1 in this case). In order for the similarity measure used to reflect this, we modify (2) as follows:
$$ \mathrm{Similarity}_{qq'} = \frac{\left| q \cap q' \right|}{\min\left\{ \left| q \right|, \left| q' \right| \right\}} \in \left[0, 1\right] \tag{3} $$
This is analogous to the Simpson index that can be computed for bipartite networks [33], but note that similarity is defined only in terms of the degree of overlap between module members, or nodes. The modules are treated as simple sets in (3).
Finally, gene module stability can be very naturally estimated via a random re-sampling with replacement procedure, termed bootstrapping [34], by looking for concordance between a reference module set derived from the entire data and one derived from a re-sampled dataset. Concordance can be determined using the similarity measure defined above (3). When comparing a module q to a module set Q = {q1, q2, q3, …, qn}, the best match for q is defined to be q′, such that (3) between q and q′ is the highest among all possible comparisons between q and members of Q. The best match similarity coefficient is then (3) between q and q′, and the stability of a particular module q can be estimated by looking at the distribution of best match similarity scores across a large number of repeated re-samplings, {Rj; j = 1, 2, …, n}. In order to rank modules by their stability, we summarize these distributions for each module to a single number, the Hirsch index (H-index), as follows:
$$ \text{H-index}(q) = \max_h \left\{ h : \frac{1}{1000} \sum_{j=1}^{1000} \mathbf{1}\left(R_j \ge h\right) \ge h \right\} \in \left[0, 1\right] \tag{4} $$
For a reference module with H-index = 0.8, a similarity of 0.8 or greater was observed in at least 80 % of bootstrap runs. A more qualitative interpretation would be that we expect this reference module, derived from all available samples, to have 80 % similarity to a hypothetical module derived from a whole-population dataset.
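To make the procedure concrete, a minimal R sketch of the similarity measure (3) and the H-index (4) is given below; this is not the authors' code, and it assumes that modules are character vectors of gene identifiers and that boot_sets is a list containing one module set per bootstrap re-sampling.

## Similarity measure (3): overlap divided by the size of the smaller module
similarity <- function(q, q_prime)
  length(intersect(q, q_prime)) / min(length(q), length(q_prime))

## Best match similarity of reference module q within one bootstrapped module set
best_match <- function(q, module_set)
  max(vapply(module_set, similarity, numeric(1), q = q))

## H-index of the best match similarities R (one value per bootstrap run):
## the largest h such that a fraction of at least h of the runs reached similarity >= h
h_index <- function(R, grid = seq(0, 1, by = 0.001))
  max(grid[vapply(grid, function(h) mean(R >= h) >= h, logical(1))])

## Usage: R_q <- vapply(boot_sets, best_match, numeric(1), q = reference_module)
##        h_index(R_q)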
Construction of random gene modules
To get a sense of the stability that could be expected of a module containing genes with minimal relation to each other, we carried out a simulation study. Modules of size 50–400 (by increments of 50) were created by sampling from all 2512 gene symbols in the FARMS-filtered dataset. This was done 100 times for each size of module. Then, for each of these random modules, the best match Jaccard similarity coefficient was recorded across all 1000 module sets generated during the previously described bootstrap procedure. The resulting distribution was summarized using the H-index.
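Reusing the helpers sketched above, the random-module simulation can be expressed roughly as follows; this is again an illustration, assuming all_genes holds the 2512 informative gene symbols and boot_sets the 1000 bootstrapped module sets.

## H-indices for 100 random modules of a given size (e.g. 200 genes)
random_h <- replicate(100, {
  rand_module <- sample(all_genes, 200)
  h_index(vapply(boot_sets, best_match, numeric(1), q = rand_module))
})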
Stability of network modules, sample size, and module size
In order to study the relationship between network module stability, sample size, and module size, a slight variation of the bootstrap re-sampling strategy was used. We randomly sampled, without replacement 10, 20, 40, 80, 120, or 160 expression profiles from the 238 peripheral whole blood expression profiles described above. In each case, a reference module set was produced, 100 bootstrap re-samplings of the selected expression profiles generated, and the stability of the reference module set across bootstrap re-samplings determined as before. This was repeated 10 times to capture the effect the original selection had on generation of the reference module set.
Stability of network modules and network topology
The relationship between module stability and various network topology measures was also of interest. We constructed an undirected network using the 'igraph' R package [35]. We defined a gene-gene edge as one where the absolute correlation for that gene pair was at least two standard deviations away from the mean correlation observed across all possible gene pairs. Various topology measures were then calculated for each of the reference modules and compared to their stability: the average number of neighbors per gene (divided by module size), the number of instances in which a gene appears on a shortest path between two other genes, and the number of triads (divided by the number of possible triads in the module).
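A possible R implementation of these topology summaries, using the 'igraph' package, is sketched below; expr is again assumed to be a samples × genes matrix, the two-standard-deviation threshold is taken to mean above the mean absolute correlation, and the exact normalisation of each measure may differ from the published analysis.

library(igraph)

cors    <- abs(cor(expr))
offdiag <- cors[upper.tri(cors)]
adj     <- 1 * (cors >= mean(offdiag) + 2 * sd(offdiag))   # edge if |cor| exceeds threshold
diag(adj) <- 0
g <- graph_from_adjacency_matrix(adj, mode = "undirected")

module_topology <- function(genes) {
  sub <- induced_subgraph(g, genes)
  n   <- length(genes)
  c(avg_neighbours = mean(degree(sub)) / n,                         # scaled by module size
    shortest_paths = sum(betweenness(sub)),                         # appearances on shortest paths
    triad_density  = (sum(count_triangles(sub)) / 3) / choose(n, 3))
}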
Stability of network modules and functional annotation
Finally, we explored the relationship between stability and our ability to functionally annotate gene modules. We hypothesized that stable or reproducible gene modules should correspond well to known biological programs more often than unstable ones. To test this, we compared the gene membership of our reference modules to the MSigDB collections (v5.0) [36]. We also included the recently described Blood Transcriptome Modules (BTMs), a collection of blood-specific transcriptomic modules derived from an analysis of over 30,000 human blood transcriptome profiles from more than 500 studies whose data are publicly available [26]. Annotation was done by testing for over-representation of genes from the MSigDB gene sets using a hypergeometric test, a simple statistical method commonly used to quantitatively measure enrichment [37]. To quantify how well a particular module was annotated in the gene set collection, we computed the sum of the –log10 of the p-values for the hypergeometric test across all gene sets in the collection and compared it to its stability, separately in each of the collections (MSigDB Hallmark, C1-C7, and BTMs).
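The over-representation test described here can be sketched in R as follows (an illustration, not the exact published pipeline); module, gene_set, and universe are assumed to be character vectors of gene symbols, with universe being the 2512 informative genes.

overrep_p <- function(module, gene_set, universe) {
  gs      <- intersect(gene_set, universe)
  overlap <- length(intersect(module, gs))
  ## hypergeometric upper tail: P(overlap >= observed)
  phyper(overlap - 1, m = length(gs), n = length(universe) - length(gs),
         k = length(module), lower.tail = FALSE)
}

## Annotatability score of one module over a collection of gene sets:
## sum(-log10(vapply(collection, overrep_p, numeric(1),
##                   module = module, universe = universe)))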
We first identified sets of gene modules by using all available samples and by employing three different approaches: weighted gene co-expression network analysis (WGCNA) [11], a method based on the partial least squares regression (PLS) technique described by Pihur, Datta and Datta [18], and a clustering-based approach to identifying sets of coordinately expressed transcripts described by Chaussabel and others [24]. These are termed reference module sets.
When applied to all 238 samples, WGCNA identified 19 network modules, ranging in size from 69 to 850 probe-sets (36 to 427 genes; Additional file 3: Table S2), with a mean module size of 240 probe-sets (median = 157). The method from Pihur, Datta and Datta identified 24 network modules, ranging in size from 69 to 554 probe-sets (32 to 210 genes; Additional file 4: Table S3), with a mean module size of 207 probe-sets (median = 157). The Chaussabel approach identified 20 network modules ranging in size from 44 to 302 probe-sets (38 to 167 genes; Additional file 5: Table S4), with a mean module size of 131 probe-sets (median = 120). Modules identified by WGCNA and the Pihur approach were highly concordant (mean Jaccard similarity = 0.78; not shown), while those identified by the Chaussabel approach were largely distinct (Jaccard similarity < 0.2 for the majority of module pair-wise comparisons; Additional file 6: Figure S2) with the exception of the WGCNA-derived turquoise and blue modules, which were similar to the Chaussabel-derived M17 and M19 modules, respectively (similarity coefficient > 0.5). Annotation of the turquoise/M17 and blue/M19 modules was consistent (Additional file 7: Table S5).
Assessing gene module stability using SABRE and the H-INDEX
Next, we applied SABRE to assess the stability of these reference modules. Refer to Methods for detail. Briefly, each algorithm was allowed to identify a set of network modules from each re-sampling and these re-sampled module sets were compared to the reference module set. For each reference module, only the highest observed similarity coefficient within each re-sampling (that for the best matching re-sampled module) was recorded and the distribution of these similarity coefficients across all bootstrap re-samplings was used to assess stability. Modules with distribution of similarity coefficients across all re-samplings that skewed towards one were consistently matched to highly concordant re-sampled modules across many random sample configurations and are thus deemed to be relatively insensitive to sample outliers. To simplify interpretation, we summarized the distribution of similarity coefficients for each module to its Hirsch-index [38] (H-index), as defined in (4). A visual derivation of the H-index is shown in Fig. 1 for the modules identified by WGCNA.
Visual derivation of the H-index. A visual depiction of the derivation of the H-index is shown for modules identified by the WGCNA algorithm. A set of reference modules derived from all available samples are compared to a series of comparator module sets derived from bootstrapped re-sampled data. For each re-sampled dataset, all reference modules are compared to all newly identified modules using the Jaccard similarity coefficient. For each reference module, the best match Jaccard similarity coefficient value is recorded. Finally, these best match similarity coefficients are sorted and a measure of the area under the resulting curve (Hirsch-index) used to estimate the reference module's stability
Gene module stability increases with sample size and module size
Gene modules are often identified in relatively small study populations. We used a variation of the SABRE strategy to study the effect of sample size on gene module stability. As expected, module stability, as assessed using the current framework, increased as the sample size was increased (Fig. 2a). This relationship held for both very stable (1st rank) and less stable (10th rank) modules. We also observed a relationship between module stability and module size in smaller studies (Fig. 2b; n = 10, p = 4.3 × 10⁻¹³; n = 80, p = 0.041; Wilcoxon's rank-sum test), but there was no such relationship at larger sample sizes (n = 120, p = 0.55; n = 160, p = 0.88). In all cases, modules identified (by WGCNA in this case) were more stable than modules assembled by randomly sampling from all gene symbols in the filtered dataset, even when sample size was small (Additional file 8: Figure S3).
Gene module stability increases with sample size and module size. For each n, we sampled without replacement from all available gene expression profiles 10 times. In each case, a reference module set was produced (by WCGNA), 100 bootstrap re-samplings of the selected expression profiles generated, and the stability of the reference module set across bootstrap re-samplings determined as described in the Methods section. a Stability of the modules is visualized at n = 10, 20, 40, 80, 120 and 160 for the 1st, 5th and 10th rank modules. b Stability is plotted against module size at n = 10, 20, 40, 80, 120 and 160. The dotted line depicts the best-case stability of random modules in simulation. We compare the stability of S (1st quartile) and XL (4th quartile) modules using Wilcoxon's rank-sum test
Stability profiles differ between algorithms
The H-index was next used to rank reference modules (Fig. 3). Modules varied with respect to their stability as assessed by the SABRE procedure, both within sets of modules identified by particular algorithms, as well as across strategies. Modules identified by all three strategies were significantly more stable than modules assembled by randomly sampling from all gene symbols in the filtered dataset (Additional file 8: Figure S3). The set of network modules identified by WGCNA was more stable (mean H-index = 0.79, standard deviation = 0.12, range = 0.42–0.96) than that identified by either the Pihur (mean H-index = 0.68, standard deviation = 0.12, range = 0.44–0.91), or Chaussabel approaches (mean H-index = 0.69, standard deviation = 0.14, range = 0.43–0.97), though the top ranking module identified by all three approaches had comparable stability (WGCNA: lightgreen module, H-index = 0.96; Pihur: blue module, H-index = 0.91; Chaussabel: M6 module, H-index = 0.97).
Stability profiles differ between algorithms. Gene module similarity across bootstrap re-samplings, for all reference network modules identified by three gene module discovery algorithms, is visualized using box plots (a: WGCNA; b: Pihur, c: Chaussabel). The stability of the network modules is summarized using the H-index (red). The dotted line depicts the best-case stability of random modules in simulation
Stable modules are more interconnected
We might expect more connected gene network modules to be more stable. There are many proposed metrics for measuring network connectivity. The simplest such metric is the average number of neighbors. For our dataset, there is no observed trend between the H-index and the average number of neighbors within a module. A stronger notion of connectivity is to measure the number of nodes that are in the shortest path between some pair of nodes in the network. The higher this number, the more tightly connected the network. Figure 4 suggests that there is a relationship between this notion of connectivity and the H-index. An even stronger notion of connectivity described in the literature is the number of triads, which are triplets of nodes that are directly connected pairwise. Figure 4 indicates that there is a weak relationship between this notion of connectivity and the H-index, particularly for lower values of the H-index. In sum, it appears that the notion of stability indicated by the H-index is related, but not identical, to two well-known notions of connectivity in network science.
Stable modules are more interconnected. The relationship between module stability and a number of topological measures of network connectivity is visualized for modules identified by WGCNA (blue) or the Chaussabel approach (red). Stability is positively associated (Spearman's ρ) with both number of appearance in the shortest path and number of triads in the network (* p ≤ 0.05)
Stable modules are more readily annotated
We hypothesized above that stable modules should correspond to well-characterized biological functions. If this is true, we would expect stable modules to be more readily annotated than less stable ones. We explored this notion in the BTM and MSigDB collections of annotated gene sets and found that stable modules were indeed more readily annotated across many of the included collections (Fig. 5). Module stability was significantly associated with annotatability (sum –log10 p-value of the hypergeometric test of the overlap between annotation and module genes) in the hallmark (H, Spearman's ρ = 0.39; p = 0.01), positional (C1, Spearman's ρ = 0.39; p = 0.01), immunologic signatures (C7, Spearman's ρ = 0.31; p = 0.05), and blood transcriptomic modules (BTM, Spearman's ρ = 0.31; p = 0.05) collections, marginally associated in the canonical (C2, Spearman's ρ = 0.27; p = 0.09), and oncogenic signatures (C6, Spearman's ρ = 0.27; p = 0.10) collections, and showed no significant association in the gene ontology (GO, C5), motif (C3), or computational (C4) gene set collections. Complete gene set over-representation results are tabulated in Additional file 5: Table S4.
Stable modules are more readily annotated. Module gene over-representation in annotated gene sets (sum of –log10 p-value for the hypergeometric test) is visualized, for modules with varying stability, in the MSigDB and BTM collections. Stability is positively associated (Spearman's ρ) with our ability to assign module to known biology (* p ≤ 0.05; † p ≤ 0.10) in many of these collections
Many gene modules identified by WGCNA had very high concordance with gene sets corresponding to distinct biological functions (e.g. lightgreen: B cell activity, antibody production; cyan: interferon signaling; brown: heme metabolism; blue: recruitment of neutrophils and TLR mediated inflammatory signaling). In fact, even relatively less stable modules identified by WGCNA appeared to correspond to known biological pathways (e.g. purple module [H-index = 0.73]: MHC class II antigen presentation). Modules identified by the Chaussabel approach had generally lower concordance to annotated gene sets, with modules often having seemingly overlapping biological function (e.g. interferon signaling pathway activity was assigned to both modules M04 and M06). In fact, nearly half of gene modules identified by the Chaussabel approach were enriched in genes associated with translation (7/20 modules had significant enrichment with KEGG ribosome, KEGG translation) and mRNA transcription (4/20 modules had significant enrichment with KEGG spliceosome, REACTOME metabolism of mRNA, REACTOME Translation, REACTOME mRNA splicing).
The main objective of this work was to devise a strategy for evaluating the stability of gene modules in a manner applicable to a broad range of gene module identification strategies, and not reliant on the availability of additional datasets for validation. Quantifying network module stability would allow for prioritization of modules for further experimental interrogation. While the use of re-sampling strategies, such as cross-validation or the bootstrap, in this context has been previously proposed [21–23], a systematic approach to summarizing the results obtained across many re-samplings into an easy-to-interpret criterion of module stability has yet to be described. As shown in Fig. 1, for a particular module, all the re-sampling results can be captured by a curve. The question we ask is how to summarize the curve with a single value that is informative. Standard ways, such as using the mean value, or even using the area-under-the-curve value, do not provide a "minimum quality" guarantee. In contrast, the proposed bootstrap re-sampling and summarization scheme, SABRE, does provide such a lower bound guarantee, at least across all generated bootstrap runs. The proposed stability criterion, the H-index of the curve, corresponding to the largest square under the curve, is readily interpreted: for a module with H-index = 0.8, one can say that similarity of 0.8 or greater was observed across at least 80 % of the bootstrap runs.
We go on to explore a number of useful characteristics of the H-index. We show that the H-index of a module generally increases as sample size increases and observe that, for any given module, a stability maximum appears to be reached between n = 80–120, at least in this tissue and patient population. We also note that, while smaller modules are generally less stable, this is not the case with larger sample sizes (n > 80–120). As expected, we find randomly assembled modules to have very low stability (H-index = 0.25), and it is comforting to note that, even for very small studies (n = 10), modules identified by WGCNA had worst case stability that exceeded this value, suggesting that, for some gene modules at least, core members may be identifiable even in very small studies. Taken together, these observations suggest that the identification of robust gene expression modules in complex tissues and diseases requires large study populations (n > 100).
Next, we compare gene network modules identified by three popular gene network module identification strategies. WGCNA and the PLS-based approach described by Pihur, Datta and Datta identified largely concordant sets of gene network modules, while the modules identified by the Chaussabel approach were largely distinct. Given that module identification for both WGCNA and the Pihur approach utilized average linkage hierarchical clustering on an adjacency matrix, this is perhaps not surprising. Applying SABRE to these module sets allowed us to readily rank identified gene modules from most to least stable. Modules identified by WGCNA were generally more stable than those identified by the other two approaches. The ranking of modules by their stability, in combination with other metrics, such as biological annotation, may inform prioritization of certain modules for further study.
We found that the H-index was positively associated with topological notions of network connectivity, as well as our ability to assign biological function to gene modules. SABRE uncovered important qualitative differences between module sets in this respect, however. First, while stability was positively correlated with connectivity across all modules, this relationship was strongest in modules identified by the Chaussabel approach. These modules also exhibited lower network connectivity compared to those found by WGCNA. This is not surprising since the Chaussabel approach pays no attention to the topology of the constructed modules. A similar pattern emerges when comparing the relationship between module stability and annotatability in the MSigDB and BTM gene set collections. Here again, the relationship between stability and annotatability was strongest in modules identified by the Chaussabel approach. Given the distribution of the stability criterion, and the connectivity or annotatability measures considered, it is difficult to determine whether the observed relationships between stability and connectivity/annotatability are true or primarily driven by broad differences in stability between module sets identified by different strategies.
Stable gene modules that do not readily correspond to annotated gene sets may be very interesting, of course, as they may represent novel, disease- or tissue-specific biological processes. Two such gene modules were identified in the current study: the lightgreen (h = 0.96) and blue (h = 0.86) modules. In an effort to assign biological function to these modules, we compared them to the recently published Blood Transcriptome Modules (BTM). These empirically derived sets of co-regulated genes have very low overlap with presently available pathways and were identified in peripheral whole blood gene expression data. We reasoned that the highly stable, un-annotated modules we independently identified in this study may in fact correspond to some of these highly stable blood modules. In fact, this was the case: the lightgreen module was enriched for a number of BTMs related to B cell activity, while the blue module was matched to various innate immunity BTMs (recruitment of neutrophils and TLR mediated inflammatory signaling). Both B cells and neutrophils are known to be implicated in COPD. [39, 40] This provided validation, both of the modules themselves, and of the stability ranking produced by the SABRE procedure, in that two highly stable modules that did not correspond to any available pathway annotations, were consistent with independently derived functional modules specific to blood leukocyte sub-populations. These modules may represent important and novel biological function in the peripheral whole blood compartment of COPD patients.
In conclusion, we demonstrate that bootstrap re-sampling, and the SABRE procedure described herein, can assess the stability of gene modules identified by three different algorithms and suggest that this could be a useful criterion when selecting modules for further investigation. We also show that when modules are identified in smaller studies, more stable ones are more likely to replicate in larger experiments compared to less stable ones. We show a relationship between this notion of stability, topological connectivity, and our ability to assign biological function to gene modules. Our approach highlights the relative robustness of the WGCNA algorithm to sample outliers and, more generally, suggests that many gene module strategies should probably be applied jointly to any given dataset. Finally, we identify and validate two highly stable modules that may represent novel, tissue-specific biological function in the context of the peripheral whole blood of clinically stable COPD patients.
AECOPD:
Acute exacerbation of chronic obstructive pulmonary disease
COPD:
Chronic obstructive pulmonary disease
GNI:
Gene network inference
SABRE:
Similarity across bootstrap re-samplings
WGCNA:
Weighted gene co-expression network analysis
Kurata H, El-Samad H, Iwasaki R, Ohtake H, Doyle JC, Grigorova I, et al. Module-based analysis of robustness tradeoffs in the heat shock response system. PLoS Comput Biol. 2006;2:e59.
Xia K, Xue H, Dong D, Zhu S, Wang J, Zhang Q, et al. Identification of the proliferation/differentiation switch in the cellular network of multicellular organisms. PLoS Comput Biol. 2006;2:e145.
Ghazalpour A, Doss S, Zhang B, Wang S, Plaisier C, Castellanos R, et al. Integrating genetic and network analysis to characterize genes related to mouse weight. PLoS Genet. 2006;2:e130.
Wang J, Zhang S, Wang Y, Chen L, Zhang X-S. Disease-aging network reveals significant roles of aging genes in connecting genetic diseases. PLoS Comput Biol. 2009;5:e1000521.
Stone EA, Ayroles JF. Modulated modularity clustering as an exploratory tool for functional genomic inference. PLoS Genet. 2009;5:e1000479.
Plaisier CL, Horvath S, Huertas-Vazquez A, Cruz-Bautista I, Herrera MF, Tusie-Luna T, et al. A systems genetics approach implicates USF1, FADS3, and other causal candidate genes for familial combined hyperlipidemia. PLoS Genet. 2009;5:e1000642.
Segal E, Shapira M, Regev A, Pe'er D, Botstein D, Koller D, et al. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nat Genet. 2003;34:166–76.
Friedman N. Inferring cellular networks using probabilistic graphical models. Science. 2004;303:799–805.
Basso K, Margolin AA, Stolovitzky G, Klein U, Dalla-Favera R, Califano A. Reverse engineering of regulatory networks in human B cells. Nat Genet. 2005;37:382–90.
Lee I, Lehner B, Crombie C, Wong W, Fraser AG, Marcotte EM. A single gene network accurately predicts phenotypic effects of gene perturbation in Caenorhabditis elegans. Nat Genet. 2008;40:181–8.
Zhang B, Horvath S. A general framework for weighted gene co-expression network analysis. Stat Appl Genet Mol Biol. 2005;4:Article17.
Huynh-Thu VA, Irrthum A, Wehenkel L, Geurts P. Inferring regulatory networks from expression data using tree-based methods. PLoS ONE. 2010;5:e12776.
Rajapakse JC, Mundra PA. Stability of building gene regulatory networks with sparse autoregressive models. BMC Bioinformatics. 2011;12:S17.
Haury A-C, Mordelet F, Vera-Licona P, Vert J-P. TIGRESS: trustful inference of gene REgulation using stability selection. BMC Syst Biol. 2012;6:145.
Kuffner R, Petri T, Tavakkolkhah P, Windhager L, Zimmer R. Inferring gene regulatory networks by ANOVA. Bioinformatics. 2012;28:1376–82.
Ruyssinck J, Huynh-Thu VA, Geurts P, Dhaene T, Demeester P, Saeys Y. NIMEFI: gene regulatory network inference using multiple ensemble feature importance algorithms. PLoS ONE. 2014;9:e92709.
Gill R, Datta S, Datta S. A statistical framework for differential network analysis from microarray data. BMC Bioinformatics. 2010;11:95.
Pihur V, Datta S, Datta S. Reconstruction of genetic association networks from microarray data: a partial least squares approach. Bioinformatics. 2008;24:561–8.
de Jong S, Boks MPM, Fuller TF, Strengman E, Janson E, de Kovel CGF, et al. A Gene Co-Expression Network in Whole Blood of Schizophrenia Patients Is Independent of Antipsychotic-Use and Enriched for Brain-Expressed Genes. Mazza M, editor. PLoS ONE. 2012;7:e39498.
Van Eijk KR, de Jong S, Boks MP, Langeveld T, Colas F, Veldink JH, et al. Genetic analysis of DNA methylation and gene expression levels in whole blood of healthy human subjects. BMC Genomics. 2012;13:636.
Langfelder P, Luo R, Oldham MC, Horvath S. Is my network module preserved and reproducible? PLoS Comput Biol. 2011;7:e1001057.
Brock G, Pihur V, Datta S, Datta S. clValid, an R package for cluster validation. J Stat Softw. 2008. Available from: http://cran.us.r-project.org/web/packages/clValid/vignettes/clValid.pdf
Datta S, Datta S. Comparisons and validation of statistical clustering techniques for microarray gene expression data. Bioinformatics. 2003;19:459–66.
Chaussabel D, Quinn C, Shen J, Patel P, Glaser C, Baldwin N, et al. A modular analysis framework for blood genomics studies: application to systemic lupus erythematosus. Immunity. 2008;29:150–64.
Chaussabel D, Pascual V, Banchereau J. Assessing the human immune system through blood transcriptomics. BMC Biol. 2010;8:84.
Li S, Rouphael N, Duraisingham S, Romero-Steiner S, Presnell S, Davis C, et al. Molecular signatures of antibody responses derived from a systems biology study of five human vaccines. Nat Immunol. 2013;15:195–204.
Mannino DM, Buist AS. Global burden of COPD: risk factors, prevalence, and future trends. Lancet. 2007;370:765–73.
Wedzicha JA, Seemungal TA. COPD exacerbations: defining their cause and prevention. Lancet. 2007;370:786–96.
Vestbo J, Anderson W, Coxson HO, Crim C, Dawber F, Edwards L, et al. Evaluation of COPD longitudinally to identify predictive surrogate end-points (eclipse). Eur Respir J. 2008;31:869–73.
Bolstad BM, Irizarry R, Åstrand M, Speed TP. A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics. 2003;19:185–93.
Hochreiter S, Clevert D-A, Obermayer K. A new summarization method for affymetrix probe level data. Bioinformatics. 2006;22:943–9.
Langfelder P, Horvath S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics. 2008;9:559.
Fuxman Bass JI, Diallo A, Nelson J, Soto JM, Myers CL, Walhout AJM. Using networks to measure similarity between genes: association index selection. Nat Methods. 2013;10:1169–76.
Efron B. Bootstrap methods: another look at the Jackknife. Ann Stat. 1979;7:1–26.
Csardi G, Nepusz T. The igraph software package for complex network research. Int J Complex Syst. 2006;1695:1–9.
Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A. 2005;102:15545–50.
Huang DW, Sherman BT, Lempicki RA. Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists. Nucleic Acids Res. 2009;37:1–13.
Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102:16569–72.
Campbell JD, McDonough JE, Zeskind JE, Hackett TL, Pechkovsky DV, Brandsma C-A, et al. A gene expression signature of emphysema-related lung destruction and its reversal by the tripeptide GHK. Genome Med. 2012;4:67.
Hoenderdos K, Condliffe A. The neutrophil in chronic obstructive pulmonary disease. Too little, too late or too much, too soon? Am J Respir Cell Mol Biol. 2013;48:531–9.
The authors would like to thank the research participants whose tissue donations made this work possible.
Funding was provided by Genome Canada, Genome British Columbia, Genome Quebec, the Canadian Institutes of Health Research, PROOF Centre, St. Paul's Hospital Foundation, Providence Health Care, and the COPD Clinical Research Network.
The gene expression data used in this study is available on the Gene Expression Omnibus: GSE71220.
CPS, VC, MT, ZH, and RTN conceived of, and designed this study. VC and ZH carried out quality control and normalization of the microarray data. CPS, VC, and MT performed the statistical analyses. CPS drafted the manuscript. RB and SJT participated in the design of the study and helped draft the manuscript. BMM, DDS, and RTN conceived of, and designed the COPD ECLIPSE gene expression profiling study whose samples were used for this analysis. All authors read and approved the final manuscript.
The ECLIPSE study was conducted in accordance with the Declaration of Helsinki and good clinical practice guidelines, and has been approved by the relevant ethics and review boards at the participating centres. Study participants gave written informed consent. Consent was given for their data to be shared as part of future research for the purpose of further understanding the disease, such as this study. Patient information was anonymized and de-identified prior to the analysis. ECLIPSE study was funded by GlaxoSmithKline, under ClinicalTrials.gov identifier NCT00292552 and GSK study code SCO104960. This study, analyzing a subset of ECLIPSE subjects' mRNA samples, was approved by the University of British Columbia (UBC), under Research Ethics Board (REB) number H11-00786.
PROOF Centre of Excellence, Vancouver, BC, Canada
Casey P. Shannon, Virginia Chen, Zsuzsanna Hollander, Robert Balshaw, Bruce M. McManus, Scott J. Tebbutt & Raymond T. Ng
Department of Computer Science, UBC, Vancouver, BC, Canada
Mandeep Takhar & Raymond T. Ng
BC Centre for Disease Control, Vancouver, BC, Canada
Robert Balshaw
Department of Pathology and Laboratory Medicine, UBC, Vancouver, BC, Canada
Bruce M. McManus
Department of Medicine, Division of Respiratory Medicine, UBC, Vancouver, BC, Canada
Scott J. Tebbutt & Don D. Sin
UBC James Hogg Research Centre and Institute of HEART + LUNG Health, Vancouver, BC, Canada
Casey P. Shannon, Virginia Chen, Zsuzsanna Hollander, Bruce M. McManus, Scott J. Tebbutt, Don D. Sin & Raymond T. Ng
Casey P. Shannon
Virginia Chen
Mandeep Takhar
Zsuzsanna Hollander
Scott J. Tebbutt
Don D. Sin
Raymond T. Ng
Correspondence to Casey P. Shannon.
Additional file 1: Figure S1.
Schematic of bootstrap re-sampling procedure. (PNG 146 kb)
Additional file 2: Table S1.
Patient demographics. Demographic characteristics of the 238 COPD patients. (CSV 495 bytes)
Additional file 3: Table S2.
WGCNA reference module sets. (CSV 550 bytes)
Additional file 4: Table S3.
Pihur reference module sets. (CSV 701 bytes)
Additional file 5: Table S4.
Chaussabel reference module sets. (CSV 497 bytes)
Additional file 6: Figure S2.
WGCNA and Chaussabel module concordance. Concordance between network modules identified by two gene network module discovery strategies. Reference module sets were identified using all available gene expression profiles. The similarity coefficient for each pair-wise comparison is visualized as a clustered heatmap, with red indicating high similarity and blue indicating low similarity. (PNG 77 kb)
Additional file 7: Table S5.
Pathway over-representation analysis results for the WGCNA and Chaussabel module sets. For each module identification strategy and module in turn, we carried out pathway over-representation analysis against the Broad Institute's MSigDB collections by comparing the gene set and module membership using a simple hypergeometric test. We report both the p-value and the false discovery rate, adjusting for multiple comparisons using the Benjamini-Hochberg procedure. (CSV 196 kb)
Additional file 8: Figure S3.
Random module stability. To get a sense of the stability that could be expected of a module containing genes with minimal relation to each other, a simulation study was carried out. Modules of size 50, 100, 150, 200, 250, 300, 350, and 400 were randomly assembled by sampling from all 2512 gene symbols in the filtered dataset. This was done 100 times for each size of module. For each random module, the best match Jaccard similarity coefficients were computed for each of the 1000 bootstrap results previously generated, and the resulting distribution was summarized using the H-index. (PNG 51 kb)
Shannon, C.P., Chen, V., Takhar, M. et al. SABRE: a method for assessing the stability of gene modules in complex tissues and subject populations. BMC Bioinformatics 17, 460 (2016). https://doi.org/10.1186/s12859-016-1319-8
Accepted: 03 November 2016
Gene modules
WGCNA | CommonCrawl |
Screening of noise-induced hearing loss (NIHL)-associated SNPs and the assessment of its genetic susceptibility
Xuhui Zhang1,
Yaqin Ni2,
Yi Liu2,
Lei Zhang3,
Meibian Zhang4,
Xinyan Fang5,
Zhangping Yang1,
Qiang Wang3,
Hao Li1,
Yuyong Xia1 &
Yimin Zhu2
Environmental Health volume 18, Article number: 30 (2019)
The aim of this study was to screen for noise-induced hearing loss (NIHL)-associated single nucleotide polymorphisms (SNPs) and to construct genetic risk prediction models for NIHL in a Chinese population.
Four hundred seventy-six subjects with NIHL and 476 matched controls were recruited from a cross-sectional survey on NIHL in China. A total of 83 candidate SNPs were genotyped using nanofluidic dynamic arrays on a Fluidigm platform. NIHL-associated SNPs were screened with a multiple logistic model, and a genetic risk model was constructed based on the genetic risk score (GRS). The results were validated using a prospective cohort population.
Seven SNPs in the CDH23, PCDH15, EYA4, MYO1A, KCNMA1, and OTOG genes were significantly (P < 0.05) associated with the risk of NIHL, whereas seven other SNPs were marginally (P > 0.05 and P < 0.1) associated with the risk of NIHL. A positive correlation was observed between GRS values and odds ratio (OR) for NIHL. Two SNPs, namely, rs212769 and rs7910544, were validated in the cohort study. Subjects with higher GRS (≥ 9) showed a higher risk of NIHL incidence with an OR of 2.00 (95% CI = 1.04, 3.86).
Genetic susceptibility plays an important role in the incidence of NIHL. GRS values, which are based on NIHL-associated SNPs, may be utilized in the evaluation of genetic risk for NIHL and in the determination of NIHL susceptibility.
Noise exposure is one of the most common occupational risk factors and has several detrimental effects on health, including irritability, insomnia, fatigue, and hearing loss [1]. Noise-induced hearing loss (NIHL) is a worldwide occupational health risk and the second most frequent form of sensorineural hearing loss, after age-related hearing impairment (ARHI) [2]. The World Health Organization (WHO) and the National Institute for Occupational Safety and Health (NIOSH) have classified NIHL as a disorder with a high priority for research (http://www.cdc.gov/NIOSH/).
NIHL is a complex disease that results from the interaction of occupational noise exposure, other risk factors such as solvents, medication, and vibration, lifestyle factors (smoking and drinking status), and genetic risk factors [3, 4]. Noise exposure is associated with damage to the sensory cells of the cochlea and the outer hair cells [5]. Oxidative stress and synaptic excitotoxicity are the major mechanisms of morphological pathologies [6]. However, under similar levels of noise exposure, workers may suffer different intensities of hearing damage. These differences indicate that genetic susceptibility plays an important role in the incidence of NIHL under noise-exposure environments. Previous animal and human studies have identified several genetic loci associated with the risk for NIHL [7]. Genetic variations in the GSTM1, CAT, CDH23, KCNE1, heat shock protein 70, and 8-oxoG DNA glycosylase 1 genes have been found to be associated with NIHL risk [8,9,10,11,12,13]. However, the genetic mechanism of NIHL pathogenesis still remains unclear. Most previous studies have focused on a few candidate SNPs, and the sizes of the study populations were generally small [12,13,14].
In order to screen for NIHL-associated SNPs, we conducted a matched case–control study involving 476 NIHL workers and 476 normal hearing workers. We also established a genetic risk score (GRS) on the basis of these NIHL-associated SNPs and further examined these associations in a prospective cohort population.
NIHL-associated SNPs were screened from a case-control study and were validated in a prospective cohort. The flow chart of this study is presented in Fig. 1. The study protocol was approved by the Research Ethics Committees of Hangzhou Centers for Disease Prevention and Control, Zhejiang, China. All the participants provided the written informed consent.
Flow chart of this study. The subjects were recruited from the cross-sectional survey on the workers with occupational noise exposure. Then 476 NIHL subjects and matched controls were genotyped. NIHL-associated SNPs were screened and the GRS model was constructed. Four hundred eighty-five subjects without NIHL at baseline were further followed up, and the NIHL-associated SNPs and GRS model were further validated
Cross-sectional survey of NIHL
A cross-sectional survey of occupational noise-exposed workers was conducted in 2011, and its procedure was previously described in detail [10, 11]. In this survey, environmental noise exposure was evaluated with equivalent continuous dB (A)-weighted sound pressure levels (LEX, 8 h) according to Occupational Health Standard of the People's Republic of China: Measurement of Noise in the Workplace (GBZ/T189.8–2007) (China, 2007). All subjects who were exposed to occupational noise received annual health examinations, including routine physical examination, pure tone audiometry (PTA), epidemiological investigation, and whole blood collection. Hearing thresholds of both ears were determined with the ascending method in 5-dB steps at frequencies of 500 Hz, 1000 Hz, 2000 Hz, 3000 Hz, 4000 Hz, and 6000 Hz [10, 11].
The hearing threshold for each ear, determined by PTA, was averaged at 3000 Hz, 4000 Hz, and 6000 Hz for high frequency (HTHF), and at 500 Hz, 1000 Hz, and 2000 Hz for speech frequency (HTSF). NIHL was defined using the following criteria: normal hearing before exposure, > 1 year of occupational noise exposure, and a HTHF > 40 dB. To exclude hearing loss caused by factors other than noise, workers were excluded from the study when the difference in HTHF between the left and right ears was > 35 dB (HL).
Screening of NIHL-associated SNPs
To screen for NIHL-associated SNPs, 476 male subjects with NIHL and 476 controls matched for gender, noise intensity, and years of noise exposure were recruited from the subjects of the cross-sectional survey.
A total of 83 candidate SNPs in 29 genes were selected in this study based on the HapMap database and previous reports [9,10,11, 15]. The inclusion criteria for searching for tag SNPs were as follows: minor allele frequency (MAF) in CHB > 0.05 and a linkage disequilibrium value of r² > 0.8.
Whole blood was collected from each subject after an overnight fast. Genomic DNA was extracted from peripheral blood using the Toyobo MagExtractor Genomic DNA Purification Kit (Toyobo, Osaka, Japan) following the manufacturer's protocol. All the candidate SNPs were genotyped using nanofluidic dynamic arrays on the Fluidigm platform (South San Francisco, CA, USA) at the Bio-X Institutes (Shanghai, China). Duplicate control samples were set in every genotyping plate, and the concordance was > 99%. SNPs were excluded when the call rates were < 90% and the controls deviated from Hardy-Weinberg equilibrium (HWE) with a P < 0.01.
Follow-up and validation
An additional 584 subjects with normal hearing at baseline were further followed up after the cross-sectional study. Environmental noise exposure was monitored during this period using the same protocol as the cross-sectional survey. After 5 years, the subjects again underwent a health check-up, including hearing threshold determination. The same definition of NIHL was used to assess the hearing status of the subjects after follow-up.
The SNPs, which were screened in the case-control study, were genotyped in the subjects of follow-up. The associations of the genotypes with the incidence of NIHL were examined.
Cumulative noise exposure (CNE) was calculated as CNE = 10 × log10(10^(SPL/10) × years of noise exposure), equivalent to SPL + 10 × log10(years of noise exposure), where SPL is the sound pressure level [dB (A)] of noise exposure. Normally distributed continuous variables were expressed as the mean ± standard deviation (SD), and skewed variables as the median (P25, P75). Categorical variables were expressed as frequency (%). Student's t-test was used to examine the statistical significance of continuous variables, and the χ2 test for categorical variables. Hardy-Weinberg equilibrium (HWE) was tested using Pearson's χ2 test for each SNP in the control subjects, and SNPs deviating from HWE were excluded from the analysis. A multiple logistic regression was used to calculate the odds ratio (OR) and 95% CI, adjusted for confounders such as age, smoking/drinking status, and CNE.
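As a minimal illustration of two of the calculations mentioned above (not the authors' code), the following R lines compute CNE for a single exposure level, as reconstructed here, and the Pearson χ2 test for HWE from genotype counts; the variable names are assumptions for illustration.

## CNE for a single sound pressure level `spl` (dB(A)) and `years` of exposure
cne <- 10 * log10(10^(spl / 10) * years)       # equals spl + 10 * log10(years)

## Pearson chi-square test for HWE from genotype counts of one SNP in the controls
hwe_p <- function(n_AA, n_Aa, n_aa) {
  n <- n_AA + n_Aa + n_aa
  p <- (2 * n_AA + n_Aa) / (2 * n)             # frequency of allele A
  expected <- n * c(p^2, 2 * p * (1 - p), (1 - p)^2)
  chisq <- sum((c(n_AA, n_Aa, n_aa) - expected)^2 / expected)
  pchisq(chisq, df = 1, lower.tail = FALSE)
}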
GRS was defined as the sum of the number of risk alleles (0, 1, 2 for the additive model and 0, 1 for the dominant or recessive model, depending on the most appropriate model) across NIHL-associated SNPs. GRS represents the overall risk of a specific disease after integrating a series of SNP markers and was calculated using the following formula [16, 17].
$$ \mathrm{GRS} = \sum_{i=1}^{k} n_i $$
ni: the number of risk alleles for SNP i,
k: the number of NIHL-associated SNPs.
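In code, the GRS reduces to a row sum of risk-allele counts; the sketch below is an illustration only, assuming geno is a subjects × SNPs matrix coded 0/1/2 (additive) or 0/1 (dominant/recessive), with the risk allele as the counted allele.

grs       <- rowSums(geno, na.rm = TRUE)   # one GRS value per subject
high_risk <- grs >= 9                      # cut-off used below to dichotomize GRS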
Considering the limited sample size and the risk of false negatives, SNPs with P values < 0.05 or marginal significance (0.05 < P < 0.10) were selected and used to construct the GRS model. The dose-response relationship between GRS and OR was tested using the χ2 test for trend among the subjects with different levels of GRS.
All statistical analyses were performed using SPSS 19.0 for Windows (IBM Corporation, Armonk, NY, USA) and R software for Windows (3.3.1).
A total of 476 NIHL and 476 control subjects were recruited from the cross-sectional study for screening of NIHL-associated SNPs. However, genotyping failed for one control subject; therefore, 475 controls were included in the final analysis. The basic characteristics of the subjects have been described in detail in our previous study [18]. Briefly, the mean age of the NIHL subjects was 36.6 ± 8.5 years (mean ± SD), significantly older than the control individuals (32.8 ± 8.0) (P < 0.001). There were no statistically significant differences between the NIHL subjects and controls regarding smoking, drinking status, years of noise exposure, and median noise intensity (P > 0.05). CNE in the NIHL group was 95.5 (91.5, 100.5), slightly higher than in the control group [94.3 (91.0, 97.8)] (P = 0.05).
Eighty-three SNPs in 29 genes were selected as candidate SNPs and genotyped in this study. Additional file 1: Table S1 describes the characteristics of these candidate SNPs and the P values of the association analyses under additive, dominant, and recessive models. After HWE testing, 11 SNPs were excluded from further analysis due to deviation from the equilibrium. Finally, seven SNPs in six genes were found to be significantly associated with NIHL (P < 0.05). The screened genes in the NIHL group included CDH23, PCDH15, eyes absent 4 homolog (EYA4), MYO1A, the alpha subunit of the calcium-activated potassium channel (KCNMA1), and OTOG. We also found that an additional seven SNPs in six genes showed marginally significant associations with NIHL (P > 0.05 and < 0.1).
Table 1 presents the ORs and 95% CI of these NIHL-associated SNPs. Besides rs11004085 in the PCDH15 gene, rs3777781 and rs3777849 in EYA4, and rs2521768 in DFNA5, which we have reported before [18, 19], an additional 10 SNPs were found to be associated with NIHL risk. The variant alleles of rs1552245 in MYO1A, rs4747192 in CDH23, rs1043421 in MYO7A, rs696211 in KCNMA1, and rs471757 in GRHL2 decreased the risk for NIHL, whereas those of rs212769 in EYA4, rs2394795 in CDH23, rs7910544 in KCNMA1, rs3751385 in CX43, rs666026 in GRHL2, and rs7106021 in OTOG increased the risk for NIHL.
Table 1 Odd ratios (ORs) and 95% CI of NIHL– associated SNPs
Different distributions of GRS were observed between individuals with NIHL and the controls (Table 2 and Fig. 2). The median (P25, P75) GRS of the NIHL group was 9.0 (7.0, 10.0), significantly higher than that of the controls (8.0 (7.0, 9.0)) (P value < 0.001). Figure 2a also shows that NIHL subjects have higher levels of GRS than control subjects. When the subjects were divided based on GRS levels, the ORs were calculated with the subjects with the lowest levels of GRS (< 7) as the reference group. The ORs significantly correlated with GRS values (Ptrend < 0.01) (Table 2 and Fig. 2b). When the subjects were further classified into two groups (high or low GRS) based on a GRS value of 9, the subjects with GRS ≥ 9 had an increased risk for NIHL, with an OR of 1.58 (95% CI = 1.22, 2.04) compared to the subjects with GRS < 9.
Table 2 Associations between levels of GRS and risk for NIHL
Distribution of GRS in the subjects with NIHL and controls. a, different distributions of GRS in the subjects with NIHL and control subjects. The distribution of GRS in the subjects with NIHL is shifted towards higher values. b, dose-response relationship between the levels of GRS and risk of NIHL. The risk of NIHL positively correlates with the values of GRS
Five hundred eighty-four subjects, who had normal hearing capacity at baseline, were followed up in the cohort. After 5 years, 51 subjects were assessed to have NIHL using the same criteria as at baseline; therefore, the overall incidence rate of NIHL was 8.85%. The incidence rates of the subjects with different genotypes are shown in Table 3. The genotype of rs212769 in the EYA4 gene was associated with the risk of NIHL incidence (Ptrend = 0.017). Compared to the subjects who were homozygous for the G allele, the subjects carrying the variant allele showed an increased risk for NIHL (OR: 6.71, 95% CI = 1.77–25.41). The subjects who were homozygous for the variant allele of rs7910544 in the KCNMA1 gene had an increased risk for NIHL (OR: 5.23, 95% CI = 1.20–22.71). No significant association was found for the other SNPs.
Table 3 Odd ratios (ORs) and 95% CI of NIHL– associated SNPs in the cohort study
The incidence rates of NIHL among subjects in different levels of GRS are also presented in Table 3. The incidence of NIHL was marginally correlated with GRS levels (Ptrend = 0.092). Subjects with high GRS values (≥ 9) had a NIHL incidence of 10.88%, significantly higher than that of subjects with low GRS values (< 9) (OR = 2.00, 95% CI = 1.04, 3.86; P = 0.038).
In this study, we screened 14 NIHL-associated SNPs in 10 genes in a cross-sectional study. A genetic risk prediction model was constructed based on these SNPs. A dose–response relationship was found between the levels of GRS and NIHL risk. Two SNPs were also validated in the prospective cohort. These findings indicate that genetic susceptibility plays an important role in NIHL incidence. To the best of our knowledge, this is the first study to evaluate individual genetic susceptibility to NIHL based on multiple SNP loci.
NIHL results from the interaction of occupational noise exposure and genetic risk factors. The differences in risk observed under similar noise exposure environments indicate that genetic susceptibility might play an important role in NIHL [3, 8]. Previous studies have uncovered several NIHL-associated SNPs in the GSTM1, CAT, CDH23, KCNE1, heat shock protein 70, and 8-oxoG DNA glycosylase 1 genes. In the Chinese population, we have identified that rs11004085 in the PCDH15 gene, rs3777781 and rs212769 in the EYA4 gene, rs666026 in the GRHL2 gene, and rs2521768 in the DFNA5 gene are associated with NIHL risk [10, 11]. However, these findings are still not enough to explain the incidence of hearing loss with noise exposure. In the present study, using the results of a case–control investigation, we screened 14 NIHL-associated SNPs.
We previously reported that sequence variants in the PCDH15, EYA4, GRHL2, and DFNA5 genes are associated with NIHL [10, 11]. Here, we also found that SNPs in the CDH23, CX43, KCNMA1, MYO1A, MYO7A, and OTOG genes are associated with NIHL. However, the effect of a single polymorphism locus is weak. Therefore, these genetic markers were integrated into an index, the GRS, to evaluate individual genetic predisposition to NIHL, as has been done for other complex diseases such as cancer, obesity, and diabetes [17, 20]. The NIHL group showed higher GRS values than the controls, indicating that subjects with NIHL have a higher genetic susceptibility than controls. We also found that subjects with high GRS values had a greater risk for NIHL (OR: 2.69, 95% CI = 1.71, 4.23) when compared with those with low GRS values. Two of these SNPs, rs212769 and rs7910544, were also validated in our prospective cohort. Using these genetic biomarkers, we were able to screen for NIHL-susceptible subjects and to identify noise-exposed workers with higher sensitivity to NIHL. It is relatively difficult to avoid noise exposure in most occupational environments; therefore, preventative measures for high-risk populations are essential, and primary prevention (targeting etiological factors) is an effective and efficient measure. Once susceptible individuals have been screened and recognized, measures such as appropriate job selection, reduced noise exposure, and strengthened protection (e.g., wearing ear plugs or a helmet) in noisy environments could effectively reduce the risk of NIHL. A previous study found that better use of hearing protection as part of a program probably helps but does not fully protect against hearing loss, and that improved implementation might provide better protection [21].
The EYA4 gene is a member of the vertebrate EYA family of transcriptional activators. We previously conducted an investigation on this gene and found that the SNPs rs3777781 and rs212769 in the EYA4 gene were significantly associated with NIHL risk [10]. Furthermore, rs212769 was also validated in the prospective study: subjects with the AA genotype had an increased risk of incident NIHL. These findings indicate that EYA4 may play an important role in the incidence of NIHL. K+ is the main charge carrier of sound sensory conduction, and its normal circulation is important for the function of hearing. Previous studies have shown that sequence variants in related genes may lead to hearing loss [1, 22,23,24,25]. Previous investigations on the associations between potassium circulation channel genes and NIHL have mainly focused on the GJB2, GJB3, GJB6, KCNE1, KCNQ1, and KCNQ4 genes, and some have been validated in various populations [11]. KCNMA1 (the BK channel), one of the potassium ion channel proteins, is expressed in the tubular system and renal vasculature, and mainly functions to control the transmembrane fluxes of K+ and Ca2+. However, KCNMA1 has been mainly studied in relation to tumorigenesis [26, 27]. In animal studies, mice lacking KCNMA1 develop normal hearing in early life, but later show progressive hearing loss, indicating that KCNMA1 is not essential for basic inner hair cell function [28, 29]. The potassium channels of KCNMA1 are, however, apparently essential for the survival of outer hair cells, which are major structural components of hearing [29, 30]. In this study, we found that the homozygous mutant genotype (CC) of rs696211 in the KCNMA1 gene decreased, and the CC genotype of rs7910544 increased, the risk of NIHL. Therefore, genetic variation in KCNMA1 modifies the risk of NIHL.
This may be the first study to evaluate individual genetic risk for NIHL in a noise-exposed environment based on multiple SNP loci. One strength of the study is that our preliminarily established NIHL risk prediction model, which uses 14 SNPs to screen for high-risk individuals, was partly validated in the follow-up cohort using NIHL incidence over a 5-year period. However, the sample size of the prospective cohort was relatively small and the follow-up time relatively short, so false-negative results may still exist in this study. Further studies including both men and women and a larger sample size should be performed to validate the results.
Briefly, 14 SNPs in 10 genes were found to be associated with NIHL risk. Two of these SNPs, rs212769 and rs7910544, were also validated in the prospective cohort study. GRS values were significantly correlated with NIHL risk. These findings suggest that genetic susceptibility plays an important role in NIHL incidence and provide a method for identifying individuals at higher genetic risk of NIHL.
CNE:
Cumulative noise exposure
GRS:
Genetic risk score
MAF:
Minor allele frequency
NIHL:
Noise-induced hearing loss
SNP:
Single-nucleotide polymorphism
Nelson DI, Nelson RY, Concha-Barrientos M, Fingerhut M. The global burden of occupational noise-induced hearing loss. Am J Ind Med. 2005;48(6):446–58.
Sliwinska-Kowalska M, Zaborowski K. WHO environmental noise guidelines for the European region: a systematic review on environmental noise and permanent hearing loss and tinnitus. Int J Environ Res Public Health. 2017;14: E1139.
Konings A, Van Laer L, Van Camp G. Genetic studies on noise-induced hearing loss: a review. Ear Hear. 2009;30(2):151–9.
Sliwinska-Kowalska M, Pawelczyk M. Contribution of genetic factors to noise-induced hearing loss: a human studies review. Mutat Res. 2013;752(1):61–5.
Henderson D, Bielefeld EC, Harris KC, Hu BH. The role of oxidative stress in noise-induced hearing loss. Ear Hear. 2006;27(1):1–19.
Pilati N, Ison MJ, Barker M, Mulheran M, Large CH, Forsythe ID, Matthias J, Hamann M. Mechanisms contributing to central excitability changes during hearing loss. P Natl Acad Sci USA. 2012;109(21):8292–7.
White CH, Ohmen JD, Sheth S, Zebboudj AF, McHugh RK, Hoffman LF, Lusis AJ, Davis RC, Friedman RA. Genome-wide screening for genetic loci associated with noise-induced hearing loss. Mamm Genome. 2009;20(4):207–13.
Davis RR, Newlander JK, Ling X, Cortopassi GA, Krieg EF, Erway LC. Genetic basis for susceptibility to noise-induced hearing loss in mice. Hear Res. 2001;155(1–2):82–90.
Konings A, Van Laer L, Pawelczyk M, Carlsson PI, Bondeson ML, Rajkowska E, Dudarewicz A, Vandeveldel A, Fransen E, Huyghe J, et al. Association between variations in CAT and noise-induced hearing loss in two independent noise-exposed populations. Hum Mol Genet. 2007;16(15):1872–83.
Kowalski TJ, Pawelczyk M, Rajkowska E, Dudarewicz A, Sliwinska-Kowalska M. Genetic variants of CDH23 associated with noise-induced hearing loss. Otol Neurotol. 2014;35(2):358–65.
Pawelczyk M, Van Laer L, Fransen E, Rajkowska E, Konings A, Carlsson PI, Borg E, Van Camp G, Sliwinska-Kowalska M. Analysis of gene polymorphisms associated with K ion circulation in the inner ear of patients susceptible and resistant to noise-induced hearing loss. Ann Hum Genet. 2009;73(Pt 4):411–21.
Shen HX, Cao JL, Hong ZQ, Liu K, Shi J, Ding L, Zhang HD, Du C, Li Q, Zhang ZD, et al. A functional Ser326Cys polymorphism in hOGG1 is associated with noise-induced hearing loss in a Chinese population. PLoS One. 2014;9:e89662.
Sliwinska-Kowalska M, Noben-Trauth K, Pawelczyk M, Kowalski TJ. Single nucleotide polymorphisms in the cadherin 23 (CDH23) gene in polish workers exposed to industrial noise. Am J Human Biol. 2008;20(4):481–3.
Lin CY, Wu JL, Shih TS, Tsai PJ, Sun YM, Guo YL. Glutathione S-transferase M1, T1, and P1 polymorphisms as susceptibility factors for noise-induced temporary threshold shift. Hear Res. 2009;257(1–2):8–15.
Konings A, Van Laer L, Wiktorek-Smagur A, Rajkowska E, Pawelczyk M, Carlsson PI, Bondeson ML, Dudarewicz A, Vandevelde A, Fransen E, et al. Candidate gene association study for noise-induced hearing loss in two independent noise-exposed populations. Ann Hum Genet. 2009;73(2):215–24.
Paynter NP, Chasman DI, Pare G, Buring JE, Cook NR, Miletich JP, Ridker PM. Association between a literature-based genetic risk score and cardiovascular events in women. J Am Med Assoc. 2010;303(7):631–7.
Ye Z, Austin E, Schaid DJ, Kullo IJ. A multi-locus genetic risk score for abdominal aortic aneurysm. Atherosclerosis. 2016;246:274–9.
Zhang X, Liu Y, Zhang L, Yang Z, Shao Y, Jiang C, Wang Q, Fang X, Xu Y, Wang H, et al. Genetic variations in protocadherin 15 and their interactions with noise exposure associated with noise-induced hearing loss in Chinese population. Environ Res. 2014;135:247–52.
Zhang X, Liu Y, Zhang L, Yang Z, Yang L, Wang X, Jiang C, Wang Q, Xia Y, Chen Y, et al. Associations of genetic variations in EYA4, GRHL2 and DFNA5 with noise-induced hearing loss in Chinese population: a case- control study. Environ Health. 2015;14:77.
Go MJ, Lee Y, Park S, Kwak SH, Kim BJ, Lee J. Genetic-risk assessment of GWAS-derived susceptibility loci for type 2 diabetes in a 10 year follow-up of a population-based cohort study. J Hum Genet. 2016;61:1009-12.
Tikka C, Verbeek JH, Kateman E, Morata TC, Dreschler WA, Ferrite S. Interventions to prevent occupational noise-induced hearing loss. Cochrane Database Syst Rev. 2017;7:CD006396.
Coucke PJ, Van Hauwe P, Kelley PM, Kunst H, Schatteman I, Van Velzen D, Meyers J, Ensink RJ, Verstreken M, Declau F, et al. Mutations in the KCNQ4 gene are responsible for autosomal dominant deafness in four DFNA2 families. Hum Mol Genet. 1999;8(7):1321–8.
Kubisch C, Schroeder BC, Friedrich T, Lutjohann B, El-Amraoui A, Marlin S, Petit C, Jentsch TJ. KCNQ4, a novel potassium channel expressed in sensory outer hair cells, is mutated in dominant deafness. Cell. 1999;96(3):437–46.
Neyroud N, Tesson F, Denjoy I, Leibovici M, Donger C, Barhanin J, Faure S, Gary F, Coumel P, Petit C, et al. A novel mutation in the potassium channel gene KVLQT1 causes the Jervell and Lange-Nielsen cardioauditory syndrome. Nat Genet. 1997;15(2):186–9.
Tyson J, Tranebjerg L, Bellman S, Wren C, Taylor JFN, Bathen J, Aslaksen B, Sorland SJ, Lund O, Malcolm S, et al. IsK and KVLQT1: mutation in either of the two subunits of the slow component of the delayed rectifier potassium channel can cause Jervell and Lange-Nielsen syndrome. Hum Mol Genet. 1997;6(12):2179–85.
Khaitan D, Sankpal UT, Ningaraj NS. Role of KCNMA1 gene in breast cancer invasion and metastasis to brain. Cancer Res. 2009;69(2):169s.
Ningaraj NS, Khaitan D, Sankpal UT. Role of KCNMA1 in breast Cancer metastasis and angiogenesis. Cancer Res. 2009;69(24):877s.
Oliver D, Taberner AM, Thurm H, Sausbier M, Arntz C, Ruth P, Fakler B, Liberman MC. The role of BKCa channels in electrical signal encoding in the mammalian auditory periphery. J Neurosci. 2006;26(23):6181–9.
Ruttiger L, Sausbier M, Zimmermann U, Winter H, Braig C, Engel J, Knirsch M, Arntz C, Langer P, Hirt B, et al. Deletion of the Ca2+−activated potassium (BK) alpha-subunit but not the BK beta 1-subunit leads to progressive hearing loss. P Natl Acad Sci USA. 2004;101(35):12922–7.
Engel J, Braig C, Ruttiger L, Kuhn S, Zimmermann U, Blin N, Sausbier M, Kalbacher H, Munkner S, Rohbock K, et al. Two classes of outer hair cells along the tonotopic axis of the cochlea. Neuroscience. 2006;143(3):837–49.
This study was supported by the Zhejiang Public Welfare Science and Technology Project (2015C33225) and Zhejiang Provincial Program for the Cultivation of High-Level Innovative Health Talents.
The authors are grateful to all the subjects and all the investigators in this study.
The Zhejiang Public Welfare Science and Technology Project (2015C33225) and Zhejiang Provincial Program for the Cultivation of High-Level Innovative Health Talents supported this study.
Data and material will be made available by the authors upon reasonable request.
Xuhui Zhang and Yaqin Ni contributed equally to this work.
Hangzhou Center for Disease Control and Prevention, Hangzhou, 310021, Zhejiang, China
Xuhui Zhang
, Zhangping Yang
, Hao Li
& Yuyong Xia
Department of Epidemiology and Biostatistics, Department of Respiratory, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Zhejiang, Hangzhou, 310058, People's Republic of China
Yaqin Ni
, Yi Liu
& Yimin Zhu
Hangzhou Hospital for Prevention and Treatment of Occupational Disease, Hangzhou, 310014, Zhejiang, China
& Qiang Wang
Zhejiang Center for Disease Control and Prevention, Hangzhou, 310051, Zhejiang, China
Meibian Zhang
Yongkang Center for Disease Control and Prevention, Yongkang, 321304, People's Republic of China
Xinyan Fang
Authors: Xuhui Zhang, Yaqin Ni, Yi Liu, Lei Zhang, Meibian Zhang, Xinyan Fang, Zhangping Yang, Qiang Wang, Hao Li, Yuyong Xia and Yimin Zhu
XZ and YZ conceived and designed the study. YZ drafted and refined the manuscript. YL and YN performed the laboratory determinations and statistical analyses. MZ, LZ, YZ, QW, LH, and YX participated in the epidemiological investigation. All authors approved the final version of the manuscript for submission.
Correspondence to Yimin Zhu.
The study protocol was approved by the Research Ethics Committees of Hangzhou Centers for Disease Prevention and Control, Zhejiang, China. All the participants provided the written informed consent.
Table S1. Distribution of allele and genotype frequencies in the subjects of Case and Control. (DOCX 37 kb)
Zhang, X., Ni, Y., Liu, Y. et al. Screening of noise-induced hearing loss (NIHL)-associated SNPs and the assessment of its genetic susceptibility. Environ Health 18, 30 (2019) doi:10.1186/s12940-019-0471-9
Received: 27 November 2018
Genetic susceptibility
Noise-induced hearing loss (NIHL)
Single nucleotide polymorphisms (SNPs)
Genetic risk prediction
Genetic risk score (GRS)
\begin{document}
\title{On the Hamilton-Waterloo problem: the case of two cycle sizes of different parity} \author{Melissa Keranen\\ Michigan Technological University\\ Department of Mathematical Sciences\\ Houghton, MI 49931, USA\\ \\ Adri\'{a}n Pastine\\ Universidad Nacional de San Luis\\ Departamento de Matem\'{a}tica\\ San Luis, Argentina}
\maketitle \footnote{email addresses: Melissa Keranen ([email protected]), Adri\'{a}n Pastine ([email protected])}
\begin{abstract} The Hamilton-Waterloo problem asks for a decomposition of the complete graph $K_v$ into $r$ copies of a 2-factor $F_{1}$ and $s$ copies of a 2-factor $F_{2}$ such that $r+s=\left\lfloor\frac{v-1}{2}\right\rfloor$. If $F_{1}$ consists of $m$-cycles and $F_{2}$ consists of $n$-cycles, then we call such a decomposition a $(m,n)-\ensuremath{\mbox{\sf HWP}}(v;r,s)$. The goal is to find a decomposition for every possible pair $(r,s)$. In this paper, we show that for odd $x$ and $y$, there is a $(2^kx,y)-\ensuremath{\mbox{\sf HWP}}(vm;r,s)$ if $\gcd(x,y)\geq 3$, $m\geq 3$, $4^k$ divides $v$, and both $x$ and $y$ divide $v$, except possibly when $1\in\{r,s\}$. \end{abstract} \keywords{$2$-Factorizations, Hamilton-Waterloo Problem, Oberwolfach Problem, Cycle Decomposition, Resolvable Decompositions}
\section{Introduction}
The Oberwolfach problem asks for a decomposition of the complete graph $K_v$ into $\frac{v-1}{2}$ copies of a $2$-factor $F$. To achieve this decomposition, $v$ needs to be odd, because the vertices must have even degree. The problem with $v$ even asks for a decomposition of $K_v$ into $\frac{v-2}{2}$ copies of a $2$-factor $F$, and one copy of a $1$-factor. The uniform Oberwolfach problem (all cycles of the $2$-factor have the same size) has been completely solved by Alspach and H\"{a}ggkvist \cite{AH} and Alspach, Schellenberg, Stinson and Wagner \cite{ASSW}. The non-uniform Oberwolfach problem has been studied as well, and a survey of results up to 2006 can be found in \cite{BR}. Furthermore, one can refer to \cite{BS2,BDD,BDP,RT,T} for more recent results.
In \cite{Liu1} Liu first worked on the generalization of the Oberwolfach problem to equipartite graphs. Here we are seeking to decompose the complete equipartite graph $K_{(m:n)}$ with $n$ partite sets of size $m$ each into $\frac{(n-1)m}{2}$ copies of a $2$-factor $F$. Here $(n-1)m$ has to be even. In \cite{HH} Hoffman and Holliday worked on the equipartite generalization of the Oberwolfach problem when $(n-1)m$ is odd, decomposing into $\frac{(n-1)m-1}{2}$ copies of a $2$-factor $F$, and one copy of a $1$-factor. The uniform Oberwolfach problem over equipartite graphs has since been completely solved by Liu \cite{Liu2} and Hoffman and Holliday \cite{HH}. For the non-uniform case, Bryant, Danziger and Pettersson \cite{BDP} completely solved the case when the $2$-factor is bipartite. In particular, Liu showed the following.
\begin{theorem}\label{equip}
{\normalfont\cite{Liu2}} For $m\geq 3$ and $u\geq 2$, $K_{(h:u)}$ has a resolvable $C_m$-factorization if
and only if $hu$ is divisible by $m$, $h(u-1)$ is even, $m$ is even if $u=2$, and $(h,u,m) \not \in
\{(2,3,3), (6,3,3),(2,6,3),\\(6,2,6)\}$. \end{theorem}
The Hamilton-Waterloo problem is a variation of the Oberwolfach problem, in which we consider two $2$-factors, $F_1$ and $F_2$. It asks for a factorization of $K_v$ when $v$ is odd or $K_v-I$ ($I$ is a $1$-factor) when $v$ is even into $r$ copies of $F_1$ and $s$ copies of $F_2$ such that $r+s=\left\lfloor\frac{v-1}{2}\right\rfloor$, where $F_1$ and $F_2$ are two $2$-regular graphs on $v$ vertices. Most of the results for the Hamilton-Waterloo problem are uniform, meaning $F_1$ consists of cycles of size $m$ ($C_{m}$-factors), and $F_2$ consists of cycles of size $n$ ($C_{n}$-factors). We refer to a decomposition of $K_v$ into $r$ $C_{m}$-factors and $s$ $C_{n}$-factors as a $(m,n)-\ensuremath{\mbox{\sf HWP}}(v;r,s)$. The case where both $m$ and $n$ are odd positive integers and $v$ is odd is almost completely solved by \cite{BDT,BDT2}; and if $m$ and $n$ are both even, then the problem again is almost completely solved (see \cite{BD,BDD}). However, if $m$ and $n$ are of differing parities, then we only have partial results. Most of the work has been done in the case where one of the cycle sizes is constant. The case of $(m,n)=(3,4)$ is solved in \cite{BB,DQS,OO,WCC}. Other cases which have been studied include $(m,n)=(3,v)$ \cite{LS}, $(m,n)=(3,3x)$ \cite{AKKPO}, and $(m,n)=(4,n)$ \cite{KO,OO} .
In this paper, we consider the case of $m$ and $n$ being of different parity. This case has gained attention recently, where it has been shown that the necessary conditions are sufficient for a $(m,n)-\ensuremath{\mbox{\sf HWP}}(v;r,s)$ whenever $m \mid n$, $v>6n>36m$, and $s \geq 3$ \cite{BDT3}. We provide a complementary result to this in our main theorem, which covers cases in which $m \nmid n$ and solves a major portion of the problem.
\begin{theorem} \label{main} Let $x,y,v,k$ and $m$ be positive integers such that: \begin{enumerate}[i)] \item $v,m\geq 3$, \item $x,y$ are odd, \item $\gcd(x,y)\geq 3$, \item $x$ and $y$ divide $v$. \item $4^k$ divides $v$. \end{enumerate} Then there exists a $(2^kx,y)\ensuremath{\text{--}}\ensuremath{\mbox{\sf HWP}}(vm;r,s)$ for every pair $r,s$ with $r+s=\left\lfloor(vm-1)/2\right\rfloor$, $r,s\neq 1$. \end{theorem}
\section{Preliminaries} Let $G$ and $H$ be multipartite graphs. Then the \textit{partite product} of $G$ and $H$, $G \otimes H$ is defined by: \begin{itemize}
\item $V(G\otimes H)=\{(g,h,i)| (g,i)\in V(G)$ and $(h,i)\in V(H)\}$.
\item $E(G\otimes H)=\{\{(g_1,h_1,i),(g_2,h_2,j)\}|\{(g_1,i),(g_2,j)\}\in E(G)$ and $\{(h_1,i),(h_2,j)\} \in E(H)\}$. \end{itemize} where two vertices in $G$ (or $H$) $(g,i),(g,j)$ ($(h,i),(h,j)$) are in the same partite set in $G$ ($H$) if and only if $i=j$. The \textit{complete cyclic multipartite graph} $C_{(x:k)}$ is the graph with $k$ partite sets of size $x$, where two vertices $(g,i)$ and $(h,j)$ are neighbors if and only if $i-j=\pm 1\pmod{k}$, with subtraction being done modulo $k$. The \textit{directed complete cyclic multipartite graph} $\overrightarrow{C}_{(x:k)}$ is the graph with $k$ parts of size $x$, with arcs of the form $\big((g,i), (h,i+1)\big)$ for every $0\leq g,h\leq x-1$, $0\leq i\leq k-1$.
In \cite{KP}, decompositions of $\overrightarrow{C}_{(x:k)}$, $x$ odd, into $C_{k}$-factors and $C_{xk}$-factors, and decompositions of $\overrightarrow{C}_{(4:k)}$ into $C_{k}$-factors and $C_{2k}$-factors were given. Then by using multivariate bijections, decompositions of $\overrightarrow{C}_{(xy:k)}$, $x$ and $y$ odd, into $C_{xk}$-factors and $C_{yk}$-factors were obtained.
This was used in conjunction with the following three results to produce the main theorems given in their paper.
\begin{lemma}[\cite{KP}]\label{productofbalanced} Let $\overrightarrow{G}$ and $\overrightarrow{H}$ be a $\overrightarrow{C}_n$-factor and a $ \overrightarrow{C}_m$-factor of $\overrightarrow{C}_{(x:k)}$ and $\overrightarrow{C}_{(y:k)}$, respectively. Then $\overrightarrow{G}\otimes \overrightarrow{H}$ is a $\overrightarrow{C}_{l}$-factor of $\overrightarrow{C}_{(xy:k)}$, where $l=\frac{nm}{gcd(n,m)}$. \end{lemma} \begin{theorem}[Distribution,\cite{KP}]\label{distributive} Let $G=\oplus_i G_i$ and $H=\oplus_j H_j$ be $k$-partite graphs. Then $G\otimes H=\left(\oplus_i G_i \right)\otimes \left(\oplus_j H_j\right)$. Furthermore, the following distributive property holds: \[ \left(\oplus_i G_i\right)\otimes \left(\oplus_j H_j\right)=\oplus_i\left(G_i\otimes \oplus_j H_j\right)= \oplus_i\oplus_j\left( G_i\otimes H_j\right) \] \end{theorem}
\begin{lemma}[\cite{KP}]\label{cvntokvm} Let $m$, $x_1,\ldots,x_p$, $y_1,\ldots,y_p$, and $v$ be positive integers. Let $s_1,\ldots,s_{\frac{m-1} {2}}$ be non-negative integers. Suppose the following conditions are satisfied: \begin{itemize} \item There exists a decomposition of $K_m$ into $[n_1,\ldots,n_p]$-factors. \item For every $1\leq i\leq p$, and for every $1\leq t \leq \frac{m-1}{2}$ there exists a decomposition of $C_{(v:n_i)}$ into $s_t$ $C_{x_in_i}$-factors and $r_t$ $C_{y_in_i}$-factors. \end{itemize}
Let \[ s=\sum_{t=1}^{\frac{(m-1)}{2}}s_t\text{\hspace{.3in} and \hspace{.3in}}r=\sum_{t=1}^{\frac{(m-1)}{2}}r_t \]
Then there exists a decomposition of $K_{(v:m)}$ into $s$ $[(x_1n_1)^{\frac{v}{x_1}},\ldots, (x_pn_p)^{\frac{v}{x_p}}]$-factors and $r$ $[(y_1n_1)^{\frac{v}{y_1}},\ldots,$ $(y_pn_p)^{\frac{v}{y_p}}]$- factors. \end{lemma}
Lemmas~\ref{productofbalanced} and ~\ref{cvntokvm} and Theorem~\ref{distributive} will be employed in this paper as well to produce the decompositions we are interested in. In Section \ref{section even case} we give decompositions of $\overrightarrow{C}_{(4^k:n)}$ into $ \overrightarrow{C}_{2^kn}$-factors and $\overrightarrow{C}_{n}$-factors. In Section \ref{section multivariate bijections} we use multivariate bijections to give decompositions of $\overrightarrow{C}_{(4^kxy:n)}$ into $\overrightarrow{C}_{2^kxk}$-factors and $\overrightarrow{C}_{yk}$-factors. Then in Section \ref{section main results}, we use these decompositions to prove our main results.
\section{Equipartite Decompositions}\label{section even case} We will start decomposing $\overrightarrow{C}_{(4^k:3)}$ into $\overrightarrow{C}_{2^{k}3}$-factors and $\overrightarrow{C}_{3}$-factors. Label the vertices in each partite set of $\overrightarrow{C}_{(4^k:3)}$ with the elements of the quotient ring $\mathfrak{R}=\mathbb{Z}_{2^{k}}[x]/\left\langle x^2+x+1\right\rangle$. Because the elements of $\mathfrak{R}$ are of the form $ax+b$, with $a,b\in\mathbb{Z}_{2^{k}}$, there are $4^k$ of them. For each $\alpha \in \mathfrak{R}$ let $f_{\alpha}(y)=xy+\alpha$.
\begin{lemma}\label{lemmapropertiesoffalpha} The functions $f_{\alpha}$ are bijections, with $f^3_{\alpha}(y)=y$. \end{lemma}
\begin{proof} We will first show that $f_{0}$ is a bijection. Notice that the element $x\in \mathfrak{R}$ is a unit, because \begin{align*}
x^2+x+1&=0,&\text{and}& &x(-x-1)&=-x^2-x=1.\\ \end{align*} Therefore, $xy_1=xy_2$ if and only if $y_1=y_2$, and so $f_{0}$ is a bijection. Because every element has an additive inverse we get that $f_{\alpha}$ is a bijection. We will see now that $f^3_{\alpha}(y)=y$:
\begin{align*} f_{\alpha}(y)&=xy+\alpha\\ f^2_{\alpha}(y)&=x(xy+\alpha)+\alpha\\ &=x^2y+x\alpha+\alpha\\ &=-xy-y+x\alpha+\alpha\\ f^3_{\alpha}(y)&=x(-xy-y+x\alpha+\alpha)+\alpha\\ &=-x^2y-xy+x^2\alpha+x\alpha+\alpha\\ &=xy+y-xy+(x^2+x+1)\alpha\\ &=y+0\alpha\\ &=y \end{align*}
Therefore $f_{\alpha}$ is a bijection and $f^3_{\alpha}(y)=y$. \end{proof}
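The computation above is elementary but easy to get wrong when working modulo $2^k$; the following short script (only a sanity check, not part of the proof, written under the representation of $a+bx$ as the pair $(a,b)$ and the reduction $x^2=-x-1$) verifies for small $k$ that each $f_{\alpha}$ is a bijection of $\mathfrak{R}$ satisfying $f_{\alpha}^3=\mathrm{id}$.
\begin{verbatim}
# Elements of R = Z_{2^k}[x]/(x^2+x+1) are pairs (a, b) standing for a + b*x.
# Since x^2 = -x - 1, we have x*(a + b*x) = -b + (a - b)*x, hence
# f_alpha(y) = x*y + alpha is easy to iterate.
def f(alpha, y, mod):
    a, b = y
    a0, a1 = alpha
    return ((a0 - b) % mod, (a1 + a - b) % mod)

for k in (1, 2, 3):
    mod = 2 ** k
    R = [(a, b) for a in range(mod) for b in range(mod)]
    for alpha in R:
        image = {f(alpha, y, mod) for y in R}
        assert len(image) == len(R)          # f_alpha is a bijection
        for y in R:                          # f_alpha composed three times is the identity
            assert f(alpha, f(alpha, f(alpha, y, mod), mod), mod) == y
print("checked k = 1, 2, 3")
\end{verbatim}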
Let $T_{4^k}(\alpha)$ be the subgraph of $\overrightarrow{C}_{(4^k:3)}$ where each element $y$ in a partite set is connected to the element $f_{\alpha}(y)$ in the next partite set. Because of Lemma \ref{lemmapropertiesoffalpha}, $T_{4^k}(\alpha)$ is a $\overrightarrow{C}_3$-factor ($f^3_{\alpha}(y)=y$) and given two different elements, $\alpha$ and $\beta$, $T_{4^k}(\alpha)$ and $T_{4^k}(\beta)$ are disjoint. Therefore $\overrightarrow{C}_{(4^k:3)}=\bigoplus_{\alpha\in \mathfrak{R}}T_{4^k}(\alpha)$ is a $ \overrightarrow{C}_3$-factorization of $\overrightarrow{C}_{(4^k:3)}$.
Let $H_{4^k}(\alpha,\beta)$ be the subgraph of $\overrightarrow{C}_{(4^k:3)}$ where each element $y$ of the first and second partite sets is connected to the element $f_{\alpha}(y)$ of the second and third partite sets respectively, and each element $y$ of the third partite set is connected to the element $f_{\beta}(y)$ of the first partite set. The following result is easy to see:
\begin{lemma}\label{fromTtoH} Let $\phi$ be a permutation of $\mathfrak{R}$, then \[ \overrightarrow{C}_{(4^k:3)}=\bigoplus_{\alpha\in \mathfrak{R}}T_{4^k}(\alpha)=\bigoplus_{\alpha\in \mathfrak{R}}H_{4^k}(\alpha,\phi(\alpha)) \] and $\bigoplus_{\alpha\in \mathfrak{R}}H_{4^k}(\alpha,\phi(\alpha))$ is a decomposition of $ \overrightarrow{C}_{(4^k:3)}$. \end{lemma}
\begin{proof} The first equality is true by the discussion preceding this lemma. The second equality is true because of $\phi$ being a permutation (each edge gets used once). \end{proof}
Consider $y_1\in \mathfrak{R}$. Let $y_2=f_{\alpha}(y_1)$, $y_3=f_{\alpha}(y_2)$, and $y_4=f_{\beta}(y_3) $. Then because $f_{\alpha}^3(y_1)=y_1$, we have $xy_3+\alpha=f_{\alpha}(y_3)=f_{\alpha}^3(y_1)=y_1$. We also have $f_{\beta}(y_3)=xy_3+\beta=y_4$. So $y_1-y_4=\alpha-\beta$. Therefore, if $\alpha-\beta\in\{\pm 1,\pm x,\pm x\pm 1\}$, then $H_{4^k}(\alpha,\beta)$ consists of directed cycles of length $3\cdot 2^{k}$ because $\alpha-\beta$ has additive order $2^k$ in $\mathfrak{R}$. Hence if $\phi$ has $r$ fixed points and for every non-fixed point $\alpha$ we have $\alpha-\phi(\alpha)\in\{\pm 1, \pm x, \pm x \pm 1\}$, we obtain a decomposition of $\overrightarrow{C}_{(4^k:3)}$ into $r$ $\overrightarrow{C}_3$-factors and $4^k- r$ $\overrightarrow{C}_{3\cdot2^k}$-factors. If $\alpha=\beta$, then because $H_{4^k}(\alpha,\alpha)=T_{4^k}(\alpha)$, we have that $H_{4^k}(\alpha,\beta)$ is a $\overrightarrow{C}_3$-factor of $\overrightarrow{C}_{(4^k:3)}$. Thus we have the following lemma.
\begin{lemma}\label{3h4k} If $\alpha=\beta$, then $H_{4^k}(\alpha,\beta)$ is a $\overrightarrow{C}_{3}$-factor of $\overrightarrow{C}_{(4^k:3)}$. If $\alpha-\beta\in\{\pm 1,\pm x,\pm x\pm 1\}$, then $H_{4^k}(\alpha,\beta)$ is a $\overrightarrow{C}_{3\cdot 2^k}$-factor of $\overrightarrow{C}_{(4^k:3)}$. \end{lemma}
Let $\pi:\mathfrak{R}\rightarrow \mathbb{Z}_{2^{k-1}}\times \mathbb{Z}_{2^{k-1}}\times \mathbb{Z}_{4}$ be defined as: \begin{center} \[ \pi(a+bx)\coloneqq\left\lbrace\begin{array}{ccc} (\lfloor a/2\rfloor,\lfloor b/2\rfloor,0)&\text{ if }& \text{ $a$ and $b$ are even}\\ (\lfloor a/2\rfloor,\lfloor b/2\rfloor,1)&\text{ if }& \text{ $a$ is odd and $b$ is even}\\ (\lfloor a/2\rfloor,\lfloor b/2\rfloor,2)&\text{ if }& \text{ $a$ is even and $b$ is odd}\\ (\lfloor a/2\rfloor,\lfloor b/2\rfloor,3)&\text{ if }& \text{ $a$ and $b$ are odd}\\ \end{array}\right. \] \end{center} Notice that $\pi$ is a bijection, and that if $\rho$ is a permutation on $\mathbb{Z}_{2^{k-1}}\times \mathbb{Z}_{2^{k-1}}\times \mathbb{Z}_{4}$ that fixes the first two coordinates and has $r$ fixed points, then $\phi=\pi^{-1}(\rho(\pi))$ is a permutation of $\mathfrak{R}$ that has $r$ fixed points and for every non-fixed point $\alpha$ we have $\alpha-\phi(\alpha)\in\{\pm 1, \pm x, \pm x \pm 1\}$.
Because we are asking for $\rho$ to fix the first two coordinates, finding the necessary function is similar to finding $4^{k-1}$ permutations $\rho_{\dot{a},\dot{b}}$ of $\mathbb{Z}_4$ with the necessary number of fixed points. Then we have $\rho(\dot{a},\dot{b},i)=(\dot{a},\dot{b},\rho_{\dot{a},\dot{b}}(i))$.
\begin{lemma}\label{decomCv3} Let $r\neq 4^k-1$, then there is a decomposition of $\overrightarrow{C}_{(4^k:3)}$ into $r$ $\overrightarrow{C}_3$-factors and $s=4^k-r$ $\overrightarrow{C}_{3\cdot 2^k}$-factors. \end{lemma} \begin{proof} Let $r_{\dot{a},\dot{b}}$, $0\leq \dot{a},\dot{b}\leq 2^{k-1}-1$ be such that $r_{\dot{a},\dot{b}}\in\{0,1,2,4\}$, $\displaystyle\sum_{\dot{a},\dot{b}}r_{\dot{a},\dot{b}}=r$. Let $\rho_{\dot{a},\dot{b}}$ be a permutation on $\mathbb{Z}_4$ with $r_{\dot{a},\dot{b}}$ fixed points. Let $\rho(\dot{a},\dot{b},i)=(\dot{a},\dot{b},\rho_{\dot{a},\dot{b}}(i))$, and $\phi=\pi^{-1}(\rho(\pi))$. Then the decomposition is given by \[ \overrightarrow{C}_{(4^k:3)}=\bigoplus_{\alpha \in \mathfrak{R}}H_{4^k}(\alpha,\phi(\alpha)) \] \end{proof}
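For concreteness, we include a small illustration of Lemma \ref{decomCv3} (this example is ours and is not needed in the sequel). Take $k=1$, so that $\mathfrak{R}=\mathbb{Z}_{2}[x]/\left\langle x^2+x+1\right\rangle=\{0,1,x,x+1\}$. Choose the permutation $\phi$ that fixes $0$ and $1$ and swaps $x$ and $x+1$; on the non-fixed points the differences $\alpha-\phi(\alpha)$ equal $1$. By Lemma \ref{3h4k}, $H_{4}(0,0)$ and $H_{4}(1,1)$ are $\overrightarrow{C}_3$-factors, while $H_{4}(x,x+1)$ and $H_{4}(x+1,x)$ are $\overrightarrow{C}_{6}$-factors, and by Lemma \ref{fromTtoH} these four factors decompose $\overrightarrow{C}_{(4:3)}$. This is the case $r=s=2$ of Lemma \ref{decomCv3}.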
Given an $n$-partite graph $G$, with parts $G_0,G_1,\ldots,G_{n-1}$, let $F_h(G)$ be the subgraph of $G$ that contains only the edges between parts $h-1$ and $h$. In particular $F_n(G)$ contains the edges between $G_{n-1}$ and $G_0$.
\begin{theorem}\label{4ktheorem} Let $r\neq 4^k-1$ and $n\geq 5$. Then there is a decomposition of $\overrightarrow{C}_{(4^k:n)}$ into $r$ $\overrightarrow{C}_n$-factors and $s=4^k-r$ $\overrightarrow{C}_{n\cdot 2^k}$-factors. \end{theorem} \begin{proof} First suppose $n$ is odd, and let $G_0,G_1,\ldots,G_{n-1}$ be the partite sets. Let $T_{4^k}(\alpha)$ be the subgraph of $\overrightarrow{C}_{(4^k:n)}$ having the following edges: \begin{itemize} \item A vertex $y$ in $G_i$ is adjacent to $y+\alpha$ in $G_{i+1}$ if $i<n-3$ is odd; \item A vertex $y$ in $G_i$ is adjacent to $y-\alpha$ in $G_{i+1}$ if $i<n-3$ is even; \item A vertex $y$ in $G_i$ is adjacent to $f_{\alpha}(y)$ in $G_{i+1}$ if $i\geq n-3$. \end{itemize} Notice that a directed cycle which starts at vertex $y$ in $G_0$ contains the vertex $y$ in $G_{n-3}$. Because $f_{\alpha}^{3}(y)=y$ by Lemma \ref{lemmapropertiesoffalpha}, we have that $T_{4^k}(\alpha)$ is a $\overrightarrow{C}_n$-factor of $\overrightarrow{C}_{(4^k:n)}$. Furthermore, because of the way we described the edges in $T_{4^k}(\alpha)$, we can identify the partite set $G_{n-3}$ with $G_0$. Then any $3$-cycle in $\overrightarrow{C}_{(4^k:3)}$ is equivalent to an $n$-cycle in $\overrightarrow{C}_{(4^k:n)}$. Now let $H_{4^k}(\alpha,\beta)=T_{4^k}(\alpha)\oplus F_n(T_{4^k}(\alpha))\oplus F_n(T_{4^k}(\beta))$, where the arcs $F_n(T_{4^k}(\gamma))$ consist of the arcs in $T_{4^k}(\gamma)$ from $G_{n-1}$ to $G_0$. Again, we may identify $G_{n-3}$ with $G_0$, so a directed cycle of length $3\cdot 2^k$ in $\overrightarrow{C}_{(4^k:3)}$ is now equivalent to a directed cycle of length $n\cdot 2^k$ in $\overrightarrow{C}_{(4^k:n)}$. So by Lemma \ref{decomCv3}, since there is a decomposition of $\overrightarrow{C}_{(4^k:3)}$ into $r$ $\overrightarrow{C}_3$-factors and $s=4^k-r$ $\overrightarrow{C}_{3\cdot 2^k}$-factors, this is equivalent to a decomposition of $\overrightarrow{C}_{(4^k:n)}$ into $r$ $\overrightarrow{C}_n$-factors and $s=4^k-r$ $\overrightarrow{C}_{n\cdot 2^k}$-factors.
If $n\geq 6$ is even, then let $T_{4^k}(\alpha)$ be the subgraph of $\overrightarrow{C}_{(4^k:n)}$ having the following edges: \begin{itemize} \item A vertex $y$ in $G_i$ is adjacent to $y+\alpha$ in $G_{i+1}$ if $i<n-6$ is odd; \item A vertex $y$ in $G_i$ is adjacent to $y-\alpha$ in $G_{i+1}$ if $i<n-6$ is even; \item A vertex $y$ in $G_i$ is adjacent to $f_{\alpha}(y)$ in $G_{i+1}$ if $i\geq n-6$. \end{itemize} Now a cycle that starts at vertex $y$ in $G_0$ contains the vertex $y$ in $G_{n-6}$, and because $f_{\alpha}^3(y)=y$, the cycle also contains vertex $y$ in $G_{n-3}$. We may now apply the same arguments as in the case with $n$ odd to obtain the result. \end{proof}
If we define $T_{4^k}(\alpha)$ and $H_{4^k}(\alpha,\beta)$ as in the proof of Theorem \ref{4ktheorem}, then we obtain the following corollary by applying Lemma \ref{3h4k}. \begin{corollary}\label{nh4k} If $\alpha=\beta$, then $H_{4^k}(\alpha,\beta)$ is a $\overrightarrow{C}_{n}$-factor of $\overrightarrow{C}_{(4^k:n)}$. If $\alpha-\beta\in\{\pm 1,\pm x,\pm x\pm 1\}$, then $H_{4^k}(\alpha,\beta)$ is a $\overrightarrow{C}_{2^kn}$-factor of $\overrightarrow{C}_{(4^k:n)}$. \end{corollary}
\section{Multivariate Bijections}\label{section multivariate bijections}
When $x$ is odd, the graphs $T_x(i)$ and $H_x(i,i')$ were defined in \cite{KP} as follows.
$T_x(i)$ is the subgraph of $\overrightarrow{C}_{(x:n)}$ obtained by taking differences: \begin{itemize} \item $2^{e_j}i$ between $G_{j-1}$ and $G_{j}$ for $1\leq j \leq k$; \item $-2i$ between $G_j$ and $G_{j+1}$ for $k\leq j \leq 2k-2$; \item $-i$ between $G_j$ and $G_{j+1}$ for $2k-1\leq j \leq n-2$; \item $-i$ between $G_{n-1}$ and $G_{0}$. \end{itemize} Then $H_x(i,i')=T_x(i)\oplus F_n(T_x(i))\oplus F_n(T_x(i'))$, and notice that $H_x(i,i)=T_x(i)$.
\begin{lemma}[\cite{KP}]\label{bbb} $T_x(i)$ is a $\overrightarrow{C}_n$-factor for any $i$. \end{lemma} \begin{lemma}[\cite{KP}]\label{HisforHamilton} If $gcd(x,i-s)=1$ then $H_x(i,s)$ is a directed Hamiltonian cycle. \end{lemma}
Given $x$, $y$ and $k$, positive integers with $x,y$ odd, we will use ideas similar to Section $7$ of $\cite{KP}$ to obtain decompositions of
$\overrightarrow{C}_{(4^kxy:n)}$ into $\overrightarrow{C}_{2^{k}xn}$-factors and $ \overrightarrow{C}_{yn}$-factors.
\begin{definition} Let $x$ and $y$ be odd. Define $T_{(4^kxy)}(\alpha,i,j)$ to be the directed subgraph of $\overrightarrow{C}_{(4^kxy:n)}$ obtained by taking $T_{(4^kxy)}(\alpha,i,j)=T_{(4^k)}(\alpha)\otimes T_x(i)\otimes T_y(j)$. We also define \[ H_{(4^kxy)}(\alpha,i,j)(\beta,i',j')=T_{(4^kxy)}(\alpha,i,j)\oplus F_n(T_{(4^kxy)}(\alpha,i,j))\oplus F_n(T_{(4^kxy)}(\beta,i',j')).\] \end{definition} This means that $H_{(4^kxy)}(\alpha,i,j)(\beta,i',j')$ is the directed graph obtained by taking the arcs of $T_{(4^kxy)}(\alpha,i,j)$ between parts $t$ and $t+1$ for $0\leq t\leq n-2$, and the arcs between parts $n-1$ and $0$ from $T_{(4^kxy)}(\beta,i',j')$.
\begin{lemma} Let $x$, $y$ and $n$ be odd. Then \[ H_{(4^kxy)}(\alpha,i,j)(\beta,i',j')=H_{(4^k)}(\alpha,\beta)\otimes H_x(i,i')\otimes H_y(j,j'). \] \end{lemma}
\begin{proof} Notice that \[ F_n(T_{(4^kxy)}(\alpha,i,j))=F_n(T_{4^k}(\alpha)\otimes T_x(i)\otimes T_y(j))=F_n(T_{4^{k}}(\alpha))\otimes F_n(T_x(i))\otimes F_n(T_y(j)). \] Notice also that \begin{align*} F_n(T_{4^k}(\alpha)\otimes T_x(i)\otimes T_y(j))=&F_n(T_{4^k}(\alpha))\otimes T_x(i)\otimes T_y(j)\\ =&T_{4^k}(\alpha)\otimes F_n(T_x(i))\otimes T_y(j)\\ =&T_{4^k}(\alpha)\otimes T_x(i)\otimes F_n(T_y(j)). \end{align*}
Then we have \begin{align*} H_{4^k}(\alpha,\beta)\otimes H_x(i,i')\otimes H_y(j,j') =&\left[T_{4^k}(\alpha)\oplus F_n(T_{4^k}(\alpha)) \oplus F_n(T_{4^k}(\beta))\right]\\ & \otimes \left[T_x(i)\oplus F_n(T_x(i))\oplus F_n(T_x(i'))\right] \otimes \left[T_y(j)\oplus F_n(T_y(j))\oplus F_n(T_y(j'))\right] \\ =&\left[T_{4^k}(\alpha)\otimes T_x(i)\otimes T_y(j)\right]\oplus \left[F_n(T_{4^k}(\alpha))\otimes F_n(T_x(i))\otimes F_n(T_y(j))\right]\\ &\oplus \left[F_n(T_{4^k}(\beta)) \otimes F_n(T_x(i'))\otimes F_n(T_y(j'))\right]\\ =&T_{(4^kxy)}(\alpha,i,j)\oplus F_n(T_{(4^kxy)}(\alpha,i,j))\oplus F_n( T_{(4^kxy)}(\beta,i',j'))\\ =&H_{(4^kxy)}(\alpha,i,j)(\beta,i',j'). \end{align*}
\end{proof}
\begin{lemma} Let $\varphi$ be a permutation of $\mathfrak{R}\times \mathbb{Z}_x\times \mathbb{Z}_y$. Then \[ \overrightarrow{C}_{(4^kxy:n)}=\bigoplus_{(\alpha,i,j)}H_{(4^kxy)}(\alpha,i,j)\varphi(\alpha,i,j).\] \end{lemma}
\begin{proof} From Theorem~\ref{distributive}, we know that \[\overrightarrow{C}_{(4^kxy:n)}=\overrightarrow{C}_{(4^k:n)}\otimes \overrightarrow{C}_{(x:n)}\otimes \overrightarrow{C}_{(y:n)}= \left(\bigoplus_{\alpha} T_{4^k}(\alpha)\right)\otimes \left(\bigoplus_{i} T_{x}(i)\right)\otimes \left(\bigoplus_j T_{y}(j)\right).\] By the definition of $T_{(4^kxy)}(\alpha,i,j)$ we get \[ \overrightarrow{C}_{(4^kxy:n)}=\bigoplus_{(\alpha,i,j)}T_{(4^kxy)}(\alpha,i,j).\]
We also have \[ \bigoplus_{(\alpha,i,j)}T_{(4^kxy)}(\alpha,i,j)= \bigoplus_{(\alpha,i,j)}H_{(4^kxy)}(\alpha,i,j)\varphi(\alpha,i,j). \]
Combining both we get: \[ \overrightarrow{C}_{(4^kxy:n)}=\bigoplus_{(\alpha,i,j)}H_{(4^kxy)}(\alpha,i,j)\varphi(\alpha,i,j), \] as we wanted to prove. \end{proof}
Because we are dealing with bijections on cartesian products of sets, we introduce the following notation. If $\varphi$ is a bijection of $S_1\times S_2\times \ldots \times S_n$, then $ \varphi_i(s_1,\ldots,s_n)$ is the $i$-th coordinate of $\varphi(s_1,\ldots,s_n)$. Because $H_{4^kxy}(\alpha,i,j)\varphi(\alpha,i,j)=H_{4^k}(\alpha,\varphi_1(\alpha,i,j))\otimes H_x(i,\varphi_2(\alpha,i,j))\otimes H_y(j,\varphi_3(\alpha,i,j))$, we have:
\begin{itemize} \item If $\alpha=\varphi_1(\alpha,i,j)$, $i=\varphi_2(\alpha,i,j)$ and $\gcd(y,j-\varphi_3(\alpha,i,j))=1$ (or $y=1$), then $H_{4^k}(\alpha,\varphi_1(\alpha,i,j))$ is a $\overrightarrow{C}_n$-factor of $\overrightarrow{C}_{(4^k:n)}$ by Corollary \ref{nh4k}, $H_x(i,\varphi_2(\alpha,i,j))$ is a $\overrightarrow{C}_n$-factor of $\overrightarrow{C}_{(x:n)}$ by Lemma \ref{bbb}, $H_y(j, \varphi_3(\alpha,i,j))$ is a $\overrightarrow{C}_{yn}$-factor of $\overrightarrow{C}_{(y:n)}$ by Lemma \ref{HisforHamilton}, and $H_{4^kxy}(\alpha,i,j)\varphi(\alpha,i,j)$ is a $\overrightarrow{C}_{yn}$-factor of $\overrightarrow{C}_{(4^kxy:n)}$ by Lemma \ref{productofbalanced}.
\item If $\alpha-\varphi_1(\alpha,i,j)\in\{\pm 1,\pm x,\pm x\pm 1\}$, $\gcd(x,i-\varphi_2(\alpha,i,j))=1$ (or $x=1$), and $j=\varphi_3(\alpha,i,j)$ then $H_{4^k}(\alpha,\varphi_1(\alpha,i,j))$ is a $\overrightarrow{C}_{2^kn}$-factor of $\overrightarrow{C}_{(4^k:n)}$ by Corollary \ref{nh4k}, $H_x(i, \varphi_2(\alpha,i,j))$ is a $\overrightarrow{C}_{xn}$-factor of $\overrightarrow{C}_{(x:n)}$ by Lemma \ref{HisforHamilton}, $H_y(j,\varphi_3(\alpha,i,j))$ is a $\overrightarrow{C}_{n}$-factor of $\overrightarrow{C}_{(y:n)}$ by Lemma~\ref{bbb} and $H_{4^kxy}(\alpha,i,j)\varphi(\alpha,i,j)$ is a $\overrightarrow{C}_{2^kxn}$-factor of $\overrightarrow{C}_{(4^kxy:n)}$ by Lemma \ref{productofbalanced}. \end{itemize}
For a decomposition of $\overrightarrow{C}_{(4^kxy:n)}$ into $\overrightarrow{C}_{2^kxn}$-factors and $\overrightarrow{C}_{yn}$-factors we need a bijection $\varphi$ of $\mathfrak{R}\times \mathbb{Z}_x \times \mathbb{Z}_y$ that satisfies: \begin{conditions}\label{2xnandyn} \begin{enumerate}[a)] \item For all $(\alpha,i,j)$, $\alpha-\varphi_1(\alpha,i,j)\in\{\pm 1,\pm x,\pm x\pm 1\}$ or $\alpha=\varphi_1(\alpha,i,j)$. \item If $\alpha=\varphi_1(\alpha,i,j)$, then $i=\varphi_2(\alpha,i,j)$ and $\gcd(y,j-\varphi_{3}(\alpha,i,j))=1$ (or $y=1$). \item If $\alpha-\varphi_1(\alpha,i,j)\in\{\pm 1,\pm x,\pm x\pm 1\}$, then $\gcd(x,i-\varphi_2(\alpha,i,j))=1$ (or $x=1$), and $j=\varphi_3(\alpha,i,j)$. \end{enumerate} \end{conditions}
Define the bijection $\theta:\mathfrak{R}\times \mathbb{Z}_x\times\mathbb{Z}_y\rightarrow \mathbb{Z}_{2^{k-1}}\times \mathbb{Z}_{2^{k-1}}\times \mathbb{Z}_{4} \times \mathbb{Z}_x\times\mathbb{Z}_y$ by $\theta(\alpha,i,j)=(\pi(\alpha),i,j)$. Then finding such a function $\varphi$ is equivalent to finding $4^{k-1}$ functions $\lambda^{(\dot{a},\dot{b})}$ of $\mathbb{Z}_4\times\mathbb{Z}_x\times \mathbb{Z}_y$ satisfying Conditions \ref{2xnandynproj} and \[
\sum_{\dot{a},\dot{b}} |\{(\gamma,i,j)|\gamma=\lambda^{(\dot{a},\dot{b})}_1(\gamma,i,j)\}|=
|\{(\gamma,i,j)|\gamma=\varphi_1(\gamma,i,j)\}|. \]
\begin{conditions}\label{2xnandynproj} \begin{enumerate}[a)] \item If $\gamma=\lambda^{(\dot{a},\dot{b})}_1(\gamma,i,j)$, then $i=\lambda^{(\dot{a},\dot{b})}_2(\gamma,i,j)$ and $\gcd(y,j-\lambda^{(\dot{a},\dot{b})}_3(\gamma,i,j))=1$ (or $y=1$). \item If $\gamma\neq \lambda^{(\dot{a},\dot{b})}_1(\gamma,i,j)$, then $\gcd(x,i-\lambda^{(\dot{a},\dot{b})}_{2}(\gamma,i,j))=1$ (or $x=1$) and $j=\lambda^{(\dot{a},\dot{b})}_3(\gamma,i,j)$. \end{enumerate} \end{conditions}
The existence of such bijections was shown in Lemma $7.17$ of $\cite{KP}$ (Lemma $7.11$ if $y=1$, Lemma $7.12$ if $x=1$).
Hence we have:
\begin{lemma}\label{2xnandynlemma} Let $s_p\in \{0,2,3,\ldots,4^kxy-3,4^kxy-2,4^kxy\}$. Then there exists a decomposition of $ \overrightarrow{C}_{(4^kxy:n)}$ into $s_p$ $\overrightarrow{C}_{2^{k}xn}$-factors and $r_p=4^kxy-s_p$ $ \overrightarrow{C}_{yn}$-factors. \end{lemma} \begin{proof} Let $s_p=\sum_{\dot{a},\dot{b}}s_{\dot{a},\dot{b}}$, with $s_{\dot{a},\dot{b}}\in\{0,2,\ldots,4xy\}$. By Lemma $7.17$ of $\cite{KP}$ (Lemma $7.11$ if $y=1$, Lemma $7.12$ if $x=1$), for each pair $\dot{a},\dot{b}$ there exists a permutation $\lambda^{(\dot{a}, \dot{b})}$ of $\mathbb{Z}_4\times \mathbb{Z}_x\times \mathbb{Z}_y$ satisfying Conditions \ref{2xnandynproj} such that \[ \gamma=\lambda^{(\dot{a},\dot{b})}_1(\gamma,i,j) \] holds for $s_{\dot{a},\dot{b}}$ elements $(\gamma,i,j)$.
Then $\lambda(\dot{a},\dot{b},\gamma,i,j)=(\dot{a},\dot{b},\lambda^{(\dot{a},\dot{b})}(\gamma,i,j))$ is a permutation of $\mathbb{Z}_{2^{k-1}}\times \mathbb{Z}_{2^{k-1}}\times \mathbb{Z}_{4} \times \mathbb{Z}_x \times \mathbb{Z}_y$, and $\varphi=\theta^{-1}\lambda\theta$ is a permutation of $\mathfrak{R}\times \mathbb{Z}_x\times \mathbb{Z}_y$ satisfying Conditions \ref{2xnandyn}, with \[ s_p=\sum_{\dot{a},\dot{b}}(s_{\dot{a},\dot{b}}) \] pairs satisfying $\alpha=\varphi_1(\alpha,i,j)$. \end{proof}
\section{Main Results}\label{section main results} The complete solution to the uniform case of the Oberwolfach problem will be vital to the proof of our main result. \begin{theorem}[\cite{AH,ASSW,HS,RW}] \label{OP} $K_v$ can be decomposed into $C_m$-factors (and a $1$-factor if $v$ is even) if and only if $v \equiv 0 \pmod{m}$, $(v,m) \not = (6,3)$ and $(v,m) \not = (12,3)$. \end{theorem}
We now apply the results from Section~\ref{section multivariate bijections} to produce the following important result for the uniform equipartite version of the Hamilton-Waterloo problem where the two factor types consist of cycle sizes of distinct parities.
\begin{theorem}\label{lemmausingliu} Let $x,y,z,v,m,k$ be positive integers satisfying the following: \begin{enumerate}[i)] \item $v,m\geq 3$, \item $k\geq 2$, \item $x,y,z$ odd, \item $z\geq 3$, \item $\gcd(x,y)=1$, \item $vm\equiv 0 \pmod{4^kxyz}$, $v\equiv 0\pmod{4^kxy}$, \item $\frac{v(m-1)}{4^kxy}$ is even, \item $\left(\frac{v}{4^kxy},m,z\right) \not \in \{(2,3,3),(6,3,3),(2,6,3),(6,2,6)\}$ \end{enumerate} then there is a decomposition of $K_{(v:m)}$ into $r$ $C_{2^kxz}$-factors and $s$ $C_{yz}$-factors, for any $s,r\neq 1$. \end{theorem} \begin{proof} Let $v_1=v/4^kxy$. Consider $K_{(v_1:m)}$. Item $vi$ ensures that $z$ divides $v_{1}m$; and items $vii$, $i$, and $viii$ give us $v_1(m-1)$ is even, $m\neq 2$, and $\left(\frac{v}{4^kxy},m,z\right) \not \in \{(2,3,3)$, $(6,3,3), (2,6,3), (6,2,6)\}$. Thus by Theorem \ref{equip} there is a decomposition of $K_{(v_1:m)}$ into $C_{z}$-factors.
Give weight $4^kxy$ to the vertices in $K_{(v_1:m)}$, which yields $K_{(v:m)}$. Even more, each $C_{z}$-factor becomes a copy of $\frac{v_1m}{z}C_{(4^kxy:z)}$. By Lemma \ref{2xnandynlemma}, we have that each $\frac{v_1m}{z}C_{(4^kxy:z)}$ can be decomposed into $r_p$ $C_{2^{k}xz}$-factors and $s_p$ $C_{yz}$-factors as long as $r_p,s_p\neq 1$. Choosing $s_p$ such that $\sum_p s_p=s$ and $s_p,r_p\neq 1$, provides a decomposition of $K_{(v:m)}$ into $r$ $C_{2^{k}xz}$-factors and $s$ $C_{yz}$-factors by Lemma~\ref{cvntokvm} \end{proof}
The next lemma, given in \cite{KP} shows how to find solutions to the Hamilton-Waterloo problems by combining solutions for the problem on complete graphs and solutions for the problem on equipartite graphs.
\begin{lemma}[\cite{KP}]\label{buildcompletegraphnonuniform} Let $m$ and $v$ be positive integers. Let $F_1$ and $F_2$ be two $2$-factors on $vm$ vertices. Suppose the following conditions are satisfied: \begin{itemize} \item There exists a decomposition of $K_{(v:m)}$ into $s_\alpha$ copies of $F_1$ and $r_\alpha$ copies of $F_2$. \item There exists a decomposition of $mK_v$ into $s_\beta$ copies of $F_1$ and $r_\beta$ copies of $F_2$. \end{itemize}
Then there exists a decomposition of $K_{vm}$ into $s=s_\alpha+s_\beta$ copies of $F_1$ and $r=r_\alpha+r_ \beta$ copies of $F_2$. \end{lemma}
We are now in a position to provide a proof of the main theorem.
\begin{theorem} Let $x,y,v,k$ and $m$ be positive integers such that: \begin{enumerate}[i)] \item $v,m\geq 3$, \item $x,y$ are odd, \item $\gcd(x,y)\geq 3$, \item $x$ and $y$ divide $v$. \item $4^k$ divides $v$. \end{enumerate} Then there exists a $(2^kx,y)\ensuremath{\text{--}}\ensuremath{\mbox{\sf HWP}}(vm;r,s)$ for every pair $r,s$ with $r+s=\left\lfloor(vm-1)/2\right\rfloor$, $r,s\neq 1$. \end{theorem}
\begin{proof} Let $r$ and $s$ be non-negative integers with $r+s=\left\lfloor(vm-1)/2\right\rfloor$ and $r,s\neq 1$. Write $r=r_{\alpha}+r_{\beta}$ and $s=s_{\alpha}+s_{\beta}$, where $r_{\alpha},r_{\beta},s_{\alpha},s_{\beta}$ are non-negative integers that satisfy $r_\alpha,s_\alpha\neq 1$, $r_\alpha+s_\alpha=v(m-1)/2$, $r_\beta+s_\beta=\left\lfloor(v-1)/2\right\rfloor$, and $r_\beta,s_\beta\in\{0,\left\lfloor(v-1)/2\right\rfloor\}$.
Start by decomposing $K_{vm}$ into $K_{(v:m)}\oplus mK_v$. Let $z=\gcd(x,y)$, $x_1=x/z$, $y_1=y/z$. By Theorem \ref{lemmausingliu} there is a decomposition of $K_{(v:m)}$ into $r_\alpha$ $C_{2^kx_1z}$-factors and $s_\alpha$ $C_{y_1z}$-factors. This is a decomposition of $K_{(v:m)}$ into $r_\alpha$ $C_{2^kx}$-factors and $s_\alpha$ $C_{y}$-factors. By Theorem \ref{OP} there is a decomposition of $mK_v$ into $r_\beta$ $C_{2^kx}$-factors and $s_\beta$ $C_{y}$-factors. Lemma~\ref{buildcompletegraphnonuniform} shows that all of this together yields a decomposition of $K_{vm}$ into $r$ $C_{2^kx}$-factors and $s$ $C_{y}$-factors. \end{proof}
\section{Bibliography}
\end{document}
\begin{document}
\maketitle
\begin{abstract} We consider the evolution of the temperature $u$ in a material with thermal memory characterized by a time-dependent convolution kernel $h$. The material occupies a bounded region $\Omega$ with a feedback device controlling the external temperature located on the boundary $\Gamma$. Assuming both $u$ and $h$ unknown, we formulate an inverse control problem for an integrodifferential equation with a nonlinear and nonlocal boundary condition. Existence and uniqueness results of a solution to the inverse problem are proved. \end{abstract}
\vskip0.2truecm \noindent {\bf Keywords.} Integrodifferential equations, automatic control problems, inverse problems.
\noindent {\bf Mathematics Subject Classification (2010).} 35R30, 35K20, 45K05, 47J040, 93B52.
\section{Introduction} In this paper we want to study the evolution of the temperature $u$ in a material with thermal memory, occupying a bounded region $\Omega \subseteq \mathbb{R}^3$. The memory mechanism is characterized by a time-dependent convolution kernel $h$.
Here we are interested in analyzing the heat exchange at the boundary $\Gamma:= \partial \Omega$ under the influence of a thermostat, with boundary conditions of the third type. On account of the existing literature (see, e.g., \cite{CoGrSp1}, \cite{HoNiSp1} and their references), we wish to formulate and study an automatic control problem based on a feedback device located on the boundary $\Gamma$. This device is prescribed by means of a quite general memory operator (precise definitions are given below). Moreover, due to the presence of a memory term in the evolution equation for $u$, the past history also enters the boundary condition, which turns out to be of integrodifferential type.
Therefore, we introduce the following initial and boundary value problem:
\begin{problem}\label{P1} Find $u$ of domain $Q_T$ such that \begin{equation} \left\{ \begin{array}{ll} D_t u(t,x) = Au(t,x) + (h \ast Au)(t,x) + f(t,x), & (t,x) \in Q_T, \\ \\ Bu(t,x) + (h \ast Bu)(t,x) + q(t,x) = u_e(t,x) - u(t,x), & (t,x) \in \Sigma_T, \\ \\ u(0,x) = u_0(x), & x \in \Omega. \end{array} \right. \end{equation} where \begin{equation}\label{eq1.2A} Q_T:= (0, T) \times \Omega, \end{equation} \begin{equation}\label{eq1.3A} \Sigma_T:= (0, T) \times \Gamma, \end{equation} and \begin{equation} (h \ast f)(t,x) := \int_0^t h(t-s) f(s,x) ds. \end{equation} \end{problem} \noindent Here $T >0$, $f : Q_T \to \mathbb{R}$ is the heat source, $u_0 : \Omega \to \mathbb{R}$ is the initial temperature, $q$ accounts for the past history of $u$ on the boundary up to $t = 0$, while $u_e$ represents the temperature of the external environment. Moreover, $A$ and $B$ are linear differential operators defined by \begin{equation} A = \sum_{i,j=1}^n D_{x_i} (a_{ij}(x) D_{x_j}), \quad x \in \Omega, \end{equation} \begin{equation} B = \sum_{i=1}^n b_{i}(x) D_{x_i} + b_0(x), \quad x \in \Gamma. \end{equation} As already stated above, the external control $u_e$ should be regulated by a feedback device based on measurements of $u$. Suppose that we are able to measure the temperature by a real system of thermal sensors placed in some fixed positions over $\Omega$ and/or $\Gamma$ as follows: \begin{equation}\label{eqA1.7} {\cal M}(u)(t):= \int_\Omega \omega_1(x) u(t,x) dx + \int_{\Gamma} \omega_2 (y) u(t,y) d\sigma, \end{equation} where $\omega_1$ and $\omega_2$ are functions with domain $\Omega$ and $\Gamma$, respectively. We consider a thermostat device modifying $u_e$, on account of ${\cal M}(u)$, in this way (see, e.g., \cite{HoNiSp1}, \cite{CoGrSp1} and their references): \begin{equation} u_e = \phi u_A + u_B \quad \mbox{\rm on } \Sigma_T. \end{equation} Here $u_B: \Sigma_T \to \mathbb{R}$ is a given reference boundary value (e.g. the external average temperature), while $u_A : \Sigma_T \to \mathbb{R}$ is a (known) factor of the part of $u_e$ that can be controlled by our device. The dynamic control is exerted through the function $\phi: [0, T] \to \mathbb{R}$, solution to the problem
\begin{equation}\label{eq0.9} \left\{\begin{array}{ll} \epsilon \phi' + \phi = {\cal W} ({\cal M} (u)) + u_C & \mbox{ in } [0, T], \\ \\ \phi(0) = \phi_0, \end{array} \right. \end{equation} where $u_C : [0, T] \to \mathbb{R}$ is a given function, $\epsilon$ is a positive parameter and $\phi_0 \in \mathbb{R}$.
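For the reader's convenience, we record the standard variation-of-constants formula for (\ref{eq0.9}), from which the expressions (\ref{eq0.10})--(\ref{eqA1.13}) below are obtained:
\[
\phi(t) = \phi_0 e^{-t/\epsilon} + \frac{1}{\epsilon}\int_0^t e^{-(t-s)/\epsilon}\big[{\cal W}({\cal M}(u))(s) + u_C(s)\big]\, ds, \qquad t \in [0, T].
\]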
The nonlinear operator ${\cal W}$ completes the description of the feedback action. We assume that ${\cal W}$ is a memory operator, accordingly to the definition in \cite{Vi1}, Chap. III, to be precisely defined in the following. Mathematical literature contains several examples of operators ${\cal W}$ useful in applications: we mention the generalized plays and Preisach operators (see \cite{Vi1}, Chaps. III-IV and \cite{KrPo1}, Part 1).
Going back to the Cauchy problem for $\phi$, we formally deduce that $u_e$ is assigned by \begin{equation}\label{eq0.10} u_e = {\cal F}({\cal W} ({\cal M}(u))) \quad \mbox{\rm on } \Sigma_T, \end{equation} where \begin{equation}\label{eqA1.11} {\cal F}(r)(t,y) := (E_1 \ast r)(t) u_A(t,y) + E_0(t,y), \quad (t,y) \in \Sigma_T, \end{equation} and \begin{equation}\label{eqA1.12} E_1(t):= \epsilon^{-1} e^{-t/\epsilon}, \end{equation} \begin{equation}\label{eqA1.13} E_0(t,y) = [ (E_1 \ast u_C)(t) + \epsilon \phi_0 E_1(t)] u_A(t,y) + u_B(t,y). \end{equation} Therefore, on account of (\ref{eq0.10}), the feedback nonlinear control problem reduces to a system with a nonlinear, nonlocal boundary condition. In the paper \cite{CaCo1}, the authors studied a problem in the form (\ref{P1}), with two possible choices of the memory operator: the relay switch operator or the Preisach operator. Moreover, in the second part of the paper they studied the case in which they had also to identify a time-dependent factor of the heat source.
Instead, here we assume that the memory kernel $h$ is unknown. In order to determine $h$ along with the temperature $u$, we need an additional piece of information: we assume that the following quantity is known, \begin{equation}\label{eqA1.14} \Phi(u(t)) := \int_\Omega \omega(x) u(t,x) dx, \quad \text{for any } t \in [0, T], \end{equation} where $\omega$ is a suitably smooth function, vanishing together with its first derivatives on $\Gamma$.
We can now formulate our inverse control problem:
\begin{problem}\label{P2} Find $u$ of domain $Q_T$ and $h$ of domain $[0, T]$ such that \begin{equation}\label{eqA1.15} \left\{ \begin{array}{ll} D_t u = Au + h \ast Au + f, & \mbox{ in } Q_T, \\ \\ Bu + h \ast Bu + q = {\cal F} ({\cal W} ({\cal M} (u))) - u, & \mbox{ on } \Sigma_T, \\ \\ u(0,\cdot) = u_0, & \mbox{ in } \Omega\\ \\ \Phi(u) = g & \mbox{ in } [0, T]. \end{array} \right. \end{equation} \end{problem}
In order to solve our problem we need to apply the theory developed by Lions and Magenes in \cite{LiMa1} (see also \cite{LiMa2}). To this aim, we have to set all the functions, the operators and the linear spaces involved in Problem 2 in a complex-valued framework. It is not difficult to realize that, whenever all the functions and coefficients appearing in Problem 2 are real valued, the real part of the solution turns out to solve the problem as well.
Hence, from now on we will assume all the functions introduced before taking values in $\mathbb{C}$.
As far as the operator ${\cal W}$ is concerned, even if in applications it is defined only for functions $f: [0,T] \to \mathbb{R}$, nevertheless, whenever $f: [0,T] \to \mathbb{C}$, then we intend ${\cal W} (f)$ as ${\cal W}(Re f) + i {\cal W}(Im f )$.
In more detail, a memory operator ${\cal W}_\tau$ is characterized as follows.
We indicate by $C([0, \tau])$, $\tau \in \mathbb{R}^+$, the Banach space of continuous complex valued functions of domain $[0, \tau]$, equipped with its standard norm. Then
{\it (C1) $\forall \, \tau \in [0, T]$, ${\cal W}_\tau: C([0, \tau]) \cap BV([0, \tau]) \to C([0, \tau]) \cap BV([0, \tau])$ is given, where $BV([0, \tau])$ stands for the class of bounded variation functions;
(C2) if \, $0 \leq \tau_2 \leq \tau_1 \leq T$ and $f \in C([0, \tau_1]) \cap BV([0, \tau_1])$, then ${\cal W}_{\tau_2}(f_{|[0, \tau_2]}) = [{\cal W}_{\tau_1}(f)]_{|[0, \tau_2]}$;}
{\it (C3) There exists $L \in \mathbb{R}^+$, such that $\forall \tau \in [0, T]$, $\forall f,g \in C([0, \tau]) \cap BV([0, \tau])$, $$
\quad \|{\cal W}_\tau(f) - {\cal W}_\tau(g)\|_{C([0, \tau])} \leq L\|f - g\|_{C([0, \tau])}. $$}
\begin{remark} {\rm Conditions which are alternative to (C1)-(C3) can be adopted, in order to obtain the conclusion of the main result of the paper, Theorem \ref{thA2.1}. A short discussion in this direction will be put in the final Remark \ref{ref}.
}
\end{remark}
On account of ${\it (C2)}$, given $f \in C([0, \tau_1]) \cap BV([0, \tau_1])$, then $[{\cal W}_{\tau_1}(f)]_{|[0, \tau_2]}$ depends only on $f_{|[0, \tau_2]}$. Hence, if $f \in C([0, \tau]) \cap BV([0, \tau])$ for some $\tau \in [0, T]$, we shall loosely write ${\cal W}(f)$ in alternative to ${\cal W}_\tau(f)$.
We conclude this introduction by fixing the basic notations, recalling some well known definitions and facts, and outlining the organization of the paper.
Concerning the notation, we indicate with $\mathbb{N}$ and $\mathbb{N}_0$ the sets of positive and nonnegative integers, respectively. If $\beta \in \mathbb{R}$, $[\beta]$ stands for its integer part and $\{\beta\} := \beta -[\beta]$.
We indicate with $C$ a positive constant which may be different from time to time. However, in a sequence of estimates, we write also $C_1, C_2, \dots$. In order to stress the fact that the constant $C$ depends on $\alpha, \beta, \dots$, we shall write $C(\alpha,\beta,\dots)$.
We indicate with $BV([0, T])$ the class of complex valued bounded variation functions with domain $[0, T]$. If $E$ is a Banach space, $\alpha \in (0, 1)$ and $f : [0, T] \to E$, we set $$
[f]_{C^\alpha([0, T]; E)} := \sup_{0 \leq s < t \leq T} \frac{\|f(t) - f(s)\|_E}{(t-s)^\alpha}, $$ and, in case $[f]_{C^\alpha([0, T]; E)} < \infty$, we write $f \in C^\alpha([0, T]; E)$. If $E = \mathbb{C}$, we simply write $C^\alpha([0, T])$.
If $E$ and $F$ are normed spaces, we indicate with ${\cal L}(E,F)$ the space of linear bounded operators from $E$ to $F$. If $E = F$, we simply write ${\cal L}(E)$. We indicate with $E'$ the space of continuous antilinear functionals in $E$, equipped with its natural norm.
Let $\Omega$ be an open subset of $\mathbb{R}^n$. We consider the Sobolev spaces $H^m(\Omega)$ ($m \in \mathbb{N}_0$), defined as
$$H^m(\Omega) = \{u \in L^2(\Omega): D^\alpha u \in L^2(\Omega), |\alpha| \leq m\},$$ with $D^\alpha$ intended in the sense of distributions. $H^m(\Omega)$ is a Hilbert space with the norm $$
\|u\|^2_{H^m(\Omega)} := \sum_{|\alpha| \leq m} \|D^\alpha u\|_{L^2(\Omega)}^2. $$ Let $\beta \in \mathbb{R}^+$. Then we define $$ H^\beta(\Omega):= (H^{[\beta]}(\Omega), H^{[\beta]+1}(\Omega))_{\{\beta\},2}, $$
denoting with $(\cdot, \cdot)_{\theta,2}$ ($0 < \theta < 1$) the real interpolation functor. This definition is equivalent to the one in \cite{LiMa1}, Chap. 1.9 (see \cite{Gr1}, 1.2) . In the case $\Omega = \mathbb{R}^n$, $H^\beta(\Omega)$ admits a well known characterization in terms of Fourier transform (see \cite{LiMa1}, Chap. 1.7). If $\alpha \in \mathbb{N}_0^n$ and $|\alpha| \leq \beta$, $D^\alpha \in {\cal L}(H^\beta(\Omega), H^{\beta-|\alpha|}(\Omega))$.
Given an open subset $\Omega \subseteq\mathbb{R}^n$ with boundary $\Gamma$, from now on we assume one of the following conditions
{\it (H1) $\Omega$ is bounded and lying on one side of its topological boundary $\Gamma\in C^\infty$};
{\it (H1bis) $\Omega = \mathbb{R}^n_+$};
{\it (H1ter) $\Omega= \mathbb{R}^n$}.
Then, first of all, one has $$
H^\beta(\Omega) = \{U_{|\Omega} : U \in H^\beta (\mathbb{R}^n)\} $$
and an equivalent norm is $\inf\{\|U\|_{H^\beta(\mathbb{R}^n)}: U_{|\Omega} = u\}$. Moreover, $C^\infty (\overline {\Omega})$ is dense in $H^\beta(\Omega)$ (see \cite{LiMa1}, Chap. 1, Theorems 9.2, 9.3) and it is a space of pointwise multipliers in it.
We can also define (by local charts) the spaces $H^\beta(\Gamma)$. One can verify that, if $j \in \mathbb{N}_0$ and $\beta > j + \frac{1}{2}$, the map $u \to\frac{\partial^j u}{\partial \nu^j}$, from $C^\infty (\overline \Omega)$ to $C^\infty (\Gamma)$, can be extended to an element of ${\cal L} (H^\beta(\Omega), H^{\beta - j - 1/2}(\Gamma))$.
If $\beta \geq 0$, we indicate with $H^\beta_0(\Omega)$ the closure of ${\mathcal{D}}(\Omega):=C_0^\infty(\Omega)$ in $H^\beta(\Omega)$. It is known that, in case $\beta \leq 1/2$, $H^\beta_0(\Omega) = H^\beta(\Omega)$ (see \cite{LiMa1}, Chap. 1, Theorem 11.1). In case $0 \leq \beta < 1/2$, the trivial extension operator with $0$ outside $\Omega$ belongs to ${\cal L}(H^\beta(\Omega) , H^\beta(\mathbb{R}^n))$ (see \cite{LiMa1}, Theorem 11.4).
Now, for $\beta \geq 0$, we define
\begin{equation}\label{eq1.1A} H^{-\beta}(\Omega):= H_0^\beta(\Omega)'. \end{equation} Every element $f$ of $L^2(\Omega)$ will be always identified with the functional $$g \to \int_{\Omega} f(x) \overline{g(x)} dx$$
and, with this identification, $L^2(\Omega) \hookrightarrow H^{-\beta}(\Omega)$, $\forall \beta \geq 0$. We observe that, as (by definition) ${\mathcal{D}}(\Omega)$ is dense in $H_0^\beta(\Omega)$, $H^{-\beta}(\Omega)$ is a space of distributions. One can show that, if $\beta \in \mathbb{R}$, $\alpha \in \mathbb{N}_0^n$ and $\beta - |\alpha| \not \in \{-1/2, -3/2, ...\}$, the derivative operator
$f \to D^\alpha f$ maps $H^\beta(\Omega)$ into $H^{\beta-|\alpha|}(\Omega)$ (\cite{LiMa1}, Chap. 1, Proposition 12.1).
We shall also need Sobolev spaces with values in a complex Hilbert space $V$, with scalar product $(\cdot, \cdot)_V$. In this more general situation, we limit ourselves to the case $n = 1$ with $\Omega = (a,b) \subset \mathbb{R}, \, a <b$. In case $\beta \geq 0$, $H^\beta(a,b; V)$ can be defined similarly to the scalar valued situation (see \cite{LiMa1}, Chap. 1, 2.2, \cite{LiMa2}, Chap. 4, Sec. 2.1) and the aforementioned properties can be extended without much difficulty, employing the theory of vector valued distributions.
Further properties of Sobolev spaces will be recalled in Section \ref{seA3}.
Let $T \in \mathbb{R}^+$. If $\alpha, \beta \in [0, \infty)$, we set \begin{equation} H^{\alpha,\beta}(Q_T):= H^\alpha(0, T; L^2(\Omega)) \cap L^2(0, T; H^\beta(\Omega)). \end{equation} This is a Hilbert space with the norm \begin{equation}
\|f\|_{H^{\alpha,\beta}(Q_T)}^2:= \|f\|_{H^{\alpha}(0, T; L^2(\Omega))}^2 + \|f\|_{L^2(0, T; H^\beta(\Omega))}^2. \end{equation} Analogously, we define \begin{equation} H^{\alpha,\beta}(\Sigma_T):= H^\alpha(0, T; L^2(\Gamma)) \cap L^2(0, T; H^\beta(\Gamma)), \end{equation} which is a Hilbert space with the norm \begin{equation}
\|f\|_{H^{\alpha,\beta}(\Sigma_T)}^2:= \|f\|_{H^{\alpha}(0, T; L^2(\Gamma))}^2 + \|f\|_{L^2(0, T; H^\beta(\Gamma))}^2. \end{equation} We collect the following facts, which will be crucial for us: \begin{theorem}\label{th1.1} Assume that (H1) holds. Let $\alpha, \beta \in [0, \infty)$. Then the following propositions hold.
(I) If $ \max\{\alpha, \beta\} \leq 1/2$, then ${\mathcal{D}}(Q_T)$ is dense in $H^{\alpha,\beta}(Q_T)$, so that its dual space $H^{\alpha,\beta}(Q_T)'$ is a space of distributions in $Q_T$.
(II) If $j \in \mathbb{N}_0$ and $\beta > j + 1/2$, for $u \in H^{\alpha,\beta}(Q_T)$, $\gamma \in \mathbb{N}_0^n$ and $|\gamma| \leq j$, we may define
$D_x^\gamma u_{|\Sigma_T}$, and $u \to D_x^\gamma u_{|\Sigma_T}$ is a continuous linear mapping from $H^{\alpha,\beta}(Q_T)$ to $H^{\alpha(1-\frac{j+1/2}{\beta}), \beta - j - 1/2}(\Sigma_T)$.
(III) If $k \in \mathbb{N}_0$ and $\alpha > k+1/2$, we may define $D_t^k u(0,\cdot)$, and $u \to D_t^k u(0,\cdot)$ is a continuous linear mapping from $H^{\alpha,\beta}(Q_T)$ to $H^{(1-\frac{k+1/2}{\alpha})\beta}(\Omega)$.
\end{theorem}
\begin{proof} See \cite{LiMa2}, Chap. 4: for (I), Sec. 2.1; for (II) and (III), Theorem 2.1. \end{proof} We pass to outline the structure of the paper.
In Section \ref{seA2}, we state in a precise way the problem we want to solve. The main aim of the paper is to prove a result of existence and uniqueness of a solution to Problem 2, precisely stated in Theorem \ref{thA2.1}. The principal difficulty in the reconstruction of the convolution kernel $h$ lies in the fact that $h$ solves an integral equation of the first kind, which is a severely ill-posed problem. Following a method which, at least for parabolic systems, was introduced in \cite{LoSi1}, we differentiate the parabolic equation and the boundary condition with respect to time and we formulate Problem 3 for the unknowns $v:=D_tu$ and $h$. This new problem turns out to be equivalent to Problem 2 (Proposition \ref{prA2.1}). The differentiation in time has the effect of transforming the integral equation of the first kind into an integral equation of the second kind in the unknown $h$. Moreover, it forces us to look for $u \in H^2(0, T; L^2(\Omega)) \cap H^1(0, T; H^2(\Omega))$, implying $v \in H^{1,2}(Q_T)$, which is the classical functional framework for parabolic systems (see Section \ref{seA3}).
In order to solve Problem \ref{P3}, it seems very hard to look directly for a solution $v$ in $H^{1,2}(Q_T)$. The reason is that the traces on $\Sigma_T$ of the first order space derivatives are in $H^{1/4, 1/2}(\Sigma_T)$, see Theorem \ref{th1.1} (II). Unfortunately, in Problem 3 there appears on the boundary a term depending on memory which is not Lipschitz continuous from $H^{1,2}(Q_T)$ into $H^{1/4, 1/2}(\Sigma_T)$ and prevents us from applying the contraction mapping theorem. To overcome this difficulty, we apply the following strategy. First we look for a weak solution $v \in H^{3/4,3/2}(Q_T)$. Applying classical results of Lions and Magenes (see \cite{LiMa2}), we may replace the space of trace functions $H^{1/4, 1/2}(\Sigma_T)$ with $L^2(\Sigma_T)$. This allows us to employ the contraction mapping theorem and obtain existence and uniqueness of a global weak solution. The final step is to show that $v$ belongs, in fact, to $H^{1,2}(Q_T)$.
Entering into the details, in Section \ref{seA3} we review the weak parabolic theory developed in \cite{LiMa2} and add some technical results on it and on vector valued Sobolev spaces and their duals. Concerning dual spaces, we have adopted an abstract setting, having in mind the space $H^{1/4,1/2}(Q_T)'$, which plays a role in the weak parabolic theory.
The technical Sections \ref{seA4} and \ref{seA5} are dedicated to convolution and memory terms, respectively. We study in particular the convolution of an element $h$ in $L^1(0, T)$ by $z$, with $z$ belonging to a proper class of vector valued distributions in $(0, T)$, a generalization of $H^{1/4,1/2}(Q_T)'$.
In Section \ref{se6} we study the weak version of Problem \ref{P3}. Here we employ a method, introduced in \cite{CoGu1}, which allows us to treat the convolution as an affine operator (see Remark \ref{re4.4}).
Finally, in Section \ref{se7} we show that $v \in H^{1,2}(Q_T)$ and this completes the proof of Theorem \ref{thA2.1}.
\section{Statement of the problem and equivalent formulation}\label{seA2}
\setcounter{equation}{0}
Concerning system (\ref{eqA1.15}), we assume (H1) and moreover (see \cite{LiMa1}, Chap. 2, Sec. 1)
\vskip0.2truecm {\it (H2) $A = \sum_{i,j=1}^n D_{x_i} (a_{ij}(x) D_{x_j}), \quad a_{ij} \in C^\infty(\overline \Omega)$ \vskip0.2truecm \quad \quad \! $B = \sum_{i=1}^n b_{i}(x') D_{x_i} + b_0(x'), \quad b_i \in C^\infty(\Gamma); \quad\quad \sum_{i=1}^n b_{i}(x') \nu_i(x') \neq 0$, $\forall\, x' \in \Gamma$ \vskip0.2truecm \quad \quad \, $e^{i\theta}D_t^2 + \sum_{i,j=1}^n D_{x_i} (a_{ij}(x) D_{x_j})$ is properly elliptic in $\mathbb{R} \times \Omega$, and covered by $B$ in $\mathbb{R} \times \Gamma$, \vskip0.1truecm \quad \quad \, $\forall \, \theta \in [-\pi/2, \pi/2]$
(H3) $f \in H^{1,0}(Q_T)$
(H4) $u_0 \in H^2(\Omega)$, $v_0:= Au_0 + f(0,\cdot) \in H^1(\Omega)$
(H5) $q \in H^{5/4}(0, T; L^2(\Gamma))\cap H^1(0, T; H^{1/2}(\Gamma))$
(H6) $\omega \in H^2_0(\Omega)$, \quad $g \in H^2(0, T)$
(H7) $\Phi(Au_0) \neq 0$
(H8) $\omega_1 \in L^2(\Omega)$, \,$\omega_2 \in L^2(\Gamma)$, \, $u_A, u_B \in H^{5/4}(0, T; L^2(\Gamma)) \cap H^1(0, T; H^{1/2}(\Gamma))$,\,
\quad \quad \, $u_C \in C([0, T]) \cap BV([0, T])$
(H9) $\Phi(u_0) = g(0)$, \, $\Phi(v_0) = g'(0)$, \, $Bu_0 + q(0,\cdot) = \phi_0 u_A(0,\cdot) + u_B(0,\cdot) - u_{0|\Gamma}$ }
\begin{remark} {\rm In (H2) we say that $e^{i\theta} D_t^2 + \sum_{i,j=1}^n D_{x_i}(a_{ij}(x) D_{x_j})$ is covered by $B$ when the following condition is satisfied (see \cite{LiMa1}, Ch. 2, Prop. 4.2).
Take an arbitrary $x'$ in $\Gamma$, $(\tau,\xi)$ in $(\mathbb{R} \times \mathbb{R}^n) \setminus \{(0,\mathbf{0})\}$ with $\xi$ tangent to $\Gamma$ in $x'$, $\xi'$ in $\mathbb{R}^n \setminus \{\mathbf{0}\}$, normal to $\Gamma$ in $x'$ and consider the ODE problem $$ \left\{\begin{array}{l} - e^{i\theta} \tau^2 v(t) + \sum_{i,j=1}^n a_{ij}(x') (i\xi_i + \xi_i' D_t) (i\xi_j+ \xi_j' D_t) v(t) = 0, \\ \\ \sum_{i=1}^n b_i(x') (i\xi_i + \xi_i' D_t) v (0) = 1. \end{array} \right. $$ Then such problem has a unique solution $v$ which is bounded in $\mathbb{R}^+$.} \end{remark}
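For instance, in the model case $a_{ij} = \delta_{ij}$ and $B = \partial/\partial\nu$ (so that $b_i = \nu_i$), the ODE problem above reduces, since $\xi \cdot \xi' = 0 = \nu \cdot \xi$, to $$ |\xi'|^2 v''(t) = (e^{i\theta} \tau^2 + |\xi|^2) v(t), \qquad (\nu \cdot \xi')\, v'(0) = 1. $$ Its unique bounded solution on $\mathbb{R}^+$ is $v(t) = -[(\nu \cdot \xi')\lambda]^{-1} e^{-\lambda t}$, with $\lambda$ the square root of $(e^{i\theta}\tau^2 + |\xi|^2)/|\xi'|^2$ with positive real part, so that the covering condition is satisfied in this case.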
The main result of this paper is the following
\begin{theorem}\label{thA2.1} Assume that (C1)-(C3) and (H1)-(H9) hold. Then (\ref{eqA1.15}) has a unique solution $(u,h)$ such that $u \in H^2(0, T; L^2(\Omega)) \cap H^1(0, T; H^2(\Omega))$ and $h \in L^2(0, T)$. \end{theorem}
\begin{remark}
{\rm Let $u \in H^2(0, T; L^2(\Omega)) \cap H^1(0, T; H^2(\Omega))$. As $u_{|\Sigma_T} \in H^1(0, T; H^{3/2}(\Gamma))$, then ${\cal M} (u) \in H^1(0, T) \hookrightarrow C([0, T]) \cap BV([0, T])$. So, ${\cal W}({\cal M} (u))$ is well defined and belongs to $C([0, T]) \cap BV([0, T])$. Moreover, we deduce that $E_1 * {\cal W}({\cal M} (u)) \in C^1([0, T])$ and $D_t[E_1 * {\cal W}({\cal M} (u))] \in BV([0, T])$. } \end{remark}
The first step in the proof of Theorem \ref{thA2.1} is to formulate a problem which is equivalent to Problem 2. This is done through the following
\begin{proposition}\label{prA2.1} Assume (C1)-(C3) and (H1)-(H9). Let $(u,h)$ be a solution to (\ref{eqA1.15}) such that \begin{equation}\label{eqA2.4} u \in H^2(0, T; L^2(\Omega)) \cap H^1(0, T; H^2(\Omega)), \quad h \in L^2(0, T). \end{equation} Setting \begin{equation} v(t,x):= D_t u(t,x), \end{equation} then the pair $(v,h)$ satisfies
\begin{equation}\label{eqA2.6A} v \in H^{1,2}(Q_T), \quad h \in L^2(0, T) \end{equation} and solves
\begin{problem}\label{P3} Find $v$ of domain $Q_T$ and $h$ of domain $(0, T)$, such that \begin{equation}\label{eqA2.6} \left\{\begin{array}{l} D_tv(t,x) = Av(t,x) + h \ast Av(t,x) + v^*(t,x) \\ \\
- [(\psi_1,v(t,\cdot)) + h \ast (\psi_1, v(t,\cdot))] z_0(x), \quad (t,x) \in Q_T, \\ \\ v(0,x) = v_0(x), \quad x \in \Omega, \\ \\ Bv(t,y) = - v(t,y) + \Psi(v)(t,y) - h \ast Bv(t,y) + [(\psi_1,v(t,\cdot)) + h \ast (\psi_1, v(t,\cdot))] z_1(y) \\ \\ + v^*_\Gamma (t,y), \quad (t,y) \in \Sigma_T, \\ \\ h(t) = h^*(t) - (\psi_1,v(t,\cdot)) - h \ast (\psi_1, v(t,\cdot)), \quad t \in (0, T), \end{array} \right. \end{equation} \end{problem} where we have set \begin{equation} \chi:= \Phi(Au_0)^{-1}, \end{equation} \begin{equation} h^*(t) := \chi(g''(t) - \Phi (D_tf(t,\cdot))), \quad t \in (0, T), \end{equation} \begin{equation} z_0(x):= Au_0(x), \quad x \in \Omega, \end{equation} \begin{equation} v^*(t,x) := D_tf(t,x) + h^*(t) z_0(x), \quad (t,x) \in Q_T, \end{equation} \begin{equation}\label{eqA2.3} A^*:= \sum_{i,j=1}^n D_{x_j} (\overline{a_{ij}(x)} D_{x_i}), \end{equation} \begin{equation} \psi_1(x):= \chi \overline{ A^*\overline{\omega}(x)}, \quad x \in \Omega, \end{equation} \begin{equation}\label{eqA2.12} \Psi(v)(t,y):= D_t [{\cal F} ({\cal W}({\cal M}(u_0 + 1 \ast v)))](t,y), \quad (t,y) \in \Sigma_T, \end{equation} \begin{equation} z_1(y):= Bu_0(y), \quad y \in \Gamma, \end{equation} \begin{equation} v^*_\Gamma (t,y) := -D_t q(t,y) - h^*(t) z_1(y), \quad (t,y) \in \Sigma_T, \end{equation} \begin{equation} (\psi_1, v):= \int_\Omega \psi_1(x) v(x) dx, \quad v \in L^2(\Omega). \end{equation} On the other hand, if $(v, h)$ is a solution to problem (\ref{eqA2.6}) and satisfies conditions (\ref{eqA2.6A}), then the pair $(u, h)$ verifies (\ref{eqA2.4}) and solves system (\ref{eqA1.15}), where we have set $u(t,x) := u_0(x) + 1 \ast v(t,x)$.
\end{proposition}
\begin{proof} We observe that, thanks to (H1)-(H9), we have \begin{equation} \begin{array}{cccccc} h^* \in L^2(0, T), & z_0 \in L^2(\Omega), & v^* \in L^2(Q_T), & \psi_1 \in L^2(\Omega), & z_1 \in H^{1/2}(\Gamma). \end{array} \end{equation} Suppose that (\ref{eqA1.15}) has a solution $(u,h)$, with $u \in H^2(0, T; L^2(\Omega)) \cap H^1(0, T; H^2(\Omega))$, $h \in L^2(0, T)$. Obviously, $v = D_t u \in H^{1,2}(Q_T)$. We observe that $$ h \ast Au = h \ast (A u_0 + 1 \ast Av) = 1 \ast [h Au_0 + h \ast Av], $$ so that $h \ast Au \in H^1(0, T; L^2(\Omega))$ and $D_t(h \ast Au) = h Au_0 + h \ast Av$. Analogously, $D_t(h \ast Bu) = h Bu_0 + h \ast Bv$. Hence, differentiating with respect to $t$ the two first equations in (\ref{eqA1.15}), we obtain
\begin{equation}\label{eqA2.15} \left\{\begin{array}{l} D_t v(t,x) = Av(t,x) + h \ast Av(t,x) + h(t)z_0(x) + D_tf(t,x), \quad (t,x) \in Q_T, \\ \\ Bv(t,y) = -v(t,y) + D_t[{\cal F} ({\cal W} ({\cal M}(u_0 + 1 \ast v)))](t,y) - h \ast Bv(t,y) \\ \\
- h(t) z_1(y) - D_tq(t,y), \quad (t,y) \in \Sigma_T, \\ \\ v(0,x) = v_0(x), \quad x \in \Omega. \end{array} \right. \end{equation} We observe also that \begin{equation} \begin{array}{lll} \Phi(v(t,\cdot)) = g'(t), & \Phi(D_tv(t,\cdot)) = g''(t), & t \in (0, T). \end{array} \end{equation} So, applying $\Phi$ to the first equation in (\ref{eqA2.15}), we obtain \begin{equation}\label{eqA2.17} h(t) = \chi[g''(t) - \Phi(Av(t,\cdot)) - h \ast \Phi(Av(t,\cdot)) - \Phi(D_t f(t, \cdot))] = h^*(t) - (\psi_1,v(t, \cdot)) - h \ast (\psi_1,v(t, \cdot)), \end{equation} which is the last equation in (\ref{eqA2.6}). Replacing $h$ with the right-hand side of (\ref{eqA2.17}) in (\ref{eqA2.15}), we deduce that $(v,h)$ solves (\ref{eqA2.6}).
On the other hand, assume that $(v,h)$ satisfies (\ref{eqA2.6A}) and solves (\ref{eqA2.6}). We set $u:= u_0 + 1 \ast v$. Then, as $u_0 \in H^2(\Omega)$, we have $u \in H^2(0, T; L^2(\Omega)) \cap H^1(0, T; H^2(\Omega))$. Moreover, (\ref{eqA2.6}) clearly implies (\ref{eqA2.15}), which can be written in the form $$ \left\{ \begin{array}{ll} D_t^2 u = D_t(Au + h \ast Au + f), & \mbox{ in } Q_T, \\ \\ D_t[Bu + h \ast Bu + q] = D_t[{\cal F} ({\cal W} ({\cal M} u)) - u], & \mbox{ on } \Sigma_T. \end{array} \right. $$ From (H4) and (H8), we deduce that the first three equations in (\ref{eqA1.15}) are satisfied. It remains only to show that $\Phi(u) \equiv g$. Applying $\Phi$ to the first equation in (\ref{eqA2.15}) and employing (\ref{eqA2.17}), we obtain $$ D_t^2 [\Phi(u)](t) = \Phi(D_t^2 u(t,\cdot)) = \Phi(Av(t,\cdot) + h \ast Av(t,\cdot)) + h(t)\chi^{-1} + \Phi[D_tf(t,\cdot)] = g''(t). $$ So the conclusion follows from (H9). \end{proof}
\section{Weak solutions to parabolic systems}\label{seA3}
\setcounter{equation}{0}
In this section we introduce the theory of linear parabolic problems together with some further results and remarks that we are going to use to study our inverse problem. We begin by considering a linear parabolic system in the form
\begin{equation}\label{eq1.22} \left\{\begin{array}{ll} D_tu(t,x) = Au(t,x) + f(t,x), & (t,x) \in Q_T, \\ \\ u(0,x) = u_0(x), & x \in \Omega, \\ \\ Bu(t,x') = g(t,x'), & (t,x') \in \Sigma_T, \end{array} \right. \end{equation} We outline now some topics of the theory contained in Chap. 4 of \cite{LiMa2}. Recalling the definition of the adjoint operator $A^*$ of $A$ (see (\ref{eqA2.3})), the first result is concerned with the elliptic theory.
\begin{theorem}\label{th1.2} Assume that (H1)-(H2) hold. Then we can construct three operators $S$, $C$, $T$, such that:
(I) $S$ and $T$ are multiplication operators by functions $s(x')$ and $t(x')$ in $C^\infty (\Gamma)$, such that $s(x') \neq 0$ and $t(x') \neq 0$, $\forall x' \in \Gamma$;
(II) $C$ is a first order differential operator, with coefficients in $C^\infty (\Gamma)$;
(III) (H2) holds with $A$ replaced by $A^*$ and $B$ replaced by $C$;
(IV) $\forall \, u, v \in H^2(\Omega)$, the following Green's formula holds \begin{equation}\label{eq1.24} \int_\Omega [Au(x) \overline{v(x)} - u(x) \overline{A^*v(x)}] dx = \int_{\Gamma} [Su(x') \overline{Cv(x')} - Bu(x') \overline{Tv(x')}] d\sigma. \end{equation}
\end{theorem}
\begin{proof} See \cite{LiMa1}, Chap. 2. \end{proof}
From (\ref{eq1.24}) we immediately deduce the following formula, valid for $u, v \in H^{1,2}(Q_T)$, \begin{equation} \begin{array}{c} \int_{Q_T} \{[D_tu(t,x) - Au(t,x)] \overline{v(t,x)} + u(t,x) [\overline{D_tv(t,x) + A^*v(t,x)}]\} dt dx = \\ \\ = \int_\Omega [u(T,x) \overline{v(T,x)} - u(0,x) \overline{v(0,x)}] dx + \int_{\Sigma_T} [Bu(t,x') \overline{Tv(t,x')} - Su(t,x') \overline{Cv(t,x')}] dt d\sigma. \end{array} \end{equation}
The next fundamental result holds.
\begin{theorem}\label{th1.3} Assume (H1)-(H2). Then (\ref{eq1.22}) admits a unique solution $u \in H^{1,2}(Q_T)$ if and only if $$ \begin{array}{ccc} f \in L^2(Q_T), & u_0 \in H^1(\Omega), & g \in H^{1/4,1/2}(\Sigma_T). \end{array} $$ \end{theorem} \begin{proof} See \cite{LiMa2}, Chap. 4, Theorem 5.3. \end{proof} \begin{remark} {\rm If $u \in H^{1,2}(Q_T)$ is the solution of (\ref{eq1.22}), then it holds \begin{equation} \begin{array}{c} \int_{Q_T} u(t,x) [\overline{D_tv(t,x) + A^*v(t,x)}] dt dx = \\ \\ = - \int_{Q_T} f(t,x) \overline{v(t,x)} dt dx - \int_\Omega u_0(x) \overline{v(0,x)} dx + \int_{\Sigma_T} g(t,x') \overline{Tv(t,x')} dt d\sigma, \\ \\ \forall \, v \in H^{1,2}(Q_T) : Cv \equiv 0, v(T,\cdot) = 0. \end{array} \end{equation} } \end{remark}
A simple consequence of Theorems \ref{th1.2} and \ref{th1.3} is the following
\begin{corollary}\label{co1.1} Assume that (H1)-(H2) are fulfilled and consider the system \begin{equation}\label{eq1.26} \left\{\begin{array}{ll} D_tv(t,x) + A^*v(t,x) = \phi(t,x), & (t,x) \in Q_T, \\ \\ v(T,x) = 0, & x \in \Omega, \\ \\ Cv(t,x') = 0, & (t,x') \in \Sigma_T. \end{array} \right. \end{equation} If $\phi \in L^2(Q_T)$, then (\ref{eq1.26}) has a unique solution $v \in H^{1,2}(Q_T)$.
\end{corollary}
From property (II) of Theorem \ref{th1.1}, it follows that, if $v \in H^{1,2}(Q_T)$, then $Tv \in H^{3/4,3/2}(\Sigma_T)$. So, employing a simple duality argument, from Corollary \ref{co1.1} we deduce the following
\begin{corollary}\label{co1.2}
Suppose that (H1)-(H2) are fulfilled. Then, $\forall f \in H^{1,2}(Q_T)'$, $\forall \, u_0 \in H^1(\Omega)'$,
$\forall \, g \in H^{3/4,3/2}(\Sigma_T)'$, there exists a unique $u = S_T(f, u_0,g)$ in $L^2(Q_T)$ such that
\begin{equation} \begin{array}{c} \int_{Q_T} u(t,x) [\overline{D_tv(t,x) + A^*v(t,x)}] dt dx =
- (f, v) - (u_0, v(0,\cdot)) + (g, Tv), \\ \\
\forall \, v \in H^{1,2}(Q_T) : Cv \equiv 0, v(T,\cdot) = 0. \end{array} \end{equation} Moreover, if $f \in L^2(Q_T)$, $u_0 \in H^1(\Omega)$ and $g \in H^{1/4,1/2}(\Sigma_T)$, then $u \in H^{1,2}(Q_T)$ and it solves (\ref{eq1.22}).
\end{corollary}
\begin{proof} See \cite{LiMa2}, Chap. 4, Sec. 13.3.
\end{proof}
\begin{remark}
{\rm An $L^p$ version of Corollary \ref{co1.2} ($1 < p < \infty$) is given in \cite{Am1}, Theorem 0.5. Here the author
defines $S_T(f, u_0,g)$ as the {\it ultraweak solution} of the parabolic problem with data $(f, g, u_0)$.}
\end{remark}
Another key tool for our analysis is
\begin{theorem}\label{th1.4}
Assume (H1)-(H2). Then $S_T$ maps $H^{1/4,1/2}(Q_T)' \times H^{1/2}(\Omega) \times L^2(\Sigma_T)$ into $H^{3/4, 3/2}(Q_T)$.
\end{theorem} \begin{proof} See \cite{LiMa2}, Chap. 4, Secs. 15.1 and 15.3.
\end{proof} Now we want to derive appropriate estimates of $S_T(f, u_0,g)$. To this aim, we consider in detail the space $H^\beta(0,T; V)$ with $\beta \in (0, 1)$ and $V$ Hilbert. It can be characterized as $$ \left\{f \in L^2(0, T; V) \,\,\, \text{such that} \,\,\, [f]_{H^\beta(0, T; V)}^2:=
\int_0^T \left(\int_0^t \frac{\|f(t) - f(s)\|^2_V}{(t-s)^{1+2\beta}} ds \right) dt < \infty\right\} $$ (see, for example, \cite{Ad1}, Theorem 7.48, or \cite{Am1}, Theorem 4.4.3) and it is a Hilbert space with the norm \begin{equation}\label{norm1}
\left (\|f\|'_{H^\beta(0, T; V)}\right )^2 := \|f\|_{L^2(0, T; V)}^2 + [f]^2_{H^\beta(0, T; V)}. \end{equation} Consider first the case $\beta \in (0, 1/2)$. One can prove that there exists $C>0$ such that, $\forall f \in H^\beta(0, T; V)$, \begin{equation}\label{eq1.3}
\int_0^T [t^{-2\beta} + (T-t)^{-2\beta}] \|f(t)\|_V^2 dt \leq C \left( \|f\|'_{H^\beta(0, T; V)}\right )^2. \end{equation} (see \cite{LiMa1}, Chap. 1, Sec. 11.2). On the other hand, if $\beta \in (1/2, 1)$, then $H^\beta(0, T; V) \hookrightarrow C^{\beta-1/2}([0, T]; V)$, the space of H\"older functions with values in $V$ (see \cite{Si1}, Chap. 14). So, introducing the norms
\begin{equation}\label{eq1.1}
\|f\|_{H^\beta(0, T; V)}^2Ê:= \left\{ \begin{array}{lll}
\int_0^T t^{-2\beta} \|f(t)\|_V^2 dt + \int_0^T (\int_0^t \frac{\|f(t) - f(s)\|^2_V}{(t-s)^{1+2\beta}} ds ) dt, & \mbox{ if } 0 < \beta < 1/2, \\ \\
\|f(0)\|_V^2 + \int_0^T (\int_0^t \frac{\|f(t) - f(s)\|^2_V}{(t-s)^{1+2\beta}} ds ) dt, & \mbox{ if } 1/2 < \beta < 1,
\end{array} \right. \end{equation} then it holds \begin{proposition} Let $\beta \in (0, 1) \setminus \{1/2\}$. Then the norms defined in (\ref{norm1}) and in (\ref{eq1.1}) are equivalent. \end{proposition}
\begin{proof} The case $\beta \in (0, 1/2)$ follows easily from (\ref{eq1.3}). Concerning the case $\beta \in (1/2, 1)$, it suffices to prove the estimate $$
\|f\|_{L^2(0, T; V)}^2 \leq C \|f\|_{H^\beta(0, T; V)}^2, \, \forall \, f \in H^\beta(0, T; V), $$
for some $C > 0$, independent of $f$. If this is not the case, then there exists a sequence $(f_k)_{k \in \mathbb{N}}$ in $H^\beta(0, T; V)$, such that $\|f_k\|_{L^2(0, T; V)} = 1$ for every $k \in \mathbb{N}$, while ${\displaystyle \lim_{k \to \infty}} \|f_k\|_{H^\beta(0, T; V)} = 0$. In particular,
$$ \int_0^T \left (\int_0^T \frac{\|f_k(t) - f_k(s)\|^2_V}{|t-s|^{1+2\beta}} ds \right ) dt
\leq 2 \|f_k\|_{H^\beta(0, T; V)}^2 \to 0, \quad (k \to \infty). $$ This implies that, possibly passing to a subsequence, we may assume, for almost every $t$ in $[0, T]$, \begin{equation}\label{eq1.5}
\int_0^T \frac{\|f_k(t) - f_k(s)\|^2_V}{|t-s|^{1+2\beta}} ds \to 0, \quad (k \to \infty). \end{equation} By an application of Sobolev embedding theorems, there exists $C_1>0$, such that, $\forall \, k \in \mathbb{N}$, $\forall \, t \in [0, T]$, $$
\|f_k(t) - f_k(0)\|_V \leq C_1 {\|f_k\|'_{H^\beta(0, T; V)}} t^{\beta - 1/2} \leq C_2 t^{\beta - 1/2}. $$
So, as ${\displaystyle \lim_{k \to \infty}} \|f_k(0)\|_V = 0$, for every $\epsilon>0$ we can choose $t \in (0, T]$ such that
(\ref{eq1.5}) holds and $\|f_k(t)\|_V \leq \epsilon$ for every sufficiently large $k$. Hence, we can deduce $$
1 = \int_0^T \|f_k(s)\|_V^2 ds
\leq 2\left( T^{1+2\beta} \int_0^T\frac{\|f_k(t) - f_k(s)\|^2_V}{|t-s|^{1+2\beta}} ds + T\epsilon^2\right) \leq 2( T^{1+2\beta} + T)\epsilon^2, $$ if $k$ is sufficiently large, which is clearly a contradiction. \end{proof}
We observe that the norm $\|\cdot\|_{H^\beta(0, T; V)}$ will be particularly convenient in the case $0 < \beta < \frac{1}{2}$, in force of the homogeneity property (which follows from (\ref{eq1.1}) by the change of variables $t \to Tt$, $s \to Ts$)
\begin{equation}\label{eq1.2}
\|f\|_{H^\beta(0, T; V)}^2 = T^{1-2\beta} \|f(T\cdot)\|_{H^\beta(0, 1; V)}^2 \quad \forall \, T >0. \end{equation} For future use, we prove the following \begin{lemma}\label{le1.1A} Let $\beta \in (1/2, 1)$. Then there exists $C >0$, independent of $T$, such that $$
\|f\|_{L^2(0, T; V)}^2
\leq CT\left (\|f(0)\|_V^2 + T^{2\beta -1} \int_0^T \left(\int_0^t \frac{\|f(t) - f(s)\|_V^2}{(t-s)^{1+2\beta}} ds\right) dt\right ), \quad \forall \, f \in H^\beta(0, T; V). $$ \end{lemma} \begin{proof} It follows from the chain of inequalities $$ \begin{array}{c}
\|f\|_{L^2(0, T; V)}^2 = T \| f(T\cdot)\|_{L^2(0, 1; V)}^2 \leq CT \|f(T\cdot)\|_{H^\beta(0, 1; V)}^2 \\ \\
= CT(\|f(0)\|_V^2 + \int_0^1 (\int_0^t \frac{\|f(Tt) - f(Ts)\|_V^2}{(t-s)^{1+2\beta}} ds) dt) \\ \\
= CT(\|f(0)\|_V^2 + T^{2\beta -1} \int_0^T (\int_0^t \frac{\|f(t) - f(s)\|_V^2}{(t-s)^{1+2\beta}} ds) dt). \end{array} $$ \end{proof}
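In particular, taking $\beta = 3/4$ and $0 < \tau \leq T$, Lemma \ref{le1.1A} yields
$$
\|f\|_{L^2(0, \tau; V)} \leq C(T)\, \tau^{1/2} \|f\|_{H^{3/4}(0, \tau; V)}, \quad \forall \, f \in H^{3/4}(0, \tau; V),
$$
an estimate which will be employed in Section \ref{se6}.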
\begin{lemma}\label{le1.1}
Let $\beta \in (0, 1/2)$. Then the following propositions hold.
(I) There exists $C>0$, independent of $T$, such that, $\forall \, f \in H^\beta(0, T; V)$, $$
\int_0^T (T-t)^{-2\beta} \|f(t)\|_V^2 dt \leq C \|f\|_{H^\beta(0, T; V)}^2. $$ (II) Let $0 < T < T'$. Given $f$ in $H^\beta(0, T; V)$, we indicate with $\widetilde f$ its extension to $(0, T')$, such that $\widetilde f(t) = 0$ for a.a. $t \in [T, T')$. Then $\widetilde f \in H^\beta(0, T'; V)$. Moreover, there exists $C >0$, independent of $T, T', f$, such that $$
\|\widetilde f \|_{H^\beta(0, T'; V)}^2 \leq C \|f \|_{H^\beta(0, T; V)}^2. $$ (III) Given $f$ in $H^\beta(0, T; V)$ and $s \in (0, T)$, we define, for $t \in (0, T)$, \begin{equation}\label{eq1.4} f_s(t) = \left\{\begin{array}{ll} f(t+s), & \mbox{ if } t + s < T, \\ \\ 0, & \mbox{ if } t + s \geq T. \end{array} \right. \end{equation} Then $f_s \in H^\beta(0, T; V)$. Moreover, the map $s \to f_s$ belongs to $C((0, T); H^\beta(0, T; V))$ and $$
\| f_s \|_{H^\beta(0, T; V)}^2 \leq C \|f \|_{H^\beta(0, T; V)}^2, $$ with $C>0$, independent of $T$, $s$, $f$. \end{lemma}
\begin{proof} (I) follows immediately from (\ref{eq1.2}). In fact, $$ \begin{array}{c}
\int_0^T (T-t)^{-2\beta} \|f(t)\|_V^2 dt = T^{1-2\beta} \int_0^1 (1-t)^{-2\beta} \|f(Tt)\|_V^2 dt \\ \\
\leq C T^{1-2\beta} \|f(T\cdot)\|_{H^\beta(0, 1; V)}^2 = C \|f\|_{H^\beta(0, T; V)}^2. \end{array} $$ Concerning (II), we have $$ \begin{array}{c}
\|\widetilde f\|_{H^\beta(0, T'; V)}^2 = \|f\|_{H^\beta(0, T; V)}^2 + \int_T^{T'} (\int_0^T \frac{\|f(s)\|_V^2}{(t-s)^{1+2\beta}} ds ) dt \\ \\
\leq \|f\|_{H^\beta(0, T; V)}^2 + \int_0^T (\int_T^\infty (t-s)^{-1-2\beta} dt) \|f(s)\|_V^2 ds \\ \\
= \|f\|_{H^\beta(0, T; V)}^2 + \frac{1}{2\beta} \int_0^T (T-s)^{-2\beta} \|f(s)\|_V^2 ds, \end{array} $$ and the conclusion follows from (I).
In order to prove (III), we start by considering the case $T = 1$. We adopt the norm $$
\|f\|''_{H^\beta(0, 1; V)}:= \inf\{\|F\|_{H^\beta(\mathbb{R}; V)}: F_{|(0, 1)} = f\}, $$ with $$
\|F\|_{H^\beta(\mathbb{R}; V)}^2 := \int_\mathbb{R} (1 + \tau^2)^\beta \|\widehat F(\tau)\|_V^2 d\tau $$
and $\widehat F$ the Fourier transform of $F$. It is easy to see that $\|F(\cdot + s)\|_{H^\beta(\mathbb{R}; V)} = \|F\|_{H^\beta(\mathbb{R}; V)}$, $\forall \, s \in \mathbb{R}$, and the map $s \to F(\cdot + s)$ is continuous from $\mathbb{R}$ to $H^\beta(\mathbb{R}; V)$. As the trivial extension of an element of $H^\beta(0, 1; V)$ is an element of $H^\beta(\mathbb{R}; V)$, we may think of the characteristic function $\chi$ of $(0, 1)$ as a pointwise multiplier in $H^\beta(\mathbb{R}; V)$. So, if $F \in H^\beta(\mathbb{R}; V)$ and $F_{|(0, 1)} = f$, we have $$
\|f_s\|''_{H^\beta(0, 1; V)} \leq \|\chi F(\cdot + s)\|_{H^\beta(\mathbb{R}; V)} \leq C \|F(\cdot + s)\|_{H^\beta(\mathbb{R}; V)}
= C \|F\|_{H^\beta(\mathbb{R}; V)}, $$ implying $$
\|f_s\|''_{H^\beta(0, 1; V)} \leq C \|f\|''_{H^\beta(0, 1; V)}, $$ for some $C >0$, independent of $s \in (0, 1)$ and $f$. Moreover, if $s_k \to s$, $$
\|f_{s_k} - f_s \|''_{H^\beta(0, 1; V)} \leq C\|F(\cdot + s_k) - F(\cdot + s)\|_{H^\beta(\mathbb{R}; V)} \to 0, \quad \mbox{ as } \, k \to \infty. $$
The general case follows from (\ref{eq1.2}), since if $s \in (0, T)$ and $f \in H^\beta(0, T; V)$ then we have $$ \begin{array}{c}
\|f_s\|_{H^\beta(0, T; V)}^2 = T^{1-2\beta} \|f_s(T\cdot)\|_{H^\beta(0, 1; V)}^2 = T^{1-2\beta} \|[f(T\cdot)]_{s/T}\|_{H^\beta(0, 1; V)}^2 \\ \\
\leq C T^{1-2\beta} \|f(T\cdot)\|_{H^\beta(0, 1; V)}^2 = C \|f\|_{H^\beta(0, T; V)}^2. \end{array} $$ \end{proof}
As we have in mind to apply Theorem \ref{th1.4}, we have to deal with the space $H^{1/4,1/2}(Q_T)'$. So we consider the following abstract framework.
\begin{definition}\label{alfa} Let $V_1$ and $V_2$ be complex Hilbert spaces with $V_1$ densely embedded into $V_2$. For $0 < \beta < 1/2$, we define the Hilbert space $$ Y_T:= H^\beta(0, T; V_2) \cap L^2(0, T; V_1), $$ equipped with the norm $$
\|f\|_{Y_T}^2 := \|f\|_{H^\beta(0, T; V_2)}^2 + \|f\|_{L^2(0, T; V_1)}^2 $$ and its antidual space $Y_T' = [H^\beta(0, T; V_2) \cap L^2(0, T; V_1)]'$, normed by $$
\|z\|_{Y_T'} := \sup \{|(z,f)| \, : \, \|f\|_{Y_T} \leq 1\}. $$ \end{definition}
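The case we have in mind is $\beta = 1/4$, $V_1 = H^{1/2}(\Omega)$, $V_2 = L^2(\Omega)$, for which
$$
Y_T = H^{1/4}(0, T; L^2(\Omega)) \cap L^2(0, T; H^{1/2}(\Omega)) = H^{1/4,1/2}(Q_T), \qquad Y_T' = H^{1/4,1/2}(Q_T)'.
$$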
Observe that we shall think of $L^2(0, T; V_2)$ as continuously embedded into $Y_T'$, identifying $g \in L^2(0, T; V_2)$ with the functional $$ f \to \int_0^T (g(t), f(t))_{V_2} dt. $$ The first important fact is
\begin{lemma}\label{le1.2}
$L^2(0, T; V_2)$ is dense in $Y_T'$. \end{lemma}
\begin{proof} Owing to the reflexivity of $Y_T$, it suffices to show that, if $f \in Y_T$, and $\int_0^T (g(t), f(t))_{V_2} dt = 0$, $\forall \,g \in L^2(0, T; V_2)$, then $f(t) = 0$ a.e. in $(0,T)$, which is obvious. \end{proof}
Let $0 < T < T'$. Given $z \in Y_{T'}'$ and recalling the definition of $\widetilde f$, we can define the restriction $z_{|(0, T)}$ of $z$ to $(0, T)$ as the element of $Y_T'$, such that
\begin{equation}\label{eq1.6}
(z_{|(0, T)}, f):= (z, \widetilde f), \quad f \in Y_T. \end{equation}
On the other hand, taking $z \in Y_{T}'$, we can define its trivial extension $\widetilde z$ to $(0, T')$ as
\begin{equation}\label{eq1.7A}
(\widetilde z, f):= (z, f_{|(0, T)}), \quad f \in Y_{T'}. \end{equation} We observe that these definitions coincide with the natural ones in $L^2(0, T'; V_2)$ and $L^2(0, T; V_2)$.
\begin{lemma} \label{le1.3} The following propositions hold.
(I) There exists $C >0$, independent of $T$ and $T'$, such that
\, $\|z_{|(0, T)}\|_{Y_T'} \leq C \|z\|_{Y_{T'}'}, \forall z \in Y_{T'}'$.
(II) $\|\widetilde z\|_{Y_{T'}'} \leq \|z\|_{Y_{T}'}$, $\forall \,z \in Y_{T}'$.
(III) If \, $g \in L^2(0, T; V_2)$, then \, $\|g\|_{Y_T'} \leq T^\beta \|g\|_{L^2(0, T; V_2)}$. \end{lemma}
\begin{proof} (I) By Lemma \ref{le1.1}, there exists $C$, independent of $T$ and $T'$, such that $$
|(z_{|(0, T)}, f)| = |(z, \widetilde f)| \leq \|z\|_{Y_{T'}'} \|\widetilde f\|_{Y_{T'}} \leq C \|z\|_{Y_{T'}'} \|f\|_{Y_{T}}, \quad \forall \, f \in Y_T. $$ (II) For every $f \in Y_{T'}$ it holds $$
|(\widetilde z, f)| = |(z, f_{|(0, T)})| \leq \|z\|_{Y_{T}'} \|f_{|(0, T)}\|_{Y_T} \leq \|z\|_{Y_{T}'} \|f\|_{Y_{T'}}. $$ (III) If $f \in Y_T$, we have $$ \begin{array}{c}
|(g, f)| = |\int_0^T (g(t), f(t))_{V_2} dt| \leq T^\beta \int_0^T \|g(t)\|_{V_2} t^{-\beta} \|f(t)\|_{V_2} dt \\ \\
\leq T^\beta \|g\|_{L^2(0, T; V_2)} (\int_0^T t^{-2\beta} \|f(t)\|_{V_2}^2 dt)^{1/2} \leq T^\beta \|g\|_{L^2(0, T; V_2)} \|f\|_{Y_T}. \end{array} $$ \end{proof}
\begin{remark} {\rm If $z \in Y_{T'}'$ and $0 < T < T'$, we get
\begin{equation}\label{eq1.7}
(z, \chi_{(0, T)} f) = (z, {\widetilde f}_{|(0, T)}) = (z_{|(0, T)}, f_{|(0, T)}), \quad \forall \, f \in Y_{T'}. \end{equation}
We can also define $z_{|(T, T')}$, which will be an element of $[H^\beta(T, T'; V_2) \cap L^2(T, T'; V_1)]'$. Observe that $H^\beta(T, T'; V_2) \cap L^2(T, T'; V_1)$ is a Hilbert space, with the norm \begin{equation} \begin{array}{c}
\|f\|_{H^\beta(T, T'; V_2) \cap L^2(T, T'; V_1)}^2 \\ \\
:= \int_T^{T'} [(t-T)^{-2\beta} \|f(t)\|_{V_2}^2 + \|f(t)\|_{V_1}^2 + \int_T^t \frac{\|f(t) - f(s)\|_{V_2}^2}{(t-s)^{1+2\beta}} ds] dt. \end{array} \end{equation} Let $f \in H^\beta(T, T'; V_2) \cap L^2(T, T'; V_1)$. We can consider the element $f_0 \in Y_{T'}$, which is the trivial extension of $f$ to $(0, T')$: \begin{equation} f_0(t):= \left\{\begin{array}{ll} 0 & \mbox{ if } t \in (0, T], \\ \\ f(t) & \mbox{ if } t \in (T, T']. \end{array} \right. \end{equation} and set \begin{equation}\label{eq1.11A}
(z_{|(T, T')}, f) := (z, f_0). \end{equation} One can see that, if $f \in Y_{T'}$, \begin{equation}\label{eq1.11}
(z, \chi_{(T, T')} f) = (z_{|(T, T')}, f_{|(T, T')}). \end{equation} We can associate with any element $v \in [H^\beta(T, T'; V_2) \cap L^2(T, T'; V_1)]'$ an element $v(\cdot + T) \in Y_{T'-T}'$ in the following way: \begin{equation}\label{eq1.12} (v(\cdot + T), f):= (v, f(\cdot - T)), \quad f \in Y_{T'-T}. \end{equation} Clearly, $v \to v(\cdot + T)$ is an isometry between $[H^\beta(T, T'; V_2) \cap L^2(T, T'; V_1)]'$ and $Y_{T'-T}'$. } \end{remark}
\begin{lemma}\label{le3.14}
Let $0 < T < T'$. Then the map $z \to (z_{|(0, T)}, z_{|(T, T')})$ is a bicontinuous bijection between $Y_{T'}'$ and $Y_T' \times [H^\beta(T, T'; V_2) \cap L^2(T, T'; V_1)]'$. \end{lemma}
\begin{proof} Let $z_0 \in Y_T' $ and let $z_1 \in [H^\beta(T, T'; V_2) \cap L^2(T, T'; V_1)]'$. Then, if $z \in Y_{T'}'$, $z_{|(0, T)} = z_0$,
$ z_{|(T, T')} = z_1$, by (\ref{eq1.7}) and (\ref{eq1.11}) we have \begin{equation}\label{eq1.13}
(z, f) = (z, \chi_{(0, T)} f) + (z, \chi_{(T, T')} f) = (z_0, f_{|(0, T)}) + (z_1, f_{|(T, T')}), \quad \forall \, f \in Y_{T'}. \end{equation}
On the other hand, the right-hand side of (\ref{eq1.13}) defines an element $z$ of $Y_{T'}'$ such that $z_{|(0, T)} = z_0$, $ z_{|(T, T')} = z_1$. The bicontinuity follows from Lemma \ref{le1.3} (I)-(II). \end{proof}
Let us go back now to parabolic problems. We have
\begin{proposition}\label{pr1.1}
Assume (H1)-(H2) and let $S_T$ be the operator defined in the statement of Corollary \ref{co1.2}. Then there hold
(I) the restriction of $S_T$ to $H^{1/4,1/2}(Q_T)' \times H^{1/2}(\Omega) \times L^2(\Sigma_T)$ is injective;
(II) if $u = S_T(f,u_0,g)$, with $(f,u_0,g) \in H^{1/4,1/2}(Q_T)' \times H^{1/2}(\Omega) \times L^2(\Sigma_T)$, then
$$
f = D_t u - Au
$$
in the sense of distributions and
$$u_0 = u(0,\cdot).$$
\end{proposition}
\begin{proof} (I) Let $(f,u_0,g) \in H^{1/4,1/2}(Q_T)' \times H^{1/2}(\Omega) \times L^2(\Sigma_T)$ be such that $S_T(f,u_0,g) = 0$, that is,
$$
- (f, v) - \int_\Omega u_0(x) \overline{v(0, x)} dx + \int_{\Sigma_T} g(t,x') \overline{t(x') v(t,x')} dt d\sigma = 0
$$
$$\forall v \in H_T:=\left\{v \in H^{1,2}(Q_T): Cv = 0, v(T,\cdot) = 0\right\}. $$
Taking $v \in \mathcal D(Q_T)$, we obtain $(f,v) = 0$. As $\mathcal D(Q_T)$ is dense in $H^{1/4,1/2}(Q_T)$ (cf. Theorem \ref{th1.1} (I)),
we obtain $f = 0$. Next, let us fix $v_0 \in {\mathcal{D}}(\Omega)$ and take
$$
v(t,x):= \zeta(t) v_0(x),
$$
with $\zeta \in C^\infty([0, T])$, such that $\zeta(0) = 1$ and $\zeta(T) = 0$. We deduce
$$
\int_\Omega u_0(x) \overline{v_0(x)} dx = 0, \quad \forall \, v_0 \in {\mathcal{D}}(\Omega),
$$
implying $u_0 = 0$. Hence we conclude that
$$\int_{\Sigma_T} g(t,x') \overline{t(x') v(t,x')} dt d\sigma = 0, \quad \forall \,v \in H_T,$$
which implies, as $t(x') \neq 0$, $\forall \, x' \in \Gamma$, that $g = 0$.
(II) Taking $v \in {\mathcal{D}}(Q_T)$ we have
$$
\int_{Q_T} u(t,x) [\overline{D_tv(t,x) + A^*v(t,x)}] dt dx = - (f, v),
$$ and then $D_tu - Au = f$ (we recall that $H^{1/4,1/2}(Q_T)'$ is a space of distributions). Next, we consider a sequence $(f_k, u^k_0, g_k)$ in $L^2(Q_T) \times H^1(\Omega) \times H^{1/4,1/2}(\Sigma_T)$, converging to $(f, u_0, g)$ in $H^{1/4,1/2}(Q_T)' \times H^{1/2}(\Omega) \times L^2(\Sigma_T)$ (we employ Lemma \ref{le1.2} to show the existence of such a sequence). Setting $u_k:= S_T(f_k, u^k_0, g_k)$, the sequence $(u_k)_{k\in \mathbb{N}}$ converges to $u$ in $H^{3/4,3/2}(Q_T)$, so that, by Theorem \ref{th1.1} (III), $u^k_0 = u_k(0,\cdot)$ converges to $u(0,\cdot)$ in $H^{1/2}(\Omega)$, and finally we get $u_0 = u(0, \cdot)$.
\end{proof}
\begin{remark}\label{re1.3}
{\rm On account of the Hahn-Banach theorem, it is possible to identify the elements of $H^{1/4,1/2}(Q_T)'$
with the distributions $z$ in $Q_T$, which can be represented (in a non-unique way) in the form
$$
z = z_0 + z_1,
$$ with $z_0 \in H^{1/4,0}(Q_T)' = H^{-1/4}(0, T; L^2(\Omega))$ and $z_1 \in L^2(0, T; H^{-1/2}(\Omega))$ (see \cite{BeLo1}, Theorem 2.7.1). Observe that if $u \in H^{3/4,3/2}(Q_T)$, then $D_tu \in H^{-1/4}((0, T); L^2(\Omega))$ (see Proposition 12.1 in \cite{LiMa1}, Ch. I, extended to the vector valued case, and also \cite{Am1}, Theorem 4.4.2). However, since in general $z \in H^{3/2}(\Omega)$ does not imply $Az \in H^{-1/2}(\Omega)$, we cannot infer that $Au \in H^{1/4,1/2}(Q_T)'$. On the other hand, we deduce by difference from Proposition \ref{pr1.1} (II) that, if $u = S_T(f,u_0,g)$, with $(f,u_0,g) \in H^{1/4,1/2}(Q_T)' \times H^{1/2}(\Omega) \times L^2(\Sigma_T)$, then necessarily $Au \in H^{1/4,1/2}(Q_T)'$.
We observe also that, if $u \in H^{3/4,3/2}(Q_T)$, then $u_{|\Sigma_T} \in H^{1/2,1}(\Sigma_T) \hookrightarrow L^p(0, T; L^2(\Gamma))$, $\forall p \in [1, \infty)$ (see \cite{Si1}, Lemma 8).} \end{remark}
We introduce now the following functional space \begin{definition}\label{de3.1}
Assume (H1)-(H2). We indicate with $X_T$ the range of $S_T$, restricted to $H^{1/4,1/2}(Q_T)' \times H^{1/2}(\Omega) \times L^2(\Sigma_T)$.
If $u \in X_T$, then $Au \in H^{1/4,1/2}(Q_T)'$ (cf. Remark \ref{re1.3}). Moreover, by Proposition \ref{pr1.1} (I), the element
$Bu:= g \in L^2(\Sigma_T)$ is uniquely determined. Hence, for a fixed $p \in (2, \infty)$, we can introduce in $X_T$ the norm \begin{equation}\label{eq1.30}
\|u\|_{X_T} := \|u\|_{H^{3/4,3/2}(Q_T)} + \|u_{|\Sigma_T}\|_{L^p(0, T; L^2(\Gamma))} + \|Au\|_{H^{1/4,1/2}(Q_T)'} + \|Bu\|_{L^2(\Sigma_T)}.
\end{equation}
\end{definition}
\begin{lemma}
Assume (H1)-(H2). Then the following propositions hold.
(I) Let $u \in H^{3/4,3/2}(Q_T)$. Then $u \in X_T$ if and only if $Au \in H^{1/4,1/2}(Q_T)'$ and there exists a sequence
$(u_k)_{k \in \mathbb{N}}$ in $H^{1,2}(Q_T)$, such that
\begin{equation}\label{eq1.31}
\|u_k - u\|_{H^{3/4,3/2}(Q_T)}^2 + \|Au_k - Au\|_{H^{1/4,1/2}(Q_T)'}^2 \to 0 \quad (k \to \infty)
\end{equation}
with $(Bu_k)_{k \in \mathbb{N}}$ converging in $L^2(\Sigma_T)$.
(II) $X_T$ is a Banach space.
\end{lemma}
\begin{proof} (I) Let $u = S_T(f,u_0,g) \in X_T$ and take $(f_k, u^k_0, g_k) \in L^2(Q_T) \times H^1(\Omega) \times H^{1/4,1/2}(\Sigma_T)$ such that
$$
\|f_k - f\|_{H^{1/4,1/2}(Q_T)'}^2 + \|u^k_0 - u_0\|_{H^{1/2}(\Omega)}^2 + \|g_k - g\|_{L^2(\Sigma_T)}^2 \to 0 \quad (k \to \infty).
$$
Setting $u_k:= S_T(f_k, u^k_0, g_k)$, then $u_k \in H^{1,2}(Q_T)$ and $\|u_k - u\|_{H^{3/4,3/2}(Q_T)}^2 \to 0$ $(k \to \infty)$ from which
$$\|D_tu_k - D_tu\|_{H^{-1/4}((0, T); L^2(\Omega))} \to 0 \quad (k \to \infty).$$
By difference
$$
Au_k = -f_k + D_tu_k \to -f + D_tu = Au \quad (k \to \infty)
$$
in $H^{1/4,1/2}(Q_T)'$. Moreover, $Bu_k = g_k \to g$ in $L^2(\Sigma_T)$.
On the other hand, let $u \in H^{3/4,3/2}(Q_T)$ be such that $Au \in H^{1/4,1/2}(Q_T)'$ and assume that there exists a sequence
$(u_k)_{k \in \mathbb{N}}$ in $H^{1,2}(Q_T)$ satisfying (\ref{eq1.31}) with $\|Bu_k - g\|_{L^2(\Sigma_T)} \to 0$ ($k \to \infty$), for some $g \in L^2(\Sigma_T)$.
Consider now $v \in H^{1,2}(Q_T)$, with $Cv = 0$ and $v(T,\cdot) = 0$. Then, $\forall \, k \in \mathbb{N}$,
\begin{equation}\label{eq1.32}
\begin{array}{c}
\int_{Q_T} u_k(t,x) [\overline{D_tv(t,x) + A^*v(t,x)}] dt dx = \\ \\ = - \int_{Q_T} (D_tu_k(t,x) - Au_k(t,x)) \overline{v(t,x)} dt dx - \int_\Omega u_k(0,x) \overline{v(0,x)} dx \\ \\ + \int_{\Sigma_T} Bu_k(t,x') \overline{Tv(t,x')} dt d\sigma. \end{array} \end{equation} From (\ref{eq1.31}) we have that $$
\|D_tu_k - D_tu\|_{H^{-1/4}(0, T; L^2(\Omega))} + \|u_k(0,\cdot) - u(0,\cdot)\|_{H^{1/2}(\Omega)} \to 0 \quad (k \to \infty). $$ So we can pass to the limit in (\ref{eq1.32}) for $k \to \infty$, obtaining $$
\begin{array}{c}
\int_{Q_T} u(t,x) [\overline{D_tv(t,x) + A^*v(t,x)}] dt dx = \\ \\ = - (D_tu - Au, v) - \int_\Omega u(0,x) \overline{v(0,x)} dx + \int_{\Sigma_T} g(t,x') \overline{Tv(t,x')} dt d\sigma. \end{array} $$ We can conclude that $u = S_T(D_tu- Au, u(0,\cdot), g)$.
(II) We prove only the completeness. Let $(u_k)_{k \in \mathbb{N}}$ be a Cauchy sequence in $X_T$. Then there exists $u$ in $H^{3/4,3/2}(Q_T)$ such that $$
\|u_k - u\|_{H^{3/4,3/2}(Q_T)} \to 0, \quad (k \to \infty). $$ It follows that $$ \begin{array}{c}
\|u_{k|\Sigma_T} - u_{|\Sigma_T}\|_{L^p(0, T; L^2(\Gamma))} + \|D_tu_k - D_tu\|_{H^{-1/4}(0, T; L^2(\Omega))} \\ \\
+ \|u_k(0,\cdot) - u(0, \cdot)\|_{H^{1/2}(\Omega)} \to 0 \quad (k \to \infty), \end{array} $$ and $Au_k \to Au$ in ${\mathcal{D}}'(Q_T)$. As $(Au_k)_{k \in \mathbb{N}}$ is a Cauchy sequence in $H^{1/4,1/2}(Q_T)'$, we deduce that $Au \in H^{1/4,1/2}(Q_T)'$ and $$
\|Au_k - Au\|_{H^{1/4,1/2}(Q_T)'} \to 0 \quad (k \to \infty). $$ Let $u_k = S_T(f_k,u^k_0, g_k)$. Then by Proposition \ref{pr1.1} (II) we infer $f_k = D_tu_k - Au_k \to D_tu - Au \in H^{1/4,1/2}(Q_T)'$ and $u_0^k = u_k(0,\cdot)$. Moreover, there exists $g \in L^2(\Sigma_T)$ such that
$$\|Bu_k - g\|_{L^2(\Sigma_T)} = \|g_k - g\|_{L^2(\Sigma_T)}\to 0 \quad (k \to \infty). $$
Let $v \in H^{1,2}(Q_T)$, with $Cv = 0$ and $v(T,\cdot) = 0$. Then, $\forall \, k \in \mathbb{N}$, $$ \begin{array}{c} \int_{Q_T} u_k(t,x) [\overline{D_tv(t,x) + A^*v(t,x)}] dt dx = \\ \\
- (f_k, v) - \int_\Omega u_0^k(x) \overline{v(0,x)} dx + \int_{\Sigma_T} g_k(t,x') \overline{Tv(t,x')} dt d\sigma.
\end{array}
$$
Passing to the limit, for $k \to \infty$, we obtain
$$ \begin{array}{c} \int_{Q_T} u(t,x) [\overline{D_tv(t,x) + A^*v(t,x)}] dt dx = \\ \\
- (D_tu - Au, v) - \int_\Omega u(0,x) \overline{v(0,x)} dx + \int_{\Sigma_T} g(t,x') \overline{Tv(t,x')} dt d\sigma,
\end{array}
$$
which implies $u = S_T(D_tu - Au, u(0,\cdot), g)$ and
$$
\begin{array}{c}
\|u - u_k\|_{X_T}
= \|u - u_k\|_{H^{3/4,3/2}(Q_T)} + \|u_{k|\Sigma_T} - u_{|\Sigma_T}\|_{L^p(0, T; L^2(\Gamma))} \\ \\
+ \|Au - Au_k\|_{H^{1/4,1/2}(Q_T)'} + \|g - Bu_k\|_{L^2(\Sigma_T)} \to 0, \quad (k \to \infty).
\end{array}
$$
\end{proof}
\begin{remark}
{\rm We have already observed that $H^{1/4,1/2}(Q_{T})'$ is a space of distributions in $Q_T$. Let $0 < T < T'$ and $f \in H^{1/4,1/2}(Q_{T'})'$. We consider $f_{|(0, T)}$ and $f_{|(T, T')}$ defined in (\ref{eq1.6}) and (\ref{eq1.11A}). They can be identified with the restrictions of $f$
(in the sense of distributions) to $Q_T$ and $(T, T') \times \Omega$, respectively. We recall again the notation (\ref{eq1.12}). Then $f_{|(0, T)} \in H^{1/4,1/2}(Q_{T})'$ and $f_{|(T, T')}(T + \cdot) \in H^{1/4,1/2}(Q_{T'-T})'$. } \end{remark}
\begin{proposition}\label{pr1.2}
Assume (H1)-(H2), $0 < T < T'$, $f \in H^{1/4,1/2}(Q_{T'})'$, $u_0 \in H^{1/2}(\Omega)$ and $g \in L^2(\Sigma_{T'})$. We set
$$
\begin{array}{ccc}
f_0:= f_{|(0, T)}, & g_0:= g_{|\Sigma_T}, & v_0:= S_T(f_0, u_0, g_0),
\end{array}
$$
$$
\begin{array}{cccc}
f_1:= f_{|(T, T')}(T+\cdot), & u_1:= v_0(T, \cdot), & g_1:= g_{|(T, T') \times \Gamma}(T + \cdot), & v_1:= S_{T'-T}(f_1, u_1, g_1).
\end{array}
$$
Then
$$
S_{T'}(f,u_0,g)(t, \cdot) = \left\{\begin{array}{lll}
v_0(t, \cdot) & \mbox{ if } & t \in [0, T], \\ \\
v_1(t - T, \cdot) & \mbox{ if } & t \in [T, T'].
\end{array}
\right.
$$
\end{proposition}
\begin{proof} The statement holds if $f \in L^2(Q_{T'})$, $u_0 \in H^1(\Omega)$, $g \in H^{1/4,1/2} (\Sigma_{T'})$. It can be extended to the general case by a density and continuity argument. \end{proof}
\begin{remark}\label{re3.21}
{\rm Proposition \ref{pr1.2} and Lemma \ref{le3.14} imply that, if $T, T' \in \mathbb{R}^+$, $v \in X_T$, $w \in X_{T'}$ and $v(T,\cdot) = w(0, \cdot)$, setting
$$
z(t,\cdot):= \left\{\begin{array}{lll}
v(t, \cdot) & \mbox{ if } & t \in [0, T], \\ \\
w(t-T,\cdot) & \mbox{ if } & t \in [T, T+T'],
\end{array}
\right.
$$
then $z \in X_{T+T'}$.
}
\end{remark}
\begin{proposition} \label{pr1.3}
Assume (H1)-(H2) and let $T'>0$. Then there exists $C(T')>0$, such that, $\forall \, T \in (0, T']$, $\forall \, f \in H^{1/4,1/2}(Q_{T})'$,
$\forall \, u_0 \in H^{1/2}(\Omega)$, $\forall \, g \in L^2(\Sigma_{T})$, it holds
$$
\|S_T(f,u_0, g)\|_{X_T} \leq C(T')\left(\|f\|_{H^{1/4,1/2}(Q_{T})'} + \|u_0\|_{H^{1/2}(\Omega)} + \|g\|_{L^2(\Sigma_{T})}\right).
$$
\end{proposition}
\begin{proof} Let $f \in H^{1/4,1/2}(Q_{T})'$, $u_0 \in H^{1/2}(\Omega)$, $g \in L^2(\Sigma_{T})$. We consider $\widetilde f \in H^{1/4,1/2}(Q_{T'})'$ defined in (\ref{eq1.7A}) (with $f$ in place of $z$) and the trivial extension $\widetilde g$ of $g$ to $\Sigma_{T'}$. Observe that, by Proposition \ref{pr1.2}, $S_T(f,u_0,g)$ coincides with the restriction of $S_{T'}(\widetilde f, u_0, \widetilde g)$ to $Q_T$, so that
$$AS_T(f,u_0,g) = AS_{T'}(\tilde f, u_0, \tilde g)_{|(0, T)}.$$ Hence, by Lemma \ref{le1.3} (I)-(II), we deduce
$$
\begin{array}{c}
\|S_T(f,u_0, g)\|_{X_T} = \|S_T(f,u_0, g)\|_{H^{3/4,3/2}(Q_T)} + \|S_T(f,u_0, g)_{|\Sigma_T}\|_{L^{p}(0, T; L^2(\Gamma))} \\ \\
+ \|A S_T(f,u_0, g)\|_{H^{1/4,1/2}(Q_T)'} + \|g\|_{L^2(\Sigma_T)} \\ \\
\leq \|S_{T'}(\widetilde f,u_0, \widetilde g)\|_{H^{3/4,3/2}(Q_{T'})} + \|S_{T'}(\widetilde f,u_0, \widetilde g)_{|\Sigma_{T'}}\|_{L^{p}(0, T'; L^2(\Gamma))} \\ \\
+ C_1 \|A S_{T'}(\widetilde f,u_0, \widetilde g)\|_{H^{1/4,1/2}(Q_{T'})'} + \|\widetilde g\|_{L^2(\Sigma_{T'})} \\ \\
\leq C_1(T') \left (\|\widetilde f\|_{H^{1/4,1/2}(Q_{T'})'} + \|u_0\|_{H^{1/2}(\Omega)} + \|\widetilde g\|_{L^2(\Sigma_{T'})} \right) \\ \\
\leq C_1(T') \left(\|f\|_{H^{1/4,1/2}(Q_{T})'} + \|u_0\|_{H^{1/2}(\Omega)} + \|g\|_{L^2(\Sigma_{T})} \right). \end{array}
$$ \end{proof}
\begin{proposition} \label{pr1.4}
Assume (H1)-(H2) and let $T' \in \mathbb{R}^+$. Then there exists $C(T') >0$ such that, $\forall \, T \in (0, T']$, $\forall \, f \in L^2(Q_{T})$,
$\forall \, u_0 \in H^{1}(\Omega)$, $\forall \, g \in H^{1/4,1/2}(\Sigma_{T})$, it holds
$$
\|S_T(f,u_0, g)\|_{H^{1,2}(Q_T)} \leq C(T')\left(\|f\|_{L^2(Q_{T})} + \|u_0\|_{H^{1}(\Omega)} + \|g\|_{H^{1/4,1/2}(\Sigma_T)}\right).
$$
\end{proposition}
\begin{proof} We consider $\widetilde f \in L^2(Q_{T'})$, the trivial extension of $f$ to $Q_{T'}$,
and $\widetilde g$, the trivial extension of $g$ to $\Sigma_{T'}$. Observe that, by Proposition \ref{pr1.2},
$S_T(f,u_0,g)$ coincides with the restriction of $S_{T'}(\widetilde f, u_0, \widetilde g)$ to $Q_T$.
So, by Lemma \ref{le1.1} (II), we have
$$
\begin{array}{c}
\|S_T(f,u_0, g)\|_{H^{1,2}(Q_T)} \leq \|S_{T'}(\widetilde f,u_0, \widetilde g)\|_{H^{1,2}(Q_{T'})} \\ \\
\leq C_1(T') \left(\|\widetilde f\|_{L^{2}(Q_{T'})} + \|u_0\|_{H^{1}(\Omega)} + \|\widetilde g\|_{H^{1/4,1/2}(\Sigma_{T'})}\right) \\ \\
\leq C_1(T') \left(\|f\|_{L^{2}(Q_{T})} + \|u_0\|_{H^{1}(\Omega)} + C\|g\|_{H^{1/4,1/2}(\Sigma_{T})} \right) \\ \\
\leq C_2(T') \left(\|f\|_{L^{2}(Q_{T})} + \|u_0\|_{H^{1}(\Omega)} + \|g\|_{H^{1/4,1/2}(\Sigma_{T})} \right). \end{array}
$$ \end{proof}
\section{Convolution terms}\label{seA4}
\setcounter{equation}{0}
In this section, we examine the convolution terms appearing in system (\ref{eqA2.15}). Let us consider the following abstract situation. Given $h \in L^1(0, T)$ and $f \in L^2(0, T; V)$, with $V$ a Hilbert space, we set \begin{equation} (h \ast f)(t) := \int_0^t h(t-s) f(s) ds. \end{equation} By Young's inequality, $h \ast f \in L^2(0, T; V)$ and $$
\|h\ast f\|_{L^2(0, T; V)} \leq \|h\|_{L^1(0, T)} \|f\|_{L^2(0, T; V)}. $$ The first result is the following
\begin{lemma}\label{le1.6} Let $V$ be a Hilbert space, $\beta \in (0, 1/2)$, $h \in L^1(0, T)$ and $f \in H^\beta(0, T; V)$. Then $h \ast f \in H^\beta(0, T; V)$. Moreover, there exists $C >0$, independent of $T$, $h$, $f$, such that $$
\|h \ast f\|_{H^\beta(0, T; V)} \leq C \|h\|_{ L^1(0, T)} \| f\|_{H^\beta(0, T; V)}. $$ \end{lemma}
\begin{proof} By Cauchy-Schwarz inequality, we get $$
\|(h \ast f)(t)\|_V ^2 \leq \|h\|_{L^1(0, T)} \int_0^t |h(s)| \|f(t-s)\|_V ^2 ds, \quad \forall \, t \in (0,T). $$ Then it follows $$ \begin{array}{c}
\int_0^T t^{-2\beta} \|(h \ast f)(t)\|_V ^2 dt \leq \|h\|_{L^1(0, T)} \int_0^T t^{-2\beta} (\int_0^t |h(s)| \|f(t-s)\|_V ^2 ds) dt \\ \\
= \|h\|_{L^1(0, T)} \int_0^T |h(s)| (\int_s^T t^{-2\beta} \|f(t-s)\|_V ^2 dt) ds \\ \\
\leq \|h\|_{L^1(0, T)}^2 \int_0^T t^{-2\beta} \|f(t)\|_V ^2 dt. \end{array} $$ Moreover, if $0 < s < t < T$, $$ \begin{array}{c}
\|(h \ast f)(t) - ( h \ast f)(s)\|_V \leq \int_0^s |h(\tau)| \|f(t-\tau) - f(s-\tau)\|_V d\tau + \int_s^t |h(\tau)| \|f(t-\tau)\|_V d\tau \\ \\
\leq \|h\|_{L^1(0, T)}^{1/2} [(\int_0^s |h(\tau)| \|f(t-\tau) - f(s-\tau)\|_V^2 d\tau)^{1/2} + (\int_s^t |h(\tau)| \|f(t-\tau)\|_V^2 d\tau)^{1/2} ], \end{array} $$ so that $$ \begin{array}{c}
\int_0^T (\int_0^t \frac{\|(h \ast f)(t) - ( h \ast f)(s)\|_V^2 }{(t-s)^{1+2\beta}} ds ) dt \\ \\
\leq 2 \|h\|_{L^1(0, T)} [ \int_0^T (\int_0^t ( \int_0^s |h(\tau)| \|f(t-\tau) - f(s-\tau)\|_V^2 d\tau) (t-s)^{-1-2\beta} ds ) dt \\ \\
+ \int_0^T (\int_0^t (\int_s^t |h(\tau)| \|f(t-\tau)\|_V^2 d\tau) (t-s)^{-1-2\beta} ds ) dt ]. \end{array} $$ We have $$ \begin{array}{c}
\int_0^T (\int_0^t ( \int_0^s |h(\tau)| \|f(t-\tau) - f(s-\tau)\|_V^2 d\tau) (t-s)^{-1-2\beta} ds ) dt \\ \\
= \int_0^T |h(\tau)| (\int_\tau^T (\int_\tau^t (t-s)^{-1-2\beta} \|f(t-\tau) - f(s-\tau)\|_V^2 ds ) dt) d\tau \\ \\
\leq \|h\|_{L^1(0, T)} \int_0^T (\int_0^t \frac{\|f(t) - f(s)\|_V^2}{(t-s)^{1+2\beta}} ds) dt. \end{array} $$ Finally, we deduce $$ \begin{array}{c}
\int_0^T (\int_0^t (\int_s^t |h(\tau)| \|f(t-\tau)\|_V^2 d\tau) (t-s)^{-1-2\beta} ds ) dt \\ \\
= \int_0^T (\int_0^t |h(\tau)| \|f(t-\tau)\|_V^2 (\int_0^\tau (t-s)^{-1-2\beta} ds ) d\tau ) dt \\ \\
\leq \frac{1}{2\beta} \int_0^T (\int_0^t |h(\tau)| (t-\tau)^{-2\beta} \|f(t-\tau)\|_V^2 d\tau ) dt \\ \\
\leq \frac{1}{2\beta} \|h\|_{L^1(0, T)} \int_0^T t^{-2\beta} \|f(t)\|_V^2 dt.
\end{array} $$ The conclusion follows. \end{proof}
Now, having in mind the case $h \in L^1(0, T)$ and $f \in H^{1/4,1/2}(Q_T)'$, we want to define the convolution $h \ast z$ when $h \in L^1(0, T)$ and $z \in Y_T'$ (cf. Definition \ref{alfa}).
\begin{lemma}\label{le1.7} There exists a unique bilinear and continuous mapping $(h,z) \to h \ast z$ from $L^1(0, T) \times Y_T'$ to $Y_T'$ extending the convolution mapping $(h,g) \to h \ast g$ from $L^1(0, T) \times L^2(0, T; V_2)$ to $L^2(0, T; V_2)$. Moreover, we can find a positive constant $C$, independent of $T$, $h$, $z$, such that $$
\|h \ast z\|_{Y_T'} \leq C \|h\|_{L^1(0, T)} \|z\|_{Y_T'}. $$ \end{lemma} \begin{proof} The uniqueness of the extension of the convolution follows from Lemma \ref{le1.2}. If $h \in L^1(0, T)$, $g \in L^2(0, T; V_2)$ and $f \in Y_T$, we have $$ \begin{array}{c} (h \ast g, f) = \int_0^T ((h \ast g)(t), f(t))_{V_2} dt = \int_0^T h(s) (\int_0^{T-s} (g(\tau), f(\tau +s))_{V_2} d\tau) ds \\ \\ = \int_0^T h(s) (\int_0^{T} (g(\tau), f_s(\tau))_{V_2} d\tau) ds = \int_0^T h(s) (g, f_s) ds, \end{array} $$ with $f_s$ as in (\ref{eq1.4}). So, if $z \in Y_T'$ and $f \in Y_T$, we define \begin{equation}\label{eq1.8} (h \ast z, f):= \int_0^T h(s) (z, f_s) ds. \end{equation} Observe that (\ref{eq1.8}) is well defined, because, in force of Lemma \ref{le1.1} (III), the mapping $s \to (z, f_s)$ is continuous and bounded. We have also $$
|(h \ast z, f)| \leq \int_0^T |h(s)| |(z, f_s)| ds \leq C \|h\|_{L^1(0, T)} \|z\|_{Y_T'} \|f\|_{Y_T}, $$ with $C$ independent of $T$, $h$, $z$, $f$. The conclusion follows. \end{proof}
\begin{lemma} \label{le1.8}
Let $0 < T < T'$, $h \in L^1(0, T')$, $z \in Y_{T'}'$. We set $z_0:= z_{|(0, T)}$, $z_1:= z_{|(T, T')}$, $h_0:= h_{|(0, T)}$, $h_1:= h_{|(T, T')}$. Then, recalling the notation (\ref{eq1.12}), the following hold
(I) \quad $(h\ast z)_{|(0, T)} = h_0 \ast z_0$;
\vskip0.1truecm (II)\quad
$(h\ast z)_{|(T, T')}(\cdot + T) = (h \ast {\widetilde z}_0)_{|(T, T')}(\cdot + T) + h_{|(0, T'-T)} \ast [z_1(\cdot + T)] $
\vskip0.1truecm
$\quad \quad \quad = ({\widetilde h}_0 \ast z)_{|(T, T')} (\cdot + T) + h_1(\cdot + T) \ast z_{|(0, T'-T)};$
\vskip0.1truecm
(III) if $T' \leq 2T$
\vskip0.1truecm $\quad \quad \quad
(h\ast z)_{|(T, T')}(\cdot + T)
= ({\widetilde h}_0 \ast {\widetilde z}_0 )_{|(T, T')}(\cdot + T) + h_1(\cdot + T) \ast z_{0|(0, T'-T)} + h_{0|(0, T'-T)} \ast [z_1(\cdot + T)]. $ \end{lemma}
\begin{proof} By Lemma \ref{le1.2}, it suffices to consider $z \in L^2(0, T'; V_2)$. In this case, (I) is obvious. Concerning (II), we have, for $t \in (0, T' - T)$, $$ \begin{array}{c} (h \ast z)(T+t) = \int_0^{T+t} h(T+t-s) z(s) ds \\ \\ = \int_0^{T+t} h(T+t-s) {\widetilde z}_0(s) ds + \int_0^{t} h(t-s) z_1(s+T) ds \end{array} $$ and the first identity in (II) is proved. The second identity follows by inverting the roles of $h$ and $z$, as $$ (h \ast z)(T+t) = \int_0^{T+t} h(s) z(T+t-s) ds. $$ We have also $$ \int_0^{T+t} h(T+t-s){\widetilde z}_0(s) ds = \int_0^{T+t}{\widetilde h}_0(T+t-s){\widetilde z}_0(s) ds + \int_0^t h_1(T+t-s){\widetilde z}_0(s)ds, $$ which implies (III) and completes the proof. \end{proof}
\begin{remark}\label{re4.4} {\rm If we think of the final formula in (III) as a function of $(z_1,h_1)$, we see that it is affine, in spite of the fact that the convolution, as a function of $(h,z)$, does not enjoy this property. This will be crucial in proving global existence of the solution to Problem \ref{P3} (see Section \ref{se6}).} \end{remark}
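Explicitly, keeping the restrictions $h_0$ and $z_0$ to $(0, T)$ fixed, the formula in (III) reads
$$
(h\ast z)_{|(T, T')}(\cdot + T) = F_0 + h_1(\cdot + T) \ast z_{0|(0, T'-T)} + h_{0|(0, T'-T)} \ast [z_1(\cdot + T)],
$$
where $F_0 := ({\widetilde h}_0 \ast {\widetilde z}_0 )_{|(T, T')}(\cdot + T)$ does not depend on $(z_1, h_1)$, while the remaining two terms are linear in $h_1$ and in $z_1$ respectively.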
\section{The memory term }\label{seA5}
\setcounter{equation}{0}
In this section we study some properties of the term $\Psi(v)$ defined in (\ref{eqA2.12}). We recall that the (usually nonlinear) operator ${\cal W}$ fulfills the conditions (C1)-(C3). Let $v \in H^{3/4,3/2}(Q_T)$. We have $$ {\cal M}(u_0 + 1 \ast v)(t) = {\cal M} u_0 + (1 \ast {\cal M} v)(t), \quad t \in (0, T). $$
On account of (H8), observe that the function $\int_\Omega \omega_1(x) v(\cdot,x) dx \in H^{3/4}(0, T)$. Moreover, as $v_{|\Sigma_T} \in H^{1/2,1}(\Sigma_T)$, then $\int_\Gamma \omega_2(y) v(\cdot,y) d\sigma \in H^{1/2}(0, T)$. So, ${\cal M}(u_0 + 1 \ast v) \in H^{3/2}(0, T) \hookrightarrow C([0, T]) \cap BV([0, T])$ and $\Psi(v)$ is well defined. More generally, taking $v \in H^{3/4,3/2}(Q_\tau)$ with $0 < \tau \leq T$, we define \begin{equation} \Psi(v)(t,y):= D_t[{\cal F} ({\cal W} ({\cal M}(u_0(x) + 1 \ast v(t,x)))](t,y), \quad (t,y) \in \Sigma_\tau. \end{equation}
We observe that, if $v_1,v_2 \in H^{3/4,3/2}(Q_\tau)$ and $v_{1|Q_{\tau_1}} = v_{2|Q_{\tau_1}}$ for some $\tau_1 \in [0, \tau]$, then $$ \Psi(v_1)(t,y) = \Psi(v_2)(t,y) \quad \mbox{ in }Ê\Sigma_{\tau_1}. $$ Further, recalling definition (\ref{eqA1.11}), we have \begin{equation} \Psi(v)(t,y) = D_t(E_1 \ast r_v)(t) u_A(t,y) + (E_1 \ast r_v)(t) D_tu_A(t,y) + D_tE_0(t,y), \quad (t,y) \in \Sigma_\tau, \end{equation} with \begin{equation} r_v(t):= {\cal W} ({\cal M} u_0 + 1 \ast {\cal M} v)(t). \end{equation} The following result holds \begin{lemma}\label{le2.1} Let $v_1, v_2 \in H^{3/4,3/2}(Q_\tau)$, with $0 < \tau \leq T$. Then $$
\|\Psi(v_1) - \Psi(v_2)\|_{L^2(\Sigma_\tau)} \leq C(T) \tau^{1/2} \|v_1 - v_2\|_{H^{3/4,3/2} (Q_\tau)}. $$ \end{lemma}
\begin{proof} First, we estimate $\| D_t[E_1 \ast (r_{v_1} - r_{v_2})]\|_{C([0, \tau])}$. Employing Lemma \ref{le1.1A}, we have $$ \begin{array}{c}
\|D_t[E_1 \ast (r_{v_1} - r_{v_2})]\|_{C([0, \tau])} \leq C_1(T) \|r_{v_1} - r_{v_2}\|_{C([0, \tau])} \\ \\
\leq C_1(T) L \|1 \ast {\cal M} (v_1 - v_2)\|_{C([0, \tau])} \leq C_1(T) L \tau^{1/2} \| {\cal M} (v_1 - v_2)\|_{L^2(0, \tau)} \\ \\
\leq C_1(T) L \tau^{1/2} \left(\|\omega_1\|_{L^2(\Omega)} \|v_1 - v_2\|_{L^2(Q_\tau)}
+ \|\omega_2\|_{L^2(\Gamma)} \|(v_1 - v_2)_{|\Sigma_\tau}\|_{L^2(\Sigma_\tau)}\right) \\ \\
\leq C_2(T) L \tau^{1/2} \left(\|\omega_1\|_{L^2(\Omega)} \tau^{1/2} \|v_1 - v_2\|_{H^{3/4}((0, \tau); L^2(\Omega))}
+ \|\omega_2\|_{L^2(\Gamma)} \|v_1 - v_2\|_{L^2((0, \tau); H^{3/2}(\Omega))}\right) \\ \\
\leq C_3(T) \tau^{1/2} \|v_1 - v_2\|_{H^{3/4,3/2} (Q_\tau)}. \end{array} $$ It follows $$ \begin{array}{c}
\|E_1 \ast (r_{v_1} - r_{v_2})\|_{C([0, \tau])} = \|1 \ast D_t[E_1 \ast (r_{v_1} - r_{v_2})]\|_{C([0, \tau])} \\ \\
\leq \tau \|D_t[E_1 \ast (r_{v_1} - r_{v_2})]\|_{C([0, \tau])} \leq C_3(T) \tau^{3/2} \|v_1 - v_2\|_{H^{3/4,3/2} (Q_\tau)}. \end{array} $$ So we have $$ \begin{array}{c}
\|\Psi(v_1) - \Psi(v_2)\|_{L^2(\Sigma_\tau)} \leq
\|[D_t(E_1 \ast (r_{v_1} - r_{v_2}))](t) u_A(t,y)\| _{L^2(\Sigma_\tau)} \\ \\
+ \|[E_1 \ast (r_{v_1} - r_{v_2})] (t) D_tu_A(t,y) \| _{L^2(\Sigma_\tau)} \\ \\
\leq \|[D_t(E_1 \ast (r_{v_1} - r_{v_2}))]\|_{C([0, \tau])} \|u_A\| _{L^2(\Sigma_\tau)}
+ \|[E_1 \ast (r_{v_1} - r_{v_2})]\|_{C([0, \tau])} \|D_t u_A\| _{L^2(\Sigma_\tau)} \\ \\
\leq C_4(T) \tau^{1/2} \|v_1 - v_2\|_{H^{3/4,3/2} (Q_\tau)}. \end{array} $$ \end{proof} Let us consider now $\delta >0$ such that $0 < \tau < \tau + \delta \leq T$. We fix $w \in X_\tau$ (recall definition (\ref{de3.1})). If $v \in X_\delta$ and $v(0, \cdot) = w(\tau, \cdot)$, we set \begin{equation}\label{eq2.7} V(t,\cdot) = \left\{\begin{array}{ll} w(t, \cdot) & \mbox{ if } t \in [0, \tau], \\ \\ v(t-\tau, \cdot) & \mbox{ if } t \in [\tau, \tau + \delta] \end{array} \right. \end{equation} and define \begin{equation} \Psi(w,v)(t,y):= \Psi(V)(\tau+t,y), \quad t \in (0, \delta), \, y \in \Gamma. \end{equation} On account of Remark \ref{re3.21}, we observe that $V \in X_{\tau +\delta}$. Moreover it holds
\begin{corollary}\label{co2.1} Let $v_1, v_2 \in X_\delta$ be such that $v_1(0, \cdot) = v_2(0,\cdot) = w(\tau, \cdot)$. Then $$
\|\Psi(w,v_1) - \Psi(w,v_2)\|_{L^2(\Sigma_\delta)}Ê\leq C\delta^{1/2}Ê\|v_1 - v_2\|_{H^{3/4,3/2}(Q_\delta)}, $$ with $C$ independent of $\tau$ in $[0, T)$, $\delta \in (0, T-\tau]$, $w, v_1, v_2$. \end{corollary}
\begin{proof} Indicating by $V_j$ the function obtained replacing $v$ with $v_j$ ($j \in \{1,2\}$) in (\ref{eq2.7}), we have $$ {\cal M}(u_0 + 1\ast V_j)(t) = \left\{\begin{array}{ll} {\cal M} u_0 + 1 \ast {\cal M}(w)(t) & \mbox{ if } t \in [0, \tau], \\ \\ {\cal M} u_0 + 1 \ast {\cal M}(w)(\tau) + 1 \ast {\cal M}(v_j)(t-\tau) & \mbox{ if } t \in [\tau, \tau + \delta]. \end{array} \right. $$ So $$
\|{\cal W}({\cal M}(u_0 + 1\ast V_1)) - {\cal W}({\cal M}(u_0 + 1\ast V_2))\|_{C([0, \tau+\delta])} \leq L \|1 \ast {\cal M}(v_1- v_2)\|_{C([0, \delta])}, $$ and the conclusion follows as in the proof of Lemma \ref{le2.1}. \end{proof}
\section{Weak solutions to Problem \ref{P3}}\label{se6}
\setcounter{equation}{0}
In this section we begin to study the inverse problem reformulated as Problem \ref{P3}. Here we shall limit ourselves to considering weak solutions, in the sense that we shall not search for a solution $(v,h)\in H^{1,2}(Q_T) \times L^2(0, T)$, but rather a solution $(v,h)\in X_T \times L^2(0, T)$, with $X_T$ as in Definition \ref{de3.1}. So, we are going to consider system (\ref{eqA2.6}), with the following (generalized) assumptions:
{\it (K1) (H1)-(H2) are fulfilled;
(K2) $v^* \in H^{1/4,1/2}(Q_T)'$;
(K3) $\psi_1, z_0 \in L^2(\Omega)$;
(K4) $v_0 \in H^{1/2}(\Omega)$;
(K5) $z_1 \in L^2(\Gamma)$, $v^*_\Gamma \in L^2(\Sigma_T)$;
(K6) $h^* \in L^2((0, T))$.
}
We introduce the auxiliary function $V_0$, the solution in $X_T$ of \begin{equation}\label{eq3.5} \left\{\begin{array}{l} D_tV_0(t,x) = AV_0(t,x) + v^*(t,x), \quad (t,x) \in Q_T, \\ \\ V_0(0,x) = v_0(x), \quad x \in \Omega, \\ \\ BV_0(t,y) = \Psi(0)(t,y) + v^*_\Gamma (t,y), \quad (t,y) \in \Sigma_T. \end{array} \right. \end{equation}
\begin{remark} {\rm Of course, $V_0$ is the solution of (\ref{eq3.5}) in the sense of Theorem \ref{th1.4}. Moreover, we shall consider the restrictions of $v^*$ to $(0, \tau)$, with $0 < \tau \leq T$ (see (\ref{eq1.6})) usually writing $v^*$ instead of
$v^*_{|(0, \tau)}$. We shall follow the same convention for other restrictions, if this is not likely to produce confusion.} \end{remark}
We begin with a result of local existence.
\begin{lemma}\label{le3.1} Assume (K1)-(K6). Then, $\forall \, r >0$, there exists $\tau(r) \in (0, T]$, such that, if $0 < \tau \leq \tau(r)$, Problem (\ref{eqA2.6}) admits a unique solution $(v,h) \in X_\tau \times L^2(0, \tau)$ and satisfying $$
\|v-V_0\|_{X_\tau} + \|h - h^*\|_{L^2(0, \tau)} \leq r. $$ \end{lemma}
\begin{proof} We start by observing that $Z_\tau:= X_\tau \times L^2(0, \tau)$ is a Banach space with the norm \begin{equation}
\|(z,h)\|_{Z_\tau} := \|z\|_{X_\tau} + \|h\|_{L^2(0, \tau)}. \end{equation} We set \begin{equation}
Z_\tau(r):= \{(v,h) \in Z_\tau : \|v - V_0\|_{X_\tau} + \|h - h^*\|_{L^2(0, \tau)} \leq r \}, \end{equation} which is a closed subset of $Z_\tau$. Let $(V,H) \in Z_\tau$. We consider the element $(v,h)$ in $Z_\tau$, such that $$ \left\{\begin{array}{l} D_tv(t,x) = Av(t,x) + H \ast AV(t,x) + v^*(t,x) \\ \\
- [(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))] z_0(x), \quad (t,x) \in Q_\tau, \\ \\ v(0,x) = v_0(x), \quad x \in \Omega \\ \\ Bv(t,y) = - V(t,y) + \Psi(V)(t,y) - H \ast BV(t,y) + [(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))] z_1(y) \\ \\ + v^*_\Gamma (t,y), \quad (t,y) \in \Sigma_\tau \\ \\ h(t) = h^*(t) - (\psi_1,V(t,\cdot)) - H \ast (\psi_1, V(t,\cdot))(t), \quad t \in (0, \tau), \end{array} \right. $$ and the map $P(V,H):=(v,h)$. It is clear that $(v,h)$ is a solution of $(\ref{eqA2.6})$ if and only if it is a fixed point of $P$. Let $r >0$. We show that, if $\tau$ is sufficiently small, $P$ maps $Z_\tau(r)$ into itself. Let $(V,H) \in Z_\tau(r)$. Then $(v-V_0, h - h^*)$ satisfies $$ \left\{\begin{array}{l} D_t(v - V_0)(t,x) = A(v - V_0)(t,x) + H \ast AV(t,x) \\ \\
- [(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))] z_0(x), \quad (t,x) \in Q_\tau, \\ \\ (v-V_0)(0,x) = 0, \quad x \in \Omega \\ \\ B(v - V_0)(t,y) = - V(t,y) + \Psi(V)(t,y) - \Psi(0)(t,y) - H \ast BV(t,y) \\ \\ + [(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))] z_1(y), \quad (t,y) \in \Sigma_\tau \\ \\ h(t) - h^*(t) = - (\psi_1,V(t,\cdot)) - H \ast (\psi_1, V(t,\cdot))(t), \quad t \in (0, \tau). \end{array} \right. $$ In the following part of the proof we shall indicate by $C, C_1, C_2, \dots$ some positive constants independent of $\tau$, $r$, $V$ and $H$. By Lemma \ref{le1.1A} we have \begin{equation}\label{eq3.8} \begin{array}{c}
\|(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))\|_{L^2(0, \tau)}
\leq \|\psi_1\|_{L^2(\Omega)} \|V\|_{L^2(Q_\tau)} \left(1 + \|H\|_{L^1(0, \tau)}\right) \\ \\
\leq C_1 \tau^{1/2} \|V\|_{H^{3/4}(0, \tau; L^2(\Omega))} \left(1 + \tau^{1/2}\|H\|_{L^2(0, \tau)}\right) \\ \\
\leq C_1 \tau^{1/2} \left(r + \|V_0\|_{H^{3/4}(0, T; L^2(\Omega))}\right) \left[1 + \tau^{1/2} (r + \|h^*\|_{L^2(0, T)})\right]. \end{array} \end{equation}
Now we apply Proposition \ref{pr1.3} in order to estimate $\|v - V_0\|_{X_\tau}$. We get \begin{equation} \begin{array}{c}
\|v - V_0\|_{X_\tau} \leq C_1\big (\| H \ast AV\|_{H^{1/4,1/2}(Q_\tau)'} + \|[(\psi_1,V(t,\cdot))
+ H \ast (\psi_1, V(t,\cdot))] z_0(x)\|_{H^{1/4,1/2}(Q_\tau)'} \\ \\
+ \|V_{|\Sigma_\tau}\|_{L^2(\Sigma_\tau)} + \|\Psi(V) - \Psi(0)\|_{L^2(\Sigma_\tau)} + \|H \ast BV\|_{L^2(\Sigma_\tau)} \\ \\ + \|[(\psi_1,V(t,\cdot))
+ H \ast (\psi_1, V(t,\cdot))] z_1(y)\|_{L^2(\Sigma_\tau)} \big). \end{array} \end{equation} By Lemmas \ref{le1.7} and \ref{le1.3} (I), we obtain \begin{equation}\label{eq3.10} \begin{array}{c}
\| H \ast AV\|_{H^{1/4,1/2}(Q_\tau)'} \leq C_1 \|H\|_{L^1(0, \tau)} \|AV\|_{H^{1/4,1/2}(Q_\tau)'}
\leq C_1 \tau^{1/2} \|H\|_{L^2(0, \tau)} \|AV\|_{H^{1/4,1/2}(Q_\tau)'} \\ \\
\leq C_1 \tau^{1/2} \left(r + \|h^* \|_{L^2(0, \tau)}\right) \left(r + \|AV_0\|_{H^{1/4,1/2}(Q_\tau)'}\right) \\ \\
\leq C_1 \tau^{1/2} \left(r + \|h^* \|_{L^2(0, \tau)}\right) \left(r + C_2 \|AV_0\|_{H^{1/4,1/2}(Q_T)'}\right). \end{array} \end{equation} From Lemma \ref{le1.3} (III) and (\ref{eq3.8}), it follows \begin{equation} \begin{array}{c}
\|[(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))] z_0(x)\|_{H^{1/4,1/2}(Q_\tau)'} \\ \\
\leq \tau^{1/4} \|[(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))] z_0(x)\|_{L^2(Q_\tau)} \\ \\
= \tau^{1/4} \|[(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))]\|_{L^2(0, \tau)} \|z_0(x)\|_{L^2(\Omega)} \\ \\
\leq C_2\tau^{3/4} \left(r + \|V_0\|_{H^{3/4}(0, T; L^2(\Omega))}\right) \left[1 + \tau^{1/2} (r + \|h^*\|_{L^2(0, T)})\right].
\end{array} \end{equation} Moreover, \begin{equation}
\|V_{|\Sigma_\tau}\|_{ L^2(\Sigma_\tau)} \leq \tau^{\frac{1}{2} - \frac{1}{p}} \|V_{|\Sigma_\tau}\|_{ L^p(0, \tau; L^2(\Gamma))}
\leq \tau^{\frac{1}{2} - \frac{1}{p}} \left(r + \|V_{0|\Sigma_T}\|_{ L^p(0, T; L^2(\Gamma))}\right). \end{equation} Next, by Lemma \ref{le2.1}, we have \begin{equation}
\|\Psi(V) - \Psi(0)\|_{L^2(\Sigma_\tau)} \leq C(T) \tau^{1/2} \|V\|_{H^{3/4,3/2}(Q_\tau)} \leq C(T) \tau^{1/2} \left(r + \|V_0\|_{H^{3/4,3/2}(Q_T)}\right), \end{equation} \begin{equation}
\|H \ast BV\|_{L^2(\Sigma_\tau)} \leq \tau^{1/2} \|H\|_{L^2(0, \tau)} \|BV\|_{L^2(\Sigma_\tau)}
\leq \tau^{1/2} \left(r + \|h^*\|_{L^2(0, T)}\right) \left(r + \|BV_0\|_{L^2(\Sigma_T)}\right). \end{equation} Finally, \begin{equation}\label{eq3.15} \begin{array}{c}
\|[(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))] z_1(y)\|_{L^2(\Sigma_\tau)}\\ \\
= \|(\psi_1,V(t,\cdot)) + H \ast (\psi_1, V(t,\cdot))\|_{L^2(0, \tau)} \| z_1\|_{L^2(\Gamma)} \\ \\
\leq C_1 \tau^{1/2} (r + \|V_0\|_{H^{3/4}(0, T; L^2(\Omega))})\left [1 + \tau^{1/2} \left(r + \|h^*\|_{L^2(0, T)}\right)\right].
\end{array} \end{equation} From (\ref{eq3.8}) and (\ref{eq3.10})-(\ref{eq3.15}), we deduce the estimate $$
\|v - V_0\|_{X_\tau} + \|h - h^*\|_{L^2((0, \tau))} \leq C(r) \tau^\epsilon, $$ for some $\epsilon >0$. Choosing $\tau$ such that $C(r) \tau^\epsilon \leq r$, we obtain that $P: Z_\tau(r) \to Z_\tau(r)$.
Now, for $j \in \{1,2\}$, we take $(V_j, H_j) \in Z_\tau(r)$ and we put $(v_j, h_j):= P(V_j, H_j)$. Then the pair $(v_1-v_2, h_1-h_2)$ solves the system $$ \left\{\begin{array}{l} D_t(v_1 - v_2)(t,x) = A(v_1 - v_2) (t,x) + H_1 \ast AV_1(t,x) - H_2 \ast AV_2(t,x) \\ \\
- [(\psi_1,V_1(t,\cdot) - V_2(t,\cdot) ) + H_1 \ast (\psi_1, V_1(t,\cdot)) - H_2 \ast (\psi_1, V_2(t,\cdot))] z_0(x), \quad (t,x) \in Q_\tau, \\ \\ v_1(0,x) - v_2(0,x) = 0, \quad x \in \Omega \\ \\ B(v_1 - v_2) (t,y) = V_2(t,y) - V_1(t,y) + \Psi(V_1)(t,y) - \Psi(V_2)(t,y) - H_1 \ast BV_1(t,y) + H_2 \ast BV_2(t,y) \\ \\ + [(\psi_1,V_1(t,\cdot) - V_2(t,\cdot)) + H_1 \ast (\psi_1, V_1(t,\cdot)) - H_2 \ast (\psi_1, V_2(t,\cdot))] z_1(y), \quad (t,y) \in \Sigma_\tau \\ \\ h_1(t) - h_2(t) = - (\psi_1,V_1(t,\cdot) - V_2(t,\cdot)) - H_1 \ast (\psi_1, V_1(t,\cdot))(t) + H_2 \ast (\psi_1, V_2(t,\cdot))(t), \quad t \in (0, \tau), \end{array} \right. $$ so that $$ \begin{array}{c}
\|(v_1-v_2, h_1- h_2)\|_{Z_\tau} \leq C\big (\| H_1 \ast AV_1 - H_2 \ast AV_2\|_{H^{1/4,1/2}(Q_\tau)'} \\ \\
+ \|- [(\psi_1,V_1(t,\cdot) - V_2(t,\cdot) ) + H_1 \ast (\psi_1, V_1(t,\cdot)) - H_2 \ast (\psi_1, V_2(t,\cdot))] z_0(x)\| _{H^{1/4,1/2}(Q_\tau)'} \\ \\
+\|(V_2 - V_1)_{|\Sigma_\tau}\|_{L^2(\Sigma_\tau)} + \|\Psi(V_1) - \Psi(V_2)\|_{L^2(\Sigma_\tau)} + \|H_1 \ast BV_1 - H_2 \ast BV_2\|_{L^2(\Sigma_\tau)} \\ \\
+ \|[(\psi_1,V_1(t,\cdot) - V_2(t,\cdot)) + H_1 \ast (\psi_1, V_1(t,\cdot)) - H_2 \ast (\psi_1, V_2(t,\cdot))] z_1(y)\|_{L^2(\Sigma_\tau)} \\ \\
+ \|(\psi_1,V_1(t,\cdot) - V_2(t,\cdot)) + H_1 \ast (\psi_1, V_1(t,\cdot))(t) - H_2 \ast (\psi_1, V_2(t,\cdot))\|_{L^2(0, \tau)}\big ). \end{array} $$ By Lemma \ref{le1.1A} we have \begin{equation}\label{eq3.16} \begin{array}{c}
\|(\psi_1,V_1(t,\cdot) - V_2(t,\cdot)) + H_1 \ast (\psi_1, V_1(t,\cdot))(t) - H_2 \ast (\psi_1, V_2(t,\cdot))\|_{L^2(0, \tau)} \\ \\
\leq \|(\psi_1,V_1(t,\cdot) - V_2(t,\cdot)) + H_1 \ast (\psi_1, V_1(t,\cdot) - V_2(t,\cdot)) \|_{L^2(0, \tau)} \\ \\
+ \|(H_1 - H_2) \ast (\psi_1, V_2(t,\cdot))\|_{L^2(0, \tau)} \\ \\
\leq C_1 \tau^{1/2} \|V_1 - V_2\|_{H^{3/4}(0, \tau; L^2(\Omega))} \left(1 + \tau^{1/2}\|H_1\|_{L^2(0, \tau)}\right)\\ \\
+ \|\psi_1\|_{L^2(\Omega)} \tau^{1/2} \|H_1 - H_2\|_{L^2(0, \tau)} \|V_2\|_{L^2(Q_\tau)} \\ \\
\leq C_1 \tau^{1/2} \|V_1 - V_2\|_{H^{3/4}(0, \tau; L^2(\Omega))} \left(1 + \tau^{1/2}\|H_1\|_{L^2(0, \tau)}\right) \\ \\
+ C_2 \tau \|H_1 - H_2\|_{L^2(0, \tau)} \|V_2\|_{H^{3/4}(0, \tau; L^2(\Omega))} \\ \\
\leq C_3 \tau^{1/2} (1+ r + \|h^*\|_{L^2(0, T)} + \|V_0\|_{X_T}) (\|V_1 - V_2\|_{X_\tau} + \|H_1 - H_2\|_{L^2(0, \tau)}). \end{array} \end{equation} Employing (\ref{eq3.16}), we obtain \begin{equation} \begin{array}{c}
\|- [(\psi_1,V_1(t,\cdot) - V_2(t,\cdot) ) + H_1 \ast (\psi_1, V_1(t,\cdot)) - H_2 \ast (\psi_1, V_2(t,\cdot))] z_0(x)\| _{H^{1/4,1/2}(Q_\tau)'} \\ \\
\leq C_1 \tau^{1/4} \|- [(\psi_1,V_1(t,\cdot) - V_2(t,\cdot) ) + H_1 \ast (\psi_1, V_1(t,\cdot)) - H_2 \ast (\psi_1, V_2(t,\cdot))] z_0(x)\| _{L^2(Q_\tau)} \\ \\
\leq C_2 \tau^{3/4} \left(1+ r + \|h^*\|_{L^2(0, T)} + \|V_0\|_{X_T}\right) \left(\|V_1 - V_2\|_{X_\tau} + \|H_1 - H_2\|_{L^2(0, \tau)}\right). \end{array} \end{equation} and \begin{equation} \begin{array}{c}
\|[(\psi_1,V_1(t,\cdot) - V_2(t,\cdot)) + H_1 \ast (\psi_1, V_1(t,\cdot)) - H_2 \ast (\psi_1, V_2(t,\cdot))] z_1(y)\|_{L^2(\Sigma_\tau)} \\ \\
\leq C \tau^{1/2} \left(1+ r + \|h^*\|_{L^2(0, T)} + \|V_0\|_{X_T}\right) \left(\|V_1 - V_2\|_{X_\tau} + \|H_1 - H_2\|_{L^2(0, \tau)}\right). \end{array} \end{equation} Next, we have \begin{equation} \begin{array}{c}
\| H_1 \ast AV_1 - H_2 \ast AV_2\|_{H^{1/4,1/2}(Q_\tau)'} \\ \\
\leq \| H_1 \ast A(V_1 - V_2)\|_{H^{1/4,1/2}(Q_\tau)'} + \| (H_1 - H_2) \ast AV_2\|_{H^{1/4,1/2}(Q_\tau)'} \\ \\
\leq C_1\left ( \|H_1\|_{L^1(0, \tau)} \|A(V_1 - V_2)\|_{H^{1/4,1/2}(Q_\tau)'} + \|H_1-H_2\|_{L^1(0, \tau)} \|A V_2\|_{H^{1/4,1/2}(Q_\tau)'} \right) \\ \\
\leq C_1 \tau^{1/2} \big[\left(r + \|h^*\|_{L^2(0, T)}\right) \|A(V_1 - V_2)\|_{H^{1/4,1/2}(Q_\tau)'} \\ \\
+ \|H_1-H_2\|_{L^2(0, \tau)} \left(r + C_2\|AV_0\|_{H^{1/4,1/2}(Q_T)'}\right)\big], \end{array} \end{equation} \begin{equation} \begin{array}{c}
\|H_1 \ast BV_1 - H_2 \ast BV_2\|_{L^2(\Sigma_\tau)} \leq \|H_1 \ast B(V_1 - V_2)\|_{L^2(\Sigma_\tau)} + \|(H_1 - H_2) \ast B V_2\|_{L^2(\Sigma_\tau)} \\ \\
\leq \tau^{1/2} \left(\|H_1\|_{L^2(0, \tau)} \|B(V_1 - V_2)\|_{L^2(\Sigma_\tau)} + \|H_1 - H_2\|_{L^2(0, \tau)} \|B V_2\|_{L^2(\Sigma_\tau)}\right) \\ \\
\leq \tau^{1/2} \left[\left(r + \|{h}^*\|_{L^2(0, T)}\right) \|B(V_1 - V_2)\|_{L^2(\Sigma_\tau)}
+ \left(r + \|B V_0\|_{L^2(\Sigma_T)}\right) \|H_1 - H_2\|_{L^2((0, \tau))}\right], \end{array} \end{equation}
\begin{equation}
\|(V_2 - V_1)_{|\Sigma_\tau}\|_{L^2(\Sigma_\tau)} \leq \tau^{\frac{1}{2} - \frac{1}{p}} \|(V_2 - V_1)_{|\Sigma_\tau}\|_{L^p(0, \tau; L^2(\Gamma))}, \end{equation} and, finally, \begin{equation}\label{eq3.22}
\|\Psi(V_1) - \Psi(V_2)\|_{L^2(\Sigma_\tau)} \leq C(T) \tau^{1/2} \|V_1 - V_2\|_{H^{3/4,3/2}(Q_\tau)}. \end{equation} From (\ref{eq3.16})-(\ref{eq3.22}), we deduce an estimate of the form \begin{equation}
\|(v_1-v_2, h_1- h_2)\|_{Z_\tau} \leq C(r) \tau^\epsilon \|(V_1-V_2, H_1- H_2)\|_{Z_\tau}, \end{equation} valid for every $(V_1,H_1)$ and $(V_2,H_2)$ in $Z_\tau(r)$, for some $\epsilon >0$. If we choose $\tau$ such that $$ C(r) \tau^\epsilon < 1, $$ and $\tau$ so small that $P$ carries $Z_\tau(r)$ into itself, we have that $P$ has a unique fixed point in $Z_\tau(r)$.
The proof is complete. \end{proof} To obtain global existence and uniqueness, we shall employ the following
\begin{lemma}\label{le3.2} Assume that (K1)-(K6) are satisfied. Let $0 < \tau < \tau + \delta \leq \min\{T, 2\tau\}$ and let $(V,H) \in X_{\tau+\delta} \times L^2(0, \tau+\delta)$ be a solution of (\ref{eqA2.6}) (replacing $T$ with $\tau + \delta$). Setting
\begin{equation}\label{eq3.25B} \begin{array}{cc}
w:= V_{|Q_{\tau}}, & v(t, \cdot):= V(\tau + t, \cdot), \\ \\
h_0:= H_{|(0, \tau)}, & h(t):= H(\tau+t), \end{array} \end{equation} the following propositions hold.
(I) $(v,h) \in X_\delta \times L^2(0, \delta)$ and solves the system \begin{equation}\label{eq3.25} \left\{\begin{array}{l} D_tv(t,x) = Av(t,x) + ({\widetilde h}_0 \ast {\widetilde A}w )(\tau + t,x) + (h \ast Aw)(t,x) + (h_0 \ast Av)(t,x) + v^*(\tau + t,x) \\ \\
- [(\psi_1,v(t,\cdot)) + {\widetilde h}_0 \ast (\psi_1, \widetilde w)(\tau+t) + h \ast (\psi_1, w)(t) + h_0 \ast (\psi_1,v)(t)] z_0(x), \quad (t,x) \in Q_\delta, \\ \\ v(0,x) = w(\tau,x), \quad x \in \Omega \\ \\ Bv(t,y) = - v(t,y) +\Psi(w,v)(t,y) - [({\widetilde h}_0 \ast {\widetilde B }w)(\tau+t,y) + (h \ast Bw) (t,y) + (h_0 \ast Bv)(t,y)] \\ \\ +
[(\psi_1,v(t,\cdot)) + \tilde h_0 \ast (\psi_1, \widetilde w)(\tau+t) + h \ast (\psi_1, w)(t) + h_0 \ast (\psi_1,v)(t)] z_1(y) \\ \\ + v^*_\Gamma (\tau+t,y), \quad (t,y) \in \Sigma_\delta \\ \\ h(t) = h^*(\tau+t) - [(\psi_1,v(t,\cdot)) + {\widetilde h}_0 \ast (\psi_1, \tilde w)(\tau+t) + h \ast (\psi_1, w)(t) + h_0 \ast (\psi_1,v)(t)], \\ \\
t \in (0, \delta), \end{array} \right. \end{equation} where ${\widetilde h}_0$, ${\widetilde A}w$ and ${\widetilde B}w$ indicate the trivial extensions of $h_0$, $Aw$ and $Bw$ (see, in particular, (\ref{eq1.7A})).
(II) Let $(w,h_0) \in X_\tau \times L^2(0, \tau)$ be a solution of (\ref{eqA2.6}), with $\tau$ replacing $T$. Let $(v,h) \in X_\delta \times L^2(0, \delta)$ be a solution of (\ref{eq3.25}). Setting $$ V(t,x):= \left\{\begin{array}{lll} w(t,x), & \mbox{ if } & (t,x) \in Q_\tau, \\ \\ v(t-\tau,x), &\mbox{ if } & (t,x) \in [\tau, \tau+\delta) \times \Omega, \end{array} \right. $$ $$ H(t):= \left\{\begin{array}{lll} h_0(t), & \mbox{ if } & t \in (0, \tau), \\ \\ h(t-\tau), & \mbox{ if } & t \in [\tau, \tau+\delta), \end{array} \right. $$ then $(V,H) \in X_{\tau+\delta} \times L^2(0, \tau+\delta )$ and solves (\ref{eqA2.6}), where we have replaced $T$ by $\tau+\delta$. \end{lemma}
\begin{proof} (I) follows from Proposition \ref{pr1.2} and Lemma \ref{le1.8} (III).
Concerning (II), by Proposition \ref{pr1.2} we obtain that $(V,H) \in X_{\tau+\delta} \times L^2(0, \tau+\delta)$ and solves $$ D_t V = AV + f \quad \mbox{ in } \quad Q_{\tau + \delta}, $$ with $$ \begin{array}{c}
f_{|(0, \tau)} = h_0 \ast Aw + v^*_{|(0, \tau)} - [(\psi_1, w(t,\cdot)) + h_0 \ast (\psi_1, w(t,\cdot))] z_0(\cdot) \\ \\
= \{H \ast AV + v^* - [(\psi_1,V) + H \ast (\psi_1, V)] z_0\}_{|(0, \tau)}, \end{array} $$ $$ \begin{array}{c}
f_{|(\tau, \tau+\delta)}(t,\cdot) = ({\widetilde h}_0 \ast {\widetilde A}w ) (t,\cdot)
+ (h \ast Aw)(t-\tau,\cdot) + (h_0 \ast Av)(t-\tau,\cdot) + v^*_{|(\tau, \tau + \delta)}(t,\cdot) \\ \\
- [(\psi_1,v(t-\tau,\cdot)) + {\widetilde h}_0 \ast (\psi_1, \widetilde w)(t) + h \ast (\psi_1, w)(t-\tau) + h_0 \ast (\psi_1,v)(t-\tau)] z_0(\cdot) \\ \\
= \{H \ast AV + v^* - [(\psi_1,V) + H \ast (\psi_1, V)] z_0\}_{|(\tau, \tau+\delta)},
\end{array} $$
as it is clear that $Aw = AV_{|(0, \tau)}$ and $Av = AV_{|(\tau, \tau + \delta)}(\cdot + \tau)$. So, by Lemma \ref{le1.8}, we conclude that the first equation in (\ref{eqA2.6}) is satisfied if we replace $T$ by $\tau+\delta$. The validity of the other equations in (\ref{eqA2.6}) can be proved analogously. \end{proof}
Now we are able to prove uniqueness.
\begin{lemma}\label{le3.3} Assume (K1)-(K6). Then Problem (\ref{eqA2.6}) has, at most, one solution $(v,h) \in X_\tau \times L^2(0, \tau)$, $\forall \, \tau \in (0, T]$. \end{lemma} \begin{proof} Let $(V_1,H_1)$ and $(V_2,H_2)$ be solutions of (\ref{eqA2.6}) in $X_\tau \times L^2(0, \tau)$. We observe that there exists $\tau_1 \in (0, \tau]$ such that $(V_1,H_1)$ and $(V_2,H_2)$ coincide in $Q_{\tau_1} \times (0, \tau_1)$. This follows easily from Lemma \ref{le3.1}. In fact, by Lemma \ref{le1.3} (I), there exists $r >0$ such that, for $j \in \{1,2\}$ and $\forall \,\sigma \in (0, \tau]$, $$
\|(V_j - V_0)_{|Q_\sigma}\|_{X_\sigma} + \|(H_j - h^*)_{|(0, \sigma)}\|_{L^2(0, \sigma)} \leq r. $$ Then, choosing $\tau_1 \leq \tau(r)$ (see Lemma \ref{le3.1}), we obtain that $(V_1,H_1)$ and $(V_2,H_2)$ coincide in $Q_{\tau_1} \times (0, \tau_1)$. We choose $\tau_1$ as large as possible. More precisely, we set \begin{equation} \begin{array}{c}
\tau_1:= \sup \{\sigma \in (0, \tau]: \|V_1 - V_2\|_{X_\sigma} + \|H_1- H_2\|_{L^2(0, \sigma)} = 0\} \\ \\
= \max \{\sigma \in (0, \tau]: \|V_1 - V_2\|_{X_\sigma} + \|H_1- H_2\|_{L^2(0, \sigma)} = 0\}. \end{array} \end{equation} We have to show that $\tau_1 = \tau$. We assume that $\tau_1 < \tau$. We shall see that there exists $\delta \in (0, \tau - \tau_1]$ such that $(V_1,H_1)$ and $(V_2,H_2)$ coincide in $Q_{\tau_1 + \delta} \times (0, \tau_1 + \delta)$ and this contradicts the definition of $\tau_1$. To this end, consider \begin{equation} \delta \in (0, \min\{\tau_1, \tau - \tau_1\}]. \end{equation} We introduce the new functions \begin{equation} \begin{array}{ccc}
w:= V_{1|Q_{\tau_1}} = V_{2|Q_{\tau_1}}, & v_1(t, \cdot):= V_1(\tau_1+t, \cdot), & v_2(t, \cdot):= V_2(\tau_1+t, \cdot), \\ \\
h_0:= H_{1|(0, \tau_1)} = H_{2|(0, \tau_1)}, & h_1(t):= H_1(\tau_1+t), & h_2(t):= H_2(\tau_1+t). \end{array} \end{equation} and we consider problem (\ref{eq3.25}), where we replace $v$ by $v_j$ and $h$ by $h_j$ ($j \in \{1,2\})$. Setting \begin{equation} v:= v_1 - v_2, \quad h := h_1 - h_2, \end{equation} we obtain that $(v,h)$ satisfies the system \begin{equation}\label{eq3.30} \left\{\begin{array}{l} D_tv(t,x) = Av(t,x) + (h \ast Aw)(t,x) + (h_0 \ast Av)(t,x) \\ \\
- [(\psi_1,v(t,\cdot)) + h \ast (\psi_1, w)(t) + h_0 \ast (\psi_1,v)(t)] z_0(x), \quad (t,x) \in Q_\delta, \\ \\ v(0,x) = 0, \quad x \in \Omega \\ \\ Bv(t,y) = - v(t,y) +\Psi(w,v_1)(t,y) -\Psi(w,v_2)(t,y) - [ (h \ast Bw) (t,y) + (h_0 \ast Bv)(t,y)] \\ \\ + [(\psi_1,v(t,\cdot)) + h \ast (\psi_1, w)(t) + h_0 \ast (\psi_1,v)(t)] z_1(y), \quad (t,y) \in \Sigma_\delta \\ \\ h(t) = - [(\psi_1,v(t,\cdot)) + h \ast (\psi_1, w)(t) + h_0 \ast (\psi_1,v)(t)], \quad
t \in (0, \delta). \end{array} \right. \end{equation} Following the same arguments as in the proofs of Lemma \ref{le3.1} and also of Corollary \ref{co2.1}, we deduce that there exist $C, \epsilon >0$, such that, if $\delta \in (0, \min\{\tau_1, \tau - \tau_1\}]$, then \begin{equation}
\|v\|_{X_\delta} + \|h\|_{L^2(0, \delta)} \leq C \delta^\epsilon (\|v\|_{X_\delta} + \|h\|_{L^2(0, \delta)}). \end{equation}
Choosing $\delta$ sufficiently small, this implies $\|v\|_{X_\delta} + \|h\|_{L^2(0, \delta)} = 0$. \end{proof} Now we want to show that (\ref{eqA2.6}) has a unique solution in $[0, T]$. To this aim, we consider the auxiliary system \begin{equation}\label{eq3.32} \left\{\begin{array}{l} D_tv(t,x) = Av(t,x) + (h \ast \zeta_0)(t,x) + (h_0 \ast Av)(t,x) + z(t,x) \\ \\
- [(\psi_1,v(t,\cdot)) + h \ast \chi_1(t) + h_0 \ast (\psi_1,v)(t)] z_0(x), \quad (t,x) \in Q_T, \\ \\ v(0,x) = w(\tau,x), \quad x \in \Omega, \\ \\ Bv(t,y) = - v(t,y) + \Psi(w,v)(t,y) - [(h \ast g_0) (t,y) + (h_0 \ast Bv)(t,y)] \\ \\ + [(\psi_1,v(t,\cdot)) + h \ast \chi_1(t) + h_0 \ast (\psi_1,v)(t)] z_1(y) + g(t,y), \quad (t,y) \in \Sigma_T \\ \\ h(t) = k(t) - [(\psi_1,v(t,\cdot)) + h \ast \chi_1(t) + h_0 \ast (\psi_1,v)(t)], \quad t \in (0, T). \end{array} \right. \end{equation}
\begin{lemma}\label{le3.4} Assume that (H1)-(H2) hold. Consider system (\ref{eq3.32}), where $\zeta_0,\, z \in H^{1/4,1/2}(Q_T)'$, $h_0, \,k, \,\chi_1 \in L^2(0, T)$, $\psi_1, \,z_0 \in L^2(\Omega)$, $g_0,\, g \in L^2(\Sigma_T)$, $z_1 \in L^2(\Gamma)$, $w \in X_\tau$, for some $\tau >0$. Then (\ref{eq3.32}) admits a unique solution $(v,h)$ in $X_T \times L^2(0, T)$. \end{lemma}
\begin{proof} We prove the result in two steps. First, we show that there exists $\delta \in (0, T]$, independent of $z$, $w$, $g$ and $k$, such that (\ref{eq3.32}) has a unique solution in $X_\delta \times L^2(0, \delta)$. This can be proved as follows. Set $$ Z_\delta := \{(V,H) \in X_\delta \times L^2(0, \delta) : V(0,\cdot) = w(\tau,\cdot)\}, \quad 0 < \delta \leq T, $$ which is a closed subset of $X_\delta \times L^2(0, \delta)$. If $(V, H) \in Z_\delta$, we consider the solution $(v,h) \in Z_\delta$ of $$ \left\{\begin{array}{l} D_tv(t,x) = Av(t,x) + (H \ast \zeta_0)(t,x) + (h_0 \ast AV)(t,x) + z(t,x) \\ \\
- [(\psi_1,V(t,\cdot)) + H \ast \chi_1(t) + h_0 \ast (\psi_1,V)(t)] z_0(x), \quad (t,x) \in Q_T, \\ \\ v(0,x) = w(\tau,x), \quad x \in \Omega, \\ \\ Bv(t,y) = - V(t,y) + \Phi(w,V)(t,y) - [(H \ast g_0) (t,y) + (h_0 \ast BV)(t,y)] \\ \\ +
[(\psi_1,V(t,\cdot)) + H \ast \chi_1(t) + h_0 \ast (\psi_1,V)(t)] z_1(y) + g(t,y), \quad (t,y) \in \Sigma_T \\ \\ h(t) = k(t) - [(\psi_1,V(t,\cdot)) + H \ast \chi_1(t) + h_0 \ast (\psi_1,V)(t)], \quad t \in (0, \delta). \end{array} \right. $$ We can show that, if $\delta$ is sufficiently small, the mapping $(V,H) \to (v,h)$ admits a unique fixed point. In fact, considering $(V_j,H_j) \to (v_j,h_j)$ ($j \in \{1,2\}$), then we have $$ \left\{\begin{array}{l} D_t(v_1-v_2)(t,x) = A(v_1 - v_2) (t,x) + [(H_1 - H_2) \ast \zeta_0](t,x) + [h_0 \ast A(V_1 - V_2)](t,x) \\ \\
- [(\psi_1,V_1(t,\cdot) - V_2(t,\cdot)) + (H_1 - H_2) \ast \chi_1(t) + h_0 \ast (\psi_1,V_1 - V_2)(t)] z_0(x), \quad (t,x) \in Q_T, \\ \\ v_1(0,x) - v_2(0,x) = 0, \quad x \in \Omega, \\ \\ B(v_1 - v_2)(t,y) = - V_1(t,y) + V_2(t,y) + \Phi(w,V_1)(t,y) - \Phi(w,V_2)(t,y) \\ \\
- [((H_1 - H_2) \ast g_0) (t,y) + (h_0 \ast B(V_1 - V_2))(t,y)] \\ \\ + [(\psi_1,V_1(t,\cdot) - V_2(t,\cdot)) + (H_1 - H_2) \ast \chi_1(t) + h_0 \ast (\psi_1,V_1 - V_2)(t)] z_1(y), \quad (t,y) \in \Sigma_T, \\ \\ h_1(t) - h_2(t) = - [(\psi_1,V_1(t,\cdot) - V_2(t, \cdot)) + (H_1 - H_2) \ast \chi_1(t) + h_0 \ast (\psi_1,V_1 - V_2)(t)], \quad t \in (0, \delta), \end{array} \right. $$ so that $$
\|v_1 - v_2\|_{X_\delta} + \|h_1 - h_2\|_{L^2(0, \delta)} \leq C \delta^\epsilon \left(\|V_1 - V_2\|_{X_\delta} + \|H_1 - H_2\|_{L^2(0, \delta)}\right), $$ for some $\epsilon >0$, and $C$ and $\epsilon$ independent of $z$, $w$, $g$, $k$ (cf. Corollary \ref{co2.1}). Hence, if $\delta$ is sufficiently small, problem (\ref{eq3.32}) has a unique solution $(v,h) \in X_\delta \times L^2(0, \delta)$. Observe that, in case $\delta < T$, we can extend $(v, h)$ to $(0,T)$ on account of Proposition \ref{pr1.2} and Lemma \ref{le1.8} (II). Indeed, taking as new unknowns $v_1(t, \cdot):= v(\delta + t, \cdot)$ and $h_1(t):= h(\delta + t)$, then $(v_1, h_1)$ satisfies \begin{equation}\label{eq3.33} \left\{\begin{array}{l}
D_t v_1(t,x) = A v_1(t,x) + ( h_1 \ast \zeta_0)(t,x) + ({\widetilde h}_{|(0, \delta)} \ast \zeta_0)(t +\delta,x) \\ \\
+ (h_0 \ast Av_1)(t,x) + (h_0 \ast {\widetilde A} v_{|(0, \delta)})(t + \delta,x) + z(t + \delta,x) \\ \\
- [(\psi_1,v_1(t,\cdot)) + h_1 \ast \chi_1(t) + ({\widetilde h}_{|(0, \delta)} \ast \chi_1) (t+\delta) + h_0 \ast (\psi_1,v_1)(t) \\ \\
+ h_0 \ast (\psi_1,{\widetilde v}_{|(0, \delta)})(t+\delta)] z_0(x), \quad (t,x) \in Q_{\min\{\delta, T-\delta\}}, \\ \\ v_1(0,x) = w_1(\tau+\delta,x) = v(\delta, x), \quad x \in \Omega, \\ \\
Bv_1(t,y) = - v_1(t,y) + \Phi(w_1,v_1)(t,y) - [(h_1 \ast g_0) (t,y) + ({\widetilde h}_{|(0, \delta)} \ast g_0) (t+\delta,y) \\ \\
+ (h_0 \ast Bv_1)(t,y) + (h_0 \ast {\widetilde B}v_{|(0, \delta)})(t+\delta,y)] \\ \\
+ [(\psi_1,v_1(t,\cdot)) + h_1 \ast \chi_1(t) + ({\widetilde h}_{|(0, \delta)} \ast \chi_1) (t+\delta) + h_0 \ast (\psi_1,v_1)(t) \\ \\
+ h_0 \ast (\psi_1,{\widetilde v}_{|(0, \delta)})(t+\delta)]
z_1(y) + g(t+\delta,y), \quad (t,y) \in \Sigma_{\min\{\delta, T-\delta\}}, \\ \\
h_1(t) = k(t+\delta) - [(\psi_1,v_1(t,\cdot)) + h_1 \ast \chi_1(t) + {\widetilde h}_{|(0, \delta)} \ast \chi_1 (t+\delta) + h_0 \ast (\psi_1,v_1)(t) \\ \\
+ h_0 \ast (\psi_1,{\widetilde v}_{|(0, \delta)})(t+\delta)], \quad t \in (0, \min\{\delta, T-\delta\}), \end{array} \right. \end{equation} with $$ w_1(t,x) = \left\{\begin{array}{ll} w(t,x) & \mbox{ if } t \in [0, \tau], \\ \\ v(t-\tau,x) & \mbox{ if } t \in [\tau, \tau +\delta]. \end{array} \right. $$ Now we observe that (\ref{eq3.33}) is a system of the same form as (\ref{eq3.32}). In fact it suffices to replace $v$ by $v_1$, $h$ by $h_1$, $z(t,x)$ by
$$({\widetilde h}_{|(0, \delta)} \ast \zeta_0)(t +\delta,x) +
(h_0 \ast {\widetilde A} v_{|(0, \delta)})(t + \delta,x) + z(t + \delta,x) - [ {\widetilde h}_{|(0, \delta)} \ast \chi_1 (t+\delta)
+ h_0 \ast (\psi_1,{\widetilde v}_{|(0, \delta)})(t+\delta)] z_0(x), $$ $w$ by $w_1$, $g(t,y)$ by $$
- [ {\widetilde h}_{|(0, \delta)} \ast g_0 (t+\delta,y) + (h_0 \ast {\widetilde B}v_{|(0, \delta)})(t+\delta,y)]
+ [{\widetilde h}_{|(0, \delta)} \ast \chi_1 (t+\delta) + h_0 \ast (\psi_1,{\widetilde v}_{|(0, \delta)})(t+\delta)]
z_1(y) + g(t+\delta,y), $$ and $k(t)$ by
$$k(t+\delta) - [{\widetilde h}_{|(0, \delta)} \ast \chi_1 (t+\delta) + h_0 \ast (\psi_1, {\widetilde v}_{|(0, \delta)})(t+\delta)].$$ So, following the same arguments as in the first part of the proof, we can determine a unique solution $(v_1,h_1)$ in $X_{\min\{\delta,T-\delta\}} \times L^2(0, \min\{\delta,T-\delta\})$. Now, setting $$ v(t,x):= v_1(t-\delta,x), \quad h(t) := h_1(t-\delta), \quad \mbox{ for } t \in (\delta, \min\{2\delta, T\}), $$ and applying Proposition \ref{pr1.2}, together with Lemma \ref{le1.8}, we obtain a unique solution in $X_{\min\{2\delta, T\}} \times L^2(0, \min\{2\delta, T\})$ (see also the proof of Lemma \ref{le3.2} (II)). In case we have $2\delta < T$, we can iterate the method and, in a finite number of steps, we can construct a solution in $[0, T]$. \end{proof}
To conclude, we state the main result of this section. \begin{theorem}\label{th6.6} Assume (K1)-(K6). Then (\ref{eqA2.6}) has a unique solution in $X_T \times L^2(0, T)$. \end{theorem} \begin{proof} The uniqueness was already proved in Lemma \ref{le3.3}. Concerning the existence, we have already seen in Lemma \ref{le3.1} that there exists a solution $(w,h_0) \in X_\tau \times L^2(0, \tau)$, for some $\tau \in (0, T]$. If $\tau < T$, we employ Lemma \ref{le3.2} in order to extend $(w,h_0)$ from $(0, \tau)$ to $(0, \min\{2\tau, T\})$. To this aim, we consider system (\ref{eq3.25}), which is of the form (\ref{eq3.32}) where we set $\zeta_0:= Aw$, $\chi_1(t):= - (\psi_1, w(t,\cdot))$, $g_0(t,y) := Bw(t,y)$, $$ z(t,x):= ({\widetilde h}_0 \ast {\widetilde A}w )(\tau + t,x) + v^*(\tau + t,x) - [{\widetilde h}_0 \ast (\psi_1, {\widetilde w})(\tau + t) ]z_0(x), $$ $$ g(t,y):= -({\widetilde h}_0 \ast {\widetilde B}w)(\tau + t,y) + [{\widetilde h}_0 \ast (\psi_1, {\widetilde w})(\tau + t)] z_1(y) + v^*_\Gamma (\tau + t,y), $$ $$ k(t):= h^*(\tau + t) - {\widetilde h}_0 \ast (\psi_1, {\widetilde w})(\tau + t). $$ So, by Lemma \ref{le3.4}, we can extend $(w, h_0)$ to a solution of (\ref{eqA2.6}) in $X_{\min\{2\tau,T\}} \times L^2(0, \min\{2\tau,T\})$. If $2\tau < T$, we iterate the procedure, replacing $\tau$ by $2\tau$, and in a finite number of steps we get the result. \end{proof}
\section{Proof of the main result}\label{se7}
\setcounter{equation}{0}
Now we are able to prove the main result of the paper, namely Theorem \ref{thA2.1}. By Proposition \ref{prA2.1}, we are reduced to search for a solution $(v,h) \in H^{1,2}(Q_T) \times L^2(0, T)$ to Problem \ref{P3}. In Section \ref{se6}, we have just seen that there exists a unique solution $(v,h) \in X_T \times L^2(0, T)$. So, it remains to show that, if (H1)-(H9) are satisfied, then $v$ belongs, in fact, to $H^{1,2}(Q_T)$. To this aim, we begin with the following
\begin{lemma} \label{le3.5} Taking $h \in L^2(0, T)$, $f \in H^{1/4,1/2}(Q_T)'$, $v_0 \in H^{1/2}(\Omega)$, $g \in L^2(\Sigma_T)$, consider the system \begin{equation}\label{eq3.34} \left\{\begin{array}{l} D_tv(t,x) = Av(t,x) + h \ast Av(t,x) + f(t,x), \quad (t,x) \in Q_T, \\ \\ v(0,x) = v_0(x), \quad x \in \Omega, \\ \\ Bv(t,y) = - h \ast Bv(t,y) + g(t,y), \quad (t,y) \in \Sigma_T, \end{array} \right. \end{equation} Then the following propositions hold.
(I) (\ref{eq3.34}) has a unique solution $v$ in $X_T$.
(II) If $f \in L^2(Q_T)$, $v_0 \in H^{1}(\Omega)$, and $g \in H^{1/4,1/2}(\Sigma_T)$, then $v$ belongs to $H^{1,2}(Q_T)$. \end{lemma}
\begin{proof} The proof of (I) is a simplified variation of the proof of Lemma \ref{le3.4} and we leave it to the reader.
Concerning (II), we follow again the idea contained in the proof of Lemma \ref{le3.4}. Let $\delta \in (0, T]$ and set \begin{equation} Z_\delta := \{V \in H^{1,2}(Q_\delta) : V(0,\cdot) = v_0\}, \end{equation} which is a closed subset of $H^{1,2}(Q_\delta)$. If $V \in Z_\delta$, we consider the solution $v \in Z_\delta$ of $$ \left\{\begin{array}{l} D_tv(t,x) = Av(t,x) + h \ast AV(t,x) + f(t,x), \quad (t,x) \in Q_\delta, \\ \\ v(0,x) = v_0(x), \quad x \in \Omega\\ \\ Bv(t,y) = - h \ast BV(t,y) + g(t,y), \quad (t,y) \in \Sigma_\delta. \end{array} \right. $$ We observe that $h \ast AV \in L^2(Q_\delta)$ and, on account of Theorem \ref{th1.1} (II) and Lemma \ref{le1.6}, we have also $h \ast BV \in H^{1/4,1/2}(\Sigma_\delta)$. We can show that, if $\delta$ is sufficiently small, the mapping $V \to v$ has a unique fixed point. In fact, considering $V_j \to v_j$ ($j \in \{1,2\}$), then we have $$ \left\{\begin{array}{l} D_t(v_1 - v_2)(t,x) = A(v_1 - v_2)(t,x) + h \ast A(V_1 - V_2)(t,x), \quad (t,x) \in Q_\delta, \\ \\ (v_1 - v_2)(0,x) = 0, \quad x \in \Omega \\ \\ B(v_1 - v_2)(t,y) = - h \ast B(V_1 - V_2)(t,y), \quad (t,y) \in \Sigma_\delta. \end{array} \right. $$ So, if we indicate by $C_1, C_2, ...$ some positive constants which are independent of $\delta$, $V_1$, $V_2$, by an application of Proposition \ref{pr1.4} and Lemma \ref{le1.6} we deduce $$ \begin{array}{c}
\|v_1 - v_2\|_{H^{1,2}(Q_\delta)} \leq C_1(\|h \ast A(V_1 - V_2)\|_{L^2(Q_\delta)} + \| h \ast B(V_1 - V_2)\|_{H^{1/4,1/2}(\Sigma_\delta)}) \\ \\
\leq C_2\|h\|_{L^1(0, \delta)} (\|V_1 - V_2\|_{L^2(0, \delta; H^2(\Omega))} + \|B(V_1 - V_2)\|_{H^{1/4,1/2}(\Sigma_\delta)}) \\ \\
\leq C_3 \delta^{1/2} \|V_1 - V_2\|_{H^{1,2}(Q_\delta)}. \end{array} $$ Choosing $\delta >0$ such that $C_3 \delta^{1/2} < 1$, then problem (\ref{eq3.34}) admits a unique solution $v$ in $H^{1,2}(Q_\delta)$. We observe that $\delta$ is independent of $f$, $v_0$, $g$. In case $\delta < T$, in order to extend $v$ we employ again Lemma \ref{le1.8}. Taking as new unknown $v_1(t,\cdot):= v(\delta + t, \cdot)$, then $v_1$ should satisfy $$ \left\{\begin{array}{l}
D_tv_1(t,x) = Av_1(t,x) + h \ast Av_1(t,x) + [h \ast {\widetilde A}v_{|Q_\delta}] (t+\delta,x) + f(t+\delta,x), \quad (t,x) \in Q_\tau, \\ \\ v_1(0,x) = v(\delta,x), \quad x \in \Omega\\ \\
Bv_1(t,y) = - h \ast Bv_1(t,y) - [h \ast {\widetilde B}v_{|Q_\delta}] (t+\delta,y) + g(t+\delta,y), \quad (t,y) \in \Sigma_\tau, \end{array} \right. $$
which has the same structure as (\ref{eq3.34}), when we replace $f(t,x)$ by $[h \ast {\widetilde A}v_{|Q_\delta}] (t+\delta,x) + f(t+\delta,x)$,
$v_0$ by $v(\delta,\cdot)$ and $g(t,y)$ by $- [h \ast {\widetilde B}v_{|Q_\delta}] (t+\delta,y) + g(t+\delta,y)$. Hence, we can extend $v$ to a solution on the domain $Q_{\min\{2\delta,T\}}$. If $2\delta < T$, we iterate the procedure and, in a finite number of steps, we get the proof. \end{proof}
In order to conclude, we need some auxiliary results.
\begin{lemma}\label{le2.2} $BV([0, T]) \hookrightarrow H^\beta(0, T)$, $\forall \, \beta \in [0, 1/2)$. \end{lemma}
\begin{proof} By Theorem 10 in \cite{Si1}, we have $W^{1,1}(0, T) \hookrightarrow H^\beta(0, T)$, $\forall \, \beta \in [0, 1/2)$. Moreover, it is well known that $W^{1,1}(0, T) \hookrightarrow BV([0, T])$ and that $V_0^T(f) = \int_0^T |f'(t)| dt$, $\forall \, f \in W^{1,1}(0, T)$. Now, let us take $f \in BV([0, T])$. Extending $f$ to $F:\mathbb{R} \to \mathbb{C}$ by $F(t) = f(0)$ if $t < 0$ and $F(t) = f(T)$ if $t > T$, we obtain
$F \in BV(\mathbb{R})$, with variation
$$V(F) = |f(0)| + V_0^T (f) + |f(T)|.$$
Now we fix $\omega \in \mathcal D (\mathbb{R})$ with $\int_\mathbb{R} \omega (t) dt = 1$, and set, for $k \in \mathbb{N}$, $t \in \mathbb{R}$, $\omega_k(t) := k \omega(kt)$. $\{\omega_k\}_{k \in \mathbb{N}}$ is a sequence of standard smooth mollifiers converging to $\delta$ in the sense of distributions. Taking $f_k = (F \ast \omega_k)_{|[0, T]}$, we have $f_k \in W^{1,1}(0, T)$, $\|f_k - f\|_{L^1((0, T))} \to 0$ ($k \to \infty$), so that (possibly passing to a subsequence) $$ f_k(t) \to f(t) \quad (k \to \infty) \quad \text{almost everywhere in } [0,T]. $$ If $0 = t_0 < t_1 < \dots < t_{N-1} < t_N = T$, we have $$ \begin{array}{c}
\sum_{j=1}^N |f_k(t_j) - f_k(t_{j-1})| \leq \int_\mathbb{R} \sum_{j=1}^N |F(t_j - s) - F(t_{j-1} - s)| |\omega_k(s)| ds \\ \\
\leq V(F) \|\omega_k\|_{L^1(\mathbb{R})} = V(F) \|\omega\|_{L^1(\mathbb{R})}, \end{array} $$ implying $$
\|f_k'\|_{L^1((0, T))} = V_0^T(f_k) \leq V(F) \|\omega\|_{L^1(\mathbb{R})} \quad \forall k \in \mathbb{N}.
So, if $\beta \in [0, 1/2)$, by the lemma of Fatou, we have $$ \begin{array}{c}
[f]_{H^\beta(0,T)} = \int_0^T \int_0^t \frac{|f(t) - f(s)|^2}{(t-s)^{1+2\beta}} ds dt
\leq \liminf_{k \to \infty} \int_0^T \int_0^t \frac{|f_k(t) - f_k(s)|^2}{(t-s)^{1+2\beta}} ds dt \\ \\
= \liminf_{k \to \infty} [f_k]_{H^\beta(0,T)} \leq C(\beta) \liminf_{k \to \infty} \|f_k\|_{W^{1,1}((0, T))} \\ \\
\leq C(\beta) (\|f\|_{L^1((0, T))} + V(F) \|\omega\|_{L^1(\mathbb{R})}). \end{array} $$ The conclusion follows. \end{proof}
\begin{lemma}\label{le2.3} Let $V$ be a Hilbert space and $0 < \beta < \alpha < 1$. Assume $\phi \in H^\beta(0, T)$ and $f \in C^\alpha([0, T]; V)$, or $\phi \in C^\alpha([0, T])$ and $f \in H^\beta(0, T; V)$. Then $\phi f \in H^\beta(0, T; V)$. \end{lemma}
\begin{proof} We prove only the first case, the other case can be treated analogously. Indeed, there holds $$ \begin{array}{c}
[\phi f]_{H^\beta(0,T;V)} = \int_0^T (\int_0^t \frac{\|\phi(t) f(t) - \phi(s) f(s)\|_V^2}{(t-s)^{1+2\beta}} ds) dt \\ \\
\leq 2\left [\int_0^T |\phi(t)|^2 (\int_0^t \frac{\| f(t) - f(s)\|_V^2}{(t-s)^{1+2\beta}} ds) dt
+ \int_0^T (\int_0^t \frac{| \phi (t) - \phi(s)|^2}{(t-s)^{1+2\beta}} ds) \|f(t)\|_V^2 dt \right] \\ \\
\leq 2\Big[\int_0^T |\phi(t)|^2 ( [f]_{C^\alpha([0, T]; V)}^2 \int_0^t (t-s)^{2(\alpha - \beta) - 1}Êds ) dt \\ \\
+ \|f\|_{L^\infty(0, T; V)}^2 \int_0^T (\int_0^t \frac{| \phi (t) - \phi(s)|^2}{(t-s)^{1+2\beta}} ds) dt \Big] \\ \\
\leq 2\left[ \frac{T^{2(\alpha - \beta)}}{2(\alpha - \beta)} [f]_{C^\alpha([0, T]; V)}^2 \int_0^T |\phi(t)|^2 dt
+ \|f\|_{L^\infty(0, T; V)}^2 \int_0^T (\int_0^t \frac{| \phi (t) - \phi(s)|^2}{(t-s)^{1+2\beta}} ds) dt \right]. \end{array} $$ \end{proof}
\begin{corollary}\label{co2.2} Assume (C1)-(C3) and $\tau \in (0, T]$. Then, for any $v \in H^{3/4,3/2}(Q_\tau)$, $\Psi(v) \in H^{1/4,1/2}(\Sigma_\tau)$. \end{corollary}
\begin{proof} We recall that $\Psi(v)(t,y) = D_t[(E_1 * r_v)(t)u_A(t,y) + E_0(t,y)]\,$ where $r_v := {\cal W} ({\cal M} (u_0 + 1 \ast v)) \in C([0, \tau]) \cap BV([0, \tau])$ and $E_0(t,y) = [ (E_1 \ast u_C)(t) + \epsilon \phi_0 E_1(t)] u_A(t,y) + u_B(t,y)$. On account of (H8) and (H9), there hold $D_t (E_1 \ast r_v), \, [(E_1 \ast u_C)(t) + \epsilon \phi_0 E_1(t)] \in C([0, \tau]) \cap BV([0, \tau])$ and, moreover, $$u_A \in H^{5/4}(0, T; L^2(\Gamma)) \hookrightarrow H^{1}(0, T; L^2(\Gamma)) \hookrightarrow C^{1/2}([0, T]; L^2(\Gamma)), \quad u_B\in H^{5/4}(0, T; L^2(\Gamma)).$$ So, an application of Lemmas \ref{le2.2} and \ref{le2.3} gives $\Psi(v) \in H^{1/4}(0, T; L^2(\Gamma))$. Moreover, using again (H8) and (H9), it is easy to realize that $\Psi(v) \in L^2(0, T;H^{1/2}(\Gamma))$. \end{proof}
Now we are able to prove the following final result, which completes the proof of Theorem \ref{thA2.1}.
\begin{theorem} Assume (K1)-(K6), and, moreover, $v^* \in L^2(Q_T)$, $v_0 \in H^1(\Omega)$, $z_1 \in H^{1/2}(\partial \Omega)$, $v^*_\Gamma \in H^{1/4,1/2}(\Sigma_T)$. Then the solution $(v,h) \in X_T \times L^2(0, T)$ of (\ref{eqA2.6}) belongs, in fact, to $H^{1,2}(Q_T) \times L^2(0, T)$. \end{theorem}
\begin{proof} Indeed, $v$ is the solution in $X_T$ of (\ref{eq3.34}), if we set $$ f(t,x):=v^*(t,x)
- [(\psi_1,v(t,\cdot)) + h \ast (\psi_1, v(t,\cdot))] z_0(x), $$ and $$
g(t,y):= - v_{|\Sigma_T}(t,y) + \Psi(v)(t,y) + [(\psi_1,v(t,\cdot)) + h \ast (\psi_1, v(t,\cdot))] z_1(y) + v^*_\Gamma (t,y). $$
Clearly, as $v^* \in L^2(Q_T)$, then $f \in L^2(Q_T)$. Moreover, as $v \in H^{3/4, 3/2}(Q_T)$ then $v_{|\Sigma_T} \in H^{1/2,1}(\Sigma_T)$. By Corollary \ref{co2.2}, we have $\Psi(v) \in H^{1/4,1/2}(\Sigma_T)$. Thanks to Lemma \ref{le1.6}, it holds $ (\psi_1,v(t,\cdot)) + h \ast (\psi_1, v(t,\cdot)) \in H^{1/4}(0, T)$, so that $ [(\psi_1,v(t,\cdot)) + h \ast (\psi_1, v(t,\cdot))] z_1(y) \in H^{1/4,1/2}(\Sigma_T)$. Hence, as $v^*_\Gamma \in H^{1/4,1/2}(\Sigma_T)$, we deduce $g \in H^{1/4,1/2}(\Sigma_T)$. And finally from Lemma \ref{le3.5} (II) it follows that $v \in H^{1,2}(Q_T)$. \end{proof}
\begin{remark}\label{ref} {\rm Examples of memory operators $\mathcal W$ satisfying (C1) and (C3) are Preisach operators and generalized play operators (with fixed initial value of the output), under suitable conditions.
Concerning Preisach operators, conditions implying that, $\forall \, \tau \in [0, T]$, $\mathcal W_\tau: C([0, \tau]) \to C([0, \tau])$ in a uniformly Lipschitz way are given in \cite{Vi1}, Chap. IV, Thms 3.1 and 3.5. Analogous results for generalized play operators can be found again in \cite{Vi1}, Chap. III, Thm. 2.2.
By inspection of the previous proofs, what we need is that, $\forall \,\tau \in [0, T]$, $\mathcal W_\tau: C([0, \tau]) \to C([0, \tau])$ in a uniformly Lipschitz way and \begin{equation}\label{eq7.3} \mathcal W_\tau(H^{3/2}(0, \tau)) \subset H^{1/4}(0, \tau) \cap C([0, \tau]), \end{equation} which is guaranteed by (C1)-(C3). In suitable circumstances, the mentioned operators map $W^{1,1}(0, \tau)$ into itself boundedly (see \cite{Vi1}, Chap. IV, Thm. 3.10 and Chap. III, Thm. 2.3). Now, we can easily see that, if $\mathcal W_\tau$ maps $C([0, \tau])$ into itself continuously and $W^{1,1}(0, \tau)$ into itself boundedly, then $\mathcal W_\tau$ maps $C([0, \tau]) \cap BV([0, \tau])$ into itself. In fact, if $r \in C([0, \tau]) \cap BV([0, \tau])$, then we can construct a bounded sequence $(r_k)_{k \in \mathbb{N}}$ in $W^{1,1}(0, \tau)$, converging to $r$ uniformly in $[0, \tau]$. From the lower semicontinuity of $V_0^\tau(\cdot)$ in $C([0, \tau])$, we deduce $$ V_0^\tau(\mathcal W_\tau (r)) \leq \liminf_{k \to \infty} V_0^\tau(\mathcal W_\tau (r_k)) < \infty. $$ Moreover, we can also observe that condition $\mathcal W_\tau (W^{1,1}(0, \tau)) \subset W^{1,1}(0, \tau)$ implies (\ref{eq7.3}).
Finally, as $H^{3/2}(0, \tau) \subset C^\alpha([0, \tau])$ $\forall \,\alpha \in [0, 1)$, and $C^{1/4}([0, \tau]) \subset H^{1/4}(0, \tau) \cap C([0, \tau])$, then we can obtain (\ref{eq7.3}) requiring that $\mathcal W_\tau (C^\alpha([0, \tau])) \subset C^{1/4}([0, \tau])$, for some $\alpha \in [0, 1)$. Concerning the Preisach operator, this can be obtained imposing the assumptions indicated in \cite{Vi1}, Chap. IV, Thm. 3.9. }
\end{remark}
\end{document}
Congruum
In number theory, a congruum (plural congrua) is the difference between successive square numbers in an arithmetic progression of three squares. That is, if $x^{2}$, $y^{2}$, and $z^{2}$ (for integers $x$, $y$, and $z$) are three square numbers that are equally spaced apart from each other, then the spacing between them, $z^{2}-y^{2}=y^{2}-x^{2}$, is called a congruum.
The congruum problem is the problem of finding squares in arithmetic progression and their associated congrua.[1] It can be formalized as a Diophantine equation: find integers $x$, $y$, and $z$ such that
$y^{2}-x^{2}=z^{2}-y^{2}.$
When this equation is satisfied, both sides of the equation equal the congruum.
Fibonacci solved the congruum problem by finding a parameterized formula for generating all congrua, together with their associated arithmetic progressions. According to this formula, each congruum is four times the area of a Pythagorean triangle. Congrua are also closely connected with congruent numbers: every congruum is a congruent number, and every congruent number is a congruum multiplied by the square of a rational number.
Examples
As an example, the number 96 is a congruum because it is the difference between adjacent squares in the sequence 4, 100, and 196 (the squares of 2, 10, and 14 respectively).
The first few congrua are:
24, 96, 120, 216, 240, 336, 384, 480, 600, 720 … (sequence A256418 in the OEIS).
History
The congruum problem was originally posed in 1225, as part of a mathematical tournament held by Frederick II, Holy Roman Emperor, and answered correctly at that time by Fibonacci, who recorded his work on this problem in his Book of Squares.[2]
Fibonacci was already aware that it is impossible for a congruum to itself be a square, but did not give a satisfactory proof of this fact.[3] Geometrically, this means that it is not possible for the pair of legs of a Pythagorean triangle to be the leg and hypotenuse of another Pythagorean triangle. A proof was eventually given by Pierre de Fermat, and the result is now known as Fermat's right triangle theorem. Fermat also conjectured, and Leonhard Euler proved, that there is no sequence of four squares in arithmetic progression.[4][5]
Parameterized solution
The congruum problem may be solved by choosing two distinct positive integers $m$ and $n$ (with $m>n$); then the number $4mn(m^{2}-n^{2})$ is a congruum. The middle square of the associated arithmetic progression of squares is $(m^{2}+n^{2})^{2}$, and the other two squares may be found by adding or subtracting the congruum. Additionally, multiplying a congruum by a square number produces another congruum, whose progression of squares is multiplied by the same factor. All solutions arise in one of these two ways.[1] For instance, the congruum 96 can be constructed by these formulas with $m=3$ and $n=1$, while the congruum 216 is obtained by multiplying the smaller congruum 24 by the square number 9.
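The parameterization can be checked numerically. The following minimal Python sketch (variable names are illustrative) combines the formula $4mn(m^{2}-n^{2})$ with multiplication by square numbers to list every congruum up to a bound, reproducing the sequence given above:

```python
def congrua(limit):
    """List all congrua up to `limit`: numbers k^2 * 4*m*n*(m*m - n*n) with m > n >= 1."""
    found = set()
    m = 2
    while 4 * m * (m * m - 1) <= limit:      # the smallest congruum for a given m uses n = 1
        for n in range(1, m):
            c = 4 * m * n * (m * m - n * n)
            k = 1
            while k * k * c <= limit:        # multiplying by a square gives another congruum
                found.add(k * k * c)
                k += 1
        m += 1
    return sorted(found)

print(congrua(750))  # [24, 96, 120, 216, 240, 336, 384, 480, 600, 720]
```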
An equivalent formulation of this solution, given by Bernard Frénicle de Bessy, is that for the three squares in arithmetic progression $x^{2}$, $y^{2}$, and $z^{2}$, the middle number $y$ is the hypotenuse of a Pythagorean triangle and the other two numbers $x$ and $z$ are the difference and sum respectively of the triangle's two legs.[6] The congruum itself is four times the area of the same Pythagorean triangle. The example of an arithmetic progression with the congruum 96 can be obtained in this way from a right triangle with side and hypotenuse lengths 6, 8, and 10.
Relation to congruent numbers
A congruent number is defined as the area of a right triangle with rational sides. Because every congruum can be obtained (using the parameterized solution) as the area of a Pythagorean triangle, it follows that every congruum is congruent. Conversely, every congruent number is a congruum multiplied by the square of a rational number.[7] However, testing whether a number is a congruum is much easier than testing whether a number is congruent. For the congruum problem, the parameterized solution reduces this testing problem to checking a finite set of parameter values. In contrast, for the congruent number problem, a finite testing procedure is known only conjecturally, via Tunnell's theorem, under the assumption that the Birch and Swinnerton-Dyer conjecture is true.[8]
See also
• Automedian triangle, a triangle for which the squares on the three sides form an arithmetic progression
• Spiral of Theodorus, formed by right triangles whose (non-integer) sides, when squared, form an infinite arithmetic progression
References
1. Darling, David (2004), The Universal Book of Mathematics: From Abracadabra to Zeno's Paradoxes, John Wiley & Sons, p. 77, ISBN 978-0-471-66700-1.
2. Bradley, Michael John (2006), The Birth of Mathematics: Ancient Times to 1300, Infobase Publishing, p. 124, ISBN 978-0-8160-5423-7.
3. Ore, Øystein (2012), Number Theory and Its History, Courier Dover Corporation, pp. 202–203, ISBN 978-0-486-13643-1.
4. Erickson, Martin J. (2011), Beautiful Mathematics, MAA Spectrum, Mathematical Association of America, pp. 94–95, ISBN 978-0-88385-576-8.
5. Euler's proof is not clearly written. An elementary proof is given in Brown, Kevin, "No Four Squares In Arithmetic Progression", MathPages, retrieved 2014-12-06.
6. Beiler, Albert H. (1964), Recreations in the Theory of Numbers: The Queen of Mathematics Entertains, Courier Corporation, p. 153, ISBN 978-0-486-21096-4.
7. Conrad, Keith (Fall 2008), "The congruent number problem" (PDF), Harvard College Mathematical Review, 2 (2): 58–73, archived from the original (PDF) on 2013-01-20.
8. Koblitz, Neal (1984), Introduction to Elliptic Curves and Modular Forms, Graduate Texts in Mathematics, no. 97, Springer-Verlag, ISBN 0-387-97966-2
External links
• Weisstein, Eric W., "Congruum Problem", MathWorld
Analysis of Heat Transfer in the Case of Non-linear Hyperbolic Conduction Equation Coupled with Radiation or with Thermoelectricity
Myriam Lazard* | Guillaume Lambou Ymeli | Hervé Thierry Tagne Kamdem | René Tchinda
Institut P', Université de Poitiers, Centre National de la Recherche Scientifique, Ecole Nationale Supérieure de Mécanique et Aérotechnique, 2 Rue Pierre Brousse, Bâtiment B25, TSA 41105,86073, Poitiers Cedex 9, France
Department of Physics/Faculty of Science, University of Dschang, Cameroon
Institut de Technologie Fotso-Victor, Bandjoun, University of Dschang, Cameroon
[email protected]
Nonlinear hyperbolic heat conduction problems are analyzed thanks to the Cattaneo-Vernotte model, which takes what happens at very short times into account. First, the case of the coupled conductive-radiative heat transfer in planar and spherical media is considered. The accuracy of the Lattice Boltzmann heat conduction model coupled with an analytical layered spherical harmonics solution of the radiative transport equation is investigated. The effects of different parameters such as scattering albedo on both temperature and conductive heat flux distributions within the medium are studied for steady and transient states. The present predictions agree well with literature benchmarks. It is also shown that the parameters have a significant effect on both temperature profile and hyperbolic sharp wave front. Second, the non-Fourier heat conduction in a thermoelectric thin layer is investigated under several boundary conditions by performing a specific quadrupole method. The expressions of the temperature and the heat flux of the small-scale thermoelectric materials are obtained and the whole matrix formulation is given explicitly. Good agreement is observed between quadrupole temperature predictions and analytical results for the Fourier heat conduction problems.
analytical layered radiative solution, non-Fourier conduction, lattice Boltzmann method, quadrupole method, planar/spherical media, thermoelectricity
Simultaneous transient conduction and/or radiation in participating media appears in many engineering systems and emerging technologies such as nanostructures, biological tissues, insulated foams, polymers, furnaces, heat pipes, combustion chambers and rocket propulsion, and renewable energy applications using thermoelectric materials [1-3]. The amount and rate of heat, coupled to the system properties and the surroundings, govern the thermal response at both the microscopic and macroscopic scales. For transient heat flow during an extremely short period, the conduction process is well described by the hyperbolic heat conduction theory based on non-Fourier constitutive heat flux equations such as the Cattaneo-Vernotte heat flux equation, which implies a finite speed for the propagation of thermal perturbations [4-5]. In engineering practice, it is convenient to simulate the non-linear heat conduction behavior with commercial software such as ANSYS, FLUENT and TRNSYS [6-8]. However, it is interesting and suitable to develop semi-analytical models, for instance to make a thermodynamic analysis of a thermoelectric device [9], to estimate the maximum rate of heat pumping, to study the influence of one parameter, or to use them for optimization or as an inverse model. Moreover, several models of the transient state are based on the convenient electrical analogy, the Laplace transform and separation of variables [10], or on finite difference/volume and lattice Boltzmann (LBM) methods [11], to study thermoelectric self-cooling of devices. They are also convenient to solve the partial differential equation for a small time lag or for micro thermoelectric coolers [12].
At macroscopic scales, the time and spectral changes of radiative information are often considered as non-significant due to the very large propagation speed of radiation and the gray medium assumption [13]. Hence, the radiation assessment involves an integro-differential radiative transfer equation (RTE) including three spatial and two angular variables. Furthermore, for the present curvilinear shapes, the differential form of the RTE, coupled to the dependence on spatial/angular variables and to the optically inhomogeneous complex media, makes the radiative transfer equation difficult to solve analytically except for some limiting cases [1, 2, 13]. Generally, the solutions of the RTE found in the literature are determined by numerical means and therefore contain errors, such as angular and spatial errors due to the solution procedures. To mitigate the effects of the error due to angular discretization, different approaches have been described in the literature, including phase function renormalization, high order angular discretization and angular grid refinement [2, 11, 14-15]. However, the error due to spatial discretization depends on the approach considered around each grid node and on the selected method. In order to improve the accuracy of the solution, a semi-analytical approach based on discrete ordinates has been developed recently by Ladeia et al. [16] for cylindrical media, but limited to homogeneous and steady state transfer.
The present research is concerned: (1) firstly with the development of an analytical solution for radiation analysis in optically complex media of inhomogeneous graded index. The spherical harmonics method is then used for the development of an analytical radiative transfer solution. The non-linear hyperbolic heat conduction with temperature dependent thermal conductivity has been investigated through the lattice Boltzmann and finite volume methods; (2) and secondly with the complete one-dimensional transient hyperbolic heat transfer equation solved semi-analytically in order to obtain explicit expressions of the temperature and also of the heat flux in a thermoelectric layer. A quadrupole formulation of the hyperbolic partial differential equation taking the Thomson effect into account is then proposed for thin thermoelectric layer or medium considered at very short times.
2. Problem Statements
2.1 Coupled conductive-radiative heat transfer in semi-transparent media
The local energy balance describes the transient conduction heat transfer in a medium:
$ρC_p ∂T/∂t=-∇(q_C+q_R )$ (1)
where ρ is the density, $C_p$ is the specific heat at constant pressure, $q_C$ is the conductive heat flux, T the temperature field at time t>0 and $q_R$ represents the radiative flux. The constitutive equation for the conductive heat flux is stated by Cattaneo and Vernotte through the expression [4-5]
$q_C (r,t+τ_cv )≈q_C (r,t)+τ_cv (T) (∂q_C)/∂t=-k_C (T)∇T$ (2)
The thermal conductivity of the material is considered in the form $k_C (T)=k_0 u(T)$ where $k_0$ represents the reference thermal conductivity. The temperature dependent time lag $τ_cv (T)$ is related to the constant speed $C_v$ by $τ_cv (T)=α(T)/C_v^2$. The boundary conditions of the thermal problem are known temperatures $T_0$ for the left boundary and $T_L$ for the right boundary, while $T_∞$ is the initial temperature distribution. For the combined mode of radiation and conduction, the radiative source is $∇q_R=κ_e (1-ω)[4n^2 σ_B T^4-2π∫_{-1}^{1 }I(r,μ)dμ]$ [2], where the parameter $κ_e$ is the extinction coefficient, $σ_B$ is the Stefan Boltzmann constant and ω is the single scattering albedo. The function $I(r,μ)$ is the radiative intensity known from the RTE, whose boundary conditions are [17, 18]
$I_1 (τ_0,μ)=b ̅(ϵ_0 I_b (T_0 )+2(1-ϵ_0 ) ∫_0^1I_1 (τ_0,-μ' ) μ' dμ' )+bI_1 (τ_0,-μ)$ (3a)
$I_(N_L ) (τ_L,μ)=ϵ_L I_b (T_L )+2(1-ϵ_L ) ∫_0^1I_{N_L} (τ_L,μ' ) μ' dμ'$ (3b)
where ϵ is the emissivity and b takes the value 1 for a slab or hollow sphere and 0 for a solid sphere ($b ̅=1-b$).
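For later reference, eliminating $q_C$ between Eq. (1) and Eq. (2) in the simplified case of constant properties and no radiative source gives the classical damped wave (telegraph) form $τ_cv (∂^2 T)/(∂t^2 )+∂T/∂t=α∇^2 T$ with $α=k_C/(ρC_p )$, which makes explicit the finite propagation speed $C_v=(α/τ_cv )^{1/2}$ of thermal disturbances; this simplified form is recalled here only to fix ideas before the coupled conductive-radiative problem is treated.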
2.2 Non-Fourier conduction in thermoelements
Consider a single thermoelectric leg of given length and cross-sectional area A. An electrical current I = JA enters uniformly into the element. As the thermoelectric material considered here is a thin thermoelectric layer, it is necessary to consider the hyperbolic law instead of the parabolic law of heat conduction [19]. The governing equation in a thin thermoelectric leg is the hyperbolic partial differential equation:
$ρC_p (∂T/∂t+τ_cv (∂^2 T)/(∂t^2 ))=∂/∂z (k_C ∂T/∂z)+J^2/σ-τJ ∂T/∂z$ (4)
The relevant properties are the electrical conductivity σ, the Thomson coefficient τ and the relaxation time $τ_cv$. The steps to obtain the hyperbolic equation for a thermoelectric material are clearly detailed for instance in [20].
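As a guide to the quadrupole treatment, note that if the Joule source $J^2/σ$ and the Thomson term $τJ ∂T/∂z$ are momentarily discarded and the properties are taken as constant, the Laplace transform (variable p, zero initial conditions) of Eq. (4) reduces to $d^2 θ/dz^2=γ^2 θ$ with $γ^2=(p+τ_cv p^2)/α$, whose cosh/sinh fundamental solutions are what the quadrupole (transfer matrix) formulation propagates across the layer thickness; this reduced form is given here only as an illustration of the structure underlying the quadrupole method.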
3. Solution Methodologies
The first solution approach consists in solving the radiative transfer using the spherical harmonics method in order to build the solution of Eq. (1) with the Lattice Boltzmann method. Secondly, a specific quadrupole method is developed to solve the energy equation for a thin layer, applied to thermoelectricity.
3.1 A layered radiative solution
The inhomogeneity and angular redistribution of the radiative intensity increase the difficulty of building an analytical solution of the radiation equation. In order to construct a semi-analytical solution, the medium is assumed to be a set of sub-layers with constant factor 1/τ. In such a case, the radiative intensity in a single layer can be written as
$μ (∂I_k)/∂τ+(1-μ^2 ) a/τ (∂I_k)/∂μ+I_k-[1-ω] I_b (T)=ω/2 ∫_{-1}^1 Φ(μ'→μ) I_k (τ,μ' )dμ' $ (5)
where $ I_b=n^2 σ_B T^4/π$ is the blackbody intensity; $τ=κ_e r$ being the optical depth; $μ=cos(θ)$ is the polar direction cosine; and $Φ$ is the scattering phase function. For the constant a=0 and 1, the equation pertains to that for the slab and the spherical enclosure, respectively.
The radiative problem in each layer is solved using the spherical harmonics or PN method. Therefore, the radiative intensity is expanded as [2, 18]
$I_k (τ,μ)≈∑_{l=0}^NI_(l,k) (τ)P_l (μ)$ (6)
where $I_{l,k}$ are the radiative intensity moments to be determined and $P_l$ are the Legendre polynomials. Hence, applying the recurrence relation for $μP_l (μ)$ combined with the orthogonality condition of the Legendre polynomials, the RTE is reduced to the matrix-vector form
$A (dF^k)/dτ+(C+a/τ_k B) F^k=D^k;F^k=[I_{0,k},I_{1,k} ,⋯,I_{N,k}]^T$ (7)
and the components of the matrices A, B and C are given, respectively, by
$A_{ll'}=1/(2l'+1) [l' δ_{l+1,l'}+(l'+1) δ_{l-1,l'}] $ (8a)
$B_{ll'}=(l' (l'+1))/(2l'+1) [δ_{l+1,l' }-(l'+1) δ_{l-1,l'}]$ (8b)
$C_{ll'}=δ_{ll'} (1-ω a_l/(2l+1))$ (8c)
where δ is the Kronecker symbol; the components $D_l^k=δ_{0l}[1-ω(τ)] I_b (T)$ and $a_l$ is the scattering phase function coefficient. The Marshak boundary conditions are used in this study and written in matrix-vector form $A_0 F^1 (τ_0 )=D_0$ for inner and $A_L F^{N_L} (τ_L )=D_L$ for outer boundary [11, 18]
For transient heat transfer, the analytical solution of Eq. (7) for a single homogeneous and isothermal layer is similar to that developed by Ymeli and Kamdem [11] and Kamdem et al. [21] for planar media using double spherical harmonics method and discrete ordinates methods, respectively. The solution in each sub-layer is constructed by setting $F^k (τ)=R^k W^k (τ)$, where $R^k$ is the matrix of real eigenvector and $W^k (τ)$ is the vector of the characteristics intensity to be determined. Therefore, Eq. (7) is reduced to
$(dW^k)/dτ+Λ^k W^k=S^k=(AR^k )^{-1} D^k $ (9)
with $Λ^k$ the matrix of non-zero real eigenvalues λ of the system. Each decoupled component of the characteristic radiative intensity $W^k$ can be solved analytically and independently as [21]
$W^k (τ)=exp[λ^k (τ_B^k-τ)] C^k+\widetilde{\exp }[λ^k (τ_B^k-τ)] S^k$ (10)
with $τ_B^k=τ_l^k+((1-k_a )(τ_r^k-τ_l^k))⁄2$, where $τ_l^k$ and $τ_r^k$ are the optical depth of the left and right interfaces of the $k^{th}$ layer. The constant $k_a$ is $|λ^k |⁄λ^k$ and the diagonal matrices exp and $\widetilde{\exp }$ are defined as
$exp[λ^k (τ_B^k-τ)]=diag(e^{λ_0^k (τ_B^k-τ)},…,e^{λ_N^k (τ_B^k-τ)})$ (11)
$\widetilde{\exp }[λ^k (τ_B^k-τ)]=(Λ^k )^{-1} (I-exp[λ^k (τ_B^k-τ)])$ (12)
where $I$ is the identity matrix of the same size as $\exp$. The determination of the constants $C^k$ from the boundary and continuity conditions is reduced to the linear system $MX=N$ [11] whose components are given by
$M_{(i,j)}=\begin{cases} A_0 R^1\exp[x_1(\tau)] & i=j=1 \\ R^j\exp[x_j(\tau)]\,\delta_{(i-1,j)}-R^{(j+1)}\exp[x_{(j+1)}(\tau)]\,\delta_{(i,j)} & \text{otherwise} \\ A_L R^{(N_L)}\exp[x_{(N_L)}(\tau)] & i=j=N_L \end{cases}$ (13a)
$N_i=\begin{cases} D_0-A_0 R^1\,\widetilde{\exp}[x_1(\tau)]\, S^1 & j=1 \\ R^{(j+1)}\,\widetilde{\exp}[x_{(j+1)}(\tau)]\, S^{(j+1)}-R^j\,\widetilde{\exp}[x_j(\tau)]\, S^j & \text{otherwise} \\ D_L-A_L R^{(N_L)}\,\widetilde{\exp}[x_{(N_L)}(\tau)]\, S^{(N_L)} & j=N_L \end{cases}$ (13b)
where $X=[C^1,C^2,\ldots,C^{N_L}]^T$ is the vector containing the integration constants of all the layers, $N_L$ is the total number of sub-layers and $x_i(\tau)=\lambda^i(\tau_B^i-\tau_i)$. The linear system is solved with LU factorization and the final solution in each layer is given by
$F^k(\tau)=R^k\exp[\lambda^k(\tau_B^k-\tau)]\, C^k+R^k\,\widetilde{\exp}[\lambda^k(\tau_B^k-\tau)]\, S^k$ (14)
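For concreteness, the following NumPy sketch evaluates Eqs. (9)-(14) inside a single sub-layer. It is only an illustration under simplifying assumptions: the integration constants $C^k$ are taken as already known from the linear system $MX=N$, and the eigenvalues are assumed real, as stated above.

```python
import numpy as np

def layer_moments(A, B, C, D, a, tau_k, tau_l, tau_r, Ck, tau):
    """Radiative intensity moments F^k(tau) of Eq. (14) in one sub-layer.

    A, B, C : PN matrices of Eqs. (8a)-(8c);  D : source vector of the layer
    a       : geometry constant (0 slab, 1 sphere); tau_k : representative depth of the layer
    tau_l, tau_r : optical depths of the left/right interfaces
    Ck      : integration constants of the layer (from the system MX = N)
    tau     : optical depth where the moments are evaluated
    """
    M = C + (a / tau_k) * B if a != 0 else C
    lam, R = np.linalg.eig(np.linalg.solve(A, M))   # eigen-system of Eq. (9)
    lam, R = lam.real, R.real                       # eigenvalues are real for this system
    S = np.linalg.solve(A @ R, D)                   # S^k = (A R^k)^{-1} D^k
    k_a = np.sign(lam)                              # k_a = |lambda^k| / lambda^k
    tau_B = tau_l + (1.0 - k_a) * (tau_r - tau_l) / 2.0
    E = np.exp(lam * (tau_B - tau))                 # diagonal of exp[...], Eq. (11)
    E_tilde = (1.0 - E) / lam                       # diagonal of exp~[...], Eq. (12)
    W = E * Ck + E_tilde * S                        # Eq. (10)
    return R @ W                                    # Eq. (14)
```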
The developed methodology has the advantage that the volumetric radiation can be computed directly at any selected location with reasonable computational time, which allows the coupling of conduction and radiation heat transfer. The radiative flux $q_R$ is obtained from $q_R^k(\tau)=4\pi I_{1,k}(\tau)/3$.
In this study, the temperature is initially guessed using the distribution at the previous time step together with the assumption of a multilayer structure of the medium in which the temperature is piecewise constant. This assumption was also considered by Tan et al. [22] for radiation and Fourier conduction in a graded index planar medium.
In addition, each layer is split into nodes (including boundaries) to represent the solution given in Eq. (14). All these nodes are used to compute the solution of the conduction problem in the medium treated as a single layer.
3.2 Lattice Boltzmann method formulation (LBM)
The LBM is a computational method based on a mesoscopic description of heat flow, which captures sharp discontinuities of physical problems better than finite difference/volume methods [11, 23]. In order to simplify the energy equation with known radiative energy, we consider the following dimensionless parameters
$\eta=\frac{r}{l_c};\quad \xi=\frac{C_v t}{l_c};\quad I^*=\frac{I_0(\eta)}{\sigma_B T_h^4/\pi};\quad N_c=\frac{\kappa_e k_0}{4\sigma_B T_h^3}$ (15)
where $l_c=k_0/(\rho C_p C_v)$ is the characteristic length of the problem, $T_h$ is the reference temperature and $N_c$ is the conduction-radiation parameter, also known as the Planck number; $\xi$, $\eta$, $\Theta=T/T_h$ and $q_C^*=q_C/(\rho C_p T_h C_v)$ are the dimensionless time, distance, temperature and conductive heat flux, respectively. Using this set of variables, Eqs. (1) and (2) are rewritten as
$\frac{\partial\Theta}{\partial\xi}+\frac{\partial q_C^*}{\partial\eta}=-a\frac{q_C^*}{\eta}-\frac{\kappa_e l_c^2}{N_c}\nabla^* q_R$ (16a)
$\frac{\partial q_C^*}{\partial\xi}+\frac{\partial\Theta}{\partial\eta}=-\frac{q_C^*}{u(\Theta)}$ (16b)
with $\nabla^* q_R=\kappa_e[1-\omega]\{n^2\Theta^4-I^*\}$, and $u(\Theta)$ is a given function describing the temperature-dependent thermal conductivity. The modified LBM equation accounting for radiation and non-Fourier effects is
$\frac{\partial f_i}{\partial\xi}+\vec{e}_i\cdot\nabla f_i=\frac{f_i^{eq}-f_i}{\tau_M}-\frac{\kappa_e l_c^2}{2N_c}\nabla^* q_R-\frac{1}{2}\left(\frac{a}{\eta}\hat{\eta}+\frac{\vec{e}_i}{u(\Theta)}\right) q_C^*$ (17)
where $\tau_M$ is the relaxation time, $f_i^{eq}$ is the equilibrium distribution function, $f_i$ is the particle distribution and $\vec{e}_i$ is the moving velocity of the particles. For the D1Q2 model, particles move with two opposite unit velocities, $\vec{e}_1$ and $\vec{e}_2$. The relaxation time is $\tau_M=u(\Theta)+\Delta\xi/2$ [11, 23, 24]. After discretization, Eq. (17) is written as
$f_i(\vec{\eta}+\vec{e}_i\Delta\xi,\xi+\Delta\xi)=f_i(\vec{\eta},\xi)+\frac{\Delta\xi}{\tau_M}\left[f_i^{eq}-f_i\right]-\frac{\kappa_e l_c^2}{2N_c}\Delta\xi\,\nabla^* q_R-\frac{\Delta\xi}{2}\left(\frac{a}{\eta}\hat{\eta}+\frac{\vec{e}_i}{u(\Theta)}\right) q_C^*$ (18)
The temperature, conductive heat flux and equilibrium distributions are obtained from [11, 23, 24].
$\Theta(\vec{\eta},\xi)=f_1(\vec{\eta},\xi)+f_2(\vec{\eta},\xi)$ (19a)
$q_C^*(\vec{\eta},\xi)=\vec{e}_1\, f_1(\vec{\eta},\xi)+\vec{e}_2\, f_2(\vec{\eta},\xi)$ (19b)
$f_i^{eq}(\vec{\eta},\xi)=\left\{\Theta(\vec{\eta},\xi)+\vec{e}_i\cdot q_C^*(\vec{\eta},\xi)\right\}/2$ (19c)
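As an illustration only, a single D1Q2 time step following Eqs. (18)-(19) could be coded as below; the grid, the boundary treatment, the handling of the $a/\eta$ term at the origin and the splitting of the sources between the two directions are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def lbm_d1q2_step(f1, f2, u_theta, div_qR, a, eta, kappa_e, l_c, N_c, dxi):
    """One D1Q2 lattice Boltzmann step for the non-Fourier energy equation, Eqs. (18)-(19).

    f1, f2  : particle distributions moving in the +eta and -eta directions
    u_theta : temperature-dependent conductivity function u(Theta) on the grid
    div_qR  : dimensionless radiative source kappa_e*(1-omega)*(n^2*Theta^4 - I*)
    a, eta  : geometry constant (0 slab, 1 sphere) and dimensionless radial grid
    """
    theta = f1 + f2                                           # Eq. (19a)
    q_c = f1 - f2                                             # Eq. (19b) with e1 = +1, e2 = -1
    feq1, feq2 = 0.5 * (theta + q_c), 0.5 * (theta - q_c)     # Eq. (19c)
    tau_M = u_theta + dxi / 2.0                               # relaxation time
    rad = (kappa_e * l_c**2 / (2.0 * N_c)) * dxi * div_qR
    geo = 0.5 * dxi * a * q_c / np.where(eta > 0, eta, np.inf)  # a/eta term (placeholder at eta = 0)
    f1_post = f1 + dxi / tau_M * (feq1 - f1) - rad - geo - 0.5 * dxi * q_c / u_theta
    f2_post = f2 + dxi / tau_M * (feq2 - f2) - rad - geo + 0.5 * dxi * q_c / u_theta
    f1_new, f2_new = np.empty_like(f1), np.empty_like(f2)
    f1_new[1:], f2_new[:-1] = f1_post[:-1], f2_post[1:]       # streaming along +eta and -eta
    f1_new[0], f2_new[-1] = f1_post[0], f2_post[-1]           # crude boundary copy (placeholder)
    return f1_new, f2_new
```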
3.3 Quadrupole method
To solve the partial differential heat conduction equation, many different methods could be used, but it is also very convenient to apply the Laplace transform, which turns the partial differential equation into an ordinary differential equation in the Laplace domain. Let us introduce the Laplace variable $p$ and write $L(f)=\bar{f}(p)=\int_0^{+\infty}f(z,t)\exp(-pt)\,dt$. The transformed energy equation of the thermoelectric layer then reads
$\frac{d^2\bar{\Theta}}{dz^2}-\frac{\tau J}{k_C}\frac{d\bar{\Theta}}{dz}-\frac{\rho C_p}{k_C}\, p(1+p\tau_{cv})\,\bar{\Theta}(z,p)+\frac{J^2}{p\sigma k_C}=0$ (20)
The roots of the associated characteristic equation are
$\gamma_{1,2}=\frac{\tau J}{2 k_C}\pm\sqrt{\left(\frac{\tau J}{2 k_C}\right)^{2}+\frac{\rho C_p}{k_C}\, p(1+p\tau_{cv})}$ (21)
As a consequence, the solution of Eq. (20) is
$\bar{\Theta}(z,p)=\xi_1\exp(\gamma_1 z)+\xi_2\exp(\gamma_2 z)+\bar{\vartheta}(z)$ (22)
where $\bar{\vartheta}(z)$ is a particular solution of Eq. (20) and $\xi_1$, $\xi_2$ are two constants depending on the boundary conditions. Two cases corresponding to different initial conditions are investigated to determine $\bar{\vartheta}(z)$:
IC0: the initial temperature within the leg is constant, $T(z,t=0)=T_\infty$; then it is obvious that
$\bar{\vartheta}(z)=\frac{J^2}{\rho C_p\, p^2\sigma(1+p\tau_{cv})}+\frac{T_\infty}{p}$ (23)
IC1: the initial temperature is linear
$\bar{\vartheta}(z)=\frac{1}{p}\left(\frac{T_L-T_0}{L}\right)z+\frac{\frac{J^2}{\sigma}+\frac{\tau J}{L}(T_L-T_0)}{\rho C_p\, p^2(1+p\tau_{cv})}+\frac{T_0}{p}$ (24)
Let us, for instance, determine $\xi_1$ and $\xi_2$ in the most common case, that is to say, the side of the thermoelectric thin layer at $z=0$ is at temperature $T_0$ and the side at $z=L$ is at temperature $T_L$. The corresponding boundary conditions are, in the Laplace domain:
$\bar{\Theta}(z=0,p)=0; \quad \bar{\Theta}(z=L,p)=(T_L-T_0)/p$ (25)
Then $ξ_1$ and $ξ_2$ are determined as
$\xi_1=\frac{\frac{J^2}{\rho C_p\, p^2\sigma(1+p\tau_{cv})}\left(1-\exp(\gamma_2 L)\right)-\frac{T_L-T_0}{p}}{\exp(\gamma_2 L)-\exp(\gamma_1 L)}$ (26a)
$\xi_2=\frac{\frac{J^2}{\rho C_p\, p^2\sigma(1+p\tau_{cv})}\left(\exp(\gamma_1 L)-1\right)+\frac{T_L-T_0}{p}}{\exp(\gamma_2 L)-\exp(\gamma_1 L)}$ (26b)
Now, let us express the heat flux, which is a linear combination of the temperature and of its derivative (where $\alpha_s$ is the Seebeck coefficient)
$\varphi=\alpha_s I\Theta-k_C A\frac{\partial\Theta}{\partial z}$ (27)
Considering Eqs (22) and (27), the Laplace transform of the heat flux is:
$\bar{\varphi}(z)=\left(\alpha_s I-k_C A\gamma_1\right)\xi_1\exp(\gamma_1 z)+\left(\alpha_s I-k_C A\gamma_2\right)\xi_2\exp(\gamma_2 z)+\bar{\varphi_2}(z)$ (28)
$\bar{\varphi_2}(z)=\alpha_s I\,\bar{\vartheta}(z)-k_C A\frac{d\bar{\vartheta}(z)}{dz}$ (29)
From a mathematical point of view, the quadrupole method belongs to the class of analytical, unified, exact and explicit methods for solving linear partial differential equations in simple geometries. H.S. Carslaw [25] first presented this approach for the conduction of heat in solids. The Laplace heat flux and temperature are the analogues of the electrical current and electrical potential. To summarize, this method provides a transfer matrix for the medium that linearly links the input temperature-heat flux column vector at one side and the output vector at the other side. Let us now determine the quadrupole corresponding to the case studied. Thanks to Eqs. (22) and (28),
$\left[\begin{matrix}\bar{\Theta}(z_i)\\ \bar{\varphi}(z_i)\end{matrix}\right]=M_{p,i}\left(\begin{matrix}\xi_1\\ \xi_2\end{matrix}\right)+U_{p,i}$ (30)
$M_{p,i}=\left[\begin{matrix}\exp(\gamma_1 z_i) & \exp(\gamma_2 z_i)\\ (\alpha_s I-k_C A\gamma_1)\exp(\gamma_1 z_i) & (\alpha_s I-k_C A\gamma_2)\exp(\gamma_2 z_i)\end{matrix}\right]$ (31a)
$U_{p,i}=\left[\begin{matrix}\bar{\vartheta}(z_i)\\ \alpha_s I\,\bar{\vartheta}(z_i)-k_C A\, d\bar{\vartheta}(z_i)/dz\end{matrix}\right]$ (31b)
Considering Eq. (30) at both sides of the thermoelectric film, the quadrupole formulation of the problem is directly given by
$\left[\begin{matrix}\bar{\Theta}(0)\\ \bar{\varphi}(0)\end{matrix}\right]=M_{p,0} M_{p,L}^{-1}\left[\begin{matrix}\bar{\Theta}(L)\\ \bar{\varphi}(L)\end{matrix}\right]-M_{p,0} M_{p,L}^{-1} U_{p,L}+U_{p,0}$ (32)
Thanks to this semi-analytical formulation, it is possible to plot the transient temperature along the thermoelectric film.
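To illustrate how the transient thermogram can be recovered in practice, the sketch below rebuilds $\bar{\Theta}(z,p)$ for the IC0 case from Eqs. (21)-(26) and inverts it numerically with the Gaver-Stehfest algorithm. Two assumptions are made for the sketch only: the temperature is measured relative to $T_0$ (so that the boundary condition at $z=0$ in Eq. (25) is homogeneous), and Stehfest inversion is used as one common choice, not necessarily the inversion scheme of the original development; all material values in the example are placeholders.

```python
import numpy as np
from math import factorial

def theta_bar(z, p, L, J, sigma_e, k_C, rho_Cp, tau_T, tau_cv, T0, TL, Tinf):
    """Laplace-domain temperature (relative to T0) of the thin layer, IC0 case, Eqs. (21)-(26).
    sigma_e is the electrical conductivity and tau_T the Thomson-related coefficient of Eq. (20)."""
    b = tau_T * J / (2.0 * k_C)                                               # Eq. (21)
    root = np.sqrt(b**2 + rho_Cp / k_C * p * (1.0 + p * tau_cv))
    g1, g2 = b + root, b - root
    G = J**2 / (rho_Cp * p**2 * sigma_e * (1.0 + p * tau_cv)) + (Tinf - T0) / p  # Eq. (23), shifted by T0
    rhs0, rhsL = -G, (TL - T0) / p - G                                        # boundary conditions, Eq. (25)
    det = np.exp(g1 * L) - np.exp(g2 * L)
    xi1 = (rhsL - rhs0 * np.exp(g2 * L)) / det                                # integration constants, Eq. (26)
    xi2 = rhs0 - xi1
    return xi1 * np.exp(g1 * z) + xi2 * np.exp(g2 * z) + G                    # Eq. (22)

def stehfest(F, t, n=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(p) at time t (n even)."""
    total = 0.0
    for k in range(1, n + 1):
        Vk = sum(j**(n // 2) * factorial(2 * j)
                 / (factorial(n // 2 - j) * factorial(j) * factorial(j - 1)
                    * factorial(k - j) * factorial(2 * j - k))
                 for j in range((k + 1) // 2, min(k, n // 2) + 1))
        total += (-1)**(k + n // 2) * Vk * F(k * np.log(2.0) / t)
    return total * np.log(2.0) / t

# Example: temperature at mid-thickness and t = 5 s (placeholder material data)
T_mid = 270.0 + stehfest(lambda p: theta_bar(0.5e-3, p, L=1e-3, J=1e5, sigma_e=8e4,
                                             k_C=1.5, rho_Cp=2.0e6, tau_T=0.0,
                                             tau_cv=1.0, T0=270.0, TL=300.0, Tinf=270.0), 5.0)
```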
Toward the validation of the present development, the heat transfer occurring in three spatial configurations has been considered: the slab, the solid sphere and the hollow concentric spheres. For a grid independent solution, the medium is divided into 90 layers for the coupled PN-LBM, and each layer is split into ten grid points (including interfaces) to solve the radiation problem. For an angularly independent solution, a reasonable order P25 approximation has been found to be sufficient, as recommended by Ymeli and Kamdem [18] for spherical geometry. The present section is organized as follows: the first part investigates the accuracy of the proposed methodology, while the second part focuses on the performance of the coupled PN-LBM in solving one-dimensional combined radiative and non-Fourier conduction heat transfer in inhomogeneous media. The third part investigates the effect of the relaxation time on the temperature field in a thermoelement.
4.1 Steady state temperature distributions for inhomogeneous media
The PN-LBM code for computing the coupled volumetric radiation and heat conduction used in the present work was first benchmarked for steady state cases dealing with constant and temperature-dependent thermal conductivity in a 1-D planar medium. The temperature-dependent thermal conductivity is considered in the form $k_c (T)=k_0+γ_C (T-T_0)$. The left boundary has the emissivity $ϵ_0=0.2$ while the right boundary is considered black, and the optical thickness is taken as $κ_e L=10$ with scattering albedo $ω=0.0$.
Figure 1. Effect of conductive – radiative parameter on steady state temperature in a slab for $ϵ_0=0.2$ and $ϵ_L=1.0$
Figure 2. Effect of variable thermal conductivity on steady state temperature in a slab for $ϵ_0=0.2$ and $ϵ_L=1.0$
The temperature distributions with constant thermal conductivity are given in Figure 1, while the case with temperature-dependent thermal conductivity is given in Figure 2. In order to establish the accuracy of the PN-LBM, these results are compared to those reported by Mishra et al. [26] based on the Discrete Transfer Method (DTM). It is seen from these figures that the results of the present work match very well with those reported in the literature.
The next considered problem is that of two concentric spheres with inner radius $R_0=0.5$ and outer radius $R_L=1.0$ and heated black boundaries $T_0=2T_L$. The extinction coefficient is $κ_e=2.0$ for isotropic scattering with constant thermal conductivity. The conduction-radiation parameters $N_C=0.1$ and $1.0$ with three scattering albedos $ω=0.1$, $0.5$ and $0.9$ are considered. The conductive heat flux $Q_C$ and total heat flux $Q_T$ at the boundaries are presented in Table 1 for steady state and compared to the Galerkin solutions [27], used as benchmark due to the high accuracy of the Galerkin method. It can be observed in Table 1 that the maximum relative errors for $Q_C (R_0 )$, $Q_C (R_L )$, $Q_T (R_0 )$ and $Q_T (R_L )$ are 0.3854 %, 0.5154 %, 0.4674 %, and 0.6146 % respectively. The developed methodology for coupled heat transfer thus shows good agreement with the benchmark solutions for all the cases presented.
Table 1. Dimensionless conductive and total heat flux at the inner/outer black sphere with $κ_e R_L=2.0$, and $T_0=2T_L$ with $Q_C=R_L q_C/k_0 T_0$ and $Q_T=R_L (q_C+q_R )/k_0 T_0$
(Table columns: $N_c$, $ω$, conductive heat fluxes $Q_C(R_0)$ and $Q_C(R_L)$, and total heat fluxes $Q_T(R_0)$ and $Q_T(R_L)$, for the present method and the Galerkin solution [27].)
4.2 Transient temperature for nonhomogeneous media
One particular case of combined conduction-radiation problems corresponds to a purely scattering medium with $ω=1.0$, where the conduction equation (Eq. 3) and the radiative heat transfer equation (Eq. 4) are decoupled and evolve separately. The considered problem concerns the solid sphere with constant thermal conductivity and, throughout this study, the constant thermal speed is taken such that $C_v R_L/2α_0=1.0$, with $R_L=1.0$. By selecting four time levels $ξ=C_v t/(R_L-R_0)=0.25$, $0.5$, $0.75$ and $0.875$, the corresponding transient temperature distributions are given in Figure 3 and compared to those of Mishra and Sahai [28] based on the LBM. It should be noted that the benchmark solutions are considered as exact solutions due to the ability of the LBM to accommodate the thermal wave front. The most visible observation is that temperatures increase or decrease suddenly. This behavior indicates the presence of the thermal wave front moving from the surface towards the center. The figure shows that the LBM solution built in this work produces the same results as the LBM implementations in the literature.
Figure 3. Temperature distributions at four time steps $ξ=0.25$, $ξ=0.5$, $ξ=0.75$ and $ξ=0.875$ for solid sphere
Figure 4. Temperature distributions at four-time steps $ξ=0.1$, $ξ=0.3$, $ξ=0.6$ and $ξ=6.2$ for $γ_C=0.0$
For the next considered case, the medium absorbs 50% of the radiation energy ($ω=0.5$) with $κ_e=1.0$, $R_0/R_L=0.5$ and conduction-radiation parameter $N_c=0.1$. The temperature distributions for hollow concentric spheres at four time levels including the steady state (SS), $ξ=0.1$, $ξ=0.3$, $ξ=0.6$ and $ξ=6.2$, are presented in Figure 4. The thermal wave front moves from the inner hot boundary to the outer cold boundary, and the temperature of the unperturbed zone increases with time, which is the contribution of thermal radiation.
4.3 Effect of variable thermal conductivity on transient temperature
Considering the solid sphere heated at the surface, the effect of the temperature-dependent thermal conductivity $k_c (T)=k_0 [1+β_C T]$ is now investigated with three values of $γ_C=β_C T_h$ for both conduction and radiation phenomena. The conduction-radiation number is taken to be $N_c=0.1$ and $ω=0.5$. Figure 5 depicts the temperature field for $γ_C=-0.5$, $0.0$ and $+0.5$ at time levels $ξ=0.3$ and $ξ=0.6$. It is seen from this figure that increasing the $γ_C$ parameter raises the temperature profile in the medium, and that negative values of $γ_C$ affect the temperature field more strongly than positive values. This may be attributed to the fact that, for positive $γ_C$, the thermal conductivity increases and therefore improves the thermal transfer by conduction.
Figure 5. Effect of variable thermal conductivity on temperature field for $ω=0.5$, $Nc=0.1$ and $κ_e=1.0$ in solid sphere $(a) ξ=0.1$ $(b) ξ=0.3$ and $(c) ξ=0.6$
4.4 Effect of relaxation time on transient temperature for thermoelement
Thanks to the semi-analytical method based on the Laplace transform and the quadrupole matrix formulation, the effect of the thermal relaxation time on the transient temperature along the thermoelectric film is now investigated. The time variation of the temperature at different coordinates $z=0$, $z=L/4$, $z=L/2$, $z=3L/4$ and $z=L$ is plotted in Figure 6. Four values of the relaxation time are considered, $τ_{cv}=0.1$ s, $1$ s, $10$ s and $100$ s, with the boundary conditions IC0: $T_∞=T_0=270$ K and $T_L=300$ K. It is seen in this figure that, for small relaxation times, there is no notable difference between the Fourier and non-Fourier solutions. As the relaxation time increases, the temperatures obtained with the Fourier law are greater than those produced with the non-Fourier law for the thin layer.
(a) Relaxation time $τ_cv$= 0.1s
(b) Relaxation time $τ_cv$= 1s
(c) Relaxation time $τ_cv$= 10s
Figure 6. Effect of relaxation time on temperature field of thermoelectric thin layer
The transient coupled non-Fourier conduction with radiation heat transfer in an inhomogeneous, scattering and gray medium with graded index has been investigated. The analytical layered solution of radiation is developed based on a linear algebra approach in complex media, while the lattice Boltzmann method is used to solve the non-linear hyperbolic energy equation and compared with benchmark solutions. The contribution of radiation to the transient temperature distributions in the medium was graphically examined using three radiation parameters: temperature-dependent thermal conductivity, single scattering albedo and conduction-radiation parameter. The obtained results show good agreement with benchmark solutions for the radiative heat flux and for the steady state and transient temperature distributions. It was found that the radiation effect is more pronounced for high values of the refractive index or low values of the single scattering albedo and of the conduction-radiation parameter, while this contribution is less sensitive to the anisotropic scattering coefficient. Through these calculations, it was found that, although the sharp hyperbolic wave front of non-Fourier conduction becomes smoother when strong radiative heat transfer is taken into account, it can be damped or emphasized by the effect of the thermal conductivity and the scattering albedo. The present study demonstrates that the proposed analytical layered solution for radiation coupled to the lattice Boltzmann method for non-linear hyperbolic conduction is accurate and suitable for combined radiation-conduction problems in complex media. In the case of thermoelectric elements, the development of a specific quadrupole applied to thermoelectricity makes it possible to obtain the transient behavior of a film under several boundary conditions, to take into account what happens at very short times and to investigate the effect of the relaxation times on the thermogram.
a=constant taken as 0 for the slab and 1 for the sphere
A,B,C=matrices for angular discretized RTE
Cv=speed of the thermal wave (m.s-1)
Cp=specific heat at constant pressure (J.kg-1. K-1)
D=source vector for radiative transfer equation
$e ⃗$=velocities of the particle distribution functions
f=particle distribution function
F=vector of intensity moment
I=radiative intensity (W.m-2.Sr-1)
J=current density
kc=thermal conductivity (W.m-1.K-1)
L=length of the geometry (m)
Nc=conduction-radiation parameter
NL=total number of layers of the medium
Pl=Legendre polynomial
q=heat flux (W.m-2)
r,z=spatial variable (m)
R=matrix of eigenvectors
s=source term of the ordinary differential equation
T=temperature distribution (K)
t=time variable (s)
u=temperature dependent function
W=vector of radiative intensity moment
Greek symbols
α=thermal diffusivity (m2.s-1)
γ=dimensionless coefficient of thermal conductivity
δ=Kronecker symbol
ϵ=emissivity
η=dimensionless distance
Θ=dimensionless temperature
Λ=matrix of real eigenvalues
λ=eigenvalue
ξ=dimensionless time or constant in thermoelectricity
ρ=material density (kg.m-3)
σ=Stefan-Boltzmann constant (5.67 × 10−8 W.m−2.K−4)
$κ_e$=extinction coefficient (m-1)
τ=optical depth
$τ_B$=optical thickness of a given layer
$τ_c$=relaxation time in non-Fourier conduction (s)
$τ_M$=thermal relaxation time for LBM model
Φ=scattering phase function
ω=scattering albedo
Superscripts
eq=equilibrium state
k=order of the given layer
T=transpose vector
Subscripts
c=conductive
h=reference
R=radiative
[1] Lazard M, Andre S, Maillet D, Baillis D, Degiovanni A. (2001). Flash experiment on a semitransparent material: Interest of a reduced model. Inverse Problems in Engineering 9(4): 413-429. http://doi.org/10.1080/174159701088027772
[2] Howell JR, Mengüç MP, Siegel R. (2015). Thermal Radiation Heat Transfer 6th ed. Taylor and Francis, CRC Press: New York. http://doi.org/10.1115/1.3607834
[3] Zheng XF, Liu CX, Yan YY, Wang Q. (2014). A review of thermoelectrics research – Recent developments and potentials for sustainable and renewable energy applications. Renewable and Sustainable Energy Reviews 32: 486-503. http://doi.org/10.1016/j.rser.2013.12.053
[4] Özişik MN, Tzou DY. (1994). On the wave theory in heat conduction. Journal of Heat Transfer 116(3): 526-535. http://doi.org/10.1115/1.2910903
[5] Zhou J, Zhang Y, Chen JK. (2008). Non-Fourier heat conduction effect on laser-induced thermal damage in biological tissues. Numerical Heat Transfer Part A 54: 1–19. http://doi.org/10.1080/10407780802025911
[6] Chen WH, Wang CC, Hung CI, Yang CC, Juang RC. (2014). Modeling and simulation for the design of thermal-concentrated solar thermoelectric generator. Energy 64(1): 287-297. http://doi.org/10.1016/j.energy.2013.10.073
[7] Espinosa N, Lazard M, Aixala L, Scherrer H. (2010). Modeling thermoelectric generator applied to diesel automotive heat recovery. Journal of Electronic Materials 39(9): 1446-1455. http://doi.org/10.1007/s11664-010-1305-2
[8] Massaguer E, Massaguer A, Montoro L, Gonzalez JR. (2014). Development and validation of a new TRNSYS type for the simulation of thermoelectric generators. Applied Energy 134: 65-74. http://doi.org/10.1016/j.apenergy.2014.08.010
[9] Chen J, Yan Z, Wu L. (1997). Nonequilibrium thermodynamic analysis of a thermoelectric device. Energy 22(10): 979-985. http://doi.org/10.1016/S0360-5442(97)00026-1
[10] Martínez A, Astrain D, Rodríguez A. (2011). Experimental and analytical study on thermoelectric self-cooling of devices. Energy 36(8): 5250-5260. http://doi.org/10.1016/j.energy.2011.06.029
[11] Ymeli LG, Kamdem THT. (2017). Hyperbolic conduction-radiation in participating and inhomogeneous slab with double spherical harmonics and lattice Boltzmann methods. ASME Journal of Heat Transfer 139(4): 1-14.
[12] Lazard M, Goupil C, Fraisse G, Scherrer H. (2012). Thermoelectric quadrupole of a leg to model transient state. Journal of Power and Energy Proc. IMechE 226: 277-282, https://doi.org/10.1177/0957650911433240
[13] Zhao JM, Liu LH. (2017). Radiative transfer equation and solutions. In: Kulacki F. (eds) Handbook of Thermal Science and Engineering. http://doi.org/10.1007/978-3-319-32003-8_56-1
[14] Hunter B, Guo Z. (2015). Applicability of phase – function normalization techniques for radiation transfer computation. Numerical Heat Transfer 67: 1-24. http://doi.org/10.1080/10407790.2014.949516
[15] Ge WJ, Michael F, Modest MF, Roy, S. (2016). Development of high-order PN models for radiative heat transfer in special geometries and boundary conditions. Journal of Quantitative Spectroscopy & Radiative Transfer 172: 98–109. http://doi.org/10.1016/j.jqsrt.2015.09.001
[16] Ladeia C, Bodmann BEJ, Vilhena MT. (2018). The radiative conductive transfer equation in cylinder geometry: Semi-analytical solution and a point analysis of convergence. Journal of Quantitative Spectroscopy & Radiative Transfer 217: 338-352.
[17] Tsai JR, Ozisik MN. (1989). Radiation in spherical symmetry with anisotropic scattering and variable properties. Journal of Quantitative Spectroscopy and Radiative Transfer 42(3): 187-199.
[18] Ymeli LG, Kamdem THT. (2015). High-order spherical harmonics method for radiative transfer in spherically symmetric problems. Computational Thermal Sciences 7(4): 353-371. http://doi.org/10.1615/ComputThermalScien.2015014843
[19] Lazard M, Goupil C, Fraisse G, Scherrer H. (2012). Thermoelectric quadrupole of a leg to model transient state. Journal of Power and Energy, Proc. IMechE 226: 277-282. https://doi.org/10.1177/0957650911433240
[20] Alata M, Al-Nimr MA, Naji M. (2003). Transient behavior of a thermoelectric device under the hyperbolic heat conduction model. International Journal of Thermophysics 24(6): 1753-1768. http://doi.org/10.1023/b:ijot.0000004103.26293.0c
[21] Kamdem THT, Ymeli LG, Tapimo R. (2017). The discrete ordinates characteristics solution to the one-dimensional radiative transfer equation. Journal of Computational and Theoretical Transport 46(5): 346-365. http://doi.org/10.1080/23324309.2017.1352519
[22] Tan H, Yi H, Luo J, Dong S. (2005). Effects of graded refractive index on steady and transient heat transfer inside a scattering semitransparent slab. Journal of Quantitative Spectroscopy and Radiative Transfer 96(3-4): 363–381. http://doi.org/10.1016/j.jqsrt.2004.12.034
[23] Mishra SC, Sahai H. (2012). Analyses of non-Fourier heat conduction in 1-D cylindrical and spherical geometry –An application of the lattice Boltzmann method. International Journal of Heat and Mass Transfer 55: 7015-7023. http://doi.org/10.1016/j.ijheatmasstransfer.2012.07.014
[24] Varmazyar M, Bazargan M. (2013). Development of a thermal lattice Boltzmann method to simulate heat transfer problems with variable thermal conductivity. International Journal of Heat and Mass Transfer 59: 363-371. http://doi.org/10.1016/j.ijheatmasstransfer.2012.12.014
[25] Carslaw HS. (1922). Introduction to the mathematical theory of heat in solids. Monatshefte für Mathematik und Physik 32(1): A27-A28. http://doi.org/10.1007/BF01696928
[26] Mishra SC, Krishna NA, Gupta N, Chaitanya GR. (2008). Combined conduction and radiation heat transfer with variable thermal conductivity and variable refractive index. International Journal of Heat and Mass Transfer 51(1-2): 83-90. http://doi.org/10.1016/j.ijheatmasstransfer.2007.04.018
[27] Jia G, Yener Y, Cipolla J. (1991). Radiation between two concentric spheres separated by a participating medium. Journal of Quantitative Spectroscopy and Radiative Transfer 46(1): 11-19. http://doi.org/10.1016/0022-4073(91)90062-u
[28] Mishra SC, Sahai H. (2012). Analyses of non-Fourier heat conduction in 1-D cylindrical and spherical geometry – An application of the lattice Boltzmann method. International Journal of Heat and Mass Transfer 55: 7015-7023. http://doi.org/10.1016/j.ijheatmasstransfer.2012.07.014
\begin{document}
\begin{center} {\Large Multimodal Transportation with Ridesharing of Personal Vehicles} \vskip 0.2in Qian-Ping Gu$^1$, Jiajian Leo Liang$^1$
$^1$School of Computing Science, Simon Fraser University, Canada\\ [email protected], leo\[email protected] \end{center}
\noindent \textbf{Abstract:} Many public transportation systems are unable to keep up with growing passenger demand as the population grows in urban areas. Slow or absent improvements to public transportation push people toward private transportation modes, such as carpooling and ridesharing. However, the occupancy rate of personal vehicles has been dropping in many cities. In this paper, we describe a centralized transit system that integrates public transit and ridesharing, which matches drivers and transit riders such that the riders obtain shorter travel times by using both transit and ridesharing. The optimization goal of the system is to assign as many riders to drivers as possible for ridesharing. We give an exact approach and approximation algorithms to achieve the optimization goal. As a case study, we conduct an extensive computational study to show the effectiveness of the transit system for different approximation algorithms, based on real-world traffic data in Chicago City; the data sets include both public transit and ridesharing trip information. The experiment results show that our system is able to assign more than 60\% of riders to drivers, leading to a substantial increase in the occupancy rate of personal vehicles and reducing riders' travel time.
\noindent \textbf{Keywords:} Multimodal transportation, ridesharing, approximation algorithms, computational study
\section{Introduction} \label{sec-intro} As the population grows in urban areas, commuting between and within large cities is time-consuming and resource-demanding. Due to growing passenger demand, the number of vehicles on the road for both public and private transportation has increased to handle the demand. Public transportation systems are unable to keep up with the demand in terms of service quality. This pushes people to use personal vehicles for work commute. In the United States, personal vehicles are the main transportation mode~\cite{CSS20}. However, the occupancy rate of personal vehicles in the U.S. is 1.6 persons per vehicle in 2011~\cite{USDT-G,USDTFHA-S} (and decreased to 1.5 persons per vehicle in 2017~\cite{CSS20}), which can be a major cause for congestion and pollution. This is the reason municipal governments encourage the use of public transit; the major drawback of public transit is the inconvenience of last mile and/or first mile transportation compared to personal vehicles~\cite{TS14-W}. With the increasing popularity in ridesharing/ridehailing service, there may be potential in integrating private and public transportation. From the research report of~\cite{TRB16-M}, it is recommended that public transit agencies should build on mobility innovations to allow public-private engagement in ridesharing because the use of shared modes increases the likelihood of using public transit. As pointed out by Ma et al.~\cite{TRELTR19-M}, some basic form of collaboration between MoD (mobility-on-demand) services and public transit already exists (for first and last mile transportation). There is an increasing interest for collaboration between private companies and public sector entities~\cite{SMT18-R}.
The sparseness of transit networks is usually the main cause of the inconvenience of public transit. Such transit networks have infrequent schedules and force customers to make multiple transfers. In this paper, we investigate the potential effectiveness of integrating public transit with ridesharing to increase ridership in such sparse transit networks and reduce traffic congestion for work commutes (not very short trips). For example, people who drive their vehicles to work can pick up \textit{riders}, who use public transit regularly, and drop them off at some transit stops, and those riders can take public transit to their destinations. In this way, riders are presented with a cheaper alternative than ridesharing for the entire trip, and it is more convenient than using public transit only. The transit system also gets higher ridership, which matches the recommendation of~\cite{TRB16-M} for a more sustainable transportation system. Our research focuses on a centralized system that is capable of matching drivers and riders satisfying their trips' requirements while achieving some optimization goal; the requirements of a trip may include an origin and a destination, time constraints, the capacity of a vehicle, and so on. When a rider is assigned a driver, we call the resulting plan a \emph{ridesharing route}, and it is compared with the fastest \emph{public transit route} for this rider, which uses only public transit. If the ridesharing route is faster than the public transit route, the ridesharing route is provided to both the rider and the driver. To increase the number of rider participants, our system-wide optimization goal is to maximize the number of riders, each of whom is assigned a ridesharing route. We call this the \emph{maximization problem} (formal definition in Section~\ref{sec-preliminary}).
In the literature, there are many papers about standalone ridesharing/carpooling, from theoretical to empirical studies (e.g.,~\cite{TRBM11-A,PNAS17-AM,GLZ21,TRBM20-X}). For literature reviews on ridesharing, readers are referred to~\cite{EJOR12-A,TRBM13-F,TRBM19-M,SS20-T}. On the other hand, only a few papers study the integration of public transit with dynamic ridesharing. Aissat and Varone~\cite{ICEIS15-A} proposed an approach in which a public transit route for each rider is given, and their algorithm tries to substitute part(s) of every rider's route with ridesharing. Any part of a rider's original transit route is replaced only if the ridesharing substitution is better than the original part. Their algorithm finds the best route for each rider on a first-come first-served basis (a system-wide optimization goal is not considered) and is computationally intensive. Huang et al.~\cite{TITS19-H} presented a more robust approach, compared to~\cite{ICEIS15-A}, by combining two networks $N, N'$ (representing the public transit and ridesharing network respectively) into one single routable graph $G$. The graph $G$ uses the \emph{time-expanded model} to maintain the information about all public vehicles' schedules and the riders' and drivers' origins, destinations and time constraints. In general, a \emph{stop node} in $G$ represents a public vehicle's/driver's stop location, and a \emph{time node} represents time events of this vehicle/driver at this stop. An edge between two nodes implies a possible transfer for riders from one vehicle to the other (i.e., the departure time of a vehicle is after the arrival of the other); this also implies that a rider can be picked-up/dropped-off from/at a public stop within time constraints. The authors apply this idea to create the ridesharing network graph $N'$ and connect the two networks $N, N'$ by creating edges between them whenever a rider can be picked-up/dropped-off from/at a public stop within time constraints. For each rider travel query, a shortest path is found on $G$. Their approach is also first-come first-served and does not achieve a system-wide optimization goal.
Ma~\cite{EEEIC17-M} and Stiglic et al.~\cite{COR18-S} proposed models that integrate ridesharing and public transit as graph matching problems to achieve system-wide optimization goals. The algorithm presented in~\cite{EEEIC17-M} uses the shareability graph (RV-graph)~\cite{PNAS14-S} and the extension of the RV-graph, called the RTV-graph~\cite{PNAS17-AM}. In fact, the approach used by Stiglic et al.~\cite{COR18-S} is similar, except that~\cite{COR18-S} supports more rideshare match types. A set of driver and rider trip announcements and a public transit network with a fixed cyclic timetable are given. For a pre-transit rideshare match, a set of riders is assigned to a driver, and the driver picks up each rider by traveling to each rider's origin and then drops them off at some public transit stops. For a post-transit rideshare match, a driver picks up a set of riders at a public transit stop and then transports the riders one by one to their destinations. A set of riders can only be assigned to a driver if certain constraints are met, such as the capacity of the driver's vehicle and the travel time constraints of the driver and riders. Each driver and rider is represented by a node. There is an edge between a driver and a rider if the rider can be served by the driver. If a group of riders can be served by a driver, a node containing the group is created, and an edge between the driver and the group is also created. From this graph, a matching problem is formulated as an integer linear program (ILP) and solved by standard branch and bound (CPLEX). The optimization goal in~\cite{EEEIC17-M} is to minimize costs related to waiting time and travel time, but ridesharing routes are not guaranteed to be better than transit routes. Although the optimization goal in~\cite{COR18-S} aligns with ours, there are some limitations in their approach: they limit each rideshare match to at most two riders, each rider must travel to the transit stop that is closest to the rider's destination, and, more importantly, the ridesharing routes assigned to riders can be longer than public transit routes.
In this paper, we use a model similar to that in~\cite{EEEIC17-M,COR18-S}. We extend the work in~\cite{COR18-S} to eliminate the limitations described above and give approximation algorithms for the optimization problem to ensure solution quality. Our discrete algorithms make it possible to control the trade-off between quality and computational time. We conduct a numerical study based on real-life data in Chicago City. Our main contributions are summarized as follows: \begin{enumerate} \setlength\itemsep{0em} \item We give an exact algorithm approach (an ILP formulation based on a hypergraph representation) for integrating public transit and ridesharing. \item We prove our maximization problem is NP-hard and give a 2-approximation algorithm for the problem. We show that previous $O(k)$-approximation algorithms~\cite{SWAT00-B,SODA99-C} for the $k$-set packing problem are 2-approximation algorithms for our maximization problem. Our algorithm is more time and space efficient than previous algorithms. \item As a case study, we conduct an extensive numerical study based on real-life data in Chicago City to evaluate the potential of having an integrated transit system and the effectiveness of different approximation algorithms. \end{enumerate} The rest of the paper is organized as follows. In Section~\ref{sec-preliminary}, we give the preliminaries of the paper, describe a centralized system that integrates public transit and ridesharing, and define the maximization problem. In Section~\ref{sec-exact}, we describe our exact algorithm approach. We then propose approximation algorithms in Section~\ref{sec-approximate}. We discuss our numerical experiments and results in Section~\ref{sec-experiment}. Finally, Section~\ref{sec-conclusion} concludes the paper.
\section{Problem definition and preliminaries} \label{sec-preliminary} In the problem \textit{multimodal transportation with ridesharing} (MTR), we have a centralized system, and for every fixed time interval, the system receives a set $\mathcal{A} = D \cup R$ of trips with $D \cap R = \emptyset$, where $D$ is the set of driver trips and $R$ is the set of rider trips. Each trip is expressed by an integer label $i$ and consists of an individual, a vehicle (for driver trip) and some requirements. A connected public transit network with a fixed timetable $T$ is given.
We assume that for any source $o$ and destination $d$ in the public transit network, $T$ gives the fastest travel time from $o$ to $d$. A \emph{ridesharing route} $\pi_i$ for a rider $i \in R$ is a travel plan using a combination of public transportation and ridesharing to reach $i$'s destination satisfying $i$'s requirements, whereas a \emph{public transit route} $\hat{\pi}_i$ for a rider $i$ is a travel plan using only public transportation. The multimodal transportation with ridesharing problem asks to provide at least one feasible route ($\pi_i$ or $\hat{\pi}_i$) for every rider $i \in R$. We denote an instance of multimodal transportation with ridesharing problem by $(N,\mathcal{A},T)$, where $N$ is an edge-weighted directed graph (network) for both private and public transportation. We call a public transit station or stop just \emph{station}. The terms rider and passenger are used interchangeably (although passenger emphasizes a rider has been provided with a ridesharing route).
The requirements of each trip $i$ in $\mathcal{A}$ are specified by $i$'s parameters submitted by the individual. The parameters of a trip $i$ contain an origin location $o_i$, a destination location $d_i$, an earliest departure time $\alpha_i$, a latest arrival time $\beta_i$ and a maximum trip time $\gamma_i$. A driver trip $i$ also contains a capacity $n_i$ of the vehicle, a limit $\delta_i$ on the number of stops a driver wants to make to pick-up/drop-off passengers, and an optional path to reach its destination. The maximum trip time $\gamma_i$ of a driver $i$ includes a travel time from $o_i$ to $d_i$ and a detour time limit $i$ can spend for offering ridesharing service. A rider trip $i$ also contains an acceptance rate $\theta_i$ for a ridesharing route $\pi_i$, that is, $\pi_i$ is given to rider $i$ if $t(\pi_i) \leq \theta_i \cdot t(\hat{\pi}_i)$ for every public transit route $\hat{\pi}_i$ and $0 < \theta_i \leq 1$, where $t(\cdot)$ is the travel time. Such a route $\pi_i$ is called an \emph{acceptable ridesharing route} (acceptable route for brevity). For example, suppose the best public transit route $\hat{\pi}_i$ takes 100 minutes for $i$ and $\theta_i = 0.9$. An acceptable route $\pi_i$ implies that $t(\pi_i) \leq \theta_i \cdot t(\hat{\pi}_i) = 90$ minutes. We consider two match types for practical reasons. \begin{itemize} \setlength\itemsep{0em} \item \textbf{Type 1 (rideshare-transit)}: a driver may make multiple stops to pick-up different passengers, but makes only one stop to drop-off all passengers. In this case, the \emph{pick-up locations} are the passengers' origin locations, and the \emph{drop-off location} is a public station. \item \textbf{Type 2 (transit-rideshare)}: a driver makes only one stop to pick-up passengers and may make multiple stops to drop-off all passengers. In this case, the \emph{pick-up location} is a public station and the \emph{drop-off locations} are the passengers' destination locations. \end{itemize} Riders and drivers specify one of the match types to participate in; they are allowed to choose both in hope to increase the chance being selected, but the system will assign them only one of the match types such that the optimization goal of the MTR problem is achieved, which is to assign acceptable routes to as many riders as possible. Formally, the \textbf{maximization problem} we consider is to maximize the number of passengers, each of whom is assigned an acceptable route $\pi_i$ for every $i \in R$.
For a driver $i$ and a set $J \subseteq R$ of riders, $\sigma(i) = \{i\} \cup J$ is called a {\em feasible match} if the routes for all trips of $\sigma(i)$ satisfy the requirements (constraints) specified by the parameters of the trips collectively as listed below (a summary of notation and constraints can be found in Section~\ref{sec-compute-matches}). \begin{enumerate} \setlength\itemsep{0em} \item \textit{Ridesharing route constraint}: for $J=\{j_1,\ldots,j_k\}$, there is a path $(o_i,o_{j_1},\dots,o_{j_k}$, $s,d_i)$ in $N$, where $s$ is the drop-off location for Type 1 match; or there is a path $(o_i,s,d_{j_1},...,d_{j_k},d_i)$ in $N$, where $s$ is the pick-up location for Type 2 match.
\item \textit{Capacity constraint}: $1\leq |J| \leq n_i$. \item \textit{Acceptable constraint}: each passenger $j \in J$ is given an acceptable route $\pi_j$ offered by driver $i$. \item \textit{Travel time constraint}: each trip $j \in \sigma(i)$ departs from $o_j$ no earlier than $\alpha_j$, arrives at $d_j$ no later than $\beta_j$, and the total travel duration of $j$ is at most $\gamma_j$. \item \textit{Stop constraint}: the number of unique locations visited by driver $i$ to pick-up (for Type 1) or drop-off (for Type 2) all passengers of $\sigma(i)$ is at most $\delta_i$. \end{enumerate} Two feasible matches $\sigma(i), \sigma(i')$ are \emph{disjoint} if $\sigma(i) \cap \sigma(i') = \emptyset$. Then, the maximization problem considered is to find a set of pairwise disjoint feasible matches such that the number of passengers included in the feasible matches is maximized.
Intuitively, a rideshare-transit (Type 1) feasible match $\sigma(i)$ is that all passengers in $\sigma(i)$ are picked-up at their origins and dropped-off at a station, and then $i$ drives to destination $d_i$ while each passenger $j$ of $\sigma(i)$ takes transit to destination $d_j$. A transit-rideshare (Type 2) feasible match $\sigma(i)$ is that all passengers in $\sigma(i)$ are picked-up at a station and dropped-off at their destinations, and then $i$ drives to destination $d_i$ after dropping the last passenger. We give algorithms to find pairwise disjoint feasible matches to maximize the number of passengers included in the matches. We describe our algorithms for Type 1 only. Algorithms for Type 2 can be described with the constraints on the drop-off location and pick-up location of a driver exchanged, and we omit the description. Further, it is not difficult to extend to other match types, such as rideshare only and park-and-ride, as described in~\cite{COR18-S}.
\section{Exact algorithm} \label{sec-exact} An exact algorithm for the maximization problem is presented in this section, which is similar to the matching approach described in~\cite{PNAS17-AM,PNAS14-S} for ridesharing and in~\cite{EEEIC17-M,COR18-S} for MTR.
\subsection{Integer program formulation} \label{sec-exact-IP} The exact algorithm is summarized as follows. First, we compute all feasible matches for each driver $i$. Then, we create a bipartite (hyper)graph $H(D,R,E)$, where $D(H)$ is the set of drivers, and $R(H)$ is the set of riders. There is a hyperedge $e = (i, J)$ in $E(H)$ between $i \in D(H)$ and a non-empty subset $J \subseteq R(H)$ if $\{i\} \cup J$ is a feasible match, denoted by $\sigma_J(i)$, for driver $i$. \begin{figure}
\caption{A bipartite hypergraph for all possible matches of an instance $(N,\mathcal{A},T)$.}
\label{fig-hypergraph}
\end{figure} An example is given in Figure~\ref{fig-hypergraph}. Any driver $i$ and rider $j$ with no feasible match is removed from $D(H)$ and $R(H)$ respectively, namely, no isolated vertex.
For an edge $e=(i,J)$, let $A(e) = \{i\} \cup J$ and $p(e) = |J|$ be the number of riders represented by $e$. For a trip $j \in \mathcal{A}$, define $E_j = \{e \in E \mid j \in A(e)\}$ to be the set of edges in $E$ associated with $j$. To solve the maximization problem, we give an integer program (ILP) formulation: \begin{alignat}{4}
& \text{maximize } & &\sum_{e \in E(H)} p(e) \cdot x_{e} & \qquad \label{obj-1}\\
& \text{subject to } & \qquad &\sum_{e \in E_j} x_{e} \leq 1, & & \forall \text{ } j \in \mathcal{A} \label{constraint-1}\\
& & &x_{e} \in \{0,1\}, & &\forall \text{ } e \in E(H) \label{constraint-2} \end{alignat} The binary variable $x_e$ indicates whether the edge $e = (i, J)$ is in the solution ($x_e = 1$) or not ($x_e = 0$). If $x_e = 1$, it means that all passengers in $J$ are served by $i$. Inequality (2) in the ILP formulation guarantees that each driver serves at most one feasible set of passengers and that each passenger is served by at most one driver. Note that the ILP (\ref{obj-1})-(\ref{constraint-2}) is similar to a set packing formulation. An advantage of this ILP formulation is that the number of constraints is substantially decreased, compared to traditional ridesharing formulations.
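For illustration, the ILP (\ref{obj-1})-(\ref{constraint-2}) can be assembled directly from the hyperedges of $H$. The following Python sketch uses the open-source PuLP modeller (with its default CBC solver) rather than CPLEX; the input \texttt{edges} is assumed to be a list of pairs $(i, J)$ of a driver label and a tuple of rider labels forming a feasible match.
\begin{verbatim}
import pulp

def solve_mtr_ilp(edges):
    # edges: list of (driver, riders) pairs, riders being a tuple of rider labels
    prob = pulp.LpProblem("MTR_max_passengers", pulp.LpMaximize)
    x = [pulp.LpVariable("x_%d" % k, cat="Binary") for k in range(len(edges))]
    # objective (1): maximize the number of riders covered by the selected matches
    prob += pulp.lpSum(len(J) * x[k] for k, (i, J) in enumerate(edges))
    # constraint (2): every trip (driver or rider) is in at most one selected match
    trips = {t for (i, J) in edges for t in (i,) + tuple(J)}
    for t in trips:
        prob += pulp.lpSum(x[k] for k, (i, J) in enumerate(edges)
                           if t == i or t in J) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [edges[k] for k in range(len(edges)) if x[k].value() == 1]

# example: two drivers (1, 2) and three riders (10, 11, 12)
matches = solve_mtr_ilp([(1, (10,)), (1, (10, 11)), (2, (11,)), (2, (12,))])
\end{verbatim}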
\begin{observation} A match $\sigma(i)$ for any driver $i \in D$ is feasible if and only if for every subset $P$ of $\sigma(i)\setminus\{i\}$, the match between $i$ and $P$ is feasible~\cite{TRBM15-S}. \label{obs-1} \end{observation}
From Observation~\ref{obs-1}, it is not difficult to see that the following results hold. \begin{prop} Let $i_1, i_2,\ldots, i_j$ be a set of drivers in $D$ and $P$ be a maximal set of passengers served by $i_1, \ldots, i_j$. There always exists a solution such that $\sigma(i_a) \cap \sigma(i_b) = \emptyset$ $(1 \leq a \neq b \leq j)$ and $\bigcup_{1 \leq a \leq j} \sigma(i_a) = P$. \label{prop-1} \end{prop}
\begin{theorem}\label{theorem-ILP} Given a bipartite graph $H(D,R,E)$ representing an instance of the multimodal transportation with ridesharing maximization problem, an optimal solution to the ILP (\ref{obj-1})-(\ref{constraint-2}) is an optimal solution to the maximization problem and vice versa. \end{theorem}
\begin{proof} From inequality (\ref{constraint-1}) in the integer program, the solution found by the integer program is always feasible to the maximization problem. By Proposition~\ref{prop-1} and objective function~(\ref{obj-1}), an optimal solution to the ILP (\ref{obj-1})-(\ref{constraint-2}) is an optimal solution to the maximization problem. Obviously, an optimal solution to the maximization problem is an optimal solution to the ILP (\ref{obj-1})-(\ref{constraint-2}). \end{proof}
\subsection{Computing feasible matches} \label{sec-compute-matches}
Let $i$ be a driver in $D$ and $n_i$ be the capacity of $i$ (maximum number of riders $i$ can serve). The maximum number of feasible matches for $i$ is $\sum_{p = 1}^{n_i} \binom{|R|}{p}$. Assuming the capacity $n_i$ is a very small constant (which is reasonable in practice), the above summation is polynomial in $R$, that is, $O((|R|+1)^{n_i})$ (partial sums of binomial coefficients). Let $K = \max_{i \in D} {n_i}$ be the maximum capacity among all vehicles (driver trips). Then, in the worst case, $|E(H)| = O(|D| \cdot (|R|+1)^K)$.
We compute all feasible matches for each trip in two phases. In phase one, for each driver $i$, we find all feasible matches $\sigma(i)=\{i,j\}$ with one rider $j$. In phase two, for each driver $i$, we compute all feasible matches $\sigma(i)=\{i,j_1,..,j_p\}$ with $p$ riders, based on the feasible matches of $i$ with $p-1$ riders computed previously, for $p=2$ up to the number of passengers $i$ can serve. Before describing how to compute the feasible matches, we first introduce some notation and specify the feasible match constraints we consider. Each trip $i \in \mathcal{A}$ is specified by the parameters $(o_i, d_i, n_i, z_i, p_i, \delta_i, \alpha_i, \beta_i, \gamma_i, \theta_i)$, which are summarized in Table~\ref{table-notation} along with other notation. \begin{table}[!ht] \footnotesize \centering
\begin{tabular}{| c | l |}
\hline
\textbf{Notation} & \textbf{Definition} \\
$o_i$ & Origin (start location) of $i$ (a vertex in $N$) \\
$d_i$ & Destination of $i$ (a vertex in $N$) \\
$n_i$ & Number of seats (capacity) of $i$ available for passengers (driver only) \\
$z_i$ & Maximum detour time $i$ willing to spend for offering ridesharing services (driver only) \\
$p_i$ & An optional preferred path of $i$ from $o_i$ to $d_i$ in $N$ (driver only) \\
$\delta_i$ & Maximum number of stops $i$ willing to make to pick-up passengers for match \\
& Type 1 and to drop-off passengers for match Type 2. \\
$\alpha_i$ & Earliest departure time of $i$ \\
$\beta_i$ & Latest arrival time of $i$ \\
$\gamma_i$ & Maximum trip time for $i$ \\
$\theta_i$ & Acceptance rate ($0 < \theta_i \leq 1$) for a ridesharing route $\pi_i$ (rider only) \\
$\pi_i$ & Route for $i$ using a combination of public transit and ridesharing (rider only) \\
$\hat{\pi}_i$ & Route for $i$ using only public transit (rider only) \\
$d(\pi_i)$ & The driver of ridesharing route $\pi_i$ \\
$t(p_i)$ & Travel time for traversing path $p_i$ by private vehicle \\
$t(\pi_i)$ \& $t(\hat{\pi}_i)$ & Travel time for traversing route $\pi_i$ and $\hat{\pi}_i$ resp. \\
$t(u,v)$ \& $\hat{t}(u,v)$ & Travel time from location $u$ to $v$ by private vehicle and public transit resp. \\ \hline
\end{tabular} \caption{Parameters for a trip announcement $i$.} \label{table-notation} \end{table} The maximum trip time $\gamma_i$ of a driver $i$ can be calculated as $\gamma_i = t(p_i) + z_i$ if $p_i$ is given; otherwise $\gamma_i = t(o_i,d_i) + z_i$. For a passenger $j$, $\gamma_j$ is more flexible; it defaults to $\gamma_j = t(\hat{\pi}_j)$, where $\hat{\pi}_j$ is the fastest public transit route.
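As a convenience for the implementation-minded reader, the trip parameters of Table~\ref{table-notation} translate naturally into a record type. The following Python sketch is only one possible encoding; the field names are ours and are not prescribed by the system.
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Trip:
    label: int                  # integer label of the trip
    origin: int                 # o_i, a vertex of N
    destination: int            # d_i, a vertex of N
    earliest_departure: float   # alpha_i
    latest_arrival: float       # beta_i
    max_trip_time: float        # gamma_i
    # driver-only parameters
    capacity: int = 0           # n_i
    max_detour: float = 0.0     # z_i
    max_stops: int = 0          # delta_i
    preferred_path: Optional[List[int]] = None   # p_i
    # rider-only parameter
    acceptance_rate: float = 1.0                 # theta_i
\end{verbatim}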
For a driver $i \in D$ and a set $J \subseteq R$ of riders, the set $\sigma(i) = \{i\} \cup J$ is called a {\em feasible match} if the routes for all trips of $\sigma(i)$ satisfy the requirements (constraints) specified by the parameters of the trips collectively as listed below: \begin{enumerate} \setlength\itemsep{0em} \item \textit{Ridesharing route constraint}: for $J=\{j_1,\ldots,j_k\}$, there is a path $(o_i,o_{j_1},\ldots,o_{j_k}$, $s,d_i)$ in $N$, where $s$ is the drop-off location for Type 1 match; or there is a path $(o_i,s,d_{j_1},...,d_{j_k},d_i)$ in $N$, where $s$ is the pick-up location for Type 2 match.
\item \textit{Capacity constraint}: limits the number of passengers a driver can serve, $1\leq |J| \leq n_i$. \item \textit{Acceptable constraint}: each passenger $j \in J$ is given an acceptable route $\pi_j$ offered by $i$ such that $t(\pi_j) \leq \theta_j \cdot t(\hat{\pi}_j)$ for $0 < \theta_j \leq 1$, where $\hat{\pi}_j$ is the public transit route with shortest travel time for $j$. \item \textit{Travel time constraint}: each trip $j \in \sigma(i)$ departs from $o_j$ no earlier than $\alpha_j$, arrives at $d_j$ no later than $\beta_j$, and the total travel duration of $j$ is at most $\gamma_j$. The exact application of these time constraints is described in Subsection~\ref{subsection-alg-feas-single} (Algorithm 1) and Subsection~\ref{subsection-alg-feas-all} (Algorithm 2). \item \textit{Stop constraint}: the number of unique locations visited by driver $i$ to pick-up (for Type 1) or drop-off (for Type 2) all passengers of $\sigma(i)$ is at most $\delta_i$. \end{enumerate}
\subsubsection{Phase one (Algorithm 1)}\label{subsection-alg-feas-single} Now we describe how to compute a feasible match between a driver and a passenger for Type 1. The computation for Type 2 is similar and we omit it. For every trip $i \in D \cup R$, we first compute the set $S_{do}(i)$ of feasible drop-off locations for trip $i$. Each element in $S_{do}(i)$ is a station-time tuple $(s, \alpha_i(s))$ of $i$, where $\alpha_i(s)$ is the earliest possible time $i$ can reach station $s$. When computing feasible matches, we use a simplified model for the waiting time and ridesharing service time: given the fastest travel time $t(u,v)$ from location $u$ to location $v$, we multiply a small constant $\epsilon>1$ with $t(u,v)$ to simulate the waiting time and ridesharing service time. In this model, the waiting time and ridesharing service time are considered together, as a whole. The station-time tuples are computed by the following preprocessing procedure. \begin{itemize}[leftmargin=*] \setlength\itemsep{0em} \item We find all feasible station-time tuples for each passenger $j \in R$. A station $s$ is \emph{feasible} for $j$ if $j$ can reach $d_j$ from $s$ within time window $[\alpha_j, \beta_j]$, $t(o_j,s) + \hat{t}(s,d_j) \leq \gamma_j$ and $t(o_j,s) + \hat{t}(s,d_j) \leq \theta_j \cdot \hat{t}(o_j,d_j)$.
\begin{itemize}
\item The earliest possible time to reach station $s$ for $j$ can be computed as $\alpha_j(s) = \alpha_j + t(o_j,s)$ without pick-up and drop-off time. Since we do not consider waiting time and ridesharing service time separately, $\alpha_j(s)$ also denotes the earliest departure time of $j$ at station $s$.
\item Let $\hat{t}(s,d_j)$ be the travel time of a fastest public route. Station $s$ is \emph{time feasible} if $\alpha_j(s) + \hat{t}(s,d_j) \leq \beta_j$, $t(o_j,s) + \hat{t}(s,d_j) \leq \gamma_j$ and $t(o_j,s) + \hat{t}(s,d_j) \leq \theta_j \cdot \hat{t}(o_j,d_j)$.
\end{itemize} \item Next, we find all feasible station-time tuples for each driver $i \in D$ using a similar calculation.
\begin{itemize}[leftmargin=*]
\setlength\itemsep{0em}
\item Without considering pick-up and drop-off time separately, the earliest arrival time of $i$ to reach $s$ is $\alpha_i(s) = \alpha_i + t(o_i,s)$. Station $s$ is \emph{time feasible} if $\alpha_i(s) + t(s,d_i) \leq \beta_i$ and $t(o_i,s) + t(s,d_i) \leq \gamma_i$.
\end{itemize} \end{itemize}
After the preprocessing, Algorithm~1 finds all matches consisting of a single passenger. For each pair $(i, j)$ in $D \times R$, let $\alpha_i(o_j) = \max\{\alpha_i,\alpha_j - t(o_i,o_j)\}$ be the latest departure time for driver $i$ from $o_i$ such that $i$ can still pick-up $j$ at the earliest; this minimizes the time (duration) driver $i$ needs to wait for passenger $j$, and hence, the total travel time of $i$ is minimized. The process of checking whether the match $\sigma(i) = \{i,j\}$ is feasible, for all pairs $(i,j)$, can be performed as in Algorithm~1 in Figure~\ref{alg-feas-single}. \begin{figure}
\caption{Algorithm for computing matches consisting of a single passenger.}
\label{alg-feas-single}
\end{figure}
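To make the above check concrete, a simplified Python sketch of the pairwise feasibility test (Type 1) is given below, using the \texttt{Trip} record introduced earlier. It folds waiting and service times into the travel times, as in our simplified model, and the functions \texttt{t} and \texttt{t\_hat} (driving and fastest public transit times) are assumed to be given; it is an illustration, not the exact pseudo code of Algorithm~1.
\begin{verbatim}
def feasible_single_match(i, j, stations, t, t_hat):
    # latest departure of driver i from o_i allowing the earliest pick-up of j
    dep_i = max(i.earliest_departure, j.earliest_departure - t(i.origin, j.origin))
    for s in stations:                         # candidate drop-off stations
        arrive_s = dep_i + t(i.origin, j.origin) + t(j.origin, s)
        # driver: time window and maximum trip time
        drive_time = t(i.origin, j.origin) + t(j.origin, s) + t(s, i.destination)
        if arrive_s + t(s, i.destination) > i.latest_arrival or drive_time > i.max_trip_time:
            continue
        # rider: time window, maximum trip time and acceptable-route constraint
        ride_time = t(j.origin, s) + t_hat(s, j.destination)
        if (arrive_s + t_hat(s, j.destination) > j.latest_arrival
                or ride_time > j.max_trip_time
                or ride_time > j.acceptance_rate * t_hat(j.origin, j.destination)):
            continue
        return s                               # match {i, j} is feasible with drop-off s
    return None
\end{verbatim}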
\subsubsection{Phase two (Algorithm 2)}\label{subsection-alg-feas-all}
We extend Algorithm~1 to create matches with more than one passenger. Let $H(D,R,E)$ be the graph after computing all possible matches consisting of a single passenger (the instance computed by Algorithm~1). We start with computing feasible matches consisting of two passengers, then three passengers, and so on. Let $\varSigma(i)$ be the set of matches found so far for driver $i$ and $\varSigma(i,p-1) = \{\sigma(i) \in \varSigma(i) \mid |\sigma(i) \setminus \{i\}|=p-1\}$ be the set of matches with $p-1$ passengers, and we try to extend $\varSigma(i,p-1)$ to $\varSigma(i,p)$ for $p \geq 2$. Let $r_i = (l_0,l_1,\ldots,l_p,s)$ denote an ordered potential path (travel route) for driver $i$ to pick-up all $p$ passengers of $\sigma(i)$ and drop them off at station $s$, where $l_0$ is the origin of $i$ and $l_y$ is the pick-up location (origin of passenger $j_y$), $1 \leq y \leq p$. We extend the notion of $\alpha_i(o_j)$, defined above, to all locations of $r_i$. That is, $\alpha_i(l_p)$ is the latest departure time of $i$ to pick-up all passengers $j_1,\ldots,j_p$ such that the waiting time of $i$ is minimized, and hence, the travel time of $i$ is minimized. All possible combinations of $r_i$ are enumerated to find a feasible path $r_i$; the process of finding $r_i$ is described in the following. \begin{itemize} \setlength\itemsep{0em}
\item First, we fix a combination of $r_i$ such that $|\sigma(i)| \leq n_i + 1$ and $r_i$ satisfies the stop constraint. The order of the pick-up origin locations is known when we fix a path $r_i$. \item The algorithm determines the actual drop-off station $s$ in $r_i = (l_0,l_1,\ldots,l_{p},s)$. Let $j_{y}$ be the passenger corresponding to pick-up location $l_y$ for $1 \leq y \leq p$ and $l_0 = o_i$. For each station $s$ in $\bigcap_{0 \leq y \leq p} S_{do}(j_y)$, the algorithm checks if $r_i = (l_0,l_1,\ldots,l_{p},s)$ admits a time feasible path for each trip in $\sigma(i)$.
\begin{itemize}
\item The total travel time (duration) for $i$ from $l_0$ to $s$ is $t_i = t(l_0, l_1) + \cdots + t(l_{p-1},l_{p}) + t(l_p, s)$.
The total travel time (duration) for $j_y$ from $l_y$ to $s$ is $t_{j_y} = t(l_y,l_{y+1}) + \cdots + t(l_{p-1},l_{p}) + t(l_p, s)$, $1 \leq y \leq p$.
\item Since the order for $i$ to pick up $j_y$ ($1 \leq y \leq p$) is fixed, $\alpha_i(l_p)$ can be calculated as $\alpha_i(l_p) = \max\{\alpha_i, \alpha_{j_1} - t(l_0,l_1), \alpha_{j_{2}} - t(l_0,l_1) - t(l_1,l_2), \ldots, \alpha_{j_{p}} - t(l_0,l_1) - \cdots - t(l_{p-1},l_p)\}$.
The earliest arrival time at $s$ for all trips in $\sigma(i)$ is $t = \alpha_i(l_p) + t_i$.
\item If $t \leq \beta_i(s)$, $t_i + t(s, d_i) \leq \gamma_i$, and for $1\leq y\leq p$, $t \leq \beta_{j_{y}}(s)$, $t_{j_y} + \hat{t}(s, d_{j_{y}}) \leq \gamma_{j_{y}}$ and $t_{j_y} + \hat{t}(s, d_{j_{y}}) \leq \theta_{j_{y}} \cdot \hat{t}(o_{j_{y}}, d_{j_{y}})$, then $r_i$ is feasible.
\end{itemize} \item If $r_i$ is feasible, add the match corresponding to $r_i$ to $H$. Otherwise, check the next combination of $r_i$ until a feasible path $r_i$ is found or all combinations are exhausted. \end{itemize} The pseudo code for the above process is given in Algorithm~2 (Figure~\ref{alg-feas-all}). \begin{figure}
\caption{Algorithm for computing matches consisting of multiple passengers.}
\label{alg-feas-all}
\end{figure} We show that the latest departure time $\alpha_i(l_p)$ used in Algorithm~2 indeed minimizes the total travel time of $i$ to reach $l_p$. \begin{theorem} Let $r_i = (l_0,\ldots,l_p,s)$ be a feasible path for driver $i$ that serves the $p$ passengers of a match $\sigma(i)$. Then the latest departure time $\alpha_i(l_p)$ calculated above minimizes the total travel time of $i$ to reach $l_p$. \end{theorem}
\begin{proof} We prove the statement by induction. For the base case, $\alpha_i(l_1) = \max\{\alpha_i,\alpha_{j_{1}} - t(l_0,l_{1})\}$, so $i$ departs as late as possible while never waiting for $j_1$. Hence, the total travel time of $i$ to pick up $j_1$ is minimized with departure time $\alpha_i(l_1)$. Assume the statement holds for $1 \leq y-1 < p$, that is, $\alpha_i(l_{y-1})$ minimizes the total travel time of $i$ to reach $l_{y-1}$. We prove it for $y$. From the calculation, $\alpha_i(l_{y}) = \max\{\alpha_i(l_{y-1}), \alpha_{j_{y}} - t(l_0,l_{1}) - t(l_{1},l_{2}) - \cdots - t(l_{y-1},l_y)\}$. If the maximum is attained by $\alpha_i(l_{y-1})$, then $i$ keeps the schedule that is optimal by the induction hypothesis and picks up $j_y$ without delay; otherwise, $i$ delays its departure exactly enough to arrive at $l_y$ when $j_y$ becomes available, so no waiting time is incurred at $l_y$. In both cases, $\alpha_i(l_{y})$ minimizes the total travel time of $i$ to reach $l_y$.
\end{proof}
The running time of Algorithm~2 heavily depends on the number of subsets of passengers to be checked for feasibility. One way to speed up Algorithm~2 is to use dynamic programming (or memoization) to avoid redundant checks on the same subset. For each feasible match $|\sigma(i)| = p$ of a driver $i \in D$, we store every feasible path $r_i = \{i, j_1,\ldots,j_p,s\}$ and extend from each feasible path $r_i$ to insert a new trip to minimize the number of ordered potential paths we need to test. We can further make sure that no path is tested twice during execution. First, the set $R$ of riders is given a fixed ordering (based on the integer labels). For a feasible path $r_i$ of a driver $i$, the check of inserting a new rider $j$ into $r_i$ is performed only if $j$ is larger than every rider in $r_i$ according to the fixed ordering. A heuristic approach to speed up Algorithm~2 is given at the end of Section~\ref{sec-instances}.
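The following Python sketch illustrates this fixed-ordering extension step. The data structures are hypothetical, and the callback \texttt{is\_feasible\_path} stands for the path-feasibility test of Algorithm~2 described above.
\begin{verbatim}
def extend_matches(feasible_paths, R_sorted, is_feasible_path):
    """One extension round: grow each stored feasible path by one rider.

    feasible_paths   : list of pairs (i, riders) with riders a tuple sorted
                       by the fixed integer labels of the riders
    R_sorted         : all riders in the fixed (integer-label) order
    is_feasible_path : test that enumerates pick-up orders and drop-off
                       stations and returns True if a feasible route exists
    """
    extended = []
    for (i, riders) in feasible_paths:
        if len(riders) + 1 > i.capacity:       # capacity constraint n_i
            continue
        last_label = riders[-1].label
        for j in R_sorted:
            if j.label <= last_label:
                continue                       # only extend with larger labels,
                                               # so no rider subset is tested twice
            if is_feasible_path(i, riders + (j,)):
                extended.append((i, riders + (j,)))
    return extended
\end{verbatim}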
\section{Approximation algorithms} \label{sec-approximate} We show that the maximization problem defined in Section 3 is NP-hard and give approximation algorithms for the problem. When every edge in $H(D,R,E)$ consists of only two vertices (one driver and one passenger), the maximization problem is equivalent to the maximum matching problem, which can be solved in polynomial time. However, if the edges consist of more than two vertices, they become hyperedges. In this case, the integer program~(\ref{obj-1})-(\ref{constraint-2}) becomes a formulation of the maximum weighted set packing problem, which is NP-hard~\cite{CINP79-GJ,Karp72}. Our maximization problem is a special case of the maximum weighted set packing problem. We first show our maximization problem instance $H(D,R,E)$ is indeed NP-hard.
\subsection{NP-hardness}
It was mentioned in~\cite{PNAS14-S} that their minimization problem on a shareability hyper-network, which is similar to our maximization problem formulation, is NP-complete. However, an actual reduction proof was not described. One may think the maximization problem relates to covering problems; rather, it relates to packing problems. We prove our maximization problem is NP-hard by a reduction from a special case of the maximum 3-dimensional matching problem (3DM). An instance of 3DM consists of three disjoint finite sets $A$, $B$ and $C$, and a collection $\mathcal{F} \subseteq A \times B \times C$. That is, $\mathcal{F}$ is a collection of triplets $(a,b,c)$, where $a \in A, b \in B$ and $c \in C$. A 3-dimensional matching is a subset $\mathcal{M} \subseteq \mathcal{F}$ such that all sets in $\mathcal{M}$ are pairwise disjoint. The decision problem of 3DM is that given $(A, B, C, \mathcal{F})$ and an integer $q$, decide whether there exists a matching $\mathcal{M} \subseteq \mathcal{F}$ with $|\mathcal{M}| \geq q$. We consider a special case of 3DM: $|A| = |B| = |C| = q$; it is still NP-complete~\cite{CINP79-GJ,Karp72}. Given an instance $(A,B,C,\mathcal{F})$ of 3DM with $|A| = |B| = |C| = q$, we construct an instance $H(D,R,E)$ (bipartite hypergraph) of the maximization problem as follows (a short code sketch of this construction is given after the list): \begin{itemize} \setlength\itemsep{0em} \item $D(H) = A$, the set of drivers and $R(H) = B \cup C$, the set of passengers. \item For each $f \in \mathcal{F}$, create a hyperedge $e(f)$ in $E(H)$ containing elements $(a,b,c)$, where $a$ represents a driver and $\{b,c\}$ represent two different passengers. Further, create edges $e'(f) = \{a, b\}$ and $e''(f) = \{a, c\}$. \end{itemize}
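The following Python sketch shows this construction with a hypothetical representation of trips and hyperedges.
\begin{verbatim}
def build_hypergraph(A, B, C, F):
    """Construct the bipartite hypergraph H(D, R, E) from a 3DM instance.

    Drivers are the elements of A, passengers are B union C, and every
    triple (a, b, c) in F yields one hyperedge {a, b, c} plus the edges
    {a, b} and {a, c}, exactly as in the reduction described above.
    """
    D = set(A)
    R = set(B) | set(C)
    E = []
    for (a, b, c) in F:
        E.append((a, frozenset({b, c})))   # hyperedge e(f) serving two passengers
        E.append((a, frozenset({b})))      # edge e'(f)
        E.append((a, frozenset({c})))      # edge e''(f)
    return D, R, E
\end{verbatim}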
\begin{theorem} The maximization problem is NP-hard. \end{theorem}
\begin{proof} By Theorem~\ref{theorem-ILP}, we only need to prove the ILP~(\ref{obj-1})-(\ref{constraint-2}) is NP-hard, which is done by showing that an instance $(A,B,C,\mathcal{F})$ of the maximum 3-dimensional matching problem has a solution $\mathcal{M}$ of cardinality $q$ if and only if the bipartite hypergraph instance $H(D,R,E)$ has a solution $X$ with $2q$ passengers.
Assume that $(A,B,C,\mathcal{F})$ has a solution $\mathcal{M} = \{m_1, m_2,\ldots, m_q\}$. For each $m_i$ ($1 \leq i \leq q$), add the corresponding hyperedge $e(m_i) \in E(H)$ to $X$ (that is, setting the corresponding variable $x_{e(m_i)} = 1$). Since $m_i \cap m_j = \emptyset$ for $1 \leq i \neq j \leq q$ (implying the constraint~(\ref{constraint-1}) of the ILP is always satisfied), and each edge $e \in X$ contains two passengers, $X$ is a valid solution to $H(D,R,E)$ with $2q$ passengers.
Assume that $H(D,R,E)$ has a solution $X$ with $2q$ passengers served (the objective function (\ref{obj-1}) of ILP is $2q$ and every edge $e(f) \in X$ corresponds to variable $x_{e(f)} = 1$). For every edge $e(f) \in X$, add the corresponding set $f \in \mathcal{F}$ to $\mathcal{M}$. From the constraint~(\ref{constraint-1}) of the ILP, $X$ is pairwise disjoint. In order to serve $2q$ passengers, $|X| = |D| = q$ since every $e(f) \in X$ must contain two different passengers. Hence, $\mathcal{M}$ is a valid solution to $(A,B,C,\mathcal{F})$ such that $|\mathcal{M}| = q$.
The size of $H(D,R,E)$ is polynomial in $q$. It takes polynomial time to convert a solution of $H(D,R,E)$ to a solution of the 3DM instance $(A,B,C,\mathcal{F})$ and vice versa. \end{proof}
\subsection{2-approximation algorithm} For consistency, we follow the convention in~\cite{SWAT00-B,SODA99-C} that a $\rho$-approximation algorithm for a maximization problem is defined as $\rho \cdot w(\mathcal{C}) \geq OPT$ for $\rho > 1$, where $w(\mathcal{C})$ and $OPT$ are the values of the approximation and optimal solutions, respectively. In this section, we give a $2$-approximation algorithm to the maximization problem instance $H(D,R,E)$. Our $2$-approximation algorithm (referred to as \textit{ImpGreedy}) is a simplified version of the simple greedy~\cite{SWAT00-B,SODA99-C,PNAS14-S} discussed in Section~\ref{sec-app-algs}, except that the running time and memory usage are significantly improved by computing a solution directly from $H(D,R,E)$ without solving the independent set/weighted set packing problem.
\subsubsection{Description of ImpGreedy Algorithm} For a maximization problem instance $H(D,R,E)$, we use $\Gamma$ to denote a current partial solution, which consists of a set of matches represented by the hyperedges in $E(H)$. Let $P(\Gamma)=\bigcup_{e \in \Gamma} J_e$ (called \textit{covered passengers}). Initially, $\Gamma = \emptyset$. In each iteration, we add a match with the largest number of uncovered passengers to $\Gamma$, that is, select an edge $e=(i,J_e)$ such that
$|J_e \setminus P(\Gamma)|$ is maximum, and then add $e$ to $\Gamma$. Remove $E_e = \cup_{j \in A(e)} E_j$ from $E(H)$ ($E_j$ is defined in Section~\ref{sec-exact-IP}). Repeat until $P(\Gamma) = R$ or $|\Gamma| = |D|$. The pseudo code of the ImpGreedy algorithm is shown in Figure~\ref{alg-new-approx}; a small code sketch of the main loop is given after the figure. \begin{figure}
\caption{$2$-approximation algorithm for problem instance $(N,\mathcal{A},T)$.}
\label{alg-new-approx}
\end{figure}
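The following Python sketch illustrates the greedy selection loop of ImpGreedy; the representation of hyperedges as driver/passenger-set pairs is hypothetical.
\begin{verbatim}
def imp_greedy(D, R, E):
    """ImpGreedy sketch (Algorithm 3): repeatedly pick a match covering the
    largest number of uncovered passengers, then discard every match that
    shares the chosen driver or one of the chosen passengers."""
    covered = set()                # P(Gamma)
    solution = []                  # Gamma
    edges = list(E)                # each edge is (driver, frozenset of passengers)
    while edges and covered != set(R) and len(solution) < len(D):
        best = max(edges, key=lambda e: len(e[1] - covered))
        if not (best[1] - covered):
            break                  # no remaining match covers a new passenger
        solution.append(best)
        covered |= best[1]
        driver, riders = best
        edges = [e for e in edges if e[0] != driver and not (e[1] & riders)]
    return solution, covered
\end{verbatim}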
\noindent In ImpGreedy Algorithm, when an edge $e$ is added to $\Gamma$, $E_e$ is removed from $E(H)$, so Property~\ref{property-gamma} holds for $\Gamma$. \begin{property} For every $i \in D$, at most one edge $e$ from $E_i$ can be selected in any solution. \label{property-gamma} \end{property}
\subsubsection{Analysis of ImpGreedy Algorithm} Let $\Gamma = \{x_1, x_2,\ldots, x_a\}$ be a solution found by Algorithm~3, where $x_i$ is the $i^{th}$ edge added to $\Gamma$.
Throughout the analysis, we use $OPT$ to denote an optimal solution, that is, $|P(OPT)| \geq |P(\Gamma)|$. Further, $\Gamma_i = \bigcup_{1 \leq b \leq i} x_b$ for $1 \leq i \leq a$, $\Gamma_0 = \emptyset$ and $\Gamma_a = \Gamma$. The driver of match $x_i$ is denoted by $d(x_i)$. The main idea of our analysis is to add up the maximum difference between the number of passengers covered by selecting $x_i$ in $\Gamma$ and the number covered by not selecting $x_i$ in $OPT$. For each $x_i\in \Gamma$, by Property~\ref{property-gamma}, there is at most one $y \in OPT$ with $d(y)=d(x_i)$. We order $OPT$ and introduce dummy edges to $OPT$ such that $d(y_i) = d(x_i)$ for $1 \leq i\leq a$. Formally, for $1\leq i\leq a$, define \[ OPT(i)=\{y_1,\ldots,y_i \mid 1\leq b \leq i, d(y_b)=d(x_b) \text{ if } y_b \in OPT, \text{ otherwise } y_b \text{ a dummy edge}\}. \] A dummy edge $y_b\in OPT(i)$ is defined as $d(y_b) = d(x_b)$ with $J_{y_b}=\emptyset$. The gap of an edge $x_i \in \Gamma$ is defined as \[
{\mathop{\rm gap}}(x_i) = |J_{y_i}| - |J_{x_i} \setminus P(\Gamma_{i-1})| + |J'_{x_i}|, \]
where $J'_{x_i} = (J_{x_i} \setminus P(\Gamma_{i-1})) \cap P(OPT \setminus \Gamma)$ is the maximum subset of passengers in $J_{x_i} \setminus P(\Gamma_{i-1})$ that are also covered by drivers in $OPT \setminus \Gamma$. The intuition is that the sum of ${\mathop{\rm gap}}(x_i)$ for all $x_i \in \Gamma$ states the maximum possible number of passengers that may not be covered by $\Gamma$. Let $P(OPT(i)) = \bigcup_{1 \leq b \leq i} J_{y_b}$ and $P(OPT'(i)) = \bigcup_{1 \leq b \leq i} J'_{x_b}$ for any $i \in [1,\ldots,a]$. Then the maximum gap between $\Gamma$ and $OPT$ can be calculated as $\sum_{x \in \Gamma_a} {\mathop{\rm gap}}(x) = |P(OPT(a))| + |P(OPT'(a))| - |P(\Gamma_{a})|$. First, we show that $P(OPT) = P(OPT(a)) \cup P(OPT'(a))$.
\begin{prop} Let $\Gamma = \{x_1,\ldots,x_a\}$, $P(OPT(a)) = \bigcup_{1 \leq i \leq a} J_{y_i}$ and $P(OPT'(a)) = \bigcup_{1 \leq i \leq a} J'_{x_i}$. Then, $P(OPT) = P(OPT(a)) \cup P(OPT'(a))$. \label{prop-opt-size} \end{prop}
\begin{proof} By definition, $P(OPT)=P(OPT(a)) \cup P(OPT \setminus OPT(a))$. For any $z$ in $OPT\setminus OPT(a)$, $d(z) \neq d(x)$ for every $x \in \Gamma$. If $J_z \setminus P(\Gamma) \neq \emptyset$, then $z$ would have been found and added to $\Gamma$ by Algorithm~3. Hence, $J_z \setminus P(\Gamma) = \emptyset$, implying $J_z \subseteq P(OPT'(a))$ and $P(OPT \setminus OPT(a)) \subseteq P(OPT'(a))$. \end{proof}
\begin{lemma}
Let $OPT$ be an optimal solution and $\Gamma = \{x_1, x_2,\ldots, x_a\}$ be a solution found by the algorithm. For any $1 \leq i \leq a$, $\sum_{x \in \Gamma_i} {\mathop{\rm gap}}(x) = |P(OPT(i))| - |P(\Gamma_{i})| + |P(OPT'(i))| \leq |P(\Gamma_i)|$. \label{lemma-max-gap} \end{lemma}
\begin{proof}
Recall that $OPT(i)=\{y_1,\ldots,y_i\}$ as defined above. For $y_b \in OPT(i), 1 \leq b \leq i, d(y_b)=d(x_b)$. We prove the lemma by induction on $i$. Base case $i=1$: $|P(OPT(1))| - |P(\Gamma_1)| + |P(OPT'(1))| \leq |P(\Gamma_1)|$. By definition, ${\mathop{\rm gap}}(x_1) = |J_{y_1}| - |J_{x_1} \setminus \Gamma_0| + |J'_{x_1}|$. Since $x_1$ is selected by the algorithm, it must be that $|J_{x_1}| \geq |J_u|$ for all edges $u \in E(H)$, so $|J_{y_1}| \leq |J_{x_1}|$. Thus, \begin{align*}
{\mathop{\rm gap}}(x_1) &= |J_{y_1}| - |J_{x_1} \setminus \Gamma_0| + |J'_{x_1}| \\
&\leq |J'_{x_1}| \leq |J_{x_1}|. \end{align*}
Assume the statement is true for $i-1 \geq 1$, that is, $\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x) \leq |P(\Gamma_{i-1})|$, and we prove for $i \leq a$. By the induction hypothesis, both $P(OPT(i-1))$ and $P(OPT'(i-1))$ are included in the calculation of $\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x)$. More precisely, $\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x) = |P(OPT(i-1))| - |P(\Gamma_{i-1})| + |P(OPT'(i-1))| \leq |P(\Gamma_{i-1})|$. If $|J_{y_i}| \leq |J_{x_i} \setminus P(\Gamma_{i-1})|$, the lemma is true since $|J'_{x_i}| \leq |J_{x_i} \setminus P(\Gamma_{i-1})|$ by definition. Suppose $|J_{y_i}| > |J_{x_i} \setminus P(\Gamma_{i-1})|$. Before $x_i$ is selected, the algorithm must have considered $y_i$ and found that $|J_{x_i} \setminus P(\Gamma_{i-1})| \geq |J_{y_i} \setminus P(\Gamma_{i-1})|$. Then, $|J_{y_i}| > |J_{x_i} \setminus P(\Gamma_{i-1})| \geq |J_{y_i} \setminus P(\Gamma_{i-1})|$, implying $J_{y_i} \cap P(\Gamma_{i-1}) \neq \emptyset$. We have \begin{align}
|J_{x_i} \setminus P(\Gamma_{i-1})| + |J_{y_i} \cap P(\Gamma_{i-1})|
\geq |J_{y_i} \setminus P(\Gamma_{i-1})| + |J_{y_i} \cap P(\Gamma_{i-1})| = |J_{y_i}|. \label{eq-4} \end{align}
Let $J''_{y_i} \subseteq (J_{y_i} \cap P(\Gamma_{i-1}))$ be the set of passengers covered by $P(OPT(i-1)) \cup P(OPT'(i-1))$, namely $J''_{y_i} \subseteq (P(OPT(i-1)) \cup P(OPT'(i-1)))$.
Then by the induction hypothesis, \begin{align}
\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x) \leq |P(\Gamma_{i-1})| - |J_{y_i} \cap P(\Gamma_{i-1})| + |J''_{y_i}|. \label{eq-5} \end{align} Adding $\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x)$ and ${\mathop{\rm gap}}(x_i)$ together: \begin{align*} &(\sum_{x \in \Gamma_{i-1}} {\mathop{\rm gap}}(x)) + ({\mathop{\rm gap}}(x_i)) \\
&= |P(OPT(i-1))| - |P(\Gamma_{i-1})| + |P(OPT'(i-1))| + |J_{y_i} \setminus J''_{y_i}| - |J_{x_i} \setminus P(\Gamma_{i-1})| + |J'_{x_i}| \\
&\leq (|P(\Gamma_{i-1})| - |J_{y_i} \cap P(\Gamma_{i-1})| + |J''_{y_i}|) + |J_{y_i} \setminus J''_{y_i}| - |J_{x_i} \setminus P(\Gamma_{i-1})| + |J'_{x_i}| \hspace*{16mm} \text{from } (\ref{eq-5}) \\
&= |P(\Gamma_{i-1})| - |J_{y_i} \cap P(\Gamma_{i-1})| + |J_{y_i}| - |J_{x_i} \setminus P(\Gamma_{i-1})| + |J'_{x_i}| \\
&\leq |P(\Gamma_{i-1})| - |J_{y_i} \cap P(\Gamma_{i-1})| + |J_{y_i} \cap P(\Gamma_{i-1})| + |J'_{x_i}| \hspace*{53mm} \text{from } (\ref{eq-4}) \\
&= |P(\Gamma_{i-1})| + |J'_{x_i}| \leq |P(\Gamma_{i-1})| + |J_{x_i} \setminus P(\Gamma_{i-1})| \hspace*{45mm} \text{by definition of } J'_{x_i} \\ &= |P(\Gamma_{i})| \end{align*} Therefore, by induction, the lemma holds. \end{proof}
\begin{theorem}
Given the hypergraph instance $H(D,R,E)$, Algorithm~3 computes a solution $\Gamma$ to $H$ such that $2|P(\Gamma)| \geq |P(OPT)|$, where $OPT$ is an optimal solution, with running time $O(|D| \cdot |E|)$, where $|E| \leq |D| \cdot (|R|+1)^K$. \label{theorem-ImpGreedy} \end{theorem}
\begin{proof}
Let $\Gamma = \{x_1,\ldots,x_a\}$, $P(OPT(a)) = \bigcup_{1 \leq i \leq a} J_{y_i}$ and $P(OPT'(a)) = \bigcup_{1 \leq i \leq a} J'_{x_i}$. By Proposition~\ref{prop-opt-size}, $P(OPT) = P(OPT(a)) \cup P(OPT'(a))$, and by Lemma~\ref{lemma-max-gap}, $|P(OPT(a))| + |P(OPT'(a))| - |P(\Gamma_{a})| \leq |P(\Gamma_a)|$. We have \[
|P(OPT)| \leq |P(OPT(a))| + |P(OPT'(a))| \leq 2|P(\Gamma)|. \]
In each iteration of the while-loop, it takes $O(|E|)$ time to find an edge $x$ with maximum $|J_x \setminus P(\Gamma)|$, and there are at most $|D|$ iterations. Hence, Algorithm~3 runs in $O(|D| \cdot |E|)$ time. \end{proof}
\subsection{Approximation algorithms for maximum weighted set packing}\label{sec-app-algs} Now, we explain the algorithms for the maximum weighted set packing problem, which solve our maximization problem. Given a universe $\mathcal{U}$ and a family $\mathcal{S}$ of subsets of $\mathcal{U}$, a \emph{packing} is a subfamily $\mathcal{C} \subseteq \mathcal{S}$ of sets such that all sets in $\mathcal{C}$ are pairwise disjoint. Every subset $S \in \mathcal{S}$ has at most $k$ elements and is given a real weight. The maximum weighted $k$-set packing problem (MWSP) asks to find a packing $\mathcal{C}$ with the largest total weight. We can see that the maximization problem on $H(D,R,E)$ is a special case of the maximum weighted $k$-set packing problem, where the set of trips $D \cup R$ is the universe $\mathcal{U}$ and $E(H)$ is the family $\mathcal{S}$ of subsets, and every $e \in E(H)$ represents at most $k = K+1$ trips ($K$ is the maximum capacity of all vehicles). Hence, solving MWSP also solves our maximization problem. Chandra and Halld\'{o}rsson~\cite{SODA99-C} presented a $\frac{2(k+1)}{3}$-approximation algorithm and a $\frac{2(2k+1)}{5}$-approximation algorithm (referred to as \textit{BestImp} and \textit{AnyImp}, respectively), and Berman~\cite{SWAT00-B} presented a $(\frac{k+1}{2} + \epsilon)$-approximation algorithm (referred to as \textit{SquareImp}) for the weighted $k$-set packing problem (here, $k = K + 1$), the latter of which still has the best approximation ratio.
The three algorithms in~\cite{SWAT00-B,SODA99-C} (AnyImp, BestImp and SquareImp) solve the weighted $k$-set packing problem by first transforming it into a weighted independent set problem, which consists of a vertex weighted graph $G(V,E)$ and asks to find a maximum weighted independent set in $G(V,E)$. We briefly describe the common local search approach used in these three approximation algorithms. A \emph{claw} $C$ in $G$ is defined as an induced connected subgraph that consists of an independent set $T_C$ of vertices (called talons) and a center vertex $C_z$ that is connected to all the talons ($C$ is an induced star with center $C_z$). For any vertex $v \in V(G)$, let $N(v)$ denote the set of vertices in $G$ adjacent to $v$, called the \emph{neighborhood} of $v$. For a set $U$ of vertices, $N(U) = \cup_{v \in U} N(v)$. The \textit{local search} of AnyImp, BestImp and SquareImp uses the same central idea, summarized as follows: \begin{enumerate} \item The approximation algorithms start with an initial solution (independent set) $I$ in $G$ found by a \textbf{simple greedy} (referred to as \textit{Greedy}) as follows: select a vertex $u \in V(G)$ with the largest weight and add it to $I$. Eliminate $u$ and all $u$'s neighbors from being selected. Repeatedly select the largest weight vertex until all vertices are eliminated from $G$. \item While there exists a claw $C$ in $G$ w.r.t. $I$ such that the independent set $T_C$ improves the weight of $I$ (different for each algorithm), augment $I$ as $I = (I \setminus N(T_C)) \cup T_C$; such an independent set $T_C$ is called an \emph{improvement}. \end{enumerate} To apply these algorithms to our maximization problem, we need to convert the bipartite hypergraph $H(D,R,E)$ to a weighted independent set instance $G(V,E)$, which is straightforward. Each hyperedge $e \in E(H)$ is represented by a vertex $v_e \in V(G)$. The weight $w(v_e) = p(e)$ for each $e \in E(H)$ and $v_e \in V(G)$. There is an edge between $v_{e}, v_{e'} \in V(G)$ if $e \cap e' \neq \emptyset$ where $e, e' \in E(H)$. We observed the following property. \begin{property}
When the size of each set in the set packing problem is at most $k$ $(|e| = k, e \in E(H))$, the graph $G(V,E)$ has the property that it is $(k+1)$-claw free, that is, $G(V,E)$ does not contain an independent set of size $k+1$ in the neighborhood of any vertex. \end{property}
Applying this property, we only need to search for a claw $C$ consisting of at most $k$ talons, which bounds the running time for finding a claw by $O(n^k)$, where $n = |V(G)|$. When $k$ is very small, it is practical enough for solving our maximization problem instance $H(D,R,E)$ computed by Algorithm~2 from $(N,\mathcal{A},T)$. It has been mentioned in~\cite{PNAS14-S} that the approximation algorithm in~\cite{SODA99-C} can be applied to the ridesharing problem. However, only the simple greedy (\textit{Greedy}) with $k$-approximation was implemented in~\cite{PNAS14-S}. Notice that algorithm ImpGreedy (Algorithm 3) is a simplified version of algorithm Greedy, and Greedy is used to get an initial solution in algorithms AnyImp, BestImp and SquareImp. From Theorem~\ref{theorem-ImpGreedy}, we have Corollary~\ref{corollary-approximate}. \begin{corollary} Greedy, AnyImp, BestImp and SquareImp algorithms compute a solution to $H(D,R,E)$ with 2-approximation ratio. \label{corollary-approximate} \end{corollary} Since ImpGreedy finds a solution directly on $H(D,R,E)$ without converting it to $G(V,E)$ and solving the independent set problem of $G(V,E)$, it is more time and space efficient than the algorithms for MWSP. In the rest of this paper, Algorithm 3 is referred to as ImpGreedy.
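For illustration, the following Python sketch builds the vertex-weighted conflict graph $G(V,E)$ from the hyperedges of $H(D,R,E)$ as described above; the representation is hypothetical and \texttt{weight} stands for the profit function $p(\cdot)$.
\begin{verbatim}
def to_conflict_graph(E, weight):
    """Build the vertex-weighted conflict graph used by Greedy/AnyImp/BestImp:
    one vertex per hyperedge of H(D,R,E); two vertices are adjacent whenever
    the corresponding hyperedges share a trip (the driver or a passenger)."""
    V = list(range(len(E)))                              # vertex v_e per hyperedge e
    w = {v: weight(E[v]) for v in V}                     # w(v_e) = p(e)
    trips = [frozenset({E[v][0]}) | E[v][1] for v in V]  # all trips of hyperedge v
    adj = {v: set() for v in V}
    for u in V:
        for v in V:
            if u < v and trips[u] & trips[v]:
                adj[u].add(v)
                adj[v].add(u)
    return V, w, adj
\end{verbatim}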
\section{Numerical experiments} \label{sec-experiment} We create a simulation environment consisting of a centralized system that integrates public transit and ridesharing. We implement our proposed approximation algorithm (ImpGreedy) and Greedy, AnyImp and BestImp algorithms for the $k$-set packing problem to evaluate the benefits of having an integrated transportation system supporting public transit and ridesharing. The exact algorithm, ILP (\ref{obj-1})-(\ref{constraint-2}), is not evaluated because it takes too long to complete for the instances in our study. The results of SquareImp are not discussed because its performance is the same as that of AnyImp; this is due to the implementation of the search/enumeration order of the vertices and edges in the independent set instance $G(V,E)$ being fixed, and each vertex in $V(G)$ having an integer weight. We use a simplified transit network of Chicago to simulate the public transit and ridesharing.
\subsection{Description and characteristics of datasets} We built a simplified transit network of Chicago to simulate practical scenarios of public transit and ridesharing. The roadmap data of Chicago is retrieved from OpenStreetMap\footnote{Planet OSM. \url{https://planet.osm.org}}. We used the GraphHopper\footnote{GraphHopper 1.0. \url{https://www.graphhopper.com}} library to construct the logical graph data structure of the roadmap. The city of Chicago is divided into 77 official community areas, each of which is assigned an area code. We examined two different datasets in Chicago to reveal some basic traffic patterns (the datasets are provided by the Chicago Data Portal (CDP) and Chicago Transit Authority (CTA)\footnote{CDP. \url{https://data.cityofchicago.org}. CTA. \url{https://www.transitchicago.com}}, maintained by the City of Chicago). The first dataset is bus and rail ridership, which shows the monthly averages and monthly totals for all CTA bus routes and train station entries. We denote this dataset as \textit{PTR, public transit ridership}. The PTR dataset range is chosen from June 1st, 2019 to June 30th, 2019. The second dataset is rideshare trips reported by Transportation Network Providers (sometimes called rideshare companies) to the City of Chicago. We denote this dataset as \textit{TNP}. The TNP dataset range is chosen from June 3rd, 2019 to June 30th, 2019, a total of 4 weeks of data. Table~\ref{table-PTRdata} and Table~\ref{table-TNPdata} show some basic stats of both datasets. \begin{table}[htbp] \small \captionsetup{font=small} \parbox{.5\linewidth}{ \centering
\begin{tabular}{| p{3.7cm} | p{3.3cm} |} \hline Total Bus Ridership & 20,300,416 \\ Total Rail Ridership & 19,282,992 \\ \hline
12 busiest bus routes & 3, 4, 8, 9, 22, 49, 53, 66, 77, 79, 82, 151 \\ \hline The busiest bus routes selected & 4, 9, 49, 53, 77, 79, 82 \\ \hline \end{tabular} \caption{Basic stats of the PTR dataset \label{table-PTRdata}} }
\parbox{.49\linewidth}{ \centering
\begin{tabular}{| p{4.3cm} | p{2.8cm} |} \hline \# of original records & 8,820,037 \\ \hline \# of records considered & 7,427,716 \\ \# of shared trips & 1,015,329 \\ \# of non-shared trips & 6,412,387 \\ \hline The most visited community areas selected & 1, 4, 5, 7, 22, 23, 25, 32, 41, 64, 76 \\ \hline \end{tabular} \caption{Basic stats of the TNP dataset \label{table-TNPdata}} } \end{table}
In the PTR dataset, the total ridership for each bus route is recorded; there are 127 bus routes in the dataset. We examined the 12 busiest bus routes based on the total ridership and selected 7 out of the 12 routes as listed in Table~\ref{table-PTRdata} to build the transit network (excluded bus routes either serve a small community or are too close to train stations). We also selected all the major trains/metro lines within the Chicago area except the Brown Line and Purple Line since they are too close to the Red and Blue lines. Note that the PTR dataset also provides the total rail ridership. However, it only provides the number of riders entering every station each day; it does not provide the number of riders exiting a station nor the times associated with the entries.
Each record in the TNP dataset describes a passenger trip served by a driver who provides the rideshare service; a trip record consists of the pick-up and drop-off time and the pick-up and drop-off community area of the trip, and exact locations are provided sometimes. We removed records where the pick-up or drop-off community area is hidden for privacy reasons or not within Chicago, which results in 7.4 million ridesharing trips. We calculated the average number of trips per day departed from/arrived at each area. \begin{figure}
\caption{The average number of trips per day departed from and arrived at each area.}
\label{fig-OD-pairs}
\end{figure} The results are plotted in Figure~\ref{fig-OD-pairs}; the community areas that have the highest number of departure trips are almost the same as that of the arrival trips.
We selected 11 of the 20 most visited areas as listed in Table~\ref{table-TNPdata} (area 32 is Chicago downtown, areas 64 and 76 are airports) to build the transit network for our simulation. From the selected bus routes, trains and community areas, we create a simplified public transit network connecting urban community areas, depicted in Figure~\ref{fig-transit-network}. \begin{figure}
\caption{Simplified public transit network of Chicago with 13 urban communities and 3 designated locations. Figure on the right has the Chicago city map overlay for scale.}
\label{fig-transit-network}
\end{figure} Each rectangle on the figure represents an urban community within one community area or across two community areas, labeled in the rectangle. The blue dashed rectangles/urban communities are chosen due to the busiest bus routes from the PTR dataset. The rectangles/urban communities labeled with red area codes are chosen due to the most visited community areas from the TNP dataset. The dashed lines are the trains, which resemble the major train services in Chicago. The solid lines are the selected bus routes connecting the urban communities to their closest train stations. There are also three designated locations/destinations that many people want to travel to/from throughout the day; they are the two airports and downtown region in Chicago.
The travel time between two locations (each location consists of the latitude and longitude coordinates) uses the fastest/shortest route computed by the GraphHopper library, which is based on personal cars. The shortest paths are \textbf{computed in real-time}, unlike many previous simulations where the shortest paths are precomputed and stored. As mentioned in Section~\ref{subsection-alg-feas-single}, waiting time and service time are considered in a simplified model; we multiply the fastest route time by a small constant $\epsilon > 1$ to mimic waiting time and service time for public transit. For instance, consider two consecutive metro stations $s_1$ and $s_2$. The travel time $t(s_1,s_2)$ is computed by the fastest route, and the travel time by train from $s_1$ to $s_2$ is $\hat{t}(s_1,s_2) = 1.15 \cdot t(s_1,s_2)$. The constant $\epsilon$ for bus service is 2. Rider trips originating from most locations must take a bus to reach a metro station when ridesharing service is not involved.
\subsection{Generating instances}\label{sec-instances} In our simulation, we partition each day from 6:00 to 23:59 into 72 time intervals (each of 15 minutes), and we only focus on weekdays. To see ridesharing traffic patterns, we calculated the average number of served trips per hour for each day of the week using the TNP dataset. The dashed (orange) line and solid (blue) line of the plot in Figure~(\ref{fig-sub-originalTrips}) represent shared trips and non-shared trips respectively. A set of trips is called \emph{shared trips} if the trips are matched to the same vehicle consecutively such that they may overlap in time, namely, one or more passengers are in the same vehicle. All other trips are called \textit{non-shared trips}. From the plot, the peak hours are between 7:00 AM and 9:00 AM and between 4:00 PM and 7:00 PM on weekdays for both non-shared and shared trips. The number of trips generated for each interval is plotted in Figure~(\ref{fig-sub-nTrips}), which is a scaled down and smoothed version of the TNP dataset for weekdays. The ratio between the number of drivers and riders generated is roughly 1:3 (1 driver and 3 riders) for each interval. Such a ratio is chosen because it should reflect the system's potential, as a capacity of 3 is common for most vehicles. \begin{figure}
\caption{Average numbers of shared and non-shared trips in TNP dataset.}
\label{fig-sub-originalTrips}
\caption{Total number of driver and rider trips generated for each time interval.}
\label{fig-sub-nTrips}
\caption{Plots for the number of trips for every hour from data and generated.}
\label{fig-nTrips-plot}
\end{figure} For each time interval, we first generate a set $R$ of riders and then a set $D$ of drivers. We do not generate a trip whose origin and destination are close to each other. For example, no trip with origin Area25 and destination Area15 is generated.
\paragraph{Generation of rider trips.} We assume that the numbers of riders entering and exiting a station are the same. Next, we assume that the numbers of riders in PTR over the time intervals of each day follow a distribution similar to that of the TNP trips over the time intervals. Each day is divided into 6 different consecutive time periods (each consists of multiple time intervals): morning rush, morning normal, noon, afternoon normal, afternoon rush, and evening time periods. Each time period determines the probability and distribution of origins and destinations. Based on the PTR dataset and Rail Capacity Study by CTA~\cite{CTA19}, many riders are going into downtown in the morning and leaving downtown in the afternoon. To generate a rider trip $j$ during the \textbf{morning rush} time period, we first decide a \emph{pickup area} which is a community area selected uniformly at random. The origin $o_j$ is a random point within the selected pickup area. Then, we use the standard normal distribution to determine the \emph{dropoff area}, where the downtown area is within two SDs (standard deviations), airports are more than two and at most three SDs, and the community areas are more than three SDs away from the mean. The destination $d_j$ is a random point within the selected dropoff area. The above is repeated until $a_t$ riders are generated, where $a_t + a_t / 3$ (riders + drivers) is the total number of trips for time interval $t$ shown in Figure~(\ref{fig-sub-nTrips}). For any pickup area $c$, let $c_t$ be the number of generated riders originating from $c$ for time interval $t$, that is, $\sum_c c_t = a_t$. Other time periods follow the same procedure, and all community areas and locations can be selected as pickup and dropoff areas (a small sampling sketch is given after the list): \begin{enumerate} \setlength\itemsep{0em} \item \textbf{Morning normal}: for pickup area, community areas are within two SDs, downtown is more than two and at most three SDs and airports are more than three SDs away from the mean; and destination area is selected using uniform distribution. \item \textbf{Noon}: both pickup and dropoff are selected using uniform distribution. \item \textbf{Afternoon normal}: for pickup area, downtown and airport are within two SDs and community areas are more than two SDs away from the mean; for dropoff area, community areas are within two SDs and downtown and airports are more than two SDs away from the mean. \item \textbf{Afternoon rush}: for pickup area, downtown is within two SDs, airports are more than two SDs and at most three SDs and community areas are more than three SDs away from the mean; and for dropoff area, community areas are within two SDs, airports are more than two SDs and at most three SDs and downtown is more than three SDs away from the mean. \item \textbf{Evening}: for both pickup and dropoff areas, community areas are within two SDs, downtown is more than two and at most three SDs and airports are more than three SDs away from the mean. \end{enumerate}
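The following Python sketch illustrates the dropoff-area sampling for the morning rush period; the area lists are placeholders, and the exact sampling code used in our simulator may differ.
\begin{verbatim}
import random

def sample_dropoff_morning_rush(downtown, airports, community_areas):
    """Sample a dropoff area for the morning-rush period: the absolute value
    of a standard normal draw decides the band, as described above."""
    z = abs(random.gauss(0.0, 1.0))
    if z <= 2.0:                      # within two SDs -> downtown
        return downtown
    elif z <= 3.0:                    # more than two, at most three SDs -> airports
        return random.choice(airports)
    else:                             # more than three SDs -> community areas
        return random.choice(community_areas)
\end{verbatim}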
\paragraph{Generation of driver trips.} We examined the TNP dataset to determine if there are enough drivers who can provide ridesharing service to riders that follow the Type 1 and Type 2 traffic patterns. First, we removed any trip from TNP if it is too short (less than 15 minutes or origin and destination are adjacent areas). We calculated the average number of trips per hour originating from every pre-defined area in the transit network (Figure~\ref{fig-transit-network}), and then plotted the destinations of such trips in a grid heatmap. In other words, each cell $(c,r)$ in the heatmap represents the average number of trips per hour originating from area $c$ to destination area $r$ in the transit network (Figure~\ref{fig-transit-network}). An example is depicted in Figure~\ref{fig-OD-distribution}. \begin{figure}
\caption{Traffic heatmaps for the average number of trips originated from one area (x-axis) during hour 7:00 (left) and hour 17:00 (right) to every other destination area (y-axis).}
\label{fig-OD-distribution}
\end{figure} From the heatmaps, many trips are going into the downtown area (A32) in the morning; and as time progresses, more and more trips leave downtown. This traffic pattern confirms that there are enough drivers to serve the riders in our simulation. The number of shared trips shown in Figure~\ref{fig-sub-originalTrips} also suggests that many riders are willing to share a same vehicle. We slightly reduce the difference between the values of each cell in the heatmaps and use the idea of marginal probability to generate driver trips. Let $d(c,r,h)$ be the value at the cell $(c,r)$ for origin area $c$, destination $r$ and hour $h$. Let $P(c,h)$ be sum of the average number of trips originated from area $c$ for hour $h$ (the column for area $c$ in the heatmap corresponds to hour $h$), that is, $P(c, h) = \sum_{r} d(c,r,h)$ is the sum of the values of the whole column $c$ for hour $h$. Given a time interval $t$, for each area $c$, we generate $c_t/3$ drivers ($c_t$ is defined in Generation of rider trips) such that each driver $i$ has origin $o_i = c$ and destination $d_i = r$ with probability $d(c,r,h)/P(c,h)$, where $t$ is contained in hour $h$. The probability of selecting an airport as destination is fixed at 5\%.
\paragraph{Deciding other parameters for each trip.} After the origin and destination of a rider or driver trip have been determined, we decide other parameters of the trip. The capacity $n_i$ of drivers' vehicles is selected from three ranges: the {\em low range} [1,2,3], {\em mid range} [3,4,5], and {\em high range} [4,5,6]. During morning/afternoon peak hours, roughly 95\% and 5\% of vehicles have capacities randomly selected from the low range and mid range respectively. It is realistic to assume vehicle capacity is lower for morning and afternoon peak-hour commutes. During off-peak hours, roughly 80\%, 10\% and 10\% of vehicles have capacities randomly selected from low range, mid range and high range respectively. The number $\delta_i$ of stops equals $n_i$ if $n_i \leq 3$, else it is chosen uniformly at random from $[n_i-2, n_i]$ inclusive. The detour limit $z_i$ of each driver is within 5 to 20 minutes because traffic is not considered, and waiting time and service time are considered in a simplified model. The general information of the base instances is summarized in Table~\ref{table-simulation} (a small sketch of this parameter sampling is given after the table). \begin{table}[htbp] \footnotesize \centering
\begin{tabular}{ l | p{11.3cm} }
\hline
Major trip patterns & from urban communities to downtown and vice versa for peak and off-peak hours respectively;
trips specify one match type for peak hours and can be in either type for off-peak hours \\
\# of intervals simulated & Start from 6:00 AM to 11:59 PM; each interval is 15 minutes \\
\# of trips per interval & varies from [350, 1150] roughly, see Figure~\ref{fig-nTrips-plot} \\
Driver:rider ratio & 1:3 approximately \\
Capacity $n_i$ of vehicles & low: [1,3], mid: [3,5] and high: [4,6] inclusive \\
Number $\delta_i$ of stops limit & $\delta_i=n_i$ if $n_i \leq 3$, or $\delta_i \in [n_i-2,n_i]$ if $n_i \geq 4$ \\
Earliest departure time $\alpha_i$ & immediate to 2 intervals after a trip announcement is generated \\
Driver detour limit $z_i$ & 5 minutes to min\{$2 \cdot t(o_i,d_i)$ (driver's fastest route), 20 minutes\} \\
Latest arrival time $\beta_i$ & at most $1.5 \cdot (t(o_i,d_i) + z_i) + \alpha_i$ \\
Travel duration $\gamma_i$ of driver $i$ & $\gamma_i = t(o_i,d_i) + z_i$ \\
Travel duration $\gamma_j$ of rider $j$ & $\gamma_j = t(\hat{\pi}_j)$, where $\hat{\pi}_j$ is the fastest public transit route \\
Acceptance rate & 80\% for all riders (0.8 times the fastest public transit route) \\
Train and bus travel time & average at 1.15 and 2 times the fastest route by car, respectively \\ \hline
\end{tabular} \captionsetup{font=small} \caption{General information of the base instances.} \label{table-simulation} \end{table}
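The following Python sketch illustrates the parameter sampling summarized in Table~\ref{table-simulation}; the function name and interface are hypothetical.
\begin{verbatim}
import random

LOW, MID, HIGH = [1, 2, 3], [3, 4, 5], [4, 5, 6]

def driver_parameters(peak, fastest_minutes):
    """Sample the capacity n_i, stop limit delta_i and detour limit z_i of a
    driver; `fastest_minutes` is the duration t(o_i, d_i) of the driver's
    fastest route in minutes."""
    roll = random.random()
    if peak:                                    # 95% low range, 5% mid range
        n = random.choice(LOW if roll < 0.95 else MID)
    else:                                       # 80% low, 10% mid, 10% high
        n = random.choice(LOW if roll < 0.80 else MID if roll < 0.90 else HIGH)
    delta = n if n <= 3 else random.randint(n - 2, n)       # stop limit
    upper = max(5.0, min(2.0 * fastest_minutes, 20.0))      # guard for short trips
    z = random.uniform(5.0, upper)                          # detour limit (minutes)
    return n, delta, z
\end{verbatim}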
\paragraph{Reduction configuration procedure.} When the number of trips increases, the running time for Algorithm~2 and the time needed to construct the $k$-set packing instance also increase. This is due to the increased number of feasible matches for each driver $i \in D$. In a practical setup, we may restrict the number of feasible matches a driver can have. Each match produced by Algorithm~1 is called a \emph{base match}, which consists of exactly one driver and one passenger. To make the simulation feasible, we heuristically limit the numbers of base matches for each driver and each rider and the number of total feasible matches for each driver. We use $(x\%, y, z)$, called \emph{reduction configuration} (\emph{Config} for short), to denote that for each driver $i$, the number of base matches of $i$ is reduced to $x$ percentage and at most $y$ total feasible matches are computed for $i$; and for each rider $j$, at most $z$ base matches containing $j$ are used.
After Algorithm~1 is completed, a reduction procedure may be invoked with respect to a reduction Config. Let $H(D,R,E)$ be the graph after computing all feasible base matches (instance computed by Algorithm~1 and before Algorithm~2 is executed).
For a trip $i \in \mathcal{A}$, let $E_i$ be the set of base matches of $i$. The reduction procedure works as follows. \begin{itemize} \setlength\itemsep{0em} \item First of all, the set of drivers is sorted, based on number of base matches each driver has, in descending order. \item Each driver $i$ is then processed one by one.
\begin{enumerate}
\item If driver $i$ has at least 10 base matches, then $E_i$ is sorted, based on the number of base matches each passenger included in $E_i$ has, in descending order.
\item For each match $e=(i, J=\{j\})$ in $E_i$, if $j$ belongs to more than $z$ other matches, remove $e$ from $E_i$.
\item After above step 2, if $E_i$ has not been reduced to $x\%$, sort the remaining matches of $E_i$, based on the travel time of passengers included in $E_i$ to $i$, in descending order.
\item Remove the first $x'$ matches from $E_i$ until $x\%$ is reached.
\end{enumerate} \end{itemize} The original sorting of the drivers allows us to first remove matches from drivers that have more matches than others. The sorting of the base matches of driver $i$ in step 1 allows us to first remove matches containing passengers that also belong to other matches. Passengers farther away from a driver $i$ may have a lower chance of being served together by $i$; this is the reason for the sorting in step 3. A small code sketch of this reduction procedure is given below.
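The following Python sketch outlines the reduction procedure for a Config $(x\%, y, z)$; the data structures are hypothetical, and the cap $y$ on the total number of feasible matches per driver is assumed to be enforced later in Algorithm~2.
\begin{verbatim}
def reduce_base_matches(drivers, base, rider_count, travel_time, x_pct, z):
    """Reduction sketch for a Config (x%, y, z).

    base[i]          : list of base matches (i, j) of driver i
    rider_count[j]   : number of base matches rider j currently appears in
    travel_time(i,j) : travel time from rider j to driver i, used in step 3
    """
    # process drivers with the most base matches first
    for i in sorted(drivers, key=lambda d: len(base[d]), reverse=True):
        if len(base[i]) < 10:
            continue
        target = max(1, int(len(base[i]) * x_pct))
        # steps 1-2: drop matches whose rider appears in more than z matches,
        # examining riders with the most base matches first
        survivors = []
        for (_, j) in sorted(base[i], key=lambda e: rider_count[e[1]], reverse=True):
            if rider_count[j] > z:
                rider_count[j] -= 1              # match (i, j) removed
            else:
                survivors.append((i, j))
        # steps 3-4: if still above the x% target, drop riders farthest from i
        if len(survivors) > target:
            survivors.sort(key=lambda e: travel_time(i, e[1]), reverse=True)
            excess = len(survivors) - target
            for (_, j) in survivors[:excess]:
                rider_count[j] -= 1
            survivors = survivors[excess:]
        base[i] = survivors
    return base
\end{verbatim}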
\subsection{Computational results} We use the same transit network and same set of generated trip data for all algorithms.
All experiments were implemented in Java and conducted on an Intel Core i7-2600 processor with 8 GB of 1333 MHz RAM available to the JVM. Since the optimization goal is to assign accepted ridesharing routes to as many riders as possible, the performance measure is focused on the number of riders served by ridesharing routes, followed by the total time saved for the riders as a whole. We record both of these numbers for each approximation algorithm. The base case instance uses the parameter setting described in Section~\ref{sec-instances} and Config (30\%, 600, 20). The experiment results are shown in Table~\ref{table-base-result}. \begin{table}[htbp] \footnotesize \centering
\begin{tabular}{ l | c | c | c | c }
\hline
& ImpGreedy & Greedy & AnyImp & BestImp \\ \hline
Total number of riders served & 27413 & 27413 & 28248 & 28258 \\
Avg number of riders served per interval & 380.736 & 380.736 & 392.333 & 392.472 \\
Total time saved of all riders (minute) & 354568.2 & 354568.2 & 365860.6 & 365945.8 \\
Avg time saved of riders per interval (minute) & 4924.56 & 4924.56 & 5081.40 & 5082.58 \\ \hline
\multicolumn{2}{| l |}{Total number of riders and public transit duration} & \multicolumn{3}{l |}{45314 and 1383743.97 minutes}\\ \hline
\end{tabular} \captionsetup{font=small} \caption{Base case solution comparison between the approximation algorithms.} \label{table-base-result}
\end{table} The results of ImpGreedy and Greedy are aligned since they are essentially the same algorithm: 60.5\% of all passengers are assigned ridesharing routes and 25.6\% of total travel time is saved. The results of AnyImp and BestImp are similar because of the density of the graph $G(V,E)$ due to Observation~\ref{obs-1}. For AnyImp and BestImp, roughly 62.4\% of all passengers are assigned ridesharing routes and 26.4\% of total travel time is saved. On average, passengers are able to reduce their travel duration from 30.5 minutes to 22.5 minutes by using public transit plus ridesharing. The results of these four algorithms are not too far apart. However, it takes too long for AnyImp and BestImp to run to completion. A 10-second limit is set for both algorithms in each iteration for finding an independent set improvement. With this time limit, AnyImp and BestImp run to completion within 15 minutes for almost all intervals.
We also examine this from the drivers' perspective; we recorded both the mean occupancy rate and vacancy rate of drivers. The mean occupancy rate is calculated as, in each interval, the number of passengers served divided by the number of drivers who serve them. The mean vacancy rate is calculated as, in each interval, the number of drivers with feasible matches who are not assigned any passenger divided by the total number of drivers with at least one feasible match. The results are depicted in Figure~\ref{fig-OR-VR}. \begin{figure}
\caption{The mean occupancy rate and vacancy rate of drivers for each interval.}
\label{fig-OR-VR}
\end{figure}
The occupancy rate results show that in many intervals, 1.9-2 passengers are served by each driver on average. The vacancy rate of drivers shows that 3-8\% (0-4\% resp.) of drivers are not assigned any passenger while such drivers have some feasible matches for ImpGreedy (BestImp respectively) during all hours except afternoon peak hours; on the other hand, this time period has the highest occupancy rate. This is most likely because the origins of many trips are in the same area (downtown). If the destinations of drivers and riders do not have the same general direction from downtown, the drivers may not be able to serve any riders. On the other hand, when their destinations are aligned, drivers are likely to serve more riders.
Another major component of the experiment is to measure the computational time of the algorithms, which is highly affected by the base match reduction configurations. By reducing more matches, we are able to improve the running time of AnyImp and BestImp significantly, but sacrifice performance slightly. We tested 12 different Configs: \begin{itemize}[leftmargin=*] \setlength\itemsep{0em} \begin{footnotesize} \item \textit{Small1} (20\%,300,10), \textit{Small2} (20\%,600,10), \textit{Small3} (20\%,300,20), \textit{Small4-10} (20\%,600,20). \item \textit{Medium1} (30\%,300,10), \textit{Medium2} (30\%,600,10), \textit{Medium3} (30\%,300,20), \textit{Medium4-10} (30\%,600,20). \item\textit{Large1} (40\%,300,10), \textit{Large2} (40\%,600,10), \textit{Large3-10} (40\%,300,20), and \textit{Large4-10} (40\%,600,20). \end{footnotesize} \end{itemize} Configs with label ``-10'' have a 10-second limit to find an independent set improvement, and all other Configs have a 20-second limit. Notice that all 12 Configs have the same sets of driver/rider trips and base match sets but generate different feasible match sets. The performance and running time results of all 12 Configs are depicted in Figures~\ref{fig-configurations-perf}~and~\ref{fig-configurations-time} respectively. The results are divided into peak and off-peak hours for each Config (averaging all intervals of peak hours and off-peak hours). The running times of ImpGreedy and Greedy are within seconds for all Configs as shown in Figure~\ref{fig-configurations-time}. On the other hand, it may not be practical to use AnyImp and BestImp for peak hours since they require around 15 minutes for most Configs. Since AnyImp and BestImp provide better performance than ImpGreedy/Greedy when each Config is compared side-by-side, one can use ImpGreedy/Greedy for peak hours and AnyImp/BestImp for off-peak hours so that it becomes practical. \begin{figure}
\caption{Average performance of peak and off-peak hours for different configurations.}
\label{fig-configurations-perf}
\end{figure} \begin{figure}
\caption{Average running time of peak and off-peak hours for different configurations.}
\label{fig-configurations-time}
\end{figure} The increase in performance from Small1 to Small3 is much larger than that from Small1 to Small2 (same for Medium and Large), implying that no parameter in a Config should be too small. The increase in performance from Large1 to Large4 is higher than that from Medium1 to Medium4 (similarly for Small). Therefore, a balanced configuration is more important than a configuration that emphasizes only one or two parameters.
Because ImpGreedy does not create the independent set instance, it runs faster than Greedy. More importantly, ImpGreedy uses less memory space than Greedy does. We tested ImpGreedy and Greedy with the following Configs: \textit{Huge1} (100\%,600,10), \textit{Huge2} (100\%,2500,20) and \textit{Huge3} (100\%,10000,30) (these Configs have the same sets of driver/rider trips and base match sets as those in the previous 12 Configs). The focus of these Configs is to see if Greedy can handle a large number of feasible matches. The results are shown in Table~\ref{table-hugeConfig}. \begin{table}[htbp!] \footnotesize \centering
\begin{tabular}{ l | c | c | c } \hline
\textbf{ImpGreedy} & Huge1 & Huge2 & Huge3 \\ \hline
Avg running time for peak/off-peak hours (sec) & 0.08 / 0.03 & 0.43 / 0.12 & 1.2 / 0.29 \\
Avg number of riders served for peak/off-peak hours & 406.9 / 339.0 & 458.8 / 355.4 & 484.1 / 361.9 \\
Avg time saved of riders per interval (sec) & 284891.8 & 302774.1 & 310636.9 \\ \hline
\textbf{Greedy} & Huge1 & Huge2 & Huge3 \\ \hline
Avg running time & N/A & N/A & N/A \\
Avg instance size $G(V,E)$ of afternoon peak ($|E(G)|$) & 0.02 billion & 0.38 billion & 5.47 billion \\
Avg time creating $G(V,E)$ of afternoon peak (sec) & 14.6 & 320.9 & 3726.79 \\ \hline \end{tabular} \captionsetup{font=small} \caption{The results of ImpGreedy and Greedy using Unlimited reduction configurations.} \label{table-hugeConfig}
\end{table} Greedy cannot run to completion for all configurations because in many intervals, the whole graph $G(V,E)$ of the independent set instance is too large to hold in memory. The average number of edges for afternoon peak hours is 0.02, 0.38 and 5.47 billion for Huge1, Huge2 and Huge3 respectively. Further, the time it takes to create $G(V,E)$ can exceed practical limits. Hence, using Greedy (AnyImp and BestImp) for large instances may not be practical. In addition, the performance of ImpGreedy with Huge3 is better than that of AnyImp/BestImp with Large4.
Lastly, we also looked at the total running times of the approximation algorithms including the time for computing feasible matches (Algorithms 1 and 2). The running time of Algorithm 1 solely depends on computing the shortest paths between the trips and stations. Table~\ref{table-algorithms-time} shows that Algorithm 1 runs to completion within 500 seconds on average for peak hours. As for Algorithm 2, when many trips' origins/destinations are concentrated in one area, the running time increases significantly, especially for drivers with high capacity. Running time of Algorithm 2 can be reduced significantly by Configs with aggressive reductions. \begin{table}[htbp] \scriptsize \setlength\tabcolsep{5pt} \centering
\begin{tabular}{ l | c | c | c | c | c | c || c|c|c|c }
& Alg1 & Alg2 & ImpGreedy & Greedy & AnyImp & BestImp & \multicolumn{4}{c}{Total computational time} \\
& & & & & & & ImpGreedy & Greedy & AnyImp & BestImp \\ \hline
{Small3} & 485.2 & 26.8 & 0.021 & 2.0 & 840.5 & 876.4 & 512.1 & 514.1 & 1352.5 & 1388.5 \\
{Small4} & 485.2 & 28.2 & 0.029 & 3.6 & 599.1 & 629.9 & 513.4 & 517.0 & 1112.5 & 1143.3 \\
{Medium3} & 485.2 & 43.6 & 0.031 & 3.7 & 1312.1 & 1371.0 & 532.5 & 543.0 & 1840.9 & 1899.9 \\
{Medium4} & 485.2 & 50.1 & 0.048 & 7.7 & 971.5 & 990.0 & 535.3 & 543.0 & 1506.8 & 1525.3 \\
{Large4} & 485.2 & 72.0 & 0.076 & 12.2 & 1121.3 & 1167.2 & 557.3 & 569.5 & 1678.6 & 1724.4 \\
{Huge3} & 485.2 & 339.4 & 1.2 & N/A & N/A & N/A & 825.8 & N/A & N/A & N/A \\
\end{tabular} \captionsetup{font=small} \caption{Average computational time (in seconds) of peak hours for all algorithms.} \label{table-algorithms-time}
\end{table} Combining the results of this and previous (Table~\ref{table-hugeConfig}) experiments, ImpGreedy is capable of handling large instances while providing quality solution compared to other approximation algorithms.
From the experiment results in Figures~\ref{fig-configurations-perf}~and~\ref{fig-configurations-time}, it is beneficial to dynamically select different algorithms and reduction configurations for each interval depending on the number of trips. With large problem instances, previous approximation algorithms are not efficient (time and memory consuming), so they require aggressive reduction to reduce the instance size. On the other hand, ImpGreedy is much faster and capable of handling large instances. The running time of ImpGreedy can also be an advantage to improve the quality of solutions. For example, as shown in Figures~\ref{fig-configurations-perf}~and~\ref{fig-configurations-time}, for the same set of drivers and riders, ImpGreedy assigns more riders when taking Medium3/Medium4 as inputs than AnyImp/BestImp on Small1/Small2, and uses less time than AnyImp/BestImp.
When the size of an instance is not small and a solution must be computed within some time-limit, ImpGreedy has a distinct advantage over the previous approximation algorithms.
\section{Conclusion and future work} \label{sec-conclusion} Based on real-world transit datasets in Chicago, our study has shown that integrating public and private transportation can benefit the transit system as a whole. Recall that we focus on work commute traffic, and we only consider two match types that emphasize this transit pattern (with the flexibility to choose either type). Just from these two types, our base case experiments show that more than 60\% of the passengers are assigned ridesharing routes and are able to save 25\% of their travel time. The majority of the drivers are matched with at least one passenger, and the vehicle occupancy rate has improved to close to 3 (including the driver) on average. These results suggest that ridesharing can be a complement to public transit.
Our experiments show that the whole system is capable of handling more than 1000 trip requests in real-time using ordinary computer hardware. It is likely that the performance results of ImpGreedy can be further improved by extending it with the local search strategy of AnyImp and BestImp. Perhaps the biggest challenge for scalability comes from computing the base matches (Algorithm 1) since it has to compute many shortest paths in real-time; it may be worthwhile to apply heuristics to reduce the running time of Algorithm 1 for scalability. To better understand scalability and practicality, it is important to include different match types and a more sophisticated simulation which includes real transit schedules and transit demand.
\end{document}
Bonnet's theorem
In classical mechanics, Bonnet's theorem states that if n different force fields each produce the same geometric orbit (say, an ellipse of given dimensions) albeit with different speeds v1, v2,...,vn at a given point P, then the same orbit will be followed if the speed at point P equals
$v_{\mathrm {combined} }={\sqrt {v_{1}^{2}+v_{2}^{2}+\cdots +v_{n}^{2}}}$
This article is about Bonnet's theorem in classical mechanics. For the Bonnet theorem in differential geometry, see Bonnet theorem.
History
This theorem was first derived by Adrien-Marie Legendre in 1817,[1] but it is named after Pierre Ossian Bonnet.
Derivation
The shape of an orbit is determined only by the centripetal forces at each point of the orbit, which are the forces acting perpendicular to the orbit. By contrast, forces along the orbit change only the speed, but not the direction, of the velocity.
Let the instantaneous radius of curvature at a point P on the orbit be denoted as R. For the kth force field that produces that orbit, the force normal to the orbit Fk must provide the centripetal force
$F_{k}={\frac {m}{R}}v_{k}^{2}$
Adding all these forces together yields the equation
$\sum _{k=1}^{n}F_{k}={\frac {m}{R}}\sum _{k=1}^{n}v_{k}^{2}$
Hence, the combined force-field produces the same orbit if the speed at a point P is set equal to
$v_{\mathrm {combined} }={\sqrt {v_{1}^{2}+v_{2}^{2}+\cdots +v_{n}^{2}}}$
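For example, if two force fields produce speeds $v_{1}=3$ and $v_{2}=4$ at the point P, then the combined field produces the same orbit with speed $v_{\mathrm {combined} }={\sqrt {3^{2}+4^{2}}}=5$ at P.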
References
1. Legendre, A-M (1817). Exercises de Calcul Intégral. Vol. 2. Paris: Courcier. pp. 382–3.
\begin{document}
\title{On The Algebras $U_q^{\pm}(A_N)$} \begin{center} \begin{minipage}{120mm} {\small {\bf Abstract.} Let $U_q^+(A_N)$ (resp. $U_q^-(A_N)$) be the $(+)$-part (resp. $(-)$-part) of the Drinfeld-Jimbo quantum group of type $A_N$ over a field $K$. With respect to Jimbo relations and the PBW $K$-basis ${\cal B}$ of $U_q^+(A_N)$ (resp. $U_q^-(A_N)$) established by Yamane, it is shown, by constructing an appropriate monomial ordering $\prec$ on ${\cal B}$, that $U_q^+(A_N)$ (resp. $U_q^-(A_N)$) is a solvable polynomial algebra. Consequently, further structural properties of $U_q^+(A_N)$ (resp. $U_q^-(A_N)$) and their modules may be established and realized in a constructive-computational way.} \end{minipage}\end{center} {\parindent=0pt\par
{\bf Key words:} Quantum group, Solvable polynomial algebra, Gr\"obner basis} \vskip -.5truecm
\renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \let\footnote\relax\footnotetext{E-mail: [email protected]} \let\footnote\relax\footnotetext{2010 Mathematics Subject Classification: 16T20, 16Z05.}
\def\N{\mathbb{N}}
\def\QED{{$\Box$}}
\def\r{\rightarrow}
\def\mapright#1#2{\smash{\mathop{\longrightarrow}\limits^{#1}_{#2}}}
\def\OV#1{\overline {#1}} \def\hang{\hangindent\parindent} \def\textindent#1{\indent\llap{#1\enspace}\ignorespaces}
\def\LH{{\bf LH}} \def\LM{{\bf LM}} \def\LC{{\bf LC}} \def\LT{{\bf LT}} \def\G{{\cal G}} \def\B{{\cal B}} \def\KS{K\langle X\rangle} \def\KD{K\langle D\rangle} \def\FRAC#1#2{\displaystyle{\frac{#1}{#2}}} \def\SUM^#1_#2{\displaystyle{\sum^{#1}_{#2}}}
\section*{1. Introduction}
Let $K$ be a field, and let $U_q(A_N)$ be the Drinfeld-Jimbo quantum group of type $A_N$ in the sense of ([Dr], [Jim]), that is, with $q\in K$, $q^8\ne 1$, and a positive integer $N$, $U_q(A_N)$ is generated by $\{E_{i},{K_{i}}^{\pm1},F_{i}|1\leq i,j\leq N\}$ over $K$ subject to the relations $$\begin{array}{rcl} K&=&\left\{\begin{array}{l} K_{i}K_{j}-K_{j}K_{i},~ K_{i}{K_{i}}^{-1}-1, ~{K_{i}}^{-1}K_{i}-1,\\ E_{j}{K_{i}}^{\pm 1}-q^{{\mp d_{i}}a_{ij}}{K_{i}}^{\pm 1}E_{j},~ K_{i}^{\pm 1}F_{j}-q^{{\mp d_{i}}a_{ij}}{F_{j}K_{i}}^{\pm 1}\end{array}\right\}\\ \\ T&=&\left\{E_{i}F_{j}-F_{j}E_{i}-\delta_{ij}\frac{K_{i}^{2}-K_{i}^{-2}}{q^{2d_{i}} -q^{-2d_{i}}}\right\};\\ \\ S^{+}&=&\left\{\left.\displaystyle{\sum_{v=0}^{1-a_{ij}}}(-1)^v
\left[ \begin{array}{c}1-a_{ij} \\ v\end{array} \right]_tE_{i}^{1-a_{ij}-v}E_{j}E_{i}^{v}~\right |~i\ne j, t=q^{2d_{i}}\right\}; \end{array}$$ $$\begin{array}{rcl} S^{-}&=&\left\{\left.\displaystyle{\sum_{v=0}^{1-a_{ij}}}(-1)^v
\left[ \begin{array}{c}1-a_{ij} \\v\end{array} \right]_{t}F_{i}^{1-a_{ij}-v}F_{j}F_{i}^v~\right |~i\ne j, t=q^{2d_{i}}\right\}, \end{array}$$ where $$\left[ \begin{array}{c} m\\ n\end{array} \right]_{\alpha}=\left\{\begin{array}{ll} \prod_{i=1}^{n}\frac{t^{m-1+i}-t^{i-m-1}}{t^{i}-t^{-i}},&m>n>0,\\ 1&n=0~\hbox{or}~n=m.\end{array}\right.$$ Let $U_{q}^0(A_N)$, $U_{q}^+(A_N)$, and $U_{q}^-(A_N)$ be the subalgebras of $U_q(A_N)$ generated by
$\{{K_i}^{\pm1}~|~1\leq i\leq N\}$, $\{E_i~|~ 1\leq i\leq N\}$, and
$\{F_i~|~ 1\leq i\leq N\}$, respectively. Then $U_{q}(A_N)$ has the triangular decomposition $$U_q(A_N)\cong{U_{q}^+(A_N)}\otimes_K {U_q^0(A_N)}\otimes_K {U_q^-(A_N)},$$ where $U_q^+(A_N)$ and $U_q^-(A_N)$ are called the $(+)$-part and $(-)$-part of $U_q(A_N)$, respectively. In [Ros] and [Yam], it was proved that, with respect to the Jimbo defining relations, $U^+_q(A_N)$ (similarly $U_q^-(A_N)$) has the standard PBW $K$-basis ${\cal B}$ (see Section 2 for this basis). In this paper, we show (in Section 2) that there is a monomial ordering $\prec$ on the PBW basis ${\cal B}$ of $U_q^+(A_N)$ (resp. $U_q^-(A_N)$) that makes $U_q^+(A_N)$ (resp. $U_q^-(A_N)$) into a solvable polynomial algebra in the sense of [K-RW]. In Section 3, we show that the main result (Theorem 2.3) obtained in Section 2 enables us to establish and realize further structural properties of $U_q^+(A_N)$ (resp. $U_q^-(A_N)$) and their modules in a constructive-computational way.\vskip .5truecm
Throughout this note, $K$ denotes a field of characteristic 0, $K^*=K-\{0\}$, and all $K$-algebras considered are associative with multiplicative identity 1. If $S$ is a nonempty subset of an algebra $A$, then we write $\langle S\rangle$ for the two-sided ideal of $A$ generated by $S$.\vskip .5truecm
\section*{2. $U_q^+(A_N)$ (resp. $U_q^-(A_N)$) is a solvable polynomial algebra}\par We start by recalling from ([K-RW], [Li1], [Li6]) the following definitions and notations. Suppose that a finitely generated $K$-algebra $A=K[a_1,\ldots ,a_n]$ has the PBW $K$-basis ${\cal B} =\{ a^{\alpha}=a_{1}^{\alpha_1}\cdots a_{n}^{\alpha_n}~|~\alpha =(\alpha_1,\ldots ,\alpha_n)\in\mathbb{N}^n\}$, and that $\prec$ is a total ordering on ${\cal B}$. Then every nonzero element $f\in A$ has a unique expression $$\begin{array}{rcl} f&=&\lambda_1a^{\alpha (1)}+\lambda_2a^{\alpha (2)}+\cdots +\lambda_ma^{\alpha (m)},\\ &{~}&\hbox{such that}~a^{\alpha (1)}\prec a^{\alpha (2)}\prec\cdots \prec a^{\alpha (m)},\\ &{~}&\hbox{where}~ \lambda_j\in K^*,~a^{\alpha (j)}=a_1^{\alpha_{1j}}a_2^{\alpha_{2j}}\cdots a_n^{\alpha_{nj}}\in{\cal B} ,~1\le j\le m. \end{array}$$ Since elements of ${\cal B}$ are conventionally called {\it monomials}\index{monomial}, the {\it leading monomial of $f$} is defined as $\LM (f)=a^{\alpha (m)}$, the {\it leading coefficient of $f$} is defined as $\LC (f)=\lambda_m$, and the {\it leading term of $f$} is defined as $\LT (f)=\lambda_ma^{\alpha (m)}$.\vskip .5truecm
{\bf Definition 2.1} Suppose that the $K$-algebra $A=K[a_1,\ldots ,a_n]$ has the PBW basis ${\cal B}$. If $\prec$ is a total ordering on ${\cal B}$ that satisfies the following three conditions:{\parindent=1.35truecm\par
\par\hang\textindent{(1)} $\prec$ is a well-ordering (i.e., every nonempty subset of ${\cal B}$ has a minimal element);\par
\par\hang\textindent{(2)} For $a^{\gamma},a^{\alpha},a^{\beta},a^{\eta}\in{\cal B}$, if $a^{\gamma}\ne 1$, $a^{\beta}\ne a^{\gamma}$, and $a^{\gamma}=\LM (a^{\alpha}a^{\beta}a^{\eta})$, then $a^{\beta}\prec a^{\gamma}$ (thereby $1\prec a^{\gamma}$ for all $a^{\gamma}\ne 1$);\par
\par\hang\textindent{(3)} For $a^{\gamma},a^{\alpha},a^{\beta}, a^{\eta}\in{\cal B}$, if $a^{\alpha}\prec a^{\beta}$, $\LM (a^{\gamma}a^{\alpha}a^{\eta})\ne 0$, and $\LM (a^{\gamma}a^{\beta}a^{\eta})\not\in \{ 0,1\}$, then $\LM (a^{\gamma}a^{\alpha}a^{\eta})\prec\LM (a^{\gamma}a^{\beta}a^{\eta})$,\par}{\parindent=0pt then $\prec$ is called a {\it monomial ordering} on ${\cal B}$ (or a monomial ordering on $A$).} \vskip .5truecm
{\bf Definition 2.2} A finitely generated $K$-algebra $A=K[a_1,\ldots ,a_n]$ is called a {\it solvable polynomial algebra} if $A$ has the PBW $K$-basis ${\cal B} =\{
a^{\alpha}=a_1^{\alpha_1}\cdots a_n^{\alpha_n}~|~\alpha =(\alpha_1,\ldots ,\alpha_n)\in\mathbb{N}^n\}$ and a monomial ordering $\prec$ on ${\cal B}$, such that for $\lambda_{ji}\in K^*$ and $f_{ji}\in A$, $$\begin{array}{l} a_ja_i=\lambda_{ji}a_ia_j+f_{ji},~1\le i<j\le n,\\ \LM (f_{ji})\prec a_ia_j~\hbox{whenever}~f_{ji}\ne 0.\end{array}$$\par
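A standard illustrating example (recorded here only for orientation, and not taken from the references above) is the coordinate algebra of the quantum plane $K_q[a_1,a_2]=K\langle a_1,a_2\rangle /\langle a_2a_1-qa_1a_2\rangle$ with $q\in K^*$: it has the PBW $K$-basis $\{ a_1^{\alpha_1}a_2^{\alpha_2}~|~(\alpha_1,\alpha_2)\in\mathbb{N}^2\}$ and, with respect to (for instance) the graded lexicographic ordering on this basis, the single relation $a_2a_1=qa_1a_2$ satisfies the requirement of Definition 2.2 with $\lambda_{21}=q$ and $f_{21}=0$; hence $K_q[a_1,a_2]$ is a solvable polynomial algebra.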
It follows from [K-RW] that a solvable polynomial algebra $A$ is equipped with an algorithmic Gr\"obner basis theory, that is, every (two-sided, respectively one-sided) ideal of $A$ and every submodule of a free (left) $A$-module has a finite Gr\"obner basis which can be produced by running a noncommutative Buchberger Algorithm with respect to a given monomial ordering. It is also well known that nowadays the noncommutative Buchberger Algorithm for solvable polynomial algebras and their modules has been successfully implemented in the computer algebra system \textsf{Plural} [LS]. Concerning basic constructive-computational theory and methods for solvable polynomial algebras and their modules, one is referred to [Li6] for more details.\vskip .5truecm
Now, we aim to prove the following result.\vskip .5truecm
{\bf Theorem 2.3} If $q^8\ne 1$, then the algebra $U_q^+(A_N)$ is a solvable polynomial algebra in the sense of Definition 2.2. \vskip 6pt
{\bf Proof} First, recall that the Jimbo relations, namely the defining relations of $U_q^+(A_N)$ as described in [Yam], are given by $$\begin{array}{ll} f_{13}=x_{mn}x_{ij}-q^{-2}x_{ij}x_{mn},&((i,j),(m,n))\in C_1\cup C_3,\\ f_{26}=x_{mn}x_{ij}-x_{ij}x_{mn},&((i,j),(m,n))\in C_2\cup C_6,\\ f_4=x_{mn}x_{ij}-x_{ij}x_{mn}+(q^2-q^{-2})x_{in}x_{mj},&((i,j),(m,n))\in C_4,\\ f_5=x_{mn}x_{ij}-q^2x_{ij}x_{mn}+qx_{in},&((i,j),(m,n))\in C_5,\end{array}$$ where with $\Lambda_N=\{ (i,j)\in\mathbb{N}\times
\mathbb{N}~|~1\le i<j\le N+1\}$, the $C_i$ are given by
$$\begin{array}{l} C_1=\{ ((i,j),(m,n))~|~i=m<j<n\},\\
C_2=\{ ((i,j),(m,n))~|~i<m<n<j\},\\
C_3=\{ ((i,j),(m,n))~|~i<m<j=n\},\\
C_4=\{ ((i,j),(m,n))~|~i<m<j<n\},\\
C_5=\{ ((i,j),(m,n))~|~i<j=m<n\},\\
C_6=\{((i,j),(m,n))~|~i<j<m<n\}.\end{array}$$ It follows from [Yam] that for $q^8\ne 1$, $U^+_q(A_N)$ has the standard PBW $K$-basis
$${\cal B} =\left\{1,~x_{i_1j_1}x_{i_2j_2}\cdots x_{i_kj_k}~\left | ~(i_{\ell},j_{\ell})\in\Lambda_N,~k\ge 1,~ (i_1,j_1)\le_{lex} (i_2,j_2)\le_{lex}\cdots \le_{lex} (i_k,j_k)\right.\right\} ,$$ where $<_{lex}$ is the lexicographic ordering on $\Lambda_N$, i.e., $$(l,k)<_{lex}(i,j)\Leftrightarrow\left\{\begin{array}{l} l<i,\\ \hbox{or}~l=i~\hbox{and}~k<j.\end{array}\right.$$\par
We now start on constructing a monomial ordering on ${\cal B}$. In doing so, we let $X=\{ x_{ij}~|~(i,j)\in\Lambda_N\}$ and introduce an ordering $\prec$ on $X$: for $x_{lk}$, $x_{ij}\in X$, $$x_{lk}\prec x_{ij}\Leftrightarrow\left\{\begin{array}{l} l<i,\\ \hbox{or}~l=i~\hbox{and}~k>j.\end{array}\right.$$ Note that the ordering $\prec$ is not the one introduced by the lexicographic ordering $<_{lex}$ on $\Lambda_N$. Furthermore, we extend $\prec$ to ${\cal B}$: $$1\prec u~\hbox{for all}~u=x_{i_1j_1}x_{i_2j_2}\cdots x_{i_rj_r}\in{\cal B} -\{ 1\},$$ and for $u=x_{i_1j_1}x_{i_2j_2}\cdots x_{i_rj_r}$, $v=x_{l_1t_1}x_{l_2t_2}\cdots x_{l_ht_h}\in{\cal B}$, $$u\prec v\Leftrightarrow\left\{\begin{array}{l} r<h~\hbox{and}~ x_{i_1j_1}=x_{l_1t_1},~x_{i_2j_2}=x_{l_2t_2},\ldots ,x_{i_rj_r}=x_{l_rt_r},\\ \hbox{or there exists an}~m, 1\le m\le r,~\hbox{such that}\\ x_{i_1j_1}=x_{l_1t_1},~x_{i_2j_2}=x_{l_2t_2},\ldots ,x_{i_{m-1}j_{m-1}}=x_{l_{m-1}t_{m-1}}\\ \hbox{but}~x_{i_mj_m}\prec x_{l_mt_m}.\end{array}\right.$$ It is straightforward to check that $\prec$ is reflexive, antisymmetric, transitive, and any two generators $x_{ij}, x_{kl}\in X$ are comparable, thereby $\prec$ is a total ordering on ${\cal B}$. Also since $\Lambda_N$ is a finite set, it can be directly verified that $\prec$ satisfies the descending chain condition on ${\cal B}$, namely $\prec$ is a well-ordering on ${\cal B}$. \par It remains to show that $\prec$ satisfies the conditions (2) and (3) of Definition 2.1, and that with respect to $\prec$ on ${\cal B}$, the relations $f_{13}$, $f_{26}$, $f_4$ and $f_5$ satisfied by generators of $U_q^+(A_N)$ (which are given by the Jimbo relations) have the property required by Definition 2.2. To see this, let $x_{mn}, x_{kl}, x_{ij}\in X$, and suppose that $x_{mn}\prec x_{kl}$. If $((i,j), (m,n))\in C_4$, then since $i<m<j<n$ and $x_{in}x_{mj}\in {\cal B}$, the Jimbo relation $f_4$ gives rise to $$\begin{array}{rcl} x_{mn}x_{ij}&=&x_{ij}x_{mn}-(q^2-q^{-2})x_{in}x_{mj}\\ &{~}&\hbox{with} ~\LM ((q^2-q^{-2})x_{in}x_{mj})=x_{in}x_{mj}\prec x_{ij}x_{mn}=\LM (x_{mn}x_{ij}). \end{array}$$ On the other hand, since $i<j$, if $((i,j), (k,l))\in C_4$, then noticing $i<k<j<l$ and $x_{il}x_{kj}\in{\cal B}$, the Jimbo relation $f_4$ gives rise to $$\begin{array}{rcl} x_{kl}x_{ij}&=&x_{ij}x_{kl}-(q^2-q^{-2})x_{il}x_{kj}\\ &{~}&\hbox{with} ~\LM ((q^2-q^{-2})x_{il}x_{kj})=x_{il}x_{kj}\prec x_{ij}x_{kl}=\LM (x_{kl}x_{ij}).
\end{array}$$ Thus, we have shown that if $((i,j), (m,n)), ((i,j), (k,l))\in C_4$, then $$\begin{array}{l} x_{mn}\prec x_{kl}~\hbox{implies}~\LM (x_{mn}x_{ij})=x_{ij}x_{mn}\prec x_{ij}x_{kl}=\LM (x_{kl}x_{ij}),\\ \hbox{and the generating relations of}~U_q^+(A_N)~\hbox{determined by}~f_4\\ \hbox{have the property required by Definition 2.2}.\end{array}\eqno{(1)}$$ Similarly, in the case that $((m,n), (i,j)), ((k,l), (i,j))\in C_4$, we have $$\begin{array}{l} x_{mn}\prec x_{kl}~\hbox{implies}~\LM (x_{ij}x_{mn})=x_{mn}x_{ij}\prec x_{kl}x_{ij}=\LM (x_{ij}x_{kl}),\\ \hbox{and the generating relations of}~U_q^+(A_N)~\hbox{determined by}~f_4\\ \hbox{have the property required by Definition 2.2}.\end{array}\eqno{(2)}$$ Furthermore, if $((i,j), (m,n)), ((i,j), (k,l))\in C_5$, then since $i<j=m<n$ and $x_{in}, x_{il}\in{\cal B}$, the Jimbo relation $f_5$ gives rise to $$\begin{array}{rcl} x_{mn}x_{ij}&=&q^2x_{ij}x_{mn}-qx_{in}\\ &{~}&\hbox{with} ~\LM (qx_{in})=x_{in}\prec x_{ij}x_{mn}=\LM (x_{mn}x_{ij}),\\ x_{kl}x_{ij}&=&q^2x_{ij}x_{kl}-qx_{il}\\ &{~}&\hbox{with} ~\LM (qx_{il})=x_{il}\prec x_{ij}x_{kl}=\LM (x_{kl}x_{ij}). \end{array}$$ This shows that if $((i,j), (m,n)), ((i,j), (k,l))\in C_5$, then $$\begin{array}{l} x_{mn}\prec x_{kl}~\hbox{implies}~\LM (x_{mn}x_{ij})=x_{ij}x_{mn}\prec x_{ij}x_{kl}=\LM (x_{kl}x_{ij}),\\ \hbox{and the generating relations of}~U_q^+(A_N)~\hbox{determined by}~f_5\\ \hbox{have the property required by Definition 2.2},\end{array}\eqno{(3)}$$ and in the case that $((m,n), (i,j)), ((k,l), (i,j))\in C_5$, we also have $$\begin{array}{l} x_{mn}\prec x_{kl}~\hbox{implies}~\LM (x_{ij}x_{mn})=x_{mn}x_{ij}\prec x_{kl}x_{ij}=\LM (x_{ij}x_{kl}),\\ \hbox{and the generating relations of}~U_q^+(A_N)~\hbox{determined by}~f_5\\ \hbox{have the property required by Definition 2.2}.\end{array}\eqno{(4)}$$ At this stage, the relations $f_{13}$, $f_{26}$, $f_4$, and $f_5$, together with the assertions $(1)$, $(2)$, $(3)$, and $(4)$ derived above, enable us to conclude that for any $x_{mn}$, $x_{kl}$, $x_{ij}\in X$, $$\begin{array}{l} x_{mn}\prec x_{kl}~\hbox{implies}~\LM (x_{ij}x_{mn})=x_{mn}x_{ij}\prec x_{kl}x_{ij}=\LM (x_{ij}x_{kl}),\\ x_{mn}\prec x_{kl}~\hbox{implies}~\LM (x_{mn}x_{ij})=x_{ij}x_{mn}\prec x_{ij}x_{kl}=\LM (x_{kl}x_{ij}),\\ \hbox{and the generating relations of}~U_q^+(A_N)~\hbox{determined by}~f_{13},f_{26}, f_4,~\hbox{and}~ f_5\\ \hbox{have the property required by Definition 2.2}.\\ \end{array}\eqno{(5)}$$ Finally, by means of the assertions $(1)$, $(2)$, $(3)$, $(4)$, and $(5)$ derived above, it is straightforward to check that the conditions (2) and (3) of Definition 2.1 are satisfied by $\prec$; thereby $\prec$ is a monomial ordering on ${\cal B}$, and consequently, $U_q^+(A_N)$ is a solvable polynomial algebra in the sense of Definition 2.2, as desired.
{$\Box$}\vskip .5truecm
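For readers who wish to experiment with the ordering $\prec$ constructed in the above proof, the following small Python sketch (an informal illustration only, not part of the formal development; the encoding of generators as index pairs and of standard monomials as lists of such pairs is our own choice) compares generators and standard monomials exactly as prescribed there.
\begin{verbatim}
# Generators x_{ij} are encoded as pairs (i, j) with 1 <= i < j <= N+1.
# A standard monomial x_{i_1 j_1} ... x_{i_k j_k} is a list of such pairs,
# written in the lexicographic order <_lex used for the PBW basis.

def gen_less(a, b):
    # x_a "prec" x_b iff l < i, or l = i and k > j, where a = (l,k), b = (i,j).
    (l, k), (i, j) = a, b
    return l < i or (l == i and k > j)

def mon_less(u, v):
    # u "prec" v iff u is a proper initial segment of v, or, at the first
    # position where u and v differ, the generator of u is smaller.
    for a, b in zip(u, v):
        if a != b:
            return gen_less(a, b)
    return len(u) < len(v)

print(gen_less((1, 3), (1, 2)))              # True:  x_{13} prec x_{12}
print(mon_less([(1, 3)], [(1, 2), (2, 3)]))  # True
\end{verbatim}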
Similarly, the following assertion holds.\vskip .5truecm
{\bf Theorem 2.4} Let $U_q^-(A_N)$ be the $(-)$-part of the Drinfeld-Jimbo quantum group of type $A_N$. Then $U_q^-(A_N)$ is a solvable polynomial algebra in the sense of Definition 2.2.\par
{$\Box$}\vskip .5truecm
Also by [Li6, Proposition 1.1.6] we have the following\vskip .5truecm
{\bf Theorem 2.5} The tensor product $R=U_q^+(A_N)\otimes_KU_q^-(A_N)$ is a solvable polynomial algebra, where, for convenience, if ${\cal B}} \def\LC{{\bf LC}} \def\G{{\cal G}} \def\FRAC#1#2{\displaystyle{\frac{#1}{#2}}_1$ and ${\cal B}} \def\LC{{\bf LC}} \def\G{{\cal G}} \def\FRAC#1#2{\displaystyle{\frac{#1}{#2}}_2$ denote the PBW bases of $U_q^+(A_N)$ and $U_q^-(A_N)$ respectively, $\prec_1$ and $\prec_2$ denote the monomial orderings of $U_q^+(A_N)$ and $U_q^-(A_N)$ respectively, then ${\cal B}} \def\LC{{\bf LC}} \def\G{{\cal G}} \def\FRAC#1#2{\displaystyle{\frac{#1}{#2}}=\{u\otimes v~|~u\in{\cal B}} \def\LC{{\bf LC}} \def\G{{\cal G}} \def\FRAC#1#2{\displaystyle{\frac{#1}{#2}}_1,~v\in{\cal B}} \def\LC{{\bf LC}} \def\G{{\cal G}} \def\FRAC#1#2{\displaystyle{\frac{#1}{#2}}_2\}$ is a PBW basis of $R$, and a monomial ordering $\prec$ on ${\cal B}} \def\LC{{\bf LC}} \def\G{{\cal G}} \def\FRAC#1#2{\displaystyle{\frac{#1}{#2}}$ is defined subject to the rule: for $u_1\otimes v_1$, $u_2\otimes v_2\in{\cal B}} \def\LC{{\bf LC}} \def\G{{\cal G}} \def\FRAC#1#2{\displaystyle{\frac{#1}{#2}}$, $$u_1\otimes v_1\prec u_2\otimes v_2\Leftrightarrow\left\{\begin{array}{l} u_1\prec_1 u_2\\ \hbox{or}\\ u_1=u_2~\hbox{and}~v_1\prec_2 v_2.\end{array}\right.$$
\section*{3. Applications of Theorem 2.3 and Theorem 2.4}\par
Let $U_q^+(A_N)$ (resp. $U^-_q(A_N)$) be the $(+)$-part (resp. $(-)$-part) of the Drinfeld-Jimbo quantum group of type $A_N$. In this section, we show that Theorem 2.3 and Theorem 2.4 obtained in the last section enable us to establish and realize further structural properties of the algebra $U_q^+(A_N)$ (resp. $U^-_q(A_N)$) and their modules in a constructive-computational way. For more details on the basic constructive-computational theory and methods for solvable polynomial algebras and their modules, one is referred to [Li6]. All notions and notations used in previous sections are retained.\vskip .5truecm
As the first application of Theorem 2.3, we recapture a known result (see [Li3, P.135, Example 2]) concerning the defining relations of $U_q^+(A_N)$.\vskip .5truecm
{\bf Proposition 3.1} Let $K\langle X\rangle =K\langle X_{ij}~|~(i,j)\in\Lambda_N\rangle$ be the free $K$-algebra generated by $X=\{X_{ij}~|~(i,j)\in\Lambda_N\}$, and $$\G =\left\{\begin{array}{ll} F_{13}=X_{mn}X_{ij}-q^{-2}X_{ij}X_{mn},&((i,j),(m,n))\in C_1\cup C_3,\\ F_{26}=X_{mn}X_{ij}-X_{ij}X_{mn},&((i,j),(m,n))\in C_2\cup C_6,\\ F_4=X_{mn}X_{ij}-X_{ij}X_{mn}+(q^2-q^{-2})X_{in}X_{mj},&((i,j),(m,n))\in C_4,\\ F_5=X_{mn}X_{ij}-q^2X_{ij}X_{mn}+qX_{in},&((i,j),(m,n))\in C_5,\end{array}\right\}$$ the set of defining relations of $U_q^+(A_N)$, where with $\Lambda_N=\{ (i,j)\in\mathbb{N}\times
\mathbb{N}~|~1\le i<j\le N+1\}$, the $C_i$s are given by
$$\begin{array}{l} C_1=\{ ((i,j),(m,n))~|~i=m<j<n\},\\
C_2=\{ ((i,j),(m,n))~|~i<m<n<j\},\\
C_3=\{ ((i,j),(m,n))~|~i<m<j=n\},\\
C_4=\{ ((i,j),(m,n))~|~i<m<j<n\},\\
C_5=\{ ((i,j),(m,n))~|~i<j=m<n\},\\
C_6=\{((i,j),(m,n))~|~i<j<m<n\}.\end{array}$$ Then, there exists a monomial ordering $\prec_{_X}$ on $K\langle X\rangle$ such that $\G$ is a Gr\"obner basis (in the sense of [Mor]) of the ideal $\langle\G\rangle$, and
$$\LM (\G)=\{ X_{mn}X_{ij}~|~((i,j), (m,n))\in C_1\cup C_2\cup\cdots\cup C_6,~(i,j)<_{lex} (m,n)\},$$ where $\LM (\G )$ is the set of leading monomials $\LM (g)$ of elements $g\in\G$ (for an $F\in K\langle X\rangle$, $\LM (F)$ is defined with respect to $\prec_{_X}$, as that defined for an element in a solvable polynomial algebra in the last section).\par A similar result holds true for $U_q^-(A_N)$.\vskip 6pt
{\bf Proof} Since $U_q^+(A_N)$ is a solvable polynomial algebra by Theorem 2.3, this follows from a constructive characterization of solvable polynomial algebras [Li4, Theorem 2.1] (see also [Li6, Theorem 1.2.1]).
{$\Box$}\vskip .5truecm
The next proposition stems from [Li2, Section 6, Example 1] and [Li3, P.167 Example 3; Ch.7, Section 5.7, Corollary 7.6; section 6.3, Corollary 3.2]. \vskip .5truecm
{\bf Proposition 3.2} [Li6, Proposition A1.16, Proposition 1.2.2] Let $A=K[a_1,\ldots, a_n]$ be a solvable polynomial algebra with admissible system $({\cal B} ,\prec )$. Then $A\cong K\langle X\rangle /\langle\G\rangle$, where $K\langle X\rangle =K\langle X_1,\ldots ,X_n\rangle$ is the free $K$-algebra of $n$ generators and $\G$ is a Gr\"obner basis of the ideal $\langle\G\rangle$ in $K\langle X\rangle$ with respect to some monomial ordering $\prec$ such that $\LM (\G )=\{ X_jX_i~|~1\le i<j\le n\}$, and the following statements hold.\par
(i) $A$ has Gelfand-Kirillov dimension GK.dim$A=n$.\par
(ii) $A$ has global (homological) dimension gl.dim$A\le n$. If $K\langle X\rangle$ is equipped with a positive-weight $\mathbb{N}$-gradation and $\G$ is a homogeneous Gr\"obner basis, then gl.dim$A=n$.\par
(iii) If $K\langle X\rangle$ is $\mathbb{N}$-graded by assigning each $X_i$ the degree 1, and $\G$ is a homogeneous Gr\"obner basis, then $A$ is a homogeneous $2$-Koszul algebra; otherwise, $A$ is a non-homogeneous $2$-Koszul algebra in the sense of ([Pos], [BG]).\par
{$\Box$}
For the sake of saving notation, except for retaining all notions and other notations used in previous sections, in what follows we use $R^+$ (resp. $R^-$) to denote the algebra $U_q^+(A_N)$ (resp. $U_q^-(A_N)$).\vskip .5truecm
{\bf Theorem 3.3} With notation fixed above, the following statements hold.\par (i) $R^+$ (resp. $R^-$) is a Noetherian domain.\par (ii) $R^+$ (resp. $R^-$) has Gelfand-Kirillov dimension $\frac{N(N+1)}{2}$. \par (iii) $R^+$ (resp. $R^-$) has global homological dimension $\le \frac{N(N+1)}{2}$.\par (iv) $R^+$ (resp. $R^-$) is a non-homogeneous 2-Koszul algebra in the sense of ([Pos], [BG]).\vskip 6pt
{\bf Proof} We prove all assertions for $R^+$, because similar argumentation works well for $R^-$.\par
(i) Though this result has been known from the literature (see [Yam]), here we emphasize that this property follows immediately from Theorem 2.3. More precisely, that $R^+$ has no divisors of zero follows from the fact that $\LM (fg)=\LM (f)\LM (g)$ for all nonzero $f,g\in R^+$, and the Noetherian property of $R^+$ follows from the fact that every nonzero one-sided ideal has a finite Gr\"obner basis (see [K-RW]). \par
Note that $R^+$ is a solvable polynomial algebra by Theorem 2.3, and it has $\frac{N(N+1)}{2}$ generators $x_{ij}$ (see the proof of Theorem 2.3). The assertions (ii), (iii), and (iv) follow from Proposition 3.1 and Proposition 3.2 above.
{$\Box$}\vskip .5truecm
Before stating the next result, let us recall the Auslander regularity and the Cohen-Macaulay property of an algebra for the reader's convenience. A finitely generated algebra $A$ is said to \par (a) be {\it Auslander regular} if $A$ has finite global homological dimension, and for every finitely generated left $A$-module $M$, every integer $j\ge 0$, and every (right) $A$-submodule $N$ of Ext$^j_A(M,A)$ we have that $j(N)\ge j$, where $j(N)$ is the grade number of $N$, that is, the least integer $i$ such that Ext$^i_A(N,A)\ne 0$;\par
(b) satisfy the {\it Cohen-Macaulay property} if for every finitely generated left $A$-module $M$ we have the equality: GK.dim$M+j(M)=$ GK.dim$A$, where GK.dim denotes the Gelfand-Kirillov dimension of a module. \par
Concerning the Auslander regularity and the Cohen-Macaulay property of an algebra, particularly, an algebra with filtered-graded structures, one is referred to [LVO].\vskip .5truecm
{\bf Theorem 3.4} With notation as above, the following statements hold.\par (i) $R^+$ (resp. $R^-$) is an Auslander regular algebra satisfying the Cohen-Macaulay property.\par (ii) The $K_0$-group of $R^+$ (resp. $R^-$) is isomorphic to $\mathbb{Z}$, the additive group of integers.\vskip 6pt
{\bf Proof} We prove the two assertions only for $R^+$, because similar argumentation works well for $R^-$.\par (i) Our approach is to employ certain specified filtered-graded structures associated with $R^+$. As in the proof of Theorem 2.3, let $\Lambda_N=\{ (i,j)\in\mathbb{N}\times
\mathbb{N}~|~1\le i<j\le N+1\}$, $X=\{ x_{ij}~|~(i,j)\in\Lambda_N\}$,
$$\begin{array}{l} C_1=\{ ((i,j),(m,n))~|~i=m<j<n\},\\
C_2=\{ ((i,j),(m,n))~|~i<m<n<j\},\\
C_3=\{ ((i,j),(m,n))~|~i<m<j=n\},\\
C_4=\{ ((i,j),(m,n))~|~i<m<j<n\},\\
C_5=\{ ((i,j),(m,n))~|~i<j=m<n\},\\
C_6=\{((i,j),(m,n))~|~i<j<m<n\},\end{array}$$
and $${\cal B} =\left\{ 1,~x_{i_1j_1}x_{i_2j_2}\cdots x_{i_kj_k}~\left | ~(i_{\ell},j_{\ell})\in\Lambda_N,~k\ge 1,~ (i_1,j_1)\le_{lex} (i_2,j_2)\le_{lex}\cdots \le_{lex} (i_k,j_k)\right.\right\}$$ which is the PBW $K$-basis of $R^+$. Furthermore, for every $x_{ij}\in X$, we assign the degree $d(x_{ij})=1$, so that each standard monomial $u=x_{i_1j_1}x_{i_2j_2}\cdots x_{i_kj_k}\in{\cal B}$ has a unique degree $$d(u)=d(x_{i_1j_1})+d(x_{i_2j_2})+\cdots +d(x_{i_kj_k}).$$ Now, let us take the $\mathbb{N}$-filtration $FR^+=\{F_qR^+\}_{q\in\mathbb{N}}$ of $R^+$ determined by ${\cal B}$, where
$$F_qR^+=K\hbox{-span}\{ u\in{\cal B}~|~d(u)\le q\},\quad q\in\mathbb{N}.$$ It is straightforward to check that $F_0R^+=K$, $F_qR^+\subseteq F_{q+1}R^+$ for all $q\in\mathbb{N}$, $R^+=\cup_{q\in\mathbb{N}}F_qR^+$, and, by referring to the proof of Theorem 2.3, $$F_{q_1}R^+F_{q_2}R^+\subseteq F_{q_1+q_2}R^+,\quad q_1,q_2\in\mathbb{N}.$$
Hence, the filtration $FR^+$ constructed above makes $R^+$ into an $\mathbb{N}$-filtered algebra (indeed, one may check easily that the filtration $FR^+$ defined here coincides with the natural standard filtration of $R^+$, see also [Li6, Proposition A3.6]). Consider the associated $\mathbb{N}$-graded algebra $G(R^+)=\oplus_{q\in\mathbb{N}}G(R^+)_q$ of $R^+$, where $G(R^+)_0=K$, $G(R^+)_q=F_qR^+/F_{q-1}R^+$, $q\ge 1$. Then $G(R^+)=K[\sigma(x_{ij})~|~(i,j)\in\Lambda_N]$, where $\sigma (x_{ij})$ is the homogeneous element in $G(R^+)$ represented by $x_{ij}$ and $d(\sigma (x_{ij}))=1=d(x_{ij})$, such that $$\begin{array}{ll}\sigma (x_{mn})\sigma (x_{ij})=q^{-2}\sigma (x_{ij})\sigma (x_{mn}),&((i,j),(m,n))\in C_1\cup C_3,\\ \sigma (x_{mn})\sigma (x_{ij})=\sigma (x_{ij})\sigma (x_{mn}),&((i,j),(m,n))\in C_2\cup C_6,\\ \sigma (x_{mn})\sigma (x_{ij})=\sigma (x_{ij})\sigma (x_{mn})-(q^2-q^{-2})\sigma (x_{in})\sigma (x_{mj}),&((i,j),(m,n))\in C_4,\\ \sigma (x_{mn})\sigma (x_{ij})=q^2\sigma (x_{ij})\sigma (x_{mn}),&((i,j),(m,n))\in C_5.\end{array}$$ Note that with the aid of the monomial ordering $\prec$ on ${\cal B}$ (as constructed in the proof of Theorem 2.3), we can further construct a graded monomial ordering $\prec_d$ on ${\cal B}$ as follows: for $u,v\in{\cal B}$, $$u\prec_d v\Leftrightarrow\left\{\begin{array}{l} d(u)<d(v),\\ \hbox{or}~d(u)=d(v)~\hbox{and}~u\prec v.\end{array}\right.$$ It follows from [Li1, CH.IV, Theorem 4.1] that $G(R^+)$ is a quadratic solvable polynomial algebra with the PBW $K$-basis $$\sigma({\cal B} ) =\left\{\sigma (x_{i_1j_1})\sigma(x_{i_2j_2})\cdots
\sigma(x_{i_kj_k})~\left | ~\begin{array}{l}(i_{\ell},j_{\ell})\in\Lambda_N,~k\ge 1,\\ (i_1,j_1)\le_{lex} (i_2,j_2)\le_{lex}\cdots \le_{lex} (i_k,j_k)\end{array}\right.\right\}$$
and the graded monomial ordering $\prec_d$ induced by that on $R^+$. Noticing that $q^2\ne 0$ and that $G(R^+)$ is an iterated skew polynomial algebra over the commutative polynomial ring $K[\sigma (x_{mn}),~\sigma (x_{ij})~|~((i,j),(m,n))\in C_2\cup C_6]$ (as one may see easily), it follows from [Lev], [Li1, P. 176 Theorem 1.1], and the foregoing Proposition 3.2(ii) that $G(R^+)$ is an Auslander regular domain of global dimension $\frac{N(N+1)}{2}$ and $G(R^+)$ satisfies the Cohen-Macaulay property. Since the $\mathbb{N}$-filtration $FR^+$ of $R^+$ is a positive filtration and both $R^+$ and $G(R^+)$ are solvable polynomial algebras and hence are Noetherian domains, $R^+$ is then a Zariskian filtered ring in the sense of [LVO]. Also by [Li1] and [LVO] we know that GK.dim$R^+=$ GK.dim$G(R^+)$, and GK.dim$M=$ GK.dim$G(M)$, $j_{R^+}(M)=j_{G(R^+)}(G(M))$ hold true for every finitely generated $R^+$-module $M$, where $G(M)$ is the associated graded module of $M$ determined by a good filtration of $M$, and $j_{R^+}(M)$, respectively $j_{G(R^+)}(G(M))$, denotes the grade number of $M$, respectively the grade number of $G(M)$. Therefore, it follows from [LVO, CH.III, Theorem 2, Theorem 6] that $R^+$ is an Auslander regular algebra satisfying the Cohen-Macaulay property.\par
(ii) That $K_0(R^+)\cong\mathbb{Z}$ follows from [Li1, P. 176, Theorem 1.1] (see also [LVO, P.125 Corollary 6.8], or [Li6, Theorem 3.3.3]).\par
This finishes the proof.
{$\Box$}\vskip .5truecm
Also let us mention a result concerning modules over $R^+$ (resp. $R^-$).\vskip .5truecm
{\bf Theorem 3.5} Let the algebra $R^+$ (resp. $R^-$) be as before. Then the following statements hold.\par
(i) Let $L$ be a nonzero left ideal of $R^+$, and $R^+/L$ the left $R^+$-module. Considering Gelfand-Kirillov dimension, we have GK.dim$R^+/L<$ GK.dim$R^+=\frac{N(N+1)}{2}$, and there is an algorithm for computing GK.dim$R^+/L$. A similar result holds true for $R^-$.\par
(ii) Let $M$ be a finitely generated $R^+$-module. Then a finite free resolution of $M$ can be algorithmically constructed, and the projective dimension of $M$ can be algorithmically computed. A similar result holds true for $R^-$.\vskip 6pt
{\bf Proof} (i) That GK.dim$R^+=\frac{N(N+1)}{2}$ follows from Theorem 3.3(ii). Since $R^+$ is a (quadratic) solvable polynomial algebra by Theorem 2.3, it follows from [Li1, CH.V] that GK.dim$R^+/L<\frac{N(N+1)}{2}$ (this may also follow from classical Gelfand-Kirillov dimension theory [KL], for $R^+$ is now a Noetherian domain), and that there is an algorithm for computing GK.dim$R^+/L$.\par
(ii) This follows from [Li6, Ch.3].
{$\Box$}\vskip .5truecm
We end this section by concluding that the algebra $R^+$ (resp. $R^-$) also has the elimination property for (one-sided) ideals in the sense of [Li5] (see also [Li6, A3]), and that this property may be realized in a computational way. To see this clearly, let us first recall the Elimination Lemma given in [Li5]. Let $A=K[a_1,a_2,\ldots ,a_n]$ be a finitely generated $K$-algebra with the PBW basis ${\cal B} =\{ a^{\alpha}=a_1^{\alpha_1}a_2^{\alpha_2}\cdots a_n^{\alpha_n}~|~\alpha =(\alpha_1,\alpha_2,\ldots ,\alpha_n)\in\mathbb{N}^n\}$ and, for a subset $U=\{ a_{i_1},a_{i_2},\ldots ,a_{i_r}\}\subset \{ a_1,a_2,\ldots,a_n\}$ with $i_1<i_2<\cdots <i_r$, let
$$T=\left\{ a_{i_1}^{\alpha_1}a_{i_2}^{\alpha_2}\cdots a_{i_r}^{\alpha_r}~\Big |~ (\alpha_1,\alpha_2,\ldots ,\alpha_r)\in\mathbb{N}^r\right\},\quad V(T)=K\hbox{-span}T.$$\par
{\bf Lemma 3.6} ([Li5, Lemma 3.1]) Let the algebra $A$ and the notations be as fixed above, and let $L$ be a nonzero left ideal of $A$ and $A/L$ the left $A$-module defined by $L$. If there is a subset $U=\{ a_{i_1},a_{i_2}\ldots ,a_{i_r}\} \subset\{a_1,a_2\ldots ,a_n\}$ with $i_1<i_2<\cdots <i_r$, such that $V(T)\cap L=\{ 0\}$, then $$\hbox{GK.dim}(A/L)\ge r.$$ Consequently, if $A/L$ has finite GK dimension $\hbox{GK.dim}(A/L)=d<n$ ($=$ the number of generators of $A$), then $$V(T)\cap L\ne \{ 0\}$$ holds true for every subset $U=\{ a_{i_1},a_{i_2},\ldots ,a_{i_{d+1}}\}\subset$ $\{ a_1,a_2,\ldots ,a_n\}$ with $i_1<i_2<\cdots <i_{d+1}$, in particular, for every $U=\{ a_1,a_2,\ldots ,a_s\}$ with $d+1\le s\le n-1$, we have $V(T)\cap L\ne \{ 0\}$.\par
{$\Box$} \vskip .5truecm
Once again, the discussion below will be carried out only for $R^+$, because similar argumentation works well for $R^-$. Also for convenience of deriving the next theorem, let us write the set of generators of $R^+$ as $X=\{ x_1,x_2,\ldots x_{\omega}\}$ with $\omega =\frac{N(N+1)}{2}$, i.e., $R^+=K[x_1,x_2,\ldots ,x_{\omega}]$. Thus, for a subset $U=\{ x_{i_1},x_{i_2},\ldots ,x_{i_r}\}\subset \{ x_1, x_2,\ldots,x_{\omega}\}$ with $i_1<i_2<\cdots <i_r$, we write
$$T=\left\{ x_{i_1}^{\alpha_1}x_{i_2}^{\alpha_2}\cdots x_{i_r}^{\alpha_r}~\Big |~ (\alpha_1,\alpha_2,\ldots ,\alpha_r)\in\mathbb{N}^r\right\},\quad V(T)=K\hbox{-span}T.$$\par
{\bf Theorem 3.7} With notation as fixed above, let $L$ be a left ideal of $R^+$. Then the following two statements hold.\par
(i) GK.dim$R^+/L< \omega$, and if GK.dim$R^+/L=d$, then $$V(T)\cap L\ne\{ 0\}$$ holds true for every subset $U=\{ x_{i_1},x_{i_2},\ldots ,x_{i_{d+1}}\}\subset X$ with $i_1<i_2<\cdots <i_{d+1}$; in particular, for every $U=\{ x_1,x_2,\ldots ,x_s\}$ with $d+1\le s\le \omega -1$, we have $V(T)\cap L\ne \{ 0\}$.\par
(ii) Without exactly knowing the numerical value GK.dim$R^+/L$, the elimination property for a left ideal $L=\sum_{i=1}^mR^+\xi_i$ of $R^+$ (resp. $R^-$) can be realized in a computational way, as follows:\par
Let $\prec$ be the monomial ordering on the PBW basis ${\cal B}$ of $R^+$ as constructed in the proof of Theorem 2.3 (or let $\prec_d$ be the graded monomial ordering as constructed in the proof of Theorem 3.4), and let $V(T)$ be as in (i). Then, employing an elimination ordering $\lessdot$ with respect to $T$ (which can always be constructed if the existing monomial ordering on ${\cal B}$ is not an elimination ordering, see [Li6, Proposition 1.6.3]), a Gr\"obner basis $\G$ of $L$ can be produced by running the noncommutative Buchberger algorithm for solvable polynomial algebras, such that $$L\cap V(T)\ne \{ 0\} \Leftrightarrow \G\cap V(T)\ne \emptyset .$$\par
Similar statements as mentioned above hold true for $R^-$.\vskip 6pt
{\bf Proof} (i) Adopting the notations we fixed above, it follows from [Yam] that $R^+$ has the PBW basis $${\cal B} =\{ x^{\alpha}=x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_{\omega}^{\alpha_{\omega}}~|~\alpha =(\alpha_1,\alpha_2,\ldots ,\alpha_{\omega})\in\mathbb{N}^{\omega}\}.$$ Also by Theorem 3.3(ii) and Theorem 3.5(i) we have GK.dim$R^+/L<\omega$. Therefore, the desired elimination property follows from Lemma 3.6 mentioned above.\par
(ii) This follows from [Li6, Corollary 1.6.5].
{$\Box$}\vskip .5truecm
{\bf Remark} Since $R^+$ (resp. $R^-$) is now a solvable polynomial algebra, if $F=\oplus_{i=1}^sR^+e_i$ is a free (left) $R^+$-module (resp. $R^-$-module) of finite rank, then a similar (even much stronger) result to Theorem 3.7 holds true for any finitely generated submodule $N=\sum_{i=1}^mR^+\xi_i$ of $F$. The reader is referred to [Li6, Section 2.4] for a detailed argumentation.\vskip .5truecm
\centerline{References}{\parindent=.6truecm\par
\par\hang\textindent{[BG]} ~R. Berger and V. Ginzburg, ~Higher ~symplectic ~reflection ~algebras and nonhomogeneous $N$-Koszul property, {\it J. Alg.}, 1(304)(2006), 577--601.
\par\hang\textindent{[Dr]} V.G. Drinfeld, Hopf algebras and the quantum Yang-Baxter equation, Doklady Akademii Nauk SSSR 283(5) (1985) 1060--1064.
\par\hang\textindent{[Jim]} M. Jimbo, A q-difference analogue of U(G) and the Yang-Baxter equation, Letters in Mathematical Physics 10(1) (1985) 63--69.
\par\hang\textindent{[KL]} G.R. Krause and T.H. Lenagan, {\it Growth of Algebras and Gelfand-Kirillov Dimension}. Graduate Studies in Mathematics. American Mathematical Society, 1991.
\par\hang\textindent{[K-RW]} A. Kandri-Rody and V. Weispfenning, Non-commutative Gr\"obner bases in algebras of solvable type. {\it J. Symbolic Comput.}, 9(1990), 1--26. Also available as: Technical Report University of Passau, MIP-8807, March 1988.
\par\hang\textindent{[Lev]} T. Levasseur, Some properties of noncommutative regular graded rings. {\it Glasgow Math. J}., 34(1992), 277--300.
\par\hang\textindent{[Li1]} H. Li, {\it Noncommutative Gr\"obner Bases and Filtered-graded Transfer}. Lecture Notes in Mathematics, Vol. 1795, Springer, 2002.
\par\hang\textindent{[Li2]} H. Li, $\Gamma$-leading ~homogeneous~ algebras~ and Gr\"obner bases. In: {\it Recent Developments in Algebra and Related Areas} (F. Li and C. Dong eds.), Advanced Lectures in Mathematics, Vol. 8, International Press \& Higher Education Press, Boston-Beijing, 2009, 155 -- 200. \url{arXiv:math.RA/0609583, http://arXiv.org}
\par\hang\textindent{[Li3]} H. Li, {\it Gr\"obner Bases in Ring Theory}. World Scientific Publishing Co., 2011. \url{https://doi.org/10.1142/8223}\par
\par\hang\textindent{[Li4]} H. Li, A note on solvable polynomial algebras. {\it Computer Science Journal of Moldova}, vol.22, no.1(64), 2014, 99--109. arXiv:1212.5988 [math.RA]
\par\hang\textindent{[Li5]} H. Li, An elimination lemma for algebras with PBW bases. {\it Communications in Algebra}, 46(8)(2018), 3520-3532.\par
\par\hang\textindent{[Li6]} H. Li, {\it Noncommutative polynomial algebras of solvable type and their modules: Basic constructive-computational theory and methods}. Chapman and Hall/CRC Press, 2021.
\par\hang\textindent{[LVO]} H. Li, F. Van Oystaeyen, {\it Zariskian Filtrations}. K-Monograph in Mathematics, Vol.2. Kluwer Academic Publishers, 1996; Berlin Heidelberg: Springer-Verlag, 2003.
\par\hang\textindent{[LS]} V. Levandovskyy and H. Sch\"onemann, Plural: a computer algebra system for noncommutative polynomial algebras. In: {\it Proc. Symbolic and Algebraic Computation}, International Symposium ISSAC 2003, Philadelphia, USA, 176--183, 2003.
\par\hang\textindent{[Mor]} T. Mora, An introduction to commutative and noncommutative Gr\"obner Bases. {\it Theoretical Computer Science}, 134(1994), 131--173.
\par\hang\textindent{[Pos]} ~L. Positselski, Nonhomogeneous quadratic duality and curvature, {\it Funct. Anal. Appl.}, 3(27)(1993), 197--204.
\par\hang\textindent{[Ros]} M. Rosso, Finite dimensional representations of the quantum analogue of the enveloping algebra of a complex simple Lie algebra, Comm. Math. Phys. 117 (1988) 581--593.
\par\hang\textindent{[Yam]} H. Yamane, A Poincar\'e-Birkhoff-Witt theorem for quantized universal enveloping algebras of type $A_N$, {\it Publ. RIMS, Kyoto Univ.}, 25(3)(1989), 503--520.}
\end{document}
\begin{document}
\pagestyle{plain} \setcounter{page}{1}
\begin{center} {\Large {\bf Constant Unary Constraints and Symmetric Real-Weighted
\\ Counting Constraint Satisfaction Problems}}\footnote{A preliminary version under a slightly concise title appeared in the Proceedings of the 23rd International Symposium on Algorithms and Computation (ISAAC 2012), Taipei, Taiwan, December 19--21, 2012, Lecture Notes in Computer Science, Springer-Verlag, vol. 7676, pp. 237--246, 2012.}
\\ {\sc Tomoyuki Yamakami}\footnote{Present Affiliation: Department of Information Science, University of Fukui, 3-9-1 Bunkyo, Fukui 910-8507, Japan}
\\ \end{center}
\begin{quote} \noindent{\bf Abstract:} A unary constraint (on the Boolean domain) is a function from $\{0,1\}$ to the set of real numbers. A free use of auxiliary unary constraints given besides input instances has proven to be useful in establishing a complete classification of the computational complexity of approximately solving weighted counting Boolean constraint satisfaction problems (or \#CSPs). In particular, two special constant unary constraints are a key to an arity reduction of arbitrary constraints, sufficient for the desired classification. In an exact counting model, both constant unary constraints are always assumed to be available since they can be eliminated efficiently using an arbitrary nonempty set of constraints. In contrast, we demonstrate in an approximate counting model, that at least one of them is efficiently approximated and thus eliminated approximately by a nonempty constraint set. This fact directly leads to an efficient construction of polynomial-time randomized approximation-preserving Turing reductions (or AP-reductions) from \#CSPs with designated constraints to any given \#CSPs composed of symmetric real-valued constraints of arbitrary arities even in the presence of arbitrary extra unary constraints.
\noindent{\bf Keywords:} counting constraint satisfaction problem, AP-reducible, effectively T-constructible, constant unary constraint, symmetric constraint, algebraic real number, p-convergence \end{quote}
\section{Roles of Constant Unary Constraints}\label{sec:introduction}
{\em Constraint satisfaction problems} (or {\em CSPs}, in short) are combinatorial problems that have been ubiquitously found in real-life situations. The importance of these problems has led to recent intensive studies from various aspects: for instance, decision CSPs \cite{DF03,Sch78}, optimization CSPs \cite{Cre95,Yam11b}, and counting CSPs \cite{CL07,CH96,DGJ09,Yam10a}. Driven by theoretical and practical interests, in this paper, we are particularly focused on {\em counting Boolean CSPs} (abbreviated as \#CSPs) whose goal is to count the number of variable assignments that satisfy all given Boolean-valued constraints defined over a fixed series of Boolean variables. The problem of counting the number of Boolean assignments that satisfy each given propositional formula, known as \#SAT (counting satisfiability problem), is a typical counting CSP with three Boolean-valued constraints, $AND$, $OR$, and $NOT$. As this example demonstrates, in most real-life applications, all available constraints are pre-determined. Hence, we naturally fix a collection of ``allowed'' constraints, say, ${\cal F}$ and wish to solve every \#CSP whose constraints are all chosen from ${\cal F}$. Such a counting problem is conventionally denoted $\#\mathrm{CSP}({\cal F})$ and this notation will be used throughout this paper. Creignou and Hermann \cite{CH96} first examined the computational complexity of {\em exactly} counting solutions of unweighted \#CSPs. Recently, Dyer, Goldberg, and Jerrum \cite{DGJ10} studied the computational complexity of {\em approximately} computing the number of solutions of unweighted \#CSPs using a technical reduction, known as {\em polynomial-time randomized approximation-preserving Turing reduction} (or {\em AP-reduction}, hereafter), whose formulation originated from \cite{DGGJ04} and will be explained in detail in Section \ref{sec:randomized-scheme}.
In a more interesting case of {\em weighted \#CSPs}, the values of constraints are expanded from Boolean values to more general values, and each weighted \#CSP asks for the sum, over all possible assignments for Boolean variables, of products of the output values of all given constraints. Earlier, Cai, Lu, and Xia \cite{CLX09x} gave a complete classification of complex-weighted \#CSPs restricted on a given set ${\cal F}$ of constraints according to the computational complexity of {\em exactly} solving them. Another complete classification regarding the complexity of {\em approximately} solving complex-weighted \#CSPs was presented by Yamakami \cite{Yam10a} when allowing a free use of auxiliary unary (\textrm{i.e.},\hspace*{2mm} arity-1) constraints besides initially given input constraints. More precisely, let ${\cal U}$ denote the set of all unary constraints. Given an arbitrary constraint $f$, the free use of auxiliary unary constraints makes $\#\mathrm{SAT}_{\mathbb{C}}$ (a complex extension of $\#\mathrm{SAT}$) AP-reducible to $\#\mathrm{CSP}(f,{\cal U})$ unless $f$ is factored into three categories of constraints: the binary equality, the binary disequality, and unary constraints \cite{Yam10a}. All constraints factored into constraints of those categories form a special set ${\cal ED}$. The aforementioned fact establishes the following complete classification of the approximation complexity of weighted \#CSPs in the presence of ${\cal U}$.
\begin{theorem}{\em \cite[Theorem 1.1]{Yam10a}}\label{dichotomy-theorem} Let ${\cal F}$ be any set of complex-valued constraints. If ${\cal F}\subseteq {\cal ED}$, then $\#\mathrm{CSP}({\cal F},{\cal U})$ is solvable in polynomial time; otherwise, it is AP-reduced from $\#\mathrm{SAT}_{\mathbb{C}}$. \end{theorem}
In this particular classification, the free use of auxiliary unary constraints provide enormous power that makes it possible to establish a ``dichotomy'' theorem beyond a ``trichotomy'' theorem of Dyer \textrm{et al.}~\cite{DGJ10} for Boolean-valued constraints (or simply, {\em Boolean constraints}). The proof of Theorem \ref{dichotomy-theorem} in \cite{Yam10a} employed two technical notions: ``factorization'' and ``T-constructibility.'' Limited to unweighted \#CSPs, on the contrary, a key to the proof of the trichotomy theorem of \cite{DGJ10} is an efficient approximation of so-called {\em constant unary constraints}, conventionally denoted\footnote{A bracket notation $[x,y]$ denotes a unary function $g$ satisfying $g(0)=x$ and $g(1)=y$. Similarly, $[x,y,z]$ expresses a binary function $g$ for which $g(0,0)=x$, $g(0,1)=g(1,0)=y$, and $g(1,1)=z$.} $\Delta_0=[1,0]$ and $\Delta_1=[0,1]$. A significant use of the constant unary constraints is a technique known as {\em pinning}, with which we can make an arbitrary variable pinned down to a particular value, reducing the associated constraints of high arity to those of lower arity. To see this arity reduction, let us consider, for example, an arbitrary constraint $f$ of the form $[x,y,z]$ with three Boolean variables $x_1,x_2,x_3$. When we pin a variable $x_1$ down to $0$ (resp., $1$) in $f(x_1,x_2,x_3)$, we immediately obtain another constraint of the form $[x,y]$ (resp., $[y,z]$). Therefore, an efficient approximation of those special constraints helps us first analyze the approximation complexity of $\#\mathrm{CSP}({\cal F},\Delta_{i_0})$ for an appropriate index $i_0\in\{0,1\}$ by the way of pinning and then eliminate $\Delta_{i_0}$ completely to obtain a desired classification theorem for $\#\mathrm{CSP}({\cal F})$. Their proof of approximately eliminating the constant unary constraints is based on basic properties of Boolean arithmetic and it is not entirely clear that we can expand their proof to a non-Boolean case. Therefore, it is natural for us to raise a question of whether we can obtain a similar elimination theorem for $\#\mathrm{CSP}({\cal F})$ even when ${\cal F}$ is composed of real-valued constraints. In the following theorem, we wish to claim that at least one of the constant unary constraints is always eliminated approximately. This claim can be sharply contrasted with the case of exact counting of $\#\mathrm{CSP}({\cal F})$, in which $\Delta_0$ and $\Delta_1$ are {\em both} eliminated deterministically by a technique known as {\em polynomial interpolation}.
\begin{theorem}\label{key-Delta-elimination} For any nonempty set ${\cal F}$ of real-valued constraints, there exists a constant unary constraint $h\in\{\Delta_0,\Delta_1\}$ for which $\#\mathrm{CSP}(h,{\cal F})$ is AP-equivalent to $\#\mathrm{CSP}({\cal F})$ (namely, $\#\mathrm{CSP}(h,{\cal F})$ is AP-reducible to $\#\mathrm{CSP}({\cal F})$ and vice versa). \end{theorem}
Under a certain set of explicit conditions (given in Proposition \ref{Delta-removal-arity-all}), we further prove that $\Delta_0$ and $\Delta_1$ are simultaneously eliminated even in an approximation sense.
When the values of constraints in ${\cal F}$ are all limited to Boolean values, Theorem \ref{key-Delta-elimination} is exactly \cite[Lemma 16]{DGJ10}. For real-valued constraints, however, we need to develop a quite different argument from \cite{DGJ10} to prove this theorem. An important ingredient of our proof, described in Section \ref{sec:const-unary-const}, is an efficient estimation of a lower bound of an arbitrary multi-variate polynomial in the values of given constraints. However, since our constraints can output negative real values, the polynomial may possibly produce arbitrary small values, and thus we cannot find a polynomial-time computable lower bound. To avoid encountering such an unwanted situation, we dare to restrict our attention onto {\em algebraic real numbers}. In the rest of this paper, all real numbers will be limited to algebraic numbers.
As a natural application of Theorem \ref{key-Delta-elimination}, we give an alternative proof to our classification theorem (Theorem \ref{dichotomy-theorem}) for {\em symmetric real-weighted \#CSPs} when arbitrary unary constraints are freely available. Using the constant unary constraints, we can conduct the aforementioned arity reductions. Since Theorem \ref{key-Delta-elimination} guarantees the availability of only one of $\Delta_0$ and $\Delta_1$, we need to demonstrate such arity reductions of target constraints even when $\Delta_0$ and $\Delta_1$ are separately given for free. Furthermore, we intend to build such reductions with no use of auxiliary unary constraint.
Our alternative proof proceeds roughly as follows. In the first step, we recognize constraints $g$ of the following three special forms: $[0,y,z]$ and $[x,y,0]$ with $x,y,z>0$ and $[x,y,z]$ with $x,y,z>0$ as well as $xz\neq y^2$. The constraints $g$ of those forms become crucial elements of our later analyses because, when auxiliary unary constraints are available for free, $\#\mathrm{CSP}(g,{\cal U})$ is computationally at least as hard as $\#\mathrm{SAT}$ with respect to the AP-reducibility (Lemma \ref{OR-and-B-Yam10a}).
In the second step, we isolate a set ${\cal F}$ of constraints whose corresponding counting problem $\#\mathrm{CSP}({\cal F},{\cal G})$ is AP-reduced from a specific problem $\#\mathrm{CSP}(g,{\cal G})$ for an arbitrary set ${\cal G}$ of constraints with no use of extra unary constraints. To be more exact, we wish to establish the following specific AP-reduction from $\#\mathrm{CSP}(g,{\cal G})$ to $\#\mathrm{CSP}({\cal F},{\cal G})$.
\begin{theorem}\label{main-theorem} Let ${\cal F}$ be any set of symmetric real-valued constraints of arity at least $2$. If either ${\cal F}\subseteq {\cal DG}\cup{\cal ED}_{1}^{(+)}$ or ${\cal F}\subseteq{\cal DG}^{(-)}\cup{\cal ED}_{1}\cup{\cal AZ} \cup{\cal AZ}_1\cup{\cal B}_0$ holds, then $\#\mathrm{CSP}({\cal F})$ is polynomial-time solvable. Otherwise, $\#\mathrm{CSP}({\cal F},{\cal G})$ are AP-reduced from $\#\mathrm{CSP}(g,{\cal G})$ for any constraint set ${\cal G}$, where $g$ is an appropriate constraint of one of the three special forms described above. \end{theorem}
In Theorem \ref{main-theorem}, the constraint set ${\cal DG}$ consists of {\em degenerate} constraints, ${\cal ED}_{1}$ indicates a set of equality and disequality, ${\cal AZ}$ contains specific symmetric constraints having alternating zeros, ${\cal AZ}_1$ is similar to ${\cal AZ}$ but requiring alternating zeros, ``plus'' signs, and ``minus'' signs, and ${\cal B}_0$ is composed of special constraints of non-zero entries. Two additional sets ${\cal DG}^{(-)}$ and ${\cal ED}_{1}^{(+)}$ are naturally induced from ${\cal DG}$ and ${\cal ED}_{1}$, respectively. For their precise definitions, refer to Section \ref{sec:computability}.
\sloppy In the third step, we recognize distinctive behaviors of two constraint sets ${\cal DG}\cup{\cal ED}_1^{(+)}$ and ${\cal DG}^{(-)}\cup {\cal ED}_1\cup{\cal AZ} \cup{\cal AZ}_1\cup {\cal B}_0$. The counting problems $\#\mathrm{CSP}({\cal DG},{\cal ED}_{1}^{(+)})$ and $\#\mathrm{CSP}({\cal DG}^{(-)},{\cal ED}_{1},{\cal AZ},{\cal AZ}_1,{\cal B}_0)$ are both solvable in polynomial time \cite{CLX09x,GGJ+10}. In the presence of the auxiliary set ${\cal U}$ of arbitrary unary constraints, the problem $\#\mathrm{CSP}({\cal DG},{\cal ED}_{1}^{(+)},{\cal U})$, which essentially equals $\#\mathrm{CSP}({\cal ED},{\cal U})$, remains solvable in polynomial time; on the contrary, as a consequence of Theorem \ref{main-theorem}, the problem $\#\mathrm{CSP}({\cal DG}^{(-)},{\cal ED}_{1},{\cal AZ},{\cal AZ}_1,{\cal B}_0,{\cal U})$ is AP-reduced from $\#\mathrm{CSP}(g,{\cal U})$ for an appropriately chosen $g$ of the prescribed form.
In the final step, since $\#\mathrm{SAT}$ is AP-reducible to $\#\mathrm{CSP}(g,{\cal U})$ \cite{Yam10a}, the above results immediately imply Theorem \ref{dichotomy-theorem} for symmetric real-weighted \#CSPs. The above argument exemplifies that the free use of auxiliary unary constraints can be made only in the third step. The detailed argument is found in Section \ref{sec:computability}.
A heart of our proof is an efficient, approximate transformation (called {\em effective T-constructibility}) of a target constraint from a given set of constraints. This effective T-constructibility is a powerful tool in showing AP-reductions between two counting problems. Since our constructibility can locally modify underlying structures of input instances, this simple tool makes it possible to introduce an auxiliary constraint set ${\cal G}$ in Theorem \ref{main-theorem}. A prototype of this technical tool first appeared in \cite{Yam10a} and was further extended or modified in \cite{Yam10b,Yam11a}.
\paragraph{Comparison of Proof Techniques:} Dyer \textrm{et al.}~\cite{DGJ10} used a notion of ``simulatability'' to demonstrate the approximate elimination of the constant unary constraints using any given set of Boolean constraints. Our proof of Theorem \ref{key-Delta-elimination}, however, employs a notion of effectively T-constructibility. While a key proof technique used in \cite{Yam10a} to prove Theorem \ref{dichotomy-theorem} is the factorization of constraints, our proof of Theorem \ref{main-theorem} (which leads to Theorem \ref{dichotomy-theorem}) in Section \ref{sec:computability} makes a heavy use of the constant unary constraints. Furthermore, our proof is quite elementary because it proceeds by examining {\em all} possible forms of a target constraint. This fact makes the proof cleaner and more straightforward to follow.
\section{Fundamental Notions and Notations}\label{sec:preliminaries}
We will explain basic concepts that are necessary to read through the rest of this paper. First, let $\mathbb{N}$ denote the set of all {\em natural numbers} (\textrm{i.e.},\hspace*{2mm} nonnegative integers) and let $\mathbb{R}$ be the set of all {\em real numbers}. For convenience, define $\mathbb{N}^{+} = \mathbb{N}-\{0\}$ and, for each number $n\in\mathbb{N}^{+}$, $[n]$ stands for the {\em integer interval} $\{1,2,\ldots,n\}$.
Because our results heavily rely on Lemma \ref{constructibility}(3), we need to limit our attention within {\em algebraic real numbers}. For this purpose, a special notation $\mathbb{A}$ is used to indicate the set of all algebraic real numbers. To simplify our terminology throughout the paper, whenever we refer to ``real numbers,'' we actually mean ``algebraic real numbers.''
\subsection{Constraints and \#CSPs}
The term ``constraint of arity $k$'' always refers to a function mapping the set $\{0,1\}^{k}$ of binary strings of length $k$ to $\mathbb{A}$. Assuming the standard lexicographic ordering on the set $\{0,1\}^{k}$, we conveniently express $f$ as a {\em row-vector} consisting of its output values; for instance, when $f$ has arity $2$, it is expressed as $(f(00),f(01),f(10),f(11))$. Given any $k$-ary constraint $f=(f_1,f_2,\ldots,f_{2^k})$ in a vector form, the notation $\|f\|_{\infty}$ means $\max_{i\in[2^k]}\{|f_i|\}$. A $k$-ary constraint $f$ is called {\em symmetric} if, for every input $x$ in $\{0,1\}^k$, the value $f(x)$ depends only on the Hamming weight (\textrm{i.e.},\hspace*{2mm} the number of $1$'s) of the input $x$; otherwise, $f$ is called {\em asymmetric}. For any symmetric constraint $f$ of arity $k$, we also use a succinct notation $[f_0,f_1,\ldots,f_k]$ to express $f$, where each entry $f_i$ expresses the value of $f$ on inputs of Hamming weight $i$. For instance, if $f=[f_0,f_1,f_2]$ is of arity two, then it holds that $f_0=f(00)$, $f_1=f(01)=f(10)$, and $f_2=f(11)$. Of all symmetric constraints, we recognize two special unary constraints, $\Delta_0=[1,0]$ and $\Delta_1=[0,1]$, which are called {\em constant unary constraints}.
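To make the succinct notation concrete, the following small Python sketch (an illustration only; the encoding of a constraint as a dictionary from Boolean tuples to its output values is our own choice and is not part of the formal definitions) expands a symmetric constraint given as $[f_0,f_1,\ldots ,f_k]$ into its full value table.
\begin{verbatim}
from itertools import product

def expand_symmetric(f):
    # f = [f_0, f_1, ..., f_k]; the value on an input depends only on its
    # Hamming weight.  Return the full value table on {0,1}^k.
    k = len(f) - 1
    return {x: f[sum(x)] for x in product((0, 1), repeat=k)}

print(expand_symmetric([1, 2, 3]))
# {(0, 0): 1, (0, 1): 2, (1, 0): 2, (1, 1): 3}
\end{verbatim}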
Restricted to a set ${\cal F}$ of constraints, a {\em real-weighted (Boolean) \#CSP}, conventionally denoted $\#\mathrm{CSP}({\cal F})$, takes a {\em finite} set $\Omega$ composed of elements of the form $\pair{h,(x_{i_1},x_{i_2},\ldots,x_{i_k})}$, where $h\in{\cal F}$ is a function on $k$ Boolean variables $x_{i_1},x_{i_2},\ldots,x_{i_k}$ in $X=\{x_1,x_2,\ldots,x_n\}$ with $i_1,\ldots,i_k\in[n]$, and its goal is to compute the real value \begin{equation}\label{eqn:csp-def} csp_{\Omega} =_{def} \sum_{x_1,x_2,\ldots,x_n\in\{0,1\}} \prod_{\pair{h,x}\in \Omega} h(x_{i_1},x_{i_2},\ldots,x_{i_k}), \end{equation}
where $x$ denotes $(x_{i_1},x_{i_2},\ldots,x_{i_k})$. To illustrate $\Omega$ graphically, we view it as a {\em labeled undirected bipartite graph} $G=(V_1|V_2,E)$ whose nodes in $V_1$ are labeled distinctively by $x_1,x_2,\ldots,x_n$ in $X$ and nodes in $V_2$ are labeled by constraints $h$ in ${\cal F}$ such that, for each pair $\pair{h,(x_{i_1},x_{i_2},\ldots,x_{i_k})}$, there are $k$ edges between an associated node labeled $h$ and the nodes labeled $x_{i_1},x_{i_2},\ldots,x_{i_k}$. The labels of nodes are formally specified by a {\em labeling function} $\pi:V_1\cup V_2\rightarrow X\cup{\cal F}$ with $\pi(V_1)\subseteq X$ and $\pi(V_2)\subseteq {\cal F}$ but we often omit it from the description of $G$ for simplicity. When $\Omega$ is viewed as this special bipartite graph, it is called a {\em constraint frame} \cite{Yam10a,Yam10b}. More formally, a constraint frame $\Omega=(G,X|{\cal F}',\pi)$ is composed of an undirected bipartite graph $G$ with its associated labeling function $\pi:V_1\cup V_2\rightarrow X\cup {\cal F}'$, a variable set $X=\{x_1,x_2,\ldots,x_n\}$, and a {\em finite} set ${\cal F}'\subseteq {\cal F}$.
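For illustration, consider the small instance $\Omega$ over $X=\{x_1,x_2\}$ consisting of the two elements $\pair{OR,(x_1,x_2)}$ and $\pair{\Delta_1,(x_2)}$, where $OR$ denotes the constraint $[0,1,1]$. Eq.(\ref{eqn:csp-def}) then gives \[ csp_{\Omega} = \sum_{x_1,x_2\in\{0,1\}} OR(x_1,x_2)\cdot \Delta_1(x_2) = OR(0,1) + OR(1,1) = 2, \] which counts the two assignments satisfying both $x_1\vee x_2$ and $x_2=1$. The associated constraint frame has two nodes in $V_1$ labeled $x_1,x_2$ and two nodes in $V_2$ labeled $OR$ and $\Delta_1$, with edges connecting each constraint node to the variables on which it acts.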
To simplify later descriptions, we wish to use the following simple rule of abbreviation. For instance, when $f$ is a constraint and both ${\cal F}$ and ${\cal G}$ are constraint sets, we write $\#\mathrm{CSP}(f,{\cal F},{\cal G})$ to mean $\#\mathrm{CSP}(\{f\}\cup {\cal F}\cup{\cal G})$.
In the subsequent sections, we will use the following succinct notations. Let $f$ be any constraint of arity $k\in\mathbb{N}^{+}$. Given any index $i\in[k]$ and any bit $c\in\{0,1\}$, the notation $f^{x_i=c}$ stands for the function $g$ of arity $k-1$ satisfying that $g(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k) = f(x_1,\ldots,x_{i-1},c,x_{i+1},\ldots,x_k)$ for every $(k-1)$-tuple $(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k)\in\{0,1\}^{k-1}$. For any two distinct indices $i,j\in[k]$, we denote by $f^{x_i=x_j}$ the function $g$ of arity $k-1$ defined as $g(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k) =
f(x_1,\ldots,x_{i-1},x_{j},x_{i+1},\ldots,x_k)$ for every $(k-1)$-tuple $(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k)\in\{0,1\}^{k-1}$. Finally, let $f^{x_i=*}$ express the function $g$ defined by $g(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k) = \sum_{c\in\{0,1\}}f(x_1,\ldots,x_{i-1},c,x_{i+1},\ldots,x_k)$ for every $(k-1)$-tuple $(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k)\in\{0,1\}^{k-1}$.
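As a quick illustration of these notations, let $f$ be the binary constraint given by the row-vector $(1,2,3,4)$, that is, $f(00)=1$, $f(01)=2$, $f(10)=3$, and $f(11)=4$. Then $f^{x_1=0}=(1,2)$, $f^{x_2=1}=(2,4)$, $f^{x_1=x_2}=(1,4)$, and $f^{x_1=*}=(1+3,2+4)=(4,6)$.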
\subsection{FP$_{\mathbb{A}}$ and AP-Reducibility}\label{sec:randomized-scheme}
To connect our results (particularly, Theorems \ref{key-Delta-elimination} and \ref{main-theorem}) to Theorem \ref{dichotomy-theorem}, we follow notational conventions used in \cite{Yam10a,Yam10b}. First, $\mathrm{FP}_{\mathbb{A}}$ denotes the collection of all {\em $\mathbb{A}$-valued} functions that can be computed deterministically in polynomial time.
Let $F$ be any function mapping $\{0,1\}^*$ to $\mathbb{A}$ and let $\Sigma$ be any nonempty finite alphabet. A {\em randomized approximation scheme} (or RAS, in short) for $F$ is a randomized algorithm that takes a standard input $x\in\Sigma^*$ together with an error tolerance parameter $\varepsilon\in(0,1)$, and outputs a value $w$ that, with probability at least $3/4$, satisfies \begin{equation}\label{eqn:approx} \min\{2^{-\varepsilon}F(x), 2^{\varepsilon}F(x)\} \leq w \leq \max\{2^{-\varepsilon}F(x), 2^{\varepsilon}F(x)\}. \end{equation}
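As a numerical illustration of Eq.(\ref{eqn:approx}), if $F(x)=100$ and $\varepsilon=0.1$, then any output $w$ lying in the interval $[100\cdot 2^{-0.1},100\cdot 2^{0.1}]$, which is roughly $[93.3,107.2]$, meets the requirement; as $\varepsilon$ shrinks, this interval tightens around $F(x)$.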
Given two arbitrary real-valued functions $F$ and $G$, a {\em polynomial-time randomized approximation-preserving Turing reduction} (or {\em AP-reduction}) from $F$ to $G$ \cite{DGGJ04} is a randomized algorithm $M$ that takes a pair $(x,\varepsilon)\in\Sigma^*\times(0,1)$ as input, accesses an oracle, and satisfies the following three conditions: (i) when the oracle is an arbitrary RAS $N$ for $G$, $M$ is always an RAS for $F$;
(ii) every oracle call made by $M$ is of the form $(w,\delta)\in\Sigma^*\times(0,1)$ with $1/\delta \leq poly(|x|,1/\varepsilon)$ and its answer is the outcome of $N$ on $(w,\delta)$; and (iii) the running time of $M$ is upper-bounded by a certain polynomial in $(|x|,1/\varepsilon)$, which does not depend on the choice of $N$. If such an AP-reduction exists, then we also say that $F$ is {\em AP-reducible} to $G$ and we write $F\leq_{\mathrm{AP}} G$. If both $F\leq_{\mathrm{AP}} G$ and $G\leq_{\mathrm{AP}} F$ hold, then $F$ and $G$ are said to be {\em AP-equivalent} and we use the special notation $F\equiv_{\mathrm{AP}} G$.
\begin{lemma}\label{AP-property} For any functions $F_1,F_2,F_3:\{0,1\}^*\rightarrow\mathbb{A}$, the following properties hold. \begin{enumerate}\vs{-1} \item $F_1\leq_{\mathrm{AP}} F_1$. \vs{-2} \item If $F_1\leq_{\mathrm{AP}} F_2$ and $F_2\leq_{\mathrm{AP}} F_3$, then $F_1\leq_{\mathrm{AP}} F_3$. \end{enumerate} \end{lemma}
\subsection{Effective T-Constructibility}\label{sec:constructibility}
Our goal in the subsequent sections is to prove our main theorems, Theorems \ref{key-Delta-elimination} and \ref{main-theorem}. For their desired proofs, we will introduce a fundamental notion of {\em effective T-constructibility}, whose underlying idea comes from a graph-theoretical formulation of {\em limited T-constructibility} \cite{Yam10b}.
Let us start with the definitions of ``representation'' and ``realization''
in \cite{Yam10b}. Let $f$ be any constraint of arity $k$. We say that an undirected bipartite graph $G=(V_1|V_2,E)$ (together with a labeling function $\pi$) {\em represents} $f$ if $V_1$ consists only of $k$ nodes labeled with $x_1,\ldots,x_k$, which may possibly have a certain number of dangling edges,\footnote{A dangling edge is obtained from an edge by deleting exactly one end of this edge. These dangling edges are treated as ``normal'' edges, and therefore the degree of each node must count dangling edges as well.} and $V_2$ contains only a node labeled
$f$ to which each node $x_i$ is adjacent. Given a set ${\cal G}$ of constraints, a graph $G=(V_1|V_2,E)$ is said to {\em realize $f$ by ${\cal G}$} if the following four conditions are met simultaneously: \begin{enumerate} \item[(i)] $\pi(V_2)\subseteq {\cal G}$, \vs{-2} \item[(ii)] $G$ contains at least $k$ nodes having the labels $x_1,\ldots,x_k$, possibly together with nodes associated with other variables, say, $y_1,\ldots,y_m$; namely,
$V_1=\{x_1,\ldots,x_k,y_1,\ldots,y_m\}$, \vs{-2} \item[(iii)] only the nodes $x_1,\ldots,x_k$ are allowed to have dangling edges, and \vs{-2} \item[(iv)] $f(x_1,\ldots,x_k) = \lambda \sum_{y_1,\ldots,y_m\in\{0,1\}} \prod_{w \in V_2} f_{w}(z_1,\ldots,z_d)$ for an appropriate constant $\lambda\in\mathbb{A}-\{0\}$, where $f_w$ denotes a constraint $\pi(w)$ and $z_1,\ldots,z_d\in V_1$. \end{enumerate}
The {\em sign function}, denoted $sgn$, is defined as follows. For any real number $\lambda$, we set $sgn(\lambda)=+1$ if $\lambda>0$, $sgn(\lambda)=0$ if $\lambda=0$, and $sgn(\lambda)=-1$ if $\lambda<0$. An infinite series $\Lambda = (g_1,g_2,g_3,\ldots)$ of arity-$k$ constraints is called a {\em p-convergence series}\footnote{At a quick glance, the approximation scheme of Eq.(\ref{eqn:convergence}) appears quite differently from that of Eq.(\ref{eqn:approx}). However, by setting $\varepsilon = \lambda^m$, the value $1+\varepsilon$ approximately equals $2^{\varepsilon}$ and $1-\varepsilon$ is also close to $2^{-\varepsilon}$ for any sufficiently large number $m$.} for a target constraint $f=(r_1,r_2,\ldots,r_{2^k})$ of arity $k$ if there exist a constant $\lambda\in(0,1)$ and a deterministic Turing machine (abbreviated as DTM) $M$ running in polynomial time such that, for every number $m\in\mathbb{N}^{+}$, (i) $M$ takes an input of the form $1^m$ and outputs a complete description of the constraint $g_m$ in a row-vector form $(z_1,z_2,\ldots,z_{2^k})$, (ii) for every $k$-tuple $x\in\{0,1\}^k$, if $f(x)\neq0$, then $sgn(f(x))=sgn(g_m(x))$, and (iii) for every index $i\in[2^k]$, if $r_i\neq0$, then \begin{equation}\label{eqn:convergence} \min\{(1+\lambda^m)z_i,(1-\lambda^m)z_i\}\leq r_i\leq \max\{(1+\lambda^m)z_i,(1-\lambda^m)z_i\}, \end{equation}
and otherwise, $|z_i|\leq \lambda^m$.
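As a simple example of a p-convergence series, take the target constraint $f=\Delta_0=[1,0]$, the constant $\lambda=1/2$, and $g_m=[1,4^{-m}]$ for each $m\in\mathbb{N}^{+}$. A DTM can clearly print the row-vector of $g_m$ from the input $1^m$ in polynomial time, the sign condition holds because $sgn(f(0))=sgn(g_m(0))=+1$, the nonzero entry $r_1=1$ satisfies $(1-\lambda^m)z_1\leq r_1\leq (1+\lambda^m)z_1$ with $z_1=1$, and the zero entry satisfies $|z_2|=4^{-m}\leq \lambda^m$.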
We then define the effective T-constructibility of a given finite set of constraints.
\begin{definition}\label{def:constructibility} (effective T-constructibility) Let ${\cal F}$ and ${\cal G}$ be any two finite sets of constraints. We say that ${\cal F}$ is {\em effectively T-constructible} from ${\cal G}$ if there exists a finite series $({\cal F}_1,{\cal F}_2,\ldots,{\cal F}_n)$ of finite constraint sets (which is succinctly called a {\em generating series} of ${\cal F}$ from ${\cal G}$) such that \begin{enumerate}\vs{-1} \item[(i)] ${\cal F} = {\cal F}_1$ and ${\cal G}={\cal F}_n$, and \vs{-2} \item[(ii)] for each adjacent pair $({\cal F}_i,{\cal F}_{i+1})$, where $i\in[n-1]$, one of Clauses (I)--(II) should hold. \end{enumerate}\vs{-1}
(I) For every constraint $f$ of arity $k$ in ${\cal F}_i$ and for any finite graph $G$ representing $f$ with distinct variables $x_1,\ldots,x_k$, there exists another finite graph $G'$ satisfying the following two conditions: \begin{enumerate}\vs{-1} \item[(i')] $G'$ realizes $f$ by ${\cal F}_{i+1}$, and \vs{-2} \item[(ii')] $G'$ maintains the same dangling edges as $G$ does. \end{enumerate}\vs{-1}
(II) Let ${\cal F}_{i+1}=\{g_1,g_2,\ldots,g_d\}$. For every constraint $f$ of arity $k$ in ${\cal F}_i$, there exist a p-convergence series $\Lambda=(f_1,f_2,\ldots)$ of arity-$k$ constraints and a polynomial-time DTM $M$ such that, for every number $m\in\mathbb{N}^{+}$, (a) $M$ takes an input of the form $(1^m,G,(g_1,g_2,\ldots,g_d))$, where $G$ represents $f_m$ with distinct variables $x_1,\ldots,x_k$ and each $g_j$ is described in a row-vector form, and (b) $M$ outputs a bipartite graph $G_m$ such that \begin{enumerate}\vs{-1} \item[(i'')] $G_m$ realizes $f_m$ by ${\cal F}_{i+1}$ and \vs{-2} \item[(ii'')] $G_m$ maintains the same dangling edges as $G$ does. \end{enumerate}\vs{-1} \end{definition}
When ${\cal F}$ is effectively T-constructible from ${\cal G}$, we write ${\cal F}\leq_{e\mbox{-}con} {\cal G}$. We are particularly interested in the case where ${\cal F}$ is a singleton $\{f\}$, and we succinctly write $f\leq_{e\mbox{-}con}{\cal G}$. Moreover, when ${\cal G}$ is also a singleton $\{g\}$, we further write $f\leq_{e\mbox{-}con} g$.
\begin{lemma}\label{constructibility} Let ${\cal F}_1$, ${\cal F}_2$, and ${\cal F}_3$ be three finite constraint sets. Let ${\cal G}$ be an arbitrary set of constraints. \begin{enumerate}\vs{-1} \item ${\cal F}_1\leq_{e\mbox{-}con} {\cal F}_1$. \vs{-2} \item ${\cal F}_1\leq_{e\mbox{-}con} {\cal F}_2$ and ${\cal F}_2\leq_{e\mbox{-}con} {\cal F}_3$ imply ${\cal F}_1\leq_{e\mbox{-}con} {\cal F}_3$. \vs{-2} \item If ${\cal F}_1\leq_{e\mbox{-}con} {\cal F}_2$, then $\#\mathrm{CSP}({\cal F}_1,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal F}_2,{\cal G})$. \end{enumerate} \end{lemma}
It is important to note that, since our constraints are permitted to output negative values, the use of {\em algebraic real numbers} for the constraints may be necessary in the proof of Lemma \ref{constructibility} because the proof heavily relies on an explicit lower bound estimation of arbitrary polynomials over algebraic numbers.
In later arguments, a use of effective T-constructibility will play an essential role because a relation $f\leq_{e\mbox{-}con} g$ leads to $\#\mathrm{CSP}(f,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}(g,{\cal G})$ for any constraint set ${\cal G}$ by Lemma \ref{constructibility}(3), whereas a relation $\#\mathrm{CSP}(f)\leq_{\mathrm{AP}} \#\mathrm{CSP}(g)$ in general does not imply $\#\mathrm{CSP}(f,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}(g,{\cal G})$.
For readability, we postpone the proof of Lemma \ref{constructibility} until the Appendix.
\section{Approximation of the Constant Unary Constraints}\label{sec:const-unary-const}
Let us prove our first main theorem---Theorem \ref{key-Delta-elimination}---which states that, given an arbitrary set of constraints, we can efficiently approximate at least one of the two constant unary constraints. The theorem allows us to utilize such a constraint freely for a further analysis of constraints in Section \ref{sec:computability}.
\subsection{Notion of Complement Stability}\label{sec:complement-stable}
To prove Theorem \ref{key-Delta-elimination}, we will first introduce two useful notions regarding a certain ``symmetric'' nature of a given constraint. A $k$-ary constraint $f$ is said to be {\em complement invariant} if $f(x_1,\ldots,x_k) = f(x_1\oplus1,\ldots,x_k\oplus1)$ holds for every input tuple $(x_1,\ldots,x_k)$ in $\{0,1\}^k$, where the notation $\oplus$ means the {\em (bitwise) XOR}. In contrast, we say that $f$ is {\em complement anti-invariant} if, for every input $(x_1,\ldots,x_k)\in\{0,1\}^k$, $f(x_1,\ldots,x_k) = - f(x_1\oplus1,\ldots,x_k\oplus1)$ holds. For instance, $f=[1,1]$ is complement invariant and $f'=[1,0,-1]$ is complement anti-invariant. In addition, we say that $f$ is {\em complement stable} if $f$ is either complement invariant or complement anti-invariant. A constraint set ${\cal F}$ is {\em complement stable} if every constraint in ${\cal F}$ is complement stable. In the case where $f$ (resp., ${\cal F}$) is not complement stable, by contrast, we conveniently call it {\em complement unstable}.
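For a symmetric binary constraint $g=[g_0,g_1,g_2]$, these notions take a particularly simple form: $g$ is complement invariant exactly when $g_0=g_2$, and complement anti-invariant exactly when $g_0=-g_2$ and $g_1=0$. Hence, for example, $[1,2,1]$ is complement invariant, $[3,0,-3]$ is complement anti-invariant, and $[1,2,3]$ is complement unstable.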
We will split Theorem \ref{key-Delta-elimination} into two separate statements, as shown in Proposition \ref{Delta-removal-arity-all}, depending on whether or not a given nonempty set ${\cal F}$ of constraints is complement stable.
\begin{proposition}\label{Delta-removal-arity-all} Let ${\cal F}$ be any nonempty set of constraints and let $f=(z_1,z_2,\ldots,z_{2^k})$ be any constraint of arity $k$ with $k\geq1$. \begin{enumerate}
\setlength{\topsep}{-2mm}
\setlength{\itemsep}{1mm}
\setlength{\parskip}{0cm}
\item If ${\cal F}$ is complement stable, then $\#\mathrm{CSP}(\Delta_i,{\cal F})\equiv_{\mathrm{AP}} \#\mathrm{CSP}({\cal F})$ holds for every index $i\in\{0,1\}$.
\item Assume that $f$ is complement unstable. If $f$ satisfies one of two conditions ($a$)--($b$) given below, then $\Delta_i\leq_{e\mbox{-}con} f$ holds for all indices $i\in\{0,1\}$. Otherwise, there exists at least one index $i\in\{0,1\}$ for which $\Delta_i\leq_{e\mbox{-}con} f$ holds.
\begin{enumerate}\vs{-1}
\setlength{\topsep}{-1mm}
\setlength{\itemsep}{1mm}
\setlength{\parskip}{0cm}
\item $k\geq2$ and $|z_1| = |z_{2^k}|$.
\item $k\geq2$ and either $(|z_1|-|z_{2^k}|)(|z_1+z_j| - |z_{2^k-j+1}+z_{2^k}|)<0$ or
$(|z_1|-|z_{2^k}|)(|z_1+z_{2^k-j+1}| - |z_j+z_{2^k}|)<0$ holds for a certain index $j\in[2^{k-1}]-\{1\}$. \end{enumerate} \end{enumerate} \end{proposition}
Notice that Proposition \ref{Delta-removal-arity-all} together with Lemma \ref{constructibility}(3) implies Theorem \ref{key-Delta-elimination}. Proposition \ref{Delta-removal-arity-all}(1) can be proven rather easily, as presented below, whereas Proposition \ref{Delta-removal-arity-all}(2) requires a slightly more complicated argument.
\begin{proofof}{Proposition \ref{Delta-removal-arity-all}(1)}
In the following proof, we will deal only with $\Delta_0$, because the other case is similarly handled. Let ${\cal F}$ be any nonempty set of constraints and take any input instance $\Omega$, in the form of constraint frame $(G,X|{\cal F}',\pi)$ with ${\cal F}'\subseteq {\cal F}$, given to the counting problem $\#\mathrm{CSP}(\Delta_0,{\cal F})$. If $\Delta_0\in{\cal F}$, then Proposition \ref{Delta-removal-arity-all}(1) is trivially true. Henceforth, we assume that $\Delta_0\not\in{\cal F}$. Let us consider the case where ${\cal F}$ is complement anti-invariant. Since the other case where ${\cal F}$ is complement invariant is essentially the same, we omit the case.
To simplify our proof, we modify $\Omega$ as follows. First, we merge all variable nodes (\textrm{i.e.},\hspace*{2mm} nodes with ``variable'' labels) adjacent to nodes labeled $\Delta_0$ into a single node having a fresh variable label. If more than one adjacent node has the label $\Delta_0$, then we delete all such nodes except for one. After this modification, we may assume that there is exactly one node, say, $v_0$, whose label is $\Delta_0$. Now, let $v_1$ be the unique node adjacent to $v_0$ and let $x_0$ be its variable label. For simplicity, we keep the same notation $\Omega$ for the constraint frame obtained by this modification.
Let $m$ denote the total number of {\em nodes} in $\Omega$ whose labels are constraints in ${\cal F}$. By simply removing the node $v_0$ having the label $\Delta_0$ from $\Omega$, we obtain another instance, say, $\Omega'$, which is obviously an input instance to $\#\mathrm{CSP}({\cal F})$. Using basic properties of complement anti-invariance, we wish to prove the following equality: \begin{equation}\label{csp-formula} csp_{\Omega'} = csp_{\Omega} + (-1)^{m}csp_{\Omega}. \end{equation} Let us consider any ``partial'' assignment $\sigma$ to all variables appearing in $\Omega'$ except for $x_0$, that is, $\sigma:X-\{x_0\}\rightarrow\{0,1\}$. Associated with $\sigma$, we introduce two corresponding Boolean assignments $\sigma_0$ and $\sigma_1$. Firstly, we obtain $\sigma_0$ from $\sigma$ by additionally assigning $0$ to $x_0$; note that $\sigma_0$ assigns a value to every variable of $\Omega'$. Secondly, let $\sigma_1$ be defined by assigning $1$ to $x_0$ and $1-\sigma(z)$ to all the other variables $z$. Note that $csp_{\Omega}$ is calculated over all assignments $\sigma_0$ induced from any partial assignments $\sigma$. Similarly, to compute $csp_{\Omega'}$, it is enough to consider all assignments $\sigma_0$ and $\sigma_1$. Since all constraints in $\Omega'$ are complement anti-invariant, the product of the values of all constraints under $\sigma_1$ equals $(-1)^m$ times the product of all constraints' values under $\sigma_0$. This establishes Eq.(\ref{csp-formula}).
If $m$ is even, then we immediately obtain the equation $csp_{\Omega} = \frac{1}{2}csp_{\Omega'}$ from Eq.(\ref{csp-formula}). Next, assume that $m$ is odd and choose any constraint $g$ that is complement anti-invariant in ${\cal F}$. We further modify $\Omega'$ into $\Omega''$ as follows. Letting $g$ be of arity $k$, we prepare a new variable, say, $x$ and add to $\Omega'$ a new element $\pair{g,(x,x,\ldots,x)}$, which essentially behaves as $e\cdot [1,-1]$ for a certain constant $e\neq0$. A similar argument for Eq.(\ref{csp-formula}) can prove that \begin{equation*} csp_{\Omega''} = e\cdot csp_{\Omega} + (-1)^{m+1}e\cdot csp_{\Omega} = 2e\cdot csp_{\Omega}. \end{equation*} Thus, from the value $csp_{\Omega''}$, we can efficiently compute $csp_{\Omega}$, which equals $\frac{1}{2e}csp_{\Omega''}$. The two equations $csp_{\Omega}=\frac{1}{2}csp_{\Omega'}$ and $csp_{\Omega}=\frac{1}{2e}csp_{\Omega''}$ clearly establish an AP-reduction from $\#\mathrm{CSP}(\Delta_i,{\cal F})$ to $\#\mathrm{CSP}({\cal F})$.
Since the other direction, $\#\mathrm{CSP}({\cal F})\leq_{\mathrm{AP}} \#\mathrm{CSP}(\Delta_i,{\cal F})$, is obvious, we finally obtain the desired AP-equivalence between $\#\mathrm{CSP}(\Delta_i,{\cal F})$ and $\#\mathrm{CSP}({\cal F})$. \end{proofof}
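For a small sanity check of Eq.(\ref{csp-formula}), take ${\cal F}=\{g\}$ with the complement anti-invariant constraint $g=[1,0,-1]$, and let $\Omega'$ consist of the single element $\pair{g,(x_0,x_1)}$, so that $m=1$. Then $csp_{\Omega'} = g(00)+g(01)+g(10)+g(11) = 0$, whereas attaching $\Delta_0$ to $x_0$ yields $csp_{\Omega} = g(00)+g(01) = 1$, and indeed $csp_{\Omega} + (-1)^{m}csp_{\Omega} = 1-1 = 0 = csp_{\Omega'}$.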
In Sections \ref{sec:basis-case-proof}--\ref{sec:general-case-proof}, we will concentrate on the proof of Proposition \ref{Delta-removal-arity-all}(2). First, let ${\cal F}$ denote any nonempty set of constraints. Obviously, $\#\mathrm{CSP}({\cal F})$ is AP-reducible to $\#\mathrm{CSP}(\Delta_i,{\cal F})$ for every index $i\in\{0,1\}$. It therefore suffices to show the other direction (namely, $\#\mathrm{CSP}(\Delta_i,{\cal F})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal F})$) for an appropriately chosen index $i$. Hereafter, we suppose that ${\cal F}$ is complement unstable, and we choose a constraint $f$ in ${\cal F}$ that is complement unstable. Furthermore, we assume that $f$ has arity $k$ ($k\geq1$). Our proof of Proposition \ref{Delta-removal-arity-all}(2) proceeds by induction on this index $k$.
\subsection{Basis Case: {\em k} = 1, 2}\label{sec:basis-case-proof}
Under the assumption described at the very end of Section \ref{sec:complement-stable}, we now target the basis case of $k\in\{1,2\}$. The induction case of $k\geq3$ will be discussed in Section \ref{sec:general-case-proof}. Notice that Condition ($a$) of Proposition \ref{Delta-removal-arity-all}(2) is necessary; to see this claim, consider a constraint set ${\cal F}=\{[1,0]\}$.
(1) Assuming $k=1$, let $f=[x,y]$ with $x,y\in\mathbb{A}$. Note that $x\neq \pm y$. This is because, if $x= \pm y$, then $f$ has the form $x\cdot [1,\pm1]$ and $f$ must be complement stable, a contradiction. Hence, it follows that $|x|\neq|y|$. Henceforth, we wish to assert that $|x|>|y|$ (resp., $|x|<|y|$) leads to a conclusion that $\Delta_0$ (resp., $\Delta_1$) is effectively T-constructible from $f$. This assertion comes from the following simple observation.
\begin{claim}\label{algebraic-simul}
Let $x$ and $y$ be two arbitrary algebraic real numbers with $|x|>|y|$. The constraint $\Delta_0 =[1,0]$ is effectively T-constructible from $[1,y/x]$ via a p-convergence series $\Lambda=\{[1,(y/x)^{2n}] \mid n\in\mathbb{N}^{+}\}$ for $\Delta_0$. In the case of $\Delta_1$, a similar statement holds if $|x|<|y|$ (in place of $|x|>|y|$). \end{claim}
\begin{proof}
Let us assume that $|x|>|y|$. We set $\lambda =y/x$ and define $g_m = [1,\lambda^{2m}]$ for every index $m\in\mathbb{N}^{+}$. It is clear that the series $\Lambda = \{g_m\mid m\in\mathbb{N}^{+}\}$ is indeed a p-convergence series for $\Delta_0 =[1,0]$. In addition, the definition of $g_m$ yields the effective T-constructibility of $\Delta_0$ from $[1,\lambda]$. The case of $|x|<|y|$ is similarly treated. \end{proof}
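Concretely, one simple choice of the realization required here is the following: for each $m$, the bipartite graph consisting of the single variable node $x_1$ (keeping the dangling edges of the representing graph) adjacent to $2m$ constraint nodes, each labeled $[1,\lambda]$, realizes $g_m$ by $\{[1,\lambda]\}$, since the product of the $2m$ copies of $[1,\lambda]$, all applied to $x_1$, equals $1$ when $x_1=0$ and $\lambda^{2m}$ when $x_1=1$, which is exactly $g_m(x_1)$. Such a graph is clearly computable from $1^m$ in polynomial time.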
Assuming $|x|>|y|$, let $\lambda=y/x$. Claim \ref{algebraic-simul} implies that $\Delta_0 \leq_{e\mbox{-}con} [1,\lambda]$. Since $[1,\lambda]\leq_{e\mbox{-}con} f$, we derive by Lemma \ref{constructibility}(2)
the desired conclusion that $\Delta_0\leq_{e\mbox{-}con} f$. In the case of $|x|<|y|$, it suffices to define $\lambda=x/y$.
(2) Assume that $k=2$ and let $f=(x,y,z,w)$ for certain numbers $x,y,z,w\in\mathbb{A}$. For convenience, we will examine separately the following two cases: $|x|=|w|$ and $|x|\neq |w|$.
[Case: $|x|=|w|$] We want to prove the following claim, which corresponds to Condition ($a$) of Proposition \ref{Delta-removal-arity-all}(2). Recall that $f$ is complement unstable.
\begin{claim}\label{k=2_|x|=|w|}
Assuming that $|x|=|w|$, both $\Delta_0$ and $\Delta_1$ are effectively T-constructible from $f$. \end{claim}
(a) Let us assume that $x=w$. Notice that $y\neq z$, because $y=z$ implies that $f$ is complement invariant, a contradiction. Since $y\neq z$, we set $g=f^{x_1=*}$, which equals $[x+z,x+y]$. Similarly, define $h=f^{x_2=*}=[x+y,x+z]$. Note that the equation $(x+y)^2=(x+z)^2$ is transformed into $(y-z)(2x+y+z)=0$, which is equivalent to $2x+y+z=0$ since $y\neq z$. If $|x+y|<|x+z|$, then we can effectively T-construct $[1,0]$ and $[0,1]$ from $g$ and $h$, respectively, as done in Case (1). Similarly, when
$|x+y|>|x+z|$, we obtain $[0,1]$ and $[1,0]$. In the other case where $2x+y+z = 0$, we start with a new constraint $f'=f^2$ (which equals $(x^2,y^2,z^2,w^2)$) in place of $f$. Obviously, $f'$ is effectively $T$-constructible from $f$. Let us consider the simple case where $y^2\neq z^2$. Since $f'$ is not complement stable and $2x^2+y^2+z^2\neq0$, this case is reduced to the previous case. Finally, let us consider the case where $y^2=z^2$. Since $y\neq z$, we conclude that $y=-z$. {}From $2x+y+z=0$, instantly $x=0$ follows. Thus, $f$ must equal $(0,y,-y,0)$, which is complement anti-invariant. This contradicts our assumption.
(b) Assume that $x=-w$. First, we claim that $y\neq -z$ because, otherwise, $f$ becomes complement anti-invariant. Let us consider a new constraint $f'=f^2$. If $y^2\neq z^2$, then $f'$ is not complement stable, and thus we can reduce this case to Case (a). Hence, it suffices to assume that $y^2=z^2$. This implies $y=z$ and we thus obtain $f=[x,y,-x]$. Next, we define $g=f^{x_1=*}=[x+y,y-x]$. Note that $f^{x_1=*}=f^{x_2=*}$. Consider the case where $|x+y|=|y-x|$. This is equivalent to $xy=0$. If $x=y=0$, then $f$ is complement invariant. If $x=0$ and $y\neq0$, then $f$ is also complement invariant. If $x\neq0$ and $y=0$, then $f$ is complement anti-invariant. In any case, we obtain an obvious contradiction.
Finally, we deal with the case where $|x+y|\neq |y-x|$ (which is equivalent to $xy\neq 0$). Define $f'=f^{x_1=x_2}=[x,-x]$ and set $h(x_1,x_3)= \sum_{x_2\in\{0,1\}} f(x_1,x_2)f'(x_2) f(x_2,x_3)$, which equals $x\cdot [x^2-y^2,2xy,x^2-y^2]$. Note that $h\leq_{e\mbox{-}con} f$.
(i) If $x=y$, then we obtain $g=[2x,0]$. From this $g$, we can effectively T-construct $[1,0]$. Now, consider another constraint $h'(x_1)= \sum_{x_2} h(x_1,x_2) g(x_2)=[0,4x^4]$, from which we effectively T-construct $[0,1]$.
(ii) Assume that $x=-y$. Since $g=[0,-2x]$, $[0,1]$ is effectively T-constructible from $g$. Next, consider $h'$ defined as above. Obviously, $h'=[4x^4,0]$ holds, and thus we effectively T-construct $[1,0]$ from $h'$.
(iii) Consider the case where $|x|\neq |y|$. If $|x+y|>|y-x|$ (equivalently, $xy>0$), then we effectively T-construct $[1,0]$ from $g= [x+y,y-x]$
by Claim \ref{algebraic-simul}. Since $f$ is of the form $[x,y,-x]$, when $|x|<|y|$, we take a series $\Lambda = \{[(x/y)^{2i},1,(-x/y)^{2i}] \mid i\in\mathbb{N}^{+}\}$, which is obviously a p-convergence series for $XOR=[0,1,0]$. Similar to Claim \ref{algebraic-simul}, the following assertion holds.
\begin{claim}\label{XOR-T-const}
Let $x,y$ be arbitrary algebraic real numbers with $|x|<|y|$. The constraint $XOR$ is effectively T-constructible from $[x/y,1,\pm x/y]$ via a p-convergence series $\Lambda = \{[(x/y)^{2i},1,(\pm x/y)^{2i}] \mid i\in\mathbb{N}^{+}\}$ for $XOR$. \end{claim}
Note that $[0,1]$ is effectively T-constructible from $\{XOR,[1,0]\}$. Since Claim \ref{XOR-T-const} yields $XOR \leq_{e\mbox{-}con} f$, we obtain $[0,1]\leq_{e\mbox{-}con} f$.
In the other case where $|x|>|y|$, $h$ has the form $\frac{1}{2y}\cdot [d,1,d]$, where $d=(x^2-y^2)/2xy$. If $x^2-y^2<2|xy|$, then a series $\Lambda =\{[d^{2i},1,d^{2i}] \mid i\in\mathbb{N}^{+}\}$ is a p-convergence series for $XOR=[0,1,0]$ because of $|d|<1$. By Claim \ref{XOR-T-const}, $XOR$ is effectively T-constructible from $[d,1,d]$. Since $[d,1,d]\leq_{e\mbox{-}con} f$, it follows that $XOR\leq_{e\mbox{-}con} f$. From this result, the aforementioned relation $[0,1]\leq_{e\mbox{-}con}\{XOR,[1,0]\}$ implies the desired consequence that $[0,1]\leq_{e\mbox{-}con} f$.
In contrast, when $|x+y|<|y-x|$ (equivalently, $xy<0$), Claim \ref{algebraic-simul} helps us effectively T-construct $[0,1]$ from $g=[x+y,y-x]$. A similar argument as before shows that $[1,0]\leq_{e\mbox{-}con} f$ using $XOR$.
[Case: $|x|\neq|w|$] In this case, it is enough to prove the following claim whose last part is equivalent to Condition ($b$).
\begin{claim}\label{k=2-and-|x|neq|w|}
Assume that $|x|\neq|w|$. There exists an index $i\in\{0,1\}$ satisfying $\Delta_i\leq_{e\mbox{-}con} f$. Moreover, if one of the following two conditions is satisfied, then both $\Delta_0$ and $\Delta_1$ are effectively T-constructible from $f$. The conditions include (i) $|x|>|w|$, and either $|x+y|<|z+w|$ or $|x+z|<|y+w|$, and (ii) $|x|<|w|$, and either $|x+y|>|z+w|$ or $|x+z|>|y+w|$. \end{claim}
To show Claim \ref{k=2-and-|x|neq|w|}, let $g_0=f^{x_1=x_2}=[x,w]$, $g_1=f^{x_1=*}=[x+z,y+w]$, and $g_2=f^{x_2=*}=[x+y,z+w]$. Clearly, $g_0,g_1,g_2\leq_{e\mbox{-}con} f$ holds. In the case where $|x|>|w|$, we can effectively T-construct $\Delta_0$
from $g_0$ using Claim \ref{algebraic-simul}. In addition, if either $|x+y|< |z+w|$ or $|x+z|< |y+w|$ holds, we further effectively T-construct $\Delta_1$ from either $g_1$ or $g_2$. Hence, we obtain both $\Delta_0$ and $\Delta_1$. The case of $|x|<|w|$ is similarly treated.
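For a concrete illustration, consider $f=(2,1,1,1)$, which is complement unstable with $|x|>|w|$. Here $g_0=[2,1]$ yields $\Delta_0\leq_{e\mbox{-}con} f$ via Claim \ref{algebraic-simul}, whereas $g_1=g_2=[3,2]$, so neither Condition (i) nor Condition (ii) of Claim \ref{k=2-and-|x|neq|w|} applies; accordingly, this argument produces only $\Delta_0$, in agreement with the weaker guarantee of Proposition \ref{Delta-removal-arity-all}(2).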
\subsection{Induction Case: {\em k} $\geq$ 3}\label{sec:general-case-proof}
As in the previous subsections, let $f=(z_1,z_2,\ldots,z_{2^k})$. We will deal with the remaining case of $k\geq3$. In the next lemma, from a given complement unstable constraint $f$ of arity $k$, we can effectively T-construct another arity-$(k-1)$ complement unstable constraint $g$ of a special form that helps us apply an induction hypothesis.
\begin{lemma}\label{induction-step} Let $k\geq3$ and let $f$ be any $k$-ary constraint. If $f$ is complement unstable, then there exists another constraint $g$ of arity $k-1$ for which (i) $g$ is complement unstable, (ii) $g\leq_{e\mbox{-}con} f$, and (iii) if $f$ satisfies one of Conditions ($a$)--($b$), then so does $g$. \end{lemma}
We will briefly show that the induction case holds for Proposition \ref{Delta-removal-arity-all}(2), assuming that Lemma \ref{induction-step} is true. By Lemma \ref{induction-step}, we take another arity-$(k-1)$ constraint $g$ that satisfies Conditions (i)--(ii) of the lemma. We then apply the induction hypothesis for Proposition \ref{Delta-removal-arity-all}(2) to $g$ and conclude that either $\Delta_0$ or $\Delta_1$ is effectively T-constructible from $g$, and hence from $f$ by Lemma \ref{constructibility}(2). Next, we assume that $f$ satisfies one of Conditions ($a$)--($b$) given in Proposition \ref{Delta-removal-arity-all}(2). By Condition (iii) of Lemma \ref{induction-step}, the obtained constraint $g$ also satisfies one of those conditions. Hence, the induction hypothesis guarantees that both $\Delta_0$ and $\Delta_1$ are effectively T-constructible from $g$, and therefore from $f$.
The above argument completes the induction case for Proposition \ref{Delta-removal-arity-all}(2). Therefore, the remaining task of ours is to give the proof of Lemma \ref{induction-step}.
\begin{proofof}{Lemma \ref{induction-step}} Let $f= (z_1,z_2,\ldots,z_{2^k})$. Since $f$ is complement unstable, there exists an appropriate index $\ell\in[2^k]$ satisfying $z_{\ell}\neq 0$. For each pair of indices $i,j\in[k]$ with $i<j$, we write $g^{(i,j)}$ to denote $f^{x_i=x_j}$ and then define ${\cal G}=\{g^{(i,j)}\mid 1\leq i<j \leq k\}$. Note that each constraint $g^{(i,j)}$ is effectively T-constructible from $f$.
Let us begin with a simple observation.
\begin{claim}\label{fix-index} Let $k\geq3$. For any index $j\in[2^k]$, there exist a constraint $g$ in ${\cal G}$ and $k-1$ bits $a_1,a_2,\ldots,a_{k-1}$ satisfying $z_j = g(a_1,a_2,\ldots,a_{k-1})$. \end{claim}
\begin{proof} Since $z_j$ is an output value of $f$, there exists an input tuple $(a'_1,a'_2,\ldots,a'_k)\in\{0,1\}^k$ for which $z_j = f(a'_1,a'_2,\ldots,a'_k)$. Since $k\geq3$, there are two indices $s,t\in[k]$ with $s<t$ satisfying $a'_s =a'_t$. For this special pair, it follows that $g^{(s,t)}(a'_1,\ldots,a'_{s-1},a'_{s+1},\ldots,a'_k) = f(a'_1,\ldots,a'_{s-1},a'_t,a'_{s+1},\ldots,a'_k) = z_j$. It therefore suffices to set the desired constraint $g$ to be $g^{(s,t)}$ and set $(a_1,a_2,\ldots,a_{k-1})$ to be $(a'_1,\ldots,a'_{s-1},a'_{s+1},\ldots,a'_k)$. \end{proof}
Let us return to the proof of Lemma \ref{induction-step}. First, we assume that $f$ satisfies Condition ($b$). Take an index $j\in[2^{k-1}]-\{1\}$ for which either $(|z_1|-|z_{2^k}|)(|z_1+z_j| - |z_{2^k-j+1}+z_{2^k}|)<0$ or $(|z_1|-|z_{2^k}|)(|z_1+z_{2^k-j+1}| - |z_{j}+z_{2^k}|)<0$. By Claim \ref{fix-index}, we can choose a constraint $g\in{\cal G}$ satisfying that $z_j=g(a_1,a_2,\ldots,a_{k-1})$ for a certain bit series $a_1,a_2,\ldots,a_{k-1}$. Note that this constraint $g$ also satisfies Condition ($b$) and is complement unstable. The lemma thus follows instantly.
Hereafter, we assume that $f$ does not satisfy Condition ($b$). When ${\cal G}$ contains a complement unstable constraint, say, $g$, it has arity $k-1$ and $g\leq_{e\mbox{-}con} f$ holds. Moreover, if $f$ further satisfies Condition ($a$), then $g$ also satisfies the condition because $g\in{\cal G}$. We then obtain the lemma. It therefore suffices to assume that ${\cal G}$ is complement stable.
Since $f$ is complement unstable, either of the following two cases must occur. (1) There exists an index $i\in[2^{k-1}]$ satisfying $|z_i|\neq |z_{2^k-i+1}|$. (2) It holds that $|z_i|=|z_{2^k-i+1}|$ for every index $i\in[2^{k-1}]$, but there are two distinct indices $i_0,j_0\in[2^{k-1}]$ for which $z_{i_0} =z_{2^k-i_0+1}\neq0$ and $z_{j_0}= - z_{2^k-j_0+1}\neq0$.
(1) In the first case, let us choose an index $i\in[2^{k-1}]$ satisfying $|z_i|\neq |z_{2^k-i+1}|$. Claim \ref{fix-index} ensures the existence of a constraint $g$ in ${\cal G}$ such that $z_i = g(a_1,a_2,\ldots,a_{k-1})$ for appropriately chosen $k-1$ bits $a_1,a_2,\ldots,a_{k-1}$. This implies that $z_{2^k-i+1} = g(a_1\oplus1,a_2\oplus1,\ldots,a_{k-1}\oplus1)$. By the choice of $i$, $g$ cannot be complement stable. Obviously, this is a contradiction against our assumption that ${\cal G}$ is complement stable.
(2) In the second case, let us take any two indices $i_0,j_0\in[2^{k-1}]$ satisfying that $z_{i_0} =z_{2^k-i_{0}+1}\neq0$ and $z_{j_0}= - z_{2^k-j_{0}+1}\neq0$. We will examine two possible cases separately.
(i) Assume that a certain constraint $g\in {\cal G}$ satisfies both $z_{i_0} = g(a_1,a_2,\ldots,a_{k-1})$ and $z_{j_0} = g(b_1,b_2,\ldots,b_{k-1})$ for appropriately chosen $2(k-1)$ bits $a_1,a_2,\ldots,a_{k-1},b_1,b_2,\ldots,b_{k-1}$. {}From the properties of $z_{i_0}$ and $z_{j_0}$, it follows that $g$ is complement unstable, and this fact clearly leads to a contradiction.
(ii) Finally, assume that Case (i) does not hold. This case is much more involved than Case (i). By our assumption, $|z_{i}| = |z_{2^k-i+1}|$ holds for all indices $i\in[2^{k-1}]$. This assumption makes $f$ satisfy Condition ($a$). To make the following argument simple, we will introduce several notations. First, we denote by $H'$ the set of all index pairs $(i,j)$ in $[2^k]\times[2^k]$ such that both $z_i = g(a_1,a_2,\ldots,a_{k-1})$ and $z_j = g(b_1,b_2,\ldots,b_{k-1})$ hold for a certain constraint $g$ in ${\cal G}$ and certain $2(k-1)$ bits $a_1,a_2,\ldots,a_{k-1},b_1,b_2,\ldots,b_{k-1}$. Notice that $(i,i)\in H'$ holds for every index $i\in[2^k]$. Since Case (i) fails, we obtain $(i_0,j_0)\notin H'$. Associated with the set $H'$, we define two new sets $H = [2^k]\times[2^k] - H'$ and
$\hat{H} = \{i\in[2^k]\mid \exists j[(i,j)\in H]\}$. Since $|z_i|=|z_{2^k-i+1}|$ for all $i\in[2^{k-1}]$, $\hat{H}$ can be expressed as the disjoint union $\hat{H}_0\cup\hat{H}_{+}\cup\hat{H}_{-}$, where $\hat{H}_0 = \{i\in\hat{H}\mid z_i=z_{2^k-i+1}=0\}$, $\hat{H}_{+} = \{i\in \hat{H}\mid z_i = z_{2^k-i+1}\neq0\}$, and $\hat{H}_{-} = \{i\in\hat{H}\mid z_i = -z_{2^k-i+1}\neq0\}$. Concerning $\hat{H}_{+}$ and $\hat{H}_{-}$, the following useful properties hold.
\begin{claim}\label{H+-and-H-} Let $i,j\in[2^k]$ be any two indices with $i\leq 2^{k-1}$ and assume that $(i,j)\in H'$. \begin{enumerate}\vs{-1} \item If $j\in \hat{H}_{+}$ then $z_i = z_{2^k-i+1}$ holds. \vs{-2} \item If $j\in \hat{H}_{-}$ then $z_i = -z_{2^k-i+1}$ holds. \end{enumerate} \end{claim}
\begin{proof}
Assume that $z_i=0$. From $z_i =0$ follows $z_{2^k-i+1}=0$, because $|z_i|=|z_{2^k-i+1}|$. We then obtain $z_i=\pm z_{2^k-i+1}$; thus, the claim is trivially true. Henceforth, we consider the case where $z_i\neq0$. Since $(i,j)\in H'$, there exist a constraint $g\in {\cal G}$ and bits $a_1,a_2,\ldots,a_{k-1},b_1,b_2,\ldots,b_{k-1}$ for which $z_i =g(a_1,a_2,\ldots,a_{k-1})$ and $z_j = g(b_1,b_2,\ldots,b_{k-1})$. Recall that $g$ is complement stable by our assumption. In the case where $j\in \hat{H}_{+}$, since $z_j = z_{2^k-j+1}\neq0$ holds, $g$ must be complement invariant. Thus, for the index $i$, we obtain $z_i = z_{2^k-i+1}$. By a similar argument, when $j\in \hat{H}_{-}$, $g$ must be complement anti-invariant and thus $z_i = -z_{2^k-i+1}$ holds. \end{proof}
Here, we claim that $\hat{H}_{+}$ and $\hat{H}_{-}$ are both nonempty. To see this claim, recall that the indices $i_0$ and $j_0$ satisfy $(i_0,j_0)\notin H'$, and thus they belong to $\hat{H}$; more specifically, it holds that $i_0\in\hat{H}_{+}$ and $j_0\in\hat{H}_{-}$ since $z_{i_0} = z_{2^k-i_0+1}$ and $z_{j_0} = -z_{2^k-j_0+1}$. By symmetry, we also conclude that $2^{k}-i_0+1\in\hat{H}_{+}$ and $2^{k}-j_0+1\in \hat{H}_{-}$. In Claim \ref{not-in-H-z_i}, we present another useful property of $\hat{H}$.
\begin{claim}\label{not-in-H-z_i} For any index $i\in[2^k]$, if $i\notin \hat{H}$, then $z_i=0$ holds. \end{claim}
\begin{proof} Let $i$ be any index in $[2^k]-\hat{H}$. Without loss of generality, we assume that $i\leq 2^{k-1}$. Toward a contradiction, we assume that $z_i\neq 0$. As noted earlier, $\hat{H}_{+}$ and $\hat{H}_{-}$ are nonempty. Now, let us take two indices $j_1\in\hat{H}_{+}$ and $j_2\in\hat{H}_{-}$ and consider two pairs $(i,j_1)$ and $(i,j_2)$. For the first pair $(i,j_1)$, if $(i,j_1)\notin H'$ holds, then $(i,j_1)$ must be in $H$, and thus $i$ belongs to $\hat{H}$. Since this is clearly a contradiction, $(i,j_1)\in H'$ follows. Similarly, we can obtain $(i,j_2)\in H'$ for the second pair $(i,j_2)$. Claim \ref{H+-and-H-} then implies $z_i = z_{2^k-i+1}$ as well as $z_i = - z_{2^k-i+1}$. {}From these equations, $z_i=0$ follows. This is also a contradiction. Therefore, the claim is true. \end{proof}
The rest of the proof proceeds by examining three cases, depending on the value of $k\geq3$.
(a) Let us consider the base case of $k=3$ with $f=(z_1,z_2,\ldots,z_8)$. By a straightforward calculation, $H'$ is comprised of pairs $(2,3)$, $(2,4)$, $(2,5)$, $(3,4)$, $(3,5)$, $(3,7)$, $(4,6)$, $(4,7)$, $(5,6)$, $(5,7)$, $(6,7)$ and, moreover, all pairs obtained from those listed pairs, say, $(i,j)$ by exchanging two entries $i$ and $j$. Thus, $\hat{H}$ equals $\{2,3,4,5,6,7\}$. Claim \ref{not-in-H-z_i} yields the equality $z_1=z_8=0$. Now, we assume that $\hat{H}_0\neq\mathrm{\O}$. In the case where $\hat{H}_0=\{4,5\}$, $\hat{H}_{+}$ is either $\{2,7\}$ or $\{3,6\}$ because $\hat{H}_{-}$ is nonempty. Now, we define $h_2=f^{x_2=*}$, which equals $(z_3,z_2,z_7,z_6)$. Note that $h_2$ satisfies Condition ($a$). If $h_2$ is complement stable, then it must hold that either $z_i=z_{9-i}$ for all $i\in[4]$ or $z_i= -z_{9-i}$ for all $i\in[4]$. This implies that $f$ is complement stable, a contradiction. Therefore, $h_2$ is complement unstable. The other cases ($\hat{H}_0=\{2,7\}$ and $\hat{H}_0=\{3,6\}$) are similar.
Next, assume that $\hat{H}_0=\mathrm{\O}$. Recall that $|\hat{H}_{+}|>0$ and $|\hat{H}_{-}|>0$ and note that $|\hat{H}_{+}|\neq |\hat{H}_{-}|$. Let us consider the case where $|\hat{H}_{+}|>|\hat{H}_{-}|$. If $\hat{H}_{+}=\{2,4,5,7\}$ and $\hat{H}_{-}=\{3,6\}$, then we define $h_2=f^{x_2=*}$, which is $(z_3,z_2+z_4,z_5+z_7,z_6)$. Obviously, Condition ($a$) holds for $h_2$. Since $3\in\hat{H}_{-}$, $z_3=-z_6$ holds; moreover, since $2,4\in\hat{H}_{+}$, it follows that $z_2+z_4=z_5+z_7$. We then conclude that $h_2$ is not complement stable. Similarly, if $\hat{H}_{+}=\{2,3,6,7\}$ and $\hat{H}_{-}=\{4,5\}$ (resp., $\hat{H}_{+}=\{3,4,5,6\}$ and $\hat{H}_{-}=\{2,7\}$), then consider $h_1=f^{x_1=*} = (z_5,z_2+z_6,z_3+z_7,z_4)$ (resp., $h_3=f^{x_3=*}=(z_2,z_3+z_4,z_5+z_6,z_7)$). This constraint
$h_1$ is also complement unstable and satisfies Condition ($a$), as requested. The other case where $|\hat{H}_{-}|>|\hat{H}_{+}|$ is similarly treated.
(b) Consider the case where $k=4$. It is not difficult to show that $\hat{H} = \{4,6,7,10,11,13\}$. Let us define $h = f^{x_1=*}$. This constraint $h=(w_1,w_2,\ldots,w_8)$ contains eight entries $w_i=z_{i}+z_{8+i}$ for all $i\in[8]$. In particular, $w_1=z_1+z_9$, $w_2=z_2+z_{10}$, $w_3=z_3+z_{11}$, $w_4=z_4+z_{12}$, $w_5=z_5+z_{13}$, $w_6=z_6+z_{14}$, $w_7=z_7+z_{15}$, and $w_8=z_8+z_{16}$. By Claim \ref{not-in-H-z_i}, it follows that $h=(0,z_{10},z_{11},z_{4},z_{13},z_{6},z_{7},0)$. Condition ($a$) is clearly met for this constraint $h$. If $h$ is complement stable, either $z_{i}=z_{16-i+1}$ for all $i\in\{4,6,7\}$ or $z_{i}=-z_{16-i+1}$ for all $i\in\{4,6,7\}$. This is impossible because $z_{i_0}=z_{16-i_0+1}\neq0$ and $z_{j_0}= -z_{16-j_0+1}\neq0$. Therefore, $h$ is complement unstable.
(c) Assume that $k\geq5$. Let us claim that $H=\mathrm{\O}$. Assume otherwise. Let $(i,j)\in H$ and consider two $k$-bit series $a=a_1a_2\cdots a_k$ and $b=b_1b_2\cdots b_k$ satisfying that $z_i=f(a)$ and $z_j=f(b)$. Note that, for every distinct pair $s,t\in[k]$, $a_s=a_t$ implies $b_s\neq b_t$. For convenience, let $P_r=\{s\in[k]\mid a_s=r\}$ for each bit $r\in\{0,1\}$. Here, we examine only the case where $|P_0|\geq |P_1|$ since the other case is similar. Since $k\geq5$, it follows that $|P_0|\geq \lceil k/2\rceil\geq 3$; namely, there are at least three elements in $P_0$. For simplicity, let $1,2,3\in P_0$. Since $a_1=a_2=a_3=0$, there must be two distinct indices $i_1,i_2\in\{1,2,3\}$ for which $b_{i_1} = b_{i_2}$. Write $b^{(i_2)}$ for the $(k-1)$-bit series $b_1b_2\cdots b_{i_2-1}b_{i_2+1}\cdots b_k$. Similarly, we define $a^{(i_2)}$. By the choice of $(i_1,i_2)$, it holds that $z_i=f^{x_{i_1}=x_{i_2}}(a^{(i_2)})$ and $z_j=f^{x_{i_1}=x_{i_2}}(b^{(i_2)})$. This fact implies that $(i,j)\in H'$, a contradiction. Therefore, we conclude that $H=\mathrm{\O}$; that is, $H'=[2^k]\times[2^k]$. However, this contradicts our assumption that $(i_0,j_0)\notin H'$.
This completes the proof of Lemma \ref{induction-step} and thus finishes the proof of Proposition \ref{Delta-removal-arity-all}(2). \end{proofof}
Throughout the proof of Theorem \ref{key-Delta-elimination}, we have required the use of algebraic real numbers only in the proofs of Claims \ref{algebraic-simul} and \ref{XOR-T-const}. It is not yet known whether the theorem remains true for arbitrary real numbers.
\section{AP-Reductions without Auxiliary Unary Constraints}\label{sec:computability}
As a direct application of Theorem \ref{key-Delta-elimination}, we wish to prove our second theorem---Theorem \ref{main-theorem}---presented in Section \ref{sec:introduction}. To clarify the meaning of this theorem, we need to formalize the special constraints described in Section \ref{sec:introduction}. Let us introduce the following sets of constraints. Recall that all constraints dealt with in this paper are assumed to output only {\em algebraic real values}. \begin{enumerate} \item Let ${\cal DG}$ denote the set of all constraints $f$ that are expressed by products of unary functions. A constraint in ${\cal DG}$ is called {\em degenerate}. When $f$ is symmetric, $f$ must have one of the following three forms: $[x,0,\ldots,0]$, $[0,\ldots,0,x]$, and $y\cdot [1,z,z^2,\ldots,z^k]$ with $yz\neq 0$. By restricting ${\cal DG}$, we define ${\cal DG}^{(-)}$ as the set of constraints of the forms $[x,0,\ldots,0]$, $[0,\ldots,0,x]$, $y\cdot[1,1,\ldots,1]$, and $y\cdot [1,-1,1,\ldots, -1\,\text{or}\,1]$, where $y\neq0$. Naturally, both $\Delta_0$ and $\Delta_1$ belong to ${\cal DG}^{(-)}$. \vs{-2} \item The notation ${\cal ED}_{1}$ indicates the set of the following constraints: $[x,\pm x]$, $[x,0,\ldots,0,\pm x]$ of arity $\geq2$, and $[0,x,0]$ with $x\neq0$. As a natural extension of ${\cal ED}_{1}$, let ${\cal ED}_{1}^{(+)}$ be composed of constraints $[x,y]$, $[x,0,\ldots,0,y]$ of arity $\geq2$, and $[0,x,0]$ with $x,y\neq0$. Notice that $[x,y]$ also belongs to ${\cal DG}$. \vs{-2} \item Let ${\cal AZ}$ be made up of all constraints of arity at least $3$ having the forms $[0,x,0,x,\ldots,0\,\text{or}\,x]$ and $[x,0,x,0,\ldots,x\,\text{or}\,0]$ with $x\neq0$. The term ``${\cal AZ}$'' indicates ``alternating zeros.'' Similarly, ${\cal AZ}_1$ denotes the set of all constraints of arity at least $3$ of the forms $[0,x,0,-x,0,x,0,\ldots,0\,\text{or}\, x\,\text{or}\,-x]$ and $[x,0,-x,0,x,0\ldots,-x\,\text{or}\, x\,\text{or}\,0]$ with alternating $0$, $x$, and $-x$, where $x\neq0$. \vs{-2} \item The set ${\cal B}_0$ consists of all constraints $[z_0,z_1,\ldots,z_{k}]$ with $k\geq2$ and $z_0\neq0$ that satisfy either (i) $z_{2i+1}=z_{2i+2}=(-1)^{i+1}z_0$ for all $i\in \mathbb{N}$ satisfying $2i+1\in[k]$ or $2i+2\in[k]$, or
(ii) $z_{2i}=z_{2i+1}=(-1)^{i}z_0$ for all $i\in \mathbb{N}$ with $2i\in[k]$ or $2i+1\in[k]$.
As simple examples, $[1,1,-1]$ and $[1,-1,-1]$ belong to ${\cal B}_0$. \vs{-2} \item Let ${\cal OR}$ denote the set of all constraints of the form $[0,x,y]$ with $x,y>0$. For instance, a special constraint $OR=[0,1,1]$ belongs to ${\cal OR}$. \vs{-2} \item Let ${\cal NAND}$ consist of all constraints of the form $[x,y,0]$ with $x,y>0$. A Boolean constraint $NAND=[1,1,0]$ is in ${\cal NAND}$. \vs{-2} \item Let ${\cal B}$ be comprised of all constraints of the form $[x,y,z]$ with $x,y,z>0$ and $xz\neq y^2$. \end{enumerate}
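To illustrate the remaining families, $[1,2,4]=1\cdot[1,2,2^2]$ belongs to ${\cal DG}$, $[3,0,0,3]$ belongs to ${\cal ED}_{1}$ (and hence to ${\cal ED}_{1}^{(+)}$), $[0,1,0,1]$ belongs to ${\cal AZ}$, $[0,1,0,-1]$ belongs to ${\cal AZ}_1$, and $[1,1,2]$ belongs to ${\cal B}$ since $1\cdot 2\neq 1^2$.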
In the presence of ${\cal U}$, we obtain (i) $\#\mathrm{SAT}\leq_{\mathrm{AP}} \#\mathrm{CSP}(OR,{\cal U})$ \cite[Lemma 4.1]{Yam10a}, (ii) $\#\mathrm{CSP}(OR,{\cal U})\leq_{\mathrm{AP}} \#\mathrm{CSP}(g,{\cal U})$ for every constraint $g$ in ${\cal OR}\cup{\cal NAND}$ \cite[Lemma 6.3]{Yam10a}, and (iii) $\#\mathrm{CSP}(OR,{\cal U})\leq_{\mathrm{AP}} \#\mathrm{CSP}(g,{\cal U})$ for every constraint $g\in{\cal B}$ \cite[Proposition 6.8]{Yam10a}. In conclusion, we can derive the following lemma.
\begin{lemma}\label{OR-and-B-Yam10a} For every constraint $g$ in ${\cal OR}\cup{\cal NAND}\cup{\cal B}$, it holds that $\#\mathrm{SAT}\leq_{\mathrm{AP}} \#\mathrm{CSP}(g,{\cal U})$. \end{lemma}
The first part of Theorem \ref{main-theorem} concerns the tractability of $\#\mathrm{CSP}({\cal F})$ when one of the two containments ${\cal F}\subseteq{\cal DG}\cup{\cal ED}_{1}^{(+)}$ and ${\cal F}\subseteq{\cal DG}^{(-)}\cup {\cal ED}_{1}\cup{\cal AZ} \cup{\cal AZ}_1\cup{\cal B}_0$ holds. For such a counting problem $\#\mathrm{CSP}({\cal F})$, it is already known in \cite[Theorem 5.2]{CLX09x} and \cite[Theorem 1.2]{GGJ+10} that $\#\mathrm{CSP}({\cal F})$ is solvable in polynomial time.
\begin{proposition}\label{compute-sig-set}{\rm \cite{CLX09x,GGJ+10}}\hs{1} Let ${\cal F}$ be any set of symmetric real-valued constraints. If either ${\cal F}\subseteq {\cal DG}\cup{\cal ED}_{1}^{(+)}$ or ${\cal F}\subseteq{\cal DG}^{(-)}\cup{\cal ED}_{1}\cup{\cal AZ} \cup{\cal AZ}_1\cup{\cal B}_0$ holds, then $\#\mathrm{CSP}({\cal F})$ belongs to $\mathrm{FP}_{\mathbb{A}}$. \end{proposition}
Now, we come to the point of proving the second part of Theorem \ref{main-theorem}. Let us first analyze the approximation complexity of $\#\mathrm{CSP}(f)$ for an arbitrary symmetric constraint $f$ that is not included in ${\cal DG}\cup{\cal ED}_{1}^{(+)}\cup{\cal AZ}\cup{\cal AZ}_1\cup{\cal B}_0$. Note that, when $f$ is in ${\cal DG}\cup{\cal ED}_{1}^{(+)}\cup{\cal AZ} \cup{\cal AZ}_1\cup{\cal B}_0$, $\#\mathrm{CSP}(f)$ belongs to $\mathrm{FP}_{\mathbb{A}}$ by Proposition \ref{compute-sig-set}. The following is a key claim required for the proof of Theorem \ref{main-theorem}.
\begin{lemma}\label{higher-case} Let $f$ be any symmetric real-valued constraint of arity at least $2$. If $f\notin{\cal DG}\cup{\cal ED}_{1}^{(+)}\cup {\cal AZ}\cup {\cal AZ}_1\cup{\cal B}_0 $, then, for any index $i\in\{0,1\}$, there exists a constraint $g$ in ${\cal OR}\cup{\cal NAND}\cup{\cal B}$ such that $g$ is effectively T-constructible from $\{f,\Delta_i\}$. \end{lemma}
Theorem \ref{main-theorem} then follows, as shown below, by combining Theorem \ref{key-Delta-elimination} and Proposition \ref{compute-sig-set} with an application of Lemma \ref{higher-case}.
\begin{proofof}{Theorem \ref{main-theorem}} Since Proposition \ref{compute-sig-set} has already shown the first part of Theorem \ref{main-theorem}, we focus on the last part of the theorem. To prove this part, we assume that ${\cal F}\not\subseteq{\cal DG}\cup{\cal ED}_{1}^{(+)}$ and ${\cal F}\not\subseteq{\cal DG}^{(-)}\cup {\cal ED}_{1}\cup{\cal AZ} \cup{\cal AZ}_1\cup{\cal B}_0$. Notice that, by our assumption, ${\cal F}$ must contain a certain constraint whose entries are not all zero. Given a constraint set ${\cal G}$, by applying Theorem \ref{key-Delta-elimination} to ${\cal F}\cup{\cal G}$, we obtain an index $i_0\in\{0,1\}$ for which $\#\mathrm{CSP}(\Delta_{i_0},{\cal F},{\cal G})\equiv_{\mathrm{AP}} \#\mathrm{CSP}({\cal F},{\cal G})$.
If there exists a constraint $f$ in ${\cal F}$ not in ${\cal DG}\cup{\cal ED}_{1}^{(+)}\cup {\cal AZ} \cup{\cal AZ}_1\cup{\cal B}_0$, then we apply Lemma \ref{higher-case} to obtain an appropriate constraint $g\in {\cal OR}\cup{\cal NAND}\cup{\cal B}$ for which $g$ is effectively T-constructible from $\{f,\Delta_{i_0}\}$. By Lemma \ref{constructibility}(3), the theorem immediately follows. Therefore, it is sufficient to consider the case where ${\cal F}\subseteq {\cal DG}\cup{\cal ED}_{1}^{(+)}\cup {\cal AZ} \cup{\cal AZ}_1\cup {\cal B}_0$. Now, let us choose two constraints $f_1,f_2\in{\cal DG}\cup{\cal ED}_{1}^{(+)}\cup{\cal AZ} \cup{\cal AZ}_1\cup{\cal B}_0$ in ${\cal F}$ for which $f_1\notin{\cal DG}^{(-)}\cup {\cal ED}_{1}\cup{\cal AZ} \cup{\cal AZ}_1\cup{\cal B}_0$ and $f_2\notin{\cal DG}\cup{\cal ED}_{1}^{(+)}$. Note that $f_1\in {\cal DG}\cup{\cal ED}_1^{(+)}$ and $f_2\in {\cal AZ}\cup{\cal AZ}_1\cup{\cal B}_0$. Hereafter, we will prove the following claim.
\begin{claim}\label{OR-and-B} There exists a constraint $g$ in ${\cal OR}\cup{\cal NAND}\cup{\cal B}$ that is effectively T-constructible from $\{f_1,f_2,\Delta_{i_0}\}$. \end{claim}
\begin{proof}
The proof of the claim proceeds as follows. In general, $f_1$ has one of the following three forms: $[x,y]$, $[x,0,\ldots,0,y]$, and $x\cdot [1,z,z^2,\ldots,z^k]$ with $x,y\neq0$, $|x|\neq |y|$, $|z|\neq 1$, and $k\geq2$. Notice that we can effectively T-construct $[x,y]$ from $[x,0,\ldots,0,y]$. If $f_1$ is of the form $x\cdot [1,z,z^2,\ldots,z^k]$ with $x\neq0$, $|z|\neq 1$, and $k\geq2$, then no matter which of $\Delta_0$ and $\Delta_{1}$ is available, we can effectively T-construct $[1,z]$. Hence, we can assume that $f_1$ has the form $[1,z]$. In what follows, it suffices to assume that $f_1=[x,y]$ with $xy\neq0$ and $|x|\neq |y|$.
(1) When $f_2\in {\cal AZ}$, there are two possibilities: either $f_2= u\cdot [0,1,0,1,\ldots,0\,\text{or}\,1]$ or $f_2= u\cdot [1,0,1,0,\ldots,1\,\text{or}\,0]$. Since $f_2\notin {\cal DG}\cup {\cal ED}_1^{(+)}$, the arity $k$ of $f_2$ must be at least $3$. For simplicity, we set $u=1$. When $k>3$, we use the given constant unary constraint $\Delta_{i_0}$ to reduce $f_2$ to either $[1,0,1,0]$ or $[0,1,0,1]$. Here, let us consider only the case where $f_2=[1,0,1,0]$ because the other case is similarly handled.
\sloppy Here, we define two constraints $f(x_1,x_2,x_3) = f_1(x_1)f_2(x_1,x_2,x_3)$ and $h(x_1,x_2,x_3) = f(x_1,x_2,x_3)f(x_2,x_3,x_1)f(x_3,x_1,x_2)$. A simple calculation shows that $h$ equals $x\cdot [x^2,0,y^2,0]$. Since $x^2\neq y^2$, we conclude that $h\notin {\cal DG}\cup{\cal ED}_{1}^{(+)}\cup{\cal AZ} \cup {\cal AZ}_1\cup{\cal B}_0$. Now, we apply Lemma \ref{higher-case} and then obtain $g\leq_{e\mbox{-}con} \{h,\Delta_{i_0}\}$ for a certain constraint $g$ in ${\cal OR}\cup{\cal NAND}\cup {\cal B}$. Since $h\leq_{e\mbox{-}con} \{f_1,f_2\}$, Lemma \ref{constructibility}(2) implies that $g\leq_{e\mbox{-}con} \{f_1,f_2,\Delta_{i_0}\}$.
(2) If $f_2\in{\cal AZ}_1$, then we take $f_2^2$, which belongs to ${\cal AZ}$, and reduce this case to Case (1).
(3) Assume that $f_2$ is in ${\cal B}_0$. The constraint $f_2$ has either form $u\cdot [1,1,-1,\ldots,\pm1]$ or $u\cdot [1,-1,-1,\ldots,\pm1]$ with $u\neq0$. Apply $\Delta_{i_0}$ to $f_2$ appropriately and reduce $f_2$ to either $\pm u\cdot [1,1,-1]$ or $\pm u \cdot [1,-1,-1]$. Here, we assume the constants ``$\pm u$'' to be $1$ for simplicity. If $f_2=[1,1,-1]$, then we define $h(x_1,x_2) = \sum_{x_3\in\{0,1\}} f_2(x_1,x_3)f_1(x_3)f_2(x_2,x_3)$, which equals $[x+y,x-y,x+y]$. Since $y\neq0$, it follows that $(x+y)^2-(x-y)^2=4xy\neq0$. Since $|x|\neq |y|$, $h$ belongs to ${\cal B}$. If $f_2=[1,-1,-1]$, then $h$ equals $[x+y,y-x,x+y]$. A similar argument shows that $h$ also belongs to ${\cal B}$. By the definition of $h$, it holds that $h\leq_{e\mbox{-}con} \{f_1,f_2,\Delta_{i_0}\}$. \end{proof}
Let us return to the proof of the theorem. Let ${\cal G}$ be any constraint set. By Claim \ref{OR-and-B}, an appropriately chosen constraint $g$ in ${\cal OR}\cup{\cal NAND}\cup{\cal B}$ can be effectively T-constructible from $\{f_1,f_2,\Delta_{i_0}\}$. Lemma \ref{constructibility}(3) then implies that $\#\mathrm{CSP}(g,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}(f_1,f_2,\Delta_{i_0},{\cal G})$. It follows from $f_1,f_2\in{\cal F}$ that $\#\mathrm{CSP}(f_1,f_2,\Delta_{i_0},{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal F},\Delta_{i_0},{\cal G})\equiv_{\mathrm{AP}} \#\mathrm{CSP}({\cal F},{\cal G})$, where the last AP-equivalence comes from the choice of $i_0$. Similarly, $g\leq_{e\mbox{-}con} \{f_1,f_2,\Delta_{i_0}\}$ implies $\#\mathrm{CSP}(g,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}(f_1,f_2,\Delta_{i_0},{\cal G})$. Thus, combining all AP-reductions yields the desired consequence that $\#\mathrm{CSP}(g,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal F},{\cal G})$. \end{proofof}
Briefly, we will describe how to prove Theorem \ref{dichotomy-theorem} even though a sketchy proof outline has been given in Section \ref{sec:introduction}. First, we consider the case where ${\cal F}\subseteq {\cal ED}$. Since ${\cal U}\subseteq {\cal DG}$, the problem $\#\mathrm{CSP}({\cal ED},{\cal U})$ is AP-equivalent to $\#\mathrm{CSP}({\cal DG},{\cal ED}^{(+)}_1)$ and, by Proposition \ref{compute-sig-set}, it is solvable in polynomial time. On the contrary, assume that ${\cal F}\nsubseteq {\cal ED}$. Since ${\cal DG}\cup{\cal ED}_1^{(+)}\subseteq {\cal ED}$, it follows that ${\cal F}\nsubseteq {\cal DG}\cup{\cal ED}_1^{(+)}$. Consider the first case where ${\cal F}\nsubseteq {\cal DG}^{(-)} \cup{\cal ED}_1\cup {\cal AZ}\cup{\cal AZ}_1\cup{\cal B}_0$. Since ${\cal F}\nsubseteq {\cal DG}\cup{\cal ED}_1^{(+)}$, Theorem \ref{main-theorem} ensures the existence of a special constraint $g$ in ${\cal OR}\cup{\cal NAND}\cup{\cal B}$ satisfying $\#\mathrm{CSP}(g,{\cal U})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal F},{\cal U})$. Using Lemma \ref{OR-and-B-Yam10a}, we obtain $\#\mathrm{SAT} \leq_{\mathrm{AP}} \#\mathrm{CSP}(g,{\cal U})$. By the choice of $g$, we easily conclude that $\#\mathrm{SAT}$ is AP-reducible to $\#\mathrm{CSP}({\cal F},{\cal U})$. Next, we consider the second case where ${\cal F}\subseteq {\cal DG}^{(-)} \cup{\cal ED}_1\cup {\cal AZ}\cup{\cal AZ}_1\cup{\cal B}_0$. Since ${\cal F}\nsubseteq {\cal DG}\cup {\cal ED}_{1}^{(+)}$, let us take a constraint $f_2$ from ${\cal F} - ({\cal DG}\cup{\cal ED}_{1}^{(+)})$. It follows that $f_2\in {\cal AZ}\cup{\cal AZ}_1\cup{\cal B}_0$. Next, we choose another constraint $f_1$ from ${\cal U}-({\cal DG}^{(-)}\cup{\cal ED}_1)$. It is clear by its definition that $f_1\notin {\cal DG}^{(-)}\cup {\cal ED}_1\cup{\cal AZ}\cup{\cal AZ}_1\cup{\cal B}_0$. Applying Claim \ref{OR-and-B}, we obtain a constraint $g$ in ${\cal OR}\cup{\cal NAND}\cup{\cal B}$ for which $g\leq_{e\mbox{-}con} \{f_1,f_2,\Delta_0,\Delta_1\}$. Lemma \ref{constructibility}(3) then implies that $\#\mathrm{CSP}(g,{\cal U})\leq_{\mathrm{AP}} \#\mathrm{CSP}(f_1,f_2,\Delta_{0},\Delta_{1},{\cal U})$. Since $f_1,f_2\in{\cal F}$ and $\Delta_0,\Delta_1\in{\cal U}$, we conclude that $\#\mathrm{CSP}(g,{\cal U})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal F},{\cal U})$. Together with Lemma \ref{OR-and-B-Yam10a}, the desired AP-reduction $\#\mathrm{SAT} \leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal F},{\cal U})$ follows.
To finish our entire argument, we still need to prove Lemma \ref{higher-case}.
\begin{proofof}{Lemma \ref{higher-case}} Let $f$ be any symmetric real-valued constraint of arity $k\geq2$. For convenience, we write $\Gamma$ for ${\cal DG}\cup{\cal ED}_{1}^{(+)}\cup{\cal AZ}\cup {\cal AZ}_1\cup{\cal B}_0$. Throughout this proof, we assume that $f\notin\Gamma$ and that, for a fixed index $i_0\in\{0,1\}$, $\Delta_{i_0}$ is available for use. Our proof proceeds by induction on $k$.
\paragraph{Case of $k=2$.}\label{sec:binary-case}
Let $f$ be any binary constraint not in $\Gamma$. There are three major cases to consider separately, depending on the number of zeros in the output values of $f$.
(B1) Consider the case where there are two zeros in the entries of $f$. Obviously, $f$ must have one of the following three forms: $[x,0,0]$ ($\in {\cal DG}$), $[0,0,x]$ ($\in{\cal DG}$), and $[0,x,0]$ ($\in{\cal ED}_{1}$) with $x\neq0$, yielding a contradiction against the assumption on $f$.
(B2) Consider the case where there is exactly one zero in $f$. Note that $f$ must have one of the following forms: $[0,x,y]$, $[x,0,y]$, and $[x,y,0]$, where $xy\neq0$. For the first and the last forms, $f^2$ respectively belongs to ${\cal OR}$ and ${\cal NAND}$. Moreover, $[x,0,y]$ is in ${\cal ED}_{1}^{(+)}$, contradicting the assumption on $f$. The lemma thus follows.
(B3)
Finally, consider the case where there is no entry of zero in $f=[x,y,z]$. When $|xz|\neq y^2$, the constraint $f^2=[x^2,y^2,z^2]$ obviously belongs to ${\cal B}$; thus, it suffices to set $g$ in the lemma to be $f^2$. If $xz=y^2$, then $f$ has the form $[x,y,y^2/x]$ and thus it is in ${\cal DG}$. Finally, if $xz=-y^2$, then we obtain $f=[x,y,-y^2/x]$. Here, we wish to claim that $|x|\neq|y|\neq|-y^2/x|$, because this claim establishes the membership of $f$ to ${\cal B}$. If $x=y$, then we obtain $-y^2/x=-x$ and thus $f=x\cdot[1,1,-1]$, which is obviously in ${\cal B}_0$, a contradiction. When $x=-y$, we obtain $f=x\cdot [1,-1,-1]$, leading to another contradiction. Note that $y=y^2/x$ implies $x=y$, and $y=-y^2/x$ implies $y=-x$. Therefore, our claim is true.
\paragraph{Case of $k=3$.}\label{sec:single-sig-3}
We assume that $f$ has arity $3$. For convenience, the real values denoted below by $x,y,z,w$ are assumed to be non-zero.
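Before going through the case analysis, we record an elementary observation that is used repeatedly below; it is only a reading aid and follows at once from the definitions. Writing a symmetric constraint of arity $k$ as $f=[f_0,f_1,\ldots,f_k]$, summing out one variable adds adjacent entries, namely
\[
f^{x_1=*} \;=\; \bigl[\,f_0+f_1,\; f_1+f_2,\;\ldots,\; f_{k-1}+f_k\,\bigr],
\]
since $f^{x_1=*}(x_2,\ldots,x_k)=\sum_{x_1\in\{0,1\}}f(x_1,x_2,\ldots,x_k)$.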
(T1) Consider the case where $f$ has exactly three zeros; that is, $f$ is one of the following four forms: $[x,0,0,0]$, $[0,x,0,0]$, $[0,0,x,0]$, and $[0,0,0,x]$ with $x\neq0$. If $f\in\{[x,0,0,0],[0,0,0,x]\}$, then $f$ falls into ${\cal DG}$, a contradiction. Now, assume that $f=[0,x,0,0]$. We then define the desired constraint $g$ as $f^{x_1=*}$, which equals $x\cdot[1,1,0] = x\cdot NAND$, and thus it belongs to ${\cal NAND}$. Clearly, $g\leq_{e\mbox{-}con} f$ holds without use of $\Delta_{i_0}$. The case of $f=[0,0,x,0]$ is treated similarly using ${\cal OR}$.
\sloppy (T2) Let us consider the case where $f$ has exactly two zeros; namely, $f$ has one of the forms: $[x,0,0,y]$, $[x,y,0,0]$, $[0,0,x,y]$, $[x,0,y,0]$, $[0,x,0,y]$, and $[0,x,y,0]$ with $xy\neq0$. We exclude the case of $f=[x,0,0,y]$ because it belongs to ${\cal ED}_{1}^{(+)}$. Remember that $\Delta_{i_0}$ is available to use.
(a) If $f=[x,y,0,0]$, then we define $g=f^{x_1=x_2=x_3}$, which yields $x\cdot \Delta_0$. Since $x\neq0$, we can freely use $\Delta_0$. From the set $\{f,\Delta_0\}$, we can effectively T-construct a new constraint $h=[x,y,0]$. Thus, the constraint $h^2=[x^2,y^2,0]$, which is also effectively T-constructible, belongs to ${\cal NAND}$. The case $f=[0,0,x,y]$ is handled similarly using ${\cal OR}$ instead of ${\cal NAND}$.
(b) Assume that $f=[0,x,y,0]$. From $\{\Delta_{i_0},f\}$, we can effectively T-construct either $[0,x,y]$ or $[x,y,0]$, which is then reduced to Case (B2).
(c)
Finally, let $f=[x,0,y,0]$. In the case where $x=y$, we obtain $f= x\cdot [1,0,1,0]$, which belongs to ${\cal AZ}$. Moreover, if $x=-y$, then $f=x\cdot [1,0,-1,0]$ is in ${\cal AZ}_1$. In those cases, we clearly obtain a contradiction. Hence, $|x|\neq |y|$ must hold. Let us consider another constraint $g=f^{x_1=*}$, which equals $[x,y,y]$. Since $|xy|\neq y^2$, the constraint $g^2=[x^2,y^2,y^2]$ belongs to ${\cal B}$. Since $g^2\leq_{e\mbox{-}con} f$, the lemma instantly follows. The other case $f=[0,x,0,y]$ is similarly treated.
(T3) Consider the case where $f$ has exactly one zero; that is, $f=[x,y,z,0],[0,x,y,z],[x,y,0,z],[x,0,y,z]$ with $xyz\neq0$.
(a) If $f$ is of the form $[x,y,z,0]$, then we define $g=f^{x_1=x_2}$, which equals $(x,y,z,0)$. We then define $h(x_1,x_2) = g(x_1,x_2)^2g(x_2,x_1)^2$, implying $h=[x^4,y^2z^2,0]$. This constraint $h$ is in ${\cal NAND}$. By duality, we effectively T-construct $h'=[0,(xy)^2,z^4]$ from $[0,x,y,z]$. Clearly, $h'$ is a member of ${\cal OR}$.
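For concreteness, the entries of $h$ in Case (a) can be read off directly from its definition $h(x_1,x_2)=g(x_1,x_2)^2g(x_2,x_1)^2$ with $g=(x,y,z,0)$:
\[
h(0,0)=x^4,\qquad h(0,1)=h(1,0)=y^2z^2,\qquad h(1,1)=0,
\]
so that $h=[x^4,y^2z^2,0]$ is indeed a symmetric constraint, as claimed.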
(b) If $f=[x,0,y,z]$, then define $g(x_1,x_2) = \sum_{x_3,x_4\in\{0,1\}} f(x_1,x_3,x_4)f(x_2,x_3,x_4)$. This new constraint $g$ has the form $[A,B,C]$ with $A=x^2+y^2$, $B=yz$, and $C=2y^2+z^2$. Note that $A,C>0$ and $B\neq0$ because of $xyz\neq0$. We want to claim that $AC\neq B^2$. To show this inequality, assume that $AC=B^2$, that is, $(x^2+y^2)(2y^2+z^2)=y^2z^2$, or equivalently $2y^4+2x^2y^2+x^2z^2=0$. Since $z^2=-\frac{2y^2(x^2+y^2)}{x^2}$, it follows that $z^2<0$, a contradiction; hence, we obtain $AC\neq B^2$. Now, consider $g^2=[A^2,B^2,C^2]$. This constraint $g^2$ is clearly in ${\cal B}$. By duality, we can handle the case of $f=[x,y,0,z]$.
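For the reader's convenience, the entries of $g$ in Case (b) follow from a direct expansion of its definition with $f=[x,0,y,z]$:
\[
g(0,0)=x^2+2\cdot0^2+y^2,\qquad g(0,1)=x\cdot0+2\,(0\cdot y)+y\,z,\qquad g(1,1)=0^2+2y^2+z^2,
\]
which yields exactly $A=x^2+y^2$, $B=yz$, and $C=2y^2+z^2$ as stated above.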
(T4) Let us consider the case where $f$ has no zero; namely, $f$ is of the form $[x,y,z,w]$ with $xyzw\neq0$. For the subsequent argument, we let $h_1(x_1,x_2) = \sum_{x_3,x_4\in\{0,1\}}f(x_1,x_3,x_4)f(x_2,x_3,x_4)$, $h_2(x_1,x_2) = \sum_{x_3\in\{0,1\}} f(x_1,x_3,x_3)f(x_2,x_3,x_3)$, and $h_3(x_1,x_3) = \sum_{x_2\in\{0,1\}}f(x_1,x_1,x_2) f(x_2,x_3,x_3)$. Note that $h_1$, $h_2$, and $h_3$ are effectively T-constructible from $f$ alone.
(a)
If $|xz|\neq y^2$ and $|yw|\neq z^2$, then define $h=f^{x_1=i_0}$ (i.e., $\sum_{x_1\in\{0,1\}} f(x_1,x_2,x_3)\Delta_{i_0}(x_1)$) and then obtain either $[x,y,z]$ or $[y,z,w]$. By our assumption, $h^2$ belongs to ${\cal B}$.
(b) Now, assume that $|xz|\neq y^2$ and $|yw|=z^2$.
(i) We consider the first case where $yw=z^2$. Without loss of generality, we can assume that $x=1$. Since $w=z^2/y$, $f$ equals $[1,y,z,z^2/y]$. Note that $h_3$ is of the form $[A,B,C]$ with $A=1+y^2$, $B=z(z+1)$, and $C=z^2(1+z^2/y^2)$. Clearly, $A,C>0$ holds. Now, let us study the case where $B\neq0$. Since $y^2[AC-B^2]$ equals $z^2(y^2-z)^2$, it holds that $AC=B^2$ iff $z=y^2$. Since $|z|\neq y^2$ by our assumption, we conclude that $AC\neq B^2$. The constraint $h_3^2 =[A^2,B^2,C^2]$ therefore belongs to ${\cal B}$. Next, we consider the other case of $B=0$. Since $B=z(z+1)$, we immediately obtain $z=-1$; thus, $f$ must be of the form $[1,y,-1,1/y]$. Note that $h_1=[A',B',C']$, where $A'=2(1+y^2)$, $B'=-(y+1/y)$, and $C'=2+y^2+1/y^2$. Obviously, $A',C'>0$ and $B'\neq0$ because $y\neq -1/y$. The term $y^2[A'C'-(B')^2]$ equals $(y^2+1)^2(2y^2+1)$, which is obviously non-zero. We thus conclude that $A'C'\neq (B')^2$. Therefore, the constraint $h_1^2=[(A')^2,(B')^2,(C')^2]$ is in ${\cal B}$. From $h_1^2\leq_{e\mbox{-}con} f$ and $h_3\leq_{e\mbox{-}con} f$, the lemma instantly follows.
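For completeness, the two identities invoked above can be verified by short computations: for $h_3$,
\[
y^2[AC-B^2] \;=\; z^2\bigl[(1+y^2)(y^2+z^2)-y^2(z+1)^2\bigr] \;=\; z^2\bigl[z^2-2y^2z+y^4\bigr] \;=\; z^2(y^2-z)^2,
\]
and, for $h_1$ in the sub-case $z=-1$,
\[
y^2[A'C'-(B')^2] \;=\; 2(y^2+1)^3-(y^2+1)^2 \;=\; (y^2+1)^2(2y^2+1).
\]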
(ii)
The second case is that $yw=-z^2$. As before, we assume that $x=1$. Since $w=-z^2/y$, $f$ equals $[1,y,z,-z^2/y]$. We then focus on the constraint $h_3=[A,B,C]$, where $A=1+y^2$, $B=z(1-z)$, and $C=z^2(1+z^2/y^2)$. Firstly, we examine the case of $B\neq 0$. Note that the value $y^2[AC-B^2]$ equals $z^2(y^2+z)^2$; thus, it holds that $AC=B^2$ iff $z=-y^2$. Since $|xz|\neq y^2$ and $A,C>0$, we conclude that $AC\neq B^2$. This places the constraint $h_3^2=[A^2,B^2,C^2]$ into ${\cal B}$. Secondly, we study the other case where $B=0$, or equivalently, $z=1$
because $B=z(1-z)$. In this case, we use another constraint $h_2=[A',B',C']$, which is actually of the form $[2,y-1/y,y^2+1/y^2]$. If $|y|\neq 1$, then $B'\neq0$ holds. Since $y^2[A'C'-(B')^2]$ has a non-zero value $(y^2+1)^2$, the constraint $h_2^2$ is a member of ${\cal B}$. On the contrary, assume that $|y|=1$. When $y=1$, $f$ equals $[1,1,1,-1]$, from which we obtain $h_1=[4,2,4]$. Obviously, $h_1$ belongs to ${\cal B}$. If $y=-1$, then we obtain $f=[1,-1,1,1]$, and thus $h_1=[4,-2,4]$ is also in ${\cal B}$.
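Again for completeness: in the sub-case $z=1$ (so that $f=[1,y,1,-1/y]$), expanding the definition of $h_2$ gives
\[
h_2(0,0)=1^2+1^2=2,\qquad h_2(0,1)=1\cdot y+1\cdot\Bigl(-\tfrac{1}{y}\Bigr)=y-\tfrac{1}{y},\qquad h_2(1,1)=y^2+\tfrac{1}{y^2},
\]
and a direct calculation confirms $y^2[A'C'-(B')^2]=2(y^4+1)-(y^2-1)^2=(y^2+1)^2$.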
(c) Consider the remaining case where $xz=\delta y^2$ and $yw=\delta' z^2$ for certain constants $\delta,\delta'\in\{\pm1\}$.
(i) In the case where $xz=y^2$ and $yw=z^2$, the constraint $f=[x,y,y^2/x,y^3/x^2]$ is obviously in ${\cal DG}$, a contradiction.
(ii) When $xz=y^2$ and $yw=-z^2$, we obtain $f=[x,y,y^2/x,-y^3/x^2]$. Let us consider $h_2$. For simplicity, let $x=1$. The constraint $h_2$ therefore has the form $[A,B,C]$, where $A=1+y^4$, $B=y-y^5$, and $C=y^2+y^6$. Note that $A,C>0$. It follows from $AC-B^2 = 4y^6$ that $AC$ is different from $B^2$. Moreover, note that $B=0$ iff $y=\pm1$. Therefore, when $|y|\neq1$, the constraint $h_2^2=[A^2,B^2,C^2]$ belongs to ${\cal B}$. When $y=1$, $f$ has the form $[1,1,1,-1]$ and the constraint $h_1$, which equals $[4,2,4]$, is clearly in ${\cal B}$; when $y=-1$, $f$ equals $[1,-1,1,1]$ and $h_1=[4,-2,4]$ is likewise in ${\cal B}$. The proof for the case where $xz=-y^2$ and $yw=z^2$ is essentially the same.
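The value $AC-B^2=4y^6$ claimed above follows from
\[
AC=(1+y^4)(y^2+y^6)=y^2(1+y^4)^2,\qquad B^2=y^2(1-y^4)^2,
\]
so that $AC-B^2=y^2\bigl[(1+y^4)^2-(1-y^4)^2\bigr]=4y^6$.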
(iii)
Finally, we consider the case where $xz=-y^2$ and $yw=-z^2$. This case implies $f=[x,y,-y^2/x,-y^3/x^2]$. By assuming $x=1$, we obtain $h_1=[A,B,C]$, where $A=1+2y^2+y^4$, $B=y(y^2-1)^2$, and $C=y^2+2y^4+y^6$. Obviously, $A,C>0$ holds. By a simple calculation, we obtain $AC-B^2 = 8y^4(y^4+1)$. It thus follows that $AC\neq B^2$. Note that $B=0$ iff $y=\pm1$. If $|y|\neq1$, then the constraint $h_1^2=[A^2,B^2,C^2]$ belongs to ${\cal B}$. When $y=1$ and $y=-1$, we obtain $f=[1,1,-1,-1]$ and $f=[1,-1,-1,1]$, respectively, which are both in ${\cal B}_0$, a contradiction.
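Here, the computation of $AC-B^2$ reduces, after writing $u=y^2$, to an elementary identity: since $A=(1+y^2)^2$ and $C=y^2(1+y^2)^2$, we have $AC=y^2(1+u)^4$ and $B^2=y^2(1-u)^4$, and therefore
\[
AC-B^2 \;=\; y^2\bigl[(1+u)^4-(1-u)^4\bigr] \;=\; y^2\cdot 8u(1+u^2) \;=\; 8y^4(y^4+1).
\]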
\paragraph{Case of $k\geq4$.}
For convenience, let $u=f(0^k)$ and $w=f(1^k)$. There are four fundamental cases to examine, depending on the values of $u$ and $w$.
[Case: $u=0$ and $w\neq0$] Since the other case where $u\neq0$ and $w=0$ is symmetric, we omit that case. First, we note that $g$ cannot belong to ${\cal B}_0$ because $u=0$. Now, let us consider the constraint $h' = f^{x_1=x_2=\cdots =x_k} =[0,w]$. Since $w\neq0$, from this constraint $h'$, we can effectively T-construct $\Delta_1=[0,1]$. We then set $g$ as $f^{x_1=1}$ (equivalently, $\sum_{x_1\in\{0,1\}}f(x_1,\ldots,x_k)\Delta_1(x_1)$). Since $g\notin\Gamma$ implies the desired consequence, in what follows, we assume that $g\in\Gamma$.
(a) In the case where $g\in{\cal DG}$, $g$ cannot be $[x,0,0,\ldots,0]$ because $w\neq0$. If $g=[0,0,\ldots,0,x]$, then $f$ must equal $[0,0,0,\ldots,0,x]$, which belongs to ${\cal DG}$, a contradiction. The remaining case is that $g$ has the form $x\cdot [1,y,y^2,\ldots,y^{k-1}]$ with $xy\neq0$. Notice that $f=x\cdot [0,1,y,y^2,\ldots,y^{k-1}]$. If $y\neq -1$, then we define $h=f^{x_1=*}$, which is $x\cdot[1,y+1,y(y+1),\ldots,y^{k-2}(y+1)]$. Since $y(y+1)\neq (y+1)^2$, $h$ is not in ${\cal DG}$. Because $y\neq0$, we obtain $y+1\neq1$; moreover, if $y+1=-1$ (i.e., $y=-2$), then $y(y+1)=2$, whose absolute value differs from $1$. Therefore, $h$ cannot belong to ${\cal B}_0$. In conclusion, $h$ is not in $\Gamma$. We then apply the induction hypothesis to obtain the lemma. On the contrary, when $y=-1$, we consider another constraint $f^2=x^2\cdot[0,1,1,\ldots,1]$. This case can be reduced to the previous case of $y\neq-1$.
(b) Let us consider the case where $g$ belongs to ${\cal ED}_{1}^{(+)}$. Since $k\geq4$, $g$ cannot have the form $[0,x,0]$. If $g$ equals $[x,0,\ldots,0,w]$, then $f$ must be $[0,x,0,\ldots,0,w]$. Define $h=f^{x_1=*}$, implying $h=[x,x,0,\ldots,0,w]$. Since $h\notin\Gamma$, the induction hypothesis leads to the desired consequence.
(c) The next case to examine is that $g$ is in ${\cal AZ}$. The cases of $g=[x,0,x,0,\ldots,x,0]$ and $g=[0,x,0,x,\ldots,x,0]$ never occur because of $w\neq0$. If $g$ has the form $[x,0,x,0,\ldots,x]$, then $f$ must be of the form $[0,x,0,x,0,\ldots,x]$; thus, $f$ belongs to ${\cal AZ}$, a contradiction. Moreover, if $g=[0,x,0,x,\ldots,0,x\,\text{or}\,0]$, then $f$ equals $[0,0,x,0,x,\ldots,0,x\,\text{or}\,0]$. Let us define $h=f^{x_1=*}$, which equals $x\cdot [0,1,1,\ldots,1]$. Since $h\notin\Gamma$, we can apply the induction hypothesis to $h$.
(d) In the case of $g\in{\cal AZ}_1$, it holds that either $g=[x,0,-x,0,\ldots,w]$ or $g= [0,x,0,-x,0,\ldots,w]$, where $w\in\{\pm x\}$. In the former case, $f$ is of the form $[0,x,0,-x,0,\ldots,w]$ and falls into ${\cal AZ}_1$, a contradiction. In the latter case, we obtain $f=[0,0,x,0,-x,0,\ldots,w]$. Consider a new constraint $h=f^{x_1=*}$, which equals $[0,x,x,-x,-x,\ldots, w\,\text{or}\,w\pm x]$. Clearly, this does not belong to $\Gamma$, and thus we can apply the induction hypothesis to $h$.
[Case: $uw\neq0$]
This case is split into three subcases: $|u|=|w|$, $|u|<|w|$, and $|u|>|w|$. The last subcase is essentially the same as the second one, so we omit it.
(i) Let us assume that $|u|=|w|$. If $u=-w$, then the constraint $f'=f^2$ satisfies $f'(0,\ldots,0)=f'(1,\ldots,1)$, and thus it falls into the case of $u=w$. Henceforth, we will discuss only the case of $u=w$. Here, we assume that $\Delta_0$ is available for free (\textrm{i.e.},\hspace*{2mm} $i_0=0$). The other case $i_0=1$ is similar. Note that the constraint $g=f^{x_1=0}$ is effectively T-constructible from $\{f,\Delta_0\}$. When $g\notin \Gamma$, the induction hypothesis can be directly applied to $g$; therefore, we assume below that $g$ is a member of $\Gamma$.
(a) We start with the case where $g\in{\cal DG}$. When $g=[u,0,\ldots,0]$, $f$ has the form $[u,0,\ldots,0,u]$. This constraint $f$ thus belongs to ${\cal ED}_{1}$, a contradiction against $f\notin{\cal ED}_1^{(+)}$. When $g=[0,0,\ldots,0,x]$ with $x\neq0$, since $f=[0,0,\ldots,0,x,u]$, we can effectively T-construct $\Delta_1=[0,1]$ from $f$. We then define $h=f^{x_1=*}$ of the form $[0,0,\ldots,x,x+u]$. If $x\neq -u$, then the constraint $h'= h^{x_1=x_2=\cdots =x_{k-3}=1}$ has the form $[0,x,x+u]$, and thus its induced constraint $(h')^2$ belongs to ${\cal OR}$. Otherwise, we consider $f'=f^2=[0,0,\ldots,u^2,u^2]$ and reduce this case to the previous case of $x\neq-u$. Next, assume that $g= x\cdot[1,z,z^2,\ldots,z^{k-1}]$ for $x,z\neq0$. In what follows, when $z=-1$, we use $f'=f^2$ instead of $f$. If $z\neq -1$, then $f$ is of the form $x\cdot [1,z,z^2,\ldots,z^{k-1},u/x]$. Since $f\notin {\cal DG}$, it follows that $u/x\neq z^k$ (equivalently, $u\neq xz^{k}$). Let us consider $h=f^{x_1=*}$ of the form $x(1+z)\cdot [1,z,z^2,\ldots,z^{k-2},A]$, where $A=\frac{xz^{k-1}+u}{x(1+z)}$. Notice that $x=u/z^{k}$ iff $A= z^{k-1}$. This equivalence leads to $A\neq z^{k-1}$. Therefore, $h$ does not belong to $\Gamma$. We simply apply the induction hypothesis to $h$.
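In the last sub-case above, the stated form of $h$ is immediate from adding adjacent entries of $f=x\cdot [1,z,z^2,\ldots,z^{k-1},u/x]$:
\[
h \;=\; \bigl[\,x+xz,\; xz+xz^2,\;\ldots,\; xz^{k-2}+xz^{k-1},\; xz^{k-1}+u\,\bigr] \;=\; x(1+z)\cdot\bigl[1,z,\ldots,z^{k-2},A\bigr]
\]
with $A=\frac{xz^{k-1}+u}{x(1+z)}$, exactly as defined above.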
(b) Assume that $g\in{\cal ED}_{1}^{(+)}$. Since $g$ equals $[u,0,\ldots,0,x]$, $f$ has the form $[u,0,\ldots,0,x,u]$. Let us define $h=f^{x_1=*}$, which equals $[u,0,\ldots,x,u+x]$. When $u\neq-x$, we apply the induction hypothesis to $h$ because of $h\notin\Gamma$. On the contrary, if $u=-x$, then we consider another constraint $f^2$ ($=[u^2,0,\ldots,0,x^2,u^2]$) and reduce this case to the previous case of $u\neq-x$.
(c) Next, we assume that $g$ is in ${\cal AZ}$. Note that $g$ cannot have the form $g=[u,0,u,0,\ldots,0]$ because, otherwise, $f$ equals $[u,0,u,0,\ldots,0,u]\in{\cal AZ}$, a contradiction. When $g$ is of the form $[u,0,u,0,\ldots,0,u]$, we easily obtain $[u,2u]$ from the constraint $h=f^{x_1=*}=[u,u,\ldots,u,2u]$. By Claim \ref{algebraic-simul}, we effectively T-construct $\Delta_1=[0,1]$ from $[u,2u]$. Now, we apply $\Delta_1$ repeatedly to $f$ and then obtain $g'=[0,u,u]= u\cdot OR$, which obviously belongs to ${\cal OR}$.
(d) When $g$ is in ${\cal AZ}_1$, $g$ is not of the form $[u,0,-u,0,\ldots,-u,0]$ since, otherwise, $f$ is $[u,0,-u,\ldots,-u,0,u]$ and it belongs to ${\cal AZ}_1$, a contradiction. If $g$ equals $[u,0,-u,\ldots,0,\pm u]$, then $f$ has the form $[u,0,-u,\ldots,0,\pm u,u]$. We then take $f^2=[u^2,0,u^2,\ldots,0,u^2,u^2]$ and reduce this case to Case (c). Next, assume that $g=[u,0,-u,\ldots,u,0]$. Since $f=[u,0,-u,\ldots,u,0,u]$, the constraint $f^{x_1=*,x_2=*}$ equals $[0,-2u,0,\ldots,0,2u,2u]$, from which we immediately obtain $[0,2u] = 2u\cdot \Delta_{1}$. Using this $\Delta_{1}$, we can effectively T-construct $[0,2u,2u] = 2u\cdot OR$, which obviously is in ${\cal OR}$.
(e) Assuming that $g\in {\cal B}_0$, let $g=[u,-u,-u,u,\ldots,z_{k-1},z_k]$, where $z_{k-1},z_k\in\{u,-u\}$. As the first case, consider the case where $(z_{k-1},z_k)=(-u,u)$; namely, $f$ has the form $u\cdot [1,-1,-1,1,\ldots,-1,1,1]$. Obviously, $f$ belongs to ${\cal B}_0$, a contradiction. A similar argument works for the case of $(z_{k-1},z_k)=(-u,-u)$. Now, assume that $(z_{k-1},z_k)=(u,u)$. Since $f= u\cdot[1,-1,-1,\ldots,1,1,1]$, the constraint $f^{x_1=*}$ must be of the form $u\cdot[0,-2,0,\ldots,0,2,2]$. From this constraint, we effectively T-construct $\Delta_1=[0,1]$. Hence, $[0,2,2]$ is also effectively T-constructed from $\{f,\Delta_1\}$. The resulting constraint $[0,2,2]$ is clearly in ${\cal OR}$. Finally, consider the case of $(z_{k-1},z_k)=(u,-u)$. Since $f= u\cdot[1,-1,-1,\ldots,1,-1,1]$, the constraint $f^{x_1=*,x_2=*,\ldots,x_{k-1}=*}$ has the form $[0,y]$ for a certain non-zero value $y$. This allows us to use $\Delta_1$ freely. Applying $\Delta_1$ repeatedly, we easily obtain $[-2,-2,0]$ from $f^{x_1=*,x_2=*}= u\cdot [-2,-2,\ldots,0]$. This constraint $[-2,-2,0]=-2\cdot NAND$ belongs to ${\cal NAND}$.
(ii) Let us consider the second case where $|u|<|w|$. Here, we define $h'(x_1)=f(x_1,x_1,\ldots,x_1)$. Since $h'=[u,w]$ holds, we effectively T-construct $\Delta_1=[0,1]$ from $h'$ by Claim \ref{algebraic-simul}. As a result, from $\{f,\Delta_1\}$, we further effectively T-construct two constraints $g=f^{x_1=1}$ and $h=f^{x_1=*}$. In the case of $g\notin\Gamma$, the desired result follows from the induction hypothesis. Hereafter, we assume that $g$ is indeed in $\Gamma$.
(a) If $g\in{\cal DG}$, then $g$ must be either $[0,0,\ldots,0,w]$ or $x\cdot [1,z,z^2,\ldots,z^{k-1}]$. In the former case, $f$ equals $[u,0,0,\ldots,0,w]$, and thus it belongs to ${\cal ED}_{1}^{(+)}$. This is a clear contradiction. In the latter case, $f$ has the form $x\cdot [u/x,1,z,z^2,\ldots,z^{k-1}]$.
If $x=uz$, then $f$ equals $\frac{x}{z}\cdot [1,z,z^2,\ldots,z^{k}]$, which belongs to ${\cal DG}$, a contradiction. Therefore, we obtain $x\neq uz$. Now assume that $z=-1$. Since $x\neq uz$, $x+u\neq0$ holds. Note that the constraint $h$ is of the form $x\cdot [\frac{u+x}{x},0,0,\ldots,0]$, from which we can effectively T-construct $[\frac{u+x}{x},0]$. This unary constraint allows us to use $\Delta_0=[1,0]$ freely. An application of $\Delta_0$ to $f$ generates $g'=x\cdot [u/x,1,z,z^2,\ldots,z^{k-2}]$. Since $g'\notin\Gamma$, we can apply the induction hypothesis. Next, assume that $z\neq-1$. By a simple calculation, we obtain $h = x(1+z)\cdot [\frac{u+x}{x(1+z)},1,z,\ldots,z^{k-2}]$. It follows from $x\neq uz$ that $\frac{u+x}{x(1+z)} \neq \frac{1}{z}$. Hence, $h$ is not in $\Gamma$. The induction hypothesis then leads to the desired consequence.
(b) Consider the case of $g\in{\cal ED}_{1}^{(+)}$. Let $g$ be $[x,0,\ldots,0,w]$ for a certain constant $x\neq0$; thus, $f$ equals $[u,x,0,\ldots,0,w]$. If $u\neq-x$, then the constraint $h$ has the form $[u+x,x,0,\ldots,0,w]$. Since $u+x\neq0$, $h$ does not belong to $\Gamma$. We then apply the induction hypothesis to $h$. On the contrary, when $u=-x$, we instead substitute $f^2=[u^2,x^2,0,\ldots,0,w^2]$ for $f$ and make this case reduced to the previous case of $u\neq-x$.
(c) Let us consider the case where $g$ is in ${\cal AZ}$. Since $g$ cannot have the form $[0,x,0,x,\ldots,x,0]$, we first assume that $g=[0,x,0,x,\ldots,x]$. The original constraint $f$ then has the form $[u,0,x,0,x,\ldots,x]$
with $x=w$. Notice that $x\neq u$ because $|u|<|w|=|x|$. Since $0<|u|<|x|$, the constraint $h=[u,x,x,\ldots,x]$ does not belong to $\Gamma$. The induction hypothesis can be applied to $h$. In the case of $g=[x,0,x,0,\ldots,x]$, on the contrary, we consider $g'=(f^2)^{x_1=*}$, which is $[u^2+x^2,x^2,\ldots,x^2]$. From this $g'$, we obtain $[u^2+x^2,x^2]$. Claim \ref{algebraic-simul} again allows us to use $\Delta_0 =[1,0]$. Apply $\Delta_0$ repeatedly to the constraint $f^2$. We then obtain $[u^2,x^2,0]$, which clearly belongs to ${\cal NAND}$.
(d) Assume that $g$ belongs to ${\cal AZ}_1$. Note that $g=[0,x,0,-x,0,\ldots,0]$
and $g=[x,0,-x,0,x,\ldots,0]$ are both impossible. First, we assume that $g=[0,x,0,-x,\ldots,\pm x]$; thus, $f$ has the form $[u,0,x,0,-x,\ldots,\pm x]$, where $w=\pm x$. Since $|u|<|w|=|x|$, the constraint $h$ ($=[u,x,x,-x,-x,\ldots,\pm x]$) cannot belong to $\Gamma$. We then apply the induction hypothesis to $h$. Next, assume that $g=[x,0,-x,0,x,\ldots,\pm x]$. Since $f$ has the form $[u,x,0,-x,0,x,\ldots,\pm x]$, we consider a new constraint $g'=(f^2)^{x_1=*}$, which equals $[u^2+x^2,x^2,x^2,\ldots,x^2]$. From this constraint $g'$, we obtain $[u^2+x^2,x^2]$. The constant unary constraint $\Delta_0=[1,0]$ can be effectively T-constructed from $[u^2+x^2,x^2]$ by Claim \ref{algebraic-simul}. Using this $\Delta_0$, we obtain from $f^2$ the constraint $[u^2,x^2,0]$, which belongs to ${\cal NAND}$.
(e) Assuming that $g\in {\cal B}_0$, we first consider the case where $g=[x,-x,-x,x,\ldots,\pm x]$ with $x = \pm w$. Note that $f$ must have the form $x\cdot [u/x,-1,-1,1,\ldots,-1,\pm1]$. Since $f\notin{\cal AZ}_1$, $u\neq x$ (equivalently, $u+x\neq 2x$) follows. If $u+x\neq -2x$, then the constraint $h = x\cdot [(u+x)/x,-2,0,2,0,\cdots,0\,\text{or}\,\pm2]$ cannot belong to $\Gamma$, and thus we can apply the induction hypothesis to $h$. The remaining case is $u+x=-2x$ (equivalently, $u=-3x$). In this case, we obtain $f= x\cdot [-3,1,-1,-1,1,\cdots,\pm1]$. Since $f^{x_1=x_2=\cdots =x_{k}}$ is $[-3,\pm1]$, we can effectively T-construct $\Delta_0=[1,0]$ by Claim \ref{algebraic-simul}. Applying $\Delta_0$ repeatedly to $f$, we obtain a new constraint $h'=[-3,1,-1]$, which clearly belongs to ${\cal B}$. The case where $g=[x,x,-x,-x,\ldots,\pm x]$ can be similarly handled.
[Case: $u=w=0$] Here, we assume that $\Delta_{i_0}=\Delta_0$. The other case of $\Delta_{i_0}= \Delta_1$ is similarly handled. Now, we effectively T-construct $g=f^{x_1=0}$ from $\{f,\Delta_0\}$. Note that $g(0^{k-1})=0$. As done before, it suffices to consider the case where $g$ is in $\Gamma$. Clearly, $g\notin{\cal ED}_1^{(+)}\cup{\cal B}_0$, and thus $g$ must be in ${\cal DG}\cup {\cal AZ}\cup{\cal AZ}_1$. In the following argument, $h$ refers to $f^{x_1=*}$.
(a) Assume that $g$ is in ${\cal DG}$. Since $g(0^{k-1})=0$, $g$ is of the form $[0,0,\ldots,0,x]$ with $x\neq0$. Since $f$ equals $[0,0,\ldots,0,x,0]$, $h$ coincides with $[0,0,\ldots,x,x]$. Obviously, $h\notin\Gamma$, and thus the induction hypothesis can be applied to $h$.
(b) When $g$ belongs to ${\cal AZ}$, $g$ must have the form $[0,x,0,x,\ldots,0]$, because $g=[0,x,0,x,\ldots,x]$ implies $f=[0,x,0,x,\ldots,x,0]\in{\cal AZ}$, which leads to a contradiction. Since $f=[0,x,0,x,\ldots,0,0]$, we obtain $h=[x,x,x,\ldots,x,0]$. Since $h\notin\Gamma$, the induction hypothesis then leads to the desired consequence.
(c) Assume that $g\in{\cal AZ}_1$. If $g=[0,x,0,-x,\ldots,\pm x]$, then the original constraint $f=[0,x,0,-x,\ldots,\pm x,0]$ is already in ${\cal AZ}_1$, a contradiction. Hence, $g$ must have the form $[0,x,0,-x,\ldots,0]$, yielding $h=[x,x,-x,\ldots,0]$. Clearly, $h$ does not belong to $\Gamma$. Finally, we apply the induction hypothesis to $h$. \end{proofof}
\subsection*{Appendix: Proof of Lemma \ref{constructibility}}
In what follows, we will give the missing proof of Lemma \ref{constructibility}. For any constraint $f$ of arity $k$, the notation $\max|f|$ indicates the maximum value $|f(x)|$ over all inputs $x\in\{0,1\}^k$.
(1)--(2) These properties (reflexivity and transitivity) directly come from the definition of effective T-constructibility.
(3) Let $({\cal H}_1,{\cal H}_2,\ldots,{\cal H}_n)$ be a generating series of ${\cal F}_1$ from ${\cal F}_2$. We need to show that $\#\mathrm{CSP}({\cal H}_i,{\cal G}) \leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal H}_{i+1},{\cal G})$ for each adjacent pair $({\cal H}_i,{\cal H}_{i+1})$, where $i\in[n-1]$. By Lemma \ref{AP-property}, $\leq_{\mathrm{AP}}$ is transitive; thus, it follows that $\#\mathrm{CSP}({\cal H}_1,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal H}_n,{\cal G})$. This is clearly equivalent to $\#\mathrm{CSP}({\cal F}_1,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal F}_2,{\cal G})$, as requested.
Taking an arbitrary pair $({\cal H}_i,{\cal H}_{i+1})$ with $i\in[n-1]$, we treat the first case where $({\cal H}_i,{\cal H}_{i+1})$ satisfies Clause (I) of Definition \ref{def:constructibility}. Consider a constraint frame $\Omega=(G,X|{\cal H}',\pi)$ with ${\cal H}'\subseteq {\cal H}_i\cup{\cal G}$. For convenience, let ${\cal H}'=\{f_1,f_2,\ldots,f_d\}$. Take each $f_j$ in turn and consider all subgraphs of $G$ that represent $f_j$. Choose such subgraphs one by one. Now, let $G_{f_j}$ be such a subgraph. By Clause (I), there exists another finite graph $G'_{f_j}$ that realizes $f_j$ by ${\cal H}_{i+1}$. We replace $G_{f_j}$ in $G$ by $G'_{f_j}$. After all the subgraphs representing $f_j$ are replaced, the resulting graph constitutes a new constraint frame $\Omega'$. It is not difficult to show that $csp_{\Omega'}$ equals $csp_{\Omega}$. We continue this replacement process for all $f_j$'s. In the end, we conclude that $\#\mathrm{CSP}({\cal H}_i,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal H}_{i+1},{\cal G})$.
We will examine the second case where $({\cal H}_i,{\cal H}_{i+1})$ satisfies Clause (II) of Definition \ref{def:constructibility}. In what follows, for ease of our argument, we assume that ${\cal H}_i=\{f\}$ and we want to claim that $\#\mathrm{CSP}(f,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal H}_{i+1},{\cal G})$. Take a p-convergence series $\Lambda$ for $f$, which is effectively T-constructible from ${\cal H}_{i+1}$. Our claim is split into two parts: (a) $\#\mathrm{CSP}(f,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}(\Lambda,{\cal G})$ and (b) $\#\mathrm{CSP}(\Lambda,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal H}_{i+1},{\cal G})$. We will prove these parts separately. Since (b) is easy, we start with (b).
\sloppy
(b) We intend to show that $\#\mathrm{CSP}(\Lambda,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}({\cal H}_{i+1},{\cal G})$. Let $\Lambda =(f_1,f_2,\ldots)$ and ${\cal H}_{i+1}=\{g_1,g_2,\ldots,g_d\}$. Now, we take any constraint frame $\Omega = (G,X|{\cal G}',\pi)$ with ${\cal G}'\subseteq \Lambda\cup{\cal G}$, given to $\#\mathrm{CSP}(\Lambda,{\cal G})$. Since the constraint set ${\cal G}'$ is finite, for simplicity, we assume that ${\cal G}'$ is composed of constraints $h_1,h_2,\ldots,h_s,f_{i_1},f_{i_2},\ldots,f_{i_t}$, where $s\in\mathbb{N}$, $t\in\mathbb{N}^{+}$, and each of the constraints $h_1,\ldots,h_s$ belongs to ${\cal G}-\Lambda$. For this constraint frame $\Omega$, we will explain how to compute the value $csp_{\Omega}$. Since $\Lambda$ is effectively T-constructible from ${\cal H}_{i+1}$, there exists a polynomial-time DTM $M$ that, for each index $j\in[t]$, generates an appropriate graph $\tilde{G}_{i_j}$ realizing $f_{i_j}$ from any graph $G_{i_j}$ representing $f_{i_j}$.
Each node $v$ labelled $f_{i_j}$ ($j\in[t]$) in $G$ corresponds to a unique subgraph $G_{i_j}$, including all dangling edges adjacent to $v$, that represents $f_{i_j}$. By running $M$ on $G_{i_j}$, we obtain another subgraph $\tilde{G}_{i_j}$ realizing $f_{i_j}$, which contains all the dangling edges of $G_{i_j}$. It is therefore possible to generate from $G$ another bipartite graph $\tilde{G}$ in which every subgraph $G_{i_j}$ of $G$ representing $f_{i_j}$ is replaced by its associated subgraph $\tilde{G}_{i_j}$ obtained from $G_{i_j}$ by $M$. We denote by $\Omega'$ the constraint frame obtained from $\Omega$ by replacing $G$ with $\tilde{G}$ and by modifying $\pi$ accordingly. The definition of ``realizability'' implies that $csp_{\Omega'}$ equals $\gamma\cdot csp_{\Omega}$ for an appropriate number $\gamma\in\mathbb{A}$. Since $\tilde{G}$ contains only constraints in ${\cal H}_{i+1}\cup{\cal G}$, $\Omega'$ must be a valid input instance to $\#\mathrm{CSP}({\cal H}_{i+1},{\cal G})$. As a result, we conclude that $\#\mathrm{CSP}(\Lambda,{\cal G})$ is AP-reducible to $\#\mathrm{CSP}({\cal H}_{i+1},{\cal G})$.
(a) We want to claim that $\#\mathrm{CSP}(f,{\cal G})\leq_{\mathrm{AP}} \#\mathrm{CSP}(\Lambda,{\cal G})$. This claim is proven by modifying the proof of \cite[Lemma 9.2]{Yam10a}. Hereafter, assume that $f$ is of arity $k$ and let $\Lambda=(g_1,g_2,\ldots)$. For convenience, we define $AC=\{x\in\{0,1\}^k\mid f(x)\neq0\}$. By Eq.(\ref{eqn:convergence}), there exists a constant $\lambda\in(0,1)$ such that, for every $m\in\mathbb{N}^{+}$ and every $x\in\{0,1\}^k$, certain constants $c,d\in\{\pm1\}$ satisfy the following condition: \begin{quote} \begin{itemize}
\item[(*)\;\;] $(1+\lambda^m c)g_m(x) \leq f(x) \leq (1+\lambda^m d)g_m(x)$ for all $x\in AC$, and $|g_m(x)|\leq \lambda^m$ for all $x\in\{0,1\}^k-AC$. \end{itemize} \end{quote} Without loss of generality, we can assume that $\lambda$ is an algebraic real number.
Let us take any constraint frame $\Omega=(G,X|{\cal G}',\pi)$ with $G=(V_1|V_2,E)$ and ${\cal G}'\subseteq \{f\}\cup{\cal G}$ given as an input instance to $\#\mathrm{CSP}(f,{\cal G})$. It is enough to consider the case where $f$ appears in ${\cal G}'$. Let $p_f$ denote the total number of nodes in $V_2$ whose labels are $f$. For simplicity, write $L$ for the set of all $2^k$-tuples $\ell=(\ell_{x_1},\ell_{x_2},\ldots,\ell_{x_{2^k}})\in\mathbb{N}^{2^k}$ satisfying that $\sum_{i\in[2^k]}\ell_{x_i}= p_f$, where each $x_i$ denotes the lexicographically $i$th string in $\{0,1\}^k$. In addition, we set $L_f =\{\ell\in L\mid \forall i\in[2^k] \;[\ f(x_i)=0\rightarrow \ell_{x_i}=0\ ]\}$. It is not difficult to show by Eq.(\ref{eqn:csp-def}) that $csp_{\Omega}$ can be expressed in the form $\sum_{\ell\in L_f} \alpha_{\ell}(\prod_{x\in AC}f(x)^{\ell_x})$ for appropriately chosen numbers $\alpha_{\ell}\in \mathbb{A}$, provided that $0^0$ is treated as $1$ for technical reason.
We set $a_0=2^k!\,2^{4k}$ and $b_0= [1+ (2\max|f|)^{|V_2|}]\cdot \sum_{\ell\in L-L_f}|\alpha_{\ell}|$, which are obviously independent of $m$. Meanwhile, we arbitrarily fix an integer $m\in\mathbb{N}^{+}$ that satisfies both $\lambda^ma_0<1$ and $\lambda^mb_0<1$, and we denote by $\Omega_m$ the constraint frame obtained from $\Omega$ by replacing every node labeled $f$ with a new node having the label $g_m$. Concerning this $\Omega_m$, its value $csp_{\Omega_m}$ coincides with the sum $\Gamma_{1,m}+\Gamma_{2,m}$, where \[ \Gamma_{1,m} = \sum_{\ell\in L_f} \alpha_{\ell} \prod_{x\in AC}g_m(x)^{\ell_x} \;\;\text{ and }\;\; \Gamma_{2,m} = \sum_{\ell\in L-L_f} \alpha_{\ell} \prod_{x\in \{0,1\}^k\wedge \ell_x>0}g_m(x)^{\ell_x}. \]
Next, we will establish a close relationship between $csp_{\Omega}$ and $\Gamma_{1,m}$; more specifically, we intend to prove the following key claim.
\begin{claim}\label{Gamma-1m-a0}
It holds that $(1+\lambda^m B)\Gamma_{1,m}\leq csp_{\Omega}\leq (1+\lambda^m B')\Gamma_{1,m}$ for appropriate numbers $B,B'\in\mathbb{A}$ satisfying $|B|,|B'|\leq a_0$. Therefore, $sgn(csp_{\Omega}) = sgn(\Gamma_{1,m})$ holds. \end{claim}
\begin{proof}
It is obvious that the second part of the claim follows from the first part, because $\lambda^m|B| \leq \lambda^ma_0<1$ and similarly $\lambda^m|B'|<1$ by our choice of $m$. Henceforth, we aim at proving the first part. Fix $\ell\in L_f$ arbitrarily. {}From Condition (*), for appropriate selections of $c_{\ell,x}$'s and $d_{\ell,x}$'s in $\{\pm1\}$, we obtain \begin{equation}\label{eqn:M_0-vs-M_1} \prod_{x\in AC}(1+\lambda^mc_{\ell,x})^{\ell_{x}} g_m(x)^{\ell_{x}} \leq \prod_{x\in AC}f(x)^{\ell_{x}}\leq \prod_{x\in AC}(1+\lambda^md_{\ell,x})^{\ell_{x}} g_m(x)^{\ell_{x}}. \end{equation} Note that, when all elements in ${\cal F}'$ are limited to {\em nonnegative} constraints, we can always set $c_{\ell,x}=-1$ and $d_{\ell,x}=1$. Eq.(\ref{eqn:M_0-vs-M_1}) leads to upper and lower bounds of $csp_{\Omega}$: \begin{equation}\label{eqn:alpha-csp-Omega} \sum_{\ell\in L_f} \alpha_{\ell}\prod_{x\in AC}(1+\lambda^mc_{\ell,x})^{\ell_{x}} g_m(x)^{\ell_{x}} \leq csp_{\Omega} \leq \sum_{\ell\in L_f} \alpha_{\ell}\prod_{x\in AC}(1+\lambda^md_{\ell,x})^{\ell_{x}} g_m(x)^{\ell_{x}}. \end{equation} Let us further estimate the first and the last terms in Eq.(\ref{eqn:alpha-csp-Omega}). Let us handle the first term. By considering the binomial expansion of $(1+z)^{n}$, it holds that, for any numbers $n\in\mathbb{N}^{+}$ and $z\in\mathbb{R}$ satisfying that $-1/n\leq z\leq 2/n$, there exists a number $e\in\{1/2,n\}$ such that $1+nz\leq (1+z)^{n}\leq 1+enz$ (more precisely, if $z\geq0$ then $e=n$; otherwise, $e=1/2$). Hence, by choosing appropriate numbers $e_{\ell,x} \in \{\pm1/2,\pm\ell_x\}$, we obtain \begin{eqnarray*} \prod_{x\in AC}(1+\lambda^m c_{\ell,x})^{\ell_x}g_m(x)^{\ell_x} \geq \prod_{x\in AC}(1+\lambda^m \ell_{x}e_{\ell,x})g_m(x)^{\ell_x} = \prod_{x\in AC}(1+\lambda^m \ell_{x}e_{\ell,x})\prod_{x\in AC}g_m(x)^{\ell_x} \end{eqnarray*} since $m$ satisfies that $-1<\lambda^m\ell_{x}e_{\ell,x}<1$.
For a further estimation, let us focus on the value $\prod_{x\in AC}(1+\lambda^m z_x)$ for any series $\{z_x\}_{x\in AC}\subseteq [-2^{2k},2^{2k}]_{\mathbb{Z}}$. Since $\prod_{x\in AC}(1+\lambda^m z_x)$ has the form $1+\sum_{i=1}^{|AC|}\sum_{y_1,y_2,\ldots,y_i\in AC}\lambda^{im}z_{y_1}z_{y_2}\cdots z_{y_i}$, where all indices $y_1,y_2,\ldots,y_i$ are distinct, if we set $\tilde{B} = \sum_{i=1}^{|AC|}\sum_{y_1,y_2,\ldots,y_i\in AC}\lambda^{(i-1)m}|z_{y_1}z_{y_2}\cdots z_{y_i}|$, then we derive that $1-\lambda^m \tilde{B}\leq \prod_{x\in AC}(1+\lambda^m z_x)
\leq 1+\lambda^m \tilde{B}$. Note that $|\lambda^mz_{x}| \leq \lambda^m2^{2k}\leq \lambda^ma_0<1$ for any $x\in AC$ since $z_{x}\in [-2^{2k},2^{2k}]_{\mathbb{Z}}$. It therefore follows that \[
\sum_{y_1,\ldots,y_i\in AC}\lambda^{(i-1)m}|z_{y_1}\cdots z_{y_i}|\leq \sum_{y_1\in AC}|z_{y_1}|\sum_{y_2,\ldots,y_i\in AC}1\leq |AC|2^{2k}\comb{|AC|}{i}\leq |AC|2^{2k}|AC|!. \] We then conclude that $\tilde{B}$ satisfies that
$|\tilde{B}|\leq \sum_{i=1}^{|AC|} |AC|2^{2k}|AC|! \leq |AC|^22^{2k}|AC|! \leq a_0$ since $|AC|\leq 2^k$. {}From this fact, there exists a series $\{B_{\ell}\}_{\ell\in L_f}\subseteq \mathbb{A}$ with $|B_{\ell}|\leq a_0$ such that \[ \prod_{x\in AC}(1+\lambda^{m}\ell_x e_{\ell,x}) \prod_{x\in AC}g_m(x)^{\ell_x} \geq (1+\lambda^m B_{\ell})\prod_{x\in AC}g_m(x)^{\ell_x}. \]
Finally, we choose an appropriate number $B\in\mathbb{A}$ with $|B|\leq a_0$ that satisfies \begin{equation*} \sum_{\ell\in L_f}\alpha_{\ell}(1+\lambda^mB_{\ell})\prod_{x\in AC}g_m(x)^{\ell_x} \geq (1 + \lambda^m B)\sum_{\ell\in L_f}\alpha_{\ell}\prod_{x\in AC}g_m(x)^{\ell_x} = (1 + \lambda^m B)\Gamma_{1,m}. \end{equation*}
Concerning the third term in Eq.(\ref{eqn:alpha-csp-Omega}), a similar argument used for the first term shows the existence of an algebraic real number $B'\in\mathbb{A}$ such that $|B'|\leq a_0$ and \begin{equation*} \sum_{\ell\in L_f}\alpha_{\ell}\prod_{x\in AC}(1+\lambda^m d_{\ell,x})^{\ell_x}g_m(x)^{\ell_x} \leq (1+\lambda^m B')\Gamma_{1,m}. \end{equation*} By the selection of $B$ and $B'$, they certainly satisfy the claim. \end{proof}
Next, we will give an upper-bound of $|\Gamma_{2,m}|$. Recall that $b_0= C \sum_{\ell\in L-L_f}|\alpha_{\ell}|$, where $C= 1+ (2\max|f|)^{|V_2|}$.
\begin{claim}\label{Gamma-2m}
It holds that $|\Gamma_{2,m}|\leq \lambda^m b_0$. \end{claim}
\begin{proof} For the time being, we fix a series $\ell\in L-L_f$ and conduct a basic analysis. For this series $\ell$, there exists an element $x\in\{0,1\}^k$
such that $x\notin AC$ and $\ell_x>0$. For convenience, we define $D=\{x\in\{0,1\}^k\mid \ell_x>0\}$ and further partition it into two sets: $D_1=\{x\in D\mid x\in AC\}$ and $D_2=\{x\in D\mid x\notin AC\}$. Notice that $D_2$ is nonempty. Since $\lambda<1$ and $|g_m(x)|\leq \lambda^m$ for all $x\in D_2$, it follows that \begin{equation*}
\left| \prod_{x\in D_2}g_m(x)^{\ell_x} \right|
= \prod_{x\in D_2}|g_m(x)|^{\ell_x} \leq \prod_{x\in D_2} \lambda^{m\ell_x} \leq \lambda^m. \end{equation*}
Condition (*) implies that $|g_m(x)|\leq |f(x)|/\min\{1+\lambda^mc,1+\lambda^md\}\leq 2|f(x)|$ for any $x\in AC$ because $|\lambda^mc|,|\lambda^md|<1/2$. If $\max|g_m|\geq1$, then we obtain \[
\left| \prod_{x\in D_1}g_m(x)^{\ell_x} \right| \leq \prod_{x\in D_1} |g_m(x)|^{\ell_x} \leq (\max|g_m|)^{\sum_{x\in D_1} \ell_x}
\leq (\max|g_m|)^{p_f} \leq (2\max|f|)^{|V_2|} \]
since $\sum_{x\in D_1}\ell_x \leq p_f\leq|V_2|$. When $\max|g_m|<1$, we instead obtain $\prod_{x\in D_1}|g_m(x)|^{\ell_x}\leq 1$. Therefore, it holds that \begin{equation*}
\left| \prod_{x\in D}g_m(x)^{\ell_x} \right|
= \left| \prod_{x\in D_2}g_m(x)^{\ell_x} \right|\cdot
\left| \prod_{x\in D_1}g_m(x)^{\ell_x} \right| \leq \lambda^m C. \end{equation*}
The value $|\Gamma_{2,m}|$ is upper-bounded by \begin{equation*}
|\Gamma_{2,m}| \leq \sum_{\ell\in L-L_f} \left|\alpha_{\ell}\right| \left|\prod_{x\in D}g_m(x)^{\ell_x} \right|
\leq \lambda^m C \sum_{\ell\in L-L_f} \left|\alpha_{\ell}\right| \leq \lambda^m b_0. \end{equation*} This completes the proof of the claim. \end{proof}
To finish the proof of Lemma \ref{constructibility}, we will present a randomized oracle computation that solves $\#\mathrm{CSP}(f,{\cal G})$ with a single query to the oracle $\#\mathrm{CSP}(\Lambda,{\cal G})$. First, we want to define a special constant $d_0$ corresponding to $\Omega$. The definition of $d_0$ requires the following well-known lower bound of the absolute values of polynomials in algebraic real numbers.
\begin{lemma}\label{complex-lower-bound}{\rm \cite{Sto74}}\hs{1}
Let $\alpha_1,\ldots,\alpha_m\in\mathbb{A}$ and let $c$ be the degree of $\mathbb{Q}(\alpha_1,\ldots,\alpha_m)/\mathbb{Q}$. There exists a constant $e>0$ that satisfies the following statement for any complex number $\alpha$ of the form $\sum_{k}a_{k}\left(\prod_{i=1}^{m}\alpha_i^{k_i}\right)$, where $k=(k_1,\ldots,k_m)$ ranges over $[N_1]\times\cdots\times[N_m]$, $(N_1,\ldots,N_m)\in\mathbb{N}^{m}$, and $a_k\in\mathbb{Z}$. If $\alpha\neq0$, then $|\alpha|\geq \left(\sum_{k}|a_k|\right)^{1-c}\prod_{i=1}^{m}e^{-cN_i}$. \end{lemma}
Following the proof of \cite[Lemma 9.2]{Yam10a} under the assumption that $csp_{\Omega}\neq0$, it is possible to set values of four series $\{N_i\}_{i}$, $\{a_k\}_{k}$, $\{\alpha_i\}_{i}$, and $\{k_i\}_{i}$ appropriately so that Lemma \ref{complex-lower-bound} provides two constants $c,e>0$ for which $|csp_{\Omega}|\geq (\sum_{k}|a_k|)^{1-c}\prod_{i}e^{-cN_i}$. The desired constant $d_0$ is now defined to be $(\sum_{k}|a_k|)^{1-c}\prod_{i}e^{-cN_i}$. Notice that $d_0$ is an algebraic real number.
Let us describe our randomized approximation algorithm.
\begin{quote}
[Algorithm ${\cal M}$] On input instance $(\Omega,1/\varepsilon)$, set $\delta = \varepsilon/2$ and find in polynomial time an integer $m\geq1$ satisfying that $\lambda^ma_0<\min\{1,\delta\}$ and $\lambda^m b_0<\min\{d_0,\delta\}$. Produce another constraint frame $\Omega_m$. Make a query with the query word $(\Omega_{m},1/\delta)$ to the oracle and let $w$ be an answer from the oracle. Notice that $w$ is a {\em random variable} since the oracle is a RAS. Compute $d_0$ defined above. If $|w|< d_0$, then output $0$; otherwise, output $w$. \end{quote}
We want to prove that the above randomized algorithm ${\cal M}$ approximately solves $\#\mathrm{CSP}(f,{\cal G})$ with high probability. Let us consider two cases separately.
(1) For the first case where $csp_{\Omega}=0$, we need to prove that ${\cal M}$
outputs $0$ with high probability. Let us evaluate the values $\Gamma_{1,m}$ and $\Gamma_{2,m}$. Obviously, Claim \ref{Gamma-1m-a0} implies $\Gamma_{1,m}=0$. By Claim \ref{Gamma-2m} and the choice of $m$, we derive $|\Gamma_{2,m}|\leq \lambda^m b_0<d_0$. From $csp_{\Omega_m} = \Gamma_{1,m}+\Gamma_{2,m}$, it follows that $|csp_{\Omega_m}| <d_0$. This means that ${\cal M}$ outputs $0$ with high probability.
(2) Next, we consider the case where $csp_{\Omega}\neq0$. We consider only the case where $csp_{\Omega}>0$ because the other case $csp_{\Omega}<0$ can be similarly handled. The choice of $d_0$ implies that $csp_{\Omega}\geq d_0$. We then choose a number $\alpha$, not depending on $m$, for which $\alpha (\Gamma_{1,m}+sgn(\alpha)b_0) + b_0 \leq B\Gamma_{1,m}$ and $|\alpha|\leq \max\{a_0,b_0\}$. For this $\alpha$, it holds by Claim \ref{Gamma-2m} that \begin{eqnarray*} (1+\lambda^m\alpha)csp_{\Omega_m} &=& (1+\lambda^m\alpha)(\Gamma_{1,m}+\Gamma_{2,m}) \\
&\leq& \Gamma_{1,m}+\lambda^m\alpha\Gamma_{1,m} + (1+|\alpha|)\lambda^mb_0 \\ &=& \Gamma_{1,m}+\lambda^m [\alpha(\Gamma_{1,m} + sgn(\alpha)b_0)+b_0] \\ &\leq& \Gamma_{1,m}+\lambda^m B\Gamma_{1,m} \;\;=\;\; (1+\lambda^mB)\Gamma_{1,m}.
\end{eqnarray*}
Similarly, we choose $\alpha'$ with $|\alpha'|\leq \max\{a_0,b_0\}$ such that $(1+\lambda^mB)\Gamma_{1,m}\leq (1+\lambda^m\alpha')(\Gamma_{1,m}+\Gamma_{2,m}) = (1+\lambda^m\alpha')csp_{\Omega_m}$.
For simplicity, let $\gamma = \max\{|\alpha|,|\alpha'|\}$. Note that $\delta\geq \lambda^m\gamma$. Since $\lambda^m\gamma<1$, it holds that $\log_2(1+\lambda^m\gamma)\leq \lambda^m\gamma\leq \delta$. Thus, we conclude that $1+\lambda^m\alpha'\leq 2^{\log_2(1+\lambda^m\gamma)}\leq 2^{\delta}$. Moreover, since $\log_2(1-\lambda^m\gamma)\geq -\lambda^m\gamma$, it follows that $1+\lambda^m\alpha \geq 2^{\log_2(1-\lambda^m\gamma)}\geq 2^{-\delta}$. In conclusion, it holds that $2^{-\delta} csp_{\Omega} \leq csp_{\Omega_m} \leq 2^{\delta} csp_{\Omega}$. {}From this follows $csp_{\Omega_m}>0$.
If $w$ is any oracle answer, then it must satisfy that $2^{-\delta}csp_{\Omega_m}\leq w \leq 2^{\delta}csp_{\Omega_m}$ because $csp_{\Omega_m}>0$. Therefore, we derive that $w\leq 2^{\delta}csp_{\Omega_m}\leq 2^{2\delta}csp_{\Omega}$ and $w\geq 2^{-\delta}csp_{\Omega_m}\geq 2^{-2\delta}csp_{\Omega}$. Since $\varepsilon = 2\delta$, ${\cal M}$ outputs a $2^{\varepsilon}$-approximate solution using any $2^{\delta}$-approximate solution for $(\Omega_m,1/\delta)$ as an oracle answer.
This completes the proof of Lemma \ref{constructibility}.
\let\oldbibliography\thebibliography \renewcommand{\thebibliography}[1]{
\oldbibliography{#1}
\setlength{\itemsep}{0pt} }
\end{document} | arXiv |
alpoge
When Weil asked him which work of his he thought best, he replied, "Oh, I think a few watercolors I made in Greece some years ago are pretty good."
https://www.ams.org/notices/199611/comm-steele.pdf
One professor began to laugh at me. Each time we met he would ask: "Have you proved the unsolvability of Hilbert's tenth problem? Not yet? But then you will not be able to graduate from the university."
Eventually he decided that he would have to forget about Hilbert's Tenth Problem and concentrate on other problems for his Candidate's Degree. However:-
... one day in the autumn of 1969, some of my colleagues told me, "rush to the library. In the recent issue of the Proceedings of the American Mathematical Society there is a new paper by Julia Robinson!" But I was firm in putting Hilbert's tenth problem aside. I told myself, "It is nice that Julia Robinson goes on with the problem, but I cannot waste my time on it any longer." So I did not rush to the library. But somewhere in the mathematical heavens there must be a god or goddess of mathematics who would not let me fail to read Julia Robinson's new paper. Because of my early publications on the subject, I was considered a specialist on the tenth problem, and so the paper was sent to me to review. Thus I was forced to read Julia Robinson's paper, and Hilbert's tenth problem captured me again.
https://mathshistory.st-andrews.ac.uk/Biographies/Matiyasevich/
princeton.edu/~lalpoge
Member for 10 years
MathOverflow: 789 reputation, 2 gold badges, 5 silver badges, 13 bronze badges
Theoretical Computer Science: 682 reputation, 6 silver badges, 15 bronze badges
Mathematics: 101 reputation, 2 bronze badges
History: 101 reputation, 2 bronze badges
History of Science and Mathematics: 101 reputation, 2 bronze badges
22 Number of elements of "$\mathrm{SL}_n(\mathbb{F}_p^\times)$" mod $p$
13 Most discriminants are almost squarefree
8 Can exponential sums be small on a whole interval?
context-free
turing-machines
fl.formal-languages
automata-theory
reference-request
29 How is proving a context free language to be ambiguous undecidable? Jan 17 '11
12 Post Correspondence Problem variant Jan 12 '11
11 Algorithms from the Book. Jan 16 '11
11 Constructions better than a random one. Mar 14 '11
8 What can I say about the Parikh map of a CSL? Jan 20 '11
4 The class CFL\cap co-CFL Jan 11 '11
3 Classes containing boolean closure of CFLs Jan 11 '11
0 Hardness of finding a word of length at most $k$ accepted by a nondeterministic pushdown automaton Jan 20 '11 | CommonCrawl |
Diff of /trunk/doc/user/notation.tex
revision 2923 by jfenwick, Thu Feb 4 04:05:36 2010 UTC
revision 3295 by jfenwick, Fri Oct 22 01:56:02 2010 UTC
# Line 19: Compact notation is used in equations su...
20    There are two rules which make up the convention:
22  - firstly, the rank of the tensor is represented by an index. For example, $a$ is a scalar; $b\hackscore{i}$ represents a vector; and $c\hackscore{ij}$ represents a matrix.
22  + firstly, the rank of the tensor is represented by an index. For example, $a$ is a scalar; $b_{i}$ represents a vector; and $c_{ij}$ represents a matrix.
24    Secondly, if an expression contains repeated subscripted variables, they are assumed to be summed over all possible values, from $0$ to $n$. For example, for the following expression:
28    \begin{equation}
29  - y = a\hackscore{0}b\hackscore{0} + a\hackscore{1}b\hackscore{1} + \ldots + a\hackscore{n}b\hackscore{n}
29  + y = a_{0}b_{0} + a_{1}b_{1} + \ldots + a_{n}b_{n}
30    \label{NOTATION1}
31    \end{equation}
33    can be represented as:
36  - y = \sum\hackscore{i=0}^n a\hackscore{i}b\hackscore{i}
36  + y = \sum_{i=0}^n a_{i}b_{i}
40    then in Einstein notion:
43  - y = a\hackscore{i}b\hackscore{i}
43  + y = a_{i}b_{i}
47    Another example:
50  - \nabla p = \frac{\partial p}{\partial x\hackscore{0}}\textbf{i} + \frac{\partial p}{\partial x\hackscore{1}}\textbf{j} + \frac{\partial p}{\partial x\hackscore{2}}\textbf{k}
50  + \nabla p = \frac{\partial p}{\partial x_{0}}\textbf{i} + \frac{\partial p}{\partial x_{1}}\textbf{j} + \frac{\partial p}{\partial x_{2}}\textbf{k}
54    can be expressed in Einstein notation as:
57  - \nabla p = p,\hackscore{i}
57  + \nabla p = p,_{i}
# Line 63: where the comma ',' indicates the partia...
63    For a tensor:
66  - \sigma \hackscore{ij}=
66  + \sigma _{ij}=
67    \left[ \begin{array}{ccc}
68  - \sigma\hackscore{00} & \sigma\hackscore{01} & \sigma\hackscore{02} \\
68  + \sigma_{00} & \sigma_{01} & \sigma_{02} \\
71    \end{array} \right]
76  - The $\delta\hackscore{ij}$ is the Kronecker $\delta$-symbol, which is a matrix with ones for its diagonal entries ($i = j$) and zeros for the remaining entries ($i \neq j$).
76  + The $\delta_{ij}$ is the Kronecker $\delta$-symbol, which is a matrix with ones for its diagonal entries ($i = j$) and zeros for the remaining entries ($i \neq j$).
79  - \delta \hackscore{ij} =
79  + \delta _{ij} =
80    \left \{ \begin{array}{cc}
81    1, & \mbox{if $i = j$} \\
82    0, & \mbox{if $i \neq j$} \\ | CommonCrawl |
Barnaba Tortolini
Barnaba Tortolini (19 November 1808 – 24 August 1874) was a 19th-century Italian priest and mathematician who played an early active role in advancing the scientific unification of the Italian states. He founded the first Italian scientific journal with an international presence and was a distinguished professor of mathematics at the University of Rome for 30 years. As a mathematics researcher, he had more than one hundred mathematical papers to his credit in Italian, French, and German journals.
Barnaba Tortolini
Marble bust at Pincio, Rome
Born: 19 November 1808, Rome
Died: 24 August 1874 (aged 65), Ariccia, Rome
Citizenship: Italian
Alma mater: Pontifical Gregorian University
Known for: Annali di scienze matematiche e fisiche
Scientific career
Fields: Mathematics
Institutions: Collegio Urbano de Propaganda Fide; Pontifical Gregorian University; University of Rome
Early years
Tortolini was born on 19 November 1808 in Rome and studied literature and philosophy at the Pontifical Gregorian University, especially under Don Andrea Caraffa (1789–1845), who was a mathematical physicist. He continued his mathematical and philosophical studies at the Archiginnasio Romano della Sapienza in Rome, where he obtained the degree of laurea ad honorem in 1829. Subsequently, he attended the course for engineers before studying theology at the Pontifical Roman Seminary, and he took holy orders in 1832. Don Tortolini, along with Don Michele Ambrosini, was put in charge of the Basilica of Santa Maria dei Martiri (Our Lady of the Martyrs) from 1860, and then alone after the latter's death in 1866. Edoardo Borromeo Arese, the Papal majordomo, enabled Tortolini to join the "Camerieri d'onore in abito paonazzo" (Chamberlains of Honor of the Purple) in 1861; this was a former honorary office of the Papal Court.
As his biography shows, Tortolini had a foot in both the religious and scientific worlds. As a priest and noted academic at major universities of the city, he was an official figure with stature in the Papal States. Yet his own correspondences with Enrico Betti show his concern as an editor and mathematician with careful attention to detail, concern for content and awareness of the latest foreign developments.
Teaching career
In February 1835, Tortolini began his career as professor of mathematical physics at the Pontifical Urban University, an institution run by the Pontifical Congregation for the Evangelization of Peoples (long called the "Propaganda"), directed to the promotion of the worldwide Catholic overseas missions. The College was founded by Pope Urban VIII in 1627. It came under the authority of the Congregation in 1641, and received the title "Pontifical" from Pope John XXIII in 1962.
A year later, in 1836, Tortolini was appointed to the chair of mechanics and hydraulics at the University of Rome, where, in 1837, he obtained by competition the professorship of introductory higher calculus. In the same year, he was also appointed professor of differential and integral calculus. At the Pontifical Roman Seminary, his alma mater, he assumed the professorship of mathematical physics in 1846 and began directing the publication of Propaganda Fide, founded in 1626. He held this editorship from 1846 to 1865.
Professionally, his research interests ranged over definite and elliptic integrals, the calculus of residues, and applications of various differential equations. He was cited in the works of Augustin Louis Cauchy, George Boole, Joseph Liouville, and Betti. He was honored with membership in the most eminent Italian societies and became a foreign member of the Swedish Academy of Sciences in Uppsala. As a teacher, he was applauded for over 30 years at the University of Rome. He devoted his life to raising the standards of scientific education on the peninsula at a time when Italy, as a newly formed European power in 1860, needed a cultural presence on par with France, Germany, and England.
Founding of the Annali
Although he was a productive mathematician and devoted teacher, Tortolini is mainly remembered for his role in founding and publishing the first Italian international scientific journal, published under the now defunct title Annali di scienze matematiche e fisiche from 1850 to 1857. This journal gathered and disseminated the work of the most notable scholars of the exact sciences in order to revive a love for higher educational studies in Italy and to bring the scientific activity of the peninsula to the notice of other nations.
By publishing his own research abroad, he underscored his belief in the importance of the internationalization of mathematical results and made contact with differing cultures reflecting the views and standards of rigor promoted by foreign editors. During his tenure, the journal's content skewed progressively more towards pure mathematics and away from application and topics in other sciences. Among the foreign authors who published in his journal were Arthur Cayley, Carl Gustav Jakob Jacobi, J. J. Sylvester and the Irishman William Roberts. Betti pioneered work in Galois theory of irreducible equations of prime degree at the encouragement of Tortolini.
In 1858, the journal was restructured to include an editorial board composed of Tortolini, Betti, Luigi Cremona, Francesco Brioschi and Angelo Genocchi. This was the last year of publication under the old title: broader geopolitical trends, in the opening stages of what would become the unification of Italy by 1861, called for a new, more focused journal of pure mathematics. This was also one of the reasons why Brioschi and Cremona later moved the Annali to Milan in 1867. The journal which replaced the former Annali was the Annali di matematica pura ed applicata; its first series ran from 1858 to 1865 and was published in Rome. Foreign and Italian contributors came to be represented in roughly equal numbers, and the editorial board members contributed their own papers vigorously. At Tortolini's encouragement, Betti pioneered work on the Galois theory of irreducible equations of prime degree.
Reformation of the Annali
After a hiatus in 1866, Cremona and Brioschi proposed to stop publication of the journal because of what they considered Tortolini's "mishandling" of it. The political process of unification was long and painful, and the role played by the Church did not exactly favor unification. The Annali finally moved to Milan in 1867 to distance itself from the Papal State. Cremona and Brioschi called for another new journal series enlisting the collaboration of Europe's leading mathematicians, and Tortolini once again graciously went along. The first volume of the second series ran from July 1867 to May 1868, and the series itself continued for far longer. The new Annali saw contributions from Alfred Clebsch, Elwin Christoffel, Paul Gordan, Camille Jordan, Cayley, Charles Hermite, Rudolf Sturm, Carl Neumann, Hermann Schwarz and Georg Friedrich Bernhard Riemann. Pari passu, Tortolini's influence over the content of the second series ebbed. By the end of the century, the new journal had become one of Europe's premier mathematical journals, and it enjoys continuity into the present day.
L'envoi
In the early years, in his role as sole editor of the Annali, Tortolini corresponded with all the major scientists of his day. Aside from Betti, Tortolini was one of the few Italian contemporaries to tap into foreign journals and by doing so established a rapport with the finest minds of his time including Carl Friedrich Gauss, Joseph Louis Lagrange, Cauchy, Riemann, Luigi Bianchi, Tullio Levi-Civita, Charles Hermite, Niels Abel, Peter Gustav Lejeune Dirichlet, Sir William Thomson, Augustus De Morgan, J. J. Sylvester, Gabriel Lamé, and Eugenio Beltrami. His contributions to mathematical research — over one hundred papers — have yet to be assessed on their own merits.
On 20 September 1870, after refusing to sign a loyalty oath to the King of Italy upon the invasion and occupation of Rome by the Italian troops led by Raffaele Cadorna, Tortolini lost the chair of calculus at Rome. A year earlier he had become paralyzed, and he was ultimately forced to retire from his various positions. He died in Ariccia, Rome, on 24 August 1874.
Honors
The street “Via Barnaba Tortolini” in Rome is named for Tortolini.
Footnotes
• For a list of Tortolini's mathematical works, see Vincenzo Diorio, “Interno alla vita e ai lavori di Monsignore D. Barnaba Tortolini,” Atti della Accademia Pontificia dei Nuovi Lincei 28 (1874): pp. 93–106 on pp. 100–106.
References
• Laura Martini, "The Politics of Unification: Barnaba Tortolini and the Publication of Research Mathematics in Italy, 1850–1865," in Il sogno di Galois: Scritti di storia della matematica dedicati a Laura Toti Rigatelli per il suo 60º compleanno. A cura di R. Franci, P. Pagli e A. Simi (Siena: Centro Studi della Matematica Medioevale, Università di Siena, 2003): 171–198.
• Laura Martini, “The Politics of Unification: Barnaba Tortolini and the Publication of Research Mathematics in Italy, 1850–1865”, Centro Studi della Matematica Medioevale, University of Siena, 2003.
• Laura Martini, "Political and Mathematical Unification: Algebraic Research in Italy, 1850–1914", Ph.D. Dissertation, University of Virginia, May 2006.
• “Commemorazione di Barnaba Tortolini (1808–1874)”,Annali di matematica pura ed applicata, ser. 2, 7 (May 1875–October 1876), pp. 63–64.
Authority control
International
• ISNI
• VIAF
National
• Germany
• Italy
• Vatican
Academics
• zbMATH
People
• Italian People
• Deutsche Biographie
Other
• IdRef
| Wikipedia |
\begin{document}
\title{Device-Independent Quantum Key Distribution with Local Bell Test} \author{Charles Ci Wen \surname{Lim}} \affiliation{Group of Applied Physics, University of Geneva, Switzerland.} \author{Christopher \surname{Portmann}} \affiliation{Group of Applied Physics, University of Geneva, Switzerland.} \affiliation{Institute for Theoretical Physics, ETH Zurich, Switzerland.} \author{Marco \surname{Tomamichel}} \affiliation{Institute for Theoretical Physics, ETH Zurich, Switzerland.} \affiliation{Centre for Quantum Technologies, National University of Singapore, Singapore.} \author{Renato \surname{Renner}} \affiliation{Institute for Theoretical Physics, ETH Zurich, Switzerland.} \author{Nicolas \surname{Gisin}} \affiliation{Group of Applied Physics, University of Geneva, Switzerland.}
\begin{abstract}
Device-independent quantum key distribution (DIQKD) in its current
design requires a violation of a Bell's inequality between two
parties, Alice and Bob, who are connected by a quantum
channel. However, in reality, quantum channels are lossy and current
DIQKD protocols are thus vulnerable to attacks exploiting the
detection-loophole of the Bell test. Here, we propose a novel
approach to DIQKD that overcomes this limitation. In particular, we
propose a protocol where the
Bell test is performed entirely on two causally independent devices situated in Alice's laboratory. As a result,
the detection-loophole caused by the losses in the channel is avoided. \end{abstract} \maketitle
\section{Introduction}
The security of quantum key distribution (QKD)~\cite{bb84,ekert91} relies on the fact that two honest parties, Alice and Bob, can devise tests---utilizing laws of quantum physics---to detect any attack by an eavesdropper, Eve, that would compromise the secrecy of the key strings they generate~\cite{Gisin2002}. While the theoretical principles allowing this are nowadays well understood, it turns out that realizing QKD with practical devices is rather challenging. That is, the devices must conform to very specific models, otherwise the implementation may contain loopholes allowing side-channel attacks~\cite{sidechannelrefs}.
In general, there are two broad approaches towards overcoming such implementation flaws. The first is to include all possible imperfections into the model used in the security analysis. This approach, however, is quite cumbersome and it is unclear whether any specific model includes all practically relevant imperfections. In the second approach, which is known as \emph{device-independent} QKD (DIQKD)~\cite{pironio09,McKague2009,haenggi10,Haenggi10a, masanes10}, the security is based solely on the observation of non-local statistical correlations, thus it is no longer necessary to provide any model for the devices (though a few assumptions are still required). In this respect, DIQKD appears to be the ultimate solution to guarantee security against inadvertently flawed devices and side-channel attacks.
DIQKD in its current design requires the two distant parties Alice and Bob to perform a Bell test~\cite{Bell1964} (typically the Clauser-Horne-Shimony-Holt (CHSH) test~\cite{Clauser1969}), which is applied to pairs of entangled quantum systems shared between them. In practice, these quantum systems are typically realized by photons, which are distributed via an optical fiber. Hence, due to losses during the transmission, the individual measurements carried out on Alice's and Bob's sites only succeed with bounded (and often small) probability. In standard Bell experiments, one normally accounts for these losses by introducing the \emph{fair-sampling} assumption, which asserts that the set of runs of the experiment---in which both Alice's and Bob's measurements succeeded---is representative for the set of all runs.
In the context of DIQKD, however, the fair-sampling assumption is not justified since Eve may have control over the set of detected events. More concretely, she may use her control to emulate quantum correlations based on a local deterministic model, i.e., she instructs the detector to click only if the measurement setting (chosen by the party, e.g., Alice) is compatible with the prearranged values. This problem is commonly known as the detection-loophole~\cite{Pearle1970}. In fact, for state-of-the-art DIQKD protocols, it has been shown in Ref.~\cite{Gerhardt2011} that the detection-loophole is already unavoidable when using optical fibers of about 5km length.
One possible solution to this problem is the use of heralded qubit amplifiers~\cite{HQA}, which have been proposed recently. The basic idea is to herald the arrival of an incoming quantum system without destroying the quantum information it carries. This allows Alice and Bob to choose their measurement settings only after receiving the confirmation, which is crucial for guaranteeing security. Unfortunately, realizing an efficient heralded qubit amplifier that is applicable for long-distance DIQKD is extremely challenging, although there has been progress along this direction~\cite{Kocsis2012}.
In this work, we take a different approach to circumvent the detection-loophole. We propose a protocol that combines a self-testing scheme for the Bennett and Brassard (BB84) states~\cite{bb84} with a protocol topology inspired by the ``time-reversed'' BB84 protocol~\cite{Biham96, Inamori2002,Braunstein2012,Lo12}. Crucially, the protocol only requires Bell tests carried out locally in Alice's laboratory, so that the detection probabilities are not affected by the losses in the channel connecting Alice and Bob. We show that the protocol provides device-independent security under the assumption that certain devices are causally independent (see below for a more precise specification of the assumptions).
In contrast to existing protocols for DIQKD, whose security is inferred from the monogamy of non-local correlations, the security of our protocol is proved using a recent generalization of the entropic uncertainty relation that accounts for quantum side information~\cite{berta10}. This is the key ingredient that allows us to circumvent the need to bound the non-locality between particle pairs shared by Alice and Bob (non-locality over larger distances is hard to achieve, as explained above). Instead, the uncertainty relation solely depends on the local properties of the states sent by Alice, which in turn, can be inferred from the local Bell test.
Technically, our security proof uses a relation between the local CHSH test and a variant of the entropic uncertainty relation for smooth entropies~\cite{Tomamichel2010}. The analysis applies to the (practically relevant) finite-size regime, where the secret key is generated after a finite number of channel uses. The resulting bounds on the achievable key size are comparable to the almost tight finite-size result~\cite{Tomamichel2012a} for the BB84 protocol. Furthermore, in the (commonly studied) asymptotic limit where the number of channel uses tends to infinity, and in the limiting case where the CHSH inequality is maximally violated, the performance of our protocol reaches the one of the BB84 protocol.
\section{Required Assumptions}
As mentioned above, our goal is to impose only limited and realistic assumptions on the devices used by Alice and Bob. These are as follows:
First, it is assumed that Alice and Bob's laboratories are perfectly isolated, i.e., no information leaves a party's laboratory unless this is foreseen by the protocol. Second, we assume that Alice and Bob can locally carry out classical computations using trusted devices and that they have trusted sources of randomness. Third, we assume that Alice and Bob share an authenticated classical channel. Finally, we require that the devices of Alice and Bob are causally independent, that is, there is no correlation in their behavior between different uses. This, for instance, is guaranteed if the devices have no internal memory or if their memory can be reliably erased after each use.
We remark that, in very recent work~\cite{RUV12,VV12,BCK12}, it has been shown that this last assumption can be weakened further for standard DIQKD protocols. More precisely, it is shown that the assumption of causal independence can be dropped for the repeated uses of a device within one execution of the protocol. However, the assumption of causal independence still needs to be made when the same devices are reused in a subsequent execution of the protocol, as information about previously generated keys may leak otherwise~\cite{BCK13}.
\section{Protocol Topology}
In this section we describe the basic idea and the main structure of the QKD scheme we propose. The actual protocol will then be detailed in the next section.
Our proposal is motivated by the time-reversed BB84 protocol~\cite{Biham96, Inamori2002,Braunstein2012,Lo12}. This protocol involves a third party, Charlie, whose task is to help Alice and Bob distribute their key strings. Importantly, however, no trust in this third party is required. While a deviation of Charlie from the protocol may cause abortion of the key distribution protocol, it will not compromise the secrecy of a successfully distributed key string. The time-reversed BB84 protocol consists of the following steps: first, Alice and Bob each generate a pair of qubits in the maximally entangled state $\ket{\Phi^+}=\left(\ket{00}+\ket{11}\right)/\sqrt{2}$ and send one of their qubits to Charlie. Subsequently, Charlie performs a Bell-state-measurement (BSM) on the two received qubits and broadcasts the outcome to Alice and Bob~\cite{Bennett1993}. The two remaining qubits held by Alice and Bob are now in a Bell state. Alice then applies appropriate bit and phase flips on her qubit to convert this joint state to $\ket{\Phi^+}$. Finally, Alice and Bob measure their qubits at random in one of the two BB84 bases. Note that Alice and Bob can alternatively measure the qubits they kept before Charlie performs the BSM, and Alice flips the outcome of her measurement if necessary once she has received the correction (i.e., the outcome of the BSM) from Charlie.
The security of the time-reversed BB84 protocol, as described above, depends on the correct preparation and measurement of the states by Alice and Bob. In order to turn this protocol into a device independent one, we add a CHSH test on Alice's site. Security is then established by virtue of a relation between the violation of this CHSH test and the incompatibility of the two possible measurements carried out by Alice's device (which are supposed to be in the two BB84 bases). More precisely, we bound the overlap between the basis vectors of the two measurements that Alice may choose. This is all that is needed to apply the entropic uncertainty relation~\cite{Tomamichel2010} mentioned in the introduction, which allows us to infer security without any further assumptions on Alice's and Bob's devices. We note that our modification of the time-reversed BB84 protocol is reminiscent of the idea of self-testing of devices introduced by Mayers and Yao~\cite{MY04} (see Ref.~\cite{MYS12} for the CHSH version). Our test has however a different purpose: its goal is to certify the incompatibility of Alice's local measurements, while the test of Mayers and Yao certifies that Alice and Bob share a maximally entangled state.
\begin{figure}
\caption{\textbf{Topology}. The protocol is inspired by the idea of the time-reversed BB84 protocol and involves an additional (untrusted) party, Charlie. Charlie is supposed to make an entangling measurement (ideally, a Bell-state-measurement) on quantum states sent by Alice and Bob. He outputs either a pass or fail to indicate whether the measurement was successful. If successful, he additionally outputs two bits to be used by Alice to make correcting bit-flip operations.}
\label{fig1}
\end{figure}
In order to realize the CHSH test, we use a setup with three different devices on Alice's site: two measurement devices $\mathbb{M}_{\tn{key}}$, $\mathbb{M}_{\tn{test}}$ and a source device $\mathbb{S}$ (see Fig.~\ref{fig1}). The source device generates a pair of entangled qubits and sends them to $\mathbb{M}_{\tn{key}}$ and $\mathbb{M}_{\tn{test}}$. The device $\mathbb{M}_{\tn{key}}$ has two settings $\{\mathsf{X},\mathsf{Z}\}$~\cite{footnote1} and produces a binary output after one of the settings is chosen by Alice. The device $\mathbb{M}_{\tn{test}}$ has three settings $\{\mathsf{U}, \mathsf{V},\mathsf{P}\}$. The first two produce a binary output (a measurement outcome), and the last one sends the qubit (from the device $\mathbb{S}$) to the quantum channel which connects to Charlie. Alice has therefore two modes of operation, of which one (corresponding to the settings $\mathsf{U},\mathsf{V}$) is used to carry out the CHSH test and one (corresponding to the setting $\mathsf{P}$) is chosen to communicate to Charlie. We refer to these operation modes as $\Gamma_{\tn{CHSH}}$ and $\Gamma_{\tn{QKD}}$, respectively.
Bob has two devices: a measurement device $\mathbb{M}_{\tn{key}}'$ and a source device $\mathbb{S}'$. The latter generates entangled qubits and sends one of them to the quantum channel and the other to $\mathbb{M}_{\tn{key}}'$. The device $\mathbb{M}_{\tn{key}}'$ has two settings $\{\mathsf{X},\mathsf{Z}\}$ and produces a binary output after one of the settings is chosen by Bob.
\section{Protocol Description} The protocol is parameterized by the secret key length $\ell$, the classical post-processing block size $m_x$, the error rate estimation sample size $m_z$, the local CHSH test sample size $m_j$, the tolerated CHSH value $S_{\tn{tol}}$, the tolerated channel error rate $Q_{\tn{tol}}$, the tolerated efficiency of Charlie's operation $\eta_{\tn{tol}}$, the error correction leakage $\tn{leak}_{\tn{EC}}$ and the required correctness $\varepsilon_{\tn{cor}}$.
In the following, the first three steps are repeated until the conditions in the sifting step are satisfied. \newline
\noindent\emph{1.~State preparation and distribution:} Alice selects an operation mode $h_i\in\{ \Gamma_{\tn{CHSH}},\Gamma_{\tn{QKD}}\}$ where $\Gamma_{\tn{CHSH}}$ is selected with probability $p_s=\eta_{\tn{tol}} m_j/\big(\eta_{\tn{tol}} m_j+(\sqrt{m_x}+\sqrt{m_z})^2\big)$ and $\Gamma_{\tn{QKD}}$ is selected with probability $1-p_s$~\cite{footnote2}. In the following, we describe $\Gamma_{\tn{CHSH}}$ and $\Gamma_{\tn{QKD}}$ formally for each of the runs, which we label with indices $i$.
$\Gamma_{\tn{CHSH}}$: Alice measures both halves of the bipartite state. More specifically, she chooses two bit values $u_i, v_i$ uniformly at random, where $u_i$ sets the measurement on the first half to $\mathsf{X}$ or $\mathsf{Z}$ and $v_i$ sets the measurement on the second half to $\mathsf{U}$ or $\mathsf{V}$. The outputs of each measurement are recorded in $s_i$ and $t_i$, respectively.
$\Gamma_{\tn{QKD}}$: Alice selects a measurement setting $a_i\in\{ \mathsf{X},\mathsf{Z}\}$ with probabilities $p_x=1/(1+\sqrt{(m_z/m_x)})$ and $1-p_x$, respectively~\cite{footnote2}, measures one half of the bipartite state with it and stores the measurement output in $y_i$. The other half of the bipartite state is sent to Charlie.
Similarly, Bob selects a measurement setting $b_i\in\{ \mathsf{X},\mathsf{Z}\}$ with probabilities $p_x$ and $1-p_x$, respectively, measures one half of the bipartite state with it and stores the measurement output in $y_i'$. The other half of the bipartite state is sent to Charlie. \newline
\noindent\emph{2.~Charlie's operation:} Charlie makes an entangling measurement on the quantum states sent by Alice and Bob, and if it is successful, he broadcasts $f_i=\tn{pass}$, otherwise he broadcasts $f_i=\tn{fail}$. Furthermore, if $f_i=\tn{pass}$, then Charlie communicates $g_i\in\{0,1\}^2$ to Alice and Bob. Finally, Alice uses $g_i$ to make correcting bit flip operations. \newline
\noindent\emph{3.~Sifting:} Alice and Bob announce their choices $\{h_i\}_i, \{a_i\}_i, \{b_i\}_i$ over an authenticated classical channel and identify the following sets: key generation $\mathcal{X}:= \{i:(h_i\allowbreak =\Gamma_{\tn{QKD}})\wedge (a_i=b_i=\mathsf{X})\wedge (f_i=\tn{pass})\}$, channel error rate estimation $\mathcal{Z}:=\{i:(h_i=\Gamma_{\tn{QKD}})\wedge (a_i=b_i=\mathsf{Z})\wedge (f_i=\tn{pass})\}$, and Alice's local CHSH test set, $\mathcal{J}:=\{i:h_i=\Gamma_{\tn{CHSH}}\}$.
The protocol repeats steps (1)-(3) as long as $|\mathcal{X}|<m_x$ or
$|\mathcal{Z}|<m_z$ or $|\mathcal{J}|<m_j$, where $m_x,m_z,m_j \in \mathbb{N}$. We refer to these as the sifting condition. \newline
\noindent \emph{4.~Parameter estimation:} To compute the CHSH value from $\mathcal{J}$, Alice uses the following formula, $S_{\tn{test}} \allowbreak := \allowbreak 8 \allowbreak \sum_{i\in\mathcal{J}} f(u_i,v_i,s_i,t_i)/|\mathcal{J}|-4$, where $f(u_i,v_i,s_i,t_i)=1$ if $s_i \oplus t_i= u_i \wedge v_i$, otherwise $f(u_i,v_i,s_i,t_i)=0$. Next, both Alice and Bob publicly announce the corresponding bit strings $\{y_i\}_{i\in\mathcal{Z}}$, $\{y_i^\prime\}_{i\in\mathcal{Z}}$ and compute the error rate $Q_{\tn{test}}:=\sum_{i\in\mathcal{Z}}(y_i\oplus y_i^{\prime})/|\mathcal{Z}|$. Finally, they compute the efficiency of Charlie's operation $\eta:=|\mathcal{X}|/|\tilde{\mathcal{X}}|$ where $\tilde{\mathcal{X}}:=\{i:(h_i=\Gamma_{\tn{QKD}})\wedge (a_i=b_i=\mathsf{X})\}$. If $S_{\tn{test}} < S_{\tn{tol}}$ or $Q_{\tn{tol}}<Q_{\tn{test}}$ or $\eta < \eta_{\tn{tol}}$, they abort the protocol. \newline
\noindent\emph{5.~One-way classical post-processing:} Alice and Bob choose a random subset of size $m_x$ of $\mathcal{X}$ for post-processing. An error correction protocol that leaks at most $\tn{leak}_{\tn{EC}}$-bits of information is applied, then an error verification protocol (e.g., this can be implemented with two-universal hashing) that leaks $\lceil\log_2(1/\varepsilon_{\tn{cor}})\rceil$-bits of information is applied. If the error verification fails, they abort the protocol. Finally, Alice and Bob apply privacy amplification~\cite{Bennett1995} with two-universal hashing to their bit strings to extract a secret key of length $\ell$~\cite{RennerThesis}.
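As an illustration of the parameter-estimation step (this sketch is ours and not part of the protocol specification; the recorded data are assumed to be available as plain Python lists), the quantities $S_{\tn{test}}$, $Q_{\tn{test}}$ and $\eta$ of step 4 and the abort condition can be computed as follows.
\begin{verbatim}
# Minimal sketch of step 4 (parameter estimation). Inputs are assumed to be
# the classical records produced in steps 1-3 of the protocol.

def chsh_value(test_rounds):
    # test_rounds: list of (u, v, s, t) bits from the local CHSH rounds (set J).
    # Returns S_test = 8 * (fraction of rounds with s XOR t = u AND v) - 4.
    wins = sum(1 for (u, v, s, t) in test_rounds if (s ^ t) == (u & v))
    return 8.0 * wins / len(test_rounds) - 4.0

def error_rate(y_alice, y_bob):
    # Error rate Q_test computed on the announced strings of the sample set Z.
    return sum(a ^ b for a, b in zip(y_alice, y_bob)) / len(y_alice)

def efficiency(n_pass, n_total):
    # eta = |X| / |X~|: fraction of key rounds in which Charlie announced "pass".
    return n_pass / n_total

def accept(S_test, Q_test, eta, S_tol, Q_tol, eta_tol):
    # Abort condition of step 4: continue only if all three tests succeed.
    return S_test >= S_tol and Q_test <= Q_tol and eta >= eta_tol
\end{verbatim}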
\section{Security definition}
Let us briefly recall the criteria for a generic QKD protocol to be secure. A QKD protocol either aborts or provides Alice and Bob with a pair of key strings, $S_A$ and $S_B$, respectively. If we denote by $E$ the information that the eavesdropper (Eve) gathers during the protocol execution, then the joint state of $S_A$ and $E$ can be described by a classical-quantum state, $\rho_{S_AE}= \allowbreak \sum_s\proj{s}\otimes \rho_E^s$ where $\{\rho_E^s\}_s$ are quantum systems (conditioned on $S_A$ taking values $s$) held by Eve. The QKD protocol is called $\varepsilon_{\tn{cor}}$-correct if $\Pr[S_A \not = S_B] \leq \varepsilon_{\tn{cor}}$, and $\varepsilon_{\tn{sec}}$-secret if $(1-p_\tn{abort} )\frac{1}{2}\|\rho_{S_AE}-U_{S_A} \otimes \rho_E \|_1 \leq \varepsilon_{\tn{sec}}$ where $p_\tn{abort}$ is the probability that the protocol aborts and $U_{S_A}$ is the uniform mixture of all possible values of the key string $S_A$. Accordingly, we say that the QKD protocol is $(\varepsilon_{\tn{cor}}+\varepsilon_{\tn{sec}})$-secure if it is both $\varepsilon_{\tn{cor}}$-correct and $\varepsilon_{\tn{sec}}$-secret~\cite{Tomamichel2012a,MullerQuadeRenner,RennerThesis}. Note that this security definition guarantees that the QKD protocol is universally composable~\cite{RennerThesis, MullerQuadeRenner}. That is, the pair of key strings can be safely used in any application (e.g., for encrypting messages) that requires a perfectly secure key (see~\cite{RennerThesis} for more details).
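To make the secrecy criterion concrete, the following toy numerical sketch (ours, not part of the original analysis; the key distribution and Eve's states are illustrative choices) evaluates $\frac{1}{2}\|\rho_{S_AE}-U_{S_A}\otimes\rho_E\|_1$ for a one-bit key that is partially correlated with a single qubit held by Eve.
\begin{verbatim}
import numpy as np

def dm(psi):
    # Density matrix |psi><psi| of a pure state vector.
    psi = np.asarray(psi, dtype=complex).reshape(-1, 1)
    return psi @ psi.conj().T

# Toy model: key bit s is uniform; Eve holds |0> if s = 0 and
# cos(a)|0> + sin(a)|1> if s = 1 (the angle a is an illustrative choice).
a = np.pi / 8
rho_E = {0: dm([1.0, 0.0]), 1: dm([np.cos(a), np.sin(a)])}

# Classical-quantum state rho_SE = sum_s p(s) |s><s| (x) rho_E^s (a 4x4 matrix).
rho_SE = np.zeros((4, 4), dtype=complex)
for s in (0, 1):
    proj = np.zeros((2, 2)); proj[s, s] = 1.0
    rho_SE += 0.5 * np.kron(proj, rho_E[s])

# Ideal state: uniform key decoupled from Eve's marginal state.
rho_E_marg = 0.5 * (rho_E[0] + rho_E[1])
ideal = np.kron(0.5 * np.eye(2), rho_E_marg)

# Secrecy parameter for p_abort = 0: half the trace norm of the difference.
diff = rho_SE - ideal
secrecy = 0.5 * np.sum(np.linalg.svd(diff, compute_uv=False))
print(secrecy)
\end{verbatim}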
\section{Security analysis}\label{section:SA} In the following, we present the main result and a sketch of its proof. For more details about the proof, we refer to the Appendix.
The correctness of the protocol is guaranteed by the error verification protocol which is parameterized by the required correctness $\varepsilon_{\tn{cor}}$. \newline
\textbf{Main Result.}~\emph{The protocol with parameters $(\ell, m_x,\allowbreak m_z, \allowbreak m_j, \allowbreak S_{\tn{tol}}, \allowbreak Q_{\tn{tol}}, \eta_{\tn{tol}}, \allowbreak \tn{leak}_{\tn{EC}}, \allowbreak \varepsilon_{\tn{cor}})$ is $\varepsilon_{\tn{sec}}$-secret if \begin{multline} \label{eqn1}
\ell \leq m_x \bigg( 1 - \log_2
\bigg(1+\frac{\hat{S}_{\tn{tol}}}{4\eta_{\tn{tol}}} \sqrt{8 -
\hat{S}_{\tn{tol}}^{2}} + \frac{\zeta}{\eta_{\tn{tol}}}\bigg) - \tn{h}(\hat{Q}_{\tn{tol}})
\bigg) \\ \hspace{2cm} -\tn{leak}_{\tn{EC}} -\log_2\frac{1} {\varepsilon_{\tn{cor}}\varepsilon^4},\end{multline}
for $\varepsilon \allowbreak =\varepsilon_{\tn{sec}}/9$ and $2 \leq \hat{S}_{\tn{tol}} \leq 2\sqrt{2}$, where $\tn{h}$ denotes the binary entropy function, $\hat{S}_{\tn{tol}} :=S_{\tn{tol}}- \xi$ and $ \hat{Q}_{\tn{tol}}:= Q_{\tn{tol}} + \mu$, with the statistical deviations given by
\begin{align*}
\xi &:= \sqrt{\frac{32}{m_j}\ln \frac{1}{\varepsilon } }, \\
\zeta& := \sqrt{\frac{2(m_x+m_j\eta)(m_j+1)}{m_xm_j^2}\ln \frac{1}{\varepsilon}},\\
\mu & := \sqrt{\frac{(m_x+m_z)(m_z+1)}{m_x m_z^2} \ln
\frac{1}{\varepsilon}}. \end{align*} }
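For concreteness, Eq.~(\ref{eqn1}) can be evaluated numerically. The sketch below is ours and not part of the original analysis; it assumes that the observed efficiency $\eta$ entering $\zeta$ can be replaced by $\eta_{\tn{tol}}$ and that $\tn{leak}_{\tn{EC}}\approx 1.1\,m_x\,\tn{h}(Q_{\tn{tol}})$, a purely illustrative model of the error-correction leakage.
\begin{verbatim}
import math

def h(x):
    # Binary entropy function.
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def key_length(m_x, m_z, m_j, S_tol, Q_tol, eta_tol, eps_sec, eps_cor,
               leak_factor=1.1):
    # Evaluate Eq. (1). Assumptions: eta ~ eta_tol in the zeta term and
    # leak_EC = leak_factor * m_x * h(Q_tol) (illustrative error-correction model).
    eps = eps_sec / 9.0
    xi = math.sqrt(32.0 / m_j * math.log(1.0 / eps))
    zeta = math.sqrt(2.0 * (m_x + m_j * eta_tol) * (m_j + 1)
                     / (m_x * m_j**2) * math.log(1.0 / eps))
    mu = math.sqrt((m_x + m_z) * (m_z + 1) / (m_x * m_z**2)
                   * math.log(1.0 / eps))
    S_hat, Q_hat = S_tol - xi, Q_tol + mu
    if not (2.0 <= S_hat <= 2.0 * math.sqrt(2.0)):
        return 0.0
    overlap = (1.0 + S_hat / (4.0 * eta_tol) * math.sqrt(8.0 - S_hat**2)
               + zeta / eta_tol)
    leak_EC = leak_factor * m_x * h(Q_tol)
    ell = (m_x * (1.0 - math.log2(overlap) - h(Q_hat))
           - leak_EC - math.log2(1.0 / (eps_cor * eps**4)))
    return max(ell, 0.0)

# Illustrative parameter choices only:
print(key_length(m_x=10**6, m_z=10**5, m_j=10**6, S_tol=2.8,
                 Q_tol=0.01, eta_tol=0.9, eps_sec=1e-10, eps_cor=1e-15))
\end{verbatim}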
\emph{Proof sketch.} Conditioned on passing all the tests in the parameter estimation step, let $X_A$ be the random variable of length $m_x$ that Alice gets from ${\mathcal{X}}$ and let $E'$ denote Eve's information about $X_A$ at the end of the error correction and error verification protocols.
We use the following result from~\cite{RennerThesis}. By using privacy amplification with two-universal hashing, a $\Delta$-secret key of length $\ell$ can be generated from $X_A$ with \[
\Delta \leq 6\varepsilon+2^{-\frac{1}{2}\big(\HminSmooth{3\varepsilon}{X_A|E'}-\ell\big)-1} \]
for any $\varepsilon > 0$. Here $\HminSmooth{3\varepsilon}{X_A|E'}$ denotes the smooth min-entropy~\cite{RennerThesis}. It therefore suffices to bound this entropy in terms of the tolerated values ($S_{\tn{tol}}$, $Q_{\tn{tol}}$ and $\eta_{\tn{tol}}$).
First, using chain rules for smooth entropies~\cite{RennerThesis}, we get $
\HminSmooth{3\varepsilon}{X_A|E'} \geq
\HminSmooth{3\varepsilon}{X_A|E} - \tn{leak}_{\tn{EC}}-\log_2(2/\varepsilon_{\tn{cor}})$, where $E$ denotes Eve's information after the parameter estimation step. Then, from the generalised entropic uncertainty relation~\cite{esther11}, we further get
\[\HminSmooth{3\varepsilon}{X_A|E} \geq \log_2 \frac{1}{c^*} -
\HmaxSmooth{\varepsilon}{Z_A|Z_B} - \log_2 \frac{2}{\varepsilon^2},\] where $c^*$ is the effective overlap of Alice's measurements (a function of the measurements corresponding to settings $\mathsf{Z},\mathsf{X}$ and the marginal state). Here, $Z_A$ can be seen as the bit string Alice would have obtained if she had chosen setting $\mathsf{Z}$ instead. Likewise, $Z_B$ represents the bit string obtained by Bob with setting $\mathsf{Z}$. From Ref.~\cite{Tomamichel2012a}, the smooth max-entropy of the alternative measurement is bounded by the error rate sampled on the set $\mathcal{Z}$ of size $m_z$,
$\HmaxSmooth{\varepsilon}{Z_A|Z_B} \leq m_x \tn{h}(Q_{\tn{tol}} + \mu)$, where $\mu$ is the statistical deviation due to random sampling theory, i.e., with high probability, the error rate between $Z_A$ and $Z_B$ is smaller than $Q_{\tn{tol}} + \mu$.
It remains to bound the effective overlap $c^*$ with $S_{\tn{tol}}$ and $\eta_{\tn{tol}}$. First, we note that $\tilde{\mathcal{X}}$ is independent of Charlie's outputs and $\mathcal{X}\subseteq \tilde{\mathcal{X}}$ with equality only if Charlie always outputs a pass. Furthermore, $\mathcal{X}$ is not necessarily a random subset of $\tilde{\mathcal{X}}$ as a malicious Charlie can control the content of $\mathcal{X}$ (this is discussed later). Assuming the worst case scenario, it can be shown that $ c^*\leq 1/2+\left(\tilde{c}^*-1/2\right)/\eta,$ where
$\eta=|\mathcal{X}|/|\tilde{\mathcal{X}}|$ is the efficiency of Charlie's operation and $\tilde{c}^*$ is the effective overlap of $\tilde{\mathcal{X}}$. Next, by establishing a relation between the effective overlap and the local CHSH test~\cite{esther11} (for completeness, we provide a more concise proof in Lemma~\ref{lem:chsh_c} in the Appendix) and using random sampling theory, we further obtain \[\tilde{c}^*\leq \frac{1}{2}\bigg(1+\frac{ (S_{\tn{tol}}-\xi)}{4}\sqrt{8- \big(S_{\tn{tol}}-\xi\big)^2}+\zeta\bigg).\] Here $\xi$ quantifies the statistical deviation between the expected CHSH value and the observed CHSH value, and $\zeta$ quantifies the statistical deviation between the effective overlaps of $\tilde{\mathcal{X}}$ and $\mathcal{J}$, respectively.
\begin{figure}
\caption{Trade-off between the tolerated CHSH value $S_{\tn{tol}}$ and the tolerated efficiency $\eta_{\tn{tol}}$ of Charlie's operation (plot not reproduced in this source).}
\label{Fig2}
\end{figure}
Putting everything together, we obtain the secret key length as stated by Eq.~(\ref{eqn1}). \newline
\emph{Asymptotic limit.} In the following, we consider the secret fraction defined as $f_{\tn{secr}}:=\ell/m_x$~\cite{Gisin2002}. In the asymptotic limit $N \rightarrow \infty$ and using $\tn{leak}_{\tn{EC}} \rightarrow \tn{h}(Q_{\tn{tol}})$ (corresponding to the Shannon limit), it is easy to verify that the secret fraction reaches \begin{equation} \label{eqn2}f_{\tn{secr}}=1 - \log_2 \left (1+
\frac{S_{\tn{tol}}}{4\eta_{\tn{tol}}} \sqrt{8 - S_{\tn{tol}}^{2}} \right)-2\tn{h}(Q_{\tn{tol}}).\end{equation} The expression reveals the roles of the modes of operation $\Gamma_{\tn{CHSH}}$ and $\Gamma_{\tn{QKD}}$. The first provides a bound on the quality of the devices (which is taken into account by the $\log_2$ term) and the latter, apart from generating the actual key, is a measure for the quality of the quantum channel.
\section{Discussion}
We have proposed a DIQKD protocol which provides security even if the losses in the channel connecting Alice and Bob do not allow for a detection-loophole free Bell test. Nevertheless, the security of the protocol still depends on the losses, and the protocol therefore needs to perform a check to ensure that Charlie does not output a fail too often. This dependence on the failure probability arises from the fact that a malicious Charlie may choose to output a pass only when Alice and Bob's devices behave badly. Therefore, the CHSH value calculated from Alice's CHSH sample is not a reliable estimate for the overlap of the sample used to generate the key string. However, with the CHSH test, Alice can estimate how often her devices behave badly and thus determine the minimum tolerated efficiency (or the maximum tolerated failure probability) of Charlie. This is illustrated in Fig.~\ref{Fig2}, where large values of $S_{\tn{tol}}$ are required to tolerate small values of $\eta_{\tn{tol}}$.
Taking the asymptotic limit and the maximal CHSH value, we see that the secret fraction is independent of $\eta_{\tn{tol}}$, which is not so surprising since the maximal CHSH value implies that the devices of Alice are behaving ideally all the time. Remarkably, we recover the asymptotic secret fraction for the BB84 protocol~\cite{SP00}.
From a practical point of view, the possibility to consider very small values of $\eta_{\tn{tol}}$ is certainly appealing, since it suggests that the distance between Alice and Bob can be made very large. A quick calculation using the best experimental values~\cite{ExpMDIQKD} (i.e., $\eta_{\tn{tol}}\approx t/2$ and $S_{\tn{tol}} \approx 2.81$ where $t$ is the channel transmission) shows that the secret fraction is positive for $t > 0.45$. This translates to about a $17$km optical fiber between Alice and Bob. Accordingly, to achieve larger distances, we would need a local CHSH test that generates violations larger than those achieved by current experiments.
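The numbers quoted above can be reproduced with the following short sketch (ours, not part of the original analysis). It evaluates the asymptotic secret fraction of Eq.~(\ref{eqn2}), checks that the dependence on $\eta_{\tn{tol}}$ disappears at the maximal CHSH value, and converts the threshold transmission into a fiber length assuming a standard attenuation of $0.2$~dB/km (an assumption not stated in the text).
\begin{verbatim}
import math

def h(x):
    return 0.0 if x <= 0.0 or x >= 1.0 else -x*math.log2(x) - (1-x)*math.log2(1-x)

def secret_fraction(S_tol, Q_tol, eta_tol):
    # Asymptotic secret fraction, Eq. (2).
    return (1.0 - math.log2(1.0 + S_tol / (4.0*eta_tol) * math.sqrt(8.0 - S_tol**2))
            - 2.0 * h(Q_tol))

# At S_tol = 2*sqrt(2) the square-root term vanishes, so the fraction equals
# 1 - 2 h(Q_tol) independently of eta_tol (the BB84 asymptotic rate):
for eta in (0.1, 0.5, 1.0):
    print(secret_fraction(2.0 * math.sqrt(2.0), 0.05, eta))

# Distance estimate with eta_tol ~ t/2 and S_tol ~ 2.81 (values quoted above):
t = 1.0
while secret_fraction(2.81, 0.0, t / 2.0) > 0.0 and t > 0.01:
    t -= 0.001
print("threshold transmission ~", round(t, 2))                    # ~0.45
print("fiber length ~", round(-10.0*math.log10(t)/0.2, 1), "km")  # ~17 km
\end{verbatim}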
\section{Conclusion}
In summary, we provide an alternative approach towards DIQKD, where the Bell test is not carried out between Alice and Bob but rather in Alice's laboratory. On a conceptual level, our approach departs from the general belief that the observation of a Bell violation between Alice and Bob is necessary for DIQKD. On the practical side, it offers the possibility to replace the extremely challenging task of implementing a long distance detection-loophole free Bell test with a less challenging task, i.e., implementing a local detection-loophole free Bell test. In fact, recently, there has been very encouraging progress towards the implementation of a local detection-loophole free CHSH test~\cite{Giustina2013}. In view of that, we believe an experimental demonstration of DIQKD with local Bell tests is plausible in the near future. \newline
\textbf{Acknowledgments.} We thank Jean-Daniel Bancal, Marcos Curty, Esther H\"{a}nggi, Stefano Pironio, Nicolas Sangouard, Valerio Scarani and Hugo Zbinden for helpful discussions. We acknowledge support from the National Centre of Competence in Research QSIT, the Swiss NanoTera project QCRYPT, the Swiss National Science Foundation SNSF (grant No. 200020-135048), the CHIST-ERA project DIQIP, the FP7 Marie-Curie IAAP QCERT project and the European Research Council (grant No. 258932 and No. 227656).
\appendix \setcounter{equation}{0} \setcounter{thm}{0} \section*{Appendix: D\lowercase{etails of Security analysis}} We present the proof for the main result given in the main text. First, we discuss the assumptions and then introduce the necessary technical lemmas. Second, we establish a relation between the local CHSH test and a generalized version of the smooth entropic uncertainty relation (Lemma \ref{lem:chsh_c}). Third, we provide the required statistical statements for estimating certain quantities of the bit strings of Alice and Bob. Finally, we state our main result (Theorem \ref{securitythm}), which is slightly more general than the main result presented above.
\subsection{Notations}
We assume that all Hilbert spaces denoted by $\ensuremath{\mathcal{H}}$ are finite-dimensional. For composite systems, we define the tensor product of $\ensuremath{\mathcal{H}}_A$ and $\ensuremath{\mathcal{H}}_B$ as $\ensuremath{\mathcal{H}}_{AB}:=\ensuremath{\mathcal{H}}_A \otimes \ensuremath{\mathcal{H}}_B$. We denote $\mathcal{P}(\ensuremath{\mathcal{H}})$ as the set of positive semi-definite operators on $\ensuremath{\mathcal{H}}$ and $\mathcal{S}(\ensuremath{\mathcal{H}})$ as the set of normalised states on $\ensuremath{\mathcal{H}}$, i.e., $\mathcal{S}(\ensuremath{\mathcal{H}})=\{\rho\in\mathcal{P}(\ensuremath{\mathcal{H}}):\trace{\rho}=1\}$. Furthermore, for a composite state $\rho_{AB}\in \mathcal{S}(\ensuremath{\mathcal{H}}_{AB})$, the reduced states of system A and system B are given by $\rho_A=\trace[B]{\rho_{AB}}$ and $\rho_B=\trace[A]{\rho_{AB}}$, respectively. A positive operator valued measure (POVM) is denoted by $\mathds{M}:=\{M_x\}_x$ where $\sum_xM_x=\mathds{1}$. Any POVM may be viewed as a projective measurement by introducing an ancillary system; thus, for any POVM with binary outcomes, we may write it as an observable $O=\sum_{x\in\{0,1\}} (-1)^x M_x$, such that $\sum_{x\in\{0,1\}} M_x=\mathds{1}$. We also use $\bar{x}:=(x_1,x_2,\ldots,x_n)$ to represent the concatenation of elements and $[n]$ to denote $\{1,2,\ldots,n\}$. The binary entropy function is denoted by $\tn{h}(x):=-x\log_2x-(1-x)\log_2(1-x)$.
\subsection{Basic assumptions on Alice's and Bob's abilities}\label{Ass} Prior to stating the security proof, it is instructive to elucidate the basic assumptions necessary for the security proof. In particular, the assumptions are detailed in the following:
\begin{enumerate}
\item \textbf{Trusted local sources of randomness.} Alice (also Bob) has access to a trusted source that produces a random and secure bit value upon each use. Furthermore, we assume the source is unlimited, that is, Alice can use it as much as she wants, however the protocol only requires an amount of randomness linear in the number of quantum states generated. \label{A1}
\item \textbf{An authenticated but otherwise insecure classical channel.}\label{A2} Generally, this assumption is satisfied if Alice and Bob share an initial short secret key~\cite{WC81, Stinson94}. Note that the security analysis of such authentication schemes was recently extended to the universally composable framework~\cite{RennerThesis,MullerQuadeRenner} in Ref~\cite{Portmann2012}, which allows one to compose the error of the authentication scheme with the errors of the protocol, giving an overall error on the security.
\item \textbf{No information leaves the laboratories unless the protocol allows it.}\label{A3} This assumption is paramount to any cryptographic protocol. It states that information generated by the legitimate users is appropriately controlled. More concretely, we assume the following: \begin{enumerate}
\item \emph{Communication lines.---} The only two communication lines leaving the laboratory are the classical and the quantum channel. Furthermore, the classical channel is controlled, i.e., only the information required by the protocol is sent.
\item \emph{Communication between devices.---} There should be no unauthorized communication between any devices in the laboratory, in particular from the measurement devices to the source device. \end{enumerate}
\item \textbf{Trusted classical operations.} Classical operations like authentication, error correction, error verification, privacy amplification, etc must be trusted, i.e., we know that the operations have ideal functionality and are independent of the adversary.\label{A4}
\item \textbf{Measurement and source devices are causally independent.} \label{A5} This means that each use of a device is independent of the previous uses. For example, for $N$ uses of a source device and a measurement that produces a bit string $\bar{x}:=(x_1,x_2,\ldots,x_N)$, we have \[\rho^N=\bigotimes_{i=1}^N\rho^i,\quad M_{\bar{x}} = \bigotimes_i M^i_{x_i}\] where $ M_{\bar{x}}$ is the POVM element corresponding to the outcome $\bar{x}$. \end{enumerate}
\subsection{Technical lemmas}
\begin{lem}[Jordan's lemma~\cite{Masanes2006, pironio09}]\label{lem:jordan} Let $O$ and $O^{\prime}$ be observables with eigenvalues $\pm 1$ on Hilbert space $\ensuremath{\mathcal{H}}$. Then there exists a partition of the Hilbert space, $\ensuremath{\mathcal{H}}=\bigoplus_i\ensuremath{\mathcal{H}}_i$, such that \[ O=\bigoplus_{i}O_i \quad \tn{and} \quad O^{\prime}=\bigoplus_iO^{\prime}_i \] where $\ensuremath{\mathcal{H}}_i$ satisfies $\tn{dim}(\ensuremath{\mathcal{H}}_i) \leq 2$ for all $i$. \end{lem}
\begin{lem}[Chernoff-Hoeffding~\cite{hoeffding63}] \label{lem:chernoff} Let $X := \frac{1}{n} \sum_i X_i$ be the average of $n$ \emph{independent} random variables $X_1,X_2,\dotsc,X_n$ with values in $[0,1]$, and let $\mu := \E[X] = \frac{1}{n} \sum_i \E[X_i]$ denote the expected value of $X$. Then, for any $\delta > 0$, \[\Pr \left[X - \mu \geq \delta \right] \leq \exp(-2\delta^2n).\] \end{lem}
\begin{lem}[Serfling~\cite{serfling74}] \label{lem:serfling} Let $\{x_1,\dotsc,x_n\}$ be a list of (not necessarily distinct) values in $[a,b]$ with average $\mu := \frac{1}{n}\sum_i x_i$. Let the random variables $X_1,X_2,\dotsc,X_k$ be obtained by sampling $k$ random entries from this list without replacement. Then, for any $\delta > 0$, the random variable $X := \frac{1}{k} \sum_i X_i$ satisfies \[\Pr \left[X - \mu \geq \delta \right] \leq \exp \left(\frac{-2\delta^2kn}{(n-k+1)(b-a)} \right).\] \end{lem}
\begin{cor} \label{Cor:serfling}Let $\mathcal{X}:=\{x_1,\dotsc,x_n\}$ be a list of (not necessarily distinct) values in $[0,1]$ with the average $\mu_{\mathcal{X}}:=\frac{1}{n}\sum_{i=1}^n x_i$. Let $\mathcal{T}$ of size $t$ be a random subset of $\mathcal{X}$ with the average $\mu_{\mathcal{T}}:=\frac{1}{t}\sum_{i\in \mathcal{T}} x_i$. Then for any $\varepsilon > 0$, the set $\mathcal{K}=\mathcal{X} \setminus \mathcal{T}$ with average $\mu_{\mathcal{K}}=\frac{1}{n-t}\sum_{i\in\mathcal{K}} x_i$ satisfies \[\Pr\left[\mu_{\mathcal{K}}-\mu_{\mathcal{T}}\geq \sqrt{\frac{n(t+1)}{2(n-t)t^2}\ln\frac{1}{\varepsilon}}\right] \leq \varepsilon \] \end{cor} \begin{proof} Since $\mathcal{K}$ is itself a random sample of $\mathcal{X}$ of size $n-t$, Lemma \ref{lem:serfling} gives \[\Pr \left[\mu_{\mathcal{K}} - \mu_{\mathcal{X}} \geq \delta \right] \leq \exp \left(\frac{-2\delta^2(n-t)n}{(t+1)} \right)=\varepsilon.\] Using $\mu_{\mathcal{X}}=\frac{t}{n}\mu_{\mathcal{T}}+\frac{n-t}{n}\mu_{\mathcal{K}}$ we finish the proof. \end{proof}
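As a quick numerical sanity check of Corollary~\ref{Cor:serfling} (our illustration; the population below is an arbitrary choice), one can repeatedly sample random subsets of a fixed list and verify that the stated deviation is exceeded with frequency at most $\varepsilon$.
\begin{verbatim}
import math
import random

def serfling_check(n=2000, t=400, eps=0.05, trials=5000, seed=1):
    # Empirical check of the corollary above on an arbitrary 0/1 population.
    rng = random.Random(seed)
    population = [1] * (n // 3) + [0] * (n - n // 3)
    delta = math.sqrt(n * (t + 1) / (2.0 * (n - t) * t * t) * math.log(1.0 / eps))
    bad = 0
    for _ in range(trials):
        idx = set(rng.sample(range(n), t))      # random subset T of size t
        mu_T = sum(population[i] for i in idx) / t
        mu_K = sum(population[i] for i in range(n) if i not in idx) / (n - t)
        if mu_K - mu_T >= delta:
            bad += 1
    return bad / trials, eps

print(serfling_check())   # the observed frequency should not exceed eps
\end{verbatim}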
The main ingredient is a fine-grained entropic uncertainty relation (see~\cite[Corollary~7.3]{Tomamichel2012thesis} and~\cite{esther11}).
\begin{lem} \label{thm:ucr} Let $\varepsilon >0, \bar{\varepsilon} \geq 0$ and $\rho \in \sno{ABC}$. Moreover let $\mathds{M} = \{M_x\}$, $\mathds{N} = \{N_z\}$ be POVMs on $\ensuremath{\mathcal{H}}_A$, and $\mathds{K} = \{P_k\}$ a projective measurement on $\ensuremath{\mathcal{H}}_A$ that commutes with both $\mathds{M}$ and $\mathds{N}$. Then the post-measurement states $\rho_{XB}= \sum_{x} \proj{x} \otimes \trace[AC]{\sqrt{M_x}
\rho_{ABC}\sqrt{M_x}}$, $
\rho_{ZC} = \sum_{z} \proj{z} \otimes\trace[AB]{\sqrt{N_z}
\rho_{ABC}\sqrt{N_z}} $ satisfy \begin{multline}
\label{eq:ucr}
\HminSmooth[\rho]{2\varepsilon+\bar{\varepsilon}}{X|B} +
\HmaxSmooth[\rho]{\varepsilon}{Z|C} \\ \geq \log_2
\frac{1}{c^*(\rho_A,\mathds{M},\mathds{N})} - \log_2 \frac{2}{\bar{\varepsilon}^2}, \end{multline} where the effective overlap is defined as \begin{multline} \label{eq:effective} c^*(\rho_A,\mathds{M},\mathds{N}) \\ := \min_{\mathds{K}} \left\{\sum_k \trace{P_k \rho} \max_x \inftynorm{P_k\sum_z
N_{z}M_{x}N_{z}} \right\} \end{multline} \end{lem} Note that (\ref{eq:ucr}) is a statement about the entropies of the post-measurement states $\rho_{XB}$ and $\rho_{ZC}$, thus it also holds for any measurements that lead to the same post-measurement states. Accordingly, one may also consider the projective purifications $\mathds{M}'$ and $\mathds{N}'$ of $\mathds{M}$ and $\mathds{N}$, applied to $\rho_A \otimes \proj{\phi}$, where $\ket{\phi}$ is a pure state of an ancillary system. Since both measurement setups $\{\rho, \mathds{M},\mathds{N}\}$ and $\{\rho_A \otimes \proj{\phi},\mathds{M}', \mathds{N}'\}$ give the same post-measurement states, the R.H.S of (\ref{eq:ucr}) holds for both $c^*(\rho_A,\mathds{M},\mathds{N})$ and $c^*(\rho_A \otimes \proj{\phi},\mathds{M}',\mathds{N}')$. We can thus restrict our considerations to projective measurements.
In the protocol considered, Alice performs independent binary measurements --- $\mathds{M}_i = \{M^i_{x}\}_{x \in \{0,1\}}$ and $\mathds{N}_i = \{N^i_{z}\}_{z \in \{0,1\}}$ --- on each subsystem $i$. We can reduce \eqref{eq:effective} to operations on each subsystem, if we choose $\mathds{K} = \{P_{\bar{k}}\}$ to also be in product form, i.e., $P_{\bar{k}} = \bigotimes_i P^i_{k_i}$, where $\bar{k}$ is a string of (not necessarily binary) letters $k_i \in \mathcal{K}$. Then plugging this, $M_{\bar{x}} = \bigotimes_i M^i_{x_i}$ and $N_{\bar{z}} = \bigotimes_i N^i_{z_i}$ in the norm from \eqref{eq:effective}, we get \begin{multline} \inftynorm{P_{\bar{k}}\sum_{\bar{z}}
N_{\bar{z}}M_{\bar{x}}N_{\bar{z}}} \\ = \inftynorm{\sum_{z_1,z_2,\dots} \bigotimes_i P_{k_i} N^i_{z_i}M^i_{x_i}N^i_{z_i} } \\ = \prod_i \inftynorm{P_{k_i}\sum_{z_i} N^i_{z_i}M^i_{x_i}N^i_{z_i} }. \end{multline}
Putting this in \eqref{eq:effective} with $\rho=\bigotimes_i\rho^i$, $p^i_k:=\tr(P_k^i \rho^i)$, and dropping the subscript $i$ when possible, we obtain, \begin{align} & c^*(\rho_A,\mathds{M},\mathds{N}) \notag \leq \sum_{k_1,k_2,\dots} \prod_i p^i_{k_i} \max_{x} \inftynorm{P_{k_i}\sum_{z} N^i_{z}M^i_{x}N^i_{z}} \notag \\
& \qquad = \prod_i \sum_k p^i_k \max_{x} \inftynorm{P^i_k\sum_{z} N^i_{z}M^i_{x}N^i_{z}}=: \prod_i c^{*,i}. \label{eq:effective.subsystem} \end{align}
In the following we will refer to \begin{equation} \label{eq:effective.decomposition} c^{i}_k
:= \max_{x} \inftynorm{P^i_k\sum_{z}
N^i_{z}M^i_{x}N^i_{z}} \end{equation} as the overlap of the measurements $\{M^i_{x}\}_{x}$ and $\{N^i_{z}\}_{z}$.
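For a single qubit with the trivial partition $P_k=\mathds{1}$, the overlap in Eq.~(\ref{eq:effective.decomposition}) can be evaluated directly. The sketch below (ours, not part of the original analysis) confirms that for projective measurements in the $\mathsf{Z}$ basis and in a basis rotated by a Bloch angle $\theta$, the overlap equals $(1+\cos\theta)/2$, i.e., $1/2$ for the pair $\mathsf{X},\mathsf{Z}$.
\begin{verbatim}
import numpy as np

def projectors(theta):
    # Rank-1 projectors of a qubit observable whose Bloch vector lies in the
    # x-z plane at angle theta from the z axis.
    v = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
    w = np.array([-np.sin(theta / 2.0), np.cos(theta / 2.0)])
    return [np.outer(v, v), np.outer(w, w)]

def overlap(M, N):
    # Single-qubit overlap with P_k = identity:
    # max over x of the operator norm of sum_z N_z M_x N_z.
    return max(np.linalg.norm(sum(Nz @ Mx @ Nz for Nz in N), ord=2) for Mx in M)

Z = projectors(0.0)   # measurement in the Z basis
for theta in (np.pi / 2, np.pi / 3, np.pi / 6):
    print(theta, overlap(projectors(theta), Z), (1.0 + np.cos(theta)) / 2.0)
\end{verbatim}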
\subsection{An upper bound on the effective overlap with the CHSH value}
In this section, we first introduce the notion of CHSH operator~\cite{BelloperatorRefs} and then prove the relation between the CHSH test and the effective overlap (\ref{eq:effective.decomposition}).
In the CHSH test, two space-like separated systems share a bipartite state $\rho$ and each system has two measurements. More specifically, system A has POVMs $\{M_0^0,M_1^0\}$ and $\{M_0^1,M_1^1\}$ and system T has POVMs $\{T_0^0,T_1^0\}$ and $\{T_0^1,T_1^1\}$. Since for any POVM there is a (unitary and) projective measurement on a larger Hilbert space that has the same statistics, we can restrict our considerations to projective measurements. Then, we may write the POVMs as observables with $\pm1$ outcomes, i.e., at the site of the first system, the two observables are $O_A^0:=\sum_{s=0}^1(-1)^sM_s^0$ and $O_A^1:=\sum_{s=0}^1(-1)^sM_s^1$. Furthermore, the measurements are chosen uniformly at random. As such, the CHSH value is given by $S(\rho,\beta):=\mathrm{ Tr }(\rho\beta)$ where the CHSH operator is defined as \begin{align}\label{CHSH:op}
\beta(O_A^0,O_A^1,O_T^0,O_T^1) &:=\sum_{u,v}(-1)^{u \wedge v}O_A^u \otimes O_T^v \end{align} where $u,v$ and $s,t$ are the inputs and outputs, respectively. The maximization of $S(\rho,\beta)$ over the set of density operators for a fixed $\beta$ is denoted by $S_{\tn{max}}(\beta)$. Moreover, the CHSH operator can be decomposed into a direct sum of two-qubit subspaces via Lemma \ref{lem:jordan}. Mathematically, we may write $O_A^0=\sum_kP_kO_A^0P_k$ and $O_A^1=\sum_kP_kO_A^1P_k$ where $\{P_k\}_k$ is a set of projectors such that $\tn{dim}(P_k) = 2 ~\forall~k$. Note that in Lemma \ref{lem:jordan}, one may select a partition of the Hilbert space such that each block partition has dimension two. This allows one to decompose the general CHSH operator into direct sums of two-qubit CHSH operators. Likewise, for the measurements on system T, $O_T^0=\sum_rQ_rO_T^0Q_r$ and $O_T^1=\sum_rQ_rO_T^1Q_r$. For all $k$, $P_kO_A^0P_k$ and $P_kO_A^1P_k$ can be written in terms of Pauli operators,
\begin{align}
P_kO_A^0P_k=\vec{m}_k\cdot \Gamma_k \quad \tn{and}\quad P_kO_A^1P_k=\vec{n}_k\cdot \Gamma_k, \label{CHSH:block}
\end{align} where $\vec{m}_k$ and $\vec{n}_k$ are unit vectors in $\mathbb{R}_k^3$ and $\Gamma_k$ is the Pauli vector. Combining \eqref{CHSH:op} and \eqref{CHSH:block} yields \begin{equation} \beta=\bigoplus_{k,r}\beta_{k,r} \quad \tn{where}\quad \beta_{k,r} \in \mathds{C}^2_k\otimes\mathds{C}^2_r \label{CHSH:twoqubits} \end{equation} and it can be verified that \begin{align}\label{decom:CHSH} S(\rho,\beta)&=\sum_{k,r}\lambda_{k,r}S_{k,r} \end{align} where \begin{align} \label{CHSH:lambda_kr} &\lambda_{k,r}:=\mathrm{ Tr }(P_k\otimes Q_r \rho) \\ \label{CHSH:w_kr} &S_{k,r}:=\mathrm{ Tr }(\rho_{k,r}\beta_{k,r}) \end{align} and $\rho_{k,r}:=(P_k\otimes Q_r)\rho(P_k\otimes Q_r)/\lambda_{k,r}$ is the normalized state in the block $(k,r)$. Whenever the context is clear, we write $S=S(\rho,\beta)$ and $S_{\tn{max}}=S_{\tn{max}}(\beta)$.
In the following analysis, we consider only one subsystem, the superscript $i$ is omitted, i.e., we use $c^*=\sum_kp_kc_k$ instead. \newline
\begin{lem}\label{lem:chsh_c} Let $\{O_A^x\}_{x \in \{0,1\}} $ and $\{O_T^y\}_{y \in \{0,1\}} $ be observables with eigenvalues $\pm 1$ on $\ensuremath{\mathcal{H}}_A$ and $\ensuremath{\mathcal{H}}_T$ respectively and let $\beta=\sum_{x,y}(-1)^{x \wedge y}O_A^x \otimes O_T^y$ be the CHSH operator. Then for any $\rho\in\no{AT}$, the effective overlap $c^*$ is related to the CHSH value $S=\mathrm{ Tr }(\rho \beta)$ by \begin{equation} c^* \leq \frac{1}{2} + \frac{S}{8}\sqrt{8-S^2} \end{equation} \end {lem} \begin{proof} Using \eqref{CHSH:block}, let the relative angle between $\vec{m}_k$ and $\vec{n}_k$ be $\theta_k \in [0,\pi/2]$ for all $k$, i.e., $\vec{m}_k \cdot \vec{n}_k=\cos(\theta_k)$. Furthermore, we can express $\vec{m}_k \cdot \Gamma_k$ and $\vec{n}_k\cdot \Gamma_k$ in terms of rank-1 projectors. Formally, we have $\vec{m}_k\cdot \Gamma_k=\ket{\vec{m}_{k}}\!\bra{\vec{m}_{k}}-\ket{-\vec{m}_{k}}\!\bra{-\vec{m}_{k}}$ and similarly for $\vec{n}_k\cdot \Gamma_k$. Plugging these into (\ref{eq:effective.decomposition}), \begin{equation} \label{lem:c_k_theta}
c_k=\underset{i,j\in\{0,1\}}{\tn{max}} |\bra{(-1)^i\vec{m}_k}(-1)^j\vec{n}_k\rangle|^2 =\frac{1+\cos{\theta_k}}{2} \end{equation} Next, we want to relate $c_k$ to the CHSH value. Using the result of Seevinck and Uffink~\cite{seevinck2007}, for all $r$, (\ref{CHSH:w_kr}) satisfies \begin{align} S_{k,r}\leq2\sqrt{1+\sin(\theta_k)\sin(\theta_r)} \label{CHSH:Seev} \end{align} where $\sin(\theta_k)$ and $\sin(\theta_r)$ quantify the commutativity of Alice's $k$th and system T's $r$th measurements, respectively. From (\ref{lem:c_k_theta}) and (\ref{CHSH:Seev}) we obtain for all $r$, \[c_k \leq \frac{1}{2}+\frac{S_{k,r}}{8}\sqrt{8-S_{k,r}^2} ,\] where we use the fact that the right hand side is a monotonic decreasing function. Finally, we get \[c^*=\sum_k p_k c_k=\sum_{k,r}\lambda_{k,r}c_k \leq \frac{1}{2}+ \frac{S}{8}\sqrt{8-S^2},\] and the inequality is given by the Jensen's inequality and (\ref{decom:CHSH}).
\end{proof}
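Lemma~\ref{lem:chsh_c} can also be checked numerically (our illustration, not part of the original analysis): sweeping the relative angle $\theta$ between Alice's two block observables, with $c=(1+\cos\theta)/2$ and the maximal block CHSH value $S=2\sqrt{1+\sin^2\theta}$ attainable according to Eq.~(\ref{CHSH:Seev}) when system T uses the same relative angle, the claimed bound is never violated.
\begin{verbatim}
import numpy as np

theta = np.linspace(0.0, np.pi / 2.0, 1001)
c = (1.0 + np.cos(theta)) / 2.0              # overlap of Alice's block measurements
S = 2.0 * np.sqrt(1.0 + np.sin(theta)**2)    # maximal block CHSH value
bound = 0.5 + S / 8.0 * np.sqrt(8.0 - S**2)  # right-hand side of the lemma

print(bool(np.all(c <= bound + 1e-12)))      # True: the bound holds on the sweep
print(float(np.max(c - bound)))              # <= 0; equality at theta = 0 and pi/2
\end{verbatim}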
\subsection{Statistics and efficiency of Charlie's operation}
We recall that in the protocol description, after the sifting step, Alice and Bob identify the sets $\mathcal{X}, \mathcal{Z}$ and $\mathcal{J}$. Also, they have $\tilde{\mathcal{X}}$, where $|\tilde{\mathcal{X}}|$ corresponds to the total number of times Alice chooses the sub-protocol $\Gamma_{\tn{QKD}}$ and both Alice and Bob choose setting $\mathsf{X}$.
Part of the goal is to estimate the average overlap of the set $\mathcal{X}$ with the observed CHSH values (evaluated on the set $\mathcal{J}$) and the efficiency of Charlie's operation $\eta$. Note that $\eta=|\mathcal{X}|/|\tilde{\mathcal{X}}|$. To do that, we need the following two lemmas: the first (Lemma \ref{lem:statistics.cstar}) gives a bound on the average effective overlap of $\mathcal{X}$ in terms of the average effective overlap of $\tilde{\mathcal{X}}$ and the efficiency of Charlie's operation $\eta$, and the second (Lemma \ref{lem:statistics.chsh}) gives a bound on the probability that the observed CHSH value is larger than the expected CHSH value.
\begin{lem} \label{lem:statistics.cstar}
Let $c^*_\mathcal{X}$ and $c^*_{\tilde{\mathcal{X}}}$ be the average effective overlaps of $\mathcal{X}$ and $\tilde{\mathcal{X}}$, respectively, and let $\eta:=|\mathcal{X}|/|\tilde{\mathcal{X}}|$. Then \[ c^*_\mathcal{X} \leq \frac{1}{2}+ \frac{1}{\eta}\left(c^*_{\tilde{\mathcal{X}}}-\frac{1}{2} \right) \] \end{lem} \begin{proof}
First, we note that $\mathcal{X}\subseteq \tilde{\mathcal{X}}$ with equality only if Charlie always outputs a pass (or has perfect efficiency). Next, we consider $\{c^{*,i}\}_{i\in \tilde{\mathcal{X}}}$ in decreasing order, that is, $c^{*,1}\geq c^{*,2}\geq \cdots \geq c^{*,|\tilde{\mathcal{X}}|}$. Accordingly, the average overlap of $\tilde{\mathcal{X}}$ can be written as
\[c_{\tilde{\mathcal{X}}}^*=\frac{|\mathcal{X}|}{|\tilde{\mathcal{X}}|}\sum_{i=1}^{|\mathcal{X}|}\frac{c^{*,i}}{|\mathcal{X}|}+\sum_{j=|\mathcal{X}|+1}^{|\tilde{\mathcal{X}}|}\frac{c^{*,j}}{|\tilde{\mathcal{X}}|} \geq \frac{|\mathcal{X}|}{|\tilde{\mathcal{X}}|}\left(c^*_{\mathcal{X}}-\frac{1}{2}\right)+ \frac{1}{2} \]
where we consider that $\mathcal{X}$ collects the large effective overlaps, and the inequality is given by $c^{*,i}\geq 1/2$. Finally, let $\eta=|\mathcal{X}|/|\tilde{\mathcal{X}}|$. \end{proof}
\begin{lem} \label{lem:statistics.chsh} Let $S_{\mathcal{J}}$ be the
average CHSH value on $m_j$ independent systems, and
$S_{\tn{test}}$ the observed CHSH on these systems. Then \[
\Pr\left[S_{\tn{test}}-S_{\mathcal{J}} \geq \sqrt{\frac{32}{m_j}\ln
\frac{1}{\varepsilon}} \right] \leq \varepsilon.\] \end{lem}
\begin{proof} We define the random variable \[Y_i := \begin{cases} 1 & \tn{if } s_i \oplus t_i = u_i \wedge v_i, \\
0 & \tn{otherwise},\end{cases}\] where $u_i,v_i,s_i,t_i$ are the inputs and outputs, respectively, of the measurements on system $i$, and $Y_\mathcal{J} := \frac{1}{m_j} \sum_{i \in \mathcal{J}}Y_i$. It is easy to see that $S_i = 8\E[Y_i]-4$, $S_{\mathcal{J}} = 8\E[Y_\mathcal{J}]-4$ and $S_{\tn{test}} = 8Y_\mathcal{J}-4$. The proof is then immediate from Lemma \ref{lem:chernoff}. \end{proof}
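The estimator appearing in Lemma~\ref{lem:statistics.chsh} can be illustrated with a small simulation (ours; the winning probability below assumes ideal devices saturating the Tsirelson bound): each CHSH round wins with probability $(2+\sqrt{2})/4$, and $S_{\tn{test}}=8Y_{\mathcal{J}}-4$ concentrates around $2\sqrt{2}$ within the deviation $\xi$.
\begin{verbatim}
import math
import random

def simulate_chsh(m_j, eps, p_win=(2.0 + math.sqrt(2.0)) / 4.0, seed=7):
    # Simulate m_j i.i.d. CHSH rounds with winning probability p_win
    # (assumption: ideal devices saturating Tsirelson's bound).
    rng = random.Random(seed)
    wins = sum(rng.random() < p_win for _ in range(m_j))
    S_test = 8.0 * wins / m_j - 4.0
    xi = math.sqrt(32.0 / m_j * math.log(1.0 / eps))  # deviation from the lemma
    return S_test, xi

S_test, xi = simulate_chsh(m_j=10**6, eps=1e-10)
print(S_test, 2.0 * math.sqrt(2.0), xi)   # S_test ~ 2.828, xi ~ 0.027
\end{verbatim}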
\subsection{Secrecy analysis}
With the relevant results in hand, we are ready to prove our main result, which roughly follows the same line of argument as Ref.~\cite{Tomamichel2012a}. The main differences are the use of a more general smooth entropic uncertainty relation (Lemma \ref{thm:ucr}) to bound the error on the secrecy, and the CHSH test to bound the effective overlap of the measurement operators and states used by the uncertainty relation (Lemma \ref{lem:chsh_c}). Since the players can only sample the CHSH violation, we use Lemma \ref{lem:statistics.cstar} to bound the distance between this estimate and the expected effective overlap of the key set. The correctness of the protocol is evaluated in exactly the same way as in Ref.~\cite{Tomamichel2012a}, so we refer to that work for the corresponding bounds and theorems. We only prove the secrecy of the protocol here.
Contrary to most QKD protocols, the protocol adopts a tripartite model where Charlie is supposed to establish entanglement between Alice and Bob. Thus in our picture, we can view Charlie as an accomplice of the adversary and evaluate the secrecy on the overall state conditioned on the events where Charlie outputs a pass.
We briefly recall the main parameters of the protocol, which are detailed in the protocol definition given in the paper. Conditioned on the successful operation of Charlie (the events whereby Charlie outputs a pass), Alice and Bob generate systems until at least $m_x$ of them have been measured by both of them in the basis $\mathsf{X}$, $m_z$ have been measured in the basis $\mathsf{Z}$, and $m_j$ have been chosen for the local CHSH test. The tolerated error rate and the tolerated CHSH value are $Q_{\tn{tol}}$ and $S_{\tn{tol}}$, respectively.
Furthermore, we take that our information reconciliation scheme leaks at most $\tn{leak}_{\tn{EC}}+\lceil\log(1/\varepsilon_{\tn{cor}})\rceil$-bits of information, where an error correction scheme which leaks at most $\tn{leak}_{\tn{EC}}$-bits of information is applied~\cite{RennerThesis}, then an error verification scheme using two-universal hashing which leaks $\lceil\log(1/\varepsilon_{\tn{cor}})\rceil$-bits of information is applied. If the error verification fails, they abort the protocol.
\begin{thm}\label{securitythm}
The protocol is $\varepsilon_{\tn{sec}}$-secret if for some
$\varepsilon_Q,\varepsilon_{\tn{UCR}},\varepsilon_{\tn{PA}},\varepsilon_{c^*},\varepsilon_{\tn{CHSH}}
> 0$ such that $4 \varepsilon_Q + 2\varepsilon_{\tn{UCR}} + \varepsilon_{\tn{PA}} +
\varepsilon_{c^*} + \varepsilon_{\tn{CHSH}} \leq \varepsilon_{\tn{sec}}$, the final
secret key length $\ell$ satisfies
\begin{multline}\label{eq:securitythm}
\! \!\! \ell \leq m_x \bigg( 1 - \log_2
\bigg(1+\frac{\hat{S}_{\tn{tol}}}{4\eta_{\tn{tol}}} \sqrt{8 -
\hat{S}_{\tn{tol}}^{2}} + \zeta(\varepsilon_{c^*})\bigg) - \tn{h}(\hat{Q}_{\tn{tol}})
\bigg) \\ - \tn{leak}_{\tn{EC}} - \log_2
\frac{1}{\varepsilon_{\tn{UCR}}^2\varepsilon_{\tn{PA}}^2\varepsilon_{\tn{cor}}} ,
\end{multline}
where $\hat{S}_{\tn{tol}} :=S_{\tn{tol}} - \xi(\varepsilon_{\tn{CHSH}})$ and $ \hat{Q}_{\tn{tol}} :=
Q_{\tn{tol}} + \mu(\varepsilon_Q)$ with the statistical deviations given by \[
\xi(\varepsilon_{\tn{CHSH}}) := \sqrt{\frac{32}{m_j}\ln \frac{1}{\varepsilon_{\tn{CHSH}}}},\]
\[ \zeta(\varepsilon_{c^*}) := \sqrt{\frac{2(m_x+m_j\eta)(m_j+1)}{m_xm_j^2}\ln \frac{1}{\varepsilon_{c^*}}},\\\] and \[
\mu(\varepsilon_Q) := \sqrt{\frac{(m_x+m_z)(m_z+1)}{m_x m_z^2} \ln
\frac{1}{\varepsilon_Q}}. \] \end{thm}
\begin{proof} Let $\Omega$ be the event that $Q_{\tn{test}} \leq Q_{\tn{tol}}$ and $S_{\tn{test}} \geq S_{\tn{tol}}$ and $\eta \geq \eta_{\tn{tol}}$. If $\Omega$ fails to occur, then the protocol aborts, and the secrecy error is trivially zero. Conditioned on passing these tests, let $X$ be the random variable on strings of length $m_x$ that Alice gets from the set $\mathcal{X}$, and let $E$ denote the adversary's information obtained by eavesdropping on the quantum channel. After listening to
the error correction and hash value, Eve has a new system
$E'$. Using $\lceil\log(1/\varepsilon_{\tn{cor}})\rceil \leq \log_2(2/\varepsilon_{\tn{cor}})$ (the number of bits used for error verification) and using chain rules for smooth entropies~\cite{RennerThesis}, we bound the min-entropy of $X$ given $E'$
\[\HminSmooth{2\varepsilon+\varepsilon_{\tn{UCR}}}{X|E'} \geq
\HminSmooth{2\varepsilon+\varepsilon_{\tn{UCR}}}{X|E} - \tn{leak}_{\tn{EC}}-\log_2\frac{2}{\varepsilon_{\tn{cor}}}.\] From the entropic uncertainty relation (Lemma \ref{thm:ucr}), we further get
\[\HminSmooth{2\varepsilon+\varepsilon_{\tn{UCR}}}{X|E} \geq \log_2 \frac{1}{c^*} -
\HmaxSmooth{\varepsilon}{Z|B} - \log_2 \frac{2}{\varepsilon_{\tn{UCR}}^2},\] where $Z$ can be seen as the outcome Alice would have gotten if she had measured the same systems in the corresponding basis $\mathsf{Z}$, and $B$ is Bob's system in this case (before measurement).
The max-entropy of the alternative measurement is then bounded by the error rate sampled on the $m_z$ systems $\mathcal{Z}$~\cite{Tomamichel2012a}:
\[\HmaxSmooth{\varepsilon}{Z|B} \leq m_x \tn{h}(Q_{\tn{tol}} + \mu(\varepsilon_Q)),\] where $\varepsilon = \varepsilon_Q/\sqrt{p_{\Omega}}$ and $p_{\Omega}:=\Pr[\mathsf{\Omega}]$.
Next, we bound $c^*$ (evaluated on Alice's devices from $\mathcal{X}$) in terms of the observed CHSH value $S_{\tn{test}}$. We first use \eqref{eq:effective.subsystem} together with the inequality of arithmetic and geometric means, from which we get \[ c^*\leq\prod_{i \in \mathcal{X}} c^{*,i} \leq \left(\sum_{i \in \mathcal{X}}
\frac{c^{*,i}}{m_x}\right)^{m_x} = \left(c^*_\mathcal{X}\right)^{m_x},\] where $c^*_{\mathcal{X}}$ is the average effective overlap on $\mathcal{X}$. Using Lemma. \ref{lem:statistics.cstar}, we get \[ c^*_{\mathcal{X}} \leq \frac{1}{2}+\frac{1}{\eta}\left(c_{\tilde{\mathcal{X}}}^*-\frac{1}{2} \right).\] Since $\tilde{\mathcal{X}}$ is randomly chosen by Alice and is independent of Charlie, $c^*_{\tilde{\mathcal{X}}}$ can be estimated from $c^*_{\mathcal{J}}$, i.e., we apply Corollary \ref{Cor:serfling} to further obtain $\Pr [ c^*_{\tilde{\mathcal{X}}} - c^*_{\mathcal{J}} \geq \zeta (\varepsilon_{c^*})/2 ] \leq \varepsilon_{c^*}$, hence \[ \varepsilon' := \Pr \left[ c^*_{\tilde{\mathcal{X}}} -
c^*_{\mathcal{J}} \geq \frac{\zeta (\varepsilon_{c^*})}{2} \middle| \Omega \right] \leq \frac{\varepsilon_{c^*}}{p_{\Omega}}.\] Lemma \ref{lem:chsh_c} can now be used together with Jensen's inequality, so with probability at least $1-\varepsilon'$, \[c^*_{\tilde{\mathcal{X}}} \leq \frac{1}{2}\left( 1+\frac{S_{\mathcal{J}}}{4} \sqrt{8 -
S_{\mathcal{J}}^2} + \zeta(\varepsilon_{c^*})\right).\] We still need to take into account that we only have an approximation for the CHSH value of the systems in $\mathcal{J}$. From Lemma \ref{lem:statistics.chsh} we get that \[\varepsilon'' := \Pr \left[
S_{\mathcal{J}}\leq\hat{S}_{\tn{test}}
\middle|\Omega\right] \leq \frac{\varepsilon_{\tn{CHSH}}}{p_{\Omega}},\] where $\hat{S}_{\tn{test}}:=S_{\tn{test}}-\xi(\varepsilon_{\tn{CHSH}})$.
Finally, the bound on the error of privacy amplification by universal hashing~\cite{RennerThesis} says that the error is less than $4\varepsilon + 2\varepsilon_{\tn{UCR}} + \varepsilon_{\tn{PA}}$ as long as \[ \ell \leq
\HminSmooth{2\varepsilon+\varepsilon_{\tn{UCR}}}{X|E'} - 2 \log_2 \frac{1}{2\varepsilon_{\tn{PA}}}.\]
Putting all the above equations together we get (\ref{eq:securitythm}), with a total error conditioned on the event $\Omega$ of at most $4\varepsilon + 2\varepsilon_{\tn{UCR}} + \varepsilon_{\tn{PA}} + \varepsilon'+\varepsilon''$. If we remove this conditioning, the error is then \begin{multline}\notag p_{\Omega} (4\varepsilon + 2\varepsilon_{\tn{UCR}} + \varepsilon_{\tn{PA}} +\varepsilon'+\varepsilon'') \\ \leq 4\varepsilon_Q + 2\varepsilon_{\tn{UCR}} + \varepsilon_{\tn{PA}} + \varepsilon_{c*}+\varepsilon_{\tn{CHSH}}\leq \varepsilon_{\tn{sec}}. \qedhere \end{multline} \end{proof}
\end{document} | arXiv |