id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---|
2301.02907 | David Maurice Brink, 20 July 1930 - 8 March 2021 | David Brink was one of the leading theoretical nuclear physicists of his
generation. He made major contributions to the study of all aspects of nuclear
physics embracing nuclear structure, nuclear scattering, and nuclear
instability. His wide ranging interests and interactions with theorists and
experimentalists alike helped him in providing both theoretical analysis and
interpretations and suggesting experiments. He had the gift of visualising
complex problems in simple terms and provided clear analysis of the underlying
processes. He was an expert on the use of semi-classical methods which provided
an intuitively clear picture of complex phenomena. His research work and books
are characterised by scientific clarity, transparency, and depth. David
possessed outstanding skills in mathematical computation, and he was an expert
on special functions, group theory, and the Feynman path integral method. David
had many research students and collaborated with a large number of scientists
from across the world, for whom he was a source of scientific and human
inspiration and admiration. His most fundamental belief was that research was a
means of trying to discover and understand the beauties of Nature and explain
them in simple terms to others. His absolute belief in the value of truth and
his unselfish and generous attitude in sharing knowledge makes him an
outstanding figure in contemporary Nuclear Physics. | C. V. Sukumar, A. Bonaccorso | 2023-01-07T17:55:13Z | http://arxiv.org/abs/2301.02907v1 | # David Maurice Brink
###### Abstract
David Brink was one of the leading theoretical nuclear physicists of his generation. He made major contributions to the study of all aspects of nuclear physics embracing nuclear structure, nuclear scattering, and nuclear instability. His wide ranging interests and interactions with theorists and experimentalists alike helped him in providing both theoretical analysis and interpretations and suggesting experiments. He had the gift of visualising complex problems in simple terms and provided clear analysis of the underlying processes. He was an expert on the use of semi-classical methods which provided an intuitively clear picture of complex phenomena. His research work and books are characterised by scientific clarity, transparency, and depth. David possessed outstanding skills in mathematical computation, and he was an expert on special functions, group theory, and the Feynman path integral method. David had many research students and collaborated with a large number of scientists from across the world, for whom he was a source of scientific and human inspiration and admiration. His most fundamental belief was that research was a means of trying to discover and understand the beauties of Nature and explain them in simple terms to others. His absolute belief in the value of truth and his unselfish and generous attitude in sharing knowledge makes him an outstanding figure in contemporary Nuclear Physics.
## 1 Early years and family memories
David Maurice Brink was born on 20 July 1930 in Hobart, Tasmania. His father, Maurice Brink, had been born in the village of Bjuv in Sweden in 1900. David's grandparents emigrated to Australia in July 1900. At the age of 14 David's father moved to Sydney, where he trained to become an accountant. After this he went to Tasmania and joined an accountancy firm, Wise, Lord and Ferguson, where he eventually became a partner. In 1929 he married Victoria Finlayson (born in 1900). Her father, David, had emigrated with his parents from Scotland in 1884. They had an engineering firm in Devonport, Tasmania, whose main activity was maintaining and repairing machinery for mining, shipping, and timber companies. David's grandfather and his colleagues built the first steam car in Tasmania and between 1900 and 1904 built nine vehicles including three passenger cars and one 12-passenger bus. David visited his grandparents often during vacations. He saw the casting floor and other parts of the factory and enjoyed playing amongst the remains of old steam traction engines.
David was the eldest of three brothers. The Brink brothers went to a Quaker school in Hobart, Australia, from 1936 to 1948. David attended the University of Tasmania during 1948-51, studying Physics, Mathematics, and Chemistry, graduating with a BSc in December 1950, and was elected a Rhodes Scholar at Magdalen College, Oxford, from October 1951. From February 1951 to September 1951 he studied for a BSc Honours in Hobart but did not complete the course because he moved to Oxford in September 1951.
As a student at the University of Tasmania David joined the Hobart Walking Club. With this club he went on many trips to the interior of the island. When he arrived in Oxford he became a member of the Oxford University Alpine Club. Its activities took him to the Alps where he climbed in the Valais and the Engadine in Switzerland. It was in Switzerland that he met his future wife Verena. Verena and David married in 1958 and had three children together. His love for walking was transmitted to his three children who continue to enjoy walking in urban, rural, and mountainous settings. While always very committed and absorbed with his Physics he was also a devoted husband and father, transmitting his joy for walking and travel to his family. He often helped his children with their homework and was very patient with them, even when they were not! Together David and his family travelled to, and lived in many countries across the world, where their horizons were broadened and they were introduced to the idea that there are many different ways of living and being. When his children had left home and travelled to other countries he would often be found in front of an atlas studying their exact whereabouts.
David was very open minded and curious, always accepting other people's opinions and points of view. David and Verena were very close, shared everything and had full respect for each other. Verena was a wonderful host and the Brinks often organised tea and dinner parties for students, visitors, and their families. Verena also helped visitors find accommodation, and with other issues related to living in Oxford. They were also very generous in offering accommodation at their place whenever possible.
In Oxford David developed an interest in birds, initially just birds he saw in Oxford, but when he travelled he always liked to look for birds and made lists of species he saw. This curiosity in nature extended to other species as well, including trees. When in 1993 he moved to Trento, Italy, he became a member of the SOSAT, a branch of the alpine club, and went regularly with them on Sunday trekking trips.
## 2 Graduate studies and Oxford beginnings
David started his studies at Oxford in October 1951. When he arrived at Magdalen College there was no tutor in Theoretical Physics at the college. His maths tutor was David Kendal, who sent him for tutorials to Jack De Wet at Balliol College. Jack asked David to read Von Neumann's book on the foundations of Quantum Mechanics in German. He also encouraged David to change his studies from a BA in Mathematics to a D. Phil in Theoretical Physics. Maurice H. L. Pryce (FRS 1951) was the Wykeham Professor and head of the Theoretical Physics Department in Oxford from 1946 to 1954. He was David's supervisor. Pryce was also the part-time leader of the Theoretical Physics Division of the Atomic Energy Research Establishment (AERE) at Harwell, not far from Oxford, where nuclear theory was very much in the forefront and Rudolf E. Peierls (FRS 1945) was a consultant. At Harwell there was a very productive theory group including Tony Skyrme and J. P. (Phil) Elliott (FRS 1980). Skyrme organized regular informal meetings known as 'Skyrmishes'. Important papers in the latest journals were presented and discussed. Members of the group attended Oxford seminars, while the local group, including Roger Blin-Stoyle (FRS 1976), David Brink, and Pryce, attended the Harwell meetings. Elliott gave some lectures at Oxford on Racah algebra. Later on, his best-known work brought together the shell and collective models to explain rotational bands in deformed nuclei using the unitary group SU(3). During this time he wrote a long article in Handbuch der Physik with A. M. (Tony) Lane (FRS 1975) [1] on the shell model.
Figure 2: David (right) in Tasmania 1950.
The foundations of David Brink's lifelong research can all be found in his thesis "Some Aspects of the Interactions of Fields with Matter" [2], which was submitted in May 1955. It is a remarkable document for its breadth and early contributions to the field of nuclear physics. M. Pryce, his thesis adviser, was interested largely in atomic spectroscopy but also studied the spectroscopy of nuclear energy levels. The advent of the shell model around 1950 opened the door to new theoretical approaches for understanding the properties of nuclei and applying quantum mechanical tools to calculate them. There was also a great interest in reactions involving heavy nuclei, which could only be treated by statistical methods that had been developed much earlier. Brink's two-part thesis contained contributions to both areas, reflecting the interactions between the Harwell and Oxford groups. The first part was inspired by the shell model and the second contains important contributions to the statistical theory of nuclear reactions.
In the first part of his thesis, dealing with Nuclear Structure, David analyzed the spectroscopic consequences of the nucleon-nucleon interaction acting on the valence nucleons in nuclei close to the doubly-magic \({}^{208}\)Pb. David was able to estimate the order of magnitude of the interaction matrix elements from the properties of the deuteron. He also proposed treating the interaction through a density matrix expansion. This would figure prominently in later work in the field.
The second part of his thesis dealt with reactions involving heavy nuclei. It was probably inspired by the work of the experimental group at Harwell. There, a Van de Graaff accelerator was used to measure energy levels, moments and transition rates in nuclei. David was also fortunate to have contact with the strong experimental group working on neutron resonances. While David was working on gamma widths of neutron resonances, he benefited from contacts with Prof. Hughes [3] and Prof. Weisskopf, who were visiting Oxford. Weisskopf was very much interested in applying detailed balance theory to nuclear reactions, and interactions with him must have influenced David, because at the end of the thesis he acknowledges discussions with Victor Weisskopf. The first subject in this part was the theory of inelastic scattering on deformed nuclei. David constructed a theory for the excitation of rotational bands in deformed nuclei based on two new ideas, namely Bohr's model of deformed nuclei and the optical model of Weisskopf et al. [4] published the previous year. David was able to carry out the calculations to a point where the relative importance of this mechanism in the total cross section could be estimated. This was an impressive achievement at a time before computers were available to carry out the full calculations.
The final section of his thesis deals with the decays of the compound-nucleus resonances produced in reactions on heavy nuclei. The formulas he presented here are still in use for modeling the spectra and reactions in heavy nuclei [5]. The best known is the formula for gamma decay rates in compound-nucleus resonances. This formula is based on a treatment widely known as the "Brink-Axel" hypothesis. At a fundamental level, the theory was derived from the principle of detailed balance which Weisskopf had used very successfully in other contexts. The principle gives a formula to relate decay rates to absorption cross sections in the inverse reaction. The Brink-Axel hypothesis simply states that the absorption cross sections for gamma radiation on excited states of heavy nuclei can be estimated by the corresponding cross sections on the ground states. Axel and Brink worked independently. Peter Axel's paper appeared in 1962 [6]. The important statement is made on page 101 of David's thesis and is expressed in equation (11) of
Axel's paper. The prediction of the statistics of the widths of nuclear resonances was based on a generalization of the central limit theorem, which David had learned about in his statistics course in Tasmania. David published the results in [7] where he showed the close connection between the shell-model description of the giant dipole resonance and the collective model of Goldhaber and Teller [8] and Steinwedel and Jensen [9]. After his paper, the theory of giant resonances used the shell model as a starting point. Confirmation of the Brink-Axel hypothesis first came from the Berkeley experiments in 1981 [10].
The last part of the thesis has formulas related to another important topic in compound-nucleus theory, the fluctuations in decay widths of individual resonances. Here, David speculated that the fluctuations would follow a chi-squared distribution with one or two degrees of freedom. This is borne out experimentally and is now considered one of the hallmark properties of the compound nucleus. It also became a part of random matrix theory in mathematical physics. Unlike the early parts of the thesis, David never published the parts on compound-nucleus decay widths. However, physicists at the Harwell Laboratory knew about David's results and J. E. Lynn explained them in his book [11]. Unfortunately, David's treatment of fluctuations was not recognized until very recently [12] and the distributions are known today under other authors' names [13].
Figure 3: David and his children (left to right), Barbara, Thomas, and Anne-Katherine 1969.
## 3 Research areas
David's interactions with the physicists mentioned earlier were reflected not only in David's thesis but also in his early publications. One paper [14], which dealt with angular momentum couplings and angular distributions of \(\gamma\)-rays and other particles, is still the "Bible" most experimentalists use when they analyse their data, as we have been told by Peter Butler (FRS 2019) (Liverpool) and Yorick Blumenfeld (Orsay), and others. Early in his research career David wrote the textbook _Angular Momentum_ [15] with Ray Satchler. This textbook was prominent among several texts published in this time period. It was widely used by graduate students and post-graduates working in nuclear theory. David also published a book on _Nuclear Forces_ [16].
### Effective interactions and calculational tools
In his thesis David had laid the basis for the use of effective interactions in the calculation of matrix elements for nuclear structure studies. The idea was greatly advanced in three later papers. The first proposes a Gaussian form for the effective nucleon-nucleon interaction known as the "Brink-Boeker" interaction [17] that all nuclear physicists have used at least once in their lives. This paper was very influential at the time and was later developed by Gogny and collaborators into the interaction that is widely used even today [18, 19].
In 1959 Tony Skyrme proposed modelling the effective interaction between nucleons in nuclei by a short-range potential, an idea which is useful in nuclear structure and the equation of state of neutron stars [20]. The Skyrme force is an effective interaction depending on a small number of parameters whose strength could be fitted to reproduce various bulk properties of nuclei as well as selected properties of some nuclei, especially the doubly magic nuclei. At the beginning of the 1970s David was a frequent visitor to the Theoretical Division at the Institut de Physique Nucleaire, Orsay, where his sixty-fifth birthday was later celebrated (figure 5). The work done there produced two papers with Dominique Vautherin [21, 22], which were the basis for the intense use of the so-called Skyrme interactions, in all their many present variants. The papers revived a general interest in using Skyrme's parametrization of the nucleon-nucleon interaction to calculate nuclear binding energies and, later, other aspects of nuclear structure. In effect, the interaction is treated as an energy-density functional theory in the spirit of the Kohn-Sham theory in condensed matter physics.
The Hartree-Fock calculations in [21] for spherical nuclei used Skyrme's density dependent effective interaction. This seminal paper showed how the Skyrme force could be used to make accurate calculations of certain nuclear properties and Vautherin and Brink developed these ideas further in a series of papers which had a strong impact on nuclear structure calculations. T. Otsuka comments: "The paper [21] has had a huge impact, as verified by the number of citations \(>\)2000. In nuclear theory, papers having the citation index \(>\)1000 are rather few, which implies how important the Vautherin-Brink paper is. This year is the 50 year anniversary of this paper, and it is amazing that the basic formulation within the mean-field approach has not changed too much, implying that the scheme presented in this paper is so solid".
The calculations of Vautherin and Brink were extended by many other physicists during the subsequent period. In particular, at Oxford, Micky Engel, Klaus Goeke and Steve Krieger, together with Dominique Vautherin, derived the energy density using a Slater determinant in which the single-particle states were no longer invariant under time reversal, as they are in the Hartree-Fock method. With the Skyrme interaction the TDHF approach leads to an equation of continuity for the single particle density [22]. This paper showed how Dirac's time-dependent Hartree-Fock theory could be applied to nuclear dynamics in a light nucleus. In the year immediately following
the publication, the theory was applied to collisions involving a large number of nucleons [23], showing that the method would be a powerful one for heavy nuclei as well. The method is justified as a time-dependent density-functional theory, and it remains in widespread use.
In 1973 Ica Stancu came to Oxford as a post doctoral fellow and worked with David on heavy ion reactions in deriving the interaction potential of two \({}^{16}\)O nuclei starting from the Skyrme energy density formalism [24]. They included the previously ignored tensor part of the Skyrme interaction. Along with an additional effort from Hubert Flocard at Orsay, the Skyrme HF calculations yielded single particle levels of spherical closed nuclei [25]. The role of the tensor force is to contribute to the spin-orbit splitting of the single-particle levels. For spherical closed shell nuclei the effect turned out to be small. Later it was found that in spherical spin unsaturated nuclei it makes a dramatic difference, giving the correct order of single particle levels, as, for example, in the Sn isotopes [26]. Many experiments on neutron-rich nuclei since 2006 have shown that the Skyrme formalism including the tensor force was the simplest way to describe the shell evolution of neutron-rich or proton-rich nuclei and indicated new magic numbers.
### Heavy-ions and Semi-classical methods in Nuclear Physics
As tandem accelerators and cyclotrons were built to study heavy-ion Physics, David started an intense collaboration with the experimentalists at the Department of Nuclear Physics in Oxford. The accelerators were used to study heavy-ion elastic scattering and direct reactions such as transfer, and to measure masses and perform spectroscopy of neutron-rich matter. In those years semiclassical methods were widely used in the Nuclear Physics community to analyse data. They were particularly appropriate for heavy ions because of the high incident energies and the large impact parameters involved. Thus David started the Oxford school on the subject, more or less parallel in time to the Copenhagen school of Broglia and Winter and collaborators. At that time, these heavy-ion reactions were analyzed through the partial wave expansions of the colliding partners, a methodology that was computationally demanding and gave little insight into the underlying dynamics. David's semi-classical treatment of the collision was much simpler. Some of the early papers on the theory of peripheral reactions were based on his students' theses, including those of Hashima Hasan and Luigi Lo Monaco [27, 28].
David's investigation of the kinematical effects in such reactions, for which there was concrete experimental evidence from the work of Peter Twin (FRS 1993) and his collaborators at Liverpool, became a key element for experimentalists. In the paper with the title "Kinematical effects in heavy-ion reactions" [29] David introduced a "semi-classical amplitude" [30] that could be used in DWBA-like calculations of transfer [31] and proposed a matching condition to predict large reaction cross-sections, a condition that was beautifully adapted to understand spin-polarization experiments. He showed that energy and angular momentum couplings in heavy-ion reactions led to very selective matching rules by which high angular momentum single-particle states could be populated. High angular momentum single-particle states sometimes appear as low-lying continuum resonances. They have been studied by the method of transfer-to-the-continuum [32], which has helped disentangle single-particle from collective degrees of freedom and has also been applied in the so-called "surrogate reactions" as a substitute for free neutron beams.
Semi-classical ideas have been helpful in studying breakup and dissociation of weakly bound radioactive ions including halo nuclei and other such unstable nuclei whose dynamics is rather involved and difficult to study experimentally due to the very low intensity of beams. David, Angela Bonaccorso and her students got heavily involved in this new physics from the '90s on, with a long series of papers (see [33] and references therein), conference contributions, meeting organization, some of them at the ECT* in Trento, spanning the last forty years of David's
career. Finally it has recently been shown [34] that the semi-classical treatment of breakup by David and his collaborators is fully consistent with a quantum mechanical treatment.
David studied microscopic models for the real and imaginary parts of the ion-ion optical potential to be used in elastic scattering calculations with Ica Stancu. He also studied fusion with Neil Rowley and N. Takigawa. David and Takigawa developed a semi-classical reaction theory with three classical turning points which explained the anomalous large angle scattering of \(\alpha\) particles as a quantum-mechanical interference between the barrier wave and the internal wave, thereby providing an intuitively clear picture of a complex phenomenon underlying nuclear reactions in terms of classical and quantum ideas. David, Vautherin, and M.C. Nemes studied the effect of intrinsic degrees of freedom on the quantum tunnelling of a collective variable. This work was further developed by other theorists including Kouichi Hagino who studied the deviation from adiabaticity in quantum tunnelling with many degrees of freedom.
David met Uzi Smilanski in Munich when they were both there on sabbatical. Both had worked on semi-classical approximations and gave a joint series of lectures on this topic. David was concerned that the standard WKB method was insufficient to explain tunnelling through a barrier and was particularly bad near the barrier top. David and Uzi applied the uniform semi-classical method evolved by Michael Berry (FRS 1982) to successfully address the problem [35]. Uzi remembers David as a physicist with excellent intuition and an ability to grasp the essence of a problem before cracking the problem with rigorous mathematics and complex computation.
David, Massimo di Toro, and Alberto Dellafiore developed a semi-classical description of collective responses with a mean field approach paving the way for a study of the dynamics of a nuclear Hartree-Fock fluid. When the national heavy-ion laboratory started in Catania (LNS-INFN) around an advanced superconducting cyclotron, David was a reference point for simple physics suggestions.
### Path integral methods in Nuclear Physics
David's expertise with semi-classical methods for tackling quantum problems naturally led him towards the Feynman path integral approach to quantum mechanics which was based on a Lagrangian approach. Hans Weidenmuller had met David at various conferences in the 1950s and 1960s and spent 1977-78 on a sabbatical in Oxford. During this period David and Hans worked on the application of the Feynman path integral method to the study of the heavy-ion reactions and developed the Influence Functional approach to this problem which David and his collaborators later used to establish master equations. Hans remembers that at a summer school a few years later David delivered a series of lectures on nuclear reactions. In the first lecture he developed the topic using a dozen transparencies and in subsequent lectures used the same transparencies in a different order to display and illuminate aspects of the topic that had gone unnoticed before. Hans remembers it as a display of the combination of simplicity and depth that were hallmarks of David's approach to Physics.
The path integral method was particularly well suited for studying problems with many degrees of freedom in which classical description in terms of trajectories was good for some degrees of freedom but not for all. Coulomb excitation in heavy-ion collisions is an example where the relative motion of the ions could be described in terms of coulomb trajectories but the excitation of the quantum states of the ions had to be treated using quantum mechanics. David and Sukumar [36] used the Feynman path integral method to evolve a systematic way of arranging the correction terms for the quantum amplitudes for processes involving coupled degrees of
freedom. David, Sukumar, and Fernando Dos Aidos used this method to provide corrections to the primitive semi-classical amplitude for Coulomb excitation of heavy-ions. Sukumar and David used the path integral method to describe spin-orbit coupling effects and together with Ron Johnson at Surrey and his group successfully explained the experimental data on polarization effects.
## 4 Other topics
David was very quick at grasping the core of a Physics problem and putting it in simple, calculable terms. Often the problem required somewhat involved analytical calculations, but he was a master of that. Thus anytime a visitor went to Oxford with a new problem, David would start a very successful line of research which he often followed up with his graduate students.
Figure 4: David and his wife Verena, May 2018.
### Cluster models
This happened, for example, with cluster model physics, starting with the seminal paper [37]. This paper developed the generator coordinate method of Hill and Wheeler [38] to produce a practical tool to reduce the many-particle Hamiltonian to an ordinary Schrodinger equation for a collective variable. Thus the nuclear cluster model was related to the shell model. To treat nuclear states in such different circumstances, a formulation which includes clustering at one extreme and shell structure at the other extreme was needed. David proposed microscopic multi-\(\alpha\)-clusters treating four nucleons with different spin-isospin states as a single particle orbit. Under anti-symmetrisation of nucleons the cluster model wave-functions approximate shell model functions, enabling the description of both cluster and shell model structures in a unified way. This approach was adopted and is in widespread use even in present-day nuclear theory. The main applications up to now are in spectroscopy and large-amplitude collective motion.
Y. Suzuki's work on the cluster model was largely inspired by David's paper on "Do alpha clusters exist in nuclei?" [39] presented at a meeting in Tokyo in 1975. This paper contained all the essential components needed in the alpha particle model, the microscopic theory beyond the shell model description based on many-particle many-hole excitations, the relation between the resonating group method and the GCM, the equilibrium arrangement of clusters, extension of the Hill-Wheeler method, the angular momentum projection, and the Slater determinant technique for evaluating matrix elements. Suzuki remembers that David never forgot to mention that the original model was proposed by H. Margenau and C. Bloch [40, 41, 42].
At the Varenna School in 1955 David met S. Yoshida from Japan and they discussed inelastic scattering of protons and neutrons by deformed nuclei. By chance David had a chapter in his thesis on this topic and Yoshida had been studying the same subject. This interaction with Yoshida helped David to develop strong connections with nuclear theory groups in Japan over many years.
### Bose-Einstein condensation of atoms
During his period as Deputy Director of ECT* in Trento, 1993-1998, David interacted with many members of the Physics Department in Trento. One such interaction with Sandro Stringari led to David's interest in Bose-Einstein condensation of alkali atoms in magnetic traps [43]. Sukumar and David [44] developed an approximate method for calculating the rate of escape from the magnetic trap thereby enabling an estimation of the duration for which the condensate atoms can be held in the trap as a function of the ultra-cold temperature and the strength of the magnetic field.
### Miscellaneous
David was interested in the role of the pairing interaction in finite nuclei, and this led to the study of nuclear superfluids. His book with R. Broglia [45] is considered to be a wonderful exposition of this subject. David's knack for explaining detailed Physics in a simple and clear manner is abundantly evident in this book. In the 1990s Ica Stancu raised David's interest in the quark structure of exotic hadrons named tetraquarks, a system of two quarks and two antiquarks, and together they studied the stability of such systems containing heavy quarks/antiquarks in a QCD-inspired quark model. Even though David had not worked on the Interacting Boson Model (IBM), he nevertheless provided supervision for doctoral students such as Martin Zirnbauer, who chose topics in this field. He also supervised Hans Peter Pavel's thesis on Schwinger pair production in a flux tube model containing a chromomagnetic field.
## 5 Teaching and administrative roles
David's doctoral students remember him for the gentle way he corrected them when they had made errors. Many of the students learned from him how to take a critical approach to their results and how it is possible to look at a complex problem from several different viewpoints and find the one that gives the best physical insight. They also remember the immense support he gave to their research and pastoral care. Many graduate students also remember how much they had learned from the courses he taught at Oxford and at Summer schools. His book with Satchler [15] and paper with Rose [14] on angular momentum algebra were found to be of immense value in formulating and tackling problems in Nuclear Physics. Many researchers and students who met David were astonished that someone with such towering achievements could be so humble, nice and honest. David was very open-minded and we report a number of episodes to illustrate this aspect of his character.
Future Nobel laureate Prof. Tony Leggett remembers: " My undergraduate major at Balliol was in Greats (classical languages, ancient history and philosophy) and I was set to graduate (and eventually did so) in the summer of 1959. Towards the end of the academic year 1957-1958, partly encouraged by the post-Sputnik cultural swing towards science in the UK, I conceived the ambition of taking a second undergraduate degree in physics and perhaps eventually making my career in academia in that subject. Given that I had essentially no meaningful exposure to physics at the high-school level and only a brief and informal exposure to any kind of mathematics beyond simple differential calculus (I'm not sure that I had even had that), such a drastic change of academic direction was extremely unusual, indeed at the time almost unheard-of. My first concern was to find a higher education institution which would accept me for it and I rapidly concluded that my only hope was to apply to my existing Oxford college, Balliol. David had just recently become the college's first tutor in theoretical physics (most Oxford colleges did not have such a thing in 1958), so it fell on him to take the decision on my application. To this end he asked me to read over the summer vacation a few chapters from the book "What is Mathematics?" by Courant and Robbins [46], perhaps the most beautiful presentation I have ever seen of mathematical topics for the layperson. When I returned to Oxford in the Fall of 1958 he gave me an informal mini-exam on that material, and on the basis of my performance decided to recommend to Balliol to accept me. In the event I did my physics degree at Merton, who offered me a scholarship, but since they did not at the time have a tutor in theoretical physics David played that role for me for much of the two years which it took me to complete the degree. I think it is virtually certain that had he made the opposite decision, I would never have had a career in physics, and I am profoundly grateful to him for the imagination he showed in going beyond my formal academic qualifications."
Another story comes from Paul Stevenson: "I was called up for interview at Balliol in December 1991. The office I was in for that interview was David Brink's office, above the Senior Common Room. In the interview were me, David Brink, David Wark, Jonathan Hodby (those three there for physics) and Bill Newton-Smith (for philosophy). I don't remember all the questions. I do remember that David Brink showed me a postcard and asked me what, physically, was wrong with the picture. It was a Japanese style print with a mountain in the background and a lake in the foreground. There was a reflection of the mountain in the lake, but it was off to one side. I saw what was wrong, and struggled to articulate it in the language of a physicist, and in the end David prompted me by asking what is particular about an incident light ray, a reflected light ray, and the normal to the surface at which it is reflected and I said the right thing - that they are all in the same plane. I was duly accepted to Balliol and spent three years there studying physics". Danny Chapman remembers: "I don't think I'll ever forget the "sense" of David Brink's
tutorials, and of being in the presence of such a sharp and insightful mind. I remember being quite inspired once when my fellow student had tried to answer a question in what I thought was an odd and probably wrong way, ending up with a sum, which he then attempted to turn into an integral, which didn't work out. Rather than saying "don't do it like that, do it like this", David was able to continue from there and make it work, which was a really positive experience and encouragement to follow every path to its end. I feel lucky to have been at Balliol when he was there."
Angela Bonaccorso remembers daily life as one of David's students: At the Department of Theoretical Physics there was a coffee room where coffee was served between 11:00 and 11:30. We would try to be there on time to sit around David who would be chatting with other senior members of the department or some visitor. There would always be someone bringing up some interesting and challenging new problem. Everyone gave an opinion, the atmosphere was competitive. Most of the time David would win the argument and his students felt very proud. Not all supervisors were as nice, helpful, and respectful of us as David was. But it was not at all easy to be David's student. First of all we needed to have detective skills. David was very busy and very elusive. In those days there was no email or SMS. The only way to be sure that he was inside was to look for his bicycle. If the bicycle was outside we would knock at the door of his office and if we were lucky he would answer and let us in. In spite of all his many commitments we always managed to have at least one chat per week with him. Another reason why it was not easy to be his student was that David had a very original way of understanding things and finding the way out of problems. During our conversations he would often stop talking and be silent for five to ten minutes, rubbing his hand on his forehead. Then he would come up with some equation, or a drawing, or something like that and he would tell us: I think it is like this...I think we should get something like that...etc. I (we) would stare at him speechless and in wonder. Where did the 'oracle' come from? Most of the time this was the end of the meeting. I (we) left his office rather puzzled, worked desperately hard for one week and if we had managed to understand his line of thought, after pages and pages of calculations, we would find exactly what he had predicted. We all knew it was like that, we all passed this information on to each other, generation after generation: listen to David, he is always right, just try to reproduce the miracle of his craftsmanship in physics.
A further proof of how busy David was, and how precious the time spent in conversation with him was for everyone, can be found in the comment Gerry Brown made in his review for Science [47] of the Proceedings of the Varenna summer school [41]: 'Let me draw special attention also to the article of David Brink, "The alpha-particle model of light nuclei," which is one of the most beautiful developments in this subject. Brink likes to sit on his work for years and, on the whole, doesn't even answer letters inquiring about it, so that one must either adopt the expedient of traveling to Oxford to talk with him, or invite him to lecture at summer schools. Both are worth while.'
David was a pillar of Balliol College and the Department of Theoretical Physics for decades, an immensely popular tutor and supervisor, a cheerful and always helpful colleague, and a wonderful guide to younger colleagues and administrative staff who happened to be working with him. David had another long and distinguished career in Italy after he left Oxford. Following an invitation from Renzo Leonardi, he moved to Trento as full professor of History of Physics and helped establish the ECT*, the European Center for Theoretical Studies in Nuclear Physics and Related Areas. The Nobel laureate Ben Mottelson was the founding director and David the vice-director, while Renzo Leonardi was the Scientific Secretary. In the five years David spent at Trento he took care of organising various technical aspects of the secretarial offices,
library, computer center and visitor hospitality. At the same time he gave very productive contributions to workshops with his constant presence, his huge knowledge of nuclear physics and stimulating discussions. The superb reputation and international standing of this extremely important European initiative is undoubtedly due in large part to David's wisdom in its crucial, formative years.
## 6 Career, Honours and Awards
1954-55 Royal Society Rutherford Scholarship.
1957-1958 Instructor at the Massachusetts Institute of Technology (MIT).
1958 Fellow of Balliol College and Lecturer in Theoretical Physics, Oxford.
1976-1978 Vice-Master of Balliol College.
1981 Fellow of the Royal Society.
1982 Rutherford Medal of the Institute of Physics.
1988 H. J. G. Mosley Reader at Oxford.
1990-1993 Senior Tutor, Balliol College, academic planning and administration, Oxford.
1992 Foreign member of the Royal Society of Sciences, Uppsala.
1993-1998 ECT*, Trento, Vice-Director.
1993-1998 Full professor of History of Physics, University of Trento.
Figure 5: David’s sixty-fifth birthday celebration. Orsay, 1995.
2006 Varenna Conference on Nuclear Reactions dedicated to him.
2006 Lise Meitner prize of the European Physical Society shared with H. J. Kluge.
Visiting scientist at :
* Niels Bohr Institute 1964,
* University of British Columbia 1975,
* Institut de Physique Nucleaire d'Orsay 1969 and 1981-1982,
* The Technical University of Munich 1982,
* University of Trento 1988,
* University of Catania 1988,
* Michigan State University 1988-1989.
## 7 Acknowledgements
The authors are greatly indebted to the Brink family for sharing with them private memories and photographs and for a critical reading of the manuscript. A large number of friends and colleagues, too many to be individually mentioned, contributed with their appreciation of David's life and scientific career. Ica Stancu and Sharon McGrayne Bertsch read and commented on the manuscript. One of us (AB) gratefully acknowledges George F. Bertsch for his help in digging out from David's thesis and early work the roots of several founding pillars of modern Nuclear Physics.
|
2304.09054 | Stable intermediate phase of secondary structures for semiflexible
polymers | Systematic microcanonical inflection-point analysis of precise numerical
results obtained in extensive generalized-ensemble Monte Carlo simulations
reveals a bifurcation of the coil-globule transition line for polymers with a
bending stiffness exceeding a threshold value. The region, enclosed by the
toroidal and random-coil phases, is dominated by structures crossing over from
hairpins to loops upon lowering the energy. Conventional canonical statistical
analysis is not sufficiently sensitive to allow for the identification of these
separate phases. | Dilimulati Aierken, Michael Bachmann | 2023-04-18T15:19:21Z | http://arxiv.org/abs/2304.09054v1 | # Stable intermediate phase of secondary structures for semiflexible polymers
###### Abstract
Systematic microcanonical inflection-point analysis of precise numerical results obtained in extensive generalized-ensemble Monte Carlo simulations reveals a bifurcation of the coil-globule transition line for polymers with a bending stiffness exceeding a threshold value. The region, enclosed by the toroidal and random-coil phases, is dominated by structures crossing over from hairpins to loops upon lowering the energy. Conventional canonical statistical analysis is not sufficiently sensitive to allow for the identification of these separate phases.
For more than a century, canonical statistical analysis has been the standard procedure for the quantitative description of thermodynamic phase transitions in systems large enough to allow for making use of the thermodynamic limit in analytic calculations and finite-size scaling methods in computational approaches. However, throughout the last few decades, interest has increasingly shifted toward understanding the thermal transition behavior of systems at smaller scales, including, but not limited to, biomolecular systems such as proteins, DNA and RNA.
The impact of finite-size and surface effects on the formation of structural phases and the transitions that separate them is so significant that conventional statistical analysis methods are not sensitive enough to provide a clear picture of the system behavior. Recently developed approaches like the generalized microcanonical inflection-point analysis method [1] overcome this issue as they allow for a systematic and unambiguous identification and classification of transitions in systems of any size.
One particularly intriguing problem is the characterization of phases for entire classes of semiflexible polymers, which, for example, include variants of DNA and RNA. This has been a long-standing problem, but simple early approaches such as the wormlike-chain or Kratky-Porod model [2] could not address this problem. Significant advances in the development of Monte Carlo algorithms and vastly improved technologies enabled the computer simulation of more complex coarse-grained models in recent years, though [3; 4; 5; 6; 7]. Yet, most of these studies still employed conventional canonical statistical analysis techniques that led to a plethora of results. This is partly due to the fact that canonical statistical analysis, which is usually based on locating extremal points in thermodynamic response functions such as the specific heat and temperature derivatives of order parameters or in free-energy landscapes, is ambiguous, inconsequential, and often not sufficiently sensitive to allow for the systematic construction of a phase diagram for finite systems.
In this Letter, we take a closer look at the most interesting part of the hyperphase diagram of the generic model for semiflexible polymers in the space of bending stiffness and temperature, where the structurally most relevant toroidal, loop, and hairpin phases separate from the wormlike chain regime of random coils. As we will show, microcanonical inflection-point analysis reveals two transitions that standard canonical analysis cannot resolve.
For this Letter, we employ the widely used generic model for semiflexible polymers, composed of the potentials of bonded and nonbonded pairs of monomers and a repulsive bending energy term. The energy of a polymer conformation with \(N\) monomers \(\mathbf{X}=(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N})\), where \(\mathbf{x}_{n}\) is the position vector of the \(n\)th monomer, is given by \(E(\mathbf{X})=\sum_{n}V_{\mathrm{b}}(r_{n\,n+1})+\sum_{n<m-1}V_{\mathrm{nb}}(r_{nm})+\kappa\sum_{k}(1-\cos\Theta_{k})\). For the nonbonded interactions we use the standard Lennard-Jones (LJ) potential \(V_{\mathrm{nb}}(r)\equiv V_{\mathrm{LJ}}(r)=4\varepsilon[(\sigma/r)^{12}-(\sigma/r)^{6}]\), where \(\sigma\) is the van der Waals radius. The bonded potential is given by the combination of the shifted LJ and the finitely extensible nonlinear elastic (FENE) [8; 9; 10] potential, \(V_{\mathrm{b}}(r)=V_{\mathrm{LJ}}(r)-(1/2)KR^{2}\ln(1-(r-r_{0})^{2}/R^{2})\). We chose the same FENE parameter values as used in previous simulations of flexible polymers [11]: \(K=(98/5)\varepsilon/r_{0}^{2}\) and \(R=(3/7)r_{0}\). Monomer-monomer distances are given by \(r_{nm}=|\mathbf{x}_{n}-\mathbf{x}_{m}|\), and \(\Theta_{k}\) is the bending angle spanned by successive bond vectors \(\mathbf{x}_{k}-\mathbf{x}_{k-1}\) and \(\mathbf{x}_{k+1}-\mathbf{x}_{k}\). The bending stiffness is denoted by \(\kappa\); it is a material parameter that helps distinguish classes of semiflexible polymers. The basic length scale for all distances is provided by the location of the LJ potential minimum \(r_{0}\), which we set to unity in our simulations. Likewise, \(\varepsilon\) is used as the basic energy scale. Hence, throughout the Letter, energies are measured in units of \(\varepsilon\). For simulation efficiency, the LJ potential was cut off at \(r_{c}=2.5\sigma\) and shifted by \(V_{\mathrm{LJ}}(r_{c})\)[11]. Whereas the systematic phase-space study on a large array of \(\kappa\) values was performed for chains with \(N=55\) monomers, selected simulations were also run for longer chains with up to 100 monomers to verify the robustness of the results presented here.
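To make the model concrete, the sketch below evaluates \(E(\mathbf{X})\) for a given conformation with the parameter values quoted above (\(\varepsilon=\sigma=r_{0}=1\), \(K=(98/5)\varepsilon/r_{0}^{2}\), \(R=(3/7)r_{0}\), cutoff \(r_{c}=2.5\sigma\)). This is a minimal illustration rather than the authors' simulation code; function and variable names are ours, and details such as where exactly the cutoff shift is applied may differ from the original implementation.

```python
import numpy as np

EPS, SIG, R0 = 1.0, 1.0, 1.0                     # energy scale, van der Waals radius, LJ minimum location
K_FENE = (98.0 / 5.0) * EPS / R0**2              # FENE spring constant
R_FENE = (3.0 / 7.0) * R0                        # maximal FENE bond elongation
RC = 2.5 * SIG                                   # LJ cutoff radius

def v_lj(r):
    """Standard Lennard-Jones potential."""
    return 4.0 * EPS * ((SIG / r)**12 - (SIG / r)**6)

def energy(x, kappa):
    """Total energy of a conformation x (array of shape (N, 3)) at bending stiffness kappa."""
    bonds = x[1:] - x[:-1]
    rb = np.linalg.norm(bonds, axis=1)
    # bonded part: LJ plus FENE
    e_bond = np.sum(v_lj(rb) - 0.5 * K_FENE * R_FENE**2
                    * np.log(1.0 - ((rb - R0) / R_FENE)**2))
    # nonbonded part: truncated and shifted LJ for all pairs separated by more than one bond
    e_nb = 0.0
    for n in range(len(x) - 2):
        r = np.linalg.norm(x[n + 2:] - x[n], axis=1)
        r = r[r < RC]
        e_nb += np.sum(v_lj(r) - v_lj(RC))
    # bending part: kappa * sum_k (1 - cos Theta_k), with Theta_k between successive bond vectors
    cos_theta = np.sum(bonds[1:] * bonds[:-1], axis=1) / (rb[1:] * rb[:-1])
    return e_bond + e_nb + kappa * np.sum(1.0 - cos_theta)
```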
For our simulations we employed generalized-ensemble Markov-chain Monte Carlo methodologies [12], most notably an extended version of the replica-exchange (parallel tempering) method [13; 14; 15; 16] in the combined space of simulation temperature and bending stiffness. Advanced sets of conformational updates were used to sample the phase space [17]. For reasonable statistics, up to \(10^{9}\) sweeps were performed per simulation. The multi-histogram reweighting procedure [18; 19] was used to determine the densities of states. Results were verified by means of multicanonical simulations [20; 21; 22].
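As an illustration of the exchange step in the combined \((T,\kappa)\) space, a minimal Metropolis criterion for swapping configurations between two replicas that differ in inverse temperature and/or bending stiffness could look as follows. This is a generic generalized-ensemble sketch built on the energy function defined above, not a description of the authors' actual implementation.

```python
import numpy as np

def swap_accepted(beta_i, kappa_i, x_i, beta_j, kappa_j, x_j, energy, rng):
    """Decide whether to exchange configurations x_i and x_j between replicas
    running at (beta_i, kappa_i) and (beta_j, kappa_j)."""
    delta = (beta_i * (energy(x_i, kappa_i) - energy(x_j, kappa_i))
             + beta_j * (energy(x_j, kappa_j) - energy(x_i, kappa_j)))
    if delta >= 0.0:
        return True
    return rng.random() < np.exp(delta)
```

For equal bending stiffness the rule reduces to the familiar parallel-tempering acceptance probability \(\min[1,\exp\{(\beta_i-\beta_j)(E_i-E_j)\}]\).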
Canonical response quantities such as the specific heat and fluctuations of the square radius of gyration, shown in Fig. 1 for various values of \(\kappa\), only exhibit one major peak, suggesting
enhanced thermal activity between entropically favored worm-like chains at higher temperatures and energetically more ordered structures at lower temperatures. It should also be noted that, even for this simple signal, both quantities locate the transition at different temperatures for any given value of \(\kappa\). This ambiguity is common to canonical statistical analysis for finite systems.
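For reference, the curves in Fig. 1 are standard canonical fluctuation quantities. A minimal estimator from sampled values of \(E\) and \(R_{g}^{2}\) at a fixed heat-bath temperature (with \(k_{\mathrm{B}}=1\)) could look like the sketch below; the names are ours and reweighting details are omitted.

```python
import numpy as np

def canonical_fluctuations(e_samples, rg2_samples, t_can, n_monomers):
    """Per-monomer specific heat C_V/N and radius-of-gyration fluctuation Gamma_g/N
    from canonical samples at heat-bath temperature t_can (k_B = 1)."""
    e = np.asarray(e_samples)
    g = np.asarray(rg2_samples)
    c_v = (np.mean(e * e) - np.mean(e)**2) / t_can**2                  # d<E>/dT
    gamma_g = (np.mean(e * g) - np.mean(e) * np.mean(g)) / t_can**2    # d<R_g^2>/dT
    return c_v / n_monomers, gamma_g / n_monomers
```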
In contrast, in the microcanonical inflection-point method [1] employed here, least-sensitive inflection points of the microcanonical entropy \(S(E)=k_{\mathrm{B}}\ln\,g(E)\), where \(g(E)\) is the density (or number) of system states at energy \(E\), and its derivatives with respect to the energy are considered indicators of phase transitions. This has been motivated by the rapid change of thermodynamic quantities like the internal energy in the vicinity of canonical transition temperatures, which causes a _maximally_ sensitive inflection point (a small variation in heat-bath temperature leads to a drastic response of the system). This results in a peak of the corresponding fluctuation or response quantity (which is why peaks in specific-heat curves like in Fig. 1 often serve as indicators of transitions in conventional canonical statistical analysis). However, in microcanonical statistical analysis the temperature is a system property. It is defined by \(T(E)=[dS(E)/dE]^{-1}\) and, consequently, it is a function of the system energy. Therefore, the corresponding inflection point marking the transition in \(T(E)\) is now _least sensitive_ to changes in energy.
Extending this analogy in a systematic way, we classify a transition as of first order, if the entropy \(S(E)\) exhibits a least-sensitive inflection point. Consequently, a second-order transition is characterized by a least-sensitive inflection point in the inverse microcanonical temperature \(\beta(E)=dS(E)/dE\). For finite systems, higher-order transitions have to be considered seriously as well and are classified accordingly. As we will see later, third-order transitions, identified here by inflection points in \(\gamma(E)=d^{2}S(E)/dE^{2}\), typically fill gaps near bifurcation points of transition lines.
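A numerical sketch of this classification is given below. It assumes a density of states \(g(E)\) tabulated on a regular energy grid (for instance from multi-histogram reweighting), builds \(S\), \(\beta\), and \(\gamma\) by finite differences, and flags the least-sensitive inflection points through the corresponding extrema in the next-higher derivative (minima of \(\beta\) for first order, maxima of \(\gamma\) for second order, as stated in the caption of Fig. 2). The function names and the absence of smoothing are our simplifications, not part of the authors' procedure.

```python
import numpy as np

def microcanonical_transitions(e_grid, ln_g):
    """Locate first- and second-order transition signals from S(E) = ln g(E) (k_B = 1)."""
    e = np.asarray(e_grid, dtype=float)
    s = np.asarray(ln_g, dtype=float)
    beta = np.gradient(s, e)        # beta(E) = dS/dE, inverse microcanonical temperature
    gamma = np.gradient(beta, e)    # gamma(E) = d^2 S/dE^2

    def interior_minima(y):
        return np.where((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:]))[0] + 1

    # A least-sensitive inflection point of S appears as a local minimum of beta (first order);
    # one of beta appears as a local maximum of gamma (second order).
    first_order = e[interior_minima(beta)]
    second_order = e[interior_minima(-gamma)]
    return first_order, second_order
```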
It should also be noted that there are two transition categories, independent and dependent transitions [1]. However, in the model studied here, all transitions were found to be independent transitions, i.e., they are not entangled with other transition processes.
With this method, we recently found that even the two-dimensional Ising model possesses two third-order transitions in addition to the familiar second-order phase transition [1; 23]. Our method has already been successfully employed in previous studies of macromolecular systems [24; 25]. It has also proven useful in supporting the understanding of the general geometric and topological foundation of transitions in phase spaces [26; 27; 28; 29].
Microcanonical entropies \(S(E)\) and inverse temperatures \(\beta(E)\) are shown in Fig. 2 for various \(\kappa\) values. The quantities are plotted as functions of the reduced energy \(\Delta E^{(\kappa)}=E-E_{0}^{(\kappa)}\), where \(E_{0}^{(\kappa)}\) is the ground-state energy estimate for the polymer with bending stiffness \(\kappa\). The least-sensitive inflection points are marked by dots. For the actual quantitative identification of these inflection points, it is useful to search for extrema in the corresponding next-higher derivative. These extrema are marked by crosses.
For \(\kappa=7\), we only find a single signal in the \(\beta\) curve (and none in \(S\)), suggesting a single second-order transition in the plotted energy range. However, at \(\kappa=9\), two different transitions emerge in close proximity: Least-sensitive inflection points in both \(S\) and \(\beta\) identify independent first- and second-order transitions. Most striking among these results are the
Figure 1: Canonical per-monomer fluctuations of energy \(C_{V}/N=(1/N)\,d\langle E\rangle/dT_{\mathrm{can}}\) (solid lines) and square radius of gyration \(\Gamma_{g}/N=(1/N)\,d\langle R_{g}^{2}\rangle/dT_{\mathrm{can}}\) (dashed lines) for a semiflexible polymer with 55 monomers as functions of the canonical heat-bath temperature at different values of the bending stiffness \(\kappa\).
Figure 2: (a) Microcanonical entropies \(S\) and (b) inverse temperatures \(\beta\) as functions of the reduced energy \(\Delta E^{(\kappa)}\) for different values of the bending stiffness \(\kappa\). Least-sensitive inflection points indicating transitions are marked by dots. Corresponding extrema in the next-higher derivative are marked by crosses and support an easier identification of the transition point: Minima in \(\beta\) indicate first-order transitions and maxima in \(\gamma\) (inset) indicate second-order transitions. Dashed vertical lines help guide the eye from the inflection points to the corresponding extremum in the next-higher derivative commonly used to identify the transition energy.
two strong first-order transition signals found in the \(\beta\) curve for \(\kappa=13\).
Therefore, it is intriguing to construct the complete hyperphase diagram, parametrized by bending stiffness \(\kappa\) and inverse microcanonical temperature \(\beta\), in the vicinity of the bifurcation point. It is shown in Fig. 3 in this range of \(\kappa\) values. We see that the coil-globule transition line, still intact from the flexible case (\(\kappa=0\)), begins to split into two branches at about \(\kappa=7\). In fact, the structural behavior of the polymers changes qualitatively from there.
Transition points identified by microcanonical analysis of simulation data are marked by symbols. In the plotted region, the hyperphase diagram is clearly dominated by three phases. The disordered regime C is governed by wormlike random-coil structures. In this phase, entropic effects enable sufficiently large fluctuations that suppress the formation of stable energetic contacts between monomers. For \(\kappa\) values just below the bifurcation, a direct transition into the toroidal phase T occurs as \(\beta\) is increased beyond the transition point. However, more interestingly, a stable intermediate phase forms if \(\kappa>7\). We characterize it as a mixed phase with hairpin (H) and loop (L) structures coexisting. Eventually, further cooling leads to another transition into the toroidal phase T. It is worth noting that upon increasing \(\kappa\) beyond the bifurcation point, the upper line starts off with third-order transitions, then turns to second order, and eventually to first order. This is a typical characteristic feature of transition lines branching off a main line. Transitions of higher-than-second order are common in finite systems. Without their consideration, the phase diagram would contain gaps.
Selected simulations of longer chains with up to 100 monomers confirm the bifurcation of transition lines, but the bifurcation point shifts to larger \(\kappa\) values and lower temperatures in this model, as expected.
The characterization of the phases was made simple by utilizing the distinct maps of pairwise monomer-monomer distances of the different structure types. Regions shaded red (\(r_{nm}<1.2\)) correspond to close contacts between monomers. Representative conformations for each phase are included in Fig. 3. The triangular maps shown underneath the structures exhibit the characteristic features of the class of structures they belong to. For extended coil-like structures prominent features are not present. Also, loops have only a small number of contacts near the tails, resulting in a short contact line parallel to the diagonal. In contrast, the toroidal structure possesses multiple windings and therefore an additional streak parallel to the diagonal appears. Hairpin structures are easily identified by the contact line that is perpendicular to the diagonal.
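A distance (contact) map of the kind shown in the insets of Fig. 3 can be generated directly from the monomer coordinates. The minimal sketch below uses the contact threshold \(r_{nm}<1.2\) quoted above; the function name and the Boolean representation are our choices.

```python
import numpy as np

def contact_map(x, cutoff=1.2):
    """Boolean N x N map of close monomer-monomer contacts (r_nm < cutoff)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)   # pairwise distance matrix
    contacts = d < cutoff
    np.fill_diagonal(contacts, False)
    return contacts
```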
Figure 3: Hyperphase diagram for the semiflexible polymers with 55 monomers, parametrized by bending stiffness \(\kappa\) and inverse microcanonical temperature \(\beta\). Red diamonds mark first-order, blue dots second-order, and purple triangles third-order transitions. Solid transition lines are guides to the eye. Representative structures, characterizing the dominant types in the respective phases (C: wormlike random coils, H: hairpins, L: loops, T: toroids), and their distance maps (lower triangles in the insets) are also shown. Monomer labels are ordered from the black (first monomer) to the white end (last monomer).

In order to quantify the population of the different structures in each phase, we have estimated the probabilities of each structure type in the energetic range that includes the two first-order transitions. Detailed results for \(\kappa=16\) are shown in Fig. 4(a). To provide context, we have also included the \(\beta\) curve as a dashed line. The respective microcanonical Maxwell constructions define the coexistence regions of the two first-order transitions, shaded in gray in Fig. 4. They clearly do not overlap and leave an energetic gap, in which the intermediate mixed hairpin-loop crossover phase is located. This is also confirmed by the plot of the energy probability distributions \(P_{\rm can}(E)=g(E)e^{-E/k_{\rm B}T_{\rm can}}/Z_{\rm can}\) (where \(Z_{\rm can}=\int dE\,g(E)e^{-E/k_{\rm B}T_{\rm can}}\) is the canonical partition function), shown in Fig. 4(b) for various canonical temperatures in the transition region. The envelope of these curves exhibits two noticeable suppression regions, where the inflection-point method indicated the first-order transitions.
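For completeness, the following sketch shows how canonical energy distributions of the type plotted in Fig. 4(b) can be evaluated from an estimated density of states; it works with \(\ln g(E)\) in log space to avoid overflow, and the synthetic \(\ln g(E)\) used here is only a placeholder for actual multicanonical estimates.

```python
import numpy as np

def canonical_distribution(E, lnG, T, kB=1.0):
    """P_can(E) = g(E) exp(-E / kB T) / Z_can, evaluated from ln g(E) in log space."""
    lw = lnG - E / (kB * T)                # logarithm of the unnormalized Boltzmann weight
    lw -= lw.max()                         # stabilize before exponentiating
    w = np.exp(lw)
    return w / (w.sum() * (E[1] - E[0]))   # normalize: integral over E equals 1

# Placeholder density of states on an energy grid (increasing, concave ln g)
E = np.linspace(-200.0, -5.0, 800)
lnG = -0.01 * E ** 2
for T in (0.4, 0.5, 0.6):
    P = canonical_distribution(E, lnG, T)
    print(f"T = {T}: most probable energy E* = {E[np.argmax(P)]:7.1f}")
```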
As expected, coil structures dominate at high energies, but their presence is rapidly diminished by the formation of more ordered hairpin structures. These conformations, already stabilized by van der Waals contacts, still provide sufficient entropic freedom for the dangling tail. The loop part of the hairpin helps reduce the stiffness restraint. It is noteworthy that pure loop structures also contribute significantly to the population, although on a lesser scale in this region. The actual crossover from hairpins to loops happens within the intermediate phase, which is why we consider it a mixed phase. Even though hairpin and loop structures may be irrelevant at very low temperatures, they represent biologically significant secondary structure types at finite temperatures. The tail can easily be peeled off, contact pair by contact pair, with little energetic effort, which supports essential macromolecular processes on the DNA and RNA level such as transcription and translation. The phase diagram shown in Fig. 3 tells us how important it is to discern the phase dominated by these structures.
Upon reducing the energy (and therefore also entropy), forming energetically favorable van der Waals contacts becomes the dominant structure formation strategy and loops coil in to eventually form toroids. Further lowering the energy toward the ground state may even lead to knotting [6; 7].
We would like to emphasize that we have also performed selected simulations of semiflexible chains in this generic model with 70 and 100 monomers, which led to essentially the same qualitative results. Quantitatively, we observe a shift of the bifurcation point toward higher bending-stiffness values. This is expected, of course, as the number of possible energetic contacts scales with the number of monomers, which requires a larger energetic penalty to break these contacts. It also helps explain why microbiological structures are not only finite, but exist on a comparatively small, mesoscopic length scale. At the physiological scale, structure formation processes of large systems would be much more difficult to control and to stabilize. This also means that studying such systems in the thermodynamic limit may not help in understanding the physics at mesoscopic scales. Therefore, employing alternative statistical analysis methods as in this Letter is more beneficial than the application of standard procedures, however successful they have been in studies of other problems.
To conclude, neither canonical energetic nor structural fluctuation quantities hint at the existence of two clearly separated transitions for semiflexible polymers, which we could nevertheless identify by microcanonical inflection-point analysis. Conventional canonical analysis is too coarse: the intermediate phase is simply washed out in the averaging process. This should be considered a problem, particularly when standard canonical analysis methods are employed in studies of finite systems. In the generic model for semiflexible polymers used in our Letter, the intermediate phase accommodates loop and hairpin structures, which are found in biomacromolecular systems including types of DNA and RNA. We conclude that bending stiffness is not only a necessary property of polymers for the formation of distinct and biologically relevant structures at finite temperatures; it also stabilizes the phase dominated by these structure types in a thermal environment, where entropy and energy effectively compete with each other. Neither flexible polymers nor crystalline structures would be as adaptable _and_ stable as semiflexible polymers are under physiological conditions. This is fully compliant with Nature's governing principle, in which sufficient order is provided to enable the formation of stable mesostructures, while at the same time enough disorder allows these structures to explore variability. This makes them functional in a stochastic, thermal environment, with sufficient efficiency to enable lifeforms to exist and survive under these conditions.
We thank the Georgia Advanced Computing Resource Center at the University of Georgia for providing computational resources.
|
2306.08533 | Early-Stopped Technique for BCH Decoding Algorithm Under Tolerant Fault
Probability | In this paper, a technique for the Berlekamp-Massey (BM) algorithm is provided to reduce decoding latency and save decoding power by early termination, or early-stopped checking. We investigate consecutive zero discrepancies during the decoding iterations and decide to stop the decoding process early. This technique is subject to decoding failure in exchange for reduced decoding latency. We analyze our proposed technique by considering the weight distribution of BCH codes and estimating bounds on the undetected error probability as the probability of an erroneous stop check. The proposed method is effective in numerical results, and the probability of decoding failure is lower than $10^{-119}$ for decoding BCH codes of length 16383. Furthermore, the complexity of the conventional early-termination method is compared with that of the proposed approach for decoding long BCH codes; the proposed approach reduces the complexity of the conventional approach by up to 80\%. Finally, FPGA testing on a USB device validates the reliability of the proposed method. | Hong-fu Chou | 2023-06-14T14:27:02Z | http://arxiv.org/abs/2306.08533v4 | # Early-Stopped Technique for BCH Decoding Algorithm Under Tolerant Fault Probability
###### Abstract
In this paper, a technique for the Berlekamp-Massey (BM) algorithm is provided to reduce decoding latency and save decoding power by early termination, or early-stopped checking. We investigate consecutive zero discrepancies during the decoding iterations and decide to stop the decoding process early. This technique is subject to decoding failure in exchange for reduced decoding latency. We analyze our proposed technique by considering the weight distribution of BCH codes and estimating bounds on the undetected error probability as the probability of an erroneous stop check. The proposed method is effective in numerical results, and the probability of decoding failure is lower than \(10^{-119}\) for decoding BCH codes of length 16383. Furthermore, the complexity of the conventional early-termination method is compared with that of the proposed approach for decoding long BCH codes; the proposed approach reduces the complexity of the conventional approach by up to 80%. Finally, FPGA testing on a USB device validates the reliability of the proposed method.
BCH code, BCH decoding, Berlekamp-Massey algorithm, low latency design, early stop, early termination.
## I Introduction
Flash memory [1][2] serves as the main non-volatile storage device, and the flash interface unit is applied in system-on-chip (SoC) products. The market size of NAND flash memories is still growing and is projected to see a compound annual growth rate of 6.39% [3]. Flash memory provides a low-power solution for storage systems, and its small size and light form factor are essential properties for this type of storage. The flash interface unit [4] provides basic flash commands which can be used by the main central processing unit (CPU) to access data from the flash memory. It is assumed that the flash memory is non-removable since it is used to initiate the boot process based on information from the firmware.
Flash memory plays an important role in the storage device in executing the tasks to be performed by the main CPU. The tasks are essentially to read and write files, as in any generic file system. The flash interface unit has mainly provided a reliable component for graphics and multimedia processors and has been applied to digital televisions, car navigation systems, and mobile applications. To support multimedia applications, flash interface units have been optimized for large block reads and writes, as presented in [5]. To minimize interaction with the main CPU, the flash interface unit supports direct memory access (DMA) [6] when transferring from the flash memory to the system DRAM memory.
In SoC applications, all of the boot information is generally stored in flash memory. A number of partitions for the boot loader code and the flash file system are created in the flash memory. In [6], the DMA interacts with the error control coding (ECC) block, which serves two main purposes. The first is to generate the ECC bytes and program them in the spare area, and the second is to correct the data in the data buffer. Consequently, the ECC engine is a critical issue for system performance. The chip area is dominated by the ECC decoder, which comprises a high percentage of the flash controller.
The Bose-Chaudhuri-Hocquenghem (BCH) code has become the ultimate solution for the ECC engine in recent years. In coding theory, the BCH codes form a class of cyclic error-correcting codes that are constructed using finite fields. The decoding algorithm is based on a feasible implementation, for which the Berlekamp-Massey (BM) algorithm [7] has been widely selected in typical examples. The complexity of the decoding is competitive owing to the linear-feedback-shift-register structure underlying the BM algorithm. However, system latency suffers from a larger error correction capability \(t\), which requires \(2t\) iterations of conventional BM decoding, and common applications require high error-correcting capability. The long decoding time has become a bottleneck in system performance when using BM decoding. The error distribution for flash memory shows few errors at the beginning of its usage, and small numbers of errors dominate the probability of error events within a code block. In order to overcome this degradation, early termination of BM decoding is necessary to improve system performance for high-speed applications. In [8], the authors adopt a restricted Gaussian elimination on the Hankel-structured augmented syndrome matrix to reinterpret an early-stopped version of the Berlekamp-Massey algorithm. This approach proves that a minimum of \(t+e\) iterations of the Berlekamp-Massey algorithm is required, where \(e\) is the number of error bits. Following this thread, the author of [9] presents a feasible approach for early termination, while an investigation of the malfunction probability was presented in [10].
In this paper, the probability of decoding failure is considered in exchange for the feasibility of early-stopped BM decoding. The proposed technique terminates conventional BM decoding after fewer than \(t+e\) iterations so as to reduce redundant latency. However, the proposed technique is subject to the decoding failure problem. The probability that a detection error will occur must be evaluated to ensure the reliability of the proposed approach. Consequently, we propose an early-stopped technique for BM decoding by observing certain conditions while performing the decoding iterations. In Section II, we present the early-stopped checking procedure of BM decoding based on observing consecutive zero discrepancies. Since zero discrepancies provide information about detectable decoding, it is an interesting problem to estimate the probability of undetectable decoding after consecutive zero discrepancies. We provide an estimate of erroneous early-stopped checking by means of the undetected error probability from [11]. Combining this with the early-stopped checking criterion in [9], we propose our approach. In Section III, the complexity analysis is presented and compared with the conventional early-stopped BM approach. In Section IV, numerical results are presented to evaluate the feasibility of a practical application. Conclusions are presented in Section V.
## II Early stopped approach based on the view of discrepancy for the BM algorithm
In coding theory, BCH codes [12][13] are constructed using polynomials over a finite field (also called a Galois field, denoted GF(q)). One of the key features of BCH codes is that, during code design, there is precise control over the number of symbol errors that are correctable by the code. In particular, it is possible to design binary BCH codes that can correct multiple bit errors up to a correction capability of \(t\) bits. Another advantage of BCH codes is the ease with which they can be decoded, namely via an algebraic method known as syndrome decoding. This simplifies the design of decoders for these codes using small, low-power electronic hardware.
BCH codes are used in applications such as satellite communications, compact disc players, DVDs, disk drives, solid-state drives, etc.
There are many algorithms for decoding BCH codes. The most common follow this general outline:
1. Calculate the syndromes for the received vector
2. Determine the number of errors \(v\) and the error locator polynomial \(N(x)\) from the syndromes
3. Calculate the roots of the error location polynomial to determine the error locations \(X_{i}\)
4. Calculate the error values at those error locations
5. Correct the errors
The decoding algorithm may determine that the received vector contains too many errors and cannot be corrected. For example, if the number of errors is greater than the correction capability, then the correction would fail. In a truncated (not primitive) code, an error location may be out of range. If the received vector has more errors than the code can correct, the decoder may unknowingly produce an apparently valid message that is not the one that was sent.
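As a concrete illustration of step 1 of the outline above, the following self-contained sketch computes the syndromes of a received word for a small binary BCH code over \(GF(2^{4})\). The field, the code length, and the error positions are chosen for illustration only and are not the parameters used later in this paper.

```python
# GF(2^4) arithmetic table, primitive polynomial x^4 + x + 1 (illustrative field)
M, PRIM = 4, 0b10011
N = (1 << M) - 1                        # codeword length n = 2^M - 1 = 15
EXP = [0] * (2 * N)
x = 1
for i in range(N):
    EXP[i] = x
    x <<= 1
    if x & (1 << M):                    # reduce modulo the primitive polynomial
        x ^= PRIM
for i in range(N, 2 * N):
    EXP[i] = EXP[i - N]

def syndromes(r_bits, t):
    """Step 1 of the outline: S_j = r(alpha^j) for j = 1, ..., 2t (binary BCH code)."""
    S = []
    for j in range(1, 2 * t + 1):
        acc = 0
        for i, bit in enumerate(r_bits):
            if bit:
                acc ^= EXP[(i * j) % N]  # add alpha^(i*j); addition in GF(2^M) is XOR
        S.append(acc)
    return S

# All-zero codeword corrupted by two bit errors (t = 2): non-zero syndromes flag them
r = [0] * N
r[3] ^= 1
r[10] ^= 1
print(syndromes(r, t=2))
```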
In order to determine possible solutions for shortening the BM decoding process, based on the results in [9] and [8], we classify the solutions into the following two conditions.
\(Condition\)\(1\):
For the \(u\)-th iteration of the BM algorithm, the discrepancy at iteration \(u\) is denoted by \(d_{u}\), and all discrepancies in the next \(t-l_{u}-1\) steps of the iteration are zero.
\(Condition\)\(2\):
If the number of errors in the received polynomials is \(v\), only \(t+v\) steps of the iteration are needed in order to determine the error-location polynomials.
### _Heuristics for consecutive zero discrepancies_
Following the thread of \(Condition\) \(2\), the probability of the erroneous event is investigated from the viewpoint of the discrepancy as follows. Discrepancies equal to zero in certain iterations, as shown in \(Condition\) \(1\), indicate that the detection capability has reached a certain level after \(l_{u}\) iterations, i.e. \(l_{u}=v\), where \(v\) is the number of error bits hypothesized by our proposed approach.
\(Heuristic\)\(1\): Let a BCH code \(\zeta\) have minimum Hamming distance \(d\geq 2t+1\) and let \(\zeta^{v+\kappa}\subset\zeta\) denote a BCH code subset with minimum Hamming distance \(d_{s}\geq v+\kappa\), where \(\kappa\) is the number of consecutive zero discrepancies following the \(v\)-th iteration of the BM algorithm. The next \(\kappa\) steps actually occur with \(v+\kappa\leq 2t\). \(Rationale\): The Hamming distance between the received codeword \(r\) and the transmitted codeword \(c\) is presented as \(d(r,c)=i\), \(i<t\), where \(c\in\zeta^{v+\kappa}\).
\(Heuristic\)\(2\): The error pattern \(\xi\) corrupts the codeword \(c\); this can also be presented as \(r=c+\xi\) and \(d(r,c)=d(\xi,c)\). \(Rationale\): Assume \(e=v\), where \(e\) denotes the exact number of error bits caused by the channel without a decoding fault. Otherwise, a malfunction occurs when the location of the error pattern is beyond the detection capability at the \(l_{u}=v+\kappa\) iteration, which indicates the case \(v+\kappa<e\).
### _Numerical Analysis of fault probability for the proposed early stopped technique_
Based on the above heuristics, the error event of observing consecutive zero discrepancies during the decoding iterations is investigated as follows. A non-zero discrepancy occurs after performing \(v+\kappa\) BM decoding iterations and the codeword \(c\in\{\zeta^{v+\kappa}-\zeta\}\), which results in the proposed technique failing to provide a correct BM decoding. Hence, the probability of malfunction is given as follows.
\[P_{mf}=p[v+\kappa<e]\] \[=\sum_{i=0}^{t}P[d(r,c)=i|c\in\zeta^{i+\kappa}-\zeta]\] \[=\sum_{i=0}^{t}P[d(\xi,c)=i|c\in\zeta^{i+\kappa}]-P[d(\xi,c)=i|c \in\zeta] \tag{1}\]
According to [11], the probability \(P_{ud}\) that an undetected error will occur can be bounded under the assumption of a long codeword length \(n\), where \(m\) is the degree of the underlying Galois field \(GF(2^{m})\) (so that \(n=2^{m}-1\)),
\[P_{ud}=\sum_{i=0}^{t}P[d(\xi,c)=i|c\in\zeta]\] \[\cong 2^{-mt}\sum_{s=0}^{t}{n\choose s}\sum_{h=t+1}^{n}{n\choose h} \varepsilon^{h}(1-\varepsilon)^{n-h} \tag{2}\]
The difference between the upper and lower bounds on the undetected error probability is limited to 1%. We further extend the bounds on the probability of an error pattern given in [14] and [11]. The conditional probability for a BCH code \(\zeta^{d^{\prime}}\) that has minimum Hamming distance \(d^{\prime}\) is interpreted as follows.
\[P[d(\xi,c)=i|c\in\zeta^{d^{\prime}}]\cong\sum_{h=(d^{\prime}+1)/2}^{n}{n \choose h}\varepsilon^{h}(1-\varepsilon)^{n-h} \tag{3}\]
Substituting (3) into (1), the probability of malfunction can be estimated as
\[P_{mf}\cong 2^{-mt}[\sum_{s=0}^{t}{n\choose s}\sum_{h=(s+\kappa+1)/2}^ {n}{n\choose h}\varepsilon^{h}(1-\varepsilon)^{n-h}\] \[-\sum_{s=0}^{t}{n\choose s}\sum_{h=t+1}^{n}{n\choose h} \varepsilon^{h}(1-\varepsilon)^{n-h}] \tag{4}\]
Furthermore, (4) can be simplified further using bounds of the type considered in [11], defining \(\lambda_{1}=(s+\kappa+1)/(2n)\) and \(\lambda_{2}=(t+1)/n\).
\[P_{mf}\cong 2^{-mt}\sum_{s=0}^{t}{n\choose s}[2^{-nE(\lambda_{1},\varepsilon )}-2^{-nE(\lambda_{2},\varepsilon)}] \tag{5}\]
where \(E(\lambda,\varepsilon)\) is the relative entropy between the binary probability distribution \(\lambda\) and \(\varepsilon\).
\[E(\lambda,\varepsilon)=H(\varepsilon)+(\lambda-\varepsilon)H^{\prime}(\varepsilon)-H(\lambda) \tag{6}\] \[=\lambda\log_{2}(\lambda/\varepsilon)+(1-\lambda)\log_{2}((1-\lambda)/(1-\varepsilon))\]
Based on the above observations of the discrepancies \(d_{j}\) during the BM iterations, we describe the proposed early-stopped checking method below. The proposed method is denoted as the early-stopped (ES) version, and we provide three different versions. For the \(j\)-th iteration of BM decoding, we check the discrepancies according to the proposed method. We denote by \(\delta_{max}\) the maximum error-location degree of the BM algorithm.
ES version 1 in Algorithm 1, which checks 4 consecutive zero discrepancies, and ES version 2 in Algorithm 2, which checks 6 consecutive zeros, are presented to summarize a combination of the early-stopping approach of [9] and our technique. ES version 3 in Algorithm 3 is the core of our proposed approach and reveals the best complexity reduction.
Beginning from \(j=4\) as \(j\)-th iteration of the BM algorithm, verify the following steps:
1. Check Case A: \(t+\delta_{max}/2\) = \(j\)
2. Check Case B: \(d_{j}\), \(d_{j-1}\), \(d_{j-2}\) and \(d_{j-3}\) are all zero.
3. If Case A and Case B are satisfied, terminate the BM decoding. Otherwise, proceed to the next BM iteration and return to Step 1.
**Algorithm 1** ES version 1
Beginning from \(j=4\) as \(j\)-th iteration of the BM algorithm, verify the following steps:
1. Check Case A: \(t+\delta_{max}/2\) = \(j\)
2. Check Case B: \(d_{j}\), \(d_{j-1}\), \(d_{j-2}\), \(d_{j-3}\), \(d_{j-4}\), \(d_{j-5}\) are all zero.
3. If Case A and Case B are satisfied, terminate the BM decoding. Otherwise, proceed to the next BM iteration and return to Step 1.
**Algorithm 2** ES version 2
Beginning from \(j=6\) as \(j\)-th iteration of the BM algorithm, verify the following steps:
1. Check Case A: \(t+\delta_{max}/2\) = \(j\)
2. Check Case B: \(d_{j}\), \(d_{j-1}\), \(d_{j-2}\), \(d_{j-3}\), \(d_{j-4}\), \(d_{j-5}\) are all zero.
3. If Case A and Case B are satisfied, terminate the BM decoding. Otherwise, proceed to the next BM iteration and return to Step 1.

**Algorithm 3** ES version 3
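The checking logic of the proposed ES versions is compact enough to be stated in a few lines of code. The sketch below is an illustrative implementation of the ES version 3 criterion (Case A and Case B) wrapped around a generic BM iteration loop; the discrepancy computation, the error-locator update, and the toy discrepancy sequence are placeholders, since the actual \(GF(2^{m})\) arithmetic depends on the decoder implementation.

```python
def es3_stop(t, delta_max, discrepancies, zeros_required=6):
    """ES version 3 check after the len(discrepancies)-th BM iteration.

    Case A: t + delta_max/2 equals the current iteration number j.
    Case B: the last `zeros_required` discrepancies are all zero.
    """
    j = len(discrepancies)
    if j < zeros_required:
        return False
    case_a = (t + delta_max / 2) == j
    case_b = all(d == 0 for d in discrepancies[-zeros_required:])
    return case_a and case_b

def bm_with_early_stop(syndromes, t, compute_discrepancy, update_locator):
    """Generic BM loop with the ES version 3 check after every iteration.

    `compute_discrepancy` and `update_locator` stand in for the usual GF(2^m)
    Berlekamp-Massey steps (discrepancy evaluation and error-locator update);
    they are placeholders and not part of the proposed criterion itself.
    """
    discrepancies, delta_max = [], 0
    for j in range(1, 2 * t + 1):
        d_j = compute_discrepancy(j, syndromes)
        discrepancies.append(d_j)
        delta_max = update_locator(j, d_j, delta_max)
        if es3_stop(t, delta_max, discrepancies):
            return j              # iterations actually performed (early stop)
    return 2 * t                  # conventional termination after 2t iterations

# Toy demonstration with a scripted discrepancy sequence (2 errors, t = 10)
demo_d = [7, 3] + [0] * 18        # discrepancies vanish once both errors are captured
stop_at = bm_with_early_stop(
    syndromes=None, t=10,
    compute_discrepancy=lambda j, S: demo_d[j - 1],
    update_locator=lambda j, d, dm: 2,     # pretend the locator degree settled at 2
)
print(stop_at)                    # 11 = t + delta_max/2, instead of 2t = 20 iterations
```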
## III Complexity analysis
The early-stopped technique saves processing time and lowers power consumption. In this section, an analysis of the multiplicative complexity is presented. Thanks to the author of [8], the upper bounds from that complexity analysis can be used to evaluate the proposed technique by comparing it with the conventional BM algorithm and its related early-stopped technique. Since our proposed technique stops the conventional BM algorithm under certain conditions, the complexity of decoding can be computed by considering that the conventional BM algorithm is stopped after \(e+\kappa\) iterations. Following the thread in [8], the multiplicative complexity \(C_{ES3}\) of the proposed ES version 3 is upper bounded by \(2e(e+\kappa)-1\), which requires at most \(e+\kappa\) steps to check the discrepancies \(d_{j}\). We summarize the comparison in Table I to show the merit of our proposed technique; \(e\) denotes the exact number of error bits caused by the channel. Compared with the proposed technique, the conventional BM algorithm and the conventional early-stopped technique enjoy low complexity when decoding short BCH codes with small \(t\). However, the complexity of our proposed technique is not related to the parameter \(t\) and is dominated only by \(e^{2}+e\kappa\), which is quite beneficial for decoding long BCH codes with a larger number of correctable bits \(t\). The complexity analysis results are relevant to applications such as NAND flash and future satellite communication. A BCH code of length 16384 with large \(t=72\) is considered. For example, with \(t=72\), \(e=2\) and \(\kappa=6\), the complexity reduction ratio of the proposed technique, denoted \(1-C_{ES3}/C_{ESBM}\), is equal to 79%. We present the complexity reduction ratio in Fig. 1; the proposed technique can reach up to an 80% improvement over the early-stopped approach in [8]. The complexity reduction comes from taking the risk of decoding failure. Hence, we investigate the probability of decoding failure for the proposed technique in the following section.
## IV Numerical results
The proposed early-stopped technique has the capability to reduce decoding latency. For example, an error-correcting capability \(t\) equal to 72 leads to a huge area cost for implementing the BCH decoder, and the decoding latency of BM decoding degrades the system performance of the DMA accessing the flash memory. The authors in [11] obtained bounds on the probability of undetected errors in binary primitive BCH codes and showed that the bounds are quantified by the deviation factor of the true weight distribution from the binomial-like weight distribution. This result gives us a promising way to show that a long primitive BCH code can be robust when applying an early-stopped technique in a NAND flash system.
First, we consider a BCH code with length equal to 31 in \(GF(2^{5})\) that can correct \(t=3\) errors, for which there are \(2^{31}\) possible received words. During the decoding of a received word, the discrepancies are computed, and we consider the case shown in Table II. If we observe that the discrepancies are consecutively zero, we can compute the probability of a failure event occurring when \(d^{\prime}\) is non-zero. A conditional failure event can cause the proposed method to fail to decode a correct codeword, subject to the observation of consecutively zero discrepancies. The failure rate \(P_{o}\) is illustrated based on equation (4) for a certain degree of non-zero discrepancy during each iteration. In Fig 2, it can be observed that \(P_{o}\) is bounded by \(1.25562\times 10^{-4}\). This simple example illustrates the problem causing the decoding failure.
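To make this example reproducible, the short script below evaluates the malfunction-probability bound (4) directly for the small code (\(n=31\), \(m=5\), \(t=3\)). It is an illustrative re-implementation rather than the authors' code: the value \(\kappa=2\) (two observed zero discrepancies before \(d^{\prime}\)), the rounding of the non-integer lower summation limit \((s+\kappa+1)/2\), and the scanned cross-over probabilities are assumptions.

```python
import math

def binom_tail(n, start, eps):
    """Sum of C(n,h) * eps^h * (1-eps)^(n-h) over h = start, ..., n."""
    return sum(math.comb(n, h) * eps ** h * (1 - eps) ** (n - h)
               for h in range(start, n + 1))

def P_mf(n, m, t, kappa, eps):
    """Malfunction-probability bound of Eq. (4)."""
    total = 0.0
    for s in range(t + 1):
        start = math.ceil((s + kappa + 1) / 2)   # assumed rounding of the lower limit
        total += math.comb(n, s) * (binom_tail(n, start, eps) - binom_tail(n, t + 1, eps))
    return 2.0 ** (-m * t) * total

# GF(2^5) code of length 31 with t = 3; kappa = 2 zero discrepancies observed
n, m, t, kappa = 31, 5, 3, 2
for eps in (1e-3, 2.5e-3, 5e-3):
    print(f"eps = {eps:.4f}:  P_mf ~ {P_mf(n, m, t, kappa, eps):.3e}")
```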
Consequently, it is interesting to investigate how we should set the parameter \(\kappa\). The probability of erroneous early-stopped checking for the proposed ES versions can be calculated using equation (5). In Fig 3, a BCH code with length 1024 and \(t=17\) is presented to show that the highest probability of an erroneous event for the proposed ES version 3 is \(1.63752\times 10^{-12}\) for \(\kappa=1\), \(1.7629\times 10^{-15}\) for \(\kappa=2\) and \(1.77413\times 10^{-18}\) for \(\kappa=3\), respectively. As a result, the trade-off between the failure probability and the early-stopped technique is not good enough when we use \(\kappa=1,2,3\). In particular, a threshold is set as \(\kappa\geqslant 4\), which yields a probability of an erroneous event of \(1.7005\times 10^{-21}\) for \(\kappa=4\) and \(1.85605\times 10^{-26}\) for \(\kappa=6\).
Furthermore, we show that the problem of decoding failure caused by the early-stopped technique can be neglected owing to the nature of long BCH codes. Using equation (5), as shown in Fig 4, a BCH code with length 16384 and \(t=72\) is presented as an example to reveal the effectiveness of the proposed early-stopped checking method. For ES version 3 with \(\kappa=6\), the highest probability of undetected errors is calculated as \(6.49437\times 10^{-119}\) over the cross-over probability at \(2.5\times 10^{-3}\). It can be shown as an example that ES version 3 provides a reliable result for early
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Discrepancy & \(>0\) & 0 & 0 & d’ \\ \hline BM iteration & 1 & 2 & 3 & 4 \\ \hline \end{tabular}
\end{table} TABLE I: COMPARISON OF UPPER BOUNDS OF FINITE-FIELD MULTIPLICATIVE COMPLEXITY
Fig. 1: The complexity reduction of the proposed technique with t=72.
Fig. 2: The probability of an erroneous event during early-termination checking using the \(GF(2^{5})\) BCH code with t=3.
stopping after observing that the discrepancies are consecutively zero. For practical applications, the proposed ES version 3 should be considered to prevent decoding failure during the interaction between the firmware and the decoder. As a matter of fact, the reliability of the early-stopped method is the major concern for the flash controller, rather than the performance comparison. If a detection failure occurred in the BCH decoder, the credibility of hard decoding would collapse. To address this issue, this paper focuses on the practical consideration of investigating the malfunction probability in this sense. To evaluate the credibility of the proposed method, we have carried out a complete test based on an FPGA board from the Altera Stratix II family, which operates at a clock rate of 110 MHz and uses a BCH code length of 16384 that is suitable for USB firmware testing. The system throughput is set to 480 Mbps based on the USB 2.0 standard. The total number of test samples amounts to \(5.9793\times 10^{35}\). Each test sample contains a data package of 3 BCH code blocks, and the code length is 16383 using the \(GF(2^{14})\) BCH code with t=72. This result means that we never encountered any decoding failure while using a storage device based on the proposed design. Finally, this technique has been applied to commercial USB devices since 2012; the USB controller, named BR825CA, is illustrated in Fig. 5.
## V Conclusion
We have provided a practical solution for early-termination checking while decoding BCH codes. The complexity analysis and numerical results are presented to show the merit of the proposed technique, which is suitable for long BCH codes with large error-correcting capability, with a complexity reduction of up to 80% over the conventional early-stopped approach in [8]. The trade of a small decoding-failure probability for reduced decoding latency is successful, since the numerical results illustrate that the probability of undetected errors is lower than \(6.49437\times 10^{-119}\) for the \(GF(2^{14})\) BCH code with t=72. The FPGA testing on a USB device using a BCH code of length 16384 has been implemented to justify the reliability of the early-termination checking strategy, and the number of testing samples accumulated is up to \(5.9793\times 10^{35}\). This approach is shown to provide a solution for a practical design.
|
2304.10275 | Discrete Heat Equation with irregular thermal conductivity and tempered
distributional data | In this paper, we consider a semi-classical version of the nonhomogeneous
heat equation with singular time-dependent coefficients on the lattice $\hbar
\mathbb{Z}^n$. We establish the well-posedeness of such Cauchy equations in the
classical sense when regular coefficients are considered, and analyse how the
notion of very weak solution adapts in such equations when distributional
coefficients are regarded. We prove the well-posedness of both the classical
and the very weak solution in the weighted spaces $\ell^{2}_{s}(\hbar
\mathbb{Z}^n)$, $s \in \mathbb{R}$, which is enough to prove the well-posedness
in the space of tempered distributions $\mathcal{S}'(\hbar \mathbb{Z}^n)$.
Notably, when $s=0$, we show that for $\hbar \rightarrow 0$, the classical
(resp. very weak) solution of the heat equation in the Euclidean setting
$\mathbb{R}^n$ is recaptured by the classical (resp. very weak) solution of it
in the semi-classical setting $\hbar \mathbb{Z}^n$. | Marianna Chatzakou, Aparajita Dasgupta, Michael Ruzhansky, Abhilash Tushir | 2023-04-20T12:58:45Z | http://arxiv.org/abs/2304.10275v1 | # Discrete heat equation with irregular thermal conductivity and tempered distributional data
###### Abstract.
In this paper, we consider a semi-classical version of the nonhomogeneous heat equation with singular time-dependent coefficients on the lattice \(\hbar\mathbb{Z}^{n}\). We establish the well-posedness of such Cauchy equations in the classical sense when regular coefficients are considered, and analyse how the notion of very weak solution adapts to such equations when distributional coefficients are regarded. We prove the well-posedness of both the classical and the very weak solution in the weighted spaces \(\ell_{s}^{2}(\hbar\mathbb{Z}^{n}),\;s\in\mathbb{R}\), which is enough to prove the well-posedness in the space of tempered distributions \(\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\). Notably, when \(s=0\), we show that for \(\hbar\to 0\), the classical (resp. very weak) solution of the heat equation in the Euclidean setting \(\mathbb{R}^{n}\) is recaptured by the classical (resp. very weak) solution of it in the semi-classical setting \(\hbar\mathbb{Z}^{n}\).
M. Chatzakou is a postdoctoral fellow of the Research Foundation - Flanders (FWO) under the postdoctoral grant No 12B1223N. A. Dasgupta is supported by Core Research Grant, RP03890G, Science and Engineering Research Board (SERB), DST, India. M. Ruzhansky is supported by the EPSRC Grant EP/R003025, by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations, and by the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021). The last author is supported by the institute assistantship from Indian Institute of Technology Delhi, India.
## 1. Introduction
The solvability of the heat equation, as well as the inverse problem for the heat equation where thermal coefficients are constant, time-dependent, or space-dependent, has been studied extensively in many different settings.
For the case of the heat equation with singular potentials in the space variable, we refer to the works [10, 11] for an overview of the known results in different settings. For the case where the time-dependent coefficients are considered, we refer to the works [12, 13, 14, 15] and [16], that apply in settings different from the lattice \(\mathbb{Z}^{n}\). In the particular case of the lattice \(\mathbb{Z}^{n}\) we have the work [17] where constant coefficients are considered. In the aforesaid setting, the heat equation with constant coefficients reads as the parabolic Anderson model, see e.g. [1] and references therein, and the asymptotic analysis of its solution is a topic of wide interest in the field of stochastic analysis.
The setting in this current work is, for a (small) semi-classical parameter \(\hbar>0\), given by
\[\hbar\mathbb{Z}^{n}=\{x\in\mathbb{R}^{n}:x=\hbar k,\ k\in\mathbb{Z}^{n}\},\]
and clearly includes the \(n\)-dimensional lattice \(\mathbb{Z}^{n}\) as a special case. For \(\alpha>0\), the discrete fractional Laplacian on \(\hbar\mathbb{Z}^{n}\) denoted by \(\left(-\mathcal{L}_{\hbar}\right)^{\alpha}\) is defined by
\[\left(-\mathcal{L}_{\hbar}\right)^{\alpha}u(k):=\sum_{j\in\mathbb{Z}^{n}}a_{ j}^{(\alpha)}u(k+j\hbar),\quad k\in\hbar\mathbb{Z}^{n}, \tag{1.1}\]
where the expansion coefficient \(a_{j}^{(\alpha)}\) is given by
\[a_{j}^{(\alpha)}:=\int\limits_{\left[-\frac{1}{2},\frac{1}{2}\right]^{n}}\left[\sum_{l=1}^{n}4\sin^{2}\left(\pi\xi_{l}\right)\right]^{\alpha}e^{-2\pi ij\cdot\xi}\mathrm{d}\xi. \tag{1.2}\]
For more information on the discrete fractional Laplacian, see Section 3.
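For readers who wish to experiment with the operator (1.1), the following sketch computes the coefficients \(a_{j}^{(\alpha)}\) of (1.2) numerically in dimension \(n=1\) by a simple quadrature over \([-1/2,1/2]\), and applies the truncated sum (1.1) on a periodized sample grid. The truncation radius, the quadrature resolution, and the periodic wrap-around are numerical conveniences introduced here for illustration.

```python
import numpy as np

def frac_lap_coeffs(alpha, jmax, quad_points=4096):
    """Coefficients a_j^(alpha), |j| <= jmax, from Eq. (1.2) in dimension n = 1."""
    xi = np.linspace(-0.5, 0.5, quad_points, endpoint=False)
    symbol = (4.0 * np.sin(np.pi * xi) ** 2) ** alpha
    js = np.arange(-jmax, jmax + 1)
    phases = np.exp(-2j * np.pi * np.outer(js, xi))
    # The symbol is even in xi, so the coefficients are real
    return js, np.real(phases @ symbol) / quad_points

def apply_frac_lap(u, alpha, jmax=50):
    """Truncated version of Eq. (1.1) on samples u[i] = u(hbar*i), with periodic wrap-around."""
    js, a = frac_lap_coeffs(alpha, jmax)
    out = np.zeros_like(u, dtype=float)
    for j, aj in zip(js, a):
        out += aj * np.roll(u, -j)     # u(k + j*hbar) on the sample grid
    return out

# Sanity check: for alpha = 1 the coefficients reduce to the stencil a_0 = 2, a_{+-1} = -1
js, a = frac_lap_coeffs(alpha=1.0, jmax=3)
print(dict(zip(js.tolist(), np.round(a, 6).tolist())))
```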
For the space variable \(k\in\hbar\mathbb{Z}^{n}\) in this setting, we analyse the semi-classical analogue of the heat equation in the Euclidean setting, which reads as follows:
\[\left\{\begin{array}{l}\partial_{t}u(t,k)+a(t)\hbar^{-2\alpha}\left(- \mathcal{L}_{\hbar}\right)^{\alpha}u(t,k)+b(t)u(t,k)=f(t,k),\quad(t,k)\in(0,T] \times\hbar\mathbb{Z}^{n},\\ u(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{1.3}\]
where \(a=a(t)\geq 0\) is the thermal conductivity, \(b\) is a real-valued bounded potential and \(f\) is the heat source.
The first aim of the current work is to prove the well-posedness of the classical solution of the heat equation as in (1.3) in the semi-classical setting \(\hbar\mathbb{Z}^{n}\). Let us note that in [14, 15] the authors examine the discrete wave equation with time-dependent coefficients and the discrete Klein-Gordan equation with regular potential and prove that they are well-posed in \(\ell^{2}(\hbar\mathbb{Z}^{n})\). In this work, we are extending the well-posedness results in [14, 15] for our consideration of the Cauchy problem by allowing the initial data to grow polynomially. More precisely, we investigate the well-posedness of the Cauchy problem (1.3) with regular/irregular coefficients as well as the heat source and initial Cauchy data from the space of discrete tempered distributions \(\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\). A special feature in our approach in this article is that we also
allow the coefficients \(a,b\) to be distributions. Particularly, our analysis allows \(a,b\) to have distributional irregularities; i.e., to have \(\delta\)-type terms, while also Heaviside discontinuities e.g. we can take \(b(t)=\delta+H(t)\). Taking into account that the solution \(u(t,x)\) might as well have singularities in \(t\), this consideration would lead to foundational mathematical difficulties due to the problem of the impossibility of multiplying distributions, see Schwartz [13]. To overcome this problem we employ the theory of very weak solution as introduced in [16] which allows us to recapture the classical/distributional solution to the Cauchy problem (1.3), provided that the latter exists.
With the well-posedness of the classical (resp. very weak) solution to the Cauchy problem (1.3) on the lattice \(\hbar\mathbb{Z}^{n}\) at our disposal, two natural questions arise:
1. Can we recapture the classical solution to the heat equation (1.3) when the space variable lies in the Euclidean space \(\mathbb{R}^{n}\) by allowing \(\hbar\to 0\)?
2. Can we recapture the very weak solution to (1.3) when the space variable lies in the Euclidean space \(\mathbb{R}^{n}\) by allowing \(\hbar\to 0\)?
We will see that both questions are answered in the affirmative, provided additional Sobolev regularity. In particular, we consider the semi-classical limits \(\hbar\to 0\) for the following cases:
1. regular coefficient and Sobolev initial data;
2. distributional coefficient and Sobolev initial data,
and prove that in both cases we recover the (classical/very weak) solution in the Euclidean setting. The idea of ensuring global convergence of the solution by adding Sobolev regularity can be found in the semi-classical limit theorems in [14, Theorem 1.2] and [14, Theorem 1.3].
Note that the Cauchy problem (1.3) is the discrete analogue of heat equation with time-dependent coefficients in the Euclidean setting \(\mathbb{R}^{n}\) given by:
\[\left\{\begin{array}{l}\partial_{t}u(t,x)+a(t)(-\mathcal{L})^{\alpha}u(t,x )+b(t)u(t,x)=f(t,x),\quad(t,x)\in(0,T]\times\mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad x\in\mathbb{R}^{n},\end{array}\right. \tag{1.4}\]
where \((-\mathcal{L})^{\alpha}\) is the usual fractional Laplacian \((-\mathcal{L})^{\alpha}\) on \(\mathbb{R}^{n}\) defined as a pseudo-differential operator with symbol \(|2\pi\xi|^{2\alpha}=\left[\sum\limits_{l=1}^{n}(2\pi\xi_{l})^{2}\right]^{\alpha}\) i.e.,
\[(-\mathcal{L})^{\alpha}u(x)=\int\limits_{\mathbb{R}^{n}}|2\pi\xi|^{2\alpha} \,\widehat{u}(\xi)e^{2\pi ix\cdot\xi}\mathrm{d}\xi. \tag{1.5}\]
Theorems 1.1 and 1.2 below state the well-posedness results of the Cauchy problem (1.4) for the above cases in the Euclidean setting \(\mathbb{R}^{n}\).
Before presenting the aforementioned results, let us note that, notation-wise, we write \(a\in L^{\infty}_{m}([0,T])\), if \(a\in L^{\infty}([0,T])\) is \(m\)-times weakly differentiable with \(\partial_{t}^{j}a\in L^{\infty}([0,T])\), for all \(j=1,\ldots,m\). Let us recall the usual Sobolev space \(H^{m}(\mathbb{R}^{n})\) and its dual \(L^{2}_{m}(\mathbb{R}^{n})\) defined as
\[f\in H^{m}(\mathbb{R}^{n})\iff(I-\mathcal{L})^{m/2}f\in L^{2}(\mathbb{R}^{n}),\]
\[g\in L^{2}_{m}(\mathbb{R}^{n})\iff(1+|\xi|^{2})^{m/2}g\in L^{2}(\mathbb{R}^{n}), \tag{1.6}\]
respectively.
In the sequel, we will be writing \(A\lesssim B\) whenever there exists a constant \(C\), independent of the appearing parameters, such that \(A\leq CB\).
For the well-posedness of the Cauchy problem (1.4) in the case (A) we have the following result:
**Theorem 1.1**.: _Let \(m\in\mathbb{R}\) and \(f\in L^{2}([0,T];H^{m}(\mathbb{R}^{n}))\). Assume that \(a\in L^{\infty}_{1}([0,T])\) satisfies \(\inf\limits_{t\in[0,T]}a(t)=a_{0}>0\) and \(b\in L^{\infty}([0,T])\). If the initial Cauchy data \(u_{0}\in H^{m}(\mathbb{R}^{n})\), then the Cauchy problem (1.4) has a unique solution \(u\in C([0,T];H^{m}(\mathbb{R}^{n}))\) which satisfies the estimate_
\[\|u(t,\cdot)\|^{2}_{H^{m}(\mathbb{R}^{n})}\leq C_{T,a,b}\left(\|u_{0}\|^{2}_{H^{m}(\mathbb{R}^{n})}+\|f\|^{2}_{L^{2}([0,T];H^{m}(\mathbb{R}^{n}))}\right), \tag{1.7}\]
_for all \(t\in[0,T]\), where the positive constant \(C_{T,a,b}\) is given by_
\[C_{T,a,b}=a_{0}^{-1}\|a\|_{L^{\infty}}e^{a_{0}^{-1}(\|a_{t}\|_{L^{\infty}}+2\|a \|_{L^{\infty}}\|b\|_{L^{\infty}}+\|a\|_{L^{\infty}})T}. \tag{1.8}\]
Similarly, for the well-posedness of the Cauchy problem (1.4) in the case (B) we have:
**Theorem 1.2**.: _Let a and \(b\) be distributions with supports included in \([0,T]\) such that \(a\geq a_{0}>0\) for some positive constant \(a_{0}\), and let the source term \(f(\cdot,x)\) be a distribution with support included in \([0,T]\), for all \(x\in\mathbb{R}^{n}\). Let \(m\in\mathbb{R}\) and \(u_{0}\in H^{m}(\mathbb{R}^{n})\). Then the Cauchy problem (1.4) has a unique very weak solution \((u_{\varepsilon})_{\varepsilon}\in L^{2}([0,T];H^{m}(\mathbb{R}^{n}))^{(0,1]}\) of order \(m\)._
For more details about the very weak solutions for the Cauchy problem (1.4) in the Euclidean setting, we refer to Section 6.
To give a synopsis of the topic of very weak solution, we refer to the works [19, 17] on Cauchy problems with singular, time-dependent coefficients in the Euclidean setting. The concept of very weak solution with space-dependent coefficients in the general setting of graded Lie groups has been employed in the works [16, 17, 18, 19] and [15] in the Euclidean setting.
To summarise, we are able to: use the notion of very weak solution, as introduced in [18] in the setting of hyperbolic Cauchy problems with distributional coefficients in space; understand distributionally the Cauchy problem (1.3); and prove its well-posedness. Precisely, the following facts will be presented in the sequel:
* The Cauchy problem (1.3) is well-posed in the weighted spaces \(\ell^{2}_{s}(\hbar\mathbb{Z}^{n})\), for all \(s\in\mathbb{R}\) and admits a very weak solution even when distributional coefficients \(a,b\) are considered.
* The very weak solution is unique in the sense of Definition 2.5.
* If the coefficients \(a,b\) are regular enough so that the Cauchy problem (1.3) has a classical solution, then the very weak solution recaptures the classical one. This fact indicates that the notion of very weak solution as adapted to our setting is consistent with the classical one.
* We are able to approximate both the classical and the very weak solution to the heat equation as in (1.3) in the Euclidean setting \(\mathbb{R}^{n}\), by the corresponding solution in the semi-classical setting \(\hbar\mathbb{Z}^{n}\).
The structure of the paper is as follows: in Section 2 we present our main results. In Section 3 we discuss the basics of the Fourier analysis in the case of the lattices \(\mathbb{Z}^{n}\) and \(\hbar\mathbb{Z}^{n}\) and of the torus \(\mathbb{T}^{n}_{\hbar}\). In addition, the spaces of functions and distributions necessary for our analysis are recalled. In Section 4 we provide the proofs of the results on the existence and uniqueness of the very weak solution of the problem we consider, as well as its consistency with the classical solution. In Section 5 we prove the results on the approximation of the (classical/very weak) solution on \(\mathbb{R}^{n}\) by the one on \(\hbar\mathbb{Z}^{n}\). Finally, in Section 6, we make some remarks about the very weak solution in the Euclidean setting.
## 2. Main results
In this section, we will present our main results for the well-posedness of the Cauchy problem (1.3) with regular/irregular coefficients and the discrete tempered distributional initial data. We will also present the semi-classical limit theorems for the classical as well as very weak solution.
First, we consider the Cauchy problem (1.3) with discrete tempered distributional initial Cauchy data and the regular coefficients \(a\in L^{\infty}_{1}([0,T])\) and \(b\in L^{\infty}([0,T])\). In the light of the relation (3.2), proving the well-posedness in the weighted spaces \(\ell^{2}_{s}(\hbar\mathbb{Z}^{n})\) is sufficient to prove the well-posedness in the space of discrete tempered distributions \(\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\).
Let us start our exposition of results with the well-posedness theorem (in the classical sense) for the Cauchy problem (1.3):
**Theorem 2.1** (Classical solution).: _Let \(s\in\mathbb{R}\) and \(f\in L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))\). Assume that \(a\in L^{\infty}_{1}([0,T])\) satisfies \(\inf\limits_{t\in[0,T]}a(t)=a_{0}>0\) and \(b\in L^{\infty}([0,T])\). If for the initial Cauchy data we have \(u_{0}\in\ell^{2}_{s}\left(\hbar\mathbb{Z}^{n}\right)\), then the Cauchy problem (1.3) has a unique solution \(u\in C([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))\) satisfying the estimate_
\[\|u(t,\cdot)\|^{2}_{\ell^{2}_{s}(\hbar\mathbb{Z}^{n})}\leq C_{T,a,b}(\|u_{0}\|^{2}_{\ell^{2}_{s}(\hbar\mathbb{Z}^{n})}+\|f\|^{2}_{L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))}), \tag{2.1}\]
_for all \(t\in[0,T]\) and \(\hbar>0\), where the positive constant \(C_{T,a,b}\) is given by_
\[C_{T,a,b}=a_{0}^{-1}\|a\|_{L^{\infty}}e^{a_{0}^{-1}(\|a_{t}\|_{L^{\infty}}+2\| a\|_{L^{\infty}}\|b\|_{L^{\infty}}+\|a\|_{L^{\infty}})T}. \tag{2.2}\]
Next, we consider the Cauchy problem (1.3) with discrete tempered distributional initial Cauchy data, also allowing the coefficients and the source term to have singularities in the time variable. As we discussed earlier in Section 1, in order to deal with such equations, where fundamental mathematical difficulties arise when adapting the classical approach, we have to introduce the notion of a very weak solution. To this end, let us quickly recall the points that are important for the upcoming analysis:
Let \(a\in\mathcal{D}^{\prime}\left(\mathbb{R}\right)\) be a distribution. Using the Friedrichs-mollifier, i.e., a function \(\psi\in C^{\infty}_{0}\left(\mathbb{R}\right),\psi\geq 0\) and \(\int_{\mathbb{R}}\psi=1\), we are able to construct families of smooth
functions \((a_{\varepsilon})_{\varepsilon}\) by regularizing the distributional coefficient \(a\) as follows:
\[a_{\varepsilon}(t):=\left(a*\psi_{\omega(\varepsilon)}\right)(t),\quad\psi_{ \omega(\varepsilon)}(t)=\frac{1}{\omega(\varepsilon)}\psi\left(\frac{t}{\omega( \varepsilon)}\right),\quad\varepsilon\in(0,1], \tag{2.3}\]
where \(\omega(\varepsilon)\geq 0\) and \(\omega(\varepsilon)\to 0\) as \(\varepsilon\to 0\). Let us note that since for the purposes of this article, the distributions \(a\) and \(b\), as in (1.3), are distributions with domain \([0,T]\), it is enough to consider that \(\mathrm{supp}(\psi)\subseteq K\), with \(K=[0,T]\) throughout this article.
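To illustrate the regularisation (2.3) concretely, the following sketch mollifies the model coefficient \(b(t)=\delta(t-t_{0})+H(t-t_{0})\) (a shifted version of the example mentioned in the introduction) with a compactly supported Friedrichs mollifier and \(\omega(\varepsilon)=\varepsilon\); the grid, the shift \(t_{0}\), and the chosen values of \(\varepsilon\) are illustrative.

```python
import numpy as np

def bump(t):
    """Unnormalized Friedrichs bump, supported in (-1, 1)."""
    out = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

def mollify(b_values, t, eps):
    """b_eps = b * psi_eps, with psi_eps(s) = psi(s/eps)/eps, i.e. omega(eps) = eps."""
    dt = t[1] - t[0]
    s = np.arange(-eps, eps + dt, dt)      # support of psi_eps
    kernel = bump(s / eps)
    kernel /= kernel.sum() * dt            # normalize: integral of psi_eps equals 1
    return np.convolve(b_values, kernel, mode="same") * dt

# Model coefficient b(t) = delta(t - t0) + H(t - t0), discretized on [0, T]
T, t0, dt = 1.0, 0.4, 1e-3
t = np.arange(0.0, T, dt)
b = (np.abs(t - t0) < dt / 2).astype(float) / dt + (t >= t0).astype(float)

for eps in (0.1, 0.05, 0.02):
    b_eps = mollify(b, t, eps)
    print(f"eps = {eps}: max of b_eps ~ {b_eps.max():.1f} (grows like 1/eps: an L^inf-moderate net)")
```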
The notions of moderateness and negligibility for a net of functions/distributions are defined as follows:
**Definition 2.2**.: _(i) A net \(\left(a_{\varepsilon}\right)_{\varepsilon}\in L^{\infty}_{m}(\mathbb{R})^{(0,1]}\) is said to be \(L^{\infty}_{m}\)-moderate if for all \(K\Subset\mathbb{R}\), there exist \(N\in\mathbb{N}_{0}\) and \(c>0\) such that_
\[\left\|\partial^{k}a_{\varepsilon}\right\|_{L^{\infty}(K)}\leq c\varepsilon^{ -N-k},\quad\text{ for all }k=0,1,\ldots,m,\]
_for all \(\varepsilon\in(0,1]\). (ii) A net \(\left(a_{\varepsilon}\right)_{\varepsilon}\in L^{\infty}_{m}(\mathbb{R})^{(0,1]}\) is said to be \(L^{\infty}_{m}\)-negligible if for all \(K\Subset\mathbb{R}\) and \(q\in\mathbb{N}_{0}\), there exists \(c>0\) such that_
\[\left\|\partial^{k}a_{\varepsilon}\right\|_{L^{\infty}(K)}\leq c\varepsilon^{ q},\quad\text{ for all }k=0,1,\ldots,m,\]
_for all \(\varepsilon\in(0,1]\). (iii) A net \(\left(u_{\varepsilon}\right)_{\varepsilon}\in L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))^{(0,1]}\) is said to be \(L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))\)-moderate if there exist \(N\in\mathbb{N}_{0}\) and \(c>0\) such that_

\[\|u_{\varepsilon}\|_{L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))}\leq c\varepsilon^{-N},\]
_for all \(\varepsilon\in(0,1]\). (iv) A net \(\left(u_{\varepsilon}\right)_{\varepsilon}\in L^{2}([0,T];\ell^{2}_{s}(\hbar \mathbb{Z}^{n}))^{(0,1]}\) is said to be \(L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))\)-negligible if for all \(q\in\mathbb{N}_{0}\) there exists \(c>0\) such that_
\[\|u_{\varepsilon}\|_{L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))}\leq c \varepsilon^{q},\]
_for all \(\varepsilon\in(0,1]\)._
We note that the moderateness requirements are natural in the sense that regularisations of distributions are moderate. Moreover, by the structure theorems for distributions, we have the following inclusion:
\[\text{compactly supported distributions }\mathcal{E}^{\prime}(\mathbb{R}) \subset\left\{L^{2}\text{-moderate families}\right\}.\]
Therefore, it is possible that the Cauchy problem (1.3) may not have a solution in the space of compactly supported distributions \(\mathcal{E}^{\prime}(\mathbb{R})\) but it may exist in the space of \(L^{2}\)-moderate families in some suitable sense.
The notion of a very weak solution to the Cauchy problem (1.3) can be viewed as follows:
**Definition 2.3**.: _Let \(s\in\mathbb{R},f\in L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))\), and \(u_{0}\in\ell^{2}_{s}(\hbar\mathbb{Z}^{n}).\) The net \(\left(u_{\varepsilon}\right)_{\varepsilon}\in L^{2}([0,T];\ell^{2}_{s}(\hbar \mathbb{Z}^{n}))^{(0,1]}\) is a very weak solution of order \(s\) of the Cauchy problem (1.3) if there exist_
1. \(L^{\infty}_{1}\)_-moderate regularisation_ \(a_{\varepsilon}\) _of the coefficient_ \(a\)_;_
2. \(L^{\infty}\)_-moderate regularisation_ \(b_{\varepsilon}\) _of the coefficient_ \(b\)_; and_
3. \(L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))\)_-moderate regularisation_ \(f_{\varepsilon}\) _of the source term_ \(f\)
_such that \(\left(u_{\varepsilon}\right)_{\varepsilon}\) solves the regularised problem_
\[\left\{\begin{array}{ll}\partial_{t}u_{\varepsilon}(t,k)+a_{ \varepsilon}(t)\hbar^{-2\alpha}\left(-\mathcal{L}_{\hbar}\right)^{\alpha}u_{ \varepsilon}(t,k)+b_{\varepsilon}(t)u_{\varepsilon}(t,k)=f_{\varepsilon}(t,k ),\quad t\in(0,T],\\ u_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{2.4}\]
_for all \(\varepsilon\in(0,1]\), and is \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)-moderate._
It should be noted that Theorem 2.1 provides a unique solution to the regularised Cauchy problem (1.3) that satisfies estimate (2.1).
A distribution \(a\) is said to be a positive distribution if \(\langle a,\psi\rangle\geq 0\) for all \(\psi\in C_{0}^{\infty}(\mathbb{R})\) such that \(\psi\geq 0\). Similarly, a distribution \(a\) is said to be a strictly positive distribution if there exists a positive constant \(\alpha\) such that \(a-\alpha\) is a positive distribution in the previous sense. In other words, \(a\geq\alpha>0\) in the support of \(a\), where \(a\geq\alpha\), means that
\[\langle a-\alpha,\psi\rangle\geq 0,\quad\text{for all }\psi\in C_{0}^{\infty}( \mathbb{R}),\psi\geq 0. \tag{2.5}\]
Now we can state the existence theorem for the Cauchy problem (1.3) with distributional coefficients as follows:
**Theorem 2.4** (Existence).: _Let \(a\) and \(b\) be distributions with supports contained in \([0,T]\) such that \(a\geq a_{0}>0\) for some positive constant \(a_{0}\), and let the source term \(f(\cdot,k)\) be a distribution with support contained in \([0,T]\), for all \(k\in\hbar\mathbb{Z}^{n}\). For \(s\in\mathbb{R}\), we assume that the initial Cauchy data \(u_{0}\) satisfies \(u_{0}\in\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\). Then the Cauchy problem (1.3) has a very weak solution \(\left(u_{\varepsilon}\right)_{\varepsilon}\in L^{2}([0,T];\ell_{s}^{2}(\hbar \mathbb{Z}^{n}))^{(0,1]}\) of order s._
Next, we define the uniqueness of the very weak solution obtained in Theorem 2.4 for the Cauchy problem (1.3). This should be understood in the sense that the family of very weak solutions is not "significantly" affected by negligible changes in the approximations of the coefficients \(a,b\) and of the source term \(f\). This can also be regarded as a "stability" property.
Strictly speaking, the notion of the uniqueness of the very weak solution for the Cauchy problem (1.3) is formulated as follows:
**Definition 2.5**.: _We say that the Cauchy problem (1.3) has a \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)-unique very weak solution, if_
_(1) for all \(L_{1}^{\infty}\)-moderate nets \(a_{\varepsilon},\tilde{a}_{\varepsilon}\) such that \((a_{\varepsilon}-\tilde{a}_{\varepsilon})_{\varepsilon}\) is \(L_{1}^{\infty}\)-negligible;_
_(2) for all \(L^{\infty}\)-moderate nets \(b_{\varepsilon},\tilde{b}_{\varepsilon}\) such that \((b_{\varepsilon}-\tilde{b}_{\varepsilon})_{\varepsilon}\) are \(L^{\infty}\)-negligible; and_
_(3) for all \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)-moderate nets \(f_{\varepsilon},\tilde{f}_{\varepsilon}\) such that \((f_{\varepsilon}-\tilde{f}_{\varepsilon})_{\varepsilon}\) is_
\(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)_-negligible,_
_the net \((u_{\varepsilon}-\tilde{u}_{\varepsilon})_{\varepsilon}\) is \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)-negligible, where \((u_{\varepsilon})_{\varepsilon}\) and \((\tilde{u}_{\varepsilon})_{\varepsilon}\) are the families of solution corresponding to the \(\varepsilon\)-parametrised problems_
\[\left\{\begin{array}{ll}\partial_{t}u_{\varepsilon}(t,k)+a_{ \varepsilon}(t)\hbar^{-2\alpha}\left(-\mathcal{L}_{\hbar}\right)^{\alpha}u_{ \varepsilon}(t,k)+b_{\varepsilon}(t)u_{\varepsilon}(t,k)=f_{\varepsilon}(t,k ),\quad t\in(0,T],\\ u_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{2.6}\]
_and_
\[\left\{\begin{array}{l}\partial_{t}\tilde{u}_{\varepsilon}(t,k)+\tilde{a}_{ \varepsilon}(t)\hbar^{-2\alpha}\left(-\mathcal{L}_{\hbar}\right)^{\alpha}\tilde {u}_{\varepsilon}(t,k)+\tilde{b}_{\varepsilon}(t)\tilde{u}_{\varepsilon}(t,k )=\tilde{f}_{\varepsilon}(t,k),\quad t\in(0,T],\\ \tilde{u}_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{2.7}\]
_respectively._
**Remark 2.6**.: _The Colombeau algebra \(\mathcal{G}(\mathbb{R})\) defined as_
\[\mathcal{G}(\mathbb{R})=\frac{C^{\infty}\text{-moderate nets}}{C^{\infty}\text{- negligible nets}},\]
_can also be used to formulate the uniqueness for the Cauchy problem (1.3). For more details about the Colombeau algebra, we refer to [10]. The uniqueness of very weak solution in the sense of Colombeau algebra was introduced by Garetto and the third author in [11] and subsequently used in different settings, see e.g. [12, 13]._
The following theorem gives the uniqueness of the very weak solution to the Cauchy problem (1.3) in the sense of Definition 2.5.
**Theorem 2.7** (Uniqueness).: _Let \(a\) and \(b\) be distributions with supports contained in \([0,T]\) such that \(a\geq a_{0}>0\) for some positive constant \(a_{0}\), and let the source term \(f(\cdot,k)\) be a distribution with support contained in \([0,T]\), for all \(k\in\hbar\mathbb{Z}^{n}\). For \(s\in\mathbb{R}\), let also \(u_{0}\in\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\). Then the very weak solution of the Cauchy problem (1.3) is \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)-unique._
In the following theorem, we prove the consistency results with the classical case:
**Theorem 2.8** (Consistency).: _Let \(s\in\mathbb{R}\) and \(f\in L^{2}([0,T],\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\). Assume that \(a\in L_{1}^{\infty}\left([0,T]\right)\) satisfies \(\inf\limits_{t\in[0,T]}a(t)=a_{0}>0\) and \(b\in L^{\infty}\left([0,T]\right)\). Let also \(u_{0}\in\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\). Then the regularised net \(u_{\varepsilon}\) converges, as \(\varepsilon\to 0\), in \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\) to the classical solution of the Cauchy problem (1.3)._
The following theorem shows that the classical solution of (1.3) in the semi-classical setting \(\hbar\mathbb{Z}^{n}\) obtained in Theorem 2.1 for \(s=0\), recaptures the classical solution in the Euclidean setting \(\mathbb{R}^{n}\) as in Theorem 1.1, provided the latter exists.
**Theorem 2.9**.: _Let \(\alpha\in(0,1]\) and \(s=0\). Let \(u\) and \(v\) be the classical solution of the Cauchy problems (1.3) on \(\hbar\mathbb{Z}^{n}\) and (1.4) on \(\mathbb{R}^{n}\), respectively. Assume that for the initial Cauchy data we have \(u_{0}\in H^{m}(\mathbb{R}^{n})\) for some \(m\geq 4\alpha\). Then we have the following uniform in \(t\in[0,T]\) convergence:_
\[\|v(t,\cdot)-u(t,\cdot)\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}\to 0,\text{ as }\hbar \to 0\,. \tag{2.8}\]
**Remark 2.10**.: _Let us point out that the solutions \(u,v\) in Theorem 2.9 are regarded as solutions to the Cauchy problems (1.3) and (1.4), respectively, with the same Cauchy data \(u_{0}\), source term \(f\), and time-dependent coefficients \(a,b\). The same hypothesis holds true in Theorem 2.11 on the convergence of the nets of very weak solutions, in the semi-classical and Euclidean settings, of the corresponding regularised heat equations._
The analogous to Theorem 2.9 statement in the "very weak sense" reads as follows:
**Theorem 2.11**.: _Let \(\alpha\in(0,1]\) and \(s=0\). Let \((u_{\varepsilon})_{\varepsilon}\) and \((v_{\varepsilon})_{\varepsilon}\) be the very weak solutions of the Cauchy problems (1.3) on \(\hbar\mathbb{Z}^{n}\) and (1.4) on \(\mathbb{R}^{n}\), respectively. Assume that for the initial Cauchy data, we have \(u_{0}\in H^{m}(\mathbb{R}^{n})\) for some \(m\geq 4\alpha\). Then we have the following uniform in \(t\in[0,T]\) convergence_
\[\|v_{\varepsilon}(t,\cdot)-u_{\varepsilon}(t,\cdot)\|_{\ell^{2}(\hbar\mathbb{ Z}^{n})}\to 0\text{ as }\hbar\to 0, \tag{2.9}\]
_and pointwise for \(\varepsilon\in(0,1]\)._
## 3. Preliminaries
The Fourier analysis related to the discrete lattice \(\mathbb{Z}^{n}\) and the torus \(\mathbb{T}^{n}\) has been developed by Turunen and the third author in [10]. The pseudo-difference operators and the related symbolic calculus on the weighted sequence space \(\ell_{s}^{p}(\mathbb{Z}^{n})\) have been extensively studied in [1]. The aim of this section is to recall the preliminaries and important tools related to the discrete lattice \(\hbar\mathbb{Z}^{n}\) and the torus \(\mathbb{T}^{n}_{\hbar}\) that will be necessary for the analysis that follows.
### Spaces of functions and distributions on the lattice \(\hbar\mathbb{Z}^{n}\)
The Schwartz space \(\mathcal{S}(\hbar\mathbb{Z}^{n})\) on the lattice \(\hbar\mathbb{Z}^{n}\) is the space of rapidly decreasing functions \(\varphi:\hbar\mathbb{Z}^{n}\to\mathbb{C}\); that is, we write \(\varphi\in\mathcal{S}(\hbar\mathbb{Z}^{n})\) if for any \(M<\infty\) there exists a constant \(C_{\varphi,M}\) such that
\[|\varphi(k)|\leq C_{\varphi,M}(1+|k|)^{-M},\quad\text{ for all }k\in\hbar \mathbb{Z}^{n},\]
where \(|k|\) stands for the Euclidean norm of \(k\in\hbar\mathbb{Z}^{n}\); i.e., for \(k=\hbar(k_{1},\cdots,k_{n})\), we have \(|k|=\hbar\left(\sum\limits_{l=1}^{n}k_{l}^{2}\right)^{\frac{1}{2}}\). The topology on \(\mathcal{S}(\hbar\mathbb{Z}^{n})\) is given by the seminorms \(p_{j}\), defined as
\[p_{j}(\varphi):=\sup\limits_{k\in\hbar\mathbb{Z}^{n}}(1+|k|)^{j}|\varphi(k)|, \quad\varphi\in\mathcal{S}(\hbar\mathbb{Z}^{n})\text{ and }j\in\mathbb{N}_{0}.\]
The topological dual of \(\mathcal{S}(\hbar\mathbb{Z}^{n})\) is the space of tempered distributions defined as
\[\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n}):=\mathcal{L}\left(\mathcal{S}( \hbar\mathbb{Z}^{n}),\mathbb{C}\right),\]
i.e., \(\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\) is the space of all linear continuous functionals on \(\mathcal{S}(\hbar\mathbb{Z}^{n})\) of the form
\[\varphi\mapsto(u,\varphi):=\sum\limits_{k\in\hbar\mathbb{Z}^{n}}u(k)\varphi(k ),\quad\varphi\in\mathcal{S}(\hbar\mathbb{Z}^{n}).\]
We note that, in contrast to \(\mathcal{S}^{\prime}(\mathbb{R}^{n})\), the tempered distributions in \(\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\) in the semi-classical setting are pointwise well-defined functions on \(\hbar\mathbb{Z}^{n}\). Furthermore, a tempered distribution \(u:\hbar\mathbb{Z}^{n}\to\mathbb{C}\) has polynomial growth at infinity, i.e., there exist positive constants \(M\) and \(C_{u,M}\) such that
\[|u(k)|\leq C_{u,M}(1+|k|)^{M},\quad k\in\hbar\mathbb{Z}^{n}.\]
For \(s\in\mathbb{R}\), one can extend the usual space \(\ell^{2}(\hbar\mathbb{Z}^{n})\) to the weighted space \(\ell^{2}_{s}(\hbar\mathbb{Z}^{n})\) of order \(s\) as follows: for \(f:\hbar\mathbb{Z}^{n}\to\mathbb{C}\) we write \(f\in\ell^{2}_{s}(\hbar\mathbb{Z}^{n})\) whenever the following norm is finite
\[\|f\|_{\ell^{2}_{s}(\hbar\mathbb{Z}^{n})}:=\left(\sum\limits_{k\in\hbar \mathbb{Z}^{n}}(1+|k|)^{2s}|f(k)|^{2}\right)^{\frac{1}{2}}\,.\]
Clearly, the weighted \(\ell^{2}\)-spaces also include \(\ell^{2}(\hbar\mathbb{Z}^{n})\) as a special case when \(s=0\).
Observe that the weighted space \(\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\) is a Hilbert space when endowed with the natural inner product:
\[(u,v)_{\ell_{s}^{2}(\hbar\mathbb{Z}^{n})}:=\sum_{k\in\hbar\mathbb{Z}^{n}}(1+|k|) ^{2s}u(k)\overline{v(k)}\,, \tag{3.1}\]
where \(\overline{v(k)}\) stands for the complex conjugate of \(v(k)\).
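For readers who wish to experiment with these spaces numerically, the following is a minimal sketch (not part of the paper) of the weighted norm and inner product evaluated on a truncated one-dimensional lattice; the truncation level, the value of \(\hbar\) and the test function are arbitrary illustrative choices.

```python
import numpy as np

def weighted_norm(f, k, s):
    """||f||_{l^2_s} = ( sum_k (1+|k|)^{2s} |f(k)|^2 )^{1/2} over a finite set of lattice points."""
    w = (1.0 + np.linalg.norm(k, axis=1)) ** (2 * s)
    return np.sqrt(np.sum(w * np.abs(f) ** 2))

def weighted_inner(u, v, k, s):
    """(u, v)_{l^2_s} = sum_k (1+|k|)^{2s} u(k) * conj(v(k)) over the same set of points."""
    w = (1.0 + np.linalg.norm(k, axis=1)) ** (2 * s)
    return np.sum(w * u * np.conj(v))

# toy check on the truncated lattice hbar*{-K, ..., K} in dimension n = 1
hbar, K, s = 0.1, 200, 1.0
k = hbar * np.arange(-K, K + 1)[:, None]          # lattice points, one per row
f = np.exp(-np.linalg.norm(k, axis=1) ** 2)       # a rapidly decreasing grid function
print(weighted_norm(f, k, s) ** 2, weighted_inner(f, f, k, s).real)   # the two values agree
```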
Let us point out that the structure of the space of tempered distributions \(\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\), as well as that of the Schwartz space \(\mathcal{S}(\hbar\mathbb{Z}^{n})\), is closely related to the weighted \(\ell^{2}\)-spaces. In particular, we have the following useful relations:
\[\mathcal{S}(\hbar\mathbb{Z}^{n})=\bigcap_{s\in\mathbb{R}}\ell_{s}^{2}(\hbar \mathbb{Z}^{n})\text{ and }\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})=\bigcup_{s\in \mathbb{R}}\ell_{s}^{2}(\hbar\mathbb{Z}^{n}). \tag{3.2}\]
### Space of periodic functions and distributions on the torus \(\mathbb{T}_{\hbar}^{n}\)
Let us now introduce the torus denoted by \(\mathbb{T}_{\hbar}^{n}\) that will be useful for the subsequent analysis, especially when the semi-classical limit \(\hbar\to 0\) is taken. The torus \(\mathbb{T}_{\hbar}^{n}\) can be realised via the identification
\[\mathbb{T}_{\hbar}^{n}=\left[-\frac{1}{2\hbar},\frac{1}{2\hbar}\right]^{n}, \quad\hbar>0.\]
Consequently, the space \(C^{k}(\mathbb{T}_{\hbar}^{n})\) consists of \(\frac{1}{\hbar}\)-periodic, \(k\)-times continuously differentiable functions on the torus \(\mathbb{T}_{\hbar}^{n}\). The space \(C^{\infty}(\mathbb{T}_{\hbar}^{n})\) of test functions on the torus \(\mathbb{T}_{\hbar}^{n}\) can then be defined as
\[C^{\infty}(\mathbb{T}_{\hbar}^{n}):=\bigcap_{k=1}^{\infty}C^{k}(\mathbb{T}_{ \hbar}^{n}).\]
The Frechet topology on the space of smooth functions \(C^{\infty}(\mathbb{T}_{\hbar}^{n})\) is given by the seminorms \(p_{j}\) defined as
\[p_{j}(\psi):=\max\{\|\partial^{\alpha}\psi\|_{C(\mathbb{T}_{\hbar}^{n})}:| \alpha|\leq j\},\quad j\in\mathbb{N}_{0},\alpha\in\mathbb{N}_{0}^{n}.\]
The topological dual of \(C^{\infty}(\mathbb{T}_{\hbar}^{n})\) is the space of periodic distributions defined as
\[\mathcal{D}^{\prime}(\mathbb{T}_{\hbar}^{n}):=\mathcal{L}\left(C^{\infty}( \mathbb{T}_{\hbar}^{n}),\mathbb{C}\right),\]
i.e., it is the space of all linear continuous functionals on \(C^{\infty}(\mathbb{T}_{\hbar}^{n})\) of the form
\[\varphi\mapsto\int_{\mathbb{T}_{\hbar}^{n}}\varphi(\xi)\psi(\xi)\mathrm{d} \xi,\quad\psi\in C^{\infty}(\mathbb{T}_{\hbar}^{n}).\]
### Related semi-classical Fourier analysis
The Fourier transform operator
\[\mathcal{F}_{\hbar\mathbb{Z}^{n}}:\mathcal{S}(\hbar\mathbb{Z}^{n})\to C^{ \infty}(\mathbb{T}_{\hbar}^{n})\]
is defined as
\[\mathcal{F}_{\hbar\mathbb{Z}^{n}}u(\xi):=\hbar^{n/2}\sum_{k\in\hbar\mathbb{Z}^ {n}}u(k)e^{-2\pi ik\cdot\xi},\quad\xi\in\mathbb{T}_{\hbar}^{n}\,.\]
For the inverse Fourier transform operator
\[\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}:C^{\infty}(\mathbb{T}_{\hbar}^{n}) \rightarrow\mathcal{S}(\hbar\mathbb{Z}^{n})\]
we have
\[\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}v(k):=\hbar^{n/2}\int_{\mathbb{T}_{ \hbar}^{n}}v(\xi)e^{2\pi ik\cdot\xi}\mathrm{d}\xi,\quad k\in\hbar\mathbb{Z}^ {n}\,,\]
implying that the Fourier inversion formula is given by
\[u(k)=\hbar^{n/2}\int_{\mathbb{T}_{h}^{n}}\widehat{u}(\xi)e^{2\pi ik\cdot\xi} \mathrm{d}\xi,\quad k\in\hbar\mathbb{Z}^{n}. \tag{3.3}\]
The Fourier transform \(\mathcal{F}_{\hbar\mathbb{Z}^{n}}\) can be uniquely extended to the space of tempered distributions \(\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\) when realised via the distributional duality
\[(\mathcal{F}_{\hbar\mathbb{Z}^{n}}u,\psi):=(u,\iota\circ\mathcal{F}_{\hbar \mathbb{Z}^{n}}^{-1}\psi), \tag{3.4}\]
where \(u\in\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\), \(\psi\in C^{\infty}(\mathbb{T}_{h}^{n})\), and \((\iota\circ f)(x)=f(-x)\).
Hence for the operator \(\mathcal{F}_{\hbar\mathbb{Z}^{n}}\) we can write in more generality \(\mathcal{F}_{\hbar\mathbb{Z}^{n}}:\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n}) \rightarrow\mathcal{D}^{\prime}(\mathbb{T}_{h}^{n})\), and consequently we can define the inverse operator \(\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}:\mathcal{D}^{\prime}(\mathbb{T}_{h}^{ n})\rightarrow\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\). With the use of the latter, we can define the periodic Sobolev spaces \(H^{s}(\mathbb{T}_{h}^{n})\), for \(s\in\mathbb{R}\) as follows:
\[H^{s}(\mathbb{T}_{h}^{n}):=\left\{u\in\mathcal{D}^{\prime}(\mathbb{T}_{h}^{n} ):\|u\|_{H^{s}(\mathbb{T}_{h}^{n})}:=\left(\sum_{k\in\hbar\mathbb{Z}^{n}}(1+|k |)^{2s}\left|\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}u(k)\right|^{2}\right)^{1/2 }<\infty\right\}.\]
For any \(s\in\mathbb{R}\) the periodic Sobolev space \(H^{s}\left(\mathbb{T}_{h}^{n}\right)\) is a Hilbert space endowed with the inner product given by
\[(u,v)_{H^{s}\left(\mathbb{T}_{h}^{n}\right)}:=\sum_{k\in\hbar\mathbb{Z}^{n}}(1 +|k|)^{2s}\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}u(k)\overline{\mathcal{F}_{ \hbar\mathbb{Z}^{n}}^{-1}v(k)}. \tag{3.5}\]
Clearly, the periodic Sobolev space also includes \(L^{2}(\mathbb{T}_{h}^{n})\) as a special case when \(s=0\).
We have the following relation between the space of test functions \(C^{\infty}(\mathbb{T}_{h}^{n})\) and the space of periodic distributions \(\mathcal{D}^{\prime}(\mathbb{T}_{h}^{n})\) and the periodic Sobolev spaces \(H^{s}(\mathbb{T}_{h}^{n})\):
\[C^{\infty}(\mathbb{T}_{h}^{n})=\bigcap_{s\in\mathbb{R}}H^{s}(\mathbb{T}_{h}^{n })\text{ and }\mathcal{D}^{\prime}(\mathbb{T}_{h}^{n})=\bigcup_{s\in\mathbb{R}}H^{s}( \mathbb{T}_{h}^{n}).\]
Combining the inner product (3.1) and (3.5) with the Fourier transform, we obtain the following Plancherel formula
\[\|f\|_{\ell_{s}^{2}(\hbar\mathbb{Z}^{n})}=\|(1+|\cdot|)^{s}f(\cdot)\|_{\ell^{2 }(\hbar\mathbb{Z}^{n})}=\|\widehat{f}\|_{H^{s}(\mathbb{T}_{h}^{n})},\quad f \in\ell_{s}^{2}(\hbar\mathbb{Z}^{n}). \tag{3.6}\]
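The transform pair and the Plancherel identity (3.6) can be verified numerically on a truncated lattice. Below is a minimal sketch (illustrative only), assuming \(n=1\), a finite truncation of \(\hbar\mathbb{Z}\), and a simple Riemann-sum quadrature on \(\mathbb{T}_{\hbar}\).

```python
import numpy as np

hbar, K = 0.5, 40
k = hbar * np.arange(-K, K + 1)                                    # truncated lattice hbar*Z
xi = np.linspace(-0.5 / hbar, 0.5 / hbar, 4000, endpoint=False)    # quadrature grid on T_hbar
dxi = xi[1] - xi[0]

u = np.exp(-k ** 2)                                                # rapidly decreasing grid function

# forward transform: hbar^{1/2} * sum_k u(k) exp(-2 pi i k xi)
u_hat = np.sqrt(hbar) * (u[None, :] * np.exp(-2j * np.pi * k[None, :] * xi[:, None])).sum(axis=1)

# inversion formula (3.3): hbar^{1/2} * int_{T_hbar} u_hat(xi) exp(2 pi i k xi) d xi
u_rec = np.sqrt(hbar) * (u_hat[None, :] * np.exp(2j * np.pi * k[:, None] * xi[None, :])).sum(axis=1) * dxi
print(np.max(np.abs(u - u_rec)))                                   # small, up to truncation/quadrature error

# Plancherel (3.6) with s = 0: ||u||_{l^2(hbar Z)} against ||u_hat||_{L^2(T_hbar)}
print(np.sqrt(np.sum(np.abs(u) ** 2)), np.sqrt(np.sum(np.abs(u_hat) ** 2) * dxi))
```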
### The discrete fractional Laplacian
The discrete fractional Laplacian on the lattice \(\hbar\mathbb{Z}^{n}\) can be defined by restricting the usual fractional centered difference operators in the Euclidean setting \(\mathbb{R}^{n}\), see [11, Section 5.4]. For more details about the fractional difference operators on \(\mathbb{R}^{n}\), we refer to [10].
Rigorously, for a positive \(\alpha>0\) and for \(u\) being a complex-valued grid function on \(\hbar\mathbb{Z}^{n}\), the discrete fractional Laplacian \(\left(-\mathcal{L}_{\hbar}\right)^{\alpha}\) is defined by
\[\left(-\mathcal{L}_{\hbar}\right)^{\alpha}u(k):=\sum_{j\in\mathbb{Z}^{n}}a_{j}^ {(\alpha)}u(k+j\hbar),\quad\alpha>0, \tag{3.7}\]
where the generating function \(a_{j}^{(\alpha)}\) is given by
\[a_{j}^{(\alpha)}:=\int\limits_{\left[-\frac{1}{2},\frac{1}{2}\right]^{n}} \left[\sum_{l=1}^{n}4\sin^{2}\left(\pi\xi_{l}\right)\right]^{\alpha}e^{-2\pi ij \cdot\xi}\mathrm{d}\xi.\]
The Fourier transform and the Fourier inversion formula allow one to verify that
\[\sum_{j\in\mathbb{Z}^{n}}a_{j}^{(\alpha)}e^{2\pi ij\cdot\xi}=\left[\sum_{l=1}^{n}4 \sin^{2}\left(\pi\xi_{l}\right)\right]^{\alpha},\quad\xi\in\mathbb{T}_{h}^{n}. \tag{3.8}\]
Further, using the relation (3.8) and the shifting property of the Fourier transform, we can compute the Fourier transform of the discrete fractional Laplacian \(\left(-\mathcal{L}_{h}\right)^{\alpha}\) as follows:
\[\left(\mathcal{F}_{h\mathbb{Z}^{n}}\left(-\mathcal{L}_{h}\right) ^{\alpha}u\right)\left(\xi\right) = \sum_{k\in\hbar\mathbb{Z}^{n}}\left(-\mathcal{L}_{h}\right)^{ \alpha}u(k)e^{-2\pi ik\cdot\xi} \tag{3.9}\] \[= \sum_{k\in\hbar\mathbb{Z}^{n}}\left(\sum_{j\in\mathbb{Z}^{n}}a_{j }^{(\alpha)}u(k+j\hbar)\right)e^{-2\pi ik\cdot\xi}\] \[= \left(\sum_{j\in\mathbb{Z}^{n}}a_{j}^{(\alpha)}e^{2\pi ijh\cdot \xi}\right)\widehat{u}(\xi)\] \[= \left[\sum_{l=1}^{n}4\sin^{2}\left(\pi\hbar\xi_{l}\right)\right] ^{\alpha}\widehat{u}(\xi),\]
for all \(\xi\in\mathbb{T}_{h}^{n}\), and consequently the Fourier transform of fractional Laplacian is
\[\left(\mathcal{F}_{h\mathbb{Z}^{n}}\left(-\mathcal{L}\right)^{ \alpha}u\right)\left(\xi\right) = \left(\left(-\mathcal{L}\right)^{\alpha}u,e^{2\pi ik\cdot\xi}\right) \tag{3.10}\] \[= \left(u,\left(-\mathcal{L}\right)^{\alpha}e^{2\pi ik\cdot\xi}\right)\] \[= \left(u,\left|2\pi\xi\right|^{2\alpha}\!e^{2\pi ik\cdot\xi}\right)\] \[= |2\pi\xi|^{2\alpha}\widehat{u}(\xi),\]
for all \(\xi\in\mathbb{T}_{h}^{n}\). For more details about the construction and other properties of discrete fractional Laplacian on \(\hbar\mathbb{Z}^{n}\), see [10].
The consistency of formula (3.7) for the discrete fractional Laplacian \((-\mathcal{L}_{h})^{\alpha}\) with the usual discrete Laplacian introduced in the monograph [10] can be understood by explicitly computing the expansion coefficients \(a_{j}^{(\alpha)}\) for \(\alpha=1\) and \(j\in\mathbb{Z}^{n}\). Indeed, it is easy to check that
\[a_{0}^{(1)}=2n,\quad a_{\pm v_{l}}^{(1)}=-1,\text{ and }a_{j}^{(1)}=0,\quad \text{for all }j\neq 0,\pm v_{l},\]
where \(v_{l}\) is the \(l^{th}\) basis vector in \(\mathbb{Z}^{n}\), having all zeros except for \(1\) at the \(l^{th}\) component. This gives
\[(-\mathcal{L}_{h})^{1}u(k) = 2nu(k)-\sum_{j=\pm v_{l}}u(k+j\hbar)\] \[= 2nu(k)-\sum_{l=1}^{n}\left(u(k+v_{l}\hbar)+u(k-v_{l}\hbar)\right),\]
which is the usual discrete Laplacian on \(\hbar\mathbb{Z}^{n}\).
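This consistency, and the decay of the coefficients in the genuinely fractional case, can also be checked numerically by approximating the defining integral for \(a_{j}^{(\alpha)}\) with a quadrature rule. The sketch below (illustrative only) assumes \(n=1\) and a midpoint rule.

```python
import numpy as np

def a_coeff(j, alpha, m=4000):
    """a_j^{(alpha)} for n = 1, via a midpoint rule on [-1/2, 1/2]."""
    xi = (np.arange(m) + 0.5) / m - 0.5
    symbol = (4.0 * np.sin(np.pi * xi) ** 2) ** alpha
    return np.mean(symbol * np.exp(-2j * np.pi * j * xi)).real

# alpha = 1 recovers the usual discrete Laplacian: a_0 = 2n = 2, a_{±1} = -1, a_j = 0 otherwise
print([round(a_coeff(j, 1.0), 6) for j in range(-3, 4)])
# for 0 < alpha < 1 the coefficients are no longer finitely supported, but they decay in j
print([round(a_coeff(j, 0.5), 6) for j in range(-3, 4)])
```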
## 4. Proof of the main results
In this section, we give the proofs of all the results presented above in Section 2. Before moving on to proving our first result, let us note that the proof of Theorem 1.1 in the Euclidean setting \(\mathbb{R}^{n}\) follows the same arguments as the ones used in the proof of Theorem 2.1 in the setting \(\hbar\mathbb{Z}^{n}\), except for using the inner product of \(L_{m}^{2}(\mathbb{R}^{n})\) instead of \(H^{s}(\mathbb{T}^{n}_{h})\). Therefore we only prove Theorem 2.1; the proof of Theorem 1.1 carries over verbatim.
Proof of Theorem 2.1.: Taking the Fourier transform of the Cauchy problem (1.3) with respect to \(k\in\hbar\mathbb{Z}^{n}\) and using (3.9), we obtain
\[\left\{\begin{array}{l}\partial_{t}\widehat{u}(t,\xi)+a(t)\nu^{2}(\xi) \widehat{u}(t,\xi)+b(t)\widehat{u}(t,\xi)=\widehat{f}(t,\xi),\quad\text{ with }(t,\xi)\in[0,T]\times\mathbb{T}^{n}_{h},\\ \widehat{u}(0,\xi)=\widehat{u}_{0}(\xi),\quad\xi\in\mathbb{T}^{n}_{h},\end{array}\right. \tag{4.1}\]
where
\[\nu^{2}(\xi)=\hbar^{-2\alpha}\left[\sum_{l=1}^{n}4\sin^{2}{(\pi\hbar\xi_{l})} \right]^{\alpha}\geq 0. \tag{4.2}\]
Define the energy functional for the Cauchy problem (4.1) by
\[E(t,\xi):=(a(t)\widehat{u}(t,\xi),\widehat{u}(t,\xi)),\quad(t,\xi)\in[0,T] \times\mathbb{T}^{n}_{h}, \tag{4.3}\]
where \((\cdot,\cdot)\) is the inner product in the Sobolev space \(H^{s}(\mathbb{T}^{n}_{h})\) given by (3.5). It is easy to check that
\[\inf_{t\in[0,T]}\{a(t)\}(\widehat{u}(t,\xi),\widehat{u}(t,\xi))\leq(a(t) \widehat{u}(t,\xi),\widehat{u}(t,\xi))\leq\sup_{t\in[0,T]}\{a(t)\}(\widehat{ u}(t,\xi),\widehat{u}(t,\xi)). \tag{4.4}\]
Since \(a\in L_{1}^{\infty}([0,T])\), there exist two positive constants \(a_{0}\) and \(a_{1}\) such that
\[\inf_{t\in[0,T]}\{a(t)\}=a_{0}\quad\text{ and }\sup_{t\in[0,T]}\{a(t)\}=a_{1}. \tag{4.5}\]
Combining the equations (4.3) and (4.5) together with the inequality (4.4), we obtain the following bounds for the energy functional
\[a_{0}\|\widehat{u}(t,\cdot)\|_{H^{s}}^{2}\leq E(t,\xi)\leq a_{1}\|\widehat{u }(t,\cdot)\|_{H^{s}}^{2},\quad(t,\xi)\in[0,T]\times\mathbb{T}^{n}_{h}. \tag{4.6}\]
Differentiating the energy functional \(E(t,\xi)\) and using (4.1), we obtain
\[\partial_{t}E(t,\xi) = (a_{t}(t)\widehat{u}(t,\xi),\widehat{u}(t,\xi))+(a(t)\widehat{u}_{t}(t,\xi),\widehat{u}(t,\xi))+(a(t)\widehat{u}(t,\xi),\widehat{u}_{t}(t,\xi))\] \[= (a_{t}(t)\widehat{u}(t,\xi),\widehat{u}(t,\xi))-(a^{2}(t)\nu^{2}(\xi)\widehat{u}(t,\xi),\widehat{u}(t,\xi))-(a(t)b(t)\widehat{u}(t,\xi),\widehat{u}(t,\xi))+(a(t)\widehat{f}(t,\xi),\widehat{u}(t,\xi))-(a(t)\widehat{u}(t,\xi),a(t)\nu^{2}(\xi)\widehat{u}(t,\xi))-(a(t)\widehat{u}(t,\xi),b(t)\widehat{u}(t,\xi))+(a(t)\widehat{u}(t,\xi),\widehat{f}(t,\xi))\] \[= a_{t}(t)(\widehat{u}(t,\xi),\widehat{u}(t,\xi))-2a^{2}(t)(\nu(\xi)\widehat{u}(t,\xi),\nu(\xi)\widehat{u}(t,\xi))-2a(t)b(t)(\widehat{u}(t,\xi),\widehat{u}(t,\xi))+a(t)(\widehat{f}(t,\xi),\widehat{u}(t,\xi))+a(t)(\widehat{u}(t,\xi),\widehat{f}(t,\xi))\] \[\leq \left(|a_{t}(t)|+2|a(t)||b(t)|\right)\|\widehat{u}(t,\cdot)\|_{H^{s}}^{2}+2|a(t)||\operatorname{Re}(\widehat{f}(t,\xi),\widehat{u}(t,\xi))|.\]
Using the hypothesis that \(a\in L^{\infty}_{1}([0,T])\) and \(b\in L^{\infty}([0,T])\), we obtain
\[\partial_{t}E(t,\xi) \leq (\|a_{t}\|_{L^{\infty}}+2\|a\|_{L^{\infty}}\|b\|_{L^{\infty}})\,\| \widehat{u}(t,\cdot)\|_{H^{s}}^{2}+2\|a\|_{L^{\infty}}\|\widehat{f}(t,\cdot)\|_{ H^{s}}\|\widehat{u}(t,\cdot)\|_{H^{s}}\] \[\leq (\|a_{t}\|_{L^{\infty}}+2\|a\|_{L^{\infty}}\|b\|_{L^{\infty}}+\|a \|_{L^{\infty}})\,\|\widehat{u}(t,\cdot)\|_{H^{s}}^{2}+\|a\|_{L^{\infty}}\| \widehat{f}(t,\cdot)\|_{H^{s}}^{2}.\]
If we set \(\kappa_{1}=\|a_{t}\|_{L^{\infty}}+2\|a\|_{L^{\infty}}\|b\|_{L^{\infty}}+\|a\|_ {L^{\infty}}\) and \(\kappa_{2}=\|a\|_{L^{\infty}}\), then putting together (4.6) and (4.7), we obtain
\[\partial_{t}E(t,\xi)\leq a_{0}^{-1}\kappa_{1}E(t,\xi)+\kappa_{2}\|\widehat{f} (t,\cdot)\|_{H^{s}}^{2}. \tag{4.8}\]
Applying Gronwall's lemma to the inequality (4.8), we get
\[E(t,\xi)\leq e^{\int_{0}^{t}a_{0}^{-1}\kappa_{1}{\rm d}\tau}\left(E(0,\xi)+ \int_{0}^{t}\kappa_{2}\|\widehat{f}(\tau,\cdot)\|_{H^{s}}^{2}{\rm d}\tau\right), \tag{4.9}\]
for all \((t,\xi)\in[0,T]\times\mathbb{T}_{h}^{n}\). Again combining the estimates (4.6) and (4.9), we obtain
\[a_{0}\|\widehat{u}(t,\cdot)\|_{H^{s}}^{2}\leq E(t,\xi) \leq e^{\int_{0}^{t}a_{0}^{-1}\kappa_{1}{\rm d}\tau}\left(E(0,\xi)+ \int_{0}^{t}\kappa_{2}\|\widehat{f}(\tau,\cdot)\|_{H^{s}}^{2}{\rm d}\tau\right)\] \[\leq e^{a_{0}^{-1}\kappa_{1}T}\left(a_{1}\|\widehat{u}(0,\cdot)\|_{H^ {s}}^{2}+\kappa_{2}\int_{0}^{T}\|\widehat{f}(\tau,\cdot)\|_{H^{s}}^{2}{\rm d }\tau\right).\]
Further, using the Plancherel formula (3.6), we obtain the required estimate
\[\|u(t,\cdot)\|_{\ell_{s}^{2}(h\mathbb{Z}^{n})}^{2}\leq C_{T,a,b}\left(\|u_{0} \|_{\ell_{s}^{2}(h\mathbb{Z}^{n})}^{2}+\|f\|_{L^{2}([0,T];\ell_{s}^{2}(h \mathbb{Z}^{n}))}^{2}\right),\quad\text{ for all }t\in[0,T],\]
where the constant \(C_{T,a,b}\) is given by
\[C_{T,a,b}=a_{0}^{-1}\|a\|_{L^{\infty}}e^{a_{0}^{-1}(\|a_{t}\|_{L^{\infty}}+2\| a\|_{L^{\infty}}\|b\|_{L^{\infty}}+\|a\|_{L^{\infty}})T}.\]
The uniqueness of the solution follows immediately from the above estimate. This completes the proof.
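Since every Fourier mode in (4.1) satisfies an independent scalar ODE, the estimate just proved can also be observed numerically. The sketch below is illustrative only: the coefficients \(a\), \(b\), the source \(f\), the values of \(\hbar\) and \(\alpha\), and the forward Euler discretisation are our own choices, with \(n=1\) and \(s=0\).

```python
import numpy as np

hbar, alpha, T = 0.5, 0.7, 1.0
a = lambda t: 2.0 + np.sin(2 * np.pi * t)                     # a >= a_0 = 1 > 0
b = lambda t: np.cos(2 * np.pi * t)

xi = np.linspace(-0.5 / hbar, 0.5 / hbar, 2000, endpoint=False)              # grid on T_hbar
dxi = xi[1] - xi[0]
nu2 = hbar ** (-2 * alpha) * (4 * np.sin(np.pi * hbar * xi) ** 2) ** alpha   # symbol (4.2)

u_hat = np.exp(-xi ** 2)                                      # \hat{u}_0
f_hat = lambda t: np.cos(t) * np.exp(-2 * xi ** 2)            # \hat{f}(t, .)

dt, nsteps = 1e-3, 1000                                       # integrate over [0, T]
f_l2_sq, sup_norm_sq = 0.0, 0.0
for m in range(nsteps):
    t = m * dt
    u_hat = u_hat + dt * (f_hat(t) - (a(t) * nu2 + b(t)) * u_hat)   # forward Euler on (4.1)
    f_l2_sq += dt * np.sum(np.abs(f_hat(t)) ** 2) * dxi
    sup_norm_sq = max(sup_norm_sq, np.sum(np.abs(u_hat) ** 2) * dxi)

u0_sq = np.sum(np.exp(-xi ** 2) ** 2) * dxi
print(sup_norm_sq, "vs", u0_sq + f_l2_sq)   # the left value stays below C_{T,a,b} times the right one
```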
Thus, we have obtained the well-posedness for the Cauchy problem (1.3) in the weighted spaces \(\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\) for all \(s\in\mathbb{R}\), and consequently we have distributional well-posedness for the Cauchy problem (1.3) in the space of tempered distributions \(\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\). We will now prove the existence of the very weak solution to the Cauchy problem (1.3) in the case of distributional coefficients.
Proof of Theorem 2.4.: Consider the regularised Cauchy problem
\[\left\{\begin{array}{l}\partial_{t}u_{\varepsilon}(t,k)+a_{\varepsilon}(t) \hbar^{-2\alpha}\left(-\mathcal{L}_{h}\right)^{\alpha}u_{\varepsilon}(t,k)+b _{\varepsilon}(t)u_{\varepsilon}(t,k)=f_{\varepsilon}(t,k),\quad t\in(0,T],\\ u_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{4.10}\]
where \(a_{\varepsilon}\), \(b_{\varepsilon}\) and \(f_{\varepsilon}\) are \(L^{\infty}_{1}\)-moderate, \(L^{\infty}\)-moderate and \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)-moderate regularisations of the coefficients \(a\), \(b\) and the source term \(f\), respectively. Taking the Fourier transform with respect to \(k\in\hbar\mathbb{Z}^{n}\), we obtain
\[\left\{\begin{array}{l}\partial_{t}\widehat{u}_{\varepsilon}(t,\xi)+a_{\varepsilon}(t)\nu^{2}(\xi)\widehat{u}_{\varepsilon}(t,\xi)+b_{\varepsilon}(t)\widehat{u}_{\varepsilon}(t,\xi)=\widehat{f}_{\varepsilon}(t,\xi),\quad(t,\xi)\in(0,T]\times\mathbb{T}_{h}^{n},\\ \widehat{u}_{\varepsilon}(0,\xi)=\widehat{u}_{0}(\xi),\quad\xi\in\mathbb{T}_{h}^{n},\end{array}\right.\]
where \(\nu^{2}(\xi)\) is given by (4.2). Define the energy functional for the Cauchy problem (4.10) by
\[E_{\varepsilon}(t,\xi):=(a_{\varepsilon}(t)\widehat{u}_{\varepsilon}(t,\xi), \widehat{u}_{\varepsilon}(t,\xi)),\quad(t,\xi)\in[0,T]\times\mathbb{T}_{h}^{n}, \tag{4.11}\]
where \((\cdot,\cdot)\) is the inner product in the Sobolev space \(H^{s}(\mathbb{T}_{h}^{n})\). It is easy to check that
\[\inf_{t\in[0,T]}\{a_{\varepsilon}(t)\}(\widehat{u}_{\varepsilon}(t,\xi), \widehat{u}_{\varepsilon}(t,\xi))\leq(a_{\varepsilon}(t)\widehat{u}_{ \varepsilon}(t,\xi),\widehat{u}_{\varepsilon}(t,\xi))\leq\sup_{t\in[0,T]}\{a_ {\varepsilon}(t)\}(\widehat{u}_{\varepsilon}(t,\xi),\widehat{u}_{\varepsilon }(t,\xi)).\]
Since \(a\) and \(b\) are distributions, by the structure theorem for compactly supported distributions, there exist \(L_{1},L_{2}\in\mathbb{N}\) and \(c_{1},c_{2}>0\) such that
\[\big{|}\partial_{t}^{k}a_{\varepsilon}(t)\big{|}\leq c_{1}\omega(\varepsilon) ^{-L_{1}-k}\text{ and }\big{|}\partial_{t}^{k}b_{\varepsilon}(t)\big{|}\leq c_{2}\omega( \varepsilon)^{-L_{2}-k},\quad k\in\mathbb{N}_{0}, \tag{4.12}\]
for all \(t\in[0,T]\), where \(\omega(\varepsilon)\) is given by (2.3). Since \(a\geq a_{0}>0\), we can write
\[a_{\varepsilon}(t)=\left(a*\psi_{\omega(\varepsilon)}\right)(t)=\langle a,\tau _{t}\tilde{\psi}_{\omega(\varepsilon)}\rangle\geq\tilde{a}_{0}>0, \tag{4.13}\]
where \(\tilde{\psi}(x)=\psi(-x),x\in\mathbb{R}\) and \(\tau_{t}\psi(\xi)=\psi(\xi-t),\xi\in\mathbb{R}\).
Now applying Theorem 2.1 to the Cauchy problem (4.10), we have the following estimate
\[\|u_{\varepsilon}(t,\cdot)\|_{\ell^{2}_{s}(\hbar\mathbb{Z}^{n})}^{2}\leq C_{T,a_{\varepsilon},b_{\varepsilon}}\left(\|u_{0}\|_{\ell^{2}_{s}(\hbar\mathbb{Z} ^{n})}^{2}+\|f_{\varepsilon}\|_{L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}) )}^{2}\right),\quad\text{ for all }t\in[0,T], \tag{4.14}\]
where the constant \(C_{T,a_{\varepsilon},b_{\varepsilon}}\) is given by
\[C_{T,a_{\varepsilon},b_{\varepsilon}}=\tilde{a}_{0}^{-1}\|a_{\varepsilon}\|_{ L^{\infty}}e^{\tilde{a}_{0}^{-1}(\|\partial_{t}a_{\varepsilon}\|_{L^{\infty}}+2 \|a_{\varepsilon}\|_{L^{\infty}}\|b_{\varepsilon}\|_{L^{\infty}}+\|a_{ \varepsilon}\|_{L^{\infty}})T}. \tag{4.15}\]
Combining the estimates (4.12) and (4.14) with (4.15) we get
\[\|u_{\varepsilon}(t,\cdot)\|_{\ell^{2}_{s}(\hbar\mathbb{Z}^{n})}^ {2} \leq \tilde{a}_{0}^{-1}c_{1}\omega(\varepsilon)^{-L_{1}}e^{\left(c_{ 1}\omega(\varepsilon)^{-L_{1}-1}+2c_{1}c_{2}\omega(\varepsilon)^{-L_{1}-L_{2}} +c_{1}\omega(\varepsilon)^{-L_{1}}\right)T}\times\] \[\left(\|u_{0}\|_{\ell^{2}_{s}(\hbar\mathbb{Z}^{n})}^{2}+\|f_{ \varepsilon}\|_{L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))}^{2}\right)\] \[\leq \tilde{a}_{0}^{-1}e^{K_{T}\left(\omega(\varepsilon)^{-L_{1}-1}+ \omega(\varepsilon)^{-L_{1}-L_{2}}+\omega(\varepsilon)^{-L_{1}}\right)T}\times\] \[\left(\|u_{0}\|_{\ell^{2}_{s}(\hbar\mathbb{Z}^{n})}^{2}+\|f_{ \varepsilon}\|_{L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))}^{2}\right),\]
where \(K_{T}=\tilde{a}_{0}^{-1}c_{1}T\max\{1,2c_{2},T^{-1}\}\). Putting \(\omega(\varepsilon)\sim|\log(\varepsilon)|^{-1}\), we obtain
\[\|u_{\varepsilon}(t,\cdot)\|_{\ell^{2}_{s}(\hbar\mathbb{Z}^{n})}^{2}\lesssim \varepsilon^{-3L_{1}-L_{2}-1}\left(\|u_{0}\|_{\ell^{2}_{s}(\hbar\mathbb{Z}^{n}) }^{2}+\|f_{\varepsilon}\|_{L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))}^{2 }\right), \tag{4.16}\]
for all \(t\in[0,T]\) and \(\varepsilon\in(0,1]\).
Since \((f_{\varepsilon})_{\varepsilon}\) is an \(L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))\)-moderate regularisation of \(f\), there exist positive constants \(L_{3}\) and \(c\) such that
\[\|f_{\varepsilon}\|_{L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))}\leq c \varepsilon^{-L_{3}}. \tag{4.17}\]
Now by integrating the estimate (4.16) with respect to \(t\in[0,T]\) and then using (4.17), we obtain
\[\|u_{\varepsilon}\|_{L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))}\lesssim \varepsilon^{-3L_{1}-L_{2}-L_{3}-1}. \tag{4.18}\]
Thus we conclude that \((u_{\varepsilon})_{\varepsilon}\) is \(L^{2}([0,T];\ell^{2}_{s}(\hbar\mathbb{Z}^{n}))\)-moderate. This completes the proof.
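To make the regularisation (4.13) and the moderateness bounds (4.12) concrete, here is a minimal numerical sketch of mollifying a discontinuous but positive coefficient with a Friedrichs-type mollifier at the scale \(\omega(\varepsilon)\sim|\log\varepsilon|^{-1}\); the particular coefficient and grid are our own illustrative choices.

```python
import numpy as np

T = 1.0
t = np.linspace(-0.5, 1.5, 4001)                       # time grid containing [0, T]
dt = t[1] - t[0]

# toy irregular coefficient: bounded away from zero, but with jumps
a = np.where((t >= 0.3) & (t <= 0.7), 3.0, 1.0)

def regularise(a, eps):
    """a_eps = a * psi_omega with psi_omega(t) = omega^{-1} psi(t/omega) and omega(eps) ~ 1/|log eps|."""
    omega = 1.0 / abs(np.log(eps))
    s = np.arange(-omega, omega + dt, dt)
    kern = np.where(np.abs(s) < omega,
                    np.exp(-1.0 / np.maximum(1.0 - (s / omega) ** 2, 1e-12)), 0.0)
    kern = kern / (kern.sum() * dt)                    # normalise the mollifier to integral one
    return np.convolve(a, kern, mode="same") * dt

interior = (t >= 0.0) & (t <= T)
for eps in (1e-1, 1e-2, 1e-3):
    a_eps = regularise(a, eps)
    print(eps, a_eps[interior].min(), np.max(np.abs(np.gradient(a_eps, dt)[interior])))
```

The printed minimum illustrates the lower bound (4.13), while the growth of the derivative as \(\varepsilon\to 0\) is the kind of controlled blow-up quantified by the moderateness estimates (4.12).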
Therefore, we have established the existence of a very weak solution for the Cauchy problem (1.3) with irregular coefficients. We will now prove its uniqueness in the sense of Definition 2.5.
Proof of Theorem 2.7.: Let \((u_{\varepsilon})_{\varepsilon}\) and \((\tilde{u}_{\varepsilon})_{\varepsilon}\) be the families of solutions corresponding to the Cauchy problems (2.6) and (2.7), respectively. Denoting \(w_{\varepsilon}:=u_{\varepsilon}-\tilde{u}_{\varepsilon}\), we get
\[\left\{\begin{array}{l}\partial_{t}w_{\varepsilon}(t,k)+a_{\varepsilon}(t) \hbar^{-2\alpha}\left(-\mathcal{L}_{h}\right)^{\alpha}w_{\varepsilon}(t,k)+b_ {\varepsilon}(t)w_{\varepsilon}(t,k)=g_{\varepsilon}(t,k),\quad t\in(0,T],\\ w_{\varepsilon}(0,k)=0,\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{4.19}\]
where
\[g_{\varepsilon}(t,k):=(\tilde{a}_{\varepsilon}-a_{\varepsilon})(t)\hbar^{-2 \alpha}\left(-\mathcal{L}_{h}\right)^{\alpha}\tilde{u}_{\varepsilon}(t,k)+( \tilde{b}_{\varepsilon}-b_{\varepsilon})(t)\tilde{u}_{\varepsilon}(t,k)+(f_ {\varepsilon}-\tilde{f}_{\varepsilon})(t,k). \tag{4.20}\]
Since the nets \((\tilde{a}_{\varepsilon}-a_{\varepsilon})_{\varepsilon}\), \((\tilde{b}_{\varepsilon}-b_{\varepsilon})_{\varepsilon}\) and \((f_{\varepsilon}-\tilde{f}_{\varepsilon})_{\varepsilon}\) are \(L_{1}^{\infty}\)-negligible, \(L^{\infty}\)-negligible and \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)-negligible, respectively, it follows that \(g_{\varepsilon}\) is \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)-negligible.
Taking the Fourier transform of the Cauchy problem (4.19) with respect to \(k\in\hbar\mathbb{Z}^{n}\), we obtain
\[\left\{\begin{array}{l}\partial_{t}\widehat{w}_{\varepsilon}(t,\xi)+a_{ \varepsilon}(t)\nu^{2}(\xi)\widehat{w}_{\varepsilon}(t,\xi)+b_{\varepsilon}(t )\widehat{w}_{\varepsilon}(t,\xi)=\widehat{g}_{\varepsilon}(t,\xi),\quad(t, \xi)\in(0,T]\times\mathbb{T}_{h}^{n},\\ \widehat{w}_{\varepsilon}(0,\xi)=0,\quad\xi\in\mathbb{T}_{h}^{n},\end{array}\right. \tag{4.21}\]
where \(\nu^{2}(\xi)\) is given by (4.2). The energy functional for the Cauchy problem (4.21) is given by
\[E_{\varepsilon}(t,\xi):=(a_{\varepsilon}(t)\widehat{w}_{\varepsilon}(t,\xi), \widehat{w}_{\varepsilon}(t,\xi)),\quad(t,\xi)\in[0,T]\times\mathbb{T}_{h}^{n}, \tag{4.22}\]
where \((\cdot,\cdot)\) is the inner product in the Sobolev space \(H^{s}(\mathbb{T}_{h}^{n})\). Then using (4.12), we have the following energy bounds
\[\tilde{a}_{0}\|\widehat{w}_{\varepsilon}(t,\cdot)\|_{H^{s}}^{2}\leq E_{ \varepsilon}(t,\xi)\leq c_{1}\omega(\varepsilon)^{-L_{1}}\|\widehat{w}_{ \varepsilon}(t,\cdot)\|_{H^{s}}^{2},\quad(t,\xi)\in[0,T]\times\mathbb{T}_{h}^ {n}. \tag{4.23}\]
Further we can calculate
\[\partial_{t}E_{\varepsilon}(t,\xi)\leq(|a_{\varepsilon}^{\prime}(t)|+2|a_{ \varepsilon}(t)||b_{\varepsilon}(t)|+|a_{\varepsilon}(t)|)\,\|\widehat{w}_{ \varepsilon}(t,\cdot)\|_{H^{s}}^{2}+|a_{\varepsilon}(t)|\|\widehat{g}_{ \varepsilon}(t,\cdot)\|_{H^{s}}^{2}. \tag{4.24}\]
Combining the estimates (4.12), (4.23) and (4.24), we obtain
\[\partial_{t}E_{\varepsilon}(t,\xi)\leq\kappa_{1}\left(\omega(\varepsilon)^{- L_{1}-1}+\omega(\varepsilon)^{-L_{1}-L_{2}}+\omega(\varepsilon)^{-L_{1}} \right)E_{\varepsilon}(t,\xi)+c_{1}\omega(\varepsilon)^{-L_{1}}\|\widehat{g}_{ \varepsilon}(t,\cdot)\|_{H^{s}}^{2}, \tag{4.25}\]
where \(\kappa_{1}=\tilde{a}_{0}^{-1}c_{1}\max\{1,2c_{2}\}\). Applying Gronwall's lemma to the inequality (4.25), we obtain
\[E_{\varepsilon}(t,\xi)\leq\] \[\quad e^{\int_{0}^{t}\kappa_{1}\left(\omega(\varepsilon)^{-L_{1}-1 }+\omega(\varepsilon)^{-L_{1}-L_{2}}+\omega(\varepsilon)^{-L_{1}}\right) \mathrm{d}\tau}\left(E_{\varepsilon}(0,\xi)+c_{1}\omega(\varepsilon)^{-L_{1}} \int_{0}^{t}\|\widehat{g}_{\varepsilon}(\tau,\cdot)\|_{H^{s}}^{2}\mathrm{d} \tau\right). \tag{4.26}\]
Putting together (4.23) and (4.26), and then using the fact that \(\widehat{w}_{\varepsilon}(0,\xi)\equiv 0\) for all \(\varepsilon\in(0,1]\), we get
\[\|\widehat{w}_{\varepsilon}(t,\cdot)\|^{2} \leq \tilde{a}_{0}^{-1}c_{1}\omega(\varepsilon)^{-L_{1}}e^{\kappa_{1} \left(\omega(\varepsilon)^{-L_{1}-1}+\omega(\varepsilon)^{-L_{1}-L_{2}}+ \omega(\varepsilon)^{-L_{1}}\right)T}\int_{0}^{T}\|\widehat{g}_{\varepsilon} (\tau,\cdot)\|_{H^{s}}^{2}\mathrm{d}\tau\] \[\leq \tilde{a}_{0}^{-1}c_{1}e^{\kappa_{T}\left(\omega(\varepsilon)^{- L_{1}-1}+\omega(\varepsilon)^{-L_{1}-L_{2}}+\omega(\varepsilon)^{-L_{1}}\right)} \int_{0}^{T}\|\widehat{g}_{\varepsilon}(\tau,\cdot)\|_{H^{s}}^{2}\mathrm{d}\tau,\]
where \(\kappa_{T}=c_{1}T\max\{1,2c_{2},T^{-1}\}\). Putting \(\omega(\varepsilon)\sim|\log(\varepsilon)|^{-1}\) and using the Plancherel formula (3.6), we obtain
\[\|w_{\varepsilon}(t,\cdot)\|_{\ell_{s}^{2}(\hbar\mathbb{Z}^{n})}^{2}\lesssim \varepsilon^{-3L_{1}-L_{2}-1}\|g_{\varepsilon}\|_{L^{2}([0,T];\ell_{s}^{2}( \hbar\mathbb{Z}^{n}))}^{2},\]
for all \(t\in[0,T]\). Since \(g_{\varepsilon}\) is \(L^{2}\left([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\right)\)-negligible, we obtain
\[\|w_{\varepsilon}(t,\cdot)\|_{\ell_{s}^{2}(\hbar\mathbb{Z}^{n})}^{2}\lesssim \varepsilon^{-3L_{1}-L_{2}-1}\varepsilon^{3L_{1}+L_{2}+1+q}=\varepsilon^{q}, \quad\text{ for all }q\in\mathbb{N}_{0},\]
for all \(t\in[0,T]\). Now by integrating the above estimate with respect to \(t\in[0,T]\), we get
\[\|w_{\varepsilon}\|_{L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))}\lesssim \varepsilon^{q},\quad\text{ for all }q\in\mathbb{N}_{0}.\]
Thus \(\left(u_{\varepsilon}-\tilde{u}_{\varepsilon}\right)_{\varepsilon}\) is \(L^{2}\left([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\right)\)-negligible. This completes the proof.
Next we prove that the very weak solution obtained in Theorem 2.4 is consistent with the classical solution obtained in Theorem 2.1.
Proof of Theorem 2.8.: Let \(\tilde{u}\) be the classical solution given by Theorem 2.1, that is, \(\tilde{u}\) satisfies the Cauchy problem
\[\left\{\begin{array}{l}\partial_{t}\tilde{u}(t,k)+a(t)\hbar^{-2\alpha}\left( -\mathcal{L}_{h}\right)^{\alpha}\tilde{u}(t,k)+b(t)\tilde{u}(t,k)=f(t,k),\quad (t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ \tilde{u}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{4.27}\]
and let \((u_{\varepsilon})_{\varepsilon}\) be the very weak solution obtained by Theorem 2.4, that is, \((u_{\varepsilon})_{\varepsilon}\) satisfies the regularised Cauchy problem
\[\left\{\begin{array}{l}\partial_{t}u_{\varepsilon}(t,k)+a_{\varepsilon}(t )\hbar^{-2\alpha}\left(-\mathcal{L}_{h}\right)^{\alpha}u_{\varepsilon}(t,k)+b _{\varepsilon}(t)u_{\varepsilon}(t,k)=f_{\varepsilon}(t,k),\quad t\in(0,T],\\ u_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n}.\end{array}\right. \tag{4.28}\]
Note that by the hypothesis the nets \(\left(a_{\varepsilon}-a\right)_{\varepsilon},\left(b_{\varepsilon}-b\right) _{\varepsilon}\) and \(\left(f_{\varepsilon}-f\right)_{\varepsilon}\) are converging to \(0\) uniformly. Using (4.27) and (4.28), we have
\[\left\{\begin{array}{l}\partial_{t}\tilde{u}(t,k)+a_{\varepsilon}(t)\hbar^{ -2\alpha}\left(-\mathcal{L}_{h}\right)^{\alpha}\tilde{u}(t,k)+b_{\varepsilon }(t)\tilde{u}(t,k)=f_{\varepsilon}(t,k)+g_{\varepsilon}(t,k),\quad t\in(0,T], \\ \tilde{u}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{4.29}\]
where
\[g_{\varepsilon}(t,k):=\left(a_{\varepsilon}-a\right)\left(t\right)\hbar^{-2 \alpha}\left(-\mathcal{L}_{h}\right)^{\alpha}\tilde{u}(t,k)+\left(b_{ \varepsilon}-b\right)\left(t\right)\tilde{u}(t,k)+\left(f-f_{\varepsilon} \right)(t,k),\]
\(g_{\varepsilon}\in L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\) and \(g_{\varepsilon}\to 0\) in \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\) as \(\varepsilon\to 0\).
Combining the Cauchy problem (4.28) and (4.29), we deduce that the net \(w_{\varepsilon}:=(\tilde{u}-u_{\varepsilon})\) solves the Cauchy problem
\[\left\{\begin{array}{l}\partial_{t}w_{\varepsilon}(t,k)+a_{\varepsilon}(t) \hbar^{-2\alpha}\left(-\mathcal{L}_{\hbar}\right)^{\alpha}w_{\varepsilon}(t,k) +b_{\varepsilon}(t)w_{\varepsilon}(t,k)=g_{\varepsilon}(t,k),\quad t\in(0,T], \\ w_{\varepsilon}(0,k)=0,\quad k\in\hbar\mathbb{Z}^{n}.\end{array}\right. \tag{4.30}\]
Taking the Fourier transform of the Cauchy problem (4.30) with respect to \(k\in\hbar\mathbb{Z}^{n}\), we obtain
\[\left\{\begin{array}{l}\partial_{t}\widehat{w}_{\varepsilon}(t,\xi)+a_{ \varepsilon}(t)\nu^{2}(\xi)\widehat{w}_{\varepsilon}(t,\xi)+b_{\varepsilon}( t)\widehat{w}_{\varepsilon}(t,\xi)=\widehat{g}_{\varepsilon}(t,\xi),\quad(t,\xi) \in(0,T]\times\mathbb{T}_{\hbar}^{n},\\ \widehat{w}_{\varepsilon}(0,\xi)=0,\quad\xi\in\mathbb{T}_{\hbar}^{n},\end{array}\right. \tag{4.31}\]
where \(\nu^{2}(\xi)\) is given by (4.2). Define the energy functional for the Cauchy problem (4.31) by
\[E_{\varepsilon}(t,\xi):=(a_{\varepsilon}(t)\widehat{w}_{\varepsilon}(t,\xi), \widehat{w}_{\varepsilon}(t,\xi)),\quad(t,\xi)\in[0,T]\times\mathbb{T}_{ \hbar}^{n},\]
where \((\cdot,\cdot)\) is the inner product in the Sobolev space \(H^{s}(\mathbb{T}_{\hbar}^{n})\).
Since the coefficients are sufficiently regular, following the lines of the proof of Theorem 2.1, the following energy estimate holds:
\[\partial_{t}E_{\varepsilon}(t,\xi)\leq\kappa_{1}E_{\varepsilon}(t,\xi)+\kappa _{2}\left|\widehat{g}_{\varepsilon}(t,\xi)\right|^{2},\]
for some positive constants \(\kappa_{1}\) and \(\kappa_{2}\). Then using the Gronwall's lemma and the energy bounds similar to (4.6) alongwith the Plancherel formula (3.6), we obtain the following estimate
\[\|w_{\varepsilon}(t,\cdot)\|_{\ell_{s}^{2}(\hbar\mathbb{Z}^{n})}^{2}\lesssim\| w_{\varepsilon}(0,\cdot)\|_{\ell_{s}^{2}(\hbar\mathbb{Z}^{n})}^{2}+\|g_{ \varepsilon}\|_{L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))}^{2},\]
for all \(t\in[0,T]\). Now, by integrating the above estimate with respect to \(t\in[0,T]\) and using the fact that \(w_{\varepsilon}(0,k)\equiv 0\) for all \(\varepsilon\in(0,1]\), we get
\[\|w_{\varepsilon}\|_{L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))}^{2} \lesssim\|g_{\varepsilon}\|_{L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))}^ {2}.\]
Since \(g_{\varepsilon}\to 0\) in \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\), we have
\[w_{\varepsilon}\to 0\text{ in }L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n})), \quad\varepsilon\to 0,\]
implying also that
\[u_{\varepsilon}\to\tilde{u}\text{ in }L^{2}([0,T];\ell_{s}^{2}(\hbar \mathbb{Z}^{n})),\quad\varepsilon\to 0.\]
Furthermore, the limit is the same for every representation of \(u\), since they will differ from \(\left(u_{\varepsilon}\right)_{\varepsilon}\) by a \(L^{2}([0,T];\ell_{s}^{2}(\hbar\mathbb{Z}^{n}))\)-negligible net. This completes the proof.
## 5. Semi-classical limit \(\hbar\to 0\)
In this section, we will prove the semi-classical limit theorems for the classical solution as well as for the very weak solution.
Proof of Theorem 2.9.: Consider two Cauchy problems:
\[\left\{\begin{array}{l}\partial_{t}u(t,k)+a(t)\hbar^{-2\alpha}(-\mathcal{L}_ {\hbar})^{\alpha}u(t,k)+b(t)u(t,k)=f(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{ Z}^{n},\\ u(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{5.1}\]
and
\[\left\{\begin{array}{l}\partial_{t}v(t,x)+a(t)(-\mathcal{L})^{\alpha}v(t,x)+b(t)v( t,x)=f(t,x),\quad(t,x)\in(0,T]\times\mathbb{R}^{n},\\ v(0,x)=u_{0}(x),\quad x\in\mathbb{R}^{n},\end{array}\right. \tag{5.2}\]
where \(\left(-\mathcal{L}\right)^{\alpha}\) is the usual fractional Laplacian on \(\mathbb{R}^{n}\) given by (1.5). We have assumed that \(f\) and \(u_{0}\) in (5.1) are the restrictions to \(\hbar\mathbb{Z}^{n}\) of the corresponding functions in (5.2) defined on \(\mathbb{R}^{n}\). From the equations (5.1) and (5.2), denoting \(w:=u-v\), we get
\[\left\{\begin{array}{l}\partial_{t}w(t,k)+a(t)\hbar^{-2\alpha}(-\mathcal{L} _{\hbar})^{\alpha}w(t,k)+b(t)w(t,k)=a(t)\left((-\mathcal{L})^{\alpha}-\hbar^{- 2\alpha}(-\mathcal{L}_{\hbar})^{\alpha}\right)v(t,k),\\ w(0,k)=0,\quad k\in\hbar\mathbb{Z}^{n}.\end{array}\right. \tag{5.3}\]
Since \(w_{0}=0\), applying Theorem 2.1 with s = 0 for the Cauchy problem (5.3) and using the estimate (2.1), we obtain
\[\|w(t,\cdot)\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}^{2} \leq C_{T,a,b}\left\|a\left((-\mathcal{L})^{\alpha}-\hbar^{-2\alpha} (-\mathcal{L}_{\hbar})^{\alpha}\right)v\right\|_{L^{2}([0,T];\ell^{2}(\hbar \mathbb{Z}^{n}))}^{2}\] \[\leq C_{T,a,b}\left\|a\right\|_{L^{\infty}([0,T])}^{2}\left\|\left(( -\mathcal{L})^{\alpha}-\hbar^{-2\alpha}(-\mathcal{L}_{\hbar})^{\alpha}\right) v\right\|_{L^{2}([0,T];\ell^{2}(\hbar\mathbb{Z}^{n}))}^{2}, \tag{5.4}\]
for all \(t\in[0,T]\), where the constant \(C_{T,a,b}\) is given by
\[C_{T,a,b}=a_{0}^{-1}\|a\|_{L^{\infty}}e^{a_{0}^{-1}\left(\|a\|_{L^{\infty}}+2 \|a\|_{L^{\infty}}\|b\|_{L^{\infty}}+\|a\|_{L^{\infty}}\right)T}.\]
Now we will estimate the term \(\left\|((-\mathcal{L})^{\alpha}-\hbar^{-2\alpha}(-\mathcal{L}_{\hbar})^{\alpha })\,v\right\|_{L^{2}([0,T];\ell^{2}(\hbar\mathbb{Z}^{n}))}^{2}.\) Using the Plancherel formula (3.6) for \(s=0\), (3.9) and (3.10), we have
\[\left\|\left((-\mathcal{L})^{\alpha}-\hbar^{-2\alpha}(-\mathcal{ L}_{\hbar})^{\alpha}\right)v(t,\cdot)\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}=\\ \left\|\left(\left[\sum_{l=1}^{n}(2\pi\xi_{l})^{2}\right]^{\alpha}- \hbar^{-2\alpha}\left[\sum_{l=1}^{n}4\sin^{2}\left(\pi\hbar\xi_{l}\right) \right]^{\alpha}\right)\widehat{v}(t,\cdot)\right\|_{L^{2}(\mathbb{T}_{\hbar }^{n})}. \tag{5.5}\]
Since \(|\cdot|^{\alpha}:\mathbb{R}\rightarrow\mathbb{R}\) is \(\alpha\)-Hölder continuous for \(0<\alpha\leq 1\), we have the following inequality
\[\left||x|^{2\alpha}-|y|^{2\alpha}\right|\lesssim\left|\left|x\right|^{2}-|y|^ {2}\right|^{\alpha},\quad x,y\in\mathbb{R}^{n},\]
and \(|x|=\sqrt{x_{1}^{2}+\cdots+x_{n}^{2}}\). Now, using the above inequality and the Taylor expansion for \(\sin^{2}(\pi\hbar\xi_{l})\), we get
\[\left|\left[\sum_{l=1}^{n}(2\pi\xi_{l})^{2}\right]^{\alpha}-\hbar^{ -2\alpha}\left[\sum_{l=1}^{n}4\sin^{2}\left(\pi\hbar\xi_{l}\right)\right]^{ \alpha}\right| \tag{5.6}\] \[\lesssim \left|\sum_{l=1}^{n}4\pi^{2}\xi_{l}^{2}-\hbar^{-2}\sum_{l=1}^{n} 4\sin^{2}(\pi\hbar\xi_{l})\right|^{\alpha}\] \[= \left|\sum_{l=1}^{n}\left[4\pi^{2}\xi_{l}^{2}-\hbar^{-2}4\left( \pi^{2}\hbar^{2}\xi_{l}^{2}-\frac{\pi^{4}}{3}\hbar^{4}\xi_{l}^{4}\cos\left(2 \theta_{l}\right)\right)\right]\right|^{\alpha}\] \[= \left(\frac{4\pi^{4}\hbar^{2}}{3}\right)^{\alpha}\left|\sum_{l=1} ^{n}\xi_{l}^{4}\cos(2\theta_{l})\right|^{\alpha}\] \[\lesssim \hbar^{2\alpha}\left[\sum_{l=1}^{n}\xi_{l}^{4}\right]^{\alpha}\] \[\lesssim \hbar^{2\alpha}|\xi|^{4\alpha},\]
where \(|\xi|^{4\alpha}=\left[\sum\limits_{l=1}^{n}\xi_{l}^{2}\right]^{2\alpha}\) and \(\theta_{l}\in(0,\pi\hbar\xi_{l})\) or \((\pi\hbar\xi_{l},0)\) depending on the sign of \(\xi_{l}\). Now, combining the estimate (5.6) with (5.5), we get
\[\left\|\left((-\mathcal{L})^{\alpha}-\hbar^{-2\alpha}(-\mathcal{L }_{\hbar})^{\alpha}\right)v(t,\cdot)\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})} \lesssim \hbar^{2\alpha}\|(1+|\cdot|^{2})^{2\alpha}\widehat{v}(t,\cdot) \|_{L^{2}(\mathbb{T}_{\hbar}^{n})} \tag{5.7}\] \[\lesssim \hbar^{2\alpha}\|(1+|\cdot|^{2})^{m/2}\widehat{v}(t,\cdot)\|_{L^ {2}(\mathbb{T}_{\hbar}^{n})},\]
whenever \(m\geq 4\alpha\). Since \(u_{0}\in H^{m}(\mathbb{R}^{n})\) with \(m\geq 4\alpha\), using Theorem 1.1, we have \(v\in H^{m}(\mathbb{R}^{n})\) with \(m\geq 4\alpha\). Therefore, using (5.4) and (5.7), we deduce that \(\|w(t,\cdot)\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}\to 0\) as \(\hbar\to 0.\) Hence we have
\[\|v(t,\cdot)-u(t,\cdot)\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}\to 0\text{ as }\hbar\to 0, \text{ for all }t\in[0,T].\]
This finishes the proof of Theorem 2.9.
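The symbol estimate (5.6) underlying this proof is easy to observe numerically: for a fixed frequency, the rescaled discrete symbol approaches the continuous one as \(\hbar\to 0\), and the ratio of the difference to \(\hbar^{2\alpha}|\xi|^{4\alpha}\) stays bounded. A minimal sketch, with \(n=1\) and arbitrary illustrative values of \(\alpha\) and \(\xi\):

```python
import numpy as np

alpha, xi = 0.7, 1.3                                           # one fixed frequency, n = 1
cont = np.abs(2 * np.pi * xi) ** (2 * alpha)                   # continuous symbol |2 pi xi|^{2 alpha}
for hbar in (1e-1, 1e-2, 1e-3, 1e-4):
    disc = hbar ** (-2 * alpha) * (4 * np.sin(np.pi * hbar * xi) ** 2) ** alpha
    diff = abs(cont - disc)
    # the last column stays bounded as hbar -> 0, consistent with estimate (5.6)
    print(hbar, diff, diff / (hbar ** (2 * alpha) * abs(xi) ** (4 * alpha)))
```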
Proof of Theorem 2.11.: Without making any significant changes to the proof of Theorem 2.9, we can prove Theorem 2.11.
## 6. Remarks
In this section we make a few remarks related to the very weak solution for the Cauchy problem (1.4) in the Euclidean setting:
1. For a net \(u_{\varepsilon}=u_{\varepsilon}(t,x)\), where \(x\in\mathbb{R}^{n}\) is a variable in the Euclidean space, the definitions of moderateness and negligibility are adapted accordingly from Definition 2.2; i.e., the net \((u_{\varepsilon})_{\varepsilon}\in L^{2}([0,T];H^{m}(\mathbb{R}^{n}))\) is \(L^{2}([0,T];H^{m}(\mathbb{R}^{n}))\)-moderate if there exist \(N\in\mathbb{N}_{0}\) and \(c>0\) such that \[\|u_{\varepsilon}\|_{L^{2}([0,T];H^{m}(\mathbb{R}^{n}))}\leq c\varepsilon^{-N},\]
for all \(\varepsilon\in(0,1]\) and is \(L^{2}([0,T];H^{m}(\mathbb{R}^{n}))\)-negligible if for all \(q\in\mathbb{N}_{0}\) there exists \(c>0\) such that \[\|u_{\varepsilon}\|_{L^{2}([0,T];H^{m}(\mathbb{R}^{n}))}\leq c\varepsilon^{q},\] for all \(\varepsilon\in(0,1]\).
2. The notion of a very weak solution for the Cauchy problem (1.4) can be adapted from Definition 2.3 by simply replacing the \(L^{2}([0,T];\ell_{s}^{2}(h\mathbb{Z}^{n}))\)-moderate regularisation by the \(L^{2}([0,T];H^{m}(\mathbb{R}^{n}))\)-moderate regularisation for the source term \(f\) and the net \((u_{\varepsilon})_{\varepsilon}\).
3. The proof of Theorem 1.2 will be similar to the proof of Theorem 2.4 except for using the inner product of \(L^{2}_{m}(\mathbb{R}^{n})\) instead of \(H^{s}(\mathbb{T}^{n}_{h})\) in (4.11).
4. The notion of the uniqueness of very weak solution for the Cauchy problem (1.4) can also be formulated by making similar modifications to Definition 2.5.
5. If the coefficients \(a,b\) are regular, the very weak solution obtained in Theorem 1.2 recaptures the classical solution obtained in Theorem 1.1 in the limit \(L^{2}([0,T];H^{m}(\mathbb{R}^{n}))\) as \(\varepsilon\to 0\). More precisely, we have the consistency result similar to Theorem 2.8 with the same modifications as above.
|
2307.09274 | Improving Text Semantic Similarity Modeling through a 3D Siamese Network | Siamese networks have gained popularity as a method for modeling text
semantic similarity. Traditional methods rely on pooling operation to compress
the semantic representations from Transformer blocks in encoding, resulting in
two-dimensional semantic vectors and the loss of hierarchical semantic
information from Transformer blocks. Moreover, this limited structure of
semantic vectors is akin to a flattened landscape, which restricts the methods
that can be applied in downstream modeling, as they can only navigate this flat
terrain. To address this issue, we propose a novel 3D Siamese network for text
semantic similarity modeling, which maps semantic information to a
higher-dimensional space. The three-dimensional semantic tensors not only
retains more precise spatial and feature domain information but also provides
the necessary structural condition for comprehensive downstream modeling
strategies to capture them. Leveraging this structural advantage, we introduce
several modules to reinforce this 3D framework, focusing on three aspects:
feature extraction, attention, and feature fusion. Our extensive experiments on
four text semantic similarity benchmarks demonstrate the effectiveness and
efficiency of our 3D Siamese Network. | Jianxiang Zang, Hui Liu | 2023-07-18T14:11:58Z | http://arxiv.org/abs/2307.09274v1 | # Improving Text Semantic Similarity Modeling through a 3D Siamese Network
###### Abstract
Siamese networks have gained popularity as a method for modeling text semantic similarity. Traditional methods rely on pooling operation to compress the semantic representations from Transformer blocks in encoding, resulting in two-dimensional semantic vectors and the loss of hierarchical semantic information from Transformer blocks. Moreover, this limited structure of semantic vectors is akin to a flattened landscape, which restricts the methods that can be applied in downstream modeling, as they can only navigate this flat terrain. To address this issue, we propose a novel 3D Siamese network for text semantic similarity modeling, which maps semantic information to a higher-dimensional space. The three-dimensional semantic tensors not only retains more precise spatial and feature domain information but also provides the necessary structural condition for comprehensive downstream modeling strategies to capture them. Leveraging this structural advantage, we introduce several modules to reinforce this 3D framework, focusing on three aspects: feature extraction, attention, and feature fusion. Our extensive experiments on four text semantic similarity benchmarks demonstrate the effectiveness and efficiency of our 3D Siamese Network.
## 1 Introduction
The aim of modeling text semantic similarity is to predict the degree of similarity between a pair of text sequences [15, 6, 24, 27, 2]. One of the popular methods for modeling text semantic similarity is the Siamese network, which employs dual Transformer encoders to encode two texts separately and fuse them together at the matching layer [18, 19]. The Transformer blocks used as encoders capture hierarchical semantic information that can be viewed from two aspects: positional (spatial) domain information and feature domain information [30]. Shallow Transformer blocks have high global feature encapsulation, whereas deep blocks have comparatively lower global feature encapsulation. On the other hand, deep Transformer blocks have high encapsulation of local feature and positional information, while shallow blocks have low encapsulation of local feature information.
In order to fully utilize this semantic information for modeling semantic similarity, researchers have introduced different modules into the models from three perspectives: feature extraction [26, 14, 9], late attention [16, 20, 2], and feature fusion [31, 6, 26, 18, 35, 19]. However, as illustrated in Figure 1(a), these methods are based on a flattened processing, pooling the representation from each Transformer block, and in some cases even solely utilizing the representation from the final Transformer block [18, 16, 22, 19]. These processes lead to two-dimensional (sentence length, feature dimension) semantic vectors and cause a substantial loss of hierarchical semantic information from different Transformer blocks. Moreover, this limited structure of semantic vectors is akin to a flattened landscape, which restricts the methods that can be applied in downstream modeling, as they can only navigate this flat terrain. To tackle this issue, as depicted in Figure 1(b), we have implemented a novel technique that models text semantic similarity through a 3D Siamese network. This 3D network architecture maps semantic information to a higher-dimensional space, comprehensively and effectively retaining the hierarchical semantic information from Transformer blocks. For traditional methods of modeling two-dimensional semantic tensor similarity, the limitations lie in the tensor dimensions, which dictate the use of a single attention mechanism, and the reliance on global pooling during the feature fusion stage. In contrast, the resulting three-dimensional semantic tensors provide the necessary structural condition for a more diverse range of options for attention and feature fusion in downstream tasks, enabling a more comprehensive and robust capture of spatial and feature domain information.
Figure 1: Comparison between the traditional Siamese Network and our 3D network for modeling text semantic similarity, where ’P’ denotes the pooling operation and ’C’ denotes the concatenation operation.
Leveraging this structural advantage, and drawing inspiration from modules within the image processing domain [13, 33, 12, 29, 21], we suggest several modules focusing on three vital aspects to reinforce this network: feature extraction, attention, and feature fusion. Specifically, we introduce **A**daptive weighting to **E**xtract **F**eatures (**AFE**) from the representation of each Transformer block, and use a stacked concatenation method to encode sentences for downstream semantic similarity modeling. The 3D Siamese network provides increased potential for attention mechanisms and feature fusion. In contrast to traditional single late attention, we simultaneously incorporate **S**patial **A**ttention (**SA**) and **F**eature **A**ttention (**FA**), forming a robust information interactor. Moreover, from a higher-dimensional standpoint, we draw upon the Inception architecture and Dilated Convolution to develop a **R**eceptive **F**ield **M**odule (**RFM**) for feature fusion.
Our main contributions resides in the following aspects:
* We propose a 3D Siamese network that comprehensively and effectively retaining the hierarchical semantic information from Transformer blocks, and provides the necessary structural condition for comprehensive downstream semantic similarity modeling strategies to capture them.
* To reinforce this 3D Siamese network, we suggest several modules for the downstream semantic similarity task that focus on three crucial aspects: feature extraction, attention, and feature fusion.
* Introduced modules exhibit "plug-and-play" characteristic, which contributes to the model's robust modularity and scalability.
## 2 Related Work
Text semantic similarity modeling is a core problem in Natural Language Processing, which aims to determine the semantic relationship between two given textual inputs[15, 6, 24, 27, 2].
In recent years, Transformer-based pre-trained encoders have garnered significant attention for modeling text semantic similarity [30], yielding remarkable results [25, 7, 34]. Two primary training paradigms have emerged, based on Transformer encoders. The first entails the joint encoding of sentence pairs by Transformer modules [23], which facilitates comprehensive interaction between sentence pairs in terms of information. However, this approach results in increased computational costs and higher inference latency, posing challenges for industrial deployment. Alternatively, the second approach employs a structure with Siamese Transformer encoders [7, 26, 18, 22]. This method enables offline computation of text embeddings, significantly reducing online latency [3, 26].
To enhance the performance of Siamese networks, researchers have proposed corresponding improvements in three areas: feature extraction, late attention, feature fusion. In terms of feature extraction, researchers [26] compared the effects of three pooling fusion methods on the performance of Siamese BERT. The average-pooling strategy can extract more abundant information compared with max pooling. In addition to pooling operation, DenseNet [14, 9] has also been used for feature extraction [3], largely preserving the information of the original text features. In order to enhance the model's interaction capability, researchers have introduced numerous late attention strategies, such as the cross-attention layer [16], the MLP layer [20], and the Transformer layer [2]. Regarding feature fusion, tensor cross-fusion [31, 6, 26] and pooling fusion [6, 26, 18] are the most commonly used methods for feature fusion. Based on these, [35] propose an improved mechanism for information fusion and interaction that compares local and aligned representations from three perspectives. [19] propose VIRT-Adapted Interaction which performs feature extraction and information exchange simultaneously.
As mentioned in Section 1, these methods are based on a uniform, flattened processing that pools the representation from each Transformer block during encoding [26, 18, 16, 22, 19]; their limitation lies in the insufficient extraction of these two types of information, as well as in the constraints imposed on the downstream choices for similarity modeling. To address this issue, we adopt a novel approach that models text semantic similarity through a three-dimensional Siamese network and suggest several modules to reinforce it.
## 3 Methodology
The task of modeling text semantic similarity can be described as following. Given the input sentence pair \(\mathbf{s}^{x}\) and \(\mathbf{s}^{y}\), the training goal of the model is to train a classifier \(\xi\) that computes conditional probabilities \(P(\mathrm{label}|\mathbf{s}^{x},\mathbf{s}^{y})\) to predict the relationship between the output sentence pairs based on the output probabilities. The \(\mathrm{label}\in\Omega\) represents different degrees of semantic similarity.
\[P(\mathrm{label}|\mathbf{s}^{x},\mathbf{s}^{y})=\xi(\mathbf{s}^{x},\mathbf{s}^{y}) \tag{1}\]
In this paper, we introduce a novel 3D Siamese network for text semantic similarity modeling, emphasizing feature extraction, attention, and feature fusion. The 3D Siamese network consists of the following modules we introduced: Adaptive Feature Extraction (AFE), Spatial Attention & Feature Attention (SA&FA), and Receptive Field Module (RFM). AFE comprehensively and efficiently preserves semantic information from different Transformer blocks. SA captures long-range dependencies between sentence pairs, while FA dynamically adjusts feature weights within certain sentence for better discrimination. We combine SA and FA for robust information interactor. Finally, RFM leverages the Inception architecture and Dilated Convolution layers to capture a larger receptive field.
### Adaptive Feature Extraction
Compared to the traditional method of using pooling operations to compress semantic representations from Transformer blocks, we propose the use of trainable Adaptive weights for Feature Extraction (AFE) in each block. These weighted representations are concatenated in a three-dimensional form, delivering the most comprehensive information for downstream task modeling.
Given the input sentence pair \(\mathbf{s}^{x}=[\mathbf{s}^{x}_{1},\mathbf{s}^{x}_{2}...\mathbf{s}^{x}_{l_{x}}]\) and \(\mathbf{s}^{y}=[\mathbf{s}^{y}_{1},\mathbf{s}^{y}_{2}...\mathbf{s}^{y}_{l_{y}}]\), we use Transformer blocks to acquire representations of the input sentences. Assuming there are \(H\) Transformer blocks, the semantic representations of the sentence pair \(\mathbf{x}^{h}_{i}\) and \(\mathbf{y}^{h}_{j}\) can be obtained in the \(h^{th}\) Transformer block.
\[\begin{split}\mathbf{x}^{h}_{i}&=\mathrm{TransformerBlock }^{h}(\mathbf{s}^{x},i),i\in[1,2...l_{x}]\\ \mathbf{y}^{h}_{j}&=\mathrm{TransformerBlock}^{h}(\mathbf{s}^{ y},j),j\in[1,2...l_{y}]\end{split} \tag{2}\]
Our goal is to obtain a semantic tensor comprising the representations from each Transformer block, concatenated using trainable weights. We define our unnormalized attention vectors \(\widehat{\mathbf{x}}^{h}_{i}\) and \(\widehat{\mathbf{y}}^{h}_{j}\) as follows:
\[\begin{split}\widehat{\mathbf{x}}^{h}_{i}&=\sigma(W^{h}_ {2}\left(\mathrm{ReLU}(W^{h}_{1}\mathbf{x}^{h}_{i}+b^{h}_{1})\right)+b^{h}_{2})\\ \widehat{\mathbf{y}}^{h}_{j}&=\sigma(W^{h}_{2}\left( \mathrm{ReLU}(W^{h}_{1}\mathbf{y}^{h}_{j}+b^{h}_{1})\right)+b^{h}_{2})\end{split} \tag{3}\]
where \(W_{1}^{h}\), \(W_{2}^{h}\), \(b_{1}^{h}\), \(b_{2}^{h}\) are trainable parameters for the representation from the \(h^{th}\) Transformer block. \(\sigma\) refers to the sigmoid activation function. \(\mathrm{ReLU}\) refers to the ReLU activation function. We calculate the normalized attention weights \(\mathbf{\alpha}_{i}^{h}\) and \(\mathbf{\beta}_{j}^{h}\):
\[\mathbf{\alpha}_{i}^{h}=\frac{\widetilde{\mathbf{x}}_{i}^{h}}{\sum_{i=1}^{l_{x}} \widetilde{\mathbf{x}}_{i}^{h}},\quad\mathbf{\beta}_{j}^{h}=\frac{\widetilde{\mathbf{y}}_{ j}^{h}}{\sum_{j=1}^{l_{y}}\widetilde{\mathbf{y}}_{j}^{h}} \tag{4}\]
These weights are applied to perform a weighted concatenation on the representations from each transformer block, producing final semantic tensors \(\{\mathbf{X},\mathbf{Y}\}\) that are utilized for further modeling of semantic similarity in downstream tasks.
\[\mathbf{X}=[\mathbf{\alpha}_{i}^{1}\mathbf{x}_{i}^{1};\mathbf{\alpha}_{i}^{2}\bm {x}_{i}^{2}...;\mathbf{\alpha}_{i}^{H}\mathbf{x}_{i}^{H}] \tag{5}\] \[\mathbf{Y}=[\mathbf{\beta}_{j}^{1}\mathbf{y}_{j}^{1};\mathbf{\beta}_{j}^{2}\mathbf{y} _{j}^{2};...;\mathbf{\beta}_{j}^{H}\mathbf{y}_{j}^{H}]\]
In the above equation, both \(\mathbf{X}\) and \(\mathbf{Y}\in\mathbb{R}^{H\times L\times D}\), where \(D\) is the dimension of the sentence vector, \(H\) is the number of Transformer blocks, and \(L\) is the length of the sentence vector. \([;]\) denotes the concatenation operation.
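A minimal PyTorch sketch of the AFE module described by Equations 3-5 is given below; the hidden size of the gating MLP and the choice of one scalar weight per token are our assumptions where the text leaves the details open.

```python
import torch
import torch.nn as nn

class AFE(nn.Module):
    """Adaptive Feature Extraction (a sketch of Eqs. 3-5): per-token gates for every block,
    normalised over positions, followed by a stacked concatenation across blocks."""
    def __init__(self, num_blocks: int, dim: int, hidden: int = 128):
        super().__init__()
        self.gates = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid())
            for _ in range(num_blocks)
        ])

    def forward(self, block_states):
        # block_states: list of H tensors, each of shape (batch, L, D), one per Transformer block
        weighted = []
        for h, states in enumerate(block_states):
            scores = self.gates[h](states)                      # Eq. (3): unnormalised weights, (batch, L, 1)
            alpha = scores / scores.sum(dim=1, keepdim=True)    # Eq. (4): normalise over positions
            weighted.append(alpha * states)                     # weighted block representation
        return torch.stack(weighted, dim=1)                     # Eq. (5): (batch, H, L, D)

# toy usage: 12 blocks, hidden size 768, sentence length 32
afe = AFE(num_blocks=12, dim=768)
states = [torch.randn(2, 32, 768) for _ in range(12)]
print(afe(states).shape)   # torch.Size([2, 12, 32, 768])
```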
### Spatial Attention & Feature Attention
As we have mentioned, traditional methods of modeling text semantic similarity employ consistent pooling compression of the representations generated by transformer blocks. As a result, they can only adopt a single late attention to model the spatial and feature domain information between sentences. In contrast, our approach takes a 3D perspective and allows for the separate modeling of Spatial Attention (SA) and Feature Attention (FA). By fusing the semantic tensors that have been enhanced through these attention mechanisms for feature fusion, our approach forms a more powerful information interactor. Specifically, Spatial Attention effectively learns the inter-dependencies between various positions of two semantic tensors, enhancing its ability to capture long-range dependencies between sentence pairs. Meanwhile, Feature Attention is capable of discerning dependencies among features within the semantic tensor, dynamically adjusting the weights for each feature and thereby extracting more discriminative features. For the semantic tensors \(\{\mathbf{X},\mathbf{Y}\}\in\mathbb{R}^{H\times L\times D}\), we fuse the results of two attention-enhanced processes using either feature-level or element-wise multiplication to obtain the semantic tensors \(\{\mathbf{X}^{\prime},\mathbf{Y}^{\prime}\}\in\mathbb{R}^{H\times L\times D^{\prime}}\).
\[\mathbf{X}^{\prime}=\mathrm{SA}(\mathbf{X},\mathbf{Y})\odot\mathrm{FA}(\mathbf{X}) \tag{6}\] \[\mathbf{Y}^{\prime}=\mathrm{SA}(\mathbf{Y},\mathbf{X})\odot\mathrm{FA}(\mathbf{Y})\]
where SA and FA refer to the Spatial Attention Module and the Feature Attention Module, respectively. \(\odot\) refers to element-wise multiplication.
#### 3.2.1 Spatial Attention
Spatial Attention enables the modeling of inter-dependencies between different positions of semantic tensor pairs, which enhances the ability of our model to capture long-range dependencies between pairs of sentences. Let \(\{\mathbf{X},\mathbf{Y}\}\in\mathbb{R}^{H\times L\times D}\) denote the semantic tensors that have been processed by the Adaptive Feature Fusion and Extraction module. We first reshape them to \(\mathbf{K}_{i}\) and \(\mathbf{Q}_{j}\) with the shape of \(N\times D\), where \(N=H\times L\). Different from self-attention, our query matrix \(\mathbf{Q}\) and key matrix \(\mathbf{K}\) interact with each other for cross-sentence information communication. Therefore, as illustrated in Figure 2, we design two branches to process sentence pair's representations respectively. After that, we perform a matrix multiplication between the transpose of \(\mathbf{K}\) and \(\mathbf{Q}\) and transpose this calculation result to obtain the semantic tensor of another branch. Finally, we apply a softmax layer on them to calculate \(\mathbf{M}_{ji}^{Y}\) and \(\mathbf{M}_{ij}^{X}\) respectively.
\[\mathbf{M}_{ji}^{Y}=\frac{\exp(\mathbf{K}_{i}^{T}\cdot\mathbf{Q}_{j})}{\sum_{k=1}^{N}\exp (\mathbf{K}_{i}^{T}\cdot\mathbf{Q}_{k})},\quad\mathbf{M}_{ij}^{X}=\frac{\exp(\mathbf{K}_{i}^{T }\cdot\mathbf{Q}_{j})}{\sum_{k=1}^{N}\exp(\mathbf{K}_{k}^{T}\cdot\mathbf{Q}_{j})} \tag{7}\]
Here, the similarity matrix \(\mathbf{M}_{ji}^{Y}\) measures the impact of the \(j^{th}\) position of \(\mathbf{Y}\) on the \(i^{th}\) position of \(\mathbf{X}\). The more similar the feature representations of the two positions are, the larger the correlation between them becomes. The matrix \(\mathbf{M}_{ij}^{X}\) plays the symmetric role and is also defined in Equation 7.
Meanwhile, we reshape \(\mathbf{X}\) and \(\mathbf{Y}\) to \(\mathbf{V}_{i}^{X}\) and \(\mathbf{V}_{j}^{Y}\), both with the shape of \(N\times D\), where \(N=H\times L\). Then we perform a matrix multiplication between \(\mathbf{M}_{ji}^{Y}\) and \(\mathbf{V}_{j}^{Y}\). For the other branch, following the same calculation process, we perform a matrix multiplication between \(\mathbf{M}_{ij}^{X}\) and \(\mathbf{V}_{i}^{X}\).
\[\mathbf{X}_{i}^{(\mathrm{SA})}=\sum_{j=1}^{N}\mathbf{M}_{ji}^{Y}\cdot\mathbf{V}_{j}^{Y}, \quad i\in[1,...N] \tag{8}\] \[\mathbf{Y}_{j}^{(\mathrm{SA})}=\sum_{i=1}^{N}\mathbf{M}_{ij}^{X}\cdot\mathbf{V} _{i}^{X},\quad j\in[1,...N]\]
where \(\mathbf{X}_{i}^{(\mathrm{SA})}\) is the weighted sum of \(\{\mathbf{Y}_{j}\}_{j=1}^{N}\) with weights \(\mathbf{M}_{ji}^{Y}\). Intuitively, the purpose of Spatial Attention is to use the elements in \(\{\mathbf{Y}_{j}\}_{j=1}^{N}\) that are related to \(\mathbf{X}_{i}^{(\mathrm{SA})}\) to represent \(\mathbf{X}_{i}^{(\mathrm{SA})}\). The same is performed for the other branch with Equation 8. Finally, we apply reshape operations to the results and then leverage a \(1\times 1\) convolution layer to get the outputs \(\mathbf{X}^{(\mathrm{SA})}\) and \(\mathbf{Y}^{(\mathrm{SA})}\), where \(\mathbf{X}^{(\mathrm{SA})},\mathbf{Y}^{(\mathrm{SA})}\in\mathbb{R}^{H\times L\times D^{\prime}}\) and \(D^{\prime}\) is the feature dimension of the low-dimensional mapping space.
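The following is a minimal sketch of the Spatial Attention computation of Eqs. (7)-(8). The learned query/key projections, the shared weights across the two branches, and the output dimension of the \(1\times 1\) convolution are our assumptions; the sketch is illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Sketch of Eqs. (7)-(8): cross-sentence attention over all H*L positions."""
    def __init__(self, dim: int, out_dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Conv2d(dim, out_dim, kernel_size=1)   # final 1x1 convolution

    def forward(self, X, Y):
        # X, Y: (B, H, L, D) semantic tensors from the AFE module.
        B, H, L, D = X.shape
        Kx = self.k(X).reshape(B, H * L, D)                  # keys from one sentence
        Qy = self.q(Y).reshape(B, H * L, D)                  # queries from the other sentence
        Vx, Vy = X.reshape(B, H * L, D), Y.reshape(B, H * L, D)
        logits = torch.bmm(Kx, Qy.transpose(1, 2))           # (B, N, N), entry (i, j) = K_i^T Q_j
        M_Y = F.softmax(logits, dim=-1)                      # normalize over j (left part of Eq. 7)
        M_X = F.softmax(logits, dim=1)                       # normalize over i (right part of Eq. 7)
        X_sa = torch.bmm(M_Y, Vy)                            # Eq. (8): X_i^(SA) = sum_j M^Y_{ji} V_j^Y
        Y_sa = torch.bmm(M_X.transpose(1, 2), Vx)            # Eq. (8): Y_j^(SA) = sum_i M^X_{ij} V_i^X
        # back to (B, D, H, L) for the 1x1 conv, then to (B, H, L, D')
        X_sa = self.proj(X_sa.reshape(B, H, L, D).permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        Y_sa = self.proj(Y_sa.reshape(B, H, L, D).permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        return X_sa, Y_sa
```

Here both outputs are produced in a single call and the two branches share projection weights; keeping separate projections per branch would be an equally valid reading of Figure 2.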
#### 3.2.2 Feature Attention
The Feature Attention is a computational unit designed to learn the inter-dependencies between features. It takes a semantic tensor \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{D}]\in\mathbb{R}^{H\times L\times D}\) or \(\mathbf{Y}=[\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{D}]\in\mathbb{R}^{H\times L\times D}\) as input and produces transformed tensor \(\mathbf{X}^{(\mathrm{FA})}\) or \(\mathbf{Y}^{(\mathrm{FA})}\in\mathbb{R}^{H\times L\times D^{\prime}}\) with augmented representations of the same size as \(\mathbf{X}^{(\mathrm{SA})}\) or \(\mathbf{Y}^{(\mathrm{SA})}\). Inspired by the Squeeze-and-Excitation Networks [13, 33, 12], Feature Attention can be modeled in two
Figure 2: Overview of Spatial Attention.
stages, the squeeze stage and the excitation stage. Specifically, the Squeeze stage is responsible for compressing the input semantic tensor to obtain a spatial context descriptor [11], i.e., for extracting global information. The Excitation stage performs nonlinear transformations on the compressed semantic tensor to obtain a weight vector for each feature. We propose three forms of FA. To simplify illustration, we only show the processing of FA for the input \(\mathbf{X}\) branch as an example in Figure 3.
1. For FA-1, given the input \(\mathbf{X}\), to open up the correlation between feature and spatial domain information, we squeeze this semantic tensor using average pooling. The squeeze step for the \(d^{th}\) feature can be formulated in Equation 9 \[\mathbf{z}_{d}=\frac{1}{H\times L}\sum_{i=1}^{H}\sum_{j=1}^{L}\mathbf{x}_{d}(i,j)\] (9) where \(\mathbf{z}_{d}\) is the spatial context descriptor associated with the \(d^{th}\) feature. This squeeze operation makes collecting global information possible. Next, we utilize the feedforward layers to obtain the semantic tensor \(\mathbf{X}^{(\mathrm{FA-1})}\in\mathbb{R}^{H\times L\times D^{\prime}}\), fully capturing feature-wise dependencies. \[\mathbf{X}^{(\mathrm{FA-1})}=\sigma(W_{2}(\mathrm{ReLU}(W_{1}\mathbf{z}+b_{1}))+b_{2})\] (10) where \(W_{1}\), \(W_{2}\), \(b_{1}\), \(b_{2}\) are trainable parameters, \(\sigma\) is the sigmoid function.
2. For FA-2, compared to FA-1, we argue that utilizing max pooling captures another crucial aspect of object characteristics, which enables us to infer Feature Attention with even greater precision. Firstly, we utilize average pooling and max pooling operations to generate two different spatial context descriptors: \(\mathbf{z}_{d,\mathrm{avg}}\) and \(\mathbf{z}_{d,\mathrm{max}}\). \[\mathbf{z}_{d,\mathrm{avg}}=\frac{1}{H\times L}\sum_{i=1}^{H}\sum_{j=1}^{L}\mathbf{x}_{d}(i,j)\] (11) \[\mathbf{z}_{d,\mathrm{max}}=\mathrm{max}_{i=1}^{H}\mathrm{max}_{j=1}^{L}\mathbf{x}_{d}(i,j)\] (12) Similar to FA-1, both descriptors are concatenated and then forwarded to the feedforward layers to produce the final semantic tensor \(\mathbf{X}^{(\mathrm{FA-2})}\in\mathbb{R}^{H\times L\times D^{\prime}}\). \[\mathbf{X}^{(\mathrm{FA-2})}=\sigma(W_{2}(\mathrm{ReLU}(W_{1}([\mathbf{z}_{d,\mathrm{avg}};\mathbf{z}_{d,\mathrm{max}}])+b_{1}))+b_{2})\] (13)
3. For FA-3, we further enhance the process by accurately modeling the relationships between features located at longer distances in a spatial context while also taking into account their specific positions within that space. To achieve this, we factorize the global pooling operation used in FA-1 into a pair of 1D feature encoding operations. Specifically, given an input \(\mathbf{X}\), we use two pooling kernels with different spatial extents, \((H,1)\) and \((1,L)\), to encode each feature along the horizontal and vertical coordinates, respectively. This allows us to compute the output of the \(d^{th}\) feature at the \(h^{th}\) Transformer block and the output of the \(d^{th}\) feature at the \(l^{th}\) position in the sentence, formulated in Equation 14. \[\mathbf{z}_{d}^{h}(h)=\frac{1}{L}\sum_{j=1}^{L}\mathbf{x}_{d}(h,j),\quad\mathbf{z}_{d}^{l}(l)=\frac{1}{H}\sum_{i=1}^{H}\mathbf{x}_{d}(i,l)\] (14) The above two transformations enable a global receptive field and encode precise positional information. Compared to FA-1 and FA-2, we not only want to make full use of the captured spatial information but also to effectively capture inter-feature relationships. Specifically, given the descriptors produced by Equation 14, we apply sigmoid activation and batch normalization to the concatenated result. \[\mathbf{g}=\mathrm{BatchNorm}(\sigma([\mathbf{z}_{d}^{h};\mathbf{z}_{d}^{l}]))\] (15) where \(\mathbf{g}\in\mathbb{R}^{1\times(H+L)\times D}\) is the intermediate descriptor that encodes spatial information in both the horizontal and the vertical direction. We then split \(\mathbf{g}\) along the spatial dimension into two separate tensors \(\mathbf{g}^{h}\in\mathbb{R}^{H\times 1\times D}\) and \(\mathbf{g}^{l}\in\mathbb{R}^{1\times L\times D}\). Another two \(1\times 1\) convolution layers \(\phi_{h}\), \(\phi_{l}\) are utilized to map \(\mathbf{g}^{h}\) and \(\mathbf{g}^{l}\) to tensors with the same feature dimension \(D^{\prime}\), where \(D^{\prime}\) is the feature dimension of the low-dimensional mapping space. Finally we leverage the matrix multiplication between these two tensors, and obtain \(\mathbf{X}^{(\mathrm{FA-3})}\in\mathbb{R}^{H\times L\times D^{\prime}}\). \[\mathbf{X}^{(\mathrm{FA-3})}=\phi_{h}(\mathbf{g}^{h})\cdot\phi_{l}(\mathbf{g}^{l})\] (16) where \(\cdot\) denotes matrix multiplication. A minimal code sketch of FA-1 and FA-2 is given after this list.
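The following is a minimal sketch of FA-1 (Eqs. 9-10) and FA-2 (Eqs. 11-13). The reduction ratio of the excitation MLP is an assumed hyper-parameter, and FA-3 is omitted for brevity (it replaces the global pooling with the pair of 1D poolings of Eq. 14); as before, this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureAttentionV1(nn.Module):
    """Sketch of FA-1 (Eqs. 9-10): squeeze by average pooling, excite with a two-layer MLP."""
    def __init__(self, dim: int, out_dim: int, reduction: int = 4):
        super().__init__()
        self.excite = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(),
            nn.Linear(dim // reduction, out_dim), nn.Sigmoid(),
        )

    def forward(self, X):
        # X: (B, H, L, D) semantic tensor
        z = X.mean(dim=(1, 2))          # squeeze: average over the H x L grid, Eq. (9)
        w = self.excite(z)              # excitation: per-feature weights in (0, 1), Eq. (10)
        # broadcast the D'-dim weight vector over every (h, l) position
        return w[:, None, None, :].expand(X.size(0), X.size(1), X.size(2), -1)

class FeatureAttentionV2(FeatureAttentionV1):
    """Sketch of FA-2 (Eqs. 11-13): concatenate average- and max-pooled descriptors."""
    def __init__(self, dim: int, out_dim: int, reduction: int = 4):
        super().__init__(dim, out_dim, reduction)
        self.excite[0] = nn.Linear(2 * dim, dim // reduction)   # first layer takes [z_avg; z_max]

    def forward(self, X):
        z_avg = X.mean(dim=(1, 2))      # Eq. (11)
        z_max = X.amax(dim=(1, 2))      # Eq. (12)
        w = self.excite(torch.cat([z_avg, z_max], dim=-1))      # Eq. (13)
        return w[:, None, None, :].expand(X.size(0), X.size(1), X.size(2), -1)
```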
### Feature Fusion with a Larger Receptive Field
From our 3D perspective, we leverage a Receptive Field Module (RFM) to perform feature fusion. Inspired by the receptive field of the human vision [21], we adopt the Inception architecture [29, 28, 21], which utilizes a multi-branch structure consisting
Figure 4: Overview of Receptive Field Module
Figure 3: Overview of three forms of Feature Attention.
of convolutional layers with different kernel sizes. In addition, we use Dilated Convolutional layers [5, 4, 10], which have previously been employed in the segmentation algorithm Deeplab, to further expand the receptive field.
Figure 4 illustrates the multi-branch structure of RFM, where only the modeling of a single semantic tensor \(\mathbf{X^{\prime}}\) is presented in the figure. Specifically, RFM takes the input semantic tensors \(\mathbf{X^{\prime}},\mathbf{Y^{\prime}}\in\mathbb{R}^{H\times L\times D^{\prime}}\) and constructs receptive field blocks by using multiple convolution kernels \(\psi\) of different sizes. This enables the network to capture feature information at different scales. Next, we perform Dilated Convolutions \(\phi^{\text{r}}\) on \(\psi(\mathbf{X^{\prime}})\) and \(\psi(\mathbf{Y^{\prime}})\), which output multiple semantic tensors with different dilation rates. Assuming there are \(k\) different dilation rates for the convolution with the same shape, \(k\) corresponding semantic tensors can be obtained. In addition to Dilated Convolutions, a \(1\times 1\) convolutional layer can also be used to encode the feature vectors, resulting in a semantic tensor with global contextual information. This feature vector can help the model understand the global semantic information. Finally, all output feature vectors are concatenated to obtain the final results \(\mathbf{X}^{\mathrm{RFM}}\) and \(\mathbf{Y}^{\mathrm{RFM}}\in\mathbb{R}^{H\times L\times(k+1)D^{\prime\prime}}\), which can be formulated in Equation 17 and Equation 18.
\[\mathbf{X}_{i}^{\mathrm{RFM}}=\mathrm{ReLU}(\phi_{i}^{r_{i}}(\psi_{i}(\mathbf{X^{ \prime}}))),i\in[1,2...,k] \tag{17}\] \[\mathbf{Y}_{j}^{\mathrm{RFM}}=\mathrm{ReLU}(\phi_{j}^{r_{j}}(\psi_ {j}(\mathbf{Y^{\prime}}))),j\in[1,2...,k]\]
\[\mathbf{X}^{\mathrm{RFM}}=\sigma([\mathbf{X}_{1}^{\mathrm{RFM}};...;\mathbf{X}_{k}^{\mathrm{RFM}};\phi_{0}(\mathbf{X^{\prime}})]) \tag{18}\] \[\mathbf{Y}^{\mathrm{RFM}}=\sigma([\mathbf{Y}_{1}^{\mathrm{RFM}};...;\mathbf{Y}_{k}^{\mathrm{RFM}};\phi_{0}(\mathbf{Y^{\prime}})])\]
where \(\psi_{i}\) denotes the \(i^{th}\) convolution with its own kernel size, \(\phi_{i}^{r_{i}}\) denotes the \(i^{th}\) dilated convolution with dilation rate \(r_{i}\), and \(\phi_{0}\) is the \(1\times 1\) convolution layer. The kernels among \(\phi_{i}^{r_{i}},i\in[1,2...k]\) have the same size while the kernels among \(\psi_{i},i\in[1,2...k]\) have different sizes to capture complex features. The specific process of a dilated convolution \(\phi_{A\to B}^{r}\) mapping an input feature map \(A\) to an output \(B\) is shown in Equation 19. The convolution kernel \(K\in\mathbb{R}^{k_{h}\times k_{l}\times D\times D^{\prime}}\) is a four-dimensional tensor, where \(k_{h}\) and \(k_{l}\) denote the size of the convolution kernel, and \(D\) and \(D^{\prime}\) denote the size of input features and output features, respectively.
\[B(i,j,d^{\prime})=\sum_{m=0}^{k_{h}-1}\sum_{n=0}^{k_{l}-1}\sum_{d=0}^{D-1}A(i+m\!\cdot\!r,j\!+\!n\!\cdot\!r,d)\!\cdot\!K(m,n,d,d^{\prime}) \tag{19}\]
Finally, we concatenate the two semantic tensors along with their element-wise multiplication results, and input them into a global average pooling layer and a fully connected layer to compute semantic similarity, where \(\mathrm{GAP}(\cdot)\) denotes the global average pooling operation.
\[\mathbf{v}=\mathrm{GAP}([\mathbf{X}^{\mathrm{RFM}};\mathbf{Y}^{\mathrm{RFM}}; \mathbf{X}^{\mathrm{RFM}}\odot\mathbf{Y}^{\mathrm{RFM}}])\]
\[P(\mathrm{label}|\mathbf{s}^{x},\mathbf{s}^{y})=\mathrm{softmax}(W_{2}(\mathrm{ReLU }(W_{1}(\mathbf{v})+b_{1}))+b_{2}) \tag{20}\]
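A compact sketch of the RFM and of the final similarity head (Eqs. 17-20) is given below. The specific kernel sizes, dilation rates, and branch count are illustrative assumptions, since the text does not fix them; only the overall multi-branch structure follows the description above.

```python
import torch
import torch.nn as nn

class ReceptiveFieldModule(nn.Module):
    """Sketch of Eqs. (17)-(18): branches with different kernel sizes and dilation rates
    plus a 1x1 branch, concatenated along the feature axis and passed through a sigmoid."""
    def __init__(self, dim: int, out_dim: int, kernel_sizes=(1, 3, 5), dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k, r in zip(kernel_sizes, dilations):
            self.branches.append(nn.Sequential(
                nn.Conv2d(dim, out_dim, kernel_size=k, padding=k // 2),                  # psi_i
                nn.Conv2d(out_dim, out_dim, kernel_size=3, padding=r, dilation=r),       # phi_i^{r_i}
                nn.ReLU(),                                                               # Eq. (17)
            ))
        self.point = nn.Conv2d(dim, out_dim, kernel_size=1)                              # phi_0

    def forward(self, X):
        # X: (B, H, L, D') -> channels-first for convolutions
        X = X.permute(0, 3, 1, 2)
        feats = [branch(X) for branch in self.branches] + [self.point(X)]
        return torch.sigmoid(torch.cat(feats, dim=1)).permute(0, 2, 3, 1)   # Eq. (18)

class SimilarityHead(nn.Module):
    """Sketch of Eq. (20): pool the fused tensors and classify."""
    def __init__(self, dim: int, hidden: int, num_labels: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 * dim, hidden), nn.ReLU(), nn.Linear(hidden, num_labels))

    def forward(self, X, Y):
        v = torch.cat([X, Y, X * Y], dim=-1).mean(dim=(1, 2))   # global average pooling of [X; Y; X*Y]
        return self.mlp(v).softmax(dim=-1)                      # P(label | s^x, s^y)
```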
## 4 Experimental Setup
### Datasets
We adopt the following four benchmarks for evaluation.
**QQP (Quora Question Pairs)**[17]: QQP is a dataset of around 400,000 question pairs from Quora. The goal is to determine if the questions have the same intent, assessing performance on semantic similarity and natural language understanding.
**MRPC (Microsoft Research Paraphrase Corpus)**[8]: MRPC has 5,800 English sentence pairs from online news. The task is to identify if the pairs are paraphrases, testing textual entailment and semantic similarity.
**SNLI (Stanford Natural Language Inference)**[1]: SNLI is a large textual entailment dataset with 570,000 English sentence pairs and labels for entailment relationships (entailment, contradiction, neutral). It evaluates natural language inference, determining if a hypothesis holds based on a premise.
**MNLI (Multi-Genre Natural Language Inference)**[32]: MNLI extends SNLI with 430,000 sentence pairs from various domains and genres. Like SNLI, it assesses natural language inference but covers a broader range of tasks.
### Baselines
We utilize three representative semantic similarity models as baselines.
**SBERT**[26]: Siamese BERT is a Siamese architecture that uses pre-trained BERT to separately produce embeddings of two inputs. The output embeddings of two sequences are concatenated to give final predictions.
**ColBERT**[18]: ColBERT serves as an efficient model for semantic similarity tasks, utilizing its late interaction mechanism to process queries and documents independently, enabling scalability. By employing dense vector representations, it effectively captures richer semantic relationships between text pairs.
**BERT(interaction-based)**[30]: As mentioned in Section 2, BERT(interaction-based) is an interaction-based model rather than
| Methods | Feature Extraction | Attention | Feature Fusion | QQP | MRPC | SNLI | MNLI | Avg | Latency | Params |
|---|---|---|---|---|---|---|---|---|---|---|
| SBERT [26] | Pooling | – | Max&Avg Pooling | 80.81 | 70.23 | 82.88 | 72.35 | 76.57 | 0.2ms | 6.1M |
| ColBERT [18] | Pooling | LA | Max&Avg Pooling | 86.31 | 74.23 | 86.12 | 76.23 | 81.22 | 0.3ms | 6.1M |
| Our Networks | AFE | SA | Max&Avg Pooling | 89.34 | 77.03 | 88.43 | 77.13 | 83.23 | 0.5ms | 6.1M |
| | AFE | SA,FA-1 | Max&Avg Pooling | 89.45 | 77.04 | 89.10 | 77.11 | 84.06 | 0.5ms | 6.1M |
| | AFE | SA,FA-2 | Max&Avg Pooling | 89.48 | 77.02 | 89.14 | 77.13 | 83.44 | 0.5ms | 6.1M |
| | AFE | SA,FA-3 | Max&Avg Pooling | 89.46 | 77.04 | 89.02 | 77.21 | 83.43 | 0.5ms | 6.1M |
| | AFE | SA | RFM | 90.02 | 78.10 | 89.13 | 77.24 | 83.62 | 0.6ms | 6.4M |
| | AFE | SA,FA-1 | RFM | 90.09 | 78.11 | 89.17 | 77.24 | 83.65 | 0.6ms | 6.4M |
| | AFE | SA,FA-2 | RFM | 90.11 | 78.05 | 89.11 | 77.23 | 83.63 | 0.6ms | 6.4M |
| | AFE | SA,FA-3 | RFM | 90.23 | 78.18 | 89.41 | 77.28 | 83.78 | 0.6ms | 6.4M |
| BERT(interaction-based) [30] | – | – | – | 91.51 | 87.81 | 90.01 | 86.50 | 88.96 | 27.4ms | 106M |

Table 1: Performance comparison on four benchmarks. Accuracy is reported on the test set.
utilizing a Siamese framework. Its underlying principle involves concatenating sentence pairs and encoding them through BERT for subsequent similarity prediction. While this approach enables rich interactions, it also introduces a significantly larger number of training parameters and higher inference latency.
### Implementation Details
Our prediction target is \(\mathrm{label^{*}}=\mathrm{argmax}P(\mathrm{label}|\mathbf{s}^{x},\mathbf{s}^{y})\). For the QQP and MRPC datasets, \(\mathrm{label^{*}}\) represents {match, not match}, while for the SNLI and MNLI datasets, \(\mathrm{label^{*}}\) corresponds to {entailment, neutral, contradiction}. For all baselines, as well as our models, we use BERT (Base) as the encoder with 12 Transformer blocks, resulting in an encoded feature dimension of 768. Each sentence is padded to the average length of sentences in the dataset. To minimize the impact of the number of parameters on model performance, we adjusted the hyper-parameters of the convolutional and fully connected layers in the model, keeping the training parameters of all Siamese-based baselines and our model at similar levels. In the calculation of inference latency, the values we display are normalized by the test set size. We train the model for 15 epochs with a batch size of 32 and a learning rate of 0.001. Cross-entropy is used as the loss function, and the Adam optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) is employed. We adopt a learning rate decay strategy and early stopping here. Specifically, if there is no early stopping after 10 epochs, the learning rate will be reduced with a decay rate of 0.1.
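For reference, the reported training setup can be reproduced with a loop of the following form. The `evaluate` callback and the data loaders are placeholders to be supplied by the caller, and the reading that the learning rate decays after the development metric stagnates for 10 epochs is our interpretation of the sentence above, not a statement from the paper.

```python
import torch
from torch import nn

def train(model, train_loader, dev_loader, evaluate, epochs=15, lr=1e-3, patience=10):
    """Training sketch: Adam(0.9, 0.999), lr 1e-3, cross-entropy, lr decay by 0.1, early stopping."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.1, patience=patience)
    criterion = nn.CrossEntropyLoss()
    best, bad = 0.0, 0
    for _ in range(epochs):
        model.train()
        for sx, sy, labels in train_loader:
            optimizer.zero_grad()
            criterion(model(sx, sy), labels).backward()
            optimizer.step()
        acc = evaluate(model, dev_loader)    # caller-supplied dev accuracy
        scheduler.step(acc)                  # decay lr when the metric stops improving
        if acc > best:
            best, bad = acc, 0
        else:
            bad += 1
            if bad >= patience:              # early stopping
                break
    return best
```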
## 5 Results and Analysis
### Main Results
Table 1 presents the main results for both baselines and our proposed approach across four evaluation tasks (QQP, MRPC, SNLI, MNLI). The principal difference between SBERT and ColBERT lies in the latter's implementation of Late Attention (LA), which contributes to ColBERT's enhanced performance. Our model is built upon various combination strategies of the aforementioned modules, utilizing proposed modules such as AFE for feature extraction, SA, FA-1, FA-2, and FA-3. In terms of feature fusion, our model explores both max pooling and average pooling, as well as RFM. The experimental results indicate that our model surpasses the baseline methods in terms of accuracy for the majority of tasks. Models employing RFM for feature fusion exhibit higher accuracy compared to those utilizing max pooling and average pooling. This can be attributed to the increased receptive field provided by RFM. Among the various attention mechanism combinations, the pairing of SA and FA-3 demonstrates the most favorable performance. BERT (interaction-based), as a powerful interaction-based model, outperforms our model in accuracy. However, its intricate interaction pattern leads to a substantially greater inference latency and parameter count relative to our method.
In conclusion, our network demonstrates superior performance in terms of accuracy across four benchmark tasks when compared to the SBERT and ColBERT. Compared with BERT (interaction-based), our network exhibits significant advantages in aspects such as inference latency and parameter count. Consequently, our network requires less storage space and computational resources, rendering it more valuable for real-time applications and low-latency scenarios. The modular design of our network allows for the implementation of various combination strategies based on the selection of different components. These components possess minimal requirements for inter-module communication, facilitating seamless integration and enhancing the model's modularity and scalability. This adaptable design paves the way for further optimization and refinement of the model, ultimately contributing to advancements in the field.
### Ablation Study
#### 5.2.1 Effect of SA&FA
Spatial Attention and Feature Attention together form a powerful information interactor. To gain a clear understanding of the impact of different attention combinations on model performance, we conducted an ablation study on various combination strategy selections for SA and FA. As illustrated in Figure 5, the standalone SA consistently outperforms the independent FA-1, FA-2, and FA-3 across all tasks. For the QQP, MRPC, and SNLI tasks, combining SA with FA-1, FA-2, and FA-3 results in a significant performance boost. In the absence of SA, the differences in performance among FA-1, FA-2, and FA-3 are not substantial. Upon incorporating SA, the SA and FA-3 combination achieves the best results in all four tasks. In summary, SA serves as the foundation for improving the modeling of semantic similarity, and the SA + FA-3 combination demonstrates more stable and superior performance compared to other methods. This suggests that combining SA and FA-3 methodologies may lead to better extraction and
Figure 5: Results of the ablation study on various combinations of strategy selections for SA and FA
Figure 6: Robust experimental results on the impact of Transformer block and adaptive weight selection for feature extraction. FE and AFE represent feature extraction without the introduction of adaptive weights and feature extraction with the incorporation of adaptive weights, respectively.
utilization of feature information in these tasks.
#### 5.2.2 Effect of RFM
The superior performance of the RFM is due to the introduction of Inception architecture and Dilated Convolutions to increase the network's receptive field. In this ablation study on QQP and MRPC datasets, we investigate the impact of Inception architecture and Dilated Convolutions on the network's capabilities. Specifically, for the Inception architecture, we only introduce multi-branch convolutions with different kernel sizes; for Dilated Convolutions, we only employ multi-branch Dilated Convolutions with different dilation rates. In the experiments, we use global max pooling and average pooling structures as references. As can be seen from Table 2, both Inception and Dilated Convolution are able to enhance the network's performance.
### Robustness Experiments
Our method demonstrates exceptional performance, which can be attributed to the efficient and comprehensive extraction of information from Transformer blocks. In this experiment, we investigated the impact of varying the number of retained Transformer blocks during feature extraction and the incorporation of adaptive weights on model performance. As reported in Figure 6, we conducted eight sets of experiments on MRPC dataset, examining the differences in experimental results after applying adaptive weights to four different model strategies with various Transformer block configurations: 6 spaced Transformer blocks (selecting Transformer blocks at intervals), 6 bottom Transformer blocks, 6 top Transformer blocks, and 12 Transformer blocks. When comparing different choices of Transformer blocks without introducing adaptive weights, we observed that selecting all Transformer blocks or retaining the top 6 Transformer blocks resulted in better model performance and stability than retaining the bottom 6 Transformer blocks or spacing 6 Transformer blocks. This is likely because the top Transformer blocks capture more comprehensive spatial and feature domain information after passing through multiple attention and feedforward layers. Introducing adaptive weights not only significantly enhanced the effectiveness of feature extraction but also minimized fluctuations between different network combination strategies. Furthermore, it reduced the influence of the number of Transformer blocks on the model, ultimately bolstering the network's robustness.
### Case Study
To illustrate the advantages of our 3D semantic similarity modeling framework compared to traditional models, we encoded sentences from the MRPC dataset using both the pre-trained ColBERT model and our own model, obtaining semantic similarity matrices for sentence pairs. Two examples are displayed in the Figure 7, along with visualizations of their attention similarity matrices. ColBERT can solely focus on parts of the sentence pairs with similar meanings, such as 'survival', 'live', '8 to 9 cents', and '13 to 14 cents'. In contrast, our approach can pay more attention to key semantic information like 'who got surgery', 'who only had surgery', 'average', 'median', 'beat the company's April earnings cast', and 'earnings per share from recurring operations'. This demonstrates that our method, which efficiently utilizes raw information and models semantic similarity in a three-dimensional manner, can capture focus information more effectively.
## 6 Conclusion
In this paper, we propose a novel three-dimensional Siamese network for modeling semantic similarity. To reinforce this 3D framework, we have introduced a series of modules that address three key aspects: feature extraction, attention, and feature fusion. Extensive experiments on four text semantic similarity benchmarks demonstrate the effectiveness and efficiency of this 3D Siamese Network. Moreover, our introduced modules exhibit a "plug-and-play" characteristic, contributing to the model's robust modularity and scalability.
In the future, we plan to apply this concept of three-dimensional semantic modeling to other tasks within the field of natural language processing.
|
2305.07420 | Anomalous near-equilibrium capillary rise | We report and rationalize the observation of a crossover from the classical
Lucas-Washburn dynamics to a long-lived anomalously slow regime for capillary
rise in simple glass tubes. We propose an analytical model considering the role
of thermal motion and the nanoscale surface topography to account for the
experimental observations. The proposed model indicates that the contact line
perimeter and the surface topography dimensions determine the crossover
condition and anomalous imbibition rate. Our findings have important
implications for the scientific understanding and technical application of
capillary imbibition and suggest strategies to control the adsorption of
specific liquids in porous materials. | Menghua Zhao, Aktaruzzaman Al Hossain, Carlos E. Colosqui, Matthieu Roché | 2023-05-12T12:40:00Z | http://arxiv.org/abs/2305.07420v1 | # Anomalous near-equilibrium capillary rise
###### Abstract
We report and rationalize the observation of a crossover from the classical Lucas-Washburn dynamics to a long-lived anomalously slow regime for capillary rise in simple glass tubes. We propose an analytical model considering the role of thermal motion and the nanoscale surface topography to account for the experimental observations. The proposed model indicates that the contact line perimeter and the surface topography dimensions determine the crossover condition and anomalous imbibition rate. Our findings have important implications for the scientific understanding and technical application of capillary imbibition and suggest strategies to control the adsorption of specific liquids in porous materials.
The phenomenon of capillary rise is among the most studied in interfacial science owing to its relevance to numerous natural and industrial processes such as liquid transport in porous media [1; 2; 3; 4], the wetting of fabrics [5; 6], additive manufacturing [7; 8], and microfluidic devices for liquid handling [9; 10]. The classical theoretical description of capillary rise relies on mechanical models involving inertia, viscous hydrodynamic friction, gravitational effects and the driving capillary force. According to Jurin's law [11; 12], these forces produce mechanical equilibrium when the rising liquid reaches the equilibrium height \(h_{eq}=2\gamma\cos\theta_{Y}/(\rho gR)\) with \(\gamma\) the surface tension of the liquid, \(\theta_{Y}\) the Young contact angle, \(\rho\) the density of the liquid, \(g\) the acceleration of gravity, and \(R\) the radius of the tube. The surface of the solid is conventionally assumed as smooth and chemically homogeneous, while fluid and solid phases are separated by sharp interfaces.
The general analytical treatment of capillary rise is complex when considering short-time inertial effects [13; 14; 15; 16; 17]. However, as the column height \(h=h(t)\) increases and inertial effects become negligible, the system enters a regime described by the celebrated Lucas-Washburn (LW) equation [18; 19]
\[\dot{h}=\frac{\rho gR^{2}}{8\mu}(h_{eq}/h-1), \tag{1}\]
where \(\mu\) is the liquid viscosity. A key assumption of this description is that the contact angle is constant and equal to the equilibrium contact angle \(\theta_{eq}=\theta_{Y}\), as predicted by Young's law. While the validity of this assumption is not trivial, the LW equation accounts accurately for experimental observations for small imbibition rates \(\dot{h}\ll\gamma/\mu\)[20; 21; 22; 23; 24; 25].
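As an illustration of the LW regime, Eq. (1) can be integrated numerically. The short sketch below uses the hexadecane properties and the \(R=140\) \(\upmu\)m radius quoted later in the text, with the contact angle taken from Table 1; the initial height and time window are arbitrary choices made here for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for hexadecane in the R = 140 um capillary (values from the text / Table 1)
gamma, rho, mu = 27.5e-3, 770.0, 3.4e-3          # N/m, kg/m^3, Pa s
R, g, theta = 140e-6, 9.81, np.radians(17.7)     # m, m/s^2, equilibrium contact angle
h_eq = 2 * gamma * np.cos(theta) / (rho * g * R)  # Jurin's law

def lucas_washburn(t, h):
    # Eq. (1): dh/dt = rho g R^2 / (8 mu) * (h_eq / h - 1)
    return rho * g * R**2 / (8 * mu) * (h_eq / h - 1.0)

sol = solve_ivp(lucas_washburn, (1e-3, 1e4), y0=[1e-4], t_eval=np.logspace(-3, 4, 200))
print(f"h_eq = {h_eq*1e3:.1f} mm; h(10 s) = {sol.y[0][np.searchsorted(sol.t, 10.0)]*1e3:.1f} mm")
```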
Anomalous behavior with large deviations from LW predictions via Eq. 1 has been reported in porous media with random networks of micro-scale pores [26; 27]. This anomalous behavior, characterized by vanishing imbibition rates, has been attributed to the random non-local dynamics of the wetting front, contact line pinning, and spatial fluctuations of the capillary pressure, which are ignored in the LW equation [26; 27; 28; 29; 30]. Notably, various interfacial phenomena such as colloidal particle adsorption, shear-driven drainage, and droplet spreading [31; 32; 33; 34; 35; 36] also show an anomalously slow relaxation to equilibrium after a regime crossover from the initial non-equilibrium dynamics. These observations are attributed to surface energy barriers induced by surface topography at the nanoscale.
To the best of our knowledge, no prior study has characterized and rationalized the observation of anomalous capillary rise in a capillary tube. A small number of studies report that prewetting is needed to observe agreement with LW predictions [20; 24]. Here, we report that capillary rise experiments performed over several hours in capillary tubes display a predictable crossover from conventional LW dynamics to an anomalously slow rise near equilibrium. To rationalize this phenomenon, we propose an analytical description based on a Langevin-type equation accounting for thermal motion and energy perturbations induced by the nanoscale surface topography. The model produces close agreement with our experimental observations and the magnitude of its parameters is supported by topographic analysis of the surface of the capillaries via atomic force microscopy (AFM).
We analyze the case of a liquid with constant mass density \(\rho\), surface tension \(\gamma\), viscosity \(\mu\), and temperature \(T\) that rises within a vertical capillary of radius \(R\) much smaller than the capillary length \(\ell_{c}=\sqrt{\gamma/(\rho g)}\) (Fig. 1a). The liquids in this study are three homologous alkanes with low vapor pressures (Sigma Aldrich, purity \(\geq 99\%\)): tetradecane (C\({}_{14}\)H\({}_{30}\), \(\rho=762\) kg/m\({}^{3}\), \(\gamma=26.1\) mN/m, \(\mu=2.1\) mPa s, and \(P_{vap}=2.0\) Pa at 25\({}^{\circ}\)C), pentadecane (C\({}_{15}\)H\({}_{32}\), \(\rho=769\) kg/m\({}^{3}\), \(\gamma=27.1\) mN/m, \(\mu=2.3\) mPa s, and \(P_{vap}=0.7\) Pa at 25\({}^{\circ}\)C), and hexadecane (C\({}_{16}\)H\({}_{34}\), \(\rho=770\) kg/m\({}^{3}\), \(\gamma=27.5\) mN/m, \(\mu=3.4\) mPa s, and \(P_{vap}=0.2\) Pa at 25\({}^{\circ}\)C). We assume thermal and chemical equilibrium with the much less viscous ambient air. We use glass capillaries (Hirschmann ringcaps 5 \(\mu\)L and 20 \(\mu\)L) with two different inner radii, \(R=140\,\mu\)m and \(R=310\,\mu\)m (5% uncertainty). Capillaries are used as received, without precleaning or prewetting. Liquids are poured into a borosilicate glass petri dish (Pyrex, diameter 30 mm). A lid with a tight hole to insert the capillary tube covers the dish to prevent contamination and further minimize evaporation of the nonvolatile liquid during long exposure to the ambient. All experiments are performed at a controlled temperature \(T=25\pm 1\)\({}^{\circ}\)C. Further details on the experimental protocol and reproducibility are included in the Supplemental Material [37].
A "microcamera" (Imaging Source, DFK camera) records the displacement of the meniscus at the top of the liquid column with a spatial resolution of 6.0 \(\upmu\)m px\({}^{-1}\) (top right panel in Fig. 1a). We use a sub-pixel technique [38] to detect displacements as small as 1.2 \(\upmu\)m. A "macrocamera" (Andor Zyla 5.5) captures an overall view of liquid rise from the initial contact to the final equilibrium height with a resolution of 42.2 \(\upmu\)m px\({}^{-1}\) (bottom right panel in Fig. 1a). Images are processed with the software package Flili [39] to extract the column height \(h(t)\) measured from the flat bath interface to the bottom of the rising meniscus as a function of the time \(t\) elapsed since contact. Figure 1b shows the column height evolution \(h(t)\). At early times, the three alkanes show a fast increase of the column height that slows down as the system approaches equilibrium, which we defined at the height \(h_{eq}\) for which there is no detectable change for at least 6000 s. While experiments agree closely with the LW model (Eq. 1) for typical observation times \(t\lesssim 10\) s, substantial discrepancies arise for longer times. In particular, the time to reach the experimentally determined equilibrium height is over two orders of magnitude larger than that predicted by the LW equation.
To rationalize our experimental observations we will consider the effects of thermal fluctuations and energy perturbations induced by nanoscale surface topography. The time evolution of the liquid height \(h(t)\) is governed by a Langevin-type equation [32; 33; 40]
\[\xi\dot{h}=-\frac{\partial\mathcal{F}}{\partial h}+\sqrt{2k_{B}T\xi}\,\eta(t), \tag{2}\]
where \(\xi=8\pi\mu h\) is the damping coefficient assuming energy dissipation is dominated by Poiseuille flow in the narrow capillary, \(\eta(t)\) is a zero-mean unit-variance uncorrelated noise, and the system free energy is
\[\mathcal{F}=\frac{\rho g\pi R^{2}h^{2}}{2}-2\pi R\gamma\cos\theta_{Y}h+\frac{ \Delta U}{2}\sin\left(\frac{2\pi h}{\ell}+\phi\right). \tag{3}\]
The first and second terms in Eq. 3 are, respectively, the gravitational potential energy and the interfacial free energy determined by the Young contact angle \(\theta_{Y}\), which both appear in the LW equation (Eq. 1). The third term is a single-mode energy perturbation of magnitude \(\Delta U\), period \(\ell\), and phase \(\phi\), induced by nanoscale surface topography and/or chemical heterogeneities. Since \(\ell\ll h_{eq}\), we can arbitrarily adopt \(\phi=-2\pi h_{eq}/\ell\) so that mechanical equilibrium is exactly attained at the height \(h_{eq}\) for which the system energy in Eq. 3 is at the global minimum, in accordance with Jurin's law.
Any finite energy perturbation of magnitude \(\Delta U>0\) in Eq. 3 produces multiple metastable states where \(\partial\mathcal{F}/\partial h=0\), sufficiently close to equilibrium, when \(|h_{eq}-h|\times\rho gR^{2}\ell/2\leq\Delta U\). Hence, as \(h\to h_{eq}\) the system governed by Eqs. 2-3 must eventually crossover from non-equilibrium dynamics driven by capillary forces to arrested dynamics dominated by metastable state transitions induced by thermal fluctuations. Such a regime crossover corresponds to the transition from conventional LW dynamics to an anomalous capillary rise and occurs around the crossover height [32]
\[h_{c}=h_{eq}-\alpha\frac{\Delta U}{\rho gR^{2}\ell}, \tag{4}\]
where a factor \(\alpha=0.5\) estimates the center of the range of heights over which the crossover takes place [32; 33; 34; 35; 36].
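For a concrete order of magnitude, the crossover height of Eq. 4 can be evaluated directly from the fitted parameters listed in Table 1; the short computation below does this for hexadecane in the \(R_{1}=140\) \(\upmu\)m tube, the choice of liquid and tube being arbitrary.

```python
import numpy as np

# Crossover height (Eq. 4) for hexadecane in the R = 140 um tube, using the fitted
# parameters of Table 1 (Delta U ~ 6.73 k_B T, ell ~ 0.024 pm, h_eq = 49.6 mm).
kB, T = 1.380649e-23, 298.15
rho, g, R = 770.0, 9.81, 140e-6
h_eq, dU, ell, alpha = 49.6e-3, 6.73 * kB * T, 0.024e-12, 0.5

h_c = h_eq - alpha * dU / (rho * g * R**2 * ell)
print(f"h_c = {h_c*1e3:.1f} mm ({100*(h_eq - h_c)/h_eq:.1f}% below h_eq)")
```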
For \(h\gtrsim h_{c}\), the column height evolution is determined by the displacement rate \(\dot{h}=\ell(\Gamma_{+}-\Gamma_{-})\). Using Kramers theory for the (forward/backward) transition rates \(\Gamma_{\pm}\) gives [32]
\[\dot{h}=\frac{h_{eq}}{h}V_{H}\sinh\left(\frac{h_{eq}-h}{L_{H}}\right) \tag{5}\]
with the "hopping" length
\[L_{H}=\frac{2k_{B}T}{K\ell}, \tag{6}\]
and the "hopping" velocity
\[V_{H}=\frac{\ell}{2\pi\xi_{eq}}\sqrt{\frac{1}{4}\left(\frac{2\pi}{\ell} \right)^{4}\Delta U^{2}-K^{2}}\exp\left(\frac{-\Delta U-\frac{1}{8}K\ell^{2}} {k_{B}T}\right), \tag{7}\]
Figure 1: Capillary rise in a glass microcapillary. (a) Experimental setup. Image sequences are for pentadecane in the 140-\(\upmu\)m microcapillary. Top panel: microcamera view at \(t=\) 26.5, 266.5, 2646.5, 26496.5 s and 20 hours; scale bar: 1 mm. Bottom panel: macrocamera view at \(t=\) -10, 2, 4, 8, 16, 32, 64 and 128 s; scale bar: 3 mm. Dashed green line: position of the flat bath surface. (b) Column height \(h(t)\) and regime crossovers for three alkanes in the 140-\(\upmu\)m microcapillary. Analytical predictions (see legend) for the crossover height \(h_{c}\) (Eq. 4), LW dynamics (Eq. 1), and anomalous rise regime (Eq. 8) employ the parameters reported in Table 1.
where \(K=\rho g\pi R^{2}\) and \(\xi_{eq}=8\pi\mu h_{eq}\) is the damping coefficient prescribed by the equilibrium height. The displacement rate in Eq. 5 can be integrated to obtain the implicit relation \(t=-(L_{H}/V_{H})\times[F(x)+G(x)]+c\), where \(x=(h-h_{eq})/L_{H}\), \(c\) is an integration constant, \(F(x)=(1-xL_{H}/h_{eq})\times\log[\tanh(x/2)]\), and \(G(x)=(L_{H}/h_{eq})\times[\mathrm{Li}_{2}(e^{-x})-\mathrm{Li}_{2}(-e^{-x})]\); here, \(\mathrm{Li}_{2}\) is the dilogarithm function. For \(L_{H}\ll h_{eq}\), which is expected for the case of nanoscale surface features, the expression for the column height can be simplified to
\[t=-\frac{L_{H}}{V_{H}}\log\left[\tanh\left(\frac{h_{eq}-h}{2L_{H}}\right) \right]. \tag{8}\]
The near-equilibrium expressions for the displacement rate (Eq. 5) and column height (Eq. 8) derived via Kramers theory are prescribed by the energy barrier magnitude \(\Delta U\) and period \(\ell\). Assuming that free energy perturbations in the 1D energy profile \(\mathcal{F}(h)\) (Eq. 3) are caused by nanoscale topographic features with a characteristic projected area \(A_{d}\), we then define the period \(\ell=A_{d}/(2\pi R)\) and energy barrier \(\Delta U=\beta\gamma A_{d}\), where \(\beta\) is a shape factor accounting for surface energy changes associated with different 3D interfacial configurations induced by the nanoscale topography [32, 33, 40]. We use \(A_{d}\) and \(\beta\) as fitting parameters to account for our experimental observations of (i) the critical crossover height \(h_{c}\) (Eq. 4) and (ii) the displacement rate \(\dot{h}\) (Eqs. 5-8) for \(h>h_{c}\). We estimate the Young contact angle \(\theta_{Y}\simeq\theta_{eq}\) from the measured equilibrium heights. Table 1 reports the values of the parameters employed in the model.
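The chain of definitions above (\(\ell=A_{d}/(2\pi R)\), \(\Delta U=\beta\gamma A_{d}\), Eqs. 6-8) can be evaluated numerically. The sketch below does so for hexadecane in the \(R_{1}\) capillary using the fitted defect area from Table 1; the temperature of 298 K and the 0.5 mm separation used in the last line are our illustrative choices.

```python
import numpy as np

# Near-equilibrium ("hopping") quantities for hexadecane, R = 140 um, from the
# defect-area parametrisation (Table 1: A_d = 21.25 nm^2, theta_eq = 17.7 deg, h_eq = 49.6 mm).
kB, T = 1.380649e-23, 298.15
gamma, rho, mu, g = 27.5e-3, 770.0, 3.4e-3, 9.81
R, h_eq, theta = 140e-6, 49.6e-3, np.radians(17.7)
A_d = 21.25e-18                        # m^2, fitted defect area
beta = 1 - np.cos(theta)               # shape factor

ell = A_d / (2 * np.pi * R)            # period of the energy perturbation
dU = beta * gamma * A_d                # energy barrier
K = rho * g * np.pi * R**2             # effective spring constant
xi_eq = 8 * np.pi * mu * h_eq          # damping coefficient at h_eq

L_H = 2 * kB * T / (K * ell)                                           # Eq. (6)
V_H = ell / (2 * np.pi * xi_eq) * np.sqrt(0.25 * (2 * np.pi / ell)**4 * dU**2 - K**2) \
      * np.exp((-dU - K * ell**2 / 8) / (kB * T))                      # Eq. (7)

def t_of_h(h):
    # Eq. (8): time needed to reach height h in the anomalous regime
    return -(L_H / V_H) * np.log(np.tanh((h_eq - h) / (2 * L_H)))

print(f"Delta U = {dU/(kB*T):.2f} kT, ell = {ell*1e12:.3f} pm, "
      f"L_H = {L_H*1e3:.2f} mm, V_H = {V_H*1e6:.2f} um/s")
print(f"t(h_eq - 0.5 mm) = {t_of_h(h_eq - 0.5e-3):.0f} s")
```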
The positive displacement rates \(\dot{h}\) reported in Fig. 2 decay monotonically over time with a marked transition from LW dynamics to the anomalous capillary rise regime near equilibrium. For reference, the maximum evaporation flux into a perfect vacuum, prescribed by the vapor pressure and molecular weight of the studied alkanes, translates into negative rising rates \(\dot{h}\simeq\) -6 to -1 \(\upmu\)m\(\,\)s\({}^{-1}\), a value that is higher by many orders of magnitude than the actual evaporation rate [41]. While the LW model (Eq. 1) describes the initial capillary rise dynamics, the proposed model (Eqs. 5-8) accounts for the late near-equilibrium evolution of \(\dot{h}\) by using the model parameters reported in Table 1. The same finding is reported in Fig. 1b for the column height evolution. The reported values of \(A_{d}\) and \(\beta\) (cf. Table 1) give the characteristic energy barriers \(\Delta U\simeq 6.7\) to \(10.7\)\(k_{B}T\) and periods \(\ell\sim 10^{-14}\) m. We note that \(\ell\) in our model is determined by the average displacement of the full contact line perimeter over a single surface feature of nanoscale area \(A_{d}\) and therefore it is much smaller than the physical distance between localized surface features [32].
An important feature of the proposed analytical model is the prediction of the conditions for the crossover from LW dynamics to the anomalous regime. The crossover heights \(h_{c}\) predicted via Eq. 4 are in good agreement with the heights at which the transition is observed experimentally (i.e., 5% to 10% below \(h_{eq}\)). The use of Eq. 4 in Eq. 1 gives the corresponding critical displacement rate \(\dot{h}_{c}\) and time \(t_{c}\) reported in Fig. 2. Besides, the proposed analytical model indicates that the contact angle \(\theta_{c}=\arccos(\rho gRh_{c}/(2\gamma))\) at the crossover height is significantly larger than \(\theta_{eq}\) (cf. Table 1), which highlights the importance of following the rising dynamics over sufficiently long times to determine equilibrium properties.
The close agreement between experiments and the proposed model relies on a set of parameters (cf. Table 1) that we assume to be induced by topographic surface features with a specific range of nanoscale dimensions. To verify this assumption we image the nanoscale topography of the inner surface of the capillaries at several different locations with an atomic force microscope (Park NX-20, \(k=42\) N/m, \(\omega_{o}=330\) kHz). The topographic height data \(z(x,y)\) (Fig. 3a), obtained in non-contact mode with a lateral resolution of 0.39 nm and height noise level of 0.03 nm, reports a mean topographic height \(\bar{z}=1.82\) nm, standard deviation \(z_{std}=\sqrt{\overline{(z-\bar{z})^{2}}}=0.501\) nm, and a nearly Gaussian height distribution (kurtosis \(\kappa=2.96\)). The AFM images (cf. Fig. 3a) show that the wetting front must move over a complex landscape of surface features with a wide range of lateral dimensions \(\ell_{f}\sim 10\) to 100 nm and a characteristic height \(h_{f}\simeq 2z_{std}\simeq 1\) nm (Fig. 3a).
We now turn to the analytical estimation of the energy barriers \(\Delta U\) induced by the topographic features imaged via AFM. Based on the energetics of wetting on a "rough" surface [42], hemiwicking (i.e., liquid infiltration in the local topography) is favored by features with lateral dimensions \(\ell_{f}\leq\ell^{*}\) prescribed by the critical infiltration length \(\ell^{*}=h_{f}\times(1/\cos^{2}\theta_{Y}-1)^{-1/2}\). As illustrated in Fig. 3b, a local displacement of the contact line over a liquid-infiltrated feature with characteristic projected area \(A_{d}=\pi\ell^{*2}/4\) produces a characteristic energy barrier \(\Delta U=\Delta U^{*}=\gamma(1-\cos\theta_{Y})A_{d}\), from which we readily determine the shape factor \(\beta=1-\cos\theta_{Y}\). Notably, by using \(h_{f}=0.85\) to 1.05 nm and the experimental estimates for the equilibrium contact angle \(\theta_{eq}\) we predict \(\ell^{*}\simeq 5\) to 10 nm and the values for \(A_{d}\) and \(\beta\) reported in Table 1 that are employed in the analytical fits reported in Fig. 1b & Fig. 2 for the anomalous regime. It is worth noting that for surface features with lateral dimensions \(\ell_{f}>\ell^{*}\) larger than the critical infiltration length, the "dry" topography (Fig. 3b) induces energy barriers given by a shape factor \(\beta\simeq\sqrt{1+(h_{f}/\ell_{f})^{2}}-1\sim(h_{f}/\ell_{f})^{2}\) and thus we have \(\Delta U\ll\Delta U^{*}\). Hence, the near-equilibrium displacement rate (Eq. 5) is determined by the nanoscale features that are infiltrated by the preceding liquid film.
Based on this analysis, the characteristic "hopping" rate
| Case \(R_{1}\) | \(h_{eq}\) [mm] | \(\theta_{eq}\) [\({}^{\circ}\)] | \(A_{d}\) [nm\({}^{2}\)] | \(\Delta U/k_{B}T\) | \(\ell\) [pm] | \(\theta_{c}\) [\({}^{\circ}\)] |
|---|---|---|---|---|---|---|
| C\({}_{14}\)H\({}_{30}\) | 49.0 | 11.3 | 86.75 | 10.67 | 0.099 | 18.2 |
| C\({}_{15}\)H\({}_{32}\) | 44.6 | 15.5 | 45.04 | 9.71 | 0.051 | 24.9 |
| C\({}_{16}\)H\({}_{34}\) | 49.6 | 17.7 | 21.25 | 6.73 | 0.024 | 28.6 |

| Case \(R_{2}\) | \(h_{eq}\) [mm] | \(\theta_{eq}\) [\({}^{\circ}\)] | \(A_{d}\) [nm\({}^{2}\)] | \(\Delta U/k_{B}T\) | \(\ell\) [pm] | \(\theta_{c}\) [\({}^{\circ}\)] |
|---|---|---|---|---|---|---|
| C\({}_{14}\)H\({}_{30}\) | 22.0 | 12.3 | 59.64 | 8.69 | 0.031 | 19.8 |
| C\({}_{15}\)H\({}_{32}\) | 20.9 | 14.0 | 55.72 | 10.11 | 0.029 | 22.5 |
| C\({}_{16}\)H\({}_{34}\) | 22.2 | 19.2 | 18.68 | 6.96 | 0.009 | 31.1 |

Table 1: Parameters employed in the analytical predictions for the two microcapillaries of radius \(R_{1}=140\) \(\upmu\)m and \(R_{2}=310\) \(\upmu\)m. The shape factor \(\beta=1-\cos\theta_{Y}\) is determined using the estimate \(\theta_{Y}=\theta_{eq}\).
\(V_{H}\) and length \(L_{H}\) are prescribed by the critical surface feature length \(\ell_{f}\simeq\ell^{*}\). The functional form of Eq. 5 suggests a collapse of the experimental results onto a single curve when plotting the dimensionless displacement rate \(\dot{h}/V_{H}\) against the equilibrium separation \((h_{eq}-h)/L_{H}\) (see Fig. 3c). Furthermore, sufficiently close to equilibrium, \((h_{eq}-h)<L_{H}\), we find the linear scaling \(\dot{h}=V_{H}\times(h_{eq}-h)/L_{H}\) (cf. Fig. 3c) with the characteristic displacement rate in the range \(V_{H}\simeq 0.06\) to \(4.3~{}\mu\)m/s for the anomalous regime. It is worth remarking that these findings and the model in Eq. 3 are strictly valid for sufficiently large separation of scales between the characteristic dimensions of the surface topography and the radius of the capillary tube so that \(\ell\ll h_{eq}\) and \(h_{f}\ll R\). While this condition is satisfied for microcapillaries with macroscopically smooth and flat surfaces having nanoscale topographic features, the proposed model is not applicable for surface features with dimensions comparable to the capillary radius.
In summary, capillary rise experiments performed over unusually long times with non-volatile liquids under controlled ambient conditions report a crossover from LW dynamics to an anomalous regime as the column height approaches equilibrium. Compared to the classical LW equation, the characteristic defect area \(A_{d}\) is the only additional parameter in the model proposed to account for the experimental observations. The estimate \(\theta_{Y}=\theta_{eq}\) for the Young contact angle results in a shape factor \(\beta=1-\rho gRh_{eq}/(2\gamma)\) with an error linearly proportional to the uncertainty in determining \(h_{eq}\). While the equilibrium height was determined with high accuracy owing to the extended observation times, even large relative uncertainties up to \(20\%\) in \(h_{eq}\) would result in slightly different defect areas \(A_{d}\) that produce the energy barrier \(\Delta U\) and period \(\ell\) reported in Table 1. We thus find that the proposed model could similarly fit the experimental data with a reasonable single estimate for the Young contact angle and small adjustments of the reported nanoscale defect areas \(A_{d}\).
Our experimental and theoretical analyses thus suggest that the crossover height and the anomalous imbibition rate are determined by topographic surface features with nanometric dimensions that favor hemiwicking and infiltration of liquid preceding the contact line. Our analysis indicates that tuning the base radius-to-height ratio of nanoscale topographic features can promote or prevent the occurrence of the anomalously slow imbibition. In addition, we find that the equilibrium contact angle can be determined from the linear relation between the displacement rate and separation from equilibrium in the final stage of the anomalous capillary rise. These findings have direct implications on the design and use of capillary de
Figure 3: Nanoscale surface topography and master curve in the stochastic regime. (a) AFM image of the surface topography height \(z(x,y)\). (b) Illustration of contact line motion over surface features with characteristic extent \(\ell_{f}\) and height \(h_{f}\) using two overlapping AFM scans along the x-direction 10 nm apart in the y-direction. The sequence labeled (A) to (D) shows the hypothesized infiltration of critically small features via hemiwicking. (c) Dimensionless displacement rate \(\dot{h}/V_{H}\) versus separation from equilibrium \((h_{eq}-h)/L_{H}\) for all tested systems. Thick black line: non-dimensionalized theoretical prediction. Grey dashed line: linear scaling in the limit \((h_{eq}-h)<L_{H}\).
Figure 2: Capillary rise rates \(\dot{h}\equiv dh/dt\) over observation times \(10^{-1}\leq t\leq 10^{4}\) s in two glass microcapillaries of radius \(R_{1}=140~{}\mu\)m and \(R_{2}=310~{}\mu\)m for (a) tetradecane, (b) pentadecane, and (c) hexadecane. Symbols: experimental data. Thin continuous line: Eq. 1 (LW dynamics); thick continuous line: Eq. 5 (anomalous regime). The crossover rate \(\dot{h}_{c}\) and time \(t_{c}\) (dot-dashed lines) are obtained by employing the crossover height \(h_{c}\) (Eq. 4) in Eq. 1.
vices for micro/nanofluidic handling, the characterization of porous materials, and the fundamental understanding of capillary driven transport in near-equilibrium conditions.
C. C. acknowledges Universite Paris Cite for supporting his stay at Matiere et Systemes Complexes and support from the NSF (CBET-2016204). M. Z. and M. R. gratefully acknowledge ANR (Agence Nationale de la Recherche) and CGI (Commissariat a l'Investissement d'Avenir) for their financial support of this work through Labex SEAM (Science and Engineering for Advanced Materials and Devices), ANR-10-LABX-0096 and ANR-18-IDEX-0001.
## Supplemental Information
### Experimental reproducibility
Following the experimental protocol described in the main text, we performed multiple realizations of the capillary rise experiments for the three studied alkanes to verify the reproducibility of the observed anomalous regime. For these experiments the capillary tubes (Hirschmann ringcaps 5 \(\mu\)L and 20 \(\mu\)L) were employed as received from the supplier without prewetting with the alkanes or pre-cleaning. The column height \(h(t)\) and displacement rate \(\dot{h}\) measured in these experiments are reported in Fig. 4 for the two different capillary sizes employed (inner radius \(R_{1}=140\)\(\mu\)m and \(R_{2}=310\)\(\mu\)m) and show a high degree of reproducibility. For the experiments reported in Fig. 4, the room temperature was controlled at \(T=25.5\pm 1\)\({}^{\circ}\)C for the tetradecane and pentadecane, and at \(T=22.0\pm 1\)\({}^{\circ}\)C for the experiments with hexadecane.
### Cleaning protocol and surface aging
To assess the effect of pre-cleaning the glass capillary, a set of capillary rise experiments were performed by employing a cleaning protocol. The cleaning protocol consisted of injecting successively ethanol, acetone, and distilled water through the capillaries using a syringe pump at a fixed flow rate of 100 \(\mu\)L/min. Each of the three cleaning steps was performed for 10 minutes. The capillary was finally dried with pressurized nitrogen. The column height \(h(t)\) observed with and without the cleaning protocol is reported in Fig. 5 and shows no substantial differences in the observed rise dynamics and final equilibrium height.
The effect of the aging of the hydrophilic glass surface was additionally examined by exposing the glass capillaries to the ambient air over periods of 1 to 4 months before performing the experiments without employing a cleaning protocol (cf. Fig. 5). The capillaries are made of hydrophilic (DURAN) borosilicate glass, which makes them susceptible to surface aging, with the formation of oxides and contamination. As reported in Fig. 5, the aging of the surface on capillaries not treated with cleaning protocol has a significant effect on the observed rise dynamics and final equilibrium height.
|
2302.12934 | Distribution of $δ$-connected components of self-affine sponge of
Lalley-Gatzouras type | Let $(E, \rho)$ be a metric space and let $h_E\left( \delta \right)$ be the
cardinality of the set of $\delta$-connected components of $E$. In literature,
in case of that $E$ is a self-conformal set satisfying the open set condition
or $E$ is a self-affine Sierpi\'nski sponge, necessary and sufficient condition
is given for the validity of the relation $ h_E(\delta)\asymp \delta^{-\dim_B
E}, \text{ when }\delta\to 0. $ In this paper, we generalize the above result
to self-affine sponges of Lalley-Gatzouras type; actually in this case, we show
that there exists a Bernoulli measure $\mu$ such that for any cylinder $R$, it
holds that $ h_R(\delta)\asymp \mu(R) \delta^{-\dim_B E}, \text{ when
}\delta\to 0. $ | Yanfang Zhang, Yongqiang Yang | 2023-02-24T23:31:32Z | http://arxiv.org/abs/2302.12934v1 | # Distribution of \(\delta\)-connected components of self-affine sponge of Lalley-Gatzouras type
###### Abstract.
Let \(\left(E,\rho\right)\) be a metric space and let \(h_{E}\left(\delta\right)\) be the cardinality of the set of \(\delta\)-connected components of \(E\). In literature, in case of that \(E\) is a self-conformal set satisfying the open set condition or \(E\) is a self-affine Sierpinski sponge, necessary and sufficient condition is given for the validity of the relation
\[h_{E}(\delta)\asymp\delta^{-\dim_{B}E},\text{ when }\delta\to 0.\]
In this paper, we generalize the above result to self-affine sponges of Lalley-Gatzouras type; actually in this case, we show that there exists a Bernoulli measure \(\mu\) such that for any cylinder \(R\), it holds that
\[h_{R}(\delta)\asymp\mu(R)\delta^{-\dim_{B}E},\text{ when }\delta\to 0.\]
Key words and phrases: self-affine sponge, maximal power law, component-counting measure. 2010 Mathematics Subject Classification: 28A80, 28A78. The work is supported by the start-up research fund from Huzhou University No. RK21089.
## 1. Introduction
Let \(\left(E,\rho\right)\) be a metric space and let \(\delta>0\). Two points \(x,y\in E\) are said to be \(\delta\)-equivalent if there exists a sequence \(\left\{x_{1}=x,x_{2},...,x_{k-1},x_{k}=y\right\}\subset E\) such that \(\rho\left(x_{i},x_{i+1}\right)\leq\delta\) for \(1\leq i\leq k-1\). A \(\delta\)-equivalent class of \(E\) is called a \(\delta\)_-connected component_ of \(E\). We denote by \(h_{E}\left(\delta\right)\) the cardinality of the set of \(\delta\)-connected components of \(E\).
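For a finite approximation of a set, \(h_{E}(\delta)\) can be computed directly from this definition by linking every pair of points at distance at most \(\delta\) and counting the resulting components. The Python sketch below, using a union-find structure and a level-4 approximation of the middle-thirds Cantor set as test data, is purely illustrative and not part of the paper.

```python
from itertools import product
import numpy as np

def count_delta_components(points: np.ndarray, delta: float) -> int:
    """Count delta-connected components of a finite point set with a union-find structure."""
    n = len(points)
    parent = list(range(n))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= delta:
                parent[find(i)] = find(j)  # merge the two components
    return len({find(i) for i in range(n)})

# Test data: left endpoints of the level-4 intervals of the middle-thirds Cantor set.
pts = np.array([[sum(d * 3.0 ** -k for k, d in enumerate(digits, start=1))]
                for digits in product((0, 2), repeat=4)])
print(count_delta_components(pts, delta=3.0 ** -4))  # -> 16, i.e. delta**(-dim_B) with dim_B = log 2 / log 3
```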
A notion closely related to \(\delta\)-connected component is the gap sequence of a compact metric space, which has been studied by many mathematicians; see for instance, [5, 7, 9, 17, 30]. Some early works ([7, 9, 17]) observed that for some totally disconnected self-similar sets and self-conformal sets \(E\), the gap sequence, which we denote by \(\{g_{E}(n)\}_{n\geq 1}\), is comparable to \(n^{-1/\dim_{B}E}\), which is written as
\[g_{E}(n)\asymp n^{-1/\dim_{B}E},\text{ }n\geq 1,\]
where \(\dim_{B}\) denotes the box dimension. (Two functions \(f,g:X\rightarrow\mathbb{R}\) are said to be _comparable_, denoted by \(f\asymp g\), if there exists a constant \(c>0\) such that \(c^{-1}g(x)\leq f(x)\leq cg(x)\) for all \(x\in X\).)
Let \(\left(E,\rho\right)\) be a compact metric space and let \(\gamma>0\). It is shown (Miao, Xi and Xiong [25], Zhang and Huang [34]) that
\[g_{E}(n)\asymp n^{-1/\gamma}\Leftrightarrow h_{E}(\delta)\asymp\delta^{-\gamma}.\]
Motivated by this relation, Zhang and Huang [34] proposed the following definition.
**Definition 1.1**.: Let \(E\) be a compact metric space. We say \(E\) satisfies the _power law with index \(\gamma\)_ if \(h_{E}\left(\delta\right)\asymp\delta^{-\gamma}\); if \(\gamma=\dim_{B}E\), in addition, we say \(E\) satisfies the _maximal power law_.
There are many works devoted to the maximal power law of attractors of IFS ([7,9,13,17,20,21,25,34]). An _iterated function system_ (IFS) is a family of contractions \(\Phi=\{\varphi_{j}\}_{j=1}^{N}\) on a metric space \(X\). In this paper, we will always assume that all \(\varphi_{j}\) are injections. The _attractor_ of the IFS is the unique nonempty compact set \(E\) satisfying \(E=\bigcup_{j=1}^{N}\varphi_{j}(E)\); especially, it is called a _self-similar set_ if all \(\varphi_{j}\) are similitudes. An IFS \(\{\varphi_{j}\}_{j=1}^{N}\) is said to satisfy the _open set condition_ (OSC), if there is a bounded nonempty open set \(U\subset\mathbb{R}^{d}\) such that for all \(1\leq i\leq N\), \(\varphi_{i}(U)\subset U\) and \(\varphi_{i}(U)\cap\varphi_{j}(U)=\emptyset\) for \(1\leq i\neq j\leq N\); moreover, if \(U\cap E\neq\emptyset\), then we say \(E\) satisfies the _strong open set condition_ (SOSC) and we call \(U\) a _feasible strong open set_ for \(E\).
Lapidus, Pomerance [17] and Falconer [9] confirmed maximal power law for self-similar sets \(E\subset\mathbb{R}\) satisfying the strong separation condition, and Deng, Wang and Xi [7] generalized this result to the self-conformal sets. Recently, Huang and Zhang [13] gave a complete answer in case of the self-conformal set satisfying the open set condition.
A point \(x\in E\) is called a trivial point of \(E\) if \(\{x\}\) is a connected component of \(E\).
**Proposition 1.1** ([13]).: _Let \(K\) be a \(\mathcal{C}^{1+\alpha}\) self-conformal set generated by the IFS \(\{S_{j}\}_{j=1}^{N}\). Assume \(K\) satisfies the OSC. Then the following statements are equivalent:_
(i)_\(K\) satisfies the maximal power law._
(ii) _There exists a strong open set \(U\) such that \(U\) contains trivial points of \(K\)._
(iii) _Every strong open set for \(K\) contains trivial points of \(K\)._
(iv) _The set of trivial points of \(K\) is dense in \(K\)._
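As a simple illustration, the interval \([0,1]\), viewed as the attractor of the IFS \(\{x/2,(x+1)/2\}\) (which satisfies the OSC with \(U=(0,1)\)), has no trivial points at all, and indeed \(h_{[0,1]}(\delta)\equiv 1\) while \(\dim_{B}[0,1]=1\); the maximal power law fails, in accordance with Proposition 1.1.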
There are several works devoted to the maximal power law of Bedford-McMullen carpets [20, 21, 25] and self-affine Sierpinski sponge [34]. Let \(d\geq 2\) and let
\[2\leq n_{1}\leq n_{2}\leq\cdots\leq n_{d}\]
be a sequence of integers. Let \(\Gamma=\operatorname{diag}(n_{1},\ldots,n_{d})\) be the \(d\times d\) diagonal matrix. Let \(\mathcal{D}=\{\mathbf{i}_{1},\ldots,\mathbf{i}_{N}\}\subset\prod_{j=1}^{d}\{0,1,\ldots,n_{j}-1\}\). For \(\mathbf{i}\in\mathcal{D}\) and \(z\in\mathbb{R}^{d}\), we define \(S_{\mathbf{i}}(z)=\Gamma^{-1}(z+\mathbf{i})\). The attractor of the IFS \(\{S_{\mathbf{i}}\}_{\mathbf{i}\in\mathcal{D}}\), which is denoted by \(E=K(\Gamma,\mathcal{D})\), is called a \(d\)-dimensional _generalized self-affine Sierpinski sponge_ (see Olsen [26]). The set \(E\) is called a _fractal cube_ if \(n_{1}=n_{2}=\cdots=n_{d}\); it is called a _self-affine Sierpinski sponge_ if \(n_{1}<n_{2}<\cdots<n_{d}\) (see Kenyon and Peres [14]), and especially when \(d=2\), \(E\) is called a _Bedford-McMullen carpet_.
We say \(E\) is _non-degenerated_ if \(E\) is not contained in a \((d-1)\)-dimensional face of the cube \([0,1]^{d}\). For \(1\leq j\leq d\), let \(\pi_{j}:\mathbb{R}^{d}\to\mathbb{R}^{j}\) be the projection
\[\pi_{j}(x_{1},\ldots,x_{d})=(x_{1},\ldots,x_{j}). \tag{1.1}\]
**Proposition 1.2** ([34]).: _Let \(E\) be a non-degenerated self-affine Sierpinski sponge. Then \(E\) satisfies the maximal power law if and only if \(E\) and all \(\pi_{j}(E)(1\leq j\leq d-1)\) possess trivial points._
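To experiment with these notions, the following brute-force Python sketch estimates \(h_{E}(\delta)\) for a small Bedford-McMullen carpet by counting \(\delta\)-connected components of a finite point cloud approximating \(E\). The digit set below is an arbitrary illustrative choice: for it, \(\pi_{1}(E)\) is the middle-thirds Cantor set and \(E\) is totally disconnected, so Proposition 1.2 predicts the maximal power law with exponent \(\dim_{B}E=\log_{3}2+\tfrac{1}{2}\). The estimate is only meaningful when the point cloud is much finer than \(\delta\).

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Bedford-McMullen carpet with expansion (n1, n2) = (3, 4); the digit set is an
# arbitrary illustrative choice occupying columns {0, 2} (column 1 stays empty).
n1, n2 = 3, 4
digits = [(0, 0), (0, 2), (2, 1), (2, 3)]

def carpet_points(level):
    """Images of the fixed point (0,0) of S_(0,0) under all words of the given length.

    Every such point lies in the carpet E, and every point of E is within
    roughly n1**(-level) of one of them.
    """
    pts = np.array([[0.0, 0.0]])
    for _ in range(level):
        pts = np.concatenate(
            [(pts + np.array([i, j], dtype=float)) / np.array([n1, n2], dtype=float)
             for (i, j) in digits]
        )
    return pts

def count_components(pts, delta):
    """Number of delta-connected components of the finite point set `pts`."""
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    graph = csr_matrix(dist <= delta)   # adjacency: points within distance delta
    n_comp, _ = connected_components(graph, directed=False)
    return n_comp

pts = carpet_points(5)                  # 4**5 = 1024 points, resolution ~ 3**-5
for delta in (4.0 ** -1, 4.0 ** -2, 4.0 ** -3):
    # crude estimate of h_E(delta); compare its growth with delta**(-dim_B E)
    print(delta, count_components(pts, delta))
```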
The main purpose of the present paper is to prove a stronger version of Proposition 1.2 for Lalley-Gatzouras sponges. A diagonal self-affine sponge \(\Lambda\) is said to be a _Lalley-Gatzouras sponge_ if the generating IFS satisfies a coordinate ordering condition as well as a neat projection condition ([6, 16]). See Section 3 for the precise definitions.
**Remark 1.1**.: The diagonal self-affine sponges form an important class of self-affine sets which has received a lot of attention in recent years: Baranski [1], Das and Simmons [6], Feng and Wang [10], Mackay [23], Peres [28], Banaji and Kolossvary [2] on dimension theory; King [15], Jordan and Rams [22], Barral and Mensi [3], Olsen [26], Reeve [32] on multifractal analysis; Li _et al_[19], Rao _et al_[31], Yang and Zhang [33] on Lipschitz classification, etc. The distribution of \(\delta\)-connected components illustrates the metric and topological properties of self-affine sponges from a new point of view.
However, this generalization is not trivial. The difficulty comes from the fact that given a cylinder \(E_{I}\) of a Lalley-Gatzouras type sponge \(E\), the relation between \(h_{E}(\delta)\) and \(h_{E_{I}}(\delta)\) is unclear. In this paper, we overcome this difficulty by using a recent result of Huang et al [12] on box-counting measures.
Denote \(\Sigma=\{1,\ldots,N\}\) and set \(\Sigma^{*}=\bigcup_{n\geq 0}\Sigma^{n}\). For \(I=i_{1}\ldots i_{n}\in\Sigma^{*}\), we call \(E_{I}=\varphi_{I}(E)=\varphi_{i_{1}}\circ\cdots\circ\varphi_{i_{n}}(E)\) an \(n\)-th _cylinder_ of \(E\). We call \(\delta(z+[0,1]^{d})\) a \(\delta\)_-mesh_ box if \(z\in\mathbb{Z}^{d}\). For \(A\subset\mathbb{R}^{d}\), we use
\[N_{A}(\delta) \tag{1.2}\]
to denote the number of \(\delta\)-mesh boxes intersecting \(A\). It is well known that
\[\dim_{B}A=\lim_{\delta\to 0}-\frac{\log N_{A}(\delta)}{\log\delta}\]
if the limit exists (see for instance, [8]).
For a Lalley-Gatzouras sponge, a special Bernoulli measure, which we will call the _canonical Bernoulli measure_ (see Section 3 for precise definition), is closely related to the box-dimension ([12, 16]). Recently, Huang _et al_ proved the following result.
**Proposition 1.3**.: _([12]) Let \(\Lambda\) be a Lalley-Gatzouras sponge. Then the canonical Bernoulli measure \(\mu\) is a **cylinder box-counting measure** in the sense that, there is a constant \(M_{0}>0\) such that for any cylinder \(R\) of \(\Lambda\), and any \(\delta<S(R)\), the shortest side of \(R\), it holds that_
\[M_{0}^{-1}\mu(R)\delta^{-\dim_{B}\Lambda}\leq N_{R}(\delta)\leq M_{0}\mu(R) \delta^{-\dim_{B}\Lambda}. \tag{1.3}\]
Motivated by the above result, we propose the following concept.
**Definition 1.2** (Component-counting measure).: Let \(\Lambda\) be the attractor of an IFS \(\Phi\), and let \(\mu\) be a finite Borel measure on \(\Lambda\). If there is a constant \(M\geq 1\) such that for any cylinder \(R\) of \(\Lambda\), there exists a \(\delta_{0}=\delta_{0}(R)\) such that
\[M^{-1}\mu\left(R\right)\delta^{-\dim_{B}\Lambda}\leq h_{R}\left(\delta\right) \leq M\mu\left(R\right)\delta^{-\dim_{B}\Lambda},\]
for \(\delta<\delta_{0}\), then we call \(\mu\) a _component-counting measure_ of \(\Lambda\).
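For example, for the middle-thirds Cantor set \(C\), every \(k\)-th cylinder \(R\) is a similar copy of \(C\) scaled by \(3^{-k}\), so \(h_{R}(\delta)=h_{C}(3^{k}\delta)\asymp(3^{k}\delta)^{-\log 2/\log 3}=2^{-k}\,\delta^{-\dim_{B}C}\); since the natural measure gives \(\mu(R)=2^{-k}\), this measure is a component-counting measure of \(C\).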
**Remark 1.2**.: Clearly, if \(\Lambda\) admits a component-counting measure, then not only \(\Lambda\) but also all its cylinders satisfy the maximal power law with a uniform constant \(M\). This reflects a kind of homogeneity of \(\Lambda\).
First, we add one more equivalent statement to the list in Proposition 1.1.
**Theorem 1.1**.: _Let \(K\) be a \(\mathcal{C}^{1+\alpha}\) self-conformal set satisfying the OSC, and let \(\mu\) be the Gibbs measure. Then \(K\) satisfies the maximal power law if and only if \(\mu\) is a component-counting measure._
A Lalley-Gatzouras sponge \(\Lambda\) is said to be _degenerated_ if \(\Lambda\) is contained in a face of \(\left[0,1\right]^{d}\) with dimension \(d-1\). From now on, we will always assume that \(\Lambda\) is a non-degenerated Lalley-Gatzouras sponge. Under this assumption, \(\Lambda\) satisfies the strong open set condition with the open set \(V=(0,1)^{d}\). Denote \(\Lambda_{j}=\pi_{j}\left(\Lambda\right)\) where \(\pi_{j}\) is defined by (1.1); clearly \(\Lambda_{j}\) is also non-degenerated, and \(V_{j}=(0,1)^{j}\) is a feasible strong open set for \(\Lambda_{j}\). (See Lemma 4.4.) A trivial point \(x\) of \(\Lambda\) is called an _inner trivial point_ of \(\Lambda\) if \(x\in(0,1)^{d}\).
The main results of the present paper are the following two theorems.
**Theorem 1.2**.: _Let \(\Lambda\) be a non-degenerated Lalley-Gatzouras type sponge, and let \(\mu\) be the canonical Bernoulli measure. Then the following statements are equivalent_
_(i) \(\mu\) is a component-counting measure._
_(ii) \(\Lambda\) satisfies the maximal power law._
_(iii) \(V_{j}=(0,1)^{j}\) possesses trivial points of \(\Lambda_{j}=\pi_{j}(\Lambda)\) for every \(1\leq j\leq d\)._
_(iv) The trivial points of \(\Lambda_{j}\) are dense in \(\Lambda_{j}\) for every \(1\leq j\leq d\)._
**Remark 1.3**.: Zhang and Xu [35] proved that if \(\Lambda\) is a slicing self-affine sponge (see Remark 3.1 for a definition), then \(\Lambda\) possesses inner trivial points as soon as it possesses trivial points. This explains why in Proposition 1.2, we require that \(\Lambda_{j}\) possesses trivial points instead of inner trivial points of \(\Lambda_{j}\).
**Theorem 1.3**.: _Let \(\Lambda\) be a non-degenerated Lalley-Gatzouras sponge, and let \(\mu\) be the canonical Bernoulli measure. If \(\Lambda\) does not satisfy the maximal power law, then there exists a real number \(0<\chi<\dim_{B}\Lambda\) such that for any cylinder \(R\),_
\[h_{R}\left(\delta\right)=O\left(\mu\left(R\right)\delta^{-\chi}\right),\ \ \delta\to 0. \tag{1.4}\]
**Remark 1.4**.: We remark that if a cylinder box-counting measure \(\mu\) is also a component-counting measure, then for any cylinder \(R\) of \(\Lambda\) and \(\delta\) small enough, it holds that
\[h_{R}(\delta)\asymp N_{R}(\delta),\]
which means that even locally, a large portion of \(\delta\)-connected components are very 'small'.
The paper is organized as follows. Theorem 1.1 is proved in Section 2. In Section 3, we recall some known results about Lalley-Gatzouras type sponges. In Section 4, we give some notations and lemmas. In Section 5, we deal with the case that there exists \(j\) such that \(\Lambda_{j}\) contains no inner trivial points. Theorem 1.3 and Theorem 1.2 are proved in Section 6.
## 2. **Component-counting measures of self-conformal sets**
Let \(\alpha>0\). A conformal map \(S:V\to\mathbb{R}^{d}\) is \(\mathcal{C}^{1+\alpha}\) differentiable if there exists a constant \(C>0\) such that
\[||S^{\prime}(x)|-|S^{\prime}(y)||\leq C|x-y|^{\alpha}\text{ for all }x,y\in V. \tag{2.1}\]
The attractor of an IFS \(\{S_{j}\}_{j=1}^{N}\) is called a \(\mathcal{C}^{1+\alpha}\)_self-conformal set_, if all maps in the IFS are \(\mathcal{C}^{1+\alpha}\) differentiable.
In this section, we always assume that \(K\) is a \(\mathcal{C}^{1+\alpha}\) self-conformal set generated by the IFS \(\{S_{j}\}_{j=1}^{N}\). Let \(s=\dim_{H}K(=\dim_{B}K)\) and \(\mu\) be the Gibbs measure on \(K\).
For \(A\subset\mathbb{R}^{d}\), we denote \(|A|\) the diameter of \(A\). The following lemmas are well-known.
**Lemma 2.1** (Principle of bounded distortion [27, 28]).: _Let \(E\) be a compact subset of \(V\). Then there exists \(C_{1}\geq 1\) such that for any \(\boldsymbol{\sigma}\in\Sigma^{*}\),_
\[C_{1}^{-1}|S_{\boldsymbol{\sigma}}(E)|\cdot|x-y|\leq|S_{\boldsymbol{\sigma}}( x)-S_{\boldsymbol{\sigma}}(y)|\leq C_{1}|S_{\boldsymbol{\sigma}}(E)|\cdot|x-y|\]
_for all \(x,y\in E\)._
Strings \(\boldsymbol{\omega},\boldsymbol{\tau}\in\Sigma^{*}\) are _incomparable_ if neither is a prefix of the other.
**Lemma 2.2** (Ahlfors regularity [27, 28]).: _If \(K\) satisfies the OSC, then \(\mu\asymp\mathcal{H}^{s}|_{K}\), and there is a constant \(C_{2}>0\) such that for any \(\boldsymbol{\sigma}\in\Sigma^{*}\),_
\[C_{2}^{-1}|K_{\boldsymbol{\sigma}}|^{s}\leq\mu(K_{\boldsymbol{\sigma}})\leq C _{2}|K_{\boldsymbol{\sigma}}|^{s}.\]
_Moreover, \(\mu(K_{\boldsymbol{\omega}}\cap K_{\boldsymbol{\tau}})=0\) provided that \(\boldsymbol{\omega}\) and \(\boldsymbol{\tau}\) are incomparable._
Proof of Theorem 1.1.: First, let \(K\) be a self-conformal set satisfying the maximal power law. Let \(K_{I}\) be a cylinder and let \(\delta<|K_{I}|\). Set \(\delta^{\prime}=C_{1}\delta/|K_{I}|\), where \(C_{1}\) is the constant in Lemma 2.1. Let \(U_{1}\) and \(U_{2}\) be two distinct \(\delta^{\prime}\)-connected components of \(K\). Then by Lemma 2.1, any \(x\in S_{I}(U_{1})\) and \(y\in S_{I}(U_{2})\) belong to different \(\delta\)-connected components of \(K_{I}\). Therefore,
\[\begin{split}h_{K_{I}}(\delta)&\geq h_{K}(\delta^{\prime})\qquad\text{(By the similarity mentioned above.)}\\ &\geq M(\delta^{\prime})^{-s}\qquad\text{(By the maximal power law.)}\\ &=MC_{1}^{-s}|K_{I}|^{s}\delta^{-s}\\ &\geq M(C_{1}C_{2})^{-s}\mu(K_{I})\delta^{-s}.\qquad\text{(By Lemma 2.2.)}\end{split}\]
The other direction inequality can be proved in the same manner. It follows that \(\mu\) is a component-counting measure.
On the other hand, if \(\mu\) is a component-counting measure, then obviously \(K\) satisfies the maximal power law (see Remark 1.2). The theorem is proved.
## 3. **Self-affine sponges of Lalley-Gatzouras type**
In this section, we recall some known results on Lalley-Gatzouras type sponges.
### Lalley-Gatzouras type sponges
We call \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), \(f(x)=Tx+b\) a _diagonal self-affine mapping_ if \(T\) is a \(d\times d\) diagonal matrix such that all the diagonal entries are positive numbers. An IFS \(\Phi=\{\phi_{j}(x)\}_{j=1}^{m}\) is called a _diagonal self-affine IFS_ if all the maps \(\phi_{j}(x)\) are distinct diagonal self-affine contractions; the attractor is called a _diagonal self-affine sponge_, and we denote it by \(\Lambda_{\Phi}\). Without loss of generality, we will always assume that \(\Lambda_{\Phi}\subset[0,1]^{d}\). The following is an alternative definition.
**Definition 3.1** (Diagonal IFS [6]).: Let \(d\geq 1\) be an integer. For each \(i\in\{1,...,d\}\), let \(A_{i}=\{0,1,...,n_{i}-1\}\) with \(n_{i}\geq 2\), and let \(\varPhi_{i}=\left(\phi_{a,i}\right)_{a\in A_{i}}\) be a collection of contracting similarities of \([0,1]\), called the base IFS in coordinate \(i\). Let \(A=\prod_{i=1}^{d}A_{i}\), and for each \(a=\left(a_{1},\cdots,a_{d}\right)\in A\), consider the contracting affine maps \(\phi_{a}:[0,1]^{d}\rightarrow[0,1]^{d}\) defined by the formula
\[\phi_{a}\left(x_{1},\cdots x_{d}\right)=\left(\phi_{a,1}\left(x_{1}\right), \cdots,\phi_{a,d}\left(x_{d}\right)\right)\]
where \(\phi_{a,i}\) is shorthand for \(\phi_{a_{i},i}\) in the formula above. Then we get
\[\phi_{a}\left([0,1]^{d}\right)=\prod_{i=1}^{d}\phi_{a,i}\left([0,1]\right)\subset[0,1]^{d}.\]
Given \(\mathcal{D}\subset A\), we call the collection \(\varPhi=\left(\phi_{a}\right)_{a\in\mathcal{D}}\) a _diagonal IFS_, and we call its invariant set \(\Lambda\) a _diagonal self-affine sponge_.
**Remark 3.1**.: A diagonal self-affine IFS \(\varPhi\) is called a slicing self-affine IFS, if for each \(i\in\{1,...,d\}\),
\[[0,1)=\phi_{0,i}\big{(}[0,1)\big{)}\cup\cdots\cup\phi_{n_{i}-1,i}\big{(}[0,1)\big{)}\]
is a partition of \([0,1)\) from left to right; in this case, we call \(\Lambda\) a _slicing self-affine sponge_. In particular, if \(d=2\), then we call \(\Lambda\) a _Baranski carpet_. Furthermore, if for each \(1\leq j\leq d\), the contraction ratios of \(\phi_{a,j}\), \(a\in\mathcal{D}\), are all equal, then \(\Lambda\) is the _self-affine Sierpinski sponge_.
We say that \(\Phi\) satisfies the _coordinate ordering condition_ if
\[\phi_{a,1}^{\prime}>\cdots>\phi_{a,d}^{\prime},\quad\text{ for all }a\in \mathcal{D}, \tag{3.1}\]
where \(f^{\prime}\) denotes the derivative of the function \(f\).
Recall that \(\pi_{j}(x_{1},\ldots,x_{d})=\left(x_{1},\ldots,x_{j}\right)\). Let
\[\varPhi_{\{1,\cdots,j\}}=\left(\left(\phi_{a,1},\cdots,\phi_{a,j}\right) \right)_{a\in\pi_{j}(\mathcal{D})},\]
which is an IFS on \(\mathbb{R}^{j}\). Clearly \(\Lambda_{j}=\pi_{j}\left(\Lambda\right)\) is the attractor of the IFS \(\varPhi_{\{1,\cdots,j\}}\).
**Definition 3.2** ([6]).: Let \(\Lambda_{\Phi}\) be a diagonal self-affine sponge satisfying (3.1). We say \(\Phi\) satisfies the _neat projection condition_, if for each \(j\in\{1,\ldots,d\}\), the IFS \(\Phi_{\{1,\ldots,j\}}\) satisfies the OSC with the open set \(\mathbb{I}^{j}=(0,1)^{j}\), that is,
\[\left\{\phi_{\mathbf{d},\{1,\ldots,j\}}(\mathbb{I}^{j})\right\}_{\mathbf{d} \in\pi_{j}(\mathcal{D})}\]
are disjoint. Moreover, we say a diagonal self-affine sponge \(\Lambda_{\Phi}\) is of _Lalley-Gatzouras type_ if it satisfies the coordinate ordering condition as well as the neat projection condition.
### The canonical Bernoulli measure
Let \({\bf p}=(p_{\bf d})_{{\bf d}\in{\cal D}}\) be a probability weight. The unique probability measure \(\mu_{\bf p}\) satisfying
\[\mu_{\bf p}(\cdot)=\sum_{{\bf d}\in{\cal D}}p_{\bf d}\,\mu_{\bf p}\big{(}\phi_{\bf d}^{-1}(\cdot)\big{)}\]
is called the _Bernoulli measure_ determined by the weight \({\bf p}\).
Let \(\Lambda_{\Phi}\) be a Lalley-Gatzouras type sponge. Now we define a sequence \(\{\beta_{j}\}_{j=1}^{d}\) related to \(\Lambda_{\Phi}\). Let \(\beta_{1}>0\) be the unique real number satisfying
\[\sum_{f_{1}\in\Phi_{\{1\}}}(f_{1}^{\prime})^{\beta_{1}}=1.\]
If \(\beta_{1},\ldots,\beta_{j-1}\) are defined, we define \(\beta_{j}>0\) to be the unique real number such that
\[\sum_{(f_{1},\ldots,f_{j})\in\Phi_{\{1,\ldots,j\}}}\prod_{k=1}^{j}(f_{k}^{ \prime})^{\beta_{k}}=1. \tag{3.2}\]
Next, for \({\bf f}=(f_{1},\ldots,f_{j})\in\Phi_{\{1,\ldots,j\}}\), define
\[p_{\bf f}=\prod_{k=1}^{j}(f_{k}^{\prime})^{\beta_{k}}. \tag{3.3}\]
Let \(\mu_{j}\) be the Bernoulli measure on \(\Lambda_{j}\) defined by the weight \((p_{\bf f})_{{\bf f}\in\Phi_{\{1,\ldots,j\}}}\). Especially, we denote \(\mu:=\mu_{d}\), and we call \(\mu\) the _canonical Bernoulli measure_ of \(\Lambda\). It is shown that
**Theorem 3.1**.: _([12]) Let \(\Lambda\) be a Lalley-Gatzouras type sponge, then_
\[\dim_{B}\Lambda_{\Phi}=\sum_{j=1}^{d}\beta_{j},\]
_and the canonical Bernoulli measure \(\mu\) is a cylinder box-counting measure._
Denote
\[\alpha_{j}:=\dim_{B}\Lambda_{j}=\sum_{k=1}^{j}\beta_{k}. \tag{3.4}\]
Especially \(\dim_{B}\Lambda=\alpha_{d}:=\alpha\).
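For example, for a Bedford-McMullen carpet with \(N\) digits occupying \(N_{1}\) non-empty columns, (3.2) gives \(\beta_{1}=\log N_{1}/\log n_{1}\) and \(\beta_{2}=\log(N/N_{1})/\log n_{2}\), so Theorem 3.1 recovers the classical formula \(\dim_{B}\Lambda=\log_{n_{1}}N_{1}+\log_{n_{2}}(N/N_{1})\); moreover, (3.3) assigns every digit the same weight \(p_{\mathbf{d}}=N_{1}^{-1}\cdot(N_{1}/N)=1/N\), i.e. the canonical Bernoulli measure is the uniform Bernoulli measure in this case.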
**Remark 3.2**.: Let \(R\) and \(\widetilde{R}\) be two distinct \(\ell\)-th cylinders of \(\Lambda_{\Phi}\), and let \(\nu\) be a Bernoulli measure of \(\Lambda_{\Phi}\). Then \(\nu(R\cap\widetilde{R})=0\) always holds. See for instance [31].
**Lemma 3.3**.: _Let \(\Lambda\) be a non-degenerated diagonal self-affine sponge of Lalley-Gatzouras type. Then all \(\Lambda_{j}\), \(1\leq j\leq d\), are non-degenerated._
Proof.: Define \(\pi_{k}^{\prime}(x_{1},\ldots,x_{d})=x_{k}\). Then \(\Lambda\) is non-degenerated if and only if, for each \(1\leq k\leq d\), \(\pi_{k}^{\prime}(\Lambda)\neq\{0\}\) and \(\pi_{k}^{\prime}(\Lambda)\neq\{1\}\). Since for \(k\leq j\), \(\pi_{k}^{\prime}\circ\pi_{j}=\pi_{k}^{\prime}\), we obtain the lemma.
## 4. **Notations and lemmas**
In this section, we always assume that \(\Lambda\) is a non-degenerated Lalley-Gatzouras type sponge. We will use \(\mathcal{D}\) as the alphabet and denote \(\mathcal{D}^{*}=\bigcup_{n\geq 1}\mathcal{D}^{n}\). For \(i_{1}\ldots i_{k}\in\mathcal{D}^{k}\), we call \(\Lambda_{i_{1}\ldots i_{k}}:=\phi_{i_{1}\ldots i_{k}}(\Lambda)\) a \(k\)-th _cylinder_; moreover, for such a cylinder \(W=\Lambda_{i_{1}\ldots i_{k}}\) we set
\[S(W)=\prod_{j=1}^{k}\phi^{\prime}_{i_{j},d} \tag{4.1}\]
to be the 'shortest side' of \(W\). Let
\[r_{*}=\min\left\{\phi^{\prime}_{a,d};\ a\in\mathcal{D}\right\}. \tag{4.2}\]
Let
\[\mathcal{V}_{\delta}=\{\Lambda_{i_{1}\ldots i_{k}};\ S(\Lambda_{i_{1}\ldots i _{k}})<\delta/r_{*}\ \text{and}\ S(\Lambda_{i_{1}\ldots i_{k-1}})\geq\delta/r_{*}\} \tag{4.3}\]
and we call it the \(\delta\)_-blocking_ of \(\Lambda\). Clearly for \(H\in\mathcal{V}_{\delta}\), it holds that
\[\delta\leq S(H)<\delta/r_{*}.\]
The following lemma is obvious.
**Lemma 4.1**.: _(i) For any bounded set \(A\subset\mathbb{R}^{d}\), it holds that \(h_{A}(\delta)\leq 3^{d}N_{A}(\delta)\)._
_(ii) \(h_{A}(\delta)+h_{B}(\delta)\geq h_{A\cup B}(\delta)\)._
_(iii) \(h_{A}(\delta)\geq h_{f(A)}(\delta)\) if \(f\) is a contractive map._
We use \(\partial E\) to denote the boundary of a set \(E\subset\mathbb{R}^{d}\).
Let \(W=\Lambda_{I}\) be a cylinder of \(\Lambda\). We use
\[N^{b}_{W}(\delta)\]
to denote the number of \(\delta\)-mesh boxes intersecting \(\phi_{I}(\partial[0,1]^{d}\cap\Lambda)\). The main goal of this section is to estimate \(N^{b}_{W}(\delta)\). To this end, our strategy is to estimate the \(\mu\)-measure of the set
\[\Lambda^{b}_{m}=\cup\{\Lambda_{I};\ I\in\mathcal{D}^{m}\ \text{and}\ \varphi_{I}([0,1]^{d})\cap\partial[0,1]^{d}\neq\emptyset\}, \tag{4.4}\]
and then use the fact that \(\mu\) is a box-counting measure.
Let \(F_{1},\ldots,F_{2d}\) be the \((d-1)\)-faces of the \([0,1]^{d}\). For \(1\leq j\leq 2d\), denote
\[\mathcal{D}^{(j)}=\left\{a\in\mathcal{D};\ \phi_{a}\left([0,1]^{d}\right)\cap F _{j}\neq\emptyset\right\}.\]
Denote
\[Q_{j}=\sum_{a\in\mathcal{D}^{(j)}}\mu(\Lambda_{a})\]
and set
\[Q=\max\{Q_{j};\ j=1,\ldots,2d\}. \tag{4.5}\]
Since \(\Lambda\) is non-degenerated, we conclude that for each \(j\), \(\mathcal{D}^{(j)}\) is a proper subset of \(\mathcal{D}\) and hence \(Q_{j}<1\).
**Lemma 4.2**.: _For each integer \(m\geq 1\), we have_
\[\mu(\Lambda_{m}^{b})\leq\sum_{j=1}^{2d}Q_{j}^{m}\leq 2dQ^{m}.\]
Proof.: For \(I\in\mathcal{D}^{m}\), clearly \(\varphi_{I}([0,1]^{d})\cap F_{j}\neq\emptyset\) if and only if \(I\in(\mathcal{D}^{(j)})^{m}\), so the measure of the union of such cylinders is \(Q_{j}^{m}\) (Remark 3.2). Summing over \(j=1,\ldots,2d\) proves the lemma.
**Lemma 4.3**.: _Let \(W\) be a cylinder of \(\Lambda\) and let \(m\geq 1\) be an integer. If \(\delta<r_{*}^{m}S(W)\), then_
\[N_{W}^{b}(\delta)\leq 2dM_{0}Q^{m}\mu(W)\delta^{-\alpha},\]
_where \(M_{0}\) is the constant in Proposition 1.3._
Proof.: Let \(W=\Lambda_{I}\) be a cylinder. Since \(\Lambda\cap\partial[0,1]^{d}\subset\Lambda_{m}^{b}\), every \(\delta\)-mesh box counted by \(N_{W}^{b}(\delta)\) intersects \(\varphi_{I}(\Lambda_{m}^{b})\); moreover, \(\delta<r_{*}^{m}S(W)\) is smaller than the shortest side of each of the \((k+m)\)-th cylinders composing \(\varphi_{I}(\Lambda_{m}^{b})\). By the definition of \(\mu\), we have \(\mu(\varphi_{I}(A))=\mu(W)\mu(A)\) for any cylinder \(A\) of \(\Lambda\). Hence, by Lemma 4.2,
\[\mu(\varphi_{I}(\Lambda_{m}^{b}))\leq 2dQ^{m}\mu(W).\]
Therefore,
\[N_{W}^{b}(\delta)\leq N_{\varphi_{I}(\Lambda_{m}^{b})}(\delta)\leq M_{0}\mu(\varphi_{I}(\Lambda_{m}^{b}))\,\delta^{-\alpha}\leq 2dM_{0}Q^{m}\mu(W)\delta^{-\alpha},\]
where the second inequality holds since \(\mu\) is a box-counting measure.
Finally, we characterize when \(\Lambda\) admits inner trivial points.
Let \(\Phi\) be an IFS with attractor \(K\), and assume that \(\Phi\) satisfies the strong open set condition with an open set \(V\). A trivial point \(x\in K\) is called an _inner trivial point_ of \(K\) if \(x\in V\). Following Zhang and Huang [34], a clopen set (i.e., a closed and open set) \(F\) of \(\Lambda\) is called an _island_ of \(\Lambda\) if \(F\cap\partial\left([0,1]^{d}\right)=\emptyset\). Obviously, an island \(F\) is a union of several \(k\)-th cylinders for \(k\) large enough.
Zhang and Huang [34] proved that if \(\Phi\) is an IFS satisfying the strong open set condition, then its attractor \(K\) possesses inner trivial points if and only if \(K\) admits islands.
**Lemma 4.4**.: _Let \(\Lambda\) be a non-degenerated diagonal self-affine sponge of Lalley-Gatzouras type. Then \(\Lambda\) has inner trivial points if and only if there exist \(\delta>0\) and a \(\delta\)-connected component \(X\) of \(\Lambda\) such that \(X\cap\partial[0,1]^{d}=\emptyset\)._
Proof.: Clearly \(V=(0,1)^{d}\) is an open set fulfilling the open set condition, and since \(\Lambda\) is non-degenerated, \(\Lambda\) satisfies the strong open set condition with this \(V\). Hence, by the general result of [34], \(\Lambda\) admits inner trivial points if and only if \(\Lambda\) admits islands.
Suppose \(\Lambda\) admits an island \(U\). Let \(d_{0}\) be the distance between \(U\) and \(\Lambda\setminus U\), and let \(\delta<d_{0}/3\). Let \(X\) be a \(\delta\)-connected component intersecting \(U\). Then clearly \(X\) does not intersect \(\Lambda\setminus U\), and hence does not intersect \(\partial[0,1]^{d}\).
On the other hand, suppose that there exists \(\delta>0\) and a \(\delta\)-connected component \(X\) of \(\Lambda\) such that \(X\cap\partial[0,1]^{d}=\emptyset\). Let \(m\) be an integer large enough so that every \(m\)-th cylinder has diameter smaller than \(\delta\). Then the union of the \(m\)-th cylinders intersecting \(X\) forms an island of \(\Lambda\).
## 5. **The case that \(\Lambda_{j}\) contains no inner trivial points for some \(j\)**
In this section, we always assume that \(\Lambda\) is a non-degenerated self-affine sponge of Lalley-Gatzouras type.
**Theorem 5.1**.: _Suppose \(\Lambda\) does not contain inner trivial points. Let \(\tau\in(1,+\infty)\). Then there exist \(0<\chi<\dim_{B}\Lambda\) and \(C_{1}>0\) such that for any cylinder \(W\) of \(\Lambda\),_
\[h_{W}\left(\delta\right)\leq C_{1}\mu(W)\delta^{-\chi},\quad\text{ for }\delta\leq(r_{*}S(W))^{\tau}. \tag{5.1}\]
_Consequently, \(\Lambda\) does not satisfy the maximal power law._
Proof.: Let \(W=\Lambda_{I}\) be a cylinder and let \(\delta<S(W)\). Let \(m\) be the integer such that
\[(r_{*})^{m+1}\leq\frac{\delta}{S(W)}<(r_{*})^{m}. \tag{5.2}\]
Since \(\Lambda\) does not possess inner trivial points, by Lemma 4.4, every \(\delta\)-connected component of \(\Lambda\) must intersect \(\Lambda\cap\partial[0,1]^{d}\). It follows that every \(\delta\)-connected component of \(W\) must intersect \(W\cap\varphi_{I}(\partial[0,1]^{d})\), so
\[\begin{array}{ll}h_{W}(\delta)&\leq h_{W\cap\varphi_{I}(\partial[0,1]^{d})}(\delta)\\ &\leq 3^{d}N_{W}^{b}(\delta)\qquad\qquad\qquad\text{(By Lemma 4.1 (i).)}\\ &\leq 2d3^{d}M_{0}Q^{m}\mu(W)\delta^{-\alpha}.\qquad\text{(By Lemma 4.3.)}\end{array}\]
By the left side inequality of (5.2), we have that for
\[\delta\leq(r_{*}S(W))^{\tau}, \tag{5.3}\]
it holds that
\[Q^{m}\leq\delta^{s},\]
where \(s=(1-\frac{1}{\tau})\frac{\log Q}{\log r_{*}}.\) Hence, the theorem holds by setting \(\chi=\alpha-s\).
From now on, we fix \(\tau_{0}\) to be the number
\[\tau_{0}=\min_{j=1,\ldots,d-1}\frac{\log\max\{\phi_{\mathbf{a},j+1}^{\prime}; \ \mathbf{a}\in\mathcal{D}\}}{\log\min\{\phi_{\mathbf{a},j}^{\prime};\ \mathbf{a}\in\mathcal{D}\}}.\]
Then \(\tau_{0}>1\) by the coordinate ordering condition; moreover, for any cylinder \(W\) of \(\Lambda\), we have that
\[S(\pi_{j+1}(W))\leq S(\pi_{j}(W))^{\tau_{0}},\quad j=1,\ldots,d-1. \tag{5.4}\]
**Lemma 5.1**.: _Suppose there exist \(0<\chi<\alpha_{d-1}\) and \(C_{1}>0\) such that for every cylinder \(W^{\prime}\) of \(\Lambda_{d-1}\), it holds that_
\[h_{W^{\prime}}\left(\delta\right)\leq C_{1}(\mu_{d-1}\left(W^{\prime}\right) \delta^{-\chi})\quad\text{ for }\delta\leq(r_{*}S(W^{\prime}))^{\tau_{0}}. \tag{5.5}\]
_Then for every cylinder \(W\) of \(\Lambda\), we have_
\[h_{W}\left(\delta\right)\leq 2^{d}{r_{*}}^{-(1+\tau_{0})d}C_{1}\mu_{d}(W) \delta^{-\eta}\quad\text{ for }\delta\leq S(W), \tag{5.6}\]
_where \(\eta=\chi+\beta_{d}\)._
Proof.: Let \(H\) be a cylinder of \(\Lambda\) and denote \(H^{\prime}=\pi_{d-1}(H)\). We claim that for \(\varepsilon<\delta/2\), it holds that
\[S(H)<\delta/2\Rightarrow h_{H}(\delta)\leq h_{H^{\prime}}\left(\varepsilon \right). \tag{5.7}\]
Pick \(x,y\in H\). If \(|\pi_{d-1}(x)-\pi_{d-1}(y)|\leq\varepsilon\leq\delta/2\), then \(|x-y|\leq\delta\) since \(S(H)\leq\delta/2\). So if \(\pi_{d-1}(x)\) and \(\pi_{d-1}(y)\) belong to the same \(\varepsilon\)-connected component of \(H^{\prime}\), then \(x\) and \(y\) belong to the same \(\delta\)-connected component of \(H\), which proves our claim.
Now we turn to prove the lemma. Pick a cylinder \(W\) of \(\Lambda\) and let \(\delta<S(W)\). Set
\[\delta^{\prime}=\delta r_{*}/2\text{ and }\varepsilon=\delta r_{*}^{1+\tau_{ 0}}/2.\]
Then for \(H\in\mathcal{V}_{\delta^{\prime}}\), we have \(\delta r_{*}/2\leq S(H)<\delta/2\). Moreover, by (5.4), we have
\[\varepsilon\leq r_{*}^{\tau_{0}}S(H)\leq(r_{*}S(H^{\prime}))^{\tau_{0}}.\]
Therefore,
\[\begin{array}{ll}h_{W}(\delta)&\leq\sum_{H\in\mathcal{V}_{\delta^{\prime}},\,H\subset W}h_{H}(\delta)\qquad\text{(By Lemma 4.1 (ii).)}\\ &\leq\sum_{H\in\mathcal{V}_{\delta^{\prime}},\,H\subset W}h_{H^{\prime}}(\varepsilon)\qquad\text{(By (5.7).)}\\ &\leq C_{1}\sum_{H\in\mathcal{V}_{\delta^{\prime}},\,H\subset W}\mu_{d-1}(H^{\prime})\,\varepsilon^{-\chi}.\qquad\text{(By (5.5).)}\end{array}\]
By (3.3) and (4.1), \(\mu_{d-1}(H^{\prime})=\mu_{d}(H)\,S(H)^{-\beta_{d}}\), and \(S(H)\geq\delta r_{*}/2\), so
\[h_{W}(\delta)\leq C_{1}\Big{(}\frac{\delta r_{*}}{2}\Big{)}^{-\beta_{d}}\Big{(}\frac{\delta r_{*}^{1+\tau_{0}}}{2}\Big{)}^{-\chi}\sum_{H\in\mathcal{V}_{\delta^{\prime}},\,H\subset W}\mu_{d}(H)\leq 2^{d}{r_{*}}^{-(1+\tau_{0})d}C_{1}\,\mu_{d}(W)\,\delta^{-\eta},\]
where we used \(\sum_{H\in\mathcal{V}_{\delta^{\prime}},\,H\subset W}\mu_{d}(H)=\mu_{d}(W)\) (Remark 3.2) together with \(\beta_{d}+\chi\leq d\). The lemma is proved.

Combining Theorem 5.1 (applied to \(\Lambda_{j}\)) with Lemma 5.1 applied repeatedly along the projections \(\Lambda_{j},\Lambda_{j+1},\ldots,\Lambda_{d}=\Lambda\), we obtain:

**Corollary 5.1**.: _Let \(\Lambda\) be a non-degenerated Lalley-Gatzouras sponge and let \(\mu\) be its canonical Bernoulli measure. If there exists \(1\leq j\leq d\) such that \(\Lambda_{j}\) contains no inner trivial points, then there exists \(0<\chi<\dim_{B}\Lambda\) such that for any cylinder \(R\) of \(\Lambda\),_
\[h_{R}\left(\delta\right)=O\left(\mu\left(R\right)\delta^{-\chi}\right),\ \ \delta\to 0.\]
_In particular, \(\Lambda\) does not satisfy the maximal power law._

## 6. **Proofs of Theorem 1.2 and Theorem 1.3**

In this section, we always assume that \(\Lambda\) is a non-degenerated Lalley-Gatzouras sponge and that \(\mu=\mu_{d}\) is its canonical Bernoulli measure. For a cylinder \(W=\varphi_{I}(\Lambda)\), a \(\delta\)-connected component of \(W\) is called a _boundary component_ if it intersects \(\varphi_{I}(\partial[0,1]^{d})\), and an _inner component_ otherwise. We denote by \(h_{W}^{b}(\delta)\) and \(h_{W}^{i}(\delta)\) the number of boundary and inner \(\delta\)-connected components of \(W\), so that \(h_{W}(\delta)=h_{W}^{b}(\delta)+h_{W}^{i}(\delta)\); the same notations are used for the projections \(\Lambda_{j}\) and their cylinders.
**Lemma 6.1**.: _For any \(\kappa>0\), there exists an integer \(m_{0}\geq 1\) such that for any cylinder \(W\) of \(\Lambda\) and any \(\delta<(r_{*})^{m_{0}}S(W)\), it holds that_
\[h_{W}^{b}\left(\delta\right)\leq\kappa\mu\left(W\right)\delta^{-\dim_{B}\Lambda}.\]
Proof.: Denote \(\alpha=\dim_{B}\Lambda\). By Proposition 1.3, there exists \(M_{0}>0\) such that for any cylinder \(H\) of \(\Lambda\),
\[M_{0}^{-1}\mu(H)\delta^{-\alpha}\leq N_{H}(\delta)\leq M_{0}\mu(H)\delta^{- \alpha}\quad\text{ for }\delta<S(H).\]
Let \(W=\Lambda_{I}\) be a \(k\)-th cylinder of \(\Lambda\) and let \(m\geq 1\). Let \(\mathcal{B}(W,m)\) be the collection of \((k+m)\)-th cylinders \(\Lambda_{IJ}\subset W\), \(J\in\mathcal{D}^{m}\), whose corresponding pillars \(\varphi_{IJ}([0,1]^{d})\) intersect \(\varphi_{I}(\partial\left[0,1\right]^{d})\). Then
\[\cup\mathcal{B}(W,m)=\varphi_{I}(\Lambda_{m}^{b}):=W^{*},\]
where \(\Lambda_{m}^{b}\) is defined by (4.4). By Lemma 4.2, we have
\[\mu(W^{*})=\mu(\Lambda_{m}^{b})\mu(W)\leq 2dQ^{m}\mu(W).\]
Let \(m_{0}\) be an integer such that \(3^{d}M_{0}(2d)Q^{m_{0}}<\kappa\). Let \(\delta<(r_{*})^{m_{0}}S(W)\); in this case if \(U\) is a boundary \(\delta\)-connected component of \(W\), then \(U\) must intersect \(W^{*}\). Therefore,
\[\begin{array}{ll}h_{W}^{b}(\delta)&\leq h_{W^{*}}(\delta)\leq\sum_{H\in\mathcal{B}(W,m_{0})}h_{H}(\delta)\\ &\leq 3^{d}\sum_{H\in\mathcal{B}(W,m_{0})}N_{H}(\delta)\qquad\text{(By Lemma 4.1 (i).)}\\ &\leq 3^{d}\sum_{H\in\mathcal{B}(W,m_{0})}M_{0}\mu(H)\delta^{-\alpha}\\ &\leq 3^{d}M_{0}\mu(W^{*})\delta^{-\alpha}\\ &\leq\kappa\mu(W)\delta^{-\alpha},\end{array}\]
which proves the lemma.
**Theorem 6.1**.: _If \(\mu_{d-1}\) is a component-counting measure of \(\Lambda_{d-1}\) and \(\Lambda\) possesses inner trivial points, then \(\mu\) is a component-counting measure of \(\Lambda\)._
Proof.: Since \(\Lambda\) is non-degenerated and possesses inner trivial points, by the result of Zhang and Huang [34] recalled in Section 4, there is a clopen subset \(F\) of \(\Lambda\) such that \(F\cap\partial[0,1]^{d}=\emptyset\). Assume that \(F\) is the union of \(L\) many \(m_{1}\)-th cylinders. Let \(U\) be an \(m_{1}\)-th cylinder in \(F\) with maximal \(\mu\)-measure; then
\[\mu(U)\geq\mu(F)/L.\]
Let \(W\) be a \(k\)-th cylinder of \(\Lambda\) and denote \(W^{\prime}=\pi_{d-1}\left(W\right)\). Let \(\delta\leq S(W)\). Since
\[h_{W}\left(\delta\right)\leq 3^{d}N_{W}(\delta)\asymp\mu_{d}\left(W\right) \delta^{-\alpha_{d}},\]
we need only to consider the lower bound estimate of \(h_{W}(\delta)\).
That \(\mu_{d-1}\) is a component-counting measure implies that there is a constant \(M\) such that
\[h_{W^{\prime}}\left(\delta\right)\geq M\mu_{d-1}\left(W^{\prime}\right)\delta^ {-\alpha_{d-1}},\qquad\text{for all }\delta<S(W^{\prime}).\]
(We remark that we set the threshold \(\delta_{0}(W^{\prime})=S(W^{\prime})\) here.)
Let \(\kappa=M/2\) and let \(m_{0}\) be the constant in Lemma 6.1. Set \(\delta^{\prime}=\delta/(r_{*})^{m_{0}+m_{1}}\). Let \(\mathcal{V}_{\delta^{\prime}}\) be the \(\delta^{\prime}\)-blocking of \(\Lambda\).
Pick \(H\in\mathcal{V}_{\delta^{\prime}}\) and \(H\subset W\). Let \(f_{H}\) be the affine map such that \(H=f_{H}(\Lambda)\), then \(H_{F}:=f_{H}(F)\) is an island of \(H\) in the sense that \(f_{H}(F)\) is a clopen subset of \(H\), and \(f_{H}(F)\) does not intersect \(f_{H}(\partial[0,1]^{d})\). Clearly
\[\mu(H_{F})=\mu(F)\mu(H).\]
Now we estimate \(h_{H_{F}}(\delta)\). Denote
\[U_{H}=f_{H}(U)\text{ and }U_{H}^{\prime}=\pi_{d-1}(U_{H}).\]
First, since \(\pi_{d-1}\) is contractive, by Lemma 4.1(iii), we have
\[h_{U_{H}}(\delta)\geq h_{U_{H}^{\prime}}(\delta).\]
Notice that
\[\delta=(r_{*})^{m_{0}+m_{1}}\delta^{\prime}\leq(r_{*})^{m_{0}+m_{1}}S(H)\leq (r_{*})^{m_{0}}S(U_{H})\leq(r_{*})^{m_{0}}S(U_{H}^{\prime}).\]
On one hand, since \(\mu_{d-1}\) is a component-counting measure, we have
\[h_{U_{H}^{\prime}}(\delta)\geq M\mu_{d-1}(U_{H}^{\prime})\delta^{-\alpha_{d-1 }}. \tag{6.1}\]
On the other hand, by Lemma 6.1, we have
\[h_{U_{H}^{\prime}}^{b}(\delta)\leq\kappa\mu_{d-1}(U_{H}^{\prime})\delta^{- \alpha_{d-1}}=\frac{M}{2}\mu_{d-1}(U_{H}^{\prime})\delta^{-\alpha_{d-1}}.\]
This, together with (6.1), implies that
\[h_{U_{H}^{\prime}}^{i}(\delta)\geq\frac{M}{2}\mu_{d-1}(U_{H}^{\prime})\delta^{ -\alpha_{d-1}}. \tag{6.2}\]
First, we show that \(h_{H_{F}}(\delta)\) is no less than the left hand side of (6.2):
\[h_{H_{F}}(\delta)\geq h_{\pi_{d-1}(H_{F})}(\delta)\geq h_{U_{H}^{\prime}}^{i} (\delta). \tag{6.3}\]
Secondly, we show that \(\mu(H)\) is comparable with the right hand side of (6.2). Since
\[\delta=(r_{*})^{m_{0}+m_{1}}\delta^{\prime}\geq(r_{*})^{m_{0}+m_{1}+1}S(H) \geq(r_{*})^{m_{0}+m_{1}+1}S(U_{H}), \tag{6.4}\]
we have
\[\begin{array}{ll}\mu_{d-1}(U_{H}^{\prime})\,\delta^{-\alpha_{d-1}}&=\mu(U_{H})\left(\frac{\delta}{S(U_{H})}\right)^{\beta_{d}}\delta^{-\alpha_{d}}\\ &\geq\mu(U_{H})\,r_{*}^{(m_{0}+m_{1}+1)}\,\delta^{-\alpha_{d}}\qquad\text{(By (6.4).)}\\ &\geq\frac{\mu(F)}{L}\,r_{*}^{(m_{0}+m_{1}+1)}\,\mu(H)\,\delta^{-\alpha_{d}},\end{array} \tag{6.5}\]
where the second line also uses \(\beta_{d}\leq 1\), and the last inequality uses \(\mu(U_{H})=\mu(U)\mu(H)\) and \(\mu(U)\geq\mu(F)/L\). Combining (6.2), (6.3) and (6.5), each \(H\in\mathcal{V}_{\delta^{\prime}}\) with \(H\subset W\) satisfies \(h_{H_{F}}(\delta)\geq\frac{M\mu(F)}{2L}\,r_{*}^{(m_{0}+m_{1}+1)}\,\mu(H)\,\delta^{-\alpha_{d}}\). Summing these contributions over all \(H\in\mathcal{V}_{\delta^{\prime}}\) with \(H\subset W\) and using Remark 3.2, we obtain
\[h_{W}(\delta)\geq\frac{M\mu(F)\,r_{*}^{(m_{0}+m_{1}+1)}}{2L}\,\mu(W)\,\delta^{-\alpha_{d}}.\]
This proves that \(\mu\) is a component-counting measure and the threshold \(\delta_{0}(W)\) can be set to be \(S(W)\).
Proof of Theorem 1.2.: The second assertion in the theorem is proved in Corollary 5.1. In the following, we show that (i)\(\Rightarrow\)(ii) \(\Rightarrow\) (iii)\(\Rightarrow\) (i).
(i)\(\Rightarrow\)(ii) is trivial, see Remark 1.2.
(ii)\(\Rightarrow\)(iii) holds by Corollary 5.1.
(iii)\(\Rightarrow\)(i): First, \(\mu_{1}\) is a component-counting measure of \(\Lambda_{1}=\pi_{1}(\Lambda)\) by Theorem 1.1. Then by Theorem 6.1 and induction, we conclude that \(\mu_{j}\) is a component-counting measure of \(\Lambda_{j}\) for each \(1\leq j\leq d\).
That (iii)\(\Leftrightarrow\)(iv) is obvious. The theorem is proved.
Proof of Theorem 1.3.: By Theorem 1.2, if \(\Lambda\) does not satisfy the maximal power law, then there exists \(1\leq j\leq d\) such that \(\Lambda_{j}\) contains no inner trivial points. Now the theorem is a direct consequence of Corollary 5.1.
|
2306.01501 | A Note on BKP for the Kontsevich Matrix Model with Arbitrary Potential | We exhibit the Kontsevich matrix model with arbitrary potential as a BKP
tau-function with respect to polynomial deformations of the potential. The
result can be equivalently formulated in terms of Cartan-Pl\"ucker relations of
certain averages of Schur $Q$-function. The extension of a Pfaffian integration
identity of de Bruijn to singular kernels is instrumental in the derivation of
the result. | Gaëtan Borot, Raimar Wulkenhaar | 2023-06-02T12:49:48Z | http://arxiv.org/abs/2306.01501v2 | # A short note on BKP for the Kontsevich matrix model with arbitrary potential
###### Abstract.
We exhibit the Kontsevich matrix model with arbitrary potential as a BKP tau function with respect to polynomial deformations of the potential.
Key words and phrases: BKP hierarchy, matrix models, topological recursion. 2010 Mathematics Subject Classification: 37K10, 37K20, 15A15.
## 1. The formula
Let \(\mathcal{H}_{N}\) be the space of hermitian \(N\times N\) matrices equipped with the Lebesgue measure
\[\mathrm{d}H=\prod_{i=1}^{N}\mathrm{d}H_{ii}\prod_{1\leq i<j\leq N}\mathrm{dRe}( H_{ij})\,\mathrm{dIm}(H_{ij}).\]
Given a positive matrix \(\Lambda=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{N})\), we introduce the Gaussian probability measure on \(\mathcal{H}_{N}\)
\[\mathrm{d}\mathbb{P}(H)=\frac{\sqrt{\Delta(\boldsymbol{\lambda},\boldsymbol{ \lambda})}}{2^{\frac{N}{2}}(2\pi)^{\frac{N^{2}}{2}}}\,\mathrm{d}H\,e^{-\frac{ 1}{2}\mathrm{Tr}(\Lambda H^{2})}. \tag{1.1}\]
We use the notations \(\Delta(\boldsymbol{\lambda})=\prod_{1\leq i<j\leq N}(\lambda_{j}-\lambda_{i})\) and \(\Delta(\boldsymbol{\lambda},\boldsymbol{\mu})=\prod_{1\leq i,j\leq N}(\lambda_{ i}+\mu_{j})\). We denote \([n]=\{1,\ldots,n\}\) and \(\lambda_{\min}=\min\{\lambda_{i}\ |\ i\in[N]\}\).
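Indeed, writing \(\operatorname{Tr}(\Lambda H^{2})=\sum_{i=1}^{N}\lambda_{i}H_{ii}^{2}+\sum_{1\leq i<j\leq N}(\lambda_{i}+\lambda_{j})|H_{ij}|^{2}\) and performing the Gaussian integrals coordinate-wise gives
\[\int_{\mathcal{H}_{N}}\mathrm{d}H\,e^{-\frac{1}{2}\mathrm{Tr}(\Lambda H^{2})}=\prod_{i=1}^{N}\sqrt{\frac{2\pi}{\lambda_{i}}}\,\prod_{1\leq i<j\leq N}\frac{2\pi}{\lambda_{i}+\lambda_{j}}=\frac{2^{\frac{N}{2}}(2\pi)^{\frac{N^{2}}{2}}}{\sqrt{\Delta(\boldsymbol{\lambda},\boldsymbol{\lambda})}},\]
which confirms that (1.1) is a probability measure.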
Let \(V_{0}\) be a continuous function on \(\mathbb{R}\) such that the measure \(e^{-\frac{1}{2}\lambda_{\min}x^{2}+V_{0}(x)}\,\mathrm{d}x\) has finite moments on \(\mathbb{R}\) (take for instance \(V_{0}\) to be a polynomial of even degree with negative top coefficient). Then the measure \(\mathrm{d}\mathbb{P}(H)\,e^{\mathrm{Tr}\,V_{0}(H)}\) on \(\mathcal{H}_{N}\) is finite. Let
\[V_{\mathbf{t}}(x)=V_{0}(x)+\sum_{k\geq 0}t_{2k+1}\,x^{2k+1},\]
where \(\mathbf{t}=(t_{2k+1})_{k\geq 0}\) are formal parameters. The partition function of the Kontsevich model with arbitrary potential is defined by
\[Z(\mathbf{t})=\int_{\mathcal{H}_{N}}\mathrm{d}\mathbb{P}(H)\,e^{\mathrm{Tr}\, V_{\mathbf{t}}(H)}. \tag{1.2}\]
This short note presents a derivation of the following result.
**Theorem 1.1**.: _For even \(N\), we have_
\[Z(\mathbf{t})=\frac{\sqrt{\Delta(\boldsymbol{\lambda},\boldsymbol{\lambda})} }{2^{\frac{N^{2}}{2}}\,(2\pi)^{\frac{N}{2}}\,\prod_{n=1}^{N-1}n!}\,\underset{ 0\leq m,n\leq N-1}{\text{Pf}}\,\big{(}K_{m,n}(\mathbf{t})\big{)}, \tag{1.3}\]
in terms of the Hirota operators \(D_{k}(\tau,\tau)({\bf t})=(\partial_{t_{k}}-\partial_{s_{k}})\tau({\bf t})\tau({ \bf s})\big{|}_{{\bf s}={\bf t}}\). For _even_\(V_{0}\) we have \(M_{2\ell_{1}+1,\ldots,2\ell_{n}+1}=0\) for \(n\) odd, and (1.5) results in
\[0 =M_{1^{6}}+15M_{1^{4}}M_{1,1}-5M_{3,1^{3}}-15M_{3,1}M_{1,1}-5M_{3, 3}+9M_{5,1}\] \[0 =M_{1^{8}}+28M_{1^{6}}M_{1,1}+35(M_{1^{4}})^{2}+7M_{3,1^{5}}+70M_ {3,1^{3}}M_{1,1}+35M_{3,1}M_{1^{4}}-35M_{3,3}M_{1,1}\] \[\quad-70(M_{3,1})^{2}-21M_{5,1^{3}}-63M_{5,1}M_{1,1}-42M_{5,3}+90M _{7,1}. \tag{1.6}\]
Or, equivalently in terms of cumulants:
\[0 =K_{1^{6}}+30K_{1^{4}}K_{1,1}+60(K_{1,1})^{3}-5K_{3,1^{3}}-5K_{3, 3}-30K_{3,1}K_{1,1}+9K_{5,1},\] \[0 =K_{1^{8}}+56K_{1^{6}}K_{1,1}+70(K_{1^{4}})^{2}+840K_{1^{4}}(K_{ 1,1})^{2}+840(K_{1,1})^{4}+7K_{3,1^{5}}\] \[\quad+70K_{3,1}K_{1^{4}}+420K_{3,1}(K_{1,1})^{2}+140K_{3,1^{3}}K_ {1,1}-35K_{3,3}K_{1,1}-70(K_{3,1})^{2}-21K_{5,1^{3}}\] \[\quad-126K_{5,1}K_{1,1}-42K_{5,3}+90K_{7,1}.\]
When \(V_{0}\) is not even, many more terms contribute.
It is instructive to test these equation in the simplest case \(V_{0}=0\). The moments can be found1 e.g. in [16], with \(p_{k}=\operatorname{Tr}\Lambda^{-k}\)
Footnote 1: Note that in [16], equation (45) follows from substituting \(\Lambda\to\frac{1}{2}\Lambda\) in equation (44). Equation (45) is the one they use to compute moments, and the Gaussian probability measure it induces agrees with our \(\mathbb{P}\) defined in (1.1).
\[M_{1,1} = p_{1}\] \[M_{3,1} = 3p_{1}^{2}\] \[M_{1^{6}} = 15p_{1}^{3}\] \[M_{3,3} = 3p_{3}+12p_{1}^{3}\] \[M_{5,1} = 5p_{3}+10p_{1}^{3}.\]
They satisfy the first equation of (1.6), as expected.
## 2. Proof of Theorem 1.1
A hermitian matrix \(H\) decomposes as
\[H=\sum_{a=1}^{N}(\operatorname{Re}H_{a,a})\,E_{a,a}+\sum_{1\leq a<b\leq N}(\sqrt{2}\operatorname{Re}H_{a,b})\,\tfrac{1}{\sqrt{2}}(E_{a,b}+E_{b,a})+(\sqrt{2}\operatorname{Im}H_{a,b})\,\tfrac{\mathrm{i}}{\sqrt{2}}(E_{a,b}-E_{b,a}).\]
As \(E_{a,a}\), \(\tfrac{1}{\sqrt{2}}(E_{a,b}+E_{b,a})\) and \(\tfrac{\mathrm{i}}{\sqrt{2}}(E_{a,b}-E_{b,a})\) have unit norm for the standard Euclidean metric on \(\operatorname{Mat}_{N}(\mathbb{C})\cong\mathbb{R}^{2N^{2}}\), the volume form on \(\mathcal{H}_{N}\) induced by the Euclidean volume form on \(\operatorname{Mat}_{N}(\mathbb{C})\cong\mathbb{R}^{2N^{2}}\) is \(2^{\frac{N(N-1)}{2}}\,\mathrm{d}H\). Denote \(\mathcal{U}_{N}\) the unitary group and \(\mathrm{d}\nu\) its volume form induced by the Euclidean volume form in \(\operatorname{Mat}_{N}(\mathbb{C})\). The corresponding volume is
\[\operatorname{Vol}(\mathcal{U}_{N})=\frac{(2\pi)^{\frac{N(N+1)}{2}}}{\prod_{n=1 }^{N-1}n!}.\]
We also recall the Harish-Chandra-Itzykson-Zuber formula [13]
\[\frac{1}{\prod_{n=1}^{N-1}n!}\int_{\mathcal{U}_{N}}\frac{\mathrm{d}\nu(U)}{ \operatorname{Vol}(\mathcal{U}_{N})}\,e^{\operatorname{Tr}(AUBU^{\dagger})}= \frac{\det(e^{a_{i}b_{j}})}{\Delta(\boldsymbol{a})\Delta(\boldsymbol{b})},\]
where \(A=\operatorname{diag}(a_{1},\ldots,a_{N})\) and \(B=\operatorname{diag}(b_{1},\ldots,b_{N})\).
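For \(N=2\) it reduces to the familiar identity
\[\int_{\mathcal{U}_{2}}\frac{\mathrm{d}\nu(U)}{\operatorname{Vol}(\mathcal{U}_{2})}\,e^{\operatorname{Tr}(AUBU^{\dagger})}=\frac{e^{a_{1}b_{1}+a_{2}b_{2}}-e^{a_{1}b_{2}+a_{2}b_{1}}}{(a_{2}-a_{1})(b_{2}-b_{1})}.\]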
Diagonalising the matrix \(H=UXU^{\dagger}\) with \(X=\operatorname{diag}(x_{1},\ldots,x_{N})\) and \(U\in\mathcal{U}_{N}\) defined up to action of \(\mathfrak{S}_{N}\times\mathcal{U}_{1}^{N}\) brings the partition function (1.2) in the form
\[\begin{split} Z(\mathbf{t})&=\frac{\sqrt{\Delta(\boldsymbol{\lambda},\boldsymbol{\lambda})}}{2^{\frac{N}{2}}(2\pi)^{\frac{N^{2}}{2}}}\,\frac{1}{N!\,(2\pi)^{N}\,2^{\frac{N(N-1)}{2}}}\int_{\mathbb{R}^{N}}\bigg{(}\int_{\mathcal{U}_{N}}\mathrm{d}\nu(U)\,e^{-\frac{1}{2}\operatorname{Tr}(\Lambda UX^{2}U^{\dagger})}\bigg{)}\,(\Delta(\boldsymbol{x}))^{2}\,\prod_{i=1}^{N}e^{V_{\mathbf{t}}(x_{i})}\mathrm{d}x_{i}\\ &=\frac{\sqrt{\Delta(\boldsymbol{\lambda},\boldsymbol{\lambda})}}{2^{\frac{N^{2}}{2}}\,(2\pi)^{\frac{N}{2}}\,N!\,\Delta(-\boldsymbol{\lambda}/2)}\int_{\mathbb{R}^{N}}\frac{\big{(}\Delta(\boldsymbol{x})\big{)}^{2}}{\Delta(\boldsymbol{x}^{2})}\det_{1\leq i,j\leq N}\big{(}e^{-\frac{1}{2}\lambda_{i}x_{j}^{2}}\big{)}\,\prod_{i=1}^{N}e^{V_{\mathbf{t}}(x_{i})}\mathrm{d}x_{i}.\end{split} \tag{2.1}\]
Here we could use Fubini because the integrand in the first line of (2.1) is real positive, and in fact integrable due to the assumptions on \(V_{0}\) and \(\Lambda\). We observe that \(\Delta(-\boldsymbol{\lambda}/2)=(-2)^{-\frac{N(N-1)}{2}}\Delta(\boldsymbol{ \lambda})\) and recall Schur's Pfaffian identity [18], for \(N\) even
\[\frac{\big{(}\Delta(\boldsymbol{x})\big{)}^{2}}{\Delta(\boldsymbol{x}^{2})}= \prod_{1\leq i<j\leq N}\frac{x_{j}-x_{i}}{x_{j}+x_{i}}=\operatorname*{Pf}_{1 \leq i,j\leq N}\bigg{(}\frac{x_{j}-x_{i}}{x_{j}+x_{i}}\bigg{)}.\]
So, up to a prefactor, \(Z(\mathbf{t})\) is an integral of the form
\[\int_{\mathbb{R}^{N}}\operatorname*{Pf}_{1\leq i,j\leq N}\big{(}S(x_{i},x_{j} )\big{)}\det_{\begin{subarray}{c}0\leq m\leq N-1\\ 1\leq j\leq N\end{subarray}}\big{(}f_{m}(x_{j}^{2})\big{)}\prod_{i=1}^{N}\rho( x_{i})\mathrm{d}x_{i}. \tag{2.2}\]
De Bruijn's identity [8] would allow rewriting (2.2) as
\[N!\,\operatorname*{Pf}_{0\leq m,n\leq N-1}\bigg{(}\int_{\mathbb{R}^{2}}S(x,y )\,f_{m}(x^{2})f_{n}(y^{2})\,\rho(x)\rho(y)\,\mathrm{d}x\mathrm{d}y\bigg{)}, \tag{2.3}\]
but the proof in loc. cit. is solely based on algebraic manipulations, valid when \((f_{n})_{n=0}^{N-1}\) is a sequence of measurable functions on \(\mathbb{R}_{\geq 0}\) and \(S(x,y)=-S(y,x)\) is a measurable function on \(\mathbb{R}^{2}\) such that \(\int_{\mathbb{R}^{2}}\big{|}S(x,y)f_{m}(x^{2})f_{n}(y^{2})\big{|}\rho(x)\rho(y)\mathrm{d}x\mathrm{d}y<+\infty\). The choice of \(S(x,y)=\frac{x-y}{x+y}\) in general violates this integrability assumption due to the presence of the simple pole on the antidiagonal combined with the non-compactness of \(\mathbb{R}^{2}\). Nevertheless, we show that the conclusion (2.3) remains valid provided the integral in the Pfaffian is understood as a Cauchy principal value, under a Schwartz-type condition.
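Concretely, already for \(\rho(x)=e^{-x^{2}}\) and \(f_{m}\equiv 1\), the integral \(\int_{\mathbb{R}^{2}}\big{|}\frac{x-y}{x+y}\big{|}\,e^{-x^{2}-y^{2}}\,\mathrm{d}x\mathrm{d}y\) is infinite, since for every fixed \(x\neq 0\) the \(y\)-integral diverges logarithmically at \(y=-x\); it is this divergence that the principal value prescription below circumvents.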
**Lemma 2.1**.: _Let \(\rho>0\) be a measurable function on \(\mathbb{R}\) and \((f_{n})_{n=0}^{N-1}\) be a sequence of \(\mathcal{C}^{N-1}\)-functions on \(\mathbb{R}_{\geq 0}\) such that \(f_{m}^{(\ell)}\) is bounded by a polynomial for any \(m,\ell\in\{0,\ldots,N-1\}\). Let \(S(x,y)=\frac{\tilde{S}(x,y)}{x+y}\) where \(\tilde{S}\) is a measurable function on \(\mathbb{R}^{2}\) such that_
\[\forall k,l\in\mathbb{N},\qquad\int_{\mathbb{R}^{2}}|\tilde{S}(x,y)x^{k}y^{l}| \,\rho(x)\rho(y)\,\mathrm{d}x\mathrm{d}y<+\infty.\]
_Then, for \(N\) even_
\[\begin{split}&\int_{\mathbb{R}^{N}}\operatorname*{Pf}_{1\leq i,j\leq N}\big{(}S(x_{i},x_{j})\big{)}\det_{\begin{subarray}{c}0\leq m\leq N-1\\ 1\leq j\leq N\end{subarray}}\big{(}f_{m}(x_{j}^{2})\big{)}\prod_{i=1}^{N}\rho(x_{i})\mathrm{d}x_{i}\\ &=N!\,\operatorname*{Pf}_{0\leq m,n\leq N-1}\bigg{(}\fint_{\mathbb{R}^{2}}S(x,y)f_{m}(x^{2})f_{n}(y^{2})\,\rho(x)\rho(y)\,\mathrm{d}x\mathrm{d}y\bigg{)},\end{split} \tag{2.4}\]
_where \(\fint=\lim_{\epsilon\to 0}\int_{|x+y|\geq\epsilon}\) and the integrand in the left-hand side is integrable._
Proof.: Take \(\epsilon>0\) and set \(S_{\epsilon}(x,y)=S(x,y)\cdot\mathbf{1}_{|x+y|\geq\epsilon}\). In this situation we can use de Bruijn's formula and write
\[\begin{split}&\int_{\mathbb{R}^{N}}\underset{1\leq i,j\leq N}{ \operatorname{Pf}}\left(S_{\epsilon}(x_{i},x_{j})\right)\,\underset{\begin{subarray} {c}0\leq m\leq N-1\\ 1\leq j\leq N\end{subarray}}{\operatorname{det}}\big{(}f_{m}(x_{j}^{2})\big{)} \prod_{i=1}^{N}\rho(x_{i})\mathrm{d}x_{i}\\ &=N!\,\underset{0\leq m,n\leq N-1}{\operatorname{Pf}}\bigg{(}\int _{\mathbb{R}^{2}}S_{\epsilon}(x,y)\,f_{m}(x^{2})f_{n}(y^{2})\,\rho(x)\rho(y) \,\mathrm{d}x\mathrm{d}y\bigg{)}.\end{split} \tag{2.5}\]
The right-hand side tends to \(N!\,\operatorname{Pf}_{0\leq m,n\leq N-1}\big{(}\fint_{\mathbb{R}^{2}}S(x,y)f_{m}(x^{2})f_{n}(y^{2})\rho(x)\rho(y)\mathrm{d}x\mathrm{d}y\big{)}\) when \(\epsilon\to 0\). Call \(I_{\epsilon}(\mathbf{x})\) the integrand in the left-hand side of (2.5). We clearly have \(\lim_{\epsilon\to 0}I_{\epsilon}(\mathbf{x})=I_{0}(\mathbf{x})\) for \(\mathbf{x}\) almost everywhere in \(\mathbb{R}^{N}\). Provided we can find for \(I_{\epsilon}(\mathbf{x})\) an upper bound that is uniform in \(\epsilon\) and integrable on \(\mathbb{R}^{N}\), the lemma follows from dominated convergence.
To find such a bound, we introduce the matrix \(W(\boldsymbol{\xi})\) with entries \(\xi_{j}^{n}\) at row index \(n\in\{0,\ldots,N-1\}\) and column index \(j\in[N]\), which satisfies \(\Delta(\boldsymbol{\xi})=\det W(\boldsymbol{\xi})\). Its inverse matrix is
\[(W(\boldsymbol{\xi})^{-1})_{i,n}=\frac{(-1)^{N-n-1}\,e_{N-n-1}(\boldsymbol{\xi }_{[i]})}{\prod_{j\neq i}(\xi_{i}-\xi_{j})},\]
where \(e_{k}\) is the \(k\)-th elementary symmetric polynomial and \(\boldsymbol{\xi}_{[i]}=(\xi_{1},\ldots,\widehat{\xi_{i}},\ldots,\xi_{N})\). Then
\[\begin{split}&\underset{\begin{subarray}{c}0\leq m\leq N-1\\ 1\leq j\leq N\end{subarray}}{\operatorname{det}}\left(f_{m}(x_{j}^{2})\right)= \Delta(\boldsymbol{x}^{2})\,\underset{\begin{subarray}{c}1\leq i\leq N\\ 0\leq n\leq N-1\end{subarray}}{\operatorname{det}}\left(f_{n}(x_{i}^{2})\right) \,\cdot\,\det\big{(}W(\boldsymbol{x}^{2})^{-1}\big{)}\\ &=\Delta(\boldsymbol{x}^{2})\,\underset{0\leq m,n\leq N-1}{ \operatorname{det}}\bigg{(}\sum_{i=1}^{N}\frac{(-1)^{N-m-1}\,f_{n}(x_{i}^{2}) \,e_{N-m-1}(\boldsymbol{x}_{[i]}^{2})}{\prod_{j\neq i}(x_{i}^{2}-x_{j}^{2})} \bigg{)}.\end{split} \tag{2.6}\]
Up to a sign that we can take out of the determinant, the \((m,n)\)-entry inside the determinant is
\[\begin{split}[u^{N-m-1}]\,\,\sum_{i=1}^{N}f_{n}(x_{i}^{2})\prod_ {j\neq i}\frac{1+ux_{j}^{2}}{x_{i}^{2}-x_{j}^{2}}&=[u^{N-m-1}]\, \prod_{i=1}^{N}(1+ux_{i}^{2})\bigg{(}\sum_{i=1}^{N}\frac{f_{n}(x_{i}^{2})}{1+ux _{i}^{2}}\,\frac{1}{\prod_{j\neq i}(x_{i}^{2}-x_{j}^{2})}\bigg{)}\\ &=\sum_{k=0}^{N-m-1}e_{N-m-1-k}(\boldsymbol{x}^{2})\bigg{(}\sum_{i =1}^{N}\frac{(-1)^{k}x_{i}^{2k}f_{n}(x_{i}^{2})}{\prod_{j\neq i}(x_{i}^{2}-x_ {j}^{2})}\bigg{)}.\end{split}\]
In the above two steps, \([u^{m}]\) acting on the formal power series in \(u\) to its right means extracting the coefficient of \(u^{m}\). Up to the use of squared variables, we recognize the divided difference
\[g[\xi_{1},\ldots,\xi_{N}]:=\sum_{i=1}^{N}\frac{g(\xi_{i})}{\prod_{j\neq i}(\xi _{i}-\xi_{j})}.\]
When \(g\) is \(\mathcal{C}^{N-1}\), it can be written (see e.g. [12, Theorem 2, p.250]) as an integral over the \((N-1)\)-dimensional simplex \(\Delta_{N-1}=\{p\in[0,1]^{N}\,|\,p_{1}+\cdots+p_{N}=1\}\), equipped with the volume form \(\mathrm{d}\sigma(\boldsymbol{p})=\mathrm{d}p_{1}\cdots\mathrm{d}p_{N-1}\):
\[g[\xi_{1},\ldots,\xi_{N}]=\int_{\Delta_{N-1}}g^{(N-1)}(p_{1}\xi_{1}+\cdots+p_{ N}\xi_{N})\,\mathrm{d}\sigma(\boldsymbol{p}).\]
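For instance, for \(N=2\) this reduces to the elementary identity
\[g[\xi_{1},\xi_{2}]=\frac{g(\xi_{1})-g(\xi_{2})}{\xi_{1}-\xi_{2}}=\int_{0}^{1}g'\big(p\,\xi_{1}+(1-p)\,\xi_{2}\big)\,\mathrm{d}p,\]
which already illustrates why this representation yields bounds involving only derivatives of \(g\) evaluated inside the convex hull of the \(\xi_{i}\).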
We use this for \(g_{k,n}(\xi)=(-1)^{k}\xi^{k}f_{n}(\xi)\). Inserting the integral representation in (2.6) yields
\[\begin{split}&\big{|}I_{\epsilon}(\mathbf{x})\big{|}=\big{|} \Delta(\boldsymbol{x}^{2})\big{|}\underset{1\leq i,j\leq N}{\operatorname{ Pf}}\big{(}S_{\epsilon}(x_{i},x_{j})\big{)}\\ &\quad\times\bigg{|}\,\underset{0\leq m,n\leq N-1}{\operatorname{det}} \bigg{(}\sum_{k=0}^{N-1-m}e_{N-1-m-k}(\boldsymbol{x}^{2})\int_{\Delta_{N-1}}g _{k,n}^{(N-1)}(p_{1}x_{1}^{2}+\cdots+p_{N}x_{N}^{2})\mathrm{d}\sigma( \boldsymbol{p})\bigg{)}\bigg{|}\prod_{i=1}^{N}\rho(x_{i}).\end{split}\]
Since \(|\Delta(\mathbf{x}^{2})|\) cancels the denominators in \(S_{\epsilon}\), the first line of the right-hand side admits an upper bound by sum of terms, each of which is a polynomial in \(\mathbf{x}\) multiplied by \(\prod_{\{i,j\}\in\mathcal{P}}|\tilde{S}(x_{i},x_{j})|\), where \(\mathcal{P}\) is a partition of \([N]\) into pairs. In the second line, we first expand the determinant inside the absolute value and use the triangular inequality to get an upper bound by a sum of finitely many positive terms, each of which involves an \(N\)-fold product of simplex integrals of functions with at most polynomial growth, since the derivatives \(f_{n}^{(\ell)}\) (and thus \(g_{k,n}^{(N-1)}\)) have at most polynomial growth. Therefore, they result in a polynomial upper bound in the variable \(\mathbf{x}\). We are thus left with an upper bound by a sum of finitely many terms of the form:
\[\prod_{\{i,j\}\in\mathcal{P}}\big{|}\tilde{S}(x_{i},x_{j})\big{|}\prod_{i=1}^{ N}x_{i}^{q_{i}}\rho(x_{i})=\prod_{\{i,j\}\in\mathcal{P}}\big{|}\tilde{S}(x_{i},x_ {j})\big{|}\,x_{i}^{q_{i}}x_{j}^{q_{j}}\rho(x_{i})\rho(x_{j})\]
for various \(N\)-tuples of integers \(\mathbf{q}\) and pair partitions \(\mathcal{P}\) of \([N]\). Integrating each term of this form over \(\mathbb{R}^{N}\) factorizes into a product of \(\frac{N}{2}\) two-dimensional integrals, each of them being finite by assumption. This provides the domination assumption to conclude \(\lim_{\epsilon\to 0}\int_{\mathbb{R}^{N}}I_{\epsilon}(\mathbf{x})\prod_{i=1}^{N} \mathrm{d}x_{i}=\int_{\mathbb{R}^{N}}I_{0}(\mathbf{x})\prod_{i=1}^{N}\mathrm{d}x_ {i}\) as desired.
_Remark 2.2_.: The proof can easily be adapted to obtain an analogous statement for kernels of the form \(S(x,y)=\frac{\tilde{S}(x,y)}{x-y}\), in which case one can use \(f_{n}(x)\) instead of \(f_{n}(x^{2})\).
The assumptions of Lemma 2.1 are fulfilled for
\[S(x,y)=\frac{x-y}{x+y},\qquad\rho(x)=e^{-\frac{1}{2}\lambda_{\min}x^{2}+V_{\mathbf{t}}(x)},\qquad f_{m}(\xi)=e^{-\frac{1}{2}(\lambda_{m+1}-\lambda_{\min})\xi}\]
where we stress that \(\mathbf{t}\) are formal parameters. Therefore, coming back to (2.1) and tracking the \(N\)-dependent prefactors, we arrive at the identity of formal power series in the variables \(\mathbf{t}\):
\[Z(\mathbf{t}) =\frac{(-1)^{\frac{N(N-1)}{2}}\,\sqrt{\Delta(\mathbf{\lambda},\mathbf{ \lambda})}}{2^{N}\,\pi^{\frac{N}{2}}\,\Delta(\mathbf{\lambda})}\,\underset{1\leq m,n\leq N}{\mathrm{Pf}}\big{(}L_{m,n}\big{)}\] \[L_{m,n} =\bigg{(}\,\fint_{\mathbb{R}^{2}}\frac{x-y}{x+y}\,e^{-\frac{1}{2} \lambda_{m}x^{2}-\frac{1}{2}\lambda_{n}y^{2}+V_{\mathbf{t}}(x)+V_{\mathbf{t}} (y)}\,\mathrm{d}x\mathrm{d}y\bigg{)}.\]
We would like to rewrite this formula by absorbing the denominator \(\Delta(\mathbf{\lambda})\) in the Pfaffian. Recall the transposed Vandermonde matrix \(W(\mathbf{\lambda})^{T}\), whose entries are \(W(\mathbf{\lambda})^{T}_{i,n}=\lambda_{i}^{n}\) indexed by \(i\in[N]\) and \(n\in\{0,\dots,N-1\}\). We have
\[\frac{\mathrm{Pf}(L)}{\Delta(\mathbf{\lambda})}=\frac{\mathrm{Pf}(L)}{\det W(\mathbf{ \lambda})^{T}}=\mathrm{Pf}\big{(}(W(\mathbf{\lambda})^{T})^{-1}LW(\mathbf{\lambda})^{ -1}\big{)},\]
using the identity \(\operatorname{Pf}(BAB^{T})=\det(B)\operatorname{Pf}(A)\) applied to \(B=(W(\boldsymbol{\lambda})^{T})^{-1}\). By Cramer's formula for the inverse,
\[\big((W(\boldsymbol{\lambda})^{T})^{-1}LW(\boldsymbol{\lambda})^{-1}\big)_{m,n}=v_{m}v_{n}\,\fint_{\mathbb{R}^{2}}\frac{x-y}{x+y}\,F_{m}(x)F_{n}(y)\,e^{V_{\mathbf{t}}(x)+V_{\mathbf{t}}(y)}\,\mathrm{d}x\mathrm{d}y,\]
where \(v_{n}\) are non-zero constants to be chosen later, rows and columns are indexed by \(m,n\in\{0,\dots,N-1\}\), and we introduced:
\[F_{m}(x)=\frac{1}{v_{m}}\sum_{i=1}^{N}\big{(}W(\mathbf{\lambda})^{T}\big{)}_{i,m}^{ -1}\,e^{-\frac{1}{2}\lambda_{i}x^{2}}=\frac{\det\big{(}\lambda_{i}^{0}\, \big{|}\,\lambda_{i}^{1}\,\big{|}\,\cdots\,\big{|}\,\lambda_{i}^{m-1}\,\big{|} \,e^{-\frac{1}{2}\lambda_{i}x^{2}}\,\big{|}\,\lambda_{i}^{m+1}\,\big{|}\, \cdots\,\big{|}\,\lambda_{i}^{N-1}\big{)}}{v_{m}\,\Delta(\mathbf{\lambda})}.\]
Using Taylor's formula with integral remainder at order \(N\) near \(0\), we can write
\[\begin{split} e^{-\frac{1}{2}\lambda_{i}x^{2}}&=P_{N-1} \big{(}-\tfrac{1}{2}\lambda_{i}x^{2}\big{)}+\frac{(-\tfrac{1}{2}\lambda_{i}x^{2 })^{m}}{m!}+R_{N}\big{(}-\tfrac{1}{2}\lambda_{i}x^{2}\big{)},\\ R_{N}(\xi)&=\frac{\xi^{N}}{(N-1)!}\int_{0}^{1}(1-u)^ {N-1}e^{\xi u}\mathrm{d}u\end{split} \tag{2.7}\]
for some polynomial \(P_{N-1}\) of degree at most \(N-1\) and without its term of degree \(m\) (which we wrote separately). The contribution of \(P_{N-1}\) disappears as it is a linear combination of the other columns, while the contribution of the degree \(m\) term simply retrieves the Vandermonde determinant. Hence:
\[F_{m}(x)=\frac{(-1)^{m}x^{2m}}{2^{m}\,m!\,v_{m}}+\frac{\det\left(\lambda_{i}^{ 0}\,\big{|}\,\lambda_{i}^{1}\,\big{|}\,\cdots\,\big{|}\,\lambda_{i}^{m-1}\, \big{|}\,R_{N}\big{(}-\tfrac{1}{2}\lambda_{i}x^{2}\big{)}\,\big{|}\,\lambda_{i }^{m+1}\,\big{|}\,\cdots\,\big{|}\,\lambda_{i}^{N-1}\right)}{v_{m}\,\Delta( \boldsymbol{\lambda})}.\]
We now choose \(v_{m}=\frac{(-1)^{m}}{2^{m}m!}\) to get \(F_{m}(x)=x^{2m}+O(x^{2N})\) when \(x\to 0\). Introducing the matrix
\[K_{m,n}(\mathbf{t})=\int_{\mathbb{R}^{2}}\frac{x-y}{x+y}\,F_{m}(x)F_{n}(y)\,e^ {V_{\mathbf{t}}(x)+V_{\mathbf{t}}(y)}\,\mathrm{d}x\mathrm{d}y,\]
we arrive at
\[Z_{N}(\mathbf{t}) =\frac{(-1)^{\frac{N(N-1)}{2}}\,\sqrt{\Delta(\boldsymbol{\lambda },\boldsymbol{\lambda})}\,\prod_{n=0}^{N-1}v_{n}}{2^{N}\,\pi^{\frac{N}{2}}} \operatorname*{Pf}_{0\leq m,n\leq N-1}K_{m,n}(\mathbf{t})\] \[=\frac{\sqrt{\Delta(\boldsymbol{\lambda},\boldsymbol{\lambda})}}{ 2^{\frac{N^{2}}{2}}\,(2\pi)^{\frac{N}{2}}\,\prod_{n=1}^{N-1}n!}\,\operatorname* {Pf}_{0\leq m,n\leq N-1}K_{m,n}(\mathbf{t}).\]
## 3. Discussion
For the Kontsevich model with potential \(V_{0}(x)=-\frac{\mathrm{i}}{6}x^{3}\), \(Z(\mathbf{t})\) is a KdV \(\tau\)-function with respect to the times \(s_{2n+1}=-(2n-1)!!\operatorname{Tr}\Lambda^{-(2n+1)}\)[15, 14]. With the result of Alexandrov [1], it is also a BKP \(\tau\)-function in the times \(\frac{1}{2}\mathbf{s}\). In [16, Section 4] a relation between the Kontsevich model (presented with a shift \(H\to\bar{H}-2\mathrm{i}\Lambda\)) and Schur \(Q\)-functions was discovered. We note that Schur \(Q\)-functions are a key tool in the proof of the BKP relations.
The BKP hierarchy of Theorem 1.1 is independent of the KdV/BKP structure of the Kontsevich model \(V(x)=-\frac{\mathrm{i}}{6}x^{3}\), since it rather governs the evolution under polynomial deformations of the potential (parameters \(\mathbf{t}\)), for fixed \(\Lambda\). We are mainly interested in expansion at \(\mathbf{t}=\mathbf{0}\) which amounts to quadratic BKP equations between moments. When \(V_{0}\) is even these relations simplify considerably. It still remains a meaningful problem to understand whether, for arbitrary \(V_{0}\), \(Z(\mathbf{t})\) is governed by an integrable hierarchy with respect to times \(\mathbf{s}\) related to \(\Lambda\).
Apart from \(V_{0}=0\) and \(V_{0}\) cubic, the simplest even case is \(V_{0}(x)=-\frac{cN}{4}x^{4}\) for some parameter \(c>0\), and its formal large \(N\) topological expansion has been studied in recent years [10, 19], providing strong evidence [6] that the topological expansion of the cumulants obeys the blobbed topological recursion [5], which is the general solution of abstract loop equations [4]. In [11] a recursive formula for the meromorphic differentials which are generating series of the genus \(1\) cumulants was given, and a generalisation to higher genera was outlined.
On the other hand, BKP tau functions of hypergeometric type with mild analytic assumptions are known to satisfy abstract loop equations, and thus (perhaps blobbed) topological recursion [2]. In particular, this was applied to prove the conjecture of [9] that spin Hurwitz numbers (weighted by the parity of a spin structure) satisfy topological recursion. It is plausible that a suitably normalised version of \(Z(\mathbf{t})\) is in fact a \(2\)-BKP \(\tau\)-function with respect to \(\mathbf{t}\) and \(\Lambda\), and
therefore that (blobbed or not) topological recursion could be established for it, and we shall return to this question later.
## Acknowledgements
R.W. was supported2 by the Cluster of Excellence _Mathematics Münster_ and the CRC 1442 _Geometry: Deformations and Rigidity_. Parts of this note were prepared during a workshop at the Erwin Schrödinger Institute in Vienna, whose hospitality is gratefully acknowledged.
Footnote 2: Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 427320536 - SFB 1442, as well as under Germany’s Excellence Strategy EXC 2044 390685587, Mathematics Münster: Dynamics – Geometry – Structure.
|
2302.03839 | Futuristic Variations and Analysis in Fundus Images Corresponding to
Biological Traits | A fundus image captures the rear of the eye and has been studied for
disease identification, classification, segmentation, generation, and
biological traits association using handcrafted, conventional, and deep
learning methods. In biological traits estimation, most of the studies have
been carried out for the age prediction and gender classification with
convincing results. However, the current study utilizes the cutting-edge deep
learning (DL) algorithms to estimate biological traits in terms of age and
gender together with associating traits to retinal visuals. For the traits
association, our study embeds aging as the label information into the proposed
DL model to learn knowledge about the regions affected by aging. Our proposed
DL models, named FAG-Net and FGC-Net, correspondingly estimate biological
traits (age and gender) and generate fundus images. FAG-Net can generate
multiple variants of an input fundus image given a list of ages as conditions.
Our study analyzes fundus images and their corresponding association with
biological traits, and predicts the possible spreading of ocular disease on
fundus images given age as condition to the generative model. Our proposed
models outperform the randomly selected state-of-the-art DL models. | Muhammad Hassan, Hao Zhang, Ahmed Fateh Ameen, Home Wu Zeng, Shuye Ma, Wen Liang, Dingqi Shang, Jiaming Ding, Ziheng Zhan, Tsz Kwan Lam, Ming Xu, Qiming Huang, Dongmei Wu, Can Yang Zhang, Zhou You, Awiwu Ain, Pei Wu Qin | 2023-02-08T02:17:22Z | http://arxiv.org/abs/2302.03839v1 | # Futuristic Variations and Analysis in Fundus Images Corresponding to Biological Traits
###### Abstract
A fundus image captures the rear of the eye and has been studied for disease identification, classification, segmentation, generation, and biological traits association using handcrafted, conventional, and deep learning methods. In biological traits estimation, most of the studies have been carried out for age prediction and gender classification with convincing results. However, the current study utilizes cutting-edge deep learning (DL) algorithms to estimate biological traits in terms of age and gender together with associating the traits to retinal visuals. For the traits association, our study embeds aging as the label information into the proposed DL model to learn knowledge about the regions affected by aging. Our proposed DL models, named FAG-Net and FGC-Net, correspondingly estimate biological traits (age and gender) and generate fundus images. FAG-Net can generate multiple variants of an input fundus image given a list of ages as conditions. Our study analyzes fundus images and their corresponding association with biological traits, and predicts the possible spreading of ocular disease on fundus images given age as a condition to the generative model. Our proposed models outperform the randomly selected state-of-the-art DL models.
Fundus images, Biological traits, Age, Gender, GAN, Aging effects, FAG-Net, FGC-Net.
## I Introduction
The retina is the organ that enables humans to capture visuals from the real world. The retina is a vital source for assessing distinct pathological processes and neurological complications associated with risks of mortality. The retina refers to the inner surface of the eyeball opposite the lens, including the optic disc, optic cup, macula, fovea, and blood vessels [1, 2]. Fundus images are fundus projections captured by a monocular camera on a 2D plane [3]. Fundus images play an important role in monitoring the health status of the human eye and multiple organs [4]. Analyzing fundus images and their corresponding association with biological traits can help in preventing eye diseases and in early diagnosis. The retina allows us to visualize both vascular and neural tissues in a non-invasive way and to examine neurological complications. The strong association of the retina with physiology and vitality may lead to a deeper association with biological traits, such as age and gender. Biological traits can be determined by genes, environmental factors, or a combination of both, and can either be qualitative (such as gender or skin color) or quantitative (such as age or blood pressure) [5]. Biological traits are relevant to a variety of systemic and ocular diseases in individuals [6]; for instance, females are expected to have longer life expectancies than males in similar living environments [7, 8, 9]. With increasing age, women with reduced estrogen production are predisposed to develop degenerative eye diseases, including cataracts and age-related macular degeneration [10, 11, 12]. In contrast, males are more likely to suffer from pigment dispersion glaucoma [13], open-angle glaucoma [14], and diabetic retinopathy [15]. Associating biological traits with fundus images is a challenging task in clinical practice, where experts in the field cannot distinguish male from female fundus images or read aging information from them by visual inspection. Thus, this study utilizes deep learning (DL) algorithms to estimate biological traits and their association with the generated fundus images.
The fundus or retinal images have been studied for classification, disease identification, and analysis using conventional machine learning (ML) to recent DL methods [16, 17]. However, much of the work has been focused on 'feature engineering', which involves computing explicit features specified by experts. On the contrary, DL has been characterized by multiple computational layers that allow an algorithm to learn the appropriate predictive features based on examples. The DL algorithms have been utilized for the classification and detection of different eye diseases, such as diabetic retinopathy and melanoma, with human comparable results. In the conventional ML approaches, the relationship between retinal morphology and systemic health has been extrapolated using multivariable regression. However, such methods show limited ability for large size and complex datasets [18, 19]. Thus, the DL algorithms avoid manual feature engineering, tuning, and made it possible of extracting hidden features which were previously unexplored by the conventional methods. The DL models have shown significant results for previous challenging tasks. The harnessing of DL power innovatively associated the retinal structure and pathophysiology. The DL models can
extract independent features unknown to clinicians; however, they may face challenges of explainability and interpretability, which have been attempted to address in the existing work [20]. The DL approaches to fundus image analysis are receiving popularity featuring easy implementation and high efficiency [21]. It has been extrapolated that DL models can capture subtle pixel-level information in terms of luminous and contrast which humans may not differentiate. These findings underscore the promising ability of DL models hidden to humans and can be employed in medical imaging with high efficacy in clinical practices [22].
In clinical studies, experts in the field are unaware of the subjects' discrimination based on their fundus images which emphasis on the importance of employing DL models. The cause and effect of demographic information in fundus images are not readily apparent to domain experts. On the contrary, DL models may enable data-driven algorithms to discover of novel approaches to disease biomarkers identification and biological traits association. Therefore, the ophthalmoscope has been deeply associated with systemic indices of biological traits (such as aging and gender) and diseases. In previous studies, age has been estimated from distinct clinical images, such as age prediction from brain MRI, facial images, and neuroimaging using machine learning and deep learning [23, 24, 25, 26]. For instance, brain MRI and facial images have been used for age prediction emphasizing on the potential of traits estimation from fundus images [23, 24, 25, 27]. The excellent performance in age prediction implies that fast, safe, cost-effective, and user-friendly deep learning models can be possible in a larger population. In addition to the aging association, fundus images have also been associated with sex by applying logistic regression on several features [28]. These features include papillomacular angle, retinal vessel angles, and retinal artery trajectory. Various studies have shown retinal morphology differences between the sexes, including retinal and choroidal thickness [29, 30]. The study [22] reported fovea as an important region for gender classification. The prediction of gender became possible, which was an inconvenient job for the ophthalmologist who spent the whole career at retina [31]. Thus, results for the age and gender estimation may assist investigating physiological variations on fundus images corresponding to biological traits [16]. The estimation of age and gender classification may not be clinically inevitable, but the study of age progression based on biological traits learning hints the potential application of DL in discovering novel associations between traits and fundus images. The DL models implementation uncovers additional features from fundus images results in better biological traits association [32].
The successful estimation of age and gender encourages studying age progression effects and evaluating aging status via fundus images. In the study of [16], aging effects were investigated while associating cardiovascular risk factors with fundus images. Similarly, large DL models were used for classification and association of fundus images with physiological traits dependent on patients' health [16]. The existing algorithms mainly consider the optic disc's features for gender prediction, consistent with the observations of Poplin [16]. In Poplin's work, large deep learning models were used to classify sex and other physiological and behavioral traits that were associated with patient health based on fundus images. Similarly, fundus (retinal) images were closely related to age and gender traits by allowing the definition of the 'retinal age gap', which is a potential biomarker for aging and risk of mortality [33].
The variational effects of age progression can be visualized in distinct ways, including saliency maps or heat maps in fundus images that were difficult to be observed by ophthalmologists. The differential visualization in fundus images can also be used to distinguish male and female subjects. After the successful classification of gender trait from fundus images [34], our proposed model (FAG-Net) emphasizes on optic disc area and learned features while training and learning the association corresponding to aging. The optic disc was also considered the main structure to train our deep learning approaches. Similarly, the second proposed model (FGC-Net) utilizes the learned knowledge to generate different fundus images given a single fed fundus with a list of ages as labels (conditions). FGC-Net can be used to predict the possible spreading of eye diseases and early diagnosis. To carry out this, firstly, we trained and successfully evaluated a DL model (FAG-Net) for the trait effects in terms of age and gender estimation. Secondly, we proposed a DL model (FGC-Net) to learn aging effects and embed these effects for the generation of multiple fundus versions to highlight the possible disease spreading. The corresponding multiple generated versions are subtracted accordingly to demonstrate the learning effects with age progression and disease projections. The rest of the manuscript is organized as follows: Section-2 outlines the existing works, Section-3 demonstrates methods, Section-4 illustrates and analyzes results, and Section-5 concludes the study with future directions.
## II Literature study
In previous studies, age and gender have been estimated from distinct imaging modalities, such as brain MRI, facial images, and neuroimaging, using machine learning and deep learning [23, 24, 25, 26]. Brain MRI and facial images have been used for age prediction, emphasizing the potential of traits estimation from fundus images [23, 24, 25, 27]. In the work by Poplin [16], large deep learning models were used to classify sex and other physiological and behavioral traits that were associated with patient health based on retinal fundus images. There are a number of studies in which fundus images have been used for age prediction and gender classification using machine learning [22, 23, 24, 25, 27, 28, 29, 30]. Most of them estimated age and gender either from healthy or from unhealthy subjects. In contrast, the current study includes both healthy and unhealthy subjects for associating age and gender with fundus images. For age and gender prediction, conventional to recent deep-learning-based algorithms have been employed [16, 21, 22, 35]. To our knowledge, none of them attempted to model age progression effects besides age prediction and gender classification.
Clinicians are currently unaware of the distinct retinal feature varying between males and females, highlighting the
importance of deep learning and model explainability. Automated machine learning (AutoML) may enable clinician-driven automated discovery of novel insights and disease biomarkers. Gender was classified in the study of [22], in which the area under the receiver operating characteristic curve of the code-free deep learning model was 0.93. The study [36] estimated biological age from a collected dataset with MAE = 3.67 and cumulative score = 0.39 [37]. The study [38] developed CNN age and sex prediction models from normal participants without hypertension, diabetes mellitus (DM), or any smoking history. However, our proposed model (FAG-Net) achieves better results in the majority of the evaluation metrics compared to the existing models for both healthy and unhealthy subjects, as tabulated in Tables II and III.
The ML algorithms are widely applied in analyzing biological traits with different imaging modalities. In the conventional biological traits estimation, the study [39] proposed a trait tissue association mapping human biological traits and diseases. The study [40] estimated the age of the subjects using PCA [41] for dimension reduction and relevance vector machine [42] with significant score. Similarly, study [43] used neural network and support vector machine to analyze five anatomical features, which resulted in high prediction accuracy. According to the study of [16], machine learning has been leveraged for many years for a variety of classification tasks, including the automated classification of eye disease. However, much of the work has focused on 'feature engineering', which involves computing explicit features specified by experts.
The relationship between retinal morphology and systemic health has been extrapolated using multivariable regression like conventional approaches. However, such methods show limited ability for large size and complex datasets [18, 19]. Thus, the advances in automatic algorithm into DL avoids manual feature engineering and made extracting hidden features possible, which were previously unexplored. The DL models have shown significant results for previous challenging tasks. The harnessing of DL power is innovatively associated with the retinal structure and pathophysiology. DL models extract independent features unknown to clinicians; however, face challenges of explainability and interpretability which have been attempted to address by a neuro-symbolic learning study [20]. Deep learning has been applied in different domains specifically in diseases diagnosing, such as melanoma and diabetic retinopathy, and achieved comparable accuracy to that of human experts [44].
Deep learning approaches to automated retinal image analysis are gaining popularity for their relative ease of implementation and high efficacy [21]. It has been reported that DL models capture subtle pixel-level luminance variations, which are likely indistinguishable to humans. Such findings underscore the promising ability of deep neural networks to utilize salient features in medical imaging which may remain hidden to human experts [22]. Deep learning has shown great strength in medical image analysis; most importantly, biological traits such as age and gender have been successfully predicted with an area under the curve (AUC) score of 0.97 [22]. Yamashita performed logistic regression on several features that were identified to be associated with sex [28]. These features include the papillomacular angle, retinal vessel angles, and retinal artery trajectory. Various studies have shown retinal morphology differences between the sexes, including retinal and choroidal thickness [29, 30]. In previous studies, age has been estimated from clinical images via machine learning and deep learning [23, 24, 25, 26]. The excellent performance in age prediction implies that fast, safe, cost-effective, and user-friendly deep learning models are possible in a larger population. In comparison to the state-of-the-art works, our study aims to show the continuous effect of age progression besides age estimation and gender classification.
## III Methodology
### _Biological Traits Estimation using FAG-Net Architecture_
For age prediction and gender classification, we borrowed the concept of biological traits estimation from the ShoeNet model [45]. The ShoeNet model has been used for age estimation and gender classification from pairwise shoeprints. However, the available fundus image datasets are rarely organized in pairwise form (left- and right-eye images). Thus, the model needs special adaptation to be utilized for biological traits estimation in this setting. Therefore, we propose a model for fundus-image-based age and gender estimation (FAG-Net) (Figure 1). The model is composed of six blocks, where blocks 1, 2, and 6 contain a spatial attention block (SAB) while the rest of the blocks are exempted from SAB. The first block receives input fundus images with dimensions of 512\(\times\)512\(\times\)3 (width\(\times\)height\(\times\)channels). The three-channel input fundus image first passes through a stack of convolutional layers with a given number of filters (32) and kernel size (3). The SAB has been added to focus on the salient spatial regions.
Attention mechanisms have received great attention recently due to their significant performance in the literature [46]. In practice, both channel-wise (CA) and spatial-wise (SA) attention have been employed, in channel-first order. However, we only applied SA, which focuses along the spatial dimension. In SA, average pooling and max pooling are applied in parallel to the input and concatenated. A 2D attention map is then generated over all spatial locations with a large filter size (i.e., \(K=5\)). The convolutional output is normalized by a non-linear sigmoid function. Finally, the normalized map and the direct connection are merged with element-wise multiplication to produce the attention-based output. Both average and max pooling are used in SA to balance the selection of salient features (max pool) and global statistics (average pool). The embedding of the attention mechanism in FAG-Net focuses on regions of interest vulnerable to aging effects. The output from the SAB passes through batch-normalization (BN) and rectified linear unit (ReLu) functions; therefore, each block ends with BN and ReLu functions. The input of block-2, received from block-1, passes through a stack of convolutions and an SAB, and ends with a maxpool layer. Similarly, the output of block-1 also passes as a direct connection to the convolution-maxpool (CMP) block. The convolution layer in CMP is applied to the output from block-1 with the same number of filters (64) as block-2. However, a 1\(\times\)1 kernel size is used to produce the same number of feature maps (64), and the result is passed to a maxpool operation to bring it to the same dimensions. The outputs of block-2 and the CMP block are concatenated along the third dimension and forwarded as input to block-3 and to a direct connection.
The purpose of the CMP block is to carry spatial features from the high-dimensional space to the deeper levels related to age progression. At the abstract level, this dense structure passes salient features together with those extracted from the block sequence. The number of feature maps increases and the spatial dimensions decrease as the network goes deeper. The accumulated output from all the blocks passes through a normal convolution layer with 8\(\times\)8 feature maps and 1024 filters. The final convolution layer passes the output to fully connected layers, each followed by dropout (with ratios of 0.9, 0.8, and 0.5) to avoid overfitting. The final output can be a single neuron for age prediction or two neurons for gender classification. In the case of age prediction, a linear activation function is applied to produce a regression value, whereas for gender classification a softmax layer is employed to output weighted scores for the male and female classes.
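For concreteness, the following is a minimal PyTorch sketch of the spatial attention block (SAB) described above; the choice of framework and the layer names are illustrative assumptions rather than the exact FAG-Net implementation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Sketch of the spatial attention block (SAB): channel-wise average and
    max pooling in parallel, concatenation, a large-kernel (K=5) convolution,
    sigmoid normalization, and element-wise gating of the direct connection."""
    def __init__(self, kernel_size: int = 5):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = torch.mean(x, dim=1, keepdim=True)     # global statistics
        max_pool, _ = torch.max(x, dim=1, keepdim=True)   # salient features
        attn = self.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                   # merge with the direct connection
```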
### _Objective function for FAG-Net_
The objective function used for training FAG-Net is composed of three loss terms including \(L_{1}\), \(L_{2}\), and a regression-specific custom loss function (CLF). The accumulative loss function (ALF) is the mean of all the weighted loss terms, formulated as follows:
\[ALF=(\psi*L_{1}+\psi*L_{2}+\psi*CLF)/3, \tag{1}\]
where \(\psi\) denotes the corresponding weights that balance the loss terms. The terms \(L_{1}\) and \(L_{2}\) can be formulated as follows:
\[L_{1}=\sum_{i=0}^{n-1}\big{|}A_{age}^{i}-P_{age}^{i}\big{|}, \tag{2}\]
\[L_{2}=\sum_{i=0}^{n-1}\{A_{age}^{i}-P_{age}^{i}\}^{2}, \tag{3}\]
where \(n\) is the number of samples and \(A_{age}\) and \(P_{age}\) denote the actual and predicted ages.
Furthermore, age prediction is a regression problem, and a single output will be expected as a result. Thus, a specialized custom loss function based on mean square error (MSE) is proposed to optimize the hyperparameters during training [45]. The optimizer (Adam) fine-tunes the weights of convolution filters to minimize the loss value. To produce regression specific results, CLF penalizes the out-ranged values more. It minimizes the distance between the actual and predicted age in a target-oriented way. The formulation of CLF is illustrated in the following equation:
\[\text{CLF}=\frac{\sum_{j=1}^{n}E_{i}}{n};\ E_{i}=\begin{cases}d_{i}*\varphi,& \text{if }d_{i}\leq J\\ d_{i}^{3}+\varphi,&\text{if }d_{i}>J\end{cases} \tag{4}\]
CLF is the mean of the differences \((E)\) over \(n\) samples, where \(n=(total\ samples)/(input\ size)\). \(\varphi\) is a small value (0.0001 to 0.3) used to prevent the network from attaining zero difference and to sustain the learning process. Similarly, \(d_{i}=||y-\bar{y}||\) is the absolute error between actual age \((y)\) and predicted age \((\bar{y})\). Furthermore, \(J\) is a natural number derived from MCS-\(J\) for the predictable age ranges. In the second condition (\(d_{i}>J\)), errors exceeding \(J\) cause a stronger penalization of the weights, since the loss value grows cubically (power 3). The penalization influences the optimization of network weights and biases, directing the optimizer to tune these parameters so as to minimize the difference between actual and predicted age. CLF thus not only considers the absolute error but also penalizes more heavily the values beyond \(J\) in MCS-\(J\). Adam is used as the optimizer with the L\({}_{2}\) regularizer to tune hyper-parameters following the objective function.
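A minimal PyTorch sketch of the loss terms in Equations (1)-(4) is given below; the default values of \(J\) and \(\varphi\) and the use of equal unit weights \(\psi\) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def clf(y_true: torch.Tensor, y_pred: torch.Tensor, J: float = 3.0, phi: float = 1e-4) -> torch.Tensor:
    """Custom loss function (CLF) of Eq. (4): a small linear penalty inside the
    matching window (d <= J) and a cubic penalty outside it (d > J)."""
    d = torch.abs(y_true - y_pred)
    e = torch.where(d <= J, d * phi, d.pow(3) + phi)
    return e.mean()

def alf(y_true: torch.Tensor, y_pred: torch.Tensor, J: float = 3.0, phi: float = 1e-4) -> torch.Tensor:
    """Accumulative loss function (ALF) of Eq. (1): mean of the L1 (Eq. 2),
    L2 (Eq. 3), and CLF (Eq. 4) terms, here with unit weights."""
    l1 = F.l1_loss(y_pred, y_true)
    l2 = F.mse_loss(y_pred, y_true)
    return (l1 + l2 + clf(y_true, y_pred, J, phi)) / 3.0
```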
#### III-B1 Evaluation Metrics for FAG-Net
Besides MAE and MSE as evaluation metrics for age prediction, we apply cumulative score (CS) and mean cumulative score (MCS) as evaluation metrics to accommodate the nature of the problem. CS and MCS imitate the existing studies, and are used to assess accuracies in a range of age groups. CS (or CS\({}_{j}\)) and MCS (or MCS-J) give more weight to the smaller ranges of match windows. The ranges depend on the value of \(j\) and \(J\), the absolute differences between actual and estimated age scores [45], is formulated as follows.
\[\text{MCS-}J=\frac{\sum_{j=0}^{J}CS_{j}}{J+1},\ \text{CS}_{j}=\frac{\sum_{i=1}^{n}\delta_{i}}{n}*100,\ \delta_{i}=\begin{cases}1,&\text{if }|y_{i}-\bar{y}_{i}|\leq j\\ 0,&\text{if }|y_{i}-\bar{y}_{i}|>j\end{cases} \tag{5}\]
\(\text{CS}_{j}\) is the percentage mean of \(\delta_{i}\), where \(\delta_{i}\) is the Euclidean-distance \((|y_{i}-\bar{y}_{i}|)\) between actual \((y_{i})\) and predicted \((\bar{y})\) score, and it will be counted as 1 for \(|y_{i}-\bar{y}_{i}|\leq j\). The value of \(\delta_{i}\) expressed as zero (0) implies that the distance \((|y_{i}-\bar{y}_{i}|)\) is greater than the threshold value (\(j\)). The MCS score facilitates prediction in various ranges of matching thresholds rather than a single threshold. Thus, the MCS score gives a more comprehensive assessment for the challenging problem of retinal based age prediction to cover all the values of \(|y_{i}-\bar{y}_{i}|\leq j\) for the setup threshold (\(j\)). This also allows us to give different penalties with varying thresholds in the objective function of the deep learning model.
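The cumulative score and MCS metrics can be computed as in the following NumPy sketch; the helper names are ours.

```python
import numpy as np

def cumulative_score(y_true, y_pred, j):
    """CS_j of Eq. (5): percentage of samples whose absolute age error is <= j."""
    delta = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    return 100.0 * np.mean(delta <= j)

def mean_cumulative_score(y_true, y_pred, J):
    """MCS-J of Eq. (5): average of CS_0, ..., CS_J."""
    return np.mean([cumulative_score(y_true, y_pred, j) for j in range(J + 1)])

# Example: MCS-2 over a small batch of actual vs. predicted ages
print(mean_cumulative_score([25, 40, 63], [26, 44, 63], J=2))
```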
### _Fundus Images Generation Given Age as Condition_
After proposing a sophisticated DL model (FAG-Net) for age prediction and gender classification, a novel network model has been introduced to predict futuristic variations in the fundus images. The model generates fundus images given age as condition (FGC-Net) (Figure 2). The FGC-Net consists of a VAE-style generative module, in which age is embedded as a condition in the bottleneck, followed by a discriminative module; its components are described in the following subsections.
Fig. 1: FAG-Net: for age and gender estimation using fundus images.
#### III-B1 Encoding
The encoding phase of FGC-Net first receives the input fundus images (\(X\in\mathbb{R}^{N\times H\times W\times C}\)) for biological traits association and learning (Figure 3). The dimensions \(N\times H\times W\times C\) denote the batch size, height, width, and number of feature maps (channels: 1 for grayscale and 3 for color images), respectively. The encoder automatically extracts lower-dimensional features from the input data and feeds them into the latent space. The \(i^{th}\) convolutional layer (\(NC_{i}\)) acts as a feature extractor by encoding the salient features from \(X_{i}\). Considering the input structure (e.g., \(X^{h}=H\), \(X^{w}=W\), \(X^{c}=C\), where \(X^{h}\), \(X^{w}\), \(X^{c}\) denote the output structure with new height \(h\), width \(w\), and dimension \(c\), respectively), the encoder part contains six encoding blocks (EB-1 to EB-6) to sufficiently extract low-level features in the spatial dimension (e.g., \(X^{h}=\frac{1}{n}\times H\), \(X^{w}=\frac{1}{n}\times W\), \(X^{c}=n\times C\), where \(n\) grows with the number of downsampling steps and deeper levels), followed by the bottleneck layer (\(Z\in\mathbb{R}^{k}\), where \(k\) is the dimension of \(Z\)). The spatial size of the feature maps (EB-1 to EB-6) is halved in each subsequent deep layer, and the loss of information is compensated for by doubling the number of filters (channels). In the encoding phase, the received image first passes through an input block (IB), which has been designed to extract varieties of features by employing distinct kernel sizes, such as 1\(\times\)1, 3\(\times\)3, 5\(\times\)5, and 7\(\times\)7, after a normal convolution (512\(\times\)512, 24, 3, corresponding to dimensions, number of filters, and kernel size). The outputs of the different filter sizes are merged as an element-wise sum before proceeding to the deeper block. The output of IB is forwarded to EB1, where EB1 contains a strided convolution, to avoid the loss of information useful for generation, and a normal convolution (dimensions, number of features, kernel size, and stride) for feature extraction, followed by BN and ReLu functions. The rest of the blocks (EB2 to EB6) have the same structure up to the bottleneck layer. Each EB compresses the input spatially and extends it channel-wise. The compression process at the \(l^{th}\) encoding block \(EB\)-\(l\), where \(l\in\{1,\ldots,6\}\), is formulated as follows:
\[EB\text{-}l=En\Big{(}\left[NC[S_{t}(X^{l-1})];\{op_{b},op_{r}\}\right];\phi \Big{)}, \tag{6}\]
where \(S_{t}\) and \(NC\) denote the strided (s = 2) and normal convolutions in block \(l\) applied to the data sample (\(X^{l-1}\)) obtained from the previous block (\(l-1\)). The output of the strided convolution \(S_{t}\) and the normal convolution \(NC\) is forwarded to the BN (\(op_{b}\)) and ReLu (\(op_{r}\)) functions. The stack of strided convolution (\(S_{t}\)) and normal convolution (\(NC\)) avoids the loss of useful information. In addition to reducing computational operations [47], \(S_{t}\) enables the model to learn while downsampling [48] and to retain and pass features into subsequent layers heading into the latent space, which is used by the decoder to generate the image back with age-embedded effects. Besides the encoding of the input fundus image, the corresponding label information is also embedded into the latent space. After a number of experiments, embedding the label information as a condition in the latent space was found more effective in influencing the generation process. The embedding of age as a condition together with the output of the encoding layer is carried out in the latent space. The encoder part (\(En\)) passes the label (age \(L_{g}\)) information as a condition (\(Ec(L_{g},\xi)\)), where \(\xi\) denotes the parameters learned by the encoder, into the latent space of the VAE.
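As an illustration, an encoding block following Eq. (6) could be sketched as below; the channel sizes and the PyTorch framework are assumptions for illustration only.

```python
import torch.nn as nn

class EncodingBlock(nn.Module):
    """Encoding block (EB) sketch per Eq. (6): a strided convolution (S_t, s=2)
    for learnable downsampling, a normal convolution (NC) for feature
    extraction, then batch normalization (op_b) and ReLU (op_r)."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, k, stride=2, padding=k // 2),   # S_t
            nn.Conv2d(out_ch, out_ch, k, stride=1, padding=k // 2),  # NC
            nn.BatchNorm2d(out_ch),                                   # op_b
            nn.ReLU(inplace=True),                                    # op_r
        )

    def forward(self, x):
        return self.block(x)
```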
#### III-B2 Bottleneck and Conditioning
The bottleneck layer is an unsupervised method of modeling complex and higher dimensional data in deeper level. The encoder part (\(En(X^{i},\phi)\)) compresses the input from higher-dimensional space (\(X^{H}_{m},X^{W}_{m}\)) through network parameters (\(\phi\)) and generates the probabilistic distribution over the latent space (\(Z\)) with a lower possible dimension (\(\frac{1}{n}\times X^{H},\frac{1}{n}\times X^{W}\)). Similarly, \(Ec(L_{g},\xi)\) passes \(L_{g}\) through fully connected layers with learning parameters of \(\xi\) similar to the fully connected layer of \(En(X^{i},\phi)\). The decoder part utilizes the embedded and compressed form (latent variables \(Z\)) and generates it back to the high-dimensional generated space (\(Y\)). Minimizing the gap between \(X\) and \(Y\) enables the model to learn and tune the parameter values. The latent space enables the model to learn from the mutual
Fig. 3: Detailed network architecture of FGC-Net, which is composed of generative and discriminative modules together with condition in the bottleneck using variational autoencoder (VAE). The VAE in the bottleneck embeds age as scaler values and facilitates different versions for the given input. The input block has a special architecture where varieties of filters (with different sizes) have been employed. The model output different copies of the input given ages as condition while testing.
Fig. 2: FGC-Net: for fundus images generation given different ages. The generator part composed of variational autoencoder where the bottleneck receives age as condition prior to decoding. The discriminator receives both ground truth and generated fundus images to learn the aging effects. **Left**: The FGC-Net receives fundus image as input, encodes into latent space and generated back with embedded (in the bottleneck) age as condition. The generated fundus image discriminates against the age as label for learning age embedding. **Right**: for the testing phase, a single fundus image can be input with multiple age labels and generates corresponding fundus images with relevant variations. The details of the model have been drawn in Figure 3.
distributions of \(X\) and \(L_{g}\). The output of \(EB6\) and \(En(L_{g},\xi)\) are passed to a fully connected layer modeling the complex dimensional structure into a latent representation and then flatten via 64 neurons. From each of the flatten 64-neurons, both mean (\(\mu\)) and standard deviation (\(\sigma\)) are computed.
The encoder part (\(En\{X_{m};\ \phi\}\)) generates the posterior over latent space (\(z^{l}\), where \(i\) denotes the sample number) and samples from the latent space (\(P^{i}\)) which can be used for the decoding (generation) as \(De\{En\uplus L_{g}\oplus S_{f};\ \vartheta\}\). The latent space is obtained as follows:
\[z_{l}\sim\mathfrak{R}_{i}\left[(z_{0}|x^{i})\parallel(z_{1}|x^{i})\right], \tag{7}\]
where \(\mathfrak{R}(|)\) is the distribution over \(z_{0}\) and \(z_{1}\) given input \(x^{i}\). The sampling \(z_{l}\) from the distribution \(\mathcal{N}(;)\) can be rewritten for the conditional input as follows:
\[z_{l}\sim\mathcal{P}_{i}(z|X) =\mathcal{N}\left(\mu(X;\phi_{0}),\sigma(X;\phi_{0})\right)\ \parallel\mathcal{N}\left(\mu(X;\phi_{1}),\sigma(X;\phi_{1})\right),\] \[z_{l}\sim\mathcal{P}_{i}(z|X) =\mathcal{N}\left(\left[\mu(X;\phi_{0})+\mu(X;\phi_{1})\right]; \left[\sigma(X;\phi_{0})*\sigma(X;\phi_{1})\right]\right),\] \[z_{l}\sim\mathcal{P}_{i}(z|X) =\mathcal{N}\left(\left[\mu(X;\phi_{l})\right];\left[\sigma(X; \phi_{m})\right]\right),\] \[\textit{where}\ \phi_{l} =\phi_{0}+\phi_{1},\ \phi_{m}=\phi_{0}*\phi_{1}\]
The drawn sample (\(z_{l}\)) conditioned with \(X_{m}\) from the distribution (see (Equation III-C2)) maps into the same shape as the decoder (\(Dec(z_{l},\theta)\)) for the generation process with the learning network parameters (\(\theta\)). The latent distribution must be regularized by the kullback leibler (KL) divergence (see the loss function) to closely approximate the posterior (\(P(z|x)\)) and prior (\(P(z)\)) distributions. The regularization (i.e., via the Gaussian prior) holds in the latent space between the distributions in terms of \(\mu\) and \(\sigma\), which further contributes to the latent activations utilized by the decoder to produce new retinal image. The latent distributions are centered (\(\mu\)) and spread over the area \(\sigma\) to project the possible fundus as desired (DSp). Usually, the distance between the learned distribution \(\mathcal{N}(\mu,\ \sigma^{2})\) and the standard normal distribution \(\mathcal{N}(0,1)\) can be quantified by the KL divergence. However, instead of Gaussian normal distribution and normal mean (\(\mu\)) standard deviation \(\sigma\), we utilize the sum of \(\mu\) and the product of \(\sigma\). The detailed formulation is shown in Equations III-C2 and 9. The latent distribution and regularization are expected to have the properties of continuity and completeness. In the case of continuity, the sampling from the latent distribution given \(X\) may exist a nearby data point that feeds into the decoder to generate fundus images with a similar structure with additional information, as desired. The decoder must generate target-oriented fundus images in a controlled fashion.
#### III-B3 Decoding
FGC-Net generates a random sample (\(z_{l}\), for \(l=1,2,\ldots,n\)) conditioned by \(L_{g}\), drawn from the probabilistic distribution \(P_{i}(z_{l}|X)\), passes it through the decoding blocks (DB1 to DB7) on the decoding side, and projects it to \(Y_{i}\):
\[Y_{i}=Dec\{[z_{l}\odot R_{l}]\oplus S_{f}(X);\theta\}, \tag{8}\]
where, \(Y_{i}\) is the generated fundus images corresponding to \(z_{l}\) with adjustable weights (\(\odot R_{l}\)) regularized by the objective function and merged with the contextual skipped features (\(\oplus S_{f}\)) using network learning parameters (\(\theta\)).
In the decoding process, \(z\) is computed from the sum of \(\mu\) and \(\sigma\) multiplied by the noise term (\(\epsilon\)). The values of \(\mu\) and \(\sigma\) are computed in Equation III-C2. The \(\epsilon\) value is drawn from a normal distribution with mean \(\mu(L_{g})\) and standard deviation \(\sigma(L_{g})\), based on the fed scalar age values.
\[z\sim\mathcal{N}(\mu,\sigma^{2})\cdot\epsilon\leftarrow\mu+\sigma^{2}\cdot \mathcal{N}(\mu(L_{g}),\sigma(L_{g})). \tag{9}\]
The latent vector \(z\) is reshaped and upsampled to match the dimensions of the corresponding encoding layer (EB6) and is merged (\(\oplus S_{f}\)) as an element-wise sum with the skip layer from EB6. Each block receives an input and upscales its dimensions via a transposed convolution followed by BN and a ReLu function; thus, each decoding block (DB) is composed of a strided (transposed) convolution, BN, and ReLu activation. The output of DB1 is concatenated with the skip connection from the IB.
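A compact sketch of this age-conditioned reparameterization (Equation 9) is shown below; the tensor shapes and the exact way the age statistics are produced upstream are assumptions.

```python
import torch

def sample_latent(mu: torch.Tensor, sigma: torch.Tensor,
                  mu_age: torch.Tensor, sigma_age: torch.Tensor) -> torch.Tensor:
    """Reparameterization with age conditioning, following Eq. (9): the noise
    eps is drawn from N(mu(L_g), sigma(L_g)) -- statistics of the embedded age
    label -- instead of from a standard normal distribution."""
    eps = mu_age + sigma_age * torch.randn_like(mu)
    return mu + sigma.pow(2) * eps   # z = mu + sigma^2 * eps, as written in Eq. (9)
```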
#### III-B4 Skip Layer
The deeper the network, the more chances of losing key features due to the application of downsampling operations and the vanishing gradient problem [49]. Similarly, to avoid the loss of contextual information [50], we adopted skip connections between the encoding (\(Enc_{k}\{X;\ \phi\}\)) and decoding (\(Dec_{k}\{\varnothing\otimes S_{f};\ \theta\}\)) at particular \(\text{layers}(\kappa)\) to transfer spatial features and global information in terms of the input image structure. The skip layers integrate the learned features from early levels, avoid degrading shallow stacked networks and overcome gradient information loss by retaining key features during training. These connections also improve the end-to-end mapping of training and achieve an effective role in a deeper network. The sole purpose of adopted skip connections is to facilitate the decoder to maintain the existing input structure while generating on the decoding side together with synthetic information to reflect age progression. The dimensions and merging position with the corresponding layers, both at the bottleneck and decoder layers, are show, in Figure 3. After generating \(z\) given \(P(z|X)\) from the encoder (see Equation 6), the decoder part merges the data sample information from the latent space, conditioning information (\(L_{g}\)), and skip connection at a particular layer (\(\iota_{k}\)) is formulated as follows:
\[DB_{k}=Dec\big{(}NC[S_{i}(Y^{k+1});\{op_{b},op_{r}\}]\oplus S_{f}(X);\theta \big{)}, \tag{10}\]
where \(Y_{k+1}\), \(\oplus S_{f}(X)\), and \(\oplus\) denote the previous tensor, the skipped features, and the element-wise sum merging operation, respectively. Additionally, \(op_{b}\) and \(op_{r}\) denote the BN and nonlinear ReLu activation operations, respectively. In addition to the completeness and continuity properties of the VAE, the involvement of skip connections borrowed from U-Net controls the generation process.
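For illustration, a decoding block implementing Eq. (10) might be sketched as follows; the channel sizes and the use of a transposed convolution for upsampling are our assumptions.

```python
import torch
import torch.nn as nn

class DecodingBlock(nn.Module):
    """Decoding block (DB) sketch per Eq. (10): a strided transposed
    convolution to upscale, batch normalization and ReLU, then an
    element-wise sum with the skipped encoder features (S_f)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        y = self.up(x)
        return y + skip   # merge contextual features from the encoder
```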
#### III-B5 Discriminator
The discriminative part, borrowed from the generative adversarial network (GAN), is appended at the end of FGC-Net and brings sharpness and better quality to the generated images [51]. Adversarial learning plays a min-max game to distinguish the original and fake (generated or synthetic) images. FGC-Net brings the inferencing features to reason in the latent space and generates fundus images as desired [52]. However, instead of training in a min-max fashion, we utilize the discriminative part solely for the prediction of a scalar (regression) value corresponding to the subjects' ages. There are six blocks receiving both the input and the generated fundus images. Each discriminative block (DsB) is composed of stride
convolution, BN, and ReLu functions. The output of stacked DsB ends with three fully connected layers containing 512, 256, and 128 neurons followed by dropout layers with ratio of 0.8, 0.7, and 0.6, respectively. Finally, the output of fully connected layer passes through linear activation function to a single neuron for age estimation. In the objective function, both outputs as single values (from input fundus and generated fundus images) are formulated as mean square error (MSE) or \(L_{2}\) loss term.
In our case, the generator maps \(X_{i}\) to \(Y_{i}^{j}\) where \(j=L_{g}\) and \(L_{g}\) can be age values, and the discriminator distinguishes \(X_{i}\) and \(Y_{i}^{j}\) as real and fake, respectively. The min-max game of learning in GAN can be formulated as follows:
\[V(D,G)=\underset{G}{min}\underset{D}{max}(D_{XY},G_{X}), \tag{11}\]
Similarly, the generative (\(G_{X}\)) and discriminative (\(D_{XY}\)) operations can be illustrated in mathematical forms as follows:
\[G_{X}=G\{\underbrace{En(X_{i};\ \phi)\to Y_{i}\sim Dec(Z_{i};\ \theta)} _{Generative\ Uni}\rightarrow\underbrace{Disc([X_{i},Y_{i}];\ \Phi)}_{Discriminative\ Uni}\} \tag{12}\]
\[G_{X}=G(X_{i},\ Y_{i};\ \omega)\ \textit{where}\ \ \omega=\{\phi,\theta,\Phi\}\]
The discriminator plays a vital role in the abstract reconstruction error in the circumstances where VAE is infused in the network model. The discriminator part measures the sample similarity [52] at both element and feature levels. In addition, the discriminator is made stronger to distinguish between real and fake images by including \(L_{2}\) loss term.
#### III-A6 Objective function for FGC-Net
The objective loss function for FGC-Net is composed of the reconstruction losses (\(L_{1}\) and \(L_{2}\), Equations 2 and 3) and the KL divergence loss [53]. The probabilistic distribution of the VAE inference model (\(q_{\phi}(z|x)\)) approximates the posterior (true) distribution (\(p_{\theta}(z|x)\)) in terms of the KL-divergence, which minimizes the gap as follows [54]:
\[KL_{d}(q_{\phi}(z|x)||p_{\theta}(z|x)))=\mathbb{E}_{q_{\phi}}\Big{[}log\frac {q_{\phi}(z|x)}{p_{\theta}(z|x)}\Big{]}, \tag{13}\]
In our case, the KL-divergence between the distribution \(\mathcal{N}(\mu_{i},\sigma_{i})\) of the inference model, with mean \(\mu_{i}\) and variance \(\sigma_{i}\), and the conditioning distribution \(\mathcal{N}(\mu(L_{g}),\sigma(L_{g}))\) of Equation 9 can be formulated, after the usual Bayesian inference simplification [55], as follows:
\[KL_{d}(\mathcal{N}(\mu,\sigma)||\mathcal{N}(\mu(L_{g}),\sigma(L_{g})))=\frac{1}{2}\sum_{i=1}^{l}\big(\sigma_{i}^{2}+\mu_{i}^{2}-1-\log(\sigma_{i}^{2})\big). \tag{14}\]
Thus, the total loss function for FGC-Net (TLF-FGC) is composed of the following terms:
\[\text{TLF-FGC}=(L_{1}+L_{2}+KL_{d})/3 \tag{15}\]
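The following sketch shows how the KL term and the total loss of Equations (14)-(15) could be computed; it assumes the usual log-variance parameterization and, for simplicity, the standard unit-variance form of the KL term.

```python
import torch
import torch.nn.functional as F

def kl_divergence(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Closed-form KL term (cf. Eq. 14) for N(mu, sigma^2) against a
    unit-variance Gaussian, with sigma^2 = exp(log_var)."""
    return 0.5 * torch.sum(log_var.exp() + mu.pow(2) - 1.0 - log_var, dim=-1).mean()

def tlf_fgc(x: torch.Tensor, y: torch.Tensor,
            mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Total loss of Eq. (15): mean of the L1, L2 (reconstruction) and KL terms."""
    l1 = F.l1_loss(y, x)
    l2 = F.mse_loss(y, x)
    return (l1 + l2 + kl_divergence(mu, log_var)) / 3.0
```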
#### III-A7 Dataset preparation and network training
To train, evaluate, and test the proposed models for biological traits estimation and traits-based futuristic analysis, we used the Ocular Disease Intelligent Recognition dataset (ODIR-5K) [56], PATILA [57], and a longitudinal population based on 10-year progression collections (10Y-PC) [58]. To obtain a generalized model for biological traits estimation, we utilized both cross-sectional and longitudinal populations. Furthermore, both healthy and unhealthy subjects were included so that the underlying DL model learns features invariant to abnormalities. Similarly, a variety of cameras and environments were used to capture images of different qualities for modeling the DL networks. Both FAG-Net and FGC-Net were trained with the Adam optimizer for optimizing the network parameters, with an initial learning rate of 0.001, \(\beta_{1}\) = 0.9, and \(\beta_{2}\) = 0.999, where the learning rate was decreased by a factor of 10 after every 50 epochs. The batch size was 16 samples, according to the available GPU memory. The models were run for up to 500 epochs, with early stopping applied on observing poor results after an epoch.
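The optimization setup described above can be written, for example, as the following sketch; the stand-in model is a placeholder for FAG-Net or FGC-Net.

```python
import torch
import torch.nn as nn

# Stand-in model; FAG-Net or FGC-Net would be plugged in here.
model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

# Adam with lr=0.001, beta1=0.9, beta2=0.999; learning rate divided by 10
# every 50 epochs; batch size 16; up to 500 epochs with early stopping.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)
BATCH_SIZE, MAX_EPOCHS = 16, 500
```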
## IV Results
### _Biological traits estimation_
The proposed model FAG-Net and the state-of-the-art (SOTA) models were trained with 5-fold cross-validation (FCV). The evaluation metrics MAE, MSE, MCS-2, and MCS-3 were used for testing purposes. The MCS metrics help to better assess the performance of the models for age prediction, where age can only be predicted within a range of values rather than as an exact value. The MSE metric, in contrast, may produce large values for large differences between actual and predicted values in the case of outliers; thus, MSE may not be a reliable option for such scenarios. The details of the five-fold cross-validation are shown in Table I. Table II shows the results of all the underlying models corresponding to the evaluation metrics.
### _Gender classification_
In biological traits estimation, we also trained the proposed model (FAG-Net) and a few SOTA models. All the classification results are shown in Table III. After the successful classification of the gender trait from fundus images, the proposed model (FGC-Net) emphasized the optic disc area and the learned features while training for the aging association. In the study of gender classification [34], the optic disc was also considered the main structure by the deep learning approaches.
To evaluate the performance of our proposed model for gender classification, we randomly chose a few SOTA models and trained and tested them on the same dataset with the same parameters (Table III). We used confusion-matrix metrics to evaluate the results, including true positives (TP), false positives (FP), true negatives (TN), false negatives (FN), specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), F\({}_{1}\) score, and accuracy. The derivation of these metrics is illustrated in the following equations.
\[Sensitivity=\frac{TP}{TP+FN},\ Specificity=\frac{TN}{TN+FP}, \tag{16}\] \[F_{1}=2\times\frac{Specificity\times Sensitivity}{Specificity+ Sensitivity},\ \ PPV=\frac{TP}{TP+FP},\] \[NPV=\frac{TN}{TN+FN},\ Accuracy=\frac{TP+TN}{TP+TN+FP+FN}.\]
\[R_{2}=1-\frac{SSR}{TSS},\;\;SSR=\sum_{i=1}^{j}(X_{i}-Y_{i})^{2},\;\;TSS=\sum_{i=1}^{j}\big(X_{i}-Avg(X)\big)^{2} \tag{17}\]
where SSR denotes the sum of squared residuals and TSS the total sum of squares. From the accumulated results, our proposed model FAG-Net outperforms the competing SOTA models, with VGG-Net-16 receiving the second highest score in terms of accuracy. The convincing results of our proposed model encourage us to proceed with age prediction from fundus images and with learning the corresponding aging effects.
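A small helper computing the metrics of Equation (16) from raw confusion-matrix counts is sketched below (the F\({}_{1}\) score follows the definition given in the text).

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Confusion-matrix metrics of Eq. (16) from raw counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    f1 = 2 * specificity * sensitivity / (specificity + sensitivity)  # as in Eq. (16)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "f1": f1, "accuracy": accuracy}
```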
### _Age progression effects in fundus images_
In this study, FAG-Net has been utilized to estimate biological traits from fundus images. After the successful estimation of age (MAE 1.634) and gender (accuracy 91.878%), we proposed FGC-Net, a generative model conditioned by subjects' age. To extrapolate the effects of the fed condition, we proposed different versions of FGC-Net in order to verify the changes made on fundus images with age progression. After training FGC-Net (Figure 3) together with all its versions, we randomly chose samples and fed them to the models to retrieve fundus images at different age stages (Figure 4). There are a total of seven versions of FGC-Net together with their outputs (Figure 4). The randomly chosen sample is fed to each model together with different conditions (labels) in the range of 10 to 80 years. The output images are subtracted from the original (fed fundus image) and the difference is displayed in Figure 4 (2nd column to the 9th).
Variations were observed in a subjective evaluation of the generated images. From the visualized results, three key anatomical properties, the optic disc (OD), the area near the OD, and the overall size (volume), were observed to vary with age. The OD region, an approximately circular, bright-yellow structure, varies from early to late aging in the fundus images generated by all model variants. The age embedded as a condition mostly influences the optic disc as age progresses, which can be observed with the naked eye in the 5th to 7th rows of Figure 4 for the corresponding models. Similarly, the nearby thick vessels and the region close to the OD were also observed to vary with aging. Besides, the size of the fundus images varies with age progression; such variations are apparent for the FGC-Net-6 model (Figure 4, last row). We employed an attention mechanism in all the proposed models to highlight the regions of interest while embedding and estimating biological traits. The attention mechanism also highlights pixels in the input image according to their contribution to the final evaluation, so the affected regions can be observed in the images generated by the underlying modalities. The learning from the embedded age condition occurs at an abstract level; in other words, the learning is generalized by utilizing fundus images from both healthy and unhealthy subjects, and it thereby avoids the variation in anatomical structure caused by retinal disease. Thus, the study innovatively learns biological traits and their effects on fundus images using the cutting-edge technology of deep learning. The ability of neural networks to use greater abstractions and tighter integrations comes at the cost of lower interpretability. Saliency maps, also called heat maps or attention maps, are common model explanation tools used to visualize model reasoning by indicating areas of local morphological changes
\begin{table}
\end{table} TABLE II: Comparative evaluation scores of FAG-Net and the SOTA models for age prediction in terms of MAE, MSE, MCS, and R\({}_{2}\).
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c} \hline Fold & MAE & MSE & \(CS_{0}\) & \(CS_{1}\) & \(CS_{2}\) & \(CS_{3}\) & \(CS_{4}\) & \(CS_{5}\) & MCS-2 & MCS-3 & MCS-4 \\ \hline FCV-1 & 2.269 & 22.026 & 32.736 & 69.263 & 80.133 & 84.132 & 85.756 & 87.172 & 60.710 & 66.566 & 71.174 \\ FCV-2 & 2.286 & 24.734 & 33.944 & 71.470 & 80.258 & 83.632 & 86.006 & 88.172 & 61.890 & 67.326 & 71.062 \\ FCV-3 & 2.286 & 25.573 & 79.291 & 81.842 & 83.239 & 84.941 & 86.905 & 88.825 & 81.667 & 82.485 & 83.369 \\ FCV-4 & 1.401 & 3.324 & 18.280 & 62.354 & 88.147 & 95.159 & 97.663 & 99.249 & 56.260 & 65.984 & 72.320 \\ FCV-5 & 0.517 & 3.961 & 83.282 & 93.147 & 94.718 & 95.258 & 96.377 & 97.76 & 90.382 & 91.608 & 92.562 \\
**Average** & **1.634** & **15.151** & **49.705** & **75.767** & **85.475** & **88.814** & **90.729** & **92.168** & **70.315** & **74.9400** & **78.098** \\ \hline \end{tabular}
\end{table} TABLE I: FAG-Net scores for five cross validation \(CS_{0}\), \(CS_{1}\), \(CS_{2}\), \(CS_{3}\), MAE, MSE, MCS.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \hline Network & TP & FP & TN & FN & Specificity & Sensitivity & PPV & NPV & F\({}_{1}\) & Acc.\% \\ AlexNet & 662 & 552 & 408 & 778 & 0.58 & 0.61 & 0.54 & 0.65 & 0.60 & 60.3 \\ VGG-Net 16 & 990 & 114 & 897 & 168 & 0.88 & 0.85 & 0.89 & 0.84 & 0.87 & 86.9 \\ VGG-Net 19 & 100 & 104 & 898 & 117 & 0.89 & 0.84 & 0.90 & 0.83 & 0.87 & 87.0 \\ ShoseNet & 991 & 1138 & 838 & 138 & 0.88 & 0.88 & 0.89 & 0.82 & 0.86 & 86.3 \\ Pixel RNN & 997 & 1070 & 861 & 205 & 0.88 & 0.82 & 0.90 & 0.80 & 0.85 & 85.6 \\ Highway Net & 673 & 541 & 483 & 703 & 0.56 & 0.58 & 0.55 & 0.59 & 0.57 & 57.3 \\ Residual Net & 978 & 126 & 813 & 213 & 0.87 & 0.82 & 0.88 & 0.80 & 0.84 & 84.3 \\
**FAG-Net** & **1414** & **74** & **1065** & **121** & **0.935** & **0.904** & **0.939** & **0.897** & **0.919** & **0.889** \\ \hline \end{tabular}
\end{table} TABLE III: Gender classification results of FAG-Net and the SOTA models in terms of confusion-matrix metrics (TP, FP, TN, FN, specificity, sensitivity, PPV, NPV, F\({}_{1}\), and accuracy).
within fundus photographs that carry more weight in modifying network predictions. Algorithms mainly used the features of the optic disc for gender prediction, which is consistent with the observations made by Poplin [16]. Deep learning models that were trained using images from the UK Biobank and EyePACS data sets primarily highlighted the optic disc, retinal vessels, and macula when soft attention heat maps were applied, although there appeared to be a weak signal distributed throughout the retina [16].
## V Conclusion and Future Directions
In this study, we investigated biological traits in fundus images from both healthy and unhealthy subjects and extrapolated the variational effects of age progression on fundus images. We proposed two types of DL models, named FAG-Net and FGC-Net. FAG-Net estimates age and classifies gender from fundus images using a dense network architecture together with attention mechanisms at distinct levels. The proposed models generalize the learning process in order to avoid the variation in anatomical structure of fundus images caused by retinal disease. The study successfully carried out age prediction and gender classification with significant accuracy, and the attention mechanism highlighted the regions of interest that are vulnerable to aging. Furthermore, the model shows similar salient regions for ungradable input images as for gradable ones (Fig. 2). This suggests that the model is sensitive to signals in poor-quality images arising from subtle pixel-level luminance variations that are likely imperceptible to humans. This finding underscores the promising ability of deep neural networks to utilize salient features in medical imaging that may remain hidden to human experts. In future work, more sophisticated deep learning models with attention mechanisms can be proposed for healthy and unhealthy subjects, both in isolation and jointly.
|
2306.07170 | Prompt-based Extraction of Social Determinants of Health Using Few-shot
Learning | Social determinants of health (SDOH) documented in the electronic health
record through unstructured text are increasingly being studied to understand
how SDOH impacts patient health outcomes. In this work, we utilize the Social
History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified
social history sections annotated for SDOH, including substance use,
employment, and living status information. We explore the automatic extraction
of SDOH information with SHAC in both standoff and inline annotation formats
using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction
performance with a high-performing supervised approach and perform thorough
error analyses. Our prompt-based GPT-4 method achieved an overall 0.652 F1 on
the SHAC test set, similar to the 7th best-performing system among all teams in
the n2c2 challenge with SHAC. | Giridhar Kaushik Ramachandran, Yujuan Fu, Bin Han, Kevin Lybarger, Nicholas J Dobbins, Özlem Uzuner, Meliha Yetisgen | 2023-06-12T15:08:25Z | http://arxiv.org/abs/2306.07170v1 | # Prompt-based Extraction of Social Determinants of Health Using Few-shot Learning
###### Abstract
Social determinants of health (SDOH) documented in the electronic health record through unstructured text are increasingly being studied to understand how SDOH impacts patient health outcomes. In this work, we utilize the Social History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified social history sections annotated for SDOH, including substance use, employment, and living status information. We explore the automatic extraction of SDOH information with SHAC in both standoff and inline annotation formats using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction performance with a high-performing supervised approach and perform thorough error analyses. Our prompt-based GPT-4 method achieved an overall 0.652 F1 on the SHAC test set, similar to the 7\({}^{\text{th}}\) best-performing system among all teams in the n2c2 challenge with SHAC.
## 1 Introduction and related work
Social determinants of health (SDOH) are the conditions in which people work and live that impact quality of life and health Centers for Disease Control and Prevention (2022). Understanding SDOH can assist in clinical decision-making Daniel et al. (2018); Friedman and Banegas (2018). SDOH is documented in the electronic health record (EHR) through unstructured clinical narratives and structured data; however, the clinical narrative includes a more detailed description of many SDOH events. To utilize the text-encoded SDOH information in secondary use applications, including clinical decision-support systems, the SDOH information must be automatically extracted Daniel et al. (2018); Singh et al. (2017).
SDOH extraction has been explored using rule-based systems and data-driven models that use supervised learning Hatef et al. (2019); Patra et al. (2021); Yu et al. (2022); Han et al. (2022) on a variety of corpora Uzuner et al. (2008); Stemerman et al. (2021); Yetisgen and Vanderwende (2017). Recent SDOH extraction work utilizes large language models (LLMs) like BERT and T5, where models are fine-tuned to the SDOH extraction task Lybarger et al. (2022); Romanowski et al. (2023). Recent advancements in LLMs, including larger models like Generative Pretrained Transformer (GPT)-based models Brown et al. (2020); OpenAI (2023), allow for new training paradigms, including few-shot or zero-shot learning. Recent developments in LLMs like the GPT-4 OpenAI (2023) and med-PaLM models have shown their capability to understand the clinical text and achieve/exceed human-level performance in US medical licensing exams Singhal et al. (2022). This high performance may be attributed to (1) high model parameter counts, (2) large pre-training datasets, and (3) instruction tuning and optimization with Reinforcement Learning Human Feedback (RLHF) Ouyang et al. (2022). Recent clinical information extraction (IE) work Liu et al. (2023); Hu et al. (2023) comparing BERT-based fine-tuning approaches to zero-shot learning indicates GPT models can extract entities and relations with reasonable performance; however, there are many open questions related to the use of recent LLMs, like GPT-4, in clinical IE tasks.
In this work, we explore the extraction of SDOH using GPT-4 in a one-shot prompting setting with event-based SHAC Lybarger et al. (2021). We compare prompt-based extraction approaches with a high-performing supervised BERT-based modelLybarger et al. (2022) that has been fine-tuned to SDOH extraction from SHAC. We investigate two different one-shot prompting strategies for GPT-4, including prompts aimed at generating BRAT standoff format and inline annotations. We report an overall performance of 0.861 F1 from the fine-tuned model, evaluated on the withheld test set. The highest-performing one-shot GPT-4 approach
achieved an overall 0.652 F1 for SDOH event extraction. Our initial study shows that GPT-4 can extract SDOH information from text with limited training examples.
## 2 Data, Task, & Evaluation
The 2022 National NLP Clinical Challenges SDOH extraction task (n2c2/UW SDOH Challenge) used SHAC for model development and evaluation (Lybarger et al., 2023). SHAC contains 4405 de-identified social history sections of notes from MIMIC-III (Johnson et al., 2016) and the University of Washington (UW). SHAC includes training, development, and test partitions for both sources (MIMIC-III and UW). SHAC was annotated using BRAT (Stenetorp et al., 2012), a web-based annotation tool, to capture five SDOH event types: substance use (_Alcohol, Drug, Tobacco_), employment status (_Employment_), and living status (_LivingStatus_). Figure 1 (A) presents an annotated sample in BRAT from the SHAC UW training set.
The n2c2/UW SDOH Challenge evaluation criteria interpret the extraction as a slot-filling task (Lybarger et al., 2023). Each event comprises a single trigger span and at least one required argument. _Trigger any overlap_ equivalence requires the predicted trigger to overlap with the true trigger of the same event type. Arguments can be classified into two categories: _span-only_ (a multi-word span and argument type) and _labeled_ (a multi-word span, argument type, and _subtype_ label). Arguments can be equivalent only when attached to equivalent triggers. In addition to trigger _any overlap_ equivalence, _span-only_ argument equivalence is evaluated by _exact match_, and _labeled_ arguments equivalence requires the correct argument and subtype labels (_span agnostic_) (Lybarger et al., 2023).
We evaluated performance using the n2c2/UW SDOH Challenge criteria, as well as on more lenient evaluation criteria that still assess the clinical meaning of extraction. In the lenient criteria, trigger equivalence is relaxed to a minimum-distance metric (_minimum distance_), where gold triggers are paired (aligned) with the closest predicted trigger of the same event type, and the closest predicted trigger is counted as a true positive. In the lenient criteria, the _span-only_ arguments use the _any overlap_ criteria and the _labeled_ arguments are evaluated as previously described.
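As an illustration of the span-matching notions above, the any-overlap and exact-match tests over character offsets can be written as follows (a simplified sketch; the official challenge scorer is more involved).

```python
def any_overlap(span_a, span_b):
    """True if two (start, end) character spans share at least one character."""
    (a_start, a_end), (b_start, b_end) = span_a, span_b
    return a_start < b_end and b_start < a_end

def exact_match(span_a, span_b):
    """True if two (start, end) character spans are identical."""
    return span_a == span_b

assert any_overlap((35, 55), (40, 60))      # overlapping triggers count under "any overlap"
assert not exact_match((35, 55), (40, 60))  # but they are not an exact match
```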
## 3 Methods
We benchmark the SDOH extraction task using two methods: (1) multi-label variation of the Span-based Entity and Relation Transformer (SpERT)(Eberts and Ulges, 2020) architecture, mSpERT (Lybarger et al., 2022) benchmarked for SHAC as a high-performing fine-tuned baseline, and (2) prompt-based one-shot learning with GPT-4. Inspired by performance gains of few-shot learning, relative to zero-shot learning in prior work (Brown et al., 2020; Lievin et al., 2022), we use one-shot prompting with GPT-4 for the SDOH extraction task in this short study. We experiment with two distinct output formats - (1) BRAT-style standoff annotations (GPT-standoff) and (2) Inline annotations (GPT-inline).
We conduct the GPT-4 one-shot experiments through OpenAI's GPT-4 Chat Completion Application Programming Interface (API)1, because of GPT-4's proprietary nature and significant hardware requirements. The API allows users to provide instructions via three role variables. Our prompts are structured in the following order:
Footnote 1: [https://platform.openai.com/docs/api-reference/chat](https://platform.openai.com/docs/api-reference/chat)
1. _system_: defines the desired role, personality traits, and task instructions for GPT-4. We use the _system_ variable to assign GPT-4 the role of an annotator along with the paraphrased annotation guideline.
2. _user_: provides an example note for one-shot learning.
3. _assistant_: provides the gold annotations for the example note in _user_. This is an example prediction for one-shot learning.
Following the above definitions, we end with a _user_ message containing a note to be annotated and
Figure 1: A. Sample note with SDOH events, visualized in the BRAT website. B. Standoff annotations in the BRAT format (.ann). C. Inline annotations.
indicate the _assistant_ should respond. We randomly sample a note from a subset of the SHAC-UW training set containing all five SDOH event types. Appendix A.2 contains formats and snippets of our prompts.
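For illustration, a minimal sketch of how such a one-shot chat prompt can be assembled is shown below; the guideline, note, and annotation strings are placeholders rather than our actual prompts, and the call assumes the chat-completion interface of the pre-1.0 openai Python package that was current at the time of this work.

```python
import openai  # assumes the pre-1.0 openai package with the ChatCompletion interface

openai.api_key = "YOUR_KEY"  # placeholder

guideline = "You are an annotator. Extract SDOH events ..."        # paraphrased guideline (placeholder)
example_note = "SOCIAL HISTORY: quit smoking 10 years ago ..."      # one-shot example note (placeholder)
example_annotation = "T1\tTobacco 21 28\tsmoking\n..."              # gold annotation for the example (placeholder)
target_note = "SOCIAL HISTORY: drinks socially, lives alone ..."    # note to be annotated (placeholder)

messages = [
    {"role": "system", "content": guideline},               # role and task instructions
    {"role": "user", "content": example_note},               # one-shot input
    {"role": "assistant", "content": example_annotation},    # one-shot "prediction"
    {"role": "user", "content": target_note},                # note to be annotated
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=0)
prediction = response["choices"][0]["message"]["content"]
```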
### GPT-standoff
To assess GPT-4's ability to comprehend the task and generate structured outputs, we prompt the model to generate predictions in the BRAT standoff format (Stenetorp et al., 2012) used by SHAC. The BRAT standoff format includes pairs of text (*.txt) and annotation (*.ann) files. The event annotation is characterized by three BRAT annotation frames in the annotation file: (1) _Text bounds (\(T\))_ include a text span (e.g. "currently unemployed"), span label (e.g. 'Employment'), and character indices (e.g. "35 55") for marking both triggers and arguments; (2) _Attributes (\(A\))_ adds a subtype label to \(T\), and (3) _Events (\(E\))_ characterize an SDOH event through linking a trigger and at least one argument. A visual representation is provided in Figure 1 (B).
We provide the paraphrased annotation guideline in the _system_ and the note via _user_ variables and elicit GPT-4 to output the desired annotation file through the _assistant_ variable. Our preliminary experimentation indicated that though GPT-4 was able to correctly extract relevant text spans, it had some shortcomings: (1) some generated lines did not conform to the BRAT standoff format, and (2) the generated character indices did not correspond with the identified text spans. We post-processed the generated outputs to ensure compliance with the BRAT standoff format and updated the character indices to correspond with the first occurrence of the generated text span (<3% spans occur more than once). Please refer to Appendix A.3 for original and post-processed examples.
### GPT-inline
Prior fine-tuned IE work utilized inline markers to infuse entity information in the body of narratives (Romanowski et al., 2023; Phan et al., 2021). In our work, we instruct GPT-4 to generate a version of the note with inline markers that identify the SDOH triggers and arguments. These markers enclose all spans inside double-angle brackets ("\(\ll\)...\(\gg\)"), with trigger, argument, and subtype labels appended (Figure 1, (C)). Similar to the GPT-standoff model, the GPT-inline model elicits the desired annotation format from GPT-4 through the _assistant_ variable. This method does not prompt the model to make trigger-argument connections, so we use a heuristic search to associate each argument with the nearest event trigger, constrained by the allowable trigger-argument connections defined by the annotation guideline (see details in Appendix A.4). The GPT-inline output is post-processed into BRAT standoff format for evaluation.
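The nearest-trigger heuristic can be sketched as follows; the allowed trigger-argument pairs shown here are truncated for illustration, with the full mapping given by the annotation guideline.

```python
# Truncated illustration of which event types may take which arguments.
ALLOWED = {
    "StatusTime":   {"Alcohol", "Drug", "Tobacco", "LivingStatus"},
    "StatusEmploy": {"Employment"},
    "TypeLiving":   {"LivingStatus"},
}

def link_arguments(triggers, arguments):
    """triggers/arguments: lists of (type, start, end) tuples parsed from the inline markers."""
    links = []
    for arg_type, a_start, a_end in arguments:
        candidates = [t for t in triggers if t[0] in ALLOWED.get(arg_type, set())]
        if not candidates:
            continue
        # attach the argument to the trigger whose span is closest in character offset
        nearest = min(candidates, key=lambda t: abs(t[1] - a_start))
        links.append((nearest, (arg_type, a_start, a_end)))
    return links
```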
## 4 Results and Discussion
Table 1 contains the overall performance of the prompt-based GPT-4 models on the withheld SHAC-UW test set, which was the evaluation data for Subtask C of n2c2/UW SDOH Challenge (Lybarger et al., 2023). The GPT-standoff and GPT-inline models achieve an overall F1 of 0.625 and 0.652, respectively. This performance is much lower than the mSpERT model and the highest-performing n2c2 systems, which utilized the entire training set in supervised model fine-tuning. GPT-inline achieved performance similar to the 7th best n2c2 system from IBM, which utilized BERT (Lybarger et al., 2023). This one-shot performance indicates that the natural language understanding capabilities of GPT-4 allow the prompt-based methods to leverage the annotation schema and achieve moderate performance. The results suggest that fine-tuning at least a portion of the training set may be needed to achieve high performance. We also observed that a generative architecture could lead to a new set of errors, and some of them may be eliminated in post-processing.
Table 2 contains trigger and argument micro-averaged F1 scores for each event type using the n2c2 and lenient evaluation criteria. Comparing overall performances (last row in the table), mSpERT outperforms both our GPT-4 models. But the performance gap between the fine-tuned model and the one-shot GPT models is smaller from the
\begin{table}
\begin{tabular}{l c c c} \hline
**Method** & **P** & **R** & **F1** \\ \hline
**Fine-tuned** & & & \\ Microsoft (T5) & 0.891 & 0.887 & 0.889 \\ CHOP (BERT) & 0.874 & 0.888 & 0.881 \\ mSpERT & 0.868 & 0.854 & 0.861 \\... &... &... &... \\ IBM (BERT) & 0.538 & 0.788 & 0.640 \\
**GPT-4 one-shot + post-processing** & & & \\ GPT-standoff & 0.621 & 0.628 & 0.625 \\ GPT-inline & 0.650 & 0.654 & 0.652 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of overall micro-averaged performance of SDOH triggers and arguments between select top-performing models in the n2c2 challenge, mSpERT (fine-tuned baseline), and the GPT-4 models.
n2c2 evaluation to the lenient evaluation, which can largely be attributed to higher trigger extraction performance, as argument equivalence requires trigger equivalence in both evaluations. The GPT-standoff model can identify the presence of SDOH events, but the identified triggers may not overlap with the gold trigger. The lenient evaluation only requires the same trigger type present in the social history text. The relatively lower performance for the GPT-inline model in _Employment_ trigger extraction can be attributed to the model frequently identifying a _StatusEmploy_ labeled argument without predicting an _Employment_ trigger. The GPT-inline extractions may capture meaningful employment information but do not adhere to the annotation guidelines. For both GPT models, the extraction of _LivingStatus_ triggers is relatively more challenging. Although the notes contain many plausible candidate spans for _LivingStatus_ triggers, these spans were not annotated in SHAC since they did not contain information to resolve the associated _TypeLiving_ labeled argument. The GPT models capture these false positive _LivingStatus_ triggers often without _TypeLiving_ labeled arguments. For argument extraction, GPT-standoff significantly outperforms GPT-inline in four arguments under the n2c2 evaluation and eight arguments in lenient evaluation. The GPT-inline does not link annotated arguments to triggers, and a distance metric (character count) is used to link them, which contributes to the GPT-inline's relatively lower performance. The labeled arguments are required for each event, and we observe that labeled argument performance is 0.1 F1 lower than the corresponding trigger performance. When multiple substance events are present in a note with differing _StatusTime_ labels (e.g. current and past), we observe the GPT-standoff model tends to output the same _StatusTime_ label for all substance events. The GPT-inline model correctly captures both substance trigger and _StatusTime_ spans with the correct label but fails to correctly link the triggers with the right _StatusTime_ spans because multiple _StatusTime_ spans can have the same distance to a trigger span.
## 5 Conclusions
We investigate the efficacy of two prompt-based approaches for extracting SDOH from social history sections using GPT-4. Although the supervised model achieves higher performance, our findings indicate that GPT-4's one-shot learning capabilities serve as a promising starting point for extracting SDOH events without the need for annotated data. Possible gains in future work may be achieved with a combination of few-shot and active learning.
\begin{table}
\begin{tabular}{l l l|c c c|c c c} \hline \hline \multirow{2}{*}{**Field**} & \multirow{2}{*}{**Argument**} & \multirow{2}{*}{**\# True**} & \multicolumn{3}{c|}{**n2c2 Evaluation (F1)**} & \multicolumn{3}{c}{**Lenient Evaluation (F1)**} \\ \cline{4-9} & & & **mSpERT** & **GPT-standoff** & **GPT-inline** & **mSpERT** & **GPT-standoff** & **GPT-inline** \\ \hline
**Trigger** & & & & & & & & & \\ Alcohol & - & 403 & 0.964 & 0.861 & 0.938* & 0.967 & 0.972* & 0.952 \\ Drug & - & 473 & 0.929 & 0.824 & 0.861* & 0.942 & 0.935* & 0.898 \\ Tobacco & - & 434 & 0.963 & 0.825 & 0.917* & 0.970 & 0.965* & 0.939 \\ Employment & - & 153 & 0.908 & 0.803* & 0.709 & 0.915 & 0.921* & 0.766 \\ LivingStatus & - & 354 & 0.886 & 0.590 & 0.749* & 0.903 & 0.844* & 0.811 \\
**Labeled Argument** & & & & & & & & \\ Alcohol & StatusTime & 403 & 0.913 & 0.763 & 0.734 & 0.913 & 0.856* & 0.750 \\ Drug & StatusTime & 473 & 0.857 & 0.706* & 0.646 & 0.868 & 0.783* & 0.673 \\ Tobacco & StatusTime & 434 & 0.917 & 0.694 & 0.738* & 0.926 & 0.813* & 0.764 \\ Employment & StatusEmploy & 153 & 0.868 & 0.657 & 0.627 & 0.875 & 0.759* & 0.679 \\ LivingStatus & StatusTime & 354 & 0.833 & 0.572 & 0.709* & 0.850 & 0.787* & 0.760 \\
**Span-only Argument** & & & & & & & & \\ Alcohol & All types & 178 & 0.699 & 0.388* & 0.172 & 0.783 & 0.694* & 0.354 \\ Drug & All types & 418 & 0.625 & 0.219* & 0.104 & 0.688 & 0.426 & 0.381 \\ Tobacco & All types & 375 & 0.775 & 0.420* & 0.322 & 0.830 & 0.714* & 0.537 \\ Employment & Duration, History, Type & 96 & 0.675 & 0.169 & 0.109 & 0.735 & 0.677* & 0.500 \\ LivingStatus & Duration, History & 11 & 0.421 & 0.063 & 0.074 & 0.526 & 0.159 & 0.105 \\ \hline & **Overall** & 5066 & 0.861 & 0.625 & 0.652* & 0.882 & 0.791* & 0.728 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Micro-averaged F1 comparison between mSpERT and the GPT-4 one-shot models. mSpERT outperforms the GPT-4 one-shot models in all trigger and argument extraction (with the exception of the _Alcohol_ and _Employment_ triggers under the lenient evaluation). For better readability, we only mark performance significance among the GPT-4 one-shot models: * indicates significance among the GPT-4 one-shot methods, with 10,000 bootstrap samples and a p-value threshold of 0.05.
## 6 Limitations
We only explored one-shot prompting strategies with GPT-4. More examples (few-shot) may improve performance. We prompted GPT-4 with only a single randomly selected sample that included all of the annotated event types. Our post-processing included simple rules to process the generated output and may be improved. The quality of the sample and the selection method may influence performance. We explored two prompting styles. Future work could explore more prompting methods such as question & answering and chain-of-thought (Wei et al., 2023) and fine-tuning non-proprietary LLMs.
## 7 Ethics statement
Our experimentation utilized OpenAI API to extract SDOH information from SHAC with GPT-4. SHAC is a fully de-identified corpus of social history sections. The use of such external API/models could introduce ethical problems related to privacy, identifiability, and other unintended consequences if the data sets are not fully de-identified. Additionally, a careful examination is needed to assess potential bias in LLMs for extracting SDOH prior to implementing real-life secondary use applications. We received approval from the Institutional Review Board (IRB) prior to conducting the presented research. As our GPT-4 one-shot experiments are conducted on the SHAC-UW test set, broader use of the model may need necessary precautions.
## 8 Acknowledgements
This work was supported in part by the National Institutes of Health and the National Library of Medicine (NLM) (Grant numbers R01 CA248422-01A1, R15 LM013209). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
|
2306.08243 | MMASD: A Multimodal Dataset for Autism Intervention Analysis | Autism spectrum disorder (ASD) is a developmental disorder characterized by
significant social communication impairments and difficulties perceiving and
presenting communication cues. Machine learning techniques have been broadly
adopted to facilitate autism studies and assessments. However, computational
models are primarily concentrated on specific analysis and validated on private
datasets in the autism community, which limits comparisons across models due to
privacy-preserving data sharing complications. This work presents a novel
privacy-preserving open-source dataset, MMASD as a MultiModal ASD benchmark
dataset, collected from play therapy interventions of children with Autism.
MMASD includes data from 32 children with ASD, and 1,315 data samples segmented
from over 100 hours of intervention recordings. To promote public access, each
data sample consists of four privacy-preserving modalities of data; some of
which are derived from original videos: (1) optical flow, (2) 2D skeleton, (3)
3D skeleton, and (4) clinician ASD evaluation scores of children, e.g., ADOS
scores. MMASD aims to assist researchers and therapists in understanding
children's cognitive status, monitoring their progress during therapy, and
customizing the treatment plan accordingly. It also has inspiration for
downstream tasks such as action quality assessment and interpersonal synchrony
estimation. MMASD dataset can be easily accessed at
https://github.com/Li-Jicheng/MMASD-A-Multimodal-Dataset-for-Autism-Intervention-Analysis. | Jicheng Li, Vuthea Chheang, Pinar Kullu, Eli Brignac, Zhang Guo, Kenneth E. Barner, Anjana Bhat, Roghayeh Leila Barmaki | 2023-06-14T05:04:11Z | http://arxiv.org/abs/2306.08243v3 | # MMASD: A Multimodal Dataset for Autism Intervention Analysis
###### Abstract
Autism spectrum disorder (ASD) is a developmental disorder characterized by significant social communication impairments and difficulties perceiving and presenting communication cues. Machine learning techniques have been broadly adopted to facilitate autism studies and assessments. However, computational models are primarily concentrated on specific analysis and validated on private datasets in the autism community, which limits comparisons across models due to privacy-preserving data sharing complications. This work presents a novel privacy-preserving open-source dataset, **MMASD** as a **M**ultiModal **ASD** benchmark dataset, collected from play therapy interventions of children with Autism. **MMASD** includes data from **32** children with ASD, and **1,315** data samples segmented from over 100 hours of intervention recordings. To promote public access, each data sample consists of four privacy-preserving modalities of data; some of which are derived from original videos: **(1)** optical flow, **(2)** 2D skeleton, **(3)** 3D skeleton, and **(4)** clinician ASD evaluation scores of children, e.g., ADOS scores. **MMASD** aims to assist researchers and therapists in understanding children's cognitive status, monitoring their progress during therapy, and customizing the treatment plan accordingly. It also has inspiration for downstream tasks such as action quality assessment and interpersonal synchrony estimation. **MMASD** dataset can be easily accessed at [https://github.com/Li-Jicheng/MMASD-A-Multimodal-Dataset-for-Autism-Intervention-Analysis](https://github.com/Li-Jicheng/MMASD-A-Multimodal-Dataset-for-Autism-Intervention-Analysis).
## 1. CCS CONCEPTS
## 2. Related Work
results show that children significantly improved their ability to recognize emotions, maintain eye contact, and respond appropriately to social cues. They identified several effective techniques for detecting and analyzing the children's emotional and behavioral responses, such as analyzing the frequency and duration of specific behaviors. Billing _et al._[4] proposed a dataset of behavioral data recorded from 61 children with ASD during a large-scale evaluation of robot-enhanced therapy. The dataset comprises sessions where children interacted with a robot under the guidance of a therapist and sessions where children interacted one-on-one with a therapist. For each session, they used three RGB and two RGBD (Kinect) cameras to provide detailed information, including body motion, head position and orientation, and eye gaze of children's behavior during therapy.
Another dataset related to humanoid-robot interactions with autistic children is DE-ENIGMA [33], which uses a multimodal human-robot interaction system to teach and expand social imagination in children with ASD. The DE-ENIGMA dataset comprises behavioral features such as facial mapping coordinates and visual and auditory data, and it facilitates communication and social interaction between the children and the robot. The authors indicated that DE-ENIGMA could be used as an effective tool for teaching and expanding social imagination in children with ASD. They also suggest that multimodal human-robot interaction could be a promising approach for developing interventions that aim to improve social skills and promote better social integration in children with ASD.
_Eye movement and vocalization datasets._ Duan _et al._[12] introduced a dataset of eye movement collected from children with ASD. The dataset includes 300 natural scene images and eye movement data from 14 children with ASD and 14 healthy individuals. It was created to facilitate research on the relationship between eye movements and ASD, with the goal of designing specialized visual attention models. Baird _et al._[2] introduced a dataset of vocalization recordings from children with ASD. They also evaluated classification approaches from the spectrogram of autistic speech instances. Their results suggest that automatic classification systems could be used as a tool for aiding in the diagnosis and monitoring of ASD in children.
_Behavior analysis datasets._ For action recognition dataset of children with ASD, Pandey _et al._[30] proposed a dataset of video recording actions and a technique to automate the response of video recording scenes for human action recognition. They evaluated their technique on two skill assessments with autism datasets and a real-world dataset of 37 children with ASD. Rehg _et al._[32] introduced a publicly available dataset including over 160 sessions of child-adult interactions. They discussed the use of computer vision and machine learning techniques to analyze and understand children's social behavior in different contexts. They also identified technical challenges in analyzing various social behaviors, such as eye contact, smiling, and discrete behaviors. Rajagopalan _et al._[31] explored the use of computer vision techniques to identify self-stimulatory behaviors in children with ASD. They also presented a self-stimulatory behavior dataset (SSBD) to assess the behaviors
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Dataset & Target & Research Focus & Data Modalities & Computational Models & \(N\) of Data Samples & Availability \\ & Population & & & & & (Data length)\({}^{*}\) & \\ \hline Billing _et al._[4] & 61 ASD & Behavior analysis & Body motion, head pose, eye gaze & SVM, face detector [39], Microsoft Kinect SDK & 3121 sessions (306 h) & Public \\ Rajagopalan _et al._[31] & - & Self-stimulatory behaviors detection & Video, audio & Space time interest point (STIP) [21] & 75 videos (90 s/video) & Public \\ Rehg _et al._[32] & 121 total & Behavior analysis & Video, audio, physiological data & SVM, Omron OKAO vision library & 160 sessions (3– 5 m/video) & On request \\ DE-ENIGMA [33] & 128 ASD & Behavior analysis & Facial landmark, body postures, video, audio & Computer vision, Microsoft Kinect SDK & On request \\ Zunino _et al._[43] & 20 ASD, & Grasping actions & Hand \& arm trajectories & LSTM, CNN, Histogram of optical flow (HOG), VLAD [18] & 1837 videos (83 \\ Pandey _et al._[30] & 37 ASD & Behavior analysis & Video, 2D pose, optical flow & Guided weak supervision, Temporal segment networks, Inflated 3D CNN Constrained Local Neural Fields [3] & 1481 video clips (joint activities), 4 videos (imitative play) & On request \\ Dawson _et al._[9] & 22 ASD, & Phenotyping, Head movement & Head pose, facial landmark & Model-based object pose, Computer vision analysis (CVA) [20] & 10 videos (1–3 s/video, 49 facial landmarks) & Private \\ Martin _et al._[29] & 21 ASD, & Head movement & Head pose, facial landmark & Computer-vision head tracking (Zface) [19] & 252 videos (6 m/video) & Private \\
**MMASD** (Ours) & 32 ASD & Movement \& Behavior analysis & Optical flow, 2D pose, 3D pose, ADOS score & ROMP [38], OpenPose [5], Lucas-Kanade [27] & 1,315 videos (7 s \& 186 frames/video on average) & Public \\ \hline \multicolumn{7}{l}{\({}^{*}\)_Certain works provided information on the duration of individual videos, whereas others presented the overall length of the dataset._} \\ \end{tabular}
\end{table}
Table 1. A comparison of related benchmarks and our dataset focused on behavior and movement analysis of children with ASD, including the target population, research focus, data modalities, computational models, data sample, and availability.
from video records of children with ASD in uncontrolled natural settings. The dataset comprises 75 videos grouped into three categories: arm flapping, head banging, and spinning behaviors.
Comparison to **MMASD**. In Table 1, we compare our proposed dataset with related benchmarks. Overall, MMASD features diverse themes and scenes, capturing full-body movements with multi-modal features. In contrast, some works focused specifically on upper-body movements (Han et al., 2017; Chen et al., 2018; Li et al., 2019; Li et al., 2019). MMASD also provides critical privacy-preserving features to represent body movements, making it publicly accessible, while some works were conducted on raw videos that are either private or accessible only upon request (Han et al., 2017; Chen et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). Additionally, it is collected from therapeutic interventions, reflecting participants' motor ability and providing valuable insights for treatment guidance.
## 3. Method
In the following sections, we describe the participants, procedure, and experimental settings of our proposed dataset.
### Participants
We recruited 32 children (27 males and 5 females) with ASD from different races (Caucasian, African American, Asian, and Hispanic) and backgrounds through fliers posted online and onsite in local schools, services, and self-advocacy groups. Prior to enrollment, children were screened using the Social Communication Questionnaire (Sucasian et al., 2019), and their eligibility was determined by the Autism Diagnostic Observation Schedule-2 (ADOS-2) (Shi et al., 2019; Li et al., 2019) as well as clinical judgment. All the children were between 5 and 12 years old. Written parental consent was obtained before enrollment, and the study was approved by the _Institutional Review Board_ of the university. The Vineland Adaptive Behavior Scales (Vineland, 2019) were used to assess the children's adaptive functioning levels. In general, 82% of the participating children had delays in the Adaptive Behavior Composite. Specifically, 70% of them experienced communication delays, 80% had difficulties with daily living skills, and 82% had delays in socialization.
### Procedure
The study was conducted over ten weeks, with the pre-test and post-test being conducted during the first and last weeks of the study, respectively. Each training session was scheduled four times per week and lasted approximately 45 minutes. During the intervention, the trainer and adult model interacted with the child within a triadic context, with the adult model acting as the child's confederate and participating in all activities with the child. This triadic setting (child, trainer, and model) provided numerous opportunities for promoting social and fine motor skills such as eye contact, body gesturing and balancing, coordination, and interpersonal synchrony during joint action games.
All expert trainers and models involved were either physical therapists or physical therapy/kinesiology graduate students who had received significant pediatric training prior to their participation. The trainers and models were unknown to the children before the study. In addition to the expert training sessions, we also encouraged parents to provide two additional weekly sessions involving similar activities to promote practice. Parents were provided with essential instruction manuals, supplies, and in-person training beforehand. All training sessions were videotaped with the parents' consent and notification to the children, and the training diary was compiled by parents in collaboration with expert trainers. The general pipeline of training sessions had a standard procedure despite some unique activities across different themes. A welcoming and debriefing phase was present at the beginning and end of the data collection to help children warm up and get ready for the intervention, as well as to facilitate the subsequent data processing stage by providing time labels that indicate the segments to investigate.
### Experiment Settings
All videos were recorded in a house environment with the camera pointed toward the children. Models (trainers/therapists and trained adults) interacted with the child within a triadic context, and the adult model was the child's confederate and practiced all activities with the child. Different tools were introduced to facilitate the training process depending on the theme of the intervention, for example, instruments and robots. Selected scenes in different themes of our proposed MMASD dataset are shown in Figure 2.
## 4. MMASD
MMASD includes 32 children diagnosed with autism of different levels. It covers three unique themes:
* Robot: children followed a robot and engaged in body movements.
* Rhythm: children and therapists played musical instruments or sang together as a form of therapy.
* Yoga: children participated in yoga exercises led by experts. These exercises included body stretching, twisting, balancing, and other activities.
Overall, MMASD comprises 1,315 video clips that have been meticulously gathered from intervention video recordings spanning more than 108 hours. It consists of 244,679 frames with an average duration of 7.15 seconds. The average data length in MMASD is 7.0 \(\pm\) 3.4 seconds (186.1 \(\pm\) 92.9 frames), with dimensions ranging from 320 \(\times\) 240 to 1920 \(\times\) 1080. Table 2 presents statistical information on MMASD. Depending on the activity conducted during the intervention, we further categorized all data into eleven activity classes as described in Table 3. Each activity class falls into a unique theme, as shown in Figure 2. MMASD also reports demographic and autism evaluation scores of all participating children, including date of birth, ADOS-2 score (social affect & restricted and repetitive behavior), motor functioning score, and severity of autism.
### Data processing
From the original video recordings, we manually identified the start and end timestamps of each activity, segmented the video into clips, and categorized the clips by activity class. Clips shorter than three seconds were discarded, as was noisy data caused by poor video quality, lighting conditions, and body occlusion. Besides the eleven activities retained in MMASD, a few other activities appeared with far fewer examples; to ensure a balanced data distribution, we excluded these under-represented classes.
### Data Annotation
Four annotators, each with a comprehensive understanding of the interventions and a cross-disciplinary background in computer science and physical therapy, labeled the data. Exactly one activity class label was assigned to each video: each annotator completed the annotation independently, and the final class label was determined by majority voting. However, the original videos cannot be publicly shared due to privacy concerns. Therefore, we created data samples from every video clip by extracting selected features from the original scenes: (1) optical flow, (2) 2D skeleton, and (3) 3D skeleton. All of these features retain critical body movements while preserving privacy. Section 4.3 explains the selected features in detail.
In addition to the motion-related features mentioned above, we also reported clinician evaluation results such as ADOS-2 score, motor functioning score, and severity of autism for each participating child. ADOS-2 is a standardized assessment tool used to evaluate individuals suspected of having ASD. It is used in conjunction with other diagnostic information to help clinicians determine whether an individual meets the criteria for an ASD diagnosis. ADOS-2 includes several modules, each designed for individuals of different ages and language abilities and includes a series of activities and tasks that are used to observe key features of ASD, such as social communication skills and repetitive behaviors. The ADOS-2 scores are based on an individual's performance during activities and tasks specific to their module and can range from 0 to 10 or higher depending on the algorithm used, with higher scores indicating more severe ASD symptoms. Moreover, we reported the ADOS comparison score, a continuous metric ranging from 1 to 10 that describes the severity of a child's autism symptoms compared to children with ASD of similar age and language levels (Han et al., 2018). Low comparison scores are indicative of minimal evidence of autism symptoms, whereas high scores are indicative of severe autism symptoms.
The contrast between the ADOS-2 score and the ADOS comparison score is worth mentioning. The ADOS-2 score reflects an individual's raw score on the ADOS-2 assessment tool, while the ADOS comparison score is a statistical measure that compares an individual's performance to others of the same age and language level. The motor functioning score refers to an assessment of an individual's motor skills and abilities and is evaluated based on the children's level of independence in daily living skills (Shen et al., 2017; Shen et al., 2018). It is on
\begin{table}
\begin{tabular}{l l} \hline Description & Value \\ \hline Number of data samples & 1,315 \\ Number of frames & 244,679 \\ Number of activity classes & 11 \\ Average video length (seconds) & \(7.0\pm 3.4\) \\ Average number of frames & \(186.1\pm 92.9\) \\ Resolution & \(320\times 240\sim 1920\times 1080\) \\ FPS & \(25\sim 30\) \\ \hline \end{tabular}
\end{table}
Table 2. Statistics of our proposed MMASD dataset.
Figure 2. Sample scenes depicting various themes and activity classes present in the MMASD dataset.
a scale of 1 to 3, while 1, 2, 3 represent low functioning (needing significant support), medium functioning (needing moderate support), and high functioning (needing less support), respectively. Finally, the severity of autism is determined by a comprehensive assessment that includes both the ADOS and motor function evaluation.
### Feature Extraction
In order to preserve critical details of movement while avoiding any infringement of privacy, we derived the subsequent features from the initial footage.
**(1) Optical flow** An optical flow is commonly referred to as the apparent motion of individual pixels between two consecutive frames on the image plane. Optical flow derived from raw videos (see Figure 1) can provide a concise description of both the region and velocity of a motion without exposing an individual's identity (Safra et al., 2014; Dosov et al., 2015; Dosov et al., 2016; Dosov et al., 2017).
**(2) 2D Skeleton** Skeleton data has an edge over RGB representations because it solely comprises the 2D positions of the human joints, which offer highly conceptual and context-independent data. This allows models to concentrate on the resilient aspects of body movements. 2D skeleton data has been widely applied to tasks relating to human behavior understanding, such as action recognition (Safra et al., 2014; Dosov et al., 2017), action quality assessment (Dosov et al., 2017; Dosov et al., 2017) and beyond. An optimal way to acquire skeleton data is through the use of wearable devices and sensors that are affixed to the human body. However, in the context of autism research, it poses a substantial challenge as children may feel overwhelmed wearing these devices and experience anxiety. As a result, the skeleton data extraction process is carried out by pre-trained pose detectors based on deep neural networks.
**(3) 3D Skeleton** Similar to 2D skeleton data, 3D skeletons represent each key joint with a 3D coordinate, introducing an additional depth dimension. Since all the data was collected using a single RGB camera, we also completed this process with the help of deep neural networks.
The technical details and tools utilized for feature extraction can be found in Section 4.5.
### Data Format
Suppose the original video clip includes \(N\) participants (child, trainer, and assistant), is composed of \(L\) frames, and the height and width of each frame are \(H\) and \(W\), respectively. As discussed above, each _data sample_ consists of four distinct components, with the data dimensions given in parentheses (a loading sketch follows this list):
* Optical flow \((L-1,H,W)\): saved as _npy_ files (Dosov et al., 2016).
* 2D skeleton \((L,N,17,2)\): 2D coordinates of 17 key joints, following COCO (Dosov et al., 2017) format, saved as _JSON_ files.
* 3D skeleton \((L,N,24,3)\): 3D coordinates of 24 key joints, following ROMP (Kang et al., 2016) format, saved as _npz_ files.
* Demographic and clinical evaluation for ASD \((9,\cdot)\): including nine attributes such as participant ID, date of birth, chronological age, ADOS-2 module, social affect score, restricted and repetitive behavior score, ADOS comparison score, motor functioning score, and severity of autism, saved as _CSV_ files.
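A minimal sketch of loading one data sample with the dimensions listed above is given below; the file names and directory layout are hypothetical, and only the formats (npy, JSON, npz, CSV) follow the description.

```python
import json
import numpy as np
import pandas as pd

sample_dir = "MMASD/arm_swing/sample_0001"             # hypothetical layout

flow = np.load(f"{sample_dir}/optical_flow.npy")        # (L-1, H, W)
with open(f"{sample_dir}/skeleton_2d.json") as f:
    skel_2d = json.load(f)                              # nested lists assumed: L x N x 17 x 2 (COCO joints)
skel_3d = np.load(f"{sample_dir}/skeleton_3d.npz")      # (L, N, 24, 3) (ROMP joints)
meta = pd.read_csv("MMASD/clinical_scores.csv")         # 9 demographic / ADOS attributes per participant

print(flow.shape, np.asarray(skel_2d).shape, skel_3d[skel_3d.files[0]].shape)
```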
### Implementation Details
#### 4.5.1. Optical flow
The Lucas-Kanade (Lucas, 2017) method is used for our study. It is a popular technique used in computer vision to estimate the motion of objects between consecutive frames. The method assumes that the displacement of the image contents between two nearby instants is small and approximately constant within a neighborhood of the point under consideration. By solving the optical flow equation for all pixels within a window centered at the point, the method can estimate the motion of objects in the image sequence. Overall, the Lucas-Kanade optical flow method is an effective and popular technique for estimating motion in various computer vision applications.
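As an illustration, sparse Lucas-Kanade flow between consecutive frames can be computed with OpenCV as sketched below; the tracking parameters are generic defaults and not necessarily those used to build MMASD.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mp4")                      # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Corner points to track; assumes the default Shi-Tomasi detector is acceptable.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

flows = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(15, 15), maxLevel=2)
    good_new = nxt[status.flatten() == 1]
    good_old = pts[status.flatten() == 1]
    flows.append(good_new - good_old)                   # per-point displacement between frames
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
```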
\begin{table}
\begin{tabular}{l l l} \hline \hline
Activity Class & Activity Description & Count \\ \hline
_Arm swing_ & The participant raises their left and right arm in succession while maintaining an upright posture. & 105 \\
_Body swing_ & The participant swings their body left and right while stretching out both hands, one behind the other. & 119 \\
_Chest expansion_ & The participant gradually opens and closes their chest. & 114 \\
_Drumming_ & The participant plays either the snare or Tubano drum with one or both hands. & 168 \\
_Frog pose_ & The participant widens their knees as far as possible and places their feet with the big toes touching. & 113 \\
_Maracas forward shaking_ & The participant shakes maracas back and forth, an instrument commonly appearing in Caribbean music. & 103 \\
_Maracas shaking_ & The participant shakes maracas left and right in front of their chest. & 130 \\
_Sing and clap_ & The participant sits on the ground while simultaneously singing and clapping, typically done at the start or end of an intervention. & 113 \\
_Squat_ & The participant repeatedly performs a crouching stance with their knees bent. & 101 \\
_Tree pose_ & The participant balances on one leg and places the sole of the other foot on the inner thigh, calf, or ankle of the standing leg. & 129 \\
_Twist pose_ & The participant sits with their legs crossed and twists their torso to one side, keeping their lower body stable and grounded. & 120 \\ \hline \hline
\end{tabular}
\end{table}
Table 3. Description and distribution of all 11 activity classes in MMASD.
#### 4.5.2. 2D skeleton
The OpenPose method (Deng et al., 2017) is used to extract 2D skeletons from our dataset of human action videos. OpenPose, developed at Carnegie Mellon University and based on deep learning techniques, is a real-time multi-person keypoint detection library that can accurately detect the key joints of a human body from an image or video feed. It first predicts confidence maps for every body part and subsequently associates them with distinct individuals via Part Affinity Fields. The library is open-source and written in C++ with a Python API, which makes it easy to use and integrate into various computer vision applications.
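For concreteness, a small sketch of converting per-frame OpenPose JSON output into the \((L,N,17,2)\) arrays described in the Data Format section above is given below; the field names ("people", "pose_keypoints_2d") follow OpenPose's usual output format, and the file-name pattern is an assumption.

```python
import glob
import json
import numpy as np

def load_openpose_sequence(json_dir, num_people, num_joints=17):
    """Stack per-frame OpenPose JSON files into an (L, N, num_joints, 2) array."""
    frames = []
    for path in sorted(glob.glob(f"{json_dir}/*_keypoints.json")):  # assumed naming
        with open(path) as f:
            people = json.load(f).get("people", [])
        frame = np.zeros((num_people, num_joints, 2))
        for i, person in enumerate(people[:num_people]):
            kp = np.asarray(person["pose_keypoints_2d"]).reshape(-1, 3)
            frame[i] = kp[:num_joints, :2]   # keep (x, y), drop the confidence score
        frames.append(frame)
    return np.stack(frames)
```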
#### 4.5.3. 3D skeleton
We utilized Regression of Multiple 3D People (ROMP), proposed by Sun _et al._ (Sun et al., 2017), a state-of-the-art technique for estimating the depth and pose of individuals from a single 2D image. It is a deep learning approach built on a fully convolutional architecture that takes an input image and directly predicts the 3D locations of the body joints of the person(s) present in the image. This is achieved by directly estimating multiple differentiable maps from the entire image, namely a Body Center heatmap and a Mesh Parameter map. The 3D body mesh parameter vectors of all individuals can be extracted from these maps using a simple parameter sampling process. These vectors are then fed into the SMPL body model to generate multi-person 3D meshes.
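The parameter-sampling step can be pictured with the following toy snippet (a conceptual sketch only, not the authors' released code; the map sizes, parameter dimension, and threshold are arbitrary):

```python
import numpy as np

def sample_mesh_parameters(center_heatmap, param_map, threshold=0.3):
    """Pick local maxima of the body-center heatmap and read out the parameter
    vector stored at each selected location of the mesh parameter map."""
    H, W = center_heatmap.shape
    people = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            patch = center_heatmap[y - 1:y + 2, x - 1:x + 2]
            if center_heatmap[y, x] >= threshold and center_heatmap[y, x] == patch.max():
                people.append(param_map[y, x])   # one parameter vector per detected person
    return np.asarray(people)                    # fed to the SMPL model downstream

# Toy example: a 64x64 heatmap with one synthetic body-center peak.
heat = 0.2 * np.random.rand(64, 64)
heat[20, 30] = 0.9
params = sample_mesh_parameters(heat, np.random.rand(64, 64, 142))  # arbitrary 142-dim vectors
```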
In our study, we employed the code and pre-trained model released by the authors and applied them to our dataset. In this way, we obtained the 2D and 3D coordinates of the key joints of the person(s) in each clip.
## 5. Discussion
This section delves into the limitations of MMASD and the challenges we encountered during the dataset preparation process. As the experiments were conducted in real-world settings, we faced common computer vision challenges, including varying video quality, illumination changes, cluttered backgrounds, and pose variations. Notably, in the feature extraction stage, we encountered pose detection failures in challenging scenarios, such as body occlusion and participants moving out of the scene. The intrinsic video quality limitation of MMASD also restricted us from capturing subtle and fine-grained features, such as facial expressions.
Moreover, it is imperative to conduct in-depth investigations into domain-specific challenges. For instance, in standard benchmarks for human activity recognition, typically developing individuals exhibit dominant and continuous actions with similar intensity. However, data on MMASD may not solely contain the target behavior throughout the entire duration, as impromptu actions or distractions may occur during therapy sessions for children. Furthermore, unlike prevailing benchmarks that collect ground truth skeleton data by attaching sensors to the human body, MMASD generates skeleton data by means of pre-trained deep neural networks. This is because children with autism have limited tolerance for external stimuli, and the presence of sensors on their bodies may cause them to become anxious, agitated, or exhibit challenging behaviors. Consequently, the skeleton data's reliability in MMASD depends on the performance of the underlying pose detectors.
In addition, children with autism can exhibit varying motor functions, resulting in different intensity levels and completion rates for the same activity. Clinician evaluation results can offer additional guidance in determining and classifying activities for each individual. Nevertheless, there is a need to explore ways to incorporate tabulated clinician evaluation results with other movement-related features to generate comprehensive representations.
## 6. Conclusion
Autism research has been greatly facilitated by machine learning techniques, which offer cost-effective and accurate ways to analyze various aspects of children's behavior and development. However, the lack of open-access datasets has posed challenges to conducting fair comparisons and promoting sound research practices in the field of autism research. To address this issue, we have proposed MMASD, a publicly accessible multimodal dataset that features diverse scenes collected from therapy interventions for children with autism. Our dataset includes multimodal features such as 2D & 3D skeleton data, optical flow, demographic data, and ADOS ratings, offering a confidential data-sharing approach that maintains critical motion information. MMASD distinguishes itself from existing works by utilizing privacy-preserving multimodal features to provide comprehensive representations of full-body movements across a range of diverse themes. Moreover, each scene in our dataset depicts the same activity performed by a child and one or more therapists, providing a valuable template for comparing typically developing individuals with children with ASD.
There are several directions for future work that are worth exploring. Firstly, further research can be conducted to develop and compare machine learning models on the MMASD dataset for various tasks, such as action quality assessment (Krizhevsky et al., 2014), interpersonal synchrony estimation (Krizhevsky et al., 2014; Krizhevsky et al., 2014), and cognitive status tracking. This can help establish benchmark performance and identify state-of-the-art methods for analyzing the full-body movements of children during therapeutic interventions. In addition, new approaches can be investigated to overcome pose detection failures in MMASD, for example by introducing pose uncertainty estimates or an attention mechanism that assigns higher weights to more reliable body joints. Furthermore, the MMASD dataset can be expanded in several respects: further intervention scenarios could be included, along with more features such as mutual gaze (Krizhevsky et al., 2014) and additional annotations like a synchrony score. Finally, efforts can be made toward dataset augmentation using existing benchmarks, not limited to the autism domain, by matching samples with similar motion features (Krizhevsky et al., 2014), which can significantly expand the scale of the autism dataset.
|
2310.15292 | Microscopic theory of spin Seebeck effect in antiferromagnets | We develop a microscopic theory for the spin Seebeck effect (SSE) in N\'eel
and canted phases of antiferromagnetic insulators. We calculate the DC spin
current tunneling from the antiferromagnet to an attached metal, incorporating
the spin-wave theory and the non-equilibrium Green's function approach. Our
result shows a sign change of the spin current at the spin-flop phase
transition between N\'eel and canted phases, which is in agreement with a
recent experiment for the SSE of $\rm Cr_2O_3$ in a semi-quantitative level.
The sign change can be interpreted from the argument based on the density of
states of up- and down-spin magnons, which is related to the polarized-neutron
scattering spectra. The theory also demonstrates that the spin current in the
N\'eel phase is governed by the magnon correlation, while that in the canted
phase consists of two parts: Contributions from not only the magnon dynamics
but also the static transverse magnetization. This result leads to a prediction
that at sufficiently low temperatures, the spin current non-monotonically
changes as a function of magnetic field in the canted phase. Towards a more
unified understanding of the SSE in antiferromagnets, we further discuss some
missing links of theories of SSE: Interface properties, effects of the
transverse spin moment in the canted phase, the spin-orbit coupling in the
metal, etc. Finally, we compare the SSE of antiferromagnets with those of
different magnetic phases such as ferromagnets, ferrimagnets, an
one-dimensional spin liquid, a spin-nematic liquid, and a spin-Peierls
(dimerized) phase. | Keisuke Masuda, Masahiro Sato | 2023-10-23T19:00:03Z | http://arxiv.org/abs/2310.15292v2 | # Microscopic theory of spin Seebeck effect in antiferromagnets
###### Abstract
We develop a microscopic theory for the spin Seebeck effect (SSE) in Neel and canted phases of antiferromagnetic insulators. We calculate the DC spin current tunneling from the antiferromagnet to an attached metal, incorporating the spin-wave theory and the non-equilibrium Green's function approach. Our result shows a sign change of the spin current at the spin-flop phase transition between Neel and canted phases, which is in agreement with a recent experiment for the SSE of Cr\({}_{2}\)O\({}_{3}\) in a semi-quantitative level. The sign change can be interpreted from the argument based on the density of states of up- and down-spin magnons, which is related to the polarized-neutron scattering spectra. The theory also demonstrates that the spin current in the Neel phase is governed by the magnon correlation, while that in the canted phase consists of two parts: Contributions from not only the magnon dynamics but also the static transverse magnetization. This result leads to a prediction that at sufficiently low temperatures, the spin current non-monotonically changes as a function of magnetic field in the canted phase. Towards a more unified understanding of the SSE in antiferromagnets, we further discuss some missing links of theories of SSE: Interface properties, effects of the transverse spin moment in the canted phase, the spin-orbit coupling in the metal, etc. Finally, we compare the SSE of antiferromagnets with those of different magnetic phases such as ferromagnets, ferrimagnets, an one-dimensional spin liquid, a spin-nematic liquid, and a spin-Peierls (dimerized) phase.
## I Introduction
The spin Seebeck effect (SSE) [1; 2; 3; 4] is one of the established ways of generating a DC spin current with a temperature gradient. Its usual setup is depicted in Fig. 1, in which a temperature gradient is applied to a junction system of a magnet (typically a magnetic insulator) and a metal film (typically Pt). The direction of the gradient is perpendicular to the interface. The generated spin current runs along the gradient in the magnet and is then injected into the metal. In the metal, the spin current is converted into an electric voltage via the inverse spin Hall effect (ISHE) [5; 6; 7], and the electric field of the voltage is perpendicular to both the spin polarization and the spin current flow. Measuring the voltage provides clear evidence of the spin current generation, i.e., the SSE. Since the spin current is basically a non-conserved quantity and no direct method of observing spin currents has been developed, we usually attach a metal as shown in Fig. 1. The attachment of a metal is an essential difference between the SSE and the electronic Seebeck effect.
The SSE in a ferromagnetic insulator was first detected in 2010 [4], and since then experimental [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21] and theoretical [22; 23; 24; 25; 26; 27; 28; 29; 30; 31] studies on the SSE in ordered magnets have been developed. Moreover, beyond the conventional ordered magnets, the SSE has also been studied in exotic magnetic materials, such as a paramagnetic phase of the geometrically frustrated magnet Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\)[32; 33], a one-dimensional spin liquid in Sr\({}_{2}\)CuO\({}_{3}\)[34], a spin-nematic liquid in LiCuVO\({}_{4}\)[35], a spin-Peierls (dimerized) state in CuGeO\({}_{3}\)[36], and a magnon-condensation state in Pb\({}_{2}\)V\({}_{3}\)O\({}_{9}\)[37].
Among these magnets, antiferromagnetic insulators are one of the most fundamental classes of magnetic materials. They have also attracted much attention as a new platform for ultrafast spintronics [38; 39; 40; 41] because their magnetic excitations (magnons) are located in a higher-energy THz regime (typically \(10^{11-13}\) Hz) compared to
Figure 1: (Color online) Schematic setup of the spin Seebeck effect in antiferromagnets. A bilayer structure consisting of an antiferromagnetic insulator and a paramagnetic metal is placed in a static magnetic field whose direction is in the \(z\)-axis. The temperature gradient is applied parallel to the \(x\) axis. A spin current is generated along the same \(x\) direction in the antiferromagnet and is injected into the metal via an interfacial interaction. The injected spin current is converted into the electromotive force via the inverse spin Hall effect in the metal. We denote the averaged temperature of the antiferromagnet as \(T_{\mathrm{s}}\) and that of the metal as \(T_{\mathrm{m}}\) (\(T_{\mathrm{s}}>T_{\mathrm{m}}\)).
the GHz-regime magnons in ferromagnets. It is therefore important to deeply understand the spin dynamics and the resulting nonequilibrium phenomena in a wide class of antiferromagnets, including the SSE.
Neel and canted ordered phases usually appear in most antiferromagnets in the space of temperature and external magnetic field. Recent experiments for antiferromagnets [17; 18; 20] have focused on SSEs in both phases: Ref. [17] reports an SSE experiment in the antiferromagnet Cr\({}_{2}\)O\({}_{3}\). The authors detect a large jump of the SSE voltage at the spin-flop phase transition from the Neel phase to the canted one, while they observe an almost vanishing SSE voltage in the Neel phase and no sign change of the voltage. The SSE of another antiferromagnet, MnF\({}_{2}\), is investigated in Ref. [18], and the authors again observe a voltage jump at the spin-flop transition and no sign change. On the other hand, the authors in Ref. [20] observe a sign change of the SSE voltage at the spin-flop transition in an SSE experiment for Cr\({}_{2}\)O\({}_{3}\).
At present, we have no theory explaining the whole of the above experimental results in a unified way. However, theoreticians have continuously studied the SSE in antiferromagnets towards a thorough understanding. Rezende argues that the sign of the spin current in the SSE of antiferromagnets is opposite to that in the SSE of ferromagnets. The paper of Ref. [30] explains the sign change observed in Ref. [20] by using the low-energy semi-classical theory for antiferromagnets and a phenomenological theory for bulk transport. The authors in Ref. [31] compute the spin current tunneling from an antiferromagnet to a metal based on the Ginzburg-Landau theory, assuming that on the interface, both ferromagnetic and antiferromagnetic spin moments in the antiferromagnet are independently coupled to conducting electrons in the metal. They then discuss the existence and disappearance of the sign change by tuning the ratio of the two couplings. In Ref. [42], Arakawa computes the temperature dependence of the thermal spin-current conductivity in bulk antiferromagnets by applying the Green's function method with magnon interactions.
In this paper, we add a new theoretical result for the SSE of ordered antiferromagnetic insulators. Incorporating the spin-wave theory and the non-equilibrium Green's function approach, we derive the microscopic formula for the tunneling DC spin current flowing from the antiferromagnet to a normal metal [24; 26; 43] under the assumption that the relevant interfacial interaction is a standard exchange coupling between localized spins in the antiferromagnet and conducting electron spins in the metal. Our theory explains the sign reversal of the SSE voltage, which semi-quantitatively agrees with the experiment of Ref. [20]. The sign change is clearly understood from an argument based on the densities of states (DoSs) of up- and down-spin magnons, which can be observed by polarized neutron scattering experiments. The theory also reveals that the leading term of the spin current stems from the magnon DoS in the Neel phase, whereas there are two main contributions to the spin current in the canted phase: One is the magnon DoS and the other is the transverse magnetization term inducing spin flipping of conducting electrons in the metal. This microscopic origin of the spin current further leads to a prediction that in the canted phase at low temperatures, the spin current varies non-monotonically as a function of the applied magnetic field. This low-temperature behavior could potentially be observed in future experiments.
In addition to the above microscopic calculation, we qualitatively discuss several effects of the interface and the bulk transport that are the missing parts of our theory, and we argue the possibility that these effects explain the different experimental results of Refs. [17; 18; 20]. We also compare the SSE voltage of antiferromagnets with those of several magnetic states such as ferromagnets, ferrimagnets, a 1D spin liquid, a spin-nematic liquid, and a spin-Peierls state. In particular, we show that the present microscopic approach for the tunneling spin current predicts a reasonable value of the spin current in both antiferromagnets and ferromagnets.
The remaining part of this paper is organized as follows. In Sec. II, we define an antiferromagnetic Heisenberg model on a cubic lattice and explain its magnetic phase diagram including both Neel and canted phases. Then we derive the magnon (spin-wave) band dispersions in both Neel and canted phases. The magnon representation of the spin operators is important for computing the spin current. A simple model for conduction electrons in a normal metal is introduced as well. In Sec. III, based on the Keldysh Green's function method, we derive the formula of the DC spin current flowing from the antiferromagnet to the normal metal via an interfacial exchange interaction. The formula demonstrates that in the canted phase, the DC spin current is divided into two main parts: The magnon part depends on temperature, while the part from the transverse magnetization is almost independent of temperature. We explain that the sign of the spin current is mainly determined by the magnons' density of states. The main results of this paper are in Secs. IV and V. In Sec. IV, we estimate the magnetic-field and temperature dependence of the tunneling DC spin current. We show that a sign reversal of the SSE voltage occurs at the first-order spin-flop transition due to the change of the dominant spin-current carrier between spin-up and spin-down magnons. The computed tunneling spin current semi-quantitatively agrees with a recent experimental result of SSE in the antiferromagnet Cr\({}_{2}\)O\({}_{3}\)[20]. We also predict that in the low-temperature regime, the spin current of the canted phase exhibits a non-monotonic magnetic-field dependence. In Sec. V, we quantitatively compare the SSE in antiferromagnets with that in ferromagnets by using the tunneling spin current formalism. The result is in good agreement with typical experimental results of SSEs. In addition, we compare SSE results in various sorts of magnets and, from the rich behavior of the SSE voltages, we argue that the SSE can be used not only as a spintronics function but also as a new probe for detecting features of different magnets. Section VI is devoted to
the argument about missing pieces of the tunneling spin current formalism. From it, we point out some important issues associated with the SSE. Finally, we briefly summarize our results in Sec. VII. Details of several calculation processes used in this paper are given in the Appendix.
## II Model
In this section, we review the basic physical nature of antiferromagnetic insulators. In this paper, we focus on a simple Heisenberg antiferromagnet on a cubic lattice (lattice spacing \(a_{0}\)), whose Hamiltonian is given by
\[\mathscr{H}_{\rm AFM}=J\sum_{\langle\mathbf{r},\mathbf{r}^{\prime}\rangle}\mathbf{S}_{ \mathbf{r}}\!\cdot\!\mathbf{S}_{\mathbf{r}^{\prime}}+D\sum_{\mathbf{r}}\left(S_{\mathbf{r}}^{z} \right)^{2}\!-B\sum_{\mathbf{r}}S_{\mathbf{r}}^{z},\] (II.1)
where \(\mathbf{S}_{\mathbf{r}}\) is the spin-\(S\) operator on site \(\mathbf{r}\), \(J>0\) is the antiferromagnetic exchange coupling constant, \(D<0\) is the easy-axis anisotropy, and \(B=g\mu_{\rm B}H\geq 0\) is the magnitude of the external magnetic field (\(g\), \(\mu_{\rm B}\), and \(H\) are the \(g\) factor, the Bohr magneton, and the magnetic field, respectively). The thermodynamic properties of this model are well understood. In Sec. II.1, we briefly explain the phase diagram of Eq. (II.1), which includes Neel and canted ordered phases. Then we derive the spin-wave band dispersions in both ordered phases in Secs. II.2 and II.3. The results will be used in the computation of the spin current of the SSE. Section II.4 defines a simple model for a bulk normal metal in the nonequilibrium steady state of the SSE setup.
### Magnetic phase diagram
Here we briefly summarize the phase diagram of the model (II.1) in the space \((k_{\rm B}T,B)\), where \(T\) is temperature and \(k_{\rm B}\) is the Boltzmann constant. As shown in Fig. 2(c), the antiferromagnetic Heisenberg model has two ordered phases, the Neel and canted phases. Let us consider a sufficiently low-temperature regime with a finite \(D\). The Neel phase is stable against a small magnetic field \(B\), while the canted phase becomes stable when \(B\) exceeds a certain value. The first-order phase transition between the Neel and canted phases is called the spin-flop transition.
Some of the phase transition points can be estimated semi-quantitatively from mean-field theory and the energy of the classical spin configuration. The Neel temperature \(T_{\rm N}\) at zero field \(B=0\) is obtained from the mean-field approximation as
\[k_{\rm B}T_{\rm N}=\frac{S(S+1)}{3}(6J+2|D|)\sim 2S(S+1)J,\] (II.2)
where we assume \(|D|\) is much smaller than \(J\). The magnetic field \(B_{\rm f}=g\mu_{\rm B}H_{\rm f}\) of the spin-flop transition at \(T=0\) is computed by the comparison between the ground-state energies in Neel and canted states. The result is
\[B_{\rm f}=2S\sqrt{|D|(6J-|D|)}\sim 2S\sqrt{6J|D|},\] (II.3)
where we again assume \(|D|\ll J\). Similarly, the critical magnetic field \(B_{\rm c}\) between canted state and fully polarized states at \(T=0\) is given by
\[B_{\rm c}=2S(6J-|D|).\] (II.4)
Experiments on the SSE of antiferromagnets have been performed for two materials, Cr\({}_{2}\)O\({}_{3}\) and MnF\({}_{2}\). Using the above estimates of the phase transition points, we can roughly determine the values of \(J\) and \(D\) in these compounds. Since the \(S=3/2\) antiferromagnet Cr\({}_{2}\)O\({}_{3}\) has \(T_{\rm N}\sim 300\) K and \(H_{\rm f}\sim 6\) T, we obtain \(J\sim 4\) meV and \(|D|\sim 5\times 10^{-3}\) meV. Similarly, since the \(S=5/2\) antiferromagnet MnF\({}_{2}\) has \(T_{\rm N}\sim 70\) K and \(H_{\rm f}\sim 9\) T, \(J\sim 0.4\) meV and \(|D|\sim 4\times 10^{-2}\) meV are obtained.
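As a rough numerical check of these estimates (not part of the original analysis), the relations (II.2) and (II.3) can be inverted; the snippet below assumes \(g=2\) and gives values of the same order as those quoted above, with differences of order unity reflecting the approximations and conventions involved.

```python
k_B = 8.617e-2    # Boltzmann constant in meV/K
mu_B = 5.788e-2   # Bohr magneton in meV/T
g = 2.0           # assumed g factor

def estimate_couplings(S, T_N, H_f):
    """Back out J and |D| (in meV) from T_N (K) and H_f (T) via Eqs. (II.2)-(II.3), |D| << J."""
    J = k_B * T_N / (2.0 * S * (S + 1.0))        # k_B T_N ~ 2 S (S+1) J
    B_f = g * mu_B * H_f                          # Zeeman energy at the spin-flop field
    D_abs = (B_f / (2.0 * S)) ** 2 / (6.0 * J)    # B_f ~ 2 S sqrt(6 J |D|)
    return J, D_abs

print(estimate_couplings(S=1.5, T_N=300.0, H_f=6.0))   # Cr2O3: roughly (3.4, 3e-3) meV
print(estimate_couplings(S=2.5, T_N=70.0, H_f=9.0))    # MnF2:  roughly (0.34, 2e-2) meV
```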
### Spin-wave dispersion in the Neel phase
In this subsection, we summarize the spin-wave theory for the Neel phase of Eq. (II.1) [44; 45]. The ordered state consists of two sublattices A and B, and the spin moment is parallel to the \(z\) axis due to the anisotropy \(D\), as shown in Fig. 2(a). Following the standard Holstein-Primakoff (HP) transformation, we can approximate the
Figure 2: (Color online) (a),(b) Schematic views of the spin arrangement of the model Eq. (II.1) in the classical ground state. The model Eq. (II.1) has two ordered phases, (a) the Néel phase along the \(S^{z}\) axis and (b) the canted AF phase in the \(S^{z}\)-\(S^{x}\) plane. The magnetic structure consists of two sublattices A and B. Blue and red arrows represent spins localized on the sublattices. (c) Schematic view of the phase diagram of the model Eq. (II.1) in the space \((k_{\rm B}T,B)\).
spin operators on A and B sublattices as
\[S^{z}_{\mathbf{r}_{\rm A}} =S-a^{\dagger}_{\mathbf{r}_{\rm A}}a_{\mathbf{r}_{\rm A}},\quad\ S^{+}_{\bm {r}_{\rm A}}\simeq\sqrt{2S}a_{\mathbf{r}_{\rm A}},\quad\ S^{-}_{\mathbf{r}_{\rm A}} \simeq\sqrt{2S}a^{\dagger}_{\mathbf{r}_{\rm A}},\] \[S^{z}_{\mathbf{r}_{\rm B}} =-S+b^{\dagger}_{\mathbf{r}_{\rm B}}b_{\mathbf{r}_{\rm B}},\quad\ S^{+}_{ \mathbf{r}_{\rm B}}\simeq\sqrt{2S}b^{\dagger}_{\mathbf{r}_{\rm B}},\quad\ S^{-}_{\mathbf{r }_{\rm B}}\simeq\sqrt{2S}b_{\mathbf{r}_{\rm B}},\] (II.5)
where \(a^{\dagger}_{\mathbf{r}_{\rm A}}\) and \(a_{\mathbf{r}_{\rm A}}\) (\(b^{\dagger}_{\mathbf{r}_{\rm B}}\) and \(b_{\mathbf{r}_{\rm B}}\)) are respectively the creation and annihilation operators of magnons on an A-sublattice site \(\mathbf{r}_{\rm A}\) (B-sublattice site \(\mathbf{r}_{\rm B}\)). The Fourier transformation of magnon operators is defined as
\[a_{\mathbf{k}} =\sqrt{\frac{2}{N}}\sum_{\mathbf{r}_{\rm A}}e^{-i\mathbf{k}\cdot\mathbf{r}_{ \rm A}}a_{\mathbf{r}_{\rm A}},\quad\ a^{\dagger}_{\mathbf{k}}=\sqrt{\frac{2}{N}}\sum_{ \mathbf{r}_{\rm A}}e^{i\mathbf{k}\cdot\mathbf{r}_{\rm A}}a^{\dagger}_{\mathbf{r}_{\rm A}},\] \[b_{\mathbf{k}} =\sqrt{\frac{2}{N}}\sum_{\mathbf{r}_{\rm B}}e^{i\mathbf{k}\cdot\mathbf{r}_{ \rm B}}b_{\mathbf{r}_{\rm B}},\quad\ b^{\dagger}_{\mathbf{k}}=\sqrt{\frac{2}{N}}\sum_{ \mathbf{r}_{\rm B}}e^{-i\mathbf{k}\cdot\mathbf{r}_{\rm B}}b^{\dagger}_{\mathbf{r}_{\rm B}},\] (II.6)
where \(N\) is the total number of sites of the antiferromagnet and \(\mathbf{k}=(k_{x},k_{y},k_{z})\) is the wave number in the first Brillouin zone for the sublattice. Substituting Eqs. (II.5) and (II.6) into the model (II.1) and neglecting magnon interactions under the assumption that the magnon density is low enough, we obtain the low-energy spin-wave Hamiltonian. Through a suitable Bogoliubov transformation, we finally arrive at the diagonalized spin-wave Hamiltonian
\[\mathscr{H}_{\rm SW}^{\rm(Neel)}=\sum_{\mathbf{k}}\varepsilon_{\alpha}^{\rm(N)}( \mathbf{k})\alpha^{\dagger}_{\mathbf{k}}\alpha_{\mathbf{k}}+\sum_{\mathbf{k}}\varepsilon_{ \beta}^{\rm(N)}(\mathbf{k})\beta^{\dagger}_{\mathbf{k}}\beta_{\mathbf{k}}+(\text{const.}),\] (II.7)
where \(\alpha_{\mathbf{k}}\) and \(\beta_{\mathbf{k}}\) are the magnon annihilation operators and the corresponding two magnon (spin-wave) dispersions are given by
\[\varepsilon_{\alpha}^{\rm(N)}(\mathbf{k}) =S\sqrt{\left(6J+2|D|\right)^{2}-\left(2J\gamma_{\mathbf{k}}\right)^{ 2}}+B,\] (II.8) \[\varepsilon_{\beta}^{\rm(N)}(\mathbf{k}) =S\sqrt{\left(6J+2|D|\right)^{2}-\left(2J\gamma_{\mathbf{k}}\right)^{ 2}}-B.\] (II.9)
The parameter \(\gamma_{\mathbf{k}}\) is defined as
\[\gamma_{\mathbf{k}}=\cos\left(2a_{0}k_{x}\right)+\cos\left(2a_{0}k_{y}\right)+ \cos\left(2a_{0}k_{z}\right).\] (II.10)
In the magnon description, the spin operator is represented as
\[S^{+}_{\mathbf{r}_{\rm A}} \simeq\sqrt{2S}\sqrt{\frac{1}{N/2}}\sum_{\mathbf{k}}e^{i\mathbf{k}\cdot \mathbf{r}_{\rm A}}\Big{(}\cosh\phi_{\mathbf{k}}\alpha_{\mathbf{k}}+\sinh\phi_{\mathbf{k}} \beta^{\dagger}_{\mathbf{k}}\Big{)},\] (II.11) \[S^{+}_{\mathbf{r}_{\rm B}} \simeq\sqrt{2S}\sqrt{\frac{1}{N/2}}\sum_{\mathbf{k}}e^{i\mathbf{k}\cdot \mathbf{r}_{\rm B}}\Big{(}\sinh\phi_{\mathbf{k}}\alpha_{\mathbf{k}}+\cosh\phi_{\mathbf{k}} \beta^{\dagger}_{\mathbf{k}}\Big{)},\] (II.12)
where we neglect the more-than two magnon terms. The angle \(\phi_{\mathbf{k}}\) is used in the Bogoliubov transformation and is determined by
\[\tanh\left(2\phi_{\mathbf{k}}\right)=\frac{-J\gamma_{\mathbf{k}}}{3J+|D|}.\] (II.13)
This representation of the spins is essential for computing the spin current of the SSE.
We show some typical magnon bands of the Neel state in Figs. 3(a) and 3(b). In the Neel phase, the two spin-wave modes have opposite spin polarizations (angular momenta). Since the \(\alpha\)-mode magnon has the down-spin polarization and the \(\beta\)-mode one has the up-spin polarization, \(\varepsilon_{\alpha}^{\rm(N)}(\mathbf{k})\) increases while \(\varepsilon_{\beta}^{\rm(N)}(\mathbf{k})\) decreases as the field increases. At zero field, \(B=0\), the two modes are degenerate. The Neel phase spontaneously breaks the one-site translation symmetry, while the SU(2) spin rotation symmetry is also spontaneously broken at \(B=D=0\). In the latter case, the two spin-wave excitations become gapless, corresponding to the Nambu-Goldstone modes.
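These features of the Neel-phase bands can be checked numerically by direct evaluation of Eqs. (II.8)-(II.10); the short sketch below uses Cr\({}_{2}\)O\({}_{3}\)-like parameters (an illustrative choice, not a fit), and the zone-center gap of the \(\beta\) branch closes as \(B\) approaches the spin-flop scale \(B_{\rm f}\simeq 2S\sqrt{6J|D|}\).

```python
import numpy as np

def neel_dispersions(k, J, D_abs, S, B, a0=1.0):
    """Magnon branches of the Neel phase, Eqs. (II.8)-(II.10); k has shape (..., 3)."""
    gamma = (np.cos(2 * a0 * k[..., 0]) + np.cos(2 * a0 * k[..., 1])
             + np.cos(2 * a0 * k[..., 2]))
    omega = S * np.sqrt((6 * J + 2 * D_abs) ** 2 - (2 * J * gamma) ** 2)
    return omega + B, omega - B        # (eps_alpha, eps_beta)

J, D_abs, S = 4.0, 5e-3, 1.5           # Cr2O3-like parameters in meV (illustrative)
k0 = np.zeros(3)                        # zone center
for B in (0.0, 0.3, 0.6):               # Zeeman energy in meV
    ea, eb = neel_dispersions(k0, J, D_abs, S, B)
    print(f"B = {B:.1f} meV:  eps_alpha = {ea:.3f} meV,  eps_beta = {eb:.3f} meV")
# At B = 0 the two branches are degenerate; the beta (up-spin) branch softens with field.
```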
### Spin-wave dispersion in the canted AF phase
Here, we summarize the result of the spin-wave theory for the canted phase, in which we assume that the spin moment lies in the \(S^{z}\)-\(S^{x}\) plane. This phase also has two sublattices A and B, as shown in Fig. 2(b). From the condition of minimizing the ground-state energy, the canted angle \(\theta\) defined in Fig. 2(b) is given by
\[\cos\theta=\frac{B}{2S(6J-|D|)}.\] (II.14)
Based on this classical ground state, we can obtain the spin-wave Hamiltonian by taking into account the quantum fluctuation effect around the canted moment. The resultant Hamiltonian is given by
\[\mathscr{H}_{\rm SW}^{\rm(Cant)}=\sum_{\mathbf{k}}\varepsilon_{\alpha}^{\rm(C)}( \mathbf{k})\alpha_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}}+\sum_{\mathbf{k}}\varepsilon_{ \beta}^{\rm(C)}(\mathbf{k})\beta_{\mathbf{k}}^{\dagger}\beta_{\mathbf{k}}+(\text{const.}),\] (II.15)
where the two magnon dispersions are
\[\varepsilon_{\alpha}^{\rm(C)}(\mathbf{k}) =\sqrt{2JS(3-\gamma_{\mathbf{k}})\Big{\{}2JS(3+\gamma_{\mathbf{k}})-2|D|S +B\cos\theta-4JS(3+\gamma_{\mathbf{k}})\cos^{2}\theta+4|D|S\cos^{2}\theta\Big{\}}},\] (II.16) \[\varepsilon_{\beta}^{\rm(C)}(\mathbf{k}) =\sqrt{2JS(3+\gamma_{\mathbf{k}})\Big{\{}2JS(1-2\cos^{2}\theta)(3- \gamma_{\mathbf{k}})-2|D|S(1-2\cos^{2}\theta)+B\cos\theta\Big{\}}}.\] (II.17)
Using these magnon operators, we can represent the spin operators as
\[S_{\mathbf{r}_{\rm A}}^{+} \simeq\frac{\sqrt{S}}{2}(1+\cos\theta)\sqrt{\frac{1}{N/2}}\sum_{ \mathbf{k}}e^{i\mathbf{k}\cdot\mathbf{r}_{\rm A}}\Big{(}\cosh\varphi_{\mathbf{k}}^{\alpha} \alpha_{\mathbf{k}}+\sinh\varphi_{\mathbf{k}}^{\alpha}\alpha_{-\mathbf{k}}^{\dagger}+\sinh \varphi_{\mathbf{k}}^{\beta}\beta_{\mathbf{k}}^{\dagger}+\cosh\varphi_{\mathbf{k}}^{\beta }\beta_{-\mathbf{k}}\Big{)}\] \[-\frac{\sqrt{S}}{2}(1-\cos\theta)\sqrt{\frac{1}{N/2}}\sum_{\mathbf{k} }e^{-i\mathbf{k}\cdot\mathbf{r}_{\rm A}}\Big{(}\cosh\varphi_{\mathbf{k}}^{\alpha}\alpha_{ \mathbf{k}}^{\dagger}+\sinh\varphi_{\mathbf{k}}^{\alpha}\alpha_{-\mathbf{k}}+\sinh \varphi_{\mathbf{k}}^{\beta}\beta_{\mathbf{k}}+\cosh\varphi_{\mathbf{k}}^{\beta}\beta_{- \mathbf{k}}^{\dagger}\Big{)}-S\sin\theta,\] (II.18)
\[S_{\mathbf{r}_{\rm B}}^{+} \simeq-\frac{\sqrt{S}}{2}(1+\cos\theta)\sqrt{\frac{1}{N/2}}\sum_{ \mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{r}_{\rm B}}\Big{(}\sinh\varphi_{\mathbf{k}}^{\alpha} \alpha_{\mathbf{k}}^{\dagger}+\cosh\varphi_{\mathbf{k}}^{\alpha}\alpha_{-\mathbf{k}}-\cosh \varphi_{\mathbf{k}}^{\beta}\beta_{\mathbf{k}}-\sinh\varphi_{\mathbf{k}}^{\beta}\beta_{- \mathbf{k}}^{\dagger}\Big{)}\] \[\quad+\frac{\sqrt{S}}{2}(1-\cos\theta)\sqrt{\frac{1}{N/2}}\sum_{ \mathbf{k}}e^{i\mathbf{k}\cdot\mathbf{r}_{\rm B}}\Big{(}\sinh\varphi_{\mathbf{k}}^{\alpha} \alpha_{\mathbf{k}}+\cosh\varphi_{\mathbf{k}}^{\alpha}\alpha_{-\mathbf{k}}^{\dagger}-\cosh \varphi_{\mathbf{k}}^{\beta}\beta_{\mathbf{k}}^{\dagger}-\sinh\varphi_{\mathbf{k}}^{ \beta}\beta_{-\mathbf{k}}\Big{)}+S\sin\theta,\] (II.19)
where we neglect higher-order magnon terms. The angles \(\varphi_{\mathbf{k}}^{\alpha}\) and \(\varphi_{\mathbf{k}}^{\beta}\) are given by
\[\tanh\left(2\varphi_{\mathbf{k}}^{\alpha}\right)=\frac{-2\text{C}_{4 }+\text{C}_{2}(\mathbf{k})}{\text{C}_{1}-\text{C}_{3}(\mathbf{k})},\] \[\tanh\left(2\varphi_{\mathbf{k}}^{\beta}\right)=\frac{-2\text{C}_{4}- \text{C}_{2}(\mathbf{k})}{\text{C}_{1}+\text{C}_{3}(\mathbf{k})},\] (II.20)
where \(C_{1-4}\) are defined in Eq. (C.6).
Figures 3(c)-3(f) depict typical magnon bands in the canted state. In this phase, both spin-wave modes (the \(\alpha\) and \(\beta\) modes) possess, on average, a down-spin polarization (negative angular momentum along the \(z\) axis) because a uniform magnetization exists. The \(\alpha\) mode is gapless around \(\mathbf{k}=(0,0,0)\), while the \(\beta\) mode is gapless around \(\mathbf{k}=(\pi/(2a_{0}),\pi/(2a_{0}),\pi/(2a_{0}))\). This gaplessness originates from the spontaneous breaking of the U(1) spin rotation symmetry around the \(S^{z}\) axis. From the comparison between Figs. 3(c) and (d) [(e) and (f)], one sees that the band shape, especially that of the higher-energy band, strongly depends on the value of the magnetic field \(B\).
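The gapless points and the field dependence of the canted-phase bands can likewise be checked by evaluating Eqs. (II.14), (II.16), and (II.17) directly; the sketch below again uses illustrative Cr\({}_{2}\)O\({}_{3}\)-like parameters with \(B\) chosen above the spin-flop field.

```python
import numpy as np

def canted_dispersions(k, J, D_abs, S, B, a0=1.0):
    """Magnon branches of the canted phase, Eqs. (II.14), (II.16), (II.17)."""
    c = B / (2 * S * (6 * J - D_abs))            # cos(theta), Eq. (II.14)
    gamma = (np.cos(2 * a0 * k[..., 0]) + np.cos(2 * a0 * k[..., 1])
             + np.cos(2 * a0 * k[..., 2]))
    eps_a = np.sqrt(2 * J * S * (3 - gamma)
                    * (2 * J * S * (3 + gamma) - 2 * D_abs * S + B * c
                       - 4 * J * S * (3 + gamma) * c**2 + 4 * D_abs * S * c**2))
    eps_b = np.sqrt(2 * J * S * (3 + gamma)
                    * (2 * J * S * (1 - 2 * c**2) * (3 - gamma)
                       - 2 * D_abs * S * (1 - 2 * c**2) + B * c))
    return eps_a, eps_b

J, D_abs, S, B = 4.0, 5e-3, 1.5, 2.0                              # meV; B above the spin-flop field
print(canted_dispersions(np.zeros(3), J, D_abs, S, B))            # alpha branch gapless at k = 0
print(canted_dispersions(np.full(3, np.pi / 2), J, D_abs, S, B))  # beta branch gapless at the zone corner
```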
### Normal metal
Here, we introduce the model for conduction electrons in the attached metal of Fig. 1. We define a simple free electron model for the metal and its Hamiltonian is given by
\[H_{\rm NM}=\sum_{\mathbf{q},\alpha}\epsilon_{\mathbf{q}}f_{\mathbf{q},\alpha}^{\dagger}f_{ \mathbf{q},\alpha},\] (II.21)
where \(f_{\mathbf{q},\alpha}\) (\(f^{\dagger}_{\mathbf{q},\alpha}\)) is the annihilation (creation) operator of the conducting electron with wave vector \(\mathbf{q}\) and spin \(\alpha=\uparrow,\downarrow\), and \(\epsilon_{\mathbf{q}}=\big{(}\hbar^{2}q^{2}\big{)}/2m\) is the kinetic energy (\(q=|\mathbf{q}|\)). In the real setup, the metal possesses a finite spin-orbit (SO) interaction to yield ISHE, but we assume that effects of the SO interaction are negligible when we estimate the tunneling spin current.
The quantum dynamics (i.e., time evolution) of conducting electrons obeys the Hamiltonian (II.21) (e.g., \(f_{\mathbf{q},\alpha}(t)=e^{iH_{\rm NM}t/\hbar}f_{\mathbf{q},\alpha}e^{-iH_{\rm NM}t/\hbar}\)), but it is known that in a bilayer (or multilayer) system consisting of a normal metal and a magnet, a spin-dependent chemical potential [46; 47] appears in the vicinity of the interface when a nonequilibrium steady state (NESS) is realized by applying an external force (an electric field, a temperature gradient, etc.) to the system. Therefore, to describe such NESSs, we also define another Hamiltonian for the metal [48]:
\[\mathcal{H}_{\rm NM}=\sum_{\mathbf{q}}\bigg{(}\xi_{\mathbf{q}}-\frac{\delta\mu_{\rm s} }{2}\bigg{)}f^{\dagger}_{\mathbf{q},\uparrow}f_{\mathbf{q},\uparrow}+\sum_{\mathbf{q}} \bigg{(}\xi_{\mathbf{q}}+\frac{\delta\mu_{\rm s}}{2}\bigg{)}f^{\dagger}_{\mathbf{q}, \downarrow}f_{\mathbf{q},\downarrow},\] (II.22)
where \(\mu_{\alpha}\) (\(\alpha=\uparrow,\downarrow\)) is the spin-dependent chemical potential, \(\xi_{\mathbf{q}}=\epsilon_{\mathbf{q}}-\mu\) is the kinetic energy measured from the averaged chemical potential \(\mu=(\mu_{\uparrow}+\mu_{\downarrow})/2\), and \(\delta\mu_{\rm s}=\mu_{\uparrow}-\mu_{\downarrow}\) is the potential difference at the interface. In the NESS of the metal, the thermal average should be taken by using the second Hamiltonian \(\mathcal{H}_{\rm NM}\) as \(\langle\cdots\rangle=\text{Tr}\big{(}e^{-\beta\mathcal{H}_{\rm NM}}\cdots\big{)}/\text{Tr}\big{(}e^{-\beta\mathcal{H}_{\rm NM}}\big{)}\)[48]. We note that in the SSE setup, the conduction electrons relax to the NESS through dissipation processes, which cannot be described by simple free electron models such as Eq. (II.21).
The magnitude of \(\delta\mu_{\rm s}\) is known to be much smaller than the other typical energy scales of the bilayer SSE setup such as magnetic exchanges and the Fermi energy (or band width). For instance, Ref. [46] estimates \(\delta\mu_{\rm s}\) at the interface of YIG and Pt: \(\delta\mu_{\rm s}\sim\mathcal{O}(10^{1})\)\(\upmu\)V. Therefore, effects of \(\delta\mu_{\rm s}\) are often negligible, but (as we will explain below) a finite \(\delta\mu_{\rm s}\) plays a crucial role when we consider the SSE in the canted phase.
## III Tunneling spin current
This section is devoted to explaining the generic formula for the tunneling spin current from a magnet to a metal, which will be applied to the SSE of antiferromagnets in the next section. The details of the calculations are given in the Appendix; here we mainly discuss the important properties of the formula. As we briefly mentioned in the Introduction, the tunneling spin current is proportional to the measured SSE voltage in the metal. Therefore, the dependence of the SSE voltage on parameters such as magnetic field and temperature can be read off from that of the spin current, and it is sufficient to compute the tunneling spin current instead of the voltage. In Secs. III.1-III.3, based on the non-equilibrium Green's function approach, we derive the formula for the thermally generated DC spin current tunneling from the antiferromagnetic insulator to the paramagnetic metal via the interfacial interaction [24; 26; 43]. This type of tunneling current formula was first developed in the field of mesoscopic physics to analyze the tunneling electric current from left to right leads through a quantum dot [43]. Adachi _et al._ applied this approach to the SSE in bilayer systems of a ferromagnet and a metal [24]. The driving force for the electric current in the dot systems is a chemical-potential gradient (i.e., an electric field), while that for the spin current is a temperature gradient. This spin-current formula has been applied to and developed for SSEs of different magnets such as a 1D spin liquid [34], a spin-nematic liquid [35], and a spin-Peierls state [36]. The formula has succeeded in explaining the magnetic-field dependence of these SSEs at a semi-quantitative level. We will apply it to the SSE in antiferromagnets.
In Sec. III.4, we show that the tunneling spin current and its sign can be interpreted from the viewpoint of the magnon density of states, which can be observed by inelastic neutron scattering experiments. This interpretation is essential for understanding the magnetic-field dependence of the SSE in antiferromagnets in the next section.
### Perturbation calculation of tunneling spin current
We focus on the bilayer system in Fig. 1, which consists of an antiferromagnet and a normal metal. These two materials are weakly coupled through an s-d exchange interaction at the interface. In the real experiment, the temperature varies smoothly as a function of the coordinate \(x\), but here we approximate this varying temperature by two representative values, the averaged temperature of the antiferromagnet \(T_{\rm s}\) and that of the normal metal \(T_{\rm m}\), and we assume \(T_{\rm s}>T_{\rm m}\) for simplicity. We consider the spin current injected from the magnet into the metal in the non-equilibrium steady state maintained by this temperature difference.
The total Hamiltonian of the bilayer system consists of the following three parts:
\[\mathscr{H}=\mathscr{H}_{\rm AFM}+\mathscr{H}_{\rm NM}+\mathscr{H}_{\rm int}.\] (III.1)
The first term \(\mathscr{H}_{\rm AFM}\) is the antiferromagnetic Heisenberg model. The second term \(\mathscr{H}_{\rm NM}\) represents the Hamiltonian of conduction electrons in the normal metal. The third term \(\mathscr{H}_{\rm int}\) is the s-d exchange interaction at the interface, and we assume that the spins localized on the A sublattice and B one couple with the conduction electrons with equal weight, respectively:
\[\mathscr{H}_{\rm int}=\sum_{\mathbf{r}_{\rm A}\in\text{int-A}}J_{\rm sd}(\mathbf{r}_{ \rm A})\mathbf{S}_{\mathbf{r}_{\rm A}}\cdot\mathbf{\sigma}_{\mathbf{r}_{\rm A}}+\sum_{\mathbf{r}_{ \rm B}\in\text{int-B}}J_{\rm sd}(\mathbf{r}_{\rm B})\mathbf{S}_{\mathbf{r}_{\rm B}}\cdot\bm {\sigma}_{\mathbf{r}_{\rm B}},\] (III.2)
where \(J_{\rm sd}(\mathbf{r}_{\rm X})\) is the s-d exchange coupling constant at a \(\mathbf{r}_{\rm X}\) site (X=A or B) on the interface, and \(\sum_{\mathbf{r}\in\text{int-X}}\) stands for the summation w.r.t. all the X-sublattice sites
on the interface. The symbol \(\mathbf{\sigma}_{\mathbf{r}_{\rm X}}\) denotes the metal's conduction-electron spin coupled to the magnet's localized spin \(\mathbf{S}_{\mathbf{r}_{\rm X}}\) at a X-sublattice site \(\mathbf{r}_{\rm X}\) on the interface.
We define the tunneling DC spin current operator as the time derivative of the spin-polarization density of conduction electrons at the interface:
\[I_{\rm S} =\sum_{\mathbf{r}_{\rm A}\in\text{int-A}}\frac{\partial}{\partial t} \sigma_{\mathbf{r}_{\rm A}}^{z}(t)+\sum_{\mathbf{r}_{\rm B}\in\text{int-B}}\frac{ \partial}{\partial t}\sigma_{\mathbf{r}_{\rm B}}^{z}(t)\] \[=\left(-\frac{i}{\hbar}\right)\sum_{\mathbf{r}_{\rm A}\in\text{int-A}} \big{[}\sigma_{\mathbf{r}_{\rm A}}^{z}(t),\mathscr{H}\big{]}\] \[\qquad+\left(-\frac{i}{\hbar}\right)\sum_{\mathbf{r}_{\rm B}\in\text{ int-B}}\big{[}\sigma_{\mathbf{r}_{\rm B}}^{z}(t),\mathscr{H}\big{]},\] (III.3)
where \(\sigma_{\mathbf{r}}^{z}(t)=e^{i\mathscr{H}t/\hbar}\sigma_{\mathbf{r}}^{z}e^{-i\mathscr{ H}t/\hbar}\) and \(\sigma_{\mathbf{r}}^{z}=\frac{1}{2}\Big{(}f_{\mathbf{r},\uparrow}^{\dagger}f_{\mathbf{r}, \uparrow}-f_{\mathbf{r},\downarrow}^{\dagger}f_{\mathbf{r},\downarrow}\Big{)}\) with \(f_{\mathbf{r},\alpha}\) and \(f_{\mathbf{r},\alpha}^{\dagger}\) being creation and annihilation operators of conducting electrons with spin \(\alpha=\uparrow,\downarrow\) in the metal, respectively. Using the commutation relation \(\big{[}\sigma_{\mathbf{r}}^{z},\sigma_{\mathbf{r}^{\prime}}^{\pm}\big{]}=\pm\delta_{ \mathbf{r},\mathbf{r}^{\prime}}\sigma_{\mathbf{r}}^{\pm}\), we obtain
\[I_{\rm S} =\sum_{\mathbf{r}_{\rm A}\in\text{int-A}}\frac{J_{\rm sd}(\mathbf{r}_{ \rm A})}{2\hbar}\big{\{}(-i)S_{\mathbf{r}_{\rm A}}^{-}(t)\sigma_{\mathbf{r}_{\rm A}}^{+ }(t)+(\text{H.c})\big{\}}\] \[\quad+\sum_{\mathbf{r}_{\rm B}\in\text{int-B}}\frac{J_{\rm sd}(\mathbf{r} _{\rm B})}{2\hbar}\big{\{}(-i)S_{\mathbf{r}_{\rm B}}^{-}(t)\sigma_{\mathbf{r}_{\rm B}}^ {+}(t)+(\text{H.c})\big{\}}.\] (III.4)
From Eq. (III.4), the statistical average of \(I_{\rm S}\) under the non-equilibrium steady state in the SSE setup is given by
\[\langle I_{\rm S}\rangle =\sum_{\mathbf{r}_{\rm A}\in\text{int-A}}\frac{J_{\rm sd}(\mathbf{r}_{ \rm A})}{\hbar}\text{Re}\Big{[}(-i)\Big{\langle}S_{\mathbf{r}_{\rm A}}^{-}(t) \sigma_{\mathbf{r}_{\rm A}}^{+}(t)\Big{\rangle}\Big{]}\] \[\quad+\sum_{\mathbf{r}_{\rm B}\in\text{int-B}}\frac{J_{\rm sd}(\mathbf{r} _{\rm B})}{\hbar}\text{Re}\Big{[}(-i)\Big{\langle}S_{\mathbf{r}_{\rm B}}^{-}(t) \sigma_{\mathbf{r}_{\rm B}}^{+}(t)\Big{\rangle}\Big{]},\] \[=\sum_{\mathbf{r}_{\rm A}\in\text{int-A}}\langle I_{\mathbf{r}_{\rm A}} \rangle+\sum_{\mathbf{r}_{\rm B}\in\text{int-B}}\langle I_{\mathbf{r}_{\rm B}}\rangle,\] (III.5)
where \(\langle\cdots\rangle\) denotes the statistical average for the total Hamiltonian Eq. (III.1) under the non-equilibrium SSE setup, and \(I_{\mathbf{r}_{\rm A}}\) (\(I_{\mathbf{r}_{\rm B}}\)) is the spin current tunneling through a A (B) sublattice site \(\mathbf{r}_{\rm A}\) (\(\mathbf{r}_{\rm B}\)) on the interface.
In general, the s-d interaction at the interface is considered to be weaker than the energy scales of the magnet and the metal. Hence, taking \(\mathscr{H}_{\rm AFM}+\mathscr{H}_{\rm NM}\) as the unperturbed Hamiltonian and \(\mathscr{H}_{\rm int}\) as the perturbation, we may apply the non-equilibrium Green's function approach [49; 50] to Eq. (III.5). Note that in this perturbation theory, the average \(\langle\cdots\rangle\) of the unperturbed system is equivalent to the thermal average for the decoupled antiferromagnet with \(T=T_{\rm s}\) and metal with \(T=T_{\rm m}\). We here assume that the interface sites \(\mathbf{r}_{\rm A}\) and \(\mathbf{r}_{\rm B}\) are randomly distributed and the averaged distance between neighboring interface sites is much longer than the lattice spacings of the antiferromagnet and the metal. Namely, the interface is assumed to be very dirty, as shown in Fig. 4. Under this condition, we can neglect the correlation between the local tunneling spin currents, \(I_{\mathbf{r}_{\rm X}}\) and \(I_{\mathbf{r}^{\prime}_{\rm X^{\prime}}}\) (\(\mathbf{r}_{\rm X}\neq\mathbf{r}^{\prime}_{\rm X^{\prime}}\)). Therefore, we can compute the average of each local spin current \(\langle I_{\mathbf{r}_{\rm X}}\rangle\) independently, and the total spin current is given by the sum of the independent local currents (see Fig. 4).
To proceed with the perturbation calculation, we here define some correlation functions. The local spin current is represented as
\[\langle I_{\mathbf{r}_{\rm A}}\rangle =\frac{J_{\rm sd}(\mathbf{r}_{\rm A})}{\hbar}\lim_{\delta\to+0} \text{Re}\big{[}F_{+-}^{<}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A},t^{\prime})\big{]},\] (III.6) \[\langle I_{\mathbf{r}_{\rm B}}\rangle =\frac{J_{\rm sd}(\mathbf{r}_{\rm B})}{\hbar}\lim_{\delta\to+0} \text{Re}\big{[}F_{+-}^{<}(\mathbf{r}_{\rm B},t;\mathbf{r}_{\rm B},t^{\prime})\big{]},\] (III.7)
where \(t^{\prime}=t+\delta\), and \(F_{+-}^{<}(\mathbf{r},t;\mathbf{r}^{\prime},t^{\prime})\) is the lesser component of the two-point function \(F_{+-}(\mathbf{r},t;\mathbf{r}^{\prime},t^{\prime})=-i\langle T_{\rm C}\,\sigma_{\mathbf{r}}^{+}(t)S_{\mathbf{r}^{\prime}}^{-}(t^{\prime})\rangle\) with \(T_{\rm C}\) being the time ordered product on the Keldysh contour [49; 50]. The spin correlation function of the decoupled metal and that of the antiferromagnet are respectively defined as
\[\chi_{+-}(\mathbf{r},t;\mathbf{r}^{\prime},t^{\prime}) =-i\big{\langle}T_{\rm C}\tilde{\sigma}_{\mathbf{r}}^{+}(t)\tilde{\sigma}_{\mathbf{r}^{\prime}}^{-}(t^{\prime})\big{\rangle}_{0},\] (III.8) \[G_{+-}^{(\rm X)}(\mathbf{r}_{\rm X},t;\mathbf{r}^{\prime}_{\rm X},t^{\prime}) =-i\big{\langle}T_{\rm C}\tilde{S}_{\mathbf{r}_{\rm X}}^{+}(t)\tilde{S}_{\mathbf{r}_{\rm X}^{\prime}}^{-}(t^{\prime})\big{\rangle}_{0},\] (III.9)
where \(\langle\cdots\rangle_{0}\) stands for the statistical average for the unperturbed Hamiltonian and X = A, B. The symbol \(\tilde{}\) stands for time evolution by the unperturbed Hamiltonian. Using these correlators and the Langreth rule [49; 50], we arrive at the following expression for the local spin current (for more details, see App. D):
Figure 4: (Color online) Schematic image of randomly distributed exchange interactions (black points) on the interface between a magnet and a metal. The total number of interface sites is supposed to be much smaller than that of sites of the magnet or the metal near the interface and thus the correlation between neighboring sites on the interface is expected to be quite weak. Under this assumption, the tunneling spin current is described by the sum of the tunneling spin currents through each site on the interface.
\[\langle I_{\mathbf{r}_{\rm A}}\rangle =\frac{1}{2}\bigg{(}\frac{J_{\rm sd}(\mathbf{r}_{\rm A})}{\hbar} \bigg{)}^{2}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\mathrm{Re}\Big{[}\chi_{ +-}^{\rm R}(\mathbf{r}_{\rm A},\omega)G_{+-}^{({\rm A})<}(\mathbf{r}_{\rm A}, \omega)+\chi_{+-}^{<}(\mathbf{r}_{\rm A},\omega)G_{+-}^{({\rm A}){\rm A}}( \mathbf{r}_{\rm A},\omega)\Big{]},\] \[=\frac{1}{2}\bigg{(}\frac{J_{\rm sd}(\mathbf{r}_{\rm A})}{\hbar} \bigg{)}^{2}\frac{1}{N_{\rm m}(N/2)}\sum_{\mathbf{p},\mathbf{k}}\int_{-\infty }^{\infty}\frac{d\omega}{2\pi}\mathrm{Re}\Big{[}\chi_{+-}^{\rm R}(\mathbf{p}, \omega)G_{+-}^{({\rm A})<}(\mathbf{k},\omega)+\chi_{+-}^{<}(\mathbf{p}, \omega)G_{+-}^{({\rm A}){\rm A}}(\mathbf{k},\omega)\Big{]}.\] (III.10)
Here, we have defined four sorts of correlation functions,
\[\chi_{+-}^{\rm R}(\mathbf{r},t;\mathbf{r}^{\prime},t^{\prime}) =-i\theta(t-t^{\prime})\Big{\langle}\big{[}\tilde{\sigma}_{\mathbf{ r}}^{+}(t),\tilde{\sigma}_{\mathbf{r}^{\prime}}^{-}(t^{\prime})\big{]} \Big{\rangle}_{0},\] (III.11) \[\chi_{+-}^{<}(\mathbf{r},t;\mathbf{r}^{\prime},t^{\prime}) =-i\Big{\langle}\tilde{\sigma}_{\mathbf{r}^{\prime}}^{-}(t^{ \prime})\tilde{\sigma}_{\mathbf{r}}^{+}(t)\Big{\rangle}_{0},\] (III.12) \[G_{+-}^{({\rm X}){\rm A}}(\mathbf{r}_{\rm x},t;\mathbf{r}^{ \prime}_{\rm x},t^{\prime}) =i\theta(t^{\prime}-t)\Big{\langle}\Big{[}\tilde{S}_{\mathbf{r}_ {\rm x}}^{+}(t),\tilde{S}_{\mathbf{r}_{\rm x}^{\prime}}^{-}(t^{\prime})\Big{]} \Big{\rangle}_{0},\] (III.13) \[G_{+-}^{({\rm X})<}(\mathbf{r}_{\rm x},t;\mathbf{r}^{\prime}_{ \rm x},t^{\prime}) =-i\Big{\langle}\tilde{S}_{\mathbf{r}_{\rm x}^{\prime}}^{-}(t^{ \prime})\tilde{S}_{\mathbf{r}_{\rm x}}^{+}(t)\Big{\rangle}_{0}.\] (III.14)
In the first line in Eq. (III.10), \(\chi_{+-}^{\rm R[<]}(\mathbf{r}_{\rm A},\omega)\) is the Fourier component of the retarded [lesser] part of \(\chi_{+-}(\mathbf{r}_{\rm A},t-t^{\prime})\) and \(G_{+-}^{({\rm A}){\rm A}[<]}(\mathbf{r}_{\rm A},\omega)\) is the Fourier component of the advanced [lesser] part of \(G_{+-}^{({\rm A})}(\mathbf{r}_{\rm A},t-t^{\prime})\) in the frequency \(\omega\) space: \(\chi_{+-}^{\rm R[<]}(\mathbf{r}_{\rm A},\mathbf{r}^{\prime}_{\rm A},\omega)=\int_{-\infty}^{\infty}d(t-t^{\prime})e^{i\omega(t-t^{\prime})}\chi_{+-}^{\rm R[<]}(\mathbf{r}_{\rm A},\mathbf{r}^{\prime}_{\rm A},t-t^{\prime})\) and \(G_{+-}^{({\rm A}){\rm A}[<]}(\mathbf{r}_{\rm A},\mathbf{r}^{\prime}_{\rm A},\omega)=\int_{-\infty}^{\infty}d(t-t^{\prime})e^{i\omega(t-t^{\prime})}G_{+-}^{({\rm A}){\rm A}[<]}(\mathbf{r}_{\rm A},\mathbf{r}^{\prime}_{\rm A},t-t^{\prime})\). In the second line in Eq. (III.10), \(N_{\rm m}\) is the total number of sites of the metal, and \(\chi_{+-}^{\rm R[<]}(\mathbf{p},\omega)\) and \(G_{+-}^{({\rm A}){\rm A}[<]}(\mathbf{k},\omega)\) are respectively the Fourier components of \(\chi_{+-}^{\rm R[<]}(\mathbf{r}_{\rm A},\omega)\) and \(G_{+-}^{({\rm A}){\rm A}[<]}(\mathbf{r}_{\rm A},\omega)\): \(\chi_{+-}^{\rm R[<]}(\mathbf{p},\omega)=\sum_{\mathbf{r}_{\rm A}-\mathbf{r}^{\prime}_{\rm A}\,(\mathbf{r}_{\rm A},\mathbf{r}^{\prime}_{\rm A}\in{\rm NM})}e^{-i\mathbf{p}\cdot\left(\mathbf{r}_{\rm A}-\mathbf{r}^{\prime}_{\rm A}\right)}\chi_{+-}^{\rm R[<]}(\mathbf{r}_{\rm A}-\mathbf{r}^{\prime}_{\rm A},\omega)\) and \(G_{+-}^{({\rm A}){\rm A}[<]}(\mathbf{k},\omega)=\sum_{\mathbf{r}_{\rm A}-\mathbf{r}^{\prime}_{\rm A}\,(\mathbf{r}_{\rm A},\mathbf{r}^{\prime}_{\rm A}\in{\rm AFM})}e^{-i\mathbf{k}\cdot\left(\mathbf{r}_{\rm A}-\mathbf{r}^{\prime}_{\rm A}\right)}G_{+-}^{({\rm A}){\rm A}[<]}(\mathbf{r}_{\rm A}-\mathbf{r}^{\prime}_{\rm A},\omega)\). Through a similar algebra, we also obtain the expression for \(\langle I_{\mathbf{r}_{\rm B}}\rangle\). Substituting these results into Eq. (III.5), we obtain
\[\langle I_{\rm S}\rangle=\frac{1}{2}\bigg{(}\frac{J_{\rm sd}}{ \hbar}\bigg{)}^{2}\frac{N_{\rm int}/2}{N_{\rm m}(N/2)}\sum_{\mathbf{p}, \mathbf{k}}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\mathrm{Re}\Big{[}\chi_{+- }^{\rm R}(\mathbf{p},\omega)G_{+-}^{({\rm A})<}(\mathbf{k},\omega)+\chi_{+-}^{ <}(\mathbf{p},\omega)G_{+-}^{({\rm A}){\rm A}}(\mathbf{k},\omega)\Big{]}\\ +\frac{1}{2}\bigg{(}\frac{J_{\rm sd}}{\hbar}\bigg{)}^{2}\frac{N_{ \rm int}/2}{N_{\rm m}(N/2)}\sum_{\mathbf{p},\mathbf{k}}\int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\mathrm{Re}\Big{[}\chi_{+-}^{\rm R}(\mathbf{p},\omega)G_{+-}^{ ({\rm B})<}(\mathbf{k},\omega)+\chi_{+-}^{<}(\mathbf{p},\omega)G_{+-}^{({\rm B}) {\rm A}}(\mathbf{k},\omega)\Big{]},\] (III.15)
where we have introduced an averaged s-d exchange \(J_{\rm sd}\) by \(\sum_{\mathbf{r}_{\rm A}\in{\rm int-A}}J_{\rm sd}\left(\mathbf{r}_{\rm A} \right)^{2}=\sum_{\mathbf{r}_{\rm B}\in{\rm int-B}}J_{\rm sd}\left(\mathbf{r}_{ \rm B}\right)^{2}=:N_{\rm int}J_{\rm sd}^{2}/2\) (\(N_{\rm int}\) is the number of sites of the interface).
This is the formula for the DC spin current flowing from the antiferromagnet to the normal metal via an interfacial exchange interaction, as shown in Fig. 1. This formalism can be applied to SSEs and spin pumping in a broad class of magnets including various ordered magnets and quantum spin liquids. Note that we do not assume any magnetic order in the derivation of the spin current. For instance, if we consider an SSE in a ferromagnetic insulator, the two kinds of Green's functions \(G_{+-}^{({\rm A}){\rm R}}\) and \(G_{+-}^{({\rm B}){\rm R}}\) are replaced with a single Green's function \(G_{+-}^{\rm R}\) since there is no sublattice structure in the ferromagnet. On the other hand, if we consider a magnet with an \(M\)-sublattice structure, they are replaced with the sum of multiple Green's functions on \(M\) different sublattice sites.
### SSE in the Neel phase
In the Neel ordered phase, spin correlators can be computed by using the spin-wave approximation Eqs. (II.11) and (II.12). Namely, spin correlators can be represented as
\[G_{+-}^{({\rm X}){\rm A}}(\mathbf{k},\omega) =\mathcal{G}_{+-}^{({\rm X}){\rm A}}(\mathbf{k},\omega),\] (III.16) \[G_{+-}^{({\rm X})<}(\mathbf{k},\omega) =\mathcal{G}_{+-}^{({\rm X})<}(\mathbf{k},\omega),\] (III.17)
where \(\mathcal{G}_{+-}^{({\rm X}){\rm A}[<]}(\mathbf{k},\omega)\) denotes the advanced [lesser] component of the magnon Green's function evaluated within the spin-wave approximation. Since the antiferromagnet and the metal are each in their own (quasi-)
equilibrium states, we obtain [49; 50]
\[\chi^{<}_{+-}(\mathbf{p},\omega)=2i\text{Im}\chi^{\text{R}}_{+-}(\mathbf{p}, \omega)n_{\text{B}}(\omega+\delta\mu_{\text{s}}/\hbar,T_{\text{m}}),\] (III.18) \[\mathcal{G}^{(\text{X})<}_{+-}(\mathbf{k},\omega)=2i\text{Im}\mathcal{ G}^{(\text{X})\text{R}}_{+-}(\mathbf{k},\omega)n_{\text{B}}(\omega,T_{\text{s}}),\] (III.19)
through the Lehmann representation of these correlators. Here, \(n_{\text{B}}(\omega,T)=\left(e^{\hbar\omega/k_{\text{B}}T}-1\right)^{-1}\) is the Bose distribution function. Substituting Eqs. (III.16)-(III.19) into Eq. (III.15), we finally arrive at
\[\langle I_{\text{S}}\rangle =-\bigg{(}\frac{J_{\text{sd}}}{\hbar}\bigg{)}^{2}\frac{N_{\text{ int}}/2}{N_{\text{m}}(N/2)}\sum_{\mathbf{p},\mathbf{k}}\int_{-\infty}^{\infty}\frac{d \omega}{2\pi}\text{Im}\chi^{\text{R}}_{+-}(\mathbf{p},\omega)\Big{\{}\text{Im} \mathcal{G}^{(\text{A})\text{R}}_{+-}(\mathbf{k},\omega)+\text{Im}\mathcal{G}^{( \text{B})\text{R}}_{+-}(\mathbf{k},\omega)\Big{\}}\Big{\{}n_{\text{B}}(\omega,T_{ \text{s}})-n_{\text{B}}(\omega,T_{\text{m}})\Big{\}}\] \[=-\bigg{(}\frac{J_{\text{sd}}}{\hbar}\bigg{)}^{2}\frac{N_{\text{ int}}}{2}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\text{Im}\chi^{\text{R}}_{+-} (\omega)\Big{\{}\text{Im}\mathcal{G}^{(\text{A})\text{R}}_{+-}(\omega)+\text{ Im}\mathcal{G}^{(\text{B})\text{R}}_{+-}(\omega)\Big{\}}\Big{\{}n_{\text{B}}(\omega,T_{ \text{s}})-n_{\text{B}}(\omega,T_{\text{m}})\Big{\}},\] (III.20)
where \(\text{Im}\chi^{\text{R}}_{+-}(\omega)=\frac{1}{N_{\text{m}}}\sum_{\mathbf{p}}\text {Im}\chi^{\text{R}}_{+-}(\mathbf{p},\omega)\) and \(\text{Im}\mathcal{G}^{(\text{X})\text{R}}_{+-}(\omega)=\frac{1}{N/2}\sum_{\bm {k}}\text{Im}\mathcal{G}^{(\text{X})\text{R}}_{+-}(\mathbf{k},\omega)\) (X=A, B). We have omitted the \(\delta\mu_{\text{s}}\) dependence of the current \(\langle I_{\text{S}}\rangle\) since \(\delta\mu_{\text{s}}\) is very small compared to the relevant interval of integration of \(\mathcal{O}(J)\).
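For orientation, note that for a small temperature difference \(\Delta T=T_{\rm s}-T_{\rm m}\) the thermal factor in Eq. (III.20) can be linearized (a standard expansion, added here for completeness rather than taken from the original derivation),
\[
n_{\rm B}(\omega,T_{\rm s})-n_{\rm B}(\omega,T_{\rm m})\simeq\frac{\partial n_{\rm B}(\omega,T)}{\partial T}\,\Delta T=\frac{\hbar\omega}{k_{\rm B}T^{2}}\,\frac{e^{\hbar\omega/k_{\rm B}T}}{\big(e^{\hbar\omega/k_{\rm B}T}-1\big)^{2}}\,\Delta T,
\]
so that the tunneling spin current \(\langle I_{\rm S}\rangle\) is linear in \(\Delta T\), as expected for a linear-response description of the SSE.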
### SSE in the canted phase
Unlike the Neel phase, in the canted AF phase there is an additional contribution to the spin current \(\langle I_{\text{S}}\rangle\). The magnon representation of the spins, Eqs. (II.18) and (II.19), tells us that the spin correlators are decomposed into two parts. Namely, the advanced and retarded parts of the spin correlators are given by the magnon correlation function, while the lesser part is given by the sum of the magnon correlator and a constant term:
\[G^{(\text{X})\text{A}}_{+-}(\mathbf{k},\omega) =\mathcal{G}^{(\text{X})\text{A}}_{+-}(\mathbf{k},\omega),\] (III.21) \[G^{(\text{X})\text{-}}_{+-}(\mathbf{k},\omega) =(-i)2\pi S^{2}\sin^{2}\theta\frac{N}{2}\delta_{\mathbf{k},0}\delta( \omega)+\mathcal{G}^{(\text{X})\text{<}}_{+-}(\mathbf{k},\omega),\] (III.22)
where X=A, B. The first term of Eq. (III.22) comes from the transverse magnetization \(S\sin\theta\) as shown in Fig. 5. Using the Lehmann representation of the magnon propagator, we again obtain
\[\mathcal{G}^{(\text{X})\text{<}}_{+-}(\mathbf{k},\omega)=2i\text{Im}\mathcal{G}^{ (\text{X})\text{R}}_{+-}(\mathbf{k},\omega)n_{\text{B}}(\omega,T_{\text{s}}).\] (III.23)
Substituting Eq. (III.18) and Eqs. (III.21)-(III.23) into Eq. (III.15), we finally arrive at
\[\langle I_{\text{S}}\rangle=\bigg{(}\frac{J_{\text{sd}}}{\hbar} \bigg{)}^{2}S^{2}\sin^{2}\theta\frac{N_{\text{int}}}{2}\text{Im}\chi^{\text{R} }_{+-}(\omega=0)\\ -\bigg{(}\frac{J_{\text{sd}}}{\hbar}\bigg{)}^{2}\frac{N_{\text{ int}}}{2}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\text{Im}\chi^{\text{R}}_{+-}( \omega)\Big{\{}\text{Im}\mathcal{G}^{(\text{A})\text{R}}_{+-}(\omega)+\text{ Im}\mathcal{G}^{(\text{B})\text{R}}_{+-}(\omega)\Big{\}}\Big{\{}n_{\text{B}}(\omega,T_{ \text{s}})-n_{\text{B}}(\omega,T_{\text{m}})\Big{\}},\] (III.24)
Figure 5: (Color online) Schematic view of spin-flip process of conducting electrons at an interface between the antiferromagnet and the metal when the magnet is in the canted phase. The effective transverse magnetic fields generated by localized spins cause the spin-flip scattering in the metal.
where \(\text{Im}\chi^{\text{R}}_{+-}(\omega)=\frac{1}{N_{m}}\sum_{\mathbf{p}}\text{Im}\chi^{ \text{R}}_{+-}(\mathbf{p},\omega)\), \(\text{Im}\mathcal{G}^{(\text{X})\text{R}}_{+-}(\omega)=\frac{1}{N/2}\sum_{\mathbf{k} }\text{Im}\mathcal{G}^{(\text{X})\text{R}}_{+-}(\mathbf{k},\omega)\) (X=A, B), and we have again omitted the small quantity \(\delta\mu_{\text{s}}\) in the second term of the integral.
The tunneling spin current of the SSE in the canted AF phase is composed of two parts. The first term in Eq. (III.24) indicates that spin flips of conducting electrons caused by the transverse magnetization (see Fig. 5) contribute to the spin current. Similar electron-spin flips induced by magnetic moments have already been studied in spin Hall magnetoresistance [51, 52]. As we will see shortly, the introduction of a small but finite \(\delta\mu_{\text{s}}\) is essential to obtain a finite value of the first term because \(\text{Im}\chi^{\text{R}}_{+-}(0)=0\) for \(\delta\mu_{\text{s}}=0\). The second term is the contribution from the magnon dynamics, like the spin current in the Neel phase of Eq. (III.20).
### Magnetic density of states and neutron scattering spectra
In this subsection, we discuss the physical meaning of the spin current formulas (III.20) and (III.24) from the viewpoint of the magnetic density of states (DoS).
In conducting electron systems, the imaginary part of one-particle retarded Green's functions, \(\sum_{\mathbf{k}}G^{\text{R}}(\mathbf{k},\omega)\), is proportional to the electron (hole) DoS \(D_{\text{e(h)}}(\omega)\)[49, 50]:
\[D_{\text{e(h)}}(\omega)=-\frac{1}{\pi}\text{Im}\sum_{\mathbf{k}}G^{\text{R}}(\mathbf{ k},\omega)\] (III.25)
when the frequency \(\omega\) is larger (smaller) than the chemical potential \(\mu\) (see Table 1). Similarly, in magnetic insulators, we can define the magnetic DoS for spin-down [spin-up] excitations from the imaginary part of \(\mathcal{G}^{\text{R}}_{+-}(\mathbf{k},\omega)\) [\(\mathcal{G}^{\text{R}}_{-+}(\mathbf{k},\omega)\)] as
\[D_{\downarrow(\uparrow)}(\omega)=-\text{Im}\sum_{\mathbf{k}}\mathcal{G}^{\text{R} }_{+-(-+)}(\mathbf{k},\omega).\] (III.26)
Note that the DoS of Eq. (III.26) is physically relevant only for \(\omega>0\) since the excitation energy is always larger than the ground-state energy in magnetic systems. In magnon systems with a magnetization along the \(S^{z}\) axis, \(\mathcal{G}^{\text{R}}_{+-(-+)}(\mathbf{k},\omega)\) can be regarded as the spin-down (spin-up) magnon Green's function because \(\hat{S}^{-(+)}\) is almost equivalent to the creation operator of a spin-down (spin-up) magnon. Taking the complex conjugate of \(\mathcal{G}^{\text{R}}_{+-(-+)}(\mathbf{k},\omega)\), one can easily find the relationship between the two Green's functions \(\mathcal{G}^{\text{R}}_{+-}(\mathbf{k},\omega)\) and \(\mathcal{G}^{\text{R}}_{-+}(\mathbf{k},\omega)\):
\[\text{Im}\mathcal{G}^{\text{R}}_{+-}(\mathbf{k},\omega)=-\text{Im}\mathcal{G}^{ \text{R}}_{-+}(\mathbf{k},-\omega).\] (III.27)
This holds in generic spin systems.
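To make this DoS picture concrete, the following minimal Python sketch models a single pair of antiferromagnetic magnon modes with a Lorentzian-broadened two-pole Green's function of the same structure as Eq. (IV.4); the energies, Bogoliubov weights, and broadening are illustrative numbers chosen here, not parameters taken from the model of Sec. II.

```python
import numpy as np

# Minimal sketch of the magnetic-DoS picture of Eqs. (III.26)-(III.27) for a
# single pair of antiferromagnetic magnon modes (alpha: spin-down, beta:
# spin-up).  The two-pole structure mimics Eq. (IV.4) at one wave vector;
# the energies, weights, and broadening below are illustrative numbers only.

S, eta = 1.0, 0.02
eps_alpha, eps_beta = 1.1, 0.9          # alpha gap > beta gap under a field B
cosh2, sinh2 = 1.3, 0.3                 # cosh^2(phi_k) - sinh^2(phi_k) = 1

def G_pm(w):
    """Retarded transverse Green's function G^R_{+-}(w) with two magnon poles."""
    return 2*S*cosh2/(w - eps_alpha + 1j*eta) - 2*S*sinh2/(w + eps_beta + 1j*eta)

w = np.linspace(0.0, 2.0, 2001)         # physically relevant frequencies w > 0
D_down = -G_pm(w).imag                  # spin-down DoS, Eq. (III.26)
D_up = +G_pm(-w).imag                   # spin-up DoS via Eq. (III.27)

print("D_down peaks at w =", w[np.argmax(D_down)])   # close to eps_alpha
print("D_up   peaks at w =", w[np.argmax(D_up)])     # close to eps_beta
```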
Using Eqs. (III.26) and (III.27), we can give an important interpretation of the tunneling spin currents of Eqs. (III.20) and (III.24) as follows. A key point of the Green's functions in Eq. (III.20) and in the second term of Eq. (III.24) is their \(\omega\) dependence. As we will soon discuss in more detail, the Green's function of the metal generally satisfies \(\text{Im}\chi^{\text{R}}_{+-}(\omega)\propto\omega\) in the low-energy regime, where \(\omega\) is much smaller than the Fermi energy (chemical potential). Namely, \(\text{Im}\chi^{\text{R}}_{+-}(\omega)\) is odd with respect to the frequency \(\omega\). The temperature factor of Eqs. (III.20) and (III.24), \(n_{\text{B}}(\omega,T_{\text{s}})-n_{\text{B}}(\omega,T_{\text{m}})\), is odd as well. Therefore, we find that \(\text{Im}\mathcal{G}^{\text{R}}_{+-}(\omega)\) _has to possess a finite \(\omega\)-even component_ for the spin current \(\langle I_{\text{S}}\rangle\) to take a finite value.
Secondly, the viewpoint of magnetic DoSs enables us to obtain a deeper understanding of Eqs. (III.20) and (III.24). In the integrands of Eqs. (III.20) and (III.24), \(\text{Im}\mathcal{G}^{\text{R}}_{+-}(\omega)\) for \(\omega>0\) can be regarded as the spin-down DoS \(D_{\downarrow}(\omega)\), while that for \(\omega<0\) is equivalent to the spin-up DoS \(D_{\uparrow}(\omega)\). Thus, we conclude that the sign of the spin current \(\langle I_{\text{S}}\rangle\) is determined by whether spin-up or spin-down carriers are dominant. In other words, the sign of the spin current in the SSE tells us the main magnetic carrier of the target material. This is very reminiscent of the standard electronic Seebeck effect, in which the sign of the Seebeck coefficient indicates the kind of dominant carrier, i.e., electrons or holes. This picture based on the magnetic DoS is useful for studying the SSE in antiferromagnets since both spin-up and spin-down excitations exist there (in contrast, only spin-down magnons appear in ferromagnetic phases).
The imaginary part of the retarded two-spin Green's function \(\text{Im}\mathcal{G}^{\text{R}}_{+-}(\omega)\) is proportional to the spin dynamical structure factor, which can be measured by polarized neutron scattering experiments [53, 54]. Therefore, combining SSE and neutron scattering experiments enables us to check how quantitatively our formula for the spin current captures the measured SSE voltage.
\begin{table}
\begin{tabular}{l l l} \hline \hline DoS & Green’s function & condition \\ \hline spin-up DoS \(D_{\uparrow}(\omega)\) & \(-\text{Im}\sum_{\mathbf{k}}\mathcal{G}^{\text{R}}_{-+}(\mathbf{k},\omega)\) & \(\omega>0\) \\ spin-down DoS \(D_{\downarrow}(\omega)\) & \(-\text{Im}\sum_{\mathbf{k}}\mathcal{G}^{\text{R}}_{+-}(\mathbf{k},\omega)\) & \(\omega>0\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Relationship between the spin Green’s functions and the spin-up and spin-down densities of states.
\begin{table}
\begin{tabular}{l l l} \hline \hline DoS & Green’s function & condition \\ \hline electron DoS \(D_{\text{e}}(\omega)\) & \(-\frac{1}{\pi}\text{Im}\sum_{\mathbf{k}}G^{\text{R}}(\mathbf{k},\omega)\) & \(\omega>\mu\) \\ hole DoS \(D_{\text{h}}(\omega)\) & \(-\frac{1}{\pi}\text{Im}\sum_{\mathbf{k}}G^{\text{R}}(\mathbf{k},\omega)\) & \(\omega<\mu\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Relationship between the one-particle electron Green’s function and the electron and hole densities of states.
## IV Spin Seebeck effect in antiferromagnetic insulators
In this section, based on Eqs. (III.20) and (III.24), we estimate the tunneling DC spin currents generated by SSE in antiferromagnets. We focus on their magnetic-field and temperature dependence.
### Normalized spin current
In this subsection, substituting explicit forms of the Green's functions into Eqs. (III.20) and (III.24), we simplify the expression for the spin current.
For the model described by Eqs. (II.21) and (II.22), the dynamical susceptibility of the conducting electrons can be approximated by (for more details, see App. E)
\[\text{Im}\chi_{+-}^{\text{R}}(\omega)\simeq-\pi\{\mathcal{D}(0)\}^{2}\hbar( \hbar\omega+\delta\mu_{\text{s}}),\] (IV.1)
where \(\text{Im}\chi_{+-}^{\text{R}}(\omega)=\frac{1}{N_{\text{m}}}\sum_{\mathbf{p}}\text{Im}\chi_{+-}^{\text{R}}(\mathbf{p},\omega)\), and \(\mathcal{D}(0)\) denotes the electron DoS at the Fermi energy [55; 56; 57; 58; 48]. This expression is valid when \(\hbar\omega\) is sufficiently smaller than the Fermi energy (or band width). In the SSE setup, we apply a static magnetic field and hence a Landau-level structure might be expected to emerge in the attached metal. However, one usually uses a dirty metal, which contains impurity potentials and a polycrystalline structure, and the energy scales of the metal (Fermi energy, impurity potential, etc.) are much larger than those of the applied magnetic field and the exchange interaction of the magnet. Thus, the Landau levels are expected to be irrelevant and we may rather use the expression of Eq. (IV.1).
When the temperature difference \(\Delta T=T_{\text{s}}-T_{\text{m}}\) (\(>0\)) is sufficiently small, the \(T\)-dependent factor \(n_{\text{B}}(\omega,T_{\text{s}})-n_{\text{B}}(\omega,T_{\text{m}})\) can be approximated by \(\frac{\hbar\omega}{k_{\text{B}}T^{2}}\frac{\Delta T}{4\sinh^{2}[\hbar\omega/2k_{\text{B}}T]}\), where \(T=(T_{\text{s}}+T_{\text{m}})/2\) is the averaged temperature. Here, the small factor \(\delta\mu_{\text{s}}\) in the Bose distribution has again been neglected under the assumption that \(k_{\text{B}}\Delta T\) is sufficiently smaller than the exchange coupling but sufficiently larger than \(\delta\mu_{\text{s}}\), namely, \(J\gg k_{\text{B}}\Delta T\gg\delta\mu_{\text{s}}\).
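The linearization of the temperature factor can be checked numerically; the short sketch below (with \(\hbar=k_{\rm B}=1\) and illustrative values of \(T\) and \(\Delta T\)) compares the exact difference of Bose functions with the approximation quoted above.

```python
import numpy as np

# Numerical check of the linearized temperature factor (hbar = k_B = 1):
#   n_B(w, T_s) - n_B(w, T_m)  ~=  w * dT / (4 T^2 sinh^2(w / 2T)),
# valid for dT = T_s - T_m much smaller than the mean temperature T.
# T and dT below are illustrative values.

def n_B(w, T):
    return 1.0 / np.expm1(w / T)

T, dT = 1.0, 0.01
T_s, T_m = T + dT/2, T - dT/2
w = np.linspace(0.05, 5.0, 200)

exact = n_B(w, T_s) - n_B(w, T_m)
approx = w * dT / (4 * T**2 * np.sinh(w / (2*T))**2)

print("max relative deviation:", np.max(np.abs(approx/exact - 1.0)))  # small, O((dT/T)^2)
```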
Substituting these relations of the susceptibility of the metal and the \(T\)-factor into Eq. (III.20), we can define a normalized spin current \(\langle\bar{I}_{\text{S}}\rangle\) as \(\langle I_{\text{S}}\rangle=-\pi\{\mathcal{D}(0)\}^{2}(J_{\text{sd}})^{2}(N_ {\text{int}}/2)(k_{\text{B}}\Delta T/\hbar)\langle\bar{I}_{\text{S}}\rangle\). The current in the Neel phase is given by
\[\langle\bar{I}_{\text{S}}\rangle\simeq-\frac{1}{4}\frac{1}{\left(k_{\text{B}} T\right)^{2}}\frac{1}{N/2}\sum_{\mathbf{k}}\int_{-\infty}^{\infty}\frac{d\omega}{2 \pi}\Big{\{}\text{Im}\mathcal{G}_{+-}^{(\text{A})\text{R}}(\mathbf{k},\omega)+ \text{Im}\mathcal{G}_{+-}^{(\text{B})\text{R}}(\mathbf{k},\omega)\Big{\}}\frac{( \hbar\omega)^{2}}{\sinh^{2}[\hbar\omega/2k_{\text{B}}T]}.\] (IV.2)
Similarly, the normalized spin current in the canted AF phase is given by
\[\langle\bar{I}_{\text{S}}\rangle\simeq S^{2}\sin^{2}\theta\frac{\delta\mu_{\text{s}}}{k_{\text{B}} \Delta T}\] \[-\frac{1}{4}\frac{1}{\left(k_{\text{B}}T\right)^{2}}\frac{1}{N/2} \sum_{\mathbf{k}}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\Big{\{}\text{Im} \mathcal{G}_{+-}^{(\text{A})\text{R}}(\mathbf{k},\omega)+\text{Im}\mathcal{G}_{+-} ^{(\text{B})\text{R}}(\mathbf{k},\omega)\Big{\}}\frac{(\hbar\omega)^{2}}{\sinh^{2 }[\hbar\omega/2k_{\text{B}}T]},\] (IV.3)
The first term in Eq. (IV.3) can be viewed as the contribution from the spin flips of conducting electrons caused by the effective transverse fields in Fig. 5. Its expression clearly indicates that consideration of the NESS with a finite \(\delta\mu_{\text{s}}\) is necessary to include the effect of the spin flip. Hereafter, we will discuss the magnetic-field and temperature dependence of the spin current or SSE voltage by employing these normalized spin currents.
### Density of states in antiferromagnets
The expressions of Eqs. (IV.2) and (IV.3) show that the remaining task is to compute the magnon propagators of antiferromagnets, \(\text{Im}\mathcal{G}_{+-}^{(\text{A})\text{R}}(\mathbf{k},\omega)\) and \(\text{Im}\mathcal{G}_{+-}^{(\text{B})\text{R}}(\mathbf{k},\omega)\). In the Neel ordered phase, we utilize the spin-wave Hamiltonian (II.7) and the magnon propagators are given by
\[\mathcal{G}_{+-}^{(\text{A})\text{R}}(\mathbf{k},\omega)=\frac{2S\cosh^{2}\phi_{\bm {k}}}{\omega-\varepsilon_{\alpha}^{(\text{N})}(\mathbf{k})/\hbar+i\eta}-\frac{2S \sinh^{2}\phi_{\mathbf{k}}}{\omega+\varepsilon_{\beta}^{(\text{N})}(\mathbf{k})/\hbar +i\eta},\]
\[\mathcal{G}_{+-}^{(\text{B})\text{R}}(\mathbf{k},\omega)=\frac{2S\sinh^{2}\phi_{\bm {k}}}{\omega-\varepsilon_{\alpha}^{(\text{N})}(\mathbf{k})/\hbar+i\eta}-\frac{2S \cosh^{2}\phi_{\mathbf{k}}}{\omega+\varepsilon_{\beta}^{(\text{N})}(\mathbf{k})/\hbar +i\eta},\] (IV.4)
where \(1/\eta\) denotes the relaxation time of the magnon. Within the phenomenological theory, one may set \(\eta\simeq\alpha\omega\) with \(\alpha\) being the dimensionless Gilbert damping constant [59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69]. The explicit form of the tunneling DC spin current in the Neel ordered phase is therefore given by
\[\langle\bar{I_{\rm S}}\rangle=\frac{S}{4\pi}\frac{1}{\left(k_{\rm B}T\right)^{2 }}\frac{1}{N/2}\sum_{\mathbf{k}}\int_{-\infty}^{\infty}d\omega\frac{\left(\hbar \omega\right)^{2}}{\sinh^{2}[\hbar\omega/2k_{\rm B}T]}\frac{1}{\sqrt{1-\tanh^ {2}(2\phi_{\mathbf{k}})}}\Bigg{[}\frac{\eta}{\left\{\omega-\varepsilon_{\alpha}^{ \rm(N)}(\mathbf{k})/\hbar\right\}^{2}+\eta^{2}}-\frac{\eta}{\left\{\omega+ \varepsilon_{\beta}^{\rm(N)}(\mathbf{k})/\hbar\right\}^{2}+\eta^{2}}\Bigg{]}.\] (IV.5)
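As a rough numerical illustration of Eq. (IV.5), the sketch below sums the Lorentzian-broadened magnon spectral weights over a simple-cubic magnetic Brillouin zone (\(\hbar=k_{\rm B}=1\)). The dispersions \(\varepsilon^{(\rm N)}_{\alpha,\beta}(\mathbf{k})=\varepsilon_{0}(\mathbf{k})\pm B\) with \(\varepsilon_{0}(\mathbf{k})=2S\sqrt{(3J+|D|)^{2}-J^{2}\gamma_{\mathbf{k}}^{2}}\), \(\gamma_{\mathbf{k}}=\sum_{i}\cos k_{i}\), and \(\tanh(2\phi_{\mathbf{k}})=J\gamma_{\mathbf{k}}/(3J+|D|)\) are the standard cubic-lattice spin-wave results assumed here as a stand-in for Eq. (II.7), not quoted from it.

```python
import numpy as np

# Brute-force sketch of the normalized Neel-phase spin current, Eq. (IV.5),
# on a simple cubic lattice (hbar = k_B = 1, S = J = 1).  The dispersions
# eps_{alpha,beta}(k) = eps_0(k) +/- B and the Bogoliubov factor used here
# are the standard spin-wave results stated in the lead-in text (assumptions).

S, J, D, B, T = 1.0, 1.0, 0.1, 0.5, 1.0     # B below the spin-flop field ~ 2S*sqrt(6J|D|)
alpha_G, eta0 = 0.01, 0.001                 # damping, as in Fig. 7
L = 16                                      # linear size of the k-grid (N/2 = L^3 points)
k = 2*np.pi*np.arange(L)/L
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
gamma = (np.cos(KX) + np.cos(KY) + np.cos(KZ)).ravel()

A = 2*S*(3*J + abs(D))
eps0 = np.sqrt(A**2 - (2*J*S*gamma)**2)
eps_a, eps_b = eps0 + B, eps0 - B
bogo = 1.0/np.sqrt(1.0 - (2*J*S*gamma/A)**2)   # 1/sqrt(1 - tanh^2(2 phi_k))

w = np.linspace(-12.0, 12.0, 4000)             # grid chosen to avoid w = 0 exactly
dw = w[1] - w[0]
eta = alpha_G*np.abs(w) + eta0
thermal = w**2/np.sinh(w/(2*T))**2             # (hbar w)^2 / sinh^2(hbar w / 2 k_B T)

I = 0.0
for ea, eb, bg in zip(eps_a, eps_b, bogo):
    lor = eta/((w - ea)**2 + eta**2) - eta/((w + eb)**2 + eta**2)
    I += bg*np.sum(thermal*lor)*dw
I *= S/(4*np.pi*T**2)/L**3
print("normalized Neel-phase spin current:", I)   # negative, cf. Fig. 7
```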
Similarly, let us write down the explicit form of \(\langle\bar{I_{\rm S}}\rangle\) in the canted phase. From the spin-wave approximation based on Eq. (II.15), the magnon Green's functions on A and B sublattices are calculated as
\[\mathcal{G}_{+-}^{\rm(X)R}(\mathbf{k},\omega)=\frac{S}{4}\frac{A_{\mathbf{k}}^{\alpha }}{\omega-\varepsilon_{\alpha}^{\rm(C)}(\mathbf{k})/\hbar+i\eta}-\frac{S}{4} \frac{B_{\mathbf{k}}^{\alpha}}{\omega+\varepsilon_{\alpha}^{\rm(C)}(\mathbf{k})/ \hbar+i\eta}-\frac{S}{4}\frac{B_{\mathbf{k}}^{\beta}}{\omega+\varepsilon_{\beta}^ {\rm(C)}(\mathbf{k})/\hbar+i\eta}+\frac{S}{4}\frac{A_{\mathbf{k}}^{\beta}}{\omega- \varepsilon_{\beta}^{\rm(C)}(\mathbf{k})/\hbar+i\eta},\] (IV.6)
where X (=A, B) is the sublattice index, \(A_{\mathbf{k}}^{x}=\big{(}1+\cos^{2}\theta\big{)}\big{(}\cosh^{2}\varphi_{\mathbf{k} }^{x}+\sinh^{2}\varphi_{\mathbf{k}}^{x}\big{)}\ +\ 2\cos\theta\), \(B_{\mathbf{k}}^{x}=\big{(}1+\cos^{2}\theta\big{)}\big{(}\cosh^{2}\varphi_{\mathbf{k} }^{x}+\sinh^{2}\varphi_{\mathbf{k}}^{x}\big{)}-2\cos\theta\), and \(\cosh^{2}\varphi_{\mathbf{k}}^{x}+\sinh^{2}\varphi_{\mathbf{k}}^{x}=1/\sqrt{1-\tanh^{ 2}(2\varphi_{\mathbf{k}}^{x})}\) [\(x\ (=\alpha,\ \beta)\) is the magnon-band index]. The spin current in the canted phase is hence given by
\[\langle\bar{I_{\rm S}}\rangle=\langle\bar{I_{\rm S}}\rangle_{1}+\langle\bar{I _{\rm S}}\rangle_{2},\] (IV.7)
where
\[\langle\bar{I_{\rm S}}\rangle_{1}= S^{2}\sin^{2}\theta\frac{\delta\mu_{\rm s}}{k_{\rm B}\Delta T},\] (IV.8) \[\langle\bar{I_{\rm S}}\rangle_{2}= \frac{S}{16\pi}\frac{1}{\left(k_{\rm B}T\right)^{2}}\frac{1}{N/2} \sum_{\mathbf{k}}\int_{-\infty}^{\infty}d\omega\frac{\left(\hbar\omega\right)^{2} }{\sinh^{2}[\hbar\omega/2k_{\rm B}T]}\] \[\times\Bigg{[}\frac{\eta A_{\mathbf{k}}^{\alpha}}{\left\{\omega- \varepsilon_{\alpha}^{\rm(C)}(\mathbf{k})/\hbar\right\}^{2}+\eta^{2}}-\frac{\eta B _{\mathbf{k}}^{\alpha}}{\left\{\omega+\varepsilon_{\alpha}^{\rm(C)}(\mathbf{k})/ \hbar\right\}^{2}+\eta^{2}}-\frac{\eta B_{\mathbf{k}}^{\beta}}{\left\{\omega+ \varepsilon_{\beta}^{\rm(C)}(\mathbf{k})/\hbar\right\}^{2}+\eta^{2}}+\frac{\eta A _{\mathbf{k}}^{\beta}}{\left\{\omega-\varepsilon_{\beta}^{\rm(C)}(\mathbf{k})/\hbar \right\}^{2}+\eta^{2}}\Bigg{]}.\] (IV.9)
The first term \(\langle\bar{I_{\rm S}}\rangle_{1}\) is the contribution from the transverse magnetization in Fig. 5 and the second one \(\langle\bar{I_{\rm S}}\rangle_{2}\) is that from the magnon dynamics, like Eq. (IV.5). Hereafter we will set \(\delta\mu_{\rm s}/k_{\rm B}\Delta T\) to be \(0.01\). This is a reasonable value since in usual SSE setups, \(\delta\mu_{\rm s}\) and \(\Delta T\) are estimated as \(\delta\mu_{\rm s}\sim\mathcal{O}(10^{1})\) \(\mu\)eV \(\sim\mathcal{O}(10^{-2})\) meV [46] and \(\Delta T\sim 10\) K \(\sim\mathcal{O}(10^{0})\) meV [17].
In the following subsections, we estimate the magnetic-field and temperature dependence of the spin current with Eqs. (IV.5) and (IV.7). However, even without these formulas, the Green's functions of Eqs. (IV.4) and (IV.6) are enough to predict the sign of the spin current. Figures 6 (a) and (b) respectively show the imaginary parts of \(\mathcal{G}_{+-}^{\rm R}\) on the A and B sublattices in the Neel phase. On the A sublattice, the spin-down DoS (i.e., \(-\mathrm{Im}\mathcal{G}_{+-}^{\rm R}\)) is larger than the spin-up one, whereas the spin-up one is larger on the B sublattice. This is reasonable because the spin moment is positive and negative on the A and B sublattices, respectively (see Fig. 2(a)). Therefore, there is a competition between spin-up and spin-down carriers in the Neel phase. From the quantitative comparison between Figs. 6 (a) and (b), we can see that the spin-up carrier is more dominant than the spin-down one in the Neel phase, and the spin current is predicted to be negative. This conclusion originates from the external magnetic field, and we will discuss this point in more detail in the following subsections. Figure 6 (c) shows \(-\mathrm{Im}\mathcal{G}_{+-}^{\rm R}\) in the canted phase. The weights of \(-\mathrm{Im}\mathcal{G}_{+-}^{\rm R}\) on the A and B sublattices are equivalent to each other in this phase, and hence we plot \(-\mathrm{Im}\mathcal{G}_{+-}^{\rm R}\) only on a single sublattice. The panel (c) shows that the spin-down carrier is more dominant in the canted phase, and the static term \(\langle\bar{I_{\rm S}}\rangle_{1}\) is positive as well. Therefore, the spin current in the canted phase is predicted to be positive. This is reasonable because the canted phase has a uniform magnetization along the applied magnetic field and the dominant carrier should be the same as that of ferromagnets.
In Secs. IV.3-IV.5, we will discuss the field and temperature dependence of the spin current quantitatively using Eqs. (IV.5) and (IV.7). As we mentioned in the Introduction, SSEs of antiferromagnets \(\mathrm{Cr}_{2}\mathrm{O}_{3}\)[17; 20] and \(\mathrm{MnF}_{2}\)[18] have been experimentally investigated. Based on the mean-field approximation, we can estimate the exchange and anisotropy interactions for \(\mathrm{Cr}_{2}\mathrm{O}_{3}\) as \(J\sim 40\) K and \(|D|/J\sim\mathcal{O}(10^{-3})\), and those for \(\mathrm{MnF}_{2}\) as \(J\sim 4\) K and \(|D|/J\sim\mathcal{O}(10^{-1})\). In the remaining parts of this section, we will often refer to these values to
quantitatively discuss the properties of the spin current.
### Field dependence in the intermediate temperature range
Let us investigate the magnetic-field and temperature dependences of the tunneling spin current in the Neel and canted phases. This subsection is devoted to the field dependence in the intermediate temperature regime of \(k_{\rm B}T\sim J\) and \(k_{\rm B}T\ll k_{\rm B}T_{\rm N}\) [see Fig. 2(c)]. The magnetic-field dependence of Eqs. (IV.5) and (IV.7) is shown in Fig. 7. The spin-flop transition field \(B_{\rm f}\) is known to be almost independent of temperature, as shown in Fig. 2(c), and therefore we simply take the approximation \(B_{\rm f}=B_{\rm f}^{\rm MF}\equiv 2S\sqrt{6J|D|}\) irrespective of the temperature \(T\). Namely, we use Eq. (IV.5) for \(B<B_{\rm f}^{\rm MF}\) and Eq. (IV.7) for \(B_{\rm f}^{\rm MF}<B<B_{\rm c}\). Hereafter, we will plot the spin current under this approximation.
As expected from the argument based on the magnon DoS in Sec. III.4, we can confirm in Fig. 7 that a sign reversal of the tunneling DC spin current takes place at the spin-flop transition. In the Neel phase, the value of the spin current is negative, i.e., the sign is opposite to that of ferromagnets, whereas it becomes positive in the canted phase. This can be understood from the DoSs for spin-up and spin-down magnons shown in Fig. 6. This field dependence of the spin current (i.e., SSE voltage) is in good agreement, at a semi-quantitative level, with the experimental result for Cr\({}_{2}\)O\({}_{3}\)[20], while it does not agree with the two other experimental results of Refs. [17; 18], where no sign change is observed. Figure 7 also shows that both \(\left\langle\bar{I_{\rm S}}\right\rangle_{1}\) and \(\left\langle\bar{I_{\rm S}}\right\rangle_{2}\) can be the main contribution to the spin current in the canted phase, especially in the low-field range [Figs. 7(b) and (d)].
Figure 8 separately depicts the contributions to the spin current from the \(\alpha\) and \(\beta\) modes in both the Neel [panels (a) and (c)] and canted [panels (b) and (d)] phases. From it, we can understand some features of the spin current more deeply as follows. Low-energy excitations in the Neel phase are described by two species of magnons with different spin polarizations, namely, the \(\alpha\)-mode magnons are down-polarized and the \(\beta\)-mode magnons are up-polarized. A competition between \(\alpha\) and \(\beta\) magnons determines the sign of the spin current. The energy gap of the \(\alpha\) magnon becomes larger than that of the \(\beta\) magnon when we apply a static magnetic field \(B\), due to the Zeeman splitting. Therefore, the number of up magnons becomes dominant over down ones at a finite \(T\) under a magnetic field \(B\). The spin current carried by the up magnons is hence dominant and we obtain a negative spin current, as in Figs. 8 (a) and (c). This is consistent with the argument based on the magnon DoS in Fig. 6.
Similarly, Fig. 8 (b) and (d) also tell us why the spin
Figure 7: (Color online) Magnetic-field dependences of the tunneling spin current in the Néel [Eq. (IV.5)] and canted [Eq. (IV.7)] phases at \(k_{\rm B}T/J=1.0\). Parameters are set to be \(S=1\), \(2a_{0}=1\), \(J=1\), and \(\hbar=1\). The magnon lifetime is set to be \(\eta=\alpha|\omega|+\eta_{0}\) with \(\alpha=0.01\) and \(\eta_{0}=0.001\). The small constant \(\eta_{0}\) is introduced to enhance the stability of the numerical integral around \(\omega\to 0\). The blue and red dotted lines respectively correspond to the spin currents in Néel and canted phases. The green dotted line corresponds to the static part of the spin current in the canted phase [Eq. (IV.8)], in which the small parameter \(\delta\mu_{\rm s}/k_{\rm B}\Delta T\) is set to be \(0.01\) in \(\left\langle\bar{I_{\rm S}}\right\rangle_{1}\). Panels (a) and (b) are the field dependence of the spin current for a relatively large anisotropy \(|D|/J=0.1\): The former is in a wide range up to the saturation field \(B_{\rm c}\), while the latter is in a low-field regime \(B<2B_{\rm f}\). Similarly, panels (c) and (d) are the spin current for a small anisotropy \(|D|/J=0.001\).
current takes a positive value in the canted phase. This phase has two magnon modes (which we again call the \(\alpha\) and \(\beta\) modes), like the Neel phase. As we already mentioned [see Fig. 6(c)], both modes have a spin-down polarization on average. Therefore, as shown in Figs. 8 (b) and (d), both the \(\alpha\) and \(\beta\) modes contribute to a positive value of the spin current and cooperatively carry the spin current (no competition). This positive value can be roughly understood from the fact that the canted phase has a finite uniform magnetization along the magnetic field \(B\), like ferromagnets. In addition to these two modes, the static part \(\left<\bar{I}_{\mathrm{S}}\right>_{1}\) has a positive value as well. The spin current in the canted phase is thus positive.
### Non-monotonic behavior in the low-temperature canted phase
If we focus on a sufficiently low-temperature regime of the canted phase, our microscopic theory predicts a new property of the SSE spin current. Figure 9 shows the field dependence of Eqs. (IV.5) and (IV.7) at low temperatures \(k_{\mathrm{B}}T<J\). Even in this low-temperature regime, some properties of the spin current still survive: a sign reversal at the spin-flop transition occurs, and the spin current takes a negative (positive) value in the Neel (canted) phase. However, from Fig. 9, one sees that the spin current in the canted phase changes in a non-monotonic way with respect to the magnetic field. This behavior can be understood as follows. At low temperatures, the magnon density is low, which means a small number of spin-current carriers. As a result, the static part \(\left<\bar{I}_{\mathrm{S}}\right>_{1}\) of the tunneling spin current [Eq. (IV.8)] is generally dominant over the dynamical one \(\left<\bar{I}_{\mathrm{S}}\right>_{2}\) [Eq. (IV.9)] in the sufficiently low-temperature region. Since the canted angle \(\theta\) is a monotonically decreasing function of the magnetic field \(B\) [see Fig. 2 (b)], the static part decreases with increasing field. When the field \(B\) becomes close to the saturation value \(B_{\mathrm{c}}\), the static part \(\left<\bar{I}_{\mathrm{S}}\right>_{1}\) approaches zero and hence the dynamical part \(\left<\bar{I}_{\mathrm{S}}\right>_{2}\) is again dominant. Therefore, we can observe the non-monotonic field dependence of the spin current shown in Fig. 9.
The non-monotonic behavior of the spin current has never been observed experimentally. Our theory indicates that such a behavior can be observed in the canted phase of antiferromagnets if the temperature is set sufficiently low.
### Shrinkage of magnetic moment in the low-temperature Neel phase
As we already explained, the reason why the spin current in the Neel phase takes a negative value is that the spin-up magnon (\(\beta\) mode) density is larger than the spin-down magnon (\(\alpha\) mode) one owing to the Zeeman splitting [see Fig. 10(a)]. However, in order to more accurately compute the spin current beyond the linear spin-wave approximation, we have to take care of the shrinkage of magnetic moments on A and B sublattices. The increase of magnon densities generally means the reduction of magnetic moments from the classical spin configuration. Therefore, in the Neel phase, the more the
Figure 8: (Color online) Magnetic-field dependences of spin currents carried by \(\alpha\)-mode magnons (red dotted line) and \(\beta\)-mode magnons (blue dotted line). The green dotted line in panels (a) and (c) denotes the tunneling spin current in the Néel phase [Eq. (IV.5)], while that in panels (b) and (d) does the dynamical part of the spin current \(\left<\bar{I}_{\mathrm{S}}\right>_{2}\) in the canted phase [Eq. (IV.9)]. Parameters of \(S\), \(a_{0}\), \(J\), \(\eta\), and \(\hbar\) are set to be the same as those of Fig. 7. Panels (a) and (b) are the results for \(|D|/J=0.1\) and \(k_{\mathrm{B}}T/J=1.0\). Panels (c) and (d) are for \(|D|/J=0.001\) and \(k_{\mathrm{B}}T/J=1.0\).
Figure 9: (Color online) Magnetic-field dependences of the tunneling spin current in the Néel [Eq. (IV.5)] and canted [Eq. (IV.7)] phases at a low temperature of \(k_{\mathrm{B}}T/J=0.1\). Parameters are set to be \(S=1\), \(2a_{0}=1\), \(J=1\), \(\eta=0.01|\omega|+0.001\), \(\hbar=1\), and \(\delta\mu_{\mathrm{s}}/k_{\mathrm{B}}\Delta T=0.01\). Panel (a) is the results for \(|D|/J=0.1\), while (b) is for \(|D|/J=0.001\). Blue and red dotted lines denote the spin current in the Néel and canted phases, respectively. The green dotted line corresponds to the static part of the spin current \(\left<\bar{I}_{\mathrm{S}}\right>_{1}\) in the canted phase [Eq. (IV.8)].
spin-up magnon density increases, the more the spin moment on the B sublattice decreases, since the spin-up magnons (\(\beta\) mode) mainly reside on the B sublattice [see Fig. 10(b)]. Let us take this quantum-fluctuation effect into account in the calculation of the tunneling spin current. Within the linear spin-wave theory, we have approximated the transverse spin as \(S^{+}_{\mathbf{r_{A}}}=\sqrt{2S}a_{\mathbf{r_{A}}}\) and \(S^{+}_{\mathbf{r_{B}}}=\sqrt{2S}b^{\dagger}_{\mathbf{r_{B}}}\) as in Eq. (II.5), but we should replace this approximation with the original expressions \(S^{+}_{\mathbf{r_{A}}}=(2S-a^{\dagger}_{\mathbf{r_{A}}}a_{\mathbf{r_{A}}})^{1/2}a_{\mathbf{r_{A}}}\) and \(S^{+}_{\mathbf{r_{B}}}=b^{\dagger}_{\mathbf{r_{B}}}(2S-b^{\dagger}_{\mathbf{r_{B}}}b_{\mathbf{r_{B}}})^{1/2}\) if we want to include the effects of the magnon densities more quantitatively. It is not easy to treat the square-root part exactly, but we may take the simple approximation
\[2S-a^{\dagger}_{\mathbf{r_{A}}}a_{\mathbf{r_{A}}}\to 2S- \langle a^{\dagger}_{\mathbf{r_{A}}}a_{\mathbf{r_{A}}}\rangle\] \[2S-b^{\dagger}_{\mathbf{r_{B}}}b_{\mathbf{r_{B}}}\to 2S- \langle b^{\dagger}_{\mathbf{r_{B}}}b_{\mathbf{r_{B}}}\rangle\]
for a sufficiently low temperature, in which the magnon densities \(\langle a^{\dagger}_{\mathbf{r_{A}}}a_{\mathbf{r_{A}}}\rangle\) and \(\langle b^{\dagger}_{\mathbf{r_{B}}}b_{\mathbf{r_{B}}}\rangle\) are small enough. Within the linear spin-wave approximation, the magnon densities are easily estimated as
\[\langle a^{\dagger}_{\mathbf{r_{A}}}a_{\mathbf{r_{A}}}\rangle =\frac{1}{N/2}\sum_{\mathbf{k}}\frac{\cosh^{2}\phi_{\mathbf{k}}}{e^{\varepsilon_{\alpha}^{(\text{N})}(\mathbf{k})/k_{\text{B}}T}-1}+\frac{1}{N/2}\sum_{\mathbf{k}}\frac{\sinh^{2}\phi_{\mathbf{k}}}{e^{\varepsilon_{\beta}^{(\text{N})}(\mathbf{k})/k_{\text{B}}T}-1}+\frac{1}{N/2}\sum_{\mathbf{k}}\sinh^{2}\phi_{\mathbf{k}},\] \[\langle b^{\dagger}_{\mathbf{r_{B}}}b_{\mathbf{r_{B}}}\rangle =\frac{1}{N/2}\sum_{\mathbf{k}}\frac{\cosh^{2}\phi_{\mathbf{k}}}{e^{\varepsilon_{\beta}^{(\text{N})}(\mathbf{k})/k_{\text{B}}T}-1}+\frac{1}{N/2}\sum_{\mathbf{k}}\frac{\sinh^{2}\phi_{\mathbf{k}}}{e^{\varepsilon_{\alpha}^{(\text{N})}(\mathbf{k})/k_{\text{B}}T}-1}+\frac{1}{N/2}\sum_{\mathbf{k}}\sinh^{2}\phi_{\mathbf{k}}.\] (IV.10)
Figure 11: (Color online) Magnetic-field dependences of the tunneling spin current in the Néel phase at \(k_{\text{B}}T/J=0.1\). Panels (a) and (b) are for \(|D|/J=0.1\) and \(|D|/J=0.001\), respectively. Blue and green dotted lines are respectively the modified spin current [Eq. (IV.11)] and the simple linear-spin-wave result [Eq. (IV.5)]. We set \(S=1/2\), \(a_{0}=1\), \(J=1\), \(\eta=0.01|\omega|+0.001\), and \(\hbar=1\).
Figure 10: (Color online) (a) Image of \(\alpha\) and \(\beta\) magnon densities in the Néel phase with a finite magnetic field \(B\). (b) Shrinkage of magnetic moments on A and B sublattices in the Néel phase: Blue and red arrows respectively denote the spin expectation values on A and B sublattices. Dotted arrows show their classical values \(\pm S\). Due to a finite magnon density, the spin moment generally becomes smaller than the classical value \(S\). The moment on A sublattice is larger than that on B sublattice because \(\beta\) magnons mainly reside on B sublattices and their large density decreases the moment on B sublattice.
Using the approximated relation \(S^{+}_{\mathbf{r}_{\rm A}}\simeq\sqrt{2S-\langle a^{\dagger}_{\mathbf{r}_{\rm A}}a_{\mathbf{r}_ {\rm A}}\rangle}a_{\mathbf{r}_{\rm A}}\) and \(S^{+}_{\mathbf{r}_{\rm B}}\simeq b^{\dagger}_{\mathbf{r}_{\rm B}}\sqrt{2S-\langle b^{ \dagger}_{\mathbf{r}_{\rm B}}b_{\mathbf{r}_{\rm B}}\rangle}\), we can compute the tunneling spin current with the magnon-density effect as
\[\langle\bar{I}_{\rm S}\rangle=\frac{1}{8\pi}\frac{1}{(k_{\rm B}T) ^{2}}\frac{1}{N/2}\sum_{\mathbf{k}}\int_{-\infty}^{\infty}d\omega\frac{(\hbar \omega)^{2}}{\sinh^{2}[\hbar\omega/2k_{\rm B}T]}\\ \times\left[\Big{\{}\big{(}2S-\langle a^{\dagger}_{\mathbf{r}_{\rm A }}a_{\mathbf{r}_{\rm A}}\rangle\big{)}\cosh^{2}\phi_{\mathbf{k}}+\big{(}2S-\langle b^{ \dagger}_{\mathbf{r}_{\rm B}}b_{\mathbf{r}_{\rm B}}\rangle\big{)}\sinh^{2}\phi_{\mathbf{ k}}\Big{\}}\frac{\eta}{\big{\{}\omega-\varepsilon_{\alpha}^{(\rm N)}(\mathbf{k})/ \hbar\big{\}}^{2}+\eta^{2}}\right.\\ \left.-\Big{\{}\big{(}2S-\langle b^{\dagger}_{\mathbf{r}_{\rm B}}b_{ \mathbf{r}_{\rm B}}\rangle\big{)}\cosh^{2}\phi_{\mathbf{k}}+\big{(}2S-\langle a^{ \dagger}_{\mathbf{r}_{\rm A}}a_{\mathbf{r}_{\rm A}}\rangle\big{)}\sinh^{2}\phi_{\mathbf{ k}}\Big{\}}\frac{\eta}{\big{\{}\omega+\varepsilon_{\beta}^{(\rm N)}(\mathbf{k})/ \hbar\big{\}}^{2}+\eta^{2}}\right].\] (IV.11)
The magnitude of the spin current is decreased by the factors \(\langle a^{\dagger}_{\mathbf{r}_{\rm A}}a_{\mathbf{r}_{\rm A}}\rangle\) and \(\langle b^{\dagger}_{\mathbf{r}_{\rm B}}b_{\mathbf{r}_{\rm B}}\rangle\) in Eq. (IV.11). We emphasize that the contribution from the B sublattice is reduced more strongly than that from the A sublattice because of the inequality \(\langle b^{\dagger}_{\mathbf{r}_{\rm B}}b_{\mathbf{r}_{\rm B}}\rangle>\langle a^{\dagger}_{\mathbf{r}_{\rm A}}a_{\mathbf{r}_{\rm A}}\rangle\) in a finite magnetic field \(B>0\).
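A quick numerical estimate of the sublattice magnon densities in Eq. (IV.10), under the same simple-cubic spin-wave assumptions as in the sketch following Eq. (IV.5), illustrates this inequality for \(B>0\).

```python
import numpy as np

# Sketch of the sublattice magnon densities of Eq. (IV.10), under the same
# simple-cubic spin-wave assumptions as the sketch following Eq. (IV.5)
# (hbar = k_B = 1).  It illustrates <b^dag b> > <a^dag a> for B > 0, i.e.,
# the stronger shrinkage of the moment on the B sublattice.

S, J, D, B, T = 1.0, 1.0, 0.1, 0.5, 0.5
L = 24
k = 2*np.pi*np.arange(L)/L
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
gamma = (np.cos(KX) + np.cos(KY) + np.cos(KZ)).ravel()

A = 2*S*(3*J + abs(D))
eps0 = np.sqrt(A**2 - (2*J*S*gamma)**2)
eps_a, eps_b = eps0 + B, eps0 - B               # alpha (spin-down) and beta (spin-up)
cosh_2phi = 1.0/np.sqrt(1.0 - (2*J*S*gamma/A)**2)
c2, s2 = (cosh_2phi + 1)/2, (cosh_2phi - 1)/2   # cosh^2(phi_k), sinh^2(phi_k)

n = lambda e: 1.0/np.expm1(e/T)                 # Bose factor
n_A = np.mean(c2*n(eps_a) + s2*n(eps_b) + s2)   # <a^dag a>, Eq. (IV.10)
n_B_site = np.mean(c2*n(eps_b) + s2*n(eps_a) + s2)   # <b^dag b>, Eq. (IV.10)
print("<a+a> =", n_A, "  <b+b> =", n_B_site)    # <b+b> exceeds <a+a> for B > 0
```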
From these arguments, we can say (as shown in Fig. 10) that a magnetic field simultaneously induces (i) a density difference between the \(\alpha\) and \(\beta\) modes and (ii) a reduction of the magnetic moments. Effect (i) favors a negative value of the spin current, while the moment reduction (ii) tends to push the spin current toward positive values (i.e., the magnitude of the negative spin current decreases). Namely, there is a competition in terms of the sign of the spin current and a possibility that the spin current becomes positive in the Neel phase. Figure 11 shows that the magnitude of the modified spin current of Eq. (IV.11) slightly decreases from the linear-spin-wave result, while the spin current remains negative. From this result, we conclude that even at low temperatures, the sign of the spin current in the Neel phase is always opposite to that in ferromagnets within the formalism of the interface spin current.
In addition, from the above arguments, we can expect that the spin current in the Neel phase may become positive if the system possesses much larger quantum fluctuations and the moment reduction is correspondingly larger. More frustrated antiferromagnets or quasi-two-dimensional ones might be candidates for a positive spin current.
We note that the reduction of the magnetic moment also occurs in the canted phase, but its effect is not so critical because there is no competition between the carriers in the canted phase.
## V Comparison with other magnets
In this section, we compare the SSE in antiferromagnetic insulators with those in other magnets. First, we concentrate on ferromagnets, whose SSE has been well studied in spintronics, and quantitatively compare the spin currents in ferromagnets and antiferromagnets in Sec. V.1. Second, we briefly discuss experimental results of SSEs in various sorts of magnets in Sec. V.2.
### Comparison with ferromagnetic insulators
As we mentioned in the Introduction, the formalism of the tunneling spin current has the following advantages: (i) it is applicable to a broad class of magnetic systems and (ii) one can compare the values of the spin currents in different magnetic systems without any additional parameters. In the field of spintronics, the tunneling-current formalism was first applied to the SSE in 3D ordered ferromagnets [24; 43].
Here, we briefly review it in order to compare it with our result for antiferromagnets. A typical Hamiltonian for ferromagnets is given by changing the sign of the exchange coupling \(J\) in Eq. (II.1). According to the linear
Figure 12: (Color online) Magnetic-field dependences of the tunneling spin currents for ferromagnets and antiferromagnets. We set \(S=1\), \(2a_{0}=1\), \(J=1\), \(\eta=0.01|\omega|+0.001\), \(\hbar=1\), \(\delta\mu_{\rm s}/k_{\rm B}\Delta T=0.01\), and \(N=N^{\prime}\). Panels (a) and (b) are for \(|D|/J=0.1\), while panels (c) and (d) are for \(|D|/J=0.001\). We assume that the values of \(D\), \(J_{\rm sd}\), and \(N_{\rm int}\) in the antiferromagnet are the same as those in the ferromagnet. Blue, red, and green dotted lines respectively correspond to the spin currents for the Néel phase and the canted phase of the antiferromagnet, and the ordered phase of the ferromagnet.
spin-wave theory, the spin-wave Hamiltonian for ferromagnets with polarization \(\langle S^{z}\rangle>0\) is approximated by
\[\mathscr{H}_{\rm SW}^{\rm Ferro}=\sum_{\mathbf{k}}\varepsilon_{\rm SW}(\mathbf{k})a_{\mathbf{ k}}^{\dagger}a_{\mathbf{k}},\] (V.1)
where \(a_{\mathbf{k}}^{\dagger}\) (\(a_{\mathbf{k}}\)) is the creation (annihilation) operator of the magnon with wave vector \(\mathbf{k}\), and \(\varepsilon_{\rm SW}(\mathbf{k})=2S|J|(3-\gamma_{\mathbf{k}})+2S|D|+B\) is the magnon dispersion. In the magnon (spin-wave) picture, the transverse spin correlation function of the ferromagnetic Heisenberg model is given by the magnon Green's function
\[G_{+-}^{\rm R}(\mathbf{k},\omega)=\frac{2S}{\omega-\varepsilon_{\rm SW}(\mathbf{k})/ \hbar+i\eta}.\] (V.2)
Using these ingredients, we can compute the tunneling spin current for the SSE in ordered ferromagnets. We can obtain the spin current by replacing \(N_{\rm int}/2\) and \(\frac{1}{N/2}\sum_{\mathbf{k}}\left\{{\rm Im}\mathscr{G}_{+-}^{\rm(A)R}(\mathbf{k},\omega)+{\rm Im}\mathscr{G}_{+-}^{\rm(B)R}(\mathbf{k},\omega)\right\}\) with \(N_{\rm int}\) and \(\frac{1}{N^{\prime}}\sum_{\mathbf{k}}{\rm Im}G_{+-}^{\rm R}(\mathbf{k},\omega)\) (\(N^{\prime}\) is the total number of sites), respectively, in Eq. (III.20). The explicit form is hence given by
\[\langle\bar{I}_{\rm S}\rangle=\frac{S}{4\pi}\frac{1}{(k_{\rm B}T) ^{2}}\frac{1}{N^{\prime}}\sum_{\mathbf{k}}\int_{-\infty}^{\infty}d\omega\frac{( \hbar\omega)^{2}}{\sinh^{2}[\hbar\omega/2k_{\rm B}T]}\\ \times\frac{\eta}{\left\{\omega-\varepsilon_{\rm SW}(\mathbf{k})/ \hbar\right\}^{2}+\eta^{2}},\] (V.3)
where we have assumed that the Hamiltonians for metal and interface parts are the same as those of antiferromagnets.
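For reference, a minimal sketch of the normalized ferromagnetic spin current of Eq. (V.3) is given below, using the dispersion \(\varepsilon_{\rm SW}(\mathbf{k})\) quoted above on a simple cubic lattice; the choice \(\gamma_{\mathbf{k}}=\sum_{i}\cos k_{i}\) and the damping parameters are illustrative assumptions.

```python
import numpy as np

# Sketch of the normalized ferromagnetic spin current, Eq. (V.3), using the
# dispersion eps_SW(k) = 2S|J|(3 - gamma_k) + 2S|D| + B quoted above
# (hbar = k_B = 1; gamma_k = sum_i cos k_i on a simple cubic lattice is an
# assumption, as are the damping parameters).

S, J, D, B, T = 1.0, 1.0, 0.1, 0.5, 1.0
alpha_G, eta0 = 0.01, 0.001
L = 16
k = 2*np.pi*np.arange(L)/L
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
gamma = (np.cos(KX) + np.cos(KY) + np.cos(KZ)).ravel()
eps = 2*S*abs(J)*(3 - gamma) + 2*S*abs(D) + B

w = np.linspace(-20.0, 20.0, 4000)      # grid avoids w = 0 exactly
dw = w[1] - w[0]
eta = alpha_G*np.abs(w) + eta0
thermal = w**2/np.sinh(w/(2*T))**2

I = sum(np.sum(thermal*eta/((w - e)**2 + eta**2))*dw for e in eps)
I *= S/(4*np.pi*T**2)/L**3
print("normalized ferromagnetic spin current:", I)   # positive, cf. Fig. 12
```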
In Fig. 12, we quantitatively compare the values of the tunneling spin currents in an antiferromagnet [Eqs. (IV.5) and (IV.7)] and a ferromagnet [Eq. (V.3)]. We have set \(S=1\), \(J=1\), and \(N=N^{\prime}\) in both the ferromagnet and the antiferromagnet, and we have supposed that the values of \(D\), \(J_{\rm sd}\), and \(N_{\rm int}\) in the antiferromagnet are the same as those in the ferromagnet. Figure 12 shows that (i) the magnitude of the spin current in the antiferromagnet can approach the same order as that in the ferromagnet in the intermediate temperature range \(k_{\rm B}T\sim J\) and (ii) the former can even be larger than that of the ferromagnet in the low-temperature range \(k_{\rm B}T\ll J\). In several experiments on SSEs [17; 18; 15; 4; 20] for ferromagnetic and antiferromagnetic insulators, the measured SSE voltages are usually of the order of microvolts. Therefore, we can conclude that the computed spin current for antiferromagnets is sufficiently reliable at a semi-quantitative level. The study in Ref. [34] has performed a similar comparison between the spin currents in SSEs for a ferromagnet and an antiferromagnetic spin-1/2 chain. It concludes that the spinon spin current in the spin-1/2 chain is three to four orders of magnitude smaller than that in ferromagnets, and this result is in good agreement with the experimental result. These experimental and theoretical works imply that the microscopic formalism for the tunneling current [43] correctly captures some essential features of the spin currents in SSEs (especially, their magnetic-field dependence) for different sorts of magnets.
### Comparison with various magnets
Here, we briefly discuss experimental results of SSEs in different magnets. As we mentioned in the Introduction, SSEs in various magnets beyond the usual ferro- or ferrimagnets have recently been investigated both experimentally and theoretically. Figure 13 shows typical magnetic-field or temperature dependences of the SSE voltage (proportional to the tunneling spin current) in different magnets. Panels (e)-(g) show that magnetic states even without magnetic order can potentially carry a spin current driven by a thermal gradient. Theoretical studies [23; 34; 35; 36] show that the tunneling spin-current formalism can explain the main features (especially, the magnetic-field dependence) of SSEs in different magnets. This means that the magnetic DoS plays an important role for the spin current in SSEs of a broad class of magnets. As we discussed in Sec. III.4, the magnetic DoS \(\propto-{\rm Im}\mathscr{G}_{\pm\mp}^{\rm R}(\omega)\) can be observed by inelastic neutron scattering. Therefore, SSE and neutron scattering experiments cooperatively lead to a deeper understanding of various magnets and their excitations.
From this figure, we see that SSE voltages exhibit a rich variety as functions of \(k_{\rm B}T\) and \(B\), depending on the sort of magnetic order and excitations. In the first stage of SSE research, the SSE attracted attention from the application viewpoint and therefore ferromagnets (typically YIG) were mainly investigated. However, as we mentioned above, Fig. 13 indicates that the SSE is also useful to detect characteristic properties of various magnets. In other words, the SSE can be used as a probe of magnetic properties in a purely scientific sense. For instance, it is generally difficult to experimentally identify the features of quantum spin liquids and topological states in frustrated magnets [70; 71; 72]. The SSE would be useful for studying such "invisible" quantum states in magnets.
## VI Discussions
In Secs. III\(-\)V, based on the spin-wave theory and the nonequilibrium Green's function method, we have analyzed the SSE of antiferromagnets. Our theoretical result nicely explains the sign reversal and the magnetic-field dependence of the spin current observed in a recent experiment for Cr\({}_{2}\)O\({}_{3}\)[20]. However, our theory does not agree with two other experimental results [17; 18], in which no sign reversal occurs. In this section, we qualitatively consider important effects and possibilities that are not taken into account by the tunneling spin-current formalism.
First, we discuss the missing interaction term on the interface in the canted phase. Our spin-current formula starts from the Heisenberg equation of motion for the total spin in the metal (or antiferromagnet), as in Eq. (III.3). In the calculation of this spin current, we simply neglect the \(z\) component of the interface interaction, \(S_{\mathbf{r}}^{z}\sigma_{\mathbf{r}}^{z}\). This treatment is justified when the \(z\) component of "local" spins is con
served in the system, in which \(S^{z}_{\mathbf{r}}\sigma^{z}_{\mathbf{r}}\) does not induce any dynamics of the \(z\) component of spins. However, in the canted phase, the local \(S^{z}_{\mathbf{r}}\) conservation is broken due to the transverse magnetization (see Fig. 5). As a result, \(S^{z}_{\mathbf{r}}\) possesses a constant term and one-magnon operators like transverse spins \(S^{\pm}_{\mathbf{r}}\) [see Eqs. (II.18) and (II.19)]. Within the spin-wave theory, the static and one-magnon parts of \(S^{z}_{\mathbf{r}}\) are given by
\[S^{z}_{\mathbf{r}_{\rm A}}\simeq\frac{\sqrt{S}}{2}\sin\theta\sqrt{ \frac{1}{N/2}}\sum_{\mathbf{k}}e^{i\mathbf{k}\cdot\mathbf{r}_{\rm A}}\Big{(}\cosh\varphi^{ \alpha}_{\mathbf{k}}\alpha_{\mathbf{k}}+\sinh\varphi^{\alpha}_{\mathbf{k}}\alpha^{\dagger} _{-\mathbf{k}}+\sinh\varphi^{\beta}_{\mathbf{k}}\beta^{\dagger}_{\mathbf{k}}+\cosh\varphi ^{\beta}_{\mathbf{k}}\beta_{-\mathbf{k}}\Big{)}\\ +\frac{\sqrt{S}}{2}\sin\theta\sqrt{\frac{1}{N/2}}\sum_{\mathbf{k}}e^ {-i\mathbf{k}\cdot\mathbf{r}_{\rm A}}\Big{(}\cosh\varphi^{\alpha}_{\mathbf{k}}\alpha^{ \dagger}_{\mathbf{k}}+\sinh\varphi^{\alpha}_{\mathbf{k}}\alpha_{-\mathbf{k}}+\sinh\varphi ^{\beta}_{\mathbf{k}}\beta_{\mathbf{k}}+\cosh\varphi^{\beta}_{\mathbf{k}}\beta^{\dagger}_{ -\mathbf{k}}\Big{)}+S\cos\theta,\] (VI.1)
\[S^{z}_{\mathbf{r}_{\rm B}}\simeq\frac{\sqrt{S}}{2}\sin\theta\sqrt{ \frac{1}{N/2}}\sum_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{r}_{\rm B}}\Big{(}\sinh\varphi^ {\alpha}_{\mathbf{k}}\alpha^{\dagger}_{\mathbf{k}}+\cosh\varphi^{\alpha}_{\mathbf{k}} \alpha_{-\mathbf{k}}-\cosh\varphi^{\beta}_{\mathbf{k}}\beta_{\mathbf{k}}-\sinh\varphi^{ \beta}_{\mathbf{k}}\beta^{\dagger}_{-\mathbf{k}}\Big{)}\\ +\frac{\sqrt{S}}{2}\sin\theta\sqrt{\frac{1}{N/2}}\sum_{\mathbf{k}}e ^{i\mathbf{k}\cdot\mathbf{r}_{\rm B}}\Big{(}\sinh\varphi^{\alpha}_{\mathbf{k}}\alpha_{\bm {k}}+\cosh\varphi^{\alpha}_{\mathbf{k}}\alpha^{\dagger}_{-\mathbf{k}}-\cosh\varphi^{ \beta}_{\mathbf{k}}\beta^{\dagger}_{\mathbf{k}}-\sinh\varphi^{\beta}_{\mathbf{k}}\beta_{- \mathbf{k}}\Big{)}+S\cos\theta.\] (VI.2)
Therefore, in addition to the transverse spin interaction on the interface, the Ising interaction \(S^{z}_{\mathbf{r}}\sigma^{z}_{\mathbf{r}}\) potentially
Figure 13: (Color online) Schematic views of field or temperature dependences of SSE voltages in magnetically ordered phases [(a)-(d)] and quantum disordered states [(e)-(g)]. Panel (a) is for a typical ferromagnet (see, e.g., Ref. [15]). Panel (b) is the SSE voltage for the antiferromagnets MnF\({}_{2}\)[18] and Cr\({}_{2}\)O\({}_{3}\)[17] without sign change, while panel (c) is the experimental result accompanying a sign reversal for an antiferromagnet Cr\({}_{2}\)O\({}_{3}\) observed by the other group [20]. Panel (d) is the temperature dependence of the SSE voltage in a ferrimagnet [19], which includes two sign reversals. Panels (e), (f) and (g) are respectively the magnetic-field dependence of the SSE voltages in a 1D spin liquid with spinon excitations [34], a spin-nematic liquid with both magnons and magnon pairs [35], and a spin-Peierls state with triplons [36].
contributes to the tunneling spin current in a first-order perturbation calculation. Roughly speaking, the perturbation term is given by a product of two correlators, \(\langle S^{z}_{\mathbf{r}}(t)S^{\pm}_{\mathbf{r}}(t^{\prime})\rangle\) and \(\langle\sigma^{z}_{\mathbf{r}}(t)\sigma^{\pm}_{\mathbf{r}}(t^{\prime})\rangle\). From Eqs. (VI.1) and (VI.2), the former correlator \(\langle S^{z}_{\mathbf{r}}(t)S^{\pm}_{\mathbf{r}}(t^{\prime})\rangle\) includes both a static term and magnon Green's functions. On the other hand, the value of the correlation function \(\langle\sigma^{z}_{\mathbf{r}}(t)\sigma^{\pm}_{\mathbf{r}}(t^{\prime})\rangle\) strongly depends on the sort and the strength of spin-orbit couplings in the metal. We thus expect that these correlators somewhat change the value of the spin current in the canted phase, especially near the spin-flop transition, \(B\sim B_{\rm f}\). A theory including the effect of these correlators is an important future issue in the field of spintronics.
We have also ignored higher-order magnon interaction terms in the calculation of the spin correlation functions. However, the effects of such higher-order terms are generally weak if we consider magnetically ordered phases in the sufficiently low-temperature regime.
Secondly, we consider the interface properties. As already mentioned, a sign reversal of the SSE voltage is observed in an experiment for a Cr\({}_{2}\)O\({}_{3}\)-Pt bilayer system [20], while another experiment for the same bilayer setup does not detect any sign reversal [17]. As the authors of Ref. [20] briefly discussed, these two experimental results indicate that the sort of interface and the method of creating it strongly affect the tunneling spin current and the resulting SSE voltage. The presence of an interface generally reduces the symmetry of the system compared to the bulk material. Inversion and translation symmetries are clearly broken due to the interface. Such low-symmetry systems allow the appearance of additional magnetic anisotropies such as Dzyaloshinskii-Moriya and additional single-ion interactions in the vicinity of the interface. For instance, we can expect that additional magnetic anisotropies near the interface change the spin orientation from the bulk one, as shown in Fig. 14. If a canted state appears near the interface with a certain probability when the bulk system is in the Neel ordered phase, the tunneling spin current might become positive, differently from the negative value predicted in the Neel phase. This scenario potentially (partially) explains the positive SSE voltage observed in Ref. [17].
Finally, we discuss the transport of the spin current flowing in the bulk antiferromagnet. The microscopic theory used in this paper focuses on the tunneling process at the interface, and (as we discussed in the last section) this formalism has succeeded in explaining SSEs of various magnets (see Fig. 13) including antiferromagnets. However, in real experiments, a spin current exists not only around the interface but also in the whole region of the bulk: the spin current flows along the temperature gradient in the whole region of the antiferromagnet and a part of it arrives at the interface. To complete the microscopic theory for the SSE in bilayer systems of a magnet and a metal, we have to combine the tunneling spin-current formalism with a bulk transport theory such as the Boltzmann equation approach, without any phenomenological parameter. In the Neel phase, both spin-up and spin-down magnons flow along the thermal gradient and hence the spin-moment accumulation near the interface is expected to be relatively small because of the cancellation between spin-up and spin-down magnons. This implies that the bulk transport of the spin current in the Neel phase is more important than that in the canted phase. Unification of the bulk and boundary transports is another critical future issue for a deeper understanding of spin transport phenomena including the SSE.
## VII Conclusion
In summary, we have developed a microscopic theory for the SSE of antiferromagnetic insulators. Our theory explains the sign reversal and the magnetic-field dependence of the SSE voltage (\(\propto\) the tunneling spin current) at a semi-quantitative level. In this last section, we briefly summarize the contents of this paper.
In Secs. II\(-\)IV, based on the non-equilibrium Green's function method and the spin-wave theory, we analyzed the tunneling DC spin current generated by a thermal gradient in a bilayer system consisting of an antiferromagnet and a metal (see Fig. 1). Our analysis shows that in antiferromagnets, the dominant carrier of the spin current suddenly changes at the first-order spin-flop transition and then a sign reversal of the SSE voltage takes place (see Fig. 7). From the argument based on the spin-current formula and the magnon DoSs, we confirm that the spin current carried by spin-up magnons is dominant in the Neel phase, while spin-down magnons are dominant in the canted phase. We have explained that the tunneling spin current is deeply associated with the magnetic DoS, which can be observed by inelastic neutron scatter
Figure 14: (Color online) Schematic view of a magnetic structure near the interface between a Néel ordered magnet and a paramagnetic metal. There is a possibility that the magnetic order pattern is partially changed in the vicinity of the interface because of interface-driven magnetic anisotropies.
ing. Namely, we emphasize the deep relationship between SSE and neutron scattering spectra.
In the canted phase, the spin current is shown to be decomposed into two parts, a static term and a magnon-correlation term. The former can be interpreted as the effect of electron spin flips caused by the transverse magnetization (see Fig. 5). This is very reminiscent of the spin flip in spin Hall magnetoresistance [51]. From the result for these two main parts of the spin current, we predict that in the low-temperature canted phase, the spin current changes non-monotonically as a function of the magnetic field \(B\) (see Fig. 9). Such a non-monotonic behavior has never been observed in the SSE of antiferromagnets, and its detection would be a test to enhance the reliability of our theory.
In Sec. V, we compare the SSE of antiferromagnets with those of various magnets. In particular, we quantitatively compare the cases of ferromagnets and antiferromagnets, and show that the magnitude of the SSE voltage in antiferromagnets can approach the value for ferromagnets. This is consistent with observed SSE voltages. Moreover, Fig. 13 shows that the magnetic-field and temperature dependence of the SSE voltage strongly depends on the sort of excitations and order in the magnetic system, which indicates that the SSE can also be useful to detect characteristic features of different magnets [17; 18; 19; 20; 34; 35; 36; 37; 38; 39; 4].
Finally, in Sec. VI, we discuss important missing pieces in the tunneling spin-current formalism used in this paper. We point out the importance of the Ising part \(S_{\mathbf{r}}^{z}\sigma_{\mathbf{r}}^{z}\) of the magnet-metal interface interaction in the canted phase, the material dependence of the interface properties, and the effects of spin-current transport in the bulk far from the interface.
###### Acknowledgements.
We are grateful to Yuichi Ohnuma, Takeo Kato, and Minoru Kanega for fruitful discussions. We also thank Masahito Mochizuki and Hiroyuki Chudo for useful comments. This work was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2105. M. S. is supported by JSPS KAKENHI (Grant No. 20H01830 and No. 20H01849) and a Grant-in-Aid for Scientific Research on Innovative Areas "Quantum Liquid Crystals" (Grant No. 19H05825) and "Evolution of Chiral Materials Science using Helical Light Fields" (Grants No. JP22H05131 and No. JP23H04576) from JSPS of Japan.
## Appendix A Mean-field approximation for antiferromagnets
In this appendix, we briefly review the mean-field results for antiferromagnets needed to draw the phase diagram in Fig. 2(c). We write down the mean-field free energies for the Neel and canted phases.
### Neel phase
First, we consider the Neel phase in the antiferromagnetic Heisenberg model (II.1). In the mean-field approximation, spin operators are given by
\[\mathbf{S}_{\mathbf{r}_{\rm A}} =\big{(}\delta S_{\mathbf{r}_{\rm A}}^{x},\delta S_{\mathbf{r}_{\rm A}}^{y},m_{\rm A}+\delta S_{\mathbf{r}_{\rm A}}^{z}\big{)}, \tag{101}\] \[\mathbf{S}_{\mathbf{r}_{\rm B}} =\big{(}\delta S_{\mathbf{r}_{\rm B}}^{x},\delta S_{\mathbf{r}_{\rm B}}^{y},-m_{\rm B}+\delta S_{\mathbf{r}_{\rm B}}^{z}\big{)}, \tag{102}\]
where \(\langle S_{\mathbf{r}_{\rm A}}^{z}\rangle=m_{\rm A}>0\) and \(\langle S_{\mathbf{r}_{\rm B}}^{z}\rangle=-m_{\rm B}<0\) (\(m_{\rm B}>0\)) are respectively the spin moments on the A and B sublattices, and \(\delta S_{\mathbf{r}_{\rm X}}^{\alpha}\) is the fluctuation of the spin component \(\alpha=x,y,z\) from its mean value on site \(\mathbf{r}_{\rm X}\). Substituting these spin operators into Eq. (II.1) and then ignoring terms of \(\mathcal{O}\big{(}\{\delta S_{\mathbf{r}_{\rm X}}^{\alpha}\}^{2}\big{)}\), we obtain the effective one-body Hamiltonian
\[\mathscr{H}_{\rm MF}^{\rm(N\rm eel)}=E_{\rm cl}-B_{\rm eff}^{\rm A}\sum_{\mathbf{r }_{\rm A}}S_{\mathbf{r}_{\rm A}}^{z}-B_{\rm eff}^{\rm B}\sum_{\mathbf{r}_{\rm B}}S_{ \mathbf{r}_{\rm B}}^{z}, \tag{103}\]
where the classical ground-state energy \(E_{\rm cl}\) and the effective fields \(B_{\rm eff}^{\rm A,B}\) are defined as
\[E_{\rm cl}=\frac{N}{2}\big{\{}6Jm_{\rm A}m_{\rm B}+|D|m_{\rm A}^ {2}+|D|m_{\rm B}^{2}\big{\}},\] \[B_{\rm eff}^{\rm A}=B+6Jm_{\rm B}+2|D|m_{\rm A},\] \[B_{\rm eff}^{\rm B}=B-6Jm_{\rm A}-2|D|m_{\rm B}.\]
The partition function \(Z_{\rm MF}^{\rm(Neel)}\) of the model Eq. (103) is easily calculated. The mean-field free energy \(F_{\rm MF}^{\rm(Neel)}=-k_{\rm B}T\ln Z_{\rm MF}^{\rm(Neel)}\) is given by
\[F_{\rm MF}^{\rm(N\rm eel)} =E_{\rm cl}-\frac{N}{2}k_{\rm B}T\log\Bigg{(}\frac{\sinh\big{[} \beta B_{\rm eff}^{\rm A}\left(S+\frac{1}{2}\right)\big{]}}{\sinh\big{(}\beta B _{\rm eff}^{\rm A}/2\big{)}}\Bigg{)}\] \[\quad-\frac{N}{2}k_{\rm B}T\log\Bigg{(}\frac{\sinh\big{[}\beta B_{ \rm eff}^{\rm B}\left(S+\frac{1}{2}\right)\big{]}}{\sinh\big{(}\beta B_{\rm eff }^{\rm B}/2\big{)}}\Bigg{)}. \tag{104}\]
The self-consistent condition determines spin expectation values as follows:
\[\left\{\begin{aligned} &\langle S_{\mathbf{r}_{\rm A}}^{z}\rangle=m_{\rm A }=SB_{S}\big{(}S\beta B_{\rm eff}^{\rm A}\big{)},\\ &\langle S_{\mathbf{r}_{\rm B}}^{z}\rangle=-m_{\rm B}=SB_{S}\big{(}S \beta B_{\rm eff}^{\rm B}\big{)},\end{aligned}\right. \tag{105}\]
where the Brillouin function \(B_{S}(x)\) is defined as
\[B_{S}(x)=\frac{2S+1}{2S}\coth\left(\frac{2S+1}{2S}x\right)-\frac{1}{2S}\coth \left(\frac{x}{2S}\right). \tag{106}\]
### Canted AF phase
Similarly, we can build up the mean-field theory for the canted phase, in which spin operators are represented as
\[\mathbf{S_{r_{\rm A}}}=\big{(}-m\sin\theta+\delta S^{x}_{r_{\rm A}}, \delta S^{y}_{r_{\rm A}},m\cos\theta+\delta S^{z}_{r_{\rm A}}\big{)}, \tag{111}\] \[\mathbf{S_{r_{\rm B}}}=\big{(}m\sin\theta+\delta S^{x}_{r_{\rm B}}, \delta S^{y}_{r_{\rm B}},m\cos\theta+\delta S^{z}_{r_{\rm B}}\big{)}, \tag{112}\]
where \(\langle S^{x}_{r_{\rm A}}\rangle=-m\sin\theta\), \(\langle S^{z}_{r_{\rm A}}\rangle=m\cos\theta\), \(\langle S^{x}_{r_{\rm B}}\rangle=m\sin\theta\), and \(\langle S^{z}_{r_{\rm B}}\rangle=m\cos\theta\) (\(m>0\)). We have assumed that the spins lie in the \(S^{z}\)-\(S^{x}\) plane, and \(\delta S^{\alpha}_{r_{\rm X}}\) is again the fluctuation from the mean value of \(S^{\alpha}\) on site \(\mathbf{r}_{\rm X}\). From these expressions, the mean-field Hamiltonian is given by
\[\mathscr{H}^{\rm(Cant)}_{\rm MF}=E_{\rm cl}+ B^{x}_{\rm eff}\sum_{\mathbf{r}_{\rm A}}S^{x}_{r_{\rm A}}-B^{z}_{\rm eff} \sum_{\mathbf{r}_{\rm A}}S^{z}_{r_{\rm A}}\] \[-B^{x}_{\rm eff}\sum_{\mathbf{r}_{\rm B}}S^{x}_{r_{\rm B}}-B^{z}_{\rm eff }\sum_{\mathbf{r}_{\rm B}}S^{z}_{r_{\rm B}}, \tag{113}\]
where
\[E_{\rm cl}=\frac{N}{2}\big{\{}6Jm^{2}\left(1-2\cos^{2}\theta \right)+2|D|m^{2}\cos^{2}\theta\big{\}},\] \[B^{x}_{\rm eff}=6Jm\sin\theta,\] \[B^{z}_{\rm eff}=B-2(3J-|D|)m\cos\theta.\]
In order to simplify the mean-field Hamiltonian, we define new coordinates so that the \(\tilde{z}\)-direction is aligned to the direction of the effective magnetic field:
\[\begin{pmatrix}S^{\tilde{x}}_{\mathbf{r}_{\rm A}}\\ S^{\tilde{y}}_{\mathbf{r}_{\rm A}}\\ S^{\tilde{z}}_{\mathbf{r}_{\rm A}}\end{pmatrix}=\begin{pmatrix}\cos\varphi&0&\sin\varphi\\ 0&1&0\\ -\sin\varphi&0&\cos\varphi\end{pmatrix}\begin{pmatrix}S^{x}_{\mathbf{r}_{\rm A}}\\ S^{y}_{\mathbf{r}_{\rm A}}\\ S^{z}_{\mathbf{r}_{\rm A}}\end{pmatrix}, \tag{114}\] \[\begin{pmatrix}S^{\tilde{x}}_{\mathbf{r}_{\rm B}}\\ S^{\tilde{y}}_{\mathbf{r}_{\rm B}}\\ S^{\tilde{z}}_{\mathbf{r}_{\rm B}}\end{pmatrix}=\begin{pmatrix}\cos\varphi&0&-\sin\varphi\\ 0&1&0\\ \sin\varphi&0&\cos\varphi\end{pmatrix}\begin{pmatrix}S^{x}_{\mathbf{r}_{\rm B}}\\ S^{y}_{\mathbf{r}_{\rm B}}\\ S^{z}_{\mathbf{r}_{\rm B}}\end{pmatrix}, \tag{115}\]
where the angle \(\varphi\) is defined by \(\tan\varphi=B^{x}_{\rm eff}/B^{z}_{\rm eff}\). Using these spin operators on the new coordinates, the effective Hamiltonian is transformed to
\[\mathscr{H}^{\rm(Cant)}_{\rm MF}=E_{\rm cl}-\tilde{B}_{\rm eff}\sum_{\mathbf{r}_{ \rm A}}S^{\tilde{z}}_{r_{\rm A}}-\tilde{B}_{\rm eff}\sum_{\mathbf{r}_{\rm B}}S^{ \tilde{z}}_{r_{\rm B}}, \tag{116}\]
where the effective magnetic field is given by
\[\tilde{B}_{\rm eff}=\sqrt{\left(B^{x}_{\rm eff}\right)^{2}+\left(B^{z}_{\rm eff }\right)^{2}}. \tag{117}\]
The mean-field free energy is estimated as
\[F^{\rm(Cant)}_{\rm MF}=E_{\rm cl}-Nk_{\rm B}T\log\Bigg{(}\frac{\sinh\big{[} \beta\tilde{B}_{\rm eff}\left(S+\frac{1}{2}\right)\big{]}}{\sinh\Big{(}\beta \tilde{B}_{\rm eff}/2\Big{)}}\Bigg{)}. \tag{118}\]
The spin expectation values on A and B sublattices are
\[\left\{\begin{aligned} \langle S^{x}_{r_{\rm A}}\rangle&=-m\sin\theta=-S\sin \varphi B_{S}\Big{(}S\beta\tilde{B}_{\rm eff}\Big{)},\\ \langle S^{z}_{r_{\rm A}}\rangle&=m\cos\theta=S\cos\varphi B_{S} \Big{(}S\beta\tilde{B}_{\rm eff}\Big{)}.\end{aligned}\right. \tag{119}\]
If we numerically compute the free energy and spin moments (order parameters) in both Neel and canted phases, we can determine the magnetic phase diagram for the antiferromagnetic Heisenberg model (II.1).
## Appendix B Bogoliubov transformation for the Neel phase
Here, we briefly explain the Bogoliubov transformation used to obtain the effective spin-wave Hamiltonian in the Neel state of Eq. (II.1). After the HP transformation and the Fourier transformation of the magnon operators, the Hamiltonian is approximated as
\[\mathscr{H}^{\rm(Neel)}_{\rm SW}=(6JS+2|D|S+B)\sum_{\mathbf{k}}a^{ \dagger}_{\mathbf{k}}a_{\mathbf{k}}\\ +(6JS+2|D|S-B)\sum_{\mathbf{k}}b^{\dagger}_{\mathbf{k}}b_{\mathbf{k}}\\ +2JS\sum_{\mathbf{k}}\gamma_{\mathbf{k}}\Big{(}a_{\mathbf{k}}b_{\mathbf{k}}+a^{ \dagger}_{\mathbf{k}}b^{\dagger}_{\mathbf{k}}\Big{)}+(\text{const.}). \tag{120}\]
Hereafter, we neglect the final constant term of \(\mathscr{H}^{\rm(Neel)}_{\rm SW}\). In order to diagonalize Eq. (120), we apply the following Bogoliubov transformation
\[\begin{pmatrix}\alpha_{\mathbf{k}}\\ \beta^{\dagger}_{\mathbf{k}}\end{pmatrix}=\begin{pmatrix}\cosh\phi_{\mathbf{k}}&- \sinh\phi_{\mathbf{k}}\\ -\sinh\phi_{\mathbf{k}}&\cosh\phi_{\mathbf{k}}\end{pmatrix}\begin{pmatrix}a_{\mathbf{k}}\\ b^{\dagger}_{\mathbf{k}}\end{pmatrix}, \tag{121}\]
where \(\alpha_{\mathbf{k}}\), \(\alpha^{\dagger}_{\mathbf{k}}\) and \(\beta_{\mathbf{k}}\), \(\beta^{\dagger}_{\mathbf{k}}\) obey bosonic commutation relations. If we tune the value of the angle \(\phi_{\mathbf{k}}\) as
\[\tanh\left(2\phi_{\mathbf{k}}\right)=\frac{-J\gamma_{\mathbf{k}}}{3J+|D|},\]
we obtain the diagonalized Hamiltonian of Eq. (II.7).
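For a single \(\mathbf{k}\) point, Eq. (120) is a two-mode problem that the transformation (121) diagonalizes in closed form. The short sketch below evaluates the Bogoliubov angle \(\phi_{\mathbf{k}}\) and the two magnon branch energies; the branch energies \(\sqrt{A^{2}-C^{2}}\pm B\) follow from the standard two-mode Bogoliubov algebra (Eq. (II.7) itself is not reproduced in this appendix), and \(\gamma_{\mathbf{k}}\) is simply passed in as a number since its lattice definition is given in the main text.

```python
import numpy as np

def neel_magnon_modes(J, D, S, B, gamma_k):
    """Bogoliubov angle and magnon energies for the Neel spin-wave Hamiltonian (120)."""
    A = 6 * J * S + 2 * abs(D) * S              # common diagonal coefficient
    C = 2 * J * S * gamma_k                     # pair-creation amplitude
    phi = 0.5 * np.arctanh(-J * gamma_k / (3 * J + abs(D)))   # tanh(2 phi_k) condition
    omega = np.sqrt(A**2 - C**2)                # field-free magnon energy
    return phi, omega + B, omega - B            # (phi_k, alpha branch, beta branch)

# placeholder parameters and structure factor value
print(neel_magnon_modes(J=1.0, D=-0.2, S=1.0, B=0.05, gamma_k=0.4))
```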
## Appendix C Bogoliubov transformation for the canted AF phase
In this appendix, we explain the Bogoliubov transformation used to obtain the effective spin-wave Hamiltonian in the canted AF state of Eq. (II.1). In order to obtain that Hamiltonian, we introduce local spin coordinates \((S^{\zeta},S^{\eta},S^{\xi})\) in which the \(S^{\xi}\) direction is aligned with the spin-polarization direction. We define the relationship between the original spins and \((S^{\zeta},S^{\eta},S^{\xi})\) as follows:
\[\begin{pmatrix}S^{\zeta}_{\mathbf{r}_{\rm A}}\\ S^{\eta}_{\mathbf{r}_{\rm A}}\\ S^{\xi}_{\mathbf{r}_{\rm A}}\end{pmatrix} =\begin{pmatrix}\cos\theta&0&\sin\theta\\ 0&1&0\\ -\sin\theta&0&\cos\theta\end{pmatrix}\begin{pmatrix}S^{x}_{\mathbf{r}_{\rm A}}\\ S^{y}_{\mathbf{r}_{\rm A}}\\ S^{z}_{\mathbf{r}_{\rm A}}\end{pmatrix}, \tag{122}\] \[\begin{pmatrix}S^{\zeta}_{\mathbf{r}_{\rm B}}\\ S^{\eta}_{\mathbf{r}_{\rm B}}\\ S^{\xi}_{\mathbf{r}_{\rm B}}\end{pmatrix} =\begin{pmatrix}\cos\theta&0&-\sin\theta\\ 0&1&0\\ \sin\theta&0&\cos\theta\end{pmatrix}\begin{pmatrix}S^{x}_{\mathbf{r}_{\rm B}}\\ S^{y}_{\mathbf{r}_{\rm B}}\\ S^{z}_{\mathbf{r}_{\rm B}}\end{pmatrix}, \tag{123}\]
where \(\mathbf{S}_{\mathbf{r}_{\rm A}}\) (\(\mathbf{S}_{\mathbf{r}_{\rm B}}\)) denotes the spin-\(S\) operators on an A-sublattice site \(\mathbf{r}_{\rm A}\) (B-sublattice site \(\mathbf{r}_{\rm B}\)). In the local coordinates, the canted state in the \(S^{z}\)-\(S^{x}\) plane is regarded as a ferromagnetic state along the \(S^{\xi}\) axis. Therefore, we may perform a standard HP transformation for the spins \(\big(S^{\zeta},S^{\eta},S^{\xi}\big)\):
\[S^{\xi}_{\mathbf{r}_{\rm A}} =\!S-a^{\dagger}_{\mathbf{r}_{\rm A}}a_{\mathbf{r}_{\rm A}}, S^{\zeta}_{\mathbf{r}_{\rm A}}\!\!+\!iS^{\eta}_{\mathbf{r}_{\rm A}}\!\! \simeq\!\sqrt{2S}a_{\mathbf{r}_{\rm A}}, S^{\zeta}_{\mathbf{r}_{\rm A}}\!\!-\!iS^{\eta}_{\mathbf{r}_{\rm A}}\!\! \simeq\!\sqrt{2S}a^{\dagger}_{\mathbf{r}_{\rm A}}, \tag{124}\] \[S^{\xi}_{\mathbf{r}_{\rm B}} =\!S-b^{\dagger}_{\mathbf{r}_{\rm B}}b_{\mathbf{r}_{\rm B}}, S^{\zeta}_{\mathbf{r}_{\rm B}}\!\!+\!iS^{\eta}_{\mathbf{r}_{\rm B}}\!\! \simeq\!\sqrt{2S}b_{\mathbf{r}_{\rm B}}, S^{\zeta}_{\mathbf{r}_{\rm B}}\!\!-\!iS^{\eta}_{\mathbf{r}_{\rm B}}\!\! \simeq\!\sqrt{2S}b^{\dagger}_{\mathbf{r}_{\rm B}}, \tag{125}\]
The Hamiltonian is then represented as
\[\mathscr{H}_{\rm SW}^{\rm(Cant)} =\sum_{\mathbf{k}}\Bigl{\{}\big{(}\mathrm{C}_{1}-\mathrm{C}_{3}(\mathbf{k })\big{)}c^{\dagger}_{\mathbf{k}}c_{\mathbf{k}}+\big{(}\mathrm{C}_{4}-\frac{\mathrm{ C}_{2}(\mathbf{k})}{2}\big{)}\Big{(}c_{\mathbf{k}}c_{-\mathbf{k}}+c^{\dagger}_{\mathbf{k}}c^{ \dagger}_{-\mathbf{k}}\Big{)}\Bigr{\}}\] \[\qquad\qquad+\sum_{\mathbf{k}}\Big{\{}\big{(}\mathrm{C}_{1}+\mathrm{C }_{3}(\mathbf{k})\big{)}d^{\dagger}_{\mathbf{k}}d_{\mathbf{k}}+\big{(}\mathrm{C}_{4}+\frac {\mathrm{C}_{2}(\mathbf{k})}{2}\big{)}\Big{(}d_{\mathbf{k}}d_{-\mathbf{k}}+d^{\dagger}_{\bm {k}}d^{\dagger}_{-\mathbf{k}}\Big{)}\Bigr{\}}+(\text{const.}), \tag{126}\]
where parameters \(C_{1-4}\) are given by
\[\mathrm{C}_{1}=6JS\big{(}1-2\cos^{2}\theta\big{)}-|D|S\big{(}1-3 \cos^{2}\theta\big{)}+B\cos\theta,\] \[\mathrm{C}_{2}(\mathbf{k})=-2JS\big{(}1-\cos^{2}\theta\big{)}\gamma_{ \mathbf{k}},\] \[\mathrm{C}_{3}(\mathbf{k})=2JS\cos^{2}\theta\gamma_{\mathbf{k}},\] \[\mathrm{C}_{4}=-\frac{|D|S}{2}\big{(}1-\cos^{2}\theta\big{)}. \tag{127}\]
Secondly, if we apply the Bogoliubov transformation
\[\begin{pmatrix}\alpha_{\mathbf{k}}\\ \alpha^{\dagger}_{-\mathbf{k}}\end{pmatrix} =\begin{pmatrix}\cosh\varphi^{\alpha}_{\mathbf{k}}&-\sinh\varphi^{ \alpha}_{\mathbf{k}}\\ -\sinh\varphi^{\alpha}_{\mathbf{k}}&\cosh\varphi^{\alpha}_{\mathbf{k}}\end{pmatrix} \begin{pmatrix}c_{\mathbf{k}}\\ c^{\dagger}_{-\mathbf{k}}\end{pmatrix},\] \[\begin{pmatrix}\beta_{\mathbf{k}}\\ \beta^{\dagger}_{-\mathbf{k}}\end{pmatrix} =\begin{pmatrix}\cosh\varphi^{\beta}_{\mathbf{k}}&-\sinh\varphi^{ \beta}_{\mathbf{k}}\\ -\sinh\varphi^{\beta}_{\mathbf{k}}&\cosh\varphi^{\beta}_{\mathbf{k}}\end{pmatrix} \begin{pmatrix}d_{\mathbf{k}}\\ d^{\dagger}_{-\mathbf{k}}\end{pmatrix}, \tag{128}\]
we finally arrive at the diagonalized spin-wave Hamiltonian of Eq. (II.15).
## Appendix D Derivation of Eq. (III.10)
In this appendix, we explain the derivation of Eq. (III.10). First, we expand the exponential factor in the statistical average of \(F_{+-}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A},t^{\prime})=-i\big\langle T_{\rm C}\sigma^{+}_{\mathbf{r}_{\rm A}}(t)S^{-}_{\mathbf{r}_{\rm A}}(t^{\prime})\big\rangle\) with respect to \(\mathscr{H}_{\rm int}\):
\[F_{+-}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A},t^{\prime}) =-i\sum_{n=0}^{\infty}\left(\frac{-i}{\hbar}\right)^{n}\frac{1}{n! }\int_{\rm C}dt_{1}\cdots\int_{\rm C}dt_{n}\Bigl{\langle}T_{\rm C}\bar{\sigma} ^{+}_{\mathbf{r}_{\rm A}}(t)\bar{S}^{-}_{\mathbf{r}_{\rm A}}(t^{\prime})\bar{\mathscr{H} }_{\rm int}(t_{1})\cdots\bar{\mathscr{H}}_{\rm int}(t_{n})\Bigr{\rangle}_{0}\] \[=-i\left(\frac{-i}{\hbar}\right)\int_{\rm C}dt_{1}\Bigl{\langle}T _{\rm C}\bar{\sigma}^{+}_{\mathbf{r}_{\rm A}}(t)\bar{S}^{-}_{\mathbf{r}_{\rm A}}(t^{ \prime})\bar{\mathscr{H}}_{\rm int}(t_{1})\Bigr{\rangle}_{0}+\cdots, \tag{129}\]
where we have written out only the leading term. Since the interaction Hamiltonian is given by
\[\tilde{\mathscr{H}}_{\rm int}(t_{1})=\sum_{\mathbf{r}_{\rm A}^{\prime}\in{\rm int-A}}J _{\rm sd}(\mathbf{r}_{\rm A}^{\prime})\tilde{\mathbf{S}}_{\mathbf{r}_{\rm A}^{\prime}}(t_{1}) \cdot\tilde{\mathbf{\sigma}}_{\mathbf{r}_{\rm A}^{\prime}}(t_{1})+\sum_{\mathbf{r}_{\rm B}^ {\prime}\in{\rm int-B}}J_{\rm sd}(\mathbf{r}_{\rm B}^{\prime})\tilde{\mathbf{S}}_{\bm {r}_{\rm B}^{\prime}}(t_{1})\cdot\tilde{\mathbf{\sigma}}_{\mathbf{r}_{\rm B}^{\prime}}(t _{1}),\] (D.2)
we obtain
\[F_{+-}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A},t^{\prime}) =(-i)^{2}\sum_{\mathbf{r}_{\rm A}^{\prime}\in{\rm int-A}}\frac{J_{\rm sd }(\mathbf{r}_{\rm A}^{\prime})}{2\hbar}\int_{\rm C}dt_{1}\Big{\langle}T_{\rm C} \tilde{\sigma}_{\mathbf{r}_{\rm A}}^{+}(t)\tilde{\sigma}_{\mathbf{r}_{\rm A}^{\prime} }^{-}(t_{1})\Big{\rangle}_{0}\Big{\langle}T_{\rm C}\tilde{S}_{\mathbf{r}_{\rm A}^{ \prime}}^{+}(t_{1})\tilde{S}_{\mathbf{r}_{\rm A}}^{-}(t^{\prime})\Big{\rangle}_{0}\] \[=\sum_{\mathbf{r}_{\rm A}^{\prime}\in{\rm int-A}}\frac{J_{\rm sd}(\bm {r}_{\rm A}^{\prime})}{2\hbar}\int_{\rm C}dt_{1}\chi_{+-}(\mathbf{r}_{\rm A},t;\bm {r}_{\rm A}^{\prime},t_{1})G_{+-}^{(\rm A)}(\mathbf{r}_{\rm A}^{\prime},t_{1};\bm {r}_{\rm A},t^{\prime}).\] (D.3)
In the last line, we have used Eqs. (III.8) and (III.9). Using the Langreth rule [49, 50], we arrive at the following expression for \(F_{+-}^{<}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A},t^{\prime})\):
\[F_{+-}^{<}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A},t^{\prime})=\sum_{\mathbf{r}_{\rm A}^{ \prime}\in{\rm int-A}}\frac{J_{\rm sd}(\mathbf{r}_{\rm A}^{\prime})}{2\hbar}\int_ {-\infty}^{\infty}dt_{1}\Big{\{}\chi_{+-}^{\rm R}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A }^{\prime},t_{1})G_{+-}^{(\rm A)<}(\mathbf{r}_{\rm A}^{\prime},t_{1};\mathbf{r}_{\rm A },t^{\prime})+\chi_{+-}^{<}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A}^{\prime},t_{1})G_ {+-}^{(\rm A)A}(\mathbf{r}_{\rm A}^{\prime},t_{1};\mathbf{r}_{\rm A},t^{\prime}) \Big{\}},\] (D.4)
where \(\chi_{+-}^{\rm R[<]}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A}^{\prime},t_{1})\) is the retarded [lesser] part of \(\chi_{+-}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A}^{\prime},t_{1})\), and \(G_{+-}^{(\rm A)A[<]}(\mathbf{r}_{\rm A}^{\prime},t_{1};\mathbf{r}_{\rm A},t^{\prime})\) is the advanced [lesser] part of \(G_{+-}^{(\rm A)}(\mathbf{r}_{\rm A}^{\prime},t_{1};\mathbf{r}_{\rm A},t^{\prime})\). Finally, applying the Fourier transformations on Eq. (D.4), we get
\[F_{+-}^{<}(\mathbf{r}_{\rm A},t;\mathbf{r}_{\rm A},t^{\prime}) =\sum_{\mathbf{r}_{\rm A}^{\prime}\in{\rm int-A}}\frac{J_{\rm sd}(\bm {r}_{\rm A}^{\prime})}{2\hbar}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}e^{-i \omega(t-t^{\prime})}\Big{\{}\chi_{+-}^{\rm R}(\mathbf{r}_{\rm A},\mathbf{r}_{\rm A}^ {\prime},\omega)G_{+-}^{(\rm A)<}(\mathbf{r}_{\rm A}^{\prime},\mathbf{r}_{\rm A},\omega )+\chi_{+-}^{<}(\mathbf{r}_{\rm A},\mathbf{r}_{\rm A}^{\prime},\omega)G_{+-}^{(\rm A)A }(\mathbf{r}_{\rm A}^{\prime},\mathbf{r}_{\rm A},\omega)\Big{\}}\] \[=\frac{J_{\rm sd}(\mathbf{r}_{\rm A})}{2\hbar}\frac{1}{N_{\rm m}(N/2) }\sum_{\mathbf{p},\mathbf{k}}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}e^{-i\omega(t-t ^{\prime})}\Big{\{}\chi_{+-}^{\rm R}(\mathbf{p},\omega)G_{+-}^{(\rm A)<}(\mathbf{k}, \omega)+\chi_{+-}^{<}(\mathbf{p},\omega)G_{+-}^{(\rm A)A}(\mathbf{k},\omega)\Big{\}}.\] (D.5)
In the first line, we have performed the integral calculations for \(t_{1}\) and then for \(\omega^{\prime}\). In the last line, we have neglected the correlation between different interfacial sites \(\mathbf{r}_{\rm A}\) and \(\mathbf{r}_{\rm A}^{\prime}\) (\(\mathbf{r}_{\rm A}\neq\mathbf{r}_{\rm A}^{\prime}\)) (see Fig. 4). Substituting Eq. (D.5) into the local spin current Eq. (III.6) and taking the limit of \(\delta\to+0\), we obtain Eq. (III.10).
## Appendix E Dynamical susceptibility of the normal metal
In this appendix, using the Hamiltonian Eqs.(II.21) and (II.22), we calculate the imaginary part of the local spin susceptibility in the NESS of the metal. The retarded part of \(\chi_{+-}(\mathbf{r},t;\mathbf{r}^{\prime},t^{\prime})\) is defined by Eq. (III.11):
\[\chi_{+-}^{\rm R}(\mathbf{r},t;\mathbf{r}^{\prime},t^{\prime})=-i\theta(t-t^{\prime}) \Big{\langle}\big{[}\tilde{\sigma}_{\mathbf{r}}^{+}(t),\tilde{\sigma}_{\mathbf{r}^{ \prime}}^{-}(t^{\prime})\big{]}\Big{\rangle}_{0},\]
where \(\sigma_{\mathbf{r}}^{+}(t)=f_{\mathbf{r},\uparrow}^{\dagger}(t)f_{\mathbf{r},\downarrow}(t)\), \(\sigma_{\mathbf{r}}^{-}(t)=\big{\{}\sigma_{\mathbf{r}}^{+}(t)\big{\}}^{\dagger}\), and \(\theta(t-t^{\prime})\) is the step function. The Fourier transformations of fermion operators are defined as (\(\alpha=\uparrow,\downarrow\))
\[f_{\mathbf{q},\alpha} =\frac{1}{\sqrt{N_{\rm m}}}\sum_{\mathbf{r}}e^{-i\mathbf{q}\cdot\mathbf{r}}f_{ \mathbf{r},\alpha},\] \[f_{\mathbf{q},\alpha}^{\dagger} =\frac{1}{\sqrt{N_{\rm m}}}\sum_{\mathbf{r}}e^{i\mathbf{q}\cdot\mathbf{r}}f_{ \mathbf{r},\alpha}^{\dagger}.\] (E.1)
Substituting Eq. (E.1) into Eq. (III.11), we obtain
\[\chi_{+-}^{\rm R}(\mathbf{r}-\mathbf{r}^{\prime},t-t^{\prime})=-i\theta(t-t^{\prime})\frac{1}{\left(N_{\rm m}\right)^{2}}\sum_{\mathbf{q},\mathbf{q}^{\prime}}e^{-i\left(\mathbf{q}-\mathbf{q}^{\prime}\right)\cdot\left(\mathbf{r}-\mathbf{r}^{\prime}\right)}e^{i\left(\epsilon_{\mathbf{q}}-\epsilon_{\mathbf{q}^{\prime}}\right)\left(t-t^{\prime}\right)/\hbar}\Big\{n_{\rm F}(\xi_{\mathbf{q}}-\delta\mu_{\rm s}/2)-n_{\rm F}(\xi_{\mathbf{q}^{\prime}}+\delta\mu_{\rm s}/2)\Big\},\] (E.2)
where \(n_{\rm F}(\xi_{\mathbf{q}})=\left(e^{\beta\xi_{\mathbf{q}}}+1\right)^{-1}\) is the Fermi distribution function. Using the Fourier transformations, we get
\[\chi^{\rm R}_{+-}(\mathbf{p},\omega)=\frac{-\hbar}{N_{\rm m}}\sum_{\mathbf{q}}\frac{n_{ \rm F}(\xi_{\mathbf{p}+\mathbf{q}}+\delta\mu_{\rm s}/2)-n_{\rm F}(\xi_{\mathbf{q}}-\delta \mu_{\rm s}/2)}{\hbar\omega+\xi_{\mathbf{q}}-\xi_{\mathbf{p}+\mathbf{q}}+i\delta}, \tag{111}\]
where \(\delta=+0\). Hence, the local spin susceptibility, \(\chi^{\rm R}_{+-}(\omega)=\frac{1}{N_{\rm m}}\sum_{\mathbf{p}}\chi^{\rm R}_{+-}(\bm {p},\omega)\), is calculated as
\[\chi^{\rm R}_{+-}(\omega) =\frac{-\hbar}{(N_{\rm m})^{2}}\sum_{\mathbf{p},\mathbf{q}}\frac{n_{\rm F }(\xi_{\mathbf{p}+\mathbf{q}}+\delta\mu_{\rm s}/2)-n_{\rm F}(\xi_{\mathbf{q}}-\delta\mu_{ \rm s}/2)}{\hbar\omega+\xi_{\mathbf{q}}-\xi_{\mathbf{p}+\mathbf{q}}+i\delta}\] \[\simeq-\hbar\{D(0)\}^{2}\int_{-\infty}^{\infty}d\xi\ d\xi^{ \prime}\frac{n_{\rm F}(\xi+\delta\mu_{\rm s}/2)-n_{\rm F}(\xi^{\prime}-\delta \mu_{\rm s}/2)}{\hbar\omega+\xi^{\prime}-\xi+i\delta}. \tag{112}\]
In the last line, we have replaced the wavenumber summation according to \(1/N_{\rm m}\sum\,_{q}\simeq\,D(0)\int_{-\infty}^{\infty}d\xi\), where \(D(\xi)\) is the density of states per spin and per unit cell. Taking the limit \(\delta=+0\), we obtain
\[{\rm Im}\chi^{\rm R}_{+-}(\omega)\simeq-\pi\{D(0)\}^{2}\hbar(\hbar\omega+ \delta\mu_{\rm s}). \tag{113}\]
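As a quick numerical consistency check of this final expression (with \(D(0)=\hbar=1\), both assumptions of convenience), the \(\delta\to+0\) limit turns the energy denominator above into a delta function, which reduces the double energy integral to a single one; the sketch below confirms the linear dependence on \(\hbar\omega+\delta\mu_{\rm s}\).

```python
import numpy as np

def n_fermi(x, T):
    """Fermi distribution written with tanh for numerical stability."""
    return 0.5 * (1.0 - np.tanh(x / (2.0 * T)))

def im_chi_check(hw, dmu, T=0.1):
    """Compare the numerical single-energy integral with -pi (hw + dmu)."""
    xi = np.linspace(-50.0, 50.0, 200001)
    integrand = n_fermi(xi + dmu / 2, T) - n_fermi(xi - hw - dmu / 2, T)
    numeric = np.pi * np.trapz(integrand, xi)     # Im chi^R with D(0) = hbar = 1
    analytic = -np.pi * (hw + dmu)
    return numeric, analytic

print(im_chi_check(hw=0.3, dmu=0.1))   # the two values should agree closely
```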
|
2306.13105 | Multi-task Learning for Radar Signal Characterisation | Radio signal recognition is a crucial task in both civilian and military
applications, as accurate and timely identification of unknown signals is an
essential part of spectrum management and electronic warfare. The majority of
research in this field has focused on applying deep learning for modulation
classification, leaving the task of signal characterisation as an understudied
area. This paper addresses this gap by presenting an approach for tackling
radar signal classification and characterisation as a multi-task learning (MTL)
problem. We propose the IQ Signal Transformer (IQST) among several reference
architectures that allow for simultaneous optimisation of multiple regression
and classification tasks. We demonstrate the performance of our proposed MTL
model on a synthetic radar dataset, while also providing a first-of-its-kind
benchmark for radar signal characterisation. | Zi Huang, Akila Pemasiri, Simon Denman, Clinton Fookes, Terrence Martin | 2023-06-19T12:01:28Z | http://arxiv.org/abs/2306.13105v2 | # Multi-Task Learning for Radar Signal Characterisation
###### Abstract
Radio signal recognition is a crucial task in both civilian and military applications, as accurate and timely identification of unknown signals is an essential part of spectrum management and electronic warfare. The majority of research in this field has focused on applying deep learning for modulation classification, leaving the task of signal characterisation as an understudied area. This paper addresses this gap by presenting an approach for tackling radar signal classification and characterisation as a multi-task learning (MTL) problem. We propose the IQ Signal Transformer (IQST) among several reference architectures that allow for simultaneous optimisation of multiple regression and classification tasks. We demonstrate the performance of our proposed MTL model on a synthetic radar dataset, while also providing a first-of-its-kind benchmark for radar signal characterisation.
Zi Huang\({}^{\star\dagger}\), Akila Pemasiri\({}^{\star}\), Simon Denman\({}^{\star}\), Clinton Fookes\({}^{\star}\), Terrence Martin\({}^{\dagger}\)

\({}^{\star}\)Queensland University of Technology, Australia

\({}^{\dagger}\)Revolution Aerospace, Australia

Index Terms: Multi-task Learning, Radio Signal Recognition, Radar Signal Characterisation, Automatic Modulation Classification, Radar Dataset, Transformer
## 1 Introduction
Recent innovations in deep learning (DL) coupled with the declining cost of computation have enabled the successful application of deep neural networks (DNNs) for radio signal recognition (RSR). RSR can be defined as the process of extracting the hidden characteristics within the radio frequency (RF) waveform to aid in the identification of unknown radio emitters. This capability is foundational to both civilian and military integrated sensing and communications (ISAC) applications [1], such as to improve the spectrum utilisation in communications networks and to enhance the spectrum situational awareness of soldiers in the modern battlefield.
Traditionally, classification of RF waveforms is achieved using likelihood-based [2] and feature-based [3, 4] methods that exploit the unique characteristics observed in the captured signal, such as its cyclostationary behaviour [5] and statistical features [4]. However, traditional methods are generally labour intensive requiring expert feature engineering and _a priori_ knowledge about the signal characteristics, and thus cannot effectively cater for non-cooperative and covert spectrum users [1]. DL-based RSR solutions have garnered significant attention in recent years [6, 7] as they hold promise in effectively addressing these challenges.
The application of convolutional neural networks (CNNs) to automatic modulation classification (AMC) was introduced by [8]. Their early works [9, 10] together with the release of several public datasets [11] initiated a wave of interest in DL-based RSR. Recently, several alternative DL approaches that adopt recurrent neural networks (RNNs) and hybrid architectures [12] were able to consistently achieve above 90% modulation classification accuracy in relatively high signal-to-noise ratio (SNR) settings. Despite the success of DNNs, many recent approaches still rely on handcrafted features to pre-process the complex-valued, in-phase and quadrature (IQ) data into image-based representations, such as spectrograms [12], prior to training. These approaches effectively transform RSR into an image classification problem, and thus limits the ability of DNNs to extract the fine-grained temporal relationships within the IQ data. While transformers [13] have found success in adjacent fields such as audio signal processing [14], CNN-based models still dominate the current solution landscape for AMC.
Despite the progress made in RSR, the majority of recent research has only focused on AMC and wireless communication waveforms in a civilian context. While classifying modulation schemes can provide useful insight on the radio spectrum use, this information alone is insufficient in identifying or interpreting radio emitters, which is a highly desirable capability in a military context [1]. Signal characterisation extends the scope of AMC by extracting additional signal characteristics, such as estimating the pulse width (PW) and pulse repetition interval (PRI) of a radar transmission. Specifically for radar signal characterisation (RSC), estimating the pulse descriptor words (PDWs) of radar systems is an essential part of electronic warfare. PDWs, which comprise specific radar signal parameters such as PW and PRI, are essential for constructing threat libraries.
The limited DL research covering RSC may be attributed to the lack of publicly available radar datasets, as the majority of existing work in RSR has only focused on a single task, such as AMC [12]. Recently, multi-task learning (MTL) approaches to RSR were investigated in [15, 16]. These were the initial works that explored RSR as a joint problem by simultaneously classifying modulation and
signal types on a synthetic radar and communication dataset [15]. While MTL was demonstrated in [16] to be effective at performing RSR in resource-constrained environments, this work was limited to classification tasks only. Furthermore, the proposed dataset lacks labelled signal characteristics that are required to support DL model development for RSC.
To address the existing gaps, this paper introduces a MTL framework for RSC. In addition, we introduce the IQ Signal Transformer (IQST) to perform automatic feature extraction on IQ data without requiring handcrafted features or image-based transforms. Our main contributions are threefold. First, we produce a synthetic radar signals dataset with multiple categorical and numerical labels needed to support MTL. Our dataset will be made available for public use1. Second, we propose a novel MTL architecture for RSC solving classification and regression tasks as a joint problem. Finally, we introduce a new benchmark for RSC and provide several reference architectures for MTL.
Footnote 1: The download link to our synthetic dataset can be accessed via GitHub at: [https://github.com/abcxyzi/RadChar](https://github.com/abcxyzi/RadChar)
## 2 Proposed Method
### Dataset Generation
Although existing datasets such as RadioML [11] and Radar-Comms [15] are useful for AMC, they do not provide training labels that are needed for RSC, and thus a new dataset is required. We generate our radar signals following the derivations in [17], at varying SNRs between -20 to 20 dB. To limit the number of signal parameters for the RSC problem, our dataset (RadChar) specifically focuses on pulsed radar signals. RadChar comprises a total of 5 radar signal types each covering 4 unique signal parameters. The signal classes include: Barker codes, polyphase Barker codes, Frank codes, linear frequency-modulated (LFM) pulses, and coherent unmodulated pulse trains. The signal parameters include PW (\(t_{\text{pw}}\)), PRI (\(t_{\text{pri}}\)), number of pulses (\(n_{\text{p}}\)), and pulse time delay (\(t_{\text{d}}\)). For phase-coded pulses, code lengths (\(l_{c}\)) of up to 13 and 16 are considered in Barker and Frank codes respectively. A radar waveform example is shown in Figure 1.
We carefully design each waveform in RadChar to contain 512 baseband IQ samples (\(\vec{x}_{i}+j\vec{x}_{\text{q}}\)) while ensuring the range of radar parameter values used to construct the dataset adheres to the Nyquist-Shannon sampling theorem. The minimum sampling frequency (\(f_{\text{s}}\)) required as a function of the selected radar characteristics is given by (1). The sampling rate used in RadChar is 3.2 MHz. The numerical bounds selected for radar parameters \(t_{\text{pw}}\), \(t_{\text{pri}}\), \(t_{\text{d}}\) and \(n_{\text{p}}\) are 10-16 us, 17-23 us, 1-10 us, and 2-6 respectively. We apply uniformly random sampling across these value ranges for each signal class to generate 1 million unique radar waveforms. Additive white Gaussian noise (AWGN) is used to simulate varying SNRs in the dataset. In addition, we impose a unity average power to each radar waveform to ensure that signal power is scaled consistently across the dataset.
\[f_{\text{s}}>2\cdot\max(l_{\text{c}}t_{\text{pw}}^{-1},t_{\text{pri}}^{-1},t_{ \text{d}}^{-1}) \tag{1}\]
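To make the dataset description concrete, the following NumPy sketch generates a single coherent unmodulated pulse train as 512 baseband IQ samples at \(f_{\text{s}}=3.2\) MHz with unity average power and AWGN at a chosen SNR. The rectangular pulse shape, the random seed and the exact normalisation order are illustrative assumptions and not necessarily the generator used to build RadChar.

```python
import numpy as np

rng = np.random.default_rng(0)

def unmodulated_pulse_train(fs=3.2e6, n_samples=512, t_pw=12e-6, t_pri=20e-6,
                            n_pulses=3, t_delay=4e-6, snr_db=10.0):
    """One pulsed radar waveform (baseband IQ) with additive white Gaussian noise."""
    t = np.arange(n_samples) / fs
    x = np.zeros(n_samples, dtype=complex)
    for p in range(n_pulses):
        start = t_delay + p * t_pri
        x[(t >= start) & (t < start + t_pw)] = 1.0 + 0.0j
    x /= np.sqrt(np.mean(np.abs(x) ** 2))               # unity average power
    noise_power = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(noise_power / 2.0) * (rng.standard_normal(n_samples)
                                          + 1j * rng.standard_normal(n_samples))
    return x + noise

iq = unmodulated_pulse_train()
print(iq.shape)          # -> (512,)
```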
### Multi-task Learning Framework
The proposed MTL model for RSC adopts the hard parameter shared MTL approach [18], where individual tasks share a single neural network backbone. Our model comprises two segments: a modular backbone for learning shared representations on the raw IQ data, and a set of parallel task-specific heads which consist of classification and regression tasks for signal classification and characterisation respectively. Our approach benefits from its modularity as the choice of the shared backbone is flexible allowing for domain adaptation, while additional task-specific heads can be added to increase the scope of the model. Furthermore, hard parameter sharing is advantageous for learning common representations of similar tasks, such as RSC tasks, and significantly reduces the risk of overfitting as the number of related tasks increases [19].
For the multi-task segment of our model, we follow the approach in [15] to construct task-specific heads using a minimal set of hidden layers. To achieve a lightweight design, all task-specific heads have the same depth and each contains a single convolutional layer with a kernel size of 3\(\times\)3 followed by a dense layer. Dropout rates of 0.25 and 0.5 are applied to convolutional and dense layers respectively. The number of convolutional filters used here is driven by the output dimension of the shared backbone. We adopt the ReLU activation function in each head, while batch normalisation is applied prior to the activation function. Our model contains 5 task-specific heads which include a single classification head for signal classification, and 4 regression heads for signal characterisation. For classification, a softmax function is used to output probabilities for individual signal classes, while parameter predictions are obtained directly from the dense layer
Figure 1: Radar signals sampled from the RadChar dataset illustrating polyphase Barker codes at varying SNRs.
for each regression task.
The proposed MTL model is trained by optimising a compound multi-task loss (\(L_{\text{mtl}}\)) function given by (2). The classification task is optimised using a categorical cross-entropy loss function, while the regression tasks are optimised using an L1 loss function. The multi-task loss is parameterised by shared parameters (\(\theta_{\text{sh}}\)) from the model backbone and task-specific parameters (\(\theta_{1},...,\theta_{5}\)) from individual task heads. The weights (\(w_{i}\)) of task-specific losses are MTL hyperparameters and joint optimisation, given by (3), is achieved by minimising the total loss from task-specific heads.
\[L_{\text{mtl}}(\theta_{\text{sh}},\theta_{1},...,\theta_{5})=\sum_{i=1}^{5}w_{ i}L_{i}(\theta_{\text{sh}},\theta_{i}) \tag{2}\]
\[\operatorname*{argmin}_{\theta_{\text{sh}},\theta_{1},...,\theta_{5}}L_{\text {mtl}}(\theta_{\text{sh}},\theta_{1},...,\theta_{5}) \tag{3}\]
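A compact PyTorch-style sketch of the compound loss in Eqs. (2) and (3) is given below. The dictionary keys, tensor shapes and toy batch are illustrative assumptions; only the loss structure (one cross-entropy term plus four L1 terms with fixed weights, here the 0.1/0.225 values quoted in the ablation study) follows the text.

```python
import torch
import torch.nn as nn

class MultiTaskLoss(nn.Module):
    """Weighted sum of the classification and the four regression task losses, Eq. (2)."""
    def __init__(self, w_cls=0.1, w_reg=0.225):
        super().__init__()
        self.ce, self.l1 = nn.CrossEntropyLoss(), nn.L1Loss()
        self.w_cls, self.w_reg = w_cls, w_reg

    def forward(self, outputs, targets):
        loss = self.w_cls * self.ce(outputs["cls"], targets["cls"])
        for key in ("t_pw", "t_pri", "n_p", "t_d"):       # the four PDW regression heads
            loss = loss + self.w_reg * self.l1(outputs[key], targets[key])
        return loss

# toy batch of 4 waveforms: random logits over 5 signal classes and normalised labels
outputs = {"cls": torch.randn(4, 5)}
targets = {"cls": torch.randint(0, 5, (4,))}
for key in ("t_pw", "t_pri", "n_p", "t_d"):
    outputs[key], targets[key] = torch.randn(4), torch.rand(4)
print(MultiTaskLoss()(outputs, targets))
```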
### Shared Feature Extraction Backbones
We provide several reference designs of the MTL backbone to perform feature extraction on the raw IQ data. The uniqueness of our approach is that our models operate directly on raw IQ data requiring no additional pre-processing and feature transforms as seen in [6, 7]. Because our MTL architecture is inherently modular, our models can easily be extended to incorporate additional classification and regression tasks in order to increase the scope of RSC.
We provide two CNN implementations of the shared feature extraction backbone. CNN2D follows the same design philosophy as [15] to achieve a lightweight model. It comprises a single convolution layer with 8 filters using a kernel size of 2\(\times\)2 followed by a 2\(\times\)2 max pooling operation. Input to the CNN is a single-channel 32\(\times\)32 tensor which is reshaped from the raw IQ data. By intuition, such an uninformed reshaping operation is non-ideal for representing IQ data that is inherently sequential. We propose a modification to this approach by directly ingesting the IQ data as two separate I and Q channels of shape 2\(\times\)512 to retain the shape of the raw IQ sequence. CNN1D uses 1D convolutional and max pooling operations instead, while maintaining the same number of filters as CNN2D. ReLU is used as the activation function in both backbones with a dropout rate of 0.25.
We introduce the IQ Signal Transformer (IQST), as shown in Figure 2, a novel attention-based architecture tailored for RSC and MTL. Our design is inspired by the Audio Spectrogram Transformer (AST) from [14], though adopts the standard transformer encoder architecture from [13]. Unlike [14], our approach operates on the raw signal allowing for direct feature extraction from the IQ data without the need to first transform the IQ data to an image representation. We adopt the patch embedding technique of [14, 20] to generate a sequence of 1D patch embeddings from a 2\(\times\)512 tensor constructed from the raw IQ sequence. The dual-channel IQ data is flattened to form 8\(\times\)1\(\times\)128 blocks (or tokens) prior to applying a dense linear projection to form 8 learnable patch embeddings, each with an embedding dimension of 768. Each embedded patch is added to the standard positional embeddings from [13] to form a 128\(\times\)8 input to the transformer encoder. We include an additional learnable embedding to the encoder to allow for common feature sharing across the individual tasks. This extra embedding is similar to the class embedding from [20]. The standard IQST (IQST-S) adopts the GELU activation function and implements 3 multi-head attention blocks and 3 encoder layers. We feed the outputs from the shared embedding as a 1\(\times\)128 feature map into each task-specific head to complete the MTL model.
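A condensed PyTorch sketch of an IQST-S-like backbone is shown below. The 8 tokens of length 128, the 3 encoder layers, the 3 attention heads and the extra learnable shared embedding follow the description above; the feed-forward width, the 768-dimensional internal width and the final linear reduction to a 1\(\times\)128 shared feature map are assumptions made to reconcile the quoted dimensions, so this should be read as a sketch rather than the exact IQST implementation.

```python
import torch
import torch.nn as nn

class IQSTSketch(nn.Module):
    def __init__(self, n_tokens=8, token_len=128, d_model=768, n_heads=3, n_layers=3):
        super().__init__()
        self.n_tokens = n_tokens
        self.proj = nn.Linear(token_len, d_model)                # patch embedding
        self.shared = nn.Parameter(torch.zeros(1, 1, d_model))   # extra learnable embedding
        self.pos = nn.Parameter(torch.zeros(1, n_tokens + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, activation="gelu",
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.to_feature = nn.Linear(d_model, 128)                # assumed 1 x 128 reduction

    def forward(self, iq):                                       # iq: (batch, 2, 512)
        b = iq.shape[0]
        # each token collects 64 I and 64 Q samples -> 8 tokens of length 128
        tokens = iq.reshape(b, 2, self.n_tokens, -1).permute(0, 2, 1, 3).reshape(b, self.n_tokens, -1)
        tokens = torch.cat([self.shared.expand(b, -1, -1), self.proj(tokens)], dim=1) + self.pos
        out = self.encoder(tokens)
        return self.to_feature(out[:, 0])                        # shared feature for the heads

print(IQSTSketch()(torch.randn(4, 2, 512)).shape)                # -> torch.Size([4, 128])
```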
## 3 Experiments
### Training Details
We train and evaluate our models on a single Nvidia Tesla A100 GPU. A 70-15-15% train-validation-test split of RadChar is used for all our experiments. We train our models for 100 epochs with a learning rate of 5e-4 and a batch size of 64. We adopt the Adam optimiser and initialise the model parameters using LeCun initialisation. Importantly, we standardise the raw IQ samples against the training population mean and variance, and also normalise the regression labels between 0 and 1. The latter step significantly improves training performance and convergence for regression tasks, especially when dealing with small time values such as radar parameters.
### MTL Model Performance
We evaluate our MTL models on the RadChar dataset. Model performance is compared on the same test set using task-specific metrics. Classification accuracy and mean absolute error (MAE) are selected to evaluate the performance of clas
Figure 2: The proposed hard parameter shared MTL architecture for RSC. This model shows an IQST backbone with task-specific classification and regression heads.
sification and regression tasks respectively. Table 1 provides a summary of individual task performance across various SNRs. The performance of a larger IQST (IQST-L), which uses 9 multi-head attention blocks and 6 encoder layers is also shown here for comparison. While CNN2D underperforms against other models across all tasks, IQST models generally perform better, especially at low SNRs. We observe from Figure 3 that as SNR increases, MAE decreases and classification accuracy improves, with the latter trend consistent with what is expected for AMC [8, 10].
The outstanding performance of 1D models substantiates the importance of representing IQ data as 1D sequences. IQST benefits from its transformer architecture, which better captures longer-term dependencies between IQ samples, therefore allowing it to perform better at low SNRs. Although CNN1D outperforms IQST-S in the \(n_{\text{p}}\) estimation task at high SNRs, increasing the model capacity, as in IQST-L, shows a significant improvement in task performance. While a larger transformer encoder is capable of capturing more complex dependencies in the IQ sequence, the computational cost is significantly increased due to its quadratic bottleneck [13]. Additionally, our results highlight the challenge in MTL, where a trade-off in task performance may need to be considered as the number of individual tasks increases. Nevertheless, our results indicate the potential for attention-based, hard parameter-shared MTL models for RSC.
### Ablation Study
Increasing the number of convolutional layers did not provide a notable improvement on individual task performance. Instead, we find that deeper convolutional networks negatively impact regression tasks and result in higher errors. We hypothesise that regression tasks which require accurate estimation of time parameters are adversely affected by the stacking of operations, which reduces temporal resolution. Separately, selecting task weights that produce a relatively even distribution of \(w_{i}L_{i}\) during model initialisation provides stable task performance over all SNRs, while increasing the task weight to favour a specific task did not appear to help improve its test performance. This is true for both classification and regression tasks. Our observations are consistent with similar findings from [15] under the same SNR environment. A weight distribution of 0.1 for classification and 0.225 for all regression tasks was used in the experiments shown.
## 4 Conclusion
In this paper, we present a MTL framework for tackling RSC as a joint optimisation problem. We propose the IQST among other reference architectures to perform simultaneous optimisation of classification and regression tasks while highlighting the benefits of IQST for feature extraction on raw IQ data, particularly at low SNRs. We demonstrate the performance of our models on a synthetic radar dataset and provide a first-of-its-kind benchmark for RSC. The modularity of our proposed MTL design provides opportunities for additional classification and regression tasks in future work.
## 5 Acknowledgement
The research for this paper received funding support from the Queensland Government through Trusted Autonomous Systems (TAS), a Defence Cooperative Research Centre funded through the Commonwealth Next Generation Technologies Fund and the Queensland Government.
\begin{table}
\begin{tabular}{l c c c c c} \hline Model & MAE(\(n_{\text{p}}\)) & MAE(\(t_{\text{pw}}\)) & MAE(\(t_{\text{pri}}\)) & MAE(\(t_{\text{d}}\)) & Class Acc. \\ \hline CNN1D & **0.729**, 0.193, **0.085** & 1.413, **0.560**, 0.340 & 0.999, 0.330, 0.209 & 1.349, 0.385, **0.206** & 0.757, 0.998, **1.000** \\ CNN2D & 0.793, **0.174**, 0.090 & 1.466, 0.801, 0.505 & 1.054, 0.420, 0.299 & 1.729, 0.638, 0.443 & 0.673, 0.983, 0.998 \\ IQST-S & 0.733, 0.294, 0.251 & 1.282, 0.628, 0.364 & 0.816, **0.273**, **0.192** & **1.229**, 0.415, 0.277 & **0.792**, **0.999**, **1.000** \\ IQST-L & 0.752, 0.195, 0.124 & **1.253**, 0.579, **0.334** & **0.799**, 0.286, 0.225 & 1.253, **0.379**, 0.233 & 0.791, 0.998, **1.000** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of task performance across MTL models. Each value string (-,-,-) shows the performance of each task at -10, 0 and 10 dB SNR respectively. A lower MAE value is desired for regression, while a higher accuracy value indicates better classification performance. Note that the units for \(t_{\text{pw}}\), \(t_{\text{pri}}\) and \(t_{\text{d}}\) are expressed in \(\upmu\)s.
Figure 3: Test performance of MTL models across an SNR range of -20 to 20 dB. MAE results of the regression tasks are shown in (a) to (d), and signal type classification accuracy is shown in (e). |
2307.04750 | Quantum oscillations with topological phases in a kagome metal
CsTi$_3$Bi$_5$ | Quantum oscillations can reveal Fermi surfaces and their topology in solids
and provide a powerful tool for understanding transport and electronic
properties. It is well established that the oscillation frequency maps the
Fermi surface area by Onsager's relation. However, the topological phase
accumulated along the quantum orbit remains difficult to estimate in
calculations, because it includes multiple contributions from the Berry phase,
orbital and spin moments, and also becomes gauge-sensitive for degenerate
states. In this work, we develop a gauge-independent Wilson loop scheme to
evaluate all topological phase contributions and apply it to CsTi$_3$Bi$_5$, an
emerging kagome metal. We find that the spin-orbit coupling dramatically alters
the topological phase compared to the spinless case. Especially, oscillation
phases of representative quantum orbits demonstrate a strong 3D signature
despite their cylinder-like Fermi surface geometry. Our work reveals the Fermi
surface topology of CsTi$_3$Bi$_5$ and paves the way for the theoretical
investigation of quantum oscillations in realistic materials. | Yongkang Li, Hengxin Tan, Binghai Yan | 2023-07-10T17:55:44Z | http://arxiv.org/abs/2307.04750v1 | # Quantum oscillations with topological phases in a kagome metal CsTi\({}_{3}\)Bi\({}_{5}\)
###### Abstract
Quantum oscillations can reveal Fermi surfaces and their topology in solids and provide a powerful tool for understanding transport and electronic properties. It is well established that the oscillation frequency maps the Fermi surface area by Onsager's relation. However, the topological phase accumulated along the quantum orbit remains difficult to estimate in calculations, because it includes multiple contributions from the Berry phase, orbital and spin moments, and also becomes gauge-sensitive for degenerate states. In this work, we develop a gauge-independent Wilson loop scheme to evaluate all topological phase contributions and apply it to CsTi\({}_{3}\)Bi\({}_{5}\), an emerging kagome metal. We find that the spin-orbit coupling dramatically alters the topological phase compared to the spinless case. Especially, oscillation phases of representative quantum orbits demonstrate a strong 3D signature despite their cylinder-like Fermi surface geometry. Our work reveals the Fermi surface topology of CsTi\({}_{3}\)Bi\({}_{5}\) and paves the way for the theoretical investigation of quantum oscillations in realistic materials.
## I Introduction
Kagome lattice, a 2D corner-sharing triangle lattice, has attracted much interest due to its geometric frustration and non-trivial band geometry. Among various materials containing such 2D lattice structure, Kagome material family \(A\)V\({}_{3}\)Sb\({}_{5}\) (\(A\) = K, Rb, Cs)[1] receives special attention since it exhibits many exotic quantum phenomena including \(\mathbb{Z}_{2}\) topology and flat bands[2; 3; 4], possible unconventional superconductivity[2; 3; 5; 6; 7; 8; 9] and density wave order [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. However, because of the interplay and competition between different correlated states, the origin of these physical properties and their relation to the unique electronic structure remains elusive.
Recently, a new Ti-based Kagome material \(A\)Ti\({}_{3}\)Bi\({}_{5}\) (\(A\) = K, Rb, Cs) isostructural to \(A\)V\({}_{3}\)Sb\({}_{5}\) is synthesized[16] and investigated[16; 17; 18; 19; 20; 21; 22; 23; 24]. Unlike V-based \(A\)V\({}_{3}\)Sb\({}_{5}\) family, the charge density wave (CDW) order is absent in \(A\)Ti\({}_{3}\)Bi\({}_{5}\) family as shown in transport and scanning tunneling microscopy (STM) experiments[16; 17; 19; 22; 23; 25]. First-principles calculation also shows the absence of lattice structural instability[21]. Hence, \(A\)Ti\({}_{3}\)Bi\({}_{5}\) could serve as a complementary system to \(A\)V\({}_{3}\)Sb\({}_{5}\), in which the origin of these exotic phenomena and their relation to electronic properties can be investigated without reference to lattice's effect. For example, the observed two-fold rotational symmetry and orbital selectivity in the electronic structure of \(A\)Ti\({}_{3}\)Bi\({}_{5}\)[17; 19; 22; 25] may form a pure electronic nematic phase, similar to that in Fe-based high-temperature superconductors[26]. Understanding the band structure and Fermi surface of \(A\)Ti\({}_{3}\)Bi\({}_{5}\) is crucial for further investigating these correlating properties.
Quantum oscillation measurements are one way to determine the Fermi surface topology as well as its associated properties such as the cyclotron mass and carrier mobility[27]. More importantly, the phase of the fundamental oscillation is related to the band topology. Usually, a \(\pi\) phase shift in the oscillation is regarded as a \(\pi\) Berry phase, which indicates a topological band structure[28; 29; 30]. Quantum oscillation analysis from this perspective has been carried out in \(A\)V\({}_{3}\)Sb\({}_{5}\)[31; 32; 33; 34] and also recently in \(A\)Ti\({}_{3}\)Bi\({}_{5}\)[16; 24], claiming nontrivial band topology due to this \(\pi\) Berry phase.
The topological phase actually has other contributions entangled with the Berry phase[35; 36]. Especially in the degenerate case with strong spin-orbit coupling (SOC), such a \(\pi\) phase may come mainly from the orbital or spin magnetic moment rather than the Berry phase, as revealed recently in CsV\({}_{3}\)Sb\({}_{5}\)[37]. Hence, the analysis of the topological properties based on the phase shift in quantum oscillations should consider all contributions. Apart from experiment, this phase can be independently evaluated from ab-initio band structures. However, such a calculation has to deal with the gauge-fixing problem in the presence of degeneracy, which is common for centrosymmetric nonmagnetic materials. A numerical study of all phase contributions without gauge ambiguities has not been explored in detail before.
In this work, we develop a Wilson loop method to determine the quantum oscillation phase and apply it to CsTi\({}_{3}\)Bi\({}_{5}\). We first detail the method, which is explicitly gauge independent and can be implemented conveniently in the case of degenerate bands. Then, combining this method with first-principles calculations, we resolve the Fermi surface of CsTi\({}_{3}\)Bi\({}_{5}\) and determine the total oscillation phase for all quantum orbits. Its relation to the Fermi surface geometry and band topology is then clarified. The 3D nature of several representative quantum orbits is imprinted in the topological phase, although the related Fermi surfaces show a cylinder-like shape. Our work provides a useful theoretical tool to investigate the Fermi surfaces and topological electronic properties of materials.
## II Overview on the quantum oscillation phase
In the presence of a strong magnetic field, the physical quantities (e.g., resistance and magnetization) show oscillation with respect to a magnetic field (\(B\)) due to the formation of quantized Landau levels (LLs). Under the semiclassical limit in which the scale of \(k\)-space orbit is much larger than the inverse of magnetic length \(l_{B}^{-1}\) (\(l_{B}=\sqrt{\hbar/eB}\)), the oscillation is periodic with respect to \(1/B\) and can be expanded as a sum of Fourier series in general:
\[\delta A=\sum_{i}\sum_{r}A_{i,r}\text{cos}\left[r(l_{B}^{2}S_{F,i}+\theta_{i}+ \phi_{M,i})+\delta_{i}+\varphi_{A}\right]. \tag{1}\]
Here, \(A\) is the physical quantity being measured, which is usually the magnetization \(M\) or the longitudinal resistivity \(\rho_{xx}\), \(\delta A\) is the oscillatory part, and \(A_{i,r}\) is the oscillation amplitude for the \(r\)-th harmonic of the \(i\)-th extremal orbit. \(S_{F,i}\) is the momentum-space area of the \(i\)-th extremal orbit on the Fermi surface and determines the \(i\)-th oscillation frequency. Here the total oscillation phase is decomposed into four parts: \(\theta_{i}\) is the first-order correction to the dynamical phase, including the geometric phase and the (orbital and spin) magnetic moment phase. \(\phi_{M,i}\) is the Maslov correction, which equals \(\pi\) for a simple closed orbit. \(\delta_{i}\) is a dimension-related phase resulting from the integration over \(k_{z}\) if a 3D solid is measured (suppose \(B\) is along the \(z\) direction). The last term \(\varphi_{A}\) is the phase associated with the measured quantity \(A\) (see the following discussion). All phases except \(\varphi_{A}\) depend only on the Fermi surface properties and are universal for any oscillatory quantity. Below we show that each phase can be determined from first-principles calculations to understand experiments.
### Phase \(\theta\)
The first two phases \(\theta\) and \(\phi_{M}\) (below we focus on a single orbit and ignore the subscript \(i\)) are related to LLs. In general, there are no simple rules to determine the exact LLs for an arbitrary band structure. However, in the semiclassical limit, approximate LLs can be determined from Bohr-Sommerfeld-like quantization rules. For a group of \(D\)-fold degenerate bands, the \(j\)-th LLs can be obtained up to leading order in \(l_{B}^{-1}\) as,
\[l_{B}^{2}S(E_{a,j})+\lambda_{a}+\phi_{M}=2\pi j+O(l_{B}^{-2/3}). \tag{2}\]
\(a\in\mathbb{Z}_{D}:=\{1,\dots,D\}\) is the band index among the \(D\) degenerate bands and \(\lambda_{a}\) is the phase we are interested in. \(\lambda_{a}\) is equivalent to \(\theta\) if there is no degeneracy, i.e., \(D=1\). \(\phi_{M}\) is the Maslov correction and can be determined from the topology of the orbit; it equals \(\pi\) for a simple closed orbit.
Because of degeneracy, \(D\) LLs create \(D\) oscillation terms with the same frequency \(F=\hbar S_{F}/2\pi e\) by Onsager's relation but different phase shift \(\lambda_{a}\). It amounts to a single oscillation term with reduced amplitude \(C\) and effective phase shift \(\theta\),
\[\sum_{a=1}^{D}\text{cos}\left[r(l_{B}^{2}S_{F}+\lambda_{a}+\phi_{M})\right]= C\text{cos}\left[r(l_{B}^{2}S_{F}+\theta+\phi_{M})\right]. \tag{3}\]
For example, all bands are doubly degenerate (\(D=2\)) in the presence of combined inversion and time reversal (\(\mathcal{PT}\)) symmetries, which is the case of the kagome metals CsV\({}_{3}\)Sb\({}_{5}\) and CsTi\({}_{3}\)Bi\({}_{5}\). We restrict \(\lambda_{1,2}\) to the range \([-\pi,\pi]\); \(\mathcal{PT}\) symmetry then leads to \(\lambda_{1}=-\lambda_{2}\). Hence, summing the two cosine functions in Eq. (3) leads to
\[\theta =\begin{cases}0,&\text{if }|\lambda_{1}|<\pi/2\\ \pi,&\text{if }|\lambda_{1}|>\pi/2,\end{cases} \tag{4}\] \[C =|\cos(\lambda_{1})|,\]
One can find that \(\theta\) is a quantized topological invariant (0 or \(\pi\)) [35], insensitive to orbit details.
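The combination in Eqs. (3) and (4) can be verified with a few lines of NumPy; the sketch below (purely illustrative, with \(x\) standing for \(l_{B}^{2}S_{F}\) and the Maslov phase dropped) sums the two \(\mathcal{PT}\)-partner terms and reads off \(\theta\) and the amplitude reduction \(C\).

```python
import numpy as np

def effective_phase_and_amplitude(lam1):
    """Sum the lambda_1 and lambda_2 = -lambda_1 oscillations and extract theta, C."""
    x = np.linspace(0.0, 4.0 * np.pi, 2001)
    total = np.cos(x + lam1) + np.cos(x - lam1)         # = 2 cos(lam1) cos(x)
    C = np.abs(np.cos(lam1))                            # relative amplitude, Eq. (4)
    theta = 0.0 if np.abs(lam1) < np.pi / 2 else np.pi  # quantized phase, Eq. (4)
    # the combined signal equals (2C) cos(x + theta); 2 is the trivial degeneracy factor
    assert np.allclose(total, 2.0 * C * np.cos(x + theta))
    return theta, C

print(effective_phase_and_amplitude(0.3 * np.pi))   # -> (0.0, cos(0.3 pi))
print(effective_phase_and_amplitude(0.8 * np.pi))   # -> (pi,  |cos(0.8 pi)|)
```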
In general, phase \(\lambda_{a}\) can be determined from the spectrum \(\{e^{i\lambda_{a}}\}_{m=1}^{D}\) of propagator[35; 36]
\[\mathcal{A}[\mathfrak{o}]=\overline{\text{exp}}\left[i\oint_{\mathfrak{o}}\left\{ (\mathbf{A}+\mathbf{R})\cdot d\mathbf{k}+Z\left(\sigma^{z}/v^{\perp}\right) dk\right\}\right]. \tag{5}\]
Here \(\overline{\text{exp}}\) means path-ordered product, \(\mathbf{A}(\mathbf{k})_{mn}=i\left\langle u_{m\mathbf{k}}|\nabla_{\mathbf{k}} u_{n\mathbf{k}}\right\rangle\) is non-Abelian Berry connection and
\[\mathbf{R}_{mn}\cdot d\mathbf{k} =\sum_{l\not\in\mathbb{Z}_{D}}\text{A}_{ml}^{x}\Pi_{ln}^{y}dk_{x }/2v_{y}+(x\leftrightarrow y)\] \[=-i\hbar\sum_{l\not\in\mathbb{Z}_{D}}\frac{\Pi_{ml}^{x}\Pi_{ln}^{ y}}{\varepsilon_{m\mathbf{k}}-\varepsilon_{l\mathbf{k}}}\frac{dk_{x}}{2v_{y}}+(x \leftrightarrow y) \tag{6}\] \[=-(M_{z}/ev^{\perp})dk, \tag{7}\]
is Roth term and represents the orbital correction (\(-M_{z}B_{z}\)) to the band energy. \(\mathbf{\Pi}(\mathbf{k})_{ln}=\left\langle u_{\mathbf{k}}|(1/\hbar)\nabla_{ \mathbf{k}}\hat{H}(\mathbf{k})|u_{n\mathbf{k}}\right\rangle\) is velocity matrix element and \(\mathbf{v}=\mathbf{\Pi}_{nn}\) is group velocity. \(\epsilon_{m\mathbf{k}}\) is band energy and \(v^{\perp}\) is the velocity in \(xy\) plane. \(M_{z}=i(e\hbar/2)\sum_{l\not\in\mathbb{Z}_{D}}\Pi_{ml}^{x}\Pi_{ln}^{y}/( \varepsilon_{m\mathbf{k}}-\varepsilon_{l\mathbf{k}})-(x\leftrightarrow y)\) is the self rotation part of orbital magnetic moment[38]. Furthermore, \(\sigma_{z,mn}=\langle u_{\mathbf{k}}|\hat{\sigma}_{z}|u_{n\mathbf{k}}\rangle\) (\(\hat{\sigma}_{z}\) is spin Pauli matrix) and \(Z=g_{0}\hbar/4m\). The last term is the spin Zeeman term. Once the propagator (\(\mathcal{A}[\mathfrak{o}]\)) is known, the phase \(\lambda_{a}\) can be easily obtained by diagonalizing it.
Though its formulation is clear in theory, the numerical calculation of this propagator needs to deal with the derivatives in the Berry connection. Besides, the multi-band magnetic moment (including orbital and spin) is a gauge covariant quantity whose matrix elements
depend on the gauge. If a random gauge is chosen, the magnetic moment transforms independently at each point along the orbit, rendering Eq. (5) meaningless. To deal with these problems, one can choose a smooth gauge by finding the maximally localized Wannier function[39]. Alternatively, the Wilson loop method[40; 41; 42] can be applied to avoid the choice of any specific gauge. Below, we shall use the Wilson loop method for the calculation of \(\lambda_{a}\).
In this way, the quantum orbit is discretized into \(N\) segments (Fig.1) and the propagator is written as the product for each segment. If the segment is small enough, the exponent can be split into Berry connection and magnetic moment parts.
\[\mathcal{A}[\mathfrak{o}]=\prod_{i=1}^{N}\exp\left\{i\left[(\mathbf{ A}(\mathbf{k_{i}})+\mathbf{R}(\mathbf{k}_{i}))\cdot d\mathbf{k_{i}}+Z\frac{\sigma^{z}}{v^{ \perp}}|d\mathbf{k}_{i}|\right]\right\}\] \[\approx\prod_{i=1}^{N}\exp\left[i\mathbf{A}(\mathbf{k_{i}})\cdot d\bm {k_{i}}\right]\exp\left[i\mathbf{R}(\mathbf{k}_{i})\cdot d\mathbf{k_{i}}+iZ\frac{ \sigma^{z}}{v^{\perp}}|d\mathbf{k}_{i}|\right]. \tag{8}\]
For numerical calculation, the Berry connection part is usually expressed by an overlap matrix \(M^{i}=\exp\left[i\mathbf{A}(\mathbf{k}_{i})\cdot d\mathbf{k}_{i}\right]\), where \(M^{i}\) is a \(D\) by \(D\) matrix with \(M^{i}_{mn}=\left\langle u_{m\mathbf{k}_{i+1}}|u_{n\mathbf{k}_{i}}\right\rangle\).
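The discretized product in Eq. (8) can be assembled directly from these ingredients. In the sketch below the eigenvector matrices, Roth matrices and Zeeman matrices at each orbit point are assumed to be supplied by the band-structure code (in the same, arbitrary, gauge at every point); the short demonstration at the end uses a single non-degenerate band with \(\mathbf{R}=Z=0\), for which the eigenphase reduces to the Berry phase of a loop around a 2D Dirac point.

```python
import numpy as np
from scipy.linalg import expm

def orbit_propagator(U_list, R_list, Z_list):
    """Propagator A[o] of Eq. (8) on a closed, discretized orbit; returns the phases lambda_a.

    U_list[i]: columns are the D degenerate states |u_{a,k_i}> at orbit point k_i.
    R_list[i]: D x D matrix of the Roth term  R(k_i) . dk_i  in the same basis.
    Z_list[i]: D x D matrix of the Zeeman term  Z sigma^z / v_perp |dk_i|.
    """
    N, D = len(U_list), U_list[0].shape[1]
    A = np.eye(D, dtype=complex)
    for i in range(N):
        M = U_list[(i + 1) % N].conj().T @ U_list[i]    # overlap matrix M^i
        A = expm(1j * (R_list[i] + Z_list[i])) @ M @ A  # path-ordered product
    return np.angle(np.linalg.eigvals(A))               # spectrum {e^{i lambda_a}}

# demo: Berry phase of the lower band of h(k) = kx sx + ky sy around the Dirac point
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
ks = [0.1 * np.array([np.cos(t), np.sin(t)])
      for t in np.linspace(0, 2 * np.pi, 400, endpoint=False)]
U_list = [np.linalg.eigh(k[0] * sx + k[1] * sy)[1][:, :1] for k in ks]
zero = [np.zeros((1, 1)) for _ in ks]
print(orbit_propagator(U_list, zero, zero))             # ~ [+/- pi]
```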
The last ingredient for the propagator is an appropriate expression for the Roth term which shows explicit gauge covariance. It can be written as a summation of velocity matrix elements over _all_ other states as in (6). Instead, we propose another method that considers only \(D\) degenerate states on the Fermi surface using the covariant derivative[43]. The covariant derivative is defined as
\[\left|D_{\alpha}u_{n\mathbf{k}}\right\rangle=Q_{\mathbf{k}}\left|\partial_{ \alpha}u_{n\mathbf{k}}\right\rangle,\quad Q_{\mathbf{k}}:=I-\sum_{a\in\mathbb{Z}_{D}} \left|u_{a\mathbf{k}}\right\rangle\left\langle u_{a\mathbf{k}}\right|. \tag{9}\]
In numerical calculation, it can be evaluated as an appropriate finite difference[43]
\[\left|D_{\alpha}u_{n\mathbf{k}}\right\rangle=\frac{1}{2|\mathbf{q}_{\alpha}|}\left(| \overline{u}_{n,\mathbf{k}+\mathbf{q}_{\alpha}}\rangle-|\overline{u}_{n,\mathbf{k}-\mathbf{q }_{\alpha}}\rangle\right), \tag{10}\]
where the dual state \(|\overline{u}_{n,\mathbf{k}+\mathbf{q}}\rangle\) is a linear combination of the states \(|u_{n^{\prime},\mathbf{k}+\mathbf{q}}\rangle\) and has the property \(\left\langle u_{m\mathbf{k}}|\overline{u}_{n,\mathbf{k}+\mathbf{q}}\right\rangle=\delta_{mn}\). This ensures the orthogonality between the covariant derivative and the states in the degenerate space, i.e., \(\left\langle u_{m\mathbf{k}}|D_{\alpha}u_{n\mathbf{k}}\right\rangle=0\). Dual states are constructed as
\[|\overline{u}_{n,\mathbf{k}+\mathbf{q}}\rangle=\sum_{n^{\prime}}\left(S^{-1}_{\mathbf{k}, \mathbf{k}+\mathbf{q}}\right)_{n^{\prime}n}|u_{n^{\prime},\mathbf{k}+\mathbf{q}}\rangle \tag{11}\]
and
\[\left(S_{\mathbf{k},\mathbf{k}+\mathbf{q}}\right)_{nn^{\prime}}=\left\langle u_{n\mathbf{k}} |u_{n^{\prime},\mathbf{k}+\mathbf{q}}\right\rangle. \tag{12}\]
Using covariant derivative, Eq. (6) is expressed only by states inside the degenerate space
\[\mathbf{R}_{mn}\cdot d\mathbf{k}=-\frac{i}{\hbar}\sum_{l\not\in \mathbb{Z}_{D}}\mathrm{A}^{x}_{ml}(\varepsilon_{n\mathbf{k}}-\varepsilon_{l\mathbf{k} })\mathrm{A}^{y}_{ln}dk_{x}/2v_{y}+(x\leftrightarrow y) \tag{13}\] \[=-\frac{i}{\hbar}\sum_{l\not\in\mathbb{Z}_{D}}\left\langle D_{x}u _{m\mathbf{k}}|u_{l\mathbf{k}}\right\rangle(\varepsilon_{n\mathbf{k}}-\varepsilon_{l\mathbf{k }})\left\langle u_{l\mathbf{k}}|D_{y}u_{n\mathbf{k}}\right\rangle dk_{x}/2v_{y}\] \[\qquad+(x\leftrightarrow y)\] \[=-\frac{i}{\hbar}\left\langle D_{x}u_{m\mathbf{k}}|\varepsilon_{n\mathbf{k }}-\hat{H}(\mathbf{k})|D_{y}u_{n\mathbf{k}}\right\rangle dk_{x}/2v_{y}+(x\leftrightarrow y).\]
In the Appendix we show that both Eq. (6) and Eq. (13) are gauge independent, and both can be implemented easily in first-principles calculations. Eq. (6) is practical for tight-binding models with a small number of bands but becomes quite tedious if the total number of bands is large. Eq. (13) avoids this problem, focuses only on the degenerate space, and is convenient when the covariant derivatives can be easily calculated.
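A small numerical illustration of the dual-state construction, Eqs. (10)-(12), is given below. The toy doubly degenerate Hamiltonian is an arbitrary placeholder; the point of the sketch is only that the finite-difference covariant derivative built from the dual states is orthogonal to the degenerate subspace, independently of the random gauges returned by the diagonalization routine.

```python
import numpy as np

def lowest_states(H, D=2):
    """Columns are the D lowest-energy eigenstates of a Hermitian matrix."""
    return np.linalg.eigh(H)[1][:, :D]

def covariant_derivative(H_of_k, k, q, D=2):
    """Finite-difference covariant derivative |D_alpha u_n> of Eqs. (10)-(12)."""
    U0 = lowest_states(H_of_k(k), D)
    duals = []
    for kq in (k + q, k - q):
        U = lowest_states(H_of_k(kq), D)
        S = U0.conj().T @ U                       # overlap matrix, Eq. (12)
        duals.append(U @ np.linalg.inv(S))        # dual states, Eq. (11)
    return (duals[0] - duals[1]) / (2 * np.linalg.norm(q))   # Eq. (10)

def h2(k):                                        # placeholder 2-band Bloch Hamiltonian
    kx, ky = k
    off = 0.5 * (np.sin(ky) - 1j * np.sin(kx + ky))
    return np.array([[np.sin(kx), off], [np.conj(off), -np.sin(kx)]])

H_of_k = lambda k: np.kron(h2(k), np.eye(2))      # every band doubly degenerate
k0, q = np.array([0.3, 0.2]), np.array([1e-4, 0.0])
DU = covariant_derivative(H_of_k, k0, q)
U0 = lowest_states(H_of_k(k0))
print(np.abs(U0.conj().T @ DU).max())             # ~ 0: orthogonal to the degenerate space
```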
### Phase \(\delta\)
The above discussion about phase \(\theta\) is for a single \(k\)-plane perpendicular to the magnetic field. For 3D material, one needs to integrate over \(k_{z}\) to get the contribution from the whole Fermi surface. Extremal orbits will dominate in the integration and this procedure will introduce another phase \(\delta\) for each of them, which is generally \(\pm\pi/4\) (\(+\) for minimum cross-section and \(-\) for maximum cross-section). \(\delta=0\) for 2D material since there is only one k-plane. But for a nearly cylindrical Fermi surface (e.g., Fig.2(c)), \(\delta\) lies between these two limits. Below we adopt a simple model from Refs. [44; 27] to determine \(\delta\) for every extremal orbit that lies in a mirror plane. Here, we assume \(\mathcal{PT}\) symmetry for simplicity.
The oscillation of a 3D Fermi surface is calculated first for a 2D plane of thickness \(dk_{z}\) and then integrated with respect to \(k_{z}\), i.e.
Figure 1: The Wilson loop \(\mathfrak{o}\) for calculation of propagator \(\mathcal{A}\). Here it’s discretized to N points and the circulation direction is clockwise.
\[A_{r} =\sum_{a}\int dk_{z}\,A_{r}(k_{z})\text{cos}\left[r(2\pi\frac{F(k_{z} )}{B}+\lambda_{a}(k_{z}))+\phi_{M}\right] \tag{14}\] \[\propto\int dk_{z}\,A_{r}(k_{z})\text{cos}\left[r(2\pi\frac{F(k_{z} )}{B}+\theta(k_{z}))+\phi_{M}\right],\]
where \(A_{r}(k_{z})\) is the oscillation amplitude of the 2D plane, which depends on \(k_{z}\) through the cyclotron frequency \(F(k_{z})\) and the cyclotron mass \(m(k_{z})\). The relative change of \(F(k_{z})\) and \(m(k_{z})\) in the interval where the integral is appreciable is usually small. Hence, in the integration of Eq. (14), \(A_{r}(k_{z})\) can be treated approximately as a constant, while \(F(k_{z})\) in the cosine function cannot be treated as fixed because \(F(k_{z})\gg B\). The Maslov phase \(\phi_{M}\) remains constant as long as the orbit on the Fermi surface does not change its topology. Moreover, \(\mathcal{PT}\) symmetry causes the phase \(\theta(k_{z})\) to be quantized to \(0\) or \(\pi\) as in Eq. (4). So the \(k_{z}\) dependence of \(\theta\) can also be ignored and only the \(k_{z}\)-variation of \(F(k_{z})\) needs to be considered.
We expand \(F(k_{z})\) near its extremal value to the fourth order and all odd orders are zero due to mirror symmetry.
\[F(k_{z})=F_{0}+\frac{1}{2}F_{2}k_{z}^{2}+\frac{1}{24}F_{4}k_{z}^{4}. \tag{15}\]
Introducing the dimensionless variable \(x=(2r|F_{2}|/B)^{1/2}k_{z}\) and \(\alpha=\text{sgn}(F_{2})\frac{F_{4}B}{24r|F_{2}|^{2}}\), the integration can be carried out as
\[A_{r} \propto\text{Re}\int\exp\left[i(2\pi r\frac{F(k_{z})}{B}+r\theta +\phi_{M})\right]dk_{z} \tag{16}\] \[\propto\text{Re}\,\exp\left[i(2\pi r\frac{F_{0}}{B}+r\theta+\phi _{M})\right]\int\exp\left[\text{sgn}(F_{2})i\frac{\pi}{2}x^{2}(1+\alpha x^{2} )\right]dx\] \[\propto\cos\left[r(2\pi\frac{F_{0}}{B}+\theta)+\phi_{M}+\delta \right].\]
where phase \(\delta\) is the argument of the last integral
\[\delta=\arg\left\{\int_{-x_{m}}^{x_{m}}\exp\left[\text{sgn}(F_{2})i\frac{\pi}{2}x^{2}(1+\alpha x^{2})\right]dx\right\}. \tag{17}\]
\(\delta\) is determined numerically by carrying out the integral for a given value of \(\alpha\)[44, 27], where \(F_{2}\), \(F_{4}\) are obtained from a polynomial fit of \(F(k_{z})\) around the extremal orbit. The integration limit \(x_{m}\) can be taken as \(\infty\) when \(\alpha>0\) because the main contribution comes from \(x\approx 0\). However, this argument does not apply when \(\alpha<0\) due to the two extra artificial extrema. Since the real cross-section varies monotonically on either side of \(x=0\), \(x_{m}\) should be taken less than the turning point \(1/\sqrt{2|\alpha|}\) to avoid these artificial extrema. In the calculation, the argument of the integral approaches a steady value before the turning point, which is assigned as \(\delta\). It is obvious from Eq. (14) that \(\delta=0\) when \(F(k_{z})=F_{0}\). For a general 3D material, if \(\alpha\to 0\) (i.e., \(F_{4}\to 0\) and \(F_{2}k_{z}^{2}\) is the leading dispersion), one gets \(\delta=\pm\pi/4\). Otherwise, \(\delta\) may take a value between \(0\) and \(\pm\pi/4\).
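As a concrete illustration, the sketch below (our own minimal Python/NumPy example, not the code used for the calculations in this paper) evaluates the integral of Eq. (17) on a grid and returns its argument as \(\delta\); the symbols `F2`, `F4`, `B`, and `r` follow the text, and the integration limits are handled as described above.

```
import numpy as np

def delta_phase(F2, F4, B, r=1, n_x=200001):
    # alpha as defined below Eq. (16); F2, F4 come from a polynomial fit of F(k_z)
    alpha = np.sign(F2) * F4 * B / (24 * r * F2**2)
    if alpha >= 0:
        x_m = 10.0                                   # effectively "infinity"
    else:
        x_m = 0.9 / np.sqrt(2 * abs(alpha))          # stay below the artificial turning point
    x = np.linspace(-x_m, x_m, n_x)
    integrand = np.exp(1j * np.sign(F2) * (np.pi / 2) * x**2 * (1 + alpha * x**2))
    return np.angle(np.sum(integrand) * (x[1] - x[0]))   # argument of the integral

# sanity checks: a purely quadratic F(k_z) (F4 = 0) gives delta close to +/- pi/4
print(delta_phase(F2=+1.0, F4=0.0, B=10.0))   # ~ +pi/4 (minimum cross-section)
print(delta_phase(F2=-1.0, F4=0.0, B=10.0))   # ~ -pi/4 (maximum cross-section)
```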
### Phase \(\varphi_{A}\)
The last phase \(\varphi_{A}\) depends on the type of physical quantity \(A\). When \(A\) is the density of states (DOS), this phase vanishes \(\varphi_{DOS}=0\). For other quantities, \(\varphi_{A}\) represents the connection between the oscillation of \(A\) and the oscillation of DOS. For example, \(\varphi_{M}=\pi/2\) if \(A\) is sample magnetization, and \(\varphi_{X}=\pi\) if \(A\) is magnetic susceptibility. In four terminal devices, the longitudinal conductivity \(\sigma_{xx}\) oscillates in phase with DOS hence \(\varphi_{\sigma}=0\). But since \(\sigma_{xx}=\rho_{xx}/(\rho_{xx}^{2}+\rho_{xy}^{2})\), the resistivity \(\rho_{xx}\) can be in phase (if \(\rho_{xx}\ll\rho_{xy}\)) or out of phase (if \(\rho_{xx}\gg\rho_{xy}\)) with \(\sigma_{xx}\), so \(\varphi_{\rho}=0\) if \(\rho_{xx}\ll\rho_{xy}\) or \(\varphi_{\rho}=\pi\) if \(\rho_{xx}\gg\rho_{xy}\)[45, 32].
To summarize, all the phases in the oscillation term Eq. (1) have the following intuitive explanations. First, the magnetic-field-dependent term \(l_{B}^{2}S_{F}\) is given by the combination of the de Broglie phase (determined by the number of wavelengths in an orbit) and the Aharonov-Bohm phase. Then there is a phase \(\lambda_{a}\) associated with each orbit and each band coming from geometric effects and magnetic moment energy. \(\lambda_{a}\) of
Figure 2: (a) Spherical Fermi surface, (b) perfect cylindrical Fermi surface, and (c) nearly cylindrical Fermi surface. In each case, the red circle shows the extremal orbit.
degenerate bands for the same orbit will combine to give the phase \(\theta\). The reflection of the wave packet at turning points in the orbit causes the phase \(\phi_{M}\). These phases constitute the total phase for a single orbit lying in the \(k_{x}\)-\(k_{y}\) plane. For 3D materials, a \(k_{z}\) integration needs to be carried out to incorporate the whole Fermi surface's contribution, which gives the phase \(\delta\). Finally, depending on which quantity \(A\) is measured, there will be another phase \(\varphi_{A}\) if the oscillation of \(A\) is not synchronized with the oscillation of the DOS.
## III Results and Discussions
The crystal structure of CsTi\({}_{3}\)Bi\({}_{5}\) is fully relaxed within Density Functional Theory (DFT) as implemented in the Vienna _ab-initio_ Simulation Package [46; 47]. The cutoff energy for the plane-wave basis set is 300 eV. The force convergence criterion is 5 meV/Å. The electronic structure is calculated with the full-potential local-orbital minimum-basis code (FPLO) [48]. The default atomic basis sets are employed for the wave function expansion. The generalized gradient approximation parameterized by Perdew, Burke, and Ernzerhof (PBE) [49] is employed to mimic the exchange-correlation interaction between electrons throughout. The Brillouin zone is sampled by a \(k\)-mesh of 12\(\times\)12\(\times\)6. The tight-binding Hamiltonian of CsTi\({}_{3}\)Bi\({}_{5}\) is extracted via the maximally localized Wannier functions [39] as implemented in FPLO, which enforces all crystal symmetries. The Wannier basis set is composed of the Ti \(d\) and Bi \(p\) orbitals. The Fermi surface is calculated with the tight-binding Hamiltonian on a \(k\)-mesh of \(300\times 300\times 100\).
We mention that the above Wilson loop method for the total oscillation phase shift has been successfully applied to the \(\mathcal{PT}\) symmetric kagome metal CsV\({}_{3}\)Sb\({}_{5}\)[37], which predicted consistent results with experiments. In the following, we will apply the Wilson loop method to the recently discovered kagome superconductor CsTi\({}_{3}\)Bi\({}_{5}\)[16] to further demonstrate the reliability of this method. We note here that the characterization of the dimensionality of the quantum orbit by the phase \(\delta\) has not been discussed in our previous work on CsV\({}_{3}\)Sb\({}_{5}\).
The band structure of CsTi\({}_{3}\)Bi\({}_{5}\) with spin-orbit coupling is plotted in Fig.3(a), which contains rich topological properties. Due to the \(\mathcal{PT}\) symmetry in CsTi\({}_{3}\)Bi\({}_{5}\), each band is doubly degenerate. Characteristic features of the kagome lattice, such as Dirac points at K/H points away from the Fermi level which are gapped by SOC, van Hove singularities at M/L, and flat bands along M-K/L-H lines [19; 20; 22], are shown. There are also type II Dirac crossings on the \(\Gamma\)-M and A-L lines, which form a Dirac nodal line [19; 20; 22] in the \(\Gamma\)-M-A plane. Besides, both the experiment and theory have shown that CsTi\({}_{3}\)Bi\({}_{5}\) has topological Dirac surface states at the \(\overline{\Gamma}\) point on the (001) surface [18; 19; 20; 22; 23].
The band structure on the \(k_{z}=0\) plane looks similar to the band structure on the \(k_{z}=0.5\) plane (in units of \(2\pi/c\), \(c\) is the lattice constant), which indicates the quasi-two-dimensional character of the electronic structure of CsTi\({}_{3}\)Bi\({}_{5}\). Indeed, the 3D Fermi surface shown in Fig.3(b) shows a good cylindrical shape for all pieces. In total, four bands cross the Fermi level, creating five pieces of the Fermi surface. By sweeping \(k_{z}\), all extremal quantum orbits perpendicular to the \(z\)-direction are found to lie in the two mirror planes \(k_{z}=0\) and \(k_{z}=0.5\), shown in Fig.3(c) and (d). The initial experiment reported an oscillation frequency of 200 T [16]. A more recent transport experiment [24] reported a series of oscillation frequencies, ranging from 217 to 1013 T. Our calculations show agreement with the experiments in the low-frequency region. For example, the calculated frequencies of 213, 336, and 542 T might correspond to the observed frequencies of 200/217, 281, and 498 or 594 T, respectively. We notice that our calculated frequencies differ slightly from the calculations in Ref. [24], which might be caused by a mismatch of the Fermi energy and/or different calculation parameters employed.
The cyclotron masses \(m^{*}\) of all calculated quantum orbits are summarized in Table 1. Except for the two small pockets (336 and 213 T) around M/L points, all other orbits are electron pockets, whose cyclotron masses are defined as positive. The two largest hexagonal orbits centered around the \(\Gamma\) point (7488 and 8111 T) have
Figure 3: (a) Band structure of CsTi\({}_{3}\)Bi\({}_{5}\) with SOC. (b) 3D Fermi surface of CsTi\({}_{3}\)Bi\({}_{5}\), where the color representing the Fermi velocity is used to distinguish between different Fermi surfaces. (c) Fermi surfaces in \(k_{z}=0\) and (d) \(k_{z}=0.5\) (in units of \(2\pi/c\), \(c\) is the lattice constant) mirror plane at the Fermi energy. The grey hexagon is the first Brillouin zone. Fermi surfaces with the same color come from the same band. The cyclotron frequencies (in units of T) are given.
the largest cyclotron masses (1.6\(\sim\)1.7) while others have relatively small cyclotron masses.
The different quantum oscillation phases, as mentioned above, of all orbits are calculated and listed in Table 1. Here every cyclotron orbit is a simple closed curve; thus the Maslov correction \(\phi_{M}=\pi\) is omitted in the table. The phase \(\lambda_{a}\) is calculated by Eq. (13) with random gauge choices to test the gauge invariance, which give the same results. We also confirm the relation \(\lambda_{1}=-\lambda_{2}\) for any two degenerate quantum orbits imposed by the \(\mathcal{PT}\) symmetry. Thus only the positive one, \(\lambda_{1}\), is listed. The Berry phases without (\(\phi_{B0}\)) or with SOC (\(\phi_{B}\)) are also listed for comparison. According to our previous discussion of Eq. (4), the final phase shift of the quantum orbit \(\theta\) must be quantized to either 0 or \(\pi\), depending on the magnitude of \(\lambda_{1}\), as listed in Table 1. From these phases, it is clear that the phase \(\lambda_{1}\) is in general different from the Berry phase \(\phi_{B}\) due to the orbital and spin magnetic moment contributions. Also, for the \(\mathcal{PT}\)-symmetric system, the topology of the quantum orbit is not equivalent to the band topology of the individual Fermi surface. For example, the quantum orbits of 336 T (around M) and 8111 T (around \(\Gamma\)) have Berry phases \(\phi_{B}\) close to 0 but their oscillation phase shifts are \(\pi\). On the contrary, the quantum orbit of 4907 T (around \(A\)) has a Berry phase close to \(\pi\) but a zero oscillation phase shift. We note here that the strong SOC is important because these orbits have only a trivial Berry phase in the spinless case. Therefore, the incorporation of the magnetic moment contribution in the oscillation phase by SOC is crucial, and the quantum phase shift extracted from the Landau fan diagram should be interpreted more carefully, rather than simply being read as the Berry phase. The recent experiment [24] finds that the quantum orbit of 281 T is non-trivial with a \(\pi\) phase shift (\(\theta=\pi\)), which is consistent with our calculated non-trivial quantum orbit of 336 T.
Because the 3D Fermi surface is nearly cylindrical, the dimension-related phase \(\delta\) should be determined by considering higher-order terms in the expansion of \(F(k_{z})\) in Eq. (15). From numerical calculations, the frequency \(F\) and cyclotron mass \(m^{*}\) of all extremal orbits have a small relative change on the Fermi surface (less than 5% in the interval \(|\Delta k_{z}|\leq 0.1\)). Since CsTi\({}_{3}\)Bi\({}_{5}\) has \(\mathcal{PT}\) symmetry and all extremal orbits lie in mirror planes, Eq. (17) applies and is used to calculate the phase \(\delta\). The phase \(\delta\) is calculated with the magnetic field \(B\) varying from 5 T to 40 T, covering the range of \(B\) in typical oscillation experiments [31, 32, 33]. The variation of \(\delta\) is very small in the considered \(B\) range. Thus \(\delta\) can be treated approximately as a constant, whose average value is listed in Table 1. It shows that all quantum orbits except for the 213 and 802 T ones have a phase \(\delta\) quite close to \(\pm\pi/4\). Therefore, most orbits should be classified as 3D cases in quantum oscillation, even though the Fermi surfaces in Fig. 3(b) show a strong quasi-2D feature. On the other hand, the Fermi surface around A is almost dispersionless along \(k_{z}\), so the \(\delta\) for the quantum orbit of 802 T is closer to zero than the others. As a result, this quantum orbit is 2D. However, the quantum orbit of 713 T, which comes from the same Fermi surface as the 802 T orbit but on the \(k_{z}=0\) plane, has \(\delta=\pi/4\). Consequently, the character (2D or 3D) of a quantum orbit should not be simply determined from the appearance of the related Fermi surface in the 3D \(k\) space.
## IV Conclusion
We theoretically studied the quantum oscillations in CsTi\({}_{3}\)Bi\({}_{5}\) by revealing their frequencies and topological phases through a Wilson loop method. We revealed three quantum orbits with a \(\theta=\pi\) phase shift. Although most Fermi surfaces are quasi-2D, the dimension-related phase \(\delta\), beyond the angle-dependent frequency, clearly indicates their 3D nature. Our method can be applied to other quantum materials and provides a general way to study quantum oscillations assisted by first-principles calculations.
## Acknowledgement
B.Y. acknowledges the financial support by the European Research Council (ERC Consolidator Grant "NonlinearTopo", No. 815869) and the ISF - Personal Research Grant (No. 2932/21).
\begin{table}
\begin{tabular}{c c c c c c c c} Freq. & \(k_{z}\) & \(m^{*}\) & \(\phi_{B0}\) & \(\phi_{B}\) & \(\lambda_{1}\) & \(\theta\) & \(\delta\) \\ (T) & \((2\pi/c)\) & \((m_{0})\) & \((\pi)\) & \((\pi)\) & \((\pi)\) & \((\pi)\) & \((\pi)\) \\ \hline
213 & 0.5 & \(-\)0.24 & 0 & 0.08 & 0.40 & 0 & 0.22 \\
336 & 0 & \(-\)0.22 & 0 & 0.16 & 0.59 & 1 & -0.25 \\
542 & 0 & 0.22 & 0 & 0.50 & 0.23 & 0 & 0.25 \\
713 & 0 & 0.26 & 0 & 0.30 & 0.33 & 0 & 0.25 \\
802 & 0.5 & 0.24 & 0 & 0.33 & 0.12 & 0 & -0.14 \\
889 & 0.5 & 0.32 & 1 & 0.26 & 0.10 & 0 & -0.25 \\
4569 & 0 & 0.72 & 0 & 0.74 & 0.42 & 0 & 0.25 \\
4907 & 0.5 & 0.78 & 0 & 0.92 & 0.22 & 0 & -0.25 \\
7488 & 0.5 & 1.62 & 0 & 0.49 & 0.81 & 1 & 0.25 \\
8111 & 0 & 1.68 & 0 & 0.38 & 0.82 & 1 & -0.25 \\ \end{tabular}
\end{table}
Table 1: Extremal orbits of the Fermi surfaces of CsTi\({}_{3}\)Bi\({}_{5}\) at the Fermi energy. Frequency (Freq.) is in units of T. \(k_{z}\) refers to the \(k_{z}\) plane (in units of \(2\pi/c\), \(c\) is the lattice constant) where the corresponding extremal orbit is located. The underlined frequency indicates a minimal Fermi surface cross-section and the others correspond to a maximal cross-section. The cyclotron mass \(m^{*}\) is in units of the bare electron mass \(m_{0}\), where positive and negative values are for electron and hole pockets respectively. All orbits have the Maslov correction \(\phi_{M}=\pi\). \(\phi_{B0}\) (\(\phi_{B}\)) is the Berry phase without (with) SOC and \(\lambda_{1}\) is the phase of one of the two degenerate bands, defined in Eq. (5). \(\delta\) is the phase related to the Fermi surface dimensionality. All phases are in units of \(\pi\).
## Appendix
The most general gauge transformation is a \(U(D)\) basis transformation among the degenerate bands
\[\ket{u_{n\mathbf{k}}}\rightarrow\sum_{m=1}^{D}U(\mathbf{k})_{mn}\ket{u_{m\mathbf{k}}},\quad U ^{-1}=U^{\dagger}, \tag{18}\]
It has already been shown that the propagator \(\mathcal{A}[\mathbf{\mathsf{o}}]\) is gauge covariant under such a transformation [36], provided that the same wave function is used at the initial point and the final point, i.e. \(\ket{u(\mathbf{k}_{N+1})}=\ket{u(\mathbf{k}_{1})}\). Here we follow the same approach to show that our numerical formula inherits this property, so it is appropriate for calculations.
First, covariant derivatives transform as states under the \(U(D)\) gauge transformation
\[\ket{\overline{u}_{n,\mathbf{k}+\mathbf{q}}} \rightarrow\sum_{n^{\prime}}\left(U(\mathbf{k})^{\dagger}S_{\mathbf{k}, \mathbf{k}+\mathbf{q}}U(\mathbf{k}+\mathbf{q})\right)^{-1}_{n^{\prime}n}\ket{u_{n^{\prime},\mathbf{ k}+\mathbf{q}}} \tag{19}\] \[=\sum_{n^{\prime},m,l,m^{\prime}}U(\mathbf{k}+\mathbf{q})^{-1}_{n^{\prime }m}(S^{-1}_{\mathbf{k},\mathbf{k}+\mathbf{q}})_{ml}U(\mathbf{k})_{ln}\] \[\qquad\qquad U(\mathbf{k}+\mathbf{q})_{m^{\prime}n^{\prime}}\ket{u_{m^{ \prime},\mathbf{k}+\mathbf{q}}}\] \[=\sum_{m,l}(S^{-1}_{\mathbf{k},\mathbf{k}+\mathbf{q}})_{ml}U(\mathbf{k})_{ln}\ket{ u_{m,\mathbf{k}+\mathbf{q}}}\] \[=\sum_{l}U(\mathbf{k})_{ln}\ket{\overline{u}_{l,\mathbf{k}+\mathbf{q}}}\]
which makes the covariant-derivative expression of the Roth term, Eq. (13), transform covariantly. The same is true for the matrix-element expression of the Roth term, Eq. (6), and the spin matrix \(\sigma_{z}\), meaning that
\[\begin{split}\mathbf{R}(\mathbf{k}_{i})_{mn}\cdot d\mathbf{k}_{i}& \to U(\mathbf{k}_{i})^{-1}\mathbf{R}(\mathbf{k}_{i})_{mn}\cdot d\mathbf{k}_{i}U( \mathbf{k}_{i})\\ \sigma_{z}(\mathbf{k}_{i})_{mn}\cdot d\mathbf{k}_{i}&\to U (\mathbf{k}_{i})^{-1}\sigma_{z}(\mathbf{k}_{i})_{mn}\cdot d\mathbf{k}_{i}U(\mathbf{k}_{i}) \end{split} \tag{20}\]
Therefore, the second term in (8) is also gauge covariant
\[\begin{split}&\exp\left[i\mathbf{R}(\mathbf{k}_{i})\cdot d\mathbf{k}_{i}+iZ \frac{\sigma^{z}}{v^{\perp}}|d\mathbf{k}_{i}|\right]\\ &\rightarrow\exp\left\{iU(\mathbf{k}_{i})^{-1}\left[\mathbf{R}(\mathbf{k}_{i })\cdot d\mathbf{k}_{i}+Z\frac{\sigma^{z}}{v^{\perp}}|d\mathbf{k}_{i}|\right]U(\mathbf{k} _{i})\right\}\\ &=U(\mathbf{k}_{i})^{-1}\text{exp}\left[i\mathbf{R}(\mathbf{k}_{i})\cdot d\bm {k}_{i}+iZ\frac{\sigma^{z}}{v^{\perp}}|d\mathbf{k}_{i}|\right]U(\mathbf{k}_{i})\end{split} \tag{21}\]
Besides, the overlap matrix \(M^{i}_{mn}=\left\langle u_{m\mathbf{k}_{i+1}}|u_{n\mathbf{k}_{i}}\right\rangle\) transforms like
\[M^{i}\to U(\mathbf{k}_{i+1})^{-1}M^{i}U(\mathbf{k}_{i}) \tag{22}\]
Hence, the covariance of discretized propagator (8) follows from the transformation properties of the two separate terms as
\[\begin{split}\mathcal{A}[\mathbf{\mathsf{o}}]& \rightarrow\prod_{i=1}^{N}U(\mathbf{k}_{i+1})^{-1}M^{i}U(\mathbf{k}_{i}) \cdot U(\mathbf{k}_{i})^{-1}\\ &\qquad\exp\left[i\mathbf{R}(\mathbf{k}_{i})\cdot d\mathbf{k}_{i}+iZ\frac{ \sigma^{z}}{v^{\perp}}|d\mathbf{k}_{i}|\right]U(\mathbf{k}_{i})\\ &=U(\mathbf{k}_{N+1})^{-1}\\ &\left\{\prod_{i=1}^{N}M^{i}\cdot\exp\left[i\mathbf{R}(\mathbf{k}_{i}) \cdot d\mathbf{k}_{i}+iZ\frac{\sigma^{z}}{v^{\perp}}|d\mathbf{k}_{i}|\right]\right\}U( \mathbf{k}_{1})\\ &=U(\mathbf{k}_{1})^{-1}\mathcal{A}[\mathbf{\mathsf{o}}]U(\mathbf{k}_{1})\end{split} \tag{23}\]
Since the propagator \(\mathcal{A}[\mathbf{\mathsf{o}}]\) transforms covariantly, its spectrum \(\{e^{i\lambda_{a}}\}_{a=1}^{D}\) is gauge invariant. In other words, the phase \(\lambda_{a}\) obtained through these numerical formulas is uniquely determined (modulo \(2\pi\)), independent of the gauge choice in the calculation.
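As an illustration of this statement, the following minimal NumPy sketch (our own numerical check, not the production code; the overlap matrices \(M^{i}\) and the Hermitian generators of the exponential factors in Eq. (8) are replaced by random stand-ins) builds the discretized propagator, applies a random \(U(D)\) gauge rotation at every point of the loop with \(U(\mathbf{k}_{N+1})=U(\mathbf{k}_{1})\), and verifies that the eigenphases \(\lambda_{a}\) are unchanged.

```
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, D = 50, 2                                   # loop points and degeneracy

def rand_unitary(d):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

def rand_hermitian(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

# stand-ins for the overlap matrices M^i and the Hermitian generators of the
# exponential factors at each point of the loop
M = [rand_unitary(D) for _ in range(N)]
H = [rand_hermitian(D) for _ in range(N)]

def eigenphases(M, H):
    A = np.eye(D, dtype=complex)
    for Mi, Hi in zip(M, H):
        A = Mi @ expm(1j * Hi) @ A             # discretized propagator of Eq. (8)
    return np.sort(np.angle(np.linalg.eigvals(A)))

lam = eigenphases(M, H)

# random U(D) gauge with U(k_{N+1}) = U(k_1), transformed as in Eqs. (20)-(22)
U = [rand_unitary(D) for _ in range(N)] + [None]
U[N] = U[0]
M_g = [U[i + 1].conj().T @ M[i] @ U[i] for i in range(N)]
H_g = [U[i].conj().T @ H[i] @ U[i] for i in range(N)]

print(np.allclose(lam, eigenphases(M_g, H_g)))  # True: eigenphases are gauge invariant
```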
|
2305.02558 | Analyzing Hong Kong's Legal Judgments from a Computational Linguistics
point-of-view | Analysis and extraction of useful information from legal judgments using
computational linguistics was one of the earliest problems posed in the domain
of information retrieval. Presently, several commercial vendors exist who
automate such tasks. However, a crucial bottleneck arises in the form of
exorbitant pricing and lack of resources available in analysis of judgments
meted out by Hong Kong's Legal System. This paper attempts to bridge this gap by
providing several statistical, machine learning, deep learning and zero-shot
learning based methods to effectively analyze legal judgments from Hong Kong's
Court System. The methods proposed consist of: (1) Citation Network Graph
Generation, (2) PageRank Algorithm, (3) Keyword Analysis and Summarization, (4)
Sentiment Polarity, and (5) Paragraph Classification, in order to be able to
extract key insights from individual as well as a group of judgments together.
This would make the overall analysis of judgments in Hong Kong less tedious and
more automated in order to extract insights quickly using fast inferencing. We
also provide an analysis of our results by benchmarking our results using Large
Language Models making robust use of the HuggingFace ecosystem. | Sankalok Sen | 2023-05-04T05:23:11Z | http://arxiv.org/abs/2305.02558v1 | ###### Abstract
Analysis and extraction of useful information from legal judgments using computational linguistics was one of the earliest problems posed in the domain of information retrieval. Presently, several commercial vendors exist who automate such tasks. However, a crucial bottleneck arises in the form of exorbitant pricing and lack of resources available in analysis of judgments meted out by Hong Kong's Legal System. This paper attempts to bridge this gap by providing several statistical, machine learning, deep learning and zero-shot learning based methods to effectively analyse legal judgments from Hong Kong's Court System. The methods proposed consist of: (1) Citation Network Graph Generation, (2) PageRank Algorithm, (3) Keyword Analysis and Summarization, (4) Sentiment Polarity, and (5) Paragraph Classification, in order to be able to extract key insights from individual as well as a group of judgments together. This would make the overall analysis of judgments in Hong Kong less tedious and more automated in order to extract insights quickly using fast inferencing. We also provide an analysis of our results by benchmarking our results using Large Language Models, making robust use of the HuggingFace ecosystem.
**Keywords.** legal judgments, hong kong legal system, natural language processing, citation network graph, knowledge representation, keyword extraction, summarization, sentiment polarity detection, paragraph-wise semantic analysis
**Analyzing Hong Kong's Legal**
**Judgments from a Computational**
**Linguistics point-of-view**
Sankalok Sen\({}^{\clubsuit\diamond}\)1
Footnote 1: Work done as a part of the author’s Research Assistantship at The University of Hong Kong from July 2022 - May 2023.
\({}^{\clubsuit}\)_Department of Computer Science, The University of Hong Kong_
\({}^{\diamondsuit}\)[email protected]_
## 1 Introduction
In the following sections, we provide a brief history of Hong Kong's Legal System, the importance of this paper from an academic standpoint, engineering choices considered for this paper, and finally the overall objectives of what this paper attempts to accomplish.
### Brief History of Hong Kong Legal System
The Judicial Branch of the Hong Kong Special Administrative Region of the People's Republic of China (HKSAR) exercises its control over the judicial needs and requirements of the region. It is independent from the influences of the Legislative and Executive Branches as conformed by the mandates of the Basic Law [1]. The Basic Law was ratified by the National People's Congress on April 4, 1990, coming into effect after the handover of the region by the United Kingdom, on July 1, 1997. It replaced the Colonial Rules consisting of the Hong Kong Letters Patent and Hong Kong Royal Instructions of 1917 [2]. Broadly, the Courts of Law in Hong Kong which rule Judgments under the protection of the Basic Law, broadly consist of 8 different types as stated in Table 1.
### Background & Motivation
Each of the Courts of Law as mentioned in Table 1 metes out multiple judgments every week. To facilitate legal research and impart academic teaching, law faculties in Hong Kong need to constantly update their database with respect to how each of these judgments differs from the others in case type, importance, and wording. It is an exhausting manual process, and several commercial third-party companies provide computationally automated solutions [3]. However, these are monetarily expensive, and often such corporate solutions are not provided for Hong Kong's judgments.
Therefore, this paper suggests a solution by combining several computational techniques in Natural Language Processing. This makes it easier to effectively analyse newer legal judgments from both unsupervised and supervised learning based points of view. The methods implemented consist of: (1) Citation Network Graph based Knowledge Generation, (2) PageRank Algorithm, (3) Keyword Analysis and Summarization, (4)
\begin{table}
\begin{tabular}{|l|} \hline
**Courts of Law** \\ \hline \hline
1. Court of Final Appeal \\ \hline
2. Court of Appeal of the High Court \\ \hline
3. Competition Tribunal \\ \hline
4. District Court \\ \hline
5. Family Court \\ \hline
6. Lands Tribunal \\ \hline
7. Others: Magistrates’ Court, Labour Tribunal, Small Claims Tribunal, Obscene Articles Tribunal, Coroner’s Court \\ \hline \end{tabular}
\end{table}
Table 1: Categories of Courts of Law in Hong Kong
Sentiment Polarity, and (5) Paragraph Classification, in order to be able to extract key insights from individual as well as a group of judgments together.
### Engineering Choices
In recent years, with release of more powerful computing processors, various papers were published which leveraged the theory of Deep Learning Models. The intuition behind Deep Learning comes from the structure of the human brain composed of neurons. Just like the human brain learns from new experiences, a deep learning model is said to be learning and mimicking a human neuron. Specific to NLP, huge advancements were seen with the proposal of the Attention Mechanism in 2014 by Bahdanau et al [4] and subsequent introduction of the Transformers Architectures by Vaswani et al in 2017 [5], both leveraging Deep Learning Models. These models witnessed a surge in improvements in natural language understanding tasks like Text Summarization, Generation, Sentiment, and Question-Answering Tasks, among others.
However, when these models are applied to a specific social science based domain to understand it better, they tend to generalize, as they are often very large pre-trained models trained on extremely generic datasets which do not fit well with the nature of the social science domain. Thus, this paper has chosen to adopt more general Probabilistic Machine Learning Models instead. All three models proposed in this paper are based on this approach. In comparison, Deep Learning is said to be a more niche subset of Machine Learning, where the model seems to mimic a human neuron. This approach of choosing more general Machine Learning Models over specific Deep Learning Models is often taken when attempting to solve social science based problems, as seen in [6], [7], [8], [9], [10], [11].
Therefore, this paper evaluates by comparing the proposed methods with the results of Deep Learning Models (specifically, Large Pre-trained Language Models), and using this analysis concludes which model is suitable for which parts of a task from a precision and accuracy perspective.
### Objectives
Keeping these in mind, the paper presents its objectives:
* Implementation of the Citation Network Model based Knowledge Graph for each judgment which have citations, to find more important citations in the past 25 years and causal connections between various judgments since the handover of Hong Kong.
* Implementation of the Google's PageRank Algorithm [12] and apply it to the already built Citation Network Models to find the top citations among the data
available to provide an overall score to each judgment.
* Implementation of Keyword Analysis Algorithms (TextRank[13], YAKE[14], RAKE[15], KeyATM[16], LDA[17]), to extract keywords and phrases as well as effectively summarise each judgment and benchmarking using BERT (base) for benchmarking.
* Implementation of Sentiment Analysis (VADER[16]) to extract sentiment distribution across a judgment paragraph-wise.
* Implementation of a Paragraph level classifier for each Judgment improving the quality of semantic extraction for each paragraph using algorithms like Naive Bayes, Support Vector Machines (SVMs), Bernoulli Restricted Boltzmann Machines (BernoulliRBM), Stochastic Gradient Descent (SGD), Multilayer Perceptrons (MLP), and BART trained on Multi-Genre Natural Language Inference for benchmarking, using One to Few-Shot Learning.
## 2 Literature Review
In the following section we go through an extensive review of work done in the field of Legal Judgment Analysis using Machine Intelligence in the past two decades.
[19] introduced a parallel translated corpus of Italian and German legal texts which is used for testing different translation methods like alignment architectures. [20] used a corpus compiled from House of Lords (United Kingdom) judgments which is used for creating effective summarisation techniques in relation to legal judgments. [21] experimented with a question-answering method for creating a human-like tool used for testing against the Bar examination held in the United States of America, stating the Bar preparation materials as the legal corpus. [22] examined various methods posed by competitors in a case law information extraction competition on Japanese legal documents. [23] introduced a dataset which is used for testing Named Entity Recognition tasks on Brazilian legal texts. [24] introduced a dataset consisting of more than 2.8 million criminal cases as released by the Supreme Court of China in an attempt to test judgment prediction using both Machine Learning and Deep Learning techniques, providing benchmark comparisons between the two. [25] provided a dataset of different legal contracts and an attempt to summarise them in informal English for the ease of human language understanding. [26] introduced a dataset compiled from judgments by the United Nations Convention on Human Rights and attempted to predict results using different neural techniques. [27] introduced a large dataset of 57K judgments from the European Union and attempts a robust multi-label classification method. [28] introduced a dataset of 10K judgments on Chinese judicial reading comprehension along with 50K questions and answers and tests methods like BERT and TF-iDF compared to human-annotated benchmarks.
[29] randomly selected 50K Chinese Judgments as published online by the Courts of China and implements single and multi-level classification as well as Bi-Directional GRU architectures and models a charge prediction task for criminal cases. [30] also tackles the charge prediction task but also used Positional Embeddings, Part of Speech Tags, Bigram Models and WordNet on top of their deep neural architectures and received better prediction accuracies. [31] used an Encoder with Attention towards prediction of accurate judgments for legal reading comprehension texts. [32] implemented Markov Network Models to predict judgments in relation to divorce cases.
[33] attempted to analyse criminal cases for tackling the court view generation task using a label-conditional sequence-to-sequence model with attention. [34] tackles the same problem using an attention-based encoder architecture and an innovative counterfactual decoder architecture with pointer-generator. [35], [36], [37] attempted to extract legal entities using different Named Entity Recognition techniques. [38], [39] attempted to extract events from legal texts using techniques like temporal reasoning. [40] tried information retrieval techniques on legal judgments using paragraph and citation information. [41] built a pre-trained phrase scoring model for information retrieval using summarization and lexical matching techniques. [42] used a combination of rule-based and statistical methods to first create an automatic summarization tool for legal judgments. [43] used the LDA algorithm in an attempt to summarize legal documents. [44] leveraged domain knowledge methods towards legal text summarization effectively.
## 3 Programming Languages & Tools
This project leverages modern powerful processors by utilizing resources of the GPU Farm of the Department of Computer Science, The University of Hong Kong to attain its results. The main programming languages used for coding the models and algorithms is Python.
The justification for the choice of Python is that it can easily handle big data and can easily fit language models on the dataset. The scripting is done in Jupyter Notebooks and executed on the GPU Farm. Jupyter Notebooks were chosen because they provide an easy interface for interactive simulation. For instance, for a given dataset, it is very easy to visualize different types of outputs in the form of graphs, tables, and charts. A GPU (Graphics Processing Unit) is a special type of circuit that speeds up numerical computation, and so the GPU Farm of HKU has been used to speed up the overall computation of the models.
## 4 Dataset
The Legal-NLP Dataset was extracted as a part of another ongoing project at the Natural Language Processing Lab, Department of Computer Science, HKU (HKUNLP), as is situated in the departmental server of the project supervisor, with each Judgment extracted and stored as a JSON File, with each JSON File structured as shown in Table 2.
Before being preprocessed into a JSON File, it was downloaded from the Hong Kong Legal Information Institute (HKLII) [45]. Preprocessing was done using semantic parsing and regular expressions. The Legal-NLP Dataset is formally stated in this subsection. Mathematically, the Dataset \(\mathcal{D}\) consists of Judgements \(|\mathcal{J}|\), which can be described as:
\[\forall c=\{c_{1},c_{2},...,c_{M}\}\in\mathcal{C}\text{ (Courts of Law)}\] \[\exists j=\{j_{c,1},j_{c,2},...,j_{c,N}\}\in|\mathcal{J}|_{C,N} \text{ (\#\{Judgments per Court\})}\] \[\text{such that }\sum_{c=1}^{c=M}\sum_{j=1}^{j=N}J_{c_{m},j_{n}}=| \mathcal{J}|=\mathcal{D}\] (Equation 1)
The Dataset consists of approximately 115,000 bilingual Judgments, with around 80,000 in English and the remaining 35,000 in Traditional Chinese. The Categories of Courts of Law as described in Table 1 can be divided into 28 semi-broad entities, with cases existing for about 18 semi-broad entities in the Dataset. A graph summarizing this distribution is shown in Figure 1.
73 different types of cases were identified out of a total possible 112 [46], each of which can be described using its own unique code for identification. The total number of words in the Dataset is approximately 251 million with an average of 3000 words per judgment.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Key Value** & **Pair Values** \\ \hline \hline judgement & \([data_{i},[type\ =\ \{other,para,heading,quote\}]]\ \ \forall i\) (List of Lists) \\ \hline ref & \([ref_{i}]\ \forall i\) (List of References) \\ \hline date & \([date_{i}]\ \forall i\) (All Hearing Dates) \\ \hline parties & \([party_{i}]\ \forall i\) (Parties in Dispute) \\ \hline coram & \([coram_{i}]\ \forall i\) (_Coram Judice_) \\ \hline representation & \([rep_{i}]\ \forall i\) (Representations for all parties) \\ \hline caseno & [caseno] (Extracted Case-No) \\ \hline \end{tabular}
\end{table}
Table 2: Document Structure of each JSON holding one Judgment
## 5 Methodology
### Citation Network based Knowledge Graph and PageRank
Each Judgment has a specific number of citations, with pre-handover cases being rejected to concentrate the data over the specific 25-year period. The citations were extracted using various syntactical methods based on their citation styles, including both a brute-force approach and regular expressions, resulting in a key-value pair based dictionary data-type. A Key-Value Dictionary is such that for every unique Key (Judgment Citation Number), there exists a certain number of citations stored in a list-like format, which are described as Values.
After cleaning the prepared Graph, to make it more structured owing to the different citation styles, the PageRank Algorithm was implemented. The PageRank Algorithm is a re-implementation of the "Google Algorithm" [12] - the official algorithm that the Google search engine uses to rank its web pages based on certain searches. This was implemented on the Graph Citation Network to view the most important cases in Hong Kong's history post handover having the most present day impact.
Then, we color the Citation Network Graph using the following rules. The various colors are described as: Red [Lead Major Case being viewed], Blue [Citing Red], Green [Citing Blue], Yellow [Citing Green], Purple [Citing Yellow], Pink [Citing a mixture of different cases at different levels]. Pink cases are significant because they help us track 'transitioning' cases, i.e., cases that have cited not only the lead parent case but also some other child case of the parent case, signifying that there have been ideological shifts in meting out a particular judgment to such similar cases over time, resulting in a judgment
Figure 1: Case Counts vs. Hong Kong Courts
not only taking inspiration from the parent case but also its different child cases that have cited the parent case.
Algorithm 1 as shown below displays the creation of the State-Space-Graph.
```
Require: List of Cases, each of DICT() type, C = {c_1, c_2, ..., c_N}
Initialize: State Space Graph G = DICT()
for c_i in C do
    if c_i[CASE NUMBER] not in G.KEYS() then
        G[c_i[CASE NUMBER]] = []
    end if
    for r_i in c_i[LIST OF REFERENCES] do
        if r_i not in G.KEYS() then
            G[r_i] = [c_i[CASE NUMBER]]
        else
            G[r_i].APPEND(c_i[CASE NUMBER])
        end if
    end for
end for
K = G.KEYS()
for k in K do
    G[k] = LIST(SET(G[k]))
end for
```
**Algorithm 1** State-Space-Graph Generation
This State-Space-Graph consists of key-value pairs, using which we generate a directed acyclic graph consisting of 12068 nodes and 12663 edges. We report a graph density of 8.695648829995403e-05. We apply the PageRank Algorithm using the default value of 0.85 for the damping parameter.
Similar to webpages, each judgment case is denoted as a node in a graph resembling the web of the internet (or, in this case, the Hong Kong Legal System). As PageRank itself was inspired by Academic Citation Analysis, as stated in the eponymous paper, we concluded that implementing it for Judgment Citation Analysis would provide us with good results due to the similar nature of the task to be tackled.
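The sketch below is a hedged illustration (not the exact project code) of this step: it builds the citation dictionary of Algorithm 1 from the parsed JSON judgments of Table 2, converts it into a `networkx` directed graph with edges pointing from citing to cited cases, and ranks the nodes with PageRank at the default damping factor of 0.85. The path `judgments/*.json` and the direct field accesses are hypothetical.

```
import glob
import json
import networkx as nx

# load the parsed judgment JSONs (fields as in Table 2); path is hypothetical
cases = [json.load(open(p)) for p in glob.glob("judgments/*.json")]

# Algorithm 1: cited case number -> list of citing case numbers
graph = {}
for c in cases:
    graph.setdefault(c["caseno"], [])
    for ref in set(c["ref"]):
        graph.setdefault(ref, []).append(c["caseno"])

# directed graph with edges from each citing case to the cited case
G = nx.DiGraph()
for cited, citing in graph.items():
    G.add_node(cited)
    G.add_edges_from((src, cited) for src in set(citing))

scores = nx.pagerank(G, alpha=0.85)          # default damping factor
print("density =", nx.density(G))
for caseno, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    print(caseno, round(score, 5))
```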
After this, we implement Algorithm 2, which generates a colored citation network graph, as described in Citation-Network-Coloring, given a particular CASE-NUMBER.
Using this graph, as derived using Algorithm 2, we correlate the positions 0 with RED, 1 with BLUE, 2 with GREEN, 3 with YELLOW and 4 with PURPLE color for each node. If a node has multiple positions attached to it that means it has been cross-cited, and so it is given the color PINK.
```
Require: CASE-NUMBER, State Space Graph G
Initialize: sub-graph-edges = [], tree-structure = DICT(),
            tree-structure[0] = [CASE-NUMBER], tree-structure[1] = [],
            tree-structure[2] = [], tree-structure[3] = []
L = LIST(SET(LIST(G.PREDECESSORS(CASE-NUMBER))))
for l in L do
    sub-graph-edges.APPEND((l, CASE-NUMBER))
    if l not in tree-structure.KEYS() then
        tree-structure[l] = [l]
    else
        tree-structure[l].APPEND(l)
    end if
end for
```
**Algorithm 2** Citation-Network-Coloring
### Keyword Analysis and Summarization
We run the following algorithms on some of the top cases extracted by PageRank for visualization, and then benchmark them against each other by measuring the similarity of their results, evaluated on the HKCFA (Hong Kong Court of Final Appeals) subset of judgments.
#### 5.2.1 TextRank
The TextRank paper [13] proposed an innovative method for keyword extraction and text summarization by converting a text into a graphical structure, and hence choosing key linguistic structures by methods of voting and recommendation similar to that shown in
PageRank using the same scoring index.
Given a document \(\mathcal{D}\), perform tokenization, and then construct a graph based on the tokenized text. Then, rank the graph using the PageRank scoring mechanism, where for a vertex \(V_{j}\) in the constructed graph, \(In(V_{j})\) is the set of vertices that point to \(V_{j}\) (its predecessors), and \(Out(V_{j})\) is the set of vertices that \(V_{j}\) points to (its successors). The score for a vertex is defined as:
\[S(V_{i})=(1-d)+d\sum_{j\in In(V_{i})}\frac{1}{|Out(V_{j})|}S(V_{j})\]
The parameter \(d\) is the damping factor, and is default parameterised as 0.85 as suggested by the eponymous PageRank paper.
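A minimal sketch of this procedure (our own simplification, not the reference implementation of [13]): candidate words are linked when they co-occur within a small window, and the PageRank score above ranks them. The NLTK stopword list (requires `nltk.download('stopwords')` once) and the window size of 4 are assumptions.

```
import re
import networkx as nx
from nltk.corpus import stopwords

def textrank_keywords(text, window=4, top_k=15):
    stop = set(stopwords.words("english"))
    tokens = [w for w in re.findall(r"[a-z]+", text.lower())
              if w not in stop and len(w) > 2]
    G = nx.Graph()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + window]:          # co-occurrence window
            if v != w:
                G.add_edge(w, v)
    scores = nx.pagerank(G, alpha=0.85)             # same scoring index as Section 5.1
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```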
#### 5.2.2 Rapid Automatic Keyword Extraction (RAKE)
The RAKE paper [15] proposed a method of keyword extraction by partitioning the document using punctuation and stop-words, constructing word-level co-occurrence matrices, and using the computed word scores to extract the top words.
The initial candidate keywords are selected as phrases occurring in between stop-words or phrase delimiters, followed by graph construction using co-occurrences of keywords. The degree \(deg(w)\) of a word \(w\) and its frequency \(freq(w)\) are computed from the constructed graph, finally giving the word score as the ratio \(\frac{deg(w)}{freq(w)}\). For each candidate phrase initially selected, the scoring is computed by summing up the individual scores of its constituent keywords:
\[S(\text{cand-phr})=\sum_{w\in\text{cand-phr}}S(w)\]
Finally, it considers stop-words that repeatedly occur inside candidate keywords, includes the resulting adjoined phrases as candidates, produces a final score for every candidate, and ranks them in descending order of score.
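A condensed sketch of this scoring (assumptions: the NLTK stopword list as the stop list, punctuation as phrase delimiters; the adjoining-keyword step is omitted for brevity):

```
import re
from collections import defaultdict
from nltk.corpus import stopwords

def rake_keywords(text, top_k=15):
    stop = set(stopwords.words("english"))
    tokens = re.findall(r"[a-z]+|[.,;:!?()]", text.lower())
    # split the token stream into candidate phrases at stop words / punctuation
    phrases, current = [], []
    for w in tokens:
        if w in stop or not w.isalpha():
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)
    freq, degree = defaultdict(int), defaultdict(int)
    for ph in phrases:
        for w in ph:
            freq[w] += 1
            degree[w] += len(ph)          # word degree from phrase co-occurrence
    word_score = {w: degree[w] / freq[w] for w in freq}
    phrase_score = {" ".join(ph): sum(word_score[w] for w in ph) for ph in phrases}
    return sorted(phrase_score.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```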
#### 5.2.3 Yet Another Keyword Extractor (YAKE)
The YAKE paper [14] is another recent keyword extractor inspired by RAKE, which uses additional statistical estimators as extraction features, consisting of the position of words, word frequency, term relatedness to context, and the number of different sentences a term appears in. This is followed by term scoring, deduplication, and a final re-ranking.
The scoring of candidate keywords is given as follows:
\[S(kw)=\frac{\prod_{t\in kw}S(t)}{KF(kw)\times(1+\sum_{t\in kw}S(t))}\]
Here \(kw\) represents a candidate keyword of one or more \(n\)-grams to be scored, \(S(t)\) are the candidate \(n\)-gram probability scores constituting the candidate keyword, and \(KF(kw)\) is the candidate keyword's overall keyword frequency as described in the paper.
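A short usage sketch, assuming the open-source `yake` package implementing [14] is installed (`pip install yake`); the parameters shown (trigram candidates, top 20 keywords) are illustrative and the input file path is hypothetical:

```
import yake

extractor = yake.KeywordExtractor(lan="en", n=3, top=20)
text = open("judgment.txt").read()          # hypothetical plain-text judgment
for keyword_score in extractor.extract_keywords(text):
    print(keyword_score)                    # (keyword, score) pairs; lower score = more relevant
```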
#### 5.2.4 Latent Dirichlet Allocation (LDA)
The LDA paper [18] describes generating topics from a corpus of texts. The mathematical description, based on the paper, is as follows.
For a given corpus of \(M\) documents, each with length \(N_{i}\), select \(\theta_{i}\sim Dir(\alpha)\), with a sparse parameter \(\alpha<1\), for \(i=\{1,...,M\}\). To extract \(K\) topics, select \(\phi_{k}\sim Dir(\beta)\) for a sparse parameter \(\beta<1\), for \(k=\{1,...,K\}\). For each document \(i\in\{1,...,M\}\) and word position \(j\in\{1,...,N_{i}\}\), first sample the topic \(t_{i,j}\sim Multinomial(\theta_{i})\) and then the word \(w_{i,j}\sim Multinomial(\phi_{t_{i,j}})\).
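A hedged sketch of this step with scikit-learn (a single topic per judgment, matching the LDA column of our keyword tables; the helper name and the paragraph input are illustrative):

```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def lda_keywords(paragraphs, n_topics=1, top_k=10):
    vec = CountVectorizer(stop_words="english", lowercase=True)
    X = vec.fit_transform(paragraphs)                 # one row per paragraph
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X)
    vocab = vec.get_feature_names_out()
    return [[vocab[i] for i in topic.argsort()[::-1][:top_k]]
            for topic in lda.components_]
```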
### Sentiment Analysis
We use the Valence Aware Dictionary for Sentiment Reasoning (VADER) [16] to extract the linguistic sentiment of each paragraph in a judgment, gaining insight into the sentiment variation of the judge while meting out a judgment. Based on human checking, most cases describe the allowing or dismissal of the plaintiff's or defendant's case either in the opening or in the final paragraphs of a judgment. Finally, we visualize this by plotting the paragraph-wise sentiment distribution in a graph.
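A small sketch of this paragraph-wise sentiment pass, assuming NLTK's bundled VADER implementation (`nltk.download('vader_lexicon')` is needed once):

```
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

def paragraph_sentiment(paragraphs):
    # compound polarity in [-1, 1] for every paragraph, which is what we plot
    # as the sentiment distribution across a judgment
    return [sia.polarity_scores(p)["compound"] for p in paragraphs]
```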
_Limitations: This approach highlights the lack of distinguishing sympathetic or other raw emotional variations in documents. This can be solved by adding a tagger to each paragraph by means of text classification, to distinguish paragraphs from one another. For example, a paragraph beginning "Let us sympathize with the victim's family...", might be tagged with a positive linguistic sentiment. However, in the context of the overall case, there might be an ambiguity in the analysis of results. If the tagger instead describes it as part of the Description/About of the case rather than a judge's Opinion/Ruling, this would greatly improve the understanding of the corpus text for a legal researcher/academic. We propose multiple methods for paragraph-wise text classification in the following subsection to tackle this challenge, propose a benchmarking method for analyzing our results, and finally suggest an innovative Zero-Shot Learning based approach for quick inferencing. This makes use of the recent surge of large language models, which are often trained on a large volume of data and are helpful in generalizing better than trained models due to their better understanding of linguistic substructures in textual data._
### Paragraph-wise Text Classification
In the following section, we implement various Machine Learning and Deep Learning Algorithms for paragraph-wise Judgment Text Classification task. After careful inspection of individual judgments, we classified a subset of 50 documents and extracted their paragraphs into 4 types:
* **About:** Description of the case by the Judge.
* **Ruling:** Describes a neutralized ruling or opinion of the Judge.
* **Allowed:** Describes an opinion of the judge in favour of the plaintiff or defendant.
* **Dismissal:** Describes an opinion of the judge against the plaintiff or defendant.
#### 5.4.1 Naive Bayes
The above figure shows the Naive Bayes Model implemented. The count vectorizer aids in conversion of a text document corpus to a matrix of token counts. The Term Frequency-inverse Document Frequency or TF-iDF for a word/term \(t\), individual document \(d\), cluster of documents \(D\), total number of documents \(|D|=N\), and number of documents for which a term \(t\) appears as \(|d\in D:t\in d|\), is defined as:
\[TF(t,d) =\frac{freq(t,d)}{\sum_{t^{\prime}\in d}freq(t^{\prime},d)}\] \[iDF(t,D) =\log\frac{N}{1+|d\in D:t\in d|}\] \[TF\text{-}iDF(t,d,D) =TF(t,d)\cdot iDF(t,D)\]
Figure 2: Naive Bayes Model
To perform Naive Bayes classification, we wish to predict a class \(\hat{y}\) for a sentence \((x_{1},x_{2},...,x_{n})\):
\[\mathbb{P}(y|x_{1},x_{2},...,x_{n}) \propto\mathbb{P}(y)\prod_{i=1}^{n}\mathbb{P}(x_{i}|y)\] \[\hat{y} =\operatorname*{arg\,max}_{i}\mathbb{P}(y)\prod_{i=1}^{n}\mathbb{ P}(x_{i}|y)\]
For the priors, we consider three types of Naive Bayes submodels: (1) Bernoulli, (2) Multinomial, and (3) Complement (Multinomial for sparse datasets).
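A hedged scikit-learn sketch of the pipeline in Figure 2; `texts` and `labels` stand for the annotated paragraphs and their four classes and are assumed to be provided by the caller:

```
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB   # or BernoulliNB / ComplementNB

def train_naive_bayes(texts, labels):
    model = Pipeline([("counts", CountVectorizer()),
                      ("tfidf", TfidfTransformer()),
                      ("nb", MultinomialNB())])
    return model.fit(texts, labels)
```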
#### 5.4.2 Linear Support Vector Machine
The above figure shows the Support Vector Machine implemented. For a given set of sentences and classes \((\vec{X},Y)\), we wish to linearly separate the data based on respective classes. To perform this test, we use Stochastic Gradient Descent, with Hinge Loss and \(L_{2}\) regularizer. A hyperplane separating the data into clusters can be defined by \(w^{T}\cdot\vec{X}-b=0\). With a parameter \(\lambda>0\), the optimization problem is theoretized as:
\[\min\left(\lambda||w||_{2}^{2}+\frac{1}{n}\sum_{i=1}^{n}\max(0,1-y_{i}(w^{T} \cdot x_{i}-b))\right)\]
Solving this classifies the data into required classes, and helps to evaluate our dataset on the fitted SVM model.
Figure 3: Support Vector Machine
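A corresponding sketch of the linear SVM of Figure 3, fit with stochastic gradient descent on the hinge loss and an \(L_{2}\) penalty (the hyper-parameters shown are illustrative, not the tuned values):

```
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier

def train_linear_svm(texts, labels):
    model = Pipeline([("counts", CountVectorizer()),
                      ("tfidf", TfidfTransformer()),
                      ("svm", SGDClassifier(loss="hinge", penalty="l2",
                                            alpha=1e-4, random_state=0))])
    return model.fit(texts, labels)
```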
#### 5.4.3 Logistic Regression
The above figure shows the Logistic Regression model implemented. We compute the priors as:
\[p_{X}(\vec{x_{i}})=\mathbb{P}(\vec{x_{i}})=\frac{1}{1+e^{-\boldsymbol{\beta}\vec {x_{i}}}}\]
We wish to minimize the negative log-likelihood (cross-entropy) and predict the classes as:
\[C=-\sum_{i=1}^{n}\left[y_{i}\log(p_{X}(\vec{x_{i}}))+(1-y_{i})\log(1-p_{X}(\vec{x_{i}}))\right]\]
#### 5.4.4 Beroulli Restricted Boltzmann Machine
Figure 4: Logistic Regression
Figure 5: Restricted Boltzmann Machine
The above figure shows the Restricted Boltzmann Machine implemented with a Logistic Classifier.
It is a stochastic generative neural network consisting of binary visible units \(v\) (with offsets \(a\)), binary hidden units \(h\) (with offsets \(b\)), and a weight matrix \(W\) with elements \(w_{i,j}\). The classification model is computed as:
\[E(v,h) =-a^{T}v-b^{T}h-v^{T}Wh\] \[\mathbb{P}(v,h) =\frac{e^{-E(v,h)}}{\sum e^{-E(v,h)}}\] \[\mathbb{P}(h_{j}=1|v) =\sigma\left(b_{j}+\sum_{i}w_{i,j}v_{i}\right)\] \[\mathbb{P}(v_{i}=1|h) =\sigma\left(a_{i}+\sum_{j}w_{i,j}h_{j}\right)\]
We wish to maximize the following for an input sentence to classify it:
\[\operatorname*{arg\,max}_{W}\prod_{x\in\mathcal{X}}\mathbb{P}(x)\]
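A sketch of the RBM-plus-logistic-classifier pipeline of Figure 5 in scikit-learn (the number of hidden components and the learning rate are illustrative assumptions; TF-iDF features lie in [0, 1], as the Bernoulli RBM expects):

```
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

def train_rbm_classifier(texts, labels):
    model = Pipeline([("tfidf", TfidfVectorizer()),
                      ("rbm", BernoulliRBM(n_components=256, learning_rate=0.05,
                                           n_iter=20, random_state=0)),
                      ("clf", LogisticRegression(max_iter=1000))])
    return model.fit(texts, labels)
```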
#### 5.4.5 Base Linear Model with Embeddings
For the base deep learning model we create a simple model with Embeddings and Linear Layer. We use Stochastic Gradient Descent Algorithm with Cross Entropy Loss and Step Learning Rate Decay for optimization. The model is visualized in Figure 6.
Figure 6: Linear Model with Embeddings
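A PyTorch sketch of this base model (our reading of Figure 6, not the exact training script): an `EmbeddingBag` layer followed by a single linear layer over the four paragraph classes, trained with SGD, cross-entropy loss, and step learning-rate decay. The vocabulary size and the mini-batch below are dummy placeholders.

```
import torch
import torch.nn as nn

class LinearParagraphClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, num_classes=4):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        return self.fc(self.embedding(token_ids, offsets))

model = LinearParagraphClassifier(vocab_size=30000)        # dummy vocabulary size
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)
criterion = nn.CrossEntropyLoss()

# one training step on a dummy mini-batch of two paragraphs
token_ids = torch.randint(0, 30000, (50,))                 # concatenated token ids
offsets = torch.tensor([0, 20])                            # paragraph boundaries
targets = torch.tensor([0, 3])                             # e.g. About, Dismissal
optimizer.zero_grad()
loss = criterion(model(token_ids, offsets), targets)
loss.backward()
optimizer.step()
scheduler.step()
```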
#### 5.4.6 Encoder (LSTM) + Decoder (LSTM + SelfAttention)
For the improved deep learning model we create an LSTM based Encoder and an LSTM with Self Attention based Decoder. The model is visualized in Figure 7.
We compute Self Attention for Key (\(K\)), Values (\(V\)), and Query (\(Q\)) pairs as:
\[\text{SelfAttention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{ dim(K)}}\right)V\]
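A compact PyTorch sketch approximating the model of Figure 7 (a hedged reconstruction, not the exact architecture or hyper-parameters): an LSTM encoder, an LSTM decoder run over the encoder states, scaled dot-product self-attention on the decoder outputs as defined above, and a linear classification head over the four classes.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderDecoderClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.q = nn.Linear(hidden_dim, hidden_dim)
        self.k = nn.Linear(hidden_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        enc_out, state = self.encoder(self.embedding(token_ids))
        dec_out, _ = self.decoder(enc_out, state)
        Q, K, V = self.q(dec_out), self.k(dec_out), self.v(dec_out)
        attn = F.softmax(Q @ K.transpose(1, 2) / K.size(-1) ** 0.5, dim=-1)
        context = attn @ V                           # (batch, seq_len, hidden)
        return self.fc(context.mean(dim=1))          # pool and classify

model = EncoderDecoderClassifier(vocab_size=30000)
logits = model(torch.randint(1, 30000, (2, 40)))     # two dummy paragraphs
print(logits.shape)                                  # torch.Size([2, 4])
```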
#### 5.4.7 BART Large MultiNLI for Paragraph Classification
Finally, we compare our results with the pre-trained BART Large MultiNLI Large Language Model, created by Facebook AI Research, and show its usefulness for fast paragraph-level classification of Judgments.
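A hedged usage sketch of this zero-shot setup via the HuggingFace `pipeline` API with the `facebook/bart-large-mnli` checkpoint; the example paragraph is taken from the predictions shown later in this paper:

```
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
classes = ["About", "Ruling", "Allowed", "Dismissal"]

paragraph = ("The ground of appeal advanced by the Appellant fails. "
             "We therefore dismiss his appeal against sentence.")
result = classifier(paragraph, candidate_labels=classes)
print(result["labels"][0], result["scores"][0])      # most probable class and its score
```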
## 6 Results
In the following subsections, we go through the results of our experiments and findings. Firstly, we state the results of the PageRank algorithm. Then, we show the results of the implemented Machine Learning, Deep Learning, and Zero-Shot Learning inference, and compare and contrast between different models. Finally, we present the results of Citation Network Graphs, Keyword Analysis, Summarization methodology for some of the top cases extracted from the PageRank Algorithm.
Figure 7: Encoder (LSTM) + Decoder (LSTM + SelfAttention)
### PageRank Results
The top 20 cases extracted are stated in Table 3 below with their scores from the PageRank Algorithm with default parameters.
### Keyword Analysis Results
In this section, we outline the results of our Keyword Extraction Models. We hypothesize that one model can be chosen over another if it has a close correlation in scoring with the other models, or if it extracts a higher quality and quantity of keywords than the other models. The 5 models implemented were TextRank, RAKE, YAKE, LDA (with a single topic generated), and KeyBERT, with KeyBERT being the benchmark as it is a Large Language Model. We summarize our results in Tables 4 and 5.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Case Number** & **PageRank Score** \\ \hline \hline
7 HKCFAR 187 & 0.00676 \\ \hline CACV 284/2017 & 0.00232 \\ \hline CACV 54/2018 & 0.00221 \\ \hline CACV 219/2018 & 0.00199 \\ \hline
2 HKLRD 1121 & 0.00162 \\ \hline
1 HKLRD 69 & 0.00162 \\ \hline
1 HKLRD 1 & 0.00136 \\ \hline
3 HKLRD 691 & 0.00115 \\ \hline
10 HKCFAR 676 & 0.00108 \\ \hline
1 HKC 261 & 0.00106 \\ \hline
2 HKLRD 437 & 0.00101 \\ \hline CACC 338/2007 & 0.00098 \\ \hline HCAL 106/2017 & 0.00096 \\ \hline
5 HKLRD 1 & 0.00095 \\ \hline
5 HKCFAR 356 & 0.00084 \\ \hline
2 HKLRD 12 & 0.00071 \\ \hline
2 HKLRD 1 & 0.00070 \\ \hline CACV 65/2014 & 0.00068 \\ \hline FACV No 16 of 2008 & 0.00066 \\ \hline \end{tabular}
\end{table}
Table 3: PageRank (Top 20 Judgments)
_Conclusion:_ From both of the tables where we visualize the computed Keyword Analysis metrics, we conclude that the **YAKE-LDA** pair shares the highest percentage of common outcome keywords, whereas **KeyBERT** by itself has the lowest percentage of common outcomes, highlighting that KeyBERT extracts unique keywords different from the traditional models.
Therefore, while considering different Keyword Analysis Algorithms, we must consider the tasks at hand and use a host of different algorithms and visualize the extracted keywords independently on average to extract unique insights into the Judgments.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Metric/CN** & **HKCCDI** & **HKFAMC** & **HKMAGC** & **HKCTC** & **HKOAT** \\ \hline \hline
**Case Count** & **8** & **749** & **20** & **37** & **1** \\ \hline TextRank-RAKE & 0.0902 & 0.1118 & 0.0924 & 0.1067 & 0.1714 \\ \hline TextRank-YAKE & 0.1360 & 0.1900 & 0.1258 & 0.1448 & 0.2553 \\ \hline TextRank-LDA & 0.0645 & 0.1189 & 0.0610 & 0.0773 & 0.1162 \\ \hline TextRank-KeyBERT & 0.0160 & 0.0279 & 0.0209 & 0.0216 & 0.0232 \\ \hline RAKE-YAKE & 0.1000 & 0.0968 & 0.0787 & 0.1120 & 0.1509 \\ \hline RAKE-LDA & 0.1250 & 0.0771 & 0.0582 & 0.0944 & 0.0869 \\ \hline RAKE-KeyBERT & 0.0000 & 0.0160 & 0.0134 & 0.0186 & 0.0000 \\ \hline YAKE-LDA & 0.3043 & 0.3843 & 0.3699 & 0.3516 & 0.2857 \\ \hline YAKE-KeyBERT & 0.0000 & 0.0575 & 0.0840 & 0.0670 & 0.0454 \\ \hline LDA-KeyBERT & 0.0000 & 0.0572 & 0.0621 & 0.0525 & 0.0909 \\ \hline \end{tabular}
\end{table}
Table 4: Keyword Analysis Metrics for each Court (Part 1)
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Metric/CN** & **HKCFA** & **HKCRC** & **HKCT** & **HKFC** & **HKMC** \\ \hline \hline
**Case Count** & **1544** & **8** & **12** & **872** & **18** \\ \hline TextRank-RAKE & 0.1173 & 0.0902 & 0.1026 & 0.1127 & 0.0966 \\ \hline TextRank-YAKE & 0.1899 & 0.1360 & 0.1804 & 0.1916 & 0.1289 \\ \hline TextRank-LDA & 0.1452 & 0.0645 & 0.1252 & 0.1215 & 0.0627 \\ \hline TextRank-KeyBERT & 0.0559 & 0.0160 & 0.0302 & 0.0294 & 0.0211 \\ \hline RAKE-YAKE & 0.2005 & 0.1000 & 0.1107 & 0.1002 & 0.0806 \\ \hline RAKE-LDA & 0.1448 & 0.1250 & 0.1068 & 0.0790 & 0.0605 \\ \hline RAKE-KeyBERT & 0.0621 & 0.0000 & 0.0207 & 0.0165 & 0.0148 \\ \hline YAKE-LDA & 0.4348 & 0.3043 & 0.3983 & 0.3858 & 0.3610 \\ \hline YAKE-KeyBERT & 0.1280 & 0.0000 & 0.0920 & 0.0603 & 0.0813 \\ \hline LDA-KeyBERT & 0.1488 & 0.0000 & 0.0820 & 0.0607 & 0.0523 \\ \hline \end{tabular}
\end{table}
Table 5: Keyword Analysis Metrics for each Court (Part 2)
### Summarization Results
In the analysis of our summarization results, we use the Recall-Oriented Understudy for Gisting Evaluation (or ROUGE) [47] metric for our computation analysis. We visualize the Rouge-1, Rouge-2, and Rouge-L and state their Precision, Recall and F1-Scores. We benchmark the TextRank summarization against Facebook's BART-Large-CNN Model for the Summarization Task.
The statistics are summarized in Table 6 below.
_Conclusion:_ We compute the ROUGE metrics for a sample of 25 cases from the 5 courts with the highest case counts. We conclude that, for the Summarization Metrics:
* **HKLDT** has the highest ROUGE-1 Metrics [Unigram]
* **HKFC** has the lowest ROUGE-2 Metrics [Bigram]
* **HKCA** has the highest ROUGE-L Metrics [Longest Common Subsequence]
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Metric/CN** & **HKCFA** & **HKCA** & **HKDC** & **HKFC** & **HKLDT** \\ \hline \hline
**Case Count** & **25** & **25** & **25** & **25** & **25** \\ \hline ROUGE-1 Recall & 0.3242 & 0.3236 & 0.2783 & 0.2057 & **0.2233** \\ \hline ROUGE-1 Precision & 0.4489 & 0.5060 & 0.3958 & 0.4545 & **0.6285** \\ \hline ROUGE-1 F1 & 0.3270 & 0.3541 & 0.3019 & 0.2762 & **0.3297** \\ \hline ROUGE-2 Recall & 0.1719 & 0.1792 & 0.1236 & **0.0649** & 0.1069 \\ \hline ROUGE-2 Precision & 0.1999 & 0.2641 & 0.1670 & **0.1974** & 0.3578 \\ \hline ROUGE-2 F1 & 0.1577 & 0.1811 & 0.1292 & **0.0950** & 0.1646 \\ \hline ROUGE-L Recall & 0.2973 & **0.2945** & 0.2535 & 0.1822 & 0.1928 \\ \hline ROUGE-L Precision & 0.4034 & **0.4512** & 0.3608 & 0.4068 & 0.5428 \\ \hline ROUGE-L F1 & 0.2980 & **0.3192** & 0.2747 & 0.2458 & 0.2846 \\ \hline \end{tabular}
\end{table}
Table 6: Summarization Metrics for selected Court
### Paragraph-level Judgment Classification Results
Firstly, we summarize the results of our models in Table 7. We considered a sample of 1000 paragraphs across the 4 classes described in Section 5.4, with an 80-20 train-test split.
Therefore, we see that Bernoulli Restricted Boltzmann Machines perform the worst while the proposed Encoder Decoder Style Architecture performs the best.
_Limitations: The models, however, do not generalize well across different samples of the test set. Therefore, we suggest using the BART-Large model trained on the Multi-Genre Natural Language Inference dataset for better attention to linguistic substructures in long paragraphs. Our Encoder-Decoder architecture fails to perform for large paragraphs, and the accuracy of the model reduces to 0.47 for paragraphs of more than 100 tokens. As the pre-trained model allows up to 512 tokens, and our data does not contain more than 473 tokens in a paragraph, it serves as a better generalization model._
_Conclusion:_ Therefore, we conclude that Zero-Shot Learning performs a more realistic examination of textual classification of legal data with a relatively fast inference time, due to its pretraining on multiple large datasets. The architecture of BART is described [48] as a Transformer-based Encoder-Decoder style language model with a Bidirectional Encoder and an Autoregressive Decoder. The model was pre-trained using text corrupted with an arbitrary noising function, making it learn to reconstruct the original text by denoising.
Some of BART's predictions are shown in the following figure, with highlights (verified by human cross-checking) marking the text spans that lead the model to prefer one class over another.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Model/Metric** & **Accuracy** & **Precision** & **Recall** & **F1 Score** \\ \hline \hline
**Boltzmann Machine** & **0.55** & **0.30** & **0.55** & **0.39** \\ \hline
**Bernoulli Naive Bayes** & 0.73 & 0.67 & 0.73 & 0.69 \\ \hline
**Multinomial Naive Bayes** & 0.68 & 0.71 & 0.67 & 0.61 \\ \hline
**Complement Naive Bayes** & 0.72 & 0.73 & 0.72 & 0.69 \\ \hline
**Support Vector Machine** & 0.77 & 0.76 & 0.76 & 0.75 \\ \hline
**Logistic Regression** & 0.78 & 0.78 & 0.78 & 0.77 \\ \hline
**Multilayer Perceptron** & 0.74 & 0.74 & 0.74 & 0.73 \\ \hline
**Embedding+Linear** & 0.62 & 0.61 & 0.62 & 0.61 \\ \hline
**E(LSTM)+D(LSTM+Attn)** & **0.98** & **0.97** & **0.94** & **0.95** \\ \hline \end{tabular}
\end{table}
Table 7: Model Metrics for Classification (Test Set)
PARAGRAPH: 14. In drug trafficking cases, the fact that all or part of the drug involved is for the defendant's self-consumption is recognized as a mitigating factor (see, for example, R v Chan Mung-lung [1992] 2 HKCLR 127 and R v Chung Kam Fai [1993] HKC 42).
LABEL: ALLOMEO
PARAGRAPH: 15. In R v Meah & Marlow (1991) 92 Cr App R 254, a case concerning drug trafficking by way of importation of drugs, Jupp J also made the following observations (at 256):
LABEL: RULING
PARAGRAPH: "Importing is a distinct offence from possessing. The penalties are diff erent and in our view it is not right to say that this must be treated simply as a case of possession. Nevertheless there must be a considerable reduction in sentence to refle ct the fact that the drugs were for the appellant's own consumption."
LABEL: RULING
PARAGRAPH: 22. With 15 previous convictions, the Appellant's criminal record is worse than those of other defendants in similar cases. It must also be borne in mind that the Appellant re-offended when he was wanted. Having regard to self-consumption by the Appellant of most of the "ice" involved, the judge reduced the starting point from 5 years and 10 months to 5 years and 3 months. We agree that this 7-month discount (i.e. approximately 10%) is on the conservative side. Nevertheless, in our judgment, the Judge has not erred in principle, nor is the final sentence of 3 years and 6 months on the first charge manifestly excessive.
LABEL: ALLOMEO
PARAGRAPH: 23. The ground of appeal advanced by the Appellant fails. We therefore dismiss his appeal against sentence.
LABEL: DISNISSA
### Case Analysis 1: 7 HKCFAR 187
For the following case, we show the keyword analysis results and the subsequent sentiment distribution paragraph wise with tagging.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Method** & **Result** \\ \hline \hline TextRank & [’ani’, ’unhcr’, ’refuge’, ’state’, ’secretari’, ’tortur’, ’reason’, ’mr’, ’legal’, ‘deport’, ’deporte’, ’convent’, ’concern’, ’law’, ’nation high’, ’art’, ’person’, ’relev consider includ’, ’sri lanka’] \\ \hline RAKE & [’anxious consideration mr prabakar expressed’, ’november’, ’based’, ’acknowledging’, ’permissible course’, ’determining refugee status’, ’claimed protection’, ’september’, ’way without undertaking’, ’take unhcr’, ’secretary merely following unhcr’, ’omission’, ’suspected’, ’accordance’, ’refugee saying’, ’suffering whether physical’] \\ \hline YAKE & [’Secretary’, ’UNHCR’, ’refugee’, ’Convention’, ’torture’, ’respondent’, ’person’, ’Director’, ’Hong’, ’Kong’, ’claim’, ’Sri’, ’status’, ’concerned’, ’country’, ’order’, ‘Lanka’, ’Art’, ’reasons’, ’State’] \\ \hline LDA & [’secretary’, ’refugee’, ’unhcr’, ’torture’, ’would’, ’respondent’, ’person’, ’convention’, ’claim’, ’status’] \\ \hline \end{tabular}
\end{table}
Table 9: Keyword Analysis and Summary: 7 HKCFAR 187 (Continued)
Figure 10: Sentiment Distribution with Tagging: 7 HKCFAR 187
### Case Analysis 2: 2 HKLRD 1121
For the following case, we show the keyword analysis results and the subsequent sentiment distribution paragraph wise with tagging.
Figure 11: Citation Network Graph: 7 HKCFAR 187
\begin{table}
\begin{tabular}{|l|l|} \hline
**Method** & **Result** \\ \hline \hline TextRank & The Judge emphasized the gravity of the offence of trafficking in dangerous drugs and pointed out that, according to the sentencing guidelines laid down by the Court of Appeal, the starting point for trafficking in up to 10 grammes of “ice” was 3 to 7 years’ imprisonment. \\ \hline TextRank & [’sentence’, ’judg’, ’drug’, ’ma’, ’charg’, ’appeal’, ’appel chow chun’, ’discount’, ’case’, ’offenc’, ’appropri’, ’defend’, ’consid’, ’consider’, ’onli’, ’polic’, ’fai hklrd’, ’traffick’, ’mitig’] \\ \hline \multirow{4}{*}{RAKE} & [’judge considered’, ’defendant’, ’apparatus’, ’execution’, ’dangerous drugs’, ’guilty plea’, ’ning road’, ’court factual’, ’trafficking’, ’police officer’, ’appellant emphasized’, ’discount approximately’, ’inadequate’, ’months taking’, ’already sentenced’, ’take issue’, ’approached’] \\ \hline \multirow{4}{*}{YAKE} & [’Appellant’, ’Judge’, ’Court’, ’drug’, ’years’, ’month’, ‘ice’, ’sentence’, ’drug’, ’defendant’, ’trafficking’, ’Appeal’, ’grammes’, ’consumption’, ’TMCC’, ’point’, ’starting’, ’possession’, ’discount’, ’prisonment’] \\ \hline \multirow{2}{*}{LDA} & [’appellant’, ’months’, ’years’, ’drug’, ’judge’, ’ice’, ’sentence’, ’court’, ’defendant’, ’trafficking’] \\ \hline \end{tabular}
\end{table}
Table 10: Keyword Analysis and Summary: 2 HKLRD 1121
Figure 12: Sentiment Distribution with Tagging: 2 HKLRD 1121
Figure 13: Citation Network Graph: 2 HKLRD 1121
## Conclusion & Future Works
In this paper, we conducted extensive experiments and analysis for the extraction of useful information from legal judgments from a computational linguistics viewpoint, with respect to judgments meted out by Hong Kong's legal system. We implemented a 5-fold methodology: (1) Citation Network Graph Generation, (2) PageRank Algorithm, (3) Keyword Analysis and Summarization, (4) Sentiment Polarity, and (5) Paragraph Classification, for the extraction of key insights from individual judgments as well as groups of judgments, thus automating the extraction of useful insights with relatively fast inference times. We also complemented our experimental results by benchmarking them against Large Language Models.
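As an illustration of steps (1) and (2) of this methodology, a minimal sketch of building a citation network and ranking judgments with PageRank using `networkx` is given below; the citation pairs are hypothetical placeholders for edges extracted from the judgment texts.

```python
import networkx as nx

# Hypothetical (citing_case, cited_case) pairs extracted from judgment texts.
citation_pairs = [
    ("HKCFA 2004/12", "7 HKCFAR 187"),
    ("HKCA 2006/3", "7 HKCFAR 187"),
    ("HKCA 2006/3", "2 HKLRD 1121"),
    ("HKDC 2010/45", "2 HKLRD 1121"),
]

graph = nx.DiGraph()
graph.add_edges_from(citation_pairs)

# PageRank scores indicate how influential a judgment is within the citation network.
scores = nx.pagerank(graph, alpha=0.85)
for case, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{case}: {score:.4f}")
```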
Our future work consists of refining two papers that we wrote as a complement to this project, aiming for submission to the upcoming International Conference on Legal Knowledge and Information Systems (JURIX 2023). We plan to upload the papers to arXiv in the coming weeks, and we would appreciate feedback on them.
Building Efficient Knowledge Graphs for Hong Kong Judgments
Sankalok Sen
## Acknowledgment
I would like to thank my project advisor Dr. Lingpeng Kong for his continued support in this project to further my research interests in bridging the gap between Natural Language Processing and Social Science domains. I would also like to thank Dr. Zhiyong Wu of Shanghai AI Laboratory and Dr. Kevin Wu, Post Doctorate Fellow at The Department of Computer Science, HKU, for their helpful tips and suggestions in solving the multiple bottlenecks that this project aimed to tackle. |
2310.06588 | FTFT: Efficient and Robust Fine-Tuning by Transferring Training Dynamics | Despite the massive success of fine-tuning Pre-trained Language Models
(PLMs), they remain susceptible to out-of-distribution input. Dataset
cartography is a simple yet effective dual-model approach that improves the
robustness of fine-tuned PLMs. It involves fine-tuning a model on the original
training set (i.e. reference model), selecting a subset of important training
instances based on the training dynamics, and fine-tuning again only on these
selected examples (i.e. main model). However, this approach requires
fine-tuning the same model twice, which is computationally expensive for large
PLMs. In this paper, we show that (1) training dynamics are highly transferable
across model sizes and pre-training methods, and that (2) fine-tuning main
models using these selected training instances achieves higher training
efficiency than empirical risk minimization (ERM). Building on these
observations, we propose a novel fine-tuning approach: Fine-Tuning by
transFerring Training dynamics (FTFT). Compared with dataset cartography, FTFT
uses more efficient reference models and aggressive early stopping. FTFT
achieves robustness improvements over ERM while lowering the training cost by
up to $\sim 50\%$. | Yupei Du, Albert Gatt, Dong Nguyen | 2023-10-10T12:53:48Z | http://arxiv.org/abs/2310.06588v2 | # FTFT: Efficient and Robust Fine-Tuning by TransFerring Training Dynamics
###### Abstract
Despite the massive success of fine-tuning large Pre-trained Language Models (PLMs) on a wide range of Natural Language Processing (NLP) tasks, they remain susceptible to out-of-distribution (OOD) and adversarial inputs. Data map (DM) is a simple yet effective dual-model approach that enhances the robustness of fine-tuned PLMs, which involves fine-tuning a model on the original training set (i.e. reference model), selecting a specified fraction of important training examples according to the training dynamics of the reference model, and fine-tuning the same model on these selected examples (i.e. main model). However, it suffers from the drawback of requiring fine-tuning the same model twice, which is computationally expensive for large models. In this paper, we first show that 1) training dynamics are highly transferable across different model sizes and different pre-training methods, and that 2) main models fine-tuned using DM learn faster than when using conventional Empirical Risk Minimization (ERM). Building on these observations, we propose a novel fine-tuning approach based on the DM method: Fine-Tuning by transferring Training dynamics (FTFT). Compared with DM, FTFT uses more efficient reference models and then fine-tunes more capable main models for fewer steps. Our experiments show that FTFT achieves better generalization robustness than ERM while spending less than half of the training cost.
## 1 Introduction
Current state-of-the-art performance in Natural Language Processing (NLP) is dominated by large, pretrained language models (PLMs), which are typically fine-tuned for downstream tasks. Scaling laws (Kaplan et al., 2020; Hoffmann et al., 2022) suggest that better downstream performance is achieved with larger pretrained language models. However, fine-tuning large PLMs is also more expensive than fine-tuning small PLMs, in terms of both computational resources and carbon emission (Strubell et al., 2019; Wu et al., 2022).
Moreover, despite making impressive progress on regular benchmarks, many studies have shown that fine-tuned PLMs lack robustness against out-of-distribution (OOD) and adversarial inputs. For instance, human annotators can easily exploit the weaknesses of fine-tuned PLMs to trick these models to yield incorrect predictions, on tasks such as Natural Language Inference (NLI) (Nie et al., 2020) and Hate Speech Detection (HSD) (Vidgen et al., 2021).
The problem of robustness can be mitigated using dual-model approaches. Such methods first train a **reference model** to estimate the importance of each training instance, and then train a **main model** based on the outputs of the reference model (Nam et al., 2020; Utama et al., 2020; Sanh et al., 2021; Karimi Mahabadi et al., 2020; Zhang et al., 2022; Liu et al., 2021). Among these methods, the approach proposed by Swayamdipta et al. (2020) is particularly attractive in view of its simplicity and the demonstration that it consistently improves model performance on OOD test datasets. However, a major drawback of this approach is that it improves robustness at the expense of efficiency, which is especially problematic when using large PLMs. In particular, this approach requires fine-tuning the same model twice. First, a Data Map (DM) is constructed, based on the **training dynamics** from an initial fine-tuning run of the model (i.e., the reference model) on the
full dataset. The DM divides the training data into three subsets that the model finds ambiguous, hard-to-learn, and easy, respectively. The main model is then fine-tuned using only either the ambiguous or the hard-to-learn subset of data. In the standard DM approach, the reference model and the main model are the same PLM (e.g., DeBERTaV3Large, He et al., 2023).
In this paper, we jointly address the issues of robustness and efficiency without sacrificing the properties that make DMs attractive. We achieve this by exploiting the transferability of training dynamics. Specifically, we study whether training instances identified by more efficient reference models can be used to improve the robustness of more capable -- i.e. larger -- main models. We make three key observations. First, _training dynamics are highly transferable_ across different model sizes (e.g., DeBERTaV3Small as the reference model and DeBERTaV3Large as the main model) and different pre-training methods (PTMs, e.g., ELECTRA (Clark et al., 2020) as the reference model and DeBERTaV3 as the main model). Second, the condition for this transfer to work well is that reference models should be reasonably strong. Crucially, we identify a key property of effective reference models, namely that they typically identify higher ratios of training instances as easy cases, compared to less effective reference models. This observation can help us inspect whether a reference model would work well without training the main model. Third, the main model in the DM method learns faster than ERM, achieving good performance using fewer steps.
Based on our observations, we propose **Fine-Tuning by transferring Training dynamics (FTFT)** to improve both the efficiency and the robustness of fine-tuning. Concretely, FTFT improves the efficiency of the DM method in two ways. First, FTFT uses more efficient reference models for identifying the subset of ambiguous training instances. Second, when using this data subset to fine-tune a more capable main model, FTFT uses substantially fewer training steps than ERM fine-tuning. Experimental results on two tasks, NLI and HSD, using two models, DeBERTaV3 (He et al., 2021) and ELECTRA (Clark et al., 2020), show that FTFT can achieve better generalization robustness than ERM, while lowering the training cost by a factor of more than two.
## 2 Background
### Improving Model Robustness Using Dual-Model Approaches
Many previous studies have proposed dual-model approaches to improve model robustness. Nam et al. (2020) first train a reference model using generalized cross-entropy loss. Then, they train a main model while assigning higher weights to training instances that are hard for the reference model. Utama et al. (2020) use the same rationale, except for using a model that is trained on a random subset of the full training data as the reference model, as well as adding a confidence regularization mechanism. Sanh et al. (2021) use a Product-of-Expert (PoE) approach, by first training a reference model with limited capacity to capture dataset biases, and then training the main model to avoid these biases. Karimi Mahabadi et al. (2020) also adopt a PoE loss, but they train both the reference model and the main model in an end-to-end fashion. Zhang et al. (2022) investigate the idea of regularizing hidden representations by first training a reference model using ERM, and then using contrastive learning to make the hidden representations of the same class and different classes more similar and more separable. Liu et al. (2021) first train a less capable reference model using heavy regularization and vanilla SGD, and up-weigh the training instances that the reference model mis-predicts when training the main model. The DM method proposed by Swayamdipta et al. (2020) is a more nuanced approach to a similar idea. Instead of relying only on the correctness of the reference model, they use training dynamics to categorize training instances, and only use a subset of data to train the main model. We turn to the details of this method below.
### Data Map
Swayamdipta et al. (2020) propose a dual-model approach for improving model robustness. First, a reference model is trained on the full original dataset. Then, a Data Map (DM) is built based on the observed training dynamics, by tracking the prediction probabilities of the true class (\(p_{\rm true}\)) of each training instance across different epochs. Using the DM, training instances can be categorized into three groups: _ambiguous_ (i.e. the standard deviation of \(p_{\rm true}\) is in the top \(q\%\) of all training instances); _hard-to-learn_ (i.e. the mean of \(p_{\rm true}\) is at the bottom \(q\%\) of all training instances); and _easy_ (i.e. neither ambiguous nor hard-to-learn). The threshold \(q\%\) is fixed and typically set to \(33\%\).
It is worth noting that _ambiguous and hard-to-learn are not mutually exclusive_: a training instance can be categorized as both hard-to-learn (its average of \(p_{\mathrm{true}}\) is at the bottom \(q\%\)) and ambiguous (its standard deviation of \(p_{\mathrm{true}}\) is among the top \(q\%\)). Finally, the main model is only fine-tuned on the \(q\%\) most ambiguous or hard-to-learn datapoints. Swayamdipta et al. (2020) show that, with a slight loss of In-Distribution (ID) performance, this approach improves model performance on challenging Out-Of-Distribution (OOD) datasets. They also observe that ambiguous data is more beneficial than hard-to-learn data for model training. We therefore only experiment with ambiguous data.
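A minimal sketch of this categorization, assuming the per-epoch probabilities of the gold label have already been collected while training the reference model, might look as follows.

```python
import numpy as np

def build_data_map(p_true, q=0.33):
    """Categorize training instances from a matrix p_true of shape
    (n_epochs, n_examples), holding the probability assigned to the
    gold label at the end of each training epoch."""
    confidence = p_true.mean(axis=0)     # mean of p_true per example
    variability = p_true.std(axis=0)     # std of p_true per example
    n = p_true.shape[1]
    k = int(q * n)

    ambiguous = np.argsort(-variability)[:k]     # top-q% standard deviation
    hard_to_learn = np.argsort(confidence)[:k]   # bottom-q% mean
    flagged = np.union1d(ambiguous, hard_to_learn)
    easy = np.setdiff1d(np.arange(n), flagged)   # neither ambiguous nor hard-to-learn
    return ambiguous, hard_to_learn, easy

# Toy example: 5 epochs, 10 training instances with random gold-label probabilities.
rng = np.random.default_rng(0)
amb, hard, easy = build_data_map(rng.uniform(size=(5, 10)))
```

The main model is then fine-tuned only on the instances indexed by `amb` (or `hard`).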
Swayamdipta et al. (2020) uses the same PLM as both the reference and the main model. In contrast, Sar-Shalom and Schwartz (2023) show that a DM constructed by ELECTRALarge can be used to improve the robustness of DeBERTaV3Large (He et al., 2021). However, they use a different approach: instead of training only on the \(q\%\) most ambiguous datapoints, they add \(k\) copies of this subset to the original training set; also, they do not investigate DM transferability further. Inspired by their observation, we study whether exploiting the transferability of training dynamics can help improve the efficiency of the DM method, while retaining the advantage of improved robustness.
## 3 Experimental Setup
In our experiments we study the transferability of training dynamics on two tasks, Natural Language Inference (NLI) and Hate Speech Detection (HSD). We consider both ID performance and OOD robustness. To study the transferability across both model sizes and pretraining methods (PTMs), we mainly focus on two PTMs, DeBERTa-V3 (He et al., 2023) and ELECTRA (Clark et al., 2020). Each of these methods makes different model sizes available (small, base, and large). To further investigate the impact of the capability of the reference model on transferability, we also include TinyBERT (Turc et al., 2020), a very efficient but relatively weak PLM. As a baseline, we also experiment with random DM (i.e., randomly selecting \(q\%\) of the training data). Unless otherwise specified, we set the ratio \(q\%\) of all DM methods to \(33\%\).
**Data.** To study the impact of our method on model robustness, for each task, besides the original train set and ID validation set, we also include a few challenging OOD test sets.
For NLI, we use the MultiNLI dataset (Williams et al., 2018) as the train and ID validation set, because of its wide coverage of diverse inputs (10 genres). As OOD test sets, we use two challenging datasets designed to target weaknesses of models trained on MultiNLI: WANLI (Liu et al., 2022) and AdversarialNLI (Nie et al., 2020), which consists of three rounds of adversarial data collection.
For HSD, we use CAD (Vidgen et al., 2021) as the train and ID validation set. CAD consists of Reddit posts covering diverse topics and writing styles, annotated for multiple categories. We frame the task as a binary classification task (hateful vs. non-hateful). Following Ramponi and Tonelli (2022), we mark identity-related abuse as hateful and all other categories as non-hateful. As OOD test sets, we use DynaHate (Vidgen et al., 2021), which also contains three rounds of adversarial data collection, as well as a perturbed version for each round. Conceptualizations of hate speech vary widely across hate speech datasets. We selected these two datasets because they use similar definitions and taxonomies concerning hate speech.
**Models.** We focus on two PLMs: DeBERTaV3 (He et al., 2023) and ELECTRA (Clark et al., 2020), for two reasons. First, they have both been shown to perform well on text classification tasks. Second, they are available in different sizes (small, base, and large), allowing us to experiment with transfer between different model sizes. We also experiment with TinyBERT to study the impact of low reference model capacity. We provide the cost (measured in PFLOPs) for fine-tuning different PLMs in Appendix A.1.1
Footnote 1: We report PFLOPs rather than GPU hours because on our setup, which uses NVIDIA A100 GPUs, we’ve noticed occasional low GPU utilization, especially during the fine-tuning of smaller models such as TinyBERT. In such cases, reporting GPU hours would not reflect the computational costs accurately.
**Training.** For NLI, we train all models for 60k steps (\(\sim 5\) epochs) and evaluate every 4k steps. For HSD, we train all models for 6k steps (\(\sim 10\) epochs) and evaluate every 400 steps. For all models, we use four different random seeds and report their average performance. Full training details (e.g., optimization, software, and hardware) are included in Appendix A.2.
## 4 Transferability of Training Dynamics
In this section, we study the transferability of training dynamics in the DM method, i.e., whether we can use different reference and main models while maintaining the robustness of the main model. Specifically, we study whether training dynamics are transferable across different model sizes (§4.1, e.g., from DeBERTaV3\({}_{\text{small}}\) to DeBERTaV3\({}_{\text{large}}\)) and pretraining methods (§4.2, e.g., from ELECTRA\({}_{\text{Large}}\) to DeBERTaV3\({}_{\text{large}}\)), for two reasons. First, transferability across model sizes can help improve the efficiency of constructing a DM, by enabling the use of more efficient reference models. Second, transferability across pretraining methods can help achieve such an efficiency gain, when more efficient versions of the main model pretraining method are either not available, or are not able to model the task sufficiently well. Moreover, in the long run, transferability across different models can help us gain a better understanding of data ambiguity. Indeed, if the same subset of training instances is consistently identified as ambiguous by reference models of different sizes or pretraining methods, this would suggest that ambiguity is to some extent intrinsic to the data instances rather than being completely model-dependent.
Our results show training dynamics to be transferable across different model sizes and pretraining methods, with a few exceptions. To better understand the conditions for successful transfers, we analyze these few failure cases. We observe that successful transfers require reference models to be reasonably strong in identifying relatively easy training instances (§4.3). This observation offers us a guideline for choosing reference models without training the main model, which can be computationally prohibitive.
### Transferability Across Different Model Sizes
In this section, we study whether smaller and more efficient models can be used as reference models for training larger main models, without compromising robustness. For example, when using DeBERTaV3\({}_{\text{large}}\) as the main model, we investigate whether there is a comparable performance when using DeBERTaV3\({}_{\text{small}}\) versus using DeBERTaV3\({}_{\text{large}}\) itself as reference models. Successful transfers of this type can improve the efficiency of the DM method by enabling the use of more efficient reference models.
We show the results for DeBERTaV3 in Table 1 (across different model sizes), and make three observations. First, performance is better with larger PLMs as main models, even when we use ERM. In support of this claim, we note that DeBERTaV3\({}_{\text{large}}\) performs better than DeBERTaV3\({}_{\text{small}}\) and DeBERTaV3\({}_{\text{Base}}\) when fine-tuned with ERM or DM. This is the case for all test datasets, except when the reference model is trained on a random \(33\%\) subset of the train data. Second, by comparing DeBERTaV3\({}_{\text{Large}}\) fine-tuned with ERM and DM, we observe that ERM performs better on ID datasets, while DM outperforms ERM on OOD datasets. This is consistent with Swayamdipta et al. (2020). Third and most importantly, _training dynamics are transferable across different model sizes_: When using DeBERTaV3\({}_{\text{large}}\) as the main model, changing the reference model from DeBERTaV3\({}_{\text{large}}\) to DeBERTaV3\({}_{\text{small}}\) or DeBERTaV3\({}_{\text{Base}}\) yields comparable performance.
To understand this transferability, we analyze to what extent reference models of different sizes are consistent in identifying ambiguous training instances. Figure 1 displays the fraction of the ambiguous instances shared by reference models of different sizes, or of the same size but different random seeds. Consistent with our observation regarding transferability, the fractions of shared ambiguous instances between different sizes are only slightly
Figure 1: Consistency across different sizes of DeBERTaV3 on NLI. The numbers are the fractions (0-1) of the ambiguous training instances shared by two models. Training dynamics are transferable across different sizes: the fractions of shared ambiguous instances between models of different sizes are only slightly smaller than those between models of the same size but different random seeds (shown as superscript).
smaller than those between the same size but different random seeds (for comparison, the fraction between random DMs is expected to be 0.33).
### Transferability Across Different Pretraining Methods
In this section, we study the transferability of training dynamics across different pretraining methods, by comparing performance when the reference and main models are trained using the same pretraining method, or using different pretraining methods. If transfers of this type are successful, we can improve the efficiency of DM methods in case there is no version of the main model pretraining method which is both efficient and effective as the reference model.
Table 1 displays the results when using DeBERTaV3Large as the main model with different reference models (across different pretraining methods). _In most settings, training dynamics are transferable
\begin{table}
\begin{tabular}{l l|c|c c c c c c} \hline \hline Mode & Main Model & Ref. Model & MultiNLI & WANLI & \multicolumn{3}{c}{AdversarialNLI} \\ & & & & & R1 & R2 & R3 \\ \hline ERM & DeBERTaV3Small & - & 87.57 & 61.61 & 33.55 & 30.63 & 32.40 \\ ERM & DeBERTaV3Base & - & 90.00 & 64.61 & 43.55 & 33.23 & 34.14 \\ ERM & DeBERTaV3Large & - & **91.06** & 66.46 & 58.17 & 45.57 & 41.34 \\ \hline DM & DeBERTaV3Large & Random & 90.74 & 65.31 & 53.30 & 42.02 & 38.60 \\ DM & DeBERTaV3Large & DeBERTaV3Large & 90.75 & 66.33 & 59.75 & 45.60 & 41.94 \\ \hline \multicolumn{8}{c}{Across different model sizes} \\ \hline DM & DeBERTaV3Large & DeBERTaV3Small & 90.51 & **66.80** & 59.60 & 45.62 & 42.04 \\ DM & DeBERTaV3Large & DeBERTaV3Base & 90.74 & 66.61 & **61.42** & **46.73** & 41.58 \\ \hline \multicolumn{8}{c}{Across different pretraining methods} \\ \hline DM & DeBERTaV3Large & ELECTRASmall & 90.91 & 62.09 & 49.62 & 38.5 & 35.98 \\ DM & DeBERTaV3Large & ELECTRABase & 90.63 & 66.58 & 59.77 & 46.25 & **42.29** \\ DM & DeBERTaV3Large & ELECTRALarge & 90.80 & 66.42 & 58.95 & 44.57 & 41.52 \\ \hline \multicolumn{8}{c}{Across different model sizes} \\ \hline Mode & Main Model & Ref. Model & CAD & \multicolumn{3}{c}{DynaHate-Original} & \multicolumn{3}{c}{DynaHate-Perturb} \\ & & & R2 & R3 & R4 & R2 & R3 & R4 \\ \hline ERM & DeBERTaV3Small & - & 76.57 & 56.89 & 59.29 & 63.48 & 59.55 & 66.59 & 61.48 \\ ERM & DeBERTaV3Base & - & 78.64 & 60.53 & 64.28 & 68.89 & 60.81 & 69.48 & 63.12 \\ ERM & DeBERTaV3Large & - & **81.69** & 75.44 & 73.32 & 76.12 & 70.62 & 77.41 & 68.89 \\ \hline DM & DeBERTaV3Large & Random & 76.22 & 63.38 & 61.59 & 71.21 & 64.05 & 72.10 & 62.88 \\ DM & DeBERTaV3Large & DeBERTaV3Large & 81.58 & 79.17 & 76.87 & 77.73 & 73.34 & 76.63 & 67.54 \\ \hline \multicolumn{8}{c}{Across different model sizes} \\ \hline DM & DeBERTaV3Large & DeBERTaV3Small & 81.15 & 80.68 & **79.56** & **79.86** & 76.47 & 78.03 & 70.32 \\ DM & DeBERTaV3Large & DeBERTaV3Base & 80.12 & 76.34 & 78.82 & 74.60 & 77.81 & 68.67 \\ \hline \multicolumn{8}{c}{Across different pretraining methods} \\ \hline DM & DeBERTaV3Large & ELECTRASmall & 79.74 & 78.09 & 77.40 & 78.75 & 75.26 & 76.79 & 70.05 \\ DM & DeBERTaV3Large & ELECTRABase & 80.37 & **81.47** & 78.16 & 78.38 & **76.99** & **78.51** & **71.01** \\ DM & DeBERTaV3Large & ELECTRALarge & 79.48 & 76.71 & 75.97 & 78.58 & 73.55 & 77.80 & 69.81 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Transferability across different model sizes and pretraining methods, using DeBERTaV3 as the main model, on NLI (top, accuracy) and HSD (bottom, macro-F1). We compare the performance of 1) DeBERTaV3 of different sizes fine-tuned using ERM, 2) DeBERTaV3Large as the main model, using random DM (random \(33\%\) training instances), and DeBERTaV3Large as the reference model (Ref. Model) to construct DM (original DM), 3) DeBERTaV3Large as the main model, using DeBERTaV3Small and DeBERTaV3Base as reference models to construct DM, 4) DeBERTaV3Large as the main model, using ELECTRA of different sizes as reference models to construct DM. R1-R4 in AdversarialNLI and DynaHate refer to different rounds of collected data. Training dynamics are transferable across different sizes and pretraining methods: DM methods using different reference model sizes and pretraining methods show comparable performance.
across different pretraining methods._ We note that in most cases DeBERTaV3Large achieves comparable performance when trained by transferring a DM from reference models of varying sizes of DeBERTaV3 and ELECTRA. However, such transfers are not always successful. When using ELECTRASmall as the reference model, the performance is clearly worse on the OOD datasets of NLI than using other PLMs. We hypothesize that ELECTRASmall is not strong enough for constructing effective DMs for MultiNLI, and will analyze this further in §4.3.
### How Efficient Can We Be?
We have shown that training dynamics are transferable across different model sizes and pretraining methods. In this section, we study the conditions for successful transfers. Specifically, we focus on answering two questions: 1) whether we can use very efficient but weak models as reference models; and 2) what are the differences between effective and ineffective reference models. Answers to these questions can serve as guidelines for selecting efficient yet effective reference models.
**The Use of Efficient but Weak Models.** To answer this question, we compare the performance of a wide range of methods of three types. First, we consider four models fine-tuned with ERM: the small, base, and large versions of ELECTRA, and TinyBERT (Turc et al., 2020), which is a very weak but efficient PLM. We use these four models because they are of different sizes and capabilities (ELECTRALarge \(>\) ELECTRABase \(>\) ELECTRASmall \(>\) TinyBERT). Second, we use these ERM models as reference models in the DM method to fine-tune DeBERTaV3Large. By using reference models of different capabilities to fine-tune the same main model, we can inspect the impact of reference model capability on transferability. Third, we also include the results with ELECTRALarge as the main model, and different sizes of ELECTRA as reference models. By comparing results with different main models, we can better understand whether successful transfer of training dynamics is due to the compatibility between reference and main models, or due to the capability of the reference model itself. We also include the results for ELECTRALarge fine-tuned with a random DM as a baseline. We show our results for NLI in Table 2, and include the results for HSD in Appendix B. We make two observations.
First, poorly performing reference models, such as TinyBERT and ELECTRASmall fine-tuned with ERM, lead to failed transfers. Also, the worse the reference models perform, the worse the OOD performance of their main models is, e.g., TinyBERT leads to worse main model performance than ELECTRASmall. Moreover, transfers from such models are unsuccessful regardless of whether the main model shares the reference model's pretraining method, e.g., transferring from ELECTRASmall to ELECTRALarge and DeBERTaV3Large. This suggests that the success of transfers mostly depends on the reference model, rather than the compatibility between the reference and the main model. Second, using weak reference models for DM does not negatively affect ID performance much. For instance, transferring from ELECTRASmall to DeBERTaV3Large yields the best accuracy on MultiNLI. We suspect the reason for this is that weak models usually identify easy training instances as ambiguous data, which are sufficient for obtaining satisfactory ID performance (Swayamdipta et al., 2020).
**Differences between Effective and Ineffective Reference Models.** Because some reference models lead to failed transfers, it is important to understand the differences between effective and ineffective reference models. To answer this question, we start by considering the differences between a weak and a reasonably strong reference model when categorizing training data into ambiguous, hard-to-learn, and easy instances.
Assume we have a weak reference model which is only able to fit the easy training instances but fails to fit hard training instances. As a result, this weak reference model will assign increasing \(p_{\text{true}}\) to easy training instances across different epochs, while keeping \(p_{\text{true}}\) for hard training instances around the values expected in a random guessing scenario. This reference model behavior means that \(p_{\text{true}}\) will exhibit high standard deviations on the subset of easy cases, which will therefore be identified as ambiguous data; while hard training instances will have lower means for \(p_{\text{true}}\) and therefore be identified as hard-to-learn data. In contrast, a reasonably capable reference model can fit the easy part of the training instances during the early stage of training, which makes these instances have both high means and low standard deviations for \(p_{\text{true}}\). Meanwhile, \(p_{\text{true}}\) for harder instances will gradually increase across different epochs, making these instances yield relatively low means and high standard deviations for \(p_{\text{true}}\). As a result, such cases will be identified as both ambiguous and
hard-to-learn (i.e. we expect a large overlap in these subsets). Because we select a fixed percentage \(q\%\) of instances as ambiguous or hard-to-learn, the larger overlap results in a larger percentage of data instances to be classified as easy when using strong reference models.
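A toy numeric illustration of this reasoning, with assumed numbers rather than values from our experiments, is given below: with fixed selection fractions, a larger overlap between the ambiguous and hard-to-learn subsets directly translates into a larger easy fraction.

```python
import numpy as np

def easy_ratio(ambiguous_idx, hard_idx, n):
    # Easy = instances that are neither ambiguous nor hard-to-learn.
    flagged = np.union1d(ambiguous_idx, hard_idx)
    return 1.0 - len(flagged) / n

n = 1000
ambiguous = np.arange(0, 500)        # q% = 50% most ambiguous instances
hard_strong = np.arange(50, 550)     # strong reference: 90% overlap with ambiguous
hard_weak = np.arange(500, 1000)     # weak reference: no overlap with ambiguous

print(easy_ratio(ambiguous, hard_strong, n))  # 0.45 -> close to the 50% maximum
print(easy_ratio(ambiguous, hard_weak, n))    # 0.0  -> no instances left as easy
```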
This reasoning is supported by our previous observation on the ID performance of unsuccessful transfers (Table 2). Concretely, the fact that weaker reference models (e.g. TinyBERT) identify easy training instances as ambiguous, and that using only easy training instances for fine-tuning will produce comparable ID performance but degraded OOD performance (Swayamdipta et al., 2020), together explain our results for weak reference models.
To further validate our reasoning, we compute the percentages of easy training instances from different reference models and show the results in Table 3. Besides the default \(q\%=33\%\), we also experiment with \(q\%=25\%\) and \(q\%=50\%\). Consistent with our reasoning, ineffective reference models (i.e. ELECTRA\({}_{\text{Small}}\) on NLI and TinyBERT on both datasets) indeed identify fewer data points as easy compared with other models. Furthermore, the overlap between hard-to-learn and ambiguous instances in successful transfers is usually very large. For example, with \(q\%=50\%\), all effective reference models identify more than 45% of the training data as easy (the maximum is 50%, reached when the ambiguous and hard-to-learn subsets align perfectly).
\begin{table}
\begin{tabular}{l l l l|c c c c} \hline \hline Mode & Main Model & Ref. Model & MultiNLI & WANLI & \multicolumn{3}{c}{AdversarialNLI} \\ & & & & & R1 & R2 & R3 \\ \hline ERM & TinyBERT & - & 67.32 & 43.40 & 23.30 & 28.10 & 30.94 \\ ERM & ELECTRA\({}_{\text{Small}}\) & - & 81.98 & 54.11 & 23.38 & 28.57 & 30.25 \\ \hline ERM & ELECTRA\({}_{\text{Base}}\) & - & 88.53 & 63.06 & 34.58 & 30.73 & 31.29 \\ ERM & ELECTRA\({}_{\text{Large}}\) & - & 90.75 & 65.85 & 54.20 & 39.38 & 36.10 \\ \hline DM & DeBERTRA\({}_{\text{V3}}\)\({}_{\text{Large}}\) & TinyBERT & 89.17 & 60.02 & 41.83 & 34.58 & 34.54 \\ DM & DeBERTRA\({}_{\text{V3}}\)\({}_{\text{Large}}\) & ELECTRA\({}_{\text{Small}}\) & **90.91** & 62.09 & 49.62 & 38.50 & 35.98 \\ DM & DeBERTa\({}_{\text{V3}}\)\({}_{\text{Large}}\) & ELECTRA\({}_{\text{Base}}\) & 90.63 & **66.58** & **59.77** & **46.25** & **42.29** \\ DM & DeBERTa\({}_{\text{V3}}\)\({}_{\text{Large}}\) & ELECTRA\({}_{\text{Large}}\) & 90.80 & 66.42 & 58.95 & 44.57 & 41.52 \\ \hline DM & ELECTRA\({}_{\text{Large}}\) & ELECTRA\({}_{\text{Small}}\) & 89.88 & 61.53 & 45.90 & 36.20 & 31.89 \\ DM & ELECTRA\({}_{\text{Large}}\) & ELECTRA\({}_{\text{Base}}\) & 90.40 & 66.09 & 54.10 & 40.97 & 37.31 \\ DM & ELECTRA\({}_{\text{Large}}\) & ELECTRA\({}_{\text{Large}}\) & 90.33 & 65.37 & 53.73 & 39.67 & 36.17 \\ \hline DM & ELECTRA\({}_{\text{Large}}\) & Random & 89.99 & 65.03 & 51.25 & 39.02 & 34.98 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of different models on NLI with conventional fine-tuning (ERM) and DM using the \(33\%\) most ambiguous data identified with different reference models. Random in Ref. Model means randomly selecting \(33\%\) of the train data. The rows marked in gray are the results for DM training where the transfer was not successful, and the corresponding reference models. Successful transfer requires the reference model to be reasonably strong: reference models of clearly worse performance lead to degraded OOD performance for the main models.
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c|}{NLI: MultiNLI} & \multicolumn{3}{c}{HSD: CAD} \\ & 25\% & 33\% & 50\% & 25\% & 33\% & 50\% \\ \hline ELECTRA\({}_{\text{Small}}\) & 55.91\% & 47.77\% & 35.84\% & 68.72\% & 62.10\% & 46.07\% \\ ELECTRA\({}_{\text{Base}}\) & 69.10\% & 62.88\% & 46.62\% & 73.36\% & 65.54\% & 47.56\% \\ ELECTRA\({}_{\text{Large}}\) & 66.91\% & 59.08\% & 45.00\% & 67.80\% & 60.03\% & 45.01\% \\ DeBERTa\({}_{\text{V3}}\)\({}_{\text{Small}}\) & 67.08\% & 61.31\% & 46.08\% & 72.05\% & 65.00\% & 47.94\% \\ DeBERTaV3\({}_{\text{Base}}\) & 71.39\% & 64.02\% & 46.90\% & 73.20\% & 65.80\% & 48.04\% \\ DeBERTaV3\({}_{\text{Large}}\) & 72.97\% & 64.84\% & 46.97\% & 74.06\% & 65.89\% & 47.72\% \\ TinyBERT & 51.29\% & 38.44\% & 20.01\% & 63.58\% & 54.45\% & 40.20\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Differences between effective and ineffective reference models. We show the ratios of easy data identified by the data maps of different models; cells marked in gray are the ratios of easy data identified by ineffective reference models. The column names indicate different thresholds for \(q\%\) in the DM method. The key difference between effective and ineffective reference models lies in their ability to identify easy cases: TinyBERT identifies less easy data than other models on both datasets, and ELECTRA\({}_{\text{Small}}\) identifies less easy data on NLI.
## 5 FTFT: efficient and robust Fine-Tuning by transFerring Training dynamics
Given the transferability of training dynamics across model sizes and pretraining methods (§4), we can improve the efficiency of the DM method by using more efficient reference models. However, after training the reference model, the DM method still requires fine-tuning the main model in the same way as ERM, making it less efficient than ERM. In this section, we investigate whether we can overcome this limitation. We show that models trained using DM learn faster than ERM, and that fine-tuning the main model in DM for far fewer steps than full ERM training is already sufficient. Based on these observations, we propose a novel approach, Fine-Tuning by transferring Training dynamics (FTFT). We show that FTFT consistently offers both better efficiency and better robustness than ERM fine-tuning.
**DM Learns Faster Than ERM.** Figure 2 shows the OOD test performance of ELECTRALarge fine-tuned with fewer steps (i.e. from 1/15 to 7/15 of the full training steps). We compare DM with different reference model sizes against ERM. To better show the different learning speeds, instead of the absolute performance scores, we show the percentiles compared with using the full training steps for each method. Clearly, DM methods learn much faster than ERM on all OOD datasets: the performance percentile curves of DM methods are almost always above those of ERM. We also observe that DM methods using only 1/3 of the full training steps already achieve performance comparable to using all training steps. This result suggests that we can further improve the efficiency of the DM method by training for fewer steps, while maintaining its robustness advantage over ERM. We show the results for DeBERTaV3Large in Appendix B, and observe similar trends.
**FTFT: Achieving both Efficiency and Robustness.** Fine-Tuning by transferring Training dynamics (FTFT) involves two crucial changes to the standard DM method. First, FTFT uses more efficient PLMs as reference models. Second, to improve the efficiency of the main model, FTFT uses only 1/3 of the training steps used for ERM fine-tuning. We use 1/3 of the total training steps because we select the 33% most ambiguous training data, and this choice means that we fine-tune on each training instance the same number of times as ERM fine-tuning does on the full training dataset.
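A high-level sketch of this workflow is given below. The functions `train_reference`, `train_main`, and `get_dynamics` are placeholders supplied by the caller (they stand in for the actual fine-tuning and dynamics-logging code), so this is an outline of the procedure under those assumptions rather than our released implementation.

```python
import numpy as np

def ftft(train_set, train_reference, train_main, get_dynamics, q=0.33, full_steps=60_000):
    # 1) Fine-tune an efficient reference PLM on the full training set.
    reference = train_reference(train_set, steps=full_steps)
    p_true = get_dynamics(reference, train_set)   # shape: (n_epochs, n_examples)

    # 2) Keep the q% most ambiguous instances (highest std of p_true across epochs).
    k = int(q * len(train_set))
    ambiguous_idx = np.argsort(-p_true.std(axis=0))[:k]
    ambiguous_subset = [train_set[i] for i in ambiguous_idx]

    # 3) Fine-tune the more capable main PLM on that subset for ~1/3 of the steps.
    return train_main(ambiguous_subset, steps=full_steps // 3)
```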
Figure 2: Performance on NLI (top) and HSD (bottom) when training the main model (ELECTRA) with fewer training steps. ERM is standard ERM fine-tuning on the full training set. DM-* refers to fine-tuning ELECTRALarge with the DM method, using reference model ELECTRA+. The X-axis is the percentile of full training steps used, ranging from 1/15 to 7/15 of the total number of training steps. The Y-axis is the **percentile of performance compared with the full training steps**. Fine-tuning the main model using data maps is much faster than ERM: the models achieve close-to-\(100\%\) performance using only 1/3 of the training steps.
Nevertheless, we recommend determining the number of training steps by monitoring model performance. Table 4 summarizes the performance of FTFT using DeBERTaV3\({}_{\text{large}}\) as the main model, and DeBERTaV3\({}_{\text{small}}\) and DeBERTaV3\({}_{\text{Base}}\) as reference models. We also include the performance of DeBERTaV3\({}_{\text{large}}\) fine-tuned with ERM and the original DM method for comparison. Finally, we show the cost for each method for NLI, using the cost of fine-tuning ELECTRA\({}_{\text{Small}}\) for this task as the unit. The relative cost for each method in HSD is the same as for NLI.
We make two observations. First, FTFT methods show better robustness (i.e. better performance on OOD datasets) than both ERM and the original DM method. Second, FTFT is also much more efficient than ERM and the original DM method. For example, FTFT with DeBERTaV3\({}_{\text{Base}}\) and DeBERTaV3\({}_{\text{Small}}\) as the reference models only cost \(19.7\) and \(15.2\) units of computation: compared with the 32/64 units of computation of ERM/DM fine-tuning, these two FTFT choices are respectively 1.63/3.26 times and 2.11/4.22 times cheaper.
## 6 Conclusions
Fine-tuning PLMs has been shown to be vulnerable to OOD and adversarial inputs. The DM method has been shown to improve model robustness (Swayamdipta et al., 2020); however, it is computationally expensive. In this paper, we have presented a novel approach for fine-tuning PLMs, FTFT, which yields both better efficiency and better robustness than conventional ERM fine-tuning (§5). FTFT is built on the DM method, based on two observations: 1) in DM methods, reference model training dynamics are highly transferable across different model sizes (§4.1) and pretraining methods (§4.2), and 2) models trained using DM learn faster than when using conventional ERM fine-tuning. We have also discussed the conditions for successful FTFT runs (§4.3). We believe FTFT will be an important tool for future researchers and practitioners to perform efficient PLM fine-tuning, especially in situations where robustness against out-of-distribution or adversarial inputs is essential.
**Limitations.** Nevertheless, our work has limitations. First, although we have observed that effective reference models identify more easy instances, we did not conduct controlled experiments to validate whether this feature is the only condition that ensures the success of transferring training dynamics. Future researchers can perform more exhaustive empirical studies to investigate how to predict reference model effectiveness. Second, we have only tested FTFT on two classification tasks. Future studies can extend our work to tasks of other types, such as generative tasks (e.g., question answering), to examine the generalizability of FTFT.
\begin{table}
\begin{tabular}{l l l|c|c c c c c} \hline \hline Mode & Main Model & Ref. Model & Cost & MultiNLI & WANLI & \multicolumn{3}{c}{AdversarialNLI} \\ & & & & & R1 & R2 & R3 \\ \hline ERM & DeBERTaV3\({}_{\text{large}}\) & - & 32.0 & **91.06** & 66.46 & 58.17 & 45.57 & 41.34 \\ \hline DM & DeBERTaV3\({}_{\text{large}}\) & DeBERTaV3\({}_{\text{large}}\) & 64.0 & 90.75 & 66.33 & 59.75 & 45.60 & 41.94 \\ \hline FTFT & DeBERTaV3\({}_{\text{large}}\) & DeBERTaV3\({}_{\text{Small}}\) & 15.2 & 90.12 & 66.42 & **60.30** & 45.75 & **43.66** \\ FTFT & DeBERTaV3\({}_{\text{large}}\) & DeBERTaV3\({}_{\text{Base}}\) & 19.7 & 90.14 & **66.47** & 59.77 & **46.65** & 42.71 \\ \hline \hline Mode & Main Model & Ref. Model & CAD & \multicolumn{3}{c}{DynaHate-Original} & \multicolumn{3}{c}{DynaHate-Perturb} \\ & & & R2 & R3 & R4 & R2 & R3 & R4 \\ \hline ERM & DeBERTaV3\({}_{\text{large}}\) & - & **81.69** & 75.44 & 73.32 & 76.12 & 70.62 & 77.41 & 68.89 \\ \hline DM & DeBERTaV3\({}_{\text{large}}\) & DeBERTaV3\({}_{\text{large}}\) & 81.58 & 79.17 & 76.87 & 77.73 & 73.34 & 76.63 & 67.54 \\ \hline FTFT & DeBERTaV3\({}_{\text{large}}\) & DeBERTaV3\({}_{\text{Small}}\) & 80.73 & 78.77 & **77.53** & **79.48** & **76.19** & 77.53 & 69.31 \\ FTFT & DeBERTaV3\({}_{\text{large}}\) & DeBERTaV3\({}_{\text{Base}}\) & 79.76 & **82.05** & 76.77 & 78.62 & 75.22 & **78.43** & **71.00** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison between FTFT and ERM/original DM fine-tuning. Performance of DeBERTaV3 on NLI (top, accuracy) and HSD (bottom, macro-F1). ERM is conventional ERM fine-tuning, and FTFT refers to using the \(33\%\) most ambiguous data identified by different reference models (i.e., Ref. Model) _and only training for 1/3 of the total steps_. Cost refers to the fine-tuning cost, with the cost of fine-tuning ELECTRA-Small with ERM as the unit. FTFT yields both better efficiency and better robustness compared to both ERM fine-tuning and the original DM method. |
2306.03316 | CoSiNES: Contrastive Siamese Network for Entity Standardization | Entity standardization maps noisy mentions from free-form text to standard
entities in a knowledge base. The unique challenge of this task relative to
other entity-related tasks is the lack of surrounding context and numerous
variations in the surface form of the mentions, especially when it comes to
generalization across domains where labeled data is scarce. Previous research
mostly focuses on developing models either heavily relying on context, or
dedicated solely to a specific domain. In contrast, we propose CoSiNES, a
generic and adaptable framework with Contrastive Siamese Network for Entity
Standardization that effectively adapts a pretrained language model to capture
the syntax and semantics of the entities in a new domain.
We construct a new dataset in the technology domain, which contains 640
technical stack entities and 6,412 mentions collected from industrial content
management systems. We demonstrate that CoSiNES yields higher accuracy and
faster runtime than baselines derived from leading methods in this domain.
CoSiNES also achieves competitive performance in four standard datasets from
the chemistry, medicine, and biomedical domains, demonstrating its cross-domain
applicability. | Jiaqing Yuan, Michele Merler, Mihir Choudhury, Raju Pavuluri, Munindar P. Singh, Maja Vukovic | 2023-06-05T23:58:40Z | http://arxiv.org/abs/2306.03316v1 | # CoSiNES: Contrastive Siamese Network for Entity Standardization
###### Abstract
Entity standardization maps noisy mentions from free-form text to standard entities in a knowledge base. The unique challenge of this task relative to other entity-related tasks is the lack of surrounding context and numerous variations in the surface form of the mentions, especially when it comes to generalization across domains where labeled data is scarce. Previous research mostly focuses on developing models either heavily relying on context, or dedicated solely to a specific domain. In contrast, we propose CoSiNES, a generic and adaptable framework with Contrastive Siamese Network for Entity Standardization that effectively adapts a pretrained language model to capture the syntax and semantics of the entities in a new domain.
We construct a new dataset in the technology domain, which contains 640 technical stack entities and 6,412 mentions collected from industrial content management systems. We demonstrate that CoSiNES yields higher accuracy and faster runtime than baselines derived from leading methods in this domain. CoSiNES also achieves competitive performance in four standard datasets from the chemistry, medicine, and biomedical domains, demonstrating its cross-domain applicability.
Code and data are available at https://github.com/konveyor/tackle-container-advisor/tree/main/entity_standardizer/cosines
## 1 Introduction
The automatic resolution of mentions in free-form text to entities in a structured knowledge base is an important task for understanding and organizing text. Two well-recognized tasks tackle entity mentions in text. _Entity matching_ concerns resolving data instances that refer to the same real-world entity Li et al. (2020). The data instances usually comprise a specific schema of attributes, such as product specifications. _Entity linking_, also known as entity disambiguation, associates ambiguous mentions from text with entities in a knowledge base, where precise attributes and relationships between entities are curated Alam et al. (2022). Both tasks involve rich context surrounding the mention and the underlying entity Li et al. (2020); Alam et al. (2022). Much effort in deep learning approaches focuses on ways to leverage and encode the context surrounding mentions in text and attributes associated with entities in the knowledge base. However, little work has been done on scenarios where such rich context and precise information are not available. In domains such as finance, biology, medicine, and technology, mentions involve specialized jargon, where no context is associated with the mentions and often no attribute of the entities is available other than the mentions themselves.
We tackle the challenge of missing context for entity standardization (ES) mapping, which involves mapping mentions to entities in the knowledge base across multiple domains. Due to the lack of a public dataset for ES and to foster research on the problem, we manually construct a dataset in the technology domain geared to application modernization. We propose an
Figure 1: Examples of various mentions referring to the same entity from two different domains. Top: technology, bottom: medical.
for the dataset and then evaluate the generalization of CoSiNES in the biomedical domain.
Application modernization consists in migrating legacy applications to the cloud. It relies on a faithful assessment of the technical components of such applications. Much technical information is contained in free-form textual application descriptions, but automatic extraction of such knowledge is non-trivial due to variations in how the same entities are mentioned (Kalia et al., 2021).
Compared to the two aforementioned tasks of entity matching and linking, ES presents unique challenges. First, the mentions could have acronyms, numbers, symbols, aliases, punctuation, and misspellings. Figure 1 shows two examples of multiple mentions referring to the same entity. Second, there is a lack of context surrounding the mentions, and there are no attributes or relationships for entities in the knowledge base, on which previous approaches heavily rely. Third, large deep learning models require massive training datasets, which are not available for specialized domains. Therefore, architectures that are suited for zero-shot or few-shot learning are of great value for this task.
Another challenge is how to perform entity standardization at scale. A naive way is to have exhaustive comparisons between each possible mention and entity pair, which is inefficient. Previous deep learning models for entity matching and entity linking usually have multiple stages (Papadakis et al., 2020): the first stage, such as blocking in entity matching, reduces the number of comparison pairs via a coarse-grained criterion so that the later stages can focus on filtered candidate pairs. This multistage approach leads to globally inferior performance due to the errors accumulated along the pipeline.
We tackle these challenges with a generic framework based on a Contrastive Siamese Network, which efficiently adapts domain-agnostic pretrained language models to specific domains using a limited number of labeled examples. Language models have shown great capacity to capture both syntactic and semantic variations of text. Our framework decouples the comparison of mention-entity pairs for training and inference so that the model can be used as a standalone encoder after training. Therefore, the embeddings of the entities in the knowledge base can be precomputed and hashed. At inference time, the running time is linear in the size of query mentions, and we can leverage existing tools, such as FAISS,1 for efficient and large-scale similarity search.
Footnote 1: [https://github.com/facebookresearch/faiss](https://github.com/facebookresearch/faiss)
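As a rough sketch of this inference step, the snippet below indexes precomputed entity embeddings with FAISS and retrieves the nearest standard entity for each query mention; the random vectors merely stand in for the encoder outputs, and the embedding dimension is an assumption.

```python
import numpy as np
import faiss

d = 768                                                   # embedding dimension (assumed)
entity_emb = np.random.rand(640, d).astype("float32")     # stand-in for encoded entities
query_emb = np.random.rand(10, d).astype("float32")       # stand-in for encoded mentions

# L2-normalize so that inner product equals cosine similarity.
faiss.normalize_L2(entity_emb)
faiss.normalize_L2(query_emb)

index = faiss.IndexFlatIP(d)        # exact inner-product (cosine) search
index.add(entity_emb)               # entity embeddings are indexed once, offline

scores, entity_ids = index.search(query_emb, 1)   # nearest standard entity per mention
print(entity_ids.ravel(), scores.ravel())
```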
Our contributions are the following.
* A generic, scalable, and adaptable framework that leverages domain-agnostic pretrained language models.
* A method for generating anchored contrastive groups and a training scheme with a hybrid of batch-all and batch-hard online triplet mining.
* A dataset curated for application modernization, where various mentions for technical components are manually labeled.
We validate these contributions via comprehensive experiments with various hyperparameters, loss functions, and training schemes and show the robustness and effectiveness of the framework on our custom dataset in the technology domain. With optimal settings on our dataset, we further evaluate the framework on four datasets from the biomedical domain. We show that the framework can be adapted to other domains with minimal changes.
## 2 Related Work
Various forms of entity-related tasks have been studied by previous research, of which three are most relevant to our task.
**Entity Matching (EM)** identifies if different mentions refer to the same real-world entity, and is an important step in data cleaning and integration (Christen, 2012). The targets of EM are records from a database, where records follow a specific schema of attributes. The goal is to find pairs of records from two databases that refer to the same entity. Whereas early approaches of EM mostly apply rule-based heuristics, recent research often relies on deep neural network (Nie et al., 2019; Mudgal et al., 2018; Li et al., 2020; Ebraheem et al., 2018). As the number of pairwise comparisons grows quadratically, a preprocessing step (blocking) is usually applied to reduce the number of candidate matches. The matcher then takes a pair of a mention and an entity as input and produces a probability of a match. In contrast, entity standardization comes with a predefined set of standard entities, and the mentions come with no attributes. Our method involves learning a metric function, where the model can be used as an encoder to embed mentions and entities in the same space.
**Entity Linking (EL)** is the process of linking a mention in context with an entity in a knowledge base. Unlike entity standardization, the entities in the knowledge base, such as WikiData (Vrandecic and Krotzsch, 2014) and Freebase (Bollacker et al., 2008), usually have well-structured attributes and precisely defined relationships between them. The mention comes with rich context and unstructured raw text. To leverage these two different types of contextual information, separate context-mention and graph-entity encoders are designed to produce embeddings respectively, and another neural network is used to combine and project these two embeddings to the same space (Shahbazi et al., 2019; Yamada et al., 2022; Radhakrishnan et al., 2018). Due to the lack of context for both the mention and entity for entity standardization, we propose to use a single unified model as the encoder, which can reduce the complexity of the pipeline.
**Entity Normalization (EN)** is widely used in the biomedical domain. The task is to map noisy mentions to entities in a well-defined reference set, such as ontologies and taxonomies (Ferre et al., 2020; Ferre et al., 2020). The mentions usually have no context, and the entities come with no attributes, but there is a hierarchical structure in the reference set. Unlike entity standardization in the technology domain, the variations of mentions in life science are fairly standardized and synonyms are rare. The task can be well addressed with a sufficient number of training examples for each entity category, which is not the case in our setting. Fakhraei et al. (2020) propose a similar idea using a Siamese neural network for EN. Our approach differs in the following aspects: the designed training batch-generation algorithm, the computation of the contrastive loss, and the usage of PLMs in our specialized training scheme.
## 3 Methodology
### Problem Formulation
We denote the set of query mentions as \(\mathcal{Q}\equiv\{m_{q}\}\), and the set of standard entities as \(\mathcal{S}\equiv\{e_{s}\}\). Each entity in \(\mathcal{S}\) is associated with zero or more mentions referring to it \(e_{s}\leftarrow\{m_{s}\}\). Importantly, there should be no overlap between the query mention set \(\mathcal{Q}\) and the mentions associated with the standard entity set \(\mathcal{S}\). The task is to retrieve an entity \(e\in\mathcal{S}\) given \(m\in\mathcal{Q}\) such that \(e\) is the entity \(m\) refers to.
We tackle this task with contrastive learning by learning an embedding encoder such that mentions and entities are encoded to the same high-dimensional embedding space. The property of the embedding space is that the cosine distance between mentions of the same entity is smaller than mentions of different entities.
We design a BERT-based Siamese neural network architecture, which acts as the embedding encoder after training. The training is conducted with a hybrid of batch-all and batch-hard online triplet mining schemes. Figure 2 gives an overview of CoSiNES. The training (top) phase has the goal of pulling similar mentions together and pushing dissimilar mentions far away in the embedding space. After training, the inference (bottom) phase uses the Siamese neural network to project entities in the knowledge base and query mentions into the same embedding space. At inference time, nearest neighbor search algorithms can be used to retrieve the target entity.

Figure 2: System overview of CoSiNES.
### Contrastive Learning and Triplet Loss
Contrastive Learning (Khan et al., 2022; Rethmeier and Augenstein, 2022; Smith and Eisner, 2005) aims to group similar data points together and push dissimilar data points far apart in a high-dimensional embedding space. Equation 1 shows the core idea of contrastive learning. Here \(x\) represents any data point in the domain, \(x^{+}\) is a positive sample that is similar to \(x\) (or from the same class as \(x\)), and \(x^{-}\) is a negative sample that is dissimilar to \(x\). \(E\) is an encoder, which could be any neural network, and dis is a distance measure between the embedding vectors.
\[\text{dis}(E(x),E(x^{+}))\ll\text{dis}(E(x),E(x^{-})) \tag{1}\]
As shown in Equation 2, triplet loss is calculated based on triplets \(\{x,x^{+},x^{-}\}\), which consist of two samples from the same class and a third sample from a different class. The intuition is that the distance \(\text{d}(x,x^{-})\) should be larger than the distance \(\text{d}(x,x^{+})\) by a _margin_. The _margin_ is a hyperparameter that needs to be tuned.
\[\mathcal{L}=\max(\text{d}(x,x^{+})-\text{d}(x,x^{-})+\text{margin},0) \tag{2}\]
Based on the difference between \(\text{d}(x,x^{-})\) and \(\text{d}(x,x^{+})\), we can classify triplets into three categories: easy, semihard, and hard. See appendix B for detailed definitions.
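To make these definitions concrete, the following minimal sketch computes the loss of Equation 2 and categorizes a triplet from its positive and negative distances; the squared Euclidean distance, the margin value, and the category boundaries (which follow the common online-mining convention) are illustrative assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=2.0):
    # d(x, x+) and d(x, x-): squared Euclidean distances between embeddings.
    d_pos = F.pairwise_distance(anchor, positive).pow(2)
    d_neg = F.pairwise_distance(anchor, negative).pow(2)
    # Equation 2: hinge on d(x, x+) - d(x, x-) + margin.
    return F.relu(d_pos - d_neg + margin).mean()

def triplet_category(d_pos, d_neg, margin=2.0):
    if d_neg > d_pos + margin:
        return "easy"       # loss is already zero
    if d_neg < d_pos:
        return "hard"       # negative closer to the anchor than the positive
    return "semihard"       # negative farther than the positive but within the margin
```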
### Online Triplet Mining
There are two different strategies for mining triplets for contrastive learning. _Offline mining_ generates triplets at the beginning of training. The embeddings of the whole training dataset are computed, then hard and semihard triplets are mined based on the embeddings. Offline mining is highly inefficient. First, it requires computing the embeddings for all the training data to mine the triplets. Second, as the model starts to learn, hard and semihard triplets may turn into easy triplets; therefore, the triplet set needs to be updated frequently, at least every few epochs. _Online triplet mining_ (Schroff et al., 2015) instead generates triplets on the fly within a batch. There are two strategies to mine triplets from a batch, i.e., batch-all and batch-hard. We adopt the same idea in our model and propose a hybrid online mining scheme, which is shown to be superior to a single-mining strategy.
#### 3.3.1 Batch-All
To form valid triplets, a batch of training data should always include samples from more than one class, and each class should contain at least two samples. Suppose the size of the batch is \(B\) and the number of all possible triplets is \(B^{3}\). However, not all of these triplets are valid as we need to make sure each triplet comprises two distinct samples from the same class and one sample from another class. For all valid triplets in the batch, we simply select all hard and semihard triplets and compute the average loss over them. We do not include easy triplets in computing the average as it will make the loss too small. The calculations are based on the embeddings of the batch after they pass through the model.
#### 3.3.2 Batch-Hard
This strategy always selects the hardest positive and negative for each anchor in the batch. Each data instance in the batch can be used as an anchor. Therefore, the number of triplets is always equal to the size of the batch. The hardest positive has the largest \(\text{d}(x,x^{+})\) among all positives, and the hardest negative has the smallest \(\text{d}(x,x^{-})\) among all negatives.
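The sketch below illustrates both in-batch mining strategies on a batch of embeddings and integer class labels; the pairwise squared Euclidean distance and the exact averaging rule are assumptions consistent with the description above, not a verbatim reproduction of the paper's implementation.

```python
import torch

def pairwise_sq_dist(emb):
    # Squared Euclidean distance matrix between every pair of embeddings in the batch.
    dot = emb @ emb.t()
    sq = dot.diag()
    return (sq.unsqueeze(0) - 2.0 * dot + sq.unsqueeze(1)).clamp(min=0.0)

def batch_all_loss(emb, labels, margin=2.0):
    d = pairwise_sq_dist(emb)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
    pos_mask = same & ~eye                                   # valid (anchor, positive) pairs
    neg_mask = ~same                                         # valid (anchor, negative) pairs
    # loss[a, p, n] = d(a, p) - d(a, n) + margin for every index combination
    loss = (d.unsqueeze(2) - d.unsqueeze(1) + margin).clamp(min=0.0)
    loss = loss * (pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1))
    # Average over hard and semihard triplets only (those with strictly positive loss).
    return loss.sum() / (loss > 0).sum().clamp(min=1)

def batch_hard_loss(emb, labels, margin=2.0):
    d = pairwise_sq_dist(emb)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
    # Group sampling guarantees every anchor has at least one in-batch positive.
    hardest_pos = (d * (same & ~eye)).max(dim=1).values                   # farthest positive
    hardest_neg = d.masked_fill(same, float("inf")).min(dim=1).values     # closest negative
    return torch.relu(hardest_pos - hardest_neg + margin).mean()
```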
#### 3.3.3 Contrastive Group Generation
Based on the above discussion, a batch should include multiple samples from multiple classes. We sample batches in two steps: first, we randomly generate groups of size \(g\) whose samples come from the same class, and second, we randomly sample \(b\) such groups to form a batch. Therefore, the effective batch size is \(B=g\times b\).
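A minimal sketch of this two-step sampling is shown below; oversampling of very small classes and other corner cases are omitted, and the default group size and number of groups per batch simply mirror the setting reported in Section 5, so treat the details as assumptions.

```python
import random
from collections import defaultdict

def make_batches(mentions, labels, group_size=10, groups_per_batch=16, seed=0):
    """Step 1: build same-class groups of size g; Step 2: combine b random groups
    into one batch with effective size g * b."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for m, y in zip(mentions, labels):
        by_class[y].append(m)

    groups = []
    for y, items in by_class.items():
        rng.shuffle(items)
        for i in range(0, len(items), group_size):
            chunk = items[i:i + group_size]
            if len(chunk) >= 2:            # need at least two samples to form a positive pair
                groups.append([(m, y) for m in chunk])

    rng.shuffle(groups)
    return [
        [pair for g in groups[i:i + groups_per_batch] for pair in g]
        for i in range(0, len(groups), groups_per_batch)
    ]
```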
### BERT-Based Siamese Neural Network
The canonical Siamese neural network is an architecture that consists of two towers with shared weights working in parallel on two different inputs. The outputs are passed on to a distance function to learn comparable output vectors. We extend the same idea to a batch of inputs instead of a pair of inputs. We sample the batch as described in Section 3.3 and feed the sampled triplets through the network. The output embeddings of the batch are used to generate valid triplets and compute the loss. The backbone of the Siamese model could be any
neural network. We use the pretrained language model BERT Devlin et al. (2019) as the backbone.
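A minimal sketch of the shared-weight encoder is given below using HuggingFace Transformers; the specific checkpoint name and the mean pooling over token outputs (mirroring the BERT baseline described later in Section 4.3) are illustrative assumptions rather than the exact CoSiNES configuration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

class SiameseEncoder(torch.nn.Module):
    """One tower; the 'Siamese' behaviour comes from reusing the same weights
    for every mention and entity in a batch."""

    def __init__(self, backbone="prajjwal1/bert-small"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(backbone)
        self.bert = AutoModel.from_pretrained(backbone)

    def forward(self, texts):
        enc = self.tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        out = self.bert(**enc).last_hidden_state              # [batch, seq_len, hidden]
        mask = enc["attention_mask"].unsqueeze(-1)             # ignore padding tokens
        return (out * mask).sum(1) / mask.sum(1).clamp(min=1)  # mean-pooled embeddings
```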
### Hashing and Retrieval
Once the Siamese model is trained, it can be used as a standalone encoder to compute the embeddings of entities and mentions. We precompute the embeddings for all entities and save them for comparisons at inference time. For each query mention, we use the same Siamese model to get the embedding and our task is to retrieve the entity with the closest distance to the mention in the embedding space. For a query set of size \(q\), we need to run the Siamese model only \(q\) times, avoiding exhaustive pairwise running of the Siamese model. Potentially, we still need to conduct a pairwise nearest neighbor search over the mention and entity embeddings. Tools such as FAISS can be leveraged to efficiently perform large-scale nearest neighbor search.
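The following hedged sketch shows precomputing and indexing the entity embeddings with FAISS and retrieving the top-k entities for a batch of query mentions; normalized inner product is used as a stand-in for the cosine measure, and `encoder` is the Siamese model sketched above.

```python
import faiss

def build_entity_index(encoder, entity_names):
    # Precompute and index the entity embeddings once, offline.
    emb = encoder(entity_names).detach().cpu().numpy().astype("float32")
    faiss.normalize_L2(emb)                     # cosine similarity via inner product
    index = faiss.IndexFlatIP(emb.shape[1])
    index.add(emb)
    return index

def retrieve(encoder, index, entity_names, mentions, k=5):
    q = encoder(mentions).detach().cpu().numpy().astype("float32")
    faiss.normalize_L2(q)
    _, idx = index.search(q, k)                 # top-k nearest entities per query mention
    return [[entity_names[j] for j in row] for row in idx]
```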
## 4 Experimental Setup
### Dataset
We curate a dataset (ESAppMod) on application modernization that comprises named entities with respect to the technical stack of business applications. There are a total number of \(640\) unique entities, covering a variety of technical component categories, such as Operating System (OS), Application Server, Programming Language, Library, and Runtime. We manually extract and label \(6{,}412\) unique mentions associated with the entities in AppMod from real application descriptions. All annotations are done by domain experts. We split the mentions \(60{-}40\) into train and test sets, which yields \(3{,}973\) and \(2{,}439\) mentions in the training and testing splits, respectively. The mentions associated with each entity are not evenly distributed, ranging from one to over a hundred.
### Hyperparameter Tuning
Implementing our framework involves many design choices and hyperparameters. To facilitate performance at scale, the tradeoff between accuracy and inference time is crucial. We experimented with different sizes of BERT as the backbone of CoSiNES, including BERT-tiny, BERT-mini, BERT-small, BERT-medium, and BERT-base. For triplet mining, we evaluated batch-all, batch-hard, and a hybrid of the two. For the measure of distance, we investigated cosine, Euclidean, and squared Euclidean distance. For the hyperparameters, we evaluated different values of margin, learning rate, and batch size, detailed in appendix C. All training experiments were carried out on an NVIDIA A100 GPU with 40GB memory. We use the tool Ray.tune2 for hyperparameter tuning. Inference times were computed as the cumulative time to predict all 2,439 mentions in the test set on the CPU of a MacBook Pro with a 2.3 GHz Quad-Core Intel Core i7 and 32 GB 3733 MHz LPDDR4X RAM. We report the median inference time of 10 runs.
Footnote 2: [https://docs.ray.io/en/latest/tune/index.html](https://docs.ray.io/en/latest/tune/index.html)
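A hedged sketch of how such a sweep could be wired up with Ray Tune is shown below; the `run_training` routine is a hypothetical placeholder for the CoSiNES training loop, and the concrete candidate values are illustrative, not the full grid detailed in appendix C.

```python
from ray import tune

def train_cosines(config):
    # Hypothetical trainable: build the model from `config`, train, and report accuracy.
    acc = run_training(**config)          # assumed user-defined training routine
    tune.report(top1_accuracy=acc)

search_space = {
    "backbone": tune.grid_search(["bert-tiny", "bert-mini", "bert-small", "bert-medium", "bert-base"]),
    "mining": tune.grid_search(["batch_all", "batch_hard", "hybrid"]),
    "distance": tune.grid_search(["cosine", "euclidean", "squared_euclidean"]),
    "margin": tune.choice([0.5, 1.0, 2.0]),          # illustrative candidate values
    "lr": tune.choice([1e-4, 5e-5, 1e-5, 1e-6]),
    "group_size": tune.choice([5, 10]),
}

analysis = tune.run(train_cosines, config=search_space, metric="top1_accuracy", mode="max")
```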
### Baselines
We compare CoSiNES with four baselines.
**TF-IDF** A model that computes TF-IDF embeddings learned from training data (Kalia et al., 2021).
**GNN** A graph neural network that treats each entity or mention as a chain. Each character represents a node in the graph and its embedding representation is learned during training. The average of the character embeddings is used to represent entity names and mentions (Fan et al., 2022).
**BERT** We use the mean of last layer outputs of all tokens from BERT_small Bhargava et al. (2021) to represent entities and mentions. This is the same backbone used to train CoSiNES.
**GPT3**3 We use the GPT-3 embeddings API from OpenAI to compute the embeddings using the model embedding-ada-002.
Footnote 3: [https://beta.openai.com/docs/guides/embeddings/](https://beta.openai.com/docs/guides/embeddings/)
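For reference, a minimal sketch of a TF-IDF retrieval baseline of this kind is given below; the character n-gram analyzer and the exact fitting corpus are assumptions, since those implementation details are not spelled out here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

def tfidf_baseline(entity_names, train_mentions, query_mentions, k=5):
    # Fit TF-IDF on the available training text (assumed); character n-grams are an
    # illustrative choice that tolerates acronyms and misspellings to some degree.
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    vec.fit(entity_names + train_mentions)

    nn = NearestNeighbors(n_neighbors=k, metric="cosine")
    nn.fit(vec.transform(entity_names))                 # index the standard entities

    _, idx = nn.kneighbors(vec.transform(query_mentions))
    return [[entity_names[j] for j in row] for row in idx]
```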
## 5 Results and Discussions
Table 1 shows the comparative results on our dataset. Our model outperforms all baselines by a significant margin in terms of top-1 retrieval accuracy: 10.46% over TF-IDF, 13.2% over GNN, 47.76% over BERT, and 3.16% over GPT3. Through comprehensive experimentation, we observe that the best-performing model uses BERT-small as the backbone. The learning rate is set to \(1\mathrm{e}{-}5\), the contrastive group size is \(10\), and the batch size of groups is \(16\), which makes the effective batch size \(160\). We set the margin to \(2\).

| Model | T@1 | T@3 | T@5 | Inf. Time |
| --- | --- | --- | --- | --- |
| TF-IDF | 69.94 | 85.36 | 88.44 | 60 |
| GNN | 67.20 | 79.29 | 82.49 | 29 |
| BERT | 32.64 | 47.23 | 54.82 | 17 |
| GPT3 | 77.24 | **90.24** | **93.56** | 240 |
| CoSiNES | **80.40** | 88.68 | 90.98 | 11 |

Table 1: Experimental results on ESAppMod. T@1: top-1 retrieval accuracy. Inf. Time refers to total inference time in seconds.
### Learning Rate
To investigate how different learning rates affect the convergence of the Siamese model on our dataset, we run five-fold cross-validation with four learning rates (\(1\mathrm{e}{-}4\), \(5\mathrm{e}{-}5\), \(1\mathrm{e}{-}5\), and \(1\mathrm{e}{-}6\)) on the training data, as shown in Figure 3. For each learning rate, we experiment with different numbers of epochs, ranging from \(10\) to \(200\) with an interval of \(10\). The x-axis is the number of epochs for each experiment and the y-axis is the top-1 accuracy. Each dot in the figure shows the average five-fold top-1 accuracy, together with the standard deviation across the five folds. As we can see, the learning rate affects how fast and how stably the model converges, and most settings reach similar performance when trained for a sufficient number of epochs. This indicates that the Siamese model is robust with respect to the learning rate. We set the learning rate to 1e-5 as it tends to have a smaller performance deviation.
### Hybrid Triplet Mining
We propose a hybrid of batch-all and batch-hard triplet mining during training. Figure 4 shows the training process with \(200\) epochs with the above three learning rates, of which the first \(100\) epochs apply batch-all triplet sampling and the second \(100\) epochs employ batch-hard triplet sampling. The result shows that for the first batch-all \(100\) epochs, the training of \(1\mathrm{e}{-}4\) and \(5\mathrm{e}{-}5\) is unstable and performance oscillates greatly. When batch-hard mining comes into play, the training becomes much smoother and the performance continues to improve steadily for all three learning rates. This experiment shows that the hybrid mining scheme improves the top-1 accuracy by around 2% compared to the single-mining strategy.
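A minimal sketch of this hybrid schedule is shown below, reusing the `batch_all_loss` and `batch_hard_loss` functions sketched in Section 3.3; the optimizer choice and the assumption that labels are integer class ids are illustrative.

```python
import torch

def train_hybrid(encoder, batches, epochs=200, switch_epoch=100, lr=1e-5, margin=2.0):
    opt = torch.optim.AdamW(encoder.parameters(), lr=lr)
    for epoch in range(epochs):
        # First phase: batch-all mining; second phase: batch-hard mining.
        loss_fn = batch_all_loss if epoch < switch_epoch else batch_hard_loss
        for batch in batches:
            texts = [m for m, _ in batch]
            labels = torch.tensor([y for _, y in batch])   # integer class ids assumed
            loss = loss_fn(encoder(texts), labels, margin=margin)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder
```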
### Model Size
Normally, there is a tradeoff between model accuracy and efficiency. Therefore, we experiment with different sizes of BERT as backbone to find a balance between performance and running time. Figure 5 shows the inference time on the testing set with top-1 accuracy. The results show that CoSiNES with BERT-small achieves the best performance and fast inference time. Although the GPT3 embeddings achieve performance close to CoSiNES, running inference using the GPT3 OpenAI api is inefficient.
### ROC Curve
For a comprehensive comparison between our model and the baselines, we conduct an experiment to compute the receiver operating characteristic (ROC) curve. We add \(420\) previously unseen relevant but negative mentions from the technology domain that do not refer to any entities in the training set, and calculate the false positive rate under different thresholds. Figure 6 shows that our proposed model has a larger area under the curve, which demonstrates its superior performance over the baselines.
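A hedged sketch of this threshold sweep is given below: each query is accepted when its nearest-entity cosine distance falls below the threshold, unseen negative mentions that get accepted count as false positives, and the exact definition of a true positive (here, accepted and correctly linked) is our assumption about the evaluation.

```python
import numpy as np

def roc_points(pos_nn_dist, pos_correct, neg_nn_dist, thresholds):
    """pos_nn_dist: nearest-entity distances of in-KB test mentions;
    pos_correct: boolean array, True if the nearest entity is the right one;
    neg_nn_dist: nearest-entity distances of the unseen negative mentions."""
    points = []
    for t in thresholds:
        tpr = np.mean((pos_nn_dist <= t) & pos_correct)   # accepted and correctly linked
        fpr = np.mean(neg_nn_dist <= t)                    # unseen mention wrongly accepted
        points.append((fpr, tpr))
    return points

# Cosine distance lies in [0, 2], so sweep thresholds over that range.
thresholds = np.linspace(0.0, 2.0, 101)
```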
### Qualitative Error Analysis
We examine the predictions from CoSiNES on ESAppMod and categorize the following error types. Table 2 shows a few examples for each of these types.
**Misspelling**. When a mention has an error in its spelling, the tokens returned by PLMs could be very different, which leads to a mismatch. This is a challenge for PLMs that humans could easily handle, e.g., "Andriod" vs. "Android".
**Acronym**. Linking acronyms to full expressions seems to be a trivial task for humans; however, CoSiNES falls short of this capability. A remedy might be to design a task specialized for acronym recognition for PLMs.
**Multi-match**. This is the most common error, where multiple entities partially match the mention in the surface form. One way to address this issue is to enrich the training dataset with various mentions, which is not always within easy reach. Another potential approach is to integrate external knowledge about entities that the model can refer to.

Figure 3: Five-fold cross-validation with different learning rates on training data.
**No-match**. When the entity and mention have no match at all in the surface form, it is unlikely for the model to retrieve the correct target, especially when no context can be leveraged. Therefore, external knowledge could be particularly useful in this case.
## 6 Adaptation to Biomedical Domain
We show how to adapt our framework to the biomedical domain with minimal changes.
### Datasets
We consider four public datasets, ncbi, bc5cdr-disease, bc5cdr-chemical, and bc2gm, covering three types of entities: chemicals, diseases, and genes. Details and statistics regarding the datasets can be found in appendix A.
### Baselines
We compare our framework with three models.
**TF-IDF** Like the baseline for ESAppMod, we implement a straightforward TF-IDF model (Kalia et al., 2021) based on the knowledge database for each dataset and apply nearest-neighbor search for testing.
**BioBERT ranking** Use BioBERT (Lee et al., 2019) to encode concepts and mentions without fine-tuning. BioBERT is a large biomedical language representation model pretrained with PubMed abstracts and PMC full-text articles.
**BioSyn**BioSyn (Sung et al., 2020) is the state-of-the-art model for biomedical entity normalization with synonym marginalization and iterative candidate retrieval. The model leverages sparse embedding from TF-IDF and dense embedding from BioBERT.
### Domain Adaptation
For domain adaptation, it would be ideal if we could make no or only a few changes to the model architecture and training process. Therefore, we follow all experimental settings, such as learning rate, margin, contrastive group generation, and the hybrid training scheme, from the experiments on our proposed dataset. The most significant change is that, to adapt to the new domain, we use dmis-lab/biobert-v1.1 in replacement of the regular BERT as our backbone. We conduct all experiments on two NVIDIA A100 GPUs and adjust the batch size for each dataset based on the lengths of the mentions.

Figure 4: Hybrid triplet mining with different learning rates for five-fold cross validation.

Figure 5: Accuracy versus efficiency of the proposed models on the ESAppMod dataset. The CoSiNES line represents different sizes of BERT as backbone.

Figure 6: ROC curves on the ESAppMod dataset.

Footnote 4: [https://huggingface.co/dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1)
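In terms of code, the adaptation amounts to swapping the backbone checkpoint when building the encoder; the sketch below reuses the `SiameseEncoder` class sketched earlier and is only illustrative.

```python
# Same architecture and training scheme; only the pretrained backbone changes.
biomed_encoder = SiameseEncoder(backbone="dmis-lab/biobert-v1.1")
```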
### Results
The results are shown in Table 3. We reproduce the BioBERT experiment reported by [20] using the embedding of the [CLS] token as the representation. The results are almost identical. The minor differences might be due to different versions of the pretrained language model.
The performance of BioSyn reported by Sung et al. (2020) is high. However, as pointed out by Tutubalina et al. (2020), the original testing splits used by Sung et al. (2020) have significant overlap of mentions with the knowledge base. Therefore, Tutubalina et al. removed all the duplicates and produced refined testing splits. We report the performance of BioSyn on these refined splits as given by them.
The results show that CoSiNES significantly outperforms the baselines of TF-IDF and BioBERT ranking in terms of top-k accuracy. CoSiNES achieves competitive results with BioSyn on all the datasets. Given that we didn't change any hyperparameters or architectures of CoSiNES, and directly applied the framework to new domains, we demonstrate the cross-domain applicability of CoSiNES.
## 7 Conclusion
We propose a generic, scalable, and adaptable framework CoSiNES for the entity standardization task, which maps various mentions to standard entities in the knowledge base. We first construct a new dataset ESAppMod in the technology domain and demonstrate the superiority of our framework over four other models. We conduct comprehensive experiments regarding batch size, learning rate, margin, loss calculation and different sizes of BERT, with our designed contrastive group generation and hybrid triplet mining, and show that the framework is rather robust with respect to hyper-parameters. With the optimal setting on our dataset, we further show that our model can be easily adapted to new domains with minimal changes by achieving competitive performance on four benchmark datasets from the biomedical domain covering three different types of entities.
After examining the errors produced by the framework on our proposed dataset, we categorize four different types of errors and defer the following directions to future work: (1) Integrating the framework with external knowledge. For multi-match errors, where multiple entities partially match the mention, it would be ambiguous to retrieve the target entity. For no-match errors, external knowledge could provide extra information. (2) Adversarial training for misspellings.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Error type & Mention & Target entity & Top-5 retrieved entities \\ \hline \multirow{2}{*}{Misspelling} & \multirow{2}{*}{\begin{tabular}{l} Antirod \\ Visual Basic \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} Android \\ Visual Basic \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} IBM ILOG Views / Oracle Real-Time Decisions (RTD) / BeOS / Ingres / etc \\ ClarifyClear Basic / BASIC / IBM Basic Assembly Language / Pervasive PSQL / ADABAS \\ \end{tabular} } \\ \hline \multirow{2}{*}{Acronym} & NES & \multirow{2}{*}{\begin{tabular}{l} Netscape Enterprise Server \\ IBM Integration Bus \\ \end{tabular} } & Mobile / SAS / OS / Powershell / MiniO \\ & & IBM Integration Bus & Visual Basic / VBN.NET / ClarifyClear Basic / IBS\({}^{*}\) / Ada \\ \hline \multirow{2}{*}{Multi-match} & \multirow{2}{*}{\begin{tabular}{l} Cordova Android \\ MO 9.1 \\ Open Libtery \\ \end{tabular} } & Apache Cordova & \multirow{2}{*}{\begin{tabular}{l} Android / Apache Cordova \\ IR Websphere MQ \\ Open Libtery \\ \end{tabular} } & Android / Apache Cordova / Cisco / IBM Websphere MQ / Qbici / IBM Websphere MQ Telemetry \\ & & OpenIbQAD / WebSphere Liberty & / Virtual Appearance / OpenVPN / Microsoft System Center Endpoint Protection \\ \hline \multirow{2}{*}{No-match} & \multirow{2}{*}{\begin{tabular}{l} AS400 \\ EAP \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} IBM Power Systems \\ JBoss \\ \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{l} DB400 / Asterisk / Pijnware 76 / EAServer / Microsoft Excel \\ XAMPP / JS Secure Web Gateway Services / JavaJava Web Start / \\ UtilDev Web Server Pro (UWS) / A-Auto Job Scheduling Software \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples for each type of errors on ESAppMod.
| Model | ncbi | bc5cdr-d | bc5cdr-c | bc2gm |
| --- | --- | --- | --- | --- |
| TF-IDF@1 | 59.31 | 61.34 | 71.76 | 67.01 |
| TF-IDF@3 | 69.61 | 69.41 | 76.24 | 76.55 |
| TF-IDF@5 | 74.02 | 73.21 | 78.59 | 79.90 |
| BioBERT@1 | 47.55 | 64.23 | 79.55 | 68.12 |
| BioBERT@3 | 57.35 | 74.89 | 81.65 | 74.11 |
| BioBERT@5 | 61.77 | 79.45 | 82.82 | 76.04 |
| BioSyn@1 | 72.5 | **74.1** | **83.8** | **85.8** |
| BioSyn@3 | - | - | - | - |
| BioSyn@5 | - | - | - | - |
| CoSiNES@1 | **72.55** | 73.52 | 81.65 | **85.79** |
| CoSiNES@3 | 80.39 | 78.39 | 85.88 | 90.66 |
| CoSiNES@5 | 81.37 | 80.52 | 87.76 | 91.68 |

Table 3: Results on four datasets from the biomedical domain. @1: top-1 accuracy. Here, bc5cdr-d means bc5cdr-disease and bc5cdr-c means bc5cdr-chemical.
For technical terms, misspelling could lead to completely different tokenization of the mentions. (3) Constructing new training data or augmenting the existing training dataset with acronym samples. The pretrained language models are not specialized in recognizing acronyms; therefore, it would be worthwhile endowing PLMs with such capability.
## Limitations
We focus on resolving various mentions from different domains. Although we have tested our framework on multiple datasets, it relies on a human-annotated dataset, and effort should be made to investigate how the model performs on emerging domains without human-annotated data. Our model works with mentions that have already been extracted from raw text. It would be more practical if the model could work with raw text directly and interact with another mention-extraction module. The performance of the model is largely affected by the surface form of the mentions; although our framework is robust to variations in the surface form, it would be beneficial to further investigate how adversarial perturbations of the mentions could affect the behavior of the framework.
## Ethics Statement
The domain and data we work with don't involve any personal information and are all publicly available. However, as the work could be potentially applied in the medical domain to resolve mentions of disease, discretion is advised when any medical decisions or diagnostics are made with the assistance of the model.
|
2305.15024 | ChatAgri: Exploring Potentials of ChatGPT on Cross-linguistic
Agricultural Text Classification | In the era of sustainable smart agriculture, a massive amount of agricultural
news text is being posted on the Internet, in which massive agricultural
knowledge has been accumulated. In this context, it is urgent to explore
effective text classification techniques for users to access the required
agricultural knowledge with high efficiency. Mainstream deep learning
approaches employing fine-tuning strategies on pre-trained language models
(PLMs), have demonstrated remarkable performance gains over the past few years.
Nonetheless, these methods still face many drawbacks that are complex to solve,
including: 1. Limited agricultural training data due to the expensive-cost and
labour-intensive annotation; 2. Poor domain transferability, especially of
cross-linguistic ability; 3. Complex and expensive large models
deployment.Inspired by the extraordinary success brought by the recent ChatGPT
(e.g. GPT-3.5, GPT-4), in this work, we systematically investigate and explore
the capability and utilization of ChatGPT applying to the agricultural
informatization field. ....(shown in article).... Code has been released on
Github
https://github.com/albert-jin/agricultural_textual_classification_ChatGPT. | Biao Zhao, Weiqiang Jin, Javier Del Ser, Guang Yang | 2023-05-24T11:06:23Z | http://arxiv.org/abs/2305.15024v1 | # ChatAgri: Exploring Potentials of ChatGPT on Cross-linguistic Agricultural Text Classification
# ChatAgri: Exploring Potentials of ChatGPT on Cross-linguistic Agricultural Text Classification
Biao Zhao
[email protected]
Weiqiang Jin
[email protected]
Javier Del Ser
[email protected]
Guang Yang
[email protected] School of Information and Communications Engineering, Xi'an Jiaotong University, Innovation Harbour, Xi'an, 710049, Shaanxi, China TECNALIA, Basque Research & Technology Alliance (BRTA), Dario, 48160, Spain Bioengineering, Imperial College London, London, SW7 2BX, UK Imperial-X, Imperial College London, London, W12 7SL, UK National Heart and Lung Institute, Imperial College London, London, SW3 6LY, UK
###### Abstract
In the era of sustainable smart agriculture, a massive amount of agricultural news text is being posted on the Internet, in which massive agricultural knowledge has been accumulated. In this context, it is urgent to explore effective text classification techniques for users to access the required agricultural knowledge with high efficiency. Mainstream deep learning approaches employing fine-tuning strategies on pre-trained language models (PLMs), have demonstrated remarkable performance gains over the past few years. Nonetheless, these methods still face many drawbacks that are complex to solve, including: 1. Limited agricultural training data due to the expensive-cost and labour-intensive annotation; 2. Poor domain transferability, especially of cross-linguistic ability; 3. Complex and expensive large models deployment. Inspired by the extraordinary success brought by the recent ChatGPT (e.g. GPT-3.5, GPT-4), in this work, we systematically investigate and explore the capability and utilization of ChatGPT applying to the agricultural informatization field. Specifically, we have thoroughly explored various attempts to maximize the potentials of ChatGPT by considering various crucial factors, including prompt construction, answer parsing, and different ChatGPT variants. Furthermore, we conduct a preliminary comparative study on ChatGPT, PLMs-based fine-tuning methods, and PLMs-based prompt-tuning methods. A series of empirical results demonstrate that ChatGPT has effectively addressed the aforementioned research challenges and bottlenecks, which can be regarded as an ideal solution for agricultural text classification. Moreover, compared with existing PLM-based fine-tuning methods, ChatGPT achieves comparable performance even without fine-tuning on any agricultural data samples. We hope our preliminary study could prompt the emergence of a general-purposed AI paradigm for agricultural text processing.
keywords: Agricultural text classification, Very large pre-trained language model, Generative Pre-trained Transformer (GPT), ChatGPT and GPT-4 +
Footnote †: journal: Neurocomputing
## 1 Introduction
With the rapid development of the sustainable smart agriculture ecosystem, the quantity of news content related to agricultural themes on the Internet has undergone an explosive increase. Such a vast quantity of unstructured data already contains latent historical knowledge, helping us precisely study natural hazards and mitigate potential agricultural risks. Artificial intelligence-based agricultural text classification enables managing this massive volume of Internet agricultural news automatically and makes these unstructured data easily indexable, which is a crucial step for agricultural digitization and the agricultural Internet of Things.
In recent years, these mainstream agricultural document processing techniques including text classification generally rely on various deep representation learning-based methods, especially on approaches based on pre-trained language models (PLMs), including BERT,
BART, and T5 [1; 2; 3]. Xu _et al._[4] proposed a novel model, namely agricultural exports time series-long short-term memory (AETS-LSTM), for predicting the rise and fall of agricultural exports. Cao _et al._[5] utilized a BERT model with symmetrical structure to analyze the sentiment tendency of Internet consumers' reviews of agricultural products. Leong _et al._[6] employed a text-level character region awareness model (CRAFT) for recognizing and extracting the essential information from agricultural regulatory documents and certificates. Jiang _et al._[7] proposed a BERT-based text classification network for automatically classifying French bulletins to make these data easily indexable. In addition to the aforementioned research efforts, these deep representation learning-based approaches have held great promise for almost all agricultural information applications.
Unfortunately, these PLMs-based fine-tuning solutions inevitably encounter several challenging issues in the practical processes of model development and application deployment. On the one hand, insufficient and poor-quality supervised training data can greatly decrease model performance, whereas acquiring enough high-quality annotated data remains time-consuming and labour-intensive; on the other hand, even if trained properly on sufficient data, the inherent characteristics of supervised learning models limit their generalization capabilities to the specific contexts related to the supervised corpus. In other words, when transplanted to new domains or new tasks, their limitations become evident, lacking a certain degree of scenario transferability, particularly cross-linguistic capacity. Moreover, due to the extremely large parameter volumes of PLMs, the corresponding deployment is complex and power-intensive, requiring high-performance equipment such as massively parallel computing hardware (GPUs and TPUs). For example, the largest T5 model has over 11 billion parameters, which is 100 times the number of parameters of the BERT-base model. This shows that mainstream PLMs-based agricultural text classification methods fall far short of the standards for achieving General Purpose Artificial Intelligence (GPAI) in the future.
These limitations and deficiencies mean that existing agricultural document processing techniques cannot handle many application scenarios well, especially agricultural text classification. Recently, the artificial intelligence ChatGPT-family chatbots, proposed by the OpenAI foundation, have caused a groundbreaking revolution in the academic community, especially for natural language processing (NLP) tasks. ChatGPT is essentially a powerful very large pre-trained language model for dialogue based on the Transformer architecture [8], utilizing a larger corpus, higher computational power, and an unprecedented number of network parameters1. What is inspiring is that, unlike previous intelligent chat robots, ChatGPT can provide smooth and comprehensive responses to various complex and professional human questions. For instance, ChatGPT can perform tasks such as multilingual translation, poetry generation, and code generation based on specific requirements [9; 10]. Thus, ChatGPT has rapidly exhibited remarkable language comprehension and generation abilities, attracting ever-increasing attention in various cross-disciplinary research areas that the NLP community intersects with, such as radiology diagnosis [11] and sentiment analysis of surgery disease [12; 13].
Footnote 1: You can access ChatGPT by visiting the following URL: [https://chat.openai.com/chat](https://chat.openai.com/chat) [Accessed on 2023.05].
After experiencing ChatGPT's universal and powerful capabilities, it is natural for us to wonder how much potential ChatGPT can bring to the production management process of agricultural products for optimizing sustainable agricultural applications. As shown in Fig. 1, when asked about the potential applications of GPT-3.5 (a standard model in the ChatGPT family) in agriculture, the model replied that it is capable of performing tasks such as weather forecasting, pest and disease identification, and market analysis (among others).
Inspired by the potential applications of ChatGPT in the field of smart agriculture, it is our belief that the community is much in need of principled explorations to determine how much ChatGPT can contribute to the optimization of sustainable agricultural practices. With that concern in mind, we have decided to delve into the potential of ChatGPT by focusing on the concise classification of agricultural text in this work.

Figure 1: Valuable suggestions advised by ChatGPT for assisting farmers and market regulators in better governing agricultural affairs (Query Date: 2023.3.16).
By doing so, our experiments mainly investigate the potential power of ChatGPT (i.e. GPT-3.5 by default) [14] and its extension (i.e. GPT-4) [10] for classifying agricultural-related documents. Notably, along with the proposed ChatAgri, this paper also provides a brand-new paradigm that is distinguished from existing methods. Through a series of comparative experiments of ChatAgri with a range of mainstream text classification models, including classic fine-tuned PLMs [15; 16] and prompt learning based on auto-regressive generative PLMs [17; 18; 19], we systematically evaluated and investigated the superiority of ChatGPT in agricultural text classification tasks.
Furthermore, we have investigated extensive literature related to ChatGPT-based question answering (QA) [20; 21; 22; 23] and the prompt learning scheme [17; 24; 25], and arrived at the following conclusion: most language understanding tasks based on ChatGPT can be categorized as a new form of prompt learning based on PLMs. Specifically, regarding the adopted ChatGPT interface as a parameter-frozen large-scale PLM, the overall procedure is extremely similar to the prompt-tuning paradigm described in the survey of Liu _et al._[17]. Fig. 2 gives a clear illustration of the major similarities and differences between the ChatGPT-based NLP paradigm (part (b)) and the MLM prompt-tuning paradigm (part (c)), through a typical example of the agricultural food comment sentiment analysis task. As depicted in part (c) of Fig. 2, the MLM prompt-tuning paradigm can be divided into three primary procedures: template engineering, pre-trained language model reasoning, and answer mapping engineering [17]. As shown in part (b) of Fig. 2, the general NLP research related to ChatGPT can be organized into the following phases in our experiments [11; 26]: 1) prompting question construction engineering; 2) ChatGPT Q&A inference; 3) answer normalization engineering (also called answer alignment). Thus, several core factors were considered for optimization:
* 1). Since interacting with ChatGPT involves providing instructions in natural language, based on previous ChatGPT prompting works [27; 22; 21], we have designed several appropriate task-specific inquiries to intuitively trigger the understanding capability of ChatGPT;
* 2). As the textual generations of ChatGPT are essentially human-like natural language, they differ greatly across specific tasks. Therefore, an accurate label mapping strategy from ChatGPT outputs to the final classified categories needs to be developed. In our experiments, we devised two novel answer mapping strategies for this critical answer alignment step.
Figure 2: The paradigm comparison of the ChatGPT-based NLP solutions and existing prompt learning paradigm using an agricultural sentiment analysis example. Part. (a) denotes the task prototype of the agricultural sentiment analysis; Part. (b) denotes the standard workflow of ChatGPT-based approaches; and Part. (c) denotes the standard workflow of Masked LM prompt-tuning methods.
We evaluate extensive data in various agricultural subfields, sourced mainly from Internet news covering topics ranging from insect pests and natural hazards to agricultural market comments. Further, even when multilingual corpora are tested, experiments validate that the proposed ChatAgri still features significant transfer effectiveness in cross-linguistic scenarios.
In summary, our experiments provide a preliminary study of ChatGPT on agricultural text classification to gain a better understanding of it, and report a systematic analysis based on the corresponding empirical results. We believe that by exploring how ChatGPT can contribute to agricultural production and management through text classification tasks such as pest and disease identification, agricultural news categorization, and market comment analysis, we can demonstrate the feasibility of ChatGPT in advancing agricultural practices, thereby paving the way for a more efficient and sustainable smart agriculture.
The novel ingredients of this work can be summarized as follows:
* Motivated by the various application progress of very large pre-trained language models represented by ChatGPT, we conduct a preliminary study exploring the potential of ChatGPT in the agricultural text classification task and thus propose a ChatGPT-based solution for agricultural text classification, namely ChatAgri;
* Evaluated on several multi-linguistic datasets, ChatAgri achieves competitive performance compared to existing PLM-based fine-tuning approaches, showing a superior semantic understanding ability. Through several specific case analyses, it even surprisingly produces an intelligent reasoning chain;
* The zero-shot learning experiments demonstrate the great potential of ChatAgri in agricultural text classification, compared to existing PLM-based fine-tuning approaches, which require high-quality supervised data along with time-consuming, labour-intensive annotation and expensive knowledge from agricultural domain experts;
* Multi-linguistic experiments discussed in this work demonstrate the excellent domain transferability of ChatAgri, by which the model can adapt quickly to different agricultural applications, a fundamental step towards accelerating future General Purpose AI (GPAI);
* ChatAgri, relying only on a network interface and minimal hardware requirements, subverts the mainstream complex and power-intensive PLM-based methods, which holds great promise for general and low-cost artificial intelligence techniques for future smart agricultural applications;
* To encourage further research of smart agricultural applications by leveraging ChatGPT, we released the codes of ChatAgri on Github2. Footnote 2: Code has been released on Github: [https://github.com/albert-jin/agricultural_textual_classification_ChatGPT](https://github.com/albert-jin/agricultural_textual_classification_ChatGPT) [Accessed on 2023.05].
The remainder of this paper is organized as follows: Section 2 provides an overview of the recent literature in related fields, with a focus on recent research on the agricultural text classification task, ChatGPT, and pre-trained language model-based NLP techniques. Section 3 presents a description of the whole ChatAgri framework, including a detailed algorithmic description. In Sections 4 and 5, we conduct a comprehensive analysis of the comparison experiments between ChatAgri and several mainstream PLM-based methods, along with various ablation studies. Finally, Section 6 offers a concise summary of the primary contributions of our research and outlines future prospects for further sustainable smart agriculture development based on our findings.
## 2 Related Work
In this section, we will review the related literature on accurately classifying cross-linguistic agricultural texts, recent advancements and applications in ChatGPT and its extensions, as well as PLM-based fine-tuning and prompt-tuning approaches in addressing the challenges of agricultural text classification.
### Agricultural Text Classification
Over the past decade, the primary machine learning models (e.g. decision tree, CNN, LSTM, and GRU) [4] have been the dominant approaches in research on the agricultural document classification.
Azeze _et al._[28] used the support vector machine (SVM) and decision tree induction classifiers to complete the regional agricultural land texture classification. Li _et al._[29] simultaneously utilized the Bi-LSTM and the attention mechanism to further dynamically enrich
the extracted multi-source semantic features, which effectively improves the performance of agricultural text classification. Dunnmon _et al._[30] leveraged CNNs to predict agricultural Twitter feeds from farming communities to forecast food security indicators, and demonstrated that CNNs are widely superior to RNNs in sentiment classification of agriculturally-relevant tweets.
Since the introduction of large models such as BERT [1] and GPT [31], many NLP tasks have achieved significant performance improvements, and these models have gradually replaced traditional machine learning approaches [26]. Compared to traditional machine learning methods, large pre-trained language models are better equipped to handle complex scenarios and have received widespread attention in both academic and industrial settings.
Shi _et al._[32] employed BERT to identify the most representative information from unlabeled sources, which was manually labeled to construct corpora of agricultural news from diversified topics, enhancing the efficiency of the labeling process and ultimately improving the quality of the constructed corpora. Jiang _et al._[7] automatically classify French plant health bulletins to make these data easily searchable by fine-tuning BERT. Leong _et al._[6] developed an automatic optical character recognition system for the categorization and classification of agricultural regulatory documents. To tackle the imbalance between supply and demand in the agricultural market, Cao _et al._[5] introduced an improved BERT-based sentiment analysis model for evaluating agricultural products through Internet reviews. The proposed BERT model with symmetrical structure accurately identifies the emotional tendencies of consumers, helping consumers evaluate the quality of agricultural products and helping agricultural enterprises optimize and upgrade their products.
### Traditional Machine Learning, PLM-based Fine-tuning, and Prompt-tuning
For a significant period of time in the past, the predominant approach for addressing agricultural text processing problems was based on traditional machine learning methodologies. Xu _et al._[33] proposed a novel method to predict the rise and fall of agricultural exports, called agricultural exports time series-long short-term memory (AETS-LSTM). AETS-LSTM achieves improved performance in predicting the tendencies of agricultural exports, which is an effective way to help agribusiness operators make better evaluations and adjustment policies. To identify the pest and disease symptoms of rice farming, Costa _et al._[34] built a knowledge-based system using the Jaccard similarity coefficient (JSC), which performs tokenizing, filtering, and Porter stemming to extract the critical information describing pest and disease problems.
Feature engineering-based methods were limited by their inability to capture the complexity and nuances of natural language, particularly in semantically complex situations [26]. With the emergence of PLMs [31; 35; 1; 2], a powerful technique that revolutionized the field of NLP, many traditional methods [7; 36; 30] have been superseded [8]. Since then, the PLM-based fine-tuning paradigm has become the mainstream learning technique for various agricultural information processing tasks [37]. The PLM-based fine-tuning paradigm works by introducing additional network parameters and fine-tuning PLMs on downstream tasks using task-specific objective functions. Cao _et al._[5] developed an improved BERT-based model to extract complete semantic information for the task of sentiment analysis in agricultural product reviews. The goal was to assist consumers in making informed purchasing decisions. They utilized TensorFlow to fine-tune all the parameters of BERT and its downstream classifier to obtain a well-optimized model. Jin _et al._[16] proposed a dictionary knowledge infused network, DictABSA, for sentiment analysis and agricultural text classification.
Nevertheless, these PLM-based fine-tuned models may not generalize well to new scenarios and require a significant amount of annotated data, making them hard to develop quickly and deploy easily. As a result, the role of traditional PLM-based fine-tuning has gradually diminished in NLP, being replaced by a more promising learning paradigm known as "prompt learning" or "prompt-tuning", according to a recent survey [17]. Different from the PLM-based fine-tuning paradigm, prompt-tuning follows the original LM training objective and adapts the downstream task to the PLM itself with the help of constructed prompting templates, thus performing especially well in few-shot or even zero-shot scenarios. Lyu _et al._[11] investigate the effect of different optimized prompts on the performance of improved plain-language translations of radiology reports. Liu _et al._[24] proposed _P-Tuning_, a novel method that automatically searches for prompts in the continuous space to improve the performance of PLMs. It uses a few continuous free parameters as prompts and optimizes them using gradient descent. Experiments proved that _P-Tuning_ brings substantial improvements to GPTs and even outperforms BERT models to some extent. Liu _et al._[25] also introduced _P-Tuning v2_, an enhanced continuous prompt optimization method based on _P-Tuning_[24].
_P-Tuning v2_ represents a significant improvement over _P-Tuning_ by using continuous prompts for every layer of the PLM, rather than just the input layer, increasing the capacity of continuous prompts and helping to close the gap to fine-tuning across the small models and hard tasks. Hu _et al._[38] devised a novel knowledge enhanced method for text classification, namely _knowledge prompt-tuning_ (KPT). It incorporates rich external knowledge from knowledge bases (KBs) into the prompt verbalizer to better stimulate the internal knowledge in PLMs.
### ChatGPT
ChatGPT is a leading conversational language model developed by OpenAI, which serves as an expert in all fields with omnipotent and omniscient knowledge. ChatGPT represents a disruptive revolution across numerous research domains, extending beyond NLP, providing a user-friendly interface that grants the general public unprecedented access to the capabilities of large language models. ChatGPT, also known as GPT-3.5 and built upon GPT-3 [14], serves as a conversational robot capable of comprehending intricate instructions and producing high-quality replies across diverse scenarios. ChatGPT, acting as a valuable tool, has made a significant contribution to many application scenarios and has opened up new possibilities for virtual assistants. In terms of model structure, ChatGPT [10; 9] can be regarded as a quantum leap characterized by several distinctive features that stand out from previous NLP models such as BERT [1], BART [2], and T5 [3]. These can be summarized as: a very large language model using over billions of parameters, the capability of chain-of-thought prompting, and training with reinforcement learning from human feedback (RLHF).
As millions of users continue to tap into these language models, countless new use cases emerge, opening the door to a flurry of ChatGPT potentials. Based on a recent empirical study [27; 26], ChatGPT has shown remarkable proficiency in multilingual translation, particularly for high-resource languages such as mutual translation between various European and American languages. Furthermore, this study found that ChatGPT performs similarly to other prominent translation services like Tencent TranSmart, DeepL Translate, and Google Translate. What's even more impressive is that ChatGPT can be used in code debugging and even code generation [10]. Based on Haque _et al._[20], ChatGPT was evaluated on its capability to provide code snippets that adhered to the syntax and semantics of programming languages such as Python, Java, and JavaScript. Bang and colleagues [39] utilized several code formats, including Python Turtle graphics and HTML Canvas, as tools for the multi-modal task of generating images from text. These researchers demonstrate that ChatGPT was able to generate superior-quality code based on brief business requirements expressed in natural language, overwhelmingly surpassing other code modification techniques.
The growing fascination with ChatGPT has spurred a wide range of investigations into the myriad of possibilities presented by this groundbreaking language model, particularly in the agricultural field. Gao _et al._[21] investigate the feasibility of using ChatGPT for event extraction, highlighting the difficulties posed by event extraction due to its complexity and the need for a comprehensive set of instructions. Wei _et al._[23] designed a universal zero-shot information extraction framework via chatting with ChatGPT, namely ChatIE, handling NLP tasks including named entity recognition, event extraction, and relation extraction. Specifically, ChatIE is devised as a decomposed multi-stage process involving several turns of QA: the first stage discovers the element types present in the sentence through one turn of QA, and the second stage finds the elements to fill the corresponding element slots through multiple QA turns.
Furthermore, OpenAI [10] released GPT-4, an advanced, large-scale, multi-modal generative PLM, in early March of this year. It exhibits significant improvements over ChatGPT (GPT-3.5) in terms of multi-modal image and text interaction, a larger input character limit, and more accurate semantic understanding. GPT-4 holds immense promise for future diverse applications and is regarded as a significant stride toward achieving general-purpose technology.
Moreover, the official investigation of GPT-4 [9] confirms the hypothesis that these technologies can have a substantial effect on a wide swath of occupations, especially higher-wage occupations that face greater exposure to PLMs. Recently, an open letter, "Pause Giant AI Experiments", signed by numerous prominent researchers has called for a halt to the successive development of GPT-5 due to GPT-4's perceived terrifying power and its potential risks to society [10]. Even Sam Altman, the CEO of OpenAI, has also signed this open letter, demonstrating that the future impact of General Purpose AI, represented by ChatGPT, on various industries will be revolutionary and profoundly impressive.
## 3 ChatAgri: ChatGPT-based Agricultural Text Classification
### Methodology Overview
Focusing on investigating the feasibility of applying ChatGPT to agricultural text classification, ChatAgri, one of the preliminary studies of ChatGPT-based agricultural applications, is constructed in this paper, along with a series of systematic and exploratory experimental analyses.
According to our investigations, no existing research work had systematically applied ChatGPT to the text classification task before ChatAgri was proposed. To fill this gap, the question of how to define the corresponding general workflow for ChatGPT-based agricultural text classification is further discussed. Specifically, after reviewing abundant recent literature, as shown in Fig. 3, we deem that almost all ChatGPT-assisted applications can be divided into three phases:
* Prompting Question Construction: The first stage which focuses on providing appropriate prompting strategies to be fed into ChatGPT;
* ChatGPT Q&A Inference: The second stage, the reasoning procedure of ChatGPT Q&A, which is opaque to us and can be regarded as a black box;
* Answer Normalization or Alignment: The third stage transferring the natural language intermediate response to the target label in the pre-defined categories.
Among these steps, in addition to the Q&A inference conducted by ChatGPT, a static reasoning procedure that we cannot modify, the prompt construction engineering and answer alignment engineering can be further optimized during our experiments. From a macro perspective, ChatAgri is a pipeline structure in which each procedure influences the final prediction performance to a certain extent, including the quality of constructed prompts, the selected ChatGPT version, and the priority of adopted answer mapping strategies. Thus, the next subsections introduce multiple novel solutions utilized in our experiments to fully exert the enormous potential and superiority of ChatGPT in ChatAgri.
Furthermore, as opposed to text classification in the universal domain, agricultural text classification acts as a domain-specific research branch due to the additional requirements of domain expertise. Another crucial factor, domain-specificity, should therefore also be taken into consideration with corresponding customized strategies.

Figure 3: The framework of ChatAgri, illustrated by a typical example from the agricultural natural disaster dataset, French Plant Health Bulletin. First (left), several prompting construction strategies were applied to generate prompts, and the ChatGPT question is constituted by integrating these prompts with the original sentence; second (center), ChatGPT provides a response based on the inputs; finally (right), the answer alignment strategies were devised to classify the intermediate answer into pre-defined categories.
The following subsections successively elaborate the specific solutions used throughout the ChatAgri experiments.
### Prompt Question Construction
It is widely acknowledged that prompt engineering is a cumbersome art that requires extensive experience and manual trial-and-error [17; 26]. To design suitable prompts that trigger the sentence classification ability of ChatGPT, we surveyed pioneering works discussing how to generate optimized ChatGPT prompting questions [40; 21; 22]. Specifically, as depicted on the left of Fig. 3, the prompt generation strategies adopted in these experiments include: 1) manually defined prompts; 2) prompts triggered from ChatGPT; 3) prompts based on zero-shot similarity comparisons; and 4) prompts based on Chain-of-Thought (CoT). These prompt generation strategies are discussed in the following subsections.
#### 3.2.1 Manually Defined Prompts
Following general communication habits, we manually elaborated several prompting templates; Table 1 displays part of the designed prompts. Note that ChatGPT must be provided with two pieces of information in an appropriate way: the original textual context and the pre-defined categories. For simplicity, we insert two extra slots into the prompts to carry these mentions, namely [SENT] (slot for the sentence) and [CATE] (slot for the categories).
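To make the slot-filling mechanism concrete, the following minimal Python sketch shows how a manually defined template could be instantiated; the helper name `build_prompt` and the example sentence are illustrative choices of ours, not part of the original ChatAgri code, and the template text follows the style of Table 1.

```python
# Minimal illustrative sketch: fill the [SENT] and [CATE] slots of a
# manually defined prompting template (helper name is hypothetical).
from typing import List

TEMPLATE = (
    "Classify the following sentence into one of the given categories: [CATE]\n"
    "Sentence: [SENT]\n"
    "Category:\n"
)

def build_prompt(sentence: str, categories: List[str]) -> str:
    """Replace the two slots with the input text and the label set."""
    return (TEMPLATE
            .replace("[CATE]", ", ".join(categories))
            .replace("[SENT]", sentence))

print(build_prompt(
    "Aphids were observed on the wheat leaves after the rain.",
    ["Bioagressor", "Disease", "Others"],
))
```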
To conduct the subsequent comparison experiments, we evaluate the effect of each candidate prompt and select the best one. Formally, we employ a data sampling-based evaluation approach over the candidate prompts [39]. Concretely, we randomly selected a fixed number of samples (100 by default in our experiments) from the Twitter Natural Hazards dataset and then measured the accuracy of each prompt on this subset. After overall comparison, the prompt shown in Fig. 4 was selected as the most suitable manually defined prompt for the subsequent experiments.
Moreover, note that we add an extra command, "_Please only answer the category._", to the prompts, asking ChatGPT not to generate redundant explanation around its reply, which might otherwise disrupt the subsequent label decision. This factor has also been taken into account for the subsequent prompting methods.
#### 3.2.2 ChatGPT Triggered Prompts
Drawing inspiration from the relevant literature [40; 22], we posit that asking ChatGPT itself could yield valuable insights into the generation of high-quality templates. Thus, we seek inspiration from ChatGPT by asking it for recommendations on template generation. Note that a similar preliminary study by Zhong _et al._ [22] suggests that task-specific prompts can be triggered by using the following human inquiry:
```
> Provide five concise prompts or templates that can make you deal with the [x] task.
```
where the slot [x] denotes the specific task type. Experiments show that this strategy performs well in most scenarios.
Correspondingly, as shown in Fig. 5, our request is intuitively constructed as follows:
```
> Provide five concise prompts or templates that can make you deal with the agricultural text classification task.
```
| No. | Prompting template |
| --- | --- |
| 1 | Classify the following sentence into one of the given categories: [CATE]\nSentence: [SENT]\nCategory: \n[Res] |
| 2 | Which categories do you think sentence: \n[SENT]\nbelongs to, out of [CATE]\n[Res] |
| 3 | ... |

Table 1: The partial manually devised prompts. [Res] denotes the response provided by ChatGPT.
Figure 4: The adopted prompt which is selected through the subset evaluation.
Afterwards, ChatGPT naturally answers with several candidate responses, as depicted in Fig. 5. The generated prompts appear sensible and semantically consistent, while exhibiting some noticeable differences in their individual formats.

To this end, following the sampling-based evaluation method described above, we select the best-performing prompt to represent the ChatGPT triggered prompts in the subsequent comparison experiments, which is shown as follows:
```
> Classify the agricultural text: [SENT] according to its main topic [CATE].
```
#### 3.2.3 Zero-Shot Similarity Prompts
Motivated by previous few/zero-shot learning works that utilize the meta-learning paradigm [41; 23], we devised a novel prompting strategy upon it, called zero-shot similarity-based prompting.
Typically, few-shot object classification is performed by leveraging samples and classifiers from similar classes through some distance measure or similarity function, such as cosine similarity or squared \(\ell_{2}\) distance [41]. Consider, for example, few-shot image classification. First, given an image to be classified, one representative image is chosen for each category. Then, all images are embedded into the same low-dimensional space using an embedding network, such as a siamese network, prototypical network, or matching network. Finally, the similarities between the image to be classified and the representative images of all categories are used to decide the label.
Returning to agricultural text classification, the ChatGPT interface can be regarded as a special similarity measure for evaluating the relationship between two sentences. All these procedures are conducted by performing one-turn or multi-turn QA. Specifically, we designed two QA modes: end-to-end direct QA-based similarity evaluation and progressive comparison QAs-based similarity evaluation.
* **End-to-end direct QA-based:** The most straightforward way is to directly ask ChatGPT which sentence is most similar to the sentence to be classified. We adopt the following prompt during experiments: "> Given sentence S: [SENT1], which sentence of A: [SENT2], B: [SENT3], ... do you think is most similar to sentence S? A, B, ..., or C?" In this manner, the text category can be determined directly. As seen in Fig. 6, the target sentence is classified into the category of sentence **C** based on only a one-turn QA.
* **Progressive comparison QAs-based:** Analogous to the bubble sort algorithm, which compares pairs of elements at a time and carries the result forward to successive comparisons, we apply pairwise comparisons to determine text similarity. The pairwise QA prompt we use is illustrated in Fig. 7.
Figure 5: Candidate prompt templates triggered by requests to ChatGPT (Model: GPT-3.5, Query Date: 2023.4.02).
Figure 6: The end-to-end direct similarity measurement QA-based prompting method for text classification.
A typical example of the three-class problem is given in Fig. 7. Based on two turns of QA, the target sentence is classified into the category of sentence **A** through the topic similarity comparison in the second QA stage. To our knowledge, we are the first to utilize such a multi-stage similarity comparison approach for the text classification task.
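A minimal sketch of the progressive comparison idea is given below; `ask_chatgpt` is a placeholder for the ChatGPT Q&A call described in Section 3.3, and the pairwise question wording is only illustrative, not the exact prompt used in our experiments.

```python
# Illustrative sketch of progressive (pairwise) similarity-based classification;
# ask_chatgpt(question) is assumed to return ChatGPT's textual reply.
from typing import Callable, Dict

def progressive_classify(sentence: str,
                         pivots: Dict[str, str],
                         ask_chatgpt: Callable[[str], str]) -> str:
    """Keep the winner of each pairwise topic-similarity comparison,
    bubble-sort style, until one category remains."""
    labels = list(pivots)
    best = labels[0]
    for challenger in labels[1:]:
        question = (
            f"Given sentence S: {sentence}, which sentence is more similar "
            f"in topic to S?\nA: {pivots[best]}\nB: {pivots[challenger]}\n"
            "Please only answer A or B."
        )
        if ask_chatgpt(question).strip().upper().startswith("B"):
            best = challenger
    return best
```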
#### 3.2.4 Chain-of-Thought Triggered Prompts
In Jiao _et al._'s [40] preliminary study of ChatGPT evaluation, the authors devised a _Pivot Prompting_ strategy for ChatGPT-based multilingual translation, which significantly improves translation performance. _Pivot Prompting_ translates the source language into the target language via a high-resource pivot language (English by default) when parallel data between two distant languages is scarce. This reflects that such intermediate transitional strategies are particularly effective in certain application scenarios. Jin _et al._'s knowledge graph-based QA research [15] provides further evidence that chains of reasoning are a critical factor affecting model accuracy.
Moreover, our inspection of ChatGPT's computational ability reveals that while ChatGPT tends to fall short when asked to answer directly, it performs competitively when a step-by-step calculation process is used. Fig. 8 gives a typical example: ChatGPT incorrectly answers 334 for the arithmetic problem 4+32 \(\cdot\) 5-2 when replying directly, yet it reasons its way to the correct answer for the same problem when following a step-by-step calculation process.
Building upon the experimental findings supporting step-by-step incremental reasoning, we explore the utility and viability of this technique for agricultural text classification. Concretely, we choose the _manually defined prompts_ and _ChatGPT triggered prompts_ as baselines. Drawing on these initial prompts, we require ChatGPT not only to deliver the final classification category but also to produce a corresponding Chain-of-Thought reasoning analysis. For ease of illustration, we append an additional instruction to the original QA prompt, as shown in Fig. 9.
Figure 8: The ChatGPT performance comparison between providing the answer directly and presenting a step-by-step calculation process in solving arithmetic problems. (Model: GPT-3.5, Query Date: 2023.3.15)
Figure 7: The progressive similarity measurement QAs-based prompting method for text classification.
This section has presented several feasible strategies with distinctive features that set them apart from one another. However, prompt engineering is more complex and nuanced than it appears at a superficial level, as it is influenced by multiple factors, with dataset characteristics playing a particularly significant role. For example, experimental results indicate that the Chain-of-Thought triggered prompts perform particularly well on datasets with many classification categories, but their effectiveness is less satisfactory on datasets with relatively simple classification schemes of only two to three categories.
The upcoming experiments systematically compare the prompting strategies proposed above to enable a comprehensive evaluation.
### ChatGPT Q&A Inference
ChatGPT is a state-of-the-art conversational agent based on the generative language model family, the Generative Pre-trained Transformer (GPT). Its conversational capability stems from its ability to generate coherent text using sequence-to-sequence learning and the transformer architecture, conditioning on a given prompt and sampling from a probability distribution over words. The apparent intelligence of ChatGPT derives from its training on extensive amounts of text data, through which it acquires a statistical understanding of the patterns of natural language.

The GPT family uses the transformer architecture, a deep neural network that processes input data in parallel using multi-headed attention mechanisms. During inference, the GPT model generates text by conditioning on a given prompt and sampling from a probability distribution over the next token. This distribution is computed by applying the softmax function to the model's output. The output at each time step depends on the previously generated tokens, creating a generative process that allows the model to produce coherent text.
Mathematically, the token generative procedure of ChatGPT can be represented as:
\[p(y|x)=\prod_{t=1}^{T}p(y_{t}|y_{1},...,y_{t-1},x) \tag{1}\]
where \(\prod\) denotes the product over time steps. Given the previously generated tokens \(y_{1},...,y_{t-1}\) and the input prompt \(x\), \(p(y_{t}|y_{1},...,y_{t-1},x)\) is the probability distribution over the token \(y_{t}\) at the \(t\)-th time step, and \(T\) is the length of the generated sequence.
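Equation (1) simply states that the probability of a generated answer factorizes over its tokens. The short PyTorch-style sketch below illustrates this by accumulating per-step token log-probabilities; all tensor names are illustrative and are not taken from ChatGPT's (closed) implementation.

```python
# Illustrative sketch of Eq. (1): the log-probability of a generated answer
# is the sum of per-step token log-probabilities under the model.
import torch
import torch.nn.functional as F

def sequence_log_prob(step_logits: torch.Tensor,
                      generated_ids: torch.Tensor) -> torch.Tensor:
    """step_logits: (T, vocab_size) logits at each generation step;
    generated_ids: (T,) indices of the tokens actually generated."""
    log_probs = F.log_softmax(step_logits, dim=-1)            # log p(. | y_<t, x)
    picked = log_probs[torch.arange(generated_ids.size(0)), generated_ids]
    return picked.sum()                                        # log prod_t p(y_t | y_<t, x)
```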
At this stage, we direct our focus towards ChatGPT and hypothesize that ChatGPT possesses inherent capabilities that enable it to act as an integrated zero-shot text classification interface through an interactive mode.
During the ChatGPT interaction process, we created a fresh conversation thread for each prompt to ensure that the previous conversation history would not impact ChatGPT's responses. By adopting this methodology, ChatGPT is able to consistently exercise independent thinking and deliver optimal responses by leveraging the information provided by the user.
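As an illustration of this "fresh conversation per prompt" protocol, the sketch below issues each classification question as an independent chat completion. It assumes the openai Python client interface available at the time of writing (openai.ChatCompletion with the gpt-3.5-turbo model); the wrapper name `classify_once` and the temperature setting are our own illustrative choices.

```python
# Sketch of one independent ChatGPT call per prompt (no shared history),
# assuming the openai Python package (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def classify_once(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a single-turn question; a fresh message list means no prior
    conversation can influence the answer."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep answers as deterministic as possible for evaluation
    )
    return response["choices"][0]["message"]["content"]
```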
Besides the vanilla ChatGPT (GPT-3.5), our experiments also evaluated the capabilities of GPT-4 [10]. GPT-4 represents a new breakthrough in OpenAI's ongoing efforts to advance deep learning. The results show that GPT-4 performs better than ChatGPT, even in some complex semantic text classification scenarios, as demonstrated by the evaluations in the following section.
### Answer Alignment
After the above steps, i.e., constructing an appropriate prompt and querying ChatGPT, ChatGPT returns the classification result for the corresponding text in natural language. Nevertheless, its characteristic of generating responses conversationally presents challenges for the subsequent analysis and evaluation of its outputs. Unlike traditional PLM-based text classification models, ChatGPT's responses do not directly correspond to predefined labels, which means that an addi
Figure 9: The _Chain-of-Thought_-based prompting strategy which is built upon a simple and direct QA prompt. (Model: GPT-3.5, Query Date: 2023.3.15)
tional alignment strategy is required to convert these intermediate answers into final labels that can be used to calculate performance metrics (e.g. accuracy and F1-score). We refer to this additional mapping strategy as "answer alignment engineering".
In our experiments, we investigated the impact of answer alignment engineering on the performance of ChatGPT-based text classification. Specifically, we designed and implemented two alignment strategies: a rule-based matching strategy and a similarity-based matching strategy. Both approaches map the intermediate responses to the corresponding labels. The rule-based approach uses predefined rules to match responses to labels, while the similarity-based approach computes the similarity between the response and each label and selects the label with the highest similarity score.
* **Rule-based matching strategy:** Essentially, the rule-based matching strategy is a text matching method that uses patterns or rules based on token attributes, such as part-of-speech tags, to match sequences of tokens in unstructured text. During our experiments, we use the Matcher object of spaCy v3 to find matched tokens in the sentence returned by ChatGPT and classify it accordingly. spaCy v3 is a leading industrial-strength natural language processing toolkit for Python. Footnote 3: The Matcher tool is documented at [https://spacy.io/api/matcher](https://spacy.io/api/matcher) [Accessed on 2023.03]
Specifically, we firstly analyze the text extraction patterns based on expert experience and ChatGPT's historical output habits, and design and define a set of rules. Then, the rules are applied to the text data and the extracted information is verified and validated. Finally, after adjustment and optimization, a comprehensive set of matching rules is summarized;
* **Similarity-based matching strategy:** Although the former approach offers rigid matching with high precision, it struggles with semantically ambiguous cases. To address this, we adopt a similarity-based matching strategy. First, we aggregate ChatGPT's commonly expressed utterances under each category to establish a repository of pivot answers per category. We then apply the Levenshtein distance algorithm to compute the minimum edit distance between each pivot answer and the answer being classified; the category of the pivot answer with the smallest edit distance is taken as the final label. This approach offers broad coverage and mitigates the shortcomings of rule-based matching in handling ambiguous and nuanced language. The string similarity-based matching strategy is depicted in Fig. 10, and a minimal sketch of both strategies is given after this list.
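The following sketch illustrates both alignment steps under simplifying assumptions: the spaCy Matcher patterns and the pivot answers shown here are illustrative stand-ins for the rule set and answer repository described above, and a pure-Python Levenshtein routine is used only to keep the example self-contained.

```python
# Sketch of the two answer-alignment strategies: rule-based matching first,
# then a fallback to Levenshtein-distance matching against pivot answers.
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")      # assumes the small English model is installed
matcher = Matcher(nlp.vocab)
# Illustrative rules: one simple token pattern per category label.
matcher.add("Hurricane", [[{"LOWER": "hurricane"}]])
matcher.add("Wildfires", [[{"LOWER": "wildfire"}], [{"LOWER": "wildfires"}]])

def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def align(answer: str, pivot_answers: dict) -> str:
    """Map ChatGPT's free-form reply to a category label."""
    matches = matcher(nlp(answer))
    if matches:                           # rule-based match found
        return nlp.vocab.strings[matches[0][0]]
    # Fallback: closest pivot answer by edit distance.
    return min(pivot_answers,
               key=lambda c: levenshtein(answer.lower(), pivot_answers[c].lower()))
```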
In theory, neither strategy alone can perfectly solve the answer mapping problem. We therefore combined the rule-based and similarity-based matching strategies in a pipeline. Specifically, since ChatGPT typically provides explicit category labels in natural language form, we first use the rule-based strategy to parse the intermediate answer. If the category remains uncertain, we fall back to the string similarity-based strategy, computing the similarity between the intermediate answer and each category's pivot answers and selecting the category with the highest similarity. In our experiments, this combination effectively improved both the accuracy and the recall of the answer mapping process.
Nevertheless, this work mainly explored a character-based literal matching method that lacks semantic understanding. The method has certain limitations, whereas the deep neural network-based methods using PLMs are more adept at such scenarios. In our future work, we will attempt to use a PLMs-based semantic
Figure 10: The illustrating diagram of the similarity-based matching strategy.
understanding model for this step, which theoretically can bring about better performance.
## 4 Experimental Setup
We perform a series of experiments to determine which factors of the devised strategies actually influence the final agricultural text classification performance of ChatAgri, reported in Section 5. As a preliminary, this section introduces the details of the experimental setup, including the multilingual datasets used, the text classification baselines for model comparison, the adopted evaluation metrics, and the hyperparameters of ChatAgri.
### Datasets
To demonstrate the actual potential of ChatAgri for classifying agricultural text, we carefully collected several suitable datasets for evaluation and validation, covering different types of categories (e.g. plant diseases, insect pests, and Twitter natural hazards), different numbers of categories, and different languages, including French, English, and Chinese. These datasets are called Amazon-Food-Comments, PestObserver-France, Natural-Hazards-Twitter, Natural-Hazards-Type, and Agri-News-Chinese in our experiments, and their details are as follows.
* **Amazon-Food-Comments:** An Amazon food review dataset containing nearly 200,000 positive, neutral, and negative samples, which can be used for three-way sentiment classification of reviews5; Footnote 5: Access [https://nijianmo.github.io/amazon/index.html](https://nijianmo.github.io/amazon/index.html) for more details of Amazon-Food-Comments [Accessed on 2023.02].
* **PestObserver-France: [7]** A French plant health bulletin classification dataset used to estimate how well an agricultural prediction model can deal with heterogeneous documents and predict natural hazards6; Footnote 6: PestObserver-France can be downloaded from [https://github.com/sufianj/fast-camember](https://github.com/sufianj/fast-camember) [Accessed on 2023.02].
* **Natural-Hazards-Twitter: [42]** A natural disaster dataset with sentiment labels from the United States, proposed to identify attitudes towards disaster response. It covers different natural disaster types and contains nearly 5,000 Twitter sentences7; Footnote 7: Natural-Hazards-Twitter can be downloaded from [https://github.com/Dong-UTIL/Natural-Hazards-Twitter-Dataset](https://github.com/Dong-UTIL/Natural-Hazards-Twitter-Dataset) [Accessed on 2023.02].
* **Natural-Hazards-Type:** In addition to recognizing the sentiment polarities of Natural-Hazards-Twitter, we also reorganize it into a new disaster type classification dataset, denoted Natural-Hazards-Type, to identify the natural disaster category of each text. Due to the large volume of the original Natural-Hazards-Twitter dataset, Natural-Hazards-Type takes only a small subset of it, containing a few thousand samples;
* **Agri-News-Chinese:** Besides the above existing datasets, we propose a Chinese agricultural short text classification dataset, Agri-News-Chinese, containing seven categories such as agricultural economy and aquatic fishery. Its data were collected and cleaned from the agricultural technology expert online system (ATE expert online system)8, with a total volume of approximately 60,000 samples, divided into train and test sets at a 9:1 ratio. Footnote 8: More details about the ATE expert online system can be accessed at [http://zjzx.cnki.net/](http://zjzx.cnki.net/) [Accessed on 2023.02].
Table 2 gives meta statistics for the five datasets, including the train/test split distribution, the language, and the categories of textual topics.
### Baselines
Existing models for text classification can be divided into five major training paradigms: 1) traditional feature engineering-based machine learning (e.g. SVM, Decision Tree, and Random Forest) [28; 30; 29]; 2) word embedding-based deep learning (e.g. TextCNN and TextRNN); 3) PLM-based fine-tuning, where the PLMs include BERT [1], BART [2], T5 [3], and so on; 4) PLM-based prompt learning; and 5) the newest ChatGPT QA-based zero-shot learning paradigm recently enabled by ChatGPT (e.g. ChatIE [23], ChatEventExtract [21], and our ChatAgri).
To ensure comprehensiveness, representative methods from the above mainstream natural language understanding (NLU) paradigms were evaluated and reported as comparison baselines in our experiments. Specifically, besides the proposed ChatAgri, we adopted the following methods for each learning paradigm.
* **SVM: [28]** The Support Vector Machine (SVM) is a classic classification method that maximizes the margin between class hyperplanes, widely used for text categorization. For text classification, SVM maps discrete, unstructured textual features into high-dimensional vector representations and classifies them;
* **Random Forest: [28]** Random Forest (RF) is another well-known classification algorithm from the ensemble methods family, which combines multiple weak classifiers into a stronger classifier for categorical data;
* **TextCNN: [43]** Built on top of pre-trained word vectors, TextCNN uses convolutional neural networks (CNN) as feature detectors and utilizes kernels of different sizes to extract valuable semantic features for sentence classification. A final softmax layer performs multi-class classification on the convolved features;
* **TextRNN: [44]** Based on pre-trained word embeddings, TextRNN integrates recurrent neural networks (RNN) into a multi-task learning framework. Specifically, TextRNN utilizes long short-term memory (LSTM) to address gradient vanishing and exploding, thereby capturing long-range dependencies within sequences;
* **BERT-based fine-tuning: [1; 5; 26]** Fine-tuning BERT has emerged as a widely employed methodology across diverse text processing tasks, including text classification. By generating contextualized word embeddings, BERT effectively captures both semantic and syntactic information associated with individual words. Leveraging its inherent strengths, BERT can be fine-tuned on specific tasks utilizing limited labeled datasets, rendering it a flexible and formidable solution for addressing an array of text processing objectives;
* **T5-based prompt-tuning: [17; 45; 3]** Different from the "pre-train then fine-tune" procedure of fine-tuning methods, the prompt-tuning paradigm induces the PLM to generate suitable target responses with the help of additional triggering sentences, called "prompts". In prompt-tuning, the major research attention shifts to how to provide better prompts that activate the PLM's rich internal prior knowledge. We use the PLM Text-to-Text Transfer Transformer (T5) as the backbone. T5 is a unified, very large PLM based on the Transformer architecture that converts all text processing tasks into text-to-text tasks;
* **BART-based prompt-tuning: [45; 2]** We also investigate the use of Bidirectional and Auto-Regressive Transformers (BART) as the backbone for prompt learning. BART simultaneously incorporates the advantages of BERT and GPT (i.e. bidirectional context modelling and the sequence joint probability hypothesis);
### Evaluation Metrics
For an agricultural text classification task involving multiple categories, accuracy and F1-score are two commonly used metrics.

Accuracy measures the proportion of correctly predicted samples among all predicted samples; it is a simple, coarse-grained evaluation metric that only counts correct instances. Accuracy is calculated as follows:
\[Accuracy=Count_{T}/Count_{N} \tag{2}\]
| Dataset | Train samples | Test samples | Language | Categories | Label count |
| --- | --- | --- | --- | --- | --- |
| Amazon-Food-Comments | 165863 | 16175 | English | 'negative', 'positive', 'neutral' | 3 |
| PestObserver-France | 322 | 80 | French | 'Bioagressor', 'Disease', 'Others' | 3 |
| Natural-Hazards-Twitter | 45669 | 5074 | English | 'negative', 'positive' | 2 |
| Natural-Hazards-Type | 5000 | 1000 | English | 'Hurricane', 'Wildfires', 'Blizzard', 'Floods', 'Tornado' | 5 |
| Agri-News-Chinese | ≈54000 | ≈6000 | Chinese | seven agricultural topics (e.g. agricultural economy, aquatic fishery) | 7 |

Table 2: The statistical meta information of the adopted agricultural text classification datasets.
where \(Count_{T}\) represents the correctly predicted samples and \(Count_{N}\) represents the total number of samples evaluated.
By comparison, F1-score is a finer-grained indicator than accuracy, as it simultaneously considers precision and recall. F1-score is calculated as follows:
\[F1=\frac{2*\text{Precision}*\text{Recall}}{\text{Precision}+\text{Recall}}\ \ \ \ \ \text{where}\]
\[\text{Precision}=\frac{TP}{TP+FP}\ \ \ \ \&\ \ \ \text{Recall}=\frac{TP}{TP+FN} \tag{3}\]
In the equation presented, \(Precision\) and \(Recall\) refer to the precision and recall rate of the classification results, respectively. \(TP\) (true positives) represents the number of samples whose actual and predicted class are both positive; \(FP\) (false positives) represents the number of samples whose actual class is negative but are predicted as positive; and \(FN\) (false negatives) represents the number of samples whose actual class is positive but are predicted as negative.
Specifically, the F1-score has several calculation variants: micro-F1, macro-F1, and weighted-F1. We use weighted-F1, rather than micro-F1 or macro-F1, because it weights the classification performance of each category by its support and therefore provides greater reference value.
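For completeness, a minimal sketch of how these two metrics could be computed with scikit-learn is shown below; the label lists are illustrative, not drawn from our actual evaluation data.

```python
# Sketch: computing accuracy and weighted-F1 with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["Hurricane", "Floods", "Tornado", "Floods"]
y_pred = ["Hurricane", "Floods", "Blizzard", "Floods"]

accuracy = accuracy_score(y_true, y_pred)                    # Count_T / Count_N
weighted_f1 = f1_score(y_true, y_pred, average="weighted")   # support-weighted F1
print(f"accuracy={accuracy:.3f}, weighted-F1={weighted_f1:.3f}")
```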
### Hyperparameter Settings
Our experimental procedure involves various meta settings for the hyperparameters. The optimal hyperparameters, determined by their performance on the development set, are selected for the final evaluation. The meta settings are summarized as follows.
We adopted the pretrained word vectors GloVe [46] as the embeddings for the TextCNN and TextRNN baselines. GloVe leverages word co-occurrence statistics to capture both syntactic and semantic relationships between words9. Considering the trade-off between computational limitations and performance, and to ensure experimental competitiveness and stability, we adopted the "bert-base-uncased"10 version of BERT, the "t5-base"11 version of T5, and the "facebook/bart-base"12 version of BART. The code is implemented with Python 3.7 13 and PyTorch 1.9.0 14. For simplicity, the prompt of the prompt-tuning baselines is pre-defined as "Given a sentence of [SENT], it is more likely to be a topic of [SLOT] from [CATE]", and the probability scores of the candidate words in the [SLOT] position are regarded as the intermediate answers for the final classification. The experimental hardware comprises an Intel Core i9-9900k CPU and a single Nvidia _GTX 1080Ti_ GPU.
Footnote 10: BERT can be obtained from: [https://huggingface.co/docs/transformers/model_doc/bert](https://huggingface.co/docs/transformers/model_doc/bert) [Accessed on 2023.03].
Footnote 11: T5-base can be obtained from: [https://huggingface.co/t5-base](https://huggingface.co/t5-base) [Accessed on 2023.03].
Footnote 12: BART can be obtained from: [https://huggingface.co/docs/transformers/model_doc/bart](https://huggingface.co/docs/transformers/model_doc/bart) [Accessed on 2023.03].
## 5 Experimental Results and Analyses
Next, we conducted a series of baseline comparison and ablation experiments to analyze the specific connections between the key factors that affect the performance of ChatAgri on agricultural text classification tasks. We first verify the competitiveness and superiority of ChatAgri relative to known state-of-the-art (SOTA) models. Then, we systematically investigate the impact of different prompting strategies on classification accuracy. Moreover, we also apply GPT-4 and investigate its advantages over the basic version of ChatGPT, GPT-3.5. The systematic analysis of the extensive empirical results firmly demonstrates the enormous potential, feasibility, and broad application prospects of ChatGPT in agricultural text classification tasks.
### Methods Comparison
Table 3 details comprehensive experimental results on the agricultural text classification task for our model ChatAgri and existing state-of-the-art approaches. The rows preceding the _ChatGPT-based Prompt QA_ row report a systematic evaluation of the classification performance of the baseline models on the five datasets under the hyperparameter settings described above. The ChatGPT interface calls were made on March 16, 2023; subsequent official OpenAI updates may lead to certain performance fluctuations of the ChatGPT interface. The last row shows the evaluation results of ChatAgri. For simplicity and clarity, we take the primary designed solution of ChatAgri as its basic model for comparison. Specifically, we used the manually defined prompts described in Section 3.2.1 as the prompting template, and we adopted the combined rule-based and similarity-based text pattern matching strategy for answer alignment. We label this basic model **ChatAgri-base**.
In Table 3, we group all the agricultural text classification methods explored in this experiment according to their learning paradigms. The methods based on PLM fine-tuning and PLM prompt engineering can be regarded as the latest strong benchmarks and are recorded in the rows just above ours. The table clearly shows that ChatAgri achieves exciting and competitive performance, and on some datasets, such as PestObserver-France and Natural-Hazards-Type, it is the best. It surpasses traditional machine learning methods and word vector-based representation learning methods by an absolute margin of roughly 10% to 20%, a substantial gap. Compared with the latest Transformer PLM-based deep learning methods, ChatAgri is also particularly strong, with little or no loss in accuracy or weighted-F1 relative to these SOTA methods. Specifically, ChatAgri outperforms the PLM-based fine-tuning method represented by fine-tuned BERT by about 3.0% accuracy on the PestObserver-France dataset, and outperforms the PLM-based prompt-tuning method represented by prompt-tuned BART by approximately 2.2% weighted-F1 there. Similarly, ChatAgri surpasses these two state-of-the-art models by about 0.6% in accuracy and weighted-F1 on the Natural-Hazards-Type dataset. The performance of ChatAgri on the other datasets is also impressive: on the Agri-News-Chinese dataset it surpasses fine-tuned BERT by about 3.7% accuracy and 4.7% weighted-F1, and it is slightly higher than the PLM-based prompt-tuning method represented by prompt-tuned T5 by approximately 0.4% accuracy and 0.2% weighted-F1.
We further explored why ChatAgri performs more strongly on some datasets but slightly worse than previous SOTA methods on others. From Table 3, we observe that ChatAgri has clear advantages on the two minority-language datasets, PestObserver-France and Agri-News-Chinese, but is slightly weaker on the widely used English datasets, Amazon-Food-Comments and Natural-Hazards-Twitter. We speculate that this is mainly due
| Learning paradigm | Method | Amazon-Food-Comments (acc / wF1) | PestObserver-France (acc / wF1) | Natural-Hazards-Twitter (acc / wF1) | Natural-Hazards-Type (acc / wF1) | Agri-News-Chinese (acc / wF1) |
| --- | --- | --- | --- | --- | --- | --- |
| Traditional machine learning | SVM | 0.627 / 0.624 | 0.672 / 0.655 | 0.763 / 0.742 | 0.811 / 0.811 | 0.523 / 0.522 |
| Traditional machine learning | Random Forest | 0.647 / 0.643 | 0.664 / 0.652 | 0.787 / 0.755 | 0.863 / 0.863 | 0.553 / 0.534 |
| Word embedding-based learning | TextCNN | 0.748 / 0.742 | 0.715 / 0.704 | 0.834 / 0.816 | 0.914 / 0.914 | 0.792 / 0.785 |
| Word embedding-based learning | TextRNN | 0.727 / 0.725 | 0.707 / 0.697 | 0.845 / 0.827 | 0.931 / 0.931 | 0.812 / 0.801 |
| PLM-based fine-tuning | BERT-based fine-tuning | 0.767 / 0.764 | 0.736 / 0.714 | 0.869 / 0.839 | 0.945 / 0.945 | 0.826 / 0.819 |
| PLM-based prompt-tuning | T5-based prompt-tuning | **0.805** / **0.798** | 0.764 / 0.753 | 0.874 / 0.857 | 0.966 / 0.966 | 0.859 / 0.854 |
| PLM-based prompt-tuning | BART-based prompt-tuning | 0.800 / 0.795 | 0.757 / 0.767 | **0.875** / **0.865** | 0.971 / 0.971 | **0.867** / **0.862** |
| ChatGPT-based Prompt QA | ChatAgri-base (Ours) | 0.798 / 0.793 | **0.794** / **0.789** | 0.866 / 0.853 | **0.978** / **0.978** | 0.863 / 0.856 |

Table 3: Performance statistics of all baselines and ChatAgri on all adopted datasets. The best score across all models for each dataset is boldfaced (**Query Date: 2023.3.16**).
to the difference in the scale of language-specific training corpora. After comprehensive investigation of the latest literature [39; 10; 9], we conclude that ChatGPT excels at handling cross-linguistic tasks. Unlike previous methods based on traditional PLMs, ChatGPT's training corpus is highly comprehensive and of high quality, covering the majority of widely spoken languages. Moreover, ChatGPT's very large parameter size allows it to memorize and master more linguistic knowledge, not limited to English. Therefore, in terms of cross-lingual understanding capability, ChatGPT is significantly superior to traditional PLMs (e.g. BERT, RoBERTa, and BART). Correspondingly, traditional PLMs perform worse on less commonly spoken language datasets, as their training corpora are far less comprehensive and of lower quality than ChatGPT's. This is probably the primary factor that allows ChatAgri to perform well on minority-language datasets regardless of their linguistic characteristics.
On the Natural-Hazards-Type disaster category classification dataset derived from Natural-Hazards-Twitter, both the PLM-based methods and ChatAgri perform very well, fluctuating between roughly 94% and 97% in accuracy and weighted-F1, which meets almost all practical needs. Examining the dataset itself, we observe that most of its texts can be classified by using a few fixed phrases as trigger words. For example, the dataset contains the sentence "_Florida governor declares state of emergency ahead of Dorian and warns Floridians on the East Coast_", where the word "_Dorian_" refers to an American hurricane disaster. Such simple semantic contexts make training and prediction in NLU tasks much easier, so the existing SOTA models achieve satisfactory performance. It is worth mentioning that, when reorganizing Natural-Hazards-Twitter into Natural-Hazards-Type, we kept the same number of test samples for each category; therefore, the accuracy scores on Natural-Hazards-Type are identical to the weighted-F1 scores.
The above discussion demonstrates the promise of ChatGPT in agricultural text classification: even though ChatGPT has not been trained on any of these training sets, it remains competitive with, and on several datasets outperforms, SOTA methods trained on large-scale training sets. Note that ChatAgri-base, used as the comparison baseline here, solely employs the manually defined prompting strategy, which is the most basic one. That even this simple ChatAgri achieves impressive results further convinces us that ChatGPT-based solutions are a promising direction for the continued development of agricultural text classification.
### Improving ChatGPT with Advanced Prompting Strategies
To explore the influence of different prompt generation strategies on the final classification performance, in this section we conduct systematic evaluations and in-depth explorations of the prompt generation strategies introduced in Section 3.2, clarifying their respective advantages. The ChatGPT interface calls for these experiments were made on March 24, 2023; subsequent OpenAI updates to the official ChatGPT API may lead to certain performance discrepancies.
From the first two rows of Table 4, the ChatAgri variant that adopts ChatGPT Triggered-Prompts outperforms its Manually Defined Prompts counterpart in most cases, indicating that ChatGPT can generate better prompts that trigger its more comprehensive knowledge for more accurate prediction. For instance, ChatAgri based on ChatGPT Triggered-Prompts improves accuracy by an average of 2.1% and 1.1% on the PestObserver-France and Agri-News-Chinese datasets, respectively, compared to ChatAgri based on Manually Defined Prompts. This empirically demonstrates that prompt engineering for ChatGPT should incorporate ChatGPT's own understanding and feedback to achieve better classification performance.
From the third and fourth rows of Table 4, it can be observed that the Zero-Shot Similarity-Prompts strategy performs significantly better than the baseline prompts on the first three datasets, but its performance on the Natural-Hazards-Type and Agri-News-Chinese datasets is relatively unsatisfactory, even falling behind the basic prompts, namely Manually Defined Prompts and ChatGPT Triggered-Prompts. For example, ChatAgri based on Zero-Shot Similarity-Prompts reduced the accuracy and weighted-f1 by 0.3% compared to ChatAgri-base based on Manually Defined Prompts on the Natural-Hazards-Type dataset.
We can also easily observe from Table 4 that the Chain-of-Thought Prompts strategy significantly improves the overall task performance on all datasets, and its effect is better than that of ChatAgri based on Zero-Shot Similarity-Prompts. Especially on the Natural-Hazards-Type and Agri-News-Chinese datasets, Chain-of-Thought Triggered-Prompts has further improved,
which is an effect that Zero-Shot Similarity-Prompts cannot achieve. For example, on the Agri-News-Chinese dataset, Chain-of-Thought Triggered-Prompts improves both accuracy and weighted-F1 by an average of 2.7% compared to ChatAgri-base.
It is worth mentioning that for the binary classification dataset Natural-Hazards-Twitter, the Chain-of-Thought-based classification process requires only one comparison step, and the pivot sentences it selects are exactly the same as those used by Zero-Shot Similarity-Prompts; therefore, the two strategies perform identically there. Moreover, because of the simple semantics of our constructed Natural-Hazards-Type dataset, the predictions of the various ChatAgri variants are close to saturation on it, so this dataset carries less reference value than the others.
In summary, Chain-of-Thought Triggered-Prompts is particularly good at handling tasks with many classification categories, which confirms the effectiveness of the divide-and-conquer idea of splitting a complex multi-class task into multiple simple binary comparisons. In contrast, Zero-Shot Similarity-Prompts performs relatively poorly when there are many categories, sometimes even worse than Manually Defined Prompts and ChatGPT Triggered-Prompts. We speculate that the main reasons are, on the one hand, that the selection of pivot sentences is imperfect and, on the other hand, that multiple semantically similar pivot sentences easily confuse ChatGPT when it judges their relative similarity, leading to misjudgment of the final classification.
### Few-shot prompt-tuning and zero-shot ChatAgri
Most representative text classification methods are based on supervised learning over a large volume of high-quality annotated samples. However, the annotation of supervised corpora demands the expertise of domain specialists and is expensive, time-consuming, and labor-intensive. Thus, in practical application scenarios, data-scarce learning is often the more widespread and realistic setting due to insufficient resources and scarce data.
As numerous studies have suggested [17; 24; 25], prompt-learning is particularly useful in data-insufficient scenarios. It is a powerful and promising NLP technique that fully leverages the prior knowledge acquired during the PLM's pre-training stage. By using prompting tricks, prompt-learning allows PLMs to adapt quickly to new tasks while learning from a small amount of data. Here, we delve into the characteristics, differences, and interactions between ChatGPT and prompt-learning paradigms. The evaluation of these prompt-learning methods was conducted using the open-source framework _OpenPrompt_. _OpenPrompt_ [45] is an advanced research toolkit developed by Tsinghua University15; it integrates various prompt-based learning methods, making it easy for researchers to quickly develop and deploy prompt-tuning solutions.
Table 4: Performance of ChatAgri with the different prompting strategies (Manually Defined, ChatGPT Triggered, Zero-Shot Similarity, and Chain-of-Thought) on the Amazon-Food-Comments, PestObserver-France, Natural-Hazards-Twitter, Natural-Hazards-Type, and Agri-News-Chinese datasets.
Correspondingly, we provided a detailed comparison to explore the relationships between ChatAgri and PLM-based prompt-tuning methods under few-shot and zero-shot learning settings. As shown in Table 5, we report the experimental results of these SOTA methods (i.e. T5-based prompt-tuning, BART-based prompt-tuning and ChatAgri) under the few-shot learning and zero-shot settings.
Specifically, from the first rows of Table 5, prompt-learning methods are remarkably effective in zero-shot learning (i.e., without training on any samples), far surpassing random guessing. For instance, on the binary Natural-Hazards-Twitter dataset, the BART-based prompt-tuning method achieves a zero-shot accuracy of 57.3%, compared to the 50% expected from random guessing. On the five-class Natural-Hazards-Type dataset, its zero-shot accuracy of 63.9% is much higher than the 20% baseline of random prediction. Under the 20-shot and 50-shot settings, the improvements of these prompt-learning methods are even more significant, as shown in the subsequent rows. These results indicate that prompt-learning methods are very effective when training on small amounts of data.
Most impressively, the table shows that ChatAgri performs significantly better than these prompt-learning methods and achieves state-of-the-art performance in most respects, regardless of the classification topics and the number of categories. The text classification performance of ChatAgri-base surpasses these SOTA models on all test datasets by a significant margin. For example, compared with the BART-based prompt-tuning baseline trained under the 50-shot setting, ChatAgri-base yields approximately 10.5%, 15.1%, and 10.8% absolute improvements in accuracy on Amazon-Food-Comments, PestObserver-France, and Natural-Hazards-Twitter, respectively. Even the better-performing prompt-learning models, trained on small amounts of data, remain significantly inferior to the ChatGPT-based ChatAgri, which requires no fine-tuning at all. Better prompt engineering, stronger ChatGPT models, and better answer alignment engineering could further improve the results of ChatAgri. Overall, ChatAgri essentially surpasses the existing state-of-the-art prompt-learning paradigm in all aspects, reflecting the enormous potential brought by ultra-large-scale models.
In conclusion, ChatAgri shows its effectiveness and superiority in data-insufficient learning scenarios, indicating that ChatGPT has strong cross-domain and generalization capabilities. This kind of generalization is
| Few-shot setting | Method | Amazon-Food-Comments (acc / wF1) | PestObserver-France (acc / wF1) | Natural-Hazards-Twitter (acc / wF1) | Natural-Hazards-Type (acc / wF1) | Agri-News-Chinese (acc / wF1) |
| --- | --- | --- | --- | --- | --- | --- |
| Zero-shot | T5-based prompt-tuning | 0.521 / 0.523 | 0.474 / 0.466 | 0.562 / 0.545 | 0.597 / 0.597 | 0.425 / 0.419 |
| Zero-shot | BART-based prompt-tuning | 0.545 / 0.539 | 0.439 / 0.431 | 0.573 / 0.566 | 0.639 / 0.639 | 0.452 / 0.447 |
| 20-shot | T5-based prompt-tuning | 0.605 / 0.595 | 0.585 / 0.578 | 0.674 / 0.651 | 0.757 / 0.757 | 0.563 / 0.559 |
| 20-shot | BART-based prompt-tuning | 0.627 / 0.609 | 0.563 / 0.554 | 0.643 / 0.626 | 0.761 / 0.761 | 0.594 / 0.592 |
| 50-shot | T5-based prompt-tuning | 0.679 / 0.674 | 0.656 / 0.647 | 0.732 / 0.719 | 0.831 / 0.831 | 0.766 / 0.760 |
| 50-shot | BART-based prompt-tuning | 0.694 / 0.688 | 0.643 / 0.629 | 0.758 / 0.746 | 0.854 / 0.854 | 0.742 / 0.738 |
| Zero-shot | ChatAgri-base (Ours) | **0.798** / **0.793** | **0.794** / **0.789** | **0.866** / **0.853** | **0.978** / **0.978** | **0.863** / **0.856** |

Table 5: Performance statistics of ChatAgri and prompt-learning baselines under zero/few-shot supervised learning. The best scores, achieved by ChatAgri (zero-shot), are boldfaced.
one of the directions for the development of future General Purpose AI, as it can help us build more flexible and adaptable intelligent systems that can handle various tasks and scenarios.
As one would expect, better performance can be obtained by using better prompts or by upgrading ChatGPT itself. Having investigated the impact of advanced prompting strategies in Section 5.2, we now explore the potential of upgrading the ChatAgri framework with the more advanced GPT-4.
### Potentials between ChatGPT and GPT-4
Just as we were conducting research on vanilla ChatGPT (GPT-3.5) in March to April, 2023, OpenAI coincidentally released their latest powerful conversational system, GPT-4 [10], which serves as an improved version of ChatGPT. Thus, it is necessary to conduct additional exploration experiments to evaluate the overall performance of GPT-4, the upgraded ChatGPT, in the agriculture field text classification task.
Building on the technologies behind ChatGPT, GPT-4 has been iteratively refined to achieve higher levels of factuality, controllability, and refusal of undesirable outputs. In terms of parameter scale, GPT-4 is rumored to have over 1 trillion parameters, a significant increase over GPT-3.5's 175 billion, which would allow it to handle larger amounts of data and generate longer, more complex, coherent, accurate, diverse, and creative text. In terms of overall capability, compared to the previous version of ChatGPT, GPT-4 shows improved advanced reasoning, better handling of complex instructions, and greater creativity.
However, GPT-4 is currently capped at 25 messages every three hours under OpenAI's latest policy. This limited API capacity, caused by scarce computational resources, falls far short of what comprehensive experiments on GPT-4-based ChatAgri demand. To work within these constraints, we adopted a balanced approach based on trade-offs between experimental effectiveness and resource consumption (running time and monetary cost). Specifically, we made several reasonable reductions to the experiment in terms of linguistic coverage, the scale of the evaluated subsets, and the set of compared baselines. The specific adjustments are as follows:
* For dataset selection, in order to comprehensively
Figure 11: The values show the absolute accuracy and weighted-F1 metrics, reported in (%). The first group of a.(1), a.(2) and a.(3) denotes ChatAgri\({}_{\alpha}\), and the second group of b.(1), b.(2) and b.(3) denotes the ChatAgri\({}_{\beta}\) counterpart. Reported results were averaged over 5 runs to ensure experimental reliability and robustness.
evaluate the performance of cross-linguistic text classification tasks, we selected three datasets that represent English, Chinese, and French contexts: Amazon-Food-Comments, PestObserver-France, and Agri-News-Chinese;
* For the specific samples to be evaluated, for each independent experiment, we randomly selected 100 samples from the original evaluation set as the evaluation subset;
* For the selection of the baselines, we used two ChatAgri models based on manually defined prompts and prompts triggered from ChatGPT, respectively labeled as ChatAgri\({}_{\alpha}\) and ChatAgri\({}_{\beta}\);
* To ensure the reliability and accuracy of the experimental results, we conducted 5 rounds of random screening and corresponding evaluations for each dataset, and took the average of the results from multiple rounds as the final evaluation result.
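To make the sampling protocol explicit, the sketch below draws a random evaluation subset and averages accuracy over five rounds; the `evaluate` callable is a placeholder for running a ChatAgri variant on the sampled data, and the function name and seed are our own illustrative choices.

```python
# Sketch of the GPT-4 evaluation protocol: 5 rounds of randomly sampling
# 100 test examples, reporting the mean accuracy over the rounds.
import random

def averaged_accuracy(test_set, evaluate, n_rounds=5, subset_size=100, seed=0):
    """evaluate(subset) is assumed to return an accuracy value in [0, 1]."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_rounds):
        subset = rng.sample(test_set, min(subset_size, len(test_set)))
        scores.append(evaluate(subset))
    return sum(scores) / len(scores)
```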
Through these comparative experiments, we found that GPT-4 performs better than vanilla ChatGPT (GPT-3.5). As illustrated in Fig. 11, the overall performance of ChatAgri\({}_{\alpha}\) and ChatAgri\({}_{\beta}\) equipped with GPT-4 is better than that of the counterparts equipped with vanilla ChatGPT. For example, as shown in a.(2) of Fig. 11, the GPT-4-based ChatAgri\({}_{\alpha}\) clearly outperforms the GPT-3.5-based ChatAgri\({}_{\alpha}\), with absolute gains of about 2.9% accuracy and 3.1% weighted-F1 on the PestObserver-France dataset. In the second group, GPT-4 also brings a significant gain to ChatAgri\({}_{\beta}\) compared with the vanilla ChatGPT-equipped counterpart on both the Amazon-Food-Comments and Agri-News-Chinese datasets, with an average absolute accuracy gain of 1.7%. These results demonstrate that GPT-4 can further exert its potential and achieve better semantic understanding in the agricultural text classification task.
Especially in complex semantic scenarios, such as contexts containing many semantically similar but subtly different texts, the classification accuracy of GPT-4 is significantly higher than that of vanilla ChatGPT. These results indicate that GPT-4 is more accurate and robust in handling complex semantic texts and has broader application prospects. Overall, GPT-4 proves to be markedly superior to and more stable than vanilla ChatGPT. We hope that OpenAI will continue to support the GPT series, including GPT-4 and more advanced versions, so that advanced General Purpose AI can benefit all aspects of future sustainable agricultural applications.
## 6 Conclusion and Outlook
Agricultural text classification, which serves as the basis for organizing various types of documents, is a crucial step towards managing massive and ever-increasing agricultural information. Nevertheless, existing mainstream PLM-based classification models face bottlenecks that are difficult to overcome, such as a heavy dependence on well-annotated corpora, limited cross-linguistic transferability, and complex deployment. The emergence of ChatGPT has brought a turning point to this dilemma. Despite its success, there are few to no systematic studies of the benefits brought by ChatGPT to sustainable agricultural information management, especially in the field of agricultural text classification.
In this work, we have conducted a preliminary study to explore the potential of ChatGPT in agricultural text classification. As a result, we have proposed a novel ChatGPT-based text classification framework, namely ChatAgri. To the best of our knowledge, ChatAgri is the first study performing a qualitative analysis of text classification with ChatGPT, with a focus on the agricultural domain. Specifically, in our experiments, we have compared ChatAgri with various baselines relying on different learning paradigms, including traditional machine learning, PLM-based fine-tuning, and PLM-based prompt learning. Experiments have been performed on datasets covering several languages, namely English, French, and Chinese. Furthermore, we have developed several prompt generation strategies to better stimulate the generation potential of ChatGPT, and to ultimately evince the effectiveness of the designed prompts. Additionally, we have further investigated the capability of the latest released ChatGPT (GPT-4) through a series of comparative experiments. Overall, the examination of the results elicited by our experiments and ablation studies has revealed the superiority of applying ChatGPT in agricultural text classification.
This empirical exploration marks a new milestone for the development of various ChatGPT-based agricultural information management techniques. We look forward to proposing more applications of ChatGPT in sustainable agricultural development in the future, which will help promote the
digital transformation and sustainable development of the agricultural sector. For example, ChatGPT can be used in the field of smart agriculture to help farmers better manage crops and land, thereby improving agricultural production efficiency and quality. On an overarching outlook, we hope this work has succeeded in its aim of exposing the manifold opportunities brought by LLMs to the agriculture domain, leveraging the immense knowledge currently available in databases to empower this sector with exciting opportunities to benefit from modern Artificial Intelligence advances.
## Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful comments, corrections, and recommendations, which significantly improved the quality of the paper. G.Y. was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC\(\backslash\)NSFC\(\backslash\)211235), the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, and the UKRI Future Leaders Fellowship (MR/V023799/1). J.D.S. also acknowledged support from the Spanish _Centro para el Desarrollo Tecnologico Industrial_ (CDTI) through the AI4ES project, and the Department of Education of the Basque Government (_Eusko Jaurlaritza_) via the Consolidated Research Group MATHMODE (IT1456-22). B.Z. and W.J. were supported in part by the Natural Science Basis Research Plan in Shaanxi Province of China (Project Code: 2021JQ-061). Both the first two authors, B.Z. and W.J., made equal contributions to this work.
|
2304.02671 | Noise Induced Universal Diffusive Transport in Fermionic Chains | We develop a microscopic transport theory in a randomly driven fermionic
model with and without linear potential. The operator dynamics arise from the
competition between noisy and static couplings, leading to diffusion regardless
of ballistic transport or Stark localization in the clean limit. The universal
diffusive behavior is attributed to a noise-induced bound state arising in the
operator equations of motion at small momentum. By mapping the noise-averaged
operator equation of motion to a one-dimensional non-hermitian hopping model,
we analytically solve for the diffusion constant, which scales
non-monotonically with noise strength, revealing regions of enhanced and
suppressed diffusion from the interplay between onsite and bond dephasing
noise, and a linear potential. For large onsite dephasing, the diffusion
constant vanishes, indicating an emergent localization. On the other hand, the
operator equation becomes the diffusion equation for strong bond dephasing and
is unaffected by additional arbitrarily strong static terms that commute with
the local charge, including density-density interactions. The bound state
enters a continuum of scattering states at finite noise and vanishes. However,
the bound state reemerges at an exceptional-like point in the spectrum after
the bound-to-scattering state transition. We then characterize the fate of
Stark localization in the presence of noise. | Christopher M. Langlett, Shenglong Xu | 2023-04-05T18:04:50Z | http://arxiv.org/abs/2304.02671v2 | # Noise Induced Universal Diffusive Transport in Fermionic Chains
###### Abstract
We develop a microscopic transport theory in a randomly driven fermionic model with and without linear potential. The operator dynamics arise from the competition between noisy and static couplings, leading to diffusion regardless of ballistic transport or Stark localization in the clean limit. The universal diffusive behavior is attributed to a noise-induced bound state arising in the operator equations of motion at small momentum. By mapping the noise-averaged operator equation of motion to a one-dimensional non-hermitian hopping model, we analytically solve for the diffusion constant, which scales non-monotonically with noise strength, revealing regions of enhanced and suppressed diffusion from the interplay between onsite and bond dephasing noise, and a linear potential. For large onsite dephasing, the diffusion constant vanishes, indicating an emergent localization. On the other hand, the operator equation becomes the diffusion equation for strong bond dephasing and is unaffected by additional arbitrarily strong static terms that commute with the local charge, including density-density interactions. The bound state enters a continuum of scattering states at finite noise and vanishes. However, the bound state reemerges at an exceptional-like point in the spectrum after the bound-to-scattering state transition. We then characterize the fate of Stark localization in the presence of noise.
An outstanding challenge of many-body physics is a complete explanation of how phenomenological laws governing irreversible macroscopic transport behavior emerge from reversible microscopic dynamics, a process encapsulated by the eigenstate thermalization hypothesis [1, 2, 3]. This challenge only magnifies in interacting quantum many-body systems in both equilibrium and non-equilibrium processes [4, 5]. Along these lines, one-dimensional systems [6, 7] are attractive because quantum fluctuations have a pronounced effect, leading to a wide array of quantum phenomena ranging from ballistic transport to localization. A notable example is the observation of superdiffusive transport [8, 9, 10, 11, 12, 13, 14] beyond the expected ballistic behavior in integrable systems. However, a complete characterization of quantum transport in solvable models remains challenging despite having access to the eigenenergies and excitations [15].
Randomly driven models, in which couplings are random variables uncorrelated in time, help understand the spreading of a local operator under Heisenberg evolution, known as the operator dynamics. Systems with added stochasticity ought to lose their microscopic properties, such as conservation laws, permitting the emergence of universal behavior. These systems have recently been revitalized with discrete time evolution involving dual unitary circuits [16, 17] and replica disorder averaged random unitary circuits [18, 19, 20]. On the other hand, stochastic dynamics of continuous time models in random Hamiltonians [21, 22, 23, 24, 25, 26], noisy spin chains [27, 28, 29, 30, 31, 32], and (a)symmetric simple exclusion processes [33, 34, 35, 36] have provided deep insights. Random unitary dynamics have also attracted experimental interest in cold atoms [37, 38, 39], trapped ions [40, 41, 42], and paraxial optics [43].
Despite tremendous progress, a complete characterization of the ingredients necessary for unorthodox transport to arise in interacting many-body systems remains open. One approach is introducing a static term as a perturbation [29, 44] to access more generic information about late-time transport. A recent study [45] of a spin-\(1/2\) chain with exchange couplings that fluctuate in space-time around a non-zero mean revealed, through perturbation theory, late-time spin diffusion, albeit with a superdiffusive enhancement suggesting normal diffusion [46].
In this work, we extend these results to non-perturbative static terms. We develop a microscopic transport theory in a fermionic chain without and in the presence of a linear potential. In both cases, the operator dynamics arise from the competition between randomly driven and arbitrarily strong static couplings. We analytically solve for the diffusion constant by exactly mapping the noise-averaged operator equation of motion to a one-dimensional non-hermitian hopping model--the diffusion constant scales non-monotonically with noise strength,
Figure 1: _Noise Induced Non-Hermitian Hopping Model._ (a) Randomly driven non-interacting fermions in a spatially dependent potential, \(V_{x}\). Classical noise \(\Gamma_{x,y}(t)\) models the random drive by coupling locally to the hopping or density. (b) Noise-averaged operator equations of motion map onto a set of one-dimensional non-hermitian hopping models with a repulsive delta function. The \(x\)-axis is the operator length, and \(k\) is the center-of-mass momentum.
revealing enhanced and suppressed diffusion regions.
We uncover for all noise models that a diffusive mode governs the late-time hydrodynamics at small \(k\), attributed to an emergent bound state in the operator equations of motion. As \(k\) increases, the bound state enters a scattering state continuum and vanishes. From the non-hermitian structure of the operator equations, the bound state reemerges at an exceptional-like point where a pair of complex energies form. However, for strong bond dephasing noise, the operator equation becomes the diffusion equation and is _unaffected_ by additional arbitrarily strong static terms that commute with the local charge, including density-density interactions. Moreover, we then characterize the fate of Stark localization in the presence of noise. Ultimately, noise destabilizes the Stark ladder, allowing transport to occur albeit non-monotonically.
_Model._-- We explore the dynamics of one-dimensional non-interacting fermions with time-dependent noise [47, 48], through the Hamiltonian,
\[H_{t}=\sum_{x,y}\big{[}J_{x,y}+\Gamma_{x,y}(t)\big{]}c_{x}^{\dagger}c_{y}, \tag{1}\]
where \(c_{x}^{\dagger}\) (\(c_{x}\)) create (annihilate) an electron at site index \(x\). The off diagonal elements of \(J_{x,y}\) and \(\Gamma_{x,y}(t)\) represent either static or driven hopping, while the diagonal elements represent a static or driven potential. The amplitudes \(\{\Gamma_{x,y}\}\) are drawn independently for each pair of sites \((x,y)\) from a Gaussian distribution with zero mean and variance,
\[\mathbb{E}[\Gamma_{x,y}(t)\Gamma_{l,m}(t^{\prime})]=\Gamma_{xy}\delta_{x,l} \delta_{y,m}\delta(t-t^{\prime}). \tag{2}\]
Where \(\mathbb{E}[\cdot]\) denotes the average over disorder, \(\Gamma_{x,y}\) sets the energy scale of the noise, and \(\delta(t-t^{\prime})\) implies the couplings are correlated at a single instance in time.
We study analytically and numerically time-dependent correlation functions to reveal the long-distance late-time hydrodynamic transport in the presence of noise. In the Heisenberg picture, the infinitesimal operator evolves stochastically, \(\mathcal{O}_{t+dt}=e^{iH_{t}dt}\mathcal{O}_{t}e^{-iH_{t}dt}\). The evolution equation for a generic noise-averaged operator follows from expanding the flow of \(\mathcal{O}_{t}\) up to second-order in \(dt\) and averaging the noise [49, 50, 51],
\[d\bar{\mathcal{O}}_{t}=\sum_{x,y}\bigg{[}iJ_{x,y}[c_{x}^{\dagger}c_{y},\bar{ \mathcal{O}}_{t}]+\Gamma_{x,y}\mathcal{L}_{x,y}[\bar{\mathcal{O}}_{t}]\bigg{]} dt. \tag{3}\]
Here the average dynamics are governed by an effective Lindblad description [52, 53, 54, 34] where \(\mathcal{L}_{x,y}[\bullet]=L_{x,y}^{\dagger}\bullet L_{x,y}-\frac{1}{2}\{L_{x,y}^{\dagger}L_{x,y},\bullet\}\) with \(L_{x,y}=c_{x}^{\dagger}c_{y}+h.c.\), and \(\{,\}\) standing for the anti-commutator [55]. Competition between coherent and incoherent dynamics drives the time-evolved noise-averaged operator in the late-time limit to the steady state \(\lim_{t\to\infty}\bar{\mathcal{O}}_{t}=\sum_{x}n_{x}\) from charge conservation.
_Characterizing Transport._-- Universal behavior of the random unitary dynamics is ascertained through the infinite-temperature fermion density-density correlation function,
\[C_{x,y}(t)=\frac{1}{2^{N}}\operatorname{tr}\biggl{[}\biggl{(}n_{x}(t)-\frac{1 }{2}\biggr{)}\left(n_{y}-\frac{1}{2}\biggr{)}\biggr{]}, \tag{4}\]
where \(n_{x}(t)\) denotes the time-evolved density operator at site index \(x\) in the Heisenberg picture. The density-density correlation function Eq. (4) decays with an algebraic tail at late times,
\[\lim_{t\to\infty}\lim_{N\to\infty}C_{N/2,N/2}(t)\sim t^{-1/z}. \tag{5}\]
The dynamical exponent \(z\) classifies the universal hydrodynamic transport behavior, for example, \(z=1\) for ballistic, \(1<z<2\) for superdiffusive, \(z=2\) for diffusive, \(z>2\) is subdiffusive, and \(z=\infty\) for localized.
_Operator Dynamics._-- The Heisenberg operator \(n_{x}(t)\) remains a two-body operator under evolution due to the absence of interactions, permitting the expansion,
\[n_{x}(t)=\sum_{m,n=1}^{N}A_{m,n}(t)c_{m}^{\dagger}c_{n}. \tag{6}\]
With the initial condition, \(A_{m,n}(0)=\delta_{m,x}\delta_{n,x}\). We transform into the coordinates \(\ell=n-m\)[56] and \(\mathcal{R}=n+m\) representing the operator length and center-of-mass. Because the noise-averaged operator equation is translation invariant in \(\mathcal{R}\) in our models, a Fourier transformation maps Eq. (3) to equations for \(A_{\ell,k}\) describing a one-dimensional hopping model on a fictitious lattice of operator length \(\ell\) with the center of mass momentum \(k\) [see Fig. 1]. The correlation function, in terms of the coefficients is given by, \(\frac{1}{8\pi}\int dkA_{0,k}(t)e^{ik(x-y)}\), where \(A_{\ell,k}(t)\) is the time-evolved wavefunction of the
Figure 2: _Bond and Onsite Dephasing Noise._ (a) Real part of eigenvalue spectrum with both onsite and bond dephasing noise. The yellow curve is the diffusive mode corresponding to Eq. (13). The red line indicates the continuum of scattering states, and the blue curve is a degenerate set of complex energies. (b) Diffusion constant from Eq. (13). When \(\Gamma=0\) the diffusion constant decreases from a ballistic (\(\mathcal{V}\to 0\)) to an emergent localization regime when \(\mathcal{V}\to\infty\). As \(\Gamma\) reaches the minimum \(\sqrt{6}J-\mathcal{V}\) then _increases_ monotonically into a noise-assisted transport regime. Parameters: (a) \(N=400\), \(\gamma=0\), \(\Gamma/J=2\), \(\mathcal{V}/J=2\).
effective hopping model and \(A_{\ell,k}(0)=\delta_{\ell,0}\). At finite noise, the effective model is non-Hermitian, where the non-positive real parts of the eigenvalues drive the system to the steady state in the late-time limit, corresponding to the eigenvalue with the maximal real part, namely, the eigenstate decays slowest during time evolution.
_Bond and Onsite Dephasing Noise._-- We now focus our model in Eq. (1) on nearest-neighbor hopping with dephasing noise on both bonds and sites. Specifically, we define the parameters,
\[J_{x,x+1}=J,\quad\Gamma_{x,x}=\mathcal{V},\quad\Gamma_{x,x+1}=\Gamma. \tag{7}\]
Here \(J\) is the nearest-neighbor coherent hopping, \(\mathcal{V}\) and \(\Gamma\) are the onsite and bond dephasing strength, respectively. The eigenvalue equations of Eq. (3) take the form
\[\mathcal{E}_{q}A_{0} =t_{k}\big{[}A_{1}-A_{-1}\big{]}-4\Gamma\sin^{2}(k)A_{0}\] \[\mathcal{E}_{q}A_{\pm 1} =\pm t_{k}\big{[}A_{\pm 2}-A_{0}\big{]}+\Gamma A_{\mp 1}-\big{[} \mathcal{V}+2\Gamma\big{]}A_{\pm 1}\] \[\mathcal{E}_{q}A_{\ell} =t_{k}\big{[}A_{\ell+1}-A_{\ell-1}\big{]}-\big{[}\mathcal{V}+2 \Gamma\big{]}A_{\ell}. \tag{8}\]
We dropped the index \(k\) in \(A_{\ell,k}\) for simplicity, and \(q\) labels different levels of the eigenvalue equation. The first two equations are the boundary conditions near the origin of the fictitious operator length lattice, and the third describes the bulk for \(|\ell|>1\) with the effective hopping, \(t_{k}=2J\sin(k)\). There are two well known limits of Eq. (8); no noise, \(\Gamma=\mathcal{V}=0\), and pure dephasing, \(J=0\). In the former case, the model is purely coherent, leading to the correlation function,
\[C_{x,y}(t)=\frac{1}{4}\mathcal{J}_{x-y}^{2}(2Jt). \tag{9}\]
Here \(\mathcal{J}_{x-y}(2Jt)\) is the Bessel function of the first kind of order \(x-y\). The asymptotic behavior of the correlation function, \(\lim_{t\to\infty}C_{N/2,N/2}(t)=1/\pi t\), indicates ballistic transport with an exponent \(z=1.0\). In the latter case (\(J=0\) or equivalently \(t_{k}=0\)), the operator length \(\ell=0\) decouples from all other operator lengths, mapping to the diffusion equation, with the solution,
\[C_{x,y}(t)=\frac{1}{4}e^{-2\Gamma t}\mathcal{I}_{x-y}(2\Gamma t). \tag{10}\]
Here \(\mathcal{I}_{x-y}(2\Gamma t)\) is the modified Bessel function of the first kind of order \(x-y\). The asymptotic scaling of Eq. (10) is \(\lim_{t\to\infty}C_{N/2,N/2}(t)=1/2\sqrt{t\pi}\), corresponding to the exponent \(z=2\). Including a static potential that couples to the density does not affect the diffusive mode because it commutes with the local charge \(n_{x}\) and bond dephasing leaves \(n_{x}\) unchanged. Generically, including any static term that commutes with the local charge, even the density-density interaction, \(n_{x}n_{y}\), will not affect the diffusive hydrodynamic mode.
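As a quick numerical illustration of this pure-dephasing limit, the sketch below (Python; the value of \(\Gamma\) and the time window are arbitrary choices) evaluates Eq. (10) at \(x=y\) using scipy's exponentially scaled modified Bessel function and extracts the dynamical exponent of Eq. (5) from the late-time log-log slope.

```python
import numpy as np
from scipy.special import ive   # ive(0, x) = exp(-|x|) * I_0(x)

Gamma = 1.0                      # bond-dephasing rate (illustrative units)
t = np.logspace(2, 4, 200)       # late times

# Eq. (10) at x = y: C(t) = (1/4) exp(-2*Gamma*t) * I_0(2*Gamma*t)
C = 0.25 * ive(0, 2 * Gamma * t)

# Eq. (5): C(t) ~ t^(-1/z); fit the log-log slope at late times
slope = np.polyfit(np.log(t), np.log(C), 1)[0]
print(f"fitted -1/z = {slope:.3f}  ->  z = {-1.0 / slope:.2f}")   # expect z ~ 2 (diffusion)
```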
Now we solve Eq. (8) for general \(J\), \(\mathcal{V}\) and \(\Gamma\). It is similar to the standard Schrodinger equation with a \(\delta\) potential; both scattering and bound states exist in the spectrum, whereby the bulk equation fixes the real part of the scattering states energy to be \(-\big{[}\mathcal{V}+2\Gamma\big{]}\) [see red line in Fig. 2(a)]. Translation invariance of Eq. (8) permits the ansatz,
\[A_{\ell}=\begin{cases}A_{-1}e^{q(1+\ell)}&\text{if }\ell\leq-1\\ -A_{1}e^{q(1-\ell)+i\pi\ell}&\text{if }\ell\geq 1.\end{cases} \tag{11}\]
Inserting the above solution into the bulk equation gives the energy, \(\mathcal{E}_{q}=4J\sin(k)\sinh(q)-\mathcal{V}-2\Gamma\). The boundary conditions for \(|\ell|\leq 1\) constrain the values of \(q\) through [see SM [57]],
\[\big{[}\mathcal{E}_{q}+4\Gamma\sin^{2}(k)\big{]}\big{[}t_{k}e^{q}+\Gamma \big{]}=-2t_{k}^{2}. \tag{12}\]
The above equation is an exactly solvable cubic equation, which at small \(k\) admits two physical solutions, one that begins at \(\mathcal{E}_{q}=0\) [see yellow curve in Fig. 2(a)] and the other at \(\mathcal{E}_{q}=-[3\Gamma+\mathcal{V}]\) [lowest branch in Fig. 2(a)]. The branch in Fig. 2(a) beginning at \(\mathcal{E}_{q}=-[\Gamma+\mathcal{V}]\) is determined by solving Eq. (8) assuming \(A_{0}=0\). Moreover, the gapless bound state energy is given by,
\[\mathcal{E}_{q}=-4\bigg{[}\Gamma+\frac{2J^{2}}{\mathcal{V}+3\Gamma}\bigg{]}k^ {2}. \tag{13}\]
A diffusive mode always exists at small momentum regardless of whether the sites or the hopping have finite dephasing [see the yellow curve in Fig. 2(a)]. When both \(\mathcal{V},\Gamma\to 0\), the diffusion constant diverges, which is reminiscent of ballistic transport in the coherent limit. The corresponding result with only onsite or only bond dephasing noise was obtained previously [58; 36]. In general, the diffusion constant decreases monotonically with increasing onsite dephasing \(\mathcal{V}\) because an energy barrier from site-to-site impedes coherent hopping. In particular, in the absence of bond dephasing, the diffusion constant is zero in the large \(\mathcal{V}\) limit, indicating an emergent localization. As illustrated in Fig. 2(b), the diffusion constant displays non-monotonic behavior as a function of bond dephasing \(\Gamma\). Specifically, as \(\Gamma\) increases, the diffusion constant reaches a minimum at \(\Gamma=(\sqrt{6}J-\mathcal{V})/3\) (assuming \(\mathcal{V}<\sqrt{6}J\)), and then _increases_ monotonically, entering a regime of noise-assisted transport [59; 42; 60].
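These statements can be checked directly. The sketch below (Python, with an illustrative truncation length and parameter values not taken from the figures) builds the non-hermitian matrix of Eq. (8) on a finite operator-length lattice, compares its slowest-decaying eigenvalue at small \(k\) with the bound-state energy of Eq. (13), and locates the minimum of the resulting diffusion constant.

```python
import numpy as np

def hopping_matrix(k, J=1.0, V=1.0, Gamma=0.5, L=100):
    """Truncated non-hermitian hopping matrix of Eq. (8) for operator lengths -L..L."""
    dim = 2 * L + 1
    idx = lambda ell: ell + L
    tk = 2.0 * J * np.sin(k)
    M = np.zeros((dim, dim))
    for ell in range(-L, L + 1):
        if ell < L:
            M[idx(ell), idx(ell + 1)] = tk                    # +t_k A_{l+1}
        if ell > -L:
            M[idx(ell), idx(ell - 1)] = -tk                   # -t_k A_{l-1}
        M[idx(ell), idx(ell)] = -(V + 2 * Gamma) if ell else -4 * Gamma * np.sin(k) ** 2
    M[idx(1), idx(-1)] += Gamma                               # Gamma A_{-1} in the l = +1 row
    M[idx(-1), idx(1)] += Gamma                               # Gamma A_{+1} in the l = -1 row
    return M

J, V, Gamma, k = 1.0, 1.0, 0.5, 0.02
E_num = np.linalg.eigvals(hopping_matrix(k, J, V, Gamma)).real.max()   # slowest-decaying mode
E_13 = -4 * (Gamma + 2 * J**2 / (V + 3 * Gamma)) * k**2                # Eq. (13)
print(E_num, E_13)                        # should approximately agree at small k

# Diffusion constant D(Gamma) = 4*Gamma + 8*J^2/(V + 3*Gamma): minimum at (sqrt(6)J - V)/3
Gammas = np.linspace(0.01, 3.0, 3000)
D = 4 * Gammas + 8 * J**2 / (V + 3 * Gammas)
print(Gammas[np.argmin(D)], (np.sqrt(6) * J - V) / 3)
```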
As momentum increases, two interesting characteristics become apparent. First, the diffusive mode undergoes a bound-to-scattering state phase transition upon entering a scattering state continuum at \(\mathcal{E}_{q}=-2\Gamma-\mathcal{V}\). Then, from the non-hermitian characteristic of Eq. (8), there is an exceptional-like point [61; 62] where the two physical solutions of Eq. (12) collide and coalesce, becoming a complex conjugate pair of energies visualized by the doubly degenerate points in Fig. 2(a) indicated with a blue curve.
_Linear Potential with Bond and Onsite Dephasing._-- In the clean limit of the previous examples, the system exhibited ballistic transport [see Eq. (9)]. However, no
matter how weak the noise or where it couples, finite noise causes diffusive transport. We now turn our attention to the opposite limit, in which the clean system is localized and the diffusion constant vanishes. We will study Wannier-Stark localization in the presence of noise [63; 64; 65; 66]. Specifically, consider the linear potential \(J_{x,x}=-\gamma x\), where \(\gamma\) is the slope, with the noise coupled to both the hopping and the density. We now study the competition between these two noise models through the equation,
\[\mathcal{E}_{q}A_{\ell,k}=t_{k}\big{[}A_{\ell+1,k}-A_{\ell-1,k}\big{]}+\big{[} i\gamma\ell-2\Gamma-\mathcal{V}\big{]}A_{\ell,k}. \tag{14}\]
The bulk operator equation is no longer translation invariant in \(\ell\), which permitted the plane wave ansatz Eq. (11). Solving the recursion relation, \(A_{\ell}\) instead takes the form,
\[A_{\ell,k}=\begin{cases}A\mathcal{I}_{\nu_{-}}(-2it_{k}/\gamma)&\text{if }\ell<-1\\ B\mathcal{I}_{\nu_{+}}(-2it_{k}/\gamma)&\text{if }\ell>1.\end{cases} \tag{15}\]
where \(\nu_{\pm}=i(\mathcal{E}_{q}+2\Gamma+\mathcal{V})/\gamma\pm\ell\). For \(\mathcal{V}=\Gamma=0\) the operator equations are anti-hermitian leading to an equally spaced tower of purely imaginary eigenvalues, \(\mathcal{E}_{q}=i\gamma q\) for \(q\in\{-\ell_{\text{max}},\ell_{\text{max}}\}\) independent of momentum \(k\). The corresponding unnormalized eigenstates are \(A_{\ell,k}=\mathcal{I}_{\ell-q}(-4iJ\sin(k)/\gamma)\) which are Wannier-Stark localized [67; 68; 69; 70]. Finite noise renders the operator equations non-hermitian, causing an eigenvalue to become purely real, which is the long wavelength mode. In the SM [57], we determine the scaling of the hydrodynamic mode,
\[\mathcal{E}_{q}=-8\bigg{[}\frac{\Gamma}{2}+\frac{J^{2}(\mathcal{V}+\Gamma)}{ \gamma^{2}+(\mathcal{V}+\Gamma)(\mathcal{V}+3\Gamma)}\bigg{]}k^{2}, \tag{16}\]
which is diffusive for finite noise, similar to Anderson localized models with global noise [71; 72; 29], but different from local noise models [73; 74]. In the limit \(\gamma=0\), we recover the bound state energy Eq. (13), while in the limit where either \(\mathcal{V}\) or \(\Gamma\) is large, the diffusion constant remains finite, specifically \(4\Gamma\), indicating that Stark localization is unstable to noise.
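A short numerical check of these limits, using Eq. (16) and Eq. (13) with arbitrary illustrative parameter values, is sketched below.

```python
import numpy as np

def D_stark(J, V, Gamma, gamma):
    """Diffusion constant read off from Eq. (16), E_q = -D k^2."""
    return 8 * (Gamma / 2 + J**2 * (V + Gamma) /
                (gamma**2 + (V + Gamma) * (V + 3 * Gamma)))

def D_clean(J, V, Gamma):
    """Diffusion constant from Eq. (13), i.e. without the linear potential."""
    return 4 * (Gamma + 2 * J**2 / (V + 3 * Gamma))

J, V, Gamma = 1.0, 1.0, 0.5
print(D_stark(J, V, Gamma, gamma=1e-8), D_clean(J, V, Gamma))   # gamma -> 0 recovers Eq. (13)
print(D_stark(J, 50.0, Gamma, gamma=0.5), 4 * Gamma)            # strong dephasing: D -> 4*Gamma
```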
In Fig. 3(a) and (b), we plot the heatmap of the diffusion constant with \(\gamma<0.5\) and \(\gamma=0.5\). In both cases, the model is Stark localized when \(\mathcal{V}=\Gamma=0\). When \(\gamma<0.5\), initially, there is a regime where increasing \(\Gamma\) or \(\mathcal{V}\) leads to noise-assisted transport to a maximum value [see Fig. 3(c) or (d)]. Increasing noise further in either direction introduces an energy barrier that overcomes the linear potential, suppressing diffusion; however, when \(\Gamma>\gamma\), diffusion enhances once more as if the linear potential was nonexistent [see the black curve for \(\gamma=0\) in Fig. 3(c) or Fig. 2(b)]. As \(\gamma\to 0.5\), the non-monotonic behavior decreases and is lost when \(\gamma>0.5\), whereby diffusion immediately enters a noise-assisted transport regime. On the other hand, the on-site dephasing dominates the linear potential as \(\mathcal{V}\) increases [see Fig. 3(d)], introducing an energy barrier and decreasing the diffusion constant.
We first study the operator dynamics of Eq. (14) with only onsite dephasing present, i.e., \(\Gamma=0\). When \(\mathcal{V}\ll\gamma\) the diffusion constant is small, and Bloch oscillations push diffusion to later times [see Fig. 4(a)], unlike the case when \(\mathcal{V}\) is the dominant energy scale. In contrast, diffusion occurs almost immediately when the noise is on the bonds [see Fig. 4(b)], i.e., \(\mathcal{V}=0\); a consequence of the diffusion constant always being finite regardless of the linear potential strength.
_Conclusion._-- Through a combination of analytics and large-scale numerics, this work developed a transport model where the operator dynamics arise from the competition between randomly driven and static couplings. We exactly solve for the diffusion constant by determining the emergent bound state of an effective one-dimensional non-hermitian hopping model. In contrast to standard hydrodynamic theories [75; 76], the diffusion constant scales non-monotonically with noise strength. For pure dephasing, the noise-averaged equation satisfies the diffusion equation, which is robust to arbitrarily strong static terms that commute with the local charge, including interactions. As momentum increases, the bound state enters a continuum of scattering states and
Figure 3: _Diffusion Constant Phase Diagram_. (a) Diffusion constant from Eq. (16) with the linear potential strength \(\gamma=0.15\). Inset: Illustration of the non-monotonicity along both axes. (b) Same as in (a) with \(\gamma=0.50\) where the non-monotonic behavior arises only along \(\Gamma=0\). Inset: Illustration of non-monotonicity along the onsite dephasing axis only. (c) Diffusion constant with \(\mathcal{V}=0\). Provided \(\gamma<0.5\) there is an initial noise assisted regime to a maximum value, where then bond dephasing introduces an energy barrier, suppressing diffusion. Once \(\Gamma>\gamma\) diffusion enhances as if the linear potential was absent [see black curve for \(\gamma=0\) or Fig. 2(b)]. As \(\gamma\to 0.5\) the non-monotonic behavior is lost, and diffusion immediately enters a noise-assisted transport regime. (d) While when \(\Gamma=0\) noise compensates for the energy barrier from the linear potential, enhancing transport to a maximum. As \(\mathcal{V}\) increases further, the onsite dephasing dominates the linear potential, introducing an energy barrier and decreasing the diffusion constant. Parameters: The dotted black curves in (a) and (b) indicate a maximum or minimum.
vanishes. Surprisingly, beyond the bound-to-scattering state phase transition, the bound state reemerges at an exceptional-like point. We further find Stark localization is unstable to onsite and bond dephasing noise, but illustrates a rich phase diagram where diffusion enters regimes of enhancement and suppression. Future work could be understanding transport when the model has long-range hopping or correlating the noise [77].
_Acknowledgement.--_ We thank Lakshya Agarwal, Joaquin F. Rodriguez-Nieva, and Artem Abanov for useful discussions. We also thank Mark Mitchison for pointing out related results from previous works. The numerical simulations in this work were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing.
|
2310.16943 | Illuminating evaporating protostellar outflows: ERIS/SPIFFIER reveals
the dissociation and ionization of HH 900 | Protostellar jets and outflows are signposts of active star formation. In H
II regions, molecular tracers like CO only reveal embedded portions of the
outflow. Outside the natal cloud, outflows are dissociated, ionized, and
eventually completely ablated, leaving behind only the high-density jet core.
Before this process is complete, there should be a phase where the outflow is
partially molecular and partially ionized. In this paper, we capture the HH 900
outflow while this process is in action. New observations from the
ERIS/SPIFFIER near-IR integral field unit (IFU) spectrograph using the K-middle
filter ($\lambda$=2.06-2.34 $\mu$m) reveal H$_2$ emission from the dissociating
outflow and Br-$\gamma$ tracing its ionized skin. Both lines trace the
wide-angle outflow morphology but H$_2$ only extends $\sim$5000 au into the H
II region while Br-$\gamma$ extends the full length of the outflow
($\sim$12,650 au), indicating rapid dissociation of the molecules. H$_2$ has
higher velocities further from the driving source, consistent with a jet-driven
outflow. Diagnostic line ratios indicate that photoexcitation, not just shocks,
contributes to the excitation in the outflow. We argue that HH 900 is the first
clear example of an evaporating molecular outflow and predict that a large
column of neutral material that may be detectable with ALMA accompanies the
dissociating molecules. Results from this study will help guide the
interpretation of near-IR images of externally irradiated jets and outflows
such as those obtained with the James Webb Space Telescope (JWST) in high-mass
star-forming regions where these conditions may be common. | Megan Reiter, Thomas J. Haworth, Carlo F. Manara, Suzanne Ramsay, Pamela D. Klaassen, Dominika Itrich, Anna F. McLeod | 2023-10-25T19:24:32Z | http://arxiv.org/abs/2310.16943v1 | Illuminating evaporating protostellar outflows: ERIS/SPIFFIER reveals the dissociation and ionization of HH 900+
###### Abstract
Protostellar jets and outflows are signposts of active star formation. In H ii regions, molecular tracers like CO only reveal embedded portions of the outflow. Outside the natal cloud, outflows are dissociated, ionized, and eventually completely ablated, leaving behind only the high-density jet core. Before this process is complete, there should be a phase where the outflow is partially molecular and partially ionized. In this paper, we capture the HH 900 outflow while this process is in action. New observations from the ERIS/SPIFFIER near-IR integral field unit (IFU) spectrograph using the K-middle filter (\(\lambda\)=2.06-2.34 \(\mu\)m) reveal H\({}_{2}\) emission from the dissociating outflow and Br-\(\gamma\) tracing its ionized skin. Both lines trace the wide-angle outflow morphology but H\({}_{2}\) only extends \(\sim\)5000 au into the H ii region while Br-\(\gamma\) extends the full length of the outflow (\(\sim\)12,650 au), indicating rapid dissociation of the molecules. H\({}_{2}\) has higher velocities further from the driving source, consistent with a jet-driven outflow. Diagnostic line ratios indicate that photoexcitation, not just shocks, contributes to the excitation in the outflow. We argue that HH 900 is the first clear example of an evaporating molecular outflow and predict that a large column of neutral material that may be detectable with ALMA accompanies the dissociating molecules. Results from this study will help guide the interpretation of near-IR images of externally irradiated jets and outflows such as those obtained with the _James Webb Space Telescope (JWST)_ in high-mass star-forming regions where these conditions may be common.
keywords: Herbig-Haro objects - ISM: jets and outflows - photodissociation region (PDR) - Infrared: ISM - stars: formation - stars: protostars
## 1 Introduction
Protostellar jets and outflows are produced by actively accreting young stellar objects (YSOs). Fast, collimated jets launched close to the YSO are often seen at visual wavelengths in hydrogen recombination and forbidden emission lines that are excited in shocks. Molecular emission lines in the millimetre (mm) and infrared (IR) trace outflows that may be launched directly from the circumstellar disk or entrained from the surrounding medium by an underlying jet. Jets and outflows may coexist and which component is more prominent / visible depends on the environment and evolutionary stage of the source (see Bally 2016, for a recent review).
A few well-studied examples like HH 46/47 have played an important role in understanding the relationship between jets and outflows. HH 46/47 is a parsec-scale bipolar outflow (Stanke et al. 1999). The fast, collimated jet that emerges from an embedded young stellar object (YSO) is well-studied in bright recombination and forbidden lines that trace shock-excited gas in the jet body (e.g., Heathcote et al. 1996; Hartigan et al. 2005, 2011; Erkal et al. 2021). On the opposite side of the embedded driving source, a classic molecular outflow propagates into the natal cloud with a wide-angle morphology traced by both the _Spitzer Space Telescope_(Noriega-Crespo et al. 2004) and the Atacama Large Millimeter Array (ALMA; Arce et al. 2013; Zhang et al. 2016).
Most of our knowledge about jets and outflows comes from sources like HH 46/47 - those located in nearby star-forming regions that are forming primarily or exclusively low-mass stars. In regions with high-mass stars, the observational picture may look quite different. Copious UV photons from the most massive stars illuminate the surrounding cloud, including jets and outflows that emerge into the H ii region. External irradiation renders the entire jet body visible (Bally et al. 2006a; Smith et al. 2010), unlike jets in more quiescent regions where the only material emitting at visual and infrared wavelengths has been heated and excited in shock fronts (e.g., Bally & Reipurth 2001). This is a distinct advantage for measuring the mass-loss rate (and thus accretion history of the driving source) because it can be
determined using the physics of photoionized gas rather than complex, non-linear, and time-dependent shock models (Reipurth et al., 1998; Bally et al., 2006b).
Once jets and outflows emerge into the H ii region, molecules will quickly be dissociated. Traditional molecular outflow tracers, like CO, will only be seen in portions of the flow that remain embedded in the cloud (e.g., Cortes-Rangel et al., 2020). Outside the cloud, the outflow will be dissociated, ionized, and finally, completely ablated, leaving behind only the high-density jet core. Before this process is complete, there should be a phase where the molecular outflow is only partially dissociated with an ionized skin. The idea of an ionized outflow has been proposed for a few sources in the Carina Nebula (HH 666 and HH 900; Smith et al., 2004; Hartigan et al., 2015; Reiter et al., 2015, 2015, 2015). So far, there have been no unambiguous detections of a molecular outflow that extends into the H ii region.
In this paper, we present the first clear case of an evaporating molecular outflow. The HH 900 jet+outflow, emerges from a small, tadpole-shaped globule located in the heart of the Carina Nebula (see Figure 1). Dozens of O- and B-type stars in the nearby Trumpler 16 (Tr16) star cluster illuminate the system and may have triggered its formation (Reiter et al., 2020). H\(\alpha\) emission traces a wide-angle bipolar outflow that emerges from the opaque globule (Smith et al., 2010). Unusually for an outflow, the H\(\alpha\) emission appears to taper as it gets further from the driving source. This is the morphology expected for a jet-driven outflow (see, e.g., Arce et al., 2007), consistent with H\(\alpha\) tracing the ionized skin of the outflow driven by an underlying jet (seen in [Fe ii]; Reiter et al., 2015, 2019).
CO observations with ALMA reveal a bipolar molecular outflow that extends only to the edge of the globule where it abruptly ends (Reiter et al., 2020). In seeing-limited near-IR narrowband images, H\({}_{2}\) emission extends from the globule edge along the outflow axis (Hartigan et al., 2015). Optical integral field unit spectroscopy from the Multi-Unit Spectroscopic Explorer (MUSE) revealed extended [C i] emission from the same region as the H\({}_{2}\)(Reiter et al., 2019). [C i] is often observed to be coincident with H\({}_{2}\) in partially molecular gas (e.g., Escalante et al., 1991), suggesting that the region where both lines are detected traces the transition between the fully molecular and fully ionized portions of the outflow. Together, these observations strongly suggest that HH 900 has a dissociating molecular outflow.
To test this hypothesis, we obtained new near-IR integral field unit spectroscopic observations from the Enhanced Resolution Imager and Spectrograph (ERIS) on the Very Large Telescope (VLT). These high spatial and spectral resolution observations allow us to probe the three key features we expect if HH 900 has a dissociating molecular outflow, namely: (1) the same morphology in molecular (H\({}_{2}\)) and ionized (Br-\(\gamma\)) gas; (2) similar kinematics in molecular (H\({}_{2}\)) and ionized (Br-\(\gamma\)) gas that are distinct from the fast, collimated jet (Reiter et al., 2015); and (3) gas that is primarily photoexcited. These data provide comparable angular resolution and spatial coverage to previous observations obtained with the _Hubble Space Telescope (HST)_/ACS, VLT/MUSE, and ALMA, providing a comprehensive view of the outflow.
## 2 Observations
### SPIFFIER/ERIS
We observed HH 900 (RA=10:45:19.3, Dec=\(-\)59:44:23) using the recently-commissioned SPIFFIER near-IR integral field unit spectrograph of the VLT/ERIS instrument (Davies et al., 2018, 2023). SPIFFIER is the refurbished integral field unit spectrograph SPIFFI (SPectrometer for Infrared Faint Field Imaging) that was previously part of the Spectrograph for INtegral Field Observations in the Near Infrared (SINFONI). Data were obtained as part of the ERIS Science Verification 1 for Pr. Id. 110.257T (PI: M. Reiter) on the night of 05 December 2022.
Footnote 1: [https://www.eso.org/sci/activities/vltsw/errissv.html](https://www.eso.org/sci/activities/vltsw/errissv.html)
We used the lowest resolution plate scale which provides \(125\times 250\) mas spatial pixels (spaxels) over an \(8\arcsec\times 8\arcsec\) field-of-view. Two overlapping pointings capture the entire extent of the HH 900 jet+outflow (excluding distant bow shocks). Observations were obtained in good weather with seeing \(\lesssim\)0.8\(\arcsec\). We also utilized laser guide star (LGS) adaptive optics (AO) correction to further improve the image quality.
We used the high-resolution K-middle filter (\(\lambda\)=2.06-2.34 \(\mu\)m) which covers the H\({}_{2}\) line at 2.12 \(\mu\)m and Br\(\gamma\) at 2.16 \(\mu\)m simultaneously with spectral resolution R\(\sim\)11,200. The corresponding velocity resolution is \(\sim\)25 km s\({}^{-1}\).
Data were reduced using a beta version of the ERIS pipeline provided by ESO at the time of the Science Verification run and executed through the ESO Reflex workflow (Freudling et al., 2013). The pipeline corrects the data for the instrumental signatures, i.e. darks and flats, calibrates a wavelength solution using the associated arc lamps, applies a field distortion mapping, computes and subtracts the sky contribution, and generates a resampled 3D data cube. This is then used in the following analysis.
The systemic velocity of HH 900 was measured by Reiter et al. (2020) who found \(v_{\rm LSR}=-33.5\) km s\({}^{-1}\). For the coordinates of Carina, \(v_{\rm helio}\approx v_{\rm LSR}+11.6\) km s\({}^{-1}\)(Kiminki & Smith, 2018). Using this, we compute a heliocentric velocity of \(v_{\rm helio}=-21.9\) km s\({}^{-1}\) for HH 900. All velocities in this paper are reported relative to the heliocentric velocity of HH 900.
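For reference, the velocity bookkeeping used throughout the paper can be written as a small helper; the numerical constants are those quoted above.

```python
V_LSR_TO_HELIO = 11.6    # km/s, LSR-to-heliocentric offset for Carina (Kiminki & Smith 2018)
V_SYS_LSR = -33.5        # km/s, HH 900 systemic LSR velocity (Reiter et al. 2020)

def to_hh900_frame(v_helio_kms):
    """Shift a heliocentric velocity (km/s) into the HH 900 rest frame used in this paper."""
    v_sys_helio = V_SYS_LSR + V_LSR_TO_HELIO    # = -21.9 km/s
    return v_helio_kms - v_sys_helio

print(to_hh900_frame(-21.9))   # 0.0: the systemic velocity itself
```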
### Archival data
To align the ERIS/SPIFIER data with the archival images, we use header astrometry for preliminary registration, then apply additional
Figure 1: An H\(\alpha\) image from _HST_ showing the HH 900 jet+outflow. HH 900 emerges from a tadpole-shaped globule that is seen in silhouette against the bright background of the H ii region. The jet-driving source is unseen inside the opaque globule. Two additional candidate YSOs are seen outside the globule, PCYC 838 and PCYC 842.
linear offsets to align the point sources near the globule and outflow. The spatially-resolved globule and tadpole tail provide a second check for data with few point sources (i.e. from ALMA). Data were additionally rotated and shifted to minimize subtraction residuals. Typical alignment uncertainties are on the order of a pixel (\(\sim\)0.15\({}^{\prime\prime}\)).
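The offset-refinement step can be sketched as a brute-force search for the shift that minimises the subtraction residuals; the use of scipy.ndimage interpolation and the search grid below are illustrative choices rather than the exact procedure applied to the data.

```python
import numpy as np
from scipy import ndimage

def best_offset(image, reference, max_shift=3.0, step=0.25):
    """Return the (dy, dx) pixel offset minimising the squared subtraction residuals.

    Assumes `image` and `reference` share the same pixel grid and are free of NaNs.
    """
    offsets = np.arange(-max_shift, max_shift + step, step)
    best, best_resid = (0.0, 0.0), np.inf
    for dy in offsets:
        for dx in offsets:
            shifted = ndimage.shift(image, (dy, dx), order=1, mode="nearest")
            resid = np.sum((shifted - reference) ** 2)
            if resid < best_resid:
                best, best_resid = (dy, dx), resid
    return best
```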
#### 2.2.1 HST H\(\alpha\) and [Fe ii]
A narrowband H\(\alpha\) image was obtained with the F658N filter on the Advanced Camera for Surveys (ACS) on 04 August 2014 (programme GO-13390, PI: N. Smith). The narrowband [Fe ii] 1.64 \(\mu\)m (F164N) and offline continuum (F167N) images were obtained with the infrared channel of the Wide-Field Camera 3 (WFC3-IR) on 28 December 2013 (programme GO-13391, PI: N. Smith). Details of these data and their reduction are presented in Reiter et al. (2015).
#### 2.2.2 Muse
The HH 900 system was observed with the Multi-Unit Spectroscopic Explorer (MUSE) integral field unit spectrograph on the VLT 03 April 2018 (programme ID 0101.C-0391(A); PI: M. Reiter). These observations utilised the GALACSI Adaptive Optics module in Wide Field Mode (WFM) to provide \(\sim\)0.8\({}^{\prime\prime}\) angular resolution over the 1\({}^{\prime}\times\) 1\({}^{\prime}\) field-of-view. MUSE provides spectral coverage from 4650-9300A with a gap between \(\sim\)5800-5950A for the laser guide stars with spectral resolution \(R=2000-4000\). Details of those observations may be found in Reiter et al. (2019).
#### 2.2.3 Alma
ALMA Band 6 observations of the HH 900 system were obtained on 08 May 2017 and 25 September 2017 using medium and long baseline configurations (programme ID 2016.1.01537.S, PI: A. Guzman). The maximum angular resolution is 0.02\({}^{\prime\prime}\), comparable to the resolution of H\(\alpha\) images from _HST_. CO lines were observed with a velocity resolution of 0.08 km s\({}^{-1}\). Analysis and details may be found in Reiter et al. (2020).
## 3 Results
We detected Br-\(\gamma\) and 3 bright H\({}_{2}\) lines with ERIS/SPIFFIER (see Table 1 and Figure 2). All of the detected lines are spatially extended, tracing the bipolar HH 900 outflow.
Bright emission from all lines also traces the ionization front on the globule surface. The tadpole tail is prominent in H\({}_{2}\) images, tracing the same morphology seen in silhouette in H\(\alpha\) (Smith et al., 2010) and in emission in CO (Reiter et al., 2020).
Both Br-\(\gamma\) and H\({}_{2}\) emission from HH 900 trace the wide-angle outflow, extending smoothly from where the CO outflow ends at the edge of the globule (see Figure 3 and Reiter et al., 2020). Br-\(\gamma\) emission traces the same morphology as H\(\alpha\), as expected. In the H ii region, H\({}_{2}\) traces a broadly similar morphology to the ionized outflow traced by H\(\alpha\) and Br-\(\gamma\) but the surface brightness is less uniform. H\({}_{2}\) appears limb-brightened with a measurable dip in the intensity at the mid-point of the outflow lobe (see Figure 4). H\({}_{2}\) emission is less extended, reaching only \(\sim\) 2.2\({}^{\prime\prime}\) (0.02 pc) from the globule, compared to \(\sim\) 5.5\({}^{\prime\prime}\) (0.06 pc) for H\(\alpha\) and Br-\(\gamma\). This is the same extent seen in seeing-limited H\({}_{2}\) images from Hartigan et al.
\begin{table}
\begin{tabular}{l l}
\hline
Line & wavelength \\
 & [\(\mu\)m] \\
\hline
H\({}_{2}\) 1-0 S(1) & 2.12183 \\
Br-\(\gamma\) & 2.16612 \\
H\({}_{2}\) 1-0 S(0) & 2.22329 \\
H\({}_{2}\) 2-1 S(1) & 2.24772 \\
\hline
\end{tabular}
\end{table}
Table 1: Primary emission lines detected in the HH 900 jet+outflow. Wavelengths for the H\({}_{2}\) lines are from Levenson et al. (2000); the Br-\(\gamma\) wavelength is from Chang & Denning (1996) via NIST. All wavelengths are in vacuum.
Figure 2: Integrated intensity (moment 0) maps of the lines detected in the ERIS SV data (see Table 1): (a) H\({}_{2}\) 1-0 S(1); (b) Br-\(\gamma\); (c) H\({}_{2}\) 1-0 S(0); and (d) H\({}_{2}\) 2-1 S(1).
(2015) and coincides with the [C i] emission seen with MUSE (see Figure 3 and Reiter et al., 2019).
In the following analysis, we refer to the H\({}_{2}\) 2.12 \(\mu\)m line simply as 'H\({}_{2}\)' and specify the transition of the other H\({}_{2}\) lines where used.
### Velocity structure
The HH 900 outflow lies close to the plane of the sky, with a tilt angle \(\lesssim 10^{\circ}\)(Reiter et al., 2015). The bulk of the outflow velocity is therefore captured in the tangential motions in the plane of the sky. We benefit from the high velocity resolution of SPIFFIER/ERIS to measure the more modest radial velocities from the outflow. To measure the velocity of the outflow components traced by H\({}_{2}\) and Br-\(\gamma\), we construct position-velocity (P-V) diagrams of each line (see Figure 5). We extract a slice 5 pixels wide through the center of the jet+outflow. Both H\({}_{2}\) and Br-\(\gamma\) show blueshifted emission from the western limb of the jet and redshifted emission to the east, consistent with other velocity measurements of the jet+outflow (Reiter et al., 2015, 2020).
To obtain a more precise estimate, we fit a Gaussian to the velocity profile in different slices across the outflow. We take the average emission in three-pixel-wide slices (the equivalent of three columns in the P-V diagrams). Line profiles are largely single-peaked. Low H\({}_{2}\) velocities between \(-3^{\prime\prime}\) and \(-4^{\prime\prime}\) (on the eastern side of the globule) are contaminated by the blueshifted tadpole tail (see Appendix B and Figure B1). Note that the tadpole tail is blueshifted on the same (eastern) side of the globule where the outflow is redshifted. Overall, H\({}_{2}\) velocities in the outflow are faster than Br-\(\gamma\).
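A minimal sketch of this per-slice centroid fit is given below; the array layout (velocity along the first axis of the P-V array) and the initial-guess values are assumptions of the sketch rather than a description of the actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma, offset):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2) + offset

def centroid_velocities(pv, velocities, width=3):
    """Fit a Gaussian to the mean profile of each `width`-column slice of a P-V array."""
    centers = []
    for start in range(0, pv.shape[1] - width + 1, width):
        profile = np.nanmean(pv[:, start:start + width], axis=1)
        p0 = [np.nanmax(profile), velocities[np.nanargmax(profile)], 10.0, np.nanmedian(profile)]
        try:
            popt, _ = curve_fit(gaussian, velocities, profile, p0=p0)
            centers.append(popt[1])            # best-fit centroid velocity
        except RuntimeError:                   # fit did not converge
            centers.append(np.nan)
    return np.array(centers)
```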
Velocity measurements for the HH 900 jet+outflow were previously reported by Reiter et al. (2015). Br-\(\gamma\) velocities are consistent with the marginally-resolved blueshifted velocities seen in the western limb of the outflow in H\(\alpha\) (which reach a maximum blueshifted velocity of \(\sim\)-16 km s\({}^{-1}\)). All outflow velocities traced by H\({}_{2}\) and Br-\(\gamma\) are slower than the [Fe ii] emission from the jet (\(\pm\sim\)30 km s\({}^{-1}\); see Appendix A).
### The YSOs
The HH 900 driving source remains unseen at near-IR wavelengths as it is deeply embedded inside the tadpole-shaped globule. However, we detect two point sources that are visible outside the globule (see Figure 1). Both were identified as candidate young stellar objects (YSOs) in the Pan Carina YSO Catalog (PCYC) by Povich et al. (2011). We use aperture extraction to isolate the stellar spectrum and remove the sky background. For both objects, the 'sky' background includes nebular emission from the H ii region and structured emission from either the edge of the globule or the limb-brightened HH 900 outflow. As a result, the Br-\(\gamma\) and H\({}_{2}\) line profiles may be contaminated with residual nebular emission. Spectra of both sources are shown in Appendix C.
The first source, PCYC 838, lies directly on top of the western limb of the HH 900 jet+outflow. We detect Br-\(\gamma\) in emission in the PCYC 838 YSO spectrum but no H\({}_{2}\) lines. Br-\(\gamma\) emission may indicate active accretion (e.g., Fairlamb et al., 2017). However, we do not detect other indicators of the circumstellar disk such as the CO bandhead in emission or absorption (e.g., Carr, 1989; Calvet et al., 1991). The absence of the molecular bands may point to a slightly higher mass for this object, consistent with the spectral type (roughly G-type) estimated from the optical spectrum (Reiter et al., 2019) and the mass estimated from the best-fitting model of the spectral energy distribution (\(2.5\pm 1.2\) M\({}_{\odot}\); Povich et al., 2011).
A second star, identified as candidate YSO PCYC 842 by Povich et al. (2011), lies at the bottom of the tadpole-shaped globule. No prominent emission lines (i.e. Br-\(\gamma\)) are detected in this source. No millimeter continuum or molecular line emission associated with this source was detected with ALMA (Reiter et al., 2020), suggesting a lack of circumstellar material. Indeed, Reiter et al. (2015) questioned whether this source is a YSO based on its motion _toward_ the globule and the low resolution of the data used for its original classification. The relatively featureless spectrum of this source does not provide evidence that this source is young or actively accreting.
## 4 HH 900 as an evaporating molecular outflow
We argue that the morphology, velocity, and excitation of the extended H\({}_{2}\) and Br-\(\gamma\) emission in HH 900 trace an ionized and evaporating outflow. In this picture, the molecular outflow is rapidly but not instantaneously dissociated once it emerges into the H ii region. Extended H\({}_{2}\) emission traces the extent (and therefore the time) that molecules survive in the H ii region. Spatially coincident [C i] emission seen with MUSE supports this interpretation (see Figure 3). The ionization potential of carbon is lower than hydrogen (11.26 eV compared to 13.6 eV), so [C i] is expected to coexist with partially dissociated H\({}_{2}\)(see, e.g., Osterbrock & Ferland, 2006).
High spatial and spectral resolution integral field unit spectroscopy with ERIS allows us to perform three key tests of this hypothesis. If HH 900 is an evaporating molecular outflow we expect: **(1)** the same morphology in molecules (H\({}_{2}\)) and recombination lines (Br-\(\gamma\)); **(2)**
Figure 3: **Top:** Grayscale image of H\({}_{2}\) from ERIS; red and blue contours show CO 2-12 from ALMA (emission is integrated from \(-\)30 km s\({}^{-1}\) to \(-\)11 km s\({}^{-1}\) and \(-\)57 km s\({}^{-1}\) to \(-\)35.75 km s\({}^{-1}\), respectively); white contours show [Fe ii] 1.64 \(\mu\)m _from HST_. **Bottom:** H\({}_{2}\) contours (red; 10-50 \(\sigma\) in steps of 5\(\sigma\)) on a [C i] 8727 Å image from MUSE.
similar outflow-like kinematics in molecular (H\({}_{2}\)) and recombination lines (Br-\(\gamma\)) that are distinct from the fast, collimated jet traced by [Fe ii]; and **(3)** photoexcitation dominating over shock excitation in the outflow. We discuss each test individually below.
### Morphology
H\({}_{2}\) emission extends beyond the edge of the globule, tracing the same wide-angle flow as Br-\(\gamma\) (see Figure 6). The H\({}_{2}\) appears limb-brightened, as though tracing the edges of an outflow cavity. The widths of the H\({}_{2}\) and Br-\(\gamma\) profiles are comparable for intensity tracings through the outflow (see Figure 4). This is in contrast to tracings through the globule itself that show an offset of \(\sim\)0.5\({}^{\prime\prime}\) between the Br-\(\gamma\) and H\({}_{2}\) emission peaks.
The key morphological difference between H\({}_{2}\) and Br-\(\gamma\) is their extent. H\({}_{2}\) reaches \(\sim\)2.2\({}^{\prime\prime}\) (0.02 pc) from the globule edge into the H ii region. The terminal edge of the limb-brightened H\({}_{2}\) outflow coincides with the onset of [Fe ii] emission from the jet (see Figure 6). Br-\(\gamma\) extends \(\sim\)5.5\({}^{\prime\prime}\) (0.06 pc) tracing the full length of the jet seen in H\(\alpha\) and [Fe ii]. This is consistent with an evaporating molecular outflow if H\({}_{2}\) disappears at the point where molecules are completely dissociated.
### Outflow velocities
Outflow velocities traced by H\({}_{2}\) increase with distance from the driving source. This Hubble-like flow is characteristic of jet-driven outflows (see Figure 2 from Arce et al., 2007). The highest H\({}_{2}\) velocities are \(\pm\sim\) 20 km s\({}^{-1}\), remarkably similar to the highest velocities seen from the embedded CO outflow (Reiter et al., 2020). H\({}_{2}\) velocities are \(\sim\)10 km s\({}^{-1}\) faster than Br-\(\gamma\) in both limbs of the outflow (see Figure 5) and for all slices across the globule (see Appendix B). This velocity difference is comparable to the sound speed in ionized gas (\(c_{s}\sim\) 11 km s\({}^{-1}\)), the expected velocity of a photoevaporative flow.
The fastest velocities measured in H\({}_{2}\) and Br-\(\gamma\) are both lower than in the collimated [Fe ii] jet where Reiter et al. (2015, 2019) found fast, steady velocities up to \(\pm\)30 km s\({}^{-1}\).
Figure 4: **Top:** Intensity tracings showing H\({}_{2}\) (red), Br-\(\gamma\) (blue), and [Fe ii] (black dashed line) along the length of the outflow **(a)**. H\({}_{2}\) extends a small distance from the globule but tapers off before [Fe ii] from the jet is first detected. Br-\(\gamma\) is bright throughout the length of the outflow. **Middle:** H\({}_{2}\) image with lines showing the location of the horizontal (red line) and vertical (black dotted lines) intensity tracing locations **(b)**. **Bottom:** Vertical intensity tracings ordered from east to west. In the eastern-most slice **(c)**, both lines trace the outflow but H\({}_{2}\) is dominated by emission from the tadpole tail. Closer to the globule **(d)**, H\({}_{2}\) and Br-\(\gamma\) trace the wide-angle outflow. A tracing through the globule itself **(e)** reveals H\({}_{2}\) offset inside Br-\(\gamma\), consistent with a steady-state photodissociation region on the surface of a post-collapse globule (see Reiter et al., 2020). The western-most slice through the HH 900 outflow **(f)** shows the wide opening angle traced by both components and illustrates the limb-brightening of the H\({}_{2}\) flow.
Br-\(\gamma\) outflow velocities are consistent with zero while the fastest H\({}_{2}\) velocities are closer to 30 km s\({}^{-1}\). Several other outflows have been observed to have a nested or 'onion-layer' structure with fast, highly collimated components on the jet axis surrounded by slower, wider angle layers (e.g. Lavalley-Fouquet et al., 2000; Bacciotti et al., 2000; Pyo et al., 2003; Coffey et al., 2008). This same structure is seen in HH 900 with [Fe ii] emission tracing the fast core of the jet while H\({}_{2}\) traces molecular layers either lifted from the disk or entrained from the envelope/globule. On the surface is the ionized Br-\(\gamma\) emission where the strong external radiation field first encounters the slowest, widest layers of the outflow.
### Excitation
We measure two different emission line ratios in the spatially-resolved outflow to determine the excitation of the gas. All four emission lines detected with these ERIS observations are within a narrow wavelength range so differences in wavelength-dependent extinction are negligible. We expect H\({}_{2}\)/Br-\(\gamma\)\(<\)1 for a dissociating outflow where photoexcitation will play a dominant role (the expected ratio in shock-excited gas is \(>\)1; see discussion in e.g., Yeh et al., 2015). The H\({}_{2}\) 1-0 S(1) / H\({}_{2}\) 2-1 S(1) ratio provides an additional diagnostic. For shock-excited gas, models predict a line ratio \(\gtrsim\) 10 (Shull & Hollenbach, 1978); for photo-excited gas, the expected line ratio is \(\sim\) 1.5 - 3 (Black & van Dishoeck, 1987).
Line ratio maps are shown in Figure 7. We take the ratio of the integrated intensity (moment 0) obtained by integrating each line over an interval \(\pm\sim\)4 A from the line center. Ambient nebular emission is estimated from the median value of the sky above and below HH 900.
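The construction of these maps amounts to a few array operations; a minimal sketch (assuming a reduced cube with wavelength along the first axis, wavelengths and windows in the same units, and user-supplied sky rows) is:

```python
import numpy as np

def moment0(cube, wavelengths, line_center, half_width=4.0):
    """Integrated intensity over line_center +/- half_width, per spaxel."""
    in_window = np.abs(wavelengths - line_center) <= half_width
    dlam = np.median(np.diff(wavelengths))
    return np.nansum(cube[in_window], axis=0) * dlam

def ratio_map(cube, wavelengths, line_a, line_b, sky_rows):
    """Nebular-background-subtracted moment-0 ratio map, e.g. H2 2.12 um / Br-gamma."""
    maps = []
    for line in (line_a, line_b):
        m0 = moment0(cube, wavelengths, line)
        m0 = m0 - np.nanmedian(m0[sky_rows])   # ambient emission from sky rows above/below HH 900
        maps.append(m0)
    return maps[0] / maps[1]
```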
The H\({}_{2}\)/Br-\(\gamma\) ratio is \(\geq\) 1 throughout the HH 900 system. The
Figure 5: **Top:** H\({}_{2}\) image showing the location and width of the slice used to make the P-V diagrams. **Middle:** H\({}_{2}\) and Br-\(\gamma\) P-V diagrams and a third P-V diagram showing H\({}_{2}\) contours (red) on the Br-\(\gamma\) P-V diagram. **Bottom:** The best-fit velocity derived from fitting a Gaussian to velocity slices along the jet axis (see Section 3.1).
Figure 6: H\({}_{2}\) contours (black) on a Br-\(\gamma\) grayscale image (**a**). [Fe ii] 1.64 \(\mu\)m contours on an H\({}_{2}\) image (**b**) and Br-\(\gamma\) image (**c**). H\({}_{2}\) contours on an H\(\alpha\) image from _HST_ (**d**). H\({}_{2}\) contours are 10–50\(\sigma\) in steps of 5\(\sigma\); [Fe ii] contours are 3–30\(\sigma\) in steps of 3\(\sigma\).
highest values are seen on the eastern edge of the globule and in a few knots in the tadpole tail indicating that shock-excitation contributes in these portions of the system. Ratios in the outflow are lower but increase to higher values toward the outflow edges. No part of the system has H\({}_{2}\)/Br-\(\gamma<1\).
The interpretation of the H\({}_{2}\) 1-0 S(1) / H\({}_{2}\) 2-1 S(1) ratio map is less clear. All values in the H\({}_{2}\) 1-0 S(1) / H\({}_{2}\) 2-1 S(1) ratio map are \(<\)10, below the range expected for shock-excited gas (\(\gtrsim\) 10). However, throughout the map, the flux ratio is \(>\)3, larger than expected for pure photoexcitation. Unlike the H\({}_{2}\)/Br-\(\gamma\) ratio map, the globule edge is not prominent in the H\({}_{2}\) 1-0 S(1) / H\({}_{2}\) 2-1 S(1) ratio map and there are no prominent bright spots in the tadpole tail.
Based on these ratio maps, the simplest interpretation is that a combination of shocks and photoexcitation contribute to the observed emission. Note that the typical H\({}_{2}\) knot sizes seen in the spectacular HH 212 jet are \(\sim\)250\(-\)500 au (Zinnecker et al., 1998). At the distance of Carina (2.35 kpc; Goppl & Preibisch, 2022), this corresponds to \(<0.25^{\prime\prime}\) - too small to be resolved with ERIS.
## 5 Discussion
Evaporating molecular outflows are the missing piece that connects (cold) molecular outflows seen inside clouds with the hot ionized jets laid bare in H ii regions. Once outflows leave the protection of their natal cocoons, external irradiation rapidly heats, dissociates, and ionizes them. This explains the abrupt end of CO outflows like HH 900 at the edge of the globule (or dust pillar in the case of HH 901 and HH 902, see Cortes-Rangel et al., 2020).
In HH 900, the CO outflow morphology connects smoothly with the ionized outflow traced by H\(\alpha\) and Br-\(\gamma\) that is seen outside the globule. Near-IR [Fe ii] emission traces the fast, collimated jet that bisects the wide-angle ionized outflow. However, this component is first detected \(\gtrsim\)1.5\({}^{\prime\prime}\) from the edges of the globule. Non-instantaneous dissociation of the molecular outflow provides one explanation for this gap. In this region, all UV photons are consumed ionizing the skin of the outflow (Br-\(\gamma\); H\(\alpha\)) and dissociating molecules just inside it (H\({}_{2}\); [C i]), exhausting the high energy photons before they can reach the core and ionize Fe (first ionization requires 7.6 eV). Essentially, slices along the outflow at different distances from the globule trace the development and progression of a photodissociation region (PDR) through the column of outflowing gas (see Figure 9).
In a less punishing environment than Carina, HH 900 would look like a traditional molecular outflow with CO tracing more of its extent. The HH 900 YSO remains deeply embedded in the globule with the only detection to date in the millimeter continuum with ALMA. This suggests that the source evolutionary stage is in the Class 0/I regime, consistent with HH 900 being in the most active outflow phase (we discuss the mass-loss rates in Section 5.1).
One of the outstanding challenges for understanding the local and large-scale impact of protostellar jets and outflows is obtaining a full mass census. Typically, only one component is visible: the underlying (atomic) jet is unseen in molecular outflows while optical and near-IR images reveal collimated atomic jets that have largely escaped their natal clouds and may no longer be surrounded by a molecular outflow. External illumination reveals all of these components in the HH 900 jet+outflow. The system is younger than other well-studied examples of externally irradiated jets (e.g., in Orion, Bally & Reipurth, 2001; Bally et al., 2006; Kirwan et al., 2023), capturing a consequential but not well understood outflow phase.
External irradiation in the H ii region alters the observable diagnostics but provides an extraordinary opportunity because it also illuminates the entire body of the jet+outflow. Mass-loss rates can therefore be estimated using the well-understood physics of photoionized gas instead of non-linear and time-dependent shock models. Following Bally et al. (2006), we can use the well-studied environment to determine the dissociation time, and therefore the mass-loss rate, of the dissociating molecular outflow.
### Mass-loss rate
We make a simple estimate of the photodissociation rate of molecular hydrogen entrained in the outflow. The free space photodissociation rate is \(D_{0}=5.18\times 10^{-11}\,\chi\,\mathrm{s}^{-1}\)(Sternberg et al., 2014; Bialy, 2020) where \(\chi\) is the Draine UV radiation field strength (Draine, 1978). Ignoring shielding, the photodissociation rate of H\({}_{2}\) at the surface of a slab irradiated from one side (the outflow in our case) is
\[R_{\mathrm{phot}}=\frac{1}{2}\chi 5.18\times 10^{-11}\,\mathrm{s}^{-1}. \tag{1}\]
From Smith (2006), the FUV luminosity of Tr16 is log(L\({}_{\mathrm{FUV}}\)) = 6.79 L\({}_{\odot}\). We convert this to the local flux assuming the median distance to the OB stars in Tr16 from Alexander et al. (2016). This gives an incident Draine UV radiation field of \(\chi=10^{3}\), corresponding to a dissociation timescale of \(\sim\)1.2 years. For an outflow velocity of 30 km s\({}^{-1}\), this gives a distance of \(\sim\)8 au for the extent of the H\({}_{2}\) outflow. This is \(\sim\)3 orders of magnitude smaller than the observed H\({}_{2}\) extent of HH 900 (\(\sim\)5000 au, see Section 4.1).
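The arithmetic behind these numbers can be checked with a few lines of code. The sketch below assumes \(\chi=10^{3}\) and a 30 km s\(^{-1}\) flow speed, as adopted above, and simply evaluates Eq. (1) and the corresponding travel distance.

```python
chi = 1.0e3                            # Draine field strength adopted above
R_phot = 0.5 * chi * 5.18e-11          # Eq. (1), free-space rate in s^-1
t_phot = 1.0 / R_phot                  # dissociation timescale in seconds

yr, au = 3.156e7, 1.496e13             # seconds per year, cm per au
v_out = 30.0e5                         # 30 km/s in cm/s

print(f"t_phot ~ {t_phot / yr:.1f} yr")             # ~1.2 yr
print(f"H2 extent ~ {v_out * t_phot / au:.0f} au")  # ~8 au
```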
This simple estimate ignores the effect of shielding. However, H\({}_{2}\) can be shielded by dust (which may also be present in the outflow; Smith et al., 2010; Reiter et al., 2019, 2020) or it can self-shield, provided a sufficiently large H\({}_{2}\) column (Sternberg et al., 2014). Including self-shielding, the dissociation rate becomes
\[R_{\mathrm{phot}}=\frac{1}{2}\chi 5.18\times 10^{-11}f_{\mathrm{shield}}(N_{ \mathrm{H}_{2}})\,\mathrm{exp}\left(-\tau_{g}\right)\,\mathrm{s}^{-1}. \tag{2}\]
where \(\tau_{g}=\sigma_{g}N_{\mathrm{H}_{2}}\) is the dust column optical depth and
Figure 7: **Top**: Intensity map showing the H\({}_{2}\) 1-0 S(1)/ Br-\(\gamma\) line ratio. Values \(<\)1 indicate that photoexcitation dominates. **Bottom:** Intensity maps of the H\({}_{2}\) 1-0 S(1)/ H\({}_{2}\) 2-1 S(1) line ratio. The expected ratio is \(\sim\)3 in photoexcited gas, \(\sim\)10 in shock-excited gas.
\(f_{\rm shield}(N_{\rm H_{2}})\) is the H\({}_{2}\) self-shielding parameter; both depend on the H\({}_{2}\) column \(N_{\rm H_{2}}\). We follow Bialy (2020) and adopt \(\sigma_{g}=1.9\times 10^{-21}\)cm\({}^{2}\). For \(f_{\rm shield}(N_{\rm H_{2}})\) we use the Draine & Bertoldi (1996) fit
\[f_{\rm shield}(N_{\rm H_{2}})=\frac{0.965}{(1+x/b_{5})^{2}}+\frac{0.035}{(1+x)^{1/2}}\\ \times\exp\left[-8.5\times 10^{-4}\,(1+x)^{1/2}\right] \tag{3}\]
where \(x=N_{\rm H_{2}}/(5\times 10^{14}\,\rm cm^{-2})\) and \(b_{5}=b/(10^{5}\,\rm cm\,s^{-1})=2\), i.e. a Doppler broadening parameter of \(b=2\) km s\({}^{-1}\). This gives an H\({}_{2}\) photodissociation timescale that depends only on the H\({}_{2}\) column for a given \(\chi\). We solve this expression for the H\({}_{2}\) column that enables the H\({}_{2}\) outflow to extend \(\sim 5\times 10^{3}\) au with a flow velocity of \(\sim 30\) km s\({}^{-1}\). We calculate the photodissociation timescale \(t_{\rm phot}=1/R_{\rm phot}\) and multiply by a characteristic speed (30 km s\({}^{-1}\)) to obtain the maximum observable H\({}_{2}\) extent.
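A minimal sketch of this calculation is given below. It implements Eqs. (2)-(3) with the dust cross-section from Bialy (2020) and assumes \(b_{5}=2\); because \(\chi\) and the shielding inputs are quoted only approximately, the exact numbers from this sketch track Figure 8 only qualitatively, and the column that matches the observed \(\sim 5\times 10^{3}\) au extent should be read as indicative rather than exact.

```python
import numpy as np

chi = 1.0e3                    # Draine field strength adopted in the text
sigma_g = 1.9e-21              # dust cross-section per H2 (cm^2), Bialy (2020)
b5 = 2.0                       # Doppler parameter b in units of 10^5 cm/s (assumed b = 2 km/s)
v_out, au = 30.0e5, 1.496e13   # flow speed (cm/s) and cm per au

def f_shield(N_H2):
    """Draine & Bertoldi (1996) H2 self-shielding factor, Eq. (3)."""
    x = N_H2 / 5.0e14
    return (0.965 / (1.0 + x / b5) ** 2
            + 0.035 / np.sqrt(1.0 + x) * np.exp(-8.5e-4 * np.sqrt(1.0 + x)))

def extent_au(N_H2):
    """Maximum observable H2 extent: t_phot = 1/R_phot times the flow speed."""
    R = 0.5 * chi * 5.18e-11 * f_shield(N_H2) * np.exp(-sigma_g * N_H2)
    return v_out / R / au

for N in (1e16, 1e17, 1e18):
    print(f"N_H2 = {N:.0e} cm^-2  ->  extent ~ {extent_au(N):.0f} au")
```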
Results from this simple analysis are shown in Figure 8. We estimate that an H\({}_{2}\) column of \(\sim 1\times 10^{18}\) cm\({}^{-2}\) is required to reproduce the observed extent of the outflow. Assuming the outflow is a cylinder, we convert this to a volume density by dividing by the observed width of the outflow. For an H\({}_{2}\) column density of \(\sim 10^{18}\) cm\({}^{-2}\) and a jet width of \(2r_{\rm out}\)\(\sim\)1.5\(\arcsec\) (3450 au), the H\({}_{2}\) volume density is \(n_{\rm H_{2}}\sim 20\) cm\({}^{-3}\). We compute the mass-loss rate of the H\({}_{2}\) outflow assuming a constant density and radius in a cylindrical jet
\[\dot{M}_{\rm H_{2}}=n_{\rm H_{2}}m_{\rm H_{2}}\pi r_{\rm out}^{2}v_{\rm out} \tag{4}\]
where \(m_{\rm H_{2}}\) is the mass of molecular hydrogen and \(v_{\rm out}=30\) km s\({}^{-1}\) is the characteristic velocity of the outflow. From this, we estimate \(\dot{M}_{\rm H_{2}}\approx 1.6\times 10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\).
This simple estimate yields a mass-loss rate two orders of magnitude smaller than estimated for the ionized component, \(\dot{M}_{\rm H\alpha}\approx 5.7\times 10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) (Smith et al., 2010). The mass-loss rate of the molecular outflow inside the globule is an order of magnitude higher than the ionized component with an average \(\dot{M}_{\rm CO}\approx 3.5\times 10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\) (Reiter et al., 2020). The highest mass-loss rate is measured in the low-ionization jet core with \(\dot{M}_{\rm[Fe\,II]}\approx 1.7\times 10^{-5}\) M\({}_{\odot}\) yr\({}^{-1}\) (Reiter et al., 2016).
The large discrepancy between the mass-loss rates in the molecular outflow measured inside and outside the globule is not entirely surprising. We ignore any dust shielding in the outflow despite indirect evidence for dust in the outflow. In addition, we argue that H\({}_{2}\) emission traces partially dissociated molecular gas. The column of _neutral_ material tracing the brief phase between dissociation and ionization should increase once the outflow is in the H ii region. Near the edge of the globule where the outflow first emerges into the H ii region, we expect the sum of the molecular (H\({}_{2}\)), neutral ([C i]), and ionized (H\(\alpha\) or Br-\(\gamma\)) components to be comparable to the CO mass-loss rate measured inside the globule. This implies that there is a large column of neutral material in the outflow. Future observations of neutral gas tracers like [C i] with ALMA may reveal the mass and extent of this neutral component.
Finally, we note that the low volume density we estimate in H\({}_{2}\) is consistent with photoexcited gas. Black & van Dishoeck (1987) predict that, in steady state, the boundary layer should have \(n({\rm H_{2}})<<n({\rm H})\). We derive an \(n({\rm H_{2}})\) that is 13\(\times\) smaller than the \(n_{e}\) reported by Smith et al. (2010), consistent with this prediction.
## 6 Conclusions
In H ii regions, jets and outflows that emerge from their natal clouds will be externally illuminated, dissociated, and ionized by UV photons from nearby high-mass stars. In this paper, we present new near-IR integral field unit spectroscopy from ERIS/SPIFIER that provides the first clear evidence that HH 900 is a dissociating and evaporating molecular outflow. Extended H\({}_{2}\) and Br-\(\gamma\) emission trace the wide-angle outflow as it smoothly extends from the edge of the CO outflow. H\({}_{2}\) emission extends \(\sim\)2.2\(\arcsec\) (0.02 pc) from the globule edge, tracing the molecule survival time. The ionized outflow, traced by Br-\(\gamma\), reveals the full extent of the outflow as it reaches \(\sim\)5.5\(\arcsec\) into the H ii region.
These new spatially and spectrally resolved observations of HH 900 allow us to perform three tests to confirm that HH 900 is an evaporating molecular outflow. First, we show that H\({}_{2}\) and Br-\(\gamma\) trace the same morphology as long as molecules survive. Second, we show that both lines trace outflow-like kinematics. Velocities of both lines are modest where the outflow emerges from the globule,
Figure 8: The photodissociation timescale as a function of H\({}_{2}\) column (left axis). This is also presented in terms of the extent of molecular H\({}_{2}\) expected for a column travelling at 30 km s\({}^{-1}\) (right axis). This is calculated by multiplying the photodissociation timescale \(t_{\rm phot}=1/R_{\rm phot}\) by the typical flow speed. The horizontal black line is the observed extent of H\({}_{2}\) in the outflow.
Figure 9: Cartoon depiction of the jet observables (left) and the structure of the jet-outflow (right). Cold molecules only survive in the protection of the globule (CO; gray). In the H ii region, molecules are hot (H\({}_{2}\); red) before they are completely dissociated leaving the ionized outflow (Br-\(\gamma\); blue) surrounding the underlying jet ([Fe ii]; black).
then H\({}_{2}\) increases to speeds approaching the high velocity of the underlying jet seen in [Fe ii]. Velocities between each component differ by \(\sim\)10 km s\({}^{-1}\) with [Fe ii] tracing the fastest velocities, H\({}_{2}\) intermediate, and Br-\(\gamma\) the slowest, consistent with the layered velocity structure seen in other outflows. Third, diagnostic line ratios indicate a significant contribution from photoexcitation to the excitation in the outflow. Together, these tests provide strong evidence that an evaporating molecular outflow connects the cold (CO) outflow seen inside the globule with the hot ionized outflow (H\(\alpha\), Br-\(\gamma\)) seen in the H ii region.
To the best of our knowledge, this is the first direct evidence for an evaporating molecular outflow. As such, these observations provide an excellent test case for models of irradiated jets and outflows (i.e., Estrella-Trujillo et al., 2021). Finally, near-IR observations from the _James Webb Space Telescope_ (JWST) will soon be revealing jets and outflows in many high-mass star-forming regions (e.g., Reiter et al., 2022). These externally irradiated sources will likely have more in common with objects like HH 900 than the well-studied jets seen in more local, quiescent regions.
## Acknowledgements
We would like to thank the referee, Dr. William Henney, for a prompt and thoughtful report that improved the quality of the manuscript. We would like to thank Lowell Tacconi-Garman for support with the preparation, reduction, and analysis of the new ERIS/SPIFIER data. We wish to thank Nathan Smith for productive discussion about the velocity structure of HH 900. TJH is funded by a Royal Society Dorothy Hodgkin Fellowship and UKRI ERC guarantee funding (EP/Y024710/1). CFM is funded by the European Union (ERC, WANDA, 101309452). DI is funded by the European Research Council (ERC) via the ERC Synergy Grant ECOGAL (grant 855130). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
Based on observations collected at the European Southern Observatory under ESO programmes 110.257T.001 and 0101.C-0391(A). This paper makes use of the following ALMA data: ADS/JAO.ALMA #2016.1.01537.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work uses observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. The HST observations are associated with GO 13390 and GO 13391. This research made use of Astropy,4 a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018). This research made use of APLpy, an open-source plotting package for Python (Robitaille & Bressert, 2012).
Footnote 4: [http://www.astropy.org](http://www.astropy.org)
## Data Availability
The ERIS data that are presented in this paper are publicly available from the ESO archive5. Archival MUSE data are also available from the ESO archive. The ALMA data used in this study are publicly available from the ALMA archive6. Data from _HST_ are publicly available via the MAST archive7.
Footnote 5: [https://archive.eso.org/cms.html](https://archive.eso.org/cms.html)
Footnote 6: [https://almascience.nrao.edu/aug/result_view=observations](https://almascience.nrao.edu/aug/result_view=observations)
Footnote 7: [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)
|
2308.03862 | Hall Coefficient and Resistivity in the Doped Bilayer Hubbard Model | Finding and understanding non-Fermi liquid transport behaviors are at the
core of condensed matter physics. Most of the existing studies were devoted to
the monolayer Hubbard model, which is the simplest model that captures
essential features of high-temperature superconductivity. Here we discover a
new type of non-Fermi liquid behavior emergent in the hole-doped bilayer
Hubbard model, using dynamical mean-field theory with a full consideration of
the short-range interlayer electron correlation. We find that at low
temperatures, the Hall coefficient has a strong nonmonotonic dependence on
temperature, leading to a double or quadruple reversal of its sign depending on
the doping level. At the same time, the resistivity exhibits two plateaus
rather than linearity in its temperature dependence. We show that these
intriguing transport behaviors stem from the formation of coherent interlayer
singlets, which scatter off gapped collective modes arising from short-range
interlayer antiferromagnetic fluctuations. | Yin Shi, Jonathan Schirmer, Long-Qing Chen | 2023-08-07T18:15:25Z | http://arxiv.org/abs/2308.03862v1 | # Hall Coefficient and Resistivity in the Doped Bilayer Hubbard Model
###### Abstract
Finding and understanding non-Fermi liquid transport behaviors are at the core of condensed matter physics. Most of the existing studies were devoted to the monolayer Hubbard model, which is the simplest model that captures essential features of high-temperature superconductivity. Here we discover a new type of non-Fermi liquid behavior emergent in the hole-doped bilayer Hubbard model, using dynamical mean-field theory with a full consideration of the short-range interlayer electron correlation. We find that at low temperatures, the Hall coefficient has a strong nonmonotonic dependence on temperature, leading to a double or quadruple reversal of its sign depending on the doping level. At the same time, the resistivity exhibits two plateaus rather than linearity in its temperature dependence. We show that these intriguing transport behaviors stem from the formation of coherent interlayer singlets, which scatter off gapped collective modes arising from short-range interlayer antiferromagnetic fluctuations.
## I Introduction
Studying the magnetotransport properties of electron systems is a valuable way to learn about their electronic structure. For example, in high-temperature cuprate superconductors, the direct-current (DC) Hall resistivity has a strong temperature (\(T\)) dependence and changes its sign in the heavily overdoped regime [1; 2]. Meanwhile, the DC longitudinal resistivity in the normal state has linear temperature dependence and exceeds the Mott-Ioffe-Regel criterion [3; 4; 5], known as strange metallicity. In an atomically thin cuprate van der Waals heterostructure during cooling, the Hall resistivity decreases and changes from positive to negative and then reverses sign again before vanishing at low temperatures. This was explained by the vortex dynamics-based description of the Hall effect in high-temperature superconductors [6]. These behaviors are incompatible with the Fermi liquid theory of weakly interacting electrons and manifest the intricate nature of strongly correlated electron systems.
In efforts to understand the non-Fermi liquid behaviors, various authors have calculated the magnetotransport properties of the hole-doped Hubbard model using the quantum Monte Carlo method for small square lattices [7], the dynamical mean-field theory (DMFT) approximation for hypercubic [8] and square [9; 10; 11] lattices, and an expansion formula of the Hall coefficient for small square lattices [12]. A double sign change of the \(T\)-dependent DC Hall coefficient similar to that in cuprate superconductors has been observed [8; 12]. Recent numerical calculations for the square-lattice Hubbard model also revealed the \(T\)-linear DC longitudinal resistivity exceeding the Mott-Ioffe-Regel limit [13] and a \(T\)-linear electron scattering rate at low temperatures [14].
These works motivate us to investigate further the magnetotransport properties of a more complicated lattice model, the Hubbard model on a bilayer square lattice, in which electrons can form disordered interlayer singlets with a spin gap [15; 16; 17]. Accurately computing the conductivities of strongly correlated systems is notoriously difficult [18], and is frequently hindered by small lattice sizes, infinite expansion summations, or the omission of vertex corrections. We use the DMFT [19] to calculate the resistivities of the hole-doped bilayer Hubbard model. The DMFT works for the thermodynamic limit, but in this theory the vertex corrections to the in-plane conductivities cancel out due to the neglect of in-plane nonlocal correlations [20]. However, the short-range out-of-plane correlation is entirely accounted for in the Kubo bubble. This distinguishes our calculation from those for the monolayer Hubbard model.
We find that the Hall coefficient has a strong nonmonotonic \(T\) dependence at low temperatures and can change its sign twice or four times with decreasing temperature, depending on the doping level. Concomitantly, the longitudinal resistivity as a function of \(T\) acquires two plateaus that smoothly cross over to each other. These unfamiliar transport behaviors are shown to be associated with the formation of coherent interlayer singlets, which scatter off gapped collective modes arising from short-range interlayer antiferromagnetic fluctuations.
## II Model and Methods
The bilayer square-lattice Hubbard model consists of two square lattices stacked site-to-site. We consider only the nearest-neighbor intralayer hopping \(t\) and interlayer hopping \(t_{\perp}\). The Hamiltonian is
\[H=-\sum_{\ell=1}^{2}\sum_{\langle i,j\rangle,\sigma}t\,c^{\dagger}_{\ell i\sigma}c_{\ell j\sigma}-\sum_{i,\sigma}\left(t_{\perp}c^{\dagger}_{1i\sigma}c_{2i\sigma}+\text{H.c.}\right)+\sum_{\ell=1}^{2}\sum_{i}U\,n_{\ell i\uparrow}n_{\ell i\downarrow}.\]
Here \(c_{\ell i\sigma}\) (\(c_{\ell i\sigma}^{\dagger}\)) annihilates (creates) an electron of spin \(\sigma\) (\(=\uparrow,\downarrow\)) on the site \(i\) in the layer \(\ell\). \(U\) is the onsite Coulomb repulsion and \(n_{\ell i\sigma}=c_{\ell i\sigma}^{\dagger}c_{\ell i\sigma}\) is the electron number operator. The Kubo formulae for the longitudinal and Hall conductivities (sheet conductances) in a vanishing out-of-plane magnetic field \(B_{z}\to 0\) can be directly extended from those for the monolayer Hubbard model [8; 22; 23],
\[\sigma_{xx} = \frac{e^{2}\pi}{\hbar N}\sum_{\mathbf{k},\sigma}\left(\frac{ \partial\epsilon_{\mathbf{k}}}{\partial k_{x}}\right)^{2}\int d\omega\,\text{ Tr}[\hat{A}_{\mathbf{k}\sigma}(\omega)^{2}]\left[-\frac{df(\omega)}{d\omega} \right],\] \[\frac{\sigma_{xy}}{B_{z}} = \frac{2\pi^{2}e^{3}a^{2}}{3\hbar^{2}N}\sum_{\mathbf{k},\sigma} \left(\frac{\partial\epsilon_{\mathbf{k}}}{\partial k_{x}}\right)^{2}\frac{ \partial^{2}\epsilon_{\mathbf{k}}}{\partial k_{y}^{2}}\] \[\times\int d\omega\,\text{Tr}[\hat{A}_{\mathbf{k}\sigma}(\omega )^{3}]\left[-\frac{df(\omega)}{d\omega}\right].\]
Here \(e\) is the elementary charge magnitude, \(\hbar\) is the reduced Planck constant, \(a\) is the in-plane lattice constant, \(N\) is the number of unit cells in the lattice, and \(\mathbf{k}=(k_{x},k_{y})\) is the reciprocal vector in the first Brillouin zone. \(f(\omega)=(1+e^{\hbar\omega/T})^{-1}\) is the Fermi distribution function. The energy of the bonding or antibonding band up to a constant shift is \(\epsilon_{\mathbf{k}}=-2t(\cos k_{x}+\cos k_{y})\). \(\hat{A}\) is the spectral function, which is a matrix in the layer index space.
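As an illustration of how these expressions are evaluated in practice, the sketch below computes the \(\sigma_{xx}\) bubble on discrete \(\mathbf{k}\) and \(\omega\) grids with \(\hbar=e=a=1\) and energies in units of \(t\). It replaces the DMFT spectral function by a Lorentzian ansatz with a constant scattering rate \(\Gamma\) for the bonding and antibonding bands; \(\Gamma\), \(\mu\), and \(T\) are illustrative values and not outputs of the calculation described in this paper.

```python
import numpy as np

# Units: energies in t, hbar = e = a = 1.  Gamma, mu, and T are illustrative.
t, t_perp, T, mu, Gamma = 1.0, 1.2, 0.1, -0.5, 0.1

nk = 64
k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))        # in-plane dispersion eps_k
vx2 = (2.0 * t * np.sin(kx)) ** 2                 # (d eps_k / d k_x)^2

omega = np.linspace(-8.0, 8.0, 801)
dw = omega[1] - omega[0]
minus_dfdw = 0.25 / T / np.cosh(omega / (2.0 * T)) ** 2   # -df/domega

sigma_xx = 0.0
for shift in (-t_perp, +t_perp):                  # bonding / antibonding bands
    # Lorentzian ansatz for the band-diagonal spectral function; Tr[A^2] is the
    # sum of the two band contributions accumulated by this loop.
    A = (Gamma / np.pi) / ((omega[None, None, :] - (eps[:, :, None] + shift - mu)) ** 2 + Gamma ** 2)
    sigma_xx += np.sum(vx2[:, :, None] * A ** 2 * minus_dfdw[None, None, :]) * dw

sigma_xx *= 2.0 * np.pi / nk**2                   # spin sum (2) and pi/N prefactor
print(f"sigma_xx ~ {sigma_xx:.2f} (units of e^2/hbar)")
```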
We choose \(t_{\perp}=1.2t\) and \(U=10t\), which are relevant to the material VO\({}_{2}\)[15], a prototypical strongly correlated oxide with the vanadium dimer as the basic unit [24]. In the DMFT, we consider an interlayer dimer embedded in a self-consistent noninteracting electron bath, thereby fully taking into account the short-range interlayer electron correlation. We use the continuous-time auxiliary-field Monte Carlo method [25; 26] to solve the corresponding quantum impurity problem, accurately measuring the self-energy at all Matsubara frequencies. We then employ the recently developed maximum quantum entropy method [27; 28] to analytically continue the self-energy matrix to the real-frequency axis.
## III Transport coefficients
Figure 1 shows the calculated Hall coefficient \(R_{H}=\sigma_{xy}/(B_{z}\sigma_{xx}^{2})\) and longitudinal resistivity \(\rho_{xx}=\sigma_{xx}^{-1}\) as functions of the temperature at various hole doping levels \(p=1-\sum_{\sigma}\langle n_{\ell i\sigma}\rangle\). For \(T\gtrsim 0.1t\), the \(T\) dependence of \(R_{H}\) is similar for all doping levels shown, but \(R_{H}\) shifts downward with increasing doping. In this temperature range, as \(T\) increases, \(R_{H}\) decreases in \(0.1t\lesssim T\lesssim 0.13t\), then increases in \(0.13t\lesssim T\lesssim 0.5t\), and then decreases again for \(T\gtrsim 0.5t\). In \(0.67t\lesssim T\lesssim 1t\), \(R_{H}(T)\) changes more slowly with increasing doping and becomes almost flat at \(p=0.3\). Depending on the doping level, \(R_{H}\) can stay below zero throughout (\(p=0.3\)), or change sign once (\(p=0.2\)) or three times (\(p=0.25\)) in this range \(T\gtrsim 0.1t\).
For \(T\lesssim 0.1t\), the behaviors of \(R_{H}(T)\) at different doping levels are radically different. In this temperature range, the \(T\) dependence of \(R_{H}\) quickly weakens with increasing doping. At \(p=0.2\) and \(p=0.25\), \(R_{H}\) changes sign once due to its strong dependence on \(T\). But for a heavier doping \(p=0.3\), \(R_{H}(T)\) is almost a negative constant. Therefore, the total number of times \(R_{H}(T)\) changes its sign is zero at \(p=0.3\), two at \(p=0.2\), and as many as four at \(p=0.25\), in contrast to the single or double sign reversal normally observed in high-temperature superconductors [2; 6; 29] and the single-orbital Hubbard model [8; 12].
The longitudinal resistivity \(\rho_{xx}(T)\) also shows unfamiliar behavior (Fig. 1, right panel). There are two temper
Figure 1: Hall coefficient (left panel) and longitudinal resistivity (right panel) of the bilayer Hubbard model as functions of the temperature at various doping levels. The insets are close-up views of the lowest-temperature data. The solid lines are the guide to the eye. The dashed lines in the inset of the right panel are quadratic fits \(\rho_{xx}=\text{const.}\times T^{2}\). The error bars represent Monte Carlo sampling errors and errors arising from DMFT iterations [21], determined by four iterations starting from a converged solution.
ature ranges, \(0.26t\lesssim T\lesssim 0.5t\) and \(T\lesssim 0.1t\), where \(\rho_{xx}(T)\) is almost constant, and these ranges become broader for heavier doping. Especially at low temperatures, \(\rho_{xx}(T)\) deviates significantly from the quadratic fits \(\rho_{xx}=\text{const.}\times T^{2}\) expected for a Fermi liquid (Fig. 1, right panel, inset). Nevertheless, the quadratic fit is improved for heavier doping, along with the weaker \(T\) dependence of \(R_{H}\) for heavier doping (Fig. 1, left panel, inset), demonstrating that the system at low temperatures approaches the Fermi liquid phase as doping increases. The constant value of \(\rho_{xx}\) in \(T\lesssim 0.1t\) does not change much as the doping level is varied, whereas at high temperatures, \(\rho_{xx}\) exceeds the Mott-Ioffe-Regel limit (\(\sim\sqrt{2\pi}\hbar/e^{2}\approx 2.5\hbar/e^{2}\)[5]) and is lower for heavier doping consistent with more charge carriers.
## IV Mechanisms
To understand the anomalous behaviors of the Hall coefficient and longitudinal resistivity, we plot in Fig. 2 the single-particle excitation spectra \(A(\mathbf{k},\omega)=\sum_{\sigma}\text{Tr}\,\hat{A}_{\mathbf{k}\sigma}(\omega)\) and densities of states \(A(\omega)=\sum_{\mathbf{k}}A(\mathbf{k},\omega)/N\) at various temperatures for \(p=0.25\). The noninteracting band structure is also superimposed (dotted lines). In the noninteracting limit, at light doping, the bonding (lower-lying) band has a hole pocket at the \(M\) point and the antibonding (higher-lying) band has a smaller electron pocket at the \(\Gamma\) point, which is the case for \(p=0.25\). At heavy doping, both the bonding and antibonding bands have an electron pocket at the \(\Gamma\) point.
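The pocket structure quoted here follows directly from the noninteracting bands \(\epsilon_{\mathbf{k}}\mp t_{\perp}\). The short sketch below, with illustrative parameters, finds the zero-temperature chemical potential for a target doping by bisection and reports the filled fraction of each band; for \(p=0.25\) the bonding band is more than half filled (hole pocket at \(M\)) while the antibonding band is only partially filled (small electron pocket at \(\Gamma\)).

```python
import numpy as np

t, t_perp, p = 1.0, 1.2, 0.25             # hoppings (units of t) and hole doping (example)

nk = 400
k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
bands = np.stack([eps - t_perp, eps + t_perp])    # bonding, antibonding

def filling(mu):
    """Electron density per site at T = 0: spin factor 2 times the band-averaged occupation
    (the band average also accounts for the two sites per unit cell)."""
    return 2.0 * np.mean(bands < mu)

lo, hi = bands.min(), bands.max()
for _ in range(60):                               # bisection for mu such that filling = 1 - p
    mu = 0.5 * (lo + hi)
    lo, hi = (mu, hi) if filling(mu) < 1.0 - p else (lo, mu)

print(f"mu = {mu:.3f} t at p = {p}")
print(f"bonding band filled fraction:     {np.mean(bands[0] < mu):.2f}")   # > 0.5: hole pocket at M
print(f"antibonding band filled fraction: {np.mean(bands[1] < mu):.2f}")   # small: electron pocket at Gamma
```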
At a high temperature \(T=2t\), the spectrum is highly incoherent and continuous, showing only two broad Hubbard bands centered around \(\omega=-1t\) and \(\omega=10t\), respectively, with a pseudogap in between. Charge excitation across this pseudogap produces a negative \(R_{H}\)[8]. As \(T\) is lowered to \(1t\), the spectrum near the Fermi level becomes more coherent, and the peak of the density of states of the lower Hubbard band moves to a higher energy. Fewer electrons are excited onto the upper Hubbard band, leading to a more holelike \(R_{H}\), that is, a larger \(R_{H}\).
At a moderate temperature \(T=0.5t\), two dispersive quasiparticle bands develop near and above the Fermi level, whereas the two incoherent Hubbard bands persist. The quasiparticle bands are renormalized by the Kondo screening [15] to be approximately twice as narrow as their noninteracting counterparts. To see the interlayer Kondo screening, we show in the left panel of Fig. 3 the nearest-neighbor interlayer spin correlation function \(\langle S_{1}^{z}S_{2}^{z}\rangle\) as a function of temperature at various doping
Figure 2: Single-particle excitation spectra \(A(\mathbf{k},\omega)\) along a path connecting high-symmetry reciprocal points, and corresponding densities of states \(A(\omega)\) at indicated temperatures for \(p=0.25\). The black dotted lines are the noninteracting bands for the same doping. The Fermi level lies at \(\omega=0\).
levels, where \(S^{z}_{\ell i}=n_{\ell i\uparrow}-n_{\ell i\downarrow}\) is the spin density operator. The neighboring interlayer spins do tend to be antiparallel, screening each other. The singlet correlation strength peaks at a nonzero temperature that increases with increasing doping. For \(T\lesssim 0.5t\), the singlet correlation strength is significant [14] and considerably larger than that at \(T=2t\), compatible with the observed crossover from the totally incoherent spectrum at \(T=2t\) to the coherent bands at \(T=0.5t\), which can thus be viewed as interlayer singlet bands [15]. Compared to the spectrum at \(T=1t\), the hole pocket at the \(M\) point is relatively well defined thanks to the coherent singlet bands, while the electron pocket at the \(\Gamma\) point remains incoherent, leading to an overall positive \(R_{H}\) (Fig. 1, left panel).
As \(T\) is lowered from \(0.5t\) to \(0.267t\), the spectral function does not change much; especially the smearing of the quasiparticle bands at these two temperatures is similar, both being \(\sim 0.5t\) wide and having a spectral weight of \(\sim 3t^{-1}\). This is consistent with the Kondo resonance, in which the width and height of the resonance peak are temperature-independent and are only determined by one energy scale, the Kondo temperature [30]. In this temperature range, \(T\) is less than the smearing width of the quasiparticle bands, resulting in approximately \(T\)-constant resistivity (Fig. 1, right panel) reminiscent of resistivity saturation in the Kondo problem. There are two slight, yet visible changes in the spectral function as \(T\) is lowered from \(0.5t\) to \(0.267t\). One is that the electron pocket at the \(\Gamma\) point becomes more well defined and deeper on the way to the complete formation of the coherent singlet bands. The other is the appearance of a weak incoherent flat band with a smearing width of \(\sim 1t\) at the Fermi level, which hosts electrons. These two changes tend to reduce \(R_{H}\) (Fig. 1, left panel).
At a relatively low temperature \(T=0.1t\), a small single-particle gap _in the incoherent spectrum_ (\(\sim 0.4t\)) opens at the Fermi level. Concomitantly, the coherent singlet bands within the incoherent spectrum gap (spectral weight \(\sim 30t^{-1}\)) become much sharper and flatter than those at \(T=0.267t\) (spectral weight \(\sim 3t^{-1}\)). This results in a very sharp peak in the density of states at the Fermi level, which is the manifestation of a strong Kondo resonance. The opening of the gap in the incoherent spectrum depopulates the upper Hubbard band, and the hole pocket of the well-defined singlet bands is larger than its electron pocket, resulting in an increase in \(R_{H}(T)\) from \(T\simeq 0.13t\) to \(T\simeq 0.1t\) (Fig. 1, left panel).
As \(T\) decreases from \(0.1t\) to \(0.05t\), the quasiparticle bands again remain almost unchanged, with a \(\sim 0.1t\) smearing width and \(\sim 30t^{-1}\) spectral weight at the Fermi level. Therefore, the resistivity in \(T\lesssim 0.1t\) should also be approximately temperature-independent (Fig. 1, right panel). The two plateaus in \(\rho_{xx}(T)\) are smoothly connected by a crossover spanning from \(T\simeq 0.1t\) to \(T\simeq 0.26t\). At \(T=0.05t\), a flat band (not the quasiparticle band) with weak intensity (\(\sim 0.1t^{-1}\)) appears at the Fermi level, which hosts electrons leading to a decrease in \(R_{H}(T)\) from \(T\simeq 0.1t\) to \(T\simeq 0.05t\) (Fig. 1, left panel).
The non-Fermi liquid behavior down to very low temperatures also has a manifestation in the interlayer spin correlation, which does not vanish at zero temperature (Fig. 3, left panel). Instead, the singlet correlation extrapolated to zero temperature is \(\sim-0.06\), which is very close to the nearest-neighbor spin correlation responsible for the non-Fermi liquid scattering in the overdoped monolayer square-lattice Hubbard model at low temperatures [14]. A dynamical cluster approximation study of the doped bilayer Hubbard model suggested that there exists non-Fermi liquid behavior even in the absence of finite scattering rate at vanishing temperature, attributed to short-range interlayer antiferromagnetic fluc
Figure 3: Left panel: interlayer singlet correlation as a function of the temperature at various doping levels. The vertical dotted lines mark the temperatures of the maximum correlation strength, \(T_{m}\). The dashed lines are extrapolations to zero temperature by fitting the data for \(T<T_{m}\) to cubic functions. The arrows indicate the approximate boundaries of the two resistivity plateaus. Right panel: imaginary part of the self-energy in the low-energy range at \(p=0.25\) for the same temperatures as in Fig. 2.
tuation [31]. This observation is consistent with our results; nevertheless, we show that the resulting non-Fermi liquid behavior is not a \(T\)-linear resistivity but a resistivity plateau.
To show where the Kondo saturation comes from, we depict in the right panel of Fig. 3 the imaginary part of the self-energy, \(\Sigma(\omega)=\sum_{\sigma}\mathrm{Tr}\,\tilde{\Sigma}_{\sigma}(\omega)\), where \(\tilde{\Sigma}_{\sigma}(\omega)\) is the self-energy matrix of spin \(\sigma\) in the layer-index space. At the lowest temperature shown, \(T=0.05t\), \(-\Im\Sigma(\omega)\) has two peaks in the low-energy range located at \(\omega\simeq 0\) and \(\omega\simeq 0.9t\), respectively, which are separated by a gap spanning \(0.05t\lesssim\omega\lesssim 0.3t\equiv\omega_{g}\). These two peaks are absent in the DMFT result of the single-orbital Hubbard model [following the Fermi liquid behavior at low temperatures, \(-\Im\Sigma(\omega)\sim\omega^{2}\) at small \(\omega\)[19; 23]] and represent two low-energy scattering modes arising from short-range interlayer antiferromagnetic fluctuations. The zero-energy mode should be the simultaneous flip of both spins in the inert interlayer singlet, which costs no energy, while the second mode should be the flip of a single spin in the singlet. There is also a pseudogap at \(\omega\simeq 2t\) that separates the second scattering mode and a broad peak at a high energy \(\omega\simeq 7.8t\) (not shown) corresponding to scattering off doubly-occupied-site states.
The zero-energy scattering mode is responsible for nonzero resistivity at vanishing temperatures. For \(T\lesssim 0.1t\simeq\omega_{g}/3\), electrons near the Fermi level thermally fluctuate so weakly that they cannot cross the gap to scatter off the second scattering mode [32]. Therefore, the scattering rate \(-\Im\Sigma(0)\) at \(T=0.1t\) saturates and is close to that at \(T=0.05t\), leading to the lower resistivity plateau at \(T\lesssim 0.1t\). Similarly, the pseudogap renders the scattering rates \(-\Im\Sigma(0)\) at \(T\simeq 0.267t\) and \(T\simeq 0.5t\) both close to that of the second scattering mode \([-\Im\Sigma(0.9t)\) at \(T=0.05t]\), forming the higher resistivity plateau at \(0.26t\lesssim T\lesssim 0.5t\). But because the pseudogap is not a genuine gap, the higher resistivity plateau is not as flat as the lower resistivity plateau (Fig. 1, right panel). For heavier doping, the Kondo resonance peak will get wider [19; 23], i.e., the (pseudo-) gap in \(-\Im\Sigma(\omega)\) will become wider, resulting in wider resistivity plateaus.
## V Conclusion
In conclusion, we have calculated the Hall coefficient and longitudinal resistivity of the hole-doped bilayer Hubbard model and found that its transport properties are very different from those of the monolayer or single-orbital Hubbard model. The Hall coefficient has a strong nonmonotonic dependence on temperature at low temperatures, and it can change sign four times for some range of doping. The resistivity at low temperatures is not linear in temperature like that of strange metals or quadratic in temperature like that of Fermi liquids. Rather, it exhibits two plateaus with a smooth crossover between them. These anomalous transport behaviors can be traced back to the formation of coherent interlayer singlets, which scatter off gapped collective modes arising from short-range interlayer antiferromagnetic fluctuations.
Although we did not account for vertex corrections to the conductivity, the corresponding analysis for the single-band Hubbard model hints at the effect of the vertex corrections. Inclusion of the vertex corrections shifts the longitudinal resistivity downward, but preserves its temperature dependence [33; 34]. Neither does it alter the trend of the Hall coefficient as a function of temperature [7; 8; 12]. Therefore, we do not expect the vertex corrections to give rise to resistivity behaviors qualitatively different from our conclusions.
## Acknowledgements
This work was supported as part of the Computational Materials Sciences Program funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0020145. JS was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Grant No. DE-SC-0005042. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-SC-0020145 using NERSC award BES-ERCAP0023632.
|
2306.15574 | See Through the Fog: Curriculum Learning with Progressive Occlusion in
Medical Imaging | In recent years, deep learning models have revolutionized medical image
interpretation, offering substantial improvements in diagnostic accuracy.
However, these models often struggle with challenging images where critical
features are partially or fully occluded, which is a common scenario in
clinical practice. In this paper, we propose a novel curriculum learning-based
approach to train deep learning models to handle occluded medical images
effectively. Our method progressively introduces occlusion, starting from
clear, unobstructed images and gradually moving to images with increasing
occlusion levels. This ordered learning process, akin to human learning, allows
the model to first grasp simple, discernable patterns and subsequently build
upon this knowledge to understand more complicated, occluded scenarios.
Furthermore, we present three novel occlusion synthesis methods, namely
Wasserstein Curriculum Learning (WCL), Information Adaptive Learning (IAL), and
Geodesic Curriculum Learning (GCL). Our extensive experiments on diverse
medical image datasets demonstrate substantial improvements in model robustness
and diagnostic accuracy over conventional training methodologies. | Pradeep Singh, Kishore Babu Nampalle, Uppala Vivek Narayan, Balasubramanian Raman | 2023-06-27T15:53:20Z | http://arxiv.org/abs/2306.15574v2 | # **See Through the Fog: Curriculum Learning with Progressive Occlusion in Medical Imaging**
###### Abstract
In recent years, deep learning models have revolutionized medical image interpretation, offering substantial improvements in diagnostic accuracy. However, these models often struggle with challenging images where critical features are partially or fully occluded, which is a common scenario in clinical practice. In this paper, we propose a novel curriculum learning-based approach to train deep learning models to handle occluded medical images effectively. Our method progressively introduces occlusion, starting from clear, unobstructed images and gradually moving to images with increasing occlusion levels. This ordered learning process, akin to human learning, allows the model to first grasp simple, discernable patterns and subsequently build upon this knowledge to understand more complicated, occluded scenarios. Furthermore, we present three novel occlusion synthesis methods, namely **Wasserstein Curriculum Learning** (WCL), **Information Adaptive Learning** (IAL), and **Geodesic Curriculum Learning** (GCL). Our extensive experiments on diverse medical image datasets demonstrate substantial improvements in model robustness and diagnostic accuracy over conventional training methodologies.
## 1 Introduction
Medical imaging plays a pivotal role in modern healthcare, providing critical information for diagnosis, treatment planning, and disease monitoring. However, accurate interpretation
of these images remains a challenging task, largely due to their complex nature and the extensive variation observed among patients. In recent years, deep learning has emerged as a promising tool to augment the capabilities of medical practitioners, facilitating better and faster interpretation of medical images.
Deep learning models, particularly convolutional neural networks (CNNs), have shown impressive performance in tasks such as image classification, object detection, and semantic segmentation [21]. Their ability to automatically learn hierarchical representations from raw data makes them ideally suited for medical image analysis [22]. Despite their potential, however, the performance of these models can be significantly affected by the presence of occlusion in images - where important features or objects are partially or fully obscured [14]. This problem is of particular concern in the medical domain, where images often contain overlapping structures or may be occluded by medical instruments, implants, or artifacts.
Curriculum learning is an approach inspired by the way humans and animals learn, where the learning process starts from easy examples and gradually moves to more complex ones. This is seen in our educational curriculum where we learn numbers before algebra, sentences before essays. This idea can be applied to machine learning: training a model on simpler tasks or examples before more complex ones can make the learning process more effective. This notion was introduced by Bengio et al. in 2009 [2], postulating that a model could learn more effectively and efficiently if it first learns to recognize easily distinguishable patterns and progressively handles more difficult concepts. The authors investigate the value of a structured, incremental approach to training machine learning models, particularly neural networks. This approach can help the model to find better or more appropriate local minima in the error surface. The training data is sorted in a meaningful order that presents simpler concepts before more complex ones. This is opposed to the traditional method of presenting training examples randomly. This structured presentation of data could lead to a sort of "scaffolding" where knowledge is built incrementally and complexities are added progressively.
In the experiments presented in the paper [2], Bengio and his team demonstrate the potential benefits of curriculum learning in several contexts, including learning to recognize shapes, language modeling, and other tasks. They show that a curriculum can help improve generalization and speed up training, suggesting that this kind of structured learning can be a valuable tool in training deep learning models. However, _one of the main challenges that the paper highlights is how to define and design a "curriculum" for a given problem_. It remains an open problem and a potential area of research.
In this paper, we propose a novel application of curriculum learning to tackle the challenge of occlusion in medical images. We argue that, by progressively increasing the complexity of training examples in terms of occlusion, deep learning models can learn more robust and accurate representations. We start by training the model with clear, unobstructed images, and then gradually introduce images with varying levels of occlusion. This staged learning process enables the model to initially grasp the simple, discernable patterns in the data and subsequently apply this foundational knowledge to understand more complicated, occluded scenarios. Our approach aims to improve the robustness and diagnostic accuracy of deep learning models in real-world medical applications, where images are often not perfect and occlusion is frequently encountered. Through a series of experiments on various medical image datasets, we demonstrate the competency of our proposed approach over traditional training methods. Our approach to progressively introducing occlusion challenges draws inspiration from the methodologies of problem-solving in physics, which often involve starting with simpler cases before moving on to more complex scenarios [18, 20]. We believe this work paves the way for a new line of research in making artificial intelligence more reliable and effective in the realm of healthcare. Our primary contributions are:
* We develop a novel curriculum learning strategy for deep learning models that adaptively incorporates increasing levels of occlusion, providing a robust solution for handling occluded medical images in classification tasks.
* We introduce three novel occlusion synthesis methods based on optimal transport principles, information theory and exploring the high-dimensional space of occluded images from a geometric perspective to optimize the model training process. We name them Wasserstein Curriculum Learning (WCL), Information Adaptive Learning (IAL) and Geodesic Curriculum Learning (GCL).
* We demonstrate through extensive experiments on real-world medical image datasets, the effectiveness of our proposed methodology in significantly improving the classification performance over baseline models.
The rest of the paper is organized as follows: Section 2 provides a comprehensive review of related works in the fields of deep learning for medical imaging and curriculum learning. Section 3 details the methodology of our curriculum learning approach with progressive occlusion. In Section 4, we present our experimental setup, including the datasets used, the evaluation metrics, and the baseline models for comparison. Finally, we conclude the paper and discuss future research directions in Section 5.
## 2 Background
Medical image analysis has been a central focus of artificial intelligence research over the past decades. Specifically, deep learning techniques have shown promising results in a wide array of applications, ranging from disease diagnosis to anatomical structure segmentation. However, occlusions present in medical images introduce added complexity and ambiguity, challenging these techniques significantly.
Traditional approaches for medical image analysis primarily relied on handcrafted features, including textural, morphological, and statistical properties of the images [24]. However, such methods often grapple with variability across different patients, modalities, and institutions. Deep learning methods, and convolutional neural networks (CNNs) in particular, have revolutionised this field by allowing the automatic extraction of discriminative features straight from raw data [3]. These models have achieved state-of-the-art performance in many medical imaging tasks, such as diagnosing diabetic retinopathy from retinal images [15], detecting lung nodules from CT scans [7], and classifying skin lesions from dermoscopic images [11].
The concept of curriculum learning, introduced by Bengio et al., draws inspiration from the learning progression in humans and animals. The fundamental idea is to initiate the training process with simpler examples, gradually escalating the complexity. This strategy has proven its efficacy in various domains, ranging from object recognition to natural language processing [23]. However, its application to medical image analysis remains an under explored area.
Handling occlusions in images has been a long-standing challenge in computer vision. Occlusions, caused by various factors like overlapping structures, foreign objects, or missing data, lead to incomplete or ambiguous visual information [16]. Several methods have been proposed to counter occlusions, from occlusion-aware models, such as part-based models and deformable models [12], to occlusion synthesis techniques for data augmentation [19]. Despite these efforts, occlusion remains a significant hurdle, especially in medical images, where visual information is often intricate, and the repercussions of misinterpretation are severe.
Recent advances in machine learning have begun to exploit more advanced mathematical concepts, such as optimal transport and differential geometry. Optimal transport offers a potent tool for comparing and transforming probability distributions, finding a multitude of applications in machine learning, from domain adaptation to generative models [9]. On the other hand, differential geometry provides a framework for understanding high-dimensional spaces and has been employed to investigate the properties of neural networks and the
dynamics of their training process [5]. In this work, we propose a novel fusion of these diverse research areas to tackle the challenge of occlusions in medical images. By integrating the principles of curriculum learning with occlusion synthesis techniques, and employing the mathematical tools of optimal transport and differential geometry, we aim to develop a robust and efficient training strategy for deep learning models applied to occluded medical images. In the subsequent sections, we will detail our methodology and present our experimental results.
## 3 Methodology
In this section, we detail the methodology of our curriculum learning approach with progressive occlusion. This approach encompasses two main steps: occlusion generation (see Figure 1) and training schedule design (see Figure 2).
### 3.1 Occlusion Generation
Let \(X=\{x_{1},x_{2},\ldots,x_{n}\}\) represent our dataset of \(n\) images, and \(x_{i}\in\mathbb{R}^{h\times w\times c}\) denote an individual image of height \(h\), width \(w\), and \(c\) color channels. To generate occlusion, we introduce a binary mask \(M=\{m_{1},m_{2},\ldots,m_{n}\}\), where each \(m_{i}\in\{0,1\}^{h\times w}\). We define the operation \(\odot\) to denote element-wise multiplication. The occluded image \(x^{\prime}_{i}\) is then given by \(x^{\prime}_{i}=x_{i}\odot m_{i}\). The mask \(m_{i}\) is generated by randomly selecting a region within the image and setting the corresponding pixels to \(0\), effectively occluding that region.
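A minimal sketch of this step is given below, assuming images are arrays of shape \((h,w,c)\); the rectangular-mask generator and the occluded-area fraction are illustrative choices, since the text only requires a randomly selected region to be zeroed out.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_occlusion_mask(h, w, occ_frac=0.2):
    """Binary mask m in {0,1}^{h x w}: a random rectangle covering ~occ_frac of the area is set to 0."""
    m = np.ones((h, w), dtype=np.float32)
    rh, rw = int(h * np.sqrt(occ_frac)), int(w * np.sqrt(occ_frac))
    top = rng.integers(0, h - rh + 1)
    left = rng.integers(0, w - rw + 1)
    m[top:top + rh, left:left + rw] = 0.0
    return m

def occlude(x, m):
    """x'_i = x_i element-wise multiplied by m_i, broadcasting the mask over the colour channels."""
    return x * m[..., None]
```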
Figure 1: Progressive occlusion strategy showing areal occlusions
### 3.2 Training Schedule Design
With the occluded images, we now aim to design a learning schedule following the principles of curriculum learning. In our approach, we define a function \(f:X\rightarrow\mathbb{R}\) that assigns a difficulty score to each image. The difficulty of an image \(x_{i}\) is proportional to the size of the occlusion, denoted as \(|m_{i}|\). Therefore, we can express this as \(f(x_{i})=|m_{i}|\). We sort the dataset \(X\) based on the difficulty scores to obtain a new ordered dataset \(X^{\prime}=\{x^{\prime}_{1},x^{\prime}_{2},\ldots,x^{\prime}_{n}\}\), such that \(f(x^{\prime}_{i})\leq f(x^{\prime}_{j})\) for all \(1\leq i<j\leq n\). Next, we divide the learning process into \(T\) stages. At each stage \(t\), we train the model using a subset of the ordered dataset \(S_{t}=\{x^{\prime}_{i}|\;i\leq n_{t}\}\), where \(n_{t}=\lceil t\cdot n/T\rceil\). This implies that the model is initially trained with the least occluded images and progressively exposed to more occluded examples as the stages advance. For smaller datasets, we can induce different levels of occlusion per sample. Let \(\delta\) represent the number of times each sample is used for generating its occluded representations. Then the new ordered dataset is given by \(X^{*}=\coprod_{j=0}^{\delta}X^{(j)}\), where \(X^{(j)}\) represents the \(j\)th level occluded dataset (\(X^{(0)}\) represents the original dataset). The definition of order on \(X^{\prime}\) is carried forward onto \(X^{*}\).
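The schedule can be implemented in a few lines, as sketched below; here the dataset is assumed to be a list of (image, mask, label) triples from the occlusion step, and the difficulty score is taken to be the occluded area of the mask.

```python
import math

def occlusion_size(mask):
    """Difficulty f(x_i): the occluded area, i.e. the number of masked-out pixels."""
    return int((mask == 0).sum())

def curriculum_subsets(dataset, num_stages):
    """Stage t uses the first n_t = ceil(t * n / T) examples of the difficulty-sorted dataset."""
    ordered = sorted(dataset, key=lambda item: occlusion_size(item[1]))
    n = len(ordered)
    return [ordered[:math.ceil(t * n / num_stages)] for t in range(1, num_stages + 1)]
```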
Let \(y=\{y_{1},y_{2},\ldots,y_{n}\}\) represent the ground truth labels corresponding to the images in \(X\). Our model \(M:\mathbb{R}^{h\times w\times c}\rightarrow\mathbb{R}^{k}\) outputs a \(k\)-dimensional vector for each image, representing
Figure 2: Schematic representation of the operation of curriculum learning technique
the predicted probabilities for \(k\) classes. We employ the standard cross-entropy loss function \(L:\mathbb{R}^{k}\times\mathbb{R}^{k}\rightarrow\mathbb{R}\), defined as \(L(y,\hat{y})=-\sum_{i=1}^{k}y_{i}\log(\hat{y}_{i})\), where \(y\) and \(\hat{y}\) represent the ground truth and predicted probability vectors, respectively. The total loss for the dataset \(X\) is then \(\mathcal{L}(X,y)=\frac{1}{n}\sum_{i=1}^{n}L(y_{i},M(x_{i}^{\prime}))\), where \(n\) is the number of images in \(X\). During training, we aim to minimize the loss function using stochastic gradient descent, adjusting the model parameters to better fit the training data.
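A staged training loop consistent with this objective might look as follows (PyTorch), assuming the stage subsets from the previous sketch have been wrapped in data loaders and the model is a standard classifier; the optimizer settings and number of epochs per stage are illustrative.

```python
import torch
import torch.nn.functional as F

def train_curriculum(model, stage_loaders, epochs_per_stage=5, lr=1e-3):
    """Train on progressively harder subsets; stage_loaders[t-1] wraps the subset S_t."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for stage, loader in enumerate(stage_loaders, start=1):
        for _ in range(epochs_per_stage):
            for x, y in loader:
                opt.zero_grad()
                loss = F.cross_entropy(model(x), y)   # L(y, M(x'))
                loss.backward()
                opt.step()
        print(f"finished stage {stage}/{len(stage_loaders)}")
    return model
```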
### Wasserstein Curriculum Learning (WCL)
Historically, the concept of Wasserstein distance has been a fundamental element in the field of generative models, particularly in the training of Generative Adversarial Networks (GANs). The introduction of the Wasserstein GAN (WGAN) [30] aimed to tackle the common issues associated with traditional GANs, such as unstable training, mode collapse, and vanishing gradients. The application of the Wasserstein distance in this context allowed for a more meaningful and smoother gradient during training, resulting in improved model stability and performance. Furthermore, the Wasserstein distance has been applied in domain adaptation and optimal transport [32]. It provides a measure to compute the discrepancy between source and target domain distributions, enabling a geometrically meaningful way of transporting samples from one distribution to the other. Drawing upon this historical usage of the Wasserstein distance in deep learning, we leverage its power to compare probability distributions in our WCL framework. This distance ensures a smooth transition of occlusion distributions from one level of complexity to the next, allowing the complexity of our training examples to increase in a continuous, rather than discrete, manner. The central idea here is to devise a mechanism by which the transition from less occluded to more occluded images becomes more fluid, in turn offering the model a smoother learning trajectory. We use Wasserstein distances, a fundamental notion in optimal transport, to measure the "distance" or discrepancy between different distributions. For a smoother curriculum learning experience, we devise the notion of Wasserstein Curriculum Learning. The goal is to match the occlusion distributions of the training data from one complexity level to the next. This approach leverages the first Wasserstein distance, also known as the Earth Mover's distance, to guide the gradual transition from less occluded to more occluded images.
The Wasserstein distance is a measure of the difference between two probability distributions that takes into account the underlying geometry of the data space. It is defined in the context of optimal transport theory, a branch of mathematics that deals with transporting mass in an optimal way. Given two probability measures \(\mu\) and \(\nu\) on a metric space \((X,d)\), the \(p\)-th Wasserstein distance \(W_{p}(\mu,\nu)\) is defined as:
\[W_{p}(\mu,\nu)=\left(\inf_{\gamma\in\Gamma(\mu,\nu)}\int_{X\times X}d(x,y)^{p}\,d\gamma(x,y)\right)^{1/p},\]
where \(p\geq 1\) and \(\Gamma(\mu,\nu)\) is the set of all joint distributions \(\gamma\) on \(X\times X\) with marginals \(\mu\) and \(\nu\) on the first and second factors, respectively. The first Wasserstein distance (\(p=1\)) is often referred to as the earth mover's distance: it can be interpreted as the least expensive way to move and reshape a mound of earth in the form of one probability distribution into the form of another. The definition involves a minimization over all possible joint distributions between \(\mu\) and \(\nu\); solving this problem exactly is computationally expensive, and it is typically approximated in practice.
Let us represent the occlusion of each image \(x_{i}\) as a normalized histogram \(h_{i}\in\mathbb{R}^{b}\), where \(b\) is the number of bins. The occlusion histogram \(h_{i}\) can be thought of as a discrete probability distribution over the occlusion levels. Given two successive stages \(t\) and \(t+1\), we aim to minimize the Wasserstein distance between the occlusion distributions of the corresponding data subsets, \(S_{t}\) and \(S_{t+1}\). The first Wasserstein distance \(W_{1}\) between two probability distributions \(P\) and \(Q\) is defined as \(W_{1}(P,Q)=\min_{\gamma\in\Gamma(P,Q)}\sum_{i,j}|i-j|\cdot\gamma_{i,j}\), where \(\Gamma(P,Q)\) is the set of all joint distributions \(\gamma\) whose marginals are \(P\) and \(Q\). In our case, \(P\) and \(Q\) correspond to the occlusion histograms of \(S_{t}\) and \(S_{t+1}\). The Wasserstein distance offers an effective way of comparing probability distributions, taking into account not only the discrepancies in the distribution values but also their locations.
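A minimal sketch of this comparison is shown below, assuming the per-image occlusion levels of each stage subset are summarized as normalized histograms; it relies on `scipy.stats.wasserstein_distance`, with bin centers as support points and histogram masses as weights, and the function names are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def occlusion_histogram(occlusion_fractions, bins=10):
    """Normalized histogram of per-image occlusion levels for one stage subset."""
    hist, edges = np.histogram(occlusion_fractions, bins=bins, range=(0.0, 1.0))
    hist = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

def w1_between_stages(occ_t, occ_t1, bins=10):
    """First Wasserstein distance between occlusion distributions of S_t and S_{t+1}."""
    c_t, h_t = occlusion_histogram(occ_t, bins)
    c_t1, h_t1 = occlusion_histogram(occ_t1, bins)
    return wasserstein_distance(c_t, c_t1, u_weights=h_t, v_weights=h_t1)
```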
By using this method, the model experiences a smooth increase in complexity, since the occlusion levels between two successive stages have minimal distance, and no sudden jumps are experienced. This, in turn, could facilitate a more effective learning process, allowing the model to adjust more easily to the new complexity level. Here too our training objective remains to minimize the cross-entropy loss. However, we introduce a regularization term to encourage a smooth transition between successive stages. Thus, our new loss function becomes
\[\mathcal{L}^{(W)}(X,y)=\frac{1}{n}\sum_{i=1}^{n}L(y_{i},M(x_{i}^{\prime}))+ \lambda W_{1}(S_{t},S_{t+1}),\]
where \(\lambda>0\) is a hyperparameter that controls the importance of the Wasserstein distance in the loss function.
At each stage \(t\), we have a set of training data, and the occlusion histogram of this data forms a discrete probability distribution \(S_{t}\). As the stages advance (i.e., as \(t\) increases), the
complexity of the tasks also increases, here characterized by the level of occlusion in the images. In essence, \(S_{t}\) and \(S_{t+1}\) represent the discrete probability distributions of occlusion levels for two consecutive stages in the learning process. The goal of WCL, as defined in the loss function, is to minimize the cross-entropy loss while encouraging a smooth transition between these successive stages, as measured by the Wasserstein distance between \(S_{t}\) and \(S_{t+1}\).
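The sketch below illustrates one possible form of this combined objective in PyTorch, assuming the two occlusion histograms are defined over the same bins with unit spacing (so that \(W_{1}\) reduces to the summed absolute difference of their cumulative sums); the name `wcl_loss` and the default value of \(\lambda\) are illustrative.

```python
import torch
import torch.nn.functional as F

def wcl_loss(logits, targets, hist_t, hist_t1, lam=0.1):
    """Cross-entropy plus a Wasserstein regularizer between successive occlusion
    histograms (1D tensors over the same bins); `lam` plays the role of lambda."""
    ce = F.cross_entropy(logits, targets)
    # For two normalized 1D histograms on a common grid with unit bin spacing,
    # W1 equals the sum of absolute differences of their cumulative sums.
    w1 = torch.abs(torch.cumsum(hist_t, 0) - torch.cumsum(hist_t1, 0)).sum()
    return ce + lam * w1
```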
### Information Adaptive Learning (IAL)
We devise a strategy called Information Adaptive Learning for adaptively determining the optimal level of occlusion to be introduced at each stage of training. This strategy involves formulating an auxiliary optimization problem, which aims to maximize the mutual information between the model's outputs and the true labels, subject to the constraint of a maximum allowed occlusion. Mutual information, a notion from information theory, quantifies the amount of information obtained about one random variable by observing another [8]. In our case, we are interested in the mutual information between the true labels \(Y=\{y_{1},y_{2},\ldots,y_{n}\}\) and the model's outputs \(\hat{Y}=\{\hat{y}_{1},\hat{y}_{2},\ldots,\hat{y}_{n}\}\), which we denote as \(I(Y;\hat{Y})\).
Given a maximum allowed occlusion \(\alpha\), we aim to find the occlusion level that maximizes the mutual information. This can be expressed as the following optimization problem:
\[\max_{m_{1},\ldots,m_{n}}\;I(Y;\hat{Y})\quad\text{subject to}\quad|m_{i}|\leq\alpha,\quad\forall i\in\{1,\ldots,n\}\]
In this optimization problem, \(|m_{i}|\) represents the level of occlusion introduced to the i-th sample [13]. The constraint \(|m_{i}|\leq\alpha\) ensures that the occlusion level does not exceed the maximum limit specified by \(\alpha\). The mutual information \(I(Y;\hat{Y})\) can be estimated using various techniques [4], such as non-parametric methods based on k-nearest neighbors, or parametric methods assuming specific distributions of \(Y\) and \(\hat{Y}\). This optimization problem can be solved using gradient-based methods, where the gradient of \(I(Y;\hat{Y})\) with respect to \(m_{i}\) can be approximated using backpropagation.
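As a simple illustration, the sketch below uses a plug-in estimate of \(I(Y;\hat{Y})\) obtained by discretizing the predictions to their argmax class and calling `sklearn.metrics.mutual_info_score`; this is a cruder alternative to the k-nearest-neighbor or parametric estimators mentioned above, and the function name is illustrative.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def estimate_mutual_information(y_true, y_prob):
    """Plug-in estimate of I(Y; Y_hat) from discrete labels and predicted classes.

    `y_true` is an array of n integer labels and `y_prob` an (n, k) array of
    predicted probabilities; predictions are discretized by their argmax.
    """
    y_pred = np.argmax(y_prob, axis=1)
    return mutual_info_score(y_true, y_pred)  # returned in nats
```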
We modify our training objective again, where our loss function becomes a weighted combination of the cross-entropy loss, the Wasserstein distance [1], and the negative mutual
information (since we aim to maximize the mutual information):
\[\mathcal{L}^{(I)}(X,y)=\frac{1}{n}\sum_{i=1}^{n}L(y_{i},M(x_{i}^{\prime}))+ \lambda_{1}W_{1}(S_{t},S_{t+1})-\lambda_{2}I(Y;\hat{Y}),\]
where \(\lambda_{1},\lambda_{2}>0\) are hyperparameters controlling the importance of each term.
This adaptive occlusion optimization strategy adds another layer of sophistication to our curriculum learning approach. By dynamically adjusting the level of occlusion based on the model's current performance, we can ensure that the model is always presented with the right amount of challenge, thereby fostering more effective learning. This approach, combined with the Wasserstein curriculum learning strategy, provides a comprehensive framework for robust training of deep learning models on occluded medical images.
### Geodesic Curriculum Learning (GCL)
The concept of viewing the model's state during training as a point in a high-dimensional vector space, or more formally, a Riemannian manifold, has been explored in several areas of machine learning. This approach leverages the power of differential geometry to handle complex learning trajectories, adapting and evolving model parameters based on the underlying geometric structure of the data. In [27], the authors demonstrated the application of geometric optimization on the manifold of positive definite matrices, introducing a way to handle constraints and structure in the optimization process.
The state of our model during training can be characterized by the weights of its layers, which form a high-dimensional vector space. We can view this vector space as a Riemannian manifold [28, 29], a mathematical structure that generalizes the notion of curved surfaces to high dimensions. In a Riemannian manifold, the distance between two points (or states of our model) is determined by a metric tensor. In our case, we define the metric tensor based on the cross-entropy loss function and the Wasserstein distance between successive stages. This leads to an adaptive representation of our model's learning trajectory, where the "curvature" of the learning path is determined by the complexity of the training data at each stage.
In this framework, the optimal learning trajectory becomes the geodesic path on this manifold. We name it Geodesic Curriculum Learning. A geodesic is the shortest path between two points on a curved surface, or more generally, a Riemannian manifold. By following this path, our model can adapt more efficiently to the increasing complexity of the training data, leading to faster convergence and improved performance. Given two successive stages \(t\) and \(t+1\), the geodesic path connecting the corresponding model states is the solution to the
geodesic equation, which in general can be written as:
\[\frac{d^{2}x^{\lambda}}{dt^{2}}+\Gamma^{\lambda}_{\mu\nu}\frac{dx^{\mu}}{dt} \frac{dx^{\nu}}{dt}=0,\]
where \(\Gamma^{\lambda}_{\mu\nu}\) are the Christoffel symbols, which depend on the metric of the space and hence on the loss function and Wasserstein distance, and \(x^{\lambda}\) are the coordinates on the manifold, representing the model parameters at a particular stage. The indices \(\lambda\), \(\mu\), and \(\nu\) run over all dimensions of the model parameter space. This is a system of second-order differential equations that describe the evolution of the model's weights along the geodesic path. In practice, we can approximate the solution to these equations using numerical integration methods, such as the Euler method or the Runge-Kutta method.
We further refine our training objective. In addition to the cross-entropy loss, the Wasserstein distance, and the mutual information, we introduce a regularization term based on the length of the geodesic path. The new loss function becomes:
\[\mathcal{L}^{(G)}(X,y)=\frac{1}{n}\sum_{i=1}^{n}L(y_{i},M(x_{i}^{\prime}))+ \lambda_{1}W_{1}(S_{t},S_{t+1})\ -\lambda_{2}I(Y;\hat{Y})+\lambda_{3}L_{geo}(M_{t},M_{t+1}),\]
where \(L_{geo}(M_{t},M_{t+1})\) represents the length of the geodesic path between the model states at stages \(t\) and \(t+1\), and \(\lambda_{3}>0\) is a hyperparameter. It can be written as:
\[L_{geo}(M_{t},M_{t+1})=\int_{t}^{t+1}\sqrt{g_{ij}\frac{dM^{i}}{dt}\frac{dM^{j }}{dt}}dt,\]
where \(g_{ij}\) is the metric tensor which encodes the "distance" between two infinitesimally close points in the parameter space, and \(M^{i}\) represents the coordinates in the parameter space (i.e., the weights of the model). This equation essentially sums up (or integrates) all the infinitesimal distances along the geodesic path to get the total length of the path. Casting the learning process in the framework of differential geometry provides a geometric interpretation to curriculum learning.
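A minimal sketch of this approximation is given below; as a simplifying assumption, the metric tensor \(g_{ij}\) is taken to be the identity, so the geodesic reduces to the straight line between the two parameter vectors and the integral to a sum of Euclidean segment lengths. With a non-trivial metric, each segment length would be \(\sqrt{dM^{\top}g\,dM}\) instead.

```python
import torch

def geodesic_length(params_t, params_t1, steps=10):
    """Discrete approximation of L_geo between two flattened weight vectors,
    assuming an identity metric (straight-line path)."""
    length = 0.0
    prev = params_t
    for s in range(1, steps + 1):
        cur = params_t + (s / steps) * (params_t1 - params_t)  # point on the path
        d = cur - prev
        length = length + torch.sqrt(torch.dot(d, d))           # segment length
        prev = cur
    return length
```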
### Explainability induced by WCL, IAL and GCL
As the field of AI progresses, the need for transparency and understandability in machine learning models becomes more and more important, especially in fields like healthcare, where interpretability of model decisions can have critical consequences [25]. WCL, IAL, and GCL methods offer unique pathways towards better explainability of AI models.
The Wasserstein Curriculum Learning approach facilitates explainability by providing insights into how the model copes with the changing complexity of the data. By leveraging Wasserstein distances, we can measure the difference in complexity levels between different stages of learning. This can be interpreted as the model's "journey" of learning and adaptation as it transitions from less occluded to more occluded images, providing a quantitative and interpretable narrative of the learning process.
Information Adaptive Learning introduces a criterion based on maximizing mutual information between the true labels and the model's predictions. The mutual information is a measure of the statistical dependence between two variables, giving a clear and intuitive quantification of how much the model's output depends on the input. Therefore, a high mutual information implies that the model has learned significant features from the input data. This becomes a quantifiable measure of interpretability of what the model has learned.
Geodesic Curriculum Learning provides a geometric perspective on the model's learning trajectory. By representing the learning process as a path on a high-dimensional Riemannian manifold, we provide a geometric visualization of the learning process. This visualization can be used to explain how the model evolves over time, what changes in the data affect its evolution, and how the model reaches its final state. Additionally, the length of the geodesic path represents a measure of the "difficulty" or "complexity" of the learning process from one stage to another. Shorter paths correspond to easier transitions, indicating that the model is able to adapt more efficiently to the new complexity level. Conversely, longer paths signify more challenging transitions. This can provide insights into how the model handles different complexities and how it adjusts its parameters accordingly, providing an interpretable measure of the model's adaptability.
These three methods combined provide a comprehensive framework for enhancing the transparency and interpretability of AI models. By using these approaches, we can better understand and explain how our model learns, adapts, and makes decisions, making the black-box nature of deep learning models a bit more interpretable.
In the following section, we will discuss the implementation details and present our experimental results, which showcase the effectiveness of this Wasserstein Curriculum Learning strategy in dealing with occlusion in medical image analysis.
## 4 Experiments & Results
In order to empirically evaluate the efficacy of our proposed methodology, we conducted a series of experiments using various datasets of medical images. For our experiments, we selected a pre-trained MobileNetV2 architecture as our baseline model, which has achieved notable success in various image classification tasks. We augmented it with customized top layers to tailor the network towards our specific binary and multi-class classification tasks.
### Architecture
Figure 3: The architectural design of the suggested medical image classification system.
Our model architecture (as shown in Figure 3) is founded on MobileNetV2 [26], a highly effective neural network known for its efficiency in image classification tasks. The MobileNetV2 model we employ is pre-trained on the ImageNet dataset [10], which allows us to leverage the learned feature representations for our specific medical image classification tasks. The
base architecture, \(\mathcal{M}\), is expressed as:
\[\mathcal{M}:\mathbb{R}^{H\times W\times C}\rightarrow\mathbb{R}^{D}\]
where \(\mathbb{R}^{H\times W\times C}\) and \(\mathbb{R}^{D}\) represent the input and output spaces respectively. Here, \(H\), \(W\), and \(C\) denote the height, width, and the number of color channels of the input image, respectively, and \(D\) is the output dimension after the last layer of the base model (which is flattened in our case).
The base model's output, \(\mathcal{M}(x)\), is a \(D\)-dimensional feature vector that contains the learned representations of the input image \(x\). This output is then passed through a series of transformations to make it suitable for our specific classification task. The transformations include dropout layers for regularization, Dense layers for non-linear transformations, and Batch Normalization layers for normalization. These transformations are collectively denoted by \(\mathcal{T}\), and can be expressed as:
\[\mathcal{T}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D^{\prime}}\]
where \(\mathbb{R}^{D^{\prime}}\) is the output space after the transformations, with \(D^{\prime}\) being the output dimension after the final transformation layer.
Lastly, the output of the transformations, \(\mathcal{T}(\mathcal{M}(x))\), is fed into a fully-connected (Dense) layer with sigmoid activation function for binary classification tasks (or softmax for multi-class tasks). This layer, denoted by \(\mathcal{F}\), can be defined as:
\[\mathcal{F}:\mathbb{R}^{D^{\prime}}\rightarrow\mathbb{R}^{K}\]
where \(\mathbb{R}^{K}\) is the output space after the final fully-connected layer, and \(K\) is the number of classes in our classification task. Our complete model can thus be written as the composition of these functions:
\[\mathcal{F}\circ\mathcal{T}\circ\mathcal{M}(x)\]
where the circle symbol "\(\circ\)" denotes function composition. The entire process, from input image to class probabilities, is expressed by this function composition.
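A minimal Keras sketch of this \(\mathcal{F}\circ\mathcal{T}\circ\mathcal{M}\) composition is shown below; the specific dropout rate, hidden width, and the choice to freeze the backbone are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import tensorflow as tf

def build_classifier(num_classes=2, input_shape=(224, 224, 3)):
    """Sketch of the model: MobileNetV2 base (M), transformation head (T), classifier (F)."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")  # M
    base.trainable = False  # illustrative choice: keep the pre-trained backbone frozen

    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs, training=False)
    x = tf.keras.layers.Flatten()(x)                      # D-dimensional features
    x = tf.keras.layers.Dropout(0.3)(x)                   # T: dropout, dense, batch norm
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    activation = "sigmoid" if num_classes == 2 else "softmax"
    units = 1 if num_classes == 2 else num_classes
    outputs = tf.keras.layers.Dense(units, activation=activation)(x)  # F
    return tf.keras.Model(inputs, outputs)
```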
The objective during the training process is to learn the parameters of the model that minimize the loss function, which in our case is a weighted sum of Binary Crossentropy (or Categorical Crossentropy for multi-class tasks) and the Wasserstein distance between occlusion histograms. The loss function \(\mathcal{L}\) can be defined as:
\[\mathcal{L}^{(W)}(y,\hat{y})=\frac{1}{n}\sum_{i=1}^{n}L(y_{i},\hat{y}_{i})+\lambda W_{1}(p,q)\]
where \(y\) and \(\hat{y}\) are the true and predicted labels, \(p\) and \(q\) are the occlusion histograms, and \(\lambda\) is a hyperparameter that controls the contribution of the Wasserstein distance to the overall loss.
### Datasets and Preprocessing
We have employed two datasets for our experiments. For the binary classification task, the Br35H dataset was utilized [17]. This dataset comprises tumor and non-tumor images, providing a binary classification challenge. The second dataset used was the Brain Multi-Class (brain-multi) dataset [6], offering a multi-class classification task with four classes - glioma, meningioma, pituitary, and non-tumor. Image preprocessing involved resizing the images to match the input size of the MobileNetV2 model and normalizing pixel intensities. Furthermore, to simulate increasing levels of occlusion and add more diversity to our training samples, we employed the occlusion synthesis method mentioned in 3.3, which modifies images to mimic occluded conditions. This synthesis helps the model better handle real-world occlusions, providing a broader and more challenging training scope.
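A minimal sketch of this preprocessing step is shown below; it assumes the occluded variants are generated separately beforehand, and uses the standard MobileNetV2 preprocessing, which scales pixel intensities to \([-1,1]\).

```python
import tensorflow as tf

def preprocess(image, label, size=224):
    """Resize to the MobileNetV2 input size and normalize pixel intensities."""
    image = tf.image.resize(image, (size, size))
    image = tf.keras.applications.mobilenet_v2.preprocess_input(image)  # maps to [-1, 1]
    return image, label
```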
### Results
We evaluated the models' performance using standard metrics, including accuracy, precision, recall, and F1-score. For the multi-class task, we computed the macro averages of these metrics. Additionally, the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) was computed to assess the models' ability to distinguish between the classes under different thresholds.
In Table 1, Baseline refers to the pre-trained MobileNetV2 with a single sigmoid neuron in the case of binary classification and 4 neurons with softmax activation in the case of 4-class classification of brain tumors. **PROS** refers to the Progressive Random Occlusion Strategy, where random occlusions were applied progressively, and **PBOS** refers to the Progressive Border Occlusion Strategy, where occlusions were applied progressively as hollow rectangles of width 3 pixels.
These results indicate the effectiveness of our strategy to blend curriculum learning with adaptive occlusion optimization. This integrated approach has proven to handle occlusions in medical images adeptly and improve the performance of deep learning models on complex classification tasks. The robustness of the model performance, as evident from the metrics, highlights the suitability of our approach for real-world medical imaging applications where occlusions are common.
## 5 Discussion and Ending Remarks
In this work, we proposed a novel training methodology integrating curriculum learning with adaptive occlusion optimization for deep learning models applied to medical image classification tasks. The choice of MobileNetV2 as our base model, equipped with our custom-built top layers, proved to be well-suited for both binary and multi-class medical image classification tasks. Our experiments demonstrated a significant improvement in model performance when comparing our proposed methodology with the baseline model. This lends credence to our hypothesis that by gradually introducing occlusion challenges into the training process, much like the principles of curriculum learning, we can enhance the model's ability to handle occluded objects effectively.
The application of optimal transport in the occlusion synthesis process allowed for a smoother transition between different occlusion levels, making it easier for the model to adapt and learn the complex structures underlying the data. Moreover, the differential geometry perspective helped us better understand the landscape of the high-dimensional space formed by the occluded images, potentially offering clues on how to further optimize the training process. This work opens up a number of interesting avenues for future research. The occlusion synthesis process could be further refined by incorporating more sophisticated occlusion models or by using generative models like GANs to create more diverse and realistic occlusions. Additionally, other forms of curriculum learning, such as self-paced learning or task difficulty estimation, could be integrated into our framework to further enhance its effectiveness. From the perspective of optimal transport and differential geometry, exploring other applications of these powerful mathematical tools in the context of deep learning is a promising direction. Particularly, their potential roles in other challenging issues in medical image analysis, such as noise reduction, outlier detection, or multi-modal data integration, could be investigated.
While our results are promising, it is important to note that the ultimate measure of success is the practical impact of these methods in real-world clinical settings. This would involve not only technical challenges such as system integration, scalability, and real-time processing,
\begin{table}
\begin{tabular}{l|l|l|l|l|l|l}
\hline \hline
**Strategy** & **Dataset** & **Precision** & **Recall** & **F1-Score** & **ROC-AUC** & **Accuracy** \\ \hline \hline
PROS & Br35H & **100** & **99.67** & **99.83** & **100** & **99.83** \\ \hline
Baseline & Br35H & **100** & 99.00 & 99.50 & 99.67 & 99.50 \\ \hline
PBOS & Br35H & **100** & 98.67 & 99.33 & 99.67 & 99.33 \\ \hline \hline
PBOS & brain-multi & **97.99** & **97.94** & **97.96** & **98.64** & **98.02** \\ \hline
PROS & brain-multi & 97.52 & 97.27 & 97.37 & 98.19 & 97.41 \\ \hline
Baseline & brain-multi & 96.26 & 95.91 & 96.03 & 96.88 & 96.11 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Quantitative Results Analysis
but also non-technical issues such as user acceptance, regulatory compliance, and ethical considerations. Future work should, therefore, aim at more extensive validation on diverse and larger datasets, potentially including different types of imaging modalities, diseases, and occlusion levels. Furthermore, it would be interesting to examine how our methodology performs when integrated into a full-fledged computer-aided diagnosis system, and whether it can help improve the diagnostic accuracy and efficiency of healthcare professionals. Additionally, our methodology requires a substantial amount of computational resources due to the complexity of the optimal transport computation and the large number of training iterations required by the curriculum learning approach. This could be a constraint in scenarios with limited computational resources or real-time processing requirements. It would be worthwhile to investigate more efficient implementations or approximations of the optimal transport computation and of the curriculum learning process. Potential solutions could involve using more efficient optimal transport algorithms, parallel computing techniques, or hardware accelerators, or developing more sophisticated curriculum learning strategies that can achieve similar performance improvements with fewer training stages or less severe occlusions.
## Acknowledgements
This research has been funded in part by the Department of Atomic Energy, India under grant number 0204/18/2022/R&D-II/13979 and the Ministry of Education, India under grant reference number OH-31-24-200-428.
|
2307.02321 | MSViT: Dynamic Mixed-Scale Tokenization for Vision Transformers | The input tokens to Vision Transformers carry little semantic meaning as they
are defined as regular equal-sized patches of the input image, regardless of
its content. However, processing uniform background areas of an image should
not necessitate as much compute as dense, cluttered areas. To address this
issue, we propose a dynamic mixed-scale tokenization scheme for ViT, MSViT. Our
method introduces a conditional gating mechanism that selects the optimal token
scale for every image region, such that the number of tokens is dynamically
determined per input. In addition, to enhance the conditional behavior of the
gate during training, we introduce a novel generalization of the batch-shaping
loss. We show that our gating module is able to learn meaningful semantics
despite operating locally at the coarse patch-level. The proposed gating module
is lightweight, agnostic to the choice of transformer backbone, and trained
within a few epochs with little training overhead. Furthermore, in contrast to
token pruning, MSViT does not lose information about the input, thus can be
readily applied for dense tasks. We validate MSViT on the tasks of
classification and segmentation where it leads to improved accuracy-complexity
trade-off. | Jakob Drachmann Havtorn, Amelie Royer, Tijmen Blankevoort, Babak Ehteshami Bejnordi | 2023-07-05T14:22:31Z | http://arxiv.org/abs/2307.02321v2 | # MSViT: Dynamic Mixed-scale Tokenization for Vision Transformers
###### Abstract
The input tokens to Vision Transformers carry little semantic meaning as they are defined as regular equal-sized patches of the input image, regardless of its content. However, processing uniform background areas of an image should not necessitate as much compute as dense, cluttered areas. To address this issue, we propose a dynamic mixed-scale tokenization scheme for ViT, MSViT. Our method introduces a conditional gating mechanism that selects the optimal token scale for every image region, such that the number of tokens is dynamically determined per input. In addition, to enhance the conditional behavior of the gate during training, we introduce a novel generalization of the batch-shaping loss. We show that our gating module is able to learn meaningful semantics despite operating locally at the coarse patch-level. The proposed gating module is lightweight, agnostic to the choice of transformer backbone, and trained within a few epochs with little training overhead. Furthermore, in contrast to token pruning, MSViT does not lose information about the input, thus can be readily applied for dense tasks. We validate MSViT on the tasks of classification and segmentation where it leads to improved accuracy-complexity trade-off.
## 1 Introduction
The Transformer architecture [51] has seen widespread success across Natural Language Processing (**NLP**) tasks and more recently in Computer Vision (**CV**) [11, 27, 49]. However, the quadratic time and memory complexity of transformer-based models poses a challenge when deploying such models on compute constrained devices or scaling them to large image sizes. In particular, the number of input tokens and the tokenization method are defining aspects of the computational complexity of transformers. In NLP, it is generally straightforward to use semantic units, such as words or sentences, as input tokens: This leads to little redundancy in the information carried by individual tokens. Conversely, in CV, tokenization is usually achieved by slicing an image into equal-sized, square patches without
considering their content. This introduces redundant information across tokens, leading to computational waste: For instance, trivial background regions (e.g. sky and grass) are often expressed by a large number of tokens, dominating the bulk of compute in the model. Nonetheless, it remains unclear how to design a more efficient tokenization that reduces input redundancy compared to such uniform patching. In fact, most successful token reduction methods in the literature, such as token pruning [56, 34, 57, 29, 20, 30] or token merging [35, 42], only act on intermediate layers of the transformer, while earlier layers still inefficiently operate with a large number of redundant tokens.
Figure 1: We introduce a learnable module to dynamically select the optimal token scale for each region. This module can be plugged in as a preprocessing step to any Vision Transformer. Here we illustrate some mixed-scale masks on ImageNet samples with varying levels of clutter, output by the scale selection module, trained alongside a pretrained ViT-S/16 for 20 epochs to choose between a coarse (32px) and a fine (16px) token scale.
In this work, we propose a novel, orthogonal approach: We predict the _tokenization scale_ for each image region as a pre-processing step before the transformer. Intuitively, uninformative image regions such as background can be processed at a coarser scale than the foreground, without loss of information, leading to a smaller total number of tokens. To capture this behavior, we introduce a lightweight conditional gating MLP trained to select the optimal tokenization scale for every coarse local image region, as illustrated in Figure 1, leading to a dynamic number of tokens per image. Because it operates at the input level, the gate is agnostic to the choice of transformer backbone. Furthermore, mixed-scale tokenization is lossless, as every input region is covered by a token, making it well suited for dense prediction tasks in contrast to other methods such as pruning. Nevertheless, learning such a scale selection module raises several issues: **(i)** Current multi-scale ViT architectures are often trained with extra parameters for each scale or have cumbersome training pipelines with multiple stages [6, 62, 7]. Instead, we design a unified, single-stage model by maximizing parameter sharing across scales. **(ii)** The gating module may learn a bad local minimum such as always outputting the same trivial static pattern. To combat this, we introduce a novel training loss that enables finer control over the learned gating distribution, enhancing the dynamic behavior of the mixed-scale tokenization. Finally, **(iii)** the cost of training grows with the total number of fine and coarse tokens. To reduce training costs, we employ an adaptive trimming strategy at training time which relies on the underlying mapping between coarse and fine tokens. The main contributions of this work are as follows:
1. We design a dynamic scale selection gating mechanism that acts as a _preprocessing_ stage, agnostic to the choice of transformer backbone, and trained jointly with the transformer _in a single stage_ with mixed-scale tokens as inputs. We show in experiments that this dynamic tokenization process leads to improved computational costs by reducing the number of input tokens.
2. We propose a generalization of batch-shaping [13] to better handle _multi-dimensional distributions_ when training dynamic gates: The resulting loss provides better control over the learned scale distribution, and allows for easier and better initialization of the gates.
3. We reduce the training overhead incurred from handling a set of tokens for each scale by (i) defining the gate locally at the coarse token level only and (ii) employing an adaptive trimming strategy during training.
## 2 Proposed method
In this work, we enhance the standard Vision Transformer (**ViT**) formalism with mixed-scale tokens that are dynamically selected for each input image. In this section, we briefly introduce ViT, then describe how we handle tokens extracted at different scales, with a focus on keeping the architecture parameter-efficient (Section 2.1) and reducing training overhead (Section 2.3). Finally, we present the generalized batch-shaping loss for training the mixed-scale selection module (Section 2.2).
### Parameter-efficient mixed-scale ViT
Given an input image of size \(W\times W\), a ViT first splits the image into square _patches_ of equal size, \(S\), resulting in a total of \(N_{S}=\lfloor W/S\rfloor^{2}\) tokens. These tokens are flattened, and individually embedded to the target dimension \(d\). A position encoding is then added to each token, which is a vector capturing the initial 2D spatial location of the token. Finally, the tokens are fed to a transformer, \(T\), which is a sequence of Multi-Headed Self-Attention (**MHSA**) blocks, that compute global attention across the set of tokens, followed by FFNs, which process each token independently [51]. Our work is agnostic to the choice of the transformer backbone \(T\), thus, in the rest of the section, we only describe changes made to the patching, token embedding, and position encoding mechanisms to handle mixed-scale tokens.
**Dynamic mixed-scale ViT.** An overview of the proposed mixed-scale vision transformer model (MSViT) is presented in Figure 2. In the scope of this paper, we consider the case of two scales (\(S_{f}<S_{c}\)). We refer to \(S_{f}\) (resp. \(S_{c}\)) as the _fine_ (resp. _coarse_) scale. First, we extract square patches at both scales, for a total of \(N=N_{S_{f}}+N_{S_{c}}\) tokens. We then introduce a discrete _gating_ mechanism, \(g\), which selects active tokens across both scales, for a given input image: These tokens are further sent to the transformer, while inactive ones are discarded at this stage.
In practice, we define the learned gate as a local operation, at the level of coarse tokens: The gate parses each coarse image region individually and outputs a binary decision on whether the region should be tokenized at either the coarse or fine scale. We consider the case where the fine-scale \(S_{f}\) evenly divides the coarse scale \(S_{c}\). This way, for all \(i\), the \(i\)-th fine token can be mapped to the unique coarse token \(C(i)=j\) it belongs to. Using this mapping, we recover the complete binary mixed-scale mask at the fine
token level, \(\overline{m}\), using the coarse-level gate outputs:
\[\forall j\in[1,N_{S_{c}}],\,m_{j} =\text{GumbelSigmoid}(g(x_{j}))\in[0,1] \tag{1}\] \[\overline{m}_{j} =\text{STE}(m_{j})\in\{0,1\}\] (2) \[\forall i\in[N_{S_{c}}+1,N_{S_{c}}+N_{S_{f}}],\,\overline{m}_{i} =1-\overline{m}_{C(i)} \tag{3}\]
Here, we distinguish between the soft outputs of the gate, \(m\in[0,1]\), used to constrain the gate during training, and the discretized outputs \(\overline{m}\in\{0,1\}\) used during the forward pass. In order, to estimate gradients for the discrete gate operation, we use the Gumbel-Sigmoid relaxation of binary variables during training [28] with the straight-through gradient estimator (**STE**) [17, 1].
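A minimal PyTorch sketch of this relaxation is given below: the forward pass uses the hard values while gradients flow through the soft values of Eqs. (1)-(2); the temperature `tau` and the 0.5 threshold are illustrative choices.

```python
import torch

def gumbel_sigmoid_ste(logits, tau=1.0):
    """Relaxed Bernoulli sample with a straight-through estimator.

    Returns (hard mask used in the forward pass, soft gate values in [0, 1]).
    """
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    gumbel_noise = torch.log(u) - torch.log(1 - u)         # logistic noise
    m_soft = torch.sigmoid((logits + gumbel_noise) / tau)  # soft gate m
    m_hard = (m_soft > 0.5).float()                        # discretized gate
    # Straight-through: forward value equals m_hard, gradient flows via m_soft.
    return m_hard + (m_soft - m_soft.detach()), m_soft
```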
While this design choice for the gate may limit representational power, as \(g\) only sees local regions of the image as inputs, we find that it works well in practice and yields a very lightweight gating strategy. Moreover, as in the original ViT tokenization, token overlap is prevented by design, as every image region can only be captured by a unique scale.
**Sharing parameters across scales.** Previous mixed-scale ViTs usually introduce extra parameters to handle each scale [6, 55] or train a shared backbone stage by stage for each scale separately [7, 62]. Instead, our intention is **(i)** to fully share the token embedding, position encodings, and the transformer backbone across scales, and **(ii)** to directly train the model in one stage with batches of mixed-scale tokens, rather than treating each scale individually. This allows us to avoid extra parameter costs and makes our method architecture agnostic. In addition, due to the dynamic nature of the gating mechanism, defining separate branches per scale instead of sharing may lead to common issues of training conditional models such as imbalanced routing and data starvation [14, 43, 39].
To implement sharing across scales, we draw inspiration from ViT [11, 2]: At inference, the authors scale a trained model to a different input image size by linearly interpolating its position encodings to match the size of the new grid. We extend this idea to our setting by defining the learned embedding \(\phi_{f}\) and position encoding parameters \(\rho_{f}\) relative to the _fine scale_ only (Figure 2 (c)). We then deterministically infer their equivalent for the coarse scale as:
\[\phi_{f}:x\in\mathbb{R}^{S_{f}\times S_{f}\times 3}\mapsto \mathbb{R}^{d},\,\,\,\rho_{f}\in\mathbb{R}^{N_{S_{f}}\times d} \tag{4}\] \[\phi_{c}=\phi_{f}\circ\text{resize}(S_{c}\to S_{f}),\,\,\rho_{c}= \text{interpolate}(\rho_{f}) \tag{5}\]
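The sketch below illustrates one way to realize the `interpolate` operator of Eq. (5), reshaping the fine-scale position encodings to a 2D grid and resizing them to the coarse grid; the bilinear mode is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def coarse_position_encoding(pos_fine, grid_fine, grid_coarse):
    """Interpolate fine-scale position encodings to the coarse token grid.

    `pos_fine` has shape (grid_fine**2, d); returns a (grid_coarse**2, d) tensor.
    """
    d = pos_fine.shape[-1]
    grid = pos_fine.reshape(1, grid_fine, grid_fine, d).permute(0, 3, 1, 2)
    coarse = F.interpolate(grid, size=(grid_coarse, grid_coarse),
                           mode="bilinear", align_corners=False)
    return coarse.permute(0, 2, 3, 1).reshape(grid_coarse * grid_coarse, d)
```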
In Appendix G.3, we show that this simple linear interpolation scheme works very well in practice, but may suffer when rescaling to a very low token resolution: For instance, directly training with the coarse patch size 32 on inputs of size 224px yields higher accuracy than the model with fine patch size 16, rescaled for 112px inputs to reach the same number of 49 tokens. Nevertheless, this is not an issue for the image and patch sizes we consider in our experiments.
### Learning the mixed-scale gating
We jointly train the transformer and the gate by balancing the model performance with computational efficiency, forcing the model to only select a few tokens at fine scale:
\[\mathcal{L}(x_{1\dots N},m_{1\dots N},y)=\mathcal{L}_{task}(x,y;m)+\lambda \mathcal{L}_{gate}(m) \tag{6}\]
where \(\mathcal{L}_{task}\) is the task loss (e.g., cross-entropy) applied on the masked transformer outputs, \(\mathcal{L}_{gate}\) is a sparsity constraint on the gate output \(m\) (before STE), which directly
controls the model's computational cost, and \(\lambda\) is a hyperparameter balancing both losses. In the next paragraphs, we will motivate and define a novel gate loss to use for \(\mathcal{L}_{gate}\).
Figure 2: Overview of the proposed dynamic mixed-scale tokenization scheme for ViT, MSViT. **(a)** The input image is first patched into coarse image regions of size \(S_{c}\times S_{c}\). **(b)** Each coarse region is processed by a small 4-layer MLP, the gate \(g\), outputting a binary decision on whether the region should be processed at a coarse or fine scale. **(c)** The resulting mask, \(\overline{m}\), defines the set of mixed-scale tokens for the input image. The corresponding mixed-scale position encodings are obtained by linearly interpolating the fine scale position encodings to the coarse scale, when needed. Finally, the tokens are sent to a standard transformer backbone \(T\) which outputs the task-relevant prediction.
**Common gate sparsity losses.** The \(L_{0}\) loss is often used as a sparsity loss in the conditional computing literature [52]. Given the 2-dimensional active token mask for the current batch, \(m\in[0,1]^{B\times N}\), we define:
\[\mathcal{L}_{gate}^{L_{0}}(m)=\frac{1}{B}\sum_{b=1}^{B}\max\left(0,\frac{1}{N_{S_{c}}}\sum_{i=1}^{N_{S_{c}}}m_{b,i}-g^{*}\right) \tag{7}\]
where the hyperparameter \(g^{*}\in[0,1]\) is the target rate for gate sparsity. However, \(L_{0}\) only penalizes the _mean_ of the distribution, and can not prevent the model from learning static patterns, such as assigning the same probability to all tokens independent of input, as illustrated in Figure 3 (a).
To enhance the desired conditional behavior, the recently proposed batch-shaping loss [13] (BaS) constrains the distribution of the gate outputs, across the batch, to match a certain prior \(p\). In our setting, this means enforcing the _same prior_ across each spatial position. This lacks the necessary flexibility for our use-case, as the gate could not learn for instance that edges of the image are less likely to contain fine-scale details. As a more flexible alternative, we apply BaS directly on the flattened distribution of the gate outputs:
\[\mathcal{L}_{gate}^{BaS}(m)=\left[\text{CDF}(\{m_{b,i},\;\forall b,i\})- \text{CDF}(p(g^{*}))\right]^{2} \tag{8}\]
where CDF is the cumulative distribution function, and \(p\) is a prior with mean \(g^{*}\). Unfortunately, this variant is now too flexible, e.g. it cannot prevent spatial positions from being constantly on or off regardless of the input patch. Corner cases for both variants of BaS are illustrated in Figure 3 (c).
Generalized batch-shaping loss.To address these shortcomings, we introduce the _generalized batch-shaping loss_ (**GBaS**) for finer control over the learned mask distribution, across both the batch and token dimensions. Like BaS, GBaS constrains the marginal distribution at each token spatial position, \(m_{:,i}\)\(\forall i\in[1,N_{S_{c}}]\), but with a dedicated independent prior instead of a shared one. Manually setting the prior for each position would be tedious; Instead, we let the model learn each of these independent prior's parameters, while controlling their distribution using a _hyperprior_\(\mathcal{P}\) with mean equal to the target sparsity \(g^{*}\) (Figure 3 (b)):
\[\mathcal{L}_{gate}^{GBaS}(m)= \sum_{i=1}^{N_{S}}\left[\text{CDF}(\{m_{b,i},\;\forall b\})- \text{CDF}(p(\theta_{i}))\right]^{2}\] \[+ \left[\text{CDF}(\{\theta_{i},\;\forall i\})-\text{CDF}( \mathcal{P}(g^{*};\sigma))\right]^{2} \tag{9}\]
where \(\theta\) are learned parameters defining each prior, and \(\sigma\) is a variance hyperparameter controlling the spread of the learned priors. When \(\sigma=0\), all priors are identical; hence we recover the original BaS; When \(\sigma\rightarrow+\infty\), there is little constraint on the learned \(\theta\) and we may encounter the same corner cases as for BaS applied to the flattened distribution.
In summary, GBaS enables fine-grained control over the learned distribution through the hyperprior. Another benefit of GBaS is that we can easily inject prior knowledge about which spatial positions are more likely to be kept at fine/coarse scale by initializing the \(\theta\) parameters accordingly. In contrast, achieving a similar initialization with BaS would require pretraining the gate to match the desired prior. For instance, in most of our experiments with GBaS, we initialize the learned prior parameters \(\theta\) with the inverse normalized distances of each spatial position to the center. We further compare BaS and GBaS in ablation experiments in Section 4.3 and Appendix G. We use the Relaxed
Bernoulli [28] distribution for the prior \(p\), as we found it easier to parametrize than the Beta distribution used in BaS. We use a Gaussian for the hyperprior \(\mathcal{P}\) with mean \(g^{*}\) and variance given by the hyperparameter \(\sigma\). Our implementation for the Batch-Shaping and generalized Batch-Shaping losses is available on github1.
Footnote 1: [https://github.com/Qualcomm-AI-research/batchshaping](https://github.com/Qualcomm-AI-research/batchshaping)
Figure 3: Our proposed generalized batch-shaping (GBaS) allows for fine control over the learned distribution via a hyperprior (**b**): GBaS allows for learning different distributions for each token position in contrast to BaS **(c, top)**; In addition, GBaS explicitly controls this flexibility through the variance hyperparameter \(\sigma\), hence avoiding corner cases of BaS-flat **(c, bottom)** or L0 **(a)**
### Reducing the training overhead
When executing the model with batches of data, inactive tokens (\(\overline{m}_{i}=0\)) cannot be pruned statically, as the masking pattern output by the gate \(g\) varies across the batch. Instead, we explicitly mask the inactive tokens in the attention layers and the output of the transformer backbone; the FFN layers are applied individually to every token and hence are not affected. Given the set of tokens across all scales, \(x\in\mathbb{R}^{N\times d}\) and the current binary mask output by the gate, \(\overline{m}\in\{0,1\}^{N}\), we must apply masking in every attention block, such that the inactive tokens are ignored when updating the representations of active ones:
\[\forall i,j\in[1,N],\ A^{\text{mask}}(x_{i},x_{j})=\frac{\overline{m}_{j}\ e^{Q_{i}K_{j}^{T}}}{\sum_{p=1}^{N} \overline{m}_{p}\ e^{Q_{i}K_{p}^{T}}} \tag{10}\]
where \(A^{\text{mask}}(x_{i},x_{j})\) is the normalized attention score from token \(i\) to \(j\) and \(Q\) and \(K\) denote the query and key embeddings of the tokens. Unfortunately, with this naive masking approach the increased total number of tokens, \(N=N_{S_{f}}+N_{S_{e}}\), leads to higher training costs.
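A minimal sketch of this masked attention is shown below; it adds the usual \(1/\sqrt{d}\) scaling (not written in Eq. (10)) and assumes every image keeps at least one active token, so that the softmax normalization is well defined.

```python
import torch

def masked_attention_scores(q, k, mask):
    """Attention weights with inactive tokens (mask == 0) excluded from the softmax.

    q, k: (B, N, d) query/key embeddings; mask: (B, N) binary active-token mask.
    """
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    scores = scores.masked_fill(mask[:, None, :] == 0, float("-inf"))
    return torch.softmax(scores, dim=-1)
```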
To address this issue, we employ an _adaptive trimming_ (**AT**) strategy at training time: For each image in the batch, we first reorder the tokens in descending order of the corresponding gate outputs \(m\), omitting the class token or any task-specific token. This reordering step takes advantage of the fact that the transformer is not affected by the order of the tokens. We then trim the token dimension to only keep \(k\) tokens for each image, where \(k\) is the maximum number of active tokens in any image in the current batch. As a result, the number of tokens (and hence the computational cost) is lower bounded by \(N_{S_{f}}\), i.e., the number of fine scale tokens. This strategy does impact the gradients received by the gate, effectively making the gating module less robust to tokens flipping from the coarse to the fine scale during training (see Appendix F). Nevertheless, as we show in Appendix F.3, this only leads to a small drop in accuracy in practice but a clear reduction in training time (\(\sim\)1.16-1.35 times per-epoch speedup, depending on the target sparsity). For this reason, we always use AT in our training pipeline.
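The sketch below illustrates the trimming step: tokens of each image are reordered by their gate score and the token dimension is cut to the largest number of active tokens in the batch; handling of the class or other task-specific tokens is assumed to happen separately.

```python
import torch

def adaptive_trim(tokens, gate_scores, mask):
    """Adaptive trimming sketch.

    tokens: (B, N, d); gate_scores, mask: (B, N). Returns trimmed tokens and mask.
    """
    k = int(mask.sum(dim=1).max().item())                     # max active tokens in batch
    order = gate_scores.argsort(dim=1, descending=True)[:, :k]
    idx = order.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
    return tokens.gather(1, idx), mask.gather(1, order)
```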
## 3 Related work
Self-Attention for computer vision.Starting from Vision Transformer (ViT) [11, 9, 4, 32], Multiheaded Self-Attention (MHSA) has been successfully applied in many vision tasks such as image classification [11, 49], object detection [5, 61] or semantic segmentation [12, 59, 27]. While ViTs are often able to match CNN-based models' performance with fewer computational resources [11], the number of input tokens remains an important bottleneck to achieve efficient transformers. Several works [48] have focused on reducing the cost of the attention operation, which scales quadratically with the number of tokens, by using low-rank approximations [8, 54, 31] or exploiting redundant or sparse structures [19, 53, 16, 21, 26, 53]. However, unlike for NLP, the cost incurred by the Feed-Forward Neural Networks (FFNs) in ViTs is often significant due in part to the generally smaller number of tokens. Hence, instead of focusing only on attention layers, a number of techniques have been proposed to reduce the total number of tokens.
**Token pruning and merging.** Token pruning [56, 34, 57, 29, 20, 30, 23] and merging [35, 42, 3] are some of the most successful token reduction approaches in the literature. These methods usually prune away a fixed number of tokens in intermediate layers of the transformer based on their class attention score [23, 57] or on the previous layer's features [34], or merge tokens into a fixed smaller number of tokens using a cross-attention layer or projection [35, 42].
Orthogonal to these methods, our mixed-scale selection scheme outputs a dynamic number of tokens, tailored to the input image content. It is also designed as a preprocessing module acting on the token set before the first Transformer layer, and hence can be combined with methods such as token pruning or early-exiting which act on the intermediate transformer layers. Finally, in contrast to pruning, mixed-scale models are lossless in the sense that every input image region is covered by a token. This is important for dense tasks such as segmentation where the final spatial predictions are usually directly reconstructed from the tokens.
Mixed-scale ViTs.Mixing features from different scales has shown positive results for convolutional networks [24, 25]. Following this insight, recent works have started to investigate ways to incorporate mixed-scale information in ViTs as well: Quadtree Attention [47] uses hierarchical structures to improve the efficiency of MHSA. CrossViT [6] defines separate branches for each scale, which occasionally communicate through cross-attention. CF-ViT [7] and DVT [55] combine early-exiting with a two-stage cascade of transformers, one for each scale. Finally ReViT [62] learns a global input patch scale for each image with an EfficientNet backbone trained with precomputed proxy labels. The majority of these works treat each scale separately, either by incorporating extra parameters (entire branch [6, 55]
or layernorm parameters [62]) or by training for each scale in separate stages [7, 62]. In contrast, we design a simpler single-stage model which directly handles having mixed-scale tokens in one batch, for both training and inference. Closest to our work is [40], which leverages saliency maps from a pretrained model to design a quadtree structure on token scales and enable mixed-scale token selection.
## 4 Experiments
### ImageNet classification
We first benchmark the proposed mixed-scale tokenization on ImageNet [41]: We use publicly available SotA ViT backbones pretrained on ImageNet-21k [44, 11, 37], and DeiT backbones pretrained on ImageNet [49, 36]. We implement the gate as a lightweight 4-layer MLP with a scalar output in [0, 1], applied to every coarse token individually. After the first layer, a learned position encoding, specific to the gate, is also added to the token representations. Finally, the bias of the last layer is initialized such that the gate outputs 1: i.e., all patches are extracted at the fine scale at the beginning of training. We set all other hyperparameters to that of the original ViT (resp. DeiT) pipeline and finetune all models for 20 epochs with a batch size of 512 on a single device (see additional training details in Appendix C).
In Table 1, we evaluate the proposed mixed-scale MSViT across different backbone architectures (ViT-S and ViT-Tiny), pretraining strategies (ViT and DeiT), and input image sizes. We report top-1 accuracy results as well as MAC counts calculated via deepspeed [38].
From the quantitative results, we observe that the mixed-scale models consistently reach higher accuracy at equivalent MACs across different compute budgets and input sizes. We also display qualitative examples of the mixed-scale selection patterns learned by the gate in Figure 1 and Appendix A: Despite having a limited field of view, the learned gate picks up on meaningful local features such as background/foreground distinction to select tokens' scales. Furthermore, we observe that the learned mixed-scale pattern is very similar across experimental settings: Two gates with the same number of active tokens, trained for MSViT-S/16 and MSViT-L/16 respectively, select the same scale for **78.4%** of the tokens on the ImageNet validation set. Similarly, the gates of a MSViT-S/16 model trained with 224px and 192px inputs respectively, agree for **87.1%** of the tokens. Motivated by this observation, we investigate in the next section whether the learned mixed-scale gate can be directly transferred as an off-the-shelf lightweight preprocessing module to other vision transformer-based models.
### Transferring mixed-scale tokenization across tasks and backbones
#### 4.2.1 Mixed-scale tokens for segmentation
To verify whether MSViT also benefits dense prediction tasks, we augment the standard Segmenter training pipeline [18, 45] on ADE20k [60] with one of our gates, pretrained on ImageNet and frozen. The change is easy to implement: we replace the standard patch embedding of the ViT encoder with our own mixed-scale tokenization (Section 2.1) and keep it frozen during training. We then propagate the mixed-scale mask into further layers using masked
\begin{table}
\begin{tabular}{|c||c|c||c|c|}
\hline
DeiT-Small & Avg \# & GMACs & \multicolumn{2}{c|}{accuracy} \\
backbone & tokens & (avg) & top-1 & top-5 \\ \hline \hline
DeiT-S/16 in=160 & 100 & **2.27** & 75.86 & 92.84 \\ \hline
**MSDeiT-S/16,32 in=224** & **97** & **2.27** & **76.99** & **93.38** \\ \hline \hline
DeiT-S/16 in=192 & 144 & **3.32** & 77.79 & 93.96 \\ \hline
**MSDeiT-S/16,32 in=224** & **142** & **3.32** & **78.76** & **94.32** \\ \hline \hline
DeiT-S/16 in=224 & 196 & 4.60 & **79.85** & **94.57** \\ \hline
**MSDeiT-S/16,32 in=224** & **173** & **4.08** & 79.38 & 94.38 \\ \hline \hline
ViT-Tiny & Avg \# & GMACs & \multicolumn{2}{c|}{accuracy} \\
backbone & tokens & (avg) & top-1 & top-5 \\ \hline \hline
ViT-Ti/16 in=160 & 100 & **0.60** & 71.63 & 90.68 \\ \hline
**MSViT-Ti/16,32 in=224** & **95** & **0.60** & **72.57** & **91.32** \\ \hline \hline
ViT-Ti/16 in=192 & 144 & 0.89 & 74.24 & 92.22 \\ \hline
**MSViT-Ti/16,32 in=224** & **138** & **0.88** & **74.93** & **92.54** \\ \hline \hline
ViT-Ti/16 in=224 & 196 & 1.25 & **76.00** & **93.26** \\ \hline
**MSViT-Ti/16,32 in=224** & **154** & **0.98** & 75.51 & 92.98 \\ \hline \hline
ViT-Small & Avg \# & GMACs & \multicolumn{2}{c|}{accuracy} \\
backbone & tokens & (avg) & top-1 & top-5 \\ \hline \hline
ViT-S/16 in=128 & **64** & **1.44** & 75.48 & 93.08 \\ \hline
**MSViT-S/16,32 in=224** & 75 & 1.76 & **77.16** & **94.14** \\ \hline \hline
ViT-S/16 in=160 & 100 & **2.27** & 78.88 & 94.95 \\ \hline
**MSViT-S/16,32 in=224** & **98** & 2.30 & **79.51** & **95.33** \\ \hline
ViT-S/16 in=192 & 144 & 3.32 & 80.75 & 95.86 \\ \hline
**MSViT-S/16,32 in=224** & **138** & **3.23** & **81.47** & **96.14** \\ \hline \hline
ViT-S/16 in=224 & 196 & 4.60 & **82.02** & **96.45** \\ \hline
**MSViT-S/16,32 in=224** & **187** & **4.43** & **82.02** & 96.44 \\ \hline
\end{tabular}
\end{table}
Table 1: Comparison of our dynamic mixed-scale model with the corresponding backbone baseline evaluated at different input image sizes. For ease of reading, the results are sorted by MACs, and grouped by backbones. Inside each table, we group results by comparable MAC counts or accuracy. We refer to models as “arch/S in=X”, where arch is the backbone architecture, x is the input image size, and S is the patch scale(s). The prefix MS (Multi-Scale) refers to our mixed-scale models: We sweep over values of the gate target \(g^{*}\in\{0.5,0.25,0.1\}\) and loss weight \(\lambda\in\{1,4,20\}\) to obtain dynamic models with various MACs counts and report their GMACs and number of tokens averaged over the evaluation set (For reference, the additional computational cost induced by the gate for ViT-S is 0.017 GMACs). Additional results for all hyperparameters and different input image sizes, and including latency measurements, can be found in Appendix B.
attention (Equation (10)), and finally reshape the stream of mixed-scale tokens to the original 2D image shape before feeding it to the decoder (see Appendix E for details).
We report the results (mIOU, average MAC count, and average latency) in Figure 4 (a, b). Similar to the classification task, we observe improved accuracy-efficiency trade-offs across different backbone sizes and gate sparsities: For instance with a ViT-S backbone, we can save roughly 40% MACs for a minor drop of 0.4 in mIoU. In addition, the scale selection pattern learned on ImageNet is still very meaningful for the images in ADE20k: In Figure 4 (c), we show that classes represented via coarse tokens often correspond to uniform regions such as sky or sea, which typically occupy large regions of the image.
#### 4.2.2 Mixed-scale tokens for token pruning
Token pruning methods iteratively discard a fixed ratio of the tokens in several intermediate layers of the transformer, based on their global class token attention [56, 34, 57, 29, 35, 20, 30]. In contrast, MSViT treats every local region individually and reduces the number of tokens before applying any transformer layer, using pixel-level information only, and without discarding any image region. As a result, both methods are orthogonal and select active tokens on different criteria. To verify how they interact, we augment two SotA pruning methods on DeiT-S, namely EViT [23, 22] and DyViT [34, 33], with one of our pretrained frozen gates instead of the standard ViT tokenization, and then train each model with their respective original pipeline, for different pruning ratios. We report results in Figure 5. We observe that mixed-scale tokenization followed by token pruning in the intermediate layers complement one another well, which also introduces an interesting trade-off: Rather than using very high pruning ratios, better accuracy/efficiency performance can be reached by combining mixed-scale tokenization with a token pruning ratio.
#### 4.2.3 Mixed-scale tokens for hierarchical ViTs
Hierarchical (or Multi-scale) Vision Transformers [27, 26, 15, 10, 58] is a popular family of models that draw inspiration from the inductive bias of CNNs to build efficient ViT architectures: For instance in Swin, the image is initially split in very small tokens (4x4) which interact through local attention windows (7x7 tokens) and are progressively merged into larger tokens as depth increases.
To incorporate mixed-scale tokens in this scenario, we first run the gate to determine the fine/coarse scale pattern across the image: We process fine image regions with the standard Swin paradigm, at a reduced computational cost due to the lower number of tokens and potentially empty attention windows; Coarse tokens on the other hand are passed through a single linear embedding and merged back in the stream of tokens at layer \(\ell\), once the fine token stream has been merged all the way up to the coarse scale. We discuss this process in more detail in Appendix E. We report results for two values of \(\ell\) in Table 2: The value of \(\ell=3\) yields better performance than merging the coarse and fine tokens in an earlier block (\(\ell=2\), bottom table). Finally, due to the multi-scale design of hierarchical ViTs, we hypothesize that the choice of \(\ell\) could be further tuned with respect to the fine scale, coarse scale and window size, and
Figure 4: We train Segmenter [18, 45] on the ADE20k [60] dataset, after adding a (frozen) mixed-scale gate trained on ImageNet. We report quantitative results in Table (a), a qualitative example in (b), and a break-down of classes most often in coarse regions in (c)
### Ablation experiments
#### 4.3.1 Generalized batch-shaping loss (GBaS)
In Section 2.2, we introduced the novel GBaS, which allows for more control over the conditional behavior of the gate, and enables us to easily inject prior knowledge about the spatial distribution of the selected scale at initialization. In Figure 6(a), we confirm that the best trade-off is achieved by GBaS, further improved when the learned priors are initialized as the inverse normalized distance of each spatial position to the center (ctr init for short).
In addition, we observe that the cropping data augmentation used during training is a key factor. By default, we use the standard "Inception-style" cropping strategy [46], which leads to a shift between the token distributions at train and test time [50]. This behavior can be observed qualitatively in Figure 7(a): When training with Inception crops, there is high uncertainty on the location of objects, and the L0 loss ends up stuck on a trivial static pattern early during training. On the other hand, GBaS learns more centered mixed-scale patterns, but still captures uncertainty over spatial positions through the learned priors (Fig. 7(b), _top row_), which can be further reduced with ctr init (_bottom row_).
In contrast, with a lighter cropping strategy, all losses learn that, on average, fine scale tokens are more likely to appear in the center-top of the image, where the object to categorize usually lies (see Appendix G). As a result, all batch-shaping variants perform equally well, and the L0 loss even outperforms them in some cases (Figure 6(b)).
In summary, GBaS is more robust to train/test discrepancy than the other losses; nevertheless, when there is no notable distribution shift, even a simple L0 sparsity loss can reach similar or better performance.
#### 4.3.2 Benefits of learning a dynamic gate
In Figure 8, we illustrate how the learned gate module dynamically adapts the mixed-scale pattern, hence the computation cost, to the input image content. We further investigate and highlight this behavior quantitatively in Appendix G.1, in which we compare using a learned gate versus using a fixed oracle mixed-resolution pattern where all central patches are at the fine scale, and any region further than a certain radius from the center is kept at coarse scale.
## 5 Conclusions
In this work, we proposed a dynamic mixed-scale tokenization scheme for ViT, MSViT, via a novel conditional gating mechanism. The gate is agnostic to the choice of transformer backbone, and is trained jointly with it, in a single-stage, with mixed-scale tokens. To improve the conditional behaviour of the gate, we proposed a generalization of batch-shaping [13] to better handle _multi-dimensional distributions_. GBaS improves results and allows for easier and better initialization of the gates. Our experiments on image classification and semantic segmentation show that the proposed dynamic tokenization enhances computational efficiency by reducing the number of input tokens, with minimal impact on performance. For both tasks, the gate learns to represent uniform and background regions with coarse tokens and higher entropy regions with fine ones.
Figure 8: Example of the learned dynamic gate outputs when applied on random image zooms and shifts of the validation dataset |
2310.17485 | Fair collaborative vehicle routing: A deep multi-agent reinforcement
learning approach | Collaborative vehicle routing occurs when carriers collaborate through
sharing their transportation requests and performing transportation requests on
behalf of each other. This achieves economies of scale, thus reducing cost,
greenhouse gas emissions and road congestion. But which carrier should partner
with whom, and how much should each carrier be compensated? Traditional game
theoretic solution concepts are expensive to calculate as the characteristic
function scales exponentially with the number of agents. This would require
solving the vehicle routing problem (NP-hard) an exponential number of times.
We therefore propose to model this problem as a coalitional bargaining game
solved using deep multi-agent reinforcement learning, where - crucially -
agents are not given access to the characteristic function. Instead, we
implicitly reason about the characteristic function; thus, when deployed in
production, we only need to evaluate the expensive post-collaboration vehicle
routing problem once. Our contribution is that we are the first to consider
both the route allocation problem and gain sharing problem simultaneously -
without access to the expensive characteristic function. Through decentralised
machine learning, our agents bargain with each other and agree to outcomes that
correlate well with the Shapley value - a fair profit allocation mechanism.
Importantly, we are able to achieve a reduction in run-time of 88%. | Stephen Mak, Liming Xu, Tim Pearce, Michael Ostroumov, Alexandra Brintrup | 2023-10-26T15:42:29Z | http://arxiv.org/abs/2310.17485v1 | # Fair Collaborative Vehicle Routing: A Deep Multi-Agent Reinforcement Learning Approach
###### Abstract
Collaborative vehicle routing occurs when carriers collaborate through sharing their transportation requests and performing transportation requests on behalf of each other. This achieves economies of scale, thus reducing cost, greenhouse gas emissions and road congestion. But which carrier should partner with whom, and how much should each carrier be compensated? Traditional game theoretic solution concepts are expensive to calculate as the characteristic function scales exponentially with the number of agents. This would require solving the vehicle routing problem (NP-hard) an exponential number of times. We therefore propose to model this problem as a coalitional bargaining game solved using deep multi-agent reinforcement learning, where - crucially - agents are _not_ given access to the characteristic function. Instead, we _implicitly_ reason about the characteristic function; thus, when deployed in production, we only need to evaluate the expensive post-collaboration vehicle routing problem once. Our contribution is that we are the first to consider both the route allocation problem and gain sharing problem simultaneously - without access to the expensive characteristic function. Through decentralised machine learning, our agents bargain with each other and agree to outcomes that correlate well with the Shapley value - a fair profit allocation mechanism. Importantly, we are able to achieve a reduction in run-time of 88%.
Footnote 1: Previously at Department of Computer Science and Technology, Tsinghua University
Collaborative Vehicle Routing; Deep Multi-Agent Reinforcement Learning; Negotiation; Gain Sharing; Multi-Agent Systems; Machine Learning
## 1 Introduction
Heavy goods vehicles (HGVs) in the UK contributed 4.3% of the UK's _total_ greenhouse gas emissions in 2019 (UK BEIS 2021). HGVs are utilised inefficiently, running at only 61% of their total weight capacity. Moreover, 30% of the distance travelled carries zero freight (UK DfT 2020, RFS0125).
Collaborative vehicle routing (CVR) has been proposed to improve HGV utilisation. Here, carriers collaborate through sharing their delivery information in order to achieve
economies of scale. If carriers agree to work together, they are said to be in a _coalition_. As a result of improved utilisation, total travel costs across collaborating carriers can be reduced, resulting in a _collaboration gain_. The remaining question then is how to allocate this collaboration gain in a fair manner such that carriers are incentivised to form coalitions. An example of CVR is given in Figure 1.
Prior literature suggests that collaborative vehicle routing can reduce costs by around 4-46% and also reduce greenhouse gas emissions and road congestion (Cruijssen et al., 2007; Zhang et al., 2017; Gansterer and Hartl, 2018; Pan et al., 2019; Gansterer and Hartl, 2020; Cruijssen, 2020; Ferrell et al., 2020). Sharing resources may also lead to improved resilience to fluctuations in supply and/or demand. Despite these benefits, real-world adoption remains limited, with only a few companies participating (Cruijssen et al., 2007; Guajardo and Ronnqvist, 2016; Cruijssen, 2020). Currently, a key barrier is the computational complexity of calculating a fair _gain sharing_ mechanism that scales with a larger number of companies. Guajardo and Ronnqvist (2016) recommends that future work should investigate approximate gain sharing methods. Our paper follows this recommendation.
Our first contribution is modelling the collaborative routing problem as a _coalitional bargaining game_(Okada, 1996) with intelligent agents obtained through the use of _deep multi-agent reinforcement learning_ (MARL). We provide the theoretical grounding in this paper, tying together the fields of collaborative vehicle routing, coalitional bargaining, and deep multi-agent reinforcement learning in order to obtain a theoretically grounded approach that significantly reduces run-time. Here, agents
Figure 1: Three agents (denoted by colours) before and after collaboration. Squares denote depots. Crosses denote customer locations. Node indices (arbitrary) are denoted in black, with costs given in their respective colours. The _collaboration gain_ is defined as the difference in social welfare (or total cost) before and after collaboration. In Figure 1(b), Agents 1, 2 and 3 all decide to collaborate, which reduces the system's total cost by 0.88 (or 26%). This results in a _collaboration gain per capita_ (assuming agents split the gain equally) of 0.29. For detailed calculations, see Section 3.1.
attempt to reach agreement on selecting the 'best' carrier(s) to partner with, and rationally share the collaboration gain amongst the coalition. This bargaining process takes place over multiple rounds of bargaining (see Section 3.3 for a formal definition). A benefit of this approach is that both the routing problem (who should deliver which requests?) and the gain sharing problem (who receives how much of the added value?) are considered simultaneously, whereas a key limitation of many previous methods is that they consider these sub-problems in isolation from one another (Gansterer and Hartl, 2018). Moreover, our approach is agnostic to the underlying routing problem - the complexity of the vehicle routing problem (VRP) formulation can be increased with further constraints such as time windows, without further modification to the method.
Our second contribution is that agents do not need access to the full characteristic function explicitly. To obtain the full characteristic function, the collaboration gain for all possible coalitions must be calculated. In the three-player setting, there are four possible coalitions \(\{1,2,3\}\), \(\{1,2\}\), \(\{1,3\}\) and \(\{2,3\}\). Therefore, obtaining the full characteristic function requires solving \(2^{n-1}\) NP-hard post-collaboration VRPs (for a formal introduction, see Section 2.1.3). As a result, methods that require full access to the characteristic function are intractable for settings with more than 6 carriers (Cruijssen, 2020). Instead, our agents can implicitly reason about the characteristic function through only receiving a high-dimensional graph input of delivery information (for example, latitudes and longitudes), as well as other agents' actions. This eliminates the need to fully evaluate the characteristic function when deployed in production, which would involve solving the expensive post-collaboration VRP an exponential number of times. Instead, we only need to solve the post-collaboration VRP once when deployed in real-world settings, thus allowing our approach to achieve a significant run-time reduction. In addition, our approach utilises Centralised Training with Decentralised Execution (CTDE) to obtain decentralised agent policies (Lowe et al., 2020). Decentralised policies are desirable in real-world applications as each agent does not necessarily require access to the global, underlying state. This helps ensure that companies' sensitive information will not be leaked to competitors. It also helps to stabilise training in multi-agent settings as well as reduce communication costs. Furthermore, our approach is _inductive_, as opposed to the transductive nature of prior methods. This enables our agents to generalise to agents never seen before during training and thus reduces computational cost.
The remainder of this paper is organised as follows. Section2 positions our work within the wider context of both collaborative vehicle routing and deep multi-agent reinforcement learning. Section3 provides a formal introduction to coalitional games, coalitional bargaining and reinforcement learning. Section4 discusses and justifies various design decisions regarding our agents. Section5 details our experimental setup, results, discussion and future work. Finally, Section6 concludes our findings and provides broader managerial implications as a result of this work.
## 2 Related Work
### Collaborative vehicle routing
Prior collaborative routing literature tackles the partner selection sub-problem (i.e., who should each carrier work with?) by estimating the collaboration gain between different carriers using heuristics (Palhazi Cuervo et al., 2016; Adenso-Diaz et al., 2014).
However, a limitation of this approach is that they do not consider how much each agent should be compensated, nor if agents even agree to join the same coalitions (i.e., if the coalitions are stable). Posing this problem as a _coalitional bargaining_ game not only allows us to tackle the partner selection aspect, but we are also able to consider the gain sharing aspect simultaneously as well.
The majority of the collaborative routing literature is concerned with the exchange of _individual_ transportation requests amongst the carriers. This can be divided into three types of planning approaches: centralised; decentralised without auctions; and decentralised with auctions (Gansterer and Hartl, 2018, 2020).
#### 2.1.1 Centralised planning
Centralised planning approaches desire to simply maximise social welfare (the sum of each company's profits). Typically, this goal is achieved by using a form of mixed integer linear programming or (meta)heuristics (Cruijssen et al., 2007; Gansterer and Hartl, 2018; Angelelli et al., 2022). This can be viewed as a common-payoff setting, i.e., where all agents are on the same team and receive the same reward. However, assuming a common-payoff setting in practice is unrealistic as companies are _self-interested_ - they mostly care only about their own profits (Cruijssen et al., 2007). Moreover, there exists fierce competition especially in horizontal collaborations. Therefore, the more realistic setting of decentralised control is needed where agents are modelled to be self-interested.
#### 2.1.2 Decentralised planning
There have been few attempts to tackle CVR with decentralised approaches as well. One approach focuses on the problem of _partner selection_, i.e. "who should work with whom?". Adenso-Diaz et al. (2014) proposes an a priori index to estimate the collaboration gain between carriers based on their transportation requests. However, a key limitation is that they do not consider the gain sharing aspect and thus the coalitions formed may not be stable.
A key challenge in decentralised settings is managing the explosion in the number of _bundles_. Consider Figure 1 where Agent 2 may desire to sell delivery node 10 to Agent 1. However, if Agent 2 offers both nodes 10 and 11 as a _bundle_, then Agent 2 may be able to command a higher price. Indeed, the number of possible bundles scales \(\mathcal{O}(2^{m})\) where \(m\) is the number of deliveries. To manage this explosion, a heuristic is typically implemented where agents can only submit or request a few bundles (sometimes only one) which would limit optimality (Bo Dai and Chen, 2009).
A second challenge is to also elicit other agents' preferences over all bundles. One approach is to invoke structure on the problem in the form of _combinatorial auctions_, which aids optimality (Krajewska et al., 2008; Gansterer and Hartl, 2018; Gansterer et al., 2019; Los et al., 2022). Auctions are where carriers submit requests they do not wish to fulfil to a common pool. Then, other carriers can submit bids on these requests, with various methods of determining the "winners" of said bids. Combinatorial auctions in these settings allow carriers to bid on bundles of transportation requests instead of individual transportation requests, which increases expressivity and optimality. However, this additional structure comes at the cost of additional computational complexity. Moreover, in auction mechanism design, there are four desirable properties: efficiency; individual rationality; incentive compatibility; and budget balance. Gansterer et al. (2019) propose two auction-based approaches which may be useful in practice, but
would be unable to satisfy all four properties simultaneously: there exists a trade-off instead. Los et al. (2022) investigate large-scale carrier collaboration containing 1,000 carriers with decentralised auctions. Whilst impressive in scale, their approach ignores the difficulty of large-scale gain sharing.
Both auction-based and non-auction-based approaches may also be exploited by strategic agent behaviour. Would agents intentionally misreport the costs associated with performing deliveries in order to maximise their own profits? Whilst we do not tackle this problem in our work, we believe MARL could be a useful tool to investigate this strategic behaviour in future work.
#### 2.1.3 Gain sharing
Whilst gain sharing has been studied in collaborative routing using cooperative game theory (Guajardo and Ronnqvist, 2016), the solution concepts typically assume that the characteristic function is given. For a set of \(n\) agents, \(N=\{1,\ldots,n\}\), the characteristic function \(v:\mathbf{2}^{N}\to\mathbb{R}_{\geq 0}\) assigns a _value_, or in our case _collaboration gain_, to every possible coalition that could be formed. Note that there exist \(\mathcal{O}(2^{n})\) possible coalitions. This is intractable for settings with more than a few agents, because evaluating the collaboration gain of even a single coalition involves solving a vehicle routing problem, which is NP-hard. For detailed calculations of the collaboration gain, see Section 3.1. Guajardo and Ronnqvist (2016) review 55 papers from the collaborative transportation literature concerning gain sharing. They recommend that a future research direction should focus on developing approximate gain sharing approaches, based on cooperative game theory, that scale with the number of agents.
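To make this cost concrete, the sketch below computes exact Shapley values by enumerating every coalition; the characteristic function `v` is supplied here as a plain dictionary using the toy values from Section 3.1, but in collaborative routing each entry would require solving an NP-hard post-collaboration VRP, which is what makes exact gain sharing intractable as \(n\) grows.

```python
from itertools import combinations
from math import factorial

def shapley_values(agents, v):
    """Exact Shapley values for a characteristic function v (a dict mapping
    frozenset coalitions to collaboration gains). Requires v for all 2^n
    coalitions -- each of which would mean solving an NP-hard VRP."""
    n = len(agents)
    phi = {i: 0.0 for i in agents}
    for i in agents:
        others = [a for a in agents if a != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v[S | {i}] - v[S])
    return phi

# Toy characteristic function from the worked example in Section 3.1 (0-normalised):
v = {frozenset(): 0.0,
     frozenset({1}): 0.0, frozenset({2}): 0.0, frozenset({3}): 0.0,
     frozenset({1, 2}): 0.76, frozenset({1, 3}): 0.24,
     frozenset({2, 3}): 0.01, frozenset({1, 2, 3}): 0.88}
print(shapley_values([1, 2, 3], v))   # the values sum to v(N) = 0.88
```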
In the wider algorithmic game theory literature, coalition formation has also been extensively studied (Chalkiadakis et al., 2011). However, much of the existing literature again assumes that the full characteristic function is given. Alternatively, they aim to find more succinct representations of the characteristic function, typically at a cost of increased computational complexity when computing solution concepts (Chalkiadakis et al., 2011). Examples include Induced Subgraph Games and Marginal Contribution Nets (Deng and Papadimitriou, 1994; Ieong and Shoham, 2005); however, even these succinct representation schemes require evaluating the value of multiple coalitions and thus solving multiple NP-hard VRPs. We argue that many real-world scenarios consist of the characteristic function being a function of the agents' assets or capabilities. In the collaborative routing setting, this is a function of the transportation requests an agent possesses. We therefore ask: _"Can agents form optimal coalitions from the delivery information alone instead of having full access to the characteristic function?"_. Therefore, our paper can be viewed as using an alternative, succinct representation scheme which approximates a rational outcome by using a function approximator.
### Deep multi-agent reinforcement learning
Single agent reinforcement learning has seen increasing adoption in supply chain management. However, supply chains can be naturally modelled as a system comprising multiple self-interested agents (Fox et al., 2000; Xu et al., 2021; Brintrup, 2021). For a thorough review of reinforcement learning applied towards supply chain management, see Yan et al. (2022).
Recently, MARL has seen success in playing board and video games such as Go, StarCraft II and Dota 2 (Silver et al., 2016; Vinyals et al., 2019; OpenAI et al., 2019). Whilst these are tremendous feats in the AI space, the underlying games tend to be
2-player and zero-sum. However, most real-world applications, including supply chain management (Gabel and Riedmiller, 2012; Kosasih and Brintrup, 2021), are \(n\)-player and mixed-motive (with potential'sequential social dilemmas' (Leibo et al., 2017)). Whilst there is some research in this direction, the majority of MARL research focuses on pure coordination or pure competition settings (see Table 1). Our work is 3-player and mixed-motive which leads to a more challenging joint-policy space, allowing for complex behaviours such as collusion.
The most similar work to ours from a multi-agent learning perspective is that of Bachrach et al. (2020) and Chalkiadakis and Boutilier (2004). In Bachrach et al. (2020), they apply deep MARL to a spatial and non-spatial Weighted Voting Game, where agents are given full access to the characteristic function. In Chalkiadakis and Boutilier (2004), they apply a Bayesian MARL approach to coalition formation as their problem has uncertainty in the characteristic function. In their problem, each agent knows its own capability, but does not observe other agents' capabilities. As a result, they maintain a belief over other agents' capabilities. However, each agents' capabilities remains constant. In our work, each agents' 'capability' can be thought of as the transportation requests it possesses, which constantly changes between episodes. Thus, our agents must be able to generalise across differing agent capabilities.
## 3 Background
### Collaborative Vehicle Routing
We denote the set of \(n\) agents as \(N=\{1,\ldots,n\}\). A coalition is a subset of \(N\), i.e. \(C\subseteq N\). The grand coalition is where all agents are in the coalition, i.e. \(C=N\).
**Pre-collaboration profit and social welfare**: The _pre-collaboration profit_ of Agent 1 in Figure 2 is calculated as follows: the _Revenue_ is 3 (1 for each delivery); the _Cost_ is 1.42 (sum of the edge distances); thus the _Profit_ is 1.58 (Revenue minus Cost). Similarly, the pre-collaboration profits of Agents 2 and 3 are 2 and 2.07 respectively. The _pre-collaboration social welfare_ is the sum of the pre-collaboration profits, thus \(1.58+2+2.07=5.65\).
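As a minimal sketch of this calculation, the snippet below computes a single carrier's pre-collaboration profit as revenue minus the cost of its cheapest tour, assuming unit revenue per delivery and Euclidean distances; the coordinates are illustrative placeholders rather than the instance in Figure 2, and the brute-force tour search is only viable because each carrier serves three customers.

```python
from itertools import permutations
from math import dist

def route_cost(depot, customers):
    """Cost of the cheapest tour depot -> all customers -> depot, by brute
    force (fine for the three customers per carrier used in our instances)."""
    best = float("inf")
    for order in permutations(customers):
        stops = [depot, *order, depot]
        best = min(best, sum(dist(a, b) for a, b in zip(stops, stops[1:])))
    return best

def pre_collab_profit(depot, customers, revenue_per_delivery=1.0):
    """Pre-collaboration profit = revenue - optimal route cost."""
    return revenue_per_delivery * len(customers) - route_cost(depot, customers)

# Placeholder coordinates (not the instance in Figure 2):
print(pre_collab_profit((0.0, 0.0), [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4)]))
```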
**Post-collaboration "profit" and social welfare**: Assuming agents agree to form the grand coalition \(C=\{1,2,3\}\), the post-collaboration "profit" of Agent 1 can be calculated as \(1-(0.06+0.06)=0.88\). Note that the post-collaboration "profit" for Agent 1
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Game & \(>2\)-players & Mixed-Motive & Known Optimum & Partially Observable \\ \hline Go (Silver et al., 2016) & ✗ & ✗ & ✗ & ✗ \\ StarCraft II (Vinyals et al., 2019) & ✗ & ✗ & ✗ & ✓ \\ SMAC\({}^{a}\) (Samvelyan et al., 2019) & ✓ & ✗ & ✗ & ✓ \\ Dota 2 (OpenAI et al., 2019) & ✓ & ✗ & ✗ & ✓ \\ Gran Turismo (Wurman et al., 2022) & ✓ & ✗ & ✗ & ✓ \\ Football (Kurach et al., 2020) & ✓ & ✗ & ✗ & ✓\({}^{d}\) \\ Hide and Seek (Baker et al., 2020) & ✓ & ✗ & ✗ & ✓ \\ Communication (Foerster et al., 2016) & ✓ & ✗ & ✓ & ✓ \\ GCE\({}^{b}\) (Mordatch and Abbeel, 2018) & ✓ & ✗ & ✓ & ✓ \\ SSDs\({}^{c}\) (Leibo et al., 2017) & ✓ & ✓ & ✗ & ✓ \\
**Coalitional Bargaining (ours)** & ✓ & ✓ & ✓ & ✗ \\ \hline \hline \end{tabular}
\({}^{a}\) StarCraft Multi-Agent Challenge; \({}^{b}\) Grounded Communication Environment; \({}^{c}\) Sequential Social Dilemmas; \({}^{d}\) Both fully and partially observable settings supported.
\end{table}
Table 1: Characteristics of selected games studied in MARL.
appears to have decreased from \(1.58\) to \(0.88\) as a result of collaboration. This will be accounted for when discussing the characteristic function, and thus Agent 1 will not lose out when we calculate its reward. For Agents 2 and 3, the post-collaboration "profit" is \(2.19\) and \(3.46\) respectively. This gives a _post-collaboration social welfare_ of \(0.88+2.19+3.46=6.53\).
**Collaboration gain**: The _collaboration gain_ is defined as the difference in social welfare before and after collaboration for a given coalition, in this case \(6.53-5.65=0.88\) for the grand coalition. Note that the collaboration gain is always greater than or equal to \(0\). The _value per capita_ is \(\frac{0.88}{3}=0.29\). During the bargaining process, agents are able to choose how to divide this collaboration gain amongst themselves. In the unique case where agents agree to divide the collaboration gain equally, i.e. according to the value per capita, we refer to this as _equal gain sharing_. Note that if only Agents 1 and 2 form a coalition (and exclude Agent 3), then the collaboration gain (assuming equal gain sharing) is divided by 2 instead - thus making it rational to object and form the coalition \(\{1,2\}\) (the value per capita of this coalition is \(0.38\)).
**Characteristic function**: The characteristic function, \(v:\mathbf{2}^{N}\rightarrow\mathbb{R}\) calculates for every possible coalition the collaboration gain. Importantly, to fully evaluate the characteristic function would require solving a variant of the Vehicle Routing Problem for every possible coalition which scales \(\mathcal{O}(2^{n})\).
Following the example in Figure 2:
\[v(\{1,2,3\})=0.88\quad\text{Value per Capita = }\tfrac{0.88}{3}=0.29\] \[v(\{1,2\})=0.76\quad\quad\text{Value per Capita = }\tfrac{0.76}{2}=0.38\] \[v(\{1,3\})=0.24\quad\quad\text{Value per Capita = }\tfrac{0.24}{2}=0.12\] \[v(\{2,3\})=0.01\quad\quad\text{Value per Capita = }\tfrac{0.01}{2}=0.005\]
Figure 2: Three agents, Agents 1, 2 and 3, are denoted by the colours green, orange and purple respectively. Squares denote depots. Crosses denote customer locations. Node indices (arbitrary) are denoted in black, with costs given in their respective colours. The _collaboration gain_ is defined as the difference in social welfare before and after collaboration. Figures 2(b) and 2(c) refer to two possible post-collaboration scenarios with collaboration gains per capita of \(0.29\) and \(0.38\) respectively. Thus, it would be rational for the coalition \(\{1,2\}\) to form instead of the grand coalition \(\{1,2,3\}\).
It is important to note that the characteristic function is _0-normalised_, _essential_ and _super-additive_ (see Section3.2 for a formal definition). This guarantees that agents will not lose profits as a result of collaboration. The final _take-home profit_ that each agent (or carrier) receives can then be calculated as the sum of the pre-collaboration profit and its respective allocation of the collaboration gain. For Agents 1, 2 and 3, this would equate to \(1.58+\frac{0.88}{3}=1.87\), \(2+\frac{0.88}{3}=2.29\) and \(2.07+\frac{0.88}{3}=2.36\) respectively (assuming equal gain sharing). In reality, carriers will receive this take-home profit (which is always greater than or equal to the pre-collaboration profit) as an incentive to collaborate.
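The take-home profit calculation under equal gain sharing can be summarised in a few lines; the sketch below reproduces the numbers from the worked example above.

```python
def take_home_profits(pre_collab_profits, collaboration_gain):
    """Equal gain sharing: each member keeps its pre-collaboration profit
    plus an equal share of the coalition's collaboration gain."""
    share = collaboration_gain / len(pre_collab_profits)
    return {i: round(p + share, 2) for i, p in pre_collab_profits.items()}

pre = {1: 1.58, 2: 2.00, 3: 2.07}            # pre-collaboration profits
print(take_home_profits(pre, 0.88))          # {1: 1.87, 2: 2.29, 3: 2.36}
```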
### Coalitional games
We consider the \(n\)-player coalitional game, also called a cooperative game, with a set of agents \(N=\{1,\ldots,n\}\). A _coalition_ is defined as a subset of N, i.e. \(C\subseteq N\). The set of all coalitions is denoted \(\Sigma\). The _grand coalition_ is where the coalition consists of all agents in N, i.e. \(C=N\). A _singleton coalition_ is where the coalition consists of only one agent, i.e. \(|C|=1\). A _coalition structure_\(CS=\{C^{1},\ldots,C^{k}\}\) is a partition of \(N\) into mutually disjoint coalitions, \(C^{1}\cup\cdots\cup C^{k}=N\) and \(C^{i}\cap C^{j}=\varnothing,\forall i\neq j\).
A (transferable utility) coalitional game is a pair \(G=\langle N,v\rangle\). The _characteristic function_\(v:\mathbf{2}^{N}\rightarrow\mathbb{R}_{\geq 0}\) represents the _value_ (or collaboration gain in our setting) that a given coalition \(C\) receives. Like Okada (1996), we assume that the characteristic function is _0-normalised_, _essential_ and _super-additive_. The characteristic function is _0-normalised_ if the value of all singleton coalitions is 0, i.e. \(v(\{i\})=0,\forall i\in N\). It is _essential_ if the value of the grand coalition is strictly positive, \(v(N)>0\). It is
Figure 3: Flowchart of the \(n\)-player coalitional bargaining game (Okada 1996). Our proposed approach is therefore to obtain a set of intelligent agents that can bargain with each other in a coalitional bargaining game. To achieve a suitable level of agent intelligence, we train our agents using deep multi-agent reinforcement learning.
super-additive_ if \(v(C\cup D)\geq v(C)+v(D)\) for all coalition pairs \(C,D\in\Sigma\) where \(C\cap D=\varnothing\).
The payoff vector \(\mathbf{x}^{C}=(x_{i}^{C})_{i\in C}\) denotes the pay-offs of the players in the coalition \(C\), where \(x_{i}^{C}\) is the pay-off for player \(i\). The payoff vector is _feasible_ if \(\sum_{i\in C}x_{i}^{C}\leq v(C)\). The set of all feasible payoff vectors for a given coalition \(C\) is \(X^{C}\), and \(X_{+}^{C}\) when all the elements of \(X^{C}\) are non-negative.
### Coalitional bargaining
The purpose of this work is to find a partition of the \(N\) carriers with an associated payoff vector, i.e. \((CS,\mathbf{x})\), which all self-interested, rational carriers agree to. Notice how this does not imply any sequential decision making. However, it was found that certain cooperative solution concepts can be retrieved as the outcome of non-cooperative, extensive form games such as coalitional bargaining (Nash, 1953). Therefore, this necessitates sequential decision making in our problem where we propose to obtain intelligent agents through the use of MARL.
Okada (1996) presents the \(n\)-player, random-proposers, alternating-offers coalitional bargaining game which we adopt. At every time-step \(t=1,2,\dots\), an agent from \(N\) is selected uniformly at random to be the _proposer_. The proposer, player \(i\), has two actions - the proposed coalition and the proposed pay-off vector. The proposed coalition \(C\) must contain player \(i\) and the value of the coalition \(v(C)\) must be greater than 0. Due to the characteristic function being 0-normalised, this implies \(|C|\geq 2\). The payoff vector \(\mathbf{x}^{C}\) must be in the set of all feasible, non-negative payoff vectors \(X_{+}^{C}\). After player \(i\) has proposed, the remaining players, called the _responders_, are selected sequentially, uniformly at random, to either accept or reject the proposal. If all agents in the proposed coalition \(C\) accept, then those agents form a coalition with the agreed-upon proposal. The remaining players outside of \(C\) continue negotiating from the next time-step. If any responder in \(C\) rejects the proposal, then all players receive an immediate reward of zero and negotiations go on to the next round of bargaining. Then, a new proposer is selected uniformly at random and the time-step is incremented by 1. This continues until either agreement is reached or the maximum time-step is reached. When a proposal \((C,x^{C})\) is agreed upon at time \(t\), every agent \(i\) in \(C\) receives a reward of \(\gamma^{t-1}x_{i}^{C}\), where \(\gamma\in[0,1]\) is the discount factor. The discount factor decreases the reward received as time passes. This encourages agents to reach agreement within the first time-step in the three-player setting, as shown in Okada (1996). The discount factor in this setting is analogous to the _patience_ of an agent, or the urgency of the delivery decision. Any agent who is not in a coalition at the end of this process is assumed to have a reward of zero. In the three-player setting, note that if one proposal is accepted, then no more feasible coalitions can form; thus, this denotes the end of the bargaining process, as seen in Figure 3.
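The protocol can be summarised by the following skeleton of a single episode for the three-player case (a simplified sketch of the game flow in Figure 3, not the authors' implementation; `propose` and `respond` stand in for the agents' learned policies and `v` for the characteristic function):

```python
import random

def play_episode(agents, v, propose, respond, gamma=0.95, max_steps=10):
    """One episode of the random-proposer, alternating-offers bargaining game
    (three-player case: the episode ends as soon as one coalition forms).

    propose(i, agents) -> (coalition, payoff): a coalition containing i and a
    dict of non-negative shares over the coalition summing to 1.
    respond(j, coalition, payoff) -> bool: accept (True) or reject (False).
    """
    rewards = {i: 0.0 for i in agents}
    for t in range(1, max_steps + 1):
        proposer = random.choice(agents)
        coalition, payoff = propose(proposer, agents)
        responders = [j for j in coalition if j != proposer]
        random.shuffle(responders)               # responders answer in random order
        if all(respond(j, coalition, payoff) for j in responders):
            for i in coalition:                  # discounted share of the gain
                rewards[i] = gamma ** (t - 1) * payoff[i] * v[frozenset(coalition)]
            return rewards
    return rewards                               # no agreement: everyone gets zero
```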
## 4 Methodology
In summary, analytically calculating cooperative game theory solution concepts is intractable for settings with more than 6 carriers (Cruijssen, 2020). Instead, we can recover these cooperative solution concepts through non-cooperative, extensive form games such as coalitional bargaining (Serrano, 2004). However, coalitional bargaining requires intelligent, rational agents and it is difficult to manually craft rule-based agents for collaborative routing due to its exponential and NP-hard nature. Instead,
\begin{table}
\begin{tabular}{l l} \hline \hline
**Symbol** & **Definition** \\ \hline
**Coaliational** & **Bargaining Game:** \\ \(n\) & Number of Agents \\ \(N\) & Set of all \(n\) Agents (i.e., grand coalition) \\ \(i\) & Agent index \\ \(C\) & A Coalition \\ \(\Sigma\) & Set of all Coalitions \\ \(CS\) & A Coalition Structure \\ \(\varnothing\) & Empty set \\ \(G\) & A coalitional game \\ \(v(\cdot)\) & Characteristic function \\ \(v(C)\) & Value of the coalition \(C\), or the collaboration gain of the coalition \(C\) in the collaborative vehicle routing setting. \\ \(\mathbf{x}^{C}\) & Payoff vector for a given coalition \(C\) \\ \(X^{C}\) & Set of all feasible payoff vectors for a given coalition \(C\) \\ \(X^{C}_{+}\) & Set of all feasible, non-negative payoff vectors for a given coalition \(C\) \\ \end{tabular}
\begin{tabular}{l l} \hline
**(Multi-agent) Reinforcement Learning:** \\ \(\gamma\) & Discount factor \\ \(\mathcal{M}\) & A Markov decision process (MDP) \\ \(\mathcal{S}\) & Set of states \\ \(s_{0}\) & Initial state of an episode \\ \(\mathcal{A}\) & Set of (joint) actions \\ \(\mathcal{T}\) & Transition probability distribution \\ \(\rho_{0}\) & Distribution of the initial state, \(s_{0}\) \\ \(a\) & An action \\ \(t\) & Time-step index \\ \(G_{t}\) & Return following time \(t\) \\ \(\mathcal{T}\) & Maximum time-step (or the horizon length) \\ \(\pi\) & Agent’s policy \\ \(V_{\pi}(s)\) & State-value function of a state \(s\) following a policy \(\pi\) \\ \(\hat{V}(s,\theta)\) & Policy’s (parameterised by \(\theta\)) estimate of the state-value function given the state \(s\). \\ \(\hat{Q}(s,a,\theta)\) & Policy’s (parameterised by \(\theta\)) estimate of the action-value function given the state \(s\) and taking the action \(a\). \\ \(Q_{\pi}(s,a)\) & Action-value function of a state \(s\) taking the action \(a\) following a policy \(\pi\) \\ \(\mathcal{R}\) & Set of all possible rewards \\ \(R_{i,t}\) & Reward at time \(t\) for agent \(i\) \\ \(\theta_{i}\) & Agent’s policy parameters, usually the parameters of a neural network \\ \(J(\theta)\) & Performance measure for the policy \(\pi_{\theta}\) \\ \(\nabla J(\theta)\) & Column vector of partial derivatives of \(\pi(a|s,\theta)\) with respect to \(\theta\) \\ \(\hat{g}\) & Estimate of the policy gradient \\ \(M\) & Number of episodes played in parallel \\ \(\alpha\) & Learning rate for stochastic gradient descent \\ \(b(s)\) & A baseline function for policy gradient methods \\ \(r_{t}(\theta)\) & PPO’s probability ratio between the new policy (after gradient updates) and the old policy (before gradient updates) \\ \(\varepsilon\) & Threshold to clip the probability ratio in PPO \\ \(\mathcal{H}\) & Entropy bonus \\ \end{tabular}
\begin{tabular}{l l}
**Collaborative Vehicle Routing:** \\ \(\mathbf{D}\) & Deliveries matrix \\ \(x\) & x-coordinate of the location \\ \(y\) & y-coordinate of the location \\ \(o\) & Agent index who owns the location \\ \(d\) & Binary variable denoting whether the location is a depot or a customer \\ \(\mathbf{c}\) & Multi-hot encoded vector denoting which agents are in the proposed coalition \\ \(\mathbf{x}\) & Proposed pay-off vector \\ \(\mathbf{r}\) & Responses of the agents to the given proposal \\ \(p\) & Agent index who was selected to propose in the current round of bargaining \\ \(a\) & Binary variable denoting whether the current agent is proposing or responding \\ Dir(\(\mathbf{\alpha}\)) & Dirichlet distribution with concentration parameters \(\mathbf{\alpha}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Notation Table
we propose to develop intelligent, rational agents by having them learn through trial-and-error to collaborate in the presence of multiple other self-interested, rational agents (i.e., multi-agent reinforcement learning). A holistic diagram depicting the whole pipeline can be found in Appendix E. The remainder of this section focuses on the reinforcement learning algorithm employed. Pseudo-code of the pipeline can be found in Appendix D.
### Single Agent Reinforcement Learning
Reinforcement Learning (RL) is a subfield of machine learning. Here, the field studies an agent learning what _actions_ to take for a given _state_ in order to maximise a numerical _reward_. In supervised learning, the ground truth target labels are provided. In RL, we are not told the "correct" actions to take that will maximise (expected) cumulative reward. Instead, the agent must learn through trial-and-error. This leads to an exploration-exploitation dilemma. Should the agent try new actions (explore) in the hope that there is a better sequence of actions that leads to an even higher expected reward? Or, should the agent stick with its current best-known actions (exploit) since the agent believes it is unlikely there will be a better sequence of actions with higher expected reward? (Sutton and Barto 2018). The agent selects actions according to its _policy_ based on the current state. The action is sent to the _environment_ which calculates the reward and next state which is then returned to the agent. Through the learning process, we aim to obtain a policy that maximises the expected cumulative reward.
In our setting of collaborative vehicle routing, the environment is the coalitional bargaining game as described in Section 3.3. Each carrier is represented as an individual agent. The state is the locations of depots and customers, as well as auxiliary features to describe the current state of the coalitional bargaining process - see Section 4.3 for further details. There are three actions that an agent can take depending on if it is proposing or responding. When proposing, the agent must decide (a) which other carriers should the agent propose to partner with, and (b) how much should each carrier in the proposal be paid. When responding, the agent must decide (c) if they accept or reject the proposal. The reward is the collaboration gain the agent is allocated as a result of the coalitional bargaining process. Throughout the training process, we train our agents' policies (or neural network) to maximise expected cumulative reward. See Section 4 for a formal definition of states, actions and rewards in our setting.
We can formalise the problem using Markov decision processes (MDPs) (Puterman 1994). Formally, a finite-horizon, discounted Markov decision process \(\mathcal{M}\) can be defined by the tuple \(\mathcal{M}=\langle\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R},\rho_{0},\gamma\rangle\) where \(\mathcal{S}\) is the set of states, \(\mathcal{A}\) is the set of actions, \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) is the transition probability distribution, \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\) is the reward function, \(\rho_{0}:\mathcal{S}\rightarrow\mathbb{R}\) is the distribution of the initial state \(s_{0}\), and \(\gamma\in[0,1]\) is the discount factor.
An episode begins by first sampling an initial state \(s_{0}\) from \(\rho_{0}\). A _trajectory_\((s_{0},a_{0},s_{1},a_{1},\dots)\) is generated by sampling actions from the agent's policy \(a_{t}\sim\pi(a_{t}\,|\,s_{t})\). The next states are obtained by sampling the transition dynamics function \(s_{t+1}\sim\mathcal{T}(s_{t+1}\,|\,s_{t},a_{t})\) until reaching a terminal state. At each time step, a reward \(R_{t}\sim\mathcal{R}(s_{t},a_{t},s_{t+1})\) is received. At timestep \(t\), the discounted return, \(G_{t}\), is defined as:
\[G_{t}\doteq R_{t+1}+\gamma R_{t+2}+\gamma^{2}R_{t+3}+\cdots+\gamma^{T}R_{T+1}=\sum_{ k=0}^{T}\gamma^{k}R_{t+k+1} \tag{1}\]
where \(T\) is the maximum time-step and \(\gamma\in[0,1]\) is the discount factor. As \(\gamma\) approaches 1, the agent will take into account rewards received far into the future. However, as \(\gamma\) approaches 0, the agent will only account for the immediate reward \(R_{t+1}\), and the agent is often said to be _myopic_.
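In practice, the discounted return in Equation (1) is computed for every time-step of a finished episode with a single backward pass over the rewards, as in the following minimal sketch:

```python
def discounted_returns(rewards, gamma):
    """G_t for each time-step of an episode, computed by a backward pass."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

print(discounted_returns([0.0, 0.0, 1.0], gamma=0.9))   # [0.81, 0.9, 1.0] (up to floating point)
```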
The _state-value function_ of a state \(s\) under a policy \(\pi\) is denoted by \(V_{\pi}(s)\). This is the expected return when the agent starts in \(s\) and continues following its policy \(\pi\). Formally:
\[V_{\pi}(s)\doteq\mathbb{E}_{\pi}\left[G_{t}\,|\,S_{t}=s\right]=\mathbb{E}_{ \pi}\left[\sum_{k=0}^{T}\gamma^{k}R_{t+k+1}\,|\,S_{t}=s\right],\qquad\forall s \in\mathcal{S} \tag{2}\]
A similar notion is the _action-value function_ which is denoted by \(Q_{\pi}(s,a)\). This is the expected return when the agent starts from \(s\), but also takes the action \(a\), and follows its policy \(\pi\) afterwards. Formally:
\[Q_{\pi}(s,a)\doteq\mathbb{E}_{\pi}\left[G_{t}\,|\,S_{t}=s,A_{t}=a\right]= \mathbb{E}_{\pi}\left[\sum_{k=0}^{T}\gamma^{k}R_{t+k+1}\,|\,S_{t}=s,A_{t}=a\right] \tag{3}\]
### Multi-agent reinforcement learning
A _stochastic game_ generalises MDPs to involve multiple agents. This can be defined as a tuple \(\langle N,S,A,\mathcal{T},\mathcal{R},\gamma\rangle\) where:
* \(N\) denotes the set of \(n\) agents
* \(S\) denotes the set of states including the initial state \(s_{0}\)
* \(A=A_{1}\times\cdots\times A_{n}=\{(a_{1},\ldots,a_{n})\,|\,a_{i}\in A_{i}\,\text{for every}\,i\in\{1,\ldots,n\}\}\) denotes the set of joint actions, where \(A_{i}\) is player \(i\)'s set of actions and \(\times\) denotes the Cartesian product.
* \(\mathcal{T}:S\times A\to S\) denotes the transition dynamics
* \(\mathcal{R}:S\times A\times S\ \times\ N\rightarrow\mathbb{R}\) denotes the reward function
* \(\gamma\) denotes the discount factor
For every time-step \(t\), an agent \(i\in N\) receives an observation of the global state \(s\) and outputs an action \(a_{i,t}\) sampled from its _policy_\(\pi_{i}(a_{i,t}\ |\ s_{t})\). We update the state \(s_{t}\) to include agent \(i\)'s action before sending this new state to agent \(j\in N,j\neq i\). Note that the time-step is not yet incremented. We continue this process until all agents in \(N\) have submitted their actions to the environment. This yields the joint action \(\mathbf{a}=(a_{1},\ldots a_{n})\). We calculate the reward \(R_{i,t}\sim\mathcal{R}(s_{t},\mathbf{a},s_{t+1},i)\). We consider the sparse reward setting, i.e., all rewards are zero until the episode terminates. Upon termination, we calculate the reward for agent \(i\) depending on if agent \(i\) successfully joined a coalition or not. When a proposal \((C,x^{C})\) is agreed upon at time \(t\), every agent in \(C\) receives a reward of \(\gamma^{t-1}x_{i}^{C}v(C)\). Else, if the agent is not in a coalition \(C\), it is
assumed to receive a reward of zero. The return \(G_{i}\) is discounted by a factor \(\gamma\in[0,1]\), given by \(G_{i}=\sum_{t=1}^{T}\gamma^{t-1}r_{i,t}\).
Agent \(i\)'s objective is to find a policy \(\pi_{\theta_{i}}\) which maximises its expected discounted sum of rewards \(\mathbb{E}[\sum_{t=1}^{T}\gamma^{t-1}R_{i,t}]\). It is important to note that this maximisation assumes all opponents' policies \(\pi_{\theta_{j}}\ \forall j\neq i\) to be fixed. Thus, one of the key challenges in MARL is the non-stationarity present due to multiple concurrently learning agents.
In our setting, we assume perfect information and thus agents have full access to the global state. We make this assumption as the aim of our paper is to provide the theoretical grounding between collaborative vehicle routing, coalitional bargaining, and multi-agent reinforcement learning. The imperfect information setting is also a promising research direction, e.g., to investigate the value of information. Future work could study the applicability of _decentralised partially observable_ Markov decision processes (dec-POMDPs) (Oliehoek and Amato, 2016) to imperfect information settings in collaborative vehicle routing.
A challenge in reinforcement learning is handling the curses (plural) of dimensionality (Powell, 2022). With "tabular" methods, the policy is represented by a lookup table. One curse is that the size of the state space grows exponentially with the number of dimensions (even if the state space is discrete). In our setting, our state space is continuous, further exacerbating the challenge. As a result, we must resort to _function approximation_ methods (Sutton et al., 2000). Instead, we aim to replace the lookup table with a parameterised model, with parameters \(\theta\in\mathbb{R}^{d}\), to map from states to actions. Thus, we can write the policy for agent \(i\) as \(\pi_{\theta_{i}}(a_{i,t}\,|\,s_{t})\) instead. Respectively, the state-value function and action-value function can also be re-written \(\hat{V}(s,\theta)\approx V_{\pi}(s)\) and \(\hat{Q}(s,a,\theta)\approx Q_{\pi}(s,a)\). Importantly, the dimensionality \(d\) of the model is typically much less than the number of states. Changing one parameter will affect the estimated value of many other states. Thus, if we can generalise across states, this could greatly accelerate learning. Note that any parameterised model can be used: a linear function, multi-layer perceptron, decision trees, etc. Historically, linear functions were favoured due to favourable convergence guarantees. However, deep neural networks have demonstrated significant success due to their high capacity and generalisability (Sutton and Barto, 2018; Vinyals et al., 2019; Mnih et al., 2015; OpenAI et al., 2019). Thus, we opt for deep neural networks as well.
_Policy gradient_-based approaches are a common way to learn a parameterised policy \(\pi_{\theta_{i}}\) which maximises an agent's expected discounted return. It is also performant, for example, it achieved great success in playing Dota 2 (OpenAI et al., 2019) amongst others. Typically, a scalar performance measure \(J(\theta)\) is defined and we maximise their performance using approximate gradient ascent: \(\theta_{t+1}=\theta_{t}+\alpha\widehat{\nabla J(\theta_{t})}\) where \(\widehat{\nabla J(\theta_{t})}\in\mathbb{R}^{d}\) is a stochastic estimate whose expectation approximates the gradient of \(J(\theta_{t})\) with respect to \(\theta_{t}\). However, a challenge is that the performance depends on both the policy's action selection and also the distribution of states where these actions are selected. Varying \(\theta\) affects both of these distributions and we typically do not know the effect of our policy on the state distribution. The policy gradient theorem (Sutton et al., 2000; Sutton and Barto, 2018) shows that we can approximate the gradient of performance with respect to \(\theta\) but without requiring the derivative of the state distribution. Formally:
\[\nabla J(\theta)\propto\sum_{s}\mu(s)\sum_{a}Q_{\pi}(s,a)\nabla\pi(a\,|\,s,\theta) \tag{4}\]
The simplest approach is the REINFORCE algorithm (Williams, 1992). Here, an agent plays \(M\) episodes in parallel until termination and remembers all states, actions and rewards it encountered (or trajectory). Next, it estimates the (undiscounted) policy gradient using:
\[\hat{g}=\frac{1}{M}\sum_{m=1}^{M}\left[\sum_{t=1}^{T}\hat{A}_{t}^{m}\nabla_{ \theta}\log\pi_{\theta}(a_{t}^{m}\,|\,s_{t}^{m})\right] \tag{5}\]
where, for REINFORCE, \(\hat{A}_{t}=\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}r(s_{t^{\prime}}^{m},a_{t^{\prime}}^{m})\). The agent updates its policy using stochastic gradient ascent, i.e., \(\theta\leftarrow\theta+\alpha\hat{g}\), where \(\alpha\) is the learning rate. The intuition for this policy update is that, for each action the agent took in a given state, it will increase or decrease the (log) probability of taking that same action in proportion to the discounted return it received during that episode. However, policy gradient methods are notorious for having high variance in the policy gradient. As a result, we employ multiple variance reduction techniques to mitigate this problem, such as \(M\) parallel environments.
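A minimal PyTorch-style sketch of the REINFORCE update in Equation (5) for one batch of collected trajectories is given below (the policy, optimiser and trajectory collection are assumed to exist elsewhere; this is an illustration, not the training code used in our experiments):

```python
import torch

def reinforce_update(optimizer, log_probs, returns):
    """One REINFORCE step (Equation (5)) on a batch of completed episodes.

    log_probs: list of log pi(a_t | s_t) tensors (require grad);
    returns:   matching discounted returns (no grad). A baseline b(s) can be
               subtracted from the returns beforehand to reduce variance."""
    log_probs = torch.stack(log_probs)
    returns = torch.as_tensor(returns, dtype=torch.float32)
    loss = -(returns * log_probs).mean()   # ascent on J == descent on -J
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```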
Another variance reduction technique is to subtract a _baseline_. A baseline \(b(s)\) can be any function that may or may not depend on the state \(s\). Importantly, it must not vary with the action \(a\). We can replace REINFORCE's estimate of \(\hat{A}_{t}\) by using \(\hat{A}_{t}=\left[\left(\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}r(s_{t^{ \prime}}^{m},a_{t^{\prime}}^{m})\right)-b(s)\right]\) instead. It can be shown that introducing a baseline does not introduce bias into the policy gradient, but may significantly reduce variance (Williams, 1992; Greensmith et al., 2004; Sutton and Barto, 2018). An example baseline is the average return an agent received. The term \(\left[\left(\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}r(s_{t^{\prime}}^{m},a _{t^{\prime}}^{m})\right)-b(s)\right]\) can be thought of as how much better than the baseline an agent performed as a result of choosing its action. A common choice of \(b(s)\) is to estimate the state-value \(\hat{V}_{\pi}(s_{t}^{m},\theta)=\mathbb{E}_{\pi}\left[\sum_{t^{\prime}=t}^{T} \gamma^{t^{\prime}-t}r(s_{t^{\prime}}^{m},a_{t^{\prime}}^{m})\,|\,S_{t}=s\right]\). Selecting a good baseline is crucial. We discuss our proposed baseline functions in Section4.7.
In REINFORCE, typically only one gradient update is used per batch of trajectories. As a result, REINFORCE is typically said to be sample inefficient - it requires a lot of episodes to train a performant policy. In addition, REINFORCE can be unstable during training, and sometimes performance collapse may occur as a result of the data distribution changing too drastically.
Proximal Policy Optimisation (PPO) (Schulman et al., 2017) aims to improve the sample efficiency by performing multiple gradient updates to maximise the use of each gathered data point. However, this risks changing the data distribution too drastically and thus risks performance collapse. To rectify this, the intuition behind PPO is to constrain the policy from deviating too greatly. Let the current policy (before any gradient updates) be denoted \(\pi_{\theta_{old}}(a_{t}|s_{t})\). After one round of gradient updates, this would yield new policy parameters, denoted \(\pi_{\theta}(a_{t}|s_{t})\). PPO constrains that the probability ratio, \(r_{t}(\theta)=\frac{\pi_{\theta}(a_{t}|s_{t})}{\pi_{\theta_{old}}(a_{t}|s_{t})}\), of taking action \(a_{t}\) for the same state \(s_{t}\) under the old policy vs new policy to be no more than a certain percentage \(\varepsilon\). This should prevent the risk of policy collapse if \(\varepsilon\) is chosen carefully. Moreover, PPO is then able to perform more gradient updates on the same data points, thus greatly improving its sample efficiency. In addition, it is also more stable during training and is less sensitive to chosen hyperparameters. As a result, PPO has been applied to wide range of domains, most notably in OpenAI Five (bots to play Dota 2) (OpenAI et al., 2019)
and also in ChatGPT (OpenAI, 2022).
PPO adjusts the neural network parameters \(\theta\) to increase or decrease the probability ratio \(r_{t}(\theta)\) proportional to the advantage the agent received \(\hat{A}_{t}\). PPO enforces the \(\varepsilon\) threshold by clipping the probability ratio, \(r_{t}(\theta)\), to remain within \(\pm\,\varepsilon\). We can further encourage exploration by adding an entropy bonus. Thus, the PPO policy gradient can be estimated as follows:
\[\hat{g} \approx \frac{1}{M}\sum_{m=1}^{M}\sum_{t=1}^{T}\nabla_{\theta}\Big{[} \min(r_{t}(\theta)\hat{A}_{t},\,\text{clip}(r_{t}(\theta),1\,-\,\varepsilon, 1\,+\,\varepsilon)\,\hat{A}_{t})\,+\,\beta\mathcal{H}[\pi_{\theta}](s_{t}) \Big{]} \tag{6}\]
where \(\hat{A}_{t}\) is the advantage estimate (the discounted return with a baseline subtracted), \(\beta\) is the entropy regularisation coefficient and \(\mathcal{H}\) the entropy bonus. An entropy bonus encourages agents to explore rather than exploit. It is important to note that when the advantage is positive, we clip \(r_{t}(\theta)\) _only_ if it is greater than \(1+\varepsilon\). If the advantage is negative, we clip \(r_{t}(\theta)\) _only_ if it is less than \(1-\varepsilon\) (see Figure 1 of Schulman et al. (2017) for further details). The clip function clips the first argument to the lower and upper bounds given by the second and third arguments respectively.
As a result, PPO has been widely used in a range of applications, most notably in OpenAI Five (for Dota 2) and in ChatGPT (OpenAI et al., 2019; OpenAI, 2022).
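A minimal sketch of the clipped surrogate objective in Equation (6) is given below (illustrative only; the value-function loss and the optimisation loop are omitted):

```python
import torch

def ppo_policy_loss(new_log_probs, old_log_probs, advantages, entropy,
                    clip_eps=0.2, entropy_coef=0.01):
    """Clipped PPO surrogate (to be minimised), following Equation (6)."""
    ratio = torch.exp(new_log_probs - old_log_probs.detach())        # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    surrogate = torch.min(unclipped, clipped).mean()
    return -(surrogate + entropy_coef * entropy.mean())
```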
### State space
The agents receive a variety of inputs from the environment as seen in Figure 4. Let the state at time \(t\) be denoted by \(s_{t}\in S\) which can be represented by the tuple \((\mathbf{D},\mathbf{c},\mathbf{x},\mathbf{r},t,p,a)\). The _deliveries_ matrix \(\mathbf{D}\in\mathbb{R}^{12\times 4}\) describes the features of each of the three depots and nine customers, yielding twelve rows where we refer to each row as a _location_. A location can be represented by the tuple \(\langle x,y,o,d\rangle\) where \(x\in\mathbb{R}\) is the x-coordinate; \(y\in\mathbb{R}\) is the y-coordinate; \(o\in\mathbb{N}\) denotes the agent who owns the location; and \(d\in\{0,1\}\) denotes whether the location is a depot or a customer. For instance, to represent Agent 2's depot located at \(\langle x=0.2,y=0.173\rangle\), its corresponding row in \(\mathbf{D}\) would be represented as \(\langle 0.2,0.173,2,1\rangle\) and the remaining rows in \(\mathbf{D}\) would be comprised of similar entries for the remaining depots and customers, yielding a shape of \(12\times 4\). The vector \(\mathbf{c}\in\{0,1\}^{|N|}\) denotes which agents were selected to be in the proposed coalition. The vector \(\mathbf{x}\in\mathbb{R}^{|N|}\) denotes the proposed pay-off vector, the vector \(\mathbf{r}\in\{0,1\}^{|N|}\) denotes the responses of the agents. The vectors \(\mathbf{c}\), \(\mathbf{x}\) and \(\mathbf{r}\) are initialised to zero if no agent has taken an action in the current round of bargaining. The scalar \(t\in\mathbb{N}_{0}\) denotes the current round of bargaining, \(p\in\mathbb{N}\) denotes which agent was selected to propose in the current round of bargaining, and \(a\in\{0,1\}\) denotes whether the current agent is proposing or responding.
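As an illustration, the state tuple described above could be assembled as follows (a sketch; field and function names are ours, not the authors' code):

```python
import numpy as np

def make_state(locations, n_agents=3, t=0, proposer=1, is_proposing=True):
    """Assemble the state s_t = (D, c, x, r, t, p, a).

    locations: 12 tuples (x, y, owner, is_depot) -- 3 depots + 9 customers.
    c, x and r start at zero when no action has been taken this round."""
    D = np.asarray(locations, dtype=np.float32)   # (12, 4) deliveries matrix
    c = np.zeros(n_agents)                        # proposed coalition (multi-hot)
    x = np.zeros(n_agents)                        # proposed pay-off vector
    r = np.zeros(n_agents)                        # responses so far
    return dict(D=D, c=c, x=x, r=r, t=t, p=proposer, a=int(is_proposing))

# e.g. Agent 2's depot at (0.2, 0.173) is encoded as the row (0.2, 0.173, 2, 1)
```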
### Action space
The agents have three action heads: _coalitions_, _proposals_ and _response_.
The _coalitions_ action is denoted by \(\mathbf{c}\in\{0,1\}^{|N|}\) where \(|N|\) is the total number of agents, in this case, \(3\). Note that in game theory, typically agents propose a coalition of size \(|C|\) instead of \(|N|\). However, it is beneficial to output coalitions in this manner as it keeps the output size constant. The coalitions action denotes whether the respective
agent index is part of the coalition \(C\). Note that this game assumes that player \(i\) is in the coalition \(\mathbf{c}\), i.e., \(c_{i}=1\). The _deliveries_ matrix, \(\mathbf{D}\), is fed through two dense layers with 256 hidden neurons. These parameters come from a supervised pre-training step (see Section4.6). The output is fed through a linear layer with \(|N|\) outputs. These outputs are passed into \(|N|\) independent Bernoulli distributions to determine the probability that a given agent is in the coalition \(C\). A Bernoulli distribution is chosen as the number of outputs required scales linearly with the number of agents. Alternatively, this action can be output auto-regressively, but would be more computationally expensive. It may also be useful to introduce correlation in the agents' actions via more expressive probability distributions which may speed up learning.
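A minimal PyTorch-style sketch of this coalitions head is shown below (illustrative only; layer sizes and the handling of the proposer's own entry are simplified):

```python
import torch
from torch import nn
from torch.distributions import Bernoulli

class CoalitionHead(nn.Module):
    """Coalitions action head: one independent Bernoulli per agent index."""
    def __init__(self, embed_dim, n_agents=3):
        super().__init__()
        self.logits = nn.Linear(embed_dim, n_agents)

    def forward(self, embedding):
        dist = Bernoulli(logits=self.logits(embedding))
        c = dist.sample()                        # (batch, n_agents) in {0, 1}
        log_prob = dist.log_prob(c).sum(-1)      # independent Bernoullis
        # In the game the proposer i is always a member, i.e. c_i is fixed
        # to 1 (handled outside this sketch).
        return c, log_prob
```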
The _proposals_ action is denoted by \(\mathbf{x}\in\mathbb{R}^{|N|}\) where \(\sum_{i}x_{i}=1,\ x_{i}\in[0,1]\). This vector denotes how much of the collaboration gain is assigned to each respective agent (as a percentage). Note that in game theory, the definition of a feasible pay-off vector is \(\sum_{i\in C}x_{i}^{C}\leq v(C)\). However, agents will never know the value of \(v(C)\) a priori (although it can implicitly reason about it). Thus, to practically implement our neural network, we output a vector that is interpreted as percentages as opposed to absolute values. These percentages are then multiplied by the value of a coalition \(v(C)\) to obtain a feasible pay-off vector.
Note that this is a continuous action space, as opposed to the other actions which are discrete. To parameterise the proposals action head, we use the Dirichlet distribution which is a multivariate generalisation of the Beta distribution. The neural network will output three logits \(\alpha\) which are used as the concentration parameters of the Dirichlet distribution \(\text{Dir}(\alpha)\). The Dirichlet distribution has support over the probability simplex \(S_{K}=\{\theta:0\leq\theta_{k}\leq 1,\sum_{k=1}^{K}\theta_{k}=1\}\)(Murphy, 2021). Intuitively, agents will propose an equal gain share with high probability if the inputs to the Dirichlet are large and equal. Agents will make proposals uniformly at random within the probability simplex if the inputs to the Dirichlet are small and equal, but greater than 1. If Agent 1 wanted to collaborate with Agent 2 but not 3, the input to the Dirichlet could be \(\langle 10000,10000,1.001\rangle\). This would result in approximately a 50/50 split between Agents 1 and 2 with high probability.
The Dirichlet distribution is appealing due to two reasons. Firstly, the proposals vector requires that it sums to 1 which matches the form of the Dirichlet distribution. Secondly, the Dirichlet distribution has finite support. In continuous action spaces, a
Figure 4: Actors' neural network design. Grey boxes denote state inputs. Blue boxes denote MLP parameters which come from supervised pre-training (see Section 4.6). Note that the linear layer to produce coalition logits is learnt and not pre-trained. White boxes denote learnt parameters. Red boxes denote actions. Numbers in brackets denote the output shapes (ignoring batch size as it's shared by all).
Gaussian distribution is typically used which has infinite support and can lead to bias (Chou et al., 2017). Chou et al. (2017) overcomes this issue by using a Beta distribution instead as it has finite support and find that their agents learn more efficiently.
To calculate the proposals, the state inputs are passed through a variety of dense layers (see Figure 4) to produce an embedding. A linear layer with 3 output neurons is applied to the embedding. As in Chou et al. (2017) we add 1.001 to the output logits to ensure the resultant Dirichlet distribution remains unimodal. As a result, during evaluation the agents can fully exploit by proposing the mode of the distribution, instead of having to sample from the Dirichlet which may involve exploration. The output logits are then masked by the _coalitions_ vector, i.e. if a player \(i\) is not in the coalition \(S\), its corresponding output logit will be 1.001. Finally, to calculate the pay-off vector, we sample from the Dirichlet distribution with the masked output logits.
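The offset and masking steps can be sketched as follows (our own PyTorch-style illustration). The softplus used to keep the concentration parameters positive before adding 1.001 is an assumption on our part; the text only states that 1.001 is added to the output logits.

```python
import torch
import torch.nn.functional as F

def propose_payoff(embedding, proposal_layer, coalition):
    """Sample a pay-off split from a masked Dirichlet distribution (illustrative)."""
    raw = proposal_layer(embedding)                     # linear layer with |N| outputs
    alpha = F.softplus(raw) + 1.001                     # assumption: keep concentrations > 1
    # Agents outside the proposed coalition receive the neutral concentration 1.001.
    alpha = torch.where(coalition.bool(), alpha, torch.full_like(alpha, 1.001))
    dist = torch.distributions.Dirichlet(alpha)
    x = dist.sample()                                   # proposed percentages, sums to 1
    return x, dist.log_prob(x)
```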
The _response_ action \(r\in\{0,1\}\) denotes whether an agent accepts or rejects a given proposal. It takes the resultant embedding followed by a single linear layer with one output neuron. The output is then fed through a Bernoulli distribution.
Whilst we have chosen to use Bernoulli and Dirichlet distributions to parameterise the three action spaces, it may be beneficial to experiment with more expressive probability distributions or e.g. output actions auto-regressively. This may speed up learning and would be an interesting line of future research.
### Reward function
Our reward function is sparse, i.e., at timestep \(t\) the agents will always receive an immediate reward \(R_{t}\) of zero until the coalitional bargaining game terminates. Upon termination, we calculate a reward for each agent.
If agent \(i\) successfully joins a coalition \(C\) by having all agents in \(C\) accept the proposal, then it receives a reward of \(r_{i,t}=v(C)\cdot x_{i}\) where \(v(C)\) is the collaboration gain obtained by coalition \(C\), and \(x_{i}\) is the \(i\)th element of the pay-off vector \(\mathbf{x}\). For clarity, if agent \(i\) is the proposer and has its proposal rejected by the responder agents, it will receive an immediate reward of zero. However, there is potential for agent \(i\) to obtain more than zero immediate reward in future rounds of bargaining and thus the discounted return can still be greater than zero.
Else, if agent \(i\) does not successfully join a coalition \(C\) by the end of the episode, then it will receive a terminal reward of zero.
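In code, the terminal reward reduces to a single check, as sketched below (our own illustration; argument names are hypothetical).

```python
def terminal_reward(agent_id, coalition, payoff, accepted, collaboration_gain):
    """Sparse reward paid only when the bargaining episode ends.

    coalition: set of agent ids in the agreed proposal; payoff: percentage split x;
    accepted: True if every responder in the coalition accepted the proposal.
    """
    if accepted and agent_id in coalition:
        return collaboration_gain * payoff[agent_id]   # v(C) * x_i
    return 0.0                                         # rejected proposal or excluded agent
```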
### Transfer learning
A key challenge with policy gradient approaches is their sample inefficiency, even in single agent settings. This is further exacerbated due to the non-stationary learning dynamics imposed by having multiple agents learn concurrently. In typical RL settings, agents learn "tabula rasa", i.e., without any prior knowledge. Whilst this is mathematically elegant, learning tasks tabula rasa for problems with high complexity, such as in real-world, multi-agent settings, is rare (Agarwal et al., 2022). Instead, it may be preferable to pre-train on some offline dataset in order to learn a good feature extractor. For example, (Silver et al., 2016; Vinyals et al., 2019) pre-train their networks on human gameplay data in a supervised learning setting before using RL. This idea of transfer learning, or recently, _reincarnating RL_ (Agarwal et al., 2022), is well accepted in the RL literature and the reader is referred to (Taylor and Stone, 2009; Agarwal et al., 2022) for a thorough review. Furthermore, transfer learning is well accepted in supervised
learning, especially in the computer vision and natural language processing domains leading to the likes of ChatGPT (OpenAI 2022). In our case, the pre-training process aids in efficiently initializing the agents' policies and facilitates faster convergence in the MARL framework.
We therefore pre-train our agents to learn a good feature extractor in a supervised learning fashion. We hypothesise that a good feature extractor should be able to predict whether a given coalition for a given state is productive or not. As a result, we create a dataset of one million instances and randomly select a feasible coalition per instance and calculate the social welfare obtained. Next, we train a neural network to predict the social welfare for a given state and coalition. We optimise the neural network to minimise the mean squared error. We split the dataset using an 80/20 train/test split. The neural network design can be seen in Figure 5. We experimented with different neural network architectures but found this architecture performed best. Whilst this is not the exact task agents must perform in the collaborative routing scenario, the intuition is that the neural network should still learn useful patterns which are transferable to the full collaborative routing problem.
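The pre-training step amounts to a standard regression loop. The condensed sketch below is our own illustration: the epoch count, batch size and learning rate shown here are assumptions, and the model is assumed to take the state and candidate coalition as separate inputs as in Figure 5.

```python
import torch
import torch.nn as nn

def pretrain(model, states, coalitions, welfare, epochs=50, batch=1024, lr=3e-4):
    """Fit the feature extractor to predict social welfare by minimising the MSE.

    states / coalitions / welfare: tensors built from the one-million-instance
    dataset, already split 80/20 into train and test outside this function.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    n = states.shape[0]
    for _ in range(epochs):
        perm = torch.randperm(n)
        for start in range(0, n, batch):
            idx = perm[start:start + batch]
            pred = model(states[idx], coalitions[idx]).squeeze(-1)
            loss = loss_fn(pred, welfare[idx])
            opt.zero_grad()
            loss.backward()
            opt.step()
```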
### Policy gradient baselines
As discussed in Section 4, a useful baseline helps reduce the variance in policy gradient methods. We use two types of baselines: one for the response action (when agents are responding); and one shared for both the coalitions and proposals actions (when agents are proposing). The neural network architectures for the baselines can be found in Figure 6.
The response action is discrete and thus we can easily implement Counterfactual Multi-Agent Policy Gradients (COMA) (Foerster et al., 2017). They use the following baseline:
Figure 5: Pre-trained neural network design. Grey boxes denote state inputs. White boxes denote learnt parameters. Red box denotes the output, which predicts the collaboration gain for this given state and coalition. Numbers in brackets denote the output shapes (ignoring batch size as it’s shared by all).
\[A^{i}(s,\mathbf{a})=Q(s,\mathbf{a},i)-\sum_{a^{{}^{\prime}i}}\pi^{i}(a^{{}^{\prime }i}|\tau^{i})\hat{Q}(s,(\mathbf{a}^{-i},a^{{}^{\prime}i}),i,\phi) \tag{7}\]
where \(Q_{\pi}(s,\mathbf{a},i)=\mathbb{E}_{\pi}\left[\sum_{t^{\prime}=t}^{T}\gamma^{t^ {\prime}-t}r(s^{m}_{t^{\prime}},a^{m}_{t^{\prime}})\,|\,s,\mathbf{a}\right]\) is the discounted return received if all agents take the joint action \(\mathbf{a}\) in state \(s\). The estimate comes from a function approximator with parameters \(\phi\). \(a^{{}^{\prime}i}\) is the other actions agent \(i\) could have taken. \(\tau^{i}\) is the prior trajectory agent \(i\) has observed. \(\hat{Q}(s,(\mathbf{a}^{-i},a^{{}^{\prime}i}),i,\phi)\) is the _estimated_ discounted return agent \(i\) would receive if it took a different action \(a^{{}^{\prime}i}\) whilst keeping the other agents' actions \(\mathbf{a}^{-i}\) constant. This is estimated through the use of a neural network with parameters \(\phi\).
Intuitively, the COMA baseline can be thought of as how much better agent \(i\)'s decision to take action \(a\) was relative to any other action agent \(i\) could have taken, \(a^{{}^{\prime}i}\). In our case, the question we ask is: if an agent has agreed to a given proposal, could it have done better by rejecting instead, assuming other agents' actions remain the same?
Figure 6: Grey boxes denote state inputs. Blue boxes denote MLP parameters which come from supervised pre-training (see Section 4.6). White boxes denote learnt parameters. Red boxes denote outputs for the baseline. Numbers in brackets denote the output shapes (ignoring batch size as it's shared by all).
In the discrete setting, it is easy to sum over all other actions agent \(i\) could have taken. However, with continuous actions using Dirichlet distributions in the proposals action, this can be difficult. Therefore, we instead estimate the _state-value_ which estimates the expected discounted return conditioned on the state \(s\). We denote this baseline with \(\hat{V}_{\pi}(s,i,\mathbf{w})=\mathbb{E}_{\pi}\left[\sum_{t^{\prime}=t}^{T} \gamma^{t^{\prime}-t}r(s^{m}_{t^{\prime}},a^{m}_{t^{\prime}})\,|\,s\right]\) where \(\mathbf{w}\) is the parameters of a function approximator such as a neural network. Thus, our baseline for both the coalitions action and proposals action is given by:
\[A^{i}(s,\mathbf{a})=Q_{\pi}(s,\mathbf{a},i)-\hat{V}_{\pi}(s,i,\mathbf{w}) \tag{8}\]
Finally, we normalise \(A^{i}(s,\mathbf{a})\) by subtracting the mean and dividing by the standard deviation due to the small magnitude in rewards.
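Once the critic outputs are available, both baselines are only a few lines of code. The sketch below (our own illustration) mirrors Equations (7) and (8), using the Monte-Carlo return in place of \(Q(s,\mathbf{a},i)\) as in the pseudo-code of Appendix D, followed by the normalisation step.

```python
import torch

def coma_advantage(returns, q_all, response_probs):
    """Counterfactual baseline for the binary response action, cf. Eq. (7).

    returns: Monte-Carlo return G_t standing in for Q(s, a, i); q_all: critic
    estimates for both possible responses, shape (batch, 2); response_probs:
    pi_i(a' | s) for both responses, same shape.
    """
    baseline = (response_probs * q_all).sum(dim=-1)
    return returns - baseline

def state_value_advantage(returns, v_hat):
    """State-value baseline shared by the coalition and proposal actions, cf. Eq. (8)."""
    return returns - v_hat

def normalise(adv, eps=1e-8):
    """Standardise advantages to cope with the small reward magnitudes."""
    return (adv - adv.mean()) / (adv.std() + eps)
```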
### Time limits
It is crucial to deal with time limits properly in this setting. The full coalitional bargaining game presented in Okada (1996) is infinite horizon, i.e., negotiation could go on indefinitely. Clearly, this is impossible to simulate on a finite computer and we must set a maximum number of rounds. Nevertheless, it is still possible to optimise for the infinite horizon, but care must be taken as shown in Pardo et al. (2018). They argue that if an episode terminates only due to reaching the maximum number of rounds, we should _bootstrap_ the discounted estimated value of the next state, \(\hat{v}_{\pi}(s^{\prime})\). Thus, if agents reach agreement _within_ the maximum number of rounds, they should receive a reward \(r\) as expected. However, if they _exceed_ the maximum number of rounds, they should receive a reward of \(r+\gamma\hat{v}_{\pi}(s^{\prime})\). In our setting with the maximum number of rounds equal to 10, if agents do not reach agreement within 10 rounds, we fictitiously step them into the next state \(s^{\prime}\), at round 11 with proposers selected uniformly at random. If player \(i\) is not selected as a proposer, then the selected proposer is asked to propose a coalition and pay-off vector \((S,\mathbf{x})\) in this fictitious round. We then use a critic to estimate the value of this state, \(\hat{v}_{\pi}(s^{\prime})\).
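In practice this means bootstrapping only when the episode is truncated by the round limit, never when a genuine agreement terminates it. A small sketch of the return target at the final step follows (our own illustration).

```python
def final_step_target(reward, truncated, v_next, gamma=0.95):
    """Return target at the last step of an episode.

    truncated: True if the episode ended only because the 10-round limit was hit,
    in which case the critic's value of the fictitious next state is bootstrapped.
    """
    if truncated:
        return reward + gamma * v_next   # r + gamma * v_hat(s'), following Pardo et al. (2018)
    return reward                        # genuine agreement: no bootstrap
```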
### Skill retention
Okada (1996) shows that agents should reach agreement without delay. Therefore, as agents learn to collaborate better, they will reach agreement sooner, which is beneficial due to the environment's discount factor. However, this may lead to agents forgetting how to play the game at later time-steps. To enable retention of skills at later time-steps, we employ a targeted training design. During training, instead of starting all bargaining games at round 1, we start them uniformly at random between round 1 and the last round of bargaining, \(T-1\). Therefore, agents will always be exposed to a range of bargaining scenarios even if agents are collaborating optimally.
## 5 Experiments
### Problem setting
We base our problem setting on a modified version of (Gansterer and Hartl 2018a). We consider an environment with three companies, each represented by an agent. Each agent has one depot and three customers that it must deliver to. The depot \((x,y)\) locations for each agent are held fixed at \(\{(-0.2,0.173),(0.2,0.173),(0,-0.173)\}\) respectively. The depots' service radius for each instance is selected uniformly at random from the set \(\{0.3,0.4,0.6\}\). The rationale of Gansterer and Hartl (2018a) is that varying the depots' service radius varies the degree of overlap and thus competition (or collaboration opportunity) between carriers. A high degree of overlap using a radius of 0.6 creates high collaboration opportunity between carriers. A low degree of overlap using a radius of 0.3 has low collaboration opportunity between carriers. With a small radius of 0.3, this can analogously be seen as the scenario when depots do not lie in close proximity to each other. The customer locations are then generated uniformly at random within the corresponding depot's service radius.
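A sketch of how such an instance can be generated is shown below (our own reconstruction). Whether a single service radius is drawn per instance or one per depot, and whether customers are sampled uniformly over the disk area, are not fully specified in the text, so both choices here are assumptions.

```python
import numpy as np

def generate_instance(rng, customers_per_agent=3):
    """Sample one collaborative-routing instance (illustrative sketch)."""
    depots = np.array([[-0.2, 0.173], [0.2, 0.173], [0.0, -0.173]])
    radii = rng.choice([0.3, 0.4, 0.6], size=len(depots))   # assumption: one radius per depot
    customers = []
    for depot, radius in zip(depots, radii):
        for _ in range(customers_per_agent):
            r = radius * np.sqrt(rng.uniform())              # assumption: uniform over the disk
            theta = rng.uniform(0.0, 2.0 * np.pi)
            customers.append(depot + r * np.array([np.cos(theta), np.sin(theta)]))
    return depots, np.array(customers), radii

# Example: depots, customers, radii = generate_instance(np.random.default_rng(0))
```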
To calculate the pre-collaboration and post-collaboration gains, the shortest paths are calculated exactly using Gurobi (Gurobi Optimization, LLC 2021). The pre-collaboration shortest paths can be calculated by solving three (un)Capacitated Vehicle Routing Problems (one for each agent). The post-collaboration shortest paths are calculated by solving a single multi-depot vehicle routing problem. Problem formulations for the capacitated VRP and multi-depot VRP can be found in Appendices A and B respectively. Capacity is effectively removed by setting the capacity of each vehicle to an arbitrarily large number and the weight of each delivery to 1.
Whilst this problem setting is rather simplistic, this is important as it allows us to evaluate our agents rigorously. To calculate optimal solutions (for evaluation purposes only), we must brute force the characteristic function. This is expensive and only possible for small, simple VRPs and 3 agents.
Figure 7: A plot of the distribution of depot and customer locations. Depots are denoted by squares. Each depot has three possible service radii, one of which is selected uniformly at random. Customers are located uniformly at random within the corresponding depot's service radius.
### Experimental design
We perform 10 independent runs with different random seeds to train our agents. Agents are trained for 10,000 epochs and evaluated every 100 epochs. Agents are evaluated on instances they have never seen during training. We use a batch size of 2048 for both training and evaluation. All agents use a discount factor \(\gamma\) of 0.95. All agents' observations are normalised with a running estimate of the mean and standard deviation. The maximum number of bargaining rounds \(T\) is set to 10. The learning rate is held constant at \(3\times 10^{-4}\) and we use Adam optimisation. We clip the global norm of gradient updates if they exceed 1. We use \(\varepsilon=0.05\) to clip the probability ratios in PPO as it was observed to help stability in (Yu et al., 2021). All code to generate results is run on the Wilkes 3 high performance computing cluster with AMD EPYC(tm) 7763 64-Core Processors and NVIDIA A100 GPUs. Note we only use a supercomputer to perform runs in parallel. Training takes approximately 8 hours per run.
### Evaluation
#### 5.3.1 Correlation with the Shapley value
The objective of our work is to find a partition of the \(N\) carriers with an associated fair pay-off vector. We emphasise that certain cooperative solution concepts (e.g. Shapley values) can be retrieved as the outcome of non-cooperative, extensive form games (e.g. coalitional bargaining as in our work). The Shapley value is the most common gain sharing mechanism used in the collaborative vehicle routing setting (Guajardo and Ronnqvist, 2016) as it is widely accepted in game theory to be fair - each agent gets paid proportional to their marginal contribution. In addition, it is also guaranteed to be unique. We believe that both of these arguments would help transportation planners to reach agreements better, in line with (Krajewska et al., 2008). Thus, we compare the outcomes that our MARL agents agree to with the Shapley value for each instance by measuring the correlation, mean absolute error, and mean squared error.
#### 5.3.2 Baseline bots
We compare our MARL agents against two rule-based bots as a baseline. The heuristic bot always proposes the grand coalition with equal gain share and always accepts every proposal. The random bot proposes coalitions and gain shares as well as responses all uniformly at random. These two bots help us to understand that (a) our MARL agents are learning interesting, complex behaviours, and (b) our experimental setup is not too easy in design and that simple, intuitive policies are not sufficient for this setting.
#### 5.3.3 Accuracy
A simple evaluation metric is to measure how often the agents propose the correct coalition. For player \(i\), the correct coalition \(C_{i}^{*}\) is defined to be the coalition \(C\) which would maximise player \(i\)'s reward. This involves brute forcing the characteristic function to evaluate the value of each possible coalition which is only possible since we consider 3 agents. We emphasise that brute force is only required to _evaluate_ our agents - brute force is not required to train the agents. The reward \(R\) is the collaboration gain from agreeing to coalition \(C\), \(v(C)\), multiplied by the \(i\)th element of the pay-off vector, \(x_{i}\).
#### 5.3.4 Optimality gap
We denote the absolute and relative optimality gap of player \(i\) by \(\phi_{i}\) and \(\eta_{i}\) respectively. The absolute optimality gap \(\phi_{i}\) for player \(i\) is defined as \(\phi_{i}=v(C_{i}^{*})-v(C_{i})\), where \(C_{i}^{*}\) is the correct coalition, \(C_{i}\) is player \(i\)'s proposed coalition, and \(v(\cdot)\) is the characteristic function (i.e. the collaboration gain of a given coalition). The relative optimality gap \(\eta_{i}\) is calculated as:
\[\eta_{i}=\frac{v(C_{i}^{*})-v(C_{i})}{v(C_{i}^{*})} \tag{9}\]
Since the data is randomly generated, there could be scenarios where there is no value in collaborating, i.e. even the value of the grand coalition is 0, \(v(N)=0\). Note that we exclude these scenarios when calculating the above evaluation metrics; however, this only occurs 1.9% of the time when brute-forcing 51,200 instances.
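Because only three agents are considered, both metrics can be computed by direct enumeration. The sketch below is our own illustration; in particular, ranking candidate coalitions by their value under an equal split is an assumption, as the text does not state which pay-off vector the "correct" coalition is defined against.

```python
from itertools import combinations

def evaluate_proposal(i, proposed, v, agents=(1, 2, 3)):
    """Return (is_correct, absolute_gap, relative_gap) for player i's proposal.

    v: characteristic function mapping a frozenset of agents to its collaboration
    gain; proposed: the coalition player i actually proposed (must contain i).
    """
    # Brute force every coalition containing player i (feasible for |N| = 3).
    feasible = [frozenset(c) for r in range(1, len(agents) + 1)
                for c in combinations(agents, r) if i in c]
    # Assumption: the "correct" coalition maximises the reward under an equal split.
    c_star = max(feasible, key=lambda c: v(c) / len(c))
    phi = v(c_star) - v(frozenset(proposed))            # absolute optimality gap
    eta = phi / v(c_star) if v(c_star) > 0 else 0.0     # relative gap, Eq. (9)
    return frozenset(proposed) == c_star, phi, eta
```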
#### 5.3.5 Other checks
Okada (1996) analyses this coalitional bargaining game in a general setting, independent of collaborative routing, and proves that agents will cooperate by sharing gains equally in our setting. Therefore, in addition to the above metrics, we check that the agents' behaviour agrees with that predicted by Okada (1996). Firstly, we check that agents do converge to an equal gain share. Secondly, all agents should reach agreement in the first time-step in the three-player setting.
### Results
We perform ten independent runs comparing our RL bot to the heuristic bot. Using ten independent runs is common practice in (MA)RL, following the work of Henderson et al. (2019). We also compare to a random bot which proposes coalition structures and pay-off vectors, and responds, all uniformly at random.
From Figures 8(a) and 8(b), we conclude that our agents have learnt close to optimal behaviour. Our agents reach an average accuracy of 77% and average optimality gap of 0.01 (or 3.9%). Moreover, we can see from Figure 9(a) that Agent 1 learns to share gains equally, as expected by game theory (Okada, 1996). Whilst we only show the plot for Agent 1, similar plots can be made for Agents 2 and 3 but are omitted due to space constraints. Interestingly, three 'phases' of learning are identified as shown in Figure 9(b). In Phase 1 (the first approximately 300 epochs), proposers act extremely myopically and propose that they receive the majority of the gain (up to 90%). Occasionally, the responders will accept these sub-optimal proposals and thus the proposer could receive high reward. However, the responders learn to reject more proposals so that they can potentially counter-offer in the next round. This leads to more rounds of bargaining. After about 300 epochs, both proposers and responders reach agreement quickly; however, the gains are not equally shared. In Phase 2, responders realise they can do better by rejecting proposals and potentially proposing counter-proposals. This drives the proposers to propose more equal gain shares. Finally, in Phase 3, we can see that agents have learnt to maximally cooperate with equal gain share and reach agreement within the first time-step as expected by Okada (1996).
#### 5.4.1 Correlation with Shapley Values
In Figure 10, we see that the outcomes from our bargaining procedure correlate well with the calculated Shapley values. The three agents receive an \(R^{2}\) score of 0.76, mean squared error of 0.08, and mean absolute error of 0.01 (averaged across all three agents). In addition, it is promising that when agent 1 is excluded from the coalition (denoted by orange cross markers), this is usually when Agent 1 has low marginal contribution (as seen by the orange kernel density estimate plot at the top of the x-axis). As a result, we conclude that our agents learn to agree to _fair_ outcomes. This is important from a managerial perspective as fairness could be crucial to help incentivise carriers to participate in collaborative vehicle routing (Guajardo and Ronnqvist, 2016).
#### 5.4.2 Ablations
We further perform two ablations to strengthen the confidence in our findings. Each ablation is carried out with 10 random seeds each. The first ablation changes the maximum number of bargaining rounds from 10 to 30. This ablation is carried out
Figure 8: Learning curve of (a) average accuracy (b) average optimality gap across all 3 agents for readability. Solid lines denote mean accuracy across 10 independent runs. Shaded regions denote \(\pm\) two standard deviations. After training for 10,000 epochs, our RL agents reach an average accuracy of 77% with an average optimality gap of 3.9%.
since the underlying coalitional bargaining game is infinite-horizon, yet we must set a maximum number of bargaining rounds. Our ablation shows that increasing the maximum number of time-steps does not significantly change the quality of our agents' solutions. The agents still agree to share gains equally, with an average optimality gap of 4.1% (up from 3.9%), and identify the correct coalitions 76% of the time (down from 77%). Therefore, we conclude that a maximum of 10 time-steps is sufficient. This is expected as we deal with time limits properly, as discussed in Section 4.8. The second ablation changes the agents' discount factor from 0.95 to 1.0. This ablation is carried out as we use a discount factor to reduce variance in the return. We test whether it is possible to use a higher discount factor. We find that using a discount factor of 1.0 decreases performance, which we suspect is due to the increased variance. With a discount factor of 1.0, agents achieve an average optimality gap of 6.07% (up from 3.9%). Agents do still learn to propose an approximately equal
Figure 9: Learning curve of (a) Agent 1's average proposed pay-offs (b) average number of bargaining rounds across all 3 agents for readability. Solid lines denote means across 10 independent runs. Shaded regions denote \(\pm\) two standard deviations. Dashed lines denote the proposed pay-off of an equal gain share agent. In (a), after 10,000 epochs, Agent 1 converges on an approximately equal gain share. In (b), after 10,000 epochs agents reach agreement after an average of 1.03 rounds of bargaining. Both of these results agree with Okada (1996).
gain share but identify the correct coalitions only 68% of the time (down from 77%). We conclude that using a discount factor of 0.95 is sufficient to achieve a set of strong agents.
### Discussion
In addition, our RL agents are able to reach agreement in 512 parallel instances within an average of 3.0s (or 0.006s per instance). We note that the prior literature assumes full access to the characteristic function, such as Krajewska et al. (2008). Using these prior methods to solve 512 instances takes 24.3s (or 0.047s per instance). Thus, our RL agents achieve an 88% reduction in computational time when compared with prior methods to calculate the Shapley value, such as in (Krajewska et al., 2008). Whilst 0.047s per instance may seem reasonable even with traditional methods, we stress that this is due to the simplistic VRP setting we consider: prior methods will not scale with the number of agents nor problem complexity via additional constraints such as time-windows. Importantly, our agents agree to outcomes that correlate well with Shapley values and thus we conclude that our method produces _fair_ outcomes. This is important to fairly compensate carriers to enable wide-spread industrial adoption of collaborative vehicle routing. Our agents also reach agreement in a decentralised and self-interested manner, which overcomes the limitations of central orchestration methods mentioned in Section 2.
Figure 10: The empirical pay-off agent 1 receives as a result of coalitional bargaining vs. the theoretical Shapley values for 2048 test instances. Green circle markers denote when agent 1 was included in the coalition. Orange cross markers denote when agent 1 was excluded from the coalition. \(R^{2}\) score of 0.76, mean squared error of 0.08 and mean absolute error of 0.01.
Furthermore, our MARL agents are able to outperform the two baseline bots in both accuracy and optimality. The heuristic bot and random bot have accuracies of 62% and 25% respectively, and optimality gaps of 8% and 32% respectively. The relatively low performance of both the heuristic bot and random bot suggests that the experimental setup is sufficiently challenging (due to the NP-hard nature of vehicle routing problems), and that simple policies are not performant in this setting. The heuristic bot shows that, 38% of the time, it is not desirable to form the grand coalition as some agents may contribute very little. The random bot's high optimality gap shows that, whilst there is symmetry in our problem and depots are equidistant, the choice of partners is still important. This necessitates more intelligent agents and thus complex methods such as MARL. More importantly, we conclude that our MARL agents have learnt interesting behaviours, such as excluding opponents if they contribute little to the coalition, as seen in Figure 10.
In this work, we make the assumption that each carrier possesses only one truck. We further assume that the same truck driver is assigned to the same truck. This is a reasonable assumption as the road freight industry is highly fragmented: for example, in the UK, there are 60,000 registered carriers (Office for National Statistics 2022) in 2022, and 1 million registered carriers in the EU in 2020 (Eurostat 2020). However, if a single carrier possesses multiple trucks and thus multiple drivers, it would be possible to decompose the problem at different levels of granularity. One could consider coalitions of carriers; coalitions of trucks; or even coalitions of truck drivers. Our framework should be applicable to deal with all three types of modelling choices, but clearly the more granular the modelling choice, the more computational power that will be required.
The benefit of studying collaborative routing in a coalitional bargaining game is that game theory describes optimal, rational behaviour in this setting. As a result, we have a measure of the gap to optimality. This is important because of the challenging nature of 3-player, mixed-motive settings for MARL; thus, we can understand if the agents are learning correctly. However, there are three main limitations of this approach. Firstly, collaborative vehicle routing is most fruitful with a large number of participating carriers (Cruijssen et al. 2007; Los et al. 2022). Future work must investigate scaling our MARL approach to a larger number of carriers. We believe this to be possible in a hybrid centralised-decentralised manner. The advantage of our decentralised MARL approach is that it enables us to provide a large volume of high-quality solutions to optimise the central agent. Secondly, future work should investigate the performance of MARL-based approaches on real-world data distributions with real-world constraints. One direction would be to study the effect of data imbalance (such as the locations of depots and customers, as well as the delivery volumes) on the performance of MARL-based methods. Another direction could be to study the effect of partial observability of other carriers' information; we currently consider the perfect information scenario where all delivery information is publicly shared (though, crucially, the characteristic function is still unknown). It would be interesting in future work to explore imperfect information settings, such as the value of information sharing. This could be tackled using decentralised, partially observable Markov decision processes (dec-POMDPs) (Oliehoek and Amato 2016). Thirdly, our approach currently only incentivises carriers. An independent third-party logistics provider may be required to enable collaborative routing. How should we incentivise third-party logistics providers? How should we incentivise shippers? What role could government play to incentivise collaboration? Moving in these directions with MARL would result in using more complex and flexible games; however, optimal,
rational behaviour would be unknown. Nevertheless, MARL may still be applied to these complex games but in a descriptive manner (Shoham et al., 2007), i.e. to analyse the emergent behaviour of agents assuming a given MARL algorithm. We believe this to be an exciting line of future research.
## 6 Conclusions and Managerial Implications
Collaborative Vehicle Routing has promised cost savings of between 4 and 46% in the last two decades. Yet industrial adoption remains limited. A key remaining barrier is the design of a gain sharing mechanism that is fair and scalable such that carriers are incentivised to collaborate. Orchestration of truck sharing is usually proposed via a central optimiser, where an intermediary would receive information from each carrier and allocate trucks to each route. The benefits of subscribing to such intermediaries do not necessarily outweigh the costs, and carriers typically do not obtain any benefits from sharing their trucks. In this paper, we propose an automated, decentralised approach, where software agents representing carriers find optimal routes through a _coalitional bargaining_ game, and any gain obtained via improved truck utilisation is shared between the carriers. Manual orchestration costs are also avoided as the approach is automated.
To facilitate decentralised optimisation and fair gain sharing we utilised deep multi-agent reinforcement learning. The main challenge of our setting is the inability of extant methods to fully evaluate the characteristic function due to high computational complexity. The characteristic function calculates the collaboration gain for every possible coalition, which requires solving an exponential number of NP-hard VRPs. The autonomous agents designed in this work are able to correctly reason over a high-dimensional graph input to _implicitly_ reason about the characteristic function instead. This eliminates the need to evaluate the expensive post-collaboration vehicle routing problem an exponential number of times and increases its practicability as we only need to evaluate this once. Furthermore, applying MARL to mixed-motive games is _highly_ non-trivial and applying out-of-the-box MARL algorithms to this problem does not work. We show that we are able to achieve strong performance through careful design decisions, such as transfer learning, a targeted training design and COMA, and provide intuition for why these approaches help.
Moreover, the multi-agent reinforcement learning approach designed in our work is applicable to any coalitional bargaining game. Thus, our work may be suitable to problems in the broader collaborative logistics literature such as warehouse sharing. Another important point is that collaboration is not centrally orchestrated but facilitated using decentralised decision making. This marks an important step towards real-world adoption which might encourage transportation planners to consider more profitable and fair collaboration scenarios. Whilst we initially envisage this system operating as a decision support system, as transportation planners gain trust in the agents' decisions, we ultimately envisage this system to operate fully autonomously. This would enable even faster decision making that is traceable and consistent, potentially enabling a more responsive supply chain (Brintrup et al., 2009). We urge transport planners and software system providers to consider potential adoption scenarios and integration into information systems.
Our work has limitations which provide avenues for future research. The current focus of this work is to obtain strong autonomous agents that maximally cooperate in the challenging mixed-motive setting of collaborative vehicle routing. Whilst we have achieved this, we have focused on a setting with 3 carriers as the focus of
our work was to provide the theoretical link between collaborative vehicle routing, coalitional bargaining, and deep multi-agent reinforcement learning. Future work should investigate the scalability of a MARL approach to a larger number of agents. Furthermore, CVR problems typically include various additional considerations such as axle weights, goods compatibility, and packing orders, which have not yet been incorporated into the framework proposed here. Our approach is agnostic to the underlying optimisation design, and as such, we do not envisage that incorporating additional problem features would hinder its function.
## Acknowledgement(s)
This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk).
We thank the three anonymous reviewers for their support and insightful comments during the review process which has greatly enhanced this paper. We also thank the Supply Chain Artificial Intelligence Lab (SCAIL) for their insightful discussions regarding early drafts of this paper.
## Disclosure statement
The authors report no conflict of interest.
## Funding
This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant on "Intelligent Systems for Supply Chain Automation" under Grant Number 2275316, as well as by the UK EPSRC Connected Everything Network Plus under Grant EP/S036113/1.
## Appendix A Capacitated vehicle routing problem
In our paper, the pre-collaboration social welfare can be calculated by first solving three independent Capacitated Vehicle Routing Problems, where we assume an arbitrarily high capacity for each vehicle.
The capacitated vehicle routing problem (CVRP) and their variants have been studied for over 60 years (Toth and Vigo, 2014). Here we show the _three-index (vehicle-flow) formulation_.
The CVRP considers the setting where goods are distributed to \(n\) customers. The goods are initially located at the _depot_, denoted by nodes (or vertices) \(o\) and \(d\). Node \(o\) refers to the starting point of a route, and node \(d\) the end point of a route. The customers are denoted by the set of nodes \(N=\{1,2,\ldots,n\}\). Each customer \(i\in N\) has a _demand_\(q_{i}\geq 0\). In our setting, we consider \(q_{i}=1\) for all customers. A _fleet_ of \(|K|\) vehicles \(K=\{1,2,\ldots,|K|\}\) are said to be _homogeneous_ if they all have the same capacity \(Q>0\). In our setting, we consider only one vehicle and set its capacity \(Q\) to an arbitrarily high number to remove the capacity constraint. A vehicle must start at the depot, and can deliver to a set of customers \(S\subseteq N\) before returning to the depot. The _travel cost_\(c_{i,j}\) is associated for a vehicle travelling between nodes \(i\) and \(j\) which we assume to be the Euclidean distance.
This problem can be modelled as a complete directed graph \(G=(V,A)\), where the vertex set \(V\coloneqq N\cup\{o,d\}\) and the arc set \(A\coloneqq(V\setminus\{d\})\times(V\setminus\{o\})\). We define the _in-arcs_ of \(S\) as \(\delta^{-}(S)=\{(i,j)\in A:i\notin S,j\in S\}\). The _out-arcs_ of \(S\) is \(\delta^{+}(S)=\{(i,j)\in A:i\in S,j\notin S\}\).
The binary decision variable \(x_{ijk}\) denotes whether a vehicle \(k\in K\) travels over the arc \((i,j)\in A\). The binary decision variable \(y_{ik}\) denotes whether a vehicle \(k\in K\) visits node \(i\in V\). \(u_{ik}\) denotes the load in vehicle \(k\) before visiting node \(i\). We define the demand at the depot nodes \(o\) and \(d\) to be \(0\), i.e. \(q_{o}=q_{d}=0\). This yields:
minimize \[\sum_{k\in K}c^{T}x_{k}\] (1a) subject to \[\sum_{k\in K}y_{ik}=1, \forall i\in N,\] (1b) \[x_{k}(\delta^{+}(i))-x_{k}(\delta^{-}(i))=\begin{cases}1,&i=o,\\ 0,&i\in N,\end{cases} \forall i\in V\setminus\{d\},k\in K,\] (1c) \[y_{ik}=x_{k}(\delta^{+}(i)) \forall i\in V\setminus\{d\},k\in K,\] (1d) \[y_{dk}=x_{k}(\delta^{-}(d)) \forall k\in K,\] (1e) \[u_{ik}-u_{jk}+Qx_{ijk}\leq Q-q_{j} \forall(i,j)\in A,k\in K,\] (1f) \[q_{i}\leq u_{ik}\leq Q \forall i\in V,k\in K,\] (1g) \[x=(x_{k})\in\{0,1\}^{K\times A},\] (1h) \[y=(y_{k})\in\{0,1\}^{K\times V}.\] (1i)
* The objective function (1a) minimises the Euclidean distance travelled by the vehicle.
* Constraint (1b) ensures the vehicle only visits each customer once.
* Constraint (1c) enforces flow conservation: for each vehicle \(k\), the number of outgoing arcs minus the number of incoming arcs equals \(1\) at the origin \(o\) and \(0\) at every customer, which implies a net flow of \(-1\) at node \(d\). This ensures that a vehicle \(k\) performs a route starting at \(o\) and ending at \(d\).
* Constraint (1d and 1e) couples variables \(x_{ijk}\) and \(y_{ik}\).
* Constraint (1f) is the Miller-Tucker-Zemlin constraint which helps eliminate subtours.
* Constraint (1g) is the capacity constraint.
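For reference, the formulation (1a)-(1i) can be transcribed almost directly into a solver. The sketch below uses the gurobipy interface, since the paper reports solving the routing problems with Gurobi; it is our own reconstruction, and the function signature and naming are illustrative rather than the authors' implementation.

```python
import gurobipy as gp
from gurobipy import GRB

def solve_cvrp(customers, cost, demand, Q, vehicles, o="o", d="d"):
    """Three-index CVRP of Appendix A; demand must include demand[o] = demand[d] = 0."""
    V = [o] + list(customers) + [d]
    A = [(i, j) for i in V if i != d for j in V if j != o if i != j]
    m = gp.Model("cvrp")
    x = m.addVars([(i, j, k) for (i, j) in A for k in vehicles], vtype=GRB.BINARY, name="x")
    y = m.addVars([(i, k) for i in V for k in vehicles], vtype=GRB.BINARY, name="y")
    u = m.addVars([(i, k) for i in V for k in vehicles], lb=0.0, ub=Q, name="u")
    m.setObjective(gp.quicksum(cost[i, j] * x[i, j, k] for (i, j) in A for k in vehicles),
                   GRB.MINIMIZE)
    m.addConstrs((gp.quicksum(y[i, k] for k in vehicles) == 1 for i in customers), "c1b")
    for k in vehicles:
        for i in V:
            if i == d:
                continue
            out_i = gp.quicksum(x[a, j, k] for (a, j) in A if a == i)
            in_i = gp.quicksum(x[a, j, k] for (a, j) in A if j == i)
            m.addConstr(out_i - in_i == (1 if i == o else 0))   # (1c)
            m.addConstr(y[i, k] == out_i)                       # (1d)
        m.addConstr(y[d, k] == gp.quicksum(x[a, j, k] for (a, j) in A if j == d))  # (1e)
        m.addConstrs((u[i, k] - u[j, k] + Q * x[i, j, k] <= Q - demand[j]
                      for (i, j) in A), f"c1f_{k}")             # Miller-Tucker-Zemlin
        m.addConstrs((u[i, k] >= demand[i] for i in V), f"c1g_{k}")  # load lower bound
    m.optimize()
    return m.ObjVal
```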
## Appendix B Multi-depot vehicle routing problem
In our paper, the post-collaboration social welfare can be calculated by solving the multi-depot vehicle routing problem (MDVRP) once. The number of depots corresponds to the number of agents within the accepted coalition. Again, we remove capacity constraints by setting the capacity of each vehicle to an arbitrarily large number. However, we add the additional constraint that each vehicle has to visit at least one customer.
The MDVRP is a simple extension of the CVRP formulation provided in Appendix A. Instead of having the depot simply represented by nodes \(o\) and \(d\), the depots are extended to belong to a specific vehicle \(k\) through nodes \(o_{k}\) and \(d_{k}\). Doing so yields:
minimize \[\sum_{k\in K}c^{T}x_{k}\] (2a) subject to \[\sum_{k\in K}y_{ik}=1, \forall i\in V,\] (2b) \[x_{k}(\delta^{+}(i))-x_{k}(\delta^{-}(i))=\begin{cases}1,&i=o_{k },\\ 0,&i\in N,\end{cases} \forall i\in V\setminus\{d_{k}\},k\in K,\] (2c) \[y_{ik}=x_{k}(\delta^{+}(i)) \forall i\in V\setminus\{d_{k}\},k\in K,\] (2d) \[y_{d_{k}k}=x_{k}(\delta^{-}(d_{k})) \forall k\in K,\] (2e) \[y_{d_{k}k}=1 \forall k\in K,\] (2f) \[u_{ik}-u_{jk}+Qx_{ijk}\leq Q-q_{j} \forall(i,j)\in A,k\in K,\] (2g) \[q_{i}\leq u_{ik}\leq Q \forall i\in V,k\in K,\] (2h) \[x=(x_{k})\in\{0,1\}^{K\times A},\] (2i) \[y=(y_{k})\in\{0,1\}^{K\times V}.\] (2j)
* The objective function (2a) minimises the Euclidean distance travelled by all vehicles.
* Constraint (2b) ensures that each customer is visited by exactly one vehicle.
* Constraint (2c) enforces flow conservation: for each vehicle \(k\), the number of outgoing arcs minus the number of incoming arcs equals \(1\) at \(o_{k}\) and \(0\) at every customer, which implies a net flow of \(-1\) at \(d_{k}\). This ensures that a vehicle \(k\) performs a route starting at \(o_{k}\) and ending at \(d_{k}\).
* Constraint (2d and 2e) couples variables \(x_{ijk}\) and \(y_{ik}\).
* Constraint (2f) ensures that each vehicle performs at least one delivery.
* Constraint (2g) is the Miller-Tucker-Zemlin constraint which helps eliminate subtours.
* Constraint (2h) is the capacity constraint.
## Appendix C Expected Number of Bargaining Rounds by a Random bot
Let \(X\) be a discrete random variable denoting the number of bargaining rounds. Let's assume we have a random agent as discussed in Section 5.3.2 which proposes coalitions, pay-off vectors and responses uniformly at random. We wish to calculate the expected number of bargaining rounds achieved by three random bots, \(\mathbb{E}[X]\). The maximum number of bargaining rounds is 10 in our experiments (although our ablations show that increasing this to 30 has no meaningful difference).
\[\mathbb{E}[X] =\sum_{x=1}^{10}x\cdot P(X=x) \tag{10}\] \[=1\cdot P(X=1)+2\cdot P(X=2)+\cdots+10\cdot P(X=10) \tag{11}\]
To obtain \(P(X=1)\), note that e.g. for Player 2, the random bot can propose four coalitions, \(C=\{1,2,3\},\{1,2\},\{2,3\}\) or \(\{2\}\) since Player 2 must be in the coalition \(C\). If the coalition \(C=\{1,2,3\}\) is proposed, then both Players 1 and 3 must accept for the bargaining process to terminate, which yields a probability of acceptance (and thus termination) of \(\left(\frac{1}{2}\right)^{2}\).
Therefore, \(P(X=1)\) can be re-written as follows:
\[P(X=1) =\left[P(|C|=3)\times\left(\frac{1}{2}\right)^{2}\right]+\left[P(|C|=2)\times\frac{1}{2}\right]+\left[P(|C|=1)\right] \tag{12}\] \[=\left[0.25\times\left(\frac{1}{2}\right)^{2}\right]+\left[(0.25+0.25)\times\frac{1}{2}\right]+\left[0.25\right] \tag{13}\]
Repeating a similar logic to calculate \(\mathbb{E}[X]\) yields an expected number of bargaining rounds of \(1.775\).
\[\mathbb{E}[X] =(1\cdot 0.5625)+(2\cdot 0.2461)+(3\cdot 0.1077)+(4\cdot 0.0471)+\cdots+(10\cdot 0.0003) \tag{14}\] \[=1.775 \tag{15}\]
Empirically, our bots reach agreement at \(1.777\) rounds averaged over 10 runs.
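The same number can be reproduced in a few lines, under the assumption that each round of bargaining started by a random proposer terminates independently with the probability computed in Equation (13) (our own sketch).

```python
# Probability that a round proposed and answered uniformly at random ends in agreement, Eq. (13).
p = 0.25 * 0.5 ** 2 + 0.5 * 0.5 + 0.25          # = 0.5625
T = 10                                           # maximum number of bargaining rounds
pmf = [(1 - p) ** (k - 1) * p for k in range(1, T + 1)]
expected_rounds = sum(k * pk for k, pk in zip(range(1, T + 1), pmf))
print(round(expected_rounds, 3))                 # ~1.775, matching Eq. (15); the tiny tail
                                                 # beyond round 10 is neglected, as in Eq. (14)
```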
## Appendix D Pseudo-code of the entire pipeline
```
1:Initialise \(\mathbf{\theta}=\theta_{1},\theta_{2},\ldots,\theta_{n},\theta_{\textit{critic}}\) // \(n\) actors' (neural network) policy and critic
2:
3:// Supervised Pre-training (regression, minimise mean-squared error)
4:for\(\theta_{i}\) in \(\mathbf{\theta}\)do
5:\((\Delta\hat{g}_{\theta_{i}})^{2}=(v(C)-\hat{g}_{\theta_{i}})^{2}\) // Calculate loss
6:\(\Delta\theta_{i}=\nabla_{\theta_{i}}(\Delta\hat{g}_{\theta_{i}})^{2}\) // Calculate gradients
7:\(\theta_{i}=\theta_{i}+\alpha\Delta\theta_{i}\) // Update parameters
8:
9:// MARL Training
10:for each training epoch \(e\)do
11: Initialise \(M=2048\) parallel environments // Coalitional bargaining envs.
12:\(s_{1}\sim\rho_{1},t=0\) // Sample the initial state \(s_{1}\) from \(\rho_{1}\)
13:while\(s_{t}\neq\) terminal and \(t<T\)do
14: t += 1
15: // Calculate joint actions a
16:for i in N do
17:\(a_{i,t}\sim\pi_{\theta_{i}}(a_{i,t}|s_{i,t})\) // Select actions stochastically for exploration
18:\(s_{t+1}\sim\mathcal{T}(s_{t},A_{t})\) // Sample next state from transition dynamics
19:\(R_{i,t}\sim\mathcal{R}(s_{t},A_{t},s_{t+1})\) \(\forall i\in N\) // Calculate reward
20: Store each \(\langle s_{i,t},a_{i,t},\log(\pi_{\theta_{i}}(a_{i,t}|s_{i,t})),s_{i,t+1},R_{i,t}\rangle\ \forall i\in N\) in agent \(i\)'s buffer
21:
22: // Here, all \(M\) episodes will be finished
23:for\(t=1\) to T do
24:\(G_{i,t}=\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}R_{i,t}\) \(\forall i\in N\) // Calculate discounted returns
25:for t=T down to 1 do
26:\((\Delta Q_{i,t})^{2}=\left[G_{i,t}-\hat{Q}_{\theta_{\textit{critic}}}(s_{i,t},\mathbf{a})\right]^{2}\) // Calculate critic loss
27:\(\Delta\theta_{\textit{critic}}=\nabla_{\theta_{\textit{critic}}}(\Delta Q_{ i,t})^{2}\) // Calculate critic gradients
28:\(\theta_{\textit{critic}}=\theta_{\textit{critic}}+\alpha\Delta\theta_{ \textit{critic}}\) // Update critic parameters
29:for t=T down to 1 do
30: // Calculate proposal baseline
31:\(A_{t,\textit{prop.}}^{i}=G_{i,t}-\hat{V}(s,\theta_{\textit{critic}})\) \(\forall i\in N\)
32: // Calculate response baseline
33:\(A_{t,\textit{resp.}}^{i}=G_{i,t}-\sum_{a}\hat{Q}(s_{i,t},(\mathbf{a}^{-i},a),\theta_{\textit{critic}})\,\pi_{\theta_{i}}(a|s_{i,t})\) \(\forall i\in N\)
34: // Accumulate actor proposal gradients
35:\(\Delta\theta_{i}\) += \(\nabla_{\theta_{i}}\left[\min(r_{t}(\theta_{i})A_{t,\textit{prop.}}^{i}, \text{clip}(r_{t}(\theta_{i}),1-\varepsilon,1+\varepsilon)A_{t,\textit{prop.}}^ {i}\right]\forall i\in N\)
36: // Accumulate actor response gradients
37:\(\Delta\theta_{i}\) += \(\nabla_{\theta_{i}}\left[\min(r_{t}(\theta_{i})A_{t,\textit{resp.}}^{i}, \text{clip}(r_{t}(\theta_{i}),1-\varepsilon,1+\varepsilon)A_{t,\textit{resp.}}^ {i}\right]\forall i\in N\)
38:\(\theta_{i}=\theta_{i}+\alpha\Delta\theta_{i}\) \(\forall i\in N\) // Update actors' policy parameters
39:\(\Delta\theta_{i}=\mathbf{0}\) // Reset gradients
```
**Algorithm 1** Pseudo-code of MARL pipeline
## Appendix E Holistic Diagram of our Pipeline
2305.04129 | Motion and deformation of capsules flowing through a corner in the
inertial and non-inertial regimes | We investigate the inertial and non-inertial dynamics of three-dimensional
elastic capsules flowing through a square channel presenting a sharp corner.
Our study analyzes the trajectory, surface area, velocity and membrane stress
of the capsules in the case of a single capsule, a system of two interacting
capsules and a train of ten capsules released upstream of the corner. The
channel Reynolds number $Re$ ranges from 0.01 to 50 and the Capillary number
$Ca$, which measures the ratio of the viscous and elastic stresses, ranges from
0.075 to 0.35. We find that in the inertial regime, the membrane stretch and
stress increase dramatically as compared to the non-inertial case, and that the
velocity overshoot inside the corner is also enhanced. The maximum capsule
deformation is observed to depend nearly linearly on $Ca$ and $Re$.
Additionally, we report a repelling mechanism between two confined capsules
when their initial interspacing distance $d$ is smaller than a critical value
$d_c$. The deformation of the leading capsule is found to be mitigated by the
presence of the following capsule. In the case of multiple capsules flowing
through the corner, we observe that the increase in the maximum surface area of
the trailing capsules eventually saturates at the tail of the train. Moreover,
we find that the corner tends to separate the capsules regardless of their
upstream interspacing distances $d$. This study contributes to the elaboration
of practical guidelines for controlling capsule breakup and predicting
throughput in both inertial and non-inertial microfluidic experiments. | Damien P. Huet, Antoine Morente, Guodong Gai, Anthony Wachs | 2023-05-06T20:19:06Z | http://arxiv.org/abs/2305.04129v1 | Motion and deformation of capsules flowing through a corner in the inertial and non-inertial regimes
###### Abstract
We investigate the inertial and non-inertial dynamics of three-dimensional elastic capsules flowing through a square channel presenting a sharp corner. Our study analyzes the trajectory, surface area, velocity and membrane stress of the capsules in the case of a single capsule, a system of two interacting capsules and a train of ten capsules released upstream of the corner. The channel Reynolds number \(Re\) ranges from \(0.01\) to \(50\) and the Capillary number \(Ca\), which measures the ratio of the viscous and elastic stresses, ranges from \(0.075\) to \(0.35\). We find that in the inertial regime, the membrane stretch and stress increase dramatically as compared to the non-inertial case, and that the velocity overshoot inside the corner is also enhanced. The maximum capsule deformation is observed to depend nearly linearly on \(Ca\) and \(Re\). Additionally, we report a repelling mechanism between two confined capsules when their initial interspacing distance \(d\) is smaller than a critical value \(d_{c}\). The deformation of the leading capsule is found to be mitigated by the presence of the following capsule. In the case of multiple capsules flowing through the corner, we observe that the increase in the maximum surface area of the trailing capsules eventually saturates at the tail of the train. Moreover, we find that the corner tends to separate the capsules regardless of their upstream interspacing distances \(d\). This study contributes to the elaboration of practical guidelines for controlling capsule breakup and predicting throughput in both inertial and non-inertial microfluidic experiments.
## I Introduction
Membrane-enclosed fluid objects, or capsules, are everywhere in natural and industrial processes, from red blood cells (RBCs), circulating tumor cells (CTCs) or flowing eggs in biology to encapsulated substances in the pharmaceutical, cosmetic and food industries [1]. The study of microcapsules in particular is of primary importance in a variety of biological applications, such as sorting and enriching solutions of biological microcapsules, e.g. to segregate RBCs or CTCs, as well as efficiently manufacturing capsules enclosing an active substance in the field of targeted drug delivery [2; 3]. In the past decade, microfluidic devices have been shown to accomplish a variety of tasks including cell segregation based on size and deformability [4; 5; 6; 7], concentration enrichment [8; 9; 10] and cell characterization [11; 12; 13]. Moreover, the increase in computing power has recently allowed numerical studies to contribute to the design of microfluidic devices. For example, Zhu et al. [4] numerically investigated an original microchannel geometry consisting of a semi-circular pillar located at the center of a microchannel: their study showed that this design can efficiently segregate cells based on membrane deformability. Recently, experiments were conducted using their microfluidic design and concluded that it can indeed sort cells based solely on membrane stiffness, with relatively high efficacy [5]. With regards to cell characterization, Gubspun et al. [11] proposed a method to determine capsule properties such as the membrane shear modulus by comparing the experimental and numerical "parachute" shape of capsules in a straight microchannel. While the majority of microfluidic investigations operate in Stokes conditions, in recent years the design and study of inertial microfluidic devices has risen due to their ability to accurately segregate capsules by size and to extract them from their solvent [14; 7]. Inertial focusing in microfluidic devices typically relies on a spiral-shaped channel concentrating heavier capsules to the outer, lower-curvature edge of the channel, while lighter capsules concentrate closer to the inner, higher-curvature edge. A smooth geometry such as a spiral-shaped channel usually does not induce a high strain nor stress on a suspended capsule even in inertial regimes, however little is known about
the strains and stresses induced by commonly encountered sharp geometries such as forks and corners on a capsule flowing in the presence of inertia. Moreover, the effect of such sharp geometries on the hydrodynamic interactions of a train of several capsules in inertial regimes is also an open question. More insight in these directions is of practical interest in the design and operation of inertial microfluidic devices because (i) the devices should not compromise the mechanical integrity of the capsules, i.e. it is critical to avoid capsule breakup, and (ii) cell-sorting processes typically operate in very dilute regimes to avoid capsule interactions, while a better understanding of such interactions would allow to operate these devices at a moderate to high concentration optimizing efficacy and throughput.
In the past four decades, a significant research effort has been invested into the modeling and the study of capsule deformations in non-inertial regimes, primarily because this regime is encountered in microcirculation such as capillary vessels and in traditional microfluidic devices. Using formalism from the thin-shell theory [15], Barthes-Biesel & Rallison first published an analytical solution for the time-dependant deformation of an elastic capsule in an unbounded, creeping shear flow in the limit of small deformations [16]. Over a decade later, Pozrikidis was able to go beyond the assumption of small deformations using a Boundary Integral Method (BIM) [17]. The same method was used to consider finite deformations of sheared capsules which inner and outer fluid viscosities differ [18], as well as to study the contribution of bending stresses [19], allowing to consider RBCs suspended in an unbounded shear flow [20]. Besides unbounded geometries, Zhao et al. [21] simulated RBCs in straight and constricted channels using a spectral BIM. A similar method was later used by Hu et al. [22] to consider an initially spherical capsule flowing through a square channel of width similar to the capsule diameter: the originality of their work is that they performed experiments and showed remarkable agreement between the measured and the computed capsule shape. Concomitantly, Park and Dimitrakopoulos [23] studied the deformation of a capsule with non-unity viscosity ratio flowing through a sharp constriction. More recently, Balogh & Bagchi [24; 25; 26] used a Front-Tracking Method (FTM) to analyze the motion and deformation of RBCs through complex geometries resembling capillary vessels found in human microcirculation: their simulations exhibited in particular the well-known cell-free layer observed experimentally between the RBCs and the vessel walls [27; 28].
Regarding the study of flowing capsules in the presence of inertia, the aforementioned analytical theory for small deformations as well as the popular BIM both fall short of accounting for the convective term in the fluid momentum equation. Doddi & Bagchi [29] first studied inertial capsules in the context of two interacting capsules in a shear flow using the FTM. They showed in particular that the two capsules engage in spiralling motions at sufficiently high inertia. The inertial motion of a deformable capsule was then studied in straight microchannels [30; 31], where several equilibrium positions are found away from the channel centerline, along the cross-section diagonals. With regards to curved channels, Ebrahimi & Bagchi [32] recently investigated the migration of a single capsule over an impressive amount of varying parameters: the channel Reynolds number, the capsule deformability, as well as the aspect ratio and curvature of the channel were all varied independently. Their study shows that for sufficiently high inertia, exactly two focusing locations appear near the centers of the vortices of the secondary flow, known as Dean's vortices. However no mention of the membrane internal strains and stresses is found in their work, as their goal was not to investigate the capsule integrity in such flows.
While straight and curved microchannels are essential components of microfluidic devices, such simple geometries do not account for the numerous junctions, corners and coils commonly found in these devices. To bridge this gap, Zhu & Brandt [33] investigated the non-inertial motion and the deformation of a single elastic capsule in a sharp corner. They showed that the capsule follows the streamlines of the undisturbed flow regardless of membrane deformability. Due to lubrication forces, the capsule velocity decreases when approaching the corner, reaches a minimum along the corner diagonal, and rises back to its steady state with an overshoot increasing with deformability. Similarly, the surface area of the capsule reaches a maximum inside the corner and reaches its steady value with an undershoot more pronounced as deformability is increased. Also reported in their study is the maximum stress in the capsule membrane, which can be used to assess mechanical integrity and characterize the cell mechanical properties. They find that the maximum stress deviation increases and shifts from the front to the top of the capsule with increasing deformability. Wang et al. [8; 9] later considered the inertial and non-inertial path selection of a single capsule through Y- and T-junctions, both typically encountered in microfluidic geometries. They observe that at high inertia, the capsule does not necessarily favor the daughter branch with the largest flow rate, and that this effect is more pronounced for stiff membranes (corresponding to a low capillary number). Recently, Lu et al. [10] investigated the interaction and path selection of capsules in a T-junction at moderate inertia, with the goal of enriching capsule solutions. When considering a pair of capsules, they show that the leading capsule is weakly affected by the presence of a trailing capsule, but that the reverse is not true. They find that the trailing capsule enters a different branch depending on the initial interspacing distance and on the flow rate split ratio between the two daughter branches of the T-junction. They then consider a train of capsules and find two distinct regimes: (i) the interspacing distance is low and the capsule interaction is high, resulting in an unsteady regime and affecting the trajectories of the capsules, and (ii) the interspacing distance is large and the capsule interaction is low, leaving the capsule trajectories identical to that of a single capsule. Interestingly, they report that the critical interspacing distance between two capsules
plotted against the flow rate split ratio of the daughter branches results in a master curve independent of membrane deformability, capsule size, and Reynolds number.
In this study, we investigate the inertial and non-inertial motion and the interaction of deformable capsules flowing through a sharp corner, which is a very common geometry in microfluidic devices. As the efficiency of these devices is defined in terms of the capsule throughput, which can be optimized by increasing the flow rate as well as the concentration of capsules, our objective is two-fold: first, we aim to quantify the effect of inertia on the deformation of a single capsule in a microfluidic-relevant geometry; second, we seek to describe the hydrodynamic interactions and deformation differences between leading and trailing capsules when a pair and a train of capsules are considered. The rest of this paper is organized as follows. In Section II, we describe the governing equations as well as the flow configuration and the considered parameter space. In Section III, we give an overview of our numerical method and we investigate the influence of the grid resolution and of the capsule release distance. We analyze the motion of a single capsule in Section IV, both in the non-inertial and in the inertial regimes. Section V is devoted to the analysis of binary interactions of a pair of capsules, where the influence of the initial interspacing distance is investigated. In Section VI, we consider a train of ten capsules flowing through the corner and we discuss the velocity and deformation discrepancies between the leading and trailing capsules. Finally, we conclude in Section VII.
The documented source code needed to reproduce all of the simulations and figures presented in this study is freely available online [34].
## II Governing equations and problem statement
The capsule membrane \(\Gamma\) is assumed infinitely thin and is surrounded by an incompressible, Newtonian fluid of constant viscosity and density. Throughout this study, the capsule inner and outer fluids are assumed identical; in particular, their viscosity ratio is unity. The fluid is described by the mass and momentum conservation equations:
\[\nabla\cdot\mathbf{\tilde{u}}=0 \tag{1}\]
\[\frac{\partial\mathbf{\tilde{u}}}{\partial\tilde{t}}+\mathbf{\tilde{u}}\cdot\nabla\mathbf{\tilde{u}}=-\frac{1}{\tilde{\rho}}\nabla\tilde{p}+\tilde{\nu}\Delta\mathbf{\tilde{u}}+\frac{1}{\tilde{\rho}}\mathbf{\tilde{f}_{b}} \tag{2}\]
where \(\mathbf{\tilde{u}}\) is the velocity field, \(\tilde{p}\) is the pressure field, \(\tilde{\rho}\) is the density, \(\tilde{\nu}=\tilde{\mu}/\tilde{\rho}\) is the kinematic viscosity, \(\tilde{\mu}\) is the dynamic viscosity and \(\mathbf{\tilde{f}_{b}}\) is a body-force term accounting for the action of the membrane on its surrounding fluid. The dimensional quantities are denoted by the \(\sim\) symbol. The membrane exhibits elasticity and bending resistance, and its action on the fluid is local, resulting in the following expression for \(\mathbf{\tilde{f}_{b}}\):
\[\mathbf{\tilde{f}_{b}}=\left(\mathbf{\tilde{f}_{\text{elastic}}}+\mathbf{\tilde{f}_{\text {bending}}}\right)\tilde{\delta}(\mathbf{\tilde{x}}-\mathbf{\tilde{x}_{\Gamma}}), \tag{3}\]
where \(\tilde{\delta}(\mathbf{\tilde{x}}-\mathbf{\tilde{x}_{\Gamma}})\) is a Dirac distribution that is non-zero on the surface of the membrane.
The shear and area-dilatation membrane stresses are described using the thin-shell theory, and are briefly summarized here. The interested reader is referred to Green & Adkins [15] as well as to the analytical study of Barthes-Biesel & Rallison [16] for more details. We adopt a neo-Hookean law [15], whose surface strain-energy function is expressed as:
\[\tilde{W_{s}}^{\,\,NH}=\frac{\tilde{E_{s}}}{2}\left(\lambda_{1}^{2}+\lambda_{2}^{2}-3+\frac{1}{\lambda_{1}^{2}\lambda_{2}^{2}}\right), \tag{4}\]
where \(\lambda_{1,2}\) are the principal stretches in the two tangential directions, and \(\tilde{E_{s}}\) is a shear modulus. The principal stresses \(\tilde{\sigma}_{1,2}\) are given by:
\[\tilde{\sigma}_{i}=\frac{1}{\lambda_{j}}\frac{\partial\tilde{W_{s}}^{\,\,NH}} {\partial\lambda_{i}},\qquad i,j\in\{1,2\},\quad i\neq j. \tag{5}\]
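For readers who prefer a computational statement of Eqs. (4)-(5), the following Python sketch (written for this text only; it is not an excerpt of our solver, and the function names are ours) evaluates the principal stresses by centered finite differences of the strain-energy function:

```python
def w_neo_hookean(l1, l2, Es=1.0):
    """Neo-Hookean surface strain-energy function of Eq. (4) (additive constants are irrelevant)."""
    return 0.5 * Es * (l1**2 + l2**2 - 3.0 + 1.0 / (l1**2 * l2**2))

def principal_stresses(l1, l2, W=w_neo_hookean, eps=1e-6):
    """Principal membrane stresses (sigma_1, sigma_2) of Eq. (5), obtained by
    centered finite differences of the strain-energy function W(l1, l2)."""
    dW_dl1 = (W(l1 + eps, l2) - W(l1 - eps, l2)) / (2.0 * eps)
    dW_dl2 = (W(l1, l2 + eps) - W(l1, l2 - eps)) / (2.0 * eps)
    return dW_dl1 / l2, dW_dl2 / l1

# Example: membrane element stretched by 20% in direction 1 and compressed by 10% in direction 2
print(principal_stresses(1.2, 0.9))
```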
The bending stresses for biological membranes are governed by Helfrich's bending energy \(\tilde{\mathcal{E}}_{b}\)[35; 36]:
\[\tilde{\mathcal{E}}_{b}=\frac{\tilde{E_{b}}}{2}\int_{\Gamma}\left(2\tilde{ \kappa}-\tilde{\kappa}_{0}\right)^{2}dS, \tag{6}\]
where \(\tilde{E}_{b}\) is the bending modulus, \(\tilde{\kappa}\) is the mean curvature and \(\tilde{\kappa}_{0}\) is a reference curvature. Taking the first variation of Eq. (6) leads to the bending force per unit area \(\tilde{A}\):
\[\mathbf{\tilde{f}}_{\text{bending}}/\tilde{A}=-2\tilde{E}_{b}(\Delta_{s}(\tilde{ \kappa})+2(\tilde{\kappa}-\tilde{\kappa}_{0})(\tilde{\kappa}^{2}-\tilde{\kappa} _{g}+\tilde{\kappa}_{0}\tilde{\kappa}))\mathbf{n}, \tag{7}\]
where \(\tilde{\kappa}_{g}\) is the Gaussian curvature and \(\mathbf{n}\) is the outer normal vector.
At \(t=0\), an initially spherical capsule of radius \(\tilde{a}\) is placed in a square channel of width \(\tilde{W}=3\tilde{a}\) at a distance \(\tilde{h}_{0}=30\tilde{a}\) from a sharp corner, as represented in Fig. 1. An average cross-section velocity \(\tilde{U_{0}}\) is imposed at the inlet boundary, while the outflow boundary condition \(\partial\mathbf{\tilde{u}}_{n}/\partial\mathbf{n}=0\) is imposed at the outlet boundary. When several capsules are considered, we use the same initial conditions as Lu et al. [10]: a trailing capsule is inserted in the simulation only after the centroid of its preceding capsule has advanced by a distance \(\tilde{d}\). Our problem is governed by the following dimensionless numbers:
1. The channel Reynolds number \(\text{{Re}}=\tilde{\rho}\tilde{U_{0}}\tilde{W}/\tilde{\mu}\),
2. The Capillary number \(Ca=\tilde{\mu}\tilde{U_{0}}\tilde{a}/\tilde{E_{s}}\), representing the ratio of viscous stresses over elastic stresses,
3. The reduced bending stiffness coefficient \(E_{b}=\tilde{E_{b}}/(\tilde{E_{s}}\tilde{a}^{2})\),
4. The confinement ratio \(\beta=2\tilde{a}/\tilde{W}\),
5. The reduced initial gap between capsules \(d_{0}=\tilde{d}/2\tilde{a}-1\).
In this study, the Reynolds number \(\text{{Re}}\) ranges from 0.01 to 50, the Capillary number \(Ca\) varies from 0.075 to 0.35, and the reduced initial gap \(d_{0}\) is chosen from 0.125 to 1. The reduced bending stiffness \(E_{b}\) and the confinement ratio \(\beta\) are both kept constant, with \(\beta=2/3\) and \(E_{b}=5\cdot 10^{-3}\) as proposed by Pozrikidis [37]. The reference curvature \(\tilde{\kappa_{0}}\) is equal to \(-2.09/\tilde{a}\) in this study, as is common for some biological membranes such as RBC membranes [38; 39]. In the rest of this study, we use the capsule radius \(\tilde{a}\) as the characteristic length scale, and we define the characteristic time scale as the ratio of the capsule radius to the average cross-section velocity, i.e. \(\tilde{a}/\tilde{U_{0}}\).
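As a simple illustration of the definitions above, the snippet below evaluates the five dimensionless groups verbatim; the dimensional inputs are placeholders in arbitrary consistent units and are not the parameters used in this study:

```python
# Placeholder dimensional inputs (arbitrary consistent units), for illustration only.
rho, mu = 1.0, 0.1            # fluid density and dynamic viscosity
a = 1.0                       # capsule radius
W = 3.0 * a                   # channel width (square cross-section of side 3a)
U0 = 1.0                      # average cross-section velocity
Es = 4.0                      # membrane shear modulus
Eb_dim = 5.0e-3 * Es * a**2   # bending modulus chosen so that E_b = 5e-3
d = 2.5 * (2.0 * a)           # dimensional centroid-to-centroid gap

Re = rho * U0 * W / mu        # 1. channel Reynolds number
Ca = mu * U0 * a / Es         # 2. Capillary number, following the definition of item 2
Eb = Eb_dim / (Es * a**2)     # 3. reduced bending stiffness
beta = 2.0 * a / W            # 4. confinement ratio
d0 = d / (2.0 * a) - 1.0      # 5. reduced initial gap
print(Re, Ca, Eb, beta, d0)   # -> 30.0 0.025 0.005 0.666... 1.5
```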
## III Numerical method and validations
We use our adaptive Front-Tracking Method (FTM) to solve the above equations: we provide below a brief overview of the numerical method, while an in-depth description is available in [40]. Eq. (1) and Eq. (2) are solved using the Finite Volume method on an adaptive octree grid using the open-source software Basilisk [41]. The membrane is discretized using an unstructured triangulation and Eq. (5) is solved using a linear Finite Element Method, while Eq. (7) is solved using a paraboloid-fitting method. The membrane triangulation and the octree grid communicate
Figure 1: (a) Schematic of the geometry of the fluid domain. The channel has a square cross-section of side length \(3\tilde{a}\). (b) Visualization of the full channel and the computational grid over the symmetry plane of the channel.
by means of the immersed boundary method [42; 43], where the Dirac distribution in Eq. (3) is regularized using a cosine-based formulation:
\[\tilde{\delta}(\mathbf{x_{0}}-\mathbf{x})=\begin{cases}\dfrac{1}{64\tilde{ \Delta}^{3}}\prod_{i=1}^{3}\left(1+\cos\left(\dfrac{\pi}{2\tilde{\Delta}}(x_{0,i}-x_{i})\right)\right)\quad\text{if}\quad|x_{0,i}-x_{i}|<2\tilde{\Delta}\\ 0\quad\text{otherwise}\end{cases}, \tag{8}\]
where \(\mathbf{x_{0}}=[x_{0,1}\ x_{0,2}\ x_{0,3}]\) is the location of a Lagrangian node on the surface discretization of the membrane, and \(\tilde{\Delta}\) is the local mesh size of the Eulerian octree grid. Extensive validation of the present numerical method was the focus of our previous study [40] and is therefore not presented here. Nonetheless, the convergence with respect to the Eulerian grid as well as the release distance of the capsule from the corner are investigated below.
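A minimal NumPy transcription of the kernel in Eq. (8) reads as follows (a sketch written for illustration, not an excerpt of the Basilisk-based solver); the small loop at the end checks that the regularized distribution integrates to approximately one over its support:

```python
import numpy as np

def cosine_delta(x0, x, delta):
    """Regularized Dirac kernel of Eq. (8).

    x0    : (3,) position of a Lagrangian membrane node
    x     : (3,) position of an Eulerian grid point
    delta : local Eulerian mesh size
    """
    r = np.abs(np.asarray(x0) - np.asarray(x))
    if np.any(r >= 2.0 * delta):
        return 0.0
    return np.prod(1.0 + np.cos(0.5 * np.pi * r / delta)) / (64.0 * delta**3)

# Midpoint-rule check of the normalization on a 32^3 sample of the kernel support
delta = 0.1
h = delta / 8.0
centers = np.arange(-2.0 * delta, 2.0 * delta, h) + h / 2.0
pts = np.array(np.meshgrid(centers, centers, centers)).reshape(3, -1).T
total = sum(cosine_delta(np.zeros(3), p, delta) for p in pts) * h**3
print(total)   # ~ 1.0
```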
In the immersed boundary method, it is well known that the support of the regularized Dirac distribution may extend outside of the fluid domain if the immersed object of interest becomes very close to the domain walls [44; 9; 10]. In order to avoid unphysical loss of momentum for the specific membrane nodes close to the wall, it is important to ensure that none of the supports of the regularized Dirac distribution extend outside of the fluid domain, i.e. that there always exist more than two grid cells between membrane nodes and the domain boundaries. As such, we simulate the dynamics of a capsule for two different grid resolutions in the configuration where it is most deformed and is the closest to the channel wall, as shown in Fig. 2b. Figure 2a shows the velocity of the capsule \(\tilde{V}\) inside and downstream of the corner for Eulerian resolutions equivalent to \(32\) and \(64\) grid cells per initial capsule diameter, as well as the deviation of the velocities in these two configurations. Excellent agreement is found between the velocities computed using the two grid resolutions, with the maximum discrepancy lower than \(1\%\) and the average discrepancy over the considered time range of about \(0.5\%\). Moreover, in both configurations it was found that more than \(3\) grid cells are present in the lubrication layer between the capsule tail and the upper corner wall. These results indicate that an equivalent grid resolution of \(32\) grid cells per capsule initial diameter is sufficient to obtain converged solutions, and that the present simulations do not suffer from immersed boundary stencils extending outside of the fluid domain.
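For reference, the deviation reported above between the two grid resolutions can be computed from the raw velocity signals as in the short helper below (our own post-processing sketch; the function and variable names are ours):

```python
import numpy as np

def velocity_deviation(t_coarse, v_coarse, t_fine, v_fine):
    """Maximum and mean relative deviation (in %) between centroid-velocity time series
    obtained on a coarse and a fine grid, after interpolating the fine signal onto the
    coarse time samples."""
    v_fine_interp = np.interp(t_coarse, t_fine, v_fine)
    dev = 100.0 * np.abs(v_coarse - v_fine_interp) / np.abs(v_fine_interp)
    return dev.max(), dev.mean()
```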
Next we investigate the influence of the normalized release distance \(D_{c}\) between the initial position of the capsule centroid and the corner. Indeed, after its release the capsule relaxes from a spherical to an equilibrium steady shape and it is important that this steady state is reached before the capsule enters the corner. As such, we consider three initial distances \(D_{c}=15\), \(30\) and \(60\) in the most challenging configuration at \(Re=50\) and \(Ca=0.35\), i.e. the capsule is highly deformable and placed in a highly inertial flow. The inlet boundary is located at a distance of \(90a\) away from the corner and is therefore sufficiently far away from the capsule to not alter its response. The norm of the capsule centroid velocity \(\tilde{V}\) and the reduced capsule surface area \(\mathcal{A}=\tilde{\mathcal{A}}/4\pi a^{2}\) are shown in Fig. 3, where the origin of the reduced time \(t\) is chosen at the time the capsule reaches a minimum velocity \(\tilde{V}_{min}\). In Fig. 3a we remark that the capsule velocity \(\tilde{V}\) at \(D_{c}=15\) decreases significantly prior to entering the corner: this is because the initially spherical capsule is located farther away from the channel walls and is therefore advected faster than when it has reached a steady shape. We observe that neither the capsule velocity shown in Fig. 3a nor the normalized surface area shown in Fig. 3b present a steady state before the capsule enters the corner in the case \(D_{c}=15\). Therefore a larger initial distance \(D_{c}\) should be used. When considering \(D_{c}=30\), both the velocity and the normalized surface area present
Figure 2: (a) Centroid velocity of a capsule at \(Ca=0.35\) and \(Re=50\) for two grid resolutions: \(32\) grid cells per initial diameter (red dotted line) and \(64\) grid cells per initial diameter (red solid line). The blue curve denotes the deviation in the centroid velocities for these two grid resolutions. (b) Corresponding shape and grid resolutions of the capsule and the flow field: blue means zero velocity and red means large velocity.
steady values before the corner. Interestingly, inside and after the corner the capsule velocity and surface area almost overlap when the capsule is released \(15\) and \(30\) initial radii away from the corner, suggesting that the corner resets the dynamics of the capsule regardless of its previous state. The fact that steady values for the velocity and the surface area of the capsule are reached before the corner for \(D_{c}=30\) suggests that this initial release distance is suitable for the rest of this study. Interestingly, releasing the capsule at \(D_{c}=60\) leads to an unexpected result: the capsule seems to no longer be in a steady motion as its velocity (respectively its normalized surface area) is slightly decreasing (respectively slightly increasing) prior to entering the corner. This suggests that in this challenging configuration, the relaxation of the capsule from a fixed spherical shape to a steady "parachute" shape occurs over very long time scales. However, the magnitude of the deviations between the capsule velocity and surface area in the cases \(D_{c}=30\) and \(60\) is at most \(3\%\). As the capsule has already reached a pseudo steady state by the time it reaches the corner in the case of \(D_{c}=30\), and as the aforementioned discrepancies are small, we choose \(D_{c}=30\) in the rest of this study. Again, this short study of the impact of the initial release distance on the capsule dynamics was performed in our most challenging configuration as we considered our highest Reynolds number and highest Capillary number. The discrepancy between the cases \(D_{c}=30\) and \(60\) is less pronounced \(-\) sometimes nonexistent \(-\) for less deformable membranes and less inertial flows.
## IV Motion and deformation of a single capsule
We consider the motion of a single capsule through a square duct at \(Ca=0.075,0.15,0.25,0.35\) and \(Re=0.01,1,25,50\), extending the investigation carried out in a non-inertial framework by Zhu & Brandt [33]. In order to establish the influence of increasing inertia on the motion and the deformation of a single capsule, we first recall the overall dynamics of a capsule moving through a duct corner in the Stokes regime, as detailed in [33]. The capsule, once released from its initial position, moves along the center of the channel due to the symmetry of the flow far from the corner. While approaching the corner, the capsule velocity decreases until reaching a minimum in the corner region. The capsule experiences moderate to high deformation (depending on the Capillary number considered) due to the flow acceleration, and its velocity strongly increases, a phenomenon referred to as the velocity overshoot. Further away from the corner, the capsule moves in the downstream branch of the duct, relaxing to a steady state (shape and velocity), and moving along the center of the duct.
We investigate the influence of the Reynolds number \(Re\) and the Capillary number \(Ca\) on the dynamics and the deformation of the capsule, reporting the time evolution of its surface area \(\mathcal{A}\) scaled by the initial surface area of the capsule \(\mathcal{A}_{\text{sphere}}=4\pi\tilde{a}^{2}\), as well as the velocity \(V\) of the capsule centroid scaled by its equilibrium velocity \(V_{eq}\) before the capsule enters the corner region. In the remainder of this study, and unless otherwise stated, the time origin is chosen such that \(t=0\) when the capsule velocity reaches its global minimum, i.e. \(V_{min}=V(t=0)\). We borrow this convention from Zhu & Brandt [33], as it corresponds to setting the time origin when the capsule is located at the heart of the corner.
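These conventions amount to a simple post-processing step; a possible implementation (our own helper, with hypothetical variable names) is sketched below:

```python
import numpy as np

def normalize_capsule_history(t, v, area, a_tilde, t_equil):
    """Apply the normalization conventions used in this section.

    t, v, area : raw histories of time, centroid velocity and membrane surface area
    a_tilde    : initial capsule radius
    t_equil    : a time at which the capsule has reached its upstream steady state

    Returns the time shifted so that t = 0 at the global velocity minimum, the velocity
    scaled by its equilibrium value V_eq, and the surface area scaled by 4*pi*a_tilde^2.
    """
    v_eq = np.interp(t_equil, t, v)        # equilibrium velocity before the corner
    i_min = np.argmin(v)                   # heart of the corner: global velocity minimum
    return t - t[i_min], v / v_eq, area / (4.0 * np.pi * a_tilde**2)
```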
Figure 3: Centroid velocity (a) and normalized surface area (b) of a capsule flowing through a corner from three distinct normalized release distances \(D_{c}=15\), \(30\) and \(60\), at \(Re=50\) and \(Ca=0.35\).
### Influence of the Reynolds and Capillary numbers
To characterize the dynamics of the capsule as it flows through the corner, we analyze the time evolution of the centroid velocity \(V\) and the surface area \(\mathcal{A}\). Figure 4 shows the velocity of the capsule centroid for \(Ca\) ranging from \(0.075\) to \(0.35\). \(Re\) is constant for each subfigure of Fig. 4. Conversely, Fig. 5 shows the same data as Fig. 4, but with each subfigure corresponding to a constant \(Ca\). From both figures, we observe a general trend for all cases: the capsule approaches the corner with a steady velocity \(V_{eq}\), then reaches a global minimum \(V_{min}\) and a global maximum \(V_{max}\) as it flows through the corner, and relaxes back to \(V_{eq}\) downstream of the corner. Moreover, we observe in Fig. 4 that the velocity extrema increase with increasing \(Ca\). In the more inertial regimes especially, the maximum velocity deviation of the capsule at \(Ca=0.35\) is close to three times that of the capsule at \(Ca=0.075\).
We note from Fig. 5 that the curves corresponding to \(Re=0.01\) and \(Re=1\) practically overlap, indicating that the capsule motion in low inertial regimes is very similar to that in the non-inertial regime. As the Reynolds number is increased to \(25\) and \(50\), major deviations from the non-inertial regime appear. First, as the capsule enters the corner zone, a local maximum appears in the capsule velocity, which is independent of the Capillary number, and is about \(1\%\) greater than \(V_{eq}\) at \(Re=25\) and \(2\%\) greater than \(V_{eq}\) at \(Re=50\). This local maximum is due to the migration of the capsule across the centerline of the secondary channel: in this process the capsule is located far away from the channel walls and is therefore less subject to their confinement effect. Then, the minimum velocity \(V_{min}\) is reached in the heart of the corner. Interestingly, at small \(Ca\), \(V_{min}\) is observed to be independent of the \(Re\), as can be seen in Fig. 5a at \(Ca=0.075\). In contrast, in the case of larger \(Ca\) the minimum velocity of the capsule increases slightly with \(Re\). A difference of about \(4\%\) is observed for \(V_{min}\) as \(Re\) increases from \(0.01\) to \(50\) for both \(Ca=0.25\) and \(Ca=0.35\).
As the capsule exits the corner zone and migrates to the channel centerline, its velocity reaches its maximum value \(V_{max}\), which increases with increasing \(Re\) and \(Ca\): at \(Ca=0.075\), \(V_{max}\) increases by \(3\%\) between \(Re=0.01\) and \(Re=50\) while at \(Ca=0.35\), \(V_{max}\) increases by about \(8\%\) between \(Re=0.01\) and \(Re=50\). Then, the capsule velocity relaxes back to its equilibrium value and its relaxation time increases with increasing \(Re\). Interestingly, velocity undershoots are observed during the relaxation stage in the inertial regime, whose magnitude increases with \(Re\). The relaxation time does not depend on \(Ca\).
The time evolution of the normalized capsule surface area \(\mathcal{A}\) is shown in Fig. 6 for fixed \(Re\) and in Fig. 7 for fixed \(Ca\). We observe that the surface area presents a maximum \(\mathcal{A}_{max}\) at around \(t=1\) before relaxing to its equilibrium value
Figure 4: Temporal evolution of the capsule centroid velocity \(V\) at fixed Reynolds numbers.
\(\mathcal{A}_{eq}\). Unsurprisingly, Fig. 6 confirms that a large \(Ca\), i.e. a highly deformable capsule, results in a greater surface area than a lower \(Ca\). Figure 6 also shows that the magnitude of the maximum surface area increases with \(Ca\). Moreover, when large \(Ca\) are considered, the time evolution of the capsule surface area presents some undershoots that are more pronounced as \(Re\) is increased. Additionally, Fig. 7 reveals that \(Re\) has a very strong influence on the deformation of the capsule, especially at large \(Ca\): at \(Ca=0.075\), \(\mathcal{A}_{max}/\mathcal{A}_{eq}\) increases from \(2\%\) to \(8\%\) between \(Re=0.01\) and \(Re=50\), and at \(Ca=0.35\) it increases from \(8\%\) to a staggering \(22\%\) between \(Re=0.01\) and \(Re=50\). In particular, at \(Ca=0.35\) the maximum capsule surface area exceeds the surface area of a sphere by \(9\%\) to \(40\%\) between the non-inertial and the highly inertial regimes. These surface area deviations are very large and are discussed further in the next section.
### Maximum deformation of the capsule
The maximum surface area \(\mathcal{A}_{max}\) of the capsule is presented in Fig. 8, as a function of both the Reynolds number and the Capillary number. To better analyze the trends in this figure, we also report the maximum area at intermediate Reynolds numbers, namely at \(Re=12.5\) and \(37.5\). The data reported in Fig. 8 clearly exhibits a double linear scaling of \(\mathcal{A}_{max}\) with both \(Ca\) and \(Re\) as long as \(Ca\) is below \(0.35\)\(-\) at \(Ca=0.35\), the shape of the curve \(\mathcal{A}_{max}(Re)\) is slightly concave. The slope of the scaling is about \(1.12\) for \(\mathcal{A}_{max}(Ca)\) and \(0.003\) for \(\mathcal{A}_{max}(Re)\). This means that the capsule maximum deformation responds proportionally to the Capillary number, but also to the Reynolds number. To our knowledge, this is the first time such a trend has been reported and established for low (\(Re=1\)) to moderate (\(Re=12.5,25,37.5,50\)) inertial regimes. We believe that this result can be used as a predictive tool for many studies involving single capsules travelling through duct corners, as the maximum deformation observed for a capsule is a measure of its mechanical integrity, which is of major interest in many microfluidic applications.
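In practice, the two slopes quoted above can be extracted from the simulation data by an ordinary least-squares fit of the form \(\mathcal{A}_{max}\approx c_{0}+c_{1}\,Ca+c_{2}\,Re\); the sketch below is our own post-processing suggestion, not part of the published analysis:

```python
import numpy as np

def fit_amax(ca, re, amax):
    """Least-squares fit A_max ~ c0 + c1*Ca + c2*Re over the simulated (Ca, Re) grid.
    ca, re, amax are 1D arrays of identical length; returns (c0, c1, c2)."""
    A = np.column_stack([np.ones_like(ca), ca, re])
    (c0, c1, c2), *_ = np.linalg.lstsq(A, amax, rcond=None)
    return c0, c1, c2

# Usage sketch: flatten the (Ca, Re) grid of simulations, then c1 and c2 are the
# slopes of A_max with respect to Ca and Re discussed in the text.
# c0, c1, c2 = fit_amax(ca_grid.ravel(), re_grid.ravel(), amax_grid.ravel())
```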
Additionally, we present in Fig. 9 the maximum and minimum velocity of the single capsule flowing through the corner. In the non-inertial regime, the maximum velocity of the capsule increases with \(Ca\), as shown in Fig. 9. In inertial conditions we observe that \(V_{max}\) increases for \(Re\) ranging from \(1\) to \(50\). The increase in \(V_{max}\) between \(Re=1\) and \(Re=50\) is significant in Fig. 9, especially for large \(Ca\). For instance, at \(Ca=0.35\), \(V_{max}\) increases by about \(8\%\) between the non-inertial and the highly inertial regimes. We then consider the evolution of the minimum velocity
Figure 5: Temporal evolution of the capsule centroid velocity \(V\) at fixed Capillary numbers.
Figure 6: Temporal evolution of the capsule surface area \(\mathcal{A}\) at fixed Reynolds numbers.
Figure 7: Temporal evolution of the capsule surface area \(\mathcal{A}\) at fixed Capillary numbers.
\(V_{min}\) for a single capsule at various \(Ca\) and \(Re\) in Fig. 9(b). In general, we observe that the minimum velocity decreases with \(Ca\) in both the non-inertial and the inertial regimes for \(Re\leq 25\). In Fig. 9(b), we also observe a non-monotonic behavior of \(V_{min}\) at low inertia and at sufficiently high \(Ca\): for \(Ca\geq 0.15\), \(V_{min}\) first decreases with increasing \(Re\), reaching a minimum for \(Re=12.5\), before increasing sharply at \(Re>12.5\). Overall, we observe from Fig. 9 that the presence of inertia tends to increase both velocity extrema of the capsule, especially at large \(Ca\).
A quantity of practical interest to experimentalists is the maximum stress experienced by the capsule, as it can be used to predict _a priori_ if a given geometry can induce plastic deformation or even breakup of the capsule membrane [5]. More specifically, it is the largest eigenvalue \(\tilde{\sigma}_{2}\) of the stress tensor \(\tilde{\mathbf{\sigma}}\) that can bring insight into the mechanical integrity of the membrane. In Fig. 10, we show the maximum and average values of \(\tilde{\sigma}_{2}\) over the membrane surface as the capsule approaches and flows through the corner at \(Ca=0.35\) and \(Re=1\), \(25\) and \(50\). We observe that \(\tilde{\sigma}_{2,\,avg}\) follows a trend very similar to that of the capsule surface area observed in Fig. 7(d): \(\tilde{\sigma}_{2,\,avg}\) varies smoothly with time, presents a maximum near \(t=1\) and a local minimum near \(t=2.5\), and the value of the maximum deviation from steady state nearly doubles between the low and moderate inertial cases \(Re=1\) and \(Re=50\). We also note that the steady-state value of \(\tilde{\sigma}_{2,\,avg}\) prior to entering the corner is independent of \(Re\), as was observed in the case of the capsule surface area in Fig. 7(d). In particular, we find by comparing Figs. 7(d) and 10 that at \(Ca=0.35\), a non-dimensional surface area \(\mathcal{A}\) of about \(1.14\) leads to an average non-dimensional membrane stress of about \(0.4\). The steady-state value of the maximum stress \(\tilde{\sigma}_{2,\,max}\), however, increases by about \(40\%\) between the low inertial case (\(Re=1\)) and the moderate inertial cases (\(Re=25\), \(50\)). Inside the corner, \(\tilde{\sigma}_{2,\,max}\) increases by nearly \(75\%\) between \(Re=1\) and \(Re=50\), confirming that a capsule in a moderate inertial regime has a higher risk of breakup than in a low inertial regime.
It is worth noting that for all \(Re\), the value of the maximum stress \(\tilde{\sigma}_{2,\,max}\) is about double that of the average stress \(\tilde{\sigma}_{2,\,avg}\): since we showed previously that \(\tilde{\sigma}_{2,\,avg}\) is closely related to the capsule surface area \(-\) a quantity that is relatively easy to measure experimentally \(-\), this observation can be used by experimentalists as a rule of thumb to estimate the maximum stress in the capsule membrane and assess the mechanical integrity of the membrane.
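As a crude illustration of this rule of thumb, one could estimate the maximum membrane stress from a measured surface area as follows; the proportionality between the average stress and the excess surface area is our own simplifying assumption, calibrated on the single data point (\(\mathcal{A}\approx 1.14\), \(\tilde{\sigma}_{2,\,avg}\approx 0.4\)) quoted above:

```python
def estimate_max_stress(area_reduced, slope=0.4 / 0.14, max_to_avg=2.0):
    """Rough, non-dimensional estimate of the maximum principal membrane stress.

    area_reduced : measured surface area divided by the area of the initial sphere
    slope        : assumed ratio of average stress to excess area (hypothetical
                   calibration from the single point A = 1.14 -> sigma_avg = 0.4)
    max_to_avg   : maximum stress taken as roughly twice the average stress
    """
    sigma_avg = slope * (area_reduced - 1.0)
    return max_to_avg * sigma_avg

print(estimate_max_stress(1.14))   # ~ 0.8
```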
Figure 8: Maximum surface area \(\mathcal{A}_{max}\) as a function of \(Re\) and \(Ca\) for a single capsule passing through the corner.
Figure 9: Maximum (minimum) velocity \(V_{max}\) (\(V_{min}\)) as a function of \(Re\) and \(Ca\) for a single capsule passing through the corner.
### Evolution of the capsule shape
We now illustrate the temporal evolution of the capsule travelling through the corner. Figure 11 shows the outline of the capsule in the symmetry plane \(z=0\) for successive discrete times. The capsule outlines are given for \(Ca=0.075\) and \(Ca=0.35\) and \(Re=0.01\), \(25\) and \(50\). Prior to entering the corner, the capsule adopts a steady shape that is determined by the confinement of the walls. In the case of \(Ca=0.35\), we observe the well-known "parachute" shape. Upstream of the corner, the trajectory of the capsule coincides with the centerline of the primary (vertical) channel. As the capsule flows through the corner, the capsule deviates from the channel centerline: in the non-inertial regime, Zhu & Brandt [33] showed that the capsule trajectory closely matches the flow streamlines. We obtain the same conclusion in the inertial regime. When inertia is considered, the capsule trajectory crosses the horizontal centerline of the secondary channel and comes increasingly close to the upper wall as \(Re\) increases, before relaxing to the channel centerline.
Figures 11(a) and 11(b) show clear differences in the effects of \(Ca\) in the Stokes regime. Increasing \(Ca\) from \(0.075\) to \(0.35\) causes the equilibrium shape of the capsule to change from a slightly deformed spheroid to a concave "parachute" shape. For a small \(Ca=0.075\), the equilibrium shapes of the capsule remain similar as \(Re\) increases from \(Re=0.01\) to \(Re=50\) (see Fig. 11(a), 11(c), and 11(e)). However, the deformation of the capsule becomes more evident inside the corner at higher \(Re\), particularly in Fig. 11(e). After passing the corner, the capsule shape returns to its steady spheroid shape observed in the Stokes regime for all values of \(Re\). In the case of a high \(Ca=0.35\), we observe that the equilibrium shape of the capsule is increasingly concave as \(Re\) increases. Inside the corner, the capsule is highly elongated and presents an increasingly long tail for increasing \(Re\)\(-\) e.g. Fig. 11(f) in the case of \(Re=50\). In the highly inertial regime, strong lubrication interactions occur between the capsule and the top wall, resulting in a flat top surface.
In figures 12(a) and 12(b), we present the single-capsule outline with the maximum surface area \(\mathcal{A}_{max}\) and the maximum velocity \(V_{max}\) inside the corner for all the cases investigated in this section. Inside the corner, the maximum surface area of the single capsule is reached when it approaches the upper wall and it is quickly followed by the maximum velocity. From figures 12(a) and 12(b), we observe in particular that a high \(Re\) leads to an elongation of the capsule in the streamwise direction, while a high \(Ca\) increases the concavity of the capsule. Moreover, we note that the centroid of the capsule moves closer to the rim of the outline at high values of \(Ca\): note that the centroid drawn in figures 12(a) and 12(b) corresponds to the centroid of the three-dimensional capsule, not to that of the two-dimensional outline. The results shown in figures 11 and 12 indicate that \(Ca\) has a significant effect on capsule deformation, while \(Re\) has a more pronounced effect on the trajectory of the capsule as well as its deformation resulting from the lubrication layer against the top wall of the corner. In particular, at high \(Re\), the capsule undergoes significant stretching, which may cause damage or even rupture in microfluidic devices. Understanding the effects of \(Re\) on capsule deformation and the resulting damage is crucial in designing efficient and reliable microfluidic devices.
Figure 11: Sequence of capsule outlines for different \(Ca\) and \(Re\). The time between successive frames is \(1.5\).
Figure 12: Outlines of a single capsule passing a corner with (a) maximal surface area \(\mathcal{A}_{max}\) and (b) maximal velocity \(V_{max}\) at various \(Re\) and \(Ca\).
We close this section with a remark on the Stokes regime: at \(Re=0.01\) and large \(Ca\), the capsule surface area is slightly greater than at \(Re=1\), and downstream of the corner it relaxes to a value marginally below the initial spherical surface area of the capsule, indicating a small loss of the internal capsule volume. The cause of these observations may be related to the limitations of the FTM coupled with a sub-optimal choice of numerical parameters in the case of \(Re=0.01\) only. Indeed, the immersed boundary method is known to conserve volume asymptotically rather than to machine precision. In earlier IBM studies involving capsules, the volume loss is always small, typically below \(1\%\)[8; 9; 10; 24]. Moreover, Stokes conditions are known to be challenging for PDE-based incompressible Navier-Stokes solvers, as the matrix inverted in the velocity viscous Poisson problem is less well conditioned at low \(Re\). While it is worth noting that the capsule surface area in the Stokes regime should be interpreted with caution, these limitations only affect the capsule surface area and not the centroid velocity. Moreover, our solver was extensively validated in Stokes conditions in [40] and showed excellent agreement with the BIM as well as other FTM solvers. As such, while further investigation should be conducted in the Stokes regime, it cannot be excluded that at high \(Ca\) the capsule surface area at \(Re=0.01\) is physically slightly greater than that at \(Re=1\). Finally, the main focus of the present work is to investigate the inertial motion and deformation of capsules through a sharp corner, i.e. in conditions where our FTM solver does not suffer from the limitations outlined above.
## V System of two capsules
In this section, we consider two identical capsules flowing through the corner as we vary the normalized interspacing distance \(d=\tilde{d}/2\tilde{a}-1\) between the capsules as well as the Reynolds and Capillary numbers. Lu et al. [10] previously considered the binary interaction of capsules flowing through a T-junction: they showed that when \(d_{0}\geq 1.3\) the trailing capsule has minimal impact on the motion of the leading capsule. In contrast, in their T-junction geometry Lu et al. observed that the motion of the trailing capsule is significantly affected by the presence of the leading capsule. To gain insight into the physical features relevant to capsule interactions through a corner in the inertial and non-inertial regimes, we select small values for the normalized interspacing distance \(d_{0}=1,1/2\), and \(1/4\) and we examine phenomena such as migration, dynamics and deformation of the leading and the trailing capsules.
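The normalized interspacing distance used throughout this section follows directly from the capsule centroid positions; a minimal sketch (an illustration only, with our own function name) is:

```python
import numpy as np

def interspacing(x_lead, x_trail, a_tilde):
    """Normalized gap d = d_tilde/(2*a_tilde) - 1 between two capsules, where d_tilde
    is the distance between their centroids and a_tilde the initial capsule radius."""
    d_tilde = np.linalg.norm(np.asarray(x_lead) - np.asarray(x_trail))
    return d_tilde / (2.0 * a_tilde) - 1.0

print(interspacing([0.0, 0.0, 0.0], [0.0, -2.5, 0.0], 1.0))   # -> 0.25
```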
### Qualitative analysis: trajectory and capsule shape
We first analyze the trajectory and the qualitative shapes of the pair of capsules as they flow through the corner. Figure 13(a) shows the trajectory of the capsules at \(Re=0.01\), \(25\) and \(50\) and \(Ca=0.15\) and \(0.35\). We note that all curves corresponding to the same \(Re\) overlap: \(Ca\) has no impact on the path of either the leading or the trailing capsule. Likewise, we observe no significant difference in the trajectories of the leading and the trailing capsules, unlike the strikingly different paths reported in the case of a T-junction [10]. In fact, the key parameter that controls the capsule trajectory is the Reynolds number. As \(Re\) increases, the inertia drives the capsule closer to the upper channel wall, as observed in Section IV in the case of a single capsule. We then illustrate the capsule shape on the symmetry plane \(z=0\) in Fig. 13(b) for the most deformed capsule configuration corresponding to \(Ca=0.35\) and \(Re=50\) with an initial interspacing distance \(d_{0}=0.25\). We compare the outlines of the leading and the trailing capsules to that of a single capsule in the same conditions. Qualitatively, the deformation of interacting capsules is not significantly
Figure 13: (a) Trajectory of the two capsules at different \(Ca\) and \(Re\). (b) Outlines of the leading and trailing capsules at \(Ca=0.35,Re=50,d_{0}=0.25\), with comparison to a single capsule.
different from that observed in the case of a single capsule. Perhaps more surprisingly, the qualitative outlines of the leading and the trailing capsules are also very similar, almost overlapping, even in the strongly interacting configuration corresponding to \(d_{0}=0.25\). Note that this qualitative shape analysis relies on the outline of the capsule in the plane of symmetry \(z=0\), while the actual three-dimensional shape of the leading and trailing capsules may differ more strongly.
### Quantitative analysis: velocity and membrane surface area
We now compare the temporal evolution of the velocity of the centroids of the capsules as well as the time evolution of their surface areas, as plotted in Fig. 14. To simplify the identification of interaction features, we first focus on the most deformed configuration corresponding to \(Ca=0.35\), \(Re=50\) and \(d_{0}=0.25\). For reference, we also plot the evolution of a single capsule under the same conditions in red. Throughout the remainder of this study, and unless otherwise stated, the velocity of interacting capsules is normalized by the equilibrium velocity \(V_{eq}\) of a single capsule for the same Capillary and Reynolds numbers. This normalization choice allows for an unbiased comparison between the velocities of the leading and the trailing capsules. In this section we also denote the reduced velocity of the single capsule by \(V_{s}\), that of the leading capsule by \(V_{l}\) and that of the trailing capsule by \(V_{t}\). Similarly, we denote by \(\mathcal{A}_{s}\), \(\mathcal{A}_{l}\), \(\mathcal{A}_{t}\) the normalized surface areas of respectively the single, leading and trailing capsules.
In Fig. 14(a), we observe that the velocity of the leading capsule is affected by the presence of the trailing capsule before it reaches the corner, as it is about 1% higher than that of a single capsule. However, the extrema of \(V_{l}\) as it flows through the corner closely match those of \(V_{s}\). After the corner, \(V_{l}\) is about 2% larger than \(V_{s}\) but slowly relaxes back to \(V_{s}\) further downstream. With regards to the trailing capsule, we note that its velocity is more markedly affected by the presence of the leading capsule. Prior to reaching the corner, \(V_{t}\) is about 1% lower than \(V_{s}\), but inside the corner its minimum value is 4% lower than \(V_{s}\). However, the maximum of \(V_{t}\) is identical to that of both \(V_{l}\) and \(V_{s}\). Downstream of the corner, \(V_{t}\) quickly relaxes back to \(V_{s}\) and maintains a similar value thereafter, eventually converging to \(V_{eq}\). The time evolution of the surface areas of the pair of capsules is shown in Fig. 14(b). The normalized surface area of the leading capsule \(\mathcal{A}_{l}\) is clearly influenced by the presence of the trailing capsule, as was observed above in the case of its velocity. The steady and maximum surface areas of the leading capsule are about 2% lower than those of the single capsule. In contrast, the steady surface area of the trailing capsule closely matches that of the single capsule upstream and downstream of the corner, while its maximum value is about 1% higher than that of the single capsule. We postulate that the small interspacing distance between the two capsules disturbs the wake behind the leading capsule, which tends to mitigate its deformation and therefore decreases its surface area. Conversely, as the wake of the trailing capsule is unaffected, the discrepancies between its surface area and that of the single capsule are less pronounced.
We then present the time evolution of the velocity and surface area of the leading and the trailing capsules at various \(Ca\), \(Re\) and \(d_{0}\). We first focus on the velocity of the capsules, displayed in Fig. 15 for \(Ca=0.15\) and \(0.35\) and for \(d_{0}=0.5\) and \(1\). The velocity of both capsules displays a minimum at \(t=0\) and a maximum at \(t\approx 2\) at \(Ca=0.15\) and \(Ca=0.35\). The extrema of the velocity are more pronounced as \(Ca\) increases. The effects of the initial interspacing distance \(d_{0}\) on these extrema are less evident but still present: the velocity maxima of both the leading and the trailing capsules are increased by about 1% as \(d_{0}\) is halved from 1 to 0.5. Interestingly, the relaxation time
Figure 14: Temporal evolution of the velocity \(V\) and the surface area \(\mathcal{A}\) of capsules at \(Ca=0.35,Re=50\) with \(d_{0}=0.25\): a comparison of the leading, trailing and a single capsule.
of \(V_{t}\) to \(V_{eq}\) is significantly reduced when compared to that of \(V_{l}\): about 3 time units in the case of \(V_{t}\) with respect to more than 10 time units in the case of \(V_{l}\). Capsule velocities in the inertial regimes at \(Re=25\) and 50 and at \(Ca=0.15\) and \(Ca=0.35\) are plotted in Fig. 15b and Fig. 15c for \(d_{0}=1\) and \(0.25\), respectively. The results are similar to those of the non-inertial regime: \(Ca\) enhances the velocity deviations and the extrema are more pronounced in the case of the trailing capsule. Surprisingly, we note that Fig. 15b and Fig. 15c display very similar behaviors: therefore, the interspacing distance does not seem to impact the capsule velocities inside the corner: its effects are limited to the capsule velocities upstream and downstream from the corner. We will come back to this observation in Section V.3.
When analyzing the capsule surface areas for varying \(Re\), \(Ca\) and \(d_{0}\), a similar behavior is found: the surface area of the trailing capsule is consistently greater than that of the leading capsule, and increasing Capillary and Reynolds numbers and decreasing the initial interspacing distance enhance this phenomenon. In particular we report in Table 1 the maximum surface areas of the leading capsule and in Table 2 those of the trailing capsule. As can be seen from Table 1 and Table 2, the maximum surface area of the trailing capsule exceeds that of the leading capsule by up to about 3%. The full time-dependent data is provided in Appendix A.
### Time evolution of the interspacing distance
We now analyze the time evolution of the interspacing distance between the two confined capsules considered in this section. Figure 16 shows the time-dependent interspacing distance for \(Ca=0.15\) and \(0.35\), \(Re=25\) and \(50\) and \(d_{0}=1\), \(0.5\) and \(0.25\). In this figure, we note that in all cases, the interspacing distance decreases immediately after the trailing capsule is released. This is due to the fact that upon release, the trailing capsule is spherical and therefore located farther away from the channel walls than is the leading capsule, resulting in its initial acceleration before a steady shape is found \(-\) typically within less than five time units. In the case where \(d_{0}=1\), the interspacing distance \(d\) is steady until the leading capsule approaches the corner, reaches a minimum and then a maximum value inside the corner and becomes steady again as the trailing capsule leaves the corner region. Interestingly, the steady
Figure 15: Temporal evolution of \(V\) of the leading and trailing capsules at different \(Ca\), \(Re\) and \(d_{0}\).
interspacing distance after the corner is up to 10% greater than its steady value prior to the corner, suggesting that the corner separates the two capsules. Moreover, the initial interspacing distance is greater in the case \(Re=25\) than in the case \(Re=50\): this is only an artifact of our release mechanism. Indeed, the steady "parachute" shape of the capsule is deployed faster at \(Re=50\) than at \(Re=25\), leading to a shorter initial acceleration phase of the trailing capsule towards the leading capsule at \(Re=50\) than at \(Re=25\). When \(d_{0}=0.5\) and \(d_{0}=0.25\), we observe that the interspacing distance steadily increases until the capsules reach the corner region where it displays the same behavior as in the case of \(d_{0}=1\), and continues to increase downstream of the corner. While a steady value of \(d\) is not clearly reached within the considered time range, we can extrapolate the trend and conclude that the interspacing distance seems to saturate to values ranging from \(0.6\) to \(0.8\) depending on \(Re\), \(Ca\) and \(d_{0}\). Therefore, the pair of confined capsules we consider exhibit a minimum stable interspacing distance \(d_{min}\). Moreover, we note that the slope of \(d\) is greater in the case of lower initial interspacing distances, suggesting that the relative velocity of the capsules is a function of their interspacing distance. To investigate further this behavior, we show in Fig. 17 the velocity of the two capsules at \(Ca=0.35\), \(Re=50\) and \(d_{0}\) ranging from \(0.25\) to \(1\). We observe that the velocity of the trailing capsule is lower than that of the leading capsule prior to entering and downstream of the corner, and that the velocity difference increases with decreasing interspacing distance. This velocity difference confirms the above observations in terms of interspacing distance, in particular that a lower interspacing distance results in a greater relative velocity between the two capsules, i.e. an enhanced repulsive behavior. Moreover, we note in Fig. 17 that the difference in velocity minima between the leading and the trailing capsules is always greater than the difference between their velocity maxima. As a result, the residence time of the trailing capsule inside the corner region is always greater than that of the leading capsule, and the corner tends to separate the pair of capsules. The present analysis of the binary interaction of capsules through a corner reveals that the two considered capsules do interact in this geometry, affecting their motion and deformation. In particular, the trailing capsule tends to be more deformed than the leading capsule, and the corner tends to separate the pair of capsules. A natural question that arises is that of the accumulation of such effects if more than two capsules are considered.
## VI Train of ten capsules
In this last section, we investigate the behavior of a train of ten capsules flowing through the corner. We insert each capsule using the same procedure employed in the previous section: a new, initially spherical capsule appears at a distance \(D_{c}=30\) radii from the corner as soon as the preceding capsule has advanced by a distance \(\tilde{d}=2\tilde{a}(1+d_{0})\). The capsules are removed from the computational domain when they are less than one initial diameter away from the outflow boundary.
Table 1: Maximum surface area \(\mathcal{A}_{max}\) of the leading capsule at different \(Ca\), \(Re\) and \(d_{0}\).

| \(d_{0}\) | \(Ca\) | \(Re=0.01\) | \(Re=25\) | \(Re=50\) |
| --- | --- | --- | --- | --- |
| 1 | 0.15 | 1.065 | 1.138 | 1.193 |
| 1 | 0.35 | 1.263 | 1.334 | 1.399 |
| 0.5 | 0.15 | 1.065 | 1.135 | 1.186 |
| 0.5 | 0.35 | 1.247 | 1.323 | 1.383 |
| 0.25 | 0.15 | 1.068 | 1.129 | 1.180 |
| 0.25 | 0.35 | 1.236 | 1.308 | 1.379 |

Table 2: Maximum surface area \(\mathcal{A}_{max}\) of the trailing capsule at different \(Ca\), \(Re\) and \(d_{0}\).

| \(d_{0}\) | \(Ca\) | \(Re=0.01\) | \(Re=25\) | \(Re=50\) |
| --- | --- | --- | --- | --- |
| 1 | 0.15 | 1.068 | 1.143 | 1.201 |
| 1 | 0.35 | 1.271 | 1.342 | 1.417 |
| 0.5 | 0.15 | 1.070 | 1.144 | 1.204 |
| 0.5 | 0.35 | 1.277 | 1.345 | 1.414 |
| 0.25 | 0.15 | 1.069 | 1.148 | 1.204 |
| 0.25 | 0.35 | 1.277 | 1.344 | 1.41 |
Our goal is to determine if the findings of the previous binary capsule analysis accumulate when more than two capsules are considered, especially with regards to the increased surface area of the capsules and the separating effect reported in Section V. As such, we plot in Fig. 18 the normalized surface area and velocity of each capsule of the train at \(Re=50\), \(d_{0}=0.125\) and \(Ca\) ranging from \(0.15\) to \(0.35\). The same figure obtained in the case of \(d_{0}=1\) is provided in Appendix B. In Fig. 18, the darkness of the color corresponds to the position of the capsule in the train: darker means increasing capsule number, i.e. further downstream along the capsule train. As mentioned in Section V, the initial peaks in the surface area and velocity of the capsule are insertion artifacts and do not contribute to the physics that is the focus of this section. We observe in Fig. 18 that the behavior of the last capsule is significantly different from that of the rest of the train. In Section V we hypothesized that the difference in surface areas of the leading and the trailing capsules is due to the fact that the wake of the leading capsule is significantly affected by the presence of the trailing capsule. The present observation in Fig. 18 corroborates this statement: all of the capsules in the train see their wake affected by a trailing capsule, except in the case of the last capsule. As a result, its deformation is greater and extends closer to the channel walls, thus decreasing its velocity. We also remark in Fig. 18 that this effect is enhanced with increasing \(Ca\). While noteworthy in the case of a pair of capsules, this effect is less pertinent to the study of a train of capsules, as only the core of the capsule train is relevant to typical microfluidic applications. As such, in the remainder of this section our analysis is focused on the first nine capsules of the train.
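The insertion and removal rules described above can be summarized by a small bookkeeping routine; the sketch below uses hypothetical helper and attribute names and simplifies the geometry to a removal plane normal to the outlet channel, so it should be read as an illustration rather than as the actual implementation:

```python
import numpy as np

def update_train(capsules, make_capsule, release_pos, x_outlet, a, d0, n_max=10):
    """Insert a new, initially spherical capsule at the release position once the
    previously inserted capsule has advanced by 2*a*(1 + d0), and remove capsules
    that come within one initial diameter of the outflow plane x = x_outlet."""
    if capsules and len(capsules) < n_max:
        travelled = np.linalg.norm(capsules[-1].centroid - release_pos)
        if travelled >= 2.0 * a * (1.0 + d0):
            capsules.append(make_capsule(release_pos))
    # drop capsules that are less than one initial diameter from the outflow boundary
    capsules[:] = [c for c in capsules if (x_outlet - c.centroid[0]) >= 2.0 * a]
    return capsules
```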
As expected, a steady state is reached in the straight channel prior to the corner for each capsule and for all \(Ca\). While the steady surface area remains constant with increasing capsule number, i.e. as we move further downstream in the train of capsules, we observe that the velocity of the capsules decreases. In particular the difference between the steady velocity of the first and ninth capsules increases with increasing \(Ca\). As the capsules enter the corner region, they display the familiar pattern previously described in Section IV and Section V, before relaxing to steady values. The shape of the deviation pattern is strikingly similar across different capsules of the train, regarding both the velocity and the surface area of the capsules, except that they are shifted in time and magnitude. More precisely,
Figure 16: Temporal evolution of \(d\) for different initial interspacing distance \(d_{0}\) and Reynolds number \(Re\).
Figure 17: Effects of the initial interspacing distance \(d_{0}\) on the evolution of the capsules velocities \(V\) at \(Ca=0.35,Re=50\).
the surface area curves are shifted upwards with increasing capsule number while the velocity curves are shifted downwards with increasing capsule number. As a result, the maximum surface area of the capsule increases and the velocity extrema decrease with increasing capsule number. This behavior is more pronounced as \(Ca\) increases. Additionally, we compare in Fig. 19 the normalized interspacing distance \(d\) between each pair of capsules in the train. In Fig. 19, each curve is shifted in time such that \(t=0\) corresponds to \(d_{min}\) inside the corner. For all \(Ca\), we observe that the interspacing distance \(d(1,2)\) between the first and the second capsules increases to a steady value close to \(0.5\), and that the corner has marginal effects on the downstream evolution of \(d(1,2)\): this behavior is identical to the case of two capsules studied in the previous section. However, as we move downstream in the train of capsules, \(d\) increases more and more slowly prior to the corner until it remains constant for capsule numbers greater than \(7\), at a steady value \(d\approx 0.7\) that decreases only marginally with increasing \(Ca\). After the transient regime due to the corner, \(d(i,i+1)\) for capsule numbers \(i\) greater than \(7\) reaches a steady state that is slightly higher than prior to entering the corner. In other words, the corner tends to increase the interspacing distance, and therefore exhibits a separating effect. This separating effect is observed regardless of the initial interspacing distance \(d_{0}\), as was the case in the previous section when only two capsules were considered.
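The pairwise gap histories discussed here are obtained by applying the definition of Section V to consecutive capsules of the train and shifting each curve in time so that \(t=0\) at the minimum gap inside the corner; a possible post-processing sketch (our own helper names) is:

```python
import numpy as np

def pairwise_gaps(t, centroids, a):
    """Normalized gaps d(i, i+1) between consecutive capsules of a train.

    t         : (n_t,) array of times
    centroids : (n_t, n_capsules, 3) centroid positions (NaN before a capsule is inserted)
    a         : initial capsule radius

    Returns a dict mapping (i, i+1) (capsules numbered from 1) to (shifted time, gap),
    with the time origin placed at the minimum gap of each pair.
    """
    gaps = {}
    for i in range(centroids.shape[1] - 1):
        sep = np.linalg.norm(centroids[:, i, :] - centroids[:, i + 1, :], axis=1)
        d = sep / (2.0 * a) - 1.0
        gaps[(i + 1, i + 2)] = (t - t[np.nanargmin(d)], d)
    return gaps
```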
Figure 18: Time evolution of the reduced surface areas and velocities of ten capsules at \(Re=50\) and \(d_{0}=1/8\) for \(Ca=0.15\), \(0.25\) and \(0.35\).
Finally, in order to further investigate the influence of the capsule number on the capsule dynamics, we plot in figures 20-21 the maximum surface area as well as the maximum and minimum velocities of each capsule of the train for varying Capillary numbers and interspacing distances. The difference in minimum velocity (respectively, maximum velocity) between the first and the ninth capsule is about \(15\%\) (respectively, about \(7\%\)) at \(Ca=0.35\) while it is about \(11\%\) (respectively, \(2\%\)) at \(Ca=0.15\). Similarly, the difference in maximum surface area between the first and the ninth capsule is about \(4\%\) at \(Ca=0.35\) and less than \(1\%\) at \(Ca=0.15\). These results correspond to \(d_{0}=0.125\), while in the case of \(d_{0}=1\) only deviations lower than \(1\%\) are observed in the extrema of the capsule surface area and velocity (except in the case of \(Ca=0.35\) for which velocity deviations of \(2\%\) are observed). The very small deviations observed in the case \(d_{0}=1\) indicate that for this interspacing distance the capsules interact very weakly. As such, there exists a critical interspacing distance \(d_{c}\) below which capsule interactions are observed, with \(0.125<d_{c}<1\).
The fact that \(d_{c}\) is less than \(1\) can be surprising, as a normalized interspacing distance of \(d_{0}=1\) would typically be
Figure 20: \(\mathcal{A}_{max}\) as a function of the capsule number.
Figure 19: Temporal evolution of \(d\) for a train of \(10\) capsules at \(Re=50\) and \(d_{0}=0.125\) for (a) \(Ca=0.15\), (b) \(Ca=0.25\) and (c) \(Ca=0.35\).
classified as a strongly interacting regime in other geometries, e.g. in the T-junction investigated by Lu et al. [10]. The main reason for the low interaction we observed is likely the short residence time of the capsules in the corner region. Indeed, Lu et al. showed that the residence time is a determining factor in the path selected by the capsules in a T-junction geometry. Another reason for such a low critical interspacing distance is related to the very confined configuration we study: the capsule shape and behavior are primarily set by the presence of the walls, while the small disturbances of the flow field due to the other capsules only marginally contribute to each capsule's dynamics. Future studies could explore the dynamics of a train of capsules in a wider channel, i.e. in a less confined configuration, where each capsule could be more influenced by the wake disturbances of its preceding neighbor.
## VII Conclusion
In the present work, the inertial and non-inertial dynamics of three-dimensional elastic capsules flowing through a sharp corner are investigated. The capsule trajectory, surface area, velocity and membrane stress are analyzed in the cases of one, two and a train of ten capsules released upstream of the corner. The channel Reynolds number ranges from 0.01 to 50, the Capillary number representing the ratio of viscous stresses over elastic stresses ranges from 0.075 to 0.35 and the initial normalized interspacing distance between two capsules is varied from 1 to 0.125. The goal of this study is to help provide practical guidelines in order to anticipate capsule breakup and estimate throughput in inertial microchannels.
The case of a single capsule with no inertia was previously studied by Zhu & Brandt [33], who reported that the capsule follows the flow streamlines closely regardless of the Capillary number. In inertial flows, we found that this statement is still valid for all considered Reynolds and Capillary numbers. As the streamlines of the inertial flow cross the centerline of the secondary channel \(-\) the horizontal channel downstream of the corner \(-\), the capsule position is increasingly close to the top wall for increasing Reynolds number, especially in the case of large Capillary numbers. However, no collision between the capsule and the wall of the secondary channel was observed, thanks to strong lubrication forces. In their study, Zhu & Brandt also analyzed the velocity of the capsule centroid and the surface area of the capsule membrane: they found that the capsule velocity decreases in the corner and increases immediately after the corner, with an overshoot increasing with membrane deformability. The surface area of the capsule was also found to reach a maximum slightly shifted in time with respect to the minimum of velocity. In the inertial regime, we observed that this behavior is enhanced as the Reynolds number increases. However, our results at \(Re=1\) do not differ significantly from results obtained in the non-inertial regime, which corroborates the same observation made by Wang et al. [8; 9]. Moreover, at sufficiently high inertia, capsule surface areas below the equilibrium surface area are observed as the capsule relaxes to its steady state. In other words, immediately after the corner the capsule oscillates around its steady shape. This phenomenon is enhanced as the Capillary number increases. Additionally, we reported that the relationship between the maximum surface area \(\mathcal{A}_{max}\) of the capsule and the Reynolds number is linear as long as the Capillary number is kept below 0.35. At \(Ca=0.35\), the relationship between \(\mathcal{A}_{max}\) and \(Re\) is not perfectly linear and the curve \(\mathcal{A}_{max}(Re)\) is slightly concave. Moreover, from \(Re=1\) to \(Re=50\), the maximum surface area increases nearly linearly over the full range of \(Ca\). At \(Ca=0.35\), we compared the membrane stress to the capsule surface area and found that (i) the time evolution of the average stress presents a strong correlation to that of the membrane surface area, and (ii) in our configuration, the value of the maximum stress is double that of the average
Figure 21: (a) \(V_{max}\) and (b) \(V_{min}\) as a function of the capsule number.
stress. As a result, observing the capsule surface area experimentally can provide reliable insight into the average stress as well as an estimate of the capsule maximum stress. This finding is of primary importance in the design of microfluidic devices where capsule breakup is to be avoided, as well as in the development of targeted drug delivery methods for which a controlled capsule breakup is sought.
We then investigated the interaction of several capsules in the corner geometry. First, two capsules are considered with varying initial interspacing distances. Similar to the case of a single capsule, neither the trajectory of the leading nor of the trailing capsule is observed to significantly deviate from the flow streamlines. In the range of initial interspacing distances considered, the velocity of the trailing capsule is found to be generally lower than that of the leading capsule as well as that of a single capsule at the same Reynolds and Capillary numbers. Similarly, the velocity of the leading capsule is greater than that of a single capsule in the same conditions. This velocity difference is also visible in the time evolution of the interspacing distance \(d\) between the pair of capsules. In particular, we found that capsules initially located at \(d_{0}\leq 0.5\) tend to separate. This suggests that there exists a minimum stable gap \(d_{min}>0.5\) between two confined capsules. A systematic analysis of this effect is left for future studies. In contrast, inside the corner the surface area of the trailing capsule is found to be larger than that of the leading capsule and of the single capsule in the same conditions. However, in the configuration we consider where confinement is strong, the magnitude of these effects is small even for capsules located very close to each other: the velocity of the leading and trailing capsules only deviates by a few percent from that of a single capsule. Next, we examined the case of a train of capsules and sought to determine whether the effects observed with a pair of capsules accumulate. While no interaction occurs for a large initial interspacing distance \(d_{0}=1\), we found that in the case \(d_{0}=1/8\), the steady and extremum surface areas of the trailing capsules increase by up to 5% and eventually saturate at the tail of the train, around the ninth capsule. In all cases, the corner is found to separate the pair of capsules as well as the capsule train, which can be further evidenced from the analysis of the time evolution of the capsule velocity inside the corner region.
We believe that the present work is a step forward towards providing practical guidelines to avoid capsule breakup in inertial and non-inertial microfluidic experiments. Future work could study capsule membranes exhibiting a strain-hardening elastic behavior, e.g. as described by the Skalak law [45], as well as vary the confinement ratio \(\beta=2\tilde{a}/\tilde{W}\) in order to consider high-throughput microfluidic devices. In the case of lower confinement ratios in particular, we expect to see stronger capsule interactions along with cross-stream capsule migration inside and downstream of the corner. Finally, the present work could also be useful to develop membrane characterization techniques, where viscoelastic membrane properties could be inferred from the time-dependent evolution of a capsule of interest through a corner.
## Appendix A Time evolution of capsule surface areas in the case of two interacting capsules
Figure 22 shows the evolution of the surface area of the leading and the trailing capsules in the non-inertial regime (Fig. 22a), as well as at \(Re=25\) and \(Re=50\) where the initial interspacing \(d_{0}\) is 1 (Fig. 22b) and 0.25 (Fig. 22c).
## Appendix B Train of capsules at large initial interspacings
We provide in Fig. 23 the time evolution of the surface area and velocity of each capsule in a train of 10 capsules flowing through a corner at \(Re=50\), \(Ca=0.35\) and a reduced initial interspacing distance between each capsule \(d_{0}=0.125\). As can be noted in this figure, the capsules in this regime do not interact as the surface area and velocity evolution of each capsule is almost identical.
|
2307.05328 | ProgGP: From GuitarPro Tablature Neural Generation To Progressive Metal
Production | Recent work in the field of symbolic music generation has shown value in
using a tokenization based on the GuitarPro format, a symbolic representation
supporting guitar expressive attributes, as an input and output representation.
We extend this work by fine-tuning a pre-trained Transformer model on ProgGP, a
custom dataset of 173 progressive metal songs, for the purposes of creating
compositions from that genre through a human-AI partnership. Our model is able
to generate multiple guitar, bass guitar, drums, piano and orchestral parts. We
examine the validity of the generated music using a mixed methods approach by
combining quantitative analyses following a computational musicology paradigm
and qualitative analyses following a practice-based research paradigm. Finally,
we demonstrate the value of the model by using it as a tool to create a
progressive metal song, fully produced and mixed by a human metal producer
based on AI-generated music. | Jackson Loth, Pedro Sarmento, CJ Carr, Zack Zukowski, Mathieu Barthet | 2023-07-11T15:19:47Z | http://arxiv.org/abs/2307.05328v1 | # ProgGP: From GuitarPro Tablature Neural Generation
###### Abstract
Recent work in the field of symbolic music generation has shown value in using a tokenization based on the GuitarPro format, a symbolic representation supporting guitar expressive attributes, as an input and output representation. We extend this work by fine-tuning a pre-trained Transformer model on ProgGP, a custom dataset of 173 progressive metal songs, for the purposes of creating compositions from that genre through a human-AI partnership. Our model is able to generate multiple guitar, bass guitar, drums, piano and orchestral parts. We examine the validity of the generated music using a mixed methods approach by combining quantitative analyses following a computational musicology paradigm and qualitative analyses following a practice-based research paradigm. Finally, we demonstrate the value of the model by using it as a tool to create a progressive metal song, fully produced and mixed by a human metal producer based on AI-generated music.
Keywords: Controllable Music Generation, Transformers, Interactive Music AI, Guitar Tablatures, Human-AI Interaction, Practice-Based Research
## 1 Introduction
With advancements in computing power, new approaches to music generation have emerged. In recent years, deep learning has become a popular approach for automatic music generation, with research focusing on both the audio domain and the symbolic domain. This work extends previous work by Sarmento et al. [18] using a symbolic music generation model trained on DadaGP, a symbolic music dataset consisting of 26k songs of various genres [17]. We follow here a practice-based research approach where a human expert music producer and music AI researchers collaborate to produce music based on machine-generated outputs. We fine-tuned the DadaGP-based model with a custom dataset of 173 progressive metal songs, which we refer to in this paper as
ProgGP, with the intent of using the model to generate songs, which can be recorded and turned into a fully produced progressive metal song. The model used in this work generates music in the GuitarPro format, rather than formats such as MIDI, MusicXML and ABC seen in other symbolic music generation works [7]. For guitar parts, GuitarPro not only encodes the pitch of each note, but also the location on a guitar fretboard where the note is meant to be played, as well as various expressive techniques (e.g. _vibrato_ and _string bending_). We suggest that for certain musical genres, this format is very advantageous for a practice-based approach, as it provides much more information to an artist on how to perform the music that is generated, while still leaving room for creative interpretation. This paper presents the work that went into creating a brand new progressive metal song using neurally generated riffs and ideas that are relevant to the progressive metal genre. As per its main contributions, we highlight: (1) ProgGP, a manually curated progressive metal GuitarPro dataset made available to the community for research purposes; (2) a fine-tuned guitar tablature generative model for the creation of progressive metal tablatures; (3) heuristics for assessing whether generated music holds traits of the desired genre; (4) a practice-based research approach relying on a human-AI partnership where neurally-generated music is selected, edited, and integrated into a composition by a human producer. We also critically examine how to use neurally-generated music to foster creativity, inspire new ideas and improve the writing workflow of artists. We hope that this work will spur more research into human-AI interaction in the musical domain.
## 2 Background
### Symbolic Music Generation Using Deep Learning
Recent advances in deep learning have led to promising results in the field of music generation [16], with techniques such as Variational Autoencoders (VAEs) [21], Generative Adversarial Networks (GANs) [8], Recurrent Neural Networks (RNNs) [13][20], and Transformers [10] being increasingly used. The Transformer model [22] has enabled steep improvements in natural language processing (NLP) tasks and has been adapted for generating symbolic piano music in Huang et al.'s Music Transformer [10]. Other notable works, such as Musenet [14] and Pop Music Transformer [11], have further built on this approach to generate multi-instrument music and improve the generated music's rhythmic structure. However, the task of guitar tablature music generation had received limited research attention until the recent release of the DadaGP [17] dataset, comprising songs in both the GuitarPro format, a tablature editing software format, and a dedicated textual token format. An initial example of guitar tablature generation work is Chen et al.'s fingerstyle guitar generator [5], albeit not based on the GuitarPro format. More recent works that explore the DadaGP dataset include GTR-CTRL [18], proposing a method for guitar tablature generation with control over instrumentation and musical genre, as well as LooperGP [1], which enables generating loopable music excerpts with applications for live coding performance.
### Practice-Based Research and Computer Music
Many works deal with the notion of 'practice' in research. Practice-based research is generally concerned with the knowledge gained through practice and the outcomes of that practice, while practice-led research leads to new understandings about practice itself [4]. Benford et al. describe this kind of research as consisting of three interconnected activities which inform each other in different ways: _practice_, _theory_ and _studies_[3]. However, they note challenges in conducting this research, such as balancing potentially different researcher and artist goals, as well as ethical concerns that can arise through artistic use of new technologies. Artistic uses of new technologies involving AI can be challenging due to the difficulty of prototyping new AI systems and the number of ways that AI can respond to users in different contexts [23]. Amershi et al. [2] provide guidelines on dealing with such unpredictable AI systems, mostly focusing on keeping the user informed on the system's capabilities and understanding its outputs. AI systems have seen use in musical practice-based research [12][19], with the _Folk-RNN_ model by Sturm et al. noted for a number of impacts on musical creation, such as inspiring ideas, breaking habits, and giving a sense of creating something that could not have been created otherwise.
## 3 Practice-Based Research Methodology
### Human-AI Partnership
In this work, the first author, a music AI researcher and progressive metal producer, adopted the practice-based research approach described below:
1. Use a deep learning model to generate music in the style of the producer's preferred genre, progressive metal;
2. Evaluate the outputs of the model using a mixed method evaluation approach, combining objective metrics with subjective evaluation;
3. Craft a song using generated outputs based on outcomes from the evaluation;
4. Learn and record the song;
5. Analyse and reflect on the overall music production process.
The work aims to better understand the successes and issues of the deep learning model in order to help the research community use and improve the model. We also publicly release the dataset used to fine-tune the deep learning model to support similar kinds of research. Finally, we develop a music production process which can be used to efficiently integrate neurally-generated content within a human composition. The artistic content that was recorded can be listened to online and could lead to public performances.
For the neural music generation, we use a model pre-trained on DadaGP [17], a dataset consisting of over 26k songs of various genres. The model is trained to produce songs in a tokenized symbolic format, which can be converted to the more commonly used GuitarPro format. This model is further fine-tuned on ProgGP, a curated dataset of progressive metal songs. This fine-tuned model can then be used to generate new songs in the style of progressive metal. For clarification, we do not assess timbre
quality aspects of progressive metal since we are working in the symbolic domain, despite timbre playing an important role in the genre (e.g. heavily distorted guitars, loud and punchy snare and kick drums, etc). However, we do take into account timbre identity through a distinction between distorted and clean guitars in our model.
### Fine-Tuning Dataset
ProgGP, the fine-tuning dataset used in our experiments, consists of 173 songs largely from the progressive metal genre3. The songs were obtained using Songsterr4, a website that hosts GuitarPro files and allows playback using a web-based GuitarPro player. The tablatures (tabs) obtained from this website were not official tabs created by the artists of the songs, but rather created and maintained by the online community. Due to this, there is no guarantee that the tabs used in the dataset are perfectly accurate to the songs they are based on. However, each was verified to at least mostly capture the spirit of the original performance during the construction of the dataset. We limited the dataset to only songs in which the bass guitar and drums have also been transcribed, since the pre-trained model was trained on fully transcribed songs. This, however, limited the scope of the dataset, as many songs were only available with guitar transcriptions, rather than the full band. Additionally, the model only supports a few common guitar tunings, and only 6 and 7 string guitars. Many bands in this genre use more unique guitar tunings and/or 8 string guitars, so some artists that might be important in the genre of progressive metal may have limited songs or be absent entirely from the dataset. All this led to some artists dominating the dataset more than others. A word cloud representation of the artists used in the ProgGP dataset can be seen in Figure 1. We make ProgGP5 available upon request, together with a list of songs per artist.
Footnote 3: Some songs included in the dataset are from adjacent genres (e.g. technical death metal).
Footnote 4: [https://www.songsterr.com/](https://www.songsterr.com/)
Footnote 5: [https://github.com/otnemrasordep/ProgGP](https://github.com/otnemrasordep/ProgGP)
Figure 1: Word cloud representation of ProgGP’s songs per artist distribution.
### Model Fine-Tuning
The pre-trained model is based on the Transformer-XL [6] architecture, a modified version of the original Transformer [22] that is more capable of learning longer-term dependencies. The pre-trained model used in our experiments was trained for 200 epochs on the DadaGP [17] dataset. We trained the model on the fine-tuning dataset for an additional 65 epochs, at which point the loss dropped low enough to trigger early stopping. Checkpoints were saved every five epochs of training, resulting in 13 models at various stages of fine-tuning.
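As a rough sketch of this schedule (the helper callables, patience value, and tolerance below are our own illustrative choices, not the exact training script used in this work):

```
def fine_tune(train_one_epoch, save_checkpoint, max_epochs=65, patience=3, tol=1e-4):
    """Fine-tune with a checkpoint every five epochs and loss-based early stopping."""
    best_loss, bad_epochs = float("inf"), 0
    for epoch in range(1, max_epochs + 1):
        loss = train_one_epoch()            # assumed to return the average training loss
        if epoch % 5 == 0:
            save_checkpoint(epoch)          # 13 checkpoints over 65 epochs
        if loss < best_loss - tol:
            best_loss, bad_epochs = loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:      # stop once the loss stops improving
                break
```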
### Neural Generation
A new song can be generated by feeding the model a prompt (set of instructions) in the form of a tokenized GuitarPro file. This will be the starting point of the generation, and the model will attempt to continue the song after the prompt. The tempo (in BPM) used for the generated song is taken from the prompt and the number of tokens to be generated is used as a parameter during inference. In DadaGP token format, a token can be a single note, rest, or expressive technique. Prompts used in the generation experiments included a single note, a few measures from songs in the training set, and a few measures of songs not in the training set. The number of generated songs and the model from which to generate the songs can also be specified. Empirical analysis of the generated songs has allowed us to identify common structural patterns in generated songs, which we refer to as 'sections', typically consisting of a _riff_ that is repeated one or more times with slight variations. The songs will typically start by repeating the notes from the prompt, with minor changes. The model will then generate two or three sections afterward, each somewhat changing the feel of the song. While progressive metal songs can contain a large number of different riffs, they tend to build on one another and use references to musical motifs found throughout the song and throughout other songs by the same artist. Between The Buried And Me, a band with a large presence in ProgGP, is particularly well known for this [9]. This is a difficult thing to capture within a model however, as while the different sections seem to fit together naturally, they do not necessarily reference one another. Together with this submission, we release all the generated compositions from the undertaken experiments, cherry-picking some examples 6.
Footnote 6: Available at: [https://drive.google.com/drive/folders/1xaejTcUrPncE4hoyONhSzgS0a5TRo6G_?usp=share_link](https://drive.google.com/drive/folders/1xaejTcUrPncE4hoyONhSzgS0a5TRo6G_?usp=share_link)
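As a schematic illustration of this prompting procedure (the function and variable names below are illustrative and do not correspond to the actual DadaGP or Transformer-XL API), the prompt tokens are simply extended one token at a time by sampling from the model until the requested number of tokens is reached:

```
import torch

def generate_continuation(model, prompt_tokens, n_tokens, temperature=1.0):
    """Extend a tokenized GuitarPro prompt by autoregressive sampling (illustrative sketch)."""
    tokens = list(prompt_tokens)                              # e.g. tempo, note, rest and effect tokens
    while len(tokens) < n_tokens:
        logits = model(torch.tensor(tokens)[None])[0, -1]     # next-token logits (assumed output shape)
        probs = torch.softmax(logits / temperature, dim=-1)
        tokens.append(int(torch.multinomial(probs, 1)))       # sample the next token
    return tokens  # afterwards, decode the token sequence back into a GuitarPro file
```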
## 4 Analysing AI-Generated Music
We used a mixed method approach to better understand the outputs of the fine-tuned models, their strengths and weaknesses, and to help the producer select a model for further music production use. This was done by analysing the generated music from each model objectively through the use of common symbolic music metrics, as well as listening through many generated examples and analysing them subjectively in the context of the author's own knowledge of progressive metal.
### Objective Metrics
Given the difficulties in assessing the quality of neurally-generated music without using a listening test, especially in the symbolic domain, we resorted to commonly used metrics from the literature, implemented in the MusPy package [7]. For this evaluation, 173 songs were generated from each of the thirteen fine-tuned models, the same number of songs present within ProgGP, in order to maintain consistency when comparing the generated songs to the songs present in ProgGP. The prompt used in this analysis was a single low E note on guitar and bass guitar, and a kick and cymbal hit on drums. This was chosen in order to minimize the influence of the prompt as much as possible, as per the findings in [18].
In previous work, Sarmento et al. [18] used **pitch class entropy** (PCE), a measure of the entropy of pitch classes used within a song, to evaluate their model. The PCE of the fine-tuned models can be seen in Figure 2 (to ease visualization, we omit plots from models after epoch 30). The models fine-tuned for 15 and 20 epochs seem to have a distribution closer to ProgGP. The models fine-tuned for 5 and 10 epochs and beyond 20 epochs generally have a lower mean than the 15 and 20 epoch models. We hypothesize that this could be due to overfitting, causing the model to get stuck on certain sections or notes and repeating them, something seen in the songs generated by the more fine-tuned models. This would lower the pitch class entropy of a model's outputs rather than push it closer to that of the training data, which is higher. The rest of the metrics can be seen in Figure 3. They include **drum pattern consistency** (DPC), **number of pitch classes** (NPC), **number of pitches** (NP), **pitch entropy** (PE), **pitch range** (PR), **scale consistency** (SC), **polyphony** (Pol) and **polyphony rate** (PolR). These metrics, while not necessarily giving a definitive idea of the performance of a model, help us understand how the output of certain models matches the training data. They also give an idea of certain characteristics of the music that each model tends to generate. An in-depth definition of each can be found in MusPy's package documentation7.
Footnote 7: [https://salu133445.github.io/muspy/metrics.html](https://salu133445.github.io/muspy/metrics.html)
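For reference, pitch class entropy can be computed directly from the MIDI pitch numbers of a song; the minimal sketch below follows the standard definition used by MusPy (the function name and toy example are ours, not part of the package):

```
import math
from collections import Counter

def pitch_class_entropy(midi_pitches):
    """Shannon entropy (base 2) of the 12-bin pitch-class histogram."""
    counts = Counter(pitch % 12 for pitch in midi_pitches)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A two-octave C major arpeggio uses only 3 pitch classes, so entropy = log2(3) ~ 1.585
print(pitch_class_entropy([60, 64, 67, 72, 76, 79]))
```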
Figure 2: Pitch class entropy calculated for the songs in ProgGP (pink) and the generated songs from the fine-tuned models for different epochs (blue and green). Model with lowest KL-divergence highlighted (in green).
The Kullback-Leibler divergence (KLD), a measure of relative entropy between the true probability distribution and a sample probability distribution, was calculated for each of the fine-tuned models (ProgGP is used as ground truth to compare against the generated songs). The KLD results can be seen in Table 1.
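One common way to estimate the KLD between two sets of per-song metric values is to bin both into a shared histogram and compute the discrete divergence; the sketch below uses our own function name, bin count, and smoothing constant, which are assumptions rather than the exact settings used here:

```
import numpy as np

def kl_divergence(p_samples, q_samples, n_bins=20, eps=1e-8):
    """Estimate KL(P || Q) from two 1-D samples using a shared histogram."""
    lo = min(np.min(p_samples), np.min(q_samples))
    hi = max(np.max(p_samples), np.max(q_samples))
    bins = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(p_samples, bins=bins)
    q, _ = np.histogram(q_samples, bins=bins)
    p = p.astype(float) + eps          # smooth empty bins
    q = q.astype(float) + eps
    p, q = p / p.sum(), q / q.sum()    # normalize to probability distributions
    return float(np.sum(p * np.log(p / q)))
```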
The model fine-tuned for 15 epochs scores the lowest for most metrics. The only exceptions are polyphony and polyphony rate, in which the model fine-tuned for 20
| Epoch | PCE | DPC | NPC | NP | PE | PR | Pol | PolR | SC |
|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 5 | 0.473 | 0.513 | 0.608 | 0.799 | 0.638 | 0.762 | 0.497 | 0.495 | 0.263 |
| 10 | 0.665 | 0.696 | 1.599 | 1.052 | 0.800 | 0.845 | 0.570 | 0.573 | 0.433 |
| 15 | **0.262** | **0.442** | **0.491** | **0.746** | **0.442** | **0.591** | 0.365 | 0.353 | **0.216** |
| 20 | 0.425 | 0.478 | 0.999 | 0.914 | 0.616 | 1.062 | **0.301** | **0.247** | 0.286 |
| 25 | 0.673 | 0.596 | 1.641 | 0.998 | 0.670 | 0.912 | 0.484 | 0.559 | 0.491 |
| 30 | 0.707 | 0.640 | 1.200 | 1.043 | 0.851 | 1.054 | 0.400 | 0.509 | 0.312 |
| 35 | 0.625 | 0.625 | 1.144 | 0.939 | 0.743 | 0.974 | 0.376 | 0.493 | 0.376 |
| 40 | 0.480 | 0.611 | 1.050 | 0.970 | 0.717 | 1.121 | 0.513 | 0.544 | 0.274 |
| 45 | 0.702 | 0.746 | 1.554 | 1.059 | 0.910 | 1.089 | 0.420 | 0.486 | 0.336 |
| 50 | 0.648 | 0.679 | 1.510 | 1.040 | 0.813 | 1.092 | 0.517 | 0.504 | 0.317 |
| 55 | 0.595 | 0.690 | 1.358 | 1.039 | 0.818 | 1.092 | 0.471 | 0.485 | 0.346 |
| 60 | 0.681 | 0.677 | 1.513 | 1.018 | 0.816 | 1.157 | 0.579 | 0.575 | 0.375 |
| 65 | 0.757 | 0.730 | 2.069 | 1.126 | 0.842 | 1.041 | 0.394 | 0.484 | 0.379 |

Table 1: KLD scores for each fine-tuned model against ProgGP. Bold indicates the lowest KLD per column.
Figure 3: Metrics calculated for the songs in ProgGP (pink) and the generated songs for each fine-tuned model (blue and green). Model with lowest KLD highlighted (green).
epochs scores the lowest. This is expected given that the model trained for 15 epochs seems to be more similar to ProgGP for most of the metrics than the other models.
### Subjective Analysis
Subjectively evaluating generated progressive metal songs first requires a definition of progressive metal. This definition is hard to specify, as music genres are not always straightforward. Nevertheless, there are a number of tropes that progressive metal songs tend to have. Robinson [15] describes several of these, such as polyrhythms, syncopated chugging on low notes, and uncommon time signatures. These can be seen in many generated songs, particularly uncommon time signatures and syncopated rhythms. Similarly to the conclusions from GTR-CTRL [18], we empirically found that the prompt has a reasonably large amount of influence over the generated song, but this varies between songs. The model tends to only generate notes for instruments contained in the prompt (e.g., if there are two guitars, one bass guitar, and drums within the prompt, the model will only generate new notes for those instruments). It does, however, occasionally generate an extra guitar or keyboard track (id-00)8, but these scenarios were found to be rare. Generated guitar parts for multiple guitar tracks tend to be mostly identical, mirroring the recording technique of two guitars playing identical parts in order to create width in a song mix. Interestingly, however, the model will sometimes generate a harmony for a particular guitar line, where one guitar plays some kind of melodic line and the other plays the same line with the pitch shifted (id-01). It also occasionally generates guitar solos and rhythmic accompaniment (id-002), with one guitar playing low-pitched chords while the other plays fast single high-pitched notes. The model generates very impressive drum parts in addition to the guitar and bass guitar (id-03). The timing of the kick drum consistently lines up with the notes of the bass guitar (id-04). Additionally, several common drum beats heard in many metal songs can be generated (e.g. blast beats (id-05)). Many songs also feature drum fills at the end of a section before transitioning into a new section. It is possible that the model excels at generating drum parts due to the limited number of possible notes compared to pitch-based instruments such as guitar and bass guitar. This being said, the generated drum parts would likely need further editing if used in an actual song in order to convey more of the nuance heard in progressive metal drumming.
Footnote 8: Song ids are hyperlinked to facilitate listening.
## 5 Song Production
A short progressive metal song was recorded, produced and mixed using one of the fine-tuned models to generate the initial musical ideas and song structure. This was done by the first author, himself a progressive metal producer and music AI researcher. The intention with this production was to utilize the generated songs as a way to bolster creativity and inspire musical ideas, while still allowing the artist's creativity to be applied in integrating the generated content into a song of their own. Section 5.1 describes a high-level overview of the song creation process using the AI system in collaboration
with a music producer, while Section 5.2 presents a detailed analysis of the generated song and what was changed in order to suit the production.
### Process
The process of creating the song can be broken into the following steps:
1. A prompt is selected and songs are generated using one of the fine-tuned models. One generated song is chosen to be the starting point of the song based on how it inspires the producer.
2. The generated song is loaded into a guitar tab reader software (e.g. GuitarPro).
3. Drums and bass are exported to MIDI format and loaded into a digital audio workstation (DAW), along with appropriate virtual instruments.
4. The guitar parts are learned by the guitarist producer from the generated guitar tab and subsequently recorded in the DAW. During the recording of the guitar, changes can be made to suit the producer's idea of the direction of the song.
5. The drum and bass guitar MIDI are edited to suit any changes made to the guitar, or to better serve the song. This may be done in conjunction with the previous step and may require some back and forth in order to fully develop the song.
These steps can be repeated as many times as desired to build out a complete song. They may even be skipped if the producer is inspired by the ideas to create their own parts based on what was already generated. Virtual instruments for the bass guitar and drums are not strictly needed, but can assist in speeding up the workflow. It was found that this strategy allowed for a song to be developed quickly and minimized any extra work that may distract from creativity (e.g. having to record bass guitar parts in addition to the guitar parts or manually programming drum parts). In the next section we focus on a particular example generated using the first two measures of "Stabwound" by Necrophagist as the prompt. The song was generated using the model fine-tuned on ProgGP for 15 epochs. The structure of the generated song was not changed, as we felt that it had many interesting qualities. The guitar, drums and bass were changed slightly to better fit the vision that the generated song inspired. Additional sounds such as synths, organs and impact samples were also added to flesh out the song and increase interest in the production. The final mix and the original generated song in both PDF and GuitarPro format are available online9.
Footnote 9: Available at: [https://drive.google.com/drive/folders/1y2xX3WIQeOz628FoN2VP3kzWvOqYk8QI?usp=sharing](https://drive.google.com/drive/folders/1y2xX3WIQeOz628FoN2VP3kzWvOqYk8QI?usp=sharing)
### Song and Production Analysis
The first section of the song is made up of an idea which takes up 4 measures. This idea is repeated, with the second repetition skipping the first measure of the motif and adding on a new lick in the final measure which helps transition the section into the next one. Each repetition has a similar structure: three measures of 4/4 and a final measure with an odd time signature. The first repetition adds a 5/4 time signature to the end, while the second repetition uses a 6/4 time signature. Time signature changes are common in
progressive metal [15], and it is interesting to see the model generate this time signature change in both repetitions of the initial idea without simply repeating the idea. The changes in the second repetition of the idea feel like something a real songwriter might intentionally write, as if the model is building on the initial idea to create more excitement before the next section. The second section shows off a major flaw of the model: it does not always generate tabs or ideas that can be reasonably played by a human. Since a specific pitch can be played at multiple different areas of the guitar fretboard, tabs specify exactly which fret and string a note should be played on. However, the model will sometimes generate fretboard locations that are very unnatural for a guitarist to play. The tabs had to be slightly modified in order to record this section, while keeping the same notes. The main idea in this section is a repeated line of seven 8th notes followed by a chromatic note run and a lick that changes the modality from major to minor halfway through. It is difficult to know if this is something the model learned through training or if this note selection was more random. The section ends with four simple chords to transition into the next one. These were changed to be more dissonant chords in the recorded version. The final section is another repeated riff of seven notes used in a slightly more musical way than the previous section. Each repetition uses the same relative intervals between notes to outline two different chords, F# minor and G# minor. It then ends the section with two measures of 4/4, helping the song end in a slightly more familiar and natural way. A lick from the previous section is used in this ending in the tab, which helps tie the two sections together and increases cohesion.
While the structures and guitar riffs remained largely unchanged, the drums did not support the rest of the song as well as they could. While many generated songs have impressive sounding drums, the drum parts generated in this particular song did not quite hold up to professional standards. The first section mostly had a snare fill which did not enhance the interesting aspects of the guitar and bass parts. This was changed to use a steadier snare hit and cymbals on the downbeats of the measure. A stack cymbal was used in the first repetition, but was changed to a china cymbal in the second repetition to add excitement to the changes between the two repetitions. A drum fill was also added during the last few beats of the section to help highlight the transition between the two sections. The drums for the second section were mostly the same as the generated drums. The generated snare drum placement in this section accents the 7/4 time signature. However, the ride cymbals in the second repetition were changed to china cymbals which hit on the downbeats of the measure, and the kick
Figure 4: Original generated drum MIDI (top) vs. the final edited drum MIDI (bottom).
drum was changed to be constant eighth notes. This was done to push the energy up as the section finishes. The drums in the final section were kept mostly unchanged, with a small change to the drum fill at the end. A comparison from a section of the song of the originally generated MIDI and the edited MIDI can be seen in Figure 4. The process showed that while the model can excel at generating inspiring progressive metal ideas, a decent amount of work is still needed to make the ideas playable and professional sounding. Drums in particular, while containing good initial ideas, need a lot of editing to make them sound natural and support the ideas in the guitar and bass guitar parts. It is not as simple as directly importing the drum and bass MIDI from the generated song; a human producer is still required to turn the ideas into something that is satisfying to listen to and conveys emotion properly. That being said, the entire writing and production process only took three to four hours over two sessions, with most of the time being spent practicing the guitar parts in order to play them to a sufficient level for recording. The producer felt that the AI system helps inspire new ideas and produce a good-sounding demo extremely quickly, with an amazing level of detail in both the kinds of notes generated and song structure. It is easy to imagine combining multiple generated ideas together in this way to produce a full-length song.
## 6 Conclusion and Future Work
We have presented a deep learning model capable of generating songs in the style of progressive metal. We released ProgGP, a symbolic music dataset consisting of 173 progressive metal songs, which was constructed and used to fine-tune a pretrained transformer model. The models fine-tuned for only a relatively small number of epochs, such as 15 and 20 epochs, produce interesting results and are shown to exemplify traits of the fine-tuning data in nine different symbolic music metrics. This analysis was used to inform the selection of a generated song, which was then turned into a full progressive metal production. Finally, we presented an analysis of the generated song and how it was used to augment the producer's own creativity. We hope to continue this collaboration between human musicians and the AI system in a possible professionally recorded album and live performance of AI-assisted progressive metal songs.
|
2306.05001 | COURIER: Contrastive User Intention Reconstruction for Large-Scale
Visual Recommendation | With the advancement of multimedia internet, the impact of visual
characteristics on the decision of users to click or not within the online
retail industry is increasingly significant. Thus, incorporating visual
features is a promising direction for further performance improvements in
click-through rate (CTR). However, experiments on our production system
revealed that simply injecting the image embeddings trained with established
pre-training methods only has marginal improvements. We believe that the main
advantage of existing image feature pre-training methods lies in their
effectiveness for cross-modal predictions. However, this differs significantly
from the task of CTR prediction in recommendation systems. In recommendation
systems, other modalities of information (such as text) can be directly used as
features in downstream models. Even if the performance of cross-modal
prediction tasks is excellent, it is challenging to provide significant
information gain for the downstream models. We argue that a visual feature
pre-training method tailored for recommendation is necessary for further
improvements beyond existing modality features. To this end, we propose an
effective user intention reconstruction module to mine visual features related
to user interests from behavior histories, which constructs a many-to-one
correspondence. We further propose a contrastive training method to learn the
user intentions and prevent the collapse of embedding vectors. We conduct
extensive experimental evaluations on public datasets and our production system
to verify that our method can learn users' visual interests. Our method
achieves $0.46\%$ improvement in offline AUC and $0.88\%$ improvement in Taobao
GMV (Cross Merchandise Volume) with p-value$<$0.01. | Jia-Qi Yang, Chenglei Dai, Dan OU, Dongshuai Li, Ju Huang, De-Chuan Zhan, Xiaoyi Zeng, Yang Yang | 2023-06-08T07:45:24Z | http://arxiv.org/abs/2306.05001v3 | # COURIER: Contrastive User Intention Reconstruction
###### Abstract.
With the development of the multi-media internet, visual characteristics have become an important factor affecting user interests. Thus, incorporating visual features is a promising direction for further performance improvements in click-through rate (CTR) prediction. However, we found that simply injecting image embeddings trained with established pre-training methods yields only marginal improvements. We attribute the failure to two reasons: First, the pre-training methods are designed for well-defined computer vision tasks concentrating on semantic features, and they cannot learn personalized interest in recommendations. Secondly, pre-trained image embeddings containing only semantic information provide little information gain, considering we already have semantic features such as categories and item titles as inputs in the CTR prediction task. We argue that a pre-training method tailored for recommendation is necessary for further improvements. To this end, we propose a recommendation-aware image pre-training method that can learn visual features from user click histories. Specifically, we propose a user interest reconstruction module to mine visual features related to user interests from behavior histories. We further propose a contrastive training method to avoid the collapse of embedding vectors. We conduct extensive experiments to verify that our method can learn users' visual interests, and our method achieves a 0.46% improvement in offline AUC and a 0.88% improvement in Taobao online GMV with p-value \(<\) 0.01.
Keywords: Pre-training, Image Features, User Intention Reconstruction, Contrastive Learning, Personalized Searching
able to learn such visual characteristics. Secondly, the information learned through the pre-training methods may be useless in our scenario. We may hope that the semantic information contained in the pre-trained embeddings (classes, language description, etc.) can improve the CTR prediction. However, such information can be directly used in item recommendations. For example, we have the categories, item titles, and style tags that are provided by the merchants, which are already used in our CTR model. Thus, a pre-trained model that performs well at predicting categories or titles provides little new information and won't improve the recommendation task.
We argue that to boost the performance of downstream CTR prediction tasks, the pre-training method should be aware of the downstream task and should also be decoupled from the CTR task to reduce computation. To achieve this goal, we propose a **CO**ntrastive **U**se**R** **I**nt**E**ntion **R**econstruction (**Courier**) method. Our method is based on an intuitive assumption: An item clicked by a user is likely to have visual characteristics similar to some of the clicking history items. For example, a user that has clicked on a white dress and a red T-shirt may also be interested in a red dress. However, the visual characteristics that affect the users' choices may not be as clear as in the example, such as looking cute or cool. To mine the visual features related to user interests, we propose reconstructing the next clicked item with a cross-attention mechanism on the clicking history items. The reconstruction can be interpreted as a weighted sum of history item embeddings, as depicted in Figure. 1 (a), which can be trained end-to-end. Minimizing the reconstruction loss alone will lead to a trivial solution: all the embeddings collapse to the same value. Thus, we propose to optimize a contrastive loss that not only encourages lower reconstruction error, but also pushes embeddings of un-clicked items further apart, as depicted in Figure. 1 (b). We conducted various experiments to verify our motivation and the design of the method. In offline experiments, our method achieves a 0.46% absolute improvement on AUC and a 10.4% relative improvement on NDCG@10 compared to a strong baseline. In online A/B tests, we achieve a 0.88% improvement on GMV in the women's clothing category, which is significant considering the volume of the Taobao search engine.
Our contribution can be summarized as follows:
* To pre-train image embeddings containing users' visual preferences, we propose a user interest reconstruction method, which can mine latent user interests from history click sequences.
* The user interest reconstruction alone may lead to a collapsed solution. To solve this problem, we propose a contrastive training method that utilizes the negative PV items efficiently.
* We conduct extensive experiments to verify the effectiveness of our method. Our experimental results show that our method can learn meaningful visual concepts, and the model can generalize to unseen categories.
## 2. Related Work
From collaborative filtering(Wang et al., 2017; Wang et al., 2018; Wang et al., 2019) to deep learning based recommender systems(Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), IDs and categories (user IDs, item IDs, advertisement IDs, tags, etc.) are the essential features in item recommendation systems, which can straightforwardly represent the identities. However, with the development of the multi-media internet, ID features alone can hardly cover all the important information. Thus, recommendations based on content such as images, videos, and texts have become an active research field in recommender systems. In this section, we briefly review the related work and discuss their differences compared with our method.
### Content-based recommendation
In general, content is more important when the content itself is the concerned information. Thus, the content-based recommendation has already been applied in news(Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), music(Wang et al., 2019; Wang et al., 2019), image(Wang et al., 2019), and video(Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) recommendations.
Research on content-based recommendation in the item recommendation task is much scarcer because of the dominating ID features. Early applications of image features typically use image embeddings extracted from a pre-trained image classification model, such as (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). The image features adopted by these methods are trained on general classification datasets such as ImageNet(Dong et al., 2019), which may not fit recommendation tasks. Thus, Dong et al. (2019) proposes to train on multi-modal data from recommendation tasks. However, Dong et al. (2019) does not utilize any recommendation labels such as clicks and payments, which brings only marginal improvement if we are already using information from other modalities, as we analyze in Section. 3.3.
With the increase in computing power, some recent papers propose to train item recommendation models end-to-end with image feature networks(Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). However, the datasets used in these papers are much smaller than in our application scenario. For example, Wang et al. (Wang et al., 2019) uses a dataset consisting of 25 million interactions, while in our online system, the average daily interactions in a single category (women's clothing) are about 850 million. We found it infeasible to train image networks in our scenario end-to-end, which motivates our decoupled two-stage framework with user-interest-aware pre-training.
Figure 1. The contrastive user intention reconstruction (**Courier**) method. (a) The items in the user click history are used to reconstruct the embeddings of positive PV items. An embedding will be pushed to approach the corresponding reconstruction if the reconstruction is not perfect. (b) To avoid trivial degeneration of the embeddings, we propose a contrastive framework: While the embeddings and corresponding reconstructions are pushed closer, unmatched embeddings and reconstructions are pushed further apart.
### Pre-training of image features
Self-supervised pre-training has been a hot topic in recent years. We classify the self-supervised learning methods into two categories: Augmentation-based and prediction-based.
**Augmentation-based.** Taking image pre-training as an example, the augmentation-based methods generate multiple different views of an image by random transformations, then the model is trained to pull the embeddings of different views closer and push other embeddings (augmented from different images) further. SimCLR(Liu et al., 2017), SimSiam(Liu et al., 2017), BYOL(Liu et al., 2018) are some famous self-supervised methods in this category. These augmentation-based methods do not perform well in the item recommendation task as shown in our experiments in Section. 4.5. Since the augmentations are designed for classification (or segmentation, etc.) tasks, they change the visual appearance of the images without changing their semantic class, which contradicts the fact that visual appearance is also important in recommendations (e.g., color, shape, etc.).
**Prediction-based.** If the data can be split into more than one part, we can train a model taking some of the parts as inputs and predict the rest parts, which is the basic idea of prediction-based pre-training. Representative prediction-based methods include BERT(Liu et al., 2017), CLIP(Liu et al., 2017), GPT(Chen et al., 2018), VL-BERT(Liu et al., 2018), etc. The prediction-based methods can be used to train multi-modal recommendation data as proposed by Dong et al. (2018). However, if we are already utilizing multi-modal information, the improvements are limited, as shown in our experiments. To learn user interests information that can not be provided by other modalities, we argue that user behaviors should be utilized, as in our proposed method.
### Contrastive learning in recommender systems
Contrastive learning methods have also been adopted in recommendation systems in recent years. The most explored augmentation-based method is augmenting data by dropping, reordering, and masking some features(Liu et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), items(Liu et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), and graph edges(Wang et al., 2017). Prediction-based methods are also adopted for recommendation tasks, e.g., BERT4Rec(Wang et al., 2017) randomly masks some items and makes predictions. However, all these recommender contrastive learning methods concentrate on augmentation and pre-training with ID features, while our method tackles pre-training of image features.
## 3. The Proposed Method
We briefly introduce some essential concepts and notations, then introduce our pre-training and downstream CTR model in detail.
### Preliminary
**Background.** Modeling user preference (user portrait) is one of the most important problems in personalized recommender systems. Currently, the best-performing method utilizes the attention mechanism to model long-term user interests, where the user interests are mainly reflected by the histories of clicks, purchases, or shopping-cart additions, which are lists of items. For example, a user that has clicked a red dress is more likely to click another red dress in the future, which reflects the user's preference for red dresses. The general cases, however, are much more complicated: Even the users themselves may not be able to describe their preferences explicitly. For example, a user may prefer "lovely clothing." However, the exact visual appearance of "lovely clothing" is not well defined and may vary between users. Such preference information cannot be described exactly by the available data, such as the category and style tags provided by the merchants. Typically, the features of items are their IDs, titles, tags, prices, and some statistical values, such as monthly sales and favorable rates. Among them, numerical features are directly concatenated; the categorical features are converted to their corresponding embeddings and then concatenated with the numerical features.
**Notations.** A data sample for CTR prediction in item search can be represented with a tuple of (user, item, query, label). A user searches for a query text, and several items are shown to the user. Then the items that the user clicked are labeled as positive, and the rest as negative. When a user views a page, we have a list of items that are presented to the user; we call them page-view (PV) items. The length of this PV list is denoted by \(l_{pv}\). Each PV item has a cover image, denoted by \(Img_{pv}^{j}\), where \(0\leq j<l_{pv}\). The corresponding click labels are denoted by \(y_{pv}^{j}\), where \(y_{pv}^{j}\in\{0,1\}\). Each user has
Figure 2. The contrastive user intention reconstruction method. The images are fed into the image backbone model to obtain the corresponding embeddings. The embeddings of PV sequences are blue-colored, and the embeddings of click sequences are yellow-colored. The reconstructions are in green. Red boxes denote positive PV items.
a list of clicked item history; the image of each item is denoted by \(Img^{k}_{click}\), \(0\leq k<l_{click}\), where \(l_{click}\) is the length of the click history. The \(l_{pv}\) and \(l_{click}\) may vary across users and pages; in practice, we trim or pad them to the same length.
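To make the notation concrete, the following is a small illustrative sketch of what one training sample looks like; the class and field names are ours, not the production schema:

```
from dataclasses import dataclass
from typing import List

@dataclass
class PageViewSample:
    """One (user, query, page) sample: l_pv candidate items with click labels, plus the user's click history."""
    query: str                 # the search query text
    pv_images: List[str]       # cover images of the l_pv items shown on this page
    pv_labels: List[int]       # y^j in {0, 1}: whether the j-th PV item was clicked
    click_images: List[str]    # cover images of the l_click previously clicked items (trimmed/padded)
```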
**Attention**. An attention layer is defined as follows:
\[\text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^{T}}{\sqrt{d_{K}}})V \tag{1}\]
where \(Q\) is the query matrix, \(K\) is the key matrix, and \(V\) is the value matrix. The mechanism of the attention layer can be interpreted intuitively: for each query, a similarity score is computed with every key; the values corresponding to the keys are then weighted by these similarity scores and summed up to obtain the output. We refer interested readers to Vaswani et al. (2017) for more details of the attention mechanism.
### Contrastive User Intention Reconstruction
We make an essential assumption that the users' preferences are fully described by their clicking history, and that a newly clicked item can be characterized by all the past clicked items. Intuitively, our assumption can be described as follows: Every item has some specific characteristics, such as lovely, sweet, gorgeous, etc. Assume a user has clicked on three items with characteristics (A, B), (A, C), (B, D), respectively. When the user clicks a new item, we don't know what the characteristics of the new item are, but we assume the new item can be reconstructed by some of the characteristics of the previously clicked items. That is, we assume that the characteristics of the new item may be (A, D), (A, B, C), but not (A, E). Such an assumption may not hold precisely, since the new item may indeed contain some new characteristics that do not exist in the click history. In such cases, we want the model to recover as many common characteristics as possible. To this end, we propose a user-interest reconstruction method to learn to extract such characteristics that are related to user interests.
#### 3.2.1. User intention reconstruction
In the following discussion, we only consider a single line of data (a PV list and the corresponding list of user click history); the batch training method will be discussed later. We use \(Img_{pv}\) and \(Img_{click}\) to denote the matrices of all the \(l_{pv}\) and \(l_{click}\) images. All the images are fed to the image backbone (IB) to get their embeddings. We denote the embeddings as \(Emb^{j}_{pv}=IB(Img^{j}_{pv})\) and \(Emb^{k}_{click}=IB(Img^{k}_{click})\) correspondingly. In our user interest reconstruction method, we treat the embeddings of PV images \(Emb_{pv}\) as queries \(Q\), and we input the embeddings of click images \(Emb_{click}\) as values \(V\) and keys \(K\). Then the user interest reconstruction layer can be calculated by
\[Rec^{j}_{pv}=\text{Attention}(Emb^{j}_{pv},Emb_{click},Emb_{click}) \tag{2}\]
\[=\sum_{k}\alpha_{k}\,Emb^{k}_{click} \tag{3}\]
where \(\alpha=\text{softmax}\left(Emb^{j}_{pv}Emb^{T}_{click}\right)\) and \(\alpha_{k}\) is the attention weight on the \(k\)th history click item. The reason for its name is that the attention layer forces the outputs to be a weighted sum of the embeddings of the click sequence. Thus, the output space is limited to convex combinations of \(Emb_{click}\), i.e., a simplex, as depicted in Figure. 1 (a).
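As an illustration, the following is a minimal single-user sketch of Eqs. (2)-(3) in plain PyTorch; the function and variable names are ours, and the \(\sqrt{d_{K}}\) scaling from Eq. (1) is omitted to match Eq. (3):

```
import torch
import torch.nn.functional as F

def reconstruct_pv(emb_pv: torch.Tensor, emb_click: torch.Tensor) -> torch.Tensor:
    """Reconstruct each PV embedding as an attention-weighted sum of click-history embeddings.

    emb_pv:    (l_pv, d)    PV item embeddings, used as queries
    emb_click: (l_click, d) click-history embeddings, used as keys and values
    """
    scores = emb_pv @ emb_click.t()      # (l_pv, l_click) dot-product similarities
    alpha = F.softmax(scores, dim=-1)    # attention weights over the click history
    return alpha @ emb_click             # (l_pv, d) reconstructions, convex combinations of emb_click

# Toy usage: every row of `rec` lies in the simplex spanned by the click embeddings.
rec = reconstruct_pv(torch.randn(3, 8), torch.randn(5, 8))
```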
```
# backbone, attention: image backbone and attention models.
# size: dimension of embeddings; tau: softmax temperature.
# pv_n, click_n: number of page-view and click-history items.

for img_pv, img_click, labels in dataloader:
    # Calculate the embeddings.
    emb_p, emb_c = backbone(img_pv), backbone(img_click)
    # emb_p.shape = (batch_size, pv_n, size)
    # emb_c.shape = (batch_size, pv_n, click_n, size)
    # labels.shape = (batch_size, pv_n)

    # Flatten batch data and calculate reconstructions.
    emb_p = reshape(emb_p, (batch_size * pv_n, size))
    emb_c = reshape(emb_c, (batch_size * pv_n, click_n, size))
    rec_pv = attention(emb_p, emb_c, emb_c)
    # rec_pv.shape = (batch_size * pv_n, size)

    # Calculate similarity matrix Sim(j0, j1).
    emb_p = normalize(emb_p, dim=1)    # L2-normalize embeddings
    rec_pv = normalize(rec_pv, dim=1)  # L2-normalize reconstructions
    sim_matrix = emb_p @ rec_pv.t()

    # Calculate the contrastive loss: column-wise softmax over PV items,
    # keeping only the diagonal entries of positive (clicked) PV items.
    labels = reshape(labels, (batch_size * pv_n,))
    log_prob = log_softmax(sim_matrix / tau, dim=0)
    loss = sum(-log_prob * diag(labels)) / (batch_size * pv_n)

    loss.backward()              # Calculate gradients by backpropagation
    update(backbone, attention)  # Update models by SGD
```
**Algorithm 1** Courier Pseudocode, PyTorch-like
#### 3.2.2. Contrastive training method
The user intention reconstruction module cannot prevent the trivial solution that all the embeddings collapse to the same value. Thus, we propose a contrastive method to train the user-interest reconstruction module.
Given the PV embeddings \(Emb_{pv}\) and the corresponding reconstructions \(Rec_{pv}\), we calculate the pairwise similarity score of \(Emb^{j_{0}}_{pv}\) and \(Rec^{j_{1}}_{pv}\):
\[Sim(j_{0},j_{1})=\frac{{Emb^{j_{0}}_{pv}}^{T}\,Rec^{j_{1}}_{pv}}{||Emb^{j_{0}}_{pv}||\,||Rec^{j_{1}}_{pv}||} \tag{4}\]
Then we calculate the contrastive loss by
\[L_{pv}=\mathcal{L}_{contrast}(Emb_{pv},Emb_{click},y) \tag{5}\]
\[=\sum_{j_{0},j_{1}}-\log\left(\frac{e^{Sim(j_{0},j_{1})/\tau}}{\sum_{j_{0}}e^{Sim(j_{0},j_{1})/\tau}}\right)\mathbb{I}[j_{0}=j_{1}\ \text{and}\ y_{j_{0}}=1] \tag{6}\]
Here \(\mathbb{I}[j_{0}=j_{1}\ \text{and}\ y_{j_{0}}=1]\) is an indicator function that equals 1 when \(j_{0}=j_{1}\) and \(y_{j_{0}}=1\), and equals 0 otherwise.
The contrastive loss with user interest reconstruction is depicted in Figure. 2. The softmax function is calculated column-wise, and only the positive PV images are optimized to be reconstructed (the two columns in dashed boxes). The behavior of the contrastive loss aligns with our assumption: Positive PV images are pulled closer to the corresponding reconstructions (\(j_{0}=j_{1}\) and \(y_{j_{0}}=1\)), while the negative PV images are not encouraged to be reconstructed (\(j_{0}=j_{1}\) and \(y_{j_{0}}=0\)). All the PV embeddings \(Emb^{j_{0}}_{pv}\) with \(j_{0}\neq j_{1}\) and \(y_{j_{0}}=1\) are pushed further away from \(Rec^{j_{1}}_{pv}\), which can prevent the trivial solution that all the embeddings are the same. Some elements in the similarity matrix are not used in our loss, namely the elements
with \(y_{j_{1}}=0\), since the negative PV images are not supposed to be reconstructible. We left these columns for ease of implementation. The procedure of calculating this contrastive loss is depicted in Figure. 2.
**Extending to batched contrastive loss**. The above contrastive loss is calculated using PV items within a single page (we have at most 10 items on a page), which can only provide a limited number of negative samples. However, a well-known property of contrastive losses is that performance increases as the number of negative samples increases, which is verified both practically(Beng et al., 2015; Chen et al., 2016) and theoretically(Zhu et al., 2017). To increase the number of negative samples, we propose to treat all other in-batch PV items as negative samples. Specifically, we have \((\text{batch\_size}\times l_{pv})\) PV items in a batch. For a positive PV item, all the other \((\text{batch\_size}\times l_{pv}-1)\) PV items are used as negatives, which significantly increases the number of negatives. The procedure for optimizing this batched contrastive loss is summarized in Algorithm 1.
**Differences compared to established contrastive methods**. Although Courier is also a contrastive method, it differs significantly from classic contrastive methods. First, in contrastive methods such as SimCLR(Beng et al., 2015) and CLIP(Zhu et al., 2017), every sample has a corresponding positive counterpart. In our method, a negative PV item does not have a corresponding positive reconstruction, but it still serves as a negative sample in the calculation. Secondly, there is a straightforward one-to-one matching in SimCLR and CLIP, e.g., a text and its corresponding image. In our scenario, a positive PV item corresponds to a list of history click items, which is transformed into a one-to-one matching with our user interest reconstruction module introduced in Section. 3.2.1. Thirdly, another approach to convert this many-to-one matching into a one-to-one matching is to apply self-attention on the many side, as suggested in (Zhu et al., 2017), which turned out to perform worse in our scenario. We experiment with and analyze this method in Section. 4.9 (w/o Reconstruction).
#### 3.2.3. Contrastive learning on click sequences
In the contrastive loss introduced in Section. 3.2.2, the negative and positive samples are from the same query. Since all the PV items that are pushed to the users are already ranked by our online recommender systems, they may be visually and conceptually similar and are relatively hard to distinguish by contrastive loss. Although the batch training method introduces many easier negatives, the hard negatives still dominate the contrastive loss at the beginning of training, which makes the model hard to train.
Thus, we propose another contrastive loss on the user click history, which does not contain hard PV negatives. Specifically, suppose the user's click history is denoted by the corresponding embeddings \(Emb^{0}_{click},...,Emb^{l_{click}-1}_{click}\). We treat the first item as the next item to be clicked, and the remaining items are treated as click history. The user click sequence loss is calculated as follows:
\[L_{ucs}=\mathcal{L}_{contrast}(Emb^{0}_{click},Emb^{1:l_{click}}_{click},1)\]
The \(\mathcal{L}_{contrast}\) function is the same as in Section. 3.2.2. Here the label \(y=1\) because all the samples in the history click sequence are positive. The user sequence loss provides an easier objective at the start of training, which helps the model learn in a curriculum style. It also introduces more signals to train the model, which improves data efficiency. The overall loss of Courier is:
\[L_{\text{Courier}}=L_{pv}+L_{ucs} \tag{7}\]
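Below is a minimal, single-machine sketch of the contrastive function (Eq. 5-6) and the click-sequence loss \(L_{ucs}\); the loss normalization, the batching scheme, and the helper names are simplifying assumptions, and the production version follows Algorithm 1.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb, rec, labels, tau=0.05):
    """Eq. (5)-(6): emb (n, d) target embeddings, rec (n, d) reconstructions,
    labels (n,) with 1 for positives; the softmax runs over the target axis."""
    emb = F.normalize(emb, dim=-1)
    rec = F.normalize(rec, dim=-1)
    sim = emb @ rec.t()                           # Sim(j0, j1)
    log_prob = F.log_softmax(sim / tau, dim=0)    # column-wise softmax over j0
    return -(torch.diagonal(log_prob) * labels).sum() / labels.sum().clamp(min=1)

def click_sequence_loss(emb_click, tau=0.05):
    """L_ucs: the first click of each sample is the 'next click' target, reconstructed
    from the remaining clicks; every target is positive (y = 1).
    emb_click: (batch, l_click, d)."""
    target = emb_click[:, 0, :]                                    # (batch, d)
    history = emb_click[:, 1:, :]                                  # (batch, l_click - 1, d)
    attn = F.softmax(torch.einsum('bd,bkd->bk', target, history), dim=-1)
    rec = torch.einsum('bk,bkd->bd', attn, history)                # reconstructions
    return contrastive_loss(target, rec, torch.ones(emb_click.size(0)), tau)

print(click_sequence_loss(torch.randn(8, 5, 128)))
```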
### Downstream CTR model
To evaluate the information increment of pre-trained image embeddings on the CTR prediction task, we use an architecture that aligns with our online recommender system, which is depicted in Figure. 3.
**Why must we use a full model in evaluation?** In the CTR model, we use all the information available to the online system, which is also a fairly strong baseline even without image features. A full CTR model is much slower than one using only a subset of features. However, since the downstream CTR model already uses some features related to vision, we found it necessary to use a full model in the evaluation to obtain the actual performance gain compared to the online system. For example, some keywords in the item titles may describe the visual appearance of a piece of clothing. When we train image features with an image-text matching method such as CLIP, the image features are trained to describe those text keywords. In such a case, if we evaluate the image embeddings in a downstream model _without_ text features, we may observe a significant performance gain compared with the baseline. However, the improvement will disappear when we apply the model to the downstream model with the text features, since the text features already contain the information of the keywords, so the information gain vanishes. Many other features may cause similar information leakage, e.g., tags of the items, promotional information that may exist on some of the item images, etc.
Figure 3. The downstream CTR model and image representation.
**Downstream usage of image representations**. Practically, we find that how we feed the image features into the downstream CTR model is critical for the final performance. We experimented with three different methods: 1. directly using the embedding vectors; 2. using similarity scores to the target item; 3. using the cluster IDs of the embeddings. Cluster-ID is the best-performing method among the three, bringing about \(0.1\%-0.2\%\) improvement in AUC compared to using embedding vectors directly. We attribute the success of Cluster-ID to its better alignment with our pre-training method. The results and analysis of the embedding usage are provided in Appendix Section. A.
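As an illustration, the sketch below shows one way the Cluster-ID usage can be wired into a downstream model, assuming the k-means centroids have already been fitted offline on the pre-trained embeddings; the centroid count, embedding sizes, and class names here are illustrative choices rather than the production configuration.

```python
import torch
import torch.nn as nn

class ClusterIdFeature(nn.Module):
    """Map a frozen image embedding to its nearest centroid ID and look up a trainable
    ID embedding, so the CTR model consumes the image as a categorical feature."""
    def __init__(self, centroids, id_dim=16):
        super().__init__()
        self.register_buffer("centroids", centroids)            # (C, d), fitted offline
        self.id_emb = nn.Embedding(centroids.size(0), id_dim)   # trainable ID embedding

    def forward(self, img_emb):                                  # img_emb: (batch, d)
        cluster_id = torch.cdist(img_emb, self.centroids).argmin(dim=-1)
        return self.id_emb(cluster_id)

feat = ClusterIdFeature(torch.randn(1000, 128))
print(feat(torch.randn(4, 128)).shape)   # torch.Size([4, 16])
```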
The CTR model consists of a user embedding network, an item embedding network, and a query embedding network.
**Item embedding network**. The item embedding network takes the cover image of the item, the item ID, the item title, and various statistical features as input, as depicted in Figure. 3. The image features are fed by one of the three methods (Vector, Similarity score, Cluster ID). ID features are transformed into corresponding embeddings. The item titles are tokenized, then the tokens are converted to embeddings. We do not use a large language model such as BERT or GPT because of the limitation on inference speed. All the features are then concatenated to form the item embeddings.
**User embedding network**. The user embeddings consist of the embeddings of history item sequences, the user IDs, and other statistical features. The user IDs and statistical features are treated similarly to item features. The most important feature for personalized recommendation is the history item sequence. In our CTR model, we use three different item sequences: 1. Long-term click history, consisting of the latest up to 1000 clicks on the concerned category (the one the user is searching for) within 6 months. 2. Long-term payment history, consisting of up to 1000 paid items within 2 years. 3. The most recent up to 200 items in the current shopping cart. All the items are embedded by the item embedding network. The embedded item sequences are fed to multi-head attention and layer-norm layers. Then, the item embeddings are mean-pooled and projected to a proper dimension, which is concatenated with other user features to form the final user embeddings.
**Query embedding and CTR prediction network**. The user queries are treated in the same way as item titles. The user embeddings, item embeddings, and query embeddings are flattened and concatenated into a single vector. Then, we use an MLP model with 5 layers to produce the logits for CTR prediction. The CTR model is trained with the cross entropy loss on the downstream user click data.
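To make the prediction stage concrete, here is a compact sketch of the final concatenation and 5-layer MLP described above; the hidden sizes are placeholders and the user, item, and query encoders are stubbed out as pre-computed vectors.

```python
import torch
import torch.nn as nn

class CTRHead(nn.Module):
    """Concatenate user, item, and query embeddings and score them with a 5-layer MLP."""
    def __init__(self, user_dim, item_dim, query_dim, hidden=512):
        super().__init__()
        dims = [user_dim + item_dim + query_dim, hidden, hidden, 256, 64]
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        layers += [nn.Linear(dims[-1], 1)]                 # logit for the click probability
        self.mlp = nn.Sequential(*layers)

    def forward(self, user_emb, item_emb, query_emb):
        x = torch.cat([user_emb, item_emb, query_emb], dim=-1)
        return self.mlp(x).squeeze(-1)

head = CTRHead(128, 128, 64)
logits = head(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 64))
# trained with cross entropy on the downstream click labels
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.randint(0, 2, (8,)).float())
```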
## 4. Experiments
### Datasets
#### 4.1.1. Pre-training dataset
The statistics of the pre-training dataset are summarized in Table. (1). To reduce the computational burden, we down-sample negative samples to 20%, so the click-through rate (CTR) increases to around 13%. We further sort the PV items by their labels so that positive samples come first, and select the first 5 PV items to constitute the training targets (\(l_{pv}=5\)). We retain the latest 5 user-click-history items (\(l_{click}=5\)). Thus, there are at most 10 items in one sample. There are three reasons for such data reduction: First, as mentioned in Section. 4.4, our dataset is still large enough after reduction. Second, the number of positive items in PV sequences is less than 5 most of the time, so trimming PV sequences to 5 will not lose many positive samples, which are generally much more important than negatives. Third, we experimented with \(l_{click}=10\) and did not observe significant improvements, while the training time was significantly longer. Thus, we keep \(l_{click}=5\) and \(l_{pv}=5\) fixed in all the experiments. We remove the samples without click history or without positive items within the page. The training dataset is collected during 2022.11.18-2022.11.25 on the Taobao search service. Intuitively, women's clothing is one of the most difficult recommendation tasks (the testing AUC is significantly lower than average) and one that largely depends on the visual appearance of the items. Thus, we select the women's clothing category to form the training dataset of the pre-training task.
In the pre-training dataset, we only retain the item images and the click labels. All other information, such as item titles, item prices, user properties, and even the queries issued by the users, is dropped. There are three reasons to drop this additional information: First, since we want to learn the visual interest of the users, we need to avoid the case where the model relies on additional information to make predictions, which may hinder the model from learning useful visual information. Secondly, this additional training data may cause a significant increase in training time. For example, to train with text information, we need to introduce an NLP backbone network (e.g., BERT), which is parameter-intensive and will significantly slow down training. Thirdly, we found that introducing text information improves the performance of the pre-training task, but the effect on the downstream task is marginal. The reason is that all this information is already given in the downstream task, so the additional information learned by the model is redundant. We report the experimental results of training with text information in Section. 4.10.
#### 4.1.2. Downstream Evaluation Dataset
The average daily statistics of the downstream datasets in all categories and women's clothing are summarized in Table. (2). To avoid any information leakage, we use data collected from 2022.11.27 to 2022.12.04 on Taobao search to train the downstream CTR model, and we use data collected on 2022.12.05 to evaluate the performance. In the evaluation stage, the
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \# User & \# Item & \# Samples & CTR & \# PV & \# Hist \\ \hline
71.7 million & 35.5 million & 311.6 million & 0.13 & 5 & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Pre-training dataset collected from the women’s clothing category. A sample corresponds to a page of items.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \# User & \# Item & \# Samples & CTR & \# Hist \\ \hline All & 0.118 billion & 0.117 billion & 4.64 billion & 0.139 & 98 \\ Women’s & 26.39 million & 12.29 million & 874.39 million & 0.145 & 111.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Daily average statistics of the downstream dataset. A sample corresponds to a single item.
construction of the dataset aligns with our online system. We use all the available information to train the downstream CTR prediction model. The negative samples are also down-sampled to 20%. Different from the pre-training dataset, we do not group the page-view data in the evaluation dataset, so each sample corresponds to an item.
### Evaluation metrics
* Area Under the ROC Curve (**AUC**): AUC is the most commonly used evaluation metric in evaluating ranking methods, which denotes the probability that a random positive sample is ranked before a random negative sample.
* Grouped AUC (**GAUC**): The AUC is a global metric that ranks all the predicted probabilities. However, in the online item-searching task, only the relevant items are considered by the ranking stage, so the ranking performance among the recalled items (relevant to the user's query) is more meaningful than global AUC. Thus, we propose a Grouped AUC metric, which is the average AUC within searching sessions.
* (Grouped) Normalized Discounted Cumulative Gain (**NDCG**): NDCG is a position-aware ranking metric that assigns larger weights to top predictions, which can better reflect the performance of the top rankings. We use grouped NDCG@10 in the experiments since our page size is 10.
### Compared methods
* **Baseline**: Our baseline is a CTR model that serves our online system, which is described in Section. 3.3. It's noteworthy that we adopt a warmup strategy that uses our online model trained with more than one year's data to initialize all the weights (user ID embeddings, item ID embeddings, etc.), which is a fairly strong baseline.
* **Supervised**: To pre-train image embeddings with user behavior information, a straightforward method is to train a CTR model with click labels and an image backbone network end-to-end. We then use the trained image backbone to extract embeddings, as for the other compared methods.
* **SimCLR**(Liu et al., 2017): SimCLR is a self-supervised image pre-training method based on augmentations and contrastive learning.
* **SimSiam**(Liu et al., 2017): SimSiam is also an augmentation-based method. Different from SimCLR, SimSiam suggests that the contrastive loss is unnecessary and proposes directly minimizing the distance between matched embeddings.
* **CLIP**(Liu et al., 2017): CLIP is a multi-modal pre-training method that optimizes a contrastive loss between image embeddings and item embeddings. We treat the item cover image and its title as a matched sample. We use a pre-trained BERT(Wang et al., 2019) as the feature network of item titles, which is also trained end-to-end.
### Implementation
**Image backbone and performance optimization**. We use the Swin-tiny transformer(Shi et al., 2018) as the image backbone model, which is faster than the Swin model and has comparable performance. However, since we have about \(3\times 10^{9}\) images in our pre-training dataset (230 times bigger than the famous Imagenet(Imagenet et al., 2017) dataset, which has about \(1.3\times 10^{6}\) images), even the Swin-tiny model is slow. After applying gradient checkpointing(Dong et al., 2018), mixed-precision computation(Shi et al., 2018), and optimization on distributed IO of images, we managed to reduce the training time for one epoch on the pre-training dataset from 150 hours to 50 hours (consuming daily data in \(\sim\) 10 hours) with 48 Nvidia V100 GPUs (32GB).
**Efficient clustering**. In our scenario, there are about \(N=6\times 10^{7}\) image embeddings to be clustered, and we set the cluster number to \(C=10^{5}\). The computational complexity of a vanilla k-means implementation is \(O(N*C*d)\) per iteration, which is unaffordable. Practically, we implement the high-speed learning-based clustering method proposed by Yi et al. (Yi et al., 2019). The computing time is reduced significantly, from more than 7 days to about 6 hours.
More implementation details are provided in the Appendix.
### Performance in downstream CTR task
The performances of the compared methods are summarized in Table. (3). We draw the following conclusions: First, since all the methods are pre-trained on the women's clothing category, they all improve the AUC of the downstream women's clothing category. SimCLR, SimSiam, CLIP, and Courier outperform the Baseline and Supervised pre-training. Among them, our Courier performs best, outperforming the Baseline by 0.46% AUC and the second-best method SimCLR by 0.18% AUC. Secondly, we also check the performance in all categories. Our Courier again performs best, with a 0.16% improvement in AUC. However, the other methods' performances differ significantly from the women's clothing category: the Supervised and SimSiam methods turn out to have a negative impact in all categories, and the improvements of CLIP and SimCLR become marginal. The reason is that these pre-training methods fail to extract general user-interest information and overfit the women's clothing category. We analyze the performance in categories other than women's clothing in Section. 4.6. Thirdly, the performance on GAUC indicates that the gains of CLIP and SimCLR vanish when we consider in-page ranking, which is indeed more important than global AUC, as discussed in Section. 4.2. The GAUC performance further validates that Courier can learn fine-grained user interest features that distinguish between in-page items. Fourthly, all the pre-training methods achieve significant NDCG improvements compared to the Baseline, which indicates that the accuracy of the top-ranked items is higher. Courier's NDCG is slightly lower than the other pre-training methods. Since the metrics are calculated on page-viewed items, the ranking of a few top items does not significantly influence the CTR performance; for example, whether an item is ranked first or third performs similarly since we have 10 items on a page. On the contrary, a significantly higher GAUC may help users find desired items within fewer pages, which is more important in our scenario.
### Generalization in unseen categories
In Figure. 4, we plot the AUC improvements of Courier in different categories. We have the following conclusions: First, the performance improvement in the women's clothing category is the most, which is intuitive since the embeddings are trained with women's clothing data. Secondly, there are also significant improvements
in women's shoes, children's shoes, children's clothing, underwear, etc. These categories are not used in the pre-training task, which indicates that the Courier method can learn general visual characteristics that reflect user interests. Thirdly, the performances in the bedding, cosmetics, knapsack, and handicrafts categories are also improved by more than 0.1%. These categories are significantly different from women's clothing in visual appearance, so Courier has also learned some features that transfer to them. Fourthly, Courier does not have a significant impact on some categories, and has a negative impact on the car category. These categories are less influenced by visual appearance and can be excluded when using our method to avoid a performance drop. Fifthly, the performance is also related to the amount of data: generally, categories with more data tend to perform better.
### Visualization of trained embeddings
Did Courier really learn features related to user interests? We verified the quantitative improvements on CTR in Section. 4.5. Here, we also provide some qualitative analysis. During the training of Courier, we _did not_ use any additional information other than images and user clicks. Thus, if the embeddings contain some semantic information, such information must have been extracted from user behaviors. We plot some randomly selected embeddings with specific categories and style tags in Figure. 5 and Figure. 6. First, embeddings from different categories are clearly separated, which indicates that Courier can learn categorical semantics from user behaviors. Secondly, some of the style tags can be separated, such as Neutral vs. Romantic, and Cool vs. Sexy. The well-separated tags are also intuitively easy to distinguish. Thirdly, some of the tags cannot be separated clearly, such as Mature vs. Cuties, and Grace vs. Antique, which is also intuitive since these tags have relatively vague meanings and may overlap. Despite this, Courier still learned some gradation between the two concepts. To conclude, the proposed Courier method can learn meaningful user-interest-related features using only images and click labels.
### Influence of \(\tau\)
We experiment with different values of \(\tau\); the results are in Table. (4). Note that these experiments are run with the similarity-score method, which performs worse than Cluster-ID but is much faster. We find that \(\tau=0.05\) performs best for Courier and keep it fixed in all other experiments.
### Ablation study
We conduct the following ablation experiments to verify the effect of each component of Courier.
* **w/o UCS**: Remove the user click sequence loss.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Temperature & AUC (women’s clothing) & AUC & GAUC \\ \hline
0.02 & 0.33\% & 0.11\% & -0.03\% \\
0.05 & **0.36\%** & **0.15\%** & **0.09\%** \\
0.1 & -0.06\% & -0.31\% & -0.07\% \\
0.2 & 0.17\% & 0.07\% & -0.09\% \\ \hline \hline \end{tabular}
\end{table}
Table 4. The influence of different \(\tau\).
Figure 4. The AUC improvements of Courier compared to the Baseline on different categories. The x-axis is sorted by the improvements.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline Methods & AUC (Women’s Clothing) & AUC & GAUC & NDCG@10 \\ \hline Baseline & 0.00\% (0.7785) & 0.00\% (0.8033) & 0.00\% (0.7355) & 0.0000 (0.1334) \\ Supervised & +0.06\% (0.7790) & -0.14\% (0.8018) & -0.06\% (0.7349) & **+0.0155 (0.1489)** \\ CLIP[36] & +0.26\% (0.7810) & +0.04\% (0.8036) & -0.09\% (0.7346) & +0.0148 (0.1483) \\ SimCLR[5] & +0.28\% (0.7812) & +0.05\% (0.8037) & -0.08\% (0.7347) & +0.0141 (0.1475) \\ SimSiam[8] & +0.10\% (0.7794) & -0.10\% (0.8022) & -0.29\% (0.7327) & +0.0142 (0.1476) \\ Courier(ours) & **+0.46\% (0.7830)** & **+0.16\% (0.8048)** & **+0.19\% (0.7374)** & +0.0139 (0.1473) \\ \hline \hline \end{tabular}
\end{table}
Table 3. The improvements of AUC in the women’s clothing category. And performances of AUC, GAUC, and NDCG in all categories. We report the relative improvements compared to the Baseline method, and the raw values of the metrics are in parentheses.
* **w/o Contrast**: Remove the contrastive loss, only minimize the reconstruction loss, similar to SimSiam[8].
* **w/o Reconstruction**: Use self-attention instead of cross-attention in the user interest reconstruction module. We further analyze this method in Appendix Section. B.
* **w/o Neg PV**: Remove negative PV samples, only use positive samples.
* **w/o Large batch**: Change batch size from 3072 to 64.
The results in Table. (5) indicate that all the proposed components are necessary for the best performance.
### Train with text information
Text information is important for search and recommendation since it is directly related to the users' queries and the properties of the items. Thus, raw text information is already widely used in real-world systems. Co-training of text and images also shows significant performance gains in computer vision tasks such as classification and segmentation. We are therefore interested in verifying the influence of co-training with text information on Courier. Specifically, we add a CLIP[36] loss besides Courier, so that the loss function becomes \(L=L_{\text{Courier}}+L_{\text{CLIP}}\). The CLIP loss is calculated with item cover images and item titles. However, such multi-task training leads to worse downstream CTR performance, as shown in Table. (6), which indicates that co-training with text information may not help generalization when the text information is already available in the downstream task.
### Deployment
To evaluate the performance improvement brought by Courier to our online system, we conduct online A/B testing in Taobao Search for 30 days. Compared with the strongest deployed online baseline, Courier significantly (p-value < 0.01) improves the CTR and GMV (Gross Merchandise Volume) by +0.34% and +0.88% in women's clothing, respectively (the noise level is less than 0.1% according to the online A/A test). Such improvements are considered significant given the large traffic volume of the Taobao search engine. The model has also been successfully deployed into production, serving the main traffic, with the Cluster-ID updated once a month.
\begin{table}
\begin{tabular}{l r r r} \hline \hline & AUC (women’s clothing) & AUC & GAUC \\ \hline w CLIP & 0.26\% & 0.04\% & -0.09\% \\ Courier & **0.46\%** & **0.16\%** & **0.19\%** \\ \hline \hline \end{tabular}
\end{table}
Table 6. Train with text information.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline & IPV & \# Order & CTR & GMV \\ \hline All categories & +0.11\% & +0.1\% & +0.18\% & +0.66\% \\ Women’s clothing & +0.26\% & +0.31\% & +0.34\% & +0.88\% \\ \hline \hline \end{tabular}
\end{table}
Table 7. The A/B testing improvements of Courier.
Figure 5. T-SNE visualization of embeddings in different categories.
Figure 6. T-SNE visualization of embeddings with different style tags. We also plot some item images with different tags below the corresponding figures. |
2306.15212 | TranssionADD: A multi-frame reinforcement based sequence tagging model
for audio deepfake detection | Thanks to recent advancements in end-to-end speech modeling technology, it
has become increasingly feasible to imitate and clone a user's voice. This
leads to a significant challenge in differentiating between authentic and
fabricated audio segments. To address the issue of user voice abuse and misuse,
the second Audio Deepfake Detection Challenge (ADD 2023) aims to detect and
analyze deepfake speech utterances. Specifically, Track 2, named the
Manipulation Region Location (RL), aims to pinpoint the location of manipulated
regions in audio, which can be present in both real and generated audio
segments. We propose our novel TranssionADD system as a solution to the
challenging problem of model robustness and audio segment outliers in the trace
competition. Our system provides three unique contributions: 1) we adapt
sequence tagging task for audio deepfake detection; 2) we improve model
generalization by various data augmentation techniques; 3) we incorporate
multi-frame detection (MFD) module to overcome limited representation provided
by a single frame and use isolated-frame penalty (IFP) loss to handle outliers
in segments. Our best submission achieved 2nd place in Track 2, demonstrating
the effectiveness and robustness of our proposed system. | Jie Liu, Zhiba Su, Hui Huang, Caiyan Wan, Quanxiu Wang, Jiangli Hong, Benlai Tang, Fengjie Zhu | 2023-06-27T05:18:25Z | http://arxiv.org/abs/2306.15212v1 | # TranssonADD: A multi-frame reinforcement based sequence tagging model for audio deepfake detection
###### Abstract
Thanks to recent advancements in end-to-end speech modeling technology, it has become increasingly feasible to imitate and clone a user's voice. This leads to a significant challenge in differentiating between authentic and fabricated audio segments. To address the issue of user voice abuse and misuse, the second Audio Deepfake Detection Challenge (ADD 2023) aims to detect and analyze deepfake speech utterances. Specifically, Track 2, named the Manipulation Region Location (RL), aims to pinpoint the location of manipulated regions in audio, which can be present in both real and generated audio segments. We propose our novel TranssionADD system as a solution to the challenging problem of model robustness and audio segment outliers in the trace competition. Our system provides three unique contributions: 1) we adapt sequence tagging task for audio deepfake detection; 2) we improve model generalization by various data augmentation techniques; 3) we incorporate multi-frame detection (MFD) module to overcome limited representation provided by a single frame and use isolated-frame penalty (IFP) loss to handle outliers in segments. Our best submission achieved 2nd place in Track 2, demonstrating the effectiveness and robustness of our proposed system.
Audio Deepfake Detection, Manipulation Region Location, Anti-spoofing, Audio Synthesis Detection
hashing. Neither of these two works is suitable for situations where the forged region is extremely similar to its neighbor, such as a segment forged by noise addition only. Wu et al. [24] introduce a location model similar to a Question-Answering strategy, which makes the model answer "where are the start and end points" of anomalous clips. But this makes the problem more ambiguous because it is clearly not a natural language understanding task.
To solve these problems, we propose the TranssionADD system, which specifically addresses the poor generalization and outlier problems of the location model. Firstly, we innovatively convert the task into a sequence tagging problem, predicting the label of each frame and merging frames with the same label to locate audio segments. Secondly, we use RCNN-BLSTM to extract spatial and temporal features frame by frame. To avoid the network learning contextual information poorly, we use a multi-frame detection (MFD) module to compress multi-frame information. Then we address data insufficiency by applying data augmentation including voice conversion, pitch shift, etc. Finally, to handle outliers from isolated frames, we introduce a penalty strategy called isolated-frame penalty (IFP) to constrain this situation. Our submission won second place in ADD 2023. More explicitly, our main contributions are as follows:
* We design a novel location model that draws inspiration from the sequence tagging task, specifically the Name Entity Recognition (NER) task.
* We utilize data augmentation method both before training and during training.
* We apply MFD module to improve robustness and add IFP loss to deal with outliers.
* Experiments show our proposed method performs better than the baseline.
## 2 Method
In this section, we introduce details and rationales of our proposed method, which mainly consists of four parts: 1) RCNN-BLSTM backbone; 2) data augmentation; 3) multi-frame detection module; and 4) isolated-frame penalty loss.
### RCNN-BLSTM Backbone
As shown in Figure 1, we use mel-spectrograms extracted from audio as input representations, and then we build a basic model based on residual convolutional neural network (RCNN) and bi-directional long short-term memory (BLSTM). Some of our structures refer to the design of RawNet2 [21] and reference encoder [25]. Mel-spectrograms are passed through 7 layers of 1D residual convolution with filter size and kernel size set to be 256 and 3 respectively. After each convolutional layer, batch normalization and Relu activation are applied.
Next, the output of final convolutional layer is fed into a single bidirectional LSTM layer with 256 units (128 in each direction) to generate classification representations. At the end of the model, we use a linear layer to output the classification probabilities of each frame of audio.
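A minimal PyTorch sketch of such a backbone is given below; the residual wiring, the input projection, and the padding choices are assumptions made to keep the example self-contained, not the exact architecture used in the system.

```python
import torch
import torch.nn as nn

class ResConv1d(nn.Module):
    """One residual 1-D conv block: Conv -> BN -> ReLU with a skip connection."""
    def __init__(self, channels=256, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)) + x)

class RCNNBLSTM(nn.Module):
    """Per-frame real/fake classifier: 7 residual conv layers + 1 BLSTM + linear head."""
    def __init__(self, n_mels=80, channels=256, n_classes=2):
        super().__init__()
        self.proj = nn.Conv1d(n_mels, channels, 1)
        self.rcnn = nn.Sequential(*[ResConv1d(channels) for _ in range(7)])
        self.blstm = nn.LSTM(channels, 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(256, n_classes)

    def forward(self, mel):                     # mel: (batch, n_mels, frames)
        x = self.rcnn(self.proj(mel))           # (batch, channels, frames)
        x, _ = self.blstm(x.transpose(1, 2))    # (batch, frames, 256)
        return self.head(x)                     # per-frame logits

logits = RCNNBLSTM()(torch.randn(2, 80, 300))
print(logits.shape)                             # torch.Size([2, 300, 2])
```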
### Data Augmentation
In this study, there are two major difficulties: 1) the data is very limited and imbalanced; 2) the training and test sets have huge disparities in terms of their domain. The model, therefore, suffers from underperformance. To address this issue, we propose a two-stage data augmentation method.
**Before-training augmentation.** Both the training set and dev set are insufficient and lack diversity. Therefore, we propose three data augmentation methods:
* **Voice conversion.** Use VC [8] model trained on given dataset to transform segments of audio and then annotate adjusted segments as "fake".
* **Noise adding.** Add noise from MUSAN [26] corpus to entire audio segments.
* **Insertion.** Insert a real audio clip with different lengths randomly selected from other audio or the audio itself.
**In-training augmentation.** Due to the fixed nature of data augmentation before training, it limits the randomness of the data. Therefore, we propose dynamic data augmentation during training to ensure sufficient diversity and randomness in the training data.
* **Pitch shift.** Randomly adjust the pitch of segments from audio and then label the adjusted segments as "fake".
* **Gaussian noise.** Add Gaussian noise to segments from audio and then label the adjusted segments as "fake".
### Multi-frame Detection Module
RCNN primarily focuses on single-frame information but lacks a broader context. To further extract more effective contextual information, we draw inspiration from wav2vec2.0 [27] and enhance current single-frame detection with an additional multi-frame detection (MFD) module trained in a supervised manner. Multiple audio frames are passed through a downsampling network to get their multi-frame feature. The logits of the average label values corresponding to these multiple audio frames are calculated to get an estimated label of this segment. The multi-frame feature is then passed through
a classification network using the estimated label as the target.
Precisely, as shown in Figure 1, we use two convolutional layers to build the downsampling network and one dense layer to form the classification network. The filter size of both convolutional layers is set to 128. The stride values are set to be 5 and 2; the kernel sizes are set to be 7 and 3, respectively. The cross-entropy loss of MFD is added to the training loss. Finally, we replicate and add the outputs of the MFD module to the outputs of BLSTM for each frame, to enhance the classification prediction accuracy for each frame.
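The following is a small PyTorch sketch of such an MFD module; the padding values and the example sequence length are assumptions, and the way the segment outputs are added back to the per-frame outputs is only indicated in the comments.

```python
import torch
import torch.nn as nn

class MFD(nn.Module):
    """Multi-frame detection: compress a window of frame features and classify it
    against the (soft) average label of the frames it covers."""
    def __init__(self, in_dim=256, n_classes=2):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv1d(in_dim, 128, kernel_size=7, stride=5, padding=3), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.cls = nn.Linear(128, n_classes)

    def forward(self, frame_feats):                   # (batch, frames, in_dim)
        x = self.down(frame_feats.transpose(1, 2))    # (batch, 128, ~frames/10)
        return self.cls(x.transpose(1, 2))            # segment-level logits

mfd = MFD()
seg_logits = mfd(torch.randn(2, 300, 256))
# Segment targets would be derived from the mean frame label inside each window,
# and seg_logits would be replicated and added to the per-frame BLSTM outputs.
print(seg_logits.shape)   # torch.Size([2, 30, 2])
```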
### Isolated-frame Penalty Loss
In traditional NER tasks, one entity corresponds to one or several word(s); while in ADD tasks, one fake/real segment corresponds to tens or hundreds of frames. Consequently, when applying regular NER to audio frame-level predictions, a challenge arises wherein the model predicts labels that appear on a significantly higher number of discontinuous frames, which we refer to as outliers. Therefore, in order to constrain the case where the predicted value is different from the surrounding values, we design an additional loss function called isolated-frame penalty (IFP) loss.
Specifically, we devise a penalty calculation method that computes the difference between the current frame and its surrounding frames (1 to 3 before and after). In particular, for frame \(i\) with predicted probability \(\hat{y}_{i}\), we consider the previous and following \(s\) frames, \(s\in\{1,2,3\}\), and correspondingly obtain the regularization term \(r_{i}^{(s)}\).
\[r_{i}^{(s)}=||\hat{y}_{i}-\frac{\sum_{k=1}^{s}(\hat{y}_{i-k}+\hat{y}_{i+k})}{2 s}|| \tag{1}\]
Finally, we sum the regularity constraints over all \(N\) frames and divide the sum by \(3\) to obtain the final regularization term \(R\). The formula is as follows:
\[R=\sum_{i=1}^{N}(r_{i}^{(1)}+r_{i}^{(2)}+r_{i}^{(3)})/3 \tag{2}\]
Then, our total loss involves losses of multiple modules, and its expression is as follows:
\[L=L_{SF}+L_{MFD}+\alpha*R \tag{3}\]
where \(L_{SF}\) and \(L_{MFD}\) represent the cross-entropy losses for the single-frame predictions and the MFD module, respectively, while \(R\) serves as the constraint applied to isolated small fragments. Note that the weight \(\alpha\) assigned to the IFP loss is a hyperparameter.
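As a reference, here is a simple sketch of the IFP term (Eq. 1-2); treating \(\hat{y}\) as a 1-D per-frame probability, using an absolute-value norm, and skipping the first and last few frames are assumptions not specified in the text.

```python
import torch

def ifp_penalty(probs):
    """Isolated-frame penalty (Eq. 1-2): penalize frames whose prediction differs
    from the average of their s = 1, 2, 3 neighbours on both sides.
    probs: (frames,) predicted fake-probability per frame."""
    n = probs.size(0)
    r = torch.zeros(())
    for s in (1, 2, 3):
        for i in range(s, n - s):                     # boundary frames are skipped here
            neigh = (probs[i - s:i].sum() + probs[i + 1:i + 1 + s].sum()) / (2 * s)
            r = r + torch.abs(probs[i] - neigh)
    return r / 3.0

# total loss (Eq. 3) would then be L = L_SF + L_MFD + alpha * ifp_penalty(probs)
print(ifp_penalty(torch.rand(100)))
```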
Figure 1: Overview of our proposed system. Subfigure (a) illustrates the overall structure of our model. Subfigure (b) is the RCNN encoder. Subfigure (c) shows the multi-frame detection module (MFD).
## 3 Experiments
### Datasets
Our dataset is the publicly available Mandarin corpus provided by the organizers of ADD 2023. Some samples are entirely real or entirely fake, and some are partially fake or partially real.
### Training Setup
#### 3.2.1 Input representations
We compare mel-spectrograms and MFCCs, and we choose mel-spectrograms as the low-level acoustic representation due to their better performance. We use a 16 kHz sampling rate for all experiments. The window size of the fast Fourier transform (FFT), the hop size, and the number of output bins are set to 800, 160, and 80, respectively.
#### 3.2.2 Data augmentation
Before training, we augment the training set and dev set using techniques such as VC, noise addition, and insertion. During training, with a random rate of 0.2, we add Gaussian noise with a signal-to-noise ratio (SNR) below 15 dB and change the audio pitch by pitch shifting.
#### 3.2.3 Implementation details
We first train the model on one NVIDIA 3090 GPU, with 64 batch sizes and 100 epochs. The models are optimized by Adam with a learning rate of 0.0001 and weight decay of \(10^{-5}\). The hyper-parameter weight \(\alpha\), in the isolated-frame penalty (IFP) loss, is set to be 0.1 as default, based on simple tuning experiments.
#### 3.2.4 Metrics
In the following, \(A_{sentence}\) is used to measure the model's ability to correctly distinguish between genuine and fake audio, and \(F_{1\_segment}\) is used to measure the model's ability to correctly identify fake regions within fake audio. The final score is the weighted sum of \(A_{sentence}\) and \(F_{1\_segment}\).
\[score=0.3*A_{sentence}+0.7*F_{1\_segment} \tag{4}\]
Moreover, we consider that audio should consist of sufficiently long runs of consecutive frames with the same label, and that isolated frames are likely to be outliers. Therefore, relying solely on \(A_{sentence}\) and \(F_{1\_segment}\) may not adequately capture the occurrence of fragmented and isolated segments in frame-level predictions. To evaluate the efficacy of dealing with outliers, we introduce a simple auxiliary evaluation metric called "iso-rate," as defined in Eq. 5. If a segment is shorter than 6 frames (60 ms), we classify it as an isolated segment (counted in \(N_{isolated}\)), and we then compute the proportion of these isolated segments across the entire dataset. A lower iso-rate indicates that the evaluated system produces fewer isolated segments and handles outliers better.
\[\textit{iso-rate}\ =\ \frac{N_{isolated}}{N_{audios}} \tag{5}\]
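A small sketch of how this metric can be computed from per-frame predictions follows; representing each audio as a list of frame labels and counting isolated runs per Eq. 5 are assumptions about details the text leaves open.

```python
import itertools

def iso_rate(pred_labels_per_audio, min_len=6):
    """iso-rate (Eq. 5): number of isolated segments (runs of identical frame labels
    shorter than min_len frames, i.e. 60 ms at a 10 ms hop) divided by the number of audios."""
    n_isolated = 0
    for labels in pred_labels_per_audio:
        for _, run in itertools.groupby(labels):
            if len(list(run)) < min_len:
                n_isolated += 1
    return n_isolated / max(len(pred_labels_per_audio), 1)

preds = [[0] * 20 + [1] * 2 + [0] * 20,   # one isolated 2-frame "fake" run
         [0] * 30 + [1] * 15]             # no isolated segments
print(iso_rate(preds))                    # 0.5
```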
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Model & dev-score\(\uparrow\) & iso-rate\(\downarrow\)(\%) & aug-dev-score\(\uparrow\) & aug-iso-rate\(\downarrow\)(\%) & test-score\(\uparrow\) \\ \hline Rawnet2 (baseline) & 0.9911 & 0.2245 & 0.9502 & 1.4183 & 0.2783 \\ RCNN+BLSTM (proposed baseline) & 0.9926 & 0.1684 & 0.9535 & 1.40 & 0.2842 \\
**TranssionADD** & **0.9957** & **0.1123** & **0.9933** & **0.6873** & **0.6249** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Scores** of the dev/test dataset and the augmented dev dataset, and the **iso-rate** of their respective auxiliary metrics. We conduct a series of controlled experiments using modified Rawnet2, RCNN + BLSTM backbone (proposed baseline), and our TranssionADD (RCNN + BLSTM + two-stage data augmentation + MFD module + isolated-frame penalty) as the experimental control groups. The best result in each metric is in **bold**.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Model & dev-score\(\uparrow\) & iso-rate\(\downarrow\)(\%) & aug-dev-score\(\uparrow\) & aug-iso-rate\(\downarrow\)(\%) & test-score\(\uparrow\) \\ \hline
**TranssionADD** & 0.9957 & **0.1123** & **0.9933** & **0.6873** & **0.6249** \\ \hline -IFP & **0.9959** & 0.4153 & 0.9931 & 1.0865 & 0.6236 \\ -MFD & 0.9934 & 0.2245 & 0.9885 & 0.8692 & 0.5182 \\ -InAug & 0.9953 & 0.1347 & 0.9804 & 1.5992 & 0.4126 \\ -BefAug & 0.9926 & 0.1684 & 0.9535 & 1.40 & 0.2842 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison for ablation studies. (In the table, “-” means to remove the corresponding module)
### Results
From Table 1 above, our system achieves significantly higher scores on the development set than on the test set, which may contain more variable and complicated samples. To address this problem, we introduce the augmented development set (aug-dev) as an essential reference to evaluate the model's performance.
#### 3.3.1 Performance
Table 1 compares three systems: 1) a modified open-source implementation, Rawnet2; 2) our proposed baseline, which consists of RCNN and BLSTM; and 3) our best system, TranssionADD. The results show that TranssionADD significantly outperforms the other two systems in terms of both correct detection score and iso-rate.
#### 3.3.2 Ablation studies
We also conduct ablation studies to demonstrate the effectiveness of each part in our method. Specifically, we conduct individual experiments by removing different components, including before-training augmentation (BefAug), in-training augmentation (InAug), multi-frame detection module (MFD), and isolated-frame penalty (IFP).
From the multi-metric comparison in Table 2, we observe that the IFP significantly reduces iso-rate while improving the correct detection score, as evident from the second and third rows. Additionally, the removal of MFD module, as indicated in the fourth row, results in a significant decrease in correct detection score, indicating its notable contribution, despite limited impact on iso-rate. Moreover, the fifth and sixth rows demonstrate that using data augmentation both before-training and in-training (BefAug & InAug) leads to a significant improvement in performance, especially on more various augmented development set and test set.
## 4 Conclusion
In summary, we propose a basic RCNN-BLSTM backbone to classify each frame of audio. Then we enhance the stability and generalization ability of our base model through diverse data augmentation techniques. We also innovatively design a multi-frame detection (MFD) module and isolated-frame penalty (IFP) loss to improve the representation of local features and constrain outliers, resulting in improved model performance. These three aspects are crucial for our success in manipulated region detection challenge. In the future, to further improve the detection capabilities of our system, we will focus on enhancing the model's robustness against noise.
|
2304.03209 | Implicit Anatomical Rendering for Medical Image Segmentation with
Stochastic Experts | Integrating high-level semantically correlated contents and low-level
anatomical features is of central importance in medical image segmentation.
Towards this end, recent deep learning-based medical segmentation methods have
shown great promise in better modeling such information. However, convolution
operators for medical segmentation typically operate on regular grids, which
inherently blur the high-frequency regions, i.e., boundary regions. In this
work, we propose MORSE, a generic implicit neural rendering framework designed
at an anatomical level to assist learning in medical image segmentation. Our
method is motivated by the fact that implicit neural representation has been
shown to be more effective in fitting complex signals and solving computer
graphics problems than discrete grid-based representation. The core of our
approach is to formulate medical image segmentation as a rendering problem in
an end-to-end manner. Specifically, we continuously align the coarse
segmentation prediction with the ambiguous coordinate-based point
representations and aggregate these features to adaptively refine the boundary
region. To parallelly optimize multi-scale pixel-level features, we leverage
the idea from Mixture-of-Expert (MoE) to design and train our MORSE with a
stochastic gating mechanism. Our experiments demonstrate that MORSE can work
well with different medical segmentation backbones, consistently achieving
competitive performance improvements in both 2D and 3D supervised medical
segmentation methods. We also theoretically analyze the superiority of MORSE. | Chenyu You, Weicheng Dai, Yifei Min, Lawrence Staib, James S. Duncan | 2023-04-06T16:44:03Z | http://arxiv.org/abs/2304.03209v2 | # Implicit Anatomical Rendering for Medical Image Segmentation with Stochastic Experts
###### Abstract
Integrating high-level semantically correlated contents and low-level anatomical features is of central importance in medical image segmentation. Towards this end, recent deep learning-based medical segmentation methods have shown great promise in better modeling such information. However, convolution operators for medical segmentation typically operate on regular grids, which inherently blur the high-frequency regions, _i.e._, boundary regions. In this work, we propose MORSE, a generic implicit neural rendering framework designed at an anatomical level to assist learning in medical image segmentation. Our method is motivated by the fact that implicit neural representation has been shown to be more effective in fitting complex signals and solving computer graphics problems than discrete grid-based representation. The core of our approach is to formulate medical image segmentation as a rendering problem in an end-to-end manner. Specifically, we continuously align the coarse segmentation prediction with the ambiguous coordinate-based point representations and aggregate these features to adaptively refine the boundary region. To parallelly optimize multi-scale pixel-level features, we leverage the idea from Mixture-of-Expert (MoE) to design and train our MORSE with a stochastic gating mechanism. Our experiments demonstrate that MORSE can work well with different medical segmentation backbones, consistently achieving competitive performance improvements in both 2D and 3D supervised medical segmentation methods. We also theoretically analyze the superiority of MORSE.
Keywords:Medical Image Segmentation Implicit Neural Representation Stochastic Mixture-of-Experts.
## 1 Introduction
Medical image segmentation is one of the most fundamental and challenging tasks in medical image analysis. It aims at classifying each pixel in the image into an anatomical category. With the success of deep neural networks (DNNs),
medical image segmentation has achieved great progress in assisting radiologists in contributing to a better disease diagnosis.
Until recently, the field of medical image segmentation has mainly been dominated by an encoder-decoder architecture, and the existing state-of-the-art (SOTA) medical segmentation models are roughly categorized into two groups: (1) convolutional neural networks (CNNs) [20, 4, 15, 1, 24, 31, 10, 11, 6, 29, 30, 27, 25, 26], and (2) Transformers[2, 5, 28]. However, despite their recent success, several challenges persist to build a robust medical segmentation model: \(\blacklozenge\) Classical deep learning methods require precise pixel/voxel-level labels to tackle this problem. Acquiring a large-scale medical dataset with exact pixel- and voxel-level annotations is usually expensive and time-consuming as it requires extensive clinical expertise. Prior works [12, 7] have used point-level supervision on medical image segmentation to refine the boundary prediction, where such supervision requires well-trained model weights and can only capture discrete representations on the pixel-level grids. \(\blacklozenge\) Empirically, it has been observed that CNNs inherently store the discrete signal values in a grid of pixels or voxels, which naturally blur the high-frequency anatomical regions, _i.e._, boundary regions. In contrast, implicit neural representations (INRs), also known as coordinate-based neural representations, are capable of representing discrete data as instances of a continuous manifold, and have shown remarkable promise in computer vision and graphics [17, 22, 23]. Several questions then arise: _how many pixel- or voxel-level labels are needed to achieve good performance? how should those coordinate locations be selected? and how can the selected coordinates and signal values be leveraged efficiently?_
Orthogonally to the popular belief that the model architecture matters the most in medical segmentation (_i.e._, complex architectures generally perform better), this paper focuses on an under-explored and alternative direction: _towards improving segmentation quality via rectifying uncertain coarse predictions_. To this end, we propose a new INR-based framework, MORSE (i**M**plicit anat**O**mical **R**rendering with **S**tochastic **E**x**perts). The core of our approach is to formulate medical image segmentation as a rendering problem in an end-to-end manner. We think of building a generic implicit neural rendering framework to have fine-grained control of segmentation quality, _i.e._, to adaptively compose coordinate-wise point features and rectify uncertain anatomical regions. Specifically, we encode the sampled coordinate-wise point features into a continuous space, and then align position and features with respect to the continuous coordinate.
We further hinge on the idea of mixture-of-experts (MoE) to improve segmentation quality. Considering our goal is to rectify uncertain coarse predictions, we regard multi-scale representations from the decoder as experts. During training, experts are randomly activated for features from multiple blocks of the decoder, and correspondingly the INRs of multi-scale representations are separately parameterized by a group of MLPs that compose a spanning set of the target function class. In this way, the INRs are acquired across the multi-block structure while the stochastic experts are specified by the anatomical features at each block.
In summary, our main contributions are as follows: (1) We propose a new implicit neural rendering framework that has fine-grained control of segmentation quality by adaptively composing INRs (_i.e._, coordinate-wise point features) and rectifying uncertain anatomical regions; (2) We illustrate the advantage of adopting mixture-of-experts that endows the model with better specialization of features maps for improving the performance; (3) Extensive experiments show that our method consistently improves performance compared to 2D and 3D SOTA CNN- and Transformer-based approaches; and (4) Theoretical analysis verifies the expressiveness of our INR-based model. Code will be released with publication.
## 2 Method
Let us assume a supervised medical segmentation dataset \(\mathcal{D}=\{(x,y)\}\), where each input \(x=x_{1},x_{2},...,x_{T}\) is a collection of \(T\) 2D/3D scans, and \(y\) refers to the ground-truth labels. Given an input scan \(x\in\mathbb{R}^{H\times W\times d}\), the goal of medical segmentation is to predict a segmentation map \(\hat{y}\). Fig. 1 illustrates the overview of our MORSE. In the following, we first describe our baseline model \(f\) for standard supervised learning, and subsequently present our MORSE. A baseline segmentation model consists of two main components: (1) encoder module, which generates the multi-scale feature maps such that the model is capable of modeling multi-scale local contexts, and (2) decoder module that makes a prediction \(\hat{y}\) using the generated multi-block features of different resolution. The entire model \(M\) is trained end-to-end using the supervised segmentation loss \(\mathcal{L}_{\text{sup}}\)[28] (_i.e._, equal combination of cross-entropy loss and dice loss).
### Stochastic Mixture-of-Experts (SMoE) Module
**Motivation** We want a module that encourages inter- and intra-associations across multi-block features. Intuitively, multi-block features should be specified by the anatomical features at each block. We posit that, due to the specialization-favored nature of MoE, the model will benefit from explicit use of its own anatomical features at each block by learning multi-scale anatomical contexts with adaptively selected experts. In implementation, our SMoE module follows an MoE design [16], where it treats features from multiple blocks of the decoder as experts. To mitigate potential overfitting and enable a parameter-efficient design, we further randomly activate experts for each input during training. Our approach makes three major departures compared to [16] (_i.e._, a SOTA segmentation model): (1) the MoE is implicitly optimized during training, which greatly trims down the training cost and the model scale; (2) we use features from the _decoder_ instead of the _encoder_, tailored to our refinement goal; and (3) we empirically show that this "self-slimmable" attribute delivers sufficiently exploited expressiveness of the model.
Figure 1: Illustration of the MORSE pipeline.
**Modulization** We first use multiple small MLPs with the same size to process different block features and then up-sample the features to the size of the input scans, _i.e._, \(H\times W\times d\). With \(N\) as the total number of layers (experts) in the decoder, we treat these upsampled features \([\mathbf{F}_{1},\mathbf{F}_{2},...,\mathbf{F}_{N}]\) as expert features. We then train a gating network \(\mathcal{G}\) to re-weight the features from activated experts with the trainable weight matrices \([\mathbf{W}_{1},\mathbf{W}_{2},...,\mathbf{W}_{N}]\), where \(\mathbf{W}\in\mathbb{R}^{H\times W\times d}\). Specifically, the gating network or router \(\mathcal{G}\) outputs these weight matrices satisfying \(\sum_{i}\mathbf{W}_{i}=\mathbf{1}^{H\times W\times d}\) using a structure depicted as follows:
\[\mathbf{W}_{i}=[\texttt{Softmax}(\texttt{Conv}([\mathbf{F}_{1},\mathbf{F}_{2},...,\mathbf{F}_{ N}]))]_{i},\quad\text{for}\;\;i\in[N]. \tag{1}\]
The gating network first concatenates all the expert features along channels and uses several convolutional layers to get \(\texttt{Conv}([\mathbf{F}_{1},\mathbf{F}_{2},...,\mathbf{F}_{N}])\in\mathbb{R}^{C\times H \times W\times d\times N}\), where \(C\) is the channel dimension. A softmax layer is applied over the last dimension (_i.e._, \(N\)-expert) to output the final weight maps. After that, we feed the re-weighted expert features to another MLP to fuse the multi-block information. Finally, the resultant output \(x_{\text{out}}\) (_i.e._, the coarse feature) is given as follows:
\[x_{\text{out}}=\texttt{MLP}(\sum_{i=1}^{N}\mathbf{W}_{i}\cdot\mathbf{F}_{i}\,), \tag{2}\]
where \(\cdot\) denotes the pixel-wise multiplication, and \(x_{\text{out}}\in\mathbb{R}^{C\times H\times W\times d}\).
**Stochastic Routing** The prior MoE-based model [16] is densely activated; that is, the model needs to access all its parameters to process every input. One drawback of such a design is its prohibitive training cost. Moreover, the large model size suffers from the representation collapse issue [21], further limiting the model's performance. Our proposed SMoE considers _randomly activated_ expert sub-networks to address these issues. In implementation, we simply apply standard dropout to the experts with a dropping probability \(\alpha\). At each training iteration, dropout masks are placed on experts with probability \(\alpha\); that is, the omission of experts follows a Bernoulli\((\alpha)\) distribution. At inference, there is no dropout mask and all experts are activated.
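A compact 2D sketch of this stochastic gating is given below; the use of 2D convolutions, a 1x1 convolution in place of the fusing MLP, and the value of \(\alpha\) are simplifying assumptions for illustration.

```python
import torch
import torch.nn as nn

class StochasticMoE(nn.Module):
    """Fuse N upsampled decoder-block features with pixel-wise softmax gates (Eq. 1-2),
    randomly dropping experts with probability alpha during training."""
    def __init__(self, n_experts, channels, alpha=0.3):
        super().__init__()
        self.gate = nn.Conv2d(n_experts * channels, n_experts, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)   # fusing MLP as a 1x1 conv
        self.alpha = alpha

    def forward(self, experts):                          # list of N tensors (B, C, H, W)
        stacked = torch.stack(experts, dim=1)            # (B, N, C, H, W)
        logits = self.gate(torch.cat(experts, dim=1))    # (B, N, H, W)
        if self.training:                                # stochastic routing: drop experts
            keep = (torch.rand(len(experts), device=logits.device) > self.alpha).float()
            logits = logits + torch.log(keep + 1e-9).view(1, -1, 1, 1)
        w = torch.softmax(logits, dim=1).unsqueeze(2)    # weights sum to 1 over experts
        return self.fuse((w * stacked).sum(dim=1))       # coarse feature x_out

moe = StochasticMoE(n_experts=4, channels=32)
x_out = moe([torch.randn(2, 32, 64, 64) for _ in range(4)])
print(x_out.shape)   # torch.Size([2, 32, 64, 64])
```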
### Implicit Anatomical Rendering (IAR)
The existing methods generally assume that the semantically correlated information and fine anatomical details have been captured and can be used to obtain
high-quality segmentation results. However, CNNs inherently operate on discrete signals in a grid of pixels or voxels, which naturally blurs the high-frequency anatomical regions, _i.e._, boundary regions. To address such issues, INRs in computer graphics are often used to replace standard discrete representations with continuous functions parameterized by MLPs [23, 22]. Our key motivation is that the task of medical segmentation can be framed as a rendering problem that applies implicit neural functions to continuous shape/object/scene representations [17, 22]. Inspired by this, we propose an implicit neural rendering framework to further improve segmentation quality, _i.e._, to adaptively compose coordinate-wise point features and rectify uncertain anatomical regions.
**Point Selection** Given a coarse segmentation map, the rendering head aims at rectifying the uncertain boundary regions. A point selection mechanism is thus required to pick out those pixels where rendering can achieve the largest segmentation quality improvement. Besides, point selection can significantly reduce the computational cost compared to blindly rendering all boundary pixels. Therefore, our MORSE selects \(N_{p}\) points for refinement given the coarse segmentation map using an uncertainty-based criterion. Specifically, MORSE first uniformly randomly samples \(k_{p}N_{p}\) candidates from all pixels, where the hyper-parameter \(k_{p}\geq 1\), following [9]. Then, based on the coarse segmentation map, MORSE chooses \(\rho N_{p}\) pixels with the highest uncertainty from these candidates, where \(0.5<\rho<1\). The uncertainty for a pixel is defined as \(\texttt{SecondLargest}(\mathbf{v})-\texttt{max}(\mathbf{v})\), where \(\mathbf{v}\) is the logit vector of that pixel such that the coarse segmentation is given as \(\texttt{Softmax}(\mathbf{v})\). The remaining \((1-\rho)N_{p}\) pixels are sampled uniformly from all the remaining pixels. This mechanism ensures the selected points contain a large portion of points with uncertain segmentation which require refinement.
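A short sketch of this selection rule follows; flattening the image into a list of per-pixel logits and the particular values of \(k_{p}\) and \(\rho\) are illustrative assumptions.

```python
import torch

def select_points(logits, n_points, k=3, rho=0.75):
    """Uncertainty-based point selection: oversample k*N candidates uniformly, keep the
    rho*N most uncertain (largest second-minus-top logit margin), and fill the rest
    with uniformly random points.  logits: (num_pixels, num_classes)."""
    num_pixels = logits.size(0)
    cand = torch.randint(0, num_pixels, (k * n_points,))
    top2 = logits[cand].topk(2, dim=-1).values
    uncertainty = top2[:, 1] - top2[:, 0]                 # SecondLargest - max (<= 0)
    n_unc = int(rho * n_points)
    hard = cand[uncertainty.topk(n_unc).indices]          # most uncertain candidates
    rand = torch.randint(0, num_pixels, (n_points - n_unc,))
    return torch.cat([hard, rand])

idx = select_points(torch.randn(64 * 64, 9), n_points=128)
print(idx.shape)   # torch.Size([128])
```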
**Positional Encoding** It is well known that neural networks can be cast as universal function approximators, but they struggle to represent high-frequency signals due to their limited spectral bias [18, 14]. Unlike [9], we explore using encoded positional information to capture high-frequency signals, which echoes our theoretical findings in Appendix 0.A. Specifically, for a coordinate-based point \((x,y)\in[H]\times[W]\), the positional encoding function is given as:
\[\psi(x,y)=[\sin(2\pi(w_{1}\tilde{x}+v_{1}\tilde{y})),\cdots,\sin(2 \pi(w_{L}\tilde{x}+v_{L}\tilde{y})),\] \[\cos(2\pi(w_{1}\tilde{x}+v_{1}\tilde{y})),\cdots,\cos(2\pi(w_{L} \tilde{x}+v_{L}\tilde{y}))], \tag{3}\]
where \(\tilde{x}=2x/H-1\) and \(\tilde{y}=2y/W-1\) are the standardized coordinates with values in between \([-1,1]\). The frequency \(\{w_{i},v_{i}\}_{i=1}^{L}\) are trainable parameters with Gaussian random initialization, where we set \(L=128\)[3]. For each selected point, its position encoding will then be concatenated with the coarse features of that point (_i.e._, \(x_{\text{out}}\) defined in Sec. 2.1), to output the fine-grained features.
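A minimal sketch of Eq. (3) with trainable frequencies is shown below; the module name and the unit-variance Gaussian initialization scale are assumptions.

```python
import math
import torch
import torch.nn as nn

class LearnedFourierEncoding(nn.Module):
    """sin/cos encoding of normalized 2D coordinates with trainable frequencies (Eq. 3)."""

    def __init__(self, num_freqs: int = 128):
        super().__init__()
        # Trainable frequency matrix {w_i, v_i}, Gaussian random initialization.
        self.freqs = nn.Parameter(torch.randn(num_freqs, 2))

    def forward(self, coords):
        # coords: [N, 2] coordinates already standardized to [-1, 1].
        proj = 2.0 * math.pi * coords @ self.freqs.t()                   # [N, num_freqs]
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)     # [N, 2 * num_freqs]
```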
**Rendering Head** The fine-grained features are then fed to the rendering head whose goal is to rectify the uncertain predictions with respect to these selected points. Inspired by [9], the rendering head adopts 3-layer MLPs design. Since the rendering head is designed to rectify the class label of the selected points, it is trained using the standard cross-entropy loss \(\mathcal{L}_{\text{rend}}\).
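The rendering head itself is a small point-wise MLP; a sketch follows, where the hidden width and module name are assumptions.

```python
import torch.nn as nn

class RenderingHead(nn.Module):
    """3-layer MLP that re-classifies each selected point from its fine-grained
    feature (coarse point feature concatenated with its positional encoding)."""

    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, point_feats):
        # point_feats: [num_points, in_dim] -> per-point class logits
        return self.mlp(point_feats)
```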
**Adaptive Weight Adjustment** Instead of directly leveraging pre-trained weights, it is more desirable to train the model _from scratch_ in an _end-to-end_ way. For instance, we empirically observe that directly using coarse masks from pre-trained weights to rectify unclear anatomical regions can lead to suboptimal results (see Sec. 3.1). Thus, we propose to modulate the weight of \(\mathcal{L}_{\text{rend}}\) as:
\[\lambda_{t}=\lambda_{\text{rend}}\cdot\left[\mathbbm{1}\{t>T/2\}\cdot\left( \frac{t-T/2}{T}\right)\right], \tag{4}\]
where \(t\) is the index of the iteration, \(T\) denotes the total number of iterations, and \(\mathbbm{1}\{\cdot\}\) denotes the indicator function.
**Training Objective** As such, the model is trained in an _end-to-end_ manner using the total loss \(\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{sup}}+\lambda_{t}\times\mathcal{L}_{\text{rend}}\).
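Eq. (4) and the total objective can be expressed directly; the helper name below is hypothetical.

```python
def render_loss_weight(t: int, T: int, lambda_rend: float = 0.1) -> float:
    """Eq. (4): the rendering loss is switched on after T/2 iterations and
    then ramped up linearly with the iteration index t."""
    if t <= T / 2:
        return 0.0
    return lambda_rend * (t - T / 2) / T

# Usage inside a training step (loss_sup and loss_rend computed elsewhere):
#   loss_total = loss_sup + render_loss_weight(t, T) * loss_rend
```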
## 3 Experiments
**Dataset** We evaluate the models on two important medical segmentation tasks. (1) **Synapse multi-organ segmentation**1: the Synapse multi-organ segmentation dataset contains 30 abdominal CT scans with 3779 axial contrast-enhanced abdominal clinical CT images in total. Each volume scan has a variable size of \(512\times 512\times 85\!\sim\!512\times 512\times 198\) with a voxel spatial resolution of (\([0.54\!\sim\!0.54]\times[0.98\!\sim\!0.98]\times[2.5\!\sim\!5.0]\)) mm\(^3\). For a fair comparison, the data split2 is fixed with 18 (2211 axial slices) and 12 patients' scans for training and testing, respectively. The dataset covers a high diversity of organs: aorta, gallbladder, left kidney, right kidney, liver, pancreas, spleen, and stomach.
Footnote 1: [https://www.synapse.org/#](https://www.synapse.org/#)!Synapse:syn3193805/wiki/217789
Footnote 2: [https://github.com/Beckschen/TransUNet/tree/main/lists/lists_Synapse](https://github.com/Beckschen/TransUNet/tree/main/lists/lists_Synapse)
(2) **Liver segmentation**: the Multi-phasic MRI (MP-MRI) dataset is an in-house dataset of 20 patients, each with T1-weighted DCE-MRI images at three time phases (_i.e._, pre-contrast, arterial, and venous). Here, our evaluation is
Table 1: Quantitative comparisons for multi-organ segmentation on the Synapse multi-organ CT dataset. The best results are indicated in **bold**.

| Method | DSC ↑ | Jaccard ↑ | 95HD ↓ | ASD ↓ | Aorta | Gallbladder | Kidney (L) | Kidney (R) | Liver | Pancreas | Spleen | Stomach |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UNet (Baseline) [20] | 70.11 | 59.39 | 44.69 | 14.41 | 84.00 | 56.70 | 72.41 | 62.64 | 86.98 | 48.73 | 81.48 | 67.96 |
| + PointRend [9] | 71.52 | 61.34 | 43.19 | 13.70 | 85.74 | 57.14 | 75.42 | 63.27 | 87.32 | 50.16 | 81.82 | 71.29 |
| + Implicit PointRend [3] | 67.33 | 59.73 | 52.44 | 22.15 | 76.32 | 51.99 | 70.28 | 70.36 | 81.09 | 43.77 | 77.18 | 67.05 |
| + Ours (MoE) | 72.83 | 62.64 | 40.44 | 13.15 | 86.11 | 59.51 | 75.81 | 67.10 | 87.82 | 52.11 | 83.48 | 70.86 |
| + Ours (SMoE) | 74.86 | 64.94 | 37.69 | 12.66 | 86.39 | 63.99 | 77.96 | 68.93 | 88.88 | 53.62 | 86.12 | 72.98 |
| + Ours (IAR) | 73.11 | 62.98 | 34.01 | 12.67 | 86.28 | 60.25 | 76.58 | 63.84 | 88.32 | 52.12 | 83.47 | 72.51 |
| + Ours (IAR+MoE) | 73.73 | 66.53 | 33.34 | 11.43 | 87.00 | 64.45 | 78.14 | 70.13 | 89.22 | 52.33 | 85.20 | 76.40 |
| + Ours (MORSE) | **76.59** | **66.97** | **32.00** | **10.67** | **87.28** | **64.73** | **80.58** | **71.87** | **90.04** | **54.60** | **86.67** | **76.93** |
| TransUnet (Baseline) [2] | 77.49 | 64.78 | 31.69 | 8.46 | 87.23 | 63.13 | 81.87 | 77.02 | 94.08 | 55.86 | 85.08 | 75.62 |
| + PointRend [9] | 78.30 | 68.58 | 34.17 | 8.62 | 87.93 | 63.96 | 83.47 | 77.23 | 94.96 | 56.45 | 85.76 | 76.75 |
| + Implicit PointRend [3] | 71.92 | 60.62 | 41.42 | 18.55 | 78.39 | 61.64 | 79.79 | 73.20 | 89.61 | 50.01 | 80.17 | 62.75 |
| + Ours (MoE) | 77.85 | 65.30 | 32.75 | 7.90 | 87.40 | 63.46 | 82.34 | 77.88 | 94.14 | 56.12 | 85.24 | 76.25 |
| + Ours (SMoE) | 78.68 | 65.98 | 31.86 | 7.00 | 87.60 | 66.21 | 82.62 | 78.12 | 94.88 | 57.59 | 85.97 | 76.48 |
| + Ours (IAR) | 79.37 | 65.50 | 30.13 | 7.25 | 88.63 | 66.76 | 83.70 | 79.50 | 95.26 | 57.10 | 86.90 | 77.10 |
| + Ours (IAR+MoE) | 79.60 | 66.99 | 27.59 | 6.54 | 88.73 | 66.83 | 83.85 | 80.19 | 95.98 | 57.12 | 86.92 | 77.21 |
| + Ours (MORSE) | **80.85** | **68.53** | **26.61** | **6.46** | **88.92** | **67.53** | **84.83** | **81.68** | **96.83** | **59.70** | **87.73** | **79.58** |
conducted via 5-fold cross-validation on the 60 scans. For each fold, the training and testing data includes 48 and 12 cases, respectively.
**Implementation Details** We use the AdamW optimizer [13] with an initial learning rate of \(5e^{-4}\) and adopt a polynomial-decay learning rate schedule for both datasets. We train each model for 30K iterations. For Synapse, the input resolution is 256\(\times\)256 and the batch size is 4. For MP-MRI, we randomly crop 96\(\times\)96\(\times\)96 patches and the batch size is 2. For SMoE, following [16], all the MLPs have hidden dimensions \([256,256]\) with ReLU activations, and the dimension of the expert features \([\mathbf{F}_{1},\mathbf{F}_{2},...,\mathbf{F}_{N}]\) is 256. We empirically set \(\alpha\) to 0.7. Following [9], \(N_{p}\) is set to 2048 and 8192 for training and testing, respectively, and \(k_{p}\), \(\rho\) are 3 and 0.75. We follow the same gating network design as [16], which includes four \(3\times 3\) convolutional layers with channels \([256,256,256,N]\) and ReLU activations. \(\lambda_{\text{rend}}\) is set to 0.1. We adopt four representative models: UNet [20], TransUnet [2], 3D-UNet [4], and UNETR [5], and set \(N\) for them to 5, 3, 3, and 3, respectively. We use the Dice coefficient (DSC), Jaccard, 95% Hausdorff Distance (95HD), and Average Surface Distance (ASD) to evaluate the 3D results. We conduct all experiments in the same environment with fixed random seeds (Hardware: single NVIDIA RTX A6000 GPU; Software: PyTorch 1.12.1+cu116 and Python 3.9.7).
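For reference, the optimizer and schedule described above can be set up as follows; the decay power 0.9 is an assumed common default and is not stated in the paper.

```python
import torch

def make_optimizer(model, total_iters=30_000, base_lr=5e-4, power=0.9):
    """AdamW with a polynomial-decay learning-rate schedule."""
    opt = torch.optim.AdamW(model.parameters(), lr=base_lr)
    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, lr_lambda=lambda it: (1.0 - it / total_iters) ** power)
    return opt, sched
```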
### Comparison with State-of-the-Art Methods
We adopt classical CNN- and transformer-based models, _i.e._, 2D-based {UNet [20], TransUnet [2]} and 3D-based {3D-UNet [4], UNETR [5]}, and train them on {2D Synapse, 3D MP-MRI} in an end-to-end manner 1.
Footnote 1: All comparison experiments are using their released code.
**Main Results** The results for 2D Synapse multi-organ segmentation and 3D liver segmentation are shown in Tables 1 and 2, respectively. The following observations can be drawn: (1) Our MORSE demonstrates superior performance compared to all other training algorithms. Specifically, compared to the UNet, TransUnet, 3D-UNet, and UNETR baselines, our MORSE with all experts selected obtains 3.36%\(\sim\)6.48% improvements in Dice across the two segmentation tasks, which validates its superiority. (2) The stochastic routing policy shows consistent performance benefits across all four network backbones in both 2D and 3D settings. Specifically, we observe that our SMoE framework improves all the
Table 2: Quantitative comparisons for liver segmentation on the Multi-phasic MRI dataset. The best results are indicated in **bold**.

| Method | DSC ↑ | Jaccard ↑ | 95HD ↓ | ASD ↓ | Method | DSC ↑ | Jaccard ↑ | 95HD ↓ | ASD ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3D-UNet (Baseline) [4] | 89.19 | 81.21 | 34.97 | 10.63 | UNETR (Baseline) [5] | 89.95 | 82.17 | 24.64 | 6.04 |
| + PointRend [9] | 89.55 | 81.80 | 30.88 | 10.12 | + PointRend [9] | 90.49 | 82.36 | 21.06 | 5.59 |
| + Implicit PointRend [3] | 88.01 | 79.83 | 37.55 | 12.86 | + Implicit PointRend [3] | 88.72 | 80.18 | 26.63 | 10.58 |
| + Ours (MoE) | 89.81 | 82.06 | 29.96 | 10.15 | + Ours (MoE) | 90.70 | 82.80 | 15.31 | 5.93 |
| + Ours (SMoE) | 90.16 | 82.28 | 28.36 | 9.79 | + Ours (SMoE) | 91.02 | 83.29 | 15.12 | 5.64 |
| + Ours (IAR) | 91.22 | 83.30 | 27.84 | 8.89 | + Ours (IAR) | 91.63 | 83.83 | 14.25 | 4.99 |
| + Ours (IAR+MoE) | 92.77 | 83.94 | 26.57 | 7.51 | + Ours (IAR+MoE) | 93.01 | 84.70 | 13.29 | 4.84 |
| + Ours (MORSE) | **93.59** | **84.62** | **19.61** | **6.57** | + Ours (MORSE) | **93.85** | **85.53** | **12.33** | **4.38** |
baselines, which is within expectation since the model is implicitly "optimized" given the evolved expert features. (3) IAR consistently outperforms PointRend across all the baselines (_i.e._, UNet, TransUnet, 3D-UNet, and UNETR), obtaining {1.59%, 1.07%, 2.03%, 1.14%} performance boosts on the two segmentation tasks, which highlights the effectiveness of our INR-based proposal. (4) With Implicit PointRend [3] equipped, all the models' performances drop: adding Implicit PointRend leads to performance drops of 2.78%, 5.57%, 1.18%, and 1.23% compared with the baselines (_i.e._, UNet, TransUnet, 3D-UNet, and UNETR) on the two segmentation tasks, respectively. Importantly, [3] utilizes INRs to produce different point-head parameters for each object with point-level supervision; as this implicit function does not directly optimize the anatomical regions, we attribute the drop to additional noise introduced during training, which leads to representation collapse. This further verifies the effectiveness of our proposed IAR. In Appendix 0.E, Figs. 2 and 3 provide visual comparisons from various models: MORSE yields sharper and more accurate boundary predictions than all the other training algorithms.
**Visualization of the IAR Module** To better understand the IAR module, we visualize the point features on the coarse prediction and the refined prediction after the IAR module in Appendix 0.E, Fig. 4. As shown, IAR helps rectify uncertain anatomical regions and thereby improves segmentation quality.
### Ablation Study
We first investigate MORSE equipped with UNet by varying \(\alpha\) (_i.e._, the stochastic rate) and \(N\) (_i.e._, the number of experts) on Synapse. The comparison results for \(\alpha\) and \(N\) are reported in Table 3. We find that \(\alpha=0.7\) performs best when the expert capacity is \(N=5\), and that reducing the number of experts degrades performance considerably. This indicates that our hyper-parameter settings are well chosen.
Moreover, we conduct experiments to study the importance of Adaptive Weight Adjustment (AWA) in Table 4. We observe that: (1) disabling AWA and training with \(\mathcal{L}_{\text{rend}}\) from scratch leads to unsatisfactory performance, as also echoed in [9]; (2) introducing AWA shows a consistent advantage over the other settings. This demonstrates the importance of Adaptive Weight Adjustment.
## 4 Conclusion
In this paper, we proposed MORSE, a new implicit neural rendering framework that has fine-grained control of segmentation quality by adaptively composing
Table 4: Ablation studies of the Adaptive Weight Adjustment (AWA).

| Method | DSC [%] ↑ | ASD [voxel] ↓ |
| --- | --- | --- |
| w/o AWA & train w/ \(\mathcal{L}_{\text{rend}}\) from scratch | 70.56 | 14.89 |
| w/o AWA & train w/ \(\mathcal{L}_{\text{rend}}\) from \(T/2\) | 75.42 | 12.00 |
| w/ AWA | **76.59** | **10.67** |
Table 3: Effect of stochastic rate \(\alpha\) and expert number \(N\).

| \(\alpha\) | DSC [%] ↑ | ASD [voxel] ↓ | \(N\) | DSC [%] ↑ | ASD [voxel] ↓ |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 75.41 | 11.96 | 1 (No MoE) | 75.11 | 11.67 |
| 0.2 | 75.68 | 11.59 | 2 | 75.63 | 11.49 |
| 0.5 | 76.09 | **10.43** | 3 | 75.82 | 11.34 |
| 0.7 | **76.95** | 10.67 | 4 | 76.16 | 11.06 |
| 0.9 | 74.16 | 11.32 | 5 | **76.59** | **10.67** |
coordinate-wise point features and rectifying uncertain anatomical regions. We also demonstrate the advantage of leveraging a mixture of experts, which enables better specialization of feature maps and improves performance. Extensive empirical studies across various network backbones and datasets consistently show the effectiveness of the proposed MORSE. Theoretical analysis further uncovers the expressiveness of our INR-based model.
|
2307.02447 | Using Rewrite Strategies for Efficient Functional Automatic
Differentiation | Automatic Differentiation (AD) has become a dominant technique in ML. AD
frameworks have first been implemented for imperative languages using tapes.
Meanwhile, functional implementations of AD have been developed, often based on
dual numbers, which are close to the formal specification of differentiation
and hence easier to prove correct. But these papers have focussed on
correctness not efficiency. Recently, it was shown how an approach using dual
numbers could be made efficient through the right optimizations. Optimizations
are highly dependent on order, as one optimization can enable another. It can
therefore be useful to have fine-grained control over the scheduling of
optimizations. One method expresses compiler optimizations as rewrite rules,
whose application can be combined and controlled using strategy languages.
Previous work describes the use of term rewriting and strategies to generate
high-performance code in a compiler for a functional language. In this work, we
implement dual numbers AD in a functional array programming language using
rewrite rules and strategy combinators for optimization. We aim to combine the
elegance of differentiation using dual numbers with a succinct expression of
the optimization schedule using a strategy language. We give preliminary
evidence suggesting the viability of the approach on a micro-benchmark. | Timon Böhler, David Richter, Mira Mezini | 2023-07-05T17:17:16Z | http://arxiv.org/abs/2307.02447v2 | # Using Rewrite Strategies for Efficient Functional Automatic Differentiation
###### Abstract.
Automatic Differentiation (AD) has become a dominant technique in ML. AD frameworks have first been implemented for imperative languages using tapes. Meanwhile, functional implementations of AD have been developed, often based on dual numbers, which are close to the formal specification of differentiation and hence easier to prove correct. But these papers have focussed on correctness not efficiency. Recently, it was shown how an approach using dual numbers could be made efficient through the right optimizations. Optimizations are highly dependent on order, as one optimization can enable another. It can therefore be useful to have fine-grained control over the scheduling of optimizations. One method expresses compiler optimizations as rewrite rules, whose application can be combined and controlled using strategy languages. Previous work describes the use of term rewriting and strategies to generate high-performance code in a compiler for a functional language. In this work, we implement dual numbers AD in a functional array programming language using rewrite rules and strategy combinators for optimization. We aim to combine the elegance of differentiation using dual numbers with a succinct expression of the optimization schedule using a strategy language. We give preliminary evidence suggesting the viability of the approach on a micro-benchmark.
differentiable programming, domain-specific language, optimization, term rewriting
This means that, for example, the type \(\mathsf{array}_{5}\ \mathsf{int}\) represents arrays of five integers. Additionally, we use the type \(\mathsf{fin}_{n}\) for indices, which represents integers in the range 0..n-1. To retrieve an element from an array of type \(\mathsf{array}_{n}\ \alpha\), we require the index to be of type \(\mathsf{fin}_{n}\). This is intended to prevent out-of-bounds array accesses. This is similar to the approach used in the Dex array language (Das, 2018).
Figure 1(a) shows the terms of the language. The language is intrinsically typed -- Term carries a parameter representing the type of the expression. Instead of defining the syntax and the type system separately, the language's constructs are always typed and creating an ill-typed expression is impossible. The typing rules do not use contexts; instead, each variable is labeled by its type (Bauer, 2018).
The terms are variables, function application, let-bindings, pair construction and projection, if-then-else, iteration, constants for real numbers, integers and indices, as well as pre-defined operations for array construction and indexing, arithmetic, equality checking and conversion. Every variable consists of its name and its type. The typing rule for function application expects a function of type \(\alpha\rightarrow\beta\) and an argument of type \(\alpha\) and yields a term of type \(\beta\). The abstraction case is based on the typing rule for lambda abstractions. Note that we again have to give both a name and a type for each variable. \(\mathsf{let}\) is similar to \(\mathsf{lam}\) in that it also binds a variable, except that we also give the value the variable should be bound to. \(\mathsf{mkpair}\) is used to build pairs and \(\mathsf{fst}\) and \(\mathsf{snd}\) are used to take them apart. We can perform branching with \(\mathsf{if}\ \mathsf{c}\ \mathsf{then}\ \mathsf{e1}\ \mathsf{else}\ \mathsf{e2}\), evaluating the first branch if the condition is not zero, and the second otherwise. \(\mathsf{ifold}\) allows bounded iteration. The fact that the loop index has type \(\mathsf{fin}\ \mathsf{n}\) is important when using it to access an array's elements, for example when computing the sum of an array. The array operations \(\mathsf{build}\), for constructing arrays, and \(\mathsf{geti}\), for accessing elements, are now dependently typed. This means that \(\mathsf{geti}\) cannot go out of bounds. The types of the two operations also reveal that they are essentially conversion functions, where \(\mathsf{build}\) converts from \(\mathsf{fin}\ \mathsf{n} \rightarrow \mathsf{a}\) to \(\mathsf{array}\ \mathsf{n}\ \mathsf{a}\) and \(\mathsf{geti}\) converts back. There is no \(\mathsf{length}\) operation; as the type of an array expression now carries its size, \(\mathsf{length}\) is superfluous. We also have arithmetic operations.
## 3. Automatic Differentiation
The first step in our implementation of AD is a dual numbers transformation (Kittelmann and Schuster, 2010). As can be seen in Figure 0(b), it is structurally recursive and transforms every occurrence of a real number into a pair of real numbers and leaves other type constructors unchanged. The idea is that each value is bundled with its derivative with regards to some input, so that both the normal result of the computation and the derivative are computed at the same time.
The transformation on terms is defined in Figure 1(b). Its structure follows the structure of the type transformation in that most of the cases of the transformation are trivial, except for those referring to real numbers. Variables have only their type changed. Similarly, the transformation on function application, \(\lambda\)-abstraction, if-then-else and iteration leaves their structure unchanged and simply recursively applies the transformation on the subexpressions. For the cases for addition and multiplication of real numbers, note that we write \((x,y)\) as shorthand for \(\mathsf{mkpair}\)\(x\)\(y\) and \(x_{1}\) and \(x_{2}\) as shorthand for \(\mathsf{fst}\)\(x\) and \(\mathsf{snd}\)\(x\). Other operations are unchanged, e.g. operations on pairs (like \(\mathsf{fst}\)), arrays (like \(\mathsf{build}\)), or integers (like \(\mathsf{addInt}\), \(\mathsf{=}\)\(\mathsf{or}\)\(\mathsf{fromInt}\)). This corresponds to the intuition that we actually just want to replace constants and arithmetic operations with their dual numbers equivalents.
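Since the figure with the full term transformation is not reproduced here, the two arithmetic cases can be written out explicitly in the shorthand introduced above. Writing \(\mathcal{D}(e)\) for the transformed term (a notation introduced here for illustration), this is a reconstruction of the standard dual-number rules, not a verbatim copy of the paper's figure:

\[\mathcal{D}(e_{1}+e_{2})=\big(\mathcal{D}(e_{1})_{1}+\mathcal{D}(e_{2})_{1},\;\mathcal{D}(e_{1})_{2}+\mathcal{D}(e_{2})_{2}\big)\]
\[\mathcal{D}(e_{1}*e_{2})=\big(\mathcal{D}(e_{1})_{1}*\mathcal{D}(e_{2})_{1},\;\mathcal{D}(e_{1})_{1}*\mathcal{D}(e_{2})_{2}+\mathcal{D}(e_{1})_{2}*\mathcal{D}(e_{2})_{1}\big)\]

where a real constant \(c\) is translated to the pair \((c,0)\).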
In the case of comparisons like \(<\), the transformed version simply retrieves the primal values from the dual numbers given as input and performs the comparison on those. As Boolean operators are not differentiable, the perturbations of the input numbers are simply discarded.
With the dual numbers transformation defined, we now want to use it to compute the gradient of a given model. The AD operators defined here are inspired by Shaikhha et al. (Shaikhha et al., 2017).
\[\mathsf{addZeroes}\;(\mathsf{v}:\mathsf{Array}_{n}\;\alpha):\mathsf{Array}_{n}\,(\alpha\times\mathsf{real}):=\mathsf{build}\;(\ldots)\]
As an example, consider the function vectorMap, which applies a function to every element of an array.
```
def vectorMap {n : Nat} {a b : Typ} : Term (array n a -> (a -> b) -> array n b) :=
  lam "v" (lam "f"
    (build' (lam "i" (app "f" (geti' (var "v" (array n a)) "i")))))
```
It is polymorphic with regards to the size of the array as well as to the type of the function and the array's elements. This is represented by a Lean function that takes a number (the size parameter) and two values of type Typ (the type parameters) as input and returns an Term expression. This way, polymorphic functions can be specialized to concrete, monomorphic ones. The concrete types can often be inferred from the context by the Lean type checker.
## 5. Optimization
We want to implement rewrite rules and the following optimization strategies, based on Visser et al. (Visser et al., 2013), in code.
\[\begin{array}{ll}
\text{(identifiers)} & x\\
\text{(rules)} & r\\
\text{(strategies)} & s::=x\mid r\mid\mathit{id}\mid\dot{z}\mid s_{1};s_{2}\mid s_{1}\leftarrow s_{2}\mid\mu x.\,s\mid\diamond(s)
\end{array}\]
Strategies can be seen as procedures that try to transform terms and either succeed, returning a new term, or fail. First, every rewrite rule can be seen as a strategy that, given a term \(t\), succeeds if the rule can be applied to \(t\) at the root (so there is no nondeterminism here, as we do not apply rules to subterms). We also have the identity strategy id, which always succeeds, leaving the term unchanged. On the other hand, the strategy \(\dot{z}\) always fails. We write \(s_{1};s_{2}\) to denote the sequential composition of two strategies \(s_{1}\) and \(s_{2}\). Strategy \(s_{1}\) is applied first, and if it succeeds, \(s_{2}\) is applied to the result. If either \(s_{1}\) or \(s_{2}\) fails, \(s_{1};s_{2}\) fails. Left choice \(s_{1}\gets s_{2}\) first attempts to apply \(s_{1}\). If the strategy \(s_{1}\) succeeds, its output is returned as the output of \(s_{1}\gets s_{2}\). If it fails, \(s_{2}\) is applied to the term. We also have the fixed point operator \(\mu\), which allows us to define recursive strategies.
We also make use of the following, derived, operation:
\[\text{repeat}(s):=\mu x.\ \ ((s;x)\leftarrow\text{id})\]
repeat(\(s\)) iteratively applies a strategy \(s\) as often as possible and stops once \(s\) fails. Note that repeat(\(s\)) can never fail; however it may loop indefinitely. For example, because id never fails, repeat(id) does not terminate on any input term. As rules are only applied to the root, we need a way to rewrite the subterms of a given term. This is addressed by the \(\diamond\) operator. A strategy \(\diamond(s)\) tries to apply \(s\) to exactly one subterm of the given term and fails if there is no subterm to which \(s\) can be applied successfully.
In a functional programming language, this can be done by representing strategies as functions and combinators as higher-order functions which take and return strategies. Most of the combinators defined in this section are based on those of the ELEVATE strategy language (Friedman et al., 2013), which is itself inspired by Strategy (Visser et al., 2013; Visser et al., 2013).
The type of an expression is Term a for some a. What should the type of a strategy look like? The type Term a -> Term a seems sensible, but it assumes that strategies always produce an expression. Strategies may, however, fail. So an improved type would be Term a -> Option (Term a), where a value of type Option (Term a) can either be none, representing failure, or some x (where x is of type Term a), representing success. In our case, this leaves one issue open. We need to be able to generate fresh variable names (as part of capture-avoiding substitution, for example). How can we do this in a purely functional language? The answer is to combine Option with the state monad, which allows us to thread a state through our computation. In this case, the state is a natural number which serves as a counter that is incremented whenever a new variable is produced. The counter is then used as part of the returned variable name.
This leads us to the following definitions:
```
def RewriteResult (a : Type) : Type := Nat -> Option (a × Nat)
```
The meaning of RewriteResult is that it represents a computation which takes a counter value and then either fails or returns an output, together with a new, possibly increased counter value. It is a monad, allowing the use of do-notation.
A strategy is then a function taking an expression and returning a RewriteResult, while preserving the type of the expression.
```
def Strategy : Type := {a : Typ} -> Term a -> RewriteResult (Term a)
```
We can now define a function freshM, which returns a variable name based on the current counter and increments said counter:
```
def freshM : RewriteResult String
  | i => some ("x" ++ toString i, i + 1)
```
We now implement the strategy combinators. First we have id, which takes a term and a counter and returns both unchanged.
```
def id : Strategy := fun p i => some (p, i)
```
Failure \(\dot{z}\) is implemented as a function fail, which always returns none.
```
def fail : Strategy := fun _ _ => none
```
Sequencing \(s_{1};s_{2}\) is represented as seq s1 s2 (abbreviated as s1 ;; s2). The code uses do-notation to first apply strategy s1 to term p, and then, on success, s2.
```
def seq (s1 s2 : Strategy) : Strategy :=
  fun p => do s2 (← s1 p)
```
We write left choice \(s_{1}\gets s_{2}\) as lchoice s1 s2 (abbreviated as s1 <+ s2). The implementation uses the <|> operator, which takes two computations and returns the result of the left one if it succeeds, and that of the right one otherwise.
```
def lchoice (s1 s2 : Strategy) : Strategy :=
  fun p => s1 p <|> s2 p
```
We do not introduce a fixed point construct \(\mu x.\,s\); rather, we define strategies recursively using Lean's support for recursive definitions. This can be seen in the definition of repeat(\(s\)).
```
partial def repeat (s : Strategy) : Strategy
  | _, p => ((s ;; repeat s) <+ id) p
```
Lean requires us to add the partial keyword before def, indicating that we cannot guarantee termination.
We now consider traversals, which are functions that transform strategies to allow us to rewrite subexpressions of the current expression.
```
def Traversal := Strategy -> Strategy
```
For each constructor in the language, we define traversals for each subexpression of that constructor. For the function application constructor app, which contains two subexpressions (function and argument), we need two traversals: function s, which applies s to the first subexpression, and argument s, which applies it to the second. If function s or arguments are applied to anything other than a function application, they fail.
```
def function : Traversal
  | s, _, app f a => do return app (← s f) a
  | _, _, _ => failure

def argument : Traversal
  | s, _, app f a => do return app f (← s a)
  | _, _, _ => failure
```
In the same way, we define traversals for the other constructors, one for each of their respective subexpressions.
We can now implement the combinator one s (\(\diamond(s)\)), which applies s to one subexpression. The implementation given here is deterministic, as it is biased towards the subexpression on the left. one works by trying to apply \(s\) to every type of subexpression in order.
```
def one (s : Strategy) : Strategy :=
  function s <+ argument s <+ -- other traversals omitted
```
one by itself only allows us to transform expressions that are direct subexpressions of the root of the abstract syntax tree. To allow transformations of more deeply nested expressions, we define the recursive topDown traversal. topDown s first tries to apply s at the root and, if that fails, recurses into the subexpressions until it finds one expression where s succeeds.
```
partial def topDown : Traversal :=
  fun s => s <+ one (topDown s)
```
Combining topDown with repeat, we get normalize s, which repeatedly applies s until there is no subexpression left to be transformed.
```
def normalize (s : Strategy) : Strategy :=
  repeat (topDown s)
```
We also define run, which lets us execute a strategy on a term, by initializing the variable counter to 0, applying the strategy, and then discarding the new counter at the end.
```
def run : Strategy -> Term a -> Option (Term a)
  | s, p => Prod.fst <$> s p 0
```
### Efficient AD
Deriving the gradient of a function \(f\) via forward mode AD involves \(n\) computations of the function, where \(n\) is the size of \(f\)'s input vector. This would appear to make forward mode AD unusable for training large machine learning models.
To address this, Shaikhha et al. (Shaikhha et al., 2017) present a set of rewrite rules. Using these to optimize their programs, they are able to achieve performance on their benchmarks that is competitive with or superior to frameworks using reverse mode AD. We can implement these rules as functions of the Strategy type, using pattern matching.
As an example consider the following rule, where constructing an array and immediately retrieving a single element from it is optimized to a simple function application.
```
def getBuild : Strategy
  | _, get' (build' e1) e2 => return app e1 e2
  | _, _ => failure
```
The correctness of the rule follows from the following equality that holds in the semantics (not shown) of our DSL.
\[\llbracket\,\mathsf{get'}\;(\mathsf{build'}\;e_{1})\;e_{2}\,\rrbracket \;=\; \llbracket\,\mathsf{app}\;e_{1}\;e_{2}\,\rrbracket\]
## 6. Evaluation
We conducted a micro-benchmark to measure the impact of the optimization rules on the performance of our implementation. For the benchmarks we use Python 3.6.9, Futhark 0.22.0, and gcc 7.5.0. The execution is done on an Intel Pentium G860 (3 GHz) with 4 GB of memory. Our implementation converts the terms to a string representing Futhark (Futhark, 2017) code, which is then compiled by the Futhark compiler. We use the Futhark compiler's C backend.
The micro-benchmark consists of a very simple program which first generates an array of a given length where every entry is the same constant value and then computes the gradient of vectorSum on that array, where vectorSum is a function that sums all the entries of an array. This program is somewhat trivial, in that the vectorSum function is linear and therefore the gradient is always an array consisting of only 1s. This benchmark should merely demonstrate that the optimizations from Sec. 5.1 can in principle lead to asymptotic speedups.
We tested three different versions. First, a program that is compiled to Futhark with no optimizations applied before compilation. Second, the same program with optimizations applied before compilation to Futhark. Due to technical issues with compilation, the unoptimized program is generated from the typed embedding while the optimized one is generated from an untyped one. This should not affect the qualitative observations we make about the results. Third, we have a hand-written Futhark program, using a dual numbers library to implement forward mode AD.
We measured execution time for vector sizes from 2500 to 50000. The results can be seen in Figure 3. We give one plot comparing the three programs and another one focussing only on the runtime of our optimized one.
The left plot shows that the runtime for the unoptimized program in our DSL (orange) increases faster than linearly. This is expected, as forward mode AD leads to an overhead proportional to the size of the input vector. It can be seen however, that the optimized program (blue) is asymptotically faster than the unoptimized one. The rewrite rules are able to optimize away the nested loops involved in computing the gradient of vectorSum.
Additionally, the hand-written Futhark program (green) is also asymptotically slower than the optimized one. As all three versions are compiled by the optimizing Futhark compiler, this demonstrates that we were able to express application-specific optimizations for differentiability in our strategy-based approach which are not included in the fixed optimization passes of the Futhark compiler.
## 7. Related Work
Forward-mode AD tends to be implemented with dual numbers. Alternatively, reverse-mode AD allows computing the gradient in one execution, but incurs a complication, as the control flow for computing the derivative has to be inverted. Some reverse-mode AD frameworks are implemented as non-compositional transformations, including Zygote (Zygote, 1977) for Julia, Enzyme (Futhark, 2010) for LLVM, and Tapenade (Futhark, 2010) for Fortran and C. These lack correctness proofs. Reverse-mode AD has also been implemented compositionally with continuations (Zagoza and Pagani, 2010) or effect handlers (Futhark, 2010) as well as mutable state, which may require advanced techniques like separation logic to verify. An abstract description of differentiation is given by categorical models of differentiation (Becker and Pagani, 2010; Pagani, 2010; Pagani, 2010). These do not directly yield an AD algorithm, but can be used to verify one, as has been done by Crutwell et al. (Crutwell et al., 2010). Another compositional approach comes from Elliott (Elliott, 2010), who implements reverse-mode AD for a first-order language by reifying and transposing the derivative. Mazza and Pagani (Mazza and Pagani, 2010) verify the correctness of AD for a Turing-complete higher-order functional language, but use an inefficient algorithm. We instead apply a forward-mode transformation and recover efficiency by optimizing the code afterwards, making use of the flexibility of rewrite strategies.
## 8. Conclusion
We described the implementation of a higher-order functional array language supporting differentiable programming. Previous work (Futhark, 2010) has not expressed optimizations on differentiated programs using rewrite strategy languages (Dwork et al., 2010; Zygote, 2010), and rewrite strategy languages have not been used for optimizing AD. We showed the effect of the optimizations on a micro-benchmark.
###### Acknowledgements.
This work was funded by the German Federal Ministry of Education and Research (BMBF) and the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) within their joint support of the _National Research Center for Applied Cybersecurity ATHENE_ and the HMWK via the project 3rd Wave of AI (3AI).
|
2302.11583 | The Digitization of Historical Astrophysical Literature with
Highly-Localized Figures and Figure Captions | Scientific articles published prior to the "age of digitization" in the late
1990s contain figures which are "trapped" within their scanned pages. While
progress to extract figures and their captions has been made, there is
currently no robust method for this process. We present a YOLO-based method for
use on scanned pages, after they have been processed with Optical Character
Recognition (OCR), which uses both grayscale and OCR-features. We focus our
efforts on translating the intersection-over-union (IOU) metric from the field
of object detection to document layout analysis and quantify "high
localization" levels as an IOU of 0.9. When applied to the astrophysics
literature holdings of the NASA Astrophysics Data System (ADS), we find F1
scores of 90.9% (92.2%) for figures (figure captions) with the IOU cut-off of
0.9 which is a significant improvement over other state-of-the-art methods. | Jill P. Naiman, Peter K. G. Williams, Alyssa Goodman | 2023-02-22T19:00:01Z | http://arxiv.org/abs/2302.11583v1 | The Digitization of Historical Astrophysical Literature with Highly-Localized Figures and Figure Captions
###### Abstract
Scientific articles published prior to the "age of digitization" in the late 1990s contain figures which are "trapped" within their scanned pages. While progress to extract figures and their captions has been made, there is currently no robust method for this process. We present a YOLO-based method for use on scanned pages, after they have been processed with Optical Character Recognition (OCR), which uses both grayscale and OCR-features. We focus our efforts on translating the intersection-over-union (IOU) metric from the field of object detection to document layout analysis and quantify "high localization" levels as an IOU of 0.9. When applied to the astrophysics literature holdings of the NASA Astrophysics Data System (ADS), we find F1 scores of 90.9% (92.2%) for figures (figure captions) with the IOU cut-off of 0.9 which is a significant improvement over other state-of-the-art methods.
scholarly document processing, document layout analysis, astronomy.
## 1 Introduction
With the rise of larger datasets and the ever increasing rate of scientific publication, scientists require the use of automated methods to parse these growing data products, including the academic literature itself. In addition to being a vital component of open science [1; 2], easily accessed and well curated data products are strongly encouraged as part of the submission process to most major scientific journals [3]. However, data products in the form of figures, tables and formulas which are stored in the academic literature, especially from the "pre-digital" era, published prior to \(\sim\)1997, are not accessible for curation unless methods are developed to extract this important information.
The extraction of different layout elements of articles is an important component of scientific data curation, with the accuracy of extraction of the elements such as tables, figures and their captions increasing significantly over the past several years [4; 5; 6; 7]. A large field of study within document layout analysis is the "mining" of PDFs as newer PDFs are generally in "vector" format - the document is rendered from a set of instructions instead
of pixel-by-pixel as in a raster format, and, in theory, the set of instructions can be parsed to determine the locations of figures, captions and tables [8; 9; 10].
However, this parsing is non-trivial and many methods have been developed to complete this task. If the PDF's vector format is well structured, then text and images can be extracted by parsing this known PDF format. Several packages exist which output text and/or images from such PDF files [11]. Once such information is extracted, several methods for organizing of raw figures and text into figure-figure caption pairs, tables and other layout components (e.g. section headings, mathematical formulas) exist. Historically, some of the most popular include heuristic methods where blocks of text are classified as figure or table captions based on keywords (like "Fig." or "Figure") [12; 13].
Deep learning methods have become popular recently for vector and raster documents [14; 15; 16], including those that use methods of semantic segmentation [17] and object detection [18]. When vector-PDFs are available, these deep learning methods are often combined with heuristic methods to extract text during the layout analysis process [15]. While these methods are vital to the extraction of data products from recent academic literature, pre-digital literature is often included in digital platforms with older articles scanned at varying resolutions, and deep learning methods developed with newer article training sets often perform poorly on this pre-digital literature [19]. Additionally, layouts, fonts, and article styles are typically different for historical documents when compared to "born-digital" scientific literature [19]. In these cases, text extraction must be performed with optical character recognition (OCR), and figures and tables are extracted from the raw OCR results. When applied to raster-PDFs with text generated from OCR, deep learning document layout analysis methods trained with newer or vector-based PDFs are often not as robust [17; 19]. While progress has been made in augmenting these methods for OCR'd pages, especially for electronic theses and dissertations (ETDs) [19], much work can still be done to extract layout elements from these older, raster-based documents.
Large "benchmark" raster-based datasets are available, however they tend to be comprised of a majority of newer articles. For example, only about 2.6% of the widely used PubLayNet dataset introduced in [20] are articles older than 1997 and benchmark datasets focused on historical scientific articles are less readily available [19]. Additionally, definitions of what constitutes different layout elements - figures, tables, and their captions - can differ across datasets [4; 20], and large hand annotated benchmark datasets can suffer from inconsistencies in layout element definitions [21].
In what follows, we outline a new methodology for extracting figures and figure captions from a dataset that includes both vector- and raster-based PDFs from the pre-digital scientific literature holdings of the NASA Astrophysics Data System (ADS)1. Our model applies deep learning object detection methods in combination with heuristic techniques to scans of article pages as well as the text features generated from processing scans through the Tesseract OCR engine [22], and combines the results from mining any vector-based PDFs for their captions with pdffigures2[13] in a post-processing step.
Footnote 1: [https://ui.adsabs.harvard.edu/](https://ui.adsabs.harvard.edu/)
While the focus of our model is the digitization of astronomical literature - one of the original "big data" sciences [23; 24] - because our method relies heavily on features generated with OCR, our methodology is extendable to other large scientific literature holdings which have already been OCR'd [e.g. the HathiTrust U.S. Federal Document collection2, 25]. Additionally, the design of our pipeline is heavily motivated by both the data (astronomical literature) and the expected users of our model (scientists and digital librarians). Thus, we rely on open-source software (e.g. Tesseract) and programming languages used by both communities (e.g. Python). The outline of our paper is motivated by this possible wide range of utility: Section 2 details our dataset and outlines the design considerations of our pipeline, Section 4 discusses our model architecture and accuracy and in particular Section 4.2 discusses the generalizability of our method to other "benchmark" datasets. Especially relevant to other fields and the larger document layout
analysis community is our discussion of the relationship of extraction metrics. In this discussion within Section 4.1, we consider how the popular object detection metric intersection-over-union (IOU) can be used to quantify the information extracted from localized page objects. We outline our future plans for this work in Section 5. All code is available on GitHub3.
Footnote 3: Full project details for all work housed at: [https://github.com/ReadingTimeMachine](https://github.com/ReadingTimeMachine)
## 2 Design Considerations and Data Pre-processing
### The Data
The dataset used in this work is a subset of the English-language literature archived on the NASA Astrophysics Data System (ADS) of articles prior to the "era of digitization" - publishing year \(\lesssim\) 1997 - as shown in the top panel of Figure 1. Articles span the publications of The Astronomical Journal (AJ), The Astrophysical Journal (ApJ) including The Astrophysical Journal Supplement Series (ApJS) from the years of 1852-1997 (middle panel of Figure 1). The dataset is also a subset of the literature featured in the Astrophysics Data System All-Sky Survey (ADSASS), which was an effort to associate each article with its place in the sky from the objects studied within the text [26; 27]. Chosen for this work are articles that are thought to contain images of the sky, as determined by heuristics based on the color distributions of pages [26].
The bottom panel of Figure 1 shows the distribution of the subset of our larger database that is annotated with classes of figure, figure caption, table and math formula. These annotated pages were chosen randomly from all articles found with the heuristic methods of [26]. For this work, we will focus on the extraction of figures and figure captions, however we include the annotations for tables and math formulas in the downloadable data accompanying this paper as these elements are often of interest to the document layout analysis community [20; 21; 28]. Our annotation process is discussed more fully in Section 2.4.
Figure 1: The top panel shows the age distribution of all pre-digital ADS listings and the subset that constitutes our annotated database. The middle panel shows the distribution of our full database across time and publisher for a total of \(\sim\)10k articles (out of the \(\sim\)56k of total pre-digital ADS listings from these three publishers). The bottom panel shows the distribution of figures, figure captions, math formulas and tables per year in the main annotated subset of pages taken from articles in the full database. Totals across annotated subset of 5515 pages are: 5010 figures, 4925 figure captions4, 1040 math formulas, 677 tables. There are 553 pages without additional page objects.
The articles present in this work include those which are in a vector-PDF format (and therefore potentially parsable by "PDF-mining tools"), and those that are in the raster-PDF format, with the majority of the articles in a raster-PDF format and the relative percentage of articles in vector-PDF format decreasing for older articles.
Here, we define "PDF-mining" as the process by which the set of "instructions" which are used to construct vector-PDF documents is reverse-engineered to find the locations and content of page objects like text, tables and figures [11; 13; 29; 30; 31; 32; 33]. In addition to these pure-heuristic methods, newer methods often make use of deep learning on rastered page images in combination with heuristics to locate and extract page objects which can often be more precise than pure heuristic methods alone [e.g. 15]. However, due to their reliance on heuristics for text extraction, these newer methods are not accurate enough to extract page objects without access to the parsable instructions of vector-PDFs, and therefore have been shown to not be accurate on historical documents [19; 34]. In what follows, we limit our analysis to pure-heuristic based PDF mining software for simplicity, as the parsablity by these tools will give an estimation of how many of our articles are encoded with a set of instructions in the vector-PDF format.
Determining whether or not an article is parsable by PDF mining tools requires a careful inspection of each PDF page, parsing outputs using mining software, and the quantification of how many words, figures, and tables are correctly parsed. For example, applying the PDF-image extraction tool pdfimages5 to several test pages generally results in corrupted output image files. However, this process takes considerable time and computation to scale for all articles in our full dataset. Thus, we must estimate parsability of the PDFs in other ways.
Footnote 5: [https://www.xpdfreader.com/pdfimages-man.html](https://www.xpdfreader.com/pdfimages-man.html)
As a first _estimation_ of parsability, we apply the PDF mining software GROBID [11] and pdffigures2[13] to the articles associated with our hand-annotated pages and calculate their "parsability" with two metrics. We look for parsed article outputs in which both figure and table numbers start at one and increase uniformly by one to their maximum figure and table number. If these metrics do not hold it is likely because there is a missing or double-counted figure or table. For these estimates, GROBID and pdffigures2 are chosen as they are widely used PDF mining software [13; 35] and are used frequently to extract text and page objects specifically from scientific literature [36; 37; 38].
In this analysis we assume the number system is either whole numbers (e.g. Figure 5 or Fig. 5) or roman numerals (e.g. Table 4 or Tab. 4). Objects tagged as tables or figures by pdffigures2 that do not follow these numbering conventions account for 0.7% of all objects. In GROBID these non-standard numbering systems account for 1.3% of all objects. We further assume that these numbering systems are not mixed for a page object, but both may be operating in the same article, as tables are often enumerated with roman numerals while figures are counted with whole numbers. Thus, we define the PDF as parsable if _either_ the whole number or roman numeral system produces monotonically increasing integer figure and table numbers, each starting their count at one.
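A minimal sketch of this check (assuming the figure or table labels have already been extracted from the GROBID or pdffigures2 output as strings; the helper names are illustrative) might look like:

```python
ROMAN = {"i": 1, "v": 5, "x": 10, "l": 50, "c": 100}

def roman_to_int(numeral):
    """Convert a roman numeral (e.g. 'IV') to an integer."""
    total, prev = 0, 0
    for ch in reversed(numeral.lower()):
        value = ROMAN[ch]
        total = total - value if value < prev else total + value
        prev = max(prev, value)
    return total

def is_parsable(labels):
    """True if the labels form 1, 2, ..., N in either whole numbers or
    roman numerals (the two systems are not mixed within one object type)."""
    for convert in (int, roman_to_int):
        try:
            numbers = sorted(convert(label) for label in labels)
        except (ValueError, KeyError):
            continue
        if numbers == list(range(1, len(numbers) + 1)):
            return True
    return False

print(is_parsable(["1", "2", "3"]))    # True
print(is_parsable(["I", "II", "IV"]))  # False: "III" is missing
```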
The results of these estimates are shown in Figure 2. GROBID and pdffigures2 are able to parse articles best in the years \(\approx\)1945-1990 as shown in the upper panel of Figure 2 with GROBID the more successful of the two in this time span. In general, parsability as measured by tables is higher, which is to be expected as figures can often be labeled by words other than "Fig" or "Figure" (e.g. "Plate" in this dataset) while tables are very rarely labeled with words other than "Table" or "Tab".
The peak of GROBID's parsing abilities from \(\approx\)1980-1990 in figures (center panel of Figure 2) and tables (bottom panel of Figure 2) is likely due to the increase of articles from the Astronomical Journal (AJ) during this time (see center panel of Figure 1) which appear to be comparable or slightly more parsable than the other two journal formats.
Taken over all articles, the parsability using either whole numbers or roman numerals with pdffigures2 for figures (tables) is \(\approx\)0.7% (1.1%). For GROBID this percentage increases to \(\approx\)9.0% (33.7%) for figures (tables). Our estimation does not account for any erroneously discovered figures or tables; accounting for such false positives would likely decrease the accuracies reported here for these tools. The true accuracies are likely lower still, as our estimates do not include any checks for the correctness of the mined text.
Finally, we remark here that there are many PDF parsers available beyond the three that we have tested here (pdfimages, GROBID, pdffigures2, with our focus on the latter two) which may provide further parsability improvements [29; 30; 31; 32; 33].
### Pipeline development
Our final goal for this dataset is the hosting of figure-caption pairs on the Astronomy Image Explorer (AIE) database6. Currently, a subset of the born-digital ADS holdings - articles housed within the American Astronomical Society Journals (AAS) - automatically populate AIE with their figure-caption pairs. Thus, the pipeline described here begins with the initial OCR'ing of pages and ends with the extraction of figures and their captions by identifying the regions around the figures and the OCR'd words included in the caption region for hosting on a platform such as AIE.
Footnote 6: [http://www.astroexplorer.org/](http://www.astroexplorer.org/)
As the audience for this work is likely to be a mixture of scientists and digital librarians, we focus our efforts on developing a Python-based processing and training pipeline, as this is a language with great overlap between these two populations. Figure 3 shows the outline of our full pipeline - from data generation, through annotation, to training our deep learning model, and finally post processing the model results.
### OCR and Image Processing
Our deep learning model makes use of OCR features for training (see Section 3 for model details), which are generated by processing each page with Tesseract's OCR engine [22]. We use Tesseract parameters to find all English (lang=eng) text on the page, without any assumed format of lines or paragraphs (psm=12), the page rotation (OSD, psm=12), and to process the page with the LSTM OCR engine (oem=1). These parameters allow Tesseract to find all text on the page independent of whether the text is in a paragraph or on an image; Tesseract does not locate or tag equations or images explicitly. These parameters allow us to find the majority of text bounding boxes on a page accurately; however, the resulting OCR text can be noisy and non-English characters are not included. Cleaning of noisy, mixed-language OCR text is a significant ongoing field of study in the document layout analysis community [39; 40; 41; 42] and is beyond the scope of this paper, but is a subject of future work [43; 44].

Figure 2: PDF “parsability” as estimated through the PDF-mining tools GROBID and pdffigures2 in either whole numbers or roman numerals as percentages of total articles in a 10-year bin. An article is parsable if the sorted returned list of figures (tables) increases monotonically by one, starting at the figure (table) number “1”. Estimates for parsability over all articles in our dataset are \(\approx\)1-33% (see text for details).
Each randomly selected PDF page is processed into both a TIFF format (temporarily stored for OCR'ing) and JPEG format (for the extraction of gray-scale features later on in our pipeline). Original PDF articles are stored within ADS from high-resolution TIFF scans [45]. As the PDF articles maintain the resolution of the original scans and are more fully supported for bulk download from ADS, we make use of these article formats for this work. We extract both TIFF and JPEG from the article PDFs at high resolution (DPI=600) and resize the resulting images to half their length and width to avoid pixelation; we do not implement antialiasing. TIFF images are used for the OCR process as there is evidence they produce fewer errors than other image formats [46]. The temporary TIFF image is passed through Tesseract using a Python wrapper7 and utilizing Tesseract's optimization for OCR'ing full pages. Outputs are stored in the hOCR format. In this step, basic image preprocessing is applied with OpenCV[47] and its associated Python wrapper8 (thresholding and Gaussian filtering) to address any artifacts on the page.
Footnote 7: [https://pypi.org/project/pytesseract/](https://pypi.org/project/pytesseract/)
Footnote 8: [http://pypi.org/project/opencv-python/](http://pypi.org/project/opencv-python/)
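A condensed sketch of this step, assuming pdf2image for the PDF rasterization (any PDF-to-image route would work) together with the pytesseract and opencv-python wrappers noted above, is given below; the exact preprocessing parameters are illustrative:

```python
import cv2
import numpy as np
import pytesseract
from pdf2image import convert_from_path

def ocr_page(pdf_path, page_number):
    """Rasterize a single PDF page, lightly clean it, and return hOCR output."""
    # Rasterize at high resolution, then halve the dimensions.
    page = convert_from_path(pdf_path, dpi=600,
                             first_page=page_number, last_page=page_number)[0]
    page = page.resize((page.width // 2, page.height // 2))

    # Basic preprocessing: grayscale, Gaussian filtering, thresholding.
    gray = np.array(page.convert("L"))
    gray = cv2.GaussianBlur(gray, (3, 3), 0)
    _, clean = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Sparse-text page segmentation with OSD, LSTM engine, English only.
    return pytesseract.image_to_pdf_or_hocr(
        clean, lang="eng", config="--psm 12 --oem 1", extension="hocr")
```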
In conjunction with the deep learning model, we use image processing techniques to heuristically find potential figure boxes as well. The locations of the OCR'd words are used to mask out text and the modified pages are processed through a basic shape finder built with OpenCV, tuned to look for rectangles (four corners and sides comprised of approximately parallel lines). This "rectangle finder" is applied to several filtered versions of the page (histogram of oriented gradients, dilation, color-reversal, and various levels of thresholding). The list of rectangles is culled with K-Means clustering on the locations of square corners, checking for artifact-rectangles which are small in size, and rectangles that are likely colorbars and not figures due to their aspect ratio.
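A stripped-down version of such a rectangle finder (omitting the multiple filtered page versions and the K-Means culling of corners) could be sketched as:

```python
import cv2

def find_candidate_rectangles(gray, word_boxes, min_area=5000, max_aspect=15):
    """Mask out OCR'd words, then keep four-cornered contours that could
    plausibly be figure frames, culling tiny boxes and colorbar-like shapes."""
    masked = gray.copy()
    for (x1, y1, x2, y2) in word_boxes:   # word boxes taken from the hOCR output
        masked[y1:y2, x1:x2] = 255        # paint text regions out

    edges = cv2.Canny(masked, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    rectangles = []
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) != 4:              # keep only four-cornered shapes
            continue
        x, y, w, h = cv2.boundingRect(approx)
        aspect = max(w, h) / max(1, min(w, h))
        if w * h >= min_area and aspect <= max_aspect:
            rectangles.append((x, y, x + w, y + h))
    return rectangles
```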
OCR'ing a page and shape-finding with OpenCV takes approximately 20-25 seconds per page (tested on six cores of an Apple M1 Max with 64 Gb of RAM).
### Annotations and Class Definitions
Figure 3: Our overall pipeline is shown as four main steps. OCR and image processing is discussed in Section 2.3. Class definitions and “codebook”, along with annotations and PDF mining are discussed in Section 2.4. Feature selection and deep learning model description is housed in Section 3.1, and post-processing techniques are discussed in Section 3.2. (See Appendix 6 and Figure 10 for a larger breakdown of these steps.)

Before delving into the details of our deep learning model, we consider several aspects of our annotation process that are necessary for a clear understanding of what our model is endeavoring to locate on each article page.
We begin by defining the classes of figure and figure caption as there is often disagreement in the literature and occasionally between annotators of the same dataset [4; 21]. Here, as shown in Figure 4, we define a figure as the collection of one or more panels on a single page which would be referred to as a single figure in a scientific article (i.e. "Figure 3"). This is different than other works which often treat each "sub-figure" as a separate figure [21]. Additionally, figures are defined to include all axis labels and titles. When figures are spread across multiple pages (often delineated with captions such as "Fig 1b." or "Figure 5, continued") each page of figures is classified as a separate figure. If a figure caption extends horizontally further than its associated figure, the figure is extended horizontally to the edges of the figure caption (see magenta lines in Figure 6). These definitions retain the uniformity of other definitions (e.g. [20]) while defining figure and caption regions by non-overlapping boxes. Finally, except in cases of unusual figure caption placement, the figure bounding boxes do not include the figure captions, in contrast to other definitions (e.g. 4; 19; 20), as part of our goal is to extract figure-caption pairs, we must delineate between these two different kinds of objects on the article page.
Figure captions are defined to be the caption text associated with these figure objects. While there are many figures without captions, with the exception of a few cases, the majority of captions are on the same article page as their associated figures. When more than one figure caption is potentially present (e.g. a sub-figure caption like "Fig 4b") along with a longer figure caption, we choose the longer figure caption as the caption associated with the figure if it is on the same page as the figure. If only a sub-figure caption is present on a page with a figure, we define this sub-figure caption as the caption of the figure.
Document layout objects are classified by hand using MakeSense.ai [48] with the JPEG formatted images. Once the hand annotation is completed, checks using the OCR result are performed. Specifically, the bounding boxes for the figure captions are modified to encompass the OCR word boxes as these are often offset or larger than the text presented in the original grayscale article page. When the OCR results are poor and do not capture the entirety of the caption, this will lead to an offset between visual and processed boxes. For example, if the skew of the page is significant, the scan too noisy, or the text too light, Tesseract may miss many words in the caption, leading to a bounding box that is significantly smaller than that which is detected visually by an annotator. Even under ideal conditions, parts of individual letters can be truncated and therefore excluded from the OCR bounding box9. However, as we ultimately will be extracting the OCR'd text (for hosting on AIE in future work), we ignore these edge cases - in our annotated dataset a single instance was reported. By including these noisy instances both here and in future annotation campaigns, our training data will include the noise that is expected to occur in OCR'd pages in the full article corpus. This is a crucial part of the annotation process to ensure we localize the text information of interest _and not a bounding box only on the grayscale image_ which is often offset from the OCR results.
Footnote 9: This is a common issue with OCR engines and comes up often as a question online for Tesseract in particular, e.g. [https://stackoverflow.com/questions/57033120/bounding-boxes-around-characters-for-tesseract-4-0-0-beta-1](https://stackoverflow.com/questions/57033120/bounding-boxes-around-characters-for-tesseract-4-0-0-beta-1) (e.g. 49).
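As a simplified illustration of the bounding-box adjustment described above (boxes written as (x1, y1, x2, y2) tuples; the helper names are ours, not part of any annotation tool), a hand-drawn caption box can be grown to contain the OCR word boxes it overlaps:

```python
def overlaps(box_a, box_b):
    """True if two (x1, y1, x2, y2) boxes intersect."""
    return not (box_a[2] <= box_b[0] or box_b[2] <= box_a[0] or
                box_a[3] <= box_b[1] or box_b[3] <= box_a[1])

def snap_to_ocr(annotation, word_boxes):
    """Expand a hand-drawn caption box so it fully contains the OCR word
    boxes it overlaps, matching the stored annotation to the OCR output."""
    x1, y1, x2, y2 = annotation
    for wb in word_boxes:
        if overlaps(annotation, wb):
            x1, y1 = min(x1, wb[0]), min(y1, wb[1])
            x2, y2 = max(x2, wb[2]), max(y2, wb[3])
    return (x1, y1, x2, y2)
```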
Finally, it is at this stage that we pass our PDF pages through the PDF mining software pdffigures2[13] to extract any figure and caption boxes for vector-PDFs. As we do not necessarily know _a priori_ which pages will be stored in vector-PDF, we run pdffigures2 on all pages. Found figures and figure captions are stored for combination with our model's results in a post-processing step (see Step 3 in Section 3.2).
Both modified hand-annotations and the results from PDF mining are stored in the YOLO-annotation style XML files [50; 51].
## 3 Deep Learning Model Design and Training
In what follows, we discuss our development of a deep learning model that relies heavily on features derived from the OCR results of raster-PDFs to make use of the preponderance of these types of PDFs in our dataset.
### Model Design and Feature Selection
Typical modern methods rely on deep learning techniques to detect layout elements on pages [14], often in combination with heuristics [15]. Methods span the range of object detection using models like YOLO [15; 19; 52] to, more recently, Faster R-CNN [53; 54; 55; 21; 56] and Mask-CNN [57; 58; 59]. Additionally, several pixel-by-pixel segmentation models have been proposed using semantic segmentation [17] and fully convolutional networks [60; 61], including "fully convolutional instance segmentation" [62; 63; 64].
Often these models employ a variety of features derived from article pages as inputs alongside or in place of the unprocessed page. Some of the more popular recent methods leverage image processing and computer vision techniques [21], including connected component analysis [65; 66; 67; 68] and the distance transform [56], or some combinations thereof [69].
While the aim of many methods is to detect page objects before any OCR process [e.g. 70; 71; 72; 6; 6; 70; 72], here we implement methods that can be applied after in an effort to support future extensions of our work to large digital archives which are constructed from previously-OCR'd articles such as those within the HathiTrust and the historical documents in the Internet Archive [73; 74; 75]. In what follows, we use a Tensorflow implementation of YOLO-v510[76; 77] and focus our efforts on feature exploration by utilizing a set of features derived from the OCR outputs themselves with the goal to choose the smallest number of the "best" features on which to build our model.
Footnote 10: [https://github.com/jahongri77174/YOLOv5-tf](https://github.com/jahongri77174/YOLOv5-tf)
In addition to the raw grayscale page, there are several possible features derived primarily from the hOCR outputs of \(\mathtt{Tesseract}\). To minimize storage, each feature is scaled as an unsigned-integer, 8-bit "color" channel in a 512x512 (pixels), multi-layer image which is fed into a "mega-YOLO" model capable of processing more than three color channels. Features explored which are output from \(\mathtt{Tesseract}\) in hOCR format include:
* _fontsize (fs)_: the fontsize for each word bounding box is normalized by subtraction of the median page fontsize and division by the standard deviation. Bounding boxes with fonts outside five standard deviations are ignored.
* _carea (c\({}_{b}\))_: the "content area" from automatic page segmentation includes large blocks of text and sometimes encapsulates figures into separate "content areas", but not consistently.
* _paragraphs (\(p_{b}\))_: automatically segmented groups of words as likely paragraphs. Often overlaps with "carea".
* _ascenders (asc)_: the number of letters in the word that are above the letter "caps" (e.g. the top of the letter "h"). Ascenders are normalized by subtracting the median value for each page.
* _descenders (dec)_: the number of letters in the word that are below the letter "bottoms" (e.g. the bottom curl of the letter "g"). Descenders are normalized by subtracting the median value for each page.

Figure 4: Illustration of the “codebook” for defining figures and figure captions (right) in comparison to other works (left, center). For example, some works [e.g. 21] (left red boxes) typically split subfigures into separate figures while other works [15; 19; 34] (center red box) combine figure captions and figures into a single “figure” box. Caption boxes (shown in blue) in other methods are delineated with dashed squares as applications often focus on detection of figures alone [e.g. 34]. Note: as there is no overlap in datasets for direct comparison, the boxes from other works are shown for illustrative purposes only. See Section 4.2 for a comparison of our category definitions and methodology with a subset of several of these datasets.
* _word confidences (wc)_: the percent confidence of each word
* _word rotation (\(t_{\text{ang}}\))_: rotation of word in steps of \(0^{\circ}\), \(180^{\circ}\), and \(270^{\circ}\).
Other features derived from the page scan and OCR-generated text are:
* _grayscale (gs)_: the image is collapsed into grayscale using the page's luminance. The majority of images are already in grayscale and those few that are in color are transformed to grayscale.
* _fraction of letters in a word (%l)_: the percentage of characters in a word that are letters (scaled 125-255 in order to preserve a "true zero" for spaces in the scanned page that contain no words).
* _fraction of numbers in a word (%n)_: the percentage of characters in a word that are numbers (scaled 125-255).
* _punctuation (p)_: punctuation marks are tagged as 250, non-punctuation characters are tagged as 125 (saving 0 for empty, non-word space).
* _spaCy POS (SP)_: spaCy's [78] 19 entries for "part of speech" (noun, verb, etc.) in the English language
* _spaCy TAG (ST)_: more detailed part of speech tag with 57 values
* _spaCy DEP (SD)_: the 51 "syntactic dependency" tags which specify how different words are related to each other
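As a concrete (and simplified) illustration of how a few of these per-word quantities can be rasterized into 8-bit channels of the model input, the sketch below packs the inverted grayscale page, a normalized fontsize channel, and a word-confidence channel into a three-channel stack; the real input uses more channels and the exact scalings described above, and the dictionary keys are illustrative:

```python
import numpy as np

def build_feature_stack(grayscale, words, size=512):
    """Pack the page plus two OCR-derived word features into a
    (size, size, 3) uint8 stack. `grayscale` is a (size, size) uint8 page
    image; `words` holds dicts with a pixel box (x1, y1, x2, y2),
    a font size, and a word confidence in [0, 100]."""
    stack = np.zeros((size, size, 3), dtype=np.uint8)
    stack[..., 0] = 255 - grayscale                  # inverted grayscale page

    fonts = np.array([w["fontsize"] for w in words]) if words else np.zeros(1)
    med, std = np.median(fonts), fonts.std() or 1.0

    for w in words:
        x1, y1, x2, y2 = w["box"]
        z = (w["fontsize"] - med) / std              # normalized fontsize
        if abs(z) > 5:                               # drop extreme outliers
            continue
        stack[y1:y2, x1:x2, 1] = int(np.clip(128 + 25 * z, 0, 255))
        stack[y1:y2, x1:x2, 2] = int(np.clip(2.55 * w["confidence"], 0, 255))
    return stack
```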
Figure 5 shows an example of a selection of these features (grayscale (_gs_), fontsize (_fs_), and spaCy DEP (_SD_)) for a single page and their distributions across all pages in our annotated dataset. For illustrative purposes, we have left the grayscale image in the upper left panel of Figure 5 un-inverted, however for use as a feature the grayscale is inverted, as is typical for document layout applications (e.g. 58). The top panels of Figure 5 showcase the different regions highlighted by different feature layers. While the grayscale (top left panel) allows a reader to localize the figure visually, the region of the figure is also denoted by the large fontsizes of the OCR words that are picked out within the figure (red and yellow blocks, top middle panel). While often times erroneous detections, these large-fontsize words are typical for those found by Tesseract within figures. Additionally, the caption located directly below the figure is highlighted by smaller (bluer) fontsize than the rest of the text on the page (majority green). These relatively obvious separate regions in fontsize are not replicated in the patterns seen in the spaCy DEP parameter (top right panel). Here, the colors of OCR bounding boxes do not follow any obvious pattern between regions of the figure, its caption, or the other text on the page.
These patterns extend to the distributions of these three features across all scanned pages in our dataset, as depicted by the histograms in the lower panels of Figure 5. The example features of grayscale and fontsize show differences in distributions in the three categories of figure, figure caption and the "rest" of the page - grayscale distributions are more uniform inside figures (bottom left histogram of Figure 5) and figure captions show a peak in the fontsize distributions toward higher values when compared to the fontsize distributions of figures (bottom middle histogram of Figure 5). Trends in other features are harder to determine, as illustrated in the bottom right histogram of Figure 5 which shows a less clear distinction between figures, figure captions, and the rest of the page for the feature of spaCy DEP.
In addition to potentially being able to distinguish types of page regions, these features are similar in morphology to computer vision features such as connected components [65; 66; 67; 68], making them a natural extension of such work (see for example Figure 2 of [21] to compare to our Figure 5).
In Section 4 we discuss our best model which includes (grayscale, ascenders, descenders, word confidences, fraction of numbers in a word, fraction of letters in a word, punctuation, word rotation and spaCy POS) as the set of input features.
### Post-Processing Pipeline
After features are selected and the model is trained we modify the final found boxes by merging them with OCR word and paragraph boxes and any heuristically found captions and figures at the fractional-pixel level (results are rounded to nearest pixel for intersection-over-union (IOU)
calculations to match precision of ground truth boxes).
Post-processing is a common practice in document layout analysis [69; 79]; however, it often differs between implementations and is occasionally not incorporated into a final pipeline [80]. Figure 6 depicts how the found boxes and F1 score change with each post-processing step in our pipeline when we compare ground-truth (true) boxes to model-found boxes:
* Step 1: "raw" found boxes are those culled with non-maximum suppression to combine several overlapping proposed boxes (i.e. regions that are tagged with a high probability of containing a page object of a particular class) into a single bounding box [81; 82; 83; 84; 85; 52; 81].
* Step 2: if two found boxes overlap with an IOU \(\geq 0.25\) the box with the lowest score is removed, decreasing false positives (FP)
* Step 3: pdffigures2-found figure caption boxes replace those found with the deep learning model when they overlap, which increases caption true positive (TP) rate and decreases FP and false negative (FN) at large IOU thresholds.
* Step 4: caption boxes are found heuristically by first applying a gaussian blur filter on an image of filled-in OCR text boxes. Contours of this image that overlap with text boxes that match with a fuzzy-search of words such as "Fig.", "Figure" and "Plate" are labeled as heuristically-found. If a heuristically-found caption box overlaps with a mega-YOLO-found box, we take the top of the heuristic box (which tends to be more accurate) and the minimum (maximum) of the left (right, bottom) of the two boxes. This results in an overall increase in TP while FN and FP drop.
* Step 5: found captions are expanded by their overlap with OCR word and paragraph boxes, allowing for multiple "grow" iterations in the horizontal direction. Found boxes are only expanded by paragraph and word boxes if the centers of paragraph and word boxes overlap with the found box.

Figure 6: Effects of post-processing steps on F1 (left plots) for Model 12 (m12 in Table 1 and Table 2). Post-processing drives changes in the metrics at larger IOU’s – IOU\(\gtrsim\)0.8 and IOU\(\gtrsim\)0.6 for figures and captions, respectively. Changes are depicted for a single page (right plot) showing initial found boxes (Step 1, dark blue) and final (Step 10, orange) in comparison to true boxes (thick dashed magenta).
* Step 6: if found figure boxes overlap with rectangles that are found through image processing (as described in Section 2.3), the found box is expanded to include the image processing rectangle. This increases the TP rate at larger IOU thresholds for figures.
* Step 7: any found captions that have areas larger than 75% of the page area are discarded leading to a slight drop in FP for captions.
* Step 8: captions are paired to figures by minimizing the distance between caption center and bottom of a specific figure. Rotational information from the page and overall rotation of the OCR words is used to determine the "bottom" of the figure. Any captions without an associated figure on a page are dropped, leading to a drop in FP.
* Step 9: found figure boxes are extended down to the tops of their associated captions increasing TP for figures and captions at high IOU thresholds.
* Step 10: if a figure caption extends horizontally further than its associated figure, the figure is extended horizontally to the edges of the figure caption. This leads to an increase in TP rates for figures at high IOU thresholds.
Steps 9 and 10 are similar to the steps described for annotated boxes in Section 2.4. The effects on the metrics shown in Figure 6 are modest and predominately affect the results at high IOU thresholds (IOU\(\gtrsim\)0.9) for figures.
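For reference, the overlap test used in several of these steps reduces to a standard intersection-over-union computation; a minimal sketch of the IOU and of the Step 2 culling (keeping the higher-scoring of two strongly overlapping detections) is:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def cull_overlaps(detections, threshold=0.25):
    """Step 2: when two detections overlap with IOU >= threshold, keep
    only the higher-scoring one. `detections` is a list of (box, score)."""
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, kept_box) < threshold for kept_box, _ in kept):
            kept.append((box, score))
    return kept
```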
### Feature Selection Ablation Experiments
To determine the set of features which produce the most accurate model while minimizing the feature memory footprint, we conduct a series of ablation experiments, summarized in Table 1 and Table 2. In all feature selection runs we use 75% of our data in the training set, 15% in validation, and 10% in the test dataset. Results in Table 2 are shown for this feature selection test dataset.
As it is computationally prohibitive to test all combinations of the fourteen different features, we first adopt the strategy of including sets of one or two groups of features at a time until we have a model containing all fourteen features, as shown above the thick horizontal line in Table 1.
Similar to other work (e.g. 17), these parameter combinations endeavor to follow an "intuitive"
build up of complexity for our model. For example, our first model (m1) which consists of using only the grayscale (gs) image as a feature, mimics simply applying YOLO to an unprocessed page. We then add "primary" OCR features, which we define here as those which come directly from the OCR engine without extra processing. As the fontsize (fs) of objects like figure captions and figure axis labels is typically visually different from the main article text (see Figure 5), we add this feature second in m2, followed by ascenders (asc) and descenders (dec) which can also change with the different fonts which are often present in different text objects (m3). Word confidences (wc) can be lower for the spurious OCR detections that can often occur within figures which we add to our next model (m4).
Secondary OCR features are defined here as features which are derived from the words generated by the OCR engine. The proportion of OCR words which are numbers and letters and characters which are punctuation (%n, %l and p, respectively) typically changes between caption or main-text words and OCR words found on axis labels or inside figures, as does the rotation of words (m5 and m6). Even further removed from the raw OCR data are the linguistic features derived from the found OCR words (spaCy SP, ST, SD in m7-m9), and finally groupings of words into paragraphs and large word blocks (\(\text{p}_{\text{b}}\) and \(\text{c}_{\text{b}}\) in m10).
From these ten models, we select the most accurate, defined here as having a high F1 score for both figures and their captions, while maintaining a low false positive score (FP). Model 7 is the "best" model out of these first ten models in Table 2. We then subtract one or two features from this model in combinations shown below the thick horizontal line in Table 1. Using the same selection criteria leads us to choose Model 12 as our overall "best" model which includes the features of (grayscale, ascenders, descenders, word confidences, fraction of numbers in a word, fraction of letters in a word, punctuation, word rotation and spaCy POS) as highlighted in Table 2. This model represents a combination of not only the grayscale page, but many of the primary OCR features which intuitively would differ between regions of main-text, figures and figure captions. The addition of the secondary feature of spaCy's part of speech tag (SP) suggests a likely difference in the word usage between regions of text which align with previous work [17; 86; 87].
The implemented optimizer is Adam with \(\beta_{1}=0.937\) and \(\beta_{2}=0.999\). The learning rate is scheduled using a cosine scheduler which depends on the initial learning rate, number of epochs, and batch size.
\begin{table}
\begin{tabular}{|c|l|} \hline model & Description \\ \hline m1 & gs \\ \hline m2 & gs + fs \\ \hline m3 & gs + fs + asc + dec \\ \hline m4 & gs + fs + asc + dec + wc \\ \hline m5 & gs + fs + asc + dec + wc + \%n + \%l + p \\ \hline m6 & gs + fs + asc + dec + wc + \%n + \%l + p + tang \\ \hline m7 & gs + fs + asc + dec + wc + \%n + \%l + p + tang + SP \\ \hline m8 & gs + fs + asc + dec + wc + \%n + \%l + p + tang + SP + ST + SD \\ \hline m9 & gs + fs + asc + dec + wc + \%n + \%l + p + tang + SP + ST + SD + p\({}_{\text{b}}\) \\ \hline m10 & gs + fs + asc + dec + wc + \%n + \%l + p + tang + SP + ST + SD + p\({}_{\text{b}}\) + c\({}_{\text{b}}\) \\ \hline m11 & gs + fs + wc + \%n + \%l + p + tang + SP \\ \hline
**m12** & **gs + asc + dec + wc + \%n + \%l + p + t\({}_{\text{ang}}\) + SP** \\ \hline m13 & gs + asc + dec + wc + \%n + p + t\({}_{\text{ang}}\) + SP \\ \hline m14 & gs + asc + dec + wc + \%n + \%l + t\({}_{\text{ang}}\) + SP \\ \hline m15 & gs + asc + dec + wc + \%n + \%l + p + t\({}_{\text{ang}}\) \\ \hline \end{tabular}
\end{table}
Table 1: Ablation experiments with the features discussed in Section 3. All models include post-processing (Section 3.2). Our “best” model, as determined by metrics in Table 2 and the discussion of Section 3.3, is Model 12 (m12) highlighted in bold.
Practically, when applied to our model this results in a linear increase in learning rate by a factor of \(\sim\)1.6 in the initial epoch (flat after). Our optimal initial learning rate of 0.004 was chosen from a small set of learning rates (0.008, 0.004, 0.0004, 0.0002). All experiments are run for 150 epochs and converge within this time (tracked by validation losses). No data augmentation is applied. Training is performed on a Tesla V100-SXM2 GPU with an average time of \(\sim\)6.5 minutes per epoch.
## 4 Results
To quantify the results of our "best" model (Model 12) on unseen data, we annotate an additional \(\approx\)600 pages as a "final test dataset" of \(\approx\)500 figure and figure caption ground-truths (490 and 487, respectively). Including post-processing, evaluation takes on average 1.8 seconds per page on a single core of an Apple M1 Max with 64 Gb of RAM. The distribution of figures and figure captions in 10-year time bins for this dataset is shown in the top panel of Figure 7.
Metrics for the performance of our model in this final test dataset across several IOU cut-offs are shown in Table 3. While true positives (TP) and false negatives (FN) are relatively flat across all IOU cut-offs for figures, false positives increase by a small factor at the IOU=0.8 cut-off for captions as true positives drop. Errors are estimated on compound metrics with a 5-fold cross-validation, with averages alone shown for the true positive, false positive, and false negative metrics for the sake of clarity.
The distribution of F1 score over time (in increments of 10 years) for figures and figure captions is shown in the bottom panel of Figure 7. When compared to the distribution of PDF parsability in figures shown in Figure 2, our method shows less decline toward earlier publication times.
In what follows, we contextualize these results with comparisons to other methods and datasets, specifically at high degrees of localization.
### Highly-localized Page Objects - Relevance for Figure extraction
Before comparing our models to others, we begin by defining what is meant by "highly-localized" in the context of document layout analysis.
Prior efforts have highlighted the issues in directly translating object detection (and segmentation) metrics to document layout analysis [89], and in some cases have developed new metrics specifically for document layout analysis and its related processes [90].
Figure 7: Distribution of figures and figure captions in the final test dataset (_top panel_) and the distribution of F1 scores in each time bin (_bottom panel_) for an IOU=0.8 threshold. Bins are chosen to match the bottom two panels of Figure 1 and all panels in Figure 2. However, there are no pages from articles prior to 1903.
Particular to our YOLO-based application, while the intersection-over-union metric used in object detection effectively weights all of the intersection area equally, eye tracking studies suggest that the key elements of interpreting a graph include the graph x and y axes and their labels [e.g. 91], which tend to be at the edges of figures. Thus, any translation of object detection metrics, in this case the IOU measurement, to document layout analysis should involve the quantification of how effective the metric is at capturing the number of times these vital figure elements at the edges of the bounding boxes are missed.
While our current annotation methods do not independently track x and y axis labels, we estimate this effect through the "area in excess" and "area lost" from a ground truth figure at a specific IOU with a found box. These areas are depicted by an example for one figure in the left panels of Figure 8. The pixels residing outside the ground truth box (magenta boxes) and inside the found box (orange boxes) are summed to calculate the "area in excess" (green shaded region of diagram in upper left panel of Figure 8), while the sum of all pixels inside the ground truth box but outside the found box are the "area lost" (green shaded region of diagram in lower left panel of Figure 8). For an area-in-excess of \(\sim\)10% of the ground-truth box's area, portions of the figure caption are included in the figure box (green shaded area, upper left panel). While this included information is not an ideal addition to the extracted figure, it is unlikely to cause confusion to a viewer if included on a hosting website like AIE, or in other figure-analysis applications (e.g. in datasets used to study color distributions and other elements of scientific figures [92; 93]).
\begin{table}
\begin{tabular}{|c|c c|c c|c c|c c|c c|c c|} \hline & \multicolumn{2}{c|}{TP} & \multicolumn{2}{c|}{FP} & \multicolumn{2}{c|}{FN} & \multicolumn{2}{c|}{Prec} & \multicolumn{2}{c|}{Rec} & \multicolumn{2}{c|}{F1} \\ \cline{2-13} & fig & cap & fig & cap & fig & cap & fig & cap & fig & cap & fig & cap \\ \hline m1 & 90.3 & 88.3 & 11.3 & 10.0 & 1.8 & 4.3 & 88.9 & 89.8 & 98.0 & 95.4 & 93.3 & 92.5 \\ m2 & 89.5 & 86.9 & 10.9 & 8.6 & 2.6 & 6.4 & 89.2 & 91.0 & 97.2 & 93.2 & 93.0 & 92.1 \\ m3 & 85.9 & 86.9 & 17.9 & 12.5 & 1.8 & 5.1 & 82.8 & 87.4 & 97.9 & 94.4 & 89.7 & 90.8 \\ m4 & 90.5 & 88.7 & 10.1 & 8.8 & 2.0 & 3.7 & 90.0 & 91.0 & 97.8 & 96.0 & 93.7 & 93.4 \\ m5 & 84.5 & 87.3 & 15.7 & 7.2 & 2.2 & 7.0 & 84.3 & 92.4 & 97.4 & 92.6 & 90.4 & 92.5 \\ m6 & 89.5 & 89.1 & 11.9 & 9.0 & 2.0 & 4.3 & 88.3 & 90.8 & 97.8 & 95.4 & 92.8 & 93.0 \\ m7 & 92.8 & 88.1 & 8.0 & 9.0 & 1.4 & 4.1 & 92.0 & 90.7 & 98.5 & 95.6 & 95.1 & 93.1 \\ m8 & 90.5 & 90.0 & 91.1 & 7.2 & 2.0 & 3.9 & 90.9 & 92.6 & 97.8 & 95.9 & 94.2 & 94.2 \\ m9 & 84.3 & 84.2 & 13.5 & 7.4 & 4.0 & 9.4 & 86.2 & 91.9 & 95.4 & 89.9 & 90.6 & 90.9 \\ m10 & 88.7 & 87.5 & 11.5 & 9.6 & 1.8 & 4.3 & 88.6 & 90.1 & 98.0 & 95.3 & 93.0 & 92.6 \\ m11 & 90.5 & 92.4 & 10.5 & 6.8 & 0.8 & 1.8 & 89.6 & 93.2 & 99.1 & 98.0 & 94.1 & 95.6 \\
**m12** & **92.2** & **89.1** & **64.6** & **6.2** & **4.4** & **93.5** & **93.1** & **97.4** & **94.8** & **95.4** & **94.0** \\ m13 & 92.8 & 88.7 & 7.8 & 8.4 & 2.0 & 4.3 & 92.2 & 91.4 & 97.9 & 95.4 & 95.0 & 93.3 \\ m14 & 87.3 & 88.7 & 15.9 & 8.4 & 1.2 & 6.4 & 84.6 & 91.4 & 98.6 & 93.3 & 91.1 & 92.3 \\ m15 & 89.9 & 89.5 & 8.7 & 5.9 & 2.4 & 5.1 & 91.2 & 93.8 & 97.4 & 94.6 & 94.2 & 94.2 \\ \hline \end{tabular}
\end{table}
Table 2: Metrics for models described in Table 1. There are 497 figures (fig) and 488 figure captions (cap) used in the feature selection test dataset. IOU is 0.9 for both figures and captions. TP, FP and FN are shown as percentages of the total instances.
\begin{table}
\begin{tabular}{|c|c c|c c|c c|} \hline & \multicolumn{2}{c|}{IOU=0.1} & \multicolumn{2}{c|}{IOU=0.6} & \multicolumn{2}{c|}{IOU=0.8} \\ & figure & caption & figure & caption & figure & caption \\ \hline TP & 96.5 & 92.6 & 94.3 & 88.9 & 93.3 & 87.5 \\ FP & 2.9 & 2.7 & 5.1 & 6.4 & 6.1 & 7.8 \\ FN & 3.5 & 6.0 & 3.5 & 6.0 & 3.5 & 6.0 \\ \hline Prec & 97.1\(\pm\)1.5 & 97.1\(\pm\)1.1 & 94.7\(\pm\)2.3 & 93.2\(\pm\)2.3 & 93.8\(\pm\)3.0 & 91.8\(\pm\)1.6 \\ Rec & 96.5\(\pm\)1.6 & 94.0\(\pm\)2.7 & 96.4\(\pm\)1.7 & 93.8\(\pm\)2.8 & 96.4\(\pm\)1.0 & 93.7\(\pm\)1.5 \\ F1 & 96.7\(\pm\)0.8 & 95.5\(\pm\)1.2 & 95.5\(\pm\)1.7 & 93.4\(\pm\)1.0 & 95.1\(\pm\)1.6 & 92.7\(\pm\)1.4 \\ \hline \end{tabular}
\end{table}
Table 3: Metrics of our main model for different IOU cut-offs with the final test dataset. There are 490 figures and 487 figure captions in the test dataset. AP for figures and figure captions using the COCO [0.5:0.95:0.05] scheme [88] is 91.5% and 87.9%, respectively. TP, FP and FN are shown as percentages of the total instances in each category. Errors are calculated from the standard deviation of a 5-fold cross validation.
In contrast, we show the significant effects of an area-lost of \(\sim\)5% by the green shaded region in the lower left panel. For a larger offset between true and found boxes, it is likely the y-axis label would not be included in the extracted figure. Prior research has shown that axis labels are vital to the understanding of a figure and, therefore, their exclusion would render the extracted figure unusable to a reader (e.g. 91).
These examples guide our selection of acceptable cut-offs for information in "excess" and information that is "lost" from our figures. Given our application of hosting on the AIE platform, our focus is on minimal information loss, while information in excess is a secondary concern. Thus, we select 10% as our acceptable cut-off for the excess area and 5% for the loss area.
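In terms of the same (x1, y1, x2, y2) box convention used in the earlier sketches, these two quantities can be computed as follows (a sketch, with both areas expressed as fractions of the ground-truth box area):

```python
def intersection_area(box_a, box_b):
    """Pixel area shared by two (x1, y1, x2, y2) boxes."""
    iw = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    return iw * ih

def excess_and_lost(true_box, found_box):
    """Area in excess (found pixels outside the truth) and area lost
    (truth pixels missed by the found box), as fractions of the truth area."""
    true_area = (true_box[2] - true_box[0]) * (true_box[3] - true_box[1])
    found_area = (found_box[2] - found_box[0]) * (found_box[3] - found_box[1])
    inter = intersection_area(true_box, found_box)
    return (found_area - inter) / true_area, (true_area - inter) / true_area
```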
We show these cut-offs for the distribution of area-in-excess and area-lost for all of our true-found boxes as a function of the calculated IOU of these pairs in the jointplots of the right panels in Figure 8. These distributions do not include found (true) boxes which do not have a true (found) pair. As shown by the comparison between the excess area (upper right) jointplot and area lost (lower right) jointplot, the distributions over IOU are similarly shaped with the area lost being overall lower than the area in excess. This is not unexpected, given that many parts of our post processing described in Section 3.2 involve enlarging the boxes around figures. While Figure 8 only accounts for true-found pairs for figures, the excess and loss area distributions are similar for captions and are omitted here for brevity.
In horizontal dashed lines of the right panels of Figure 8, we show the proposed estimates for cutoffs of acceptable area-in-excess (10%) and area-lost (5%). The majority of the distributions in the excess/lost area lie within these cut-offs, as shown by the lines in the marginal histograms at the top and sides of the jointplots in Figure 8.
Once these cut-offs have been chosen, we calculate the minimum IOU which contains 90% of the data within these cut-offs in order to avoid any outliers at very low IOU which nonetheless have small values of area in excess or lost. For both area in excess and lost, this results in an IOU of \(\sim 0.9\) as shown by the vertical lines in both panels and the top marginal IOU plot of Figure 8. We stress that at this stage, these numbers are only estimates and the exact appropriate cut-offs when translating object detection metrics to document layout analysis is the subject of future work.
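One way to realize this calculation (a sketch over arrays of per-pair IOU, excess, and lost fractions; the exact percentile convention is our reading of the procedure described above) is:

```python
import numpy as np

def localization_iou_cutoff(ious, excess_frac, lost_frac,
                            max_excess=0.10, max_lost=0.05, coverage=0.90):
    """Minimum IOU above which `coverage` of the true-found pairs that
    satisfy the area-in-excess and area-lost cut-offs are contained."""
    ious = np.asarray(ious)
    within = (np.asarray(excess_frac) <= max_excess) & \
             (np.asarray(lost_frac) <= max_lost)
    return float(np.quantile(ious[within], 1.0 - coverage))
```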
In what follows, we use \(\text{IOU}=0.9\) as our definition of an intersection-over-union metric for a "high degree of localization" and use this cut-off to quantify the robustness of our model in comparison to others.
### Benchmarks for highly-localized page objects (IOU=0.9)
As the ultimate goal of our method is the extraction of figures and their captions from scanned pages, we quantify how well our model performs on our dataset for a high degree of localization (using our definition of an IOU cut-off of 0.9). We find F1 scores of 90.9% (92.2%) for figure (caption) detections at an IOU of 0.9 as shown in the last row and column of Table 4.
To facilitate comparison with other document layout analysis methods we compare our method in two ways - both by quantifying how well other models perform on our dataset and by comparing how well our model performs on other document layout analysis datasets.
The chosen data for this comparison are electronic theses and dissertations (ETDs) from the ScanBank dataset and the article pages from the PubLayNet dataset, which represent two common forms of scientific literature (theses and refereed publications, respectively), making them frequent targets for digitization efforts in science for many years [13; 94; 95; 96; 19; 34; 58; 71].
ETDs represent the culmination of the graduate work (Masters and/or Ph.D.) of a scientist and their study is an active and growing field within the information sciences [97]. The ETDs within the ScanBank dataset (published with the ScanBank object detection model [34]) are collected from the MIT DSpace repository11, but are limited to those published prior to 1990 to assure they are in raster, not vector format [19]. The ScanBank dataset contains \(\sim\)10k pages from 70
ETDs from the years 1900-1990, from a total of 36 different fields including those from STEM (e.g. Civil/Mechanical Engineering, Physics, Chemistry) and other fields (e.g. Humanities, Political Science, Modern Languages and Linguistics). An example page containing a figure and caption pair from an ETD in this dataset is shown in the left panel of Figure 9. In comparison with our data (an example page is shown in the right panel of Figure 9), the ETD text is more widely spaced and the font, line, and page spacing is larger. While there can be many different forms of ETDs from different fields, the larger line and page spacing is typical for this type of scientific product.
In contrast to the visual differences between our dataset and the ScanBank dataset, an example page from the PubLayNet dataset [20] is shown in the middle panel of Figure 9. The PubLayNet dataset is significantly larger than both our and the ScanBank datasets at \(\sim\)360k pages mined from over one million "post-digital" articles hosted on PubMed12.
Figure 8: Examples of “excess area” (upper left panel) and “lost area” (bottom left panel) along with distributions of area in excess (upper right panel) and lost (lower right panel), as a percentage of the ground-truth box, across all IOUs of true-found box pairs of figures in the training and validation datasets. Shaded green regions in the diagrams in the upper left portions of jointplots also show the definitions of these areas in relation to true (magenta) and found (orange) boxes. The IOU chosen for “highly-localized” page objects is 0.9 (see text for further details).
in page size, the PubLayNet dataset is visually similar to ours, as shown in the comparison between the middle and right panels of Figure 9 - fontsizes, page layout, and line spacing are similar in both panels.
Both the ScanBank and PubLayNet datasets include page layout annotations. Classes of figures, tables and their captions are included in the ScanBank dataset while text, title, list, table and figure are included in the PubLayNet dataset. The difference in the number of articles between these two datasets highlights an ongoing issue when training models to digitize pre-digital literature - historical document layout analysis datasets require hand-annotations [ScanBank 19], while newer articles which are post-digital can be mined from their online storage formats [e.g. XML, PubLayNet 20].
These datasets overlap with different aspects of our dataset. ScanBank's age distribution is similar to ours, but differs in its larger diversity of represented fields and publishing venues. The pages in the PubLayNet dataset are formatted more similarly to our dataset and, like ours, belong to a STEM field; however, they are from articles published more recently than those within our dataset. Given these differences and similarities, in what follows, we compare not only how well our model performs on our dataset, but how models trained on these datasets perform on our dataset.
Table 4 shows how other deep learning models fare on our final test dataset. Here, we use ScanBank[19; 34] (based on DeepFigures[15], which is trained on the ScanBank ETDs) and a version of detectron2[80] trained on the PubLayNet dataset [20].
ScanBank and detectron2 are used for comparison as they are applied to raster-formatted articles (as opposed to vector-based methods like pdffigures2[13] which, as discussed in Section 2.1, results in low accuracies for our dataset). Neither ScanBank nor detectron2 shares our definitions of figures and figure captions exactly; thus, to facilitate comparison, we make some approximations and assumptions.
As discussed in [34], ScanBank's figures are defined as encompassing the figure caption, while our figure definitions exclude the caption. In order to compare with our results, we initially performed metric calculations by re-defining our true figure boxes as the combination of figure and figure caption boxes when figure captions are
Figure 9: Example pages from the ScanBank collection of historical ETDs [19] (left panel), the PubLayNet dataset [20] (middle panel), and our dataset (right panel). These panels show the relative differences (font, line spacing, page size) and similarities (figure and caption pairs and placement) between these three datasets. These pages have been annotated with our definitions for figures (red boxes) and figure captions (blue boxes).
associated with a figure. However, we found that if we instead use our definitions of figures and captions, metrics from detections made with ScanBank are optimized, thus we use our definitions of figures and captions for all comparisons with this model. As detectron2 does not find caption boxes specifically, but rather localizes generic "text" boxes, we define detectron2-detected figure captions as those boxes with centers which are closest to a found figure's center. To test the effects of our post-processing methods alone, we apply a subset of our post-processing steps to the results generated from both ScanBank and detectron2 which we show for comparison to our method with and without post-processing in Table 4. When applying post-processing to these other models' results, we use only up to the "Step 5" described in Section 3.2 as we found this optimized the metrics for ScanBank and detectron2 reported in Table 4.
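The nearest-center heuristic used to pair detectron2 "text" boxes with figures can be sketched as follows; the `[x0, y0, x1, y1]` box format and the variable names are assumptions made for the example.

```python
import numpy as np

def box_center(box):
    x0, y0, x1, y1 = box
    return np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])

def assign_captions(figure_boxes, text_boxes):
    """For each detected figure, pick the detected 'text' box whose center
    is closest to the figure's center and treat it as the figure caption."""
    if len(text_boxes) == 0:
        return {}
    text_centers = np.array([box_center(b) for b in text_boxes])
    assignments = {}
    for i, fig in enumerate(figure_boxes):
        dists = np.linalg.norm(text_centers - box_center(fig), axis=1)
        assignments[i] = int(np.argmin(dists))  # index of the chosen caption box
    return assignments
```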
As shown in Table 4, ScanBank does not perform well on our final test dataset with or without post-processing. In particular, ScanBank does not detect captions reliably as the false negative rate (FN) is high. Additionally, there is a large portion of both figures and figure captions which are either erroneously detected, or not well localized as shown by the high false positive (FP) rate. This is somewhat expected as the ETD format is visually distinct from the articles in our dataset, including different fonts and caption placements. When post-processing is applied the metrics for figure captions improve significantly with an increase of \(\approx\)30% in true positive (TP) rate and decrease of \(\approx\)20% in FP.
For the detectron2 model without post-processing, TP rates are slightly higher than ScanBank's; however, FP rates remain comparable to ScanBank's. FN rates are lower than our model's (both with and without post-processing) by a few percentage points, likely due in part to the known differences in error profiles between YOLO-based (ours) and Mask-RCNN-based (detectron2) object detection models [76]. Post-processing makes a large improvement on the TP rate for captions, increasing it by \(\approx\)35% and decreasing FP by \(\approx\)40%. There is a modest increase in TP of \(\approx\)10% for figures as well when post-processing is applied.
Post-processing (using all steps) has the largest effect on our model's results - increasing TP rates of figures and captions by \(\approx\)25% and \(\approx\)60%, respectively. This is not surprising as our post-processing method was developed using our scanned page training data. Additionally, we employ a YOLO-based model which is used for detecting bounding boxes, not a segmentation method that would tend to produce larger TP rates at higher IOU thresholds - the post-processing pipeline "mimics" segmentation by changing box sizes to fit more closely the precise locations of caption words and figures, increasing overlap IOU.
Taken together, the results of Table 4 suggest that other models generalize to our dataset at best moderately at high IOU, and only with application of our post-processing pipeline. Because our post-processing steps require not only grayscale scanned pages, but their OCR outputs, this additional overhead (of both producing the OCR and post-processing steps) greatly reduces the gains in page processing speeds achieved with these other methods.
This lack of generalizability is a known problem in the field of document layout analysis (e.g. [14]), and our model is no exception. Table 5 quantifies how well our model performs on a collection of ETDs from the ScanBank "gold standard" dataset [19; 34] and a selection of PubLayNet's non-commercial article pages (non-commercial in order to access the high resolution scans and perform the OCR needed for our method) [20]. Here, we show how well our model performs on this set of hand-annotated ETD and PubLayNet pages using our definitions of figure and figure captions, as shown by the red (figure) and blue (caption) boxes in the left (ScanBank) and middle panels (PubLayNet) of Figure 9. Additionally, for comparison, we show the results of detectron2 (ScanBank) on the same set of pages from the PubLayNet (ETDs) dataset in parentheses in Table 5, using the same approximations for figure caption boxes used in Table 4.
Both ScanBank and detectron2 perform better on the figures from their respective training datasets than our model, with increases in true positive rates of \(\approx\)10% and \(\approx\)30% over our detections, respectively. False positive rates tend to be comparable (ScanBank) or higher (by \(\approx\)20% for detectron2) and false negative rates are higher as well.
Results for figure captions tend to be comparable (detectron2) or better (ScanBank) using our model; however, we again caution here that the definitions of captions and figures are not constant across all datasets, making these comparisons only estimates.
While these results suggest that our model may be more generalizable for figure captions, further tests on more datasets would strengthen this conclusion. However, the authors caution that the ability of _any_ present-day document layout analysis model to generalize well is typically limited [6; 98]. This lack of generalizability does not appear to depend on training dataset size [98]. Compounding the problem, the definitions of figures, captions and other page objects can differ between fields and indeed between the annotators of documents within the same field [21; 99], making even "brute force" hand-classification of diverse sets of documents with high accuracy a non-trivial problem [99].
\begin{table}
\begin{tabular}{|c|c c|c c|c c|c c|c c|c c|} \hline & \multicolumn{2}{c|}{ScanBank} & \multicolumn{2}{c|}{ScanBank} & \multicolumn{2}{c|}{detectron2\({}^{\star}\)} & \multicolumn{2}{c|}{detectron2\({}^{\star}\)} & \multicolumn{2}{c|}{Ours} & \multicolumn{2}{c|}{Ours} \\ & \multicolumn{2}{c|}{No PP} & \multicolumn{2}{c|}{w/PP} & \multicolumn{2}{c|}{No PP} & \multicolumn{2}{c|}{w/PP} & \multicolumn{2}{c|}{No PP} & \multicolumn{2}{c|}{w/PP} \\ & fig & cap & fig & cap & fig & cap\({}^{\dagger}\) & fig & cap\({}^{\dagger}\) & fig & cap & fig & cap \\ \hline TP & 69.9 & 29.0 & 69.3 & 52.8 & 72.0 & 46.4 & 81.0 & 80.9 & 58.2 & 23.2 & 85.7 & 86.7 \\ FP & 71.4 & 28.8 & 43.6 & 8.7 & 41.8 & 68.2 & 27.1 & 22.4 & 45.3 & 82.3 & 13.7 & 8.6 \\ FN & 1.7 & 42.8 & 2.5 & 40.7 & 0.6 & 1.6 & 1.2 & 4.9 & 3.1 & 5.1 & 3.5 & 6.0 \\ \hline Prec & 49.5 & 50.2 & 61.4 & 85.9 & 63.3 & 40.5 & 74.9 & 78.3 & 56.2 & 22.0 & 86.2 & 90.9 \\ Rec & 97.6 & 40.4 & 96.5 & 56.5 & 99.2 & 96.6 & 98.5 & 94.3 & 95.0 & 81.9 & 96.1 & 93.6 \\ F1 & 65.7 & 44.8 & 75.0 & 68.1 & 77.2 & 57.1 & 85.1 & 85.6 & 70.6 & 34.7 & 90.9 & 92.2 \\ \hline \end{tabular} \({}^{\star}\) The tested version of detectron2 is trained on the PubLayNet dataset [80].
\({}^{\dagger}\) Here, captions are the “text” classified box closest to the center of a figure.
\end{table}
Table 4: Performance metrics for ScanBank [19; 34] and detectron2 [80] for our final test dataset. IOU is 0.9. TP, FP, FN are in percentages of total true instances. Models with post-processing (“w/PP”) and those without (“No PP”) are shown for comparison. No retraining or transfer learning of ScanBank or detectron2 have been done with our dataset. Errors from a 5-fold cross validation on all metrics are \(\sim\)1-2%.
\begin{table}
\begin{tabular}{|c|c c|c c|} \hline & \multicolumn{2}{c|}{PubLayNet (Non.Comm.)} & \multicolumn{2}{c|}{ETDs (ScanBank)} \\ & figure & caption\({}^{\dagger}\) & figure & caption \\ & 207 & 201 & 197 & 140 \\ \hline TP & 55.1(83.1) & 50.2(55.2) & 20.8(32.5) & 9.3(0.7) \\ FP & 45.4(25.6) & 23.4(56.7) & 89.1(84.8) & 12.4(10.7) \\ FN & 12.1(1.9) & 29.4(1.5) & 32.1(19.3) & 83.4(90.0) \\ \hline Prec & 54.8(76.4) & 68.2(49.3) & 18.9(27.7) & 42.9(6.2) \\ Rec & 82.0(97.7) & 63.1(97.4) & 39.3(62.7) & 10.1(0.8) \\ F1 & 65.7(85.8) & 65.6(65.5) & 25.5(38.4) & 16.3(1.4) \\ \hline \end{tabular} \({}^{\dagger}\) For detectron2 we assume a box classified as “text” which is closest to the center of a found figure is its associated caption.
\end{table}
Table 5: Performance metrics for our model’s performance on the non-commercial pages of the PubLayNet dataset [20] and pages from ETDs in the “gold standard” dataset of ScanBank [19; 34] for IOU=0.9. For comparison included in parenthesis are metrics for models trained on these datasets – detectron2 for the PubLayNet dataset and ScanBank for the ETD dataset. Comparisons should be taken as first estimates – see text for further details.
## 5 Discussion and Future Work
This paper has focused on the localization of figures and figure captions in astronomical scientific literature. We present results of a YOLO-based deep learning model trained on features extracted from the scanned page, hOCR outputs of the Tesseract OCR engine [22], and the text processing outputs from spaCy [78].
As our dataset comprises both vector-PDF and raster-PDF formats, we test the "PDF parsability" of the articles by passing them through the PDF-mining software pdffigures2[13] and GROBID[11]. We find a _maximum_ of \(\approx 33\%\) of our articles are parsable, thus motivating our approach of an object-detection based method for finding figures and captions.
We spend considerable effort to precisely define the classes of "figure" and "figure caption" to both avoid the differences of classifications that can be present in other datasets [e.g. 21] and to align with our goals of not only localizing these objects but extracting them from scanned pages to be hosted separately from their articles of origin.
Through ablation experiments we find that the combination of page and hOCR properties (grayscale, ascenders, descenders, word confidences, fraction of numbers in a word, fraction of letters in a word, punctuation, word rotation, and spaCy POS) maximizes our model's performance at detecting figures and their captions. When compared to other deep learning models popular for document layout analysis (ScanBank[19; 34] and detectron2[80]), we find our model performs better on our dataset, particularly at high IOU thresholds (IOU=0.9) and especially for figure captions.
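To make the feature construction more concrete, the sketch below illustrates how a subset of the word-level hOCR and spaCy attributes could be rasterized into additional channels alongside the grayscale page; the field names and normalizations are assumptions for illustration and do not reproduce our exact pipeline.

```python
import numpy as np

def build_feature_stack(gray, words, n_pos_tags=20):
    """gray: (H, W) grayscale page scaled to [0, 1].
    words: word records with integer pixel boxes ('bbox' = x0, y0, x1, y1)
    and word-level attributes such as OCR confidence, digit/letter fractions,
    punctuation flag, rotation, and a spaCy part-of-speech index."""
    H, W = gray.shape
    keys = ("conf", "frac_digits", "frac_letters", "is_punct", "rotation", "pos_id")
    channels = {k: np.zeros((H, W), dtype=np.float32) for k in keys}
    scale = {"conf": 1 / 100.0, "rotation": 1 / 360.0, "pos_id": 1 / n_pos_tags}
    for w in words:
        x0, y0, x1, y1 = w["bbox"]
        for k in keys:
            channels[k][y0:y1, x0:x1] = float(w[k]) * scale.get(k, 1.0)
    # Channel 0 is the grayscale page; the rest are word-level feature maps.
    return np.stack([gray.astype(np.float32)] + [channels[k] for k in keys], axis=0)
```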
This IOU cut-off is motivated by an analysis of the "area lost" and "area in excess" for true-found box pairs, and we thus adopt IOU=0.9 as the definition of "highly-localized" page objects in the application of the YOLO-based object detection model to our document layout analysis problem. In line with our extraction goals, our model has relatively low false positive rates, minimizing the extraction of erroneous page objects.
Similar to the low generalization of other deep learning models to our dataset at high IOU, our model does not generalize well for the detection of figures in a subset of ScanBank's collection of ETDs [19] and PubLayNet's Non-Commercial scanned pages [20] used to train the version of detectron2 used in our comparisons. Our model generalizes significantly better for the detection of figure captions, showing comparable or higher true positive rates and lower false positive rates. Additionally, we show that our post-processing pipeline increases the performance of all models, especially for the detection of figure captions. We caution, however, that these comparisons are estimates given the different definitions of figures and captions in our model in comparison to others.
Taken together, our work in first carefully defining the classes of page objects, then defining "high-localization" and testing the generalizability of our models along with those of ScanBank and detectron2, highlights the need within the document layout analysis community to consider carefully how we apply the methods of object detection and segmentation to the extraction of page objects. It is vital that we first define what _information_ we intend to extract before we quantify how accurately we have performed the extraction. This is in contrast with other works which typically use mAP or an IOU=0.8 as their metric of comparison without quantifying how this translates into the information lost from the extracted page object [17; 52; 58; 59; 61].
Our work relies on a relatively small set of scanned pages (\(\sim\)6000). While the number of figure and caption instances here surpasses the estimates of \(\sim\)2000 instances per class required for training YOLO-based models [50; 51], our data contains many edge cases of complex layouts and we expect more data to improve results for these pages. In addition, our definitions do not link together figures that are spread in panels across multiple pages (e.g. Fig 1a and Fig 1b on separate pages). Linking such pages requires the extraction of captions and the denoising of their OCR results with post-correction methods [e.g. 42] and is the subject of ongoing work [44].
As our model relies on more than three feature channels, transfer learning on pre-trained YOLO-based models is less straightforward, but nonetheless could be a way to make use of our small dataset in future work.
Additionally, our current methodology does not test the efficacy of popular image processing features (e.g. connected components [21]) or loss functions/processing techniques that are "non-standard" for YOLO-based methods [100] on our dataset. We also use the "standard" spaCy package for linguistic feature generation, instead of a science-specific version of spaCy (e.g. ScispaCy13). Future testing with the inclusion of these features may increase our model performance.
Footnote 13: [https://allenai.github.io/scispacy/](https://allenai.github.io/scispacy/).
While all of our models converge within 150 training epochs, this is without the inclusion of any data augmentation. As our model uses not only grayscale but also hOCR properties, typical data augmentation procedures (e.g. flipping, changes in saturation) are not appropriate for all feature layers. However, it is likely that correctly applied data augmentation (e.g. grayscale-layer contrast modifications) will increase our model's accuracy above the metrics reported here. Our work would further benefit from both a future large hyperparameter-tuning study, beyond the several values of learning rate tested in this paper, and additional feature selection analysis, as permitted within any computational constraints.
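A channel-aware augmentation of this kind could, for example, perturb only the grayscale layer while leaving the hOCR-derived layers untouched, as in the following sketch (the feature stack is assumed to have shape `(C, H, W)` with the grayscale page in channel 0).

```python
import numpy as np

def augment_grayscale_contrast(stack, rng, low=0.8, high=1.2):
    """Randomly rescale the contrast of the grayscale channel only,
    leaving the hOCR/text-derived channels unchanged."""
    out = stack.copy()
    gray = out[0]
    gain = rng.uniform(low, high)
    mean = gray.mean()
    out[0] = np.clip((gray - mean) * gain + mean, 0.0, 1.0)
    return out

# usage: augmented = augment_grayscale_contrast(stack, np.random.default_rng(0))
```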
Finally, given the difference in error profiles between the YOLO-based method presented here and other Mask-RCNN/Faster-RCNN based [76] document layout analysis models (e.g. detectron2), it is likely that an ensemble model using both methods would further increase model performance.
While this work has been completed with an astronomy-specific dataset, it has shown some promise of greater generalizability than other models (e.g. [19; 20; 80]); however, the accuracy is still far below that necessary for a deployable solution at scale for historical literature in other fields. As suggested in prior work [98], the "generalizability problem" will likely be solved by the careful definition of page object classes for large-scale annotation campaigns, and the quantification of "high localization", in combination with models like the one developed in this work.
This work is supported by a Fiddler Fellowship and a NASA Astrophysics Data Analysis Program Grant (20-ADAP20-0225).
|
2308.15966 | SHARP Challenge 2023: Solving CAD History and pArameters Recovery from
Point clouds and 3D scans. Overview, Datasets, Metrics, and Baselines | Recent breakthroughs in geometric Deep Learning (DL) and the availability of
large Computer-Aided Design (CAD) datasets have advanced the research on
learning CAD modeling processes and relating them to real objects. In this
context, 3D reverse engineering of CAD models from 3D scans is considered to be
one of the most sought-after goals for the CAD industry. However, recent
efforts assume multiple simplifications limiting the applications in real-world
settings. The SHARP Challenge 2023 aims at pushing the research a step closer
to the real-world scenario of CAD reverse engineering through dedicated
datasets and tracks. In this paper, we define the proposed SHARP 2023 tracks,
describe the provided datasets, and propose a set of baseline methods along
with suitable evaluation metrics to assess the performance of the track
solutions. All proposed datasets along with useful routines and the evaluation
metrics are publicly available. | Dimitrios Mallis, Sk Aziz Ali, Elona Dupont, Kseniya Cherenkova, Ahmet Serdar Karadeniz, Mohammad Sadil Khan, Anis Kacem, Gleb Gusev, Djamila Aouada | 2023-08-30T11:42:54Z | http://arxiv.org/abs/2308.15966v1 | SHARP Challenge 2023: Solving CAD History and pArameters Recovery from Point clouds and 3D scans. Overview, Datasets, Metrics, and Baselines.
###### Abstract
Recent breakthroughs in geometric Deep Learning (DL) and the availability of large Computer-Aided Design (CAD) datasets have advanced the research on learning CAD modeling processes and relating them to real objects. In this context, 3D reverse engineering of CAD models from 3D scans is considered to be one of the most sought-after goals for the CAD industry. However, recent efforts assume multiple simplifications limiting the applications in real-world settings. The SHARP Challenge 2023 aims at pushing the research a step closer to the real-world scenario of CAD reverse engineering through dedicated datasets and tracks. In this paper, we define the proposed SHARP 2023 tracks, describe the provided datasets, and propose a set of baseline methods along with suitable evaluation metrics to assess the performance of the track solutions. All proposed datasets1 along with useful routines and the evaluation metrics2 are publicly available.
Footnote 1: [https://cvi2.uni.lu/cc3d-data](https://cvi2.uni.lu/cc3d-data)
Footnote 2: [https://gitlab.uni.lu/cvi2/ccv2023-sharp-challenge](https://gitlab.uni.lu/cvi2/ccv2023-sharp-challenge)
## 1 Introduction
_3D reverse engineering is defined as the deduction of intermediate design steps, complete history, and final intent in a reasonable fashion from a given 3D scan of its corresponding Computer-Aided Design (CAD) model._
In today's digital era, using CAD software is the standard approach for designing objects ahead of manufacturing. However, CAD modeling cannot be seen as straightforward and simple procedural design, as it requires the skills of highly qualified engineers. Consequently, 3D reverse engineering has been a long-sought-after goal in the CAD industry due to the huge resources and time that it could save [28, 9]. Such a technique, also referred to as _Scan-to-CAD_, consists of scanning objects and automatically deducing their corresponding CAD models. Recently, solving this problem has attracted considerable interest from the Computer Vision and Graphics research communities [31, 13, 18, 23, 16, 9, 28, 30, 12, 17], thanks to the huge advances in 3D geometric deep learning and the availability of open repositories for CAD models. The idea is to learn a mapping from 3D scans to CAD models using the available models submitted by the designers in open repositories such as OnShape [24] and 3D Content Central [3]. While the representation of 3D scans is well established and often consists of meshes or point clouds, CAD model representations may vary depending on the use case.
Figure 1: 3D scans in CC3D dataset [5] contain various artefacts (unwanted protrusions, smoothness over surfaces/edges, missing regions). Tackling these artefacts is essential for robust Scan-to-CAD algorithms.
One approach for representing CAD models is through the final shape as a collection of geometric primitives, e.g., cylinders, cubes. Such a representation is expressed by Constructive Solid Geometry (CSG) modeling; however, modern CAD workflows use Feature-based modeling as a superior alternative. Feature-based modeling is widely used as it allows the creation of solids by iteratively adding features such as holes, slots, or bosses, thus giving more expressiveness to designers [33]. In this setting, the final object's geometry and topology are stored as a _Boundary Representation_ (B-Rep) which is a graph structure encoding parametric faces and edges, loops, and vertices [16]. Accordingly, recent works have tried to infer some of the attributes of B-Reps from point clouds to enable their editability. Some of them focused on inferring the edges [6, 35, 23], other attempts considered the prediction of the faces [27, 19], and a few of them aimed at predicting both faces and edges [21, 10].
CAD modeling can also be seen as the process that allows the creation of the final model, referred to as the _design history_. The design history consists of the set of ordered steps that were followed by the designer using a CAD software. In feature-based modeling, these ordered steps involve the drawing of CAD sketches [17, 25, 29] followed by CAD operations such as extrusion, revolution, etc. [31, 30]. Thanks to the availability of dedicated datasets [13, 30], multiple works in the literature focused on learning this design history in order to automatically generate plausible CAD models [31, 34], complete partial designs according to the intent of designers [32], or predict it from point clouds [28, 31, 15, 20].
From the two aforementioned representations for CAD models, the problem of Scan-to-CAD can be seen as either related to recovering some attributes of the B-Rep from the corresponding 3D scan, or inferring the design history that allowed its creation. Despite recent findings, this problem is far from being solved. In particular, the current efforts remain very limited in the context of real-world scenarios due to the strong assumptions that are made to over-simplify the problem. For instance, it is very common to consider simple objects (e.g., cubes and cylinders) and restrict the study to the basic extrusion operation [31, 34, 28]. Furthermore, most of the works in literature assimilate 3D scans to sampled point clouds on CAD models [31, 28, 15] which is not the case in real world scenarios. Indeed, as mentioned in [6, 5], 3D scans are often subject to scanning artifacts resulting in smoothed high-level geometrical details and missing parts. Compared to uniformly sampled point clouds on CAD models, these artifacts make the problem of Scan-to-CAD more challenging.
The aim of the SHARP challenge 2023 is to encourage and help the research community to get a step closer to the real-world setting of inferring CAD history and parameters of objects from their 3D scans. In particular, different variants of the CC3D dataset [5] are proposed along with three different tracks. It is important to highlight that the CC3D dataset has the advantage of bringing pairs of realistic 3D scans with their corresponding CAD models, thus enabling a more realistic scenario of Scan-to-CAD as compared to using sampled point clouds on CAD models. Furthermore, as stated in [9], the CAD models in CC3D dataset are more complex in nature than the ones used in literature [31, 30]. The tracks proposed in SHARP challenge span over the design history and the B-Rep of the CAD models, with one of them tackling the inference of B-Rep parametric edges from 3D scans, the second focusing on the per-point segmentation of B-Rep faces from scans, and the third aiming at segmenting scans into ordered CAD operation steps and types of the corresponding design history. Simple baseline solutions to the aforementioned tracks are also proposed along with a set of dedicated metrics to assess their performance.
The rest of the paper is organised as follows. Section 2 defines the different tracks introduced in the SHARP challenge and describes the datasets. In Section 3, the baseline methods for the proposed tracks are described. The evaluation metrics used to assess the performance of the methods are described in Section 4. Section 5 reports the results of the baseline methods. Finally, conclusions and perspectives of the proposed challenge are drawn in Section 6.
## 2 Challenge and Dataset Description
The SHARP 2023 challenge focuses on three different tasks to bridge the gap between 3D scans and their corresponding CAD models. Three versions of the CC3D dataset [5] are used in these tracks. The CC3D dataset is derived from open CAD repositories such as 3D Content Central [3]. Unlike other alternatives such as the ABC dataset [13], where the noise is usually synthetically added to the sampled point cloud, the 3D scans were obtained by virtually scanning the corresponding CAD models, using a proprietary 3D scanning pipeline developed by Artec3D [2]. As shown in Figure 1, the 3D shapes in the CC3D dataset may have artifacts in the form of missing parts or protrusions due to the specifics of the scanning system. The total number of samples of the CC3D dataset used in the SHARP challenge is \(31185\), split into \(25612\) training samples and \(5573\) test samples. Note that the same sets are used for all three tracks and the corresponding B-Rep models of the training set are also provided. The overall objective is to infer different information about the CAD model given a 3D scan. While _Track 1_ and _Track 2_ focus on inferring geometrical and topological properties of the Boundary Representation of the CAD model, _Track 3_ is centered around predicting attributes of the design history of the CAD model. More formally, let us consider a 3D scan represented by a point cloud \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}]\in\mathbb{R}^{N\times 3}\), where \(\mathbf{x}_{i}\) denotes the 3D coordinates of a point \(i\) and \(N\) the number of points.
The objective of _Track 1_ and _Track 2_ is to predict different attributes from the B-Rep \(\mathcal{B}\) of the corresponding CAD model, while _Track 3_ aims at recovering other attributes from the design history \(\mathcal{H}\) of the CAD model. The datasets and the attributes of each track are described next.
**Track 1: Parametric Sharp Edge Inference.** Given a 3D scan \(\mathbf{X}\), the goal of _Track 1_ (see Figure 2) is to recover: (1) the set of parametric edges \(\{\mathbf{e}_{j}\}_{j=1}^{N_{e}}\in\mathcal{E}\) that are present in the B-Rep \(\mathcal{B}\), where \(\mathcal{E}\) denotes the set of 3D parametric curves among circles, lines, and splines; and (2) a sharpness label \(s_{j}\in\{0,1\}\) indicating whether a recovered edge \(\mathbf{e}_{j}\) is sharp (\(s_{j}=1\)) or not (\(s_{j}=0\)). Note that recovering these parametric B-Rep edges and their sharpness from 3D scans is critical for CAD reverse engineering. Indeed, these edges encode the topology and the geometry of the boundary of the B-Rep and some of them can be part of the sketches drawn by the designer.
_CC3D-PSE dataset:_ This dataset consists of a set of 3D scans, annotated with a set of parametric edges and corresponding sharpness values. Both the ground truth edges and the sharpness are extracted from the B-Rep of the corresponding CAD model. The parametric edges are directly extracted from the B-Reps using OpenCascade API [1], and their sharpness value is computed as the angle between the normals of the two surface patches neighboring the edge. The distribution of the sharpness values (_left_ of Figure 3) reveals that about \(30\%\) of the edges have a sharpness value lower than a corresponding angle of 10 degrees. Also, about \(50\%\) of the edges have a sharpness of \(1.57\), corresponding to an angle of \(90\) degrees, which is expected in the context of CAD models. Three types of parametric edges are considered (lines, circles, and splines). From the distribution of the different edge types per model (_right_ of Figure 3), we observe that the line type is the most common type. Additionally, about \(15\%\) of the CAD models in the CC3D-PSE have more than 500 edges, demonstrating the complexity of the dataset. The annotations of the different edge types are parametrized as follows: (1) A line is parameterized by a start and an end point \(\mathbf{p}_{s}=(p_{s}^{x},p_{s}^{y},p_{s}^{z})\in\mathbb{R}^{3}\) and \(\mathbf{p}_{e}=(p_{e}^{x},p_{e}^{y},p_{e}^{z})\in\mathbb{R}^{3}\). (2) A circle (or circular arc) is defined by a start point \(\mathbf{p}_{s}=(p_{s}^{x},p_{s}^{y},p_{s}^{z})\in\mathbb{R}^{3}\), an end point \(\mathbf{p}_{e}=(p_{e}^{x},p_{e}^{y},p_{e}^{z})\in\mathbb{R}^{3}\), a center point \(\mathbf{p}_{c}=(p_{c}^{x},p_{c}^{y},p_{c}^{z})\in\mathbb{R}^{3}\), a normal vector \(\vec{\mathbf{n}}=(n_{x},n_{y},n_{z})\in\mathbb{R}^{3}\), and a radius \(r\in\mathbb{R}\). (3) A spline is parameterized by a degree \(K\in\mathbb{N}\) and a set of keypoints \(\mathbf{P}_{k}=[\mathbf{p}_{k}^{1},\mathbf{p}_{k}^{2},\ldots,\mathbf{p}_{k}^{K}]\), where each \(\mathbf{p}_{k}=(p_{k}^{x},p_{k}^{y},p_{k}^{z})\in\mathbb{R}^{3}\). Note that the original CC3D-PSE dataset has been used in the SHARP 2022 challenge [8] and in [6] for parametric sharp edge inference. This updated version includes all B-Rep edges and their sharpness.
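For concreteness, these parameterizations could be represented with simple containers such as the following sketch; the class and field names are illustrative and do not reflect the dataset's actual file format.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class Line:
    start: Point3D            # p_s
    end: Point3D              # p_e
    sharp: bool

@dataclass
class Circle:                 # full circle or circular arc
    start: Point3D            # p_s
    end: Point3D              # p_e
    center: Point3D           # p_c
    normal: Point3D           # n
    radius: float             # r
    sharp: bool

@dataclass
class Spline:
    degree: int               # K
    keypoints: List[Point3D]  # P_k
    sharp: bool
```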
**Track 2: Boundary-Representation (B-Rep) Face Segmentation**. Given a 3D scan \(\mathbf{X}~{}=~{}[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}]\), the goal of _Track 2_ (see Figure 2) is to simultaneously segment it into: **(1)** B-Rep face memberships, which consists of predicting for each point \(\mathbf{x}_{i}\) a face that it should belong to in the B-Rep \(\mathcal{B}\); **(2)** the type of that B-Rep face. In other words, the objective is to predict a point-to-face membership matrix \(\mathbf{M}\in\{0,1\}^{N\times N_{f}}\), where \(N_{f}\) denotes the number of the faces in the B-Rep \(\mathcal{B}\), and a per-point face type matrix \(\mathbf{T}\in\{0,1\}^{N\times N_{f_{t}}}\), where \(N_{f_{t}}\) is the number of B-Rep face types considered. Note that each point \(\mathbf{x}_{i}\) can only belong to a single face and should have a unique type, which suggests that each row of \(\mathbf{M}\) (and \(\mathbf{T}\)) should have a single entry
Figure 3: _(left)_ Histogram of sharpness values for the edges in the CC3D-PSE dataset. _(right)_ Boxplot of the number of edges per CAD models for different edge types.
Figure 2: Predictions targets for the SHARP Challenge 2023. Proposed tracks relate to recovering geometrical and topological properties of the B-rep _(track 1 and track 2)_, as well as attributes of the design history of the CAD model _(track 3)_.
with \(1\) and \(0\) anywhere else. The objective of this track is to infer the B-Rep face structure from raw 3D scans which is extremely important for CAD reverse engineering.
_CC3D-BRepFace dataset: Track 2_ uses CC3D-BRepFace, a newly introduced version of the CC3D dataset [5] that brings the annotations of the B-Rep face structure to 3D scans. In particular, the 3D scans were annotated with the face membership and type labels according to the corresponding B-Rep. This is done by processing the B-Reps with OpenCascade API [1] and transferring the labels to the points of the corresponding 3D scans through nearest neighbor assignment. This results in two annotations per point for each 3D scan: **(1)** an ID of the B-Rep face that it belongs to; and **(2)** the type of that face among six possible surface types (Plane, Cylinder, Cone, Sphere, Torus, or B-Spline). Figure 4 shows statistics of the CC3D-BRepFace dataset. The CC3D-BRepFace dataset contains a wide range of model complexities, with \(50\%\) of the models containing more than \(37\) faces and the maximum number of faces per model being \(4388\). We see that the Plane surface is the most common face type in the dataset, followed by Cylinder. Moreover, we found that \(\approx 60\%\) of the models contain at least 3 different face types.
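The nearest neighbor label transfer can be sketched as a simple lookup from scan points to points sampled on the B-Rep faces; this is a simplification of the actual annotation pipeline, and the variable names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_face_labels(scan_points, brep_points, brep_face_id, brep_face_type):
    """Assign each scan point the face ID and face type of its nearest
    point sampled on the B-Rep surfaces (all inputs are NumPy arrays)."""
    _, nearest = cKDTree(brep_points).query(scan_points)
    return brep_face_id[nearest], brep_face_type[nearest]
```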
**Track 3: Operation Type and Step Segmentation**. Given a 3D scan \(\mathbf{X}~{}=~{}[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}]\), the goal of _Track 3_ is to simultaneously segment it into: (1) the ordered CAD operation steps that allowed its creation; (2) the corresponding CAD operation types that were used by the designer. In other words, the objective is to predict a point-to-step membership matrix \(\mathbf{S}~{}\in~{}\{0,1\}^{N\times N_{s}}\), where \(N_{s}\) denotes the number of steps used in the CAD history \(\mathcal{H}\), and a per-point CAD operation type matrix \(\mathbf{O}~{}\in~{}\{0,1\}^{N\times N_{O_{t}}}\), where \(N_{O_{t}}\) is the number of CAD operation types considered. Similarly to _Track 2_, each point \(\mathbf{x}_{i}\) can only belong to a single CAD operation step and should have a unique CAD type, which suggests that each row of \(\mathbf{S}\) (and \(\mathbf{O}\)) should have a single entry with \(1\) and \(0\) anywhere else. Although _Track 3_ and _Track 2_ seem to be conceptually similar, there are two main differences. Firstly, the membership task in _Track 3_ aims at inferring ordered CAD steps, in contrast to _Track 2_ where the order of face membership does not matter. Secondly, and most importantly, the nature of the labels that are targeted in the two tracks is totally different. While _Track 2_ focuses on the B-Rep face structure, _Track 3_ goes beyond B-Reps and aims at recovering the CAD operation history out of 3D scans. As highlighted in [9; 16; 31], recovering these operation types and steps is crucial for CAD reverse engineering as it not only provides information about how the model was constructed but also at which stage of the design the geometry was created.
_CC3D-Ops dataset:_ The dataset used in _Track 3_ extends the CC3D-Ops dataset introduced in [9]. While the original CC3D-Ops dataset contains CAD operation type and step annotations on B-Rep faces, the extended version offers the same annotations on the corresponding 3D scans at the point level. In particular, each point of a 3D scan is labelled with: **(1)** a CAD operation step identifier; and **(2)** a CAD operation type that can be one of the following types: ExtrudeSide, ExtrudeEnd, RevolveSide, Fillet, Chamfer, CutExtrudeSide, CutExtrudeEnd, RevolveEnd, CutRevolveSide, CutRevolveEnd, and Other. Figure 5 shows statistics about these annotations. In the left part of this figure, it can be observed that most of the models of the CC3D-Ops dataset were constructed in \(50\) operation steps or less. In particular, \(90\%\) of the models were created in \(16\) or fewer operation steps, while the maximum number of steps per model is \(261\). In the right part of Figure 5, the operation type distribution is shown as a violin plot. It can be seen that the most common operation type is the extrusion type, but the dataset also contains models with a large number of faces created from less common operations such as revolution.
Figure 4: _(left)_: Histogram of the number of faces per CAD model for \(95\%\) the CC3D-BRepFace dataset. _(right)_ Boxplot of the number of faces per CAD models for the different types of faces per CAD model.
Figure 5: _(left)_: Histogram of the number of operation steps per CAD model in the CC3D-Ops dataset. _(right)_: violin plot of the number of faces per CAD models for the different operation types per CAD model. The horizontal lines represent the median values.
## 3 Baseline Methods
We provide simple baselines for all three tracks introduced for the SHARP challenge. Across tracks, the input 3D scan is represented as a point cloud \(\mathbf{X}~{}=~{}[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}]\) that is uniformly downsampled to fixed number of points \(N=10k\). For all three tracks, the input \(\mathbf{X}\) is mapped to a learned per-point representation \(\mathbf{W}~{}=~{}[\mathbf{w}_{1},\mathbf{w}_{2},\ldots,\mathbf{w}_{N}]\in \mathbb{R}^{N\times 960}\) through a point cloud backbone \(\Phi_{b}\). In this work, we model \(\Phi_{b}\) as the Point-Voxel CNN introduced in [22]. Then, \(\mathbf{W}\) is processed by output heads \(\Phi_{t_{1}},\Phi_{t_{2}}\) and \(\Phi_{t_{3}}\), developed to address tracks one to three as discussed next.
**Track 1 Baseline**. The goal of this track is to learn the mapping \(\Phi_{t_{1}}\) from the learned representation \(\mathbf{W}\) to a set of parametric edges \(\{\mathbf{e}_{j}\}_{j=1}^{N_{e}}~{}\in\mathcal{E}\) and their corresponding sharpness labels \(\{s_{j}\}_{j=1}^{N_{e}}\in\{0,1\}\). Our solution to this problem is based on the recently proposed Sepic-Net [7]. \(\Phi_{t_{1}}\) comprises two modules, the _decomposition module_\(\Phi_{t_{1},D}\) followed by the _fitting module_\(\Phi_{t_{1},F}\) that are trained in an end-to-end manner (see Figure 6 (_left_)).
_Decomposition module_\(\Phi_{t_{1},D}\): The decomposition module detects edge points, consolidates them along the edge and groups them into different segments with primitive types identified. More specifically, for each point representation \(\mathbf{w}_{i}\), we have \(\Phi_{t_{1},D}(\mathbf{w}_{i})~{}=~{}\{\mathcal{P}_{i}^{E},\mathbf{v}_{i}, \mathcal{P}_{i}^{T},\mathbf{f}_{i},\mathcal{P}_{i}^{S}\}\), where \(\mathcal{P}_{i}^{E}~{}\in~{}[0,1]\) is the probability that the \(i\)'th point is an edge point, \(\mathbf{v}_{i}~{}\in~{}\mathbb{R}^{3}\) is a displacement vector to the closest edge, \(\mathcal{P}_{i}^{T}~{}\in~{}[0,1]^{3}\) denote the probabilities of the primitive type of the closest edge among three possible types, namely, line, circle, and spline, and \(\mathbf{f}_{i}~{}\in~{}\mathbb{R}^{D_{emb}}\) is a per-point embedding that is used to cluster edge points into distinct segments. Finally, \(\mathcal{P}_{i}^{S}~{}\in~{}[0,1]\) predicts the probability that the closest edge is sharp. To group points corresponding to the same segment, a differentiable version of the mean-shift clustering algorithm is used [26]. Curve type and sharpness for each segment are determined through majority voting. Training labels for edge points, types, sharpness and displacement vectors are derived from the ground truth whereas the per-point embeddings are trained through a triplet loss that contrasts points from the same segment to points from different segments.
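At inference time, a non-differentiable approximation of this grouping step could use an off-the-shelf mean-shift implementation on the per-point embeddings of predicted edge points, as in the sketch below; the bandwidth and threshold values are placeholders, and the baseline itself relies on the differentiable formulation of [26].

```python
import numpy as np
from sklearn.cluster import MeanShift

def group_edge_points(embeddings, edge_prob, prob_thresh=0.5, bandwidth=0.5):
    """Cluster points predicted as edge points into edge segments.
    embeddings: (N, D) per-point embeddings f_i
    edge_prob:  (N,) per-point probabilities of lying on an edge."""
    idx = np.where(edge_prob > prob_thresh)[0]
    if idx.size == 0:
        return idx, np.array([], dtype=int)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(embeddings[idx])
    return idx, labels  # indices of edge points and their segment ids
```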
_Fitting module_\(\Phi_{t_{1},F}\): The fitting module performs parameter estimation for each detected curve/segment. Curve parameters are estimated through least-squares fitting using differentiable SVD [11] on the set of segmented edge points. Different formulations for differentiable curve parameter estimation are provided for lines, arcs, and splines. Thus, a fitting loss can be formed as a sum of distances between the points sampled on the predicted parameterized segments and points sampled uniformly on the ground truth segments. During training, the decomposition module is first pretrained for \(100\) epochs and then jointly finetuned with the fitting module using a combination of decomposition and fitting losses. For a more thorough description of the fitting module, readers are referred to [7].
**Track 2 Baseline.** The goal of this track is to predict a point-to-face membership matrix \(\mathbf{M}\in\{0,1\}^{N\times N_{f}}\) and a per-point face type matrix \(\mathbf{T}\in\{0,1\}^{N\times N_{f_{t}}}\), (\(N_{f}\) denotes the number of faces and \(N_{f_{t}}\) is the number of B-Rep face types). The above can be equivalently formulated as learning, for each \(\mathbf{w}_{i}\), the mapping \(\Phi_{t_{2}}(\mathbf{w}_{i})~{}=~{}\{\mathbf{\tilde{M}}_{i,:}^{p},\mathbf{ \tilde{T}}_{i,:}^{p}\}\), where \(\mathbf{\tilde{M}}_{i,:}^{p}~{}\in~{}[0,1]^{N_{f}}\) encodes the estimated probabil
Figure 6: Baseline model architecture for the three tracks. The input point cloud is first encoded through a PVCNN encoder. For _Track 1_, the fitting module of [7] is used to produce the set of parametric edges. For _Track 2_ and _Track 3_, the mapping from the vertex embeddings to the target labels is learned directly through separate MLPs.
ities of the \(i\)'th point belonging to one of the \(N_{f}\) faces, and \(\tilde{\mathbf{T}}_{i,:}^{p}~{}\in~{}[0,1]^{N_{f_{t}}}\) denotes the face type probabilities of the \(i\)'th point. This results in two probability matrices \(\tilde{\mathbf{M}}^{p}\in[0,1]^{N\times N_{f}}\) and \(\tilde{\mathbf{T}}^{p}~{}\in~{}[0,1]^{N\times N_{f_{t}}}\) for a scan \(\mathbf{X}\).
To learn \(\tilde{\mathbf{T}}^{p}\), a standard categorical cross-entropy loss is employed _w.r.t_ the ground truth per-point face types. Learning \(\tilde{\mathbf{M}}^{p}\) for face membership prediction additionally requires addressing the inherent ambiguity in face membership labelling, as face labels are arbitrary and do not necessarily have a semantic meaning. Inspired by [19], a Hungarian matching [14] step is adopted to match the predicted face grouping to the ground truth by computing a Relaxed Intersection over Union (RIoU) [19] cost between the predicted membership probabilities and the ground truth face labels. Following class assignment recovery, the RIoU is further employed in a loss function to learn the grouping as in [19]. Finally, the estimated face memberships \(\tilde{\mathbf{M}}\in\{0,1\}^{N\times N_{f}}\) and face types \(\tilde{\mathbf{T}}\in\{0,1\}^{N\times N_{f_{t}}}\) are obtained through majority voting from \(\tilde{\mathbf{M}}^{p}\) and \(\tilde{\mathbf{T}}^{p}\), respectively. Note that since \(\mathbf{X}\) is downsampled with \(N=10k\) during training, an upsampling of the predictions to the original scan resolution is used at inference.
**Track 3 Baseline.** The goal of this track is to predict a point-to-step membership matrix \(\mathbf{S}~{}\in~{}\{0,1\}^{N\times N_{s}}\) and a per-point CAD operation matrix \(\mathbf{O}~{}\in~{}\{0,1\}^{N\times N_{O_{t}}}\), where \(N_{s}\) denotes the number of the steps used in the CAD history and \(N_{O_{t}}\) is the number of CAD operation types considered. Similarly to _Track 2_, the above can be equivalently formulated as learning, for each per-point representation \(\mathbf{w}_{i}\), the mapping \(\Phi_{t_{3}}(\mathbf{w}_{i})=\{\tilde{\mathbf{S}}_{i,:}^{p},\tilde{\mathbf{O}}_{i,:}^{p}\}\), where \(\tilde{\mathbf{S}}_{i,:}^{p}~{}\in~{}[0,1]^{N_{s}}\) encodes the estimated probabilities of the \(i\)'th point belonging to one of the \(N_{s}\) CAD operation steps, and \(\tilde{\mathbf{O}}_{i,:}^{p}~{}\in~{}[0,1]^{N_{O_{t}}}\) denotes the CAD operation type probabilities of the \(i\)'th point. This yields two probability matrices \(\tilde{\mathbf{S}}^{p}~{}\in~{}[0,1]^{N\times N_{s}}\) and \(\tilde{\mathbf{O}}^{p}~{}\in~{}[0,1]^{N\times N_{O_{t}}}\) for a scan \(\mathbf{X}\). Compared to the proposed solution for _Track 2_, the operation steps here are ordered and thus there is no need to address any labelling ambiguity. Hence, both \(\tilde{\mathbf{S}}^{p}\) and \(\tilde{\mathbf{O}}^{p}\) can be learned through a standard categorical cross-entropy loss. The final predicted operation steps \(\tilde{\mathbf{S}}~{}\in~{}\{0,1\}^{N\times N_{s}}\) and types \(\tilde{\mathbf{O}}~{}\in~{}\{0,1\}^{N\times N_{O_{t}}}\) are obtained by majority voting from the learned probability matrices \(\tilde{\mathbf{S}}^{p}\) and \(\tilde{\mathbf{O}}^{p}\), respectively. Note that the upsampling of the predictions is also performed here as described in Section 3 for _Track 2_.
## 4 Evaluation Metrics
To evaluate the produced solutions, a set of dedicated metrics is considered to form a final score between \(0\) and \(1\). Evaluation is conducted on the Codalab platform4\({}^{,}\)5\({}^{,}\)6.
Footnote 4: [https://codalab.lisn.upsaclay.fr/competitions/13629](https://codalab.lisn.upsaclay.fr/competitions/13629)
Footnote 5: [https://codalab.lisn.upsaclay.fr/competitions/13956](https://codalab.lisn.upsaclay.fr/competitions/13956)
Footnote 6: [https://codalab.lisn.upsaclay.fr/competitions/13676](https://codalab.lisn.upsaclay.fr/competitions/13676)
**Evaluation Metrics for Track 1**. The evaluation of _Track 1_ consists of assessing the quality of the estimated parametric edges \(\{\tilde{\mathbf{e}}_{j}\}_{j=1}^{\tilde{N}_{e}}\) and their predicted sharpness with respect to the ground truth \(\{\mathbf{e}_{j}\}_{j=1}^{N_{e}}\). Note that the number of predicted edges \(\tilde{N}_{e}\) can be different from the ground truth number \(N_{e}\). This is achieved following three criteria that are described in the following. For notation simplicity, we will denote the predicted set of edges by \(\tilde{\mathbf{e}}\) and the ground truth ones by \(\mathbf{e}\).
_Edge Recovery Score:_ This score measures the similarity between the predicted and the ground truth sets of parametric edges, denoted by \(\tilde{\mathbf{e}}\) and \(\mathbf{e}\), respectively. Given the parametric formulation of the predicted and ground truth edges described in Section 2, the first step is to uniformly sample a set of 3D points on these edges proportionally to their length. This results in two 3D point sets \(\tilde{\mathbf{Z}}=\{\tilde{\boldsymbol{\zeta}}_{i}\}_{i=1}^{\tilde{N}_{\zeta}}\) and \(\mathbf{Z}=\{\boldsymbol{\zeta}_{i}\}_{i=1}^{N_{\zeta}}\) for the predicted and ground truth edges, respectively. Then, two unidirectional Chamfer distances [4] are separately computed between the sampled point set on the predicted edges \(\tilde{\mathbf{Z}}\) and the sampled one on the ground truth \(\mathbf{Z}\) as follows,
\[d_{CD}\left(\tilde{\mathbf{Z}},\mathbf{Z}\right)=\frac{1}{\tilde{N}_{\zeta}D_{\tilde{\mathbf{Z}}}}\sum_{i=1}^{\tilde{N}_{\zeta}}\min_{\boldsymbol{\zeta}_{j}\in\mathbf{Z}}\|\tilde{\boldsymbol{\zeta}}_{i}-\boldsymbol{\zeta}_{j}\|_{2}^{2}\;, \tag{1}\] \[d_{CD}\left(\mathbf{Z},\tilde{\mathbf{Z}}\right)=\frac{1}{N_{\zeta}D_{\mathbf{Z}}}\sum_{i=1}^{N_{\zeta}}\min_{\tilde{\boldsymbol{\zeta}}_{j}\in\tilde{\mathbf{Z}}}\|\boldsymbol{\zeta}_{i}-\tilde{\boldsymbol{\zeta}}_{j}\|_{2}^{2}\;, \tag{2}\]
where \(D_{\tilde{\mathbf{Z}}}\) and \(D_{\mathbf{Z}}\) denote the lengths of the diagonals of the bounding boxes of \(\tilde{\mathbf{Z}}\) and \(\mathbf{Z}\), respectively. The obtained Chamfer distances are further normalized through a function \(\Phi_{k}(d_{CD})=e^{-kd_{CD}}\), which maps a distance \(d_{CD}\) to a score in \([0,1]\), where the parameter \(k\) is chosen according to the conducted baselines. The two unidirectional scores are finally averaged to obtain a single edge recovery score
\[S_{e}(\tilde{\mathbf{e}},\mathbf{e})=\frac{1}{2}(\Phi_{k}(d_{CD}(\tilde{\mathbf{Z }},\mathbf{Z}))+\Phi_{k}(d_{CD}(\mathbf{Z},\tilde{\mathbf{Z}})))\;. \tag{3}\]
Note that accurately predicted edges close to the ground truth are expected to have a high edge recovery score \(S_{e}\).
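A brute-force NumPy sketch of this score is given below; the value of \(k\) is a placeholder, since it is only stated that \(k\) is chosen according to the conducted baselines.

```python
import numpy as np

def bbox_diagonal(Z):
    return np.linalg.norm(Z.max(axis=0) - Z.min(axis=0))

def one_sided_chamfer(A, B):
    """Mean squared distance from each point of A to its nearest point of B,
    normalized by the bounding-box diagonal of A (Eqs. 1-2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean() / bbox_diagonal(A)

def edge_recovery_score(Z_pred, Z_gt, k=50.0):
    """Average of the two normalized unidirectional Chamfer scores (Eq. 3)."""
    phi = lambda d: np.exp(-k * d)
    return 0.5 * (phi(one_sided_chamfer(Z_pred, Z_gt)) +
                  phi(one_sided_chamfer(Z_gt, Z_pred)))
```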
_Edge length Score:_ This score quantifies the similarity between the total edge length of the prediction \(\tilde{\mathbf{e}}\) and that of the ground truth \(\mathbf{e}\). The length of the predicted and ground truth edges denoted as \(\tilde{L}\) and \(L\), respectively, are computed by summing over the lengths of all edges in \(\tilde{\mathbf{e}}\) and \(\mathbf{e}\). Note that for splines, the length is estimated by densely sampling points on the spline and then accumulating the consecutive point distances. The normalized length score is given by
\[S_{l}(\tilde{\mathbf{e}},\mathbf{e})=1-|\frac{1-(\tilde{L}/L)}{1+(\tilde{L}/L)}|\;, \tag{4}\]
Note that this score has a range of \([0,1]\). Predicted edges with accurate length estimations are expected to have high length scores \(S_{l}\).
_Sharpness Score:_ The sharpness estimation task is formulated as a binary classification problem where each predicted edge \(\tilde{\mathbf{e}}_{j}\in\tilde{\mathbf{e}}\) is classified as either sharp (\(\tilde{s}_{j}=1\)) or not sharp (\(\tilde{s}_{j}=0\)). Since the ground truth sharpness is given as a continuous value (ranging in \([0,2\pi]\)), we use a threshold of \(1.5\), above which an edge is considered sharp. The sharpness score is defined as a weighted accuracy of the predicted sharpness scores \(\tilde{\mathbf{s}}\) with respect to the ground truth \(\mathbf{s}\). In practice, similarly to the edge recovery score, we sample two 3D point sets \(\tilde{\mathbf{Z}}=\{\tilde{\boldsymbol{\zeta}}_{i}\}_{i=1}^{\tilde{N}_{\zeta}}\) and \(\mathbf{Z}=\{\boldsymbol{\zeta}_{i}\}_{i=1}^{N_{\zeta}}\) for the predicted and ground truth edges, respectively. Since the sharpness label is defined per edge, this label is transferred to the 3D points that form that edge. This results in a set of ground truth sharpness labels \(\{\sigma_{i}\}_{i=1}^{N_{\zeta}}\in\{0,1\}\), where each \(\sigma_{i}\) denotes the sharpness label of the point \(\boldsymbol{\zeta}_{i}\). Similarly, a set of predicted sharpness labels \(\{\tilde{\sigma}_{i}\}_{i=1}^{\tilde{N}_{\zeta}}\in\{0,1\}\) is produced from the predicted edge sharpness labels \(\tilde{\mathbf{s}}\). The sharpness score is then given by the following weighted accuracy,
\[S_{s}=\frac{1}{\tilde{N}_{\zeta}}\sum_{i=1}^{\tilde{N}_{\zeta}}\Phi_{k}(\| \tilde{\mathbf{\zeta}}_{i}-\mathbf{\zeta}_{\Gamma(i)}\|_{2}^{2})\cdot\mathbb{1}\left( \tilde{\sigma}_{i}=\sigma_{\Gamma(i)}\right)\,, \tag{5}\]
where \(\Gamma(i)\) matches an index \(i\) of a point \(\tilde{\mathbf{\zeta}}_{i}\) in the predicted edges to the index of the closest point in the ground truth edges in the sense of Euclidean distance. \(\Phi_{k}\) is the same mapping function used in the edge recovery score and \(\mathbb{1}(.)\) is an indicator function.
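The sharpness score can be sketched with a KD-tree providing the nearest-neighbour matching \(\Gamma\); again, the value of \(k\) is a placeholder.

```python
import numpy as np
from scipy.spatial import cKDTree

def sharpness_score(Z_pred, sharp_pred, Z_gt, sharp_gt, k=50.0):
    """Weighted accuracy of per-point sharpness labels (Eq. 5).
    sharp_pred / sharp_gt are 0/1 arrays aligned with the sampled points."""
    dists, nearest = cKDTree(Z_gt).query(Z_pred)     # Gamma(i)
    weights = np.exp(-k * dists ** 2)                # Phi_k on squared distances
    correct = (sharp_pred == sharp_gt[nearest]).astype(float)
    return float((weights * correct).mean())
```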
_Final Score:_ The final score of _Track 1_ is a combination of the three scores mentioned above. For each sample, it is computed as the average of the edge recovery score \(S_{e}\), the length score \(S_{l}\), and the sharpness Score \(S_{s}\), or \(S_{track1}~{}=~{}\frac{1}{3}~{}(S_{e}+S_{l}+S_{s})\).
**Evaluation Metrics for Track 2**. The goal of Track 2 is to predict the B-Rep face membership and the types for each point of the scan. Accordingly, we evaluate the predictions following two criteria that are described below.
_Face Membership Score:_ Given a scan \(\mathbf{X}\), the predicted face membership \(\tilde{\mathbf{M}}\in\{0,1\}^{N\times\tilde{N}_{f}}\) is evaluated with respect to the ground truth face membership \(\mathbf{M}\in\{0,1\}^{N\times N_{f}}\) using Intersection Over Union (IoU). Note that the number of predicted face memberships \(\tilde{N}_{f}\) can be different from the one of the ground truth \(N_{f}\). Moreover, the evaluation of the face membership segmentation task requires addressing the inherent ambiguity in face membership labelling, as predicted face labels do not necessarily have a predefined match with ground truth class labels. To handle this issue, we use the Hungarian matching algorithm [14] to perform optimal matching between predicted face memberships and ground truth face memberships. Hungarian matching is able to find the best one-to-one correspondence that maximizes the total IoU across all matched pairs. This results in the following face membership score,
\[S_{m}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}IoU(\Lambda(\tilde{\mathbf{M}})_{:,i}, \mathbf{M}_{:,i})\;, \tag{6}\]
where \(\Lambda()\) is the reordering from Hungarian matching.
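A sketch of this score using SciPy's Hungarian solver, assuming per-point integer face labels for both the prediction and the ground truth, is given below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def face_membership_score(pred_labels, gt_labels):
    """IoU between predicted and ground-truth face segmentations after
    optimal one-to-one matching of face IDs (Eq. 6)."""
    pred_ids, gt_ids = np.unique(pred_labels), np.unique(gt_labels)
    iou = np.zeros((len(pred_ids), len(gt_ids)))
    for a, p in enumerate(pred_ids):
        for b, g in enumerate(gt_ids):
            inter = np.sum((pred_labels == p) & (gt_labels == g))
            union = np.sum((pred_labels == p) | (gt_labels == g))
            iou[a, b] = inter / union if union else 0.0
    rows, cols = linear_sum_assignment(-iou)  # maximize the total matched IoU
    # Average over all ground-truth faces; unmatched faces contribute zero.
    return iou[rows, cols].sum() / len(gt_ids)
```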
_Face Type Score:_ Similarly to the face membership score, the predicted face types \(\tilde{\mathbf{T}}~{}\in~{}\{0,1\}^{N\times N_{f_{t}}}\) is evaluated with respect to the ground truth face types \(\mathbf{T}~{}\in~{}\{0,1\}^{N\times N_{f_{t}}}\) using IOU. However, in contrast to the face membership scenario where a Hungarian matching was necessary, the IoU is directly computed for the face types to yield the following face type score,
\[S_{t}=\frac{1}{N_{f_{t}}}\sum_{i=1}^{N_{f_{t}}}IoU(\tilde{\mathbf{T}}_{:,i}, \mathbf{T}_{:,i})\;. \tag{7}\]
_Final Score:_ The final score for each sample is the average of the face membership score \(S_{m}\) and the face type score \(S_{t}\) and is given by \(S_{track2}=\frac{1}{2}(S_{m}+S_{t})\).
**Evaluation Metrics for Track 3**. As in _Track 2_, the predicted CAD operation steps and types are evaluated following two criteria.
_CAD Step Score:_ Given a scan \(\mathbf{X}\), the predicted step membership \(\tilde{\mathbf{S}}\in\{0,1\}^{N\times\tilde{N}_{s}}\) is evaluated with respect to the ground truth step membership \(\mathbf{S}\in\{0,1\}^{N\times N_{s}}\) using IoU. As the objective is to predict ordered steps, Hungarian matching is not used and the CAD step score is given by
\[S_{st}=\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}IoU(\tilde{\mathbf{S}}_{:,i},\mathbf{ S}_{:,i})\;. \tag{8}\]
_CAD Type Score:_ As done for the face types of Track 2, the predicted CAD operation types \(\tilde{\mathbf{O}}~{}\in~{}\{0,1\}^{N\times N_{O_{t}}}\) are evaluated with respect to the ground truth operation types \(\mathbf{O}~{}\in~{}\{0,1\}^{N\times N_{O_{t}}}\) using the following IoU based score
\[S_{ot}=\frac{1}{N_{O_{t}}}\sum_{i=1}^{N_{O_{t}}}IoU(\tilde{\mathbf{O}}_{:,i},\mathbf{O}_{:,i})\;. \tag{9}\]
_Final Score:_ The final score for each sample is the average of the CAD step score \(S_{st}\) and the CAD type score \(S_{ot}\) and is given by \(S_{track3}=\frac{1}{2}(S_{st}+S_{ot})\).
## 5 Results
The proposed baseline methods for all three tracks are evaluated on the dedicated test partition. Results are shown in Table 1 (_left_) and are also given on the online challenge leaderboard (hosted on the Codalab platform [4, 5, 6]). It is essential to note that the performance of the evaluated baselines is not particularly strong, with a reported final score of \(0.34\) for _Track 1_ and _Track 3_, and \(0.32\) for _Track 2_. We highlight that the primary motivation of this experimental analysis is to establish a reference point on baseline performance
for all tracks, rather than striving to set new track records. Consequently, the baselines were not subject to optimization. For instance, we did not specifically address issues such as class imbalance, undertake extensive hyperparameter tuning, or utilise the adaptive sampling scheme of [7] for enhanced edge detection. In the qualitative results of Figure 7, we observe that our baseline model tends to segment larger circles into fragmented sets of shorter edges, cluster face IDs together, and conflate distinct operation steps. These findings underscore the extent of the difficulty of the challenge and highlight potential areas for improvements in future iterations.
To provide additional insights on model performance, we present a histogram of _prediction-to-groundtruth_ length ratios in Figure 8. We find that our baseline consistently underestimates edge lengths thus limiting performance. In Figure 9, we report per-type IoU for face types (_Track 2_) and operation types (_Track 3_). Our baseline struggles to capture less common types in both cases due to a significant imbalance in class frequency (as also shown in Figure 4 and Figure 5). Finally, we follow [9] and report face and step prediction consistency in Table 1 (_right_) as the percentage of vertices sharing the same face membership / operation step also having the same type. We identify that future improvements in terms of consistency (we report \(0.79\) for _Track 2_ and \(0.90\) for _Track 3_) can positively affect performance.
## 6 Conclusion
In this paper, we introduce the SHARP challenge 2023, aiming to address the nuances of the Scan-to-CAD problem through three distinct tracks. For every track, a new version of the challenging CC3D dataset is presented, along with an exhaustive description of the evaluation metrics and proposed baseline methodologies. This challenge is designed to encourage forthcoming advancements in reverse engineering from 3D scans in a real-world setting.
**Acknowledgement:** Present work is supported by the National Research Fund, Luxembourg under the BRIDGES2021/IS/16849599/FREE-3D and IF/17052459/CASCADES projects, and by Artec 3D.
Table 1: Baseline evaluation scores for Track 1, Track 2, and Track 3 (left) and type-prediction consistency for Tracks 2 and 3 (right).
2304.12304 | A Survey on Multi-Resident Activity Recognition in Smart Environments | Human activity recognition (HAR) is a rapidly growing field that utilizes
smart devices, sensors, and algorithms to automatically classify and identify
the actions of individuals within a given environment. These systems have a
wide range of applications, including assisting with caring tasks, increasing
security, and improving energy efficiency. However, there are several
challenges that must be addressed in order to effectively utilize HAR systems
in multi-resident environments. One of the key challenges is accurately
associating sensor observations with the identities of the individuals
involved, which can be particularly difficult when residents are engaging in
complex and collaborative activities. This paper provides a brief overview of
the design and implementation of HAR systems, including a summary of the
various data collection devices and approaches used for human activity
identification. It also reviews previous research on the use of these systems
in multi-resident environments and offers conclusions on the current state of
the art in the field. | Farhad MortezaPour Shiri, Thinagaran Perumal, Norwati Mustapha, Raihani Mohamed, Mohd Anuaruddin Bin Ahmadon, Shingo Yamaguchi | 2023-04-24T17:55:10Z | http://arxiv.org/abs/2304.12304v1 | # A Survey on Multi-Resident Activity Recognition in Smart Environments
###### Abstract
Human activity recognition (HAR) is a rapidly growing field that utilizes smart devices, sensors, and algorithms to automatically classify and identify the actions of individuals within a given environment. These systems have a wide range of applications, including assisting with caring tasks, increasing security, and improving energy efficiency. However, there are several challenges that must be addressed in order to effectively utilize HAR systems in multi-resident environments. One of the key challenges is accurately associating sensor observations with the identities of the individuals involved, which can be particularly difficult when residents are engaging in complex and collaborative activities. This paper provides a brief overview of the design and implementation of HAR systems, including a summary of the various data collection devices and approaches used for human activity identification. It also reviews previous research on the use of these systems in multi-resident environments and offers conclusions on the current state of the art in the field.
Keywords: Multi-Resident, Human Activity Recognition, Sensors, IoT, Machine Learning, Deep Learning.
## 1 Introduction
Microelectronics and Internet of Things (IoT) technologies are constantly improving, and as a result, sensors are becoming more widely used in everyday situations. These intelligent items make it simple to obtain a variety of information, and numerous fields can benefit from this knowledge [1]. Human activity recognition (HAR) has been a focus of research for many years, with significant contributions to the development of various technologies such as smart homes, smart health tracking, smart offices, and smart security. Using computer systems to analyze and comprehend human activity, together with assistive technologies, can significantly enhance daily life [2].
HAR systems are effective in a variety of applications, including ambient assisted living and healthcare [3]. In healthcare, they can be used to monitor and manage individuals, particularly the elderly and disabled, through recognition of human activity [4]. These systems can also be utilized in sports to automate recognition of hand movements, in security systems to verify user identity through gait analysis, in self-health management, in military applications, and in human-robot interaction through gesture recognition [5].
Despite the numerous applications and previous research and development efforts, the HAR algorithms currently face a number of challenges. One of these challenges involves recognizing simple and complex human activities [6]. Simple human activities (SHAs) are those that can be described as a single action occurring in a brief period of time, such as standing or sitting. Complex human activities (CHAs), on the other hand, often involve multiple concurrent or overlapping actions occurring over a longer time period, such as cooking or writing. It can be difficult to identify CHAs and differentiate them from SHAs. Other challenges include accurately recognizing a wide range of complex everyday activities, balancing efficiency and privacy, meeting computational requirements for portable and embedded devices, the complexity of data interpretation [7], and a major challenge related to multi-resident environments. The increasing complexity of human activities and difficulties in multi-resident environments have resulted in a data association challenge in the real world [8]. This challenge involves accurately linking each sensor event in a multi-resident area to the person who caused it [9].
The aim of this research is to provide an overview of the current status of human activity recognition (HAR) systems in multi-resident environments. Specifically, Section 2 discusses the process of designing and implementing a HAR system. In Section 3, we delve into the concepts, challenges, datasets, and literature related to recognizing activities in multi-resident settings. Finally, the paper concludes in Section 4.
## 2 HAR Systems
The design and implementation of HAR systems is surveyed briefly in this section. Figure. 1 illustrates a high-level workflow for designing an HAR system [7]. The first step for developing a HAR system is the data collection. The data for a HAR system is acquired using various devices with embedded sensors such as smart phones, and smart watches [10]. The next step in the process is to prepare the raw data for analysis through data cleaning, normalization, and the extraction of features [5]. The third step involves selecting and training a machine learning model, taking into account the number of activities and the quantity of data available [7]. This model is then trained using the cleaned data from the preprocessing stage. Finally, the model is evaluated using metrics such as accuracy, recall, and precision [11]. Further details on each of these steps are provided below.
### Data Collection
Sensors, actuators, and smart devices such as smart phones, smart watches [10], and smart glasses [12] are used for collecting raw data in smart environments. Figure. 2 provides an overview of various data collection methods for an HAR system. In general, the data collection techniques can be classified into two categories: vision-based techniques [13] and sensor-based methods [14].
Figure 1: HAR System Workflow
**Vision-Based Methods**: Cameras that capture visual data are utilized by human activity recognition (HAR) systems to observe changes in the environment and monitor human behavior. A variety of cameras, including basic RGB cameras [15] and more advanced systems that utilize multiple cameras for stereo vision or employ depth cameras that measure the depth of an image using infrared light [13], are employed in vision-based techniques.
**Sensor-Based Methods**: Sensors are generally classified into two categories: radio-based sensors and binary sensors. The most common radio-based sensing systems are Bluetooth, ZigBee, Z-Wave, RFID, 6LoWPAN, and WiFi [16]. Binary sensors are divided into two groups: ambient (environmental) sensors and wearable sensors [17]. Ambient sensors are typically installed in close proximity to collect precise information on basic environmental characteristics [7]. Common ambient sensors include pressure sensors, barometers, temperature sensors (thermocouples) [17], sound sensors [18], and passive infrared (PIR) sensors [19]. One category of emerging sensor devices are wearables [3]. Wearable sensors are divided into two groups [14]: inertial sensors and physiological sensors. Inertial sensors are the most popular type of wearable technology for measuring motion and the physical activities involved in daily life; accelerometers, gyroscopes, and magnetic sensors are the most popular inertial sensors [20]. The last group is that of physiological sensors: physiological signals reflect the electrical activity of a particular body part. There are four common physiological signals, namely the electromyogram (EMG), electroencephalogram (EEG), electrocardiogram (ECG), and electrooculogram (EOG) [21]. One important physiological sensor is the electroencephalogram (EEG), an electronic device that measures the electrical signals produced by the brain [22]. EEG sensors typically record the electrical signals produced over time by the activity of large groups of neurons close to the surface of the brain.
Figure 2: **Data collection methods in HAR system**
### Data processing
Data processing is a vital step in the creation of a HAR system and involves various techniques for data cleaning, normalization of data, and extraction of features [7].
Data cleaning is the process of finding and fixing or deleting errors, anomalies, systemic problems, and inaccuracies in a dataset. Its goal is to guarantee that the dataset is accurate, comprehensive, and consistent, so that it can be analyzed successfully [23]. Data cleaning usually consists of two steps: error detection, in which the different errors are found and possibly verified by experts, and error correction, in which adjustments to the dataset are implemented (or proposed to human experts) to make the data cleaner. Real-world datasets frequently contain a variety of error types, including missing values, outliers, inconsistencies, duplicates, and mislabels [24]:
* **Missing values:** a dataset has missing values if some values were not recorded or are not known.
* **Outliers:** an outlier is an observation that differs significantly from the other data points in the dataset.
* **Duplicates:** records that refer to the same real-world entity are considered duplicates; duplicate entries in a dataset can distort analyses and produce false results.
* **Inconsistencies:** an inconsistency occurs when two cells in a column hold different values where they ought to hold the same one; for instance, a state column might contain both "CA" and "California".
* **Mislabels:** when an example is labelled incorrectly, it is called a mislabel.
The feature extraction process in a sensor-based HAR system generally focuses on the time and frequency domains [25]. Typically, features such as the median, variance, mean, range, and skewness are extracted by time-domain approaches, while frequency-domain features include spectral entropy, spectral power, peak power, and peak frequency [10]. With traditional machine learning techniques, features often need to be manually extracted or engineered, which typically demands specialist knowledge and a significant amount of human effort, whereas deep learning techniques can directly extract robust features from raw data for a specific purpose [26].
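To make the two feature families concrete, the following sketch computes a few time- and frequency-domain features from a single fixed-length sensor window using NumPy and SciPy. The specific features, the window length, and the sampling rate are illustrative choices, not settings prescribed by the cited works.

```python
import numpy as np
from scipy.stats import skew

def window_features(window, fs=50.0):
    """Hand-crafted features for one 1-D sensor window (e.g., an accelerometer
    axis sampled at fs Hz)."""
    x = np.asarray(window, dtype=float)
    # Time domain: mean, variance, median, range, skewness.
    feats = [x.mean(), x.var(), np.median(x), x.max() - x.min(), skew(x)]
    # Frequency domain: peak frequency, peak power, spectral entropy.
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    p = power / power.sum() if power.sum() > 0 else np.full_like(power, 1.0 / power.size)
    feats += [freqs[np.argmax(power)], power.max(), -np.sum(p * np.log2(p + 1e-12))]
    return np.array(feats)

features = window_features(np.sin(np.linspace(0, 20 * np.pi, 128)))  # toy 128-sample window
```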
### Model Selection
In order to develop a HAR system, it is necessary to select and train a classification model. This decision should take into account the quantity of activities to be recognized, the volume of data available for training, and whether computation will be performed locally or remotely [7]. Many well-known machine learning methods have been used in HAR systems for classification. These methods can be classified into two main groups: classical machine learning (CML) methods, and deep learning (DL) techniques. Classical machine learning (CML) models have lower computational needs as well as fewer training data requirements than Deep learning (DL) models. Nevertheless, DL models are capable of identifying more complicated activities with a higher accuracy. Moreover, deep neural networks are capable of learning representative features from unprocessed data with little domain expertise [26].
#### Classical Machine Learning Models
Machine learning approaches employ computers to replicate human learning processes in order to acquire new information and abilities, recognize existing knowledge, and continually improve performance and success [27]. Machine learning approaches can generally be divided into four categories: supervised, unsupervised, semi-supervised, and reinforcement learning [28]. Various machine learning models have been developed in different fields. In the following, we examine a few notable classical machine learning models used in HAR systems.
#### 2.3.1.1 KNN
The K-Nearest Neighbors (KNN) algorithm is a classification model that classifies a presented sample point by comparing it with a dataset containing data points divided into several groups [29]. The KNN model has various merits: it is simple and easy to use, building the model costs little, and it is a very adaptable classification technique that works well for multimodal classes.
KNN models have been used for recognizing human activity by several researchers [30, 31]. A study by [32] proposes a powerful and innovative model for human activity recognition called Reduced Kernel K-Nearest Neighbors.
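A minimal scikit-learn sketch of such a distance-based classifier, trained on window-level feature vectors, is shown below; the synthetic data stands in for the features produced by the processing stage, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))        # placeholder feature matrix (one row per window)
y = rng.integers(0, 4, size=300)     # placeholder activity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
# Feature scaling matters because KNN relies on Euclidean distances.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_tr, y_tr)
print("KNN accuracy:", knn.score(X_te, y_te))
```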
#### 2.3.1.2 HMM
The Hidden Markov Model (HMM) is a statistical model based on a Markov process with hidden states. In an HMM, the system changes over time through a series of discrete states that are not directly observable but produce observable outputs. Moreover, the probability of changing from one state to another depends only on the present state and is independent of any earlier states. A number of applications, including face identification, speech recognition, gesture recognition, bioinformatics, and gene prediction, are theoretically supported by the mathematical structure of HMMs [33].
Several articles have used HMM-based models for recognizing human activity in smart environments [34, 35]. A study by [9] suggests two HMM-based models, the Linked Hidden Markov Model (LHMM) and the Combined-label Hidden Markov Model (CL-HMM), for recognizing human activity in smart areas with multiple residents. In [36], a Gaussian Mixture Hidden Markov Model (GMM-HMM) for human activity recognition is presented: human activity is described as a Markov process, and complex activity patterns are fitted by several Gaussian density functions.
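As a small illustration of this family of models, the sketch below fits a Gaussian HMM to a toy one-dimensional observation sequence and decodes the hidden state sequence. It assumes the third-party `hmmlearn` package; the data and the number of states are invented for the example and do not come from the cited studies.

```python
import numpy as np
from hmmlearn import hmm

# Toy observations: one scalar feature per time step, drawn from two regimes.
rng = np.random.default_rng(1)
obs = np.concatenate([rng.normal(0.0, 1.0, 100),
                      rng.normal(5.0, 1.0, 100)]).reshape(-1, 1)

# Fit a 2-state Gaussian HMM and recover the most likely hidden state per step.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
model.fit(obs)
hidden_states = model.predict(obs)
print(hidden_states[:5], hidden_states[-5:])
```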
#### 2.3.1.3 Random Forest
The Random Forest (RF) model is an ensemble classifier that creates several decision trees using randomly chosen subsets of the training samples and variables. The RF classifier produces trustworthy classifications by leveraging the predictions obtained from a group of decision trees. Moreover, this classifier may be used to select and rank the variables that have the best capacity to distinguish between the target classes [37].
Various articles have used random-forest-based models for identifying human activities [38, 39, 40]. The authors of [41] proposed a hybrid technique based on Principal Component Analysis (PCA) and Random Forest (RF) that combines an effective approach for extracting clustering feature information with an effective algorithm for classifying data.
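In the spirit of such a hybrid, the following scikit-learn sketch chains PCA-based feature compression with a Random Forest classifier; it is not the implementation of [41], and the synthetic data and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))       # placeholder window features
y = rng.integers(0, 5, size=500)     # placeholder activity labels

# PCA compresses correlated sensor features before the ensemble classifier.
clf = make_pipeline(PCA(n_components=10),
                    RandomForestClassifier(n_estimators=200, random_state=0))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```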
#### 2.3.2 Deep Learning Models
The popularity of deep learning methods, which use deep neural networks, has grown due to the increasing availability of high-performance computing facilities. Deep learning refers to the use of architectures with several hidden layers (deep networks) to learn characteristics at various abstraction levels [42]. A deep learning algorithm passes the data through numerous layers; each layer incrementally extracts features and passes the data to the next layer. Lower-level characteristics are extracted by the first layers and are combined by later layers to create a full representation [43].
In general, deep learning methods can be divided into two main families: convolutional neural networks (CNN) and recurrent neural networks (RNN). In the following, we briefly describe these two families and point to the articles that have used them to recognize human activity in smart environments.
#### CNN
Convolutional Neural Networks (CNN) are a class of deep learning models that perform quite well and have been widely utilized in a variety of applications, including object identification, speech recognition, computer vision, video analysis, image classification, and bioinformatics [44]. A CNN identifies behaviors based on patterns, learns from the data automatically, and does not require manual feature extraction. A CNN is mostly made up of several convolutional layers and a pooling layer [5]. The classification operation is carried out by the convolutional layers, which are often followed by several fully connected layers. The goal of the pooling layers is to reduce the dimensionality of the input data and to extract prominent characteristics that are invariant with respect to rotation and position [29]. Several articles have used CNN models for recognizing human activity [45]. The authors in [3] presented a HAR system that uses a Wi-Fi wearable sensor and convolutional neural networks (CNN) to recognize users' daily activities in a smart environment. Also, a study [46] used Tree-Structure Convolutional Neural Networks (TS-CNN) for multi-resident activity recognition.
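A minimal Keras sketch of a 1-D CNN classifier operating directly on raw sensor windows is shown below; the window length, number of channels, number of activity classes, and layer sizes are illustrative assumptions rather than settings taken from the cited papers.

```python
from tensorflow import keras
from tensorflow.keras import layers

WINDOW_LEN, N_CHANNELS, N_ACTIVITIES = 128, 3, 6   # assumed sizes for the example

# 1-D convolutions learn local motion patterns from the raw window itself,
# so no hand-crafted feature extraction step is required.
model = keras.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_ACTIVITIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=20, validation_split=0.2)  # X_train: (n, 128, 3)
```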
#### RNN
Recurrent Neural Networks (RNN) are a type of deep learning model, which include an internal memory. Since the output computed for the current input depends on both the input and the outcomes of previous computations, they are referred to as recurrent [29]. RNN are used for tasks that involve sequential data, such as natural language processing and speech recognition [47].
* **LSTM:** Long Short-Term Memory (LSTM) networks are an extension of RNNs that perform significantly better than plain RNNs in remembering dependencies over long periods of time. An LSTM contains three gates: an input gate, a forget gate, and an output gate. In particular, the input gate decides how the current time step is taken in and how the internal state is updated from the previous time step; the forget gate decides how the internal state of the previous time step is carried over to the internal state of the current time step; and the output gate decides how the internal state affects the output of the system [45]. Several researchers have used the LSTM method for human activity recognition [11, 48, 49]. The authors in [2] proposed a Multi-View Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) network for radar-based human activity recognition. In [50], a combination of CNN and LSTM is used for recognizing human activity in a multi-resident area.
* **GRU:** The Gated Recurrent Unit (GRU) is another type of RNN and is similar to the LSTM, but a GRU has two gates instead of three and does not include the cell state [51]. Therefore, GRUs have a simpler structure than LSTMs and, due to fewer tensor operations, they train more quickly. However, this does not imply that they are better than LSTMs; depending on the use case, one option may be superior to the other [29]. A number of HAR researchers have demonstrated the use of the GRU method [5, 51]. Dong et al. [52] proposed a transformer with a bidirectional gated recurrent unit (TRANS-BiGRU) to efficiently learn and recognize different types of activities in multi-resident environments. In [53], a multi-level neural network model for complex human activity recognition is proposed, based on the combination of an Inception neural network and a Gated Recurrent Unit (GRU). A minimal sketch of a recurrent classifier of this kind is given right after this list.
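The sketch below, written with Keras, shows a small bidirectional GRU classifier over raw sensor windows; as with the CNN example above, the window size, channel count, number of classes, and layer widths are assumptions made for illustration only.

```python
from tensorflow import keras
from tensorflow.keras import layers

WINDOW_LEN, N_CHANNELS, N_ACTIVITIES = 128, 3, 6   # assumed sizes for the example

# The bidirectional wrapper reads each window forwards and backwards,
# letting the GRU exploit temporal context from both directions.
model = keras.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    layers.Bidirectional(layers.GRU(64)),
    layers.Dropout(0.3),
    layers.Dense(N_ACTIVITIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```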
### 2.4 Model Evaluation
The last step in developing a HAR system is the evaluation of the trained model. After training the model on the training data, it must be evaluated on the test data. There are several assessment measures, such as precision, accuracy, recall, and the F1-measure, for evaluating the performance of the trained machine learning model [11].
\[Accuracy=\frac{Tp+Tn}{Tp+Tn+Fp+Fn} \tag{Eq. 1}\]
\[Precision=\frac{Tp}{Tp+Fp} \tag{Eq. 2}\]
\[Recall=\frac{Tp}{Tp+Fn} \tag{Eq. 3}\]
\[F1\text{-}Score=2\times\frac{Recall\times Precision}{Recall+Precision} \tag{Eq. 4}\]
Where \(Tp\) = True Positive, \(Tn\) = True Negative, \(Fp\) = False Positive, and \(Fn\) = False Negative.
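For completeness, a tiny helper that evaluates Eq. 1-4 from the four confusion-matrix counts is shown below; the counts used in the call are made-up numbers.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int):
    """Accuracy, precision, recall and F1-score from confusion-matrix counts (Eq. 1-4)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * recall * precision / (recall + precision) if (recall + precision) else 0.0
    return accuracy, precision, recall, f1

print(classification_metrics(tp=80, tn=90, fp=10, fn=20))  # illustrative counts
```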
## 3 Multi-Resident Activity Recognition
HAR systems have shown great success in dealing with a single resident within a smart area, but real-world environments do not always have a single resident. Because of this, activity recognition in smart environments should also support multiple residents and their activities. Since sensor states may not always accurately reflect a specific person's activity, recognition is a more difficult operation [50]. Also, the multi-resident scenario faces other challenges such as the complexity of parallel and collaborative activities [52], privacy concerns [10], the need for effective integration and management of the various technologies and systems involved in the recognition process, such as sensors, cameras, and machine learning algorithms, and the limitations of the sensors used for activity recognition in terms of range, accuracy, and reliability [52].
In the following, we first describe the types of activities that may happen in a multi-resident environment and then state the problems and challenges of multi-resident recognition. Next, the datasets that have been published for use in this field are mentioned, and in the last part we examine the methods and solutions proposed in the literature for recognizing the activities of multiple residents.
### Multi-Resident Activities
Residents of multi-resident environments not only get involved in individual activities, such as sequential, interleaving, and concurrent ones, but also engage in parallel and collaborative activities [54].
* **Sequential Activities:** Activities that are carried out by one individual in a sequential order, one after the other, without being interwoven (e.g., make a phone call, washing hands, and then cooking) [54].
* **Interleaving Activities:** When one person switches between several different activities at once, this is called interleaving. For instance, if a resident receives a call from a friend while they are cooking, they pause their work for a bit to answer the call and then return to the kitchen to finish cooking [55].
* **Concurrent Activities:** These activities are those that a single person engages in while performing multiple tasks at once (e.g., watch TV while using phone) [55].
* **Parallel Activities:** involve multiple individuals completing tasks in the same location at the same time, such as one person making tea in the kitchen while another talks on the phone in another room [52].
* **Collaborative Activities:** involve a group working together to accomplish a common goal, either through jointly performing a task (e.g. carrying a sofa together) or working independently towards the same objective (e.g. preparing dinner together in the kitchen) [52].
### Challenges
Monitoring multiple residents within the same environment poses serious challenges to current state-of-the-art HAR systems [56]. The first challenge involves recognizing simple and complex human activities. Complex human activities (CHAs) often involve multiple concurrent or overlapping actions occurring over a longer period, such as cooking or writing, and it can be difficult to identify them and differentiate them from simple human activities (SHAs). There are three major categories of CHA recognition in the literature [6]. The first approach ignores the distinctions between CHAs and SHAs and recognizes CHAs using SHA recognition techniques; however, since CHAs are far more complex than SHAs, the characteristics that can be derived for SHAs are not comparable to those of CHAs. The second method combines SHAs to represent each CHA, where the SHAs are predetermined and manually labelled; nevertheless, because CHAs contain many non-semantic and unlabelled components, this strategy depends heavily on domain expertise and is limited by the established SHAs. In the last scenario, CHAs are represented by latent semantics found in the sensor data, and the latent semantics are discovered using topic models; however, topic models only consider distributional information and disregard sequential information, which might help in CHA recognition.
Also, residents of multi-resident environments can engage, in addition to individual activities, in parallel and collaborative activities, due to the social interaction among residents [52].
Another major challenge is data association, which is the process of appropriately connecting each environmental sensor event (such as the opening of a refrigerator door) with the person who caused it [57]. This can be very challenging for the HAR system because, in contrast to the single-resident situation, where the sensor states directly represent the activities of a particular person, in multi-resident cases the true source of sensor readings and observations is unknown. As a result, it becomes challenging to attribute each observed action to the person who is instigating it [56]. For instance, even when two motion sensors are activated simultaneously in two distinct rooms, indicating the presence of two individuals in the environment, the HAR system will have difficulty accurately determining which individual is in which room. Consequently, the key challenge in resident activity recognition is to associate the identities of the residents with the sensor observations [57].
There have been various approaches proposed in the literature for solving the data association problem in multi-resident activity recognition, which can be grouped into three categories: wearable-based data association, single-model, and data-driven data association [57]. Single-model approaches rely on raw ambient sensor data to link activities to individuals, leading to implicit learning of the data association during the training phase. For example, in [9] the problem of multi-resident activity recognition is investigated using two Hidden Markov Model (HMM) variants, the Combined-label HMM (CL-HMM) and the Linked HMM (LHMM). On the other hand, data-driven approaches view data association as a separate learning problem that occurs before activity classification. In [58], the authors describe an algorithm that tracks each resident's position while also estimating how many people are living in a smart home, without using ground-truth labeled sensor data or any other extra information. Wearable-based data association approaches require residents to wear additional sensors, such as wristbands or smartphones, to
accurately associate environmental events. The authors in [59] proposed a new device to identify people in multi-resident areas, while [56] presented a multi-resident activity detection sensor network that combines Bluetooth Low Energy (BLE) signals from tags worn by occupants and passive infrared (PIR) motion sensors in the home to locate and track residents' activities.
Estimating the number of distinct residents in the same environment and tracking them is another challenging problem [60]. Any multi-inhabitant context-aware smart environment system must, as one of its primary starting stages, count the users present at any one time and monitor each user's movement trajectory independently and precisely [61].
### Multi-Resident Dataset
There are several multi-resident activity recognition (HAR) datasets available for use. Table 1 lists a few examples.
* **ARAS**: The Activity Recognition with Ambient Sensing (ARAS) dataset includes data streams collected from two houses with multiple residents over a period of two months [62]. These houses were equipped with 20 binary sensors, such as contact sensors, pressure mats, temperature sensors, and proximity sensors, to track the residents' activities. The raw data in the ARAS dataset includes information on 27 different activities performed by the residents.
* **CASAS**: Several multi-resident smart home datasets have been created by the Center of Advanced Studies in Adaptive Systems (CASAS) at Washington State University, of which two are the most recent [58]. One of them, named TM0041, contains data recorded in a two-bedroom apartment with two older adult residents, monitored by 25 ambient sensors distributed among 8 parts of the house. The other dataset, named Kyoto, was collected in a two-story house using 91 sensors installed in the bedroom, bathroom, kitchen, dining room, and living room, and along the hallways. Also, magnetic door sensors are positioned on the cabinet, closet, refrigerator, and the front and back outdoor doors.
* **UJA**: A multi-resident dataset called SaMO-UJA was collected in the UJAmI Smart Lab of the University of Jaen (Spain) [63]. Data were collected from various sensor technologies and information sources, including 30 binary sensors on certain objects, 15 Bluetooth Low Energy (BLE) beacons in the space, the acceleration of the resident measured with a wearable device, and a smart floor with 40 modules for location. The dataset includes 25 distinct types of activities.
\begin{table}
\begin{tabular}{l c c c c} \hline
**Dataset** & **Name of** & **\# of Sensors** & **\# of Activities** & **Duration** \\ & **Houses** & & & \\ \hline ARAS[62] & A & 20 of 7 different types & 27 & 30 days \\ ARAS[62] & B & 20 of 6 different types & 27 & 30 days \\ CASAS [58] & TM0041 & 25 & 11 & 30 days \\ CASAS [58] & Kyoto & 91 & 11 & 24 days \\ UJA [63] & SaMO-UJA & 85 & 25 & 9 days \\ SDHAR-HOME[64] & - & Wearable + Ambient sensors (35) + Positioning (7) & 18 & 60 days \\ MARBLE[65] & - & Wearable + Ambient sensors & 13 & - \\ \hline \end{tabular}
\end{table}
Table 1: Multi-resident datasets
* **SDHAR-HOME:** A home where two residents and a pet live was used as the location for the data collection. The residents also receive some visits from friends and relatives. This smart home featured a collection of non-intrusive sensors to record events that happen inside the home, a positioning system based on triangulation using beacons, and a system to track user status through activity wristbands. The daily habits were measured continuously for two months and were divided into 18 distinct activity categories [64].
* **MARBLE:** The MARBLE dataset contains information from both wearable devices and ambient sensors. Smartwatches were chosen as the wearable devices because of their minimal obtrusiveness, widespread use, and ability to record hand motions that may be used to identify ADLs (e.g., washing dishes). Among the environmental sensors, mat (pressure) sensors detect when people are sitting on seats or sofas, magnetic sensors detect when drawers and doors are opened, and plug sensors detect when household appliances are being used. Additionally, Wi-Fi access points and BLE beacons were installed in order to provide indoor location [65].
### Literature Review
Over the past 20 years, various methods for identifying human actions using sensor data have been developed. While most of these approaches are designed to work with only one person in the environment, some techniques have been created to address the more difficult task of identifying actions performed by multiple people in the same space [66]. Table 2 provides a comparison of the literature reviewed on multi-resident activity recognition.
The constraints and correlations mining engine (CACE) framework, created by the authors in reference [60], significantly improves the accuracy of recognizing complex daily activities in smart homes with multiple residents. To begin, the model generates an adjacency matrix of sensor nodes by representing the set of deployed nodes as an undirected graph. Then, the sequential importance resampling (SIR) particle filter technique is utilized to track each user's movement trajectories. After this, the state space to be explored is minimized using spatiotemporal rules that consider both deterministic correlations and statistical constraints. Finally, the Hierarchical Dynamic Bayesian Network (HDBN)-based model, which considers both micro and macro contexts, is utilized to further increase the precision of context estimation. The authors found that this approach achieved an average accuracy of 94.5 on the CASAS dataset and 95.1 on the CASE dataset (Collected Data).
In [52] to effectively learn and detect various activity types carried out by multiple residents, authors offer TRANS-BiGRU, which is a deep learning (DL) technique based on using a transformer with a Bidirectional Gated Recurrent Unit (BiGRU). The Recurrent Neural Network (RNN)-related algorithms such as Gated Recurrent Unit (GRU) and Bidirectional Gated Recurrent Unit (BiGRU) can only be computed sequentially in one of two directions: from right to left or from left to right. This technique has two drawbacks. First, the computation of time slice t depends on the outcomes of the computation at time t-1, which reduces the model's capacity for parallelism. Additionally, although the Long Short-Term Memory (LSTM)'s structure helps to some extent with the issue of long-term dependence, it is still ineffective for certain long-term dependent occurrences. A proposed transformer in [60] addresses the two mentioned problems. Provided experimental results show that proposed TRANS-BiGRU significantly
outperforms other approaches, with an average accuracy of 89.48% on the ARAS House A dataset, 90.59% on ARAS House B, and 92.86% on the CASAS dataset.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**ref** & **Objective** & **Methods** & **Dataset** & **Accuracy** & **Limitation** \\ \hline
[60] & Developing a framework to track each person's mobility trajectories; improves the recognition accuracy of complex daily activities. & 1. Sequential Importance Resampling (SIR) particle filter technique; 2. Hierarchical Dynamic Bayesian Network (HDBN) & 1. CASAS; 2. CASE (real-world data) & 94.5\%; 95.1\% & For more than two persons, the accuracy is poor. \\
[52] & Suggested a technique for efficiently learning and identifying the various actions carried out by several residents. & A transformer with a bidirectional gated recurrent unit (TRANS-BiGRU) & ARAS (A); ARAS (B); CASAS & 89.48\%; 90.59\%; 92.86\% & Lack of a solution to the resident problem. \\
[67] & Use of spatial and temporal information to enhance the performance of multiple residents' activity recognition. & 1. Multi-label classification, namely the label combination (LC); 2. Expectation-Maximization (EM) clustering & CASAS & 98.60\% & It was only done on a home with 2 residents. \\
[66] & A new method to identify multi-inhabitant activities without labelled datasets. & Unsupervised HMM-based method & CASAS & 72.31\% & Insufficient support for managing collaborative activities. \\
[68] & Improve the effectiveness of human activity recognition; realize synchronous identification of each human and the behaviors. & 1. Split-fusion module; 2. Multi-layer perceptrons (MLPs) & CASAS & 85.08\% & The proposed framework is very sophisticated. \\
[69] & Framework to model the dynamics of interaction between sensor-based environments and multiple residents; solving the identification annotation problem. & 1. Graph and Rule-Based Algorithm (GR/ED); 2. Nearest neighbor standard filter (NNSF); 3. Dijkstra algorithms & Combined five single-resident datasets & 86.71\% & Lack of investigation of mobility models and resident tracking. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the reviewed literature
In a study [67], spatial and temporal data were collected in order to recognize the activities of multiple residents in a smart home setting. The authors used Expectation-Maximization (EM) clustering to develop an adaptive profiling model based on potential interactions. The model was trained using multi-label classification, specifically the label combination (LC) approach, with the help of various classifiers
such as random forest, K-Nearest Neighbors (KNN), Support Vector Machines (SVM), and Hidden Markov model (HMM). Out of these, the Random Forest-Label Combination (LC-RF) method yielded the highest performance with a score of 98.6 on the CASAS dataset.
The study in [66] presents a new approach for identifying multi-resident activities without the requirement of labeled data. This is significant because the process of labeling activities can be costly and may infringe on individuals' privacy. The proposed method combines unsupervised techniques utilizing Hidden Markov Models (HMM) with ontological reasoning. Testing on the CASAS dataset demonstrated an average accuracy of 0.7231, which is similar to the performance of supervised techniques.
In [68], an end-to-end Transformer-based model named Fusion Transformer for Multi-Resident Activity Recognition (FTMAR) is proposed. The approach first generates data segments with temporal information from the filtered raw data, and then uses the split-fusion module for feature engineering. Two fully connected layers ultimately classify and label the output vectors of this module. According to experimental results on the CASAS dataset, the proposed approach can improve recognition performance for both residents and activities, reaching an accuracy of 0.8855 and 0.8508, respectively.
The authors of [69] developed MoSen, a framework that models the interactions between sensor-based environments and multiple residents. One of the main challenges in using sensors to recognize activities in multi-resident environments is the identification annotation problem, which involves labeling the identities of time-series sensor events according to the residents who generated them. MoSen addresses this issue by utilizing the Graph and Rule-Based Algorithm (GR/ED). In order to create artificial multi-person datasets, the authors combined five single-occupancy datasets. Additionally, MoSen offers guidance on sensor selection and design metrics for sensor layouts in real-world deployments.
In a study [70], the authors utilize a combination of change point detection (CPD) and fuzzy c-means (FCM) for sensor event segmentation. Specifically, sensor events are classified based on their locations using the FCM method, and the CPD technique is subsequently applied to examine the transition actions to determine the segmentation sequence.
A study by [71] suggests using a Multilabel Markov Logic Network (MLN) to classify individuals based on their activity patterns and preferences. This classification considers factors such as the preferred order, length, and frequency of activities, as well as the preferred location of entities or events. The results of the experiments show that combining data-driven and knowledge-driven approaches for analyzing activities is effective.
The Attention based Deep Ensemble Learning for Activity (ADELA) system, proposed in [72], utilizes attention-based ensemble learning in a collaborative smart environment to identify the significance of events from sensor data streams based on their impact on activity recognition. The base models are assigned weights through a weighted majority calculation during the training phase and the information is stored in a trained model storage for future use in inference.
## 4 Conclusion
Human Activity Recognition (HAR) systems can greatly improve daily living by using computer systems to analyze and understand human behavior and implement assistive technologies. However, real-life environments commonly involve more than one person, with complex relationships between the residents and any visitors or family. As a result, multi-resident activity recognition is a key research issue for the development of various technologies such as smart homes, smart health tracking, smart offices, and smart security. In this paper, a survey of the recent advances in HAR systems in smart environments, particularly in areas with multiple residents, was presented. The most important challenges in multi-resident environments are linking the actions to the people who perform them and the complexity of parallel and collaborative activities. To overcome these challenges and achieve high accuracy, various methods have been proposed in the literature in recent years. This paper provided a brief survey of the notable works in this area.
|
2305.05551 | Digital Transformation in the Public Administrations: a Guided Tour For
Computer Scientists | Digital Transformation (DT) is the process of integrating digital
technologies and solutions into the activities of an organization, whether
public or private. This paper focuses on the DT of public sector organizations,
where the targets of innovative digital solutions are either the citizens or
the administrative bodies or both. This paper is a guided tour for Computer
Scientists, as the digital transformation of the public sector involves more
than just the use of technology. While technological innovation is a crucial
component of any digital transformation, it is not sufficient on its own.
Instead, DT requires a cultural, organizational, and technological shift in the
way public sector organizations operate and relate to their users, creating the
capabilities within the organization to take full advantage of any opportunity
in the fastest, best, and most innovative manner in the ways they operate and
relate to the citizens. Our tutorial is based on the results of a survey that
we performed as an analysis of scientific literature available in some digital
libraries well known to Computer Scientists. Such tutorial let us to identify
four key pillars that sustain a successful DT: (open) data, ICT technologies,
digital skills of citizens and public administrators, and agile processes for
developing new digital services and products. The tutorial discusses the
interaction of these pillars and highlights the importance of data as the first
and foremost pillar of any DT. We have developed a conceptual map in the form
of a graph model to show some basic relationships among these pillars. We
discuss the relationships among the four pillars aiming at avoiding the
potential negative bias that may arise from a rendering of DT restricted to
technology only. We also provide illustrative examples and highlight relevant
trends emerging from the current state of the art. | Paolo Ciancarini, Raffaele Giancarlo, Gennaro Grimaudo | 2023-05-09T15:41:10Z | http://arxiv.org/abs/2305.05551v2 | # Digital Transformation in the Public Administrations:
###### Abstract
**Motivation:** Digital Transformation (DT) is the process of integrating digital technologies and solutions into the activities of an organization, whether public or private. This paper focuses on the DT of public sector organizations, where the targets of innovative digital solutions are either the citizens or the administrative bodies or both. This paper is a guided tour for Computer Scientists, as the digital transformation of the public sector involves more than just the use of technology. While technological innovation is a crucial component of any digital transformation, it is not sufficient on its own. Instead, digital transformation requires a cultural, organizational, and technological shift in the way public sector organizations operate and relate to their users, creating the capabilities within the organization to take full advantage of any opportunity in the fastest, best, and most innovative manner in the ways they operate and relate to the citizens.
**Results:** Our tutorial is based on the results of a survey that we performed as an analysis of scientific literature available in some digital libraries well known to Computer Scientists. Such tutorial let us to identify four key pillars that sustain a successful DT: (open) data, ICT technologies, digital skills of citizens and public administrators, and agile processes for developing new digital services and products. The tutorial discusses the interaction of these pillars and highlights the importance of data as the first and foremost pillar of any DT. We have developed a conceptual map in the form of a graph model to show some basic relationships among these pillars. We discuss the relationships among the four pillars aiming at avoiding the potential negative bias that may arise from a rendering of DT restricted to technology only. We also provide illustrative examples and highlight relevant trends emerging from the current state of the art.
digital transformation, public sector, open data, open government data, data governance, privacy and security, smart cities, cloud computing, digital twins, digital shadows, digital skills, leadership, co-creation, agile processes
## 1 Introduction
We all are citizens of a digital era, which offers new possibilities, new rights, new duties [90]. Digital citizens use ICT technologies to communicate, access information, and participate in social, economic, and political activities. The impact that the advent of this new era has had is not limited to citizens, but also involves public administrations (_PAs_, for short), which are directed to become more and more digital in how they work, relate, and interact with citizens. As a matter of fact, a Digital Transformation (_DT_, for short) in the _PAs_ is taking place. To date, there is no formally accepted definition of such a transformation: as pointed out in [128], there can actually be many of them. In the context of this work, we use the definition below, which attempts to summarise most of them. _DT_ is the process of integrating digital technologies and solutions into all aspects of the activities of an organization, whether public or private. For this reason, it can be a complex, never-ending, and often discouraging process.
DT represents a cultural, organizational, social, and technological shift that leads organizations to initiate a change in the way they operate and relate to their users, trying to be more responsive to their needs. It must have an associated strategy, focusing on creating the capabilities within the organization to take full advantage of the opportunities of new technologies and their impact in the most useful and most innovative manner, keeping in mind that the private and public sector differ. Indeed, the technologies used in the private sector cannot be applied immediately in the public one without an analysis of possible differences of impact [100]. This is due to the fact that _DT_ in the private sector concerns how it impacts employees and how digital technologies and processes improve the productivity and quality of the products offered to customers. In the public sector, _DT_ has a different scope because it impacts not only the citizens, who are scarcely comparable to customers, but also the governing and administrative processes, and the nature itself of the social contract [222]. From now on, unless otherwise specified, in our paper _DT_ refers specifically to the public sector.
As digital transformation projects are implemented, citizens become eager for accessing digital services supporting their activities and life. This requires public institutions to ensure that their digital solutions are user-friendly, secure, and accessible to all citizens [67]. Several institutions and administrations in the public sector are exploring the opportunities offered by digital transformation technologies to enhance their organizational flexibility necessary to adapt to changing contexts and meet new government and citizens demands [52, 113]. More in general, _DT_ has become an increasingly pressing issue in recent years, with the growing need to modernize government services and improve their efficiency and transparency with respect to the needs of citizens. It is not surprising that there are several areas in this context where Computer Science can have a major impact, while receiving further stimuli for its growth. One of them is the development and implementation of digital infrastructures to support specifically the public sector, with particular attention to privacy and security aspects of the data and of their processing. This includes creating systems for data collection, storage, and analysis [162], as well as building networks and platforms for communication and collaboration among different public institutions [214]. Another one is the study regarding how to make government data more accessible, transparent, and useful for the citizens, their administrations, and other stakeholders. This includes developing standards for data sharing and interoperability, as well as creating tools and applications that enable citizens to engage with public Open Data in useful ways [157].
Therefore, _DT_ is certainly of interest for Computer Scientists but, to the best of our knowledge of the State of the Art, the complexity of the _DT_ and the interplay among its key aspects does not seem to be well presented to a Computer Science audience. Filling this gap in the Literature is the aim of this tutorial. Indeed, based on the State of the Art, we identify four key pillars that sustain a successful _DT_. Specifically, (open) data, ICT technologies, digital skills of citizens and public administrators, and agile processes. We dedicate part of this tutorial to the description of each and another part to their interaction. Indeed, although ICT technologies are essential in driving any digital transformation, and well known to Computer Scientists, they are not enough on their own. Therefore, our aim is to present the benefits of technology and avoid the potential negative bias that may arise from a rendering of _DT_ restricted to technology only. This is a novelty for a presentation of this area.
Another novelty is that we focus on data as the first and foremost pillar of any digital transformation. The case of the public sector involves the integration of data technology in all aspects of governance to improve efficiency, transparency, and citizen engagement. Open Data is a critical component of this transformation as it enables the public sector to make its data publicly available for reuse and re-purposing by others. In particular, Open Data in the _PA_ refers to the idea of making government data available to the public in a usable and accessible format. They can include information on government spending, public transportation, healthcare, education, environmental issues, and much more. Their availability has the aim to allow citizens to better understand how the government operates, and how it spends public money. This increased transparency can improve public trust, accountability, and collaboration between the government and its citizens. It also provides valuable insights to policymakers, researchers, and other stakeholders. The corresponding implementation of Open Data policies is crucial for unlocking the potential of data in the _PA_.
The use of ICT technologies, including smart cities and data governance of public clouds, can help to improve the delivery of public services [237]. While smart cities leverage technology to improve quality of life, reduce costs and improve the sustainability of urban areas, the data governance of public clouds can provide a cost-effective and scalable infrastructure for public administrations, allowing them to deliver and monitor their services more efficiently and effectively. We remark that citizens must have the necessary competencies to access and use digital services [118]. This encompasses digital literacy, data literacy and online security awareness. Governments need to invest in programs that promote digital skills development, particularly for underprivileged and marginalized groups. Digital transformation is an ongoing process of integrating technology into various aspects of society, and digital competencies of the citizens play a crucial role in driving this integration. Citizens must be able to navigate the risks associated with online activity and understand the importance of protecting their personal information.
Beyond basic digital skills, citizens should also have a strong understanding of the role of technology in the public sector, and the benefits it can bring to government services. This requires a strong understanding of public policy, as well as an awareness of the needs and interests of different stakeholders. An active digital citizenship also requires a willingness to collaborate and work with others. This includes engaging in online communities, collaborating on digital projects, and building partnerships with government and other stakeholders to co-create digital contents and services.
Finally, we show why we believe that any _DT_ is a process that should exploit an agile approach, when introducing or adopting new services for citizens. Such an approach allows for rapid development and testing of new services, with continuous feedback and improvement. Since _DT_ is an iterative process that requires continuous adaptation to the needs of the citizens and improvements in the policies of the administrators, Agile processes are the ideal "operational tool" for _DT_.
The scope of this tutorial includes public administrations such as municipalities, national governments, and other governmental bodies. We exclude other specific public sector institutions such as the military and homeland defense organizations, educational organizations like universities, or health organizations like hospitals. Our primary goal is to explore how these public administrations can leverage digital technologies to improve their services, operations, and interactions with their citizens.
This tutorial has the following structure: Section 2 describes how we selected the literature at the basis of this tutorial (after checking that no similar tutorial has been published); Section 3 introduces a graphic map of the tutorial, which helps to clarify its structure. Section 4 describes the data ecosystem that is at the basis of the digital information of the public sector. Section 5 describes two technological subareas that are especially relevant, namely Smart Cities and (open) Data Governance. Section 6 discusses the technical aspects concerning People that got our attention, based on the Literature search: digital skills and citizens' co-creation of contents and services. Section 7 presents the two main technical aspects we have found relevant for the _DT_ processes: Change Management, and Frameworks and Maturity Models. Section 8 contains the main discussion of the major ideas presented in this tutorial. Finally, the last Section 9 is a wrap-up of this tutorial.
## 2 Literature Selection
Our effort to provide a systematic, homogeneous presentation of _DT_ is based on the current State of the Art. Therefore, a first essential step is to resort to established methods to collect relevant papers, as in a Literature Review. However, while a Literature Review describes relevant papers covering the State of the Art, this Tutorial uses the selected papers to extract the main ingredients of _DT_ and to propose models for it, with the addition of illustrative examples. Details regarding the paper selection process follow.
We have performed a literature search in the ACM and IEEE digital libraries, in addition to Google Scholar. Given that, as stated in the Introduction, the focus is on Agile methodologies, the query term is very focused: "agile" AND "digital transformation" AND "public services". The time period covered is January 2017 to February 2022.
The search outcome is as follows: 1,992 references from Google Scholar, 745 from IEEE, and 3,143 from the ACM Digital Library. For ACM, only the first 2,000 items (sorted by relevance) could be accessed, since the search engine only reports that the remaining 1,143 items are very similar to those available for display. Therefore, over the three databases we have consulted, we have collected a total of 5,880 papers.
After a more detailed review of the titles and abstracts, we selected 151 papers for our initial "core collection", as they primarily focused on computer science technical content related to DT, as opposed to social and ethical issues. During the reading phase of the core papers, it became evident that some additional literature was required, in order to make the Tutorial more comprehensive. This included 13 references covering technical background topics (such as data organization in [162]), 36 references covering reference standards, models, and regulatory procedures specific to DT (such as DT maturity models in [47] and data protection regulation in [68]), and 41 more recent papers that were not
included in our initial core collection (so extending the core to 192 papers). In total, we collected an additional 90 papers.
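For readers interested in reproducing this kind of multi-database collection, the following minimal sketch shows one common way of merging exported reference lists and removing duplicates by normalized title. It is an illustrative example only, not the exact procedure we followed; the file names are hypothetical placeholders for exports from the three sources above.

```python
import csv
import re

def normalize(title: str) -> str:
    """Lower-case a title and collapse punctuation so that near-identical
    records exported from different databases compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def merge_exports(paths):
    """Merge CSV exports (one per digital library) into a single
    deduplicated list of records keyed by normalized title."""
    seen = {}
    for path in paths:
        with open(path, newline="", encoding="utf-8") as handle:
            for row in csv.DictReader(handle):
                key = normalize(row.get("title", ""))
                if key and key not in seen:
                    seen[key] = row
    return list(seen.values())

if __name__ == "__main__":
    # Hypothetical export files from Google Scholar, IEEE, and ACM.
    records = merge_exports(["scholar.csv", "ieee.csv", "acm.csv"])
    print(f"{len(records)} unique candidate papers")
```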
## 3 The Main Ingredients of the Digital Transformation in the Public Sector
Given the definition of _DT_ provided in the Introduction, we present here its main ingredients in terms of four domains that emerge from an analytic reading of the papers we have considered for this research. We also discuss the interactions among them. The end result is a graph model of _DT_, proposed here for the first time, which can be used as a "summary map" to describe the _DT_ process. Moreover, the concepts and notions summarized by the model are exemplified via two paradigmatic examples: the Cities of Barcelona and Chicago. Such a choice is motivated by the fact that, although they are complex cities, the scale of their _DT_ is well suited to a crisp identification and evaluation of the specific actions regarding their transition to digital.
### A Graph Model for DT
An examination of the 192 core papers shows that, although they address various aspects related to how _PAs_ plan and implement their _DT_ strategies, they predominantly focus on one of four knowledge domains: _Data_ (36 papers), _Technology_ (61 papers), _People_ (33 papers), and _Process_ (62 papers). These domains are briefly discussed below.
* _Data_. The availability of public data has changed significantly over the past decade, resulting in a greater awareness of how it is collected, represented, owned, and managed. As a consequence, the data life-cycle has changed with respect to the past [17, 74, 88, 133, 182], posing new technical problems even to mature areas such as databases [1] and requiring new ways to design software for their management [54]. As far as this Tutorial is concerned, it is important to point out that data are no longer seen as an asset to exploit for a competitive advantage, but as a social "infrastructure" that must be made available to policy makers and citizens to ensure and improve the well-being of Society [82, 83]. With this in mind, more and more _PAs_ are making their data available to improve transparency and accountability [32, 88, 160, 179, 208, 235]. However, due to the heterogeneity and lack of interoperability of the data sources, major problems arise. One is how to best exploit the information contained in those data. Another is the realization of the sound technical principle of "only once", i.e., data collected by one administration should be available to other administrations. Scale factors make these problems even more difficult, since "data" can refer to a continent [104], a nation, a city, or be sector specific [129, 103]. In order to address the problems alluded to earlier and of which we have provided two examples, an entire data ecosystem is shaping up, ranging from infrastructures to data analysis tools and applications. Following [162], the term _ecosystem_ is used here, instead of environment, because like real ecosystems, data ecosystems are designed in such a way as to have an "evolutionary" part aimed at improving data quality levels over time. It is a node of the proposed graph model and, in what follows, we use the terms data ecosystem and data interchangeably. Moreover, since data is the source of information that powers the _DT_, its corresponding node is the central one in the model, as shown in Figure 1. Additional details regarding the components of such a node are presented in Section 4.
* _Technology_. The term technology refers to hardware and software systems supporting _PAs_ in some _DT_ process [28, 32, 49, 163]. Digital platforms that support all stages of governance activities are in place or planned [214], since their realization is perceived as a way to increase the pace of the _DT_ [149]. In particular, several _PAs_ are moving to the Cloud [3, 39, 74, 167]. Smart cities [182] are becoming a recurring pillar in the _DT_. Blockchain technologies are also being considered, but they appear somewhat marginal at this stage [228]. Artificial intelligence is expected to play a major role, e.g., [94, 154, 185, 197, 216, 224], although its impact and pervasiveness on privacy, transparency and accountability in the realm of _DT_ is still under study [214]. Such technologies are useful for supporting data-driven decision-making in public administrations which, in turn, have the goal of providing an ever higher quality of life for their citizens. In order to achieve this goal, in particular for limited geographic areas such as cities, ruling bodies and decision makers are accepting difficult and stimulating challenges related to the creation of complex digital models of cities that would allow them to respond to the needs of citizens faster than in the past (e.g. [182]). Those new models must ensure privacy and security, making it necessary for the _PAs_ to possess regulations about data governance, e.g., the European General Data Protection Regulation (_GDPR_) [68, 53], cyber-security technologies, e.g., the National Cyber-Security Agencies [92, 193, 211], and a flexible and modular strategy for data access and sharing, e.g., the European DECODE project [218]. Technology for the _DT_ is the node shown in Figure 1. Details regarding the components of such a node are presented in Section 5.
* _People_. The services provided by the _PAs_ must be considered valuable by the citizens, who sustain them by paying taxes. Such a fact has an important consequence regarding efficiency, which has had a privileged
position in the design and deployment of services: aiming for it, although valuable, is by no means sufficient to generate services that citizens perceive as valuable [49]. In fact, for the design of services, a people-driven delivery model is increasingly the one of choice [28, 32]. Such a new model places citizen participation at the center of most service design and implementation initiatives, whose success must be evaluated by their users, namely the citizens and various kinds of decision-makers, according to their perception of the value created [5, 138, 142, 150, 187]. Interestingly, although the meaning of services valuable to the citizens is clear, the meaning of the apparently related term of "business value" in the digital _PA_ is not so clear, although intuitively it relates to the provision of better service to the citizen, efficient operation of the services and the enforcement of the law [120]. People for the _DT_ is the node shown in Figure 1. Details regarding the components of such a node are presented in Section 6.
* _Process_. As part of a _DT_ strategy whose aim, as already stated, is to obtain services that are more citizen-centered, it is to be expected that new ways of process engineering are developed and deployed [10]. Although the transition from established process engineering practices to new ones is not so simple [137, 229], Agile project management approaches seem to be the "best" candidates to support the mentioned transition [10, 30, 32, 39, 97, 132, 148, 150]. Another crucial aspect regarding process management is the need for new ways to measure success, i.e., in terms of a meaning of "value" that is certainly application specific, but with a rather broad spectrum. Rather than prescribing specific measurements, the trend is to measure the degree of maturity achieved by the processes in the _DT_ strategy implementation. In this regard, in the Literature reviewed in this Tutorial, we find several papers proposing different Frameworks and Maturity Indexes, e.g., [22], with which stakeholders could measure the progress achieved in the digital transition of their Organizations. However, among the many available, the GovTech Maturity Index (GTMI, for short) proposed by the World Bank [57] seems to be the most reliable one. Process for the _DT_ is the node shown in Figure 1. Details regarding the components of such a node are presented in Section 7.
### Interactions Among Knowledge Domains
The papers we reviewed show that the four knowledge domains presented above have several mutual interactions and dependencies, summarized in terms of edges in the graph model in Fig. 1. Each edge between two nodes (knowledge domains) encodes an interaction between its end-points, while the direction of the edge encodes the dependence, i.e., an edge \((a,b)\) indicates that \(a\) depends on \(b\), with the label indicating the nature of such a dependency. Details are provided next; a small programmatic sketch of the resulting graph is given after the list.
Figure 1: **The _DT_ Graph Model. Nodes represent the knowledge domains. Each edge represents interactions and dependencies between its end nodes, while the label on each edge indicates the type of relationship between its nodes, following the main text.**
* **Interactions with _Data_**. At the heart of our graph model there is the data ecosystem, subject to incoming and outgoing data flows. In one direction, _data_ is the source of information for other nodes, and in the other direction, it grows from the information it receives from other nodes. In the graph model, we encode this bi-directionality in terms of a _provide/receive_ paradigm. People provide data and receive information and know-how. Processes provide business intelligence, statistics, etc., and receive data. Technologies provide tools for a better governance over the data, and receive data. It should be emphasized that this paradigm encodes well the flow of information in a data ecosystem [17].
* **Interactions with _Technology_**. Technologies are at the service of the citizens. Indeed, given that a needs-based holism means the reunification of government services around citizens rather than business processes [134], these technologies enable it, with the end result of increasing the capacity of _PAs_ to respond to the emerging needs of citizens [8, 221]. Technologies also enable processes and facilitate their re-engineering phase [233].
* **Interactions with _People_**. People, who follow the processes and initiatives provided by _PAs_, leverage different emergent technologies [233] to enable the improvement of public processes (_process re-engineering_).
* **Interactions with _Process_**. Processes guide people, through the provision of quality digital services, to respond more effectively and promptly to the evolving needs of citizens [195]. At the same time, technological choices are often influenced by the processes, and how these are designed or re-engineered [233].
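As a compact, machine-readable companion to Figure 1, the following minimal sketch encodes the four knowledge domains and the interactions just described as a labelled directed graph using the networkx library. It is an illustrative encoding of the provide/receive and enabling relations discussed above, not part of the original model definition; readers may prefer to orient the edges according to the dependency convention stated earlier.

```python
import networkx as nx

# Nodes are the four knowledge domains of the DT graph model.
dt_model = nx.DiGraph()
dt_model.add_nodes_from(["Data", "Technology", "People", "Process"])

# Data is the central node: each other domain provides something to it
# and receives something back (the provide/receive paradigm).
for domain, provided, received in [
    ("People", "data", "information and know-how"),
    ("Process", "business intelligence and statistics", "data"),
    ("Technology", "data-governance tools", "data"),
]:
    dt_model.add_edge(domain, "Data", label=f"provides {provided}")
    dt_model.add_edge("Data", domain, label=f"provides {received}")

# Remaining interactions among Technology, People and Process.
dt_model.add_edge("Technology", "People", label="enables citizen-centred services")
dt_model.add_edge("Technology", "Process", label="enables re-engineering")
dt_model.add_edge("People", "Process", label="drive process re-engineering")
dt_model.add_edge("Process", "People", label="guides via quality services")
dt_model.add_edge("Process", "Technology", label="shapes technological choices")

print(sorted(dt_model.nodes()))
print(dt_model.number_of_edges(), "labelled interactions")
```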
### Accounting for Responsiveness
As stated in the Introduction, one of the main goals of _DT_ is to increase the responsiveness of an organization to the changing needs of citizens. It is evident that in response to changes, such a goal can be reached by being able to: (a) quickly use novel technologies; (b) implement an inclusive strategy that promptly makes available new skills to citizens, administrators and policy-makers; (c) adopt flexible organizational models for the design, implementation and deployment of services. Those main aspects of responsiveness in _DT_ can be summarized as follows: technological responsiveness, which naturally connects to the knowledge domain of _Technology_; inclusive responsiveness, which naturally connects to the knowledge domain of _People_; and organizational responsiveness, which naturally connects to the knowledge domain of _Process_. Therefore, it is felt appropriate to extend the graph model proposed here accordingly, as in Figure 2. We now discuss the terms we have just introduced.
* **Technological Responsiveness**. It concerns the flexibility and versatility of the solutions adopted for the collection, representation, and management of the data, together with the appropriate infrastructures to host and manage them [11, 172, 198, 220, 221]. Those solutions must account for good levels of quality and privacy. The meaning of Quality is given via a set of properties to which data should respond. Specifically: accuracy, completeness, consistency, timeliness, validity, and uniqueness [51]. As for privacy, in addition to the meaning given to it in the domain of IT security, the solutions granting it must be compliant with current legislation, e.g., the European General Data Protection Regulation (GDPR) [68]. A particularly important and novel aspect of data processing is to account for the requirement that users must be given the option to decide who can process their data and for which purposes. As for computer architectures, to date, there are many of them supporting technological responsiveness in the _DT_[21] and even new ones have been proposed, although it is not clear how widespread their adoption is [4]. Moreover, the possibility of migrating from monolithic systems offering services to microservices technologies is also considered [125]: although this suggestion is somewhat isolated, the results are encouraging. From our Literature Review, and in regard to the achievement of responsiveness in the _DT_, it is evident that Smart Cities are technologically very promising and popular, while Data Governance issues are more delicate and difficult for public administrations. Therefore, among the many facets characterizing technological responsiveness, we concentrate on those two, which are briefly discussed next.
* **Smart Cities**. According to the ISO/IEC [91] (but see also [147]), a Smart City is "an innovative city that uses ICT and other means to improve quality of life, the efficiency of urban operation and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social and environmental aspects". Moreover, based on a Literature Review, including both academic papers and practical tools, a proposal regarding the key components that make a City smart has been made in [78] and validated in [112], specifically for Brazil. The resulting component structure is hierarchical, with the top level consisting of (a) government; (b) society; (c) physical environment and (d) technology and data. A second level follows, e.g., point (d) is further detailed into (d.1) ICTs and other technologies, (d.2) data and information. A third level concludes the hierarchy, e.g., point (d.2) is further broken into (d.2.1) data management, (d.2.2) information processing, (d.2.3) information sharing and integration.
Technology is essential for the sustainable development of a smart city (see above and [173]), in particular Internet-of-Things (IoT) approaches - see for instance [115, 123, 96, 87]. However, technology alone is not enough [37]. Indeed, starting from the fact that a difficulty for the realization of a smart city is the fragmented understanding of the interaction between Information Technologies and novel city governance models [164, 176, 45], changes involving public administration and management seem to be required. For instance, project and risk management need to be changed: the realization of the infrastructural innovations required to transform a city into a smart one needs to be planned carefully in order to avoid delays and over-spending [226]. Moreover, there is a need to rethink how software-intensive services are used, in order to implement more flexible infrastructures [146, 179, 182]. Section 5.1 is devoted to this topic, with a focus on Digital Twins [58], a new and promising approach to design and implement a smart city, based on a virtual representation of its main physical city objects, including the inhabitants, that interacts with the real objects and evolves with them [58]. For completeness, we point out that Digital Twins are not a new concept, having been introduced by Grieves in 2002, and they have been the object of rigorous studies in order to identify their range of application domains [26, 76].
* **Data Governance**. Data governance means a set of processes, roles, policies, standards, and metrics useful for controlling data management [22, 88, 146, 179, 208, 223]. Via the effective and efficient management of the amount of structured and unstructured information coming from a multitude of _PA_ processes and procedures, its goal is to transform those data into a strategic asset, serving the citizens while preserving their privacy. The issue of data governance is so important and strategic that a new professional figure is emerging: the Chief Data Officer, whose role and responsibilities are still the object of study [156, 181]. Certainly, such a figure should be able to manage issues regarding privacy, security, regulatory compliance, access control, and the resolution of problems caused by poor data quality across the data life-cycle [17, 22, 53, 98]. Section 5.2 is devoted to Data Governance.
* **Inclusive Responsiveness**. It concerns how fast and broad are the cultural changes associated to the acquisition of multidisciplinary skills, ranging from digital to managerial, aimed at gaining greater awareness of the efforts of _DT_[220, 221]. Although inclusive responsiveness can be further divided into many categories, here we concentrate on some important ones, i.e., skills development, co-creation, and leadership. We point out that skills development and co-creation are treated synergistically here, inspired by a case study regarding the city of Chicago [138], which justifies this approach.
* **Skills and Co-Creation**. Skills development is a well known concept that needs no further elaboration. Co-creation is a concept that strongly depends on team-building and on the digitization culture that, together with correct communication, enables the actors involved to work together to produce public services successfully [180, 187, 208, 221]. It is a continuous improvement process, in which _PAs_ must implement the necessary tools to successfully exploit feedback from the citizens in the evolution phase of a service [5, 146, 233]. This approach changes the way in which public services are evaluated, placing the users at the "center". Indeed, following earlier research regarding how to measure service quality offered by the _PAs_[15], models and procedures for such a novel "user-centered" evaluation of public services are being investigated [55, 140], together with models that identify possible areas, ranging from architectures to risk management, whose improvement would result in the deployment of better services [86]. Section 6.1 is devoted to this topic.
* **Leadership**. It is perceived as a fundamental pillar driving _DT_ in organizations, including the _PAs_ (see [130] and references therein), in particular regarding the definition and implementation of mechanisms that strengthen the governance of digital and smart societies. Although, as pointed out in [105], strategy rather than technology is the key to success in _DT_, according to the study in [114], _PAs_ that have reached a certain degree of maturity in the _DT_ process are quite likely to have had the support of their managers and their involvement in the formulation of _DT_ strategy plans to create new public value. Therefore, IT managers and leaders still play a fundamental part regarding innovation, even with respect to _DT_, but they must also have a deep understanding of which organizational culture is most effective, depending on the type of innovation being implemented [168]. In addition, they must have knowledge and training in regard to a specialized set of skills on modern technologies and related cultural changes [199, 229]. Indeed, the current level of expertise related to emerging technologies is a barrier to their adoption [114], while the creation of services perceived as valuable critically depends on the level of competencies that managers and decision-makers have regarding technology [122]. Furthermore, managers should behave more like product owners of the new services aiming at meeting the needs of citizens [146, 147, 110, 137, 77, 141, 145, 233]. Yet another key to speed up _DT_ is a coordinated policy involving National State, Local States and Municipalities [178].
Interestingly, a technological framework based on Digital Twins has been proposed to help IT governance [169]. The framework, denoted Digital Twin for Governed IT Management (_DG4GITM_, for short), links the management of three interconnected systems: IT governance processes, IT management processes, and IT organizational assets, by leveraging the technology of Knowledge Graphs and the resulting computational infrastructure. In particular, a virtual entity of a given city is created through an enterprise ontology, the "_GITM Domain Ontology_", which is connected to the organization via data flows that populate it with real data from the resources of the organization. This point is not discussed further, since the papers covering this subject have all been accounted for in our Literature review.
* **Organizational Responsiveness**. It concerns the ability to adopt rapid organizational changes and to undertake new ways of operating within the _PA_ [135, 145, 199, 79]. _DT_ is a continuously evolving process that needs to be monitored in order to evaluate its progress and to identify directions for improvement [135]. Two key features to consider are Change Management, and Frameworks and Maturity Models:
* **Change Management**. It refers to the ability to accept innovation while producing quality services [199, 220]. In the _PA_ context, one of the essential parts of this point is the promulgation of laws, regulations, and guidelines, which promote the use of the services offered, enabling the creation of new public value [150, 234]. There is also a corresponding technical part regarding project management. In what follows, change management in terms of laws and regulations is best accounted for in the areas that are affected by those regulations and laws, e.g., Data. Consequently, the part of this manuscript specifically devoted to Change Management refers to project management engineering. Section 7.1 is devoted to this topic.
* **Frameworks and Maturity Models**. These models focus on the major technological, inclusive, and organizational elements of which a _PA_ is composed, in order to be able to measure their performance and establish the progress achieved in the _DT_ strategy undertaken [57, 151]. Section 7.2 is devoted to this topic.
### Two Paradigmatic Examples
Each Public Organization may have its own _DT_ agenda and plan, which may vary according to factors such as geographic location, size, cultural, economic and infrastructural contexts, e.g., [23, 33, 144, 209]. Comparative studies also exist, covering for instance: China, Canada and Estonia [121]; the US and UK [116]; Australia, Denmark and the Republic of Korea [143]. Estonia is particularly appreciated in terms of _DT_ [57, 108], to the point of being covered in the general press, e.g., the New Yorker [202], although some criticism is present [107].
Given the above State of the Art, as anticipated and motivated at the beginning of this section, we now introduce two real examples by focusing on their responsiveness aspects: Barcelona [88, 146] and Chicago [138].
* **Technological Responsiveness**
* **Smart Cities**. Barcelona, thanks to a budget allocation of 1.288 million EUR, has launched three key _DT_ initiatives. The first one is the reorganization of data localization, through the establishment of a Municipal Data Office, headed by a Chief Data Officer. The second one is the mapping of the entire Barcelona Data System, integrating each of the existing datasets into a single _data lake_[134], developed for this purpose, according to the Open Standards defined by the World Wide Web Consortium (_W3C_) [205] and referred to as the City Operating System (_CityOS_). It is based on _API_ and the data within it are now organized and interconnected thanks to the design of a standardized ontology for the city of Barcelona. An additional data-sharing platform, referred to as _Data Exchange_, is connected with the _CityOS_ data lake to ensure a continuous two-way flow of data between the City and the World. The third one is the renewal of the open data portal through the _CKAN_ tool [217], to ensure that public, private and personal data can be transformed into a new data-driven social infrastructure. It is worth pointing out that the city-wide data governance model of Barcelona is an extension of the open government agenda promoted by several cities around the world [25, 84], whereby cities support Open Data platforms for civic engagement and improved digital services to address a range of broader challenges, such as the implementation of Smart Cities. In order to make clear what follows, it is useful to recall the EU DECODE (_Decentralised Citizen Owned Data Ecosystem_) project [218]. Its goal is to develop a combination of decentralized software technologies, such as Blockchains and Cryptography, to give citizens more control over access and usage of their data. The DECODE technology allows data to be encoded and shared anonymously. In addition to what mentioned so far, Barcelona has leveraged _DECODE_ through the _citizen Science Data Governance_
pilot project, which uses environmental sensors, placed inside and outside participants' homes, to detect noise and pollution levels.
A non-trivial part of this project is the level of detail with which these data are visualized. Data from the _IoT_ networks of sensors are collected through the open source platform _Sentilo_ [219] in such a detailed and specific manner that individual homes can be identified. This raised concerns about the privacy of these data, as homeowners feared that their use could result in the profiling of pollution-prone buildings and homes, which would hurt house prices or insurance premiums. With the mentioned _DECODE_ pilot project, the focus was on developing rules that would allow users to encode and share their data with different target groups and with different specificities, generating more trust in the use of the collected data.
Chicago, through the creation of a good quality Open Government Data (_OGD_, for short) portal, continuously improved since 2012, provides data visualization tools on over 550 datasets relevant to the city, a number that continues to grow. Currently, the available Literature on _OGD_ generally points out that the _OGD_ hosted in such portals are often not accessible, clean, or easy to use [236]. Remarkably, none of these shortcomings seems to have been reported for the City of Chicago. Indeed, from the responses acquired through interviews, several interviewees were appreciative of the availability and quality of the _OGD_ that are available on the Chicago _OGD_ portal [41]. This goal was achieved through a careful processing pipeline in which data were extracted from data owners, e.g. _PAs_, cleaned and transformed through data cleaning techniques, and uploaded periodically to the _OGD_ portal. Approximately 99% of the data in the _OGD_ portal follows this processing pipeline. Maintaining the data quality levels present on the _OGD_ portal requires great citizen participation, and an active engagement of the _PAs_ that are owners and providers of this data [157]. With this initiative, Chicago is becoming a reference model of increasing sensitivity to data, which is useful for the creation of digital services, following a paradigm that is more and more open and collaborative, and less and less driven by top-down approaches [157]. Another important example of Smart Cities in Chicago is in the reduction of the exposure of citizens to foodborne diseases [44]. The City of Chicago, in collaboration with the Department of Quantitative Research and Analysis of Allstate, has developed a predictive machine learning model that takes into account various data sources, such as waste, crime, and sanitation data, to support the numerically small staff of the Chicago Department of Public Health (_CDPH_) in prioritizing the food inspections to be carried out [43]. The model works by ranking restaurants by the probability that they have a critical food safety violation (a minimal illustrative sketch of this kind of risk ranking is given after this list). The head of the _CDPH_, through a simple Shiny web application [189], is able to assign food inspectors first to the highest-risk restaurants. By using this model, potential foodborne illnesses could be prevented or their severity limited, as violations are identified and treated earlier than would have been possible with previous selection methods.

Figure 2: _DT Graph Model Augmented with Responsiveness Terms_. The _DT_ Graph Model in Figure 1 is augmented in correspondence of the Knowledge Domains, namely _Technology_, _People_ and _Process_. _Technology_ is augmented with the Technological Responsiveness aspects, such as Smart Cities and Data Governance. _People_ is augmented with Inclusive Responsiveness aspects, such as Skills, Co-Creation, and Leadership. _Process_ is augmented with the Organizational Responsiveness aspects, such as Change Management and Frameworks and Maturity Models.
* _Data Governance_. In addition to the aspects of data governance regarding Smart Cities, Barcelona has adopted a series of new standards, technologies, and practices, which have inevitably enabled new ways of managing data by different stakeholders [88, 146], with the result of increasing transparency, simplicity and objectivity, thereby providing a route to technological and data sovereignty. This has been achieved through the appropriate use of procurement clauses, e.g., contracts. The interested reader can find the Barcelona ICT Procurement Guide in [56]. In particular, and in regard to data, a minimum set of requirements are mentioned in regard to availability, accessibility, privacy-compliance, and shareability as Open Data among the various City Departments. In particular, they ensure that decisions around who produces, owns and exploits the data generated in the City remain in public hands. Those procurement guidelines are useful case studies for other Cities [146]. Although Barcelona is a success case in this area, it is to be mentioned that innovative and effective public procurements involving digital systems in the _PA_ may be challenging [72]. It is not sufficiently clear how some aspects of data governance have been handled in Chicago, but in the blog of the Open Data Portal Development Team of the City [42], the process of data collection and accountability is documented specifically for the different types of data collected. The City of Chicago prioritizes personal privacy in the development of datasets for publication. For example, for the Taxi and Transportation Network Provider Trips (TNP or "ride-share") datasets, an anonymization and aggregation technique has been designed and implemented to reduce the risk of passenger re-identification, while enabling favourable public use of the data (see [40] for further details).
* **Inclusive Responsiveness**
* **Skills and Co-Creation**. In October 2016, the Barcelona City Council, with an allocation of 75 million EUR to be spent annually on _DT_, planned to provide public services through an approach based on free software, Open Data sovereignty, and the adoption of Agile development methods, as discussed in [88]. The main challenges addressed in their _DT_ plans gave rise to several initiatives, as follows. First, the launch of an educational programme (_Steam Barcelona_), focusing on building competencies within city organizations, with the aim of strengthening the digital skills of the citizens. Second, the combined utilization of iterative and Agile development methods, for reducing the burden on citizens to use services (_City empowerment_). Third, the design and deployment of new guidelines on the design and accessibility of public services. With reference to [138], regarding the City of Chicago, the relationship between _OGD_ and co-creation is addressed, focusing on the factors that play a role in the co-creation component of _OGD_-driven public services. The result is the identification of a set of key factors for _OGD_-driven co-creation. Specifically: motivated stakeholders, innovative leaders, proper communication, existing _OGD_ portal, external funding, and Agile development. The interested reader is referred to [138] for further details regarding those factors, since we limit ourselves to discuss Agile development within the **Organizational Responsiveness** item below. There are also some lessons to be learned from this study. In fact, the authors also reported the main barriers to the publication and reuse of _OGD_, such as the widespread lack of understanding of _OGD_ and their benefits. One of the main challenges to the co-creation of public services is the need to redefine the roles of public and private actors in the public service creation process. Some other barriers are connected to the figure of the citizen, such as the internal motivation of participants, personal characteristics, awareness of participation opportunities and participatory skills, perceived ability to participate in co-creation initiatives, trust in co-creation initiatives, the relative importance of the service to be co-created and mutual trust between Government and citizens.
* **Leadership**. There are many facets to this topic. Barcelona exemplifies one of them: the establishment of a managerial figure such as the Chief Technology and Digital Innovation Officer, to support the city's administration, thanks to which a series of politically and managerially strong reforms could be initiated [146]. Chicago exemplifies another one: technologies, e.g., data analysis techniques, that allow better leadership because they support decision-making processes, aiding managers in exploring and solving some of the most difficult problems facing the city [138].
* **Organizational Responsiveness**
* **Change Management**. The City of Barcelona, in 2017, within its _DT_ plans, has provided guidelines for project management that recommend the use of Agile methodologies [88, 146]. As a matter of fact, Barcelona has developed its own Agile methodology as a variation of the SCRUM Framework, referred to as SCRUM@IMI since the Institut Municipal d'Informatica has had a major role in the adaptation of SCRUM to the Barcelona ICT needs. The interested reader can find a detailed account of this initiative at [24]. As for Chicago, in terms of Agile development [138], according to the opinion of several interviewed stakeholders involved in the development of many projects, although the implementation of services did not explicitly follow Agile development methodologies, many of the characteristics of such approaches were nevertheless present in the development of services. The interviewees have emphasized some of these characteristics, considering them crucial to the success of the project, in the design, implementation, and service delivery phases. Namely: speed of development; release of a minimum viable product (_MVP_); validated learning; incremental development; constant testing; and the ability to respond quickly to feedback and evaluations.
* **Frameworks and Maturity Models**. The Barcelona City Council has continuously collected feedback and, in terms of metrics, measured various performance indicators on the services provided, in order to monitor signs of progress on the expected results of the adopted _DT_ strategy [88]. It is not clear how progress on the expected outcomes of the initiatives implemented in the City of Chicago is measured, as we found no authoritative documents on this topic.
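To make the Chicago food-inspection example above more concrete, the sketch below shows, in schematic form, how a risk-ranking model of that kind can be built: a classifier is trained on past inspections and establishments are then sorted by the predicted probability of a critical violation. The features and data are synthetic, and the library choice (scikit-learn) is ours for illustration; the City's actual model, data sources, and tooling (an R/Shiny application) differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic illustrative features per establishment, e.g. days since last
# inspection, nearby sanitation complaints, past violation count.
X_train = rng.random((500, 3))
y_train = rng.integers(0, 2, size=500)   # 1 = critical violation found

# Train a classifier on (synthetic) historical inspections.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score establishments awaiting inspection and rank them by predicted risk,
# so that inspectors can be sent to the highest-risk restaurants first.
X_pending = rng.random((20, 3))
risk = model.predict_proba(X_pending)[:, 1]
ranking = np.argsort(risk)[::-1]
for place in ranking[:5]:
    print(f"establishment {place}: predicted risk {risk[place]:.2f}")
```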
## 4 Data
We discuss here, in detail, the data ecosystem.
### A Glossary of the Data Ecosystem
For the convenience of the reader, we describe the following well-known general terms: _Open Data_, _Linked Data_, and _Linked Open Data_.
* _Open Data_ are accessible, exploitable, modifiable, shareable by anyone for any purpose, including commercial purposes, and released under an open license [106].
* _Linked Data_ are structured in such a way as to be interconnected with other data sources to become more useful, promoting discoverability and interoperability. They are built on standard Web technologies such as _HTTP_, _RDF_, and _URI_, but instead of using them only to serve web pages to human readers, they are also used to share information in a machine-readable way [18]. This type of data has evolved to encode and model knowledge coming from different sources. Notable examples are the _RDF Knowledge Graphs_ [16] and related ontologies, built on _Open Data_, that formally model domains of interest.
* _Linked Open Data_ are the intersection of the previous two categories.
Government data is any information, in any form, that is created or obtained by the Government in the course of its business. When the data are public, we distinguish _Open Government Data_ (_OGD_), _Linked Government Data_, and _Linked Open Government Data_ (_LOGD_), respectively. Figure 3 represents the relationships among _Government Data_, _Open Data_, and _Linked Data_ [17]. (Linked) Open Government Data make it easier for citizens, researchers, developers, and businesses to access and use the data to create new applications, analyze social and policy trends, and develop transparency and accountability.
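To illustrate what "linked" means in practice, the following minimal sketch uses the rdflib library to describe a fictitious municipal dataset as RDF triples, reusing the standard DCAT, DCTERMS, and FOAF vocabularies; serialized as Turtle, such a description can be linked to other sources through shared URIs. The URIs and the dataset are invented for illustration only.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, FOAF, RDF

# Vocabularies: DCAT for datasets, plus an invented namespace for the city.
DCAT = Namespace("http://www.w3.org/ns/dcat#")
CITY = Namespace("https://data.example-city.org/resource/")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)
g.bind("city", CITY)

dataset = CITY["air-quality-2023"]
publisher = CITY["environment-department"]

# Describe an open dataset and its publisher with standard vocabularies,
# so that other sources can link to the same URIs.
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Hourly air quality measurements")))
g.add((dataset, DCTERMS.license, CITY["open-license"]))
g.add((dataset, DCTERMS.publisher, publisher))
g.add((publisher, RDF.type, FOAF.Organization))
g.add((publisher, FOAF.name, Literal("Environment Department")))

print(g.serialize(format="turtle"))
```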
### Collection and Management of Open Data: _PAs_ Are Special
Many of the data that we have described are represented using technologies related to the _Semantic Web_ [231], via Open Data standards, which are defined by the W3C and supported by most technology providers, especially those offering data management tools. However, the collection and management of Open Data by a _PA_ seems to require innovation in the processes adopted to carry out those tasks [239]. In the mentioned study, such an innovation is characterized in terms of Agile methodologies: the ability of an organization to capture emerging needs and promptly associate them to the current data processes, in order to obtain innovative data-driven products and services. Based on the empirical study of four _PAs_, a process model for the achievement of the mentioned agility is proposed (see [239] for details).
Improving the usability of some collected data also requires innovation. Indeed, as discussed in [161], although there is a proliferation of Open Data platforms, their usability for the non-specialist is perceived as poor, mainly because they have been designed by software specialists. The mentioned study also provides an evaluation of the usability of an Open Data platform by non-specialists, for the City of Dublin, pointing out the need for innovative designs of user interfaces. Open Data integration is also a serious obstacle to their fruitful use, with some proposals on how to overcome it, accounting for implementation strategies and organizational models [19].
### From a Data Ecosystem Abstraction to its Concrete Realization: Some Examples
Figure 3 provides a general description of a data ecosystem, which can then be realised in several ways, all of which should be compliant with the Open Standards defined by the W3C [205]. Two incarnations of a data ecosystem have already been presented and discussed in Section 3.4: the Barcelona CityOS data lake and the Chicago Open Data portal. We now provide three additional examples, concentrating on their technical aspects and pointing out their usefulness.
Figure 3: Relationships between Government, Open, and Linked Data (LOGD is Linked Open Government Data). Adapted from [17].

* **Open Data Catalogs for the PA**. They are software applications that build inventories of data resources of a given _PA_, in order to help data professionals and stakeholders to find relevant data for analysis-related uses [109]. They are based on metadata, which provide additional data/information about data resources. The intent is to help catalogue users to understand the structure, nature and context of the data available in computer systems and decide whether they are suitable for their needs. One of the earliest relevant examples of an Open Data Catalog proposal is the _OGD_ Catalog of the Czech Republic: it serves as a single access point to the _OGD_ datasets, supporting the discoverability and reusability of the available _OGDs_. Another, more recent, example is provided in a case study of the Italian _PA_ [49], conducted during the period April-December 2017, in which the use of the _OGDs_ favours the implementation and integration of services (or digital platforms) such as _pagoPA_ (payments system to _PAs_ and public service providers in Italy), _SPID_ (Italian Public Digital Identity System), and _ANPR_ (Italian Register of Resident Population). These new services are based on _OGD_ Catalogs and, in addition to the use of Open Standards and Open Software, are designed as modular structures, which facilitate their evolution and reduce the complexity of coordination between the actors involved in the co-creation processes of the various projects, as also reported in [114, 230]. For a successful integration of services, a good interoperability framework of information sources must be provided. In general, it organises the exchange of data and interoperability between different services, data centers and _PAs_. It consists of a set of specific design rules, documents and toolkits for software developers (e.g. Software Development Toolkits). For the specific case study, the main part of the interoperability framework is the Data Analytics Framework (DAF), which collects and processes data from _PAs_ and external actors to make them publicly available and accessible through a Web user interface, and defines protocols and regulations that facilitate the integration and orchestration of services. The DAF empowers each _PA_ to orchestrate the creation of public value by establishing the actors that can have access to the data and the terms under which they can access them. Uploaded data are supervised by the Data Protection Authority [201], which safeguards the privacy of citizens and evaluates how other public agencies use their data. Therefore, each level of government and the different public agencies are responsible for regulating how data is accessed, according to their administrative and political responsibilities. Specifically, a _PA_, as well as a private company, can make data available to the public through the DAF, and can also indicate who can access that data and the ecosystem on which that data should operate. As expectations and needs change, data access settings can be modified to adapt to emerging needs and requirements. When public agencies upload their data to the DAF, they fill out a privacy form to ensure that the data is privacy compliant, so as to avoid any negative effects on citizen privacy.
* **Cloud-based Open Data Federation: the CLIPS experience**[74]. CLIPS is a cloud-based approach for migrating public services to the Cloud, based on the use of microservices. It involves four European Cities: Bremerhaven (Germany), Lecce (Italy), Novisad (Serbia) and Santander (Spain). It is based on the _Open Data_ because, in addition to being a useful resource for developing new value-added services, they seem to be valuable for exploring potential transnational business opportunities. The CLIPS platform includes an _Open Data Federation_ node to allow access to the _Open Data_ sets from different federated Municipalities, as if they were a single data source for front-end applications. The main innovation of CLIPS is to provide a usable methodology, that enables Government employees and other external stakeholders to collaborate on new projects and service delivery from a set of basic building blocks, available in the Cloud. This offers the ability to respond more quickly, reduce service delivery costs and be more responsive to end-user needs. It defines an approach for building an ecosystem in which _PAs_, small and medium-size enterprises, and citizens can co-create new and innovative public utility services. The CLIPS platform is designed as a three-tier cloud platform, including: an Infrastructure-as-a-service (IaaS) which includes all the required modules to provide basic cloud resources like computation, storage and networking; an application serving and development functionality of traditional Platform-as-a-service (PaaS); and a convenient marketplace for the developed cloud-based services and microservices, typical of a Software-as-a-service (SaaS). It consists of several modules, such as authorization, authentication, and monitoring of data access, as well as providing an _API_ to connect the microservices present with each other. Data security aspects are also addressed. In fact, the CLIPS Security strategy, in addition to common security best practices (e.g., ISO/IEC 27002, ISO/IEC 27017, ISO/IEC 27018), is to adopt some innovative techniques and approaches from the open-source community as well as from other European Projects, such as "Secure idenTity acrOss boRders linked" (_STORK_) [66], enabling citizens to use their national credentials in _PA_ applications provided by foreign States and to securely transfer their sensitive data between the States.
* **Data and Smart Cities**. Smart Cities are perceived as data engines [111, 147]: IoT infrastructures, social networks, wearable devices, etc. generate valuable data that can be used to improve or offer new services to the citizens. However, due to its volume and heterogeneity, the collection of data produced by Smart Cities (including the creation of related metadata) requires a non-trivial effort in the verification of its correctness and quality [111]. Metadata can describe different information sources and can be collected and catalogued within appropriate Open Data Catalogs, such as the already mentioned open-source solution CKAN (a minimal sketch of querying such a catalog programmatically is given after this list). Moreover, metadata can be represented through data vocabularies designed to facilitate interoperability between open data sources available on the Web, e.g., using the DCAT-AP metadata profile [63]. The implementation of a set of guidelines and documents, such as API documentation and planning documents, systematically discussed and agreed upon with the public officials responsible for providing datasets, turns out to be fundamental to improve the discoverability, understandability, and further processing of data. As shown in [81], the implementation of these solutions involves a careful design phase of the technology infrastructure (cloud/edge) related to Smart Cities, with an emphasis on the data acquisition plan. The infrastructure must be the pillar of processing and storage of data and also include data analytic tools and methods aimed at the implementation of robust machine intelligence solutions available to the city government for the benefit of its citizens. To this end, three
distinct taxonomies of data analytic tools serving Smart Cities are proposed in [147] and referred to as the DMS Taxonomy, i.e., data, methods and services.
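As a concrete illustration of how the catalogs discussed in this section can be accessed programmatically, the sketch below queries the package_search endpoint of CKAN's Action API, which CKAN-based portals (such as the Barcelona open data portal mentioned in Section 3.4) expose. The portal URL is a placeholder pointing at a public CKAN demo instance, and the available dataset fields may vary between installations.

```python
import requests

# Placeholder base URL of a CKAN-based open data portal
# (the kind of catalog discussed in Sections 3.4 and 4.3).
PORTAL = "https://demo.ckan.org"

def search_datasets(query: str, rows: int = 5):
    """Query CKAN's Action API for datasets matching a free-text query."""
    response = requests.get(
        f"{PORTAL}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    response.raise_for_status()
    payload = response.json()
    return payload["result"]["results"]

if __name__ == "__main__":
    for dataset in search_datasets("air quality"):
        print(dataset["name"], "-", dataset.get("title", ""))
```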
## 5 Technology
We discuss here, in detail, the two main technical aspects we have considered concerning _Technology_: Smart Cities and data governance. As anticipated in Section 3.3, for Smart Cities, we concentrate on Digital Twins.
### Smart Cities: Digital Twins
Focusing on Smart Cities and following [58], the major characteristics of Digital Twins are: accurate City Mapping, for instance of roads and public illumination; interaction between the virtual and real "objects", e.g., people and their "avatars"; software definition, e.g., platforms that simulate the real city in a virtual space; and intelligent feedback, e.g., evaluation of the effects of city plans and initiatives before realization. Interestingly, it has been argued that their realization may enable an acceleration of NetZero emissions in government critical infrastructures [174]. A further refinement of the technical characteristics of Digital Twins is proposed in [240], although its major contribution seems to be the account of Digital Twins initiatives in China, the USA, and France.
Although there are many national and city initiatives regarding Digital Twins, e.g., [20, 155, 206, 240], we have found only a limited number of academic papers covering the subject. One is in regard to cross-border Smart Cities, i.e., Helsinki and Tallinn. Recalling that an urban operating system is a network of sensors that can acquire data regarding the city which, in turn, can be transformed into "knowledge" [191], and pointing out that the X-Road data infrastructure [183] is one of the pillars of the Estonian _DT_, a cross-border urban operating system involving both cities is proposed in [190]. The intent is to integrate the Digital Twins that currently involve each of the two cities separately. For completeness, as well as for relevance to this Tutorial, we mention that the notion of urban operating system is investigated in depth in [124], with various examples of it. The study points out the modest impact that it may have on city planning, and its contradictions.
Overall, Digital Twins have the potential to bring significant benefits to Smart Cities, including better evaluation of city plans, and potentially even achieving NetZero emissions, but further research is needed to fully understand their impact.
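As a highly simplified illustration of the "interaction between virtual and real objects" and "intelligent feedback" characteristics listed above, the sketch below keeps a virtual replica of a set of street lights in sync with (simulated) sensor readings and evaluates a dimming policy on the replica before it would be applied to the real assets. It is a conceptual toy under invented assumptions, not a reference Digital Twin architecture.

```python
from dataclasses import dataclass, field

@dataclass
class StreetLightTwin:
    """Virtual replica of one street light, updated from sensor readings."""
    light_id: str
    ambient_lux: float = 0.0
    power_w: float = 0.0

@dataclass
class CityLightingTwin:
    """Digital twin of a lighting network: mirrors state and tests policies."""
    twins: dict = field(default_factory=dict)

    def ingest(self, reading: dict) -> None:
        # Synchronise the virtual object with the latest real-world reading.
        twin = self.twins.setdefault(reading["id"], StreetLightTwin(reading["id"]))
        twin.ambient_lux = reading["lux"]
        twin.power_w = reading["power_w"]

    def evaluate_dimming(self, lux_threshold: float, dim_factor: float) -> float:
        # Intelligent feedback: estimate the effect of a dimming policy on the
        # replica before changing anything in the real city.
        return sum(
            t.power_w * (1 - dim_factor)
            for t in self.twins.values()
            if t.ambient_lux > lux_threshold
        )

city = CityLightingTwin()
for reading in [
    {"id": "L1", "lux": 80.0, "power_w": 120.0},
    {"id": "L2", "lux": 5.0, "power_w": 120.0},
]:
    city.ingest(reading)

print(f"Estimated saving: {city.evaluate_dimming(50.0, 0.4):.1f} W")
```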
### Data Governance
In the context of our Tutorial, we focus on two particular aspects of data governance, i.e., privacy and cyber-security.
In terms of privacy, in the international scenario, there are several National Data Protection Authorities. An exhaustive list of these Authorities can be found through an interactive map on the website of the French National Commission for Information Technology and Civil Liberties (CNIL) [48]. The main function of the individual Authorities is to protect the privacy of citizens and assess how the _PAs_ (or other organizations) use their data, for example, by keeping under control the data they publish on their respective institutional web portals. Barcelona is a good example in terms of control of the data, regarding availability and detail of access, as discussed in Section 3.4. As well argued in [170], the amount of data that is collected within Smart Cities initiatives, once made public, even in an anonymous form, can be subject to cross-reference attacks that could capture private information. In order to address this problem, the mentioned paper proposes solutions and use-cases. Interestingly, that study is a pilot project funded by the U.S. Department of Homeland Security that has the intent to demonstrate how data privacy technologies can be of help.
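To make the re-identification concern above more tangible, the sketch below applies two standard mitigations that are also mentioned for the Chicago trip datasets in Section 3.4, namely coarsening (generalisation) and small-group suppression, to a toy table of sensor readings using pandas. The thresholds, grid size and column names are illustrative assumptions, not taken from any of the cited projects.

```python
import pandas as pd

# Toy sensor readings: precise locations could allow re-identification
# of individual homes when cross-referenced with other sources.
readings = pd.DataFrame({
    "lat": [41.3851, 41.3852, 41.3990, 41.3991, 41.3992],
    "lon": [2.1734, 2.1735, 2.2001, 2.2002, 2.2003],
    "noise_db": [61.0, 63.0, 55.0, 54.0, 57.0],
})

# Generalisation: round coordinates to a coarse grid (roughly 1 km cells).
readings["cell_lat"] = readings["lat"].round(2)
readings["cell_lon"] = readings["lon"].round(2)

# Aggregation per grid cell, then suppression of cells with fewer than
# k contributing sensors (a k-anonymity-style threshold).
K = 3
aggregated = (
    readings.groupby(["cell_lat", "cell_lon"])
    .agg(n=("noise_db", "size"), mean_noise_db=("noise_db", "mean"))
    .reset_index()
)
published = aggregated[aggregated["n"] >= K]
print(published)
```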
In terms of cyber-security, it is well known that there is a proliferation of Cyber-Security Agencies, e.g., the European Union Agency for Cyber-Security (_ENISA_) [69]. This is not surprising, given the increase in the number and sophistication of the attacks reported in the past few years [46]. However, since _PAs_ are also the object of those attacks, it is surprising that only a limited number of papers addressing cyber-security issues have emerged from the literature regarding _DT_, as we outline next.
With regards to data, in [32], the Organisation for Economic Co-operation and Development (OECD) [158] recommends maintaining a strong balance between the need to provide timely official data and the need to provide reliable data, as well as to manage the risks associated with the increased availability of data in open formats and those related to digital security and privacy. A related issue is the design and management of government data center architectures, in particular regarding security. Indeed, those centers, due to the heterogeneous nature of the services they offer and the software they host, are vulnerable from the point of view of security. A model of government data center architecture showing how to achieve the ISO/IEC 27000 security standard has been proposed in [212]. More generally, as discussed in [12], there are several initiatives in many countries with the goal of providing methodologies for security assessment.
The latter consists of evaluating an information system from the attacker's point of view, with the aim of providing a systematic review of the weaknesses of information systems, a corresponding assignment of probabilities of attack via each weakness, and a scale of severity levels of damages. Recommendations for corrections are also offered.
Smart Cities and their associated technologies, being relatively novel, are also object of study in terms of security. A specific analysis regarding IoT devices and related technological infrastructures, is given in [182] (but see also [241] and references therein). Indeed, due to their interconnected nature, IoT technologies make data security a more complex challenge with respect to the past. Therefore, ensuring the security of IoT products and services has become a top priority. To this end, an entire framework, referred to as SAO, regarding the automation of IoT security has been proposed in [241]. It has the merit of being grounded on a recent review of the State of the Art, clearly describing challenges and proposing solutions. SAO integrates the key elements for security automation and orchestration for IoT systems, including threat modeling, security and privacy by design, trust management, security configuration, threat monitoring, patching, compliance check, and secure data sharing. Another specific analysis is provided in [9], regarding Digital Twins. Indeed, the confluence of a broad set of technologies, ranging from cyber-physical systems to artificial intelligence, and the implicit interaction with the real objects modeled by the Digital Twin, poses new security threats. The mentioned paper offers a classification of them, together with security recommendations on how to address them, via a paradigm that classifies the threats based on the functionality levels composing a Digital Twin.
## 6 People
We discuss here, in detail, the technical aspect we have found concerning _People_ that deserves further attention, based on the Literature search: _Skills and Co-Creation_.
### Skills and Co-Creation
A successful _DT_ process requires users not only to acquire new skills but also to know how to interact effectively with them [230]. The skills required to handle _DT_ do not relate to a particular discipline only but require a multidisciplinary approach, where knowing the specific competency levels of the individuals that are part of an organisation, and the know-how of the entire organisation itself, is recognised as a fundamental requirement. The lack of a coherent educational approach to the acquisition of appropriate skills also hurts e-government users, which could generate problems in the usability of the _PAs_ digital services [28]. In [230], the authors, as a possible solution to this shortcoming, propose an educational framework composed of five basic components designed, developed, and tested to achieve the educational goals necessary for a successful _DT_ strategy. The components of the framework were intended to define: (a) a competency model useful to describe the required competencies; (b) an educational approach that can be provided by the professional or academic context; (c) a maturity model to monitor progress in the process of acquisition of the required competencies; (d) an appropriate didactic model, tailored to digital capabilities and demands, to make competence delivery successful and efficient; and (e) a competency certification system to coach organizations and citizens to understand and communicate their competencies, ensuring transparency and quality.
As for co-creation, it is useful to recall from the previous sections that the ultimate goal of a digital \(PA\) is the co-design and deployment of services that are perceived as being of "value" (see _People_ in Section 3.1). Accordingly, how to achieve that goal, and with which methodologies and supporting technologies, is an emerging area of research [163], which we outline next.
In its simplest and easiest-to-realize form, a co-creation methodology is limited to the participation of a carefully selected set of users, particularly in the initial phase of the creation process, and to the measurement of their perceived degree of satisfaction through constant feedback collection [221]. However, the intent is to have co-creation methodologies that can handle millions of users, i.e., citizens. It is natural, then, that the IT platforms supporting the \(PA\) must support such "in the Large" co-creation methodologies. Recall from [49] that Government as a Platform (GaaP) is a new way of building digital public services through a collaborative development model in which a community of partners, providers and citizens shares and enhances digital public processes and capabilities, or extends them for the benefit of society; its current realizations, however, seem to be designed mainly to achieve efficiency. According to the mentioned study, the efficiency granted by GaaP does not necessarily imply the creation of value for the citizens, a point also made in [120, 214]. Indeed, as discussed in that paper, the key to the creation of public value seems to be the modularity of the platform configuration and the ability to consistently coordinate the different ecosystems that support public agencies. To this end, a few examples are provided, borrowed from the private sector and involving IT giants such as Apple, Google and Amazon. Here we limit ourselves to mentioning the Apple iOS Support Service [61], which enables multiple ecosystems, different in nature, to interact and coexist. According to the analysis reported in [49], the adoption of analogous models would allow the co-creation (by \(PA\)s and citizens) of value services with a "large scale" involvement of active actors. The importance of adequate digital platforms for the co-creation of value involving a large number of actors is also identified as a key success factor in [221]. A paradigm shift from crowd-sourcing and social media monitoring to IoT has also been proposed, with a pilot project set up in a Municipality in Sweden [85].
In addition to what we have mentioned so far, the notion of participatory design, e.g., the involvement of citizens in urban planning, is being analyzed in view of _DT_. A historical account of how that notion has changed over the decades and how it fits a modern view of _DT_ is provided in [166]. An important related topic is the co-creation of integrated public services, that is, ideally, a one-stop platform that integrates the services available to the citizens. The State of the Art, mostly regarding the EU, is well presented in [196].
For completeness, we also mention that, in terms of _PAs_ and co-creation of value, the Italian public administration as a platform is studied in [49]; the Norwegian Labour and Welfare Department is studied in [221]; and a platform supporting co-creation at different levels of governance in Portugal is presented in [192]. A specific platform for co-creation in the area of Urban Planning, in support of a previous initiative, namely the International Laboratory of Architecture and Urban Design, has been proposed in [80]. Finally, a model based on Digital Twins that allows co-creation, as well as evaluation of the final result, for public services has been proposed in [165], with a planned test of the model in Sofia.
## 7 Process
We discuss here, in detail, the two main technical aspects we have found concerning _Process_: _Change Management_ and _Frameworks and Maturity Models_.
### Change Management
As well put in [24], although the Agile Manifesto dates back to 2001 and the corresponding methodologies have had remarkable success in the private sector, their adoption in the _PA_ is rather slow. Yet, in the _DT_, Agile project management methodologies (see [188]) seem to be the ones that should replace more classic ones, such as Waterfall [7]. The experience reported in [152, 153] suffices to exemplify this point. In the mentioned studies, the authors point out that the implementation of the e-governance project Digital India Land Records Modernization Program (DILRMP) has highlighted major challenges and complexities typical of traditional project management. They discuss how an Agile management approach can play a key role in transforming such an implementation from slow and ineffective into a more responsive, flexible and effective one.
Documented difficulties in the adoption of Agile methodologies have emerged [60, 141, 177, 137, 229, 145]. The cause is common: the difference in _modus operandi_ between the _PAs_ and the private sector, resulting in resistance to change and in difficulty in identifying the most appropriate methodologies for the _PA_. Fortunately, studies [175] seem to have identified "agility enablers", i.e., possible actions that can facilitate the transition to Agile models. However, as pointed out in [99], the transition to Agile development models will require the writing of appropriate guidelines ensuring that the development process is indeed Agile. These will depend on the particular requirements of the organization involved in the transition process.
Although the highlighted difficulties persist, there are many _PA_ project management initiatives that use Agile methodologies: for instance, in the software development census of the Swedish Government Agencies, the majority of agencies consider their approach to be more agile than plan-driven [31, 117]. In addition to Barcelona and Chicago, mentioned in Section 3.4, Agile methodologies are applied in several _PAs_ [24, 88], ranging from national governments (e.g., the UK) to large cities (e.g., New York) and regional governments (e.g., Andalusia). Apart from the above noteworthy examples, a systematic and technical presentation of the adoption of Agile methodologies for project management in the _PA_ is reported in [10]. The paper also lists the Agile process automation technologies in use, i.e., _SCRUM_, _KANBAN_, and _SAFe_ (see again [188]). A comparison with classic Waterfall methodologies is also performed, and it is stated that the Agile ones allow for more transparent projects, effective team building, adaptability to change, absence of rigid hierarchy and bureaucracy, and continuous education. Some disadvantages are also reported, such as the risk of endless product changes, the high dependence on the qualification and experience level of the development team, and the difficulty of determining total project costs in a timely manner. An effort, whose impact is still unclear, is also made in [30] to identify the specific characteristics that an Agile methodology should have for use in the public sector. That paper confirms the difficulties already mentioned, which must be overcome for such a change of project management, and stresses that project management should be reconfigured to grant teams some degree of autonomy. Once again, the barriers are routine practices that are difficult to abandon and obsolete regulations.
A more specific evaluation of Agile methodologies in the _PA_, regarding DevOps [27], is provided in [148, 232], where it is considered how to bring best practices from the production world into the _PA_, making the flow of information more fluid. As a result, the adoption of _DevOps_ promotes organizational responsiveness, which is useful for improving productivity and performance. At the same time, _DevOps_ breaks down organizational barriers by promoting information exchange through the use of shared metrics and feedback mechanisms between development teams, as reported in [75].
Bringing _DevOps_ into the public sector is expected to foster effective teamwork and, consequently, an open flow of knowledge among _PA_ employees [139, 194]. There are initiatives in this regard, such as the one of the Brazilian Federal Government referred to as Brazilian Public Software (SPB). The objective is to promote sharing and collaboration enabled by Free/Libre/Open Source Software (FLOSS) solutions for the _PA_ [139]. SPB is an interconnected platform based on different FLOSS tools that provides various solutions for collaborative software development, with the purpose of enabling Brazilian _PAs_ to share information, experiences, and best practices about the use of these tools (see [34] for more details about the architecture and operational manuals).
Furthermore, since transparency and openness are among the core principles of _DevOps_ practices, their use is expected to simplify bureaucracy and decrease corruption in public service delivery. A detailed analysis of the benefits of using a DevOps process model in the _PA_ is presented in [238]. It involves seven Saudi Arabian _PAs_, evaluated with the Bucena DevOps Maturity Model [36]. That study concludes that the use of DevOps is promising, although its cultural aspects, processes, and technologies need to be strengthened. An additional study proposing DevOps for the general support of Digital Transformation is presented in [184], in agreement with the papers mentioned so far.
We mention that there are also experiences indicating that classic Waterfall and Agile methodologies can synergistically coexist. Indeed, we learn from the case study in [132], involving projects developed through Agile methodologies by some Brazilian governmental organizations, that although the adoption of such methodologies fosters an improvement in the quality of the public services created, these projects achieve greater success when conducted in combination with other, traditional software development approaches.
Finally, Agile software development in the public sector must be scalable, i.e., able to work both for relatively small projects, coming from small contexts such as individual cities, and for large national and international projects, for example through the adoption of the SAFe process automation technology, as reported in [70, 77]. To this end, it is of interest to mention a recent review [62] regarding the use of Agile methodologies on a large scale. Although one would expect the _PA_ to be the area with the most involvement, it is somewhat disappointing to report that only \(5\%\) of the initiatives reported there belong to the public sector.
### Frameworks and Maturity Models
Over the past decade, various frameworks and models have been developed to measure and monitor the degree of digital maturity achieved by Digital Transformation strategies. To date, however, no single model can perfectly capture every organizational reality, whether private or public. Each of them captures a particular set of indicators and uses different tools to collect the information used to quantify those indicators. One such tool is certainly the interview, possibly combined with document analysis [13, 49, 71, 73, 97, 101, 105, 126, 127, 132, 186, 198, 199, 225, 233, 234]. In the mentioned case studies, semi-structured interviews are mainly conducted, over different periods, with various IT professionals from public and private organizations actively involved in _DT_ processes, in order to measure the degree of digital maturity gained. Several barriers and success factors emerged from the interviews, which are useful for a comprehensive understanding of _DT_. The results show that this survey instrument is quite valid, as effectively reported in [186, 198, 199] (see the respective Appendix sections).
In addition to interview-based assessments, many general maturity models have been developed over time. For most of them, the end result is the value of an index, computed from the variables specifying the model, that assesses the level of maturity achieved. Some track macro-economic factors at a national or international level, as in [50, 14, 57, 59, 65, 159, 204, 210, 89, 102]. Others refer to micro-economic factors related to individual organizations, as in [36, 47, 151, 171, 215].
Regarding the first group, we discuss only the GovTech Maturity Index (_GTMI_) developed by the World Bank [57] as part of their _GovTech_ initiative (Government and Technology) [203], since it appears to be the most exhaustive maturity model currently available. It is worth pointing out that _GovTech_ is an approach to the modernization of the public sector, through innovative technological solutions, that promotes a simple, efficient, and transparent Administration with the citizens at the center of the reforms. There are about 80 _GovTech_ initiatives worldwide, with good practices observable in 43 of the 198 countries observed. In this context, _GTMI_ is a comprehensive measure of the _DT_ in a given country. It is based on 48 key indicators and is designed to collect data from 198 countries. _GTMI_ measures key aspects of four focus areas of the _GovTech_ initiative: supporting Core Government Systems (_CGSI_, 15 indicators), improving Service Delivery (_PSDI_, 6 indicators), engaging citizens (_CEI_, 12 indicators), and promoting the enabling factors of the _GovTech_ initiative, such as building digital skills in the public sector and an environment conducive to innovation in the public sector (_GTEI_, 15 indicators). Each indicator is associated with a score and a weight, the latter based on the opinions of domain experts about the relative importance of the selected indicator. Using these scores and weights, the _CGSI_, _PSDI_, _CEI_, and _GTEI_ scores are calculated. The final _GTMI_ score, on a \([0,1]\) scale, is the arithmetic mean of the four scores just mentioned. See [57] for more explanatory details on the indicators. All 198 countries were grouped into four categories, from A (leaders in _GovTech_) to D (minimal attention to _GovTech_), according to their _GTMI_ score.
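The aggregation just described is simple enough to be sketched in a few lines of code. The snippet below only illustrates the weighted-average-plus-arithmetic-mean scheme: the actual indicator scores, weights, and normalisation used by the World Bank in [57] are not reproduced here, and all numbers are invented.

```python
# Illustrative sketch of the GTMI aggregation; real indicator data and the
# World Bank's exact normalisation are not reproduced (numbers are invented).
def focus_area_score(scores, weights):
    """Weighted average of indicator scores for one focus area, in [0, 1]."""
    assert len(scores) == len(weights) and sum(weights) > 0
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def gtmi(cgsi, psdi, cei, gtei):
    """Final GTMI score: arithmetic mean of the four focus-area scores."""
    return (cgsi + psdi + cei + gtei) / 4

# Toy example with 3 indicators per focus area instead of the real 15/6/12/15.
cgsi = focus_area_score([0.8, 0.6, 1.0], [2, 1, 1])
psdi = focus_area_score([0.5, 0.7, 0.9], [1, 1, 2])
cei = focus_area_score([0.4, 0.6, 0.8], [1, 2, 1])
gtei = focus_area_score([0.9, 0.7, 0.5], [1, 1, 1])
print(f"GTMI = {gtmi(cgsi, psdi, cei, gtei):.2f}")  # a value in [0, 1]
```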
Based on analyses comparing the _GTMI_ with other relevant indices, the _GTMI_ indicators were found to be consistent and robust, even concerning the analysis of lesser-known dimensions related to particular characteristics of a given Government. Results and good practices presented in [57] demonstrate how the _GovTech_ focus areas identified by the World Bank are highly relevant to the _DT_ agenda in most countries.
The second group of models relates to the micro-economic factors of individual organizations. For conciseness, we only briefly discuss the Digital Maturity Balance Model [151]. It is oriented towards _PAs_ and is based on two axes, digital maturity and importance ratio, the focus being on measuring the balance between the two. Each maturity dimension is assessed by taking into account the importance ratio of that dimension in the organization. The main categories of maturity dimensions involved are data, IT governance, strategy, organisation, and process. The construction of the model essentially consists of three steps. First, a method must be defined to assess digital maturity. Second, a method must be defined to measure the importance of each dimension of digital maturity pertaining to each of the categories involved. Third, a self-assessment tool must be provided that combines the two methods just mentioned, e.g., in the form of an online questionnaire in which the questions allow the assessment of the digital maturity criteria and of the digital relationship attributes. Results show that the use of the model and of the self-assessment tool is useful and relevant, but needs further refinement to fully correspond to the reality of a given _PA_.
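To convey the flavour of such a self-assessment, the sketch below contrasts maturity and importance per dimension and flags imbalances. It is a hypothetical reading of the balance idea: the dimension names follow the text, but the \([0,1]\) scoring scale and the tolerance threshold are our own assumptions, not the actual questionnaire of [151].

```python
# Hypothetical sketch of a maturity/importance balance check; the [0, 1] scale
# and the tolerance value are assumptions, not the actual model of [151].
DIMENSIONS = ("data", "IT governance", "strategy", "organisation", "process")

def balance_report(maturity, importance, tolerance=0.1):
    """maturity/importance: dicts mapping each dimension to a score in [0, 1].
    A dimension is 'balanced' when maturity lags importance by at most the
    tolerance."""
    report = {}
    for dim in DIMENSIONS:
        gap = importance[dim] - maturity[dim]
        report[dim] = {"gap": round(gap, 2), "balanced": gap <= tolerance}
    return report

example = balance_report(
    maturity={"data": 0.4, "IT governance": 0.7, "strategy": 0.6,
              "organisation": 0.5, "process": 0.8},
    importance={"data": 0.9, "IT governance": 0.6, "strategy": 0.7,
                "organisation": 0.5, "process": 0.7},
)
print(example)  # flags "data" as out of balance in this toy input
```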
Interestingly, micro-economic maturity indexes may be of use in measuring aspects of _DT_ far from the ones they were designed for. By way of example, the CMMI index [47] has been adapted in [200] in order to measure the success of the adoption of the Agile DevOps methodology in _PA_ project management.
## 8 Future Directions
* **Data.** From what has been discussed in Section 4, it is evident that data innovations come from using open Semantic Web standards to represent the information assets of _PAs_. The introduction and use of Open Data is certainly a big step forward, since the advanced functionalities it makes available have transformed the _OGD_ landscape [38]. Apparently, little attention has been dedicated to _LOGD_, in particular to all those activities related to the production and maintenance of quality levels that facilitate interoperability with other data sources [17], according to the Open Government principles [95, 207]. In particular, a domain that needs attention for the _DT_ is the use of _RDF Knowledge Graphs_, since they would facilitate the discovery of new data sources and improve interoperability among different _PAs_ (a minimal illustration is sketched after this list). Another aspect that needs to be developed is setting up mechanisms that strengthen the trust between citizens and _PAs_ regarding the use of the collected data [214]. A related topic is security, in particular the creation of a system of protection balancing the needs of _PAs_ and the risks connected to open data and interoperability. Another issue that needs some care during a _DT_ is the efficient storage of the continuously increasing volume of data. To this end, there is extensive experimental research addressing the compression of RDF data [29, 35, 119, 131, 136], but we found no mention of those techniques being currently used in the context of the _PA_.
* **Technology**. As outlined in Section 3, Cloud Computing is a main component of any _DT_. Moreover, as discussed in Section 5.1, the diffusion of Smart Cities and Digital Twins is very promising. Somewhat unfortunately, the complexities related to their full-scale realization are far from being addressed and resolved. The difficulties of scaling are best exemplified by a study regarding the energy consumption optimization of "only" sixteen buildings in Rome via Digital Twins [6]. A recent review clearly outlines the five major challenges that need to be addressed [227]. Not surprisingly, they range from data collection, storage and analysis to computing power. Although some research directions are also mentioned, they lack specificity and a clear assessment of how the scale of what has to be managed via Digital Twins affects costs: a city, even a major one, may not be able to economically sustain its full-fledged Digital Twin. Concerning privacy and security, the adoption of recognized standards, such as ISO/IEC 27001, is strongly recommended, as indicated in [74, 212]. To this end, it is suggested that a more collaborative approach be taken to support security in developing effective and appropriate solutions to security challenges, including increased efforts on _IoT_ technologies [182], to prevent attacks or minimize their effects. To the best of our knowledge, there are no documented outcomes in the literature on how these recommendations have been understood and pursued by the _PAs_. The actions, however, appear to be in place, as shown in the timelines of the _NRRP_ plans, i.e., [93].
* **People**. One of the major problems that emerges in terms of digital skills is the need for proper educational efforts, such as courses and tutorials, in particular in developing countries [230]. In summary, the development of a digital education ecosystem is one of the major needs for an effective _DT_. By way of example, actions in this direction are planned in Europe [64] and recommendations are given in the U.S. [213]. In terms of co-creation, its widespread adoption within _PAs_ requires relevant structural changes, including a sourcing strategy, a governance structure, and a more flexible digital infrastructure, as reported in [221].
* **Process**. It is clear from Sections 3 and 7 that the way in which projects are designed, managed and implemented must change in order to achieve an effective _DT_. Agile technologies are one technical way of realizing such a change. However, the _DT_ is a dynamic process that may generate the need for "new and higher transformations" that may impact the mission of an organization. For instance, the IT department of a large Finnish municipality transformed its mission from problem-solving to proactive service delivery, partly through a collaborative approach with business units, as reported in [233]. Therefore, Agile technologies may well be the "tools", but a clear plan of what the _DT_ is remains essential. Such a plan and vision may change depending on the scale (local, regional, national), although some coherence among the various levels of the scale must be ensured. To the best of our knowledge, a _DT_ approach that accounts for the granularity and hierarchy of the components involved is not yet available. It should be said that Agile technologies reinforce the need for capacity development of stakeholders [220], i.e., the acquisition of digital skills. Moreover, although more collaborative project management approaches are felt to be necessary to reach interoperability, the lack of agreed processes, the difficulty of interpreting administrative and legislative procedures, and the difficulty of defining authorities and responsibilities are just some of the reasons why interoperability between _PAs_ is not achieved, as outlined in [133]. Again, solutions to this problem are related to the scale at which we look at _DT_: interoperability may be simple to achieve in a restricted and uniform community and much more difficult in larger and more heterogeneous ones.
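As a concrete companion to the _Data_ item above, the snippet below shows how a single _PA_ record could be exposed as Linked Open Government Data using the Python `rdflib` library. The DCAT and Dublin Core vocabularies are standard W3C vocabularies, but the dataset, the `example.org` URIs and the property choices are invented for illustration only.

```python
# Minimal LOGD illustration with rdflib; the data and example.org URIs are
# invented, while DCAT and Dublin Core are real W3C vocabularies.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")
PA = Namespace("http://example.org/pa/")  # placeholder namespace for a PA

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

dataset = PA["dataset/municipal-budget-2023"]
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Municipal budget 2023", lang="en")))
g.add((dataset, DCTERMS.publisher, PA["organisation/finance-department"]))
g.add((dataset, DCAT.keyword, Literal("open government data")))

print(g.serialize(format="turtle"))  # Turtle output, ready for a LOGD portal
```

Publishing such descriptions with shared vocabularies is precisely what makes datasets discoverable and interoperable across different _PAs_.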
## 9 Conclusions
This tutorial presents a guided tour of the main areas of the _DT_ of the public sector from the perspective of a computer scientist. We started from an analysis of the literature on Digital Transformation available in some digital libraries well known to computer scientists. Using the query described in Section 2, we found almost six thousand papers related to Digital Transformation, which were reduced to the papers listed in the references and used as the basis for this tutorial. Our study has identified the critical factors of a successful _DT_, and the challenges in the areas of data, technologies, people and processes which have been faced by some public administrations in different countries. We believe we have given an original synthesis of some problems and their solutions, useful for understanding the main topics underlying the efforts of some public administrations to digitally transform the lives of citizens. Our findings suggest some future directions for research and practice in the four areas mentioned, as discussed in Section 8.
|
2306.07616 | Uniqueness of the $Φ^4_3$ measures on closed Riemannian $3$-manifolds | We constructed in a previous work the $\Phi^4_3$ measures on compact
boundaryless $3$-dimensional Riemannian manifolds as some invariant probability
measures of some Markovian dynamics. We prove in the present work that these
dynamics have unique invariant probability measures. This is done by using an
explicit coupling by change of measure that does not require any a priori
information on the support of the law of the solution to the dynamics. In
addition, the coupling can be used to see that the semigroup generated by the
dynamics satisfies a Harnack-type inequality, which entails that the semigroup
has the strong Feller property. | I. Bailleul | 2023-06-13T08:20:29Z | http://arxiv.org/abs/2306.07616v2 | # Uniqueness of the \(\Phi_{3}^{4}\) measures on closed Riemannian \(3\)-manifolds
###### Abstract.
We constructed in a previous work the \(\Phi_{3}^{4}\) measures on compact boundaryless \(3\)-dimensional Riemannian manifolds as some invariant probability measures of some Markovian dynamics. We prove in the present work that these dynamics have unique invariant probability measures. This is done by using an explicit coupling by change of measure that does not require any a priori information on the support of the law of the solution to the dynamics. The coupling can be used to see that the semigroup generated by the dynamics satisfies a Harnack-type inequality, which entails that the semigroup has the strong Feller property.
## 1. Introduction
Let \(M\) stand for an arbitrary compact boundaryless \(3\)-dimensional Riemannian manifold. Following the flow of research that grew out of the recent development of the domain of singular stochastic partial differential equations (PDEs), we constructed in our previous work [4] the \(\Phi_{3}^{4}\) measure on \(M\) as an invariant probability measure of some Markovian dynamics on the space \(C^{-1/2-\varepsilon}(M)\) of \((-\frac{1}{2}-\varepsilon)\)-Holder/Besov distributions on \(M\), for an arbitrary \(\varepsilon>0\). The dynamics is given by Parisi & Wu's paradigm of stochastic quantization and takes the form
\[\partial_{t}u=(\Delta-1)u-u^{3}+\xi, \tag{1.1}\]
where \(\xi\) stands for a spacetime white noise. When set on a discrete \(3\)-dimensional torus this PDE rewrites as a coupled system of stochastic differential equations whose invariant measure is unique and has a density with respect to the massive discrete Gaussian free field measure proportional to \(\exp\big{(}-\frac{1}{4}\sum_{i}\phi_{i}^{4}\big{)}\). Its continuous counterpart is 'the' \(\Phi_{3}^{4}\) measure; it has density \(\exp\big{(}-\frac{1}{4}\!\int_{M}\phi^{4}\big{)}\) with respect to the massive Gaussian free field measure on \(M\). However this reference measure has support in the spaces \(C^{-1/2-\varepsilon}(M)\), for all \(\varepsilon>0\), and essentially no better. The fourth power of \(\phi\) is thus almost surely ill-defined and a renormalization procedure is needed to construct such a measure from its density. In particular this makes the unique characterization of the \(\Phi_{3}^{4}\) measure a non-trivial question. The stochastic quantization approach to the construction of the \(\Phi_{3}^{4}\) measure postulates that Equation (1.1) is well-posed for all times and that it defines a Markovian dynamics which has a unique invariant probability measure, defined as the \(\Phi_{3}^{4}\) measure. This approach to the construction of the \(\Phi_{3}^{4}\) measure does not avoid the need of a renormalization process. Indeed spacetime white noise has Holder parabolic regularity \(-5/2-\varepsilon\), for all \(\varepsilon>0\), and no better, so a solution to Equation (1.1) has at best parabolic Holder regularity \(-1/2-\varepsilon\), and the quantity \(u^{3}\) is ill-defined. This problem is what makes Equation (1.1) a _singular_ stochastic PDE. Its proper formulation requires a priori the use of an ad hoc setting such as regularity structures [14, 7, 6, 8, 17], paracontrolled calculus [13, 1, 2, 3] or Duch's renormalization group setting [9, 10]. (So far only paracontrolled calculus has been developed in a manifold setting. A forthcoming work of Hairer & Singh will extend the analytic core of regularity structures to that setting.) Either way one gets (in a Euclidean setting) from the use of any of these tools a proper definition of a solution to Equation (1.1) and a local in time well-posedness result that needs to be supplemented by some ad hoc arguments to prove the long time existence of its solution. The Markovian character of the dynamics on \(C^{-1/2-\varepsilon}(M)\) generated Equation (1.1) is inherited from its discrete counterpart. A compactness argument related to the property of 'coming down from infinity' satisfied by the solutions of Equation (1.1) then gives the long-time existence of the local solution and the existence of an invariant measure for the semigroup on \(C^{-1/2-\varepsilon}(M)\) generated by this equation. This was first proved in the setting of the torus by Mourrat & Weber in [20]. The uniqueness of such an invariant measure was proved in the \(3\)-dimensional torus using a robust argument from dynamical systems: If the semigroup generated by the dynamics (1.1) has the strong Feller property and there is in the state space an accessible point then the semigroup has at most one invariant probability measure. Hairer & Mattingly proved in [15] a general result that shows in particular that the \(\Phi_{3}^{4}\) dynamics on the \(3\)-dimensional torus has the strong Feller property. Hairer & Schonbauer proved in [16] a very general and deep result on the support of the law of a certain class of random
models that gives as a by-product the existence of an accessible point for the \(\Phi^{4}_{3}\) dynamics on the \(3\)-dimensional torus. None of these results are available in a manifold setting, and proving them in a manifold setting in the same generality as in [15] and [16] appears to us as a considerable task.
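To fix ideas about the stochastic quantization paradigm recalled above, it may help to keep in mind the following finite-dimensional sketch; the factor \(\sqrt{2}\) below reflects one possible normalisation convention for the noise and may differ from the convention used for \(\xi\) in (1.1). For a smooth potential \(V\) on \(\mathbb{R}^{N}\) the overdamped Langevin dynamics

\[d\phi_{t}=-\nabla V(\phi_{t})\,dt+\sqrt{2}\,dW_{t}\]

has invariant probability measure proportional to \(e^{-V(\phi)}\,d\phi\). Choosing \(V(\phi)=\frac{1}{2}\big{\langle}\phi,(1-\Delta)\phi\big{\rangle}+\frac{1}{4}\sum_{i}\phi_{i}^{4}\), with \(\Delta\) a discrete Laplacian, produces the drift \((\Delta-1)\phi-\phi^{3}\) of (1.1) and an invariant measure whose density with respect to the massive discrete Gaussian free field measure is proportional to \(e^{-\frac{1}{4}\sum_{i}\phi_{i}^{4}}\), as recalled at the beginning of this introduction. This heuristic is only meant as a guide and plays no role in the analysis below.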
We did not use regularity structures, paracontrolled calculus or renormalization group methods in our construction of 'a' \(\Phi^{4}_{3}\) measure on an arbitrary boundaryless \(3\)-dimensional Riemannian manifold \(M\) in [4]; rather we followed Jagannath & Perkowski who noticed in [18] that a clever change of variable makes it possible to rewrite the proper formulation of Equation (1.1) as a PDE with random coefficients
\[(\partial_{t}-\Delta+1)v=B\cdot\nabla v-Av^{3}+Z_{2}v^{2}+Z_{1}v+Z_{0}, \tag{1.2}\]
where \(B\in C\big{(}[0,T],C^{-\eta}(M,TM)\big{)},A\in C\big{(}[0,T],C^{1-\eta}(M)\big{)}\) and
\[Z_{i}\in C\big{(}[0,T],C^{-\frac{1}{2}-\eta}(M)\big{)}\qquad(0\leq i\leq 2)\]
are random variables built from the noise, for any \(0<T<\infty\) and \(\eta>0\). The dynamics (1.2) is completed with the datum of an initial condition in a space of the form \(C^{-1/2-\varepsilon}(M)\), for \(\varepsilon>0\) small enough. Note that no singular product is involved in Equation (1.2); the renormalization problem in (1.1) is involved in the definition and construction of the random variables \(A,Z_{2},Z_{1},Z_{0}\). We were able in [4] to construct these random fields and prove an \(L^{p}\) coming down from infinity result for the solutions to Equation (1.2) that entails the existence of an invariant probability measure for the Markovian dynamics (1.1). The question of uniqueness of such an invariant probability measure was left aside in [4]; this is the point that we address in the present work.
**Theorem 1**: **-** _The semigroup on \(C^{-1/2-\varepsilon}(M)\) generated by the dynamics (1.1) has a unique invariant probability measure._
The change of variable \((u\mapsto v)\) from (1.1) to (1.2) is explicit: Adding for instance a (possibly random adapted) drift \(h\) in the dynamics of \(u\) adds an explicit \(h\)-dependent drift in the dynamics of \(v\). We use Equation (1.2) as a convenient description of the Markovian dynamics of \(u\) to construct a coupling by change of measure between two solutions of Equation (1.1) started from two arbitrary initial conditions in the state space. We obtain some explicit control on the probability of a successful coupling that is independent of the pair of initial conditions. This allows us to infer the uniqueness of an invariant probability measure for the dynamics generated by (1.1). To run this approach we need to strengthen the \(L^{p}\) coming down from infinity result proved in [4] into an \(L^{\infty}\) coming down result for the solution \(v\) to Equation (1.2). This is what Section 2 is about. We adapt there to our setting Moinat & Weber' seminal approach [19] to the coming down phenomenon. This kind of control is actually needed not only for \(v\) but also for the solution \(v_{\ell}\) of an equation similar to Equation (1.1), with an additional drift that depends on a real parameter \(\ell\). Section 4 deals with that perturbed equation. As a matter of fact it turns out to be necessary to also have a quantitative control on the sizes of \(v(t)\) and \(v_{\ell}(t)\) in stronger norms, not just \(L^{\infty}\); such controls are provided in Section 3 and Section 4. Equipped with the quantitative estimates proved in these sections we construct in Section 5 a coupling by change of measure that leads to a proof of uniqueness of an invariant measure for the semigroup generated by (1.1). As a by-product of our analysis we prove in Section 6 a Harnack-type inequality for the semigroup that provides a short proof that this semigroup has the strong Feller property. A reader interested only in the uniqueness result can skip Section 2 and Section 3, look at Theorem 7 in Section 4 and read Section 5.
Our uniqueness result gives a characterization of our \(\Phi^{4}_{3}\) measure as the unique invariant probability measure of a Markovian dynamics on some distribution space over \(M\). As this dynamics depends only on the Riemannian structure of \(M\), the \(\Phi^{4}_{3}\) measure appears as depending only on the isometry class of the Riemannian manifold \(M\).
**Notation** - _For an initial condition \(\phi\) of (1.1) we will denote by \(\phi^{\prime}\) the corresponding initial condition of (1.2) given by the Jagannath & Perkowski transform_
\[\phi=\mathfrak{f}(0)-\overset{\text{\tiny\textcircled{\char 37}}}{\text{\tiny \textcircled{\char 37}}}(0)+e^{-3\mathfrak{P}(0)}\big{(}\phi^{\prime}+v_{\text{ref}}(0) \big{)}, \tag{1.3}\]
_with the notations of [4]. The precise definition of the different terms above plays no role here, so we refer the interested reader to [4]._
**Acknowledgements** - _The content of Section 2 and Section 3 owes a lot to discussions with T.D. To. I also warmly thank V.N. Dang for his indirect input in this work._
## 2 An \(L^{\infty}\) coming down from infinity result
The well-posed character of Equation (1.2) on the whole time interval \([0,\infty)\) was proved in Section 2 of [4]. We also obtained therein an explicit control on the \(L^{p}\) norm of the solution \(v\) to Equation (1.2) that is independent of its initial condition. This phenomenon is called 'coming down from infinity'. It was first proved for a transform of the solutions of Equation (1.1) by Mourrat & Weber in their seminal work [20] on the \(\Phi^{4}\) equation on the \(3\)-dimensional torus. It was later extended to the Euclidean setting of \(\mathbb{R}^{3}\) by Moinat & Weber [19] and Gubinelli & Hofmanova [12] using different methods. We obtain in this section a corresponding uniform \(L^{\infty}\) control on \(v\); this is the content of Theorem 2 below. The \(L^{p}\) coming down from infinity result proved in [4], for \(1\leq p<\infty\), is not sufficient for our needs here.
Throughout this section we will use the shorthand notation \(\|\cdot\|\) for \(\|\cdot\|_{L^{\infty}}\) and \(\|\cdot\|_{D}\) for \(\|\cdot\|_{L^{\infty}(D)}\), for a parabolic domain \(D\). Set
\[\mathcal{T}:=\big{\{}A,B,Z_{2},Z_{1},Z_{0}\big{\}}\]
and
\[\begin{array}{l} m_{A}^{-1}=\varepsilon\Big{(}\frac{1}{2}- \varepsilon\Big{)},\quad m_{B}^{-1}=1-\varepsilon\Big{(}\frac{1}{2}-2 \varepsilon\Big{)}-3\varepsilon,\\ m_{Z_{2}}^{-1}=\frac{1}{2}-\varepsilon^{\prime\prime},\quad m_{Z_{1}}^{-1}= \frac{3}{2}-\varepsilon^{\prime\prime},\quad m_{Z_{0}}^{-1}=\frac{5}{2}- \varepsilon^{\prime\prime},\end{array} \tag{2.1}\]
with
\[\frac{1}{2}+\varepsilon^{\prime\prime}:=(1+\varepsilon)\Big{(}\frac{1}{2}+ \varepsilon\Big{)}.\]
Fix \(T\geq 2\); its precise value does not matter here. For \(s>0\) we define the parabolic domain
\[\mathcal{D}_{s}:=(s^{2},T)\times M\subset\mathbb{R}\times M.\]
For \(\tau\in\mathcal{T}\) we define \([\tau]_{|\tau|}\) as the norm of \(\tau\in C_{T}C^{|\tau|}(M)\), where \(|A|=1-\varepsilon,|B|=-\varepsilon\) and \(|Z_{i}|=-1/2-\varepsilon\). Set
\[A_{+}:=\sup_{\mathcal{D}}A\]
and
\[A_{-}:=\min_{\mathcal{D}}A\]
and
\[c_{A}:=\big{(}1+\max(A_{+},A_{-}^{-1})\big{)}^{2}.\]
Recall that we proved in Theorem 5 of [4] that Equation (1.2) is well-posed globally in time. The following statement provides an \(L^{\infty}\) coming down from infinity result; its proof follows the seminal work [19] of Moinat & Weber.
**Theorem 2**: **-** _There exists a positive constant \(C\) such that any solution of Equation (1.2) satisfies for all \(0<s\leq 1\) the estimate_
\[\|v\|_{\mathcal{D}_{s}}\leq C\max\bigg{\{}\frac{1+(\min_{\mathcal{D}}A)^{-1/2 }}{s},\big{(}(c_{A}[\tau]_{|\tau|})^{m_{\tau}}\big{)}_{\tau\in\mathcal{T}} \bigg{\}}. \tag{2.2}\]
### Tools for the proof
We collect in this section two ingredients that will play a key role in the proof of Theorem 2: A Schauder type estimate and a corollary of the maximum principle. Given \(\alpha\in\mathbb{R}\) denote by \(\mathcal{C}^{\alpha}(\mathbb{R}\times M)\) the parabolic Besov space of regularity exponent \(\alpha\) and integrability exponents \((\infty,\infty)\). The statement of Schauder's estimate involves a regularization procedure
\[(\cdot)_{\delta}:h\in\mathcal{C}^{\alpha}(\mathbb{R}\times M)\mapsto h_{\delta }\in\mathcal{C}^{2}(\mathbb{R}\times M),\qquad(\alpha\in\mathbb{R})\]
indexed by \(0<\delta\leq 1\) adapted to the parabolic setting of \(\mathbb{R}\times M\). The particular choice of regularization is not particularly important. To fix the ideas we can proceed as follows. Denote
by \(p\) the kernel of the semigroup generated by the non-positive elliptic operator \(\partial_{t}^{2}-\Delta^{2}\) on the parabolic space \(\mathbb{R}\times\mathbb{R}^{3}\). Let \((f(z,\cdot))_{z\in\mathbb{R}\times M}\) be a smooth family of diffeomorphisms between a \(z\)-dependent neighbourhood of \(z\in\mathbb{R}\times M\) and \(\mathbb{R}\times\mathbb{R}^{3}\). (We can use the compactness of \(M\), hence the fact that it has a positive injectivity radius, to construct such a map.) We choose it in such a way that \(f\big{(}(t,x),\cdot\big{)}\) has support in \([t-1,t+1]\times M\) and \(f\big{(}(t_{1},x_{1}),(t_{2},x_{2})\big{)}=f\big{(}(0,x_{1}),(s-t_{2}-t-1,x_{2 })\big{)}\). Set
\[\varphi_{\delta}(z,z^{\prime}):=p_{\delta}\big{(}0,f(z,z^{\prime})\big{)}.\]
There is a positive constant \(c\) such that \(\varphi_{\delta}(z,\cdot)\) has support in a \((c\delta)\)-neighbourhood of \(z\), uniformly in \(z\in\mathbb{R}\times M\). This regularization map has the property that
\[\|h\|_{\mathcal{C}^{\alpha}}\simeq\sup_{0<\delta\leq 1}\delta^{-\alpha}\|h_{ \delta}\|\]
for any \(\alpha<0\). For \(0<\alpha<1\) and a domain \(D\subset\mathbb{R}\times M\) we define the Holder seminorm
\[[h]_{\alpha,D}:=\sup_{z\neq z^{\prime}\in D}\frac{|h(z)-h(z^{\prime})|}{|z-z^ {\prime}|^{\alpha}}\]
and for \(1<\alpha<2\) we set
\[[h]_{\alpha,D}:=\sup_{z^{\prime}\neq z\in D}\sup_{\theta\in T_{x}M}\frac{\big{|} h(z^{\prime})-h(z)-d(z^{\prime},z)\,dh(z)(\theta)\big{|}}{d(z^{\prime},z)^{ \alpha}}.\]
We write \([h]_{\alpha}\) for \([h]_{\alpha,\mathbb{R}\times M}\). For a function \(h\in\mathcal{C}^{\beta}(\mathbb{R}\times M)\) with \(0<\beta<1\) we have
\[\|h_{\delta}-h\|_{L^{\infty}}\lesssim\delta^{\beta}[h]_{\beta},\]
for an implicit multiplicative constant independent of \(h\). The following notation is used in the statement and proof of Theorem 3. For \(0<\gamma<1\) and for any continuous function \(W\) on \(\mathcal{D}_{s}^{2}\) and subset \(\mathcal{E}\) of \(\mathcal{D}_{s}\) we set for \(\rho>0\)
\[\|W\|_{(\gamma;\,\rho),\mathcal{E}}:=\sup_{z^{\prime}\neq z\in\mathcal{E},|z^ {\prime}-z|\leq\rho}\frac{|W(z^{\prime},z)|}{|z^{\prime}-z|^{\gamma}};\]
this is a kind of \(\rho\)-local \(\gamma\)-Holder norm of \(W\) on the set \(\mathcal{E}\). Theorem 3 below provides a strong control on a function \(w\) in terms of a control on \(\mathcal{L}w\) and a 'weak' control of \(w\) itself. It is a simplified version of a subtler and finer estimate proved by Moinat & Weber in [19], Lemma 2.11 therein. (The latter was itself a generalization of Proposition 2 of Otto, Sauer, Smith & Weber's work [21].)
**Theorem 3**: - _Let a regularity exponent \(\kappa\in(1,2)\) and a constant \(\delta_{0}>0\) be given. There is a constant \(c_{1}>0\) with the following property. If for all \(0<4\delta\leq\lambda\leq\lambda_{0}\) one has_
\[\delta^{2-\kappa}\|\left(\mathcal{L}w\right)_{\delta}\|_{\mathcal{D}_{s+\lambda -\delta}}\leq C_{\lambda} \tag{2.3}\]
_for some function \(w\) on \(\mathcal{D}_{s}\) then one has_
\[\sup_{0<\lambda\leq\lambda_{0}}\lambda^{\kappa}[w]_{\kappa,\mathcal{D}_{s+ \lambda}}\leq c_{1}\Big{(}\sup_{\lambda\leq\lambda_{0}}\lambda^{\kappa}C_{ \lambda}+\|w\|_{\mathcal{D}_{s}}\Big{)}. \tag{2.4}\]
_Moreover one can associate to any \(0<\delta<\delta_{0}\) a constant \(\rho_{\delta}>0\) such that for \(0<\rho<\rho_{\delta}\) one has_
\[\|dw\|_{\mathcal{D}_{s+\delta}}\lesssim\rho^{\kappa-1}[w]_{\kappa,\mathcal{D}_ {s+\delta}}+\frac{1}{\rho}\,\|w\|_{(0\,;\,\rho),\mathcal{D}_{s+\delta}} \tag{2.5}\]
_and_
\[[dw]_{\kappa-1,\mathcal{D}_{s+\delta}}\lesssim[w]_{\kappa,\mathcal{D}_{s+ \delta}}+\frac{1}{\rho^{\kappa}}\,\|w\|_{(0\,;\,\rho),\mathcal{D}_{s+\delta}}. \tag{2.6}\]
**Sketch of proof -** We only give a sketch of proof of this statement; the details will be given elsewhere. The assumption on \((\mathcal{L}w)_{\delta}\) gives an estimate of the size of \(\mathcal{L}w\) seen as an element of \(\mathcal{C}^{\kappa-2}(\mathcal{D}_{s+\lambda})\). Duhamel's formula gives for \(0<s\leq t\)
\[w(t)=e^{(t-s)(\Delta-1)}(w(s))+\int_{s}^{t}e^{(t-r)(\Delta-1)}(\mathcal{L}w)(r) \,dr.\]
Write \((\mathcal{F}f)(t):=e^{t(\Delta-1)}f\) for the free propagation operator. It is classic that
\[\big{\|}\mathcal{F}w(s)\big{\|}_{\mathcal{C}^{\kappa}(\mathcal{D}_{s+\lambda})} \lesssim\lambda^{-\kappa}\|w(s)\|_{L^{\infty}}.\]
Now let \(\bigcup_{i\in I}U_{i}\) be a finite cover of \(M\) by chart domains and let \((\chi_{i})_{i\in I}\) be an associated partition of unity. Let \(\chi_{i}^{+}\in C_{c}^{\infty}(U_{i})\) be equal to \(1\) on \(\operatorname{supp}(\chi_{i})\) for all \(i\in I\). Writing the operator
\[e^{(t-r)(\Delta-1)}f=\sum_{i\in I}\chi_{i}^{+}e^{(t-r)(\Delta-1)} (\chi_{i}f)+\sum_{i\in I}(1-\chi_{i}^{+})e^{(t-r)(\Delta-1)}(\chi_{i}f)\] \[=:\sum_{i\in I}\mathcal{A}_{i}^{t-r}(f)+\sum_{i\in I}\mathcal{B}_ {i}^{t-r}(f),\]
the second sum involves operators that are supported off-diagonal and are smoothing, uniformly in \(t-r\geq 0\). They satisfy for each \(i\in I\) an estimate of the form
\[\Big{\|}\int_{s}^{\bullet}\mathcal{B}_{i}^{\bullet-r}(\mathcal{L}v)(r)\,dr \Big{\|}_{\mathcal{C}^{2}(\mathcal{D}_{s})}\lesssim\|v\|_{L^{\infty}(\mathcal{ D}_{s})}.\]
The kernels \(\mathcal{K}_{i}(t-r)\) of the operators \(\mathcal{A}_{i}^{t-r}\) have support near the diagonal of \(M\times M\), and in a chart where a generic point \(x\) has coordinates \(\overline{x}\) near \(\mathbf{0}\in\mathbb{R}^{3}\) one has
\[\mathcal{K}_{i}(t-r,x,y)=\chi_{i}^{+}(x)\,(t-r)^{-3/2}K_{i}\Big{(}t-r,\frac{ \overline{x}-\overline{y}}{\sqrt{t-r}},\overline{y}\Big{)}\]
for some function \(\mathcal{K}_{i}\) in the heat calculus, as described e.g. in Grieser's lecture notes [11] - Definition 2.1 therein; the function \(K_{i}\) is, in particular, a smooth function of the square root of its first argument on the semiclosed interval \([0,\infty)\). We now decompose the functions \(K_{i}\) into their 'restrictions' to parabolic annuli using the dyadic decomposition
\[a_{-1}(\overline{s},\overline{z})+\sum_{j\geq 0}a\big{(}2^{2j}\overline{s},2^ {j}\overline{z}\big{)}\]
of a function in \(C_{c}^{\infty}([0,\infty)\times\mathbb{R}^{3})\) equal to \(1\) in a neighbourhood of \(\mathbf{0}\in[0,\infty)\times\mathbb{R}^{3}\), with the support of \(a(s_{1},z_{1})\) included in a parabolic annulus
\[0<c_{1}\leq|s_{1}|+|z_{1}|^{2}\leq c_{2}<\infty.\]
We have an associated decomposition for
\[\mathcal{K}_{i}(t-r,x,y)=\chi_{i}^{+}(x)\chi_{i}(y)a_{-1}(t-r, \overline{x}-\overline{y})\mathcal{K}_{i}(t-r,x,y)\] \[\qquad\qquad+\sum_{j\geq 0}\chi_{i}^{+}(x)\chi_{i}(y)a\big{(}2^{2j }(t-r),2^{j}(\overline{x}-\overline{y})\big{)}\,\frac{1}{(t-r)^{3/2}}\,K_{i} \Big{(}t-r,\frac{\overline{x}-\overline{y}}{\sqrt{t-r}},\overline{y}\Big{)}\] \[=:\chi_{i}^{+}(x)\chi_{i}(y)a_{-1}(t-r,\overline{x}-\overline{y}) \mathcal{K}_{i}(t-r,x,y)+\sum_{j\geq 0}K_{i}^{j}\Big{(}t-r,\overline{x}- \overline{y},\overline{y}\,;\,\overline{x}\Big{)}\]
with
\[\sum_{j\geq 0}K_{i}^{j}=\bigg{(}\sum_{0\leq j\leq n}+\sum_{j>n}\bigg{)}K_{i}^{j }=:K_{i}^{\leq n}+K_{i}^{>n}\]
for any \(n\geq 0\). Denote by \(\operatorname{Op}(K_{i}^{\leq n}),\operatorname{Op}(K_{i}^{>n})\) the integral operators on spacetime associated with the kernels \(K_{i}^{\leq n},K_{i}^{>n}\) of the variables \(\big{(}(t,\overline{x}),(s,\overline{y})\big{)}\). In those terms, one has for each \(n\geq 0\)
\[\mathcal{L}^{-1}(\mathcal{L}v)\simeq\sum_{i\in I}\operatorname{Op}(K_{i}^{\leq n})(\mathcal{L}v)+\operatorname{Op}(K_{i}^{>n})(\mathcal{L}v),\]
up to the regularizing operators associated with \(a_{-1}\). For an integer \(n_{\lambda}\) such that \(2^{n_{\lambda}}\simeq\lambda/2\) one can decompose
\[\big{(}\mathcal{L}^{-1}(\mathcal{L}w)\big{)}_{|\mathcal{D}_{s+ \lambda}} =\big{(}\operatorname{Op}(K_{i}^{\leq n_{\lambda}})(\mathcal{L}w) \big{)}_{|\mathcal{D}_{s+\lambda}}+\big{(}\operatorname{Op}(K_{i}^{>n_{ \lambda}})(\mathcal{L}w)\big{)}_{|\mathcal{D}_{s+\lambda}} \tag{2.7}\] \[=\big{(}\operatorname{Op}(K_{i}^{\leq n_{\lambda}})(\mathcal{L}w) \big{)}_{|\mathcal{D}_{s+\lambda}}+\operatorname{Op}(K_{i}^{>n_{\lambda}}) \big{(}(\mathcal{L}w)_{|\mathcal{D}_{s+\lambda/2}}\big{)}_{|\mathcal{D}_{s+ \lambda}}\]
using the fact that the kernel \(K_{i}^{>n_{\lambda}}\) has support in a parabolic ball of radius approximately equal to \(\lambda/2\). The corresponding operator from \(\mathcal{C}^{\kappa-2}(\mathcal{D}_{s+\lambda/2})\) into \(\mathcal{C}^{\kappa}(\mathcal{D}_{s+\lambda})\) has norm \(O(1)\) uniformly in \(\lambda\), so the corresponding term in (2.7) has size in \(\mathcal{C}^{\kappa}(\mathcal{D}_{s+\lambda})\) of order \(C_{\lambda/2}\) from the assumption (2.3). The operator \(\operatorname{Op}(K_{i}^{\leq n_{\lambda}})\) is regularizing and its norm as an operator from \(L^{\infty}(\mathcal{D}_{s})\) into \(\mathcal{C}^{\kappa}(\mathcal{D}_{s+\lambda})\) is of order \(\lambda^{-\kappa}\). Collecting the above four contributions to the estimate on the size of \(w\in\mathcal{C}^{\kappa}(\mathcal{D}_{s+\lambda})\)
gives for \(0<\lambda\leq 1\)
\[\|w\|_{\mathcal{C}^{\kappa}(\mathcal{D}_{s+\lambda})} \lesssim\lambda^{-\kappa}\|w(s)\|_{L^{\infty}}+\|w\|_{L^{\infty}( \mathcal{D}_{s})}+\lambda^{-\kappa}\|w\|_{L^{\infty}(\mathcal{D}_{s})}+C_{ \lambda/2}\] \[\lesssim\lambda^{-\kappa}\|w\|_{L^{\infty}(\mathcal{D}_{s})}+C_{ \lambda/2}.\]
The estimate (2.4) follows as a consequence. The proofs of the estimates (2.5) and (2.6) on the uniform and \((\kappa-1)\)-Holder norms of \(dw\) are left to the reader. \(\rhd\)
In addition to Theorem 3 we will also use in the proof of Theorem 2 the following statement which provides the form of the bound (2.2).
**Lemma 4**: **-** _Let \(f,g,h\) be continuous functions on \([0,1]\times M\) with \(\min g=:g_{-}>0\). Any continuous function \(w\) on \([0,1]\times M\) such that_
\[(\partial_{t}-\Delta+f\cdot\nabla)w=-gw^{3}+h \tag{2.8}\]
_on \((0,1)\times M\) satisfies for all \(0<t\leq 1\) the estimate_
\[\|w\|_{[t,1]\times M}\leq\max\big{(}(g_{-}\,t)^{-\frac{1}{2}},(\|h\|/g_{-})^{\frac{1}{3}}\big{)}.\]
Indeed one can check that the functions
\[\pm\Big{(}(2g_{-}t)^{-\frac{1}{2}}+(\|h\|/g_{-})^{\frac{1}{3}}\Big{)}\]
are supersolution (\(+\)) and subsolution (-) of Equation (2.8), so the conclusion comes from the comparison/maximum principle. The strong damping effect of the superlinear term \(-Av^{3}\) in Equation (1.2) will only be used in the proof of Theorem 2 by appealing to Lemma 4 on a regularized version of Equation (1.2).
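For the reader's convenience, here is a minimal sketch of the supersolution check (the subsolution case is symmetric); the only ingredient is the elementary inequality \((a+b)^{3}\geq a^{3}+b^{3}\) for \(a,b\geq 0\). Set \(a(t):=(2g_{-}t)^{-\frac{1}{2}}\) and \(b:=(\|h\|/g_{-})^{\frac{1}{3}}\). The function \(w_{+}:=a+b\) is constant in space, so

\[(\partial_{t}-\Delta+f\cdot\nabla)w_{+}=a^{\prime}(t)=-g_{-}\,a(t)^{3},\]

while

\[-gw_{+}^{3}+h\leq-g_{-}(a+b)^{3}+\|h\|\leq-g_{-}a^{3}-g_{-}b^{3}+\|h\|=-g_{-}a^{3}.\]

Hence \((\partial_{t}-\Delta+f\cdot\nabla)w_{+}\geq-gw_{+}^{3}+h\), and since \(w_{+}(t)\to+\infty\) as \(t\downarrow 0\) the comparison principle applies on \((0,1)\times M\) and yields an upper bound of the announced form.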
### Proof of Theorem 2
The main step of the proof of Theorem 2 consists in showing that if one has
\[\|v\|_{\mathcal{D}_{s}}\geq 32\]
and
\[[Z_{1}-1]_{-1/2-\varepsilon}\leq\frac{c}{c_{A}}\,\|v\|_{\mathcal{D}_{s}}^{1/m _{Z_{1}}}\]
and
\[[\tau]_{|\tau|}\leq\frac{c}{c_{A}}\,\|v\|_{\mathcal{D}_{s}}^{1/m_{\tau}} \tag{2.9}\]
for all \(\tau\in\big{\{}A,B,Z_{2},Z_{0}\big{\}}\), for a well-chosen fixed positive constant \(c\) independent of \(v\), then
\[\|v\|_{\mathcal{D}_{s+s_{1}}}\leq\max\bigg{\{}\frac{2\,(\inf_{\mathcal{D}}A)^ {-\frac{1}{2}}}{s_{1}},\frac{\|v\|_{\mathcal{D}_{s}}}{2}\bigg{\}} \tag{2.10}\]
for all \(s_{1}\) with \(s+s_{1}\leq 1/2\) and \(s_{1}\geq 1/\|v\|_{\mathcal{D}_{s}}\). The proof of this inequality is the content of item _(a)_ below. We explain in item _(b)_ how the statement of Theorem 2 follows from that fact.
_(a) The main step: Proof of (2.10)._ The proof of (2.10) proceeds in three steps.
**1.**: We prove that \(\|v\|_{\mathcal{D}_{s}}\) controls the \((3/2-\varepsilon)\) and \((1/2\pm\varepsilon)\) seminorms of \(v\) on the smaller parabolic domains \(\mathcal{D}_{s+\lambda}\). (The Schauder estimate from Theorem 3 is used for that purpose.)
**2.**: We apply Lemma 4 to a regularized version of Equation (1.2) to get with the result of Step 1 a uniform bound on \(v\) on domains of the form \(\mathcal{D}_{s+s_{1}}\). Both \(s_{1}\) and the regularization parameter are free in that step.
**3.**: We tune the regularization parameter to optimize the bound from Step 2.
We use below the shorthand notation \(\mathcal{L}\) for the differential operator \(\partial_{t}-\Delta+1\). The constant \(c\) in (2.9) will be chosen later, just before (2.23), and we write
\[c^{\prime}_{A}\coloneqq\frac{c}{c_{A}}.\]
**Step 1.**_We first derive from Equation (1.2) and the Schauder estimate from Theorem 3 some \(\lambda\)-dependent bound on the \((3/2-\varepsilon)\)-Holder norm of \(v\) on \(\mathcal{D}_{s+\lambda}\) in terms of its uniform norm on
the larger domain \(\mathcal{D}_{s}\)._ (The bound explodes as \(\lambda\) goes to \(0\).) We start from the regularized version
\[(\mathcal{L}v)_{\delta}=-(Av^{3})_{\delta}+(B\cdot\nabla v)_{\delta}+(Z_{2}v^{2} )_{\delta}+(Z_{1}v)_{\delta}+(Z_{0})_{\delta}. \tag{2.11}\]
of Equation (1.2). Note that the conclusion of Theorem 3 makes it possible to use the \((3/2-\varepsilon)\)-Holder seminorm of \(v\) in our estimates of the right-hand side of (2.11), provided it comes with a small factor that can eventually be absorbed in the left-hand side of (2.4). For the \(Z_{0}\) and \(A\) terms in (2.11) we simply bound
\[\|(Z_{0})_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq c^{\prime}_{A}\,\delta^{-\frac{1}{2}-\varepsilon}\|v\|_{ \mathcal{D}_{s}}^{1/m_{Z_{0}}}\] \[\|(Av^{3})_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq\sqrt{c_{A}}\,\|v\|_{\mathcal{D}_{s}}^{3},\quad\text{ since }\|A\|_{\mathcal{D}}\leq\sqrt{c_{A}}.\]
For the \(Z_{2}\) and \(Z_{1}\) terms in (2.11) one gets from the assumption (2.9) for \(\delta\leq\lambda/4\)
\[\|(Z_{2}v^{2})_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq c^{\prime}_{A}\delta^{-\frac{1}{2}-\varepsilon}\|v\|_{ \mathcal{D}_{s}}^{2+1/m_{Z_{2}}}+2c^{\prime}_{A}\delta^{-2\varepsilon}[v]_{(1 /2-\varepsilon\,;\,\delta),\mathcal{D}_{s+\lambda/2}}\|v\|_{\mathcal{D}_{s}} ^{1/m_{Z_{2}}+1},\] \[\|(Z_{1}v)_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq c^{\prime}_{A}\delta^{-\frac{1}{2}-\varepsilon}\|v\|_{ \mathcal{D}_{s}}^{1+1/m_{Z_{1}}}+c^{\prime}_{A}\delta^{-2\varepsilon}[v]_{(1 /2-\varepsilon\,;\,\delta),\mathcal{D}_{s+\lambda/2}}\|v\|_{\mathcal{D}_{s}} ^{1/m_{Z_{1}}}.\]
It turns out to be useful to introduce the commutator \(B_{\delta}\cdot\nabla v-(B\cdot\nabla v)_{\delta}\) to estimate \((B\cdot\nabla v)_{\delta}\) itself as we get from Lemma 5 the bound
\[\|(B\cdot\nabla v)_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq\|B_{\delta}\cdot\nabla v-(B\cdot\nabla v)_{\delta}\|_{ \mathcal{D}_{s+\lambda-\delta}}+\|B_{\delta}\cdot\nabla v\|_{\mathcal{D}_{s+ \lambda-\delta}}\] \[\lesssim\delta^{1/2-2\varepsilon}[B]_{-\varepsilon}[\nabla v]_{ \frac{1}{2}-\varepsilon,\mathcal{D}_{s+\lambda/2}}+\delta^{-\varepsilon}[B]_{ -\varepsilon}\|\nabla v\|_{\mathcal{D}_{s+\lambda/2}}.\]
Write \(V(z^{\prime},z)\) for \(v(z^{\prime})-v(z)\) and note that for all \(\lambda\) sufficiently small
\[\|\nabla v\|_{\mathcal{D}_{s+\lambda/2}} \lesssim\lambda^{\frac{1}{2}-\varepsilon}[v]_{3/2-\varepsilon, \mathcal{D}_{s+\lambda/2}}+\lambda^{-1}\|V\|_{(0;\lambda),\mathcal{D}_{s+ \lambda/2}}, \tag{2.12}\] \[[\nabla v]_{1/2-\varepsilon,\mathcal{D}_{s+\lambda/2}} \lesssim[v]_{3/2-\varepsilon,\mathcal{D}_{s+\lambda/2}}+\lambda^{- \frac{3}{2}+\varepsilon}\|V\|_{(0;\lambda),\mathcal{D}_{s+\lambda/2}}.\]
Combining these estimates all together yields the bound
\[\delta^{1/2+\varepsilon}\|(\mathcal{L}v)_{\delta}\|_{\mathcal{D}_{s+\lambda- \delta}}\lesssim C_{\lambda}\]
where
\[C_{\lambda} =\lambda^{\frac{1}{2}+\varepsilon}\sqrt{c_{A}}\,\|v\|_{\mathcal{D }_{s}}^{3}+c^{\prime}_{A}\lambda\|v\|_{\mathcal{D}_{s}}^{1/m_{B}}[v]_{3/2- \varepsilon,\mathcal{D}_{s+\lambda/2}}+c^{\prime}_{A}\lambda^{-\frac{1}{2}}\|v \|_{\mathcal{D}_{s}}^{1+1/m_{B}}\] \[\quad+c^{\prime}_{A}\|v\|_{\mathcal{D}_{s}}^{2+1/m_{Z_{2}}}+c^{ \prime}_{A}\lambda^{\frac{1}{2}-\varepsilon}[v]_{(1/2-\varepsilon\,;\,\delta),\mathcal{D}_{s+\lambda/2}}\|v\|_{\mathcal{D}_{s}}^{1/m_{Z_{2}}+1}\] \[\quad+c^{\prime}_{A}\|v\|_{\mathcal{D}_{s}}^{1+1/m_{Z_{1}}}+c^{ \prime}_{A}\lambda^{\frac{1}{2}-\varepsilon}[v]_{(1/2-\varepsilon\,;\,\delta),\mathcal{D}_{s+\lambda/2}}\|v\|_{\mathcal{D}_{s}}^{1/m_{Z_{1}}}+c^{\prime}_{A} \|v\|_{\mathcal{D}_{s}}^{1/m_{Z_{0}}}.\]
Note that \(\lambda/2\) is involved in \(C_{\lambda}\) rather than \(\lambda\) itself. The Schauder estimate from Theorem 3 gives us
\[\sup_{\lambda\leq\frac{\lambda_{0}}{2}}\lambda^{\frac{3}{2}-\varepsilon}[v]_{\frac{1}{2}-\varepsilon,\mathcal{D}_{s+\lambda}}\leq\sup_{\lambda\leq\lambda_{0}}\lambda^{\frac{3}{2}-\varepsilon}[v]_{3/2-\varepsilon,\mathcal{D}_{s+\lambda}}\lesssim\sup_{\lambda\leq\lambda_{0}}\lambda^{\frac{3}{2}-\varepsilon}C_{\lambda}+\|v\|_{\mathcal{D}_{s}}, \tag{2.13}\]
where the same domains are involved in the supremum on both sides. Even though \(\lambda^{3/2-\varepsilon}C_{\lambda}\) depends on the \(3/2-\varepsilon\) seminorm of \(v\) on \(\mathcal{D}_{s+\lambda/2}\) it comes with a factor that will eventually be small for the choice of \(\lambda_{0}\) made below. The constant \(\lambda^{3/2-\varepsilon}C_{\lambda}\) still depends on some \(1/2-\varepsilon\) seminorm of \(v\) on \(\mathcal{D}_{s+\lambda/2}\). To eventually have a bound on the supremum in the right hand side of (2.13) that only involves \(\|v\|_{\mathcal{D}_{s}}\) we use the estimates (2.5) and (2.6)
\[\|\nabla v\|_{\mathcal{D}_{s+\lambda}} \lesssim\lambda^{\frac{1}{2}-\varepsilon}[v]_{3/2-\varepsilon, \mathcal{D}_{s+\lambda}}+\lambda^{-1}\|V\|_{(0;\lambda),\mathcal{D}_{s+\lambda}} \tag{2.14}\] \[[v]_{1/2-\varepsilon,\mathcal{D}_{s+\lambda}} \leq\lambda^{1-\varepsilon}[v]_{3/2-\varepsilon,\mathcal{D}_{s+ \lambda}}+\lambda^{\frac{1}{2}+\varepsilon}\|\nabla v\|_{\mathcal{D}_{s+ \lambda}}\]
to see that
\[[v]_{1/2-\varepsilon,\mathcal{D}_{s+\lambda}} \lesssim\lambda^{1-\varepsilon}[v]_{3/2-\varepsilon,\mathcal{D}_{s+ \lambda}}+\lambda^{-\frac{1}{2}+\varepsilon}\|v\|_{\mathcal{D}_{s+\lambda}}.\]
Overall we have a \([v]_{3/2-\varepsilon,\mathcal{D}_{s+\lambda}}\) term in an upper bound for \(\lambda^{3/2-\varepsilon}C_{\lambda}\) that comes with a factor
\[\lambda^{3-3\varepsilon}\|v\|_{\mathcal{D}_{s}}^{1+1/m_{Z_{2}}}.\]
The choice
\[\lambda_{0}\leq\|v\|_{\mathcal{D}_{s}}^{-1}\]
ensures that this term can be absorbed in the left hand side of (2.13) and we get
\[\sup_{\lambda\leq\lambda_{0}/2}\lambda^{\frac{3}{2}-\varepsilon}[v]_{3/2- \varepsilon,\mathcal{D}_{s+\lambda}}\lesssim\|v\|_{\mathcal{D}_{s}}. \tag{2.15}\]
(We used here the inequality \(\varepsilon^{\prime\prime}>\varepsilon\).) Then it follows from (2.14) that
\[\sup_{\lambda\leq\lambda_{0}/2}\lambda^{\frac{1}{2}-\varepsilon}[v]_{\frac{1} {2}-\varepsilon,\mathcal{D}_{s+\lambda}}\lesssim\|v\|_{\mathcal{D}_{s}}. \tag{2.16}\]
A similar estimate holds for \(\sup_{\lambda\leq\lambda_{0}/2}\lambda^{\frac{1}{2}+\varepsilon}[v]_{\frac{1} {2}+\varepsilon,\mathcal{D}_{s+\lambda}}\); we will use it below in Step 2.
We now state an elementary result that will be useful in the next step. For \(z\in\mathbb{R}\times M\) we denote by \(B(z,\delta)\subset\mathbb{R}\times M\) the parabolic ball of center \(z\) and radius \(\delta\).
**Lemma 5 -**_Pick \(\alpha<0\) and \(\beta\in(0,1)\) such that \(\alpha+\beta>0\). Let \(f\in C^{\alpha}(B(z,\delta))\) and \(g\in C^{\beta}(B(z,\delta))\). Then we have_
\[\big{|}\big{(}(fg)_{\delta}-f_{\delta}g\big{)}(z)\big{|}\lesssim\delta^{ \alpha+\beta}[f]_{\alpha,B(z,\delta)}\,[g]_{\beta,B(z,\delta)}. \tag{2.17}\]
_Moreover if \(f\in L^{\infty}(B(z,\delta))\) we have_
\[\big{|}(fg)_{\delta}(z)-(f_{\delta}g)(z)\big{|}\lesssim\delta^{\beta}\|f\|_{ \infty}\,[g]_{\beta,B(z,\delta)}. \tag{2.18}\]
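As a brief indication of where these bounds come from, and writing \(h_{\delta}(z)=\int\varphi_{\delta}(z,z^{\prime})h(z^{\prime})\,dz^{\prime}\) for the regularization introduced above, one has the identity

\[(fg)_{\delta}(z)-(f_{\delta}g)(z)=\int\varphi_{\delta}(z,z^{\prime})\,f(z^{\prime})\,\big{(}g(z^{\prime})-g(z)\big{)}\,dz^{\prime}.\]

Bounding \(|g(z^{\prime})-g(z)|\lesssim\delta^{\beta}[g]_{\beta,B(z,\delta)}\) on the support of \(\varphi_{\delta}(z,\cdot)\) and using that \(\sup_{z}\int|\varphi_{\delta}(z,z^{\prime})|\,dz^{\prime}\) is bounded uniformly in \(\delta\) gives (2.18); this is only a sketch under that (standard) uniform mass bound for the regularizing family, and (2.17) requires in addition a duality argument pairing the distribution \(f\) with the test function \(\varphi_{\delta}(z,\cdot)\big{(}g(\cdot)-g(z)\big{)}\).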
**Step 2.**_We regularize Equation (1.2) and apply the maximum principle of Lemma 4 to its solution._ The regularized version of Equation (1.2) takes the form
\[(\partial_{t}-\Delta+B_{\delta}\cdot\nabla)v_{\delta} =-Av_{\delta}^{3}+[\mathcal{L},(\cdot)_{\delta}](v)+\big{(}B_{ \delta}\cdot\nabla v_{\delta}-(B\cdot\nabla v)_{\delta}\big{)}+\big{(}Av_{ \delta}^{3}-(Av^{3})_{\delta}\big{)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\big{(}Z_{2}v ^{2}\big{)}_{\delta}+\big{(}(Z_{1}-1)v\big{)}_{\delta}+(Z_{0})_{\delta}.\] \[=:-Av_{\delta}^{3}+h_{\delta}.\]
(The \(-1\) in the linear term in \(v\) comes from the fact that \(\mathcal{L}=\partial_{t}-\Delta+1\) while Lemma 4 involves the operator \(\partial_{t}-\Delta\).) For all \(s_{1}>0\) such that \(s+s_{1}<\frac{1}{2}\), the pointwise estimate from Lemma 4 gives here for \(\|v_{\delta}\|_{\mathcal{D}_{s+s_{1}}}\) the upper bound
\[\max\Bigl{\{}\big{(}\inf_{\mathcal{D}_{0}}A\big{)}^{-\frac{1}{2}}\,\frac{2}{s_{1}},\big{(}\inf_{\mathcal{D}_{0}}A\big{)}^{-\frac{1}{3}}\big{\|}[\mathcal{L},(\cdot)_{\delta}](v)\big{\|}_{\mathcal{D}_{s+s_{1}/2}}^{\frac{1}{3}},\big{(}\inf_{\mathcal{D}_{0}}A\big{)}^{-\frac{1}{3}}\|Av_{\delta}^{3}-(Av^{3})_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}^{\frac{1}{3}},\] \[\qquad\qquad\qquad\big{(}\inf_{\mathcal{D}_{0}}A\big{)}^{-\frac{1}{3}}\|B_{\delta}\cdot\nabla v_{\delta}-(B\cdot\nabla v)_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}^{\frac{1}{3}},\big{(}\inf_{\mathcal{D}_{0}}A\big{)}^{-\frac{1}{3}}\|(Z_{2}v^{2})_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}^{\frac{1}{3}}, \tag{2.19}\] \[\qquad\qquad\qquad\big{(}\inf_{\mathcal{D}_{0}}A\big{)}^{-\frac{1}{3}}\|(Z_{1}v)_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}^{\frac{1}{3}},\big{(}\inf_{\mathcal{D}_{0}}A\big{)}^{-\frac{1}{3}}\|(Z_{0})_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}^{\frac{1}{3}}\Bigr{\}},\]
and one gets a bound on \(v\) writing for \(0<\delta\leq s_{1}\)
\[\|v\|_{\mathcal{D}_{s+s_{1}}}\leq\|v_{\delta}\|_{\mathcal{D}_{s+s_{1}}}+\delta ^{\frac{1}{2}-\varepsilon}[v]_{(\frac{1}{2}-\varepsilon;2\delta),\mathcal{D}_{ s+s_{1}-\delta}}\lesssim\|v_{\delta}\|_{\mathcal{D}_{s+s_{1}}}+\Big{(}\frac{ \delta}{s_{1}}\Big{)}^{\frac{1}{2}-\varepsilon}\|v\|_{\mathcal{D}_{s}}, \tag{2.20}\]
as a consequence of (2.18) for the first inequality and (2.16) for the second inequality. Let us introduce a positive parameter \(k\geq 4\) that will be chosen below. Leaving aside the exponent \(1/3\), the different terms in the above maximum can be bounded as follows for \(0<\delta\leq s_{1}/k\)
\[\big{\|}[\mathcal{L},(\cdot)_{\delta}](v)\big{\|}_{\mathcal{D}_{s+ s_{1}/2}}\lesssim\delta^{-1}\|v\|_{\mathcal{D}_{s}},\] \[\big{\|}Av_{\delta}^{3}-(Av^{3})_{\delta}\big{\|}_{\mathcal{D}_{s+ s_{1}/2}}\lesssim\Big{(}\delta^{\frac{1}{2}}\|A\|_{1/2,\mathcal{D}_{s}}+k^{- \frac{1}{2}+\varepsilon}\|A\|_{\mathcal{D}_{s}}\Big{)}\|v\|_{\mathcal{D}_{s}}^{3}, \tag{2.21}\] \[\big{\|}B_{\delta}\cdot\nabla v_{\delta}-(B\cdot\nabla v)_{\delta} \big{\|}_{\mathcal{D}_{s+s_{1}/2}}\lesssim[B]_{-\varepsilon}\|v\|_{\mathcal{D}_{s} }k^{-\frac{1}{2}+2\varepsilon}s_{1}^{-1+3\varepsilon},\]
and
\[\|(Z_{2}v^{2})_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}\lesssim[Z_{2}]_{ -1/2-\varepsilon}\|v\|_{\mathcal{D}_{s}}^{2}\delta^{-\frac{1}{2}-\varepsilon},\] \[\|(Z_{1}v)_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}\lesssim\delta^{- \frac{1}{2}-\varepsilon}[Z_{1}]_{-1/2-\varepsilon}\|v\|_{\mathcal{D}_{s}}, \tag{2.22}\] \[\|(Z_{0})_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}\lesssim\delta^{- \frac{1}{2}-\varepsilon}[Z_{0}]_{-1/2-\varepsilon}.\]
Indeed for the \(A\) term one has
\[(Av_{\delta}^{3})(z)-(Av^{3})_{\delta}(z)=\int\varphi_{\delta}(z,z^{\prime})v( z^{\prime})\Big{\{}\big{(}A(z)-A(z^{\prime})\big{)}v_{\delta}^{2}(z)+A(z^{\prime}) \big{(}v_{\delta}^{2}(z)-v^{2}(z^{\prime})\big{)}\Big{\}}dz^{\prime}.\]
Denote by \(w_{A}(\cdot,D)\) the modulus of continuity of \(A\) on a domain \(D\). Above, the term with the increment of \(A\) gives a contribution bounded above by \(w_{A}(\delta,\mathcal{D}_{s+s_{1}/4})\|v\|_{\mathcal{D}_{s}}^{3}\). As \(0<\delta\leq s_{1}/k\) one has for \(z\in\mathcal{D}_{s+s_{1}/2}\) and \(z^{\prime}\in\mathcal{D}_{s+s_{1}/4}\)
\[\left|v_{\delta}^{2}(z)-v^{2}(z^{\prime})\right|\lesssim\delta^{1/2-\varepsilon }[v]_{1/2-\varepsilon,\mathcal{D}_{s+s_{1}/4}}\|v\|_{\mathcal{D}_{s}},\]
and we get from the fact that \(A\) is (better than) \(1/2\)-Hölder and the bound (2.16) the estimate
\[\left|(Av_{\delta}^{3})(z)-(Av^{3})_{\delta}(z)\right|\lesssim\left(\delta^{ \frac{1}{2}}\|A\|_{1/2,\mathcal{D}_{s}}+\Big{(}\frac{\delta}{s_{1}}\Big{)}^{ \frac{1}{2}-\varepsilon}\|A\|_{\mathcal{D}_{s}}\Big{)}\|v\|_{\mathcal{D}_{s}} ^{3}.\]
The condition \(0<\delta\leq s_{1}/k\) gives the \(A\) estimate from (2.21).
For the \(B\) term write
\[B_{\delta}\cdot\nabla v_{\delta}-(B\cdot\nabla v)_{\delta}=\big{(}B_{\delta}\cdot\nabla v-(B\cdot\nabla v)_{\delta}\big{)}+B_{\delta}\cdot\nabla(v_{\delta}-v).\]
We use Lemma 5 to estimate the term in the big parenthesis on the right hand side. This gives
\[\big{\|}B_{\delta}\cdot\nabla v_{\delta}-(B\cdot\nabla v)_{\delta}\big{\|}_{\mathcal{D}_{s+s_{1}/2}}\lesssim\delta^{\frac{1}{2}-2\varepsilon}[B]_{-\varepsilon}[v]_{3/2-\varepsilon,\mathcal{D}_{s+s_{1}/4}},\]
and using the estimate (2.15) we get
\[\big{\|}B_{\delta}\cdot\nabla v_{\delta}-(B\cdot\nabla v)_{\delta}\big{\|}_{\mathcal{D}_{s+s_{1}/2}}\lesssim[B]_{-\varepsilon}\|v\|_{\mathcal{D}_{s}}\Big{(}\frac{\delta}{s_{1}}\Big{)}^{\frac{1}{2}-2\varepsilon}s_{1}^{-1+3\varepsilon}\lesssim[B]_{-\varepsilon}\|v\|_{\mathcal{D}_{s}}k^{-\frac{1}{2}+2\varepsilon}s_{1}^{-1+3\varepsilon}.\]
We also use Lemma 5 to deal with the \(Z_{2}\) term and write
\[(Z_{2}v^{2})_{\delta} =(Z_{2})_{\delta}v^{2}+O\big{(}\delta^{\varepsilon}[Z_{2}][v^{2} ]_{1/2+2\varepsilon,\mathcal{D}_{s+s_{1}/4}}\big{)}\] \[=O\big{(}\delta^{-\frac{1}{2}-\varepsilon}[Z_{2}]\|v\|_{\mathcal{ D}_{s}}^{2}\big{)}+O\big{(}\delta^{\varepsilon}[Z_{2}]\|v\|_{\mathcal{D}_{s}} [v]_{1/2+2\varepsilon,\mathcal{D}_{s+s_{1}/4}}\big{)},\]
and using the variation of (2.16) we obtain the estimate on the \(Z_{2}\) term in (2.22) since \(\delta\leq s_{1}\). Similarly one has
\[(Z_{1}v)_{\delta} =(Z_{1})_{\delta}v+O\big{(}\delta^{\varepsilon}[Z_{1}][v]_{2\varepsilon,\mathcal{D}_{s+s_{1}/4}}\big{)}\] \[=O\big{(}\delta^{-\frac{1}{2}-\varepsilon}[Z_{1}]\|v\|_{\mathcal{D}_{s}}\big{)}+O\big{(}\delta^{\varepsilon}[Z_{1}]s_{1}^{-2\varepsilon}\|v\|_{\mathcal{D}_{s}}\big{)}=O\big{(}\delta^{-\frac{1}{2}-\varepsilon}[Z_{1}]\|v\|_{\mathcal{D}_{s}}\big{)}.\]
**Step 3. Choice of scales \(\lambda_{0}\) and \(\delta\).** We choose
\[s_{1}\geq\lambda_{0}=\|v\|_{\mathcal{D}_{s}}^{-1},\qquad\delta=c_{1}\|v\|_{ \mathcal{D}_{s}}^{-1-\varepsilon}\]
for a positive constant \(c_{1}\geq 1\) to be chosen below, so
\[k=c_{1}^{-1}\|v\|_{\mathcal{D}_{s}}^{\varepsilon}\]
does the job: indeed \(s_{1}/k\geq\lambda_{0}/k=c_{1}\|v\|_{\mathcal{D}_{s}}^{-1-\varepsilon}=\delta\), so the constraint \(0<\delta\leq s_{1}/k\) of Step 2 is satisfied. One then has
\[\big{\|}[\mathcal{L},(\cdot)_{\delta}](v)\big{\|}_{\mathcal{D}_{s+s _{1}/2}}\lesssim c_{1}^{-1}\|v\|_{\mathcal{D}_{s}}^{2+\varepsilon},\] \[\big{\|}Av_{\delta}^{3}-(Av^{3})_{\delta}\big{\|}_{\mathcal{D}_{s +s_{1}/2}}\lesssim c_{1}^{-\frac{1}{2}}\|A\|_{1/2,\mathcal{D}_{s}}\|v\|_{ \mathcal{D}_{s}}^{3-\varepsilon(\frac{1}{2}-\varepsilon)},\] \[\big{\|}B_{\delta}\cdot\nabla v_{\delta}-(B\cdot\nabla v)_{\delta }\big{\|}_{\mathcal{D}_{s+s_{1}/2}}\lesssim c_{1}^{\frac{1}{2}}[B]_{- \varepsilon}\|v\|_{\mathcal{D}_{s}}^{2-\varepsilon(\frac{1}{2}-2\varepsilon )-3\varepsilon},\]
and
\[\|(Z_{2}v^{2})_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}\lesssim c_{1}^{ -\frac{1}{2}-\varepsilon}[Z_{2}]_{-1/2-\varepsilon}\|v\|_{\mathcal{D}_{s}}^{2+(1+ \varepsilon)(\frac{1}{2}+\varepsilon)},\] \[\|(Z_{1})_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}\lesssim c_{1}^{- \frac{1}{2}-\varepsilon}[Z_{1}]_{-1/2-\varepsilon}\|v\|_{\mathcal{D}_{s}}^{1+(1+ \varepsilon)(\frac{1}{2}+\varepsilon)},\] \[\|(Z_{0})_{\delta}\|_{\mathcal{D}_{s+s_{1}/2}}\lesssim c_{1}^{- \frac{1}{2}-\varepsilon}[Z_{0}]_{-1/2-\varepsilon}\|v\|_{\mathcal{D}_{s}}^{(1+ \varepsilon)(\frac{1}{2}+\varepsilon)}.\]
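As a sanity check on the exponents — a reading aid only — the bound on \([\mathcal{L},(\cdot)_{\delta}](v)\) above comes from inserting the choice of \(\delta\) into (2.21), since
\[\delta^{-1}\|v\|_{\mathcal{D}_{s}}=c_{1}^{-1}\|v\|_{\mathcal{D}_{s}}^{1+\varepsilon}\,\|v\|_{\mathcal{D}_{s}}=c_{1}^{-1}\|v\|_{\mathcal{D}_{s}}^{2+\varepsilon},\qquad\delta^{-\frac{1}{2}-\varepsilon}=c_{1}^{-\frac{1}{2}-\varepsilon}\|v\|_{\mathcal{D}_{s}}^{(1+\varepsilon)(\frac{1}{2}+\varepsilon)},\]
and the \(Z_{i}\) exponents above follow in the same way from (2.22).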
We then choose \(c_{1}\) so that the estimate on \([\mathcal{L},(\cdot)_{\delta}](v)\) reads
\[\big{\|}[\mathcal{L},(\cdot)_{\delta}](v)\big{\|}_{\mathcal{D}_{s+s_{1}/2}} \leq 2^{-3}\,\|v\|_{\mathcal{D}_{s}}^{2+\varepsilon},\]
and then one chooses \(c\) in (2.9) so that the \(A,B,Z_{i}\) terms above are all smaller than \(2^{-3}(\inf_{\mathcal{D}}A)\|v\|_{\mathcal{D}_{s}}^{3}\). The choice of exponents \(m_{\tau}\) in (1) was done precisely for that purpose. The estimates (2.20) and
(2.19) together give
\[\|v\|_{{\cal D}_{s+s_{1}}}\leq\max\left\{\frac{2\left(\inf_{\cal D}A\right)^{-1/2}}{s_{1}}\,,\,\frac{1}{2}\,\|v\|_{{\cal D}_{s}}\right\}. \tag{2.23}\]
_(b) Proof of Theorem 2 from (2.23)._ We now use an argument due to Moinat & Weber [19] to derive the \(L^{\infty}\) bound (2.2) from the estimate (2.23). We proceed differently depending on whether \(\min_{{\cal D}_{s}}A\leq 1\) or \(\min_{{\cal D}_{s}}A>1\). For \(\min_{{\cal D}_{s}}A\leq 1\) we have for
\[\widetilde{v}:=\left(\min_{{\cal D}_{s}}A\right)^{\frac{1}{2}}v\]
the estimate
\[\|\widetilde{v}\|_{{\cal D}_{s+s^{\prime}}}\leq\max\left\{\frac{2}{s^{\prime}},\frac{1}{2}\,\|\widetilde{v}\|_{{\cal D}_{s}}\right\}. \tag{2.24}\]
We set \(s_{1}:=4\,\|\widetilde{v}\|_{{\cal D}_{s}}^{-1}\) and define the times \(s=s_{0}<s+s_{1}<\ldots<s+s_{N}=\frac{1}{2}\) from the relation
\[s_{n+1}-s_{n}=4\,\|\widetilde{v}\|_{{\cal D}_{s+s_{n}}}^{-1}. \tag{2.25}\]
The sequence terminates once \(s+s_{n+1}\geq 1/2\), in which case we set \(s_{n+1}=s_{N}=1/2\), or once the assumption (2.9) fails for \({\cal D}_{s+s_{n+1}}\). Since \(4\|\widetilde{v}\|_{{\cal D}_{s+s_{n}}}^{-1}\) is increasing in \(n\) the sequence terminates after finitely many steps and we have
\[\|\widetilde{v}\|_{{\cal D}_{s+s_{n}}}\leq\frac{1}{2}\|\widetilde{v}\|_{{\cal D}_{s+s_{n-1}}}. \tag{2.26}\]
It follows from (2.25) and (2.26) that for \(1\leq n\leq N-1\)
\[s+s_{n}\lesssim\|\widetilde{v}\|_{{\cal D}_{s+s_{n}}}^{-1}. \tag{2.27}\]
Therefore the bound in Theorem 2 holds at the time \(s+s_{n}\), for \(n\leq N-1\). For the last domain \({\cal D}_{s_{N}}\), if the assumption (2.9) fails for \({\cal D}_{s_{N}}\), then we get the bound immediately. Otherwise we have either \(s+s_{N-1}\geq 1/4\) or \(s_{N}-s_{N-1}\geq 1/4\). In the first case, we use the estimate (2.27) for \(s+s_{N-1}\)
\[s+s_{N}=\frac{1}{2}\leq 2(s+s_{N-1})\lesssim\|\widetilde{v}\|_{{\cal D}_{s+s_{N-1 }}}^{-1}. \tag{2.28}\]
In the second case, by (2.25), we have
\[s+s_{N}=\frac{1}{2}\leq 2(s_{N}-s_{N-1})=4\,\|\widetilde{v}\|_{{\cal D}_{s+s_{N- 1}}}^{-1}. \tag{2.29}\]
Finally, for any time \(r\in(s+s_{n},s+s_{n+1})\) with \(0\leq n\leq N-2\), using (2.25) and (2.27) we infer that
\[r\leq s+s_{n+1}=s+s_{n}+(s_{n+1}-s_{n})\lesssim\|\widetilde{v}\|_{{\cal D}_{s +s_{n}}}^{-1}\lesssim\|\widetilde{v}\|_{{\cal D}_{r}}^{-1},\]
and for \(r\in(s+s_{N-1},s+s_{N})\), (2.28) and (2.29) imply that
\[r\leq s+s_{N}\lesssim\|\widetilde{v}\|_{{\cal D}_{s+s_{N-1}}}^{-1}\leq\| \widetilde{v}\|_{{\cal D}_{r}}^{-1}.\]
This gives the desired estimate
\[\|v\|_{{\cal D}_{s}}\leq\frac{2\left(\min_{{\cal D}}A\right)^{-\frac{1}{2}}}{ s}.\]
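Spelling out the last step — a reading aid only — the bound obtained for \(\widetilde{v}\) at any admissible time \(r\) transfers to \(v\) by unwinding the definition of \(\widetilde{v}\):
\[\|v\|_{{\cal D}_{r}}=\big{(}\min_{{\cal D}_{s}}A\big{)}^{-\frac{1}{2}}\,\|\widetilde{v}\|_{{\cal D}_{r}}\lesssim\frac{\big{(}\min_{{\cal D}_{s}}A\big{)}^{-\frac{1}{2}}}{r}.\]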
In the case where \(\min_{{\cal D}}A>1\) we infer from (2.23) that
\[\|v\|_{{\cal D}_{s+s_{1}}}\leq\max\left\{\frac{2}{s_{1}},\frac{1}{2}\|v\|_{{ \cal D}_{s}}\right\}.\]
We get the estimate \(\|v\|_{{\cal D}_{s}}\leq 2/s\) by repeating the preceding argument.
## 3 Controlling stronger norms of \(v\)
We quantify part of Step 1 in the proof of Theorem 2 to get the following result. It requires that \(0<\varepsilon\leq 1/4\). The variant of this statement proved in Section 4 will play a key role in our proof of the uniqueness of the \(\Phi_{3}^{4}\) measure in Section 5.
**Theorem 6**: _There are two functions \(C_{\emptyset}\) and \(C^{\prime}_{\emptyset}\) of the natural sizes of \(A,B,Z_{1},Z_{2}\) that do not depend on \(Z_{0}\) such that setting_
\[\lambda_{0}=C_{\emptyset}\wedge\big{(}\|Z_{2}\|\|v\|_{\mathcal{D}_{s}}\big{)}^{- \frac{2}{3-4\varepsilon}}\]
_one has_
\[[v]_{3/2-\varepsilon,\mathcal{D}_{s+\lambda_{0}}}\lesssim C^{\prime}_{ \emptyset}\big{(}1\vee\|v\|_{\mathcal{D}_{s}}\big{)}^{3}+\|Z_{0}\|.\]
The precise formulas for \(C_{\emptyset}\) and \(C^{\prime}_{\emptyset}\) are given in the proof of Theorem 6 and have no importance in this work.
Proof.: The proof of this statement is essentially a variation on the content of Step 1 in the proof of Theorem 2. We repeat it here for the reader's convenience. We start with the equation
\[(\mathcal{L}v)_{\delta}=-(Av^{3})_{\delta}+(B\cdot\nabla v)_{\delta}+(Z_{2}v^ {2})_{\delta}+(Z_{1}v)_{\delta}+(Z_{0})_{\delta}. \tag{3.1}\]
Note that the conclusion of Proposition 3 makes it possible to use the \((3/2-\varepsilon)\)-Hölder seminorm of \(v\) in our estimates of the right-hand side of (3.1) provided it comes with a small factor that can eventually be absorbed in the left-hand side of (2.4). We work with \(0<\delta\leq\lambda/4\). For the \(Z_{0}\) and \(A\) terms in (3.1) we have
\[\|(Z_{0})_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq\delta^{-\frac{1}{2}-\varepsilon}\|Z_{0}\|\] \[\|(Av^{3})_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq\|A\|\|v\|_{\mathcal{D}_{s}}^{3}.\]
For the \(Z_{2}\) and \(Z_{1}\) terms in (3.1) one gets
\[\|(Z_{2}v^{2})_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq\delta^{-\frac{1}{2}-\varepsilon}\|Z_{2}\|\,\|v\|_{\mathcal{D }_{s}}^{2}+2\delta^{-2\varepsilon}\|Z_{2}\|\,[v]_{(1/2-\varepsilon\,;\,\delta),\mathcal{D}_{s+\lambda/2}}\|v\|_{\mathcal{D}_{s}},\] \[\|(Z_{1}v)_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq\delta^{-\frac{1}{2}-\varepsilon}\|Z_{1}\|\,\|v\|_{\mathcal{D }_{s}}+\delta^{-2\varepsilon}\|Z_{1}\|\,[v]_{(1/2-\varepsilon\,;\,\delta), \mathcal{D}_{s+\lambda/2}}.\]
It turns out to be useful to introduce the commutator \(B_{\delta}\cdot\nabla v-(B\cdot\nabla v)_{\delta}\) to estimate \((B\cdot\nabla v)_{\delta}\) itself as we get from Lemma 5 the bound
\[\|(B\cdot\nabla v)_{\delta}\|_{\mathcal{D}_{s+\lambda-\delta}} \leq\big{\|}B_{\delta}\cdot\nabla v-(B\cdot\nabla v)_{\delta} \big{\|}_{\mathcal{D}_{s+\lambda-\delta}}+\|B_{\delta}\cdot\nabla v\|_{ \mathcal{D}_{s+\lambda-\delta}}\] \[\lesssim\delta^{\frac{1}{2}-2\varepsilon}\|B\|\,[\nabla v]_{1/2- \varepsilon,\mathcal{D}_{s+\lambda/2}}+\delta^{-\varepsilon}\|B\|\,\|\nabla v \|_{\mathcal{D}_{s+\lambda/2}}.\]
Write \(V(z,z^{\prime})\) for \(v(z)-v(z^{\prime})\) and note that for all \(\lambda\) sufficiently small
\[\|\nabla v\|_{\mathcal{D}_{s+\lambda/2}} \lesssim\lambda^{\frac{1}{2}-\varepsilon}[v]_{3/2-\varepsilon, \mathcal{D}_{s+\lambda/2}}+\lambda^{-1}\|V\|_{(0;\lambda),\mathcal{D}_{s+ \lambda/2}}, \tag{3.2}\] \[[\nabla v]_{1/2-\varepsilon,\mathcal{D}_{s+\lambda/2}} \lesssim[v]_{3/2-\varepsilon,\mathcal{D}_{s+\lambda/2}}+\lambda^{- \frac{3}{2}+\varepsilon}\|V\|_{(0;\lambda),\mathcal{D}_{s+\lambda/2}}.\]
Combining all these estimates with \(\|V\|_{(0;\lambda),\mathcal{D}_{s+\lambda/2}}\leq 2\|v\|_{\mathcal{D}_{s}}\) yields the bound
\[\delta^{2-\frac{3}{2}+\varepsilon}\|(\mathcal{L}v)_{\delta}\|_{\mathcal{D}_{s +\lambda-\delta}}\lesssim C_{\lambda}\]
with
\[C_{\lambda}:=\lambda^{\frac{1}{2}+\varepsilon}\|A\|\|v\|_{ \mathcal{D}_{s}}^{3} +\|B\|\,[v]_{3/2-\varepsilon,\mathcal{D}_{s+\lambda/2}}+\lambda^{- \frac{1}{2}}\|B\|\,\|v\|_{\mathcal{D}_{s}}\] \[+\|Z_{2}\|_{-1/2-\varepsilon}\|v\|_{\mathcal{D}_{s}}^{2}+ \lambda^{\frac{1}{2}-\varepsilon}\|Z_{2}\|_{-1/2-\varepsilon}[v]_{(1/2- \varepsilon\,;\,\delta),\mathcal{D}_{s+\lambda/2}}\|v\|_{\mathcal{D}_{s}}\] \[+\|Z_{1}\|\,\|v\|_{\mathcal{D}_{s}}+\lambda^{\frac{1}{2}- \varepsilon}\|Z_{1}\|\,[v]_{(1/2-\varepsilon\,;\,\delta),\mathcal{D}_{s+ \lambda/2}}\|v\|_{\mathcal{D}_{s}}+\|Z_{0}\|.\]
Note that \(\lambda/2\) is involved in \(C_{\lambda}\) rather than \(\lambda\) itself. The Schauder estimate from Proposition 3 gives us
\[\sup_{\lambda\leq\frac{\lambda_{0}}{2}}\lambda^{\frac{3}{2}-\varepsilon}[u]_{3/ 2-\varepsilon,\mathcal{D}_{s+\lambda}}\leq\sup_{\lambda\leq\lambda_{0}} \lambda^{\frac{3}{2}-\varepsilon}[u]_{3/2-\varepsilon,\mathcal{D}_{s+\lambda}} \lesssim\sup_{\lambda\leq\lambda_{0}}\lambda^{\frac{3}{2}-\varepsilon}C_{ \lambda}+\|v\|_{\mathcal{D}_{s}} \tag{3.3}\]
Even though \(C_{\lambda}\) depends on the \((3/2-\varepsilon)\) seminorm of \(v\) on \(\mathcal{D}_{s+\lambda/2}\) it comes with a small multiplicative factor if one chooses
\[\lambda_{0}=\Big{(}4\max\Big{\{}\|B\|^{\frac{1}{1-\varepsilon}}\,;\,\|Z_{1}\|^{\frac{1}{3/2-2\varepsilon}}\,;\,\big{(}\|v\|_{\mathcal{D}_{s}}\|Z_{2}\|\big{)}^{\frac{1}{3/2-2\varepsilon}}\Big{\}}\Big{)}^{-1}.\]
The term of \(C_{\lambda}\) that involves the \((3/2-\varepsilon)\) seminorm of \(v\) can then be absorbed in the left-hand side of (3.3). Still the constant \(C_{\lambda}\) depends on some \((1/2-\varepsilon)\) seminorm of \(v\). To eventually have
a bound on \(C_{\lambda}\) that only involves \(\|v\|_{\mathcal{D}_{s}}\) we use the elementary estimate
\[[v]_{(1/2-\varepsilon\,;\,\delta),\mathcal{D}_{s+\lambda/2}}\leq[v]_{1/2- \varepsilon,\mathcal{D}_{s+\lambda/2}}\leq\lambda^{1-\varepsilon}[v]_{3/2- \varepsilon,\mathcal{D}_{s+\lambda/2}}+\lambda^{\frac{1}{2}+\varepsilon}\| \nabla v\|_{\mathcal{D}_{s+\lambda/2}}\]
and the gradient bound (3.2) on \(v\) in uniform norm to see that
\[[v]_{(1/2-\varepsilon;\delta),\mathcal{D}_{s+\lambda/2}}\lesssim\lambda^{1- \varepsilon}[v]_{3/2-\varepsilon,\mathcal{D}_{s+\lambda/2}}+\lambda^{-\frac{1 }{2}+\varepsilon}\|v\|_{\mathcal{D}_{s+\lambda/2}}.\]
One therefore has
\[\lambda^{2-\varepsilon_{0}-\varepsilon}\|Z_{i}\|\,[v]_{(1/2-\varepsilon\,; \,\delta),\mathcal{D}_{s+\lambda/2}}\lesssim\|Z_{i}\|\big{(}\lambda^{3- \varepsilon_{0}-2\varepsilon}[v]_{3/2-\varepsilon,\mathcal{D}_{s+\lambda/2}}+ \lambda^{\frac{3}{2}-\varepsilon_{0}}\|v\|_{\mathcal{D}_{s}}\big{)}\]
for \(1\leq i\leq 2\), so our choice of \(\lambda_{0}\) ensures that this term can also be absorbed in the left-hand side of (3.3). We get in the end the bound
\[\sup_{\lambda\leq\lambda_{0}/2}\lambda^{\frac{3}{2}-\varepsilon}[v]_{\frac{3}{2}-\varepsilon,\mathcal{D}_{s+\lambda}}\lesssim\sup_{\lambda\leq\lambda_{0}}C^{\prime}_{\lambda}+\|v\|_{\mathcal{D}_{s}}\]
where
\[C^{\prime}_{\lambda}=\lambda^{2}\|A\|\,\|v\|_{\mathcal{D}_{s}}^{3} +\lambda^{1-\varepsilon}\|B\|\,\|v\|_{\mathcal{D}_{s}}+\lambda^{\frac{3}{2}- \varepsilon}\|Z_{2}\|\,\|v\|_{\mathcal{D}_{s}}^{2}+\lambda^{\frac{3}{2}- \varepsilon_{0}}\|Z_{1}\|\,\|v\|_{\mathcal{D}_{s}}\] \[+\lambda^{\frac{3}{2}-\varepsilon_{0}}\|Z_{2}\|\,\|v\|_{\mathcal{ D}_{s}}^{2}+\lambda^{\frac{3}{2}-\varepsilon}\|Z_{1}\|\,\|v\|_{\mathcal{D}_{s}}+ \lambda^{\frac{3}{2}-\varepsilon}\|Z_{0}\|.\]
This is an increasing function of \(\lambda\) and writing
\[[v]_{\frac{3}{2}-\varepsilon,\mathcal{D}_{s+\lambda_{0}}}\lesssim\lambda_{0} ^{-\frac{3}{2}+\varepsilon}\big{(}C^{\prime}_{\lambda_{0}}+\|v\|_{\mathcal{D} _{s}}\big{)}\]
gives the conclusion. \(\rhd\)
## 4 Variation on a theme
We will use in our proof of the uniqueness of an invariant measure of (1.2) a perturbed version of this dynamics that contains an additional drift with a particular form. This section is dedicated to proving for the solution of this perturbed dynamics some explicit control on its \(L^{\infty}\) and stronger norm similar to Theorem 2 and Theorem 6. We use the convention that \(\mathbf{1}_{1<\cdot<\tau}=0\) if \(\tau=1\). Below, the space \(\left(\!\delta\!\alpha_{0},1+2\varepsilon\!\right)\) was introduced in [4] for a particular value of \(\alpha_{0}\) that does not matter here. The norm on this space quantifies the explosion of a function of time \(t>0\) as \(t\) goes to \(0\); its precise definition here does not really matter. For \(0<T<\tau\) the restriction to \([T,\tau)\) of an element of the space \(C\big{(}[0,\tau),C^{-1/2-\varepsilon}(M)\big{)}\cap(\!\delta\!\alpha_{0},1+2 \varepsilon\!]\) is an element of \(C\big{(}[T,\tau],C^{1+2\varepsilon}(M)\big{)}\).
**Theorem 7**: **-** _Pick a constant \(\ell\in\mathbb{R}\) and an initial condition \(\phi^{\prime}_{2}\in C^{-1/2-\varepsilon}(M)\). The equation_
\[\partial_{t}v_{\ell} =(\Delta-1)v_{\ell}-Av_{\ell}^{3}+B\nabla v_{\ell}+Z_{2}v_{\ell} ^{2}+Z_{1}v_{\ell}+Z_{0}+\ell\,\mathbf{1}_{1<t<\tau}\,\frac{v(t)-v_{\ell}(t)}{ \|v(t)-v_{\ell}(t)\|_{L^{2}}}\,\exp\big{(}3\!\delta\!\Phi(t)\big{)}\] \[=:F(v_{\ell})+\ell\,\mathbf{1}_{1<t<\tau}\,\frac{v(t)-v_{\ell}(t) }{\|v(t)-v_{\ell}(t)\|_{L^{2}}}\,\exp\big{(}3\!\Phi(t)\big{)}=:F_{\ell}(t,v_{ \ell}),\qquad(0\leq t<\tau), \tag{4.1}\]
_where_
\[\tau=\tau(\ell,\phi_{1},\phi_{2}):=\inf\big{\{}s\geq 1\,;\,v_{\ell}(s)=v(s) \big{\}}\wedge 2,\]
_has a unique solution in \(C\big{(}[0,\tau),C^{-1/2-\varepsilon}(M)\cap(\!\delta\!\alpha_{0},1+2 \varepsilon\!]\big{)}\). Furthermore, it satisfies the estimates_
\[\|v_{\ell}(s)\|_{L^{\infty}}\leq c_{1}(\widehat{\xi}\,)\,(1+\ell^{\frac{1}{3}}) \tag{4.2}\]
_and_
\[\|v_{\ell}(s)\|_{C^{1+2\varepsilon}}\leq c_{2}(\widehat{\xi}\,)\,(1+\ell) \tag{4.3}\]
_for all \(1\leq s<\tau\), for some explicit functions \(c_{1}(\widehat{\xi}\,),c_{2}(\widehat{\xi}\,)\) of \(\widehat{\xi}\) whose precise values play no role in what follows._
**Proof -**: _Local in time well-posedness beyond time \(1\)._ There is no loss of generality in assuming that \(v(1,\phi^{\prime}_{1})\neq v_{\ell}(1,\phi^{\prime}_{2})\). Denote by \(C_{v_{\ell}(1,\phi^{\prime}_{2})}\big{(}[1,T],C^{1+2\varepsilon}(M)\big{)}\) the set of continuous functions from the interval \([1,T]\) into \(C^{1+2\varepsilon}(M)\) with value \(v_{\ell}(1,\phi^{\prime}_{2})\) at time \(1\). Pick a positive constant
\[m<\big{\|}v(1,\phi^{\prime}_{1})-v_{\ell}(1,\phi^{\prime}_{2})\big{\|}_{L^{2}} \wedge 1\]
(think of it as being small) and set
\[\mathcal{V}_{v_{\ell}(1,\phi_{2}^{\prime})}(m,T):=\Big{\{}w\in C_{v_{\ell}(1, \phi_{2}^{\prime})}\big{(}[1,T],C^{1+2\varepsilon}(M)\big{)}\,;\,\min_{1\leq t \leq T}\,\|v(t)-w(t)\|_{L^{2}}>m\Big{\}}.\]
This is an open subset of \(C_{v_{\ell}(1,\phi_{2}^{\prime})}\big{(}[1,T],C^{1+2\varepsilon}(M)\big{)}\) with closure \(\overline{\mathcal{V}}_{v_{\ell}(1,\phi_{2}^{\prime})}(m,T)\) included in
\[\Big{\{}w\in C_{v_{\ell}(1,\phi_{2}^{\prime})}\big{(}[1,T],C^{1+2\varepsilon} (M)\big{)}\,;\,\min_{1\leq t\leq T}\,\|v(t)-w(t)\|_{L^{2}}\geq m\Big{\}}.\]
**Lemma 8**: _There exists a positive time \(T(\ell,m)\) such that the map \(\mathscr{F}\) defined as_
\[\mathscr{F}(w)(t):=e^{(t-1)(\Delta-1)}\big{(}v_{\ell}(1,\phi_{2}^{\prime}) \big{)}+\int_{0}^{t-1}e^{(t-1-s)(\Delta-1)}\big{(}F_{\ell}(1+s,w)\big{)}ds\]
_is a contraction of \(\overline{\mathcal{V}}_{v_{\ell}(1,\phi_{2}^{\prime})}\big{(}m,T(\ell,m)\big{)}\) into itself. One can choose \(T(\ell,m)\) as a decreasing function of \(m\)._
**Proof -** Indeed, for \(w\in\mathcal{V}_{v_{\ell}(1,\phi_{2}^{\prime})}(m,T)\) one has \(F_{\ell}(w)\in C\big{(}[1,T],C^{-1/2-\varepsilon}(M)\big{)}\) with
\[F_{\ell}(w_{1})-F_{\ell}(w_{2})=(\Delta-1)(w_{1}-w_{2})-A\big{(} w_{1}^{3}-w_{2}^{3}\big{)} +B\nabla(w_{1}-w_{2})+Z_{2}\big{(}w_{1}^{2}-w_{2}^{2}\big{)}+Z_{1}(w_{1}-w_ {2})\] \[+\ell\Big{(}\frac{v-w_{1}}{\|v-w_{1}\|_{L^{2}}}-\frac{v-w_{2}}{\| v-w_{2}\|_{L^{2}}}\Big{)}\]
and
\[\big{\|}F_{\ell}(w_{1})-F_{\ell}(w_{2})\big{\|}_{C_{T}C^{-1/2- \varepsilon}}\lesssim 1+2\big{(}\|w_{1}\|_{L^{\infty}}^{2}+\|w_{2}\|_{L^{ \infty}}^{2}\big{)}\|w_{1}-w_{2}\|_{L^{\infty}}+\|w_{1}-w_{2}\|_{C^{1+ \varepsilon}}\] \[+\big{(}\|w_{1}\|_{L^{\infty}}+\|w_{2}\|_{L^{\infty}}+1\big{)}\|w _{1}-w_{2}\|_{C^{1/2+\varepsilon}}\] \[+\ell\,m^{-2}\big{(}\|v\|_{L^{\infty}}+\|w_{1}\|_{L^{\infty}}+\|w _{2}\|_{L^{\infty}}\big{)}\|w_{1}-w_{2}\|_{L^{\infty}}.\]
(We estimated the \(\ell\)-term in \(L^{\infty}\) by bounding, on the set where \(\|f\|_{L^{2}}\geq m\), the increments of the map \(f\mapsto\|f\|_{L^{2}}^{-1}f\) from \(L^{\infty}\) into itself. We left the constants on the right hand side to make the verification easier.) The small-time estimate
\[\big{\|}\big{(}\partial_{t}-(\Delta-1)\big{)}^{-1}(f)\big{\|}_{C_{T}C^{1+2 \varepsilon}}\lesssim T^{\frac{1}{2}-3\varepsilon}\|f\|_{C_{T}C^{-1/2- \varepsilon}},\]
then entails the statement of the lemma. \(\blacktriangleright\)
Two solutions corresponding to \(m_{2}<m_{1}\) satisfy \(T(\ell,m_{2})\geq T(\ell,m_{1})\) and coincide on the interval \([0,T(\ell,m_{1})]\) by uniqueness. We prove the non-explosion of the solution to Equation (4.1) before the coupling time with the path \((u(t))_{t\geq 0}\) by showing that any solution to (4.1) on a time interval \((0,T]\) satisfies the size estimates (4.2) and (4.3). By repeated use of the local in time result of Lemma 8 these estimates allow us to define the maximal existence time \(\tau(\ell,m)\geq T(\ell,m)\) before \(\|v_{\ell}(t)-v(t)\|_{L^{2}}=m\), with \(\tau(\ell,m_{2})\geq\tau(\ell,m_{1})\) if \(m_{2}<m_{1}\). The coupling time \(\tau\) is defined as
\[\tau=\tau(\ell)=\lim_{m\downarrow 0}\tau(\ell,m).\]
We assume in the next paragraph that \(v_{\ell}\) is a solution to (4.1) on a time interval \((0,T]\), so \(\|v(t)-v_{\ell}(t)\|_{L^{2}}\geq m\) for some \(m>0\) below.
_Quantitative estimates (4.2) and (4.3)._ The proof of these statements is similar to the proofs of Theorem 2 and Theorem 6 but one has to deal with the fact that the drift
\[G_{\ell}(v_{\ell})(t):=\ell\,\frac{v(t)-v_{\ell}(t)}{\|v(t)-v_{\ell}(t)\|_{L^{2} }}\,\exp\big{(}3^{\mathfrak{GP}}(t)\big{)}\]
does not have good controls in \(C^{-1/2-\varepsilon}(M)\) for all times. Rather it has good controls when seen as an element of \(L^{2}(M)\). We only point out the modifications of the proof of Theorem 2 needed to provide a proof of (4.2) and (4.3), and we follow the proof of the former step by step.
In Step 1, looking at the drift as a continuous function of time with values in a fixed ball of \(L^{2}(M)\) of radius proportional to \(\ell\), one gets an additional term
\[\big{\|}\big{(}G_{\ell}(v_{\ell})\big{)}_{\delta}\big{\|}_{\mathcal{D}_{s+ \lambda-\delta}}\lesssim_{\widehat{\xi}}\delta^{-\frac{3}{4}}\,\ell.\]
The worst exponent in Step 1 in Section 2 was \(-1/2-\varepsilon\). This difference between the worst exponents in the two situations explains why the reasoning of Step 1 cannot give control of the \(3/2-\varepsilon\) seminorm of \(v_{\ell}\) as the quantity \(\delta^{1/2+\varepsilon}\|(\mathcal{L}v_{\ell})_{\delta}\|_{\mathcal{D}_{s+ \lambda-\delta}}\) is not bounded anymore. It gives
however a control of the \(1+2\varepsilon\) seminorm of \(v_{\ell}\) in terms of \(\|v_{\ell}\|_{\mathcal{D}_{s}}+\ell\), following verbatim what was done in Section 2.2.
The contribution of the drift \(G_{\ell}(v_{\ell})\) in Step 2 is the same as in Step 1 and one needs in addition to replace here the use of the \((3/2-\varepsilon)\) seminorm of \(v\) made in Section 2.2 in the estimate of the \(B\) term by the use of the \((1+2\varepsilon)\) seminorm of \(v_{\ell}\). Step 3 works similarly as in Section 2.2 and provides the estimate
\[\|v_{\ell}\|_{\mathcal{D}_{s+s_{1}}}\leq\max\left\{\frac{2\,(\inf_{\mathcal{D} }A)^{-\frac{1}{2}}}{s_{1}}\,,\,\frac{1}{2}\,\|v_{\ell}\|_{\mathcal{D}_{s}}+ \ell^{\frac{1}{3}}\right\},\]
so we have
\[\|v_{\ell}\|_{\mathcal{D}_{s+s_{1}}}\leq\max\left\{\frac{2\,(\inf_{\mathcal{D} }A)^{-\frac{1}{2}}}{s_{1}}\,,\,\frac{3}{4}\,\|v_{\ell}\|_{\mathcal{D}_{s}}\right\}\]
as long as \(\|v_{\ell}\|_{\mathcal{D}_{s}}\geq 4\,\ell^{\frac{1}{3}}\), since then \(\ell^{\frac{1}{3}}\leq\frac{1}{4}\,\|v_{\ell}\|_{\mathcal{D}_{s}}\). Proceeding as in part _(b)_ of the proof of Theorem 2 with the contraction coefficient \(1/2\) replaced by \(3/4\) one gets
\[\|v_{\ell}\|_{\mathcal{D}_{s}}\lesssim\max\left\{\frac{1}{s},\left((c_{A}[\tau]_{|\tau|})^{m_{\tau}}\right)_{\tau\in\mathcal{T}},\,4\,\ell^{\frac{1}{3}}\right\},\]
from which (4.2) follows. We repeat the proof of Theorem 6 to obtain (4.3), adding the contribution of the drift and replacing the \((3/2-\varepsilon)\) seminorm by the \((1+2\varepsilon)\) seminorm. \(\rhd\)
## 5 Uniqueness of the invariant measure
We proved in Proposition 21 of [4] that the dynamics on \(C^{-1/2-\varepsilon}(M)\) generated by (1.1) is Markovian; we prove in this section that it has at most one invariant probability measure. We work for that purpose with Jagannath & Perkowski's formulation (1.2) of (1.1) and use a _coupling argument_ to prove the uniqueness. Given two points \(\phi^{\prime}_{1},\phi^{\prime}_{2}\in C^{-1/2-\varepsilon}(M)\), an elementary coupling of two solutions to (1.2) started from \(\phi^{\prime}_{1}\) and \(\phi^{\prime}_{2}\) would consist in constructing on some probability space a pair of spacetime white noises such that the solutions to Equation (1.2) built from each of these noises take the same value at some fixed positive finite time \(T\) outside of an event of arbitrarily small probability independent of \(\phi^{\prime}_{1},\phi^{\prime}_{2}\). The corresponding solutions of Equation (1.1) would also coincide at that time. One could then take random initial conditions with laws given by two invariant probability measures \(\mu_{1},\mu_{2}\) of the dynamics generated by (1.1) and write for any continuous function \(f\) on \(C^{-1/2-\varepsilon}(M)\)
\[\mu_{1}(f)=\mathbb{E}\big{[}f(u(T;\mu_{1}))\big{]}=\mathbb{E}\big{[}f(u(T;\mu _{2}))\big{]}=\mu_{2}(f),\]
with \(u(\cdot\,;\,\mu_{i})\) denoting the solution to (1.1) with random initial condition with law \(\mu_{i}\). We are not able to produce such a strong coupling here; rather, given the trajectory \(u(\cdot\,;\,\mu_{1})\), we can add to the dynamics of \(u(\cdot\,;\,\mu_{2})\) a drift that forces the latter to meet the former by a fixed time \(T\) with high probability. With the dynamics of \(u(\cdot\,;\,\mu_{2})\) changed the measure \(\mu_{2}\) is not invariant anymore for this new dynamics and the above simple argument for uniqueness does not apply per se. However, for a particular drift there is an equivalent probability measure on our probability space for which the new dynamics has the same law as the original dynamics with random initial condition with law \(\mu_{2}\). A variation on the above pattern of proof then gives the equality of \(\mu_{1}\) and \(\mu_{2}\).
Denote by \((\Omega,\mathcal{F},\mathbb{P})\) the probability space on which all our random variables have been implicitly defined so far. We write \(\mathscr{L}_{\mathbb{P}}(X)\) for the law under \(\mathbb{P}\) of a random variable \(X\) and use a similar notation \(\mathscr{L}_{\mathbb{Q}}(X)\) for any other probability measure \(\mathbb{Q}\) on \((\Omega,\mathcal{F})\). We write \(\mathbb{E}_{\mathbb{P}}\) and \(\mathbb{E}_{\mathbb{Q}}\) for the corresponding expectation operators. We use a coupling by a change of measure argument to prove that the semigroup \((\mathcal{P}_{t})_{t\geq 0}\) on \(C^{-1/2-\varepsilon}(M)\) generated by (1.1) has at most one invariant probability measure.
**Theorem 9**: _The semigroup \((\mathcal{P}_{t})_{t\geq 0}\) has at most one invariant probability measure._
Proof.: We proceed in two steps and first construct a coupling by a change of measure between two trajectories of the Jagannath & Perkowski version of the \(\Phi^{4}_{3}\) dynamics started from different points. As a preliminary remark note that shifting the noise \(\xi\) by a (possibly random) element
of its Cameron-Martin space with support in time in the interval \([1,2]\) is equivalent to adding a drift \(h\exp\big{(}3^{\cancel{PQ}}\big{)}\) to the dynamics of \(v\). Indeed, let \(\phi\) and \(\phi^{\prime}\) be related by the relation
\[\phi^{\prime}=\exp\big{(}3^{\cancel{PQ}}(0)\big{)}\Big{(}\phi-\hat{\mathfrak{f }}(0)+\cancel{PQ}(0)\Big{)}-v_{\text{ref}}(0).\]
One sees that if \(v_{r}^{h}\) solves an equation of the form
\[(\partial_{t}-\Delta)v_{r}^{h}=B_{r}\cdot\nabla v_{r}^{h}-A_{r}(v_{r}^{h})^{3} +Z_{2,r}(v_{r}^{h})^{2}+Z_{1,r}v_{r}^{h}+Z_{0,r}+(e^{r\Delta}h)\,e^{3^{ \cancel{PQ}}_{r}},\qquad v_{r}^{h}(0)=\phi^{\prime},\]
then
\[u_{r}^{h}=\hat{\mathfrak{f}}_{r}-\cancel{PQ}_{r}+e^{-3^{\cancel{PQ}}_{r}}(v_ {r}^{h}+v_{\text{ref},r}), \tag{5.1}\]
solves the equation
\[(\partial_{t}-\Delta)u_{r}^{h}=-(u_{r}^{h})^{3}+3(a_{r}-b_{r})u_{r}^{h}+e^{r \Delta}(\xi+h),\qquad u_{r}(0)=\phi.\]
The convergence of \(v_{r}^{h}\) to the solution \(v^{h}\) of the equation
\[(\partial_{t}-\Delta)v^{h}=B\cdot\nabla v^{h}-A(v^{h})^{3}+Z_{2}(v^{h})^{2}+Z _{1}v^{h}+Z_{0}+he^{3^{\cancel{PQ}}_{r}},\qquad v^{h}(0)=\phi^{\prime},\]
ensures the convergence of \(u_{r}^{h}\) to a limit.
_Step 1 - The coupling._ Pick \(\phi_{1},\phi_{2}\in C^{-1/2-\varepsilon}(M)\) with corresponding \(\phi_{1}^{\prime},\phi_{2}^{\prime}\). We adopt as above the notation \(v=v(\cdot,\phi_{1}^{\prime})\) for the solution of the Jagannath-Perkowski equation with initial condition \(\phi_{1}^{\prime}\), with \(u=u(\cdot,\phi_{1})\) the corresponding function given by the inverse Jagannath-Perkowski transform. Recall Theorem 7 provides some quantitative estimates on the solutions of the equation
\[\partial_{t}v_{\ell} =(\Delta-1)v_{\ell}-Av_{\ell}^{3}+B\cdot\nabla v_{\ell}+Z_{2}v_{ \ell}^{2}+Z_{1}v_{\ell}+Z_{0}+\ell\,\mathbf{1}_{1<t<\tau}\,\frac{v(t)-v_{\ell }(t)}{\|v(t)-v_{\ell}(t)\|_{L^{2}}}\,\exp\big{(}3^{\cancel{PQ}}(t)\big{)}\] \[=:F(v_{\ell})+\ell\,\mathbf{1}_{1<t<\tau}\,\frac{v(t)-v_{\ell}(t) }{\|v(t)-v_{\ell}(t)\|_{L^{2}}}\,\exp\big{(}3^{\cancel{PQ}}(t)\big{)}=:F_{ \ell}(t,v_{\ell}),\qquad(0\leq t<\tau),\]
where
\[\tau=\tau(\ell,\phi_{1},\phi_{2}):=\inf\big{\{}s\geq 1\,;\,v_{\ell}(s)=v(s) \big{\}}\wedge 2.\]
The random time \(\tau\) is called the coupling time - we take as a convention \(\inf\emptyset=+\infty\). A successful coupling corresponds to the event \(\{\tau<2\}\), in which case we let \(v_{\ell}(t)=v(t)\) for \(t\geq\tau\). We have
\[\mathbf{1}_{\tau<2}\,v_{\ell}(2,\phi_{2}^{\prime})=\mathbf{1}_{\tau<2}\,v(2, \phi_{1}^{\prime})\]
and
\[\mathbf{1}_{\tau<2}\,u_{\ell}(2,\phi_{2})=\mathbf{1}_{\tau<2}\,u(2,\phi_{1}),\]
with \(u_{\ell}\) corresponding to \(v_{\ell}\) via the inverse Jagannath-Perkowski transform (5.1).
**Lemma 10 -**_Take \(\ell\geq 1\). There is an absolute constant \(\varepsilon_{0}>0\) such that for all \(0<\varepsilon<\varepsilon_{0}\) and for \(1<t<\tau\) one has_
\[\big{\langle}F(v)-F_{\ell}(t,v^{\prime}),v-v^{\prime}\big{\rangle}_{L^{2}}\lesssim-\big{(}\ell-o_{\widehat{\xi}}(\ell)\big{)}\|v-v^{\prime}\|_{L^{2}} \tag{5.2}\]
_for all \(v,v^{\prime}\in C^{1+2\varepsilon}(M)\) with_
\[\|v\|_{L^{\infty}}\vee\|v^{\prime}\|_{L^{\infty}}\leq c_{1}(\widehat{\xi}\,)\, \ell^{\frac{1}{3}},\]
_and_
\[\|v\|_{C^{1+2\varepsilon}}\vee\|v^{\prime}\|_{C^{1+2\varepsilon}}\leq c_{2}( \widehat{\xi}\,)\,\ell, \tag{5.3}\]
_for a \(\widehat{\xi}\)-dependent non-negative function \(o_{\widehat{\xi}}(\ell)\) of \(\ell\) such that \(o_{\widehat{\xi}}(\ell)/\ell\) goes to \(0\) as \(\ell\) goes to \(\infty\)._
**Proof -** Note that there are no absolute values in (5.2). We give upper bounds for each term in this expression. We make the common abuse of notation of writing \(\langle Z_{i}f,g\rangle_{L^{2}}\) for \(\langle Z_{i}fg,\mathbf{1}\rangle\), the result of testing a well-defined distribution \(Z_{i}fg\) on the constant function \(\mathbf{1}\). We use a similar convention for \(\langle B\cdot\nabla f,g\rangle_{L^{2}}\).
- _The \(A\) term._ As \(A\) is positive one has
\[\big{\langle}-A\big{(}v^{3}-{v^{\prime}}^{3}\big{)},v-v^{\prime}\big{\rangle}_ {L^{2}}\leq 0,\]
and this term does not contribute to the upper bound (5.2).
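As a reading aid, the sign comes from the pointwise factorisation
\[-A\big{(}v^{3}-v^{\prime 3}\big{)}(v-v^{\prime})=-A\,(v-v^{\prime})^{2}\big{(}v^{2}+vv^{\prime}+v^{\prime 2}\big{)}\leq 0,\qquad v^{2}+vv^{\prime}+v^{\prime 2}=\Big{(}v+\tfrac{v^{\prime}}{2}\Big{)}^{2}+\tfrac{3}{4}v^{\prime 2}\geq 0,\]
valid for any real-valued \(v,v^{\prime}\).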
- _The \(B\) term._ We start from the identity
\[(v-v^{\prime})^{2}=2(v-v^{\prime})\prec(v-v^{\prime})+(v-v^{\prime})\odot(v-v^{ \prime}),\]
with the left \((v-v^{\prime})\) seen as an element of \(L^{2}(M)\) and the right \((v-v^{\prime})\) seen as an element of \(C^{1+2\varepsilon}(M)\) in the paraproduct and resonant terms, and estimate each term in \(B^{1+\varepsilon}_{2,\infty}(M)\). Losing a little bit on the regularity exponent allows using the interpolation size estimate between different Besov spaces and estimate
\[\|v-v^{\prime}\|_{B^{1+\varepsilon}_{\infty}}\lesssim_{\widehat{\xi}}\ell^{ \frac{1}{3}\frac{\varepsilon}{3-2\varepsilon}+\frac{1+\varepsilon}{1+2 \varepsilon}},\]
with an exponent strictly smaller than \(1\). We write
\[\|v-v^{\prime}\|_{B^{1+\varepsilon}_{\infty}}\lesssim_{\widehat{\xi}}\ell^{ <1}.\]
One then gets from the classical continuity estimates on the paraproduct and resonant operators that
\[\|(v-v^{\prime})^{2}\|_{B^{1+\varepsilon}_{2\infty}}\lesssim_{\widehat{\xi}} \ell^{<1}\,\|v-v^{\prime}\|_{L^{2}}.\]
- _The \(Z_{1}\) term._ We take advantage of the fact that \(Z_{1}\in C_{T}B^{-\frac{1}{2}-\frac{\varepsilon}{2}}_{2}(M)\) almost surely. We use the previous estimate to see that
\[\big{|}\langle Z_{1}(v-v^{\prime}),v-v^{\prime}\rangle\big{|}\lesssim_{ \widehat{\xi},Z_{1}}\ell^{<1}\,\|v-v^{\prime}\|_{L^{2}}.\]
- _The \(Z_{2}\) term._ Here as well we consider \(Z_{2}\) as an element of \(C_{T}B^{-\frac{1}{2}-\frac{\varepsilon}{2}}_{21}(M)\). First we obtain by interpolation between the \(L^{\infty}\) and \(C^{1+2\varepsilon}\) estimates on \(v\) and \(v^{\prime}\) that
\[\|v\pm v^{\prime}\|_{B^{\frac{1}{3}+2\varepsilon}_{\infty}}\lesssim_{\widehat {\xi}}\ell^{\frac{1}{3}\frac{5+6\varepsilon}{3-2\varepsilon}},\]
with an exponent slightly bigger than \(5/9\). We thus get from the classical continuity estimates on the paraproduct and resonant operators that
\[\|(v-v^{\prime})^{2}\|_{B^{\frac{1}{4}+2\varepsilon}_{2\infty}}\lesssim_{ \widehat{\xi}}\ell^{\frac{1}{3}\frac{5+6\varepsilon}{3-2\varepsilon}}\|v-v^{ \prime}\|_{L^{2}}. \tag{5.4}\]
Now write
\[\big{(}v^{2}-(v^{\prime})^{2}\big{)}(v-v^{\prime})=(v-v^{\prime})^{2}(v+v^{ \prime})=(v-v^{\prime})^{2}\prec(v+v^{\prime})+\Big{\{}(v+v^{\prime})\prec(v -v^{\prime})^{2}+(v+v^{\prime})\odot(v-v^{\prime})^{2}\Big{\}}.\]
To estimate the contribution of the first paraproduct in the \(Z_{2}\) term we use the elementary refined continuity estimate from Lemma 13 in Appendix A to get the best of the \(L^{\infty}\) and \(C^{1+2\varepsilon}\) estimates on \((v+v^{\prime})\). We have for all integers \(N\)
\[\big{\|}(v-v^{\prime})^{2}\prec(v+v^{\prime})\big{\|}_{B^{(\frac{1 }{4}+2\varepsilon)-\varepsilon}_{2\infty}} \lesssim_{\widehat{\xi}}\|(v-v^{\prime})^{2}\|_{B^{\frac{1}{4}+2 \varepsilon}_{2\infty}}\big{(}2^{-N\varepsilon}\|v+v^{\prime}\|_{B^{\frac{1}{ 4}+2\varepsilon}_{\infty}}+N\|v+v^{\prime}\|_{L^{\infty}}\Big{)}\] \[\lesssim_{\widehat{\xi}}\ell^{\frac{5+6\varepsilon}{3-2\varepsilon }}\|v-v^{\prime}\|_{L^{2}}\Big{(}2^{-N\varepsilon}\ell^{\frac{1}{3}\frac{5+6 \varepsilon}{3-2\varepsilon}}+N\ell^{\frac{1}{3}}\Big{)}.\]
Choosing \(N\) such that \(2^{-N\varepsilon}\ell^{\frac{5+6\varepsilon}{3-2\varepsilon}}\simeq\ell^{ \frac{1}{3}}\) gives
\[2^{-N\varepsilon/2}\ell^{\frac{5+6\varepsilon}{3-2\varepsilon}}+N\ell^{ \frac{1}{3}}\lesssim\ell^{\frac{1}{3}+\eta}\]
for every \(\eta>0\) and \(\ell\geq\ell(\eta)\) large enough. One thus has
\[\big{\|}(v-v^{\prime})^{2}\prec(v+v^{\prime})\big{\|}_{B^{\frac{1}{4}+ \varepsilon}_{2\infty}}\lesssim_{\widehat{\xi}}\ell^{<1}\,\|v-v^{\prime}\|_{L^{ 2}}\]
for an exponent \(\frac{1}{3}\frac{5+6\varepsilon}{3-2\varepsilon}+\frac{1}{3}+\eta\) of \(\ell\) strictly smaller than \(1\), for an appropriate choice of \(\eta\).
We can directly use (5.4) and the \(L^{\infty}\) estimate on \(v\) and \(v^{\prime}\) to see that
\[\big{\|}(v+v^{\prime})\prec(v-v^{\prime})^{2}+(v+v^{\prime})\odot(v-v^{\prime })^{2}\big{\|}_{B^{\frac{1}{2}+2\varepsilon}_{2\infty}}\lesssim_{\widehat{\xi}} \ell^{\frac{1}{3}\frac{5+6\varepsilon}{3-2\varepsilon}+\frac{1}{3}}\|v-v^{ \prime}\|_{L^{2}},\]
here again with an exponent of \(\ell\) strictly smaller than \(1\).
- _The \(Z_{0}\) term._ It cancels in the difference \(F(v)-F_{\ell}(t,v^{\prime})\). As we have a term \(-\ell\|v-v^{\prime}\|_{L^{2}}\) that comes from \(F_{\ell}(t,v^{\prime})\) and all the other contributions to \(\big{\langle}F(v)-F_{\ell}(t,v^{\prime}),v-v^{\prime}\big{\rangle}_{L^{2}}\) add up to a quantity bounded above by a constant multiple of \(\ell^{<1}\,\|v-v^{\prime}\|_{L^{2}}\) we obtain (5.2) for an appropriate choice of \(\varepsilon_{0}>0\).
The proof makes it clear that one can take \(o_{\widehat{\xi}}(\ell)\) of the form
\[o_{\widehat{\xi}}(\ell)=c\Big{(}\|B_{[1,2]}\|_{C([1,2],C^{-\varepsilon}(M))}+\|Z_{2,[1,2]}\|_{C([1,2],C^{-1/2-\varepsilon}(M))}+\|Z_{1,[1,2]}\|_{C([1,2],C^{-1/2-\varepsilon}(M))}\Big{)}\,\ell^{\gamma} \tag{5.5}\]
for some positive constant \(c\) and some positive exponent \(\gamma<1\).
As in Lemma 4 of [4] one proves that the function of time \(\|v(\cdot,\phi_{1}^{\prime})-v_{\ell}(\cdot,\phi_{2}^{\prime})\|_{L^{2}}\) is Young differentiable on the interval \((1,\tau)\) and one has for \(1<t<\tau\)
\[\big{\|}v(t,\phi_{1}^{\prime})-v_{\ell}(t,\phi_{2}^{\prime})\big{\|}_{L^{2}}-\big{\|}v(1,\phi_{1}^{\prime})-v_{\ell}(1,\phi_{2}^{\prime})\big{\|}_{L^{2}}\] \[=\int_{1}^{t}\frac{\Big{\langle}F\big{(}v(s,\phi_{1}^{\prime})\big{)}-F_{\ell}\big{(}s,v_{\ell}(s,\phi_{2}^{\prime})\big{)},v(s,\phi_{1}^{\prime})-v_{\ell}(s,\phi_{2}^{\prime})\Big{\rangle}_{L^{2}}}{\big{\|}v(s,\phi_{1}^{\prime})-v_{\ell}(s,\phi_{2}^{\prime})\big{\|}_{L^{2}}}\,ds.\]
It follows from Lemma 10 that one has for \(1\leq t<\tau\) the inequality
\[\big{\|}v(t,\phi_{1}^{\prime})-v_{\ell}(t,\phi_{2}^{\prime})\big{\|}_{L^{2}} \leq\big{\|}v(1,\phi_{1}^{\prime})-v_{\ell}(1,\phi_{2}^{\prime})\big{\|}_{L^{ 2}}-\big{(}\ell-o_{\widehat{\xi}}(\ell)\big{)}(t-1), \tag{5.6}\]
for a positive quantity \(o_{\widehat{\xi}}(\ell)\) that depends on \(\widehat{\xi}_{[1,2]}\) such that \(o_{\widehat{\xi}}(\ell)/\ell\) goes to \(0\) as \(\ell\) goes to \(\infty\). So one has a successful coupling on the event
\[\bigg{\{}\big{\|}v(1,\phi_{1}^{\prime})-v_{\ell}(1,\phi_{2}^{\prime})\big{\|} _{L^{2}}\leq\frac{\ell-o_{\widehat{\xi}}(\ell)}{2}\bigg{\}}\subset\big{\{} \tau<2\big{\}}.\]
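Indeed — a reading aid — on that event the right hand side of (5.6) is negative for \(t>3/2\), provided \(\ell\) is large enough for \(\ell-o_{\widehat{\xi}}(\ell)\) to be positive:
\[\big{\|}v(1,\phi_{1}^{\prime})-v_{\ell}(1,\phi_{2}^{\prime})\big{\|}_{L^{2}}-\big{(}\ell-o_{\widehat{\xi}}(\ell)\big{)}(t-1)\leq\big{(}\ell-o_{\widehat{\xi}}(\ell)\big{)}\Big{(}\frac{1}{2}-(t-1)\Big{)}<0,\]
so the two paths meet before time \(3/2\) and \(\tau<2\).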
As a consequence of this inclusion and the \(L^{p}\) or \(L^{\infty}\) coming down from infinity result of [4] or Theorem 2, one can choose \(\ell\) big enough to have both \(\mathbb{P}(\tau(\ell,\phi_{1},\phi_{2})=2)\) and \(\mathbb{Q}_{\ell}(\tau(\ell,\phi_{1},\phi_{2})=2)\) strictly smaller than \(1\) independently of \(\phi_{1},\phi_{2}\), say
\[\max\big{(}\mathbb{P}(\tau(\ell,\phi_{1}^{\prime},\phi_{2}^{\prime})=2), \mathbb{Q}_{\ell}(\tau(\ell,\phi_{1}^{\prime},\phi_{2}^{\prime})=2)\big{)} \leq a<1,\quad(\forall\,\phi_{1}^{\prime},\phi_{2}^{\prime}\in C^{-1/2-\varepsilon }(M)). \tag{5.7}\]
We fix such an \(\ell\) and set
\[R_{\ell,\phi_{1},\phi_{2}}:=\exp\bigg{(}-\ell\,\xi\Big{(}\mathbf{1}_{1<\cdot<\tau}\,\frac{v(\cdot,\phi_{1}^{\prime})-v_{\ell}(\cdot,\phi_{2}^{\prime})}{\|v(\cdot,\phi_{1}^{\prime})-v_{\ell}(\cdot,\phi_{2}^{\prime})\|_{L^{2}}}\Big{)}-\frac{\ell^{2}(\tau-1)}{2}\bigg{)}.\]
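Note — a reading aid, using only the fact that the drift is at each time a unit vector of \(L^{2}(M)\) scaled by \(\ell\) — that the second term in the exponent is half the squared \(L^{2}\)-in-time norm of that drift:
\[\frac{\ell^{2}}{2}\int_{1}^{\tau}\Big{\|}\frac{v(t,\phi_{1}^{\prime})-v_{\ell}(t,\phi_{2}^{\prime})}{\|v(t,\phi_{1}^{\prime})-v_{\ell}(t,\phi_{2}^{\prime})\|_{L^{2}}}\Big{\|}_{L^{2}}^{2}\,dt=\frac{\ell^{2}(\tau-1)}{2}.\]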
Since \(\tau\leq 2\), Novikov's integrability criterion
\[\mathbb{E}\Big{[}\exp\Big{(}\frac{\ell^{2}(\tau-1)}{2}\Big{)}\Big{]}<\infty\]
is satisfied and it follows from Girsanov theorem that the process
\[\xi+\ell\,\mathbf{1}_{1<\cdot<\tau}\,\frac{v-v_{\ell}}{\|v-v_{\ell}\|_{L^{2}}}\]
is under the probability
\[d\mathbb{Q}_{\ell,\phi_{1},\phi_{2}}:=R_{\ell,\phi_{1},\phi_{2}}\,d\mathbb{P}\]
a spacetime white noise. Pick \(\alpha\in(0,1]\). We have
\[\mathscr{L}_{\mathbb{Q}_{\ell,\phi_{1},\phi_{2}}}(u_{\ell}(\cdot,\phi_{2}))= \mathscr{L}_{\mathbb{P}}(u(\cdot,\phi_{2})) \tag{5.8}\]
and
\[u_{\ell}(2,\phi_{2})=u(2,\phi_{1})\text{ on the event }\big{\{}\tau<2\big{\}}. \tag{5.9}\]
_Step 2 - Uniqueness of an invariant probability measure._ We can now prove that the semigroup \((\mathcal{P}_{t})_{t\geq 0}\) has at most one invariant probability measure. Otherwise, there would be (at least) two extremal invariant, hence singular, probability measures \(\mu,\nu\). We could take \(\phi_{1}\) random with law \(\mu\), and \(\phi_{2}\) random with law \(\nu\), and keep writing \(\mathbb{E}\) for the expectation operator in this extended probability space. Simply write \(R_{\ell}\) rather than \(R_{\ell,\phi_{1},\phi_{2}}\). Write \(u_{\ell}(\cdot,\nu)\) and \(u(\cdot,\mu)\) to emphasize the law of the initial condition. For a measurable set \(A\subset C^{-1/2-\varepsilon}(M)\) with \(\mu(A)=0\) we prove below that \(\nu(A)=0\). The measure \(\nu\) would thus be absolutely continuous with respect to \(\mu\), a contradiction with the fact that \(\nu\) is singular with respect to \(\mu\). We give two proofs.
1. Assuming \(\mu(A)=0\), one would have from the identity in law (5.8) and the fact that \[\mathbb{P}\big{(}u(2,\mu)\in A\big{)}=\mu(\mathcal{P}_{2}\mathbf{1}_{A})=\mu(A)=0\]
the identity
\[\nu(A) =\nu(\mathcal{P}_{2}\mathbf{1}_{A})=\mathbb{Q}_{\ell}\big{(}u_{\ell}(2,\nu)\in A\big{)}=\mathbb{E}\big{[}R_{\ell}\mathbf{1}_{A}(u_{\ell}(2,\nu))\big{]}\] \[\stackrel{{(5.9)}}{{=}}\mathbb{E}\big{[}R_{\ell}\mathbf{1}_{A}(u(2,\mu))\mathbf{1}_{\tau<2}\big{]}+\mathbb{E}\big{[}R_{\ell}\mathbf{1}_{A}(u_{\ell}(2,\nu))\mathbf{1}_{\tau=2}\big{]}\] \[=\mathbb{E}\big{[}R_{\ell}\mathbf{1}_{A}(u_{\ell}(2,\nu))\mathbf{1}_{\tau=2}\big{]}\leq\mathbb{Q}_{\ell}(\tau=2)\stackrel{{(5.7)}}{{\leq}}a<1.\]
This is not enough to conclude that \(\nu(A)=0\), but instead of coupling the two dynamics on a single time interval \([1,2]\) we can repeat if necessary our attempts to couple them a fixed finite number of times, during the time intervals \([2k-1,2k]\) after a coupling-free evolution on the time interval \([2k-2,2k-1]\), for \(k\leq n\), say. Denote by \(u_{\ell}^{(n)}(\cdot,\phi_{2}^{\prime})\) the corresponding dynamics. Write \(\tau_{1}\in[1,2],\ldots,\tau_{n}\in[2n-1,2n]\) for the successive coupling times and set
\[\ln R_{\ell,\phi_{1},\phi_{2}}^{(n)}:=-\sum_{k=1}^{n}\bigg{(}\ell\,\xi\Big{(} \mathbf{1}_{2k-1<\cdot<\tau_{k}}\,\frac{v(\cdot,\phi_{1}^{\prime})-v_{\ell}( \cdot,\phi_{2}^{\prime})}{\|v(\cdot,\phi_{1}^{\prime})-v_{\ell}(\cdot,\phi_{2} ^{\prime})\|_{L^{2}}}\Big{)}+\frac{\ell^{2}\big{(}\tau_{k}-2k+1\big{)}}{2} \bigg{)}\]
and
\[d\mathbb{Q}_{\ell}^{(n)}:=R_{\ell}^{(n)}d\mathbb{P}.\]
The probability measure \(\mathbb{Q}_{\ell}^{(n)}\) implicitly depends on \(\phi_{1}\) and \(\phi_{2}\) and we also set
\[\overline{\mathbb{Q}}_{\ell}^{(n)}:=\int\mathbb{Q}_{\ell}^{(n)}\nu(d\phi_{2}) \mu(d\phi_{1}).\]
The process \(u_{\ell}^{(n)}(\cdot,\phi_{2})\) has under \(\mathbb{Q}_{\ell}^{(n)}\) the same distribution as \(u(\cdot,\phi_{2})\) and the pair
\[\big{(}u(\cdot,\phi_{1}),u_{\ell}^{(n)}(\cdot,\phi_{2})\big{)}\]
is Markovian under both \(\mathbb{P}\) and \(\mathbb{Q}_{\ell}^{(n)}\). Denote by \(\theta_{r}:\Omega\to\Omega\) a family of measurable measure preserving maps on \((\Omega,\mathcal{F},\mathbb{P})\) such that \(\theta_{r}\circ\theta_{r^{\prime}}=\theta_{r+r^{\prime}}\) and \(\theta_{r}\xi(\cdot)=\xi(\cdot+r)\). The shift acts on measurable functions of \(\xi\) such as \(u\) and \(u_{\ell}^{(n)}\). One then has as above
\[\nu(A) =\int\mathbb{E}_{\ell}^{(n)}\Big{[}\mathbf{1}_{A}\big{(}u_{\ell} ^{(n)}(2n,\phi_{2})\big{)}\mathbf{1}_{\tau_{n}(\phi_{1},\phi_{2})=2n+2}\Big{]} \nu(d\phi_{2})\mu(d\phi_{1})\] \[=\int\mathbb{E}_{\ell}^{(n)}\Big{[}\mathbf{1}_{A}\big{(}\theta_{ 2n-2}u_{\ell}(2,u_{\ell}^{(n-1)}(2n-2,\phi_{2}))\big{)}\mathbf{1}_{\tau_{n}( \phi_{1},\phi_{2})=2n+2}\Big{]}\nu(d\phi_{2})\mu(d\phi_{1})\] \[=\int\mathbb{E}_{\ell}^{(n-1)}\Big{[}\mathbb{E}_{\ell}^{(1)} \Big{[}\mathbf{1}_{A}\big{(}\theta_{2n-2}u_{\ell}^{(1)}(2,u_{\ell}^{(n-1)}(2n -2,\phi_{2}))\big{)}\mathbf{1}_{\tau_{1}=2}\Big{|}u_{\ell}^{(n-1)}(2n-2,\phi_{ 2})\Big{]}\times\] \[\mathbf{1}_{\tau_{n-1}(\phi_{1},\phi_{2})=2n}\Big{]}\nu(d\phi_{2}) \mu(d\phi_{1})\] \[\leq a\,\overline{\mathbb{Q}}_{\ell}^{(n-1)}(\tau_{n-1}=2)\leq a^{ n},\]
by induction. The conclusion \(\nu(A)=0\) follows from the fact that \(n\) is arbitrary.
_2._ Alternatively, one can assume the two invariant probability measures \(\mu\) and \(\nu\) singular and proceed as follows to get a contradiction. Denote by \(\mathbb{P}_{1}\) the law of \(u(\cdot,\mu)\) and by \(\mathbb{P}_{2}\) the law of \(u(\cdot,\nu)\), with a time parameter running in the time interval \([0,2]\). Step 1 produces a coupling between \(\mathbb{P}_{1}\) and a probability with positive density \(D\) with respect to \(\mathbb{P}_{2}\). This coupling is a probability measure \(\mathbb{Q}\) on \(C\big{(}[0,2],C^{-1/2-\varepsilon}(M)\big{)}\times C\big{(}[0,2],C^{-1/2- \varepsilon}(M)\big{)}\) that gives a positive probability to the event \(\big{\{}u_{1}(2)=u_{2}(2)\big{\}}\), denoting by \(u_{1}\) and \(u_{2}\) the canonical marginal processes on the product space. Denote by \(\pi_{1}\) and \(\pi_{2}\) the canonical projections and set
\[d\mathbb{Q}^{-}:=(1\wedge D^{-1})d\mathbb{Q}\]
and
\[\mathbb{Q}^{+}:=\mathbb{Q}^{-}+(\mathbb{P}_{1}-\pi_{1\star}\mathbb{Q}^{-}) \otimes(\mathbb{P}_{2}-\pi_{2\star}\mathbb{Q}^{-}).\]
The measure \(\mathbb{Q}^{+}\) on \(C\big{(}[0,2],C^{-1/2-\varepsilon}(M)\big{)}\times C\big{(}[0,2],C^{-1/2- \varepsilon}(M)\big{)}\) is a probability measure with marginals \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) - that is, a coupling of these two probability measures. We further have that \(\mathbb{Q}\) is absolutely continuous with respect to \(\mathbb{Q}^{+}\), so
\[\mathbb{Q}^{+}\big{(}u_{1}(2)=u_{2}(2)\big{)}>0. \tag{5.10}\]
Since under \(\mathbb{Q}^{+}\) the random variable \(u_{1}(2)\) has law \(\mu\) and the random variable \(u_{2}(2)\) has law \(\nu\) we cannot have at the same time (5.10) and the fact that \(\mu\) and \(\nu\) are singular. We thank M. Hairer for sharing his insight on this reasoning. \(\rhd\)
Together with the existence result proved in [4], Theorem 9 allows us to define the \(\Phi_{3}^{4}\) measure on \(M\) as the unique invariant probability measure of the semigroup \(({\cal P}_{t})_{t\geq 0}\). This shows that the \(\Phi_{3}^{4}\) measure is associated with the Riemannian manifold \(M\). It follows in particular that any smooth isometry between two \(3\)-dimensional boundaryless Riemannian manifolds sends the \(\Phi_{3}^{4}\) measure of the former to the \(\Phi_{3}^{4}\) measure of the latter.
## 6 Strong Feller property
The coupling used in the proof of Theorem 9 can be used to prove the strong Feller property of the semigroup \(({\cal P}_{t})_{t\geq 0}\) by showing that it satisfies some Harnack-type inequality. As a preliminary remark to the next statement, note that since one has the inclusion
\[\big{\{}\tau(\ell,\phi_{1},\phi_{2})=2\big{\}}\subset\bigg{\{}\big{\|}v(1, \phi_{1}^{\prime})-v_{\ell}(1,\phi_{2}^{\prime})\big{\|}_{L^{2}}>\frac{\ell- o_{\bar{\xi}}(\ell)}{2}\bigg{\}}\]
with \(v_{\ell}(1,\phi_{2}^{\prime})=v(1,\phi_{2}^{\prime})\) it follows from the \(L^{p}\) or \(L^{\infty}\) coming down from infinity result that
\[\mathbb{P}(\tau(\ell,\phi_{1},\phi_{2})=2)=:a_{\ell}=o_{\ell}(1)\]
is a function of \(\ell\) that goes to \(0\) as \(\ell\) goes to infinity.
**Theorem 11**: \(-\) _Pick a finite exponent \(p_{1}>1\) and a time \(t>0\). For any \(\ell>0\) there exists a function_
\[\Psi_{\ell}:C^{-1/2-\varepsilon}(M)\times C^{-1/2-\varepsilon}(M)\to\mathbb{R},\]
_that is null on the diagonal and continuous, such that the inequality_
\[({\cal P}_{t}f)^{p_{1}}(\phi_{2})\leq{\cal P}_{t}(f^{p_{1}})(\phi_{1})\,e^{\Psi_{\ell}(\phi_{1},\phi_{2})}\Big{(}1+a_{\ell}^{\frac{1}{p_{1}}}\,\|f\|_{\infty}\,e^{\Psi_{\ell}(\phi_{1},\phi_{2})}\Big{)}^{p_{1}}, \tag{6.1}\]
_holds for any measurable bounded function \(f\geq 1\) on \(C^{-1/2-\varepsilon}(M)\), and any \(\phi_{1},\phi_{2}\in C^{-1/2-\varepsilon}(M)\)._
**Proof -** We use the notations of the proof of Theorem 9. Since \(u_{\ell}(2,\phi_{2})=u(2,\phi_{1})\) on the event \(\{\tau(\ell,\phi_{1},\phi_{2})<2\}\) one has
\[({\cal P}_{t}f)(\phi_{2}) =\mathbb{E}_{\mathbb{Q}_{\ell,\phi_{1},\phi_{2}}}\big{[}f(u_{ \ell}(2,\phi_{2}))\big{]}=\mathbb{E}\big{[}R_{\ell,\phi_{1},\phi_{2}}f(u_{ \ell}(2,\phi_{2}))\big{]}\] \[=\mathbb{E}\big{[}R_{\ell,\phi_{1},\phi_{2}}f\big{(}u(2,\phi_{1}) \big{)}{\bf 1}_{\tau<2}\big{]}+\mathbb{E}_{\mathbb{Q}_{\ell,\phi_{1},\phi_{2}}} \big{[}f(u_{\ell}(2,\phi_{2})){\bf 1}_{\tau=2}\big{]}\]
and we obtain an inequality of the form (6.1) from Hölder's inequality and the condition \(f\geq 1\), which allows us to factorize in the second inequality below,
\[({\cal P}_{t}f)^{p_{1}}(\phi_{2}) \leq\Big{(}\mathbb{E}\big{[}R_{\ell,\phi_{1},\phi_{2}}f(u(2,\phi_{1}))\big{]}+\|f\|_{\infty}\,\mathbb{E}\big{[}R_{\ell,\phi_{1},\phi_{2}}^{\frac{p_{1}}{p_{1}-1}}\big{]}^{\frac{p_{1}-1}{p_{1}}}\,a_{\ell}^{\frac{1}{p_{1}}}\Big{)}^{p_{1}}\] \[\leq\mathbb{E}\big{[}R_{\ell,\phi_{1},\phi_{2}}f(u(2,\phi_{1}))\big{]}^{p_{1}}\Big{(}1+\|f\|_{\infty}\,\mathbb{E}\big{[}R_{\ell,\phi_{1},\phi_{2}}^{\frac{p_{1}}{p_{1}-1}}\big{]}^{\frac{p_{1}-1}{p_{1}}}\,a_{\ell}^{\frac{1}{p_{1}}}\Big{)}^{p_{1}}\] \[\leq\mathbb{E}\big{[}f^{p_{1}}(u(2,\phi_{1}))\big{]}\,\mathbb{E}\big{[}R_{\ell,\phi_{1},\phi_{2}}^{\frac{p_{1}}{p_{1}-1}}\big{]}^{p_{1}-1}\Big{(}1+\|f\|_{\infty}\,\mathbb{E}\big{[}R_{\ell,\phi_{1},\phi_{2}}^{\frac{p_{1}}{p_{1}-1}}\big{]}^{\frac{p_{1}-1}{p_{1}}}\,a_{\ell}^{\frac{1}{p_{1}}}\Big{)}^{p_{1}}.\]
This is (6.1) with
\[e^{\Psi_{\ell}(\phi_{1},\phi_{2})}=\mathbb{E}\big{[}R_{\ell,\phi_{1},\phi_{2}}^ {\frac{p_{1}}{p_{1}-1}}\big{]}^{p_{1}-1}\]
(The function \(\Psi_{\ell}\) also depends on \(p_{1}\) but we do not emphasize that dependence in the notation as it is irrelevant for us here.) We check from classical arguments that \(\Psi_{\ell}(\phi_{1},\phi_{2})\) is a continuous function of \(\phi_{1}\) and \(\phi_{2}\). \(\rhd\)
**Corollary 12**: \(-\) _The semigroup \(({\cal P}_{t})_{t\geq 0}\) has the strong Feller property._
**Proof -** We follow F.Y. Wang's classical proof - see e.g. Theorem 1.4.1 in [22]. Fix \(t>0\). Applying (6.1) to \(f=1+rg\), with a measurable function \(0\leq g\leq 1\) and \(0<r\leq 1\), one gets
\[\big{(}1+r({\cal P}_{t}g)(\phi_{2})\big{)}^{p_{1}}\leq{\cal P}_{t}\big{(}(1+rg)^{p_{1}}\big{)}(\phi_{1})\,e^{\Psi_{\ell}(\phi_{1},\phi_{2})}\Big{(}1+a_{\ell}^{\frac{1}{p_{1}}}(1+\|g\|_{\infty})\,e^{\Psi_{\ell}(\phi_{1},\phi_{2})}\Big{)}^{p_{1}} \tag{6.2}\]
so
\[1+p_{1}r(\mathcal{P}_{t}g)(\phi_{2})+o(r)\leq\Big{(}1+p_{1}r(\mathcal{P}_{t}g)(\phi_{1})+o(r)\Big{)}e^{\Psi_{\ell}(\phi_{1},\phi_{2})}\Big{(}1+a_{\ell}^{\frac{1}{p_{1}}}(1+\|g\|_{\infty})\,e^{\Psi_{\ell}(\phi_{1},\phi_{2})}\Big{)}^{p_{1}}.\]
Sending \(\phi_{2}\) to \(\phi_{1}\) one gets from the continuity of \(\Psi_{\ell}\)
\[1+p_{1}r\limsup_{\phi_{2}\to\phi_{1}}(\mathcal{P}_{t}g)(\phi_{2})+o(r)\leq\Big{(}1+p_{1}r(\mathcal{P}_{t}g)(\phi_{1})+o(r)\Big{)}\Big{(}1+a_{\ell}^{\frac{1}{p_{1}}}(1+\|g\|_{\infty})\Big{)}^{p_{1}}\]
and since \(\ell>0\) and \(r>0\) are arbitrary and \(a_{\ell}\) goes to \(0\) as \(\ell\) goes to infinity
\[\limsup_{\phi_{2}\to\phi_{1}}(\mathcal{P}_{t}g)(\phi_{2})\leq(\mathcal{P}_{t} g)(\phi_{1}).\]
Exchanging \(\phi_{1}\) and \(\phi_{2}\) in (6.2), the same reasoning gives
\[(\mathcal{P}_{t}g)(\phi_{1})\leq\liminf_{\phi_{2}\to\phi_{1}}(\mathcal{P}_{t} g)(\phi_{2}),\]
from which the continuity of \(\mathcal{P}_{t}g\) at \(\phi_{1}\) follows. The conclusion follows since \(\phi_{1}\) is arbitrary. \(\rhd\)
## Appendix A An elementary continuity result
The following statement is a simple variation on the classical proof of continuity of the paraproduct and resonant operators; we learned it from V.N. Dang although it is likely to be known already.
**Lemma 13**: \(-\) _Let \((p_{1},q_{1}),(p_{2},q_{2}),(p,q)\) in \([1,+\infty]\) be such that_
\[\frac{1}{p_{1}}+\frac{1}{p_{2}} =\frac{1}{p},\] \[\frac{1}{q_{1}}+\frac{1}{q_{2}} =\frac{1}{q}.\]
_For any \(\gamma>0\) there is a constant \(C_{\gamma}\) such that for all integers \(N\) and real numbers \(a_{1}\geq 0\) one has_
\[\|f\prec g\|_{B^{a_{2}-\gamma}_{p,q}}\leq C_{\gamma}\,\|f\|_{B^{a_{1}}_{p_{1},q_{1}}}\Big{(}2^{-N\gamma}\|g\|_{B^{a_{2}}_{p_{2},q_{2}}}+N\|g\|_{L^{p_{2}}}\Big{)}.\] (A.1)
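To see how (A.1) is used — a reading aid, with \(K\geq 2\) a hypothetical ratio introduced only for this illustration — assume \(\|g\|_{B^{a_{2}}_{p_{2},q_{2}}}\leq K\|g\|_{L^{p_{2}}}\) and take \(N\simeq\gamma^{-1}\log_{2}K\); then
\[\|f\prec g\|_{B^{a_{2}-\gamma}_{p,q}}\lesssim_{\gamma}\big{(}1+\log_{2}K\big{)}\,\|f\|_{B^{a_{1}}_{p_{1},q_{1}}}\,\|g\|_{L^{p_{2}}},\]
which is the "best of the two norms" trade-off exploited in the estimate of the \(Z_{2}\) term in the proof of Lemma 10.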
We give the details here for the reader's convenience, in the setting of a Euclidean space. An elementary adaptation of the pattern of proof is needed to make it work in the setting of a 3-dimensional Riemannian manifold with the paraproduct and resonant operators defined in Appendix A of [4] or in [5]. We use the usual \(\Delta_{k}\) notation for the Littlewood-Paley projectors.
**Proof \(-\)** Write
\[f\prec g=\sum_{\ell<N,k\leq\ell-2}(\Delta_{k}f)(\Delta_{\ell}g)+\sum_{\ell\geq N,k\leq\ell-2}(\Delta_{k}f)(\Delta_{\ell}g)\] (A.2)
for any integer \(N\). On the one hand, one has for all \(m\in\mathbb{N}\)
\[\bigg{\|}\Delta_{m}\bigg{(}\sum_{\ell<N,k\leq\ell-2}(\Delta_{k}f)(\Delta_{\ell}g)\bigg{)}\bigg{\|}_{L^{p}}\lesssim\mathbf{1}_{m-1\leq N}\sum_{|\ell-m|\leq 1,k\leq\ell-2}\big{\|}(\Delta_{k}f)(\Delta_{\ell}g)\big{\|}_{L^{p}}\] \[\lesssim\mathbf{1}_{m-1\leq N}\sum_{|\ell-m|\leq 1,k\leq\ell-2}2^{-\ell a_{1}}\,2^{ka_{1}}\|\Delta_{k}f\|_{L^{p_{1}}}\|\Delta_{\ell}g\|_{L^{p_{2}}}\] \[\lesssim\mathbf{1}_{m-1\leq N}\,m\,\|f\|_{B^{a_{1}}_{p_{1},q_{1}}}\|g\|_{L^{p_{2}}}\lesssim N\|f\|_{B^{a_{1}}_{p_{1},q_{1}}}\|g\|_{L^{p_{2}}},\]
using in the penultimate inequality Hölder's inequality and the fact that \(a_{1}\geq 0\), and Young's convolution inequality to see that \(\|\Delta_{\ell}g\|_{L^{p_{2}}}\leq\|g\|_{L^{p_{2}}}\). On the other hand, one has for \(m\geq N-1\)
\[\bigg{\|}\Delta_{m}\bigg{(}\sum_{\ell\geq N,k\leq\ell-2}(\Delta_{ k}f)(\Delta_{\ell}g)\bigg{)}\bigg{\|}_{L^{p}} \lesssim\sum_{|\ell-m|\leq 1,k\leq\ell-2}\big{\|}(\Delta_{k}f)(\Delta_{ \ell}g)\big{\|}_{L^{p}}\] \[\lesssim\sum_{|\ell-m|\leq 1,k\leq\ell-2}2^{-ka_{1}}\,2^{ka_{1}}\| \Delta_{k}f\|_{L^{p_{1}}}\|\Delta_{\ell}g\|_{L^{p_{2}}}\]
\[\lesssim m^{\frac{q_{1}-1}{q_{1}}}\|f\|_{B^{a_{1}}_{p_{1},q_{1}}}\sum_{|\ell-m| \leq 1}\|\Delta_{\ell}g\|_{L^{p_{2}}}\]
from Hölder's inequality applied to the sum over \(k\). Estimate (A.1) follows as a consequence. \(\rhd\)
|
2302.04830 | Counting matrix points on certain varieties over finite fields | Classical hypergeometric functions are well-known to play an important role
in arithmetic algebraic geometry. These functions offer solutions to ordinary
differential equations, and special cases of such solutions are periods of
Picard-Fuchs varieties of Calabi-Yau type. Gauss' $_2F_1$ includes the
celebrated case of elliptic curves through the theory of elliptic functions. In
the 80s, Greene defined finite field hypergeometric functions that can be used
to enumerate the number of finite field points on such varieties. We extend
some of these results to count finite field ``matrix points." For example, for
every $n\geq 1,$ we consider the matrix elliptic curves $$ B^2 = A(A-I_n)(A-a
I_n), $$ where $(A,B)$ are commuting $n\times n$ matrices over a finite field
$\mathbb{F}_q$ and $a\neq 0,1$ is fixed. Our formulas are assembled from
Greene's hypergeometric functions and $q$-multinomial coefficients. We use
these formulas to prove Sato-Tate distributions for the error terms for matrix
point counts for these curves and some families of $K3$ surfaces. | Yifeng Huang, Ken Ono, Hasan Saad | 2023-02-09T18:35:43Z | http://arxiv.org/abs/2302.04830v5 | # Counting matrix points on certain varieties over finite fields
###### Abstract.
Classical hypergeometric functions are well-known to play an important role in arithmetic algebraic geometry. These functions offer solutions to ordinary differential equations, and special cases of such solutions are periods of Picard-Fuchs varieties of Calabi-Yau type. Gauss' \({}_{2}F_{1}\) includes the celebrated case of elliptic curves through the theory of elliptic functions. In the 80s, Greene defined finite field hypergeometric functions that can be used to enumerate the number of finite field points on such varieties. We extend some of these results to count finite field "matrix points." For example, for every \(n\geq 1\), we consider the matrix elliptic curves
\[B^{2}=A(A-I_{n})(A-aI_{n}),\]
where \((A,B)\) are commuting \(n\times n\) matrices over a finite field \(\mathbb{F}_{q}\) and \(a\neq 0,1\) is fixed. Our formulas are assembled from Greene's hypergeometric functions and \(q\)-multinomial coefficients. We use these formulas to prove Sato-Tate distributions for the error terms for matrix point counts for these curves and some families of \(K3\) surfaces.
Key words and phrases: Hypergeometric functions, Matrix points, Elliptic curves, \(K3\) surfaces. 2020 Mathematics Subject Classification: 33C70; 14Gxx. Y.H. thanks the AMS-Simons Travel Grant for making this collaboration possible. K.O. thanks the Thomas Jefferson Fund and the NSF (DMS-2002265 and DMS-2055118) for their support.
## 1. Introduction
Let \(\mathbb{F}_{q}\) be a finite field of characteristic \(\mathrm{char}(\mathbb{F}_{q})\geq 5\), and let \(a\in\mathbb{F}_{q}\setminus\{0,1\}\).
where \(a\in\mathbb{F}_{q}\setminus\{0,-1\}.\) In this notation, it is known (see Theorem 11.18 of [15] and Proposition 4.1 of [1]) that
\[|X_{a}(\mathbb{F}_{q})|=1+q^{2}+19q+q^{2}\cdot{}_{3}F_{2}^{\text{ff}}(-a)_{q}. \tag{1.7}\]
In this note we show that the hypergeometric identities (1.4) and (1.7), combined with the combinatorial input from partitions and \(q\)-multinomial coefficients, count suitable "matrix points" on these curves and surfaces. To make this precise, we first introduce some notation. If \(n,m\) are positive integers and \(K\) is a field, then let \(C_{n,m}(K)\) denote the set of pairwise-commuting \(m\)-tuples of \(n\times n\)-matrices over \(K.\) Due to the noncommutativity of matrix multiplication, geometric problems related to studying matrix rational points on curves and higher varieties only make sense when the matrices are commuting, that is, when the matrix points are in \(C_{n,m}(K).\) We will be interested in counting tuples in \(C_{n,m}(K)\) which satisfy the equations defining some affine varieties. More precisely, we will consider the sets
\[\{(A,B)\in C_{n,2}(\mathbb{F}_{q}):B^{2}=A(A-I_{n})(A-aI_{n})\}\]
and
\[\{(A,B,C)\in C_{n,3}(\mathbb{F}_{q}):C^{2}=AB(A+I_{n})(B+I_{n})(A+aB)\}\]
as matrix analogues of the Legendre elliptic curves \(E_{\text{L}}\) and the K3 surfaces \(X_{a}\) considered above.
To express our results, we introduce some notation. If \(\lambda\) is a partition of a nonnegative integer \(k,\) we write \(n(\lambda;i)\) to denote the number of times \(i\) is repeated in \(\lambda.\) Furthermore, we write \(|\lambda|=k,\) and write \(l(\lambda)=\sum n(\lambda;i)\) to denote the number of parts of \(\lambda.\) Additionally, we introduce certain polynomials in \(q.\) More precisely, if \(z\) and \(q\) are any complex numbers, and \(n\) is any positive integer, then we define the \(q\)-Pochhammer symbol
\[(z;q)_{n}:=(1-z)(1-zq)\ldots(1-zq^{n-1}) \tag{1.8}\]
with \((z;q)_{0}=1.\) Finally, for an integer \(n\geq 0\) and \(m_{1}+\ldots+m_{n}=n\) a partition of \(n,\) we define the \(q\)-multinomial factor
\[\binom{n}{m_{1},m_{2},\ldots,m_{n}}_{q}:=\frac{(q;q)_{n}}{(q;q)_{m_{1}}(q;q)_{ m_{2}}\ldots(q;q)_{m_{n}}}.\]
It is known that \(\binom{n}{m_{1},\ldots,m_{n}}_{q}\) is a monic polynomial in \(q\) and that \(\binom{n}{m_{1},\ldots,m_{n}}_{q}\) approaches the usual multinomial coefficient as \(q\to 1.\)
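To make these \(q\)-analogues concrete, the following short Python sketch (ours, not part of the paper) implements the \(q\)-Pochhammer symbol (1.8) and the \(q\)-multinomial coefficient symbolically and illustrates the \(q\to 1\) limit just mentioned; the composition \((1,2,2)\) of \(5\) used below is an arbitrary illustrative choice.

```python
# Minimal sketch (ours, not from the paper): the q-Pochhammer symbol (1.8) and the
# q-multinomial coefficient, with a check of the q -> 1 limit mentioned above.
from sympy import symbols, cancel, expand

q = symbols('q')

def q_pochhammer(z_, q_, n):
    """(z; q)_n = (1 - z)(1 - z q) ... (1 - z q^(n-1)), with (z; q)_0 = 1."""
    out = 1
    for i in range(n):
        out *= (1 - z_ * q_**i)
    return out

def q_multinomial(parts, q_):
    """[n; m_1, ..., m_k]_q = (q; q)_n / ((q; q)_{m_1} ... (q; q)_{m_k})."""
    num = q_pochhammer(q_, q_, sum(parts))
    den = 1
    for m in parts:
        den *= q_pochhammer(q_, q_, m)
    return cancel(num / den)

example = q_multinomial([1, 2, 2], q)     # the q-analogue of 5!/(1! 2! 2!)
print(expand(example))                    # a monic polynomial in q
print(example.subs(q, 1))                 # 30, the ordinary multinomial coefficient
```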
We start by expressing the number of commuting matrices on a Legendre elliptic curve. More precisely, if \(n\) is a positive integer, \(q\) is a prime power and \(a\in\mathbb{F}_{q},\) we let
\[N_{n,2}(a;q):=|\{(A,B)\in C_{n,2}(\mathbb{F}_{q}):B^{2}=A(A-I_{n})(A-aI_{n})\}|. \tag{1.9}\]
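Definition (1.9) can be checked by exhaustive enumeration when \(n\) and \(q\) are small. The brute-force Python sketch below (ours, purely illustrative) computes \(N_{2,2}(a;q)\) for the arbitrary choices \(q=5\) and \(a=2\); the theorem that follows packages such counts into a closed formula.

```python
# Brute-force sketch (ours, not from the paper) of N_{2,2}(a; q) from (1.9),
# for the illustrative choices q = 5 and a = 2.  Matrices are 2x2 tuples mod p.
from itertools import product

p, a = 5, 2                                   # p >= 5 prime, a not in {0, 1} mod p

def mat_mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def mat_sub(X, Y):
    return tuple(tuple((X[i][j] - Y[i][j]) % p for j in range(2)) for i in range(2))

def scalar(c):                                 # the scalar matrix c * I_2
    return ((c % p, 0), (0, c % p))

matrices = [((x, y), (z, w)) for x, y, z, w in product(range(p), repeat=4)]

count = 0
for A in matrices:
    rhs = mat_mul(mat_mul(A, mat_sub(A, scalar(1))), mat_sub(A, scalar(a)))
    for B in matrices:
        if mat_mul(A, B) == mat_mul(B, A) and mat_mul(B, B) == rhs:
            count += 1

print(count)                                   # this is N_{2,2}(2; 5)
```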
In this notation, we have the following theorem that determines these counts, and also explains the connection with the classical \({}_{2}F_{1}^{\text{cl}}\)-hypergeometric function.
**Theorem 1.1**.: _If \(q=p^{r}\) is a prime power with \(p\geq 5\) and \(a\in\mathbb{F}_{q}\setminus\{0,1\},\) then_
\[N_{n,2}(a;q)=P(n,0)_{q}-\sum_{k=1}^{n}\phi_{q^{k}}(-1)\cdot P(n,k)_{q}\cdot{}_ {2}F_{1}^{\text{ff}}(a)_{q^{k}},\]
_where_
\[P(n,k)_{q}:=(-1)^{k}q^{n(n-k)+\frac{k(k+1)}{2}}\sum_{s=0}^{\lfloor\frac{n-k}{ 2}\rfloor}q^{2s(s-n+k)}\binom{n}{s,n-k-2s,k+s}_{q}.\]
_Moreover, \(P(n,k)_{q}\) is a polynomial in \(q\) with leading term \((-1)^{k}\cdot q^{n^{2}-\frac{k(k-1)}{2}}\) and_
\[\lim_{q\to 1}P(n,k)_{q}=(-1)^{k}\binom{n}{k}\cdot{}_{2}F_{1}^{\text{cl}}\left( \begin{array}{cc}\frac{k-n}{2}&\frac{k+1-n}{2}\\ &k+1\end{array}\right|4\right).\]
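The claims of Theorem 1.1 about \(P(n,k)_{q}\) (that it is a polynomial with the stated leading term, and its \(q\to 1\) limit) can be verified symbolically for small \(n\). The sketch below (ours, not from the paper) does this with sympy for \(n\leq 4\), using the fact that the classical \({}_{2}F_{1}^{\mathrm{cl}}\) series appearing in the limit terminates.

```python
# Sketch (ours, not from the paper): verify, for n <= 4, the leading term and the
# q -> 1 limit of P(n, k)_q stated in Theorem 1.1, using exact sympy arithmetic.
from math import comb
from sympy import symbols, cancel, expand, degree, LC, rf, factorial, Rational

q = symbols('q')

def q_poch(m):                                   # (q; q)_m
    out = 1
    for i in range(1, m + 1):
        out *= (1 - q**i)
    return out

def P(n, k):                                     # P(n, k)_q exactly as in Theorem 1.1
    total = 0
    for s in range((n - k) // 2 + 1):
        total += q**(2 * s * (s - n + k)) * q_poch(n) / (q_poch(s) * q_poch(n - k - 2 * s) * q_poch(k + s))
    return cancel((-1)**k * q**(n * (n - k) + k * (k + 1) // 2) * total)

def classical_2F1(k, n):                         # 2F1((k-n)/2, (k+1-n)/2; k+1 | 4), terminating
    a1, a2, b = Rational(k - n, 2), Rational(k + 1 - n, 2), k + 1
    return sum(rf(a1, s) * rf(a2, s) / rf(b, s) * Rational(4)**s / factorial(s)
               for s in range((n - k) // 2 + 1))

for n in range(1, 5):
    for k in range(n + 1):
        poly = expand(P(n, k))
        assert degree(poly, q) == n * n - k * (k - 1) // 2        # stated degree
        assert LC(poly, q) == (-1)**k                             # stated leading coefficient
        assert poly.subs(q, 1) == (-1)**k * comb(n, k) * classical_2F1(k, n)
print("Theorem 1.1's claims about P(n, k)_q hold for all n <= 4")
```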
As a corollary, we consider the matrix analog of the Sato-Tate distribution for point counts for elliptic curves over finite fields. In direct analogy, we find that the limiting distribution of the "random part" of matrix point counts on Legendre elliptic curves is semicircular. More precisely, if \(n\) is a positive integer and \(q\) is a prime power, then we let
\[a_{\mathrm{L},n}(a;q):=N_{n,2}(a;q)-P(n,0)_{q}. \tag{1.10}\]
In this notation, we have the following result.
**Corollary 1.2**.: _If \(-2\leq b<c\leq 2\) and \(n\) and \(r\) are fixed positive integers, then we have_
\[\lim_{p\to\infty}\frac{|\{a\in\mathbb{F}_{p^{r}}:p^{\frac{r}{2}-rn^{2}}a_{\mathrm{L},n}(a;p^{r})\in[b,c]\}|}{p^{r}}=\frac{1}{2\pi}\int_{b}^{c}\sqrt{4-t^{2}}dt.\]
**Example 1.3**.: For the prime \(p=93283\), we compare the histogram of the distribution of \(p^{-7/2}a_{L,2}(a;p)\) for \(a\in\mathbb{F}_{p}\) with the limiting distribution.
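Although the figure cannot be reproduced here, the phenomenon in Corollary 1.2 is easy to observe numerically. The sketch below (ours, not from the paper) illustrates the simplest case \(n=r=1\), where \(a_{\mathrm{L},1}(a;p)=N_{1,2}(a;p)-P(1,0)_{p}\) reduces to the character sum \(\sum_{x}\phi_{p}(x(x-1)(x-a))\); the prime \(p=1009\) is an arbitrary small choice, and instead of plotting a histogram we compare the first even empirical moments of \(p^{-1/2}a_{\mathrm{L},1}(a;p)\) with the Catalan numbers \(1\) and \(2\), the corresponding moments of the semicircle law.

```python
# Illustration (ours, not from the paper) of Corollary 1.2 in the simplest case
# n = r = 1, where a_{L,1}(a; p) = sum_x phi_p(x(x-1)(x-a)).  We use a small prime
# and compare empirical even moments of p^{-1/2} a_{L,1}(a; p) with the semicircle
# moments (the Catalan numbers 1 and 2) instead of drawing the histogram.
p = 1009                                       # arbitrary prime >= 5; larger p fits better

squares = {(x * x) % p for x in range(1, p)}   # nonzero quadratic residues mod p
def phi(x):
    x %= p
    return 0 if x == 0 else (1 if x in squares else -1)

values = []
for a in range(2, p):                          # a runs over F_p \ {0, 1}
    s = sum(phi(x * (x - 1) * (x - a)) for x in range(p))
    values.append(s / p**0.5)

m2 = sum(v**2 for v in values) / len(values)
m4 = sum(v**4 for v in values) / len(values)
print(f"second moment ~ {m2:.3f} (semicircle: 1), fourth moment ~ {m4:.3f} (semicircle: 2)")
```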
We also consider the matrix version of the \(K3\) surfaces described above. If \(n\) is a positive integer, \(q\) is a prime power, and \(a\in\mathbb{F}_{q}\), then we let
\[N_{n,3}(a;q):=|\{(A,B,C)\in C_{n,3}(\mathbb{F}_{q}):C^{2}=AB(A+I_{n})(B+I_{n}) (A+aB)\}|. \tag{1.11}\]
In this notation, we have the following theorem that gives matrix point counts in terms of the \({}_{3}F_{2}^{\mathrm{ff}}\)-hypergeometric function, \(4\)-tuples of integer partitions, and \(q\)-multinomial coefficients.
**Theorem 1.4**.: _If \(q=p^{r}\) is a prime power with \(p\geq 5\) and \(a\in\mathbb{F}_{q}\setminus\{0,-1\},\) then we have_
\[N_{n,3}(a;q)=R(n,\phi_{q}(a+1))_{q}+Q\left(n,0,\phi_{q}(a+1)\right)_{q}+\sum_{ k=1}^{n}Q\left(n,k,\phi_{q}(a+1)\right)_{q}\cdot{}_{3}F_{2}^{\mathrm{ff}} \left(\frac{a}{a+1}\right)_{q^{k}},\]
_where_
\[Q(n,k,\gamma)_{q}:=q^{\frac{n(n-1)}{2}+k}\sum_{\begin{subarray}{c}\lambda_{1},\ldots,\lambda_{4}\\ |\lambda_{1}|+\ldots+|\lambda_{4}|=n\\ l(\lambda_{3})-l(\lambda_{4})=k\end{subarray}}q^{l(\lambda_{1})}\gamma^{l(\lambda_{2})}(-1)^{n-m(\lambda_{1},\ldots,\lambda_{4})}\] \[(q;q)_{n-m(\lambda_{1},\ldots,\lambda_{4})}\cdot q^{\sum\frac{n(\lambda_{i},j)(n(\lambda_{i},j)+1)}{2}}\cdot\binom{n}{n(\lambda_{i},j),n-m(\lambda_{1},\ldots,\lambda_{4})}_{q}\]
_and_
\[R(n,\gamma)_{q}:= -q^{\frac{n(n-1)}{2}}\cdot\sum_{\begin{subarray}{c}\lambda_{1},\ldots,\lambda_{4}\\ |\lambda_{1}|+\ldots+|\lambda_{4}|=n\\ l(\lambda_{3})\neq l(\lambda_{4})\end{subarray}}q^{l(\lambda_{1})}\gamma^{l(\lambda_{2})+l(\lambda_{3})-l(\lambda_{4})}(-1)^{n-m(\lambda_{1},\ldots,\lambda_{4})}\] \[(q;q)_{n-m(\lambda_{1},\ldots,\lambda_{4})}\cdot q^{\sum\frac{n(\lambda_{i},j)(n(\lambda_{i},j)+1)}{2}}\cdot\binom{n}{n(\lambda_{i},j),n-m(\lambda_{1},\ldots,\lambda_{4})}_{q}\]
_with \(\lambda_{1},\ldots,\lambda_{4}\) being partitions and \(m(\lambda_{1},\ldots,\lambda_{4})=\sum\limits_{i=1}^{4}l(\lambda_{i}).\) Moreover, \(Q(n,k,\gamma)_{q}\) is a polynomial in \(q\) with leading term \(q^{n^{2}+n}\) and_
\[\lim_{q\to 1}Q(n,k,\gamma)_{q}=\binom{n}{k}\cdot(1+\gamma)^{n-k}{}_{2}F_{1}^{ \mathrm{cl}}\left(\begin{array}{cc}\frac{k-n}{2}&\frac{k+1-n}{2}\\ &k+1\end{array}\bigg{|}\,\frac{4}{(1+\gamma)^{2}}\right),\]
_when \(\gamma\neq-1\) and \(\lim_{q\to 1}Q(n,k,-1)_{q}=0.\)_
This theorem allows us to determine the Sato-Tate type limiting distribution of the "random part" of matrix point counts on the \(K3\) surfaces \(X_{a}.\) More precisely, if \(n\) is a positive integer and \(q\) is a prime power, then we let
\[A_{n}(a;q):=N_{n,3}(a;q)-Q(n,0,\phi_{q}(a+1))_{q}-R(n,\phi_{q}(a+1))_{q}. \tag{1.12}\]
In this notation, we have the following result.
**Corollary 1.5**.: _If \(-3\leq b<c\leq 3\) and \(n\) and \(r\) are fixed positive integers, then we have_
\[\lim_{p\to\infty}\frac{|\{a\in\mathbb{F}_{p^{r}}:p^{r-rn^{2}-rn}A_{n}(a;p^{r})\in[b,c]\}|}{p^{r}}=\frac{1}{4\pi}\int_{b}^{c}f(t)dt,\]
_where_
\[f(t):=\begin{cases}\sqrt{\frac{3-|t|}{1+|t|}}&\text{if }1<|t|<3,\\ \sqrt{\frac{3-t}{1+t}}+\sqrt{\frac{3+t}{1-t}}&\text{if }|t|<1,\\ 0&\text{otherwise}.\end{cases}\]
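As a sanity check on this density (ours, not from the paper), one can verify that \(f(t)/(4\pi)\) is the law of \(s\,(X^{2}-1)\), where \(X\) is semicircular on \([-2,2]\) and \(s\) is an independent uniform random sign. The short Monte-Carlo sketch below samples it and compares the empirical even moments with \(\sum_{i=0}^{m}(-1)^{i}\binom{m}{i}\frac{(2i)!}{i!(i+1)!}\), the moment values used in the proof of Corollary 1.5 below; the sample size and seed are arbitrary.

```python
# Sanity check (ours, not from the paper): f(t)/(4*pi) is the law of s*(X**2 - 1)
# with X semicircular on [-2, 2] and s a uniform random sign.  We sample it and
# compare empirical even moments with sum_{i=0}^{m} (-1)^i C(m, i) Cat_i, the
# moment values used below in the proof of Corollary 1.5.
import random
from math import comb, sqrt

random.seed(0)                                  # arbitrary seed for reproducibility

def sample_semicircle():
    # rejection sampling from the density sqrt(4 - x^2) / (2*pi) on [-2, 2]
    while True:
        x = random.uniform(-2.0, 2.0)
        if random.uniform(0.0, 1.0) <= sqrt(4.0 - x * x) / 2.0:
            return x

def catalan(i):
    return comb(2 * i, i) // (i + 1)

samples = [random.choice([-1, 1]) * (sample_semicircle()**2 - 1) for _ in range(200_000)]

for m in (2, 4, 6):
    empirical = sum(t**m for t in samples) / len(samples)
    predicted = sum((-1)**i * comb(m, i) * catalan(i) for i in range(m + 1))
    print(f"m = {m}: empirical {empirical:.2f} vs predicted {predicted}")
```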
**Example 1.6**.: For the prime \(p=93283,\) we compare the histogram of the distribution of \(p^{-5}A_{2}(a;p)\) for \(a\in\mathbb{F}_{p}\) with the limiting distribution.
\(p^{-5}A_{2}(a;p)\) histogram for \(p=93283\)
## 2. Some zeta functions
Let \(q\) be a prime power. Recall that \(\operatorname{GL}_{n}(\mathbb{F}_{q})\) is the group of \(n\times n\) invertible matrices over the finite field \(\mathbb{F}_{q}\) with \(q\) elements. It will be repetitively used in this paper that
\[|\operatorname{GL}_{n}(\mathbb{F}_{q})|=(-1)^{n}q^{\frac{n(n-1)}{2}}(q;q)_{n}. \tag{2.1}\]
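Formula (2.1) is easy to confirm by brute force for tiny parameters; the short check below (ours, purely illustrative) does so for \(n=2\) and \(q=3\).

```python
# Quick check (ours) of (2.1) for n = 2 and q = 3: count invertible 2x2 matrices
# over F_3 by brute force and compare with (-1)^n q^{n(n-1)/2} (q; q)_n.
from itertools import product

q = 3
brute = sum(1 for a, b, c, d in product(range(q), repeat=4) if (a * d - b * c) % q != 0)
formula = (-1)**2 * q**1 * (1 - q) * (1 - q**2)
print(brute, formula)                           # both equal 48
```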
Now, let \(X=\operatorname{Spec}R\) be an affine variety over \(\mathbb{F}_{q}\). Say
\[R:=\frac{\mathbb{F}_{q}[T_{1},\ldots,T_{m}]}{(f_{1},\ldots,f_{r})}. \tag{2.2}\]
Following the work [10] of the first author, we define the set of \(n\times n\)_matrix points_ on \(X\) as the set of commuting tuples of matrices satisfying the defining equations for \(X\):
\[C_{n}(X):=\biggl{\{}\underline{A}=(A_{1},\ldots,A_{m})\in\operatorname{Mat}_{ n}(\mathbb{F}_{q})^{m}:[A_{i},A_{j}]=0,f_{i}(\underline{A})=0\biggr{\}}. \tag{2.3}\]
Note that \(C_{1}(X)\cong X(\mathbb{F}_{q})\). Though not needed in this paper, it is worth pointing out that the cardinality of \(C_{n}(X)\) is independent of the choice of defining equations for \(X\); in fact, by comparing [10, Eq. 4.2] and [10, Eq. 4.15], there is an equation-free equivalent characterization for the cardinality of \(C_{n}(X)\):
\[\frac{|C_{n}(X)|}{|\operatorname{GL}_{n}(\mathbb{F}_{q})|}=\sum_{\dim_{\mathbb{F}_{q}}H^{0}(X;M)=n}\frac{1}{|\operatorname{Aut}M|}, \tag{2.4}\]
where the sum ranges over all isomorphism classes of zero-dimensional coherent sheaves on \(X\) of degree \(n\). This characterization also makes \(|C_{n}(X)|\) well-defined for any variety \(X\) over \(\mathbb{F}_{q}\).
The number of matrix points on a smooth curve or a smooth surface is given by infinite product formulas for a zeta function associated to it. For any (affine) variety \(X\) over \(\mathbb{F}_{q}\), consider its _Cohen-Lenstra series_ (terminology of [10]):
\[\hat{Z}_{X}(t):=\sum_{n=0}^{\infty}\frac{|C_{n}(X)|}{|\operatorname{GL}_{n}( \mathbb{F}_{q})|}t^{n}, \tag{2.5}\]
and recall the local zeta function
\[Z_{X}(t):=\exp\biggl{(}\sum_{n=1}^{\infty}\frac{|X(\mathbb{F}_{q^{n}})|}{n}\, t^{n}\biggr{)}. \tag{2.6}\]
**Proposition 2.1** ([10, Proposition 4.6(a)]).: _If \(X\) is a smooth curve over \(\mathbb{F}_{q}\), then_
\[\hat{Z}_{X}(t)=\prod_{j\geq 1}Z_{X}(tq^{-j}). \tag{2.7}\]
**Proposition 2.2** ([10, Proposition 4.6(b)]).: _If \(X\) is a smooth surface over \(\mathbb{F}_{q}\), then_
\[\hat{Z}_{X}(t)=\prod_{i,j\geq 1}Z_{X}(t^{i}q^{-j}). \tag{2.8}\]
Proposition 2.1 is essentially due to Cohen and Lenstra [6], and Proposition 2.2 is essentially due to the Feit-Fine formula [7] for counting commuting matrices and ideas of Bryan and Morrison [4]. We remark that both formulas heavily exploit the local geometry of \(X\), namely, smoothness of dimension 1 or 2. In fact, in light of the main theorem of [10], Proposition 2.1 ceases to hold if \(X\) is a multiplicative reduction of an elliptic curve over a number field (but holds if it is a good reduction).
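To see Proposition 2.1 in action, the sketch below (ours, purely illustrative) takes the affine Legendre curve \(X\colon y^{2}=x(x-1)(x-a)\) over \(\mathbb{F}_{5}\) with the arbitrary choice \(a=2\), builds \(Z_{X}(t)=(1-a_{q}t+qt^{2})/(1-qt)\) from a single point count, expands the product (2.7) with exact fractions (truncated at \(j\leq J\); the truncation error is far too small to change the rounded value for the small \(n\) used), and reads off \(|C_{n}(X)|=|\operatorname{GL}_{n}(\mathbb{F}_{q})|\cdot[t^{n}]\hat{Z}_{X}(t)\). The \(n=1\) output is the affine point count \(7\), and the \(n=2\) output should agree with the brute-force count from the sketch given after (1.9).

```python
# Sketch (ours, not from the paper): |C_n(X)| for the affine Legendre curve over F_5
# via the product formula (2.7), using exact fractions and a truncated product.
from fractions import Fraction

q, a, N, J = 5, 2, 2, 40               # illustrative choices; J is the truncation depth

# one point count over F_q gives a_q, hence the numerator 1 - a_q t + q t^2 of Z_X(t)
squares = {(x * x) % q for x in range(1, q)}
phi = lambda x: 0 if x % q == 0 else (1 if x % q in squares else -1)
a_q = -sum(phi(x * (x - 1) * (x - a)) for x in range(q))        # a_q = alpha + alpha-bar

def mul(f, g):                          # product of power series truncated at degree N
    h = [Fraction(0)] * (N + 1)
    for i, fi in enumerate(f):
        for k, gk in enumerate(g):
            if i + k <= N:
                h[i + k] += fi * gk
    return h

series = [Fraction(1)] + [Fraction(0)] * N
for j in range(1, J + 1):
    u = Fraction(1, q**j)               # q^{-j}
    numerator = [Fraction(1), -a_q * u, q * u * u]              # numerator of Z_X(t q^{-j})
    geometric = [(q * u)**k for k in range(N + 1)]              # 1 / (1 - q^{1-j} t)
    series = mul(series, mul(numerator, geometric))

def gl_order(n):                        # |GL_n(F_q)| = prod_{i<n} (q^n - q^i)
    out = 1
    for i in range(n):
        out *= q**n - q**i
    return out

for n in range(1, N + 1):
    print(n, round(series[n] * gl_order(n)))    # |C_n(X)|; n = 1 gives the 7 affine points
```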
## 3. Proofs of Theorems 1.1 and 1.4
Here we use the results of the previous section to prove Theorems 1.1 and 1.4 and their corollaries.
**3.1. Proof of Theorem 1.1.** Fix a prime power \(q=p^{r}\) with \(p\geq 5\) and \(r\geq 1\), and fix \(a\in\mathbb{F}_{q}\setminus\{0,1\}.\) Then, denoting by \(X\) the affine part of \(E_{\mathrm{L}}(a)\), Theorem V.2.4 of [19] states that
\[Z_{X}(t)=\frac{(1-\alpha t)(1-\overline{\alpha}t)}{1-qt},\]
where \(\alpha\) and \(\overline{\alpha}\) are the traces of Frobenius. Note that there is a missing factor of \(\frac{1}{1-t}\) in this expression since we are only considering the affine part of \(X.\)
By Proposition 2.1, we then have that
\[\hat{Z}_{X}(t)=\prod_{j\geq 1}\frac{(1-\alpha tq^{-j})(1-\overline{\alpha} tq^{-j})}{1-tq^{1-j}}.\]
It is well-known due to Euler [2, Corollary 2.2] that
\[\prod_{j\geq 1}(1-ctq^{-j})=\sum_{m\geq 0}\frac{(ct)^{m}}{(q;q)_{m}} \tag{3.1}\]
and
\[\prod_{j\geq 1}(1-ctq^{-j})^{-1}=\sum_{m\geq 0}\frac{(-1)^{m}q^{m(m-1)/2} \cdot(ct)^{m}}{(q;q)_{m}}. \tag{3.2}\]
This implies that
\[\hat{Z}_{X}(t)=\left(\sum_{r\geq 0}\frac{(\alpha t)^{r}}{(q;q)_{r}}\right) \cdot\left(\sum_{s\geq 0}\frac{(\overline{\alpha}t)^{s}}{(q;q)_{s}}\right) \cdot\left(\sum_{u\geq 0}\frac{(-1)^{u}q^{u(u+1)/2}\cdot t^{u}}{(q;q)_{u}} \right).\]
By the definition of \(\hat{Z}_{X}(t)\) and by (2.1), we then have
\[N_{n,2}(a;q)=(-1)^{n}q^{n(n-1)/2}(q;q)_{n}\cdot\sum_{\begin{subarray}{c}r+s+u=n\\ r,s,u\geq 0\end{subarray}}\frac{\alpha^{r}\overline{\alpha}^{s}(-1)^{u}q^{\frac{u(u+1)}{2}}}{(q;q)_{r}(q;q)_{s}(q;q)_{u}}.\]
Furthermore, again by Theorem V.2.4 of [19], we have that \(\alpha\overline{\alpha}=q\) and therefore, we can rewrite this sum as
\[N_{n,2}(a;q)=(-1)^{n}q^{n(n-1)/2}(q;q)_{n}\sum_{\begin{subarray}{c}r+s+u=n\\ r,s,u\geq 0\end{subarray}}\frac{(-1)^{u}\alpha^{r-s}q^{s+\frac{u(u+1)}{2}}}{(q;q)_{r}(q;q)_{s}(q;q)_{u}}.\]
Dividing this sum according to the value of \(r-s\), we then have
\[N_{n,2}(a;q)=(-1)^{n}q^{n(n-1)/2}(q;q)_{n}\sum_{k}\alpha^{k}\cdot \sum_{\begin{subarray}{c}s\geq 0\\ r=s+k\geq 0\\ u=n-2s-k\geq 0\end{subarray}}(-1)^{u}q^{s+u(u+1)/2}\frac{1}{(q;q)_{r}(q;q)_{s}(q ;q)_{u}}\] \[=(-1)^{n}q^{n(n-1)/2}(q;q)_{n}\sum_{k}\alpha^{k}\sum_{s=\min\{-k, 0\}}^{\lfloor\frac{n-k}{2}\rfloor}(-1)^{n-k}q^{s+\frac{(n-2s-k)(n-2s-k+1)}{2}} \cdot\frac{1}{(q;q)_{s}(q;q)_{s+k}(q;q)_{n-2s-k}}\] \[=q^{n(n-1)/2}\sum_{k}\alpha^{k}(-1)^{k}\sum_{s=\min\{-k,0\}}^{ \lfloor\frac{n-k}{2}\rfloor}q^{\frac{k^{2}}{2}-kn-\frac{k}{2}+\frac{n^{2}}{2 }+\frac{n}{2}}\cdot q^{2ks-2ns+2s^{2}}\cdot\frac{1}{(q;q)_{s}(q;q)_{s+k}(q;q)_{ n-2s-k}}\] \[=q^{n(n-1)/2}\cdot\sum_{s=0}^{\lfloor\frac{n-k}{2}\rfloor}q^{ \frac{n^{2}}{2}+\frac{n}{2}+2s^{2}-2ns}\cdot\frac{(q;q)_{n}}{(q;q)_{s}(q;q)_{s }(q;q)_{n-2s}}\] \[+q^{n(n-1)/2}\sum_{k>0}\alpha^{k}(-1)^{k}\sum_{s=0}^{\lfloor\frac {n-k}{2}\rfloor}q^{\frac{k^{2}}{2}-kn-\frac{k}{2}+\frac{n^{2}}{2}+\frac{n}{2}} \cdot q^{2ks-2ns+2s^{2}}\cdot\frac{(q;q)_{n}}{(q;q)_{s}(q;q)_{s+k}(q;q)_{n-2s-k}}\] \[+q^{n(n-1)/2}\sum_{k<0}q^{k}\overline{\alpha}^{k}(-1)^{k}\sum_{s= -k}^{\lfloor\frac{n-k}{2}\rfloor}q^{\frac{k^{2}}{2}-kn-\frac{k}{2}+\frac{n^{2} }{2}+\frac{n}{2}}\cdot q^{2ks-2ns+2s^{2}}\cdot\frac{(q;q)_{n}}{(q;q)_{s}(q;q)_{ s+k}(q;q)_{n-2s-k}}.\]
Replacing \(k\) in the last sum with \(-k\) and then \(s\) with \(s+k\), we then have
\[N_{n,2}(a;q) =P(n,0)_{q}+\sum_{k>0}\alpha^{k}(-1)^{k}\sum_{s=0}^{\lfloor\frac {n-k}{2}\rfloor}q^{\frac{k^{2}}{2}-kn-\frac{k}{2}+n^{2}}\cdot q^{2ks-2ns+2s^{ 2}}\cdot\frac{(q;q)_{n}}{(q;q)_{s}(q;q)_{s+k}(q;q)_{n-2s-k}}\] \[+\sum_{k>0}\overline{\alpha}^{k}(-1)^{k}\sum_{s=k}^{\lfloor\frac {n+k}{2}\rfloor}q^{\frac{k^{2}}{2}+kn-\frac{k}{2}+n^{2}}q^{-2ks-2ns+2s^{2}} \cdot\frac{(q;q)_{n}}{(q;q)_{s}(q;q)_{s-k}(q;q)_{n-2s+k}}\] \[=P(n,0)_{q}+\sum_{k>0}\alpha^{k}(-1)^{k}\sum_{s=0}^{\lfloor\frac {n-k}{2}\rfloor}q^{\frac{k^{2}}{2}-kn-\frac{k}{2}+n^{2}}\cdot q^{2ks-2ns+2s^{ 2}}\cdot\frac{(q;q)_{n}}{(q;q)_{s}(q;q)_{s+k}(q;q)_{n-2s-k}}\] \[+\sum_{k>0}\overline{\alpha}^{k}(-1)^{k}\sum_{s=0}^{\lfloor\frac {n-k}{2}\rfloor}q^{\frac{k^{2}}{2}+kn-\frac{k}{2}+n^{2}}q^{-2nk+2ks-2ns+2s^{ 2}}\cdot\frac{(q;q)_{n}}{(q;q)_{s}(q;q)_{s+k}(q;q)_{n-2s-k}}\] \[=P(n,0)_{q}+\sum(-1)^{k}(\alpha^{k}+\overline{\alpha}^{k})\cdot q ^{\frac{k^{2}}{2}-kn-\frac{k}{2}+n^{2}}\sum_{s=0}^{\lfloor\frac{n-k}{2}\rfloor} q^{2ks-2ns+2s^{2}}\cdot\frac{(q;q)_{n}}{(q;q)_{s}(q;q)_{s+k}(q;q)_{n-2s-k}}.\]
Since \(a_{\mathrm{L}}(a;q^{k})=\alpha^{k}+\overline{\alpha}^{k}\), (1.4) implies that
\[N_{n,2}(a;q)=P(n,0)_{q}-\sum_{k=1}^{n}\phi_{q^{k}}(-1)\cdot P(n,k)_{q}\cdot{}_ {2}F_{1}^{\mathrm{ff}}(a)_{q^{k}},\]
where
\[P(n,k)_{q}=(-1)^{k}\cdot q^{n(n-k)+\frac{k(k+1)}{2}}\sum_{s=0}^{\lfloor\frac{ n-k}{2}\rfloor}q^{2s(s-n+k)}\cdot\frac{(q;q)_{n}}{(q;q)_{s}(q;q)_{s+k}(q;q)_{n-2s-k}}.\]
The leading coefficient of \(P(n,k)_{q}\) is clear from the expression. Since the \(q\)-multinomial approaches the usual multinomial as \(q\to 1\), we have
\[\lim_{q\to 1}P(n,k)_{q}=(-1)^{k}\sum_{s=0}^{\lfloor\frac{n-k}{2}\rfloor} \binom{n}{s,n-k-2s,k+s}=(-1)^{k}\binom{n}{k}\sum_{s=0}^{\lfloor\frac{n-k}{2} \rfloor}\frac{k!(n-k)!}{s!(k+s)!(n-k-2s)!}.\]
It is easy to see by induction that if \(m\) and \(s\) are integers with \(m<0\) and \(2s+m\leq 0\), then
\[\left(\frac{m}{2}\right)_{s}\left(\frac{m}{2}+\frac{1}{2}\right)_{s}=\frac{(-m)! }{(-m-2s)!4^{s}}.\]
Furthermore, it is evident by definition that for \(k\geq 0\), we have \((k+1)_{s}=\frac{(k+s)!}{k!}.\) Applying this above with \(m=k-n\), we have
\[\lim_{q\to 1}P(n,k)_{q}=(-1)^{k}{n\choose k}\sum_{s=0}^{\lfloor\frac{n-k}{2} \rfloor}\frac{\left(\frac{k-n}{2}\right)_{s}\left(\frac{k-n+1}{2}\right)_{s}}{ (k+1)_{s}}\cdot\frac{4^{s}}{s!},\]
which is our statement since the summand vanishes for \(s>\lfloor\frac{n-k}{2}\rfloor\).
### Proof of Corollary 1.2
We prove this corollary by implementing the method of moments, as employed in previous work by the second two authors in [17]. By Theorem 1.1 and the fact that \(p^{\frac{r}{2}}{}_{2}F_{1}^{\mathrm{ff}}(a)_{p^{r}}\in[-2,2]\), we have
\[p^{\frac{r}{2}-rn^{2}}a_{\mathbb{L},n}(a;p^{r})=-\phi_{p^{r}}(-1)\cdot p^{\frac{r}{2}}{}_{2}F_{1}^{\mathrm{ff}}(a)_{p^{r}}+O_{r,n}(p^{-\frac{r}{2}}).\]
Therefore, if \(m\) is a nonnegative integer, we have that
\[\frac{1}{p^{r}}\sum_{a\in\mathbb{F}_{p^{r}}\setminus\{0,1\}}\left(p^{\frac{r}{2}-rn^{2}}a_{\mathbb{L},n}(a;p^{r})\right)^{m}=\frac{1}{p^{r}}\sum_{a\in\mathbb{F}_{p^{r}}\setminus\{0,1\}}(-\phi_{p^{r}}(-1)p^{\frac{r}{2}}{}_{2}F_{1}^{\mathrm{ff}}(a)_{p^{r}})^{m}\] \[+\frac{1}{p^{r}}\sum_{k=1}^{m}\frac{1}{p^{\frac{rk}{2}}}\cdot{m\choose k}\frac{1}{p^{r}}\sum_{a\in\mathbb{F}_{p^{r}}\setminus\{0,1\}}(-\phi_{p^{r}}(-1)p^{\frac{r}{2}}{}_{2}F_{1}^{\mathrm{ff}}(a)_{p^{r}})^{m-k}\] \[=\frac{1}{p^{r}}\sum_{a\in\mathbb{F}_{p^{r}}\setminus\{0,1\}}(-\phi_{p^{r}}(-1)p^{\frac{r}{2}}{}_{2}F_{1}^{\mathrm{ff}}(a)_{p^{r}})^{m}+o_{m,r,n}(1)\text{ as }p\to\infty.\]
By Theorem 1.1 of [17], this implies that as \(p\to\infty\) we have
\[\frac{1}{p^{r}}\sum_{a\in\mathbb{F}_{p^{r}}\setminus\{0,1\}}\left(p^{\frac{r}{2}-rn^{2}}a_{\mathbb{L},n}(a;p^{r})\right)^{m}=\begin{cases}o_{m,r,n}(1)&\text{ if }m\text{ is odd}\\ \frac{(2l)!}{l!(l+1)!}+o_{m,r,n}(1)&\text{ if }m=2l\text{ is even}.\end{cases}\]
The proof of Corollary 1.2 of [17] then implies the limiting distribution.
### Proof of Theorem 1.4
Fix a prime power \(q=p^{r}\) with \(p\geq 5\) and \(r\geq 1\), and fix \(a\in\mathbb{F}_{q}\setminus\{0,-1\}.\) Denoting by \(A_{a}\) the affine surface given by
\[s^{2}=xy(x+1)(y+1)(x+ay),\]
then \(X:=A_{a}\) and \(X_{a}\) differ by a connected union of rational curves (see [1, SS1]). In particular, we have
\[[X_{a}]=[X]+19\mathbb{L}+1\]
in the Grothendieck ring of \(\mathbb{F}_{q}\)-varieties, where \(\mathbb{L}\) is the class of the affine line (cf. the terms \((24q-6)-(5q-7)\) at the end of the proof of [1, Proposition 4.1]). Therefore, by Theorem 1.1 of [1], the local zeta function of \(X\) is given by
\[Z_{X}(t)=\frac{1}{(1-q^{2}t)(1-\gamma qt)(1-\gamma\alpha^{2}t)(1-\gamma\overline {\alpha}^{2}t)},\]
where \(\gamma=\phi_{q}(a+1)\) and \(\alpha,\overline{\alpha}\) are the Frobenius eigenvalues for the Clausen elliptic curve \(E_{CL}\left(\frac{-1}{a+1}\right).\)
Therefore, by Proposition 2.2, we have that
\[\hat{Z}_{X}(t) =\prod_{i,j\geq 1}\frac{1}{(1-q^{2-j}t^{i})(1-\gamma q^{1-j}t^{i})(1-\gamma\alpha^{2}q^{-j}t^{i})(1-\gamma\overline{\alpha}^{2}q^{-j}t^{i})}\] \[=\prod_{i\geq 1}\prod_{b\in\{q,\gamma,\frac{\gamma\overline{\alpha}^{2}}{q},\frac{\gamma\alpha^{2}}{q}\}}\prod_{j\geq 0}\frac{1}{1-bt^{i}q^{-j}}.\]
By (3.2) and \(\alpha\overline{\alpha}=q\), we then have
\[\hat{Z}_{X}(t) =\prod_{i\geq 1}\prod_{b\in\{q,\gamma,\frac{\gamma\alpha^{2}}{q},\frac{\gamma\overline{\alpha}^{2}}{q}\}}\sum_{m\geq 0}\frac{(-1)^{m}q^{\frac{m(m+1)}{2}}b^{m}t^{im}}{(q;q)_{m}}\] \[=\prod_{i\geq 1}\sum_{m\geq 0}t^{im}\cdot\sum_{m_{1}+\ldots+m_{4}=m}\frac{(-1)^{m}q^{\frac{m_{1}(m_{1}+1)}{2}+\ldots+\frac{m_{4}(m_{4}+1)}{2}}\gamma^{m_{2}+m_{3}+m_{4}}\cdot q^{m_{1}-m_{3}+m_{4}}\cdot\alpha^{2(m_{3}-m_{4})}}{(q;q)_{m_{1}}\cdot\ldots\cdot(q;q)_{m_{4}}}\] \[=\sum_{n\geq 0}t^{n}\cdot\sum(-1)^{\sum m_{u,v}}\,\frac{q^{\sum\frac{m_{u,v}(m_{u,v}+1)}{2}}\,q^{\sum_{u}(m_{u,1}-m_{u,3}+m_{u,4})}\,\gamma^{\sum_{u}(m_{u,2}+m_{u,3}+m_{u,4})}\cdot\alpha^{2\sum_{u}(m_{u,3}-m_{u,4})}}{\prod(q;q)_{m_{u,v}}},\]
where the latter sum is over all possible combinations of nonnegative integers \(m_{u,v}\) with \(v=1,\ldots,4\) and such that \(\sum i_{u}m_{u,v}=n\) for positive integers \(i_{u}\geq 1\). To simplify this expression, for each \(v=1,\ldots,4\), we denote by \(\lambda_{v}\) the partition given by adding \(i_{u}\) with multiplicity \(m_{u,v}\).
Then, in the notation of theorem, the coefficient of \(t^{n}\) in \(\hat{Z}_{X}(t)\) is given by
\[\sum_{\begin{subarray}{c}\lambda_{1},\ldots,\lambda_{4}\\ |\lambda_{1}|+\ldots+|\lambda_{4}|=n\end{subarray}}(-1)^{l(\lambda_{1})+\ldots+l(\lambda_{4})}q^{\sum\frac{n(\lambda_{i},j)(n(\lambda_{i},j)+1)}{2}}q^{l(\lambda_{1})-l(\lambda_{3})+l(\lambda_{4})}\gamma^{l(\lambda_{2})+l(\lambda_{3})+l(\lambda_{4})}\alpha^{2(l(\lambda_{3})-l(\lambda_{4}))}.\]
Dividing those partitions into \(l(\lambda_{3})-l(\lambda_{4})=k\) for \(0\leq k\leq n\), using the definition of \(\hat{Z}_{X}(t)\) and using (1.6), we have that
\[N_{n,3}(a;q)=S\left(n,0,\phi_{q}(a+1)\right)_{q}+\sum_{k=1}^{n}S(n,k,\phi_{q}(a+1))_{q}\left(q^{2k}\cdot\phi_{q}(a+1)^{k}\cdot{}_{3}F_{2}^{\text{ff}}\left(\frac{a}{a+1}\right)_{q^{k}}-q^{k}\right),\]
where
\[S(n,k,\gamma)_{q}:=(-1)^{n}q^{\frac{n(n-1)}{2}}(q;q)_{n}\cdot\sum_{ \begin{subarray}{c}|\lambda_{1}|+\ldots+|\lambda_{4}|=n\\ l(\lambda_{3})-l(\lambda_{4})=k\end{subarray}}(-1)^{m(\lambda_{1},\ldots, \lambda_{4})}\gamma^{l(\lambda_{2})+k+2l(\lambda_{4})}\frac{q^{l(\lambda_{1}) -k}q^{\frac{\sum n(\lambda_{i},j)(n(\lambda_{i},j)+1)}{2}}}{\prod(q;q)_{m(u,v) }}.\]
The expression for \(N_{n,3}(a;q)\) then follows immediately.
The leading coefficient for \(Q(n,k,\gamma)_{q}\) is clear from the definition. It remains to show the behavior of this polynomial as \(q\to 1\). To this end, note that \((q;q)_{n-m(\lambda_{1},\ldots,\lambda_{4})}\to 0\) as \(q\to 1\) if \(m(\lambda_{1},\ldots,\lambda_{4})<n\). Therefore, the only contributing partitions are those with \(l(\lambda_{1})+\ldots+l(\lambda_{4})=n\), that is, \(\lambda_{i}=(1,1,\ldots,1)\). Therefore, we have
\[\lim_{q\to 1}Q(n,k,\gamma)_{q} =\sum_{\begin{subarray}{c}x+y+z+w=n\\ z-w=k\end{subarray}}\frac{n!}{x!y!z!w!}\cdot\gamma^{y}\] \[=\sum_{w=0}^{\lfloor\frac{n-k}{2}\rfloor}\frac{n!}{w!(w+k)!(n-k-2w)!}\sum_{y=0}^{n-k-2w}\gamma^{y}\cdot\binom{n-k-2w}{y}\] \[=\sum_{w=0}^{\lfloor\frac{n-k}{2}\rfloor}\binom{n}{w,w+k,n-k-2w}(1+\gamma)^{n-k-2w}.\]
The rest of the proof proceeds exactly as that of Theorem 1.1.
### Proof of Corollary 1.5
We prove this corollary by implementing the method of moments, as employed in previous work by the second two authors in [17]. By Theorem 1.4 and the fact that \(p^{r}{}_{3}F_{2}^{\text{ff}}(a)_{p^{r}}\in[-3,3]\), we have
\[p^{r-rn^{2}-rn}A_{n}(a;p^{r})=p^{r}{}_{3}F_{2}^{\text{ff}}(a)_{p^{r}}+O_{r,n}(p ^{-r}).\]
Therefore, as in the proof of Corollary 1.5, and using Theorem 1.3 of [17], for positive integers \(m\), we have that
\[\frac{1}{p^{r}}\sum_{a\in\mathbb{F}_{p^{r}}\setminus\{0,-1\}}\left(p^{r-rn^{2}-rn}A_{n}(a;p^{r})\right)^{m}=\begin{cases}o_{m,r,n}(1)&\text{ if }m\text{ is odd}\\ \sum\limits_{i=0}^{m}(-1)^{i}\binom{m}{i}\frac{(2i)!}{i!(i+1)!}+o_{m,r,n}(1)&\text{ if }m\text{ is even.}\end{cases}\]
The proof of Corollary 1.4 of [17] then implies the limiting distribution.
|
2308.01913 | Tutorial-Cum-Survey on Semantic and Goal-Oriented Communication:
Research Landscape, Challenges, and Future Directions | SemCom and goal-oriented SemCom are designed to transmit only
semantically-relevant information and hence help to minimize power usage,
bandwidth consumption, and transmission delay. Consequently, SemCom and
goal-oriented SemCom embody a paradigm shift that can change the status quo
that wireless connectivity is an opaque data pipe carrying messages whose
context-dependent meaning and effectiveness have been ignored. On the other
hand, 6G is critical for the materialization of major SemCom use cases (e.g.,
machine-to-machine SemCom) and major goal-oriented SemCom use cases (e.g.,
autonomous transportation). The paradigms of \textit{6G for (goal-oriented)
SemCom} and \textit{(goal-oriented) SemCom for 6G} call for the tighter
integration and marriage of 6G, SemCom, and goal-oriented SemCom. To facilitate
this integration and marriage of 6G, SemCom, and goal-oriented SemCom, this
comprehensive tutorial-cum-survey paper first explains the fundamentals of
semantics and semantic information, semantic representation, theories of
semantic information, and definitions of semantic entropy. It then builds on
this understanding and details the state-of-the-art research landscape of
SemCom and goal-oriented SemCom in terms of their respective algorithmic,
theoretical, and realization research frontiers. This paper also exposes the
fundamental and major challenges of SemCom and goal-oriented SemCom, and
proposes novel future research directions for them in terms of their
aforementioned research frontiers. By presenting novel future research
directions for SemCom and goal-oriented SemCom along with their corresponding
fundamental and major challenges, this tutorial-cum-survey article duly
stimulates major streams of research on SemCom and goal-oriented SemCom theory,
algorithm, and implementation for 6G and beyond. | Tilahun M. Getu, Georges Kaddoum, Mehdi Bennis | 2023-07-04T07:48:19Z | http://arxiv.org/abs/2308.01913v1 | Tutorial-Cum-Survey on Semantic and Goal-Oriented Communication: Research Landscape, Challenges, and Future Directions
###### Abstract
Amid the global rollout of fifth-generation (5G) wireless communication system services, researchers in academia, industry, and national laboratories have been developing proposals and roadmaps for the sixth generation (6G). Despite the many 6G proposals and roadmaps put forward, the materialization of 6G as presently envisaged is fraught with many fundamental interdisciplinary, multidisciplinary, and transdisciplinary (IMT) challenges. To alleviate some of these challenges, semantic communication (SemCom) and goal-oriented SemCom (effectiveness-level SemCom) have emerged as promising technological enablers of 6G. SemCom and goal-oriented SemCom are designed to transmit only semantically-relevant information and hence help to minimize power usage, bandwidth consumption, and transmission delay. Consequently, SemCom and goal-oriented SemCom embody a paradigm shift that can change the status quo that wireless connectivity is an opaque data pipe carrying messages whose context-dependent meaning and effectiveness have been ignored. On the other hand, 6G is critical for the materialization of major SemCom use cases (e.g., machine-to-machine SemCom) and major goal-oriented SemCom use cases (e.g., autonomous transportation). The paradigms of 6G for _(goal-oriented) SemCom_ and _(goal-oriented) SemCom for 6G_ call for the tighter integration and marriage of 6G, SemCom, and goal-oriented SemCom. To facilitate this integration and marriage of 6G, SemCom, and goal-oriented SemCom, this comprehensive tutorial-cum-survey paper first explains the fundamentals of semantics and semantic information, semantic representation, theories of semantic information, and definitions of semantic entropy. It then builds on this understanding and details the state-of-the-art research landscape of SemCom and goal-oriented SemCom in terms of their respective algorithmic, theoretical, and realization research frontiers. This paper also exposes the fundamental and major challenges of SemCom and goal-oriented SemCom, and proposes novel future research directions for them in terms of their aforementioned research frontiers. By presenting novel future research directions for SemCom and goal-oriented SemCom along with their corresponding fundamental and major challenges, this tutorial-cum-survey article duly stimulates major streams of research on SemCom and goal-oriented SemCom theory, algorithm, and implementation for 6G and beyond.
6G, SemCom, goal-oriented SemCom, semantics and semantic information, research landscape of SemCom, fundamental challenges of SemCom, future directions of SemCom, research landscape of goal-oriented SemCom, fundamental challenges of goal-oriented SemCom, future directions of goal-oriented SemCom.
## I Introduction
### _Motivation_
In tandem with the worldwide rollout of fifth-generation (5G) wireless communication system services, researchers in academia, industry, and national laboratories have been developing visions [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26] of the forthcoming wireless communication technology colloquially known as the sixth generation (6G). 6G as it is envisioned nowadays is driven by a variety of anticipated applications as diverse as:
* Multi-sensory extended reality (XR) applications, connected robotic and autonomous systems, wireless brain-computer interactions, and blockchain and distributed ledger technologies [1].
* Haptic communication, massive internet of things (IoT) [27], massive IoT-integrated smart cities, and automation and manufacturing [28].
* Accurate indoor positioning, new communication terminals, high-quality communication services on board aircraft, worldwide connectivity and integrated networking, communications that support industry verticals [29], holographic and tactile communications, and human bond communications [12].
* Industrial IoT [30], internet of robots [25], flying vehicles [18], and wireless data centers [18, 31].
* Smart Grid 2.0, Industry 5.0, personalized body area networks, Healthcare 5.0; internet of industrial smart things, and internet of healthcare [3].
* The internet of no things (the Metaverse) [32, 33].
These applications are motivated by numerous trends and use cases that drive 6G.
The trends and use cases that have been buffered to date are as heterogeneous as:
* The emergence of smart surfaces and environments, the massive availability of small data, from self-organizing networks to self-sustaining networks, the convergence of 3CLS (communications, computing, control, localization, and sensing), and the end of the smartphone era [1].
* The use of mobile edge, cloud, and fog computing, intelligent distributed computing and data analytics, and dynamic infrastructure [34].
* Artificial intelligence (AI)-enabled autonomous wireless networks and the convergence of intelligent sensing, communication, computing, caching, and control [35].
* Multi-sensory holographic teleportation, real-time remote healthcare, autonomous cyber-physical systems, intelligent industrial automation, high-performance precision agriculture, space connectivity, and smart infrastructure and environments [11].
* Digital twinning [36, 37, 38, 39, 17].
* Knowledge systems, ubiquitous universal computing, man-machine interfaces, and the three dimensions of data, energy, and computing [17].
* Space exploration, travel by air and sea, maglev transportation, intelligent driving, the internet of vehicles, and 6C (capturing, communication, caching, cognition, computing, and controlling) functions [30].
* Ubiquitous super 3D connectivity [16];
* The 3C (user-centralized, content-centralized, and data-centralized) paradigm [25].
* Intelligent vehicle-to-everything [40], collaborative robots (CoBots) [28].
* Huge scientific data applications, application-aware data burst forwarding, emergency and disaster rescue, and socialized internet of things [41].
* Globalized ubiquitous connectivity, enhanced on-board connectivity, and pervasive intelligence [7].
* Increasing elderly population and gadget-free communication [3].
* Intelligent Internet of medical things [42].
These trends and use cases that drive 6G are believed to be achievable with the materialization of numerous 6G technology enablers.
The 6G technology enablers - that are envisaged for 6G broadband access quantified by KPI (key performance indicator) impact on system capacity, system latency, and system management [11] - can be categorized as infrastructure-level enablers, spectrum-level enablers, and algorithm/protocol-level enablers [43, 44]. The algorithm/protocol-level enablers put forward by various researchers are edge AI [1, 2, 45, 46, 47]; semantic communications [48, 49, 50]; pervasive AI [28]; orbital angular momentum (OAM) multiplexing [35]; ubiquitous sensing [34]; ultra-low-latency communications [34]; network harmonization and interoperability, intelligent proactive caching and mobile edge computing (MEC), multi-objective optimization and routing optimization, massive IoT and big data analytics, configurable multi-antenna systems, and intelligent cognitive radio and self-sustaining wireless networks [51]; blockchain-based spectrum sensing and molecular communications [35]; intelligent radio, AI-enabled closed-loop optimization, intelligent wireless communication, and hardware-aware communications [52]; symbiotic radio and super IoT [53]; AI and photonics-based cognitive radio [54]; multi-mode multi-domain joint transmission and intelligent transmission [55]; autonomous wireless systems with AI [56]; model-aided wireless AI [57, 58]; data-oriented transmission [59]; device-centric wireless communications and demand-driven opportunistic networking [60]; delta-orthogonal multiple access [61]; coded caching and rate splitting [44]; ambient backscatter communication [11]; disaggregation and virtualization [62]; intelligent device-to-device communication [63]; index modulation [64]; self-driving networks [65, 66]; AI/machine learning (ML)-driven air interface design and optimization, and networking with the sixth sense [17]; the seamless integration of wireless information and energy transfer [16]; tactile internet, multi-access edge computing [18]; and intelligent internet of intelligent things [67]. A number of widely recognized spectrum-level enablers of 6G have also been proposed: above 6 GHz for 6G (from small cells to tiny cells) and transceivers with integrated surfaces [1]; the holistic management of communication, computation, caching, and control resources [48]; superfast wireless broadband connectivity [34]; multi-band ultrafast-speed transmission [55]; an all-spectrum reconfigurable front end for dynamic spectrum access [11]; and optical wireless communications [43]. These spectrum-level enablers are complemented - in view of the advent of diverse 6G services1 - by many 6G infrastructure-level enablers.
Footnote 1: The 6G service classes foreseen thus far are: mobile broadband reliable low-latency communication, massive ultra-reliable low-latency communication (URLLC), human-centric services, and multi-purpose SCLS and energy services [1]; holographic communications, high-precision manufacturing, sustainable development and smart environments, and battery-free communication [48]; computation-oriented communication, contextually agile enhanced mobile broadband (cMBB) communication, and event-defined URLLC [52]; ubiquitous mobile broadband, ultra-high-speed low-latency communication, and ultra-high data density [54]; secure wireless computing for private data [30]; network-as-an intelligent-service [23]; and digital replica [68].
The 6G infrastructure-level enablers that have been contemplated to date are communication with reconfigurable intelligent surface (RIS) [1]; integrated terrestrial, airborne, and satellite networks [1]; energy transfer and harvesting [1]; three-dimensional (3D) coverage [48]; cell-free networks, metamaterials-based antennas, fluid antennas, software-defined materials, programmable metasurfaces, and wireless power transfer and energy harvesting [28]; tiny-cell communication and cell-free communications [51]; supermassive multiple-input multiple-output (MIMO), large intelligent surfaces, and holographic beamforming [35]; satellite-assisted IoT communications [53]; holographic radio and photodiode-coupled antenna arrays [54]; multi-purpose converged, full-spectral, and all-photonic radio access networks (RANs) [54]; hyperspectral space-terrestrial integration networks [54]; super flexible integrated networks [55]; airplane-aided integrated networking [69]; extremely large aperture arrays, holographic massive MIMO, six-dimensional positioning, large-scale MIMO radar, and intelligent massive MIMO [70]; integrated access and backhaul networks [43, 44]; internet of space things [11]; antenna-on-glass [17]; and zero-touch networks [3]. In light of these many infrastructure-level enablers and the aforementioned spectrum-level and algorithm/protocol-level enablers, realizing 6G as it is currently imagined needs both an evolutionary and a revolutionary paradigm shift [1].
A paradigm shift has to address the following fundamental
challenges related to 6G [71]:
* Managing ultra-heterogeneity.
* Guaranteeing an ultra-high data rate for most users.
* Ensuring ultra-reliability and low latency for most users.
* Taming ultra-high complexity in 6G networks.
* Incorporating various KPIs in the design.
* Addressing ultra-high mobility.
* Being highly energy efficient.
* Supporting energy-efficient AI.
* Accommodating users' needs or perspectives [72].
* Ensuring security, privacy, and trust.
* Attaining _full intelligence and autonomy_.
* Coping with the inevitable technological uncertainty associated with 6G technology enablers [73].
Addressing the fundamental challenges listed amounts to overcoming numerous interdisciplinary, multidisciplinary, and transdisciplinary (IMT) challenges that are intertwined with several technological challenges. To alleviate these challenges, 6G systems and networks should be holistically designed to minimize power usage, bandwidth consumption, and transmission delay by reducing to a minimum the transmission of information that is _semantically redundant or irrelevant_. Semantic-centric information transmission calls for the efficient transmission of _semantics_ (meaning) by a semantic transmitter followed by faithful recovery by a semantic receiver. This communication paradigm is now widely recognized as semantic communication (SemCom).2
Footnote 2: Throughout this tutorial-cum-survey paper, the acronym SemCom stands for _wireless SemCom_. When we discuss SemCom in the optical and quantum domains, we will explicitly refer to it as _optical SemCom_ and _quantum SemCom_, respectively. For further information about all these types of SemCom, the reader is referred to the survey in [71].
SemCom has the potential to change the status quo perception that wireless connectivity is an opaque data pipe carrying messages whose context-dependent _meaning_ and _effectiveness_ (or goal) have been ignored [74], as the designers of traditional communication systems have viewed it [75]. In stark contrast to traditional communication systems that aim to offer a high data rate and a low symbol (bit) error rate, SemCom focuses on extracting the meaning of information transmitted by a source and interpreting the semantic information at the destination [76]. SemCom's chief objective is to convey the intended meaning, which depends on not only the physical content of the message, but also the sender's personality, intention, and other human-oriented factors that could reflect the real quality of experience (QoE) (i.e., the subjective experience) of human users [77]. SemCom's fundamental goal is therefore to ensure the successful delivery of the transmitted information's representative meaning [78]. Toward a meaning-centric communication system design, SemCom can be designed to focus on conveying the _interpretation_ of transmitted messages instead of an exact or approximate reproduction of them to achieve a meaning-centric communication system [79].
In addition to minimizing the divergence between the meaning of the transmitted messages and the meaning inferred from the recovered messages [80], SemCom aims to transmit only the semantic information that is relevant to the communication goal, thereby significantly reducing data traffic [81]. To this end, it transmits fewer data than traditional communication techniques [80]. In contrast to those techniques focusing on mere data reconstruction, SemCom takes its inspiration from human-to-human communication, whose goal is understanding and delivering the meaning behind a message [82]. To deliver the meaning behind a message, SemCom is designed as a system that attempts to communicate the true meaning of a message rather than ensuring the exact replication of the information transmitted by a source [83]. What matters in the design of SemCom is the source data's semantic content rather than the source's average probabilistic information [49]. By transmitting only the semantic content that is relevant for accurate interpretation at the destination, SemCom can promote the effective utilization of available network capacity [84]. It is possible to make effective use of a network's capacity by avoiding the bit-by-bit reconstruction of the transmitted information at the receiver. To this end, SemCom uses a semantic encoder to incorporate the purpose of transmission, simplify the data to be transmitted, and eliminate the transmission of redundant information [85]. More specifically, it embodies the "provisioning of the right and significant piece of information to the right point of computation (or actuation) at the right point in time" [86]. This design philosophy is of paramount importance for networked systems and intelligence tasks.
When it comes to intelligence tasks such as speech recognition and speech transmission, SemCom extracts and transmits only task-related features - and discards the irrelevant ones - thereby considerably reducing bandwidth consumption [87, 88]. The significant reduction in bandwidth consumption that SemCom achieves represents a paradigm shift from "how to transmit" to "what to transmit" [89]. When it comes to "how to transmit," it is worth noting that conventional wireless communication systems have steadily approached the Shannon limit [49]. Consequently, a breakthrough must be made to be able to support the unparalleled proliferation of mobile devices, the insatiable desire for high data rates, and the emergence of new and highly heterogeneous 6G use cases and applications [90]. To these ends, SemCom is emerging as a promising communication paradigm for the design, analysis, and optimization of 6G networks and 6G systems, and a possibly revolutionary one at that. This motivates the discussion below on why SemCom is needed for 6G.
### _Why SemCom for 6G?_
Although SemCom is a promising paradigm shift for the design, analysis, and optimization of 6G networks and 6G systems, it is a classic communication paradigm that was first proposed around 1950. At that time, the notion and relevance of SemCom were pointed out by Weaver [91, Ch. 1], even though Shannon deliberately ignored3 the semantic aspects of communication in his classic masterpiece [92]. Weaver
envisioned communication using semantics and outlined three hierarchical levels of communication (see Fig. 1) that fundamentally differentiate the broad subject of communication [91, p. 4]:
* **"Level A**. How accurately can the symbols of communication be transmitted? (The technical problem).
* **"Level B**. How precisely do the transmitted symbols convey the desired meaning? (The semantic problem).
* **"Level C**. How effectively does the received meaning affect conduct in the desired way? (The effectiveness problem)."
_The technical problem_ focuses on the accurate transmission of a message's symbols (bits) and is driven by Shannon information theory (see Fig. 1), which is a classical information transmission theory. This theory has steered the design of generations of communication systems and resulted in wireless connectivity being viewed as an opaque data pipe carrying messages [75] - whose context-dependent meaning and effectiveness have been ignored - and the sender and receiver are considered agents without intelligence [93]. In this old paradigm, a huge amount of semantically irrelevant and redundant data are transmitted, and an enormous amount of communication resources such as transmission power and bandwidth are consumed to do so [93]. Aside from the bandwidth and power expenditure, acquiring, processing, and sending an excessive amount of distributed real-time data - that is likely to be useless to the end users or outdated by the time they reach them - will produce communication bottlenecks, and increase latency and safety issues in emerging cyber-physical and autonomous networked systems [75]. Accordingly, communication system designers may need to look beyond the technical problem for a paradigm where communication in itself is a means to achieving specific goals rather than an end goal [75, 94]. To achieve particular goals, how precisely the transmitted symbols convey the desired meaning (i.e., _the semantic problem_ in Fig. 1) needs to be factored in. To this end, it is useful to design an _understand-first-and-then-transmit system_ that can accomplish joint optimization [93].
SemCom interprets information at its semantic level - rather than by bit sequence [95] - and seeks the meaning behind the transmitted symbols (bits) [76]. For example, when a transmitter dispatches "I have just brought a yellow banana," a receiver may receive a message like "I have brought a banana" [83]. Although this message is not exactly what was conveyed by the transmitter, one can still understand the overall idea. However, if a receiver receives a message like "I have just brought a yellow banner" [83] (which has a lower bit-level error rate), the meaning behind this received message is quite different from the transmitted message's encoded meaning, which is naturally undesirable. Semantic-level communication, a.k.a. SemCom, is crucial for meaning-accurate transmission and reception. Fig. 2 schematizes SemCom between a semantic transmitter and a semantic receiver over a virtual semantic channel using features that encode semantic information. To encode semantic information at the transmitter, a system designer would exploit the source's background knowledge base (KB). Despite some fundamental challenges, as shown in Fig. 2, the source KB is shared with the destination KB, which is then used when the decoded message is interpreted by the semantic receiver. The semantic receiver's interpretation can be drastically affected by semantic noise, which causes semantic information to be misunderstood and semantic decoding errors to occur. This will generate a mismatch between the transmitter's intended meaning and the meaning reconstructed by the receiver [96]. Semantic noise happens naturally in SemCom over a semantic channel (see Fig. 2) and can be caused by various factors:
* Multiple possible interpretations of the recovered symbols at the technical level due to ambiguity in some words, sentences, or symbols used in the dispatched message [76, 97].
* which is common in natural language processing (NLP) [97]
- in the reconstructed symbols when multiple sets of data with different meanings are represented by the same semantic symbol [93]. Footnote 4: Semantic ambiguity can arise from _dialect_ and _polysemy_. Polysemy involves an instance of a word (or phrase) being employed to convey two or more different meanings in different contexts [93]. Dialect, on the other hand, concerns the variant of a language that is understood mainly by a specific group of speakers of a given language [93]. Footnote 5: Since it is specific in nature, syntactic information can be produced directly through a subject’s sensing function [100, 101].
* Adversarial examples (possibly created with adversarial noise [98]) that can detrimentally mislead a semantic decoder in its interpretation (see Fig. 2), especially in deep learning (DL)-based SemCom [96, 97, 99].
* Interference (a jamming signal) emitted by a malicious attacker that is received by the antenna transmitting the signal of interest [96].
* Symbol/bit-level errors6 resulting from considerable physical noise, signal fading, and/or wireless interference [76, 93, 97]. Footnote 6: Identifying those semantic errors that are caused by physical noise, interference, or wireless channel fading, and those that are the consequence of a mismatch between the source KB and destination KB is fundamentally challenging.
* A mismatch between the source KB and the destination KB (even in the absence of syntactic errors) [50].
Substantial semantic noise can impede SemCom's faithfulness and applications, and cause semantic errors. In the SemCom model in [79, Fig. 2], a semantic error occurs if the message to be sent is "true" at the source with respect to (w.r.t.) the transmitter's world model, background KB, and inference procedure, but "false" at the destination w.r.t. the receiver's world model, background KB, and inference procedure [79]. Semantic errors7 can occur due to symbol/bit-level errors during transmission as a result of physical noise, interference, or wireless channel fading [76], or be caused by a mismatch between the source KB and destination KB [93, 76]. A recurring mismatch between the source KB and destination KB can be corrected each time using semantic feedback (see Fig. 1) for robust SemCom. A system designer may design a SemCom system with semantic feedback and syntactic feedback (see Fig. 1) to ensure SemCom is robust.
At the technical level, syntactic feedback can be incorporated between source decoding and source encoding. This feedback can improve the quality of decoded bits/symbols, which will in turn ensure minimal noise (i.e., semantic noise) at the semantic level. At the semantic level, semantic feedback can be incorporated between a semantic transmitter and a semantic receiver - more specifically, between a semantic encoder and a semantic decoder (see Fig. 2) - to alleviate the aforementioned background KB mismatch. Such a mismatch can happen in practice since the source and destination are always adding knowledge to their KBs and can be exploited to convey semantically-secure messages [102, 103]. Semantically-secure messages are difficult to decode (interpret) without the intended receiver's KB [102, 103], which paves the way for a _secure-by-design_ paradigm shift in 6G and beyond.
When it comes to 6G communication and networking, secure _human-to-human_ (H2H), _human-to-machine_ (H2M), and _machine-to-machine_ (M2M) systems can be designed using SemCom [104]. SemCom also has the potential to be a key enabler of 6G edge intelligence with efficient communication and
Fig. 1: Weaver’s three levels of communication [76, FIGURE 1] – KB: knowledge base.
Fig. 2: The main components in a SemCom system – modified from [97, Fig. 3].
computation overheads - despite uncertain wireless environments and limited resources - while overcoming the challenges faced by 6G communication networks [50, 105]. SemCom therefore empowers 6G with the possibility of designing a variety of 6G systems that can benefit greatly from a system design that incorporates not only a semantic component but also an effectiveness component (see Fig. 1). Concerning the latter, a communication system designer seeks to ensure that the received meaning affects conduct in the desired way [91, p. 4]. This design paradigm has inspired various 6G use cases based on goal-oriented SemCom, as discussed below.
### _Why Goal-Oriented SemCom for 6G?_
SemCom deals with the transmission of complex data structures (e.g., features, patterns, and data lying on manifolds) or, in general, abstract concepts [106]. SemCom in which the effectiveness of semantic transmission is explicitly defined and focused on can be qualified as a goal-oriented SemCom [106]. Thus, SemCom is a broader concept than goal-oriented SemCom,7 since the semantics of information are not necessarily linked to a system's overarching goal [106]. Per this view, goal-oriented SemCom is a subset of SemCom that takes a pragmatic approach to SemCom where the receiver is interested in the significance (semantics) of the source's transmitted message and the message's effectiveness in accomplishing a certain goal [106]. To this end, goal-oriented SemCom is aimed at extracting and transmitting only task-relevant information so that the transmitted signal is substantially compressed, communication efficiency is improved, and low end-to-end latency is achieved [108]. Goal-oriented SemCom is therefore very useful for 6G since communication is not an end but a means to achieve specific goals [75].
Footnote 7: Throughout this tutorial-cum-survey paper, task-oriented communication and goal-oriented communication are detailed under the heading goal-oriented SemCom. Nevertheless, the authors of [107] underline that goal communication is much broader than SemCom, which they classify – per Weaver’s vision – as _semantic-level SemCom_ and _effectiveness-level SemCom_. While the former emphasizes semantic transmission for data reduction and the delivery of the meaning behind the transmitted content, the latter focuses on effectively employing semantic information – at a suitable time – for successful task execution [107]. Moreover, we shall also underscore the view in the wireless communication research community that SemCom and goal-oriented SemCom are more or less different terminologies for the same thing.
A wide variety of 6G use cases such as autonomous transportation, consumer robotics, environmental monitoring, telehealth, smart factories, and networked control systems (NCSs) require ultra-low latency, very high reliability, and ultra-large transmission bandwidth. These stringent requirements can be met by transmitting only the information that is semantically relevant for the effective performance of the desired action. Consequently, goal-oriented SemCom is also quite crucial for designing and realizing 6G w.r.t. minimizing power usage, bandwidth consumption, and transmission delay while aiming to effectively achieve one or more goals. The aforementioned 6G use cases are therefore also promising applications of M2M goal-oriented SemCom, which - like H2H goal-oriented SemCom and H2M goal-oriented SemCom - will be vital for the design and realization of 6G.
On the other hand, ongoing developments in SemCom, goal-oriented SemCom, and 6G are mutually reinforcing [50]. This justifies the need for the following discussion on why 6G is crucial for SemCom and goal-oriented SemCom.
### _Why 6G for SemCom and Goal-Oriented SemCom?_
Ongoing 6G developments are core enablers - and hence huge opportunities - for further development of SemCom and goal-oriented SemCom. SemCom and goal-oriented SemCom can hugely benefit from the emergence of AI-native networks, ubiquitous connectivity, and trustworthiness-native networks [50]. AI-native networks [109, 110, 111] are emerging in 6G because of the following driving trends:
* The migration of data processing from the network core to the network edge [112, 113, 46].
* A fundamental change from cloud AI to distributed AI [111].
* An emerging paradigm shift from connection-oriented communication to task-oriented communication [111] and computation-oriented communications [52].
These communication- and computation-oriented paradigm shifts enable ubiquitous connectivity, which will in turn promote the development of 6G wireless systems that are based on SemCom and goal-oriented SemCom. Such systems can be designed and optimized to materialize global ubiquitous connectivity alongside the maturation of the following 6G technology enablers [50]:
* Space-air-ground integrated network [69, 114, 115].
* (sub-)Terahertz (THz) communications [116, 117, 24, 118] (despite their major multi-faceted challenges [119]).
* Supermassive (ultra-massive) MIMO [120, 70].
Moreover, the design of trustworthiness-native networks is going to be at the forefront of 6G research [2, 8], which will in turn facilitate the realization of SemCom and goal-oriented SemCom through secure-by-design 6G networks.
Apart from secure-by-design 6G networks, the realization of many SemCom use cases (e.g., H2H SemCom, H2M SemCom, M2M SemCom, and KG-based SemCom) and major goal-oriented SemCom use cases (e.g., autonomous transportation, consumer robotics, environmental monitoring, telehealth, smart factories, and NCSs) require a possibly autonomous and integrated 6G network be designed and realized. Therefore, the paradigm of _6G for SemCom and goal-oriented SemCom_ and the previously discussed paradigms of _SemCom and goal-oriented SemCom for 6G_ necessitate the tighter integration and marriage of 6G, SemCom, and goal-oriented SemCom. Facilitating the tighter integration and marriage of 6G, SemCom, and goal-oriented SemCom, this comprehensive8 tutorial-cum-survey paper delivers the contributions enumerated in Section I-E. To put these contributions in perspective, we compare and contrast them with those of prior survey and tutorial papers in Tables I and II. Meanwhile, the concept map, structure, and organization of this article are depicted in Fig. 3.
Footnote 8: This tutorial-cum-survey paper presents almost everything about SemCom and goal-oriented SemCom except their performance assessment metrics. These metrics are comprehensively surveyed by the first three authors in [71].
TABLE I and TABLE II: Scope of this tutorial-cum-survey paper w.r.t. the scope of prior survey and tutorial papers on SemCom and/or goal-oriented SemCom – Ref.: reference; “–” means the specific reference didn’t discuss the theme listed on a given row. Theme by theme – covering the fundamentals of semantics and semantic information, the state-of-the-art research landscape, theories, trends and use cases, fundamental challenges, and future directions of SemCom and goal-oriented SemCom – the tables contrast the partial coverage of Refs. [50], [76], [78], [97], [104], and [124] with the comprehensive coverage of this paper.
Fig. 3: Concept map, structure, and organization of this tutorial-cum-survey paper.
### _Contributions_
The contributions of this holistically comprehensive - in depth and breadth - tutorial-cum-survey paper are enumerated below.
1. This paper explains the fundamentals of semantics and semantic information, semantic representations, theories of semantic information, and definitions of semantic entropy.
2. This paper details the state-of-the-art research landscape of SemCom.
3. This paper presents the major state-of-the-art trends and use cases of SemCom.
4. This paper discusses the state-of-the-art theories of SemCom.
5. This paper uncovers the fundamental and major challenges (in theory, algorithm, and realization) of SemCom.
6. This paper offers novel future research directions (in theory, algorithm, and realization) of SemCom.
7. This paper documents the state-of-the-art research landscape of goal-oriented SemCom.
8. This paper provides the major state-of-the-art trends and use cases of goal-oriented SemCom.
9. This paper discusses the state-of-the-art theories of goal-oriented SemCom.
10. This paper exposes the fundamental and major challenges (in theory, algorithm, and realization) of goal-oriented SemCom.
11. This paper provides novel future research directions (in theory, algorithm, and realization) of goal-oriented SemCom.
_Notation_: scalars, vectors, and matrices are represented by italic letters, bold lowercase letters, and bold uppercase letters, respectively. Sets, quantization regions, quantizers, and KBs are denoted by calligraphic letters. \(\mathbb{N}\), \(\mathbb{R}\), \(\mathbb{R}_{+}\), \(\mathbb{R}^{n}\), \(\mathbb{R}^{n}_{+}\), and \(\mathbb{R}^{m\times n}\) denote the set of natural numbers, real numbers, non-negative real numbers, \(n\)-dimensional vectors of real numbers, \(n\)-dimensional vectors of non-negative real numbers, and \(m\times n\) matrices of real numbers, respectively. \(:=\) denotes an equality by definition. For \(n,k\in\mathbb{N}\), \([n]:=\{1,2,\ldots,n\}\) and \(\mathbb{N}_{\geq k}:=\{k,k+1,k+2,\ldots\}\). For a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\), its element in the \(i\)-th row and the \(j\)-th column is denoted by \((\mathbf{A})_{i,j}\) for all \(i\in[m]\) and \(j\in[n]\). The symbols \(\sim\), \(\models\), \(|\cdot|\), \(\|\cdot\|\), \((\cdot)^{T}\), and \(\mathcal{O}(\cdot)\) denote distributed as, the propositional satisfaction relation, absolute value, the Euclidean norm, transpose, and the Landau notation, respectively. The notation \(\min(\cdot)\) (or \(\min\{\cdot\}\)), \(\max(\cdot)\) (or \(\max\{\cdot\}\)), \(\mathbb{E}\{\cdot\}\), \(\mathbb{E}_{X}\{\cdot\}\), and \(\mathbb{P}(\cdot)\) stand for minimum, maximum, expectation, expectation w.r.t. the random variable (RV) \(X\), and probability, respectively. \(\mathbb{P}(A|B)\) represents the probability of event \(A\) conditioned on event \(B\).
\begin{table}
\begin{tabular}{|l|l|} \hline Abbreviation & Definition \\ \hline \hline
3C & User-centralized, content centralized, and \\ & data-centralized \\ \hline
3CLS & Communications, computing, control, \\ & localization, and sensing \\ \hline
3D & Three-dimensional \\ \hline
5G & Fifth-generation \\ \hline SGNR & 5G new radio \\ \hline
6C & Capturing, communication, caching, cognition, \\ & computing, and controlling \\ \hline
6G & Sixth-generation \\ \hline ADJSCC & Attention DL-based JSCC \\ \hline AE & Autoencoder \\ \hline AF & Amplify-and-forward \\ \hline AI & Artificial intelligence \\ \hline AR & Augmented reality \\ \hline ASC & Adapable semantic compression \\ \hline AVs & Autonomous vehicles \\ \hline BA algorithm & Blahut–Arimoto algorithm \\ \hline BCP & Bar-Hillel-Carnap paradox \\ \hline BERT & Bidirectional encoder representations from \\ & transformers \\ \hline Bi-LSTM & Bidirectional long short-term memory \\ \hline BitCom & Bit communication \\ \hline CB-TBMA & Compressed IB-TBMA \\ \hline CCCA & Cache-computing coordination algorithm \\ \hline CDRL & Collaborative deep RL \\ \hline CE & Cross entropy \\ \hline CKG & Cross-modal KG \\ \hline CNN(s) & Convolutional neural network(s) \\ \hline CoBots & Collaborative robots \\ \hline CP-DQN & Content popularity-based DQN \\ \hline CRI & Channel rate information \\ \hline CSED & Combined semantic encoding and decoding \\ \hline CSI & Channel state information \\ \hline CTSF & Communication toward semantic fidelity \\ \hline CU & Centralized unit \\ \hline D2D & Device-to-device \\ \hline DF & Decode-and-forward \\ \hline DII & Data importance information \\ \hline DL & Deep learning \\ \hline DNN(s) & Deep neural network(s) \\ \hline DQN & Deep Q-network \\ \hline DRL & Deep RL \\ \hline DTI & Data type information \\ \hline DT-JSCC & Discrete task-oriented JSCC \\ \hline DIB & Distributed IB \\ \hline EEIM & Evolutionary energetic information model \\ \hline cMBB & Enhanced mobile broadband \\ \hline ESC & Emergent semantic communication \\ \hline FFTs & Fast Fourier transforms \\ \hline FL & Federated learning \\ \hline FPS & Frames per second \\ \hline F-user & Far user \\ \hline GAN & Generative adversarial network \\ \hline GAXNet & Graph attention exchange network \\ \hline GD & Gradient descent \\ \hline GPlowNets & Generative flow networks \\ \hline GIB & Graph IB \\ \hline GOQ & Goal-oriented quantization \\ \hline H2H & Human-to-human \\ \hline H2M & Human-to-machine \\ \hline HARQ & Hybrid automatic repeat request \\ \hline HTC & Human-type communication \\ \hline IB & Information bottleneck \\ \hline IC & Incentive compatibility \\ \hline IE-SC & Intelligent and efficient semantic communication \\ \hline IMT & Interdisciplinary, multidisciplinary, and \\ & transdisciplinary \\ \hline \end{tabular}
\end{table} TABLE III: List of abbreviations and acronyms I.
## II Semantic Information: Fundamentals, Representations, and Theories
SemCom and goal-oriented SemCom revolve around the transmission of semantic information. One must therefore understand semantic information and its theories before one can design rigorous SemCom and goal-oriented SemCom systems. To this end, we hereinafter present the fundamentals of semantics and semantic information, semantic representation, theories of semantic information, and definitions of semantic entropy. We begin with the fundamentals of semantics and semantic information.
### _Fundamentals of Semantics and Semantic Information_
The word _semantics_ originates from natural or formal languages and the concept of compositionality [125, p. 125]. The concept of compositionality asserts that the meaning of a sentence is decided by three ingredients: the composition rule (syntax) of a sentence; the context of a sentence; and the meaning (semantics) of each component of the sentence [125, p. 125]. Semantics has been studied for centuries [121] in a variety of disciplines such as linguistics [126], philosophy [127, 128], cognitive sciences [129], neuroscience [130],
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Abbreviation & Definition \\ \hline \hline SCT & Semantic coded transmission \\ \hline SE & Semantic extraction \\ \hline Seb & Semantic base \\ \hline SEED & Semantic/effectiveness encoded data \\ \hline SemCom & Semantic communication \\ \hline SemComNet & SemCom-enabled network \\ \hline Seq2Seq-SemCom & Sequence-to-sequence SemCom \\ \hline SF & Semantic forward \\ \hline SFV & Semantic feature vector \\ \hline SINR & signal-to-interference-plus-noise ratio \\ \hline SI plane & Semantic intelligence plane \\ \hline S-IF & Semantic information flow \\ \hline STI & Semantic information theory \\ \hline SNC & Semantic native communication \\ \hline SNN & Spiking neural network \\ \hline SNR & Signal-to-noise ratio \\ \hline S-NP layer & Semantic-empowered network protocol layer \\ \hline S-PB layer & Semantic-empowered physical-bearing layer \\ \hline SPM & Semantic protocol model \\ \hline S-SE & Spectral efficiency \\ \hline STM & System throughput in message \\ \hline SVC & Semantic video conferencing \\ \hline T5 & Text-to-text transfer Transformer \\ \hline TBMA & Type-based multiple access \\ \hline TechnicCom & Technical communication \\ \hline THz & Terahertz \\ \hline TSSI & Theory of strongly semantic information \\ \hline UA & User association \\ \hline UAV & Unmanned aerial vehicle \\ \hline URLLC & Ultra-reliable low-latency communication \\ \hline UT & Universal Transformer \\ \hline VIB & Variational IB \\ \hline VQA & Visual question answering \\ \hline VR & Virtual reality \\ \hline VSRAA-SM & Video semantics-based resource allocation algorithm for spectrum multiplexing scenarios \\ \hline w.r.t. & With respect to \\ \hline WITT & Wireless image transmission transformer \\ \hline XR & Extended reality \\ \hline \end{tabular}
\end{table} TABLE V: List of abbreviations and acronyms III.
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Abbreviation & Definition \\ \hline \hline IoT & Internet of things \\ \hline IoV & Internet of vehicles \\ \hline IR & Impulse radio \\ \hline IR\({}^{2}\) SemCom & Interference-resistant and robust SemCom \\ \hline iSemComCom & Intelligent SemCom \\ \hline iSemCom-AttNet & An iSemCom-enabled heterogeneous network \\ \hline IS-JSCC & iterative semantic JSCC \\ \hline ISS & Image-to-graph semantic similarity \\ \hline ISSC & Image segmentation semantic communication \\ \hline JSC & Joint source and channel \\ \hline JSCC & Joint source-channel coding \\ \hline JSemC & Joint semantic-channel \\ \hline JSNC & Joint semantics-noise coding \\ \hline KB & Knowledge base \\ \hline KG(s) & Knowledge graph(s) \\ \hline KL & Kullback–Leibler \\ \hline KPI & Key performance indicator \\ \hline LDPC & Low-density parity check codes \\ \hline L-MMSE & Linear minimum MSE \\ \hline LSTM & Long short-term memory \\ \hline M2M & Machine-to-machine \\ \hline MA-FOMP & Multi-agent partially observable Markov decision process \\ \hline MARL & Multi-agent RL \\ \hline MEC & Mobile edge computing \\ \hline MIMO & Multiple-input multiple-output \\ \hline ML & Machine learning \\ \hline MSE & Mean squared error \\ \hline MSP & Mobile service provider \\ \hline MTC & Machine-type communication \\ \hline NCSs & Network control systems \\ \hline Ney AI & Neuro-symbolic AI \\ \hline NeuroComm & Neuromorphic wireless cognition \\ \hline NLP & Natural language processing \\ \hline NOMA & Non-orthogonal multiple access \\ \hline NPMs & DNN-based protocol models \\ \hline NTSCC & Nonlinear transform source-channel coding \\ \hline N-user & Near user \\ \hline OAM & Orbital angular momentum \\ \hline O-DU & Open distributed unit \\ \hline O-RAN & Open RAN \\ \hline O-RU & Open radio unit \\ \hline OSI & Open system interconnection \\ \hline PAI & Partial algorithm information \\ \hline PCM & Pulse code modulation \\ \hline PCV & Point cloud video \\ \hline PDF & Probability density function \\ \hline PHY & Physical layer \\ \hline PMF & Probability mass function \\ \hline QAM & Quadrature amplitude modulation \\ \hline OFDM & orthogonal frequency division multiplexing \\ \hline QKD & Quantum key distribution \\ \hline QML & Quantum ML \\ \hline QoE & Quality of experience \\ \hline QoS & Quality of service \\ \hline RANs & Radio access networks \\ \hline RB & Resource block \\ \hline Ref. & Reference \\ \hline RHS & Right-hand side \\ \hline RIB & Robust IB \\ \hline RIS & Reconfigurable intelligent surface \\ \hline RL & Reinforcement learning \\ \hline RL-ASC & RL-based adaptive semantic coding \\ \hline ROI & Region-of-interest \\ \hline RS & Reed Solomon \\ \hline RSUs & Roadside units \\ \hline RV & Random variable \\ \hline S-AI layer & Semantic-empowered application-intent layer \\ \hline SC & Semantic coding \\ \hline SC-AIT & SemCom paradigm with AI tasks \\ \hline \end{tabular}
\end{table} TABLE IV: List of abbreviations and acronyms II.
biology [131], and robotics [132]. Because semantics is used to mean something different in each discipline, it is a highly complex as well as controversial topic, to the extent that it would be very difficult to provide a concise definition for it that would be widely accepted [121]. In short and in general, however, semantics can be defined as the study of meaning and is closely connected to _semiotics_, the study of _signs_[121]. As for signs, all communication systems are built upon signs [121]. A system of signs and rules amounts broadly to a language, and a language's rules applied to signs are categorized into _syntax_, _semantics_, and _pragmatics_[133]:
* Syntax studies signs and their relationships to one another, and is concerned only with signs and their relationships [121].
* Semantics aims to understand the relationships between signs and the objects to which they apply (i.e., the _designata_). It is built upon syntax and studies signs and their relationship to the world [121]. According to Chomsky [134]9, syntax is independent of semantics [121]. Footnote 9: “Through his now famous sentence ‘Colorless green ideas sleep furiously,’ Chomsky argued that it is possible to construct grammatically consistent but semantically meaningless phrases, hence the separation between syntax and semantics” [121].
* Pragmatics studies the relationships between signs and their users (interpreters) - apparent in human communications - as well as the impact of a sign on the designata [121].
Because semantics is built upon syntax and studies signs and their relationship to the world [121], the fundamental notion of semantic information hinges upon the information ecosystem, which is a complete process of _information-knowledge-intelligence conversion_[100, 101], as shown in Fig. 4. As seen in Fig. 4, the bottom part represents the object in the environment that generates the object information, whilst the top part of it portrays the subject interacting with the object through the following processes [100]: the object information is transformed - via perception - to perceived information, which constitutes knowledge (cognition) that is deployed in an intelligent strategy (decision-making), which leads to the taking of intelligent action (execution) on the object and completes the basic information ecosystem.10 In light of this ecosystem, the theory of semantic information must fulfill the constraints or the requirements dictated by the _ecological process of information_[100] as depicted in Fig. 4. Per Fig. 4, only those types of object information that are participating in the subject-object interaction process - among all the types that exist - are viewed as meaningful [100], hence the basis of semantic information. Accordingly, the following definitions [100] on _object information_ (or _ontological information_) and _perceived information_ (or _epistemological information_) ensue.
Footnote 10: Per the information ecosystem model schematized in Fig. 4, perception leads to action in line with the contemporary _outside-in_ neuroscientific framework [135, 136]. In this framework (the perception-action framework), our knowledge is considered to emerge from the perceptual associations of cause-and-effect relationships and inductive reasoning [130, p. 32] - ascertaining the view that the _brain is an organ for perceptual representation_. This perceptual representation-centered view would imply the presumption of a hidden "homunculus" that would decide whether to respond or not [130, p. 32]. Contrarily, the emerging _inside-out_ framework (the action-perception framework) – which was first advocated by the author of [130] – considers action primarily as a source of knowledge [130, p. 32]: action corroborates the meaning and importance of sensory signals by giving a second opinion [130, p. 32].
**Definition 1** (**Object information [100, Definition 1])**.: _"The object information concerning an object is defined as the set of states at which the object may stay and the pattern with which the states vary presented by the object itself."_
**Definition 2** (**Perceived information [100, Definition 2])**.: _"The perceived information a subject possesses about an object is defined as the triuity of the form (named the syntactic information), the meaning (the semantic information), and the utility (the pragmatic information), all of which are perceived by the subject from the object information."_
If we compare Definitions 1 and 2, the object information originates from the real world, while the perceived information is the outcome of the object information being perceived by a subject [100]. Consequently, the perceived information can comprise more intentions than the object information [100]. Per Definition 2, meanwhile, semantic information is the meaning a subject perceives from the object information, which matches the notion of semantics in semiotics [100]. This leads us to the following definition of _comprehensive information_[100].
**Definition 3** (**Comprehensive information [100, Definition 3])**.: _Comprehensive information is the triuity of syntactic, semantic, and pragmatic information._
Comprehensive information calls for more information research looking into the mutual relationships among syntactic, semantic, and pragmatic information - contrary to their separate studies as approached predominantly in semiotics - as defined below [100].
**Definition 4** (**Mutual relation among syntactic, semantic, and pragmatic information [100, Definition 4])**.: _"The syntactic information is specific in nature and can directly be produced through subject's sensing function while the pragmatic information is also specific in nature and can directly be produced through subject's experiencing. However,
Fig. 4: Model of information ecosystem [100, Fig.1].
the semantic information is abstract in nature and thus cannot be produced via subject's sensing organs and experiencing directly. The semantic information can only be produced based on both syntactic and pragmatic information just produced already, that is, by mapping the joint of syntactic and pragmatic information into the semantic information space and then naming it."_
As asserted in Definition 4 and illustrated in Fig. 5, semantic information is obtained by jointly mapping the syntactic information and the pragmatic information into the semantic information space. To clarify, let \(X\), \(Y\), and \(Z\) denote syntactic, semantic, and pragmatic information, respectively. If the instance of \(X\) is \(x\), then the instance of \(Y\), which is the semantic information \(y\), will be different for different instances of \(Z\) denoted by \(z\)[100], as illustrated by the following examples and the sketch after them:
* If \(\{x=\)apple, \(z=\) nutritious\(\}\), then \(y=\) fruit.
* If \(\{x=\)apple, \(z=\) used for information processing\(\}\), then \(y=\) iPad.
* If \(\{x=\)apple, \(z=\) information processor in pocket\(\}\), then \(y=\) iPhone.
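To make the joint mapping in Definition 4 concrete, the following minimal Python sketch encodes the above examples as a lookup from (syntactic, pragmatic) pairs into a semantic information space; the mapping table and function names are purely illustrative assumptions, not constructs from [100].

```python
# Toy illustration of Definition 4: semantic information y is produced by jointly
# mapping syntactic information x and pragmatic information z into the semantic space.
# The mapping table below is a hypothetical example, not taken from [100].
SEMANTIC_SPACE = {
    ("apple", "nutritious"): "fruit",
    ("apple", "used for information processing"): "iPad",
    ("apple", "information processor in pocket"): "iPhone",
}

def semantic_information(x: str, z: str) -> str:
    """Return y, the semantic information named for the joint (x, z) pair."""
    return SEMANTIC_SPACE.get((x, z), "unknown")  # 'unknown' when the pair is outside the space

for z in ("nutritious", "used for information processing", "information processor in pocket"):
    print(f"x='apple', z='{z}' -> y='{semantic_information('apple', z)}'")
```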
Semantic information has more importance than syntactic and pragmatic information, and can serve as the _legal representative_ of the perceived information [100]. Furthermore, any theory of semantic information should take into account the information ecosystem (as in Fig. 4) as a _big picture_[100].
In light of the information ecosystem depicted in Fig. 4, the author of [131] proposes an energy-based perspective of semantic information by assuming that syntax, semantics, and pragmatics are structural features of information in biological evolution. More specifically, the author of [131] argues that semantic information is an exclusive feature of biological evolution by proposing an information model named the _evolutionary energetic information model_ (EEIM). EEIM takes into account the transformation of energy into information from the outside to the inside of a biological organism: "Energy is transformed into information via senses and the application of syntactic and semantic rules (by accompanied pragmatics) evaluating energy on its usability as information" [131]. Some of the attributes of EEIM are [131, Table 1]:
* "Information is exclusively being 'produced' by evolution."
* "Information is a quality of informational energy"; "Informational energy provides semantic information to accordingly prepared receiver instances"; "Not all energy is informational."
* "Information is bound to syntax, semantics and pragmatics"; "Information requires a sending instance"; "Information requires a biological receiver able to interpret incoming information."11
Footnote 11: In light of the perspective that semantic information concerns meaning is _interpreted_ by our brains, the following are fundamental questions worth pondering: How does the human brain interpret information? How does the human brain give meaning to information (or something)? From a systems neuroscience standpoint, (observer-independent) semantic information/memory is generally learned after frequent encounters with the same thing or event, which contrasts the one-trial “acquisition” of (observer-dependent) _episodic memory_ concerning a unique event [130, 126].
Apart from the aforementioned perspective of semantic information being driven by the information ecosystem and biological evolution, the authors of [75] advocate for evaluating and extracting the semantic value of data at a macroscopic scale, a mesoscopic scale, and a microscopic scale that correspond to the source level, the link level, and the system level, respectively. At the microscopic scale or source level, semantics connote the relative importance of different events, outcomes, or observations from a stochastic process or source of information [75]. At the mesoscopic scale or link level, the semantics of information concern a composite nonlinear multivariate function comprising the vector of information attributes, which can be either objective (innate) or subjective (contextual) [75]. At the macroscopic scale or system level, the semantics of information alludes to the effective distortion and timing mismatch - quantified end-to-end - between information generated at a point/region in space-time and its reconstructed/estimated version at another point/region in space-time, while considering all sources of variability and latency [75].12
Footnote 12: Latency may be due to sensing latency and accuracy, data gathering, transmission latency, decoding, etc. [75].
After the semantic value of data is extracted at the macroscopic, mesoscopic, and microscopic scales, it has to be represented semantically. Semantic representation can be viewed as a method of efficient compression, which relieves the burden of processing, storage, and transmission [137]. Semantic information can then be directly utilized for design, analysis, learning, and other intelligent tasks in networked intelligence while benefiting from being compact and informative as a result of semantic representation [137], as discussed below.
### _Semantic Representation_
While semantic content is the "meaningful" part of the data, semantic representation is the "minimal way to represent this meaning" [122]. Accordingly, semantic representations of a SemCom language must satisfy three fundamental notions: minimalism, generalizability, and efficiency [122]. The semantic representation of data can be achieved using knowledge graphs (KGs) [138]; NLP [126, 139, 140]; deep neural networks (DNNs) [141, 142, 143]; _toposes_[144, 145]; causal representation learning [146, 147]; and quantum corollas
Fig. 5: Relation of semantic information to syntactic information and pragmatic information [100, Fig.2], [50, Fig. 7].
[148]. While the last way is a quantum (quantum mechanical) method, the first five are classical mechanisms. These mechanisms - along with their pros and cons - are discussed below, beginning with KG.
#### Iv-B1 Knowledge Graph (KG)
one way in which semantic information can be represented (and embedded) is by using KGs [121]. A KG is composed of three major components: nodes, edges, and labels [121]. A node can be any object, place, or person, and an edge defines the relationship between two nodes [121]. KG embedding amounts to embedding the components of a KG, including its entities and relationships, into continuous vector spaces to simplify manipulation while preserving the KG's inherent structure [149]. KG embedding is a crucial technology for solving problems in KGs [121]. State-of-the-art KG techniques can be categorized into translational distance models that employ distance-based scoring and semantic matching models that employ similarity-based scoring functions [121]. Meanwhile, KG-based semantics is deployed in data integration, recommendation systems, and real-time ranking [121], and has been exploited in recently proposed text SemCom techniques [150, 151, 152, 90]. Despite KGs having such broad applicability, they have inherent downsides, since they can merely represent simplified causal graphs and hence are limited to the _expressivity_ of a graph [122]. KGs would thus fail to characterize highly complex tasks despite their causal structure [122]. DNNs are crucial in overcoming this limitation, as discussed below.
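Before turning to DNNs, the following minimal Python sketch illustrates the two notions just described: a KG as (node, edge, node) triples and KG embedding with a translational-distance score. The toy triples, the embedding dimension, and the random (untrained) embeddings are illustrative assumptions rather than a method prescribed by [121] or [149].

```python
import numpy as np

# Toy KG: each triple is (head node, edge label, tail node).
triples = [
    ("apple", "is_a", "fruit"),
    ("iPad", "made_by", "Apple Inc."),
    ("iPad", "used_for", "information processing"),
]

# KG embedding: map every entity and relation to a continuous vector space.
# Here the vectors are random stand-ins; in practice they would be learned.
rng = np.random.default_rng(seed=0)
entities = {name for h, _, t in triples for name in (h, t)}
relations = {r for _, r, _ in triples}
embedding = {name: rng.normal(size=4) for name in entities | relations}

def translational_distance_score(h: str, r: str, t: str) -> float:
    """Translational-distance score: values closer to zero suggest a more plausible triple."""
    return -float(np.linalg.norm(embedding[h] + embedding[r] - embedding[t]))

for h, r, t in triples:
    print(f"score({h}, {r}, {t}) = {translational_distance_score(h, r, t):.3f}")
```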
#### Iv-B2 DNNs
pertaining to deep networks' inherent ability in universal function approximation [153, 154, 155, 156, 153, 154, 155, 156], DNNs can be effectively employed for semantic representation. As a result, many DL-enabled SemCom techniques (e.g., [157, 158, 159, 160, 161, 162, 81, 87, 88]) have been developed using an end-to-end trained joint source/channel encoder and a joint source/channel decoder that have been designed using state-of-the-art deep networks such as _Transformers_[164, 165, 166], DNNs [167, 168, 169], and convolutional neural networks (CNNs) [170]. In spite of the fact that DNNs are mature and easy to reparameterize; integrate fairly easily with existing AI/ML models; and have been widely adopted for semantic representation and therefore the design of SemCom systems, they are:
* ineffective at reasoning13 the cause, context, or effect of an event or source of data [122]; Footnote 13: Since DNNs are not knowledge-driven networks, their reasoning capacity is limited by the statistical nature of the data [122]. Consequently, DNNs are widely believed to be poor at solving commonsense inference tasks [171].
* unable to capture the complexity of data or learn a proper representation if the data are not purely statistical [122].
Accordingly, DNNs face a major challenge when it comes to accurate semantic representation due to their limited contextual information reasoning capabilities that are not amenable to statistical relationships [122]. This calls for another semantic representation technique known as NLP, as discussed below.
#### Iv-B3 NLP
the NLP methodology for semantic representation centers on using a human language to describe the semantic information contained in raw data [122]. The semantic representation of data has been facilitated by advancements in DL-driven NLP [139, 140]. These advancements have inspired the development of a number of text SemCom techniques [81, 85, 172] in which natural languages can be used to describe the data [122]. Thus, semantic representation using NLP has the following advantages:
* being readily understandable and decodable for design purposes [122], and
* being appropriate for specific data structures that greatly depend on text data [122].
Nevertheless, semantic representation by NLP is restricted by syntax, pragmatics, and wording [122], which translates to the following major challenge: converting the bit-pipeline problem to a word-pipeline problem [122]. In alleviating this challenge, toposes are vital, as presented below.
#### Iv-B4 Toposes
originated from _homological algebra_ and _algebraic topology_, toposes transform every data structure by a family of objects in a well-defined topos [144, 145]. Accordingly, semantic representation using toposes translates current data structures to well-defined _morphisms_ that make it possible to extract the unobserved semantic information [122]. To this end, employing toposes for semantic representation has two benefits:
* toposes are capable of disentangling unobserved contextual patterns [122], and
* toposes can reason beyond statistical boundaries [122].
Despite these crucial advantages, semantic representation using toposes has its drawbacks because several topos concepts remain mathematically intractable and cumbersome to characterize [122]. Consequently, deploying toposes for semantic representation faces these major challenges in connection with SemCom:
* toposes cannot readily be unified with coexisting AI frameworks [122], and
* toposes can be inherently quite computationally complex when handling raw data at the SemCom transmitter [122].
These challenges can make toposes unattractive and non-scalable for the overall design of an end-to-end communication system [122]. In mitigating this challenge in part, causal representation learning is useful, as discussed below.
#### Iv-B5 Causal Representation Learning
semantic representation via causal representation learning [146, 147] aims to learn a minimalist representation that can partly expose the unknown causal structure of the data [122]. The exposed data structure can reveal the semantic content elements of the data and their relationships - hence the context - while assuring high generalizability and minimalism [122]. Apart from its minimalism and generalizability, causal representation learning for semantic representation has the following benefits:
* it can leverage _interventions_ and _counterfactuals_[173, 174, 175] to understand the structure of the data beyond _associative logic_, and
* the context of transmission is implicitly characterized by the causes of the semantic content elements [122].
However, semantic representation via causal representation learning is restricted by the need to pose suitable interventions or counterfactuals at the apprentice [122]. In this vein, a major fundamental challenge of causal representation learning
is embedding a structural causal model within a DNN that can characterize both statistical and causal properties [122].
Apart from semantic representation, the fundamental performance quantification of both SemCom and goal-oriented SemCom systems/algorithms necessitate a fundamental theory of semantic information. To this end, we discuss next some of the existing theories of semantic information.
### _Theories of Semantic Information_
The first theory of semantic information was proposed by Carnap and Bar-Hillel in the early 1950s and was based on logical probabilities [95, 176]. They used logical probabilities (as opposed to the statistical probabilities used in the Shannon information theory) over the content of a sentence to quantify the amount of information in a sentence in a given language [79, 177]. In their theory, information is perceived as a set of excluded possibilities [178], and a sentence's logical probability is measured by the likelihood that it would be true in all possible situations [177]. To this end, Carnap and Bar-Hillel's semantic information theory (SIT) asserts that "A and B" provides more information than "A" or "B" (since "A and B" is less likely to be true), "A" provides more information than "A or B", and a tautology (which is always true) provides no information [177]. This SIT is considered a model-theoretical approach to assign probabilistic values to logical sentences and thereby affirm a close relationship between the quantity of information in a sentence and the set of its models [177]. Thus, a consistent sentence that has fewer models comprises more information [177], in line with Nilsson's probabilistic logic [179].
The above SIT does not consider the qualification of the information content as _truthful_[127, 180]. However, Floridi developed a _theory of strongly semantic information_ (TSSI)14 to capture truthfulness by defining semantic-factual information in terms of its data space as well-formed, meaningful, and truthful data [127, 128, 180]. TSSI is aimed at solving the _Bar-Hillel-Carnap paradox_ (BCP) [128, 180] in Carnap and Bar-Hillel's SIT, wherein contradictions provide an infinite amount of information [177]. TSSI resolves this paradox by working from the basic idea that the informativeness of a statement is measured by a positive/negative degree of semantic distance (or deviation) from "truth" [128, 180]. This makes TSSI completely different from Carnap and Bar-Hillel's SIT, which specifies informativeness as a function over all situations [177].
Footnote 14: We note that the author of [181] attacked – fiercely and rather subjectively – Floridi and his TSSI while seeking to defend modern information theory as follows: “I will defend the view that notions that are associated with truth, knowledge, and meaning all can adequately be reconstructed in the context of modern information theory and that consequently there is no need to introduce a concept of semantic information” [181].
Despite Floridi's attempt to resolve the BCP with his TSSI aimed at capturing truthfulness, D'Alfonso argues [182] that TSSI is incomplete in regard to quantifying all possible statements and that there exist propositional sentences that cannot be assessed using the TSSI approach [177, 180]. As a result, D'Alfonso took inspiration from the existing works on _truthlikeness_[183, 184] (the degree of being similar to the truth) and put forward the _value aggregate method_[182, Section 4] that seeks to capture both inaccuracy and vacuity using the formal models of truthlikeness. In doing so, D'Alfonso attempts to extend information quantification to the semantic concept of quantity of misinformation, where semantic information and semantic misinformation are defined as true semantic content and false semantic content, respectively [50]. Apart from this theory and the other aforementioned SITs, other semantic information modeling approaches/theories include semantic information G theory [185], the theory of the semantics of questions and the pragmatics of answers [186], semantic information analysis from the vantage point of thermodynamics [187], the algebraic theory of semantic information [178], the conceptual space theory of semantics [188, 189], causal semantics [175], information algebra [190], the theory of information flow [191], universal semantic communication [94, 192], semantic coding [193], SIT via organized complexity [194], and SIT via quantum corollas [148].
Despite the numerous theories of semantic information that exist, as highlighted above, existing theories are fundamentally incomplete. From the fundamental standpoint of neuroscience and cognitive science, semantics can be defined in (and is closely related to) the context of subjective experience, i.e., "meaningful" parallels "meaningful to a subject/person" [129]. In this vein, the essence of semantics becomes synonymous with the notion of potential experience [129]. This calls for a _semantic general theory of everything_[129].15 However, the development of such a fundamental general theory is fraught with numerous fundamental challenges. Without being bogged down by the fundamental challenges, however, current research progress on SemCom and goal-oriented SemCom can be guided by a rigorous definition of semantic entropy. Accordingly, we move on to the existing definitions of semantic entropy.
Footnote 15: “In conclusion, to develop a unified semantic theory applicable to subjective experience and objective physical reality, we need to better understand semantics itself, from a conceptual to a computational level. This sort of knowledge can be extracted from natural language and all documents, viewed as a collective product of all human minds of all generations” [129].
### _Definitions of Semantic Entropy_
Around 1952, Carnap and Bar-Hillel introduced [176] the concept of semantic entropy of a sentence - within a given language - which is defined as [121, eq. (1)]
\[H(s,e):=-\log c(s,e), \tag{1}\]
where \(c(s,e)\) is the degree of confirmation of sentence \(s\) on the evidence \(e\), which is written as [121, eq. (2)]
\[c(s,e):=\frac{m(e,s)}{m(e)}, \tag{2}\]
where \(m(e,s)\) and \(m(e)\) denote the logical probability of \(s\) on \(e\) and of \(e\)[121], respectively.
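As a simple numerical illustration of (1) and (2), consider the following Python sketch; the logical probability values and the base-2 logarithm are hypothetical placeholders rather than values from [176].

```python
import math

def degree_of_confirmation(m_e_s: float, m_e: float) -> float:
    """c(s, e) = m(e, s) / m(e), as in (2)."""
    return m_e_s / m_e

def semantic_entropy(m_e_s: float, m_e: float) -> float:
    """H(s, e) = -log c(s, e), as in (1); a base-2 logarithm is assumed here."""
    return -math.log2(degree_of_confirmation(m_e_s, m_e))

# Hypothetical logical probabilities: the evidence e confirms sentence s only weakly,
# so the sentence is comparatively informative.
print(semantic_entropy(m_e_s=0.2, m_e=0.5))  # -log2(0.4) = 1.32...
```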
Regarding KB being useful to correctly infer the transmitted message - even when the receiver is unable to directly decode a semantic message - given a set of logical relationships [107], the authors of [103] introduce the notion of _knowledge entropy_.
The knowledge entropy of a KB is defined as the _uncertainty of the answers it computes_[107]. Thus, knowledge entropy equates to the average semantic entropy of query (message) \(x\) calculable from the KB \(\mathcal{K}\)[103, eq. 1], [107, eq. (7)]:
\[H(\mathcal{K}):=\frac{1}{|\mathcal{K}|}\sum_{x\in\mathcal{K}}H(x), \tag{3}\]
where \(H(\mathcal{K})\) is the knowledge entropy of \(\mathcal{K}\), \(|\mathcal{K}|\) is the size of the KB, and \(H(x)\) is the semantic entropy of each message \(x\) and is defined as [107, eq. (4)]
\[H(x):=-\big{[}m(x)\log(m(x))+\big{(}1-m(x)\big{)}\log\big{(}1-m(x)\big{)}\big{]}, \tag{4}\]
where \(m(x)\) is the logical probability pertaining to the probability of being true in a given world model [107].
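The following Python sketch evaluates (3) and (4) for a toy KB; the logical probabilities assigned to the KB messages and the base-2 logarithm are illustrative assumptions.

```python
import math

def message_semantic_entropy(m_x: float) -> float:
    """H(x) in (4): binary semantic entropy of a message with logical probability m(x)."""
    if m_x in (0.0, 1.0):  # a message that is certainly true or false contributes no uncertainty
        return 0.0
    return -(m_x * math.log2(m_x) + (1.0 - m_x) * math.log2(1.0 - m_x))

def knowledge_entropy(logical_probs: list) -> float:
    """H(K) in (3): average semantic entropy of the messages computable from the KB."""
    return sum(message_semantic_entropy(m) for m in logical_probs) / len(logical_probs)

# Hypothetical KB with three messages and their logical probabilities m(x).
print(knowledge_entropy([0.9, 0.5, 1.0]))  # a lower H(K) indicates a more certain KB
```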
Grounded on a language comprehension model - contrary to the probabilistic structure of a language - in the context of the structure of the world, the authors of [195] derive semantic entropy that is given by [121, eq. (3)]
\[H(\mathbf{v}_{t}):=-\sum_{\mathbf{v}_{M}\in\mathcal{V}_{M}}\mathbb{P}(\mathbf{v}_{M}|\mathbf{ v}_{t})\log\mathbb{P}(\mathbf{v}_{M}|\mathbf{v}_{t}), \tag{5}\]
where \(\mathcal{M}\) denotes the set of models that reflects the probabilistic structure of the world and \(\mathcal{V}_{\mathcal{M}}:=\{\mathbf{v}_{M}|\mathbf{v}_{M}(i)=1\text{ if and only if }M_{i}=M\text{ and }M\text{ is a unique model in }\mathcal{M}\}\)[121]. The comprehension-centric semantic entropy model expressed by (5) quantifies uncertainty w.r.t. the whole meaning space and relies on both linguistic experience as well as world knowledge [121].
Aside from the above semantic entropy definitions - expressed in (1) and (5) - that pertain to the language system, semantic entropy concerning intelligent tasks has also been studied in [196]. The author of [196] offers an information-theoretic method for measuring semantic entropy in translation tasks by employing translational distributions of words in parallel text corpora [121]. According to this method, the semantic entropy of each word \(w\) is defined as [121, eq. (4)]
\[H(w):=H(T|w)+N(w)=-\sum_{t\in T}\mathbb{P}(t|w)\log\mathbb{P}(t|w)+\] \[\mathbb{P}(NULL|w)\log F(w), \tag{6}\]
where \(T\) denotes the set of target words, \(H(T|w)\) represents the translational inconsistency of a source word \(w\), \(N(w)\) stands for the contribution of null links of \(w\), and \(F(w)\) is the frequency of \(w\)[121]. In addition, the authors of [197] define semantic entropy for classification tasks by considering the membership degree in axiomatic fuzzy set theory [197]. In accordance with their framework, the authors of [197] first obtain the matching degree regarding the characterization of the semantic entropy of the data samples in class \(C_{j}\) on semantic concept \(\varsigma\) as [121, eq. (5)], [197, eq. (12)]
\[D_{j}(\varsigma):=\frac{\sum_{x\in\mathcal{X}_{C_{j}}}\mu_{\varsigma}(x)}{ \sum_{x\in\mathcal{X}}\mu_{\varsigma}(x)}, \tag{7}\]
where \(\mathcal{X}_{C_{j}}\) denotes the set of data for class \(C_{j}\) w.r.t. all \(j\in\{1,2,\ldots,m\}\) and \(\mathcal{X}\) stands for the data set of all classes [121]. Using (7), the semantic entropy of class \(C_{j}\) on \(\varsigma\) is defined as [121, eq. (6)], [197, eq. (13)]
\[H_{C_{j}}(\varsigma):=-D_{j}(\varsigma)\log_{2}D_{j}(\varsigma). \tag{8}\]
Using (8), the semantic entropy of concept \(\varsigma\) on \(\mathcal{X}\) is defined as [121, eq. (7)], [197, eq. (14)]
\[H(\varsigma):=\sum_{j=1}^{m}H_{C_{j}}(\varsigma). \tag{9}\]
The uncertainty in designing the classifier is minimized by the definitions in (7)-(9), which can be used to obtain the optimal semantic description for each class [121].
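A minimal Python sketch of the computations in (7)-(9) is given below; the fuzzy membership degrees and class labels are hypothetical values chosen only to illustrate the formulas, not data from [197].

```python
import math

# Hypothetical membership degrees mu_varsigma(x) of the samples of each class C_j
# on a single semantic concept varsigma.
memberships_by_class = {
    "C1": [0.9, 0.8, 0.7],
    "C2": [0.2, 0.1],
}
total_membership = sum(sum(values) for values in memberships_by_class.values())

def matching_degree(cls: str) -> float:
    """D_j(varsigma) in (7): the class's share of the total membership mass."""
    return sum(memberships_by_class[cls]) / total_membership

def class_semantic_entropy(cls: str) -> float:
    """H_{C_j}(varsigma) in (8)."""
    d = matching_degree(cls)
    return -d * math.log2(d) if d > 0.0 else 0.0

# H(varsigma) in (9): sum of the class-wise semantic entropies.
print(sum(class_semantic_entropy(cls) for cls in memberships_by_class))
```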
Whereas the above-mentioned definitions apply mainly to a single task, the authors of [198] investigate an information-theoretic framework to quantify the semantic information of any source for any task. Regardless of the task at hand, the authors of [198] define semantic entropy as the minimum number of semantic queries about data \(X\) whose answers are sufficient to predict task \(Y\)[121]. Mathematically, the semantic information quantification of the work in [198] is given by [121, eq. (8)], [198, eq. (2)],
\[H_{Q}(X;Y):=\min_{E}\mathbb{E}_{X}\big{\{}\big{|}Code_{Q}^{E}(X) \big{|}\big{\}} \tag{10}\] \[\text{s.t.}\quad\mathbb{P}(y|Code_{Q}^{E}(x))=\mathbb{P}(y|x), \quad\forall x,y,\]
where \(E\) denotes the semantic encoder and \(Code_{Q}^{E}(x)\) represents the query vector extracted from \(X\) with \(E\)[121]. As seen in (10), one needs to find the optimal semantic encoder (that encodes \(X\) into the minimal representation that can faithfully predict the task [121]) to be able to obtain the semantic entropy.
Apart from the previously outlined techniques for measuring semantic entropy, several new methods - such as semantic information pursuit and variational inference - are emerging and they need to be further investigated [121]. To summarize, all of the aforementioned definitions except the last one are task-oriented, whereas the last one can be applied to different tasks [121]. However, designing the corresponding optimal semantic encoder is as challenging as obtaining semantic entropy [121]. Therefore, there is no unifying definition for semantic entropy: existing definitions lack the operational relevance of the Shannon entropy in many engineering problems [121].
In what follows, we discuss extensively the state-of-the-art research landscape of SemCom.
## III State-of-the-Art Research Landscape of SemCom
SemCom aims to convey a desired meaning. A desired meaning is commonly conveyed through a SemCom transceiver, as shown in Fig. 6. In Fig. 6, a semantic representation inspired by [93, Fig. 2] is used to convert the source data to a semantic modality [93] that will be encoded semantically by a semantic encoder (w.r.t. a source KB). The semantic encoder's function in various state-of-the-art studies - such as those on DL-based SemCom (e.g., [81, 87, 88, 99, 199, 200]) - jointly encompasses semantic representation and semantic encoding as schematized in Fig. 6. This figure also shows a
receiver's decoding in reference to the desired meaning using the in tandem operation - w.r.t. the destination KB - of the semantic decoding and semantic inference blocks. In a DL-based SemCom (e.g., [87, 88, 89, 199, 200]), these blocks' semantic inference and semantic decoding tasks are combined and performed by a semantic decoder (see Fig. 2). Per Fig. 6, semantic decoding and semantic inference can suffer greatly from semantic noise (as discussed in Section I-B) when there is a mismatch between the source KB and destination KB. The destination KB needs to be shared with the source KB in real time for effective SemCom, similar to productive human conversation requiring common knowledge of the communicating parties' languages and cultures [50]. This knowledge sharing facilitates KB-assisted semantic extraction (SE), which is a SemCom technique for joint semantic encoding and decoding.
The joint semantic encoding and decoding process is regarded as SE [50]. SE is a core component of a semantic transceiver, and four major state-of-the-art SE techniques exist [50, Fig. 10], [105, Fig. 3]: DL-based SE, reinforcement learning (RL)-based SE, KB-assisted SE, and semantic-native SE, which we discuss below. Apart from these SE techniques, there are also some specialized SE approaches for specific semantic-aware communication scenarios [50].
We begin with DL-based SE, which leverages advancements in DL [201, 202, 203, 204, 205, 206] and NLP [140]. DL-based SE aims to enhance the SemCom system's robustness in low signal-to-noise ratio (SNR) regimes by modeling the semantic (channel) encoder and the semantic (channel) decoder - at the transmitter and receiver, respectively - as two separate learnable sections that are linked through a random channel [50, 105]. The random channel - which is often modeled by a generative adversarial network (GAN) [207, 208, 209] or an untrainable layer - is trained using the DNN-based semantic (channel) decoder and the DNN-based semantic (channel) encoder in an end-to-end manner with differentiable loss functions such as cross entropy (CE) and mean squared error (MSE). MSE- and CE-mediated end-to-end training treats the joint semantic encoding and decoding process as a "black box" [210], which affirms a _fundamental lack of interpretability_. The lack of interpretability makes the effectiveness of DL-based SE hard to quantify [50]. As highlighted so far, DL-based SE considers only the semantic coding problem, without any semantic understanding [50]. This challenge can be alleviated in part by using RL-based SE.
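Before turning to RL-based SE, the following PyTorch sketch makes the DL-based SE pipeline just described tangible: a toy DNN-based semantic (channel) encoder and decoder are trained end to end through an untrainable AWGN channel layer with an MSE loss. The architecture, dimensions, and SNR value are illustrative assumptions and do not reproduce any specific design from the works cited above (a GAN-modeled channel or a CE loss could be substituted).

```python
import torch
import torch.nn as nn

class AWGNChannel(nn.Module):
    """Untrainable random-channel layer: adds white Gaussian noise at a fixed SNR (dB)."""
    def __init__(self, snr_db: float = 10.0):
        super().__init__()
        self.snr_db = snr_db

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        signal_power = x.pow(2).mean()
        noise_power = signal_power / (10.0 ** (self.snr_db / 10.0))
        return x + torch.sqrt(noise_power) * torch.randn_like(x)

class SemanticTransceiver(nn.Module):
    """Toy semantic (channel) encoder and decoder linked by the random channel."""
    def __init__(self, dim_in: int = 32, dim_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 16), nn.ReLU(), nn.Linear(16, dim_latent))
        self.channel = AWGNChannel(snr_db=10.0)
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 16), nn.ReLU(), nn.Linear(16, dim_in))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.channel(self.encoder(x)))

model = SemanticTransceiver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # a CE loss would be used instead for discrete/symbolic sources

for _ in range(200):                      # end-to-end training of encoder and decoder
    x = torch.randn(64, 32)               # stand-in for semantically represented source data
    loss = loss_fn(model(x), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final reconstruction MSE: {loss.item():.4f}")
```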
As a first RL-based SE scheme, the authors of [83, 211] propose to integrate RL into an end-to-end text SemCom system, wherein the encoder and decoder are viewed as an agent that interacts with sentences that are treated as an external environment [50]. For this RL-based SE technique, simulation results demonstrate that it outperforms DL-based SE for non-differentiable semantic metric optimization. Nonetheless, RL-based SE suffers from inflexibility when it comes to SE for variable goal-oriented SemCom [50]. This type of SemCom's effectiveness can be improved by using KB-assisted SE [50].
KB-assisted SE incorporates the KB into the encoder and decoder in an end-to-end manner with synchronized KBs at both ends to efficiently extract semantic information for scenarios with multiple communication tasks [50, 105, 212]. The KB-assisted SE technique can achieve goal-based SE with retraining [50] using a typical KB, which comprises a computational ontology, facts, rules, and constraints [50]. Meanwhile, a KB in SemCom consists of source information, goals corresponding to desired tasks, and methods of reasoning that can be understood, recognized, and learned by all parties involved in communication [105]. Moreover, KB-assisted SE has also addressed goal-based SE without retraining [50]. However, it lacks self-adaptability for the possible evolution of a communication goal [50]. In this vein, it would be especially challenging to build a general KB that can capture the complex and diverse relationships that exist between semantic information and tasks/goals [50]. This limitation is partly addressed by semantic-native SE.
Semantic-native SE is motivated by the fact that the aforementioned SE techniques are effective for communication systems with unchanging semantics, whereas semantics often vary over time in real-world scenarios [50]. To address this challenge and give transceivers contextual reasoning ability, the authors of [74] introduce _System 1_ and _System 2_ (inspired by the book in [213]) semantic native communication (SNC). In System 1 SNC, a speaker conceptualizes and symbolizes an entity of interest (e.g., an abstract idea, a physical phenomenon, or an object) as a semantic representation - one that is decodable as the intended entity by its listener - to be communicated to a target listener [74]. When System 1 SNC is infused with contextual reasoning such that the speaker locally and iteratively communicates with a virtual agent built on a listener's unique way of coding its semantics, System 2 SNC allows the speaker to extract its listener-tailored effective semantics [74]. Despite System 2 SNC's effective semantics, System 1 and System 2 SNC share a limitation: semantic-native SE through SNC is still a theoretical model that is difficult to generalize in practice [50].
Considering the wide variety of SemCom techniques and trends that exist, the authors of [122] emphatically clarify16 what SemCom is and is not:
Footnote 16: This paper does not necessarily make distinctions regarding the design philosophies of SemCom and goal-oriented SemCom in order to incorporate a variety of views on SemCom and goal-oriented SemCom.
* SemCom is not data compression.
* SemCom is not only an "AI for wireless" concept.
* SemCom is not only goal-oriented communication.
* SemCom is not only application-aware communication.
In light of these clarifications, the authors of [122, 214] propose a SemCom system with memorizable and learnable data patterns, which is schematized in Fig. 7. Per Fig. 7, the teacher first observes an event \(X\) that is subsequently split into memorizable and learnable patterns. The memorizable component \(X_{m}\) and the learnable component \(X_{l}\) are then transformed into a binary representation and a minimal semantic representation, respectively. Thereafter, the following cascaded signal processing steps occur:
* The binary representation of the memorizable part is transmitted via a physical channel after it is transformed by the source encoder followed by the channel encoder. The received signal is then transformed via the channel decoder followed by the source decoder. Thereafter, a raw event reconstruction occurs, as shown in Fig. 7 [122].
* The semantically represented learnable component is transformed by joint source and channel (JSC) coding prior to its transmission via the physical channel. Once transformed by the physical channel, the received signal is passed through the JSC decoder, whose output is employed to determine the materialization of the structure and the variability of the task, as shown in Fig. 7. Then, the semantic content is generated via representation [122].
Once the mentioned cascaded signal processing is completed, the end-to-end SemCom system is concluded by integrating the reconstructed (classical) memorizable component and (semantic) learned content into the recovered content from the teacher [122], as depicted in Fig. 7. The _total achievable capacity_\(C_{T}\) of the SemCom system depicted in Fig. 7 is given by [122, eq. (16)]
\[C_{T}:=C_{C}+C_{R}=W\log_{2}(1+\gamma)+\Omega\log_{2}(1+\eta_{b,d}), \tag{11}\]
where \(C_{C}:=W\log_{2}(1+\gamma)\) is the Shannon capacity for \(W\) and \(\gamma\) being the bandwidth and the signal-to-interference-plus-noise ratio (SINR), respectively, and \(C_{R}:=\Omega\log_{2}(1+\eta_{b,d})\) is the _reasoning capacity_ [122, Proposition 3] for \(\Omega\) and \(\eta_{b,d}\) being the maximum computing capability of the server deployed to represent/generate the semantic representation and the _communication symmetry index_ [122, Proposition 2] per second, respectively [122].

Fig. 6: System model for semantic-oriented communications – modified from [50, Fig. 6(b)].

Fig. 7: SemCom with memorizable and learnable data patterns – modified from [122, Fig. 8].
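As a quick numerical illustration of Eq. (11), the short Python snippet below evaluates \(C_{T}=W\log_{2}(1+\gamma)+\Omega\log_{2}(1+\eta_{b,d})\); all parameter values are arbitrary assumptions chosen only to show the computation, not figures from [122].

```python
import math

W = 10e6        # bandwidth in Hz (assumed)
gamma = 15.0    # SINR in linear scale (assumed)
Omega = 1e6     # maximum computing capability of the semantic server (assumed)
eta = 3.0       # communication symmetry index per second (assumed)

C_C = W * math.log2(1 + gamma)       # Shannon capacity term
C_R = Omega * math.log2(1 + eta)     # reasoning capacity term
C_T = C_C + C_R                      # total achievable capacity, Eq. (11)
print(f"C_C = {C_C:.3e}, C_R = {C_R:.3e}, C_T = {C_T:.3e}")
```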
Aside from the above-discussed works that attempt to set out the concept of SemCom, some other state-of-the-art works proffer useful SemCom architectures. One such architecture, the layered SemCom architecture proposed by the authors of [104], is shown in Fig. 8. The authors of [104] propose a semantic _open system interconnection_ (OSI) model that is built on the conventional OSI protocol stack and has a semantic layer added as a sub-layer of the application layer (see Fig. 8). Due to its sub-layer position, the semantic layer interfaces with sensors, actuators, and users, and has access to algorithms and the content of data in a specific application [104]. It therefore executes semantic/effectiveness encoding/decoding and sends (see Fig. 8) semantic/effectiveness encoded data (SEED) - such as data importance information (DII), partial algorithm information (PAI), and data type information (DTI) - to the lower radio access layers over a control channel [104]. The semantic layer also receives (per Fig. 8) channel rate information (CRI) from the radio access layers over a control channel. This CRI is employed to mitigate semantic noise through semantic symbol error correction or to control computing in the application layer [104].
The authors of [78] put forward the intelligent and efficient semantic communication (IE-SC) network architecture, which is shown in Fig. 9. This architecture comprises a semantic intelligence plane (SI plane), a semantic-empowered physical-bearing layer (S-PB layer), a semantic-empowered network protocol layer (S-NP layer), a semantic-empowered application-intent layer (S-AI layer), and a semantic information flow (S-IF) [78]. Despite this architecture's promises of intelligent and efficient SemCom, it is radically different from previous architectures and may not be interoperable with the state-of-the-art OSI model of 5G networks and their evolution. Meanwhile, the authors of [78] advance the traditional human-machine-thing architecture with their _human-machine-thing-genie_ architecture (to orchestrate the physical and digital worlds) wherein _genie_ is envisioned to be the main entity of the digital world that is used as an AI-empowered _super intelligent_ agent for physical communication objects. Furthermore, the authors of [122] propose the open-RAN (O-RAN) architecture for SemCom-enabled 6G and beyond, which is shown in Fig. 10. This architecture introduces a reasoning plane and incorporates real-time AI-oriented blocks in the open radio unit (O-RU), the open distributed unit (O-DU), and the centralized unit (CU) [122].
We now move on to the state-of-the-art vision and tutorial works on SemCom.
### _Vision and Tutorial Works on SemCom_
In this section, we present the existing vision and tutorial works on SemCom. We begin with the vision works.
#### V-A1 Vision Works on SemCom
The authors of [215] first introduce SemCom for text transmission (text SemCom) by formulating the text SemCom problem as a static Bayesian game and a dynamic game. The authors of [104] explain the principles of SemCom; introduce H2H SemCom, H2M SemCom, and M2M SemCom systems as well as techniques for different areas in which H2M and M2M SemCom can be applied; and detail an approach for designing SemCom systems based on KGs [138]. The authors of [105] envision edge-driven SemCom and SemCom-driven edge toward the efficient _intelligentization_ of future networks and discuss the corresponding open research issues. The authors of [49] propose their vision of 6G wireless networks, wherein SemCom and goal-oriented SemCom are key technologies that drive a crucial paradigm shift away from Shannon's information-theoretic principles. Departing from these principles without necessarily resorting to more bandwidth or power, the authors of [49] advocate that increased effectiveness and reliability can be achieved by identifying the information that is necessary to make a receiver extract precisely the intended meaning or to actuate the right procedures to accomplish a predefined goal efficiently. The authors of [78] put forward a systematic design for SemCom networks in the context of ubiquitous 6G networks that is based on an intelligent and efficient semantic communication network architecture.

Fig. 8: A layered architecture of a SemCom system – [104, Figure 2]: SEED: semantic/effectiveness encoded data; PAI: partial algorithm information; DTI: data type information; DII: data importance information; and CRI: channel rate information.
The authors of [216] propose _Transformer_-based solutions for several massive MIMO and SemCom problems, demonstrate Transformer-based architectures' superiority over other architectures, and discuss key challenges as well as open issues affecting Transformer-based solutions. The authors of [217] propose an information-theoretic framework wherein the semantic context is explicitly introduced as a hidden RV in the communication system design by recasting SemCom's system design problem as an _information bottleneck_ (IB) [218, 219] optimization problem. In light of Weaver's three levels of communication (see Fig. 1), the authors of [220] introduce the concept of a _semantic-effectiveness_ plane for effective filtering and control for post-5G wireless connectivity. The authors of [93] put forward an _understand-first-and-then-transmit_ SemCom framework. The authors of [77] propose a federated edge AI-based architecture to support resource-efficient semantic-aware networking.
The authors of [221] introduce a unified framework for semantics-guided source and channel coding by proposing an end-to-end SemCom system named _semantic coded transmission_. The authors of [222] envision a new intelligence paradigm named _edge semantic cognitive intelligence_ for 6G networks that is emerging at the confluence of edge intelligence and SemCom. In view of SNC, the authors of [223] introduce a visionary17 SemCom work that is inspired by a topological space perspective and wherein higher-order data semantics live in a _simplicial complex_. Based on this perspective, a transmitter first maps its data into a \(k\)-order simplicial complex and then learns its high-order correlations [223]. A simplicial autoencoder (AE) CNN is then used to encode this simplicial structure and its features into semantic embeddings in latent space for transmission [223]. Following the transmission and propagation of this SemCom signal, the receiver decodes the simplicial Laplacians from the received embeddings using a bilinear decoder and then infers the missing (or distorted) data using a simplicial convolutional decoder [223]. In summary, the transmitter and receiver collaboratively train a simplicial CNN AE to accomplish a SemCom task [223].

Fig. 9: The intelligent and efficient semantic communication (IE-SC) network architecture [78, Fig. 3] – SI plane: semantic intelligence plane; S-PB layer: semantic-empowered physical-bearing layer; S-NP layer: semantic-empowered network protocol layer; S-AI layer: semantic-empowered application-intent layer; S-IF: semantic information flow; and info.: information.

Fig. 10: O-RAN architecture for SemCom-enabled 6G and beyond – modified from [122, Figure 13]: SA: semantic-aware; RT: real-time; RP: reasoning plane; CU: centralized unit; RU: radio unit; DU: distributed unit; O-RU: open radio unit; O-DU: open distributed unit.
Footnote 17: The crux of the vision in [223] is that information is not a scalar, as in Shannon's case, but a topological space, much like the stored information/knowledge in our brains.
To address the lack of interpretability evident in the DNN-based protocol models (NPMs), the authors of [224] proffer a semantic protocol model (SPM) that is constructed by converting an NPM into an interpretable symbolic graph written in the probabilistic logic programming language known widely as _ProbLog_ [225]. The authors of [224] substantiate that the proposed SPM closely approximates an NPM while utilizing only 0.02% memory. The authors of [226] propose a new SemCom approach to 6G networks by proffering and assessing a hashing-based SemCom framework. In this framework, the authors' design of SE and domain adaptation enables the _joint optimization of information gathering, dissemination, and decision-making_ over 6G networks based on SemCom [226]. The authors of [122] disseminate a holistic vision of an end-to-end SemCom network that is rooted in overarching concepts of AI, causal reasoning, communication theory, networking, information theory, transfer learning, and minimum description length theory.
We now continue with existing tutorial works on SemCom.
#### V-A2 Tutorial Works on SemCom
The authors of [97] provide an overview of SemCom theory, frameworks, and DL-enabled system design. The authors of [76] provide an overview of recent works on SemCom, summarize the open issues, and highlight the corresponding challenges in theoretical research and practical implementation. The authors of [211] provide an overview of existing semantics-aware communication techniques (and their underlying shortcomings); rethink the design of semantics-aware communication systems and highlight some related concerns such as implementation cost; and establish a joint semantics-noise coding solution for the semantic coding problem and an RL-based similarity-targeted SemCom technique for both differentiable and non-differentiable semantic similarity metrics. The authors of [227] present a brief tutorial on SemCom and its information-theoretic perspectives.
The authors of [50] offer a comprehensive review of the fundamentals, applications, and challenges of SemCom. The authors of [76] provide an overview of DL-based SemCom, its open issues, and corresponding future research directions. The authors of [123] put forward a survey aimed at providing a clear picture of state-of-the-art SemCom developments. The authors of [121] offer a tutorial for communication theorists and practitioners that provides an introduction to contemporary tools and advancements in SemCom. The authors of [137] offer a brief discussion on the functionalities of SemCom-enabled networked intelligence and their respective open issues, and review related works. The authors of [228] discuss recent developments in state-of-the-art SemCom that exploit the conventional modules in wireless systems. The authors of [107] disseminate a tutorial-cum-survey that aims to provide a comprehensive understanding of state-of-the-art developments in SemCom - generally, semantics-empowered communication - and its applications.
Apart from the aforementioned vision and tutorial works on SemCom, the expanding body of state-of-the-art works on SemCom for the transmission of text, image, video, and multi-modal data encompass numerous SemCom techniques and trends such as cognitive SemCom [90], implicit SemCom [229], adaptive SemCom [230], context-based SemCom [231, 232], digital SemCom [233, 234], cross-modal SemCom [235], sequence-to-sequence SemCom (Seq2Seq-SemCom) [236], SemCom with conceptual spaces [237], inverse SemCom [238], one-to-many SemCom [239], quantum key distribution (QKD)-secured SemCom [240], encrypted SemCom [241], and quantum SemCom [242].
We now continue with state-of-the-art algorithmic developments in semantic-oriented communication.
### _Algorithmic Developments in Semantic-Oriented Communication_
This section discusses the state-of-the-art SemCom techniques for text transmission, audio transmission or recognition, image transmission or recognition, video transmission, and multi-modal signal transmission. We begin with the techniques for text transmission.
#### V-B1 SemCom for Text Transmission
The authors of [215] first introduce SemCom for text transmission (text SemCom) by formulating the text SemCom problem as a static Bayesian game and a dynamic game. In accordance with these games, the authors integrate semantic inference and physical layer (PHY) communications to optimize the entire transceiver. However, the text SemCom scheme in [215] quantifies semantic error at the word level as opposed to the sentence level. The authors of [243] represent the channel by a _dropout layer_ [244] and put forward a text SemCom scheme made up of an encoder and a decoder that are implemented by a stacked bidirectional long short-term memory (Bi-LSTM) network [245] and a stacked long short-term memory (LSTM) network [246], respectively.
The authors of [81] develop a DL-based text SemCom system (a Transformer-based system [247] as in Fig. 12) - named _DeepSC_ - that performs joint semantic-channel coding to produce a superior performance gain over existing techniques - in which the source and channel are coded separately - in low SNR regimes. The authors of [199] build on the DeepSC system and propose a lite distributed text SemCom system - dubbed _L-DeepSC_ - for IoT networks that considers the participating devices' limited power and computing capabilities. The authors of [85] took inspiration from L-DeepSC and
DeepSC, and put forward a _Universal Transformer_ (UT)-based text SemCom system that incorporates an adaptive circulation mechanism in the UT. This UT-based text SemCom technique offers a small performance improvement over DeepSC for low SNR regimes. However, the fidelity of DeepSC, UT-based text SemCom, and L-DeepSC can be destroyed by considerable semantic noise. To combat literal semantic noise and adversarial semantic noise, the authors of [157] develop a DL-enabled robust SemCom system named _R-DeepSC_. R-DeepSC improves system robustness in various wireless environments and outperforms DeepSC when the corpus is erroneous [157]. The aforementioned SemCom techniques that employ end-to-end DNNs do not generalize well under varying channel conditions [248], however. To address this challenge, the authors of [248] develop a semi-neural framework with an iterative joint source-channel coding (JSCC) architecture, named _iterative semantic JSCC_ (IS-JSCC).
Considering that most semantic metrics are non-differentiable, the authors of [83] introduce an RL-based optimization paradigm that is a self-critic policy gradient approach for possibly large-scale and complex text semantic transmission. In this technique, the authors handle the non-differentiable semantic channel optimization problem by training the decoupled semantic transceiver using self-critic stochastic iterative updating [83]. This semantic transceiver - whose encoder and decoder are made up of Bi-LSTM and LSTM, respectively - is named _SemanticRL-JSCC_ and improves the recovery of semantically meaningful sentences and the handling of semantic noise [83]. In the spirit of the work in [83], the authors of [211] also propose an RL-powered text SemCom paradigm and introduce a joint semantics-noise coding (JSNC) technique. The authors of [249] introduce SemCom over wireless relay channels and develop an AE-based text SemCom scheme for wireless relay channels with a _semantic forward_ (SF) protocol. The purpose of the SF protocol is to enable direct SemCom when the source node and the destination node have different background KBs [249]. To this end, the relay node can cooperatively use the background KB of the source and that of the destination to forward semantic information between the source and the sink [249].
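To illustrate the self-critic policy-gradient idea used when the semantic metric is non-differentiable, the minimal sketch below (in PyTorch, assumed available) rewards a sampled decoding relative to a greedy-decoded baseline and propagates gradients only through the log-probabilities of the sampled tokens; the metric interface, shapes, and names are illustrative assumptions rather than the exact formulation of the cited works.

```python
import torch

def self_critic_loss(token_logits, sampled_tokens, greedy_tokens, reference, metric):
    """token_logits: (batch, seq_len, vocab); sampled/greedy_tokens: (batch, seq_len)."""
    log_probs = torch.log_softmax(token_logits, dim=-1)
    sampled_logp = log_probs.gather(-1, sampled_tokens.unsqueeze(-1)).squeeze(-1).sum(dim=1)
    # Non-differentiable semantic rewards are computed outside the autograd graph.
    with torch.no_grad():
        reward = torch.tensor([metric(s, r) for s, r in zip(sampled_tokens, reference)])
        baseline = torch.tensor([metric(g, r) for g, r in zip(greedy_tokens, reference)])
    advantage = reward - baseline               # self-critic advantage
    return -(advantage * sampled_logp).mean()   # REINFORCE-style surrogate loss
```

Any sentence-level similarity score (e.g., a BLEU- or BERT-based metric) can be plugged in as `metric`, since the loss never needs its gradient.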
All the above-discussed text SemCom techniques assume fixed codeword length and are possibly inefficient as well as inflexible when it comes to handling varying sentence lengths [250]. The authors of [250] address this limitation by exploiting hybrid automatic repeat request (HARQ) and Reed Solomon (RS) channel coding, and combining them with semantic coding (SC) to propose a text SemCom technique named _SC-RS-HARQ_. SC-RS-HARQ benefits from the performance gains of SC and the reliability of the RS channel coding and HARQ [250]. The authors of [250] also put forward an end-to-end text SemCom architecture made up of a Transformer and a fully connected DNN - dubbed SCHARQ - that has been demonstrated to considerably reduce the number of bits required for sentence semantic transmission and the sentence error rate [250]. All the previously highlighted text SemCom techniques overlook the contextual correlation among sentences at the transmitter and fail to take historical text into consideration in the decoding process [232]. Considering historical text in the decoding process and contextual correlation among sentences at the transmitter, the authors of [232] proffer a context-based text SemCom technique. This technique has been demonstrated to outperform DeepSC in low SNR regimes [232]. The authors of [236] propose a computationally efficient - in extracting semantic information - text SemCom technique named seq2seq-SemCom. In seq2seq-SemCom, the pre-trained encoder-decoder transformers are integrated with end-to-end SemCom systems, and the channel encoder and decoder are composed of 5G new radio (5GNR)-compliant modules [236]. The seq2seq-SemCom technique works with all (general) text corpora - unlike DeepSC, which is dependent on specific datasets - and outperforms DeepSC in terms of semantic similarity, as demonstrated via link-level simulations that closely resemble an actual 5G system [236].
We now move on to discuss state-of-the-art SemCom techniques for audio transmission or recognition.
#### V-A2 SemCom for Audio Transmission or Recognition
SemCom has a variety of applications in semantic-aware (semantic-empowered) speech/audio transmission and recognition systems. We refer to such systems as _audio SemCom_ and discuss audio SemCom techniques below. To begin with, the authors of [88, 160] propose a DL-enabled audio SemCom system named _DeepSC-S_ that improves transmission efficiency by transmitting only semantic information. This audio SemCom scheme adopts a joint semantic encoder/decoder and channel encoder/decoder for efficient learning and speech feature extraction, and for mitigating wireless channel distortion. The authors of [87] put forward another DL-enabled audio SemCom system - dubbed _DeepSC-SR_ - for speech recognition by exploiting a joint semantic encoder/decoder and channel encoder/decoder for learning and extracting speech features while mitigating wireless channel impairments.
The authors of [251] propose a semantic-aware speech-to-text transmission audio SemCom system with a soft alignment module that extracts only the text-related semantic features and a redundancy removal module that drops the semantically redundant content. This technique reduces semantic redundancy and outperforms DeepSC-SR [251]. Extending the work in [251], the authors of [252] proffer DL-based speech-to-text transmission and speech-to-speech transmission audio SemCom systems that also deploy a soft alignment module and a redundancy removal module. Apart from audio SemCom for speech transmission or speech recognition, the authors of [158] develop a DL-enabled audio SemCom system - termed _DeepSC-ST_ - for speech recognition and synthesis. DeepSC-ST recovers the text transcription by using text-related semantic features and reconstructs the speech sample sequence through a joint semantic-channel coding scheme deployed to learn and extract semantic features as well as mitigate channel impacts [158]. The work in [158], meanwhile, demonstrates that DeepSC-ST outperforms traditional communication systems for speech recognition and speech synthesis tasks, especially in low SNR regimes.
The authors of [80] exploit federated learning (FL) for audio SemCom and investigate a _wav2vec_[253]-based autoencoder made of CNNs [254, 255] that comprises an audio SemCom
system that can effectively encode, transmit, and decode audio semantic information while reducing communication overhead. This audio SemCom's AE is trained using FL [256, 257, 169] to improve the accuracy of semantic information extraction and to substantially reduce transmission error compared with existing audio coding schemes that are based on pulse code modulation (PCM), low-density parity check codes (LDPC), and 64-QAM (quadrature amplitude modulation) [80].
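To make the federated training step concrete, the following minimal sketch shows federated averaging (FedAvg) of locally trained autoencoder weights, weighted by each client's dataset size; the toy model, the number of clients, and the dataset sizes are illustrative assumptions and not the exact procedure of [80].

```python
import copy
import torch
import torch.nn as nn

def federated_average(client_models, client_sizes):
    """Average client parameters, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    global_state = copy.deepcopy(client_models[0].state_dict())
    with torch.no_grad():
        for key in global_state:
            global_state[key] = sum(
                m.state_dict()[key] * (n / total)
                for m, n in zip(client_models, client_sizes)
            )
    return global_state

def make_autoencoder():
    # Toy stand-in for an audio semantic autoencoder (assumed 80-dim features).
    return nn.Sequential(nn.Linear(80, 16), nn.ReLU(), nn.Linear(16, 80))

clients = [make_autoencoder() for _ in range(4)]        # locally trained copies
new_weights = federated_average(clients, client_sizes=[100, 250, 80, 170])
server_model = make_autoencoder()
server_model.load_state_dict(new_weights)               # aggregated model to broadcast back
```

In each communication round, only these weight updates travel between the devices and the server, so the raw audio never leaves the edge devices.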
We now move on to discuss SemCom techniques for image transmission or recognition.
#### V-B3 SemCom for Image Transmission or Recognition
Hereinafter, we term SemCom techniques/systems for image transmission or recognition _image SemComs_. Among the first image SemCom works on image transmission over wireless channels, the authors of [163] investigate an image SemCom architecture named _deep JSCC_ whose encoder and decoder functions are parameterized by CNNs and trained jointly on the same dataset to minimize the average MSE of the reconstructed image. The joint training enables deep JSCC to not suffer from the _cliff effect_ as demonstrated in [163], while offering a graceful performance degradation as the channel SNR varies w.r.t. the SNR presumed during training. Building on deep JSCC, meanwhile, the authors of [258] introduce an AE-based JSCC scheme - named _DeepJSCC-f_ - that uses the channel output feedback. As demonstrated in [258], DeepJSCC-f significantly improves end-to-end reconstruction quality for fixed-length transmission and average delay for variable-length transmission. Furthermore, the authors of [259] build on DeepJSCC-f [258] and deep JSCC [163] by exploring the use of DL-based methods for progressive image transmission over wireless channels and introduce _DeepJSCC-l_. DeepJSCC-l is a group of DL-based JSCC algorithms made up of CNN-based AEs that are able to encode and decode images over multiple channels while supporting flexible bandwidth-adaptive transmission [259].
The aforementioned JSCC schemes presume that any complex value can be transmitted over a wireless channel [260]. However, this can create compatibility problems for hardware/protocols that can accept only certain sets of channel inputs (e.g., as prescribed by digital modulation) [260]. To overcome this limitation, the authors of [260] develop _DeepJSCC-Q_, which is an end-to-end-trained JSCC scheme for wireless image transmission using a finite channel input alphabet. DeepJSCC-Q has been demonstrated to perform comparably to prior JSCC schemes that permit complex value channel input whenever high modulation orders are available [260]. The authors of [261] took inspiration from the aforementioned DL-based wireless image transmission techniques and developed a DL-based image SemCom system - dubbed _MLSC-image_ - that is trained in an end-to-end manner for wireless image transmission. MLSC-image incorporates a multi-level semantic feature extractor that extracts both high-level semantic information (e.g., text semantics and segmentation semantics) and low-level semantic information (e.g., local spatial details of images) [261]. The semantic features are combined and then encoded by a joint semantic-channel (JSemC) encoder into symbols to be transmitted over the physical channel, whose output is fed to a JSemC decoder and then to an image reconstruction module [261, Fig. 1]. The JSemC encoder and decoder enable MLSC-image to outperform deep JSCC [163] in low compression ratio regimes (though it performs worse than deep JSCC in high compression ratio regimes) [261]. Moreover, the aforementioned JSCC schemes adapt the compression ratio in source coding and the channel coding rate dynamically in accordance with the channel SNR [162]. Instead of a resource allocation strategy, the authors of [162] deploy channel-wise soft attention to scale features according to the SNR in their proposed JSCC technique named attention DL-based JSCC (ADJSCC) [162, Fig. 5]. ADJSCC's adaptability, robustness, and versatility have been demonstrated by simulations [162].
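To illustrate the channel-wise soft attention idea behind ADJSCC-style SNR adaptation, the minimal sketch below (in PyTorch, assumed available) concatenates globally pooled feature statistics with the channel SNR and maps them to per-channel scaling factors; the layer widths and tensor shapes are illustrative assumptions, not the exact module of [162].

```python
import torch
import torch.nn as nn

class SNRAttention(nn.Module):
    """Rescale each feature channel with a factor predicted from the features and the SNR."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(num_channels + 1, num_channels // 2),
            nn.ReLU(),
            nn.Linear(num_channels // 2, num_channels),
            nn.Sigmoid(),                      # per-channel scaling factors in (0, 1)
        )

    def forward(self, features, snr_db):
        # features: (batch, channels, H, W); snr_db: (batch, 1)
        pooled = features.mean(dim=(2, 3))     # global average pooling per channel
        scale = self.fc(torch.cat([pooled, snr_db], dim=1))
        return features * scale.unsqueeze(-1).unsqueeze(-1)

# Usage: rescale a 64-channel feature map produced while the channel SNR is 5 dB.
attention = SNRAttention(num_channels=64)
rescaled = attention(torch.randn(8, 64, 16, 16), torch.full((8, 1), 5.0))
```

Because the SNR enters as an input rather than as a training-time constant, a single model can serve a range of channel conditions instead of requiring one network per SNR.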
The afore-discussed image SemCom works revolve around JSCC. The authors of [262] stray away from JSCC and devised an _image segmentation semantic communication_ (ISSC) system to manage the transmission of the massive amount of visual data perceived by vehicles' visual sensors over the internet of vehicles (IoV) [40]. For IoV applications, the proposed ISSC system efficiently transmits the semantic features of images that are extracted using a _Swin Transformer_[166]-based multi-scale semantic feature extractor that can broaden the receptive area of an image [262]. The ISSC system's encoder and decoder are jointly designed and end-to-end-trained to globally optimize the model parameters [262]. To this end, the ISSC system has been demonstrated to perform better than traditional coding schemes in the low SNR regimes [262]. The authors of [263] propose another Swin Transformer-based image SemCom scheme [166] - dubbed wireless image transmission transformer (WITT) - that is optimized for image transmission while considering the wireless channel's effect. It has been proven to outperform a CNN-based deep JSCC scheme as well as classical separation-based schemes [263].
The highlighted state-of-the-art JSCC techniques do not integrate any hyperpriors as side information though this is a promising concept that is widely deployed in modern image codecs [264]. To address this limitation, the authors of [264] devised a joint source-channel coding architecture that unifies the concept of nonlinear transform coding [265] and deep JSCC and is named nonlinear transform source-channel coding (NTSCC). NTSCC is a class of high-efficiency deep JSCC techniques that can adapt to the source distribution under the nonlinear transform [264]. The authors of [266] put forward a DNN-constructed joint transmission-recognition scheme that, unlike the previously discussed JSCC-based image SemCom techniques, makes IoT devices transmit data effectively to a server for image recognition. This DL-enabled joint transmission-recognition technique was shown to outperform a JPEG-compressed scheme and a compressed sensing-based scheme under analog transmission and digital transmission at all SNRs on the CIFAR-10 image database [266]. Furthermore, the authors of [99] propose a framework that introduces a robust design for end-to-end image SemCom systems that can withstand semantic noise through _adversarial training_ that incorporates samples with semantic noise in the training dataset [99]. Concerning this robust framework, simulation results confirm that it can considerably
improve the robustness of image SemCom systems against (image) semantic noise [99].
The authors of [161] develop an image SemCom technique that is both bandwidth sensitive and richer in semantics than the aforementioned image SemCom techniques by devising an RL-based adaptive semantic coding (RL-ASC) approach that encodes images beyond the pixel level. In this technique, a convolutional semantic encoder is used to extract semantic information that is in turn encoded by adaptive quantization in accordance with an RL-based semantic bit allocation model [161]. On the receiver side, meanwhile, the authors of [161] design a generative semantic decoder that exploits the attention model to fuse the local and global features and is deployed to reconstruct the transmitted semantic concepts [161]. Furthermore, the authors of [161] corroborate that their proposed RL-ASC approach can promote multiple vision tasks for SemCom scenarios. Most of the aforementioned JSCC schemes are optimized using traditional semantic metrics such as _peak signal-to-noise ratio_ and _multi-scale structural similarity_ and hardly account for human visual perception in SemCom [267]. To overcome this limitation, the authors of [267] devise a deep JSCC architecture that merges an encoder, a wireless channel, a decoder/generator, and a discriminator that are learned jointly w.r.t. both perceptual and adversarial losses. This deep JSCC technique has been shown to produce results that are more visually pleasing to humans than those of state-of-the-art image-coded transmission techniques and the aforementioned deep JSCC schemes [267].
We now move on to discuss SemCom techniques for video transmission.
#### V-B4 SemCom for Video Transmission
SemCom techniques/systems for video transmission are termed _video SemComs_, and we present below state-of-the-art video SemCom techniques.
To overcome conventional video compression's limitation that it reduces the resolution under limited bandwidth, the authors of [268] study semantic video conferencing (SVC), which maintains high resolution by transmitting key points to represent motion for scenarios in which the video background is almost static and the speakers do not change frequently. For this type of scenario, the authors of [268] develop SVC techniques that exploit HARQ and channel state information (CSI) feedback. These techniques considerably improve transmission efficiency [268]. The authors of [269] developed another video SemCom technique that also benefits from semantic information: a video SemCom framework that exploits nonlinear transform and conditional coding architecture to adaptively extract semantic features across video frames that are transmitted through a group of variable-length (learned) deep JSCC codecs and a wireless channel. This video SemCom technique performs significantly better than traditional wireless video coded transmission schemes [269].
The authors of [270] devised an end-to-end JSCC video transmission scheme - named _DeepWiVe_ - that leverages DNNs to directly map video signals to channel symbols. DeepWiVe combines video compression, channel coding, and modulation steps in a single neural transform and is capable of dynamic bandwidth allocation and residual estimation without the need for distortion feedback [270]. Furthermore, DeepWiVe overcomes the cliff effect, achieves a graceful degradation in channel quality, and produces superior video quality in highly bandwidth-constrained scenarios compared to both H.264 and H.265 [270]. Moreover, following the popularity of point cloud video (PCV) and PCV streaming, the authors of [271] developed an interest-aware SemCom scheme for immersive point cloud video streaming. This video SemCom technique aims to tackle the challenges associated with real-time PCV streaming on resource-constrained devices by using a region-of-interest (ROI) selection module (i.e., a two-stage efficient ROI selection method that considerably reduces the data volume), a lightweight decoder network, and an intelligent scheduler (for adaptive online PCV streaming) [271]. This video SemCom technique has been demonstrated to outperform an AI-driven technique by at least 10 frames per second (FPS) [271].
We now proceed to discuss SemCom techniques for multi-modal signal transmission.
#### V-B5 SemCom for Multi-Modal Signal Transmission
The previously highlighted text SemCom, audio SemCom, image SemCom, and video SemCom techniques focus on the efficient semantic transmission of text, audio, image, and video signals, respectively. However, many driving applications and services of 6G (see Sec. I-A) aim at offering immersive experiences with low latency and high reliability by transmitting multi-modal signals [235, 272]. This requires that multi-modal signals be transmitted efficiently, which can be accomplished by using an emerging SemCom technique named _cross-modal SemCom_ [235]. The cross-modal SemCom paradigm is inspired by cross-modal communication [272] and comprises three modules [235]:
* _Cross-modal semantic encoder_: it extracts cross-modal semantic information from the multi-modal source signals - while reducing encoding polysemy - for transmission [235].
* _Cross-modal semantic decoder_: it ensures the multi-modal source signals and the multi-modal recovered signals are consistent at the bit and semantic levels, while reducing decoding ambiguity [235].
* _Cross-modal KG (CKG)_: it provides essential background knowledge and signal patches for cross-modal semantic encoding and decoding [235].
We now move on to discuss algorithmic developments in semantic-aware communication and processing.
### _Algorithmic Developments in Semantic-Aware Communication and Processing_
A communication system that exploits semantic information in its design is termed hereinafter _semantic-aware communication_. In [273], the authors study the problem of air-to-ground URLLC for a moving ground user and put forward a multi-agent deep RL (DRL) framework, dubbed graph attention exchange network (GAXNet). In GAXNet, each unmanned aerial vehicle (UAV) locally constructs an attention graph that measures its level of attention to its neighboring UAVs using semantic representation encoding while sharing the
attention weights with other UAVs using SemCom to minimize any attention mismatch between them. This semantic-aware scheme has been demonstrated [273] to attain lower latency with higher reliability than the state-of-the-art QMIX scheme (see [274]). Also in the context of collaborative deep RL (CDRL), the authors of [275] propose a semantic-aware CDRL framework that enables knowledge to be efficiently transferred among heterogeneous agents that are distributed across a resource-constrained wireless cellular network and have semantically related tasks. They, therefore, introduce a new heterogeneous federated DRL algorithm [275] for selecting the best subset of semantically-associated DRL agents for collaboration. This CDRL algorithm has been shown to offer an 83% improvement in maximum reward compared with baseline methods [275].
Apart from CDRL algorithms, SemCom has also inspired the development of semantic-aware algorithms for the Metaverse [33]. When it comes to metaverse applications, the authors of [276] developed a semantic-aware transmission framework for transforming and transmitting sensing data from the physical world to a mobile service provider (MSP) in the Metaverse. In this framework, the authors of [276] put forward a semantic-aware sensing data transmission scheme and establish a contest theory-based incentive mechanism. In their transmission scheme, they proffer a semantic encoding algorithm for data sensing that considerably reduces the amount of data needed, storage costs, and transmission costs, while ensuring the MSP performs well in the Metaverse. The contest-based incentive mechanism, on the other hand, aims to boost the data uploading frequency of all transmitters by setting rewards and to support the MSP in enhancing its quality of service (QoS) [276]. When it comes to SemCom-aided virtual transportation networks in the Metaverse, the authors of [277] attempt to address the resource allocation problem marked by stochastic user demand by proposing a stochastic semantic transmission scheme that is based on two-stage stochastic integer programming. This scheme has been demonstrated to minimize the transmission costs of virtual service providers whilst taking into account users' demand uncertainty [277].
The authors of [278] put forward a semantic-driven computation offloading and resource allocation scheme for UAV-assisted vehicle data collection and edge-cloud collaboration to complete intelligent tasks. They propose a CNN segmentation scheme for its offloading decisions and a multi-agent deep Q-network (DQN) algorithm for its resource allocation. Simulation results confirm their semantic-driven scheme improves the multi-objective optimization of latency, energy consumption, and task performance [278]. Furthermore, in regard to using _type-based multiple access_ (TBMA) as a semantic-aware multiple access protocol for remote inference, the authors of [279] devised an IB-inspired design principle for TBMA named IB-TBMA. In their IB-TBMA protocol, the shared codebook is jointly optimized with channel statistics that are based strictly on data and a decoder that is based on artificial neural networks [279]. The authors of [279] also propose the _compressed IB-TBMA_ (CB-TBMA) protocol, which enhances IB-TBMA by making it possible to reduce the number of codewords (via an IB-inspired clustering phase). The authors of [279] show - with numerical results - the power of joint codebook and neural decoder design in CB-TBMA and IB-TBMA while demonstrating the benefits of codebook compression.
We now continue with a discussion of state-of-the-art algorithmic developments in SemCom resource allocation.
### _Algorithmic Developments in SemCom Resource Allocation_
The authors of [151, 152] propose a performance optimization framework for semantic-driven wireless networks in a text SemCom system. The authors of [152] formulate an optimization problem for such networks that jointly considers wireless resource constraints, transmission delay requirements, and SemCom performance and whose goal is to maximize the total metric of semantic similarity - a semantic metric that was introduced by the authors of [151, 152] - by optimizing resource block allocation for the transmission of partial semantic information (modeled by a KG). To solve this problem, the authors of [151, 152] develop an _attention proximal policy optimization algorithm_ that has been shown [152] to considerably reduce the amount of data needed to be transmitted.
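For readers unfamiliar with the proximal policy optimization (PPO) family that the above attention proximal policy optimization algorithm builds on, the minimal sketch below (in PyTorch, assumed available) shows the standard clipped PPO surrogate loss; it is a generic illustration with an assumed clipping parameter, not the exact algorithm of [151, 152].

```python
import torch

def ppo_clip_loss(new_logp, old_logp, advantage, eps: float = 0.2):
    """Clipped surrogate objective: keeps the policy update ratio within [1 - eps, 1 + eps]."""
    ratio = torch.exp(new_logp - old_logp)      # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()

# Usage with dummy batch data (log-probabilities and advantages of 64 actions).
loss = ppo_clip_loss(torch.randn(64), torch.randn(64), torch.randn(64))
```

In the resource block allocation setting described above, the actions would correspond to allocation decisions and the advantage would be derived from the semantic similarity reward.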
After defining the semantic spectral efficiency (S-SE) metric to quantify the communication efficiency of a text SemCom system named DeepSC [81], the authors of [280] formulate a semantic-aware resource allocation problem as an optimization problem that aims to maximize the overall S-SE of all users and report on its optimal solution, the validity and feasibility of which are demonstrated [280] using simulation results. Meanwhile, the authors of [281] study a semantic-aware resource allocation problem in a multi-cell multi-task network. They formulate a QoE maximization problem for the network that constrains the number of semantic symbols transmitted, channel assignment, and power allocation, and provide a _matching theory_-based solution whose effectiveness is demonstrated [281].
In the broader context of intelligent SemCom (iSemCom) - more specifically, SemCom enabled by AI models [282] - and an iSemCom-enabled heterogeneous network (iSemCom-HetNet), the authors of [282] investigate user association (UA) and bandwidth allocation (BA) problems for an iSemCom-HetNet by introducing auxiliary KBs into the system model and then developing a new performance metric termed _system throughput in message_ (STM). The authors of [282] approach the joint optimization of UA and BA via STM maximization subject to KB matching and wireless bandwidth constraints, and propose a two-stage solution whose superiority and reliability over two baseline algorithms are demonstrated [282]. The authors of [283] built on the work in [282] and introduce two general SemCom-enabled network (SemComNet) scenarios that are based on all possible knowledge-matching states between mobile users and base stations: _perfect knowledge matching-based SemComNet_ and _imperfect knowledge matching-based SemComNet_. They delineate the semantic channel capacity model for the perfect matching scenario mathematically.
Apart from resource allocation algorithms in text SemCom, there have also been algorithmic developments in resource
allocation for video SemCom. The authors of [284] develop a video semantics-based resource allocation algorithm for spectrum multiplexing scenarios (VSRAA-SM). VSRAA-SM can be used to optimize vehicle-to-infrastructure semantic understanding tasks and vehicle-to-vehicle information transmission tasks [284]. Furthermore, the authors of [285] devise an image SemCom framework that enables a set of servers to collaboratively transmit images to their respective users using SemCom techniques. In their framework, all servers must jointly decide - subject to limited wireless resource constraints - which semantic information to transmit and which corresponding resource block (RB) allocation scheme to use [285]. The authors of [285] formulate this problem as an optimization problem whose target is minimizing the average transmission latency while satisfying the image-to-graph semantic similarity (ISS) requirement - ISS is the authors' proposed semantic metric. To solve this problem, the authors develop a value decomposition-based entropy-maximized multi-agent RL algorithm. This algorithm is shown to considerably reduce transmission latency and improve convergence speed compared with traditional multi-agent RL algorithms [285].
We now proceed to discuss state-of-the-art algorithmic developments in SemCom with regard to security and privacy.
### _Algorithmic Developments in SemCom with Security and Privacy_
Security and privacy must be carefully considered in the design of networks in 6G and beyond. To this end, some state-of-the-art SemCom works [240, 241] have developed a SemCom system with security and privacy features.
The authors of [241] proffer an encrypted SemCom system [241, Fig. 1] that provides two modes of semantic transmission - encrypted and unencrypted - without the need to change the semantic encoder or decoder. To realize this system, the authors designed the structure of the secret key, encryptor, and decryptor for SemCom, so they can be embedded in a shared SemCom model. To make the encrypted SemCom system universal and confidential, they put forward an adversarial encryption training scheme that ensures the accuracy of SemCom - in encrypted as well as unencrypted mode - while preventing attackers from eavesdropping on the transmitted semantic information. The results of simulations in which this adversarial encryption training scheme was used demonstrate that the encrypted SemCom system can considerably enhance the privacy protection capability of a SemCom system [241].
The considerable potential that SemCom represents for 6G and beyond is justified by the fact that many SemCom techniques outperform their traditional counterparts, especially in low SNR regimes. Nonetheless, this attribute of SemCom can be a security risk because an eavesdropper can easily decode semantic information received over a very noisy channel [286]. The authors of [286] therefore put forward a SemCom framework that takes into consideration both semantic decoding efficiency and its risk of privacy leakage. This framework - named _SecureMSE_ - employs a loss function that flexibly regulates the efficiency-privacy tradeoff [286]. Computer experiments demonstrate SecureMSE's effectiveness and robustness when it comes to addressing this tradeoff [286].
The authors of [287] study an AE-based SemCom system that is enabled by DL and deep networks such as DNNs, and examine its ability to convey information from a source to a destination while preserving the semantic information. The authors demonstrate that the use of DNNs makes the SemCom system vulnerable to adversarial attacks in which the attacker attempts to manipulate the deep network inputs. More specifically, they substantiate that:
* Adversarial attacks can be launched in multiple domains: 1) a computer vision attack that injects a malicious perturbation into the input image at the source, or 2) a wireless attack that sends a perturbation signal that is received by the decoder while superimposed on the transmitted signal [287].
* Both a computer vision attack and a wireless attack are effective individually [287]. When these attacks are combined (i.e., a multi-domain adversarial attack), they are even more effective in reducing SemCom performance [287].
* Multi-domain adversarial attacks can not only increase the reconstruction loss but also make the SemCom system so unreliable that the recovered information cannot preserve the semantics of the transmitted message [287].
In a similar spirit as the work in [287], the authors of [288] present novel attack vectors that are based on backdoor and adversarial attacks, and demonstrate empirically that goal-oriented communications are also vulnerable to stealth manipulations by smart adversaries. Consequently, the authors underscore the need for novel security mechanisms that can promote the safe adoption of task-oriented communications in 6G and beyond.
We now continue to discuss state-of-the-art algorithmic developments in quantum SemCom.
### _Algorithmic Developments in Quantum SemCom_
The authors of [240] stray from encryption-based SemCom security and propose a SemCom system secured by QKD (see [289, 290, 291]), in which edge devices need to meet security requirements in QKD nodes and the QKD service providers must supply QKD resources to minimize the deployment cost. The authors develop a decision-making scheme for QKD service providers in a QKD-secured SemCom system by proposing resource allocation methods for the secure transmission of edge devices' semantic information, cost management, and cooperation among QKD service providers. When it comes to resource allocation, the authors implement two-stage stochastic programming to obtain the QKD service providers' solutions so that edge devices can transmit their semantic information using resources in the resource pool.
The authors of [242] were inspired by rapid developments in SemCom [50, 97, 107, 122]; ML [292, 293, 294]; quantum ML (QML) [295, 296, 297]; quantum computing [298, 299, 300, 301]; and quantum networking [302, 303, 290] and introduce a quantum semantic communications framework for developing
reasoning-based future communication systems with quantum semantic representations that exhibit minimalism, efficiency, and accuracy. This framework employs quantum embedding and high-dimensional Hilbert spaces to extract the meaning of classical data [242]. An unsupervised QML technique named quantum clustering is exploited for minimalistic and efficient contextual information extraction and accurate characterization of the semantics of the message to be sent [242]. The quantum semantic representations - that are constructed - are then transmitted using quantum communication links [242] established through _quantum entanglement_18[289, 300, 301].
Footnote 18: Quantum entanglement is a remarkable quantum mechanical phenomenon in which the states of two or more quantum subsystems are correlated in a manner that is not possible in classical systems [290]. It is a unique quantum mechanical resource driving numerous applications of quantum computation, quantum information, quantum communication, and quantum networking [290, 297].
We now move on to discuss state-of-the-art algorithmic developments in the economics of SemCom.
### _Algorithmic Developments in Economics of SemCom_
The authors of [304] develop an energy allocation framework for wireless-powered SemCom-based IoT. More particularly, they derive the valuation of energy based on SemCom performance metrics and maximize the wireless power transmitters' revenue using a DL-based auction while maintaining the desired properties of _incentive compatibility_ (IC) and _individual rationality_[304]. Building on the work in [304], the authors of [305] put forward incentive mechanisms - i.e., a hierarchical trading system - for _semantic model trading_ and _semantic information trading_. Regarding the former, the proposed mechanism [305] helps to maximize the semantic model providers' revenue from semantic model trading and incentivizes them to take part in SemCom system development. As for semantic information trading, the authors' auction approach promotes trading between multiple semantic information sellers and buyers while ensuring individual rationality, IC, and a balanced budget.
Further development in the economics of SemCom is provided by the authors of [306], who put forward a DL-based auction for edge computing trading in a SemCom-enabled Metaverse system. Their proposed auction aims to maximize the edge computing providers' revenue and attain individual rationality as well as IC [306]. Meanwhile, simulation results demonstrate that the auction considerably improves revenue compared with the baseline while incurring almost zero individual rationality and IC penalties [306].
We now proceed to discuss miscellaneous state-of-the-art algorithmic developments in SemCom.
### _Miscellaneous Algorithmic Developments in SemCom_
The authors of [307] attempt to unify neuromorphic sensing, processing, and communications by introducing an architecture for wireless cognition named _NeuroComm_. NeuroComm's end-to-end design [307] is based on supervised learning via surrogate gradient descent (GD) methods. On the other hand, the authors of [308] consider SemCom with discrete-time analog transmission and validate that DeepJSCC-based SemCom's notable image reconstruction performance can be maintained while the transmitted peak-to-average power ratio is suppressed to an acceptable level, which is important for the practical implementation of DeepJSCC-based SemCom systems [308]. In light of DeepJSCC and the DL model's over-fitting property, the authors of [230] demonstrate the feasibility of overfitting neural-enhancement on SemCom systems and the effectiveness of overfitting when combined with online learning. Consequently, the authors put forward an adaptive SemCom framework that employs online learning to _overfit_ the instant source data sample and CSI. Furthermore, the authors of [237] employ the conceptual space theory of semantics [188, 189] to propose a model for SemCom with conceptual spaces [237, Fig. 2] wherein functional compression is proposed to obtain optimal encoding schemes. The authors simulate image transmission using their SemCom system and confirm its potential to faithfully convey meaning with a gigantic reduction in communication rate.
By approximating the semantic similarity metric by a generalized logistic function, the authors of [309] propose a framework for heterogeneous SemCom and bit communication (BitCom) in which an access point simultaneously transmits the semantic stream to a semantics-interested user and the bit stream to a bit-interested user. Following the work in [309], the authors of [310] propose a semantics-empowered two-user uplink non-orthogonal multiple access (NOMA) framework in which a primary near user (N-user) and a secondary far user (F-user) communicate with the access point using BitCom and SemCom, respectively. The authors investigate this semantic-empowered NOMA framework over fading channels and put forward an opportunistic SemCom and BitCom scheme to allow the secondary F-user to exploit the advantages of both technologies whenever it is admitted into the NOMA framework (at each fading state). This opportunistic scheme is demonstrated to outperform other baseline schemes [310]. Following the work in [310] and in [309], the authors of [311] propose a heterogeneous semantic and bit multi-user framework. Concerning this framework, they uncover that the interplay between NOMA and SemCom shows promise for supporting NOMA-enabled SemCom and SemCom-enhanced NOMA. When it comes to NOMA-enabled SemCom, the authors of [311] proffer a semi-NOMA-enabled heterogeneous SemCom and BitCom scheme that combines conventional OMA and NOMA as its special cases and offers flexible transmission options.
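As a small illustration of approximating a semantic similarity-versus-SNR curve with a generalized logistic function, the snippet below uses the standard generalized logistic (Richards) form; the parameter values are illustrative assumptions rather than the fitted values reported in [309].

```python
import numpy as np

def semantic_similarity(snr_db, A=0.2, K=0.95, B=0.25, Q=1.0, C=1.0, nu=1.0, M=0.0):
    """Generalized logistic curve: lower asymptote A, upper asymptote K, growth rate B."""
    return A + (K - A) / (C + Q * np.exp(-B * (snr_db - M))) ** (1.0 / nu)

snrs = np.linspace(-10, 20, 7)                  # SNR sweep in dB
print(np.round(semantic_similarity(snrs), 3))   # similarity rises smoothly toward K
```

A closed-form surrogate like this makes an otherwise data-driven similarity metric easy to handle analytically.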
The authors of [96] proffer a framework for robust end-to-end SemCom systems that combats semantic noise and develop an adversarial training scheme with weight perturbation by incorporating samples with semantic noise in the training dataset. They propose to mask a portion of the input wherein the semantic noise appears frequently when employing the training dataset and design a masked vector quantized-variational AE. The results of simulations conducted with this robust image SemCom technique demonstrate that it improves the system's robustness against semantic noise [96].
The design of conventional technical communication (TechnicCom) is largely based on stochastic modeling and manipulation, in stark contrast with SemCom, which uses
semantic elements that can be logically connected [102, 103]. To address this knowledge gap, the authors of [102, 103] propose a unified approach to semantic information and communication by leveraging probabilistic logic [179] via the interplay of SemCom and TechnicCom. More specifically, the authors of [102, 103] combine the existing TechnicCom layer with a SemCom layer that uses communicating parties' KBs to exchange (text) semantic information. The authors of [90] develop a KG-driven "cognitive" text SemCom framework for a corresponding text SemCom system. More specifically, they propose a simple, general, and interpretable solution for their text SemCom scheme to detect semantic information.
The authors of [312] leverage advancements in 6G research and SemCom to proffer a 6G SemCom technique that is based on intelligent fabrics for in-cabin transportation scenarios. The authors then propose a DL-based end-to-end SemCom scheme for time-series data that exploits DL for semantic sensing and information extraction [312]. The authors of [313] put forward a DeepSC-based network service framework that combines a SemCom system and intelligent fabrics for a smart healthcare system in intelligent fabric. In light of this framework, the authors establish a combined service offloading and bandwidth allocation optimization model for resource-efficient semantic-aware networks with enhanced QoS. The authors of [82] introduce a reasoning-based SemCom architecture wherein semantic information is represented by KG. To convert the KG-based representation, which is high-dimensional, to a low-dimensional representation, the authors develop an embedding-based semantic interpretation framework. They then propose function-based inference to infer hidden information that cannot be directly observed from the received message [82]. The authors of [229] incorporate reasoning into SemCom and develop a framework for representing, modeling, and interpreting implicit semantic meaning. The authors then develop an implicit SemCom architecture in which a reasoning procedure can be trained at the receiving user with the help of the transmitting user [229].
The authors of [314] introduce a signal-shaping method to minimize semantic loss as quantified by a pretrained BERT (bidirectional encoder representations from transformers [247]) model in SemCom systems with a few message candidates. The authors provide a solution that is based on an efficient projected GD method. The authors of [315] put forward an image SemCom system that transmits not only semantic information but also a semantic decoder. The authors of [231] propose a context-aware text SemCom framework wherein the transmitter and the receiver use their respective KB for coding and decoding by developing a part-of-speech-based encoding strategy and a context-based decoding strategy. To investigate the impact of adaptive bit lengths on semantic coding under various SNRs, the authors of [316] put forward progressive semantic HARQ schemes that use incremental knowledge by designing a semantic encoding solution with multibit length selection. The authors of [150] develop a KG-based text SemCom system that adaptively adjusts the transmitted content per the channel quality and allocates more resources to important triplets to enhance the reliability of communication. In light of this text SemCom technique whose performance needs to be enhanced for low-SNR conditions, the authors of [317] investigate reasoning and decoding at the semantic level instead of the grammar level. Employing reasoning and decoding at the semantic level to improve communication reliability, the authors of [317] propose a text SemCom scheme that incorporates a language model, prior information, and parts of speech and wherein the language model and prior information are deployed to enhance the receiver's semantic reasoning.
Despite their promises, SemCom techniques are difficult to realize when source signals are used for a variety of tasks (e.g., as wireless sensing data for localization and activity detection) due to increased processing complexity [238]. To address this challenge, the authors of [238] devise a new SemCom paradigm named _inverse SemCom_ wherein task-related source messages are encoded into a hyper-source message for data transmission or storage rather than extracting semantic information from messages. Concerning this SemCom paradigm, the authors of [238] develop an inverse semantic-aware wireless sensing framework by proposing three algorithms for data sampling, RIS-aided encoding, and self-supervised decoding. At the same time, the authors of [238] design RIS hardware for encoding multiple signal spectrums into one _MetaSpectrum_. Regarding MetaSpectrum, the authors propose a semantic hash sampling method for selecting task-related signal spectrums and a self-supervised learning method for decoding MetaSpectrums. As reported by the authors, their framework reduces the data volume by \(95\%\) compared with the volume before encoding, without impairing the accomplishment of the sensing tasks [238].
In many DL-based SemCom systems, DNNs have substituted various building blocks of conventional communication systems so the entire system can be termed an analog communication system. However, a digital communication system with digital modulation has many advantages over an analog one despite DNN-based digital modulation being a huge challenge [233]. The challenge stems from the fact that DNN-based digital modulation is based on mapping the continuous output of a DNN-based encoder into discrete constellation symbols, which requires a non-differentiable mapping function that cannot be trained using GD algorithms [233]. To overcome this challenge, the authors of [233] devise a joint coding-modulation scheme for a digital SemCom with binary phase shift keying modulation. This digital SemCom scheme is shown to outperform existing digital modulation methods in SemCom over a wide range of SNRs and DNN-based analog modulation in low SNR regimes [233]. As another DL-based digital SemCom technique, the authors of [234] put forward a DL-enabled vector quantized digital image SemCom system for image transmission that is dubbed _VQ-DeepSC_. VQ-DeepSC deploys a CNN-based transceiver to extract an image's multi-scale semantic features and adversarial training to improve the quality of received images. Meanwhile, simulation results corroborate that VQ-DeepSC outperforms traditional image transmission methods, especially in low SNR regimes [234].
In light of the rapidly emerging SemCom trends and use cases that already substantiate SemCom's potential in building
next-generation 6G networks, not all users can be served by SemCom systems, for such systems are mainly specialized to handle specific applications [318]. This fact calls for a network design that rigorously addresses the inevitable coexistence of SemCom systems and BitCom systems. Toward this end, the authors of [318] examine from a network vantage point how introducing emerging SemCom systems impacts the performance of existing BitCom systems. They do so by formulating a max-min fairness problem concerning the coexistence of SemCom and BitCom systems. Regarding this coexistence problem, extensive numerical results corroborate that SemCom systems are indeed a promising next-generation communication alternative [318]. Similar to the work in [318], the authors of [319] study the impact of BitCom and SemCom system coexistence on network performance by analyzing sum-rate maximization with a minimum required SNR constraint for SemCom users. For this problem, the authors provide a power control algorithm whose numerical results substantiate [319] that introducing a SemCom system enhances the sum-rate of BitCom users whilst offering the same or a higher SNR to SemCom users with less transmit power.
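The flavor of the sum-rate problem with a minimum SemCom SNR constraint studied in [319] can be conveyed with a toy power-control sketch: each SemCom user first receives just enough power to meet its SNR floor, and the leftover budget is spent greedily on the strongest BitCom channel. This heuristic, with its assumed interference-free single-carrier links, is only illustrative and is not the algorithm of [319].

```python
import numpy as np

def toy_power_control(g_sem, g_bit, snr_min, p_total, noise=1.0):
    """Toy allocation: satisfy SemCom users' minimum SNR, then greedily
    spend the leftover power on the strongest BitCom channel.

    g_sem, g_bit : channel power gains of SemCom and BitCom users
    snr_min      : minimum SNR required by each SemCom user (linear scale)
    """
    p_sem = snr_min * noise / np.asarray(g_sem)      # minimum power per SemCom user
    leftover = p_total - p_sem.sum()
    if leftover < 0:
        raise ValueError("power budget cannot satisfy the SemCom SNR constraints")
    p_bit = np.zeros(len(g_bit))
    p_bit[int(np.argmax(g_bit))] = leftover          # greedy: best BitCom channel
    sum_rate = np.log2(1.0 + np.asarray(g_bit) * p_bit / noise).sum()
    return p_sem, p_bit, sum_rate

p_sem, p_bit, rate = toy_power_control(
    g_sem=[0.8, 0.5], g_bit=[1.2, 0.6], snr_min=4.0, p_total=30.0)
print(p_sem, p_bit, round(rate, 2))
```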
Many of the existing SemCom techniques are one-to-one or point-to-point SemCom techniques that do not consider the one-to-many broadcasting scenario. For this scenario, the authors of [239] develop a DNN-enabled one-to-many SemCom system dubbed _MR_DeepSC_. The MR_DeepSC system comprises a transmitter made up of a semantic encoder and a channel encoder that is coupled with multiple receivers, each of which consists of a channel decoder, a semantic decoder, and a semantic recognizer [239, Fig. 1]. The semantic recognizer is designed to distinguish different users based on a pre-trained model by leveraging their semantic features [239]. In this _one-to-many SemCom system_, transfer learning is adopted to accelerate the training of new receiver networks, and simulation results demonstrate that MR_DeepSC performs significantly better - especially in low SNR regimes - than other benchmarks under a variety of channel conditions [239]. The authors of [320] introduce and experimentally demonstrate an optical SemCom system wherein DL is employed to extract semantic information that is then directly transmitted through an optical fiber. This _optical SemCom system_ achieves greater information compression and more stable performance than a bit-based optical communication system, especially in low received optical power regimes [320]. The optical SemCom system in [320] also improves robustness against optical link impairments [320]. Furthermore, the authors of [321] propose a joint semantic sensing, rendering, and communication framework for wireless ultimate XR that is inspired by SemCom and its advancements. This framework involves three components: \(1)\) semantic sensing is employed to enhance sensing efficiency by exploiting the semantic information's spatial-temporal distributions; \(2)\) semantic rendering is intended to reduce the cost of semantically-redundant pixels; and \(3)\) SemCom is adopted for high-efficiency data transmission in wireless ultimate XR [321]. This joint SemCom scheme is demonstrated to be effective via two case studies [321].
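The semantic-recognizer idea behind MR_DeepSC discussed above can be illustrated with a toy routing sketch in which the receiver compares a received semantic feature vector against stored per-user references and dispatches the message to the closest user's decoder; the reference vectors and the cosine-similarity rule are illustrative assumptions, not the trained recognizer of [239].

```python
import numpy as np

def route_to_user(received_feature, user_references):
    """Return the user whose stored reference feature is closest (cosine)
    to the received semantic feature vector."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {user: cosine(received_feature, ref)
              for user, ref in user_references.items()}
    return max(scores, key=scores.get), scores

# Hypothetical per-user reference semantic features.
references = {"user_A": np.array([0.9, 0.1, 0.0]),
              "user_B": np.array([0.1, 0.8, 0.3])}
user, scores = route_to_user(np.array([0.7, 0.2, 0.1]), references)
print(user, {k: round(v, 3) for k, v in scores.items()})
```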
We now continue to the major state-of-the-art trends and use cases of SemCom.
## IV Major State-of-the-art Trends and Use Cases of SemCom
Multiple state-of-the-art trends and use cases of SemCom are emerging, and many research communities are putting intensive efforts into research on 6G. Thus, we now discuss the major trends and major use cases of SemCom, starting with the major trends.
### _Major Trends of SemCom_
In this section, we detail the following major trends of SemCom: JSCC schemes [163, 258, 259, 270]; DeepSC and its variants [88, 181, 199]; joint semantics-noise coding (JSNC) [211]; SemCom in the S-PB layer [78]; an understand-first-and-then-transmit SemCom framework [93]; context-based SemCom [123]; semantic coded transmission [221]; neuromorphic wireless cognition [307]; a cognitive SemCom system that is driven by KG [90]; implicit SemCom [229]; innovative SemCom [315]; a reliable SemCom system that is enabled by KG [150]; an AE-based SemCom system [249]; a semantic-aware speech-to-text SemCom system with redundancy removal [251]; cross-modal SemCom [235]; and encrypted SemCom [241]. We begin with JSCC schemes.
#### Iv-A1 JSCC Schemes
unlike separation-based source and channel coding schemes, which are known to suffer from the cliff effect, the DL-based JSCC technique depicted in Fig. 11 avoids this effect, which makes it a major trend in video SemCom and image SemCom. When it comes to image SemCom, deep JSCC [163], DeepJSCC-f [258], and DeepJSCC-l [259] all use a DL-based end-to-end-trained joint source-channel encoder and joint source-channel decoder, and provide a considerable performance gain over conventional separation-based schemes. When it comes to video SemCom, on the other hand, DeepWiVe [270] is a DL-based end-to-end JSCC video transmission scheme that avoids the cliff effect while degrading gracefully with channel quality and producing superior video quality in comparison with state-of-the-art video transmission techniques [270].
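A minimal PyTorch sketch of the deep JSCC idea in Fig. 11 is given below: a convolutional encoder maps an image directly to power-normalized channel symbols, an AWGN layer perturbs them, and a convolutional decoder reconstructs the image, with the whole chain trained end to end. The layer sizes and the plain MSE objective are illustrative assumptions, not the exact architecture of [163].

```python
import torch
import torch.nn as nn

class DeepJSCC(nn.Module):
    """Toy deep JSCC autoencoder for 32x32 RGB images over an AWGN channel."""
    def __init__(self, channel_symbols=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.PReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.PReLU(),
            nn.Flatten(), nn.Linear(32 * 8 * 8, channel_symbols))
        self.decoder = nn.Sequential(
            nn.Linear(channel_symbols, 32 * 8 * 8), nn.PReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 5, stride=2, padding=2, output_padding=1),
            nn.PReLU(),
            nn.ConvTranspose2d(16, 3, 5, stride=2, padding=2, output_padding=1),
            nn.Sigmoid())

    def forward(self, x, snr_db=10.0):
        z = self.encoder(x)
        z = z / z.norm(dim=1, keepdim=True) * z.shape[1] ** 0.5   # unit average symbol power
        noise_std = (10 ** (-snr_db / 10)) ** 0.5
        z_noisy = z + noise_std * torch.randn_like(z)              # AWGN channel
        return self.decoder(z_noisy)

model = DeepJSCC()
images = torch.rand(4, 3, 32, 32)
loss = nn.functional.mse_loss(model(images, snr_db=5.0), images)
loss.backward()  # one end-to-end training step over encoder, channel, and decoder
print(float(loss))
```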
Apart from JSCC, DeepSC [81] and its variants are another major trend in SemCom.
#### Iv-A2 DeepSC and its Variants
a major trend in text SemCom, the DL-based and end-to-end-trained DeepSC architecture - a well-known text SemCom technique - is shown in Fig. 12. As seen on the left side of Fig. 12, the DeepSC transmitter comprises a semantic encoder that extracts semantic information from the source's text using several Transformer encoder
Fig. 11: Components of deep JSCC [163, Fig. 1(b)].
layers [247] that feed the extracted semantic information to a channel encoder made of dense layers with different units that produce semantic symbols to be transmitted to the DeepSC receiver [81, Sec. IV]. The DeepSC receiver - depicted on the right side of Fig. 12 - is composed of a channel decoder that is made of dense layers with different units whose output is inputted in a semantic decoder built from multiple Transformer decoder layers [81, Sec. IV]. The Transformer decoder layers (of the semantic decoder) and the dense layers (of the channel decoder) are used for text recovery and (semantic) symbol detection, respectively. For text recovery applications, end-to-end-trained DeepSC outperforms various contemporary conventional communication system benchmarks - especially in low SNR regimes - for both the additive white Gaussian noise channel and Rayleigh fading channel [81]. In light of DeepSC's significant performance gain for low SNR regimes, several variants of DeepSC are proposed in the literature for both text SemCom and audio SemCom: L-DeepSC [199] as a text SemCom technique for IoT networks (considering the limited power and computing capabilities of the IoT devices); R-DeepSC [157] as a text SemCom technique that improves system robustness in a variety of wireless environments; DeepSC-S [88, 160] as an audio SemCom technique to improve transmission efficiency by transmitting only the semantic information; DeepSC-SR as an audio SemCom scheme for speech recognition; and DeepSC-ST [158] for speech recognition and synthesis.
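The split between DeepSC's Transformer-based semantic encoder/decoder and its dense-layer channel encoder/decoder (Fig. 12) can be sketched as follows in PyTorch; the vocabulary size, model width, and layer counts are illustrative assumptions, and, for brevity, the receiver's semantic decoder is approximated here with Transformer encoder layers, whereas [81] uses Transformer decoder layers.

```python
import torch
import torch.nn as nn

class DeepSCTransmitter(nn.Module):
    """Toy DeepSC-style transmitter: Transformer semantic encoder + dense channel encoder."""
    def __init__(self, vocab_size=1000, d_model=128, n_symbols=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.semantic_encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.channel_encoder = nn.Sequential(
            nn.Linear(d_model, 256), nn.ReLU(), nn.Linear(256, n_symbols))

    def forward(self, token_ids):
        semantics = self.semantic_encoder(self.embed(token_ids))
        return self.channel_encoder(semantics)        # per-token channel symbols

class DeepSCReceiver(nn.Module):
    """Toy DeepSC-style receiver: dense channel decoder + Transformer semantic decoder."""
    def __init__(self, vocab_size=1000, d_model=128, n_symbols=16):
        super().__init__()
        self.channel_decoder = nn.Sequential(
            nn.Linear(n_symbols, 256), nn.ReLU(), nn.Linear(256, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.semantic_decoder = nn.TransformerEncoder(layer, num_layers=3)
        self.to_vocab = nn.Linear(d_model, vocab_size)

    def forward(self, received_symbols):
        hidden = self.semantic_decoder(self.channel_decoder(received_symbols))
        return self.to_vocab(hidden)                   # per-token logits for text recovery

tx, rx = DeepSCTransmitter(), DeepSCReceiver()
tokens = torch.randint(0, 1000, (2, 12))               # a batch of two 12-token sentences
symbols = tx(tokens)
logits = rx(symbols + 0.1 * torch.randn_like(symbols)) # AWGN-corrupted symbols
print(logits.shape)                                     # torch.Size([2, 12, 1000])
```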
Following DeepSC and its variants, we proceed with our brief discussion on a text SemCom technique named JSNC [211].
#### V-A3 Jsnc
JSNC - as it is schematized in Fig. 13 - can be used as a text SemCom technique and is put forward by the authors of [211] to address a varying communication channel and incorporate a mechanism for the possible interpretation of semantic meaning. For the former goal, the authors propose encoder distillation and decoder distillation mechanisms - both DNN-based - that aim to refine the _embeddings_ in the encoder and decoder, as seen in Fig. 13. As for the second goal, of semantic meaning interpretation, the authors incorporate DNN-based confidence-based mechanisms (which are also shown in Fig. 13) at both the transmitter and the receiver that assess the quality of semantic representation while guiding encoder distillation and decoder distillation. Once a given message is projected into the feature space as depicted in Fig. 13, distillation at the transmitter and the receiver is triggered by the semantic confidence module provided that semantic confidence does not reach a pre-defined threshold [211]. Otherwise, the JSNC mechanism discharges the processed semantic information for processing further downstream [211]. Moreover, as shown in Fig. 13, distillation runs for at most \(N\) rounds, after which the JSNC system is done with the SE and the processed semantic information is ready for further processing [211].
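The confidence-gated distillation loop of JSNC can be summarized with the control-flow sketch below, in which a hypothetical refine() step stands in for the DNN-based encoder/decoder distillation of [211] and a hypothetical confidence() score stands in for its semantic confidence module.

```python
def jsnc_distill(embedding, confidence, refine, threshold=0.9, max_rounds=5):
    """Refine a semantic embedding until the confidence module is satisfied
    or the maximum number of distillation rounds N is reached.

    confidence : callable returning a score in [0, 1] for an embedding
    refine     : callable performing one distillation round on an embedding
    """
    for round_idx in range(max_rounds):
        if confidence(embedding) >= threshold:
            break                       # semantics judged good enough; stop distilling
        embedding = refine(embedding)   # one more encoder/decoder distillation round
    return embedding

# Toy stand-ins: confidence grows with the (bounded) mean magnitude of the embedding.
refined = jsnc_distill(
    embedding=[0.1, 0.2, 0.3],
    confidence=lambda e: min(1.0, sum(abs(v) for v in e) / len(e)),
    refine=lambda e: [1.5 * v for v in e])
print(refined)
```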
We now move on to our brief discussion on the SemCom framework in the S-PB layer that is proposed by the authors of [78].
#### V-A4 SemCom in the S-PB layer
the authors of [78] propose a text SemCom framework that comprises semantic analysis and encoding at the transmitter and semantic decoding and synthesis at the receiver, all of which are based on semantic base (_Seb_) - as seen in Fig. 14 - which is the basic representation framework for semantic information that the authors introduce. The transmitter's and receiver's Seb-based processing is dictated by the source KB and destination KB, respectively, which are generally different. To overcome this challenge, the authors consider a dynamically updated KB that is shared between the source KB and destination KB in their design depicted in Fig. 14. The authors also suggest joint semantic encoding and channel encoding at the transmitter and joint channel decoding and semantic decoding at the receiver.
We now proceed to our discussion on the understand-first-and-then-transmit SemCom framework [93, Fig. 2].
#### V-A5 Understand First and then Transmit SemCom Framework
the authors of [93] put forward an understand-first-and-then-transmit text SemCom framework - as seen in Fig. 15 - named _communication toward semantic fidelity_ (CTSF). Per CTSF, the source converts the input signal into semantic
Fig. 12: An end-to-end trained DeepSC [81].
symbols through semantic transformation, which is based on an understanding of a source's semantic library (also a determinant of transmission efficiency [93]). Semantic transformation is followed by semantic symbol abstraction. The abstracted semantic symbols are then transformed by semantic symbol encoding, channel encoding, and then modulation prior to their transmission through the communication channel. The receiver receives the channel's output, as seen in Fig. 15, and undoes the transmitter's signal processing through demodulation followed by channel decoding and then semantic symbol decoding. The receiver's semantic inverse transformation process - as shown in Fig. 15 - uses the decoder's output to deliver the reconstructed signal and the reconstructed semantic symbols through the destination's semantic library-dictated semantic symbol recognition process and the semantic inverse representation process, respectively.
We now continue with the design flow for context-based SemCom systems [123, Fig. 9].
#### Vi-B6 Context-Based SemCom
the authors of [123] proffer a design flow for context-based SemCom systems, which is
Fig. 14: A Text SemCom framework in the S-PB layer [78, Fig. 4]: S-PB layer – semantic-empowered physical-bearing layer; Seb – semantic base.
Fig. 13: A joint semantic-noise coding (JSNC) mechanism [211, Fig. 2].
depicted in Fig. 16. The authors' proposed design flow comprises goal definition, objective definition, context definition, and problem definition [123, Fig. 9]. Goal definition and objective definition determine the _why_ aspect of the context and what exactly is to be optimized [123], respectively. Once the optimization objective has been determined, context definition defines the remaining aspects of the context (i.e., who, what, where, and when), and problem definition determines the optimization problem w.r.t. the objective function chosen and the constraints (derived from goal definition, objective definition, and context definition) [123].
We now move on to the semantic coded transmission (SCT) SemCom technique [221, Fig. 1].
#### V-B7 Semantic Coded Transmission
the SCT SemCom technique is introduced by the authors of [221] and schematized in Fig. 17. As shown in Fig. 17, the transmitter consists of the following modules: a semantic analysis transform, a semantic importance modeling module, and a semantics-guided source-channel encoder. The encoder's output is then transmitted to the channel whose output is processed by the receiver that comprises the semantics-guided source-channel decoder and semantic synthesis transform modules. The functions of these receiver modules and the above-mentioned transmitter modules are itemized below:
* The semantic analysis transform module extracts the source data's semantic features and produces semantically annotated messages that are segmented as multiple semantic channels [221], each of which comprises a _semantic feature vector_ (SFV) whose elements relate to the same semantic object [221].
* The semantic importance modeling module evaluates each SFV's semantic value, which is determined based on the communication purpose of a scenario, such as human-type communication (HTC) or machine-type communication (MTC) [221].
* The semantics-guided source-channel encoder is guided by the semantic importance scores and acts on each SFV to ensure the reliable transmission of SemCom signals over the wireless communication channel [221].
* At the receiver, the semantics-guided source-channel decoder reconstructs the SFVs by performing the inverse operation of the semantics-guided source-channel encoder [221].
* The semantic synthesis transform module takes the output of the semantics-guided source-channel decoder and performs the inverse operation of the transmitter's semantic analysis transform [221]. Thereafter, semantic feature fusion is used to either recover the source data or drive downstream machine intelligence tasks [221].
Fig. 16: Design flow for context-based SemCom systems [123, Fig. 9].
Fig. 15: A Text SemCom framework named communication toward semantic fidelity (CTSF) [93, Fig. 2].
In light of the work in [221], the authors of [322] devise a novel versatile SCT system over MIMO fading channels that is dubbed _VST-MIMO_. VST-MIMO supports parallel versatile rate transmission and multiple-stream transmission [322]. To this end, the authors of [322] design an adaptive spatial multiplexing module that guides rate allocation and stream mapping, and effectively couples the source semantics and channel states [322].
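The role of SCT's semantic importance modeling module can be illustrated with a toy sketch in which each SFV receives a bit budget proportional to its importance score; the scoring values and the proportional-allocation rule are illustrative assumptions rather than the procedure of [221].

```python
import numpy as np

def allocate_bits(importance_scores, total_bits):
    """Split a bit budget across SFVs in proportion to their semantic importance."""
    scores = np.asarray(importance_scores, dtype=float)
    shares = scores / scores.sum()
    bits = np.floor(shares * total_bits).astype(int)
    bits[np.argmax(scores)] += total_bits - bits.sum()   # give the remainder to the top SFV
    return bits

# Hypothetical importance scores for four semantic channels (e.g., objects in a scene).
print(allocate_bits([0.9, 0.4, 0.2, 0.1], total_bits=1024))
```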
We now move on to our brief discussion on neuromorphic wireless cognition (_NeuroComm_) [307, Fig. 2(a)].
#### V-B8 Neuromorphic Wireless Cognition
NeuroComm is proposed by the authors of [307] and aims to combine neuromorphic sensing, processing, and communications by introducing a wireless cognition architecture. As schematized in Fig. 18, NeuroComm's transmitter consists of a neuromorphic sensor whose output is fed to an end-to-end-trained encoding spiking neural network (SNN) followed by an impulse radio (IR) transmitter (see [323]). The IR transmitter's output is then fed to the IR receiver followed by an end-to-end-trained decoding SNN whose output, in turn, undergoes time decoding. In
Fig. 17: Semantic coded transmission (SCT) [221, Fig. 1]: MTC – machine-type communication; HTC – human-type communication.
Fig. 18: Neuromorphic communication system (NeuroComm) [307, Fig. 2(a)]: SNN – spiking neural network; IR – impulse radio.
view of these transceiver operations, NeuroComm's main innovations are _semantic-aware energy consumption_ and _enhanced time-to-efficiency_ [307]. These goals are fulfilled by leveraging neuromorphic sensing and computing - which are predominantly event-driven by nature - coupled with the synergy between spike processing and pulse-based transmission through IR [323]. Accordingly, NeuroComm's energy consumption reflects patterns of activity in the monitored scene because neuromorphic sensors, SNNs, and IR consume energy only when spikes are produced [307].
We now move on to highlight a cognitive SemCom system that is driven by KG [90, Fig. 1].
#### V-A9 A Cognitive SemCom System that is Driven by KG
the authors of [90] put forward a KG-based cognitive SemCom framework, which is shown in Fig. 19. As seen in Fig. 19, the proposed framework encompasses a SemCom transceiver made up of a semantic symbol abstraction module, followed by conventional communication system modules, and then a semantic symbol recognition module. For semantic symbol recognition in view of reconstructing a segment of text, the sender's text is first abstracted into semantic symbols in accordance with KG using the Text2KG aligner. The aligner's output is then fed to a semantic symbol coding module followed by a channel coding module before being transmitted over the wireless communication channel. The channel's output is received by a receiving antenna and then processed by the channel decoder, which also exploits KG to correct errors [90]. Once any errors have been corrected, the symbols are fed to the semantic symbol decoder, which produces an estimated semantic symbol (of the transmitted semantic symbol), which can naturally suffer from inherent semantic ambiguity. To alleviate semantic ambiguity and implement the triple-to-text conversion, the authors of [90] fine-tuned a pre-trained model named _text-to-text transfer Transformer_ (T5) on the training corpus of [324]. The fine-tuned T5 model is then deployed by the receiver to reconstruct the transmitted text.
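A toy sketch of the Text2KG-style abstraction step is given below: sentences are reduced to (subject, relation, object) triples that are indexed against a KG vocabulary shared by both ends for transmission and expanded back into plain-text templates at the receiver. The tiny vocabulary and the templates are illustrative assumptions; the framework of [90] relies on a learned aligner and a fine-tuned T5 model instead.

```python
# A tiny shared KG vocabulary known to both ends of the link (illustrative only).
KG_VOCAB = ["Alice", "Bob", "works_at", "lives_in", "AcmeCorp", "Paris"]
INDEX = {entry: i for i, entry in enumerate(KG_VOCAB)}

def encode_triples(triples):
    """Map (subject, relation, object) triples to integer symbols for transmission."""
    return [(INDEX[s], INDEX[r], INDEX[o]) for s, r, o in triples]

def decode_triples(symbols):
    """Recover triples from received symbols and render them as simple text."""
    triples = [(KG_VOCAB[s], KG_VOCAB[r], KG_VOCAB[o]) for s, r, o in symbols]
    return [f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples]

sent_symbols = encode_triples([("Alice", "works_at", "AcmeCorp"),
                               ("Bob", "lives_in", "Paris")])
print(sent_symbols)
print(decode_triples(sent_symbols))
```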
We now move on to our brief discussion on an implicit SemCom architecture [229].
#### V-A10 An Implicit Semantic Communication Architecture
the authors of [229] introduce an implicit SemCom architecture for representing, communicating, and interpreting implicit semantic meaning, which is depicted in Fig. 20. The implicit SemCom architecture is made up of a private source KB, a private destination KB, and a common KB that guides the implicit SemCom architecture's encoder and decoder. The implicit SemCom encoder is made up of an entity detector, an embedding converter, and a semantic comparator as schematized in Fig. 20. As shown in Fig. 20, the implicit SemCom decoder encompasses a semantic interpreter that delivers the recovered semantics of the message sent by the implicit SemCom encoder. The implicit SemCom encoder must first identify one or more key entities from the source signal using its entity detector [229]. The entities are then processed by the embedding converter - as shown in Fig. 20 - and transmitted over the communication channel. The channel output received by the implicit SemCom decoder is then fed to the semantic interpreter. The semantic interpreter is designed to recover a reasoning path \(\eta^{D}\) that represents its interpretation of the implicit meaning associated with the key entities of the transmitted message [229]. If \(\mathbf{p}^{E}\) and \(\mathbf{p}^{D}\) are the embeddings
Fig. 19: A cognitive SemCom framework [90, Fig. 1]: S – sender; D – destination.
corresponding to expert paths and those corresponding to paths generated by the decoder, respectively, as shown in Fig. 20, the semantic comparator is trained to properly distinguish the semantic meaning of expert paths and that of paths generated by the decoder [229]. Furthermore, the implicit SemCom decoder is designed to generate a reasoning path \(\eta^{D}\) that has the shortest semantic distance from the original meaning \(\eta^{E}\) of the source signal, which leads to the following optimization problem [229]:
\[\min_{\theta}\Gamma_{\theta}\big{(}\eta^{E},\eta^{D}\big{)}, \tag{12}\]
where \(\theta\) denotes the latent parameters of the implicit SemCom decoder's semantic interpreter [229]. To solve (12), the authors of [229] provide a generative adversarial imitation learning-based reasoning mechanism learning (GAML) algorithm [229, Algorithm 1].
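The objective in (12) can be made concrete with a toy gradient-descent sketch that trains an interpreter to pull its generated reasoning-path embedding \(\mathbf{p}^{D}\) toward the expert-path embedding \(\mathbf{p}^{E}\); the cosine-distance choice for \(\Gamma_{\theta}\) and the linear interpreter are illustrative assumptions and not the GAML algorithm of [229].

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
interpreter = nn.Linear(8, 8)                 # toy semantic interpreter with parameters theta
optimizer = torch.optim.SGD(interpreter.parameters(), lr=0.1)

received = torch.randn(1, 8)                  # channel output seen by the decoder
p_expert = torch.randn(1, 8)                  # embedding of the expert reasoning path

for _ in range(50):
    p_decoder = interpreter(received)         # embedding of the generated reasoning path
    # Semantic distance Gamma: here, one minus cosine similarity (illustrative choice).
    distance = 1.0 - nn.functional.cosine_similarity(p_decoder, p_expert).mean()
    optimizer.zero_grad()
    distance.backward()
    optimizer.step()

print(float(distance))                        # should be small: paths are semantically close
```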
We now discuss a SemCom technique named innovative SemCom [315].
#### V-B1 Innovative SemCom
the authors of [315] employ a Shannon theory-based traditional communication system to embed an AI model into the transmitter and receiver of PHY for effective SemCom. This SemCom system is named an innovative SemCom system and is depicted in Fig. 21. As seen on the left side of Fig. 21, the authors pre-train a semantic encoder and a semantic decoder at the AI-based transmitter instead of deploying AI models in advance at all nodes. Per the AI-based transmitter's upper-level requirement, the authors suggest transmitting the selected decoder and its respective semantic coding using a Shannon-based communication system. This system's traditional receiver then recovers the received semantic coding using the AI-based receiver's decoder [315].
At the AI-based transmitter shown in Fig. 21, the category of the information source needs to be identified and an AI model needs to be selected in accordance with the requirements of a classification or clustering algorithm [315]. The AI model then extracts and compresses the source semantics and packages the semantic coding with the AI model to generate a stream of bits [315] for the Shannon-based communication system. As shown in Fig. 21, the information received by the traditional receiver includes semantic information, the AI model, and environmental information [315]. When it comes to environmental information, the spectrum environment and the electromagnetic environment can be interpreted from a Shannon-based traditional PHY [315].
We now proceed with our discussion on a reliable text SemCom system that is enabled by KG [150].
#### V-B12 Reliable SemCom System that is Enabled by KG
the authors of [150] propose a reliable text SemCom system that comprises an SE module, a traditional communication architecture, and a semantic restoration module, and can be broken down as shown in Fig. 22 into a semantic level and a technical level [150]. The semantic level is introduced by the authors of [150], and the technical level is essentially identical to that of the Shannon theory-based traditional communication system [150]. As shown in Fig. 22, the transmitter's SE module feeds this technical level: it extracts the KG of the input sentence to be transmitted and sorts its triplets in order of semantic importance to represent the input sentence's semantics [150]. The semantics are then processed first by the traditional source encoder and then by the traditional channel encoder before they are transmitted over a communication channel. The channel's output is then received by the receiver and processed by the traditional channel decoder followed by the traditional source decoder. The source decoder's output is then fed to the semantic restoration module, which recovers the transmitted sentence per the received KG [150].
We now move on to briefly discuss an AE-based SemCom system with relay channels [249].
Fig. 20: Implicit SemCom architecture [229, Fig. 1].
#### V-B13 Autoencoder-Based SemCom with Relay Channels
the authors of [249] introduce an AE-based SemCom system with one-way relay channels, which is shown in Fig. 23, to enable SemCom between a source node and a destination node that have no common KB. As can be seen in Fig. 23, the source node transmits its information to the destination node via a relay node using SemCom that incorporates both a transmission level and a semantic level [249]. The semantic level contains a semantic encoder (a Transformer encoder) for extracting semantic information and a semantic decoder (which is made up of three sublayers) for analyzing semantic information [249]. The transmission level, on the other hand, aims to ensure the semantic information is accurately transmitted over the wireless channel [249].
Fig. 21: Innovative SemCom system [315, Fig. 1].
Fig. 22: A reliable text SemCom system [150, Figure 1]: S2G – a mapping from a sentence to knowledge graph (KG); G2S – a mapping from KG to a sentence.
The channel's output is received by a one-way relay which transmits the source's information over another wireless communication channel to the destination. When the destination's background KB and the source's background KB are different, traditional relay protocols fail to convey the source's semantic information to the destination. To overcome this design challenge, the authors of [249] introduce a relay forward protocol named SF. The SF protocol - which is shown in Fig. 23 - consists of two consecutive steps: \(i)\) the relay node executes semantic decoding based on a background KB that is shared between the source node and itself to recover the source's information from the signal it receives, and \(ii)\) the relay node semantically encodes the recovered information based on another background KB that is shared between the destination node and itself in a way that the destination node can decode and understand [249].
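The two-step SF protocol can be mimicked with a toy dictionary-based sketch in which the relay decodes the source's symbol using the KB it shares with the source and re-encodes the recovered meaning using the KB it shares with the destination; the two tiny KBs are illustrative assumptions, not those of [249].

```python
# Toy background KBs: the relay shares one with the source and one with the destination.
KB_SOURCE_RELAY = {0: "fire", 1: "flood", 2: "all_clear"}          # symbol -> meaning
KB_RELAY_DESTINATION = {"fire": 10, "flood": 11, "all_clear": 12}  # meaning -> symbol

def semantic_forward(received_symbol):
    """SF protocol, step by step at the relay node."""
    meaning = KB_SOURCE_RELAY[received_symbol]       # 1) decode with the source-shared KB
    return KB_RELAY_DESTINATION[meaning]             # 2) re-encode with the destination-shared KB

def destination_decode(symbol):
    reverse = {v: k for k, v in KB_RELAY_DESTINATION.items()}
    return reverse[symbol]

forwarded = semantic_forward(received_symbol=1)
print(forwarded, destination_decode(forwarded))      # 11 flood
```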
We now highlight an audio SemCom technique entitled semantic-aware speech-to-text transmission with redundancy removal [251].
#### V-A14 Semantic-Aware Speech-to-Text Transmission with Redundancy Removal
the speech-to-text SemCom system architecture with redundancy removal that is depicted in Fig. 24 was developed by the authors of [251]. As can be seen in Fig. 24, the proposed audio SemCom transceiver consists of a receiver made up of a semantic decoder and a channel decoder as well as a transmitter composed of a channel encoder and a semantic encoder. The semantic encoder is first fed the input speech spectrum \(\mathbf{S}\), which is obtained by applying [251] a 25 ms Hamming window and a 10 ms shift to the input speech signal followed by fast Fourier transforms (FFTs) to get the coefficients as well as the first- and second-order derivatives of 40 filter banks (see [325]). This speech spectrum is then processed by the semantic encoder, with its sequence of four components: the VGG module, the Bi-LSTM module, the soft alignment module, and the redundancy removal module, as shown in Fig. 24. The semantic encoder delivers the latent semantic representations \(\mathbf{L}\) in which the semantic redundancy has been reduced by the redundancy removal module [251]. Using \(\mathbf{L}\) as the input, the channel encoder produces a sequence of symbols \(\mathbf{X}\) that is transmitted over the physical channel [251]. The channel's output is then received by the receiver, whose received signal \(\mathbf{Y}\) is fed to the channel decoder to acquire the estimated latent semantic representation sequence \(\hat{\mathbf{L}}\)[251]. \(\hat{\mathbf{L}}\) is then inputted into the semantic decoder, as seen in Fig. 24, which eventually decodes it and produces the predicted transcription \(\hat{\mathbf{G}}\)[251]. Moreover, it is worth underscoring that the end-to-end-trained speech-to-text SemCom architecture shown in Fig. 24 outperforms DeepSC-SR (see [87]), especially in low SNR regimes [251, Sec. IV], because it removes redundant content.
We now proceed with our brief discussion of cross-modal SemCom [235].

#### V-A15 Cross-Modal SemCom

the authors of [235] put forward a cross-modal SemCom paradigm - schematized in Fig. 25 - in which the transmitter and the receiver are each equipped with a cross-modal KG (CKG) rather than the KB of conventional SemCom [235]. This multi-modal SemCom paradigm uses video, audio, and haptic signals as multi-modal input signals, which is also shown in Fig. 25.
The video, audio, and haptic signals are fed to the cross-modal semantic encoder, which performs explicit semantic extraction and implicit semantic inference [235]. Explicit semantic extraction leads to explicit semantics (clearly expressed / readily observable semantics) and implicit semantic inference
Fig. 24: An architecture of a speech to text SemCom system with redundancy removal [251, Fig. 1]: VGG – Visual Geometry Group; Bi-LSTM – bidirectional long short term memory; and FC – fully connected.
Fig. 25: A cross-modal SemCom paradigm – modified from [235, Fig. 2]: CKG – cross-modal KG.
leads to implicit semantics19 (indirectly expressed / intention-related semantics), as can be seen in Fig. 25. The generation of implicit semantics is facilitated by the transmitter CKG that the authors of [235] propose to construct using a sequence of _multi-modal knowledge extraction_, _cross-modal knowledge fusion_, and _information storage and application_[235, Fig. 3]. Meanwhile, the explicit and implicit semantics that are generated are fed to the channel encoder, whose output is transmitted over a wireless channel to a cross-modal SemCom receiver, as shown in Fig. 25.
Footnote 19: Implicit semantics can reflect multi-modal signals’ “true meaning” [235], which may be vital to reduce polysemy. However, a hacker can then steal implicit semantics and cause a considerable privacy problem. This possibility affirms that privacy protection can be a much more severe problem in (cross-modal) SemCom than in conventional communication and therefore needs to be given immense attention [235].
At the cross-modal SemCom receiver, the received signal is fed to the channel decoder, whose output of recovered explicit and implicit semantics are inputted to the cross-modal semantic decoder. The decoder's cross-modal signal recovery module transforms the received explicit and implicit semantics into multi-modal signals using GAN-based signal recovery models [235]. These models' output may be incomplete because of several distortions. As seen in Fig. 25, the authors of [235] propose the signal completion module to complete the missing parts by retrieving similar signal patches from the receiver CKG. Doing so considerably improves the quality and completeness of the recovered multi-modal signals [235]. Nevertheless, the recovered signals can suffer from semantic ambiguity. The cross-modal semantic decoder must therefore be designed to minimize semantic ambiguity by ensuring the recovered multi-modal signals are accurate at the semantic level as well as the bit level [235]. To this end, the authors of [235] propose to optimize the cross-modal signal recovery models in an RL manner while using both semantic similarity and bit similarity between the input and recovered multi-modal signals as rewards [235]. Finally, the authors implement the DQN algorithm to optimize the cross-modal signal recovery model whose results lead to the following conclusion in [235]: the combination of semantic similarity and bit similarity is more effective in SemCom applications that require precise signal recovery [235, Sec. II].
We now proceed with our brief discussion of an emerging SemCom system named encrypted SemCom.
#### V-A16 Encrypted SemCom
many conventional SemCom systems require that background KBs be shared between the transmitter and the receiver. Many existing SemCom systems therefore assume a private communication model between two communication agents to jointly train a private semantic encoder and decoder [241]. In this vein, most state-of-the-art SemCom works advocate for centralized SemCom systems and unified multi-user SemCom systems that are trained based on one or more common background KBs [241]. This design philosophy inevitably leads to an important _privacy leakage problem_[241]. Accordingly, balancing the generality and confidentiality of SemCom is a major challenge of SemCom design [241]. To alleviate this challenge, the authors of [241] put forward an encrypted SemCom system that is schematized in Fig. 26.
The proposed SemCom system provides _encrypted and unencrypted_ modes of semantic transmission without needing to change the semantic encoder or semantic decoder [241]. When no privacy protection is required, the text SemCom system shown in Fig. 26 transmits the embedded input sentence in an unencrypted manner without the need for a secret key, encryptor, or decryptor. When encryption is needed for privacy protection, the input sentence is first tokenized as a one-hot vector whose length is the size of the word dictionary in the background KB [241]. Each token is then mapped via the word embedding layer to a fixed-dimensional vector of floats, and the output is denoted by \(\hat{S}\)[241]. \(\hat{S}\) is then inputted into the encryptor \(K_{e}(\cdot)\) along with the secret key for encryption, as schematized in Fig. 26, and the encrypted message is then fed to the semantic encoder \(E(\cdot)\) for semantic encoding [241]. The semantic encoder encodes the semantic message (whether it is encrypted or not) and produces the semantic vector \(X/X_{k}\) (with \(X_{k}\) being an encrypted and semantically encoded output)20 as seen in Fig. 26. Vector \(X/X_{k}\) is then fed to the channel encoder, whose output is transmitted over the wireless channel to Bob's receiver (the legitimate receiver with the secret key).
Footnote 20: To be consistent with the notation used in Fig. 26 and by the authors of [241], we abuse our notation rules and represent vectors with uppercase italic letters.
Bob can then decrypt the received encrypted message by first running it through the decryptor \(K_{d}(\cdot)\), as seen in Fig. 26, and then decoding it using the semantic decoder \(D(\cdot)\). It should be noted that an attacker like Eve (see Fig. 26) cannot recover the transmitted encrypted and semantically encoded message even if it has the same semantic decoder as Bob provided that Eve does not have the secret key or the decryptor \(K_{d}(\cdot)\). Meanwhile, Bob's semantic decoder decodes word by word so that the first \(N-1\) outputs of the output embedding layer can be used as another input for Bob's semantic decoder [241] - as seen in Fig. 26. As for the encrypted SemCom system in Fig. 26, the authors of [241] design the structure of the secret key, encryptor, and decryptor, and use simulations to confirm that their proposed encrypted SemCom system considerably enhances a SemCom system's privacy protection ability.
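The effect of the encryptor \(K_{e}(\cdot)\) and decryptor \(K_{d}(\cdot)\) acting on word embeddings can be illustrated with a toy key-seeded invertible transform: a pseudorandom permutation and sign pattern derived from the secret key scrambles the embedding before semantic encoding, and only a receiver holding the same key can undo it. This construction is purely illustrative and is not the encryptor design of [241].

```python
import numpy as np

def make_key_ops(secret_key, dim):
    """Derive an invertible scrambling (permutation + sign flips) from a secret key."""
    rng = np.random.default_rng(secret_key)
    perm = rng.permutation(dim)
    signs = rng.choice([-1.0, 1.0], size=dim)
    encrypt = lambda s: (s * signs)[perm]                 # K_e: scramble the embedding
    def decrypt(x):                                       # K_d: undo permutation, then signs
        unscrambled = np.empty_like(x)
        unscrambled[perm] = x
        return unscrambled * signs
    return encrypt, decrypt

embedding = np.array([0.3, -1.2, 0.7, 2.0])               # word-embedding output (S-hat)
K_e, K_d = make_key_ops(secret_key=42, dim=embedding.size)
encrypted = K_e(embedding)                                 # fed to the semantic encoder
print(np.allclose(K_d(encrypted), embedding))              # True: the keyed receiver recovers S-hat
```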
We end our detailed discussion of the major trends in SemCom with the above-discussed encrypted SemCom system. We now move on to discuss major use cases of SemCom.
### _Major Use Cases of SemCom_
We highlight below the following major use cases of SemCom: H2H SemCom, H2M SemCom, M2M SemCom, and KG-based SemCom - along with their respective applications. We begin with H2H SemCom.
#### V-B1 H2H SemCom
consistent with the semantic level of Weaver's framework (see Fig. 1), H2H SemCom aims to deliver accurate meanings over a channel for message exchange between two human beings [104].
We now move on to H2M SemCom.
#### Vi-B2 H2M SemCom
H2M SemCom concerns communication between a human and a machine through the interface of human and machine intelligence by involving the second and third levels of Weaver's framework - the semantic level and the effectiveness level [104]. At both levels, H2M SemCom's success depends on two design elements: \(i)\) a message sent by a human must be correctly interpreted by a machine to trigger the desired action (the effectiveness problem), and \(ii)\) a message sent by a machine should be meaningful to the receiving human (the semantic problem) [104]. In light of these design goals, H2M SemCom has numerous applications as diverse as: \(i)\)_human-machine symbiosis systems_ (e.g., AI-assisted systems, interactive ML, worker-AI collaboration); \(ii)\)_recommendation systems_ (e.g., social network applications such as emotional health monitoring, travel recommendations for mobile tourists, remote healthcare, TV channel recommendations, video and music recommendations, UAV-assisted recommendations for location-based social networks, and distributed recommendations for privacy preservation); \(iii)\)_human sensing and care_ (e.g., elderly monitoring, a super soldier system, general human activity recognition systems, remote healthcare systems, and smart-home monitoring systems); \(iv)\)_virtual reality (VR) / augmented reality (AR) techniques and applications_; \(v)\)_latent semantic analysis_; \(vi)\)_computation offloading for edge computing_; and \(vii)\)_decentralization for privacy preservation_[104]. For more details about these applications, the reader is referred to [104, Section 3].
We now continue with major use cases of M2M SemCom.
#### Vi-B3 M2M SemCom
M2M SemCom deals with the connection and coordination of multiple machines without human involvement to carry out a computing task [104]. Carrying out a computing task, consequently, is more of an effectiveness problem (level three communication) than a semantic problem (level two communication) [104]. In view of the effectiveness problem, M2M SemCom has myriad applications as varied as: \(i)\)_distributed learning_ (e.g., effectiveness encoding, local gradient computation, over-the-air computing, over-the-air FL, importance-aware radio resource management, differential privacy); \(ii)\)_split inference_ (e.g., feature extraction, importance-aware quantization and radio resource management, and effectiveness encoding and transmission for SplitNet); \(iii)\)_distributed consensus_ (e.g., vehicle platooning, blockchain, local-state estimation and prediction, semantic difference transactions, and practical Byzantine fault tolerance consensus); \(iv)\)_machine-vision cameras_ (e.g., camera-side feature extraction, effectiveness encoding based on regions of interest, surveillance, production-line inspection, and aerial and space sensing) [104]. For more details about these applications, the reader is referred to [104, Section 4]. In the meantime, some use cases of M2M SemCom are described below.
* _IoT networks_: since SemCom consumes few radio resources and is relatively robust to channel noise, it is promising for accurate and instant wireless transmission in IoT networks [199, 76]. Nonetheless, the main challenge of deploying SemCom for IoT networks stems from the limited computation and storage capabilities of IoT devices, which makes the on-board use of complex DNNs unfeasible [199, 76]. As a result, determining how to optimally train and fine-tune an IoT device's semantic encoder (or decoder) and channel encoder (or decoder) is a major challenge.
* _Connected autonomous vehicles (connected AVs)_: in
Fig. 26: Encrypted SemCom system [241, Fig. 1].
networks of connected AVs, which have multiple on-board sensors, tens or even thousands of gigabytes of data - videos and images containing traffic information - are generated per day [76]. Most of the data are processed at the AVs, and the remainder is uploaded to roadside units (RSUs) and cloud/edge servers, which leads to considerable uploading latency [76]. SemCom is very promising since it transmits only semantically relevant information by design and is relatively robust against channel noise and interference [76]. Such robustness, consequently, can make SemCom promising for the design and realization of interference-resistant 6G wireless communication [326].
* _Device-to-device (D2D) vehicular communication_: vehicles employing D2D-based vehicular communication share radio resources with cellular users in an underlay fashion, which can lead to possibly severe co-channel interference [76]. SemCom can be used to minimize this interference by exploiting the diversity of KBs to understand the transmitted messages' meaning [76].
* _Smart factories_: semantic information about the production environment - such as machines' status, the temperature, and the humidity - can be extracted and uploaded to a central controller or a cloud/edge server in order to analyze the status of materials and the quality of products [76].
* _Video communication_: the latest video coding standards, such as H.266/VVC and AV1 [327], reportedly improve coding efficiency by 30%-50% [93]. However, achieving this level of improvement for high-fidelity video communication over a wireless channel with ultra-low bandwidth will be next to impossible [93]. SemCom helps to overcome this challenge by shedding light on achieving high-quality video communication over a wireless channel with low bandwidth via semantic representation and a powerful semantic library [93].
* _Holographic stereo video communication_: holographic stereoscopic video represents information with 5D data encompassing all the human senses (visual, auditory, tactile, smell, and taste) and has the potential to deliver a truly immersive remote interaction experience [93]. However, holographic communication using multiple-view cameras requires data rates in the terabits/second [48]. Edge intelligence [2, 46, 113, 328] can be employed to alleviate this ultra-high data rate requirement by transmitting/recovering only the parts of the scene that users are interested in [93]. Nevertheless, this requires an accurate prediction of users' behavior [93], which is often difficult to obtain in real-time, and the viable solution is therefore to transmit semantic information using a powerful semantic library [93]. Accordingly, SemCom is a potential enabler of holographic stereoscopic video communication by reducing the volume of data to be transmitted so that the user experience is greatly enhanced.
We now move on to some use cases of KG-based SemCom.
#### Iv-B4 KG-based SemCom
KG can be employed to realize faithful M2M SemCom, H2M SemCom, and H2H SemCom. For H2H SemCom, a KG symbolizing knowledge about the domains of the conversing parties can be injected into a semantic encoder to boost SemCom efficiency and robustness [104]. For H2M SemCom, a KG helps a machine to understand the semantic information and its context embedded in the messages it receives from humans and to respond intelligently [104]. For M2M SemCom, KGs can provide a platform for developing large-scale IoT networks such as logistics networks, smart cities, and vehicular networks [104]. KGs can also act as a SemCom management tool to facilitate service selection, resource allocation, and work flow recommendation [104]. Accordingly, KG-based SemCom has the potential to have many applications.
KG-based SemCom is generally useful for enhancing AI applications such as frequently asked questions, virtual assistants, dialogue, and recommendation systems [104]. KG-based M2M SemCom specifically is applicable for KG construction and updating, KG-based network management, and interpretation for cross-domain applications [104]. For more details about these applications, the reader is referred to [104, Section 5].
We now proceed with state-of-the-art theories of SemCom.
## V Theories of SemCom
Several theories using different approaches have been developed to incorporate semantics into Shannon's communication theory [91, 92]. Some of these approaches include probabilistic logic, complexity theory, semantic coding and communication games, and rate distortion theory [329, 330, 331, 332, 333]. We discuss an information-theoretic approach to SemCom. More specifically, we discuss and put into context the latest crucial developments in SemCom theory by deploying the information-theoretic concepts of entropy, relative entropy, and mutual information that are detailed in Appendix A. We start with our discussion of recent SemCom theory [79, 177, 334] using the logical probability of messages.
### _SemCom Theory using the Logical Probability of Messages_
By exploiting the logical probability of messages, which is a classical SIT developed by Carnap and Bar-Hillel [176, 95], the authors of [177, 334, 79] recently developed a theory of SemCom. This theory relies on the system model of SemCom that is schematized in Fig. 27. Fig. 27 shows a SemCom system that comprises a semantic information source (or semantic sender/transmitter) and a semantic information destination (receiver). A semantic information source is a tuple \((W_{s},K_{s},I_{s},M_{s})\), where \(W_{s}\) is the model of the worlds that are possibly observable by the source, \(K_{s}\) is the source's background KB, \(I_{s}\) is the inference procedure employed by the source, and \(M_{s}\) is the source message generator that is used to encode the source's message [79].
Similar to the source, a semantic information destination (or semantic receiver) - as seen in Fig. 27 - is a tuple \((W_{r},K_{r},I_{r},M_{r})\), where [79]\(W_{r}\) is the world model of the destination, \(K_{r}\) is the destination's back
ground KB, \(I_{r}\) is the inference procedure deployed by the destination, and \(M_{r}\) is the destination's message interpreter (semantic decoder). In view of the tuples \((W_{r},K_{r},I_{r},M_{r})\) and \((W_{s},K_{s},I_{s},M_{s})\), a SemCom error happens if the message to be conveyed is "true" at the sender (w.r.t. \(W_{s}\), \(K_{s}\), and \(I_{s}\)), but the received message is "false" at the receiver (w.r.t. \(W_{r}\), \(K_{r}\), and \(I_{r}\)) [177]. Errors generally occur because of source coding losses, channel noise, semantic noise, decoding losses, or a combination thereof [177]. This highlights the challenge associated with designing a SemCom system - versus a conventional communication system - whose realization is guided by a rigorous SemCom theory. To inspire the development of such a theory and much more discussion, we hereinafter present results on semantic source coding [177], semantic channel capacity [177], and semantic compression [334]. We start with semantic source coding.
#### V-B1 Semantic Source Coding
the design of semantic source coding deals with the design of the sender's message generator, as viewed in Fig. 27. We thus drop the subscript "s" (when there is no confusion) and start by defining the Shannon entropy of \(W\), i.e., \(H(W):=H(W_{s})\). For a probability measure \(\mu(\cdot)\), the Shannon entropy of \(W\) is defined as [177]
\[H(W):=-\sum_{w\in\mathcal{W}}\mu(w)\log_{2}\mu(w), \tag{13}\]
where \(\mathcal{W}\) is the alphabet of \(W\) and \(\mu(\cdot)\) is a probability measure such that \(\sum_{w\in\mathcal{W}}\mu(w)=1\). \(H(W)\) is the entropy of the source provided that the source is classical with \(\mathcal{W}\) as the symbol set [79]. In this case, \(H(W)\) is called the _model entropy_ of the semantic source [79].
In the design of a semantic encoder for a given interface language, a semantic coding strategy needs to achieve two potentially conflicting goals: \(i)\) maximize the expected faithfulness in symbolizing the observed worlds, and \(ii)\) minimize the expected coding length (the amount of data to be transmitted) [79]. Accordingly, a semantic coding strategy is a conditional probability distribution \(p(X|W)\) given that \(X\) is a finite set of allowed messages (messages allowed by the message generator) [79]. Meanwhile, deterministic coding is a type of coding in which each \(w\in\mathcal{W}\) has at most one possible coded message [79]. For a \(\mu(\mathcal{W})\) and a \(p(X|W)\) that are known _a priori_, the distribution of the generated messages can be determined as [79]
\[p(x)=\sum_{w\in\mathcal{W}}\mu(w)p(x|w). \tag{14}\]
Using \(p(x)\) as defined in (14), the Shannon entropy of the messages in \(X\) is defined as [79]
\[H(X):=-\sum_{x\in\mathcal{X}}p(x)\log_{2}p(x), \tag{15}\]
where \(\mathcal{X}\) is the alphabet of \(X\).
Interestingly, the message entropy as defined in (15) and the model entropy as defined in (13) can be related to one another. Consequently, the following theorem links the model (semantic) entropy and the message (syntactic) entropy of a source [79].
**Theorem 1** (**Relationship between the model entropy and the message entropy [79, Theorem 1]**).: _The message entropy \(H(X)\) and the model entropy \(H(W)\) that are defined in (15) and (13), respectively, are related as follows:_
\[H(X)=H(W)+H(X|W)-H(W|X). \tag{16}\]
Proof.: The proof is provided in Appendix B.
Fig. 27: A SemCom (system) model that comprises semantic information source and destination – modified from [79, Fig. 2]: KB – knowledge base.
Intuitively, \(H(X|W)\) and \(H(W|X)\) quantify the semantic redundancy of the coding and the semantic ambiguity of the coding, respectively [79]. This intuition and Theorem 1 lead to the following remark.
**Remark 1**.: _Theorem 1 affirms that message entropy can be larger or smaller than model entropy depending on whether semantic redundancy or semantic ambiguity is larger._
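Theorem 1 and Remark 1 can be checked numerically on a toy semantic source: given a model distribution \(\mu(w)\) and a coding strategy \(p(x|w)\), the sketch below computes \(H(W)\), \(H(X)\), the semantic redundancy \(H(X|W)\), and the semantic ambiguity \(H(W|X)\), and verifies (16); the two-model, two-message distributions are arbitrary illustrative values.

```python
import numpy as np

mu = np.array([0.7, 0.3])                      # mu(w): model distribution of the source
p_x_given_w = np.array([[0.9, 0.1],            # p(x|w): a stochastic semantic coding strategy
                        [0.2, 0.8]])

joint = mu[:, None] * p_x_given_w              # p(w, x)
p_x = joint.sum(axis=0)                        # eq. (14)

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

H_W, H_X = entropy(mu), entropy(p_x)
H_X_given_W = sum(mu[w] * entropy(p_x_given_w[w]) for w in range(2))          # semantic redundancy
H_W_given_X = sum(p_x[x] * entropy(joint[:, x] / p_x[x]) for x in range(2))   # semantic ambiguity

print(round(H_X, 4), round(H_W + H_X_given_W - H_W_given_X, 4))   # both sides of eq. (16) agree
```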
With this remark in mind, we now move on to our brief discussion on semantic channel capacity.
#### V-C2 Semantic Channel Capacity
to formally present the semantic channel capacity theorem derived by the authors of [79, 177], we first define the following parameters:
* For a semantically coded message \(X\) and its corresponding semantically decoded message \(Y\), \(I(X;Y):=H(X)-H(X|Y)\) is the mutual information between \(X\) and \(Y\). \(I(X;Y)\) represents syntactical channel ambiguity due to non-literal semantic transmission, technical noise, or semantic noise [79].
* Given the transmitter's local knowledge \(K_{s}\) and inference procedure \(I_{s}\), \(H_{K_{s},I_{s}}(W|X)\) is the uncertainty of the semantic encoder [79]. A larger \(H_{K_{s},I_{s}}(W|X)\) implies larger semantic ambiguity (in semantic coding) [79].
* Given the receiver's local knowledge \(K_{r}\) and inference procedure \(I_{r}\), \(\overline{H_{s;K_{r},I_{r}}(Y)}=\sum_{y\in\mathcal{Y}}p(y)H_{s}(y)\)21 (for \(\mathcal{Y}\) being the alphabet of \(Y\)) is the average logical information of the received messages. The bigger \(\overline{H_{s;K_{r},I_{r}}(Y)}\) is, the better the receiver is able to interpret the received messages [79].
Footnote 21: For \(m(y)\) being the logical probability of a message (sentence) \(y\) as defined by Carnap and Bar-Hillel [176, 95], \(H_{s}(y)\) is the semantic entropy of \(y\) and \(H_{s}(y):=-\log_{2}(m(y))\).
Under the less realistic assumptions that \(K_{s}=K_{r}\) and \(I_{s}=I_{r}\), the following theorem (with no subscripts) holds.
**Theorem 2** (**Semantic channel coding theorem [79, Theorem 3])**.: _For a discrete memoryless channel, the semantic channel capacity \(C_{s}\) given by_
\[C_{s}=\sup_{p(X|W)}\big{\{}I(X;Y)-H(W|X)+\overline{H_{s}(Y)}\big{\}} \tag{17}\]
_has the following property: there exists a block coding strategy for any \(\epsilon>0\) and \(R<C_{s}\) such that the maximum probability of semantic error is less than \(\epsilon\)._
Proof.: The proof is in [177, Appendix].
In (17), the supremum is taken over the semantic coding strategy \(p(X|W)\), and \(\sup I(X;Y)\) is the engineering channel capacity [79]. The engineering channel capacity can be larger or smaller than the semantic channel capacity - as asserted by Theorem 2 - depending on whether \(H(W|X)\) or \(\overline{H_{s}(Y)}\) is larger.
Despite the less realistic underlying assumption, Theorem 2 is informative and may signify the best-case semantic capacity scenario, as both the background KB and inference procedure of the sender and receiver are assumed to be the same. This leads us to our discussion on the important result of semantic compression.
#### V-C3 Semantic Compression
semantic compression is carried out by the semantic encoder, which attempts to _efficiently encode_ only the semantically relevant information of the source's message. To this end, a fundamental question of semantic compression is: "To what extent is semantic compression possible?" [79]. Answering this question in part, the authors of [334] provide an informative theorem [334, Theorem 1]. Before presenting this theorem, the following definitions from [334] are in order.
**Definition 5** ( [334, Definition 1]).: _A (statistical/syntactic) source is a process that stochastically generates symbols from some alphabet \(\Delta_{\tilde{X}}\). It is represented by an RV \(\tilde{X}\) that is drawn from \(\tilde{\mathcal{X}}:=\{\tilde{x}_{1},\tilde{x}_{2},\ldots,\tilde{x}_{n}\}\) with a probability mass function (PMF) \(p(\cdot)\)._
**Definition 6** ( [334, Definition 2]).: _The Shannon entropy of an RV \(\tilde{X}\) is defined as [334, eq. (1)]_
\[H(\tilde{X}):=-\sum_{\tilde{x}\in\tilde{\mathcal{X}}}p(\tilde{x})\log_{2}p(\tilde{x}). \tag{18}\]
**Definition 7** ( [334, Definition 5]).: _Stochastically generating messages with related meanings, a semantic information source \(S\) is symbolized by a tuple \((\tilde{M},\tilde{X},P,L)\), where \(L\) is the formal language, \(\tilde{X}\) is an RV drawn from \(\tilde{\mathcal{X}}\) and each instance of \(\tilde{X}\) is an expression in \(L\), \(\tilde{M}\) is an RV that takes values from \(\tilde{\mathcal{M}}\)22 and each instance of \(\tilde{M}\) is an interpretation of \(L\), and \(P\) is the joint distribution of \((\tilde{M},\tilde{X})\)._
Footnote 22: Both \(\tilde{\mathcal{X}}\) and \(\tilde{\mathcal{M}}\) may be countably infinite [334].
**Definition 8** ( [334, Definition 8]).: _For \(S:=(\tilde{M},\tilde{X},P,L)\) being the semantic information source whose message is denoted by \(\tilde{x}\in\tilde{\mathcal{X}}\), the logical probability of \(\tilde{x}\) is denoted by \(P_{\tilde{M}}(\tilde{x})\) and defined as [334, eq. (2)]_
\[P_{\tilde{M}}(\tilde{x}):=\sum_{\tilde{m}\in\tilde{\mathcal{M}}:\,\tilde{m}\models\tilde{x}}P_{\tilde{M}}(\tilde{m}). \tag{19}\]
**Definition 9** ( [334, Definition 9]).: _For \(S:=(\tilde{M},\tilde{X},P,L)\) being the semantic information source whose message is denoted by \(\tilde{x}\in\tilde{\mathcal{X}}\), the semantic information of \(\tilde{x}\) is denoted by \(H_{s}(\tilde{x})\) and defined as [334, eq. (3)]_
\[H_{s}(\tilde{x}):=-\log_{2}P_{\tilde{M}}(\tilde{x}), \tag{20}\]
_where \(H_{s}(\tilde{x})\) quantifies the element of surprise in finding \(\tilde{x}\) to be true [334]._
**Definition 10** ( [334, Definition 10]).: _For \(S:=(\tilde{M},\tilde{X},P,L)\) being the semantic information source, its semantic entropy is denoted by \(H_{s}(\tilde{X})\) and defined as [334, eq. (4)]_
\[H_{s}(\tilde{X}):=\sum_{\tilde{x}\in\tilde{\mathcal{X}}}p(\tilde{x})H_{s}(\tilde{x}). \tag{21}\]
**Definition 11** ( [334, Definition 11]).: _For \(S:=(\tilde{M},\tilde{X},P,L)\) being the semantic information source, its model entropy is denoted by \(H(\tilde{M})\) and defined as [334, eq. (5)]_
\[H(\tilde{M}):=-\sum_{\tilde{m}\in\tilde{\mathcal{M}}}P_{\tilde{M}}(\tilde{m})\log_{2}P_{\tilde{M}}(\tilde{m}). \tag{22}\]
In light of Definitions 5-11, semantic compression can be thought of - hypothetically - as a _transformation by a semantic channel_ inside a source whose inputs are models \(\tilde{M}\) and outputs are messages \(\tilde{X}\) (see [334, Fig. 1]). This transformation is characterized by the following theorem.
**Theorem 3** ( [334, Theorem 1]).: _The semantic entropy of a source \(S:=(\tilde{M},\tilde{X},P,L)\) is bounded from above by the mutual information between its models \(\tilde{M}\) and messages \(\tilde{X}\)[334, eq. (8)]:_
\[H_{s}(\tilde{X})\leq I(\tilde{M};\tilde{X}), \tag{23}\]
_where \(I(\tilde{M};\tilde{X}):=H(\tilde{M})-H(\tilde{M}|\tilde{X})=H(\tilde{X})-H( \tilde{X}|\tilde{M})\)._
Proof.: The proof is in [334, p. 195-196].
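Definitions 5-11 and Theorem 3 can be checked numerically on a toy semantic source. The Python sketch below assumes a hypothetical set of three models, three messages, a satisfaction (truth) table, and message-generation probabilities - none of which come from [334] - and compares the semantic entropy \(H_{s}(\tilde{X})\) with the mutual information \(I(\tilde{M};\tilde{X})\).

```python
import numpy as np

p_m = np.array([0.5, 0.3, 0.2])            # model probabilities P_M (hypothetical)
# truth[i, j] = 1 if message x_j is true in model m_i (hypothetical satisfaction table)
truth = np.array([[1, 0, 1],
                  [0, 1, 1],
                  [0, 1, 1]])
# p(x|m): each model only utters messages that are true in it (hypothetical)
p_x_given_m = np.array([[0.7, 0.0, 0.3],
                        [0.0, 0.8, 0.2],
                        [0.0, 0.6, 0.4]])

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Logical probability of each message: sum of P_M over the models satisfying it (eq. (19)).
logical_p = truth.T @ p_m
h_s_per_msg = -np.log2(logical_p)           # semantic information of each message (eq. (20))

p_mx = p_m[:, None] * p_x_given_m           # joint distribution P of (M, X)
p_x = p_mx.sum(axis=0)
h_s_X = np.dot(p_x, h_s_per_msg)            # semantic entropy of the source (eq. (21))
i_mx = entropy(p_m) + entropy(p_x) - entropy(p_mx.ravel())

print(f"H_s(X) = {h_s_X:.3f} bits, I(M;X) = {i_mx:.3f} bits")
print("Theorem 3 (H_s(X) <= I(M;X)) holds:", h_s_X <= i_mx + 1e-9)
```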
We now continue with our discussion of indirect rate distortion characterization for semantic sources [329, 335].
### _Rate Distortion Characterization for Semantic Sources_
The authors of [335] take inspiration from the classical rate distortion theory [336, Ch. 10] and propose an indirect rate distortion theory for a source model that comprises an intrinsic state part and an extrinsic observation part. This source model, which is shown in Fig. 28, is quite relevant in the context of SemCom because the intrinsic state corresponds to the semantic feature \(S\) of the source, which is generally unobservable and can only be inferred from the extrinsic observation \(X\)[335]. The pair of RVs \((S,X)\) that are correlated with the joint probability distribution \(p(s,x)\) is employed to model this memoryless semantic source [335], as schematized in Fig. 28. As is also shown in Fig. 28, the encoder has access to only a length-\(n\) block of the extrinsic observation sequence \(X^{n}\) and its output is fed to the decoder. The decoder has two major tasks, as seen in Fig. 28: \(i)\) replicate the intrinsic state block as \(\hat{S}^{n}\) under a state distortion measure \(d_{s}\), and \(ii)\) replicate the extrinsic observation block as \(\hat{X}^{n}\) under an observation distortion measure \(d_{o}\)[335]. Moreover, the encoder and decoder are linked via a bit pipe in which the codeword \(W\) of \(nR\) bits - with \(R\) hence being the code rate - is transferred from the encoder to the decoder [335].
For the aforementioned system setup shown in Fig. 28, let \(d_{s}:\mathcal{S}\times\hat{\mathcal{S}}\to\mathbb{R}_{+}\) and \(d_{o}:\mathcal{X}\times\hat{\mathcal{X}}\to\mathbb{R}_{+}\) be two distortion measures, where \(\mathcal{S}\times\mathcal{X}\) is the source product alphabet and \(\hat{\mathcal{S}}\times\hat{\mathcal{X}}\) is the reproduction product alphabet [335]. The extended block-wise distortion measures are given by [335, eqs. (1) and (2)]
\[d_{s}(s^{n},\hat{s}^{n}) =\frac{1}{n}\sum_{i=1}^{n}d_{s}(s_{i},\hat{s}_{i}) \tag{24a}\] \[d_{o}(x^{n},\hat{x}^{n}) =\frac{1}{n}\sum_{i=1}^{n}d_{o}(x_{i},\hat{x}_{i}). \tag{24b}\]
The authors of [335] claim that a tuple \((R,D_{s},D_{o})\) is achievable - for any \(\epsilon>0\) and all sufficiently large \(n\) - if the encoding function, the state decoding function, and the observation decoding function defined in [335, p. 5948] exist. This overall setup's goal is to characterize the region of all achievable \((R,D_{s},D_{o})\) tuples, and its semantic rate distortion function is defined as [335, eq. (5)]
\[R(D_{s},D_{o}):=\inf\{R:(R,D_{s},D_{o})\ \text{ is achievable}\}. \tag{25}\]
Regarding the characterization of the problem in (25), the authors of [335] derive the following theorem.
**Theorem 4** ( [335, Theorem 1]).: _For a semantic source modeled by a pair of RVs \((S,X)\sim p(s,x)\) over the source alphabet \(\mathcal{S}\times\mathcal{X}\), the reproduction alphabet \(\hat{\mathcal{S}}\times\hat{\mathcal{X}}\), and distortion measures \(d_{s}\) and \(d_{o}\), the semantic rate distortion function \(R(D_{s},D_{o})\) is given by [335, eqs. (9)-(11)]_
\[R(D_{s},D_{o}) =\min_{p(\hat{s},\hat{x}|x)}I(X;\hat{S},\hat{X})\] (26a) s.t. \[\mathbb{E}\{d_{o}(X,\hat{X})\}\leq D_{o} \tag{26b}\] \[\mathbb{E}\{\hat{d}_{s}(X,\hat{S})\}\leq D_{s}, \tag{26c}\]
_where \(S,X,\hat{S},\hat{X}\) constitute a Markov chain \(S\leftrightarrow X\leftrightarrow(\hat{S},\hat{X})\) and [335, eq. (12)]_
\[\hat{d}_{s}(x,\hat{s})=\mathbb{E}\{d_{s}(S,\hat{s})|x\}=\sum_{s\in\mathcal{S} }p(s|x)d_{s}(s,\hat{s}). \tag{27}\]
Proof.: The proof is in [335, Appendix I].
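As a small numerical illustration of (26)-(27), the Python sketch below computes the indirect state distortion \(\hat{d}_{s}(x,\hat{s})\) from \(p(s|x)\) and \(d_{s}\), and then evaluates the rate term \(I(X;\hat{S},\hat{X})\) and both distortion constraints for one (non-optimized) candidate test channel \(p(\hat{s},\hat{x}|x)\); the numbers are hypothetical, so the sketch only evaluates a feasible point rather than \(R(D_{s},D_{o})\) itself.

```python
import numpy as np

# Hypothetical memoryless semantic source (S, X) ~ p(s, x) with binary alphabets.
p_sx = np.array([[0.4, 0.1],
                 [0.1, 0.4]])                     # rows: state s, columns: observation x
p_x = p_sx.sum(axis=0)
p_s_given_x = p_sx / p_x                          # column j holds p(s | x = j)

d_s = 1.0 - np.eye(2)                             # Hamming distortion on the state
d_o = 1.0 - np.eye(2)                             # Hamming distortion on the observation

# Indirect state distortion per eq. (27): d_hat_s(x, s_hat) = sum_s p(s|x) d_s(s, s_hat).
d_hat_s = p_s_given_x.T @ d_s                     # entry [x, s_hat]

# One candidate test channel p(s_hat, x_hat | x): reproduce (s_hat, x_hat) = (x, x).
p_rep_given_x = np.zeros((2, 2, 2))               # axes: x, s_hat, x_hat
p_rep_given_x[0, 0, 0] = 1.0
p_rep_given_x[1, 1, 1] = 1.0

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_joint = p_x[:, None, None] * p_rep_given_x      # p(x, s_hat, x_hat)
p_rep = p_joint.sum(axis=0)                       # p(s_hat, x_hat)
rate = entropy(p_x) + entropy(p_rep.ravel()) - entropy(p_joint.ravel())   # I(X; S_hat, X_hat)

D_o = np.einsum('x,xab,xb->', p_x, p_rep_given_x, d_o)      # E{d_o(X, X_hat)}, cf. (26b)
D_s = np.einsum('x,xab,xa->', p_x, p_rep_given_x, d_hat_s)  # E{d_hat_s(X, S_hat)}, cf. (26c)

print(f"I(X; S_hat, X_hat) = {rate:.3f} bits, D_o = {D_o:.3f}, D_s = {D_s:.3f}")
```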
We now discuss the newest SemCom formulation dubbed _semantic language utilization and design_[337].
### _Semantic Language Utilization and Design_
The authors of [337] formulate two fundamental problems related to SemCom: _language design_ and _language utilization_.23 The language design problem deals with the design of common languages or codebooks between the transmitter and receiver to efficiently convey meaning and can be resolved by applying a JSCC theory. Below is the formal definition of this problem.
Footnote 23: Throughout the work in [337], "language” refers to _semantic language_, which is often formed commonly through interactions and is much richer in interpretation and expression: “a meaning can be expressed by multiple messages and a message can be interpreted as multiple meanings” [337].
**Problem 1** (**Language design [337, Problem 2])**.: _Having presumed that both the transmitter and receiver are allowed to negotiate prior to transmission, how can the semantic language and technical language be designed to efficiently communicate the meaning of a semantic source?_
Fig. 28: An illustrative schematic of a semantic source and its lossy compression [335, Fig. 1].
Unlike Problem 1, Weaver's SemCom vision [91, Ch. 1] is concerned with the interpretation of meaning by the receiver as compared with the sender's intended meaning [337]. To this end, the only things that the transmitter and receiver agree on are the semantic and technical languages [337] that should be leveraged by the SemCom system designer to reduce misinterpretation by the receiver. This underscores the following language utilization problem.
**Problem 2** (**Language utilization [337, Problem 1])**.: _Once the transmitter and receiver have agreed on semantic and technical languages, the following problems arise while communicating an intended meaning: \(1)\) how to minimize the receiver's misinterpretations from the transmitter's perspective (semantic encoding)? \(2)\) how to minimize the receiver's misinterpretations from the receiver's perspective (semantic decoding)?_
* _Semantic encoding problem: while minimizing the communication cost, how can the transmitter generate a message such that its intended meaning can be recovered at the receiver as accurately as possible? [337]_
* _Semantic decoding problem: given a received message and no prior information about the predetermined meaning of the transmitter, how can the receiver decode the intended meaning of the transmitter? [337]_
_Combining the two aforementioned problems, the following combined semantic encoding and decoding (CSED) problem ensues._
* _The CSED problem: while acting separately in their own ways, what if the transmitter and receiver simultaneously perform semantic encoding and semantic decoding, respectively?_
The above language utilization problems are discussed below. We begin with some clarifying definitions from [337].
**Definition 12** (**Words and syntax [337, Definition 3.1]**).: _The smallest elements of a message in a given language are words. Words can be employed on their own or together (with other words) to form a message. Syntax is a set of rules that establish the grouping of words in a message._
**Definition 13** (**The set of messages [337, Definition 3.2]**).: _Let \(\mathcal{S}\) denote the set of all possible messages that are determined by the words and syntax of a language. Assuming that \(\mathcal{S}\) is finite or countably infinite, we let \(\mathcal{S}:=\big{\{}s_{m}:m=1,2,\ldots,M\big{\}}\), where \(s_{m}\) and \(M\) are the \(m\)-th message and the number of all possible messages, respectively._
**Definition 14** (**The set of meanings [337, Definition 3.3]**).: _Let the messages in \(\mathcal{S}\) convey a finite or countably infinite number of meanings. The set of all possible meanings is denoted by \(\mathcal{W}\) and defined as \(\mathcal{W}:=\big{\{}w_{n}:n=1,2,\ldots,N\big{\}}\), where \(w_{n}\) and \(N\) represent one meaning and the number of meanings, respectively. Meanwhile, the probability of the transmitter's intended meaning being \(w_{n}\) is denoted by \(p(w_{n})\)._
**Definition 15** (**Expression [337, Definition 3.4]**).: _Defining a mapping from the set of meanings to the set of messages, the expression of a language is defined as [337, eq. (2)]:_
\[\big{\{}p(s|w)\in[0,1]:w\in\mathcal{W},s\in\mathcal{S},\sum_{s}p(s|w)=1\big{\}}. \tag{28}\]
_These mappings form a matrix \(\boldsymbol{P}\in\mathbb{R}^{N\times M}\) such that \((\boldsymbol{P})_{n,m}:=p(s_{m}|w_{n})\) and \(\sum_{m=1}^{M}(\boldsymbol{P})_{n,m}=1\) for all \(n\in[N]\)._
**Definition 16** (**Interpretation [337, Definition 3.5]**).: _Defining a mapping from the set of messages to the set of meanings, the interpretation of a language is defined as [337, eq. (3)]:_
\[\big{\{}q(w|s)\in[0,1]:w\in\mathcal{W},s\in\mathcal{S},\sum_{w}q(w|s)=1\big{\}}. \tag{29}\]
_These mappings form a matrix \(\boldsymbol{Q}\in\mathbb{R}^{M\times N}\) such that \((\boldsymbol{Q})_{m,n}:=q(w_{n}|s_{m})\) and \(\sum_{n=1}^{N}(\boldsymbol{Q})_{m,n}=1\) for all \(m\in[M]\)._
Per Definitions 12-16, a semantic language is denoted as a 4-tuple \((\mathcal{W},\mathcal{S},\boldsymbol{P},\boldsymbol{Q})\)[337]. We now proceed to state the definition of _semantic channel_.
**Definition 17** (**Semantic channel [337, Definition 3.7]**).: _Considering communication between a transmitter and a receiver, let the transmitted and the received message be \(s\in\mathcal{S}\) and \(\hat{s}\in\mathcal{S}\), respectively. The semantic channel is characterized by the transition probabilities from \(s\) to \(\hat{s}\) and defined as [337, eq. (5)]_
\[\big{\{}c(\hat{s}|s)\in[0,1]:s,\hat{s}\in\mathcal{S},\sum_{\hat{s}}c(\hat{s}| s)=1\big{\}}. \tag{30}\]
_A semantic channel is also expressed as a matrix \(\boldsymbol{C}\in\mathbb{R}^{M\times M}\) and said to be error free if and only if \(c(\hat{s}|s)=1\) for \(s=\hat{s}\) and \(c(\hat{s}|s)=0\) for \(s\neq\hat{s}\)._
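The Python sketch below instantiates Definitions 13-17 for a hypothetical toy language with two meanings, three messages, a row-stochastic expression matrix \(\boldsymbol{P}\), an interpretation matrix \(\boldsymbol{Q}\), and an error-free semantic channel \(\boldsymbol{C}\); all matrices are illustrative assumptions rather than examples from [337].

```python
import numpy as np

meanings = ["w1", "w2"]                      # W, N = 2
messages = ["s1", "s2", "s3"]                # S, M = 3
p_w = np.array([0.7, 0.3])                   # prior over intended meanings

# Expression P (N x M): p(s|w), each row sums to 1 (Definition 15).
P = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.3, 0.7]])

# Interpretation Q (M x N): q(w|s), each row sums to 1 (Definition 16).
Q = np.array([[1.0, 0.0],
              [0.4, 0.6],
              [0.0, 1.0]])

# Error-free semantic channel C (M x M): c(s_hat|s) = 1 iff s_hat = s (Definition 17).
C = np.eye(3)

assert np.allclose(P.sum(axis=1), 1) and np.allclose(Q.sum(axis=1), 1) and np.allclose(C.sum(axis=1), 1)

# Probability that the receiver's interpretation matches the intended meaning
# when the transmitter expresses with P over the error-free channel.
p_correct = sum(p_w[n] * P[n] @ C @ Q[:, n] for n in range(len(meanings)))
print(f"P(interpreted meaning == intended meaning) = {p_correct:.3f}")
```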
With Definitions 12-17 in hand, we now move on to discuss the language utilization problem of semantic encoding.
#### V-A1 Language Utilization - Semantic Encoding
from the transmitter's perspective, the meaning recovered at the receiver is dictated by the _interpretation of the agreed language_, the semantic channel, and the semantic decoder. The semantic encoding problem therefore revolves around encoding the intended meaning so as to minimize misinterpretation by the receiver [337]. Accordingly, the following definitions of _semantic encoding scheme_ and _semantic distortion of semantic encoding_ ensue.
**Definition 18** (**Semantic encoding schemes [337, Definition 3.8]**).: _A semantic encoding scheme is a mapping from the meaning set \(\mathcal{W}\) to the message set \(\mathcal{S}\) and is defined as [337, eq. (7)]_
\[\big{\{}u(s|w)\in[0,1]:w\in\mathcal{W},s\in\mathcal{S},\sum_{s}u(s|w)=1 \big{\}}, \tag{31}\]
_where (31) constitutes the matrix \(\boldsymbol{U}\in\mathbb{R}^{N\times M}\)._
**Definition 19** (**Semantic distortion of semantic encoding [337, Definition 3.9]**).: _Let \(w\in\mathcal{W}\) and \(\hat{w}\in\mathcal{W}\) be the transmitted and reconstructed meanings at the transmitter and receiver, respectively. The average semantic distortion \(D_{\boldsymbol{U},\boldsymbol{Q}}\) attained by a semantic encoding scheme \(\boldsymbol{U}\) can be expressed as [337, eq. (8)]_
\[D_{\boldsymbol{U},\boldsymbol{Q}}:=\sum_{w,s,\hat{s},\hat{w}}p(w)u(s|w)c(\hat{s }|s)q(\hat{w}|\hat{s})d(w,\hat{w}), \tag{32}\]
_where \(d(w,\hat{w}):\mathcal{W}\times\mathcal{W}\rightarrow\mathbb{R}_{+}\) is a semantic distortion measure [337]._
W.r.t. the average semantic distortion as expressed in (32) and the semantic cost defined in [337, eq. (9)], the authors of [337] devise insightful characterizations of the distortion-cost function of semantic encoding [337, Sec. IV-A] and the distortion-cost region of semantic encoding [337, Theorem 4.2].
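Continuing the toy language sketched above, the following Python snippet evaluates the average semantic distortion of semantic encoding in (32) for a hypothetical encoding matrix \(\boldsymbol{U}\), the Hamming distortion \(d(w,\hat{w})=\mathbb{1}\{w\neq\hat{w}\}\), and an error-free semantic channel; as before, every number is an illustrative assumption.

```python
import numpy as np

p_w = np.array([0.7, 0.3])                   # prior over meanings (N = 2)
U = np.array([[0.9, 0.1, 0.0],               # encoding u(s|w), N x M (hypothetical)
              [0.0, 0.1, 0.9]])
C = np.eye(3)                                # error-free semantic channel, M x M
Q = np.array([[1.0, 0.0],                    # interpretation q(w_hat|s_hat), M x N
              [0.4, 0.6],
              [0.0, 1.0]])
d = 1.0 - np.eye(2)                          # Hamming distortion d(w, w_hat)

# Eq. (32): D_{U,Q} = sum_{w,s,s_hat,w_hat} p(w) u(s|w) c(s_hat|s) q(w_hat|s_hat) d(w, w_hat).
D_UQ = np.einsum('n,nm,mk,kj,nj->', p_w, U, C, Q, d)
print(f"Average semantic distortion of encoding D_UQ = {D_UQ:.3f}")
```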
We now move on to the language utilization problem of semantic decoding.
#### V-B2 Language Utilization - Semantic Decoding
from the receiver's perspective, the received message is dictated by the _expression of the agreed language_, the semantic channel, and the semantic encoder. The semantic decoding problem therefore concerns how to decode a received message in order to minimize semantic distortion [337]. Consequently, the following definitions of _semantic decoding scheme_ and _semantic distortion of semantic decoding_ ensue.
**Definition 20** (Semantic decoding schemes [337, Definition 3.11]).: _A semantic decoding scheme is a mapping from the message set \(\mathcal{S}\) to the meaning set \(\mathcal{W}\) and defined as [337, eq. (10)]_
\[\{v(w|s)\in[0,1]:w\in\mathcal{W},s\in\mathcal{S},\sum_{w}v(w|s)=1\}, \tag{33}\]
_where (33) constitutes the matrix \(\mathbf{V}\in\mathbb{R}^{M\times N}\)._
**Definition 21** (Semantic distortion of semantic decoding [337, Definition 3.12]).: _Let \(w\in\mathcal{W}\) and \(\hat{w}\in\mathcal{W}\) be the transmitted and recovered meanings at the transmitter and receiver, respectively. The average semantic distortion \(D_{\mathbf{P},\mathbf{V}}\) attained by a semantic decoding scheme \(\mathbf{V}\) can be expressed as [337, eq. (11)]_
\[D_{\mathbf{P},\mathbf{V}}:=\sum_{w,s,\hat{s},\hat{w}}p(w)p(s|w)c(\hat{s}|s)v(\hat{w}|\hat{s})d(w,\hat{w}). \tag{34}\]
W.r.t. the average distortion as given in (34) and the achievable cost per [337, eq. (23)], the authors of [337] derive the distortion-cost region of semantic decoding [337, Theorem 5.1], semantic decoding with an inaccurate prior [337, Proposition 5.2], and semantic decoding with the Hamming distortion [337, Proposition 5.3], among other results.
We now proceed with our brief discussion of the CSED language utilization problem.
#### V-B3 Language Utilization - CSED
in the CSED problem, both the transmitter and the receiver act individually per their own perspectives: the transmitter and receiver simultaneously carry out semantic encoding and semantic decoding, respectively. This problem is formally defined in [337, Definition 5.4]. In light of this problem, the semantic distortion of CSED and the semantic cost of CSED are given by [337, eqs. (46) and (47)]
\[D_{\mathbf{U},\mathbf{V}_{q}^{*}} :=\sum_{w,s,\hat{s},\hat{w}}p(w)u(s|w)c(\hat{s}|s)v(\hat{w}|\hat{s })d(w,\hat{w}) \tag{35a}\] \[L_{\mathbf{U}} :=\sum_{w,s}p(w)u(s|w)\ell(s), \tag{35b}\]
where \(\mathbf{V}_{q}^{*}\) is the optimal semantic decoding strategy and \(\ell(s):\mathcal{S}\rightarrow\mathbb{R}^{+}\) is a cost function [337]. In light of (35a) and (35b), the authors of [337] derive the _distortion-cost region of CSED_[337, Theorem 5.5] and _CSED with an error-free semantic channel_[337, Theorem 5.6]. The result in [337, Theorem 5.6] is particularly insightful in that the CSED scheme would comprise the semantic encoding scheme \(\mathbf{U}\), which is optimized based on the interpretation \(\mathbf{Q}\), and the decoding scheme \(\mathbf{V}\), which is optimized based on the expression \(\mathbf{P}\) of the language [337].
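The pair in (35a)-(35b) can be evaluated in the same toy setting. The sketch below plugs a hypothetical encoder \(\boldsymbol{U}\), a candidate decoder \(\boldsymbol{V}\) (used in place of the optimal \(\mathbf{V}_{q}^{*}\), which is not derived here), an error-free semantic channel, and a hypothetical per-message cost \(\ell(s)\) into the CSED distortion and cost expressions.

```python
import numpy as np

p_w = np.array([0.7, 0.3])                   # prior over meanings
U = np.array([[0.9, 0.1, 0.0],               # encoder u(s|w)
              [0.0, 0.1, 0.9]])
C = np.eye(3)                                # error-free semantic channel
V = np.array([[1.0, 0.0],                    # candidate decoder v(w_hat|s_hat) (not the optimal V*_q)
              [0.5, 0.5],
              [0.0, 1.0]])
d = 1.0 - np.eye(2)                          # Hamming distortion d(w, w_hat)
cost = np.array([1.0, 2.0, 3.0])             # hypothetical per-message cost l(s), e.g., message length

# Eq. (35a): CSED distortion, with the candidate decoder V standing in for V*_q.
D_UV = np.einsum('n,nm,mk,kj,nj->', p_w, U, C, V, d)
# Eq. (35b): average semantic cost of the encoder.
L_U = np.einsum('n,nm,m->', p_w, U, cost)

print(f"CSED distortion = {D_UV:.3f}, average semantic cost = {L_U:.3f}")
```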
Apart from the advancements in SemCom theory that are discussed in Sections V-A through V-C, there have been several other recent developments and perspectives, such as the equivalence of SemCom and online learning [338]; the rate distortion theory for strategic SemCom [331]; universal SemCom [94, 192]; an IB viewpoint of SemCom [217]; a probabilistic logic approach to SemCom [102, 103]; and compatibility among various vantage points of SemCom [339]. These theoretical advancements and the above-discussed SemCom theories have their respective limitations. Hence, existing SemCom theories are not the most rigorous and complete of theories (though they sure are interesting!) due to the many fundamental and major challenges of SemCom, which are detailed below.
## VI Fundamental and Major Challenges of SemCom
When it comes to realizing high-fidelity SemCom for 6G and beyond, the research field of SemCom is fraught with fundamental and major challenges in the theoretical, algorithmic, and realization/implementation-related research frontiers. These challenges are discussed in detail below, beginning with the challenges in the development of fundamental SemCom theories.
### _Challenges in the Development of Fundamental SemCom Theories_
In what follows, we present (in no particular order) the fundamental and major challenges related to - but not limited to - the development of fundamental SemCom theories.
#### Vi-A1 Lack of any Commonly Accepted Definition of Semantics / Semantic Information
although several notions of semantics have been put forward to define what SemCom could be, none have been satisfactory to date [125, p. 125]. This has hugely restricted further progress in SemCom theories [125, p. 125]. Amid the lack of adequate mathematical quantification of semantic information, DL-based SE has emerged as a popular enabler of SemCom. However, deploying a black box as the foundation of system design and optimization is not fundamentally convincing [50]. This definitely hinders the advancement of SemCom theory (as well as algorithm and realization).
Vi-A2 Quantifying Semantic Noise, Semantic Interference, and the Effect of Semantic Noise and Semantic Interference
semantic noise and semantic interference can occur in (and hence impact) text SemCom, audio SemCom, image SemCom, video SemCom, multi-modal SemCom, and cross-modal SemCom. Accordingly, quantifying semantic noise, semantic interference, and the effect of semantic noise and semantic interference can pave the way for a rigorous SemCom theory.
However, they come with the following respective fundamental challenges: \(i)\) _how to quantify semantic noise?_ \(ii)\) _how to quantify semantic interference?_ and \(iii)\) _how to quantify the effect of semantic noise and semantic interference?_
#### V-B3 Fundamental Performance Analysis of SemCom
the fundamental non-asymptotic performance analysis of SemCom is fundamentally challenging for the following reasons [326]: \(i)\) the lack of a commonly agreed-upon definition of semantics / semantic information [125, Ch. 10, p. 125]; \(ii)\) the fundamental lack of interpretability/explainability of _optimization_, _generalization_, and _approximation_ in DL models [340]; and \(iii)\) the lack of a comprehensive mathematical foundation for SemCom [341, Sec. IV].
#### V-B4 Fundamental Performance Analysis of a SemCom System under (Semantic) Interference
in a SemCom system, interference or semantic interference can cause considerable semantic noise, to the extent that the faithfulness of SemCom is destroyed [326]. Owing to a lack of understanding of how to quantify semantic noise and the aforementioned three fundamental challenges that hinder the fundamental non-asymptotic performance analysis of SemCom, analyzing the fundamental performance of a SemCom system that is subjected to (semantic) interference is indeed fundamentally challenging. The authors of [326] have made progress toward analyzing the asymptotic performance of a DL-based text SemCom system with interference. However, there is a long way to go for the non-asymptotic performance analysis of a SemCom system with interference.
#### V-B5 Performance Analysis of DL-based SemCom Systems
DL-based SemCom systems such as DeepSC [81] benefit from a joint DL-based source and channel coding technique. Nevertheless, the rigorous non-asymptotic performance analysis of DL-based SemCom systems is hampered by the _fundamental lack of interpretability/explainability_[342, 343] that is inherent in (trained) DL models.
#### V-B6 Devising a Reasonable Semantic Channel Model in View of SemCom's Attributes
it is widely recognized that background KB mismatch between a semantic encoder and a semantic decoder can definitely cause semantic ambiguity that leads to information distortion. To this end, a major challenge is devising a reasonable semantic channel model based on different degrees of background KB matching from the perspective of SIT [283].
#### V-B7 Semantic-Enabled Intelligence Evolution
to orchestrate and achieve overarching semantic-enabled networked intelligence regardless of the situation, SemCom systems and semantic networks need to be able to autonomously evolve to an enhanced level of intelligence [175] that may require greater _conciseness_[344, 345, 346, 347, 348]. This calls for _semantic-enabled life-long autonomy_ and _lifelong autonomously networked evolving intelligence_, which are both fraught with innumerable fundamental challenges.
#### V-B8 Fundamental Limits of SemCom
in SemCom PHY design, the overarching goal is to optimize semantic information transmission over various channels with relevant background KBs [78]. When it comes to the background KBs, the fundamental limits of SemCom are contingent on not only their respective PHY constraints but also on the contextual constraints [78]. As for contextual constraints, the degree of mutual understanding between any pair of communication parties can influence the interaction, the signaling strategies, and the volume of SemCom [78]. Accordingly, a suitable measure of _intent-achieving efficiency_ - which is generally abstract and complex - must be established to address this question: _what is the most efficient SemCom strategy to attain a given intent?_[78] Strategies worth considering include: semantic-aware joint source-channel coding in PHY and semantic-linked processing in higher layers [78]. To these ends, \(i)\) some theories and coding schemes should first be established to materialize the new measure framework and \(ii)\) achievable bounds should be derived for the fundamental limits of SemCom [78].
#### V-B9 Fundamental Limits of SemCom-Enabled Distributed Model Training
distributed training such as FL is key for enabling distributed AI and edge AI, especially in memory- and computing capacity-limited IoT devices. However, the performance of the resulting edge AI will depend on not only rigorous model training / model tweaking but also the quality/informativeness of the devices' semantic information, provided that SemCom-enabled distributed model training/retraining is the aim. Determining the fundamental limits of this setup is a significant and relevant fundamental challenge.
#### V-B10 Deriving the Capacity of a Semantic-Aware Network
since semantic-aware networks are more complex, their capacity can be closely related to knowledge sharing among users [77]. Accordingly, developing comprehensive mathematical foundations for the performance limits of a SemCom network is a crucial fundamental challenge.
#### V-B11 Unified Fundamental Theory of Semantic Information
SIT is a bedrock of SemCom that can provide insight into semantic information bounds and serve as a crucial framework for evaluating semantic abstraction [211]. On the other hand, unlike conventional communication systems that mainly rely on a fixed set of transmitted symbols, SemCom has to grapple with the extensibility and openness of semantics, which lead to symbols changing dynamically [93]. How to model a dynamic set and its impact on channel capacity remains unknown [93]. Moreover, despite recent and emerging progress on the theory front (as discussed in Section V), a unified fundamental theory of semantic information remains elusive.
#### V-B12 Unified SemCom Theory
employing logical probability [95], existing and emerging SemCom theories attempt to develop a theory for semantic entropy, semantic channel capacity, semantic-level rate distortion theory, and the relationship between inference accuracy and transmission rate [97]. Nevertheless, it is not clear whether this path could lead to a unified SemCom theory. On the other hand, in sharp contrast with conventional communication systems that often deploy a fixed set of transmitted symbols, SemCom hinges on information semantics whose flexibility and complexity causes the semantic symbol set to change dynamically while possibly exhibiting polysemy [107]. As a result, how to process and model dynamic semantic symbol sets is a fundamental challenge [107] that stands in the way of a unified SemCom theory. The authors of [349] took inspiration from the fundamental
synaptic plasticity [135, 136, 350] of our brains and assert that the fundamental limit of effective semantic information communication hinges on the plasticity of our brains or the substrates on which the semantic domains are constructed. This calls for an _extraordinarily fundamental understanding_ of brain computation/operation - at not only the system level, but also the molecular, cellular, and network levels - prior to developing a unified SemCom theory. Furthermore, per the information ecosystem model (see [100, Fig. 1] and [101, Fig. 1]) proposed by the author of [100], a theory of semantic information must take into account the emerging _rate-distortion-perception tradeoff_[351] - a triple tradeoff rather than Shannon's rate-distortion tradeoff - whose significance is affirmed by the prevalence of DL.
We now proceed to fundamental and major challenges in the development of fundamental SemCom algorithms.
### _Challenges in the Development of Fundamental SemCom Algorithms_
In what follows, we point out (in no particular order) the fundamental and major challenges related to - but not limited to - the development of fundamental SemCom algorithms.
#### V-B1 Inevitability of Semantic Mismatch
although both the source KB and the destination KB can learn from the perceived environment and continuously expand as well as update their entries through training and sharing via communications, the KBs at the source and destination can be quite different as a result of observing different environments (hence worlds) with unequal abilities to understand things [76]. Accordingly, _semantic mismatch_ is inevitable to the extent that it can fundamentally constrain the performance of SemCom-based wireless systems.
#### V-B2 The Need for Additional SemCom Performance Assessment Metrics
even though a variety of metrics have been employed in early algorithmic developments of SemCom, additional performance assessment metrics, such as ones that evaluate the amount of semantic information that has been preserved or lost, are required [97]. For semantic-enabled networked intelligence, on the other hand, a comprehensive evaluation framework is needed that takes into account objective and subjective metrics that can capture the efficiency and potential of systems/networks in achieving intent [137].
#### V-B3 Lack of Unified Semantic Performance Assessment Metrics
unified SemCom performance assessment metrics24 are needed to fairly compare and contrast existing and prospective SemCom techniques [71]. When it comes to unified metrics, the major challenge is to establish concrete metrics that can capture source and network dynamics as well as any potentially non-trivial interdependencies among information attributes [75]. Meanwhile, it is worth underscoring that the lack of unified SemCom performance assessment metrics can hinder the advancement of SemCom research, standardization, and deployment in 6G.
Footnote 24: A proper semantic similarity metric is necessary to define a loss function and pre-train the parameters of DNNs for various tasks [107].
#### V-B4 Semantic Transformation
semantic transformation is a core SemCom process in the understand-first-and-then-transmit SemCom framework [93, Fig. 2] at both a transmitter and a receiver. At the receiver, the semantic symbol recognition module can be fundamentally limited by semantic ambiguity, which is an open challenge, especially when there is no context [93], and a hard problem in the field of NLP [126]. One way to overcome this fundamental challenge is to transmit more symbols to achieve disambiguation, but the sender needs to eliminate the inherent ambiguity using as few symbols as possible [93]. Devising the optimal symbols for achieving this goal is an open problem [93].
#### V-B5 Lack of Interpretability in DL-Based SE
differentiable loss functions that are widely deployed by DL-based SE techniques, such as CE and MSE, give equal importance to the semantic contributions of all bits, which is inconsistent with human perception [105]. This corroborates the fundamental lack of interpretability in DL-based SE.
#### V-B6 Lack of Interpretability in DL-Based SemCom
there is a fundamental lack of interpretability in DL-based SemCom techniques due to the fundamental lack of interpretability/explainability that is inherent in DL models [342, 343].
#### V-B7 Addressing Time- and Frequency-Selective Channels in DL-Based SemCom Systems
a number of existing works on DL-based SemCom [81, 243] demonstrate the visible gain that can be achieved by deploying DL under mainly fixed channel conditions (or slow fading channels). However, wireless channels are usually time- and frequency-selective channels whose inherent doubly selective fading will challenge any DL-based design [352]. This challenge will be significant because DL models are typically trained and tested on datasets drawn from the same distribution, which is in sharp contrast to a realistic wireless communication scenario, wherein the testing distribution can be very different from the training distribution.
#### V-B8 Semantic Communications and Networking over Wireless Fading Channels
semantically-encoded information is often intended to be transmitted over wireless channels, which are naturally fading channels [353]. In a realistic SemCom scenario, the semantic decoding of a semantically-encoded message received over wireless fading channels can result in a complete loss of meaning at the receiver. This leads to the following fundamental question worth attacking: _how to design an optimal semantic transceiver (optimal SemCom system) so that the loss of meaning at the receiver is minimized?_
#### V-B9 Bandwidth Allocation in SemCom
semantic information is unevenly distributed in SemCom and semantic-aware networks. This uneven distribution needs to be taken into consideration, and more bandwidth should be allocated to agents that wish to transmit more semantic information [50]. Nevertheless, quantifying semantic information is fundamentally challenging.
#### V-B10 Different Device Capacities
individual communication devices have different computational power, communication resources, and storage capacity, all of which are limited. In light of this natural limitation, the design of SemCom networks must not assume that all devices have sufficient capacity [50]. To this end, devising effective methods for balancing heterogeneous devices' performance and cost requirements is
a key challenge for the design of robust SemCom networks [50].
#### V-B11 Semantic-Aware Multiple Access
designing an optimal semantic-aware multiple access scheme for plenty of devices that transmit signals in a time- or event-triggered process-aware manner to convey multi-attribute information to a remote destination is challenging [75]. The challenge arises from the following requirement regarding the optimal utilization of the shared medium: the devices have to adapt their access patterns depending on not only the arrival of exogenous traffic (and other nodes' status), but also source/process variability, information semantics, and the needs of applications [75].
#### V-B12 Semantic-Aware Multiple and Random Access
while the celebrated multiple access scheme named NOMA [354, 355] improves spectral efficiency, this improvement hardly translates into better performance w.r.t. freshness or other semantic attributes [86]. Not only is there scheduled multiple access, but there is also random access (CSMA, CSA, E-SSA, frameless ALOHA) for next-generation massive MTC [356] and IoT applications, for which determining semantic principles remains a fundamental problem [86]. This calls for novel protocols that incorporate sampling and data generation [86]. Thus, rethinking random or scheduled access in relation to broader semantic metrics remains an open problem [86].
#### V-B13 Designing Semantic Networks
SemCom can be a pivotal component of distributed intelligent networks in 6G and beyond due to its minimal bandwidth consumption, significantly decreased data transmission, and ability to exploit more knowledge [107]. Nevertheless, semantic network design is affected by the following significant challenges: \(i)\) lack of a specific/comprehensive blueprint definition of semantic network; and \(ii)\) the complexity of transmission/computation to disseminate KBs to distributed devices and adapt to the network's ultra-heterogeneity [107].
#### V-B14 Balancing the Generality and Confidentiality of SemCom
most existing SemCom works promote centralized SemCom systems and unified multi-user SemCom systems that are trained based on one or more shared background KBs [241]. Nonetheless, this design philosophy certainly causes _privacy leakage_[241]. Thus, balancing the generality and confidentiality of SemCom is a major challenge of SemCom [241] from an algorithmic standpoint, but also possibly from a realization vantage point.
We now move on to detailing fundamental and major challenges in the realization of SemCom.
### _Challenges in the Realization of SemCom_
We describe (in no particular order) the fundamental and major challenges related to - but not limited to - the realization of SemCom.
#### V-C1 Huge Time Consumption and Complication in Semantic Index Assignment
semantic index assignment is an intuitive approach to preserving semantic similarity in which a binary codeword is assigned to each word [76], with semantically similar words being coded with the shortest Hamming distance and semantically dissimilar (independent) words with the longest Hamming distance [215, 357]. Nevertheless, because the length of a codeword is exponentially proportional to the number of words, the semantic index assignment process is extremely time-consuming and complicated [76].
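A minimal illustration of the idea - with a hypothetical three-word vocabulary and hand-picked 3-bit codewords rather than an assignment algorithm from [76, 215, 357] - is the following Python sketch, in which semantically similar words receive codewords at Hamming distance 1 while an unrelated word receives a codeword at a larger distance.

```python
from itertools import combinations

# Hypothetical semantic index assignment: similar words get close codewords.
codebook = {
    "car":        0b000,
    "automobile": 0b001,   # semantically similar to "car" -> Hamming distance 1
    "banana":     0b111,   # semantically unrelated -> larger Hamming distance
}

def hamming(a, b):
    """Number of bit positions in which the two codewords differ."""
    return bin(a ^ b).count("1")

for (w1, c1), (w2, c2) in combinations(codebook.items(), 2):
    print(f"d_H({w1}, {w2}) = {hamming(c1, c2)}")
```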
#### V-C2 Real-Time Requirement
with its promises of minimum power usage, bandwidth consumption, and transmission delay in addition to some level of inherent security, SemCom is indeed an enabler of 6G and hence attractive for many 6G IoT applications [199]. Nonetheless, the vital incorporation of semantic reasoning for correcting transmission errors incurs more delay in the overall SemCom transceivers, which are relatively more complex [107]. Accordingly, satisfying the ultra-low end-to-end latency requirements - i.e., real-time requirements - of 6G (and beyond) is a major challenge25 for the realization of SemCom.
Footnote 25: To overcome the major challenge of satisfying the ultra-low end-to-end latency requirement of 6G, it is useful to develop _lightweight_ SemCom algorithms and improve SemCom transceiver/hardware design [107].
#### V-C3 Scalability
even though some multi-modal SemCom (e.g., [358, 359]) / cross-modal SemCom (e.g., [235]) systems that work for the transmission of multi-modal / cross-modal data show promise, it is very challenging to grapple with more complex data types in legacy OSI models [107]. One scalability challenge is that a general semantic-level framework for distinct types of sources has not yet been developed [107]. Another scalability challenge is the fact that sharing, updating, and maintaining KBs at the source and destination would necessitate additional storage costs and algorithm design [107]. Therefore, SemCom involves considerable computational as well as storage costs. Consequently, guaranteeing the scalability of SemCom remains a challenge [107].
#### V-C4 Knowledge Evolution Tracking
it is well-known that humans' knowledge evolves continuously throughout their lives. Regarding such temporal variations, modeling and keeping track of each piece of knowledge (e.g., aggregating new knowledge entities and relationships while discarding obsolete ones in the context of KGs [77]) is fundamentally important for improving SemCom efficiency and reducing the probability of error in semantic information delivery [77]. Nevertheless, the basic neuroscientific understanding of knowledge, knowledge evolution, and knowledge tracking are very difficult fundamental problems.
#### V-C5 Networked KB Updating and Upgrading
in semantic-enabled networked intelligence, a networked KB plays a predominant role in and is an enabler of intelligence [137]. Nonetheless, networked KB updating and upgrading is a huge challenge for massive communication objects - in 6G and beyond - with highly heterogeneous computation and storage capabilities [137]. Consequently, how to build a networked KB and how to efficiently update and upgrade it are major challenges for semantic-enabled networked intelligence in particular [137] and SemCom in general.
#### V-C6 Compatibility with Existing Communication Infrastructure
any SemCom realization effort must ensure that SemCom is compatible with the existing communication infrastructure [236]. To this end, extensive link-level simulations must be performed to verify the realistic end-to-end performance of SemCom [236].
#### Vi-A7 Generalizability of the Semantic Network
the semantic network has to be generalized to work with any dataset rather than only specific datasets [236]. This calls for DL-based semantic transceivers that can generalize widely over numerous datasets. Deep networks face fundamental limitations in this regard, as they are able to learn only stationary data distributions [360, 361, 362, 363].
#### Vi-A8 Coexistence of SemCom and Technical/Conventional Communication
future networks in 6G and beyond should support highly heterogeneous data transmission and multiple RANs. Accordingly, networks in 6G and beyond must be able to support not only users of SemCom, but also users of traditional BitCom, as the latter cannot be fully replaced by the former [236]. This calls for a robust coexistence design for networks in 6G and beyond, which must be able to deliver information both "as is" and modified with high semantic similarity (depending on the communication scenario) [236].
In light of the above-detailed realization challenges of SemCom, we refer the reader to the work in [364] that implements a real-time image SemCom system whose feasibility in actual wireless environments is demonstrated. Meanwhile, because _challenges are always opportunities_, some of the above-detailed fundamental and major challenges of SemCom are also big opportunities for novel future directions of SemCom, as discussed below.
## VII Future Directions of SemCom
In light of the fundamental and major challenges of SemCom that are detailed in Section VI, the developments in SemCom theory that are presented in Section V, and the many proposals of state-of-the-art SemCom algorithms that are surveyed in Section III, we offer some novel future directions for SemCom theory, algorithm, and realization. We begin with some novel future directions for SemCom theory.
### _Future Directions for SemCom Theory_
We point out (in no distinct order) some novel future directions related to - but not limited to - SemCom theory.
#### Vi-A1 (General) Semantic Information Theory
the following would be needed for a fundamental SIT: investigations of SIT with interference channels and a specific definition of semantic channel and its capacity among other things [76]. For a general fundamental SIT, on the other hand, the following research directions are crucial research topics: \(i)\) the theory of semantic security and robustness; \(ii)\) the theory of the tradeoff between semantic efficiency and generalization; \(iii)\) the theory of _semantic computability_[107].
#### Vi-A2 A General Framework for DL-Based SemCom
for faithful and interpretable DL-based SemCom, a generic framework that encompasses a suitable DNN architecture, proper performance assessment metrics, and the likes should be explored [76].
#### Vi-A3 The Tradeoff Between Semantic Extraction Accuracy and Communication Overhead
in DL-enabled SemCom, the training of accurate SE models hinges on complete KBs - at both senders and receivers - which requires sufficient storage. In case of adequate storage, each user's local KB has to be constantly updated as the communication context evolves [50]. To this end, ensuring that updates to the local KB of each communicating party can be shared in real-time is quite challenging, to the extent that it will cause significant communication overhead [50]. The communication overhead and the real-time update/sharing challenge will be particularly significant in scenarios in which there is a massive number of participating users that are geographically distant. Accordingly, devising insightful tradeoffs between SE accuracy and communication overhead is a crucial future research direction for SemCom.
#### Vi-A4 The Tradeoff Between SemCom Performance and Security
because SemCom transmits only semantically-encoded data - therefore, less data - and the decoding of semantic information depends on the intended receiver's KB, SemCom itself has also been regarded as a potential method of secure wireless communication [102, 103, 334]. When it comes to secure wireless communication, any potential eavesdropper can be made uncertain whether SemCom is being used by introducing AI into the PHY [50]. A PHY with interference, meanwhile, will harm the transmission quality of semantic information, hence the tradeoff between _signal covertness and signal quality_[50]. This calls for future research into an optimal tradeoff between SemCom performance and security. When it comes to ensuring security via SemCom, however, SemCom's secrecy performance under adversarial attacks remains largely unknown [311] and novel research is therefore called for.
#### Vi-A5 The Impact of Semantic KBs
semantic KBs at the source and destination directly impact the faithfulness of SemCom. To this end, the following are research questions worth addressing: \(i)\)_to what extent do shared KBs affect the SemCom process? \(ii)\) how can the semantic flow in KBs that are partially shared be modeled quantitatively?_[107]
We now continue to some novel future directions for SemCom algorithms.
### _Future Directions for SemCom Algorithms_
We call attention (in no particular order) to promising future directions related to - but not limited to - SemCom algorithms.
#### Vi-B1 Multi-User SemCom and Multi-User SemCom Signal Detection
multi-user SemCom signals exploit the diversity of the KBs that comprise different SemCom systems and can be transmitted using the same radio resources - such as frequency or time slot - so that considerable bandwidth can be saved in the interest of spectral-efficient 6G [76]. To this end, optimal multi-user SemCom signal detection and the complexity of message interpretation at the receiver are critical research themes [76].
#### Vi-B2 Multi-User Interpretation Algorithm Design
multiple users exploiting the diversity in their KBs might be able to utilize the same frequency or time slot to transmit semantic information over a wireless channel [76]. Nevertheless, this gain introduces the following two major challenges: \(i)\) since semantic information interpretation at a receiver in a multi-user environment should consider joint multi-user detection, channel decoding, and
semantic decoding, the complexity will be very high; \(ii)\) a receiver's KB should include various types of data in order to rigorously distinguish different users' messages [76]. To overcome these challenges, more effective yet efficient interpretation algorithms must be devised for the joint semantic-channel decoding of an intended user [76]. To this end, the design of a low-complexity multi-user interpretation algorithm is an essential future direction for SemCom [76].
#### Vii-B3 Joint Design of the Technical Level and Semantic Level
to understand the impact the data transmission rate has on SemCom and how much semantic information can be transmitted over a wireless channel, the technical level and semantic level of communication - per Weaver's three levels of communication, which is schematized in Fig. 1 - must be jointly designed [76]. In view of a joint design, the interference-resistant and robust SemCom (_IR\({}^{2}\) SemCom_) advocated for by the authors of [326] also requires that the technical and semantic levels of communication be jointly designed.
#### Vii-B4 Addressing the SNR Uncertainty Affecting DL-Based SemCom Systems
SNR uncertainty can arise from the uncertainty in the noise power, the inevitability of interference, and the variation in transmission power [50]. Accordingly, unlike in the fixed SNR approaches that are typically adopted to train current SemCom models, SNR uncertainty's effect on SemCom system performance has to be considered. In this vein, ensuring that the trained semantic model adapts to a variable SNR and understanding/quantifying its generalization ability require further investigation [50].
#### Vii-B5 The Design and Realization of Secure SemCom
secure SemCom can be realized using semantic noise [50] in view of deploying artificial noise to ensure secure wireless transmission [365, 366]. To this end, using a background KB to enhance semantic noise is a promising approach for designing and realizing secure SemCom.
#### Vii-B6 Blending SemCom and Semantic Caching
in contrast to traditional data caching, which mainly focuses on the hit rate of the data content, semantic caching focuses on whether the semantic information in the cache can be accurately inferred by the requester [50]. Determining which semantic information to cache requires prior knowledge, such as the popularity of particular semantic information, since a variety of semantic information could exist for the same data content [50]. Meanwhile, since the context of SemCom changes continually, the lifetime of semantic information is difficult to determine and new estimate refreshing algorithms for semantic caching are required [50].
#### Vii-B7 Reasoning in Implicit SemCom
although most existing SemCom works focus on the transmission of explicit semantic information, communication between users involves not only explicit information, but also rich implicit information that is difficult to express, recognize, or recover [229] (see also [82]). Since explicit semantic information is generally dominant and communication resources should be allocated proportionally between explicit and implicit semantic information, joint optimization algorithms need to be designed [50]. This calls for the design and realization of joint implicit and explicit SemCom with rigorous reasoning.
We now move on to some novel future directions for SemCom realization.
### _Future Directions for SemCom Realization_
We highlight (in no definite order) some useful future directions related to - but not limited to - SemCom realization.
#### Vii-C1 SemCom Implementation
state-of-the-art _system-on-chip_ technologies cannot meet the ultra-low latency requirements of wireless communication in 6G networks [76]. To overcome this major challenge, more advanced microelectronic and chip technologies are needed [76].
#### Vii-C2 The Impact of Inconsistent KBs at the Semantic Source and Destination
although most state-of-the-art SemCom works presume real-time knowledge sharing to make the KBs at the source and destination consistent, the KBs are naturally inconsistent [76]. Therefore, how semantic information can be communicated, shared, and inferred when the KBs are inconsistent are open issues for SemCom design as well as realization [76].
We now move on to our extensive discussion on the state-of-the-art research landscape of goal-oriented SemCom.
## VIII State-of-the-Art Research Landscape of Goal-Oriented SemCom
Goal-oriented SemCom aims to enable interested communicating parties to achieve a joint communication goal/task [50, 94]. Fig. 29 illustrates a system model for goal-oriented SemCom that is aimed at completing a joint communication goal/task. The effectiveness-level SemCom's transmitter transforms the source data into semantically encoded information via semantic representation, semantic filtering, and semantic encoding in a sequential process. This process is carried out using the source KB w.r.t. a given communication goal/task. W.r.t. a communication goal/task and a destination KB that largely shares common knowledge with a source KB, the receiver aims to take a desired action by acting on the output of the channel decoder via semantic decoding followed by semantic inference. The inference module's output - for instance, in self-driving autonomous cars - includes action execution instructions for accelerating and braking; changing the angle of the steering wheel and flashing the headlights; and responding to pedestrians, roadblocks, and traffic signal changes, among other actions [50]. Serving each of these goals at the receiver requires (possibly application/goal-tailored) SE followed by semantic filtering and semantic post-processing prior to source signal transmission [84], as depicted in Fig. 30. This figure shows the task/goal-oriented semantic signal processing framework put forward by the authors of [84].
The authors propose a framework that comprises pre-processing, SE,26 semantic filtering, semantic post-processing, and storage/transmission in a sequence. When it comes to storage/transmission scheduling, the pre-processing block first transforms the input signal - following a possible pre-filtering to reduce noise and/or interference - into an appropriate domain for efficient component detection/classification [84]. The
SE block employs this transformed input under a time-varying application/goal and generates the corresponding multi-graph description and attribute sets [84]. Thereafter, the semantic filtering block carries out semantic filtering per the local and time-varying goals to produce semantic data [84]. The goal-filtered semantic data are then fed to the semantic post-processing block (see Fig. 30). The semantic post-processing block finally schedules - while incorporating the (time-varying) local goals - either transmission or storage per the receiver's communication goals/tasks [84, 367]. In the context of this goal-oriented semantic signal processing framework, the principal signal processing problems encountered in IoT networks, such as data compression, data clustering, data estimation, and ML, are related to the paradigm of goal-oriented SemCom [106].
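To make the flow of Fig. 30 concrete, the following Python skeleton wires the described blocks together in order; every function body is a placeholder assumption, since the actual pre-processing, SE, filtering, and scheduling algorithms are application-specific and are not pinned down at this level of detail.

```python
def preprocess(raw_signal):
    """Optional pre-filtering plus transformation into a domain suited to component detection."""
    return raw_signal                                   # placeholder: identity transform

def semantic_extraction(signal, goal):
    """Placeholder SE: return a flat component list with goal-relevance attributes
    (stands in for the multi-graph description and attribute sets)."""
    return [{"label": c, "attributes": {"goal_relevance": 1.0 if c in goal else 0.1}}
            for c in signal]

def semantic_filter(components, goal, threshold=0.5):
    """Keep only the components whose attributes matter for the local, time-varying goal."""
    return [c for c in components if c["attributes"]["goal_relevance"] >= threshold]

def postprocess_and_schedule(semantic_data, receiver_goal):
    """Decide, per the receiver's goal, whether the goal-filtered data is transmitted or stored."""
    return ("transmit", semantic_data) if semantic_data else ("store", semantic_data)

# Toy end-to-end run with a hypothetical goal of detecting vehicles in a scene description.
raw = ["pedestrian", "car", "tree", "truck"]
goal = {"car", "truck"}
decision, payload = postprocess_and_schedule(
    semantic_filter(semantic_extraction(preprocess(raw), goal), goal), goal)
print(decision, [c["label"] for c in payload])
```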
When looking beyond conventional wireless connectivity, it is worth underscoring that communication is not an end in itself, but a means to achieving definite goals [75]. The end-to-end goal-oriented SemCom model that is proposed by the authors of [75] and depicted in Fig. 31 is therefore crucial. This figure comprises the following four building blocks.
* _Multiple continuous or discrete signals (stochastic processes);_ various (possibly correlated) signals illustrating time-varying real-world physical phenomena in space are observed by spatially distributed smart devices [75]. These devices are empowered by heterogeneous sensing, computational, and learning/inference capabilities [75].
* _Sampling and transmission of source samples:_ the devices transmit their samples - e.g., their observations, measurements, and updates - to one or more destinations, such as a fusion center or a control unit [75]. Their respective samples are generated using process-aware (non-uniform active) sampling in accordance with the communication characteristics, the semantic-aware applications' requirements, and source variability (in terms of changes, innovation rate, auto-correlation, and self-similarity) [75].
* _Preprocessing of source samples:_ prior to being encoded and scheduled for transmission over noisy and delay-prone (error-prone) communication channels, source samples could be preprocessed [75]. This preprocessing may incorporate quantization, compression, and feature extraction, among other processes [75]. For goal-oriented SemCom, meanwhile, scheduling is performed per the semantic information's value and priority, which are extracted from the input data [75].
* _Signal reconstruction:_ the input signals are eventually reconstructed from causally or non-causally received samples at their respective destinations to serve the purpose of an application such as collision avoidance, remote state estimation, control and actuation, situation awareness, and learning model training [75].
Fig. 29: System model for goal-oriented SemCom – adopted from [50, Fig. 6(c)].
Fig. 30: Goal-oriented semantic signal processing framework [84, Figure 12], [97, Fig. 6].
Apart from the aforementioned early works on goal-oriented SemCom, the authors of [220] put forward a goal-oriented SemCom architecture (see Fig. 32) with a _semantic-effectiveness plane_ [78, Fig. 3] whose functionalities address both the semantic and effectiveness problems. When it comes to these problems, and as schematized in Fig. 32, the architecture proposed in [220, Fig. 1(b)] supports not only information extraction, but also direct control.
We now proceed to state-of-the-art vision and tutorial works on goal-oriented SemCom.
### _Vision and Tutorial Works on Goal-Oriented SemCom_
We highlight below vision and tutorial works on goal-oriented SemCom, beginning with vision works.
#### V-A1 Vision Works on Goal-Oriented SemCom
the authors of [75] envision a communication paradigm shift that requires the goal-oriented unification of information generation, information transmission, and information reconstruction while taking into account multiple factors such as process dynamics, data correlation, signal sparsity, and semantic information attributes. The authors of [86] present a vision of a new paradigm shift that targets joint optimal information gathering, information dissemination, and decision-making policies in NCSs that incorporate the semantics of information based on the significance of the messages - but not necessarily the meaning of the messages, and possibly with a real-time constraint - w.r.t. the purpose of the data exchange. The authors of [49] present their vision of 6G wireless networks, wherein SemCom and goal-oriented SemCom are promising technologies that derive a crucial paradigm shift away from Shannon's information-theoretic framework. This paradigm shift underscores the fact that the success of task execution at a given destination (the effectiveness problem) is more of the essence than achieving error-free communication at the symbol level (the technical problem) [49].
To ensure the concrete representation and efficient processing of the semantic information, the authors of [84] introduce a formal graph-based semantic language and a goal filtering method for goal-oriented signal processing. Expanding upon this framework, the authors of [367] introduce a semantic information extraction framework wherein the extracted graph-based imperfect semantic signals can be improved for better fidelity and filtered for reduced semantic source noise. The authors of [368] put forward an architecture that makes it possible to learn the representation of semantic symbols for goal-oriented SemCom (effectiveness-level SemCom) and design objective functions, which would help train effective semantic encoders/decoders. The authors of [369] present the challenges and opportunities related to goal-oriented SemCom networks while advocating goal-oriented SemCom as an enabler of 6G use cases.

Fig. 31: End-to-end goal-oriented SemCom model [75, FIGURE 1].

Fig. 32: A goal-oriented SemCom architecture with semantic-effectiveness filtering – [220, Fig. 1]: RAT: radio access technology.
We now proceed to highlight the existing tutorial works on goal-oriented SemCom.
#### V-A2 Tutorial Works on Goal-Oriented SemCom
the authors of [50] provide a partial review of the fundamentals, applications, and challenges of goal-oriented SemCom. The authors of [121] offer a tutorial - for communication theorists and practitioners - that provides an introduction to state-of-the-art tools and advancements in goal-oriented SemCom. The authors of [106] offer a partial overview of recent research developments in goal-oriented SemCom while focusing on goal-oriented data compression for IoT applications. The authors of [124] review goal-oriented SemCom and semantic transformations.
Apart from the aforementioned vision and tutorial works on goal-oriented SemCom, the rapidly evolving state-of-the-art works on goal-oriented SemCom investigate numerous goal-oriented SemCom techniques, trends, and use cases such as task-oriented communication with digital modulation [108]; goal-oriented SemCom with AI tasks [212]; intent-based goal-oriented SemCom [370, 371]; multi-user goal-oriented SemCom [172]; and cooperative SemCom [372].
We now move on to state-of-the-art algorithmic developments in goal-oriented SemCom.
### _Algorithmic Developments in Goal-Oriented SemCom_
In this section, we detail state-of-the-art algorithms for single-user/single-task goal-oriented SemCom and multi-user/multi-task goal-oriented SemCom, starting with the former.
#### V-B1 Algorithms for Single-User/Single-Task Goal-Oriented SemCom
the authors of [373] aim to devise a joint sampling and communication scheme over a wireless multiple access channel to compute the empirical probability measure of a quantity of interest at the destination and put forward a goal-oriented SemCom strategy that encompasses both \((i)\) semantic-aware active sampling for goal-oriented signal reconstruction (at a fusion center) and \((ii)\) a transmission scheme to access the shared communication medium. The authors of [374], on the other hand, propose a semantic information-aware policy for a MIMO-OFDM (orthogonal frequency division multiplexing) system - whose goal is to classify images - that is employed to transmit images to multiple users. In this goal-oriented SemCom system that is made up of a CNN-based transmitter and a CNN-based receiver, a graph neural network that is fed modulated symbols is deployed to learn a precoding policy [374]. The policy is demonstrated to outperform regularized zero-forcing precoding and zero-forcing precoding when it comes to minimizing the bandwidth consumed by required data to attain an expected level of classification accuracy [374].
The authors of [212] underscore the premise that SemCom must take AI tasks into account and put forward a goal-oriented SemCom paradigm dubbed _SemCom paradigm with AI tasks_ (SC-AIT), which is schematized in Fig. 33. Inspired by this goal-oriented SemCom system (among others), the authors of [375] investigate a goal-oriented SemCom scheme for image classification task offloading in aerial systems in addition to proffering a joint SE-compression model. Their system is demonstrated to deliver an optimal SE under various channel states while taking into consideration the system's optimization objective that comprises the uplink transmission latency and the classification accuracy of the back-end target model [375]. Moreover, the authors of [376] proffer a curriculum learning-based SemCom framework for goal-oriented task execution. Building on this work, the authors of [377] introduce a goal-oriented SemCom model that incorporates a speaker and a listener who wish to jointly execute a set of tasks in a dynamic environment with the objective of jointly optimizing task execution time, transmission cost, inference cost, and resource efficiency. To solve this optimization problem, the authors of [377] provide an RL-based bottom-up curriculum learning framework that is shown to outperform traditional RL in terms of convergence time, task execution cost and time, reliability, and belief efficiency [377].
In view of emerging 6G applications such as AR/VR online role-playing game, the authors of [359] proffer a MEC structure for goal-oriented multimodal SemCom, wherein the proposed structure deploys a bidirectional caching task model (a realistic model for emerging AI-enabled applications). More specifically, the authors of [359] put forward an offloading scheme with cache enhancement to minimize a system's computation cost by formulating the cache-computational resource coordination problem as a mixed integer non-linear programming problem. As a result, they develop the content popularity-based DQN caching algorithm (CP-DQN) to make quasi-optimal caching decisions and the cache-computing coordination algorithm (CCCA) to achieve a tradeoff between using computing resources and caching [359]. The CP-DQN and CCCA algorithms are shown to perform optimally w.r.t. cache hit rate, cache reward, and system cost reduction [359]. On the other hand, many current works are geared toward designing advanced algorithms for high-performance goal-oriented SemCom [378]. Nonetheless, energy-hungry and efficiency-limited image retrieval and semantic encoding without considering user personality are major challenges for UAV image-sensing-driven goal-oriented SemCom scenarios [378]. To overcome these challenges, the authors of [378] devise an energy-efficient goal-oriented SemCom framework that uses a triple-based scene graph for image information. Meanwhile, the authors of [378] develop a personalized attention-based mechanism to achieve the differential weight encoding of triplets for crucial information following user preferences and ensure personalized SemCom. This scheme's ability to achieve personalized SemCom is corroborated by numerical results [378].
The authors of [379] leverage the IB framework (see Appendix C) to formalize a rate-distortion tradeoff between the encoded feature's informativeness and the inference performance and design a goal-oriented SemCom system. They incorporate variational approximation - named variational IB (VIB) - in their system to build a tractable upper bound w.r.t. IB optimization, which is computationally prohibitive for high-dimensional data. Meanwhile, their system is shown to achieve a better rate-distortion tradeoff than baseline methods while considerably reducing feature transmission latency in dynamic channel conditions [379]. Building on the work in [379], the authors of [380] put forward a goal-oriented SemCom strategy for multi-device cooperative edge inference wherein a group of edge devices transmit task-relevant features to an edge server for aggregation and processing by leveraging the IB principle [219] and the distributed IB (DIB) framework [381]. The IB principle and the DIB framework are exploited in [380] for feature extraction and distributed feature encoding, respectively. This IB- and DIB-based goal-oriented SemCom technique is shown to significantly reduce communication overhead in comparison with conventional data-oriented communication and, in turn, enable low-latency cooperative edge inference [380]. Building on the work in [380] and in [379], the authors of [382] study goal-oriented SemCom for edge video analytics by exploiting the deterministic IB principle [383] for feature extraction and the temporal entropy model for encoding. This goal-oriented SemCom scheme outperforms conventional data-oriented communication strategies in terms of its rate-performance tradeoff [382].
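As a hedged illustration of the VIB idea exploited in [379] (the layer sizes, feature dimension, and toy classification task below are assumptions, not the authors' architecture), the following sketch computes a loss that trades off a distortion term (task loss) against a rate term, i.e., a KL upper bound related to the mutual information between the input and the transmitted feature:

```python
# A minimal VIB-style sketch: distortion (task loss) + beta * rate (KL to a standard normal).
# The network sizes and data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBEncoder(nn.Module):
    def __init__(self, in_dim=20, feat_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, feat_dim)      # mean of the feature posterior q(z|x)
        self.logvar = nn.Linear(64, feat_dim)  # log-variance of the feature posterior

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)

def vib_loss(logits, labels, mu, logvar, beta=1e-2):
    distortion = F.cross_entropy(logits, labels)               # inference-performance term
    # Rate term: KL(q(z|x) || N(0, I)), an upper bound related to I(X; Z).
    rate = 0.5 * torch.mean(torch.sum(mu**2 + logvar.exp() - logvar - 1.0, dim=1))
    return distortion + beta * rate

encoder, classifier = VIBEncoder(), nn.Linear(8, 4)
x, y = torch.randn(32, 20), torch.randint(0, 4, (32,))
mu, logvar = encoder(x)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()           # reparameterization trick
loss = vib_loss(classifier(z), y, mu, logvar)
loss.backward()                                                # ready for an optimizer step
```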
Corresponding to the effectiveness level of Weaver's three levels of communication (see Fig. 1), the authors of [384] investigate a multi-agent partially observable Markov decision process (MA-POMDP), wherein agents not only interact with the environment but also communicate with each other over a noisy communication channel. In light of this multi-agent RL (MARL) framework, the authors of [384] demonstrate that the joint policy that is learned by all the agents is far better than the one that is obtained by treating the communication and principal MARL problems separately.
To minimize the amount of semantic information needing to be transmitted for a given task, many works on goal-oriented SemCom aim to transmit only task-relevant information without introducing any redundancy. Nevertheless, doing so with a JSCC-based design causes robustness issues in learning due to channel variation and JSCC, while mapping the source data directly to continuous channel input symbols poses compatibility issues with existing digital communication systems [108]. To address these challenges while examining the inherent tradeoff between the informativeness of the encoded representations and the robustness of the received representations to information distortion, the authors of [108] devise a goal-oriented SemCom system with digital modulation that is dubbed _discrete task-oriented JSCC_ (DT-JSCC). In DT-JSCC, the transmitter encodes the extracted input features into a discrete representation and transmits it to the receiver using digital modulation [108]. As for the DT-JSCC scheme's improved robustness to channel variation, the authors of [108] develop an IB-based encoding framework named _robust IB_ (RIB) and derive a tractable variational upper bound for the RIB objective function using variational approximation [108]. Consequently, DT-JSCC is shown to be robust against channel variation with better inference performance than low-latency baseline methods [108].
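The following generic sketch illustrates the discrete-representation-plus-digital-modulation idea (it is not the DT-JSCC implementation of [108]; the codebook, feature dimension, and QPSK mapping are assumptions): a continuous feature vector is quantized to its nearest codebook index, the index bits are QPSK-modulated, and the index is recovered at the receiver.

```python
# A generic sketch of mapping a feature vector to a discrete codebook index and QPSK symbols.
# Codebook, dimensions, and noise level are illustrative assumptions (not from [108]).
import numpy as np

rng = np.random.default_rng(3)
codebook = rng.normal(size=(16, 8))           # 16 candidate feature prototypes
feature = rng.normal(size=8)                  # encoder output for one input

index = int(np.argmin(np.linalg.norm(codebook - feature, axis=1)))    # nearest prototype
bits = np.array(list(np.binary_repr(index, width=4)), dtype=int)      # 4 index bits
symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])            # QPSK: 2 bits/symbol
noisy = symbols + 0.1 * (rng.normal(size=2) + 1j * rng.normal(size=2))

rx_bits = np.empty(4, dtype=int)
rx_bits[0::2] = (noisy.real < 0).astype(int)                          # hard demodulation
rx_bits[1::2] = (noisy.imag < 0).astype(int)
rx_index = int("".join(map(str, rx_bits)), 2)
print(index, rx_index)                        # the receiver recovers the codebook index
```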
The authors of [385] leverage the significance and effectiveness of messages to devise new goal-oriented sampling and communication policies as a means of generating and transmitting only the "most informative samples" for real-time tracking in autonomous systems. For these systems and the use cases mentioned, the results reported by the authors of [385] demonstrate that semantics-empowered policies considerably reduce real-time reconstruction error, the cost of actuation error, and the amount of ineffective updates [385, Sec. 5].
We now continue to state-of-the-art algorithms for multi-user/multi-task goal-oriented SemCom.
#### V-C2 Algorithms for Multi-User/Multi-Task Goal-Oriented SemCom
in single-user goal/task-oriented SemCom, either the trained model has to be updated once the task is altered or several trained models need to be stored to serve different tasks [358]. To overcome this limitation, the authors of [358] develop a unified DL-enabled SemCom system named _U-DeepSC_. U-DeepSC is a unified end-to-end framework that is designed to serve various tasks with multiple modalities [358, 386]. Moreover, the authors of [358] devise a multi-exit architecture in U-DeepSC to provide early-exit results for relatively simple tasks and design a unified codebook for feature representation to serve different tasks with reduced transmission overhead.
Aiming to exploit multimodal data from multiple users, the authors of [200] propose a multi-user task-oriented SemCom system for visual question answering (VQA), named _MU-DeepSC_. MU-DeepSC is a DL-enabled goal-oriented SemCom system whose transceiver is designed and optimized to jointly capture features from the correlated multimodal data of multiple users [200]. Consequently, MU-DeepSC is demonstrated to be more robust to channel variation than traditional communication systems, especially in low SNR regimes [200]. Building on the work in [200], the authors of [172] design and implement multi-user task-oriented SemCom systems for the transmission of both data with one modality and data with multiple modalities. The authors consider image retrieval / machine translation for their single-modal task and VQA for their multimodal task [172]. The authors of [172] develop three Transformer-based transceivers for their systems, which are dubbed _DeepSC-IR_, _DeepSC-MT_, and _DeepSC-VQA_, that share the same transmitter structure but have different receiver structures [172]. When these transceivers are trained jointly in an end-to-end manner using the training algorithms in [172], they are corroborated to outperform traditional transceivers, especially in low SNR regimes [172].
Apart from the above-detailed algorithmic developments in goal-oriented SemCom, useful algorithmic developments have also been made in goal-oriented SemCom resource allocation, which we discuss below.
### _Algorithmic Developments in Goal-Oriented SemCom Resource Allocation_
The authors of [387] consider a multi-user goal-oriented SemCom system at the wireless edge and exploit the Lyapunov
optimization framework to devise a joint computation and transmission management strategy for their overall system. More specifically, the authors of [387] develop a multi-user minimum energy resource allocation strategy that ensures energy-efficient optimal resource allocation for edge devices and edge server. This resource allocation strategy's simulation results demonstrate there is an edge-ML trade-off between energy, latency, and accuracy [387]. Extending the work in [387], the authors of [388] investigate the trade-offs between energy, latency, and accuracy in a goal-oriented SemCom-enabled edge learning system. More specifically, they develop two resource optimization strategies (that also exploit the Lyapunov stochastic optimization framework) to jointly optimize the communication parameters and the computation resources while aiming for an optimal trade-off between energy, latency, and accuracy for the edge learning task [388]. These proposed strategies are corroborated - by extensive simulations - to provide adaptation capabilities and to be effective for edge learning with goal-oriented SemCom [388].
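The following minimal sketch illustrates the Lyapunov drift-plus-penalty principle underlying such strategies (the action set, latency/energy values, and target are hypothetical; it is not the resource allocation scheme of [387] or [388]): in each slot, the controller picks the action minimizing a weighted energy penalty plus the virtual queue times the latency, and the virtual queue tracks the latency-constraint violation.

```python
# A toy Lyapunov drift-plus-penalty controller; all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
actions = [(1.0, 8.0), (3.0, 4.0), (6.0, 1.5)]        # (energy cost, nominal latency)
latency_target, V = 5.0, 10.0                          # constraint and energy-vs-queue weight

Q, energy_log, latency_log = 0.0, [], []
for t in range(10_000):
    jitter = rng.normal(scale=0.5)                     # observed per-slot fluctuation
    costs = [V * e + Q * (l + jitter) for (e, l) in actions]
    e, l = actions[int(np.argmin(costs))]
    l += jitter
    Q = max(Q + l - latency_target, 0.0)               # virtual queue update
    energy_log.append(e)
    latency_log.append(l)

print(f"avg energy {np.mean(energy_log):.2f}, avg latency {np.mean(latency_log):.2f}")
```

Increasing V places more weight on energy and lets the virtual queue grow larger before latency is reined in, which is the kind of energy-latency trade-off reported in [387] and [388].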
When it comes to personalized saliency-based goal-oriented SemCom in UAV image sensing scenarios, the authors of [378] investigate SemCom personalization and its corresponding optimal resource allocation. For the former, the authors theoretically analyze the effects of wireless fading channels on SemCom, and for the latter, they put forward a game-based model for multi-user resource allocation (to efficiently utilize UAV resources). The resource allocation framework is confirmed to improve UAV resource utilization [378].
The authors of [389] propose a multi-user goal-oriented SemCom framework that aims to enable users to effectively extract, compress, and transmit the semantics of input data to the edge server. The edge server then executes the intelligence task based on the received semantics and delivers results to users [389]. Meanwhile, the authors of [389] also propose a new approach dubbed _adaptable semantic compression_ (ASC) to compress the extracted semantics based on semantic importance, which helps to reduce the communication burden. However, ASC faces the following problem in a multi-user setting: a higher compression ratio requires fewer channel resources but causes considerable semantic distortion, while a lower compression ratio calls for more channel resources and hence results in transmission failure due to the delay constraint (especially in delay-intolerant systems) [389]. In light of this problem, the authors of [389] formulate a resource allocation and compression ratio optimization problem that aims to maximize the _success probability of tasks_27 under bandwidth and power constraints. In addressing this non-convex problem, the authors of [389] develop two algorithms that achieve greater task performance gains than the baseline algorithms do while significantly reducing the volume of data transmitted [389, Sec. VI].
Footnote 27: The success probability of tasks is defined to quantify the performance of goal-oriented SemCom systems and is given by [390, eq. (11)].
We now continue to major state-of-the-art trends and use cases of goal-oriented SemCom.
## IX Major State-of-the-art Trends and Use Cases of Goal-Oriented SemCom
In this section, we present the major state-of-the-art trends and use cases related to goal-oriented SemCom, beginning with the major trends.
### _Major Trends of Goal-Oriented SemCom_
We discuss the following major trends of goal-oriented SemCom: _goal-oriented SemCom with AI tasks_[212], _neuro-symbolic AI for intent-based goal-oriented SemCom_[370], _multi-user goal-oriented SemCom_[172], and _cooperative goal-oriented SemCom_[372]. We start our discussion with goal-oriented SemCom with AI tasks.
#### Ix-A1 Goal-Oriented SemCom with AI Tasks
the authors of [212] were the first to assert that semantic information is closely related to the target AI task. This assertion is indeed reasonable when one considers the detection of a dog from a transmitted image that comprises both a dog and a cat (see [212, Fig. 1]), since the information related to the cat is no longer relevant. For this goal-oriented SemCom scenario, the authors of [212] put forward a goal-oriented SemCom system dubbed goal-oriented SemCom with AI tasks, which is shown in Fig. 33. Fig. 33 shows the technical and semantic levels - per Weaver's vision (shown in Fig. 1) - and a newly proposed effectiveness level. In their effectiveness level design, the authors propose to minimize the redundancy in the semantic information based on the contribution of raw information to the successful execution of AI tasks by discarding the information that is irrelevant to the success of AI tasks. This process can be conducted per the knowledge stored in the source KB that can be designed to account for the relationships between the AI tasks and the semantic information [212].
Once encoded using a semantic encoder, the semantic information is then channel-encoded and modulated prior to its transmission over a wireless channel. The received semantic information, which may be contaminated by physical noise and interference, is then demodulated and channel-decoded, as seen in Fig. 33. This information is fed to a semantic-level receiver (as shown in Fig. 33), whose semantic decoder is employed to recover the transmitted semantic information in accordance with the destination KB. The destination KB can _synchronize_ its knowledge elements with those of the source KB through a shared KB that can either be stored in an authoritative third party or be a virtual KB [212].
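A minimal sketch of this effectiveness-level filtering (the source KB entries and detection labels below are hypothetical, and it is not the implementation of [212]) keeps only the attributes that the KB deems relevant to the AI task before semantic encoding:

```python
# A toy, KB-driven filter that discards task-irrelevant semantic information.
source_kb = {
    "detect_dog": {"dog"},                   # task -> task-relevant classes (assumed KB)
    "count_pets": {"dog", "cat"},
}

def filter_semantics(detections, task):
    """Keep only the extracted attributes that contribute to the success of the AI task."""
    relevant = source_kb.get(task, set())
    return [d for d in detections if d["label"] in relevant]

extracted = [{"label": "dog", "box": (10, 20, 80, 90)},
             {"label": "cat", "box": (120, 40, 60, 70)},
             {"label": "tree", "box": (0, 0, 50, 200)}]

print(filter_semantics(extracted, "detect_dog"))     # only dog-related information remains
```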
We now proceed to neuro-symbolic AI for intent-based goal-oriented SemCom [370].
#### Ix-A2 Neuro-Symbolic AI for Intent-Based Goal-Oriented SemCom
in contrast with the state-of-the-art works on goal-oriented SemCom that characteristically lack data explainability, the work in [370] leverages _neuro-symbolic AI_ (NeSy AI) [391, 392] and generative flow networks (GFlowNets) [393] to introduce a goal-oriented SemCom model named _NeSy AI_ that aims to bring intelligence to the end nodes and is depicted in Fig. 34. As is shown in Fig. 34, NeSy AI's transmitter comprises an attribute extraction module, a state description module, and an encoder. When it comes to the latter two components, the state description module is learnable using
neural AI and grounded in real logic according to the semantic language rules that are embedded in symbolic AI, and the encoder is realizable using neural AI and translates the states to (optimal) physical messages [370]. The receiver, on the other hand, is made up of a decoder and a semantic state extraction module, as shown in Fig. 34. As can be seen in Fig. 34, the decoder (which is designed using neural AI) transforms the received message into an estimated state description that is fed to the SE module (which is also designed using neural AI) that effectively recovers the transmitted semantic states in accordance with the reference semantic language rules (which are realizable using symbolic AI) [370].
Fig. 34: Intent-based goal-oriented SemCom (NeSy AI) [370, Fig. 1].
Fig. 33: Goal-oriented SemCom with AI tasks – modified from [212, Fig. 2].
In NeSy AI, the symbolic part is elaborated by the KB, and the reasoning - from learning the probabilistic structure that generates the data - is enabled by the GFlowNet [393]. The authors of [370], thus, formulate an optimization problem for causal structure learning - from the data and the optimal encoder/decoder functions - whose simulation results indicate it needs to transmit considerably fewer bits than a conventional communication system to reliably convey the same meaning [370]. Building on the work in [370] and NeSy AI, the authors of [371] introduce a goal-oriented SemCom framework named _emergent semantic communication_ (ESC). ESC is made up of a signaling game for emergent language design and a NeSy AI approach for causal reasoning [371]. To design an emergent language that is compositional and semantic-aware, the authors of [371] solve the signaling game - using alternating maximization between the transmit and receive nodes' utilities - and characterize the generalized Nash equilibrium. The authors also deploy GFlowNet [393] to induce causal reasoning at the nodes and prove analytically that ESC systems can encode data with minimal bits in comparison with a classical system that does not employ causal reasoning [371].
We now move on to our discussion of multi-user goal-oriented SemCom systems [172].
#### V-B3 Multi-User Goal-Oriented SemCom
the authors of [172] devise a multi-user goal-oriented SemCom system, which is depicted in Fig. 35, to extend the benefits of single-user, single-modal goal-oriented SemCom to multiple users. Their proposed system is a multi-user MIMO system that is made up of a receiver equipped with \(M\) antennas and \(K\) single-antenna transmitters [172]. Each of the transmitters consists of a DL-based semantic encoder and a JSC encoder (both of which are learnable in an end-to-end fashion), and accepts image, text, video, or speech signals as input [172]. The receiver, on the other hand, can either be a _single-modal multi-user semantic receiver_ and enable single-modal multi-user data transmission or be a _multimodal multi-user semantic receiver_ and enable multimodal multi-user data transmission, as can be seen in Fig. 35 [172].
Single-modal multi-user transmission means that each user independently transmits its extracted semantic information to carry out its task [172]. Multimodal multi-user transmission, on the other hand, means that the data from different users are semantically complementary [172]. Each of these goal-oriented SemCom systems relies on the linear minimum MSE (L-MMSE) detector to recover signals with estimated CSI [172]. Each user's JSC decoder is designed/trained to decompress the received semantic information following L-MMSE detection while mitigating the effects of channel distortion and inter-user interference [172]. When the JSC decoder is used in sequence with the semantic decoder to form a single-modal multi-user semantic receiver, as schematized in Fig. 35, each user's semantic information is exploited to perform different tasks independently [172]. This single-modal multi-user goal-oriented SemCom system can be used for the joint performance of an image retrieval task and a machine translation task, as in [172, Fig. 2]. Moreover, as is shown in Fig. 35, the final task that corresponds to a multimodal semantic receiver is completed by merging the different users' semantic information [172]. This multi-user goal-oriented SemCom system is useful for realizing a multimodal multi-user goal-oriented SemCom system with a _DeepSC-VQA_ transceiver, as shown in [172, Fig. 3].
Fig. 35: Multi-user goal-oriented SemCom systems [172, Fig. 1]: JSC – joint source-channel.
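As a hedged sketch of the L-MMSE detection step used by the multi-user receivers described above (assuming perfect CSI, unit-power QPSK symbols, and toy dimensions; it is not the DeepSC transceiver of [172]), the filter \(W=(H^{H}H+\sigma^{2}I)^{-1}H^{H}\) can be applied as follows:

```python
# A minimal numpy sketch of L-MMSE multi-user detection with assumed perfect CSI.
import numpy as np

rng = np.random.default_rng(0)
M, K, sigma2 = 8, 3, 0.1                      # receive antennas, users, noise variance
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)   # channel
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)     # QPSK symbols
n = np.sqrt(sigma2 / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))
y = H @ x + n                                 # received superposition of all users

W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(K)) @ H.conj().T         # L-MMSE filter
x_hat = W @ y                                 # per-user estimates fed to the JSC decoders
print(np.round(x, 2), np.round(x_hat, 2))
```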
The authors of [372] build on the multimodal multi-user goal-oriented SemCom system that is depicted in Fig. 35 and put forward a goal-oriented SemCom system named cooperative goal-oriented SemCom, which we discuss below.
#### V-A4 Cooperative Goal-Oriented SemCom
it is proposed for IoV applications such as pedestrian detection, traffic analysis, and vehicle tracking [372]. A general cooperative goal-oriented SemCom architecture is shown in Fig. 36. As can be seen in Fig. 36, cooperative goal-oriented SemCom comprises a semantic encoder and a cooperative semantic decoder, a JSC encoder and a cooperative JSC decoder, and a semantic-driven cooperative task performer. Interestingly, the correlation among users is _pre-learned_ and _embedded_ in the cooperative goal-oriented SemCom architecture, including the encoders at the transmitters and the cooperative modules at the receiver [372]. The different modules' functions are itemized below.
* _Semantic encoder:_ it is designed to extract semantic information from the source data with a focus on meaning and goal-relevance [372].
* _Cooperative semantic decoder:_ it recovers the source data according to the specific goals set while leveraging semantic-level correlation among users [372].
* _JSC encoder:_ it is applied to encode the extracted semantic information (the output of the semantic encoder) as channel input symbols [372].
* _Cooperative JSC decoder:_ it is realized/trained to jointly recover the transmitted semantic information of multiple users, as depicted in Fig. 36.
* _Semantic-driven cooperative task performer:_ it is used to achieve specific tasks/actions while adapting its structure to a specific task based on the semantic information recovered from multiple users (its input) [372]. It also leverages semantic-level correlation and distinctive user attributes while cooperatively performing a task by combining information provided by different users [372].
The cooperative goal-oriented SemCom scheme shown in Fig. 36 requires that knowledge be shared between the transmitters and a receiver and that each user have a background KB [372]. The background KBs are presumed to be shared between users and the server by jointly training the whole DNN offline with a common dataset [372].
We now move on to major use cases of goal-oriented SemCom.
### _Major Use Cases of Goal-Oriented SemCom_
Similar to H2H SemCom, H2M SemCom, and M2M SemCom [104], the major use cases of goal-oriented SemCom can be classified as _H2H goal-oriented SemCom_, _H2M goal-oriented SemCom_, and _M2M goal-oriented SemCom_.28 Major use cases of M2M goal-oriented SemCom include autonomous transportation, consumer robotics, environmental monitoring, telehealth, smart factories, and NCSs, which are highlighted below. We begin our discussion with autonomous transportation, consumer robotics, environmental monitoring, and telehealth.
Fig. 36: An architecture for a general cooperative goal-oriented SemCom – modified from [372, Fig. 2]: JSC – joint source and channel.
#### Ix-A1 Autonomous Transportation, Consumer Robotics, Environmental Monitoring, and Telehealth
semantic-empowered communication will support the scalability of future massive networked intelligent systems; significantly enhance network resource usage, energy consumption, and computational efficiency; and thus pave the way for the design of next-generation real-time data networking [75]. This type of semantic networking will make it possible to transmit only informative data samples and convey only the information that is relevant, useful, and valuable for achieving its defined goals [75]. Accordingly, goal-oriented SemCom will provide the foundational technology for dozens of socially beneficial services, including autonomous transportation, consumer robotics, environmental monitoring, and telehealth [75].
We now proceed to our brief discussion of smart factories.
#### Ix-A2 Smart Factories
in smart factories of the future, it will be crucial to limit the operation of machines to performing specific actions [76]. In this vein, goal-oriented SemCom can be designed and employed to convey only the semantic information of the control signals [76]. Thus, smart factories can reduce their communication cost and improve their operational efficiency by deploying goal-oriented SemCom [76].
We now continue with our brief discussion of NCSs.
#### Ix-A3 NCSs
emerging and futuristic NCSs are a major use case of goal-oriented SemCom and require the joint optimization of the communication and control objectives [86]. State-of-the-art communication technologies, on the other hand, are agnostic to control objectives and pursue communication network and control system optimization separately, which is likely to yield suboptimal solutions by narrowing the solution space of the problem in both areas [86]. Accordingly, massive-scale NCSs can be enabled by unifying control and communication techniques under the umbrella of semantics of information. To this end, fundamentally re-designing the techniques for information generation, transmission, transport, and reconstruction to optimize the performance of applications that would utilize this information [86] is of paramount importance.
To summarize, goal-oriented SemCom also has many other applications and use cases, including fault detection [394]. We now move on to our discussion of state-of-the-art theories of goal-oriented SemCom.
## X Theories of Goal-Oriented SemCom
In this section, we discuss major developments in goal-oriented SemCom theory. We detail below the rate-distortion approach to goal-oriented SemCom [332, 333]; the extended rate-distortion approach to goal-oriented SemCom [395]; and goal-oriented quantization (GOQ) [396, 397]. We begin with the rate-distortion approach to goal-oriented SemCom and, in particular the role of fidelity in goal-oriented SemCom [332].
### _Rate-Distortion Approach to Goal-Oriented SemCom_
The authors of [332] develop a theory that asserts that choosing the type of individual distortion measures (or context-dependent fidelity criteria) per the application/task requirements can considerably affect the semantic source's remote reconstruction. The authors develop their theory by adopting the problem setup proposed by the authors of [335], which is schematized in Fig. 28. The authors of [332] consider a memoryless source that is represented by the tuple \((\mathbf{x},\mathbf{z})\) and has a joint probability density function (PDF) \(p(x,z)\) in the product alphabet space \(\mathcal{X}\times\mathcal{Z}\). Here, \(\mathbf{x}\) is the source's semantic or intrinsic information (directly unobservable) and \(\mathbf{z}\) is the noisy observation of the source at the encoder side [332].
The system model that the authors of [332] adopted in the above-mentioned setup is shown in [332, Fig. 1]. Per [332, Fig. 1] and its accompanying assumption, an information source is a sequence of \(n\) independent and identically distributed (i.i.d.) RVs \((\mathbf{x}^{n},\mathbf{z}^{n})\), and the PDFs \(p(x)\) and \(p(z|x)\) are assumed to be known [332]. Meanwhile, the encoder (\(E\)) and the decoder (\(D\)) are defined through the following mappings [332, eq. (1)]:
\[f^{E}:\ \mathcal{Z}^{n}\rightarrow\mathcal{W} \tag{36a}\] \[g_{o}^{D}:\ \mathcal{W}\rightarrow\hat{\mathcal{Z}}^{n} \tag{36b}\] \[g_{s}^{D}:\ \mathcal{W}\rightarrow\hat{\mathcal{X}}^{n}, \tag{36c}\]
where \(\mathcal{W}\in[M]\), \(g_{o}^{D}\) is the observation decoder, and \(g_{s}^{D}\) is the semantic information decoder. If one now considers two per-letter distortion measures that are defined by \(d_{s}:\mathcal{X}\times\mathcal{X}\rightarrow[0,\infty)\) and \(d_{o}:\mathcal{Z}\times\hat{\mathcal{Z}}\rightarrow[0,\infty)\), the corresponding average per-symbol distortions are then given by [332, eqs. (2) and (3)]:
\[d_{s}^{n}(x^{n},\hat{x}^{n}):= \frac{1}{n}\sum_{i=1}^{n}d_{s}(x_{i},\hat{x}_{i}) \tag{37a}\] \[d_{o}^{n}(z^{n},\hat{z}^{n}):= \frac{1}{n}\sum_{i=1}^{n}d_{o}(z_{i},\hat{z}_{i}). \tag{37b}\]
Using (37a) and (37b), the fidelity criteria of the semantic information and the observable information are defined as [332]
\[\Delta_{s}:=\mathbb{E}\{d_{s}^{n}(x^{n},\hat{x}^{n})\}\ \text{and}\ \ \Delta_{o}:=\mathbb{E}\{d_{o}^{n}(z^{n},\hat{z}^{n})\}, \tag{38}\]
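A small numerical sketch of the per-symbol distortions in (37a)-(37b) and Monte Carlo estimates of the fidelity criteria in (38) is given below; the distortion functions, observation noise, and stand-in reconstructions are assumptions chosen purely for illustration.

```python
# Toy Monte Carlo estimates of Delta_s and Delta_o from eqs. (37a)-(38); all models assumed.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 64, 2000
d_s = lambda x, xh: (x - xh) ** 2            # per-letter semantic distortion (assumption)
d_o = lambda z, zh: np.abs(z - zh)           # per-letter observation distortion (assumption)

delta_s, delta_o = 0.0, 0.0
for _ in range(trials):
    x = rng.normal(size=n)                   # semantic (intrinsic) information x^n
    z = x + 0.3 * rng.normal(size=n)         # noisy observation z^n at the encoder
    z_hat = np.round(z, 1)                   # stand-in reconstruction of the observation
    x_hat = z_hat                            # stand-in semantic estimate
    delta_s += np.mean(d_s(x, x_hat)) / trials    # eq. (37a), averaged over realizations
    delta_o += np.mean(d_o(z, z_hat)) / trials    # eq. (37b), averaged over realizations

print(f"Delta_s ~ {delta_s:.4f}, Delta_o ~ {delta_o:.4f}")
```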
respectively. Using (37a)-(38), we state the following definition of the achievable rates and the infimum of all achievable rates.
**Definition 22** ([332, Definition 1]).: _For two distortion levels \(D_{o},D_{s}\geq 0\), a rate \(R\) is said to be \((D_{o},D_{s})\)-achievable if, for an arbitrary \(\epsilon>0\) and a sufficiently large \(n\), there exists a semantic-aware lossy source code \((n,M,\Delta_{o},\Delta_{s})\) with \(M\leq 2^{n(R+\epsilon)}\) such that \(\Delta_{o}\leq D_{o}+\epsilon\) and \(\Delta_{s}\leq D_{s}+\epsilon\). Furthermore, considering that sequences of distortion functions \(\{(d_{o}^{n},d_{s}^{n}):n=1,2,\ldots\}\) are given, then [332, eq. (5)]_
\[R(D_{o},D_{s}):=\inf\{R:(R,D_{o},D_{s})\ \text{is achievable}\}. \tag{39}\]
Per Definition 22, the information-theoretic characterization of (39) is captured by the following lemma.
**Lemma 1** ([332, Lemma 1]).: _For a given \(p(x)\) and \(p(z|x)\), the semantic rate distortion function of the system model in [332, Fig. 1] can be expressed as [332, eq. (6)]_
\[R(D_{s},D_{o})=\inf_{q(\hat{z},\hat{x}|z)}I(\mathbf{z};\hat{\mathbf{z}},\hat{\mathbf{x}}) \tag{40a}\] \[\text{s.t.}\ \ \mathbb{E}\{\hat{d}_{s}(\mathbf{z},\hat{\mathbf{x}})\}\leq D_{s} \tag{40b}\] \[\mathbb{E}\{d_{o}(\mathbf{z},\hat{\mathbf{z}})\}\leq D_{o}, \tag{40c}\]
_where \(\hat{d}_{s}(z,\hat{x}):=\sum_{x\in\mathcal{X}}p(x|z)d_{s}(x,\hat{x})\), \(D_{s}\in[0,\infty]\), \(D_{o}\in[0,\infty]\), and [332, eq. (7)]_
\[I(\mathbf{z};\hat{\mathbf{z}},\hat{\mathbf{x}}):=\mathbb{E}\bigg{\{}\log \bigg{(}\frac{q(\hat{\mathbf{z}},\hat{\mathbf{x}}|\mathbf{z})}{\nu(\hat{\mathbf{z}},\hat{\mathbf{ x}})}\bigg{)}\bigg{\}}. \tag{41}\]
The constrained optimization problem in (40a)-(40c) can be written as an unconstrained optimization problem through the _Lagrange duality theorem_ as follows [332, eq. (15)]:
\[R(D_{s},D_{o})=\max_{s_{1},s_{2}\leq 0}\ \min_{q(\hat{z},\hat{x}|z)\geq 0,\ \sum_{\hat{z},\hat{x}}q(\hat{z},\hat{x}|z)=1}\Big{\{}I(\mathbf{z};\hat{\mathbf{z}},\hat{\mathbf{x}})-s_{1}\big{(}\mathbb{E}\{\hat{d}_{s}(\mathbf{z},\hat{\mathbf{x}})\}-D_{s}\big{)}-s_{2}\big{(}\mathbb{E}\{d_{o}(\mathbf{z},\hat{\mathbf{z}})\}-D_{o}\big{)}\Big{\}}, \tag{42}\]
where \(s_{1},s_{2}\leq 0\) are the Lagrange multipliers. The authors of [332] solve (42) and state the following main result.
**Theorem 5** ([332, Theorem 1]).: _Given that \(p(x)\) and \(p(z|x)\) are known, the underneath parametric solutions follow for the optimization problem in (40a)-(40c):_
* _If_ \(s_{1},s_{2}<0\)_, the implicit optimal form of the minimizer that attains the minimum is given by_ _[_332_, eq. (_16_)]__. In addition, the optimal parametric solution when_ \(R(D_{s}^{*},D_{o}^{*})>0\) _is expressed by_ _[_332_, eq. (_17_)]__._
* _If_ \(s_{1}<0,s_{2}=0\)_, and_ \(R(D_{s}^{*},D_{o}^{*})>0\)_,_ \(R(D_{s}^{*},D_{o}^{*})\) _is given by_ _[_332_, eq. (_20_)]__._
* _If_ \(s_{1}=0,s_{2}<0\)_, and_ \(R(D_{s}^{*},D_{o}^{*})>0\)_,_ \(R(D_{s}^{*},D_{o}^{*})\) _is characterized by_ _[_332_, eq. (_21_)]__._
* _If_ \(s_{1}=s_{2}=0,\,R(D_{s}^{*},D_{o}^{*})=0\)_._
Proof.: The proof is given in [332, Appendix A].
Theorem 5 is useful for deriving analytical expressions of the constrained optimization problem in (40a)-(40c) and constructing generalizations of the _Blahut-Arimoto algorithm_ (BA algorithm) [336].
We now move on to discuss an extended rate-distortion approach to goal-oriented SemCom [395].
### _Extended Rate-Distortion Approach to Goal-Oriented SemCom_
The authors of [395] put forward a JSCC-based goal-oriented SemCom system that incorporates a semantic reconstruction scheme while focusing on predicting the precision and generalizability of multiple goals/tasks. This goal-oriented SemCom system is composed of a JSCC encoder, a quantizer, a wireless channel, a JSCC decoder, and a network of AI tasks at the receiver [395, Fig. 2]. When the system is fed input \(X\), which denotes an RV pertaining to the source image space, let an RV \(Y\) be the desired output of an AI task. As can be seen in [395, Fig. 2], the JSCC encoder maps the input to semantic representations that are subsequently quantized by the quantizer to minimize the transmission cost. The quantized symbols \(Z\) are then transmitted over a wireless channel to the receiver [395]. At the receiver, the JSCC decoder maps the noisy received symbols to the reconstructed image \(\hat{X}\). Eventually, the AI task network uses \(\hat{X}\) as an input and produces its prediction \(\hat{Y}\). This overall goal-oriented SemCom scheme is formulated as an extended rate-distortion problem [395], and its analytical characterization is presented below.
To ensure that the reconstructed images can perform the given AI task properly, IB distortion [219] must be minimized. To this end, the IB distortion between \(x\) and \(\hat{x}\) amounts to the Kullback-Leibler (KL) divergence \(D_{KL}(p(y|x)||p(y|\hat{x}))\), which is given by [395, eq. (1)]
\[d_{IB}(x,\hat{x}):=\sum_{y\in\mathcal{Y}}p(y|x)\log\frac{p(y|x)}{p(y|\hat{x})}, \tag{43}\]
where \(\mathcal{Y}\) is the alphabet of \(Y\). For (43), \(D_{IB}(X,\hat{X}):=\mathbb{E}\big{\{}d_{IB}(x,\hat{x})\big{\}}\) is the conditional mutual information \(I(X;Y|\hat{X})\) [398] and is defined as [395, eq. (2)]
\[D_{IB}(X,\hat{X})=\sum_{x\in\mathcal{X}}\sum_{\hat{x}\in\hat{\mathcal{X}}}\sum_{y\in\mathcal{Y}}p(x,\hat{x})p(y|x)\log\frac{p(y|x)}{p(y|\hat{x})}, \tag{44}\]
where \(\mathcal{X}\), \(\hat{\mathcal{X}}\), and \(\mathcal{Y}\) are the alphabets of \(X\), \(\hat{X}\), and \(Y\), respectively. The definition in (44) then leads to the following theorem.
**Theorem 6** ([395, Theorem 1]).: \(D_{IB}(X,\hat{X})\) _as defined in (44) can also be expressed as [395, eq. (3)]_
\[D_{IB}(X,\hat{X})=I(X;Y)-I(\hat{X};Y). \tag{45}\]
Proof.: The proof is given in [395, Appendix A].
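The identity in (45) can also be checked numerically on a randomly drawn toy source, as in the following sketch; the alphabet sizes and distributions are arbitrary assumptions, and \(p(y|\hat{x})\) is formed by marginalization, consistent with \(\hat{X}\) being produced from \(X\).

```python
# Numerical check of Theorem 6: D_IB(X, Xhat) equals I(X;Y) - I(Xhat;Y) for a toy source.
import numpy as np

rng = np.random.default_rng(1)
nx, nxh, ny = 4, 3, 2
p_x = rng.dirichlet(np.ones(nx))                 # p(x)
p_xh_x = rng.dirichlet(np.ones(nxh), size=nx)    # p(xhat | x), one row per x
p_y_x = rng.dirichlet(np.ones(ny), size=nx)      # p(y | x), one row per x

p_x_xh = p_x[:, None] * p_xh_x                   # joint p(x, xhat)
p_xh = p_x_xh.sum(axis=0)
p_y_xh = (p_y_x[:, None, :] * p_x_xh[:, :, None]).sum(axis=0) / p_xh[:, None]  # p(y | xhat)

# Left-hand side: eq. (44).
D_IB = np.sum(p_x_xh[:, :, None] * p_y_x[:, None, :] *
              np.log(p_y_x[:, None, :] / p_y_xh[None, :, :]))

# Right-hand side: I(X;Y) - I(Xhat;Y), eq. (45).
def mutual_info(p_a, p_b_a):
    p_ab = p_a[:, None] * p_b_a
    p_b = p_ab.sum(axis=0)
    return np.sum(p_ab * np.log(p_ab / (p_a[:, None] * p_b[None, :])))

print(D_IB, mutual_info(p_x, p_y_x) - mutual_info(p_xh, p_y_xh))   # the two values coincide
```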
The expression in (45) intuitively illustrates the reduction of useful information [395]. To improve the generalizability among different AI tasks, one must also minimize the reconstruction distortion \(D_{RD}(X,\hat{X})\) that is equated as [395, eq. (4)]
\[D_{RD}(X,\hat{X}):=\sum_{x\in\mathcal{X}}\sum_{\hat{x}\in\hat{\mathcal{X}}}p(x, \hat{x})d_{RD}(x,\hat{x}), \tag{46}\]
where \(d_{RD}(x,\hat{x})=(x-\hat{x})^{2}\)[395, eq. (5)]. Meanwhile, the authors of [395] take into account the natural tradeoff between \(D_{IB}(X,\hat{X})\) and \(D_{RD}(X,\hat{X})\), and define the semantic distortion measurement \(D_{S}(X,\hat{X})\) as [395, eq. (6)]
\[D_{S}(X,\hat{X}):=D_{RD}(X,\hat{X})+\beta D_{IB}(X,\hat{X}), \tag{47}\]
where \(\beta\) controls the tradeoff between the AI task's prediction accuracy and the goal-oriented SemCom system's generalizability [395]. Using (47), the goal-oriented SemCom system proposed in [395] can be formulated as an extended rate-distortion optimization problem that is given by [395, eq. (10)]
\[\min_{p(\hat{x}|x)}\ D_{RD}(X,\hat{X})+\beta D_{IB}(X,\hat{X}) \tag{48a}\] \[\text{s.t.}\ \ I(X;\hat{X})\leq I_{C} \tag{48b}\] \[\sum_{\hat{x}}p(\hat{x}|x)=1, \tag{48c}\]
where the constraints in (48b) and (48c) correspond to the maximum channel capacity \(I_{C}\) and the normalization constraint of the conditional PMF \(p(\hat{x}|x)\), respectively [395]. Substituting (45) into (48a) and discarding \(I(X;Y)\) - since it is constant for a given dataset - leads to the following optimization problem:
\[\min_{p(\hat{x}|x)}\ D_{RD}(X,\hat{X})-\beta I(\hat{X};Y) \tag{49a}\] \[\text{s.t.}\ \ I(X;\hat{X})\leq I_{C} \tag{49b}\] \[\sum_{\hat{x}}p(\hat{x}|x)=1. \tag{49c}\]
The authors of [395] solve this optimization problem using the Lagrange multiplier technique, which leads to the following theorem.
**Theorem 7** ([395, Theorem 2]).: _The optimal mapping from the source images \(X\) to the semantically-reconstructed images \(\hat{X}\) must satisfy [395, eqs. (12)-(15)]:_
\[p(\hat{x}|x)=\frac{p(\hat{x})e^{-\lambda^{-1}d_{S}(x,\hat{x})}}{\mu(x)} \tag{50a}\] \[p(\hat{x})=\sum_{x\in\mathcal{X}}p(x)p(\hat{x}|x) \tag{50b}\] \[p(y|\hat{x})=\sum_{x\in\mathcal{X}}p(y|x)p(x|\hat{x}), \tag{50c}\]
_where_
\[\mu(x) =\sum_{\hat{x}\in\mathcal{X}}p(\hat{x})e^{-\lambda^{-1}d_{S}(x, \hat{x})} \tag{51a}\] \[d_{S}(x,\hat{x}) =d_{RD}(x,\hat{x})+\beta d_{IB}(x,\hat{x}). \tag{51b}\]
Proof.: The proof is provided in [395, Appendix B].
The optimal distributions \(p(\hat{x}|x)\), \(p(\hat{x})\), and \(p(y|\hat{x})\) can be obtained [395] using the BA algorithm [336].
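A minimal numerical sketch of the resulting BA-style alternation of (50a)-(50c) is given below; the source alphabet, the task distribution \(p(y|x)\), and the values of \(\beta\) and the temperature \(\lambda\) are assumptions chosen purely for illustration.

```python
# A toy fixed-point iteration of eqs. (50a)-(50c); the source, task, beta, and lambda are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
nx, ny = 6, 3
x_vals = np.linspace(0.0, 1.0, nx)               # source alphabet (reused for X-hat)
p_x = rng.dirichlet(np.ones(nx))
p_y_x = rng.dirichlet(np.ones(ny), size=nx)      # task label distribution p(y|x)

beta, lam, eps = 0.5, 0.05, 1e-12
d_rd = (x_vals[:, None] - x_vals[None, :]) ** 2  # reconstruction distortion d_RD(x, xhat)

p_xh = np.full(nx, 1.0 / nx)                     # initialize p(xhat)
p_y_xh = np.tile(p_x @ p_y_x, (nx, 1))           # initialize p(y|xhat) with the marginal p(y)

for _ in range(300):
    # d_IB(x, xhat) per eq. (43), then the semantic distortion of eq. (51b).
    d_ib = np.sum(p_y_x[:, None, :] *
                  np.log((p_y_x[:, None, :] + eps) / (p_y_xh[None, :, :] + eps)), axis=2)
    d_s = d_rd + beta * d_ib
    p_xh_x = p_xh[None, :] * np.exp(-d_s / lam)  # eq. (50a), up to the normalizer mu(x)
    p_xh_x /= p_xh_x.sum(axis=1, keepdims=True)
    p_xh = p_x @ p_xh_x                          # eq. (50b)
    p_x_xh = (p_x[:, None] * p_xh_x) / (p_xh[None, :] + eps)   # Bayes: p(x | xhat)
    p_y_xh = p_x_xh.T @ p_y_x                    # eq. (50c)

print("p(xhat):", np.round(p_xh, 3))
```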
We now continue with our discussion of GOQ [396, 397].
### _Goal-Oriented Quantization_
GOQ is quite useful for many applications, including controlled networks that are built on a communication network, wireless resource allocation, and 6G systems [396, 397]. In this vein, a general GOQ framework wherein the goal/task of a receiver is modeled by a generic optimization problem that comprises both decision variables and parameters is illustrated in [396, Fig. 1]. More specifically, the goal is modeled as a minimization problem of a general goal function \(f(\mathbf{x};\mathbf{g})\) for \(\mathbf{x}\) (with dimension \(d\)) being the decision that has to be made from a quantized version of the parameters \(\mathbf{g}\) (with dimension \(p\)) [396]. In view of this problem, we state the following two definitions.
**Definition 23** ([396, Definition II.1]).: _Suppose \(M,d\in\mathbb{N}_{\geq 1}\) and \(\mathcal{G}\subseteq\mathbb{R}^{d}\). An \(M\)-quantizer \(\mathcal{Q}_{M}\) is completely decided by a piecewise constant function \(\mathcal{Q}_{M}:\mathcal{G}\to\mathcal{G}\). This mapping is defined as \(\mathcal{Q}_{M}(g)=z_{m}\) for all \(g\in\mathcal{G}_{m}\) given that \(m\in[M]\); \(\mathcal{G}_{1},\ldots,\mathcal{G}_{M}\) are the quantization regions that define a partition of \(\mathcal{G}\); and \(z_{1},\ldots,z_{M}\) are the region representatives._
**Definition 24** ([396, Definition II.2]).: _Suppose \(p\in\mathbb{N}_{\geq 1}\) and \(g\) is a fixed parameter. Let \(\chi(g)\) be a decision function that provides the minimum points for the goal function \(f(\mathbf{x};g)\), whose decision variable is \(\mathbf{x}\in\mathbb{R}^{P}\)[396, eq. (1)]:_
\[\chi(g)\in\operatorname*{arg\,min}_{\mathbf{x}}f(\mathbf{x};g). \tag{52}\]
_The optimality loss induced by quantization is equated as [396, eq. (2)]:_
\[L(Q;f):=\alpha_{f}\int_{g\in\mathcal{G}}\big{[}f(\chi(\mathcal{Q}(g));g)-f( \chi(g);g)\big{]}\phi(g)dg, \tag{53}\]
_where \(\phi(\cdot)\) is the PDF of \(g\) and \(\alpha_{f}>0\) denotes a scaling factor that is independent of \(Q\)._
From Definition 24, the following remarks follow.
**Remark 2**.: _The conventional quantization approach can be derived from the GOQ approach by observing that the second term of \(L(Q;f)\) - as defined in (53) - is independent of \(Q\) and specifying \(f\) as \(f(\mathbf{x};\mathbf{g})=\|\mathbf{x}-\mathbf{g}\|^{2}\)[396]._
**Remark 3**.: _Unlike the conventional quantization approach that aims to provide a version of \(g\) that resembles \(g\), what matters in the GOQ approach is the quality of the end decision made [396]._
**Remark 4**.: _The design of a GOQ quantizer constitutes a major difference w.r.t. the conventional quantization approach and thus hinges on the mathematical properties of \(f\) and the underlying decision function \(\chi(\cdot)\)[396]._
When it comes to Remark 4, quantifying the relationship between the nature of \(f\) and the quantization performance is a challenging problem [396]. Meanwhile, for a scalar GOQ such that \(d=p=1\) and \(\rho(\cdot)\) being a density function, the number of quantization intervals over \([a,b]\) can be approximated by \(M\int_{0}^{b}\rho(g)dg\)[396]. Accordingly, the problem of finding a GOQ in the high-resolution regime boils down to finding the density function that minimizes the optimality loss that is denoted by \(L(\rho;f)\)[396]. This leads to the following proposition.
**Proposition 1** ([396, Proposition III.1]).: _Suppose \(f\) is a fixed goal function that is assumed to be \(\kappa\) times differentiable and \(\chi\) differentiable with [396, eq. (4)]_
\[\kappa=\min\Big{\{}i\in\mathbb{N}:\forall g,\ \frac{\partial^{i}f(x;g)}{\partial x^{i}}\bigg{|}_{x=\chi(g)}\neq 0\ \text{a.s.}\Big{\}}. \tag{54}\]
_In the high resolution regime, the optimality loss \(L(\rho;f)\) is minimized by employing the underneath quantization interval density function [396, eq. (5)]:_
\[\rho^{\star}(g)=C\bigg{[}\bigg{(}\frac{d\chi(g)}{dg}\bigg{)}^{\kappa}\frac{ \partial^{\kappa}f(\chi(g);g)}{\partial x^{\kappa}}\phi(g)\bigg{]}^{\frac{1} {\kappa+1}}, \tag{55}\]
_where \(\frac{1}{C}=\int_{\mathcal{G}}\bigg{[}\bigg{(}\frac{d\chi(t)}{dt}\bigg{)}^{ \kappa}\frac{\partial^{\kappa}f(\chi(t);t)}{\partial x^{\kappa}}\phi(t) \bigg{]}^{\frac{1}{\kappa+1}}dt\)._
Proof.: The proof is provided in [396, Appendix A].
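To illustrate Remark 3 and the spirit of Proposition 1, the following sketch compares the optimality loss of a conventional (uniform-in-\(g\)) quantizer with that of a goal-aware quantizer; the goal function, the uniform parameter density, and the companding heuristic (used instead of the exact density of (55)) are assumptions, and on this toy problem the goal-aware design attains a lower optimality loss.

```python
# A toy comparison of conventional vs. goal-oriented scalar quantization; the goal function
# and densities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
g = rng.uniform(0.0, 1.0, 200_000)            # parameter samples, phi(g) uniform on [0, 1]
f = lambda x, g: (x - g**3) ** 2              # goal function; optimal decision chi(g) = g**3
chi = lambda g: g**3

def quantize(values, edges, reps):
    """Map each value to the representative of the interval it falls in."""
    idx = np.clip(np.searchsorted(edges, values, side="right") - 1, 0, len(reps) - 1)
    return reps[idx]

M = 8
edges_c = np.linspace(0.0, 1.0, M + 1)        # conventional: uniform intervals in g
reps_c = 0.5 * (edges_c[:-1] + edges_c[1:])
edges_g = np.linspace(0.0, 1.0, M + 1) ** (1.0 / 3.0)   # goal-aware: denser where chi varies
reps_g = 0.5 * (edges_g[:-1] + edges_g[1:])

def optimality_loss(edges, reps):
    gq = quantize(g, edges, reps)             # quantized parameter Q(g)
    return np.mean(f(chi(gq), g) - f(chi(g), g))         # eq. (53) with alpha_f = 1

print("conventional :", optimality_loss(edges_c, reps_c))
print("goal-oriented:", optimality_loss(edges_g, reps_g))
```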
On the other hand, when \(d,p\in\mathbb{N}_{\geq 1}\), the goal-oriented quantization problem becomes a vector GOQ problem [396], which the following proposition is derived for.
**Proposition 2** ([396, Proposition IV.I]).: _Assume \(d,p\in\mathbb{N}_{\geq 1}\); \(\kappa=1\); and \(f\) and \(\chi\) are twice differentiable. Let \(\mathbf{H}_{f}(\mathbf{x};\mathbf{g})\) and \(\mathbf{J}_{\chi}(\mathbf{g})\) be the Hessian matrix of \(f\) evaluated at an optimal decision \(\chi(\mathbf{g})\) and the Jacobian matrix of \(\chi\), respectively. In the regime of large \(M\), the optimality loss function \(L(Q;f)\) - defined in (53) - can be approximated as [396, eq. (9)]:_
\[L(Q;f)=\underbrace{\alpha_{f}\sum_{m=1}^{M}\int_{\mathcal{G}_{m}}(\mathbf{g}-\mathbf{z}_{m})^{T}\mathbf{A}_{f,\chi}(\mathbf{g})(\mathbf{g}-\mathbf{z}_{m})\phi(\mathbf{g})d\mathbf{g}}_{=\hat{L}_{M}(Q;f)}+\mathcal{O}(M^{-2/p}), \tag{56}\]
_where \(\mathbf{A}_{f,\chi}(\mathbf{g})=\mathbf{J}_{\chi}^{T}(\mathbf{g})\mathbf{H}_{f}(\chi(\mathbf{g});\mathbf{ g})\mathbf{J}_{\chi}(\mathbf{g})\). Moreover, \(\hat{L}_{M}(Q;f)\) - as expressed in (56) - can be bounded as \(L_{M}^{\min}(Q;f)\leq\hat{L}_{M}(Q;f)\leq L_{M}^{\max}(Q;f)\), where \(L_{M}^{\min}(Q;f)\) and \(L_{M}^{\max}(Q;f)\) are given in [396, eq. (10)] and [396, eq. (11)], respectively._
Proof.: The proof is provided in [396, Appendix B].
Apart from the above-discussed goal-oriented SemCom theories, there have also been other theoretical developments such as the _theory of goal-oriented communication_ [399] and _universal SemCom II_ [94]. It is worth noting, however, that the above-discussed goal-oriented SemCom theories have their corresponding limitations and are hence not the most rigorous and complete of theories (though they are interesting!). This is attributed to the numerous fundamental and major challenges of goal-oriented SemCom, to whose in-depth discussion we now turn.
## XI Fundamental and Major Challenges of Goal-Oriented SemCom
When it comes to realizing high-fidelity goal-oriented SemCom for 6G and beyond, the research field of goal-oriented SemCom is fraught with fundamental and major challenges in the theoretical, algorithmic, and realization/implementation-related research frontiers. These challenges are discussed in detail below, beginning with the challenges in the development of fundamental goal-oriented SemCom theories.
### _Challenges in the Development of Fundamental Goal-Oriented SemCom Theories_
We detail below (in no specific order) the fundamental and major challenges related to - but not limited to - the development of fundamental goal-oriented SemCom theories.
#### X-A1 Lack of a Commonly Accepted Definition of Semantics / Semantic Information
despite the many definitions of semantics / semantic information that exist, there is no commonly agreed upon definition. This is a fundamental challenge that hinders the advancement of goal-oriented SemCom theory (as well as algorithm and realization).
#### X-A2 Fundamental Performance Analysis of Goal-Oriented SemCom
as is the case for SemCom, analyzing the fundamental non-asymptotic performance of goal-oriented SemCom is fundamentally challenging for the following reasons [326]: \(i)\) the lack of a commonly agreed-upon definition of semantics / semantic information [125, Ch. 10, p. 125]; \(ii)\) the fundamental lack of interpretability/explainability of optimization, generalization, and approximation in DL models [340]; and \(iii)\) the lack of a comprehensive mathematical foundation for goal-oriented SemCom [341, Sec. IV]. Moreover, since a system's goal may not be explicitly represented by a utility function, it can be fundamentally challenging to rigorously analyze a goal-oriented SemCom system's performance.
#### X-A3 Performance Analysis of DL-based Goal-Oriented SemCom Systems
DL-based goal-oriented SemCom systems such as cooperative goal-oriented SemCom [372] rely on a joint DL-based source and channel coding technique. The rigorous non-asymptotic performance analysis of DL-based goal-oriented SemCom systems is therefore hindered by the fundamental lack of interpretability/explainability [342, 343] that is inherent in DL models.
#### X-A4 Fundamental Limits of Goal-Oriented SemCom Systems
the fundamental limits of goal-oriented SemCom depend on not only the type of DL-based semantic encoder and semantic decoder used, but also the type of goal, and hence the goal function. The goal function can hardly be detailed enough to capture all aspects of a goal, and DL-based goal-oriented SemCom techniques suffer from a fundamental lack of interpretability (the same as DL-based SemCom schemes).
#### X-A5 Semantic Compressed Sensing and Optimal Sampling Theory
in stark contrast to the state-of-the-art techniques that pursue a "sample-then-compress" structure, _semantic compressed sensing_ is a computationally lighter scheme that gathers only the minimum volume of data needed to reconstruct the signal of interest at the desired resolution, as determined by the application requesting the data [86]. It carries out certain signal processing operations directly in the "compressed domain" without complete signal reconstruction [86]. This calls for tackling the formidable challenge of developing an _optimal sampling theory_ that unifies signal sparsity and aging/semantics for real-time prediction/reconstruction under communication and delay constraints [86].
We now carry on with fundamental and major challenges in the development of fundamental goal-oriented SemCom algorithms.
### _Challenges in the Development of Fundamental Goal-Oriented SemCom Algorithms_
We detail below (in no specific order) the fundamental and major challenges related to -- but not limited to -- the development of fundamental goal-oriented SemCom algorithms.
#### X-B1 Inevitability of Semantic Mismatch
the source KB and the destination KB can be quite different because they observe different worlds with unequal abilities to understand things [76]. Consequently, semantic mismatch is unavoidable to the extent that it can fundamentally constrain the performance of wireless systems that are based on goal-oriented SemCom.
#### X-B2 Lack of Unified Semantic Performance Assessment Metrics
despite the numerous metrics that have been proposed for goal-oriented SemCom, there is a lack of unified/universal performance assessment metrics for goal-oriented SemCom [71]. When it comes to unified metrics, the major challenge is to establish concrete metrics that can capture source and network dynamics, as well as any potentially non-trivial inter-dependencies among information attributes [75].
#### V-B3 Lack of Interpretability in DL-Based Goal-Oriented SemCom
there is a fundamental lack of interpretability in DL-based goal-oriented SemCom algorithms due to the fundamental lack of interpretability/explainability that is inherent in trained DL models [342, 343].
#### V-B4 Optimal Semantic-Aware Joint Sampling, Transmission, and Reconstruction of Multidimensional Signals
in a number of conventional communication systems, transmission is optimized on the basis of QoS metrics - e.g., delay, rate, timeliness - while ignoring source variations; the fact that samples may be received on time but contain no useful information; or the fact that samples can even be misleading about the system's true state [75]. This scenario highlights the implicit structural links that exist between sampling and communication, which are generally overlooked in SemCom and goal-oriented SemCom [75]. For reliable goal-oriented SemCom that enables timely decision-making and satisfies the stringent requirements of real-time NCSs, the foundational challenge is therefore to develop a theory for optimal semantic-aware joint active sampling, transmission, and reconstruction of multidimensional signals, especially under stringent timing constraints [75].
#### V-B5 Resource Allocation for Goal-Oriented SemCom
from the vantage point of optimal resource allocation, goal-oriented SemCom systems face many fundamental challenges, some of which have led to the following major research problems: _how can a generic resource allocation problem be optimized for different goal-oriented SemCom systems? How can a resource allocation policy be optimized while maximizing goal-oriented SemCom's efficiency?_[97].
#### V-B6 Goal-Oriented Resource Orchestration
in emerging cyber-physical and autonomous networked systems, semantic-aware real-time data networking requires effective scheduling and resource allocation policies for gathering (often correlated) multi-source multi-modal information [75]. The objectives in the networked applications could be achieved by using an alternative set of multi-quality data [75]. These goal-oriented resource orchestration problems fall into the realm of real-time scheduling with multiple choices [75]. It is therefore challenging to devise online algorithms that can select which piece of information - from where and when - to gather and transmit under communication and processing constraints [75].
#### V-B7 Multi-Objective Stochastic Optimization
when it comes to goal-oriented end-user-perceived utilities that estimate the relative degree of priority of different information attributes, semantic-aware data gathering and prioritization require multi-criteria optimization [75]. In view of multi-criteria optimization and overcoming its challenges, multi-objective stochastic optimization based on the cumulative prospect theory - which incorporates semantic information via risk-sensitive measures and multi-attribute entropy-based utility functions - holds promise [75].
We now proceed to discuss the fundamental and major challenges in the realization of goal-oriented SemCom.
### _Challenges in the Realization of Goal-Oriented SemCom_
In what follows, we discuss (in no specific order) the fundamental and major challenges related to - but not limited to - the realization of goal-oriented SemCom.
#### V-C1 Real-Time Requirement
several major use cases of goal-oriented SemCom, such as autonomous transportation, tele-health, smart factories, and NCSs, have real-time requirements for goal-oriented communication/control. However, incorporating semantic reasoning into the goal-oriented SemCom use cases mentioned incurs extra delay in goal-oriented SemCom's overall transceivers [107]. Satisfying the ultra-low end-to-end latency requirements (i.e., real-time requirements) of 6G (and beyond) is therefore a major realization challenge for goal-oriented SemCom.
#### V-C2 Scalability
as is the case for SemCom, the realization of goal-oriented SemCom is hampered by several scalability challenges, such as: \(i)\) the lack of a general semantic-level framework for distinct types of sources; \(ii)\) sharing, updating, and maintaining KBs at the source and destination definitely necessitate additional storage costs and algorithm design [107]; and \(iii)\) realizing goal-oriented SemCom involves significant computational as well as storage costs.
#### XI-C3 Knowledge Evolution Tracking
many existing goal-oriented SemCom techniques rely on the dynamic sharing of knowledge between the source KB and the destination KB. To this end, modeling and keeping track of each piece of knowledge is fundamentally important for improving the efficiency and reliability of goal-oriented SemCom. Nonetheless, the basic neuroscientific understanding of knowledge, knowledge evolution, and knowledge tracking remains a very difficult fundamental problem.
#### XI-C4 Compatibility with Existing Communication Infrastructure
since BitCom systems and services will still be in use when goal-oriented SemCom systems and services are rolled out in 6G networks and systems, any implementation of goal-oriented SemCom should ensure that futuristic goal-oriented SemCom systems are compatible with the existing communication infrastructure. To this end, extensive link-level simulations must be performed to verify the realistic end-to-end performance of goal-oriented SemCom.
#### XI-C5 Efficient Knowledge Sharing in Multi-User MIMO Goal-Oriented SemCom Systems
a multi-user MIMO goal-oriented SemCom system such as cooperative goal-oriented SemCom [372] - which is schematized in Fig. 36 - needs knowledge to be shared between the receiver with multiple antennas and a number of goal-oriented SemCom users that are equipped with either a single antenna or multiple antennas. However, achieving efficient global knowledge sharing in multi-user MIMO goal-oriented SemCom systems is challenging.
Because challenges are always opportunities, some of the above-detailed fundamental and major challenges of goal-oriented SemCom are also big opportunities for novel future directions for goal-oriented SemCom, as discussed below.
## XII Future Directions of Goal-Oriented SemCom
In light of the fundamental and major challenges of goal-oriented SemCom that are detailed in Section XI, the developments in goal-oriented SemCom that are discussed in Section X, and the many proposals of state-of-the-art goal-oriented SemCom algorithms that are surveyed in Section VIII, we offer some novel future directions for goal-oriented SemCom theory, algorithm, and realization. We begin with some novel future directions for goal-oriented SemCom theory.
### _Future Directions for Goal-Oriented SemCom Theory_
We highlight (in no particular order) some novel future directions related to - but not limited to - goal-oriented SemCom theory.
#### XII-A1 A Fundamental Theory and the Fundamental Limits of Actionable Intelligence
actionable intelligence is on-time and accurate intelligence that would help decision-makers make an optimal/well-informed decision [400]. In the context of decision-making in goal-oriented SemCom, the reconstructed signals of a communicating smart device can alter the recipients' states and initiate specific actions at the receivers [75]. The limits of actionable intelligence must be well-understood before deploying any goal-oriented SemCom system. To this end, a fundamental theory and the fundamental limits of actionable intelligence - in the context of DL, big data, or a combination thereof - are critical future research directions for goal-oriented SemCom.
#### XII-A2 A Fundamental Theory of Optimal Semantic-Aware Joint Active Sampling, Transmission, and Reconstruction of Multidimensional Signals
a theory of optimal semantic-aware joint active sampling, transmission, and reconstruction of multidimensional signals - especially, under stringent timing constraints - is needed to enable timely decision-making and efficiently meet the requirements of real-time networked applications [75].
We now proceed to highlight some novel future directions for goal-oriented SemCom algorithms.
### _Future Directions for Goal-Oriented SemCom Algorithms_
We point out (in no specific order) the promising future directions related to - but not limited to - goal-oriented SemCom algorithms.
#### XII-B1 Semantic-Aware Networking
in semantic-aware goal-oriented networks, the major operations include local goal-oriented information acquisition, representation, and semantic value inference; data prioritization; in-network processing (e.g., fusion, compression); semantic reception; and semantic reconstruction [75]. These operations will require optimal or nearly-optimal algorithms for semantic filtering, semantic preprocessing, semantic reception, and semantic control [75].
#### XII-B2 Goal-Oriented SemCom with Time-Evolving Goals
although most state-of-the-art goal-oriented SemCom works consider fixed goals, it is often the case that one task is followed by one or more other tasks in different systems that include smart devices [106]. This enforces the design constraint that a new task needs to be executed seamlessly once the previous task has ended [106]. Nevertheless, retraining from scratch for every goal not only takes time but also wastes resources [106]. Consequently, a unified goal-oriented SemCom framework that takes into account multiple - often causally related - goals while maximizing the expected goal accomplishment [106] is a research direction worth pursuing.
#### XII-B3 Goal-Oriented Coding and Control
source coding or JSCC models could be implemented to characterize goal-oriented compression and its performance limits [106]. Whenever a goal relies not only on the state but also on the decisions made - in the current time slot as well as in previous time slots - formulating dynamic system models can lead to a promising solution [106]. For this scenario, there are two possible ways to design optimal goal-oriented coding and control:
* Resorting to differential equations to explore the evolution of the transmitted messages and the goal [106].
* Revisiting the sampling process by tailoring the sampling problem to a general utility function [106] (see the toy sketch after this list).
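To make the second route above more concrete, the following Python toy sketch contrasts a goal-agnostic uniform sampler with a simple threshold policy that transmits a sample only when a generic utility penalty - here, the instantaneous reconstruction error of a first-order autoregressive source under zero-order-hold reconstruction - becomes large. The source model, the parameter values, and both policies are illustrative assumptions of ours and are not taken from [106].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: a first-order autoregressive (Gauss-Markov) source;
# neither the source model nor the parameter values come from [106].
T, a, sigma = 100_000, 0.99, 0.1
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = a * x[t - 1] + sigma * rng.standard_normal()

def run(policy):
    """Zero-order-hold reconstruction: the receiver keeps the most recent sample."""
    last, sq_err, samples = x[0], 0.0, 0
    for t in range(T):
        if policy(t, x[t], last):
            last = x[t]
            samples += 1
        sq_err += (x[t] - last) ** 2
    return sq_err / T, samples / T

# Goal-agnostic policy: sample every 20th slot, regardless of the source state.
mse_uniform, rate_uniform = run(lambda t, xt, last: t % 20 == 0)

# Goal-oriented policy: sample only when the utility penalty |x_t - last sample| is large.
mse_goal, rate_goal = run(lambda t, xt, last: abs(xt - last) > 0.45)

print(f"uniform sampling  : rate={rate_uniform:.3f}, MSE={mse_uniform:.4f}")
print(f"threshold sampling: rate={rate_goal:.3f}, MSE={mse_goal:.4f}")
```

Comparing the two printed operating points at roughly matched sampling rates illustrates how tailoring when to sample to a utility function, rather than to a fixed schedule, changes how much goal-relevant information each transmitted sample carries.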
We now move on to some crucial future directions for goal-oriented SemCom realization.
### _Future Directions for Goal-Oriented SemCom Realization_
In what follows, we point out (in no particular order) some useful future directions related to - but not limited to - goal-oriented SemCom realization.
#### XII-C1 The Coexistence of BitCom and Goal-Oriented SemCom Users
since BitCom service and infrastructure will still be in use when goal-oriented SemCom is implemented in 6G and beyond, the coexistence of BitCom users and goal-oriented SemCom users must be investigated through the lens of not only measurements, but also theory. Regarding theory, the coexistence of BitCom users and goal-oriented SemCom users should be studied in detail from the vantage points of optimal resource allocation and interference mitigation.
#### XII-C2 The Impact of Inconsistent KBs at the Source and Destination
although most state-of-the-art goal-oriented SemCom proposals resort to the assumption that knowledge is shared in real time to consider consistent KBs at the source and destination, the source KB and the destination KB are fundamentally inconsistent [76]. Therefore, how to design and realize novel (multi-user) goal-oriented SemCom systems with inconsistent KBs is an open issue in goal-oriented SemCom design and realization.
At last, we continue with this tutorial-cum-survey paper's concluding summary and research outlook.
## XIII Concluding Summary and Research Outlook
Driven by the many highly heterogeneous 6G applications and use cases that exist, numerous researchers in academia, industry, and national laboratories have disseminated several 6G proposals and roadmaps. Despite the abundance of proposals and roadmaps, realizing 6G - as it is presently envisaged - is fraught with many fundamental IMT challenges that are interwoven with several technological challenges and uncertainties. To alleviate some of these technological challenges and uncertainties, SemCom and goal-oriented SemCom (effectiveness-level SemCom) have emerged as promising technological
enablers of 6G. SemCom and goal-oriented SemCom enable 6G because they are designed to transmit only semantically-relevant information. This semantic-centric design helps to minimize power usage, bandwidth consumption, and transmission delay in 6G, which attests to the criticality of SemCom and goal-oriented SemCom for 6G. 6G is also critical for the realization of major SemCom use cases (e.g., H2H SemCom, H2M SemCom, M2M SemCom, and KG-based SemCom) and major goal-oriented SemCom use cases (e.g., autonomous transportation, consumer robotics, environmental monitoring, telehealth, smart factories, and NCSs). The paradigms of _6G for SemCom and goal-oriented SemCom and SemCom and goal-oriented SemCom for 6G_ call for the tighter integration and marriage of 6G, SemCom, and goal-oriented SemCom.
While underscoring an overarching paradigm shift that can change the status quo that wireless connectivity is an opaque data pipe carrying messages whose context-dependent meaning and effectiveness have been ignored, this holistically comprehensive tutorial-cum-survey paper aims to facilitate and inspire a tighter integration and marriage of 6G, SemCom, and goal-oriented SemCom. For this purpose, this article first explained the fundamentals of semantics and semantic information, semantic representation, theories of semantic information, and definitions of semantic entropy. It then built on this understanding by detailing the state-of-the-art research landscape of SemCom, presenting the major state-of-the-art trends and use cases of SemCom, discussing state-of-the-art SemCom theories, uncovering fundamental and major challenges (of SemCom theory, algorithm, and realization), and offering novel future research directions (for SemCom theory, algorithm, and realization). This article also documented the state-of-the-art research landscape of goal-oriented SemCom, provided major state-of-the-art trends and use cases of goal-oriented SemCom, discussed state-of-the-art goal-oriented SemCom theories, exposed the fundamental and major challenges (for goal-oriented SemCom theory, algorithm, and realization), and provided novel future research directions for goal-oriented SemCom theory, algorithm, and realization.
By proffering fundamental and major challenges as well as novel future research directions for SemCom and goal-oriented SemCom, this comprehensive tutorial-cum-survey article fittingly inspires astronomical lines of research on SemCom and goal-oriented SemCom theory, algorithm, and implementation for 6G and beyond. For 6G and beyond, at last, this article calls for novel System 2-type SemCom design and realization in sharp contrast to many existing and discussed SemCom works that are System 1-type by design.
## Appendix A On Entropy, Relative Entropy, and Mutual Information
To lay the groundwork for our discussion of existing SemCom and goal-oriented SemCom theories, we offer a brief discussion on the basics of entropy, relative entropy, and mutual information. We begin by defining the entropy of a discrete RV.
**Definition 25**.: _For a discrete RV \(X\), its entropy \(H(X)\) is defined by [336, eq. (2.1)]_
\[H(X):=-\sum_{x\in\mathcal{X}}p(x)\log_{2}p(x), \tag{57}\]
_where \(\mathcal{X}\) is the alphabet, \(p(x):=p_{X}(x)=\mathbb{P}(\{X=x\})\) is the PMF of \(X\), and the entropy \(H(X)\) is expressed in bits [336]._
The entropy defined in (57) is often referred to as _Shannon entropy_, and \(H(X)\geq 0\)[336, Lemma 2.1.1]. Meanwhile, if \(X\sim p(x)\), the expected value of the RV \(g(X)\) is equated as [336, eq. (2.2)]
\[\mathbb{E}\{g(X)\}:=\sum_{x\in\mathcal{X}}g(x)p(x). \tag{58}\]
Thus, it follows from (58) and (57) that
\[H(X)=-\mathbb{E}\{\log_{2}p(X)\}=\mathbb{E}\{\log_{2}(1/p(X))\}. \tag{59}\]
As a generalization of the entropy definitions in (59) and (57), we provide below the definition of _joint entropy_.
**Definition 26**.: _For a pair of discrete RVs \((X,Y)\) with a joint PMF \(p(x,y)\), their joint entropy \(H(X,Y)\) is defined as [336, eq. (2.8)]_
\[H(X,Y):=-\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}}p(x,y)\log_{2}p(x,y), \tag{60}\]
_where \(\mathcal{X}\) and \(\mathcal{Y}\) are the alphabets of \(X\) and \(Y\), respectively, and \(p(x,y):=P_{X,Y}(x,y)=\mathbb{P}(X=x,Y=y)\)._
To express the right-hand side (RHS) of (60) using expectation, we provide the following definition of the expectation of a function of multi-variate RVs: if \((X,Y)\sim p(x,y)\), the expected value of the RV \(g(X,Y)\) takes the form [401]
\[\mathbb{E}\{g(X,Y)\}:=\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}}g(x,y)p(x,y). \tag{61}\]
Thus, using (61), the joint entropy - as it is defined in (60) - can also be expressed as [336, eq. (2.9)]
\[H(X,Y)=-\mathbb{E}\{\log_{2}p(X,Y)\}. \tag{62}\]
We now move on to define the _conditional entropy_ of an RV given another RV.
**Definition 27**.: _For a pair of discrete RVs \((X,Y)\sim p(x,y)\), the conditional entropy \(H(Y|X)\) is defined as [336, eq. (2.10)]_
\[H(Y|X):=\sum_{x\in\mathcal{X}}p(x)H(Y|X=x). \tag{63}\]
The RHS of (63) can then be simplified as
\[H(Y|X)\stackrel{{(a)}}{{=}} -\sum_{x\in\mathcal{X}}p(x)\sum_{y\in\mathcal{Y}}p(y|x)\log_{2}p( y|x) \tag{64a}\] \[\stackrel{{(b)}}{{=}} -\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}}p(x)p(y|x)\log_{2}p( y|x)\] (64b) \[\stackrel{{(c)}}{{=}} -\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}}p(x,y)\log_{2}p(y|x)\] (64c) \[\stackrel{{(d)}}{{=}} -\mathbb{E}\{\log_{2}p(Y|X)\}, \tag{64d}\]
where \((a)\) is due to the entropy definition in (57), \((b)\) follows from rearranging the RHS of (64a), \((c)\) is for the definition of the conditional PMF \(p(y|x)\) with \(p(y|x):=p_{Y|X}(y|x)=\mathbb{P}(Y=y|X=x)=p(x,y)/p(x)\)[401], and \((d)\) is because of the definition in (61). It is intuitive from (64d) that, in general, \(H(Y|X)\neq H(X|Y)\)[336].
If we now simply add (64d) and (62), it follows that
\[H(Y|X)+ H(X,Y)=-\big{[}\mathbb{E}\{\log_{2}p(Y|X)\}+\mathbb{E}\{\log_{2}p(X, Y)\}\big{]}\] \[\stackrel{{(a)}}{{=}}-\sum_{x\in\mathcal{X}}\sum_{y \in\mathcal{Y}}p(x,y)[\log_{2}p(y|x)+\log_{2}p(x,y)]\] \[\stackrel{{(b)}}{{=}}-\sum_{x\in\mathcal{X}}\sum_{y \in\mathcal{Y}}p(x,y)[\log_{2}p(y|x)p(x,y)]\] \[\stackrel{{(c)}}{{=}}-\sum_{x\in\mathcal{X}}\sum_{y \in\mathcal{Y}}p(x,y)[\log_{2}[p(x,y)]^{2}/p(x)]\] \[\stackrel{{(d)}}{{=}}-2\sum_{x\in\mathcal{X}}\sum_{y \in\mathcal{Y}}p(x,y)\log_{2}p(x,y)+\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{ Y}}p(x,y)\log_{2}p(x)\] \[\stackrel{{(e)}}{{=}}-2\sum_{x\in\mathcal{X}}\sum_{y \in\mathcal{Y}}p(x,y)\log_{2}p(x,y)+\sum_{x\in\mathcal{X}}p(x)\log_{2}p(x)\] \[\stackrel{{(f)}}{{=}}2H(X,Y)-H(X), \tag{65}\]
where \((a)\) is due to (64c) and (60), \((b)\) is due to the property of the logarithm, \((c)\) is for \(p(y|x):=p(x,y)/p(x)\)[401], \((d)\) is also because of the property of the logarithm, \((e)\) is due to the property of the joint PMF \(p(x,y)\) with \(p(x)=\sum_{y\in\mathcal{Y}}p(x,y)\)[401], and \((f)\) follows from (60) and (57). Rearranging (65) gives the result
\[H(Y|X)+H(X)=H(X,Y). \tag{66}\]
This is an important result that is widely known as the _chain rule_[336] and formalized below.
**Theorem 8** (**Chain rule [336, Theorem 2.2.1]**).: _For a pair of discrete RVs \((X,Y)\sim p(x,y)\),_
\[H(X,Y)=H(X)+H(Y|X). \tag{67}\]
From (67), the following corollary [336, eq. (2.21)] follows.
**Corollary 1**.: \[H(X,Y|Z)=H(X|Z)+H(Y|X,Z).\] (68)
We now proceed to define the _relative entropy_[336].
**Definition 28**.: _The relative entropy or KL distance between two PMFs \(p(x)\) and \(q(x)\) is given29 by [336, eqs. (2.26) and (2.27)]_
Footnote 29: Throughout this paper, we follow definitions w.r.t. the logarithm to the base two. However, logarithms to other bases (e.g., the natural logarithm) are also used in some of the literature [336].
\[D(p||q):=\sum_{x\in\mathcal{X}}p(x)\log_{2}\frac{p(x)}{q(x)}=\mathbb{E}_{p(x) }\Big{\{}\log_{2}\frac{p(X)}{q(X)}\Big{\}}, \tag{69}\]
_where the conventions \(0\log_{2}\frac{0}{0}=0\), \(0\log_{2}\frac{0}{q}=0\), and \(p\log_{2}\frac{p}{0}=\infty\) are used [336]._
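As a small numerical illustration of Definition 28 and its zero-handling conventions, the following Python snippet - our own sketch, not code from [336] - computes \(D(p||q)\) for two example PMFs and shows that the relative entropy is not symmetric in its arguments.

```python
import numpy as np

def kl_divergence(p, q):
    """D(p||q) in bits, with the conventions 0*log2(0/0)=0, 0*log2(0/q)=0, p*log2(p/0)=inf."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    support = p > 0
    if np.any(support & (q == 0)):   # some p(x) > 0 while q(x) = 0
        return np.inf
    return float((p[support] * np.log2(p[support] / q[support])).sum())

p = [0.5, 0.25, 0.25, 0.0]
q = [0.25, 0.25, 0.25, 0.25]
print(kl_divergence(p, q), kl_divergence(q, p))
```

For this example, \(D(p||q)=0.5\) bits, whereas \(D(q||p)=\infty\) because \(q\) places mass on a symbol outside the support of \(p\), consistent with the convention \(p\log_{2}\frac{p}{0}=\infty\).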
W.r.t. the relative entropy defined in Definition 28, the _mutual information_ between two RVs is defined below.
**Definition 29**.: _For two discrete RVs \(X\) and \(Y\) with a joint PMF \(p(x,y)\) and marginal PMFs \(p(x)\) and \(p(y)\), respectively, their mutual information \(I(X;Y)\) is the relative entropy between \(p(x,y)\) and the product distribution \(p(x)p(y)\)[336, eqs. (2.28)-(2.30)]:_
\[I(X;Y) =\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}}p(x,y)\log_{2}\frac{p (x,y)}{p(x)p(y)} \tag{70a}\] \[=D(p(x,y)||p(x)p(y))\] (70b) \[=\mathbb{E}_{p(x,y)}\Big{\{}\log_{2}\frac{p(X,Y)}{p(X)p(Y)}\Big{\}}. \tag{70c}\]
From (70c), it directly follows that
\[I(Y;X)=\mathbb{E}_{p(y,x)}\Big{\{}\log_{2}\frac{p(Y,X)}{p(Y)p(X)}\Big{\}}=I(X;Y). \tag{71}\]
The equality in (71) states the _symmetrical_ nature of mutual information: i.e., \(X\) says as much about \(Y\) as \(Y\) says about \(X\)[336]. Meanwhile, simplifying the RHS of (70a), the mutual information \(I(X;Y)\) can also be expressed as [336, eq. (2.39)]
\[I(X;Y)=H(X)-H(X|Y). \tag{72}\]
Thus, it follows from (72) and (71) that
\[I(X;Y)=\overbrace{H(Y)-H(Y|X)}^{=I(Y;X)}. \tag{73}\]
From the chain rule as expressed in (67), \(H(Y|X)=H(X,Y)-H(X)\). Substituting this equality into the RHS of (73) leads to the relationship [336, eq. (2.41)]
\[I(X;Y)=H(X)+H(Y)-H(X,Y). \tag{74}\]
At last, we note that [336, eq. (2.42)]
\[I(X;X)=H(X)-H(X|X)\stackrel{{(a)}}{{=}}H(X)-0=H(X), \tag{75}\]
where \((a)\) follows through Definition 27 and (64c) due to the probabilistic fact30 that \(p(x|x)=1\) for all \(x\in\mathcal{X}\). In summary, the information-theoretic results in (71)-(75) are formalized in the following theorem.
Footnote 30: Intuitively, \(H(X|X)=0\) reflects the fact that there is no uncertainty about \(x\in\mathcal{X}\) provided that \(x\) is already known/given.
**Theorem 9** (**Mutual information and entropy [336, Theorem 2.4.1]**).: _The underneath results [336, eqs. (2.43)-(2.47)] are valid concerning the relationship between mutual information and entropy:_
\[I(X;Y) =H(X)-H(X|Y) \tag{76a}\] \[I(X;Y) =H(Y)-H(Y|X)\] (76b) \[I(X;Y) =H(X)+H(Y)-H(X,Y)\] (76c) \[I(X;Y) =I(Y;X)\] (76d) \[I(X;X) =H(X). \tag{76e}\]
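The relationships in Theorem 9, together with the chain rule in (67), can be verified numerically from first principles. The short Python sketch below - our own illustration with an arbitrary toy joint PMF, not code from [336] - computes the entropies directly from Definitions 25-27 and checks (76a)-(76d) as well as the representation of mutual information as a relative entropy in (70b).

```python
import numpy as np

def H(pmf):
    """Shannon entropy in bits, per Definition 25; uses the convention 0*log2(0) = 0."""
    p = np.asarray(pmf, float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy joint PMF p(x, y) over a 2 x 3 alphabet (rows index x, columns index y).
pxy = np.array([[0.10, 0.20, 0.05],
                [0.25, 0.10, 0.30]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)
H_X, H_Y, H_XY = H(px), H(py), H(pxy)

# Conditional entropies from Definition 27, eq. (63).
H_Y_given_X = sum(px[i] * H(pxy[i, :] / px[i]) for i in range(len(px)))
H_X_given_Y = sum(py[j] * H(pxy[:, j] / py[j]) for j in range(len(py)))

assert np.isclose(H_XY, H_X + H_Y_given_X)                    # chain rule, eq. (67)
I_XY = H_X - H_X_given_Y                                      # (76a)
I_KL = float((pxy * np.log2(pxy / np.outer(px, py))).sum())   # (70a)-(70b)
assert np.isclose(I_XY, H_Y - H_Y_given_X)                    # (76b), symmetry (76d)
assert np.isclose(I_XY, H_X + H_Y - H_XY)                     # (76c)
assert np.isclose(I_XY, I_KL)                                 # mutual information as a KL distance
print(f"H(X)={H_X:.4f}, H(Y)={H_Y:.4f}, H(X,Y)={H_XY:.4f}, I(X;Y)={I_XY:.4f}")
```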
We build on the chain rule as stated in (67) and continue with the chain rules for entropy and mutual information. Beginning with the former, we state the following chain rule for the entropy of a collection of RVs.
**Theorem 10** (**Chain rule for the entropy of a collection of RVs [336, Theorem 2.5.1])**.: _For discrete RVs \(X_{1},X_{2},\ldots,X_{n}\) drawn according to \(p(x_{1},x_{2},\ldots,x_{n})\), their joint entropy \(H(X_{1},X_{2},\ldots,X_{n})\) can be expressed as [336, eq. (2.48)]_
\[H(X_{1},X_{2},\ldots,X_{n})=\sum_{i=1}^{n}H(X_{i}|X_{i-1},\ldots,X_{1}). \tag{77}\]
Proof.: The proof is provided in [336, p. 22-23].
We define _conditional mutual information_ below [336].
**Definition 30**.: _For discrete RVs \(X,Y,Z\sim p(x,y,z)\), the conditional mutual information of \(X\) and \(Y\) given \(Z\) is defined by [336, eqs. (2.60) and (2.61)]_
\[I(X;Y|Z) :=H(X|Z)-H(X|Y,Z) \tag{78a}\] \[=\mathbb{E}_{p(x,y,z)}\Big{\{}\log_{2}\frac{p(X,Y|Z)}{p(X|Z)p(Y|Z )}\Big{\}}. \tag{78b}\]
Definition 30 and Theorem 10 then lead to the following theorem on the chain rule for mutual information.
**Theorem 11** (**Chain rule for mutual information [336, Theorem 2.5.2])**.: _The following result is valid for the mutual information of multiple RVs [336, eq. (2.62)]:_
\[I(X_{1},X_{2},\ldots,X_{n};Y)=\sum_{i=1}^{n}I(X_{i};Y|X_{i-1},X_ {i-2},\ldots,X_{1}). \tag{79}\]
Proof.: It follows from Theorem 9 and (76a) that
\[I(X_{1},X_{2},\ldots,X_{n};Y)=H(X_{1},\ldots,X_{n})-H(X_{1}, \ldots,X_{n}|Y)\] \[\stackrel{{(a)}}{{=}}\sum_{i=1}^{n}\big{[}H(X_{i}|X _{i-1},\ldots,X_{1})-H(X_{i}|X_{i-1},\ldots,X_{1},Y)\big{]}\] \[\stackrel{{(b)}}{{=}}\sum_{i=1}^{n}I(X_{i};Y|X_{i-1},X_{i-2},\ldots,X_{1}), \tag{80}\]
where \((a)\) is due to (77) and \((b)\) follows from (78a). The last equation on the RHS of (80) is the RHS of (79). This completes the proof of Theorem 11.
## Appendix B Proof of Theorem 1
Without providing a proof, the authors of [79] wrote that Theorem 1 follows from the definitions of entropy and conditional entropy. For the sake of completeness and insight, we provide below our own proof of Theorem 1.
It follows from Definition 27 and (63) that the conditional entropies \(H(X|W)\) and \(H(W|X)\) can be determined as
\[H(X|W) :=\sum_{w\in\mathcal{W}}p(w)H(X|W=w). \tag{81}\] \[H(W|X) =\sum_{x\in\mathcal{X}}p(x)H(W|X=x). \tag{82}\]
To proceed, we simplify the RHS of (82). To this end, we first note that \(H(W)\) - as it is defined in (13) - is independent of all \(x\in\mathcal{X}\). Consequently,
\[H(W|X=x)=H(W). \tag{83}\]
Substituting the equality in (83) into the RHS of (82), we obtain
\[H(W|X) =\sum_{x\in\mathcal{X}}p(x)H(W)=H(W)\sum_{x\in\mathcal{X}}p(x) \tag{84a}\] \[\stackrel{{(a)}}{{=}}H(W)\sum_{x\in\mathcal{X}}\sum_ {w\in\mathcal{W}}\mu(w)p(x|w), \tag{84b}\]
where \((a)\) is due to (14). Meanwhile, applying _Bayes' rule (Bayes' theorem)_[401] to \(p(x|w)\), it follows that
\[p(x|w)=\frac{p(w|x)p(x)}{p(w)}. \tag{85}\]
Plugging (85) into the RHS of (84b) and rearranging give
\[H(W|X) =H(W)\sum_{w\in\mathcal{W}}\frac{\mu(w)}{p(w)}\sum_{x\in\mathcal{ X}}p(w|x)p(x) \tag{86a}\] \[\stackrel{{(a)}}{{=}}H(W)\sum_{w\in\mathcal{W}}\mu(w), \tag{86b}\]
where \((a)\) is due to the conditional PMF property that \(\sum_{x\in\mathcal{X}}p(w|x)p(x)=p(w)\)[401]. Since \(\mu(\cdot)\) is a probability measure, it is evident that \(\sum_{w\in\mathcal{W}}\mu(w)=1\). Replacing this value in the RHS of (86b) then leads to the relationship
\[H(W|X)=H(W). \tag{87}\]
To move further forward, we now simplify the RHS of (81). In doing so, \(H(X|W=w)\) is expressed - in analogy with \(H(X)\), which is defined in (15) - as
\[H(X|W=w):=-\sum_{x\in\mathcal{X}}p(x|w)\log_{2}p(x|w). \tag{88}\]
Meanwhile, it follows from the definition of conditional PMF that [401]
\[p(x|w)=\frac{p(x,w)}{p(w)}. \tag{89}\]
Applying the properties of the logarithm to (89) then produces
\[\log_{2}p(x|w)=\log_{2}p(x,w)-\log_{2}p(w). \tag{90}\]
Replacing (90) and (89) into the RHS of (88) results in
\[H(X|W=w):=-\sum_{x\in\mathcal{X}}\frac{p(x,w)}{p(w)}\big{[}\log_ {2}p(x,w)-\log_{2}p(w)\big{]}. \tag{91}\]
Plugging (91) into (81) then leads to
\[H(X|W)=-\sum_{w\in\mathcal{W}}\sum_{x\in\mathcal{X}}p(x,w)\log_ {2}p(x,w)+\\ \sum_{w\in\mathcal{W}}\Big{[}\sum_{x\in\mathcal{X}}p(x,w)\Big{]} \log_{2}p(w)\stackrel{{(a)}}{{=}}H(X,W)+\\ \sum_{w\in\mathcal{W}}p(w)\log_{2}p(w) \tag{92}\]
where \((a)\) is because of the definition of joint entropy per Definition 26 and the properties of joint PMF that lead to \(\sum_{x\in\mathcal{X}}p(x,w)=p(w)\)[401]. To move on, we determine \(p(w)\) using the PMF \(p(x)\) - which is equated in (14) - as
\[p(w)=\mu(w)\overbrace{p(w|w)}^{=1}+\sum_{\tilde{w}\in\mathcal{W},\tilde{w}\neq w}\mu(\tilde{w})\overbrace{p(w|\tilde{w})}^{=0}\stackrel{{(a)}}{{=}}\mu(w), \tag{93}\]
where \((a)\) is because \(p(w|w)=1\) and \(p(w|\tilde{w})=0\), \(\forall\tilde{w}\neq w\).
Substituting (93) into the RHS of (92) produces
\[H(X|W) =H(X,W)+\sum_{w\in\mathcal{W}}\mu(w)\log_{2}\mu(w) \tag{94a}\] \[\stackrel{{(a)}}{{=}}H(X,W)-H(W)\] (94b) \[\stackrel{{(b)}}{{=}}H(X,W)-H(W|X), \tag{94c}\]
where \((a)\) is due to (13) and \((b)\) is because of (87). If we now employ the chain rule for entropy, it follows from (67) that \(H(X,W)=H(X)+H(W|X)\). Substituting this equality into the RHS of (94c) then gives
\[H(X|W)=H(X). \tag{95}\]
If we subtract the equality in (87) from the equality in (95),
\[H(X)-H(W)=H(X|W)-H(W|X). \tag{96}\]
Rearranging (96) then results in the equality
\[H(X)=H(W)+H(X|W)-H(W|X). \tag{97}\]
This is exactly (16) and completes the proof of Theorem 1. \(\blacksquare\)
## Appendix C On Information Bottleneck (IB)
Assume a source encoding of an information source is denoted by an RV \(X\) and we wish to obtain its relevant quantization \(\tilde{X}\) to compress \(X\) as much as possible. Assume also a relevance RV, denoted by \(Y\) (e.g., a classification label), that is not independent of \(X\)[219]. Thus, \(X\) and \(Y\) have a positive mutual information \(I(X;Y)\), and we presume that we have access to the joint PDF \(p(x,y)\)[219, 398]. Nonetheless, under these settings and contrary to the rate-distortion problem, we would like \(\tilde{X}\) (the quantized information) to capture as much information about \(Y\) (the relevance RV) as possible [219]. The amount of information about \(Y\) that is in \(\tilde{X}\) is given by \(I(\tilde{X};Y)\) and defined as [219, eq. (14)]
\[I(\tilde{X};Y)=\sum_{y}\sum_{\tilde{x}}p(y,\tilde{x})\log\frac{p(y,\tilde{x})} {p(y)p(\tilde{x})}\stackrel{{(a)}}{{\leq}}I(X;Y), \tag{98}\]
where \((a)\) holds because lossy compression cannot convey more information than the original signal, and hence, there is always a tradeoff between rate and distortion [219]. Similarly to rate and distortion, there is a natural tradeoff between preserving meaningful information and compressing the original signal [219]. Bearing in mind this tradeoff, the IB problem concerns maintaining a constant amount of meaningful information about the relevant signal \(Y\) whilst minimizing the number of bits from the original information source \(X\) (maximizing its compression) [219]. This is equivalent to maximizing the meaningful information for a fixed compression of the original information signal [219]. Accordingly, this amounts to passing the information that \(X\) provides about \(Y\) through a "bottleneck" formed by the compact information content in \(\tilde{X}\)[219].
On par with the aforementioned motivation, the IB problem boils down to solving the following optimization problem [219, eq. (15)], [104, eq. (2)]:
\[\min_{p(\tilde{x}|x)}I(\tilde{X};X)-\beta I(\tilde{X};Y), \tag{99}\]
where the conditional distribution \(p(\tilde{x}|x)\) represents the considered source encoder and \(\beta\) denotes the Lagrange multiplier connected to the constrained meaningful information [104, 219]. Meanwhile, the optimal solution for (99) - i.e., the optimal source encoder - is task-dependent, and a generic algorithm computes the optimal solution by alternating iterations [104]. In every iteration, the minimization is performed w.r.t. the PDFs \(p(\tilde{x}|x)\), \(p(\tilde{x})\), and \(p(y|\tilde{x})\) until convergence [398, Theorem 5]. This IB approach provides a unified framework for various information processing problems, including prediction, filtering and learning [219]. Toward these ends, IB has many applications in DL [398], ML [218], SemCom [217], and goal-oriented SemCom [379].
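For concreteness, the following Python sketch solves (99) for a small discrete toy problem with the standard alternating self-consistent IB updates, in which the encoder is refreshed as \(p(\tilde{x}|x)\propto p(\tilde{x})\exp(-\beta D(p(y|x)||p(y|\tilde{x})))\) and the marginals \(p(\tilde{x})\) and \(p(y|\tilde{x})\) are recomputed in between. The toy joint PMF, the number of compressed symbols, and the value of \(\beta\) are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative toy joint PMF p(x, y): |X| = 6 source symbols, |Y| = 3 relevance labels.
p_xy = rng.random((6, 3))
p_xy /= p_xy.sum()
p_x = p_xy.sum(axis=1)               # p(x)
p_y_x = p_xy / p_x[:, None]          # p(y|x)

def kl(p, q):
    """D(p||q) in nats along the last axis (all entries assumed strictly positive)."""
    return (p * np.log(p / q)).sum(axis=-1)

def iterative_ib(n_clusters=3, beta=5.0, n_iter=300):
    """Alternating self-consistent updates for min_{p(x~|x)} I(X~;X) - beta * I(X~;Y)."""
    q_t_x = rng.random((p_x.size, n_clusters))     # soft encoder p(x~|x), random init
    q_t_x /= q_t_x.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        q_t = q_t_x.T @ p_x                        # marginal p(x~)
        q_y_t = (q_t_x * p_x[:, None]).T @ p_y_x   # joint p(x~, y), ...
        q_y_t /= q_t[:, None]                      # ... normalized to p(y|x~)
        # Encoder update: p(x~|x) is proportional to p(x~) * exp(-beta * D_KL[p(y|x) || p(y|x~)]).
        d = kl(p_y_x[:, None, :], q_y_t[None, :, :])
        logits = np.log(q_t)[None, :] - beta * d
        q_t_x = np.exp(logits - logits.max(axis=1, keepdims=True))
        q_t_x /= q_t_x.sum(axis=1, keepdims=True)
    return q_t_x

encoder = iterative_ib()
print(np.round(encoder, 3))   # rows: source symbols x; columns: compressed symbols x~
```

Sweeping \(\beta\) traces out the compression-relevance tradeoff discussed above: small \(\beta\) drives the encoder towards an uninformative mapping, whereas large \(\beta\) yields a nearly deterministic clustering of the source symbols.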
## Appendix D On Variants of IB
To inspire much more work on SemCom and goal-oriented SemCom theory, we highlight below the principles of graph IB (GIB) [402], robust IB (RIB), deterministic IB, and distributed IB (DIB), beginning with GIB.
### _Graph IB (GIB)_
To formally define GIB, which is proposed by the authors of [402], let \(Y\) be the target, \(\mathcal{D}:=\big{\{}(\mathbf{A},X)\big{\}}\) be the input data, where \(\mathbf{A}\) is the graph structure and \(X\) denotes the node features, and let \(Z\) be the representation. GIB optimizes the representation \(Z\) to capture the minimal sufficient information in the input data \(\mathcal{D}\) for predicting the target \(Y\)[402]. To this end, the GIB problem reduces to solving the following optimization problem [402]:
\[\min_{p(Z|\mathcal{D})\in\Omega}\text{GIB}_{\beta}(\mathcal{D},Y;Z):=[-I(Y;Z)+\beta I(\mathcal{D};Z)], \tag{100}\]
where \(\Omega\) represents the search space of the optimal model \(p(Z|\mathcal{D})\)[402].
### _Robust IB (RIB)_
The authors of [108] propose to use a design criterion named RIB to design the goal-oriented SemCom system schematized in [108, Fig. 1]. To define RIB formally, let the RVs \(X\), \(Y\), \(Z\), and \(\hat{Z}\) be the input datum, the target (label), the output of an encoder modeled as \(p_{\phi}(z|x)\), and the output of a demodulator, respectively. From the vantage point of data compression, the optimal \(Z\) can be approximated by optimizing the IB problem [108] such that \(I(Y;\hat{Z})\) is maximized while being subjected to the constraint on the amount of preserved information \(I(X;\hat{Z})\)[108, eq. (5)]:
\[\max_{p_{\phi}(z|x)}I(Y;\hat{Z})-\beta I(X;\hat{Z}). \tag{101}\]
Apart from data compression, another crucial goal-oriented SemCom design criterion is the maximization of the transmission rate, and hence [108, eq. (6)]
\[\max_{p_{\phi}(z)}I(Z;\hat{Z}), \tag{102}\]
where \(p_{\phi}(z)\) is the marginal distribution that depends on the parameters \(\phi\)[108]. Meanwhile, combining (101) and (102) leads to the RIB design principle (or criterion) that is given by [108, eq. (7)]
\[\max_{p_{\phi}(z|x)}I(Y;\hat{Z})+\beta[I(Z;\hat{Z})-I(X;\hat{Z})], \tag{103}\]
where \(\beta\) is fixed and \(\beta\geq 0\).
We now move on to highlight deterministic IB [383].
### _Deterministic IB_
The authors of [383] introduce a modified IB criterion named deterministic IB, which they say better captures the essence of compression than an optimal tradeoff between discarding as many bits as possible and selectively keeping the ones that are most important [383]. Meanwhile, the deterministic IB problem boils down to solving the following optimization problem [383, eq. (8)] (which is stated using the notation of Appendix C):
\[\min_{p(\tilde{x}|x)}H(\tilde{X})-\beta I(\tilde{X};Y), \tag{104}\]
where the deterministic IB optimization of (104) is subjected to the Markov constraint \(\tilde{X}\leftrightarrow X\leftrightarrow Y\)[383].
We now proceed to highlight DIB [383].
### _Distributed IB (DIB)_
To state and discuss the DIB framework [381], we must first consider the distributed learning (e.g., multi-view learning) model depicted in [381, Fig. 1]. Per [381, Fig. 1], \(Y\) is the signal to be predicted and \((X_{1},\ldots,X_{K})\) are the relevant \(K\) views of \(Y\) that could each be useful to understand one or more aspects of it [381]. Accordingly, the relevant observations could be either distinct or redundant. This justifies the assumption \((X_{1},\ldots,X_{K})\) are independent given \(Y\)[381]. This distributed learning problem's problem formulation [381, Section 2] is highlighted below.
Let \(K\in\mathbb{N}_{\geq 2}\) be given and \(\mathcal{K}:=[K]\). Let \((X_{1},\ldots,X_{K},Y)\) be a tuple of RVs that have a joint PMF \(p_{X_{\mathcal{K}},Y}(x_{\mathcal{K}},y):=p_{X_{1},\ldots,X_{K},Y}(x_{1}, \ldots,x_{K},y)\) for \((x_{1},\ldots,x_{K})\in\mathcal{X}_{1}\times\ldots\times\mathcal{X}_{K}\) and \(y\in\mathcal{Y}\), given that \(\mathcal{X}_{k}\) for all \(k\in\mathcal{K}\) and \(\mathcal{Y}\) represent the alphabet of \(X_{k}\) and \(Y\), respectively. Meanwhile, the Markov chain below is assumed to hold for all \(k\in\mathcal{K}\)[381, eq. (3)]:
\[X_{k}\leftrightarrow Y\leftrightarrow X_{\mathcal{K}/k}, \tag{105}\]
i.e., \(p(x_{\mathcal{K}},y)=p(y)\prod_{k=1}^{K}p(x_{k}|y)\) for \(x_{k}\in\mathcal{X}_{k}\) and \(y\in\mathcal{Y}\). The distributed learning problem aims to characterize how the goal variable \(Y\) can be accurately estimated from the observations \((X_{1},\ldots,X_{K})\) when they are processed individually in different encoders [381].
Moreover, let a training dataset \(\{(X_{1,i},\ldots,X_{K,i},Y_{i})\}_{i=1}^{n}\) comprise \(n\) i.i.d. random samples that are drawn from the joint PMF \(p_{X_{\mathcal{K}},Y}\), which is assumed to be given [381]. The \(k\)-th encoder observes only the sequence \(X_{k}^{n}\), which it would process to generate \(J_{k}=\phi_{k}(X_{k}^{n})\) per the following (possibly stochastic) mapping [381, eq. (4)]:
\[\phi_{k}:\mathcal{X}_{k}^{n}\rightarrow\mathcal{M}_{k}^{n} \tag{106}\]
where \(\mathcal{M}_{k}^{n}\) denotes an arbitrary set of descriptions [381]. Using \(J_{\mathcal{K}}:=(J_{1},\ldots,J_{K})\) as inputs, a (possibly stochastic) decoder \(\psi(\cdot)\) processes all the inputs and returns \(\hat{Y}^{n}\) (an estimate of \(Y^{n}\)) as [381, eq. (5)]
\[\psi:\mathcal{M}_{1}^{n}\times\ldots\times\mathcal{M}_{K}^{n}\rightarrow\hat{ \mathcal{Y}}^{n}. \tag{107}\]
For the mapping in (107), the accuracy of \(\hat{Y}^{n}\) is quantified in terms of _relevance_[381]. Relevance is defined as the information that the descriptions \(\phi_{1}(X_{1}^{n}),\ldots,\phi_{K}(X_{K}^{n})\)_collectively preserve_ about \(Y^{n}\) and is given by [381, eq. (6)]
\[\Delta^{(n)}(p_{X_{\mathcal{K}},Y}):=\frac{1}{n}I_{p_{X_{\mathcal{K}},Y}}(Y^{n },\hat{Y}^{n}), \tag{108}\]
where \(\hat{Y}^{n}:=\psi(\phi_{1}(X_{1}^{n}),\ldots,\phi_{K}(X_{K}^{n}))\) and the subscript \(p_{X_{\mathcal{K}},Y}\) implies that the mutual information is computed w.r.t. the joint distribution \(p_{X_{\mathcal{K}},Y}\)[381].
Should the encoder mappings \(\{\phi_{k}\}_{k=1}^{K}\) be unconstrained, maximizing the RHS of (108) would lead to overfitting [381]. Overfitting can be overcome by using better generalizability, which is usually obtained by constraining the _complexity of the encoders_[381]. To this end, the encoding function \(\phi_{k}(\cdot)\) of encoder \(k\in\mathcal{K}\) needs to fulfill [381, eq. (7)]
\[R_{k}\geq\frac{1}{n}\log|\phi_{k}(X_{k}^{n})|, \tag{109}\]
where (109) must be satisfied for all \(X_{k}^{n}\in\mathcal{X}_{k}^{n}\)[381]. Meanwhile, optimal performance for distributed learning can be cast as finding the region of all simultaneously achievable _relevance-complexity tuples_[381], as defined below.
**Definition 31** ([381, Definition 1]).: _A tuple \((\Delta,R_{1},\ldots,R_{K})\) is termed achievable if there exists a training set of size \(n\), encoders \(\phi_{k}\) for \(k\in[K]\), and a decoder \(\psi\) such that [381, eqs. (8) and (9)]_
\[\Delta \leq\frac{1}{n}I_{p_{X_{\mathcal{K}},Y}}\big{(}Y^{n},\psi(\phi_{ 1}(X_{1}^{n}),\ldots,\phi_{K}(X_{K}^{n}))\big{)} \tag{110a}\] \[R_{k} \geq\frac{1}{n}\log|\phi_{k}(X_{k}^{n})|,\ \ \forall k\in\mathcal{K}. \tag{110b}\]
_The relevance-complexity region \(\mathcal{RI}_{\text{DIB}}\) is expressed by the closure of all attainable tuples \((\Delta,R_{1},\ldots,R_{K})\)[381]._
The region \(\mathcal{RI}_{\text{DIB}}\) is characterized by the following theorem.
**Theorem 12** ([381, Theorem 1]).: _The relevance-complexity region \(\mathcal{RI}_{\text{DIB}}\) of a distributed learning problem with a joint PMF \(p_{X_{\mathcal{K}},Y}\) - for which the Markov chain of (105) holds - is expressed by the union of all tuples \((\Delta,R_{1},\ldots,R_{K})\in\mathbb{R}_{+}^{K+1}\) fulfilling, for all \(\mathcal{S}\subseteq\mathcal{K}\), [381, eq. (14)]:_
\[\Delta\leq\sum_{k\in\mathcal{S}}\big{[}R_{k}-I(X_{k};U_{k}|Y,T)\big{]}+I(Y;U_{ \mathcal{S}^{c}}|T), \tag{111}\]
for some PMFs \(\{p_{U_{1}|X_{1},T},\ldots,p_{U_{K}|X_{K},T},p_{T}\}\) with a joint distribution of the form [381, eq. (15)]:_
\[p_{T}(t)\,p_{Y}(y)\prod_{k=1}^{K}p_{X_{k}|Y}(x_{k}|y)\prod_{k=1}^{K}p_{U_{k}|X_{k},T}(u_{k}|x_{k},t). \tag{112}\]
Proof.: The proof is provided in [381, Section 7.1].
Theorem 12 extends the single encoder IB principle to the distributed learning model with K encoders, which is dubbed the DIB problem [381].
## Acknowledgment
The first author acknowledges Dr. Hamid Gharavi (_Life Fellow, IEEE_) of NIST, MD, USA for funding and leadership support.
## Disclaimer
The identification of any commercial product or trade name does not imply endorsement or recommendation by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.
|
2310.14832 | Feature Spectrum Topology | Topology is a fundamental aspect of quantum physics, and it has led to key
breakthroughs and results in various fields of quantum materials. In condensed
matters, this has culminated in the recent discovery of symmetry-protected
topological phases. However, symmetry-based topological characterizations rely
heavily on symmetry analysis and are incapable of detecting the topological
phases in systems where the symmetry is broken, thus missing a large portion of
interesting topological physics. Here, we propose a new approach to
understanding the topological nature of quantum materials, which we call
feature spectrum topology. In this framework, the ground-state is separated
into different partitions by the eigenspectrum of a feature, a particular
chosen internal quantum degree of freedom, such as spin or pseudo-spin, and the
topological properties are determined by analysis of these ground-state
partitions. We show that bulk-boundary correspondence guarantees gapless
spectral flows in either one of the energy or feature spectrum. Most
importantly, such 'feature-energy duality' of gapless spectral flows serves as
a fundamental manifestation of a topological phase, thereby paving a new way
towards topological characterizations beyond symmetry considerations. Our
development reveals the topological nature of a quantum ground state hidden
outside symmetry-based characterizations, hence, providing a platform for a
more refined search of unconventional topological materials. | Baokai Wang, Yi-Chun Hung, Xiaoting Zhou, Tzen Ong, Hsin Lin | 2023-10-23T11:50:41Z | http://arxiv.org/abs/2310.14832v1 | # Feature Spectrum Topology
###### Abstract
Topology is a fundamental aspect of quantum physics, and it has led to key breakthroughs and results in various fields of quantum materials. In condensed matter, this has culminated in the recent discovery of symmetry-protected topological phases. However, symmetry-based topological characterizations rely heavily on symmetry analysis and are incapable of detecting the topological phases in systems where the symmetry is broken, thus missing a large portion of interesting topological physics. Here, we propose a new approach to understanding the topological nature of quantum materials, which we call feature spectrum topology. In this framework, the ground-state is separated into different partitions by the eigenspectrum of a feature, a particular chosen internal quantum degree of freedom, such as spin or pseudo-spin, and the topological properties are determined by analysis of these ground-state partitions. We show that bulk-boundary correspondence guarantees gapless spectral flows in either the energy or the feature spectrum. Most importantly, such 'feature-energy duality' of gapless spectral flows serves as a fundamental manifestation of a topological phase, thereby paving a new way towards topological characterizations beyond symmetry considerations. Our development reveals the topological nature of a quantum ground state hidden outside symmetry-based characterizations, hence providing a platform for a more refined search of unconventional topological materials.
## I. Introduction
The study of ground state topology of quantum systems has been an important field in condensed matter physics, starting with the integer quantum Hall effect (IQHE) and culminating in the discovery of the quantum spin Hall effect (QSHE) and \(\mathds{Z}_{2}\) topological insulator (TI), and the corresponding classification of an entire family of symmetry-protected topological (SPT) materials [1, 2, 3]. Currently, SPT systems and materials are efficiently categorized and predicted using symmetry-based frameworks, such as topological quantum chemistry and symmetry indicators [4, 5, 6, 7, 8, 9, 10], which accurately captures the band inversions at high-symmetry points and the type of topological invariant. However, this framework is inapplicable in the presence of symmetry-breaking perturbations or generic band inversions at non-symmetry-related \(k\)-points. Hence, we are led to consider the possibility of characterizing the non-trivial topological nature of quantum systems in the absence of protecting symmetries.
The close relationship between the QSHE and 2D \(\mathds{Z}_{2}\) TI provided a key insight during the formulation of this non-symmetry protected (nSPT) topological characterization [11, 12, 13, 14, 15], especially the recent advancement in characterizing spin-Chern insulators with broken spin-\(U(1)\) symmetry or time-reversal symmetry using projective operators [13, 14, 16, 17, 21, 22]. In general, the standard paradigm for understanding topology is viewed through the lens of the band structures in the energy domain. The topological invariants characterizing these topological states can be understood as arising from the Berry phase of the occupied wave function, i.e., the geometric evolution of the internal quantum degrees of freedom of the electron. In this picture, the topology is embedded in the fiber bundle constituted by the wavefunctions of valence electrons, i.e., \(P\hat{H}P\). Here, \(P\) is the projection operator to the occupied space, and \(\hat{H}\) is the system's Hamiltonian. However, electrons in solids are characterized by all the internal quantum degrees of freedom, including spin, pseudo-spin, and lattice symmetry-induced characteristics (e.g., Mirror operator \(\hat{M}_{z}\)). This leads us to consider using the projection of a particular quantum number (\(\hat{O}\)) to partition the occupied electronic states, thereby obtaining the spectrum of the operator, \(P\hat{O}P\). The intrinsic topology of each sector originates from the non-trivial Berry curvature of the underlying quantum number, and the overall topological phase of the entire system is characterized by all the constituent sectors, suggesting that the set of topological characters of the different sectors is a more fundamental building block underlying the topology of a quantum system. In sharp contrast to symmetry-protected topological phases, these feature topological invariants are robust in the presence of perturbative symmetry-breaking fields while providing richer topological information. We term this characterization scheme _'feature spectrum topology'_ and introduce it in detail below. The conventional SPT phases can be well incorporated into the feature spectrum topology and can be assigned a topological invariant from the perspective of feature spectrum topology (see Fig. 1(**a**)).
## II. Feature Spectrum Topology
An electron in a solid carries multiple properties, such as spin, orbital angular momentum, and sub-lattice index (pseudo-spin), that can be used to categorize electronic states. Formally, we call the corresponding quantum operator \(\hat{O}\) a _'feature'_ and \(P\hat{O}P\) a _'feature operator'_ for convenience. Note that the choice of the feature is not unique. A schematic diagram for the feature spectrum topology is illustrated in Fig. 1(**b**). Diagonalization of the projected operator produces the feature spectrum consisting of '_sectors_', indexed by \(n\), with feature spectral values \(O_{n}({\bf k})\) at every \(k\)-point, and its
corresponding eigenfunction \(\tilde{\psi}_{O_{n}}\). It should be noted that the feature is ill-defined when the conduction band and the valence band meet at degenerate points, where the occupied states cannot be uniquely defined.
The feature operator \(P\hat{O}P\) partitions the occupied states into sectors according to its eigenspectrum. When \(\hat{O}\) is a symmetry of the system, \(\{\tilde{\psi}_{O_{n}}|\forall n\}\) are eigenstates of the Hamiltonian. In that case, the feature \(P\hat{O}P\) sorts the occupied states into sectors according to their eigenvalues of the symmetry \(\hat{O}\), and the feature spectrum consists of flat bands corresponding to the eigenvalues of the symmetry \(\hat{O}\). In more general situations when \(\hat{O}\) is not a symmetry, \(\{\tilde{\psi}_{O_{n}}|\forall n\}\) are linear combinations of the eigenstates of the Hamiltonian instead. The feature spectrum consists of dispersive bands with their eigenvalues deviating from the eigenvalues of the symmetry \(\hat{O}\) (see Fig. 1(**c**)). However, the topology of each sector can still be characterized by calculating the Wilson loop for each feature sector as long as different sectors remain separated[18; 19],
\[W[\gamma]_{O}=Tr\left[\mathcal{P}\,exp\left(i\oint_{\gamma}\tilde{A}_{\mu}dx^ {\mu}\right)\right]. \tag{1}\]
Here, the Berry connection for the feature spectrum is given by \(\tilde{A}_{\mu}=i\langle\tilde{\psi}_{O_{n}}|\nabla_{\mu}|\tilde{\psi}_{O_{n}}\rangle\). The feature spectrum Wilson loop, \(W[\gamma]_{O}\), now allows us to study the topology of the spectral flow of different quantum degrees of freedom in the ground state.
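As a concrete illustration of how the feature spectrum and the per-sector topology can be evaluated numerically, the Python sketch below diagonalizes \(P\hat{S}_{z}P\) in the occupied space of a generic BHZ-like four-band toy model on a square lattice and computes the Chern number of each feature sector with a discretized (link-variable) Berry-flux formula, which returns the same integer as the winding of the Wilson loop in Eq. (1). The toy Hamiltonian, its parameters, and the small \(S_{z}\)-breaking perturbation are illustrative assumptions of ours and are not any of the models studied in this work.

```python
import numpy as np

# Pauli matrices; the four-band basis is ordered as (spin) x (orbital).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

def hamiltonian(kx, ky, m=-1.0, g=0.2):
    """BHZ-like toy model; the g-term breaks S_z conservation (illustrative choice)."""
    dx, dy = np.sin(kx), np.sin(ky)
    dz = m + 2.0 - np.cos(kx) - np.cos(ky)
    h = dx * kron(sz, sx) + dy * kron(s0, sy) + dz * kron(s0, sz)
    return h + g * np.sin(kx) * kron(sx, sx)      # S_z-breaking perturbation

Sz = kron(sz, s0)              # the chosen feature O = S_z
N = 60                         # k-grid: the BZ is sampled at k = 2*pi*n/N
ks = 2.0 * np.pi * np.arange(N) / N

feature_vals = np.zeros((2, N, N))
sector_states = np.zeros((2, N, N, 4), dtype=complex)
for i, kx in enumerate(ks):
    for j, ky in enumerate(ks):
        _, v = np.linalg.eigh(hamiltonian(kx, ky))
        occ = v[:, :2]                            # half filling: two occupied bands
        pop = occ.conj().T @ Sz @ occ             # feature operator P O P in the occupied space
        o, w = np.linalg.eigh(pop)
        feature_vals[:, i, j] = o
        sector_states[:, i, j, :] = (occ @ w).T   # sector eigenstates in the full basis

def sector_chern(states):
    """Chern number of one feature sector from the discretized (link-variable) Berry flux."""
    flux = 0.0
    for i in range(N):
        for j in range(N):
            u00, u10 = states[i, j], states[(i + 1) % N, j]
            u11, u01 = states[(i + 1) % N, (j + 1) % N], states[i, (j + 1) % N]
            flux += np.angle(np.vdot(u00, u10) * np.vdot(u10, u11)
                             * np.vdot(u11, u01) * np.vdot(u01, u00))
    return flux / (2.0 * np.pi)

print("feature spectrum range:", feature_vals.min(), feature_vals.max())
print("sector Chern numbers  :", [round(sector_chern(sector_states[s])) for s in (0, 1)])
```

With the \(S_{z}\)-breaking term switched on, the computed feature eigenvalues deviate from \(\pm 1\) (cf. the dispersive feature bands sketched in Fig. 1(**c**)), while the two sector Chern numbers remain quantized and opposite as long as the feature spectrum stays gapped, in line with the discussion above.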
## III. The Feature-Energy Duality
A non-zero winding number, \(\mathds{Z}_{O}\), for \(W[\gamma]_{O}\) identifies a non-trivial topology corresponding to the feature \(\hat{O}\) in the quantum ground state. Since the lab-reference frame is a topologically-trivial vacuum, bulk-boundary correspondence indicates that there have to be gapless modes in either the energy spectrum or the feature spectrum at the boundary to connect the two topologically distinct spaces. We term this result _'feature-energy duality'_, which is the most important outcome of this work. We discuss consequences of this feature-energy duality and how it is reflected in the bulk-boundary correspondence of feature-topological systems and also during a topological quantum phase transition in the following section.
The feature-energy duality is concisely captured in the following scenario: a SPT phase hosts gapless energy bands on the boundary when the protecting symmetry is intact. When a weak symmetry-breaking perturbation is applied, the gapless energy bands become gapped, but the topological invariants in the feature spectrum topology remain unchanged, and in this case, the feature spectrum on the boundary becomes gapless. That is, either the energy or feature spectrum on the edge will have to be gapless for a system to be in a topologically
Figure 1: **Feature Spectrum Topology.** (**a**) Symmetry-protected topology (SPT) focuses on the occupied states in the energy domain, i.e., \(\mathrm{eig}(P\hat{H}P)\). In contrast, feature spectrum topology (FST) separates the ground state into different sectors via the eigenspectrum of the projected quantum number, i.e. \(\mathrm{eig}(P\hat{O}P)\), and analyzes the sectors to classify the ground state topology. (**b**) The quantum states in the Brillouin zone, like fishes living in a water tank, demonstrate their non-trivial topology through their winding number. The feature spectrum acts like a filter, reflecting the topological nature that is only manifested through the behavior of the fish subgroup, which is invisible to the whole fish group. As shown in the diagram, the whole group of fish does not have a net winding number. However, each subgroup of fish, partitioned by their _features_ like body color \(\hat{O}_{c}\) or patterns \(\hat{O}_{p}\), can carry a non-vanishing winding number. If the fishes are partitioned by their body color, the yellow fishes have a net winding number of \(2\) while the green fishes have a net winding number of \(-2\). If the fishes are partitioned by their pattern, the fishes with red eyes have a net winding number of \(1\) while the fishes with red back strips have a net winding number of \(-1\). (**c**) The feature \(\hat{O}\), representing a chosen internal quantum degree of freedom such as spin \(\hat{S}_{z}\), separates the occupied space into sectors. The separation is demonstrated by the _feature spectrum_\(\langle P\hat{O}P\rangle\), on which the bands in each sector can carry Berry curvatures manifesting non-trivial ground state topology.
non-trivial phase. Clearly, SPT phases can be described by topological invariants of the feature spectrum, and can thus be included within this new feature spectrum topology framework.
### A. Topological Quantum Phase Transition
The feature spectrum topology extends the concept of topology from the bulk energy band structure to the bulk feature spectrum. Correspondingly, a topological phase transition may occur via the emergence of gapless modes in the bulk feature spectrum rather than just in the bulk band structure. More specifically, a distinctive signature of a topological phase is the inability to adiabatically deform its ground state to the trivial phase. Distinctly different from the typical topological quantum phase transition (TQPT) (Fig. 2(**a**)) induced by closing of the bulk energy band gap, feature spectrum topology allows for a topological phase transition to occur via gap closure in the bulk feature spectrum while keeping the bulk energy bands gapped (Fig. 2(**b, c**)). Similar to how an energy gap in the band structure protects the topology of SPT states, the gap in the bulk feature spectrum prevents the topology of each feature sector from deforming adiabatically to the trivial case. Therefore, feature spectrum topology allows for a TQPT to occur via two distinctly different routes, the closing of a gap in either the energy band structure or the bulk feature spectrum (Fig. 2(**c**)); with the corresponding emergence of gapless nodes in the bulk feature spectrum instead of gapless nodes in the bulk energy spectrum[20].(See Supplementary Discussion and Fig. S2.)
### B. Bulk-Boundary Correspondence
In feature spectrum topology, topological phase transition can occur via the gap closing in the energy or feature band structures. Correspondingly, the bulk-boundary correspondence manifests either through the gapless energy band structures or gapless feature spectrum on the boundaries (Fig. 2(**d-f**)). The occupied energy states are partitioned in the feature spectrum into different feature sectors, which can be regarded as distinct topological objects sharing the same Brillouin zone. These feature sectors may carry nontrivial topological properties; hence, bulk-boundary correspondence will also apply to the feature spectrum. The non-trivial topology carried by a sector manifests through the existence of a gapless feature spectrum on the boundary when the boundary energy band structure is gapped (Fig. 2(**e, f**)), indicating the existence of boundary states connecting different feature sectors. The expectation value of the feature in the boundary energy band gradually changes from one sector to another as the feature texture evolves in the gapless boundary feature bands (Fig. 2(**e, f**)).
The standard SPT classification is inapplicable in the presence of a symmetry-breaking field, such as applying an out-of-plane electric field to a mirror Chern insulator (See Supplementary Discussion and Fig.S4). However, the feature spectrum allows us to study and distinguish the non-trivial ground state topology in such quantum systems from the trivial lab vacuum, in which gapless modes emerge in the boundary feature spectrum to connect the two topologically-inequivalent ground states. When the symmetry-breaking field is switched off, the standard SPT classification applies again, and the corresponding gapless modes are restored in the boundary energy band structure (Fig. 2(**d**)). Note that the feature spectrum is ill-defined at the \(k\)-point corresponding to the Dirac node in the boundary band structure, thereby allowing a transition between the non-trivial bulk state and the trivial vacuum. We term this simultaneous opening/ closing of a gap in the energy/ feature spectrum on the boundary a '_feature-energy duality_'. The feature-energy duality exceeds the previous generalized Laughlin argument in the quantum spin Hall effect [21; 22] by
Figure 2: **Feature-Energy Duality.** A topological phase transition can occur in two ways. One way is to close the bulk energy band gap as shown in (**a**). The other way is to maintain the bulk energy band gap (**b**) while closing the bulk feature band gap (**c**). Feature-energy duality dictates that the bulk-boundary correspondence manifestation of the non-trivial bulk topology can occur in two ways. One way is through the appearance of gapless boundary states in the energy spectrum, connecting the valence bands and conduction bands, as shown in (**d**). The other way is to have gapped boundary states in the energy spectrum(**e**), but exhibit gapless edge states in the feature spectrum as shown in (**f**). Furthermore, the feature spectral flow reflects how the boundary states connect different feature sectors in the feature spectrum. The evolution marked by 1 \(\sim\) 3 shows how the texture of the feature, such as spin texture, changes on the boundary energy and feature spectrum. As shown in the diagram, the feature texture gradually changes from the \(+1\) sector (1; blue) to the intermediate state with the feature spectral value of 0 (2; magenta), and finally to the \(-1\) sector (3; red).
providing more insight into the origin of the gapless boundary spectral flows in the feature spectrum, enabling applications to broader cases of unconventional topological materials.
## IV IV. Feature Chern Insulators
When the ground states are partitioned by the feature spectrum, each feature sector is regarded as a distinct topological object. If the feature sectors can be characterized by a Chern number, we term this system a _'feature Chern insulator'_[23]. We call these integers _the feature Chern numbers_ in analogy to the spin Chern numbers in which \(\vec{S}\cdot\hat{n}\) is used as the feature [13, 16, 17, 21, 22]. Further, the feature spectrum topology exhibits more details about the topological nature of the system, which are invisible to the traditional topological characterization method. Here, we take a high-pseudo-spin Chern insulator to demonstrate the feature Chern insulator.
### (High-)Pseudo-Spin-Chern Insulators
A high-pseudo-spin Chern insulator can be constructed by stacking 2D Z\({}_{2}\) TIs [24]. The Hamiltonian in Eq. 2 describes a high-pseudo-spin Chern insulator in a thin film of a square lattice, with the layer, spin, orbital, and site indexed by \(n,\alpha,\gamma,(i,j)\), respectively.
\[H^{0} = t_{k}\sum_{\langle ij\rangle,n}c^{\dagger}_{i,n,\alpha,\gamma} \,\mathbbm{1}_{s}\mathbbm{1}_{\tau}\,c_{j,n,\alpha^{\prime},\gamma^{\prime}}+i \lambda_{so}\sum_{\langle ij\rangle,n}c^{\dagger}_{i,n,\alpha,\gamma}(\vec{s} _{\alpha\alpha^{\prime}}\times\hat{d}_{ij})\cdot\hat{z}\,\tau^{z}_{\gamma \gamma^{\prime}}c_{j,n,\alpha^{\prime},\gamma^{\prime}} \tag{2}\] \[+t_{intra}\sum_{i,n}c^{\dagger}_{i,n,\alpha,\gamma}\,\mathbbm{1} _{s}\tau^{x}_{\gamma\gamma^{\prime}}\,c_{i,n,\alpha^{\prime},\gamma^{\prime}} +M\sum_{\langle\langle ij\rangle\rangle,n}c^{\dagger}_{i,n,\alpha,\gamma}\, \mathbbm{1}_{s}\tau^{x}_{\gamma\gamma^{\prime}}\,c_{j,n,\alpha^{\prime}, \gamma^{\prime}}\] \[+t_{z}\sum_{i,n}c^{\dagger}_{i,n,\alpha,\gamma}\,\mathbbm{1}_{s} \tau^{x}_{\gamma\gamma^{\prime}}\,c_{i,n+1,\alpha^{\prime},\gamma^{\prime}}.\]
The first term is nearest-neighbor in-plane hopping, the second term describes SOC with \(\hat{d}_{ij}\) pointing from site \(i\) to \(j\), the third term is an intra-unit cell inter-orbital hopping, the fourth term is an in-plane next-nearest-neighbor hopping, and the last term couples the 2D layers together along the \(z\)-axis. The parameters are chosen such that the 3D system is a weak TI.
We consider a tri-layer thin film, whose band structures are shown in Fig. 3(**a**). The chosen feature \(s^{z}\otimes\tau^{x}\otimes\mathbbm{1}_{3\times 3}\) (\(s\), \(\tau\), \(\mathbbm{1}\) act on the spin, orbital, and layer spaces, respectively) captures the topology of the system, encoding more information than the Z\({}_{2}\) classification. The feature spectrum presented in Fig. 3(**b**) consists of two separate branches, corresponding to the two feature sectors. The Wilson loop for the two sectors is plotted in Fig. 3(**c**), from which we read off that the pseudo-spin Chern number is 3. As a result, the system is Z\({}_{2}\) non-trivial with a single Dirac cone on its edges (see Fig. S1 in the supplementary materials). Upon applying a weak Zeeman field, the Dirac cone becomes gapped, while three gapless chiral edge states (due to the three layers in the system) emerge in the edge feature spectrum (see Fig. 3(**d, e**)). This high-pseudo-spin Chern insulator exhibits the feature-energy duality through its gapless edge energy or feature band structures. Moreover, it showcases that the feature spectrum topology is applicable even in the presence of a symmetry-breaking field.
Two more examples of feature Chern insulators can be found in the supplementary materials.
### V. Feature Weyl Semimetals
In the context of feature spectrum topology, besides the feature insulators discussed in the last section, _feature Weyl semimetals_ constitute another class of topological materials[25, 26]. As the name suggests, a feature Weyl semimetal hosts Weyl nodes in its bulk feature spectrum. The feature Weyl semimetal offers another interpretation of the topological insulators in 3D.
Using the antiferromagnetic TI (AFM TI) [27], whose non-trivial bulk topology is protected by a combination of time-reversal symmetry and a translation, as an example, we discuss a feature Weyl semimetal and how its feature Weyl nodes contain information about the topological phases in the system[28, 16].
The AFM TI can be described by the Hamiltonian in Eq. 2, \(H^{0}\), with an additional staggered Zeeman field of strength \(h_{z}\),
\[H^{\text{AFM TI}}=H^{0}\sigma^{0}+h_{z}\sum_{i,n}c^{\dagger}_{i,n,\alpha, \gamma}s^{z}_{\alpha\alpha^{\prime}}\tau^{z}_{\gamma\gamma^{\prime}}\sigma^{z }_{nn}\,c_{i,n,\alpha^{\prime},\gamma^{\prime}}. \tag{3}\]
With an appropriate choice of the parameters, the system is in a non-trivial AFM TI phase with fully gapped surface modes on the \((001)\) surface and symmetry-protected gapless Dirac nodes on the \((100)\) and \((010)\) surfaces.
The feature chosen here is \(\mathbbm{1}_{4\times 4}\otimes\sigma^{z}\), where \(\sigma\) acts on the layer degree of freedom. Fig. 4(**b**) shows that the two feature spectrum sectors (positive and negative), \(\mathbbm{1}_{4\times 4}\otimes\sigma^{z}=\pm 1\), have a non-trivial Chern number in the \(k_{z}=0\) slice, whilst the \(k_{z}=\pi\) slice is trivial. This agrees with the \(\mathds{Z}_{2}\) classification of the AFM TI, which has a non-trivial \(\mathds{Z}_{2}\) index in the \(k_{z}=0\) plane. Hence, there have to be two Weyl nodes with opposite chiral charges, located at \(\pm k_{z,c}\) in the bulk feature spectrum, to account for the change of the feature Chern number. This feature Weyl semimetal phase provides an alternative perspective on the AFM TI phases[16; 28]. (See Supplementary Discussion and Fig. S6 and S7.)
Upon applying a symmetry-breaking Zeeman field, Fig. 4(**c**) shows a gap opening up in the surface Dirac nodes on the (010) surface, while the feature edge states remain gapless (Fig. 4(**d**)). The gapless feature edge states for \(|k_{z}|\leq k_{z,c}\) are analogous to the Fermi arcs that connect Weyl nodes in 3D Weyl semimetals.
## VI. Role of Spin-Orbit Coupling
All the previous discussions on feature insulators were focused on time-reversal odd features, which gave a feature Chern number classification of the ground-state topology. In this section, we propose a time-reversal invariant feature, \(\mathbf{L}\cdot\mathbf{S}\), which does not allow for a non-zero feature Chern number as the sectors of \(P\mathbf{L}\cdot\mathbf{S}P\) are time-reversal symmetric. We, therefore, have to revert to a \(\mathds{Z}_{2}\)-type classification based upon the feature Wilson loop to track the topological phase. As we show below, the feature spectrum is not only useful in characterizing non-conventional topological phases but also helpful for identifying the key component that induces the topological nature of a material.
We choose \(\mathrm{Bi}_{2}\mathrm{Se}_{3}\), a typical example of a 3D TI, for this discussion. We employ a realistic tight-binding model that comprises the \(s,p\) orbitals of the \(\mathrm{Bi}\) and \(\mathrm{Se}\) atoms[29]. SOC causes a band inversion at \(\Gamma\) near the Fermi level (see Fig. 5**(a)**), and the SOC strength is varied to track the evolution of the band structures, with a band gap closing and reopening at 45% SOC. Meanwhile, we track the evolution of the feature spectrum \(P\mathbf{L}\cdot\mathbf{S}P\). There are three sectors in the feature spectrum, corresponding to the states \(|J=\frac{1}{2},L=0,S=\frac{1}{2}\rangle\), \(|J=\frac{1}{2},L=1,S=\frac{1}{2}\rangle\) and \(|J=\frac{3}{2},L=1,S=\frac{1}{2}\rangle\), respectively. Our calculation finds that all three sectors are trivial before the band inversion, at small SOC strength. After the band inversion, the sector associated with the feature \(|J=\frac{1}{2},L=1,S=\frac{1}{2}\rangle\) becomes nontrivial. This sector, consisting of the feature bands 1-6, is labeled as sector I in Fig. 5**(b)**. From the \(k_{y}\)-dependence of the feature Wannier center of this sector in the \(k_{z}=0\) and \(k_{z}=\pi\) planes, we find that it is \(\mathds{Z}_{2}\) topologically nontrivial, as shown in Fig. 5**(c, d)**. This demonstrates that the bands in the sector \(|J=\frac{1}{2},L=1,S=\frac{1}{2}\rangle\) are responsible for the nontrivial topology of \(\mathrm{Bi}_{2}\mathrm{Se}_{3}\) in the strong TI phase, in agreement with previous analysis [30]. It clearly shows that the spin-orbit feature spectrum \(P\mathbf{L}\cdot\mathbf{S}P\) is a more refined probe of the ground-state wave function topology.
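The three feature sectors quoted above follow directly from atomic angular-momentum algebra, since \(\mathbf{L}\cdot\mathbf{S}=\tfrac{1}{2}[J(J+1)-L(L+1)-S(S+1)]\). The short sketch below (an illustration of ours, not the tight-binding code of Ref. [29]) builds \(\mathbf{L}\cdot\mathbf{S}\) for a \(p\) orbital coupled to spin-1/2 and confirms the eigenvalues \(-1\) for the \(J=\frac{1}{2},L=1\) sector and \(+\frac{1}{2}\) for the \(J=\frac{3}{2},L=1\) sector; the \(s\)-orbital sector trivially has \(\mathbf{L}\cdot\mathbf{S}=0\).

```python
import numpy as np

# Orbital angular momentum matrices for l = 1 in the |m = 1, 0, -1> basis
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Lx, Ly = (Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / (2j)

# Spin-1/2 matrices S = sigma / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

LS = sum(np.kron(L, s / 2) for L, s in zip([Lx, Ly, Lz], [sx, sy, sz]))
print(np.round(np.linalg.eigvalsh(LS), 6))
# -> [-1. -1.  0.5  0.5  0.5  0.5]: the J = 1/2 and J = 3/2 sectors of the p shell
```

In the full tight-binding basis the corresponding feature operator would be a direct sum of such blocks (plus zeros for the \(s\) orbitals); the exact basis ordering depends on the model of Ref. [29].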
Figure 3: **Feature Chern Insulator: High-pseudo-spin Chern insulator.** In this work, we use \(t_{k}=-0.5\), \(\lambda_{SO}=-1\), \(M=7\), \(t_{z}=1\) and \(t_{intra}=-26.85\) for the model described in Eq. 2. (**a**) The band structure and (**b**) the feature spectrum \(\langle S_{z}\otimes\pi_{x}\otimes\sigma_{0}\rangle\) of the high-pseudo-spin-Chern insulator, on which the feature spectrum topology can be defined since they are both gapped. (**c**) The Wilson loop for the two sectors in (**b**), where the +(-) indicates the sector with positive(negative) feature spectral values. (**d**) The nanoribbon band structure of the high-pseudo-spin-Chern insulator along the \(\hat{y}\)-direction, in which the edge states on one edge are \(S_{z}\otimes\pi_{x}\otimes\sigma_{0}\) resolved. (**e**) The nanoribbon feature spectrum of the high-pseudo-spin-Chern insulator along the \(\hat{y}\)-direction with a Zeeman field in \((\hat{x}+\hat{y})\)-direction. The edge feature spectra on the same edge in (**d**) are marked in magenta.
Figure 4: **Feature Weyl semimetals in the AFM TI.** (**a**) The bulk energy band structure of the AFM TI, where \(h_{z}=0.05\) is used for the model described in Eq. 3. (**b**) The bulk feature spectrum of the AFM TI around \(\Gamma\) point. The red (blue) circle indicates the feature Weyl node with a positive (negative) feature chiral charge on the \(+\) sector. (**c**) The energy band structure of the \((010)\) slab of the AFM TI. An additional in-plane Zeeman field in \((\hat{x}+\hat{y})\)-direction is added to gap the surface bands. (**d**) The feature spectrum corresponds to (**c**). The surface Fermi arc in the feature spectrum connects the feature Weyl nodes.
## VII. Discussion
We have laid out the concept of feature spectrum topology based on projected quantum numbers, which we have demonstrated to be a more subtle and refined probe of the ground-state topology of quantum systems. The starting point of feature spectrum topology is to extend topological considerations to the feature domain, aiming to establish a more comprehensive framework of topological characterization. This design principle lets the feature spectrum topology encompass the symmetry-based topological schemes (STS), such as symmetry indicators and topological quantum chemistry. In other words, all the topological phases recognized by STS can be incorporated into the feature spectrum; conversely, the feature spectrum topology identifies many more topological phases that are invisible to STS, which can be understood in two respects.
On the one hand, the feature spectrum topology applies to systems whose symmetry is broken, where STS fails to characterize the ground-state topology. One issue identified in the study of TCIs is the lack of gapless states on boundaries that do not possess the required crystalline symmetry. We can apply the feature-energy duality here. When the energy band structure on the boundary is gapped, the feature spectrum for the boundary becomes gapless. This suggests that the topological boundary states cannot be completely removed but may be hidden within the bulk energy bands. This concept also applies to \(\mathds{Z}_{2}\) TIs, where the boundary states may become gapped due to the introduction of magnetic ions, which breaks time-reversal symmetry. Despite this, the topological boundary states, responsible for the gapless feature spectrum on the boundary, must still exist in some form. Symmetry protection is therefore not necessary for the existence of topological boundary states; rather, it guarantees metallic behavior of the boundary energy bands. Moreover, the presence of different mass terms in these gapped boundary states on different boundaries can give rise to soliton states at the lower-dimensional boundary, producing hinge or corner states via the mass-kink mechanism [31; 32; 33; 34; 35; 36]. Further introducing a symmetry-breaking term in the entire bulk can open a gap in all boundary energy bands and invalidate first-order topological invariants. In this scenario, higher-order topological invariants are introduced [37; 38; 39; 40]. From the perspective of feature spectrum topology, it is natural to allow for such symmetry-breaking terms provided the bulk feature spectrum remains gapped, and the topological invariants associated with the feature sectors remain valid. The gapless feature spectrum on the boundary reflects the fact that the feature Wilson loop winding number \(\mathds{Z}_{O}\) is a first-order topological invariant. Further research is needed to identify the appropriate feature operators for various higher-order and fragile topologies and to determine whether all higher-order TIs can be considered first-order in the feature spectrum topology. Such work would provide a solid foundation for understanding the robustness of these systems from a topological perspective.
On the other hand, the feature spectrum topology captures topological phases whose band inversions occur at generic \(k\)-points. In contrast, STS merely focuses on band inversions at high-symmetry points, missing a great portion of materials with non-trivial ground-state topology. The feature spectrum topology dramatically expands the pool of topological materials, providing more candidate materials with desirable properties. Previous research focused on materials that host gapless edge bands to seek their applicability in electronics, spintronics, and other applications. The feature spectrum topology reminds us that even a phase without gapless boundary states can have such desirable properties due to its non-trivial bulk topology. Therefore, the importance of the feature-energy duality, as a manifestation of the bulk-boundary correspondence due to the non-trivial bulk topology, cannot be overemphasized. Here, we illustrate this with two examples. Consider a SnTe thin film with an odd number of layers, which is shown to support two pairs of gapless Dirac cones on the edges protected by the mirror symmetry \(M_{z}\). When built into a transistor, the gapless edge states can transport mirror-(spin-) polarized electrons. Applying a vertical electric field breaks the mirror symmetry and opens a gap in the edge bands, making the material seem less interesting [41]. However, the feature-energy duality tells us that the thin film supports gapless edge feature bands that can still give the mirror-Hall effect with a fine-tuned chemical potential, through which electrons with different mirror polarizations accumulate on the two opposite boundaries (See Supplementary Discussion and Fig. S4). Another example is
Figure 5: **Topological characterization of \(\mathrm{Bi_{2}Se_{3}}\).** **(a)** The bulk band structure of \(\mathrm{Bi_{2}Se_{3}}\) in the presence of SOC. **(b)** The bulk feature spectrum \(\langle P\mathbf{L}\cdot\mathbf{S}P\rangle\) of \(\mathrm{Bi_{2}Se_{3}}\) in the presence of SOC. Sector I includes the bands 1-6, as indicated in the figure. **(c, d)** The Wilson loop of sector I on the \(k_{z}=0\) and the \(k_{z}=\pi\) plane, respectively.
the recently proposed \(\alpha\)-Sb, which was previously recognized as a topologically trivial insulator. Still, its non-trivial spin Chern number \(\mathcal{C}_{s}\)=2 leads to the emergence of edge bands, and hence spin accumulation through the nearly quantized spin-Hall current with a fine-tuned chemical potential, providing a platform for efficient spintronics[42] (also see Supplementary Materials and Fig. S5). Similarly, a system can feature orbital Hall conductivity plateaus for the orbital Hall effect if the sectors in the feature spectrum \(\hat{\mathbf{n}}\cdot\mathbf{L}\) carry nonzero Chern numbers. Other cases can even exhibit isospin Hall conductivity plateaus if the sectors in the feature spectrum \(\hat{\mathbf{n}}\cdot\mathbf{\tau}\) carry nonzero Chern numbers, where \(\mathbf{\tau}\) is the isospin indicating the bonding and anti-bonding states [43; 44]. In general, this applies to feature Chern insulators characterized by a physical quantity \(\hat{O}\), resulting in a generalized \(\hat{O}\)-Hall effect with measurable \(\hat{O}\)-conductivity plateaus [13]. The feature spectrum topology provides opportunities for many materials that were previously recognized as topologically trivial to support the feature-Hall effect, establishing a new type of electronic engineering, termed _featuretronics_.
Last but not least, the above example of \(\mathrm{Bi_{2}Se_{3}}\) reveals another merit of the feature spectrum topology: it provides a systematic and rigorous way to analyze which component makes a material topologically nontrivial, surpassing intuitive but less rigorous methods such as inspecting band inversions. A thorough analysis of the feature spectrum thus helps to extract the essence of the topological phase in a material, paving the way toward more refined and efficient topological materials engineering.
## Acknowledgments
**Funding**: H.L. acknowledges the support of the National Science and Technology Council (NSTC) in Taiwan under grant number MOST 111-2112-M-001-057-MY3. The work at Northeastern University is supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0322, and it benefited from computational resources of Northeastern University's Advanced Scientific Computation Center (ASCC) and the Discovery Cluster. **Author contributions**: H.L. designed research; B.W., Y.-C.H., X.Z., T.O., and H.L. performed research; B.W., Y.-C.H., T.O., and H.L. analyzed data; and B.W., Y.-C.H., T.O., and H.L. wrote the paper. **Competing interests:** The authors declare no competing interest. **Data and materials availability:** All the data is available upon request.
|
2301.02911 | Towards early prediction of neurodevelopmental disorders: Computational
model for Face Touch and Self-adaptors in Infants | Infants' neurological development is heavily influenced by their motor
skills. Evaluating a baby's movements is key to understanding possible risks of
developmental disorders in their growth. Previous research in psychology has
shown that measuring specific movements or gestures such as face touches in
babies is essential to analyse how babies understand themselves and their
context. This research proposes the first automatic approach that detects face
touches from video recordings by tracking infants' movements and gestures. The
study uses a multimodal feature fusion approach mixing spatial and temporal
features and exploits skeleton tracking information to generate more than 170
aggregated features of hand, face and body. This research proposes data-driven
machine learning models for the detection and classification of face touch in
infants. We used cross dataset testing to evaluate our proposed models. The
models achieved 87.0% accuracy in detecting face touches and 71.4%
macro-average accuracy in detecting specific face touch locations with
significant improvements over Zero Rule and uniform random chance baselines.
Moreover, we show that when we run our model to extract face touch frequencies
of a larger dataset, we can predict the development of fine motor skills during
the first 5 months after birth. | Bruno Tafur, Marwa Mahmoud, Staci Weiss | 2023-01-07T18:08:43Z | http://arxiv.org/abs/2301.02911v2 | Towards early prediction of neurodevelopmental disorders: Computational model for Face Touch and Self-adaptors in Infants
###### Abstract
Infants' neurological development is heavily influenced by their motor skills. Evaluating a baby's movements is key to understanding possible risks of developmental disorders in their growth. Previous research in psychology has shown that measuring specific movements or gestures such as face touches in babies is essential to analyse how babies understand themselves and their context. This research proposes the first automatic approach that detects face touches from video recordings by tracking infants' movements and gestures. The study uses a multimodal feature fusion approach mixing spatial and temporal features and exploits skeleton tracking information to generate more than 170 aggregated features of hand, face and body. This research proposes data-driven machine learning models for the detection and classification of face touch in infants. We used cross dataset testing to evaluate our proposed models. The models achieved 87.0% accuracy in detecting face touches and 71.4% macro-average accuracy in detecting specific face touch locations with significant improvements over Zero Rule and uniform random chance baselines. Moreover, we show that when we run our model to extract face touch frequencies of a larger dataset, we can predict the development of fine motor skills during the first 5 months after birth.
Keywords: Computer Vision, Autoencoders, Neurodevelopment factors
## 1 Introduction
Analysing body movements in early childhood gives insights into the infant's neurological development, and it can play an essential role in determining if a baby is suffering from injuries in the nervous system or a hereditary disease [1].
Figure 1: An overview of our proposed framework. Spatial, temporal and appearance features are extracted, then they are concatenated with a feature integration layer and a classification approach is used to detect and classify the infant’s face touch, which is subsequently used to predict the neurodevelopmental scores.
## 2 Related Work
In the case of touch, Chen et al. performed a couple of studies focused on the detection of interactions between a caregiver and a child in a controlled environment [12; 13]. Specifically, their research focused on detecting touch from the caregiver to the child in particular locations: head, arms, legs, hand, torso and feet. Their latest study applied two main methods; firstly, they extracted tracking information by detecting the skeleton locations. Secondly, they extracted the infant's location in the image by applying image segmentation using the GrabCut algorithm.
Therefore, although some studies have attempted to tackle some of these issues in infants, most of them have focused on the analysis of general movements instead of specific gestures. This study analyses hand-to-face gestures in infants in greater detail and granularity and proposes novel machine learning models for their automatic detection.
The relationship between face and body touch in infants and how it correlates with cognitive development has not been studied quantitatively and systematically in previous literature. The Mullen Scales of Early Learning (MSEL) are used to measure the cognitive development of infants in five different categories: gross motor (GM), fine motor (FM), visual reception (VR), receptive language (RL) and expressive language (EL) [14, 17]. They are a key measure of the development of the child during the first years after birth. Previous studies have not tackled the relationship between detected features and MSEL scores. We aim to analyse this relationship based on gesture and movement data extracted from the infant.
## 3 Datasets
For our data-driven models, we used two main datasets: BRIGHT [18] and Chambers [4]. A subset of the two datasets was labelled and validated by a psychology expert to be later used for our models. Then, the videos from the BRIGHT dataset were used to evaluate the correlations between face touch dynamics and neurodevelopmental scores.
### BRIGHT dataset
This dataset was provided by the 'removed for anonymous submission' and is part of the studies carried out in the Brain Imaging for Global Health (BRIGHT) Project [18], in which infants from The Gambia and the UK are studied during their first 24 months of life. The initial sample provided included 29 videos of UK infants. From the 29 videos, 23 were selected, as some of the babies were occluded during most of the video runtime. Each video shows the behaviour of one infant under 2 months of age, actively responding to the input given by their mother. The videos were recorded in different rooms, with the infant lying down and a mirror positioned on the wall behind the head of the baby. The camera is static, and the infants generally cover a small portion of the frame but can be located in different parts of the frame. Another complicating factor in this dataset is the mother's presence during the video, sometimes occupying a significant part of the frame with a bigger skeleton and limbs. Also, the fact that the infants are lying down while the camera is facing the front means that the camera generally captures the babies' faces from the side or from below, making them difficult to detect for traditional algorithms. The babies appear rotated in the frames at different angles between 90\({}^{\circ}\) and -90\({}^{\circ}\).
### Chambers dataset
This is an open dataset compiled and generated by Chambers et al [4]. 25 videos were selected based on the age of the infants in the video, by filtering and selecting only the videos with babies younger than 2 months to ensure better consistency with the BRIGHT dataset. The videos show babies lying down on their own and interacting in a natural environment. They could be dancing, playing or rolling over in their crib. The camera is sometimes moving while filming the baby, and the babies generally cover most of the frame. The videos do not feature other people in the frame, but the babies can sometimes move at different angles. Also, resolution varies considerably between videos, with some of them being blurrier and having smaller frames. The babies appear rotated in the frames at different angles between 90\({}^{\circ}\) and -90\({}^{\circ}\).
## 4 Labelling
The labelling process was carried out using a tagging system developed for this research which allowed efficient tagging of the image frames. Also, the tagging was carried out with the support of a psychology expert, who helped by labelling part of the dataset and providing her judgement about the different labelling categories.
As this study aims to detect hand-over-face gestures in infants automatically, the main labelling category needed to differentiate between face touch and no touch in each frame. Therefore, it was defined as follows:
* On Head: From a human perspective, it can be seen that the hand could be touching the head area. In this study, the head area considers any of the following locations or any area enclosed by those locations: eyes, ears, nose, mouth, cheeks, forehead and neck.
* Outside Head: From a human perspective, it can be seen that the hand is not on the head area as defined.
Additionally, we labelled our dataset with the following non-exclusive categories: eyes, ears, nose, mouth and cheeks, as they are the main differentiable parts of the face. The categories were also discussed and agreed upon with the psychology expert to validate their significance and usefulness from the neurodevelopment perspective.
The final labelled dataset sizes and distributions can be seen in Table 1. The final proportion of "on head" versus "outside head" was 29.5% to 70.5%, which is expected for this kind of natural dataset.
## 5 Method
Because of the small size of the labelled dataset, we could not use an end-to-end deep learning model. In this section we present the feature extraction and selection steps and the proposed feature fusion machine learning model.
### Feature extraction of face and body
Our proposed models required spatial and temporal features related to the infants' face touch gestures. The features extracted were selected considering the relationship between the hands of the baby and the face.
#### 5.1.1 Extraction of face and body landmarks
We first extracted basic face and body landmarks.
- Pose coordinates: Positions of the skeleton parts were extracted for every baby and every frame by using the fine-tuned OpenPose [15] model trained by Chambers et al. [4]. Following the implementation of Chambers et al., the raw pose locations were normalised, smoothed and interpolated per video (see the preprocessing sketch after this list).
- Face Region: Based on the extracted pose features and estimated orientation, an accurate estimate of the baby's face location was carried out and the image was cropped in the face region. If no possible face was found in a given frame, the locations of the face of the nearest frames were used as guidance. Where possible, the face region was further aligned based on the locations of the eyes and nose.
- Face coordinates: OpenPose provides general locations of the eyes, nose and ears, but its purpose is centred on getting the whole skeleton and not on specific facial landmarks. Therefore, information about the location of facial features based on 3D-FAN [19] was also used. The faces were extracted from the aligned cropped face regions.
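A minimal sketch of this preprocessing step is given below. It is our own illustration of the normalise-smooth-interpolate procedure rather than the code of Chambers et al., and the column names (`neck_x`, `midhip_x`, ...) are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

def preprocess_keypoints(df, max_gap=5, window=9, poly=2):
    """df holds one row per frame with numeric keypoint columns such as
    'wrist_l_x' or 'neck_y' (hypothetical names).  Zero-confidence detections
    are treated as missing, short gaps are interpolated, trajectories are
    smoothed, and coordinates are normalised by a body-scale reference."""
    df = df.replace(0.0, np.nan)                         # treat misses as NaN
    df = df.interpolate(limit=max_gap, limit_direction="both")
    # Savitzky-Golay smoothing; window must be odd and shorter than the video
    smoothed = pd.DataFrame(
        {c: savgol_filter(df[c].ffill().bfill().to_numpy(), window, poly)
         for c in df.columns},
        index=df.index)
    scale = np.nanmedian(np.hypot(smoothed["neck_x"] - smoothed["midhip_x"],
                                  smoothed["neck_y"] - smoothed["midhip_y"]))
    return smoothed / scale
```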
#### 5.1.2 Extraction of geometric, appearance and temporal features
After the basic landmark features were extracted, we computed a set of geometric and temporal feature descriptors.
Based on the initial features, the following features were calculated:
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Dataset** & **Metric** & **Size** \\ \hline \multirow{5}{*}{Chambers} & \# Videos & 25 \\ & Total Frames & 1769 \\ & Mean Frames per video & 70.76 \\ & \% On Head & 29.2\% \\ & \% Outside Head & 70.8\% \\ \hline \multirow{5}{*}{BRIGHT} & \# Videos & 23 \\ & Total Frames & 2039 \\ \cline{1-1} & Mean Frames per video & 88.6 \\ \cline{1-1} & \% On Head & 29.7\% \\ \cline{1-1} & \% Outside Head & 70.3\% \\ \hline & Total Number of Frames & 3808 \\ \hline \end{tabular}
\end{table}
Table 1: Database sizes and labels
- Face and body geometrical features (Distance and Angular): Based on the coordinates of the skeleton of the baby, the normalised distances between the wrists and the ears, eyes, neck and nose were extracted. For each case, the distances considered included differences in the X direction, differences in the Y direction and euclidean distances. Additionally, based on the coordinates of the skeleton of the baby, the angles of the elbows and shoulders were extracted (see the feature-computation sketch after this list).
- Hands geometrical features (Distance): As the adapted OpenPose model by Chambers et al. [4] only generated the skeleton up to the wrists, additional information was obtained by extending the skeleton to the hands. The MediaPipe detection algorithm [20] was used in the area surrounding the wrists to obtain the hand coordinates. Based on the coordinates, the normalised distances between the fingers and the eyes and nose were calculated. The distances included differences in the X direction, differences in the Y direction and euclidean distances. Also, confidence scores were considered as additional features based on the confidence of the MediaPipe algorithm detecting each hand.
- Temporal features: The temporal features were centred on aggregated information over various frames. We calculated features including displacement, speed and acceleration obtained based on the coordinates of the skeleton of the baby for the wrists and elbows.
- Appearance Features: Histogram of Oriented Gradients (HOG) [21] is a method for feature extraction based on the directionality of the gradients in different locations in an image. This method has shown significant success rate in different image detection tasks including detecting faces and expressions [22, 23, 24] and detecting gestures [25, 26, 27]. These features were extracted only for the main region of interest, which is the face area. Consequently, these features were extracted from the cropped images of the face. Additionally, we wanted to extract more localised spatial information inside the face. Therefore, more granular HOG features were extracted in two specific face areas: one related to the upper region of the face based on the eyes location and another related to the lower region based on the mouth location. Also, confidence scores were considered as additional features based on the average confidence of the landmarks in each region as calculated by 3D-FAN.
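To make the feature definitions above concrete, the sketch below computes representative distance, angle, temporal and HOG descriptors. The keypoint names and HOG parameters are illustrative assumptions, not the exact configuration used in the study.

```python
import numpy as np
from skimage.feature import hog

def frame_geometry(wrist, elbow, shoulder, nose, eye_l, eye_r):
    """Geometric descriptors for one frame; each argument is an (x, y) array
    in normalised coordinates (argument names are illustrative)."""
    feats = {}
    for name, target in [("nose", nose), ("eye_l", eye_l), ("eye_r", eye_r)]:
        d = wrist - target
        feats[f"wrist_{name}_dx"], feats[f"wrist_{name}_dy"] = d
        feats[f"wrist_{name}_dist"] = np.hypot(*d)
    upper, fore = shoulder - elbow, wrist - elbow
    cosang = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore) + 1e-8)
    feats["elbow_angle"] = np.arccos(np.clip(cosang, -1.0, 1.0))
    return feats

def temporal_features(wrist_xy):
    """wrist_xy: (n_frames, 2) trajectory; per-frame displacement, speed and
    acceleration magnitudes obtained by finite differences."""
    vel = np.gradient(wrist_xy, axis=0)
    acc = np.gradient(vel, axis=0)
    return {"speed": np.linalg.norm(vel, axis=1),
            "accel": np.linalg.norm(acc, axis=1),
            "disp": np.linalg.norm(wrist_xy - wrist_xy[0], axis=1)}

def face_hog(face_crop_gray):
    """HOG descriptor of the (grayscale) cropped face region; the same call
    can be applied to the upper- and lower-face sub-crops."""
    return hog(face_crop_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```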
#### 5.1.3 Features smoothing and data augmentation
As a final step in the feature extraction process, we smoothed and augmented the calculated features to be able to train the classification models on the data. The outliers of the geometrical and temporal features per video were replaced by blank values, and the data was interpolated per video to cover any deleted or missing values. If data was still missing, it was replaced by mean values from the training data during the training stage.
Finally, to compensate for the small size of the dataset, the training data was augmented by flipping the images horizontally, flipping all the features accordingly and considering the directionality of these features.
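A minimal sketch of the horizontal-flip augmentation is shown below; the feature-name suffixes are hypothetical, and a complete implementation would also swap left/right keypoint identities.

```python
import numpy as np

def flip_augment(image, feats, frame_width=1.0):
    """Mirror the frame horizontally and adjust direction-sensitive features."""
    flipped_img = np.ascontiguousarray(image[:, ::-1])
    aug = dict(feats)
    for k, v in feats.items():
        if k.endswith("_dx"):          # x-differences change sign
            aug[k] = -v
        elif k.endswith("_x"):         # x-coordinates are mirrored
            aug[k] = frame_width - v
    return flipped_img, aug
```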
### Face touch detection and classification
After feature extraction and smoothing, we handled face touch detection and classification as two different classification problems. The first is to detect when the hand touches the face as a binary classification problem; then, we classify different touch location areas as a multi-label classification problem. The architectures proposed for both problems are
Figure 3: Examples of HOG features obtained for the face region, the upper head and the lower head. Note the challenging nature of the dataset with extreme head poses and viewpoints.
Figure 2: Example body skeleton extracted using OpenPose and face keypoints extracted using 3D-FAN
very similar. The main difference lies in the method used for the classification component in the final layer. The models can be divided into the following:
#### 5.2.1 Feature selection and dimensionality reduction
Our first proposed method used feature selection and dimensionality reduction and a Support Vector Machine (SVM) classifier for solving the face touch detection problem.
There were four main categories of geometrical and temporal features: body distance features, hand distance features, angular features and temporal features. Many of the features in the same categories correlated with each other as they measured similar characteristics. Therefore, to ensure proper representation of the features, this method proposed reducing these features before training a classifier.
Firstly, the features were filtered based on an automatic feature selection process. Random Forest was used to select the most representative features and prevent skewing the classifier with features that were not that significant. The feature selection was carried out by cross-validating with 5-folds in the training set to ensure independence. Then, Principal Component Analysis (PCA) was used for dimensionality reduction. PCA has shown effective results in detection problems when facing a large number of features [28, 29, 30]. PCA was used to filter a percentage of the explained variance. The threshold of this explained variance was established as a hyperparameter that was also learned by cross-validating with 5-folds in the training set.
After applying PCA, the classification algorithm used SVM using an RBF kernel. The model was cross-validated with 5-folds in the training set to choose the best hyperparameters for SVM. The search for the best hyperparameters for SVM was done in combination with the search for the threshold for PCA, as the hyperparameters were possibly dependent on each other using a grid search method [31].
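A possible scikit-learn realisation of this RF-PCA-SVM pipeline is sketched below (our own sketch: the hyperparameter grids, the grouped folds and the placeholder variables `X_train`, `y_train` and `video_ids_train` are illustrative assumptions).

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, GroupKFold

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0))),
    ("pca", PCA()),
    ("svm", SVC(kernel="rbf", class_weight="balanced")),
])

param_grid = {
    "pca__n_components": [0.90, 0.95, 0.99],   # explained-variance thresholds
    "svm__C": [0.1, 1, 10, 100],
    "svm__gamma": ["scale", 0.01, 0.001],
}

cv = GroupKFold(n_splits=5)                    # keep all frames of a video together
search = GridSearchCV(pipe, param_grid, cv=cv, scoring="accuracy")
# search.fit(X_train, y_train, groups=video_ids_train)
```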
In the case of multi-class classification for the face areas, the Label Powerset model was used with underlying SVMs to predict the multiple overlapping labels. Label Powerset transforms the labels by creating a class for each possible combination of labels and creates a classifier for each combination [32]. Consequently, it has the advantage of considering the possible relationships between the labels. This model was configured by tuning the hyperparameters in the same way as in the SVM binary classifier.
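For the multi-label face-area task, the Label Powerset transformation can be wrapped around the same SVM, for example with scikit-multilearn (an assumed implementation choice; the reduced feature matrices and the 5-column binary label matrix below are placeholders).

```python
from skmultilearn.problem_transform import LabelPowerset
from sklearn.svm import SVC

# Each unique combination of the labels (ears / nose / cheeks / mouth / eyes)
# becomes one class of a single multi-class SVM.
multi_clf = LabelPowerset(classifier=SVC(kernel="rbf"), require_dense=[True, True])
# multi_clf.fit(X_train_reduced, Y_train)   # Y_train: (n_samples, 5) binary matrix
# Y_pred = multi_clf.predict(X_test_reduced)
```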
#### 5.2.2 Feature optimisation using deep learned features
Our second proposed method used autoencoders as a feature optimisation and dimensionality reduction technique. This method has been used in various studies as an effective way of reducing dimensionality while maintaining the representation of the data, and it has been successfully used before with repetitive and correlated features [33]. In this case, autoencoders were used to generate a latent representation of the input features.
Firstly, the dimensions of the features were reduced based on the autoencoder model. The model uses a neural network architecture that learns how to represent the data in lower dimensions and reconstruct it [33]. It then minimises the error between the reconstruction and the original input. The aim of the autoencoder is to exploit the correlations in the input features to reduce the final dimensions without losing relevant information.
This method was used with two alternatives of input features. The first one used only geometrical and temporal features. The second alternative also used the HOG features. The main hyperparameters that were learned for this model included the latent dimensions and the number of epochs. These hyperparameters were selected based on the results of a 5-fold cross-validation in the training set.
After encoding the data, the classification process was done using SVM with an RBF kernel. The input features for the SVM classification process were the output of encoding the features with the trained encoder. The classification layer was also cross-validated with 5-folds in the training set to choose the best hyperparameters for SVM. Finally, in the case of the multi-label classification problem, the Label Powerset model was used with underlying SVMs to be able to predict the multiple face touch locations.
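A minimal PyTorch sketch of this autoencoder-plus-SVM variant is given below; the framework, layer sizes and training loop are illustrative assumptions, since the paper does not prescribe them.

```python
import torch
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    """Fully connected autoencoder used only for dimensionality reduction."""
    def __init__(self, n_features, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def encode_features(X, latent_dim=32, epochs=100, lr=1e-3):
    """Train the autoencoder on the feature matrix X and return latent codes."""
    X = torch.as_tensor(X, dtype=torch.float32)
    model = FeatureAutoencoder(X.shape[1], latent_dim)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = model(X)
        loss = loss_fn(recon, X)
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, Z = model(X)
    return model, Z.numpy()

# The latent codes Z are then fed to an RBF-kernel SVM (or to Label Powerset
# with SVMs for the multi-label case), exactly as in the previous model.
```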
## 6 Evaluation
We evaluated the accuracy of the detection of face touches by using a mixture of spatial and temporal features and analysed models based on dimensionality reduction and optimisation techniques. The models were evaluated cross-dataset to validate their effectiveness and generalisation. The approaches were evaluated with three different configurations of the datasets to ensure the consistency of the models. Also, all segmentations of the data were grouped by video to ensure having different videos in each set. The three configurations used were the following:
* Train on the BRIGHT dataset and test on the Chambers dataset
* Train on the Chambers dataset and test on the BRIGHT dataset
* Train on the Chambers dataset plus 50% of the BRIGHT dataset and test on the other 50% of the BRIGHT dataset
As there was no existing baseline for these models, the models were evaluated against a Zero Rule (ZeroR) baseline and uniform random chance. The ZeroR baseline is calculated by assigning the value of the majority class to every data point [34, 35, 36], while random chance assigns a class based on uniform random probabilities. McNemar's tests were carried out to ensure the results were significantly different. The McNemar test was used because the compared distributions were binary targets instead of continuous variables. All the best performing models were found to be significantly different (p \(<\) 0.01) from Random Chance and Zero Rule.
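The baselines and the significance test can be reproduced along the following lines (a sketch; statsmodels' `mcnemar` is assumed for the test, and the prediction arrays are placeholders).

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def zero_rule(y_train, y_test):
    """ZeroR baseline: always predict the majority class of the training set."""
    majority = np.bincount(y_train).argmax()
    return np.full_like(y_test, majority)

def mcnemar_vs_baseline(y_true, pred_model, pred_baseline):
    """McNemar's test on the paired correct/incorrect outcomes of two classifiers."""
    a = pred_model == y_true
    b = pred_baseline == y_true
    table = [[np.sum(a & b), np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    return mcnemar(table, exact=False, correction=True)
```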
### Detection of face touches
The main target was to determine if there was a face touch. This problem was treated as a binary classification task based on the classes: "on head" and "outside head".
The models that were analysed were the following:
* Feature selection and dimensionality reduction based on geometrical and temporal features (RF-PCA-SVM): This model followed the components described in Section 5.2.1. It performed feature selection with Random Forest (RF) and dimensionality reduction with PCA. Finally, it performed the classification of the labels using SVM in the case of this binary problem. It used the geometrical distance and angular features and aggregated temporal features.
* Feature optimisation using deep learned features based on geometrical and temporal features (AUTOENC-SVM-I): This model was structured as described in Section 5.2.2. It used an autoencoder neural architecture to reduce the dimensions of the input features. Finally, it performed the classification of the labels using SVM. It used the geometrical features (distance and angular) and the temporal features.
* Feature optimisation using deep learned features based on geometrical, temporal and HOG features (AUTOENC-SVM-II): The model was structured as described in Section 5.2.2. Similar to the previous model, it used an autoencoder neural architecture to reduce the dimensions of the input features and SVM for the binary classification problem. It used the geometrical features, temporal features and HOG features.
The results for predicting between "on head" and "outside head" can be seen in Tables 2, 3 and 4. All the models had significantly higher accuracy than the uniform random chance and ZeroR baselines. The best performing model reached 87% accuracy when trained on a mixture of both datasets.
Overall the results of the three models were promising with high accuracy in comparison to the baselines. Also, the results were relatively similar between the three models. Some performed better on different datasets, but the performance was very competitive between them. All three models obtained better results than ZeroR or Random Chance in accuracy, precision and recall. Therefore, the results demonstrated that these models can perform well in the detection of face touch.
Even though the autoencoder models (AUTOENC-SVM) outperformed the random forest and PCA model (RF-PCA-SVM) in two of the three dataset configurations, the difference in accuracy was limited. These results demonstrate that the RF-PCA-SVM configuration was also very effective. On larger datasets, the autoencoder-based models could possibly extract more representative features and outperform the RF-PCA-SVM model more clearly.
Similarly, the inclusion of the HOG features in the AUTOENC-SVM-II model did not show a noticeable increase in performance. In the case of the BRIGHT dataset, it did show an improvement over the other models, and a larger improvement over AUTOENC-SVM-I, but the improvement was modest. This could be caused by the limited amount of data with very varied head poses and rotations. Therefore, the AUTOENC-SVM-II model might perform better if trained on larger datasets, where the HOG features can be learned with more generalisable representations.
Finally, even though there were various challenges in the datasets that could have a negative impact on the models' ability to generalise between datasets, the results demonstrated that the proposed methods had high performance in the detection of face touches.
### Classification of face touch descriptors
These experiments evaluate face touch on specific locations of the face. These locations were evaluated based on the universe of images where there is a face touch. The key locations to predict included the following: ears, nose, cheeks, mouth, and eyes. The problem was evaluated as a multi-label problem because the different classes could overlap and the infant could touch more than one location at the same time.
The proposed models for this problem are the same as the ones described in Section 6.1, so we will use the same naming abbreviations. The main difference was the change in the classification method from SVM to Label Powerset with SVM [32] to tackle the problem as a multi-label classification problem. Therefore, the models that were analysed were the following:
* Feature selection and dimensionality reduction based on geometrical and temporal features (RF-PCA-SVM)
* Feature optimisation using deep learned features based on geometrical and temporal features (AUTOENC-SVM-I)
* Feature optimisation using deep learned features based on geometrical, temporal and HOG features (AUTOENC-SVM-II)
The experiments were carried out only on the portion of images labelled as "on head" so that it could be sufficiently balanced; therefore the dataset was even more limited in size than the original.
The obtained results can be seen in Tables 5, 6 and 7. The results show the macro-average accuracy, precision and recall of the multiple key locations per model. The highest performing model reached 71.4% average accuracy when testing on the Chambers dataset.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Model** & **Accuracy** & **Precision** & **Recall** \\ & **Test** & **On Head** & **On Head** \\ \hline Random Chance & 50\% & 28.3\% & 50\% \\ Zero Rule & 71.7\% & 0\% & 0\% \\
**RF-PCA-SVM** & **87.0\%** & **77.2\%** & **76.8\%** \\ AUTOENC-SVM-I & 86.9\% & 76.9\% & 77.1\% \\ AUTOENC-SVM-II & 85.7\% & 71.6\% & 82.2\% \\ \hline \end{tabular}
\end{table}
Table 4: Results of binary classification “on head” vs “outside head”.
Training and CV dataset: Chambers + 50% Bright. Testing dataset: 50% Bright.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Model** & **Accuracy** & **Precision** & **Recall** \\ & **Test** & **On Head** & **On Head** \\ \hline Random Chance & 50\% & 29.7\% & 50\% \\ Zero Rule & 70.3\% & 0\% & 0\% \\ RF-PCA-SVM & 80.3\% & 68.7\% & 62.2\% \\
**AUTOENC-SVM-I** & **80.7\%** & **70.8\%** & **59.6\%** \\ AUTOENC-SVM-II & 80.6\% & 74.4\% & 53.1\% \\ \hline \end{tabular}
\end{table}
Table 2: Results of binary classification “on head” vs “outside head”.
Training and CV dataset: Chambers. Testing dataset: Bright.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Model** & **Accuracy** & **Precision** & **Recall** \\ & **Test** & **On Head** & **On Head** \\ \hline Random Chance & 50\% & 29.2\% & 50\% \\ Zero Rule & 70.8\% & 0\% & 0\% \\ RF-PCA-SVM & 77.8\% & 58.8\% & 80.2\% \\ AUTOENC-SVM-I & 75.2\% & 57.9\% & 54.8\% \\
**AUTOENC-SVM-II** & **79.6\%** & **65.4\%** & **63.8\%** \\ \hline \end{tabular}
\end{table}
Table 3: Results of binary classification “on head” vs “outside head”. Training and CV dataset: Bright. Testing dataset: Chambers.
The main metric established to select the best models during cross-validation was the average macro-accuracy over the locations, so this was used as the main indicator of the models' performance. The results demonstrated that the models performed effectively better than the baselines.
As expected, the accuracies were lower than for the face touch problem as this was a complex multi-label problem where multiple locations can overlap on the face, and it can be difficult even for a human to determine the exact location.
As with the previous task, the results were similar between models, but they show some indication that HOG features might be useful in some instances. The AUTOENC-SVM-II model outperformed the accuracies of the other models in two cases and demonstrated a considerable difference in accuracy when it was trained on the BRIGHT dataset. Training with HOG features on larger and more varied datasets could possibly make their representations more stable and significant in the end results.
### Predicting neurodevelopment scores
The next step was to evaluate our proposed model results - detected face touch dynamics of infants less than 2 months old - on predicting their neurodevelopmental rates collected at ages 3 and 5 months. We chose the best-performing model for the binary classification task (RF-PCA-SVM) and ran it on a larger dataset. Since we had the Mullen scores only for the BRIGHT dataset, we ran our model on an average of 490 frames per video (19 videos), a total of 9298 frames. We then extracted the face touch frequency for each infant and evaluated it versus the Mullen Scales of Early Learning (MSEL) related to gross motor (GM) skills and fine motor (FM) skills. In this case, the data was limited because the provided metrics were evaluated per infant, and only 19 infants of the BRIGHT dataset had their information available.
In the case of the MSEL metrics, the data consisted of raw scores per visit of the infant related to the different MSEL categories. A rate of development was calculated per infant per category based on the rate of increase during their first five months. The data used for this case were the gross motor (GM) skills and the fine motor (FM) skills, as they are related to the infant's motor development and could be related to face touch behaviour. After calculating the rate of
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Model** & **Accuracy** & **Precision** & **Recall** \\ & **Test** & **Key Area** & **Key Area** \\ \hline Random Chance & 50.0\% & 20.8\% & 50\% \\ Zero Rule & 37.8\% & 15\% & 0\% \\ RF-PCA-SVM & 62.5\% & 36.3\% & 18.8\% \\ AUTOENC-SVM-I & 63.8\% & 29.7\% & 34.3\% \\
**AUTOENC-SVM-II** & **71.4\%** & **35.7\%** & **24.9\%** \\ \hline \end{tabular}
\end{table}
Table 6: Results of predicting key areas. Training and CV dataset: Bright. Testing dataset: Chambers.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Model** & **Accuracy** & **Precision** & **Recall** \\ & **Test** & **Key Area** & **Key Area** \\ \hline Random Chance & 50.0\% & 29.1\% & 50.0\% \\ Zero Rule & 21.8\% & 17.0\% & 0\% \\ RF-PCA-SVM & 60.3\% & 56.0\% & 34.3\% \\ AUTOENC-SVM-I & 58.1\% & 48.2\% & 19.2\% \\
**AUTOENC-SVM-II** & **60.7\%** & **44.3\%** & **16.7\%** \\ \hline \end{tabular}
\end{table}
Table 7: Results of predicting key areas. Training and cross-validation dataset: Chambers + 50% Bright. Testing dataset: 50% Bright.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Model** & **Accuracy** & **Precision** & **Recall** \\ & **Test** & **Key Area** & **Key Area** \\ \hline Random Chance & 50.0\% & 31.1\% & 50.0\% \\ Zero Rule & 24.5\% & 13\% & 0\% \\
**RF-PCA-SVM** & **66.6\%** & **33.5\%** & **43.8\%** \\ AUTOENC-SVM-I & 63.2\% & 49.6\% & 16.7\% \\ AUTOENC-SVM-II & 63.0\% & 60.9\% & 13.2\% \\ \hline \end{tabular}
\end{table}
Table 5: Results of predicting key areas. Training and CV dataset: Chambers. Testing dataset: Bright.
development of the GM and FM skills, a correlation was calculated between the ratio of face touches per frame and these rates of development of each child.
The results showed a low to moderate positive correlation between the ratio of face touches and the rate of development during the first months. The correlation coefficient obtained for FM was 0.599 with a significant p-value of 0.0067. The correlation coefficient for GM was 0.186, but the p-value was not found to be significant. It is possible that face touches are more related to fine motor skills as they are more specific and localised movements.
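The correlation step described above is straightforward to reproduce. The sketch below illustrates it with synthetic placeholder values for the 19 infants (the variable names and numbers are ours, not the study's data): the per-infant face touch ratio is compared against the per-infant rates of increase of the MSEL FM and GM raw scores using Pearson correlation.

```python
# Illustrative sketch of the correlation analysis; all values are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
face_touch_ratio = rng.uniform(0.0, 0.3, 19)                      # touches per frame, per infant
fm_rate = 2.0 + 5.0 * face_touch_ratio + rng.normal(0, 0.3, 19)   # placeholder FM score slopes
gm_rate = 2.0 + rng.normal(0, 0.5, 19)                            # placeholder GM score slopes

r_fm, p_fm = pearsonr(face_touch_ratio, fm_rate)
r_gm, p_gm = pearsonr(face_touch_ratio, gm_rate)
print(f"FM: r={r_fm:.3f}, p={p_fm:.4f}")
print(f"GM: r={r_gm:.3f}, p={p_gm:.4f}")
```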
The results indicate that measuring infants' face touch frequencies and dynamics in their first month or two can serve as a predictive measure of their neurodevelopmental outcomes. It also demonstrates the effectiveness of our proposed computational model as a tool for the early prediction of neurodevelopmental factors. We make the trained model available to the research community at 'removed for anonymous submission' as a baseline for infant face touch detection and to facilitate future research in this area on more extensive datasets.
A dataset of 19 infants is limited, so it was not possible to test more complex prediction algorithms. However, these results show that, in more extensive datasets, the face touch frequency could be used as one independent variable to help predict infant neurodevelopment scores such as MSEL. Also, the models proposed during this research could support the automation of the extraction of these face touches.
## 7 Conclusion
Our research proposed a machine learning model for the automatic detection of face touches in infants using features extracted from videos. This is the first study to provide a computational model for detection and classification of these types of gestures in infants. Our proposed models, using a mix of spatial and temporal features with deep learning features, achieved high accuracies in predicting face touches and their locations around facial keypoints, establishing a promising step for future research in this area. We also showed the effectiveness of the proposed model in predicting MSEL scores related to fine motor (FM) skills, demonstrating that our proposed model can be used as an early prediction tool for neurodevelopmental indicators, and it serves as a baseline for future work in this domain. We believe this research will open the door for future research in this area on both the technical and the neurodevelopmental psychology fronts.
Despite the promising results, there are several limitations to our model. The datasets used were recorded in largely uncontrolled environments, with varied camera angles and the mother's presence in most videos. These characteristics made the labelling as well as the classification tasks very challenging. We are also aware of the small size of the datasets used in this work. Obtaining datasets, especially for infants, is a challenging task due to the privacy and ethical factors that need to be considered. However, we believe this research serves as a baseline for infant face touch detection and classification and will open the door for further research on more extensive datasets in this area.
|
2305.01737 | Competing Heterogeneities in Vaccine Effectiveness Estimation | Understanding waning of vaccine-induced protection is important for both
immunology and public health. Population heterogeneities in underlying
(pre-vaccination) susceptibility and vaccine response can cause measured
vaccine effectiveness (mVE) to change over time even in the absence of pathogen
evolution and any actual waning of immune responses. We use a multi-scale
agent-based models parameterized using epidemiological and immunological data,
to investigate the effect of these heterogeneities on mVE as measured by the
hazard ratio. Based on our previous work, we consider waning of antibodies
according to a power law and link it to protection in two ways: 1) motivated by
correlates of risk data and 2) using a within-host model of stochastic viral
extinction. The effect of the heterogeneities is given by concise and
understandable formulas, one of which is essentially a generalization of
Fisher's fundamental theorem of natural selection to include higher
derivatives. Heterogeneity in underlying susceptibility accelerates apparent
waning, whereas heterogeneity in vaccine response slows down apparent waning.
Our models suggest that heterogeneity in underlying susceptibility is likely to
dominate. However, heterogeneity in vaccine response offsets <10% to >100%
(median of 29%) of this effect in our simulations. Our methodology and results
may be helpful in understanding competing heterogeneities and waning of
immunity and vaccine-induced protection. Our study suggests heterogeneity is
more likely to 'bias' mVE downwards towards faster waning of immunity but a
subtle bias in the opposite direction is also plausible. | Ariel Nikas, Hasan Ahmed, Veronika I. Zarnitsyna | 2023-05-02T19:13:02Z | http://arxiv.org/abs/2305.01737v2 | # Competing Heterogeneities in Vaccine Effectiveness Estimation
###### Abstract
Understanding waning of vaccine-induced protection is important for both immunology and public health. Population heterogeneities in underlying (pre-vaccination) susceptibility and vaccine response can cause measured vaccine effectiveness (mVE) to change over time even in the absence of pathogen evolution and any actual waning of immune responses. We use multi-scale agent-based models parameterized using epidemiological and immunological data to investigate the effect of these heterogeneities on mVE as measured by the hazard ratio. Based on our previous work, we consider waning of antibodies according to a power law and link it to protection in two ways: 1) motivated by correlates of risk data and 2) using a within-host model of stochastic viral extinction. The effect of the heterogeneities is given by concise and understandable formulas, one of which is essentially a generalization of Fisher's fundamental theorem of natural selection to include higher derivatives. Heterogeneity in underlying susceptibility accelerates apparent waning, whereas heterogeneity in vaccine response slows down apparent waning. Our models suggest that heterogeneity in underlying susceptibility is likely to dominate. However, heterogeneity in vaccine response offsets \(<\)10% to \(>\)100% (median of 29%) of this effect in our simulations. Our methodology and results may be helpful in understanding competing heterogeneities and waning of immunity and vaccine-induced protection. Our study suggests heterogeneity is more likely to 'bias' mVE downwards towards faster waning of immunity but a subtle bias in the opposite direction is also plausible.
vaccine effectiveness, heterogeneity, frailty, antibody waning by power law, selection, vaccine efficacy, survival analysis.
## 1 Introduction
Providing accurate estimates of vaccine-induced protection is essential in guiding public health policy. However, many factors complicate our ability to estimate population level vaccine effectiveness (VE) such as prior immunity, underlying health risks, timing of vaccination, inconsistent hazards in different locations, and other confounders. Further adding to uncertainty is the common presence of observed waning which may reflect actual waning of immune responses, introduction of different strains, or may be an artifact coming from heterogeneity among individuals.
Many studies report fast, intraseasonal waning of vaccine-induced protection, particularly for viruses such as influenza and SARS-CoV-2 (1-3); however, various effects can bias this conclusion. Depletion of susceptible individuals (also called the frailty effect in biostatistics) can bias estimates
(4, 5). Heterogeneity in exposure risk, even if exactly the same in the vaccinated and unvaccinated groups, tends to bias the vaccine effectiveness estimates downwards potentially leading to spurious claims of waning (6, 7). If natural immunity is not taken into account, merely having a "leaky" vaccine (i.e. a vaccine that provides partial protection) can bias estimates downwards (5, 7). This complicates the estimation of actual waning of vaccine-induced protection which is expected to occur as many correlates of protection, e.g. neutralizing antibodies, have been shown to decrease over time (8-11).
In this paper, we focus on the hazard ratio as the measure of vaccine effectiveness, as the hazard ratio corresponds to relative risk at a particular moment in time. To determine the direction and magnitude of bias, we simulate an epidemic in a population under various frailty and vaccine protection distributions in the absence and presence of waning and evaluate the interplay between these factors. Commonly, hazard ratios are estimated with the Cox proportional hazards model and there are several standard extensions utilized in real-world studies (12-19). While the Cox proportional hazards model was not designed for time-varying effects, several approaches exist to make it applicable for measuring waning vaccine effectiveness. We utilize time category-vaccine interactions (henceforth TVI) which, in contrast to the commonly used Cox method utilizing the scaled Schoenfeld residuals, should accurately measure the hazard ratios even in the presence of extreme (observed) waning (20).
If vaccine effectiveness is assessed via the hazard ratio and the outcome of interest is the first infection post-vaccination, heterogeneity in baseline (pre-vaccination) susceptibility causes measured vaccine effectiveness (mVE) to decline over time, whereas heterogeneity in response to vaccination causes mVE to increase over time. Hence any apparent change in mVE may reflect any combination of selection on these heterogeneities in addition to the biological processes of pathogen evolution and waning of immune responses. In this paper, we first illustrate the problem using standard statistical distributions for the heterogeneities. We then provide concise formulas that give the net effect of these heterogeneities on the hazard rates and hazard ratio. Next, using parameter values based on epidemiological and immunological data incorporating waning of antibodies, we use agent-based simulations to investigate the magnitude of these opposing effects. Our models suggest that the larger effect is from heterogeneity in baseline susceptibility but that heterogeneity in vaccine response may offset a substantial fraction of that effect. This exacerbation of observed waning may explain the sometimes negative VE reported in some studies (21, 22).
## 2 Methods
We consider an agent-based model of acute viral infection with a constant background force of infection where we introduce heterogeneity in individual infection risk, heterogeneity in vaccine-induced protection, or both. For most scenarios to be described, vaccine protection follows the "leaky" model, wherein each vaccinated individual experiences a certain percent reduction in risk. Additionally, we model 40% out of a cohort of 100,000 to be vaccinated, in line with the CDC estimate for influenza vaccine coverage (23). Since we model an acute viral infection, we assume sterilizing immunity upon infection for the remainder of the one-year time period considered. All simulations were run in _Julia_ version 1.3.1 and statistical analysis was completed in \(R\) version 4.2.1.
For heterogeneity in underlying risk (risk in the absence of vaccination), we select a daily risk rate for both the vaccinated and unvaccinated groups from a single gamma distribution. For heterogeneity in vaccine protection, we select vaccinated individuals' protection from a variety of distributions including beta distributions, in contrast to leaky, homogeneous vaccination. To establish
a comparison, we use the mean vaccine efficacy in the context of no epidemic, \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\), which thereby removes the effect of selection. We then calculate vaccine effectiveness using a time category-vaccine-interaction (TVI) as the independent variable of the Cox proportional hazards model in order to find a time-varying estimate of protection. The TVI method has been shown to behave accurately in circumstances where waning occurs rapidly [(20)].
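To make the setup above concrete, the following minimal sketch (not the authors' Julia/R code) simulates daily infections in a partially vaccinated cohort with gamma-distributed underlying risk and beta-distributed leaky protection; the force of infection, follow-up length, and distribution parameters are illustrative assumptions only.

```python
# Minimal agent-based sketch of the simulation setup; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, vax_frac, days = 100_000, 0.40, 365
foi = 0.002                                         # assumed constant daily force of infection

frailty = rng.gamma(1.0, 1.0, n)                    # underlying relative risk, mean 1, CoV 1
vaccinated = rng.random(n) < vax_frac
protection = np.where(vaccinated, rng.beta(2.0, 2.0, n), 0.0)   # "leaky" per-person VE

infection_day = np.full(n, -1)                      # -1 = never infected (censored)
susceptible = np.ones(n, dtype=bool)
for day in range(days):
    p = foi * frailty * (1.0 - protection)          # daily infection probability
    newly = susceptible & (rng.random(n) < p)
    infection_day[newly] = day                      # sterilizing immunity afterwards
    susceptible &= ~newly

# (infection_day, vaccinated) can then be fed to a Cox model with
# time category-vaccine interactions (TVI) to obtain a time-varying mVE.
```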
## 3 Results
### Susceptibility to Infection (Frailty) Distribution
When considering only heterogeneity in underlying frailty, the given gamma distributions (parameterized as Gamma(\(\alpha_{\gamma}\), \(\beta_{\gamma}\)) where \(\alpha_{\gamma}\)/\(\beta_{\gamma}\) is the mean and \(\alpha_{\gamma}^{-0.5}\) is the coefficient of variation (CoV)) can produce the appearance of waning vaccine effectiveness, though as \(\alpha_{\gamma}\) increases this effect lessens. This appearance of waning corresponds to what many statistical studies have posited would occur and termed the "frailty effect" or "frailty phenomena" [(24-27)]; in epidemiological studies, this is sometimes alternatively called "survivor bias" or the "depletion of susceptibles" effect. Some studies have also simulated this effect but have not compared the qualitative effect of different distributions [(5, 6)]. In Figure 1 we show how different gamma distributions with the same mean can cause differing amounts of perceived waning (waning in mVE) when the true vaccine protection is in fact constant and leaky.
### Vaccine Efficacy Distribution
In simulated studies, two modes of vaccine efficacy are often compared: "leaky" vaccination where protection is incomplete but reduces each individual's chance to become infected by a specified amount (e.g. 50%) or "all-or-nothing" vaccination where a fraction of individuals receive complete protection from the vaccine and others receive no protection. A limited number of studies have also considered normal-like distributions for vaccine protection [(7)]. We consider two main beta distributions parameterized by Beta(\(\alpha_{\beta}\), \(\beta_{\beta}\)) where \(\alpha_{\beta}\)/(\(\alpha_{\beta}+\beta_{\beta}\)) is the mean (held here at 0.5): the normal-like Beta(2,2) and the U-shaped Beta(0.5,0.5). These distributions as well as their resultant dynamics and mVE estimates are compared in Figure 2, which shows that for both of these cases, distributions in vaccine protection bias VE estimates upwards. Non-symmetric vaccine protection distributions were also tested (see Supplement Figure S1) but did not change the direction of bias, showing an increase in mVE.
As seen in Figures 1 and 2, taken individually the two types of heterogeneity contribute in opposite directions: beta-distributed vaccine protection tends to skew the estimate upwards, while gamma-distributed underlying risk tends to skew the estimates downwards and to a greater extent. As these effects compete when combined, we constructed a predictor to determine in which direction, if any, the competing distributions move mVE from \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\).
### Effect of Selection on Heterogeneities
Assuming hazard rates for a given individual are not time-varying and ignoring stochasticity, \(\bar{r}\), the average hazard rate in not-yet-infected individuals, is given by the following equation
\[\bar{r}(t)=\frac{\int r\,f(r)e^{-rt}\,dr}{\int f(r)e^{-rt}\,dr}. \tag{1}\]
Here \(f(r)\) is the probability density function for the hazard rates at time 0; we allow \(f(r)\) to be a generalized function (e.g. a delta function) so the formula also applies to discrete probability distributions. Let \(R\) denote the random variable for \(f(r)\). Let \(M(t)\) and \(K(t)\) be the moment generating and cumulant generating functions of \(-R\), respectively. Since the denominator of Equation 1 is \(M(t)\) and the numerator is \(-M^{\prime}(t)\) and \(K(t)=\ln\bigl{(}M(t)\bigr{)}\), we get the following relation,
\[-\bar{r}(t)=\tfrac{M^{\prime}(t)}{M(t)}=K^{\prime}(t). \tag{2}\]
Hence, the first derivative of \(-\bar{r}(t)\) is the second cumulant (i.e. the variance) of \(-R\), the second derivative of \(-\bar{r}(t)\) is the third cumulant of \(-R\) (related to the skewness), the third derivative is the fourth cumulant (related to excess kurtosis), and so on. Since \(-R\) can be viewed as fitness, the above is essentially equivalent to a generalization of Fisher's fundamental theorem of natural selection, according to which the \(n\)-th derivative of mean fitness over time is the \(n\)+1 cumulant [28, 29].
Since in most cases the force of infection is not constant, we further extend this relation between the hazard rates and cumulants. If we let \(r_{i}=FOI(t)\cdot q_{i}\) where \(r_{i}\) is individual \(i\)'s hazard rate, _FOI_ is the force of infection at time \(t\), and \(q_{i}\) is the individual's relative hazard, we can recover the above relation in terms of a transformation of time \(s=\int_{0}^{t}FOI(\tau)d\tau\),
\[-\bar{r}(s)=\tfrac{M_{2}^{\prime}(s)}{M_{2}(s)}=K_{2}^{\prime}(s) \tag{3}\]
where \(K_{2}\) is the cumulant generating function for the distribution of -\(q_{i}\).
We now consider the hazard ratio comparing vaccinated to unvaccinated individuals, \(HR(t)=\bar{r}_{\mathrm{v}}/\bar{r_{\mathrm{u}}}\), where \(\bar{r_{\mathrm{v}}}\) is the average hazard rate in not-yet-infected vaccinated individuals and \(\bar{r_{\mathrm{u}}}\) the analogous for unvaccinated individuals. To find the rate of change of the hazard ratio and recalling that the first derivative of the mean hazard rate is the second cumulant (i.e. variance) of the hazard rates, we apply the quotient rule (or the quotient and chain rules for the case of time-varying _FOI_) which yields the following equation
\[\tfrac{d}{dt}[HR]=\frac{-\sigma_{\mathrm{v}}^{2}\bar{r}_{\mathrm{u}}+\sigma_{ \mathrm{u}}^{2}\bar{r}_{\mathrm{v}}}{\bar{r}_{\mathrm{u}}^{2}}, \tag{4}\]
where \(\sigma_{\mathrm{v}}^{2}\) is the variance of the vaccinated group's hazard rates and \(\sigma_{\mathrm{u}}^{2}\) is the variance of the unvaccinated group's hazard rates.
If at time \(t\)=0 underlying risk is distributed Gamma(\(\alpha_{\gamma}\), \(\beta_{\gamma}\)) and vaccine protection Beta(\(\alpha_{\beta}\), \(\beta_{\beta}\)) and the two are independent, \(\alpha_{\gamma}-\alpha_{\beta}-\beta_{\beta}\) determines the direction in which these heterogeneities initially move mVE (\(=1-HR\)), since
\[\mathrm{sign}[-HR^{\prime}(0)]=\mathrm{sign}\bigl{(}\alpha_{\gamma}-\alpha_{ \beta}-\beta_{\beta}\bigr{)}. \tag{5}\]
Hence, even in this simple scenario, heterogeneity can cause either an increase or decrease in mVE.
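A quick Monte Carlo check of Equations 4 and 5 can be done with weighted moments, since the hazard-rate distribution among not-yet-infected individuals at time \(t\) is the baseline distribution reweighted by \(e^{-rt}\). The sketch below (illustrative parameter values, shape/rate parameterization for the gamma) compares the analytic slope from Equation 4 with a finite-difference estimate and checks the sign rule of Equation 5.

```python
# Monte Carlo check of Eqs. (4)-(5); parameters and sample size are illustrative.
import numpy as np

rng = np.random.default_rng(1)
a_g, b_g = 1.0, 10.0                 # Gamma(shape, rate) frailty: CoV = a_g**-0.5
a_b, b_b = 2.0, 2.0                  # Beta vaccine protection, mean 0.5
n = 500_000

q = rng.gamma(a_g, 1.0 / b_g, n)                                   # unvaccinated hazards
r = rng.gamma(a_g, 1.0 / b_g, n) * (1.0 - rng.beta(a_b, b_b, n))   # vaccinated hazards

def surviving_mean_var(h, t):
    w = np.exp(-h * t)               # survival weights at time t
    m = np.average(h, weights=w)
    return m, np.average((h - m) ** 2, weights=w)

ru, vu = surviving_mean_var(q, 0.0)
rv, vv = surviving_mean_var(r, 0.0)
slope_eq4 = (-vv * ru + vu * rv) / ru**2                 # Eq. (4) at t = 0

eps = 1e-3                                               # finite-difference check
hr = lambda t: surviving_mean_var(r, t)[0] / surviving_mean_var(q, t)[0]
slope_fd = (hr(eps) - hr(0.0)) / eps

print(slope_eq4, slope_fd)                               # should agree closely
print(np.sign(-slope_eq4) == np.sign(a_g - a_b - b_b))   # sign rule of Eq. (5)
```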
### Vaccine Effectiveness Under Competing Heterogeneities
We consider the interplay of heterogeneities in both underlying frailty and vaccine protection and compare vaccine effectiveness estimates to our predicted value based on Equation 4. We find that overall estimates tend to oscillate around the predicted value (purple dashed), as seen in Figure 3. Here we find that, depending on the underlying distributions, the mVE can increase, decrease, or remain steady around \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\), the vaccine efficacy under no epidemic. Likewise, the extent of observed change is dependent on the interplay of both distributions with some changing a negligible amount and others shown here changing \(>\)20% compared to the \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\) value. Furthermore, for all of the combinations shown in Figure 3, we maintained a \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\) of 50%, but the initial protection level given also mediates how far from \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\) a given distribution can change, as seen in Figure S2.
Even larger changes of mVE can be found by either extending the study period, allowing for more individuals to get infected and thus contribute to the over- or underestimation, or by considering a more extreme distribution. Additionally, it is theoretically possible to generate non-monotonic behavior, as shown in Figure S3, where mVE can go both up and down from the \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\) value.
### Modeling Waning Protection
Without direct challenge studies, estimating heterogeneity in vaccine effectiveness can be fraught. However, many studies use antibody titers as a correlate of protection, including those for SARS-CoV-2 (30-32). Using data on waning SARS-CoV-2 antibodies, we created a distribution for initial protection in a population that then wanes over time. We model waning of antibodies as a power law of the form
\[Ab=C[(t+41)/42]^{-1} \tag{6}\]
in line with our previous studies (9-11). The exponent -1 corresponds to relatively fast waning. Here C for each individual is drawn from a lognormal distribution with a standard deviation of 0.75-1.5 on the natural-log scale, in line with (11, 33, 34). In previous studies, we analyzed antibodies and waning starting at day 42 post-infection or vaccination. We assume here that all individuals in the vaccinated group are fully vaccinated before the study begins. We then relate an individual's antibody level to their individual VE using
\[1-VE=\min[Ab^{-1/2},1], \tag{7}\]
where the antibody to VE conversion exponent is based on an adjustment to the relationships given in (34) for HAI titers and risk of infection with exponents of approximately -0.35. Because HAI titer is only one component of the antibody response, we slightly increased the strength of the relationship and used -0.5. We call this the _risk-correlate model_.
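The sketch below illustrates the risk-correlate model of Equations 6 and 7 (it is not the authors' code); the lognormal spread and the antibody scaling constant are assumptions chosen so that the mean VE at \(t=0\) is roughly 0.9, as described above.

```python
# Illustrative sketch of the risk-correlate model (Eqs. 6-7); scaling is assumed.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
sd_log = 1.0                                   # SD of natural-log antibody level
C = np.exp(rng.normal(0.0, sd_log, n))         # individual day-42 antibody levels

def antibody(t):
    return C * ((t + 41.0) / 42.0) ** -1.0     # power-law waning, Eq. (6)

def ve(t, scale=150.0):                        # scale chosen for ~0.9 mean VE at t = 0
    ab = scale * antibody(t)
    return 1.0 - np.minimum(ab ** -0.5, 1.0)   # Eq. (7)

for t in (0, 60, 180, 365):
    print(t, round(ve(t).mean(), 3))
```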
We also consider a different relationship between VE and antibody based on a within-host stochastic extinction model
\[1-VE=\max\left[\frac{1-exp\left(\frac{m}{R_{0}}\frac{m\cdot a}{a+k\cdot Ab} \right)}{1-exp\left(\frac{m}{R_{0}}m\right)},0\right], \tag{8}\]
where \(a=10\) is the death rate of virions, \(R_{0}=10\) is the basic reproductive number at the between-cell level (35), \(k\cdot Ab\) represents the scaled level of antibody, and \(m=0.5\) is the product of the number of viral particles per inoculum and a virion's probability of successfully infecting a cell in
the absence of antibodies (see Supplement for derivation and additional details). This relationship gives qualitatively similar results to the risk-correlate model.
Antibody levels are scaled to give approximately 90% initial vaccine protection in the population when the standard deviation of the natural log of antibody is equal to 1, again in line with [(11, 33, 34)]; inherently, as the standard deviation is varied, this causes the initial average of vaccine-induced protection in the population to vary slightly. This distribution replaces the beta distributions used in Section 3.4 for vaccine protection, while underlying frailty in both groups continues to be modeled with gamma distributions with CoV based on [(36)], who estimate a CoV of 0.7-1.5 (mean of 0.9) based on contact surveys of very short duration (e.g. two days) and a CoV of 0.3-0.9 (mean of 0.5) based on contact surveys when aggregating by 1-year age categories. As elaborated by [(36)], the former is likely an overestimate whereas the latter is likely an underestimate, hence we consider CoV of 0.5-1.
Using the risk-correlate model, Figure 4 compares a simulation without any heterogeneity (Figure 4A) to simulations with just heterogeneity in (antibody-induced) vaccine protection or underlying frailty and simulations with heterogeneity in both protection and underlying frailty, where underlying frailty is characterized by the coefficient of variation (CoV) of the gamma distribution, with CoV=1/\(\sqrt{\alpha_{\gamma}}\), and the mean of the gamma distributions is held the same as in the previous figures. Recapitulating the earlier simulations, heterogeneity in vaccine protection results in an increase relative to \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\). However, heterogeneity in underlying frailty overwhelms this positive trend and causes VE estimates to be underestimated. Similar qualitative results are also given using the unadjusted power-law exponent estimated from the HAI titers (a likely underestimate) as shown in the Supplement.
Using the within-host stochastic model given by Equation 8 yields results similar to the risk-correlate model, as seen in Figure 5. For both models, the degree of the over- or underestimation (relative to \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\)) at the end of the season is given in Table S1. Again, for plausible acute infectious disease parameters, mVE tends to be approximately the same as \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\) or to underestimate it.
## 4 Discussion
Heterogeneity complicates the ability to accurately estimate the extent of vaccine-induced protection in a population as well as if protection is truly waning or merely appears to be waning. While this has been extensively investigated for underlying frailty [(4-6)] the confounding effect of heterogeneity in vaccine protection has been less thoroughly explored. In many cases [(7, 37, 38)], only all-or-nothing and leaky vaccines are investigated but we argue that these are edge cases that are wonderful for illustrating theory but are unlikely to accurately model real-world responses to vaccination. In particular, this study considers a much wider array of distributions and shows that the net effect of selection on these heterogeneities can cause either an increase or a decrease in mVE with the effect given by concise and interpretable formulas.
We parameterize our model using data from epidemiological and immunological studies and also incorporate within-host modeling of the immune system and pathogen. We find that, within the estimated ranges, mVE is likely to be underestimated; however, the degree of underestimation is quite varied with heterogeneity in vaccine response offsetting anywhere from \(<\)10% to \(>\)100% (median of 29%) of the effect of heterogeneity in underlying susceptibility alone (Table S1). Therefore, vaccine effectiveness estimates should be interpreted with caution, especially over time as the heterogeneities continue to accumulate differential outcomes. While mVE seems more likely to
underestimate than overestimate \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\), underestimation should not be assumed as our range of plausible parameters includes cases without any underestimation.
Previous studies have used all-or-nothing or beta distributions to model vaccine-induced protection [7, 37, 38]. However, modeling waning with such distributions is not straightforward. Our technique of modeling decay of immune responses (in our case, antibodies) at the individual level and converting these immune responses into individual-level \(\mathrm{V}\mathrm{E}\) is more transparent and possibly easier to implement than modeling waning by shifting a beta distribution over time.
There are some important caveats to interpreting our results. Although Equations 2-4 give the effect of selection on the hazard rates and hazard ratios, the hazard rates and ratios are also affected by regression towards (or away from) the mean. Regression towards the mean would tend to mitigate the effects of both heterogeneities. Secondly, in our simulations, heterogeneity in underlying susceptibility and heterogeneity in vaccine response are uncorrelated at baseline. Allowing for correlations permits more diverse outcomes and affects not only waning but also the initial level of mVE. It should be noted that Equations 2-4 are valid even in the presence of such correlations. Extending our simulations to include such correlations and also regression towards or away from the mean is straightforward. In the current study, we focused only on the hazard ratio as a measure of \(\mathrm{V}\mathrm{E}\). We do not consider the effect of seasonality, spatial structure, or epidemic waves. We also did not consider all possible combinations of distributions, but rather limited ourselves to those implied by data for acute respiratory infections.
Overall, we give concise statistical formulas for understanding the effect of selection on mVE and give estimates of the magnitude of this effect under a variety of situations based on both antibody and frailty data [10, 36]. Our results suggest that \(\mathrm{V}\mathrm{E}_{\mathrm{NE}}\) is likely but not certain to be higher than mVE due to variation at the individual level and that the level of discrepancy is dependent on the specifics of the population and vaccine meaning that a simple overall correction cannot be applied. Further exploration for how to correct for these factors statistically or via study design is essential to more accurately understanding vaccine-induced protection.
## 5 Conflict of Interest:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## 6 Author Contributions
AN, HA, and VZ contributed to the conception and design of this study. AN performed the simulations. HA designed the within-host models. AN, HA, and VZ analysed the simulation results. AN and HA wrote the first draft of the manuscript. All authors contributed to the manuscript revision, read, and approved the submitted version.
## 7 Funding
This work was supported by the National Heart, Lung, and Blood Institute and the National Institute of Allergy and Infectious Diseases, National Institutes of Health (grants U01 HL139483, U01 AI150747, and U01 AI144616).
The code used to generate the simulations for this study as well as for its analysis can be found upon publication at the Zarnitsyna Lab Github [[https://github.com/ZarnitsynaLab/ArielNikas-VEHeterogeneity](https://github.com/ZarnitsynaLab/ArielNikas-VEHeterogeneity)].
|
2305.17155 | Stability of implicit neural networks for long-term forecasting in
dynamical systems | Forecasting physical signals in long time range is among the most challenging
tasks in Partial Differential Equations (PDEs) research. To circumvent
limitations of traditional solvers, many different Deep Learning methods have
been proposed. They are all based on auto-regressive methods and exhibit
stability issues. Drawing inspiration from the stability property of implicit
numerical schemes, we introduce a stable auto-regressive implicit neural
network. We develop a theory based on the stability definition of schemes to
ensure the stability in forecasting of this network. It leads us to introduce
hard constraints on its weights and propagate the dynamics in the latent space.
Our experimental results validate our stability property, and show improved
results at long-term forecasting for two transports PDEs. | Leon Migus, Julien Salomon, Patrick Gallinari | 2023-05-26T13:58:48Z | http://arxiv.org/abs/2305.17155v2 | # Stability of Implicit Neural Networks for Long-term Forecasting in Dynamical Systems
###### Abstract
Forecasting physical signals over long time ranges is among the most challenging tasks in Partial Differential Equations (PDEs) research. To circumvent limitations of traditional solvers, many different Deep Learning methods have been proposed. They are all based on auto-regressive methods and exhibit stability issues. Drawing inspiration from the stability property of implicit numerical schemes, we introduce a stable auto-regressive implicit neural network. We develop a theory based on the stability definition of schemes to ensure the stability in forecasting of this network. This leads us to introduce hard constraints on its weights and to propagate the dynamics in the latent space. Our experimental results validate our stability property and show improved results for long-term forecasting of two transport PDEs.
## 1 Introduction and motivation
Numerical simulations are one of the main tools to study systems described by PDEs, which are essential for many applications including, e.g., fluid dynamics and climate science. However, solving these systems, and even more so using them to predict long-term phenomena, remains a complex challenge, mainly due to the accumulation of errors over time. To overcome the limitations of traditional solvers and to exploit the available data, many different deep learning methods have been proposed.
For the task of forecasting spatio-temporal dynamics, Ayed et al. (2019) used a standard residual network with convolutions and Sorteberg et al. (2019); Lino et al. (2020); Fotiadis et al. (2020) used long short-term memory (LSTM) and convolutional neural network (CNN) models for the wave equation. In Wiewel et al. (2019); Kim et al. (2019), good performance is obtained by predicting within the latent spaces of neural networks. More recently, Brandstetter et al. (2022) used graph neural networks with several tricks and showed great results for forecasting the behavior of PDE solutions. Importantly, these methods all solve the PDE iteratively, meaning that they are auto-regressive: the output of the model is used as the input for the next time step. Another line of recent methods that has greatly improved the learning of PDE dynamics is Neural Operators (Li et al., 2020). These methods can be used as operators or as auto-regressive methods to forecast. However, when used as operators, they do not generalize well beyond the times seen during training. Crucially, these auto-regressive methods tend to accumulate errors over time with no possible control, and respond quite poorly to changes in the distribution of the data. This leads to stability problems, especially over long periods of time beyond the training horizon.
In the numerical analysis community, the stability issue has been well studied and is usually dealt with using implicit schemes. By definition, they require solving an equation to go from one time step to the next, but they are generally more stable than explicit schemes. This can be seen on the test equation \(\frac{\mathrm{d}y}{\mathrm{d}t}=\lambda y\), where the implicit Euler scheme is always stable while the explicit Euler scheme is not. Interpreting residual neural networks as numerical schemes, one can apply such schemes and gain theoretical insights on the properties of neural networks. This has already been done in various forms in Haber and Ruthotto (2017); Chen et al. (2018), but not applied to forecasting. Moreover,
these works use either the stability of the underlying continuous equation or the stability of the numerical scheme on the test equation and its derivatives, which is not the stability of the numerical scheme on the studied equation. Since a network is discrete, the latter is the most relevant.
We therefore use the stability in norm of schemes, as defined in 2.1. In deep learning (DL), this definition has only been applied to image classification problems (Zhang and Schaeffer, 2020). To the best of our knowledge, this work is the first attempt to forecast PDEs with neural networks using stability as studied in the numerical analysis community.
Using implicit schemes in DL has already been done in different contexts. The earliest line of works tackles image problems, with Haber et al. (2019) designing semi-implicit ResNets and Li et al. (2020); Shen et al. (2020); Reshniak and Webster (2021) designing different implicit ResNets. The second line of works focuses on dynamical problems. Along this line, Nonnenmacher and Greenberg (2021) designed linear implicit layers, which learn and solve linear systems, and Horie and Mitsume (2022) used an implicit scheme as part of an improvement of graph neural network solvers to improve forecasting generalization with different boundary condition shapes. Tackling different types of problems, none of these methods guarantees forecast stability. For our analysis, we restrict ourselves to ResNet-type networks, i.e., networks with residual connections. We introduce hard constraints on the weights of the network and predict within the latent space of our network. Hence, by modifying the classic implicit ResNet architecture, our method can forecast dynamical systems at long range without diverging. We apply these theoretical constraints in our architecture to two 1D transport problems: the _Advection equation_ and _Burgers' equation_.
To better investigate network stability, we perform our experiments under the following challenging setting: for training, we only give the networks the data from \(t=0\) to a small time \(t=\Delta t\), and consider the forecasting results in the long time range at \(t=N\cdot\Delta t\), with \(N\gg 1\). Note that our setting is harder than the conventional setting presented, e.g., in Brandstetter et al. (2022). Indeed, we only use changes over a single time step for training.
## 2 Method
To guarantee structural forecasting stability, we take inspiration from implicit schemes. We focus our study on an implicit ResNet with a ReLU activation function. In our approach, an equation is solved at each layer, namely \(x_{n+1}:=x_{n}+R_{n}(x_{n+1})\) with \(x\) in \(\mathbb{R}^{M}\) and \(n\) in \(\mathbb{N}\) and \(R_{n}(x)=\text{ReLU}(W_{n}x+b_{n})\) where \(W_{n}\) is upper triangular. The latter constraint is motivated below.
### Implicit ResNet stability analysis
To ensure that our proposed architecture is well-defined, we need to solve \(x=x_{n}+R_{n}(x)\). This equation has a solution, as proven in El Ghaoui et al. (2019) and detailed in Appendix 5.1. We can then study its stability. The recursion defining \((x_{n})_{n\in\mathbb{N}}\) reads as an implicit Euler scheme with a step size of 1. As described in the introduction, an implicit scheme is usually more stable than an explicit one. We first recall the definition of stability for schemes. This property ensures that our architecture has an auto-regressive stability.
**Definition 2.1** (Stability in norm).: _A scheme \((x_{n})_{n\in\mathbb{N}}\) of dimension \(M\) is stable in norm \(L^{p}\) if there exists \(C(T)\) independent of the time discretization step \(\Delta t\) such that:_
\[\forall\;x_{0}\in\mathbb{R}^{M},\;n\geq 0;\;n\Delta t\leq T,\;\|x_{n}\|_{p} \leq C(T)\|x_{0}\|_{p}\,.\]
This general definition of stability in norm ensures that a scheme does not amplify errors. This definition is equivalent to several others in the numerical analysis community.
Suppose that the spectrum of \(W_{n}\) is contained in \([-1,0)\) for every integer \(n\); then we can assert that \((x_{n})_{n\in\mathbb{N}}\) is well-defined, using theorem 5.2. The proof of the stability of our Implicit ResNet network is then by induction on the dimension and is given in appendix 5.4:
**Theorem 2.1** (Stability theorem).: _If the elements of the diagonal of \(W_{n}\) are in \([-1,0)\) for every integer \(n\), then \((x_{n})_{n\in\mathbb{N}}\) is stable in norm \(L^{p}\)._
This theorem leads to hard constraints on the weights of the network.
### Implementation
To validate practically our theoretical results, we choose a setting that highlights stability issues. We then test our implementation of an implicit ResNet. In order to respect the assumptions of theorem 2.1, we forecast the dynamics in the latent space, as detailed below.
**Setting.** We first learn the trajectory at a given small time step \(\Delta t\). We only give data from \(t=0\) to \(t=\Delta t\) for the training. We then forecast long-term, at \(N\cdot\Delta t\) with \(N\gg 1\). This very restricted setting allows us to see how the network reacts in forecasting to changes in the distribution and to error accumulation. Usually, neural network forecasting methods use data from \(t=0\) to \(T=L\cdot\Delta t\) for the training, which allows different tricks to stabilize the network, such as predicting multiple time steps at the same time. However, in this work, we want to analyze how the network behaves without any trick that can slow down divergence. Indeed, the tricks used in the other settings do not actually guarantee stability of the network. The training is performed with a mean-squared error (MSE) loss.
**Implicit neural network architecture.** To implement a neural network from Theorem 2.1, we use the following architecture: \(z_{\Delta t}=f_{dec}\circ f^{K}_{res}\circ...\circ f^{1}_{res}\circ f_{enc}(z_{0})\), with \(f^{k}_{res}(x)=x+\text{ReLU}(W_{k-1}\cdot f^{k}_{res}(x)+b_{k-1})\). The encoder and decoder are linear, and the encoder projects the data to a smaller dimension \(M\). The full architecture is illustrated in Figure 2. The specificity of our architecture is that the residual blocks are connected with an implicit Euler scheme iteration. To do so, we use a differentiable root-finding algorithm (Kasim and Vinko, 2020). More precisely, we use the first Broyden method (van de Rotten and Lunel, 2005), a quasi-Newton method, to solve the root-finding problem. This helped in getting better results compared to other algorithms.
**Latent space forecasting.** As in Wiewel et al. (2019); Kim et al. (2019), the forecast can be done within the latent space of the network: \(z_{N\cdot\Delta t}=f_{dec}\circ f_{block}\circ...\circ f_{block}\circ f_{enc}(z_{0})\), with \(f_{block}=f^{K}_{res}\circ...\circ f^{1}_{res}\). To predict at time \(N\cdot\Delta t\) from time \(t=0\), we apply the residual blocks \(N\) times. This is illustrated in Figure 4. This propagation allows our network to respect the assumptions of theorem 2.1 and thus be stable.
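The following sketch illustrates one constrained implicit residual block and a latent-space rollout. The sizes, weight magnitudes, and the use of SciPy's Broyden solver are our illustrative choices, not the paper's implementation, and convergence of the root finder is not guaranteed for arbitrary weights; the diagonal of \(W\) is kept in \([-1,0)\) as required by Theorem 2.1.

```python
# Sketch of one implicit block x = x_n + ReLU(W x + b) and a latent rollout.
# Weights, sizes, and solver settings are illustrative assumptions.
import numpy as np
from scipy.optimize import broyden1

rng = np.random.default_rng(0)
M = 16
W = np.triu(0.05 * rng.standard_normal((M, M)))   # upper-triangular weights
np.fill_diagonal(W, -rng.uniform(0.1, 0.9, M))    # diagonal entries in [-1, 0)
b = 0.1 * rng.standard_normal(M)

def implicit_block(x_n):
    # Solve the implicit Euler step: find x with x = x_n + ReLU(W x + b).
    residual = lambda x: x_n + np.maximum(W @ x + b, 0.0) - x
    return broyden1(residual, x_n, f_tol=1e-6)

x = rng.standard_normal(M)                        # encoded initial condition
norms = []
for _ in range(50):                               # latent-space rollout, N = 50
    x = implicit_block(x)
    norms.append(np.linalg.norm(x))
print(norms[0], norms[-1])                        # the rollout should not blow up
```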
## 3 Experiments
We evaluate the performance of our method on the _Advection equation_ and _Burgers' equation_.
**Datasets.** _Advection equation_. We construct our first dataset with the following 1-D linear PDE, \(\frac{\partial\psi}{\partial t}=-\frac{1}{2}\frac{\partial\psi}{\partial z},z\in(0,2\pi)\), \(t\in\mathbb{R}^{+}\) and \(\psi(z,0)=f_{0}(z),z\in(0,2\pi)\).
_Burgers' equation_. In the second dataset, we consider the following 1-D non-linear PDE, \(\frac{\partial\psi}{\partial t}=-\frac{1}{2}\frac{\partial\psi^{2}}{\partial z}+\frac{\partial^{2}\psi}{\partial z^{2}},z\in(0,2\pi)\), \(t\in(0,1]\) and \(\psi(z,0)=f_{0}(z),z\in(0,2\pi)\).
Both equations have periodic boundary conditions. We approximate the mapping from the initial condition \(f_{0}\) to the solution at a given discretization time \(\Delta t\), i.e. \(u_{0}\mapsto u(\cdot,\Delta t)\). We then forecast to longer time ranges. \(\Delta t\) is set to 1 for the _Advection equation_ with a grid of 100 points and 0.0005 for _Burgers' equation_ with a grid of 128 points.
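For the advection case the exact solution is a periodic shift of the initial condition, so training pairs can be generated directly, as in the sketch below; the random Fourier initial conditions are our assumption, since the paper does not specify the initial-condition distribution here.

```python
# Sketch of generating advection training pairs from the exact solution
# psi(z, t) = f0(z - 0.5 t) with periodic boundaries; f0 here is assumed
# to be a few random Fourier modes (an illustrative choice).
import numpy as np

nx, dt = 100, 1.0
z = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
rng = np.random.default_rng(0)

def random_f0(z):
    ks = np.arange(1, 5)
    a, b = rng.normal(size=ks.size), rng.normal(size=ks.size)
    return sum(a[i] * np.sin(k * z) + b[i] * np.cos(k * z) for i, k in enumerate(ks))

u0 = random_f0(z)
u_dt = np.interp((z - 0.5 * dt) % (2.0 * np.pi), z, u0, period=2.0 * np.pi)
# (u0, u_dt) forms one training pair mapping t = 0 to t = dt.
print(u0.shape, u_dt.shape)
```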
**Baseline methods.** We compare our Implicit ResNet against an Explicit ResNet with a ReLU activation function and a Fourier Neural Operator (FNO) (Li et al., 2020). We have also implemented two Explicit ResNet variants, one with a tanh activation function and one with batch normalization. We design the Explicit ResNet in the same way as our implicit version, with \(K\) layers of residual blocks that are linked by \(x_{n+1}=x_{n}+R_{n}(x_{n})\). Traditional methods forecast by using the output of the network at time \(t\) to predict the dynamics at time \(t+\Delta t\). So to predict at time \(N\cdot\Delta t\) from time \(t=0\), the baseline networks are iteratively applied \(N\) times, as illustrated in Figure 3.
**Results.** Prediction errors are reported in Table 1 for the _Advection equation_ and _Burgers' equation_. We also show the error as a function of the forecast time in Figure 1.
We found that traditional deep learning methods diverge with the forecast time. They reach a MSE of more than \(10^{8}\) for the _Advection equation_ and go to infinity for _Burgers' equation_, respectively at
time \(400\) and \(0.15\). Moreover, we see in Figure 1 that their curve in time is convex, so the increase in error is accelerating over time. We also found that our proposed Implicit ResNet presents better results by several orders of magnitude for both datasets. Moreover, we can see in Figure 1 that its curve in time is reaching a stable plateau, as expected from our theorem 2.1.
As for the training, traditional deep learning methods manage to learn very well the dynamics at \(t=\Delta t\), with two orders of magnitude better than our Implicit ResNet. However, the latter still manages to learn well the dynamics with a MSE of \(10^{-2}\) for the _Advection equation_ and \(10^{-3}\) for _Burgers' equation_. This difference in training can mainly be explained by the longer training time of the Implicit ResNet, which made us take a smaller number of epochs for this network (1250 against 2500).
**Discussion.** Figure 1 demonstrates the main benefits of our constrained implicit neural network. Our network is stable whereas the other methods diverge in time. However, although it is stable and far better than the baselines, it does not manage to forecast the long-term dynamics accurately. This is further confirmed by Table 4, which shows high relative errors. In other words, stability is guaranteed but convergence is not. We can also note that constraining the weights makes our network harder to train, but it guarantees structural forecasting stability.
## 4 Conclusion
In this work, we studied the challenging task of long-term forecasting in dynamical systems. To do so, we developed a theoretical framework to analyze the stability of deep learning methods for forecasting dynamical systems. We then designed a constrained implicit neural network out of this framework. To the best of our knowledge, this is the first work that proposes to study deep learning architectures for forecasting dynamical systems from a numerical scheme standpoint. We showed improved results with respect to deep learning baselines for two transport PDEs.
This work opens new perspectives for studying the forecasting stability of neural networks from a numerical scheme standpoint, thus offering more robust architectures. However, this analysis still needs improvements. Even though it ensures forecasting stability of our proposed network, it does not guarantee good convergence properties. We believe that developing this line of research could help overcome these challenges and provide more robust architectures for forecasting dynamical systems over long time ranges.
## Acknowledgments
We thank Yuan Yin and Etienne Le Naour for helpful insights and discussions on this project.
|
2309.00924 | Entanglement phase transitions in non-Hermitian quasicrystals | The scaling law of entanglement entropy could undergo qualitative changes
during the nonunitary evolution of a quantum many-body system. In this work, we
uncover such entanglement phase transitions in one-dimensional non-Hermitian
quasicrystals (NHQCs). We identify two types of entanglement transitions with
different scaling laws and critical behaviors due to the interplay between
non-Hermitian effects and quasiperiodic potentials. The first type represents a
typical volume-law to area-law transition, which happens together with a
PT-symmetry breaking and a localization transition. The second type features an
abnormal log-law to area-law transition, which is mediated by a critical phase
with a volume-law scaling in the steady-state entanglement entropy. These
entangling phases and transitions are demonstrated in two representative models
of NHQCs. Our results thus advanced the study of entanglement transitions in
non-Hermitian disordered systems and further disclosed the rich entanglement
patterns in NHQCs. | Longwen Zhou | 2023-09-02T12:17:18Z | http://arxiv.org/abs/2309.00924v3 | # Entanglement phase transitions in non-Hermitian quasicrystals
###### Abstract
The scaling law of entanglement entropy could undergo qualitative changes during the nonunitary evolution of a quantum many-body system. In this work, we uncover such entanglement phase transitions in one-dimensional non-Hermitian quasicrystals (NHQCs). We identify two types of entanglement transitions with different scaling laws and critical behaviors due to the interplay between non-Hermitian effects and quasiperiodic potentials. The first type represents a typical volume-law to area-law transition, which happens together with a PT-symmetry breaking and a localization transition. The second type features an abnormal log-law to area-law transition, which is mediated by a critical phase with a volume-law scaling in the steady-state entanglement entropy. These entangling phases and transitions are demonstrated in two representative models of NHQCs. Our results thus advanced the study of entanglement transitions in non-Hermitian disordered systems and further disclosed the rich entanglement patterns in NHQCs.
## I Introduction
Along with the increase of measurement rates, the competition between unitary time evolution and projective measurements could prompt the steady state of a quantum many-body system to switch from a volume-law-entangled phase to a quantum Zeno phase with an area-law entanglement entropy (EE) [1; 2; 3; 4; 5]. Ever since its discovery, this measurement-induced entanglement phase transition has attracted great attention in both theoretical [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52] and experimental [53; 54] studies, with important implications for the understanding of quantum information dynamics and the simulation of quantum many-body systems [55; 56; 57]. Recently, entanglement phase transitions are also considered in the context of non-Hermitian physics [58; 59; 60; 61; 62; 63]. There, the effect of measurement is taken into account by considering a nonunitary evolution generated by a non-Hermitian Hamiltonian. The dissipation-gap formation and the non-Hermitian skin effect are further identified as two typical mechanisms of producing entangling-disentangling phase transitions [61; 62]. Yet, these discoveries are established with a focus on pristine non-Hermitian lattice models.
Non-Hermitian quasicrystal (NHQC) forms a typical category of disordered non-Hermitian setup [64; 65; 66]. In an NHQC, the interplay between correlated disorder and gain/loss or nonreciprocal effects could yield rich phases and phenomena including parity-time-reversal-(PT-) symmetry breaking transitions, localization transitions, topological transitions and mobility edges [67; 68; 69; 70; 71; 72]. Despite great theoretical efforts [73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104], NHQCs have also been experimentally realized by nonunitary photonic quantum walks [105; 106]. However, much less is known regarding entanglement phase transitions in NHQCs [101; 102]. This question can be interesting, as a PT-broken NHQC could belong to either a localized phase [68] or an extended phase [67]. In the latter case, the delocalized bulk states should prefer a volume-law scaling in the steady-state EE after a long-time evolution, while the dissipation gap in the complex energy spectrum may favor an area-law entanglement scaling. The competition between these two opposite trends may lead to new scaling laws and exotic critical behaviors for the EE. Moreover, an NHQC could possess a point-gap instead of a line-gap on the complex energy plane [67; 68], and the implication of a point dissipation gap on entanglement phase transitions is largely unclear. Besides, whether and how entanglement transitions would accompany other phase transitions (e.g., PT-symmetry breaking, localization, etc.) in NHQCs remain to be uncovered.
To resolve these puzzles, we explore in this work the entanglement phase transitions in NHQCs, with a focus on two "minimal" and mutually dual non-Hermitian lattice models [69; 76]. In Sec. II, we introduce these models and review their known spectral and localization properties. The entanglement phase transitions in these models are explored in detail in Sec. III. A unique type of log-law to area-law entanglement transition, mediated by a volume-law critical entangling phase is identified. In Sec. IV, we summarize our results and discuss potential future directions.
## II Models
In this work, we focus on the entanglement phase transitions in two "minimal" non-Hermitian variants of the Aubry-Andre-Harper (AAH) model. They will be denoted by NHAAH1 and NHAAH2 for brevity. We first go over some of their key physical properties in this section. Throughout the discussions, we will set the lattice
constant \(a=1\) and the Planck constant \(\hbar=1\).
In position representation, the Hamiltonian of the NHAAH1 takes the form of
\[\hat{H}_{1}=J\sum_{n}(\hat{c}_{n}^{\dagger}\hat{c}_{n+1}+\hat{c}_{n+1}^{\dagger} \hat{c}_{n})+V\sum_{n}e^{-i2\pi\alpha n}\hat{c}_{n}^{\dagger}\hat{c}_{n}. \tag{1}\]
Here \(\hat{c}_{n}^{\dagger}\) (\(\hat{c}_{n}\)) creates (annihilates) a fermion on the lattice site \(n\). \(J\) denotes the nearest-neighbor hopping amplitude and \(V\) denotes the amplitude of onsite potential \(V_{n}=Ve^{-i2\pi\alpha n}\). Expanding a general state as \(|\psi\rangle=\sum_{n}\psi_{n}\hat{c}_{n}^{\dagger}|\emptyset\rangle\), the eigenvalue equation \(\hat{H}_{1}|\psi\rangle=E|\psi\rangle\) of NHAAH1 can be expressed in the following form
\[J\psi_{n+1}+J\psi_{n-1}+Ve^{-i2\pi\alpha n}\psi_{n}=E\psi_{n}. \tag{2}\]
Here \(|\emptyset\rangle\) denotes the vacuum state and the amplitude \(\psi_{n}\) is normalized as \(\sum_{n}|\psi_{n}|^{2}=1\). It is clear that the NHAAH1 is non-Hermitian due to the complex onsite phase factor \(e^{-i2\pi\alpha n}\). It further possesses the PT symmetry, with the parity \(\mathcal{P}:n\rightarrow-n\) and the time-reversal \(\mathcal{T}=\mathcal{K}\), where \(\mathcal{K}\) performs the complex conjugation. The quasicrystal nature of the system comes about by setting \(\alpha\) as an irrational number, so that the onsite potential is spatially quasiperiodic. The energy spectrum of the system under the periodic boundary condition (PBC) was found to take the form of [69]
\[E=\begin{cases}2J\cos k&|V|\leq|J|\\ \left(V+\frac{J^{2}}{V}\right)\cos k+i\left(V-\frac{J^{2}}{V}\right)\sin k&| V|>|J|\end{cases}, \tag{3}\]
where \(k\in[-\pi,\pi)\). Therefore, the spectrum is real for \(|V|<|J|\) (PT-invariant) and complex for \(|V|>|J|\) (PT-broken). There is a PT transition in the energy spectrum at \(|V|=|J|\).
The Hamiltonian of the NHAAH2 in the position representation is given by
\[\hat{H}_{2}=J\sum_{n}\hat{c}_{n+1}^{\dagger}\hat{c}_{n}+2V\sum_{n}\cos(2\pi \alpha n)\hat{c}_{n}^{\dagger}\hat{c}_{n}, \tag{4}\]
and the related eigenvalue equation reads
\[J\psi_{n-1}+2V\cos(2\pi\alpha n)\psi_{n}=E\psi_{n}. \tag{5}\]
It is clear that the nearest-neighbor hopping is unidirectional from left to right, making the system apparently non-Hermitian. The NHAAH2 is also quasiperiodic if the value of \(\alpha\) is irrational. Taking a rational approximation for \(\alpha\simeq p/q\) (with \(p,q\) being co-prime integers) and performing the discrete Fourier transformation \(\psi_{n}=\frac{1}{L}\sum_{\ell=1}^{L}\phi_{\ell}e^{i2\pi\alpha\ell n}\) under the PBC (\(\psi_{n}=\psi_{n+L}\)), the Eq. (5) can be transformed to the momentum space [68] as
\[V\phi_{\ell+1}+V\phi_{\ell-1}+Je^{-i2\pi\alpha\ell}\phi_{\ell}=E\phi_{\ell}, \tag{6}\]
where \(L\) denotes the length of lattice. It is now clear that the NHAAH2 also possesses the PT symmetry, with \(\mathcal{P}:\ell\rightarrow-\ell\) and \(\mathcal{T}=\mathcal{K}\). The energy spectrum of the system under the PBC is further given by
\[E=\begin{cases}\left(J+\frac{V^{2}}{J}\right)\cos k+i\left(J-\frac{V^{2}}{J} \right)\sin k&|V|<|J|\\ 2V\cos k&|V|\geq|J|\end{cases}, \tag{7}\]
where \(k\in[-\pi,\pi)\). Therefore, the spectrum is real for \(|V|>|J|\) (PT-invariant) and complex for \(|V|<|J|\) (PT-broken). The PT transition of NHAAH2 also happens at \(|V|=|J|\)[76].
By comparing the Eqs. (2) and (6), we further observe a duality relation between the NHAAH1 and NHAAH2, implying the presence of a fixed point along \(|J|=|V|\). In fact, it has been identified that under the PBC and for any irrational \(\alpha\), there is a PT spectral transition together with a localization-delocalization transition at \(|J|=|V|\) for both the NHAAH1 and NHAAH2. When \(|V|<|J|\), the NHAAH1 (NHAAH2) resides in a metallic phase with a real (complex) spectrum and holding only extended eigenstates. When \(|V|>|J|\), the NHAAH1 (NHAAH2) switches to an insulator phase with a complex (real) spectrum and holding only localized eigenstates [69; 76]. The transitions between these phases could be further captured by quantized changes of spectral topological winding numbers [67; 68].
In Fig. 1, we illustrate the phases and transitions in NHAAH1 and NHAAH2 by investigating their spectra and inverse participation ratios (IPRs). The \(\langle\text{Im}E\rangle\) [in Figs. 1(a) and 1(c)] and \(\langle\text{IPR}\rangle\) [in Figs. 1(b) and 1(d)]
Figure 1: Phase diagrams of the NHAAH1 [in (a), (b)] and NHAAH2 [in (c), (d)] under the PBC [107]. We choose \(\alpha=\frac{\sqrt{5}-1}{2}\) and the length of lattice \(L=610\) for all panels. The red dashed lines show the phase boundaries \(J=\pm V\). The NHAAH1 stays in a PT-invariant extended phase for \(|V|<|J|\) and goes to a PT-broken localized phase for \(|V|>|J|\). The NHAAH2 resides in a PT-broken extended phase for \(|V|<|J|\) and switches to a PT-invariant localized phase for \(|V|>|J|\).
are defined as
\[\langle\text{Im}E\rangle=\frac{1}{L}\sum_{j=1}^{L}|\text{Im}E_{j}|, \tag{8}\]
\[\langle\text{IPR}\rangle=\frac{1}{L}\sum_{j=1}^{L}\sum_{n=1}^{L}|\psi_{n}^{j}|^ {4}. \tag{9}\]
Here \(E_{j}\) is the \(j\)th eigenenergy of \(\hat{H}_{1}\) or \(\hat{H}_{2}\) with the normalized right eigenvector \(|\psi_{j}\rangle=\sum_{n=1}^{L}\psi_{n}^{j}\hat{c}_{n}^{\dagger}|\emptyset\rangle\). By definition, we expect \(\langle\text{Im}E\rangle=0\) (\(\langle\text{Im}E\rangle>0\)) in the PT-invariant (PT-broken) phase, and \(\langle\text{IPR}\rangle\to 0\) (\(\langle\text{IPR}\rangle>0\)) in the extended (localized) phase. The numerical results presented in Fig. 1 clearly verified the theoretically predicted metal/insulator phases, PT transitions and localization transitions in these NHQCs [69; 76].
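For illustration, a minimal Python sketch of these diagnostics is given below (the helper names are ours). The matrix for the NHAAH2 follows Eq. (4) directly; the form assumed for the NHAAH1, symmetric hopping \(J\) plus the complex onsite potential \(Ve^{-i2\pi\alpha n}\), is our reading of Eq. (1) inferred from the phase factor and PBC spectrum quoted above.

```python
import numpy as np

ALPHA = (np.sqrt(5) - 1) / 2  # inverse golden ratio, as used for Fig. 1

def h_nhaah1(L, J, V):
    # Assumed form of Eq. (1): symmetric hopping J plus complex onsite potential V*exp(-i*2*pi*alpha*n).
    n = np.arange(L)
    h = np.diag(V * np.exp(-2j * np.pi * ALPHA * n))
    h[(n + 1) % L, n] += J
    h[n, (n + 1) % L] += J
    return h

def h_nhaah2(L, J, V):
    # Eq. (4): unidirectional hopping J plus real quasiperiodic potential 2V*cos(2*pi*alpha*n).
    n = np.arange(L)
    h = np.diag(2 * V * np.cos(2 * np.pi * ALPHA * n)).astype(complex)
    h[(n + 1) % L, n] += J
    return h

def mean_imE_and_ipr(h):
    # <ImE> of Eq. (8) and <IPR> of Eq. (9), from normalized right eigenvectors.
    E, psi = np.linalg.eig(h)
    psi = psi / np.linalg.norm(psi, axis=0)
    return np.mean(np.abs(E.imag)), np.mean(np.sum(np.abs(psi) ** 4, axis=0))

if __name__ == "__main__":
    L, J = 610, 1.0
    for V in (0.5, 2.0):  # below / above the transition at |V| = |J|
        print("NHAAH1", V, mean_imE_and_ipr(h_nhaah1(L, J, V)))
        print("NHAAH2", V, mean_imE_and_ipr(h_nhaah2(L, J, V)))
```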
Based on these known physical properties, one may expect to have entanglement phase transitions in the NHAAH1 and NHAAH2. For example, after a long-time evolution, the EE of a typical initial state might be proportional to the system size (volume-law) in the PT-invariant phase and become almost independent of the system size in the PT-broken phase (area-law) [62]. The PT transition of NHAAH1 or NHAAH2 should then also accompany a volume-law to area-law entanglement transition. Meanwhile, one may also expect the scaling of steady-state EE to follow a volume-law in the extended phase and an area-law in the localized phase. However, the PT-invariant (PT-broken) phase of our system could also be a localized (an extended) phase, making the real physical situation more complicated. As will be demonstrated in the following section, besides the more conventional volume-law to area-law entanglement transitions, the steady-state EE of an NHQC may follow an abnormal log-law scaling due to the interplay between quasiperiodicity and non-Hermitian effects. A log-law to area-law entanglement phase transition can further occur across a critical point where the EE follows a volume-law.
## III Results
In this section, we reveal the entanglement phase transitions in NHAAH1 and NHAAH2. We first discuss the definition of EE and the calculation of its dynamics for a non-Hermitian system. Next, we demonstrate the scaling relations of steady-state EE with respect to the system and subsystem sizes for the two NHQC models in Secs. III.1 and III.2. These relations allow us to clearly distinguish different entangling phases in the considered systems. The entanglement phase transitions are further uncovered by investigating the changes of EE with respect to different system parameters. With all this information, we finally establish the entanglement phase diagrams for our NHQC models.
For a system consisting of noninteracting fermions, the EE of an arbitrary subsystem and its time evolution can be extracted from the spectrum and dynamics of the single-particle correlator. Consider a system described by the quadratic Hamiltonian \(\hat{H}=\sum_{m,n}\hat{c}_{m}^{\dagger}H_{mn}\hat{c}_{n}\) and prepared at time \(t=0\) in the initial state \(|\Psi_{0}\rangle\). The normalized state of the system at a later time \(t\) is given by
\[|\Psi(t)\rangle=\frac{e^{-i\hat{H}t}|\Psi_{0}\rangle}{\sqrt{\langle\Psi_{0}|e^{i \hat{H}^{\dagger}t}e^{-i\hat{H}t}|\Psi_{0}\rangle}}. \tag{10}\]
Here \(\hat{c}_{m}^{\dagger}\) (\(\hat{c}_{n}\)) creates (annihilates) a fermion at the lattice site \(m\) (\(n\)). Note that for a non-Hermitian system, we generally have \(\hat{H}\neq\hat{H}^{\dagger}\), leading to a nonunitary time evolution. In our calculations, we choose the initial state to be in the form of a charge density wave for a half-filled lattice, i.e.,
\[|\Psi_{0}\rangle=\prod_{r=1}^{\lfloor L/2\rfloor}\hat{c}_{2r}^{\dagger}|\emptyset\rangle, \tag{11}\]
where \(r=1,2,...,\lfloor L/2\rfloor\). Other kinds of initial states generate similar results regarding the (sub)system-size scaling of the steady-state EE. At a later time \(t\), the element of the single-particle correlation matrix \(C(t)\) in the position representation is given by
\[C_{mn}(t)=\langle\Psi(t)|\hat{c}_{m}^{\dagger}\hat{c}_{n}|\Psi(t)\rangle. \tag{12}\]
Restricting the indices \(m\) and \(n\) to a subsystem A of size \(l\) and diagonalizing the corresponding \(l\times l\) block of \(C(t)\) result in the correlation-matrix spectrum \(\{\zeta_{j}(t)|j=1,...,l\}\). The EE at time \(t\) can then be obtained as [61]
\[S=-\sum_{j=1}^{l}[\zeta_{j}(t)\ln\zeta_{j}(t)+(1-\zeta_{j}(t))\ln(1-\zeta_{j} (t))]. \tag{13}\]
Note that the \(S\) here is the bipartite EE of a subsystem A. It is defined by tracing over all the degrees of freedom belonging to the complementary subsystem B of size \(L-l\), in the sense that \(S=-\text{Tr}(\rho_{\text{A}}\ln\rho_{\text{A}})\) and \(\rho_{\text{A}}=\text{Tr}_{\text{B}}(|\Psi(t)\rangle\langle\Psi(t)|)\). Numerically, the EE of a Gaussian state can be computed efficiently following the recipe listed in Appendix B of Ref. [61].
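A minimal sketch of this procedure in Python is given below (the helper names are ours, and we assume rather than verify that it matches the recipe of Ref. [61] in detail): the occupied single-particle orbitals are evolved with the non-Hermitian propagator, the correlation matrix of Eq. (12) is built from the resulting one-body projector, and Eq. (13) is evaluated on a subsystem block. The QR step only re-expresses the orbitals within their linear span, so it leaves the projector, and hence the EE, unchanged while preventing numerical overflow during the nonunitary evolution.

```python
import numpy as np
from scipy.linalg import expm, qr, eigvalsh

def cdw_orbitals(L):
    # Occupied orbitals of the initial state (11): sites 2, 4, ..., i.e. indices 1, 3, ... (0-based).
    U = np.zeros((L, L // 2), dtype=complex)
    U[2 * np.arange(L // 2) + 1, np.arange(L // 2)] = 1.0
    return U

def correlation_matrix(h, t, steps=200):
    # C_mn(t) = <Psi(t)| c_m^dag c_n |Psi(t)> for the normalized state (10);
    # h is a single-particle matrix such as those built in the previous sketch.
    L = h.shape[0]
    U = cdw_orbitals(L)
    prop = expm(-1j * h * (t / steps))
    for _ in range(steps):
        U = prop @ U
        U, _ = qr(U, mode="economic")  # keeps the spanned subspace, avoids overflow
    P = U @ np.linalg.inv(U.conj().T @ U) @ U.conj().T  # Hermitian one-body projector
    return P.T  # C_mn = P_nm

def entanglement_entropy(C, l):
    # Eq. (13) from the spectrum of the l x l block of C.
    zeta = eigvalsh(C[:l, :l]).clip(1e-12, 1 - 1e-12)
    return float(-np.sum(zeta * np.log(zeta) + (1 - zeta) * np.log(1 - zeta)))
```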
In the following subsections, we study the EE of our two NHQC models with the method outlined here. We focus on systems under the PBC and set the irrational parameter \(\alpha=(\sqrt{5}-1)/2\) (the inverse golden ratio) for all our calculations. Other choices of the irrational \(\alpha\) lead to similar results.
### NHAAH1
We first reveal entanglement phase transitions in the NHAAH1 by investigating its steady-state EE \(S(L,l)\), with \(L\) and \(l\) being the length of the lattice and the size of
its subsystem A. The system is prepared at \(t=0\) in the initial state \(|\Psi_{0}\rangle\) [Eq. (11)] and then evolved according to Eq. (10), with the \(\hat{H}\) given by Eq. (1). The EE \(S(t)\) at a later time \(t\) [Eq. (13)] is obtained from the spectrum of correlation matrix \(C(t)\) [Eq. (12)] restricted to the subsystem A. Focusing on a long-time evolution of duration \(T\), we obtain the steady-state EE \(S(L,l)\) by averaging \(S(t)\) over a suitable time window \(t\in[T^{\prime},T]\) with \(1\ll T^{\prime}<T\). The scaling property of \(S(L,l)\) can then be analyzed by considering different choices of \(L\) and \(l\).
In Fig. 2, we present the steady-state EE versus the system size \(L\) and the subsystem size \(l\) for typical sets of system parameters. In Figs. 2(a) and 2(c), we consider an equal partition of the system (\(l=\lfloor L/2\rfloor\)). For \(|V|>|J|\), we find that the \(S(L,L/2)\) almost does not change with \(L\), which implies that the PT-broken localized phase of the NHAAH1 is area-law entangled. This is expected, as in this case the point dissipation gap on the complex energy plane [see Eq. (3)] and the spatial localization of all eigenstates both tend to hinder the spreading of quantum entanglement across the system. For \(|V|<|J|\), we instead observe that up to leading order, the EE is proportional to the system size \(L\), i.e., \(S(L,L/2)\propto gL\) with the gradient \(g\approx 0.1\sim 0.2\). Therefore, in the PT-invariant extended phase of NHAAH1, the steady-state EE tends to satisfy a volume-law. Such a linear scaling is triggered by the quantum information spreading due to delocalized bulk states with real energies in the system. The gradient \(g\) of the volume-law scaling decreases gradually but remains finite till the critical point of PT and localization transitions at \(|J|=|V|\).
In Figs. 2(b) and 2(d), we consider a fixed system size \(L\) and obtain the curve \(S(L,l)\) versus the size \(l\) of subsystem A for \(l\in(0,L)\). The results show that for \(|V|>|J|\), the \(S(L,l)\) is almost independent of \(l\) up to slight fluctuations, which is an expected situation for an area-law entangled phase. For \(|V|\leq|J|\), the \(S(L,l)\) as a function of \(l\) can be numerically fitted as \(S(L,l)\simeq A\sin(\pi l/L)+B\ln[\sin(\pi l/L)]+C\), where \(A\), \(B\) and \(C\) are some fitting coefficients. This is typical for a volume-law entangled phase. Putting these together, we conclude that the steady-state EE of the NHAAH1 indeed follows qualitatively different scaling laws with respect to the (sub)system size in different parameter regions, which implies the presence of entanglement phase transitions in the system.
Figure 2: Steady-state EE of the NHAAH1 at half-filling versus the system size \(L\) [under bi-partition in (a), (c)] and the subsystem size \(l\) [under a fixed length of lattice \(L=610\) in (b), (d)]. Other system parameters are set as \(J=1\) for (a), (b) and \(V=1\) for (c), (d). The time span of the entire evolution is \(T=1000\).

To further decipher the entanglement transitions in the NHAAH1, we present its steady-state EE \(S(L,L/2)\) versus \(V\) and \(J\) for different system sizes \(L\) in Figs. 3(a) and 3(c). Two distinct regions can be clearly identified. In the region with \(|J|<|V|\), the EE is essentially independent of \(L\), whereas for \(|J|>|V|\) the EE increases monotonically with \(L\). A marked change is observed at \(|J|=|V|\) in the \(L\)-dependence of \(S(L,L/2)\), which implies a transition in the scaling law of the EE. In Figs. 3(b) and 3(d), we obtain the gradient \(g\) by fitting the steady-state EE \(S(L,L/2)\) with the function \(gL+s_{0}\) at different values of \(J\) and \(V\). The results show that \(g\simeq 0\) [\(S(L,L/2)\sim s_{0}\sim\mathcal{O}(1)\)] for \(|J|<|V|\) and \(g>0\) [\(S(L,L/2)\propto L\)] for \(|J|\geq|V|\), which are the expected behaviors for area-law entangled and volume-law entangled phases, respectively. There is then a discontinuous change of \(g\) at \(|J|=|V|\), which signifies an entanglement phase transition in the NHAAH1.
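The late-time averaging over \(t\in[T^{\prime},T]\) and the linear fit \(S\sim gL+s_{0}\) described above can be sketched as follows, reusing correlation_matrix and entanglement_entropy from the previous sketch and h_nhaah1 from the earlier one (all helper names and the particular parameter values are ours).

```python
import numpy as np

def steady_state_ee(build_h, L, l, J, V, T=1000.0, T_prime=500.0, samples=11):
    # Time-average S(t) over a late window [T', T] to estimate the steady-state EE.
    h = build_h(L, J, V)
    times = np.linspace(T_prime, T, samples)
    return np.mean([entanglement_entropy(correlation_matrix(h, t), l) for t in times])

# Gradient g of the bipartite steady-state EE versus L (g > 0: volume law, g ~ 0: area law).
sizes = [89, 144, 233, 377, 610]   # Fibonacci sizes, a common choice for the golden-ratio alpha
S_half = [steady_state_ee(h_nhaah1, L, L // 2, J=1.0, V=0.5) for L in sizes]
g, s0 = np.polyfit(sizes, S_half, 1)
print(f"fitted gradient g = {g:.3f}")
```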
Collecting together the scaling properties of steady-state EE with respect to the lattice size \(L\) for a half-filled and bipartite system, we arrive at the entanglement phase diagram of NHAAH1 under the PBC in Fig. 4. We find that there are indeed two phases with different entanglement natures, which are separated by an entanglement transition at \(|J|=|V|\). In the PT-broken localized phase (\(|J|<|V|\)), the system is found to be area-law entangled [\(S(L,L/2)\sim s_{0}\sim\mathcal{O}(1)\)]. The spectrum is complex with a point dissipation gap at \(E=0\) on the complex energy plane [see Eq. (3)] and all the eigenstates are localized, both of which suppress the spreading of entanglement in this case. In the PT-invariant extended phase (\(|J|>|V|\)), the system is instead volume-law entangled [\(S(L,L/2)\propto L\) up to the leading order]. Since the system possesses a real spectrum [Eq. (3)] and all its eigenstates are extended in this case, the quantum information spreads across the system and a volume-law entangled phase results. Such a volume-law to area-law entanglement phase transition was identified before in clean non-Hermitian systems due to different physical mechanisms [61; 62]. In the next subsection, we will demonstrate that an even more exotic type of entanglement phase transition could emerge in NHQCs due to the interplay between disorder and nonreciprocity.
Figure 4: Entanglement phase diagram of the NHAAH1. Different colors correspond to different values of the gradient \(g\) extracted from the linear fitting \(S\sim gL+s_{0}\) of steady-state EE versus the system size \(L\).
Figure 3: Bipartite EE of the steady state at half-filling [in (a), (c)] and the related gradient \(g\) in the scaling law of steady-state EE [in (b) and (d)] for the NHAAH1. Other system parameters are set as \(J=1\) for (a), (b) and \(V=1\) for (c), (d). The time span of the entire evolution is \(T=1000\). The values of \(g\) are obtained from the linear fitting \(S\sim gL+s_{0}\) of EE versus the lattice size \(L\) at given system parameters.
### NHAAH2
We now explore the entanglement phase transitions in the NHAAH2 [Eq. (4)] by inspecting the steady-state EE \(S(L,l)\) of a subsystem A, where \(L\) is the length of the lattice and \(l\) is the subsystem size. The initial state of the system is still at half-filling and described by the wavefunction \(|\Psi_{0}\rangle\) in Eq. (11). Evolving \(|\Psi_{0}\rangle\) over a long time duration \(T\) from \(t=0\), we obtain the EE \(S(t)\) at each \(t\in[0,T]\) according to Eqs. (10)-(13). The steady-state \(S(L,l)\) is then extracted by averaging \(S(t)\) over a time duration \(t\in[T^{\prime},T]\) for an appropriately chosen \(1\ll T^{\prime}<T\). We can then analyze the scaling behavior of \(S(L,l)\) with respect to the system size \(L\) or the subsystem size \(l\) at any given set of system parameters \((J,V)\).
Similar to the NHAAH1, we first consider a bipartite system with \(l=\lfloor L/2\rfloor\) for the NHAAH2. The \(L\)-dependence of \(S(L,L/2)\) for some typical cases is then obtained and shown in Fig. 5. We find that the EE almost does not change with \(L\) for \(|V|>|J|\), which suggests that the PT-invariant localized phase of the NHAAH2 is area-law entangled. At \(J=V=1\), we find that up to the leading order \(S(L,L/2)\sim gL\), with the gradient \(g\approx 0.1\). The same scaling law is found for other values of \(J=V\neq 0\), which indicates that the NHAAH2 is volume-law entangled along the critical lines \(J=\pm V\) of the PT-breaking and localization transitions. Interestingly, we find that up to the leading order \(S(L,L/2)\sim g\ln L\) for the cases with \(|V|<|J|\), where the coefficient \(g\approx 0.34\). Therefore, the PT-broken extended phase of the NHAAH2 turns out to be log-law entangled. Such an abnormal entanglement behavior is clearly distinct from typical scaling laws of steady-state EE found in other non-Hermitian systems due to non-Hermitian skin effects or line dissipation gaps [61, 62]. The qualitative change in the scaling law of steady-state EE from \(|V|<|J|\) to \(|V|>|J|\) further suggests a log-law to area-law entanglement transition, which is mediated by a critical volume-law entangled phase along \(|V|=|J|\).
To further decode the entanglement transitions in the NHAAH2, we consider the EE \(S(L,l)\) versus the subsystem size \(l\) for a fixed \(L\), with typical results at different system parameters shown in Fig. 6. For the cases with \(|V|>|J|\), we find that the \(S(L,l)\) is almost independent of \(l\) up to small oscillations, which is typical for an area-law entangled phase. At \(J=V=1\), the \(S(L,l)\) has the shape of the function \(A\sin(\pi l/L)+B\ln[\sin(\pi l/L)]+C\) with a small offset at \(l=L/2\). Interestingly, our numerics suggest the following generic form of EE for \(|V|<|J|\), i.e.,
\[S(L,l)\simeq\frac{1}{3}\ln[\sin(\pi l/L)]+S_{0}, \tag{14}\]
where \(S_{0}\) is a non-universal constant. Referring to the typical form of \(S(L,l)\) for a one-dimensional (1D) quantum critical system [61], Eq. (14) implies a central charge \(c=2\) for the PT-broken extended phase of the NHAAH2. The physical origin of this central charge might be understood from the fact that at half filling, there are two Fermi points with \(E=0\) at \(k=\pm\pi/2\) on the Fermi surface [see Eq. (7)]. Each of them contributes one to the central charge \(c\). Compared with the forms of \(S(L,l)\) in Figs. 2(b) and 2(d) for the NHAAH1, we further realize that the NHAAH2 indeed possesses a phase with a unique entanglement nature, as described by the scaling relation (14).

Figure 5: Steady-state EE of the NHAAH2 versus the system size \(L\) at half-filling and under equal bi-partition. Other system parameters are set as \(J=1\) for (a), (b) and \(V=1\) for (c), (d). The time span of the entire dynamics is \(T=1000\).
Combining the information obtained from the scaling properties of EE with respect to the system size, we are now ready to reveal the entanglement phase transitions in the NHAAH2. In Figs. 7(a) and 7(c), we present the steady-state EE versus \(V\) and \(J\) for different system sizes. A clear peak can be identified at \(J=V\), whose height increases monotonically with the increase of the lattice size \(L\). The presence of such a sharp peak in \(S(L,L/2)\) clearly hints at the occurrence of an entanglement transition at \(J=V\). In Figs. 7(b) and 7(d), we use the relations \(S\sim gL+s_{0}\) and \(S\sim g\ln L+s_{0}\) to fit the data at different \(L\) for \(|J|\leq|V|\) and \(|J|>|V|\), respectively. The results suggest that the scaling form of EE could undergo a discontinuous change from a log-law (\(|V|<|J|\)) with a finite \(g\) in \(S\sim g\ln L+s_{0}\) to an area-law with \(g\simeq 0\) in the linear fitting \(S\sim gL+s_{0}\) (\(|V|>|J|\)). There is thus an entanglement phase transition at \(|V|=|J|\) accompanying the PT and localization transitions in the NHAAH2.
Finally, we establish the entanglement phase diagram of the NHAAH2 by extracting the scaling laws of steady-state EE versus the system size \(L\) for a half-filled and bipartite lattice under the PBC, as shown in Fig. 8. We observe that the EE indeed satisfies an area law [\(S(L,L/2)\sim\mathcal{O}(1)\)] in the PT-invariant localized phase (\(|J|<|V|\)), and fulfills an anomalous log-law scaling [\(S(L,L/2)\propto\ln L\)] in the PT-broken extended phase (\(|J|>|V|\)). Along the phase boundary (\(|J|=|V|\)), the EE shows a volume-law critical scaling behavior [\(S(L,L/2)\propto L\)], which is similar to the NHAAH1. Besides that, the entangled phases and entanglement transitions in the NHAAH2 are rather different from those appearing in the NHAAH1.
Figure 6: Steady-state EE of the NHAAH2 versus the subsystem size \(l\) at half-filling and under a fixed length of lattice \(L=610\). Other system parameters are set as \(J=1\) for (a), (b) and \(V=1\) for (c), (d). The time span of the entire dynamics is \(T=1000\).

A possible reason behind these differences is as follows. In the PT-broken extended phase of NHAAH2 (\(|J|>|V|\)), the asymmetric hopping overcomes the blocking by quasiperiodic disorder and allows the spreading of quantum information across the system, yielding a tendency toward a volume-law entangled phase. However, the spectrum of the system in the PT-broken phase possesses a point gap on the complex energy plane at \(E=0\) [see Eq. (7)]. The presence of such a dissipation gap tends to suppress the quantum information spreading and prefers an area-law scaling for the steady-state EE. The competition between these two opposite tendencies ends up with a compromise, as reflected by the log-law entangled phase in Fig. 8. In the PT-invariant localized phase of NHAAH2 (\(|J|<|V|\)), the disorder is strong enough to prevent the information spreading and stabilizes the system in an area-law entangled phase, even though the energy spectrum is fully real [see Eq. (7)]. The very different entanglement dynamics in our mutually dual NHQC models could thus be understood. The clear differences between the entanglement transitions discovered here and some typical situations encountered in previous studies [61; 58; 62] further highlight the interesting role played by disorder in non-Hermitian systems from a quantum information perspective.
Figure 7: Bipartite EE of the steady state at half-filling [in (a), (c)] and the related gradient \(g\) in the scaling law of steady-state EE [in (b) and (d)] for the NHAAH2. Other system parameters are set as \(J=1\) for (a), (b) and \(V=1\) for (c), (d). The time span of the entire evolution is \(T=1000\). The values of \(g\) are obtained from the linear fitting \(S\sim gL+s_{0}\) (\(S\sim g\ln L+s_{0}\)) of EE versus the system size \(L\) for \(J\leq V\) (\(J>V\)) in (b) and (d).

Figure 8: Entanglement phase diagram of the NHAAH2. Different colors correspond to different values of the gradient \(g\) extracted from the fitting \(S\sim gL+s_{0}\) (\(S\sim g\ln L+s_{0}\)) of steady-state EE versus the system size \(L\) for \(|V|\geq|J|\) (\(|V|<|J|\)).

## IV Summary

In this work, we revealed entanglement phase transitions in representative 1D NHQCs. In a system with onsite gain and loss, a volume-law to area-law transition in the steady-state EE was found to go hand-in-hand with PT-breaking and localization transitions induced by non-Hermitian quasiperiodic potentials. In a system with nonreciprocal hopping, the steady-state EE instead showcased an area-law to log-law entanglement transition with the increase of the hopping asymmetry, which was mediated by a critical entangling phase whose EE followed a volume-law scaling versus the system size. This transition also went hand-in-hand with PT-breaking and delocalization transitions due to the interplay between hopping nonreciprocity and spatial quasiperiodicity. Even though the two considered models can be viewed as dual to each other, they exhibited rather different entanglement dynamics except at critical points, which were demonstrated in detail by our numerical analysis of their EE scaling laws and entanglement phase diagrams. Our findings thus unveiled the richness of entanglement phases and transitions in non-Hermitian disordered systems, which may find applications in quantum error correction and quantum information storage against decoherence.
Although our results are obtained by investigating two "minimal" NHQC models, we expect to find similar patterns of entanglement phase transitions in other 1D NHQCs with simultaneous PT and localization transitions, such as those considered in Refs. [67, 68, 105]. In more general situations, the metal and insulator phases of an NHQC could be separated by a critical phase, in which extended and localized eigenstates coexist and are separated by mobility edges. The entanglement transition in NHQCs with mobility edges thus constitutes another interesting direction for future research. An in-depth analysis of the critical exponents and universality classes of entanglement phase transitions in NHQCs is certainly needed. Besides, much less is known regarding entanglement transitions in non-Hermitian disordered systems beyond one spatial dimension, with uncorrelated disorder [108, 109, 110] and with many-body interactions [111, 112]. Concrete experimental signatures of entanglement phase transitions in non-Hermitian systems also deserve more thorough consideration.
_Note added_: Before the submission of this work, we became aware of a new preprint [113], which also explored entanglement phase transitions in NHQCs with a focus on the interplay between disorder and non-Hermitian skin effects.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China (Grant Nos. 12275260 and 11905211), the Fundamental Research Funds for the Central Universities (Grant No. 202364008), and the Young Talents Project of Ocean University of China.
|
2305.18266 | Derivation of all structure constants for boundary Liouville CFT | We prove that the probabilistic definition of the most general boundary
three-point and bulk-boundary structure constants in Liouville conformal field
theory (LCFT) agree respectively with the formula proposed by Ponsot-Teschner
(2002) and by Hosomichi (2001). These formulas also respectively describe the
fusion kernel and modular kernel of the Virasoro conformal blocks, which are
important functions in various contexts of mathematical physics. As an
intermediate step, we obtain the formula for the boundary reflection
coefficient of LCFT proposed by Fateev-Zamolodchikov-Zamolodchikov (2000). Our
proof relies on the boundary Belavin-Polyakov-Zamolodchikov differential
equation recently proved by the first named author, and inputs from the
coupling theory of Liouville quantum gravity (LQG) and Schramm-Loewner
evolution. Our results supply all the structure constants needed to perform the
conformal bootstrap for boundary LCFT. They also yield exact descriptions for
the joint law of the area and boundary lengths of basic LQG surfaces, including
quantum triangles and two-pointed quantum disks. | Morris Ang, Guillaume Remy, Xin Sun, Tunan Zhu | 2023-05-29T17:40:12Z | http://arxiv.org/abs/2305.18266v2 | # Derivation of all structure constants for boundary Liouville CFT
###### Abstract.
We prove that the probabilistic definition of the most general boundary three-point and bulk-boundary structure constants in Liouville conformal field theory (LCFT) agree respectively with the formula proposed by Ponsot-Teschner (2002) and by Hosomichi (2001). These formulas also respectively describe the fusion kernel and modular kernel of the Virasoro conformal blocks, which are important functions in various contexts of mathematical physics. As an intermediate step, we obtain the formula for the boundary reflection coefficient of LCFT proposed by Fateev-Zamolodchikov-Zamolodchikov (2000). Our proof relies on the boundary Belavin-Polyakov-Zamolodchikov differential equation recently proved by the first named author, a conformal welding result proved by Wu (2023), and an integrable input from the mating-of-trees theory for Liouville quantum gravity (LQG). Our results supply all the structure constants needed to perform the conformal bootstrap for boundary LCFT. They also yield exact descriptions for the joint law of the area and boundary lengths of basic LQG surfaces, including quantum triangles and two-pointed quantum disks.
## 1. Introduction
Liouville conformal field theory (LCFT) describes the law of the conformal factor of random surfaces in Liouville quantum gravity. Introduced by Polyakov [11] in theoretical physics, LCFT was recently made rigorous in probability theory, first in the case of the Riemann sphere in [13], and then in the case of a simply connected domain with boundary in [14]; see also [15, 16] for the case of other topologies. Following the framework of [1], the correlation functions of LCFT can be solved by the conformal bootstrap program. In the probabilistic framework, this was recently carried out on surfaces without boundary [10], [11], [12]. The initial input of the conformal bootstrap are the _structure constants_. For surfaces without boundary, it is the three-point correlation function on the sphere. It has an exact expression called the DOZZ formula which was proposed in [11, 12] and proved in [10].
For LCFT on surfaces with boundary, the structure constants are the correlation functions on the disk with three points on the boundary, or one point in the bulk and one on the boundary. The theory involves both the bulk and the boundary Liouville potentials (see the Liouville action (1.3)). When the bulk Liouville potential is absent, the structure constants were obtained by Remy and Zhu [11]. When there is one bulk insertion and no boundary ones, the structure constant was obtained by Ang, Remy, and Sun in [1], confirming an earlier proposal of Fateev, Zamolodchikov and Zamolodchikov (FZZ) in [14]. The conformal bootstrap is also applicable in the boundary case and B. Wu [26] recently proved a bootstrap formula for the annulus with one insertion at each boundary.
In this paper we obtain the exact formula for the boundary three-point structure constant proposed by Ponsot and Teschner [15] and for the bulk-boundary structure constant proposed by Hosomichi [17], in the most general case where both the boundary and bulk Liouville potentials are present. As an intermediate step, we obtain the formula for the boundary two-point function
of LCFT - also known as the boundary reflection coefficient - which was proposed in [10]. This completes the derivation of all structure constants required for the conformal bootstrap of boundary LCFT. We rely on a novel Belavin-Polyakov-Zamolodchikov (BPZ) differential equation for LCFT on the disk established in [1] and on a conformal welding statement obtained by D. Wu [23]. We also rely on the exact law of the quantum disk from the mating-of-trees theory for Liouville quantum gravity [11, 1]. See Section 1.2 for a summary of our method.
Besides its relevance to the conformal bootstrap, the boundary three-point structure constant and the bulk-boundary structure constant agree modulo a prefactor respectively with the fusion kernel and the modular kernel of the Virasoro conformal blocks [20], which are important special functions with various interpretations in mathematical physics. Moreover, the boundary reflection coefficient gives the joint area and boundary length distribution of an important family of Liouville quantum gravity surfaces called two-pointed quantum disks [11, 12]; the boundary three-point structure constant gives the corresponding information for a more general family of Liouville quantum gravity surfaces called quantum triangles [1, 12].
### Main results
We start by recalling the probabilistic construction of LCFT on the disk from [13], which is adapted from the construction on the Riemann sphere performed in [14]. By conformal invariance we will use the upper half plane \(\mathbb{H}\) as the base domain. Our presentation is brief with more details provided in Section 2. Fix the global constants
\[\gamma\in(0,2)\quad\text{ and }\quad Q=\frac{\gamma}{2}+\frac{2}{\gamma}. \tag{1.1}\]
In physics LCFT is defined using a formal path integral. Fix \(N\) points \(z_{i}\in\mathbb{H}\) with associated weights \(\alpha_{i}\in\mathbb{R}\) and similarly \(M\) points \(s_{j}\in\mathbb{R}\) with associated weights \(\beta_{j}\in\mathbb{R}\). The correlation function associated to these points is given by the formal integral
\[\left\langle\prod_{i=1}^{N}e^{\alpha_{i}\phi(z_{i})}\prod_{j=1}^{M}e^{\frac{ \beta_{i}}{2}\phi(s_{i})}\right\rangle=\int_{\phi:\mathbb{H}\mapsto\mathbb{R} }D\phi\prod_{i=1}^{N}e^{\alpha_{i}\phi(z_{i})}\prod_{j=1}^{M}e^{\frac{\beta_{ j}}{2}\phi(s_{j})}e^{-S_{L}(\phi)}, \tag{1.2}\]
where \(S_{L}\) is the Liouville action given by:
\[S_{L}(\phi)=\frac{1}{4\pi}\int_{\mathbb{H}}\left(|\partial^{\hat{g}}\phi|^{2} +QR_{\hat{g}}\phi+4\pi\mu e^{\gamma\phi}\right)d\lambda_{\hat{g}}+\frac{1}{2 \pi}\int_{\mathbb{R}}\left(QK_{\hat{g}}\phi+2\pi\mu_{B}e^{\frac{\gamma}{2}\phi }\right)d\lambda_{\partial\hat{g}}. \tag{1.3}\]
Here \(\hat{g}\) is a background metric with \(R_{\hat{g}}\) and \(\lambda_{\hat{g}}\) representing the curvature and volume in the bulk, respectively, and \((K_{\hat{g}},\lambda_{\partial\hat{g}})\) being their boundary counterparts. The terms involving \(\mu\) and \(\mu_{B}\) are the bulk and boundary Liouville potentials, respectively. Here \(\mu>0\) and \(\mu_{B}\) is a complex valued function on \(\mathbb{R}\) which is piecewise constant in between boundary insertions. We assume the \(s_{j}\)'s are chosen in counterclockwise order on \(\mathbb{R}\) and denote by \(\mu_{j}\) the value of \(\mu_{B}\) on the interval \((s_{j-1},s_{j})\), with convention \(s_{0}=-\infty\), \(s_{M+1}=\infty\). We always assume \(\Re(\mu_{j})\geq 0\). Furthermore we frequently work with the following conditions on weights \(\alpha_{i},\beta_{j}\) known as the _Seiberg bound_:
\[\sum_{i=1}^{N}\alpha_{i}+\sum_{j=1}^{M}\frac{\beta_{j}}{2}>Q,\quad\alpha_{i}<Q,\quad\text{and}\quad\beta_{j}<Q. \tag{1.4}\]
We now give a rigorous probabilistic meaning to (1.2) for \(N=0\), \(M=3\) and for \(N=1\), \(M=1\). Let \(P_{\mathbb{H}}\) be the probability measure corresponding to the free-boundary Gaussian free field on \(\mathbb{H}\) normalized to have average zero on the semi-circle \(\partial\mathbb{D}\cap\mathbb{H}\). Let the infinite measure \(\text{LF}_{\mathbb{H}}(d\phi)\) be the law of \(\phi(z)=h(z)-2Q\log|z|_{+}+\mathbf{c}\), where \(|z|_{+}:=\max(|z|,1)\) and \((h,\mathbf{c})\) is sampled according to \(P_{\mathbb{H}}\times[e^{-Qc}\,dc]\). We call the field \(\phi\) sampled from \(\text{LF}_{\mathbb{H}}\) a Liouville field on \(\mathbb{H}\). This definition of \(\text{LF}_{\mathbb{H}}\) corresponds to choosing the background metric in (1.3) to be \(\hat{g}(x)=|z|_{+}^{-4}\). We define
the bulk and boundary Gaussian multiplicative chaos (GMC) measures of \(\phi\) as the limits (see e.g. [1, 14]):
\[A_{\phi}=\lim_{\varepsilon\to 0}\epsilon^{\frac{\gamma^{2}}{2}}\int_{\mathbb{H}}e^{ \gamma\phi_{\varepsilon}(z)}d^{2}z\text{ on }\mathbb{H},\qquad\text{and}\qquad L_{\phi}=\lim_{ \varepsilon\to 0}\epsilon^{\frac{\gamma^{2}}{4}}\int_{\mathbb{R}}e^{\frac{ \gamma}{2}\phi_{\varepsilon}(z)}dz\text{ on }\mathbb{R}.\]
Given three points \(s_{1},s_{2},s_{3}\) lying counterclockwise on \(\mathbb{R}\), let \(L_{j}\) be the \(L_{\phi}\)-length of the counterclockwise arc on \(\partial\mathbb{H}\) from \(s_{j-1}\) to \(s_{j}\), namely \(L_{j}=L_{\phi}(s_{j-1},s_{j})\), where we identify \(s_{0}\) as \(s_{3}.\) Fix \(\mu>0\) and \(\mu_{1},\mu_{2},\mu_{3}\geq 0.\) For \(\beta_{1},\beta_{2},\beta_{3}\in\mathbb{R}\) satisfying the bounds of (1.4), the _boundary three-point correlation function_ of LCFT on \(\mathbb{H}\) with bulk cosmological constant \(\mu>0\) and boundary cosmological constants \((\mu_{j})_{1\leq j\leq 3}\) is defined by:
\[\left\langle\prod_{j=1}^{3}e^{\frac{\beta_{j}}{2}\phi(s_{j})}\right\rangle= \int\prod_{j=1}^{3}e^{\frac{\beta_{j}}{2}\phi(s_{j})}\cdot e^{-\mu A_{\phi}( \mathbb{H})-\sum_{j=1}^{3}\mu_{j}L_{j}}\text{LF}_{\mathbb{H}}(d\phi). \tag{1.5}\]
Similarly, consider a point \(z\in\mathbb{H}\) and a point \(s\in\mathbb{R}.\) Let the weights \(\alpha,\beta\in\mathbb{R}\) obey (1.4), and let \(\mu,\mu_{B}>0.\) We set:
\[\left\langle e^{\alpha\phi(z)}e^{\frac{\beta}{2}\phi(s)}\right\rangle=\int e^ {\alpha\phi(z)}e^{\frac{\beta}{2}\phi(s)}\cdot e^{-\mu A_{\phi}(\mathbb{H})- \mu_{B}L_{\phi}(\mathbb{R})}\text{LF}_{\mathbb{H}}(d\phi). \tag{1.6}\]
Here although \(\phi\) is only a generalized function, the factors \(\prod_{j=1}^{3}e^{\frac{\beta_{j}}{2}\phi(s_{j})}\) and \(e^{\alpha\phi(z)}e^{\frac{\beta}{2}\phi(s)}\) in the integrands can be made sense of by regularization and Girsanov's theorem. Moreover, the Seiberg bounds (1.4) ensure that both integrations in (1.5) and (1.6) are finite. We will review this construction in full detail in Section 2.2.
Due to conformal symmetry, the 3-point boundary correlation function has the following form
\[\left\langle\prod_{j=1}^{3}e^{\frac{\beta_{j}}{2}\phi(s_{j})}\right\rangle= \frac{H\mu^{\frac{2Q-\beta_{1}-\beta_{2}-\beta_{3}}{2\gamma}}}{|s_{1}-s_{2}|^{ \Delta_{\beta_{1}}+\Delta_{\beta_{2}}-\Delta_{\beta_{3}}}|s_{1}-s_{3}|^{\Delta _{\beta_{1}}+\Delta_{\beta_{3}}-\Delta_{\beta_{2}}}|s_{2}-s_{3}|^{\Delta_{ \beta_{2}}+\Delta_{\beta_{3}}-\Delta_{\beta_{1}}}}, \tag{1.7}\]
while the bulk-boundary correlation has the form:
\[\left\langle e^{\alpha\phi(z)}e^{\frac{\beta}{2}\phi(s)}\right\rangle=\frac{G \mu^{\frac{2Q-2\alpha-\beta}{2\gamma}}}{|z-\overline{z}|^{2\Delta_{\alpha}- \Delta_{\beta}}|z-s|^{2\Delta_{\beta}}}. \tag{1.8}\]
Here the conformal dimensions are given by:
\[\Delta_{\alpha}=\frac{\alpha}{2}(Q-\frac{\alpha}{2}),\quad\Delta_{\beta}= \frac{\beta}{2}(Q-\frac{\beta}{2}).\]
The function \(H\) only depends on \(\beta_{j}\) and \(\mu_{j}/\sqrt{\mu}\) for \(1\leq j\leq 3\), while \(G\) only depends on \(\alpha,\beta\) and \(\mu_{B}/\sqrt{\mu}.\) We call \(H\) the _boundary three-point structure constant_ and \(G\) the _bulk-boundary structure constant_ for LCFT. For simplicity, in the rest of the paper we will set the bulk cosmological constant to be \(\mu=1\) and write \(H\) as \(H^{(\beta_{1},\beta_{2},\beta_{3})}_{(\mu_{1},\mu_{2},\mu_{3})}\) and \(G\) as \(G_{\mu_{B}}(\alpha,\beta).\)
The conjectural formulas in physics for \(H\) and \(G\) require a crucial change of variable:
\[\mu_{B}(\sigma):=\sqrt{\frac{1}{\sin(\pi\frac{\gamma^{2}}{4})}}\cos\left(\pi \gamma(\sigma-\frac{Q}{2})\right). \tag{1.9}\]
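For readers who want numbers, the change of variable (1.9) and its inverse on the principal branch can be evaluated as in the following minimal sketch (the helper names are ours; the branch is chosen to return the representative with \(\Re(\sigma)\geq\frac{Q}{2}\), one of the two representatives allowed by the constraint on \(\Re(\sigma)\) stated just below).

```python
import cmath
import math

def mu_B(sigma, gamma):
    # Eq. (1.9): mu_B(sigma) = cos(pi*gamma*(sigma - Q/2)) / sqrt(sin(pi*gamma^2/4)).
    Q = gamma / 2 + 2 / gamma
    return cmath.cos(math.pi * gamma * (sigma - Q / 2)) / math.sqrt(math.sin(math.pi * gamma**2 / 4))

def sigma_of_mu(mu, gamma):
    # Principal-branch inverse of Eq. (1.9), returning the representative with Re(sigma) >= Q/2.
    Q = gamma / 2 + 2 / gamma
    return Q / 2 + cmath.acos(mu * math.sqrt(math.sin(math.pi * gamma**2 / 4))) / (math.pi * gamma)

gamma = 1.2
print(mu_B(sigma_of_mu(0.7, gamma), gamma))  # round-trip check, ~ 0.7
```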
For \(\Re(\mu_{B}(\sigma))>0,\) we choose \(\Re(\sigma)\in(-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2}).\) Ponsot-Teschner [15] proposed a remarkable formula for \(H\) under the reparametrization:
\[H\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}:=H^{(\beta_{1},\beta_{2},\beta_{3 })}_{(\mu_{B}(\sigma_{1}),\mu_{B}(\sigma_{2}),\mu_{B}(\sigma_{3}))}. \tag{1.10}\]
The Ponsot-Teschner formula is expressed in terms of the Barnes' double Gamma function \(\Gamma_{\frac{\gamma}{2}}(x)\) and the double sine function \(S_{\frac{\gamma}{2}}(x)=\frac{\Gamma_{\frac{\gamma}{2}}(x)}{\Gamma_{\frac{ \gamma}{2}}(Q-x)}\), which are prevalent in LCFT. Both functions admit a meromorphic extension on \(\mathbb{C}\) with an explicit pole structure; see Section A.2 for more details. Given a function \(f\) that will be taken to be \(\Gamma_{\frac{\gamma}{2}}\) or \(S_{\frac{\gamma}{2}}\) below, we will use the shorthand notation:
\[f(a\pm b):=f(a-b)f(a+b).\]
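When several \(\pm\) signs appear in the same argument, this convention is iterated; for instance
\[f(a\pm b\pm c)=f(a+b+c)\,f(a+b-c)\,f(a-b+c)\,f(a-b-c),\]
which is how the multi-sign arguments in the formula below are to be read.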
The Ponsot-Teschner formula is given by:1
Footnote 1: Our formula contains a \(2\pi\) factor that was not present in [22]. This factor of \(2\pi\) is consistent with the checks performed in [23] and with the \(2\pi\) of \(G_{\mathrm{Hos}}\).
\[H_{\mathrm{PT}}\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=2\pi\left(\frac{\pi(\frac{ \gamma}{2})^{2-\frac{\gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1- \frac{\gamma^{2}}{4})}\right)^{\frac{2Q-(\beta_{1}+\beta_{2}+\beta_{3})}{2 \gamma}}\] \[\times\frac{\Gamma_{\frac{\gamma}{2}}(Q-\frac{\beta_{2}}{2}\pm \frac{Q-\beta_{1}}{2}\pm\frac{Q-\beta_{3}}{2})}{S_{\frac{\gamma}{2}}(\frac{ \beta_{3}}{2}-\sigma_{1}+\frac{Q}{2}\pm(\frac{Q}{2}-\sigma_{3}))S_{\frac{ \gamma}{2}}(\frac{\beta_{1}}{2}+\sigma_{1}-\frac{Q}{2}\pm(\frac{Q}{2}-\sigma_ {2}))\Gamma_{\frac{\gamma}{2}}(Q)\prod_{j=1}^{3}\Gamma_{\frac{\gamma}{2}}(Q- \beta_{j})}\] \[\times\int_{\mathcal{C}}\frac{S_{\frac{\gamma}{2}}(\frac{Q-\beta _{2}}{2}+\sigma_{3}\pm(\frac{Q}{2}-\sigma_{2})+r)S_{\frac{\gamma}{2}}(\frac{Q }{2}\pm\frac{Q-\beta_{3}}{2}+\sigma_{3}-\sigma_{1}+r)}{S_{\frac{\gamma}{2}}( \frac{3Q}{2}\pm\frac{Q-\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{ 1}+r)S_{\frac{\gamma}{2}}(2\sigma_{3}+r)S_{\frac{\gamma}{2}}(Q+r)}\frac{dr}{ \mathbf{i}}. \tag{1.11}\]
Here \(\mathcal{C}\) is a properly chosen contour from \(-\mathbf{i}\infty\) to \(\mathbf{i}\infty\) such that the integral is meromorphic - namely a ratio of two holomorphic functions - in all of the six variables in the entire complex plane. We provide more details on the definition of \(H_{\mathrm{PT}}\) and the contour \(\mathcal{C}\) in Section 2.1 below. The first main result of our paper is the derivation of the Ponsot-Teschner formula.
**Theorem 1.1**.: _Let \(\gamma\in(0,2)\), \(\sum_{j=1}^{3}\beta_{j}>2Q\), \(\beta_{j}<Q\), and \(\mu_{j}=\mu_{B}(\sigma_{j})>0\) for \(j=1,2,3\). Then_
\[H\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=H_{\mathrm{PT}}\begin{pmatrix} \beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}. \tag{1.12}\]
Similarly the formula for the function \(G\) will be expressed in terms of \(\alpha,\beta,\sigma\) where \(\sigma\) is related to \(\mu_{B}\) still by the relation (1.9). We thus write \(G(\alpha,\beta,\sigma):=G_{\mu_{B}}(\alpha,\beta)\). The exact formula \(G_{\mathrm{Hos}}\) proposed for \(G\) by Hosomichi [17] is then given by:2
Footnote 2: For \(G_{\mathrm{Hos}}\) we have an extra \(2^{\Delta_{\beta}-2\Delta_{\alpha}}\) compared with [17] which comes from a different convention for the renormalization of the bulk insertion. This factor is consistent with the FZZ formula of our previous work [20].
\[G_{\mathrm{Hos}}(\alpha,\beta,\sigma)= 2\pi 2^{\Delta_{\beta}-2\Delta_{\alpha}}\left(\frac{\pi(\frac{ \gamma}{2})^{2-\frac{\gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1- \frac{\gamma^{2}}{4})}\right)^{\frac{1}{\gamma}(Q-\alpha-\frac{\beta}{2}) }\frac{\Gamma_{\frac{\gamma}{2}}(2Q-\frac{\beta}{2}-\alpha)\Gamma_{\frac{ \gamma}{2}}(\alpha-\frac{\beta}{2})\Gamma_{\frac{\gamma}{2}}(Q-\frac{\beta}{2} )^{3}}{\Gamma_{\frac{\gamma}{2}}(Q-\alpha)\Gamma_{\frac{\gamma}{2}}(Q-\beta) \Gamma_{\frac{\gamma}{2}}(\alpha)\Gamma_{\frac{\gamma}{2}}(Q)\Gamma_{\frac{ \gamma}{2}}(\frac{\beta}{2})}\] \[\times\int_{\mathcal{C}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2 \mathbf{i}\pi(Q-2\sigma)\sigma_{2}}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}( \alpha+\frac{\beta}{2}-Q)\pm\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}( \alpha-\frac{\beta}{2}+Q)\pm\sigma_{2})}.\]
The contour integral in this expression for \(G_{\mathrm{Hos}}\) converges at \(\pm\mathbf{i}\infty\) if and only if the parameters \(\alpha,\beta,\sigma\) obey the conditions \(\frac{\Re(\beta)}{2}<Q\), \(\Re(\sigma)\in\left[\frac{\Re(\beta)}{4},Q-\frac{\Re(\beta)}{4}\right]\). See also Section 2.1 for more details on this function. Our second main result is:
**Theorem 1.2**.: _Let \(\gamma\in(0,2)\), \(\alpha+\frac{\beta}{2}>Q\), \(\alpha<Q\), \(\beta<Q\), and \(\mu_{B}=\mu_{B}(\sigma)>0\). Then:_
\[G(\alpha,\beta,\sigma)=G_{\mathrm{Hos}}(\alpha,\beta,\sigma). \tag{1.13}\]
Our last main result concerns the reflection coefficient for boundary LCFT. Formally speaking, this quantity is defined as \(R_{\mu_{1},\mu_{2}}(\beta):=|s_{1}-s_{2}|^{2\Delta_{\beta}}\left\langle e^{\frac {\beta}{2}\phi(s_{1})}e^{\frac{\beta}{2}\phi(s_{2})}\right\rangle\) for \(\beta\in\mathbb{R}\), which by conformal symmetry does not depend on \(s_{1},s_{2}\in\mathbb{R}\). Although this is in the same spirit as (1.7), there are multiple subtleties in the rigorous definition of \(R_{\mu_{1},\mu_{2}}(\beta)\). First, instead of integrating \(e^{\frac{\beta}{2}\phi(s_{1})}e^{\frac{\beta}{2}\phi(s_{2})}\mathrm{LF}_{ \mathbb{H}}(d\phi)\) following (1.5), we need to integrate against \(\mathcal{M}_{2}^{\mathrm{disk}}(\beta)\), which is the law of the two-pointed quantum disk with \(\beta\)-insertion. This is because \(\mathcal{M}_{2}^{\mathrm{disk}}(\beta)\) can be viewed as \(e^{\frac{\beta}{2}\phi(s_{1})}e^{\frac{\beta}{2}\phi(s_{2})}\mathrm{LF}_{ \mathbb{H}}(d\phi)\) modulo the redundant conformal symmetries of \(\mathbb{H}\) fixing \(s_{1},s_{2}\). The measure \(\mathcal{M}_{2}^{\mathrm{disk}}(\beta)\) is first introduced in [10] and implicitly used in [11]. It describes the law of a quantum surface with two boundary marked points. We will give its precise definition in Section 2.3.
Another subtlety in defining \(R_{\mu_{1},\mu_{2}}(\beta)\) is that we cannot directly integrate \(e^{-A-\mu_{1}L_{1}-\mu_{2}L_{2}}\) over \(\mathcal{M}_{2}^{\mathrm{disk}}(\beta)\) as suggested by (1.5), where \(A\) and \(L_{1},L_{2}\) are the area and the two boundary lengths of a sample from \(\mathcal{M}_{2}^{\mathrm{disk}}(\beta)\), respectively. In fact, there is no \(\beta\) such that this integration is finite. The same issue arises in [1] where the LCFT correlation function on the disk with one bulk insertion is considered. In both cases, one needs to truncate the function \(e^{-x}\) near \(x=0\), after which the integral is finite for some range of \(\beta\). Concretely, for \(\mu_{1}\geq 0,\mu_{2}\geq 0\), we define
\[R_{\mu_{1},\mu_{2}}(\beta):=\frac{2(Q-\beta)}{\gamma}\int(e^{-A-\mu_{1}L_{1}- \mu_{2}L_{2}}-1)\,\mathrm{d}\mathcal{M}_{2}^{\mathrm{disk}}(\beta)\quad\text{ for }\beta\in(\frac{2}{\gamma},Q). \tag{1.14}\]
Then \(R_{\mu_{1},\mu_{2}}(\beta)\) is indeed finite. We will give the full details of the integral in (1.14) in Section 2.3, including its finiteness. Here we still use the convention that the bulk cosmological constant \(\mu=1\) as in the definition of \(H_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}\). The prefactor \(\frac{2(Q-\beta)}{\gamma}\) is to match with [11].
In the seminal work [10], Fateev, Zamolodchikov and Zamolodchikov proposed a formula for the boundary reflection coefficient under the same reparametrization \(\mu_{B}(\sigma)\) as in (1.10):
\[R(\beta,\sigma_{1},\sigma_{2}):=R_{\mu_{B}(\sigma_{1}),\mu_{B}(\sigma_{2})}( \beta). \tag{1.15}\]
Their formula is:
\[R_{\mathrm{FZZ}}(\beta,\sigma_{1},\sigma_{2})=\left(\frac{\pi(\frac{\gamma}{2 })^{2-\frac{\gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{ \gamma^{2}}{4})}\right)^{\frac{Q-\beta}{\gamma}}\frac{\Gamma_{\frac{\gamma}{2 }}(\beta-Q)S_{\frac{\gamma}{2}}(Q\pm(Q-\sigma_{1}-\sigma_{2})-\frac{\beta}{2} )}{\Gamma_{\frac{\gamma}{2}}(Q-\beta)S_{\frac{\gamma}{2}}(\frac{\beta}{2}\pm( \sigma_{2}-\sigma_{1}))}. \tag{1.16}\]
Our last main result is the confirmation of their proposal.
**Theorem 1.3**.: _Let \(\gamma\in(0,2)\), \(\beta\in(\frac{2}{\gamma},Q)\), and \(\mu_{j}=\mu_{B}(\sigma_{j})>0\) for \(j=1,2\). Then_
\[R(\beta,\sigma_{1},\sigma_{2})=R_{\mathrm{FZZ}}(\beta,\sigma_{1},\sigma_{2}). \tag{1.17}\]
### Overview of the proof
Our proof strategy can be summarized as follows.
1. Take as input the BPZ equation stated in Theorem 3.4 coming from [1].
2. In the case of one degenerate and three generic boundary points this equation reduces to a hypergeometric equation. Using the solution theory for such equations and the asymptotic analysis of Gaussian multiplicative chaos, one obtains a set of functional equations for the probabilistically defined \(H\) and \(R\) called _shift equations_. This is the content of Section 3.
3. Prove that under certain conditions the solution to these shift equations is unique. Since both \(R,R_{\mathrm{FZZ}}\) and \(H,H_{\mathrm{PT}}\) are respectively solutions to these shift equations and they obey the conditions for uniqueness, this implies Theorems 1.1 and 1.3.
4. Taking as an input the conformal welding identity recently proved in [21], we deduce Theorem 1.2 from Theorem 1.3. This step and the previous are performed in Section 4.
There are several novelties in the proof and technical difficulties that have to be overcome in the present paper. We list them below.
1. In the previous works on boundary theory [12, 13], a handy fact is that the Liouville correlation functions can be written as a moment of Gaussian multiplicative chaos times an explicit prefactor. Under this form the probabilistic definition can be extended meromorphically to a range where the shift equations make sense. This is no longer the case in our paper. We have to introduce proper truncations in the spirit of (1.14) to extend the probabilistic definitions of \(H\) and \(R\), and handle analytic issues coming with this complication. This is performed in Section 2. Along the same lines the fact that the Liouville correlations contain both area and boundary GMC measures complicates the proof of several results, including the limit giving \(R\) from \(H\) (see Section 5), and the proof of the operator product expansions (see Appendix D).
2. At several places in the proofs we use a blending of the techniques of the exact solvability of CFT on one hand and of the conformal welding / mating-of-trees theory on the other. We now summarize the main places where the latter theory enters the picture. Firstly, the BPZ equation from [1] was proved using the mating-of-trees theorem. Secondly, in the derivation of the shift equations for \(R\), we need the exact value of \(R(\gamma,\sigma_{1},\sigma_{2})\) coming from the mating-of-trees theory [11, 1]. This is in contrast to [13] where the bulk potential is absent and the corresponding input was supplied by the main result of [13]. Thirdly, the proof of \(G=G_{\mathrm{Hos}}\) relies on the conformal welding result of [14] and the FZZ formula proved in [1] via conformal welding and mating-of-trees.
3. The fact that we are working with the complicated formulas \(H_{\mathrm{PT}}\) and \(G_{\mathrm{Hos}}\) introduces several technicalities. The shift equations satisfied by \(H_{\mathrm{PT}}\) consist of three-term relations, which is significantly more complicated than the two-term relations obeyed by \(R_{\mathrm{FZZ}}\) or used in [10]. The uniqueness argument for such relations is quite involved and is carried out in Section 4.2. Similarly, obtaining the final formula for \(G_{\mathrm{Hos}}\) requires an involved computation and uses non-trivial identities of integrals of the functions \(\Gamma_{\frac{\gamma}{2}},S_{\frac{\gamma}{2}}\). Finally, we need a number of technical computations on \(H_{\mathrm{PT}},G_{\mathrm{Hos}}\) collected in Appendix A.
### Connection with the modular and fusion kernels of Virasoro conformal blocks
Conformal blocks are fundamental analytic functions that appear in the conformal bootstrap equations in CFT (see Section 1.4.1). There are two key linear transformations describing the symmetries of the space of conformal blocks. These two kernels are called the fusion and modular kernels respectively. For rational CFT, the existence and properties of such kernels are well known in physics thanks to the work of Verlinde [15], and Moore-Seiberg [16]. In mathematics they are understood in the context of modular tensor category; see e.g. [1]. For generic Virasoro conformal blocks with central charge \(c=1+6Q^{2}\), the existence and explicit expressions of the two kernels were conjectured by Ponsot and Teschner [17, 18] and Teschner [19], respectively; see also [19, 20]. In a work in preparation, the second and the third named authors of this paper will prove these conjectures jointly with P. Ghosal and Y. Sun.
As argued by Ponsot-Teschner [18], the fusion kernel is related to the boundary three-point Liouville correlation. By matching exact formulas, we found that the modular kernel is related to the bulk-boundary Liouville correlation in a similar manner. Let us now explicitly write these connections. Below we will use the notation \(\alpha^{\prime}_{i}\) for the alpha parameters appearing in the fusion and modular kernel to clearly distinguish them from the alpha and beta parameters of the current paper. First we give the fusion kernel, under the parameter identification
\[\beta_{1}=\alpha^{\prime}_{2},\ \beta_{2}=Q-\mathbf{i}P^{\prime},\ \beta_{3}=\alpha^{\prime}_{3},\ 2\sigma_{1}=Q+\mathbf{i}P,\ 2\sigma_{2}=\alpha^{\prime}_{1},\ 2\sigma_{3}=\alpha^{\prime}_{4},\]
one has the relation:
\[\mathcal{M}_{\alpha^{\prime}_{1},\alpha^{\prime}_{2},\alpha^{\prime}_{3},\alpha^{\prime}_{4}}^{\text{\bf sphere}}(P,P^{\prime})=\frac{1}{2\pi}\left(\frac{\pi(\frac{\gamma}{2})^{2-\frac{\gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{\beta_{1}+\beta_{2}+\beta_{3}-2Q}{2\gamma}}\frac{\Gamma_{\frac{\gamma}{2}}(Q)\Gamma_{\frac{\gamma}{2}}(Q-\beta_{1})\Gamma_{\frac{\gamma}{2}}(Q-\beta_{3})}{\Gamma_{\frac{\gamma}{2}}(\beta_{2}-Q)}\] \[\times\frac{\Gamma_{\frac{\gamma}{2}}(Q\pm(2\sigma_{1}-Q))\Gamma_{\frac{\gamma}{2}}(\frac{\beta_{2}}{2}\pm\frac{Q-2\sigma_{2}}{2}\pm\frac{Q-2\sigma_{3}}{2})S_{\frac{\gamma}{2}}(\frac{Q+\beta_{2}-\beta_{3}}{2}\pm\frac{Q-\beta_{1}}{2})}{\Gamma_{\frac{\gamma}{2}}(Q-\frac{\beta_{1}}{2}\pm\frac{Q-2\sigma_{1}}{2}\pm\frac{Q-2\sigma_{2}}{2})\Gamma_{\frac{\gamma}{2}}(Q-\frac{\beta_{3}}{2}\pm\frac{Q-2\sigma_{1}}{2}\pm\frac{Q-2\sigma_{3}}{2})}H\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}.\]
Next we give the same relation for the modular kernel. Under the parameter identification
\[\alpha=Q+\mathbf{i}P^{\prime},\quad\beta=\alpha^{\prime},\quad 2\sigma=Q+ \mathbf{i}P,\]
the following equality holds:
\[\mathcal{M}_{\alpha^{\prime}}^{\text{\bf torus}}(P,P^{\prime}) =-\frac{\pi}{2}2^{2\Delta_{\alpha}-\Delta_{\beta}}\left(\frac{ \gamma}{2}\right)^{(\frac{2}{\gamma}-\frac{\gamma}{2})(Q-\alpha)+1}\left( \frac{\pi(\frac{\gamma}{2})^{2-\frac{\gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{ 4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{1}{\gamma}(\alpha+\frac{ \beta}{2}-Q)}\] \[\times\frac{\Gamma_{\frac{\gamma}{2}}(Q\pm(2\sigma-Q))\Gamma_{ \frac{\gamma}{2}}(Q-\beta)\Gamma_{\frac{\gamma}{2}}(Q)}{\Gamma(\frac{2}{ \gamma}(\alpha-Q))\Gamma(\frac{\gamma\alpha}{4}-\frac{\gamma^{2}}{4})\Gamma_ {\frac{\gamma}{2}}(Q-\frac{\beta}{2}\pm(2\sigma-Q))\Gamma_{\frac{\gamma}{2}}( Q-\frac{\beta}{2})^{2}}G(\alpha,\beta,\sigma).\]
The relation between the Virasoro fusion kernel and boundary Liouville three-point function was inspired by a similar relation in the context of Virasoro minimal models [10]. We expect that the relation we discovered between the modular kernel and the bulk-boundary correlation function has a similar interpretation, but this remains to be understood.
### Outlook and perspective
#### 1.4.1. Conformal bootstrap for boundary LCFT
Conformal bootstrap is a program for computing correlation functions of a CFT on a Riemann surface with marked points, based on the decomposition of the marked surface into simpler pieces. The data in this program are the spectrum of its Hamiltonian and the structure constants. For LCFT on surfaces without boundary, the bootstrap program was completely carried out in [11, 12, 13], where the structure constant is the 3-point spherical correlation function given by the DOZZ formula [11].
For the boundary LCFT, the structure constants needed for conformal bootstrap are precisely the boundary three-point and bulk-boundary correlation functions whose expressions we have determined in Theorems 1.1 and 1.2. The FZZ formula will also appear as a special case of the bulk-boundary correlation with the boundary weight \(\beta\) set to zero. A first case of the conformal bootstrap for a surface with boundary has recently been proved in [10] on the annulus with one insertion at each boundary. This bootstrap formula involves the bulk-boundary correlation \(G\). The general conformal bootstrap for boundary LCFT, which is the counterpart of [12], is a work in progress by the authors of [12, 10]. As an example involving the boundary three-point function \(H\), we state the bootstrap equation for the boundary four-point function:
\[\left\langle\prod_{j=1}^{4}e^{\frac{\beta_{j}}{2}\phi(s_{j})}\right\rangle= \frac{1}{4\pi}s^{(\frac{Q^{2}}{4}-\Delta_{\beta_{1}}-\Delta_{\beta_{2}})}\int_{\mathbb{R}}H_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},Q+\mathbf{i}P)}H_{(\mu_{3},\mu_{4},\mu_{1})}^{(\beta_{3},\beta_{4},Q-\mathbf{i}P)}s^{\frac{P^{2}}{4}}\mathcal{F}_{\beta_{1},\beta_{2},\beta_{3},\beta_{4}}(s,P)dP.\]
Here the marked points \(s_{j}\) are at the locations \((0,s,1,\infty)\). The correlation function depends on four cosmological constants \((\mu_{j})_{j=1,\ldots,4}\), and the function \(\mathcal{F}_{\beta_{1},\beta_{2},\beta_{3},\beta_{4}}(s,P)\) is the conformal block.
#### 1.4.2. Implications for GMC measures
In the spirit of the discussions in [1, Section 1.3] and [22, Sections 1.3 and 1.4], we give here some consequences of the exact formulas we have derived for the GMC measures. First of all if one sets all boundary cosmological constants to \(0\), the probabilistic expressions for \(H\) and \(G\) will reduce to a moment of the Gaussian multiplicative chaos area measure of \(\mathbb{H}\). Thus our results give exact formulas for these 2d GMC measures. Starting from the formulas of the present paper, it is also possible to take the limit \(\mu=0\), which since we set \(\mu=1\) is equivalent to taking \(\Im(\sigma_{j})\to+\infty\). After proper rescaling and renormalization, the relation between \(\mu_{j}\) and \(\sigma_{j}\) becomes \(\mu_{j}=e^{\pi\mathbf{i}\gamma(\sigma_{j}-\frac{Q}{2})}\) and one recovers the expressions of [22].
The other application which has also been described in [22] is to view the function \(R\) as the boundary reflection coefficient. Note that this name comes from the equation (3.32) which relates \(H\) at \(\beta_{1}\) and \(H\) at the reflected \(2Q-\beta_{1}\) by a factor of \(R\). The function \(R\) can be used to give the tail expansion of Gaussian multiplicative chaos. Since we have both area and boundary GMC measures, our function \(R\) describes the asymptotic of the following probability:
\[\mathbb{P}\left(\int_{\mathbb{H}\cap\mathbb{D}}\frac{1}{|x|^{\gamma\beta}}e^{\gamma h(x)}d^{2}x>u^{2},\int_{-1}^{1}\frac{1}{|r|^{\frac{\gamma\beta}{2}}}e^{\frac{\gamma}{2}h(r)}d\mu_{\partial}(r)>u\right)\underset{u\to\infty}{\sim}-\frac{1}{\Gamma(1-\frac{2(Q-\beta)}{\gamma})}\frac{R_{\mu_{1},\mu_{2}}(\beta)}{u^{\frac{2}{\gamma}(Q-\beta)}}.\]
Here \(\beta\in(\frac{\gamma}{2},Q)\) and \(d\mu_{\partial}(r):=\mu_{1}\mathbf{1}_{r<0}+\mu_{2}\mathbf{1}_{r>0}\). See [19, 22] for more details on reflection coefficients and tail expansions of GMC measures.
#### 1.4.3. Interaction between the integrability of LCFT and the mating-of-trees theory
We see from (1.14) that the boundary reflection coefficient \(R\) encodes the joint law of the length and area of the two-pointed quantum disk sampled from \(\mathcal{M}_{2}^{\mathrm{disk}}(\beta)\). With Yu, the first and third named authors considered the LQG surfaces with three boundary marked points sampled from \(\prod_{j=1}^{3}e^{\frac{\beta_{j}}{2}\phi(s_{j})}\mathrm{LF}_{\mathbb{H}}(d\phi)\) in [1], which they called the quantum triangles. The boundary structure constant \(H\) encode the area and length distribution of these surfaces. As shown in [1, 1, 1, 2, 3], quantum disks and triangles behave nicely under conformal welding and facilitate a mutually beneficial interaction between the integrability of LCFT and of the mating-of-trees theory. Results of this paper are not only outcomes of this interaction, but also enrich it in several ways. For example, in a work in preparation with Yu, the first and third named authors will derive an exact formula for a quantity in mating-of-trees which is shown in [1] to encode the expected proportion of inversions in a skew Brownian permuton recently introduced in [1]. Our result on \(H\) plays a crucial role there. Moreover, conformal welding results give various relations between the fusion kernel and modular kernel of Virasoro conformal blocks. We plan to investigate the possible connection between these relations and those which are well-known in the modular tensor category context [18].
### Outline of the paper
The rest of our paper is organized as follows. In Section 2 we give precise definitions of the formulas \(H_{\mathrm{PT}}\) and \(G_{\mathrm{Hos}}\) and of the probabilistic expressions \(H\), \(G\), \(R\), and we prove the required analytic properties of these functions. In Section 3 we prove that the functions \(H\) and \(R\) admit an analytic extension and obey a set of functional equations known as the shift equations. In Section 4 we complete the proof of our main theorems using these shift equations. In Section 5 we prove that \(R\) can be obtained as a limit of \(H\), which is needed in Section 3. Finally, in the successive subsections of the appendix we give properties of the special functions used in the main text, the mating-of-trees input on the two-pointed quantum disk, the lemmas used to prove the analyticity of \(R\), and the proofs of the operator product expansions.
### Acknowledgements
The authors are grateful for the hospitality of the Institute for Advanced Study at Princeton, where the final stage of the project took place. We would like to very warmly thank Lorenz Eberhardt who helped us fix the uniqueness argument for the boundary three-point
function by suggesting to write the functional equations in matrix form. We also thank Baojun Wu and Da Wu for helpful discussions. M.A. was partially supported by NSF grant DMS-1712862, and by the Simons Foundation as a Junior Fellow at the Simons Society of Fellows. G.R. was partially supported by an NSF mathematical sciences postdoctoral research fellowship, NSF Grant DMS-1902804, and by a member fellowship from the Institute for Advanced Study. X.S. was partially supported by the NSF grant DMS-2027986, the NSF Career award 2046514, and by a member fellowship from the Institute for Advanced Study.
## 2. Definitions and meromorphicity of \(H_{\rm PT}\), \(G_{\rm Hos}\), \(H\), \(G\), and \(R\)
In this section we start by giving a full definition of the functions \(H_{\rm PT}\) and \(G_{\rm Hos}\) and the probabilistic definition of \(H\) and \(G\) under the Seiberg bounds (1.4). We then give an extended definition of \(H\) and the definition of \(R\) using a truncation procedure. We also state and prove analyticity results for these functions. Throughout this section and the next we will frequently refer to a function of several complex variables as being meromorphic on some domain. By this terminology we simply mean that it can be expressed as the ratio of two holomorphic functions of several variables on that domain.
### The Ponsot-Teschner and Hosomichi functions
We start by giving full details on the definition of \(H_{\rm PT}\) given by equation (1.11) and some of its properties. We first clarify how the contour \(\mathcal{C}\) is chosen. For a function \(f\), recall our notation \(f(a\pm b)=f(a+b)f(a-b)\). Consider the integral
\[\mathcal{J}_{\rm PT}:=\int_{\mathcal{C}}\frac{S_{\frac{\gamma}{2}}(\frac{Q- \beta_{2}}{2}+\sigma_{3}\pm(\frac{Q}{2}-\sigma_{2})+r)S_{\frac{\gamma}{2}}( \frac{Q}{2}\pm\frac{Q-\beta_{3}}{2}+\sigma_{3}-\sigma_{1}+r)}{S_{\frac{\gamma }{2}}(\frac{3Q}{2}\pm\frac{Q-\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}- \sigma_{1}+r)S_{\frac{\gamma}{2}}(2\sigma_{3}+r)S_{\frac{\gamma}{2}}(Q+r)} \frac{dr}{\mathbf{i}}, \tag{2.1}\]
so that \(H_{\rm PT}\) and \(\mathcal{J}_{\rm PT}\) are then related by an explicit prefactor containing the functions \(\Gamma_{\frac{\gamma}{2}}\) and \(S_{\frac{\gamma}{2}}\) as in (1.11). To describe how to choose the contour of integration \(\mathcal{C}\), we must look at the locations of the poles in \(r\) of the integrand. Recall that the function \(S_{\frac{\gamma}{2}}\) has poles at \(x=-n\frac{\gamma}{2}-m\frac{2}{\gamma}\) and zeros at \(x=Q+n\frac{\gamma}{2}+m\frac{2}{\gamma}\) for any \(n,m\in\mathbb{N}\) (here and throughout the paper \(\mathbb{N}\) contains \(0\)). We will group the poles of the integrand into two groups, the ones coming from the poles of the numerator and the ones coming from the zeros of the denominator. More precisely consider the set of poles in the lattices extending to the left
\[P_{\rm PT}^{-}=\left\{\xi-\frac{n\gamma}{2}-\frac{2m}{\gamma}\bigg{|}n,m\in \mathbb{N},\xi\in\{\frac{\beta_{2}}{2}-\sigma_{2}-\sigma_{3},Q+\frac{\beta_ {2}}{2}-\sigma_{3}+\sigma_{2},-\frac{\beta_{3}}{2}-\sigma_{3}+\sigma_{1}\} \right\},\]
and the set of poles in the lattices extending to the right:
\[P_{\rm PT}^{+}=\left\{\xi+\frac{n\gamma}{2}+\frac{2m}{\gamma}\bigg{|}n,m\in \mathbb{N},\xi\in\{-\frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}-\sigma_{3}+\sigma _{1},-Q+\frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}-\sigma_{3}+\sigma_{1},-2\sigma _{3}+Q\}\right\}.\]
Assume the parameters \(\beta_{i},\sigma_{i}\) are chosen so that \(P_{\rm PT}^{-}\cap P_{\rm PT}^{+}=\emptyset\). Then the contour \(\mathcal{C}\) in \(\mathcal{J}_{\rm PT}\) goes from \(-\mathbf{i}\infty\) to \(\mathbf{i}\infty\), passing to the right of all the poles in \(P_{\rm PT}^{-}\) and to the left of all the poles in \(P_{\rm PT}^{+}\). In the region of large positive or large negative imaginary part, namely away from the region containing the poles, the contour \(\mathcal{C}\) is chosen simply to be a vertical line on the axis \(\mathbf{i}\mathbb{R}\). The following lemma verifies that this integral converges at \(\pm\mathbf{i}\infty\). In the case where the parameters \(\beta_{i},\sigma_{i}\) are such that \(P_{\rm PT}^{-}\cap P_{\rm PT}^{+}\neq\emptyset\), the function \(\mathcal{J}_{\rm PT}\) will have a pole which can be determined by a residue computation, see Lemma 2.2 below.
**Lemma 2.1**.: _The integral in (2.1) converges absolutely at \(\pm\mathbf{i}\infty\)._
Proof.: We need to determine the asymptotics in \(r\) of the integrand of the contour integral as \(r\) goes to \(\pm\mathbf{i}\infty\). For this we simply need to use the asymptotics of \(S_{\frac{\gamma}{2}}\) given by equation (A.14). One obtains that the integrand is equivalent to \(c_{1}e^{3\mathbf{i}\pi Qr}\) as \(r\to+\mathbf{i}\infty\) and to \(c_{2}e^{-3\mathbf{i}\pi Qr}\) as \(r\to-\mathbf{i}\infty\), for some constants \(c_{1},c_{2}\in\mathbb{C}\) independent of \(r\). Since \(Q>0\), the integral is absolutely convergent.
We now state a lemma giving the poles of \(\mathcal{J}_{\mathrm{PT}}\) viewed as a meromorphic function over \(\mathbb{C}^{6}\) of the six parameters \(\beta_{1},\beta_{2},\beta_{3},\sigma_{1},\sigma_{2},\sigma_{3}\).
**Lemma 2.2**.: _The poles of the function \(\mathcal{J}_{\mathrm{PT}}\) occur when the set \(\{n\frac{\gamma}{2}+m\frac{2}{\gamma}:n,m\in\mathbb{N}\}\) contains any of the following_
\[\begin{array}{llll}\frac{\beta_{1}}{2}-\sigma_{1}-\sigma_{2},&Q-\frac{\beta _{1}}{2}-\sigma_{1}-\sigma_{2},&\frac{\beta_{2}}{2}-\sigma_{2}+\sigma_{3}-Q, &\frac{\beta_{2}}{2}-\sigma_{2}-\sigma_{3},\\ -Q+\sigma_{2}+\frac{\beta_{1}}{2}-\sigma_{1},&\sigma_{2}-\sigma_{1}-\frac{ \beta_{1}}{2},&-2Q+\frac{\beta_{2}}{2}+\sigma_{3}+\sigma_{2},&-Q+\frac{\beta_ {2}}{2}-\sigma_{3}+\sigma_{2},\\ \frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}-\frac{\beta_{3}}{2},&Q-\frac{\beta_{1 }}{2}-\frac{\beta_{2}}{2}-\frac{\beta_{3}}{2},&-Q-\frac{\beta_{3}}{2}+\sigma_ {3}+\sigma_{1},&-\frac{\beta_{3}}{2}-\sigma_{3}+\sigma_{1},\\ -Q+\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}+\frac{\beta_{3}}{2},&\frac{\beta_{ 3}}{2}-\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2},&-2Q+\frac{\beta_{3}}{2}+\sigma _{3}+\sigma_{1},&-Q+\frac{\beta_{3}}{2}-\sigma_{3}+\sigma_{1}.\end{array}\]
Proof.: The proof follows exactly the same steps as the one of [22, Lemma 5.11], except that in our case we have four \(S_{\frac{\gamma}{2}}\) functions in the numerator and denominator of the integrand of the contour integral of \(\mathcal{J}_{\mathrm{PT}}\) (instead of three in the case of [22, Lemma 5.11]). We nonetheless recall below the steps of the proof and explain how to derive the list of poles.
As long as the parameters \(\beta_{i},\sigma_{i}\) are chosen such that \(P_{\mathrm{PT}}^{-}\cap P_{\mathrm{PT}}^{+}=\emptyset\), one can convince oneself that it is always possible to choose the contour \(\mathcal{C}\) so that it goes from \(-\mathbf{i}\infty\) to \(\mathbf{i}\infty\), passing to the right of the poles of \(P_{\mathrm{PT}}^{-}\) and to the left of the poles of \(P_{\mathrm{PT}}^{+}\). On the other hand, choosing the contour becomes problematic precisely when the parameters are such that \(P_{\mathrm{PT}}^{-}\cap P_{\mathrm{PT}}^{+}\neq\emptyset\). Indeed, if at least one pole from \(P_{\mathrm{PT}}^{-}\) collapses with a pole from \(P_{\mathrm{PT}}^{+}\), it is not possible for \(\mathcal{C}\) to pass to the right of all of \(P_{\mathrm{PT}}^{-}\) and to the left of all of \(P_{\mathrm{PT}}^{+}\). This situation leads to a pole for the function \(\mathcal{J}_{\mathrm{PT}}\). To properly identify this pole, one can start by deforming the contour \(\mathcal{C}\) to cross one of the two collapsing poles before they collapse and pick up a residue contribution (see also in Appendix A.3 the proof of Lemma A.2 where this is explicitly performed). This residue term, viewed as a function of \(\sigma_{i},\beta_{i}\), will then have a pole at the values of the parameters \(\sigma_{i},\beta_{i}\) that caused one pole of \(P_{\mathrm{PT}}^{+}\) to collapse with a pole of \(P_{\mathrm{PT}}^{-}\). The conclusion is that the poles of \(\mathcal{J}_{\mathrm{PT}}\) are thus at the values of \(\beta_{i},\sigma_{i}\) where \(P_{\mathrm{PT}}^{-}\cap P_{\mathrm{PT}}^{+}\neq\emptyset\), which is precisely the list given in the lemma.
From here it is immediate that \(H_{\mathrm{PT}}\) is a meromorphic function on \(\mathbb{C}^{6}\).
**Lemma 2.3**.: _The function \(H_{\mathrm{PT}}\) is meromorphic on \(\mathbb{C}^{6}\), namely a ratio of two holomorphic functions on \(\mathbb{C}^{6}\) of the six parameters \(\beta_{1},\beta_{2},\beta_{3},\sigma_{1},\sigma_{2},\sigma_{3}\)._
Proof.: In Lemma 2.2 we have established that \(\mathcal{J}_{\mathrm{PT}}\) is meromorphic on \(\mathbb{C}^{6}\) with prescribed poles. Since \(H_{\mathrm{PT}}\) is equal to \(\mathcal{J}_{\mathrm{PT}}\) times an explicit prefactor containing \(\Gamma_{\frac{\gamma}{2}}\) and \(S_{\frac{\gamma}{2}}\) functions which are meromorphic, this implies the claim.
We now repeat a similar analysis for the contour integral defining the function \(G_{\mathrm{Hos}}\). There will be one notable difference, which is that there will be a condition on the parameters for the contour integral to converge at \(\pm\mathbf{i}\infty\). More precisely, the formula for \(G_{\mathrm{Hos}}\) involves the following contour integral:
\[\mathcal{J}_{\mathrm{Hos}}:=\int_{\mathcal{C}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2 \mathbf{i}\pi(Q-2\sigma)\sigma_{2}}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}( \alpha+\frac{\beta}{2}-Q)-\sigma_{2})S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha+ \frac{\beta}{2}-Q)+\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha- \frac{\beta}{2}+Q)+\sigma_{2})S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{ \beta}{2}+Q)-\sigma_{2})}.\]
By using again the asymptotics of \(S_{\frac{\gamma}{2}}\) given by equation (A.14), one obtains that this integral converges at \(\pm\mathbf{i}\infty\) if and only if:
\[\frac{\Re(\beta)}{2}<Q,\quad\Re(\sigma)\in\left[\frac{\Re(\beta)}{4},Q-\frac{ \Re(\beta)}{4}\right].\]
Despite this condition it is still possible to analytically continue the function \(\mathcal{J}_{\mathrm{Hos}}\) to a meromorphic function of its three parameters \(\alpha,\beta,\sigma\) on \(\mathbb{C}^{3}\). We do not pursue this here. For the choice of the contour of integration, similarly as for \(\mathcal{J}_{\mathrm{PT}}\), it has to pass to the left of the lattice of poles extending to the right \(P_{\mathrm{Hos}}^{+}\) and to the right of the lattice of poles extending to the left \(P_{\mathrm{Hos}}^{-}\). The lattice of poles extending to the left is given by
\[P_{\mathrm{Hos}}^{-}=\left\{\xi-\frac{n\gamma}{2}-\frac{2m}{\gamma}\quad \middle|\quad n,m\in\mathbb{N},\quad\xi\in\{\frac{1}{2}(Q-\alpha-\frac{\beta}{ 2}),\frac{1}{2}(\alpha-\frac{\beta}{2}-Q)\}\right\},\]
and the lattice of poles extending to the right is:
\[P_{\mathrm{Hos}}^{+}=\left\{\xi+\frac{n\gamma}{2}+\frac{2m}{\gamma}\quad \middle|\quad n,m\in\mathbb{N},\quad\xi\in\{\frac{1}{2}(\alpha+\frac{\beta}{2}- Q),\frac{1}{2}(Q-\alpha+\frac{\beta}{2})\}\right\}.\]
### Definition of \(H\) and \(G\) under the Seiberg bound
Let \(h\) be the free boundary Gaussian free field on the upper half plane \(\mathbb{H}=\{z\in\mathbb{C}:\Im(z)>0\}\) with covariance kernel
\[\mathbb{E}[h(x)h(y)]=G_{\mathbb{H}}(x,y):=\log\frac{1}{|x-y||x-\bar{y}|}+2\log |x|_{+}+2\log|y|_{+}, \tag{2.2}\]
where \(|x|_{+}:=\max(|x|,1)\), and the covariance is understood in the sense that \(\mathbb{E}[(h,f)(h,g)]=\iint f(x)\mathbb{E}[h(x)h(y)]g(y)dxdy\) for smooth test functions \(f\) and \(g\). Let \(P_{\mathbb{H}}\) be the law of \(h\), so that \(P_{\mathbb{H}}\) is a probability measure on the negatively indexed Sobolev space \(H^{-1}(\mathbb{H})\) ([10, 11]). This particular covariance kernel corresponds to requiring the field to have average \(0\) on the unit circle.
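One way to see this normalization concretely is that the kernel (2.2) averages to zero over the upper-half-plane portion of the unit circle in each variable, so the field pairs to zero with the uniform measure on that arc. The following minimal numerical sketch (in Python, with arbitrarily chosen test points \(y\)) illustrates this cancellation; it is purely an illustration and plays no role in the arguments below.

```python
import numpy as np

def G_H(x, y):
    # Covariance kernel (2.2) of the free-boundary GFF on the upper half plane.
    plus = lambda z: max(abs(z), 1.0)
    return (-np.log(abs(x - y)) - np.log(abs(x - np.conj(y)))
            + 2 * np.log(plus(x)) + 2 * np.log(plus(y)))

# Average of G_H(e^{i theta}, y) over the upper unit semicircle for a few test points y.
thetas = np.linspace(1e-6, np.pi - 1e-6, 20001)
for y in [0.3 + 0.7j, 2.0 + 0.5j, -1.5 + 3.0j]:
    avg = np.mean([G_H(np.exp(1j * t), y) for t in thetas])
    print(y, avg)  # each average is numerically ~0
```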
Following [1], we introduce the Liouville field on \(\mathbb{H}\), possibly with boundary insertions.
**Definition 2.4** (Liouville field).: _Let \((h,\mathbf{c})\) be sampled from \(P_{\mathbb{H}}\times[e^{-Qc}dc]\) and set \(\phi=h(z)-2Q\log|z|_{+}+\mathbf{c}\). We write \(\mathrm{LF}_{\mathbb{H}}\) as the law of \(\phi\), and call a sample from \(\mathrm{LF}_{\mathbb{H}}\) a Liouville field on \(\mathbb{H}\)._
**Definition 2.5** (Liouville field with insertions).: _Let \((\beta_{j},s_{j})\in\mathbb{R}\times\partial\mathbb{H}\) for \(j=1,\ldots,M\), where \(M\geq 1\) and the \(s_{j}\) are pairwise distinct. Sample \((h,\mathbf{c})\) from \(C_{\mathbb{H}}^{(\beta_{j},s_{j})_{j}}P_{\mathbb{H}}\times[e^{(\frac{1}{2}\sum_ {j}\beta_{j}-Q)c}dc]\) where_
\[C_{\mathbb{H}}^{(\beta_{j},s_{j})_{j}}=\prod_{j=1}^{M}|s_{j}|_{+}^{-\beta_{j} (Q-\frac{\beta_{j}}{2})}\prod_{1\leq j<k\leq M}e^{\frac{\beta_{j}\beta_{k}}{4} G_{\mathbb{H}}(s_{j},s_{k})}.\]
_Let \(\phi(z)=h(z)-2Q\log|z|_{+}+\sum_{j=1}^{M}\frac{\beta_{j}}{2}G_{\mathbb{H}}(z,s _{j})+\mathbf{c}\). We write \(\mathrm{LF}_{\mathbb{H}}^{(\beta_{j},s_{j})_{j}}\) for the law of \(\phi\) and call a sample from \(\mathrm{LF}_{\mathbb{H}}^{(\beta_{j},s_{j})_{j}}\) the Liouville field on \(\mathbb{H}\) with insertions \((\beta_{j},s_{j})_{1\leq j\leq M}\)._
We can also define Liouville fields with an insertion at \(\infty\). We will need the case \(\mathrm{LF}_{\mathbb{H}}^{(\beta_{1},0),(\beta_{2},1),(\beta_{3},\infty)}\), which can be defined by \(\lim_{s\to\infty}|s|^{2\Delta_{3}}\mathrm{LF}_{\mathbb{H}}^{(\beta_{1},0),( \beta_{2},1),(\beta_{3},s)}\) with \(\Delta_{3}=\frac{\beta_{3}}{2}(Q-\frac{\beta_{3}}{2})\). Here we give a more explicit definition which is equivalent to the limiting procedure; see [1, Lemma 2.9].
**Definition 2.6**.: _Fix \(\beta_{1},\beta_{2},\beta_{3}\in\mathbb{R}\). Set \(s_{1}=0,s_{2}=1,s_{3}=\infty\) and \(G_{\mathbb{H}}(z,\infty)=2\log|z|_{+}\). Sample \((h,\mathbf{c})\) from \(P_{\mathbb{H}}\times[e^{(\frac{1}{2}\sum_{j}\beta_{j}-Q)c}dc]\). We write \(\mathrm{LF}_{\mathbb{H}}^{(\beta_{1},0),(\beta_{2},1),(\beta_{3},\infty)}\) for the law of \(\phi\) where \(\phi(z)=h(z)-2Q\log|z|_{+}+\sum_{j=1}^{3}\frac{\beta_{j}}{2}G_{\mathbb{H}}(z,s _{j})+\mathbf{c}\)._
Given a sample \(h\) from \(P_{\mathbb{H}}\), the associated quantum area and length measures are defined by
\[\mathcal{A}_{h}=\lim_{\varepsilon\to 0}\epsilon^{\frac{\gamma^{2}}{2}}e^{ \gamma h_{\epsilon}(z)}d^{2}z,\quad\text{and}\quad\mathcal{L}_{h}=\lim_{ \varepsilon\to 0}\epsilon^{\frac{\gamma^{2}}{4}}e^{\frac{\gamma}{2}h_{ \epsilon}(z)}dz, \tag{2.3}\]
where the limits hold in probability in the weak topology of measures. The existence of limits is well-known from Gaussian multiplicative chaos (GMC); see e.g. [1, 13].
**Remark 2.7**.: _Note that it is also possible to define the GMC measures by renormalizing by the variance. The difference between the two conventions is given by:_
\[\mathcal{A}_{h}=\frac{|z|_{+}^{2\gamma^{2}}}{|z-\bar{z}|^{\frac{\gamma^{2}}{2 }}}\lim_{\varepsilon\to 0}e^{\gamma h_{\epsilon}(z)-\frac{\gamma^{2}}{2} \mathbb{E}[h_{\epsilon}(z)^{2}]}d^{2}z,\quad\text{and}\quad\mathcal{L}_{h}=|z |_{+}^{\frac{\gamma^{2}}{2}}\lim_{\varepsilon\to 0}e^{\frac{\gamma}{2}h_{ \epsilon}(z)-\frac{\gamma^{2}}{8}\mathbb{E}[h_{\epsilon}(z)^{2}]}dz.\]
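To give a concrete picture of (2.3) in the variance-normalized convention of Remark 2.7, here is a minimal numerical sketch approximating a boundary GMC measure (the \(\frac{\gamma}{2}\)-exponential of the field, as in \(\mathcal{L}_{h}\)). It replaces the free-boundary GFF by a truncated log-correlated Gaussian field on the unit circle, a standard finite-dimensional stand-in rather than the field sampled from \(P_{\mathbb{H}}\) used in this paper; the number of modes and the grid size are arbitrary choices, and the sketch is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 1.0                      # any value in (0, 2)
N = 1024                         # number of Fourier modes (truncation scale)
n = 2048                         # grid points on the circle, parametrized by x in [0, 1)
x = np.arange(n) / n

# Truncated log-correlated field on the circle:
#   h_N(x) = sum_{k=1}^N sqrt(2/k) (A_k cos(2 pi k x) + B_k sin(2 pi k x)),
# whose covariance converges to -2 log|2 sin(pi (x - y))| as N grows.
k = np.arange(1, N + 1)
A, B = rng.standard_normal(N), rng.standard_normal(N)
phases = 2 * np.pi * np.outer(x, k)
h = (np.cos(phases) * (np.sqrt(2 / k) * A)).sum(axis=1) \
    + (np.sin(phases) * (np.sqrt(2 / k) * B)).sum(axis=1)

# Boundary GMC weights with the variance normalization of Remark 2.7.
var = np.sum(2 / k)              # Var h_N(x), independent of x
weights = np.exp(0.5 * gamma * h - gamma**2 / 8 * var) / n

print(weights.sum())             # approximate gamma/2-GMC mass of the whole circle
```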
Given a sample \(\phi\) from \(\mathrm{LF}_{\mathbb{H}}\) or \(\mathrm{LF}_{\mathbb{H}}^{(\beta_{j},s_{j})_{j}}\), we similarly define \(\mathcal{A}_{\phi}\) and \(\mathcal{L}_{\phi}\) as in (2.3) with \(h\) replaced by \(\phi\). This allows us to rigorously define the function \(H\) in (1.7) for a certain range of parameters.
**Definition 2.8**.: _Let \(\mu_{i}\geq 0\) for \(i=1,2,3\). Suppose \(\beta_{1},\beta_{2},\beta_{3}\in\mathbb{R}\) satisfy the Seiberg bound:_
\[\sum_{i=1}^{3}\beta_{i}>2Q\quad\text{and}\quad\beta_{i}<Q. \tag{2.4}\]
_Then the boundary three-point structure constant is defined by_
\[H_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}:=\int e^{- \mathcal{A}_{\phi}(\mathbb{H})-\mu_{1}\mathcal{L}_{\phi}(-\infty,0)-\mu_{2} \mathcal{L}_{\phi}(0,1)-\mu_{3}\mathcal{L}_{\phi}(1,+\infty)}\;\mathrm{LF}_{ \mathbb{H}}^{(\beta_{1},0),(\beta_{2},1),(\beta_{3},\infty)}(\mathrm{d}\phi). \tag{2.5}\]
In a similar way we can give the probabilistic definition of \(G\). We first define the corresponding Liouville field:
**Definition 2.9**.: _Fix \(\alpha,\beta\in\mathbb{R}\). Sample \((h,\mathbf{c})\) from \(2^{-\frac{\alpha^{2}}{2}}P_{\mathbb{H}}\times[e^{(\alpha+\frac{\beta}{2}-Q)c}dc]\). We write \(\mathrm{LF}_{\mathbb{H}}^{(\alpha,\mathbf{i}),(\beta,0)}\) for the law of \(\phi\) where \(\phi(z)=h(z)-2Q\log|z|_{+}+\alpha G_{\mathbb{H}}(z,\mathbf{i})+\frac{\beta}{2} G_{\mathbb{H}}(z,0)+\mathbf{c}\)._
We can now give the precise definition of the structure constant \(G_{\mu_{B}}(\alpha,\beta)\).
**Definition 2.10**.: _Let \(\mu_{B}\geq 0\). Suppose \(\alpha,\beta\in\mathbb{R}\) satisfy the Seiberg bound:_
\[\alpha+\frac{\beta}{2}>Q,\quad\alpha<Q,\quad\text{and}\quad\beta<Q. \tag{2.6}\]
_Then the bulk-boundary structure constant is defined by:_
\[G_{\mu_{B}}(\alpha,\beta):=\int e^{-\mathcal{A}_{\phi}(\mathbb{H})-\mu_{B} \mathcal{L}_{\phi}(\mathbb{R})}\;\mathrm{LF}_{\mathbb{H}}^{(\alpha,\mathbf{i} ),(\beta,0)}(\mathrm{d}\phi). \tag{2.7}\]
### Meromorphic extension of \(H\)
To make sense of shift equations for \(H\) we meromorphically extend the range of its definition via an integration by parts, based on the following lemma.
**Lemma 2.11**.: _In the range of parameters as in Definition 2.8, set \(s=\frac{1}{2}\sum\beta_{i}-Q\) and_
\[\hat{H}_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}:=\int( \gamma(s+\frac{\gamma}{2})A+\frac{\gamma^{2}}{2}A(\sum_{i}\mu_{i}L_{i})+\frac{ \gamma^{2}}{4}(\sum_{i}\mu_{i}L_{i})^{2})e^{-A-\sum_{i}\mu_{i}L_{i}}\,\mathrm{ LF}_{\mathbb{H}}^{(\beta_{1},0),(\beta_{2},1),(\beta_{3},\infty)}( \mathrm{d}\phi). \tag{2.8}\]
_where \(A=\mathcal{A}_{\phi}(\mathbb{H}),L_{1}=\mathcal{L}_{\phi}(-\infty,0),L_{2}= \mathcal{L}_{\phi}(0,1)\) and \(L_{3}=\mathcal{L}_{\phi}(1,\infty)\). Then_
\[s(s+\frac{\gamma}{2})H_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},\beta_ {3})}=\hat{H}_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}.\]
Proof.: Let \(h\) be a Gaussian free field and \(\widetilde{h}:=h-2Q\log|\cdot|_{+}+\sum\frac{\beta_{i}}{2}G_{\mathbb{H}}(\cdot,s_{ i})\), where \((s_{1},s_{2},s_{3})=(0,1,\infty)\). Write \(\widetilde{A}=\mathcal{A}_{\widetilde{h}}(\mathbb{H})\) and \(\widetilde{L}_{1}=\mathcal{L}_{\widetilde{h}}(-\infty,0),\widetilde{L}_{2}= \mathcal{L}_{\widetilde{h}}(0,1),\widetilde{L}_{3}=\mathcal{L}_{\widetilde{h} }(1,+\infty)\). Since \(A=e^{\gamma c}\widetilde{A}\) and \(L_{i}=e^{\gamma c/2}\widetilde{L}_{i}\), by Definition 2.8, we have
\[H^{(\beta_{1},\beta_{2},\beta_{3})}_{(\mu_{1},\mu_{2}\mu_{3})}=\mathbb{E}\left[ \int e^{sc}\cdot e^{-e^{\gamma c}\widetilde{A}-e^{\gamma c/2}\sum_{i=1}^{3}\mu _{i}\widetilde{L}_{i}}\,dc\right].\]
For \(s>0\), \(a>0\) and \(\ell\in\mathbb{C}\) with \(\Re\ell>0\), applying integration by parts twice we have
\[\int e^{sc}\cdot e^{-e^{\gamma c}a-e^{\gamma c/2}\ell}\,dc=-\int( \frac{1}{s}e^{sc})((-\gamma e^{\gamma c}a-\frac{\gamma}{2}e^{\gamma c/2}\ell) e^{-e^{\gamma c}a-e^{\gamma c/2}\ell})\,dc\] \[= \frac{\gamma}{s}\int e^{sc}(e^{\gamma c}a)e^{-e^{\gamma c}a-e^{ \gamma c/2}\ell}\,dc+\frac{\gamma}{2s}\int e^{sc}(e^{\gamma c/2}\ell)e^{-e^{ \gamma c}a-e^{\gamma c/2}\ell}\,dc\] \[= \frac{\gamma}{s}\int e^{sc}(e^{\gamma c}a)e^{-e^{\gamma c}a-e^{ \gamma c/2}\ell}\,dc+\frac{\gamma}{2s}\frac{\gamma}{s+\frac{\gamma}{2}}\int e ^{sc}(e^{\gamma c}a)(e^{\gamma c/2}\ell)e^{-e^{\gamma c}a-e^{\gamma c/2}\ell}\,dc\] \[+\frac{\gamma}{2s}\frac{\frac{\gamma}{2}}{s+\frac{\gamma}{2}}\int e ^{sc}(e^{\gamma c/2}\ell)^{2}e^{-e^{\gamma c}a-e^{\gamma c/2}\ell}\,dc.\]
Now setting \(s=\frac{1}{2}\sum\beta_{i}-Q\), \(a=\widetilde{A}\) and \(\ell=\sum_{i}\mu_{i}\widetilde{L}_{i}\) we get the desired equality:
\[H^{(\beta_{1},\beta_{2},\beta_{3})}_{(\mu_{1},\mu_{2}\mu_{3})}=\int(\frac{ \gamma}{s}A+\frac{\gamma}{2s}\frac{\gamma}{s+\frac{\gamma}{2}}A(\sum_{i}\mu_ {i}L_{i})+\frac{\gamma}{2s}\frac{\frac{\gamma}{2}}{s+\frac{\gamma}{2}}(\sum_{ i}\mu_{i}L_{i})^{2})e^{-A-\sum_{i}\mu_{i}L_{i}}\,\mathrm{LF}^{(\beta_{1},0),(\beta_{2},1),( \beta_{3},\infty)}_{\mathbb{H}}(\mathrm{d}\phi).\]
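The integration by parts above boils down to a scalar identity in \(c\), which can be checked directly by numerical quadrature. The following minimal Python sketch compares the two sides for arbitrary sample values of \(\gamma,s,a,\ell\); it is only a sanity check of the computation.

```python
import numpy as np
from scipy.integrate import quad

gamma, s, a, l = 1.3, 0.7, 2.0, 0.5          # arbitrary values with s > 0, a > 0, l > 0

F = lambda c: np.exp(-a * np.exp(gamma * c) - l * np.exp(gamma * c / 2))
lo, hi = -60.0, 10.0                          # the integrand is negligible outside this range

lhs = quad(lambda c: np.exp(s * c) * F(c), lo, hi)[0]

I1 = quad(lambda c: np.exp(s * c) * a * np.exp(gamma * c) * F(c), lo, hi)[0]
I2 = quad(lambda c: np.exp(s * c) * a * np.exp(gamma * c) * l * np.exp(gamma * c / 2) * F(c), lo, hi)[0]
I3 = quad(lambda c: np.exp(s * c) * (l * np.exp(gamma * c / 2)) ** 2 * F(c), lo, hi)[0]

rhs = (gamma / s) * I1 \
    + (gamma / (2 * s)) * (gamma / (s + gamma / 2)) * I2 \
    + (gamma / (2 * s)) * ((gamma / 2) / (s + gamma / 2)) * I3

print(lhs, rhs)                               # the two values agree up to quadrature error
```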
The function \(\hat{H}\) has a better analytic property as shown in the following proposition.
**Proposition 2.12**.: _Set \(V=\{(\beta_{1},\beta_{2},\beta_{3}):Q-\frac{1}{2}\sum\beta_{i}<\gamma\wedge\frac{2}{\gamma}\wedge\min_{i}(Q-\beta_{i})\text{ and }\beta_{i}<Q\text{ for each }i\}\). When \((\beta_{1},\beta_{2},\beta_{3})\in V\) and \(\Re\mu_{i}\geq 0\) for \(i=1,2,3\), the integral in (2.8) defining \(\hat{H}\) is absolutely convergent. Moreover, for each \((\beta_{1},\beta_{2},\beta_{3})\in V\), the function \(H\) is holomorphic on \(\{(\mu_{1},\mu_{2},\mu_{3}):\Re\mu_{i}>0\}\) and continuous on \(\{(\mu_{1},\mu_{2},\mu_{3}):\Re\mu_{i}\geq 0\}\). Finally, for each \((\mu_{1},\mu_{2},\mu_{3})\) satisfying \(\Re\mu_{i}>0\), the function \(\hat{H}\) can be analytically extended on a complex neighborhood in \(\mathbb{C}^{3}\) of \(V\)._
We defer the proof of Proposition 2.12 to Section 2.5 and proceed to extend the definition of \(H\).
**Definition 2.13**.: _Given \((\beta_{1},\beta_{2},\beta_{3})\in V\) as in Proposition 2.12 and \(\Re\mu_{i}\geq 0\) for \(i=1,2,3\), define_
\[H^{(\beta_{1},\beta_{2},\beta_{3})}_{(\mu_{1},\mu_{2},\mu_{3})}:=\frac{1}{s(s+ \frac{\gamma}{2})}\hat{H}^{(\beta_{1},\beta_{2},\beta_{3})}_{(\mu_{1},\mu_{2},\mu_{3})}, \tag{2.9}\]
Recall that \(H\) from Theorem 1.1 is expressed in the \(\sigma\) variable via a change of variable. We now transfer Proposition 2.12 in terms of the \(\sigma\) variable.
**Proposition 2.14**.: _For \(V,s\), and \(H^{(\beta_{1},\beta_{2},\beta_{3})}_{(\mu_{1},\mu_{2},\mu_{3})}\) as in Definition 2.13, we define_
\[H\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}:=H^{(\beta_{1},\beta_{2},\beta_{ 3})}_{(\mu_{B}(\sigma_{1}),\mu_{B}(\sigma_{2}),\mu_{B}(\sigma_{3}))}\quad \text{where }\mu_{B}(\sigma)=\sqrt{\frac{1}{\sin(\pi\frac{\gamma^{2}}{4})}}\cos \left(\pi\gamma(\sigma-\frac{Q}{2})\right). \tag{2.10}\]
_Let \(\mathcal{B}=(-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2})\times\mathbb{R}\) and \(\overline{\mathcal{B}}=[-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2}]\times\mathbb{R}\). Then for each \((\beta_{1},\beta_{2},\beta_{3})\in V\), the function \((\sigma_{1},\sigma_{2},\sigma_{3})\mapsto H\) is holomorphic on \(\mathcal{B}^{3}\) and continuous on \(\overline{\mathcal{B}}^{3}\). Moreover, for each \((\sigma_{1},\sigma_{2},\sigma_{3})\in\mathcal{B}^{3}\), the function \((\beta_{1},\beta_{2},\beta_{3})\mapsto s(s+\frac{\gamma}{2})H\) is analytic on a complex neighborhood in \(\mathbb{C}^{3}\) of \(V\)._
Proof.: The function \(\sigma\mapsto\mu_{B}(\sigma)\) is holomorphic on \(\mathcal{B}\) with values in \(\{\mu\in\mathbb{C}:\Re\mu>0\}\), and it extends continuously to \(\overline{\mathcal{B}}\) with values in \(\{\mu\in\mathbb{C}:\Re\mu\geq 0\}\). Therefore Proposition 2.12 yields Proposition 2.14.
### Probabilistic definition of \(R\) via truncations
We first recall the definition of the two-pointed quantum disk \(\mathcal{M}_{2}^{\text{disk}}(\beta)\) used in the definition of \(R\). Following [1, Section 2], we describe it using the horizontal strip \(\mathcal{S}=\mathbb{R}\times(0,\pi)\). Let \(h_{\mathcal{S}}(z)=h_{\mathbb{H}}(e^{z})\), where \(h_{\mathbb{H}}\) is sampled from \(P_{\mathbb{H}}\). We call \(h_{\mathcal{S}}\) a free-boundary GFF on \(\mathcal{S}\). The field \(h_{\mathcal{S}}\) can be written as \(h_{\mathcal{S}}=h^{\text{c}}+h^{\ell}\), where \(h^{\text{c}}\) is constant on each vertical line, and \(h^{\ell}\) has mean zero on all such lines [1, Section 4.1.6]. We call \(h^{\ell}\) the _lateral component_ of the free-boundary GFF on \(\mathcal{S}\).
**Definition 2.15**.: _Fix \(\beta<Q\). Let_
\[Y_{t}=\left\{\begin{array}{ll}B_{2t}-(Q-\beta)t&\text{if }t\geq 0\\ \widetilde{B}_{-2t}+(Q-\beta)t&\text{if }t<0\end{array}\right.,\]
_where \((B_{s})_{s\geq 0}\) is a standard Brownian motion conditioned on \(B_{2s}-(Q-\beta)s<0\) for all \(s>0\),3 and \((\widetilde{B}_{s})_{s\geq 0}\) is an independent copy of \((B_{s})_{s\geq 0}\). Let \(h^{1}_{\mathcal{S}}(z)=Y_{\Re z}\) for each \(z\in\mathcal{S}\). Let \(h^{2}_{\mathcal{S}}\) be independent of \(h^{1}_{\mathcal{S}}\) and have the law of the lateral component of the free-boundary GFF on \(\mathcal{S}\). Let \(\psi=h^{1}_{\mathcal{S}}+h^{2}_{\mathcal{S}}\). Let \(\mathbf{c}\) be a real number sampled from \(\frac{\gamma}{2}e^{(\beta-Q)c}dc\) independently of \(\psi\), and set \(\phi=\psi+\mathbf{c}\). Let \(\mathcal{M}_{2}^{\text{disk}}(\beta)\) be the infinite measure describing the law of \(\phi\)._
Footnote 3: Here we condition on a zero probability event. This can be made sense of via a limiting procedure.
For \(\phi\) in Definition 2.15, define \(\mathcal{A}_{\phi}=\lim_{\varepsilon\to 0}\varepsilon^{\frac{\gamma^{2}}{2}}e^{\gamma\phi_{\varepsilon}(z)}d^{2}z\) on \(\mathcal{S}\) and \(\mathcal{L}_{\phi}=\lim_{\varepsilon\to 0}\varepsilon^{\frac{\gamma^{2}}{4}}e^{\frac{\gamma}{2}\phi_{\varepsilon}(z)}dz\) on \(\partial\mathcal{S}\). Write \(A=\mathcal{A}_{\phi}(\mathcal{S})\) for the total \(\mathcal{A}_{\phi}\)-area, and write \(L_{1}\), \(L_{2}\) for the \(\mathcal{L}_{\phi}\)-lengths of the top and bottom boundary arcs of \(\mathcal{S}\), respectively. It turns out that \(\int e^{-A-\mu_{1}L_{1}-\mu_{2}L_{2}}\,\mathrm{d}\mathcal{M}_{2}^{\text{disk}}(\beta)=\infty\). For \(\beta\in(\frac{2}{\gamma},Q)\), the correct definition is via truncation as in (1.14), and we have the following counterpart of Lemma 2.11.
**Lemma 2.16**.: _Let \(\beta\in(\frac{2}{\gamma},Q)\). Then the integral defining \(R_{\mu_{1},\mu_{2}}(\beta)\) is finite. Set \(s=\beta-Q\) and_
\[\hat{R}_{\mu_{1},\mu_{2}}(\beta):=\frac{-2s}{\gamma}\int(\gamma(s+\frac{ \gamma}{2})A+\frac{\gamma^{2}}{2}A(\sum_{i}\mu_{i}L_{i})+\frac{\gamma^{2}}{4}( \sum_{i}\mu_{i}L_{i})^{2})e^{-A-\sum_{i}\mu_{i}L_{i}}\,d\mathcal{M}_{2}^{\text {disk}}(\beta), \tag{2.11}\]
_where we write \(A,L_{1},L_{2}\) for the quantum area and boundary arc lengths of a sample from \(\mathcal{M}_{2}^{\text{disk}}(\beta)\). Then \(s(s+\frac{\gamma}{2})R_{\mu_{1},\mu_{2}}(\beta)=\hat{R}_{\mu_{1},\mu_{2}}(\beta)\)._
Proof.: Recall the field \(\psi\) used in Definition 2.15 and write \(\widetilde{A}=\mathcal{A}_{\psi}(\mathcal{S})\) and \(\widetilde{L}_{1}=\mathcal{L}_{\psi}(\mathbb{R}\times\{\pi\})\), \(\widetilde{L}_{2}=\mathcal{L}_{\psi}(\mathbb{R})\). By definition we have
\[R_{\mu_{1},\mu_{2}}(\beta)=-\frac{2s}{\gamma}\mathbb{E}[\int\frac{\gamma}{2}e^{sc}\cdot(e^{-e^{\gamma c}\widetilde{A}-e^{\gamma c/2}\sum_{i=1}^{2}\mu_{i}\widetilde{L}_{i}}-1)\,dc].\]
We have \(s\in(-\frac{\gamma}{2},0)\). For \(a>0\) and \(\ell\in\mathbb{C}\) with \(\Re\ell>0\), applying integration by parts twice as in the proof of Lemma 2.11, the expression \(\int e^{sc}\cdot(e^{-e^{\gamma c}a-e^{\gamma c/2}\ell}-1)\,dc\) equals
\[\frac{\gamma}{s}\int e^{sc}(e^{\gamma c}a)e^{-e^{\gamma c}a-e^{ \gamma c/2}\ell}\,dc+\frac{\gamma}{2s}\frac{\gamma}{s+\frac{\gamma}{2}}\int e ^{sc}(e^{\gamma c}a)(e^{\gamma c/2}\ell)e^{-e^{\gamma c}a-e^{\gamma c/2}\ell} \,dc\] \[+\frac{\gamma}{2s}\frac{\frac{\gamma}{2}}{s+\frac{\gamma}{2}}\int e ^{sc}(e^{\gamma c/2}\ell)^{2}e^{-e^{\gamma c}a-e^{\gamma c/2}\ell}\,dc.\]
Applying this with \(a=\widetilde{A}\) and \(\ell=\sum_{i=1}^{2}\mu_{i}\widetilde{L}_{i}\) gives the desired result.
The following proposition extends the definition of \(R\) in the same way as for \(H\) in Proposition 2.12. We defer its proof to Appendix C as it is similar to that of Proposition 2.12.
**Proposition 2.17**.: _For \(\beta\in((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q)\) and \(\Re\mu_{1},\Re\mu_{2}\geq 0\), the integral in (2.11) defining \(\hat{R}\) is absolutely convergent. Moreover, for each \(\beta\in((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q)\), the function \((\mu_{1},\mu_{2})\mapsto R\) is holomorphic on \(\{\Re\mu_{1}>0,\Re\mu_{2}>0\}\) and continuous on \(\{\Re\mu_{1}\geq 0,\Re\mu_{2}\geq 0\}\). Finally, for \(\Re\mu_{1}>0,\Re\mu_{2}>0\), the function \(\beta\mapsto R\) can be analytically extended on a complex neighborhood of \(((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q)\)._
**Definition 2.18**.: _For \(\beta\in((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q)\) and \(\Re\mu_{1},\Re\mu_{2}\geq 0\), writing \(s=\beta-Q\), we define_
\[R_{\mu_{1},\mu_{2}}(\beta):=\frac{1}{s(s+\frac{\gamma}{2})}\hat{R}_{\mu_{1}, \mu_{2}}(\beta). \tag{2.12}\]
We have the following counterpart of Proposition 2.14 on the change of variable.
**Proposition 2.19**.: _Recall \(\mu_{B}(\sigma)\) and \(\mathcal{B}=(-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2}) \times\mathbb{R}\) in Proposition 2.14. Let \(R(\beta,\sigma_{1},\sigma_{2}):=R_{\mu_{B}(\sigma_{1}),\mu_{B}(\sigma_{2})}(\beta)\). For each \(\beta\in((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q)\), the function \((\sigma_{1},\sigma_{2})\mapsto R\) is holomorphic on \(\mathcal{B}^{2}\) and continuous on \(\overline{\mathcal{B}}^{2}\). For each \((\sigma_{1},\sigma_{2})\in\mathcal{B}^{2}\), the function \(\beta\mapsto s(s+\frac{\gamma}{2})R\) is analytic on a complex neighborhood of \(((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q)\)._
Proof.: This follows from Proposition 2.17 similarly as in the proof of Proposition 2.14.
**Remark 2.20**.: _The meromorphic extension of \(H\) can also be done by truncations in the same spirit of (1.14). Indeed, suppose \(\mu_{1},\mu_{2},\mu_{3}\in\mathbb{C}\) satisfy \(\Re\mu_{i}\geq 0\) for \(i=1,2,3\), and \((\beta_{1},\beta_{2},\beta_{3})\) satisfy \(0<Q-\frac{1}{2}\sum\beta_{i}<\frac{\gamma}{2}\wedge\min_{i}(Q-\beta_{i})\), then_
\[H^{(\beta_{1},\beta_{2},\beta_{3})}_{(\mu_{1},\mu_{2},\mu_{3})}=\int\left(e^{- \mathcal{A}_{\phi}(\mathbb{H})-\sum_{i=1}^{3}\mu_{i}L_{i}}-1\right)\;\mathrm{ LF}^{(\beta_{1},0),(\beta_{2},1),(\beta_{3},\infty)}_{\mathbb{H}}(\mathrm{d}\phi) \tag{2.13}\]
_If instead \((\beta_{1},\beta_{2},\beta_{3})\) satisfy \(\frac{\gamma}{2}<Q-\frac{1}{2}\sum\beta_{i}<\gamma\wedge\frac{2}{\gamma} \wedge\min_{i}(Q-\beta_{i})\), then_
\[H^{(\beta_{1},\beta_{2},\beta_{3})}_{(\mu_{1},\mu_{2},\mu_{3})}=\int\left(e^{- \mathcal{A}_{\phi}(\mathbb{H})-\sum_{i=1}^{3}\mu_{i}L_{i}}-1+\sum_{i=1}^{3} \mu_{i}L_{i}\right)\;\mathrm{LF}^{(\beta_{1},0),(\beta_{2},1),(\beta_{3}, \infty)}_{\mathbb{H}}(\mathrm{d}\phi). \tag{2.14}\]
_Both (2.13) and (2.14) follow from integration by parts. One can extend the range of \(H\) further via additional truncations of \(e^{-x}\). We leave these details to interested readers as we will not need them in this paper. Similarly, we record without a detailed proof that when \(\beta\in((Q-\gamma)\vee\frac{\gamma}{2},\frac{2}{\gamma})\),_
\[R_{\mu_{1},\mu_{2}}(\beta)=\frac{2(Q-\beta)}{\gamma}\int\left(e^{-A-\sum_{i=1} ^{2}\mu_{i}L_{i}}-1+\sum_{i=1}^{2}\mu_{i}L_{i}\right)\;d\mathcal{M}^{\mathrm{ disk}}_{2}(\beta).\]
### Analytic continuation of \(H\): proof of Proposition 2.12
We first prove Proposition 2.21, which will immediately imply Proposition 2.12. Our argument follows the same strategy as in [10] for the sphere case, with the additional difficulty that the correlation functions are no longer GMC moments, due to the presence of both bulk and boundary Liouville potentials.
**Proposition 2.21**.: _Recall the domain \(V\) from Proposition 2.12. Fix \(\mu_{i}\) with \(\Re\mu_{i}>0\) for \(i=1,2,3\). Then for each of_
\[(\beta_{1},\beta_{2},\beta_{3}) \mapsto\int Ae^{-A-\sum\mu_{i}L_{i}}\,\mathrm{LF}^{(\beta_{1},0)( \beta_{2},1),(\beta_{3},\infty)}_{\mathbb{H}}(d\phi),\] \[(\beta_{1},\beta_{2},\beta_{3}) \mapsto\int A(\sum\mu_{i}L_{i})e^{-A-\sum\mu_{i}L_{i}}\,\mathrm{ LF}^{(\beta_{1},0)(\beta_{2},1),(\beta_{3},\infty)}_{\mathbb{H}}(d\phi),\] \[(\beta_{1},\beta_{2},\beta_{3}) \mapsto\int(\sum\mu_{i}L_{i})^{2}e^{-A-\sum\mu_{i}L_{i}}\,\mathrm{ LF}^{(\beta_{1},0)(\beta_{2},1),(\beta_{3},\infty)}_{\mathbb{H}}(d\phi),\]
the integral converges when \((\beta_{1},\beta_{2},\beta_{3})\in V\), and the function extends analytically to a neighborhood of \(V\) in \(\mathbb{C}^{3}\). Here, we write \(A=\mathcal{A}_{\phi}(\mathbb{H})\), \(L_{1}=\mathcal{L}_{\phi}(-\infty,0)\), \(L_{2}=\mathcal{L}_{\phi}(0,1)\) and \(L_{3}=\mathcal{L}_{\phi}(1,\infty)\)._
We only prove the result for the second function in detail as the other two can be proved similarly with the same inputs. To simplify notation, we will assume that \(\mu_{1}=\mu_{2}=\mu_{3}=1\) but the same argument works for the general case. By the change of coordinates in [1, Lemma 2.20, Proposition 2.16], modulo an explicit prefactor the second function becomes
\[f(\beta_{1},\beta_{2},\beta_{3})=\int\mathcal{A}_{\phi}(\mathbb{H})\mathcal{L }_{\phi}(\mathbb{R})e^{-\mathcal{A}_{\phi}(\mathbb{H})-\mathcal{L}_{\phi}( \mathbb{R})}\,\mathrm{LF}_{\mathbb{H}}^{(\beta_{j},p_{j})_{j}}(d\phi)\quad \text{with }(p_{1},p_{2},p_{3})=(0,2,-2).\]
Therefore, it suffices to prove that \(f\) extends analytically to a neighborhood of \(V\) in \(\mathbb{C}^{3}\). To show this, we will approximate \(f\) using truncations of \(\mathbb{H}\) away from the insertions.
**Lemma 2.22**.: _For \(r\geq 1\), let \(\mathbb{H}_{r}:=\mathbb{H}\backslash\bigcup_{i}B_{e^{-r}}(p_{i})\) and \(\mathbb{R}_{r}:=\mathbb{R}\backslash\bigcup_{i}B_{e^{-r}}(p_{i})\). For a Gaussian free field \(h\) sampled from \(P_{\mathbb{H}}\), write \(A_{r}:=\mathcal{A}_{h-2Q\log|\cdot|_{+}}(\mathbb{H}_{r})\) and \(L_{r}:=\mathcal{L}_{h-2Q\log|\cdot|_{+}}(\mathbb{R}_{r})\). For \(x\in\mathbb{R}\), write \(h_{r}(x)\) for the average of \(h\) on \(\partial B_{e^{-r}}(x)\cap\mathbb{H}\). For \(r\geq 1\), let_
\[f_{r}(\beta_{1},\beta_{2},\beta_{3}):=\int_{\mathbb{R}}dc\,e^{(\frac{1}{2}\sum \beta_{i}-Q)c}\mathbb{E}\left[\prod_{i}e^{\frac{\beta_{i}}{2}h_{r}(p_{i})- \frac{\beta_{i}^{2}}{4}r}(e^{\gamma c}A_{r})(e^{\frac{\gamma}{2}c}L_{r})\exp( -e^{\gamma c}A_{r}-e^{\frac{\gamma}{2}c}L_{r})\right]\]
_Then each point in \(V\) has a neighborhood in \(\mathbb{C}^{3}\) on which \(f_{r}\) converges uniformly as \(r\to\infty\). Moreover, \(\lim_{r\to\infty}f_{r}(\beta_{1},\beta_{2},\beta_{3})=f(\beta_{1},\beta_{2}, \beta_{3})\) for all \((\beta_{1},\beta_{2},\beta_{3})\in V\)._
Proposition 2.21 follows immediately from Lemma 2.22.
Proof of Proposition 2.21.: Since \(f_{r}\) is analytic, by Lemma 2.22, there is a neighborhood of \(V\) on which \(\lim_{r\to\infty}f_{r}\) is holomorphic. Moreover, this limit agrees with \(f\) on \(V\), hence is the desired analytic continuation of \(f\).
Proof of Lemma 2.22.: If we condition on \(h|_{\mathbb{H}_{r}}\), then \((h_{r+1}(p_{i})-h_{r}(p_{i}))_{i=1,2,3}\) is a triple of conditionally independent Gaussians. Therefore
\[f_{r}(\beta_{1},\beta_{2},\beta_{3})=\int_{\mathbb{R}}dc\,e^{(\frac{1}{2}\sum \beta_{i}-Q)c}\mathbb{E}\left[\prod_{i=1}^{3}e^{\frac{\beta_{i}}{2}h_{r+1}(p _{i})-\frac{\beta_{i}^{2}}{4}(r+1)}(e^{\gamma c}A_{r})(e^{\frac{\gamma}{2}c}L _{r})\exp(-e^{\gamma c}A_{r}-e^{\frac{\gamma}{2}c}L_{r})\right].\]
Write \(g(a,\ell):=ae^{-a}\ell e^{-\ell}\). We have \(f_{r+1}(\beta_{1},\beta_{2},\beta_{3})-f_{r}(\beta_{1},\beta_{2},\beta_{3})\) equals
\[\int_{\mathbb{R}}dc\,e^{(\frac{1}{2}\sum\beta_{i}-Q)c}\mathbb{E}\left[\prod_{i }e^{\frac{\beta_{i}}{2}h_{r+1}(p_{i})-\frac{\beta_{i}^{2}}{4}(r+1)}(g(e^{ \gamma c}A_{r+1},e^{\frac{\gamma}{2}c}L_{r+1})-g(e^{\gamma c}A_{r},e^{\frac{ \gamma}{2}c}L_{r}))\right].\]
Fix a point in \(V\) and let \(O\) be a neighborhood of the point. For \((\beta_{1},\beta_{2},\beta_{3})\in O\) write \(\beta_{j}=u_{j}+iv_{j}\) where \(u_{j},v_{j}\in\mathbb{R}\). Set \(\tilde{s}:=\frac{1}{2}\sum_{j}u_{j}-Q\). By the definition of \(V\), we can choose \(O\) small enough such that \(\tilde{s}>-\gamma\). Write \(\widetilde{A}_{r}:=\mathcal{A}_{\tilde{h}}(\mathbb{H}_{r})\) and \(\widetilde{L}_{r}:=\mathcal{L}_{\widetilde{h}}(\mathbb{R}_{r})\) where \(\widetilde{h}=h+\sum\frac{u_{i}}{2}G_{\mathbb{H}}(\cdot,p_{i})-2Q\log|\cdot|_{+}\). By Girsanov's theorem,
\[|f_{r+1}(\beta_{1},\beta_{2},\beta_{3})-f_{r}(\beta_{1},\beta_{2}, \beta_{3})|\] \[\leq\int_{\mathbb{R}}dc\,e^{\tilde{s}c}\mathbb{E}\left[\left|\prod _{i}e^{\frac{\beta_{i}}{2}h_{r+1}(p_{i})-\frac{\beta_{i}^{2}}{4}(r+1)}(g(e^{ \gamma c}A_{r+1},e^{\frac{\gamma}{2}c}L_{r+1})-g(e^{\gamma c}A_{r},e^{\frac{ \gamma}{2}c}L_{r}))\right|\right]\] \[\leq Ce^{\frac{r+1}{4}\sum v_{i}^{2}}\int_{\mathbb{R}}dc\,e^{ \tilde{s}c}\mathbb{E}\left[|g(e^{\gamma c}\widetilde{A}_{r+1},e^{\frac{\gamma} {2}c}\widetilde{L}_{r+1})-g(e^{\gamma c}\widetilde{A}_{r},e^{\frac{\gamma}{2}c }\widetilde{L}_{r})|\right].\]
Now, using the triangle inequality, and the fact that \(\ell\mapsto\ell e^{-\ell}\) is bounded and Lipschitz, we have for any \(a_{r+1}>a_{r}>0\) and \(\ell_{r+1}>\ell_{r}>0\) that
\[|g(a_{r+1},\ell_{r+1})-g(a_{r},\ell_{r})|\lesssim(\ell_{r+1}-\ell_{r})a_{r+1}e^{ -a_{r+1}}+(a_{r+1}-a_{r})e^{-a_{r+1}}+a_{r}(e^{-a_{r}}-e^{-a_{r+1}}).\]
This gives three terms to bound. Now using the identity \(\int e^{tc}e^{-e^{\gamma c}a}\,\mathrm{d}c=\frac{1}{\gamma}\Gamma(\frac{t}{\gamma})a^{-\frac{t}{\gamma}}\) for \(t,a>0\), we have
\[\int_{\mathbb{R}}dc\,e^{(\tilde{s}+\gamma+\frac{\gamma}{2})c}\mathbb{E}[(\widetilde{L}_{r+1}-\widetilde{L}_{r})\widetilde{A}_{r+1}e^{-e^{\gamma c}\widetilde{A}_{r+1}}] =\frac{1}{\gamma}\Gamma(\frac{\tilde{s}}{\gamma}+\frac{3}{2})\mathbb{E}[(\widetilde{L}_{r+1}-\widetilde{L}_{r})\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-\frac{1}{2}}],\] \[\int_{\mathbb{R}}dc\,e^{(\tilde{s}+\gamma)c}\mathbb{E}[(\widetilde{A}_{r+1}-\widetilde{A}_{r})e^{-e^{\gamma c}\widetilde{A}_{r+1}}] =\frac{1}{\gamma}\Gamma(\frac{\tilde{s}}{\gamma}+1)\mathbb{E}[(\widetilde{A}_{r+1}-\widetilde{A}_{r})\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-1}],\] \[\int_{\mathbb{R}}dc\,e^{(\tilde{s}+\gamma)c}\mathbb{E}[\widetilde{A}_{r}(e^{-e^{\gamma c}\widetilde{A}_{r}}-e^{-e^{\gamma c}\widetilde{A}_{r+1}})] =\frac{1}{\gamma}\Gamma(\frac{\tilde{s}}{\gamma}+1)\mathbb{E}[\widetilde{A}_{r}(\widetilde{A}_{r}^{-\frac{\tilde{s}}{\gamma}-1}-\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-1})].\]
Now Lemma 2.22 follows from Lemma 2.23, which we state right below.
**Lemma 2.23**.: _For each point in \(V\) there is a neighborhood \(U\subset\mathbb{R}^{3}\) and a constant \(C>0\) such that the following holds. For \((u_{1},u_{2},u_{3})\in U\) and \(r\geq 1\), with \(\tilde{s}\), \(\widetilde{A}_{r}\) and \(\widetilde{L}_{r}\) defined above,_
\[\mathbb{E}[(\widetilde{A}_{r+1}-\widetilde{A}_{r})\widetilde{A}_{r+1}^{-\frac{ \tilde{s}}{\gamma}-1}],\quad\mathbb{E}[\widetilde{A}_{r}(\widetilde{A}_{r}^{- \frac{\tilde{s}}{\gamma}-1}-\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-1} )],\quad\mathbb{E}[(\widetilde{L}_{r+1}-\widetilde{L}_{r})\widetilde{A}_{r+1} ^{-\frac{\tilde{s}}{\gamma}-\frac{1}{2}}]\]
_are all bounded by \(Ce^{-r/C}\). Moreover, write \(\widetilde{A}_{\infty}:=\mathcal{A}_{\widetilde{h}}(\mathbb{H})\) and \(\widetilde{L}_{\infty}:=\mathcal{L}_{\widetilde{h}}(\mathbb{R})\). Then the same holds with \(\widetilde{A}_{\infty},\widetilde{L}_{\infty}\) in place of \(\widetilde{A}_{r+1},\widetilde{L}_{r+1}\)._
By Lemma 2.23, we can choose \(O\) small enough and \(C\) large enough so that \(|f_{r+1}(\beta_{1},\beta_{2},\beta_{3})-f_{r}(\beta_{1},\beta_{2},\beta_{3})|\lesssim e^{(\frac{1}{4}\sum v_{i}^{2}-C^{-1})r}\) uniformly in \(O\). By possibly shrinking \(O\) further so that \(\sum v_{i}^{2}-1/C<-1/(2C)\) for all \((\beta_{1},\beta_{2},\beta_{3})\in O\), we see that \(f_{r}\) converges uniformly in \(O\) as desired.
It remains to prove \(\lim_{r\to\infty}f_{r}(\beta_{1},\beta_{2},\beta_{3})=f(\beta_{1},\beta_{2}, \beta_{3})\) for \((\beta_{1},\beta_{2},\beta_{3})\in V\). In this case since \(\beta_{1},\beta_{2},\beta_{3}\) are real we have
\[|f(\beta_{1},\beta_{2},\beta_{3})-f_{r}(\beta_{1},\beta_{2},\beta_{3})|\leq C\int_{\mathbb{R}}dc\,e^{(\frac{1}{2}\sum\beta_{i}-Q)c}\mathbb{E}[|g(e^{\gamma c}\widetilde{A}_{\infty},e^{\frac{\gamma}{2}c}\widetilde{L}_{\infty})-g(e^{\gamma c}\widetilde{A}_{r},e^{\frac{\gamma}{2}c}\widetilde{L}_{r})|].\]
Now by the same argument as before with the \(\widetilde{A}_{\infty},\widetilde{L}_{\infty}\) case of Lemma 2.23 being applied, we get \(\lim_{r\to\infty}f_{r}(\beta_{1},\beta_{2},\beta_{3})=f(\beta_{1},\beta_{2}, \beta_{3})\).
Proof of Lemma 2.23.: We will only explain the \(\widetilde{A}_{r+1},\widetilde{L}_{r+1}\) case in detail since the \(\widetilde{A}_{\infty},\widetilde{L}_{\infty}\) case follows from the same argument. Moreover, we will focus on a fixed \((u_{1},u_{2},u_{3})\) in a neighborhood \(U\) of \((\tilde{\beta}_{1},\tilde{\beta}_{2},\tilde{\beta}_{3})\in V\), but all inputs in the argument vary continuously in \((u_{1},u_{2},u_{3})\), which gives the desired uniform bounds in \(U\). We now prove the three bounds one by one.
**First bound.** For \(\varepsilon>0\), we have \(\mathbb{E}[\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-1}(\widetilde{A}_{r+1}-\widetilde{A}_{r})]\leq\varepsilon\mathbb{E}[\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-1}]+\mathbb{E}[1_{\widetilde{A}_{r+1}-\widetilde{A}_{r}>\varepsilon}\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}}].\) By Hölder's inequality, \(\mathbb{E}[1_{\widetilde{A}_{r+1}-\widetilde{A}_{r}>\varepsilon}\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}}]\leq\mathbb{E}[\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}p}]^{1/p}\mathbb{P}[\widetilde{A}_{r+1}-\widetilde{A}_{r}>\varepsilon]^{1/q}\) for \(p,q>1\) satisfying \(\frac{1}{p}+\frac{1}{q}=1\). By the definition of \(V\), we can choose \(U\) and \(p\) small enough such that both \(-\frac{\tilde{s}}{\gamma}-1\) and \(-\frac{\tilde{s}}{\gamma}p\) lie in \((-\infty,\frac{2}{\gamma^{2}}\wedge\frac{1}{\gamma}\min(Q-\tilde{\beta}_{i}))\). By [11, Corollary 6.11], for \(\lambda<\frac{2}{\gamma^{2}}\wedge\frac{1}{\gamma}\min(Q-\tilde{\beta}_{i})\) we have \(\mathbb{E}[\widetilde{A}_{r+1}^{\lambda}]<\mathbb{E}[\widetilde{A}_{1}^{\lambda}]+\mathbb{E}[\widetilde{A}_{\infty}^{\lambda}]<\infty\). Therefore \(\mathbb{E}[\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-1}(\widetilde{A}_{r+1}-\widetilde{A}_{r})]\lesssim\varepsilon+\mathbb{P}[\widetilde{A}_{r+1}-\widetilde{A}_{r}>\varepsilon]^{1/q}\), where the implicit constant only depends on \((\tilde{\beta}_{1},\tilde{\beta}_{2},\tilde{\beta}_{3})\) and \(\gamma\). Now, for \(m>0\) small enough so that \(Q-\max_{i}\tilde{\beta}_{i}-m\gamma>0\), by the multifractal spectrum of LQG (e.g. [3, Theorem 3.23] with minor adaptation for the boundary setting) we have
\[\mathbb{P}[\widetilde{A}_{r+1}-\widetilde{A}_{r}>\varepsilon]\leq\varepsilon^{- m}\mathbb{E}[(\widetilde{A}_{r+1}-\widetilde{A}_{r})^{m}]\lesssim\varepsilon^{-m}e^{-m \gamma(Q-\max_{i}\tilde{\beta}_{i}-m\gamma)r} \tag{2.15}\]
where the implicit constant depends on \(m\), and the term \(e^{m\frac{\gamma}{2}r\max_{i}\tilde{\beta}_{i}}\) comes from the log singularities added to the field. Choosing \(\varepsilon=e^{-r/C}\) for large enough \(C\) yields \(\mathbb{E}[\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-1}(\widetilde{A}_{r+1}-\widetilde{A}_{r})]\leq Ce^{-r/C}\).
**Second bound.** Choose \(U\) small enough so that \(-\frac{\tilde{s}}{\gamma}-1<0\). For \(a_{r+1}>a_{r}>0\) we have \(a_{r}(a_{r}^{-\frac{\tilde{s}}{\gamma}-1}-a_{r+1}^{-\frac{\tilde{s}}{\gamma}-1})\lesssim(a_{r+1}-a_{r})a_{r}^{-\frac{\tilde{s}}{\gamma}-1}\). As in the first bound, for \(\varepsilon>0\) we have
\[\mathbb{E}[\widetilde{A}_{r}(\widetilde{A}_{r}^{-\frac{\tilde{s}}{\gamma}-1}-\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-1})]\lesssim\varepsilon\mathbb{E}[\widetilde{A}_{r}^{-\frac{\tilde{s}}{\gamma}-1}]+\mathbb{E}[1_{\widetilde{A}_{r+1}-\widetilde{A}_{r}>\varepsilon}\widetilde{A}_{r}^{-\frac{\tilde{s}}{\gamma}}]\lesssim\varepsilon+\mathbb{E}[\widetilde{A}_{r}^{-\frac{\tilde{s}}{\gamma}p}]^{1/p}\mathbb{P}[\widetilde{A}_{r+1}-\widetilde{A}_{r}>\varepsilon]^{1/q}.\]
Now the same argument gives \(\mathbb{E}[\widetilde{A}_{r}(\widetilde{A}_{r}^{-\frac{\tilde{s}}{\gamma}-1}-\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-1})]\leq Ce^{-r/C}\) for large enough \(C\).
**Third bound.** Arguing as in the first case, we have
\[\mathbb{E}[(\widetilde{L}_{r+1}-\widetilde{L}_{r})\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-\frac{1}{2}}]\lesssim\varepsilon+\mathbb{E}[(\widetilde{L}_{r+1}\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-\frac{1}{2}})^{p}]^{1/p}\mathbb{P}[\widetilde{L}_{r+1}-\widetilde{L}_{r}>\varepsilon]^{1/q}.\]
We now show \(\mathbb{E}[(\widetilde{L}_{r+1}\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-\frac{1}{2}})^{p}]\) is uniformly bounded in \(r\), from which the same argument gives \(\mathbb{E}[(\widetilde{L}_{r+1}-\widetilde{L}_{r})\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-\frac{1}{2}}]\leq Ce^{-r/C}\) for large enough \(C\).
Since \(\frac{1}{2}+(-\frac{\tilde{s}}{\gamma}-\frac{1}{2})=-\frac{\tilde{s}}{\gamma}<\frac{2}{\gamma^{2}}\wedge\min_{i}\frac{1}{\gamma}(Q-\beta_{i})\) we can choose \(\lambda_{1},\lambda_{2}\) satisfying \(\frac{1}{\lambda_{1}}+\frac{1}{\lambda_{2}}=1\) such that \(\frac{1}{2}\lambda_{1}\) and \((-\frac{\tilde{s}}{\gamma}-\frac{1}{2})\lambda_{2}\) are each bounded above by \(\frac{2}{\gamma^{2}}\wedge\frac{1}{\gamma}\min_{i}(Q-\beta_{i})\). Hölder's inequality gives \(\mathbb{E}[(\widetilde{L}_{r+1}\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-\frac{1}{2}})^{p}]\leq\mathbb{E}[\widetilde{L}_{r+1}^{\lambda_{1}p}]^{1/\lambda_{1}}\mathbb{E}[(\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-\frac{1}{2}})^{p\lambda_{2}}]^{1/\lambda_{2}}\). Choose \(p>1\) sufficiently small and recall our moment bound for \(\widetilde{A}_{r+1}\) in the first case. By [11, Corollary 3.10] the analogous bound \(\mathbb{E}[\widetilde{L}_{r+1}^{\lambda}]<\mathbb{E}[\widetilde{L}_{1}^{\lambda}]+\mathbb{E}[\widetilde{L}_{\infty}^{\lambda}]<\infty\) holds for \(\lambda<\frac{4}{\gamma^{2}}\wedge\frac{2}{\gamma}\min(Q-\beta_{i})\). Therefore \(\mathbb{E}[(\widetilde{L}_{r+1}\widetilde{A}_{r+1}^{-\frac{\tilde{s}}{\gamma}-\frac{1}{2}})^{p}]\) is uniformly bounded as needed.
Proof of Proposition 2.12.: The analyticity in \(\mu\) follows from Morera's theorem. The meromorphicity in \(\beta\) follows from Proposition 2.21, where we proved the desired properties for each term in \(\hat{H}\).
## 3. Shift equations for \(H\) and \(R\)
Our goal is to derive a set of shift equations (Theorems 3.1 and 3.2) that will completely specify \(H\) and \(R\). We will always assume \(\gamma\neq\sqrt{2}\); this is a technical limitation inherited from Theorem 3.4, which is taken from [1]. In the next section we will use these shift equations to first derive the exact formulas for \(H\) and \(R\) when \(\gamma^{2}\) is irrational, and then use continuity in \(\gamma\) to remove this restriction. In particular, the assumption \(\gamma\neq\sqrt{2}\) can be removed from Theorems 3.1 and 3.2 once these exact formulas are proved. For concision we will not explicitly state the \(\gamma\neq\sqrt{2}\) assumption in the lemmas and propositions of this section.
To state the shift equations we introduce the following two notations:
\[q=\frac{2Q-\beta_{1}-\beta_{2}-\beta_{3}+\chi}{\gamma}\quad\text{and}\quad g_{ \chi}(\sigma)=\left(\sin(\frac{\pi\gamma^{2}}{4})\right)^{-\chi/\gamma}\cos \left(2\pi\chi(\sigma-\frac{Q}{2})\right). \tag{3.1}\]
Recalling (1.9), note that for \(\chi=\frac{\gamma}{2}\) one has \(g_{\frac{\gamma}{2}}(\sigma)=\mu_{B}(\sigma)\). We will again refer to a function of several complex variables as being meromorphic on a domain if it can be expressed as the ratio of two holomorphic functions of several variables on the domain.
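Here is a quick numerical consistency check (purely illustrative, with arbitrary sample values) that the notation (3.1) matches (2.10), i.e. that \(g_{\frac{\gamma}{2}}=\mu_{B}\):

```python
import cmath

gamma = 0.8                                   # arbitrary value in (0, 2)
Q = gamma / 2 + 2 / gamma

def mu_B(sigma):                              # boundary cosmological constant, as in (2.10)
    return cmath.sqrt(1 / cmath.sin(cmath.pi * gamma ** 2 / 4)) \
        * cmath.cos(cmath.pi * gamma * (sigma - Q / 2))

def g(chi, sigma):                            # the function g_chi from (3.1)
    return cmath.sin(cmath.pi * gamma ** 2 / 4) ** (-chi / gamma) \
        * cmath.cos(2 * cmath.pi * chi * (sigma - Q / 2))

for sigma in [0.3 + 0.2j, 1.1 - 0.7j, Q / 2 + 0.05]:
    print(abs(mu_B(sigma) - g(gamma / 2, sigma)))   # ~0 in each case
```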
**Theorem 3.1** (Shift equations for \(H\)).: _Assume \(\gamma\neq\sqrt{2}\), let \(\chi=\frac{\gamma}{2}\) or \(\frac{2}{\gamma}\) and fix \(\sigma_{3}\in[-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2}- \frac{\gamma}{4}]\times\mathbb{R}\). The function \(H\) can be jointly meromorphically extended to a complex neighborhood of \(\mathbb{R}^{3}\) in \((\beta_{1},\beta_{2},\beta_{3})\) and to \(\mathbb{C}^{2}\) in \((\sigma_{1},\sigma_{2})\). It obeys_
\[H\begin{pmatrix}\beta_{1},\beta_{2}-\chi,\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=\frac{\Gamma(\chi(\beta_{1}-\chi))\Gamma(1-\chi\beta_{2}+\chi^{2})}{\Gamma(\chi(\beta_{1}-\chi+q\frac{\gamma}{2}))\Gamma(1-\chi\beta_{2}+\chi^{2}-q\frac{\gamma\chi}{2})}H\begin{pmatrix}\beta_{1}-\chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}\] \[+\frac{\chi^{2}\pi^{\frac{2\chi}{\gamma}}}{\Gamma(1-\frac{\gamma^{2}}{4})^{\frac{2\chi}{\gamma}}}\frac{\Gamma(1-\chi\beta_{1})\Gamma(1-\chi\beta_{2}+\chi^{2})\left(g_{\chi}(\sigma_{1})-g_{\chi}(\sigma_{2}+\frac{\beta_{1}}{2})\right)}{\sin(\pi\chi(\chi-\beta_{1}))\Gamma(1+\frac{q\gamma\chi}{2})\Gamma(2-\chi(\beta_{1}+\beta_{2}-2\chi+q\frac{\gamma}{2}))}H\begin{pmatrix}\beta_{1}+\chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}, \tag{3.2}\]
_and_
\[\frac{\chi^{2}\pi^{\frac{2\chi}{\gamma}-1}}{\Gamma(1-\frac{\gamma ^{2}}{4})^{\frac{2\chi}{\gamma}}}\Gamma(1-\chi\beta_{2})\left(g_{\chi}(\sigma_ {3})-g_{\chi}(\sigma_{2}+\frac{\beta_{2}}{2})\right)H\begin{pmatrix}\beta_{1}, \beta_{2}+\chi,\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}\] \[=\frac{\Gamma(\chi(\beta_{1}-\chi))}{\Gamma(-q\frac{\gamma\chi}{ 2})\Gamma(-1+\chi(\beta_{1}+\beta_{2}-2\chi+q\frac{\gamma}{2}))}H\begin{pmatrix} \beta_{1}-\chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}\] \[+\frac{\chi^{2}\pi^{\frac{2\chi}{\gamma}}}{\Gamma(1-\frac{\gamma ^{2}}{4})^{\frac{2\chi}{\gamma}}}\frac{\left(g_{\chi}(\sigma_{1})-g_{\chi}( \sigma_{2}-\frac{\beta_{1}}{2}+\frac{\chi}{2})\right)\Gamma(1-\chi\beta_{1})} {\sin(\pi\chi(\chi-\beta_{1}))\Gamma(1-\chi(\beta_{1}-\chi+q\frac{\gamma}{2} ))\Gamma(\chi\beta_{2}-\chi^{2}+q\frac{\gamma\chi}{2})}H\begin{pmatrix}\beta_ {1}+\chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}. \tag{3.3}\]
**Theorem 3.2** (Shift equations for \(R\)).: _Assume \(\gamma\neq\sqrt{2}\) and let \(\chi=\frac{\gamma}{2}\) or \(\frac{2}{\gamma}\). The function \(R(\beta,\sigma_{1},\sigma_{2})\) can be jointly meromorphically extended to a complex neighborhood of \(\mathbb{R}\) in \(\beta\) and to \(\mathbb{C}^{2}\) in \((\sigma_{1},\sigma_{2})\). It obeys_
\[\frac{R(\beta,\sigma_{1},\sigma_{2})}{R(\beta+\chi,\sigma_{1}- \frac{\chi}{2},\sigma_{2})} =c_{\chi}(\gamma)\Gamma(-1+\chi\beta-\chi^{2})\Gamma(1-\chi\beta) \left(g_{\chi}(\sigma_{2})-g_{\chi}(\sigma_{1}-\frac{\beta}{2})\right), \tag{3.5}\] \[\frac{R(\beta,\sigma_{1},\sigma_{2})}{R(\beta+\chi,\sigma_{1}+ \frac{\chi}{2},\sigma_{2})} =c_{\chi}(\gamma)\Gamma(-1+\chi\beta-\chi^{2})\Gamma(1-\chi\beta) \left(g_{\chi}(\sigma_{2})-g_{\chi}(\sigma_{1}+\frac{\beta}{2})\right), \tag{3.4}\]
_where \(c_{\frac{\gamma}{2}}(\gamma)=-\frac{1}{\Gamma(-\frac{\gamma^{2}}{4})}\) and \(c_{\frac{2}{\gamma}}(\gamma)=\frac{4}{\gamma^{2}}\pi^{\frac{4}{\gamma^{2}}-1} \Gamma(1-\frac{\gamma^{2}}{4})^{-\frac{4}{\gamma^{2}}}\)._
The proofs of Theorems 3.1 and 3.2 are based on the BPZ equations and the operator product expansions (OPE), a strategy first used in [14] and carried out for boundary LCFT in the absence of the bulk Liouville potential in [13]. The main idea is to consider two special cases of four-point correlation functions as deformations of the \(H\) function. They satisfy the BPZ equations proved in [1], which reduce to hypergeometric equations after a change of variable. The OPE is the asymptotic expansion by which we relate the coefficients of the solution space of the hypergeometric equations to \(H\). The shift equations on \(H\) are then simply the connection formulas on the solution space. The shift equations on \(R\) are obtained by taking a suitable limit of the equations on \(H\).
Shift equations of the same form as in Theorems 3.1 and 3.2 were derived in [13], except that the function \(g_{\chi}\) is replaced there by \(e^{2\chi\pi\mathrm{i}(\sigma-\frac{Q}{2})}\); nevertheless, there are two additional difficulties that we must overcome. The first is that in [13] \(H\) and \(R\) (denoted by \(\overline{H}\) and \(\overline{R}\) in [13]) can be reduced to moments of GMC up to a prefactor. This is not possible in our case. We are forced to work with the more involved Definitions 2.13 and 2.18, which introduce extra constraints on the parameters. The second is that [13] used the value of \(R(\beta,\sigma_{1},\sigma_{2})\) at \(\mu_{1}=0\) as an input, which was known from [13]. For us this is not available. Instead we use mating-of-trees to get the needed explicit formula for \(R(\gamma,\sigma_{1},\sigma)\); see Lemma 3.17.
The rest of this section is organized as follows. In Section 3.1 we recall the BPZ equation from [1]. In Sections 3.2 and 3.3 we establish respectively Theorems 3.1 and 3.2 in the case \(\chi=\frac{\gamma}{2}\) and in a limited range of parameters. In Section 3.4 we prove the reflection principle for \(H\) and \(R\) and show they can be meromorphically extended to the full range of parameters claimed in Theorems 3.1 and 3.2. Then finally in Sections 3.5 and 3.6 we establish Theorems 3.2 and 3.1 in the case \(\chi=\frac{2}{\gamma}\). We provide in Appendix A the necessary background on hypergeometric equations and several facts on the Ponsot-Teschner formula \(H_{\mathrm{PT}}\). We defer the proofs of the OPE lemmas to Appendix D.
### The BPZ equation for a deformation of \(H\)
Before stating the BPZ differential equation, let us give a generic definition of the boundary four-point function, presented after having applied the Girsanov theorem to all the insertions. This definition was first proposed in [11], modulo the fact that [11] does not consider a varying boundary cosmological constant.
**Definition 3.3**.: _(Boundary four-point function) Consider four points \(s_{1}<s_{2}<s_{3}<s_{4}\) in \(\mathbb{R}\) with associated weights \(\beta_{1},\beta_{2},\beta_{3},\beta_{4}\in\mathbb{R}\) satisfying the conditions \(\beta_{i}<Q\) and \(\sum_{i}\beta_{i}>2Q\). Consider also \(\mu_{1},\mu_{2},\mu_{3},\mu_{4}\) satisfying the condition \(\Re(\mu_{i})\geq 0\). Then define_
\[\langle\prod_{j=1}^{4}B_{\beta_{j}}^{\sigma_{j},\sigma_{j+1}}(s_{ j})\rangle=\prod_{j<j^{\prime}}|s_{j}-s_{j^{\prime}}|^{-\frac{\beta_{j}\beta_{ j^{\prime}}}{2}}\int_{\mathbb{R}}dc\,e^{-\frac{\gamma\widetilde{q}\widetilde{c}}{2}} \mathbb{E}\Bigg{[}\exp\Bigg{(}-e^{\gamma c}\int_{\mathbb{H}}\frac{\hat{g}(x)^ {\frac{\gamma^{2}}{4}(\widetilde{q}+1)}}{\prod_{j=1}^{4}|x-s_{j}|^{\gamma \beta_{j}}}e^{\gamma h(x)}d^{2}x\] \[-e^{\frac{\gamma c}{2}}\int_{\mathbb{R}}\frac{\hat{g}(r)^{\frac{ \gamma^{2}\widetilde{q}}{8}}}{\prod_{j=1}^{4}|r-s_{j}|^{\frac{\gamma\beta_{j} }{2}}}e^{\frac{\gamma}{2}h(r)}d\mu_{\partial}(r)\Bigg{)}\Bigg{]},\]
_where \(\widetilde{q}=\frac{2Q-\beta_{1}-\beta_{2}-\beta_{3}-\beta_{4}}{\gamma}\) and where the boundary measure is given by:_
\[d\mu_{\partial}(r)/dr=\sum_{j=1}^{3}\mu_{j+1}\mathbf{1}_{s_{j}<r<s_{j+1}}+\mu_ {1}\mathbf{1}_{r\notin(s_{1},s_{4})}. \tag{3.6}\]
Now let us fix the range of parameters for which the four-point function with the degenerate insertion will be well-defined. Recall \(\mathcal{B}=(-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2})\times\mathbb{R}\) and \(\overline{\mathcal{B}}=[-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2}]\times\mathbb{R}\). For \(\chi=\frac{\gamma}{2}\) or \(\frac{2}{\gamma}\), consider the following ranges of parameters
\[\beta_{i}\in(-\infty,Q),\ \beta_{1}+\beta_{2}+\beta_{3}>2Q+\chi,\quad\sigma_{1}- \frac{\chi}{2}\in\overline{\mathcal{B}},\ \sigma_{2}-\frac{\chi}{2}\in\overline{\mathcal{B}},\ \sigma_{1}\in\overline{ \mathcal{B}},\ \sigma_{2}\in\overline{\mathcal{B}},\ \sigma_{3}\in\overline{ \mathcal{B}}, \tag{3.7}\]
and:
\[\beta_{i}\in(-\infty,Q),\ \beta_{1}+\beta_{2}+\beta_{3}>2Q+\chi,\quad\sigma_{1}+ \frac{\chi}{2}\in\overline{\mathcal{B}},\ \sigma_{2}+\frac{\chi}{2}\in\overline{ \mathcal{B}},\ \sigma_{1}\in\overline{\mathcal{B}},\ \sigma_{2}\in\overline{ \mathcal{B}},\ \sigma_{3}\in\overline{\mathcal{B}}. \tag{3.8}\]
Notice that these constraints can all be simultaneously satisfied. Indeed, for (3.7), if all three \(\beta_{i}\) are close to \(Q\) then \(\beta_{1}+\beta_{2}+\beta_{3}>2Q+\chi\) holds. For the \(\sigma_{i}\), the closed band \(\overline{\mathcal{B}}\) has width \(\frac{1}{\gamma}\). It is therefore possible to have both \(\sigma_{1}-\frac{\chi}{2}\) and \(\sigma_{1}\) in \(\overline{\mathcal{B}}\), although in the case \(\chi=\frac{2}{\gamma}\) this forces \(\sigma_{1}-\frac{\chi}{2}\) and \(\sigma_{1}\) to lie exactly on the boundary of \(\overline{\mathcal{B}}\). The same is true for \(\sigma_{2}\). The parameter range (3.8) works in the same way. Under these constraints we have the following BPZ equation obtained in [1].
**Theorem 3.4** ([1]).: _Suppose \(\gamma\neq\sqrt{2}\). Consider parameters \((\beta_{i},\sigma_{i})_{i=1,2,3}\) obeying (3.7) or (3.8). Consider distinct points \(s,s_{1},s_{2},s_{3}\in\mathbb{R}\) with associated weights \(-\chi,\beta_{1},\beta_{2},\beta_{3}\) where \(\chi=\frac{\gamma}{2}\)
_or \(\frac{2}{\gamma}\). Define the following differential operator:_
\[\mathcal{D}:=\frac{1}{\chi^{2}}\partial_{ss}+\sum_{j=1}^{3}\frac{1}{s-s_{j}} \partial_{s_{j}}+\sum_{j=1}^{3}\frac{\Delta_{\beta_{j}}}{(s-s_{j})^{2}}. \tag{3.9}\]
_Then for \(s_{1}<s<s_{2}<s_{3}\) we have_
\[\mathcal{D}\left\langle B_{\beta_{1}}^{\sigma_{1}\pm\frac{\chi}{2},\sigma_{2} \pm\frac{\chi}{2}}(s_{1})B_{-\chi}^{\sigma_{2}\pm\frac{\chi}{2},\sigma_{2}}(s) B_{\beta_{2}}^{\sigma_{2},\sigma_{3}}(s_{2})B_{\beta_{3}}^{\sigma_{3},\sigma_{1} \pm\frac{\chi}{2}}(s_{3})\right\rangle=0, \tag{3.10}\]
_and for \(s<s_{1}<s_{2}<s_{3}\):_
\[\mathcal{D}\left\langle B_{-\chi}^{\sigma_{1}\pm\frac{\chi}{2},\sigma_{1}}(s) B_{\beta_{1}}^{\sigma_{1},\sigma_{2}}(s_{1})B_{\beta_{2}}^{\sigma_{2},\sigma_{3}}(s_{2}) B_{\beta_{3}}^{\sigma_{3},\sigma_{1}\pm\frac{\chi}{2}}(s_{3})\right\rangle=0. \tag{3.11}\]
_Above, when \(\pm\frac{\chi}{2}\) equals \(-\frac{\chi}{2}\) the parameters must obey (3.7), and when it equals \(\frac{\chi}{2}\) they must obey (3.8)._
We now use conformal invariance to perform a change of variable that places the boundary spectator insertions \(s_{1},s_{2},s_{3}\) at \(0,1,\infty\) and maps the degenerate insertion at \(s\) to the cross-ratio \(t\) of \(s_{1},s_{2},s_{3},s\). The conformal map we use for this change of variable is:
\[\psi(u)=\frac{(u-s_{1})(s_{2}-s_{3})}{(u-s_{3})(s_{2}-s_{1})}. \tag{3.12}\]
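As an informal sanity check (not part of the argument), the following minimal Python sketch, with arbitrary sample points \(s_{1}<s_{2}<s_{3}\), verifies numerically that \(\psi\) sends \(s_{1},s_{2},s_{3}\) to \(0,1,\infty\) and illustrates where the cross-ratio \(t=\psi(s)\) lands depending on the position of \(s\).

```python
# Sanity check of the Mobius map psi in (3.12); the sample points are arbitrary.
def psi(u, s1, s2, s3):
    return (u - s1) * (s2 - s3) / ((u - s3) * (s2 - s1))

s1, s2, s3 = -1.0, 0.5, 2.0
assert abs(psi(s1, s1, s2, s3)) < 1e-12           # s1 is mapped to 0
assert abs(psi(s2, s1, s2, s3) - 1.0) < 1e-12     # s2 is mapped to 1
assert abs(psi(s3 - 1e-9, s1, s2, s3)) > 1e6      # psi blows up at s3, which is mapped to infinity

t_inside = psi(0.0, s1, s2, s3)    # a point s with s1 < s < s2
t_left = psi(-2.0, s1, s2, s3)     # a point s with s < s1
assert 0 < t_inside < 1            # the cross-ratio lands in (0,1)
assert t_left < 0                  # the cross-ratio lands in (-infinity,0)
```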
By setting \(t=\psi(s)\) we end up with \(t\in(0,1)\) when \(s_{1}<s<s_{2}\) and \(t\in(-\infty,0)\) when \(s<s_{1}\). Now, under the constraints (3.7), consider the functional
\[H_{\chi}(t)=\int_{\mathbb{R}}dc\,e^{-\frac{\gamma c}{2}}\mathbb{ E}\Bigg{[}\exp\Bigg{(}-e^{\gamma c}\int_{\mathbb{H}}\frac{|x-t|^{\gamma\chi} \hat{g}(x)^{\frac{\gamma^{2}}{4}(q+1)}}{|x|^{\gamma\beta_{1}}|x-1|^{\gamma \beta_{2}}}e^{\gamma h(x)}d^{2}x\] \[-e^{\frac{\gamma c}{2}}\int_{\mathbb{R}}\frac{|r-t|^{\frac{ \gamma\chi}{2}}\hat{g}(r)^{\frac{\gamma^{2}}{8}}}{|r|^{\frac{\gamma\beta_{1}} {2}}|r-1|^{\frac{\gamma\beta_{2}}{2}}}e^{\frac{\gamma}{2}h(r)}d\mu_{\partial}^ {t}(r)\Bigg{)}\Bigg{]}, \tag{3.13}\]
where the boundary measure \(d\mu_{\partial}^{t}(r)\) is defined by:
\[d\mu_{\partial}^{t}(r)/dr=\left\{\begin{array}{ll}g_{\frac{\gamma}{2}}( \sigma_{1}-\frac{\chi}{2})\mathbf{1}_{r<0}+g_{\frac{\gamma}{2}}(\sigma_{2}- \frac{\chi}{2})\mathbf{1}_{0<r<t}+g_{\frac{\gamma}{2}}(\sigma_{2})\mathbf{1}_ {t<r<1}+g_{\frac{\gamma}{2}}(\sigma_{3})\mathbf{1}_{r>1},&\text{for $t\in(0,1)$},\\ g_{\frac{\gamma}{2}}(\sigma_{1}-\frac{\chi}{2})\mathbf{1}_{r<t}+g_{\frac{\gamma }{2}}(\sigma_{1})\mathbf{1}_{t<r<0}+g_{\frac{\gamma}{2}}(\sigma_{2})\mathbf{1} _{0<r<1}+g_{\frac{\gamma}{2}}(\sigma_{3})\mathbf{1}_{r>1},&\text{for $t<0$}.\end{array}\right.\]
Since the change of \(\sigma\) on the left and right of \(t\) can be either \(+\frac{\chi}{2}\) or \(-\frac{\chi}{2}\), we similarly consider the function \(\tilde{H}_{\chi}(t)\) corresponding to the other choice, under the constraints (3.8); it is given by the same expression as \(H_{\chi}(t)\) except that \(d\mu_{\partial}^{t}(r)\) is replaced by the following measure
\[d\tilde{\mu}_{\partial}^{t}(r)/dr=\left\{\begin{array}{ll}g_{\frac{\gamma}{2} }(\sigma_{1}+\frac{\chi}{2})\mathbf{1}_{r<0}+g_{\frac{\gamma}{2}}(\sigma_{2}+ \frac{\chi}{2})\mathbf{1}_{0<r<t}+g_{\frac{\gamma}{2}}(\sigma_{2})\mathbf{1}_{t <r<1}+g_{\frac{\gamma}{2}}(\sigma_{3})\mathbf{1}_{r>1},&\text{for $t\in(0,1)$},\\ g_{\frac{\gamma}{2}}(\sigma_{1}+\frac{\chi}{2})\mathbf{1}_{r<t}+g_{\frac{\gamma}{ 2}}(\sigma_{1})\mathbf{1}_{t<r<0}+g_{\frac{\gamma}{2}}(\sigma_{2})\mathbf{1}_ {0<r<1}+g_{\frac{\gamma}{2}}(\sigma_{3})\mathbf{1}_{r>1},&\text{for $t<0$}.\end{array}\right.\]
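The following purely illustrative sketch makes the piecewise structure of the two boundary densities above explicit for \(t\in(0,1)\). The function \(g\) stands for \(g_{\frac{\gamma}{2}}\) and is treated as a given input; the numerical stand-ins below (math.cos for \(g\), arbitrary values of the \(\sigma_{i}\) and \(\chi\)) are assumptions made only for this illustration.

```python
import math

def boundary_density(r, t, g, sigma1, sigma2, sigma3, chi, sign):
    # For t in (0,1): sign = -1 reproduces d mu_partial^t / dr (sigma1, sigma2 shifted by -chi/2
    # to the left of t), while sign = +1 reproduces d tilde-mu_partial^t / dr (shifted by +chi/2).
    if r < 0:
        return g(sigma1 + sign * chi / 2)
    elif r < t:
        return g(sigma2 + sign * chi / 2)
    elif r < 1:
        return g(sigma2)
    else:
        return g(sigma3)

density = lambda r: boundary_density(r, t=0.3, g=math.cos,
                                     sigma1=0.1, sigma2=0.2, sigma3=0.4,
                                     chi=0.5, sign=-1)
print([density(r) for r in (-1.0, 0.1, 0.5, 2.0)])   # one value per boundary segment
```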
We could of course also define \(H_{\chi}(t)\) and \(\tilde{H}_{\chi}(t)\) in a similar manner for \(t\in(1,\infty)\), but this will not be needed. By a direct change of variable applied to the differential operator of Theorem 3.4, \(H_{\chi}(t)\) and \(\tilde{H}_{\chi}(t)\) obey the hypergeometric equation. We state this in the next proposition.
**Proposition 3.5**.: _Under the parameter constraint of (3.7), the function \(H_{\chi}\) obeys_
\[t(1-t)\partial_{t}^{2}H_{\chi}+(C-(A+B+1)t)\partial_{t}H_{\chi}-ABH_{\chi}=0, \tag{3.14}\]
_with \(A,B,C\) given by:_
\[A=-q\frac{\gamma\chi}{2},\quad B=-1+\chi(\beta_{1}+\beta_{2}-2\chi+q\frac{\gamma }{2}),\quad C=\chi(\beta_{1}-\chi). \tag{3.15}\]
_The exact same result holds for \(\tilde{H}_{\chi}\) under the parameter constraints (3.8)._
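As a hedged numerical check (not part of the proof) of the form of equation (3.14), the sketch below verifies with mpmath that the Gauss hypergeometric function \(F(A,B,C,t)\) solves this ODE. The sample values of \(A,B,C\) below are arbitrary; in Proposition 3.5 they are the specific combinations (3.15).

```python
from mpmath import mp, mpf, hyp2f1

mp.dps = 30
A, B, C = mpf('0.37'), mpf('-1.21'), mpf('0.58')   # arbitrary sample values
t = mpf('0.4')

F = hyp2f1(A, B, C, t)
dF = A*B/C * hyp2f1(A+1, B+1, C+1, t)                          # d/dt of 2F1
d2F = A*(A+1)*B*(B+1)/(C*(C+1)) * hyp2f1(A+2, B+2, C+2, t)     # d^2/dt^2 of 2F1

print(t*(1 - t)*d2F + (C - (A + B + 1)*t)*dF - A*B*F)          # numerically zero
```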
We now state an analyticity result for \(H_{\frac{\gamma}{2}}\) and \(\tilde{H}_{\frac{\gamma}{2}}\) which will be used in the proof of Lemma 3.11 below.
**Lemma 3.6**.: _Set \(\chi=\frac{\gamma}{2}\). Fix the \(\sigma_{i}\) in the parameter range (3.7) but with the boundary of the band being excluded, namely \(\sigma_{1}-\frac{\gamma}{4}\in\mathcal{B}\), \(\sigma_{2}-\frac{\gamma}{4}\in\mathcal{B}\), \(\sigma_{1},\sigma_{2},\sigma_{3}\in\mathcal{B}\). Then the function \(H_{\chi}\) is meromorphic in all three \(\beta_{1},\beta_{2},\beta_{3}\) in a complex neighborhood of the subdomain of \(\mathbb{R}^{3}\) given by the constraints \(\beta_{i}\in(-\infty,Q),\ \beta_{1}+\beta_{2}+\beta_{3}>2Q+\chi\). The same claim holds for \(\tilde{H}_{\chi}\), except this time the \(\sigma_{i}\) need to obey \(\sigma_{1}+\frac{\gamma}{4}\in\mathcal{B}\), \(\sigma_{2}+\frac{\gamma}{4}\in\mathcal{B}\), \(\sigma_{1},\sigma_{2},\sigma_{3}\in\mathcal{B}\)._
Proof.: This result follows from a direct adaptation of the proof of analyticity in the \(\beta_{i}\) for \(H\) given in Proposition 2.12. The fact that we have an extra insertion on the boundary poses no additional difficulty. Here we are also in the range of parameters where the Seiberg bounds are satisfied, and thus no truncation procedure is required.
Before further analysis, we recall here the range of parameters where the functions \(H\) and \(R\) have been defined in Section 2. For \(H\) the range on the \(\beta_{i}\) and \(\sigma_{i}\) is given in Definition 2.13:
\[\left\{(\beta_{1},\beta_{2},\beta_{3},\sigma_{1},\sigma_{2},\sigma_{3})\,:\, \beta_{i}<Q,\ Q-\frac{1}{2}\sum\beta_{i}<\gamma\wedge\frac{2}{\gamma}\wedge \min_{i}(Q-\beta_{i}),\ \sigma_{i}\in\overline{\mathcal{B}}\right\}. \tag{3.16}\]
For \(R\) the range of \(\beta\) and \(\sigma_{i}\) is given in Definition 2.18:
\[\left\{(\beta,\sigma_{1},\sigma_{2})\,:\,\frac{\gamma}{2}\vee(\frac{2}{\gamma }-\frac{\gamma}{2})<\beta<Q,\ \sigma_{i}\in\overline{\mathcal{B}}\right\}. \tag{3.17}\]
We have kept the \(\beta_{i}\) as real parameters above, but by the results of Section 2 we have shown that \(H\) and \(R\) admit an analytic continuation in the \(\beta_{i}\) to a complex neighborhood of the real domain where they are defined.
### The \(\frac{\gamma}{2}\)-shift equations for \(H\)
We start by proving the shift equations on \(H\) in the case of \(\chi=\frac{\gamma}{2}\). We therefore use the functions \(H_{\frac{\gamma}{2}}(t)\) and \(\tilde{H}_{\frac{\gamma}{2}}(t)\). For this first lemma the parameter range is chosen such that the \(\beta_{i}\) and \(\sigma_{i}\) parameters of each \(H\) in the shift equations belong to the domain (3.16).
**Lemma 3.7**.: _Set \(\chi=\frac{\gamma}{2}\). The shift equation (3.2) holds in the parameter range_
\[\beta_{1}<\frac{2}{\gamma},\quad\beta_{2},\beta_{3}<Q,\quad Q-\frac{1}{2}( \beta_{1}+\beta_{2}+\beta_{3}-\frac{\gamma}{2})<\gamma\wedge\frac{2}{\gamma} \wedge\min_{i}(Q-\beta_{i}),\quad\sigma_{1},\sigma_{2},\sigma_{3}\in\mathcal{ B},\quad\sigma_{2}+\frac{\gamma}{4}\in\mathcal{B}, \tag{3.18}\]
_and the shift equation (3.3) holds in parameter range:_
\[\beta_{1},\beta_{2}<\frac{2}{\gamma},\quad\beta_{3}<Q,\quad Q-\frac{1}{2}( \beta_{1}+\beta_{2}+\beta_{3}-\frac{\gamma}{2})<\gamma\wedge\frac{2}{\gamma} \wedge\min_{i}(Q-\beta_{i}),\quad\sigma_{1},\sigma_{2},\sigma_{3}\in\mathcal{ B},\quad\sigma_{2}+\frac{\gamma}{4}\in\mathcal{B}. \tag{3.19}\]
Proof.: Here we are always assuming \(\chi=\frac{\gamma}{2}\). Let us first take a look at the parameter ranges given by (3.18) and (3.19). For \(\chi=\frac{\gamma}{2}\), the shift equation (3.2) contains the following three \(H\) functions:
\[H\begin{pmatrix}\beta_{1},\beta_{2}-\frac{\gamma}{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix},\quad H\begin{pmatrix}\beta_{1}- \frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\gamma}{4},\sigma_{3}\end{pmatrix},\quad H \begin{pmatrix}\beta_{1}+\frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\gamma}{4},\sigma_{3}\end{pmatrix}.\]
The \(\beta_{i}\) and \(\sigma_{i}\) parameters of each \(H\) must be in the range (3.16). This gives (3.18). Similarly, the second shift equations (3.3) contains the \(H\) functions
\[H\begin{pmatrix}\beta_{1},\beta_{2}+\frac{\gamma}{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\gamma}{4},\sigma_{3}\end{pmatrix},\quad H\begin{pmatrix} \beta_{1}-\frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix},\quad H\begin{pmatrix}\beta_{1}+ \frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix},\]
and the \(\beta_{i}\) and \(\sigma_{i}\) then need to obey the constraint (3.19). It is easy to check that these parameter constraints define non-empty ranges. The condition (3.18) is weaker than (3.19), and (3.19) can be satisfied if all the \(\beta_{i}\) parameters are chosen in the interval \((\frac{2}{\gamma}-\epsilon,\frac{2}{\gamma})\) for a small \(\epsilon>0\).
Let us now derive the first shift equation (3.2). We assume that \(\beta_{i},\sigma_{i}\) obey (3.7), so that \(H_{\frac{\gamma}{2}}\) is well-defined, plus the following extra constraint on \(\beta_{1}\):
\[\frac{\gamma}{2}<\beta_{1}<\frac{2}{\gamma}. \tag{3.20}\]
By Proposition 3.5, the function \(t\mapsto H_{\frac{\gamma}{2}}(t)\) obeys the hypergeometric equation for \(t\in(0,1)\). Using the basis of solutions of the hypergeometric equation recalled in Section A.1, we can write the following expansions around \(t=0\) and \(t=1\), under the assumption that none of \(C\), \(C-A-B\), \(A-B\) is an integer.4 For \(t\in(0,1)\):
Footnote 4: The excluded values are recovered by a simple continuity argument in \(\gamma\).
\[H_{\frac{\gamma}{2}}(t) =C_{1}F(A,B,C,t)+C_{2}^{+}t^{1-C}F(1+A-C,1+B-C,2-C,t)\] \[=B_{1}F(A,B,1+A+B-C,1-t)+B_{2}^{-}(1-t)^{C-A-B}F(C-A,C-B,1+C-A-B, 1-t). \tag{3.21}\]
The constants \(C_{1},C_{2}^{+},B_{1},B_{2}^{-}\) are the real constants that parametrize the solution space around \(t=0\) and \(t=1\); we will identify them by Taylor expansion. First we note that by setting \(t=0\):
\[C_{1}=H_{\frac{\gamma}{2}}(0)=H\begin{pmatrix}\beta_{1}-\frac{\gamma}{2}, \beta_{2},\beta_{3}\\ \sigma_{1}-\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix}. \tag{3.22}\]
Next, to find \(C_{2}^{+}\) we go to higher order in the \(t\to 0_{+}\) limit. For this we use the asymptotic expansion of \(H_{\frac{\gamma}{2}}(t)-H_{\frac{\gamma}{2}}(0)\) given by Lemma D.1. Under the parameter constraints given by (3.7) and (3.20), the lemma directly tells us that
\[H_{\frac{\gamma}{2}}(t)-H_{\frac{\gamma}{2}}(0)=C_{2}^{+}t^{1-C}+o(t^{1-C}), \tag{3.23}\]
where:
\[C_{2}^{+}= -\frac{\Gamma(-1+\frac{\gamma\beta_{1}}{2}-\frac{\gamma^{2}}{4}) \Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_{ \frac{\gamma}{2}}(\sigma_{1}-\frac{\gamma}{4})-g_{\frac{\gamma}{2}}(\sigma_{2 }+\frac{\beta_{1}}{2}-\frac{\gamma}{4})\right)H\begin{pmatrix}\beta_{1}+ \frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix}. \tag{3.24}\]
Similarly by setting \(t=1\) we get:
\[B_{1}=H\begin{pmatrix}\beta_{1},\beta_{2}-\frac{\gamma}{2},\beta_{3}\\ \sigma_{1}-\frac{\gamma}{4},\sigma_{2}-\frac{\gamma}{4},\sigma_{3}\end{pmatrix}. \tag{3.25}\]
The connection formula (A.8) between \(C_{1}\), \(C_{2}^{+}\), and \(B_{1}\) then implies the shift equation (3.2) for \(\chi=\frac{\gamma}{2}\) in the range of parameters constrained by (3.7) and (3.20), after performing furthermore the replacements \(\sigma_{1}\to\sigma_{1}+\frac{\gamma}{4}\) and \(\sigma_{2}\to\sigma_{2}+\frac{\gamma}{4}\) (which also shifts the domain to which \(\sigma_{1},\sigma_{2}\) belong). To lift these constraints we then invoke the analyticity of \(H\) as a function of its parameters given by Proposition 2.14. We have thus shown that (3.2) holds for \(\chi=\frac{\gamma}{2}\) in the parameter range given by (3.18).
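The connection formula (A.8) invoked here is quoted from Appendix A. As an informal numerical illustration, the standard two-point connection identity for \({}_{2}F_{1}\) (DLMF 15.8.4) that underlies it can be checked with mpmath on arbitrary non-degenerate sample parameters.

```python
from mpmath import mp, mpf, hyp2f1, gamma

mp.dps = 30
A, B, C = mpf('0.37'), mpf('-1.21'), mpf('0.58')   # arbitrary sample values, avoiding the excluded integer cases
t = mpf('0.3')

lhs = hyp2f1(A, B, C, t)
rhs = (gamma(C)*gamma(C - A - B)/(gamma(C - A)*gamma(C - B))
       * hyp2f1(A, B, A + B - C + 1, 1 - t)
       + gamma(C)*gamma(A + B - C)/(gamma(A)*gamma(B))
       * (1 - t)**(C - A - B) * hyp2f1(C - A, C - B, C - A - B + 1, 1 - t))
print(lhs - rhs)   # numerically zero: the solution around t=0 re-expanded in the basis around t=1
```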
Now we repeat these steps with \(\tilde{H}_{\frac{\gamma}{2}}\) to obtain the shift equation with the opposite phase. We expand \(\tilde{H}_{\frac{\gamma}{2}}(t)\), for \(t\in(0,1)\)
\[\tilde{H}_{\frac{\gamma}{2}}(t) =\tilde{C}_{1}F(A,B,C,t)+\tilde{C}_{2}^{+}t^{1-C}F(1+A-C,1+B-C,2-C,t)\] \[=\tilde{B}_{1}F(A,B,1+A+B-C,1-t)+\tilde{B}_{2}^{-}(1-t)^{C-A-B}F(C -A,C-B,1+C-A-B,1-t) \tag{3.26}\]
and compute in the same way the values of \(\tilde{C}_{1},\tilde{C}_{2}^{+},\tilde{B}_{2}^{-}\):
\[\tilde{C}_{1} =H\begin{pmatrix}\beta_{1}-\frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1}+\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix}, \tag{3.28}\] \[\tilde{C}_{2}^{+} =-\frac{\Gamma(-1+\frac{\gamma\beta_{1}}{2}-\frac{\gamma^{2}}{4} )\Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_ {\frac{\gamma}{2}}(\sigma_{1}+\frac{\gamma}{4})-g_{\frac{\gamma}{2}}(\sigma_ {2}-\frac{\beta_{1}}{2}+\frac{\gamma}{4})\right)H\begin{pmatrix}\beta_{1}+ \frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1}+\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix},\] (3.29) \[\tilde{B}_{2}^{-} =-\frac{\Gamma(-1+\frac{\gamma\beta_{2}}{2}-\frac{\gamma^{2}}{4} )\Gamma(1-\frac{\gamma\beta_{2}}{2})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_ {\frac{\gamma}{2}}(\sigma_{3})-g_{\frac{\gamma}{2}}(\sigma_{2}+\frac{\beta_{2 }}{2})\right)H\begin{pmatrix}\beta_{1},\beta_{2}+\frac{\gamma}{2},\beta_{3}\\ \sigma_{1}+\frac{\gamma}{4},\sigma_{2}+\frac{\gamma}{4},\sigma_{3}\end{pmatrix}. \tag{3.27}\]
Then the connection formula (A.8) implies the shift equation (3.3) for \(\chi=\frac{\gamma}{2}\).
**Remark 3.8**.: _Notice that the two shift equations we derive using respectively \(H_{\chi}\) and \(\tilde{H}_{\chi}\) do not have exactly the same shape, the reason being that in the proof above we used the connection formula between the coefficients \(C_{1},C_{2}^{+},B_{1}\) to derive (3.2), and used the connection formula between \(\tilde{C}_{1},\tilde{C}_{2}^{+},\tilde{B}_{2}^{-}\) to derive (3.3). One may wonder why we did not use the connection formula between \(\tilde{C}_{1},\tilde{C}_{2}^{+},\tilde{B}_{1}\) in the case of \(\tilde{H}_{\chi}\) as well. The main reason is that with this choice it is possible to perform a linear combination of the shift equations (3.2) and (3.3) to obtain a shift equation of \(H\) shifting only the parameter \(\beta_{1}\). This will be crucial to establish a uniqueness statement for the solutions to (3.2) and (3.3), see the proof of Theorem 1.1 in Section 4.2 and [10]._
### The \(\frac{\gamma}{2}\)-shift equations for \(R\)
The next step is to derive the \(\frac{\gamma}{2}\)-shift equation for the reflection coefficient \(R\). The key idea is to take a suitable limit of the \(\frac{\gamma}{2}\)-shift equations for \(H\) that we have established in Lemma 3.7 to make \(R\) appear from \(H\). For this purpose we give the following result expressing \(R\) as a limit of \(H\).
**Lemma 3.9**.: _Suppose the \(\beta_{i}\) are in the range given by (3.16) and the \(\sigma_{i}\) in \(\mathcal{B}\). Suppose \(Q>\beta_{1}>\beta_{2}\vee\frac{\gamma}{2}\), \(\beta_{2}>0\) and \(\beta_{1}-\beta_{2}<\beta_{3}<Q\). Then the following limit holds:_
\[\lim_{\beta_{3}\downarrow\beta_{1}-\beta_{2}}(\beta_{2}+\beta_{3}-\beta_{1})H \begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=2R(\beta_{1},\sigma_{1},\sigma_{ 2}).\]
We defer the proof of Lemma 3.9 to Section 5 and proceed to prove the \(\frac{\gamma}{2}\)-shift equation for \(R\).
**Lemma 3.10**.: _Consider \(\beta_{1}\in(\frac{\gamma}{2}\vee(\frac{2}{\gamma}-\frac{\gamma}{2}),\frac{2} {\gamma})\) and \(\sigma_{1},\sigma_{2}\in\mathbb{C}\) such that \(\sigma_{1},\sigma_{2},\sigma_{2}+\frac{\gamma}{4}\) all belong to \(\mathcal{B}\). Then \(R(\beta_{1},\sigma_{1},\sigma_{2})\) obeys_
\[R(\beta_{1},\sigma_{1},\sigma_{2})=-\frac{\Gamma(-1+\frac{\gamma\beta_{1}}{2}- \frac{\gamma^{2}}{4})\Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(-\frac{\gamma ^{2}}{4})}\left(g_{\frac{\gamma}{2}}(\sigma_{1})-g_{\frac{\gamma}{2}}(\sigma_ {2}+\frac{\beta_{1}}{2})\right)R(\beta_{1}+\frac{\gamma}{2},\sigma_{1},\sigma_ {2}+\frac{\gamma}{4}). \tag{3.30}\]
_Similarly for \(\beta_{1}\in(0\vee(\frac{2}{\gamma}-\gamma),\frac{2}{\gamma}-\frac{\gamma}{2})\) and the same constraint on \(\sigma_{1},\sigma_{2}\) as previously,_
\[R(\beta_{1}+\frac{\gamma}{2},\sigma_{1},\sigma_{2}+\frac{\gamma}{4})=-\frac{ \Gamma(-1+\frac{\gamma\beta_{1}}{2})\Gamma(1-\frac{\gamma\beta_{1}}{2}-\frac{ \gamma^{2}}{4})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_{\frac{\gamma}{2}}( \sigma_{1})-g_{\frac{\gamma}{2}}(\sigma_{2}-\frac{\beta_{1}}{2})\right)R( \beta_{1}+\gamma,\sigma_{1},\sigma_{2}). \tag{3.31}\]
Proof.: Let us derive the first shift equation (3.30) which will follow from taking a limit of (3.2). Fix a \(\beta_{1}\in(\frac{\gamma}{2}\vee(\frac{2}{\gamma}-\frac{\gamma}{2}),\frac{2 }{\gamma})\). Consider two parameters \(\epsilon,\eta>0\) chosen small enough and set \(\beta_{2}=\beta_{1}-\epsilon\), \(\beta_{3}=\beta_{1}-\beta_{2}+\frac{\gamma}{2}+\eta=\frac{\gamma}{2}+ \epsilon+\eta\). Notice that for this parameter choice the condition (3.18) required in Lemma 3.7 is satisfied. By applying Lemma 3.9 we get:
\[\lim_{\beta_{3}\downarrow\beta_{1}-\beta_{2}+\frac{\gamma}{2}}( \beta_{2}+\beta_{3}-\beta_{1}-\frac{\gamma}{2})H\begin{pmatrix}\beta_{1},\beta _{2}-\frac{\gamma}{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=2R(\beta_{1},\sigma_{1},\sigma_ {2}),\] \[\lim_{\beta_{3}\downarrow\beta_{1}-\beta_{2}+\frac{\gamma}{2}}( \beta_{2}+\beta_{3}-\beta_{1}-\frac{\gamma}{2})\frac{\Gamma(\frac{\gamma\beta_ {1}}{2}-\frac{\gamma^{2}}{4})\Gamma(1-\frac{\gamma\beta_{2}}{2}+\frac{\gamma^{ 2}}{4})}{\Gamma(\frac{\gamma\beta_{1}}{2}+(q-1)\frac{\gamma^{2}}{4})\Gamma(1- \frac{\gamma}{2}\beta_{2}-(q-1)\frac{\gamma^{2}}{4}))}H\begin{pmatrix}\beta_{1 }-\frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\gamma}{4},\sigma_{3}\end{pmatrix}=0,\] \[\lim_{\beta_{3}\downarrow\beta_{1}-\beta_{2}+\frac{\gamma}{2}}( \beta_{2}+\beta_{3}-\beta_{1}-\frac{\gamma}{2})\Bigg{[}\frac{\Gamma(2+\frac{ \gamma^{2}}{4}-\frac{\gamma\beta_{1}}{2})\Gamma(1-\frac{\gamma\beta_{2}}{2}+ \frac{\gamma^{2}}{4})\Gamma(-1+\frac{\gamma\beta_{1}}{2}-\frac{\gamma^{2}}{4 })\Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(1+\frac{\gamma^{2}}{4})\Gamma(2- \frac{\gamma}{2}(\beta_{1}+\beta_{2})-(q-2)\frac{\gamma^{2}}{4}))\Gamma(- \frac{\gamma^{2}}{4})}\] \[\times\left(g_{\frac{\gamma}{2}}(\sigma_{1})-g_{\frac{\gamma}{2}} (\sigma_{2}+\frac{\beta_{1}}{2})\right)H\begin{pmatrix}\beta_{1}+\frac{ \gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\gamma}{4},\sigma_{3}\end{pmatrix}\Bigg{]}\] \[=2\frac{\Gamma(-1+\frac{\gamma\beta_{1}}{2}-\frac{\gamma^{2}}{4 })\Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_ {\frac{\gamma}{2}}(\sigma_{1})-g_{\frac{\gamma}{2}}(\sigma_{2}+\frac{\beta_{1 }}{2})\right)R(\beta_{1}+\frac{\gamma}{2},\sigma_{1},\sigma_{2}+\frac{\gamma}{ 4}).\]
In the above three limits the second one is actually trivial; only the first and third involve using Lemma 3.9. Putting these limits together using (3.2) leads to (3.30). By using the alternative function \(\tilde{H}_{\frac{\gamma}{2}}(t)\) along the same lines we obtain the relation (3.31) between \(R(\beta_{1}+\frac{\gamma}{2},\sigma_{1},\sigma_{2}+\frac{\gamma}{4})\) and \(R(\beta_{1}+\gamma,\sigma_{1},\sigma_{2})\). This proves the lemma.
### Analytic continuation for \(H\) and \(R\)
In this subsection we analytically extend the functions \(H\) and \(R\) to a larger domain of parameters than the one we have been working with so far. For this purpose, we must first derive the so-called reflection principle for \(H\) and \(R\), which will be obtained by performing the OPE with reflection in the case \(\chi=\frac{\gamma}{2}\). We rely extensively on Lemma D.2, which gives the Taylor expansion involving the reflection coefficient. Finally, we also extend the validity of Lemma 3.9 to a larger range of parameters.
**Lemma 3.11** (Reflection principle for \(H\)).: _Consider parameters \(\sigma_{1},\sigma_{2},\sigma_{3}\), \(\beta_{1},\beta_{2},\beta_{3}\) satisfying the parameter ranges (3.16) and (3.17) for \(H\) and \(R(\beta_{1},\sigma_{1},\sigma_{2})\) to be well-defined. Then one can meromorphically extend \(\beta_{1}\mapsto H\) beyond the point \(\beta_{1}=Q\) by the following relation:_
\[H\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=R(\beta_{1},\sigma_{1},\sigma_{2} )H\begin{pmatrix}2Q-\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}. \tag{3.32}\]
_The quantity \(H\begin{pmatrix}2Q-\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}\) is thus well-defined as long as the parameters of \(H\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}\) and \(R(\beta_{1},\sigma_{1},\sigma_{2})\) respectively obey the constraints of (3.16) and (3.17). Similarly, for \(\sigma_{1},\sigma_{2}\in\mathcal{B}\),_
_we can analytically extend \(\beta_{1}\mapsto R(\beta_{1},\sigma_{1},\sigma_{2})\) to the range \(((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q+(\frac{2}{\gamma} \wedge\gamma))\) thanks to the relation:_
\[R(\beta_{1},\sigma_{1},\sigma_{2})R(2Q-\beta_{1},\sigma_{1},\sigma_{2})=1. \tag{3.33}\]
Proof.: Throughout the proof we keep the same notations as used in the proof of Lemma 3.7 for the solution space of the hypergeometric equation satisfied by \(H_{\frac{\gamma}{2}}(t)\) for \(t\in(0,1)\). We assume the parameters \(\beta_{i}\), \(\sigma_{i}\) obey the condition (3.7) in order for \(H_{\frac{\gamma}{2}}(t)\) to be well-defined. We also add the condition \(\beta_{1}\in(Q-\beta_{0},Q)\) so that we can apply the result of Lemma D.2 and identify the value of \(C_{2}^{+}\) to be:
\[C_{2}^{+}=R(\beta_{1},\sigma_{1}-\frac{\gamma}{4},\sigma_{2}-\frac{\gamma}{4} )H\begin{pmatrix}2Q-\beta_{1}-\frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix}. \tag{3.34}\]
The key argument is to observe that since, by Lemma 3.6, \(\beta_{1}\mapsto H_{\frac{\gamma}{2}}(t)\) is complex analytic, so is the coefficient \(C_{2}^{+}\). By using this combined with the analyticity of \(R\) and \(H\), we can extend the range of validity of equation (3.34) from \(\beta_{1}\in(Q-\beta_{0},Q)\) to \(\beta_{1}\in(\frac{2}{\gamma},Q)\), still under the constraint of (3.7). Now equation (3.24), derived in the proof of Lemma 3.7, gives us an alternative expression for \(C_{2}^{+}\), which is valid for \(\beta_{1}\in(\frac{\gamma}{2},\frac{2}{\gamma})\). The analyticity of \(\beta_{1}\mapsto C_{2}^{+}\) in a complex neighborhood of \(\frac{2}{\gamma}\) then implies that one can "glue" together the two expressions for \(C_{2}^{+}\). More precisely, after performing the parameter replacement \(\sigma_{i}\to\sigma_{i}+\frac{\gamma}{4}\) for \(i=1,2,3\), the equality
\[R(\beta_{1},\sigma_{1},\sigma_{2})H\begin{pmatrix}2Q-\beta_{1}- \frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\gamma}{4},\sigma_{3}+\frac{\gamma}{4}\end{pmatrix} \tag{3.36}\] \[=\frac{\Gamma(-1+\frac{\gamma\beta_{1}}{2}-\frac{\gamma^{2}}{4} )\Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_ {\frac{\gamma}{2}}(\sigma_{1})-g_{\frac{\gamma}{2}}(\sigma_{2}+\frac{\beta_{1 }}{2})\right)H\begin{pmatrix}\beta_{1}+\frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\gamma}{4},\sigma_{3}+\frac{\gamma}{4}\end{pmatrix}, \tag{3.35}\]
provides the desired analytic continuation of \(H\). To land on the form of the reflection equation given in the lemma, one needs to replace \(\beta_{1}\) by \(\beta_{1}-\frac{\gamma}{2}\). This transforms \(R(\beta_{1},\sigma_{1},\sigma_{2})\) into \(R(\beta_{1}-\frac{\gamma}{2},\sigma_{1},\sigma_{2})\), which we can shift back to \(R(\beta_{1},\sigma_{1},\sigma_{2}+\frac{\gamma}{4})\) using the shift equation (3.30). Lastly, we rename \(\sigma_{2}+\frac{\gamma}{4}\) to \(\sigma_{2}\) and \(\sigma_{3}+\frac{\gamma}{4}\) to \(\sigma_{3}\). This proves the reflection principle for \(H\). The claim for \(R\) is then an immediate consequence.
At this stage we will use the shift equations we have derived to analytically continue \(H\) and \(R\) both in the parameters \(\beta_{i}\) and \(\sigma_{i}\). The analytic continuations will be defined in a larger range of parameters than (3.16) and (3.17) required for the GMC expressions to be well-defined. The proofs follow closely the ones of [22]. We start with the case of \(R\) which is very straightforward.
**Lemma 3.12**.: _(Analytic continuation of \(R(\beta_{1},\sigma_{1},\sigma_{2})\)) For all \(\sigma_{1},\sigma_{2}\in\mathcal{B}\), the meromorphic function \(\beta_{1}\mapsto R(\beta_{1},\sigma_{1},\sigma_{2})\) originally defined on the interval \(((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q)\) extends to a meromorphic function defined in a complex neighborhood of \(\mathbb{R}\) and satisfying the shift equation:_
\[\frac{R(\beta_{1},\sigma_{1},\sigma_{2})}{R(\beta_{1}+\gamma, \sigma_{1},\sigma_{2})}=(\frac{\gamma}{2})^{4}\frac{\Gamma(-1+\frac{\gamma \beta_{1}}{2}-\frac{\gamma^{2}}{4})\Gamma(1-\frac{\gamma\beta_{1}}{2}-\frac{ \gamma^{2}}{4})\Gamma(1-\frac{\gamma\beta_{1}}{2})\Gamma(-1+\frac{\gamma\beta_ {1}}{2})}{\sin(\pi\frac{\gamma^{2}}{4})\Gamma(1-\frac{\gamma^{2}}{4})^{2}}\\ \times 4\sin(\frac{\gamma\pi}{2}(\frac{\beta}{2}-\sigma_{1}-\sigma_{2} +Q))\sin(\frac{\gamma\pi}{2}(\frac{\beta}{2}+\sigma_{1}+\sigma_{2}-Q))\sin( \frac{\gamma\pi}{2}(\frac{\beta}{2}+\sigma_{2}-\sigma_{1}))\sin(\frac{\gamma \pi}{2}(\frac{\beta}{2}+\sigma_{1}-\sigma_{2})). \tag{3.37}\]
_Furthermore, for a fixed \(\beta_{1}\) in the above complex neighborhood of \(\mathbb{R}\), the function \(R(\beta_{1},\sigma_{1},\sigma_{2})\) extends to a meromorphic function of \((\sigma_{1},\sigma_{2})\) on \(\mathbb{C}^{2}\)._
Proof.: This proof is performed in [10] but we sketch it below as it is quite short. Fix first the parameters \(\sigma_{1},\sigma_{2}\) in \(\mathcal{B}\). Thanks to Proposition 2.17, \(\beta_{1}\mapsto R(\beta_{1},\sigma_{1},\sigma_{2})\) is meromorphic in a complex neighborhood of the real interval \(((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q)\), and thanks to the reflection principle given by Lemma 3.11, it is meromorphic in a complex neighborhood of \(((\frac{2}{\gamma}-\frac{\gamma}{2})\vee\frac{\gamma}{2},Q+(\frac{2}{\gamma} \wedge\gamma))\). Note that the length of this interval is strictly greater than \(\gamma\).
Combining the two shift equations of Lemma 3.10, we obtain the shift equation (3.37) relating \(R(\beta_{1}+\gamma,\sigma_{1},\sigma_{2})\) and \(R(\beta_{1},\sigma_{1},\sigma_{2})\), which does not shift the \(\sigma_{1},\sigma_{2}\) parameters. Since \(R(\beta_{1},\sigma_{1},\sigma_{2})\) is defined in a complex neighborhood of a real interval of length strictly bigger than \(\gamma\), the shift equation (3.37) can be used to meromorphically extend \(\beta_{1}\mapsto R(\beta_{1},\sigma_{1},\sigma_{2})\) to a complex neighborhood of the whole real line.
Now, for any \(\beta_{1}\) fixed in this complex neighborhood of \(\mathbb{R}\), we perform the analytic continuation to \(\mathbb{C}^{2}\) in the variables \(\sigma_{1},\sigma_{2}\). For this one can simply apply either of the shift equations of Lemma 3.10. This is possible since the band \(\mathcal{B}\) has width strictly greater than \(\frac{\gamma}{4}\). Hence the result.
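The continuation mechanism used in this proof, namely a function known on an interval longer than the shift and extended to the whole real line by iterating the shift equation, can be illustrated by the following toy sketch, where Euler's Gamma function with its relation \(f(x+1)=xf(x)\) plays the role of \(R\) with its \(\gamma\)-shift equation (3.37). This is only an analogy, not the actual extension of \(R\).

```python
import math

def f(x, known=math.gamma):
    # Pretend f is only known on [1.0, 2.3) (an interval longer than the shift 1)
    # and extend it to the real line using the shift equation f(x+1) = x*f(x).
    if 1.0 <= x < 2.3:
        return known(x)
    if x >= 2.3:
        return (x - 1.0) * f(x - 1.0)
    return f(x + 1.0) / x            # x < 1: use the shift equation backwards

for x in (4.6, 0.3, -1.4):
    print(f(x), math.gamma(x))       # the two columns agree
```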
**Lemma 3.13**.: _(Analytic continuation of \(H\)) Fix \(\sigma_{3}\in[-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2}- \frac{\gamma}{4}]\times\mathbb{R}\) and fix \(\sigma_{1},\sigma_{2}\in\mathcal{B}\). Then the function \((\beta_{1},\beta_{2},\beta_{3})\mapsto H\) originally defined in the parameter range given by (3.16) extends to a meromorphic function of the three variables in a small complex neighborhood of \(\mathbb{R}^{3}\). Now fix \(\beta_{1},\beta_{2},\beta_{3}\) in this complex neighborhood of \(\mathbb{R}^{3}\), keeping \(\sigma_{3}\) still fixed in \([-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2}-\frac{\gamma}{4 }]\times\mathbb{R}\). The function \((\sigma_{1},\sigma_{2})\mapsto H\) then extends to a meromorphic function of \(\mathbb{C}^{2}\)._
Proof.: This proof also follows exactly the proof of [10, Lemma 3.6], with one notable difference, highlighted below, that prevents us from performing an analytic continuation in \(\sigma_{3}\). The main idea of [10] is to use the shift equations on \(H\) to successively extend the domain where \(H\) is defined to larger and larger regions, until it has been extended to the region claimed in the lemma. We choose as starting domain:
\[\mathcal{E}_{1}:=\left\{\beta_{i}\in(\frac{2}{\gamma}-\delta,Q)\times[-\nu, \nu],\,\sigma_{i}\in(-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q }{2})\times\mathbb{R}\right\}. \tag{3.38}\]
By \(\beta_{i}\in(\frac{2}{\gamma}-\delta,Q)\times[-\nu,\nu]\) we mean \(\Re(\beta_{i})\in(\frac{2}{\gamma}-\delta,Q)\) and \(\Im(\beta_{i})\in[-\nu,\nu]\); the same convention is used for the domain of \(\sigma_{i}\). We have introduced \(\delta,\nu>0\) chosen small enough so that (3.16) holds for \(\beta_{i}\in(\frac{2}{\gamma}-\delta,Q)\) and so that one can apply Proposition 2.12 to show analyticity of \(H\) in all of its variables on \(\mathcal{E}_{1}\). The condition on \(\delta\) is \(\delta<\frac{1}{\gamma}-\frac{\gamma}{4}\). By applying the shift equations on \(H\) as performed in [10], one can extend \(H\) to the region:
\[\mathcal{E}_{7}:=\left\{\beta_{i}\in\mathbb{R}\times[-\nu,\nu],\,(\sigma_{1}, \sigma_{2})\in\mathbb{C}^{2},\,\sigma_{3}\in(-\frac{1}{2\gamma}+\frac{Q}{2}, \frac{1}{2\gamma}+\frac{Q}{2}-\frac{\gamma}{4})\times\mathbb{R}\right\}. \tag{3.39}\]
Note here that we have performed a cyclic permutation with respect to the indices \(1,2,3\) of the parameters in comparison to the domain \(\mathcal{E}_{7}\) written in [10]. This thus gives the desired domain of analytic continuation of \(H\) claimed in the statement of the lemma.
Contrary to what is done in [10], we are not able to lift the constraint on \(\sigma_{3}\). Indeed, in the case \(\mu=0\), when \(H\) has an expression reducing to a moment of a boundary GMC measure, adding a global constant to each of the \(\sigma_{i}\) simply changes the function \(H\) by a global phase. In our case this property does not hold, which is why we are not able to perform the extension in the variable \(\sigma_{3}\). Notice though that once Theorem 1.1 is established, the exact formula \(H_{\text{PT}}\) implies that \(H\) extends meromorphically in all its parameters to \(\mathbb{C}^{6}\).
We finish this subsection by extending Lemma 3.9 to a larger range of parameters, as will be required in the next subsection. This is a novel difficulty that was not present in [10]: in our case we had to perform a truncation procedure to define \(H\) and \(R\), while in [10] they simply reduce to a moment of GMC.
**Lemma 3.14**.: _Suppose \(\beta_{1},\beta_{2},\beta_{3}\) satisfy \(Q-\frac{1}{2}\sum_{i}\beta_{i}<\frac{2}{\gamma}\wedge\min_{i}(Q-\beta_{i})\), \(Q>\beta_{1}>\beta_{2}\vee\frac{\gamma}{2}\) and \(\beta_{1}-\beta_{2}<\beta_{3}<Q\). Let also \(\sigma_{1}\in\overline{\mathcal{B}}\) and \(\sigma_{2},\sigma_{3}\in\mathcal{B}\). Then the following limit holds_
\[\lim_{\beta_{3}\to\beta_{1}-\beta_{2}}(\beta_{2}+\beta_{3}-\beta_{1})H\begin{pmatrix} \beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=2R(\beta_{1},\sigma_{1},\sigma_ {2}),\]
_where the function \(R\) should be understood by the analytic continuation given in Lemma 3.12._
Proof.: This claim follows directly from Lemma 3.9 and from the analytic continuation result of Lemma 3.13. More precisely, since we know by Lemma 3.13 that \(H\) is a meromorphic function of its parameters, this limit can be viewed as a residue computation and evaluated by the following contour integral
\[\lim_{\beta_{3}\to\beta_{1}-\beta_{2}}(\beta_{2}+\beta_{3}-\beta_{1})H \begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=\frac{1}{2\pi\mathbf{i}}\int_{ \mathcal{C}_{\epsilon}(\beta_{1}-\beta_{2})}H\begin{pmatrix}\beta_{1},\beta_{ 2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}d\beta_{3}=2R(\beta_{1},\sigma_{ 1},\sigma_{2}),\]
where above \(\mathcal{C}_{\epsilon}(\beta_{1}-\beta_{2})\) is a circle of radius \(\epsilon\) around \(\beta_{1}-\beta_{2}\). In this form it is now clear, by analyticity of \(H\), that this equality, first established in the parameter range of Lemma 3.9, extends to the larger parameter range.
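As an informal illustration of the residue viewpoint in the display above, the following mpmath sketch recovers the coefficient of a simple pole of a toy meromorphic function (an arbitrary stand-in for \(H\), not related to it) by integrating over a small circle around the pole.

```python
from mpmath import mp, mpf, exp, pi, sin, quad

mp.dps = 25
a = mpf('0.7')
f = lambda z: sin(z)/(z - a) + z**2        # simple pole at a with residue sin(a), plus a regular part

eps = mpf('0.01')
circle = lambda th: f(a + eps*exp(1j*th)) * 1j*eps*exp(1j*th)   # z = a + eps*e^{i th}, dz included
residue = quad(circle, [0, 2*pi]) / (2j*pi)
print(residue)                              # approximately sin(0.7) = 0.6442..., as expected
```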
### The \(\frac{2}{\gamma}\)-shift equations for \(R\)
We now derive the \(\frac{2}{\gamma}\)-shift equations on \(R(\beta_{1},\sigma_{1},\sigma_{2})\).
**Lemma 3.15**.: _For all \(\sigma_{1},\sigma_{2}\in\mathbb{C}\), the meromorphic function \(\beta_{1}\mapsto R(\beta_{1},\sigma_{1},\sigma_{2})\) defined in a complex neighborhood of \(\mathbb{R}\) satisfies the shift equations in Theorem 3.2 with \(\chi=\frac{2}{\gamma}\)._
To prove Lemma 3.15 we work exclusively with \(\chi=\frac{2}{\gamma}\). There are two steps that will each require their own range of parameters. We first place ourselves in the following range that will allow us to apply the OPE with reflection of Lemma D.2 around the \(\beta_{1}\) insertion at \(0\):
\[t\in(0,1),\ \ \beta_{1},\beta_{2},\beta_{3}\in(Q-\epsilon,Q),\ \ \sigma_{1},\sigma_{2}\in\mathbf{i}\mathbb{R}+\frac{1}{2\gamma}+\frac{Q}{2},\ \ \sigma_{3}\in\mathcal{B}. \tag{3.40}\]
In the above \(\epsilon\) is chosen small enough, smaller than the constant \(\beta_{0}\) required to apply Lemma D.2. We can thus expand \(H_{\frac{2}{\gamma}}(t)\) on the basis, for \(t\in(0,1)\)
\[H_{\frac{2}{\gamma}}(t) =C_{1}F(A,B,C,t)+C_{2}^{+}t^{1-C}F(1+A-C,1+B-C,2-C,t)\] \[=B_{1}F(A,B,1+A+B-C,1-t)+B_{2}^{-}(1-t)^{C-A-B}F(C-A,C-B,1+C-A-B, 1-t), \tag{3.41}\]
where again \(C_{1},C_{2}^{+},B_{1},B_{2}^{-}\) parametrize the solution space around the points \(0\) and \(1\). As before, by sending \(t\) to \(0\) and to \(1\) one obtains:
\[C_{1}=H_{\frac{2}{\gamma}}(0)=H\begin{pmatrix}\beta_{1}-\frac{2}{\gamma}, \beta_{2},\beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2},\sigma_{3}\end{pmatrix},\quad B_{1}=H_ {\frac{2}{\gamma}}(1)=H\begin{pmatrix}\beta_{1},\beta_{2}-\frac{2}{\gamma}, \beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2}-\frac{1}{\gamma},\sigma_{3}\end{pmatrix}. \tag{3.42}\]
Since the condition required for Lemma D.2, \(\beta_{1}\in(Q-\beta_{0},Q)\), is satisfied one then derives:
\[C_{2}^{+}=R(\beta_{1},\sigma_{1}-\frac{1}{\gamma},\sigma_{2}-\frac{1}{\gamma})H \begin{pmatrix}2Q-\beta_{1}-\frac{2}{\gamma},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2},\sigma_{3}\end{pmatrix}. \tag{3.43}\]
Similarly, we can apply Lemma D.2 around \(t=1\) and get:
\[B_{2}^{-}=R(\beta_{2},\sigma_{2},\sigma_{3})H\begin{pmatrix}\beta_{1},2Q-\beta_{ 2}-\frac{2}{\gamma},\beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2}-\frac{1}{\gamma},\sigma_{3}\end{pmatrix}. \tag{3.44}\]
The quantities \(C_{1},B_{2}^{-},C_{2}^{+}\) identified above are then related by the connection formula (A.8):
\[B_{2}^{-}=\frac{\Gamma(C)\Gamma(A+B-C)}{\Gamma(A)\Gamma(B)}C_{1}+\frac{\Gamma (2-C)\Gamma(A+B-C)}{\Gamma(A-C+1)\Gamma(B-C+1)}C_{2}^{+}. \tag{3.45}\]
We repeat the same procedure to identify the coefficients \(D_{2}^{+},C_{2}^{-}\) using the same parameter ranges as before except with \(t\in(-\infty,0)\):
\[t\in(-\infty,0),\ \ \beta_{1},\beta_{2},\beta_{3}\in(Q-\epsilon,Q),\ \ \sigma_{1},\sigma_{2}\in\mathbf{i}\mathbb{R}+\frac{1}{2\gamma}+\frac{Q}{2},\ \ \sigma_{3}\in\mathcal{B}. \tag{3.46}\]
For \(t\in(-\infty,0)\), \(H_{\frac{2}{\gamma}}(t)\) can be expanded on the basis:
\[H_{\frac{2}{\gamma}}(t) =C_{1}F(A,B,C,t)+C_{2}^{-}t^{1-C}F(1+A-C,1+B-C,2-C,t)\] \[=D_{1}e^{\mathbb{i}\pi A}t^{-A}F(A,1+A-C,1+A-B,t^{-1})+D_{2}^{+}e^ {\mathbb{i}\pi B}t^{-B}F(B,1+B-C,1+B-A,t^{-1}). \tag{3.47}\]
The coefficients \(C_{2}^{-},D_{2}^{+}\) have expressions given by:
\[C_{2}^{-} =e^{-\mathbb{i}\pi(1-\frac{2\beta_{1}}{\gamma}+\frac{4}{\gamma^{ 2}})}R(\beta_{1},\sigma_{1},\sigma_{2})H\begin{pmatrix}2Q-\beta_{1}-\frac{2}{ \gamma},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2},\sigma_{3}\end{pmatrix}, \tag{3.49}\] \[D_{2}^{+} =R(\beta_{3},\sigma_{1}-\frac{1}{\gamma},\sigma_{3})H\begin{pmatrix} \beta_{1},\beta_{2},2Q-\beta_{3}-\frac{2}{\gamma}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}. \tag{3.48}\]
Using the connection formula (A.7) we can write this time:
\[D_{2}^{+}=\frac{\Gamma(C)\Gamma(A-B)}{\Gamma(A)\Gamma(C-B)}C_{1}+e^{\mathbb{i }\pi(1-C)}\frac{\Gamma(2-C)\Gamma(A-B)}{\Gamma(1-B)\Gamma(A-C+1)}C_{2}^{-}. \tag{3.50}\]
By eliminating the coefficient \(C_{1}\) we obtain the relation:
\[\frac{\Gamma(B)}{\Gamma(A+B-C)}B_{2}^{-}-\frac{\Gamma(C-B)}{\Gamma(A-B)}D_{2} ^{+}=\frac{\Gamma(2-C)}{\Gamma(A-C+1)}\left(\frac{\Gamma(B)}{\Gamma(B-C+1)}C _{2}^{+}-\frac{e^{\mathbb{i}\pi(1-C)}\Gamma(C-B)}{\Gamma(1-B)}C_{2}^{-}\right). \tag{3.51}\]
Let us state this identity as a lemma where the constants \(B_{2}^{-}\), \(D_{2}^{+}\), \(C_{2}^{+}\), and \(C_{2}^{-}\) have been replaced by their explicit expressions in terms of \(H\) and \(R\).
**Lemma 3.16**.: _The following identity holds:_
\[\frac{\Gamma(B)}{\Gamma(A+B-C)}R(\beta_{2},\sigma_{2},\sigma_{3})H \begin{pmatrix}\beta_{1},2Q-\beta_{2}-\frac{2}{\gamma},\beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2}-\frac{1}{\gamma},\sigma_{3}\end{pmatrix} -\frac{\Gamma(C-B)}{\Gamma(A-B)}R(\beta_{3},\sigma_{1}-\frac{1}{\gamma},\sigma _{3})H\begin{pmatrix}\beta_{1},\beta_{2},2Q-\beta_{3}-\frac{2}{\gamma}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}\] \[=\frac{\Gamma(2-C)}{\Gamma(A-C+1)}H\begin{pmatrix}2Q-\beta_{1}- \frac{2}{\gamma},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2},\sigma_{3}\end{pmatrix}\left(\frac{ \Gamma(B)}{\Gamma(B-C+1)}R(\beta_{1},\sigma_{1}-\frac{1}{\gamma},\sigma_{2}- \frac{1}{\gamma})\right.\] \[\qquad\qquad-\frac{e^{\mathbb{i}\pi(1-C)}\Gamma(C-B)}{\Gamma(1-B) }e^{-\mathbb{i}\pi(1-\frac{2\beta_{1}}{\gamma}+\frac{4}{\gamma^{2}})}R(\beta_{ 1},\sigma_{1},\sigma_{2})\Bigg{)}. \tag{3.52}\]
_This identity is originally derived in the range of parameters_
\[\beta_{1},\beta_{2},\beta_{3}\in(Q-\epsilon,Q),\ \ \sigma_{1},\sigma_{2}\in\mathbf{i}\mathbb{R}+\frac{1}{2\gamma}+\frac{Q}{2},\ \ \sigma_{3}\in\mathcal{B}, \tag{3.53}\]
_but it can be viewed as an identity of the meromorphically extended functions \(H\) and \(R\), the extension being provided by Lemmas 3.13 and 3.12._
The next step is to again use the limit in which \(H\) degenerates to \(R\) in order to derive from the above identity a relation on \(R\). This requires changing the parameter range of the \(\beta_{i}\) in order to apply Lemma 3.14, which gives \(R\) as a limit of \(H\). Let us now take
\[\beta_{1}=\beta\in(\frac{\gamma}{2},\frac{2}{\gamma}),\quad\beta_{2}=\frac{\gamma}{2}+\eta,\quad\beta_{3}=Q-\beta,\ \sigma_{1}\in\mathbf{i}\mathbb{R}+\frac{1}{2\gamma}+\frac{Q}{2},\ \sigma_{2},\sigma_{3}\in\mathcal{B}, \tag{3.54}\]
and study the asymptotics as \(\eta\to 0\). The functions \(H\) and \(R\) appearing here are viewed as defined by the meromorphic extensions of Lemmas 3.13 and 3.12. For the above choice of parameters
\[q=\frac{4}{\gamma^{2}}-\frac{\eta}{\gamma},\quad A=-\frac{4}{\gamma^{2}}+\frac {\eta}{\gamma},\quad B=\frac{2\beta}{\gamma}-\frac{4}{\gamma^{2}}+\frac{\eta} {\gamma},\quad C=\frac{2\beta}{\gamma}-\frac{4}{\gamma^{2}}, \tag{3.55}\]
and the two \(H\) functions that we are going to apply the Lemma 3.14 to are
\[H\begin{pmatrix}2Q-\beta-\frac{2}{\gamma},\frac{\gamma}{2}+\eta,Q-\beta\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2},\sigma_{3}\end{pmatrix},\quad\text{ and}\quad H\begin{pmatrix}\beta,\frac{\gamma}{2}+\eta,\frac{\gamma}{2}+\beta\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}.\]
We compute the following limits; the last one is trivial and does not require using Lemma 3.14.
\[\lim_{\eta\to 0}\eta D_{2}^{+}=2R(Q-\beta,\sigma_{1}-\frac{1}{\gamma}, \sigma_{3})R(\beta+\frac{\gamma}{2},\sigma_{1},\sigma_{3})=\frac{2R(\beta+ \frac{\gamma}{2},\sigma_{1},\sigma_{3})}{R(\beta+Q,\sigma_{1}-\frac{1}{\gamma},\sigma_{3})}, \tag{3.57}\] \[\lim_{\eta\to 0}\eta C_{2}^{-}=2e^{-\mathfrak{i}\pi(1-\frac{2 \beta}{\gamma}+\frac{4}{\gamma^{2}})}R(\beta,\sigma_{1},\sigma_{2})R(2Q-\beta -\frac{2}{\gamma},\sigma_{1}-\frac{1}{\gamma},\sigma_{2})=\frac{2e^{- \mathfrak{i}\pi(1-\frac{2\beta}{\gamma}+\frac{4}{\gamma^{2}})}R(\beta,\sigma_ {1},\sigma_{2})}{R(\beta+\frac{2}{\gamma},\sigma_{1}-\frac{1}{\gamma},\sigma_ {2})},\] (3.58) \[\lim_{\eta\to 0}\eta^{2}B_{2}^{-}=-4\lim_{\eta\to 0}\eta R(\frac{ \gamma}{2}+\eta,\sigma_{2},\sigma_{3}). \tag{3.56}\]
Putting all these into (3.52), we get:
\[-4\frac{\Gamma(\frac{2\beta}{\gamma}-\frac{4}{\gamma^{2}})}{ \Gamma(-\frac{4}{\gamma^{2}})}\lim_{\eta\to 0}\eta R(\frac{\gamma}{2}+\eta, \sigma_{2},\sigma_{3})+\frac{2\gamma}{\Gamma(-\frac{2\beta}{\gamma})}\frac{( 1-\frac{\gamma\beta}{2}+\frac{\gamma^{2}}{4})}{-\frac{\gamma\beta}{2}}\frac{R( \beta,\sigma_{1},\sigma_{3}-\frac{\gamma}{4})}{R(\beta+\frac{2}{\gamma}, \sigma_{1}-\frac{1}{\gamma},\sigma_{3}-\frac{\gamma}{4})}\] \[=\frac{2\gamma(1-\frac{2\beta}{\gamma}+\frac{4}{\gamma^{2}})}{ \Gamma(1-\frac{2\beta}{\gamma})}\frac{R(\beta,\sigma_{1},\sigma_{2})}{R(\beta+ \frac{2}{\gamma},\sigma_{1}-\frac{1}{\gamma},\sigma_{2})}.\]
After simplifications one obtains:
\[\frac{1}{\Gamma(-1+\frac{2\beta}{\gamma}-\frac{4}{\gamma^{2}}) \Gamma(1-\frac{2\beta}{\gamma})}\left(\frac{R(\beta,\sigma_{1},\sigma_{2})}{R( \beta+\frac{2}{\gamma},\sigma_{1}-\frac{1}{\gamma},\sigma_{2})}-\frac{R(\beta, \sigma_{1},\sigma_{3}-\frac{\gamma}{4})}{R(\beta+\frac{2}{\gamma},\sigma_{1}- \frac{1}{\gamma},\sigma_{3}-\frac{\gamma}{4})}\right)\] \[=\frac{2}{\gamma\Gamma(-\frac{4}{\gamma^{2}})}\lim_{\eta\to 0}\eta R(\frac{ \gamma}{2}+\eta,\sigma_{2},\sigma_{3}). \tag{3.59}\]
We want to determine the function on the right hand side of the above equation. It is natural to use the \(\frac{\gamma}{2}\)-shift equation (3.31) on \(R\) to write:
\[R(\frac{\gamma}{2}+\eta,\sigma_{2},\sigma_{3})=-\frac{\Gamma(-1+\frac{\gamma \eta}{2})\Gamma(1-\frac{\gamma^{2}}{4}-\frac{\gamma\eta}{2})}{\Gamma(-\frac{ \gamma^{2}}{4})}\left(g_{\frac{\gamma}{2}}(\sigma_{2})-g_{\frac{\gamma}{2}}( \sigma_{3}-\frac{\gamma}{4}+\frac{\eta}{2})\right)R(\gamma+\eta,\sigma_{2}, \sigma_{3}-\frac{\gamma}{4}).\]
Simplifying the limit, one gets:
\[\lim_{\eta\to 0}\eta R(\frac{\gamma}{2}+\eta,\sigma_{2},\sigma_{3})=\frac{2}{ \gamma}\frac{\Gamma(1-\frac{\gamma^{2}}{4})}{\Gamma(-\frac{\gamma^{2}}{4})} \left(g_{\frac{\gamma}{2}}(\sigma_{2})-g_{\frac{\gamma}{2}}(\sigma_{3}-\frac{ \gamma}{4})\right)R(\gamma,\sigma_{2},\sigma_{3}-\frac{\gamma}{4}). \tag{3.60}\]
We now introduce the shorthand notation \(\mathcal{R}(\beta,\sigma_{1},\sigma_{2})=\frac{R(\beta,\sigma_{1},\sigma_{2})}{R(\beta+\frac{2}{\gamma},\sigma_{1}-\frac{1}{\gamma},\sigma_{2})}\). By taking \(\sigma_{3}=\sigma_{2}-\frac{\gamma}{4}\) in (3.59), since \(\lim_{\eta\to 0}\eta R(\frac{\gamma}{2}+\eta,\sigma_{2},\sigma_{2}-\frac{\gamma}{4})=0\), we obtain that \(\mathcal{R}(\beta,\sigma_{1},\sigma_{2})\) is \(\frac{\gamma}{2}\)-periodic in \(\sigma_{2}\). At this point we will use an extra input from the mating-of-trees framework, which will give us the explicit value of \(R(\gamma,\sigma_{2},\sigma_{3})\).
**Lemma 3.17**.: _As equality of meromorphic functions of \((\sigma,\sigma^{\prime})\in\mathbb{C}^{2}\):_
\[R(\gamma,\sigma,\sigma^{\prime})=\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})} {\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{2}{\gamma^{2}}-\frac{1}{2}} \frac{\Gamma(1-\frac{4}{\gamma^{2}})}{\Gamma(1-\frac{\gamma^{2}}{4})}\frac{ \cos(\frac{4\pi}{\gamma}(\sigma-\frac{Q}{2}))-\cos(\frac{4\pi}{\gamma}( \sigma^{\prime}-\frac{Q}{2}))}{\cos(\gamma\pi(\sigma-\frac{Q}{2}))-\cos( \gamma\pi(\sigma^{\prime}-\frac{Q}{2}))}.\]
Proof of Lemma 3.17.: Let us introduce the notations \(\mu=g_{\frac{\gamma}{2}}(\sigma_{2})\), \(\mu^{\prime}=g_{\frac{\gamma}{2}}(\sigma_{3}-\frac{\gamma}{4})\), \(h(\mu)=-\frac{1}{\Gamma(-1+\frac{2\beta}{\gamma}-\frac{4}{\gamma^{2}})\Gamma(1-\frac{2\beta}{\gamma})}\frac{R(\beta,\sigma_{1},\sigma_{2})}{R(\beta+\frac{2}{\gamma},\sigma_{1}-\frac{1}{\gamma},\sigma_{2})}\). Combining equations (3.59) and (3.60) implies that for \(\mu,\mu^{\prime}\) in \(\{z\in\mathbb{C}\,:\,\Re z>0\}\) we have \((\mu-\mu^{\prime})R(\gamma,\mu,\mu^{\prime})=h(\mu)-h(\mu^{\prime})\), with \(h\) holomorphic on that domain, and, by taking \(k\) derivatives with respect to \(\mu\), that:
\[h^{(k)}(\mu)=\mu\frac{\partial^{k}}{\partial\mu^{k}}R(\gamma,\mu,\mu^{\prime}) +k\frac{\partial^{k-1}}{\partial\mu^{k-1}}R(\gamma,\mu,\mu^{\prime}).\]
Sending \(\mu^{\prime}\to 0\) is legitimate since we can analytically continue \(\frac{\partial^{k-1}}{\partial\mu^{k-1}}R(\beta,\mu,\mu^{\prime})\) to \(\beta=\gamma\), where it has the probabilistic interpretation via \(\mathcal{M}_{2}^{\text{disk}}(\gamma)\). By Lemma B.3, using the notation \(f(\mu)=\cos(\frac{4}{\gamma^{2}}\arccos(\mu/c))\) with \(c=\sqrt{1/\sin(\frac{\pi\gamma^{2}}{4})}\) and \(\mathcal{C}_{1}\) as in that lemma, we conclude \(f^{(k)}(\mu)=\frac{\gamma}{2(Q-\gamma)}\mathcal{C}_{1}h^{(k)}(\mu)\). Since \(f,h\) are both analytic in a suitable domain, we have:
\[\frac{\gamma}{2(Q-\gamma)}\mathcal{C}_{1}h(\mu)=f(\mu)+\sum_{i=0}^{k_{0}}a_{i} \mu^{i}.\]
Thus,
\[R(\gamma,\sigma,\sigma^{\prime})=c_{\gamma}\frac{\cos(\frac{4\pi}{\gamma}( \sigma-\frac{Q}{2}))-\cos(\frac{4\pi}{\gamma}(\sigma^{\prime}-\frac{Q}{2}))+p( \sigma)-p(\sigma^{\prime})}{\cos(\gamma\pi(\sigma-\frac{Q}{2}))-\cos(\gamma\pi (\sigma^{\prime}-\frac{Q}{2}))}\]
where \(p(\sigma)=\sum_{i=1}^{k_{0}}a_{i}(c\cos(\pi\gamma(\sigma-\frac{Q}{2})))^{i}\). Since \(f\) and \(h\) are \(\frac{\gamma}{2}\)-periodic, we conclude that \(p\) is \(\frac{\gamma}{2}\)-periodic. By its definition, it is also clear that \(p\) is \(\frac{2}{\gamma}\)-periodic. Thus, when \(\gamma^{2}\) is irrational, \(p\) has two periods whose ratio is irrational, hence \(p\) must be constant (and thus \(p(\sigma)-p(\sigma^{\prime})=0\)). Lastly we compute the constant \(c_{\gamma}\) introduced above as:
\[c_{\gamma}=(\frac{4}{\gamma^{2}}-1)\frac{1}{\mathcal{C}_{1}}\sqrt{\sin(\frac{ \pi\gamma^{2}}{4})}=\frac{1}{\pi}\Gamma(1-\frac{4}{\gamma^{2}})\sqrt{\sin(\frac{ \pi\gamma^{2}}{4})}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{ \gamma^{2}}{4})}\right)^{\frac{2}{\gamma^{2}}}=\frac{\Gamma(1-\frac{4}{\gamma^{2} })}{\Gamma(1-\frac{\gamma^{2}}{4})}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{ \Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{2}{\gamma^{2}}-\frac{1}{2}}.\]
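As a small numerical sketch (not part of the proof), one can evaluate the explicit formula of Lemma 3.17 at sample values, checking that it is symmetric under \(\sigma\leftrightarrow\sigma^{\prime}\) and that the apparent \(0/0\) on the diagonal \(\sigma^{\prime}=\sigma\) is a removable singularity. The value of \(\gamma\) below is an arbitrary choice.

```python
from mpmath import mp, mpf, gamma as Gamma, pi, cos

mp.dps = 30
g = mpf('1.2')              # sample value of gamma in (0,2)
Q = 2/g + g/2

def R_gamma(s, sp):
    pref = (pi*Gamma(g**2/4)/Gamma(1 - g**2/4))**(2/g**2 - mpf('0.5')) \
           * Gamma(1 - 4/g**2)/Gamma(1 - g**2/4)
    num = cos(4*pi/g*(s - Q/2)) - cos(4*pi/g*(sp - Q/2))
    den = cos(g*pi*(s - Q/2)) - cos(g*pi*(sp - Q/2))
    return pref*num/den

s, sp = mpf('0.9'), mpf('1.4')
print(R_gamma(s, sp) - R_gamma(sp, s))     # symmetry in sigma <-> sigma': numerically zero
print(R_gamma(s, s + mpf('1e-8')))         # finite value near the diagonal (removable singularity)
```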
Using this lemma we can now complete the proof of Lemma 3.15; below, the constant \(c_{\gamma}\), depending only on \(\gamma\), may change from line to line. We compute:
\[\lim_{\eta\to 0}\eta R(\frac{\gamma}{2}+\eta,\sigma_{2},\sigma_{3})=\frac{2}{ \gamma}\frac{\Gamma(1-\frac{\gamma^{2}}{4})}{\Gamma(-\frac{\gamma^{2}}{4})} \left(g_{\frac{\gamma}{2}}(\sigma_{2})-g_{\frac{\gamma}{2}}(\sigma_{3}-\frac{ \gamma}{4})\right)R(\gamma,\sigma_{2},\sigma_{3}-\frac{\gamma}{4}). \tag{3.61}\]
\[\lim_{\eta\to 0}\eta R(\frac{\gamma}{2}+\eta,\sigma_{2},\sigma_{3})=\frac{2}{ \gamma}\frac{\Gamma(1-\frac{\gamma^{2}}{4})}{\Gamma(-\frac{\gamma^{2}}{4}) \sqrt{\sin(\frac{\pi\gamma^{2}}{4})}}\left(\cos(\pi\gamma(\sigma_{2}-\frac{Q} {2}))-\cos(\pi\gamma(\sigma_{3}-\frac{\gamma}{4}-\frac{Q}{2}))\right)R(\gamma,\sigma_{2},\sigma_{3}-\frac{\gamma}{4})\]
\[=\frac{\gamma}{2\pi}\Gamma(1-\frac{4}{\gamma^{2}})\left(\frac{\pi\Gamma( \frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{2}{ \gamma^{2}}}\left(\cos(\frac{4\pi}{\gamma}(\sigma_{2}-\frac{Q}{2}))-\cos( \frac{4\pi}{\gamma}(\sigma_{3}-\frac{\gamma}{4}-\frac{Q}{2}))\right).\]
Therefore we land on:
\[\mathcal{R}(\beta,\sigma_{1},\sigma_{2})-\mathcal{R}(\beta,\sigma_ {1},\sigma_{3}-\frac{\gamma}{4})\] \[=-\frac{4}{\pi\gamma^{2}}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{ 4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{2}{\gamma^{2}}}\Gamma(-1+ \frac{2\beta}{\gamma}-\frac{4}{\gamma^{2}})\Gamma(1-\frac{2\beta}{\gamma}) \left(\cos(\frac{2\pi}{\gamma}(2\sigma_{2}-\frac{2}{\gamma}))-\cos(\frac{2 \pi}{\gamma}(2\sigma_{3}-\frac{\gamma}{2}-\frac{2}{\gamma}))\right). \tag{3.62}\]
By setting \(\sigma_{3}\) to any fixed value, the above equation implies the following claim
\[\mathcal{R}(\beta,\sigma_{1},\sigma_{2})=-\frac{4}{\pi\gamma^{2}}\left(\frac{ \pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{ \frac{2}{\gamma^{2}}}\Gamma(-1+\frac{2\beta}{\gamma}-\frac{4}{\gamma^{2}}) \Gamma(1-\frac{2\beta}{\gamma})\cos(\frac{2\pi}{\gamma}(2\sigma_{2}-\frac{2}{ \gamma}))+u(\sigma_{1},\beta,\gamma), \tag{3.63}\]
where \(u(\sigma_{1},\beta,\gamma)\) is an unknown function that does not depend on \(\sigma_{2}\). It thus remains to evaluate this function \(u\). For this we use the fact that \(\mathcal{R}(\beta,\sigma_{1},\sigma_{1}-\frac{\beta}{2})=0\), which can be easily deduced from the \(\frac{\gamma}{2}\)-shift equation (3.30): indeed, the right-hand side of (3.30) vanishes when \(\sigma_{2}=\sigma_{1}-\frac{\beta}{2}\). This implies that:
\[u(\sigma_{1},\beta,\gamma)=\frac{4}{\pi\gamma^{2}}\left(\frac{\pi\Gamma(\frac{ \gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{2}{\gamma^{2}} }\Gamma(-1+\frac{2\beta}{\gamma}-\frac{4}{\gamma^{2}})\Gamma(1-\frac{2\beta}{ \gamma})\cos(\frac{2\pi}{\gamma}(2\sigma_{1}-\beta-\frac{2}{\gamma})).\]
Setting now \(c_{\frac{2}{\gamma}}(\gamma)=\frac{4}{\gamma^{2}}\pi^{\frac{4}{\gamma^{2}}-1} \Gamma(1-\frac{\gamma^{2}}{4})^{-\frac{4}{\gamma^{2}}}\), we have thus shown that:
\[\frac{R(\beta,\sigma_{1},\sigma_{2})}{R(\beta+\frac{2}{\gamma},\sigma_{1}- \frac{1}{\gamma},\sigma_{2})}=c_{\frac{2}{\gamma}}(\gamma)\Gamma(-1+\frac{2 \beta}{\gamma}-\frac{4}{\gamma^{2}})\Gamma(1-\frac{2\beta}{\gamma})\left(g_{ \frac{2}{\gamma}}(\sigma_{2})-g_{\frac{2}{\gamma}}(\sigma_{1}-\frac{\beta}{2} )\right). \tag{3.64}\]
Similarly, working with the auxiliary function \(\tilde{H}_{\chi}(t)\) yields the shift equation:
\[\frac{R(\beta+\frac{2}{\gamma},\sigma_{1}-\frac{1}{\gamma},\sigma_{2})}{R( \beta+\frac{4}{\gamma},\sigma_{1},\sigma_{2})}=c_{\frac{2}{\gamma}}(\gamma) \Gamma(-1+\frac{2\beta}{\gamma})\Gamma(1-\frac{2\beta}{\gamma}-\frac{4}{ \gamma^{2}})\left(g_{\frac{2}{\gamma}}(\sigma_{2})-g_{\frac{2}{\gamma}}(\sigma _{1}+\frac{\beta}{2})\right).\]
This completes the proof of Lemma 3.15.
### Proof of the \(\frac{2}{\gamma}\)-shift equations for \(H\)
We now complete the derivation of the shift equations for \(H\) in the case \(\chi=\frac{2}{\gamma}\).
Proof of Theorem 3.1.: First, the claim on the meromorphic extension of \(H\) has been obtained in Lemma 3.13. Next, the shift equations come from applying (A.8). The first comes from the relation
\[B_{1}=\frac{\Gamma(\chi(\beta_{1}-\chi))\Gamma(1-\chi\beta_{2}+\chi^{2})}{\Gamma(\chi(\beta_{1}-\chi+q\frac{\gamma}{2}))\Gamma(1-\chi\beta_{2}+\chi^{2}-q\frac{\gamma\chi}{2})}C_{1}+\frac{\Gamma(2-\chi\beta_{1}+\chi^{2})\Gamma(1-\chi\beta_{2}+\chi^{2})}{\Gamma(1+\frac{q\gamma\chi}{2})\Gamma(2-\chi(\beta_{1}+\beta_{2}-2\chi+q\frac{\gamma}{2}))}C_{2}^{+}, \tag{3.65}\]
and the second can be deduced by using:
\[\tilde{B}_{2}^{-}=\frac{\Gamma(\chi(\beta_{1}-\chi))\Gamma(-1+\chi\beta_{2}-\chi^{2})}{\Gamma(-q\frac{\gamma\chi}{2})\Gamma(-1+\chi(\beta_{1}+\beta_{2}-2\chi+q\frac{\gamma}{2}))}\tilde{C}_{1}+\frac{\Gamma(2-\chi\beta_{1}+\chi^{2})\Gamma(-1+\chi\beta_{2}-\chi^{2})}{\Gamma(1-\chi(\beta_{1}-\chi+q\frac{\gamma}{2}))\Gamma(\chi\beta_{2}-\chi^{2}+q\frac{\gamma\chi}{2})}\tilde{C}_{2}^{+}. \tag{3.66}\]
We have already derived them in the case of \(\chi=\frac{\gamma}{2}\) in Lemma 3.7. Setting now \(\chi=\frac{2}{\gamma}\), we similarly identify the constants \(B_{1},C_{1},C_{2}^{+},\tilde{B}_{2}^{-},\tilde{C}_{1},\tilde{C}_{2}^{+}\). For \(C_{2}^{+},\tilde{B}_{2}^{-},\tilde{C}_{2}^{+}\) we use the result of Lemma D.2 which gives us an expression with both an \(H\) and an \(R\) function. For instance:
\[C_{2}^{+}=R(\beta_{1},\sigma_{1}-\frac{1}{\gamma},\sigma_{2}-\frac{1}{\gamma} )H\begin{pmatrix}2Q-\beta_{1}-\frac{2}{\gamma},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2},\sigma_{3}\end{pmatrix}.\]
To obtain an expression for \(C_{2}^{+}\) involving \(H\begin{pmatrix}\beta_{1}+\frac{2}{\gamma},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2},\sigma_{3}\end{pmatrix}\) and no \(R\) function, we will need to apply the shift equation on \(R\) given in Theorem 3.2 to simplify the ratio \(\frac{R(\beta_{1},\sigma_{1}-\frac{1}{\gamma},\sigma_{2}-\frac{1}{\gamma})}{ R(\beta_{1}+\frac{2}{\gamma},\sigma_{1}-\frac{1}{\gamma},\sigma_{2})}\) and then the reflection principle given by equation (3.32). The same strategy can be applied to \(\tilde{C}_{2}^{+}\) and \(\tilde{B}_{2}^{-}\). This allows us to write:
\[C_{2}^{+}=\chi^{2}\pi^{\frac{2\chi}{\gamma}-1}\frac{\Gamma(-1+ \chi\beta_{1}-\chi^{2})\Gamma(1-\chi\beta_{1})}{\Gamma(1-\frac{\gamma^{2}}{4} )^{\frac{2\chi}{\gamma}}}\left(g_{\chi}(\sigma_{1}-\frac{\chi}{2})-g_{\chi}( \sigma_{2}+\frac{\beta_{1}}{2}-\frac{\chi}{2})\right)H\begin{pmatrix}\beta_{ 1}+\chi,\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{\chi}{2},\sigma_{2},\sigma_{3}\end{pmatrix}, \tag{3.68}\] \[\tilde{C}_{2}^{+}=\chi^{2}\pi^{\frac{2\chi}{\gamma}-1}\frac{\Gamma (-1+\chi\beta_{1}-\chi^{2})\Gamma(1-\chi\beta_{1})}{\Gamma(1-\frac{\gamma^{2}} {4})^{\frac{2\chi}{\gamma}}}\left(g_{\chi}(\sigma_{1}+\frac{\chi}{2})-g_{\chi} (\sigma_{2}-\frac{\beta_{1}}{2}+\frac{\chi}{2})\right)H\begin{pmatrix}\beta_{ 1}+\chi,\beta_{2},\beta_{3}\\ \sigma_{1}+\frac{\chi}{2},\sigma_{2},\sigma_{3}\end{pmatrix},\] (3.69) \[\tilde{B}_{2}^{-}=\chi^{2}\pi^{\frac{2\chi}{\gamma}-1}\frac{\Gamma (-1+\chi\beta_{2}-\chi^{2})\Gamma(1-\chi\beta_{2})}{\Gamma(1-\frac{\gamma^{2}} {4})^{\frac{2\chi}{\gamma}}}\left(g_{\chi}(\sigma_{3})-g_{\chi}(\sigma_{2}+ \frac{\beta_{2}}{2})\right)H\begin{pmatrix}\beta_{1},\beta_{2}+\chi,\beta_{3} \\ \sigma_{1}+\frac{\chi}{2},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}. \tag{3.67}\]
Putting all these into (3.65) and (3.66) proves the shift equations stated in the theorem.
## 4. Proof of the main theorems
In this section we will successively prove Theorem 1.3, Theorem 1.1 and Theorem 1.2. The first two will rely on the shift equations of Theorems 3.1 and 3.2. The last proof will use Theorem 1.3 combined with the conformal welding result of Wu [23], which itself builds on our earlier result [1].
### Proof of \(R=R_{\rm FZZ}\)
Proof of Theorem 1.3.: First assume \(\gamma^{2}\not\in\mathbb{Q}\). By combining both shift equations of Theorem 3.2 we obtain for \(\chi\in\{\frac{\gamma}{2},\frac{2}{\gamma}\}\)
\[\frac{R(\beta_{1},\sigma_{1},\sigma_{2})}{R(\beta_{1}+2\chi,\sigma_ {1},\sigma_{2})}=\tilde{c}_{\chi}(\gamma)\Gamma(-1+\chi\beta_{1}-\chi^{2}) \Gamma(1-\chi\beta_{1}-\chi^{2})\Gamma(1-\chi\beta_{1})\Gamma(-1+\chi\beta_{1})\] \[\times 4\sin(\pi\chi(\frac{\beta}{2}-\sigma_{1}-\sigma_{2}+Q))\sin( \pi\chi(\frac{\beta}{2}+\sigma_{1}+\sigma_{2}-Q))\sin(\pi\chi(\frac{\beta}{2}+ \sigma_{2}-\sigma_{1}))\sin(\pi\chi(\frac{\beta}{2}+\sigma_{1}-\sigma_{2})), \tag{4.1}\]
where \(\tilde{c}_{\frac{\gamma}{2}}(\gamma)=\frac{1}{\sin(\pi\frac{\gamma^{2}}{4})\Gamma(-\frac{\gamma^{2}}{4})^{2}}\) and where \(\tilde{c}_{\frac{2}{\gamma}}(\gamma)=\frac{16}{\gamma^{2}}\pi^{\frac{8}{\gamma^{2}}-2}\Gamma(1-\frac{\gamma^{2}}{4})^{-\frac{8}{\gamma^{2}}}\sin(\frac{\pi\gamma^{2}}{4})^{-\frac{4}{\gamma^{2}}}\).
Consider now the ratio \(f(\beta)=\frac{R(\beta)}{R_{\rm FZZ}(\beta)}\). By using the shift equations (A.10), (A.11), (A.13) satisfied by \(\Gamma_{\frac{\gamma}{2}}\) and \(S_{\frac{\gamma}{2}}\), one can show that \(R_{\rm FZZ}(\beta)\) satisfies the same shift equations as \(R\) for both values of \(\chi\). From this we obtain that:
\[f(\beta+\gamma)=f(\beta)\quad\text{and}\quad f(\beta+\frac{4}{\gamma})=f(\beta). \tag{4.2}\]
The function \(f\) is meromorphic and, by (4.2), both \(\gamma\)-periodic and \(\frac{4}{\gamma}\)-periodic; for \(\gamma^{2}\notin\mathbb{Q}\) the standard argument then implies that \(f\) is a constant function of \(\beta\). One can then show \(f(\beta)=1\) by using the fact that \(R(Q)=R_{\rm FZZ}(Q)=-1\). The fact that \(R_{\rm FZZ}(Q)=-1\) can be checked directly on the exact formula by using the relation \(S_{\frac{\gamma}{2}}(Q-x)=(S_{\frac{\gamma}{2}}(x))^{-1}\). To show that \(R(Q)=-1\) one simply combines the result of Lemma 3.17 with the shift equations on \(R\) given by Theorem 3.2. Lastly, the case \(\gamma^{2}\in\mathbb{Q}\) is recovered by a simple continuity argument (as in [10]).
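For the reader's convenience, here is a sketch of the standard two-period argument invoked above, using only that \(f\) is meromorphic in \(\beta\). The relations (4.2) give
\[f(\beta+\omega)=f(\beta)\quad\text{for every}\quad\omega\in\Lambda:=\gamma\mathbb{Z}+\tfrac{4}{\gamma}\mathbb{Z},\]
and \(\Lambda\) is dense in \(\mathbb{R}\) when \(\gamma^{2}\notin\mathbb{Q}\). Fixing \(\beta_{0}\) away from the poles of \(f\), the meromorphic function \(\omega\mapsto f(\beta_{0}+\omega)-f(\beta_{0})\) vanishes on the dense set \(\Lambda\subset\mathbb{R}\); since its poles are isolated, its zeros accumulate at a point where it is holomorphic, so it vanishes identically by the identity theorem, and \(f\) is constant.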
### Proof of \(H=H_{\rm PT}\)
Before completing the proof of Theorem 1.1, we provide the following simple residue computations on our functions \(H,H_{\rm PT}\) that will be used in the uniqueness argument.
**Lemma 4.1**.: _Set \(\overline{\beta}=\beta_{1}+\beta_{2}+\beta_{3}\). For \(\beta_{i},\sigma_{i}\) in the domain of Lemma 3.13 where \(H\) has been analytically extended, \(H\) satisfies the following properties:_
\[\lim_{\beta_{1}\to 2Q-\beta_{2}-\beta_{3}}(\frac{\bar{\beta}}{2}-Q)H \begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=1,\] \[\lim_{\beta_{1}\to 2Q-\beta_{2}-\beta_{3}-\gamma}(\frac{\bar{\beta}}{2 }-Q+\frac{\gamma}{2})H\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=\frac{1}{\pi}\Gamma(1-\frac{ \gamma\beta_{2}}{2})\Gamma(1-\frac{\gamma\beta_{3}}{2})\Gamma(\frac{\gamma \beta_{2}}{2}+\frac{\gamma\beta_{3}}{2}-1)\] \[\times\left(\cos(\pi\gamma(\sigma_{1}-\frac{Q}{2}))\sin(\frac{ \pi\gamma\beta_{2}}{2})+\cos(\pi\gamma(\sigma_{2}-\frac{Q}{2}))\sin(\frac{\pi \gamma\beta_{3}}{2})-\cos(\pi\gamma(\sigma_{3}-\frac{Q}{2}))\sin(\frac{\pi \gamma(\beta_{2}+\beta_{3})}{2})\right).\]
_The same limits hold with \(H\) replaced with \(H_{\rm PT}\)._
Proof.: We will focus on checking the second limit, the first one following the same steps but with much simpler computations. We must first assume our parameters are such that we can represent \(H\) using the probabilistic expression. More precisely they must obey the condition of Definition 2.13, \(\beta_{i}<Q\) and \(Q-\frac{1}{2}\sum\beta_{i}<\gamma\wedge\frac{2}{\gamma}\wedge\min_{i}(Q-\beta_ {i})\). To study the limit \(\beta_{1}\to 2Q-\beta_{2}-\beta_{3}-\gamma\), we will assume that \(\beta_{i}<\frac{2}{\gamma}-\eta\) for \(i=1,2,3\) and that \(Q-\frac{1}{2}\sum_{i}\beta_{i}<\frac{\gamma}{2}+\eta\) for a small \(\eta>0\). These conditions can be satisfied and imply the required bounds.
Now recall from Lemma 2.11 that \(s(s+\frac{\gamma}{2})H_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},\beta_ {3})}=\hat{H}_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}\) where
\[\hat{H}_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}:=\int( \gamma(s+\frac{\gamma}{2})A+\frac{\gamma^{2}}{2}A(\sum_{i}\mu_{i}L_{i})+\frac{ \gamma^{2}}{4}(\sum_{i}\mu_{i}L_{i})^{2})e^{-A-\sum_{i}\mu_{i}L_{i}}\,\mathrm{ LF}_{\mathbb{H}}^{(\beta_{1},0),(\beta_{2},1),(\beta_{3},\infty)}( \mathrm{d}\phi),\]
and where \(s=\frac{1}{2}\sum_{i}\beta_{i}-Q\), \(A=\mathcal{A}_{\phi}(\mathbb{H}),L_{1}=\mathcal{L}_{\phi}(-\infty,0),L_{2}=\mathcal{L}_{\phi}(0,1)\) and \(L_{3}=\mathcal{L}_{\phi}(1,\infty)\). We can now write out explicitly the integration over the zero mode. For this let \(h\) be a Gaussian free field and \(\widetilde{h}:=h-2Q\log|\cdot|_{+}+\sum\frac{\beta_{i}}{2}G_{\mathbb{H}}(\cdot,s_{i})\), where \((s_{1},s_{2},s_{3})=(0,1,\infty)\). Write \(\widetilde{A}=\mathcal{A}_{\widetilde{h}}(\mathbb{H})\) and \(\widetilde{L}_{1}=\mathcal{L}_{\widetilde{h}}(-\infty,0),\widetilde{L}_{2}=\mathcal{L}_{\widetilde{h}}(0,1),\widetilde{L}_{3}=\mathcal{L}_{\widetilde{h}}(1,+\infty)\), \(\widetilde{L}=\sum_{i}\mu_{i}\widetilde{L}_{i}\). Since \(A=e^{\gamma c}\widetilde{A}\) and \(L_{i}=e^{\gamma c/2}\widetilde{L}_{i}\):
\[\begin{split}\hat{H}^{(\beta_{1},\beta_{2},\beta_{3})}_{(\mu_{1},\mu_{2},\mu_{3})}&=\int_{\mathbb{R}}dc\,e^{sc}\mathbb{E}\left[\left(\gamma(s+\frac{\gamma}{2})e^{\gamma c}\widetilde{A}+\frac{\gamma^{2}}{2}e^{\frac{3\gamma c}{2}}\widetilde{A}\widetilde{L}+\frac{\gamma^{2}}{4}e^{\gamma c}\widetilde{L}^{2}\right)e^{-e^{\gamma c}\widetilde{A}-e^{\frac{\gamma c}{2}}\widetilde{L}}\right]\\ &\underset{s\rightarrow-\frac{\gamma}{2}}{=}\frac{\gamma}{2}\int_{\mathbb{R}}dc\,\mathbb{E}\left[\widetilde{L}\left(\gamma e^{\gamma c}\widetilde{A}+\frac{\gamma}{2}e^{\frac{\gamma c}{2}}\widetilde{L}\right)e^{-e^{\gamma c}\widetilde{A}-e^{\frac{\gamma c}{2}}\widetilde{L}}\right]=\frac{\gamma}{2}\int_{0}^{+\infty}du\,\mathbb{E}[\widetilde{L}e^{-u}]=\frac{\gamma}{2}\mathbb{E}[\widetilde{L}].\end{split}\]
In the last line we have used the change of variable \(u=e^{\gamma c}\widetilde{A}+e^{\frac{\gamma c}{2}}\widetilde{L}\). The expectation \(\mathbb{E}[\widetilde{L}]\) of the boundary GMC measure simply reduces to an integral over \(\mathbb{R}\):
\[\begin{split}&\mathbb{E}[\widetilde{L}]=\mu_{1}\int_{-\infty}^{0} \frac{dx}{|x|^{\frac{\gamma\beta_{1}}{2}}|x-1|^{\frac{\gamma\beta_{2}}{2}}}+ \mu_{2}\int_{0}^{1}\frac{dx}{|x|^{\frac{\gamma\beta_{1}}{2}}|x-1|^{\frac{ \gamma\beta_{2}}{2}}}+\mu_{3}\int_{1}^{\infty}\frac{dx}{|x|^{\frac{\gamma \beta_{1}}{2}}|x-1|^{\frac{\gamma\beta_{2}}{2}}}\\ &=\mu_{1}\frac{\Gamma(\frac{\gamma\beta_{1}}{2}+\frac{\gamma\beta _{2}}{2}-1)\Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(\frac{\gamma\beta_{2}} {2})}+\mu_{2}\frac{\Gamma(1-\frac{\gamma\beta_{1}}{2})\Gamma(1-\frac{\gamma \beta_{2}}{2})}{\Gamma(2-\frac{\gamma\beta_{1}}{2}-\frac{\gamma\beta_{2}}{2} )}+\mu_{3}\frac{\Gamma(\frac{\gamma\beta_{1}}{2}+\frac{\gamma\beta_{2}}{2}-1 )\Gamma(1-\frac{\gamma\beta_{2}}{2})}{\Gamma(\frac{\gamma\beta_{1}}{2})}\\ &=\frac{1}{\pi}\sqrt{\frac{1}{\sin\frac{\pi\gamma^{2}}{4}}} \Gamma(1-\frac{\gamma\beta_{2}}{2})\Gamma(1-\frac{\gamma\beta_{3}}{2})\Gamma( \frac{\gamma\beta_{2}}{2}+\frac{\gamma\beta_{3}}{2}-1)\\ &\times\left(\cos(\pi\gamma(\sigma_{1}-\frac{Q}{2}))\sin(\frac{ \pi\gamma\beta_{2}}{2})+\cos(\pi\gamma(\sigma_{2}-\frac{Q}{2}))\sin(\frac{\pi \gamma\beta_{3}}{2})-\cos(\pi\gamma(\sigma_{3}-\frac{Q}{2}))\sin(\frac{\pi \gamma(\beta_{2}+\beta_{3})}{2})\right).\end{split}\]
Note the integral above converges provided the parameters \(\beta_{1},\beta_{2}\) obey the constraints \(\beta_{1},\beta_{2}<\frac{2}{\gamma}\), \(\beta_{1}+\beta_{2}>\frac{2}{\gamma}\), which are implied by the parameter range at the beginning of the proof. This condition can first be assumed in order for the above integral to converge and then relaxed thanks to the analytic extension of \(H\) given by Lemma 3.13. The other limit is even simpler: using the expression with one less truncation we directly obtain the answer. Thanks to the result of Lemma A.2, the same properties hold for the function \(H_{\text{PT}}\).
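As a quick numerical sanity check of the Beta-type integrals entering the computation of \(\mathbb{E}[\widetilde{L}]\) above (this is only an illustration, not part of the argument; the values of \(\gamma,\beta_{1},\beta_{2}\) below are arbitrary test choices), one can verify the middle identity with a few lines of Python. The two half-line integrals are checked in the same way after the substitutions \(x\mapsto-x\) and \(x\mapsto 1/x\).

```python
# Check of int_0^1 x^{-a} (1-x)^{-b} dx = Gamma(1-a)Gamma(1-b)/Gamma(2-a-b)
# with a = gamma*beta1/2, b = gamma*beta2/2 and a, b < 1 (arbitrary test values).
from scipy.integrate import quad
from scipy.special import gamma as G

g, beta1, beta2 = 1.0, 1.3, 1.1
a, b = g * beta1 / 2, g * beta2 / 2

# QUADPACK handles the algebraic endpoint singularities via the 'alg' weight (x-0)^(-a)*(1-x)^(-b)
numeric, _ = quad(lambda x: 1.0, 0.0, 1.0, weight='alg', wvar=(-a, -b))
exact = G(1 - a) * G(1 - b) / G(2 - a - b)

print(numeric, exact)   # the two values agree to quadrature accuracy
assert abs(numeric - exact) < 1e-8
```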
We now move to completing the proof of the theorem.
Proof of Theorem 1.1.: First assume \(\gamma^{2}\not\in\mathbb{Q}\). It is less straightforward to see than in the case of \(R\) that the shift equations on \(H\) completely determine its value, since they contain three terms instead of two. Instead of working directly with \(H,H_{\text{PT}}\), we will match the following two functions:
\[\begin{split}&\mathcal{J}_{\text{PT}}\begin{pmatrix}\beta_{1}, \beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}:=\int_{\mathcal{C}}\frac{S_{\frac{ \gamma}{2}}(\frac{Q-\beta_{2}}{2}+\sigma_{3}\pm(\frac{Q}{2}-\sigma_{2})+r)S_{ \frac{\gamma}{2}}(\frac{Q}{2}\pm\frac{Q-\beta_{3}}{2}+\sigma_{3}-\sigma_{1}+r)} {S_{\frac{\gamma}{2}}(\frac{3Q}{2}\pm\frac{Q-\beta_{1}}{2}-\frac{\beta_{2}}{2}+ \sigma_{3}-\sigma_{1}+r)S_{\frac{\gamma}{2}}(2\sigma_{3}+r)S_{\frac{\gamma}{2} }(Q+r)}\frac{dr}{\mathbf{i}},\\ &\mathcal{J}_{H}\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}:=H\begin{pmatrix}\beta_{1}, \beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}\times\frac{1}{2\pi}\left(\frac{\pi( \frac{\gamma}{2})^{2-\frac{\gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1 -\frac{\gamma^{2}}{4})}\right)^{\frac{(\beta_{1}+\beta_{2}+\beta_{3})-2Q}{2 \gamma}}\\ &\times\frac{S_{\frac{\gamma}{2}}(\frac{\beta_{3}}{2}-\sigma_{1}+\frac{Q}{2} \pm(\frac{Q}{2}-\sigma_{3}))S_{\frac{\gamma}{2}}(\frac{\beta_{1}}{2}+\sigma_{1 }-\frac{Q}{2}\pm(\frac{Q}{2}-\sigma_{2}))\Gamma_{\frac{\gamma}{2}}(Q)\prod_{i=1 }^{3}\Gamma_{\frac{\gamma}{2}}(Q-\beta_{i})}{\Gamma_{\frac{\gamma}{2}}(Q- \frac{\beta_{2}}{2}\pm\frac{Q-\beta_{1}}{2}\pm\frac{Q-\beta_{3}}{2})}.\end{split}\]
Namely, we have removed the prefactor in front of the contour integral of \(H_{\text{PT}}\) and multiplied \(H\) by the same factor. Now, thanks to Theorem 3.1 and Lemma A.1, the shift equations obeyed by
\(\mathcal{J}_{H}\) and \(\mathcal{J}_{\mathrm{PT}}\) have the following form, still for \(\chi=\frac{\gamma}{2}\) or \(\frac{2}{\gamma}\),
\[\mathcal{J}\begin{pmatrix}\beta_{1},\beta_{2}-\chi,\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix} =f_{1}(\beta_{1},\beta_{2})\mathcal{J}\begin{pmatrix}\beta_{1}- \chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}+f_{2}(\beta_{1}, \beta_{2})\mathcal{J}\begin{pmatrix}\beta_{1}+\chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}, \tag{4.4}\] \[\mathcal{J}\begin{pmatrix}\beta_{1},\beta_{2}+\chi,\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix} =f_{3}(\beta_{1},\beta_{2})\mathcal{J}\begin{pmatrix}\beta_{1}- \chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}+f_{4}(\beta_{1},\beta_{2}) \mathcal{J}\begin{pmatrix}\beta_{1}+\chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}, \tag{4.3}\]
where \(\mathcal{J}=\mathcal{J}_{H}\) or \(\mathcal{J}_{\mathrm{PT}}\) and the functions \(f_{1},f_{2},f_{3},f_{4}\) are given by:
\[f_{1}(\beta_{1},\beta_{2}) =\frac{1}{2\sin(\pi\chi(\beta_{1}-\chi))},\] \[f_{2}(\beta_{1},\beta_{2}) =\frac{2\sin(\pi\chi(\frac{\beta_{1}}{2}+\sigma_{1}+\sigma_{2}-Q ))\sin(\pi\chi(\frac{\beta_{1}}{2}-\sigma_{1}+\sigma_{2}))}{\sin(\pi\chi( \chi-\beta_{1}))},\] \[f_{3}(\beta_{1},\beta_{2}) =\frac{\sin(\pi\chi(\frac{3\chi}{2}-\frac{\bar{\beta}}{2}))\sin( \frac{\pi\chi}{2}(\beta_{1}-\chi+\beta_{2}-\beta_{3}))}{2\sin(\pi\chi(\beta_{1 }-\chi))\sin(\pi\chi(\frac{\beta_{2}}{2}+\sigma_{2}+\sigma_{3}-Q))\sin(\pi\chi (\frac{\beta_{2}}{2}+\sigma_{2}-\sigma_{3}))},\] \[f_{4}(\beta_{1},\beta_{2}) =\frac{2\sin(\frac{\pi\chi}{2}(\beta_{3}-\chi\pm(\beta_{1}-\beta _{2})))\sin(\pi\chi(\frac{\beta_{1}-\chi}{2}-\sigma_{1}-\sigma_{2}+Q))\sin( \pi\chi(\frac{\beta_{1}-\chi}{2}+\sigma_{1}-\sigma_{2}))}{\sin(\pi\chi(\beta _{1}-\chi))\sin(\pi\chi(\frac{\beta_{2}}{2}+\sigma_{2}+\sigma_{3}-Q))\sin( \pi\chi(\frac{\beta_{1}}{2}+\sigma_{2}-\sigma_{3}))}.\]
The advantage of this form of the shift equations is that the four functions \(f_{i}\) contain only sine functions (and no longer any gamma functions) and therefore enjoy a periodicity property that we will use below. Now consider the shift equation (4.3) with the parameter replacement \(\beta_{1}\to\beta_{1}+\chi\), \(\beta_{2}\to\beta_{2}+\chi\):
\[\mathcal{J}\begin{pmatrix}\beta_{1}+\chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=f_{1}(\beta_{1}+\chi,\beta_{2}+ \chi)\mathcal{J}\begin{pmatrix}\beta_{1},\beta_{2}+\chi,\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}+f_{2}(\beta_{1}+ \chi,\beta_{2}+\chi)\mathcal{J}\begin{pmatrix}\beta_{1}+2\chi,\beta_{2}+\chi,\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}.\]
In this equation the two \(\mathcal{J}\) functions appearing on the right hand side can be expressed in terms of \(\mathcal{J}\) functions involving only shifts on \(\beta_{1}\) by using equation (4.4) twice, once as it is and once with the parameter replacement \(\beta_{1}\to\beta_{1}+2\chi\). Performing one more global parameter replacement \(\beta_{1}\to\beta_{1}+\chi\), we land on the following shift equation:
\[0 =f_{1}(\beta_{1}+2\chi,\beta_{2}+\chi)f_{3}(\beta_{1}+\chi,\beta_ {2})\mathcal{J}\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}\] \[+(-1+f_{1}(\beta_{1}+2\chi,\beta_{2}+\chi)f_{4}(\beta_{1}+\chi, \beta_{2})+f_{2}(\beta_{1}+2\chi,\beta_{2}+\chi)f_{3}(\beta_{1}+3\chi,\beta_{2} ))\,\mathcal{J}\begin{pmatrix}\beta_{1}+2\chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}\] \[+f_{2}(\beta_{1}+2\chi,\beta_{2}+\chi)f_{4}(\beta_{1}+3\chi,\beta_ {2})\mathcal{J}\begin{pmatrix}\beta_{1}+4\chi,\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}. \tag{4.5}\]
Now that we have a shift equation only on the variable \(\beta_{1}\), we can fix the five parameters \(\beta_{2},\beta_{3},\sigma_{1},\sigma_{2},\sigma_{3}\) to some generic values and view all of the functions below as functions of the single parameter \(\beta_{1}\). From now on we write simply \(\mathcal{J}(\beta_{1})\) to lighten notation. The above shift equations can be put into the form:
\[\mathcal{J}(\beta_{1}+4\chi)+a_{\chi}(\beta_{1})\mathcal{J}(\beta_{1}+2\chi)+b _{\chi}(\beta_{1})\mathcal{J}(\beta_{1})=0, \tag{4.6}\]
where \(a_{\chi}(\beta_{1}),b_{\chi}(\beta_{1})\) are functions that have the following expressions:
\[a_{\chi}(\beta_{1})=\frac{-1+f_{1}(\beta_{1}+2\chi,\beta_{2}+ \chi)f_{4}(\beta_{1}+\chi,\beta_{2})+f_{2}(\beta_{1}+2\chi,\beta_{2}+\chi)f_{3} (\beta_{1}+3\chi,\beta_{2})}{f_{2}(\beta_{1}+2\chi,\beta_{2}+\chi)f_{4}(\beta_ {1}+3\chi,\beta_{2})},\] \[b_{\chi}(\beta_{1})=\frac{f_{1}(\beta_{1}+2\chi,\beta_{2}+\chi)f _{3}(\beta_{1}+\chi,\beta_{2})}{f_{2}(\beta_{1}+2\chi,\beta_{2}+\chi)f_{4}( \beta_{1}+3\chi,\beta_{2})}.\]
To argue uniqueness we will introduce the matrices:
\[M_{1}(\beta_{1})=\begin{pmatrix}\mathcal{J}_{H}(\beta_{1})&\mathcal{J}_{H}( \beta_{1}-\frac{4}{\gamma})\\ \mathcal{J}_{H}(\beta_{1}-\gamma)&\mathcal{J}_{H}(\beta_{1}-2Q)\end{pmatrix}, \quad M_{2}(\beta_{1})=\begin{pmatrix}\mathcal{J}_{\mathrm{PT}}(\beta_{1})& \mathcal{J}_{\mathrm{PT}}(\beta_{1}-\frac{4}{\gamma})\\ \mathcal{J}_{\mathrm{PT}}(\beta_{1}-\gamma)&\mathcal{J}_{\mathrm{PT}}(\beta_ {1}-2Q)\end{pmatrix}.\]
Set \(\beta_{0}:=2Q-\beta_{2}-\beta_{3}\). We will derive the consequence of Lemma 4.1 for these matrices. Note that both functions \(\mathcal{J}_{H},\mathcal{J}_{\mathrm{PT}}\) obey an exact reflection identity, namely \(\mathcal{J}_{H}(\beta_{1})=\mathcal{J}_{H}(2Q-\beta_{1})\) and \(\mathcal{J}_{\mathrm{PT}}(\beta_{1})=\mathcal{J}_{\mathrm{PT}}(2Q-\beta_{1})\). For \(\mathcal{J}_{\mathrm{PT}}\) this fact is obvious from the definition. For \(\mathcal{J}_{H}\) it is a consequence of Lemma 3.11. Now Lemma 4.1 implies that:
\[\lim_{\beta_{1}\to\beta_{0}}(\beta_{1}-\beta_{0})\mathcal{J}_{H}( \beta_{1})=\lim_{\beta_{1}\to\beta_{0}}(\beta_{1}-\beta_{0})\mathcal{J}_{ \mathrm{PT}}(\beta_{1}),\] \[\lim_{\beta_{1}\to\beta_{0}}(\beta_{1}-\beta_{0})\mathcal{J}_{H}( \beta_{1}-\gamma)=\lim_{\beta_{1}\to\beta_{0}}(\beta_{1}-\beta_{0})\mathcal{J} _{\mathrm{PT}}(\beta_{1}-\gamma).\]
By applying the reflection identity \(\mathcal{J}_{H}(\beta_{1})=\mathcal{J}_{H}(2Q-\beta_{1})\) we can reformulate the first limit as:
\[\lim_{\beta_{1}\to\beta_{0}}(\beta_{1}-\beta_{0})\mathcal{J}_{H}(\beta_{1})= \lim_{\beta_{1}\to\beta_{0}}(\beta_{1}-\beta_{0})\mathcal{J}_{H}(2Q-\beta_{1} )=\lim_{\beta_{1}\to-\beta_{2}-\beta_{3}}(\beta_{1}+\beta_{2}+\beta_{3}) \mathcal{J}_{H}(-\beta_{1}).\]
Since everything we derive works for generic values of \(\beta_{2},\beta_{3}\), we can replace \((\beta_{2},\beta_{3})\) by \((-\beta_{2},-\beta_{3})\). Therefore we obtain the equality of limits:
\[\lim_{\beta_{1}\to\beta_{2}+\beta_{3}}(\beta_{1}-\beta_{2}-\beta _{3})\mathcal{J}_{H}(-\beta_{1})=\lim_{\beta_{1}\to\beta_{2}+\beta_{3}}(\beta_ {1}-\beta_{2}-\beta_{3})\mathcal{J}_{\mathrm{PT}}(-\beta_{1})\] \[\Leftrightarrow\lim_{\beta_{1}\to\beta_{0}}(\beta_{1}-\beta_{0}) \mathcal{J}_{H}(\beta_{1}-2Q)=\lim_{\beta_{1}\to\beta_{0}}(\beta_{1}-\beta_{0} )\mathcal{J}_{\mathrm{PT}}(\beta_{1}-2Q).\]
By applying a similar argument to \(\mathcal{J}_{H}(\beta_{1}-\gamma)\) we can show that the limits also match for \(\mathcal{J}_{H}(\beta_{1}-\frac{4}{\gamma})\). Therefore the conclusion of the above is that:
\[\lim_{\beta_{1}\to\beta_{0}}(\beta_{1}-\beta_{0})M_{1}(\beta_{1})=\lim_{\beta _{1}\to\beta_{0}}(\beta_{1}-\beta_{0})M_{2}(\beta_{1}). \tag{4.7}\]
Let us write down the shift equations satisfied by the matrices \(M_{1},M_{2}\). One has
\[M_{1}(\beta_{1}+\gamma)=A_{\frac{\gamma}{2}}(\beta_{1})M_{1}(\beta_{1}),\quad M _{1}(\beta_{1}+\frac{4}{\gamma})=M_{1}(\beta_{1})A_{\frac{2}{\gamma}}(\beta_{1 })^{\top},\quad A_{\chi}(\beta_{1}):=\begin{pmatrix}-a_{\chi}(\beta_{1})&-b_{ \chi}(\beta_{1})\\ 1&0\end{pmatrix},\]
and the same relations for \(M_{2}\). The fact that these first-order shift equations on the matrices \(M_{1},M_{2}\) hold uses (4.6) combined with the fact that the functions \(a_{\chi}(\beta_{1}),b_{\chi}(\beta_{1})\) are both \(2/\chi\)-periodic for both values of \(\chi\). Consider the determinant:
\[D_{2}(\beta_{1})=\det M_{2}(\beta_{1})=\mathcal{J}_{\mathrm{PT}}(\beta_{1}) \mathcal{J}_{\mathrm{PT}}(\beta_{1}-2Q)-\mathcal{J}_{\mathrm{PT}}(\beta_{1}- \gamma)\mathcal{J}_{\mathrm{PT}}(\beta_{1}-\frac{4}{\gamma}).\]
From the result of Lemma A.2 it is easy to see that the residue of \(D_{2}(\beta_{1})\) at \(\beta_{1}=\beta_{0}\) is not zero. More precisely the residues of \(\mathcal{J}_{\mathrm{PT}}(\beta_{1})\) and \(\mathcal{J}_{\mathrm{PT}}(\beta_{1}-\gamma)\) are related by an explicit meromorphic function written in Lemma A.2, and the residues of \(\mathcal{J}_{\mathrm{PT}}(\beta_{1}-2Q)\) and \(\mathcal{J}_{\mathrm{PT}}(\beta_{1}-\frac{4}{\gamma})\) are related by the same function with \(\beta_{2},\beta_{3}\) replaced by \(-\beta_{2},-\beta_{3}\).
Since \(D_{2}(\beta_{1})\) is a nonzero meromorphic function, it must have isolated and at most countably many zeros. Therefore the matrix inverse \(M_{2}(\beta_{1})^{-1}\) is a well-defined \(2\)-by-\(2\) matrix whose entries
are meromorphic functions of \(\beta_{1}\). Now consider \(M(\beta_{1})=M_{1}(\beta_{1})M_{2}(\beta_{1})^{-1}\). Then the matrix \(M(\beta_{1})\) satisfies:
\[M(\beta_{1}+\gamma)=A_{\frac{\gamma}{2}}(\beta_{1})M(\beta_{1})A_{\frac{\gamma} {2}}(\beta_{1})^{-1},\quad M(\beta_{1}+\frac{4}{\gamma})=M(\beta_{1}). \tag{4.8}\]
As for \(M_{2}\), since \(\det A_{\frac{\gamma}{2}}(\beta_{1})=b_{\frac{\gamma}{2}}(\beta_{1})\) is a nonzero meromorphic function, the matrix \(A_{\frac{\gamma}{2}}(\beta_{1})^{-1}\) is a well-defined 2-by-2 matrix with meromorphic entries. Thanks to equation (4.7), we know that \(M(\beta_{1})\) is the identity matrix for \(\beta_{1}=\beta_{0}\). By the same standard argument as below (4.2), when \(\gamma^{2}\notin\mathbb{Q}\), the two shift equations in (4.8) imply that \(M(\beta_{1})\) is the identity matrix for all \(\beta_{1}\). The same holds for \(\gamma^{2}\in\mathbb{Q}\) by continuity. Therefore \(\mathcal{J}_{H}=\mathcal{J}_{\mathrm{PT}}\) and hence \(H=H_{\mathrm{PT}}\).
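As an aside, the mechanism behind the matrices \(M_{1},M_{2},A_{\chi}\) used above is the classical companion-matrix encoding of a three-term recursion. The following toy Python sketch (with arbitrary constant coefficients, not the actual \(a_{\chi},b_{\chi}\)) illustrates both the first-order matrix form of the recursion and the role played by a nonvanishing determinant, which is what makes the ratio \(M_{1}M_{2}^{-1}\) well defined.

```python
# Toy model: a recursion x_{k+2} + a x_{k+1} + b x_k = 0 written in companion-matrix form,
# mirroring the matrices A_chi and M_1, M_2 of the uniqueness argument above.
import numpy as np

a, b = -1.0, 0.3                            # arbitrary constants standing in for a_chi, b_chi
A = np.array([[-a, -b], [1.0, 0.0]])        # companion matrix, same shape as A_chi

def solve(x0, x1, n):
    """Values x_0, ..., x_n of the recursion x_{k+2} = -a x_{k+1} - b x_k."""
    xs = [x0, x1]
    for _ in range(n - 1):
        xs.append(-a * xs[-1] - b * xs[-2])
    return np.array(xs)

u = solve(1.0, 0.0, 10)                     # two solutions with independent initial data
v = solve(0.0, 1.0, 10)

for k in range(8):
    M_k  = np.array([[u[k + 1], v[k + 1]], [u[k], v[k]]])
    M_k1 = np.array([[u[k + 2], v[k + 2]], [u[k + 1], v[k + 1]]])
    assert np.allclose(M_k1, A @ M_k)       # the scalar recursion in matrix form
    assert abs(np.linalg.det(M_k)) > 1e-12  # the two solutions stay independent
```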
### Proof of \(G=G_{\rm Hos}\)
We will perform all the computations up to a global constant \(c_{\gamma,\beta}\) depending on \(\gamma,\beta\) but not on \(\alpha\) or \(\mu_{B}\). We use \(c_{\gamma}\) for a constant depending on \(\gamma\). The value of \(c_{\gamma,\beta}\) can vary from line to line. At the end we obtain a formula for \(G_{\mu_{B}}(\alpha,\beta)\) up to a factor \(c_{\gamma,\beta}\). We will finally pin down that factor by setting \(\alpha=Q-\frac{\beta}{2}\) for which we know the value of \(G\).
We first recall three formulas from [22]. We add a 0 subscript to all the formulas to indicate that they all correspond to having the bulk cosmological constant set to zero, namely \(\mu=0\).
\[\overline{U}_{0}(\alpha)=\left(\frac{2^{-\frac{\gamma\alpha}{2}}2\pi}{\Gamma( 1-\frac{\gamma^{2}}{4})}\right)^{\frac{2}{\gamma}(Q-\alpha)}\Gamma(\frac{ \gamma\alpha}{2}-\frac{\gamma^{2}}{4}). \tag{4.9}\]
\[\overline{G}_{0}(\alpha,\beta)=c_{\gamma,\beta}\left(\frac{2^{\frac{\gamma}{ 2}(\frac{\beta}{2}-\alpha)}2\pi}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{ \frac{2}{\gamma}(Q-\alpha-\frac{\beta}{2})}\frac{\Gamma(\frac{\gamma\alpha}{2 }+\frac{\gamma\beta}{4}-\frac{\gamma^{2}}{4})\Gamma_{\frac{\gamma}{2}}(\alpha \pm\frac{\beta}{2})}{\Gamma_{\frac{\gamma}{2}}(\alpha)^{2}}. \tag{4.10}\]
\[\overline{R}_{0}(\beta,\mu_{1},\mu_{2})=c_{\gamma,\beta}\frac{e^{\mathbf{i} \pi(\sigma_{1}^{0}+\sigma_{2}^{0}-Q)(Q-\beta)}}{S_{\frac{\gamma}{2}}(\frac{ \beta}{2}+\sigma_{2}^{0}-\sigma_{1}^{0})S_{\frac{\gamma}{2}}(\frac{\beta}{2}+ \sigma_{1}^{0}-\sigma_{2}^{0})},\quad\text{where}\quad\mu_{i}=e^{\mathbf{i} \pi\gamma(\sigma_{i}^{0}-\frac{Q}{2})}. \tag{4.11}\]
We will prove \(G=G_{\mathrm{Hos}}\) from [23, Proposition 1.7]. To recall it properly, for a fixed \(\beta\in(0,\gamma)\), we let \(\Psi_{\beta}\) be a random variable whose moment generating function satisfies for each \(\alpha\in(Q-\frac{\beta}{2},Q)\)
\[\mathbb{E}[\Psi_{\beta}^{2\Delta_{\alpha}-2}]=\frac{\overline{G}_ {0}(\alpha,\gamma)\overline{G}_{0}(\gamma,\beta)}{\overline{G}_{0}(\alpha, \beta)\overline{G}_{0}(\gamma,\gamma)}\frac{\Gamma(\frac{4}{\gamma^{2}})\Gamma (\frac{2}{\gamma}(\frac{\beta}{2}+\gamma-Q))}{\Gamma(\frac{2}{\gamma}(Q- \alpha)+1)\Gamma(\frac{2}{\gamma}(\frac{\beta}{2}+\alpha-Q))}\\ \times\left(\int_{0}^{\infty}\mu_{2}^{\frac{2}{\gamma}(Q-\alpha) }\frac{\partial}{\partial\mu_{2}}R_{0}(Q+\frac{\beta}{2},\mu_{2},1)d\mu_{2} \right)\left(\int_{0}^{\infty}\mu_{2}^{\frac{2}{\gamma}(Q-\gamma)}\frac{ \partial}{\partial\mu_{2}}R_{0}(Q+\frac{\beta}{2},\mu_{2},1)d\mu_{2}\right)^{ -1}. \tag{4.12}\]
By [23, Proposition 7.13], the law of \(\Psi_{\beta}\) describes a half of the conformal radius viewed from \(\mathbf{i}\) of a so-called \(\mathrm{SLE}_{\kappa}(\rho)\) bubble on the upper half plane rooted at the origin, conditioned on surrounding \(\mathbf{i}\). See [24, 25] for more details on the SLE bubble. We do not recall its definition since it is not needed for the proof of \(G=G_{\mathrm{Hos}}\). For \(\mu_{B},\mu_{2}\geq 0\), let \(\mu_{B}=\mu_{B}(\sigma),\mu_{2}=\mu_{B}(\sigma_{2})\) as in (1.9). We also need the following measure \(\mathcal{M}\) defined on a measurable space \((\Omega,\mathcal{F})\) with three positive measurable functions \(L_{1},L_{2},A\) satisfying
\[\mathcal{M}\left[e^{-\mu_{B}L_{1}-\mu_{2}L_{2}-A}\right]=-\frac{\gamma}{\beta }R(Q-\frac{\beta}{2},\sigma,\sigma_{2})^{-1}. \tag{4.13}\]
The measure \(\mathcal{M}\) in [23] is given by the so-called thin quantum disk with a proper \(\beta\)-dependent parameter and \(L_{1},L_{2},A\) are its left/right boundary lengths and its area, respectively. See [23,
Remark 7.19] for more details. Again we do not recall its definition since it is not needed. We are now ready to state [23, Proposition 1.7].
**Proposition 4.2**.: _([23, Proposition 1.7]) Assume \(\alpha,\beta\) obey the Seiberg bounds \(\alpha+\frac{\beta}{2}>Q\), \(\alpha<Q\), \(\beta<Q\). Assume further that \(\beta\in(0,\gamma)\). Then one has_
\[G_{\mu_{B}}(\alpha,\beta) =\frac{c_{\gamma,\beta}}{\mathbb{E}[\Psi_{\beta}^{2\Delta_{\alpha }-2}]}\times\frac{2^{-\frac{\alpha^{2}}{2}}\overline{U}_{0}(\alpha)}{\Gamma( \frac{2}{\gamma}(Q-\alpha))}\left(\frac{1}{2}\sqrt{\frac{1}{\sin(\pi\frac{ \gamma^{2}}{4})}}\right)^{\frac{2}{\gamma}(Q-\alpha)}\] \[\times\mathcal{M}\left[e^{-\mu_{B}L_{1}-A}K_{\frac{2}{\gamma}(Q- \alpha)}\left(L_{2}\sqrt{\frac{1}{\sin\frac{\pi\gamma^{2}}{4}}}\right) \right], \tag{4.14}\]
_where \(K_{\nu}(x)=\int_{0}^{\infty}e^{-x\cosh t}\cosh(\nu t)dt\) is the modified Bessel function and \(c_{\gamma,\beta}\) is a constant depending only on \(\gamma,\beta\)._
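As a numerical aside (not needed for the argument), the integral representation of \(K_{\nu}\) quoted in Proposition 4.2 is easy to check against a standard implementation; the values of \(\nu\) and \(x\) below are arbitrary.

```python
# Check of K_nu(x) = int_0^infty exp(-x cosh t) cosh(nu t) dt against scipy's kv.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

nu, x = 0.7, 1.5
integral, _ = quad(lambda t: np.exp(-x * np.cosh(t)) * np.cosh(nu * t), 0.0, 30.0)
print(integral, kv(nu, x))   # agree to quadrature accuracy
assert abs(integral - kv(nu, x)) < 1e-6
```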
We now derive \(G=G_{\mathrm{Hos}}\) from Proposition 4.2. We first focus on \(\mathbb{E}[\Psi_{\beta}^{2\Delta_{\alpha}-2}]\). Record that \(\overline{G}_{0}(\alpha,\gamma)=\frac{1}{\pi}\overline{U}_{0}(\alpha)\) and \(R_{0}(\beta,\mu_{1},\mu_{2})=-\Gamma(1-\frac{2}{\gamma}(Q-\beta))\overline{R }_{0}(\beta,\mu_{1},\mu_{2})\). Therefore we can compute:
\[\frac{\overline{G}_{0}(\alpha,\gamma)\overline{G}_{0}(\gamma,\beta)}{\overline {G}_{0}(\alpha,\beta)\overline{G}_{0}(\gamma,\gamma)}=c_{\gamma,\beta}\frac{ \Gamma(\frac{\gamma\alpha}{2}-\frac{\gamma^{2}}{4})\Gamma_{\frac{\gamma}{2}}( \alpha)^{2}}{\Gamma(\frac{\gamma\alpha}{2}+\frac{\gamma\beta}{4}-\frac{ \gamma^{2}}{4})\Gamma_{\frac{\gamma}{2}}(\alpha\pm\frac{\beta}{2})}.\]
Now let us look at the integral containing \(R_{0}\). We can first write:
\[\int_{0}^{\infty}\mu_{2}^{\frac{2}{\gamma}(Q-\alpha)}\frac{\partial}{\partial\mu_{2}}R_{0}(Q+\frac{\beta}{2},\mu_{2},1)d\mu_{2}=\frac{2}{\gamma}(Q-\alpha)\Gamma(1+\frac{\beta}{\gamma})\int_{0}^{\infty}\mu_{2}^{\frac{2}{\gamma}(Q-\alpha)-1}\overline{R}_{0}(Q+\frac{\beta}{2},\mu_{2},1)d\mu_{2}\] \[=c_{\gamma,\beta}(Q-\alpha)\int_{0}^{\infty}d\mu_{2}\,\mu_{2}^{\frac{2}{\gamma}(Q-\alpha)-1}\frac{e^{\mathbf{i}\pi(\sigma_{2}^{0}-\frac{Q}{2})(-\frac{\beta}{2})}}{S_{\frac{\gamma}{2}}(\frac{\beta}{4}+\sigma_{2}^{0})S_{\frac{\gamma}{2}}(Q+\frac{\beta}{4}-\sigma_{2}^{0})}.\]
We will now compute this last integral; the relation between \(\mu_{2}\) and \(\sigma_{2}^{0}\) is \(\mu_{2}=e^{\mathbf{i}\pi\gamma(\sigma_{2}^{0}-\frac{Q}{2})}\). We get:
\[\int_{0}^{+\infty}d\mu_{2}\,\mu_{2}^{\frac{2(Q-\alpha)}{\gamma}- 1}\frac{e^{\mathbf{i}\pi(\sigma_{2}^{0}-\frac{Q}{2})(-\frac{\beta}{2})}}{S_{ \frac{\gamma}{2}}(\frac{\beta}{4}+\sigma_{2}^{0})S_{\frac{\gamma}{2}}(Q+\frac{ \beta}{4}-\sigma_{2}^{0})}=\pi\gamma\int_{\frac{Q}{2}+\mathbf{i}\mathbb{R}} \frac{d\sigma_{2}^{0}}{\mathbf{i}}\frac{e^{2\mathbf{i}\pi(Q-\alpha)(\sigma_{2} ^{0}-\frac{Q}{2})}e^{\mathbf{i}\pi(\sigma_{2}^{0}-\frac{Q}{2})(-\frac{\beta}{2} )}}{S_{\frac{\gamma}{2}}(\frac{\beta}{4}+\sigma_{2}^{0})S_{\frac{\gamma}{2}}( Q+\frac{\beta}{4}-\sigma_{2}^{0})}\] \[=\pi\gamma e^{\mathbf{i}\pi(Q-\alpha-\frac{\beta}{4})(\frac{Q}{2} -\frac{\beta}{4})}\int_{\mathbf{i}\mathbb{R}}\frac{d\sigma_{2}^{0}}{\mathbf{i} }e^{2\mathbf{i}\pi(Q-\alpha-\frac{\beta}{4})\sigma_{2}^{0}}\frac{S_{\frac{ \gamma}{2}}(Q-\frac{\beta}{2}+\sigma_{2}^{0})}{S_{\frac{\gamma}{2}}(Q+\sigma_{ 2}^{0})}.\]
We will now use the following identity derived in [14, Lemma 15],
\[\int_{\mathbf{i}\mathbb{R}}d\tau\,e^{2\mathbf{i}\pi\tau\beta^{\prime}}e^{\mathbf{i}\pi\tau(\alpha^{\prime}-Q)}\frac{S_{\frac{\gamma}{2}}(\tau+\alpha^{\prime})}{S_{\frac{\gamma}{2}}(\tau+Q)}=\mathbf{i}e^{\frac{\mathbf{i}\pi}{2}\alpha^{\prime}(Q-\alpha^{\prime})}e^{-\mathbf{i}\pi\alpha^{\prime}\beta^{\prime}}\frac{S_{\frac{\gamma}{2}}(\alpha^{\prime})S_{\frac{\gamma}{2}}(\beta^{\prime})}{S_{\frac{\gamma}{2}}(\alpha^{\prime}+\beta^{\prime})},\]
with \(\alpha^{\prime}=Q-\frac{\beta}{2}\), \(\beta^{\prime}=Q-\alpha\). This gives us:
\[\int_{0}^{+\infty}d\mu_{2}\mu_{2}^{\frac{2(Q-\alpha)}{\gamma}-1} \frac{e^{\mathbf{i}\pi(\sigma_{2}^{0}-\frac{Q}{2})(-\frac{\beta}{2})}}{S_{ \frac{\gamma}{2}}(\frac{\beta}{4}+\sigma_{2}^{0})S_{\frac{\gamma}{2}}(Q+\frac{ \beta}{4}-\sigma_{2}^{0})}\] \[=\pi\gamma e^{2\mathbf{i}\pi(Q-\alpha-\frac{\beta}{4})(\frac{Q}{2} -\frac{\beta}{4})}e^{\frac{\mathbf{i}\pi}{2}\frac{\beta}{2}(Q-\frac{\beta}{2})}e ^{-\mathbf{i}\pi(Q-\frac{\beta}{2})(Q-\alpha)}\frac{S_{\frac{\gamma}{2}}(Q- \frac{\beta}{2})S_{\frac{\gamma}{2}}(Q-\alpha)}{S_{\frac{\gamma}{2}}(2Q-\frac{ \beta}{2}-\alpha)}=\pi\gamma\frac{S_{\frac{\gamma}{2}}(Q-\frac{\beta}{2})S_{ \frac{\gamma}{2}}(Q-\alpha)}{S_{\frac{\gamma}{2}}(2Q-\frac{\beta}{2}-\alpha)}.\]
Finally we obtain:
\[\mathbb{E}[\Psi_{\beta}^{2\Delta_{\alpha}-2}]=\frac{c_{\gamma,\beta}\Gamma(\frac{ \gamma\alpha}{2}-\frac{\gamma^{2}}{4})\Gamma_{\frac{\gamma}{2}}(\alpha)}{\Gamma (\frac{\gamma\alpha}{2}+\frac{\gamma\beta}{4}-\frac{\gamma^{2}}{4})\Gamma_{ \frac{\gamma}{2}}(\alpha\pm\frac{\beta}{2})}\frac{(\alpha-Q)}{\Gamma(\frac{2} {\gamma}(Q-\alpha)+1)\Gamma(\frac{2}{\gamma}(\frac{\beta}{2}+\alpha-Q))}\frac{ \Gamma_{\frac{\gamma}{2}}(Q-\alpha)}{S_{\frac{\gamma}{2}}(2Q-\frac{\beta}{2}- \alpha)}.\]
Coming back now to equation (4.14), we first compute:
\[\frac{2^{-\frac{\alpha^{2}}{2}}\overline{U}_{0}(\alpha)}{\Gamma(\frac{2}{\gamma}(Q-\alpha))}\left(\frac{1}{2}\sqrt{\frac{1}{\sin(\pi\frac{\gamma^{2}}{4})}}\right)^{\frac{2}{\gamma}(Q-\alpha)}=c_{\gamma}\;2^{\frac{\alpha^{2}}{2}-\alpha Q}\frac{\Gamma(\frac{\gamma\alpha}{2}-\frac{\gamma^{2}}{4})}{\Gamma(\frac{2}{\gamma}(Q-\alpha))}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{1}{\gamma}(Q-\alpha)}.\]
We finally turn to the part with the expectation over \(\mathcal{M}\). We give the following lemma:
**Lemma 4.3**.: _One has:_
\[\mathcal{M}\left[e^{-\mu_{B}L_{1}-A}K_{\frac{2}{\gamma}(Q-\alpha)}\left(L_{2} \sqrt{\frac{1}{\sin\frac{\pi\gamma^{2}}{4}}}\right)\right]=c_{\gamma,\beta} \int_{\mathbb{IR}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2\mathbf{i}\pi(Q-2\sigma_{ 1})\sigma_{2}}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha+\frac{\beta}{2}-Q )\pm\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{\beta}{2}+Q) \pm\sigma_{2})}.\]
Proof.: Record the following parameters and relations:
\[\mu_{B}=\frac{1}{\sqrt{\sin\frac{\pi\gamma^{2}}{4}}}\cos(\pi\gamma(\sigma- \frac{Q}{2})),\;\hat{\mu}_{B}(t)=\frac{1}{\sqrt{\sin\frac{\pi\gamma^{2}}{4}}} \cosh(t),\;t=\mathbf{i}\pi\gamma(\sigma_{2}-\frac{Q}{2}),\;\cosh t=\cos(\pi \gamma(\sigma_{2}-\frac{Q}{2})).\]
Recall from (3.33) that \(R(Q+\frac{\beta}{2},\sigma,\sigma_{2})R(Q-\frac{\beta}{2},\sigma,\sigma_{2})=1\). By (4.13) and the expression of \(K_{\nu}\),
\[\mathcal{M}\left[e^{-\mu_{B}L_{1}-A}K_{\frac{2}{\gamma}(Q-\alpha) }\left(L_{2}\sqrt{\frac{1}{\sin\frac{\pi\gamma^{2}}{4}}}\right)\right]=\frac{ 1}{2}\int_{\mathbb{R}}dt\cosh(\frac{2}{\gamma}(Q-\alpha)t)\mathcal{M}\left[e^ {-\mu_{B}L_{1}-\hat{\mu}_{B}(t)L_{2}-A}\right]\] \[=-\frac{\pi\gamma}{2}\frac{\gamma}{\beta}\int_{\frac{Q}{2}+ \mathbb{IR}}\frac{d\sigma_{2}}{\mathbf{i}}\cos(2\pi(Q-\alpha)(\sigma_{2}- \frac{Q}{2}))R(Q+\frac{\beta}{2},\sigma,\sigma_{2}).\]
Plugging in the exact formula \(R_{\text{FZZ}}\) that we have for \(R\), the above integral then becomes:
\[-\frac{\pi\gamma}{2}\frac{\gamma}{\beta}\int_{\frac{Q}{2}+ \mathbb{IR}}\frac{d\sigma_{2}}{\mathbf{i}}\cos(2\pi(Q-\alpha)(\sigma_{2}-\frac {Q}{2}))R(Q+\frac{\beta}{2},\sigma,\sigma_{2})\] \[=-\frac{\pi\gamma^{2}}{2\beta}\left(\frac{\pi(\frac{\gamma}{2})^{ 2-\frac{\gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2} }{4})}\right)^{-\frac{\beta}{2\gamma}}\frac{\Gamma_{\frac{\gamma}{2}}(\frac{ \beta}{2})}{\Gamma_{\frac{\gamma}{2}}(-\frac{\beta}{2})}\int_{\frac{Q}{2}+ \mathbb{IR}}\frac{d\sigma_{2}}{\mathbf{i}}\cos(2\pi(Q-\alpha)(\sigma_{2}- \frac{Q}{2}))\frac{S_{\frac{\gamma}{2}}(\frac{Q}{2}-\frac{\beta}{4}\pm(Q- \sigma-\sigma_{2}))}{S_{\frac{\gamma}{2}}(\frac{\beta}{2}+\frac{\beta}{4}\pm( \sigma_{2}-\sigma))}.\]
Let us now isolate the contour integral above and rewrite it as an integral over the line \(\mathbf{i}\mathbb{R}\):
\[\int_{\mathbb{IR}}d\sigma_{2}\cos(2\pi(Q-\alpha)\sigma_{2})\frac{S_{\frac{\gamma }{2}}(Q-\frac{\beta}{4}-\sigma-\sigma_{2})S_{\frac{\gamma}{2}}(-\frac{\beta}{4 }+\sigma+\sigma_{2})}{S_{\frac{\gamma}{2}}(Q+\frac{\beta}{4}+\sigma_{2}-\sigma )S_{\frac{\gamma}{2}}(\frac{\beta}{4}+\sigma-\sigma_{2})}=\int_{\mathbb{i} \mathbb{R}}d\sigma_{2}e^{2\mathbb{i}\pi(Q-\alpha)\sigma_{2}}\frac{S_{\frac{\gamma }{2}}(-\frac{\beta}{4}+\sigma\pm\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{\beta }{4}+\sigma\pm\sigma_{2})}.\]
In this last line we have used the symmetry \(\sigma_{2}\to-\sigma_{2}\) of the integrand coming from the identity \(S_{\frac{\gamma}{2}}(x)=\frac{1}{S_{\frac{\gamma}{2}}(Q-x)}\). We will apply the following identity coming from [25, Equation (D.34a)]:
\[\int_{\mathbb{IR}}d\sigma_{2}e^{2\mathbb{i}\pi(Q-\alpha)\sigma_{2}}\frac{S_{ \frac{\gamma}{2}}(-\frac{\beta}{4}+\sigma\pm\sigma_{2})}{S_{\frac{\gamma}{2}}( \frac{\beta}{4}+\sigma\pm\sigma_{2})}=\frac{S_{\frac{\gamma}{2}}(Q-\frac{\beta}{2 })}{S_{\frac{\gamma}{2}}(\frac{\beta}{2})}\int_{\mathbb{IR}}d\sigma_{2}e^{2 \mathbb{i}\pi(Q-2\sigma)\sigma_{2}}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}( \alpha+\frac{\beta}{2}-Q)\pm\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}( \alpha-\frac{\beta}{2}+Q)\pm\sigma_{2})}.\]
Putting everything together, this gives us the final answer.
Therefore combining all the above results we get:
\[G_{\mu_{B}}(\alpha,\beta)=c_{\gamma,\beta}2^{\frac{\alpha^{2}}{2}- \alpha Q}\frac{\Gamma(\frac{\gamma\alpha}{2}-\frac{\gamma^{2}}{4})}{\Gamma( \frac{2}{\gamma}(Q-\alpha))}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{ \Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{1}{\gamma}(Q-\alpha)}\frac{ \Gamma(\frac{\gamma\alpha}{2}+\frac{\gamma\beta}{4}-\frac{\gamma^{2}}{4}) \Gamma_{\frac{\gamma}{2}}(\alpha\pm\frac{\beta}{2})}{\Gamma(\frac{\gamma\alpha }{2}-\frac{\gamma^{2}}{4})\Gamma_{\frac{\gamma}{2}}(\alpha)}\] \[\times\frac{\Gamma(\frac{2}{\gamma}(Q-\alpha)+1)\Gamma(\frac{2}{ \gamma}(\frac{\beta}{2}+\alpha-Q))}{(\alpha-Q)}\frac{S_{\frac{2}{2}}(2Q-\frac{ \beta}{2}-\alpha)}{\Gamma_{\frac{\gamma}{2}}(Q-\alpha)}\int_{\mathbb{IR}} \frac{d\sigma_{2}}{\mathbf{i}}e^{2\pi(Q-2\sigma)\sigma_{2}}\frac{S_{\frac{2}{ 2}}(\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)\pm\sigma_{2})}{S_{\frac{2}{2}}( \frac{1}{2}(\alpha-\frac{\beta}{2}+Q)\pm\sigma_{2})}\] \[=c_{\gamma,\beta}2^{\frac{\alpha^{2}}{2}-\alpha Q}\left(\frac{ \pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{ \frac{1}{\gamma}(Q-\alpha)}\frac{\Gamma(\frac{\gamma\alpha}{2}+\frac{\gamma \beta}{4}-\frac{\gamma^{2}}{4})\Gamma_{\frac{\gamma}{2}}(\alpha\pm\frac{\beta} {2})}{\Gamma_{\frac{\gamma}{2}}(\alpha)}\Gamma(\frac{2}{\gamma}(\frac{\beta} {2}+\alpha-Q))\] \[\times\frac{S_{\frac{\gamma}{2}}(2Q-\frac{\beta}{2}-\alpha)}{ \Gamma_{\frac{\gamma}{2}}(Q-\alpha)}\int_{\mathbb{IR}}\frac{d\sigma_{2}}{ \mathbf{i}}e^{2\mathbf{i}\pi(Q-2\sigma)\sigma_{2}}\frac{S_{\frac{\gamma}{2}}( \frac{1}{2}(\alpha+\frac{\beta}{2}-Q)-\sigma_{2})S_{\frac{\gamma}{2}}(\frac{ 1}{2}(\alpha+\frac{\beta}{2}-Q)+\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2} (\alpha-\frac{\beta}{2}+Q)+\sigma_{2})S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha -\frac{\beta}{2}+Q)-\sigma_{2})}.\]
We are now going to pin down the value of \(c_{\gamma,\beta}\) by setting \(\alpha=Q-\frac{\beta}{2}\), in which case we know the value of \(G\). After deriving the full expression of \(G\), we can do a further sanity check by setting \(\beta=0\) and making sure we recover the FZZ formula derived in [1]. The determination of \(c_{\gamma,\beta}\) is based on the fact that we know the following special value of \(G\):
\[\lim_{\alpha\to Q-\frac{\beta}{2}}(\alpha+\frac{\beta}{2}-Q)G(\alpha,\beta)=2 ^{-\frac{1}{2}(Q-\frac{\beta}{2})^{2}}.\]
This residue can be deduced by following the exact same steps as for the residues of \(H\) given in Lemma 4.1. The only difference is the power of \(2\) above which comes from the way we have normalized the \(\alpha\) insertion of \(G\) at \(\mathbf{i}\). Namely the \(2^{-\frac{\alpha^{2}}{2}}\) of Definition 2.9 becomes a \(2^{-\frac{1}{2}(Q-\frac{\beta}{2})^{2}}\) in the limit.
First, the limit of the prefactor of \(G\) in front of the contour integral is given by:
\[c_{\gamma,\beta}2^{-\frac{1}{2}(Q-\frac{\beta}{2})(Q+\frac{\beta }{2})}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{ 4})}\right)^{\frac{\beta}{2\gamma}}\frac{\Gamma_{\frac{\gamma}{2}}(Q-\beta) \Gamma_{\frac{\gamma}{2}}(Q)^{2}}{\Gamma_{\frac{\gamma}{2}}(Q-\frac{\beta}{2} )\Gamma_{\frac{\gamma}{2}}(\frac{\beta}{2})}\lim_{\alpha\to Q-\frac{\beta}{2}} \frac{\Gamma(\frac{2}{\gamma}(\frac{\beta}{2}+\alpha-Q))}{\Gamma_{\frac{\gamma }{2}}(\alpha+\frac{\beta}{2}-Q)}\] \[=c_{\gamma,\beta}2^{-\frac{1}{2}(Q-\frac{\beta}{2})(Q+\frac{\beta }{2})}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{ 4})}\right)^{\frac{\beta}{2\gamma}}\frac{\Gamma_{\frac{\gamma}{2}}(Q-\beta) \Gamma_{\frac{\gamma}{2}}(Q)^{2}}{\Gamma_{\frac{\gamma}{2}}(Q-\frac{\beta}{2} )\Gamma_{\frac{\gamma}{2}}(\frac{\beta}{2})}\frac{\sqrt{2\pi}}{\Gamma_{\frac{ \gamma}{2}}(\frac{2}{\gamma})}(\frac{\gamma}{2})^{1/2}.\]
Now we need to compute the limit of the contour integral; this is given by Lemma A.3. The answer is \((2\pi S_{\frac{\gamma}{2}}(Q-\frac{\beta}{2})^{2})^{-1}\). Combining all these computations we thus get that:
\[\lim_{\alpha\to Q-\frac{\beta}{2}}(\alpha+\frac{\beta}{2}-Q)G(\alpha,\beta)=c_ {\gamma,\beta}2^{-\frac{1}{2}(Q-\frac{\beta}{2})(Q+\frac{\beta}{2})}\frac{ \gamma}{2}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2} }{4})}\right)^{\frac{\beta}{2\gamma}}\frac{\Gamma_{\frac{\gamma}{2}}(Q-\beta) \Gamma_{\frac{\gamma}{2}}(Q)\Gamma_{\frac{\gamma}{2}}(\frac{\beta}{2})}{\Gamma_ {\frac{\gamma}{2}}(Q-\frac{\beta}{2})^{3}}.\]
Above, to simplify the computation, we have used \(\Gamma_{\frac{\gamma}{2}}(Q)(\Gamma_{\frac{\gamma}{2}}(\frac{2}{\gamma}))^{-1}=\sqrt{2\pi}(\frac{\gamma}{2})^{1/2}\). The above limit equals \(2^{-\frac{1}{2}(Q-\frac{\beta}{2})^{2}}\), giving the following value of \(c_{\gamma,\beta}\):
\[c_{\gamma,\beta}=2^{\frac{\beta}{2}(Q-\frac{\beta}{2})}\frac{2}{\gamma}\left( \frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{- \frac{\beta}{2\gamma}}\frac{\Gamma_{\frac{\gamma}{2}}(Q-\frac{\beta}{2})^{3}}{ \Gamma_{\frac{\gamma}{2}}(Q-\beta)\Gamma_{\frac{\gamma}{2}}(Q)\Gamma_{\frac{ \gamma}{2}}(\frac{\beta}{2})}.\]
Hence we have deduced the following expression for \(G\):
\[G_{\mu_{B}}(\alpha,\beta)=2^{\frac{\alpha^{2}}{2}-\alpha Q}2^{ \frac{\beta}{2}(Q-\frac{\beta}{2})}\frac{2}{\gamma}\left(\frac{\pi\Gamma(\frac{ \gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{1}{\gamma}(Q- \alpha-\frac{\beta}{2})}\frac{\Gamma(\frac{\gamma\alpha}{2}+\frac{\gamma\beta }{4}-\frac{\gamma^{2}}{4})\Gamma_{\frac{\gamma}{2}}(\alpha\pm\frac{\beta}{2}) \Gamma_{\frac{\gamma}{2}}(Q-\frac{\beta}{2})^{3}}{\Gamma_{\frac{\gamma}{2}}( \alpha)\Gamma_{\frac{\gamma}{2}}(Q-\beta)\Gamma_{\frac{\gamma}{2}}(Q)\Gamma_{ \frac{\gamma}{2}}(\frac{\beta}{2})}\] \[\times\Gamma(\frac{2}{\gamma}(\frac{\beta}{2}+\alpha-Q))\frac{S_ {\frac{\gamma}{2}}(2Q-\frac{\beta}{2}-\alpha)}{\Gamma_{\frac{\gamma}{2}}(Q- \alpha)}\int_{\mathbb{i}\mathbb{R}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2\pi(Q-2 \sigma)\sigma_{2}}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha+\frac{\beta}{2 }-Q)\pm\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{\beta}{2}+ Q)\pm\sigma_{2})}\]
Using equations (A.10) and (A.11) for the double gamma function we can simplify:
\[\frac{\Gamma(\frac{2}{\gamma}(\frac{\beta}{2}+\alpha-Q))\Gamma( \frac{\gamma\alpha}{2}+\frac{\gamma\beta}{4}-\frac{\gamma^{2}}{4})\Gamma_{ \frac{\gamma}{2}}(\alpha+\frac{\beta}{2})}{\Gamma_{\frac{\gamma}{2}}(\frac{ \beta}{2}+\alpha-Q)}=2\pi\left(\frac{\gamma}{2}\right)^{(\frac{\gamma}{2}- \frac{\gamma}{\gamma})(\alpha+\frac{\beta}{2}-Q)+1}.\]
From here we arrive at the desired expression \(G_{\mathrm{Hos}}\) for \(G_{\mu_{B}}(\alpha,\beta)\):
\[2\pi 2^{-\alpha Q+\frac{\alpha^{2}}{2}}2^{\frac{\beta}{2}(Q-\frac{\beta}{2})}\left(\frac{\pi(\frac{\gamma}{2})^{2-\frac{\gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{1}{\gamma}(Q-\alpha-\frac{\beta}{2})}\frac{\Gamma_{\frac{\gamma}{2}}(2Q-\frac{\beta}{2})\Gamma_{\frac{\gamma}{2}}(Q-\frac{\beta}{2})^{3}}{\Gamma_{\frac{\gamma}{2}}(Q-\alpha)\Gamma_{\frac{\gamma}{2}}(Q-\beta)\Gamma_{\frac{\gamma}{2}}(\alpha)\Gamma_{\frac{\gamma}{2}}(Q)\Gamma_{\frac{\gamma}{2}}(\frac{\beta}{2})}\] \[\times\int_{\mathbf{i}\mathbb{R}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2\mathbf{i}\pi(Q-2\sigma)\sigma_{2}}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)\pm\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{\beta}{2}+Q)\pm\sigma_{2})}.\]
**Remark 4.4**.: _As a sanity check we will now set \(\beta=0\) and recover the FZZ formula of [1, Equation (1.4)]. Let us first simplify the prefactor in front of the contour integral:_
\[2\pi 2^{-\alpha Q+\frac{\alpha^{2}}{2}}\left(\frac{\pi(\frac{\gamma}{2})^{2- \frac{\gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}} {4})}\right)^{\frac{1}{\gamma}(Q-\alpha)}\frac{\Gamma_{\frac{\gamma}{2}}(2Q- \alpha)\Gamma_{\frac{\gamma}{2}}(Q)}{\Gamma_{\frac{\gamma}{2}}(Q-\alpha)\Gamma _{\frac{\gamma}{2}}(\frac{\beta}{2})}.\]
_For this purpose record:_
\[\frac{\Gamma_{\frac{\gamma}{2}}(2Q-\alpha)}{\Gamma_{\frac{\gamma}{2}}(Q- \alpha)}=2\pi\left(\frac{\gamma}{2}\right)^{1+(\frac{\gamma}{2}-\frac{2}{ \gamma})(Q-\alpha)}\frac{1}{\Gamma(1+\frac{\gamma}{2}(Q-\alpha))\Gamma(\frac{2 }{\gamma}(Q-\alpha))},\quad\frac{\Gamma_{\frac{\gamma}{2}}(Q)}{\Gamma_{\frac{ \gamma}{2}}(\frac{\beta}{2})}\underset{\beta\to 0}{\sim}\pi\beta.\]
_For the integral part we have again deferred the residue computation to the appendix. The result is given by Lemma A.4 and equals:_
\[\frac{\cos(\pi(Q-2\sigma)(Q-\alpha))}{2\pi\sin(\frac{\pi\gamma}{2}(\alpha- \frac{\gamma}{2}))\sin(\frac{2\pi}{\gamma}(\alpha-Q))}.\]
_Putting everything together we thus get:_
\[G_{\mu_{B}}(\alpha,0)=\left(\frac{\pi(\frac{\gamma}{2})^{2-\frac{ \gamma^{2}}{2}}\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})} \right)^{\frac{1}{\gamma}(Q-\alpha)}\frac{2\pi 2^{2-\alpha Q+\frac{\alpha^{2}}{2}}\left(\frac{\gamma}{2}\right)^{1+(\frac{ \gamma}{2}-\frac{\beta}{2})(Q-\alpha)}\cos(\pi(Q-2\sigma)(Q-\alpha))}{\Gamma( 1+\frac{\gamma}{2}(Q-\alpha))\Gamma(\frac{2}{\gamma}(Q-\alpha))\sin(\frac{\pi \gamma}{2}(\alpha-\frac{\gamma}{2}))\sin(\frac{2\pi}{\gamma}(\alpha-Q))}\] \[=\pi^{2}\gamma 2^{-\alpha Q+\frac{\alpha^{2}}{2}}\left(\frac{\pi \Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{1}{ \gamma}(Q-\alpha)}\frac{1}{\Gamma(1+\frac{\gamma}{2}(Q-\alpha))\Gamma(\frac{2 }{\gamma}(Q-\alpha))}\frac{\cos(\pi(Q-2\sigma)(Q-\alpha))}{\sin(\frac{\pi \gamma}{2}(\alpha-\frac{\gamma}{2}))\sin(\frac{2\pi}{\gamma}(\alpha-Q))}\] \[=\frac{4}{\gamma}2^{-\alpha Q+\frac{\alpha^{2}}{2}}\left(\frac{\pi \Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})}\right)^{\frac{1}{ \gamma}(Q-\alpha)}\Gamma(\frac{\gamma\alpha}{2}-\frac{\gamma^{2}}{4})\Gamma( \frac{2}{\gamma}(\alpha-Q))\cos(\pi(Q-2\sigma)(Q-\alpha)).\]
_This matches the FZZ formula._
## 5. Reflection coefficient as the limit of \(H\): proof of Lemma 3.9
In this section we prove Lemma 3.9. This is analogous to [13, Proposition 8.1], which obtains the LCFT sphere reflection coefficient from the three-point function. Our high level argument is similar, but with the additional difficulty that correlation functions are not GMC moments due to the presence of both bulk and boundary Liouville potentials.
Throughout this section we consider \(\beta_{i},\sigma_{i},\mu_{i}\) with \(i=1,2,3\) as given in Lemma 3.9. As in Definition 2.16, it will be more convenient to work with the horizontal strip \(\mathcal{S}=\mathbb{R}\times(0,\pi)\). Let \(P_{\mathcal{S}}\) be the law of a free-boundary GFF on \(\mathcal{S}\). Namely, \(P_{\mathcal{S}}\) is the law of \(h_{\mathcal{S}}=h_{\mathbb{H}}\circ\exp\) where \(h_{\mathbb{H}}\) is sampled from \(P_{\mathbb{H}}\). Let \(\psi\) be the field from Definition 2.15 with \(\beta\) replaced by \(\beta_{1}\). In particular, the lateral component of \(\psi\) has the same law as that of \(h_{\mathcal{S}}\). For \(c\in\mathbb{R}\), write \(A^{\prime}=\mathcal{A}_{\psi+c}(\mathcal{S}),L^{\prime}_{1}=\mathcal{L}_{\psi+c}(\mathbb{R})\), \(L^{\prime}_{2}=\mathcal{L}_{\psi+c}(\mathbb{R}\times\{\pi\})\). Define
\[\mathfrak{R}_{\psi+c}:=(Q-\beta_{1})\left(\gamma(\beta_{1}-Q+\frac{\gamma}{2} )A^{\prime}+\frac{\gamma^{2}}{2}A^{\prime}(\sum_{i=1}^{2}\mu_{i}L^{\prime}_{i} )+\frac{\gamma^{2}}{4}(\sum_{i=1}^{2}\mu_{i}L^{\prime}_{i})^{2}\right)e^{-A^ {\prime}-\sum_{i=1}^{2}\mu_{i}L^{\prime}_{i}}. \tag{5.1}\]
Recall \(\hat{R}(\beta_{1},\sigma_{1},\sigma_{2})\) from Definition 2.16. We have \(\hat{R}(\beta_{1},\sigma_{1},\sigma_{2})=\int_{-\infty}^{\infty}e^{(\beta_{1} -Q)c}\mathbb{E}[\mathfrak{R}_{\psi+c}]\,dc\).
We now express \(H\) in terms of a field on \(\mathcal{S}\). Sample \((h,\mathbf{c})\) from \(P_{\mathcal{S}}\times[e^{(\frac{1}{2}\sum\beta_{i}-Q)c}\,dc]\), and let
\[\hat{h}(z)=h(z)-(Q-\beta_{1})(0\vee\Re z)-(Q-\beta_{2})(0\vee(-\Re z))+\frac{ \beta_{3}}{2}G_{\mathcal{S}}(z,\mathrm{i}\pi). \tag{5.2}\]
where \(G_{\mathcal{S}}(z,w)=G_{\mathbb{H}}(e^{z},e^{w})\). Let \(\phi=\hat{h}+\mathbf{c}\) and \(\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}\) be the law of \(\phi\). Let \(\hat{A}=\mathcal{A}_{\phi}(\mathcal{S})\), \(\hat{L}_{1}=\mathcal{L}_{\phi}(0,\infty)\), \(\hat{L}_{2}=\mathcal{L}_{\phi}(\mathbb{R}\times\{\pi\}),\hat{L}_{3}=\mathcal{L}_{\phi}(-\infty,0)\). In light of \(\hat{H}_{(\mu_{1},\mu_{2},\mu_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}\) from (2.8) and Definition 2.13, we define
\[\mathfrak{H}_{\phi}:=(\gamma(s+\frac{\gamma}{2})\hat{A}+\frac{\gamma^{2}}{2} \hat{A}(\sum_{i=1}^{3}\mu_{i}\hat{L}_{i})+\frac{\gamma^{2}}{4}(\sum_{i=1}^{3} \mu_{i}\hat{L}_{i})^{2})e^{-\hat{A}-\sum_{i=1}^{3}\mu_{i}\hat{L}_{i}}\quad \text{with }s=\frac{1}{2}\sum\beta_{i}-Q. \tag{5.3}\]
By [1, Lemma 2.9], the LQG observables of \(\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}\) and \(\mathrm{LF}_{\mathbb{H}}^{(\beta_{1},0),(\beta_{2},1),(\beta_{3},\infty)}\) agree in law, under the coordinate change \(z\mapsto e^{-z}\). In particular, \(\hat{H}_{(\sigma_{1},\sigma_{2},\sigma_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}= \mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[\mathfrak{H}_{\phi}]\). Therefore Lemma 3.9 can be rephrased as
\[\lim_{\beta_{3}\downarrow\beta_{1}-\beta_{2}}(\beta_{2}+\beta_{3}-\beta_{1}) \mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[\mathfrak{H}_{\phi} ]=2\int_{-\infty}^{\infty}e^{(\beta_{1}-Q)c}\mathbb{E}[\mathfrak{R}_{\psi+c}] \,dc. \tag{5.4}\]
The rest of this section is devoted to the proof of (5.4).
### Convergence when conditioning on field average maximum
In this subsection we prove the following variant of (5.4) where we condition on the field average maximum.
**Proposition 5.1**.: _For \(\beta_{1}\in(\frac{\gamma}{2}\vee(Q-\gamma),Q)\), \(\beta_{2}\in(0,\beta_{1})\) and \(\beta_{3}>\beta_{1}-\beta_{2}\), sample \(\phi\) from \(\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}\). Let \(M_{\phi}\) be the supremum over \(t>0\) of the average of \(\phi\) on \(\{t\}\times(0,\pi)\). Fix \(m\in\mathbb{R}\). Write \(T_{\phi-m}=(\mathcal{A}_{\phi-m}(\mathcal{S}),\mathcal{L}_{\phi-m}((0,\infty) \times\{0\}),\mathcal{L}_{\phi-m}(\mathbb{R}\times\{\pi\}),\mathcal{L}_{\phi-m }((-\infty,0)\times\{0\}))\). Let \(f:\mathbb{R}^{4}\to\mathbb{R}\) be a bounded continuous function. Then_
\[\lim_{\beta_{3}\downarrow\beta_{1}-\beta_{2}}\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[f(T_{\phi-m})\mid M_{\phi}=m]=\mathbb{E}[f(\mathcal{A}_{ \psi}(\mathcal{S}),\mathcal{L}_{\psi}(\mathbb{R}),\mathcal{L}_{\psi}(\mathbb{R} \times\{\pi\}),0)] \tag{5.5}\]
_where \(\psi\) on the right hand side is the field from Definition 2.15 with \(\beta\) replaced by \(\beta_{1}\)._
We prove Proposition 5.1 by proving the following two lemmas.
**Lemma 5.2**.: _Sample \(h\) from \(P_{\mathcal{S}}\) and define \(\widetilde{h}\) in terms of \(h\) as \(\hat{h}\) in (5.2). For \(M>0\), let \(\widetilde{T}_{\widetilde{h}-M}=(\mathcal{A}_{\widetilde{h}-M}(\mathcal{S}), \mathcal{L}_{\widetilde{h}-M}(0,\infty),\mathcal{L}_{\widetilde{h}-M}(\mathbb{ R}\times\{\pi\}),\mathcal{L}_{\widetilde{h}-M}(-\infty,0))\). Let \(M_{\widetilde{h}}\) be the supremum over \(t>0\) of the average of \(\widetilde{h}\) on \(\{t\}\times(0,\pi)\). Let \(f:\mathbb{R}^{4}\to\mathbb{R}\) be a bounded continuous function. Then_
\[\lim_{M\to\infty}\mathbb{E}[f(\widetilde{T}_{\widetilde{h}-M})\mid M_{\widetilde{h}}=M]=\mathbb{E}[f(\mathcal{A}_{\psi}(\mathcal{S}),\mathcal{L}_{\psi}(\mathbb{R}),\mathcal{L}_{\psi}(\mathbb{R}\times\{\pi\}),0)]. \tag{5.6}\]
_Moreover, the convergence is uniform in \(\beta_{3}\in[\beta_{1}-\beta_{2},\beta_{1}-\beta_{2}+\varepsilon]\) for \(\varepsilon>0\) small enough._
Proof.: Consider \(\widetilde{h}\) conditioned on \(M_{\widetilde{h}}=M\). Let \(X_{t}^{\widetilde{h}}\) be the average of \(\widetilde{h}\) on \(\{t\}\times(0,\pi)\), and define \(X_{t}^{\psi}\) likewise. Let \(\tau=\inf\{t\,:\,X_{t}^{\widetilde{h}}>M-\sqrt{M}\}\) and \(\sigma=\inf\{t\,:\,X_{t}^{\psi}>-\sqrt{M}\}\). We first show that we can couple \(\widetilde{h}\) and \(\psi\) so that with probability \(1-o_{M}(1)\),
\[(\widetilde{h}(\cdot-\tau)-M)|_{\mathcal{S}_{+}}=\psi(\cdot-\sigma)|_{ \mathcal{S}_{+}},\quad\text{where }\mathcal{S}_{+}=(0,\infty)\times(0,\pi). \tag{5.7}\]
We can couple \((X_{t+\tau}^{\widetilde{h}}-M)_{t\geq 0}\) and \((X_{t+\sigma}^{\psi})_{t\geq 0}\) to agree almost surely; indeed they have the law of \((B_{2t}-(Q-\beta_{1})t-\sqrt{M})_{t\geq 0}\) conditioned to have maximum value \(0\), where \(B_{t}\) is standard Brownian motion. Let \(h^{2},\widetilde{h}^{2},\psi^{2}\) denote the lateral components of \(h,\widetilde{h},\psi\), i.e. the projections to the space of distributions with mean zero on \(\{t\}\times(0,\pi)\) for all \(t\). We have \(\tau\to\infty\) in probability as \(M\to\infty\), so with probability \(1-o_{M}(1)\) the Dirichlet energy of \(\gamma G_{\mathcal{S}}(\cdot,\mathfrak{i}\pi)\) on \((\tau,\infty)\times(0,\pi)\) is close to zero. Thus the law of \((h^{2}+\gamma G_{\mathcal{S}}(\cdot,i\pi))|_{(\tau,\infty)\times(0,\pi)}\) is within \(o_{M}(1)\) in total variation distance to the law of \(h^{2}|_{(\tau,\infty)\times(0,\pi)}\)[13, Proposition 2.9], so we can further couple \(\widetilde{h}^{2}(\cdot+\tau)|_{\mathcal{S}_{+}}\) and \(\psi^{2}(\cdot+\sigma)|_{\mathcal{S}_{+}}\) to agree with probability \(1-o_{M}(1)\). This gives the desired coupling (5.7).
To prove (5.6), it suffices to show that as \(M\to\infty\) the quantum areas and lengths of \((\widetilde{h}(\cdot-\tau)-M)|_{\mathcal{S}\setminus\mathcal{S}_{+}}\) and \(\psi(\cdot-\sigma)|_{\mathcal{S}\setminus\mathcal{S}_{+}}\) tend to zero in probability; this holds because \((X_{t+\tau}^{\widetilde{h}}-M)_{t\leq 0}\) and \((X_{t+\sigma}^{\psi})_{t\leq 0}\) are bounded above by \(-\sqrt{M}\) with probability \(1-o_{M}(1)\). The uniform convergence in \(\beta_{3}\) can be checked by inspecting the argument above.
**Lemma 5.3**.: _Sample \((h,\mathbf{c})\) from the measure \(\mathbb{F}:=P_{\mathcal{S}}\times[e^{(\frac{1}{2}\sum\beta_{i}-Q)c}\,dc]\) and let \(\hat{h}\) be as in (5.2). Write \(\phi=\hat{h}+\mathbf{c}\) so that the law of \(\phi\) is \(\operatorname{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}\). Let \(M_{\phi}\) be the supremum over \(t>0\) of the average of \(\phi\) on \(\{t\}\times(0,\pi)\). Let \(\delta=\frac{1}{2}(\beta_{2}+\beta_{3}-\beta_{1})\). Fix \(m\in\mathbb{R}\). Then under \(\mathbb{F}\) the conditional law of \(\mathbf{c}\) given \(\{M_{\phi}=m\}\) is \(1_{c<m}\delta e^{\delta(c-m)}\,dc\). Moreover, the law of \(M_{\phi}\) is_
\[\operatorname{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[M_{\phi}\in dm ]=\frac{2(Q-\beta_{1})}{\beta_{2}+\beta_{3}-\beta_{1}}e^{(\frac{1}{2}\sum \beta_{i}-Q)m}. \tag{5.8}\]
Proof.: For a standard Brownian motion \((B_{t})_{t\geq 0}\) we have \(\mathbb{P}[\sup_{t\geq 0}B_{2t}-bt\geq m]=\min(e^{-bm},1)\) for \(b>0\). Therefore, with \(M_{\widetilde{h}}\) defined in Lemma 5.2, we see that \(\mathbb{F}[M_{\phi}\geq m\text{ and }\mathbf{c}\in dc]\) equals
\[e^{(\frac{1}{2}\sum\beta_{i}-Q)c}P_{\mathcal{S}}[M_{\widetilde{h}}\geq m-c]= e^{(\frac{1}{2}\sum\beta_{i}-Q)c}(1_{m>c}e^{-(Q-\beta_{1})(m-c)}+1_{m\leq c}).\]
Therefore \(\mathbb{F}[M_{\phi}\in dm\text{ and }\mathbf{c}\in dc]=1_{m>c}(Q-\beta_{1})e^{( \frac{1}{2}\sum\beta_{i}-Q)c-(Q-\beta_{1})(m-c)}\), hence
\[\mathbb{F}[M_{\phi}\in dm]=\int 1_{m>c}(Q-\beta_{1})e^{(\frac{1}{2}\sum\beta_{i}-Q)c-(Q- \beta_{1})(m-c)}\,dc=\frac{2(Q-\beta_{1})}{\beta_{2}+\beta_{3}-\beta_{1}}e^{( \frac{1}{2}\sum\beta_{i}-Q)m},\]
which gives (5.8). By Bayes' rule we have \(\mathbb{F}[\mathbf{c}\in dc\mid M_{\phi}=m]=1_{c<m}\delta e^{\delta(c-m)}\,dc\) as desired.
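The Brownian exit formula used at the start of this proof can also be checked by a quick Monte Carlo simulation; this is only a sanity check, with a small downward bias coming from the finite horizon and time step.

```python
# Monte Carlo check of P[ sup_{t>=0} (B_{2t} - b t) >= m ] = exp(-b m) for b, m > 0.
import numpy as np

rng = np.random.default_rng(0)
b, m = 1.0, 0.5
dt, T, n_paths = 1e-3, 20.0, 10_000
n_steps = int(T / dt)

pos = np.zeros(n_paths)            # current value of B_{2t} - b t along each path
run_max = np.zeros(n_paths)        # running maximum, equal to 0 at time 0
for _ in range(n_steps):
    pos += rng.normal(-b * dt, np.sqrt(2 * dt), size=n_paths)
    np.maximum(run_max, pos, out=run_max)

print((run_max >= m).mean(), np.exp(-b * m))   # roughly 0.59-0.61 versus 0.6065...
```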
Proof of Proposition 5.1.: By Lemma 5.3, with \(\delta=\frac{1}{2}(\beta_{2}+\beta_{3}-\beta_{1})\) we have
\[\operatorname{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[f(T_{\phi-m}) \mid M_{\phi}=m]=\int_{-\infty}^{m}\delta e^{\delta(c-m)}\mathbb{E}[f( \widetilde{T}_{\widetilde{h}-(m-c)})\mid M_{\widetilde{h}}=m-c]\,dc.\]
As \(\beta_{3}\downarrow\beta_{1}-\beta_{2}\) we have \(\delta\downarrow 0\). By Lemma 5.2 we get the desired (5.5).
### Removing the maximum conditioning
We prove Lemma 3.9 using the following lemma.
**Lemma 5.4**.: _Fix \(\beta_{1}\in(\frac{\gamma}{2}\vee(Q-\gamma),Q)\) and \(\beta_{2}\in(0,\beta_{1})\). Suppose \(\beta_{3}>\beta_{1}-\beta_{2}\), \(\beta_{3}<2Q-\beta_{1}-\beta_{2}\), and \(\beta_{3}<\beta_{1}+\beta_{2}\). Let \(\Re\mu_{i}>0\) for \(i=1,2,3\). Recall \(M_{\phi}\) from Proposition 5.1 and \(\mathfrak{H}_{\phi}\) from (5.3). We have \(\lim_{C\to\infty}(\beta_{2}+\beta_{3}-\beta_{1})\mathrm{LF}_{\mathcal{S}}^{ \beta_{1},\beta_{2},\beta_{3}}[|\mathfrak{H}_{\phi}|1_{|M_{\phi}|>C}]=0\). Moreover, the convergence is uniform in \(\beta_{3}\in(\beta_{1}-\beta_{2},\beta_{1}-\beta_{2}+\varepsilon]\) for \(\varepsilon>0\) small enough._
Proof of Lemma 3.9 given Lemma 5.4.: Recall \(\mathfrak{R}_{\psi+m}\) from (5.1) and compare with \(\mathfrak{H}_{\phi}\). By Proposition 5.1 we have \(\lim_{\beta_{3}\downarrow\beta_{1}-\beta_{2}}\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[\mathfrak{H}_{\phi}\mid M_{\phi}=m]=\frac{1}{Q-\beta_{1}}\mathbb{E}[\mathfrak{R}_{\psi+m}]\). Moreover, for each \(C>0\) the convergence is uniform in \(m\in(-C,C)\). Using the law of \(M_{\phi}\) from (5.8), we get
\[\lim_{\beta_{3}\downarrow\beta_{1}-\beta_{2}}(\beta_{2}+\beta_{3}-\beta_{1}) \mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[1_{|M_{\phi}|\leq C }\mathfrak{H}_{\phi}]=2\int_{-C}^{C}e^{(\beta_{1}-Q)m}\mathbb{E}[\mathfrak{R} _{\psi+m}]\,dm.\]
Sending \(C\to\infty\) we get the desired (5.4) from Lemma 5.4.
Proof of Lemma 5.4.: In this proof let \(K\) denote a deterministic constant whose value may change from line to line, but which does not depend on \(\beta_{3}\). Set \(s=\frac{1}{2}\sum\beta_{i}-Q\). Choose \(q\in(-\frac{s}{\gamma},\frac{2}{\gamma^{2}})\) such that \(q<\frac{1}{\gamma}(Q-\beta_{j})\) for \(j=2,3\). This choice is possible due to the conditions placed on \(\beta_{1},\beta_{2},\beta_{3}\). The upper bounds for \(q\) imply that \(\mathbb{E}[\mathcal{A}_{\widetilde{h}}((-\infty,1)\times(0,\pi))^{q}]\) and \(\mathbb{E}[\mathcal{L}_{\widetilde{h}}((-\infty,1)\times\{0,\pi\})^{2q}]\) are finite; see [10, Corollaries 3.8 and 3.10]. Combining with Lemma C.4 gives
\[\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[\mathcal{A}_{\phi} (\mathcal{S})^{q}\mid M_{\phi}]\leq Ke^{q\gamma M_{\phi}}\quad\text{and}\quad \mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[\mathcal{L}_{\phi}( \partial\mathcal{S})^{2q}\mid M_{\phi}]\leq Ke^{q\gamma M_{\phi}}.\]
Writing \(L=\sum\mu_{i}L_{i}\), we now show that \(\lim_{C\to\infty}\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[1_{|M_{\phi}|>C}|ALe^{-A-L}|]=0\). Clearly we have \(|ALe^{-A-L}|\leq KA^{q}\). Then the law of \(M_{\phi}\) from (5.8) and the moment bounds give
\[(\beta_{2}+\beta_{3}-\beta_{1})\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[|ALe^{-A-L}|1_{|M_{\phi}|>C}]\] \[\leq K\int_{-\infty}^{-C}\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[A^{q}\mid M_{\phi}=m]e^{sm}\,\mathrm{d}m+K\int_{C}^{\infty}\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[Ae^{-A}\mid M_{\phi}=m]e^{sm}\,\mathrm{d}m\] \[\leq K\int_{-\infty}^{-C}e^{q\gamma m+sm}\,\mathrm{d}m+K\int_{C}^{\infty}e^{sm}\,\mathrm{d}m\leq K(e^{-(\gamma q+s)C}+e^{sC})\xrightarrow{C\to\infty}0.\]
Here, we are using \(\gamma q+s>0>s\) which follows from our conditions on \(q\) and \(s\). This proves \(\lim_{C\to\infty}(\beta_{2}+\beta_{3}-\beta_{1})\mathrm{LF}_{\mathcal{S}}^{\beta_{1},\beta_{2},\beta_{3}}[1_{|M_{\phi}|>C}|ALe^{-A-L}|]=0\). The analogous terms for \(Ae^{-A-L}\) and \(L^{2}e^{-A-L}\) are similarly bounded, so the triangle inequality yields the result. The uniform convergence follows by inspecting the proof.
## Appendix A Background on special functions
### The hypergeometric equation
Here we recall some facts we have used on the hypergeometric equation and its solution space. For \(A>0\) let \(\Gamma(A)=\int_{0}^{\infty}t^{A-1}e^{-t}dt\) denote the standard Gamma function which can then be analytically extended to \(\mathbb{C}\setminus\{-\mathbb{N}\}\). Record the following properties:
(A.1) \[\Gamma(A+1)=A\Gamma(A),\quad\Gamma(A)\Gamma(1-A)=\frac{\pi}{\sin(\pi A)},\quad \Gamma(A)\Gamma(A+\frac{1}{2})=\sqrt{\pi}2^{1-2A}\Gamma(2A).\]
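As a quick numerical sanity check of (A.1) (purely illustrative, not used anywhere in the arguments), one can evaluate both sides at an arbitrary complex point, for instance with the Python library mpmath; the test point and working precision below are arbitrary choices.

```python
# Illustrative numerical check of the Gamma function identities (A.1) via mpmath.
from mpmath import mp, gamma, sin, sqrt, pi, mpc

mp.dps = 30                       # working precision in digits (arbitrary choice)
A = mpc('0.3', '0.7')             # arbitrary complex test point

# Recursion Gamma(A+1) = A Gamma(A)
assert abs(gamma(A + 1) - A * gamma(A)) < mp.mpf('1e-25')
# Reflection formula Gamma(A) Gamma(1-A) = pi / sin(pi A)
assert abs(gamma(A) * gamma(1 - A) - pi / sin(pi * A)) < mp.mpf('1e-25')
# Duplication formula Gamma(A) Gamma(A + 1/2) = sqrt(pi) 2^(1-2A) Gamma(2A)
assert abs(gamma(A) * gamma(A + mp.mpf('0.5'))
           - sqrt(pi) * mp.power(2, 1 - 2 * A) * gamma(2 * A)) < mp.mpf('1e-25')
print("all three identities hold at A =", A)
```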
Let \((A)_{n}:=\frac{\Gamma(A+n)}{\Gamma(A)}\). For real numbers \(A,B,C\), and \(t\) we define the hypergeometric function \(F\) by:
(A.2) \[F(A,B,C,t):=\sum_{n=0}^{\infty}\frac{(A)_{n}(B)_{n}}{n!(C)_{n}}t^{n}.\]
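The series (A.2) converges for \(|t|<1\). As an illustration (and assuming one trusts mpmath's built-in \({}_2F_1\) as a reference value), the following sketch compares a truncated partial sum of (A.2) with the library evaluation at arbitrary test parameters.

```python
# Illustrative comparison of a truncated partial sum of (A.2) with mpmath's 2F1.
from mpmath import mp, hyp2f1, rf, factorial, mpf

mp.dps = 30
A, B, C, t = mpf('0.37'), mpf('1.21'), mpf('2.05'), mpf('0.4')   # arbitrary test values

def F_series(A, B, C, t, N=200):
    # Partial sum of sum_n (A)_n (B)_n / (n! (C)_n) t^n, with (A)_n = Gamma(A+n)/Gamma(A).
    return sum(rf(A, n) * rf(B, n) / (factorial(n) * rf(C, n)) * t**n for n in range(N))

print(F_series(A, B, C, t))   # truncated series
print(hyp2f1(A, B, C, t))     # library value; the two agree to high precision
```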
This function can be used to solve the following hypergeometric equation:
(A.3) \[\left(t(1-t)\frac{d^{2}}{dt^{2}}+(C-(A+B+1)t)\frac{d}{dt}-AB\right)f(t)=0.\]
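One can also check numerically that \(F\) indeed solves (A.3); the illustrative sketch below evaluates the left hand side of (A.3) at an arbitrary interior point using numerical differentiation, and the residual vanishes up to differentiation error.

```python
# Illustrative check that f(t) = F(A,B,C,t) solves the hypergeometric equation (A.3).
from mpmath import mp, hyp2f1, diff, mpf

mp.dps = 30
A, B, C = mpf('0.37'), mpf('1.21'), mpf('2.05')   # arbitrary test parameters
f = lambda t: hyp2f1(A, B, C, t)

t0 = mpf('0.3')
residual = (t0 * (1 - t0) * diff(f, t0, 2)
            + (C - (A + B + 1) * t0) * diff(f, t0, 1)
            - A * B * f(t0))
print(residual)   # numerically zero, up to the error of numerical differentiation
```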
We can give the following three bases of solutions corresponding respectively to \(t\in(0,1)\), \(t\in(1,+\infty)\) and \(t\in(-\infty,0)\). Under the assumption that \(C\) and \(C-A-B\) are not integers, for \(t\in(0,1)\) we can write:
(A.4) \[f(t) =C_{1}F(A,B,C,t)+C_{2}^{+}t^{1-C}F(1+A-C,1+B-C,2-C,t)\] \[=B_{1}F(A,B,1+A+B-C,1-t)\] \[+B_{2}^{-}(1-t)^{C-A-B}F(C-A,C-B,1+C-A-B,1-t).\]
Moving next to \(t\in(1,+\infty)\), under the assumption that \(C-A-B\) and \(A-B\) are not integers:
(A.5) \[f(t) =B_{1}F(A,B,1+A+B-C,1-t)\] \[+B_{2}^{+}(1-t)^{C-A-B}F(C-A,C-B,1+C-A-B,1-t)\] \[=D_{1}t^{-A}F(A,1+A-C,1+A-B,t^{-1})\] \[+D_{2}^{-}t^{-B}F(B,1+B-C,1+B-A,t^{-1}).\]
Lastly for \(t\in(-\infty,0)\), under the assumption that \(C\) and \(A-B\) are not integers:
(A.6) \[f(t) =C_{1}F(A,B,C,t)+C_{2}^{-}t^{1-C}F(1+A-C,1+B-C,2-C,t)\] \[=D_{1}t^{-A}F(A,1+A-C,1+A-B,t^{-1})\] \[+D_{2}^{+}t^{-B}F(B,1+B-C,1+B-A,t^{-1}).\]
For each of the three cases we have four real constants that parametrize the solution space, namely \(C_{1},C_{2}^{+},B_{1},B_{2}^{-}\), then \(B_{1},B_{2}^{+},D_{1},D_{2}^{-}\), and \(D_{1},D_{2}^{+},C_{1},C_{2}^{-}\). We thus expect an explicit change of basis formula linking \(C_{1},C_{2}^{+}\) to \(B_{1},B_{2}^{-}\), and similarly for the two other cases. This is precisely the content of the so-called connection formulas,
(A.7) \[\begin{pmatrix}C_{1}\\ C_{2}^{-}\end{pmatrix}=\begin{pmatrix}\frac{\Gamma(1-C)\Gamma(A-B+1)}{\Gamma(A- C+1)\Gamma(1-B)}&\frac{\Gamma(1-C)\Gamma(B-A+1)}{\Gamma(B-C+1)\Gamma(1-A)}\\ \frac{\Gamma(C-1)\Gamma(A-B+1)}{\Gamma(A)\Gamma(C-B)}&\frac{\Gamma(C-1) \Gamma(B-A+1)}{\Gamma(B)\Gamma(C-A)}\end{pmatrix}\begin{pmatrix}D_{1}\\ D_{2}^{+}\end{pmatrix},\]
(A.8) \[\begin{pmatrix}B_{1}\\ B_{2}^{-}\end{pmatrix}=\begin{pmatrix}\frac{\Gamma(C)\Gamma(C-A-B)}{\Gamma(C-A )\Gamma(C-B)}&\frac{\Gamma(2-C)\Gamma(C-A-B)}{\Gamma(1-A)\Gamma(1-B)}\\ \frac{\Gamma(C)\Gamma(A+B-C)}{\Gamma(A)\Gamma(B)}&\frac{\Gamma(2-C)\Gamma(A+ B-C)}{\Gamma(A-C+1)\Gamma(B-C+1)}\end{pmatrix}\begin{pmatrix}C_{1}\\ C_{2}^{+}\end{pmatrix}.\]
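For instance, specializing (A.8) to the solution \(C_{1}=1\), \(C_{2}^{+}=0\) (i.e. \(f=F(A,B,C,\cdot)\) on \((0,1)\)) reproduces the classical connection formula, which can be checked numerically as in the illustrative sketch below; the test values of \(A,B,C,t\) are arbitrary.

```python
# Illustrative check of the first column of (A.8): for C1 = 1, C2+ = 0, i.e.
# f = F(A,B,C,.), the coefficients B1 and B2- in the basis around t = 1 are the
# entries of the first column of the connection matrix.
from mpmath import mp, hyp2f1, gamma, mpf

mp.dps = 30
A, B, C, t = mpf('0.37'), mpf('1.21'), mpf('2.05'), mpf('0.35')   # arbitrary test values

B1  = gamma(C) * gamma(C - A - B) / (gamma(C - A) * gamma(C - B))
B2m = gamma(C) * gamma(A + B - C) / (gamma(A) * gamma(B))

lhs = hyp2f1(A, B, C, t)
rhs = (B1 * hyp2f1(A, B, 1 + A + B - C, 1 - t)
       + B2m * (1 - t)**(C - A - B) * hyp2f1(C - A, C - B, 1 + C - A - B, 1 - t))
print(lhs, rhs)   # the two values agree
```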
Note that in the present paper we will have \(C_{2}^{+}\neq C_{2}^{-}\), \(B_{2}^{+}\neq B_{2}^{-}\), \(D_{2}^{+}\neq D_{2}^{-}\), which is why we must distinguish which interval \(t\) belongs to.
### Double Gamma and Double Sine functions
We will now provide some explanations on the functions \(\Gamma_{\frac{\gamma}{2}}(x)\) and \(S_{\frac{\gamma}{2}}(x)\) that we have introduced. For all \(\gamma\in(0,2)\) and for \(\Re(x)>0\), \(\Gamma_{\frac{\gamma}{2}}(x)\) is defined by the integral formula,
(A.9) \[\log\Gamma_{\frac{\gamma}{2}}(x)=\int_{0}^{\infty}\frac{dt}{t}\left[\frac{e^{-xt}-e^{-\frac{Qt}{2}}}{(1-e^{-\frac{\gamma t}{2}})(1-e^{-\frac{2t}{\gamma}})}-\frac{(\frac{Q}{2}-x)^{2}}{2}e^{-t}+\frac{x-\frac{Q}{2}}{t}\right],\]
where we have \(Q=\frac{\gamma}{2}+\frac{2}{\gamma}.\) Since the function \(\Gamma_{\frac{\gamma}{2}}(x)\) is continuous it is completely determined by the following two shift equations
(A.10) \[\frac{\Gamma_{\frac{\gamma}{2}}(x)}{\Gamma_{\frac{\gamma}{2}}(x+ \frac{\gamma}{2})} =\frac{1}{\sqrt{2\pi}}\Gamma(\frac{\gamma x}{2})(\frac{\gamma}{2} )^{-\frac{\gamma x}{2}+\frac{1}{2}},\] (A.11) \[\frac{\Gamma_{\frac{\gamma}{2}}(x)}{\Gamma_{\frac{\gamma}{2}}(x+ \frac{2}{\gamma})} =\frac{1}{\sqrt{2\pi}}\Gamma(\frac{2x}{\gamma})(\frac{\gamma}{2} )^{\frac{2x}{\gamma}-\frac{1}{2}},\]
and by its value at \(\frac{Q}{2}\), \(\Gamma_{\frac{\gamma}{2}}(\frac{Q}{2})=1\). Furthermore \(x\mapsto\Gamma_{\frac{\gamma}{2}}(x)\) admits a meromorphic extension to all of \(\mathbb{C}\) with simple poles at \(x=-n\frac{\gamma}{2}-m\frac{2}{\gamma}\) for any \(n,m\in\mathbb{N}\), and \(\Gamma_{\frac{\gamma}{2}}(x)\) is never equal to \(0\). We also need the double sine function defined by:
(A.12) \[S_{\frac{\gamma}{2}}(x)=\frac{\Gamma_{\frac{\gamma}{2}}(x)}{\Gamma_{\frac{ \gamma}{2}}(Q-x)}.\]
It obeys the following two shift equations:
(A.13) \[\frac{S_{\frac{\gamma}{2}}(x+\frac{\gamma}{2})}{S_{\frac{\gamma}{2}}(x)}=2 \sin(\frac{\gamma\pi}{2}x),\quad\frac{S_{\frac{\gamma}{2}}(x+\frac{2}{\gamma}) }{S_{\frac{\gamma}{2}}(x)}=2\sin(\frac{2\pi}{\gamma}x).\]
The double sine function admits a meromorphic extension to \(\mathbb{C}\) with poles at \(x=-n\frac{\gamma}{2}-m\frac{2}{\gamma}\) and with zeros at \(x=Q+n\frac{\gamma}{2}+m\frac{2}{\gamma}\) for any \(n,m\in\mathbb{N}.\) We also record the following asymptotic for \(S_{\frac{\gamma}{2}}(x)\):
(A.14) \[S_{\frac{\gamma}{2}}(x)\sim\begin{cases}e^{-\frac{\mathrm{i}\pi}{2}x(x-Q)}&\text{as}\quad\text{Im}(x)\to\infty,\\ e^{\frac{\mathrm{i}\pi}{2}x(x-Q)}&\text{as}\quad\text{Im}(x)\to-\infty.\end{cases}\]
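As an illustrative numerical sanity check of these conventions, the sketch below computes \(\log\Gamma_{\frac{\gamma}{2}}\) directly from the integral formula (A.9) and verifies the shift equations (A.10) and (A.13) at an arbitrary point. The small cutoff at \(t=0\) is only a numerical convenience (the integrand is regular there), so the two sides agree only up to that truncation error.

```python
# Illustrative numerical check of (A.10) and (A.13), computing log Gamma_{gamma/2}
# directly from the integral formula (A.9).  The integrand is regular at t = 0; we
# cut the integral off at a small eps to avoid round-off cancellation, which limits
# the displayed agreement to roughly the size of eps.
from mpmath import mp, quad, exp, gamma, sin, sqrt, pi, mpf

mp.dps = 30
g = mpf('1.2')            # an arbitrary value of gamma in (0,2)
Q = g / 2 + 2 / g
eps = mpf('1e-9')

def log_gamma_b(x):
    def integrand(t):
        return ((exp(-x * t) - exp(-Q * t / 2)) / ((1 - exp(-g * t / 2)) * (1 - exp(-2 * t / g)))
                - (Q / 2 - x)**2 / 2 * exp(-t) + (x - Q / 2) / t) / t
    return quad(integrand, [eps, mp.inf])

def Gb(x): return exp(log_gamma_b(x))
def Sb(x): return Gb(x) / Gb(Q - x)     # the double sine function of (A.12)

x = mpf('0.8')
# Shift equation (A.10):
print(Gb(x) / Gb(x + g / 2),
      gamma(g * x / 2) * mp.power(g / 2, -g * x / 2 + mpf('0.5')) / sqrt(2 * pi))
# Shift equation (A.13):
print(Sb(x + g / 2) / Sb(x), 2 * sin(pi * g * x / 2))
```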
### Properties of the Ponsot-Teschner function \(H_{\text{PT}}\)
For the purpose of proving Theorem 1.1, we establish the following two facts about \(H_{\text{PT}}.\)
**Lemma A.1**.: _The function \(H_{\text{PT}}\) satisfies the shift equations of Theorem 3.1 satisfied by \(H\)._
Proof.: Recalling (1.11), we introduce a function \(\varphi\) in the following way
(A.15) \[H_{\text{PT}}\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=\int_{\mathcal{C}}\varphi_{( \sigma_{1},\sigma_{2},\sigma_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}(r)dr,\]
where \(\varphi_{(\sigma_{1},\sigma_{2},\sigma_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}(r)\) denotes the \(r\)-dependent integrand of the contour integral \(\int_{\mathcal{C}}\) present in (1.11) times the prefactor in front of the integral (which does not depend on \(r\)).
Checking that \(H_{\text{PT}}\) satisfies the shift equations of Theorem 3.1 is equivalent to checking the following shift equations,
(A.16) \[H_{\text{PT}}\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}= \frac{\Gamma(\chi(\beta_{1}-\chi))\Gamma(1-\chi\beta_{2})}{ \Gamma(1-\frac{\chi}{2}(\beta_{2}+\beta_{3}-\beta_{1}))\Gamma(\frac{\chi}{2}( \beta_{1}+\beta_{3}-\beta_{2}-2\chi))}H_{\text{PT}}\begin{pmatrix}\beta_{1}- \chi,\beta_{2}+\chi,\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}\] \[+\chi^{2}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1- \frac{\gamma^{2}}{4})}\right)^{\frac{\chi}{\gamma}}\frac{\Gamma(1-\chi\beta_{1 })\Gamma(1-\chi\beta_{2})}{\sin(\pi\chi(\chi-\beta_{1}))\Gamma(1+\chi(Q-\frac {\beta}{2}))\Gamma(1-\frac{\chi}{2}(\beta_{1}+\beta_{2}-\beta_{3}))}\] \[\times 2\sin(\pi\chi(\frac{\beta_{1}}{2}+\sigma_{1}+\sigma_{2}-Q)) \sin(\pi\chi(\frac{\beta_{1}}{2}-\sigma_{1}+\sigma_{2}))H_{\text{PT}}\begin{pmatrix} \beta_{1}+\chi,\beta_{2}+\chi,\beta_{3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix},\]
and:
(A.17) \[\frac{\chi^{2}}{\pi}\Gamma(1-\chi\beta_{2})2\sin(\pi\chi(\frac{\beta_ {2}}{2}+\sigma_{2}+\sigma_{3}-Q))\sin(\pi\chi(\frac{\beta_{2}}{2}+\sigma_{2}- \sigma_{3}))H_{\mathrm{PT}}\begin{pmatrix}\beta_{1}+\chi,\beta_{2}+\chi,\beta_{ 3}\\ \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3}\end{pmatrix}\] \[=\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{ \gamma^{2}}{4})}\right)^{-\frac{\chi}{\gamma}}\frac{\Gamma(\chi\beta_{1})}{ \Gamma(\frac{\chi}{2}(\bar{\beta}-2Q))\Gamma(\frac{\chi}{2}(\beta_{1}+\beta_{2 }-\beta_{3}))}H_{\mathrm{PT}}\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}\] \[-\chi^{2}\frac{2\sin(\pi\chi(\frac{\beta_{1}}{2}-\sigma_{1}- \sigma_{2}+Q))\sin(\pi\chi(\frac{\beta_{1}}{2}+\sigma_{1}-\sigma_{2}))\Gamma( 1-\chi\beta_{1}-\chi^{2})}{\sin(\pi\chi\beta_{1})\Gamma(\frac{\chi}{2}(\beta_ {2}+\beta_{3}-\beta_{1}-2\chi))\Gamma(1-\frac{\chi}{2}(\beta_{1}+\beta_{3}- \beta_{2}))}H_{\mathrm{PT}}\begin{pmatrix}\beta_{1}+2\chi,\beta_{2},\beta_{3} \\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}.\]
This formulation of the two shift equations has been derived from Theorem 3.1 by shifting \(\beta_{2}\) to \(\beta_{2}+\chi\) in (3.2) for the first and shifting \(\beta_{1}\) to \(\beta_{1}+\chi\) in (3.3) for the second. We have also used the explicit expression (3.1) for \(g_{\chi}\) and written the difference of cosines as a product of sines. We now compute the following ratios of \(\varphi\),
(A.18) \[\frac{\varphi^{(\beta_{1}-\chi,\beta_{2}+\chi,\beta_{3})}_{( \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3})}(r)}{\varphi^{(\beta_{1},\beta _{2},\beta_{3})}_{(\sigma_{1},\sigma_{2},\sigma_{3})}(r)}= \frac{\Gamma(\frac{\chi}{2}(\beta_{1}+\beta_{3}-\beta_{2}-2\chi ))\Gamma(1-\frac{\chi}{2}(\beta_{2}+\beta_{3}-\beta_{1}))\Gamma(1-\chi\beta_ {1}+\chi^{2})}{\pi\Gamma(1-\chi\beta_{2})}\] \[\times\sin(\pi\chi(\frac{\beta_{1}}{2}-\chi+\sigma_{1}-\sigma_{2} ))\frac{\sin(\pi\chi(-\frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}+\sigma_{1}- \sigma_{3}-r))}{\sin(\pi\chi(\frac{\beta_{2}}{2}+\sigma_{2}-\sigma_{3}-r))},\] (A.19) \[\frac{\varphi^{(\beta_{1}+\chi,\beta_{2}+\chi,\beta_{3})}_{( \sigma_{1},\sigma_{2}+\frac{\chi}{2},\sigma_{3})}(r)}{\varphi^{(\beta_{1},\beta _{2},\beta_{3})}_{(\sigma_{1},\sigma_{2},\sigma_{3})}(r)}= \chi^{-2}\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1- \frac{\gamma^{2}}{4})}\right)^{-\frac{\chi}{\gamma}}\frac{\Gamma(1+\chi(Q- \frac{\bar{\beta}}{2}))\Gamma(1-\frac{\chi}{2}(\beta_{1}+\beta_{2}-\beta_{3} ))}{\Gamma(1-\chi\beta_{1})\Gamma(1-\chi\beta_{2})}\] \[\times\frac{\sin(\pi\chi(\frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}- \chi+\sigma_{1}-\sigma_{3}-r))}{2\sin(\pi\chi(\frac{\beta_{1}}{2}-\chi+\sigma_ {1}+\sigma_{2}))\sin(\pi\chi(\frac{\beta_{2}}{2}+\sigma_{2}-\sigma_{3}-r))}.\]
If we plug these expressions into equation (A.16) and regroup the terms on one side we get:
(A.20) \[\int_{\mathcal{C}}dr\,\varphi^{(\beta_{1},\beta_{2},\beta_{3})}_{( \sigma_{1},\sigma_{2},\sigma_{3})}(r)\Bigg{[}\frac{\sin(\pi\chi(\frac{\beta_{1}} {2}-\chi+\sigma_{1}-\sigma_{2}))\sin(\pi\chi(-\frac{\beta_{1}}{2}+\frac{\beta_ {2}}{2}+\sigma_{1}-\sigma_{3}-r))}{\sin(\pi\chi(\beta_{1}-\chi))\sin(\pi\chi( \frac{\beta_{2}}{2}+\sigma_{2}-\sigma_{3}-r))}-1\] \[+\frac{\sin(\pi\chi(\frac{\beta_{1}}{2}-\sigma_{1}+\sigma_{2})) \sin(\pi\chi(\frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}-\chi+\sigma_{1}-\sigma_{3} -r))}{\sin(\pi\chi(\beta_{1}-\chi))\sin(\pi\chi(\frac{\beta_{2}}{2}+\sigma_{2}- \sigma_{3}-r))}\Bigg{]}.\]
We can verify with some algebra that the integrand of the above integral equals \(0\), hence (A.16) holds. To check the second shift equation, we will need additionally the ratio:
\[\frac{\varphi^{(\beta_{1}+2\chi,\beta_{2},\beta_{3})}_{(\sigma_{1}, \sigma_{2},\sigma_{3})}(r)}{\varphi^{(\beta_{1},\beta_{2},\beta_{3})}_{(\sigma_{ 1},\sigma_{2},\sigma_{3})}(r)}=-\frac{\pi}{\chi^{2}}\frac{\Gamma(1+\chi(Q- \frac{\bar{\beta}}{2}))\Gamma(1-\frac{\chi}{2}(\beta_{1}+\beta_{2}-\beta_{3} ))}{\Gamma(\frac{\chi}{2}(\beta_{1}+\beta_{3}-\beta_{2}))\Gamma(1-\frac{\chi}{2 }(\beta_{2}+\beta_{3}-\beta_{1}-2\chi))\Gamma(1-\chi\beta_{1}-\chi^{2}))\Gamma(1 -\chi\beta_{1})}\] \[\times\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{ \gamma^{2}}{4})}\right)^{-\frac{\chi}{\gamma}}\frac{\sin(\pi\chi(\frac{\beta_{1} }{2}+\frac{\beta_{2}}{2}-\chi+\sigma_{1}-\sigma_{3}-r))}{2\sin(\pi\chi(\frac{ \beta_{1}}{2}+\sigma_{1}-\sigma_{2}))\sin(\pi\chi(\frac{\beta_{1}}{2}+\sigma_{1}+ \sigma_{2}-\chi))\sin(\pi\chi(\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}+\chi+ \sigma_{3}-\sigma_{1}+r))}.\]
Substituting this time into equation (A.17) and regrouping terms on one side we get:
\[\frac{\Gamma(\chi\beta_{1})}{\Gamma(\frac{\chi}{2}(\overline{\beta}-2Q))\Gamma( \frac{\chi}{2}(\beta_{1}+\beta_{2}-\beta_{3}))}\int_{\mathcal{C}}dr\,\varphi_{ (\sigma_{1},\sigma_{2},\sigma_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}(r)\] \[\bigg{[}\frac{\sin(\pi\chi\beta_{1})\sin(\pi\chi(\frac{\beta_{2}} {2}-\chi+\sigma_{2}+\sigma_{3}))\sin(\pi\chi(\frac{\beta_{2}}{2}+\sigma_{2}- \sigma_{3}))\sin(\pi\chi(\frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}-\chi+\sigma_{ 1}-\sigma_{3}-r))}{\sin(\pi\chi(\frac{\beta}{2}-\chi))\sin(\frac{\pi\chi}{2}( \beta_{1}+\beta_{2}-\beta_{3}))\sin(\pi\chi(\frac{\beta_{1}}{2}-\chi+\sigma_{ 1}+\sigma_{2}))\sin(\pi\chi(\frac{\beta_{2}}{2}+\sigma_{2}-\sigma_{3}-r))}-1\] \[-\frac{\sin(\frac{\pi_{2}}{2}(\beta_{1}+\beta_{3}-\beta_{2}))\sin (\frac{\pi_{3}}{2}(\beta_{2}+\beta_{3}-\beta_{1}-2\chi))\sin(\pi\chi(\frac{ \beta_{1}}{2}+\chi-\sigma_{1}-\sigma_{2}))\sin(\pi\chi(\frac{\beta_{1}}{2}+ \frac{\beta_{2}}{2}-\chi+\sigma_{1}-\sigma_{3}-r))}{\sin(\chi(\frac{\beta}{2} -\chi))\sin(\pi\chi(\frac{\beta_{1}}{2}+\beta_{2}-\beta_{3}))\sin(\pi\chi( \frac{\beta_{1}}{2}-\chi+\sigma_{1}+\sigma_{2}))\sin(\pi\chi(\frac{\beta_{1}} {2}-\frac{\beta_{2}}{2}+\chi+\sigma_{3}-\sigma_{1}+r))}\bigg{]}.\]
After some algebra we can write this quantity in the form:
\[\frac{\Gamma(\chi\beta_{1})}{\Gamma(\frac{\chi}{2}(\overline{ \beta}-2Q))\Gamma(\frac{\chi}{2}(\beta_{1}+\beta_{2}-\beta_{3}))}\frac{\sin( \pi\chi\beta_{1})}{\sin(\pi\chi(\frac{\beta}{2}-\chi))\sin(\frac{\pi\chi}{2} (\beta_{1}+\beta_{2}-\beta_{3}))\sin(\pi\chi(\frac{\beta_{1}}{2}-\chi+\sigma_ {1}+\sigma_{2}))}\] \[\times\int_{\mathcal{C}}dr\,\varphi_{(\sigma_{1},\sigma_{2}, \sigma_{3})}^{(\beta_{1},\beta_{2},\beta_{3})}(r)\Bigg{[}\frac{\sin(\pi\chi( \frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}-\chi+\sigma_{1}-\sigma_{3}-r))\sin( \pi\chi r)\sin(\pi\chi(2\sigma_{3}-\chi+r))}{\sin(\pi\chi(\frac{\beta_{2}}{2}+ \sigma_{2}-\sigma_{3}-r))}\] \[\qquad-\frac{\sin(\pi\chi(\frac{\beta_{3}}{2}+\sigma_{3}-\sigma_ {1}+r))\sin(\pi\chi(\frac{\beta_{3}}{2}-\chi+\sigma_{1}-\sigma_{3}-r))\sin( \pi\chi(\frac{\beta_{2}}{2}-\sigma_{2}-\sigma_{3}-r))}{\sin(\pi\chi(\frac{ \beta_{1}}{2}-\frac{\beta_{2}}{2}+\chi+\sigma_{3}-\sigma_{1}+r))}\Bigg{]}\] \[=\frac{\Gamma(\chi\beta_{1})}{\Gamma(\frac{\chi}{2}(\overline{ \beta}-2Q))\Gamma(\frac{\chi}{2}(\beta_{1}+\beta_{2}-\beta_{3}))}\] \[\times\frac{\sin(\pi\chi\beta_{1})\sin(\pi\chi(\frac{\beta_{3}}{2 }+\sigma_{3}-\sigma_{1}))}{\sin(\pi\chi(\frac{\overline{\beta}}{2}-\chi))\sin( \frac{\pi\chi}{2}(\beta_{1}+\beta_{2}-\beta_{3}))\sin(\pi\chi(\frac{\beta_{1}} {2}-\chi+\sigma_{1}-\sigma_{2}))\sin(\pi\chi(\frac{\beta_{3}}{2}-\sigma_{1}- \sigma_{3}))}\] \[\times\int_{\mathcal{C}}dr\,(T_{-\chi}-1)\left(\frac{\sin(\pi\chi( \frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}-\chi+\sigma_{1}-\sigma_{3}-r))\sin( \pi\chi(2\sigma_{3}-\chi+r))\sin(\pi\chi(2\sigma_{3}-2\chi+r))}{\sin(\pi\chi( \frac{\beta_{2}}{2}+\chi-\sigma_{2}-\sigma_{3}-r))}\varphi_{(\sigma_{1},\sigma_{ 2}+\chi,\sigma_{3}+\chi)}^{(\beta_{1},\beta_{2},\beta_{3})}(r)\right),\]
Here we have used the notation \(T_{-\chi}\) for the operator that shifts by \(-\chi\) the argument of the function it is applied to, the variable being shifted here being \(r\). Since the combination \((T_{-\chi}-1)\) appears in the integrand, this corresponds to integrating \(r\) over two contours that are separated by a horizontal shift of \(\chi\) and which are traversed in opposite directions. Provided that there are no poles in between these two contours, the whole contour integral will be equal to \(0\). This is indeed the case thanks to the way the contour \(\mathcal{C}\) has been chosen, which is to the right of the lattice \(P_{\mathrm{PT}}^{-}\) of poles extending in the \(-\infty\) direction, and to the left of the lattice \(P_{\mathrm{PT}}^{+}\) of poles extending in the \(+\infty\) direction.
Let us now perform a residue computation directly for the contour integral part of \(H_{\mathrm{PT}}\), namely \(\mathcal{J}_{\mathrm{PT}}\) without multiplying by the global prefactor. See equation (2.1) for a precise definition. This gives the following result.
**Lemma A.2**.: _Set \(\overline{\beta}=\beta_{1}+\beta_{2}+\beta_{3}\). The function \(\mathcal{J}_{\mathrm{PT}}\) satisfies the following properties:_
\[\lim_{\beta_{1}\to 2Q-\beta_{2}-\beta_{3}}(\frac{\bar{\beta}}{2}-Q)\mathcal{J}_{ \mathrm{PT}}\,\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=\frac{1}{2\pi}\frac{S_{\frac{3}{2 }}(\frac{\beta_{1}}{2}+\sigma_{1}+\sigma_{2}-Q)S_{\frac{\gamma}{2}}(\frac{\beta_ {1}}{2}+\sigma_{1}-\sigma_{2})S_{\frac{\gamma}{2}}(Q-\beta_{3})}{S_{\frac{ \gamma}{2}}(\beta_{1})S_{\frac{\gamma}{2}}(-\frac{\beta_{3}}{2}+\sigma_{1}+ \sigma_{3})S_{\frac{\gamma}{2}}(Q-\frac{\beta_{3}}{2}+\sigma_{1}-\sigma_{3})}.\]
\[\lim_{\beta_{1}\to 2Q-\beta_{2}-\beta_{3}-\gamma}(\frac{\bar{\beta}}{2}-Q+ \frac{\gamma}{2})\mathcal{J}_{\mathrm{PT}}\begin{pmatrix}\beta_{1},\beta_{2}, \beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}\] \[=-\frac{1}{\pi\sin(\frac{\gamma^{2}\pi}{4})}\frac{S_{\frac{ \gamma}{2}}(\frac{2}{\gamma}-\beta_{3})S_{\frac{\gamma}{2}}(\frac{2}{\gamma}+ \frac{-\beta_{2}-\beta_{3}}{2}+\sigma_{1}-\sigma_{2})S_{\frac{\gamma}{2}}( \frac{-\beta_{2}-\beta_{3}-\gamma}{2}+\sigma_{1}+\sigma_{2})}{S_{\frac{ \gamma}{2}}(Q+\frac{2}{\gamma}-\beta_{2}-\beta_{3})S_{\frac{\gamma}{2}}(- \frac{\beta_{3}}{2}+\sigma_{3}+\sigma_{1})S_{\frac{\gamma}{2}}(Q-\frac{\beta _{3}}{2}-\sigma_{3}+\sigma_{1})}\] \[\times\left(\sin(\frac{\gamma\pi\beta_{2}}{2})\cos(\frac{\gamma \pi}{2}(-\frac{\gamma}{2}+2\sigma_{1}))+\sin(\frac{\gamma\pi\beta_{3}}{2}) \cos(\frac{\gamma\pi}{2}(-\frac{\gamma}{2}+2\sigma_{2}))-\sin(\frac{\gamma\pi (\beta_{2}+\beta_{3})}{2})\cos(\frac{\gamma\pi}{2}(-\frac{\gamma}{2}+2\sigma_{ 3}))\right).\]
_Furthermore the ratio of the second limit divided by the first is given by_
\[-\frac{1}{2\sin(\frac{\gamma^{2}\pi}{4})}\frac{\sin(\frac{\gamma \gamma}{2}(Q+\frac{2}{\gamma}-\beta_{2}-\beta_{3}))}{\sin(\frac{\gamma\gamma}{ 2}(\frac{2}{\gamma}-\beta_{3}))\sin(\frac{\pi\gamma}{2}(\frac{2}{\gamma}- \frac{\beta_{2}+\beta_{3}}{2}+\sigma_{1}-\sigma_{2}))\sin(\frac{\pi\gamma}{2} (-\frac{\beta_{2}+\beta_{3}+\gamma}{2}+\sigma_{1}-\sigma_{2}))}\] \[\times\left(\sin(\frac{\gamma\pi\beta_{2}}{2})\cos(\frac{\gamma \pi}{2}(-\frac{\gamma}{2}+2\sigma_{1}))+\sin(\frac{\gamma\pi\beta_{3}}{2}) \cos(\frac{\gamma\pi}{2}(-\frac{\gamma}{2}+2\sigma_{2}))-\sin(\frac{\gamma\pi (\beta_{2}+\beta_{3})}{2})\cos(\frac{\gamma\pi}{2}(-\frac{\gamma}{2}+2\sigma_ {3}))\right),\]
_which is a meromorphic function which is not identically \(1\)._
Proof.: The evaluation of the residue is a straightforward algebraic computation using the shift equations of \(\Gamma_{\frac{\gamma}{2}}\) and \(S_{\frac{\gamma}{2}}\). When \(\beta_{1}\) approaches \(2Q-\beta_{2}-\beta_{3}\) from the right hand side, the two poles at \(r=-(\frac{\beta_{3}}{2}+\sigma_{3}-\sigma_{1})\) and \(r=-(Q-\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{1})\) in the contour integral will collapse. To extract the divergent term, we can slightly modify the contour so that it passes to the right hand side of \(r=-(Q-\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{1})\); this allows us to pick up the divergent term by the residue theorem. Let us carry this out in detail. We first start by writing the contour integral as:
\[\int_{\mathcal{C}}\frac{S_{\frac{\gamma}{2}}(\frac{Q-\beta_{2}}{2 }+\sigma_{3}\pm(\frac{Q}{2}-\sigma_{2})+r)S_{\frac{\gamma}{2}}(\frac{Q}{2}\pm \frac{Q-\beta_{3}}{2}+\sigma_{3}-\sigma_{1}+r)}{S_{\frac{\gamma}{2}}(\frac{3Q}{ 2}\pm\frac{Q-\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{1}+r)S_{ \frac{\gamma}{2}}(2\sigma_{3}+r)S_{\frac{\gamma}{2}}(Q+r)}\frac{dr}{i}\] (A.21) \[=\int_{\mathcal{C}}\frac{1}{4\sin(\frac{2\pi}{\gamma}(\frac{ \beta_{3}}{2}+\sigma_{3}-\sigma_{1}+r))\sin(\frac{2\pi}{\gamma}(2Q-\frac{2}{ \gamma}-\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{1}+r))}f(r )\frac{dr}{i},\]
where here:
\[f(r)=\frac{S_{\frac{\gamma}{2}}(\frac{Q-\beta_{2}}{2}+\sigma_{3} \pm(\frac{Q}{2}-\sigma_{2})+r)S_{\frac{\gamma}{2}}(Q-\frac{\beta_{3}}{2}+ \sigma_{3}-\sigma_{1}+r)S_{\frac{\gamma}{2}}(\frac{2}{\gamma}+\frac{\beta_{3}} {2}+\sigma_{3}-\sigma_{1}+r)}{S_{\frac{\gamma}{2}}(2Q-\frac{2}{\gamma}-\frac{ \beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{1}+r)S_{\frac{\gamma}{2}} (Q+\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{1}+r)S_{\frac{ \gamma}{2}}(2\sigma_{3}+r)S_{\frac{\gamma}{2}}(Q+r)}.\]
We compute:
\[f(\frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}-\sigma_{3}+\sigma_{1}-Q) =\frac{S_{\frac{\gamma}{2}}(\frac{\beta_{1}-Q}{2}+\sigma_{1}\pm( \frac{Q}{2}-\sigma_{2}))S_{\frac{\gamma}{2}}(\frac{\beta_{1}}{2}+\frac{\beta_ {2}}{2}-\frac{\beta_{3}}{2})S_{\frac{\gamma}{2}}(\frac{2}{\gamma}+\frac{\overline {\beta}}{2}-Q)}{S_{\frac{\gamma}{2}}(\frac{2}{\gamma})S_{\frac{\gamma}{2}}( \frac{\beta_{1}}{2})S_{\frac{\gamma}{2}}(-Q+\frac{\beta_{1}}{2}+\frac{\beta_{2}} {2}+\sigma_{3}+\sigma_{1})S_{\frac{\gamma}{2}}(\frac{\beta_{1}}{2}+\frac{\beta_ {2}}{2}-\sigma_{3}+\sigma_{1})},\] \[f(\frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}-\sigma_{3}+\sigma_{1}- \frac{2}{\gamma}) =\frac{S_{\frac{\gamma}{2}}(\frac{\beta_{1}-Q}{2}+\sigma_{1}\pm( \frac{Q}{2}-\sigma_{2})+\frac{\gamma}{2})S_{\frac{\gamma}{2}}(\frac{\beta_{1}}{2}+ \frac{\beta_{2}}{2}-\frac{\beta_{3}}{2}+\frac{\gamma}{2})S_{\frac{\gamma}{2}}( \frac{\overline{\beta}}{2})}{S_{\frac{\gamma}{2}}(\gamma)S_{\frac{\gamma}{2}}( \beta_{1}+\frac{\gamma}{2})S_{\frac{\gamma}{2}}(-\frac{2}{\gamma}+\frac{\beta_{1}} {2}+\frac{\beta_{2}}{2}+\sigma_{3}+\sigma_{1})S_{\frac{\gamma}{2}}(\frac{ \gamma}{2}+\frac{\beta_{1}}{2}+\frac{\beta_{2}}{2}-\sigma_{3}+\sigma_{1})}\]
Let us first check the limit \(\beta_{1}\to 2Q-\beta_{2}-\beta_{3}\). By a simple residue computation at the pole \(r=-(Q-\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{1})\) and by using the value of \(f(r)\) at that point, one obtains that as \(\beta_{1}\to 2Q-\beta_{2}-\beta_{3}\), the right hand side of (A.21) is equivalent to:
\[\frac{1}{2\pi(\frac{\beta}{2}-Q)}\frac{S_{\frac{\gamma}{2}}(\frac{ \beta_{1}}{2}+\sigma_{1}+\sigma_{2}-Q)S_{\frac{\gamma}{2}}(\frac{\beta_{1}}{2}+ \sigma_{1}-\sigma_{2})S_{\frac{\gamma}{2}}(Q-\beta_{3})}{S_{\frac{\gamma}{2}}( \beta_{1})S_{\frac{\gamma}{2}}(-\frac{\beta_{3}}{2}+\sigma_{1}+\sigma_{3})S_{ \frac{\gamma}{2}}(Q-\frac{\beta_{3}}{2}+\sigma_{1}-\sigma_{3})}.\]
Therefore we obtain:
\[\lim_{\beta_{1}\to 2Q-\beta_{2}-\beta_{3}}(\frac{\bar{\beta}}{2}-Q)\mathcal{J}_{ \mathrm{PT}}\begin{pmatrix}\beta_{1},\beta_{2},\beta_{3}\\ \sigma_{1},\sigma_{2},\sigma_{3}\end{pmatrix}=\frac{1}{2\pi}\frac{S_{\frac{ \gamma}{2}}(\frac{\beta_{1}}{2}+\sigma_{1}+\sigma_{2}-Q)S_{\frac{\gamma}{2}}( \frac{\beta_{1}}{2}+\sigma_{1}-\sigma_{2})S_{\frac{\gamma}{2}}(Q-\beta_{3})}{ S_{\frac{\gamma}{2}}(\beta_{1})S_{\frac{\gamma}{2}}(-\frac{\beta_{3}}{2}+ \sigma_{1}+\sigma_{3})S_{\frac{\gamma}{2}}(Q-\frac{\beta_{3}}{2}+\sigma_{1}- \sigma_{3})}.\]
We now move to the case of the limit \(\beta_{1}\to 2Q-\beta_{2}-\beta_{3}-\gamma\). This time we need to move the contour of integration \(\mathcal{C}\) to the right of the poles at \(r=-(Q-\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{1})\) and \(r=-(Q-\frac{\beta_{1}}{2}-\frac{\beta_{2}}{2}+\sigma_{3}-\sigma_{1})+\frac{\gamma}{2}\). We thus get two contributions which in the limit \(\beta_{1}\to 2Q-\beta_{2}-\beta_{3}-\gamma\) are equivalent to:
\[\frac{1}{\frac{4}{\gamma}\sin(\frac{2\pi}{\gamma}(\frac{\overline{ \beta}}{2}-Q))}\frac{S_{\frac{\gamma}{2}}(\frac{2}{\gamma}+\frac{-\beta_{2}- \beta_{3}}{2}+\sigma_{1}-\sigma_{2})S_{\frac{\gamma}{2}}(\frac{-\beta_{2}- \beta_{3}-\gamma}{2}+\sigma_{1}+\sigma_{2})S_{\frac{\gamma}{2}}(Q-\beta_{3}- \frac{\gamma}{2})S_{\frac{\gamma}{2}}(\frac{2}{\gamma}-\frac{\gamma}{2})}{S_{ \frac{\gamma}{2}}(\frac{2}{\gamma})S_{\frac{\gamma}{2}}(\frac{4}{\gamma}- \beta_{2}-\beta_{3})S_{\frac{\gamma}{2}}(-\frac{\beta_{3}}{2}-\frac{\gamma}{2 }+\sigma_{3}+\sigma_{1})S_{\frac{\gamma}{2}}(Q-\frac{\beta_{3}}{2}-\frac{ \gamma}{2}-\sigma_{3}+\sigma_{1})}\] \[+\frac{1}{\frac{4}{\gamma}\sin(\frac{2\pi}{\gamma}(\frac{\overline {\beta}}{2}-Q))}\frac{S_{\frac{\gamma}{2}}(Q+\frac{-\beta_{2}-\beta_{3}}{2}+ \sigma_{1}-\sigma_{2})S_{\frac{\gamma}{2}}(-\frac{\beta_{2}-\beta_{3}}{2}+ \sigma_{1}+\sigma_{2})S_{\frac{\gamma}{2}}(Q-\beta_{3})S_{\frac{\gamma}{2}}( \frac{2}{\gamma})}{S_{\frac{\gamma}{2}}(\gamma)S_{\frac{\gamma}{2}}(\frac{4}{ \gamma}-\beta_{2}-\beta_{3}+\frac{\gamma}{2})S_{\frac{\gamma}{2}}(-\frac{ \beta_{3}}{2}+\sigma_{3}+\sigma_{1})S_{\frac{\gamma}{2}}(Q-\frac{\beta_{3}}{2} -\sigma_{3}+\sigma_{1})}.\]
We obtain:
\[\frac{2\sin(\frac{\gamma^{2}\pi}{4})S_{\frac{\gamma}{2}}(\frac{2 }{\gamma}-\frac{\gamma}{2})S_{\frac{\gamma}{2}}(\frac{2}{\gamma}-\beta_{3})S_{ \frac{\gamma}{2}}(\frac{2}{\gamma}+\frac{-\beta_{2}-\beta_{3}}{2}+\sigma_{1} -\sigma_{2})S_{\frac{\gamma}{2}}(\frac{-\beta_{2}-\beta_{3}-\gamma}{2}+\sigma _{1}+\sigma_{2})}{\frac{4}{\gamma}\sin(\frac{2\pi}{\gamma}(\frac{\overline{ \beta}}{2}-Q))S_{\frac{\gamma}{2}}(Q+\frac{2}{\gamma}-\beta_{2}-\beta_{3})S_{ \frac{\gamma}{2}}(-\frac{\beta_{3}}{2}+\sigma_{3}+\sigma_{1})S_{\frac{\gamma }{2}}(Q-\frac{\beta_{3}}{2}-\sigma_{3}+\sigma_{1})}\] \[\times\Big{[}8\sin(\frac{\gamma\pi}{2}(\beta_{2}+\beta_{3}))\sin( \frac{\gamma\pi}{2}(-\frac{\beta_{3}}{2}-\frac{\gamma}{2}+\sigma_{3}+\sigma_{1} ))\sin(\frac{\gamma\pi}{2}(\frac{\beta_{3}}{2}+\sigma_{3}-\sigma_{1}))\] \[\quad-8\sin(\frac{\gamma\pi}{2})\sin(\frac{\gamma\pi}{2}(-\frac {\beta_{2}+\beta_{3}+\gamma}{2}+\sigma_{1}+\sigma_{2}))\sin(\frac{\gamma\pi}{2 }(\frac{\beta_{2}+\beta_{3}}{2}-\sigma_{1}+\sigma_{2}))\Big{]}.\]
Record the trigonometric identity:
\[2\sin(\frac{\gamma\pi}{2}(\beta_{2}+\beta_{3}))\sin(\frac{\gamma \pi}{2}(-\frac{\beta_{3}}{2}-\frac{\gamma}{2}+\sigma_{3}+\sigma_{1}))\sin( \frac{\gamma\pi}{2}(\frac{\beta_{3}}{2}+\sigma_{3}-\sigma_{1}))\] \[-2\sin(\frac{\gamma\pi\beta_{3}}{2})\sin(\frac{\gamma\pi}{2}(- \frac{\beta_{2}+\beta_{3}+\gamma}{2}+\sigma_{1}+\sigma_{2}))\sin(\frac{\gamma \pi}{2}(\frac{\beta_{2}+\beta_{3}}{2}-\sigma_{1}+\sigma_{2}))\] \[=\sin(\frac{\gamma\pi\beta_{2}}{2})\cos(\frac{\gamma\pi}{2}(- \frac{\gamma}{2}+2\sigma_{1}))+\sin(\frac{\gamma\pi\beta_{3}}{2})\cos(\frac{ \gamma\pi}{2}(-\frac{\gamma}{2}+2\sigma_{2}))-\sin(\frac{\gamma\pi(\beta_{2}+ \beta_{3})}{2})\cos(\frac{\gamma\pi}{2}(-\frac{\gamma}{2}+2\sigma_{3})).\]
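This identity follows from elementary product-to-sum manipulations; a quick numerical check at arbitrary parameter values is sketched below.

```python
# Quick numerical check of the trigonometric identity at arbitrary parameter values.
from mpmath import mp, sin, cos, pi, mpf

mp.dps = 25
g, b2, b3 = mpf('1.3'), mpf('0.41'), mpf('0.77')     # gamma, beta_2, beta_3
s1, s2, s3 = mpf('0.29'), mpf('0.53'), mpf('0.11')   # sigma_1, sigma_2, sigma_3
t = g * pi / 2

lhs = (2 * sin(t*(b2 + b3)) * sin(t*(-b3/2 - g/2 + s3 + s1)) * sin(t*(b3/2 + s3 - s1))
       - 2 * sin(t*b3) * sin(t*(-(b2 + b3 + g)/2 + s1 + s2)) * sin(t*((b2 + b3)/2 - s1 + s2)))
rhs = (sin(t*b2) * cos(t*(-g/2 + 2*s1)) + sin(t*b3) * cos(t*(-g/2 + 2*s2))
       - sin(t*(b2 + b3)) * cos(t*(-g/2 + 2*s3)))
print(lhs - rhs)   # numerically zero
```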
Using the above identity we finally arrive at the final answer:
\[-\frac{1}{\pi\sin(\frac{\gamma^{2}\pi}{4})}\frac{S_{\frac{\gamma}{2}}(\frac{2}{\gamma}-\beta_{3})S_{\frac{\gamma}{2}}(\frac{2}{\gamma}+\frac{-\beta_{2}-\beta_{3}}{2}+\sigma_{1}-\sigma_{2})S_{\frac{\gamma}{2}}(\frac{-\beta_{2}-\beta_{3}-\gamma}{2}+\sigma_{1}+\sigma_{2})}{S_{\frac{\gamma}{2}}(Q+\frac{2}{\gamma}-\beta_{2}-\beta_{3})S_{\frac{\gamma}{2}}(-\frac{\beta_{3}}{2}+\sigma_{3}+\sigma_{1})S_{\frac{\gamma}{2}}(Q-\frac{\beta_{3}}{2}-\sigma_{3}+\sigma_{1})}\] \[\times\left(\sin(\frac{\gamma\pi\beta_{2}}{2})\cos(\frac{\gamma\pi}{2}(-\frac{\gamma}{2}+2\sigma_{1}))+\sin(\frac{\gamma\pi\beta_{3}}{2})\cos(\frac{\gamma\pi}{2}(-\frac{\gamma}{2}+2\sigma_{2}))-\sin(\frac{\gamma\pi(\beta_{2}+\beta_{3})}{2})\cos(\frac{\gamma\pi}{2}(-\frac{\gamma}{2}+2\sigma_{3}))\right).\]
For the final claim of the lemma the ratio of the second limit divided by the first can be easily computed using (A.13).
### Properties of the Hosomichi function \(G_{\mathrm{Hos}}\)
For the purposes of showing that \(G=G_{\mathrm{Hos}}\), we will need in the main text the following two lemmas.
**Lemma A.3**.: _The following holds:_
\[\lim_{\alpha\to Q-\frac{\beta}{2}}(\alpha+\frac{\beta}{2}-Q)\int_{i\mathbb{R}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2\mathrm{i}\pi(Q-2\sigma)\sigma_{2}}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)\pm\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{\beta}{2}+Q)\pm\sigma_{2})}=\frac{1}{2\pi}\frac{1}{S_{\frac{\gamma}{2}}(Q-\frac{\beta}{2})^{2}}.\]
Proof.: Write:
\[\int_{i\mathbb{R}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2\mathrm{i}\pi(Q-2\sigma)\sigma_{2}}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)\pm\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{\beta}{2}+Q)\pm\sigma_{2})}\] \[=\int_{i\mathbb{R}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2\mathrm{i}\pi(Q-2\sigma)\sigma_{2}}\frac{1}{2\sin(\frac{\gamma\pi}{2}(\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)-\sigma_{2}))}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)+\sigma_{2})S_{\frac{\gamma}{2}}(\frac{\gamma}{2}+\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)-\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{\beta}{2}+Q)+\sigma_{2})S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{\beta}{2}+Q)-\sigma_{2})}.\]
The residue we are interested in is given by:
\[2\pi\frac{1}{\gamma\pi}e^{\mathrm{i}\pi(Q-2\sigma)(\alpha+\frac{\beta}{2}-Q)} \frac{S_{\frac{\gamma}{2}}(\frac{\gamma}{2})S_{\frac{\gamma}{2}}(\alpha+\frac {\beta}{2}-Q)}{S_{\frac{\gamma}{2}}(\alpha)S_{\frac{\gamma}{2}}(Q-\frac{ \beta}{2})}.\]
We are then interested in the limit:
\[\lim_{\alpha\to Q-\frac{\beta}{2}}(\alpha+\frac{\beta}{2}-Q)2\pi \frac{1}{\gamma\pi}e^{\mathrm{i}\pi(Q-2\sigma_{1})(\alpha+\frac{\beta}{2}-Q)} \frac{S_{\frac{\gamma}{2}}(\frac{\gamma}{2})S_{\frac{\gamma}{2}}(\alpha+\frac {\beta}{2}-Q)}{S_{\frac{\gamma}{2}}(\alpha)S_{\frac{\gamma}{2}}(Q-\frac{ \beta}{2})}\] \[=\frac{2}{\gamma}\frac{S_{\frac{\gamma}{2}}(\frac{\gamma}{2})}{S_ {\frac{\gamma}{2}}(Q-\frac{\beta}{2})^{2}}\lim_{\alpha\to Q-\frac{\beta}{2}}( \alpha+\frac{\beta}{2}-Q)S_{\frac{\gamma}{2}}(\alpha+\frac{\beta}{2}-Q)=\frac {2}{\gamma}\frac{S_{\frac{\gamma}{2}}(\frac{\gamma}{2})}{S_{\frac{\gamma}{2}} (Q-\frac{\beta}{2})^{2}}\frac{\gamma}{4\pi}S_{\frac{\gamma}{2}}(\frac{2}{ \gamma})=\frac{1}{2\pi}\frac{1}{S_{\frac{\gamma}{2}}(Q-\frac{\beta}{2})^{2}}.\]
Similarly we also have the following lemma.
**Lemma A.4**.: _The following holds:_
\[\lim_{\beta\to 0}\beta\int_{i\mathbb{R}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2\mathrm{i}\pi(Q-2\sigma)\sigma_{2}}\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)\pm\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{\beta}{2}+Q)\pm\sigma_{2})}=\frac{\cos(\pi(Q-2\sigma)(Q-\alpha))}{2\pi\sin(\frac{\pi\gamma}{2}(\alpha-\frac{\gamma}{2}))\sin(\frac{2\pi}{\gamma}(\alpha-Q))}.\]
Proof.: As \(\beta\) tends to \(0\) the pole of the integrand at \(\sigma_{2}=\frac{1}{2}(Q-\alpha-\frac{\beta}{2})\) will collapse with the pole at \(\sigma_{2}=\frac{1}{2}(Q-\alpha+\frac{\beta}{2})\). Similarly, the pole at \(\sigma_{2}=\frac{1}{2}(\alpha-\frac{\beta}{2}-Q)\) will collapse with the pole at \(\sigma_{2}=\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)\). We thus need to perform a residue computation to pick up the contribution from these poles. We first apply the shift equations of the double sine functions:
\[\int_{i\mathbb{R}}\frac{d\sigma_{2}}{\mathbf{i}}e^{2\mathrm{i}\pi(Q-2\sigma)\sigma_{2}}\frac{1}{4\sin(\frac{\pi\gamma}{4}(\alpha+\frac{\beta}{2}-Q)+\frac{\pi\sigma_{2}\gamma}{2})\sin(-\frac{\pi\gamma^{2}}{4}+\frac{\pi\gamma}{4}(\alpha-\frac{\beta}{2}+Q)-\frac{\pi\sigma_{2}\gamma}{2})}\] \[\quad\times\frac{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)-\sigma_{2})S_{\frac{\gamma}{2}}(\frac{\gamma}{2}+\frac{1}{2}(\alpha+\frac{\beta}{2}-Q)+\sigma_{2})}{S_{\frac{\gamma}{2}}(\frac{1}{2}(\alpha-\frac{\beta}{2}+Q)+\sigma_{2})S_{\frac{\gamma}{2}}(-\frac{\gamma}{2}+\frac{1}{2}(\alpha-\frac{\beta}{2}+Q)-\sigma_{2})}.\]
Let us now move the contour slightly to the right and pick up the residues at \(\sigma_{2}=\frac{1}{2}(Q-\alpha-\frac{\beta}{2})\) and at \(\frac{1}{2}(\alpha-Q-\frac{\beta}{2})\). We can drop the contour integral in the \(\beta\to 0\) limit. Therefore we get:
\[e^{\mathrm{i}\pi(Q-2\sigma)(Q-\alpha-\frac{\beta}{2})}\frac{2\pi} {4\frac{\pi\gamma}{2}\sin(\frac{\pi\gamma}{2}(\alpha-\frac{\gamma}{2}))}\frac{S_ {\frac{\gamma}{2}}(\alpha+\frac{\beta}{2}-Q)S_{\frac{\gamma}{2}}(\frac{\gamma}{ 2})}{S_{\frac{\gamma}{2}}(Q-\frac{\beta}{2})S_{\frac{\gamma}{2}}(-\frac{\gamma }{2}+\alpha)}\] \[+e^{\mathrm{i}\pi(Q-2\sigma)(\alpha-Q-\frac{\beta}{2})}\frac{2\pi} {4\frac{\pi\gamma}{2}\sin(\frac{\pi\gamma}{2}(\alpha-Q))}\frac{S_{\frac{\gamma}{2} }(\frac{\beta}{2})S_{\frac{\gamma}{2}}(\alpha-\frac{2}{\gamma})}{S_{\frac{ \gamma}{2}}(\alpha-\frac{\beta}{2})S_{\frac{\gamma}{2}}(\frac{2}{\gamma})}\] \[=\frac{1}{\gamma\sin(\frac{\pi\gamma}{2}(\alpha-\frac{\gamma}{2}))} \frac{S_{\frac{\gamma}{2}}(\frac{\beta}{2})}{S_{\frac{\gamma}{2}}(\frac{2}{ \gamma})}\left[e^{\mathrm{i}\pi(Q-2\sigma)(Q-\alpha-\frac{\beta}{2})}\frac{S_{ \frac{\gamma}{2}}(\alpha+\frac{\beta}{2}-Q)}{S_{\frac{\gamma}{2}}(-\frac{\gamma }{2}+\alpha)}-e^{\mathrm{i}\pi(Q-2\sigma)(\alpha-Q-\frac{\beta}{2})}\frac{S_{ \frac{\gamma}{2}}(\alpha-\frac{2}{\gamma})}{S_{\frac{\gamma}{2}}(\alpha-\frac{ \beta}{2})}\right]\]
Now we compute:
\[\begin{split}&\lim_{\beta\to 0}\frac{\beta}{\gamma\sin(\frac{\pi\gamma}{2}(\alpha-\frac{\gamma}{2}))}\frac{S_{\frac{\gamma}{2}}(\frac{\beta}{2})}{S_{\frac{\gamma}{2}}(\frac{2}{\gamma})}\left[e^{\mathrm{i}\pi(Q-2\sigma)(Q-\alpha-\frac{\beta}{2})}\frac{S_{\frac{\gamma}{2}}(\alpha+\frac{\beta}{2}-Q)}{S_{\frac{\gamma}{2}}(-\frac{\gamma}{2}+\alpha)}-e^{\mathrm{i}\pi(Q-2\sigma)(\alpha-Q-\frac{\beta}{2})}\frac{S_{\frac{\gamma}{2}}(\alpha-\frac{2}{\gamma})}{S_{\frac{\gamma}{2}}(\alpha-\frac{\beta}{2})}\right]\\ &=\frac{1}{2\pi\sin(\frac{\pi\gamma}{2}(\alpha-\frac{\gamma}{2}))}\left[e^{\mathrm{i}\pi(Q-2\sigma)(Q-\alpha)}\frac{1}{2\sin(\frac{2\pi}{\gamma}(\alpha-Q))}+e^{\mathrm{i}\pi(Q-2\sigma)(\alpha-Q)}\frac{1}{2\sin(\frac{2\pi}{\gamma}(\alpha-Q))}\right]\\ &=\frac{\cos(\pi(Q-2\sigma)(Q-\alpha))}{2\pi\sin(\frac{\pi\gamma}{2}(\alpha-\frac{\gamma}{2}))\sin(\frac{2\pi}{\gamma}(\alpha-Q))}.\end{split}\]
## Appendix B Two-point quantum disk from mating of trees
The goal of this section is to prove Lemma 3.17. We start by giving the following lemma.
**Lemma B.1**.: _For a sample from \(\mathcal{M}_{2}^{\mathrm{disk}}(\gamma)\), let \((L_{1},L_{2})\) be the quantum lengths of the two boundary arcs and \(A\) the quantum area. Then_
\[\mathcal{M}_{2}^{\mathrm{disk}}(\gamma)[L_{1}^{m}e^{-A}]=Cc^{\frac{4}{\gamma^{2}}}\iint_{0}^{\infty}\ell_{1}^{m}\cdot(\ell_{1}+\ell_{2})^{-1}K_{\frac{4}{\gamma^{2}}}(c(\ell_{1}+\ell_{2}))\,d\ell_{1}\,d\ell_{2}\]
_where \(c=\sqrt{1/\sin(\frac{\pi\gamma^{2}}{4})}\) and \(C=\frac{(2\pi)^{\frac{4}{\gamma^{2}}-1}}{(1-\frac{\gamma^{2}}{4})\Gamma(1-\frac {\gamma^{2}}{4})^{\frac{4}{\gamma^{2}}}}\cdot\frac{2}{\Gamma(\frac{4}{\gamma^ {2}})}2^{-\frac{4}{\gamma^{2}}}\)._
Proof.: [1, Proposition 5.2] gives the density function of \((L_{1},L_{2})\) as
\[1_{\ell_{1},\ell_{2}>0}\frac{(2\pi)^{\frac{4}{\gamma^{2}}-1}}{(1-\frac{\gamma ^{2}}{4})\Gamma(1-\frac{\gamma^{2}}{4})^{\frac{4}{\gamma^{2}}}}(\ell_{1}+\ell _{2})^{-\frac{4}{\gamma^{2}}-1}\,d\ell_{1}\,d\ell_{2}.\]
Next, the conditional law of \(A\) given \(L_{1},L_{2}\) was obtained in [1, Theorem 1.2] modulo the mating-of-trees variance which was later computed in [1, Theorem 1.3], from which we get
\[\mathcal{M}_{2}^{\mathrm{disk}}(\gamma)[e^{-A}\mid L_{1},L_{2}]=\frac{2}{ \Gamma(\frac{4}{\gamma^{2}})}\left(\frac{1}{4\sin(\frac{\pi\gamma^{2}}{4})} \right)^{\frac{2}{\gamma^{2}}}(L_{1}+L_{2})^{\frac{4}{\gamma^{2}}}K_{\frac{4}{ \gamma^{2}}}\left((L_{1}+L_{2})\sqrt{\frac{1}{\sin(\frac{\pi\gamma^{2}}{4})}}\right)\]
Combining the two displayed equations gives the result.
**Lemma B.2**.: _Consider the setup of Lemma B.1. Suppose \(k\in\mathbb{N}\) satisfies \(k+1>\frac{4}{\gamma^{2}}\) and \(\mu\in(0,c)\). Then with \(C^{\prime}=\frac{1}{2}c^{\frac{4}{\gamma^{2}}-1}C\),_
\[\mathcal{M}_{2}^{\mathrm{disk}}(\gamma)[(-L_{1})^{k}e^{-A-\mu L_{1}}]=C^{\prime }\sum_{i=0}^{\infty}\frac{(-2/c)^{k+i}}{i!(k+i+1)}\Gamma(\frac{1}{2}(k+i+1- \frac{4}{\gamma^{2}}))\Gamma(\frac{1}{2}(k+i+1+\frac{4}{\gamma^{2}}))\mu^{i}.\]
Proof.: Suppose \(m+1>\frac{4}{\gamma^{2}}.\) Then
\[\mathcal{M}_{2}^{\text{disk}}(\gamma)[L_{1}^{m}e^{-A}] =Cc^{\frac{4}{\gamma^{2}}}\iint_{0}^{\infty}\ell_{1}^{m}\cdot(\ell_{1}+\ell_{2})^{-1}K_{\frac{4}{\gamma^{2}}}(c(\ell_{1}+\ell_{2}))\,d\ell_{1}\,d\ell_{2}\] \[=Cc^{\frac{4}{\gamma^{2}}}\int_{0}^{\infty}\int_{0}^{x}\ell_{1}^{m}\cdot x^{-1}K_{\frac{4}{\gamma^{2}}}(cx)\,d\ell_{1}\,dx\] \[=Cc^{\frac{4}{\gamma^{2}}}\int_{0}^{\infty}\frac{1}{m+1}\cdot x^{m}K_{\frac{4}{\gamma^{2}}}(cx)\,dx\] \[=Cc^{\frac{4}{\gamma^{2}}-m-1}\frac{2^{m-1}}{m+1}\Gamma(\frac{1}{2}(m+1-\frac{4}{\gamma^{2}}))\Gamma(\frac{1}{2}(m+1+\frac{4}{\gamma^{2}})).\]
The first step used Lemma B.1. The integral in the last step was evaluated via [6, (10.43.19)], and the condition \(m+1>\frac{4}{\gamma^{2}}\) is necessary for this step. Now, using the Taylor expansion \(e^{-\mu L_{1}}=\sum_{i=0}^{\infty}\frac{(-\mu L_{1})^{i}}{i!}\) gives the result.
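For completeness, the Bessel moment formula used in the last step can be checked numerically; the illustrative sketch below (with arbitrary test values of \(\gamma\) and \(m\)) compares the integral with the closed form.

```python
# Illustrative check of the Bessel-K moment identity (cf. [6, (10.43.19)]):
#   int_0^infty x^m K_nu(c x) dx = c^(-m-1) 2^(m-1) Gamma((m+1-nu)/2) Gamma((m+1+nu)/2),
# valid for m + 1 > |nu|.
from mpmath import mp, quad, besselk, gamma, sqrt, sin, pi, mpf

mp.dps = 25
gam = mpf('1.4')                     # arbitrary gamma in (0,2)
nu = 4 / gam**2                      # the order 4/gamma^2 from Lemma B.1
c = 1 / sqrt(sin(pi * gam**2 / 4))   # the constant c of Lemma B.1
m = mpf('2.6')                       # arbitrary exponent with m + 1 > nu

lhs = quad(lambda x: x**m * besselk(nu, c * x), [0, mp.inf])
rhs = c**(-m - 1) * mp.power(2, m - 1) * gamma((m + 1 - nu) / 2) * gamma((m + 1 + nu) / 2)
print(lhs, rhs)   # the two agree
```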
Lemma B.2 gives the following lemma which will imply Lemma 3.17.
**Lemma B.3**.: _Suppose \(\frac{4}{\gamma^{2}}\not\in\mathbb{Z}\) and set \(c=\sqrt{1/\sin(\frac{\pi\gamma^{2}}{4})}.\) Define \(f:(-c,c)\to\mathbb{R}\) by \(f(\mu)=\cos(\frac{4}{\gamma^{2}}\arccos(\mu/c))\); that is, if \(\sigma\in\mathcal{B}\) satisfies \(\mu=c\cos(\pi\gamma(\sigma-\frac{Q}{2}))\), then \(f(\mu)=\cos(\frac{4\pi}{\gamma}(\sigma-\frac{Q}{2}))\). For all \(k>\frac{4}{\gamma^{2}}\) we have_
\[f^{(k)}(\mu)=\mathcal{C}_{1}\mathcal{M}_{2}^{\text{disk}}(\gamma)[(\mu(-L_{1} )^{k}+k(-L_{1})^{k-1})e^{-A-\mu L_{1}}]\quad\text{ for all }\mu\in(-c,c).\]
_where \(\mathcal{C}_{1}=\frac{4}{\gamma^{2}}\sin(\frac{4\pi}{\gamma^{2}})\Gamma(\frac {4}{\gamma^{2}})(1-\frac{\gamma^{2}}{4})\left(\Gamma(\frac{\gamma^{2}}{4})^{2 }\sin(\frac{\pi\gamma^{2}}{4})\right)^{-\frac{2}{\gamma^{2}}}.\)_
Proof.: By Lemma B.2, gathering equal powers of \(\mu\) and simplifying gives
\[\mathcal{M}_{2}^{\text{disk}}(\gamma)[(\mu(-L_{1})^{k}+k(-L_{1})^{k-1})e^{-A- \mu L_{1}}]=C^{\prime}\sum_{i=0}^{\infty}\frac{(-2/c)^{k+i-1}}{i!}\Gamma(\frac {1}{2}(k+i-\frac{4}{\gamma^{2}}))\Gamma(\frac{1}{2}(k+i+\frac{4}{\gamma^{2}}) )\mu^{i}.\]
with the constant \(C^{\prime}\) here being the same as in Lemma B.2. Now the result is immediate from the following identity (see e.g. [1, Lemma 4.15]):
\[\cos(a\cos^{-1}(x))=\frac{a\sin(\pi a)}{2\pi}\sum_{n=0}^{\infty}\frac{(-2)^{n- 1}}{n!}\Gamma(\frac{1}{2}(n+a))\Gamma(\frac{1}{2}(n-a))x^{n}\text{ for }a\in\mathbb{R} \backslash\mathbb{Z}\text{ and }x\in(-1,1),\]
which by setting \(a=\frac{4}{\gamma^{2}}\) and \(x=\frac{\mu}{c}\) implies:
\[f^{(k)}(\mu)=\frac{2}{\pi\gamma^{2}c}\sin(\frac{4\pi}{\gamma^{2}})\sum_{n=0}^{ \infty}\frac{(-2/c)^{n+k-1}}{n!}\Gamma(\frac{1}{2}(n+k+\frac{4}{\gamma^{2}})) \Gamma(\frac{1}{2}(n+k-\frac{4}{\gamma^{2}}))\mu^{n}.\]
The constant \(\mathcal{C}_{1}\) is given by:
\[\mathcal{C}_{1}=\frac{2}{\pi\gamma^{2}c}\sin(\frac{4\pi}{\gamma^{2}})\frac{1} {C^{\prime}}=\frac{\pi(\frac{4}{\gamma^{2}}-1)}{\Gamma(1-\frac{4}{\gamma^{2}}) }\left(\frac{\pi\Gamma(\frac{\gamma^{2}}{4})}{\Gamma(1-\frac{\gamma^{2}}{4})} \right)^{-\frac{2}{\gamma^{2}}}.\qed\]
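The series identity invoked above can also be checked numerically; in the sketch below \(a\) and \(x\) are arbitrary test values playing the roles of \(\frac{4}{\gamma^{2}}\) and \(\frac{\mu}{c}\).

```python
# Numerical check of the series identity used above, at arbitrary test values.
from mpmath import mp, gamma, sin, cos, acos, pi, factorial, mpf

mp.dps = 30
a, x = mpf('2.3'), mpf('0.4')    # a plays the role of 4/gamma^2, x of mu/c

lhs = cos(a * acos(x))
rhs = (a * sin(pi * a) / (2 * pi)) * sum(
    mpf(-2)**(n - 1) / factorial(n) * gamma((n + a) / 2) * gamma((n - a) / 2) * x**n
    for n in range(120))
print(lhs, rhs)   # the two agree (the series converges for |x| < 1)
```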
## Appendix C Analytic continuation of the reflection coefficient
In this section we prove Proposition 2.17, and along the way supply a technical ingredient also needed in Section 5. The argument is similar to that of Section 2.5, but has two additional complications. Firstly, the reflection coefficient describes the law of a weight \(W\) quantum disk, which does not immediately arise from a Liouville field; instead, the weight \(W\) quantum disk weighted by boundary length is described by a Liouville field [1, Proposition 2.28]. Secondly, the GMC moment we need to bound has positive quantum area exponent but negative quantum length exponent (Lemma C.3), and is only finite due to cancellation between the two terms. Thus a naive application of Hölder's inequality fails. We instead use Lemma C.4, which estimates GMC moments conditioned on the field average maximum.
**Proposition C.1**.: _Fix \(\mu_{i}\) with \(\Re\mu_{i}>0\) for \(i=1,2,3\), and consider the following three maps from \(((Q-\gamma)\vee\frac{\gamma}{2},Q)\) to \(\mathbb{C}\):_
\[\beta \mapsto\int\frac{1}{L_{2}}Ae^{-A-\mu_{1}L_{1}-\mu_{2}L_{2}}\, \mathrm{LF}^{(\beta,0),(\gamma,1),(\beta,\infty)}_{\mathbb{H}}(d\phi),\] \[\beta \mapsto\int\frac{1}{L_{2}}A(\mu_{1}L_{1}+\mu_{2}L_{2})e^{-A-\mu_ {1}L_{1}-\mu_{2}L_{2}}\,\mathrm{LF}^{(\beta,0),(\gamma,1),(\beta,\infty)}_{ \mathbb{H}}(d\phi),\] \[\beta \mapsto\int\frac{1}{L_{2}}(\mu_{1}L_{1}+\mu_{2}L_{2})^{2}e^{-A- \mu_{1}L_{1}-\mu_{2}L_{2}}\,\mathrm{LF}^{(\beta,0),(\gamma,1),(\beta,\infty)}_ {\mathbb{H}}(d\phi),\]
_where we write \(A=\mathcal{A}_{\phi}(\mathbb{H})\), \(L_{1}=\mathcal{L}_{\phi}(-\infty,0)\) and \(L_{2}=\mathcal{L}_{\phi}(0,\infty)\). Then each map is well-defined in the sense that the integrals converge absolutely, and extends analytically to a neighborhood of \(((Q-\gamma)\vee\frac{\gamma}{2},Q)\) in \(\mathbb{C}\)._
Proof.: The proof is a modification of that of Proposition 2.21, so we will be briefer. We only prove the result for the second function; the others are proved identically. As before, \((p_{1},p_{2},p_{3})=(-2,0,2)\), and we work in \((\mathbb{H},p_{1},p_{2},p_{3})\) rather than \((\mathbb{H},0,1,\infty)\). To simplify notation we assume \(\mu_{1}=\mu_{2}=\mu\); the argument works the same way when \(\mu_{1}\neq\mu_{2}\). We need to show that the function
\[f(\beta)=\int\frac{\mathcal{A}_{\phi}(\mathbb{H})\mathcal{L}_{\phi}(\mathbb{ R})}{\mathcal{L}_{\phi}((0,\infty))}e^{-\mathcal{A}_{\phi}(\mathbb{H})-\mu \mathcal{L}_{\phi}(\mathbb{R})}\mathrm{LF}^{(\beta,p_{1}),(\gamma,p_{2}),( \beta,p_{3})}_{\mathbb{H}}(d\phi)\]
extends analytically to a neighborhood of \(((Q-\gamma)\vee\frac{\gamma}{2},Q)\) in \(\mathbb{C}\).
For \(r\geq 1\) let \(\mathbb{H}_{r}:=\mathbb{H}\backslash\bigcup_{i}B_{e^{-r}}(p_{i})\), \(I_{r}=\mathbb{R}\backslash(p_{1}-e^{-r},p_{3}+e^{-r})\) and \(J_{r}=(p_{1}+e^{-r},p_{3}-e^{-r})\). For a sample \(h\sim P_{\mathbb{H}}\), write \(A_{r}\) (resp. \(K_{r},L_{r}\)) for the quantum area of \(\mathbb{H}_{r}\) (resp. quantum length of \(I_{r}\cup J_{r}\), \(J_{r}\)) with respect to the field \(h-2Q\log|\cdot|_{+}+\frac{\gamma}{2}G_{\mathbb{H}}(\cdot,p_{2})\). Write \(h_{r}(x)\) for the average of \(h\) on \(\partial B_{e^{-r}}(x)\cap\mathbb{H}\). Then define for \(r\geq 1\) the function
\[f_{r}(\beta)=\int_{\mathbb{R}}dc\,e^{(\beta+\frac{\gamma}{2}-Q)c}\mathbb{E}\left[e^{\frac{\beta}{2}(h_{r}(p_{1})+h_{r}(p_{3}))-\frac{\beta^{2}}{2}r}\frac{(e^{\gamma c}A_{r})(e^{\frac{\gamma}{2}c}K_{r})}{e^{\frac{\gamma}{2}c}L_{r}}e^{-e^{\gamma c}A_{r}-\mu e^{\frac{\gamma}{2}c}K_{r}}\right]\]
on a neighborhood of \(((Q-\gamma)\vee\frac{\gamma}{2},Q)\) in \(\mathbb{C}\) to be specified later. Clearly \(f_{r}\) is analytic.
**Claim 1: Each point in \(((Q-\gamma)\vee\frac{\gamma}{2},Q)\) has an open neighborhood in \(\mathbb{C}\) on which \(f_{r}\) converges uniformly as \(r\to\infty\).**
We have the alternative expression
\[f_{r}(\beta)=\int_{\mathbb{R}}e^{(\beta+\frac{\gamma}{2}-Q)c}\mathbb{E}\left[e ^{\frac{\beta}{2}(h_{r+1}(p_{1})+h_{r+1}(p_{3}))-\frac{\beta^{2}}{2}(r+1)}\frac {(e^{\gamma c}A_{r})(e^{\frac{\gamma}{2}c}K_{r})}{e^{\frac{\gamma}{2}c}L_{r}}e^ {-e^{\gamma c}A_{r}-\mu e^{\gamma c/2}K_{r}}\right]\,dc.\]
Write \(\beta=u+iv\), and let \(\widetilde{A}_{r}=\mathcal{A}_{\widetilde{h}}(\mathbb{H}_{r}),\widetilde{K}_{r}=\mathcal{L}_{\widetilde{h}}(I_{r}\cup J_{r}),\widetilde{L}_{r}=\mathcal{L}_{\widetilde{h}}(J_{r})\) where \(\widetilde{h}=h-2Q\log|\cdot|_{+}+\frac{\beta}{2}(G_{\mathbb{H}}(\cdot,p_{1})+G_{\mathbb{H}}(\cdot,p_{3}))+\frac{\gamma}{2}G_{\mathbb{H}}(\cdot,p_{2})\). As before, with \(g(a,k,\ell)=\frac{ak}{\ell}e^{-a-\mu k}\), we have
\[|f_{r+1}(\beta)-f_{r}(\beta)|\leq Ce^{\frac{r}{2}v^{2}}\int_{\mathbb{R}}e^{(u+\frac{\gamma}{2}-Q)c}\mathbb{E}\left[|g(e^{\gamma c}\widetilde{A}_{r+1},e^{\frac{\gamma}{2}c}\widetilde{K}_{r+1},e^{\frac{\gamma}{2}c}\widetilde{L}_{r+1})-g(e^{\gamma c}\widetilde{A}_{r},e^{\frac{\gamma}{2}c}\widetilde{K}_{r},e^{\frac{\gamma}{2}c}\widetilde{L}_{r})|\right]\,dc.\]
As before, we can bound
\[|g(a_{r+1},k_{r+1},\ell_{r+1})-g(a_{r},k_{r},\ell_{r})|\] \[\lesssim\frac{a_{r+1}e^{-a_{r+1}}}{\ell_{r+1}}(k_{r+1}-k_{r})+\frac{(a_{r+1}-a_{r})e^{-a_{r+1}}}{\ell_{r+1}}+\frac{a_{r}(e^{-a_{r}}-e^{-a_{r+1}})}{\ell_{r+1}}+\frac{a_{r}e^{-a_{r}}(\ell_{r+1}-\ell_{r})}{\ell_{r}\ell_{r+1}}.\]
This gives us four terms to bound. Writing \(s=u+\frac{\gamma}{2}-Q\in((-\frac{\gamma}{2})\vee(\frac{\gamma}{2}-\frac{2}{ \gamma}),\frac{\gamma}{2})\),
\[\int dc\,e^{(s+\gamma)c}\mathbb{E}[\frac{\widetilde{A}_{r+1}(\widetilde{K}_{r+1}-\widetilde{K}_{r})}{\widetilde{L}_{r+1}}e^{-e^{\gamma c}\widetilde{A}_{r+1}}] =\frac{1}{\gamma}\Gamma(\frac{s}{\gamma}+1)\mathbb{E}[\frac{\widetilde{K}_{r+1}-\widetilde{K}_{r}}{\widetilde{L}_{r+1}}\widetilde{A}_{r+1}^{-\frac{s}{\gamma}}],\] \[\int dc\,e^{(s+\frac{\gamma}{2})c}\mathbb{E}[\frac{\widetilde{A}_{r+1}-\widetilde{A}_{r}}{\widetilde{L}_{r+1}}e^{-e^{\gamma c}\widetilde{A}_{r+1}}] =\frac{1}{\gamma}\Gamma(\frac{s}{\gamma}+\frac{1}{2})\mathbb{E}[\frac{\widetilde{A}_{r+1}-\widetilde{A}_{r}}{\widetilde{L}_{r+1}}\widetilde{A}_{r+1}^{-\frac{s}{\gamma}-\frac{1}{2}}],\] \[\int dc\,e^{(s+\frac{\gamma}{2})c}\mathbb{E}[\frac{\widetilde{A}_{r}(\widetilde{L}_{r+1}-\widetilde{L}_{r})}{\widetilde{L}_{r+1}\widetilde{L}_{r}}e^{-e^{\gamma c}\widetilde{A}_{r}}] =\frac{1}{\gamma}\Gamma(\frac{s}{\gamma}+\frac{1}{2})\mathbb{E}[\frac{\widetilde{L}_{r+1}-\widetilde{L}_{r}}{\widetilde{L}_{r+1}\widetilde{L}_{r}}\widetilde{A}_{r}^{-\frac{s}{\gamma}+\frac{1}{2}}].\]
All four integrals follow from the identity \(\int e^{tc}e^{-e^{\gamma c}a}\,\mathrm{d}c=\frac{1}{\gamma}\Gamma(\frac{t}{ \gamma})a^{-\frac{t}{\gamma}}\) which holds for \(t,a>0\).
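This identity is elementary (substitute \(u=e^{\gamma c}a\)); a quick numerical check at arbitrary values of \(\gamma,t,a\) is sketched below.

```python
# Quick numerical check of the identity int_R e^(t c) exp(-e^(gamma c) a) dc
#   = (1/gamma) Gamma(t/gamma) a^(-t/gamma), for arbitrary gamma, t, a > 0.
from mpmath import mp, quad, exp, gamma, mpf

mp.dps = 25
gam, t, a = mpf('1.3'), mpf('0.7'), mpf('2.4')   # arbitrary test values

lhs = quad(lambda c: exp(t * c) * exp(-exp(gam * c) * a), [-mp.inf, mp.inf])
rhs = gamma(t / gam) * a**(-t / gam) / gam
print(lhs, rhs)   # the two agree
```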
By Lemma C.2, for each point in \(((Q-\gamma)\vee\frac{\gamma}{2},Q)\) there is a neighborhood \(O\) and a constant \(C>0\) such that these four terms are uniformly bounded by \(Ce^{-r/C}\) on \(O\). Choosing \(O\) such that \(v\) is small gives the desired uniform convergence.
**Claim 2: \(\lim_{r\to\infty}f_{r}(\beta)=f(\beta)\) when \(\beta\in((Q-\gamma)\vee\frac{\gamma}{2},Q)\).**
The proof is the same as that of Claim 1 with \((r+1)\) replaced by \(\infty\).
**Conclusion.** By Claim 1, there is a neighborhood of \(((Q-\gamma)\vee\frac{\gamma}{2},Q)\) on which \(\lim_{r\to\infty}f_{r}\) exists and is holomorphic. By Claim 2, this limit agrees with \(f\) on \(((Q-\gamma)\vee\frac{\gamma}{2},Q)\), hence is the desired analytic continuation of \(f\).
**Lemma C.2**.: _For each point in \(((Q-\gamma)\vee\frac{\gamma}{2},Q)\) there is a neighborhood \(U\subset\mathbb{R}\) and a constant \(C>0\) such that the following holds. For \(\beta\in U\) and \(r\geq 1\), writing \(s=\beta+\frac{\gamma}{2}-Q\), the expectations_
\[\mathbb{E}[\frac{\widetilde{K}_{r+1}-\widetilde{K}_{r}}{\widetilde{L}_{r+1}} \widetilde{A}_{r+1}^{-\frac{s}{\gamma}}],\quad\mathbb{E}[\frac{\widetilde{A}_{ r+1}-\widetilde{A}_{r}}{\widetilde{L}_{r+1}}\widetilde{A}_{r+1}^{-\frac{s}{ \gamma}-\frac{1}{2}}],\quad\mathbb{E}[\frac{\widetilde{L}_{r+1}-\widetilde{L}_ {r}}{\widetilde{L}_{r+1}\widetilde{L}_{r}}\widetilde{A}_{r}^{-\frac{s}{\gamma}- \frac{1}{2}}],\quad\mathbb{E}[\frac{\widetilde{A}_{r}}{\widetilde{L}_{r+1} \widetilde{L}_{r}}(\widetilde{A}_{r}^{-\frac{s}{\gamma}-\frac{1}{2}}- \widetilde{A}_{r+1}^{-\frac{s}{\gamma}-\frac{1}{2}})]\]
_are each bounded by \(Ce^{-r/C}\). Moreover, the statement still holds when all instances of \((r+1)\) are replaced by \(\infty\)._
_Here, we let \((p_{1},p_{2},p_{3})=(-2,0,2)\), \(\mathbb{H}_{r}:=\mathbb{H}\backslash\bigcup_{i}B_{e^{-r}}(p_{i})\), \(I_{r}=\mathbb{R}\backslash(p_{1}-e^{-r},p_{3}+e^{-r})\) and \(J_{r}=(p_{1}+e^{-r},p_{3}-e^{-r})\). For a sample \(h\sim P_{\mathbb{H}}\), let \(\widetilde{h}=h-2Q\log|\cdot|_{+}+\frac{\beta}{2}(G_{\mathbb{H}}(\cdot,p_{1})+G_{\mathbb{H}}(\cdot,p_{3}))+\frac{\gamma}{2}G_{\mathbb{H}}(\cdot,p_{2})\), and define \(\widetilde{A}_{r}=\mathcal{A}_{\widetilde{h}}(\mathbb{H}_{r})\), \(\widetilde{K}_{r}=\mathcal{L}_{\widetilde{h}}(I_{r}\cup J_{r})\) and \(\widetilde{L}_{r}=\mathcal{L}_{\widetilde{h}}(J_{r})\)._
Proof.: We explain the exponential bounds for each \(\beta\) (rather than uniformly in \(U\)); all inputs in this argument vary continuously so we get uniform bounds in neighborhoods \(U\). We only explain the case where the subscripts are \((r+1)\) rather than \(\infty\) since the argument is the same.
Let \(\varepsilon>0\) and let \(p,q>1\) satisfy \(\frac{1}{p}+\frac{1}{q}=1\). By the same trick used for the first inequality of Lemma 2.23, for the first three terms we have
\[\mathbb{E}[\frac{\widetilde{K}_{r+1}-\widetilde{K}_{r}}{\widetilde{L}_{r+1}}\widetilde{A}_{r+1}^{-\frac{s}{\gamma}}]\leq\varepsilon\mathbb{E}[\frac{\widetilde{A}_{r+1}^{-\frac{s}{\gamma}}}{\widetilde{L}_{r+1}}]+\mathbb{E}[(\frac{\widetilde{K}_{r+1}}{\widetilde{L}_{r+1}}\widetilde{A}_{r+1}^{-\frac{s}{\gamma}})^{p}]^{1/p}\mathbb{P}[\widetilde{K}_{r+1}-\widetilde{K}_{r}>\varepsilon]^{1/q},\]
\[\mathbb{E}[\frac{\widetilde{A}_{r+1}-\widetilde{A}_{r}}{\widetilde{L}_{r+1}}\widetilde{A}_{r+1}^{-\frac{s}{\gamma}-\frac{1}{2}}]\leq\varepsilon\mathbb{E}[\frac{\widetilde{A}_{r+1}^{-\frac{s}{\gamma}-\frac{1}{2}}}{\widetilde{L}_{r+1}}]+\mathbb{E}[(\frac{\widetilde{A}_{r+1}^{-\frac{s}{\gamma}+\frac{1}{2}}}{\widetilde{L}_{r+1}})^{p}]^{1/p}\mathbb{P}[\widetilde{A}_{r+1}-\widetilde{A}_{r}>\varepsilon]^{1/q},\]
\[\mathbb{E}[\frac{\widetilde{L}_{r+1}-\widetilde{L}_{r}}{\widetilde{L}_{r+1}\widetilde{L}_{r}}\widetilde{A}_{r}^{-\frac{s}{\gamma}-\frac{1}{2}}]\leq\varepsilon\mathbb{E}[\frac{\widetilde{A}_{r}^{-\frac{s}{\gamma}-\frac{1}{2}}}{\widetilde{L}_{r+1}\widetilde{L}_{r}}]+\mathbb{E}[(\frac{\widetilde{A}_{r}^{-\frac{s}{\gamma}+\frac{1}{2}}}{\widetilde{L}_{r}})^{p}]^{1/p}\mathbb{P}[\widetilde{L}_{r+1}-\widetilde{L}_{r}>\varepsilon]^{1/q}.\]
For the fourth term, we use the same argument as in the second inequality of Lemma 2.23 to get
\[\mathbb{E}[\frac{\widetilde{A}_{r}}{\widetilde{L}_{r+1}}(\widetilde{A}_{r}^{-\frac{s}{\gamma}-\frac{1}{2}}-\widetilde{A}_{r+1}^{-\frac{s}{\gamma}-\frac{1}{2}})]\lesssim\varepsilon\mathbb{E}[\frac{\widetilde{A}_{r}^{-\frac{s}{\gamma}-\frac{1}{2}}}{\widetilde{L}_{r+1}}]+\mathbb{E}[(\frac{\widetilde{A}_{r}^{-\frac{s}{\gamma}+\frac{1}{2}}}{\widetilde{L}_{r+1}})^{p}]^{1/p}\mathbb{P}[\widetilde{A}_{r+1}-\widetilde{A}_{r}>\varepsilon]^{1/q}.\]
Choosing \(p>1\) small, all the expectations in the above inequalities are uniformly bounded in \(r\) by Lemma C.3 since \(\beta>\frac{\gamma}{2}\) implies \(-\frac{2s}{\gamma}+1<\frac{4}{\gamma^{2}}\), so we can ignore those terms. The choice of \(\varepsilon>0\) and the bounds on the probabilities can be done as in Lemma 2.23 to complete the proof.
**Lemma C.3**.: _Let \(\beta<Q\). Let \(x<\frac{2}{\gamma^{2}}\) and \(y,z<\frac{4}{\gamma^{2}}\) satisfy \(\frac{\gamma}{2}(2x+y+z)<\min(Q-\beta,Q-\gamma)\) and \(\max(2x,0)+\max(y,0)+\max(z,0)<\frac{4}{\gamma^{2}}\). In the setting of Lemma C.2, there is a continuous function \(C(\beta,x,y,z)\) such that for all \(r>1\),_
\[\mathbb{E}[\widetilde{A}_{r}^{x}\widetilde{K}_{r}^{y}\widetilde{L}_{r}^{z}]<C( \beta,x,y,z).\]
Proof.: We will just show that this expectation is finite; all the inputs in the proof vary continuously so the statement about \(C(\beta,x,y,z)\) follows.
For \(i=1,3\) let \(M_{i}=\sup_{0\leq t\leq r}(h_{t}(p_{i})+(\beta-Q)t)\), and let \(M_{2}=\sup_{t\geq 0}(h_{t}(p_{2})+(\gamma-Q)t)\), where \(h_{t}(p_{i})\) denotes the average of \(h\) on \(\partial B_{e^{-t}}(p_{i})\cap\mathbb{H}\). Let \(M=\max_{i}M_{i}\). We first show that for any \(p<\frac{2}{\gamma^{2}}\) we have
(C.1) \[\max(\mathbb{E}[\widetilde{A}_{r}^{p}\mid M],\mathbb{E}[\widetilde{K}_{r}^{2p} \mid M],\mathbb{E}[\widetilde{L}_{r}^{2p}\mid M])\lesssim e^{p\gamma M}\]
with implicit constant not depending on \(r\). For \(i=1,2,3\) let \(A_{i}=\mathcal{A}_{\widetilde{h}}(\mathbb{H}_{r}\cap B_{1}(p_{i}))\) and let \(A_{0}=\mathcal{A}_{\widetilde{h}}(\mathbb{H}_{r}\backslash\bigcup_{i}B_{1}(p _{i}))\), so \(\widetilde{A}_{r}=\sum_{i=0}^{3}A_{i}\). Write \((M_{j})_{j}=(M_{1},M_{2},M_{3})\). If \(p\geq 0\), then by Lemma C.4 and the Markov property of the GFF we have \(\mathbb{E}[A_{i}^{p}\mid(M_{j})_{j}]\lesssim e^{p\gamma M_{i}}\) so \(\mathbb{E}[\widetilde{A}_{r}^{p}\mid(M_{j})_{j}]\lesssim\mathbb{E}[A_{0}^{p} \mid(M_{j})_{j}]+\sum_{i=1}^{3}\mathbb{E}[A_{i}^{p}\mid(M_{j})_{j}]\lesssim e^{ p\gamma M}\). If instead \(p<0\) we get the desired bound using \(\widetilde{A}_{r}^{p}\lesssim\min_{i}A_{i}^{p}\). Arguing similarly for \(K\) and \(L\) gives (C.1).
Next, we show that \(\mathbb{E}[\widetilde{A}_{r}^{x}\widetilde{K}_{r}^{y}\widetilde{L}_{r}^{z}]\lesssim\mathbb{E}[e^{(2x+y+z)\frac{\gamma}{2}M}]\). We explain the argument in the case \(x,y>0\) and \(z<0\); the other cases are similar. Let \(\lambda_{1},\lambda_{2},\lambda_{3}>1\) satisfy \(\frac{1}{\lambda_{1}}+\frac{1}{\lambda_{2}}+\frac{1}{\lambda_{3}}=1\). We can choose these parameters so that \(\max(\lambda_{1}\cdot 2x,\lambda_{2}\cdot y,\lambda_{3}\cdot z)<\frac{4}{\gamma^{2}}\), e.g. by taking \(\lambda_{3}\gg 0\) and choosing \(\lambda_{1},\lambda_{2}\) such that \(\frac{\lambda_{1}}{2x}=\frac{\lambda_{2}}{y}\). Hölder's inequality then gives the desired
\[\mathbb{E}[\widetilde{A}_{r}^{x}\widetilde{K}_{r}^{y}\widetilde{L}_{r}^{z}\mid M ]\leq\mathbb{E}[\widetilde{A}_{r}^{\lambda_{1}x}\mid M]^{1/\lambda_{1}}\mathbb{E }[\widetilde{K}_{r}^{\lambda_{2}y}\mid M]^{1/\lambda_{2}}\mathbb{E}[\widetilde{L}_ {r}^{\lambda_{3}z}\mid M]^{1/\lambda_{3}}\lesssim e^{(2x+y+z)\frac{\gamma}{2}M}.\]
Finally, for \(b>0\), if \((B_{t})_{t\geq 0}\) is a standard Brownian motion, the law of \(\sup_{t\geq 0}(B_{2t}-bt)\) is the exponential distribution with rate \(b\). Thus \(M_{1}\) and \(M_{3}\) are exponential with rate \(Q-\beta\), and \(M_{2}\) is exponential with rate \(Q-\gamma\). We conclude that \(\mathbb{E}[e^{(2x+y+z)\frac{\gamma}{2}M}]\) is finite precisely when \(\frac{\gamma}{2}(2x+y+z)<\min(Q-\beta,Q-\gamma)\).
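The fact about the running maximum of drifted Brownian motion used here can be illustrated by a crude Monte Carlo simulation (not part of the proof); the time discretization below introduces a small downward bias, so the agreement with the \(\mathrm{Exp}(b)\) law is only approximate, and all parameters are arbitrary.

```python
# Crude Monte Carlo illustration that sup_{t>=0} (B_{2t} - b t) is Exp(b)-distributed.
import numpy as np

rng = np.random.default_rng(0)
b, dt, T, n_paths = 1.5, 2e-4, 25.0, 5000
n_steps = int(T / dt)

current = np.zeros(n_paths)        # value of B_{2t} - b t along each path
running = np.zeros(n_paths)        # running maximum (the sup includes t = 0)
sd = np.sqrt(2 * dt)               # increments of B_{2t} have variance 2 dt
for _ in range(n_steps):
    current += rng.normal(-b * dt, sd, size=n_paths)
    np.maximum(running, current, out=running)

print("empirical mean of sup:", running.mean(), "   Exp(b) mean:", 1 / b)
print("empirical P[sup > 1]:", (running > 1).mean(), "   theory exp(-b):", np.exp(-b))
```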
**Lemma C.4**.: _Let \(a>0\) and \(r\in(1,\infty]\). Let \(\mathcal{S}=\mathbb{R}\times(0,\pi)\) be the horizontal strip. Let \(h\) be a GFF on \(\mathcal{S}\) normalized so \((h,\rho)=0\) for some \(\rho\) supported on \((-\infty,0)\times(0,\pi)\), and let \(\widetilde{h}(z)=h(z)-a\Re(z)\). Let \(X_{t}\) be the average of \(\widetilde{h}\) on \(\{t\}\times(0,\pi)\) and let \(M=\sup_{0<t<r-1}X_{t}\). Then for any \(p<\frac{2}{\gamma^{2}}\) and some constant \(C=C(p,a,\rho)\) we have_
\[\max(\mathbb{E}[A^{p}\mid M],\mathbb{E}[K^{2p}\mid M],\mathbb{E}[L^{2p}\mid M ])\leq Ce^{p\gamma M}\]
_where \(A:=\mathcal{A}_{\widetilde{h}}((0,r)\times(0,\pi))\), \(K:=\mathcal{L}_{\widetilde{h}}((0,r)\times\{0,\pi\})\) and \(L:=\mathcal{L}_{\widetilde{h}}((0,r)\times\{0\})\)._
Proof.: We prove the inequality for \(A\) and \(r<\infty\); the others are identically proved. Suppose \(p>0\). A variant of [1, Lemma A.5] gives, with \(j^{*}\) the value of \(t\in[j,j+1]\) maximizing \(X_{t}\),
\[\mathbb{E}[(\sum_{j=0}^{\lfloor r\rfloor}e^{\gamma X_{j^{*}}})^{p}\mid M]\lesssim e^{p\gamma M}.\]
Then the argument of [1, (A.11)] gives the desired result.
Now suppose \(p\leq 0\). Condition on \(M\) and on the time \(t_{*}\in(0,r-1)\) at which \(X_{t_{*}}\) is maximal. The conditional law of \((X_{t})_{[t_{*},r-1]}\) is variance 2 Brownian motion with drift \(-a\) conditioned to stay below \(M\), and \((X_{t})_{[r-1,r]}\) then evolves as unconditioned variance 2 Brownian motion with drift \(-a\). Thus \(Y:=M-\inf_{[t_{*},t_{*}+1]}X_{t}\) is stochastically dominated by \(\hat{Y}:=-\inf_{0<s<1}(B_{2s}-as)\) where \(B\) is standard Brownian motion. Let \(h_{2}\) be the projection of \(h\) to \(\mathcal{H}_{2}(\mathcal{S})\); then \(h_{2}\) is independent of \((X_{t})\) and hence of \((M,t_{*})\), so writing \(S=(t_{*},t_{*}+1)\times(0,\pi)\),
\[\mathbb{E}[A^{p}\mid M,t_{*}] \leq\mathbb{E}[\mathcal{A}_{\widetilde{h}}(S)^{p}\mid M,t_{*}]\leq\mathbb{E}[\mathcal{A}_{h_{2}+M-Y}(S)^{p}\mid M,t_{*}]\] \[\leq e^{p\gamma M}\mathbb{E}[e^{-p\gamma\hat{Y}}]\mathbb{E}[\mathcal{A}_{h_{2}}((0,1)\times(0,\pi))^{p}]\lesssim e^{p\gamma M}.\]
Proof of Proposition 2.17.: Morera's theorem gives analyticity in \(\mu\). Next, [1, Proposition 2.28] relates the two-pointed quantum disk weighted by the quantum length of a side to the three-point Liouville field: writing \(A=\mathcal{A}_{\phi}(\mathbb{H}),L_{1}=\mathcal{L}_{\phi}(-\infty,0),L_{2}= \mathcal{L}_{\phi}(0,\infty)\) and \(s=\beta-Q\), we have
\[R_{\mu_{1},\mu_{2}}(\beta)=\frac{1}{Q-\beta}\int(\frac{\gamma}{s}A+\frac{ \gamma}{2s}\frac{\gamma}{s+\frac{\gamma}{2}}A(\sum_{i}\mu_{i}L_{i})+\frac{ \gamma}{2s}\frac{\frac{\gamma}{2}}{s+\frac{\gamma}{2}}(\sum_{i}\mu_{i}L_{i}) ^{2})\frac{e^{-A-\sum_{i}\mu_{i}L_{i}}}{L_{2}}\mathrm{LF}_{\mathbb{H}}^{(\beta,0),(\gamma,1),(\beta,\infty)}(d\phi).\]
Therefore Proposition C.1 gives the analyticity of \(R_{\mu_{1},\mu_{2}}(\beta)\) in \(\beta\).
## Appendix D Operator product expansion
In this appendix we take a look at the Operator Product Expansion (OPE) lemmas that are used in Section 3. These proofs were first done for LCFT on the sphere in [10] and then in the boundary case in [10]. The main novelty in our case is that we must handle a Liouville functional containing both area and boundary GMC measures, and therefore it cannot be reduced to a simple moment of GMC as in [10, 10]. Nonetheless we will be quite brief in several places of the proofs below when the arguments are similar to those of [10]. We first give the OPE result without reflection, which is valid only for \(\chi=\frac{\gamma}{2}\).
**Lemma D.1**.: _(OPE without reflection) Let \(\chi=\frac{\gamma}{2}\). Assume that the constraints of (3.7) hold, meaning that \(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{1}-\frac{\gamma}{4},\sigma_{2}-\frac {\gamma}{4}\in[-\frac{1}{2\gamma}+\frac{Q}{2},\frac{1}{2\gamma}+\frac{Q}{2}] \times\mathbb{R}\), \(\beta_{1},\beta_{2},\beta_{3}<Q\), \(\sum_{i=1}^{3}\beta_{i}>2Q+\frac{\gamma}{2}\). Assume also that \(\beta_{1}\in(\frac{\gamma}{2},\frac{2}{\gamma})\). Then as \(t\to 0\), one has_
(D.1) \[H_{\frac{\gamma}{2}}(t)-H_{\frac{\gamma}{2}}(0)\underset{t>0}{=}C_{2}^{+}t^{1-C }+o(t^{1-C}),\quad H_{\frac{\gamma}{2}}(t)-H_{\frac{\gamma}{2}}(0)\underset{t <0}{=}C_{2}^{-}t^{1-C}+o(t^{1-C}).\]
_where:_
\[C_{2}^{+} =-\frac{\Gamma(-1+\frac{\gamma\beta_{1}}{2}-\frac{\gamma^{2}}{4}) \Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_{ \frac{\gamma}{2}}(\sigma_{1}-\frac{\gamma}{4})-g_{\frac{\gamma}{2}}(\sigma_{2} +\frac{\beta_{1}}{2}-\frac{\gamma}{4})\right)H\begin{pmatrix}\beta_{1}+\frac{ \gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix},\] \[C_{2}^{-} =-\frac{\Gamma(-1+\frac{\gamma\beta_{1}}{2}-\frac{\gamma^{2}}{4}) \Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_{ \frac{\gamma}{2}}(\sigma_{1})-g_{\frac{\gamma}{2}}(\sigma_{2}+\frac{\beta_{1} }{2})\right)H\begin{pmatrix}\beta_{1}+\frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix}.\]
_A similar statement holds for \(\tilde{H}_{\frac{\gamma}{2}}(t)\). Assume this time that (3.8) hold, and also \(\beta_{1}\in(\frac{\gamma}{2},\frac{2}{\gamma})\). Then as \(t\to 0\), one has_
(D.2) \[\tilde{H}_{\frac{\gamma}{2}}(t)-\tilde{H}_{\frac{\gamma}{2}}(0)\underset{t>0}{ \equiv}\tilde{C}_{2}^{+}t^{1-C}+o(t^{1-C}),\quad\tilde{H}_{\frac{\gamma}{2}}( t)-\tilde{H}_{\frac{\gamma}{2}}(0)\underset{t<0}{\equiv}\tilde{C}_{2}^{-}t^{1-C}+o(t^{1-C}),\]
_where:_
(D.3) \[\tilde{C}_{2}^{+} =-\frac{\Gamma(-1+\frac{\gamma\beta_{1}}{2}-\frac{\gamma^{2}}{4} )\Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_ {\frac{\gamma}{2}}(\sigma_{1}+\frac{\gamma}{4})-g_{\frac{\gamma}{2}}(\sigma_{ 2}-\frac{\beta_{1}}{2}+\frac{\gamma}{4})\right)H\begin{pmatrix}\beta_{1}+ \frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1}+\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix},\] (D.4) \[\tilde{C}_{2}^{-} =-\frac{\Gamma(-1+\frac{\gamma\beta_{1}}{2}-\frac{\gamma^{2}}{4} )\Gamma(1-\frac{\gamma\beta_{1}}{2})}{\Gamma(-\frac{\gamma^{2}}{4})}\left(g_ {\frac{\gamma}{2}}(\sigma_{1})-g_{\frac{\gamma}{2}}(\sigma_{2}-\frac{\beta_{1 }}{2})\right)H\begin{pmatrix}\beta_{1}+\frac{\gamma}{2},\beta_{2},\beta_{3}\\ \sigma_{1}+\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix}.\]
Proof.: This proof follows exactly the steps of [22, Equation (3.15)]. The only difference is that when computing the asymptotic of the difference \(H_{\frac{\gamma}{2}}(t)-H_{\frac{\gamma}{2}}(0)\) there will be no contribution at order \(t^{1-C}\) from the area GMC term. We will only write out the proof for the first asymptotic as the others follow from a similar argument. Now starting from the expression (3.13) of \(H_{\frac{\gamma}{2}}(t)\) one can write:
\[H_{\frac{\gamma}{2}}(t)-H_{\frac{\gamma}{2}}(0)\] \[=-\int_{\mathbb{R}}dc\,e^{(\frac{\gamma p}{2}-\frac{\gamma}{4})c }\mathbb{E}\Bigg{[}e^{\frac{\gamma c}{2}}\int_{\mathbb{R}}\frac{(|r_{1}-t|^{ \frac{\gamma^{2}}{4}}-|r_{1}|^{\frac{\gamma^{2}}{4}})\hat{g}(r_{1})^{\frac{ \gamma^{2}}{8}}}{|r_{1}|^{\frac{\gamma\beta_{1}}{2}}|r_{1}-1|^{\frac{\gamma \beta_{2}}{2}}}e^{\frac{\gamma^{2}}{2}h(r_{1})}d\mu_{\partial}^{t}(r_{1})\] \[\times\exp\Bigg{(}-e^{\gamma c}\int_{\mathbb{H}}\frac{|x|^{\frac{ \gamma^{2}}{2}}\hat{g}(x)^{\frac{\gamma^{2}}{4}(q+1)}}{|x|^{\gamma\beta_{1}}| x-1|^{\gamma\beta_{2}}}e^{\gamma h(x)}d^{2}x-e^{\frac{\gamma c}{2}}\int_{ \mathbb{R}}\frac{|r|^{\frac{\gamma^{2}}{4}}\hat{g}(r)^{\frac{\gamma^{2}}{8}}}{ |r|^{\frac{\gamma^{2}\beta_{1}}{2}}|r-1|^{\frac{\gamma\beta_{2}}{2}}}e^{\frac {\gamma^{2}h(r)}{2}d\mu_{\partial}^{t}(r)}\Bigg{)}\Bigg{]}+o(t^{1-C})\] \[=-t^{1-C}\sqrt{\frac{1}{\sin(\pi\frac{\gamma^{2}}{4})}}\Bigg{(} \cos(\pi\gamma(\sigma_{1}-\frac{\gamma}{4}-\frac{Q}{2}))\int_{\mathbb{R}_{+}} du\frac{(1+u)^{\frac{\gamma^{2}}{4}}-u^{\frac{\gamma^{2}}{4}}}{u^{\frac{\gamma \beta_{1}}{2}}}\] \[-\frac{1}{2}e^{\mathfrak{i}\pi\gamma(\sigma_{2}-\frac{Q}{2}- \frac{\gamma}{4}+\frac{\beta_{1}}{2})}\int_{\mathbb{R}_{+}e^{\mathfrak{i}\pi}}du \frac{(1+u)^{\frac{\gamma^{2}}{4}}-u^{\frac{\gamma^{2}}{4}}}{u^{\frac{\gamma \beta_{1}}{2}}}-\frac{1}{2}e^{-\mathfrak{i}\pi\gamma(\sigma_{2}-\frac{Q}{2}- \frac{\gamma}{4}+\frac{\beta_{1}}{2})}\int_{\mathbb{R}_{+}e^{-\mathfrak{i}\pi}} du\frac{(1+u)^{\frac{\gamma^{2}}{4}}-u^{\frac{\gamma^{2}}{4}}}{u^{\frac{\gamma \beta_{1}}{2}}}\Bigg{)}\] \[\quad\times H\begin{pmatrix}\beta_{1}+\frac{\gamma}{2},\beta_{2}, \beta_{3}\\ \sigma_{1}-\frac{\gamma}{4},\sigma_{2},\sigma_{3}\end{pmatrix}+o(t^{1-C}).\]
Above, as in [22], in the last equality we have used the change of variable \(r_{1}=ut\) and applied the Girsanov theorem to the boundary GMC measure \(e^{\frac{\gamma}{2}h(r_{1})}d\mu_{\partial}^{t}(r_{1})\).
Let us now turn to the OPE with reflection. The following result holds for both values of \(\chi\).
**Lemma D.2**.: _(OPE with reflection) For \(\chi=\frac{\gamma}{2}\) or \(\frac{2}{\gamma}\), assume again the constraints of (3.7) are satisfied. Then there exists a parameter \(\beta_{0}>0\) small enough so that under the assumption that \(\beta_{1}\in(Q-\beta_{0},Q)\), as \(t\to 0\) the following asymptotic holds_
(D.5) \[H_{\chi}(t)-H_{\chi}(0)\underset{t>0}{=}C_{2}^{+}t^{1-\chi\beta_{1}+\chi^{2}}+o (|t|^{1-\chi\beta_{1}+\chi^{2}}),\,\,H_{\chi}(t)-H_{\chi}(0)\underset{t<0}{=}C_ {2}^{-}t^{1-\chi\beta_{1}+\chi^{2}}+o(|t|^{1-\chi\beta_{1}+\chi^{2}})\]
_where:_
(D.6) \[C_{2}^{+} =R(\beta_{1},\sigma_{1}-\frac{\chi}{2},\sigma_{2}-\frac{\chi}{2} )H\begin{pmatrix}2Q-\beta_{1}-\chi,\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{\chi}{2},\sigma_{2},\sigma_{3}\end{pmatrix},\] (D.7) \[C_{2}^{-} =e^{-\mathbbm{i}\pi(1-\chi\beta_{1}+\chi^{2})}R(\beta_{1},\sigma _{1},\sigma_{2})H\begin{pmatrix}2Q-\beta_{1}-\chi,\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{\chi}{2},\sigma_{2},\sigma_{3}\end{pmatrix}.\]
_Similarly for \(\tilde{H}_{\chi}\), still for \(\beta_{1}\in(Q-\beta_{0},Q)\) and assuming this time (3.8) holds, the following asymptotic holds_
(D.8) \[\tilde{H}_{\chi}(t)-\tilde{H}_{\chi}(0)\underset{t>0}{=}\tilde{C}_{2}^{+}t^{1- \chi\beta_{1}+\chi^{2}}+o(|t|^{1-\chi\beta_{1}+\chi^{2}}),\,\,\tilde{H}_{\chi }(t)-\tilde{H}_{\chi}(0)\underset{t<0}{=}\tilde{C}_{2}^{-}t^{1-\chi\beta_{1}+ \chi^{2}}+o(|t|^{1-\chi\beta_{1}+\chi^{2}}),\]
_with this time:_
(D.9) \[\tilde{C}_{2}^{+} =R(\beta_{1},\sigma_{1}+\frac{\chi}{2},\sigma_{2}+\frac{\chi}{2} )H\begin{pmatrix}2Q-\beta_{1}+\chi,\beta_{2},\beta_{3}\\ \sigma_{1}+\frac{\chi}{2},\sigma_{2},\sigma_{3}\end{pmatrix},\] (D.10) \[\tilde{C}_{2}^{-} =e^{-\mathbbm{i}\pi(1-\chi\beta_{1}+\chi^{2})}R(\beta_{1},\sigma _{1},\sigma_{2})H\begin{pmatrix}2Q-\beta_{1}+\chi,\beta_{2},\beta_{3}\\ \sigma_{1}+\frac{\chi}{2},\sigma_{2},\sigma_{3}\end{pmatrix}.\]
Proof.: We will only give the proof for \(\chi=\frac{2}{\gamma}\), as the case of \(\chi=\frac{\gamma}{2}\) can also be deduced similarly by adapting the proof strategy of [22]. Throughout this proof we will use the notation \(q=\frac{1}{\gamma}(2Q-\beta_{1}-\beta_{2}-\beta_{3}+\frac{2}{\gamma}).\) For a Borel set \(I\subseteq[0,\infty)\), consider the symmetric interval \(\hat{I}=I\cup-I\) and the domain of the upper-half plane \(D_{I}=\{e^{\mathbbm{i}\theta}t\mid t\in I,\,\theta\in[0,\pi]\}.\) We introduce the notation:
(D.11) \[K_{I}(t):=e^{\gamma c}\int_{D_{I}}\frac{|x-t|^{2}\hat{g}(x)^{\frac{\gamma^{2}} {4}(q+1)}}{|x|^{\gamma\beta_{1}}|x-1|^{\gamma\beta_{2}}}e^{\gamma h(x)}d^{2}x +e^{\frac{\gamma c}{2}}\int_{\hat{I}}\frac{|r-t|\hat{g}(r)^{\frac{\gamma^{2} \theta}{8}}}{|r|^{\frac{\gamma\beta_{1}}{2}}|r-1|^{\frac{\gamma\beta_{2}}{2} }}e^{\frac{\gamma}{2}h(r)}d\mu_{\partial}^{t}(r).\]
Now we want to study the asymptotic of
(D.12) \[\int_{\mathbb{R}}e^{-\frac{\gamma}{2}qc}\mathbb{E}[\exp(-K_{\mathbb{R}_{+}}(t ))]dc-\int_{\mathbb{R}}e^{-\frac{\gamma}{2}qc}\mathbb{E}[\exp(-K_{\mathbb{R}_{ +}}(0))]dc=:T_{1}+T_{2},\]
where we defined:
\[T_{1} :=\int_{\mathbb{R}}e^{-\frac{\gamma}{2}qc}\mathbb{E}\left[\exp(-K_ {(|t|,+\infty)}(t))-\exp(-K_{\mathbb{R}_{+}}(0))\right]dc,\] \[T_{2} :=\int_{\mathbb{R}}e^{-\frac{\gamma}{2}qc}\mathbb{E}\left[\exp(-K_ {\mathbb{R}_{+}}(t))-\exp(-K_{(|t|,+\infty)}(t))\right]dc.\]
Following the same inequalities as [22, Equations (5.21) - (5.24)] one obtains \(T_{1}=o(|t|^{1-\frac{2\beta_{1}}{\gamma}+\frac{4}{\gamma^{2}}})=o(|t|^{\frac{ 2}{\gamma}(Q-\beta_{1})}).\) Now we focus on \(T_{2}.\) The goal is to restrict \(K_{I}\) to the choice \(I=(0,|t|^{1+\eta})\cup(|t|,\infty)\), with \(\eta>0\) a small positive constant to be fixed, and then the area and boundary GMCs on the
three disjoint parts will be weakly correlated. By using the inequalities of [22, Equation (5.26)] and taking \(\eta\) satisfying the condition
(D.13) \[\eta<\frac{1+(\frac{2}{\gamma}-\frac{\gamma}{2})\beta_{1}-\frac{4}{\gamma^{2}}} {\frac{\gamma\beta_{1}}{2}-1},\]
we have \(\int_{\mathbb{R}}e^{-\frac{\gamma}{2}qc}\ \mathbb{E}\left[\exp(-K_{\mathbb{R}_{+}}(t))- \exp(-K_{(0,|t|^{1+\eta})\cup(|t|,\infty)}(t))\right]dc=o(|t|^{\frac{2}{\gamma} (Q-\beta_{1})})\). It remains to evaluate \(\int_{\mathbb{R}}e^{-\frac{\gamma}{2}qc}\mathbb{E}\left[\exp(-K_{(0,|t|^{1+ \eta})\cup(|t|,\infty)}(t))-\exp(-K_{(|t|,\infty)}(t))\right]dc\). We now introduce the radial decomposition of the field \(h\), \(h(x)=B_{-2\log|x|}+Y(x)\), where \(B\), \(Y\) are independent Gaussian processes with \((B_{s})_{s\in\mathbb{R}}\) a Brownian motion starting from \(0\) for \(s\geq 0\), \(B_{s}=0\) when \(s<0\), and \(Y\) is a centered Gaussian process with covariance,
(D.14) \[\mathbb{E}[Y(x)Y(y)]=\begin{cases}2\log\frac{|x|\vee|y|}{|x-y|},&|x|,|y|\leq 1,\\ 2\log\frac{|x|}{|x-y|}-\frac{1}{2}\log\hat{g}(x)-\frac{1}{2}\log\hat{g}(y),& \text{else}.\end{cases}\]
Now with this decomposition one can write:
(D.15) \[K_{I}(t)=e^{\gamma c}\int_{D_{I}}\frac{|x-t|^{2}\hat{g}(x)^{\frac{\gamma^{2}}{ 4}(q+1)}e^{\gamma B_{-2\log|x|}}}{|x|^{\gamma\beta_{1}}|x-1|^{\gamma\beta_{2}} }e^{\gamma Y(x)}d^{2}x+e^{\frac{\gamma c}{2}}\int_{\tilde{I}}\frac{|r-t|\hat{g} (r)^{\frac{\gamma^{2}\hat{g}}{8}}e^{\frac{\gamma^{2}B_{-2\log|r|}}{2}}}{|r|^ {\frac{\gamma\beta_{2}}{2}}|r-1|^{\frac{\gamma\beta_{2}}{2}}}e^{\frac{\gamma}{ 2}Y(r)}d\mu_{\partial}^{t}(r).\]
Define the processes \(P(x)=Y(x)\mathbf{1}_{|x|\leq t^{1+\eta}}+Y(x)\mathbf{1}_{|x|\geq t}\), \(\tilde{P}(x)=\tilde{Y}(x)\mathbf{1}_{|x|\leq t^{1+\eta}}+Y(x)\mathbf{1}_{|x| \geq t}\) where \(\tilde{Y}\) is an independent copy of \(Y\). By using the inequalities as in [22, Equation (5.39)]
(D.16) \[\left|\int_{\mathbb{R}}e^{-\frac{\gamma}{2}qc}\mathbb{E}\left[\exp(-K_{(0,|t| ^{1+\eta})\cup(|t|,\infty)}(t))-\exp(-\tilde{K}_{(0,|t|^{1+\eta})}(t)-K_{(|t|, +\infty)}(t))\right]dc\right|\leq c_{3}\,t^{\eta},\]
for some constant \(c_{3}>0\), and where in \(\tilde{K}_{(0,|t|^{1+\eta})}(t)\) we simply use the field \(\tilde{Y}\) instead of \(Y\). When \(\eta>\frac{2}{\gamma}(Q-\beta_{1})\), we can bound the previous term by \(o(|t|^{\frac{2}{\gamma}(Q-\beta_{1})})\). Consider now the change of variable \(x=t^{1+\eta}e^{-s/2}\). By the Markov property of the Brownian motion and stationarity of
\[N_{\tilde{Y}}(dsd\theta):=e^{\gamma\tilde{Y}(e^{-\frac{\gamma}{2}}e^{\mathbf{ i}\theta})-\frac{\gamma^{2}}{2}\mathbb{E}[\tilde{Y}(e^{-\frac{\gamma}{2}}e^{ \mathbf{i}\theta})^{2}]}dsd\theta,\quad d\mu_{\tilde{Y}}(s):=\mu_{1}e^{\frac{ \gamma}{2}\tilde{Y}(-e^{-s/2})}ds+\mu_{2}e^{\frac{\gamma}{2}\tilde{Y}(e^{-s/2 })}ds,\]
we can express \(\tilde{K}_{(0,|t|^{1+\eta})}(t)\) by the formula
\[\tilde{K}_{(0,|t|^{1+\eta})}(t) =e^{\frac{\gamma c}{2}}\frac{1}{2}t^{1+(1+\eta)(1-\frac{\gamma\beta _{1}}{2}+\frac{\gamma^{2}}{4})}e^{\frac{\gamma}{2}B_{2(1+h)\log(1/t)}}\int_{0} ^{\infty}\frac{|1-t^{\eta}e^{-s/2}|}{|t^{1+\eta}e^{-s/2}-1|^{\frac{\gamma\beta _{2}}{2}}}e^{\frac{\gamma}{2}(\tilde{B}_{s}-\frac{s}{2}(Q-\beta_{1}))}d\mu_{ \tilde{Y}}(s)\] \[+e^{\gamma c}\frac{1}{2}\sigma_{t}^{2}\int_{0}^{\infty}e^{\gamma( \tilde{B}_{s}-\frac{s}{2}(Q-\beta_{1}))}\int_{0}^{\pi}\frac{|e^{\mathbf{i} \theta}t^{\eta}e^{-\frac{s}{2}}-1|^{2}}{|e^{\mathbf{i}\theta}-e^{-\mathbf{i} \theta}|^{\frac{\gamma^{2}}{2}}|e^{\mathbf{i}\theta}t^{1+\eta}e^{-\frac{s}{2}} -1|^{\gamma\beta_{2}}}N_{\tilde{Y}}(dsd\theta),\]
with \(\tilde{B}\) an independent Brownian motion and \(\sigma_{t}:=t^{1+(1+\eta)(1-\frac{\gamma\beta_{1}}{2}+\frac{\gamma^{2}}{4})}e^{ \frac{\gamma}{2}B_{2(1+\eta)\log(1/t)}}\). We denote:
\[V_{1}:=\frac{1}{2}e^{\frac{\gamma c}{2}}\int_{0}^{\infty}e^{\frac{\gamma}{2}( \tilde{B}_{s}-\frac{s}{2}(Q-\beta_{1}))}d\mu_{\tilde{Y}}(s),\ V_{2}:=\frac{1}{2}e^{ \gamma c}\int_{0}^{\infty}e^{\gamma(\tilde{B}_{s}-\frac{s}{2}(Q-\beta_{1}))} \int_{0}^{\pi}\frac{1}{|e^{\mathbf{i}\theta}-e^{-\mathbf{i}\theta}|^{\frac{ \gamma^{2}}{2}}}N_{\tilde{Y}}(dsd\theta).\]
One can then prove that
\[\left|\int_{\mathbb{R}}e^{-\frac{\gamma}{2}qc}\mathbb{E}\left[\exp(-\tilde{K}_{ (0,|t|^{1+\eta})}(t)-K_{(|t|,+\infty)}(t))-\exp(-\sigma_{t}V_{1}-\sigma_{t}^{2}V_{2 }-K_{(|t|,+\infty)}(t))\right]dc\right|\leq c_{4}\,|t|^{(1+\eta)(2-\frac{\gamma \beta_{1}}{2})},\]
for some \(c_{4}>0\). This upper bound is also a \(o(|t|^{\frac{2}{\gamma}(Q-\beta_{1})})\). Now let \(M=\sup_{s\geqslant 0}(\tilde{B}_{s}-\frac{Q-\beta_{1}}{2}s)\) and let \(L_{M}\) be the last time \(\left(\mathcal{B}_{-s}^{\frac{Q-\beta_{1}}{2}}\right)_{s\geq 0}\) hits \(-M\). Recall that the law of \(M\) is known, for \(v\geq 1\), \(\mathbb{P}(e^{\frac{\gamma}{2}M}>v)=v^{-\frac{2}{\gamma}(Q-\beta_{1})}\). For simplicity, we introduce the notation:
\[\rho_{1}(\beta_{1}):=\frac{1}{2}\int_{-\infty}^{\infty}e^{\gamma\mathcal{B}_{s }^{\frac{Q-\beta_{1}}{2}}}\int_{0}^{\pi}\frac{1}{|e^{\mathbf{i}\theta}-e^{- \mathbf{i}\theta}|^{\frac{\gamma}{2}}}N_{\tilde{Y}}(dsd\theta),\quad\rho_{2}( \beta_{1}):=\frac{1}{2}\int_{-\infty}^{\infty}e^{\frac{\gamma}{2}\mathcal{B}_ {s}^{\frac{Q-\beta_{1}}{2}}}\mu_{\tilde{Y}}(ds),\]
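For the reader's convenience, we recall why the stated law of \(M\) holds; it is the standard formula for the running maximum of a Brownian motion with negative drift (taking \(\tilde{B}\) to be a standard Brownian motion, consistent with the normalization used above): for \(\mu>0\),

\[\mathbb{P}\Big{(}\sup_{s\geq 0}(\tilde{B}_{s}-\mu s)>m\Big{)}=e^{-2\mu m},\qquad\text{so with }\mu=\tfrac{Q-\beta_{1}}{2}:\quad\mathbb{P}(e^{\frac{\gamma}{2}M}>v)=e^{-(Q-\beta_{1})\frac{2}{\gamma}\log v}=v^{-\frac{2}{\gamma}(Q-\beta_{1})},\quad v\geq 1.\]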
We can finally show that:
\[T_{2} =\int_{\mathbb{R}}dce^{\frac{q\gamma c}{2}}\mathbb{E}\left[\exp (-K_{(|t|,+\infty)}(t)-\sigma_{t}^{2}e^{\gamma M+\gamma c}\rho_{1}(\beta_{1})- \sigma_{t}e^{\frac{\gamma}{2}M+\frac{\gamma}{2}c}\rho_{2}(\beta_{1}))-\exp(-K _{(|t|,+\infty)}(t))\right]\] \[+o(|t|^{\frac{2}{\gamma}(Q-\beta_{1})}).\]
Finally, we evaluate the above difference at first order explicitly using the fact that the density of \(\frac{\gamma}{2}M\) is known:
\[\int_{\mathbb{R}}dce^{\frac{q\gamma c}{2}}\left(\mathbb{E}[\exp (-K_{(|t|,+\infty)}(t)-\sigma_{t}^{2}e^{\gamma M+\gamma c}\rho_{1}(\beta_{1})- \sigma_{t}e^{\frac{\gamma}{2}M+\frac{\gamma}{2}c}\rho_{2}(\beta_{1}))]- \mathbb{E}[\exp(-K_{(|t|,+\infty)}(t))]\right)\] \[=\int_{\mathbb{R}}dce^{\frac{q\gamma c}{2}}\mathbb{E}\left[\int_{ 1}^{\infty}\frac{dv}{v^{\frac{2}{\gamma}(Q-\beta_{1})+1}}\left(\exp\left(-K_{( |t|,+\infty)}(t)-\sigma_{t}^{2}v^{2}e^{\gamma c}\rho_{1}(\beta_{1})-\sigma_{t }ve^{\frac{\gamma}{2}c}\rho_{2}(\beta_{1})\right)-\exp(-K_{(|t|,+\infty)}(t)) \right)\right]\] \[=\frac{\gamma}{2}\int_{\mathbb{R}}dce^{\frac{q\gamma c}{2}} \mathbb{E}\Bigg{[}\int_{c+\frac{2}{\gamma}\log\sigma_{t}}^{\infty}dw\sigma_{ t}^{\frac{2}{\gamma}(Q-\beta_{1})}e^{(c-w)(Q-\beta_{1})}\] \[\times\Big{(}\exp\left(-K_{(|t|,+\infty)}(t)-e^{\gamma w}\rho_{1} (\beta_{1})-e^{\frac{\gamma}{2}w}\rho_{2}(\beta_{1})\right)-\exp(-K_{(|t|,+ \infty)}(t))\Big{)}\Bigg{]}\] \[=\frac{\gamma}{2}\int_{\mathbb{R}}dce^{\frac{q\gamma c}{2}}e^{c(Q -\beta_{1})}\mathbb{E}\left[\sigma_{t}^{\frac{2}{\gamma}(Q-\beta_{1})}\exp \left(-K_{(|t|,+\infty)}(t)\right)\right]\] \[\times\mathbb{E}\left[\int_{\mathbb{R}}dwe^{-w(Q-\beta_{1})} \left(\exp\left(-e^{\gamma w}\rho_{1}(\beta_{1})-e^{\frac{\gamma}{2}w}\rho_{ 2}(\beta_{1})\right)-1\right)\right]+o(|t|^{\frac{2}{\gamma}(Q-\beta_{1})}).\]
By applying the Girsanov theorem to the power of \(\sigma_{t}\) in the first expectation and taking the limit \(t\to 0\), the first expectation becomes \(t^{\frac{2}{\gamma}(Q-\beta_{1})}\) times an \(H\) function. The second expectation is the reflection coefficient \(R\). Therefore we arrive at the final claim for the \(t>0\) limit:
(D.17) \[\int_{\mathbb{R}}dce^{q\gamma c/2}\left(\mathbb{E}[\exp(-K_{(|t|,+ \infty)}(t)-\sigma_{t}^{2}e^{\gamma M+\gamma c}\rho_{1}(\beta_{1})-\sigma_{t} e^{\frac{\gamma}{2}M+\frac{\gamma}{2}c}\rho_{2}(\beta_{1}))]-\mathbb{E}[\exp(-K_{(|t|,+ \infty)}(t))]\right)\] \[=t^{\frac{2}{\gamma}(Q-\beta_{1})}R(\beta_{1},\sigma_{1}-\frac{ 1}{\gamma},\sigma_{2}-\frac{1}{\gamma})H\begin{pmatrix}2Q-\beta_{1}-\frac{2}{ \gamma},\beta_{2},\beta_{3}\\ \sigma_{1}-\frac{1}{\gamma},\sigma_{2},\sigma_{3}\end{pmatrix}+o(|t|^{\frac{2 }{\gamma}(Q-\beta_{1})}).\]
The \(t<0\) case is obtained in a similar fashion.
|
2307.09512 | Dissipative phase transitions and passive error correction | We classify different ways to passively protect classical and quantum
information, i.e. we do not allow for syndrome measurements, in the context of
local Lindblad models for spin systems. Within this family of models, we
suggest that passive error correction is associated with nontrivial phases of
matter and propose a definition for dissipative phases based on robust steady
state degeneracy of a Lindbladian in the thermodynamic limit. We study three
thermalizing models in this context: the 2D Ising model, the 2D toric code, and
the 4D toric code. In the low-temperature phase, the 2D Ising model hosts a
robust classical steady state degeneracy while the 4D toric code hosts a robust
quantum steady state degeneracy. We perturb the models with terms that violate
detailed balance and observe that qualitative features remain unchanged,
suggesting that $\mathbb{Z}_2$ symmetry breaking in a Lindbladian is useful to
protect a classical bit while intrinsic topological order protects a qubit. | Yu-Jie Liu, Simon Lieu | 2023-07-18T18:00:05Z | http://arxiv.org/abs/2307.09512v2 | # Dissipative phase transitions and passive error correction
###### Abstract
We classify different ways to _passively_ protect classical and quantum information, i.e. we do not allow for syndrome measurements, in the context of local Lindblad models for spin systems. Within this family of models, we suggest that passive error correction is associated with nontrivial phases of matter and propose a definition for dissipative phases based on robust _steady state_ degeneracy of a Lindbladian in the thermodynamic limit. We study three thermalizing models in this context: the 2D Ising model, the 2D toric code, and the 4D toric code. In the low-temperature phase, the 2D Ising model hosts a robust classical steady state degeneracy while the 4D toric code hosts a robust _quantum_ steady state degeneracy. We perturb the models with terms that violate detailed balance and observe that qualitative features remain unchanged, suggesting that \(\mathbb{Z}_{2}\) symmetry breaking in a Lindbladian is useful to protect a classical bit while intrinsic topological order protects a qubit.
## I Introduction
One of the central challenges toward building a practical quantum computer is the ability to correct quantum errors.
A continuous-time Markovian generator \(\mathcal{L}\) in Lindblad form is defined by
\[\frac{d\rho}{dt}=\mathcal{L}(\rho)=-i[H,\rho]+\sum_{j}\left(L_{j}\rho L_{j}^{ \dagger}-\frac{1}{2}\{L_{j}^{\dagger}L_{j},\rho\}\right), \tag{1}\]
where \(H\) is the Hamiltonian of the system and \(L_{j}\) are dissipators that arise due to the system-environment coupling [38]. For the thermal baths considered in this work, we can split the Lindbladian into two contributions that occur with different rates, a zero-temperature part and an infinite temperature part:
\[\mathcal{L}_{T}=\kappa_{0}\mathcal{L}_{T=0}+\kappa_{\infty}\mathcal{L}_{T= \infty}. \tag{2}\]
The temperature is determined by the ratio of these two processes \((\kappa_{0},\kappa_{\infty})\) and both processes are local in space. The zero-temperature contribution \(\mathcal{L}_{0}\) represents the corrections that send the system to the code space (ground state manifold). The \(\mathcal{L}_{\infty}\) contribution represents errors that cause the state to leave the code space (i.e. bit flips and phase flips). We suppose that this noisy process occurs for a finite time \(t\) that sends \(\rho_{i}\) to a mixed state \(\rho_{m}(t)=e^{D}(\rho_{i})\).
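As a minimal illustration of Eqs. (1)-(2) (a toy sketch, not the specific spin-lattice dissipators introduced below), the following Python snippet builds the matrix of a generic Lindblad generator in the vectorized (column-stacking) representation for a single spin, with a placeholder zero-temperature "correction" jump and an infinite-temperature bit-flip "error", and verifies trace preservation; the rates are arbitrary placeholder values.

```python
import numpy as np

def lindbladian_superop(H, jumps):
    # Matrix of L acting on vec(rho), using vec(A X B) = (B^T kron A) vec(X)
    d = H.shape[0]
    I = np.eye(d)
    Lmat = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for J in jumps:
        JdJ = J.conj().T @ J
        Lmat += np.kron(J.conj(), J) \
                - 0.5 * (np.kron(I, JdJ) + np.kron(JdJ.T, I))
    return Lmat

# Single-spin toy model: kappa_0 sends |1> -> |0> ("T = 0" correction),
# kappa_inf applies bit flips ("T = infinity" errors).
X = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_minus = np.array([[0, 1], [0, 0]], dtype=complex)
kappa_0, kappa_inf = 1.0, 0.1
L = lindbladian_superop(np.zeros((2, 2), dtype=complex),
                        [np.sqrt(kappa_0) * sigma_minus, np.sqrt(kappa_inf) * X])

# Trace preservation: vec(identity) is a left null vector of the generator.
print(np.allclose(np.eye(2).reshape(-1, order="F") @ L, 0.0))  # True
```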
One of the prerequisites for a quantum memory is that the superoperator \(\mathcal{L}\) needs to have degenerate (i.e. more than one) steady states in the presence of noise. (If \(\mathcal{L}\) has a unique steady state then arbitrary qubit initial states will become indistinguishable on a time scale of order the inverse dissipative gap.) In particular, \(\mathcal{L}\) needs to have at least four eigenvalues of zero: Two to protect the relative populations between the logical states, and two to protect the relative complex phase. If the state preserves quantum information, it can be expressed in the following form:
\[\rho_{m}(t)=e^{D}(\rho_{i})=\begin{pmatrix}|c_{0}|^{2}&c_{0}c_{1}\\ c_{0}^{*}c_{1}^{*}&|c_{1}|^{2}\end{pmatrix}\otimes M(t), \tag{3}\]
where \(M\) is a diagonal matrix that does not have to be pure: \(\text{Tr}[M^{2}]\leq 1\). For a mixed \(M\), such a structure is called a "noiseless subsystem" [2]. We will show explicit examples of such steady states that have diagonal \(M\) matrices that follow the Boltzmann distribution. In a fully passive error correcting scheme, one imagines directly manipulating the quantum information that is stored in the mixed state above, then only doing one destructive measurement of the qubits at the end of the computation, avoiding the need to measure stabilizers throughout the protocol.
It is theoretically convenient to quantify the decay rate of coherences by allowing for a "single-shot" decoding superoperator \(\mathcal{E}\) which sends every state in the Hilbert space back to the code space according to some algorithm (e.g. minimal weight matching for the toric code) [39]. The final state we end up with is:
\[\rho_{f}(t)=\mathcal{E}e^{\mathcal{L}t}(\rho_{i}). \tag{4}\]
We wish to find generic setups where the difference between the initial and final state is exponentially small in the system size for any arbitrary (but finite) \(t\). Specifically, we will focus on the deviation of the overlap from unity:
\[1-\text{Tr}[\rho_{i}\rho_{f}(t)]\sim e^{-cN}, \tag{5}\]
where \(N\) is the system size and \(c\) is a constant. The continuous-time Markov process is capable of ensuring that the errors do not destroy the quantum information.
What are the generic conditions under which a continuous-time Markovian generator \(\mathcal{L}\) will host a noiseless subsystem steady state structure of the form of Eq. (3)? In this work, we investigate emergent noiseless subsystems that only appear in the _thermodynamic limit_ of a \(\mathbb{Z}_{2}\)-symmetry-broken phase, or a phase with intrinsic topological order. This differs from most examples in the literature, where exact noiseless subsystems arise in finite systems due to non-Abelian symmetries [40]. For such systems, the qubit will decohere in the presence of either local bit-flip errors (\(X\)) or local phase-flip errors (\(Z\)) (see Appendix A). By contrast \(\mathbb{Z}_{2}\) symmetry breaking will protect against local bit flips but not phase flips, and intrinsic topological order will protect against both. This is summarized in Table 1. This closely mirrors quantum phase transitions: A nontrivial quantum phase supports a _robust_ ground state degeneracy of the Hamiltonian in the thermodynamic limit. For \(\mathbb{Z}_{2}\) spontaneous symmetry breaking, the degeneracy is fragile to terms which violate the \(\mathbb{Z}_{2}\) symmetry, while for intrinsic topological order any local perturbation cannot split the degeneracy.
Working by analogy, we will search for phase transitions in a Lindbladian \(\mathcal{L}(\alpha)\) as we continuously deform some parameter \(\alpha\) (e.g. the temperature). In particular, we require that:
* \(\mathcal{L}\) is composed of local terms.
* \(\mathcal{L}\) has a steady state degeneracy (e.g. noiseless subsystem) in the thermodynamic limit of a nontrivial phase. This degeneracy can be removed when going across a phase boundary via smoothly tuning parameters \(\alpha\) in \(\mathcal{L}(\alpha)\).
* The steady state degeneracy is robust against arbitrary local perturbations in the master equation. (Up to symmetry constraints for spontaneous symmetry-breaking.)
* \(\mathcal{L}\) has a non-zero dissipative gap away from the critical point and is gapless at the critical point 1. Footnote 1: In a nontrivial phase, there are eigenvalues of the Lindbladian that are exponentially close to zero in system size, necessary for steady state degeneracy. We refer to the gap as the smallest real part of the Lindblad spectrum above these steady state solutions, which should be finite at large system sizes for “gapped” systems.
This last condition ensures that there is a critical slowing down of fluctuations at the critical point:
\[\langle\mathcal{O}(t)\mathcal{O}\rangle-\langle\mathcal{O}\rangle^{2}\sim t^ {-z}, \tag{6}\]
where \(\mathcal{O}\) is an arbitrary observable, expectation values are taken with respect to the steady state, and \(z\) is a dynamical critical exponent, i.e. temporal correlators decay as a power-law rather than an exponential [41]. Fig. 1 provides a sketch of the requirements outlined above.
In searching for a Lindbladian with the properties outlined above, it is useful to notice that the both symmetry-broken phases and topological phases can be thermally stable, and that the Lindbladian can be used to describe _thermal_ phase transitions in these systems. The low-temperature phase of the former is useful for a passive classical bit, while the latter is useful for a qubit [5]. In this work, we study local dissipative models that reproduce thermal phase transitions in the 2D Ising model, the 2D toric code, and the 4D toric code. We perturb the models with terms that break detailed balance and observe that important features of the phase remain preserved. We provide evidence that our model for the 2D Ising model is an example of a symmetry-breaking phase transition that satisfies all of the bullet points above while the 4D toric code is an example of a topological transition with those properties.
Beyond drawing conceptual parallels between phase transitions in open and closed quantum systems, our work raises the possibility that the dissipative 4D toric code is protected against _arbitrary_ local perturbations in the master equation.
## III Two-dimensional Ising model
We begin by considering spins on an \(N\times N\) lattice. The 2D Ising model Hamiltonian reads
\[H_{ix}=-\sum_{x,y=1}^{N}(S_{x,y,\tau}+S_{x,y,\tau}), \tag{7}\]
where
\[S_{x,y;\tau}=Z_{x,y}Z_{x+1,y},\qquad S_{x,y;\tau}=Z_{x,y}Z_{x,y+1}, \tag{8}\]
are \(S_{x,y;\tau,t}\) are stabilizers which pair up a spin on site \((x,y)\) with its neighbor to the right/top, \(r,t\); \(Z_{x,y}\) is the \(Z\) Pauli operator on that site. The ferromagnetic states are the ground states of this model and span the code space: \(|\bar{0}\rangle=|\uparrow\uparrow\uparrow\cdots\rangle,|\bar{1}\rangle=| \downarrow\downarrow\downarrow\cdots\rangle\).
Let us define "zero-temperature" jump operators for each spin \(x,y\):
\[L^{(4)}_{x,y} = \sqrt{\kappa}X_{x,y}P^{-}_{x,y;r}P^{-}_{x,y;t}P^{-}_{x-1,y;r}P^{-}_{x,y-1;t} \tag{9}\] \[L^{(3)}_{x,y} = \sqrt{\tilde{\kappa}}X_{x,y}P^{+}_{x,y;r}P^{-}_{x,y;t}P^{-}_{x-1,y;r}P^{-}_{x,y-1;t}\] (10) \[L^{(2)}_{x,y} = \sqrt{\kappa}X_{x,y}P^{+}_{x,y;r}P^{+}_{x,y;t}P^{-}_{x-1,y;r}P^{-}_{x,y-1;t}, \tag{11}\]
where \(\kappa,\tilde{\kappa}\) are the dissipative rates and \(P^{\pm}_{x,y;r/t}=(1\pm S_{x,y;r/t})/2\) is a projector onto a particular stabilizer configuration [42]. The superscripts indicate the number of domain walls that the projector is checking for (and we neglect to write jumps related by rotational invariance, e.g. there are four different \(L^{(3)}_{x,y}\) operators). These jump operators will only cause a spin flip if it is energetically favorable to do so. We will also consider uniform bit flips and phase flips on each lattice site:
\[L^{\prime}_{x,y}=\sqrt{\Delta_{x}}X_{x,y},\qquad L^{\prime\prime}_{x,y}=\sqrt {\Delta_{z}}Z_{x,y}. \tag{12}\]
The dissipators in Eq. (12) represent the infinite temperature bath \(\mathcal{L}_{\infty}\), while the ones defined in Eqs. (9) - (11) represent the zero-temperature bath \(\mathcal{L}_{0}\)2.
Footnote 2: The Hamiltonian does not affect the dynamics for the simulations considered in this work and thus we set it to zero for simplicity. Physically, we interpret the dissipative processes as a simple model for a thermalizing bath of the Hamiltonian, i.e. we do not require dissipative engineering.
Figure 1: (a) Caricature spectrum of a Hamiltonian \(H(\alpha)\) across a quantum phase boundary. In a nontrivial phase the ground state is degenerate (red line); the energy gap closes at a critical point, then a unique ground state emerges in the trivial phase (dashed red line). (b) We are looking for analogous phase transitions in a Lindbladian \(\mathcal{L}(\alpha)\), characterized by a steady-state degeneracy in a nontrivial phase, and a closing of the dissipative gap at the phase boundary.
\begin{table}
\begin{tabular}{c|c|c} noiseless subsystem & thermo. limit? & stable to noise? \\ \hline \hline non-Abelian strong symmetry & no & no \\ \hline \(\mathcal{Z}_{2}\) strong symmetry breaking & yes & \(X\) \\ \hline intrinsic topological order & yes & \(X\) and \(Z\) \\ \end{tabular}
\end{table}
Table 1: Different ways to achieve a noiseless subsystem steady state structure. [See Eq. (3).] Imposing a non-Abelian strong symmetry [40] on a _finite_ system ensures that the Lindbladian has a noiseless subsystem steady state but it is generally fragile to both bit flips (\(X\)) and phase flips (\(Z\)). (See Appendix A.) \(\mathcal{Z}_{2}\) strong symmetry breaking [27] requires the thermodynamic limit but is able to protect against bit flips (or phase flips, depending on convention). Intrinsic topological order requires the thermodynamic limit but is stable to both bit and phase flips.
If we set the rate \(\tilde{\kappa}=\sqrt{\Delta_{x}\kappa+\Delta_{x}^{2}}-\Delta_{x}\), then the steady state is the thermal state of the 2D Ising model:
\[\rho_{ss}=\frac{e^{-\beta H_{\mathrm{Is}}}}{\text{Tr}[e^{-\beta H_{\mathrm{Is}}}]},\qquad\beta=\frac{1}{8}\ln\left[\frac{\kappa+\Delta_{x}}{\Delta_{x}}\right], \tag{13}\]
with the effective (inverse) temperature of the model set by the relative ratio of the correction rate to the bit-flip rate. This is most easily understood within the quantum jump picture [43] since the rates of transitioning between different classical configurations will respect detailed balance. For example, the transition rate from a ferromagnetic configuration (0) to a configuration with four domain walls (4) satisfies the relationship:
\[\frac{\kappa_{0\to 4}}{\kappa_{4\to 0}}=\frac{\Delta_{x}}{\kappa+\Delta_{x}}=e^{-( \Delta E)\beta}=e^{-8\beta}. \tag{14}\]
(See Fig. 2.)
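A quick numerical sanity check of Eqs. (13)-(14) (a small Python sketch with placeholder rates, assuming as written above that the four-domain-wall flips occur at rate \(\kappa\) and the three-domain-wall flips at rate \(\tilde{\kappa}\)): both the \(\Delta E=8\) and \(\Delta E=4\) moves obey detailed balance at the same effective temperature.

```python
import numpy as np

kappa, Delta_x = 1.0, 0.05                                  # placeholder rates
kappa_tilde = np.sqrt(Delta_x * kappa + Delta_x**2) - Delta_x
beta = np.log((kappa + Delta_x) / Delta_x) / 8

# Delta E = 8 moves (0 <-> 4 domain walls) and Delta E = 4 moves (1 <-> 3 domain walls)
ratio_8 = Delta_x / (kappa + Delta_x)
ratio_4 = Delta_x / (kappa_tilde + Delta_x)
print(np.isclose(ratio_8, np.exp(-8 * beta)))  # True
print(np.isclose(ratio_4, np.exp(-4 * beta)))  # True
```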
### Thermal steady states
It is known that the 2D Ising model has a thermal phase transition in the sense that the two ferromagnetic states have an exponentially long lifetime (in \(N\)) when \(\beta>\beta_{c}\approx 0.44\)[41]. This is because excitations come in the form of domain walls with an energy that is proportional to their perimeter, and hence an extensive energy barrier separates the two ferromagnetic states as \(N\to\infty\)[20; 5].
In the ferromagnetic phase (\(\Delta_{x}\ll\kappa\)) and in the limit of no dephasing (\(\Delta_{z}=0\)), the steady state of the model can support a qubit:
\[\rho_{ss}=\sum_{i}\frac{e^{-\beta E_{i}}}{\mathcal{Z}}\begin{pmatrix}|E_{i}^{+}\rangle,&|E_{i}^{-}\rangle\end{pmatrix}\begin{pmatrix}|c_{0}|^{2}&c_{0}c_{1}\\ c_{0}^{*}c_{1}^{*}&|c_{1}|^{2}\end{pmatrix}\begin{pmatrix}\langle E_{i}^{+}|\\ \langle E_{i}^{-}|\end{pmatrix}, \tag{15}\]
for \(|c_{0}|^{2}+|c_{1}|^{2}=1\). \(\mathcal{Z}\) is the partition function, and the states \(|E_{i}^{\pm}\rangle\) are energy eigenstates of the Ising Hamiltonian which are twofold degenerate and labeled by their parity: \(P|E_{i}^{\pm}\rangle=\pm|E_{i}^{\pm}\rangle\) with \(P=\prod_{j}X_{j}\). This is a "noiseless subsystem" that was described in Sec. II. The on-diagonal degrees of freedom in Eq. (15) are protected by a "strong" \(\mathbb{Z}_{2}\) symmetry [23; 24]: \([L_{j},P]=0,\forall j\), which is generally fragile. (See Appendix B for details on the block decomposition of a Lindbladian with strong and/or weak symmetry.)
In the more physical case when both the bit flip and phase flip rate is nonzero (i.e. \(\Delta_{x}\neq 0\), \(\Delta_{z}\neq 0\)) then we only get a classical bit in the low-temperature phase:
\[\rho_{ss} =\sum_{i}\frac{e^{-\beta E_{i}}}{\mathcal{Z}}\Big{(}\left(|E_{i}^{+}\rangle\langle E_{i}^{+}|+|E_{i}^{-}\rangle\langle E_{i}^{-}|\right)/2 \tag{16}\] \[\qquad+\,(2c-1)\left(|E_{i}^{+}\rangle\langle E_{i}^{-}|+|E_{i}^{-}\rangle\langle E_{i}^{+}|\right)/2\Big{)}\] (17) \[\approx c|\uparrow\uparrow\cdots\rangle\langle\uparrow\uparrow\cdots|+(1-c)|\downarrow\downarrow\cdots\rangle\langle\downarrow\downarrow\cdots|, \tag{18}\]
for \(c\in[0,1]\). In this case, the system has a "weak" \(\mathbb{Z}_{2}\) symmetry at the level of the full Lindbladian: \([\mathcal{L},\mathcal{P}]=0,\mathcal{P}(\rho)=P\rho P\), and the steady states spontaneously break this symmetry [27; 42].
### Numerics
Suppose we initialize our system in a ferromagnetic state: \(|\psi\rangle=|\uparrow\uparrow\uparrow\cdots\rangle\); we then quench the system with the Lindbladian described above for a time \(t\) which is long enough for the system to settle into its steady state. Finally, we apply a single-shot decoder which brings the state back to the code space via a global majority rule. We apply the following channel superoperator:
\[\mathcal{E}(\rho)=\sum_{\mathbf{r}}F_{\mathbf{r}}\rho F_{\mathbf{r}}^{\dagger },\qquad F_{\mathbf{r}}=U_{\mathbf{r}}P_{\mathbf{r}}, \tag{19}\]
where \(P_{\mathbf{r}}=\prod_{j}(1+(-1)^{r_{j}}S_{j})/2\) is a projector onto a particular domain wall configuration, and \(U_{\mathbf{r}}=\prod_{k\in d_{\mathbf{r}}}X_{k}\) flips all spins \(k\) in the smaller domain \(d_{\mathbf{r}}\). Fig. 3 plots the overlap between the initial and final states as a function of the system size. In the low-temperature phase, the overlap approaches one exponentially fast in \(N\), meaning that the logical error rate drops to zero in the thermodynamic limit. Qualitatively different behavior occurs in the high-temperature phase (red dots). Here, the overlap saturates to \(0.5\) for all values of \(N\).
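For concreteness, here is a classical caricature of the single-shot decoding channel of Eq. (19) (a sketch only: it acts on a \(\pm 1\) spin configuration and applies the global majority rule directly, rather than tracking stabilizer projectors; lattice size and error probability are placeholders).

```python
import numpy as np

def majority_decode(spins):
    # Map a +-1 configuration to the ferromagnet chosen by a global majority vote
    logical = 1 if spins.sum() >= 0 else -1
    return np.full_like(spins, logical)

rng = np.random.default_rng(0)
spins = np.ones((8, 8), dtype=int)
spins[rng.random((8, 8)) < 0.1] = -1        # sprinkle a few bit-flip errors
print(np.all(majority_decode(spins) == 1))  # True: decoded back to all-up
```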
### Connection to classical Glauber dynamics and the dissipative gap
We have described a set of dissipators that essentially perform Glauber-type Monte Carlo updates on the state for every quantum jump [44]. The Glauber dynamics is an efficient way to sample from the equilibrium distribution of the classical Ising model. It works as follows: First pick a spin at random, then flip it with a probability that depends the resulting change in local energy \(\Delta E\). We can utilize results from the vast literature on Glauber dynamics to make inferences on the properties of the Lindblad spectrum. Our analysis suggests that the Lindblad spectrum should have a
Figure 3: (a) The overlap between the initial and final states for the protocol given in the main text, for a Lindbladian in the high-temperature phase (red dots), and in the low-temperature phase (black and blue dots). As linear system size \(N\) grows, the overlap approaches one only in the low-temperature (symmetry-broken) phase corresponding to \(\beta>\beta_{c}\approx 0.44\). (b) Same black data points on a log plot; the overlap tends to one exponentially fast in \(N\). In both (a) and (b), the quench time is \(t=800/\kappa\), i.e. long enough to reach the steady state. The simulation employs the quantum jump approach by averaging over \(10^{5}\) trajectories.
nonzero dissipative gap at a generic point away from the critical point 3.
Footnote 3: It should be noted that the Lindbladian is gapless in the limit of zero noise (\(\Delta_{x}=0\)) since domain walls that include an extensive area can take a time polynomial in the linear system size to completely shrink to a ferromagnetic configuration [45; 46]. However we provide evidence that this gapless behavior is restricted to the fine-tuned case of zero noise.
It is well known that the correlation time of magnetic fluctuations diverges at the critical temperature for Ising-Glauber (or Metropolis-Hastings) simulations [47; 44]. Let us define the magnetic autocorrelation function:
\[\chi(t) =\int dt^{\prime}[m(t^{\prime})-\langle m\rangle][m(t^{\prime}+t )-\langle m\rangle] \tag{20}\] \[=\int dt^{\prime}[m(t^{\prime})m(t^{\prime}+t)-\langle m\rangle^ {2}], \tag{21}\]
where \(m(t)\) is the time-dependent magnetization of a single spin evolving via the Glauber dynamics of the Ising model at equilibrium. Physically this measures the correlation between fluctuations in time about the average value. This function decays exponentially away from the critical point: \(\chi(t)\sim e^{-t/\tau}\), where \(\tau\) is the correlation time. (\(\tau\) diverges at the critical point.)
Within the Lindblad formalism, the magnetic autocorrelation function can be expressed as:
\[\chi(t)=\text{Tr}[Ze^{\mathcal{L}t}(Z\rho_{ss})]-\text{Tr}[Z\rho_{ss}]^{2}, \tag{22}\]
where \(Z\) is the Pauli operator associated with an arbitrary spin in the lattice and \(\rho_{ss}\) is the steady state. We can express this in the eigenbasis of the Lindbladian:
\[e^{\mathcal{L}t}(Z\rho_{ss})=\sum_{j\geq 0}e^{\lambda_{j}t}c_{j}r_{j}=\text{Tr}[Z\rho_{ss}]\rho_{ss}+\sum_{j\neq 0}e^{\lambda_{j}t}c_{j}r_{j}, \tag{23}\]
where \(l_{j},r_{j}\) are the left and right eigenoperators of \(\mathcal{L}\), \(c_{j}=\text{Tr}[l_{j}^{\dagger}Z\rho_{ss}]\), and we have used the fact that the eigenoperators associated with the steady state \(\lambda_{0}=0\) are \(l_{0}=\mathbb{I},r_{0}=\rho_{ss}\). We thus find that
\[\chi(t)=\sum_{j\neq 0}e^{\lambda_{j}t}c_{j}\text{Tr}[Zr_{j}]. \tag{24}\]
An autocorrelation function \(\chi(t)\) that decays exponentially in time would thus be consistent with a Lindbladian that has a nonzero dissipative gap4 i.e. \(-\text{Re}[\lambda_{j}]>0,\forall j>0\). (Note: Here we use \(\rho_{ss}\) to mean the unique steady state in the trivial phase, and one of the symmetry-broken ferromagnetic states in the nontrivial phase; in the latter case \(\text{Tr}[Z\rho_{ss}]\neq 0\).)
Footnote 4: We use local observables to probe the dynamical correlation, which capture the relaxation to locally stationary states and will in general not be sensitive to global properties such as the steady-state degeneracy of the system.
To simulate this correlator, we use discrete channel evolution that is very similar to a global update of the lattice under Glauber dynamics. The number of jumps that occur during an interval of time \(t\) obeys the Poisson distribution:
\[\rho(t)=e^{\mathcal{L}t}[\rho(0)]=\sum_{k=0}^{\infty}\Lambda^{k}[\rho(0)]\left[\frac{(t\sum_{s}\kappa_{s})^{k}}{k!}\right]e^{-t\sum_{s}\kappa_{s}}, \tag{25}\]
where \(\Lambda\) is the channel superoperator associated with a Glauber-type single jump occurring in the system (see Appendix C for the derivation), and \(\kappa_{s}\) labels all of the jump rates. The spectrum of \(\mathcal{L}\) satisfies: \(\text{Spec}(\mathcal{L})=\left(\sum_{s}\kappa_{s}\right)\text{Spec}(\Lambda)-\left(\sum_{s}\kappa_{s}\right)\). In a time step \(\delta t=1/(\kappa+\Delta)\) the average number of jumps will be \([N^{2}(\kappa+\Delta)]\delta t=N^{2}\), i.e. each spin on the lattice will get one update on average. If we define the channel operator of this one global update rule as \(\Lambda_{g}=\Lambda^{N^{2}}\), then we will approximate the Lindblad dynamics via the following channel evolution:
\[e^{\mathcal{L}(M\delta t)}\approx\Lambda_{g}^{M}. \tag{26}\]
The resulting dynamics for the autocorrelator decays exponentially as a function of \(t\) (away from the critical point). We numerically extract the characteristic decay time (\(\tau\)) and plot it as a function of error rate in the left panel of Fig. 4(a). We see that it diverges precisely at the error rate that corresponds to the critical temperature of the 2D Ising model.
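A self-contained sketch of this procedure is given below (a plain heat-bath Glauber simulation standing in for the channel evolution described above, rather than the specific rates of Eqs. (9)-(12); lattice size, temperature, and number of sweeps are placeholder values).

```python
import numpy as np

def glauber_sweep(spins, beta, rng):
    # One "global step": N^2 random single-spin heat-bath updates
    N = spins.shape[0]
    for _ in range(N * N):
        x, y = rng.integers(N, size=2)
        nn = (spins[(x + 1) % N, y] + spins[(x - 1) % N, y]
              + spins[x, (y + 1) % N] + spins[x, (y - 1) % N])
        dE = 2 * spins[x, y] * nn                        # energy cost of flipping (x, y)
        if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):
            spins[x, y] *= -1

rng = np.random.default_rng(1)
N, beta, steps = 30, 0.35, 2000
spins = np.ones((N, N), dtype=int)
m = np.empty(steps)
for k in range(steps):
    glauber_sweep(spins, beta, rng)
    m[k] = spins[0, 0]                                   # single-site magnetization, cf. Eq. (20)

dm = m - m.mean()
chi = np.correlate(dm, dm, mode="full")[len(m) - 1:]     # autocorrelation versus lag
print(chi[:5] / chi[0])
```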
Figure 4: Estimated autocorrelation time for the dissipative Ising model on 30\(\times\)30 square lattice with periodic boundary condition and \(\kappa=1\). The data is obtained from 100 trajectories, each consisting of \(2\times 10^{5}\) global time steps. The autocorrelation time and the estimate of the eigenvalues are obtained using single and two exponential fittings, respectively. The autocorrelation time \(\tau\) is given in units of \(\delta t=(\kappa+\Delta_{x})^{-1}\). The estimated eigenvalues are given in units of \((\delta t)^{-1}\). After fitting the function of the form \(c_{1}e^{-\gamma_{1}t}+c_{2}e^{-\gamma_{2}t}\) with \(\gamma_{1}\leq\gamma_{2}\), an estimate for the eigenvalues of the Lindbladian is given by \(\gamma_{1/2}\), as discussed in Sec. III.3. (a) With detailed balance, the dashed line is the critical noise rate corresponding to the critical temperature of the 2D Ising model. (b) The majority-rule case (without detailed balance).
We can more accurately estimate the low-lying eigenvalues of the Lindbladian by fitting a sum of exponential functions to \(\chi(t)\)[47]. [See Eq. (24).] We find good agreement for a sum of two exponentials for the error rates that we have scanned. (See Appendix D.) We fit the decay rates \(\gamma_{1/2}\) of the two exponentials; these rates estimate the magnitudes of the two smallest nonzero Lindblad eigenvalues. This is plotted as a function of error rate in the right panel of Fig. 4(a). The smallest eigenvalue (estimating the gap) indeed touches zero at the critical temperature.
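The eigenvalue estimate can be reproduced with a standard two-exponential fit; the sketch below uses synthetic stand-in data, whereas in practice \(\chi(t)\) would come from the trajectory simulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, c1, g1, c2, g2):
    return c1 * np.exp(-g1 * t) + c2 * np.exp(-g2 * t)

t = np.arange(200, dtype=float)                           # lags in units of dt
chi = 0.7 * np.exp(-0.03 * t) + 0.3 * np.exp(-0.4 * t)    # stand-in for measured data

popt, _ = curve_fit(two_exp, t, chi, p0=[0.5, 0.1, 0.5, 1.0])
gamma = np.sort(np.abs([popt[1], popt[3]]))
print(gamma)   # the smaller rate estimates the dissipative gap (in units of 1/dt)
```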
Another feature of the Glauber-like Ising dynamics is that starting from an arbitrary initial state, the system evolves towards the thermal distribution exponentially fast in time (away from the critical point), thus allowing one to sample efficiently from the thermal stationary distribution. We can confirm that this behavior occurs in the model described above (see Appendix E), which is consistent with a finite gap in the Lindbladian [48].
### Perturbing away from equilibrium
An advantage of formulating the dynamics in terms of the Lindbladian is that we can start to perturb the system via terms that explicitly break detailed balance to test whether the stability of the phase is linked to thermal equilibrium or rather the locality and symmetry properties of the model. As a simple example, we set the two correction rates to be equal to each other: \(\tilde{\kappa}=\kappa\). This corresponds to a local majority rule, i.e. spins flip with a uniform rate \(\kappa\) if the majority of neighbors are misaligned. This violates the detailed balance condition in Eq. (14) but intuitively the error correcting properties of the phase should persist since the correction processes have a higher rate than in the thermal case.
In Fig. 4 we show that the critical properties of the model appear to be very similar to the thermal case, i.e. the autocorrelation time diverges at a specific value of \(\Delta_{x}\) and the estimated Lindblad gap approaches zero at this point. One notable difference is that the critical error rate is higher than in the thermal case, which intuitively makes sense since the correction rate is larger.
### Effect of nonzero magnetic field
So far we have considered errors of the form: \(L\sim X,Z\), i.e. bit flips and phase flips. In the presence of both of these errors, the Lindbladian still has a weak \(\mathbb{Z}_{2}\) symmetry: \([\mathcal{L},\mathcal{P}]=0,\mathcal{P}(\rho)=P\rho P\). One can ask about the stability of the classical bit with respect to perturbations that violate this condition. For example, the dissipative processes associated with a nonzero magnetic field in the \(Z\) direction, i.e. \(L_{j}=\sqrt{\Delta_{\mathrm{i}}}X_{j}(1-Z_{j})/2\), will ensure that \([\mathcal{L},\mathcal{P}]\neq 0\). While this type of perturbation will lead to a unique steady state, the equilibration time is exponentially long. This dynamics has been extensively studied in the context of 2D Ising metastability [49], which suggests that the equilibration time in the presence of a small field scales as \(\sim\exp[w(\kappa,\Delta_{x})/\Delta_{\mathrm{i}}]\) for some constant \(w\) that depends on temperature, i.e. the ratio of \(\kappa\) and \(\Delta_{x}\)[50].
We can compare the effect of terms that explicitly break the strong and weak \(\mathbb{Z}_{2}\) symmetry: Microscopic dephasing (\(\Delta_{z}\neq 0\)) will break the strong \(\mathbb{Z}_{2}\) symmetry, and leads to decoherence in the logical basis of the ferromagnets. The logical decoherence rate is directly proportional to the microscopic dephasing rate \(\Delta_{z}\). In contrast, terms that violate the weak symmetry (\(\Delta_{\mathrm{i}}\neq 0\)) will destroy the logical classical bit stored in the ferromagnets, but only at a rate that scales as \(\sim\exp[-w(\kappa,\Delta_{x})/\Delta_{\mathrm{i}}]\). The stability of the classical bit with respect to arbitrary local perturbations is intuitively why good classical bits occur in nature (e.g. ferromagnets and solids) even though explicit symmetry-breaking terms are always present.
## IV 2D toric code
In the previous section we suggest that the 2D Ising model is a good classical bit in the presence of generic local noise. The rest of this work will investigate the possibility of obtaining a topological steady state degeneracy in the Lindbladian such that arbitrary local errors will not corrupt the qubit. We will first attempt to do this in the 2D toric code [51].
We consider qubits that live on the edges of a square lattice with periodic boundary conditions and \(N\times N\) unit cells. The toric code Hamiltonian reads
\[H_{tc}=-\sum_{s}A_{s}-\sum_{p}B_{p}, \tag{27}\]
where \(A_{s}=\prod_{i\in s}X_{i}\) and \(B_{p}=\prod_{i\in p}Z_{i}\) are stabilizers at each vertex \(s\) and plaquette \(p\). (See Fig. 5.) The ground states of the model satisfy \(A_{s}|\mathrm{gnd}\rangle=B_{p}|\mathrm{gnd}\rangle=|\mathrm{gnd}\rangle\), i.e. they are eigenstates of all of the star and plaquette operators with eigenvalue \(+1\). With periodic boundary conditions, the ground states are robustly four-fold degenerate.
Figure 5: (a) Physical qubits live on the edges of the black squares with two qubits per unit cell. We consider an \(N\times N\) lattice with periodic boundary conditions. Star and plaquette terms couple nearest-neighbor sites. The black and green dots represent sites along \(g_{x}\) and \(g_{y}\) respectively. (b) Eigenstates are labeled by an \(N^{2}\)-dimensional vector \(\vec{k}\) which labels the excited stars (blue dots); excited states are constructed by acting with \(Z\) operators (red lines) on the ground state.
The model has a gauge symmetry, \([H_{tc},A_{s}]=[H_{tc},B_{p}]=[A_{s},B_{p}]=0\), which implies that eigenstates are labeled by which star and plaquette terms are violated. Note that star and plaquette excitations must come in pairs, i.e. there is no way to excite a single star without exciting another star too.
To simplify our analysis, we focus on the case where only star excitations are allowed, i.e. none of the plaquettes are excited. (Our main conclusions will hold in the presence of both types of excitations.) The eigenvalues of \(B_{p}\) are thus good quantum numbers, and we focus on the gauge sector where \(B_{p}=+1\), i.e. a subspace which contains the ground states of the toric code Hamiltonian. The reduced Hilbert space will consist of states which have an even number of star excitations, and are labeled by:
\[\left|0,0;\vec{0}\right\rangle\sim\prod_{i}\left(1+A_{i}\right)\left|\mathrm{vac}\right\rangle, \tag{28}\] \[\left|r_{x},r_{y};\vec{0}\right\rangle=(g_{x})^{r_{x}}(g_{y})^{r_{y}}\left|0,0;\vec{0}\right\rangle,\] (29) \[\left|r_{x},r_{y};\vec{k}\right\rangle=\Big{(}\prod Z\Big{)}_{\vec{k}}\left|r_{x},r_{y};\vec{0}\right\rangle, \tag{30}\]
where the \(A_{i}\) represent the different stars, \(Z_{j}\left|\mathrm{vac}\right\rangle=\left|\mathrm{vac}\right\rangle,\forall j\); \(r_{x},r_{y}\in\{0,1\}\) label different topological sectors; \(g_{x/y}=\prod_{\mathrm{hor/vert}}X\) is a product of \(X\) operators along a string (on the dual lattice) which wraps around the horizontal/vertical direction of the torus. (See Fig. 5.) The states that are labeled by \((r_{x},r_{y};\vec{0})\) are orthogonal ground states of \(H_{tc}\), while the states that are labeled by \((r_{x},r_{y};\vec{k})\) are excited states; \(\vec{k}\) is an \(N^{2}\)-dimensional vector which labels the excited stars with \(1\) and non-excited stars with \(0\). Excited eigenstates are defined by applying strings of \((\prod Z)\) operators on the ground state via the smallest number of \(Z\) operators.
Consider the following dissipators at each vertex (star) of the lattice:
\[L_{m}^{l} =\sqrt{\kappa}Z_{m,l}(1-A_{m})/2, \tag{31}\] \[L_{m}^{r} =\sqrt{\kappa}Z_{m,r}(1-A_{m})/2,\] (32) \[L_{m}^{t} =\sqrt{\kappa}Z_{m,t}(1-A_{m})/2,\] (33) \[L_{m}^{b} =\sqrt{\kappa}Z_{m,b}(1-A_{m})/2, \tag{34}\]
where \(l,r,t,b\) label the left, right, top and bottom legs of the vertex at \(m\). The dissipators \(L_{m}^{l/r/t/b}\) first check that the star \(A_{m}\) is excited; if so, then they will flip one of its four connecting bonds such that \(A_{m}\) becomes de-excited (and its neighboring star stabilizer will flip). This type of dynamics will cause the star excitations to perform a random walk on the lattice until pairs eventually meet up and annihilate each other. This model has been studied as a way to dissipatively prepare the ground state of the toric code on a Rydberg atom simulator [52, 46]. We also consider the effects of uniform dephasing that acts on each physical qubit:
\[L_{i}=\sqrt{\Delta_{z}}Z_{i}. \tag{35}\]
The steady state of the Lindbladian is the thermal state of the 2D toric code:
\[\rho_{ss}=\frac{e^{-\beta H_{tc}}}{\mathrm{Tr}[e^{-\beta H_{tc}}]},\qquad\beta =\frac{1}{4}\ln\left[\frac{2\kappa+\Delta_{z}}{\Delta_{z}}\right], \tag{36}\]
with the effective temperature of the model set by the relative ratio of the correction rate to the dephasing rate. The transition rates between different stabilizer configurations obey detailed balance with respect to the effective temperature \(\beta^{-1}\).
### Lack of protection
The 2D toric code does not have a thermal phase transition, i.e. the critical properties are strictly a zero-temperature effect [20, 5, 8]. (This is analogous to the lack of thermal stability of the quantum phase transition in the 1D Ising model.) Intuitively, this is because there is no extensive energy barrier between degenerate ground states, i.e. one ground state can evolve to another via a single anyonic string excitation that costs a constant amount of energy.
Let us describe the thermal steady state of this model. Within the \(B_{p}=+1\) gauge sector, we can partition the subspace into different topological sectors. We define the projection operators:
\[P_{r_{x},r_{y}}=\sum_{\vec{k}}\left|r_{x},r_{y};\vec{k}\right\rangle\left\langle r _{x},r_{y};\vec{k}\right|, \tag{37}\]
where \(r_{x},r_{y}\in\{0,1\}\); \(P_{r_{x},r_{y}}\) projects states into the topological sector \(r_{x},r_{y}\)5. Note that all of the dissipators will commute with \(P_{r_{x},r_{y}}\), thus these projectors are strong symmetries of the Lindbladian [24]. There exists a basis where the Lindbladian can be block diagonalized into \(4^{2}=16\) different sectors:
Footnote 5: One can construct global string operators similar to \(g_{x/y}\), which consist of a product of \(Z\) operators along a vertical or horizontal string on the lattice. The projector \(P_{r_{x},r_{y}}\) projects onto the eigenstates of the \(Z\) global string operators.
\[\mathcal{L}=\mathrm{Diag}[\mathcal{L}_{0,0},\mathcal{L}_{0,1},\mathcal{L}_{0,2 },\ldots\mathcal{L}_{3,3}] \tag{38}\]
where the numbers \(0\) to \(3\) label four different topological sectors of the bras and kets according to the convention: \((r_{x}=0,r_{y}=0)\to 0,(r_{x}=1,r_{y}=0)\to 1,(r_{x}=0,r_{y}=1)\to 2,(r_{x}=1,r_{y}=1)\to 3\). The Lindbladian \(\mathcal{L}_{0,0}\) acts on operators where both ket and bra belong to the same topological sector \(r_{x}=0,r_{y}=0\). \(\mathcal{L}_{0,1}\) acts on operators where the ket belongs to sector \(r_{x}=0,r_{y}=0\), while the bra belongs to sector \(r_{x}=1,r_{y}=0\).
The effect of noise on the off-diagonal sectors such as \(\mathcal{L}_{0,1}\) can be estimated using an argument based on anyon random walks [53, 20]. To leading order in the perturbation, the noise creates a single pair of excitations above the steady state. After some time, the excitations will either annihilate with their partner locally or they get separated by a distance \(N/2\), leading to a global loop operator that decoheres the state. The probability that a 2D square lattice random walker does not return to its initial position after \(L\) steps scales like \(1/\ln(L)\). We can estimate that the probability of decoherence scales as \(\Delta_{z}/\ln(N/2)\). We therefore expect that for relatively small system size, the off-diagonal sectors are gapped by the noise
with an eigenvalue \(O(\Delta_{z}N^{2}/\ln(N/2))\) for small \(\Delta_{z}\). Indeed, a more careful analysis for generic \(N\) reveals that the gap scales with \(O(\Delta_{z})\)[20; 53; 8].
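The logarithmic factor in this estimate is easy to check with a few lines of Monte Carlo. The sketch below, with walker counts and step numbers chosen arbitrarily by us, estimates the probability that a 2D random walker has not revisited the origin after \(L\) steps and compares it to \(1/\ln(L)\); only the slow logarithmic decay is meaningful here, not the prefactor.

```python
import numpy as np

rng = np.random.default_rng(1)
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def no_return_fraction(n_steps, n_walkers=20_000):
    """Fraction of 2D random walkers that never revisit the origin within n_steps."""
    count = 0
    for _ in range(n_walkers):
        pos = np.zeros(2, dtype=int)
        returned = False
        for _ in range(n_steps):
            pos += moves[rng.integers(4)]
            if pos[0] == 0 and pos[1] == 0:
                returned = True
                break
        count += not returned
    return count / n_walkers

for L in (10, 100, 1000):
    print(L, no_return_fraction(L), 1 / np.log(L))
```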
We will provide numerical evidence in the next section that the steady state structure will be the following for any non-zero temperature:
\[\rho_{ss}=\sum_{i}\frac{e^{-\beta E_{i}}}{\mathcal{Z}}\left(\sum_{r_{x},r_{y}=0 }^{1}c_{r_{x},r_{y}}|r_{x},r_{y};E_{i}\rangle\langle r_{x},r_{y};E_{i}|\right) \tag{39}\]
where \(E_{i}\) labels the energy, \(|r_{x},r_{y};E_{i}\rangle\) is the corresponding eigenstate in topological sector \(r_{x},r_{y}\), and \(\sum_{r_{x},r_{y}}c_{r_{x},r_{y}}=1\). We therefore find that coherences between different topological sectors are not stable. This implies that only a classical bit can be stored in the steady state. We also note that this classical bit structure is an artifact of imposing the gauge symmetry, i.e. the presence of bit flips (\(L\sim X\)) will remove all strong symmetries, thus reducing the classical bit to a unique thermal steady state.
### Numerics
Suppose we initialize our system in a superposition of ground states in different topological sectors: \(|\psi\rangle=(|0,0;E_{0}\rangle+|1,0;E_{0}\rangle)/\sqrt{2}\). We then quench the system with the Lindbladian described above for a time \(t\) which is long enough for the system to settle into its steady state. Finally, we apply a single-shot decoder which brings the state back to the code space via the channel superoperator:
\[\mathcal{E}(\rho)=\sum_{\mathbf{r}}F_{\mathbf{r}}\rho F_{\mathbf{r}}^{\dagger },\qquad F_{\mathbf{r}}=U_{\mathbf{r}}P_{\mathbf{r}} \tag{40}\]
\(P_{\mathbf{r}}=\Pi_{j}(1+(-1)^{r_{j}}A_{j})/2\) is a projector onto a particular configuration of excited anyons (stars), and \(U_{\mathbf{r}}\) is a minimal-weight matching unitary operator which sends the state back to the code space by applying a minimal number of \(Z\) operators which de-excite all anyons. (See Fig. 5.) The initial and final state overlap is plotted in Fig. 6. We find that the overlap saturates to a value of 0.5, which suggests that any non-zero dephasing is enough to destroy coherences between ground states. There is no critical temperature below which coherences are preserved in the thermodynamic limit (apart from exactly at \(\beta=\infty\) when there is no dephasing).
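For completeness, here is a sketch of the pairing step of such a decoder: given the positions of the excited stars, it pairs them up so that the total (periodic) Manhattan distance is small, using networkx's maximum-weight matching on negated distances. The function name and the use of networkx are our choices; a production decoder would typically use a dedicated Blossom-based matching library, and it would additionally apply the \(Z\) strings connecting each matched pair, which we omit here.

```python
import networkx as nx

def pair_anyons(excited, N):
    """Pair up excited star positions on an N x N torus so that the total
    periodic Manhattan distance is (approximately) minimal."""
    def dist(a, b):
        dx = min(abs(a[0] - b[0]), N - abs(a[0] - b[0]))
        dy = min(abs(a[1] - b[1]), N - abs(a[1] - b[1]))
        return dx + dy

    G = nx.Graph()
    for i, a in enumerate(excited):
        for j, b in enumerate(excited):
            if i < j:
                # maximum-weight matching on negated distances = minimum-weight pairing
                G.add_edge(i, j, weight=-dist(a, b))
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return [(excited[i], excited[j]) for i, j in matching]

print(pair_anyons([(0, 0), (0, 2), (5, 5), (5, 7)], N=8))
```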
We can also notice the difference between the 2D toric code and the 2D Ising model by varying the noise rate \(\Delta\) for a fixed (but finite) noise time \(t\), then applying the decoder \(\mathcal{E}\). This is shown in Fig. 7. For the Ising model, we find that the logical error rate gets suppressed as we increase the system size. This is not true for the 2D toric code.
## V 4D toric code
We have studied a local dissipative model that prepares the thermal state of the 2D toric code, and argued that in the presence of bit flips and phase flips the model has a unique thermal steady state. We will now construct a similar model for the 4D toric code, and suggest that it is stable against both bit flips and phase flips.
The 4D toric code can be understood as the hypergraph product of two 2D Ising models [54]; intuitively, one of the Ising models protects against bit flips while the other protects against phase flips. We describe salient features of the model, following the description found in Ref. [13]. For every vertex of an \(N\times N\times N\times N\) lattice, one can associate 4 edges, 6 faces, and 4 cubes. (For a 3D lattice, every vertex has 3 edges, 3 faces, and 1 cube.) Physical qubits live on each face of the lattice, so there are \(6N^{4}\) total physical qubits. There are two types of stabilizers \(S_{e},S_{c}\) which are associated with the edge and cube degrees of freedom respectively. Each physical qubit appears in four of the \(S_{e}\) stabilizers and four of the \(S_{c}\) stabilizers (similar to the 2D Ising model). In the 4D toric code, unsatisfied edge and cube stabilizers must form a closed domain wall, which is ultimately responsible for the thermal stability of the 4D toric code.
Since it is difficult to gain intuition in 4D space, it is useful to reformulate things in a more algebraic way. Each vertex of the lattice can be associated with a four-component vector: \(\vec{v}=[v_{0},v_{1},v_{2},v_{3}]\) where \(v_{i}\in[1,N]\). The edges, faces, and
Figure 7: (a) 2D toric code overlap as a function of dephasing rate for a fixed quench time \(t=3/\kappa\). For a fixed error rate and quench time, the logical error rate does not improve with system size. (b) 2D Ising model overlap for a fixed quench time \(t=200/\kappa\). For a fixed error rate and quench time, the logical error rate improves with system size in the symmetry-broken phase. Plots are averaged over \(10^{4}\) trajectories.
Figure 6: The overlap between the initial and final states for the protocol given in the main text with \(\Delta_{z}/\kappa=0.01\) (\(\beta=1.3\)). Unlike the Ising model, there is no “threshold” behavior, i.e. any non-zero temperature causes coherences to decay. \(t=20/\kappa\) and we average over \(10^{3}\) trajectories.
cubes corresponding to a particular vertex are associated with a four-component binary vector:
\[\hat{e},\hat{f},\hat{c}\in\{(x_{0},x_{1},x_{2},x_{3})\,|\,x_{i}\in\{0,1\}\}, \tag{41}\]
with edges \(\hat{e}\), faces \(\hat{f}\), and cubes \(\hat{c}\) satisfying the condition \(\sum_{i}x_{i}\) equal to \(1,2,3\) respectively. In other words, a face is defined by two edges, and a cube is defined by three edges. There are indeed 4 edges, 6 faces, and 4 cubes per vertex. Each physical qubit is identified with a tuple \(\vec{v},\hat{f}\) which specifies both the face orientation \(\hat{f}\) and the vertex \(\vec{v}\).
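This counting is easy to verify by brute force; the snippet below simply enumerates the binary orientation vectors attached to a vertex and sorts them by weight.

```python
from itertools import product

# A cell attached to a vertex is a binary 4-vector; its weight gives its dimension.
cells = [v for v in product((0, 1), repeat=4) if sum(v) > 0]

edges = [v for v in cells if sum(v) == 1]
faces = [v for v in cells if sum(v) == 2]
cubes = [v for v in cells if sum(v) == 3]

print(len(edges), len(faces), len(cubes))  # -> 4 6 4
```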
The stabilizers associated with edges and cubes of the lattice are:
\[S_{\vec{v},\hat{e}} =\bigotimes_{\hat{f}\supset\hat{e}}X_{\vec{v},\hat{f}}\otimes X_{\vec{v}+\hat{e}-\hat{f},\hat{f}}, \tag{42}\] \[S_{\vec{v},\hat{c}} =\bigotimes_{\hat{f}\subset\hat{c}}Z_{\vec{v},\hat{f}}\otimes Z_{\vec{v}+\hat{c}-\hat{f},\hat{f}}. \tag{43}\]
Each edge \(\hat{e}\) appears in 3 faces \(\hat{f}\), and each face appears in 3 cubes \(\hat{c}\), hence both stabilizers are a product of 6 Pauli operators. Note that a particular operator \(X_{\vec{v},\hat{f}}\) appears in four different stabilizers \(S_{\vec{v},\hat{e}}\) and that the stabilizers commute with each other. The 4D toric code Hamiltonian reads:
\[H_{4d}=-\sum_{\vec{v},\hat{e}}S_{\vec{v},\hat{e}}-\sum_{\vec{v},\hat{c}}S_{\vec{v},\hat{c}}. \tag{44}\]
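As a sanity check on this bookkeeping, the sketch below builds the set of face qubits in the support of an edge stabilizer (the six faces containing the edge) and of a cube stabilizer (the six faces bounding the cube), and verifies that any \(X\)-type edge stabilizer overlaps any \(Z\)-type cube stabilizer on an even number of qubits, so the two families commute. The helper names and the lattice size are ours.

```python
from itertools import product
import numpy as np

N = 5  # linear size of the periodic 4D lattice

def add(v, w): return tuple((a + b) % N for a, b in zip(v, w))
def sub(v, w): return tuple((a - b) % N for a, b in zip(v, w))
def contains(big, small): return all(b >= s for b, s in zip(big, small))

dirs = [tuple(int(i == k) for i in range(4)) for k in range(4)]   # edge orientations
faces = [tuple(a + b for a, b in zip(e1, e2))
         for i, e1 in enumerate(dirs) for e2 in dirs[i + 1:]]     # face orientations
cubes = [tuple(1 - x for x in e) for e in dirs]                   # cube orientations

def edge_support(v, e):
    """Face qubits acted on (with X) by the edge stabilizer S_{v,e}."""
    return {q for f in faces if contains(f, e)
            for q in ((v, f), (add(v, sub(e, f)), f))}            # faces containing the edge

def cube_support(v, c):
    """Face qubits acted on (with Z) by the cube stabilizer S_{v,c}."""
    return {q for f in faces if contains(c, f)
            for q in ((v, f), (add(v, sub(c, f)), f))}            # faces bounding the cube

rng = np.random.default_rng(2)
for _ in range(200):
    v1, v2 = (tuple(rng.integers(N, size=4)) for _ in range(2))
    e, c = dirs[rng.integers(4)], cubes[rng.integers(4)]
    assert len(edge_support(v1, e) & cube_support(v2, c)) % 2 == 0  # even overlap -> commute

print("qubits per stabilizer:", len(edge_support((0,) * 4, dirs[0])))  # -> 6
```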
As before, for simplicity we restrict ourselves to only \(Z\)-dephasing errors which cause excitations of the \(S_{\vec{v},\hat{e}}\) stabilizers. We work in the gauge sector where all cube stabilizers are satisfied: \(S_{\vec{v},\hat{c}}=+1\). The following states span this subspace:
\[\ket{\vec{0};\vec{0}} \sim\prod_{i}(1+S_{\vec{v}_{i},\hat{e}_{i}})\ket{\text{vac}}, \tag{45}\] \[\ket{\vec{r};\vec{0}} =\prod_{i}g_{\hat{f}_{i}}^{r_{i}}\ket{\vec{0};\vec{0}}, \tag{46}\] \[\ket{\vec{r};\vec{k}} =\left(\prod Z\right)_{\vec{k}}\ket{\vec{r};\vec{0}}, \tag{47}\]
where the product on \(i\) runs over all edge stabilizers. The vector \(\vec{r}\) has six components that are either 0 or 1. We define 6 logical operators \(g_{\hat{f}_{i}}\), one for each face direction. They read:
\[g_{\hat{f}_{i}}=\bigotimes_{n,m=1}^{N}X_{n\hat{e}_{3}+m\hat{e}_{4},\hat{f}_{i}}, \tag{48}\]
where \(\hat{e}_{3}+\hat{e}_{4}=(1,1,1,1)-\hat{f}_{i}\). These operators commute with the stabilizers and relate states that belong to the \(2^{6}=64\) different topological sectors.
We now describe thermal dissipators of the 4D toric code that are analogous to the dissipators of the 2D Ising model. They read:
\[L^{(4)}_{\vec{v},\hat{f}} =\sqrt{\kappa}\,Z_{\vec{v},\hat{f}}\,P^{-}_{\vec{v},\hat{e}_{1}}P^{-}_{\vec{v},\hat{e}_{2}}P^{-}_{\vec{v}+\hat{f}-\hat{e}_{1},\hat{e}_{1}}P^{-}_{\vec{v}+\hat{f}-\hat{e}_{2},\hat{e}_{2}}, \tag{49}\] \[L^{(3)}_{\vec{v},\hat{f}} =\sqrt{\kappa}\,Z_{\vec{v},\hat{f}}\,P^{+}_{\vec{v},\hat{e}_{1}}P^{-}_{\vec{v},\hat{e}_{2}}P^{-}_{\vec{v}+\hat{f}-\hat{e}_{1},\hat{e}_{1}}P^{-}_{\vec{v}+\hat{f}-\hat{e}_{2},\hat{e}_{2}}, \tag{50}\] \[L^{(2)}_{\vec{v},\hat{f}} =\sqrt{\tilde{\kappa}}\,Z_{\vec{v},\hat{f}}\,P^{+}_{\vec{v},\hat{e}_{1}}P^{+}_{\vec{v},\hat{e}_{2}}P^{-}_{\vec{v}+\hat{f}-\hat{e}_{1},\hat{e}_{1}}P^{-}_{\vec{v}+\hat{f}-\hat{e}_{2},\hat{e}_{2}}, \tag{51}\]
where \(P^{\pm}_{\vec{v},\hat{e}}=(1\pm S_{\vec{v},\hat{e}})/2\), and we have used the convention \(\hat{e}_{1}+\hat{e}_{2}=\hat{f}\). We also consider the presence of \(Z\) dephasing on each face: \(L^{\prime}=\sqrt{\Delta_{z}}Z_{\vec{v},\hat{f}}\). For \(\tilde{\kappa}=\sqrt{\Delta_{z}\kappa+\Delta_{z}^{2}}-\Delta_{z}\) the steady state is the thermal state of the 4D toric code:
\[\rho_{ss}=\frac{e^{-\beta H_{4d}}}{\text{Tr}[e^{-\beta H_{4d}}]},\qquad\beta=\frac{1}{8}\ln\left[\frac{\kappa+\Delta_{z}}{\Delta_{z}}\right]. \tag{52}\]
### Steady-state qubit
Constructing a single-shot decoder for the 4D toric code is more challenging than for the 2D models studied in this work. (Ref. [54] provides a description of a local decoder for the 4D toric code, but such schemes can get "stuck" in sheet-like configurations that are not in the code space.) It has analytically been shown that the 4D toric code is capable of storing quantum information in its thermal state in the low-temperature phase [5; 9], and therefore we expect that the steady state of our local Lindblad model at low-temperature will assume the form
\[\rho_{ss}=\sum_{i}\frac{e^{-\beta E_{i}}}{\mathcal{Z}}\begin{pmatrix}|E^{0}_{i}\rangle&|E^{1}_{i}\rangle\end{pmatrix}\begin{pmatrix}|c_{0}|^{2}&c_{0}c_{1}^{*}\\ c_{0}^{*}c_{1}&|c_{1}|^{2}\end{pmatrix}\begin{pmatrix}\langle E^{0}_{i}|\\ \langle E^{1}_{i}|\end{pmatrix}, \tag{53}\]
for \(|c_{0}|^{2}+|c_{1}|^{2}=1\), where the superscripts label corresponding eigenstates in the two logical sectors, i.e. coherences and populations between different topological sectors are protected. Similar thermal dissipators to the ones in Eqs. (49)-(51) can be constructed to protect against logical bit flips (\(L\sim X\)). Since the zero-temperature bath superoperator responsible for protection
Figure 8: Extracted autocorrelation time of the mean stabilizer \(\bar{S}_{e}\) for the 4D toric code on a \(5\times 5\times 5\times 5\) lattice with periodic boundary conditions and \(\kappa=1\). The lattice contains 3750 spins and the data is collected from 100 trajectories, each containing \(10^{5}\) global time steps. (a) With detailed balance, a critical point is close to \(\Delta_{z}\approx 0.0006\). (b) With the majority-vote rule, a critical point is close to \(\Delta_{z}\approx 0.0026\).
against \(X\) commutes with the corresponding superoperator that protects against \(Z\), we expect the noiseless subsystem to protect against both sources of noise. We also note that dynamical simulations using a "Toom's rule" model that is very similar to our local Lindbladian have demonstrated an exponential protection against both local bit flips and phase flips [13], again corroborating the description above.
We can observe signatures of the transition by considering the autocorrelation function for the stabilizers of the model. We define the mean stabilizer autocorrelation function as:
\[\chi(t)=\text{Tr}[\bar{S}_{e}e^{\mathcal{L}t}(\bar{S}_{e}\rho_{ss})]-\text{Tr}[\bar{S}_{e}\rho_{ss}]^{2}, \tag{54}\]
where \(\bar{S}_{e}=(\sum_{\vec{v},\hat{e}}S_{\vec{v},\hat{e}})/4N^{4}\) is the average of the edge stabilizers on the lattice7. In analogy with the 2D Ising model, we expect this correlator to decay exponentially in time away from the critical point. In Fig. 8 we plot the extracted correlation time as a function of the error rate for both (a) the case of a thermalizing Lindbladian, and (b) the case of the majority rule Lindbladian (\(\tilde{\kappa}=\kappa\)). We find that indeed both models exhibit a diverging correlation time at a critical error rate. This is consistent with a low-temperature regime that passively protects a qubit.
Footnote 7: We choose the autocorrelator of the average edge stabilizers rather than a single stabilizer since the numerical peak at the critical point is sharper for the former.
## VI Discussion and outlook
Most studies in the field of passive quantum error correction identify dissipative processes that only provide first (or \(n\)th) order protection against noise. Such schemes will require some form of active error correction (i.e. syndrome measurements) to eventually reach fault tolerance. In this work, we have focused on identifying local Lindbladians for spin systems that can _exponentially_ protect against local noise. We suggest that such models are associated with nontrivial states of matter, since the latter are characterized by robust degeneracies in the steady state of the Lindbladian. \(\mathbb{Z}_{2}\) symmetry breaking appears useful to protect a classical bit, while intrinsic topological order protects a qubit.
An important question is whether a Lindbladian can host a phase with intrinsic topological order in less than 4D. The area of driven-dissipative phase transitions might provide a route which has hitherto been unexplored (see Appendix F). It should be noted that many aspects of symmetry-breaking driven-dissipative phase transitions closely resemble their thermal counterparts (e.g. universality classes and lower critical dimensions) [55, 30], so it is unclear whether adding a quantum drive can produce a topological transition in less than 4D. Nevertheless, this is a direction that warrants further attention.
While we have focused on topological steady state degeneracy as a good indicator of topological order in open quantum systems, it would be interesting to see how this compares with other recent efforts to define topological order in a mixed state [56, 57, 58, 59, 60, 61, 62, 63].
It is known that quantum phase transitions come in yet another flavor: symmetry-protected topological (SPT) phase transitions. Notable examples include topological insulators [64, 65] and the Haldane phase of spin chains [66, 67]. Various generalizations of open (Lindblad) SPT matter have recently been put forward [68, 69, 70]. However, none of these studies have found robust zero-decay-rate edge modes in the Lindbladian that survive the presence of local (symmetric) perturbations. This may be because standard SPT phases are not thermally stable. Recent work [69, 71, 81] has suggested that 1-form symmetries must be imposed to obtain a thermally-stable SPT phase. Perhaps such systems host a protected classical bit in the presence of both bit and phase flips, in analogy with the 2D Ising model.
Is 4D necessary to obtain a passive quantum memory? In a recent work [42], we have suggested that it is possible to achieve such a model in 2D by creating an Ising model out of _bosonic_ cat qubits, i.e. going beyond the two-level-system approximation. (An Ising interaction can be generated by placing a Josephson junction between cavities, see SM 5 in [42].) The driven-dissipative cat code [25] is a bosonic qubit that spontaneously breaks \(\mathbb{Z}_{2}\) photon parity symmetry and satisfies the definitions of a phase as outlined in Sec. II [34, 27]. This is another example of a passive classical bit that is encoded in the coherent states: \(|\pm\alpha\rangle\). The 2D Ising-cat model [42] thus breaks two separate \(\mathbb{Z}_{2}\) symmetries (i.e. a photon parity symmetry within each cavity and an Ising parity symmetry of the lattice), one of which protects against bit flips and another which protects against phase flips. An interesting open question remains to find other bosonic lattice systems that have this property, and to construct experimental proposals to realize this model on current hardware platforms.
_Acknowledgements.--_We sincerely thank Victor Albert, Alexey Gorshkov, and Oles Shtanko for useful discussions. Y.-J.L acknowledges support from the Max Planck Gesellschaft (MPG) through the International Max Planck Research School for Quantum Science and Technology (IMPRS-QST) and the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. S.L. was supported by the NIST NRC Research Postdoctoral Associateship.
_Note added.--_We note an independent work [82] that comes to similar conclusions regarding the 2D toric code, and studies a 3D toric code model with classical steady state degeneracy.
## Appendix A Qubit steady state in a finite system
In this paper, we focus on qubit steady state structures that only emerge in the thermodynamic limit of a nontrivial phase. Here, we study a finite system that hosts a qubit steady state. We show that it does not passively protect against local \(X\) or \(Z\) errors, and expect this behavior to be generic.
Consider a Hilbert space of two qubits. We consider a single jump operator: \(L=X_{2}(1-Z_{1}Z_{2})/2\). This can be
viewed as a single correction dissipator for the Ising model in the main text. The model has a qubit steady-state structure: any state of the form \(|\psi\rangle=c_{0}|\uparrow\uparrow\rangle+c_{1}|\downarrow\downarrow\rangle\) is a steady state of the model. This qubit is protected by a non-Abelian strong symmetry [40]: \([L,U_{1}]=[L,U_{2}]=0\) where \(U_{1}=Z_{1},U_{2}=X_{1}X_{2},[U_{1},U_{2}]\neq 0\). This system is _not_ protected against noise in either basis, i.e. jump operators of the form \(L\sim Z_{1},Z_{2},X_{1}\) will each cause the qubit to decohere since these jumps do not commute with both symmetries. (Note that for the Ising model in the main text, the qubit is protected against all local \(X\) errors in the thermodynamic limit.)
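This two-qubit statement is easy to verify numerically. The sketch below integrates the master equation with a crude Euler step (the step size and the dephasing rate are arbitrary choices of ours) and tracks the coherence \(\langle\uparrow\uparrow|\rho|\downarrow\downarrow\rangle\): it is preserved under the correction jump alone and decays once a \(Z_{1}\) dephasing jump is added.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
X2, Z1, Z2 = np.kron(I2, X), np.kron(Z, I2), np.kron(I2, Z)

L_corr = X2 @ (np.eye(4) - Z1 @ Z2) / 2   # correction jump L = X_2 (1 - Z_1 Z_2)/2
L_deph = np.sqrt(0.2) * Z1                # extra dephasing on qubit 1 (illustrative rate)

def dissipator(L, rho):
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
uu, dd = np.kron(up, up), np.kron(dn, dn)
psi = (uu + dd) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())

dt, T = 0.01, 20.0
for jumps in ([L_corr], [L_corr, L_deph]):
    rho = rho0.copy()
    for _ in range(int(T / dt)):          # crude Euler integration of the master equation
        rho = rho + dt * sum(dissipator(L, rho) for L in jumps)
    print(len(jumps), "jump(s): |<uu|rho|dd>| =", round(abs(uu @ rho @ dd), 4))
```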
More generally (to our best knowledge), finite systems require both logical \(\bar{X}\) and \(\bar{Z}\) operators (\(U_{1},U_{2}\) above) to commute with the dissipation operators in order to have a qubit steady state. Generic noise in the \(X,Z\) basis will necessarily anticommute with one of the logical operators leading to destruction of the noiseless subsystem.
In other words, to get a qubit steady state in a finite system requires us to impose two strong symmetry constraints on the noise. For \(\mathbb{Z}_{2}\) strong symmetry breaking, we only need to impose one strong symmetry constraint to obtain a qubit steady state (in the thermodynamic limit). For intrinsic topological order, we do not have any symmetry constraints for a qubit.
## Appendix B Spontaneous symmetry breaking in a Lindbladian
We briefly review the symmetry structure in a Lindbladian in the presence of "strong" and "weak" symmetries [24] and the steady state solutions in a symmetry-broken phase [78], focusing on the case of \(\mathbb{Z}_{2}\). A Lindbladian is said to have a strong \(\mathbb{Z}_{2}\) symmetry if \([\mathcal{L},\mathcal{P}_{l}]=[\mathcal{L},\mathcal{P}_{r}]=0\) where \(\mathcal{P}_{l}(\rho)=P\rho,\ \mathcal{P}_{r}(\rho)=\rho P\) are superoperators that act on the left and right of an operator, and \(P\) is a parity operator: \(P^{2}=\mathbb{I}\). If all of the microscopic dissipators of a Lindbladian commute with the parity operator: \([L_{j},P]=0,\forall j\), then \(\mathcal{L}\) will have a strong symmetry. In this case the Lindbladian can be block diagonalized into four symmetry sectors
\[\mathcal{L}=\text{Diag}[\mathcal{L}_{++},\mathcal{L}_{--},\mathcal{L}_{+-}, \mathcal{L}_{-+}]. \tag{11}\]
Each sector acts on operators that are eigenoperators of \(\mathcal{P}_{l}\) and \(\mathcal{P}_{r}\), with eigenvalue \(\pm 1\). The sectors \(\mathcal{L}_{++}\) and \(\mathcal{L}_{--}\) contain operators with nonzero trace, and therefore those sectors must each have an exact eigenvalue of zero, corresponding to a steady state. In a symmetry-broken phase, the off-diagonal sectors \(\mathcal{L}_{+-}\) and \(\mathcal{L}_{-+}\) also acquire an eigenvalue of zero but only in the thermodynamic limit. This leads to enough degrees of freedom to store a qubit in the steady state, i.e. a noiseless subsystem.
A Lindbladian is said to have a weak \(\mathbb{Z}_{2}\) symmetry if \([\mathcal{L},\mathcal{P}]=0\) where \(\mathcal{P}(\rho)=P\rho P\) is a parity superoperator that acts on bras and kets simultaneously. Physically, this means that the symmetry \(P\) is conserved when the system and its environment are both taken into account. This expression can be satisfied even if some of the dissipators anticommute with the parity, and hence it is a weaker condition. In this case the Lindbladian can be block diagonalized into two symmetry sectors
\[\mathcal{L}=\text{Diag}[\mathcal{L}_{+},\mathcal{L}_{-}]. \tag{12}\]
Each sector acts on operators that are eigenoperators of \(\mathcal{P}\) with eigenvalue \(\pm 1\). Only \(\mathcal{L}_{+}\) acts on traceful operators, hence a weak symmetry by itself does not imply multiple steady states. However, in a symmetry-broken phase, the sector \(\mathcal{L}_{-}\) also acquires an eigenvalue of zero in the thermodynamic limit. This leads to enough degrees of freedom to store a classical bit in the steady state.
## Appendix C Unraveling the dynamics of the Lindbladian
In this section we describe some generic features of the Lindblad models considered in the main text. Denote by \(S\) the set of states \(|\phi\rangle\!\langle\phi^{\prime}|\in\mathcal{H}\otimes\mathcal{H}\) where \(|\phi\rangle\) and \(|\phi^{\prime}\rangle\) are eigenstates with the same eigenvalue for all of the stabilizers. Note that this does not imply \(|\phi\rangle=|\phi^{\prime}\rangle\). We also let \(D\) be the set of states \(|\phi\rangle\!\langle\phi^{\prime}|\in\mathcal{H}\otimes\mathcal{H}\) where \(|\phi\rangle\) and \(|\phi^{\prime}\rangle\) differ by at least one stabilizer value. The subspaces spanned by \(S\) and \(D\) form a bipartition of the entire Hilbert space \(\text{Span}(S)\oplus\text{Span}(D)=\mathcal{H}\otimes\mathcal{H}\). It is useful to notice that in all the stabilizer models we considered in the main text, the dynamics are decoupled between the subspaces spanned by \(S\) and \(D\). More precisely, if \(A\in\text{Span}(S)\), then \(e^{\mathcal{L}t}A\in\text{Span}(S)\). The same holds for the set \(D\).
We will now show that the Lindbladians we considered in the main text are generically gapped within the subspace \(\text{Span}(D)\). Suppose \(A\in D\). Recall that the Lindbladians in the main text take a form
\[\mathcal{L}(\rho)=\sum_{s}\mathcal{L}_{s}=\sum_{s}\kappa_{s}\left(L_{s}\rho L _{s}^{\dagger}-\frac{1}{2}\{L_{s}^{\dagger}L_{s},\rho\}\right), \tag{13}\]
for some dissipative rates \(\kappa_{s}\geq 0\). The protection part has jump operators of the form \(L_{s}=U_{s}P_{s}\), where \(U_{s}\) is some Pauli operator and \(P_{s}\) is a projector onto some particular local stabilizer configuration. Let \(\mathcal{L}_{A}=\sum_{s\in C_{A}}\mathcal{L}_{s}\), where \(C_{A}\) is the set of indices for the terms of the protection part for which the stabilizer values in \(A\) mismatch in its bra and ket. Applying \(\mathcal{L}_{A}\) to \(A\), we find that the terms \(L_{s}AL_{s}^{\dagger}\) vanish due to the mismatch of stabilizer values; only the terms \(\{L_{s}^{\dagger}L_{s},A\}=P_{s}A+AP_{s}\propto A\) contribute. Therefore, \(A\) is a right eigenvector of \(\mathcal{L}_{A}\) with a negative eigenvalue. Since \(\mathcal{L}=\mathcal{L}_{A}+\sum_{s\notin C_{A}}\mathcal{L}_{s}\), and \(\sum_{s\notin C_{A}}\mathcal{L}_{s}\) is itself a Lindbladian whose eigenvalues must have a non-positive real part, it follows that \(\mathcal{L}\) must have a gap greater than the gap of \(\mathcal{L}_{A}\). So \(\mathcal{L}\) is gapped in \(\text{Span}(D)\).
One may notice that there exist highly fine-tuned cases where \(P_{s}A+AP_{s}=0\) for all \(s\in C_{A}\). This can happen, for instance, when the domain walls in 2D Ising model or the 4D toric code contain no corners and are straight across the entire system. However, we expect these configurations to be unstable under any non-zero noise, and they will be rapidly destabilized into a mixture consisting of mostly non-fine-tuned configurations.
Next, we will show that the Poissonian unraveling Eq. (25) is valid in \(\text{Span}(S)\). Therefore, the autocorrelation extracted in the main text is relevant for the spectrum of the Lindbladian in \(\text{Span}(S)\). Consider \(A\in\text{Span}(S)\); then any Lindbladian with jump \(L_{x}=P_{x}\), with \(P_{x}\) a projector onto some local stabilizer configuration, will annihilate \(A\). Within the subspace \(\text{Span}(S)\), inserting these "do-nothing" jumps does not change the dynamics. By adding appropriately chosen do-nothing jumps, the Lindbladians in the main text can be made to satisfy \(\sum_{s}\kappa_{s}L_{s}^{\dagger}L_{s}=\sum_{s}\kappa_{s}\). In this case we can define a completely-positive-trace-preserving map \(\Lambda(\rho)=\sum_{s}\kappa_{s}L_{s}\rho L_{s}^{\dagger}/(\sum_{s}\kappa_{s})\) such that
\[\mathcal{L}(\rho)=\left(\sum_{s}\kappa_{s}\right)\left(\Lambda(\rho)-\rho \right). \tag{26}\]
By Taylor expanding the time-evolution operator \(e^{\mathcal{L}t}\) using this relation, we obtain the relation Eq. (25) in the main text.
For the 2D Ising model and the 4D toric code, we have \((\sum_{s}\kappa_{s})=n(\kappa+\Delta)\), where the rates \(\kappa\) and \(\Delta\) (\(\Delta_{x}\) or \(\Delta_{z}\)) are the same as defined in the main text, and \(n\) denotes the number of physical qubits in the system. The channel operator takes the form
\[\Lambda(\rho)=\frac{1}{\kappa+\Delta}\left(\kappa\Lambda_{r}(\rho)+\Delta\Lambda_{e}(\rho)\right). \tag{27}\]
The noise channel \(\Lambda_{e}\) is given by
\[\Lambda_{e}(\rho)=\frac{1}{n}\sum_{i}Z_{i}\rho Z_{i}\;\;\text{or}\;\;\Lambda_ {e}(\rho)=\frac{1}{n}\sum_{i}X_{i}\rho X_{i}, \tag{28}\]
where the index \(i\) sums over all the physical qubits. The protecting channel \(\Lambda_{r}\) is given by
\[\Lambda_{r}(\rho)=\frac{1}{n}\sum_{i}\sum_{m}\left(L_{i}^{(m)}\rho(L_{i}^{(m)} )^{\dagger}+\gamma_{m}P_{i}^{(m)}\rho(P_{i}^{(m)})^{\dagger}\right) \tag{29}\]
where the index \(m\) sums over the different local stabilizer configurations at site \(i\). The jump operators \(L_{i}^{(m)}\) are those defined in Eqs. (9)-(11) and Eqs. (49)-(51) (up to an orientation). If the local stabilizer configuration \(m\) is not contained in the jumps in the main text, then \(L_{i}^{(m)}=0\). The rates \(\gamma_{m}\geq 0\) and the local projectors on stabilizer configuration \(P_{i}^{(m)}\) are chosen such that
\[\sum_{m}\left((L_{i}^{(m)})^{\dagger}L_{i}^{(m)}+\gamma_{m}(P_{i}^{(m)})^{ \dagger}P_{i}^{(m)}\right)=\kappa. \tag{30}\]
Therefore, applying the channel \(\Lambda\) is equivalent to stochastically applying either a correcting step \(\Lambda_{r}\) or a noise step \(\Lambda_{e}\). The two steps are essentially update steps for the stabilizer configuration under the Glauber dynamics. Since the dynamics are only sensitive to the stabilizer configurations, we may use the states \(|\phi\rangle\langle\phi|\in S\) to probe the spectrum of the Lindbladian in \(\text{Span}(S)\); the numerical simulation then becomes essentially classical.
The arguments above imply that we can directly include the "do-nothing" jumps into the definition of the Lindbladian \(\mathcal{L}\). This will not change the dynamics, i.e. the subspace \(\text{Span}(S)\) remains gapped and the Poissonian unraveling Eq. (25) becomes valid over the entire space \(\mathcal{H}\otimes\mathcal{H}\).
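To make the stochastic interpretation of Eq. (27) concrete, here is a classical sketch of one possible simulation loop for the 2D Ising case. The correction rule we use (flip the spin when at least three of its four bonds are unsatisfied, and with probability \(\tilde{\kappa}/\kappa\) when exactly two are) is our reading of the jump operators referred to as Eqs. (9)-(11), which are not reproduced here, so it should be taken as an illustrative assumption rather than the exact rule used in the paper; the lattice size, rates, and step count are also ours.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_channel(spins, n_steps, kappa, delta, kappa_tilde):
    """Discrete-time sketch of the channel Lambda: each step picks a random site
    and applies either a noise flip (prob delta/(kappa+delta)) or a correction move."""
    N = spins.shape[0]
    p_noise = delta / (kappa + delta)
    for _ in range(n_steps):
        x, y = rng.integers(N, size=2)
        nb = (spins[(x + 1) % N, y] + spins[(x - 1) % N, y]
              + spins[x, (y + 1) % N] + spins[x, (y - 1) % N])
        broken = (4 - spins[x, y] * nb) // 2          # number of unsatisfied bonds at the site
        if rng.random() < p_noise:
            spins[x, y] *= -1                          # noise step Lambda_e
        elif broken >= 3 or (broken == 2 and rng.random() < kappa_tilde / kappa):
            spins[x, y] *= -1                          # correction step Lambda_r
    return spins

spins = np.ones((32, 32), dtype=int)
kappa, delta = 1.0, 0.02
kappa_tilde = np.sqrt(delta * kappa + delta**2) - delta
run_channel(spins, 200_000, kappa, delta, kappa_tilde)
print("magnetization:", spins.mean())
```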
## Appendix D Fitting the autocorrelation with the sum of two exponential functions
Here we show additional data supporting the fit of the 2D Ising autocorrelation function using a sum of two exponentials. The autocorrelation is plotted in Fig. 9 for some selected values of \(\Delta_{x}\). It is clear from the autocorrelation that there is more than one time scale in the decay. While a fit of a single exponential function can estimate the dominant decay time, a fit using a sum of two exponential functions gives a better resolution of the different decay time scales.
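The fit itself can be done with a standard least-squares routine. Below is a minimal sketch using scipy's curve_fit on synthetic data standing in for \(\chi(t)\); the amplitudes, time scales, and noise level are arbitrary values chosen by us for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# synthetic autocorrelation data standing in for chi(t)
t = np.linspace(0, 50, 200)
rng = np.random.default_rng(4)
data = two_exp(t, 0.7, 2.0, 0.3, 15.0) + 0.005 * rng.normal(size=t.size)

popt, _ = curve_fit(two_exp, t, data, p0=(0.5, 1.0, 0.5, 10.0))
a1, tau1, a2, tau2 = popt
print("extracted decay times:", sorted((tau1, tau2)))
```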
## Appendix E Equilibration time for steady state sampling
The mixing time for the thermalization (classical Glauber dynamics) of the 2D Ising model has been well studied. In particular, at low-temperature, the "true" mixing time is known to scale exponentially with the system size due to spontaneous symmetry breaking [83, 84, 85, 86]. However, the equilibration time to sample from one of the symmetry-broken equilibrium states starting from any initial state is much less than that.
We can see this convergence explicitly using the channel evolution \(\Lambda\) mentioned in the main text and Appendix C for an \(N\times N\) 2D Ising model with detailed balance. Starting from an ensemble that has an overall spin orientation that is far from equilibrium, the system converges to one of the equilibrium
Figure 9: Autocorrelation functions for the 2D Ising model (\(\kappa=1\)), (a) the detailed balanced case and (b) the majority-vote case. The plot is in log scale on the y-axis. The black lines are obtained by fitting a sum of two exponential functions. Time is in units of \(\delta t=(\kappa+\Delta_{x})^{-1}\).
states rapidly, i.e. the convergence is faster than polynomial in \(t\), and the convergence time grows slower than linearly with system size \(N\). In Fig. 10 we plot the convergence of the magnetization as a function of time for various system sizes. For Glauber dynamics in classical spin systems, the mixing time generally grows at least logarithmically with \(N\)[87].
For the initial state with a completely random spin orientation (infinite temperature state), the convergence remains fast in time but the time it takes to relax appears to grow linearly or quadratically with \(N\) (Fig. 11). For a gapped primitive, reversible Lindbladian, the mixing time is \(O(N^{2})\)[88; 89], which is consistent with our numerics. We expect the equilibration time to be similar in the case of 4D toric code due to the analogous domain-wall-type dynamics.
## Appendix F Connection to driven-dissipative phase transitions
We have focused on thermal phase transitions in this work. While thermal phase transitions are caused due to a competition between energy and entropy, it is known that dissipative systems can undergo _non-equilibrium_ phase transitions which arise due to a different mechanism: The competition between a quantum coherent drive and dissipation. These are called _driven-dissipative_ phase transitions [27; 29; 30; 31; 32; 33; 34; 35; 36]. The dynamics of such systems is more "quantum" in the sense that we need to simulate the full quantum Hilbert space within the trajectory approach (unlike the thermal transitions above which are efficiently simulable on a classical computer). To our best knowledge, all examples of driven-dissipative phase transitions arise due to spontaneous symmetry breaking. Is it possible to achieve a driven-dissipative topological phase transition? And can this be done in less than 4D? Here we briefly review the driven-dissipative phase transition in the transverse-field Ising model and speculate on a topological model which might exhibit a transition, albeit in 4D.
Consider the transverse-field Ising Hamiltonian in the presence of dissipation:
\[H=-J\sum_{\langle ij\rangle}X_{i}X_{j}-h\sum_{i}Z_{i},\qquad L_{i}=\sqrt{\gamma}\sigma_{i}^{-}, \tag{10}\]
where \(\sigma_{i}^{-}\) is the lowering operator in the \(Z\) basis [32; 90]. This can be viewed as the rotating-frame Hamiltonian of a lattice of spins in the presence of a coherent drive [32]. It is believed that this model has a phase transition in 2D and higher: For \(J/h\ll 1\) the model is in a trivial paramagnetic phase; for \(J/h\sim 1\) and \(\gamma/h\sim 1\), the drive causes the steady state to spontaneously break the symmetry [90]. This transition is most easily understood within the quantum jump picture: The jump operators want to evolve the system to a state with all spins pointing down, but the non-Hermitian effective Hamiltonian arising from the nearest-neighbor coupling (\(J\)) will cause the spins to rotate. The competition between these two processes will lead to a phase with net magnetization in \(X\) when the drive crosses a certain critical strength.
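For small systems the steady state of this model can be obtained directly with standard open-quantum-system tools. The sketch below uses QuTiP to compute the steady state of a short periodic chain and evaluates \(\langle X_{0}X_{1}\rangle\) as a finite-size proxy for ferromagnetic order; a 1D chain is used only for tractability (the transition requires 2D or higher and the thermodynamic limit), and the chain length and rates are arbitrary choices of ours.

```python
import numpy as np
from qutip import sigmax, sigmaz, sigmam, qeye, tensor, steadystate, expect

def dd_ising_chain(n, J, h, gamma):
    """Steady state of a driven-dissipative transverse-field Ising chain (periodic)."""
    def op(single, site):
        ops = [qeye(2)] * n
        ops[site] = single
        return tensor(ops)

    H = -h * sum(op(sigmaz(), i) for i in range(n))
    H += -J * sum(op(sigmax(), i) * op(sigmax(), (i + 1) % n) for i in range(n))
    c_ops = [np.sqrt(gamma) * op(sigmam(), i) for i in range(n)]
    rho_ss = steadystate(H, c_ops)
    # <X_0 X_1> serves as a finite-size proxy for ferromagnetic order
    return expect(op(sigmax(), 0) * op(sigmax(), 1), rho_ss)

for J in (0.2, 1.0, 3.0):
    print("J =", J, " <X0 X1> =", round(dd_ising_chain(n=4, J=J, h=1.0, gamma=1.0), 4))
```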
Working by analogy, we speculate that the following 4D model might exhibit a driven-dissipative _topological_ transition:
\[H=-J\sum_{\vec{v},\hat{e}}S_{\vec{v},\hat{e}}-J\sum_{\vec{v},\hat{c}}S_{\vec{v},\hat{c}}-h\sum_{i}Z_{i},\qquad L_{i}=\sqrt{\gamma}\sigma_{i}^{-} \tag{11}\]
where the stabilizers \(S\) are defined in Sec. V. (The terms with a prefactor \(J\) are just the 4D toric code Hamiltonian.) Again we expect a trivial paramagnetic phase for \(J/h\ll 1\) since the dissipation acts as a zero-temperature bath in this limit. Nevertheless, for larger values of \(J\) the Hamiltonian evolution could start to cause the (generally mixed) steady state to acquire a non-zero topological order parameter. An
Figure 11: The relaxation starting from the infinite-temperature state. The setup is the same as in Fig. 10. The inset shows the convergence to the stationary value with log scale on the y-axis.
Figure 10: Relaxation to equilibrium magnetization. We show the magnetization dynamics \(\mathrm{Tr}\big{[}\bar{Z}\rho(t)\big{]}\) (with \(\bar{Z}=(\sum_{i}Z_{i})/N^{2}\)) for the 2D thermal Ising model (\(\kappa=1,\Delta_{x}=0.02\)) as it converges to one of the equilibrium values \(\mathrm{Tr}\big{[}\bar{Z}\rho_{ss}\big{]}>0\). The initial state is sampled from the ensemble where each spin has 60% probability to be down and 40% to be up. The result is averaged over \(10^{4}\) trajectories. The inset shows the convergence to the stationary value with log scale on the y-axis.
interesting direction for future work involves characterizing the phases of such a model.
|
2305.08075 | Analyzing Compression Techniques for Computer Vision | Compressing deep networks is highly desirable for practical use-cases in
computer vision applications. Several techniques have been explored in the
literature, and research has been done in finding efficient strategies for
combining them. For this project, we aimed to explore three different basic
compression techniques - knowledge distillation, pruning, and quantization for
small-scale recognition tasks. Along with the basic methods, we also test the
efficacy of combining them in a sequential manner. We analyze them using MNIST
and CIFAR-10 datasets and present the results along with few observations
inferred from them. | Maniratnam Mandal, Imran Khan | 2023-05-14T05:17:32Z | http://arxiv.org/abs/2305.08075v1 | # Analyzing Compression Techniques for Computer Vision
###### Abstract
Compressing deep networks is highly desirable for practical use-cases in computer vision applications. Several techniques have been explored in the literature, and research has been done in finding efficient strategies for combining them. For this project, we aimed to explore three different basic compression techniques - knowledge distillation, pruning, and quantization for small-scale recognition tasks. Along with the basic methods, we also test the efficacy of combining them in a sequential manner. We analyze them using MNIST and CIFAR-10 datasets and present the results along with few observations inferred from them.
## 1 Introduction
Neural network compression is an important issue for improving the usability of networks in smaller and less powerful devices. As the size of deep networks increases, they become increasingly more demanding in their processing power and storage requirements. Some of the largest neural networks built for vision tasks contain parameters on the scale of billions. As the growing trend is that neural networks are increasingly used in mobile phones, it is sometimes infeasible for extremely large networks to be trained, or even deployed.
In addition to storage and memory concerns with over-parameterized neural networks, there are other concerns that would warrant smaller, more usable networks. It is important to note that the speed and latency of networks are just as important. Additionally, it is estimated that the GPT-3 network, with 175 billion parameters, emitted a total of 552.1 tons of carbon dioxide in training [21]. Thus, if reducing the size of networks can lead to increased energy efficiency, there are potential environmental and ethical reasons for wanting to reduce the size of extremely large networks.
One simple solution to combat this problem is to make smaller networks with fewer and more manageable parameters. While this solution would decrease the size of the networks, this comes at the cost of decreased model performance. This would not be favorable for many recognition based applications, be it auto-navigation, image captioning, visual question answering, or face recognition-based security measures. Similarly, another simple solution would be to "outsource" the model to a server. This would mean processing the model on a server, which would require sending large amounts of data back and forth between the mobile phone and the server. This is also not a perfect solution to our problem, because this would require a great internet connection for the data to be sent, and would incur unnecessary delays that slow down the processing. For example, imagine having to wait every time you wanted to use facial recognition to unlock your phone.
These solutions, although feasible, are not sufficient. The field of network compression has introduced several techniques for mitigating this. The goal of neural network compression is to obtain the same or similar generalization performance by efficient storage, optimization, computation, and often reduction in network size. The reduction in storage and computation leads to critical improvements in real-time applications such as in mobile devices, smart wearables, online learning, etc. Some of the recent major techniques in network compression are - parameter pruning and quantization, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation [19].
According to [24], these methods can be categorized into two common streams: the "transfer" stream and the "compress" stream. Techniques in the transfer stream is focused on training a new small network, whereas techniques in the compress stream are focused on decreasing the size of the model during or after training. As the techniques have been developed independently, they can be used in conjunction and to complement each other. The goal of this project is to analyze and combine these techniques to achieve higher levels of compression while preserving performance. The major gap that this project is hoping to fill is the gap between the two streams. We would like to show empirical results of neural network compression techniques that can be done through a combination of transferring knowledge to a smaller network and simultaneously compressing the
networks.
Our paper is organized as follows: the following section contains a review of the literature that has already been done on the subject of neural network compression; Section 3 discusses the methods we choose to use in our experiments; Section 4 goes into more detail about the details of our experimental setup; Section 5 talks about our results; and we end with a conclusion in Section 6.
## 2 Related Work
### Quantization
Quantization has been widely used as a compression technique across all fields of engineering sciences. Reducing the number of bits to represent each weight can lead to reduced memory usage and faster computation [23]. [4] aimed to decrease the size of the network by storing weights in a smaller format. In this paper, the authors test three benchmark data sets with three different storing formats: floating point, fixed point, and dynamic fixed point. They find that very low precision storage is good enough for running trained networks, and that it is also sufficient for training them.
Binary weight neural networks have also been explored [5], although the performance of such networks take a major hit in the case of deep CNNs. [13] introduces Quantization Aware Training, which is a method of helping the accuracy of a model be less affected with Post Training Quantization, by accounting for the quantization loss in the training phase of the model. This will be discussed in more detail in Section 3.3.
### Pruning
Early pruning based methods reduced the number of connections among the layers based on the loss function [19]. Pruning methods also include removing redundant neurons, removing redundant connections, quantizing and encoding weights, and parameter sharing. HashedNets uses a low-cost hash function to store and group weights for parameter sharing [2]. Some compact DNNs are trained with sparsity constraints during optimization for regularization. Pruning with regularization can require a long time to converge and usually involves intricate parameter tuning, thus increasing the training and inference time. Optimal Brain Damage [6] also shows a more robust method of pruning. By removing unimportant weights from a network, they claim they can get improved generalization, require fewer training data points, and improve the speed of learning. This technique uses second-derivative information to balance between network complexity and training set error.
### Knowledge Distillation
Knowledge Distillation (KD) was originally introduced as a training procedure that aims to transfer the knowledge learned by a deeper network (Teacher) to another relatively shallower one (Student), while maintaining the same level of performance [10]. The motivation behind this technique is to reduce the computational complexity of some operations, or compression of large networks, such that devices with limited capacity (e.g. smartphones) could benefit from the achievements of deep learning. In KD, the student network is trained not with the original hard binary labels of the dataset, but instead with the soft labels taken from the output probability of the teacher network. The student model learns to reproduce the output of the teacher model which is used for training. In [10], an ensemble of teacher networks was compressed into a shallow student using the softmax output of the teachers. FitNets [22] compress thin deep networks into wide shallow ones while preserving the performance, and have also been used to learn full feature maps from the teacher. There have been extensions of KD where students have been trained to approximate Monte Carlo teacher [12], trained to represent knowledge in higher neural layers [16], or using multiple networks of decreasing depth in between [17] to transfer knowledge from teacher to assistant to student. Although KD can compress very deep models into much shallower ones, it takes the largest hit in performance among all the compression techniques due to its limited representation capabilities.
### Combined Methods
In [9], the authors use a combined pipeline of Pruning, Quantization, and Huffman Coding. They first prune the model, then use weight sharing and post-training quantization to further reduce the size of their model, achieving 39-50x compression. They manage to achieve this compression with no loss in the accuracy of the model.
### Other Methods
In [7], the authors demonstrate that there is significant redundancy in the weights of large deep learning networks. With a few weight values for each feature they claim it is possible to accurately predict the few unknown weights. Additionally, they show that not only can the parameter values be predicted, but many of them are simply not necessary. In this paper, they train several different networks by learning only a fraction of the parameters and predicting the remainder. In the best cases they are able to predict more than 95% of the weights of a network without a decrease in accuracy.
In [1], the authors show that shallow fully connected models can learn the same functions, regardless of how complex, that were learned by deeper models and attain the same performance of the deep models. In some cases the
shallow neural networks learned these functions using the same number of total parameters as the deeper model. They evaluate the method on the TIMIT phoneme recognition task and successfully train shallow feed forward networks that perform just as well as deep convolutional networks. They suggest that there exist better algorithms for training shallow feedforward nets than those currently available.
Other methods of network compression include using low-rank filters to accelerate convolution [8], using transferred convolutional filters [3], attention-based mechanisms to reduce computations [25], randomly dropping and bypassing layers with identity function [11], etc.
## 3 Methods
In this section, we will introduce the methods we use for our experiments. We introduce the baseline model, the compression techniques we use, and the motivation for these selections.
### Baseline Network Architecture
The proposed network architectures are shown in Figures 1 and 2.
We use two different base models for the MNIST [15] and CIFAR-10 [14] datasets. Because we wanted to test the effect of network compression on convolutional layers and dense layers separately, both the models were created accordingly. The base MNIST model has two convolutional layers of size 128 and 256, followed by a hidden dense layer of size 100. The CIFAR-10 base model, on the other hand, has 4 convolutional layers of sizes 128, 128, 256, and 256, followed by two hidden layers of size 256. The MNIST and CIFAR-10 base models have around 1.55M and 1.37M trainable parameters respectively.
### Knowledge Distillation
To test the efficacy of knowledge distillation, we try both single-step and multi-step distillation methods. In the former case, we train a student model with the output generated from the teacher (baseline) model as labels, while in the latter case, we introduce a teacher assistant model [18] as well. For both the techniques, the student (compressed) model was kept the same. The MNIST student has two convolutional layers of size 16 and 32, followed by flattening and the output dense layer. There was no hidden dense layer in this case. Same as the corresponding teacher, the CIFAR-10 student has 4 convolutional layers (size 64, 64, 128, and 128) and two dense hidden layers of size 128. The MNIST and CIFAR-10 students have around 20.5K and 343.6K parameters, leading to \(>7500\)% and \(>400\)% compression respectively, when compared to their corresponding teacher models.
For testing the efficacy of multi-step distillation, we train a Teacher Assistant model with labels generated by the teacher, and then train the student using the teacher assistant labels. The Teacher Assistant model is smaller than the Teacher but larger than the Student model for both cases. The MNIST TA has around 176K parameters, and the CIFAR-10 Teacher Assistant has around 1.11M parameters. For both single and multi-step distillation, we also study the effect of distillation temperature on student accuracy.
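To make the distillation procedure concrete, here is a minimal sketch of a distillation loss and training step in TensorFlow/Keras (the framework used in this work). It assumes the teacher and student models output raw logits; the function names, the temperature, and the weighting \(\alpha\) between hard and soft losses are illustrative choices of ours rather than the exact implementation used for the experiments.

```python
import tensorflow as tf

def distillation_loss(teacher_logits, student_logits, temperature):
    # cross entropy between softened teacher and student distributions
    t_probs = tf.nn.softmax(teacher_logits / temperature)
    s_logprobs = tf.nn.log_softmax(student_logits / temperature)
    return -tf.reduce_mean(tf.reduce_sum(t_probs * s_logprobs, axis=-1)) * temperature ** 2

@tf.function
def distill_step(teacher, student, optimizer, x, y, temperature=20.0, alpha=0.1):
    teacher_logits = teacher(x, training=False)
    with tf.GradientTape() as tape:
        student_logits = student(x, training=True)
        hard = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(y, student_logits, from_logits=True))
        soft = distillation_loss(teacher_logits, student_logits, temperature)
        loss = alpha * hard + (1.0 - alpha) * soft
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss
```

For the multi-step experiments, the same step is used twice: once to train the teacher assistant from the teacher, and once to train the student from the teacher assistant.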
### Quantization
We examined two methods of quantization of our networks: Post-Training Quantization and Quantization Aware Training. The first quantization technique we explored was Post Training Quantization. Post Training Quantization is performed on a pre-trained model, and it simply reduces the bit-precision of the weights and activation functions within the network. For example, a network which is normally
Figure 1: **Model Architectures for MNIST**
Figure 2: **Model Architectures for CIFAR10**
stored with 32-bit float weights can be instead stored with 8-bit weights, for a theoretical four times decrease in storage size of the network [13].
The second quantization technique we used was Quantization Aware Training, which is a method of training a neural network that is meant to help the model accuracy downstream when later performing Post-Training Quantization. Quantization Aware Training works by emulating lower precision steps in the forward pass, while keeping the backward pass the same. This way, the optimizer can account for the quantization error in the training phase of the network, and can modify weights step-wise accordingly. Then, at the end of training the network, we can store the weights with Post-Training Quantization and should theoretically get better validation scores than we would have without the Quantization Aware Training [20].
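A minimal sketch of both paths, using the TensorFlow Lite converter and the tensorflow_model_optimization package used in this project, is given below. It assumes a trained Keras model named `model` and training arrays `x_train`, `y_train` are already in scope; the flags shown are the standard 8-bit and float16 post-training options, and the epoch count is illustrative.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Post-training quantization: convert the trained Keras model to TFLite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # 8-bit weight quantization
# converter.target_spec.supported_types = [tf.float16]      # use this line instead for 16-bit
tflite_bytes = converter.convert()

# Quantization-aware training: wrap the model with fake-quantization ops,
# fine-tune it, then convert with the same TFLite converter as above.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
qat_model.fit(x_train, y_train, epochs=1, validation_split=0.1)
```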
### Pruning
We compared two different pruning strategies on both of our base models - global and local pruning. For each, we explored both constant sparsity and polynomial decay with changing sparsity training schedules. All the pruning strategies we used in our project were unstructured and magnitude-based, which involved iteratively pruning low-magnitude (thus less important) weights during training until the desired sparsity is reached.
Polynomial decay involves two different sparsity parameters - the initial sparsity and the final desired sparsity. We trained the models by varying them in three different ways - 0 to 80%, 0 to 50%, and 50 to 80% weight sparsity. In constant decay, the sparsity remains the same as the model is fine tuned. In this case, we compare three different pruning outcomes with 20%, 50%, and 80% sparsities.
Along with global pruning, we also experimented with partial local pruning. We wanted to study the effects of pruning on CNN and dense parts of a network. Hence, for each dataset baseline, we prune the convolutional layers and dense layers separately with both polynomial decay and constant pruning strategies as described before. For all the pruning methods, the pruned trained network was fine tuned for two epochs.
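The sketch below shows how such schedules can be set up with the tensorflow_model_optimization pruning API. It assumes a Keras `model` and data arrays `x_train`, `y_train` exist; the step counts and sparsity targets are illustrative rather than the exact values used in our runs.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

prune = tfmot.sparsity.keras.prune_low_magnitude

# Polynomial-decay schedule: sparsity ramps from 0% up to 80% during fine-tuning.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=2000)
# For a constant schedule use tfmot.sparsity.keras.ConstantSparsity(0.8, begin_step=0).

# Global pruning wraps the whole model; for local (partial) pruning, wrap only the
# convolutional or only the dense layers via tf.keras.models.clone_model with a
# clone_function that applies `prune` selectively.
pruned = prune(model, pruning_schedule=schedule)
pruned.compile(optimizer="adam",
               loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
pruned.fit(x_train, y_train, epochs=2, validation_split=0.1,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

final_model = tfmot.sparsity.keras.strip_pruning(pruned)  # remove the pruning wrappers
```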
### Combined Techniques
We also explored the combination of different compression techniques to study their efficacy. Primarily, we sequentially combine knowledge distillation with pruning and quantization separately. For the first set of experiments, we first compress the teacher using knowledge distillation and then apply pruning and quantization separately. For the second set of experiments, we first prune and quantize the teacher, followed by training the student. Although the second experiment does not compress the (final) student model and thus does not contribute to further compression, we were curious to study the effects of teacher pruning on distillation.
## 4 Experimental Setup
In this section, we explain the details of how we carry out the experiments, including the implementation and the experimental design.
### Datasets
The datasets we used for experimentation on the model were the MNIST Digits dataset and the CIFAR-10 image recognition dataset. The MNIST dataset consists of 60000 training images of labeled handwritten digits, and 10000 testing images. The digits are centered and size normalized, and each image is represented by a 28x28 grid of pixels. There are a total of 10 labels, one for each digit 0-9 [15].
The CIFAR-10 dataset consists of 60000 training images and 10000 test images. This dataset also has ten labels: airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The images here are larger than the MNIST images. They are represented as color images in a 32x32x3 tensor [14]. We included this dataset as part of our experiment because we were interested in how the additional dimension of color might affect the networks compression ratios and accuracies.
### Implementation
All experiments were done in Python using Tensorflow libraries and a Keras framework. We used the tensorflow_model_optimization package to help with quantization aware training. As there were no readily available packages for knowledge distillation in TensorFlow, we had to create our own class for distillation. We used Google Colab for our programming environment.
For the implementation of our algorithms, we used five epochs for our main models and teacher models, and when we trained student models for knowledge distillation, we used three epochs. We used an ADAM optimizer for all models and its default learning rate of 0.001. We used the Sparse Categorical Cross Entropy Loss as our loss function. For reproducibility, we set a random seed of 1234. Before training, we also normalized all pixel values (originally integers from 0 to 255) to values between 0 and 1.
### Evaluation Metrics
To evaluate our experiments, we looked at three main metrics. The first metric, was the testing accuracy of each model. With each of the datasets, we held out the test set as to not contaminate the model, and for training we incorporated a 0.1 validation split. The testing accuracy can be measured as a percent of the test set correctly predicted.
The second metric we looked at was the storage size of the model. To find the storage size, we saved all models in a
TFLite format and read the total number of bytes that were saved in the saved model's file path.
The final metric we observed was called the efficacy metric. This was simply calculated as the ratio of the model accuracy to the size of the model. While this last metric is not very intuitive, we wanted to include it as a way of evaluating a compression technique through a combination of how the model's accuracy and size change.
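The size and efficacy metrics can be computed with a few lines; this sketch mirrors the procedure described above (converting to TFLite and counting bytes). The function names are ours.

```python
import tensorflow as tf

def tflite_size_mb(model):
    """Size of the model after conversion to TFLite, in megabytes."""
    tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()
    return len(tflite_bytes) / 1e6

def efficacy(test_accuracy, model):
    """Our efficacy metric: test accuracy divided by the TFLite model size."""
    return test_accuracy / tflite_size_mb(model)
```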
## 5 Results
In this section, we will go over the results of the different experiments we ran.
### Knowledge Distillation
Figure 3 shows the performance of the teacher, teacher assistant, and student models (Sec. 3.2) on the MNIST [15] data set for distillation temperatures \(T=\{1,2,5,10,20,30,40\}\). We notice that distillation almost always leads to an improvement for both the single step and multi-step cases. Comparing the two, we observe that introducing the TA improves the performance of the student for lower distillation temperatures, whereas for high temperatures, the performance remains almost the same. Furthermore, the performance of the TA is slightly better than the student models, though the student model contains only about \(11\)% of the parameters of the TA. This points to the high compression capabilities of knowledge distillation. The optimal distillation temperature seems to be around \(15\)-\(30\).
Similar experiments on the CIFAR-10 models (Fig. 4) show a slightly different trend. Here too, for most cases, the distilled models perform better than the teacher. But, unlike the MNIST case, here the TA leads to better performance for higher distillation temperatures, and for lower temperatures, the TA does not help. For both plots, we notice that the performance is higher for intermediate temperatures and lower for both low and high temperatures. Low \(T\) results in harder labels and hence lower accuracy, whereas high \(T\) leads to softer labels but with larger entropy, which may have caused the lower accuracy for high temperatures.
### Pruning
Global Pruning: Table 1 shows the performance of the unstructured pruning for both the polynomial decay and constant sparsity schedules. The percentage improvements over the original baselines have been presented. We observe that performance increases with increase in sparsity. For this set of experiments, the fine tuned pruned models always performed better than the original baselines. Similar to quantization, we noticed a huge improvement in model size reduction, although the accuracy might have slightly changed.
Local Partial Pruning: We experimented with pruning the dense layers and convolutional layers separately. The results of local partial pruning on the CIFAR-10 baseline are presented in Table 2. We notice a slightly better performance when pruning the convolutional layers, which might be due to the innate higher sparsity in them compared to the fully connected layers. Also, comparing to Table 1, we
| **Model** | **MNIST Acc** | **MNIST Size (MB)** | **CIFAR-10 Acc** | **CIFAR-10 Size (MB)** |
|---|---|---|---|---|
| PD 50%–80% | +0.48 | -68 | **+4.57** | **-68** |
| PD 0%–80% | **+0.77** | **-68** | +3.27 | -68 |
| PD 0%–50% | +0.62 | -37 | +1.68 | -37 |
| CS 20% | +0.54 | -11.5 | +1.28 | -11.4 |
| CS 50% | +0.65 | -37 | +2.83 | -37 |
| CS 80% | **+0.86** | **-68** | **+3.44** | **-68** |

Table 1: Results of global unstructured pruning on the two baselines. All numbers show the percentage improvement over the original baseline models.
Figure 4: **Evaluating KD on CIFAR-10:** The performance of the Teacher, TA, and Student models on the CIFAR-10 data set [14] for different distillation temperatures.
Figure 3: **Evaluating KD on MNIST:** The performance of the Teacher, TA, and Student models on the MNIST data set [15] for different distillation temperatures.
observe that, when compared to global pruning, local pruning is generally less advantageous in terms of compression efficacy (performance/size).
### Quantization
The results of the quantization experiments are shown in Table 3. We first look at the accuracies and sizes of our baseline models: in the MNIST case, the baseline model performs well with an accuracy of 97.74% and a size of 5.923 MB; in the CIFAR-10 case, the baseline model has an accuracy of 70.42% and a storage size of 5.22 MB.
The first observation concerns the results of the Quantization Aware Training experiment. The accuracy in the MNIST case actually improved over the original model, while the accuracy on CIFAR-10 went down, but not significantly. Overall, the accuracies did not change much, but the size of the model decreased significantly. When we apply Quantization Aware Training, we also apply an 8-bit Post Training Quantization. The size of the model thus decreases to 1.494 MB in the MNIST case and to 1.332 MB in the CIFAR-10 case. These decreases represent an approximate compression ratio of 4x.
The next thing to notice from the table is the change in the original model when only Post Training Quantization is applied. The accuracies of the quantized MNIST models remain almost exactly the same as the original model (97.75 and 97.74 in the 8-bit and 16-bit models, respectively, vs 97.74 in the original model), and the accuracy of the CIFAR-10 16-bit quantized model is exactly equal to that of the CIFAR-10 original model. We can, however, see a very slight, insignificant decrease in the accuracy of the CIFAR-10 8-bit model. We can also observe the storage size differences in all of these models. As expected, the 8-bit models empirically show an approximate compression ratio of 4x over the original 32-bit model, and the 16-bit models an approximate compression ratio of 2x.
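The roughly 4x (8-bit) and 2x (16-bit) size reductions follow from storing weights in fewer bits. A rough affine int8 quantizer, written only for illustration (it is not necessarily the exact scheme used for Table 3, and it assumes a non-constant tensor), looks as follows:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Uniform affine quantization of a float32 tensor to int8 (assumes x is not constant)."""
    scale = float(x.max() - x.min()) / 255.0
    zero_point = round(float(-x.min()) / scale) - 128
    q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(32, 32).astype(np.float32)
q, scale, zp = quantize_int8(x)
print(float(np.abs(x - dequantize(q, scale, zp)).max()))  # reconstruction error on the order of scale
print(x.nbytes / q.nbytes)                                # 4.0: the ~4x reduction seen above
```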
Another interesting observation is the comparison between the Quantization Aware Trained network and the Post-Trained models. It is interesting to note that the Quantization Aware Trained model has a higher accuracy in the MNIST case, which is what is expected from having the Quantization Aware forward passes incorporated in the optimizer. However, we don't see that same pattern in the CIFAR-10 case, where the Quantization Aware accuracy is actually less than both of the Post-Trained models' accuracies. The model sizes remain almost exactly the same, which is expected when they both have the same total number of parameters and the same bit-precision storage.
### Combined Techniques
KD + Pruning: Table 4 compares the individual techniques with the combined one. We notice that, although the accuracy of the student decreased after pruning, this was accompanied by a large reduction in size. So, for simple models and data sets, combining the two methods may lead to large improvements.
KD + Quantization: From Table 3, we observe that the performance of the Distilled Model (Teacher to Student) is exactly equal to that of the Distilled Model (Quantization Aware Trained Teacher to Student). Essentially, this shows that the distillation process did not change when we gave it the original teacher model instead of the quantization aware teacher model. What we can see, however, is the large difference in size between the student model and the teacher model, with only a very small difference in accuracy. In the MNIST case, the storage size of the model goes from 5.923 MB to 0.081 MB, which is a compression ratio of almost 75x. This shows that a much smaller network can be learned from a significantly larger network.
Furthermore, we can sequentially add on post training quantization to the distillation process. This even further decreases the size of the model by another 4x, without affecting the accuracy. In the MNIST case, the final compression ratio after going through both distillation and 8-bit post-training quantization is more than 250x. This is an enormous decrease in the size of the model, with minuscule change in the accuracy of the model.
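The quoted ratios follow directly from the MNIST sizes reported in Table 3:

```python
teacher_mb, student_mb, student_int8_mb = 5.923, 0.081, 0.023   # MNIST sizes from Table 3
print(round(teacher_mb / student_mb, 1))        # ~73x from distillation alone
print(round(teacher_mb / student_int8_mb, 1))   # ~258x after adding 8-bit post-training quantization
```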
## 6 Conclusions
The performance of all of the methods has been summarized in Table 5.
In general, we noticed that for almost all techniques the accuracy of the models improved after compression. This might be because our models are over-parameterized for the small and easy data sets we tested them on. Although our experiments point towards excellent efficacy for the compressed models, the results might change when tested on much deeper networks and on larger, harder data sets.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{**Dense Layers**} & \multicolumn{2}{|c|}{**Conv Layers**} \\ \hline
**Model** & **Acc** & **Size(MB)** & **Acc** & **Size(MB)** \\ \hline PD 50\%–80\% & **+3.8** & **-68** & +3.9 & -68 \\ \hline PD 0\%–80\% & **+3.8** & **-68** & **+4.2** & **-68** \\ \hline PD 50\%–80\% & +2.6 & -37 & +2.9 & -37 \\ \hline \hline CS 20\% & +1.2 & -11.5 & +1.03 & -11.4 \\ \hline CS 50\% & +2.7 & -37 & +1.92 & -37 \\ \hline CS 80\% & **+3.6** & **-68** & **+4.4** & **-68** \\ \hline \end{tabular}
\end{table}
Table 2: Results of local unstructured pruning on the dense layers and convolutional layers separately for the CIFAR-10 baseline. All numbers show the percentage improvement over the original baseline models.
The other interesting conclusion we can draw from the knowledge distillation experiments is the advantage of introducing teacher assistant models when the gap between the student and the teacher is large. Although multi-step distillation helps in these cases, it will be interesting to see the performance for deeper models and for more than one intermediate step between teacher and student. Again, our conclusions here are preliminary and need to be explored further.
Finally, from our experiments, we do notice the huge benefit of combining distillation with pruning and quantization. We can achieve highly compressed models without much change in performance.
As part of future work, we would like to explore harder data sets with many more classes, test on deeper models, and experiment on combining other modern compression techniques. We tried to explore the very basic compression techniques in a simple manner, but many more sophisticated methodologies have been explored in the literature. Another part of potential future work could be to employ these algorithms on commonly used pre-existing networks, such as AlexNet, or even on much larger networks where network compression may actually be a problem, like DenseNet. Furthermore, we would like to experiment with even smaller student models in the future, to see just how compact a network can get while still performing well. It would also be interesting to work on an algorithm for recursively removing layers of a deep network in a manner analogous to backwards regression. A comprehensive and thorough analysis of all would be highly desirable. This project serves as a stepping stone in that direction.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{**MNIST**} & \multicolumn{2}{|c|}{**CIFAR-10**} \\ \hline
**Model** & **Accuracy** & **Size (MB)** & **Accuracy** & **Size (MB)** \\ \hline Original (Teacher) & 97.83 & 5.76 & 69.68 & 5.08 \\ \hline Teacher \(\rightarrow\) Pruning & 98.29 & 1.84 & 72.87 & 1.62 \\ \hline Teacher \(\rightarrow\) Student & 97.75 & 0.08 & 70.34 & 1.28 \\ \hline Teacher \(\rightarrow\) Pruning \(\rightarrow\) Student & 97.75 & 0.08 & 71.40 & 1.28 \\ \hline Teacher \(\rightarrow\) Student \(\rightarrow\) Pruning & 96.94 & 0.03 & 72.16 & 0.42 \\ \hline \end{tabular}
\end{table}
Table 4: Results of experiments combining knowledge distillation and global unstructured pruning on the two baselines.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{**MNIST**} & \multicolumn{2}{|c|}{**CIFAR-10**} \\ \hline
**Model** & **Accuracy** & **Size (MB)** & **Accuracy** & **Size (MB)** \\ \hline Original Model & 97.74 & 5.923 & 70.42 & 5.22 \\ \hline Quantization Aware Training & 98.12 & 1.494 & 69.55 & 1.332 \\ \hline Model + 8-bit Post Training Quantization & 97.75 & 1.494 & 70.36 & 1.326 \\ \hline Model + 16-bit Post Training Quantization & 97.74 & 2.964 & 70.42 & 2.614 \\ \hline Distilled Model (Teacher \(\rightarrow\) Student) & 97.61 & 0.081 & 70.62 & 1.315 \\ \hline Quantization Aware Teacher \(\rightarrow\) Distilled Student & 97.61 & 0.081 & 70.62 & 1.315 \\ \hline Teacher \(\rightarrow\) Quantization Aware Student & 97.55 & 0.024 & 67.93 & 0.345 \\ \hline Teacher \(\rightarrow\) Student + 8-bit Post Training Quantization & 97.62 & 0.023 & 70.59 & 0.341 \\ \hline Teacher \(\rightarrow\) Student + 16-bit Post Training Quantization & 97.61 & 0.042 & 70.62 & 0.662 \\ \hline \end{tabular}
\end{table}
Table 3: Results of all Quantization Experiments |
2302.03187 | Sum formulas for Schur multiple zeta values | In this paper, we study sum formulas for Schur multiple zeta values and give
a generalization of the sum formulas for multiple zeta(-star) values. We show
that for ribbons of certain types, the sum over all admissible Young tableaux
of this shape evaluates to a rational multiple of the Riemann zeta value. For
arbitrary ribbons with $n$ corners, we show that these can be always expressed
in terms of multiple zeta values of depth $\leq n$. In particular, when $n=2$,
we give explicit, what we call, bounded type sum formulas for these ribbons.
Finally, we show how to evaluate the sum over all admissible Young tableaux
with exactly one corner and also prove bounded type sum formulas for them. This
will also lead to relations among sums of Schur multiple zeta values over all
admissible Young tableaux of different shapes. | Henrik Bachmann, Shin-ya Kadota, Yuta Suzuki, Shuji Yamamoto, Yoshinori Yamasaki | 2023-02-07T01:36:30Z | http://arxiv.org/abs/2302.03187v2 | # Sum formulas for Schur multiple zeta values
###### Abstract.
In this paper, we study sum formulas for Schur multiple zeta values and give a generalization of the sum formulas for multiple zeta(-star) values. We show that for ribbons of certain types, the sum over all admissible Young tableaux of this shape evaluates to a rational multiple of the Riemann zeta value. For arbitrary ribbons with \(n\) corners, we show that these can be always expressed in terms of multiple zeta values of depth \(\leq n\). In particular, when \(n=2\), we give explicit, what we call, bounded type sum formulas for these ribbons. Finally, we show how to evaluate the sum over all admissible Young tableaux with exactly one corner and also prove bounded type sum formulas for them. This will also lead to relations among sums of Schur multiple zeta values over all admissible Young tableaux of different shapes.
Key words and phrases: Schur multiple zeta values, Sum formulas, Integrals associated with 2-posets, Jacobi-Trudi formula 2020 Mathematics Subject Classification: Primary 11M32; Secondary 05E05
## 1. Introduction
The purpose of this note is to present several different types of sum formulas for Schur multiple zeta values, which can be seen as generalizations of classical sum formulas for multiple zeta(-star) values. Schur multiple zeta values are real numbers introduced in [7], and they can be seen as a simultaneous generalization of the multiple zeta values (MZVs) and multiple zeta-star values (MZSVs), which are defined for an index \(\boldsymbol{k}=(k_{1},\ldots,k_{d})\in\mathbb{Z}_{\geq 1}^{d}\) with \(k_{d}\geq 2\) by
\[\zeta(\boldsymbol{k})\coloneqq\sum_{0<m_{1}<\cdots<m_{d}}\frac{1}{m_{1}^{k_{1}} \cdots m_{d}^{k_{d}}}\,,\qquad\zeta^{\star}(\boldsymbol{k})\coloneqq\sum_{0<m_ {1}\leq\cdots\leq m_{d}}\frac{1}{m_{1}^{k_{1}}\cdots m_{d}^{k_{d}}}\,.\]
Here the condition \(k_{d}\geq 2\) ensures the convergence of the above sums, and the index \(\boldsymbol{k}\) is called admissible in this case. For an index \(\boldsymbol{k}=(k_{1},\ldots,k_{d})\) we write \(\operatorname{wt}(\boldsymbol{k})=k_{1}+\cdots+k_{d}\) to denote its weight and \(\operatorname{dep}(\boldsymbol{k})=d\) for its depth. A classical result ([2],[3]) is that the sum of MZ(S)Vs over all admissible indices of fixed weight \(w\) and depth \(d\) evaluates to (an integer multiple of) \(\zeta(w)\), i.e., for \(d\geq 1\) and \(w\geq d+1\)
\[\sum_{\begin{subarray}{c}\boldsymbol{k}\text{ admissible}\\ \operatorname{wt}(\boldsymbol{k})=w\\ \operatorname{dep}(\boldsymbol{k})=d\end{subarray}}\zeta(\boldsymbol{k})= \zeta(w),\qquad\sum_{\begin{subarray}{c}\boldsymbol{k}\text{ admissible}\\ \operatorname{wt}(\boldsymbol{k})=w\\ \operatorname{dep}(\boldsymbol{k})=d\end{subarray}}\zeta^{\star}(\boldsymbol {k})=\binom{w-1}{d-1}\zeta(w)\,. \tag{1.1}\]
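For instance, for \(w=4\) and \(d=2\) these formulas read
\[\zeta(1,3)+\zeta(2,2)=\zeta(4)\,,\qquad\zeta^{\star}(1,3)+\zeta^{\star}(2,2)=\binom{3}{1}\zeta(4)=3\,\zeta(4)\,.\]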
Schur MZVs generalize MZ(S)Vs by replacing an index \(\boldsymbol{k}\) by a Young tableau (see Definition 1.1 for the exact definition). For a skew Young diagram \(\lambda/\mu\), such as \(\lambda/\mu=(2,2)/(1)\), we consider the sum of all admissible Schur MZVs of this shape with a fixed weight \(w\),

\[S_{w}(\lambda/\mu)\coloneqq\sum_{\begin{subarray}{c}\boldsymbol{k}\in\operatorname{YT}(\lambda/\mu)\text{ admissible}\\ \operatorname{wt}(\boldsymbol{k})=w\end{subarray}}\zeta(\boldsymbol{k})\,. \tag{1.2}\]

The prototype for such evaluations is given by the
single type sum formulas (1.1). As a generalization we will show (Theorem 3.8) that for \(\lambda/\mu=(\underbrace{d,\ldots,d}_{r})/(\underbrace{d-1,\ldots,d-1}_{r-1})\), i.e., when \(\lambda/\mu\) is an anti-hook, we have
\[S_{w}(\lambda/\mu)=\binom{w-1}{d-1}\zeta(w)\]
for all \(w\geq d+r\).
Let us fix some notation. For partitions \(\lambda=(\lambda_{1},\ldots,\lambda_{m})\) and \(\mu=(\mu_{1},\ldots,\mu_{r})\) with \(r\leq m\) and \(\mu_{i}\leq\lambda_{i}\) for all \(i\), the skew Young diagram of shape \(\lambda/\mu\) is the set \(D(\lambda/\mu)=\{(i,j)\in\mathbb{Z}^{2}\mid 1\leq i\leq m,\ \mu_{i}<j\leq\lambda_{i}\}\), where \(\mu_{i}=0\) for \(i>r\). In the case where \(\mu=\varnothing\) is the empty partition (i.e., the unique partition of zero) we just write \(\lambda/\mu=\lambda\).
A _Young tableau_\(\mathbf{k}=(k_{i,j})_{(i,j)\in D(\lambda/\mu)}\) of shape \(\lambda/\mu\) is a filling of \(D(\lambda/\mu)\) obtained by putting \(k_{i,j}\in\mathbb{Z}_{\geq 1}\) into the \((i,j)\)-entry of \(D(\lambda/\mu)\). For shorter notation, we will also just write \((k_{i,j})\) in the following if the shape \(\lambda/\mu\) is clear from the context. A Young tableau \((m_{i,j})\) is called _semi-standard_ if \(m_{i,j}<m_{i+1,j}\) and \(m_{i,j}\leq m_{i,j+1}\) for all possible \(i\) and \(j\). The set of all Young tableaux and all semi-standard Young tableaux of shape \(\lambda/\mu\) are denoted by \(\operatorname{YT}(\lambda/\mu)\) and \(\operatorname{SSYT}(\lambda/\mu)\), respectively.
An entry \((i,j)\in D(\lambda/\mu)\) is called a _corner_ of \(\lambda/\mu\) if \((i,j+1)\not\in D(\lambda/\mu)\) and \((i+1,j)\not\in D(\lambda/\mu)\). We denote the set of all corners of \(\lambda/\mu\) by \(C(\lambda/\mu)\). For a Young tableau \(\mathbf{k}=(k_{i,j})\in\operatorname{YT}(\lambda/\mu)\) we define its _weight_ by \(\operatorname{wt}(\mathbf{k})=\sum_{(i,j)\in D(\lambda/\mu)}k_{i,j}\) and we call it _admissible_ if \(k_{i,j}\geq 2\) for all \((i,j)\in C(\lambda/\mu)\).
**Definition 1.1**.: For an admissible \(\mathbf{k}=(k_{i,j})\in\operatorname{YT}(\lambda/\mu)\) the _Schur multiple zeta value_ (Schur MZV) is defined by
\[\zeta(\mathbf{k})\coloneqq\sum_{(m_{i,j})\in\operatorname{SSYT}(\lambda/\mu)} \prod_{(i,j)\in D(\lambda/\mu)}\frac{1}{m_{i,j}^{k_{i,j}}}\,. \tag{1.5}\]
Note that the admissibility of \(\mathbf{k}\) ensures the convergence of (1.5) ([7, Lemma 2.1]). For the empty tableau \(\mathbf{k}=\varnothing\), we have \(\zeta(\varnothing)=1\).
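For example, Definition 1.1 recovers the two classical series in the extreme cases of a single column and a single row:
\[\zeta(\boldsymbol{k})=\zeta(k_{1,1},k_{2,1},\ldots,k_{d,1})\ \text{ for }\lambda=(\underbrace{1,\ldots,1}_{d}),\qquad\zeta(\boldsymbol{k})=\zeta^{\star}(k_{1,1},k_{1,2},\ldots,k_{1,d})\ \text{ for }\lambda=(d),\]
since the entries strictly increase down a column and weakly increase along a row.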
Finally, we mention that the convention for the binomial coefficients we use in this work is for \(n,k\in\mathbb{Z}\) given by
\[\binom{n}{k}\coloneqq\begin{cases}\frac{n(n-1)\cdots(n-(k-1))}{k!}&\text{if }k >0,\\ 1&\text{if }k=0,\\ 0&\text{if }k<0.\end{cases}\]
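For instance, with this convention
\[\binom{-1}{2}=\frac{(-1)(-2)}{2!}=1\,,\qquad\binom{3}{5}=\frac{3\cdot 2\cdot 1\cdot 0\cdot(-1)}{5!}=0\,,\]
so binomial coefficients with \(0\leq n<k\) vanish, while negative upper arguments may give nonzero values.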
### Acknowledgement
The first author was partially supported by JSPS KAKENHI Grant Numbers JP19K14499, JP21K13771. The third author was partially supported by JSPS KAKENHI Grant Numbers JP19K23402, JP21K13772. The fourth author was partially supported by JSPS KAKENHI Grant Numbers JP18H05233, JP18K03221, JP21K03185. The fifth author was partially supported by JSPS KAKENHI Grant Numbers JP21K03206.
## 2. Weighted sum formulas
When evaluating sums of Schur MZVs we will often encounter weighted sums of MZVs, which we will discuss in this section. For indices \(\mathbf{n}=(n_{1},\ldots,n_{d})\), \(\mathbf{k}=(k_{1},\ldots,k_{d})\) and an integer \(l\geq 0\), define
\[P_{l}(\mathbf{n};\mathbf{k})\coloneqq\sum_{\begin{subarray}{c}\mathbf{w}=(w_{1},\ldots,w_{d}):\text{ adm.}\\ w_{i}\geq n_{i}\ (i=1,\ldots,d)\\ \operatorname{wt}(\mathbf{w})=\operatorname{wt}(\mathbf{k})+l\end{subarray}}\prod_{i=1}^{d}\binom{w_{i}-n_{i}}{k_{i}-1}\cdot\zeta(w_{1},\ldots,w_{d}).\]
Notice that by definition \(P_{l}(\varnothing;\varnothing)=1\) if \(l=0\) and \(0\) otherwise. In particular, we put \(P_{l}(\mathbf{k})\coloneqq P_{l}((1,\ldots,1);\mathbf{k})\) and \(Q_{l}(\mathbf{k})\coloneqq P_{l}((1,\ldots,1,2);\mathbf{k})\). The aim of this section is to obtain explicit bounded expressions of \(P_{l}(\mathbf{k})\) and \(Q_{l}(\mathbf{k})\), which play important roles throughout the present paper. Here, we say that an expression of \(P_{l}(\mathbf{k})\) or \(Q_{l}(\mathbf{k})\) is _bounded_ if the number of terms appearing in the expression does not depend on \(l\) but only on \(\mathbf{k}\)
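For example, the smallest nontrivial case is
\[Q_{1}(1,1)=P_{1}((1,2);(1,1))=\binom{0}{0}\binom{0}{0}\,\zeta(1,2)=\zeta(1,2)=\zeta(3)\,,\]
since the only admissible index \(\mathbf{w}=(w_{1},w_{2})\) of weight \(3\) with \(w_{1}\geq 1\) and \(w_{2}\geq 2\) is \((1,2)\).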
Notice that, since \(\binom{w-2}{k-1}=\sum_{j=0}^{k-1}(-1)^{j}\binom{w-1}{k-j-1}\), we have
\[Q_{l}(\mathbf{k})=\sum_{j=0}^{k_{d}-1}(-1)^{j}P_{l+j}(k_{1},\dots,k_{d-1},k_{d}-j)\,. \tag{2.1}\]
In particular, \(Q_{l}(\mathbf{k})=P_{l}(\mathbf{k})\) if \(\mathbf{k}\) is non-admissible, whence it is sufficient to study only \(P_{l}(\mathbf{k})\). Moreover, we may assume that \(l>0\) because the case \(l=0\) is trivial: \(P_{0}(\mathbf{k})=\zeta(\mathbf{k})\) if \(\mathbf{k}\) is admissible and \(0\) otherwise and \(Q_{0}(\mathbf{k})=0\). Furthermore, when \(d=1\), we have for \(k\geq 1\) and \(l>0\)
\[P_{l}(k)=\binom{l+k-1}{k-1}\zeta(l+k)\,,\quad Q_{l}(k)=\binom{l+k-2}{k-1}\zeta(l +k)\]
and therefore also assume that \(d\geq 2\).
The next proposition asserts that \(P_{l}(\mathbf{k})\) satisfies recursive formulas with respect to \(l\), which can be described by using the Schur MZVs of anti-hook shape
\[\zeta\binom{\mathbf{l}}{\mathbf{k}}=\zeta\binom{l_{1},\dots,l_{s}}{k_{1},\dots,k_{r}}\coloneqq\sum_{0<a_{1}<\cdots<a_{r}\geq b_{s}\geq\cdots\geq b_{1}>0}\frac{1}{a_{1}^{k_{1}}\cdots a_{r}^{k_{r}}b_{1}^{l_{1}}\cdots b_{s}^{l_{s}}} \tag{2.2}\]
with \(\mathbf{l}=(l_{1},\dots,l_{s})\) being an index and \(\mathbf{k}=(k_{1},\dots,k_{r})\) a non-empty admissible index.
**Proposition 2.1**.: _Let \(d\geq 2\) and \(l>0\)._
* _If_ \(\mathbf{k}=(k_{1},\dots,k_{d})\) _is admissible, then it holds that_ (2.3) \[P_{l}(\mathbf{k})=\sum_{i=1}^{d}\sum_{a_{i}=0}^{k_{i}-1}(-1)^{k_{1}+\cdots+k_{i-1} +a_{i}}P_{k_{i}-1-a_{i}}(k_{i-1},\dots,k_{1},l+1)\,P_{a_{i}}(k_{i+1},\dots,k_{ d})\,.\]
* _If_ \(\mathbf{k}=(k_{1},\dots,k_{d})\) _is non-admissible (i.e.,_ \(k_{d}=1\)_), then it holds that_ (2.4) \[\begin{split}& P_{l}(\mathbf{k})=\sum_{i=1}^{d}\sum_{a_{i}=0}^{k_{i}-1} (-1)^{k_{1}+\cdots+k_{i-1}+a_{i}}P_{k_{i}-1-a_{i}}(k_{i-1},\dots,k_{1},l+1)P_{ a_{i}}(k_{i+1},\dots,k_{d-1},1)\\ &+\sum_{i=1}^{d-1}(-1)^{l+d+k_{i}}\sum_{\begin{subarray}{c}(b_{0 },\dots,b_{d-1})\in\mathbb{Z}_{\geq 1}^{d}\\ b_{0}\geq 2,\,b_{i}=2\\ b_{0}+\cdots+b_{d-1}=\text{wt}(\mathbf{k})+l+1\end{subarray}}(-1)^{b_{0}+b_{1}+ \cdots+b_{i-1}}\binom{b_{0}-1}{l}\Biggl{\{}\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{d-1}\binom{b_{j}-1}{k_{j}-1}\Biggr{\}}\\ &\times\sum_{j=i}^{d-1}\sum_{c_{j}=1}^{b_{j}-1}(-1)^{c_{j}+j+b_{j+1} +\cdots+b_{d-1}}\zeta\binom{c_{j},b_{j+1},\dots,b_{d-1}}{b_{i-1},\dots,b_{1},b _{0}}\zeta(b_{i+1},\dots,b_{j-1},b_{j}-c_{j}+1)\,.\end{split}\]
**Remark 2.2**.: The number of terms in the expression (2.4) is actually bounded with respect to \(l\) by the following reason: By the existence of the binomial coefficient \(\binom{b_{0}-1}{l}\), the summation variable \(b_{0}\) can be restricted by \(b_{0}\geq l+1\) and then the other summation variables are bounded independently of \(l\). Also, by applying the formula (4.1) to an anti-hook, we can expand each term
\[\zeta\binom{c_{j},b_{j+1},\dots,b_{d-1}}{b_{i-1},\dots,b_{1},b_{0}}\]
into a sum of MZVs. The number of appearing MZVs is independent of \(b_{0},\ldots,b_{d-1}\) and \(c_{j}\). Finally, by using the harmonic product formula, we can rewrite the products of MZVs into a sum of MZVs. The resulting number of terms, after the application of the harmonic product formula, depends only on the original number of entries and so it is independent of \(l\).
To prove Proposition 2.1, we first recall the notion of \(2\)-posets and the associated integrals introduced by the fourth-named author in [9].
**Definition 2.3** ([9, Definition 2.1]).:
1. A \(2\)-poset is a pair \((X,\delta_{X})\), where \(X=(X,\leq)\) is a finite partially ordered set (poset for short) and \(\delta_{X}\) is a map (called the label map of \(X\)) from \(X\) to \(\{0,1\}\). We often omit \(\delta_{X}\) and simply say "a \(2\)-poset \(X\)". Moreover, a \(2\)-poset \(X\) is called admissible if \(\delta_{X}(x)=0\) for all maximal elements \(x\) and \(\delta_{X}(x)=1\) for all minimal elements \(x\).
2. For an admissible \(2\)-poset \(X\), the associated integral \(I(X)\) is defined by (2.5) \[I(X)=\int_{\Delta_{X}}\prod_{x\in X}\omega_{\delta_{X}(x)}(t_{x})\,,\] where \(\Delta_{X}=\big{\{}\,(t_{x})_{x}\in[0,1]^{X}\,\big{|}\,\,t_{x}<t_{y}\) if \(x<y\big{\}}\) and \(\omega_{0}(t)=\frac{dt}{t}\) and \(\omega_{1}(t)=\frac{dt}{1-t}\).
We depict a \(2\)-poset as a Hasse diagram in which an element \(x\) with \(\delta_{X}(x)=0\) (resp. \(\delta_{X}(x)=1\)) is represented by \(\circ\) (resp. \(\bullet\)). For example, the diagram
(2.6)
represents the \(2\)-poset \(X=\{x_{1},\ldots,x_{10}\}\) with order \(x_{1}<x_{2}<x_{3}<x_{4}<x_{5}>x_{6}<x_{7}<x_{8}>x_{9}<x_{10}\) and label \((\delta_{X}(x_{1}),\ldots,\delta_{X}(x_{10}))=(1,0,1,1,0,1,0,0,1,0)\).
In [5], it is shown that the Schur MZVs of anti-hook shape have the following expression in terms of the associated integral of a \(2\)-poset. This can be regarded as a simultaneous generalization of the integral expressions of MZVs and MZSVs.
**Theorem 2.4** ([5, Theorem 4.1]).: _For an index \(\boldsymbol{l}=(l_{1},\ldots,l_{s})\) and a non-empty admissible index \(\boldsymbol{k}=(k_{1},\ldots,k_{r})\), we have_
\[\zeta\binom{\boldsymbol{l}}{\boldsymbol{k}}=I(X_{\boldsymbol{l},\boldsymbol{k}})\,,\]
_where \(X_{\boldsymbol{l},\boldsymbol{k}}\) denotes the \(2\)-poset whose Hasse diagram consists of an ascending chain carrying, for \(i=1,\ldots,r\) from bottom to top, one element \(\bullet\) followed by \(k_{i}-1\) elements \(\circ\), together with a zigzag hanging from its maximal element which, for \(j=s,s-1,\ldots,1\), first descends to an element \(\bullet\) and then ascends through \(l_{j}-1\) elements \(\circ\)._
For example, for the \(2\)-poset \(X\) given by (2.6), we have \(\zeta\binom{2,3}{2,1,2}=I(X)\).
In our proof of Proposition 2.1, we consider a kind of extention of the integral \(I(X)\) to non-admissible \(2\)-posets \(X\). This extension is given by using the notion of "admissible part" which we define below.
Let \(\mathscr{X}\) be the set of isomorphism classes of \(2\)-posets, and \(\mathbb{Q}\mathscr{X}\) denote the \(\mathbb{Q}\)-vector space freely generated by this set. We equip \(\mathbb{Q}\mathscr{X}\) with a \(\mathbb{Q}\)-algebra structure by setting
\([X]\cdot[Y]\coloneqq[X\sqcup Y]\). If we let \(\mathscr{X}^{0}\subset\mathscr{X}\) be the subset consisting of admissible 2-posets, its \(\mathbb{Q}\)-span \(\mathbb{Q}\mathscr{X}^{0}\) becomes a \(\mathbb{Q}\)-subalgebra of \(\mathbb{Q}\mathscr{X}\) and the integral (2.5) defines a \(\mathbb{Q}\)-algebra homomorphism \(I\colon\mathbb{Q}\mathscr{X}^{0}\to\mathbb{R}\).
Let \(\mathscr{T}\subset\mathscr{X}\) be the subset of totally ordered 2-posets. Then a \(\mathbb{Q}\)-linear map \(\mathbb{Q}\mathscr{X}\to\mathbb{Q}\mathscr{T}\), which we call the _totally ordered expansion_, is defined by
\[[X]=[X,\leq,\delta]\longmapsto[X]^{\rm tot}\coloneqq\sum_{\leq^{\prime}}[X, \leq^{\prime},\delta],\]
where \([X]=[X,\leq,\delta]\) is the isomorphism class of any 2-poset \(X\) and \(\leq^{\prime}\) runs over the total orders on the set \(X\) which are refinements of the original partial order \(\leq\). We have \([X]^{\rm tot}=[X_{a}^{b}]^{\rm tot}+[X_{b}^{a}]^{\rm tot}\) for any 2-poset \(X\) and non-comparable elements \(a,b\in X\), where \(X_{a}^{b}\) denotes the 2-poset obtained from \(X\) by adjoining the relation \(a<b\). Note also that the integration map \(I\colon\mathbb{Q}\mathscr{X}^{0}\to\mathbb{R}\) factors through the totally ordered expansion, i.e., we have \(I([X])=I([X]^{\rm tot})\) for any \([X]\in\mathscr{X}^{0}\).
For any 2-poset \(X\), we define its _admissible part_ \([X]^{\rm adm}\) to be the partial sum of the totally ordered expansion \([X]^{\rm tot}\) consisting of the admissible terms. To prove Proposition 2.1, for an index \((k_{1},\ldots,k_{d})\) and an integer \(m\geq 0\) one constructs a \(2\)-poset \(X_{m}(k_{1},\ldots,k_{d})\) satisfying \(I([X_{m}(k_{1},\ldots,k_{d})]^{\rm adm})=P_{m}(k_{1},\ldots,k_{d})\), and starts from a \(2\)-poset \(X\) with \(I([X]^{\rm adm})=P_{l}(\boldsymbol{k})\).
By repeating similar computations, we have
\[[X]=\sum_{i=1}^{d-1}\sum_{a_{i}=0}^{k_{i}-1}(-1)^{k_{1}+\cdots+k_{i-1}+a_{i}}[X_{ i,a_{i}}]+(-1)^{k_{1}+\cdots+k_{d-1}}[X_{d,0}]\,,\]
where
\[X_{i,a} =X_{k_{i}-1-a}(k_{i-1},\ldots,k_{1},l+1)\sqcup X_{a}(k_{i+1},\ldots,k_{d})\,,\] \[X_{d,0} =X_{k_{d}-1}(k_{d-1},\ldots,k_{1},l+1)\,.\]
Notice that \(X_{d,0}\) is always admissible because \(l>0\). By taking the admissible parts and making the integrals associated with these 2-posets, we have
\[P_{l}(\boldsymbol{k}) =\sum_{i=1}^{d-1}\sum_{a_{i}=0}^{k_{i}-1}(-1)^{k_{1}+\cdots+k_{i-1 }+a_{i}}I([X_{i,a_{i}}]^{\mathrm{adm}})\] \[\quad+(-1)^{k_{1}+\cdots+k_{d-1}}P_{k_{d}-1}(k_{d-1},\ldots,k_{1}, l+1)\,. \tag{2.8}\]
If \(m_{d}>0\) (i.e., \(\boldsymbol{k}\) is admissible), since \(X_{i,a}\) is also admissible, the formula (2.3) is immediately obtained from (2.7) and (2.8).
If \(m_{d}=0\) (i.e., \(\boldsymbol{k}\) is non-admissible), noticing that \(X_{i,a}\) is admissible if and only if \(i=d-1\) and \(a>0\), we have2 from (2.8)
Footnote 2: For a condition \(P\), we let \(\mathbb{1}_{P}\) denote the indicator function on \(P\), that is, \(\mathbb{1}_{P}=1\) if \(P\) is satisfied and \(0\) otherwise. We also put \(\overline{\mathbb{1}}_{P}=1-\mathbb{1}_{P}\). Condition with multiple lines stands for the conjunction of all lines.
\[P_{l}(\boldsymbol{k}) =\sum_{i=1}^{d-1}\sum_{a_{i}=0}^{k_{i}-1}\overline{\mathbb{1}}_{ \begin{subarray}{c}i=d-1\\ a_{i}\neq 0\end{subarray}}(-1)^{k_{1}+\cdots+k_{i-1}+a_{i}}I([X_{i,a_{i}}]^{ \mathrm{adm}})\] \[\quad+\sum_{i=d-1}^{d}\sum_{a_{i}=0}^{k_{i}-1}\overline{\mathbb{1 }}_{\begin{subarray}{c}i=d-1\\ a_{i}=0\end{subarray}}(-1)^{k_{1}+\cdots+k_{i-1}+a_{i}}P_{k_{i}-1-a_{i}}(k_{i- 1},\ldots,k_{1},l+1)P_{a_{i}}(k_{i+1},\ldots,k_{d-1},1)\,.\]
Now, we compute \(I([X_{i,a}]^{\mathrm{adm}})\). Observe that
\[[X_{i,a}]^{\mathrm{adm}}=[X_{i,a}^{(1)}]+[X_{i,a}^{(2)}]+[X_{i,a}^{(3)}]\,,\]
where \(X_{i,a}^{(1)}\), \(X_{i,a}^{(2)}\) and \(X_{i,a}^{(3)}\) are certain \(2\)-posets, the first two of which are expanded below in terms of the \(2\)-posets \(Y_{i}(\boldsymbol{p})\) treated in Lemma 2.5, for various values \(p_{0},\ldots,\check{p}_{i},\ldots,p_{d-1}\) (\(\check{p}_{i}\) means that \(p_{i}\) is skipped). Actually, setting \(m_{0}=l\), we have
\[[X_{i,a}^{(1)}]=\sum_{\begin{subarray}{c}\boldsymbol{b}_{i}=(b_{0},\ldots, \check{b}_{i},\ldots,b_{d-1})\in\mathbb{Z}_{\geq 0}^{d-1}\\ \operatorname{wt}(b_{0},\ldots,b_{i-1})=m_{i}-a_{i}\\ \operatorname{wt}(b_{i+1},\ldots,b_{d-1})=a_{i}\end{subarray}}\binom{m_{0}+b_{ 0}-1}{m_{0}-1}\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{d-1}\binom{m_{j}+b_{j}}{m_{j}}[Y_{i}(\boldsymbol{b}_{i} +\boldsymbol{m}_{i})]\,,\]
\[[X_{i,a}^{(2)}]=\sum_{\begin{subarray}{c}\boldsymbol{b}_{i}=(b_{0},\ldots, \check{b}_{i},\ldots,b_{d-1})\in\mathbb{Z}_{\geq 0}^{d-1}\\ \operatorname{wt}(b_{0},\ldots,b_{i-1})=m_{i}-a_{i}\\ \operatorname{wt}(b_{i+1},\ldots,b_{d-1})=a_{i}\end{subarray}}\binom{m_{0}+b_{ 0}-1}{m_{0}}\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{d-1}\binom{m_{j}+b_{j}}{m_{j}}[Y_{i}(\boldsymbol{b}_{i} +\boldsymbol{m}_{i})]\,,\]
where \(\boldsymbol{m}_{i}=(m_{0},\ldots,\check{m}_{i},\ldots,m_{d-1})\). Hence, using the identity \(\binom{s-1}{t-1}+\binom{s-1}{t}=\binom{s}{t}\) for \(s,t\geq 0\), we have
\[[X_{i,a}^{(1)}]+[X_{i,a}^{(2)}]=\sum_{\begin{subarray}{c}\boldsymbol{b}_{i}=( b_{0},\ldots,\check{b}_{i},\ldots,b_{d-1})\in\mathbb{Z}_{\geq 0}^{d-1}\\ \operatorname{wt}(b_{0},\ldots,b_{i-1})=m_{i}-a_{i}\\ \operatorname{wt}(b_{i+1},\ldots,b_{d-1})=a_{i}\end{subarray}}\prod_{ \begin{subarray}{c}j=0\\ j\neq i\end{subarray}}^{d-1}\binom{m_{j}+b_{j}}{m_{j}}[Y_{i}(\boldsymbol{b}_{i} +\boldsymbol{m}_{i})]\,.\]
Substituting this into (2.9) and changing the order of summations, we see that
\[P_{l}(\boldsymbol{k})=\sum_{i=1}^{d-1}(-1)^{l+k_{i}}\sum_{ \begin{subarray}{c}\boldsymbol{b}_{i}=(b_{0},\ldots,\check{b}_{i},\ldots,b_{d- 1})\in\mathbb{Z}_{\geq 1}^{d-1}\\ \operatorname{wt}(\boldsymbol{b}_{i})=\operatorname{wt}(\boldsymbol{k})+l-1 \end{subarray}}(-1)^{b_{0}+\cdots+b_{i-1}}\prod_{\begin{subarray}{c}j=0\\ j\neq i\end{subarray}}^{d-1}\binom{b_{j}-1}{m_{j}}I[Y_{i}(\boldsymbol{b}_{i}- \{1\}^{d-1})]\] \[+\sum_{i=1}^{d}\sum_{a_{i}=0}^{k_{i}-1}(-1)^{k_{1}+\cdots+k_{i-1}+ a_{i}}P_{k_{i}-1-a_{i}}(k_{i-1},\ldots,k_{1},l+1)P_{a_{i}}(k_{i+1},\ldots,k_{d-1},1)\,.\]
Therefore, one obtains (2.4) by employing the expression of \(I([Y_{i}(\boldsymbol{b}_{i}-\{1\}^{d-1})])\) given in Lemma 2.5 below. This completes the proof.
**Lemma 2.5**.: _For \(1\leq i\leq d-1\) and \(\boldsymbol{p}=(p_{0},\ldots,\check{p}_{i},\ldots,p_{d-1})\in\mathbb{Z}_{\geq 0 }^{d-1}\) with \(p_{0}>0\), we have_
\[I([Y_{i}(\boldsymbol{p})]) =\sum_{j=i}^{d-1}\sum_{c_{j}=0}^{p_{j}-1}(-1)^{c_{j}+p_{j+1}+ \cdots+p_{d-1}}\zeta\binom{c_{j}+1,p_{j+1}+1,\cdots,p_{d-1}+1}{p_{i-1}+1, \cdots,p_{1}+1,p_{0}+1}\] \[\times\zeta(p_{i+1}+1,\ldots,p_{j-1}+1,p_{j}-c_{j}+1)\,,\]
_where we set \(p_{i}=1\)._
Proof.: We decompose \([Y_{i}(\boldsymbol{p})]\) step by step, comparing non-comparable elements as in the computations above. In this way \([Y_{i}(\boldsymbol{p})]\) is written as a signed sum of classes of \(2\)-posets, each of which is the disjoint union of a \(2\)-poset of anti-hook type and a totally ordered one. This together with Theorem 2.4 shows the desired result.
In the following we give explicit expressions of \(P_{l}(\boldsymbol{k})\) for the cases \(d=2\) and \(d=3\) which are valid whether \(\boldsymbol{k}\) is admissible or not.
**Corollary 2.6**.: _For \(k_{1},k_{2}\geq 1\) and \(l>0\), we have_
\[P_{l}(k_{1},k_{2})=(-1)^{k_{2}}\sum_{\begin{subarray}{c}w_{1},w_ {2}\geq 2\\ w_{1}+w_{2}=k_{1}+k_{2}+l\end{subarray}}(-1)^{w_{1}}\binom{w_{1}-1}{k_{2}-1} \binom{w_{2}-1}{l}\zeta(w_{1})\zeta(w_{2})\] \[\quad+(-1)^{k_{1}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2} \geq 2\\ w_{1}+w_{2}=k_{1}+k_{2}+l\end{subarray}}\binom{w_{1}-1}{k_{1}-1}\binom{w_{2}- 1}{l}\zeta(w_{1},w_{2})+\mathbb{1}_{k_{2}=1}\binom{l+k_{1}-1}{k_{1}-1}\zeta \binom{1}{l+k_{1}}\,.\]
**Corollary 2.7**.: _For \(k_{1},k_{2},k_{3}\geq 1\) and \(l>0\), we have_
\[P_{l}(k_{1},k_{2},k_{3})=(-1)^{k_{1}+k_{2}}\sum_{\begin{subarray} {c}w_{1},w_{2}\geq 1,w_{3}\geq 2\\ w_{1}+w_{2}+w_{3}=k_{1}+k_{2}+k_{3}+l\end{subarray}}\binom{w_{1}-1}{k_{2}-1} \binom{w_{2}-1}{k_{1}-1}\binom{w_{3}-1}{l}\zeta(w_{1},w_{2},w_{3})\] \[\quad+(-1)^{k_{2}+k_{3}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2} \geq 2,w_{3}\geq 2\\ w_{1}+w_{2}+w_{3}=k_{1}+k_{2}+k_{3}+l\end{subarray}}(-1)^{w_{1}+w_{2}} \binom{w_{1}-1}{k_{2}-1}\binom{w_{2}-1}{k_{3}-1}\binom{w_{3}-1}{l}\zeta(w_{1},w_{2})\zeta(w_{3})\] \[\quad+(-1)^{k_{1}+k_{3}}\sum_{\begin{subarray}{c}w_{1}\geq 2,w_{2} \geq 1,w_{3}\geq 2\\ w_{1}+w_{2}+w_{3}=k_{1}+k_{2}+k_{3}+l\end{subarray}}(-1)^{w_{1}}\binom{w_{1}- 1}{k_{3}-1}\binom{w_{2}-1}{k_{1}-1}\binom{w_{3}-1}{l}\zeta(w_{1})\zeta(w_{2},w _{3})\] \[\quad+\mathbb{1}_{k_{3}=1}(-1)^{k_{2}+1}\sum_{\begin{subarray}{c }b_{0}\geq 2,b_{2}\geq 1\\ b_{0}+b_{2}=k_{1}+k_{2}+l\end{subarray}}(-1)^{b_{2}}\binom{b_{0}-1}{l} \binom{b_{2}-1}{k_{2}-1}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\times\left\{(-1)^{b_{2}}\zeta\binom{1,b_{2}}{b_{0}}+\sum_{c_{2}=1}^{b_ {2}-1}(-1)^{c_{2}}\zeta\binom{c_{2}}{b_{0}}\zeta(b_{2}-c_{2}+1)\right\}\] \[\quad+\mathbb{1}_{k_{3}=1}(-1)^{k_{1}}\sum_{\begin{subarray}{c}b_{ 0}\geq 2,b_{2}\geq 1\\ b_{0}+b_{1}=k_{1}+k_{2}+l\end{subarray}}\binom{b_{0}-1}{l}\binom{b_{1}-1}{k_{1 }-1}\zeta\binom{1}{b_{1},b_{0}}\,.\]
## 3. Ribbons
### Preparation
In this section, we study the sums of Schur MZVs for ribbon diagrams. Recall that a skew Young diagram is called a _ribbon_ if it is connected and contains no \(2\times 2\) block of boxes. Explicitly, such a ribbon can be drawn as
(3.1)
where the integers \(s_{1}\geq 0\), \(s_{2},\ldots,s_{n},r_{1},\ldots,r_{n}>0\) indicate the numbers of boxes.
**Definition 3.1**.: For integers \(w,s_{1},\ldots,s_{n}\geq 0\) and \(r_{1},\ldots,r_{n}>0\), we write
\[S_{w}\binom{s_{1},\ldots,s_{n}}{r_{1},\ldots,r_{n}}\coloneqq\sum_{\begin{subarray}{c}\boldsymbol{l}_{1},\ldots,\boldsymbol{l}_{n},\,\boldsymbol{k}_{1},\ldots,\boldsymbol{k}_{n}\\ \boldsymbol{k}_{i}\text{ admissible}\\ \text{dep}(\boldsymbol{l}_{i})=s_{i},\ \text{dep}(\boldsymbol{k}_{i})=r_{i}\\ \sum_{i}\text{wt}(\boldsymbol{k}_{i})+\sum_{i}\text{wt}(\boldsymbol{l}_{i})=w\end{subarray}}\zeta\binom{\boldsymbol{l}_{1},\ldots,\boldsymbol{l}_{n}}{\boldsymbol{k}_{1},\ldots,\boldsymbol{k}_{n}},\]
where we define as a generalization of (2.2)
\[\zeta\binom{\boldsymbol{l}_{1},\ldots,\boldsymbol{l}_{n}}{\boldsymbol{k}_{1},\ldots,\boldsymbol{k}_{n}}\coloneqq\sum_{\begin{subarray}{c}0<b_{i,1}\leq\cdots\leq b_{i,s_{i}+1}\\ 0<a_{i,1}<\cdots<a_{i,r_{i}}\\ b_{i,s_{i}+1}=a_{i,r_{i}}\ (i=1,\ldots,n)\\ b_{i+1,1}<a_{i,1}\ (i=1,\ldots,n-1)\end{subarray}}\prod_{i=1}^{n}\frac{1}{a_{i,1}^{k_{i,1}}\cdots a_{i,r_{i}}^{k_{i,r_{i}}}b_{i,1}^{l_{i,1}}\cdots b_{i,s_{i}}^{l_{i,s_{i}}}} \tag{3.2}\]
for indices \(\boldsymbol{l}_{i}=(l_{i,1},\ldots,l_{i,s_{i}})\) of depth \(s_{i}\) and \(\boldsymbol{k}_{i}=(k_{i,1},\ldots,k_{i,r_{i}})\) of depth \(r_{i}\). Note that the series (3.2) is meaningful even if some \(s_{i}\) is zero, i.e., \(\boldsymbol{l}_{i}=\varnothing\).
**Remark 3.2**.: Notice that only for \(s_{2},\ldots,s_{n}>0\) the \(S_{w}\binom{s_{1},\ldots,s_{n}}{r_{1},\ldots,r_{n}}\) gives the sum over all admissible tableaux of shape (3.1) as in (1.2). For example, we have
\[S_{w}\binom{2}{2}=\sum_{\begin{subarray}{c}a,b,d\geq 1,\,c\geq 2\\ a+b+c+d=w\end{subarray}}\zeta\left(\begin{array}{ccc}&&\boxed{d}\\ \boxed{a}&\boxed{b}&\boxed{c}\end{array}\right)=S_{w}\big{(}(3,3)/(2)\big{)}\]
but
\[S_{w}\binom{2,0}{1,1}=\sum_{\begin{subarray}{c}a,b\geq 1,\,c,d\geq 2\\ a+b+c+d=w\end{subarray}}\zeta\left(\begin{array}{ccc}&&\boxed{d}\\ \boxed{a}&\boxed{b}&\boxed{c}\end{array}\right)\neq S_{w}\big{(}(3,3)/(2)\big{)}.\]
In the latter, the index \((d)\) is required to be admissible, i.e., \(d\geq 2\).
Note that \(S_{w}\binom{s_{1},\ldots,s_{n}}{r_{1},\ldots,r_{n}}\) is nonzero only when
\[w\geq s_{1}+\cdots+s_{n}+r_{1}+\cdots+r_{n}+n.\]
Our basic strategy of computing these sums on ribbons is to reduce the number of corners \(n\) by using the following formula:
**Proposition 3.3**.: _Let \(s_{1},\ldots,s_{n}\geq 0\) and \(r_{1},\ldots,r_{n}>0\) be integers. For \(1\leq i\leq n-1\) with \(r_{i}\geq 2\), we have_
\[S_{w}\binom{s_{1},\ldots,s_{i},s_{i+1},\ldots,s_{n}}{r_{1},\ldots,r_{i},r_{i+1},\ldots,r_{n}}+S_{w}\binom{s_{1},\ldots,\ \ s_{i},\ \ s_{i+1}+1,\ldots,s_{n}}{r_{1},\ldots,r_{i}-1,\ \ r_{i+1},\ \ \ldots,r_{n}}\] \[=\sum_{w_{1}+w_{2}=w}S_{w_{1}}\binom{s_{1},\ldots,s_{i}}{r_{1}, \ldots,r_{i}}\cdot S_{w_{2}}\binom{s_{i+1},\ldots,s_{n}}{r_{i+1},\ldots,r_{n}}. \tag{3.3}\]
Proof.: By switching the inequality \(b_{i+1,1}<a_{i,1}\) in (3.2) (for the given \(i\)) to the opposite \(a_{i,1}\leq b_{i+1,1}\), we deduce that
\[\zeta\binom{\boldsymbol{l}_{1},\,\ldots,\,\boldsymbol{l}_{i},\, \boldsymbol{l}_{i+1},\,\ldots,\boldsymbol{l}_{n}}{\boldsymbol{k}_{1},\ldots, \boldsymbol{k}_{i},\boldsymbol{k}_{i+1},\ldots,\boldsymbol{k}_{n}}+\zeta \binom{\boldsymbol{l}_{1},\,\ldots,\,\boldsymbol{l}_{i},\,\boldsymbol{l}_{i+1 }^{\prime},\,\ldots,\,\boldsymbol{l}_{n}}{\boldsymbol{k}_{1},\ldots, \boldsymbol{k}_{i}^{\prime},\,\boldsymbol{k}_{i+1},\ldots,\boldsymbol{k}_{n}}\] \[=\zeta\binom{\boldsymbol{l}_{1},\,\ldots,\,\boldsymbol{l}_{i}}{ \boldsymbol{k}_{1},\ldots,\boldsymbol{k}_{i}}\cdot\zeta\binom{\boldsymbol{l} _{i+1},\,\ldots,\,\boldsymbol{l}_{n}}{\boldsymbol{k}_{i+1},\ldots,\boldsymbol {k}_{n}}, \tag{3.4}\]
where \(\boldsymbol{k}_{i}^{\prime}=(k_{i,2},\ldots,k_{i,r_{i}})\) and \(\boldsymbol{l}_{i+1}^{\prime}=(k_{i,1},l_{i+1,1},\ldots,l_{i+1,s_{i+1}})\). Then (3.3) follows.
By using Proposition 3.3 repeatedly, the sums on general ribbons are expressed in terms of the values of the type \(S_{w}\binom{s,\ 0,\ \ldots,\ 0}{r_{1},r_{2},\ldots,r_{n}}\). For example, we have
\[S_{w}\binom{s,\ 1}{r_{1},r_{2}} =\sum_{w_{1}+w_{2}=w}S_{w_{1}}\binom{s}{r_{1}+1}S_{w_{2}}\binom{0 }{r_{2}}-S_{w}\binom{s,\ \ 0}{r_{1}+1,r_{2}},\] \[S_{w}\binom{s,\ 2}{r_{1},r_{2}} =\sum_{w_{1}+w_{2}=w}S_{w_{1}}\binom{s}{r_{1}+1}S_{w_{2}}\binom{1 }{r_{2}}-S_{w}\binom{s,\ \ 1}{r_{1}+1,r_{2}}\] \[=\sum_{w_{1}+w_{2}=w}S_{w_{1}}\binom{s}{r_{1}+1}S_{w_{2}}\binom{1 }{r_{2}}\] \[\qquad-\sum_{w_{1}+w_{2}=w}S_{w_{1}}\binom{s}{r_{1}+2}S_{w_{2}} \binom{0}{r_{2}}+S_{w}\binom{s,\ \ 0}{r_{1}+2,r_{2}}\]
and so on (cf. Lemma 3.11). For the latter type sums, the following formula holds:
**Theorem 3.4**.: _For \(w\geq 0\), \(s\geq 0\) and \(r_{1},\ldots,r_{n}>0\), we have_
\[S_{w}\binom{s,\ 0,\ \ldots,\ 0}{r_{1},r_{2},\ldots,r_{n}}=\sum_{\begin{subarray}{c}t_{1},\ldots,t_{n}\geq 0\\ t_{1}+\cdots+t_{n}=s\end{subarray}}\sum_{\begin{subarray}{c}w_{i}\geq r_{i}+t_{i}+1\\ w_{1}+\cdots+w_{n}=w\end{subarray}}\prod_{i=1}^{n}\binom{w_{i}-1}{t_{i}}\zeta(w_{1},\ldots,w_{n}). \tag{3.5}\]
Proof.: Put \(r\coloneqq r_{1}+\cdots+r_{n}\). Then the left hand side is the sum of the series
\[\zeta\binom{l_{1},\ldots,l_{s}}{k_{1},\ldots,k_{r}}=\sum_{0<a_{1}<\cdots<a_{r}\geq b_{s}\geq\cdots\geq b_{1}>0}\frac{1}{a_{1}^{k_{1}}\cdots a_{r}^{k_{r}}b_{1}^{l_{1}}\cdots b_{s}^{l_{s}}}, \tag{3.6}\]
where \((l_{1},\ldots,l_{s})\) runs through indices of depth \(s\) and \((k_{1},\ldots,k_{r})\) runs through indices of depth \(r\) such that \(k_{p}\geq 2\) for \(p\in\{r_{n},r_{n}+r_{n-1},\ldots,r\}\), satisfying \(l_{1}+\cdots+l_{s}+k_{1}+\cdots+k_{r}=w\). By "stuffling", i.e., classifying all possible orders of \(a_{p}\)'s and \(b_{q}\)'s, this series is expanded into a certain sum of MZVs \(\zeta(w_{1},\ldots,w_{r+u})\) of weight \(w\) with \(0\leq u\leq s\). Here each of the entries of \((w_{1},\ldots,w_{r+u})\) is of the form
\[k_{p}+l_{q}+\cdots+l_{q^{\prime}}\text{ or }l_{q}+\cdots+l_{q^{\prime}}, \tag{3.7}\]
where \(l_{q},\ldots,l_{q^{\prime}}\) are some consecutive members of \(l_{1},\ldots,l_{s}\), possibly zero members in the former case and at least one member in the latter.
Let us fix an index \((w_{1},\ldots,w_{r+u})\) of weight \(w\) with \(0\leq u\leq s\) and count how many times \(\zeta(w_{1},\ldots,w_{r+u})\) appears in \(S_{w}\binom{s}{r_{1},r_{2},\ldots,r_{n}}\) when all series (3.6) are expanded as above. First, choose the numbers \(u_{1},\ldots,u_{n}\geq 0\) satisfying \(u_{1}+\cdots+u_{n}=u\), and consider the cases that \(w_{r_{n}+u_{n}+\cdots+r_{i}+u_{i}}\) contains \(k_{r_{n}+\cdots+r_{i}}\) as a summand for \(i=1,\ldots,n\). Then the number of possibilities of the places where other \(k_{p}\)'s appear is \(\prod_{i=1}^{n}\binom{r_{i}-1+u_{i}}{u_{i}}\). Moreover, each entry of \((w_{1},\ldots,w_{r+u})\) is decomposed into a sum of type (3.7) and the total number of the plus symbol '\(+\)' is \(s-u\). The number of possible such decompositions is
\[\binom{(w_{1}-1)+\cdots+(w_{r_{n}+u_{n}-1}-1)+(w_{r_{n}+u_{n}}-2) +\cdots+(w_{r+u}-2)}{s-u}\] \[=\binom{w-n-r-u}{s-u}.\]
Therefore we obtain that
\[S_{w}\binom{s,\ 0,\ \ldots,\ 0}{r_{1},r_{2},\ldots,r_{n}}=\sum_{\begin{subarray}{c}u_{1},\ldots,u_{n}\geq 0\\ u=u_{1}+\cdots+u_{n}\leq s\end{subarray}}\binom{w-n-r-u}{s-u}\prod_{i=1}^{n}\binom{r_{i}+u_{i}-1}{u_{i}}\] \[\times\sum_{\begin{subarray}{c}w_{1},\ldots,w_{r+u}\geq 1\\ w_{p}\geq 2\ \bigl{(}p=\sum_{j=i}^{n}(r_{j}+u_{j}),\ 1\leq i\leq n\bigr{)}\\ w_{1}+\cdots+w_{r+u}=w\end{subarray}}\zeta(w_{1},\ldots,w_{r+u}).\]
By applying Ohno's relation to the last sum, we see that this is equal to
\[S_{w}\binom{s,\ 0,\ \ldots,\ 0}{r_{1},r_{2},\ldots,r_{n}}=\sum_{\begin{subarray}{c}u_{1},\ldots,u_{n}\geq 0\\ u=u_{1}+\cdots+u_{n}\leq s\end{subarray}}\binom{w-n-r-u}{s-u}\prod_{i=1}^{n}\binom{r_{i}+u_{i}-1}{u_{i}}\\ \times\sum_{\begin{subarray}{c}w_{i}\geq r_{i}+u_{i}+1\\ w_{1}+\cdots+w_{n}=w\end{subarray}}\zeta(w_{1},\ldots,w_{n}). \tag{3.8}\]
By means of the identity
\[\binom{w-n-r-u}{s-u}=\sum_{\begin{subarray}{c}v_{1},\ldots,v_{n}\geq 0\\ v_{1}+\cdots+v_{n}=s-u\end{subarray}}\prod_{i=1}^{n}\binom{w_{i}-1-r_{i}-u_{i} }{v_{i}},\]
one can rewrite the expression (3.8) as
\[\sum_{\begin{subarray}{c}u_{1},\ldots,u_{n}\geq 0\\ v_{1},\ldots,v_{n}\geq 0\\ u_{1}+v_{1}+\cdots+u_{n}+v_{n}=s\end{subarray}}\sum_{\begin{subarray}{c}w_{i}\geq r_{i}+u_{i}+1\\ w_{1}+\cdots+w_{n}=w\end{subarray}}\prod_{i=1}^{n}\binom{w_{i}-1-r_{i}-u_{i}}{v_{i}}\binom{r_{i}+u_{i}-1}{u_{i}}\zeta(w_{1},\ldots,w_{n})\]
\[=\sum_{\begin{subarray}{c}t_{1},\ldots,t_{n}\geq 0\\ t_{1}+\cdots+t_{n}=s\end{subarray}}\sum_{\begin{subarray}{c}w_{i}\geq r_{i}+t_{i}+1\\ w_{1}+\cdots+w_{n}=w\end{subarray}}\prod_{i=1}^{n}\biggl{(}\sum_{u_{i}=0}^{t_{i}}\binom{w_{i}-1-r_{i}-u_{i}}{t_{i}-u_{i}}\binom{r_{i}+u_{i}-1}{u_{i}}\biggr{)}\zeta(w_{1},\ldots,w_{n}).\]
Here we put \(t_{i}=u_{i}+v_{i}\) and use that
\[\binom{w_{i}-1-r_{i}-u_{i}}{t_{i}-u_{i}}=0\quad\text{ for }r_{i}+u_{i}+1\leq w _{i}\leq r_{i}+t_{i}.\]
Thus the theorem follows from the identity
\[\sum_{u_{i}=0}^{t_{i}}\binom{w_{i}-1-r_{i}-u_{i}}{t_{i}-u_{i}}\binom{r_{i}+u_{ i}-1}{u_{i}}=\binom{w_{i}-1}{t_{i}}.\qed\]
**Corollary 3.5**.: _For \(n\geq 1\) and integers \(s_{1},\ldots,s_{n}\geq 0\) and \(r_{1},\ldots,r_{n}>0\) the \(S_{w}\binom{s_{1},\ldots,s_{n}}{r_{1},\ldots,r_{n}}\) with \(w\geq s_{1}+\cdots+s_{n}+r_{1}+\cdots+r_{n}+n\) can be written as a \(\mathbb{Q}\)-linear combination of MZVs of weight \(w\) and depth \(\leq n\)._
Proof.: This is a direct consequence of Proposition 3.3 and Theorem 3.4.
**Corollary 3.6**.: _For any \(s\geq 0\) and \(r_{1},\ldots,r_{n}>0\), the symmetric sum_
\[\sum_{\sigma\in\mathfrak{S}_{n}}S_{w}\binom{s,\quad 0,\quad\ldots,\quad 0}{r_{ \sigma(1)},r_{\sigma(2)},\ldots,r_{\sigma(n)}}\]
_is a polynomial in single zeta values. In particular, \(S_{w}\binom{s,0,\ldots,0}{r,r,\ldots,r}\) is a polynomial in single zeta values._
Proof.: By (3.5), this symmetric sum is equal to
\[\sum_{\begin{subarray}{c}t_{1},\ldots,t_{n}\geq 0\\ t_{1}+\cdots+t_{n}=s\end{subarray}}\sum_{\begin{subarray}{c}w_{i}\geq r_{i}+t _{i}+1\\ w_{1}+\cdots+w_{n}=w\end{subarray}}\prod_{i=1}^{n}\binom{w_{i}-1}{t_{i}}\sum_{ \sigma\in\mathfrak{S}_{n}}\zeta(w_{\sigma(1)},\ldots,w_{\sigma(n)}).\]
Thus the claim follows from Hoffman's symmetric sum formula [3, Theorem 2.2].
**Example 3.7**.: The case of \(n=1\) will be treated in Theorem 3.8. Here let us consider the case of \(n=2\) and \(r_{1}=r_{2}=r\). We have
\[S_{w}\binom{s,0}{r,r} =\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}+t_{2}=s\end{subarray}}\sum_{\begin{subarray}{c}w_{i}\geq r+t_{i}+1\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}}\zeta( w_{1},w_{2})\] \[=\frac{1}{2}\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}+t_{2}=s\end{subarray}}\sum_{\begin{subarray}{c}w_{i}\geq r+t_{i}+1\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}}\big{(} \zeta(w_{1},w_{2})+\zeta(w_{2},w_{1})\big{)}\] \[=\frac{1}{2}\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}+t_{2}=s\end{subarray}}\sum_{\begin{subarray}{c}w_{i}\geq r+t_{i}+1\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}}\big{(} \zeta(w_{1})\zeta(w_{2})-\zeta(w)\big{)}.\]
This is a kind of sum formula of polynomial type. For a sum formula of bounded type, see §3.3.
### Sum formulas of single type
In this subsection, we present sum formulas of single type for two special types of ribbons. The first is a simultaneous generalization of the classical sum formulas for the MZVs and MZSVs stated in (1.1).
**Theorem 3.8**.: _For any integers \(r\geq 1\), \(s\geq 0\) and \(w\geq s+r+1\), we have_
\[S_{w}\binom{s}{r}=\binom{w-1}{s}\zeta(w). \tag{3.9}\]
Proof.: This immediately follows from Theorem 3.4.
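For instance, for \(s=1\), \(r=2\) and \(w=5\), formula (3.9) states that
\[S_{5}\binom{1}{2}=\zeta\binom{1}{1,3}+\zeta\binom{1}{2,2}+\zeta\binom{2}{1,2}=\binom{4}{1}\zeta(5)=4\,\zeta(5)\,,\]
the three terms being all anti-hook Schur MZVs of this shape and weight \(5\). (For \(s=r=1\) one has \(\zeta\binom{l}{k}=\zeta^{\star}(l,k)\), so (3.9) recovers the second formula in (1.1) for depth \(2\).)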
The second formula is for the "stair of tread one" shape. The proof is an application of Proposition 3.3 and Theorem 3.4.
**Theorem 3.9**.: _For any integers \(r\geq 1\), \(n\geq 1\) and \(w\geq(r+2)n+1\), we have_
\[S_{w}\binom{\{1\}^{n-1},\ \ 1}{\{r\}^{n-1},r+1}=c_{w,r}(n)\zeta(w), \tag{3.10}\]
_where_
\[c_{w,r}(n)\coloneqq\frac{w-1}{n}\binom{w-(r+1)n-2}{n-1}.\]
See (1.4) in the introduction for examples in the cases \(r=n=2\) and \(r=1\), \(n=3\).
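Concretely, in these two cases (3.10) specializes to
\[S_{w}\binom{1,1}{2,3}=\frac{(w-1)(w-8)}{2}\,\zeta(w)\quad(w\geq 9),\qquad S_{w}\binom{1,1,1}{1,1,2}=\frac{(w-1)(w-8)(w-9)}{6}\,\zeta(w)\quad(w\geq 10).\]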
**Remark 3.10**.: The coefficient \(c_{w,r}(n)\) is a positive integer. In fact,
\[c_{w,r}(n)=(r+1)\binom{w-(r+1)n-2}{n-1}+\binom{w-(r+1)n-1}{n}.\]
Proof of Theorem 3.9.: We prove (3.10) by induction on \(n\). For \(n=1\), this is a special case of (3.9). Let us assume \(n>1\) and for \(1\leq i\leq n\), put
\[S_{w,r}(n,i)\coloneqq S_{w}\binom{\{1\}^{i-1},\ \ 1,\ \ \ \ \{0\}^{n-i}}{\{r\}^{i-1},r+1,\{r+1\}^{n-i}},\qquad T_{w,r}(n) \coloneqq S_{w}\binom{\{0\}^{n}}{\{r+1\}^{n}}.\]
Then, for \(1\leq i\leq n-1\), Proposition 3.3 shows that
\[S_{w,r}(n,i)+S_{w,r}(n,i+1)=\sum_{\begin{subarray}{c}w_{1}\geq(r+2)i+1\\ w_{2}\geq(r+2)(n-i)\end{subarray}}S_{w_{1},r}(i,i)\cdot T_{w_{2},r}(n-i)\]
(here and in what follows, we omit the "total weight \(=w\)" condition like \(w_{1}+w_{2}=w\)). The induction hypothesis gives \(S_{w_{1},r}(i,i)=c_{w_{1},r}(i)\zeta(w_{1})\) for \(1\leq i\leq n-1\), while Theorem 3.4 shows that
\[T_{w_{2},r}(n-i)=\sum_{\begin{subarray}{c}w_{1}^{\prime},\ldots,w_{n-i}^{ \prime}\geq r+2\\ w_{1}^{\prime}+\cdots+w_{n-i}^{\prime}=w_{2}\end{subarray}}\zeta(w_{1}^{\prime},\ldots,w_{n-i}^{\prime}).\]
Hence we obtain
\[\begin{split} S_{w,r}(n,i)+S_{w,r}(n,i+1)&=\sum_{ \begin{subarray}{c}w_{1}\geq(r+2)i+1\\ w_{1}^{\prime},\ldots,w_{n-i}^{\prime}\geq r+2\end{subarray}}c_{w_{1},r}(i) \zeta(w_{1})\zeta(w_{1}^{\prime},\ldots,w_{n-i}^{\prime})\\ &=A_{i}+B_{i},\end{split} \tag{3.11}\]
where \(A_{i}\) and \(B_{i}\) are defined by
\[A_{i}\coloneqq\sum_{\begin{subarray}{c}w_{1}\geq(r+2)i+1\\ w_{1}^{\prime},\ldots,w_{n-i}^{\prime}\geq r+2\end{subarray}}c_{w_{1},r}(i) \big{\{}\zeta(w_{1},w_{1}^{\prime},\ldots,w_{n-i}^{\prime})+\cdots+\zeta(w_{ 1}^{\prime},\ldots,w_{n-i}^{\prime},w_{1})\big{\}}\]
\[=\sum_{j=1}^{n-i+1}\sum_{\begin{subarray}{c}w_{1},\dots,w_{n-i+1}\geq r+2\\ w_{j}\geq(r+2)i+1\end{subarray}}c_{w_{j},r}(i)\zeta(w_{1},\dots,w_{n-i+1}) \tag{3.12}\]
and
\[B_{i} \coloneqq\sum_{\begin{subarray}{c}w_{1}\geq(r+2)i+1\\ w_{1}^{\prime},\dots,w_{n-i}^{\prime}\geq r+2\end{subarray}}c_{w_{1},r}(i) \big{\{}\zeta(w_{1}+w_{1}^{\prime},\dots,w_{n-i}^{\prime})+\dots+\zeta(w_{1}^{ \prime},\dots,w_{1}+w_{n-i}^{\prime})\big{\}}\] \[=\sum_{j=1}^{n-i}\sum_{\begin{subarray}{c}w_{1},\dots,w_{n-i}\geq r +2\\ w_{j}\geq(r+2)(i+1)+1\end{subarray}}\sum_{a=(r+2)i+1}^{w_{j}-(r+2)}c_{a,r}(i) \zeta(w_{1},\dots,w_{n-i}).\]
Note that the definition of \(A_{i}\) works for \(i=n\) and gives \(A_{n}=c_{w,r}(n)\zeta(w)\).
For \(1\leq i\leq n-1\), we have \(A_{i+1}=B_{i}\) since
\[\sum_{a=(r+2)i+1}^{w_{j}-(r+2)}c_{a,r}(i) =\sum_{b=0}^{k}\frac{(r+2)i+b}{i}\binom{b+i-1}{i-1}\qquad(k\coloneqq w _{j}-(r+2)(i+1)-1)\] \[=(r+1)\sum_{b=0}^{k}\binom{b+i-1}{i-1}+\sum_{b=0}^{k}\binom{b+i}{i}\] \[=(r+1)\binom{k+i}{i}+\binom{k+i+1}{i+1}=c_{w_{j},r}(i+1).\]
Thus (3.11) says that
\[S_{w,r}(n,i)+S_{w,r}(n,i+1)=A_{i}+A_{i+1}.\]
This implies inductively that \(S_{w,r}(n,i)=A_{i}\), starting from
\[S_{w,r}(n,1)=\sum_{j=1}^{n}\sum_{\begin{subarray}{c}w_{1},\dots,w_{n}\geq r+2 \\ w_{j}\geq r+3\end{subarray}}(w_{j}-1)\zeta(w_{1},\dots,w_{n})=A_{1},\]
which is a consequence of (3.5) and (3.12). The final identity \(S_{w,r}(n,n)=A_{n}=c_{w,r}(n)\zeta(w)\) is exactly the desired formula.
### Two corners
We next study ribbon shapes with two corners which give a bounded type sum formula. The key ingredient is the sum formula weighted by the binomial coefficients (Proposition 2.1). For ribbons with two corners, we use only the case \(d=2\). For ease of calculation, we rewrite Corollary 2.6 in the following form with \(m_{1}=k_{1}-1\)
and \(m_{2}=k_{2}-1\): For integers \(m_{1},m_{2}\geq 0\) and \(w\geq m_{1}+m_{2}+3\), we have
\[\begin{split}&\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{m_{1}}\binom{w_{2}-1}{m_{2}} \zeta(w_{1},w_{2})\\ &=(-1)^{m_{2}+1}\sum_{\begin{subarray}{c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}(-1)^{w_{1}}\binom{w_{1}-1}{m_{2}}\binom{w_{2}-1}{m_ {1}+m_{2}+1-w_{1}}\zeta(w_{1})\zeta(w_{2})\\ &\qquad\qquad+(-1)^{m_{1}+1}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{m_{1}}\binom{w_{2}-1}{m_{1}+m_{2}+ 1-w_{1}}\zeta(w_{1},w_{2})\\ &\qquad\qquad+\mathbbm{1}_{m_{2}=0}\binom{w-2}{m_{1}}\big{(} \zeta(w)+\zeta(1,w-1)\big{)}.\end{split} \tag{3.13}\]
We then start with a preliminary calculation:
**Lemma 3.11**.: _For \(s_{1},s_{2}\geq 0\), \(r_{1},r_{2}>0\) and \(w\geq s_{1}+s_{2}+r_{1}+r_{2}+2\), we have_
\[S_{w}\binom{s_{1},s_{2}}{r_{1},r_{2}}=\sum_{i=0}^{s_{2}-1}(-1)^{s _{2}-i-1}\sum_{\begin{subarray}{c}w_{1}\geq s_{1}+s_{2}+r_{1}-i+1\\ w_{2}\geq r_{2}+i+1\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-1}{i}\zeta(w_{ 1})\zeta(w_{2})\\ +(-1)^{s_{2}}\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}+t_{2}=s_{1}\end{subarray}}\sum_{\begin{subarray}{c}w_{1}\geq s_{2}+r_{1 }+t_{1}+1\\ w_{2}\geq r_{2}+t_{2}+1\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}}\zeta( w_{1},w_{2}).\]
Proof.: By a repeated application of Proposition 3.3, we obtain
\[S_{w}\binom{s_{1},s_{2}}{r_{1},r_{2}} =\sum_{w_{1}+w_{2}=w}S_{w_{1}}\binom{s_{1}}{r_{1}+1}S_{w_{2}} \binom{s_{2}-1}{r_{2}}-S_{w}\binom{s_{1},\ \ s_{2}-1}{r_{1}+1,\ \ r_{2}}\] \[=\cdots\] \[=\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}\sum_{w_{1}+w_{2}=w}S_{w_{1}} \binom{s_{1}}{s_{2}+r_{1}-i}S_{w_{2}}\binom{i}{r_{2}}+(-1)^{s_{2}}S_{w}\binom{ s_{1},\ \ \ 0}{s_{2}+r_{1},r_{2}}.\]
By applying Theorem 3.8 to the first sum (taking care of the admissible range) and Theorem 3.4 to the second sum, we obtain the lemma.
Before proceeding to the general case, we show that there is indeed a polynomial type sum formula for a specific class of ribbons with two corners. With the notation in Lemma 3.11, the following formula is the case \(r_{2}=s_{2}+r_{1}\).
**Theorem 3.12**.: _For \(s_{1},s_{2}\geq 0\), \(r_{1}>0\) and \(w\geq s_{1}+2s_{2}+2r_{1}+2\), we have_
\[S_{w}\binom{s_{1},\ \ \ s_{2}}{r_{1},s_{2}+r_{1}}\]
_is a polynomial in single zeta values._
Proof.: By Lemma 3.11 with \(r_{2}=s_{2}+r_{1}\), it suffices to show
\[\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}+t_{2}=s_{1}\end{subarray}}\sum_{\begin{subarray}{c}w_{1}\geq s_{2}+r_{1} +t_{1}+1\\ w_{2}\geq s_{2}+r_{1}+t_{2}+1\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}} \zeta(w_{1},w_{2})\]
can be written as a polynomial of single zeta values. By symmetry, this sum is
\[=\frac{1}{2}\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}+t_{2}=s_{1}\end{subarray}}\sum_{\begin{subarray}{c}w_{1}\geq s_{2}+r_{1} +t_{1}+1\\ w_{2}\geq s_{2}+r_{1}+t_{2}+1\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}} \big{(}\zeta(w_{1},w_{2})+\zeta(w_{2},w_{1})\big{)}.\]
Then, the result follows by the harmonic product formula.
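Concretely, the last step uses the harmonic product of two single zeta values: for \(w_{1},w_{2}\geq 2\),
\[\zeta(w_{1})\zeta(w_{2})=\zeta(w_{1},w_{2})+\zeta(w_{2},w_{1})+\zeta(w_{1}+w_{2}),\]
so each symmetrised term \(\zeta(w_{1},w_{2})+\zeta(w_{2},w_{1})\) above equals \(\zeta(w_{1})\zeta(w_{2})-\zeta(w_{1}+w_{2})\), which is indeed a polynomial in single zeta values.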
The next theorem is a sum formula for general ribbons with two corners, which is of bounded type. Although the explicit formula itself is rather complicated, it is a direct consequence of (3.13) and Lemma 3.11.
**Theorem 3.13**.: _For \(s_{1},s_{2}\geq 0\), \(r_{1},r_{2}>0\) and \(w\geq s_{1}+s_{2}+r_{1}+r_{2}+2\), we have_
\[\begin{split} S_{w}\binom{s_{1},s_{2}}{r_{1},r_{2}}& =\binom{w-2}{s_{1}+s_{2}}\zeta(w)\\ &\qquad+\sum_{\begin{subarray}{c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}A_{w_{1},w_{2}}^{s_{1},s_{2},r_{1},r_{2}}\zeta(w_{ 1})\zeta(w_{2})+\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}B_{w_{1},w_{2}}^{s_{1},s_{2},r_{1},r_{2}} \zeta(w_{1},w_{2}),\end{split} \tag{3.14}\]
_where the integers \(A_{w_{1},w_{2}}^{s_{1},s_{2},r_{1},r_{2}}\) and \(B_{w_{1},w_{2}}^{s_{1},s_{2},r_{1},r_{2}}\) are given by_
\[A_{w_{1},w_{2}}^{s_{1},s_{2},r_{1},r_{2}}\coloneqq(-1)^{w_{1}} C_{w_{1},w_{2}}^{s_{1},s_{2}}\] \[\qquad-1_{w_{1}\leq s_{1}+r_{1}\text{ or }w_{2}\leq s_{2}+r_{2}-1} \binom{w_{1}-1}{s_{1}}\binom{w_{2}-2}{s_{2}-1}\] \[\qquad+1_{w_{1}>s_{1}+r_{1}}(-1)^{s_{1}+r_{1}+w_{1}}\binom{w_{1}-1 }{s_{1}}\binom{w_{2}-2}{s_{1}+s_{2}+r_{1}-w_{1}}\] \[\qquad+1_{r_{2}<w_{2}\leq s_{2}+r_{2}-1}(-1)^{s_{2}+r_{2}+w_{2}} \binom{w_{1}-1}{s_{1}}\binom{w_{2}-2}{r_{2}-1},\]
\[B_{w_{1},w_{2}}^{s_{1},s_{2},r_{1},r_{2}}\coloneqq C_{w_{1},w_{2}}^{s_{1},s_{ 2}}-(-1)^{s_{2}}\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}\geq w_{1}-(s_{2}+r_{1})\text{ or }t_{2}\geq w_{2}-r_{2}\\ t_{1}+t_{2}=s_{1}\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}}\]
_with_
\[C_{w_{1},w_{2}}^{s_{1},s_{2}}\coloneqq(-1)^{s_{2}}\sum_{\begin{subarray}{c}0 \leq i\leq s_{1}\\ 1\leq j\leq s_{2}\\ i+j=w_{1}\end{subarray}}\binom{w_{1}-1}{i}\binom{w_{2}-1}{s_{1}-i}-(-1)^{s_{1} }\binom{w_{1}-1}{s_{1}}\binom{w_{2}-2}{s_{1}+s_{2}-w_{1}}.\]
**Remark 3.14**.: By our convention on binomial coefficients, \(A_{w_{1},w_{2}}^{s_{1},s_{2},r_{1},r_{2}}=B_{w_{1},w_{2}}^{s_{1},s_{2},r_{1},r_ {2}}=0\) unless
\[w_{1}\leq s_{1}+s_{2}+r_{1}\quad\text{ or }\quad w_{2}\leq\max(s_{2}+r_{2}-1,s_{1}+r _{2})\]
and so (3.14) is a bounded type sum formula after expanding the product \(\zeta(w_{1})\zeta(w_{2})\) by the harmonic product formula.
Proof of Theorem 3.13.: Write the equation in Lemma 3.11 as
\[S_{w}\binom{s_{1},s_{2}}{r_{1},r_{2}}=S_{1}+S_{2}.\]
The sum \(S_{1}\) can be rewritten as
\[S_{1} =\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}\sum_{\begin{subarray}{c}w_{1 },w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-1}{i}\zeta(w_{1}) \zeta(w_{2})\] \[\quad-\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}\sum_{\begin{subarray}{c }2\leq w_{1}\leq s_{1}+s_{2}+r_{1}-i\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-1}{i}\zeta(w_{1} )\zeta(w_{2})\] \[\quad-\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}\sum_{\begin{subarray}{c }2\leq w_{2}\leq r_{2}+i\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-1}{i}\zeta(w_{1} )\zeta(w_{2})\] \[=S_{11}-S_{12}-S_{13},\quad\text{say}.\]
By the harmonic product formula, we have \(S_{11}=S_{111}+S_{112}+S_{113}\) with
\[S_{111} \coloneqq\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}\sum_{ \begin{subarray}{c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-1}{i}\zeta(w_{1 },w_{2}),\] \[S_{112} \coloneqq\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}\sum_{\begin{subarray} {c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{i}\binom{w_{2}-1}{s_{1}}\zeta(w_{1 },w_{2}),\] \[S_{113} \coloneqq\biggl{\{}\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}\sum_{ \begin{subarray}{c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-1}{i}\biggr{\}} \zeta(w).\]
For the sum \(S_{111}\), after supplementing the term with \(w_{1}=1\), by (3.13), we get
\[S_{111} =(-1)^{s_{2}}\sum_{i=0}^{s_{2}-1}\sum_{\begin{subarray}{c}w_{1}, w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}(-1)^{w_{1}}\binom{w_{1}-1}{i}\binom{w_{2}-1}{s_{1}+ i+1-w_{1}}\zeta(w_{1})\zeta(w_{2})\] \[\quad+\sum_{i=0}^{s_{2}-1}(-1)^{s_{1}+s_{2}-i}\sum_{\begin{subarray} {c}w_{1}\geq 1,w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-1}{s_{1}+i+1-w_ {1}}\zeta(w_{1},w_{2})\] \[\quad+\mathbbm{1}_{s_{2}>0}(-1)^{s_{2}-1}\binom{w-2}{s_{1}}\bigl{(} \zeta(w)+\zeta(1,w-1)\bigr{)}\] \[\quad-\mathbbm{1}_{s_{1}=0}\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1} \binom{w-2}{i}\zeta(1,w-1),\]
where we should note that the sum \(S_{111}\) is empty if \(s_{2}=0\) and \(\mathbbm{1}_{s_{2}>0}\) is inserted to cover such a degenerate case. By swapping the summation and changing variable via \(i\rightsquigarrow w_{1}-i-1\), we have
\[(-1)^{s_{2}}\sum_{i=0}^{s_{2}-1}\sum_{\begin{subarray}{c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}(-1)^{w_{1}}\binom{w_{1}-1}{i}\binom{w_{2}-1}{s_{1}+ i+1-w_{1}}\zeta(w_{1})\zeta(w_{2})\]
\[=(-1)^{s_{2}}\sum_{\begin{subarray}{c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}(-1)^{w_{1}}\zeta(w_{1})\zeta(w_{2})\sum_{ \begin{subarray}{c}0\leq i\leq s_{1}\\ 1\leq j\leq s_{2}\\ i+j=w_{1}\end{subarray}}\binom{w_{1}-1}{i}\binom{w_{2}-1}{s_{1}-i}.\]
Since \(\binom{w_{2}-1}{s_{1}+i+1-w_{1}}=0\) if \(i<w_{1}-s_{1}-1\), by changing variable via \(i\leadsto i+w_{1}-s_{1}-1\), we have
\[\sum_{i=0}^{s_{2}-1}(-1)^{s_{1}+s_{2}-i}\sum_{\begin{subarray}{c}w _{1}\geq 1,w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-1}{s_{1}+i+1-w_ {1}}\zeta(w_{1},w_{2})\] \[=-(-1)^{s_{1}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-2}{s_{1}+s_{2}- w_{1}}\zeta(w_{1},w_{2}).\]
Combining the above two formulas with \(\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}\binom{w-2}{i}=\binom{w-3}{s_{2}-1}\), we have
\[S_{111} =(-1)^{s_{2}}\sum_{\begin{subarray}{c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}(-1)^{w_{1}}\zeta(w_{1})\zeta(w_{2})\sum_{ \begin{subarray}{c}0\leq j\leq s_{1}\\ 1\leq i\leq s_{2}\\ i+j=w_{1}\end{subarray}}\binom{w_{1}-1}{j}\binom{w_{2}-1}{s_{1}-j}\] \[\quad-(-1)^{s_{1}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2 \\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-2}{s_{1}+s_{2}- w_{1}}\zeta(w_{1},w_{2})\] \[\quad-\mathbb{1}_{s_{2}>0}(-1)^{s_{2}}\binom{w-2}{s_{1}}\big{(} \zeta(w)+\zeta(1,w-1)\big{)}-\mathbb{1}_{s_{1}=0}\binom{w-3}{s_{2}-1}\zeta(1,w -1).\]
Similarly, for the sum \(S_{112}\), we can apply (3.13) to get
\[S_{112} =\sum_{i=0}^{s_{2}-1}(-1)^{s_{1}+s_{2}-i}\sum_{\begin{subarray}{ c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}(-1)^{w_{1}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-1}{s _{1}+i+1-w_{1}}\zeta(w_{1})\zeta(w_{2})\] \[\qquad+(-1)^{s_{2}}\sum_{i=0}^{s_{2}-1}\sum_{\begin{subarray}{c}w _{1}\geq 1,w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}\binom{w_{1}-1}{i}\binom{w_{2}-1}{s_{1}+i+1-w_ {1}}\zeta(w_{1},w_{2})\] \[\qquad\qquad\qquad\qquad+\mathbb{1}_{s_{1}=0}\sum_{i=0}^{s_{2}-1} (-1)^{s_{2}-i-1}\binom{w-2}{i}\big{(}\zeta(w)+\zeta(1,w-1)\big{)}\] \[\qquad\qquad\qquad\qquad\qquad+\mathbb{1}_{s_{2}>0}(-1)^{s_{2}} \binom{w-2}{s_{1}}\zeta(1,w-1).\]
By a calculation similar to that of \(S_{111}\), we have
\[S_{112} =-(-1)^{s_{1}}\sum_{\begin{subarray}{c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}(-1)^{w_{1}}\binom{w_{1}-1}{s_{1}}\binom{w_{2}-2}{s _{1}+s_{2}-w_{1}}\zeta(w_{1})\zeta(w_{2})\] \[\quad+(-1)^{s_{2}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2} \geq 2\\ w_{1}+w_{2}=w\end{subarray}}\zeta(w_{1},w_{2})\sum_{\begin{subarray}{c}0\leq i \leq s_{1}\\ 1\leq j\leq s_{2}\\ i+j=w_{1}\end{subarray}}\binom{w_{1}-1}{i}\binom{w_{2}-1}{s_{1}-i}\] \[\qquad\qquad+\mathbb{1}_{s_{1}=0}\binom{w-3}{s_{2}-1}\big{(}\zeta( w)+\zeta(1,w-1)\big{)}+\mathbb{1}_{s_{2}>0}(-1)^{s_{2}}\binom{w-2}{s_{1}}\zeta(1,w -1).\]
For the sum \(S_{113}\), by \(\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}{w_{2}-1\choose i}={w_{2}-2\choose s_{2}-1}\) we have
\[S_{113} =\biggl{\{}\sum_{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}\sum_{\begin{subarray}{c}w_{1},w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose s_{1}}{w_{2}-1\choose i}\biggr{\}}\zeta(w)\] \[=\biggl{\{}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose s_{1}}{w_{2}-2\choose s_{2}-1}-\mathbbm{1}_{s_{1}=0}{w-3\choose s_{2}-1}\biggr{\}}\zeta(w)\] \[=\biggl{\{}\mathbbm{1}_{s_{2}>0}{w-2\choose s_{1}+s_{2}}-\mathbbm{1}_{s_{1}=0}{w-3\choose s_{2}-1}\biggr{\}}\zeta(w).\]
We next consider the sum \(S_{12}\) and \(S_{13}\). By swapping the summation, we have
\[S_{12} =\sum_{\begin{subarray}{c}2\leq w_{1}\leq s_{1}+s_{2}+r_{1}\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose s_{1}}\zeta(w_{1})\zeta(w_{2})\sum _{i=0}^{\min(s_{2}-1,s_{1}+s_{2}+r_{1}-w_{1})}(-1)^{s_{2}-i-1}{w_{2}-1\choose i}\] \[=\sum_{\begin{subarray}{c}2\leq w_{1}\leq s_{1}+r_{1}\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose s_{1}}\zeta(w_{1})\zeta(w_{2})\sum _{i=0}^{s_{2}-1}(-1)^{s_{2}-i-1}{w_{2}-1\choose i}\] \[\qquad\quad+\sum_{\begin{subarray}{c}s_{1}+r_{1}<w_{1}\leq s_{1}+ s_{2}+r_{1}\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose s_{1}}\zeta(w_{1})\zeta(w_{2})\sum _{i=0}^{s_{1}+s_{2}+r_{1}-w_{1}}(-1)^{s_{2}-i-1}{w_{2}-1\choose i}\] \[=\sum_{\begin{subarray}{c}2\leq w_{1}\leq s_{1}+r_{1}\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose s_{1}}{w_{2}-2\choose s_{2}-1}\zeta (w_{1})\zeta(w_{2})\] \[\qquad\quad-\sum_{\begin{subarray}{c}s_{1}+r_{1}<w_{1}\leq s_{1}+ s_{2}+r_{1}\\ w_{1}+w_{2}=w\end{subarray}}(-1)^{s_{1}+r_{1}+w_{1}}{w_{1}-1\choose s_{1}}{w_{2}- 2\choose s_{1}+s_{2}+r_{1}-w_{1}}\zeta(w_{1})\zeta(w_{2}).\]
Similarly, we have
\[S_{13} =\sum_{\begin{subarray}{c}2\leq w_{2}\leq s_{2}+r_{2}-1\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose s_{1}}{w_{2}-2\choose s_{2}-1} \zeta(w_{1})\zeta(w_{2})\] \[\qquad-\sum_{\begin{subarray}{c}r_{2}<w_{2}\leq s_{2}+r_{2}-1\\ w_{1}+w_{2}=w\end{subarray}}(-1)^{s_{2}+r_{2}+w_{2}}{w_{1}-1\choose s_{1}}{w_{2 }-2\choose r_{2}-1}\zeta(w_{1})\zeta(w_{2}).\]
For the sum \(S_{2}\), we can dissect it as \(S_{2}=S_{21}-S_{22}-S_{23}\) by writing
\[S_{21} \coloneqq(-1)^{s_{2}}\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}+t_{2}=s_{1}\end{subarray}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose t_{1}}{w_{2}-1\choose t_{2}}\zeta(w_{1},w_{2}),\] \[S_{22} \coloneqq(-1)^{s_{2}}\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}+t_{2}=s_{1}\end{subarray}}\sum_{\begin{subarray}{c}1\leq w_{1}\leq s_{2}+r_{1}+t_{1}\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose t_{1}}{w_{2}-1\choose t_{2}}\zeta(w_{1},w_{2}),\] \[S_{23} \coloneqq(-1)^{s_{2}}\sum_{\begin{subarray}{c}t_{1},t_{2}\geq 0\\ t_{1}+t_{2}=s_{1}\end{subarray}}\sum_{\begin{subarray}{c}2\leq w_{2}\leq r_{2}+t_{2}\\ w_{1}+w_{2}=w\end{subarray}}{w_{1}-1\choose t_{1}}{w_{2}-1\choose t_{2}}\zeta(w_{1},w_{2}).\]
For \(S_{21}\), we can calculate the sum over binomial coefficients and get
\[S_{21}=(-1)^{s_{2}}\binom{w-2}{s_{1}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2} \geq 2\\ w_{1}+w_{2}=w\end{subarray}}\zeta(w_{1},w_{2})=(-1)^{s_{2}}\binom{w-2}{s_{1}} \zeta(w)\]
by the usual sum formula. For \(S_{22},S_{23}\), note that
\[S_{22} =(-1)^{s_{2}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2 \\ w_{1}+w_{2}=w\end{subarray}}\zeta(w_{1},w_{2})\sum_{\begin{subarray}{c}t_{1},t_ {2}\geq 0\\ t_{1}\geq w_{1}-(s_{2}+r_{1})\\ t_{1}+t_{2}=s_{1}\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}},\] \[S_{23} =(-1)^{s_{2}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2 \\ w_{1}+w_{2}=w\end{subarray}}\zeta(w_{1},w_{2})\sum_{\begin{subarray}{c}t_{1},t _{2}\geq 0\\ t_{2}\geq w_{2}-r_{2}\\ t_{1}+t_{2}=s_{1}\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}}.\]
Since the ranges of \((t_{1},t_{2})\) for \(S_{22}\) and \(S_{23}\) are disjoint, we have
\[S_{22}+S_{23}=(-1)^{s_{2}}\sum_{\begin{subarray}{c}w_{1}\geq 1,w_{2}\geq 2 \\ w_{1}+w_{2}=w\end{subarray}}\zeta(w_{1},w_{2})\sum_{\begin{subarray}{c}t_{1},t _{2}\geq 0\\ t_{1}\geq w_{1}-(s_{2}+r_{1})\text{ or }t_{2}\geq w_{2}-r_{2}\\ t_{1}+t_{2}=s_{1}\end{subarray}}\binom{w_{1}-1}{t_{1}}\binom{w_{2}-1}{t_{2}}.\]
By combining the above calculations, we obtain the theorem.
As a special case of Theorem 3.13, we obtain the following sum formula of bounded type for the hook shape. Note that the left hand side of the next corollary corresponds to the hook shape \((s,1^{r})\) when \(s\geq 2\).
**Corollary 3.15**.: _For \(s,r\geq 1\) and \(w\geq s+r+2\), we have_
\[S_{w}\binom{0,s-1}{r,\quad 1} =\binom{w-2}{s-1}\zeta(w)-\sum_{k=1}^{s-1}\binom{w-k-2}{s-k-1} \zeta(k,w-k)+(-1)^{s}\sum_{k=s}^{s+r-1}\zeta(k,w-k)\] \[\quad-\sum_{k=2}^{s-1}(-1)^{k}\binom{w-k-2}{s-k-1}\zeta(k)\zeta(w -k)-\sum_{k=2}^{r}\binom{w-k-2}{s-2}\zeta(k)\zeta(w-k)\] \[\quad+(-1)^{r}\sum_{k=r+1}^{s+r-1}(-1)^{k}\binom{w-k-2}{r+s-1-k} \zeta(k)\zeta(w-k).\]
Proof.: Follows from Theorem 3.13 and some rearrangement of terms.
**Example 3.16**.: For the shape \(\lambda=(3,1,1)\) we obtain for \(w\geq 7\) the sum formula
\[S_{w}\big{(}(3,1,1)\big{)} =S_{w}\binom{0,2}{2,1}\] \[=\binom{w-2}{2}\zeta(w)-(w-3)\zeta(2)\zeta(w-2)-(w-5)\zeta(3)\zeta(w-3)+\zeta(4)\zeta(w-4)\] \[\quad-(w-3)\zeta(1,w-1)-\zeta(2,w-2)-\zeta(3,w-3)-\zeta(4,w-4)\]
from Corollary 3.15 by taking \(s=3\) and \(r=2\).
## 4. Diagrams with one corner
In this section, we consider general shapes with one corner. Recall that in Section 2 we defined for an index \(\boldsymbol{k}=(k_{1},\ldots,k_{d})\) and \(l\geq 0\)
\[Q_{l}(\boldsymbol{k})=\sum_{\begin{subarray}{c}\boldsymbol{w}=(w_{1},\ldots,w_{ d}):\,\text{adm.}\\ \text{wt}(\boldsymbol{w})=\text{wt}(\boldsymbol{k})+l\end{subarray}}\binom{w_{ 1}-1}{k_{1}-1}\cdots\binom{w_{d-1}-1}{k_{d-1}-1}\binom{w_{d}-2}{k_{d}-1}\zeta( \boldsymbol{w})\,.\]
We will show (Theorem 4.2) that any \(S_{w}(\lambda/\mu)\) with \(\lambda/\mu\) having one corner, can be expressed explicitly in terms of \(Q_{w-|\lambda/\mu|}\). This will follow from a purely combinatorial argument and we will see that this statement is already true for truncated Schur MZVs. For an integer \(M\geq 1\) and an arbitrary, not necessarily admissible, Young tableau \(\boldsymbol{k}\), these are defined by
\[\zeta_{M}(\boldsymbol{k})=\sum_{\begin{subarray}{c}(m_{i,j})\in \operatorname{SSYT}(\lambda/\mu)\\ m_{i,j}<M\end{subarray}}\prod_{(i,j)\in D(\lambda/\mu)}\frac{1}{m_{i,j}^{k_{i,j }}}\,.\]
In particular, for an integer \(M\geq 1\) and an index \(\boldsymbol{k}=(k_{1},\ldots,k_{d})\) with \(k_{1},\ldots,k_{d}\geq 1\) the truncated MZVs are given by
\[\zeta_{M}(k_{1},\ldots,k_{d})=\sum_{0<m_{1}<\cdots<m_{d}<M}\frac{1}{m_{1}^{k_{ 1}}\ldots m_{d}^{k_{d}}}\,.\]
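As a quick illustration of the definition, the following short script (a direct, unoptimized sketch; the function name is ours) evaluates \(\zeta_{M}(k_{1},\ldots,k_{d})\) from the defining sum.

```python
from itertools import combinations
from fractions import Fraction

def zeta_trunc(ks, M):
    """Truncated MZV: sum over 0 < m_1 < ... < m_d < M of 1/(m_1^{k_1} ... m_d^{k_d})."""
    d = len(ks)
    total = Fraction(0)
    for ms in combinations(range(1, M), d):   # strictly increasing tuples below M
        term = Fraction(1)
        for m, k in zip(ms, ks):
            term /= Fraction(m) ** k
        total += term
    return total

# zeta_trunc((1, 2), M) approximates zeta(1, 2) = zeta(3) ~ 1.2021 from below as M grows.
print(float(zeta_trunc((1, 2), 200)))
```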
To give the precise statement of the above-mentioned theorem, we will need to introduce some algebraic setup following [4]. Denote by \(\mathfrak{H}^{1}=\mathbb{Q}\langle z_{k}\mid k\geq 1\rangle\) the non-commutative polynomial ring in the variables \(z_{k}\) for \(k\geq 1\). A monic monomial in \(\mathfrak{H}^{1}\) is called a word and the empty word will be denoted by \(\mathbf{1}\). For \(k\geq 1\) and \(n\in\mathbb{Z}\), we define
\[z_{k}^{n}=\left\{\begin{array}{cl}\underbrace{z_{k}\cdots z_{k}}_{n}&\text{ if }n>0,\\ \mathbf{1}&\text{ if }n=0,\\ 0&\text{ if }n<0.\end{array}\right.\]
There is a one-to-one correspondence between indices and words; to each index \(\boldsymbol{k}=(k_{1},\ldots,k_{d})\) corresponds the word \(z_{\boldsymbol{k}}:=z_{k_{1}}\cdots z_{k_{d}}\). Thus we can extend various functions on indices to \(\mathbb{Q}\)-linear maps on \(\mathfrak{H}^{1}\). For example, we define \(Q_{l}\colon\mathfrak{H}^{1}\to\mathbb{R}\) by setting \(Q_{l}(z_{\boldsymbol{k}})=Q_{l}(\boldsymbol{k})\) and extending it linearly.
We define the stuffle product \(*\) and the index shuffle product \(\widetilde{\sqcup}\) on \(\mathfrak{H}^{1}\) as the \(\mathbb{Q}\)-bilinear products, which satisfy \(\mathbf{1}*w=w*\mathbf{1}=w\) and \(\mathbf{1}\)\(\widetilde{\sqcup}\,w=w\)\(\widetilde{\sqcup}\,\mathbf{1}=w\) for any word \(w\in\mathfrak{H}^{1}\) and for any \(i,j\geq 1\) and words \(w_{1},w_{2}\in\mathfrak{H}^{1}\),
\[z_{i}w_{1}*z_{j}w_{2} =z_{i}(w_{1}*z_{j}w_{2})+z_{j}(z_{i}w_{1}*w_{2})+z_{i+j}(w_{1}*w_ {2})\,,\] \[z_{i}w_{1}\,\widetilde{\sqcup}\,z_{j}w_{2} =z_{i}(w_{1}\,\widetilde{\sqcup}\,z_{j}w_{2})+z_{j}(z_{i}w_{1} \,\widetilde{\sqcup}\,w_{2})\,.\]
By [4, Theorem 2.1] we obtain a commutative \(\mathbb{Q}\)-algebra \(\mathfrak{H}^{1}_{*}\).
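Because the stuffle product is given by this simple recursion, it is easy to experiment with it on a computer; the following sketch (the helper name is ours) returns the product of two indices as a multiset of indices.

```python
from collections import Counter

def stuffle(u, v):
    """Stuffle product of two indices (tuples of positive integers), as a Counter of indices."""
    if not u:
        return Counter({v: 1})
    if not v:
        return Counter({u: 1})
    i, u1 = u[0], u[1:]
    j, v1 = v[0], v[1:]
    out = Counter()
    for w, c in stuffle(u1, v).items():   # z_i (u1 * z_j v1)
        out[(i,) + w] += c
    for w, c in stuffle(u, v1).items():   # z_j (z_i u1 * v1)
        out[(j,) + w] += c
    for w, c in stuffle(u1, v1).items():  # z_{i+j} (u1 * v1)
        out[(i + j,) + w] += c
    return out

print(stuffle((1,), (1,)))     # z_1 * z_1   = 2 z_1 z_1 + z_2
print(stuffle((1,), (1, 1)))   # z_1 * z_1^2 = 3 z_1^3 + z_2 z_1 + z_1 z_2
```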
**Lemma 4.1**.: _Let \(D_{1},\ldots,D_{r}\) be non-empty subsets of \(D(\lambda/\mu)\) which gives a disjoint decomposition of \(D(\lambda/\mu)\), i.e., \(D(\lambda/\mu)=D_{1}\sqcup\cdots\sqcup D_{r}\). Then the following conditions are equivalent:_
1. _If we set_ \(t_{ij}=a\) _for_ \((i,j)\in D_{a}\) _with_ \(a=1,\ldots,r\)_, then_ \((t_{ij})\) _is a semi-standard Young tableau of shape_ \(\lambda/\mu\)_._
2. _There exists a semi-standard Young tableau_ \((m_{ij})\) _of shape_ \(\lambda/\mu\) _such that_ \[m_{ij}<m_{kl}\iff a<b\] _holds for any_ \((i,j)\in D_{a}\) _and_ \((k,l)\in D_{b}\)_._
Proof.: Obviously, (i) implies (ii). Conversely, assume that a semi-standard tableau \((m_{ij})\) satisfies the condition in (ii). If \((t_{ij})\) is defined as in (i), one has
\[m_{ij}<m_{kl}\iff a<b\iff t_{ij}<t_{kl}\]
for any \((i,j)\in D_{a}\) and \((k,l)\in D_{b}\), which shows that \((t_{ij})\) is also semi-standard. Thus (ii) implies (i).
We call a tuple \((D_{1},\ldots,D_{r})\) of non-empty subsets of \(D(\lambda/\mu)\) satisfying the conditions of Lemma 4.1 a _semi-standard decomposition_. Let \(\operatorname{SSD}(\lambda/\mu)\) denote the set of all semi-standard decompositions of \(D(\lambda/\mu)\). Then we define an element \(\varphi(\lambda/\mu)\) of \(\mathfrak{H}^{1}\) by
\[\varphi(\lambda/\mu)\coloneqq\sum_{(D_{1},\ldots,D_{r})\in\operatorname{SSD}( \lambda/\mu)}z_{|D_{1}|}\cdots z_{|D_{r}|},\]
where \(|D_{i}|\) denotes the number of elements of \(D_{i}\). This element is related to the sum formula as follows.
**Theorem 4.2**.: _When \(\lambda/\mu\) has only one corner, we have for \(w>|\lambda/\mu|\)_
\[S_{w}(\lambda/\mu)=Q_{w-|\lambda/\mu|}(\varphi(\lambda/\mu)).\]
Proof.: For any admissible Young tableau \(\boldsymbol{k}=(k_{ij})\) of shape \(\lambda/\mu\), we see that
\[\zeta(\boldsymbol{k})=\sum_{(D_{1},\ldots,D_{r})\in\operatorname{SSD}( \lambda/\mu)}\zeta\Biggl{(}\sum_{(i,j)\in D_{1}}k_{ij},\ldots,\sum_{(i,j)\in D _{r}}k_{ij}\Biggr{)} \tag{4.1}\]
by classifying the semi-standard tableaux \((m_{ij})\) of shape \(\lambda/\mu\) according to the semi-standard decompositions \((D_{1},\ldots,D_{r})\) determined as in Lemma 4.1 (ii). Then, from the definition of \(S_{w}(\lambda/\mu)\), we have
\[S_{w}(\lambda/\mu)=\sum_{\begin{subarray}{c}k_{ij}\geq 1,\,(i,j)\in D(\lambda/\mu)\\ k_{ij}\geq 2,\,(i,j)\in C(\lambda/\mu)\\ \sum_{(i,j)}k_{ij}=w\end{subarray}}\sum_{(D_{1},\ldots,D_{r})\in\operatorname {SSD}(\lambda/\mu)}\zeta\Biggl{(}\sum_{(i,j)\in D_{1}}k_{ij},\ldots,\sum_{(i, j)\in D_{r}}k_{ij}\Biggr{)}\] \[=\sum_{(D_{1},\ldots,D_{r})\in\operatorname{SSD}(\lambda/\mu)} \sum_{\begin{subarray}{c}w_{1},\ldots,w_{r-1}\geq 1\\ w_{r}\geq 2\\ w_{1}+\cdots+w_{r}=w\end{subarray}}\binom{w_{1}-1}{|D_{1}|-1}\cdots\binom{w_{r- 1}-1}{|D_{r-1}|-1}\binom{w_{r}-2}{|D_{r}|-1}\zeta(w_{1},\ldots,w_{r}).\]
Here note that, for any \((D_{1},\ldots,D_{r})\in\operatorname{SSD}(\lambda/\mu)\), the unique corner belongs to \(D_{r}\). The last expression above is equal to
\[\sum_{(D_{1},\ldots,D_{r})\in\operatorname{SSD}(\lambda/\mu)}Q_{w-|\lambda/ \mu|}(|D_{1}|,\ldots,|D_{r}|)=Q_{w-|\lambda/\mu|}(\varphi(\lambda/\mu)).\]
Thus the proof is complete.
By Remark 2.2 we see that Theorem 4.2 gives a sum formula of bounded type for shapes with one corner. In order to evaluate the sum \(S_{w}(\lambda/\mu)\) in the one corner case, one therefore needs to find an expression of \(\varphi(\lambda/\mu)\). For this purpose, the following expression is useful:
**Proposition 4.3**.: _For any skew shape \(\lambda/\mu\), let \(\lambda^{\prime}=(\lambda^{\prime}_{1},\dots,\lambda^{\prime}_{s})\) and \(\mu^{\prime}=(\mu^{\prime}_{1},\dots,\mu^{\prime}_{s})\) be the conjugates of \(\lambda\) and \(\mu\), respectively. Then we have the identity_
\[\varphi(\lambda/\mu)=\det_{*}\bigl{[}z_{1}^{\lambda^{\prime}_{i}-\mu^{\prime}_{ j}-i+j}\bigr{]}_{1\leq i,j\leq s},\]
_where \(\det_{*}\) denotes the determinant performed in the stuffle algebra \(\mathfrak{H}_{*}^{1}\)._
Proof.: For any integer \(M>0\), we have
\[\zeta_{M}(\varphi(\lambda/\mu))=\sum_{(D_{1},\dots,D_{r})\in\operatorname{SSD }(\lambda/\mu)}\zeta_{M}(|D_{1}|,\dots,|D_{r}|)=\zeta_{M}(\mathbf{O}_{\lambda/ \mu}),\]
where \(\mathbf{O}_{\lambda/\mu}\) denotes the tableau of shape \(\lambda/\mu\) all entries of which are \(1\). Indeed, the first equality is obvious from the definition of \(\varphi(\lambda/\mu)\), and the second follows in the same way as (4.1).
On the other hand, by the Jacobi-Trudi type formula for truncated Schur MZVs ([7, Theorem 1.1], [1, Theorem 4.7]), we have
\[\zeta_{M}(\mathbf{O}_{\lambda/\mu})=\det\bigl{[}\zeta_{M}(z_{1}^{\lambda^{ \prime}_{i}-\mu^{\prime}_{j}-i+j})\bigr{]}_{1\leq i,j\leq s}=\zeta_{M}\Bigl{(} \det_{*}\bigl{[}z_{1}^{\lambda^{\prime}_{i}-\mu^{\prime}_{j}-i+j}\bigr{]}_{1 \leq i,j\leq s}\Bigr{)}.\]
Since the map \(\mathfrak{H}^{1}\to\prod_{M\geq 1}\mathbb{Q}\) given by \(w\mapsto(\zeta_{M}(w))_{M\geq 1}\) is injective ([8, Theorem 3.1]), we obtain the desired identity in \(\mathfrak{H}^{1}\).
In some cases, one can compute \(\varphi(\lambda/\mu)\) explicitly by using Proposition 4.3.
**Theorem 4.4**.: _For \(n\geq 1\) and \(k\geq 0\), it holds that_
\[\varphi\bigl{(}\,(2^{n+k})/(1^{k})\,\bigr{)}=\sum_{l=0}^{n}\frac{k+1}{l+k+1} \binom{2l+k}{l}z_{2}^{n-l}\,\widetilde{\sqcup}\!\!\!\!\sqcup\,z_{1}^{2l+k}.\]
Proof.: By Proposition 4.3, we have
\[\varphi\bigl{(}\,(2^{n+k})/(1^{k})\,\bigr{)}=\begin{vmatrix}z_{1}^{n}&z_{1}^{ n+k+1}\\ z_{1}^{n-1}&z_{1}^{n+k}\end{vmatrix}=z_{1}^{n+k}*z_{1}^{n}-z_{1}^{n+k+1}*z_{1}^{n -1}.\]
By [6, Lemma 1] we get for \(m\geq n\geq 1\)
\[z_{1}^{m}*z_{1}^{n}=\sum_{l=0}^{n}\binom{m+n-2l}{m-l}\sum_{\boldsymbol{k}\in G _{l}^{m+n-2l}}z_{\boldsymbol{k}}=\sum_{l=0}^{n}\binom{m+n-2l}{n-l}z_{2}^{l}\, \widetilde{\sqcup}\!\!\!\!\sqcup\,z_{1}^{m+n-2l},\]
where \(G_{b}^{a}\) is the set of all possible indices containing \(a\) times \(1\) and \(b\) times \(2\). This gives
\[\varphi\bigl{(}\,(2^{n+k})/(1^{k})\,\bigr{)}\] \[=z_{1}^{n+k}*z_{1}^{n}-z_{1}^{n+k+1}*z_{1}^{n-1}\] \[=\sum_{l=0}^{n}\binom{2n+k-2l}{n-l}z_{2}^{l}\,\widetilde{\sqcup}\!\!\!\!\sqcup\,z_{1}^{2n+k-2l}-\sum_{l=0}^{n-1}\binom{2n+k-2l}{n-l-1}z_{2}^{l}\,\widetilde{\sqcup}\!\!\!\!\sqcup\,z_{1}^{2n+k-2l}\] \[=\sum_{l=0}^{n}\left(\binom{2n+k-2l}{n-l}-\binom{2n+k-2l}{n-l-1}\right)z_{2}^{l}\,\widetilde{\sqcup}\!\!\!\!\sqcup\,z_{1}^{2n+k-2l}\] \[=\sum_{l=0}^{n}\left(\binom{2l+k}{l}-\binom{2l+k}{l-1}\right)z_{2}^{n-l}\,\widetilde{\sqcup}\!\!\!\!\sqcup\,z_{1}^{2l+k}\] \[=\sum_{l=0}^{n}\frac{k+1}{l+k+1}\binom{2l+k}{l}z_{2}^{n-l}\,\widetilde{\sqcup}\!\!\!\!\sqcup\,z_{1}^{2l+k}.\qed\]
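As a small sanity check, take \(n=1\) and \(k=0\), so the shape is a single row of two boxes: the determinant gives
\[\varphi\bigl{(}\,(2)\,\bigr{)}=z_{1}*z_{1}-z_{1}^{2}*z_{1}^{0}=(2z_{1}z_{1}+z_{2})-z_{1}z_{1}=z_{1}z_{1}+z_{2},\]
while the right-hand side of the theorem gives \(\binom{0}{0}z_{2}+\tfrac{1}{2}\binom{2}{1}z_{1}^{2}=z_{2}+z_{1}z_{1}\), in agreement with the two semi-standard decompositions of this shape.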
For some shapes \(\lambda/\mu\), \(\varphi(\lambda/\mu)\) will contain sums over all indices of a fixed weight and depth. The corresponding sums of \(Q\) applied to these can be evaluated by using the following lemma.
**Lemma 4.5**.: _For \(k\geq d\geq 1,w\geq d+1\) and an index \(\boldsymbol{n}=(n_{1},\ldots,n_{d})\), we have_
\[\sum_{\begin{subarray}{c}\boldsymbol{k}=(k_{1},\ldots,k_{d})\\ \operatorname{wt}(\boldsymbol{k})=k\end{subarray}}P_{w-k}(\boldsymbol{n}; \boldsymbol{k})=\binom{w-\operatorname{wt}(\boldsymbol{n})}{k-d}\zeta(w).\]
_In particular,_
\[\sum_{\begin{subarray}{c}\boldsymbol{k}=(k_{1},\ldots,k_{d})\\ \operatorname{wt}(\boldsymbol{k})=k\end{subarray}}P_{w-k}(\boldsymbol{k})= \binom{w-d}{k-d}\zeta(w),\qquad\sum_{\begin{subarray}{c}\boldsymbol{k}=(k_{1},\ldots,k_{d})\\ \operatorname{wt}(\boldsymbol{k})=k\end{subarray}}Q_{w-k}(\boldsymbol{k})= \binom{w-d-1}{k-d}\zeta(w).\]
Proof.: This follows directly by using the Chu-Vandermonde identity and the usual sum formula for MZVs.
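For reference, the two ingredients are the Chu-Vandermonde identity
\[\sum_{\begin{subarray}{c}a_{1},\ldots,a_{d}\geq 0\\ a_{1}+\cdots+a_{d}=N\end{subarray}}\binom{b_{1}}{a_{1}}\cdots\binom{b_{d}}{a_{d}}=\binom{b_{1}+\cdots+b_{d}}{N}\]
and the usual sum formula, which states that the sum of \(\zeta(w_{1},\ldots,w_{d})\) over all admissible indices of weight \(w\) and depth \(d\) equals \(\zeta(w)\).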
**Remark 4.6**.: With the same combinatorial argument as in the proof of Theorem 3.4 one can show that for \(s\geq 0\) and \(r\geq 1\), we have
\[\varphi\big{(}((s+1)^{r})/(s^{r-1})\big{)}=\sum_{l=0}^{s}\binom{r+l-1}{l}\sum _{\begin{subarray}{c}\boldsymbol{k}=(k_{1},\ldots,k_{r+l})\\ \operatorname{wt}(\boldsymbol{k})=r+s\end{subarray}}z_{\boldsymbol{k}}.\]
Using Theorem 4.2 and Lemma 4.5 this gives another way of proving the anti-hook sum formula in Theorem 3.8
\[S_{w}\big{(}((s+1)^{r})/(s^{r-1})\big{)} =Q_{w-(r+s)}\left(\varphi\big{(}((s+1)^{r})/(s^{r-1})\big{)}\right)\] \[=\sum_{l=0}^{s}\binom{r+l-1}{l}\sum_{\begin{subarray}{c} \boldsymbol{k}=(k_{1},\ldots,k_{r+l})\\ \operatorname{wt}(\boldsymbol{k})=r+s\end{subarray}}Q_{w-(r+s)}(\boldsymbol{k})\] \[=\sum_{l=0}^{s}\binom{r+l-1}{l}\binom{w-r-l-1}{s-l}\zeta(w)\] \[=\binom{w-1}{s}\zeta(w)\,.\]
We can summarize the general strategy to give a bounded expression of \(S_{w}(\lambda/\mu)\) for the case when \(\lambda/\mu\) has one corner as follows:
1. Give an expression for \(\varphi(\lambda/\mu)\), by evaluating the determinant in Proposition 4.3 by the stuffle product. Then use Theorem 4.2 to get \(S_{w}(\lambda/\mu)=Q_{w-|\lambda/\mu|}(\varphi(\lambda/\mu))\).
2. If sums of \(Q_{l}\) over all indices of a fixed weight and depth appear, use Lemma 4.5 to write them in terms of Riemann zeta values.
3. For other terms involving \(Q\) write them in terms of \(P\), by using (2.1), i.e. \[Q_{l}(k_{1},\ldots,k_{d})=\sum_{j=0}^{k_{d}-1}(-1)^{j}P_{l+j}(k_{1},\ldots,k_{ d-1},k_{d}-j).\] Then use Proposition 2.1 to get recursive (bounded) expressions of \(P_{l+j}\) in terms of MZVs. For depth \(2\) and \(3\) explicit expressions are given by Corollary 2.6 and 2.7.
**Example 4.7**.: Using the above strategy we get formula (1.3) for \(S_{w}\left(\raisebox{-0.5pt}{\includegraphics[height=5.690551pt]{./figures/1.eps}}\right)\) in the introduction and the following examples:
1. For \(w\geq 6\) we have \[S_{w}\left(\raisebox{-0.5pt}{\includegraphics[height=5.690551pt]{./figures/1.eps}} \right) =\binom{w-2}{2}\zeta(2)\zeta(w-2)-\frac{5}{4}\zeta(4)\zeta(w-4)+ \zeta(2)\zeta(2,w-4)\] \[\quad-\zeta(2)\zeta(1,w-3)-\binom{w-2}{2}\zeta(1,w-1)+\binom{w-3} {2}\zeta(2,w-2)\] \[\quad+(w-3)\zeta(3,w-3)+(w-3)\zeta(1,1,w-2)-(w-5)\zeta(1,2,w-3)\] \[\quad-2\zeta(1,3,w-4)+\zeta(2,1,w-3)-\zeta(2,2,w-4).\]
2. For \(w\geq 6\) we have \[S_{w}\left(\raisebox{-0.5pt}{\includegraphics[height=5.690551pt]{./figures/1.eps}}\right)=(w-2)\zeta(2)\zeta(w-2)+(w-5)\zeta(3)\zeta(w-3)-\frac{5 }{4}\zeta(4)\zeta(w-4)\] \[\quad-\zeta(2)\zeta(1,w-3)+\zeta(2)\zeta(2,w-4)+(2-w)\zeta(1,w-1 )+(w-4)\zeta(2,w-2)\] \[\quad+2\zeta(3,w-3)+(w-3)\zeta(1,1,w-2)-(w-5)\zeta(1,2,w-3)\] \[\quad-2\zeta(1,3,w-4)+\zeta(2,1,w-3)-\zeta(2,2,w-4).\]
Comparing (i) and (ii) with (1.3), we see that for all \(w\geq 1\) we have
(4.2)
We will now show that the relation (4.2) among \(S_{w}\) for different shapes is a special case of a more general family of relations. For this we first notice that any skew Young diagram \(D(\lambda/\mu)\) with one corner can be written as
\[\lambda=(n^{m})=(\underbrace{n,\ldots,n}_{m}),\quad\mu=(\mu_{1},\ldots,\mu_{m})\]
with \(\mu_{1}=n\), \(\mu_{m}>0\) (for example, the one-box diagram is represented as \((2,2)/(2,1)\)). Then we write \(I\) for the set of \(i=1,\ldots,m\) such that
\[\mu[i]\coloneqq(\mu_{1},\ldots,\mu_{i}-1,\ldots,\mu_{m})\]
is non-increasing. Then, for \(i\in I\), \(D(\lambda/\mu[i])=D(\lambda/\mu)\cup\{(i,\mu_{i})\}\) is still a skew Young diagram with one corner. As a generalization of (4.2) we obtain the following.
**Theorem 4.8**.: _If \(\lambda/\mu\) is a skew Young diagram with one corner then for all \(w\geq 1\)_
\[\sum_{i\in I}\bigl{(}(i-\mu_{i})-(m-n)\bigr{)}S_{w}(\lambda/\mu[i])=(w-| \lambda/\mu|-1)S_{w}(\lambda/\mu). \tag{4.3}\]
To prove this, we need the following Lemma. Define a linear map \(\partial\colon\mathfrak{H}^{1}\to\mathfrak{H}^{1}\) by
\[\partial(z_{k_{1}}\cdots z_{k_{d}})\coloneqq\sum_{a=1}^{d}k_{a}z_{k_{1}} \cdots z_{k_{a}+1}\cdots z_{k_{d}}.\]
In particular, \(\partial(1)=0\).
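For instance,
\[\partial(z_{1}z_{2})=z_{2}z_{2}+2z_{1}z_{3},\qquad\partial(z_{1}^{2})=z_{2}z_{1}+z_{1}z_{2}.\]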
**Lemma 4.9**.:
1. \(\partial\) _is a derivation with respect to the stuffle product._
2. _For any_ \(N\in\mathbb{Z}\)_, we have_ \[\partial(z_{1}^{N})=z_{1}*z_{1}^{N}-(N+1)z_{1}^{N+1}.\]
3. _We have_ \[(l-1)Q_{l}(v)=Q_{l-1}(\partial(v))\] _for any_ \(v\in\mathfrak{H}^{1}\)_._
Proof.: (i) is verified, e.g., by induction on the depth. It is also easy to show (ii) from the definition. Finally, the identity in (iii) follows from the definition of \(Q_{l}\) and the identity
\[(w-k-1)\binom{w_{1}-1}{k_{1}-1}\cdots\binom{w_{d}-2}{k_{d}-1}\] \[=\big{(}(w_{1}-k_{1})+\cdots+(w_{d}-1-k_{d})\big{)}\binom{w_{1}-1 }{k_{1}-1}\cdots\binom{w_{d}-2}{k_{d}-1}\] \[=\sum_{a=1}^{d}k_{a}\binom{w_{1}-1}{k_{1}-1}\cdots\binom{w_{a}-1 }{k_{a}}\cdots\binom{w_{d}-2}{k_{d}-1},\]
where \(w=w_{1}+\cdots+w_{d}\) and \(k=k_{1}+\cdots+k_{d}\).
Proof of Theorem 4.8.: By Theorem 4.2 and Proposition 4.3, we have
\[S_{w}(\lambda/\mu)=Q_{w-|\lambda/\mu|}\Big{(}\mathrm{det}_{*}\big{[}z_{1}^{m- \mu_{j}^{\prime}-i+j}\big{]}_{1\leq i,j\leq n}\Big{)}.\]
Here \((\mu_{1}^{\prime},\ldots,\mu_{n}^{\prime})\) denotes the transpose of the partition \(\mu=(\mu_{1},\ldots,\mu_{m})\). Note that \(\mu_{1}^{\prime}=m\) and \(\mu_{n}^{\prime}>0\). By Lemma 4.9 (iii) and (i), we see that
\[(w-|\lambda/\mu|-1)S_{w}(\lambda/\mu) =(w-|\lambda/\mu|-1)Q_{w-|\lambda/\mu|}\Big{(}\mathrm{det}_{*}\big{[}z_{1}^{m-\mu_{j}^{\prime}-i+j}\big{]}_{1\leq i,j\leq n}\Big{)}\] \[=Q_{w-|\lambda/\mu|-1}\Big{(}\partial\det_{*}\big{[}z_{1}^{m-\mu_{j}^{\prime}-i+j}\big{]}_{1\leq i,j\leq n}\Big{)}\] \[=\sum_{k=1}^{n}Q_{w-|\lambda/\mu|-1}\Big{(}\mathrm{det}_{*}\big{[}\partial^{\delta_{jk}}(z_{1}^{m-\mu_{j}^{\prime}-i+j})\big{]}_{1\leq i,j\leq n}\Big{)}. \tag{4.4}\]
Here \(\partial^{\delta_{jk}}\) means the operator \(\partial\) if \(j=k\) and the identity operator otherwise. Moreover, since the identity
\[\partial(z_{1}^{m-\mu_{k}^{\prime}-i+k}) =z_{1}*z_{1}^{m-\mu_{k}^{\prime}-i+k}-(m-\mu_{k}^{\prime}-i+k+1) z_{1}^{m-\mu_{k}^{\prime}-i+k+1}\] \[=\big{(}(\mu_{k}^{\prime}-k)-(m-n)\big{)}z_{1}^{m-(\mu_{k}^{ \prime}-1)-i+k}\] \[\qquad+z_{1}*z_{1}^{m-\mu_{k}^{\prime}-i+k}-(n+1-i)z_{1}^{m-(\mu_ {k}^{\prime}-1)-i+k}\]
holds by Lemma 4.9 (ii), we have
\[\mathrm{det}_{*}\big{[}\partial^{\delta_{jk}}(z_{1}^{m-\mu_{j}^{ \prime}-i+j})\big{]}\] \[=\big{(}(\mu_{k}^{\prime}-k)-(m-n)\big{)}\det_{*}\big{[}z_{1}^{m- (\mu_{j}^{\prime}-\delta_{jk})-i+j}\big{]}\] \[\qquad+z_{1}*\mathrm{det}_{*}\big{[}z_{1}^{m-\mu_{j}^{\prime}-i+j }\big{]}-\mathrm{det}_{*}\big{[}(n+1-i)^{\delta_{jk}}z_{1}^{m-(\mu_{j}^{\prime} -\delta_{jk})-i+j}\big{]}. \tag{4.5}\]
On the other hand, the left-hand side of (4.3) is equal to
\[\sum_{k=1}^{n}\big{(}(\mu_{k}^{\prime}-k)-(m-n)\big{)}Q_{w-|\lambda/\mu|-1} \Big{(}\mathrm{det}_{*}\big{[}z_{1}^{m-(\mu_{j}^{\prime}-\delta_{jk})-i+j} \big{]}\Big{)}. \tag{4.6}\]
Here, a priori, \(k\) runs only over the indices such that \(\mu^{\prime}_{k}>\mu^{\prime}_{k+1}\). However, if \(\mu^{\prime}_{k}=\mu^{\prime}_{k+1}\), the \(k\)-th and \((k+1)\)-st columns of the matrix \(\big{(}z_{1}^{m-(\mu^{\prime}_{j}-\delta_{jk})-i+j}\big{)}_{i,j}\) are equal, so the determinant is zero.
Comparing (4.4), (4.5) and (4.6), it suffices to prove the equality
\[nz_{1}*\det_{*}\!\big{[}z_{1}^{m-\mu^{\prime}_{j}-i+j}\big{]}\stackrel{{?}}{{=}}\sum_{k=1}^{n}\det_{*}\!\big{[}(n+1-i)^{\delta_{jk}}z_{1}^{m-(\mu^{ \prime}_{j}-\delta_{jk})-i+j}\big{]}. \tag{4.7}\]
Let us compute the right-hand side by the cofactor expansion with respect to the \(k\)-th column.
\[\sum_{k=1}^{n}\det_{*}\!\big{[}(n+1-i)^{\delta_{jk}}z_{1}^{m-(\mu ^{\prime}_{j}-\delta_{jk})-i+j}\big{]}\] \[=\sum_{k=1}^{n}\sum_{l=1}^{n}(-1)^{k+l}(n+1-l)z_{1}^{m-(\mu^{ \prime}_{k}-1)-l+k}*\det_{*}\!\big{[}z_{1}^{m-\mu^{\prime}_{j}-i+j}\big{]}_{i \neq l,j\neq k}\] \[=\sum_{l=1}^{n}(n+1-l)\sum_{k=1}^{n}(-1)^{k+l}z_{1}^{m-(\mu^{ \prime}_{k}-1)-l+k}*\det_{*}\!\big{[}z_{1}^{m-\mu^{\prime}_{j}-i+j}\big{]}_{i \neq l,j\neq k}.\]
Then, by the cofactor expansion with respect to the \(l\)-th row, we have
\[\sum_{k=1}^{n}(-1)^{k+l}z_{1}^{m-(\mu^{\prime}_{k}-1)-l+k}*\det_{*}\!\big{[}z_ {1}^{m-\mu^{\prime}_{j}-i+j}\big{]}_{i\neq l,j\neq k}=\det_{*}\!\big{[}z_{1}^{ m-\mu^{\prime}_{j}-(i-\delta_{il})+j}\big{]}.\]
This is zero for \(l=2,\ldots,n\) since the \(l\)-th and \((l-1)\)-st rows are equal. Thus we have shown that the right-hand side of (4.7) is \(n\det_{*}\!\big{[}z_{1}^{m+\delta_{1i}-\mu^{\prime}_{j}-i+j}\big{]}\), and now it is enough to prove
\[z_{1}*\det_{*}\!\big{[}z_{1}^{m-\mu^{\prime}_{j}-i+j}\big{]}\stackrel{{?}}{{=}}\det_{*}\!\big{[}z_{1}^{m+\delta_{1i}-\mu^{\prime}_{j}-i+j}\big{]}.\]
But this is obvious since the two matrices here are of the form
\[\begin{pmatrix}1&*\\ \mathbf{0}&Z\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}z_{1}&*\\ \mathbf{0}&Z\end{pmatrix},\]
respectively, with the common \((n-1)\times(n-1)\) matrix \(Z=\big{[}z_{1}^{m-\mu^{\prime}_{j}-i+j}\big{]}_{2\leq i,j\leq n}\). Hence the proof is complete.
|
2307.06621 | cjdb: a simple, fast, and lean database solution for the CityGML data
model | When it comes to storing 3D city models in a database, the implementation of
the CityGML data model can be quite demanding and often results in complicated
schemas. As an example, 3DCityDB, a widely used solution, depends on a schema
having 66 tables, mapping closely the CityGML architecture. In this paper, we
propose an alternative (called cjdb) for storing CityGML models efficiently in
PostgreSQL with a much simpler table structure and data model design (only 3
tables are necessary). This is achieved by storing the attributes and
geometries of the objects directly in JSON. In the case of the geometries we
thus adopt the Simple Feature paradigm and we use the structure of CityJSON. We
compare our solution against 3DCityDB with large real-world 3D city models, and
we find that cjdb has significantly lower demands in storage space (around a
factor of 10), allows for faster import/export of data, and has a comparable
data retrieval speed with some queries being faster and some slower. The
accompanying software (importer and exporter) is available at
https://github.com/cityjson/cjdb/ under a permissive open-source license. | Leon Powałka, Chris Poon, Yitong Xia, Siebren Meines, Lan Yan, Yuduan Cai, Gina Stavropoulou, Balázs Dukai, Hugo Ledoux | 2023-07-13T08:36:36Z | http://arxiv.org/abs/2307.06621v1 | # cjdb: a simple, fast, and lean database solution for the CityGML data model
###### Abstract
When it comes to storing 3D city models in a database, the implementation of the CityGML data model can be quite demanding and often results in complicated schemas. As an example, 3DCityDB, a widely used solution, depends on a schema having 66 tables, mapping closely the CityGML architecture. In this paper, we propose an alternative (called 'cjdb') for storing CityGML models efficiently in PostgreSQL with a much simpler table structure and data model design (only 3 tables are necessary). This is achieved by storing the attributes and geometries of the objects directly in JSON. In the case of the geometries we thus adopt the _Simple Feature_ paradigm and we use the structure of CityJSON. We compare our solution against 3DCityDB with large real-world 3D city models, and we find that cjdb has significantly lower demands in storage space (around a factor of 10), allows for faster import/export of data, and has a comparable data retrieval speed with some queries being faster and some slower. The accompanying software (importer and exporter) is available at [https://github.com/cityjson/cjdb/](https://github.com/cityjson/cjdb/) under a permissive open-source license.
## 1 Introduction
The international standard _City Geography Markup Language_ (CityGML) is a data model designed for the storage of digital 3D models representing urban areas and landscapes (Kutzner et al, 2020; Groger and Plumer, 2012; OGC, 2021b). It allows us to define and store the majority of commonplace 3D objects within cities, such as buildings, roads, rivers, bridges, vegetation, and city furniture. Additionally, it supports various levels of detail (LoDs) for the 3D objects, which enables and facilitates complex applications and use-cases (Biljecki et al, 2015).
The CityGML data model, currently at version 3.0, has three known encodings (more details in Section 2):
**XML/GML encoding**: the XML/GML encoding (built upon GML [OGC, 2007]) was initially the only standardised encoding for CityGML, which explains the--rather confusing--name choice for the data model. The latest official release of the XML/GML encoding supports CityGML version 2.0 [OGC, 2012]; however a release planned for 2023 will also support CityGML v3.0.
**CityJSON**: Its version 1.0 is standardised by the OGC [OGC, 2021a], and its version \(1.1\)1 implements a subset of the CityGML v3.0 data model. Its flat hierarchy and simple structure make it around 6 times more compact than the XML/GML encoding, thus allowing for easier manipulation and exchange on the web [Ledoux et al, 2019].
Footnote 1: [https://cityjson.org](https://cityjson.org)
**3DCityDB**: the _3D City Database_ is a "geo-database solution" (schema and accompanying software) supporting three different relational DBMSs (database management systems). It implements a mapping of the CityGML data model (currently for v1.0 and v2.0 only) to the database schema to allow for a fast implementation [Yao et al, 2018]. It is not standardised by the OGC.
DBMSs can greatly simplify the management of large 3D city models: they are arguably the best tool to store and manage very large datasets (of any kind), are already part of the ecosystem of many organisations, and offer several advantages over file-based systems, eg security, versioning, scalability, etc. [Ramakrishnan and Gehrke, 2001]. This makes 3DCityDB a popular solution, especially for handling country-level data and for offering access to multiple users. In most cases, the data owners store the data with 3DCityDB on a remote server and allow the users to access the data through a website, filter it by objects/LoDs/areas and obtain a subset of the 3D city model, in various different formats, for instance KML, COLLADA, and gITF.
However, while the 3DCityDB is widely used, [Pantelios, 2022] argues that its use can be somewhat complex and difficult for end-users. The main culprit is the fact that datasets are split over 66 tables and the _Simple Feature_ paradigm OGC [2006] is not used (geometries are stored across different tables, not in a column of an object), which translates to very complex queries that necessitate several joins. Pantelios [2022] solution is to create extra views for attributes and geometries, and to offer simplified access to them through a graphical interface (built upon QGIS). However, this comes at the cost of increasing the size of the database.
We present in this paper an alternative to 3DCityDB, which we name 'cjdb'. It is composed of a database schema (containing only 3 tables) and accompanying software for import and export of CityJSON v1.1 files (thus the CityGML v3.0 core model is supported). As further explained in Section 3, our data model is inspired by the _Simple Feature_ paradigm (each row has the geometries of the object stored in one column), but
instead of using PostGIS geometry types, we exploit the fact that PostgreSQL can store JSON objects directly in binary format with the jsonb type. The reason for this choice is that PostGIS geometry types (notice that to use 3D types the SFCGAL extension (Borne et al, 2023) would be required) would not allow the storage of appearances (textures and/or materials) and of semantic information on the surfaces. Therefore, the geometries of a given city object (eg a building, a tree, a lamppost) are stored together with the object in JSON format, as defined by CityJSON. Our simple structure allows us to compress by a factor of around 10 the typical size of a database as stored with 3DCityDB (taking into account data and (spatial) indexes), and, as shown in Section 4, this size reduction does not come with a penalty for the speed of the data retrieval. Our data model is at the moment only for PostgreSQL, but because it is so simple (only 3 tables are necessary), it could surely be ported to other databases.
## 2 Related work
### CityGML data model
To represent a region in 3D, CityGML recursively decomposes it into semantic objects (Groger and Plumer, 2012). It defines the classes most commonly found in an urban or a regional context, and the hierarchical relationships between them (eg a building is composed of parts, which are formed of walls, which have windows). Also, the CityGML semantic classes are structured into several modules, eg Building, Land Use, Water Bodies, and Transportation.
The geometry of the objects is realised with a subset of the geometry definitions in ISO19107 (ISO, 2003) (only linear and planar primitives are however allowed), which also allows aggregations of geometries: a single building can for instance be modelled with a CompositeSolid. Furthermore, it is possible to attach textures, materials, and semantics to each of the surfaces of a 3D geometry. The geometry types of most geo-DBMSs do not allow us to represent such complex 3D geometries.
One of the main characteristics of CityGML is that it supports different levels of detail (LoDs) for each of the classes, which means that in theory for a single building, or a single tree, several geometries could be stored.
### CityJSON + CityJSONFeature
CityJSON, with its latest version 1.1, is a JSON-based exchange format for the CityGML data model. It implements all the core modules of CityGML v3.0, and some other modules are supported2. As explained in Ledoux et al (2019), it was designed to improve
the weaknesses of the XML-encoding of CityGML: large filesize, complex structure to manipulate, several ways to store a given characteristic, unfit for the web, etc.
Its geometry structure is similar to that of computer graphics formats (eg OBJ and STL), and allows us to compress XML-encoded CityGML files by a factor of around 6. A thorough comparison shows that it is nearly as compact as formats that do not allow semantics, complex attributes, and coordinate reference systems [Praschl and Pointner, 2022].
While the original files for CityJSON v1.0 were compact, one weakness was that files for large areas were not suitable for streaming. That is, to be able to process one object in a file, the client had to have in memory the whole file. Version 1.1 solved this issue by introducing a new type: CityJSONFeature, which represents _independently_ one city object in a CityJSON file (eg a 'Building' or a 'Bridge'). The idea is to decompose a region into its many features, create several JSON objects of type CityJSONFeature, and stream them sequentially or store them in a JSON text sequence [IETF, 2015]3. This is conceptually the same as the _GeoJSON Text Sequences4_ used for processing and exchanging large 2D GIS datasets.
Footnote 3: _JSON Lines text file_ is one possibility: [https://jsonlines.org/](https://jsonlines.org/)
Footnote 4: [https://datatracker.ietf.org/doc/html/rfc7946#appendix-C](https://datatracker.ietf.org/doc/html/rfc7946#appendix-C)
We exploit this type to store independently each feature in one row of the database, although, as explained below, we modify the JSON structure slightly, split it over a few columns, and use indexes to accelerate performance.
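To illustrate how lightweight this is on the client side, the sketch below (the file name is a placeholder, and we assume non-feature lines, such as a leading metadata object, can simply be skipped) streams the features of a CityJSONL file one by one, without ever holding the whole model in memory.

```python
import json

def iter_features(path):
    """Yield the CityJSONFeature objects of a CityJSONL (JSON Lines) file, one per line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            obj = json.loads(line)
            if obj.get("type") == "CityJSONFeature":
                yield obj   # lines of other types (e.g. a metadata object) are skipped

for feature in iter_features("tile.city.jsonl"):
    print(feature["id"], list(feature.get("CityObjects", {})))
```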
### NoSQL databases
Nys and Billen [2021] developed a non-relational data model for CityJSON, and implement it in a JSON document database (MongoDB). They tested with one dataset (containing around 3500 city objects), and managed to reduce the size by a factor of 40% when compared to 3DCityDB. Only one query was benchmarked (retrieval of a random building with its attributes and single geometry), and their solution performed better than 3DCityDB (again around 40% faster). There is no information about how their solution performs with other queries that practitioners expect from a DBMS solution (eg the ones from Section 4.3).
### 3DCityDB
While the data model of CityGML could have been automatically mapped to relational tables, Yao et al [2018] mentions that for 3DCityDB a semi-automatic method was used to reduce the number of tables and the number of joins to perform queries (which will also typically reduce query times). The result is nonetheless, for v4.4, a total of 66 tables,
many of which remain empty if, for instance, only buildings without appearances are stored.
As Pantelios (2022) mentions, the attributes for a given type are stored in different tables, depending on whether an attribute is prescribed by the CityGML data model or not. This complicates greatly the retrieval of information with SQL queries.
Interestingly, the 3D geometries are decomposed into their (semantic) surfaces, and each surface is stored as a separate row in a single table. Different flags are used to indicate whether a 3D geometry is a solid (watertight) or a surface (not watertight). While this approach allows compact storage of 3D volumetric geometries, in practice several joins are necessary to retrieve all the surfaces of a given feature. The volumetric 3D types available in PostGIS-SFCGAL (Borne et al, 2023) are also used (thus duplication of data).
Interestingly, PostGIS geometries for the footprint are not stored, only the 2D bounding box. This is in our opinion an odd choice because many queries to 3D city models are 2D queries (all buildings inside a given area, within a given distance, etc).
## 3 Data model, software, and engineering decisions
### Data model
As shown in Figure 1, the cjdb data model is simple and akin to using the Simple Feature paradigm (OGC, 2006), as PostGIS does. Each row in the table city_object stores a CityJSON city object (for instance a 'Building', a 'BuildingPart', a 'Solitary VegetationObject', etc.) and the 'geometry' column stores a JSON array of geometries (Figure 2 shows an example). The array is necessary because a given feature can have more than one geometry (eg the most obvious case is when several LoDs are stored), and it should be noticed that the JSON stored in the database is different from CityJSON: vertices are not stored separately and we replace the vertex identifiers in the "boundaries" property by their real-world coordinates.
Figure 1: UML diagram of cjdb.
For each row, we also store the identifier of the object, its type, the attributes (stored as jsonb), and we also store the ground geometry as a PostGIS 2D type (to be able to spatially index them in 2D, see below). Notice that the children objects are stored in separate rows (and not with their parent object), ie a 'Building' does not have its potential children in the same row, and each of them (eg a 'BuildingPart') is stored in a separate row.
The implemented version adds one more table: a table named city_object_relationships to store the relations between 'parents' and 'children' city objects (CityJSON has a flat structure and stores for instance a 'Building' at the same level as its 'BuildingPart' or 'BuildingInstallation' and links them). This table is added to improve query speed since we often need to process a parent with its children.
Finally, the table cj_metadata has one entry for each CityJSON file imported to the database and stores the following properties of CityJSON:
* the coordinate reference system (CRS);
* the precision used for storing the coordinates, used mostly when exporting data;
* the geometry templates (used for trees or bus stops);
* the CityJSON Extensions (to extend the base data model of CityJSON with application-specific types/attributes/semantics);
* the bounding box of the file.
Figure 2: Example snippet stored in the ‘geometry’ column: an array of CityJSON geometries.
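To make the table structure concrete, a minimal sketch of the schema is given below. The table names follow the description above, but the column names, types and constraints are our assumptions for illustration, not the exact DDL shipped with cjdb.

```python
import psycopg2

# A sketch only: column names/types beyond those described in the text are assumptions.
DDL = """
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE IF NOT EXISTS cj_metadata (
    id         serial PRIMARY KEY,
    crs        text,        -- coordinate reference system
    transform  jsonb,       -- scale/translate, i.e. the precision of the coordinates
    templates  jsonb,       -- geometry templates (trees, bus stops, ...)
    extensions jsonb,       -- CityJSON Extensions used by the file
    bbox       geometry(Polygon)
);

CREATE TABLE IF NOT EXISTS city_object (
    id              serial PRIMARY KEY,
    object_id       text NOT NULL,            -- CityJSON identifier
    type            text NOT NULL,            -- e.g. 'Building', 'BuildingPart'
    attributes      jsonb,
    geometry        jsonb,                    -- array of CityJSON geometries (all LoDs)
    ground_geometry geometry(MultiPolygon),
    cj_metadata_id  integer REFERENCES cj_metadata(id)
);

CREATE TABLE IF NOT EXISTS city_object_relationships (
    parent_id integer REFERENCES city_object(id),
    child_id  integer REFERENCES city_object(id)
);
"""

with psycopg2.connect("dbname=cjdb_demo") as conn:   # hypothetical connection string
    with conn.cursor() as cur:
        cur.execute(DDL)
```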
### Importer
To facilitate the usage of the cjdb data model we have implemented an importer and released it open-source under the MIT license: [https://github.com/cityjson/cjdb](https://github.com/cityjson/cjdb)5. The tool is developed in Python and has a command-line interface. Observe that because the data model is feature-centric, the importer will read JSONL files (JSON Sequences) and not CityJSON files. A CityJSON file can however be automatically converted into a list of features with the accompanying software _cjio_6.
Footnote 5: the version described in this article is v1.3
Footnote 6: [https://github.com/cityjson/cjio](https://github.com/cityjson/cjio)
The importer creates the 3 necessary tables, and populates them by parsing and modifying the CityJSON features according to the cjdb data model, as explained above.
Ground geometry extraction: As many queries on a 3D city model are typically performed in 2D, such as retrieving all objects within a certain area or selecting the object clicked upon in a 2D view, we have chosen to store the ground surface of each object as a 2D PostGIS geometry. This is achieved by iterating over all of the object's surfaces and selecting the horizontal ones with the lowest elevation. If multiple levels of detail (LoDs) are available, we select the lowest LoD. This addition enables us to perform rapid 2D spatial queries on the data without any joins (see Section 4). In comparison, performing the same spatial queries in 3DCityDB requires multiple joins. Alternatively, the enveloping bounding box, which in 3DCityDB is stored together with the object, can be used instead of the actual object geometry in order to perform spatial queries when the accuracy is not important.
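The sketch below illustrates the idea on a simplified input (a list of surfaces, each given as an outer ring of (x, y, z) tuples, already taken from the lowest LoD); a real implementation has to walk the nested CityJSON boundary arrays, which is omitted here.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def ground_geometry(surfaces, z_tol=0.01):
    """Return a 2D ground geometry: the horizontal surfaces with the lowest elevation.

    `surfaces` is a list of rings, each ring a list of (x, y, z) tuples (simplified input).
    """
    horizontal = []
    for ring in surfaces:
        zs = [z for _, _, z in ring]
        if max(zs) - min(zs) <= z_tol:                 # roughly horizontal surface
            horizontal.append((min(zs), ring))
    if not horizontal:
        return None
    lowest = min(z for z, _ in horizontal)
    footprints = [Polygon([(x, y) for x, y, _ in ring])   # drop z, keep the footprint
                  for z, ring in horizontal if z - lowest <= z_tol]
    return unary_union(footprints)
```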
Indexes: The data in the city_object table is expected to be retrieved mostly through spatial queries. Therefore, we decided to add a GiST index on the 'ground_geometry' column and cluster the table based on that index. In order to improve query performance on the JSON columns we added a Generalized Inverted index (GIN), which is specialised for items with composite values. Additional full or partial BTree indexes can be applied during import, on specific attributes of the city objects, if the user expects that the table will be queried based on those attributes.
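In SQL terms, the indexes described above amount to statements along the following lines; the index names, and the partial BTree example on one attribute, are assumptions for illustration.

```python
import psycopg2

conn = psycopg2.connect("dbname=cjdb_demo")   # hypothetical connection string
conn.autocommit = True                        # run index creation and CLUSTER as plain statements
cur = conn.cursor()

# GiST index on the 2D ground geometry, and cluster the table on it
cur.execute("CREATE INDEX IF NOT EXISTS city_object_ground_idx "
            "ON city_object USING gist (ground_geometry);")
cur.execute("CLUSTER city_object USING city_object_ground_idx;")

# GIN index on the jsonb attributes
cur.execute("CREATE INDEX IF NOT EXISTS city_object_attributes_idx "
            "ON city_object USING gin (attributes);")

# optional (partial) BTree index on a single attribute, added on request at import time
cur.execute("CREATE INDEX IF NOT EXISTS city_object_roof_height_idx "
            "ON city_object (((attributes->>'h_dak_max')::numeric)) "
            "WHERE attributes ? 'h_dak_max';")

cur.close()
conn.close()
```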
### Exporter
The Python implementation of cjdb also offers an exporter. As is the case for 3DCityDB, a SQL query is used to filter which objects in the database should be exported (the identifiers in the table city_object). The output is a CityJSONL file (a sequence of CityJSONFeatures), which can automatically be converted to a CityJSON file with cjio.
## 4 Benchmark
To compare the performance of the cjdb data model against that of 3DCityDB, we created a benchmark dataset using data from 3 different countries; the Netherlands (3DBAG), Austria (Vienna), and USA (NYC). The 3DBAG dataset is composed of 100 tiles from the 3DBAG7[Peters et al, 2022]; we randomly chose tiles from 3 cities in the Netherlands (Delft, Amersfoort, and Zwolle). The Vienna dataset covers the Austrian city whereas the NYC dataset covers a small part of central New York City; both datasets can be downloaded in CityJSON format from [https://www.cityjson.org/datasets/](https://www.cityjson.org/datasets/). As can be seen in Table 1, all the datasets are building-centric, as is often the case with 3D city models, but here they are modelled differently and have different LoDs/sizes.
Footnote 7: www.3dbag.nl
We imported each dataset in two different databases, one created with 3DCityDB and one cjdb, and below we compare them in terms of import/export time, data size, and data retrieval. Since cjdb is only available for PostgreSQL, we did not perform any tests on Oracle or PolarDB.
### Import and export times
We compared the import time for all 3 datasets and we found that cjdb is considerably faster than 3DCityDB. As an example, the 100 tiles of the 3DBAG were imported in 21 min in cjdb whereas it took 113 min with 3DCityDB; see Table 2 for all details. This is expected, since the storage in the cjdb database is very close to the data model of CityJSONL, while for 3DCityDB the geometries need to be processed and each surface to be stored separately.
| | 3DCityDB import | 3DCityDB export | cjdb import | cjdb export |
| --- | --- | --- | --- | --- |
| **3DBAG** | 6780 | 721 | 1260 | 412 |
| **NYC** | 273 | 161 | 23 | 25 |
| **Vienna** | 12 | 7 | 2 | 2.5 |
Table 2: Import and export times, from/to CityJSONL. All times in seconds.
Table 1: The 3 datasets used for the benchmark.
However, it is worth noting that in 3DCityDB the creation of the necessary tables is performed separately before the import, whereas cjdb creates the necessary tables at the time of import.
For the export, we exported each dataset to a CityJSON file (for 3DCityDB) and to a CityJSONL file (for cjdb). As shown in Table 2, cjdb is also generally faster.
Observe that those values are somewhat unfair to compare because the output is to a different format (and some time would be needed to convert from a CityJSONL to a CityJSON), because different languages are used for the importer/exporter (Java for 3DCityDB, Python for cjdb), and because 3DCityDB exporter is multi-threaded.
### Database size
One significant difference between 3DCityDB and cjdb is the database size they occupy. After importing the datasets, we measured the total size of each database, including the indexes and the TOAST tables (The Oversized-Attribute Storage Technique), as shown in Table 3. We notice that cjdb occupies significantly less space than what 3DCityDB demands for the same data.
The main reason for this size difference lies in the different approaches to storing the features. In 3DCityDB, the different semantic surfaces (wall/roof/ground/etc) are considered separate city objects, contributing to the total number of rows in the cityobject table. In cjdb, the wall, roof, and ground surfaces are considered intrinsic properties of each city object and thus remain in the jsonb format stored in the geometry column of each object. This decision has several advantages: reduction of the size of the city_objects table and faster and simpler queries where only city objects are concerned (eg Q1 and Q4 in Table 4). But there is also a major drawback: we cannot perform spatial queries on the geometries of specific semantic surfaces.
Another reason for the different database sizes is the different number of indexes. We notice that cjdb implements significantly fewer indexes and as a result requires far less storage for them. In 3DCityDB, indexes account for almost half of the database's size.
One more noticeable difference is that, since cjdb stores attributes and geometries in jsonb format, it requires more space for TOAST tables. TOAST (_The Oversize Attribute Storage Technique_) is a PostgreSQL mechanism which controls the size of the data stored in a field. If the data exceeds the maximum allowed limit, TOAST breaks the too-wide field values down into smaller pieces, and stores them out-of-line in a TOAST table. Columns of type jsonb tend to carry quite wide values and often utilise the TOAST tables. Thus when measuring the cjdb database size we need to take the extra toast tables into account. However, even with the extra TOAST tables, the total database size remains around ten times smaller than that of 3DCityDB.
Finally, it should be noted that the 3DCityDB size should in reality be even larger than the number we obtain: since its data model allows only one geometry per LoD and the refined LoDs of Biljecki et al (2016) are not supported, the 3DCityDB importer selects either LoD1.2 or LoD1.3 from the input, and thus one LoD is missing. In cjdb, all available LoDs are stored together in the 'geometry' column.
### Data Retrieval
The data retrieval comparison was performed based on the execution time of SQL queries which aim to retrieve the same data from both databases. Postgres heavily relies on caching, therefore the queries below were run several times to ensure the cache was warm.
We performed 8 queries that we believe are representative of what a typical practitioner (or server hosting data to be downloaded) would need.
Those are listed in Table 4. The exact SQL queries we used for the 3DBAG dataset are listed in Appendix A; similar queries were used for the other 2 datasets.
Q1. Query based on attributes: 3DCityDB offers a list of predefined building attributes within the building table, which includes 'year_of_construction' and 'roof_type'; attributes that are not in this list are stored in the table cityobject_genericattrib. cjdb, on the other hand, offers more flexibility since all attributes remain in JSON format in the attributes column, regardless of the attribute name.
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{4}{c}{**3DCityDB**} & \multicolumn{4}{c}{**cjdb**} \\ \cline{2-10} & tables & indexes & TOAST & total & tables & indexes & TOAST & total \\ \hline
**3DBAG** & 5463 & 4322 & 112 & 9898 & 257 & 57 & 755 & 1070 \\
**NYC** & 590 & 735 & 0.5 & 1326 & 26 & 4 & 25 & 54 \\
**Vienna** & 30 & 42 & 0.5 & 73 & 1.5 & 0.5 & 4 & 6 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Database size comparison for 100 tiles of the 3DBAG dataset, all values in MB.
Table 4: The 8 queries we used for the benchmark.
Since none of our datasets has attributes from 3DCityDB's predefined list, we compared attribute-based data retrieval for both databases using non-listed attributes. In this specific example, we queried all buildings with a roof height ('h_dak_max' for 3DBAG, 'HoeheDach' for Vienna) greater than 20 m. The New York dataset was not used for this query, since it has no specific roof-height attribute.
For cjdb no join is necessary since the attributes are stored together with the city object, whereas the equivalent query in 3DCityDB requires a join between the cityobject and cityobject_genericattrib tables. As shown in Table 5, cjdb is faster than 3DCityDB for Vienna but performs almost the same for the 3DBAG dataset. This is related to both the size of the dataset and the size of the attributes column in cjdb: the larger the jsonb column, the slower the query, since the information stored in the TOAST tables has to be retrieved and decompressed.
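To make the difference concrete, the sketch below contrasts the two query styles for Q1. It is not the exact query of Appendix A: the schema prefix, the generic-attribute columns (attrname, realval) and the join condition are illustrative assumptions based on the description above.

```python
# Sketch of Q1 (buildings with roof height > 20 m) in both data models.
# Table/column names follow the description in the text and are illustrative.
Q1_CJDB = """
SELECT id
FROM   city_objects
WHERE  (attributes ->> 'h_dak_max')::float > 20;    -- jsonb key, no join needed
"""

Q1_3DCITYDB = """
SELECT b.id
FROM   citydb.building b
JOIN   citydb.cityobject_genericattrib a
       ON a.cityobject_id = b.id
WHERE  a.attrname = 'h_dak_max'
  AND  a.realval  > 20;                              -- generic-attribute join
"""
```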
Q2 & Q3. 2D Spatial queries: The spatial queries in 3DCityDB tend to be complicated since the geometries of the objects are stored in other tables and joins are required to retrieve them. As an example, in order to find all buildings within a certain 2D bounding box, their ground surfaces have to be retrieved from the surface_geometry table. An alternative but less accurate solution would be to use the bounding-box geometry of the object, which is stored in the cityobject table. cjdb does not require any join to retrieve the same data, since the city_objects table contains the ground geometries of the objects. As a result, the cjdb query is significantly faster than its 3DCityDB equivalent, as shown in Table 5. We observe similar speed differences with other spatial queries, such as retrieving the building intersecting a given point in 2D (Q3).
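Under the same caveats, a minimal sketch of Q2 is shown below; PostGIS is assumed, and the column names (a ground-geometry column in cjdb, the thematic_surface/surface_geometry join path in 3DCityDB), the bounding-box coordinates and the SRID are placeholders rather than the exact Appendix A queries.

```python
# Sketch of Q2 (buildings whose ground surface intersects a 2D bounding box).
# PostGIS assumed; names, bbox coordinates and SRID are illustrative only.
Q2_CJDB = """
SELECT id
FROM   city_objects
WHERE  ST_Intersects(ground_geometry,
                     ST_MakeEnvelope(85000, 446750, 85500, 447250, 28992));
"""

Q2_3DCITYDB = """
SELECT DISTINCT ts.building_id
FROM   citydb.thematic_surface ts
JOIN   citydb.surface_geometry sg
       ON sg.root_id = ts.lod2_multi_surface_id      -- approximate join path
WHERE  ST_Intersects(sg.geometry,
                     ST_MakeEnvelope(85000, 446750, 85500, 447250, 28992));
"""
```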
Q4. Number of parts query: We compared how the two databases perform for retrieving the number of building parts per building. For simplicity we considered only first-level children for each building, and we also required buildings without any parts to be part of the result. Both queries require a single join and their execution times vary depending on the size of the dataset. For the 3DBAG dataset, cjdb is slower than 3DCityDB, but for the other datasets cjdb is faster. It is worth mentioning, however, that for cjdb the join with the city_objects table could be skipped and the number of parts could be retrieved with a single aggregation query on the city_object_relationships table, but only for the objects that have parts.
Q5. LoD-based query: While one strength of CityGML is that many LoDs of one city object can be stored with the object, exporting a given LoD to perform an analysis is useful in practice. We therefore tested a query to obtain all building ids with LoD1 (in 3DCityDB) and LoD1.2 (in cjdb) for the 3DBAG dataset, and LoD2 for the other 2 datasets. cjdb is slower for all datasets; this is probably due to the large size of the 'geometry' column, which is in jsonb format and therefore requires joining to the TOAST tables.
It should be noted that if the geometry of the building also needs to be retrieved, the equivalent query for 3DCityDB requires joins which significantly slow it down.
Q6, Q7 & Q8. INSERT/UPDATE/DELETE attribute queries: We also compared how the databases perform when adding, modifying or deleting an attribute of a building. When it comes to adding new attributes to the database (Q6), cjdb is considerably slower than 3DCityDB. This can be attributed to the different structure of the databases: new attributes in 3DCityDB can simply be inserted as rows in the relevant table, whereas in cjdb the jsonb value must be modified. However, when it comes to updating existing attributes (Q7), cjdb is similar in speed for datasets with many attributes and faster for datasets with few attributes. Deleting attributes (Q8) follows a similar behaviour. Generally, the number of attributes and the size of the dataset can significantly affect queries on the jsonb columns.
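On the cjdb side, Q6–Q8 reduce to updates of the jsonb attributes column; a sketch using PostgreSQL's jsonb operators is given below (attribute names and values are placeholders). In 3DCityDB the same operations are plain row inserts/updates/deletes in cityobject_genericattrib, which is consistent with adding attributes being faster there.

```python
# Sketch of Q6-Q8 on the cjdb side: add / update / delete one attribute stored
# in the jsonb "attributes" column. Attribute name and values are placeholders.
Q6_ADD = """
UPDATE city_objects
SET    attributes = attributes || '{"inspected": true}'::jsonb;
"""

Q7_UPDATE = """
UPDATE city_objects
SET    attributes = jsonb_set(attributes, '{h_dak_max}', '25.0'::jsonb)
WHERE  attributes ? 'h_dak_max';
"""

Q8_DELETE = """
UPDATE city_objects
SET    attributes = attributes - 'h_dak_max';
"""
```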
## 5 Conclusions and future work
The cjdb project started with the goal of creating a simpler and leaner alternative to 3DCityDB for web servers, allowing users to efficiently store and retrieve 3D city models. Our data model follows the Simple Feature paradigm and has only 3 tables (instead of 66 for 3DCityDB). This is achieved by maintaining the structure of CityJSON and storing JSON directly in the database, using the PostgreSQL type jsonb. Because the structure of the data model is close to that of CityJSON files, we can significantly improve the import/export times to/from a database. Furthermore, as we have shown,
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline & \multicolumn{2}{c}{**3DBAG**} & \multicolumn{2}{c}{**NYC**} & \multicolumn{2}{c}{**Vienna**} \\ \cline{2-7} & 3DCityDB & cjdb & 3DCityDB & cjdb & 3DCityDB & cjdb \\ \hline
**Q1** & 290 & 285 & – & – & 22 & 2 \\
**Q2** & 172 & 1.4 & 59 & 0.3 & 14 & 0.4 \\
**Q3** & 1.0 & 0.2 & 0.4 & 0.1 & 0.2 & 0.1 \\
**Q4** & 418 & 478 & 240 & 46 & 15 & 2 \\
**Q5** & 343 & 1877 & 217 & 392 & 15 & 34 \\
**Q6** & 3981 & 11 660 & 1135 & 1425 & 15 & 10 \\
**Q7** & 10 796 & 10 903 & 2333 & 1161 & 15 & 9 \\
**Q8** & 4393 & 10 984 & 1040 & 682 & 59 & 8 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Average execution times for the benchmark queries (all times in ms).
our simple model is around ten times more compact and it offers retrieval speeds comparable to those of 3DCityDB. More specifically, we notice that cjdb performs better for 2D spatial queries but is slower (up to 3 times) when the jsonb columns need to be altered. We also notice that query speed on the jsonb columns is significantly affected by the size of the dataset: the more attributes the jsonb columns contain, the more time is required to parse them.
Figure 3 shows the 8 queries, with each time normalised (we divided the time of cjdb by that of 3DCityDB).
It should be mentioned that the data model of 3DCityDB is more generic and can be implemented with 3 different DBMSs (not only PostgreSQL). It also allows us to query semantic surfaces, something that is currently not possible with the cjdb model because it stores the semantic surfaces in jsonb format in the 'geometry' column of each city object. However, we plan to remedy the situation by implementing database functions to extract the geometries of semantic surfaces. Furthermore, at the moment, retrieving data from cjdb requires the user to parse the jsonb and extract the necessary information, which is far from optimal. We plan in the near future to implement helper plugins in mainstream open-source products (eg QGIS) to be
Figure 3: Data retrieval comparison between cjdb and 3DCityDB for the 8 queries using the 3DBAG dataset. The y-axis corresponds to (cjdb/3DCityDB); a bar lower than the yellow line (located at 1.0) means that cjdb has a faster query time, a bar higher than 1.0 means a slower query time.
able to view and query cjdb, similar to what is currently being built for 3DCityDB8.
Footnote 8: [https://github.com/tudelft3d/3DCityDB-Tools-for-QGIS](https://github.com/tudelft3d/3DCityDB-Tools-for-QGIS)
While this was not previously discussed, it should be mentioned that CityJSON Extensions are supported by the cjdb data model. This means that extra functions to dynamically extend the data model (as required for 3DCityDB, see Yao and Kolbe (2017)) are not necessary. This is because CityJSON Extensions, unlike CityGML Application Domain Extensions (ADEs), are constrained to follow the structure and rules of other city objects (see Ledoux et al (2019) for more details).
We also plan to add support for textures and materials; at the moment this information is simply stored in the JSON of each geometry, but, as is the case for semantic surfaces, we will add database functions to allow users to query and update it.
|
2303.05807 | Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields | Commonly captured low-light scenes are challenging for most computer vision
techniques, including Neural Radiance Fields (NeRF). Vanilla NeRF is
viewer-centred and simplifies the rendering process as light emission from 3D
locations in the viewing direction only, thus failing to model the darkness
induced by low illumination. Inspired by the emission theory of the ancient
Greeks, which holds that visual perception is accomplished by rays cast from the
eyes, we make slight modifications to vanilla NeRF to train on multiple views of
low-light scenes, and can thus render out the well-lit scene in an unsupervised manner. We
introduce a surrogate concept, Concealing Fields, that reduces the transport of
light during the volume rendering stage. Specifically, our proposed method,
Aleth-NeRF, directly learns from the dark image to understand volumetric object
representation and concealing field under priors. By simply eliminating
Concealing Fields, we can render a single or multi-view well-lit image(s) and
gain superior performance over other 2D low-light enhancement methods.
Additionally, we collect the first paired LOw-light and normal-light Multi-view
(LOM) datasets for future research. This version is invalid, please refer to
our new AAAI version: arXiv:2312.09093 | Ziteng Cui, Lin Gu, Xiao Sun, Xianzheng Ma, Yu Qiao, Tatsuya Harada | 2023-03-10T09:28:09Z | http://arxiv.org/abs/2303.05807v2 | # Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields
###### Abstract
Commonly captured low-light scenes are challenging for most computer vision techniques, including Neural Radiance Fields (NeRF). Vanilla NeRF is viewer-centred: it simplifies the rendering process as light emission from 3D locations in the viewing direction only, thus failing to model the darkness induced by low illumination. Inspired by the emission theory of the ancient Greeks, which holds that visual perception is accomplished by rays cast from the eyes, we make slight modifications to vanilla NeRF so that, trained on multiple views of a low-light scene, it can render out the well-lit scene in an **unsupervised** manner. We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage. Specifically, our proposed method, **Aleth-NeRF**, directly learns from the dark images to understand the volumetric object representation and the concealing field under priors. By simply eliminating the Concealing Fields, we can render single- or multi-view well-lit images and gain superior performance over other 2D low-light enhancement methods. Additionally, we collect the first paired **LO**w-light and normal-light **M**ulti-view (LOM) dataset for future research. Our code and dataset will be released soon.
## 1 Introduction
Neural Radiance Field (NeRF) [34] has been demonstrated to be effective in understanding 3D scenes from 2D images and generating novel views. However, similar to most computer vision algorithms such as semantic segmentation [59, 24], object detection [54, 6], _etc._, NeRF often fails in sub-optimal lighting scenes captured under insufficient illumination or limited exposure time [33].
This is because vanilla NeRF is _viewer-centred_: it models the amount of light emitted from a location towards the viewer without accounting for the interaction between illumination and the scene. The emitted light results from the reflection of environmental light on the scene, and the reflected light is further refracted, absorbed, and thus attenuated in the environment again [48]. As a result, the NeRF algorithm interprets a dark scene as resulting from insufficient radiation of the 3D particles representing objects in the scene. When training NeRF on dark images with a relatively high zero-mean noise signal [57], the algorithm may struggle to maintain multi-view projection consistency, leading to poor reconstruction results, as shown in Fig. 2(a).
One solution is extending NeRF to _object-centred_ rendering. That is, the observed lightness of the image is not due to the radiation intensity of the 3D particles representing the object, but rather to the attenuation of the light caused by other physical factors in the environment. However, this solution [48, 67] usually requires known target lighting conditions and additional parametric modeling of the lighting.
Leveraging mature 2D low-light image enhancement (LLIE) methods appear to be another solution. However, Fig. 2(b) shows that directly enhancing the dark images by 2D enhancement methods does not guarantee accurate NeRF estimation, since the independent and inconsistent enhancement of 2D images in multi-views may lead to the
Figure 1: We assume objects are naturally visible. However, the Concealing Field attenuates the light in the viewing direction, making the left user see a low-light scene. Aleth-NeRF takes a low-light image as input and unsupervised learns the distribution of the Concealing Field. Then, we unconceal (alethia) the Concealing field to render the enhanced image. This scene is taken from LOM dataset.
destruction of 3D geometric consistency.
To explore the NeRF application in these commonly captured but often precluded low-light scenes, we aim to propose a framework that is supervised directly on low-light sRGB images. The rendering process in NeRF is similar to the emission theory held by the ancient Greeks. Emission theory ignores the incident light and postulates that visual rays emitted from the eye travel in straight lines and interact with objects to form visual perception. Therefore, the darkness of an entity is solely caused by the particles between the object and the eye. In other words, all objects are visible by default unless they are concealed. Inspired by this worldview, we assume a simple but NeRF-friendly model in which the concealing fields along the viewing direction make the viewer see a low-light scene \(\mathbf{C}^{low}\), as shown in Fig. 1. Aletheia (ἀλήθεια), normally translated as "unconcealedness", "disclosure" or "revealing" [14], removes the concealing fields to let the viewer on the right side catch sight of the normal-light scene \(\mathbf{\hat{C}}^{nor}\).
Here, we slightly modify NeRF and propose Aleth-NeRF for low-light conditions, which takes multi-view dark images as input to train the model and learns the volumetric representation jointly with the Concealing Fields. As shown in Fig. 1, we model this Concealing Field between object and viewer to naturally extend the transmittance function in NeRF [34]. In the training stage, we aim to estimate not only the Radiance Fields representing the consistent objects, but also the Concealing Fields explaining the image transition from normal-light to low-light. In the testing stage, we can directly render the normal-lit scenes by simply eliminating the Concealing Fields. In addition, we collect the first paired low-light and normal-light multi-view dataset, **LOM**, to facilitate the understanding of 3D scenes under low-light conditions.
Our contribution could be summarized as follow:
* We propose **Aleth-NeRF**, to our knowledge the first NeRF trained on dark multi-view sRGB images for unsupervised enhancement. Inspired by ancient Greek philosophy, we naturally extend the transmittance function in vanilla NeRF by modeling Concealing Fields between scene and viewer to interpret the low-light condition, endowing robustness to darkness with minimal modifications to vanilla NeRF.
* We contribute the first paired low-light \(\&\) normal-light multi-view real-world dataset, **LOM**. In comparisons with various 2D image enhancement methods, experiments show that **Aleth-NeRF** achieves satisfactory performance in both enhancement quality and multi-view consistency.
## 2 Related Work
### Low-light Image Enhancement
Low-light image enhancement (LLIE) is a classical image processing task aiming to recover an image taken under inadequate illumination to its counterpart taken under normal illumination. Traditional LLIE methods usually make pre-defined illumination assumptions and use hand-crafted features for image restoration. For example, Retinex-based methods [26, 40, 12] assume that a low-light image is the product of illumination and reflectance. Histogram Equalization (HE) based methods [10, 49, 41] perform LLIE by spreading out the most frequent intensity values.
With the fast growth of deep learning in recent years, deep neural network (DNN) based methods have become the mainstream solutions. A series of CNN- or Transformer-based methods have been developed in this area [28, 56, 35, 55, 58, 4, 70, 72, 69, 25, 47, 5, 53, 52], which are trained on paired low-light and normal-light images for supervision and achieve satisfactory results.
Beyond supervised methods, unsupervised low-light image enhancement is a more general but challenging setup, which does not require ground-truth normal-light images for training. Some works [18, 19] leverage the statistical illumination distribution in the target image domain for unsupervised training. Other works complete this task with only low-light information [11, 30, 21, 65, 73, 37]. For example, Zero-DCE [11] proposes to train the network only with low-light images and regularizes it with modeled curves and additional non-reference constraints (_e.g._ color constancy [3]). SCI [30] completes this task with a self-calibrated module and an unsupervised loss.
Figure 2: (a). NeRF rendering results in normal-light scene and low-light scene. (b). NeRF rendering on enhanced scene by 2D image enhancement methods LIME [12] and IAT [5]. (c) Aleth-NeRF rendering results in low-light scene.
However, current LLIE methods are almost all built on 2D image-space operations, which fail to exploit the 3D geometry of the scene and make it hard to deal with multi-view inputs. The proposed Aleth-NeRF belongs to the unsupervised family but is empowered with a much stronger and more reliable understanding of 3D space. We follow the volume rendering formulation of NeRF and trace this problem back to the phase of image rendering from 3D radiant particles, where additional light Concealing Fields are introduced and learned in an unsupervised manner for the LLIE problem, while maintaining multi-view consistency.
### Novel View synthesis with NeRF
NeRF [34] is proposed for novel view synthesis from a collection of posed input images. It has gained substantial attention even though the quality of the rendered images is not yet comparable to that of state-of-the-art GAN [23] or diffusion-based [15, 8] models. The unique advantage of NeRF models lies in preserving 3D geometric consistency thanks to the physical volume rendering scheme.
Beyond general efforts to speed up and improve NeRF training [2, 44, 27, 63, 17, 42, 7, 36] or to use NeRF for scene understanding applications [71, 38, 62, 61, 9], many later works focus on improving NeRF's performance under various degradation conditions, such as blur [29], noise [39], reflections [13] and glossy surfaces [50], or use NeRF for low-level tasks in 3D space, like super-resolution [51, 1] and HDR reconstruction [60, 20].
Another line of research extends NeRF for lightness editing in 3D space. Some works, like NeRF-W [31], focus on rendering NeRF with uncontrolled in-the-wild images, while other relighting works [48, 43, 67] rely on known illumination conditions and introduce additional physical elements (_i.e._ normals, light, albedo, etc.), along with complex parametric modeling of these elements. Moreover, these methods are not specifically designed for scene enhancement under extreme low-light conditions.
Among these, RAW-NeRF [33] is the closest to our work: it proposes to render a NeRF model in the nighttime RAW domain with high dynamic range and then post-process the rendered scene with an image signal processor (ISP) [22]. RAW-NeRF has shown a preliminary ability to enhance scene lighting but requires sufficient RAW data for training. We present the first work that renders NeRF with low-light sRGB inputs and injects unsupervised LLIE into 3D space via an effective Concealing Fields mechanism.
## 3 Method
We first briefly review the core concepts of vanilla NeRF (the radiance field, volume rendering, and the image reconstruction loss) in Section 3.1. Then we introduce the proposed Concealing Fields in Section 3.2 and show how they are integrated into the NeRF framework to mimic the process of low-light scene generation. Finally, in Section 3.3, we show how to learn the Concealing Fields effectively in an unsupervised manner.
### Neural Radiance Field Revisited
In the NeRF representation [34], a Radiance Field is defined as the density \(\sigma\) and RGB color value \(c\) of a 3D location **x** under a 2D viewing direction **d**. The density \(\sigma\), on the one hand, represents the radiation capacity of the particle itself at **x**, and on the other hand, controls how much radiance is absorbed when other lights pass through **x**.
When rendering an image with a neural radiance field, a camera ray \(\textbf{r}(t)=\textbf{o}+t\cdot\textbf{d}\) (\(\textbf{r}\in\textbf{R}\)), casting from the given camera position **o** towards direction **d**, is used to accumulate all the radiance along the ray to render its corresponding pixel value \(\textbf{C}(\textbf{r})\). This process is commonly implemented with the volume rendering function [32]. Formally,
\[\textbf{C}(\textbf{r})=\int_{t_{n}}^{t_{f}}T(\textbf{r}(t))\sigma(\textbf{r}( t))c(\textbf{r}(t),\textbf{d})dt, \tag{1}\]
where
\[T(\textbf{r}(t))=\exp(-\int_{t_{n}}^{t}\sigma(\textbf{r}(s))ds), \tag{2}\]
is known as the _accumulated transmittance_. It denotes the radiance decay rate of the particle at \(\textbf{r}(t)\) when it is occluded by particles closer to the camera (at \(\textbf{r}(s)\), \(s<t\)). The integrals are computed by a discrete approximation over sampled 3D points along the ray **r**.
\[\textbf{C}(\textbf{r})=\sum_{i=1}^{N}T(\textbf{r}(i))(1-\exp(-\sigma(\textbf{ r}(i))\cdot\delta))\cdot c(\textbf{r}(i),\textbf{d}) \tag{3}\]
\[T(\textbf{r}(i))=\exp\left(-\sum_{j=1}^{i-1}\sigma(\textbf{r}(j))\cdot\delta\right) \tag{4}\]
where \(\delta\) is a constant distance value between adjacent sample points under uniform sampling.
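For concreteness, the discrete accumulation of Eqs. 3-4 amounts to a few tensor operations; the following PyTorch snippet is our own minimal sketch (the tensor shapes and the exclusive cumulative product via concatenation are implementation choices, not code from [34]).

```python
import torch

def render_rays(sigma, rgb, delta):
    """Discrete volume rendering of Eqs. 3-4 (sketch).

    sigma: (R, N)    densities of N samples along R rays
    rgb:   (R, N, 3) colours of the samples
    delta: scalar (or (R, N)) distance between adjacent samples
    Returns the rendered pixel colours, shape (R, 3).
    """
    alpha = 1.0 - torch.exp(-sigma * delta)                 # per-sample opacity
    ones = torch.ones_like(alpha[..., :1])
    # Exclusive cumulative product: transmittance T(r(i)) over samples j < i.
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10], dim=-1), dim=-1)[..., :-1]
    weights = trans * alpha                                  # contribution of sample i
    return (weights.unsqueeze(-1) * rgb).sum(dim=-2)         # C(r) = sum_i w_i c_i
```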
Neural Network Implementation. NeRF learns two multilayer perceptron (MLP) networks (\(F_{\Theta_{\sigma}}\), \(F_{\Theta_{c}}\)) to map the 3D location \(\textbf{r}(i)\) and 2D viewing direction **d** to its density \(\sigma\) and colour \(c\). Specifically,
\[F_{\Theta_{\sigma}}(\textbf{r}(i))\rightarrow\sigma(\textbf{r}(i)),\textbf{h} \tag{5}\]
\[F_{\Theta_{c}}(\textbf{h},\textbf{d})\to c(\textbf{r}(i),\textbf{d}) \tag{6}\]
where **h** is a hidden feature vector, \(\Theta_{\sigma}\) and \(\Theta_{c}\) are learnable network parameters. Note that \(c(\textbf{r}(i),\textbf{d})\) and \(\sigma(\textbf{r}(i))\) are further activated by Sigmoid and ReLU functions so that their value ranges are \([0,1)\) and \([0,\infty)\), respectively.
Given ground truth rendered image \(\mathbf{C}\), the network is optimized by minimizing the image reconstruction loss between predicted image \(\hat{\mathbf{C}}\) and ground truth image \(\mathbf{C}\):
\[\mathcal{L}_{nerf}=\sum_{\mathbf{r}}^{\mathbf{R}}||\hat{\mathbf{C}}(\mathbf{r})- \mathbf{C}(\mathbf{r})||_{2}. \tag{7}\]
We refer the reader to the original paper [34] for details of techniques such as positional encoding and hierarchical volume sampling.
### Aleth-NeRF with Concealing Field Assumption
Given low-light scene \(\{\mathbf{C}^{low}(\mathbf{r}),\mathbf{r}\in\mathbf{R}\}\) taken under poor illumination, the goal of LLIE is to recover its normal-lit correspondence \(\{\mathbf{C}^{nor}(\mathbf{r}),\mathbf{r}\in\mathbf{R}\}\). The key idea of our model is that we assume \(\mathbf{C}^{low}\) and \(\mathbf{C}^{nor}\) are rendered under the same Radiance Field condition (namely, they share the same underlying densities \(\sigma\) and colors \(c\) at all 3D locations in the scene) but with or without the proposed Concealing Field assumption.
Global and Local Concealing Fields. We design two types of Concealing Fields, namely the local Concealing Field \(\Omega\) and the global Concealing Field \(\Theta_{G}\), for low-light scene generation. Specifically, \(\Omega\) controls light concealing at the voxel level while \(\Theta_{G}\) controls it at the scene level.
The local Concealing Field, denoted as \(\Omega(\mathbf{r}(i))\), defines an extra light-concealing capacity of a particle at 3D location \(\mathbf{r}(i)\). As shown in Fig. 3, \(\Omega(\mathbf{r}(i))\) is learned for each 3D location. For the implementation, we add a linear head \(F_{\Theta_{\Omega}}\) on top of the first MLP network \(F_{\Theta_{\sigma}}\), and an additional convolution layer is added after \(F_{\Theta_{\Omega}}\) to generate the local concealing field \(\Omega(\mathbf{r}(i))\). The convolution builds spatial relations between pixels and lets the Concealing Field carry lighting information rather than structure information [64], which makes the rendering results smooth (see Fig. 4).
\[F_{\Theta_{\Omega}}(F_{\Theta_{\sigma}}(\mathbf{r}(i)))\rightarrow\Omega( \mathbf{r}(i)) \tag{8}\]
The global Concealing Field, denoted as \(\Theta_{G}(i)\), is defined as a set of learnable parameters corresponding to the camera distance \(i\), shared by all camera rays in \(\mathbf{R}\). \(\Theta_{G}(i)\) is kept the same throughout a scene rendering and is independent of the pixel location.
In the training stage, when we render and reconstruct a given low-light image \(\mathbf{C}^{low}\) with the reconstruction loss \(\mathcal{L}_{nerf}\) computed between predicted \(\hat{\mathbf{C}}^{low}\) and ground truth \(\mathbf{C}^{low}\) images, the _accumulated transmittance_\(T\) in Eq. 4 is further modulated with the additional local Concealing Field \(\Omega\) and global Concealing Field \(\Theta_{G}\), to mimic the process of light suppression in a normal scene:
\[T^{low}(\mathbf{r}(i))=\exp\left(-\sum_{j=1}^{i-1}\sigma(\mathbf{r}(j))\cdot \delta\right)\cdot\prod_{j=1}^{i-1}\Omega(\mathbf{r}(j))\Theta_{G}(j) \tag{9}\]
Figure 4: Ablation analysis on the LOL dataset [56]: a larger convolution kernel for generating the local concealing field \(\Omega\) further improves the smoothness of the enhanced scene \(\hat{\mathbf{C}}^{nor}\).
Figure 3: Overview of the Aleth-NeRF architecture. Local \(\Omega\) and Global \(\Theta_{G}\) Concealing Fields are additionally learned and integrated into the NeRF framework. We use a modified volume rendering function to render low-light scene taking the Concealing Fields into account.
The testing stage performs aletheia or unconcealedness to remove Concealing Fields \(\Omega\) and \(\Theta_{G}\) to render the predicted normal-light image \(\hat{\textbf{C}}^{nor}\) directly. As a result, we transform the unsupervised LLIE problem into unsupervised learning of Concealing Fields in the low-light scene. By adding or removing the Concealing Fields, Aleth-NeRF can render images under low-light or normal-light conditions of the same scene easily.
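In code, Eq. 9 only multiplies an extra exclusive cumulative product of \(\Omega\cdot\Theta_{G}\) into the transmittance of the sketch above; the lines below are again our own paraphrase rather than the released implementation, and setting this extra product to 1 recovers the normal-light rendering used at test time.

```python
import torch

def concealed_transmittance(sigma, omega, theta_g, delta):
    """Low-light transmittance of Eq. 9 (sketch, not the official code).

    sigma, omega: (R, N) per-sample density and local Concealing Field
    theta_g:      (N,)   global Concealing Field along the camera distance
    """
    alpha = 1.0 - torch.exp(-sigma * delta)
    ones = torch.ones_like(alpha[..., :1])
    # Vanilla exclusive transmittance exp(-sum_{j<i} sigma_j * delta)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10], -1), -1)[..., :-1]
    # Extra attenuation prod_{j<i} Omega_j * Theta_G(j); omit it at test time
    conceal = torch.cumprod(torch.cat([ones, omega * theta_g], -1), -1)[..., :-1]
    return trans * conceal, alpha
```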
### Effective Priors for Unsupervised Training
In this section, we introduce the priors and constraints that promote unsupervised learning of Concealing Fields. Training strategy overview is shown in Fig. 5.
**Value Range Prior.** The value range of the Concealing Fields determines the lightness of the enhanced scene \(\hat{\textbf{C}}^{nor}\); in general, the Concealing Field values should be less than \(1\). For the local concealing field \(\Omega\), we add a sigmoid activation after the convolution (see Fig. 3), so the activated \(\Omega\) lies in the range \((0,1)\). Meanwhile, the global concealing field \(\Theta_{G}(i)\) is a set of learnable parameters in the network, whose initial values are all set to \(0.3\) along the camera ray.
To control the degree of image enhancement, we add a control loss \(\mathcal{L}_{con}\) on the local concealing field \(\Omega\), where we introduce a hyper-parameter \(\eta\) representing the degree of Aleth-NeRF's concealing ability. To calculate \(\mathcal{L}_{con}\), we apply average pooling (stride \(64\)) to the local concealing field \(\Omega\) and then minimize the discrepancy between the mean of the pooled \(\Omega\) and the concealing degree \(\eta\):
\[\mathcal{L}_{con}=||\mathrm{avgpool}(\Omega(\textbf{r}(\mathrm{i})))-\eta||^ {2}, \tag{10}\]
Here, the concealing degree \(\eta\) should be set larger than 0. In our work, we set \(\eta\) to \(0.05\) in extremely dark low-light conditions and to \(0.1\) in conditions not so dark.
**Structure Similarity Prior.** To keep the structure of the predicted normal-light image \(\hat{\textbf{C}}^{nor}\) consistent with the original low-light image \(\textbf{C}^{low}\), we additionally add a structure loss between them, which preserves detail and increases contrast. Specifically, for each rendered pixel \(\hat{\textbf{C}}^{nor}(\textbf{r})\), we enforce structural consistency with the neighbouring pixels \(\hat{\textbf{C}}^{nor}(\textbf{r}-1)\) and \(\hat{\textbf{C}}^{nor}(\textbf{r}+1)\) along the ray as follows:
\[\begin{split}\mathcal{L}_{st}=&\sum_{k\in[+1,-1]} (\hat{\textbf{C}}^{nor}(\textbf{r})-\hat{\textbf{C}}^{nor}(\textbf{r}+k))- \\ &\frac{0.5}{\eta}(\textbf{C}^{low}(\textbf{r})-\textbf{C}^{low}( \textbf{r}+k)),\end{split} \tag{11}\]
Here \(\frac{0.5}{\eta}\) determines the contrast of the generated scene \(\hat{\textbf{C}}^{nor}\) and is inversely proportional to the concealing degree \(\eta\). More ablation analyses on the concealing and contrast degrees are in our supplementary material.
**Color Constancy Prior.** Pixels captured under low-light conditions lose some color information, and directly removing the concealing fields can easily cause color imbalance. To regularize the color of the predicted normal-light image \(\hat{\textbf{C}}^{nor}\), we add an extra color constancy loss \(\mathcal{L}_{cc}\) on \(\hat{\textbf{C}}^{nor}\). Here we assume that \(\hat{\textbf{C}}^{nor}\) obeys the gray-world assumption [3, 11, 19], as follows:
\[\mathcal{L}_{cc}=\sum_{p,q}(\hat{\textbf{C}}^{nor}(\textbf{r})^{p}-\hat{ \textbf{C}}^{nor}(\textbf{r})^{q})^{2}, \tag{12}\]
where \((p,q)\in\{(R,G),(G,B),(B,R)\}\) represents any pair of color channels.
Overall, the total loss function \(\mathcal{L}_{total}\) of Aleth-NeRF includes 4 parts: the low-light scene rendering loss \(\mathcal{L}_{nerf}\) and the additional constraint losses \(\mathcal{L}_{con}\), \(\mathcal{L}_{st}\), \(\mathcal{L}_{cc}\). The total loss used for training can be written as:
\[\mathcal{L}_{total}=\mathcal{L}_{nerf}+\lambda_{1}\cdot\mathcal{L}_{con}+ \lambda_{2}\cdot\mathcal{L}_{st}+\lambda_{3}\cdot\mathcal{L}_{cc}, \tag{13}\]
where \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are three non-negative parameters to balance total loss weights, which we set to \(1e^{-4}\), \(1e^{-3}\) and \(1e^{-4}\) respectively.
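A compact sketch of the three priors (Eqs. 10-12) is given below. The equations leave the exact reductions implicit, so the squared-error forms, the 2D average pooling on an \(\Omega\) map of shape (N, 1, H, W), the neighbourhood direction of the structure term, and the gray-world formulation through per-channel means are our assumptions rather than the released code.

```python
import torch
import torch.nn.functional as F

def control_loss(omega_map, eta=0.05):
    """Eq. 10 (sketch): keep the average-pooled local concealing field near eta.
    omega_map: (N, 1, H, W) with H, W >= 64."""
    pooled = F.avg_pool2d(omega_map, kernel_size=64, stride=64)
    return (pooled.mean() - eta) ** 2

def structure_loss(c_nor, c_low, eta=0.05):
    """Eq. 11 (sketch): match gradients of the enhanced image to scaled
    low-light gradients. c_nor, c_low: (H, W, 3); neighbours taken along width."""
    loss = 0.0
    for k in (1, -1):
        d_nor = c_nor - torch.roll(c_nor, k, dims=1)
        d_low = c_low - torch.roll(c_low, k, dims=1)
        loss = loss + ((d_nor - (0.5 / eta) * d_low) ** 2).mean()
    return loss

def color_constancy_loss(c_nor):
    """Eq. 12 (sketch): gray-world prior on the mean R/G/B of the enhanced image."""
    r, g, b = c_nor[..., 0].mean(), c_nor[..., 1].mean(), c_nor[..., 2].mean()
    return (r - g) ** 2 + (g - b) ** 2 + (b - r) ** 2
```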
Discussion. Although our method was originally designed for multi-view conditions, surprisingly, it achieves the same excellent low-light enhancement results for single-view images (see experiments in Sec. 4.1). Despite the ambiguity of depth estimation and the lack of novel-view synthesis capability in that setting, we believe that the success of single-view low-light enhancement greatly benefits from the effective priors presented in this section.
## 4 Experiments
In this section, we first benchmark on the single-image low-light enhancement dataset LOL [56] in Sec. 4.1. Then we introduce our collected **LOM** dataset with paired
Figure 5: Aleth-NeRF uses reconstruction loss (\(\mathcal{L}_{nerf}\)) on the low-light scene \(\textbf{C}^{low}\), meanwhile additional constraints (control loss \(\mathcal{L}_{con}\), structure loss \(\mathcal{L}_{st}\) and color constancy loss \(\mathcal{L}_{cc}\)) are added to regularize the predicted normal-light scene \(\hat{\textbf{C}}^{nor}\).
multi-view low-light and normal-light images in Sec. 4.2. Multi-view experimental results and analyses are shown in Sec. 4.3.
Our framework is based on the NeRF-Factory [16] toolbox. We adopt the Adam optimizer with an initial learning rate of \(5e^{-4}\), and a cosine learning-rate decay is applied every 2500 iterations. For the single-image experiments on the LOL [56] dataset, the camera position **o** and viewing direction **d** are fixed, and the batch size is set to 8192 for 20000 iterations of training. For the multi-view experiments on the **LOM** dataset, the batch size is set to 4096 for 62500 iterations of training.
### Generation Quality Assessment
We first conduct experiments on the benchmark single-image low-light enhancement dataset LOL [56]. LOL consists of 500 paired low-light images \(\textbf{C}^{low}\) and normal-light images \(\textbf{C}^{nor}\): 485 pairs form the training set and the other 15 pairs form the evaluation set.
During the training stage, we learn only from the 15 low-light images \(\textbf{C}^{low}\) in the evaluation set, without either the reference normal-light images \(\textbf{C}^{nor}\) or the other 485 low-light images in the training set. The quantitative comparison with various low-light enhancement methods [10, 56, 28, 68, 11, 12, 18, 21, 30, 73] is shown in Table 1, where we report three image quality metrics: PSNR, SSIM and LPIPS [66]. Table 1 shows that our method obtains satisfactory results among the unsupervised methods, confirming the **generation quality** of Aleth-NeRF. We show 4 examples in Fig. 6, including enhancement results of the current SOTA methods Zero-DCE [11], SCI [30] and IAT [5]. Fig. 6 shows that Aleth-NeRF produces high-quality and vivid results compared with various 2D enhancement methods.
making it hard to evaluate the model performance of multi-view low-light enhancement in real-world.
**LOM** dataset has 5 real-world scenes ("_bun_", "_chair_", "_sofa_", "_bike_", "_shrub_"). Each scene includes \(25\sim 65\) low-light and normal-light pairs. We collect the scenes with a DJI Osmo Action 3 camera and generate paired low-light and normal-light images by adjusting the exposure time and ISO while the other camera configurations are fixed. We also use a camera tripod to prevent camera shake (Fig. 7(a)) and capture multi-view images by moving and rotating the tripod. Additionally, for each scene, we add a DSLR color checker (Fig. 7(b)) to help us determine the color and better evaluate the generated images. Images are collected at a resolution of 3000 \(\times\) 4000. We down-sample the original resolution by a factor of 8 to 375 \(\times\) 500 for convenience, and generate the ground-truth view and angle information by running COLMAP [45, 46] on the normal-light scenes. For the dataset split, in each scene we choose 3 \(\sim\) 5 images as the testing set, 1 image as the validation set, and the other images as the training set. The Y-channel pixel distribution of each scene is shown in Fig. 7(c). Exemplary low-light scene enhancement results of the SOTA 2D enhancement methods IAT [5] and SCI [30] are shown in Fig. 7(d).
### Multi-view Enhancement Results
In this section, we show multi-view rendering results on the low-light scenes of the **LOM** dataset. We design multiple comparison experiments to evaluate generation quality and multi-view consistency. We first compare with vanilla NeRF [34], which has the same setting as Aleth-NeRF and is trained only with the low-light scene images \(\textbf{C}^{low}\). Then we compare with five low-light enhancement methods: histogram equalization (HE) [10], LIME [12], RetiNexNet [56], IAT [5] and SCI [30], where HE and LIME are two traditional enhancement methods, and IAT and SCI are two very recent SOTA network-based 2D enhancement methods. As shown in Table 2, we design two settings for comparison: (1) rendering low-light scenes with NeRF and then using 2D enhancement methods to post-process the low-light novel views, denoted "NeRF + *"; (2) using 2D enhancement methods to pre-process the training set and rendering NeRF on these enhanced views, denoted "* + NeRF". All comparison experiments use the same training epochs, batch size, learning rate, and strategies to
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c} \hline \multirow{2}{*}{method} & \multicolumn{2}{c|}{“_bun_”} & \multicolumn{2}{c|}{“_chair_”} & \multicolumn{2}{c|}{“_softa_”} & \multicolumn{2}{c|}{“_bike_”} & \multicolumn{2}{c|}{“_shrub_”} & \multicolumn{2}{c}{_mean_} \\ \cline{2-9} & PSNR/ SSIM/ LPIPS & PSNR/ SSIM/ LPIPS & PSNR/ SSIM/ LPIPS & PSNR/ SSIM/ LPIPS & PSNR/ SSIM/ LPIPS & PSNR/ SSIM/ LPIPS & PSNR/ SSIM/ LPIPS \\ \hline NeRF [34] & 7.53/ 0.313/ 0.414 & 6.01/ 0.142/ 0.599 & 6.26/ 0.206/ 0.566 & 6.32/ 0.067/ 0.625 & 8.00/ 0.027/ 0.684 & 5.24/ 0.151/ 0.578 \\ NeRF + HE [10] & 14.90/ 0.678/ 0.578 & 15.11/ 0.615/ 0.658 & 17.28/ 0.713/ 0.624 & 14.26/ 0.547/ 0.582 & 12.40/ 0.384/ 0.647 & 14.79/ 0.587/ 0.618 \\ NeRF + LIME [12] & 13.67/ 0.745/ 0.375 & 11.11/ 0.637/ 0.600 & 12.28/ 0.723/ 0.562 & 10.82/ 0.504/ 0.557 & 14.12/ 0.360/ 0.540 & 12.40/ 0.593/ 0.526 \\ NeRF + RetiNexNet [56] & 16.18/ 0.754/ 0.385 & 16.74/ 0.731/ 0.531 & 15.90/ 0.826/ 0.512 & 17.79/ 0.654/ 0.552 & 15.24/ 0.283/ 0.577 & 16.37/ 0.657/ 0.495 \\ NeRF + SCI [30] & 13.66/ 0.769/ 0.433 & 18.42/ 0.736/ 0.545 & 19.82/ 0.813/ 0.569 & 12.61/ 0.560/ 0.550 & 16.87/ 0.41/ 0.555 & 16.27/ 0.658/ 0.530 \\ NeRF + IAT [5] & 14.00/ 0.652/ 0.421 & 19.53/ 0.806/ 0.562 & 10.58/ 0.532/ 0.668 & 16.30/ 0.660/ 0.532 & 10.03/ 0.149/ 0.577 & 14.09/ 0.560/ 0.552 \\ \hline HE [10] + NeRF & 15.54/ 0.755/ 0.473 & 15.46/ 0.756/ 0.492 & 17.89/ 0.801/ 0.457 & 15.47/ 0.699/ 0.455 & 14.97/ 0.414/ 0.511 & 15.87/ 0.685/ 0.470 \\ LIME [12] + NeRF & 13.81/ 0.783/ 0.253 & 11.20/ 0.694/ 0.498 & 12.02/ 0.747/ 0.420 & 11.35/ 0.586/ 0.448 & 14.11/ 0.426/ **0.473** & 12.50/ 0.647/ 0.418 \\ RetiNexNet [56] + NeRF & 16.16/ 0.777/ 0.339 & 16.82/ 0.773/ **0.438** & 16.87/ 0.806/ 0.548 & 18.00/ **0.717**/ 0.448 & 14.65/ 0.269/ 0.513 & 16.50/ 0.668/ 0.457 \\ SCI [30] + NeRF & 13.62/ 0.821/ 0.309 & 11.75/ 0.756/ 0.526 & 10.10/ 0.750/ 0.505 & **19.10**/ 0.644/ 0.458 & 18.13/ 0.510/ 0.469 & 14.54/ 0.696/ 0.453 \\ IAT [5] + NeRF & 14.33/ 0.697/ 0.311 & 18.68/ 0.789/ 0.563 & 17.77/ 0.811/ 0.519 & 13.68/ 0.621/ 0.501 & 13.84/ 0.322/ 0.536 & 15.66/ 0.648/ 0.486 \\ \hline
**Aleth-NeRF** & **18.93**/ **0.825**/ **0.251** & **19.71**/ **0.812**/ 0.476 & **19.98**/ **0.851**/ **0.411** & 15.53/ 0.691 / **0.444** & **18.31**/ **0.511**/ 0.506 & **18.49**/ **0.738**/ **0.417** \\ \hline \end{tabular}
\end{table}
Table 2: LOM dataset results. We evaluate PSNR \(\uparrow\) (higher is better), SSIM \(\uparrow\) (higher is better) and LPIPS \(\downarrow\) (lower is better).
Figure 7: Collection detail of the **LOM** dataset.
ensure fairness.
The comparison results are shown in Table 2. All rendering results are compared with LOM's normal-light view counterparts. We report three image quality metrics: SSIM, PSNR and LPIPS [66]. The results of each scene are then averaged to generate the _mean_ results (last column in Table 2). From the results, we find that the "NeRF + *" methods almost always perform worse than the "* + NeRF" methods. This is probably due to the poor image quality NeRF produces in dark scenes, which makes the enhancement methods fail on the dark scenes generated by NeRF. Meanwhile, for the "* + NeRF" methods, although the enhancement quality is good on the training set (see Fig. 7(d)), 2D enhancement methods lack multi-view consistency and often fail on novel view generation (see Fig. 8), making it easier to generate artifacts and noise. Overall, our Aleth-NeRF is an end-to-end method and gains the best performance on most scenes and on the _mean_ results; the generated novel views maintain both multi-view consistency and image quality. Please refer to our supplementary material for more visualization results and ablation analyses.
## 5 Conclusion and Discussion
We propose a novel unsupervised method to handle multi-view synthesis in low-light conditions, which directly takes low-light scenes as input and renders out normal-light scenes. Inspired by the wisdom of the ancient Greeks, we introduce a new concept: Concealing Fields. Experiments demonstrate our superior performance in both image quality and 3D multi-view consistency.
One limitation is that Aleth-NeRF must be trained separately for each scene, as with the original NeRF [34]. Besides, Aleth-NeRF may fail in scenes with non-uniform lighting or shadows. We will address these problems in future work.
Figure 8: Multi-view rendering results on **LOM** dataset’s "_sofa_", "_bike_" and “_shrub_” scenes.
## 6 Acknowledgments
This work was partially supported by JST Moonshot R\(\&\)D Grant Number JPMJPS2011, CREST Grant Number JPMJCR2015 and Basic Research Grant (Super AI) of Institute for AI and Beyond of the University of Tokyo. Also this work is partially supported by the National Key R\(\&\)D Program of China(NO.2022ZD0160100),and in part by Shanghai Committee of Science and Technology (Grant No. 21DZ1100100).
|
2302.04824 | Lithium Metal Battery Quality Control via Transformer-CNN Segmentation | Lithium metal battery (LMB) has the potential to be the next-generation
battery system because of its high theoretical energy density. However, defects
known as dendrites are formed by heterogeneous lithium (Li) plating, which
hinders the development and utilization of LMBs. Non-destructive techniques to
observe the dendrite morphology often use X-ray computed tomography (XCT) to
provide cross-sectional views. To retrieve three-dimensional structures inside
a battery, image segmentation becomes essential to quantitatively analyze XCT
images. This work proposes a new semantic segmentation approach using a
transformer-based neural network called TransforCNN that is capable of
segmenting out dendrites from XCT data. In addition, we compare the performance
of the proposed TransforCNN with three other algorithms, such as U-Net, Y-Net,
and E-Net, consisting of an Ensemble Network model for XCT analysis. Our
results show the advantages of using TransforCNN when evaluating
over-segmentation metrics, such as mean Intersection over Union (mIoU) and mean
Dice Similarity Coefficient (mDSC) as well as through several qualitatively
comparative visualizations. | Jerome Quenum, Iryna Zenyuk, Daniela Ushizima | 2023-02-09T18:25:24Z | http://arxiv.org/abs/2302.04824v2 | Lithium Metal Battery Quality Control via Transformer-CNN Segmentation
###### Abstract
Lithium metal battery (LMB) has the potential to be the next-generation battery system because of its high theoretical energy density. However, defects known as dendrites are formed by heterogeneous lithium (Li) plating, which hinders the development and utilization of LMBs. Non-destructive techniques to observe the dendrite morphology often use X-ray computed tomography (XCT) imaging to provide cross-sectional views. To retrieve three-dimensional structures inside a battery, image segmentation becomes essential to quantitatively analyze XCT images. This work proposes a new binary semantic segmentation approach using a transformer-based neural network (T-Net) model capable of segmenting out dendrites from XCT data. In addition, we compare the performance of the proposed T-Net with three other algorithms, such as U-Net, Y-Net, and E-Net, consisting of an Ensemble Network model for XCT analysis. Our results show the advantages of using T-Net in terms of object metrics, such as mean Intersection over Union (mIoU) and mean Dice Similarity Coefficient (mDSC), as well as qualitatively through several comparative visualizations.
Deep learning; Volume Segmentation; Quality Control; Metrology; Battery
## 1 Introduction
Research on new battery designs often targets durability, miniaturization and safety, for example by considering different chemical compositions of battery components to suppress defects such as dendrite formation, an undesirable morphology that compromises the quality of lithium metal batteries (LMB). This section describes current methods for imaging batteries, followed by a review of the main algorithms for image analysis, dendrite detection, and quantification.
### Assessing Battery Quality with Imaging
Synchrotron-based hard X-ray computed tomography (XCT) has a spatial resolution suitable for resolving dendrite structure [1]. Lithium has a low atomic number and therefore low X-ray attenuation; for example, Li dendrites will appear as void spaces within dense polymer electrolyte materials. XCT imaging of LMB seldom recovers chemical information and relies on differences in material thicknesses and atomic numbers to differentiate Li from solid polymer electrolytes (SPE). Making detection of LMB dendrites harder, they are porous formations that lead to large intensity variation within their volume. When the LMB is subjected to multiple charge-discharge cycles, a phenomenon known as pitting develops, in which an electrode can present a combination of pits and dendrites, both with similar X-ray attenuation. Thus, it is challenging to accurately disclose the dendrite structure.
Previous in operando LMB studies using XCT analyzed Li metal plating and how the interphase evolves in symmetric Li–Li cells with polymer electrolytes and in batteries with a Li-metal anode [2, 3, 4, 5, 6]. Those studies focused on battery design and functioning; however, they lack accurate methods for revealing the structure of the dendrites. Most of those previous studies applied traditional thresholding algorithms to conduct segmentation, which are unfortunately not reproducible when applied to new samples and rarely applicable to the differentiation between Li metal, pits, and other materials. Alternatively, manual segmentation could be used for a few cross-sections, but it is often unfeasible for full-stack high-resolution imaging surveillance [7].
### Deep Learning for Semantic Segmentation
Semantic segmentation is the computer vision task in which a model is trained to perform a pixel-wise classification of an input image. Over the years, different models have been proposed, starting in 2015 with Fully Convolutional Networks (FCN) [8], in which the fully connected layers were modified so that each pixel could be classified from feature maps coming from convolutional layers. This builds on a series of local convolutions of preceding layers, which aim to obtain a representation of multi-scale feature maps that are used for classification tasks. Around the same time, [9], with their CNN-based encoder-decoder model known as U-Net, showed that symmetrically combining higher- and lower-level features is beneficial for obtaining better performance. Soon after, SegNet [10] and DeepLab [11] were proposed, thereby confirming that the encoder-decoder architecture is well suited for such a task.
Many of these works [11, 12, 13, 14] also leverage atrous convolution to show that it can help capture contextual information. In particular, Y-Net [14] used three modules to improve segmentation accuracy: in addition to the Regular Convolution Module and the Pyramid Pooling Module, which allows the model to learn global information about potential locations of the target at different scales, the Dilated Convolution Module takes advantage of the fact that the target is often spread out in the samples, which supports learning the sparse features in its structure. PSPNet [13], on the other hand, adds ResNet [15] as a backbone while multi-scale feature maps are aggregated in its encoder.
Though these architectures work well, the computer vision community has increasingly seen their design shift from purely CNN-based designs [8, 9, 10, 11, 13, 14] to transformer-based designs, which started with ViT [16, 17, 18, 19, 20]. Later on, hybrid models emerged that exploit the best of both worlds by either using a transformer as the encoder and a CNN as the decoder [21, 22], a CNN as the encoder and a transformer as the decoder, or a CNN encoder-decoder with transformers in the middle to process the feature maps. One such hybrid model is HRNet-OCR [23], which uses a CNN as a backbone and combines it with cross-attention layers between features of different scales in order to account for multiple contexts and scales.
In all these schemes, the common denominator remains the attention mechanism, which has proven in the past few years to exceed the performance of models that disregard it. That is because it frees the features from the inductive biases and translation invariance inherent to CNNs. Instead, attention allows the model to learn long-range dependencies between pixel locations [16]. In other words, it allows for a better representation by leveraging contextual information between pixels, patches, or channels.
### Problem and Motivation
Li dendrite formations initiate during battery cycling, as illustrated in Figure 1, with dendrites nucleating at the interface between the electrolyte and the electrodes. Dendritic growth depends on the current density of plating, the electrolyte transference number, the electrolyte mechanical properties, and impurities present in the Li-metal material and at the interfaces. Earlier works have shown that increasing the shear modulus of the SPEs can help suppress Li dendrite growth but cannot fully eliminate it [24, 25, 26, 27]. Further studies have shown that Li-metal surface impurities (Li\({}_{3}\)N, Li\({}_{2}\)CO\({}_{3}\) and Li\({}_{2}\)O) can result in inhomogeneous current density and promote nucleation of Li dendrites.
Accurate segmentation for measuring dendrite volume has guided research and quality control of battery designs, as well as tests of materials used for battery components. Deep learning methods can provide exceptional segmentation results [28, 29, 30] when using high-resolution XCT data, particularly when large collections of annotated data are available. For example, Zhang et al [31] used a convolutional neural network (CNN) known as D-LinkNet to inspect the effect of distortion on the segmentation accuracy of Li-ion batteries. An additional work by the same team [32] used a U-Net for multiphase segmentation of battery electrodes from nano-CT images. Despite being focused on battery segmentation, those studies do not address dendrite analysis.
Previous studies [33, 34] on inspecting dendrites in batteries discussed problems regarding the mechanisms and types of nucleation, e.g., lateral growth or Li filaments. Data acquisition modes range from electron microscopy [33, 35, 36] to XCT [34, 36], with valuable morphological characterization and designs for suppression of dendrite growth, but dendrite detection was addressed mostly qualitatively through dendrite projections and/or visualizations. For example, the dendrite volume calculation in [36] was based on a median filter and Otsu thresholding, a method that seldom works for more than a few slices of an XCT stack without strenuous manual post-processing.
### Research Contributions
The proposed work describes the design and implementation of dendrite segmentation for 3D XCT images. To perform segmentation of a 3D volume automatically, we could either design a 3D model that operates directly on an input volume or leverage 2D models by subdividing the volume into slices. This paper introduces a 2D model due to its ability to be trained faster and with a limited number of samples, hence making it more versatile. In particular, we propose an architecture that benefits from both the contextual information learned by transformers and the global information captured by CNNs to predict dendrites from real XCT data of a lithium metal battery that underwent cycling.
Our study compares four different deep learning architectures, including U-Net, Y-Net, T-Net, and E-Net, on their ability to segment dendrites inside the cycled symmetric Li–Li battery with polymer electrolytes. While we are aware of the electrochemical differences between dendrite formations and the redeposited Li (Figure 1), the proposed segmentation method does not make a distinction between these two phases, as they are both associated with Li agglomeration.
Figure 1: Diagram illustrating the Li–polymer–Li symmetric cell design, imaged using X-ray CT, with highlighted dendrite formations (blue) and the redeposited Li (red).
## 2 Materials Description
Li metal holds a high theoretical capacity (3860 mAh/g) and a large negative thermodynamic potential (-3.06 V vs. SHE) [37]. Thus, it is considered a promising candidate for the next-generation battery anode. Li metal is very active and can introduce a series of side reactions in a battery system with liquid electrolytes. This can also cause dendrites to form, which would eventually lead to short circuits and bring safety issues to the battery. Using solid electrolytes instead of liquid electrolytes can form a more stable interface between the electrolyte and the Li metal electrodes, thus alleviating the dendrite formation issue.
Currently, several solid-state materials are used as electrolytes and separators, such as polymers and single-ion conducting inorganic solid electrolytes (glass or ceramic). Polymer materials are promising as they are mechanically flexible. This indicates that the polymer materials can be produced in a roll-to-roll scalable process and be designed very thinly. However, for SPEs to have broad deployment, strategies for dendrite suppression must be developed, such as coatings and soft interlayers including polymers as well as ionic liquids [38, 39, 40].
Dendrites are generally formed during battery charge and discharge cycles. This happens especially when a battery is charged at high current densities, due to heterogeneous Li metal plating, even with solid electrolytes. Observing the structure and evolution of dendrites is important to develop a strategy to prevent their growth. Dendrites are tree-like and porous structures, usually with sizes at the nano- to micro-scale. Given the morphological structure of dendrites and their size relative to an input stack, XCT segmentation is a suitable method of analysis as it allows for a pixel-wise classification, which helps quantify the volume of dendrites and use it as a proxy for battery quality.
### Electrochemical Testing
Li/Li symmetric cells are assembled using two Li metal electrodes and are considered a tool for testing and observing the Li metal anode without being affected by cathode materials.
Free-standing Li-metal foils with a thickness of 100 \(\upmu\)m from FMC were used. Ionic Materials Inc. (Woburn, MA) provided the polymer electrolyte membrane with a thickness of 140 \(\upmu\)m as a research sample. The cell was assembled as Figure. 3 shows. A red shim with a thickness of 50 \(\upmu\)m was used to create a circle with a 0.8 cm diameter. Two circular polymer electrolytes with a diameter of 0.80 cm were punched out and placed on each side of the red shim. Two Li-metal foils were then placed on the outside of the membranes as electrodes. The electrodes were connected to the metal tabs. The cell was sealed with a vacuum sealer. A current density of 1.5 mA/cm\({}^{2}\) was periodically applied
Figure 2: Cross-Sectional Images for the Li–polymer–Li symmetric cell; (A) Cross section of the x-y plane where the training was done on this plane; (B) Cross-sections of the x-z plane and detailing of the cell components; (C) Cross-sections of the y-z plane.
to the cell for 10 minutes, and the battery rested for 20 minutes. The cell was cycled at 3.0 mAh/cm\({}^{2}\) for one full cycle (120 minutes for charging and 120 minutes for discharging), after which the cell XCT scan was acquired.
### Synchrotron X-ray CT Imaging
The XCT scan was done at Beamline 2-BM of the Advanced Photon Source (APS) at Argonne National Laboratory (ANL). A 20 \(\upmu\)m LuAG scintillator, 5\(\times\) lenses, and an sCMOS PCO.Edge camera were used. A 27.5 keV energy was selected using a multilayer monochromator. An exposure time of 100 ms was used per projection, and 1500 projections were collected over 180 degrees of rotation. The resulting images had a resolution of 1.33 \(\upmu\)m/pixel and a field of view of 3.3 mm. Three FOVs were recorded and stitched together during post-processing to form a vertical height of \(>\)3 mm. Tomographic reconstructions were performed using TomoPy with the Gridrec algorithm [41, 42, 43].
Figure 3: Schematic illustration of the pouch cell
### Raw data Pre-Processing
The resulting raw TomoPy reconstruction was a large volume of size (3977, 2575, 2582) and was not properly aligned, as expected due to the various motions involved in collecting the data, as shown in Figure 4. As this raw data contains many noisy and irrelevant parts, we proceeded to develop an algorithm that allows us to cleanly crop out those regions. In the process, we inverted the grayscale volume to facilitate our ability to locate corners.
We first used a series of perspective transformations and homographies to rectify the region of interest along each plane. Though this process could be automated using feature detectors and feature-matching techniques such as MOPS [44] and SIFT [45], we manually selected corners for optimal precision. We then rectified and cropped the raw data to obtain the region of interest, a volume of size (3849, 340, 2071), shown in Figure 5.
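As a concrete illustration of this rectification step, the minimal Python/OpenCV sketch below warps one slice using four manually selected corners; the corner coordinates and output size are placeholders chosen for illustration only and are not the values used for the dataset described here.

```python
import cv2
import numpy as np

def rectify_slice(slice_2d, corners, out_w, out_h):
    """Warp one reconstructed slice so the manually picked corners of the
    region of interest map to an axis-aligned rectangle of size out_w x out_h."""
    src = np.asarray(corners, dtype=np.float32)  # 4 corners, clockwise from top-left
    dst = np.asarray([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]], dtype=np.float32)
    homography = cv2.getPerspectiveTransform(src, dst)  # 3x3 perspective transform
    return cv2.warpPerspective(slice_2d, homography, (out_w, out_h))

# Example: rectify every x-y slice of the (inverted) raw volume with the same corners.
# The corner coordinates below are placeholders, not the values used in the paper.
corners_xy = [(410, 350), (2480, 362), (2475, 705), (405, 690)]
# raw_volume: np.ndarray of shape (n_slices, height, width) from the TomoPy reconstruction
# roi = np.stack([rectify_slice(s, corners_xy, 2071, 340) for s in raw_volume])
```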
Figure 4: Sample Raw data obtained after TomoPy reconstruction of a CT scan
Figure 5: Sample Region of Interest (RoI) data obtained after pre-processing TomoPy reconstruction of a CT scan
Computational Methods
For a given volume stack, we apply 2D models due to their versatility. In doing so, we subdivide each hand-labeled training slice into \(128\times 128\) patches, which were then separated into training, validation, and testing datasets. Based on their known performances over the years, we investigate Convolutional Neural Network (CNN) based architectures such as U-Net [9] and Y-Net [14]. In addition, we also compare their performance with T-Net, our proposed Transformer-encoder-based network, and E-Net, an ensemble network over U-Net, Y-Net, and T-Net. We train the networks in a weakly-supervised fashion where a small subset of labeled data is used in conjunction with a much larger unlabeled sample size. Figure 11 shows sample outputs of the models considered in this work.
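A minimal sketch of the patch extraction and dataset split is given below, assuming NumPy arrays for a slice and its hand-labeled mask; the exact tiling and shuffling strategy used in our experiments may differ.

```python
import numpy as np

def extract_patches(image, label, patch=128):
    """Split an annotated slice and its mask into non-overlapping patch x patch tiles."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append((image[y:y + patch, x:x + patch],
                          label[y:y + patch, x:x + patch]))
    return tiles

def split_dataset(samples, seed=0):
    """Illustrative 80 / 10 / 10 split of the collected (patch, mask) pairs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train, n_val = int(0.8 * len(samples)), int(0.1 * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```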
### U-Net
U-Net is a CNN architecture first introduced in 2015 by Ronneberger et al. for the semantic segmentation of biomedical images. We refer interested readers to [9] for details about the architecture of the network. In this work, the model was adapted to take in images of size \(128\times 128\) as input. For the encoder, we started with 16-channel \(3\times 3\) kernels and doubled the number of channels at each layer, each followed by a ReLU activation and max pooling, until a 256-channel \(8\times 8\) feature map is obtained. We reversed the operation with transposed convolutions on the decoder side and concatenated with the encoder feature maps of corresponding size until the output shape is obtained.
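The following PyTorch sketch illustrates an encoder-decoder of this kind with the channel progression described above (16 to 256 channels, \(128\times 128\) single-channel input, \(8\times 8\) bottleneck); it is a simplified stand-in for the adapted U-Net, not the exact implementation evaluated in Section 4.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions with ReLU, as in a standard U-Net stage
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [16, 32, 64, 128, 256]          # 128x128 input -> 8x8 bottleneck
        self.encoders = nn.ModuleList()
        c_prev = 1
        for c in chans:
            self.encoders.append(conv_block(c_prev, c))
            c_prev = c
        self.pool = nn.MaxPool2d(2)
        self.ups = nn.ModuleList([nn.ConvTranspose2d(chans[i], chans[i - 1], 2, stride=2)
                                  for i in range(len(chans) - 1, 0, -1)])
        self.decoders = nn.ModuleList([conv_block(2 * chans[i - 1], chans[i - 1])
                                       for i in range(len(chans) - 1, 0, -1)])
        self.head = nn.Conv2d(chans[0], 1, 1)   # per-pixel dendrite probability

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return torch.sigmoid(self.head(x))
```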
### Y-Net
Figure 6: U-Net Architecture.
Y-Net is a CNN architecture originally introduced by Quenum et al. [14] to segment small barcodes in ultra-high-resolution images. We refer interested readers to [14] for details about the architecture of the network. From the architecture described in the paper, we modified and adapted the Regular Convolution Module to take in \(128\times 128\) patches from training slices. As it consists of convolutional and pooling layers, we started with 24-channel \(3\times 3\) kernels and doubled the number of channels at each layer, alternating between convolution and max pooling until reaching a feature map size of \(8\times 8\) pixels. The Dilated Convolution Module takes advantage of the fact that dendrites are often sparsely spread out in the samples to learn sparse features in their structure. It also takes \(128\times 128\) input patches; we maintained 16-channel \(3\times 3\) kernels throughout the module, while the dimensions of the layers are gradually reduced using a stride of 2 until a feature map of \(8\times 8\) pixels is obtained. Finally, the Pyramid Pooling Module, which allows the model to learn global information about potential locations of the dendrites at different scales, is concatenated with the layers of the Dilated Convolution Module to preserve the features extracted from both modules.
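As an illustration of the dilated branch, the short PyTorch module below keeps 16-channel \(3\times 3\) kernels while halving the spatial resolution with stride-2 convolutions down to \(8\times 8\); the specific dilation rates (doubled at each stage here) are an assumption made for this sketch rather than the values of the original Y-Net.

```python
import torch.nn as nn

class DilatedBranch(nn.Module):
    """Sketch of a Y-Net-style dilated branch: fixed 16-channel 3x3 kernels with
    increasing dilation, spatial size halved by stride-2 convolutions (128 -> 8)."""
    def __init__(self, in_ch=1, ch=16, n_stages=4):
        super().__init__()
        layers, c_prev = [], in_ch
        for k in range(n_stages):
            layers += [nn.Conv2d(c_prev, ch, 3, stride=2, padding=2 ** k, dilation=2 ** k),
                       nn.ReLU(inplace=True)]
            c_prev = ch
        self.branch = nn.Sequential(*layers)

    def forward(self, x):
        return self.branch(x)
```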
### T-Net
Figure 7: Y-Net Architecture.
T-Net is a hybrid Transformer-CNN segmentation model that leverages the encoder model of Vision Transformers ViT [16] and the decoder architecture of Convolutional Neural Networks (CNNs). More specifically, its encoder model was first introduced in Natural Language Processing (NLP) by Vaswani et al [46] and its multi-headed self-attention was later shown (by ViT) to help remove the common inductive biases observed in CNN-only models by relating all input sequence with each other. As depicted in Figure. 8, the proposed architecture constitutes the Transformer Encoders Block and the CNN Decoder Block.
The Transformer Encoders Block takes as inputs sequences of \(16\times 16\) sub-patches drawn from the \(128\times 128\) patches obtained from training slices. These sub-patches are flattened, and each is embedded into a 64-dimensional feature vector via a linear projection and added to its corresponding Fourier Features (FF) positional encoding. We used 8 transformer encoder units, and the outputs of every 2 transformer encoders were reshaped into a 2-dimensional feature-map representation, concatenated, up-sampled, and recombined with the layers of the CNN Decoder Block of corresponding dimension.
The CNN Decoder Block takes in the output of the last transformer encoder unit and reshapes it into a 2-dimensional representation, on which a set of \(3\times 3\) convolutions and max pooling is applied to obtain a feature map of \(8\times 8\) pixels. The resulting feature maps are then concatenated with feature maps of corresponding size coming from the Transformer Encoders Block and up-sampled continuously until the final output is obtained. This last step allows for the enhancement of the features in the CNN Decoder Block as we progressively reconstruct the output dimension of \(128\times 128\).
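A minimal PyTorch sketch of the encoder path is shown below: \(16\times 16\) sub-patches are embedded into 64-dimensional tokens, a positional term is added (a learned placeholder standing in for the Fourier-feature encoding), and intermediate outputs are reshaped into 2D feature maps every two encoder units for the CNN decoder; hyperparameters such as the number of attention heads are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TNetEncoder(nn.Module):
    """Sketch of the T-Net encoder path for a single-channel 128x128 patch."""
    def __init__(self, patch=16, dim=64, depth=8, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)  # linear projection per sub-patch
        n_tokens = (128 // patch) ** 2                                   # 64 tokens per 128x128 patch
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))           # stand-in for the Fourier-feature encoding
        self.blocks = nn.ModuleList([nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                                batch_first=True)
                                     for _ in range(depth)])

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos     # (B, 64 tokens, 64 dims)
        feature_maps = []
        for i, block in enumerate(self.blocks):
            tokens = block(tokens)
            if (i + 1) % 2 == 0:                                          # tap every 2 encoder units
                side = int(tokens.shape[1] ** 0.5)
                feature_maps.append(tokens.transpose(1, 2).reshape(-1, tokens.shape[2], side, side))
        return feature_maps                                               # 2D maps handed to the CNN decoder

# encoder = TNetEncoder(); maps = encoder(torch.randn(2, 1, 128, 128))
```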
### E-Net
We have performed an ensemble prediction analysis with U-Net, Y-Net, and T-Net, called E-Net. It was found that our best mean Intersection over Union (mIoU) is obtained when combining 20% of U-Net with 80% of T-Net, while the best mean Dice Similarity Coefficient (mDSC) is obtained when using T-Net alone.
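In practice, the ensemble amounts to a weighted average of the per-pixel probabilities predicted by the individual networks, as in the following sketch; the 0.2/0.8 weights follow the mIoU-optimal combination reported above, while the binarization threshold is an illustrative assumption.

```python
import numpy as np

def ensemble_predict(prob_unet, prob_tnet, w_unet=0.2, w_tnet=0.8, threshold=0.5):
    """E-Net-style weighted average of per-pixel probabilities (20% U-Net, 80% T-Net),
    thresholded to a binary dendrite mask."""
    blended = w_unet * prob_unet + w_tnet * prob_tnet
    return (blended >= threshold).astype(np.uint8)

# mask = ensemble_predict(unet_probs, tnet_probs)  # arrays of shape (128, 128)
```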
## 4 Experimental Results
On training the models (U-Net [9], Y-Net [14], and T-Net), we used one NVIDIA Tesla V100 GPU for each experiment. We obtained a total of 4433 samples of resolution \(128\times 128\) with their corresponding hand-labeled ground truth that the models were trained on. We used 80% of the examples for the training set, 10% for the validation set, and 10%
Figure 8: T-Net Architecture.
for the testing set. We used data augmentation schemes in training all models, which consist of random rotations in all directions, random flips (vertical and horizontal), random cropping (2%), random shifts, random zoom (range in [0.8, 1]), and a small range of random brightness and contrast variation (\(\pm\)5%). We trained U-Net [9] for 450 epochs, while the Y-Net [14] and T-Net models were trained for 130 and 300 epochs, respectively.
As shown in Figure. 9, the Y-Net and T-Net converge faster than U-Net with the initial loss of the T-Net model being significantly lower than that of the U-Net and Y-Net models. We have experimented with various loss functions such as Tversky loss [47] described in Eq. 1, the focal Tversky loss [48] described in Eq. 2, the balanced cross-entropy loss described in Eq. 4, and the binary cross-entropy loss out of which the binary cross-entropy (Eq. 3) loss yields the best results. One interesting observation is that though the validation curve on U-Net exhibits characteristics of a better generalization, the quantitative results show otherwise.
For evaluation, we have used the Dice similarity coefficient described in Eq. 6 and the Jaccard index, also known as Intersection over Union, described in Eq. 5.
\[\mathcal{L}_{\mathrm{Tversky}}(y,y^{\prime})=-\sum_{i}\frac{y_{i}\,y_{i}^{\prime}}{y_{i}\,y_{i}^{\prime}+\beta(1-y_{i})\,y_{i}^{\prime}+(1-\beta)\,y_{i}(1-y_{i}^{\prime})} \tag{1}\]

\[\mathcal{L}_{\mathrm{Focal\ Tversky}}(y,y^{\prime})=\sum_{i}\left[1-\frac{y_{i}\,y_{i}^{\prime}}{y_{i}\,y_{i}^{\prime}+\beta(1-y_{i})\,y_{i}^{\prime}+(1-\beta)\,y_{i}(1-y_{i}^{\prime})}\right]^{\gamma} \tag{2}\]

\[\mathcal{L}(y,y^{\prime})=-\sum_{i}\left[y_{i}\log(y_{i}^{\prime})+(1-y_{i})\log(1-y_{i}^{\prime})\right] \tag{3}\]

\[\mathcal{L}_{\mathrm{balanced}}(y,y^{\prime})=-\sum_{i}\left[\beta\,y_{i}\log(y_{i}^{\prime})+(1-\beta)(1-y_{i})\log(1-y_{i}^{\prime})\right] \tag{4}\]

where \(y_{i}\) and \(y_{i}^{\prime}\) are respectively the ground truth and the prediction on patch \(i\).
\[\mathrm{IoU}(y,y^{\prime})=\frac{TP}{TP+FP+FN} \tag{5}\]

\[\mathrm{DSC}(y,y^{\prime})=\frac{2\,TP}{2\,TP+FP+FN} \tag{6}\]

where \(TP\), \(FP\), and \(FN\) are respectively the true positives, false positives, and false negatives computed between the ground truth \(y\) and the prediction \(y^{\prime}\).
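For reference, a plain NumPy sketch of the Tversky and binary cross-entropy losses and of the IoU/DSC metrics is given below; the \(\beta\) value and the binarization threshold are illustrative choices, not necessarily those used in training.

```python
import numpy as np

def tversky_loss(y, y_pred, beta=0.7, eps=1e-7):
    # negative sum of per-pixel Tversky terms, Eq. (1); beta is an illustrative value
    num = y * y_pred
    den = num + beta * (1 - y) * y_pred + (1 - beta) * y * (1 - y_pred) + eps
    return -np.sum(num / den)

def binary_cross_entropy(y, y_pred, eps=1e-7):
    # Eq. (3); predictions clipped away from 0 and 1 for numerical stability
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.sum(y * np.log(y_pred) + (1 - y) * np.log(1 - y_pred))

def iou_and_dsc(y, y_pred, threshold=0.5):
    # Eqs. (5)-(6) evaluated on the binarized prediction
    p = (y_pred >= threshold).astype(int)
    tp = int(np.sum((y == 1) & (p == 1)))
    fp = int(np.sum((y == 0) & (p == 1)))
    fn = int(np.sum((y == 1) & (p == 0)))
    iou = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    dsc = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 1.0
    return iou, dsc
```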
For all of the models, we use the mean Dice Similarity Coefficient (mDSC) and the mean Intersection over Union (mIoU) as metrics, as shown in Table 1. As seen, our proposed pipeline outperforms U-Net [9] and Y-Net [14] by an mIoU of 8.13% and 10.3% and an mDSC of 6.49% and 8.57%, respectively. Also shown in Table 1 is a slight mIoU improvement of 0.03% of our ensemble network analysis (E-Net) over T-Net.
In addition, Table 1 shows that while T-Net is successful in segmenting out dendrites, its latency is at least 3.16\(\times\) higher than that of U-Net, which has the fastest latency of all models evaluated at 65.36 milliseconds (ms). The slowest of all the models is E-Net, which performs 7.24\(\times\) slower than U-Net.
Qualitatively, Figure 10 shows the predictions on a given test slice and, more specifically, Figure 11 shows sample predictions at the patch level. As can be seen in the first and second rows (a; b), the T-Net and U-Net predictions are the closest to the Ground Truth. The third, fourth, and eighth rows (c; d; h) show that U-Net and Y-Net tend to generalize better, as the unlabelled dendrite regions in the input patches are segmented out by these two models while T-Net and E-Net still reflect the Ground Truth images. The fifth, sixth, and seventh rows (e; f; g) show the generalization potential of all models, while the predictions of T-Net and E-Net overall tend to remain closer to the Ground Truth.
\begin{table}
\begin{tabular}{c|c c||c c}
 & mIoU & mDSC & latency (ms) & Input Patch Resolution (px) \\ \hline
U-Net & .8698 & .8998 & 65.36 & 128 \(\times\) 128 \\
Y-Net & .8481 & .8790 & 103.62 & 128 \(\times\) 128 \\
T-Net & .9511 & .9647 & 206.75 & 128 \(\times\) 128 \\
E-Net & .9514 & .9641 & 473.59 & 128 \(\times\) 128 \\
\end{tabular}
\end{table}
Table 1: mIoU, mDSC, Inference Time, and Patch Size for the Li-Li Symmetric Battery Dataset.
Figure 9: Training curves for UNet, YNet, and TNet on the y-axis vs numbers of epochs on the x-axis; the models were optimized over the binary cross-entropy function as loss and evaluated on the dice similarity coefficient as evaluation metric during training; the gray curves depict behavior on the validation sets while the black curves show behavior on the training sets over increasing numbers of epochs; (a) training and validation dice coefficient for UNet; (b) training and validation dice coefficient for YNet; (c) training and validation dice coefficient for TNet; (d) training and validation loss for UNet; (e) training and validation loss for YNet; (f) training and validation loss for TNet; the use of dropout during only the training phase explains why the models tend to perform better on the validation set over time.
Overall, it was observed that T-Net and E-Net tend to learn semantics in the Ground Truth images provided during training while U-Net and Y-Net tend to simply generalize even to cases where segments in the Ground Truth were wrongly hand-labeled. We speculate that this may lead to U-Net and Y-Net being wrongly penalized during the evaluation process while T-Net and E-Net are rewarded since their predictions always look the closest to the Ground Truth.
Figure 10: U-Net, Y-Net, T-Net, and E-Net sample outputs on a slice showing a cross-section along the x-y plane
Figure 11: U-Net, Y-Net, T-Net, and E-Net sample outputs on random test patches.
## 5 Discussion and Conclusion
The energy density benefits of using lithium metal batteries can only be harvested once we are able to regulate dendrite formation and control dendrite growth. In this paper, we showed that dendrites can be efficiently and accurately detected using semantic segmentation with T-Net, thereby allowing us to monitor their growth. Experiments have also illustrated that our approach outperforms existing methods, though it is slower than the fastest of the considered models. In future work, we aim to extend this method to multi-class segmentation of dendrites, pits, and bubbles and to improve the current latency in a weakly supervised fashion.
## 6 Acknowledgement
We thank Pavel Shevchenko and Francesco De Carlo from the Advanced Photon Source, Argonne National Laboratory for their help in obtaining the dataset used in this work. Also, we acknowledge Dula Parkinson from the Advanced Light Source, Lawrence Berkeley National Laboratory for sharing data storage resources.
|
2306.07602 | Deciding whether a mapping torus is of full rank | The mapping torus induced by an automorphism $\phi$ of the free abelian group
$\mathbb{Z}^n$ is a semi-direct product $G=\mathbb{Z}^n\rtimes_\phi
\mathbb{Z}$. We show that whether the rank of $G$ is equal to $n+1$ is
decidable. As a corollary, the rank of $\mathbb{Z}^3\rtimes_\phi \mathbb{Z}$ is
decidable. | Juemin Lin, Jianchun Wu | 2023-06-13T07:58:43Z | http://arxiv.org/abs/2306.07602v1 | # Deciding whether a mapping torus is of full rank
###### Abstract.
The mapping torus induced by an automorphism \(\phi\) of the free abelian group \(\mathbb{Z}^{n}\) is a semi-direct product \(G=\mathbb{Z}^{n}\rtimes_{\phi}\mathbb{Z}\). We show that whether the rank of \(G\) is equal to \(n+1\) is decidable. As a corollary, the rank of \(\mathbb{Z}^{3}\rtimes_{\phi}\mathbb{Z}\) is decidable.
Key words and phrases:mapping torus, generating orbit sets, rank, generalized linear group 2020 Mathematics Subject Classification: 20F10, 20G30, 20E22 \({}^{*}\) The second author is the corresponding author.
## 2. Preliminaries
**Definition 2.1**.: Let \(A\) be an \(n\times n\) integer matrix and \(S\subseteq\mathbb{Z}^{n}\). The orbit subgroup \(OG_{A}(S)\) on \(S\) by \(A\) is the subgroup of \(\mathbb{Z}^{n}\) generated by \(\{A^{k}v\ |\ k\geq 0,v\in S\}\). If \(OG_{A}(S)\) is the full group \(\mathbb{Z}^{n}\), we call \(S\) a generating orbit set of \(A\). Among all generating orbit sets of \(A\) those having minimal cardinalities are called minimal generating orbit sets. We denote the cardinality of a minimal generating orbit set by \(m_{A}\), that is \(m_{A}=\min\{\#S\ |\ OG_{A}(S)=\mathbb{Z}^{n}\}\).
**Lemma 2.2**.: \(m_{A}=m_{A-\lambda I}\) _for any \(\lambda\in\mathbb{Z}\)._
Proof.: Let \(S\) be a minimal generating orbit set of \(A-\lambda I\), since \((A-\lambda I)^{k}v\) is an integral linear combination of \(\{v,Av,\cdots,A^{k}v\}\) for \(k\geq 0\) and \(v\in S\), we have \(\mathbb{Z}^{n}=OG_{A-\lambda I}(S)\subseteq OG_{A}(S)\) which means \(S\) is a generating orbit set of \(A\), hence \(m_{A-\lambda I}\geq m_{A}\). Similarly, \(m_{A}=m_{(A-\lambda I)+\lambda I}\geq m_{A-\lambda I}\).
For a matrix \(A\in GL_{n}(\mathbb{Z})\), by Cayley-Hamilton Theorem, \(A^{k}v\) is an integral linear combination of \(v,Av,\cdots,A^{n-1}v\) for any \(v\in\mathbb{Z}^{n}\) and \(k\in\mathbb{Z}\), hence the \(A\)-orbit \(\{A^{k}v\ |\ k\in\mathbb{Z}\}\) of \(v\) is a subset of \(OG_{A}(v)\) and \(m_{A}\) is the minimum number of \(A\)-orbits needed to generate \(\mathbb{Z}^{n}\). The following lemma is proved by Levitt and Metaftsis.
**Lemma 2.3** ([5], Corollary 2.4).: _Let \(\phi\) be an automorphism of \(\mathbb{Z}^{n}\), then the rank of \(\mathbb{Z}^{n}\rtimes_{\phi}\mathbb{Z}\) is equal to \(m_{A}+1\) where \(A\in GL_{n}(Z)\) is the matrix induced by \(\phi\) when a basis of \(\mathbb{Z}^{n}\) is fixed._
Suppose \(A,B\) are two \(n\times n\) integer matrices, if there exists \(X\in GL_{n}(\mathbb{Z})\) such that \(B=XAX^{-1}\), then we say \(A\) is integrally conjugate to \(B\). Throughout this paper, conjugation always means integral conjugation. The following lemma shows that \(m_{A}\) is a conjugation invariant.
**Lemma 2.4**.: _Suppose \(A,B\) are two \(n\times n\) integer matrices such that \(A\) is integrally conjugate to \(B\), then \(m_{A}=m_{B}\)._
Proof.: If \(S\) is a minimal generating orbit set of \(A\), since \(B=XAX^{-1}\) for some \(X\in GL_{n}(\mathbb{Z})\), it is obvious that \(\{Xv\ |\ v\in S\}\) is a generating orbit set of \(B\), so \(m_{A}\geq m_{B}\). Similarly, \(m_{B}\geq m_{X^{-1}BX}=m_{A}\).
Let \(C\) be a finite set of integers, the greatest common divisor of absolute values of all elements in \(C\) is denoted by \(\gcd(C)\). We assume any prime number divides \(0\) and \(\gcd(0)=0\).
**Lemma 2.5**.: _Let \(A\) be an \(n\times n\) integer matrix such that \(\gcd(A)\neq 1\), then \(m_{A}=n\)._
Proof.: Let \(d=\gcd(A)\). If \(d=0\), then \(A=0\), hence \(m_{A}=n\). If \(d\neq 0\), let \(S=\{v_{1},\cdots,v_{m}\}\) be a minimal generating orbit set of \(A\) (note that \(m\leq n\)), then \(\mathbb{Z}^{n}\) is spanned by \(\{A^{k}v_{j}\ |\ k\geq 0,j=1,\cdots,m\}\). Denote by \(\phi:\mathbb{Z}^{n}\rightarrow\mathbb{Z}^{n}_{d}\) the canonical modulo-\(d\) homomorphism, then \(\mathbb{Z}^{n}_{d}\) is spanned by \(\{\phi(v_{j})\ |\ j=1,\cdots,m\}\) since \(\phi(A^{k}v_{j})=0\) when \(k>0\). Thus \(m\geq n\) and we have \(m_{A}=n\).
The following lemma will be frequently used in this paper.
**Lemma 2.6**.: _Let \(X\) be an \(n\times m\) integer matrix, then \(\gcd(X)=\gcd(AXB)\) for any \(A\in GL_{n}(\mathbb{Z})\) and \(B\in GL_{m}(\mathbb{Z})\)._
Proof.: It is trivial when \(\gcd(X)=0\). Since all entries in \(AXB\) are integral linear combinations of entries in \(X\), then \(\gcd(X)\) divides \(\gcd(AXB)\). Similarly, \(\gcd(AXB)\) divides \(\gcd(A^{-1}(AXB)B^{-1})=\gcd(X)\). So we have \(\gcd(X)=\gcd(AXB)\)
## 3. Proof of Theorem 1.3
In this section, we will prove \(m_{A}=n\) if and only if \(\gcd(A-a_{11}I)\neq 1\) for \(A=(a_{ij})\in GL_{n}(\mathbb{Z})\) with \(n\geq 3\). By Lemma 2.3, this provides a way to decide whether the rank of a mapping torus \(\mathbb{Z}^{n}\rtimes_{\phi}\mathbb{Z}\) is \(n+1\).
**Definition 3.1**.: We call an \(n\times n\) integer matrix \(H=(h_{ij})\) with \(n\geq 3\) a type \(\mathcal{H}\) matrix if \(h_{i1}=0\) for \(i=3,...,n\). That is to say the shape of \(H\) is
\[\begin{bmatrix}h_{11}&h_{12}&\cdots&h_{1n}\\ h_{21}&h_{22}&\cdots&h_{2n}\\ 0&h_{32}&\cdots&h_{3n}\\ \vdots&\vdots&\vdots&\vdots\\ 0&h_{n2}&\cdots&h_{nn}\end{bmatrix}.\]
Moreover, if \(h_{11}=0\), then we say \(H\) is a type \(\mathcal{H}_{0}\) matrix. A matrix \(H=(h_{ij})\) is called type \(\mathcal{H}_{n}\) if \(H\) is of type \(\mathcal{H}_{0}\) and \(\gcd(h_{21},h_{1n},\cdots,h_{nn})=1\).
**Proposition 3.2**.: _Let \(A=(a_{ij})\) be an \(n\times n\) integer matrix with \(n\geq 3\), then \(A\) is integrally conjugate to a matrix \(B=(b_{ij})\) of type \(\mathcal{H}\) with \(b_{11}=a_{11}\)._
Proof.: If \(A\) is not of type \(\mathcal{H}\), then there exists some \(3\leq k\leq n\) such that \(a_{k1}\neq 0\). Without loss of generality, we can assume \(a_{21}\neq 0\), otherwise let
\[X=\begin{bmatrix}1&&&&&\\ &0&\cdots&1&&\\ &\vdots&\ddots&\vdots&&\\ &-1&\cdots&0&&\\ &&&&\ddots&\\ &&&&&1\end{bmatrix},\]
i.e., \(X\) coincides with the identity matrix except that the \(2\times 2\) submatrix formed by rows and columns \(2\) and \(k\) is \(\begin{bmatrix}0&1\\ -1&0\end{bmatrix}\),
then \(X\in GL_{n}(\mathbb{Z})\), the \((2,1)\) entry of \(XAX^{-1}\) is not \(0\) and the \((1,1)\) entry is \(a_{11}\).
Now there exist two integers \(s,t\) such that \(sa_{21}+ta_{k1}=d\) where \(d=\gcd(a_{21},a_{k1})\). Let
Let \(Y\) be the matrix that coincides with the identity except that the \(2\times 2\) submatrix formed by rows and columns \(2\) and \(k\) is
\[\begin{bmatrix}s&t\\ -a_{k1}/d&a_{21}/d\end{bmatrix}.\]
This submatrix has determinant \(1\), so \(Y\in GL_{n}(\mathbb{Z})\). Since \(Y^{-1}e_{1}=e_{1}\), the first column of \(YAY^{-1}\) is \(Yv_{1}\), where \(v_{1}\) denotes the first column of \(A\); its first entry is \(a_{11}\), its second entry is \(sa_{21}+ta_{k1}=d\neq 0\), its \(k\)-th entry is \(-a_{k1}a_{21}/d+a_{21}a_{k1}/d=0\), and the remaining entries are unchanged. Repeating this procedure for every remaining non-zero entry of the first column below the second row yields a matrix \(B\) of type \(\mathcal{H}\) with \(b_{11}=a_{11}\).
_Remark 3.3_.: A similar method can be used to prove each \(n\times n\) integer matrix is integrally conjugate to an upper Hessenberg matrix (see [4] Section 21.2 for another proof), but we won't use this fact.
**Lemma 3.4**.: _Let \(v_{1}\) be a non-zero element in \(\mathbb{Z}^{n}\) with \(n\geq 1\), then for any \(v_{2},v_{3}\in\mathbb{Z}^{n}\), there exists \(k\in\mathbb{Z}\) such that \(\gcd(v_{1},v_{2},v_{3})=\gcd(v_{1},v_{3}+kv_{2})\)._
Proof.: If \(\gcd(v_{1},v_{2},v_{3})=1\), denote by \(\mathcal{P}\) the set \(\{p\text{ is prime, }p\mid\gcd(v_{1})\}\) which can be divided into three disjoint subsets as follows
\[\mathcal{P}_{1} =\{p\in\mathcal{P},\ p\mid\gcd(v_{3})\},\] \[\mathcal{P}_{2} =\{p\in\mathcal{P},\ p\nmid\gcd(v_{3}),p\mid\gcd(v_{2})\},\] \[\mathcal{P}_{3} =\mathcal{P}-\mathcal{P}_{1}\cup\mathcal{P}_{2}.\]
Note that \(v_{1}\) is non-zero, \(\mathcal{P}_{3}\subset\mathcal{P}\) is finite. Let
\[k=\begin{cases}1,&\text{if }\mathcal{P}_{3}=\varnothing\\ \prod_{p\in\mathcal{P}_{3}}p,&\text{otherwise.}\end{cases}\]
If \(p\) is a prime number dividing \(\gcd(v_{1},v_{3}+kv_{2})\), then \(p\in\mathcal{P}=\mathcal{P}_{1}\cup\mathcal{P}_{2}\cup\mathcal{P}_{3}\). Moreover, if \(p\in\mathcal{P}_{2}\cup\mathcal{P}_{3}\), then \(p\) divides \(\gcd(kv_{2})\), so \(p\) divides \(\gcd(v_{3})\), but \(p\notin\mathcal{P}_{1}\), we get a contradiction. Hence \(p\in\mathcal{P}_{1}\), \(p\) divides \(\gcd(v_{3})\), then \(p\) divides \(\gcd(kv_{2})\). Note that \(p\) does not divide \(k\), so \(p\) divides \(\gcd(v_{2})\), we have \(p\mid\gcd(v_{1},v_{2},v_{3})=1\), a contradiction. Thus \(\gcd(v_{1},v_{3}+kv_{2})=1=\gcd(v_{1},v_{2},v_{3})\).
If \(\gcd(v_{1},v_{2},v_{3})=d\neq 1\) then \(\gcd(v_{1}/d,v_{2}/d,v_{3}/d)=1\), the above argument shows that there exists \(k\in\mathbb{Z}\) such that \(\gcd(v_{1}/d,v_{3}/d+kv_{2}/d)=\gcd(v_{1}/d,v_{2}/d,v_{3}/d)\), hence \(\gcd(v_{1},v_{3}+kv_{2})=d\gcd(v_{1}/d,v_{3}/d+kv_{2}/d)=d=\gcd(v_{1},v_{2},v _{3})\).
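The proof is constructive: \(k\) can be computed by factoring \(\gcd(v_{1})\) (after dividing out \(d\)) and multiplying the primes in \(\mathcal{P}_{3}\). A small Python sketch of this recipe, using a naive trial-division factorization that is adequate only for matrices with modest entries, is the following.

```python
from math import gcd
from functools import reduce

def vec_gcd(*vectors):
    """gcd of the absolute values of all entries of the given integer vectors."""
    return reduce(gcd, (abs(x) for v in vectors for x in v), 0)

def prime_factors(n):
    n, p, out = abs(n), 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

def lemma_3_4_k(v1, v2, v3):
    """Return k with gcd(v1, v3 + k*v2) = gcd(v1, v2, v3), following the proof of
    Lemma 3.4 (v1 must be non-zero)."""
    d = vec_gcd(v1, v2, v3)
    w1, w2, w3 = ([x // d for x in v] for v in (v1, v2, v3))
    p3 = {p for p in prime_factors(vec_gcd(w1))
          if vec_gcd(w3) % p != 0 and vec_gcd(w2) % p != 0}
    return reduce(lambda a, b: a * b, p3, 1)

# Sanity check on a small example:
v1, v2, v3 = [6, 0, 0], [2, 4, 0], [3, 9, 0]
k = lemma_3_4_k(v1, v2, v3)
assert vec_gcd(v1, [a + k * b for a, b in zip(v3, v2)]) == vec_gcd(v1, v2, v3)
```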
**Proposition 3.5**.: _Suppose \(A\) is an \(n\times n\) integer matrix of type \(\mathcal{H}_{0}\) with \(\gcd(A)=1\), then \(A\) is integrally conjugate to a matrix of type \(\mathcal{H}_{n}\)._
Proof.: Denote by \(v_{j}\) the \(j\)-th column vector of matrix \(A=(a_{ij})\). There are two cases.
Case 1: \(v_{1}\) is zero, that is to say \(\gcd(v_{2},\cdots,v_{n})=1\). Hence there exist \(B\in GL_{n}(\mathbb{Z})\) and \(C\in GL_{n-1}(\mathbb{Z})\) such that \(B[v_{2},\cdots,v_{n}]C\) is the Smith normal form (one can see [6] for more details) of \([v_{2},\cdots,v_{n}]\) whose shape is
\[\begin{bmatrix}1&*&\cdots&*\\ 0&*&\cdots&*\\ \vdots&\vdots&\ddots&\vdots\\ 0&*&\cdots&*\end{bmatrix}_{n\times(n-1)}.\]
Let \(D\) be the permutation matrix
\[\begin{bmatrix}0&\cdots&0&1\\ 1&\cdots&0&0\\ \vdots&\ddots&\vdots&\vdots\\ 0&\cdots&1&0\end{bmatrix}_{(n-1)\times(n-1)},\]
then \(B[v_{2},\cdots,v_{n}]CD=[*,\cdots,*,e_{1}]\) where \(e_{1}=[1,0,\cdots,0]^{T}\).
Let \(T=\begin{bmatrix}1&0\\ 0&CD\end{bmatrix}\), then
\[T^{-1}AT =T^{-1}[v_{1},v_{2},\cdots,v_{n}]T\] \[=T^{-1}[v_{1},[v_{2},\cdots,v_{n}]CD]\] \[=T^{-1}[v_{1},B^{-1}[*,\cdots,*,e_{1}]]\] \[=[v_{1},*,\cdots,*,T^{-1}B^{-1}e_{1}].\]
By Lemma 2.6, we have \(\gcd(T^{-1}B^{-1}e_{1})=\gcd(e_{1})=1\), hence \(\gcd(v_{1},T^{-1}B^{-1}e_{1})=1\) and \(T^{-1}AT\) is a matrix of type \(\mathcal{H}_{n}\).
Case 2: \(v_{1}\) is non-zero.
We construct \(n-1\) vectors \(\tilde{v}_{2},\cdots,\tilde{v}_{n}\) inductively such that \(\gcd(v_{1},\cdots,v_{j})=\gcd(v_{1},\tilde{v}_{j})\) for \(j=2,\cdots,n\) as follows.
Let \(\tilde{v}_{2}=v_{2}\), then \(\gcd(v_{1},v_{2})=\gcd(v_{1},\tilde{v}_{2})\). Suppose \(\gcd(v_{1},\cdots,v_{j})=\gcd(v_{1},\tilde{v}_{j})\), by Lemma 3.4, there exists \(k_{j+1}\in\mathbb{Z}\), such that
\[\gcd(v_{1},\tilde{v}_{j},v_{j+1})=\gcd(v_{1},v_{j+1}+k_{j+1}\tilde{v}_{j}),\]
let \(\tilde{v}_{j+1}=v_{j+1}+k_{j+1}\tilde{v}_{j}\), then \(\gcd(v_{1},\cdots,v_{j+1})=\gcd(v_{1},\tilde{v}_{j+1})\).
Let \(T=(t_{ij})\in GL_{n}(\mathbb{Z})\), where \(t_{ii}=1\) for \(i=1,\cdots,n\) and \(t_{ij}=\prod_{l=i+1}^{j}k_{l}\) for \(2\leq i<j\leq n\), other \(t_{ij}\)s are 0. That is
\[T=\begin{bmatrix}1&0&0&0&\cdots&0\\ 0&1&k_{3}&k_{3}k_{4}&\cdots&\prod_{l=3}^{n}k_{l}\\ 0&0&1&k_{4}&\cdots&\prod_{l=4}^{n}k_{l}\\ \vdots&\vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&0&\cdots&1&k_{n}\\ 0&0&0&\cdots&0&1\end{bmatrix}.\]
Note that \(\tilde{v}_{j}=v_{j}+k_{j}v_{j-1}+\cdots+(\prod_{l=i+1}^{j}k_{l})v_{i}+\cdots+( \prod_{l=3}^{j}k_{l})v_{2}.\) One can verify \([v_{1},v_{2},\cdots,v_{n}]T=[v_{1},\tilde{v}_{2},\cdots,\tilde{v}_{n}]\) and \(T^{-1}e_{2}=e_{2}\), where \(e_{2}=[0,1,0,\cdots,0]^{T}\). Since \(A\) is of type \(\mathcal{H}_{0}\), \(v_{1}=a_{21}e_{2}\), then \(T^{-1}v_{1}=v_{1}\). Hence
\[T^{-1}AT=T^{-1}[v_{1},v_{2},\cdots,v_{n}]T=T^{-1}[v_{1},\tilde{v}_{2},\cdots, \tilde{v}_{n}]=[v_{1},T^{-1}\tilde{v}_{2},\cdots,T^{-1}\tilde{v}_{n}],\]
and by Lemma 2.6
\[\gcd(v_{1},T^{-1}\tilde{v}_{n})=\gcd(T^{-1}v_{1},T^{-1}\tilde{v}_{n})=\gcd(v _{1},\tilde{v}_{n})=\gcd(v_{1},\cdots,v_{n})=\gcd(A)=1,\]
which means \(T^{-1}AT\) is a matrix of type \(\mathcal{H}_{n}\).
**Lemma 3.6**.: _Let \(A=(a_{ij})\) be an \(n\times n\) integer matrix of type \(\mathcal{H}_{n}\) satisfying one of the following conditions, then \(m_{A}<n\)._
_(I) \(a_{21}=0\), \(a_{2n}=\cdots=a_{n-1,n}=0\), \(a_{1n}\equiv\pm 1\mod a_{nn}\);_
_(II) \(a_{21}=0\), \(a_{2n},\cdots,a_{n-1,n}\) are not all 0;_
_(III) \(a_{21}\neq 0\)._
Proof.: We show below that there exists \(v\in\mathbb{Z}^{n}\) such that \(\{v,Av\}\) can be extended to a basis \(\{v,Av,u_{3},\cdots,u_{n}\}\) of \(\mathbb{Z}^{n}\). Hence \(\{v,u_{3},\cdots,u_{n}\}\) is a generating orbit set of \(A\) and \(m_{A}<n\).
Let \(v=[s,0,\cdots,0,t]^{T}\), since the shape of \(A=(a_{ij})\) is
\[\begin{bmatrix}0&a_{12}&\cdots&a_{1n}\\ a_{21}&a_{22}&\cdots&a_{2n}\\ 0&a_{32}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\vdots\\ 0&a_{n2}&\cdots&a_{nn}\end{bmatrix},\]
we have
\[Y=[v,Av]=\begin{bmatrix}s&0&0&\cdots&0&t\\ a_{1n}t&a_{21}s+a_{2n}t&a_{3n}t&\cdots&a_{n-1,n}t&a_{nn}t\end{bmatrix}^{T}.\]
The Smith normal form of \(Y\) is
\[\begin{bmatrix}d_{1}(Y)&0&0&\cdots&0\\ 0&d_{2}(Y)&0&\cdots&0\end{bmatrix}^{T}\]
where \(d_{1}(Y)=\gcd(Y)\) and \(d_{2}(Y)\) is the greatest common divisor of all \(2\times 2\) minors of \(Y\). Note that \(d_{1}(Y)\) divides \(d_{2}(Y)\), we will choose suitable \(s,t\) such that \(d_{2}(Y)=1\), then the two columns \(v,Av\) of \(Y\) can be extended to a basis of \(\mathbb{Z}^{n}\).
The \(2\times 2\) minors of \(Y\) that might be non-zero are as follows
\[f_{1}(s,t)=a_{21}s^{2}+a_{2n}st=\left|\begin{matrix}s&0\\ a_{1n}t&a_{21}s+a_{2n}t\end{matrix}\right|,\]
\[f_{1}^{\prime}(s,t)=-(a_{21}st+a_{2n}t^{2})=\left|\begin{matrix}0&t\\ a_{21}s+a_{2n}t&a_{nn}t\end{matrix}\right|,\]
\[f_{2}(s,t)=a_{nn}st-a_{1n}t^{2}=\left|\begin{matrix}s&t\\ a_{1n}t&a_{nn}t\end{matrix}\right|,\]
and for \(n>3\),
\[f_{j}(s,t)=a_{jn}st=\left|\begin{matrix}s&0\\ a_{1n}t&a_{jn}t\end{matrix}\right|,j=3,\cdots,n-1,\]
\[f_{j}^{\prime}(s,t)=-a_{jn}t^{2}=\left|\begin{matrix}0&t\\ a_{jn}t&a_{nn}t\end{matrix}\right|,j=3,\cdots,n-1.\]
(I) If \(a_{21}=a_{2n}=\cdots=a_{n-1,n}=0\) and \(a_{1n}\equiv\pm 1\mod a_{nn}\), then there exists \(s_{0}\in\mathbb{Z}\) such that \(a_{nn}s_{0}-a_{1n}=\pm 1\). Let \(s=s_{0},\ t=1\), we have \(\gcd(f_{2}(s,t))=1\), hence \(d_{2}(Y)=1\).
(II) If \(a_{21}=0\) and \(a_{2n},\cdots,a_{n-1,n}\) are not all \(0\), then \(d=\gcd(a_{2n},\cdots,a_{n-1,n})\neq 0\). By Lemma 3.4, there exists \(k\in\mathbb{Z}\) such that
\[\gcd(d,ka_{nn}-a_{1n})=\gcd(d,a_{nn},-a_{1n})=\gcd(a_{21},a_{1n},\cdots,a_{nn} )=1.\]
because \(A\) is of type \(\mathcal{H}_{n}\). Let \(s=k,\ t=1\), then for \(n=3\)
\[\gcd(f_{1}^{\prime}(s,t),f_{2}(s,t))=\gcd(d,ka_{nn}-a_{1n})=1,\]
and for \(n>3\)
\[\gcd(f_{1}^{\prime}(s,t),f_{2}(s,t),f_{3}^{\prime}(s,t),\cdots,f_{n-1}^{\prime }(s,t))=\gcd(d,ka_{nn}-a_{1n})=1.\]
We also have \(d_{2}(Y)=1\).
(III) \(a_{21}\neq 0\). If \(a_{1n}=a_{2n}=0\), let \(s=t=1\), then
\[\gcd(f_{1}(s,t),\cdots,f_{n-1}(s,t))=\gcd(a_{21},a_{1n},\cdots,a_{nn})=1\]
because \(A\) is of type \(\mathcal{H}_{n}\), and we have \(d_{2}(Y)=1\).
Now \(c_{1}=\gcd(a_{1n},a_{2n})\neq 0\), there exist \(k,\ell\in\mathbb{Z}\) such that \(ka_{1n}+\ell a_{2n}=c_{1}\). The matrix \(T=\left[\begin{matrix}\ell&-k\\ a_{1n}/c_{1}&a_{2n}/c_{1}\end{matrix}\right]\) is in \(GL_{2}(\mathbb{Z})\) and
\[T\left[\begin{matrix}a_{21}&a_{2n}\\ a_{nn}&-a_{1n}\end{matrix}\right]=\left[\begin{matrix}c_{2}&c_{1}\\ c_{3}&0\end{matrix}\right]\]
where \(c_{2}=\ell a_{21}-ka_{nn},\ c_{3}=(a_{1n}a_{21}+a_{2n}a_{nn})/c_{1}\). By Lemma 2.6,
\[\gcd(c_{1},c_{2},c_{3})=\gcd(\left[\begin{matrix}c_{2}&c_{1}\\ c_{3}&0\end{matrix}\right])=\gcd(\left[\begin{matrix}a_{21}&a_{2n}\\ a_{nn}&-a_{1n}\end{matrix}\right])=\gcd(a_{21},a_{1n},a_{2n},a_{nn}). \tag{3.1}\]
Note that
\[\left[\begin{matrix}a_{21}&a_{2n}\\ a_{nn}&-a_{1n}\end{matrix}\right]\left[\begin{matrix}s\\ t\end{matrix}\right]=\left[\begin{matrix}\ell&-k\\ a_{1n}/c_{1}&a_{2n}/c_{1}\end{matrix}\right]^{-1}\left[\begin{matrix}c_{2}&c_{1} \\ c_{3}&0\end{matrix}\right]\left[\begin{matrix}s\\ t\end{matrix}\right],\]
i.e.
\[\left[\begin{matrix}a_{21}s+a_{2n}t\\ a_{nn}s-a_{1n}t\end{matrix}\right]=\left[\begin{matrix}\ell&-k\\ a_{1n}/c_{1}&a_{2n}/c_{1}\end{matrix}\right]^{-1}\left[\begin{matrix}tc_{1}+sc_{2} \\ sc_{3}\end{matrix}\right],\]
by Lemma 2.6,
\[\gcd(a_{21}s+a_{2n}t,a_{nn}s-a_{1n}t)=\gcd(tc_{1}+sc_{2},sc_{3}). \tag{3.2}\]
Moreover, for any two coprime integers \(s,t\) such that \(\gcd(a_{21},t)=1\), we have \(\gcd(a_{21}s+a_{2n}t,t)=1\) and so
\[\gcd(a_{21}s+a_{2n}t,t(a_{nn}s-a_{1n}t))=\gcd(a_{21}s+a_{2n}t,a_{nn}s-a_{1n}t). \tag{3.3}\]
Since
\[\gcd(f_{1}(s,t),f_{1}^{\prime}(s,t))=(a_{21}s+a_{2n}t)\gcd(s,-t)=a_{21}s+a_{2n }t,\]
by the equalities (3.3) and (3.2), we have
\[\gcd(f_{1}(s,t),f_{1}^{\prime}(s,t),f_{2}(s,t)) =\gcd(a_{21}s+a_{2n}t,t(a_{nn}s-a_{1n}t))\] \[=\gcd(a_{21}s+a_{2n}t,a_{nn}s-a_{1n}t)\] \[=\gcd(tc_{1}+sc_{2},sc_{3}). \tag{3.4}\]
Let \(c_{4}=\begin{cases}\gcd(a_{3n},\cdots,a_{n-1,n}),&\text{if $n>3$}\\ 0,&\text{if $n=3$}\end{cases}\) and \(\mathcal{P}=\{p\text{ is prime},\ p\ |\ \gcd(c_{3},c_{4})\}\). There are two cases:
Case 1: \(\mathcal{P}\) is infinite which means \(c_{3}=c_{4}=0\), then \(\gcd(a_{21},a_{1n},a_{2n},a_{nn})=1\) since \(A\) is of type \(\mathcal{H}_{n}\). By the equality (3.1), \(\gcd(c_{1},c_{2})=\gcd(c_{1},c_{2},c_{3})=1\) and there exist \(s_{0},t_{0}\in\mathbb{Z}\) with \(\gcd(s_{0},t_{0})=1\) such that \(t_{0}c_{1}+s_{0}c_{2}=1\).
Note that \(\gcd(t_{0},c_{2})=1\) and \(a_{21}\neq 0\), by Lemma 3.4, there exists \(k\in\mathbb{Z}\) such that \(1=\gcd(a_{21},c_{2},t_{0})=\gcd(a_{21},t_{0}+kc_{2})\). Let \(s=s_{0}-kc_{1},t=t_{0}+kc_{2}\), then \(\gcd(a_{21},t)=1\) and \(tc_{1}+sc_{2}=1\). We have \(\gcd(s,t)=1\). By the equality (3.4), \(\gcd(f_{1}(s,t),f_{1}^{\prime}(s,t),f_{2}(s,t))=1\), so \(d_{2}(Y)=1\).
Case 2: \(\mathcal{P}\) is finite. \(\mathcal{P}\) can be divided into three disjoint subsets as follows
\[\mathcal{P}_{1} =\{p\in\mathcal{P},\ p\nmid c_{1}\},\] \[\mathcal{P}_{2} =\{p\in\mathcal{P},\ p\mid c_{1},\ p\nmid a_{21}\},\] \[\mathcal{P}_{3} =\{p\in\mathcal{P},\ p\mid c_{1},\ p\mid a_{21}\}.\]
Let \(s=\begin{cases}1,&\text{if }\mathcal{P}_{1}=\varnothing\\ \prod_{p\in\mathcal{P}_{1}}p,&\text{otherwise}\end{cases}\) and \(t=\begin{cases}1,&\text{if }\mathcal{P}_{2}=\varnothing\\ \prod_{p\in\mathcal{P}_{2}}p,&\text{otherwise}\end{cases}\), then \(\gcd(s,t)=1\) and \(\gcd(a_{21},t)=1\).
Suppose \(p\) is a prime number that divides \(\gcd(sc_{3},tc_{4})\), then \(p\in\mathcal{P}\).
* If \(p\in\mathcal{P}_{1}\), then \(p\mid s\) and \(p\nmid t\), hence \(p\nmid(tc_{1}+sc_{2})\).
* If \(p\in\mathcal{P}_{2}\cup\mathcal{P}_{3}\), then \(p\nmid s\) and \(p\mid\gcd(c_{1},c_{3},c_{4})\). Since \(\gcd(c_{1},c_{2},c_{3},c_{4})=1\), \(p\nmid c_{2}\), hence \(p\nmid tc_{1}+sc_{2}\).
The above argument shows that \(\gcd(sc_{3},tc_{4},tc_{1}+sc_{2})=1\).
If \(n=3\), then \(c_{4}=0\) and by the equality (3.4), we have
\[\gcd(f_{1}(s,t),f_{1}^{\prime}(s,t),f_{2}(s,t))=\gcd(sc_{3},tc_{1}+sc_{2})= \gcd(sc_{3},tc_{4},tc_{1}+sc_{2})=1.\]
If \(n>3\), note that
\[\gcd(f_{3}(s,t),f_{3}^{\prime}(s,t),\cdots,f_{n-1}(s,t),f_{n-1}^{\prime}(s,t)) =tc_{4},\]
by the equality (3.4) we have
\[\gcd(f_{1}(s,t),f_{1}^{\prime}(s,t),f_{2}(s,t),f_{3}(s,t),f_{3}^{\prime}(s,t), \cdots,f_{n-1}(s,t),f_{n-1}^{\prime}(s,t))=1.\]
Hence \(d_{2}(Y)=1\).
**Corollary 3.7**.: _Suppose \(H=(h_{ij})\) is an element in \(GL_{n}(\mathbb{Z})\) such that \(H\) is of type \(\mathcal{H}\) with \(\gcd(H-h_{11}I)=1\), then \(m_{H}<n\)._
Proof.: \(H-h_{11}I\) is of type \(\mathcal{H}_{0}\) with \(\gcd(H-h_{11}I)=1\), by Proposition 3.5, there exists \(T\in GL_{n}(\mathbb{Z})\), such that
\[T^{-1}(H-h_{11}I)T=T^{-1}HT-h_{11}I\]
is a matrix of type \(\mathcal{H}_{n}\). Let \(H^{\prime}=T^{-1}HT=(h^{\prime}_{ij})\) and \(H^{\prime\prime}=H^{\prime}-h_{11}I=(h^{\prime\prime}_{ij})\), by Lemma 2.2 and Lemma 2.4, we have \(m_{H}=m_{H^{\prime}}=m_{H^{\prime\prime}}\).
If \(h^{\prime\prime}_{21},h^{\prime\prime}_{2n},\cdots,h^{\prime\prime}_{n-1,n}\) are not all \(0\), then \(H^{\prime\prime}\) satisfies the condition (II) or (III) of Lemma 3.6, hence \(m_{H}=m_{H^{\prime\prime}}<n\).
Otherwise, \(h^{\prime\prime}_{21}=h^{\prime\prime}_{2n}=\cdots=h^{\prime\prime}_{n-1,n}=0\), we have \(\gcd(h^{\prime\prime}_{1n},h^{\prime\prime}_{nn})=1\) because \(H^{\prime\prime}\) is of type \(\mathcal{H}_{n}\). Moreover, \(h^{\prime}_{21}=h^{\prime}_{2n}=\cdots=h^{\prime}_{n-1,n}=0\) and the shape of \(H^{\prime}\) is
\[\begin{bmatrix}h^{\prime}_{11}&*&h^{\prime}_{1n}\\ 0&H^{*}&0\\ 0&*&h^{\prime}_{nn}\end{bmatrix}.\]
Since \(\pm 1=\det(H)=\det(T^{-1}HT)=\det(H^{\prime})=h^{\prime}_{11}h^{\prime}_{nn} \det(H^{*})\), we have \(h^{\prime}_{11}=\pm 1\) and \(h^{\prime}_{nn}=\pm 1\).
Note that \(h^{\prime\prime}_{11}=h^{\prime}_{11}-h_{11}=0\), then \(h^{\prime\prime}_{nn}=h^{\prime}_{nn}-h_{11}=h^{\prime}_{nn}-h^{\prime}_{11}=0\) or \(\pm 2\). If \(h^{\prime\prime}_{nn}=0\), then \(|h^{\prime\prime}_{1n}|=\gcd(h^{\prime\prime}_{1n},h^{\prime\prime}_{nn})=1\), so \(h^{\prime\prime}_{1n}\equiv\pm 1\mod h^{\prime\prime}_{nn}\). If \(h^{\prime\prime}_{nn}=\pm 2\), then \(h^{\prime\prime}_{1n}\) is odd because \(\gcd(h^{\prime\prime}_{1n},h^{\prime\prime}_{nn})=1\), we also have \(h^{\prime\prime}_{1n}\equiv\pm 1\mod h^{\prime\prime}_{nn}\). Thus \(H^{\prime\prime}\) satisfies the condition (I) of Lemma 3.6 and \(m_{H}=m_{H^{\prime\prime}}<n\).
**Theorem 3.8**.: _Suppose \(A=(a_{ij})\) is an element in \(GL_{n}(\mathbb{Z})\) with \(n\geq 3\), then \(m_{A}=n\) if and only if \(\gcd(A-a_{11}I)\neq 1\)._
Proof.: If \(\gcd(A-a_{11}I)=1\), by Proposition 3.2, there exists \(T\in GL_{n}(\mathbb{Z})\) such that \(H=T^{-1}AT=(h_{ij})\) is of type \(\mathcal{H}\) with \(h_{11}=a_{11}\). By Lemma 2.6, \(\gcd(H-h_{11}I)=\gcd(T^{-1}AT-a_{11}I)=\gcd(T^{-1}(A-a_{11}I)T)=\gcd(A-a_{11}I)=1\), thus by Lemma 2.4 and Corollary 3.7, \(m_{A}=m_{H}<n\).
If \(\gcd(A-a_{11}I)\neq 1\), then by Lemma 2.2 and 2.5, \(m_{A}=m_{A-a_{11}I}=n\).
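Theorem 3.8, combined with Lemma 2.3, gives an immediate decision procedure: compute the greatest common divisor of the entries of \(A-a_{11}I\) and compare it with \(1\). A short Python sketch, assuming \(A\) is given as an integer matrix in \(GL_{n}(\mathbb{Z})\) with \(n\geq 3\), is the following.

```python
from math import gcd
from functools import reduce

def full_rank_mapping_torus(A):
    """Decide whether rank(Z^n x|_phi Z) = n + 1 for the automorphism phi given by the
    integer matrix A in GL_n(Z) with n >= 3.  By Lemma 2.3 the rank equals m_A + 1,
    and by Theorem 3.8 m_A = n exactly when gcd(A - a_11 I) != 1."""
    n = len(A)
    a11 = A[0][0]
    entries = [A[i][j] - (a11 if i == j else 0) for i in range(n) for j in range(n)]
    return reduce(gcd, (abs(x) for x in entries), 0) != 1

# Example: a companion-type matrix in GL_3(Z) (determinant 1).
A = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, -2]]
print(full_rank_mapping_torus(A))   # False: gcd(A - 0*I) = 1, so m_A < 3 and the rank is < 4
```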
**Acknowledgements.** The authors are partially supported by National Natural Science Foundation of China (No. 12271385).
|
2304.09056 | Orbital-Free Quasi-Density Functional Theory | Wigner functions are broadly used to probe non-classical effects in the
macroscopic world. Here we develop an orbital-free functional framework to
compute the 1-body Wigner quasi-probability for both fermionic and bosonic
systems. Since the key variable is a quasi-density, this theory is particularly
well suited to circumvent the problem of finding the Pauli potential or
approximating the kinetic energy in orbital-free density functional theory. As
proof of principle, we find that the universal functional for the building
block of optical lattices results from a translation, a contraction, and a
rotation of the corresponding functional of the 1-body reduced density matrix,
indicating a strong connection between these functional theories. Furthermore,
we relate the concepts of Wigner negativity and $v$-representability, and find
a manifold of ground states with negative Wigner functions. | Carlos L. Benavides-Riveros | 2023-04-18T15:27:52Z | http://arxiv.org/abs/2304.09056v3 | # Orbital-Free Quasi-Density Functional Theory
###### Abstract
Wigner functions are broadly used to probe non-classical effects in the macroscopic world. Here we develop an _orbital-free_ functional framework to compute the 1-body Wigner quasi-probability for both fermionic and bosonic systems. Since the key variable is a quasi-density, this theory is particularly well suited to circumvent the problem of finding the Pauli potential or approximating the kinetic energy in orbital-free density functional theory. As proof of principle, we find that the universal functional for the building block of optical lattices results from a translation, a contraction, and a rotation of the corresponding functional of the 1-body reduced density matrix, indicating a strong connection between these functional theories. Furthermore, we relate the concepts of _Wigner negativity_ and _v-representability_, and find a manifold of ground states with negative Wigner functions.
_Introduction.--_ Detecting and understanding quantum features at the macroscopic level is one of the main theoretical and technological challenges of modern quantum sciences. Nowadays, state-of-the-art experiments can directly observe non-classical behavior (as quantum superposition) in systems with a truly macroscopic number of particles, with as many as \(10^{16}\) atoms [1; 2; 3; 4]. A powerful theoretical and computational strategy to detect that _quantumness_ is by directly measuring the system's corresponding Wigner function. Although normalized to unity, Wigner functions are quasi-probability distributions that can take negative values, a phenomenon that has no classical counterpart. Hence, negativity in the Wigner functions has been linked to non-classical features of quantum states and is considered a distinctive signature of quantum entanglement [5; 6; 7], contextuality [8; 9; 10], quantum computation [11], quantum steering [12; 13], or even quantum gravity [14].
Due to the exponentially large Hilbert spaces of quantum many-body systems, finding the corresponding Wigner function is, in general, a computationally prohibitive task. Yet for identical particles it is possible to circumvent the Hilbert space's exponential growth by means of a universal functional of certain reduced, more manageable, quantities, like, e.g., the density. Based on the important observation that electronic systems are fully determined by the ground-state density [15], density functional theory (DFT) is a prominent methodology in electronic structure calculations, with applications ranging from quantum chemistry and material science [16; 17] to self-driving labs [18]. Quite remarkably, orbital-free DFT achieves a computational linear scaling with the system size [19]. But, unfortunately, from the density alone it is not possible to reconstruct the Wigner function, and therefore standard DFT is, in general, not suitable for describing non-classical features of quantum many-body systems.
A recent phase-space formulation of DFT employs, as the central variable, the 1-particle Wigner quasi-density, which is in a one-to-one correspondence with the respective ground state for interacting many-fermion/boson systems [20]. Its main feature is that the 1-body Wigner function can be accessed directly, without pre-computing the full wave function. This Wigner quasi-density functional theory (quasi-DFT) is a promising theoretical tool to model many-body problems while accounting for non-classical features, strong interactions, and quantum correlations, with the same computational cost as standard DFT. As we will show below, the theory has also the potential of bypassing well-known problems of orbital-free DFT. To date, however, there are neither orbital-free nor orbital-dependent functionals for quasi-DFT.
Here, we will obtain equations for the fermionic/bosonic 1-particle quasi-density. This is, we will argue, the initial step to developing a full _orbital-free_ framework for Wigner quasi-DFT. As one of our main results, we will show that \(\omega(\mathbf{r},\mathbf{p})\), the 1-particle Wigner quasi-density, satisfies the following, effective, eigen-equation:
\[h_{\rm eff}\star\omega(\mathbf{r},\mathbf{p})=\omega(\mathbf{r},\mathbf{p}) \star h_{\rm eff}=\mu\,\omega(\mathbf{r},\mathbf{p})\,, \tag{1}\]
where \(h_{\rm eff}=\frac{1}{2}\mathbf{p}^{2}+v_{\rm ext}(\mathbf{r})+v_{\rm eff}( \mathbf{r},\mathbf{p})\), \(v_{\rm ext}(\mathbf{r})\) is the external potential, \(v_{\rm eff}(\mathbf{r},\mathbf{p})\) is certain effective potential that we introduce below, and the symbol \(\star\) is the so-called star product of phase-space quantum mechanics.
This Letter is organized as follows: First, we review both the orbital-free functional theories and the Wigner formulation of DFT. Second, we derive an Euler-Lagrange equation for the 1-body Wigner quasi-density. Next, we derive an equation using the Moyal product. We then employ the Hubbard model to present for the first time a functional realization of quasi-DFT. We conclude with a summary and discuss some implications of our results. In the Appendixes, we provide additional technical details.
_Functional theories.--_ The enormous success of DFT in electronic structure calculations is mainly due to the existence of a set of self-consistent 1-particle equations that allow for the computation of the density from 1-particle orbitals [21]. Although it is much cheaper than wave-function methods, this Kohn-Sham DFT still has
an unfavorable computational scaling with the cube of the number of electrons [22]. In turn, orbital-free DFT allows a much more favorable, linear scaling with the system size [17; 19], but this computational advantage is counterbalanced by the fact that the quantum mechanical kinetic energy functional is unknown, and it is written as a classical, approximate function of the electron density. A parallel intellectual effort is the 1-particle reduced density matrix functional theory (1-RDMFT) that exploits the full 1-particle picture of the many-body problem by seeking a universal functional of the 1-body reduced density matrix (1-RDM) [23; 24; 25], for fermionic [26; 27; 28], bosonic [29; 30; 31; 32], or relativistic [33] interacting particles. Similar to DFT, 1-RDMFT is based on a one-to-one correspondence between the ground state and its corresponding 1-RDM. Although this theory is in a better position than DFT to tackle strong correlations [34], its broad use has been hampered by the absence of Kohn-Sham-like equations for the natural orbitals (i.e., the eigenvectors of the 1-RDM) [25; 35]. It, therefore, comes as no surprise that fermionic 1-RDMFT is computationally much more demanding than DFT [36]. Unfortunately, there are only a few orbital-free formulations of 1-RDMFT (the most notable being the exchange part of the Hartree-Fock functional). The development of an orbital-free perspective of 1-RDMFT could boost its broad applicability.
_Phase-space quantum mechanics.--_ In the phase-space formulation of quantum mechanics, observables are represented by symbols, i.e., functions of position \(\mathbf{r}\) and momentum \(\mathbf{p}\) coordinates. Out of many choices, Wigner functions host the most natural representation of quantum mechanics [37]. In the classical limit, they turn out to be the phase-space distributions of statistical mechanics [38]. In this formulation, quantum operators correspond uniquely to phase-space classical functions via the Weyl correspondence, while operator products correspond to \(\star\)-products. This noncommutative star (twisted or Moyal) product is commonly defined by the phase-space pseudo-differential operator [39]: \(\star\equiv\exp[i\hbar(\overleftarrow{\partial}_{r}\overrightarrow{\partial}_{p}-\overleftarrow{\partial}_{p}\overrightarrow{\partial}_{r})/2]\); the arrows denote that a given derivative acts only on a function standing on the left/right. This product is defined by \(\mathcal{Q}(f\star g)=\mathcal{Q}(f)\mathcal{Q}(g)\) where \(\mathcal{Q}(f)\) is the quantized operator version (by the Weyl rule) of the phase-space function \(f\) [40]. The eigenvalue problem for the Hamiltonian \(H\) reads \(H\star f_{n}=E_{n}f_{n}=f_{n}\star H\) [41].
_1-body Wigner quasi-density.--_ By definition, the 1-body Wigner quasi-densities are given in terms of the 1-RDM \(\gamma(\mathbf{r},\sigma;\mathbf{r}^{\prime},\sigma^{\prime})\), by the relation:
\[\omega^{\sigma\sigma^{\prime}}(\mathbf{r},\mathbf{p})=\frac{1}{\pi^{3}}\int \gamma(\mathbf{r}-\mathbf{z},\sigma;\mathbf{r}+\mathbf{z},\sigma^{\prime})\,e ^{2i\mathbf{p}\cdot\mathbf{z}}\,d^{3}z\,, \tag{2}\]
where \(\sigma\in\{\uparrow,\downarrow\}\) are the spin variables. Notice that the marginal \(\sum_{\sigma}\int\omega^{\sigma\sigma}(\mathbf{r},\mathbf{p})\,d^{3}p\) gives exactly the density \(n(\mathbf{r})\) (the central object of DFT) [42].
_Wigner quasi-DFT.--_ A generalization of the Hohenberg-Kohn [15] and Gilbert [23] theorems to Hamiltonians of the form \(H=h+V\), with a _fixed_ two-particle interaction \(V\), proves the existence of a universal Wigner functional \(\mathcal{F}_{V}[\omega]\) of the 1-body quasi-density \(\omega\)[20]. Indeed, for any choice of the 1-particle phase-space Hamiltonian \(h(\mathbf{r},\mathbf{p})=\frac{1}{2}\mathbf{p}^{2}+v_{\mathrm{ext}}(\mathbf{r },\mathbf{p})\) the energy functional:
\[\mathcal{E}[\omega]\equiv\int h(\mathbf{r},\mathbf{p})\omega(\mathbf{r}, \mathbf{p})d\Omega+\mathcal{F}_{V}[\omega]\geq E_{\mathrm{gs}}\,, \tag{3}\]
is bounded from below by the exact ground-state energy. The equality in Eq. (3) holds exactly when \(\mathcal{E}[\omega]\) is evaluated using the ground-state 1-body quasi-density \(\omega_{\mathrm{gs}}\). As in standard DFT, the functional \(\mathcal{F}_{V}[\omega]\) is completely independent of any external (phase-space) potential \(v(\mathbf{r},\mathbf{p})\). As in 1-RDMFT, it is also completely independent of the kinetic energy and depends only on the fixed two-particle interaction \(V\). The (universal) functional \(\mathcal{F}_{V}[\omega]\) obeys a constrained-search formulation, by considering only many-body wave-functions that integrate to the same \(\omega\):
\[\mathcal{F}_{V}[\omega]=\min_{\Psi\rightarrow\omega}\left\langle\Psi|V|\Psi \right\rangle. \tag{4}\]
While the functional is unknown, it is known that it has some better scaling properties than the functionals in DFT. For instance, by defining, \(\omega_{\lambda}=\omega(\lambda\mathbf{r},\lambda^{-1}\mathbf{p})\), one can show that \(\mathcal{F}_{V}[\omega_{\lambda}]=\lambda\mathcal{F}_{V}[\omega]\)[20], a fact that lies in the exact knowledge of the kinetic energy functional.
_Representability condition of \(\omega\).--_ In a quite natural way, 1-body quasi-densities inherit the representability conditions of the 1-RDM, \(\gamma\). Due to unitary invariance, those can be expressed as conditions on the eigenvalues of \(\gamma\)[43]. Therefore, it is convenient to use the spectral representation of \(\omega\) (i.e., \(\omega=\sum_{i}n_{i}f_{i}\)) to find its representability conditions. In general, \(n_{i}\geq 0\). In addition, in the case of fermions:
\[\omega\star\omega\leq\omega\,, \tag{5}\]
which is just a consequence of the Pauli exclusion principle [44].
_Equation for the 1-body quasi-density.--_ We now exhibit an exact equation for the phase-space 1-body quasi-density. Let \(\mathcal{E}[\omega]\) be the energy functional of the Wigner function (3), subject to the constraint \(\int d\Omega\,\omega(\mathbf{r},\mathbf{p})=N\). The \(N\)-particle phase-space density which minimizes such a functional is found by applying a functional derivative of the Lagrangian \(\mathcal{E}[\omega]-\mu N\) with respect to \(\omega\), yielding the Euler-Lagrange equation of Wigner quasi-DFT:
\[h(\mathbf{r},\mathbf{p})+\frac{\delta\mathcal{F}_{V}[\omega]}{\delta\omega( \mathbf{r},\mathbf{p})}=\mu\,. \tag{6}\]
There is an important consequence of this result. As is well known, one of the central problems in orbital-free DFT is approximating the kinetic energy functional in terms of the density [45; 46] or, alternatively, the Pauli potential [47; 48]. It is, indeed, particularly crucial that the Pauli principle be captured precisely in the kinetic energy. As we can see in Eq. (6), this important problem
is completely absent in the phase-space formalism. First, the kinetic energy and the external potential are exact, rather simple phase-space functions, and no approximation is needed. Second, the representability condition of the Wigner function (5) guarantees that the Pauli principle is fulfilled. As a consequence, our orbital-free quasi-DFT needs only to approximate the universal functional \(\mathcal{F}_{V}[\omega]\).
\(\star\)_-eigenequation for \(\omega\).--_ Inspired by the work of Levy, Perdew and Sahni [49], we now exhibit an exact \(\star\)-eigenequation for the 1-particle quasi-density. As explained in Appendix C, by computing the directional functional derivative of \(\mathcal{E}[\omega]\) at the point \(\omega\) in the direction of \(\omega\), one can show that \(\omega_{\rm gs}\) fulfills the following equation:
\[\omega_{\rm gs}\star h_{\rm eff}=h_{\rm eff}\star\omega_{\rm gs}=\mu\,\omega_ {\rm gs}\,, \tag{7}\]
where \(h_{\rm eff}=h+\delta\mathcal{F}_{V}[\omega]/\delta\omega|_{\omega=\omega_{\rm gs}}\). The simplicity of this formula can be compared with the one from orbital-free DFT for \(\sqrt{n(\mathbf{r})}\)[50]. Noticeably, the formula (7) allows for a Wigner-Moyal expansion of the equation for the quasi-density: \(\sum_{n}\frac{i^{n}\hbar^{n}}{2^{n}n!}\,h_{\rm eff}\big(\overleftarrow{\partial}_{r}\overrightarrow{\partial}_{p}-\overleftarrow{\partial}_{p}\overrightarrow{\partial}_{r}\big)^{n}\omega=\mu\,\omega\).
_Functional realization.--_ To the best of our knowledge, there are no explicit functionals of \(\omega\). Although 1-RDMFT functionals could be Wigner transformed, almost all of them are written in terms of natural orbitals [51; 52; 53; 54; 55; 56; 57; 58; 59; 60], so they are not suited for our purposes. Let us, therefore, illustrate the potential of orbital-free quasi-DFT by discussing the generalized Bose-Hubbard dimer, whose standard version has been broadly used to unveil aspects of functional theories [61; 62; 63; 64; 65; 29]. The interacting Hamiltonian, containing all particle-conserving quartic terms, can be written with 3 parameters in the following way:
\[V(u_{1},u_{2},u_{3}) = u_{1}\sum_{j=l,r}\hat{n}_{j}(\hat{n}_{j}-1) \tag{8}\] \[+ u_{2}\hat{n}_{l}\hat{n}_{r}+u_{3}\left[(b_{l}^{\dagger})^{2}b_{ r}^{2}+(b_{r}^{\dagger})^{2}b_{l}^{2}\right]\,,\]
where \(b_{j}^{\dagger}\), \(b_{j}\) and \(\hat{n}_{j}\) are the creation, annihilation and particle-number operators on the left/right sites \(j\in\{l,r\}\). Normalizing to 1 and assuming real-valued matrix elements, the 1-RDM can be represented in the lattice-site basis \(|l\rangle\), \(|r\rangle\) as
\[\gamma=\left(\tfrac{1}{2}+\vec{\gamma}\cdot\vec{\sigma}\right)\,, \tag{9}\]
where \(\vec{\gamma}=(\gamma_{lr},0,\gamma_{ll}-\tfrac{1}{2})\), \(\vec{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) is the vector of Pauli matrices, and \(\gamma_{ij}=\langle i|\gamma|j\rangle\).
To write the corresponding (discrete) Wigner transformation we follow Refs. [66; 67; 68; 69] where the Wigner function is represented on a grid of twice the dimension of the underlying Hilbert space \(\{j,n\}\). For the momentum basis, we choose the one in which the hopping term of the Hubbard Hamiltonian is diagonal: \(|n\rangle=[|l\rangle+(-1)^{n}|r\rangle]/\sqrt{2}\) for \(n\in\{0,1\}\).
The 1-body Wigner quasi-density can now be computed:
\[\omega_{j,n}=\frac{1}{2}[\gamma_{jj}+(-1)^{n}\gamma_{lr}]\,. \tag{10}\]
As it should be, the marginal densities are recovered by the partial sums: \(\sum_{n}\omega_{j,n}=\gamma_{jj}\) and \(\sum_{j}\omega_{j,n}=\tilde{\gamma}_{nn}\), where \(\tilde{\gamma}_{nn}\) is the momentum density. Since \(\omega_{r,1}=\tfrac{1}{2}-\omega_{l,0}\) and \(\omega_{r,0}=\tfrac{1}{2}-\omega_{l,1}\), we take \(\omega_{l,0}\) and \(\omega_{l,1}\) as our two degrees of freedom. It is straightforward to check that the representability condition reads:
\[\left(\omega_{l,0}-\frac{1}{4}\right)^{2}+\left(\omega_{l,1}-\frac{1}{4} \right)^{2}\leq\frac{1}{8}\,. \tag{11}\]
Since this is the equation of a disk of radius \(1/\sqrt{8}\) centered at \((\tfrac{1}{4},\tfrac{1}{4})\), one can parameterize the discrete Wigner function with a radius and an angle, namely, \(\omega_{l,0}(R,\phi)=\tfrac{1}{4}[1+\sqrt{2}R\cos(\phi)]\) and \(\omega_{l,1}(R,\phi)=\tfrac{1}{4}[1+\sqrt{2}R\sin(\phi)]\). Fig. 1 presents two different realizations of the Hamiltonian (8) for both 1-RDMFT and quasi-DFT. It can be seen that the functional of quasi-DFT results from the respective 1-RDMFT functional after a translation, a contraction, and a rotation of \(45^{\circ}\). This result seems to be general for lattice systems, as indicated in Appendix A. After applying Eq. (6) (or a discrete version of (7)) one can find \(\omega_{\rm gs}\) for specific values of \(t\) (the strength of the hopping term) and \(v_{l}-v_{r}\) (the external potential).
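To make this concrete, the following minimal Python sketch (not part of the original computation; the particle number and coupling values are illustrative choices) diagonalizes the generalized Bose-Hubbard dimer (8), builds the 1-RDM of Eq. (9) and the discrete Wigner quasi-density of Eq. (10) from the ground state, and checks the representability condition (11):

```python
import numpy as np

def dimer_hamiltonian(N, t, dv, u1, u2, u3):
    """Generalized Bose-Hubbard dimer in the Fock basis |n, N-n>, n = 0..N.
    t: hopping, dv = v_l - v_r (constant offset v_r*N dropped), u1, u2, u3: Eq. (8)."""
    dim = N + 1
    H = np.zeros((dim, dim))
    for n in range(dim):                      # n bosons on the left site
        nl, nr = n, N - n
        H[n, n] = dv * nl + u1 * (nl*(nl-1) + nr*(nr-1)) + u2 * nl * nr
        if n + 1 <= N:                        # -t (b_l^+ b_r + h.c.)
            H[n+1, n] += -t * np.sqrt((nl + 1) * nr)
            H[n, n+1] += -t * np.sqrt((nl + 1) * nr)
        if n + 2 <= N:                        # u3 [(b_l^+)^2 b_r^2 + h.c.]
            amp = u3 * np.sqrt((nl+1)*(nl+2)*nr*(nr-1))
            H[n+2, n] += amp
            H[n, n+2] += amp
    return H

def wigner_from_ground_state(N, t, dv, u1, u2, u3):
    H = dimer_hamiltonian(N, t, dv, u1, u2, u3)
    c = np.linalg.eigh(H)[1][:, 0]            # ground-state amplitudes (real)
    n = np.arange(N + 1)
    gll = np.sum(n * c**2) / N                # gamma_ll, normalized so that Tr(gamma) = 1
    glr = np.sum(c[1:] * c[:-1] * np.sqrt((n[:-1] + 1) * (N - n[:-1]))) / N
    # Discrete Wigner quasi-density, Eq. (10)
    w = {(j, m): 0.5 * (g + (-1)**m * glr)
         for j, g in (("l", gll), ("r", 1 - gll)) for m in (0, 1)}
    return gll, glr, w

if __name__ == "__main__":
    gll, glr, w = wigner_from_ground_state(N=3, t=0.2, dv=0.5, u1=1.0, u2=0.0, u3=0.0)
    wl0, wl1 = w[("l", 0)], w[("l", 1)]
    disk = (wl0 - 0.25)**2 + (wl1 - 0.25)**2          # representability disk, Eq. (11)
    print("omega:", w)
    print("disk value %.4f  (must be <= 1/8 = 0.125)" % disk)
    print("Wigner-negative ground state?", any(v < 0 for v in w.values()))
```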
_Negativity and \(v\)-representability.--_ Quite remarkably, the quasi-DFT presented here can relate two important concepts in Wigner and functional theories, _Wigner negativity_ and _\(v\)-representability_: Which Wigner-negative 1-body quasi-densities come from ground states? We answer this question explicitly for 2 and 3 bosons for the
Figure 1: Universal functionals of 1-RDMFT \(F_{V}[\gamma]\) and Wigner quasi-DFT \(\mathcal{F}_{V}[\omega]\) for two realizations of the generalized Bose-Hubbard dimer (8) for three particles.
standard Bose-Hubbard dimer in Fig. 2: There are 4 disconnected ground-state regions of Wigner negativity! Relating these two important concepts seems to be new in the literature.
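In the same spirit, a brute-force scan over one-particle parameters (again an illustrative sketch; the grid of \(t\) and \(v_{l}-v_{r}\) values is arbitrary) makes the connection explicit for the standard dimer of Fig. 2: every point produced below is reachable as a ground state for some one-particle Hamiltonian \(h\) (hence \(v\)-representable in the sense used here), and one simply counts how many of those points carry a negative Wigner entry.

```python
import numpy as np

def ground_state_omega(N, t, dv, u1=1.0):
    """Ground-state (omega_{l,0}, omega_{l,1}) of the standard Bose-Hubbard dimer
    (u2 = u3 = 0), via Eq. (10)."""
    n = np.arange(N + 1)                       # bosons on the left site
    H = np.diag(dv * n + u1 * (n*(n-1) + (N-n)*(N-n-1)))
    hop = -t * np.sqrt((n[:-1] + 1) * (N - n[:-1]))
    H += np.diag(hop, 1) + np.diag(hop, -1)
    c = np.linalg.eigh(H)[1][:, 0]
    gll = np.sum(n * c**2) / N
    glr = np.sum(c[1:] * c[:-1] * np.sqrt((n[:-1] + 1) * (N - n[:-1]))) / N
    return 0.5 * (gll + glr), 0.5 * (gll - glr)

def is_wigner_negative(wl0, wl1):
    # the remaining two entries are 1/2 - wl0 and 1/2 - wl1
    return min(wl0, wl1, 0.5 - wl0, 0.5 - wl1) < 0

if __name__ == "__main__":
    N = 3
    pts = [ground_state_omega(N, t, dv)
           for t in np.linspace(-3, 3, 121) for dv in np.linspace(-6, 6, 121)]
    neg = [p for p in pts if is_wigner_negative(*p)]
    print("ground-state (v-representable) points sampled:", len(pts))
    print("of which Wigner-negative:                      ", len(neg))
```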
_Conclusion.--_ Unveiling the role of quantum effects at the classical level is a crucial problem for developing quantum technologies. The Wigner quasi-probability is usually employed as a probe of such quantumness. This letter showed that \(\omega(\mathbf{r},\mathbf{p})\), the (fermionic or bosonic) 1-body Wigner quasi-density, can be inserted into a functional-theoretical framework in an orbital-free manner.
## Appendix A Discrete Wigner formalism for the Hubbard model
Here we apply the discrete Wigner formalism to the Hubbard model of \(L\) sites. This is defined in the \(L\)-dimensional Hilbert space \(\mathcal{H}^{L}\) whose position basis is \(\mathcal{S}=\{|1\rangle,...,|L\rangle\}\). Another orthonormal basis for the same Hilbert space is \(\{|\phi_{0}\rangle,...,|\phi_{L-1}\rangle\}\), defined by the Fourier transform:
\[|\phi_{m}\rangle=\frac{1}{\sqrt{L}}\sum_{n=1}^{L}e^{in\phi_{m}}|n\rangle\,, \tag{10}\]
with \(\phi_{m}=\frac{2\pi}{L}m\). The set of pairs \(\{n,\phi_{m}\}_{n,m}\) constitutes an \(L\times L\) grid. This is the phase space \(\Gamma^{L}\) associated with the Hilbert space \(\mathcal{H}^{L}\)[67].
The operators \(\hat{n}=\sum_{n}n|n\rangle\langle n|\) and \(\hat{\phi}=\sum_{m}\phi_{m}|\phi_{m}\rangle\langle\phi_{m}|\) can be used to construct the following unitary operators:
\[\hat{V}=\exp\left(i\frac{2\pi}{L}\hat{n}\right)\qquad\text{and}\qquad\hat{U}= \exp(i\hat{\phi})\,, \tag{11}\]
which satisfy the Weyl relation [69]:
\[\hat{D}(k,l)\equiv\exp\left(-i\frac{\pi kl}{L}\right)\hat{U}^{k}\hat{V}^{l}=\exp\left(i\frac{\pi kl}{L}\right)\hat{V}^{l}\hat{U}^{k}\,,\qquad k,l\in\mathbb{Z}\,. \tag{12}\]
Figure 2: Representation of the domain of Wigner 1-body quasi-densities for the Bose-Hubbard dimer (10) with \(u_{1}=1\), \(u_{2}=u_{3}=0\), for 2 and 3 bosons. Wigner positive \(\omega>0\) are represented in black (non \(v\)-representable) and orange (\(v\)-representable). Wigner negative \(\omega\) are represented in yellow (\(v\)-representable) and gray (non \(v\)-representable).
With this operator, the authors of Ref. [69] define the phase-space point operator:
\[\hat{\Omega}_{\kappa}(n,\phi_{m})=\frac{1}{L}\sum_{k,l}\kappa(k,l)\hat{D}(k,l) \exp\left[-i\left(k\phi_{m}+\frac{2\pi}{L}ln\right)\right]\,, \tag{10}\]
with a kernel \(\kappa(k,l)\), whose properties are determined by the properties of \(\hat{\Omega}_{\kappa}\). In particular, the operator's hermiticity condition: \(\hat{\Omega}_{\kappa}(n,\phi_{m})=\hat{\Omega}_{\kappa}^{\dagger}(n,\phi_{m})\), \(\forall(n,\phi_{m})\in\Gamma^{L}\), which is needed to map phase-space functions to hermitian operators, results in the condition \(\kappa^{*}(k,l)=(-1)^{L+k+l}\kappa(L-k,L-l)\). For odd \(L=2N+1\), for instance, a kernel can be chosen to be [69]: \(\kappa(k,l)=\cos(\pi kl/L)\).
The map between \(f(n,\phi_{m})\), a real function in \(\Gamma^{L}\), and \(\hat{f}\), an operator in \(\mathcal{H}^{L}\), is realized by means of the following relation:
\[\hat{f}=\frac{1}{L}\sum_{m,n}f(n,\phi_{m})\hat{\Omega}_{\kappa}(n,\phi_{m})\,. \tag{11}\]
Eq. (11) can now be used to find the Wigner quasi-distribution. Since the average value of the observable represented by the operator \(\hat{f}\) in a state defined by the density operator \(\hat{\gamma}\) reads
\[\text{Tr}[\hat{\gamma}\hat{f}]=\frac{1}{L}\sum_{m,n}f(n,\phi_{m})\text{Tr}\left[\hat{\gamma}\hat{\Omega}_{\kappa}(n,\phi_{m})\right]\,, \tag{12}\]
a natural definition for the Wigner quasi-probability (for the kernel \(\kappa\)) arises: \(\omega(n,\phi_{m})=\text{Tr}\left[\hat{\gamma}\hat{\Omega}_{\kappa}(n,\phi_{m} )\right]\). From this definition, one can write:
\[\omega(n,\phi_{m})=\sum_{m^{\prime},n^{\prime}}\mathcal{D}(n,\phi_{m};n^{ \prime},m^{\prime})\langle n^{\prime}|\hat{\gamma}|m^{\prime}\rangle\,, \tag{13}\]
where \(\mathcal{D}(n,\phi_{m};n^{\prime},m^{\prime})=\frac{1}{L^{2}}\sum_{k,l,s} \kappa(k,l)\exp\left[i\left(\frac{\pi kl}{L}+k\phi_{s}+n^{\prime}\phi_{s+l}-m ^{\prime}\phi_{s}-k\phi_{m}-\frac{2\pi}{L}ln\right)\right]\). If one vectorizes both \(\omega\) and \(\gamma\), to wit,
\[|\omega\rangle\!\!\rangle=\begin{pmatrix}\omega(1,\phi_{0})\\ \omega(1,\phi_{1})\\ \omega(1,\phi_{2})\\ \vdots\end{pmatrix}\qquad\text{and}\qquad|\gamma\rangle\!\!\rangle=\begin{pmatrix}\langle 1|\gamma|1\rangle\\ \langle 1|\gamma|2\rangle\\ \langle 1|\gamma|3\rangle\\ \vdots\end{pmatrix}\,, \tag{14}\]
one can formally write (13) as \(|\omega\rangle\!\!\rangle=\hat{\mathcal{D}}\,|\gamma\rangle\!\rangle\).
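The construction above can be checked numerically in a few lines. The sketch below (illustrative; \(L=5\) and the random state are arbitrary choices) builds \(\hat{U}\), \(\hat{V}\), \(\hat{D}(k,l)\) and \(\hat{\Omega}_{\kappa}(n,\phi_{m})\) with the kernel \(\kappa(k,l)=\cos(\pi kl/L)\), and verifies the Weyl relation, the hermiticity of the phase-space point operators, and that the resulting quasi-density is real with marginals proportional to the position and momentum densities (with a factor \(L\) in this normalization):

```python
import numpy as np

L = 5                                             # any odd L
ns = np.arange(1, L + 1)                          # position labels n = 1..L
phis = 2 * np.pi * np.arange(L) / L               # phi_m = 2*pi*m/L

# Momentum basis |phi_m> = L^{-1/2} sum_n e^{i n phi_m} |n>; columns of F are |phi_m>
F = np.exp(1j * np.outer(ns, phis)) / np.sqrt(L)
V = np.diag(np.exp(1j * 2 * np.pi * ns / L))      # exp(i 2*pi*nhat/L)
U = F @ np.diag(np.exp(1j * phis)) @ F.conj().T   # exp(i*phihat)

def D(k, l):
    return np.exp(-1j*np.pi*k*l/L) * np.linalg.matrix_power(U, k) @ np.linalg.matrix_power(V, l)

# Weyl relation: e^{-i pi kl/L} U^k V^l = e^{+i pi kl/L} V^l U^k
for k in range(L):
    for l in range(L):
        rhs = np.exp(1j*np.pi*k*l/L) * np.linalg.matrix_power(V, l) @ np.linalg.matrix_power(U, k)
        assert np.allclose(D(k, l), rhs)

kappa = lambda k, l: np.cos(np.pi * k * l / L)    # kernel for odd L

def Omega(n, m):                                  # phase-space point operator
    return sum(kappa(k, l) * D(k, l) * np.exp(-1j*(k*phis[m] + 2*np.pi*l*n/L))
               for k in range(L) for l in range(L)) / L

for n in ns:                                      # hermiticity of every Omega(n, phi_m)
    for m in range(L):
        O = Omega(n, m)
        assert np.allclose(O, O.conj().T)

# Wigner quasi-density of a random pure state: real, with marginals = L * densities
psi = np.random.randn(L) + 1j * np.random.randn(L); psi /= np.linalg.norm(psi)
gamma = np.outer(psi, psi.conj())
omega = np.array([[np.trace(gamma @ Omega(n, m)).real for m in range(L)] for n in ns])
assert np.allclose(omega.sum(axis=1), L * np.diag(gamma).real)                    # position
assert np.allclose(omega.sum(axis=0), L * np.diag(F.conj().T @ gamma @ F).real)   # momentum
print("all discrete-Wigner consistency checks passed for L =", L)
```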
## Appendix B The Generalized Bose-Hubbard dimer
In this section, we focus on the generalized Bose-Hubbard dimer, whose Hamiltonian reads
\[H=-t(b_{l}^{\dagger}b_{r}+b_{r}^{\dagger}b_{l})+\sum_{j=l/r}v_{j}\hat{n}_{j}+V\,, \tag{15}\]
with
\[V=u_{1}\left[\hat{n}_{l}(\hat{n}_{l}-1)+\hat{n}_{r}(\hat{n}_{r}-1)\right]+u_{2 }\hat{n}_{l}\hat{n}_{r}+u_{3}\left[(b_{l}^{\dagger})^{2}(b_{r})^{2}+(b_{r}^{ \dagger})^{2}(b_{l})^{2}\right]\,. \tag{16}\]
The operators \(b_{j}^{\dagger}\) and \(b_{j}\) create and annihilate a particle on the sites \(j=l/r\), and \(\hat{n}_{j}\) is the corresponding particle-number operator. Any \(N\)-body ground state of the Hamiltonian (15) can be expressed as a linear combination of the configuration states \(|n,N-n\rangle\). Assuming real wave functions, we represent the 1-RDM \(\gamma\equiv\text{Tr}_{N-1}[\Gamma]\) of any pure or ensemble state with respect to the lattice site states \(|l\rangle,|r\rangle\),
\[\gamma_{ij}\equiv\frac{1}{N}\langle\Psi|b_{i}^{\dagger}b_{j}|\Psi\rangle\,,\qquad i,j=l,r\,. \tag{17}\]
Since \(\gamma_{ll}+\gamma_{rr}=1\) (by normalization) and \(\gamma_{lr}=\gamma_{rl}\), the 1-RDM is fully determined by two free parameters. We represent the 1-RDM as:
\[\gamma=\begin{pmatrix}\gamma_{ll}&\gamma_{lr}\\ \gamma_{lr}&1-\gamma_{ll}\end{pmatrix}\,. \tag{10}\]
The only two degrees of freedom of this matrix can be represented in vector form: \(\ket{\gamma}=\begin{pmatrix}\gamma_{ll}\\ \gamma_{lr}\end{pmatrix}\).
### 1-body Wigner function
Our goal is to find the Wigner function associated with this matrix on the grid \(\{(l,0),(l,1),(r,0),(r,1)\}\), with the momentum basis: \(\ket{0}\equiv\frac{1}{\sqrt{2}}(\ket{l}+\ket{r})\) and \(\ket{1}\equiv\frac{1}{\sqrt{2}}(\ket{l}-\ket{r})\). This grid can be seen as a two-dimensional vector space over a finite field, on which the Wigner function is defined:
\[\begin{array}{c|c}\omega(l,1)&\omega(r,1)\\ \hline\omega(l,0)&\omega(r,0)\end{array}\]
Notice that \(\gamma\) can be written as
\[\gamma=\left(\frac{1}{2}+\vec{\gamma}\cdot\vec{\sigma}\right)\,, \tag{11}\]
where \(\vec{\gamma}=(\gamma_{lr},0,\gamma_{ll}-\frac{1}{2})\) and \(\vec{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) is the vector of Pauli matrices. For a qubit, a phase-space point operator is known to be [69]:
\[\Omega(n,\phi_{m})=\frac{1}{2}\left[1+(-1)^{m}(\ket{0}\bra{0}-\ket{1}\bra{1})+(-1)^{n}(\ket{0}\bra{1}+\ket{1}\bra{0})+i(-1)^{n+m}(\ket{0}\bra{1}-\ket{1}\bra{0})\right]\,. \tag{12}\]
Therefore, by computing \(\omega(n,\phi_{m})=\frac{1}{2}\mathrm{Tr}[\gamma\Omega(n,\phi_{m})]\) one finds in vectorized form the following equation:
\[\ket{\omega(\gamma)}=\tfrac{1}{2}\mathcal{D}\ket{\gamma}\,, \tag{13}\]
where
\[\ket{\omega}=\begin{pmatrix}\omega(l,0)\\ \omega(l,1)\end{pmatrix}\qquad\text{and}\qquad\mathcal{D}=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix} \tag{14}\]
with \(\omega(r,1)=\frac{1}{2}-\omega(l,0)\) and \(\omega(r,0)=\frac{1}{2}-\omega(l,1)\). Notice that \(\mathcal{D}^{2}=2\,\mathbb{1}\), so that \(\mathcal{D}/\sqrt{2}\) is an orthogonal matrix. Inverting (13) one gets \(\ket{\gamma(\omega)}=\mathcal{D}\ket{\omega}\).
### Representability
Since \(\gamma^{2}\leq\gamma\), the domain of pure/ensemble \(N\)-representable 1RDMs takes the form of a disc of radius \(\frac{1}{2}\) given by
\[\left(\gamma_{ll}-\frac{1}{2}\right)^{2}+\gamma_{lr}^{2}\leq\frac{1}{4}. \tag{15}\]
The area of this disc is \(A_{N}=\pi/4\) and its boundary \(\partial\mathcal{P}_{p}\) (i.e., \(\gamma_{lr}^{2}+(\gamma_{ll}-\frac{1}{2})^{2}=\frac{1}{4}\)) corresponds to complete Bose-Einstein condensation (BEC) [29]. Plugging (13) into (15) one gets:
\[\left(\omega_{l,0}-\frac{1}{4}\right)^{2}+\left(\omega_{l,1}-\frac{1}{4} \right)^{2}\leq\frac{1}{8}. \tag{16}\]
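A short numerical check (illustrative only) confirms that \(\mathcal{D}^{2}=2\,\mathbb{1}\) and that the map (13) sends the BEC boundary of the \(\gamma\)-disk (15) onto the boundary of the \(\omega\)-disk (16):

```python
import numpy as np

D = np.array([[1.0, 1.0], [1.0, -1.0]])           # the matrix of Eq. (14)
assert np.allclose(D @ D, 2 * np.eye(2))          # hence gamma = D omega inverts omega = D gamma / 2

theta = np.linspace(0, 2 * np.pi, 400)
# boundary of the gamma-disk (15): (gamma_ll - 1/2)^2 + gamma_lr^2 = 1/4  (complete BEC)
gam = np.vstack([0.5 + 0.5 * np.cos(theta), 0.5 * np.sin(theta)])   # rows: gamma_ll, gamma_lr
omg = 0.5 * D @ gam                                                  # the map (13)
radius2 = (omg[0] - 0.25) ** 2 + (omg[1] - 0.25) ** 2
assert np.allclose(radius2, 1.0 / 8.0)            # boundary of the omega-disk (16)
print("BEC boundary of the gamma-disk maps onto the omega-disk boundary of radius 1/sqrt(8).")
```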
## Appendix C Proof of Eq. (7)
We take the functional derivative of \(\mathcal{E}[\omega]-\mu N[\omega]\) at the point \(\omega\) in the direction of \(\omega\):
\[0=\delta\big{(}\mathcal{E}[\omega]-\mu N[\omega]\big{)} =\int\frac{\delta\big{(}\mathcal{E}[\omega]-\mu N[\omega]\big{)}} {\delta\omega(\mathbf{r},\mathbf{p})}\omega(\mathbf{r},\mathbf{p})d\Omega\] \[=\int\left[h(\mathbf{r},\mathbf{p})+\frac{\delta\mathcal{F}[ \omega]}{\delta\omega(\mathbf{r},\mathbf{p})}-\mu\right]\omega(\mathbf{r}, \mathbf{p})d\Omega\] \[=\int\left[h(\mathbf{r},\mathbf{p})\omega(\mathbf{r},\mathbf{p}) +\frac{\delta\mathcal{F}[\omega]}{\delta\omega(\mathbf{r},\mathbf{p})}\omega( \mathbf{r},\mathbf{p})-\mu\,\omega(\mathbf{r},\mathbf{p})\right]d\Omega\] \[=\int\left[h(\mathbf{r},\mathbf{p})\star\omega(\mathbf{r}, \mathbf{p})+\frac{\delta\mathcal{F}[\omega]}{\delta\omega(\mathbf{r},\mathbf{ p})}\star\omega(\mathbf{r},\mathbf{p})-\mu\,\omega(\mathbf{r},\mathbf{p}) \right]d\Omega\,, \tag{10}\]
where \(d\Omega=d^{3}\mathbf{r}d^{3}\mathbf{p}\). In the last line we have used the fact that \(\int fgd\Omega=\int f\star gd\Omega\). Therefore, we conclude that at each phase-space point:
\[h(\mathbf{r},\mathbf{p})\star\omega(\mathbf{r},\mathbf{p})+\frac{\delta \mathcal{F}[\omega]}{\delta\omega(\mathbf{r},\mathbf{p})}\star\omega(\mathbf{ r},\mathbf{p})-\mu\,\omega(\mathbf{r},\mathbf{p})=0\,. \tag{11}\]
Since \(\int fgd\Omega=\int g\star fd\Omega\) also holds, we likewise conclude that:
\[\omega(\mathbf{r},\mathbf{p})\star h(\mathbf{r},\mathbf{p})+\omega(\mathbf{r },\mathbf{p})\star\frac{\delta\mathcal{F}[\omega]}{\delta\omega(\mathbf{r}, \mathbf{p})}-\mu\,\omega(\mathbf{r},\mathbf{p})=0\,. \tag{12}\]
## Appendix D Hartree-Fock in phase space
In this last section, we investigate the form of the orbital-free Hartree-Fock equations in phase space for a system of \(N\) electrons. As the respective wave function is a single Slater determinant, the 1-body reduced-density matrix is a projector:
\[\gamma(\mathbf{r},\mathbf{r}^{\prime})=\sum_{n=1}^{N}\varphi_{n}(\mathbf{r}) \varphi_{n}^{*}(\mathbf{r}^{\prime})\,,\]
with \(\int\varphi_{n}(\mathbf{r})\varphi_{m}^{*}(\mathbf{r})d^{3}\mathbf{r}=\delta _{nm}\). The first result we will prove is that the corresponding Wigner function satisfies \(\omega\star\omega=\omega\).
Proof.: Let us first define the Wigner phase-space orbitals \(\chi_{n}(\mathbf{r},\mathbf{p})=\int\varphi_{n}(\mathbf{r}-\mathbf{z}) \varphi_{n}^{*}(\mathbf{r}+\mathbf{z})e^{2i\mathbf{p}\cdot\mathbf{z}}d^{3} \mathbf{z}\). They satisfy the following equation:
\[\chi_{n}(\mathbf{r},\mathbf{p})\star\chi_{m}(\mathbf{r},\mathbf{p})=\delta_{nm}\,\chi_{n}(\mathbf{r},\mathbf{p})\,,\]
which follows by carrying out the phase-space integrals in the \(\star\)-product and using the orthonormality \(\int\varphi_{n}(\mathbf{r})\varphi_{m}^{*}(\mathbf{r})d^{3}\mathbf{r}=\delta_{nm}\) (in the normalization adopted here). Since \(\omega=\sum_{n=1}^{N}\chi_{n}\), it follows that \(\omega\star\omega=\sum_{n,m}\chi_{n}\star\chi_{m}=\sum_{n}\chi_{n}=\omega\).
This result indicates that we have to solve the Hartree-Fock functional subject to the condition \(\omega\star\omega=\omega\) and the normalization \(\int\omega(\mathbf{r},\mathbf{p})d\Omega=N\). Using the Lagrange multipliers \(\alpha(\mathbf{r},\mathbf{p})\) and \(\beta\), the variational problem reads:
\[\delta\left\{\mathcal{E}_{\mathrm{HF}}[\omega]-\int\alpha(\mathbf{r},\mathbf{p })\left[\omega(\mathbf{r},\mathbf{p})\star\omega(\mathbf{r},\mathbf{p})-\omega (\mathbf{r},\mathbf{p})\right]\,d\Omega-\beta\left[\int\omega(\mathbf{r}, \mathbf{p})\,d\Omega-N\right]\right\}=0\,. \tag{47}\]
Before performing the variation note that
\[\frac{\delta}{\delta\omega(\mathbf{r},\mathbf{p})}\int\alpha(\mathbf{r},\mathbf{p})\,\omega(\mathbf{r},\mathbf{p})\star\omega(\mathbf{r},\mathbf{p})\,d\Omega\] \[\qquad=\frac{\delta}{\delta\omega(\mathbf{r},\mathbf{p})}\int\alpha(\mathbf{r},\mathbf{p})\,\omega(\mathbf{r}^{\prime},\mathbf{p}^{\prime})\,\omega(\mathbf{r}^{\prime\prime},\mathbf{p}^{\prime\prime})e^{2i(\mathbf{r}\cdot\mathbf{p}^{\prime}-\mathbf{r}^{\prime}\cdot\mathbf{p}+\mathbf{r}^{\prime}\cdot\mathbf{p}^{\prime\prime}-\mathbf{r}^{\prime\prime}\cdot\mathbf{p}^{\prime}+\mathbf{r}^{\prime\prime}\cdot\mathbf{p}-\mathbf{r}\cdot\mathbf{p}^{\prime\prime})}\,d\Omega\,d\Omega^{\prime}\,d\Omega^{\prime\prime}\] \[\qquad=\int\alpha(\mathbf{r}^{\prime},\mathbf{p}^{\prime})\,\omega(\mathbf{r}^{\prime\prime},\mathbf{p}^{\prime\prime})e^{2i(\mathbf{r}^{\prime}\cdot\mathbf{p}-\mathbf{r}\cdot\mathbf{p}^{\prime}+\mathbf{r}\cdot\mathbf{p}^{\prime\prime}-\mathbf{r}^{\prime\prime}\cdot\mathbf{p}+\mathbf{r}^{\prime\prime}\cdot\mathbf{p}^{\prime}-\mathbf{r}^{\prime}\cdot\mathbf{p}^{\prime\prime})}\,d\Omega^{\prime}\,d\Omega^{\prime\prime}\] \[\qquad+\int\alpha(\mathbf{r}^{\prime\prime},\mathbf{p}^{\prime\prime})\,\omega(\mathbf{r}^{\prime},\mathbf{p}^{\prime})e^{2i(\mathbf{r}^{\prime\prime}\cdot\mathbf{p}^{\prime}-\mathbf{r}^{\prime}\cdot\mathbf{p}^{\prime\prime}+\mathbf{r}^{\prime}\cdot\mathbf{p}-\mathbf{r}\cdot\mathbf{p}^{\prime}+\mathbf{r}\cdot\mathbf{p}^{\prime\prime}-\mathbf{r}^{\prime\prime}\cdot\mathbf{p})}\,d\Omega^{\prime}\,d\Omega^{\prime\prime}\] \[\qquad=\omega(\mathbf{r},\mathbf{p})\star\alpha(\mathbf{r},\mathbf{p})+\alpha(\mathbf{r},\mathbf{p})\star\omega(\mathbf{r},\mathbf{p})\,. \tag{48}\]
Using this result in Eq. (47) we obtain
\[L(\mathbf{r},\mathbf{p})\equiv f_{\mathrm{HF}}(\mathbf{r},\mathbf{p})-\omega (\mathbf{r},\mathbf{p})\star\alpha(\mathbf{r},\mathbf{p})-\alpha(\mathbf{r}, \mathbf{p})\star\omega(\mathbf{r},\mathbf{p})+\alpha(\mathbf{r},\mathbf{p})- \beta=0\,, \tag{49}\]
where \(f_{\mathrm{HF}}(\mathbf{r},\mathbf{p})=\delta\mathcal{E}_{\mathrm{HF}}[\omega]/\delta\omega(\mathbf{r},\mathbf{p})\). Multiplying (with the \(\star\)-product) this equation on the left by \(\omega(\mathbf{r},\mathbf{p})\) (i.e., \(\omega(\mathbf{r},\mathbf{p})\star L(\mathbf{r},\mathbf{p})\)) and on the right (i.e., \(L(\mathbf{r},\mathbf{p})\star\omega(\mathbf{r},\mathbf{p})\)), and then subtracting both equations, we obtain that \(\omega\) \(\star\)-commutes with \(f_{\mathrm{HF}}(\mathbf{r},\mathbf{p})\):
\[[f_{\mathrm{HF}}(\mathbf{r},\mathbf{p}),\omega(\mathbf{r},\mathbf{p})]_{\star} \equiv f_{\mathrm{HF}}(\mathbf{r},\mathbf{p})\star\omega(\mathbf{r},\mathbf{p} )-\omega(\mathbf{r},\mathbf{p})\star f_{\mathrm{HF}}(\mathbf{r},\mathbf{p})=0\,. \tag{50}\]
This is the equation of \(\omega(\mathbf{r},\mathbf{p})\) within Hartree-Fock theory. Recall that the \(\star\)-product admits an expansion in \(\hbar\). For this reason, this equation allows a semiclassical expansion that does not exist in the double-coordinate representation.
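At leading order in \(\hbar\), the \(\star\)-commutator reduces to \(i\hbar\) times the classical Poisson bracket, so Eq. (50) requires \(\{f_{\rm HF},\omega\}=0\) semiclassically; any \(\omega\) that depends on phase space only through \(f_{\rm HF}\) satisfies this. The sketch below is a toy illustration of that statement: the quadratic Hamiltonian and the Fermi-type occupation are assumptions, not the Hartree-Fock potential derived here.

```python
import numpy as np

# Phase-space grid for a 1D toy model
x = np.linspace(-6, 6, 401)
p = np.linspace(-6, 6, 401)
X, P = np.meshgrid(x, p, indexing="ij")

# Toy effective Hamiltonian and an occupation that depends on it only
h_eff = 0.5 * P**2 + 0.5 * X**2
mu, T = 2.0, 0.3
omega = 1.0 / (1.0 + np.exp((h_eff - mu) / T))     # smooth function of h_eff

def poisson_bracket(f, g, x, p):
    fx, fp = np.gradient(f, x, p, edge_order=2)
    gx, gp = np.gradient(g, x, p, edge_order=2)
    return fx * gp - fp * gx

pb = poisson_bracket(h_eff, omega, x, p)
print("max |{h_eff, omega}|       :", np.abs(pb).max())   # ~0 up to finite-difference error
print("max |d(omega)/dx| for scale:", np.abs(np.gradient(omega, x, axis=0)).max())
```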
To finish the calculation, we now give the explicit form of \(f_{\mathrm{HF}}(\mathbf{r},\mathbf{p})\). Let us define the 1-particle Hamiltonian \(h(\mathbf{r},\mathbf{p})=\mathbf{p}^{2}/2m+v(\mathbf{r})\), with \(v(\mathbf{r})\) being the external potential. Using the inverse of the Wigner transformation, the Hartree-Fock energy reads:
\[\mathcal{E}_{\mathrm{HF}}[\omega]=\int h(\mathbf{r},\mathbf{p})\,\omega( \mathbf{r},\mathbf{p})\,d\Omega+\frac{1}{2}\int\frac{\omega(\mathbf{r}, \mathbf{p})\,\omega(\mathbf{r}^{\prime},\mathbf{p}^{\prime})}{|\mathbf{r}- \mathbf{r}^{\prime}|}d\Omega\,d\Omega^{\prime}-\frac{1}{2}\int e^{i(\mathbf{ p}-\mathbf{p}^{\prime})\cdot(\mathbf{r}-\mathbf{r}^{\prime})}\frac{\omega(( \mathbf{r}+\mathbf{r}^{\prime})/2,\mathbf{p})\,\omega((\mathbf{r}+\mathbf{r}^{ \prime})/2,\mathbf{p}^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}d\Omega\,d \Omega^{\prime}\,.\]
A straightforward calculation finally gives:
\[f_{\mathrm{HF}}(\mathbf{r},\mathbf{p})=h(\mathbf{r},\mathbf{p})+\int\frac{ \omega(\mathbf{r}^{\prime},\mathbf{p}^{\prime})}{|\mathbf{r}-\mathbf{r}^{ \prime}|}\,d\Omega^{\prime}-\int\frac{e^{i(\mathbf{p}-\mathbf{p}^{\prime})\cdot \mathbf{r}^{\prime}}}{|\mathbf{r}^{\prime}|}\omega(\mathbf{r},\mathbf{p}^{ \prime})\,d\Omega^{\prime}\,.\]
|
2304.06365 | Existence of Wormholes in $f(\mathcal{G})$ Gravity using Symmetries | The current study examines the geometry of static wormholes with anisotropic
matter distribution in context of modified $f(\mathcal{G})$ gravity. We
consider the well known Noether and conformal symmetries, which help in
investigating wormholes in $f(\mathcal{G})$ gravity. For this purpose, we
develop symmetry generators associated with conserved quantities by taking into
consideration the $f(\mathcal{G})$ gravity model. Moreover, we use the
conservation relationship gained from the classical Noether method and
conformal Killing symmetries to develop the metric potential. These symmetries
provide a strong mathematical background to investigate wormhole solutions by
incorporating some suitable initial conditions. The obtained conserved quantity
performs a significant role in defining the essential physical characteristics
of the shape-function and energy conditions. Further, we also describe the
stability of obtained wormholes solutions by employing the equilibrium
condition in modified $f(\mathcal{G})$ gravity. It is observed from graphical
representation of obtained wormhole solutions that Noether and conformal
Killing symmetries provide the results with physically accepted patterns. | Tayyaba Naz, G. Mustafa, M. Farasat Shamir | 2023-04-13T09:39:33Z | http://arxiv.org/abs/2304.06365v1 | # Existence of Wormholes in \(f(\mathcal{G})\) Gravity using Symmetries
###### Abstract
The current study examines the geometry of static wormholes with anisotropic matter distribution in context of modified \(f(\mathcal{G})\) gravity. We consider the well known Noether and conformal symmetries, which help in investigating wormholes in \(f(\mathcal{G})\) gravity. For this purpose, we develop symmetry generators associated with conserved quantities by taking into consideration the \(f(\mathcal{G})\) gravity model. Moreover, we use the conservation relationship gained from the classical Noether method and conformal Killing symmetries to develop the metric potential. These symmetries provide a strong mathematical background to investigate wormhole solutions by incorporating some suitable initial conditions. The obtained conserved quantity performs a significant role in defining the essential physical characteristics of the shape-function and energy conditions. Further, we also describe the stability of obtained wormholes solutions by employing the equilibrium condition in modified \(f(\mathcal{G})\) gravity. It is observed from graphical representation of obtained wormhole solutions that Noether and conformal Killing symmetries provide the results with physically accepted patterns.
**Keywords**: Wormhole; \(f(\mathcal{G})\) gravity; Noether symmetries; Conformal motion; Conserved quantities.
**PACS**: 04.20.Jb; 98.80.Jk; 98.80.-k.
## I Introduction
The approximation symmetry method played a significant role in evaluating the precise solutions of the differential equations. Such approximations dynamically reduce the complexity of the non-linear equation involved in a scheme by seeking the unknown parameter of equations. The Noether symmetries, in particular, are not only a mechanism for dealing with the dynamics solution, but their presence also provides suitable conditions so that one can specify the universe models physically and analytically according to our measured observations. In addition to this, Noether symmetry technique is believed to be a suitable mathematical approach, which often investigates the exact solutions and computes the associated conserved quantities. This method plays a central role in reducing the nonlinear equation system to a linear equation system. The numerous conservation principles, such as conservation of energy and angular momentum etc., are specifically linked to the symmetries of a specified dynamic system and provide the conserved quantities, which seem to be the consequence of certain type of symmetry being present in that mechanism. Moreover, conserved quantities can be determined by applying the Noether symmetry technique, asking for the Lagrangian symmetry. The presence of any specific type of symmetry for the Euler-Lagrange equations of motion, along with the Lagrangian, will precisely be related to the Noether's symmetry. Whereas, no particular theory is endorsed by the technique of Noether symmetry, the literature studies have indicated that the existence of Noether symmetries is capable of selecting suitable theory and then integrating dynamics through the first integrals referring to Noether symmetries [1]-[5]. In fact, it should be noticed that the Noether symmetries are not just a mathematical method for solving or reducing dynamics, yet their presence even enables to choice of observable universes/wormholes/black holes, etc. and the collection of analytical models relevant to observations [6]. Recently, we have proposed some compact star solutions incorporating Noether symmetry in frame of the modified \(f(\mathcal{G})\) gravity [7]. The Noether Symmetry approach for \(f(\mathcal{G})\) cosmology in \(n\) dimensions has been discussed in [8]. Moreover, in spherically symmetric context, the \(f(\mathcal{G})\) theory of gravity can be employed to address general relativity \((\mathcal{GR})\) inconsistencies [9]. Further, a detailed overview of the Noether symmetry technique to investigate a variety of cosmic scenarios, including viable mimetic \(f(R)\) and \(f(R,T)\) theories is given in [10]-[11]. In this regard, this approach has successively used to cope with cosmologies generated from various theories of gravity [12]-[14].
Our universe often exhibits eye-opening challenges for cosmologists, regarding their fascinating and enigmatic
existence. The presence of hypothetical geometries is perceived to be among the most contentious topics leading to wormhole geometry. A debate regarding the existence of the wormhole and the construction of its solutions is among the most interesting challenges in modern astrophysics. A wormhole is a path or tunnel that connects two separate regions of the same universe or of two different universes. Flamm [15] used the term bridge for the very first time in 1916. In 1935, Einstein and Rosen mathematically described such bridges as structures known as Einstein-Rosen bridges [16]. In addition, Morris and Thorne [17] established wormholes by taking into account exotic matter. Exotic matter is regarded as the necessary component for the formation of these wormholes. The existence of exotic matter using various techniques has been addressed by many authors [18]-[20]. Moreover, extra geometric terms are thought to be the cause of this exotic matter in modified theories of gravity [21]-[25]. Recently, Sharif and Nawazish [26; 27] have explored static wormhole solutions utilizing the Noether symmetry methodology in modified gravity, and they noticed the stable structure of red-shift functions for various cases. Furthermore, Sharif and Hussain [28] used the same technique to explore the physical presence of wormholes in the frame of \(f(\mathcal{G},T)\) gravity and investigated the properties of fluid distributions for both dust and non-dust cases.
The current cosmic accelerated expansion has always been considered to be the most revolutionizing reality on the landscape of theoretical and observational modern cosmology. In order to take into account the late-time accelerated expansion, two main approaches have been proposed. The first effective way to describe the idea of accelerated cosmos expansion in context of \(\mathcal{G}\mathcal{R}\) is the existence of dark energy, which exhibits strong negative pressure. The second innovative approach to ponder this concept of universe expansion is to modify the Einstein-Hilbert action at large scales. These modifications of the \(\mathcal{G}\mathcal{R}\) play an influential role in revealing the intriguing dynamics behind the expansion of the universe. Among the various gravitational theories, the theory that has acquired the prominence in the last few years is modified \(f(\mathcal{G})\) gravity [29]. This modified gravity was obtained by incorporating the function \(f(\mathcal{G})\) in Einstein Hilbert action. The Gauss-Bonnet term is of great importance as it facilitates the regularization of gravitational action and can serve to avoid ghost contributions [30]. It is believed that \(f(\mathcal{G})\) gravity is also quite helpful in explaining late cosmic acceleration and reconstructing some form of cosmological solution. Indeed, this theory was used as a significant approach for revealing the mystical nature of the cosmos [31].
Several important studies from literature have shown that anisotropic stars can be modeled utilizing solutions that endorse a single parameter group of conformal motion. Herrera and his colleagues [32]-[34] were among the pioneers who provided the general treatment of the spheres that accepted a single parameter category of conformal motions. Some significant findings employing conformal Killing vectors (CKVs) have been presented in literature [35]-[37]. Nevertheless, CKVs approach is helpful to make the governing system easier to analyze by reducing the nonlinear structure of partial differential equations (PDE's) into the ordinary differential equations. In the spacetime, due to conformal symmetry, some constraints on the gravitational potential are imposed. However, the idea of CKVs was considered in literature to investigate the presence of spherically symmetric wormholes, as the static symmetric spacetime presents a limited category of conformal motions. Kuhfitting [38] has recently investigated the stable wormholes solutions through CKVs and non-commutational distribution. Moreover, Rahaman [39] used the CKVs technique to construct the wormhole solutions in the context of non-commutative geometry.
Motivated by the aforementioned literature, our focus is to construct the wormhole solutions admitting symmetries in the frame of conformal motion. To the best of our understanding, no attempt has yet been made to explore the wormhole solutions using the Noether symmetry technique under conformal motion by considering the \(f(\mathcal{G})=\alpha\mathcal{G}^{n}\) gravity model [40], where \(n=2\). For this aim, we extended the idea of Shamir and Tayyaba [7] and use the conservation relationship gained from Noether method by incorporating some peculiar initial conditions to construct the relation of metric potential to address the formulation of wormhole. The manuscript is organized as follows. In section 2, we discuss some basics formulism of \(f(\mathcal{G})\) gravity in frame of anisotropic matter distributions. In section 3, the geometry of wormhole by employing the Noether and conformal motion has been discussed. Conclusive remarks are presented in Section 4.
## II Some formulation of modified \(f(\mathcal{G})\) gravity
The action for modified \(f(\mathcal{G})\) gravity is expressed as [29]
\[\mathcal{S}=\int d^{4}x\sqrt{-g}\Bigg{[}\frac{\mathcal{R}}{2\kappa^{2}}+f( \mathcal{G})+L_{m}\Bigg{]}, \tag{1}\]
here \(L_{m}\) shows matter Lagrangian, \(\mathcal{R}\) being the Ricci scalar, \(\kappa^{2}=8\pi G\) represents the coupling constant term and \(f(\mathcal{G})\) is an arbitrary function of the Gauss-Bonnet invariant term represented as
\[\mathcal{G}=\mathcal{R}^{2}-4\mathcal{R}_{\mu\nu}\mathcal{R}^{\mu\nu}+ \mathcal{R}_{\mu\nu\sigma\rho}\mathcal{R}^{\mu\nu\sigma\rho}, \tag{2}\]
here \(\mathcal{R}_{\mu\nu}\) and \(\mathcal{R}_{\mu\nu\rho\sigma}\) specify the Ricci and Riemann tensors, respectively. The variation of the above action with respect to the metric tensor yields the following field equations
\[G_{\xi\eta}+8\big{[}\mathcal{R}_{\xi\rho\eta\sigma}+\mathcal{R}_{\rho\eta}g_{\sigma\xi}-\mathcal{R}_{\rho\sigma}g_{\eta\xi}-\mathcal{R}_{\xi\eta}g_{\sigma\rho}+\mathcal{R}_{\xi\sigma}g_{\eta\rho}+\frac{\mathcal{R}}{2}(g_{\xi\eta}g_{\sigma\rho}-g_{\xi\sigma}g_{\eta\rho})\big{]}\nabla^{\rho}\nabla^{\sigma}f_{\mathcal{G}}+(\mathcal{G}f_{\mathcal{G}}-f)g_{\xi\eta}=\kappa^{2}T_{\xi\eta}. \tag{3}\]
An alternate representation of above aforementioned field equations (3), which are familiar with \(\mathcal{G}\mathcal{R}\) may be described as
\[G_{\xi\eta}=\kappa^{2}T_{\xi\eta}^{eff}, \tag{4}\]
the effective stress-energy tensor \(T_{\xi\eta}^{eff}\) is given by
\[T_{\xi\eta}^{eff}=T_{\xi\eta}-\frac{8}{\kappa^{2}}\big{[}\mathcal{R}_{\xi\rho \eta\sigma}+\mathcal{R}_{\rho\eta}g_{\sigma\xi}-\mathcal{R}_{\rho\sigma}g_{ \eta\xi}-\mathcal{R}_{\xi\eta}g_{\sigma\rho}+\mathcal{R}_{\xi\sigma}g_{\eta \rho}+\frac{\mathcal{R}}{2}(g_{\xi\eta}g_{\sigma\rho}-g_{\xi\sigma}g_{\eta\rho })\big{]}\nabla^{\rho}\nabla^{\sigma}f_{\mathcal{G}}-(\mathcal{G}f_{\mathcal{G }}-f)g_{\xi\eta}. \tag{5}\]
We consider the static, spherically symmetric spacetime [41]
\[ds^{2}=e^{\nu(r)}dt^{2}-e^{\lambda(r)}dr^{2}-r^{2}(d\theta^{2}+\sin^{2}\theta d \Phi^{2}). \tag{6}\]
The source of the configuration of matter presumed in this study is anisotropic in nature, represented as
\[\mathcal{T}_{\chi\gamma}=(\rho+p_{t})v_{\chi}v_{\gamma}-p_{t}g_{\chi\gamma}+(p _{r}-p_{t})\xi_{\chi}\xi_{\gamma}, \tag{7}\]
here \(\rho\), \(p_{r}\) and \(p_{t}\) indicate energy density, radial and tangential pressures respectively. The four velocity and radial vector are symbolized by \(v_{\chi}\) and \(\xi_{\chi}\) respectively, which are satisfying the following condition
\[v^{\alpha}=e^{\frac{-\nu}{2}}\delta_{0}^{\alpha},\quad v^{\alpha}v_{\alpha}=1,\quad\xi^{\alpha}=e^{\frac{-\lambda}{2}}\delta_{1}^{\alpha},\quad\xi^{\alpha} \xi_{\alpha}=-1.\]
Using equations (5), (6) and (7), we obtain
\[\rho^{eff} = \rho-8e^{-2\lambda}(f_{\mathcal{G}\mathcal{G}\mathcal{G}}\mathcal{ G}^{\prime 2}+f_{\mathcal{G}\mathcal{G}}\mathcal{G}^{\prime\prime})(\frac{e^{\lambda}-1}{r^{2 }})+4e^{-2\lambda}\lambda^{\prime}\mathcal{G}^{\prime}f_{\mathcal{G}\mathcal{G }}(\frac{e^{\lambda}-3}{r^{2}})-(\mathcal{G}f_{\mathcal{G}}-f), \tag{8}\] \[p_{r}^{eff} = p_{r}-4e^{-2\lambda}\nu^{\prime}\mathcal{G}^{\prime}f_{\mathcal{ G}\mathcal{G}}(\frac{e^{\lambda}-3}{r^{2}})+(\mathcal{G}f_{\mathcal{G}}-f),\] (9) \[p_{t}^{eff} = p_{t}-\frac{4e^{-2\lambda}\nu^{\prime}}{r}(f_{\mathcal{G} \mathcal{G}\mathcal{G}}\mathcal{G}^{\prime 2}+f_{\mathcal{G}\mathcal{G}}\mathcal{G}^{ \prime\prime})-\frac{2e^{-2\lambda}\nu^{\prime 2}f_{\mathcal{G}\mathcal{G}} \mathcal{G}^{\prime}}{r}-\frac{2e^{-2\lambda}f_{\mathcal{G}\mathcal{G}} \mathcal{G}^{\prime}}{r}(2\nu^{\prime\prime}-3\nu^{\prime}\lambda^{\prime})+( \mathcal{G}f_{\mathcal{G}}-f). \tag{10}\]
Here \(\rho\), \(p_{r}\) and \(p_{t}\) are usual energy density, radial pressure and transverse pressure respectively. The Gauss-Bonnet invariant term for the spherically symmetric space time (6) appears as
\[\mathcal{G}=\frac{2e^{-\lambda}}{r^{2}}(\nu^{\prime}\lambda^{\prime}+{\nu^{ \prime}}^{2}e^{-\lambda}-3\nu^{\prime}\lambda^{\prime}e^{-\lambda}-2\nu^{\prime \prime}-{\nu^{\prime}}^{2}+2\nu^{\prime\prime}e^{-\lambda}). \tag{11}\]
To solve the field equations (8)-(10), which are extremely nonlinear, complicated and involve many unknowns, we need some suitable mathematical method. For this purpose, we use a special class of Lie point symmetries, namely the Noether symmetry approach.
In the current study, we consider the following integral of motion, as discussed in [7], i.e.,
\[\mathcal{I}_{1}=e^{\frac{\nu-3\lambda}{2}}\alpha[-\mathcal{G}\{e^{2\lambda} \mathcal{G}r^{3}+32(e^{\lambda}-1)\nu^{\prime}\}-8(e^{\lambda}-1)(r\nu^{ \prime}-10)\mathcal{G}^{\prime}]. \tag{12}\]
Here, we use another interesting approach in the context of symmetries, namely the use of Killing vectors [46]. It has been argued that Killing symmetries form a subalgebra of Noether symmetries. Moreover, Noether equations may be termed generalized Killing equations for some special cases [47; 48]. We now discuss CKVs to connect the wormhole geometry with Noether symmetry for the metric (6). In a given spacetime with manifold \(\mathcal{M}\), the conformal vector field \(\gamma\) is defined by
\[\mathfrak{L}_{\gamma}g_{\mu\nu}=g_{\eta\nu}\gamma_{;\mu}^{\eta}+g_{\mu\eta}\gamma_{;\nu}^{\eta}=\Theta_{f}(r)g_{\mu\nu}, \tag{13}\]
where \(\mathfrak{L}\) represents the Lie derivative. Here \(\gamma^{\eta}\) denotes the conformal vector field and \(\Theta_{f}(r)\) the conformal factor. Among all the symmetries, conformal and Noether symmetries are particularly fruitful, as both yield a more profound insight into the geometry of spacetime. By plugging the spacetime from Eq. (6) into Eq. (13), the following relations can easily be obtained
\[\gamma^{1}\nu^{{}^{\prime}}(r)=\Theta_{f}(r),\quad\gamma^{1}=\frac{r\Theta_{f}(r) }{2},\quad\gamma^{1}\lambda^{{}^{\prime}}(r)+2\gamma^{1}_{,1}=\Theta_{f}(r).\]
The above results further lead to
\[e^{\nu(r)}=\Lambda_{1}^{2}r^{2},\qquad\qquad\qquad e^{\lambda(r)}=\left(\frac {\Lambda_{2}}{\Theta_{f}(r)}\right)^{2}, \tag{14}\]
where \(\Lambda_{1}\) and \(\Lambda_{2}\) represent the constants of integration.
## III Wormholes in \(f(\mathcal{G})\) gravity admitting Noether and conformal symmetries
We shall describe the wormhole solutions in the background of conformal and Noether symmetries in this section. The standard spacetime for wormhole geometry is defined as
\[ds^{2}=e^{2\Omega(r)}dt^{2}-\left(1-\frac{\mathbb{S}_{f}(r)}{r}\right)^{-1}dr^ {2}-r^{2}(d\theta^{2}+\sin^{2}\theta d\Phi^{2}), \tag{15}\]
where \(\Omega(r)\) and \(\mathbb{S}_{f}(r)\) denote the redshift function and shape-function respectively. Certain significant criteria of wormhole physics to be satisfied by the shape function and the red-shift function are summarized here. The value of the red-shift function \(\Omega(r)\) must be finite within the configuration; for the wormhole to be traversable, there must be no horizon, which restricts \(\Omega(r)\) to be finite everywhere. The proper radial distance obtained from the shape function \(\mathbb{S}_{f}(r)\), \(\mathcal{L}(r)=\pm\int_{r_{0}}^{r}(1-\frac{\mathbb{S}_{f}(r)}{r})^{-1/2}dr\), with \(r>r_{0}\), should be finite everywhere in the spacetime geometry. Here, the \(\pm\) incorporates the two different parts of the spacetime geometry, interconnected by the wormhole configuration. The proper distance decreases from the upper part of the wormhole, attains a minimum at the throat, and then increases into the lower part. The \(\mathbb{S}_{f}(r)\) needs to satisfy the inequality given by \((\mathbb{S}_{f}(r)-\mathbb{S}_{f}(r)^{{}^{\prime}}r)/\mathbb{S}_{f}(r)^{2}>0\) and the equality \(\mathbb{S}_{f}(r_{0})=r_{0}\). The \(\mathbb{S}_{f}(r)\) should also satisfy the condition \(\mathbb{S}_{f}^{{}^{\prime}}(r_{0})<1\). By equating the spherically symmetric spacetime (6) with Eq. (15), we get the following relations
\[g_{tt}=e^{\nu(r)}=e^{2\Omega(r)},\qquad\qquad g_{rr}=e^{\lambda(r)}=\left(1- \frac{\mathbb{S}_{f}(r)}{r}\right)^{-1}=\left(\frac{\Lambda_{2}}{\Theta_{f}(r )}\right)^{2}. \tag{16}\]
Using Eqs. (16) and (8)-(10), and setting \(\Lambda_{1}^{2}=\Lambda_{3}\), \(\Lambda_{2}^{2}=\Lambda_{4}\), \((\Theta_{f}(r))^{2}=\Theta_{f_{0}}(r)\), we get the simplified system of field equations
\[\rho^{eff} =\frac{1}{\Lambda_{3}^{4}r^{15}\Theta_{f_{0}}(r)^{4}}\bigg{(}540 \alpha r^{3}\left(\Lambda_{3}r^{2}-1\right){}^{2}\Theta_{f_{0}}^{\prime}(r)^{4} -16\alpha r^{2}\Theta_{f_{0}}(r)\left(\Lambda_{3}r^{2}-1\right)\Theta_{f_{0}}^ {\prime}(r)^{2}\left(65r\left(\Lambda_{3}r^{2}-1\right)\Theta_{f_{0}}^{\prime \prime}(r)\right.\] \[+\left(195-113\Lambda_{3}r^{2}\right)\Theta_{f_{0}}^{\prime}(r) \Big{)}+16\alpha r\Theta_{f_{0}}(r)^{2}\left(15r^{2}\left(\Lambda_{3}r^{2}-1 \right){}^{2}\Theta_{f_{0}}^{\prime\prime}(r)^{2}+\left(\Lambda_{3}r^{2}\left( 187\Lambda_{3}r^{2}-706\right)+555\right)\right.\] \[\times\left.\Theta_{f_{0}}^{\prime}(r)^{2}+2r\left(\Lambda_{3}r^ {2}-1\right)\Theta_{f_{0}}^{\prime}(r)\left(10r\Theta_{f_{0}}^{(3)}(r)\left( \Lambda_{3}r^{2}-1\right)+\left(135-77\Lambda_{3}r^{2}\right)\Theta_{f_{0}}^{ \prime\prime}(r)\right)\right)+64\alpha\Theta_{f_{0}}(r)^{3}\] \[\times\left.\left(\left(\Lambda_{3}r^{2}\left(35\Lambda_{3}r^{2}- 234\right)+231\right)\Theta_{f_{0}}^{\prime}(r)+r\left(\left(\Lambda_{3}r^{2} \left(138-35\Lambda_{3}r^{2}\right)-11\right)\Theta_{f_{0}}^{\prime\prime}(r)+ r\left(\Lambda_{3}r^{2}-1\right)\right.\right.\] \[\times\left.\left.\left(r\Theta_{f_{0}}^{(4)}(r)\left(1-\Lambda_{ 3}r^{2}\right)+2\Theta_{f_{0}}^{(3)}(r)\left(5\Lambda_{3}r^{2}-9\right)\right) \right)\right)+\Lambda_{3}^{4}\rho r^{15}\Theta_{f_{0}}(r)^{4}\bigg{)}, \tag{17}\] \[p_{r}^{eff} =\frac{1}{\Lambda_{3}^{4}r^{14}\Theta_{f_{0}}(r)^{4}}\bigg{(}-12 \alpha r^{2}\left(\Lambda_{3}r^{2}-1\right)\left(5\Lambda_{3}r^{2}-21\right) \Theta_{f_{0}}^{\prime}(r)^{4}+16\alpha r\Theta_{f_{0}}(r)\Theta_{f_{0}}^{ \prime}(r)^{2}\left(r\left(\Lambda_{3}r^{2}-1\right)\left(5\Lambda_{3}r^{2}-21\right)\right.\] \[\times\left.\left(3\Lambda_{3}r^{2}-13\right)\Theta_{f_{0}}^{ \prime}(r)^{2}-2r\left(\Lambda_{3}r^{2}-3\right)\Theta_{f_{0}}^{\prime}(r) \left(r\Theta_{f_{0}}^{(3)}(r)\left(\Lambda_{3}r^{2}-1\right)+\left(8-4 \Lambda_{3}r^{2}\right)\Theta_{f_{0}}^{\prime\prime}(r)\right)\right)\] \[+\Lambda_{3}^{4}p_{r}r^{14}\Theta_{f_{0}}(r)^{4}\bigg{)},\] (18) \[p_{t}^{eff} =\frac{1}{\Lambda_{3}^{4}r^{14}\Theta_{f_{0}}(r)^{5}}\bigg{(}-432 \alpha r^{3}\left(\Lambda_{3}r^{2}-1\right)\Theta_{f_{0}}^{\prime}(r)^{5}+4 \alpha r^{2}\Theta_{f_{0}}(r)\Theta_{f_{0}}^{\prime}(r)^{3}\left(208r\left( \Lambda_{3}r^{2}-1\right)\Theta_{f_{0}}^{\prime\prime}(r)+\left(\Lambda_{3}r^{2 }\right.\right.\] \[\times\left.\left(9\Lambda_{3}r^{2}-382\right)+561\right)\Theta_{ f_{0}}^{\prime}(r)\right)-16\alpha r\Theta_{f_{0}}(r)^{2}\Theta_{f_{0}}^{ \prime}(r)\left(16r^{2}\left(\Lambda_{3}r^{2}-1\right)\Theta_{f_{0}}^{\prime \prime}(r)^{2}+\left(\Lambda_{3}r^{2}\left(149-3\Lambda_{3}r^{2}\right)\right.\] \[-\left.354\right)\Theta_{f_{0}}^{\prime}(r)^{2}+r\Theta_{f_{0}}^{ \prime}(r)\left(13r\Theta_{f_{0}}^{(3)}(r)\left(\Lambda_{3}r^{2}-1\right)+3 \left(\Lambda_{3}r^{2}\left(\Lambda_{3}r^{2}-43\right)+64\right)\Theta_{f_{0} }^{\prime\prime}(r)\right)\right)+16\alpha\Theta_{f_{0}}(r)^{3}\] \[\times\left(\left(\left(\Lambda_{3}r^{2}\left(\Lambda_{3}r^{2}-96 \right)+471\right)\Theta_{f_{0}}^{\prime}(r)^{2}+r^{2}\Theta_{f_{0}}^{\prime \prime}(r)\left(2r\Theta_{f_{0}}^{(3)}(r)\left(\Lambda_{3}r^{2}-1\right)+\left( \Lambda_{3}r^{2}\left(\Lambda_{3}r^{2}-12\right)+19\right)\right.\right.\] \[\times\left.\left.\Theta_{f_{0}}^{\prime\prime}(r)\right)-2r \Theta_{f_{0}}^{\prime}(r)\left(\left(\Lambda_{3}r^{2}\left(\Lambda_{3}r^{2}-54 \right)+135\right)\Theta_{f_{0}}^{\prime\prime}(r)+r\left(r\Theta_{f_{0}}^{(4 
)}(r)\left(1-\Lambda_{3}r^{2}\right)\right.\right.\right.\] \[+\left.\left.\left.6\Theta_{f_{0}}^{(3)}(r)\left(2\Lambda_{3}r^{2} -3\right)\right)\right)\right)+\Lambda_{3}^{4}p_{t}r^{14}\Theta_{f_{0}}(r)^{5} \bigg{)}. \tag{19}\]
It is interesting to notice that Eqs. (17-19) now involve only one unknown \(\Theta_{f_{0}}(r)\). By using the Eq. (16) in Eq. (12), we have the following differential equation
\[\frac{4\sqrt{\frac{\Lambda_{3}^{2}}{\Theta_{f_{0}}(r)}}}{r^{4} \Theta_{f_{0}}^{4}(r)\left(\Lambda_{3}r^{2}\right){}^{7/2}}\bigg{(}-4r^{2} \Theta_{f_{0}}(r)\left(\Lambda_{3}r^{2}-1\right)\left(\Theta_{f_{0}}^{\prime}(r) ^{2}\right)\left(5r\left(\Lambda_{3}r^{2}-1\right)\Theta_{f_{0}}^{\prime\prime }(r)+\left(63-59\Lambda_{3}r^{2}\right)\Theta_{f_{0}}^{\prime}(r)\right)\] \[+\Theta(r)^{3}\left(\Lambda_{3}r^{2}-1\right)\left(\left(5\Lambda_{3 }r^{2}-21\right)\Theta_{f_{0}}^{\prime}(r)+r\left(r\Theta_{f_{0}}^{(3)}(r) \left(\Lambda_{3}r^{2}-1\right)+\left(9-5\Lambda_{3}r^{2}\right)\Theta_{f_{0}}^{ \prime\prime}(r)\right)\right)+4r\Theta_{f_{0}}(r)^{2}\] \[\times\left(-r^{2}\left(\Lambda_{3}r^{2}-1\right){}^{2}\Theta^{ \prime\prime}(r)^{2}+\left(\Lambda_{3}r^{2}\left(141\Lambda_{3}r^{2}-394\right)+2 49\right)\left(\Theta_{f_{0}}^{\prime}(r)\right)^{2}+2r\left(\Lambda_{3}r^{2}-1 \right)\Theta_{f_{0}}^{\prime}(r)\left(r\Theta_{f_{0}}^{(3)}(r)\right.\right.\] \[\times\left.\left.\left(\Lambda_{3}r^{2}-1\right)\left(42-40 \Lambda_{3}r^{2}\right)\Theta_{f_{0}}^{\prime\prime}(r)\right)\right)+15r^{3} \left(\Lambda_{3}r^{2}-1\right){}^{2}(\Theta_{f_{0}}^{\prime}(r))^{4}\right)- \mathcal{I}_{1}=0. \tag{20}\]
In the context of the current study, this differential equation is the most relevant, since all physical effects depend on its solution. Since it is extremely non-linear, we can solve it numerically by computing \(\Theta_{f0}(r)\) for some appropriate initial condition. Some important results using numerical solution of Eq. (20) are itemized below.
* The positive and increasing behavior of the shape function \(\mathbb{S}_{f}(r)\), shown in the left part of Fig. (**1**), is an essential property of wormhole physics.
* From Fig. (**2**), the ratio of the shape function and the radial coordinate, i.e., \(\frac{\mathbb{S}_{f}(r)}{r}\), is seen in the right part. In this study, the expression \(\frac{\mathbb{S}_{f}(r)}{r}\) approaches a small value, i.e., 0.855, nearing zero as \(r\) approaches infinity, perhaps due to the use of Noether and conformal symmetries. (A minimal verification sketch of these shape-function conditions is given after this list.)
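Although the numerical profile \(\Theta_{f_{0}}(r)\) obtained from Eq. (20) is not reproduced here, Eq. (16) implies \(\mathbb{S}_{f}(r)=r\left[1-\Theta_{f_{0}}(r)/\Lambda_{4}\right]\), so the shape-function criteria can be screened directly once a profile is available. The sketch below is purely illustrative: the power-law profile for \(\Theta_{f_{0}}\) is a stand-in assumption (only the throat value \(r_{0}=0.0420\) is taken from the present results), not the solution of Eq. (20).

```python
import numpy as np

Lam4 = 1.0                  # Lambda_2^2 (illustrative value)
r0, a = 0.042, 0.6          # throat radius (from the text) and a stand-in shape exponent

def theta_f0(r):
    """Hypothetical profile with Theta_f0(r0) = 0, i.e. S_f(r0) = r0 (open throat).
    NOT the numerical solution of Eq. (20); used only to illustrate the checks."""
    return Lam4 * (1.0 - (r0 / r) ** a)

r = np.linspace(r0, 10.0, 4000)
Sf = r * (1.0 - theta_f0(r) / Lam4)          # shape function from Eq. (16)
dSf = np.gradient(Sf, r)
flare = (Sf - dSf * r) / Sf**2               # flaring-out combination

print("S_f(r0) - r0                :", Sf[0] - r0)          # throat condition
print("S_f'(r0) < 1 ?              :", dSf[0] < 1)
print("flaring-out minimum         :", flare.min(), "(should be > 0)")
print("S_f(r)/r at the largest r   :", (Sf / r)[-1])
```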
### Energy Bounds
The energy bounds are very important for the physical acceptability of wormhole geometry. It would be an interesting task to derive modified energy bounds in the frame of \(f(\mathcal{G})\) gravity. For the present study, we may use the \(\mathcal{GR}\) energy conditions with the following justification.
_Theorem:_ For a solution of Eqs. (8)-(10), expressed by the functions \(K_{1}=\big{\{}\nu(r),\lambda(r),f(\mathcal{G})\big{\}}\), if we have a solution in \(\mathcal{GR}\) defined by the functions \(K_{2}=\big{\{}\nu(r),\lambda(r)\big{\}}\), then the energy conditions would be the same for \(K_{1}\) and \(K_{2}\), since \(T_{\xi\eta}^{eff}\) in (4) plays the role of the stress-energy tensor in \(\mathcal{GR}\)[43; 44].
The energy conditions are described as the null energy condition (NEC), weak energy condition (WEC), dominant energy condition (DEC), and strong energy condition (SEC), which are given as follows [45]:

\[NEC:\ \forall i,\quad\rho^{eff}+p_{i}^{eff}\geq 0,\]
\[WEC:\quad\rho^{eff}\geq 0\ \ \text{and}\ \ \forall i,\ \rho^{eff}+p_{i}^{eff}\geq 0,\]
\[DEC:\quad\rho^{eff}\geq 0\ \ \text{and}\ \ \forall i,\ \rho^{eff}-|p_{i}^{eff}|\geq 0,\]
\[SEC:\quad\forall i,\ \rho^{eff}+p_{i}^{eff}\geq 0\ \ \text{and}\ \ \rho^{eff}+\sum_{i}p_{i}^{eff}\geq 0,\]

where \(i=r,t\). The graphical behavior of the \(NEC\) and \(WEC\) is presented in Figs. (**3**) and (**4**); the magnified views (right panels) show that the values go into the negative range near the throat, indicating the existence of exotic matter. The presence of exotic matter shows the goodness and superiority of our study based on the symmetries. Due to the presence of exotic matter, the wormhole throat remains open, which is necessary for a physically acceptable wormhole geometry. However, a little drawback is witnessed: the physical parameters are seen to be justified only in a narrow space, due to the use of the symmetry approach. It is evident from the graphs that a fluctuating behavior is obtained on a large scale; however, the magnified views show that physically viable wormholes are possible.
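The bookkeeping of these inequalities is easily automated. In the snippet below (illustrative only: the density and pressure profiles are placeholders, not the ones following from Eqs. (17)-(19)), each condition is evaluated pointwise on a radial grid:

```python
import numpy as np

r = np.linspace(0.05, 5.0, 500)

# Placeholder effective profiles; in the actual analysis these come from Eqs. (17)-(19)
rho = 0.30 / r**2
p_r = -0.32 / r**2
p_t = 0.10 / r**2

def energy_conditions(rho, p_r, p_t):
    nec_r, nec_t = rho + p_r, rho + p_t
    return {
        "NEC": (nec_r >= 0) & (nec_t >= 0),
        "WEC": (rho >= 0) & (nec_r >= 0) & (nec_t >= 0),
        "DEC": (rho >= 0) & (rho - np.abs(p_r) >= 0) & (rho - np.abs(p_t) >= 0),
        "SEC": (nec_r >= 0) & (nec_t >= 0) & (rho + p_r + 2 * p_t >= 0),
    }

for name, ok in energy_conditions(rho, p_r, p_t).items():
    frac = ok.mean()
    print(f"{name}: satisfied on {100*frac:.1f}% of the grid; violated on {100*(1-frac):.1f}%")
```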
### Equilibrium Condition
Here, we provide the stability analysis of the wormhole solutions discussed in the previous section by incorporating the equilibrium condition, within the Noether symmetry framework with conformal symmetry, for the \(f(\mathcal{G})\) gravity model under investigation. For this purpose, we employ the Tolman-Oppenheimer-Volkoff equation, which is given as:
\[-\frac{dp_{r}^{eff}}{dr}-\frac{\nu^{{}^{\prime}}(r)}{2}(\rho^{eff}+p_{r}^{eff} )+\frac{2}{r}(p_{t}^{eff}-p_{r}^{eff})=0, \tag{21}\]
where \(\nu(r)=2\Omega(r)\). The forces, namely the hydrostatic force (\(\mathcal{F}_{\mathrm{h}}\)), the gravitational force (\(\mathcal{F}_{\mathrm{g}}\)) and the anisotropic force (\(\mathcal{F}_{\mathrm{a}}\)), are represented by the following expressions
\[\mathcal{F}_{\mathrm{h}}=-\frac{dp_{r}^{eff}}{dr},\qquad\mathcal{F}_{\mathrm{ a}}=\frac{2}{r}(p_{t}^{eff}-p_{r}^{eff}),\qquad\mathcal{F}_{\mathrm{g}}=- \frac{\nu^{{}^{\prime}}}{2}(\rho^{eff}+p_{r}^{eff}), \tag{22}\]
and thus Eq. (21) takes the form given by
\[\mathcal{F}_{\mathrm{a}}+\mathcal{F}_{\mathrm{g}}+\mathcal{F}_{\mathrm{h}}=0.\]
Fig. (**7**) depicts the graphical behavior of these forces, which are shown to balance each other in the left part. In Fig.
Figure 4: Graphical behavior of the \(WEC\).
Figure 3: Graphical behavior of the \(NEC\).
(**7**), the red, black, and green lines represent the hydrostatic, gravitational, and anisotropic forces. The balancing behavior of the gravitational and anisotropic forces against the hydrostatic force shows the stability of the wormhole via the Tolman-Oppenheimer-Volkoff equation in the background of Noether symmetry with conformal symmetry. The right part of Fig. (**7**) shows the real impact of the different forces in a very small interval of the radial coordinate. The stability via the Tolman-Oppenheimer-Volkoff equation shows that our obtained wormhole solutions are physically realistic and acceptable in \(f(\mathcal{G})\) gravity.
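Since the conformal symmetry fixes \(e^{\nu}=\Lambda_{1}^{2}r^{2}\) (Eq. (14)), the red-shift term entering Eq. (22) is simply \(\nu^{\prime}(r)=2/r\). The sketch below (placeholder profiles again; the actual \(\rho^{eff}\), \(p_{r}^{eff}\), \(p_{t}^{eff}\) follow from Eqs. (17)-(19)) evaluates the three forces and their residual sum, which vanishes only for profiles that actually solve the field equations:

```python
import numpy as np

r = np.linspace(0.05, 5.0, 2000)
nu_prime = 2.0 / r                        # from e^{nu} = Lambda_1^2 r^2, Eq. (14)

# Placeholder effective profiles (illustrative only)
rho = 1.0 / (1.0 + r**2)
p_r = -0.8 / (1.0 + r**2)
p_t = 0.3 / (1.0 + r**2)

F_h = -np.gradient(p_r, r)                # hydrostatic force
F_g = -0.5 * nu_prime * (rho + p_r)       # gravitational force
F_a = 2.0 * (p_t - p_r) / r               # anisotropic force

residual = F_h + F_g + F_a
print("max |F_h + F_g + F_a| :", np.abs(residual).max(),
      "(vanishes only for profiles that actually solve Eqs. (17)-(19))")
```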
Figure 5: Evolution of the radial EoS parameter.
Figure 7: Balancing behavior of the hydrostatic, gravitational, and anisotropic forces.
## IV Conclusive remarks
A basic technique for solving the dynamical equations is the Noether symmetry technique. The Noether symmetries, in particular, are not only a mechanism for dealing with the dynamics, but their presence also provides suitable conditions so that one can specify the universe models physically and analytically according to our measured observations. The procedure of the Lagrange multiplier enables one to resolve some problems related to the \(f(\mathcal{G})\) gravity model and to reduce the Lagrangian to a canonical form. It is quite useful to reduce the dynamics of the system in order to identify the exact solutions. In this study, we examine the existence of static wormholes through Noether and conformal symmetries in the frame of an anisotropic matter distribution. For this purpose, we solve an overdetermined system of PDEs and find the Noether symmetry generators along with the corresponding conserved quantity by incorporating the \(f(\mathcal{G})\) gravity model. Moreover, a useful conserved quantity is gained from the Noether symmetry of the spherically symmetric spacetime. The presence of the conserved quantity plays a significant part in defining the possible existence of wormhole solutions under Noether symmetry by utilizing conformal symmetry. Moreover, we have also investigated the stability of the wormhole solutions via the modified equilibrium condition by considering the specific red-shift function. Some essential findings and observations regarding the existence of wormholes in modified \(f(\mathcal{G})\) gravity are summarized below.
* The shape-function \(\mathbb{S}_{f}(r)\) remains positive and continues to increase as shown in the left plot of Fig. (**1**). The positive behavior of shape function shows that our obtained solutions are physically acceptable.
* The condition \(\mathbb{S}_{f}(r_{0})=r_{0}\) is justified at \(r_{0}=0.0420\) as depicted in Fig. (**1**), which shows that the wormhole throat should be opened.
* The flaring out condition, i.e., \(\mathbb{S}_{f}(r)^{{}^{\prime}}(r_{0})<1\), is satisfied, as it can be noted from the right panel of Fig. (**2**).
* The flatness condition also can be seen from the right panel of Fig. (**2**).
* Referring to the energy bounds \(NEC\) and \(WEC\), their behavior can be seen from Fig. (**3**) and Fig. (**4**). The violation of the \(NEC\) can be perceived from Fig. (**3**), which describes the presence of exotic matter. The presence of exotic matter shows the goodness and superiority of our study based on the symmetries, i.e., Noether symmetry and conformal symmetry. Due to the presence of exotic matter, the wormhole throat remains open, which is necessary for a physically acceptable wormhole geometry.
* Fig. (**7**) represents the graphical analysis of the three forces \(\mathcal{F}_{\rm a},\mathcal{F}_{\rm g}\), and \(\mathcal{F}_{\rm h}\), which are shown to balance each other in the left panel. In Fig. (**7**), the red, black, and green lines represent the hydrostatic, gravitational, and anisotropic forces. The balancing behavior of the gravitational and anisotropic forces against the hydrostatic force shows the stability of the wormhole via the Tolman-Oppenheimer-Volkoff equation in the background of Noether symmetry with conformal symmetry.
All of the above discussion suggests that wormholes exist in \(f(\mathcal{G})\) gravity under the symmetry approach. However, a small drawback is that the physical parameters are justified only in a narrow region, due to the use of the symmetry approach. It is evident from the graphs that fluctuating behavior is obtained on large scales; however, the magnified views show that physically viable wormholes are possible. Conclusively, it is worth mentioning that Noether and conformal symmetries are quite helpful in obtaining physically realistic and acceptable wormhole solutions in \(f(\mathcal{G})\) gravity.
|
2305.11530 | When Euler met Brun | We survey and study some aspects of the distribution of primes in very short
intervals. | John Friedlander | 2023-05-19T08:54:45Z | http://arxiv.org/abs/2305.11530v1 | # When Euler Met Brun
John B. Friedlander
For the 75th birthday of Henryk Iwaniec
**Abstract:** In this note we survey and study some aspects of the distribution of primes in very short intervals.\({}^{1,2}\)
Footnote 1: MSC 2020 classification: 11N05, 11N36
Footnote 2: key words: primes, sieves, reciprocals, short intervals
## 1. **Introduction**
L. Euler famously proved that the sum of the reciprocals of the prime numbers is divergent, strengthening a millennia old theorem of Euclid. It is reasonable to view Euler's result, at least when taken together with Dirichlet's profound generalization to primes in arithmetic progressions, as having launched the modern subject of analytic number theory.
V. Brun famously proved that, when restricted to a sum over the reciprocals of twin primes, the series is in contrast convergent, along the way launching the modern subject of sieve theory.
If we let \(p^{\prime}\) denote the least prime exceeding the prime \(p\), its successor prime, then Brun's theorem may be stated as \(\sum_{p^{\prime}=p+2}1/p<\infty\). It is well-known and clear from the proof that, with virtually no modification, Brun's theorem holds as well for the sum \(\sum_{p^{\prime}=p+k}1/p\) for any even integer \(k\) and, since it is trivial for odd \(k\), we have
\[\sum_{p^{\prime}-p\leqslant K}\frac{1}{p}<\infty,\]
for any fixed \(K\).
So a question that naturally suggests itself is to wonder what happens when we replace \(K\) by a growing function, say \(y(p)=\lambda(p)\log p\), and try to study how quickly growing a function we can take and still maintain the conclusion
\[\lim_{x\to\infty}\sum_{\begin{subarray}{c}p\leqslant x\\ p^{\prime}-p\leqslant y(p)\end{subarray}}\frac{1}{p}<\infty. \tag{1.1}\]
We know, by a result of Goldston, Pintz and Yildirim, Theorem 1 of [1], that for any fixed \(\lambda>0\), a positive proportion \(c(\lambda)\) of the primes \(p\leqslant x\) have successor satisfying the bound \(p^{\prime}-p<\lambda\log p\). Thus we are not able to take \(y(p)=\lambda\log p\) for any fixed \(\lambda>0\), no matter how small. This means that we are dealing with prime gaps that are of smaller order of magnitude than the average.
On the other hand, if we could take a choice of \(y\) which yielded, for some fixed real \(a>1\), an upper bound
\[\sum_{\begin{subarray}{c}p\leqslant x\\ p^{\prime}-p\leqslant y(p)\end{subarray}}1\ll\frac{x}{(\log x)^{a}}\]
then partial summation would show the convergence of its sum of reciprocals. Recall that, in the case of twin primes, we have such a bound with the comfortable margin \(a=2\). Moreover, for the full set of primes (\(y=p\)) the divergence, namely \(\sum_{p\leqslant x}1/p\sim\log\log x\), just barely holds. This suggests that the set of twin primes is substantially further from the "meeting place" and that the relevant gaps are short but not so terribly short.
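To make the partial summation step explicit (a routine computation, recorded here only for convenience), write \(S(t)\) for the number of primes \(p\leqslant t\) with \(p^{\prime}-p\leqslant y(p)\). Then

\[\sum_{\begin{subarray}{c}p\leqslant x\\ p^{\prime}-p\leqslant y(p)\end{subarray}}\frac{1}{p}=\int_{2^{-}}^{x}\frac{dS(t)}{t}=\frac{S(x)}{x}+\int_{2}^{x}\frac{S(t)}{t^{2}}\,dt\ll 1+\int_{2}^{x}\frac{dt}{t(\log t)^{a}},\]

and the last integral is bounded for any fixed \(a>1\).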
After a tiny bit of history and a few remarks in Section 2, we introduce in Section 3 what we believe to be a reasonable conjecture which allows us to deduce fairly close upper and lower bounds for the function \(y(p)\) in question. Then, in Section 4, we consider what we are able to learn about the problem unconditionally using sieve bounds. The proofs are elementary.
**Acknowledgements:** Thanks go to the referee for pointing out a mistake in an earlier version. Research of the author supported in part by an NSERC Canada Research Grant, continuously since 1981. The problem considered in this paper was inspired by a question from Peter Rosenthal, colleague since 1980. Title suggested by a work of Robert Reiner. This paper was written in honour of Henryk Iwaniec, frequent collaborator and constant friend since 1976.
## 2. **Small gaps and very small gaps**
By the Prime Number Theorem we know that the average gap between primes of size around \(x\) is \(\log x\). As it happens however, our knowledge of primes is still sufficiently limited that, for at least one problem, that of maximal gaps, intervals as large as \(x^{.49}\) are still too small to be within our reach. Nevertheless, for purposes of this paper,
we think of prime gaps near \(x\) having length \(\lambda\log x,\lambda<1\) a constant, as being "smaller than average", those unbounded but of size \(y(x)=o(\log x)\) as being "very small" and of bounded gaps as being, well, bounded.
At the two extremes of these three gap ranges there has, after many years, been phenomenal recent progress. Thus, for the smaller than average gaps \(\lambda\log x\) one has the breakthroughs of Goldston, Pintz and Yildirim [1], wherein they show that \(\liminf(p^{\prime}-p)/\log p=0\) and subsequently as well the positive proportion result already mentioned.
The GPY bound was greatly strengthened to encompass the bounded gaps by the work of Zhang [10], then Maynard [12], who showed that \(\liminf(p^{\prime}-p)\) is bounded and, in the latter case, quite a bit more as well.
The functions \(y(x)\) which are near the meeting place for our problem lie in what we have called very small gaps, the largely uncharted region between these two extreme interval lengths.
There seems to be not a great deal known about this middle range. One exception comes from Maynard's sieve which, although originally designed to treat distribution in bounded gaps, still has striking consequences which venture into the middle range. Thus, Theorem 3.2 of Maynard [12] states:
**Theorem (Maynard):** For any \(x,y\geqslant 1\) there are \(\gg x\exp(-\sqrt{\log x})\) integers \(x_{0}\in[x,2x]\) such that
\[\pi(x_{0}+y)-\pi(x_{0})\gg\log y.\]
Another exception, closer to our problem, is the sieve-driven upper bound, Theorem 2 of [1], the statement of which we postpone until Section 4.
Finally, we mention the papers [EN] of Erdos and Nathanson and of Zhou [10] which study a problem of flavour similar to ours but dealing with a sum over (suitably weighted) reciprocals of prime gaps.
## 3. **Conditional results**
Despite the breakthroughs of GPY and Maynard, much of what is thought to be true about the distribution of small prime gaps is either conjecture or else deductions from conjectures. Thus, Granville and Lumley [1] study the maximum number of primes in a gap near \(x\) having length \(y\) and conjecture that it is asymptotic to \(y/\log y\) "for \(y\leqslant c\log x\) as long as \(y\to\infty\) as \(x\to\infty\)".
Nevertheless, most of what appears in the literature on gaps of below average length seems to deal only with gaps of size \(\lambda\log x\) with constant
\(\lambda\) rather than \(\lambda=o(1)\). A pioneering work which is concerned with this range of \(\lambda\) constant is that of Gallagher [G].
Gallagher's theorem on primes in short intervals rests on the assumption of a uniform version of a well-known conjecture due to Hardy and Littlewood [HL].
Let \(\mathcal{H}=\{h_{1},\ldots,h_{r}\}\) denote a set of distinct non-negative integers and let \(\nu_{\mathcal{H}}(p)\) denote the number of residue classes modulo \(p\) occupied by the \(h_{j}\). Assume that the set \(\mathcal{H}\) is "admissible" in the sense that \(\nu_{\mathcal{H}}(p)<p\) for all primes \(p\). We recall the definition of the relevant "singular series"
\[\mathfrak{S}(\mathcal{H})=\prod_{p}\left(1-\frac{\nu_{\mathcal{H}}(p)}{p} \right)\left(1-\frac{1}{p}\right)^{-r}. \tag{3.1}\]
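For orientation (a standard example, not needed for the argument), in the twin prime case \(\mathcal{H}=\{0,2\}\) one has \(\nu_{\mathcal{H}}(2)=1\) and \(\nu_{\mathcal{H}}(p)=2\) for \(p>2\), so that

\[\mathfrak{S}(\{0,2\})=2\prod_{p>2}\left(1-\frac{1}{(p-1)^{2}}\right)\approx 1.32,\]

twice the twin prime constant.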
**Hypothesis G:** Let \(\pi(x;\mathcal{H})\) denote the number of integers \(n\leqslant x\) such that \(n+h_{j}\) is prime for all \(h_{j}\in\mathcal{H}\). Then
\[\pi(x;\mathcal{H})\sim\mathfrak{S}(\mathcal{H})\frac{x}{(\log x)^{r}}, \tag{3.2}\]
as \(x\to\infty\) uniformly for all \(h_{1},\ldots,h_{r}\leqslant\lambda\log x\), where \(\lambda>0\) is fixed.
A similar conjecture (on the number of prime couples and prime triples with fixed spacing) was used by Goldston and Ledoan [GoLe2] in relation to their work on the prime jumping champions.
**Theorem (Gallagher):** Denote by \(P_{k}(h,x)\) the number of integers \(n\leqslant x\) for which the interval \((n,n+h]\) contains exactly \(k\) primes. Then, under the above Hypothesis G, with \(h=\lambda\log x\), \(\lambda>0\) fixed,
\[P_{k}(h,x)\sim xe^{-\lambda}\frac{\lambda^{k}}{k!} \tag{3.3}\]
as \(x\to\infty\).
Soundararajan pointed out, in very slightly different form [S, exercise 1.3] (see also Goldston and Ledoan [GoLe1]), the following consequence of Gallagher's theorem.
**Corollary:** Again under the Hypothesis G of the theorem, we have for fixed \(\lambda>0\),
\[\frac{1}{\pi(x)}\#\left\{p\leqslant x;\frac{p^{\prime}-p}{\log p}\leqslant \lambda\right\}\sim\int_{0}^{\lambda}e^{-t}dt=1-e^{-\lambda}.\]
In order to close in on a consequence of this for our problem, we shall assume that this statement holds in a range slightly different from \(\lambda\) fixed, but rather with \(\lambda\to 0\) slowly as \(x\to\infty\). This seems reasonable,
especially if the approach is sufficiently slow, for instance the following is more than ample.
**Hypothesis S:** Assume that, as \(t\to\infty\) with \(\lambda(t)\to 0\), subject to
\[\lambda(t)\gg\frac{1}{(\log\log t)^{2}} \tag{3.4}\]
we have
\[\frac{1}{\pi(x)}\#\left\{p\leqslant x;\frac{p^{\prime}-p}{\log p}\leqslant \lambda(p)\right\}\sim\int_{0}^{\lambda(x)}e^{-u}du=1-e^{-\lambda(x)}. \tag{3.5}\]
Note that, for \(\lambda=o(1)\) we have \(1-e^{-\lambda}\sim\lambda\).
We prefer to base our conditional statements on Hypothesis S which, since it concerns a _sum_ over gaps of specific lengths, might be believed to hold even if the previous two conjectures, which deal with individual spacings, did not. We are not suggesting that Hypothesis S holds without further conditions on the function \(\lambda\); we only require it for the specific functions mentioned in Proposition 3.1 below.
As it happens, we shall need iterated logarithms to enter our statements so we define, as usual, for \(k\geqslant 2\), \(\log_{k}\) to be the \(k\)-th iterated logarithm:
\[\log_{k}=\log\log_{k-1},\quad\log_{1}=\log \tag{3.6}\]
and what we might call the logorial function
\[Log_{k}=\prod_{2\leqslant j\leqslant k}\log_{j}. \tag{3.7}\]
For each of these functions, we can ignore the finite set of small integers where it is not defined.
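For instance, the first two cases of (3.7) are

\[Log_{2}(x)=\log\log x,\qquad Log_{3}(x)=(\log\log x)(\log\log\log x),\]

so the functions \(1/Log_{k}\) appearing below tend to zero extremely slowly.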
Applying partial summation to the sum in (1.1), then Hypothesis S, then inserting the Prime Number Theorem into the resulting integral, and finally making the change of variable \(u=\log_{k+1}t\), respectively \(u=\log_{k}t\), one deduces the following results, which narrow in on the answer to our question.
**Proposition 3.1**.: _Let \(k\geqslant 2\) be a fixed integer and \(\varepsilon>0\) a fixed real. Denote \(y(p)=\lambda(p)\log p\). Then we have as \(x\to\infty\)_
\[\sum_{\begin{subarray}{c}p\leqslant x\\ p^{\prime}-p\leqslant y(p)\end{subarray}}\frac{1}{p}\sim\log_{k+1}x\to\infty \quad for\ \lambda(p)=1/Log_{k}(p), \tag{3.8}\]
_but_
\[\sum_{\begin{subarray}{c}p\leqslant x\\ p^{\prime}-p\leqslant y(p)\end{subarray}}\frac{1}{p}\quad\text{is bounded}\ for\ \lambda(p)=1/Log_{k}(p)(\log_{k}p)^{\varepsilon}, \tag{3.9}\]
_under the assumption that (3.5) holds for these functions \(\lambda\)._
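A brief sketch of the computation indicated above: by Hypothesis S and the Prime Number Theorem, the counting function of the primes in question is \(\sim\lambda(t)\,t/\log t\), so partial summation gives, apart from bounded terms,

\[\sum_{\begin{subarray}{c}p\leqslant x\\ p^{\prime}-p\leqslant y(p)\end{subarray}}\frac{1}{p}\sim\int_{c}^{x}\frac{\lambda(t)}{t\log t}\,dt=\int_{c}^{x}\frac{dt}{t\log t\,\log_{2}t\cdots\log_{k}t}\sim\log_{k+1}x,\]

since the integrand is the derivative of \(\log_{k+1}t\); here \(c\) is any fixed point beyond which the iterated logarithms are defined. With the extra factor \((\log_{k}t)^{\varepsilon}\) of (3.9) the integrand becomes \(d(\log_{k}t)/(\log_{k}t)^{1+\varepsilon}\), and the integral converges.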
In fact, as we shall see in the next section, the latter statement (3.9) holds unconditionally, although the former seems hopeless with current tools. We may remark that Theorem 1 of [10] gives a lower bound for the left hand side of (3.5), weaker but unconditional and holding for intervals of smaller than average size, that is \(\lambda\) constant. If this could be extended to also hold for intervals having length in a suitable part of the range \(o(\log x)\), perhaps using ideas from [12], this might give a correspondingly weaker but unconditional result in the direction of (3.8).
Note that (3.8), if true, is not the end of the story since we could then construct more artificial choices for \(y(p)\) which are closer to the breakpoint. To see this, begin with \(\lambda=1/Log_{k}\) for some particular \(k\) as in (3.8), carry on until the sum of reciprocals of primes up to \(x\) exceeds \(1\) (still more complicated choices go further) and also \(x\) is large enough that \(\log_{k+1}x\) is defined. Then, for ensuing \(p\), replace \(1/Log_{k}\) by \(1/Log_{k+1}\) in the definition of \(y(p)\), continue summing and iterate.
## 4. **Sieve survivors**
By a sieve "survivor" we mean what is more awkwardly called "an integer with no small prime factor" but in recent years has, at least in sieve theory, become confused with an "almost-prime" even though the latter is invariably defined as an integer with few prime factors. There are many sieve results which produce what are called almost-primes when they actually produce integers from the special subset of survivors. When we use the term here we are thinking of an integer \(\leqslant x\) having no prime factor \(<z\) with \(z=x^{\delta}\) for some \(\delta>0\), possibly small but fixed. The survivors, at least in one sense, more strongly resemble the primes, occurring as they do with the same order of magnitude. Recall that the almost-primes occur with a greater order of magnitude, how much so depending on the number of prime factors permitted.
In proving lower and upper bounds for the sum \(\sum_{p}1/p\) by partial summation, it is obvious that we do not need an asymptotic formula such as that conjectured in (3.5), but only sufficiently good lower, respectively upper, bounds for the corresponding prime counting function. This suggests that sieve methods can prove useful. Already, in
the work of Gallagher [G], his Theorem 2 gives an upper bound in connection with his question.
More recently, again in the case of the upper bound, one has a result more closely related to our topic, namely Theorem 2 of Goldston, Pintz and Yildirim [GPY4] which states:
For any \(h>2\) as \(x\to\infty\) we have
\[\sum_{\begin{subarray}{c}p\leqslant x\\ p^{\prime}-p\leqslant h\end{subarray}}1\ll\min\{h/\log x,1\}\pi(x), \tag{4.1}\]
which for us is of interest when \(h=o(\log x)\), although it is also meaningful if \(h\leqslant c\log x\) provided that the constant \(c\) is sufficiently small. This shows that, if in (3.5) we replace the conjectured asymptotic by an upper bound, then that result is unconditionally true in a wide range.
We shall take \(z=x^{\delta}\) for some positive \(\delta\). Let \(m\) run through \(\mathcal{M}\), the set of integers \(m\leqslant x\) which are the survivors of sieving by the primes less than \(z\). For each \(m\) let \(m^{\prime}\) be its successor, the first larger integer in the set.
A proof, perhaps slightly streamlined, of (4.1) can proceed as follows. We start with a sieve upper bound (e.g. Theorem 7.16 of [Opera]):
\[\sum_{\begin{subarray}{c}m\leqslant x\\ m,m+d\in\mathcal{M}\end{subarray}}1\ll\mathfrak{S}_{d}\frac{x}{(\log x)^{2}}, \tag{4.2}\]
which holds for every integer \(d,\,1\leqslant d\leqslant h\) with the same implied constant depending only on \(\delta\). Here, \(\mathfrak{S}_{d}\) is the singular series (3.1) for the set \(\mathcal{H}=\{0,d\}\). Summing over \(1\leqslant d\leqslant h\) and using the fact that
\[\sum_{d\leqslant h}\mathfrak{S}_{d}\sim h \tag{4.3}\]
as \(h\to\infty\) (much more is known; see [FG], Proposition 1), we obtain a bound
\[\sum_{\begin{subarray}{c}m\in\mathcal{M}\\ m^{\prime}-m\leqslant h\end{subarray}}1\leqslant\sum_{\begin{subarray}{c}m_{1},m_{2}\in\mathcal{M}\\ 1\leqslant m_{1}-m_{2}\leqslant h\end{subarray}}1\ll h\frac{x}{(\log x)^{2}} \sim\frac{h}{\log x}\pi(x). \tag{4.4}\]
This bound applies, a fortiori, to the corresponding sums over primes since the number of primes less than \(z\) offers a negligible contribution. This proves (4.1).
Note that the upper bound in (4.4) is actually a bound for a larger number of \(m\in\mathcal{M}\), specifically those with two _or more_ unsifted integers in the interval \([m,m+h]\). As it happens, the number of triples in such
a short interval is, as we shall see, of smaller order of magnitude and we lose nothing for this result by ignoring them.
On the other hand, returning to the sum in (4.2), we can instead begin with a sieve lower bound. If we have chosen for instance \(\delta=1/10\), then \(z\) is sufficiently small to enable a positive lower bound for this two-dimensional sieve problem (see e.g. Corollary 6.13 of [Opera]):
\[\sum_{\begin{subarray}{c}m\leqslant x\\ m,m+d\in\mathcal{M}\end{subarray}}1\gg\mathfrak{S}_{d}\frac{x}{(\log x)^{2}}. \tag{4.5}\]
After summing over \(d\) we have, in contrast to (4.4),
\[\sum_{\begin{subarray}{c}m_{1},m_{2}\in\mathcal{M}\\ 1\leqslant m_{1}-m_{2}\leqslant h\end{subarray}}1\gg h\frac{x}{(\log x)^{2}} \sim\frac{h}{\log x}\pi(x). \tag{4.6}\]
Now however, in order to get a lower bound for the number of consecutive pairs corresponding to that in (4.4) we need to subtract out (an upper bound for) the contribution of triples in the short intervals. For this, we need an analogue of (4.3). We take a special case of a more general result of Odlyzko, Rubinstein and Wolf [ORW] which, in turn, extended a basic result of Gallagher [G]; see also Lemma 2 of [GoLe2].
For \(\mathcal{H}=\{0,d_{1},d_{2}\}\) we have
\[\sum_{d_{1},d_{2}\leqslant h}\mathfrak{S}(\mathcal{H})\sim h^{2}, \tag{4.7}\]
as \(h\to\infty\). This yields an upper bound (again, Theorem 7.16 of [Opera])
\[\sum_{1\leqslant d_{1}<d_{2}\leqslant h}\sum_{\begin{subarray}{c}m\leqslant x \\ m,m+d_{1},m+d_{2}\in\mathcal{M}\end{subarray}}1\ll h^{2}\frac{x}{(\log x)^{3}} \tag{4.8}\]
and as we intend to take \(h=o(\log x)\), this is small compared to the lower bound in (4.6). Hence we have
\[\sum_{\begin{subarray}{c}m\in\mathcal{M}\\ m^{\prime}-m\leqslant h\end{subarray}}1\gg h\frac{x}{(\log x)^{2}}\sim\frac{h} {\log x}\pi(x). \tag{4.9}\]
We want to use upper and lower bounds for a sum much like that in (4.9) but slightly modified. In the first place, we want to change the sifting range so that, rather than having \(z=x^{\delta}\) in the definition of \(\mathcal{M}\), we now have \(z=m^{\delta}\). We also want to be able to allow \(h\) to depend on \(m\), specifically \(h=y(m)=\lambda(m)\log m\). To treat this new sum we split the interval \([0,x]\) into dyadic segments \(I=(M,2M]\) with perhaps a shorter interval left over. We request that \(y(m)\) be a positive slowly
increasing function such as either of the two choices in Proposition 3.1. In particular, we want to make use of the fact that \(y(m)\) is very nearly constant over every dyadic interval.
We follow the arguments that led to (4.4), (4.9), obtaining the new bounds
\[\sum_{\begin{subarray}{c}m\in I\cap\mathcal{M}\\ m^{\prime}-m\leqslant y(m)\end{subarray}}1\asymp\frac{y(M)M}{(\log M)^{2}}. \tag{4.10}\]
Summing (4.10) over the subintervals and again using the fact that \(y(m)\) is nearly constant over dyadic intervals, we find the bounds
\[\frac{1}{|\mathcal{M}|}\#\left\{m\in\mathcal{M};\frac{m^{\prime}-m}{\log m} \leqslant\lambda(m)\right\}\asymp 1-e^{-\lambda(x)}. \tag{4.11}\]
Using (4.11) in place of (3.5) in the argument for Proposition 3.1, we obtain the following result.
**Proposition 4.1**.: _Let \(k\geqslant 2\) be a fixed integer and \(\varepsilon>0\) a fixed real. Let \(\mathcal{M}\) denote the set of integers \(m\leqslant x\) which are free from prime divisors less than \(m^{\delta}\) where \(\delta>0\) is fixed and sufficiently small that \(1/\delta\) exceeds the sifting limit for the two-dimensional (beta or Selberg) sieve. Let \(m^{\prime}\) denote the successor of \(m\) in the set \(\mathcal{M}\) and define \(y(m)=\lambda(m)\log m\). Then we have, as \(x\to\infty\),_
\[\sum_{\begin{subarray}{c}m\leqslant x\\ m^{\prime}-m\leqslant y(m)\end{subarray}}\frac{1}{m}\to\infty\quad for\ \lambda(m)=1/Log_{k}(m), \tag{4.12}\]
_but_
\[\sum_{\begin{subarray}{c}m\leqslant x\\ m^{\prime}-m\leqslant y(m)\end{subarray}}\frac{1}{m}\quad is\ bounded\ for\ \lambda(m)=1/Log_{k}(m)(\log_{k}m)^{\varepsilon}. \tag{4.13}\]
Note that the sieve limit restriction on \(\delta\) is not needed in the case of (4.13) and that (4.13) shows that (3.9) is unconditionally true. As for the lower bound, (4.12) lends some additional credence to the belief that (3.8) is also true.
|
2305.01723 | Stance Detection: A Practical Guide to Classifying Political Beliefs in
Text | Stance detection is identifying expressed beliefs in a document. While
researchers widely use sentiment analysis for this, recent research
demonstrates that sentiment and stance are distinct. This paper advances text
analysis methods by precisely defining stance detection and presenting three
distinct approaches: supervised classification, natural language inference, and
in-context learning with generative language models. I discuss how document
context and trade-offs between resources and workload should inform your
methods. For all three approaches I provide guidance on application and
validation techniques, as well as coding tutorials for implementation. Finally,
I demonstrate how newer classification approaches can replicate supervised
classifiers. | Michael Burnham | 2023-05-02T18:49:12Z | http://arxiv.org/abs/2305.01723v2 | # Stance Detection With Supervised, Zero-Shot, and Few-Shot Applications
###### Abstract
Stance detection is the identification of an author's beliefs about a subject from a document. Researchers widely rely on sentiment analysis to accomplish this. However, recent research has shown that sentiment analysis is only loosely correlated with stance, if at all. This paper advances methods in text analysis by precisely defining the task of stance detection, providing a generalized framework for the task, and then presenting three distinct approaches for performing stance detection: supervised classification, zero-shot classification with NLI classifiers, and in-context learning. In doing so, I demonstrate how zero-shot and few-shot language classifiers can replace human labelers for a variety of tasks and discuss how their application and limitations differ from supervised classifiers. Finally, I demonstrate an application of zero-shot stance detection by replicating Block Jr et al. (2022).1
Footnote 1: Brief coding tutorials that demonstrate methods presented are located here: [https://github.com/MLBurnham/stance_detection_tutorials](https://github.com/MLBurnham/stance_detection_tutorials)
## 1 Introduction
Stance detection is the identification of an author's beliefs about a subject from a text sample. To accomplish this, social scientists widely rely on sentiment analysis. However, recent research shows that sentiment analysis is often loosely correlated with stance, if at all (Bestvater and Monroe, 2022; AlDayel and Magdy, 2021). The implications of this are significant. Sentiment analysis is fast, easy to use, and readily applied to many topics. If it is not a reliable form of stance detection, opinion mining is much less accessible than previously thought. Training a classifier is a great alternative, but supervised classifiers are sometimes prohibitively expensive in time and resources. They require manually labeled training data which researchers may need to curate themselves or hire workers for. Further, classifiers are task specific and a new classifier with new training data is needed for each unique stance. While training a classifier remains optimal for many projects, others would benefit from an approach with the speed and accessibility of sentiment analysis.
This article makes three contributions to address this methodological challenge. First, I precisely define stance detection as an entailment classification task. This definition distinguishes between the sentiment and stance of a text as distinct dimensions and provides a rigorous definition against which methods can be validated.
Second, I provide a generalized framework for approaching stance detection based on what I call information context - what information a classifier knows and what information a document
contains. In contrast to previous approaches that may apply sentiment dictionaries or classifiers with simple assumptions about what information documents contain or how humans apply labels to training data, this framework encourages researchers to more systematically address the factors that can affect the outcome of stance classification. The framework applies to both algorithmic classifiers and human coders but is particularly important in the zero- and few-shot classification context where pre-trained models contain a fixed set of knowledge.
Finally, this paper both demonstrates how to apply this framework and alleviates the aforementioned inaccessibility of stance detection by presenting three approaches that are more robust than sentiment analysis. Between them, these methods can meet the constraints of a wide variety of research projects. The first approach is the established method of training a classifier. I particularly focus on the use of transformer neural networks with domain adaptation over bag-of-words classifiers to improve performance and accessibility. The second approach uses zero-shot classifiers trained for natural language inference (NLI). Like sentiment dictionaries, this approach requires no training data and minimal programming to implement. Unlike sentiment dictionaries, it produces results comparable to, and sometimes better than, supervised classification. Finally, I discuss a new approach known as in-context learning that leverages generative language models such as chatGPT and GPT-4 (Brown et al., 2020). In-context learning combines the best of supervised and zero-shot classifiers, but is hampered due to its outcome instability and reliance on massive models. For each approach I discuss validation techniques, with a particular focus on zero-shot and few-shot classification.
Finally, in a replication, I demonstrate zero-shot stance detection to identify non-compliance with COVID-19 health guidelines.
## 2 Stance and Stance Detection
### What is Stance?
Stance is an individual's "attitudes, feelings, judgments, or commitment" to a given proposition (Biber and Finegan, 1988). In the recent past, stance detection was synonymous with sentiment analysis that measures polarity of a text on a positive or negative scale (Stime, 2019).
However, this introduces significant measurement error because the position a document expresses is often different from the sentiment used to express it. 2 Rather, sentiment and stance should be treated as independent dimensions (Bestvater and Monroe, 2022; AlDayel and Magdy, 2021). Some have proposed targeted sentiment (i.e. sentiment towards Trump rather than the general sentiment of a document about Trump) as an operationalization for stance detection (e.g. Sasaki et al., 2016; Bestvater and Monroe, 2022), but this too is inadequate. Positive and negative valence is one of the many dimensions of "attitudes, feelings, judgments, or commitment." Targeted sentiment may adequately capture approval for politicians, but can it adequately capture something like the politicization of the COVID-19 pandemic? While there is broad agreement that COVID-19 is bad, there is disagreement on if it constitutes a significant health threat and how government should respond. These are not disagreements of sentiment, but of facts and values.
Footnote 2: For example: “So excited to see anyone but Trump in the White House!” expresses a negative stance about Trump with a positive sentiment.
### What is Stance Detection?
Stance detection consists of three components: an observation (e.g. a document), a proposed stance (e.g. approval of a politician), and the observation's relationship to that target stance (e.g.
agreement or disagreement). Recent literature operationalizes this task in terms of entailment classification (AlDayel and Magdy, 2021). Introduced by Dagan et al. (2005), textual entailment is a directional relationship between two documents labeled as the text (_T_) and the hypothesis (_H_).
**Definition 1**: _Text sample T entails hypothesis H when a human reading T would infer that H is most likely true._
For example, if the following tweet from President Trump is paired with the following hypothesis:
**T**: _It's freezing and snowing in New York - we need global warming!_
**H**: _Global warming is good_
we would conclude that text \(T\) entails hypothesis \(H\). Textual entailment does not test if a hypothesis is _necessarily_ true. Entailment is what humans would infer is most likely true from a document. We can adapt this to arrive at a definition of stance detection:
**Definition 2**: _Text sample T entails stance S to author A when a human reading T would infer that A supports S (AlDayel and Magdy, 2021)._
One possible way to re-frame the above entailment task in terms of stance would be the following:
**T**: _It's freezing and snowing in New York - we need global warming!_
**S**: _Donald Trump believes global warming is good._
This definition frames stance detection as either a binary (entailment or not entailment) or multi-label (for, against, neutral) classification task.
However, the above definition does not answer two questions:
1. Is the target of stance detection necessarily the stance expressed in the text, or can it encompass some other stance?
2. What information outside of that in the text should be used to infer the target stance?
These questions are not addressed and thus the task is open ended. Within the literature and public data, stance detection has meant inferring the stance expressed in a document using only the information in the document, or inferring any stance using any information. A negative downstream effect is that how a particular document is labeled will vary based on the answers to these questions. This results in inconsistency in how data is curated and labeled, as well as how to interpret benchmarking and model generalizability.
Consider the examples in table 1 from perhaps the most commonly used data for stance detection benchmarking (Mohammad et al., 2016). Each sample was manually labeled as either in favor, against, or neutral towards Donald Trump. Each example highlights potential inconsistencies in how labels are assigned. The first sample makes no mention of Donald Trump and seemingly reasons that someone who espouses a certain narrative about the founding of the United States probably dislikes Trump. Another labeler may view it as unrelated. Examples two and three express support for the Spanish news station Univision. The labelers from sample two seem aware that Trump had a disagreement with Univision and associated a pro-Univision stance with an anti-Trump stance. The labelers for sample three seemingly lack this context and assigned a different label. Generating consistently correct labels on such documents can require deep, time-sensitive knowledge of global events and the conversation a comment was made in. Unless this information is provided, accurate inference is often not a reasonable expectation.
To solve these issues I make two proposals. First, researchers should distinguish between _stance detection_ and _stance prediction_. I define stance detection as identifying the stance expressed within a document. Stance prediction, however, identifies stances that are latent or not expressed. Admittedly, the distinction between the two can be vague. Is it detection or prediction if the author hints
at a stance? However, an effort to make this distinction sets clearer expectations as to what data, tools, and validation methods are most well suited for a task. For example, a stance detection task is primarily concerned with correctly interpreting text. Thus, human labels may be an appropriate method of validation. In the case of stance prediction, this may not be appropriate if we cannot assume human labelers have accurate perceptions about what beliefs correlate.
Second, I propose that stance detection is entailment within a given _information context_. I define information context as the set of information used to make a classification. Thus, I operationalize stance detection as the following:
**Definition 3**: _Text sample T entails stance S to author A when a human reading T with context C would infer that T expresses support for S._
Precisely defining information context is likely impossible. We cannot know the information labelers might draw on in a specific moment. Similarly, machine learning algorithms and neural networks are largely black boxes. However, by defining stance detection this way, we call attention to a critical component of the task and encourage researchers to consider it when creating coding rules, training manual labelers, and collecting data.
## 3 Framework
Information context consists of two parts - the information in a document-hypothesis pair, and the information a labeler or algorithm knows. Accurate stance detection requires finding the appropriate marriage between these dimensions.
On the X axis of figure 1 is the information contained within the document-hypothesis pair. I define a _context complete_ pair as one that contains sufficient information to correctly classify without referencing external information. One that requires additional information is _context incomplete_. Consider the following document-stance pair:
**T**: _The president of Russia invaded Ukraine._
**S**: _Vladimir Putin invaded Eastern Europe._
The correct classification is that T entails S to the author. However, the pair is context incomplete
\begin{table}
\begin{tabular}{c|l|l|l} \hline \# & Text & Label & Notes \\ \hline
1 & @ABC Stupid is as stupid does! & Against & Based on external knowledge that this is about Trump, potentially inferred from stance about narrative of US founding \\
2 & @peddoc63 @realDonaldTrump So I & Against & Infers stance towards Trump based on positive stance towards Univision \\
3 & Honestly I am gonna watch \#Unvision so much more now, just to support the network against & Neutral & Expresses the same stance as example \#2, but the labelers apparently did not have the same contextual information \\ & & & when inferring stance \\ \hline \end{tabular}
\end{table}
Table 1: Text samples from the Semeval 2016 test data set (Mohammad et al., 2016).
Figure 1: Information context consists of both what information a classifier uses to make inferences (classifier knowledge), and what information is included in the text sample (context completeness).
because it does not specify that Putin is the president of Russia and that Ukraine is in Eastern Europe. The information necessary for correct classification is not present in the text. Alternatively, the pair below is context complete:
**T**: _The president of Russia, Vladimir Putin, invaded Ukraine, a country in Eastern Europe._
**S**: _Vladimir Putin invaded Eastern Europe._
Here, a labeler or model needs no external information for correct classification.
Document-stance pairs need not be context complete to make correct classifications. The Y axis of figure 1, _classifier knowledge_, refers to external information a human or algorithm references to make predictions. By increasing classifier knowledge, a labeler or model can make associations not explicitly stated. While the most consistent results are expected from document-hypothesis pairs in quadrant one, a person or model with an expansive knowledge about the classification subject may not benefit from contextually complete data and vice-versa. However, labels assigned where there is low contextual completeness and low classifier knowledge, will be particularly noisy (Joseph et al., 2017). The examples provided in table 1 represent this category due to the contextual incompleteness of the text samples and outsourcing the task to crowd workers that were not provided information to consider when inferring labels.
## 4 General Guidance and Methods Review
In this section I outline three approaches to stance detection: supervised classifiers, zero-shot NLI classifiers, and in-context learning. My goal is not to provide step-by-step instructions, but to present the information necessary for researchers to make informed decisions about their methods and design. As a running demonstration, I classify approval for President Trump on Twitter. I compiled multiple data sets collected with various parameters and labeling techniques. The data sets and sample sizes are listed in the appendix. As a summary performance metric, I use the Matthews correlation coefficient (MCC), an emerging standard for binary classification performance due to its robustness relative to F1 (Chicco and Jurman, 2020). MCC ranges from -1 to 1 with 0 indicating no correlation between true class and estimated class. I use a 70-30 train-test split and results shown are performance on the testing data. Figure 2 provides a top-line comparison across approaches.
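As a minimal illustration of the evaluation metric (the labels below are toy values, not data from the paper), MCC can be computed with scikit-learn:

```python
# Toy illustration of the Matthews correlation coefficient; the labels here are
# invented and are not the paper's data.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 1, 0, 0, 0]  # hand-coded stance labels (1 = supports Trump)
y_pred = [1, 1, 0, 0, 0, 1]  # classifier predictions on the held-out test set
print(matthews_corrcoef(y_true, y_pred))  # ranges from -1 to 1; 0 = no correlation
```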
### Supervised Classifiers
Supervised classifiers can achieve performance equivalent to humans on entailment classification tasks like stance detection (sup, 2022). Their primary shortcoming is that they are task specific and require a lot of training data relative to other methods. Thus, consider a supervised classifier if your stance detection task is narrow in scope and you can afford to curate manually labeled training data. Because supervised text classification is a well established method, I forego a basic introduction to its use and instead focus on decisions of particular importance to entailment classification.
#### 4.1.1 Model Selection
Supervised classifiers can be divided into two types: machine learning algorithms that classify documents as unordered "bags-of-words" (e.g. logistic regression), and large language models that use word embeddings to classify semantic representations of text (e.g. BERT). Language models provide the best starting point for most stance detection tasks for two reasons. First, language
models generally outperform machine learning algorithms (e.g. Gonzalez-Carvajal and Garrido-Merchan, 2020). Language models use semantic representations of documents to classify them rather than word counts (Mikolov et al., 2013). This provides a more theoretically robust foundation that is particularly relevant for a task like stance detection. Second, language models are both easier to implement and better maintain the integrity of data. Converting documents to a bag-of-words requires text pre-processing that can increase the labor for researchers and alter the outcome of analysis (Denny and Spirling, 2018). Language models work with unedited documents and reduce the decision points that can influence results.
State-of-the-art models, typically a type of neural network known as a transformer (Vaswani et al., 2017), are used through transfer-learning. In this process, a model is pre-trained on large data sets and then released to the public so others can train the model on task-specific data. Thousands of these pre-trained models are publicly available through Python's Transformers library (Wolf et al., 2020), and many are pre-trained for specific document types or common research tasks (e.g. Nguyen et al., 2020). In practice, this means large models can be trained by practitioners in a matter of minutes with a desktop GPU or a free cloud service such as Google Colab. Software packages like Weights and Biases further ease the training process by automatically sweeping the hyper-parameter space (Biewald, 2020).
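A minimal fine-tuning sketch along these lines is shown below. The checkpoint, toy training data, and hyperparameters are illustrative placeholders rather than the settings used in this paper; a domain-adapted checkpoint such as BERTweet or PoliBERTweet can be substituted via its Hugging Face Hub identifier.

```python
# Sketch of fine-tuning a pre-trained transformer for stance detection with the
# Transformers Trainer API. Checkpoint, data, and hyperparameters are illustrative.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "roberta-base"  # swap in a domain-adapted checkpoint if available
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Hypothetical labeled tweets: 1 = supports Trump, 0 = does not
train = Dataset.from_dict({
    "text": ["Four more years! #MAGA", "Vote him out in November."],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

train = train.map(tokenize, batched=True)

args = TrainingArguments(output_dir="stance_model", num_train_epochs=3,
                         per_device_train_batch_size=8, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train).train()
```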
While many researchers use BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) models as a default, Electra (Clark et al., 2020) and DeBERTa (He et al., 2020) models are better starting points if there is not a model already pre-trained for your task. Electra uses a more effective training procedure than BERT, making it both faster and less prone to overfitting (Clark et al., 2020). For more complicated tasks, DeBERTa significantly improves upon BERT in speed and learning capacity (He et al., 2020).
#### 4.1.2 Classifier Knowledge: Training Samples
Supervised classifiers offer significant control over information context via classier knowledge. The most obvious vector to control this through is the training data. Researchers should pay particular attention to how they sample their training data and who their manual labelers are. A common
Figure 2: Comparison across methods of stance detection. Stance detection approaches are largely comparable with the obvious exception of sentiment dictionaries.
practice is to label training data with crowd-workers. Joseph et al. (2017) however, showed that when crowd workers lack context both intercoder reliability and classifier performance suffer. This can be somewhat ameliorated by providing examples of correctly labeled text (e.g. Kawintiranon and Singh, 2021) or a contextual prompt (e.g. Joseph et al., 2017). Occasionally, crowd sourcing may not be appropriate because the amount of information needed to infer accurate labels is more than people can reasonably internalize in a short period of time. In such instances, training one or two expert coders may be a more appropriate approach. Discordance between the context known to labelers and the context needed for accurate labels places a ceiling on your classifier's potential by reducing certainty in training labels (Miller et al., 2020; Joseph et al., 2017).
#### 4.1.3 Classifier Knowledge: Domain Adaptation
Some projects may need to classify stances in documents that use specialized language, such as legal or medical documents. In such cases, domain adaptation can enhance classifier knowledge. Domain adaptation involves training a model for a generic language task, such as predicting missing words in a sentence, on data relevant to the domain. This adapted model can then be fine-tuned on specific tasks for better performance in that domain. Many models that have already been domain adapted are available via Python's Transformers library (e.g. Chalkidis et al., 2020).
To illustrate, I trained three RoBERTa models with varying levels of domain adaptation to classify my test set: RoBERTa base (no adaptation), BERTweet (adapted to Twitter) and PoliBERTweet (adapted to political Twitter) (Kawintiranon and Singh, 2022). For each model I used a random search of the hyperparameter space to train thirty iterations. 3 Results are shown in figure 3. There is weak evidence (\(t=1.32\)) that BERTweet (Mean \(\text{MCC}=0.56\)) performs better than RoBERTa base (Mean \(\text{MCC}=0.53\)). PoliBERTweet, however, showed significant (\(t=7.35\)) improvement over both models (Mean \(\text{MCC}=0.65\)). However, this provides only a partial view of how domain adaptation affects classification. Some models that perform well on testing data show evidence of over-fitting and thus fail to generalize well to the entire data set. One sign of over-fitting is high evaluation loss, a measurement of error in classifying the testing data. Higher evaluation loss can indicate the model will generalize poorly. Figure 4 plots MCC against the evaluation loss. Models in the top left corner indicate high performance with less evidence of over-fitting. Here, the advantage of domain adaptation is more evident, with PoliBERTweet showing better performance and lower evaluation loss.
Footnote 3: To minimize degeneracy I used a Bayesian search of the learning rate and number of training epochs to find a range in which the model was stable. The random search was conducted in this parameter range. Three models failed to converge across the 90 runs. These models are omitted, but including them does not change results
#### 4.1.4 Validation
Out of sample performance is the primary method of validation for supervised classifiers. Additionally, intercoder reliability between human coders can provide insight as to what the performance ceiling for a model is. Researchers should also demonstrate they have adequately searched the hyperparameter space and that models converge to consistent performance. This provides evidence that results are due to consistent measurement rather than the artifact of a particular model.
Figure 3: RoBERTa, BERTweet, and PoliBERTweet share the same model architecture but differ domain adaptation. Each model was trained 30 times using a random search of a bounded hyperparameter space. PoliBERTweet demonstrated consistently better performance due to its domain adaptation.
Figure 4: Thirty models of each type were trained using a random search of a bounded hyperparameter space. Each iteration is plotted by overall performance (MCC) and generalization to out of sample data (Evaluation Loss). Point size corresponds to the number of training epochs. PoliBERTweet demonstrates both better performance and generalization due to domain adaptation.
**Supervised Classifiers for Stance Detection**
Supervised classifiers provide state-of-the-art performance and control over classifier knowledge. They are a good fit for research questions that need to classify few stances and can collect training data.
**Selecting a Model**
* Transformers generally perform best and are simpler to implement.
* Thousands of pre-trained models are publicly available. If a model has not been pre-trained for your specific task, Electra and DeBERTa are good starting points.
**Controlling Information Context**
* Ensure manual labelers have sufficient context to accurately label documents.
* If using crowd workers, provide examples of correctly labeled data or context prompts.
* When possible, train a model adapted to your domain of interest.
**Validation**
* Out of sample prediction is the primary test of validity.
* Measures of inter-coder reliability can estimate the model's performance ceiling.
* Demonstrate a sweep of the hyperparameter space and convergence on consistent performance.
### Zero-Shot NLI Classifiers
NLI classifiers are transformers pre-trained specifically for recognizing textual entailment. During training, the model is presented a statement and hypothesis pair and is tasked to determine if the statement entails, contradicts, or is unrelated/neutral to the hypothesis. The generalizability of this task allows these models to classify data with no additional training (also known as zero-shot classification) by pairing documents with hypotheses and framing classification as an entailment problem (Yin et al., 2019). For example, to identify tweets that support Trump I might pair each tweet with the hypothesis "The author of this tweet supports Trump" and the model would then determine if each tweet entails that hypothesis. An alternative approach is to devise a set of hypotheses that represent potential labels (e.g. "...supports Trump", "...opposes Trump", etc.) and then classify the document with each hypothesis in the set. The hypothesis most likely to be entailed is the document's label.
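A minimal sketch of this approach with the Transformers zero-shot pipeline is shown below; the checkpoint and example tweet are illustrative, and any sufficiently large NLI model from the Hugging Face Hub can be substituted.

```python
# Sketch of zero-shot stance detection with an off-the-shelf NLI model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

tweet = "Four more years of this president would be a disaster."  # illustrative
result = classifier(
    tweet,
    candidate_labels=["supports", "opposes", "is neutral toward"],
    hypothesis_template="The author of this tweet {} Trump.",
)
print(result["labels"][0])  # label whose hypothesis is most likely to be entailed
```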
A zero-shot model works off-the-shelf on an arbitrary number of stance detection tasks without training a model for each task. In many contexts, a zero-shot NLI classifier performs on par with, or better than, supervised classifiers. Their primary limitation is that users have no control over classifier knowledge other than selecting which model will be used. This implies zero-shot classifiers may struggle to classify data that is less contextually complete. Accordingly, a zero-shot NLI classifier is ideal for stances that can be framed in well-defined dimensions such as support or opposition.
#### 4.2.1 Classifier Knowledge
The only way to control classifier knowledge in a zero-shot context is model selection. A model that generalizes to arbitrary tasks on unseen data needs a sophisticated understanding of language that
only emerges in large models (Bhargava et al., 2021). Thus, performance will generally increase with model size. To demonstrate, I classified the testing set of my data using DeBERTa models trained for NLI with 44 (small), 86 (base), and 304 (large) million parameters (He et al., 2021). Results in figure 5 show that stance detection on par with supervised classifiers emerges in the largest version of the model tested. Transformers are generally released in sizes that range from small to extra large. While base and small models are commonly used for supervised classifiers, contemporary zero-shot NLI classifiers should not be expected to work well at sizes smaller than large.
A second variable to consider is the data used for pre-training. Unsurprisingly, pre-training a model on more examples of entailment-hypothesis pairs results in more robust generalization (Nie et al., 2020). To date, multiple large data sets have been curated precisely for pre-training, including the ANLI (Nie et al., 2020) and WANLI (Liu et al., 2022) data sets. Training data for NLI evolve as researchers strive to present harder challenges and ensure models learn language in a generalizable way (Nie et al., 2020). Models that incorporate both more and more recent NLI data sets into their pre-training should have better performance.
#### 4.2.2 Context Completeness
A simple and effective way to control context completeness is by matching hypotheses to documents via key words. For example, to classify tweets against the hypothesis "The author of this tweet supports Trump" I may limit my data to tweets that contain the word "Trump". This ensures a minimum threshold of context completeness is met. To equate anti-immigration tweets with pro-Trump tweets, I might simply match hypotheses about immigration with tweets that contain immigration related key words, and code them as pro-Trump if they entail anti-immigrant hypotheses. As shown in figure 6, appropriately pairing the hypotheses with text samples can have a
Figure 5: Models that classify data in a zero-shot or few-shot context rely on linguistic understanding that generalizes well to unseen data. For a task as complicated as entailment classification, this behavior only emerges in large language models.
dramatic effect on performance. Tweets that do not mention Trump are classified with much lower precision than tweets that do.
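A sketch of this kind of key word matching is below; the documents and the key word-to-hypothesis mapping are invented for illustration.

```python
# Sketch of enforcing a minimum level of context completeness by matching
# hypotheses to documents through key words (documents and mapping are invented).
tweets = [
    "Trump is doing a great job.",
    "Honestly I am gonna watch Univision so much more now.",
    "Build the wall now!",
]

keyword_to_hypothesis = {
    "trump": "The author of this tweet supports Trump.",
    "wall": "The author of this tweet opposes immigration.",
}

pairs = []
for tweet in tweets:
    for keyword, hypothesis in keyword_to_hypothesis.items():
        if keyword in tweet.lower():
            pairs.append((tweet, hypothesis))

# Only matched pairs are passed to the NLI classifier; unmatched documents
# (like the Univision tweet) are set aside or handled with other hypotheses.
```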
A second consideration is how many hypotheses should be used to classify a document. As a rule of thumb, present the model with enough labels to reasonably capture the range of possible opinions expressed and avoid a set of labels that create a false dilemma. The intuition behind this is simple. Imagine you are given a data set of tweets about President Trump and must assign each tweet a label from a set of two: "The author of this tweet supports Trump", "The author of this tweet opposes Trump". Stance neutral statements about President Trump pose a dilemma in which you must choose a label that may not seem appropriate. To alleviate this you might reduce the task to a binary classification in which you decide if a tweet does or does not express support for Trump, or you might add a third possible label for neutral statements about Trump.
To demonstrate, I replicated this thought experiment with my testing data. I classify the data three times, once with support, oppose, and neutral hypotheses; another time with support and oppose hypotheses; and finally with only a support hypothesis. Each document was hand labeled as support, neutral, or oppose. To understand where performance is affected by the hypothesis set I use, I examine accuracy within each of the three label categories in addition to MCC across all categories.
Results in table 2 show the model is less accurate in classifying a stance when it has no appropriate hypothesis for that stance. The support only set performed worse than the other two on oppose statements, while the set that included a neutral stance performed best on neutral statements. The set of hypotheses that present a false dilemma between support and oppose performs the worst on the holistic measurement, MCC. However, the overall performance of the three models is comparable.
There is not a single way to select a set of hypotheses. Introducing a neutral label will probably not improve performance if there are no neutral samples in your data. Conversely, a data set that mixes news headlines with social media comments may benefit significantly from a neutral stance. It is difficult to know what the composition of unlabeled data is a priori. Without strong priors, researchers should present a set of hypotheses that are both exhaustive (i.e. at least one distinct hypothesis for each stance) and mutually exclusive. Support, oppose, and neutral hypotheses may be sufficient in most cases.
#### 4.2.3 Validation
Validation presents a particular challenge to zero-shot classifiers because they do not have a large training and test set like supervised classifiers. However, the basics are not fundamentally different from a supervised classifier. First, researchers should be explicit in defining stances and how the hypotheses used are valid representations of those stances. Consider developing a code book if selectively matching hypotheses to documents based on the document's content.
\begin{table}
\begin{tabular}{l c c c c} \hline Set of Hypotheses & Support & Oppose & Neutral & MCC (All) \\ \hline Support & **88\%** & 86\% & 61\% & 0.62 \\ Support/Oppose & 85\% & 88\% & 60\% & 0.60 \\ Support/Oppose/Neutral & 84\% & **89\%** & **67\%** & **0.63** \\ \hline \end{tabular}
\end{table}
Table 2: The set of hypotheses presented to an NLI classifier represents the universe of labels it can assign a document. While it is impossible to know which set of hypotheses will produce the most accurate classifications for each data set, a set of hypotheses that provides a distinct label for each possible stance and avoids false dilemmas is a good starting point.
Second, calculate the sample size needed to estimate performance within a confidence interval and label some data. Performance can be estimated with a 5-10% margin of error at a 95% confidence level with a small fraction of the data needed to train a classifier. For headlines, tweets, or news articles this can often be accomplished in a few hours.
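A short sketch of that calculation, using the standard sample-size formula for a proportion with a conservative \(p=0.5\), is below.

```python
# Sketch of the validation sample-size calculation for estimating accuracy
# within a margin of error at 95% confidence (p = 0.5 is the conservative choice).
import math

def validation_sample_size(margin_of_error, z=1.96, p=0.5):
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(validation_sample_size(0.05))  # roughly 385 labeled documents
print(validation_sample_size(0.10))  # roughly 97 labeled documents
```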
Finally, consider conducting sensitivity analysis by generating synonymous hypotheses and classifying the data multiple times. Researchers should demonstrate their results are not sensitive to a specific set of hypotheses. Well trained NLI classifiers should produce consistent results for synonymous phrases. As an example, I classified my test set with 10 different synonymous hypotheses that varied from simple phrases ("Trump is good") to complete hypotheses ("The author of this tweet approves of Trump").4 Across all sets of hypotheses the average MCC was 0.60, with a minimum value of 0.57 and a maximum value of 0.63.
Footnote 4: Hypotheses are located in the appendix
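A sketch of this sensitivity check is below; the hypothesis templates and tweets are illustrative, and in practice the full set of synonymous hypotheses would be reported.

```python
# Sketch of sensitivity analysis: classify the same documents under several
# synonymous hypothesis templates and compare the resulting labels.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
tweets = ["He has my vote in November.", "Worst president in history.",
          "Proud of our president today.", "He should resign immediately."]

templates = ["The author of this tweet {} Trump.",
             "This tweet {} Trump.",
             "The person writing this {} Donald Trump."]

runs = []
for template in templates:
    labels = [classifier(t, candidate_labels=["supports", "opposes"],
                         hypothesis_template=template)["labels"][0] for t in tweets]
    runs.append(labels)

# Agreement between the first two runs; stable classifiers should agree closely.
agreement = sum(a == b for a, b in zip(runs[0], runs[1])) / len(tweets)
print(agreement)
```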
**Zero-Shot NLI Classifiers for Stance Detection**
Zero-shot NLI classifiers work with no training data. They are ideal for applications where training data is unavailable, or many stances need to be classified.
**Classifier Knowledge**
* Larger models will generalize better.
* Use models trained on more and more recent data.
**Context Completeness**
* NLI classifiers perform better on data with more context completeness. Sample or match hypotheses to samples based on key words.
* Hypotheses represent the universe of labels that a model can assign. Present a set of hypotheses that is exhaustive and avoids false dilemmas (e.g. support or oppose).
* Phrase hypotheses in complete sentences and plain English.
**Validation**
* Calculate the number of samples needed to estimate performance within a confidence interval and label some data.
* Conduct basic sensitivity analysis by classifying documents multiple times with synonymous hypotheses. Estimate inter-coder reliability between the classifications and show results are robust to different phrasings.
### In-Context Learning
In-context learning is an emergent behavior from a class of large language models known as generative transformers (e.g. GPT-3, Brown et al., 2020). In-context learning refers to the capability of these models to learn new tasks via plain English prompts that describe the task. It is called "in-context" because the model is not trained on the task, but learns it during the prompt's forward pass through the network (Brown et al., 2020). This allows the model to do zero-shot classification
(only a description of the task is provided) and few-shot classification (a description of the task and/or a few labeled examples are provided). In-context learning frames classification as a next-word prediction task. A description of the task and/or labeled samples are presented to the model in a structured format as shown in figure 7, with the last sample leaving the label blank. The model then predicts the next word in the sequence, which is presumably the label for the sample.
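A sketch of how such a prompt can be assembled is shown below, mirroring the format described in figure 7; the task description, labeled examples, and target tweet are illustrative, and the resulting string would be sent to whatever completion endpoint the researcher uses, with temperature set to zero.

```python
# Minimal sketch: assembling a few-shot, in-context classification prompt.
# The task description, examples, and target tweet are illustrative placeholders.
TASK = "Classify the stance of each tweet toward Trump as support, oppose, or neutral."

examples = [
    ("I'm voting for Trump again, best president ever.", "support"),
    ("Trump's handling of the pandemic was a disgrace.", "oppose"),
    ("Trump is holding a rally in Ohio tonight.", "neutral"),
]

target = "Four more years? No thank you."

prompt = TASK + "\n\n"
for text, label in examples:
    prompt += f"Tweet: {text}\nStance: {label}\n\n"
# Leave the final label blank; the model's next-word prediction is the label.
prompt += f"Tweet: {target}\nStance:"

print(prompt)
```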
The ability to provide the model with a descriptive prompt and examples of the task makes them potentially more adaptable than NLI zero-shot transformers. The ability to classify documents with plain English prompts also makes them particularly attractive for off-the-shelf use. However, these models are still not well understood and can be costly to implement, which offsets some of their apparent advantages over supervised and NLI classifiers.
#### 4.3.1 Classifier Knowledge
As with NLI classifiers, in-context learners depend on large model sizes. To test this, I classified my testing data set with both the largest (Davinci) and second largest (Curie) versions of GPT-3 with 50 and 35 labeled examples respectively, the maximum number of examples each model could consistently process. I used the older GPT-3 model for this task, as opposed to GPT-3.5 Turbo and GPT-4, because Open AI has not released recent models in various parameter sizes and GPT-3 allows me to test results when only the number of parameters is varied.5 Shown in figure 5, GPT-3 demonstrates similar behavior to DeBERTa - only large model versions are capable of robust stance detection. This is consistent with evaluations from GPT-3's creators (Brown et al., 2020), and commonly used entailment classification benchmarks (sup, 2022).
Footnote 5: Davinci’s performance was stable at 35 and 20 labeled examples as well, indicating that the number of examples was not what differentiated performance.
Figure 6: NLI classifiers and in-context learners benefit from documents that are more context complete. If the subject of a stance is not mentioned in a document, it is more difficult for the model to infer stance.
Unlike NLI classifiers, this need for large models presents a significant challenge to contemporary generative transformers. GPT-3, GPT-3.5 Turbo, and GPT-4 require hundreds to thousands of times the computing resources of BERT like models (Hu et al., 2021). While supervised and NLI classifiers are deployable via local hardware or free cloud services like Google Colab or university compute clusters, models on the scale of generative transformers require cloud-based super computers. This, combined with the proprietary nature of contemporary generative transformers, poses a significant challenge for model version control and, thus, scientific replication. There may also be significant cost associated with their use due to either costs associated with proprietary APIs or simply paying for the electricity necessary to power the model. Thus, while a smaller NLI model can feasibly classify an entire data set on its own, generative transformers may be more useful in creating training sets that smaller models can be trained on.
#### 4.3.2 Context Completeness
Like NLI classifiers, in-context learners benefit from context completeness. However, in-context learners allow greater control over context completeness via prompts and labeled examples. As shown in figure 6, GPT-3.5 Turbo and GPT-4 perform better than DeBERTa when the target is not mentioned, but performance is still significantly better when the target is mentioned. As with NLI classifiers, key-word based sampling or prompt matching will benefit performance.
Realizing the contextual benefits prompts can provide presents a challenge for in-context classification. "Prompt engineering" has spawned a significant literature of its own due to how sensitive generative models are to differences in prompts. Minor changes, such as the ordering of labeled examples, formatting, or the frequency of words, can result in large differences in classification. These instabilities do not diminish with model size or the number of examples in a prompt (Lu et al., 2021; Zhao et al., 2021).6 Thus, there are no clear rules on how many examples are needed, how prompts should be formatted, or what makes a good set of labeled examples. Further, searching across many prompts to optimize performance is a potentially expensive endeavor. Instead, I offer broad guidance and recommend users consult the most recent literature in this developing field.
Figure 7: A 3-shot prompt example for GPT-3. The prompt consists of a description of the classification task and three examples of correctly labeled data. The sequence is left incomplete by leaving the label for the final tweet blank. The model classifies the data by predicting the word after “Stance:”. Here the model predicted “oppose”.
First, the temperature parameter should be set to zero for any classification task. Temperature controls the "randomness" of a model's output. To predict the next word, generative transformers use a softmax function so that the probabilities of all candidate tokens sum to one. A higher temperature flattens the distribution such that as temperature \(T\rightarrow\infty\) the probability distribution for the predicted token approaches a uniform distribution (Holtzman et al., 2020; Ackley et al., 1985). A temperature of zero makes the model prediction as deterministic as possible so that the most likely class is chosen and results are maximally reproducible.
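The effect of the temperature parameter can be seen in a few lines of code; the logits below are arbitrary and stand in for a model's scores over the candidate next tokens.

```python
# Minimal sketch: temperature-scaled softmax over next-token logits. A low
# temperature concentrates probability on the arg-max token (near-deterministic
# labels); a high temperature flattens the distribution toward uniform.
import numpy as np

def softmax_with_temperature(logits, temperature):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [4.0, 2.5, 0.5]              # arbitrary scores for "support", "oppose", "neutral"
for t in (0.1, 1.0, 100.0):
    print(t, np.round(softmax_with_temperature(logits, t), 3))
```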
Second, more examples are generally better. Increasing the number of labeled examples in a prompt can, but does not necessarily, reduce variance in performance. More examples also increase average performance, with diminishing returns (Zhao et al., 2021). When prompting with a small number of examples, provide a plain English description of the task. Task descriptions provide decreased benefit as the number of labeled samples used in a prompt increases, but can have a substantial impact on prompts with few examples (Gao et al., 2020).
Third, be cognizant that generative transformers exhibit bias towards the majority and final labels in a prompt (Zhao et al., 2021; Lu et al., 2021). Consider any information known about the balance of the data, as well as the nature of the task. If there is no reason to suspect class imbalance, a random sample of labeled examples may be an appropriate prompt. If recall is paramount, it may be appropriate to over-sample examples from the class of interest. Avoid long sequences of similarly labeled examples towards the end of the prompt.
#### 4.3.3 Validation
Many of the same validation principles that apply to NLI classifiers apply to in-context learners. A small sample of labeled data can be used to estimate performance within a margin of error. Classifying documents multiple times with prompts that perform similarly can demonstrate robust results. However, the resource and labor expenses associated with these models are multiplied during validation. Researchers may need labeled data to test several prompts, plus labeled data for out-of-sample testing. Further, classifying the data set with multiple permutations of the prompt for sensitivity analysis multiplies the cost by \(n\) permutations. In short, validation is a significant challenge for in-context learners. While they should follow the same principles as zero-shot NLI classifiers, these practical costs can make it difficult to do so.
**In-Context Learners for Stance Detection**
In-context learners are a promising frontier; however, their size and cost make them currently impractical for most stance detection applications.
**Selecting a Model**
* Larger models are generally more capable entailment classifiers.
**Controlling Information Context**
* In-context learners benefit from more context-complete data.
* Including more labeled examples in the prompt will generally yield better results.
* Minor differences in prompts can cause large differences in results. Multiple permutations should be tested.
* The temperature parameter should be set to zero.
**Validation**
* Estimate performance within a margin of error on a small sample of labeled data.
* Conduct sensitivity analysis by classifying the data multiple times with similar performing prompts and showing results are robust across prompts.
## 5 Replication: COVID-19 and Threat Minimization
An analysis of Twitter posts by Block Jr et al. (2022) showed conservatives were more likely to use language that is threat minimizing or non-compliant with public health guidelines when discussing COVID-19, but that this tendency decreased as deaths within that person's geographic area increased. While the original study used a supervised classifier to label documents, for this replication I use a zero-shot NLI classifier and compare results with the original model.
### Data and Design
The replication data consists of 862,923 tweets related to COVID-19 posted between September 1, 2020 and February 28, 2021 from 23,476 unique users. Each user's ideology is measured on a uni-dimensional left-right scale using the tweetscores method (Barbera, 2015). Included are a set of 2,000 hand-labeled training tweets and coding rules for labeling tweets as non-compliant. Non-compliant tweets include statements that attempt to downplay the threat of COVID-19, such as comparing it to the flu or casting doubt on the number of deaths caused, as well as statements against mitigation practices such as wearing masks or getting vaccinated.
Based on these coding rules, I divide non-compliance into seven dimensions:
1. Masks and mask mandates
2. Shutdowns and stay-at-home orders
3. Vaccines
4. Social distancing
5. Comparisons to the flu
6. Death counts
7. General attitudes towards the threat posed by the virus
For each dimension I devised two sets of synonymous hypotheses that consist of compliant, non-compliant, and neutral stance statements. For example, the first set follows a simple template that reads "The author of this tweet believes..." and I finish the statement with compliant, non-compliant, and neutral phrases relevant to the dimension. For instance, the set of hypotheses for the vaccine dimension is the following:
* The author of this tweet believes vaccines are good.
* The author of this tweet believes vaccines are bad.
* The author of this tweet believes vaccines are neutral.
The second set of hypotheses follows a similar template but uses the phrasing "supports", "opposes", and "does not express an opinion about" instead of good, bad, or neutral. In addition, I incorporate a few additional phrases into the second set that are based on Block Jr et al.'s (2022) descriptive analysis of the text. These are designed to classify opinions that were commonly expressed in particular ways. For example, according to Block Jr et al. (2022), discourse on lockdown orders differentially focused on how they affected the economy and how they saved lives, depending on the author's ideology. A complete list of both sets of hypotheses is included in the appendix.
To classify the data I use the DeBERTa NLI classifier trained by Laurer et al. (2022) and match hypotheses to tweets based on key words. If a threat-minimizing hypothesis is selected for any of the classifications, the tweet is considered threat minimizing. For example, a tweet that contains the words "mask" and "death" will be classified once for the hypotheses associated with masks and a second time for the phrases associated with death. If the model determines the document entails an anti-mask or anti-vaccine hypothesis, it is considered threat minimizing. I use 300 randomly sampled tweets from the training data to test the hypotheses - enough data to estimate performance at a 95% confidence level with a ±5% margin of error, and a small enough sample that a single researcher could label the data in a few hours.
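The matching-and-aggregation step can be sketched as follows; the keyword lists and hypotheses are abridged illustrations rather than the full sets in the appendix, and `classify` is a trivial stand-in for the DeBERTa NLI entailment call.

```python
# Minimal sketch: match hypothesis sets to tweets by key words, then flag a tweet
# as threat minimizing if any matched classification selects a threat-minimizing
# hypothesis. Keyword lists and hypotheses are abridged; `classify` is a trivial
# stand-in for the DeBERTa NLI entailment call.
DIMENSIONS = {
    "masks": {
        "keywords": ["mask", "mandate"],
        "hypotheses": {
            "compliant": "The author of this tweet believes masks are good.",
            "threat_min": "The author of this tweet believes masks are bad.",
            "neutral": "The author of this tweet believes masks are neutral.",
        },
    },
    "vaccines": {
        "keywords": ["vaccine", "vaccinated", "vax"],
        "hypotheses": {
            "compliant": "The author of this tweet believes vaccines are good.",
            "threat_min": "The author of this tweet believes vaccines are bad.",
            "neutral": "The author of this tweet believes vaccines are neutral.",
        },
    },
}

def classify(text, hypotheses):
    """Stand-in for the NLI model: returns one key of `hypotheses`."""
    return "threat_min" if "hoax" in text.lower() else "neutral"

def is_threat_minimizing(tweet):
    labels = [classify(tweet, dim["hypotheses"])
              for dim in DIMENSIONS.values()
              if any(kw in tweet.lower() for kw in dim["keywords"])]
    return any(label == "threat_min" for label in labels)

print(is_threat_minimizing("Masks are a hoax, stop the mandates"))  # True
```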
To analyze the results I replicate the model used by Block Jr et al. (2022) - a negative binomial regression with a count of the number of threat-minimizing tweets a user makes as the dependent variable. The independent variables are the user's ideology, the COVID-19 death rate within their county, and an interaction between the two. A series of control variables that include county level demographics, political lean, and state fixed effects are also used.
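A hedged sketch of that specification using `statsmodels` is shown below; the file name and column names are hypothetical placeholders, and the original model's full control set is abbreviated to one control plus state fixed effects.

```python
# Minimal sketch: negative binomial regression of each user's count of
# threat-minimizing tweets on ideology, county death rate, and their
# interaction. File and column names are hypothetical placeholders; the
# original specification includes additional county-level controls.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("user_level_replication_data.csv")  # one row per user (hypothetical)

model = smf.negativebinomial(
    "threat_min_count ~ ideology * county_death_rate + median_income + C(state)",
    data=df,
).fit()
print(model.summary())
```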
### Analysis
Using the zero-shot NLI classifier on my sample of 300 labeled tweets, the first set of hypotheses had an MCC of 0.70 and 89% accuracy while the second had an MCC of 0.75 and 91% accuracy. The original classifier achieved an MCC of 0.66 and accuracy of 86% on the test set. Between the labels there was a high degree of intercoder reliability (\(MCC=0.84\), \(\kappa=0.83\)). Figure 8 shows the results from regression models that use zero-shot labels from both sets of hypotheses, as well as the original labels from the supervised classifier. The three models show consistency in both the direction and size of the effect.
Figure 8: Replication results from (Block Jr et al., 2022). Supervised represents the original results, which trained an Electra transformer to classify tweets. Zero-shot 1 and 2 used a zero-shot NLI classifier on the same data. The second set of zero-shot hypotheses showed the most consistency with human labels on the training data (\(MCC=0.75\)).
Figure 9 shows the ideological distribution of all tweets labeled threat-minimizing across the three models. The distributions appear identical with each identifying a concentration of threat minimizing tweets on the conservative pole. Across the entire data set, the average ideology of a tweet's author is -0.66. Among samples labeled differently by the zero-shot classifier and the original classifier, the average ideology was -0.11 for the first set of hypotheses, and -0.07 for the second set of hypotheses - indicating the NLI classifiers are slightly more likely to disagree when the author is more conservative. However, the models show a high consistency across the data and there is no obvious source of bias. While it is not immediately clear which model performs better and why, the high level of consistency in identifying such a nuanced stance provides compelling evidence that zero-shot NLI models can be a useful avenue for stance detection.
## 6 Conclusion
In this paper I outlined a precise definition and generalized framework for stance detection. Stance detection is best operationalized as an entailment classification task within a given information context. Information context is the body of knowledge a classifier uses to make inferences and the information contained within a document. By introducing information context to stance detection researchers can both identify the most appropriate methods and understand how design decisions affect results.
I demonstrated three approaches to stance detection: supervised classification, zero-shot classification with NLI classifiers, and in-context learning. Most stance detection tasks will be best served by a supervised classifier or a zero-shot NLI classifier. Supervised classifiers are ideal for state-of-the-art accuracy when sufficient training data is available or can be obtained. Supervised classifiers can be further enhanced through domain adaptation. Zero-shot classifiers are well-suited for multi-stance classification tasks, instances where training data is sparse, or when an off-the-shelf approach is needed. Regardless of approach, researchers should consider the information context of
Figure 9: The distribution of tweet author ideology among tweets labeled threat minimizing by the three classifiers.
their task and seek an appropriate match between classifier knowledge and the context completeness of documents.
**Funding**
None.
**Data Availability Statement**
Data and replication materials for this article are forthcoming.
**Conflicts of Interest**
None. |
2301.02905 | REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust
Encoder as a Service | Encoder as a service is an emerging cloud service. Specifically, a service
provider first pre-trains an encoder (i.e., a general-purpose feature
extractor) via either supervised learning or self-supervised learning and then
deploys it as a cloud service API. A client queries the cloud service API to
obtain feature vectors for its training/testing inputs when training/testing
its classifier (called downstream classifier). A downstream classifier is
vulnerable to adversarial examples, which are testing inputs with carefully
crafted perturbation that the downstream classifier misclassifies. Therefore,
in safety and security critical applications, a client aims to build a robust
downstream classifier and certify its robustness guarantees against adversarial
examples.
What APIs should the cloud service provide, such that a client can use any
certification method to certify the robustness of its downstream classifier
against adversarial examples while minimizing the number of queries to the
APIs? How can a service provider pre-train an encoder such that clients can
build more certifiably robust downstream classifiers? We aim to answer the two
questions in this work. For the first question, we show that the cloud service
only needs to provide two APIs, which we carefully design, to enable a client
to certify the robustness of its downstream classifier with a minimal number of
queries to the APIs. For the second question, we show that an encoder
pre-trained using a spectral-norm regularization term enables clients to build
more robust downstream classifiers. | Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong | 2023-01-07T17:40:11Z | http://arxiv.org/abs/2301.02905v1 | # REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service
###### Abstract
_Encoder as a service_ is an emerging cloud service. Specifically, a service provider first pre-trains an encoder (i.e., a general-purpose feature extractor) via either supervised learning or self-supervised learning and then deploys it as a cloud service API. A client queries the cloud service API to obtain feature vectors for its training/testing inputs when training/testing its classifier (called _downstream classifier_). A downstream classifier is vulnerable to _adversarial examples_, which are testing inputs with carefully crafted perturbation that the downstream classifier misclassifies. Therefore, in safety and security critical applications, a client aims to build a robust downstream classifier and certify its robustness guarantees against adversarial examples.
What APIs should the cloud service provide, such that a client can use any certification method to certify the robustness of its downstream classifier against adversarial examples while minimizing the number of queries to the APIs? How can a service provider pre-train an encoder such that clients can build more certifiably robust downstream classifiers? We aim to answer the two questions in this work. For the first question, we show that the cloud service only needs to provide two APIs, which we carefully design, to enable a client to certify the robustness of its downstream classifier with a minimal number of queries to the APIs. For the second question, we show that an encoder pre-trained using a spectral-norm regularization term enables clients to build more robust downstream classifiers.
Footnote †: Wenjie Qu performed this research when he was an intern in Gong’s group.
radii for testing inputs. The first challenge is that a client cannot use BC based certification. In particular, the composition of the encoder and the client's downstream classifier is the base classifier that the client needs to certify in BC based certification. However, the client does not have white-box access to the encoder deployed on the cloud server, making BC based certification not applicable. The second challenge is that, although a client can use SC based certification by treating the composition of the encoder and its downstream classifier as a base classifier, it incurs a large communication cost for the client and a large computation cost for the cloud server. Specifically, the client needs to query the Feature-API once for each noisy training input in each training epoch of the downstream classifier because SC based certification trains the base classifier using noisy training inputs. Therefore, the client requires \(e\) queries to the Feature-API _per_ training input, where \(e\) is the number of epochs used to train the downstream classifier. Moreover, to derive the predicted label and certified radius for a testing input, SC based certification requires the base classifier to predict the labels of \(N\) noisy testing inputs. Therefore, the client requires \(N\) queries to the Feature-API _per_ testing input. Note that \(N\) is often a large number (e.g., 10,000) [13]. The large number of queries to the Feature-API imply 1) large communication cost, which is intolerable for resource-constrained clients such as smartphone and IoT devices, and 2) large computation cost for the cloud server. The third challenge is that SC based certification achieves suboptimal certified radii. This is because the base classifier is the composition of the encoder and a client's downstream classifier, but a client cannot train/fine-tune the encoder as it is deployed on the cloud server.
**Our work:** We propose _Robust Encoder as a Service (REaaS)_ to address the three challenges of SEaaS. Figure 1 compares SEaaS with REaaS. Our key idea is to provide another API called _F2IPerturb-API_.1 A downstream classifier essentially takes a feature vector as input and outputs a label. Our F2IPerturb-API enables a client to treat its downstream classifier _alone_ as a base classifier and certify the robustness of its base or smoothed downstream classifier in the _feature space_. Specifically, a client performs three steps to derive the certified radius of a testing input in REaaS. First, the client obtains the feature vector of the testing input via querying the Feature-API. Second, the client views its downstream classifier alone as a base classifier and derives a _feature-space certified radius_ \(R_{F}\) for the testing input using any BC/SC certification method. The client's base or smoothed downstream classifier predicts the same label for the testing input if the \(\ell_{2}\)-norm of the adversarial perturbation added to the testing input's feature vector is less than \(R_{F}\). Third, the client sends the testing input and its feature-space certified radius \(R_{F}\) to query the F2IPerturb-API, which returns the corresponding _input-space certified radius_ \(R\) to the client. Our input-space certified radius \(R\) guarantees the client's base or smoothed downstream classifier predicts the same label for the testing input if the \(\ell_{2}\)-norm of the adversarial perturbation added to the testing input is less than \(R\).
Footnote 1: ‘F’ stands for Feature and ‘I’ stands for Input.
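A compact sketch of this three-step client workflow is shown below; `feature_api` and `f2i_perturb_api` are hypothetical client-side wrappers around the two cloud endpoints, and `certify_feature_space` stands in for whichever BC or SC certification method the client applies to its own downstream classifier.

```python
# Minimal sketch of the REaaS client-side certification workflow.
# feature_api(x) and f2i_perturb_api(x, R_F) are hypothetical wrappers around
# the two cloud endpoints; certify_feature_space(classifier, v) stands in for
# any BC/SC certification method applied to the client's downstream classifier.
def certify_input(x, downstream_classifier, feature_api, f2i_perturb_api,
                  certify_feature_space):
    # Step 1: one query to Feature-API for the testing input's feature vector.
    v = feature_api(x)

    # Step 2: certify the downstream classifier alone in the feature space.
    label, R_F = certify_feature_space(downstream_classifier, v)

    # Step 3: one query to F2IPerturb-API to translate the feature-space
    # radius into an input-space certified radius.
    R = f2i_perturb_api(x, R_F)
    return label, R
```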
The key challenge of implementing our F2IPerturb-API is how to find the largest input-space certified radius \(R\) for a given testing input and its feature-space certified radius \(R_{F}\). To address the challenge, we formulate finding the largest \(R\) as an optimization problem, where the objective function is to find the maximum \(R\) and the constraint is that the feature-space perturbation is less than \(R_{F}\). However, the optimization problem is challenging to solve due to the highly non-linear constraint. To address the challenge, we propose a binary search based solution. The key component of our solution is to check whether the constraint is satisfied for a specific \(R\) in each iteration of binary search. Towards this goal, we derive an upper bound of the feature-space perturbation for a given \(R\) and we treat the constraint as satisfied if the upper bound is less than \(R_{F}\). Our upper bound can be computed efficiently.
F2IPerturb-API addresses the first two challenges of SEaaS. Specifically, BC based certification is applicable in REaaS. Moreover, SC based certification requires far fewer queries to the APIs in REaaS. Specifically, for any certification method, a client only requires one query to Feature-API per training input and two queries (one to Feature-API and one to F2IPerturb-API) per testing input in our REaaS.
To address the third challenge of SEaaS, we propose a new method to pre-train a robust encoder, so a client can derive larger certified radii even though it cannot train/fine-tune the encoder. Our method can be combined with standard supervised learning or self-supervised learning to enhance the robustness of a pre-trained encoder. An encoder is more robust if it produces more similar feature vectors for an input and its adversarially perturbed version. Our key idea is to derive an upper bound for the Euclidean distance between the feature vectors of an input and its adversarial version, where our upper bound is a product of a _spectral-norm term_ and the perturbation size. The spectral-norm term depends on the parameters of the encoder, but it does not depend on the input nor the adversarial perturbation. An encoder with a smaller spectral-norm term may produce more similar feature vectors for an input and its adversarial version. Thus, we use the spectral-norm term as a regularization term to regularize the pre-training of an encoder.
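As a rough illustration only (the exact regularization term is defined later in the paper and also handles convolutional layers), the sketch below computes a product-of-spectral-norms penalty for a purely fully-connected encoder with 1-Lipschitz activations, which upper-bounds the feature-space distortion per unit of input perturbation.

```python
# Rough illustration (not the paper's exact term): for a fully-connected encoder
# with 1-Lipschitz activations, ||f(x+d) - f(x)||_2 <= (prod_l ||W_l||_2) * ||d||_2,
# so the product of per-layer spectral norms can be added to the pre-training
# loss as a robustness penalty.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, 128), nn.ReLU(),
                        nn.Linear(128, 64))

def spectral_norm_term(model):
    term = torch.tensor(1.0)
    for module in model.modules():
        if isinstance(module, nn.Linear):
            term = term * torch.linalg.matrix_norm(module.weight, ord=2)
    return term

# total_loss = pretraining_loss + lambda_reg * spectral_norm_term(encoder)
print(spectral_norm_term(encoder))
```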
We perform a systematic evaluation on multiple datasets including CIFAR10, SVHN, STL10, and Tiny-ImageNet. Our
Figure 1: SEaaS vs. REaaS.
evaluation results show that REaaS addresses the three challenges of SEaaS. First, REaaS makes BC based certification applicable. Second, REaaS incurs orders of magnitude less queries to the cloud service than SEaaS for SC based certification. For instance, REaaS reduces the number of queries to the cloud service APIs respectively by \(25\times\) and \(5,000\times\) per training and testing input when a client trains its downstream classifier for \(e=25\) epochs and uses \(N=10,000\) for certification. Third, in the framework of REaaS, our robust pre-training method achieves larger _average certified radius (ACR)_ for the testing inputs than existing methods to pre-train encoders for both BC and SC based certification. For instance, when the encoder is pre-trained on Tiny-ImageNet and the downstream classifier is trained on SVHN, the ACRs for MoCo (a standard non-robust self-supervised learning method) [1], RoCL (an adversarial training based state-of-the-art robust self-supervised learning method) [14], and our method are respectively 0.011, 0.014, and 0.275 when a client uses SC based certification.
In summary, we make the following contributions:
* We propose REaaS, which enables a client to build a certifiably robust downstream classifier and derive its certified radii using any certification method with a minimal number of queries to the cloud service.
* We propose a method to implement F2IPerturb-API.
* We propose a spectral-norm term to regularize the pre-training of a robust encoder.
* We extensively evaluate REaaS and compare it with SEaaS on multiple datasets.
## II Related Work
### _Adversarial Examples_
We discuss adversarial examples [5, 15] in the context of encoder as a service. We denote by \(f\) a pre-trained encoder and \(g\) a downstream classifier. Given a testing input \(\mathbf{x}\), the encoder outputs a feature vector for it, while the downstream classifier takes the feature vector as input and outputs a label. For simplicity, we denote by \(f(\mathbf{x})\) the feature vector and \(g\circ f(\mathbf{x})\) the predicted label for \(\mathbf{x}\), where \(\circ\) represents the composition of the encoder and downstream classifier. In an adversarial example, an attacker adds a carefully crafted small perturbation \(\delta\) to \(\mathbf{x}\) such that its predicted label changes, i.e., \(g\circ f(\mathbf{x}+\delta)\neq g\circ f(\mathbf{x})\). The carefully perturbed input \(\mathbf{x}+\delta\) is called an adversarial example. Many methods (e.g., [5, 6, 16]) have been proposed to find an adversarial perturbation \(\delta\) for a given input \(\mathbf{x}\). In our work, we focus on certified defenses, which aim to defend against any bounded adversarial perturbations no matter how they are found. Therefore, we omit the details on how an attacker can find an adversarial perturbation.
### _Certifying Robustness of a Classifier_
**Definition of certified radius:** A classifier is certifiably robust against adversarial examples if its predicted label for an input is unaffected by any perturbation once its size is bounded [7, 12, 13]. Formally, a classifier \(h\) is certifiably robust if we have the following guarantee for an input \(\mathbf{x}\):
\[h(\mathbf{x}+\delta)=h(\mathbf{x}),\forall\left\lVert\delta\right\rVert_{2}<R, \tag{1}\]
where \(R\) is known as _certified radius_. Note that certified radius \(R\) may be different for different inputs \(\mathbf{x}\), but we omit the explicit dependency on \(\mathbf{x}\) in the notation for simplicity.
A certification method against adversarial examples aims to build a certifiably robust classifier and derive its certified radius \(R\) for any input \(\mathbf{x}\). There are two general categories of certification methods, i.e., _base classifier (BC) based certification_[7, 8, 9, 10] and _smoothed classifier (SC) based certification_[11, 12, 13]. Both categories of methods may be adopted in different scenarios depending on certification needs. On one hand, BC based certification often produces _deterministic_ guarantees (i.e., the derived certified radius is absolutely correct), while SC based certification often provides _probabilistic_ guarantees (i.e., the derived certified radius may be incorrect with a small _error probability_). On the other hand, SC based certification often derives a larger certified radius than BC based certification due to its probabilistic guarantees.
**Base classifier (BC) based certification:** BC based certification aims to directly derive the certified radius \(R\) of a given classifier (called _base classifier_) for an input \(\mathbf{x}\). These methods often propagate perturbation from the input \(\mathbf{x}\) to the output of the base classifier layer by layer in order to derive the certified radius. Therefore, they require white-box access to the base classifier. Suppose \(F\) is a base classifier that maps an input \(\mathbf{x}\) to one of \(c\) classes \(\{1,2,\cdots,c\}\). We use \(H(\mathbf{x})\) to denote the base classifier's last-layer output vector for \(\mathbf{x}\), where \(H_{l}(\mathbf{x})\) represents the \(l\)th entry of \(H(\mathbf{x})\) and \(l=1,2,\cdots,c\). \(F(\mathbf{x})\) denotes the predicted label for \(\mathbf{x}\), i.e., \(F(\mathbf{x})=\operatorname*{argmax}_{l=1,2,\cdots,c}H_{l}(\mathbf{x})\). Next, we overview how to derive the certified radius \(R\) using CROWN [9], a state-of-the-art BC based certification method. CROWN shows that each entry \(H_{l}(\mathbf{x})\) can be bounded by two linear functions \(H_{l}^{L}(\mathbf{x})\) and \(H_{l}^{U}(\mathbf{x})\). Suppose the base classifier predicts label \(y\) for \(\mathbf{x}\) when there is no adversarial perturbation, i.e., \(F(\mathbf{x})=y\). CROWN finds the largest \(r\) such that the lower bound of the \(y\)th entry (i.e., \(\min_{\left\|\delta\right\|_{2}<r}H_{y}^{L}(\mathbf{x}+\delta)\)) is larger than the upper bounds of all other entries (i.e., \(\max_{l\neq y}\max_{\left\|\delta\right\|_{2}<r}H_{l}^{U}(\mathbf{x}+\delta)\)) and views it as the certified radius \(R\) for the input \(\mathbf{x}\). The complete details of CROWN can be found in Appendix B. In the context of encoder as a service, the composition of the encoder and downstream classifier (i.e., \(g\circ f\)) is a base classifier \(F\), whose certified radius a client aims to derive. However, in SEaaS, a client does not have white-box access to the encoder \(f\) since it is deployed on the cloud server. As a result, a client cannot use BC based certification to derive the certified radius of \(g\circ f\).
**Smoothed classifier (SC) based certification:** SC based certification first builds a _smoothed classifier_ based on the base classifier and then derives the certified radius \(R\) of the smoothed classifier. In SEaaS, a client builds a smoothed classifier \(h\) based on the base classifier \(g\circ f\) via adding random Gaussian noise \(\mathcal{N}(0,\sigma^{2}\mathbf{I})\) to an input \(\mathbf{x}\), where \(\sigma\) is the standard deviation of the Gaussian noise. Specifically, given a testing input \(\mathbf{x}\), the client constructs \(N\) noisy inputs \(\mathbf{x}+\mathbf{n}_{1},\mathbf{x}+\mathbf{n}_{2},\cdots,\mathbf{x}+\mathbf{n }_{N}\), where \(\mathbf{n}_{i}\) (\(i=1,2,\cdots,N\)) is sampled from \(\mathcal{N}(0,\sigma^{2}\mathbf{I})\). The client uses the base classifier \(g\circ f\) to predict the label of each noisy input. Moreover, the client computes the _label frequency_\(N_{l}\) of each label \(l\) among the noisy inputs, i.e., \(N_{l}=\sum_{j=1}^{N}\mathbb{I}(g\circ f(\mathbf{x}+\mathbf{n}_{j})=l)\), where \(\mathbb{I}\) is an indicator function. The smoothed classifier predicts the label with the
largest label frequency for the original testing input \(\mathbf{x}\). Moreover, the client can derive the certified radius \(R\) of the smoothed classifier for \(\mathbf{x}\) based on the label frequencies. Due to the random sampling, the derived certified radius may be incorrect with an error probability \(\alpha\), which can be set by the client. In Appendix C, we take Cohen et al. [13] as an example to discuss more technical details on SC based certification.
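A compact sketch of this procedure is shown below; `base_classifier` is a placeholder for the label predicted by the composition \(g\circ f\), and, for brevity, the sketch folds label selection and estimation into a single sample of \(N\) noisy inputs, whereas Cohen et al. [13] use a separate selection sample.

```python
# Minimal sketch of SC based prediction/certification via randomized smoothing,
# in the style of Cohen et al. [13]. `base_classifier(x)` is a placeholder for
# the label predicted by the composition of encoder and downstream classifier;
# x is assumed to be a numpy array.
import numpy as np
from collections import Counter
from scipy.stats import beta, norm

def smooth_predict_and_certify(base_classifier, x, sigma=0.25, N=10000, alpha=0.001):
    labels = [base_classifier(x + np.random.normal(0.0, sigma, size=x.shape))
              for _ in range(N)]
    top_label, n_top = Counter(labels).most_common(1)[0]

    # One-sided Clopper-Pearson lower confidence bound on the top label's probability.
    p_lower = beta.ppf(alpha, n_top, N - n_top + 1)
    if p_lower <= 0.5:
        return None, 0.0                 # abstain: cannot certify
    R = sigma * norm.ppf(p_lower)        # certified radius, correct w.p. >= 1 - alpha
    return top_label, R
```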
To improve certified radius, the base classifier \(g\circ f\) is often trained using noisy training inputs [13]. In encoder as a service (both SEaaS and REaaS), a client does not have white-box access to the encoder \(f\) and thus can only train its downstream classifier \(g\) using noisy training inputs. Specifically, for SEaaS, in each epoch of training the downstream classifier, a client adds random Gaussian noise from \(\mathcal{N}(0,\sigma^{2}\mathbf{I})\) to each training input, queries the Feature-API to obtain the feature vector of each noisy training input, and uses the feature vectors to update the downstream classifier via stochastic gradient descent.
SC based certification faces two challenges in SEaaS. First, a client needs to query the cloud service many times, leading to a large communication cost for the client and a large computation cost for the cloud server. Specifically, for each testing input \(\mathbf{x}\), a client needs to query the Feature-API \(N\) times to obtain the feature vectors of the \(N\) noisy inputs in order to compute the label frequencies. Moreover, the client queries the Feature-API \(e\) times _per_ training input, where \(e\) is the number of epochs used to train the downstream classifier. We note that [17] proposed to prepend a denoiser to a base classifier instead of training it with noisy training inputs, which can reduce the number of queries from \(e\) to 1 per training input when applied to SEaaS. However, it is hard for a client with a small amount of data to train such a denoiser. Second, the derived certified radius is suboptimal because a client cannot fine-tune the encoder, which is not pre-trained to support certified robustness.
### _Pre-training an Encoder_
#### III-C1 Pre-training Non-robust Encoders
We discuss both standard supervised learning and self-supervised learning methods to pre-train encoders, which do not take robustness against adversarial examples into consideration.
**Supervised learning:** The idea of using supervised learning to pre-train an encoder is to first train a deep neural network classifier using labeled training data and then use the layers excluding the output layer as an encoder. Specifically, supervised learning defines a loss function \(l(i)\) (e.g., cross-entropy loss) for each labeled training example \((\mathbf{x}_{i},y_{i})\), where \(y_{i}\) is the ground truth label of \(\mathbf{x}_{i}\). Then, supervised learning iteratively trains a deep neural network classifier by minimizing the sum of the losses over the labeled training examples. Such paradigm of training a deep neural network classifier via supervised learning and using the layers excluding the output layer as an encoder is also known as _transfer learning_[18].
**Self-supervised learning:** Unlike supervised learning, self-supervised learning [1, 2] aims to pre-train an encoder using unlabeled data, which has attracted growing attention in the past several years in the AI community. A basic component of self-supervised learning is _data augmentation_. Specifically, given an image, the data augmentation component applies a series of (random) data augmentation operations (e.g., random cropping, color jitter, and flipping) sequentially to produce an _augmented image_. Roughly speaking, the main idea of self-supervised learning is to pre-train an encoder such that it produces similar feature vectors for two augmented images produced from the same image, but dissimilar feature vectors for two augmented images produced from different images. Next, we take MoCo [1], a state-of-the-art self-supervised learning algorithm, as an example to elaborate more details.
MoCo uses an auxiliary encoder (called _momentum encoder_) that has the same architecture as the encoder and a queue (denoted as \(\Gamma\)). In particular, the queue is used to cache the output of the momentum encoder for the augmented images and is dynamically updated. For simplicity, we respectively use \(f\) and \(f_{e}\) to denote the encoder and the momentum encoder, and use \(\theta\) and \(\theta_{e}\) to denote their encoder parameters. Suppose we have a mini-batch of unlabeled images which are denoted as \(\mathbf{x}_{i},i=1,2,\cdots,m\). We apply the data augmentation component to each image in the mini-batch twice. We use \(\mathbf{x}_{i}^{1}\) and \(\mathbf{x}_{i}^{2}\) to denote the two augmented images produced for the image \(\mathbf{x}_{i}\), respectively. Given \(\mathbf{x}_{i}^{1}\), \(\mathbf{x}_{i}^{2}\), and the queue \(\Gamma\), MoCo defines a loss function for \(\mathbf{x}_{i}\) as follows:
\[\ell(i)=-\log(\frac{\exp(Sim(f(\mathbf{x}_{i}^{1}),f_{e}(\mathbf{x}_{i}^{2}))/ \tau)}{\sum_{\mathbf{z}\in\Gamma\cup\{f_{e}(\mathbf{x}_{i}^{2})\}}\exp(Sim(f( \mathbf{x}_{i}^{1}),\mathbf{z})/\tau)}), \tag{2}\]
where \(\tau\) is a temperature parameter and \(Sim(\cdot,\cdot)\) measures the similarity of two feature vectors (e.g., cosine similarity). MoCo uses gradient descent to minimize \(\frac{1}{m}\sum_{i=1}^{m}\ell(i)\) to update the parameters \(\theta\) in the encoder \(f\). The queue \(\Gamma\) is dynamically updated in each step, where \(\{f_{e}(\mathbf{x}_{1}^{2}),f_{e}(\mathbf{x}_{2}^{2}),\cdots,f_{e}(\mathbf{x}_{m}^{2})\}\) are enqueued and the \(m\) "oldest" vectors are dequeued.
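The loss in Equation (2) can be written compactly; the sketch below assumes the feature vectors are L2-normalized (so dot products equal cosine similarity) and omits the momentum update of \(f_{e}\) and the queue maintenance.

```python
# Minimal sketch of the contrastive loss in Equation (2), assuming feature
# vectors are L2-normalized so dot products equal cosine similarity. The
# momentum update of the auxiliary encoder and the queue maintenance are omitted.
import torch
import torch.nn.functional as F

def moco_loss(q, k, queue, tau=0.07):
    """q: (m, d) from encoder f; k: (m, d) from momentum encoder f_e;
    queue: (K, d) cached negatives; all rows assumed L2-normalized."""
    pos = torch.sum(q * k, dim=1, keepdim=True)          # (m, 1) positive logits
    neg = q @ queue.t()                                   # (m, K) negative logits
    logits = torch.cat([pos, neg], dim=1) / tau           # (m, 1 + K)
    targets = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive is index 0
    return F.cross_entropy(logits, targets)
```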
#### III-C2 Pre-training Empirically Robust Encoders
Adversarial training [15, 16] is a standard method to train empirically robust classifiers in supervised learning. The key idea is to generate adversarial examples based on training examples during training, and use the adversarial examples to augment the training data. The layers excluding the output layer of the classifier are then used as an encoder. Several studies [19, 14, 20] generalized adversarial training to pre-train a robust encoder in self-supervised learning. Roughly speaking, the idea is to first generate adversarial examples that incur large loss and then use them to pre-train an encoder. For instance, Kim et al. [14] proposed _Robust Contrastive Learning (RoCL)_ to pre-train robust encoders. Specifically, given a training image, RoCL uses Projected Gradient Descent (PGD) [16] to generate an adversarial perturbation for an augmented version of the training image such that the adversarially perturbed augmented version incurs a large loss. Then, RoCL pre-trains an encoder such that it outputs similar feature vectors for the adversarially perturbed augmented version and other augmented versions of the training image.
The goal of these studies is to pre-train _empirically_ rather than _certifiably_ robust encoders. As a result, the pre-trained encoders achieve suboptimal certified radii for a client as shown by our experimental results. In this work, we propose a new method, which can be combined with either supervised learning or self-supervised learning to pre-train a robust encoder that can achieve larger certified radii against adversarial examples for a client.
## III Problem Formulation
**Threat model:** We consider an attacker can use adversarial examples to induce misclassification for a client. Specifically, given a testing input, an attacker can add a carefully crafted perturbation to it such that the client's downstream classifier predicts a different label. To defend against adversarial examples, the client aims to build a certifiably robust classifier, which provably predicts the same label for a testing input no matter what perturbation is added to it once the \(\ell_{2}\)-norm of the perturbation is less than a threshold (called _certified radius_). Note that we do not constrain on what method an attacker can use to find the perturbation since we aim to defend against all bounded perturbations.
**Problem definition:** We aim to design an encoder as a service. In particular, when designing an encoder as a service, we essentially aim to answer two key questions: 1) what APIs should the cloud service provide for a client? and 2) how to pre-train the encoder?
**Design goals:** We aim to design an encoder as a service to achieve three goals, _generality_, _efficiency_, and _robustness_, which we elaborate in the following:
* **Generality.** As we discussed in Section II-B, BC and SC based certification methods are complementary and may be adopted by different clients due to their different needs. We say an encoder as a service achieves the generality goal if a client can use any certification method to build a certifiably robust classifier and derive its certified radius for any given testing input. We note that SEaaS can only support SC based certification.
* **Efficiency.** We use the number of queries sent to the cloud service to measure the communication cost between a client and the cloud server. Moreover, we use computation time to measure the computation cost for a client and the cloud server. We aim to design an encoder as a service to achieve a small communication cost and computation cost.
* **Robustness.** A certified radius measures the certified robustness of a classifier for a testing input. We use the average certified radius of testing inputs to measure the certified robustness of a classifier. We aim to design an encoder as a service that enables a client to build a downstream classifier with a large average certified radius in both BC and SC based certification.
We note that SEaaS does not achieve any of the three goals. Specifically, SEaaS cannot support BC based certification; SEaaS incurs a large communication cost between a client and the cloud server as well as a large computation cost for the cloud server in SC based certification; and SEaaS achieves suboptimal average certified radius for SC based certification because certified robustness is not taken into consideration when pre-training the encoder.
## IV Our REaaS
### _Overview_
To achieve the generality and efficiency goals, our key idea is to enable a client to treat its own downstream classifier as a base classifier and certify the robustness of its base downstream classifier or smoothed downstream classifier in the _feature space_. Towards this goal, other than the _Feature-API_ provided in SEaaS, our REaaS provides another API (called _F2IPerturb-API_). In particular, since a downstream classifier takes a feature vector as input, we propose that a client first derive a feature-space certified radius \(R_{F}\) of its base downstream classifier or smoothed downstream classifier for a testing input. Then, the client transforms the feature-space certified radius \(R_{F}\) into the input-space certified radius \(R\) by querying the F2IPerturb-API. To achieve the robustness goal, we further propose a new method to pre-train a robust encoder, which uses a spectral-norm term to regularize the pre-training of an encoder. Our pre-trained encoder aims to produce similar feature vectors for an input and its adversarially perturbed version.
### _Feature-API and F2IPerturb-API_
#### IV-B1 Feature-API
We first introduce the input and output of Feature-API, and then its implementation.
**Input and output for a client:** In Feature-API, the input from a client is an image \(\mathbf{x}\) and the output returned to the client is the input's feature vector \(\mathbf{v}\). Formally, Feature-API is represented as \(\mathbf{v}=\textit{Feature-API}(\mathbf{x})\).
**Implementation on the server:** Given an input \(\mathbf{x}\), the cloud server uses a pre-trained encoder \(f\) to compute its feature vector \(\mathbf{v}\). In particular, we have \(\mathbf{v}=f(\mathbf{x})\).
We note that SEaaS only has this Feature-API.
#### IV-B2 F2IPerturb-API
Like Feature-API, we first introduce the input and output of F2IPerturb-API, and then its implementation on the cloud server.
**Input and output for a client:** F2IPerturb-API transforms a feature-space certified radius to an input-space certified radius. The input from a client contains an input image \(\mathbf{x}\) and a feature-space certified radius \(R_{F}\) for \(\mathbf{x}\). In particular, the client's downstream classifier predicts the same label for \(\mathbf{x}\) once the \(\ell_{2}\)-norm of the perturbation to \(\mathbf{x}\)'s feature vector \(\mathbf{v}\) is bounded by \(R_{F}\). The client can use any BC or SC based method to derive \(R_{F}\) for \(\mathbf{x}\) by treating its downstream classifier alone as a base classifier and \(\mathbf{x}\)'s feature vector \(\mathbf{v}\) as an "input" to the base classifier.
The output of F2IPerturb-API is an input-space certified radius \(R\) such that when the \(\ell_{2}\)-norm of the adversarial perturbation added to the input image \(\mathbf{x}\) is smaller than \(R\), the \(\ell_{2}\)-norm of the perturbation introduced to the feature vector \(\mathbf{v}\) is smaller than \(R_{F}\). Formally, given an input image \(\mathbf{x}\) and a feature-space certified radius \(R_{F}\), F2IPerturb-API is represented as follows: \(R=\textit{F2IPerturb-API}(\mathbf{x},R_{F})\).
**Implementation on the server:** A larger \(R\) enables the client to derive a larger certified radius. The key challenge of implementing the F2IPerturb-API is how to derive the largest \(R\) for a given input \(\mathbf{x}\) and \(R_{F}\). To address the challenge, we formulate the input-space certified radius \(R\) as the solution to the following optimization problem:
\[R=\max_{r}r \tag{3}\] \[\textit{s.t.}\ \max_{\left\|\delta\right\|_{2}<r}\left\|f( \mathbf{x}+\delta)-f(\mathbf{x})\right\|_{2}<R_{F}, \tag{4}\]
where \(f\) is an encoder and \(\delta\) is an adversarial perturbation. However, the optimization problem is challenging to solve because the constraint is highly non-linear when the encoder is a complex neural network. To address the challenge, we propose a binary search based method to solve \(R\) in the optimization problem. In particular, we search in the range \([\rho_{k}^{L},\rho_{k}^{U}]\) in the \(k\)th round of binary search, where we set \(\rho_{1}^{L}\) to be \(0\) and \(\rho_{1}^{U}\) to be a large value (e.g., \(10\) in our experiments) in the first round. Moreover, we denote \(\rho_{k}=\frac{\rho_{k}^{L}+\rho_{k}^{U}}{2}\) for simplicity. In the \(k\)th round, we check whether \(r=\rho_{k}\) satisfies the constraint in Equation (4). If the constraint is satisfied, then we can search the range \([\rho_{k},\rho_{k}^{U}]\) in the \((k+1)\)th round, i.e., \(\rho_{k+1}^{L}=\rho_{k}\) and \(\rho_{k+1}^{U}=\rho_{k}^{U}\). Otherwise, we search the range \([\rho_{k}^{L},\rho_{k}]\) in the \((k+1)\)th round, i.e., \(\rho_{k+1}^{L}=\rho_{k}^{L}\) and \(\rho_{k+1}^{U}=\rho_{k}\). We stop the binary search when \(\rho_{k}^{U}-\rho_{k}^{L}\leq\beta\) and treat \(\rho_{k}^{L}\) as \(R\), where \(\beta\) is a parameter characterizing the binary-search precision.
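The search loop itself is short; in the sketch below, `constraint_holds(rho)` is a placeholder for the check described next, returning True when the upper bound on the feature-space perturbation for input-space radius `rho` is smaller than \(R_{F}\).

```python
# Minimal sketch of the binary search in F2IPerturb-API. `constraint_holds(rho)`
# is a placeholder for the check described below: it returns True when the
# upper bound on the feature-space perturbation for input-space radius rho is
# smaller than R_F.
def input_space_radius(constraint_holds, rho_max=10.0, beta=1e-3):
    lo, hi = 0.0, rho_max
    while hi - lo > beta:
        mid = (lo + hi) / 2.0
        if constraint_holds(mid):
            lo = mid          # mid satisfies the constraint; search upward
        else:
            hi = mid          # mid violates the constraint; search downward
    return lo                 # returned as the input-space certified radius R
```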
Our binary search based solution faces a key challenge, i.e., how to check whether \(r=\rho_{k}\) satisfies the constraint in Equation (4). Our key idea to address the challenge is to derive an upper bound for the left hand side of the constraint (i.e., \(\max_{\left\|\delta\right\|_{2}<\rho_{k}}\left\|f(\mathbf{x}+\delta)-f( \mathbf{x})\right\|_{2}\)) and decide that the constraint is satisfied if the upper bound is smaller than \(R_{F}\), where the upper bound can be efficiently computed for any \(\rho_{k}\). Suppose the encoder \(f\) maps an input \(\mathbf{x}\) to a \(d\)-dimensional feature vector \(f(\mathbf{x})\), where \(f_{i}(\mathbf{x})\) represents the \(i\)th entry of \(f(\mathbf{x})\). An encoder \(f\) is essentially a deep neural network. Therefore, according to CROWN [9], we have the following lower bound and upper bound for \(f_{i}(\mathbf{x}+\delta)\) when \(\left\|\delta\right\|_{2}<\rho_{k}\):
\[\min_{\left\|\delta\right\|_{2}<\rho_{k}}f_{i}^{L}(\mathbf{x}+\delta)\leq f_ {i}(\mathbf{x}+\delta)\leq\max_{\left\|\delta\right\|_{2}<\rho_{k}}f_{i}^{U}( \mathbf{x}+\delta), \tag{5}\]
where \(f_{i}^{L}\) and \(f_{i}^{U}\) are two linear functions and \(i=1,2,\cdots,d\). In Appendix D, we show that Equation 5 is tight when \(f\) consists of one linear layer. As \(\min_{\left\|\delta\right\|_{2}\leq\rho_{k}}f_{i}^{L}(\mathbf{x}+\delta)\leq \min_{\left\|\delta\right\|_{2}<\rho_{k}}f_{i}^{L}(\mathbf{x}+\delta)\) and \(\max_{\left\|\delta\right\|_{2}<\rho_{k}}f_{i}^{U}(\mathbf{x}+\delta)\leq\max_{\left\|\delta\right\|_{2}\leq\rho_{k}}f_{i}^{U}(\mathbf{x}+\delta)\), we have the following when \(\left\|\delta\right\|_{2}<\rho_{k}\):
\[\min_{\left\|\delta\right\|_{2}\leq\rho_{k}}f_{i}^{L}(\mathbf{x}+\delta)\leq f _{i}(\mathbf{x}+\delta)\leq\max_{\left\|\delta\right\|_{2}\leq\rho_{k}}f_{i}^ {U}(\mathbf{x}+\delta), \tag{6}\]
Therefore, we have the following inequalities for all \(\left\|\delta\right\|_{2}<\rho_{k}\):
\[f_{i}(\mathbf{x}+\delta)-f_{i}(\mathbf{x}) \geq L_{i}, \tag{7}\] \[f_{i}(\mathbf{x}+\delta)-f_{i}(\mathbf{x}) \leq U_{i}, \tag{8}\]
where \(L_{i}=\min_{\left\|\delta\right\|_{2}\leq\rho_{k}}f_{i}^{L}(\mathbf{x}+\delta )-f_{i}(\mathbf{x})\) and \(U_{i}=\max_{\left\|\delta\right\|_{2}\leq\rho_{k}}f_{i}^{U}(\mathbf{x}+\delta) -f_{i}(\mathbf{x})\). Based on the above two inequalities, we have the following:
\[\max_{\left\|\delta\right\|_{2}<\rho_{k}}(f_{i}(\mathbf{x}+\delta)-f_{i}( \mathbf{x}))^{2}\leq\max(L_{i}^{2},U_{i}^{2}). \tag{9}\]
Therefore, we can derive an upper bound for \(\max_{\left\|\delta\right\|_{2}<\rho_{k}}\left\|f(\mathbf{x}+\delta)-f( \mathbf{x})\right\|_{2}\) as follows:
\[\max_{\left\|\delta\right\|_{2}<\rho_{k}}\left\|f(\mathbf{x}+ \delta)-f(\mathbf{x})\right\|_{2} \tag{10}\] \[\leq \sqrt{\sum_{i=1}^{d}\max_{\left\|\delta\right\|_{2}<\rho_{k}}(f_{i }(\mathbf{x}+\delta)-f_{i}(\mathbf{x}))^{2}}\] (11) \[\leq \sqrt{\sum_{i=1}^{d}\max(L_{i}^{2},U_{i}^{2})}\] (12) \[\triangleq R_{F}^{\prime}. \tag{13}\]
If the upper bound \(R_{F}^{\prime}\) is smaller than \(R_{F}\), then \(r=\rho_{k}\) satisfies the constraint in Equation (4). We note that \(r=\rho_{k}\) may also satisfy the constraint even if the upper bound \(R_{F}^{\prime}\) is no smaller than \(R_{F}\). However, such cases do not influence the correctness of our binary search. Note that \(\min_{\left\|\delta\right\|_{2}\leq\rho_{k}}f_{i}^{L}(\mathbf{x}+\delta)\) and \(\max_{\left\|\delta\right\|_{2}\leq\rho_{k}}f_{i}^{U}(\mathbf{x}+\delta)\) have closed-form solutions for \(i=1,2,\cdots,d\) [9]. Therefore, \(L_{i}\), \(U_{i}\), and the upper bound \(R_{F}^{\prime}\) can be computed efficiently. In other words, we can efficiently check whether \(r=\rho_{k}\) satisfies the constraint in Equation (4) for any \(\rho_{k}\). Algorithm 1 shows our F2IPerturb-API, where the function Crown obtains the lower bound and upper bound linear functions for each \(f_{i}(\mathbf{x})\).
Our binary search based solution correctly finds a lower bound of the optimal \(R\) of the optimization problem in Equation (3) because the constraint in Equation (4) is guaranteed to be satisfied in each round of our binary search.
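One possible implementation of the constraint check used by the binary search is sketched below. The helper `crown_extrema` is a hypothetical stand-in for CROWN [9]: it is assumed to return, for each output dimension \(i\), the minimum of the lower linear bound \(f_{i}^{L}\) and the maximum of the upper linear bound \(f_{i}^{U}\) over the closed \(\ell_{2}\)-ball of radius \(\rho_{k}\) around \(\mathbf{x}\).

```python
import numpy as np

def satisfies_constraint(x, rho, R_F, f, crown_extrema):
    """Check whether r = rho satisfies the constraint in Equation (4),
    using the upper bound R_F' of Equations (10)-(13).

    f(x) returns the clean feature vector; crown_extrema(x, rho) is assumed
    to return two length-d arrays (lower_min, upper_max) with
    lower_min[i] = min_{||delta||_2 <= rho} f_i^L(x + delta) and
    upper_max[i] = max_{||delta||_2 <= rho} f_i^U(x + delta)."""
    v = f(x)
    lower_min, upper_max = crown_extrema(x, rho)
    L = lower_min - v  # L_i in Equation (7)
    U = upper_max - v  # U_i in Equation (8)
    R_F_prime = float(np.sqrt(np.sum(np.maximum(L ** 2, U ** 2))))  # Equation (12)
    return R_F_prime < R_F
```

In practice this function would be partially applied (e.g., with `functools.partial`) so that it matches the `satisfies_constraint(x, rho, R_F)` signature expected by the binary search sketch above.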
**Image rescaling:** We note that, in our above discussion on the two APIs, a client's input image size is the same as the input size of the cloud server's encoder. When the size of a client's input image is different, the cloud server can rescale it to be the input size of its encoder using the standard bilinear interpolation. The bilinear interpolation can be viewed as a linear transformation. In particular, suppose \(\mathbf{x}_{b}\) and \(\mathbf{x}_{a}\) respectively represent the image before and after rescaling. Then, we have \(\mathbf{x}_{a}=\mathbf{W}\cdot\mathbf{x}_{b}\), where \(\mathbf{W}\) is the matrix used to represent the linear transformation. The cloud server can implement this linear transformation (i.e., rescaling) by adding a linear layer whose weight matrix is \(\mathbf{W}\) before the encoder. Moreover, the cloud server can view the linear layer + the encoder as a "new" encoder to implement the two APIs.
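Since bilinear rescaling to a fixed output size is a linear map of the pixel values, folding it into the encoder can be sketched as follows (PyTorch; the 32x32 input size is an assumption matching our experimental setup, and the class and function names are illustrative).

```python
import torch.nn as nn
import torch.nn.functional as F

class BilinearRescale(nn.Module):
    """Rescale a batch of images to the encoder's input size.

    For a fixed output size, bilinear interpolation is a linear transformation
    x_a = W x_b of the input pixels, so it can be treated as an extra linear
    layer placed in front of the encoder."""
    def __init__(self, size=(32, 32)):
        super().__init__()
        self.size = size

    def forward(self, x):
        return F.interpolate(x, size=self.size, mode="bilinear", align_corners=False)

def with_rescaling(encoder, size=(32, 32)):
    """View 'rescaling + encoder' as a single new encoder for both APIs."""
    return nn.Sequential(BilinearRescale(size), encoder)
```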
### _Pre-training Robust Encoder_
Our REaaS is applicable to any encoder. However, a more robust encoder enables a client to derive a larger certified radius for its testing input. Therefore, we further propose a method to pre-train robust encoders. An encoder \(f\) is more robust if it produces more similar feature vectors for an input and its adversarially perturbed version, i.e., if \(f(\mathbf{x}+\delta)\) and \(f(\mathbf{x})\) are more similar. In particular, based on our implementation of the F2IPerturb-API, if \(\left\|f(\mathbf{x}+\delta)-f(\mathbf{x})\right\|_{2}\) is smaller for any adversarial perturbation \(\delta\), then F2IPerturb-API would return a larger input-space certified radius to a client for a given feature-space certified radius. Therefore, our key idea is to reduce \(\left\|f(\mathbf{x}+\delta)-f(\mathbf{x})\right\|_{2}\) when pre-training an encoder \(f\). Next, we derive an upper bound of \(\left\|f(\mathbf{x}+\delta)-f(\mathbf{x})\right\|_{2}\), based on which we design a regularization term to regularize the pre-training of an encoder.
A neural network (e.g., an encoder) can often be decomposed into the composition of a series of linear transformations [5]. In particular, we can do so if each layer of the neural network (e.g., linear layer, convolutional layer, and batch normalization layer) can be expressed as a linear transformation. We denote an encoder as the composition of \(n\) linear transformations, i.e., \(f(\cdot)=T^{n}\circ T^{n-1}\circ\cdots\circ T^{1}(\cdot)\). [5] showed that the difference between the outputs of any neural network \(f\) (\(f\) is an encoder in our case) for an input and its adversarially perturbed version can be bounded as follows:
\[\left\|f(\mathbf{x}+\delta)-f(\mathbf{x})\right\|_{2}\leq\prod_{j=1}^{n} \left\|T^{j}\right\|_{s}\cdot\left\|\delta\right\|_{2}, \tag{14}\]
where \(\mathbf{x}\) is an input, \(\delta\) is an adversarial perturbation, and \(\left\|\cdot\right\|_{s}\) represents the spectral norm. The product of the spectral norms of the \(n\) linear transformations (i.e., \(\prod_{j=1}^{n}\left\|T^{j}\right\|_{s}\)) is independent of the input \(\mathbf{x}\) and the adversarial perturbation \(\delta\). Therefore, our idea is to add \(\prod_{j=1}^{n}\left\|T^{j}\right\|_{s}\) as a regularization term (called _spectral-norm regularization_) when pre-training an encoder. Minimizing such a regularization term may encourage the encoder to produce more similar feature vectors for an input and its adversarially perturbed version, i.e., \(\left\|f(\mathbf{x}+\delta)-f(\mathbf{x})\right\|_{2}\) may be smaller. In particular, we minimize the following loss function for each mini-batch of inputs when pre-training an encoder:
\[\frac{1}{m}\cdot\sum_{i=1}^{m}\ell(i)+\lambda\cdot\prod_{j=1}^{n}\left\|T^{j} \right\|_{s}, \tag{15}\]
where \(\ell(i)\) is a loss for a training input in pre-training, \(m\) is batch size, and \(\lambda\) is a hyper-parameter used to balance the two terms. For instance, when using supervised learning to train a classifier, whose layers excluding the output layer are used as an encoder, the loss \(\ell(i)\) is often the cross-entropy loss; when using self-supervised learning algorithm MoCo [1] to pre-train an encoder, \(\ell(i)\) is defined in Equation (2). We adopt the _power method_[21] to estimate the spectral norms of the linear transformations when pre-training an encoder.
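A sketch of the regularizer is given below (PyTorch). `spectral_norm_product` approximates \(\prod_{j}\left\|T^{j}\right\|_{s}\) by running the power method on each layer's weight matrix; treating a convolutional kernel through its reshaped weight matrix is a simplification of the exact operator norm, and the loop over modules, the default number of power iterations, and the name `lam` are choices of this sketch rather than prescriptions of the method.

```python
import torch

def power_method_sn(weight, n_iter=10):
    """Approximate the spectral norm of a (reshaped) weight matrix."""
    W = weight.reshape(weight.shape[0], -1)
    v = torch.randn(W.shape[1], device=W.device)
    v = v / (v.norm() + 1e-12)
    u = W @ v
    for _ in range(n_iter):
        u = W @ v
        u = u / (u.norm() + 1e-12)
        v = W.t() @ u
        v = v / (v.norm() + 1e-12)
    return u @ (W @ v)  # approximates ||W||_s

def spectral_norm_product(encoder, n_iter=10):
    """Approximate the product of spectral norms over layers with weight matrices."""
    prod = 1.0
    for module in encoder.modules():
        w = getattr(module, "weight", None)
        if w is not None and w.dim() >= 2:  # linear / convolutional layers
            prod = prod * power_method_sn(w, n_iter)
    return prod

# Pre-training objective of Equation (15) for one mini-batch (sketch):
# loss = pretraining_loss + lam * spectral_norm_product(encoder)
```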
### _Certifying Robustness for a Client_
In REaaS, a client can treat its own downstream classifier as a base classifier. We discuss how a client can use our two APIs to train a base downstream classifier and, for a testing input, derive the certified radius of the base downstream classifier (BC based certification) or of the smoothed downstream classifier (SC based certification).
**BC based certification:** When training a base downstream classifier, a client queries the Feature-API to obtain a feature vector for each training input. Then, given the feature vectors and the corresponding training labels, the client can use any training method (e.g., standard supervised learning) to train a base downstream classifier. Given a testing input, the client queries the Feature-API to obtain its feature vector and uses the base downstream classifier to predict its label. Moreover, the client can use any BC based certification method to derive a feature-space certified radius for the testing input by treating its feature vector as an "input" to the base downstream classifier. Then, the client queries the F2IPerturb-API to transform the feature-space certified radius to an input-space certified radius.
**SC based certification:** Similar to BC based certification, a client queries the Feature-API to obtain a feature vector for each training input when training a base downstream classifier. However, unlike BC based certification, the client adds noise to the training feature vectors in SC based certification. In particular, the client adds random noise (e.g., Gaussian noise) to each feature vector in each mini-batch of training feature vectors in each training epoch. Note that the client does not need to query the Feature-API again for the noisy feature vector. Given a testing input, the client queries the Feature-API to obtain its feature vector and uses the smoothed downstream classifier to predict its label and derive its feature-space certified radius. In particular, the client constructs \(N\) noisy feature vectors by adding random noise to the feature vector and uses its base downstream classifier to predict their labels. Based on the predicted labels, the client can derive the predicted label and feature-space certified radius for the original feature vector. Then, the client queries the F2IPerturb-API to transform the feature-space certified radius to an input-space certified radius.
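As an illustration, the client-side procedure for one testing input can be sketched as follows. Here `feature_api` and `f2i_perturb_api` stand for the two server APIs, `base_classifier` maps a 1-D NumPy feature vector to a label, and the exact estimate of the top-class probability is used in place of the lower confidence bound of the full randomized-smoothing certification procedure [13], so this is only a simplified sketch.

```python
import numpy as np
from scipy.stats import norm

def sc_predict_and_certify(x, base_classifier, feature_api, f2i_perturb_api,
                           sigma=0.5, N=100_000):
    """Smoothed prediction and input-space certified radius for one test input."""
    v = feature_api(x)                                   # 1st query: feature vector
    noisy = v + sigma * np.random.randn(N, v.shape[0])   # N noisy feature vectors
    labels = np.array([base_classifier(z) for z in noisy])
    values, counts = np.unique(labels, return_counts=True)
    label, p_top = values[np.argmax(counts)], counts.max() / N
    if p_top <= 0.5:                                     # abstain: no certificate
        return label, 0.0
    R_F = sigma * norm.ppf(p_top)                        # feature-space radius (simplified)
    R = f2i_perturb_api(x, R_F)                          # 2nd query: input-space radius
    return label, R
```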
TABLE I: Comparing the communication and computation cost per training/testing input in SEaaS and REaaS. \(c\) is the number of epochs used to train a base downstream classifier. \(N\) is the number of noisy inputs per testing input in SC. \(T_{f}\) (or \(T_{g}\)) and \(M_{f}\) (or \(M_{g}\)) are respectively the number of layers and the maximum number of neurons in a layer in an encoder (or a downstream classifier). \(K_{f}\) (or \(K_{g}\)) is the number of parameters in an encoder (or a downstream classifier).
### _Theoretical Communication and Computation Cost Analysis_
**Communication cost:** The number of queries to the APIs characterizes the communication cost for a client and the cloud server. In both BC and SC based certification, a client only queries the Feature-API once for each training input in REaaS. Therefore, the number of queries per training input is \(1\) in REaaS. In both BC and SC based certification, a client only queries the Feature-API and F2IPerturb-API once to derive the predicted label and certified radius for a testing input. Therefore, the number of queries per testing input is \(2\) in REaaS. Note that the client only sends an image \(\mathbf{x}\) to the cloud server when querying the Feature-API, while it also sends the feature-space certified radius \(R_{F}\) to the cloud server when querying the F2IPerturb-API. However, \(R_{F}\) is a real number whose communication cost is negligible, compared to that of the image \(\mathbf{x}\). Thus, we consider querying Feature-API and querying F2IPerturb-API to have the same communication cost in our analysis for simplicity. Table Ia compares the number of queries per training/testing input in SEaaS and REaaS. Compared with SEaaS, REaaS makes BC based certification applicable and incurs a much smaller communication cost in SC based certification.
**Computation cost:** Table Ib compares the computational complexity of REaaS and SEaaS for the cloud server and a client. In both REaaS and SEaaS, the computation cost for the cloud server to process a query to the Feature-API is linear to the number of encoder parameters, i.e., \(O(K_{f})\), where \(K_{f}\) is the number of parameters in the encoder. In REaaS, we use binary search to process a query to the F2IPerturb-API. Given the initial search range \([\rho_{1}^{L},\rho_{1}^{U}]\) and binary-search precision \(\beta\), the number of rounds of binary search is \(\lceil\log_{2}(\frac{\rho_{1}^{U}-\rho_{1}^{L}}{\beta})\rceil\). In practice, we can set \(\rho_{1}^{L}\), \(\rho_{1}^{U}\), and \(\beta\) to be constants, e.g., \(\rho_{1}^{L}=0\), \(\rho_{1}^{U}=10\), and \(\beta=10^{-50}\), and thus \(\lceil\log_{2}(\frac{\rho_{1}^{U}-\rho_{1}^{L}}{\beta})\rceil\) can be viewed as a constant. From [9], the computational complexity is \(O(T_{f}^{2}\cdot M_{f}^{3})\) in each round of binary search, where \(T_{f}\) and \(M_{f}\) are respectively the number of layers and the maximum number of neurons in a layer in an encoder. Thus, the computational complexity for the cloud server to process a query to the F2IPerturb-API is \(O(T_{f}^{2}\cdot M_{f}^{3})\).
On the client side, the computational complexity of gradient descent is \(O(K_{g})\) for each training input per epoch when training a base downstream classifier in both BC and SC based certification, where \(K_{g}\) is the number of parameters in the base downstream classifier. Therefore, the computational complexity of training a base downstream classifier is \(O(e\cdot K_{g})\) per training input, where \(e\) is the number of training epochs. The computational complexity for a client to derive the feature-space certified radius of a testing input is \(O(T_{g}^{2}\cdot M_{g}^{3})\) in BC based certification [9], where \(T_{g}\) and \(M_{g}\) are respectively the number of layers and the maximum number of neurons in a layer in the base downstream classifier. Moreover, the computational complexity of using the base downstream classifier to predict a label for a (noisy) feature vector is \(O(K_{g})\).
As SEaaS does not support BC based certification, we focus on comparing the computation cost of SEaaS and REaaS for SC based certification. First, we observe that the computation cost per training/testing input is the same for a client in SEaaS and REaaS. Second, REaaS incurs a smaller computation cost per training input for the cloud server than SEaaS, because REaaS incurs much fewer queries than SEaaS. Third, REaaS often incurs a smaller computation cost per testing input for the cloud server than SEaaS, because \(N\) is often large to achieve a large certified radius as shown in our experiments.
## V Evaluation
### _Experimental Setup_
**Datasets:** We use CIFAR10 [22], SVHN [23], STL10 [24], and Tiny-ImageNet [25] in our experiments. CIFAR10 has 50,000 training and 10,000 testing images from ten classes. SVHN contains 73,257 training and 26,032 testing images from ten classes. STL10 contains 5,000 training and 8,000 testing images from ten classes, as well as 100,000 unlabeled images. Tiny-ImageNet contains 100,000 training and 10,000 testing images from 200 classes.
We rescale each image in all datasets to \(32\times 32\) by the standard bi-linear interpolation. Therefore, the input image size in a downstream dataset is the same as the input size of the pre-trained encoder. However, we will also explicitly explore the scenarios in which the input image size of a downstream dataset is different from the input size of the pre-trained encoder.
**Pre-training encoders:** We use STL-10 and Tiny-ImageNet as pre-training datasets to pre-train encoders. We adopt these two datasets because they contain more images than CIFAR10 and SVHN. In particular, we use the unlabeled data of STL10 to pre-train an encoder when STL10 is used as a pre-training dataset. When Tiny-ImageNet is used as a pre-training dataset, we use its training dataset to pre-train an encoder. Unless otherwise mentioned, we adopt MoCo [1] as the pre-training algorithm in SEaaS, while we adopt MoCo with our spectral-norm regularization as the pre-training algorithm in REaaS, since they only need unlabeled data. Moreover, we adopt the public implementation of MoCo [26] in our experiments. When calculating the spectral norm of the encoder during pre-training, we run 10 iterations of power iteration in each mini-batch. The architecture of the encoder can be found in Table XIII in Appendix. We pre-train an encoder for 500 epochs with a learning rate 0.06 and a batch size 512.
**Training downstream classifiers:** As we have four datasets, we use the other three datasets as downstream datasets when a dataset is used as a pre-training dataset. Moreover, when a dataset is used as a downstream dataset, we adopt its training dataset as the downstream training dataset and its testing dataset as the downstream testing dataset. We use the downstream training dataset to train a base downstream classifier. In particular, in BC based certification, we use standard supervised learning to train a base downstream classifier on the feature vectors of the training inputs. We note that some works [27, 28] proposed new methods to train a base classifier to improve its certified robustness in BC based certification. These methods are also applicable in our REaaS, but we do not evaluate them since our focus is to show the applicability of BC based certification in REaaS instead of its optimal certified robustness. For SC based certification, we train a base downstream classifier by adding Gaussian noise \(\mathcal{N}(0,\sigma^{2}\mathbf{I})\) to the training inputs in SEaaS and to the feature vectors of the training inputs in REaaS.
We use a fully connected neural network with two hidden layers as a base downstream classifier. We respectively adopt
ReLU and Softmax as the activation functions in the two hidden layers and the output layer. The number of neurons in both hidden layers is 256. We train a base downstream classifier for 25 epochs using cross-entropy loss, a learning rate of 0.06, and a batch size of 512.
**Certification methods:** For BC based certification, we adopt CROWN [9] to derive the certified radius of a base downstream classifier for a testing input in REaaS. We adopt the public implementation of CROWN [29]. For SC based certification, we adopt Gaussian noise based randomized smoothing [13] to build a smoothed classifier and derive its certified radius for a testing input. In SEaaS, a client treats the composition of the encoder and its downstream classifier as a base classifier, while a client treats its downstream classifier alone as a base classifier in REaaS. We use the public code [30] for Gaussian noise based randomized smoothing. Appendix B and C show the technical details of CROWN and Gaussian noise based randomized smoothing, respectively.
**Evaluation metrics:** Recall that REaaS aims to achieve three design goals. We can evaluate the generality goal by showing that REaaS supports both BC and SC based certification. For the efficiency goal, we use _#Queries per training (or testing) input_ to measure the communication cost between a client and the cloud server. Moreover, we use _running time per testing input_ on the cloud server to measure its computation cost. We do not consider running time on a client as it is the same in SEaaS and REaaS. Note that #Queries per training input also characterizes the computation cost per training input for the cloud server as it is linear to the number of queries. For the robustness goal, we use _average certified radius (ACR)_ of the correctly classified testing examples to measure the certified robustness of a base or smoothed classifier.
Note that there often exists a trade-off between robustness and accuracy for a classifier. Therefore, we further consider accuracy under adversarial perturbation as an evaluation metric. In particular, we consider the widely adopted _certified accuracy @ a perturbation size_, which is the fraction of testing inputs in a downstream testing dataset whose labels are correctly predicted and whose certified radii are no smaller than the given perturbation size. Certified accuracy @ a perturbation size is the least testing accuracy that a classifier can achieve no matter what adversarial perturbation is added to each testing input once its \(\ell_{2}\)-norm is at most the given perturbation size. The certified accuracy @ a perturbation size decreases as the perturbation size increases. ACR is the area under the certified accuracy vs. perturbation size curve (details are shown in Appendix A). Therefore, ACR can also be viewed as a metric to measure the robustness-accuracy trade-off of a classifier, where a larger ACR indicates a better trade-off.
**Parameter settings:** F2IPerturb-API has the following three parameters: \(\rho_{1}^{L}\) and \(\rho_{1}^{U}\), which specify the range of \(R\) in the first round of binary search, and \(\beta\), which characterizes the binary-search precision. We set \(\rho_{1}^{L}\) to be \(0\) and \(\rho_{1}^{U}\) to be \(10\). Note that they do not impact experimental results once \(\rho_{1}^{L}\) is set to \(0\) and \(\rho_{1}^{U}\) is set to a large value (e.g., 10). We set the default value of \(\beta\) as \(0.001\). We note that \(\beta\) has a negligible impact on certified accuracy and ACR. In particular, the absolute difference between the certified accuracy (or ACR) when \(\beta=0.001\) and that when \(\beta\) is an arbitrarily small value (e.g., \(10^{-50}\)) is smaller than \(0.001\).
Randomized smoothing has the following three parameters: the number of Gaussian noise samples \(N\), the standard deviation \(\sigma\) of the Gaussian noise, and the error probability \(\alpha\). Following prior work [13], unless otherwise mentioned, we set \(N=100,000\), \(\sigma=0.5\), and \(\alpha=0.001\). We set the default value of the hyperparameter \(\lambda\) in our pre-training method as \(0.00075\). We normalize pixel values to \([0,1]\).
### _Experimental Results_
We first show that REaaS achieves our three design goals, but SEaaS does not. Then, we show the impact of relevant factors on REaaS. In particular, we consider 1) different ways to pre-train an encoder, 2) image scaling, and 3) different hyperparameters of certification methods such as \(N\), \(\sigma\), and \(\alpha\) for randomized smoothing. Note that we fix all other parameters to their default values when studying the impact of one parameter on REaaS.
TABLE II: ACR and #Queries in SEaaS and REaaS.
**REaaS achieves the generality, efficiency, and robustness goals:** In SC based certification, a client respectively adds Gaussian noise to images and their feature vectors to train a base downstream classifier in SEaaS and REaaS. Thus, the certified robustness of the smoothed classifiers are not comparable even if we use the same standard deviation \(\sigma\) of Gaussian noise in SEaaS and REaaS. Therefore, we try multiple values of \(\sigma\) and report the largest ACR for each service. Moreover, we select \(\sigma\) values such that the largest ACR is not reached at the smallest or largest value of \(\sigma\), to ensure the largest ACR is found for each service. In particular, we try \(\sigma=0.125,0.25,0.5,0.75,1\) for both SEaaS and REaaS. We note that \(\sigma\) controls a tradeoff between certified accuracy without attacks (i.e., perturbation size is 0) and robustness. Specifically, a smaller \(\sigma\) can achieve a larger certified accuracy without attacks but also make the curve drop more quickly (i.e., less robust). ACR measures such trade-off, and thus we adopt the \(\sigma\) that achieves the largest ACR for each method when comparing the certified accuracy of SC based certification in SEaaS and REaaS.
Table II compares ACR and #Queries per training/testing input in SEaaS and REaaS, while Table III compares the running time per testing input for the server in SC for SEaaS and REaaS. We have the following observations. First, REaaS supports both BC and SC. Therefore, REaaS achieves the generality goal. In contrast, SEaaS only supports SC. Second, REaaS achieves the efficiency goal as it is much more efficient than SEaaS. Specifically, #Queries per training/testing input in REaaS is orders of magnitude smaller than that in SEaaS for SC. We note that a client using SEaaS could choose to train a base downstream classifier without adding noise to its training inputs to reduce the #Queries per training input to 1, or use a small \(N\) to reduce the #Queries per testing input. However, the smoothed classifier achieves (much) smaller ACRs in such cases as shown in Tables IV and V. Based on Table III, REaaS also incurs a much lower computation cost for the server than SEaaS.
Third, REaaS achieves the robustness goal as it achieves larger ACRs than SEaaS for SC. The reason is that, in SEaaS, the base classifier is the composition of an encoder and a base downstream classifier, but the client can only train the base downstream classifier with noise. In contrast, in REaaS, a client builds a smoothed classifier upon the base downstream classifier alone, which can be trained with noise, while the encoder is pre-trained in a robust way. Figure 7 in Appendix further compares the certified accuracy vs. perturbation size of SC in SEaaS and REaaS. We find that REaaS can achieve a better trade-off between accuracy without attacks and robustness than SEaaS. Specifically, REaaS achieves larger certified accuracy than SEaaS when the perturbation size is small. Moreover, the gap between the certified accuracy of SEaaS and REaaS is much larger when the perturbation size is small than that when the perturbation size is large.
**Impact of methods to pre-train encoders:** We can use different methods to pre-train an encoder in REaaS. Table VI and VII show ACRs in REaaS when different self-supervised
| Downstream dataset | N = 100 | N = 1,000 | N = 10,000 | N = 100,000 |
|---|---|---|---|---|
| CIFAR10 | 0.091 | 0.132 | 0.148 | 0.157 |
| SVHN | 0.130 | 0.186 | 0.211 | 0.226 |
| STL10 | 0.079 | 0.111 | 0.127 | 0.134 |

TABLE V: Impact of \(N\) on ACR for SC in SEaaS. The pre-training dataset is Tiny-ImageNet.
| Verification Method | Pre-training Method | ACR |
|---|---|---|
| BC | Non-robust MoCo | 0.012 |
| BC | RoCL | 0.016 |
| BC | Ours | 0.138 |
| SC | Non-robust MoCo | 0.020 |
| SC | RoCL | 0.024 |
| SC | Ours | 0.171 |

TABLE VI: Comparing the ACRs in REaaS for different downstream datasets when the encoders are pre-trained by different self-supervised learning methods. The pre-training dataset is Tiny-ImageNet.
| Downstream dataset | ACR (training with noise) | ACR (training without noise) |
|---|---|---|
| CIFAR10 | 0.157 | 0.106 |
| SVHN | 0.226 | 0.155 |
| STL10 | 0.134 | 0.088 |

TABLE IV: Training without noise vs. training with noise for SC in SEaaS. The pre-training dataset is Tiny-ImageNet.
| Service | Downstream dataset | Running time (s) per testing input |
|---|---|---|
| SEaaS | CIFAR10 | 73.77 |
| SEaaS | SVHN | 72.65 |
| SEaaS | STL10 | 73.48 |
| REaaS | CIFAR10 | 1.05 |
| REaaS | SVHN | 1.06 |
| REaaS | STL10 | 1.04 |

TABLE III: Comparing the running time per testing input for the cloud server in SC for SEaaS and REaaS. The pre-training dataset is Tiny-ImageNet.
learning methods are used to pre-train encoders. In particular, we consider non-robust MoCo [1], RoCL [14], and our robust pre-training method (i.e., MoCo with our spectral-norm regularization). Table VIII shows ACRs of REaaS when different supervised learning methods are used to pre-train encoders. In particular, we consider a standard, non-robust supervised learning method, adversarial training [16] (we use the default parameter settings in the authors' public implementation), and our robust pre-training method (i.e., standard supervised learning with our spectral-norm regularization). We only show results when the pre-training dataset is Tiny-ImageNet for supervised pre-training methods, as STL10 dataset only has a small number of labeled training images which are insufficient to pre-train high-quality encoders using supervised learning. We try \(\sigma=0.125,0.25,0.5,0.75,1\) and report the largest ACR for each pre-training method. As the results show, our robust pre-training method achieves substantially larger ACRs than existing methods for both supervised learning and self-supervised learning. Our method is better than RoCL and adversarial training because they aim to train empirically robust rather than certifiably robust encoders, and is better than MoCo and standard supervised learning because the encoders pre-trained by them are non-robust.
more quickly as the perturbation size increases. Figure 3 shows the impact of \(\lambda\) on ACR. Our results show that, for both BC and SC, ACR first increases as \(\lambda\) increases and then decreases after \(\lambda\) is larger than a certain value. The reason is that a larger or smaller \(\lambda\) leads to a worse trade-off between accuracy without attacks and robustness as shown in Figure 2.
**Impact of image rescaling:** To study the impact of image rescaling, we create downstream datasets with different input image sizes via resizing images in CIFAR10. Table IX shows the results on ACR and Figure 5 shows the results on certified accuracy. We find that, when the size of the images in a downstream dataset is larger (or smaller) than the input size of the encoder, the downstream input-space ACR is larger (or smaller) for both BC and SC. The reason is that down-scaling (or up-scaling) the downstream input images to be the same size as the input size of the encoder reduces (or enlarges) the perturbation in the downstream image space.
**Impact of \(N\), \(\sigma\), and \(\alpha\) for SC:** Figures 4 and 8 (in Appendix) show the impact of \(N\), \(\sigma\), and \(\alpha\) on ACR and certified accuracy of SC in REaaS. We have the following observations. First, both ACR and certified accuracy increase as \(N\) or \(\alpha\) increases. The reason is that the estimated certified radii are larger when \(N\) or \(\alpha\) is larger. Second, we find that \(\sigma\) achieves a trade-off between accuracy without attacks (i.e., perturbation size is 0) and robustness. In particular, a smaller \(\sigma\) can achieve a larger accuracy without attacks, but the curve drops faster as the perturbation size increases. Third, ACR first increases and then decreases as \(\sigma\) increases. The reason is that a smoothed classifier is less accurate without attacks when \(\sigma\) is larger and is less robust when \(\sigma\) is smaller.
**REaaS vs. white-box access to the encoder:** In REaaS, a client has black-box access to the encoder. We compare REaaS with the scenario where a client has white-box access to the encoder, e.g., the cloud server shares its encoder with a client. Specifically, with white-box access to the encoder, a client can use either BC or SC by treating the composition of the encoder and its downstream classifier as a base classifier. For BC, the client can use CROWN [29] to derive the certified radius of its base classifier for a testing input. For SC, the client can train/fine-tune the base classifier (both the encoder and downstream classifier) using training inputs with noise. The white-box scenario represents the upper-bound robustness a client can achieve. Therefore, comparing with the robustness in the white-box scenario enables us to understand how close our REaaS with the two APIs is to such upper bound. Table X compares the ACRs of REaaS and such white-box scenario. We find that REaaS can achieve comparable ACRs with the white-box scenario.
## VI Discussion
**Extension to \(\ell_{p}\)-norm adversarial perturbations:** We focus on certified robustness against \(\ell_{2}\)-norm adversarial perturbations in this work. The certified robustness can be extended to other \(\ell_{p}\)-norms, e.g., via leveraging the relationship between the \(\ell_{2}\)-norm and other \(\ell_{p}\)-norms. For instance, suppose the certified radius is \(R\) for an image in \(\ell_{2}\)-norm; the certified radius in \(\ell_{1}\)-norm
| Certification Method | Size of images in downstream dataset | ACR |
|---|---|---|
| BC | 16x16 | 0.082 |
| BC | 32x32 | 0.138 |
| BC | 64x64 | 0.303 |
| SC | 16x16 | 0.068 |
| SC | 32x32 | 0.153 |
| SC | 64x64 | 0.305 |

TABLE IX: Impact of image rescaling on ACR in REaaS. The pre-training dataset is Tiny-ImageNet and the downstream dataset is (or created from) CIFAR10. The input size of the encoder is 32x32.
Fig. 4: Impact of \(N\), \(\sigma\), and \(\alpha\) on ACR of SC in REaaS. The pre-training dataset is Tiny-ImageNet and the downstream dataset is CIFAR10.
Fig. 5: Impact of downstream input size on certified accuracy vs. perturbation size for BC and SC in REaaS. The pre-training dataset is Tiny-ImageNet and the downstream dataset is (or created from) CIFAR10. The input size of the pre-trained encoder is 32x32.
and \(\ell_{\infty}\)-norm can respectively be computed as \(R\) and \(\frac{R}{\sqrt{dim}}\), where \(dim\) is the product of the number of pixels and the number of channels in the image. Figure 6 shows the certified accuracy of SC in REaaS for \(\ell_{1}\)-norm and \(\ell_{\infty}\)-norm adversarial perturbations, where the \(\ell_{1}\)-norm and \(\ell_{\infty}\)-norm certified radii are obtained from \(\ell_{2}\)-norm certified radius with \(N=100,000\), \(\sigma=0.5\), and \(\alpha=0.001\).
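In code, this conversion is a one-liner; it only uses the relations \(\|\delta\|_{2}\leq\|\delta\|_{1}\) and \(\|\delta\|_{2}\leq\sqrt{dim}\,\|\delta\|_{\infty}\), and the function name is illustrative.

```python
import math

def convert_l2_radius(R, dim):
    """Turn an l2-norm certified radius into l1- and linf-norm radii."""
    return {"l1": R, "linf": R / math.sqrt(dim)}
```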
**Extending REaaS to natural language processing (NLP) domain:** An attacker can make a text classifier predict an incorrect label for a text by substituting a small number of words as their synonyms [31, 32, 33]. Our REaaS can also be applied to enable adversarially robust downstream text classifiers against those attacks by slightly adapting our F2IPerturb-API (please refer to Appendix E for details). Given a text and a feature-space certified radius, our adapted F2IPerturb-API returns an input-space certified radius, which is the maximum number of words that can be substituted such that the downstream classifier's predicted label for the text is unchanged. Table XI shows our experimental results (please refer to Appendix E for details of the experimental setup). Our results show that our REaaS is also applicable to NLP domain.
**Encoder stealing:** Our REaaS introduces a new F2IPerturb-API. A natural question is whether the new F2IPerturb-API makes the encoder more vulnerable to stealing attacks. We argue that the answer is probably no. The reason is that our new API returns a certified radius for a query image, which can also be obtained by an attacker via calling the existing Feature-API many times. However, an attacker may obtain such a certified radius with fewer queries using our new API. We explore whether certified radii can be exploited to assist encoder stealing. In particular, we extend StolenEncoder [34], which uses the Feature-API to steal encoders, to steal encoders using both the Feature-API and the F2IPerturb-API (see Appendix F for the details of StolenEncoder and its extended version as well as the experimental setup). Table XII shows our experimental results, where \(\gamma\) is a hyperparameter. Note that the total number of queries to the APIs made by the extended StolenEncoder is twice that of StolenEncoder in our comparison. Our results show that the downstream classifiers built upon stolen encoders obtained by StolenEncoder and its extended version achieve comparable accuracy, which implies that certified radii may not be able to assist encoder stealing.
**Privacy-preserving encoder as a service:** In both SEaaS and REaaS, a client sends his/her raw images to the cloud server. Therefore, an untrusted service provider may compromise the privacy of the client, especially when the downstream datasets contain sensitive images such as facial and medical images. We believe it is an interesting future work to develop privacy-preserving encoder as a service. For instance, we can leverage (local) differential privacy [37, 38, 39], secure hardware [40], and cryptography [41, 42] based methods.
**Other attacks to pre-trained encoders:** In this work, we focus on adversarial examples [5, 6]. Some recent studies [43, 44, 45] show that pre-trained encoders are also vulnerable to poisoning and backdoor attacks, which are orthogonal to our work. We believe it is an interesting future work to extend our framework to defend against those attacks.
TABLE XI: ACR and #Queries of REaaS in NLP domain, where BC is used. The pre-training dataset is SST-2 [35] and the downstream dataset is IMDB [36].

TABLE XII: Comparing StolenEncoder and its extended version using our F2IPerturb-API. The pre-training dataset is Tiny-ImageNet and the downstream dataset is CIFAR10.

TABLE X: Comparing the ACRs of REaaS and the white-box scenario for different downstream datasets. The pre-training dataset is Tiny-ImageNet.
Fig. 6: **Certified accuracy vs. perturbation size of SC in REaaS under \(\ell_{1}\)-norm and \(\ell_{\infty}\)-norm adversarial perturbations. The pre-training dataset is Tiny-ImageNet and the downstream dataset is CIFAR10.**
## VII Conclusion and Future Work
In this work, we show that, via providing two APIs, a cloud server 1) makes it possible for a client to certify robustness of its downstream classifier against adversarial perturbations using any certification method and 2) makes it orders of magnitude more communication efficient and more computation efficient to certify robustness using smoothed classifier based certification. Moreover, when the cloud server pre-trains the encoder via considering our spectral-norm regularization term, it achieves better certified robustness for the clients' downstream classifiers. Interesting future work includes extending REaaS to poisoning and backdoor attacks as well as designing both robust and privacy-preserving encoder as a service.
## Acknowledgements
We thank the anonymous reviewers for the constructive comments. This work was supported by NSF under grant No. 2131859, 2112562, and 1937786, as well as ARO under grant No. W911NF2110182.
|
2310.14826 | Sharp error bounds for imbalanced classification: how many examples in
the minority class? | When dealing with imbalanced classification data, reweighting the loss
function is a standard procedure allowing to equilibrate between the true
positive and true negative rates within the risk measure. Despite significant
theoretical work in this area, existing results do not adequately address a
main challenge within the imbalanced classification framework, which is the
negligible size of one class in relation to the full sample size and the need
to rescale the risk function by a probability tending to zero. To address this
gap, we present two novel contributions in the setting where the rare class
probability approaches zero: (1) a non asymptotic fast rate probability bound
for constrained balanced empirical risk minimization, and (2) a consistent
upper bound for balanced nearest neighbors estimates. Our findings provide a
clearer understanding of the benefits of class-weighting in realistic settings,
opening new avenues for further research in this field. | Anass Aghbalou, François Portier, Anne Sabourin | 2023-10-23T11:45:34Z | http://arxiv.org/abs/2310.14826v2 | # Sharp error bounds for imbalanced classification: how many examples in the minority class?
###### Abstract
When dealing with imbalanced classification data, reweighting the loss function is a standard procedure allowing to equilibrate between the true positive and true negative rates within the risk measure. Despite significant theoretical work in this area, existing results do not adequately address a main challenge within the imbalanced classification framework, which is the negligible size of one class in relation to the full sample size and the need to rescale the risk function by a probability tending to zero. To address this gap, we present two novel contributions in the setting where the rare class probability approaches zero: (1) a non asymptotic fast rate probability bound for constrained balanced empirical risk minimization, and (2) a consistent upper bound for balanced nearest neighbors estimates. Our findings provide a clearer understanding of the benefits of class-weighting in realistic settings, opening new avenues for further research in this field.
## 1 Introduction
Consider the problem of binary classification with covariate \(X\) and target \(Y\in\{-1,1\}\). The flagship approach to this problem in statistical learning is Empirical Risk Minimization (ERM), which produces approximate minimizers of \(\mathcal{R}(g)=\mathbb{E}\left[\ell(g(X),Y)\right]\), given a loss function \(\ell\) and a family of candidate classifiers \(g\in\mathcal{G}\), with the help of observed data. For a classifier \(g\), we write \(\ell_{g}(X,Y)=\ell(g(X),Y)\). However, when the underlying distribution is imbalanced, that is, when \(p=\mathbb{P}(Y=+1)\) is relatively small, minimizing an empirical version of \(\mathcal{R}\) often leads to trivial classification rules for which the majority class is always predicted, because minimizing \(\mathcal{R}(g)\) in that case is similar to minimizing \(\mathbb{E}\left[\ell(g(X),Y)\,|\,Y=-1\right]\). Indeed by the law of total probabilities, \(\mathcal{R}(g)=p\mathbb{E}\left[\ell(g(X),Y)\,|\,Y=+1\right]+(1-p)\mathbb{E} \left[\ell(g(X),Y)\,|\,Y=-1\right]\) and the former term is negligible with respect to the latter when \(p\ll 1\). For this reason, even though standard ERM approaches might enjoy satisfactory generalization properties over imbalanced distributions, with respect to the standard risk \(\mathcal{R}\), they may lead to unpleasantly high false
negative rates and in general the average error on the minority class has no reason to be small, as its contribution to the overall risk \(\mathcal{R}\) is negligible. This is typically what should be avoided in many applications when false negatives are of particular concern, among which medical diagnosis or anomaly detection for aircraft engines, considering the tremendous cost of an error regarding a positive example.
Bypassing the shortcoming described above is the main goal of many works regarding imbalanced classification. The existing literature may be roughly divided into oversampling approaches such as SMOTE and GAN (Chawla et al., 2002; Mariani et al., 2018), undersampling procedures (Liu et al., 2009; Triguero et al., 2015) and risk balancing procedures also known as _cost-sensitive learning_(Scott, 2012; Xu et al., 2020). Here we focus on the latter approach which enjoys numerous benefits, including simplicity, improved decision-making (Elkan, 2001a; Viaene and Dedene, 2005), improved class probability estimation (Wang et al., 2019; Fu et al., 2022), better resource allocation (Xiong et al., 2015; Ryu et al., 2017) and increased fairness (Menon and Williamson, 2018; Agarwal et al., 2018). By incorporating the varying costs of misclassification into the learning process, it enables models to make more informed and accurate predictions for the minority class, leading to higher-quality predictions. Balancing the risk consists of minimizing risk measures that differ significantly from the standard empirical risk, by means of an appropriate weighting of the negative and positive errors, in order to achieve a balance between the contributions of the positive and negative classes to the overall risk. In the present paper we consider the balanced-risk, \(\mathcal{R}_{\mathrm{bal}}(g)=\mathbb{E}\left[\ell(g(X),Y)\,|\,Y=+1\right]+ \mathbb{E}\left[\ell(g(X),Y)\,|\,Y=-1\right]\). Other metrics might be considered as detailed for instance in Table 1 in Menon et al. (2013) which we do not analyze here for the sake of conciseness, even though our techniques of proof may be straightforwardly extended to handle these variants.
Empirical risk minimization based on the balanced risk is a natural idea, which is widely exploited by practitioners and has demonstrated its practical relevance in several operational contexts (Elkan, 2001b; Sun et al., 2007; Wang et al., 2016; Khan et al., 2018; Pathak et al., 2022). From a theoretical perspective, class imbalance has been the subject of several works. For instance, the consistency of the resulting classifier is investigated in Koyejo et al. (2014). Several different risk measures and loss functions are considered in Menon et al. (2013) where results of asymptotic nature are established, for fixed \(p>0\), as \(n\rightarrow\infty\). Also in the recent work by Xu et al. (2020), generalization bounds are established for the imbalanced multi-class problem for a robust variant of the balanced risk considered here. Their main results from the perspective of class imbalance, is their Theorem 1 where the upper bound on the (robust) risk includes a term scaling as \(1/(p\sqrt{n})\). A related subject is weighted ERM where the purpose is to learn from biased data (see _e.g._ Vogel et al. (2020); Bertail et al. (2021) and the references therein), that is, the training distribution and the target distribution differ. The imbalanced classification problem may be seen as a particular instance of this transfer learning problem, where the training distribution is imbalanced and the target is a balanced version of it with equal class weights. A necessary assumption in Bertail et al. (2021) is that the density of the target with respect to the source is bounded, which in our context is equivalent to requiring that \(p\) is bounded away from \(0\), an explicit assumption in Vogel et al. (2020) where the main results impose that \(p>\epsilon\) for some fixed \(\epsilon>0\).
The common working assumption in the cited references, that \(p\) is bounded from below, renders their application disputable in concrete situations where the number of positive examples is negligible with respect to a wealth of negative instances. To our best knowledge the literature is silent regarding such a situation. More precisely, we have found neither
asymptotic results covering the case where \(p\) depends on \(n\) in such a way that \(p\to 0\) as \(n\to\infty\); nor finite sample bounds which would remain sharp even in situations where \(p\) is much smaller than \(1/\sqrt{n}\). Such situations arise in many examples in machine learning (see _e.g._ the motivating examples in the next section). However, existing works assume that the sizes of both classes are of comparable magnitude, which leaves a gap between theory and practice. A possible explanation is that existing works do not exploit the full potential of the _low variance_ of the loss functions on the minority class typically induced by boundedness assumptions combined with a low expected value associated with a small \(p\).
It is the main purpose of this work to overcome this bottleneck and obtain generalization guarantees for the balanced risk which remain sharp even for very small \(p\), that is, under severe class imbalance. Our purpose is to obtain upper bounds on the deviations of the empirical risk (and thus on the empirical risk minimizer) matching the state of the art, up to replacing the sample size \(n\) with \(np\), the mean size of the rare class. To our best knowledge, the theoretical results which come closest to this goal are normalized Vapnik-type inequalities (Theorem 1.11 in Lugosi (2002)) and relative deviations (Section 5.1 in Boucheron et al. (2005)). However the latter results only apply to binary valued functions and as such do not extend immediately to general real valued loss functions which we consider in this paper, nor do they yield fast rates for _imbalanced_ classification problems, although relative deviations play a key role in establishing fast rates in _standard_ classification as reviewed in Section 5 from Boucheron et al. (2005). Also, as explained above, we have not found any theoretical result regarding imbalanced classification which would leverage these bounds in order to obtain guarantees with leading terms depending on \(np\) instead of \(n\).
Our main tools are (\(i\)) Bernstein-type concentration inequalities (that is, upper bounds including a variance term) for empirical processes that are consequences of Talagrand inequalities such as in Gine and Guillou (2001), (\(ii\)) fine controls of the expected deviations of the supremum error in the vicinity of the Bayes classifier, by means of local Rademacher complexities Bartlett et al. (2005); Bartlett and Mendelson (2006). Our contributions are two-fold.
**1.** We establish an estimation error bound on the balanced risk which holds true for VC classes of functions, which scales as \(1/\sqrt{np}\) instead of the typical rate \(1/\sqrt{n}\) in well-balanced problems, or \(1/(p\sqrt{n})\) in existing works regarding the imbalanced case (_e.g._ as in Xu et al. (2020)). Thus, in practice, our setting encompasses the case where \(p\ll 1\) (severe class imbalance) and our upper bound constitutes a crucial improvement by a factor \(\sqrt{p}\) compared with existing works in imbalanced classification. Applying the previous bound to the \(k\)-nearest neighbor classification rule, we obtain the following new consistency result: as soon as \(kp\) goes to infinity, the nearest neighbors classification rule is consistent in case of relative rarity.
**2.** We obtain fast rates for empirical risk minimization procedures under an additional classical assumption called a Bernstein condition. Namely we prove upper bounds on the excess risk scaling as \(1/(np)\), which matches fast rate results in the standard, balanced case, up to replacing the full sample size \(n\) with the expected minority class size \(np\). To our best knowledge such fast rates are the first of their kind in the imbalanced classification literature.
**Outline.** Some mathematical background about imbalanced classification and some motivating examples are given in Section 2. In Section 3, we state our first non-asymptotic bound on the estimation error over VC class of functions and consider application to \(k\)-nearest neighbor classification rules. In Section 4, fast convergence rates are obtained and an application to ERM is given. Finally, some numerical experiments are provided in Section 5 to illustrate
the theory developed in the paper. All proofs of the mathematical statements are in the supplementary material.
## 2 Definition and notation
Consider a standard binary classification problem where random covariates \(X\), defined over a space \(\mathcal{X}\), are employed to distinguish between two classes defined by their labels \(Y=1\) and \(Y=-1\). The underlying probability measure is denoted by \(\mathbb{P}\) and the associated expectation by \(\mathbb{E}\). The law of \((X,Y)\) on the sample space \(\mathcal{X}\times\mathcal{Y}:=\mathcal{X}\times\{-1,1\}\) is denoted by \(P\). We assume that the label \(Y=1\) corresponds to the minority class, i.e., \(p=\mathbb{P}(Y=1)\ll 1\). In the sequel we assume that \(p>0\), even though \(p\) may be arbitrarily small.
We adopt notation from empirical process theory. Given a measure \(\mu\) on \(\mathcal{X}\times\mathcal{Y}\) and a real function \(f\) defined over \(\mathcal{X}\times\mathcal{Y}\), we denote \(\mu(f)=\int fd\mu\). When \(f=\mathds{1}_{C}\) for a measurable set \(C\), we may write interchangeably \(\mu(f)=\mu(\mathds{1}_{C})=\mu(C)\). We denote by \(P_{+}\) the conditional law of \((X,Y)\) given that \(Y=+1\), thus
\[P_{+}(f)=\frac{\mathbb{E}(f(X,Y)\mathds{1}\{Y=1\})}{p}=\mathbb{E}(f(X,Y)\mid Y =1).\]
In addition, we denote by \(\mathrm{Var}_{+}(f)\) the conditional variance of \(f(X,Y)\) given that \(Y=+1\). The conditional distribution and variance \(P_{-}\) and \(\mathrm{Var}_{-}\) are defined similarly, conditional to \(Y=-1\).
In this paper we consider general discrimination functions (also called _scores_) \(g:\mathcal{X}\to\mathbb{R}\) and loss functions \(\ell:\mathbb{R}\times\{-1,1\}\to\mathbb{R}\), and our results will hold under boundedness and Vapnik-type complexity assumptions detailed below in Sections 3, 4. Given a score function \(g\) and a loss \(\ell\), it is convenient to introduce the function \(\ell_{g}:(x,y)\mapsto\ell(g(x),y)\). With this notation the (unbalanced) risk of the score function \(g\) is \(\mathcal{R}(g)=\mathbb{E}[\ell_{g}\left(X,Y\right)]\). Notice that the standard \(0-1\) misclassification risk, \(\mathcal{R}^{0-1}(g)=\mathbb{P}\left(g(X)\neq Y\right)\), is retrieved when \(g\) takes values in \(\{-1,1\}\) and \(\ell(g(x),y)=\mathds{1}\{g(x)\neq y\}\), or when \(g\) is real valued and \(\ell(g(x),y)=\mathrm{sign}(-g(x)y)\). Allowing for more general scores and losses is a standard approach in statistical learning that allows bypassing the NP-hardness of the minimization problem associated with \(\mathcal{R}^{0-1}\). Typically (although this is not formally required for our results to hold), the function \(\ell_{g}\) takes the form \(\ell_{g}(x,y)=\phi\left(g(x)y\right)\), where \(\phi\) is convex and differentiable with \(\phi^{\prime}(0)<0\) (Zhang, 2004; Bartlett et al., 2006). This ensures that the loss is classification calibrated and that \(\mathcal{R}(g)=\mathbb{E}\left[\ell_{g}\left(X,Y\right)\right]\) is a convex upper bound of \(\mathcal{R}^{0-1}(g)\). Various consistency results ensuring that \(g^{\star}=\arg\min_{g\in\mathbb{R}^{\mathcal{X}}}\mathcal{R}(g)=\arg\min_{g \in\mathbb{R}^{\mathcal{X}}}\mathcal{R}^{0-1}(g)\) can be found in Bartlett et al. (2006). Examples include the logistic (\(\phi(u)=\log(1+e^{-u})\)), exponential (\(\phi(u)=e^{-u}\)), squared (\(\phi(u)=(1-u)^{2}\)), and hinge loss (\(\phi(u)=\max(0,1-u)\)).
The balanced \(0-1\) risk is defined as \(\mathcal{R}^{0-1}_{\mathrm{bal}}(g)=(P_{+}(Y\neq g(X))+P_{-}(Y\neq g(X)))/2\) and is referred to as the _AM measure_ in existing literature (Menon et al., 2013). The minimizer of the latter risk, \(g^{\star}_{bal}\), is known as the balanced Bayes classifier. It returns \(1\) when \(\eta(X)=\mathbb{P}(Y=1\,|\,X)\geq p\) and \(-1\) otherwise (refer to Theorem 2 or Proposition 2 in Koyejo et al. (2014)). In the present paper we consider a general balanced risk allowing for a real-valued loss function \(\ell_{g}\), defined for \(g\in\mathcal{G}\) as
\[\mathcal{R}_{\mathrm{bal}}(g)=\frac{1}{2}\left(P_{+}(\ell_{g})+P_{-}(\ell_{g}) \right).\]
Given an independent and identically distributed sample \((X_{i},Y_{i})_{1\leq i\leq n}\) drawn according to \(P\), we denote by \(P_{n}\) the empirical measure, \(P_{n}(f)=(1/n)\sum_{i=1}^{n}f(X_{i},Y_{i})\), for any measurable and real-valued function \(f\) on \(\mathcal{X}\times\mathcal{Y}\). While the standard risk estimate is simply expressed as \(P_{n}(\ell_{g})\) for any \(g\in\mathcal{G}\), the balanced empirical risk is necessarily defined in terms of the empirical conditional measures,
\[P_{n,+}(f)=\frac{P_{n}(f\mathds{1}\{Y=1\})}{p_{n}},\]
where by convention \(P_{n,+}(f)=0\) when \(p_{n}=P_{n}(Y=1)=0\). The empirical measure of the negative class, \(P_{n,-}\), is defined in a similar manner. Finally the balanced empirical risk considered in this paper is
\[\mathcal{R}_{n,\text{bal}}(g)=\frac{1}{2}\left(P_{n,+}(\ell_{g})+P_{n,-}(\ell_ {g})\right).\]
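In practice, these empirical quantities are straightforward to compute from a labelled sample. The following Python/NumPy sketch (our own illustrative code, using the convention above that an empty class contributes \(0\)) mirrors the definitions:

```python
import numpy as np

def balanced_empirical_risk(losses, y):
    """Balanced empirical risk (1/2) * (P_{n,+}(ell_g) + P_{n,-}(ell_g)).

    losses : per-sample values ell_g(X_i, Y_i)
    y      : labels in {-1, +1}
    """
    pos, neg = (y == 1), (y == -1)
    risk_pos = losses[pos].mean() if pos.any() else 0.0   # P_{n,+}(ell_g)
    risk_neg = losses[neg].mean() if neg.any() else 0.0   # P_{n,-}(ell_g)
    return 0.5 * (risk_pos + risk_neg)
```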
**Motivating Examples.** We now present two examples where the probability \(p\to 0\) as \(n\to\infty\):
1. The first example is the problem of contaminated data which is central in the robustness literature. A common theoretical assumption is that the number of anomalies \(n_{0}\) grows sub-linearly with the sample size, as discussed in (Xu et al., 2012; Staerman et al., 2021). In this context, \(n_{0}=n^{a}\) for some \(a<1\) and consequently, \(p=n^{a-1}\to 0\).
2. The second example pertains to Extreme Value Theory (EVT) (Resnick, 2013; Goix et al., 2015; Jalalzai et al., 2018; Aghbalou et al., 2023). Consider a continuous positive random variable \(T\); predicting exceedances over an arbitrarily high threshold \(t\) may be viewed as a binary classification problem. Indeed, for fixed \(t\), consider the binary target \(Y=\mathds{1}\{T>t\}-\mathds{1}\{T\leq t\}\) with marginal class probability \(p=P(T>t)\). The goal is thus to predict \(Y\) by means of the covariate vector \(X\). One major goal of EVT is to learn a classification model for extremely high thresholds \(t\). In practice, EVT-based approaches set the threshold \(t\) as the \(1-\alpha\) quantile of \(T\) with \(\alpha=k/n\to 0\) and \(k=o(n)\). This approach essentially assumes that the positive class consists of the \(k=o(n)\) largest observations of \(T\), so that \(P(T>t)=P(Y=1)=k/n\to 0\) (see the sketch after this list).
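To make the EVT example concrete, the binary target can be constructed from the largest observations of \(T\) as follows (an illustrative Python/NumPy sketch under the assumptions of the example; the function name is ours):

```python
import numpy as np

def exceedance_labels(t_values, k):
    """Binary target for threshold exceedances: the positive (rare) class is
    made of (roughly) the k largest observations, so p = P(Y=1) ~ k/n."""
    n = len(t_values)
    t = np.quantile(t_values, 1.0 - k / n)   # threshold at the (1 - k/n) quantile
    y = np.where(t_values > t, 1, -1)
    return y, t
```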
## 3 Standard learning rates under relative rarity
### Concentration bound
The primary goal of this paper is to assess the error associated with estimating the balanced risk \(\mathcal{R}_{\text{bal}}(g)\) using the empirical balanced risk \(\mathcal{R}_{n,\text{bal}}(g)\). Given the definition of the balanced risk, the quantity of interest takes the form \((P_{n,+}-P_{+})(f)\), and a similar analysis applies to \((P_{n,-}-P_{-})(f)\). In this paper we control the complexity of the function class _via_ the following notion of VC-complexity.
**Definition 3.1**.: The family of functions \(\mathcal{F}\) is said to be of VC-type with constant envelope \(U>0\) and parameters \((v,A)\) if \(\mathcal{F}\) is bounded by \(U\) and, for any \(0<\epsilon<1\) and any probability measure \(Q\) on \((S,\mathcal{S})\), we have
\[\mathcal{N}\left(\mathcal{F},L_{2}(Q),\epsilon U\right)\leq(A/\epsilon)^{v}.\]
The connection between the usual VC definition (Vapnik and Chervonenkis, 1971) and Definition 3.1 can be directly established through Haussler's inequality (Haussler, 1995), which indicates that the covering number of a class of binary classifiers with VC dimension \(v\) (in the sense of Vapnik and Chervonenkis (1971)) is bounded by
\[\mathcal{N}\left(\mathcal{F},L_{2}(Q),\epsilon\right)\leq Cv(4e)^{v}\epsilon^{-2v}=\left(\frac{2\sqrt{e}\,(Cv)^{1/(2v)}}{\epsilon}\right)^{2v},\]
for some universal constant \(C>0\). Thus a VC-class of functions in the sense of Vapnik and Chervonenkis (1971) is necessarily a VC-type class in the sense of Definition 3.1.
Notice that within a class \(\mathcal{F}\) with envelope \(U>0\), the following variance bounds are automatically satisfied:
\[\sigma_{+}^{2}:=\sup_{f\in\mathcal{F}}\mathrm{Var}_{+}(f)\leq U^{2}\qquad\text{and}\qquad\sigma_{-}^{2}:=\sup_{f\in\mathcal{F}}\mathrm{Var}_{-}(f)\leq U^{2}.\]
The following theorem states a uniform generalization bound that incorporates the probability of each class in such a way that the deviations of the empirical measures are controlled by the expected number of examples in each class, \(np\) and \(n(1-p)\). Interestingly the deviations may be small even for small \(p\), as soon as the product \(np\) is large. The bound also incorporates the conditional variance of a class \((\sigma_{+}^{2},\sigma_{-}^{2})\), which will play a key role in our application to nearest neighbors.
**Theorem 3.2**.: _Let \(\mathcal{F}\) be of VC-type with constant envelope \(U\) and parameters \((v,A)\). For any \(n\) and \(\delta\) such that_
\[np\geq\max\left[\frac{U^{2}}{\sigma_{+}^{2}}v\log\left(K^{\prime}A/\left(2 \delta\sqrt{p}\right)\right),8\log(1/\delta)\right]\]
_we have with probability \(1-\delta\),_
\[\sup_{f\in\mathcal{F}}|P_{n,+}(f)-P_{+}(f)|\leq 4K^{\prime}\sigma_{+}\sqrt{\frac{v}{np}\log\left(K^{\prime}A/(2\delta\sqrt{p})\right)}\]
_for some universal constant \(K^{\prime}>0\). We also have, with probability \(1-\delta\),_
\[\sup_{f\in\mathcal{F}}|P_{n,-}(f)\,-\,P_{-}(f)|\leq 4K^{\prime}\sigma_{-} \sqrt{\frac{v}{n(1-p)}\log\left(K^{\prime}A/(2\delta\sqrt{(1-p)})\right)}.\]
**Remark 3.1**.: _This upper bound extends Theorem 1.11 in Lugosi (2002), which is limited to a binary class of functions characterized by finite shatter coefficients. The extension is made possible by utilizing results from Plassier et al. (2023). It is crucial to recognize that all existing non-asymptotic statistical rates in the imbalanced classification literature (Menon et al., 2013; Koyejo et al., 2014; Xu et al., 2020) scale as \(1/(p_{n}\sqrt{n})\), leading to a trivial upper bound when \(p_{n}\leq 1/\sqrt{n}\). In our analysis, the upper bound remains informative provided that \(np_{n}\to\infty\), thereby emphasizing the merits of using concentration inequalities incorporating the variance of the positive class, \(\mathrm{Var}(f\mathds{1}\{Y=1\})\leq U^{2}p\ll 1\)._
The next corollary, which derives from Theorem 3.2 together with standard arguments, provides generalization guarantees for ERM algorithms based upon the balanced risk. Namely it gives an upper bound on the excess risk of a minimizer of the balanced risk. The proof is provided in the supplementary material for completeness.
**Corollary 3.3**.: Suppose that \(\{\ell_{g}\,:\,g\in\mathcal{G}\}\) is of VC-type with envelope \(U\) and parameters \((v,A)\). Under the conditions of Theorem 3.2, we have, with probability \(1-\delta\),
\[\mathcal{R}_{\mathrm{bal}}\left(\hat{g}_{\mathrm{bal}}\right)\leq\mathcal{R}_ {\mathrm{bal}}\left(g_{\mathrm{bal}}^{\star}\right)+4K^{\prime}\sigma_{\mathrm{ max}}\sqrt{\frac{v\log\left(K^{\prime}A/\left(2\delta\sqrt{p}\right)\right)}{np}},\]
where \(\sigma_{\mathrm{max}}=\max\left(\sigma_{+},\sigma_{-}\right)\leq U\) and \(K^{\prime}>0\) is a universal constant.
The previous result shows that whenever \(np\to\infty\), learning from ERM based on a VC class of functions is consistent. Another application of our result pertains to \(k\)-nearest neighbor classification algorithms. In this case the sharpness of our bound is fully exploited by leveraging the variance term \(\sigma_{+}\). This is the subject of the next section.
### Balanced \(k\)-nearest neighbor
In the context of imbalanced classification, we consider here a balanced version of the standard \(k\)-nearest neighbor (\(k\)-NN for short) rule, which is designed in relation with the balanced risk \(R_{bal}^{*}(g)\). We establish the consistency of the balanced \(k\)-NN classifier with respect to the balanced risk.
Let \(x\in\mathbb{R}^{d}\) and \(\|\cdot\|\) be the Euclidean norm on \(\mathbb{R}^{d}\). Denote by \(B(x,\tau)\) the set of points \(z\in\mathbb{R}^{d}\) such that \(\|x-z\|\leq\tau\). For \(n\geq 1\) and \(k\in\{1,\;\ldots,\;n\}\), the \(k\)-NN radius at \(x\) is defined as
\[\hat{\tau}_{n,k,x}:=\inf\left\{\tau\geq 0\,:\,\sum_{i=1}^{n}1_{B(x,\tau)}(X_{i })\geq k\right\}.\]
Let \(I_{n}(x)\) be the set of indices \(i\) such that \(X_{i}\in B(x,\hat{\tau}_{n,k,x})\) and define the estimate of the regression function \(\eta(x)\) as
\[\hat{\eta}_{n}(x)=\frac{1}{k}\sum_{i\in I_{n}(x)}\mathds{1}_{Y_{i}=1}.\]
While the standard \(k\)-NN classification rule is a majority vote based on \(\hat{\eta}_{n}(x)\), i.e., it predicts \(1\) whenever \(\hat{\eta}_{n}(x)\geq 1/2\), it is natural, in view of the well-known results recalled in Section 2, to consider a balanced classifier \(\hat{g}_{n}\) for imbalanced data that predicts \(1\) whenever \(\hat{\eta}_{n}(x)\geq p_{n}\), that is, \(\hat{g}_{n}=\mathrm{sign}(\hat{\eta}_{n}(x)/p_{n}-1)\).
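The balanced decision rule can be summarized by the following Python/NumPy sketch (our own illustrative code; ties in the \(k\)-NN radius are ignored for simplicity):

```python
import numpy as np

def balanced_knn_predict(x, X_train, y_train, k):
    """Balanced k-NN rule: predict +1 when eta_hat(x) >= p_n,
    i.e. g_hat(x) = sign(eta_hat(x) / p_n - 1)."""
    p_n = np.mean(y_train == 1)                   # empirical rare-class probability
    dists = np.linalg.norm(X_train - x, axis=1)
    knn_idx = np.argsort(dists)[:k]               # indices of the k nearest neighbours
    eta_hat = np.mean(y_train[knn_idx] == 1)      # estimated regression function
    return 1 if eta_hat >= p_n else -1
```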
The analysis of the \(k\)-NN classification rule is conducted for covariates \(X\) that admit a density with respect to the Lebesgue measure. We will need in addition that the support \(S_{X}\) is well shaped and that the density is lower bounded. These standard regularity conditions in the \(k\)-NN literature are recalled below.
* **(X1)** The random variable \(X\) admits a density \(f_{X}\) with compact support \(S_{X}\subset\mathbb{R}^{d}\).
* **(X2)** There are \(c>0\) and \(T>0\) such that \[\forall\tau\in(0,T],\,\forall x\in S_{X},\,\lambda(S_{X}\cap B(x,\tau))\geq c \lambda(B(x,\tau)),\] where \(\lambda\) is the Lebesgue measure.
* **(X3)** There are \(0<b_{X}\leq U_{X}<+\infty\) such that \[b_{X}\leq f_{X}(x)\leq U_{X},\qquad\forall x\in S_{X}.\]
In light of Proposition A.4 (stated in the supplement), we consider the estimation of \(\nu^{*}(x):=\eta(x)/p\) using the \(k\)-NN estimate \(\hat{\eta}_{n}/p_{n}\). The proof, which is postponed to the supplementary file, crucially relies on arguments from the proof of our Theorem 3.2 combined with known results concerning the VC dimension of Euclidean balls (Wenocur and Dudley, 1981).
**Theorem 3.4**.: _Suppose that (X1) (X2) and (X3) are fulfilled and that \(x\mapsto\eta(x)/p\) is \(L\)-Lipschitz on \(S_{X}\). Then whenever \(pn/\log(n)\to\infty\), \(k/\log(n)\to\infty\) and \(k/n\to 0\), we have, with probability \(1\),_
\[\sup_{x\in\mathcal{X}}|\hat{\eta}_{n}(x)/p_{n}-\nu^{*}(x)|=O\left(\sqrt{\frac {\log(n)}{kp}}+\left(\frac{k}{n}\right)^{1/d}\right).\]
The consistency of the balanced \(k\)-NN with respect to the AM risk, encapsulated in the next corollary, follows from Theorem 3.4 combined with an additional result (Proposition A.4) relating the deviations of the empirical regression function with the excess balanced risk.
**Corollary 3.5**.: Suppose that (X1) (X2) and (X3) are fulfilled and that \(x\mapsto\eta(x)/p\) is \(L\)-Lipschitz on \(S_{X}\). Then whenever \(p\leq 1/2\), \(kp/\log(n)\to\infty\) and \(k/n\to 0\), we have, with probability \(1\),
\[\mathcal{R}^{*}_{bal}(\hat{g}_{n})\to\mathcal{R}^{*}_{bal}(g^{*}_{\mathrm{bal} }).\]
The principal interest of Corollary 3.5 is that the condition for consistency involves the product of the number of neighbors \(k\) with the rare class probability \(p\). The take-home message is that learning nonparametric decision rules is possible with imbalanced data, as soon as \(kp\) is large enough. In other words, the local averaging should be done carefully so as to ensure a sufficiently large _expected_ number of neighbors from the rare class.
## 4 Fast rates under relative rarity
### A concentration bound for balanced measures
We now state and prove a concentration inequality that is key to obtain fast convergence rates for excess risk in the context of balanced ERM. Prior to stating this main result, we define a weighted class \(\widetilde{\mathcal{F}}\) as
\[\widetilde{\mathcal{F}}=\left\{\frac{1}{2}fI_{1}+\frac{1}{2}\frac{p}{1-p}fI_{- 1}\mid f\in\mathcal{F}\right\},\]
where \(I_{s}(x,y)=\mathds{1}\{y=s\}\) for \((x,y)\in\mathcal{X}\times\mathcal{Y}\) and \(s\in\{-1,1\}\). Moreover, for a given measure \(P\), we denote the balanced counterpart of \(P\) by \(P_{\mathrm{bal}}(f)=\frac{1}{2}\left(P_{+}(f)+P_{-}(f)\right)\).
**Theorem 4.1**.: _Suppose that \(\mathcal{F}\) is of VC-type with envelope \(U\geq 1\) and parameters \(v,A\geq 1\). Assume that there is some constant \(B\) such that for every \(\tilde{f}\in\widetilde{\mathcal{F}}\), \(P(\tilde{f}^{2})\leq BP(\tilde{f})\). Then, with \(c_{1}=5\), \(c_{2}=22U\), for any \(K>1\) and every \(\delta>0\), with probability at least \(1-3\delta\),_
\[\forall f\in\mathcal{F}\quad P_{\mathrm{bal}}(f)\leq\frac{K}{K-1}P_{n, \mathrm{bal}}(f)\Bigg{(}1+\sqrt{\frac{3\log(1/\delta)}{np}}\Bigg{)}+\frac{DK} {B}\frac{\log(An)^{2}}{np}+U_{B,K}\frac{\log(1/\delta)}{np}.\]
_Also, with probability at least \(1-2\delta\),_
\[\forall f\in\mathcal{F}\quad P_{n,\mathrm{bal}}(f)\Bigg{(}1-\sqrt{\frac{2 \log(1/\delta)}{np}}\Bigg{)}\leq\frac{K+1}{K}P_{\mathrm{bal}}(f)+\frac{DK}{B} \frac{\log(An)^{2}}{np}+U_{B,K}\frac{\log(1/\delta)}{np},\]
_where \(D=8^{\frac{1}{\epsilon}}(v+1)CAUC_{1}C_{2}\), \(C>0\) is a universal constant, \(C_{1}=1/\sqrt{\log(8A)}\), \(C_{2}=\sqrt{2}\left(\max(\log(4AU)/\log(8A),1)+\sqrt{2}\right)\) and \(U_{B,K}=c_{2}+c_{1}BK\)._
Sketch of proof.: The main tool for the proof is Theorem 3.3 in Bartlett et al. (2005), recalled for completeness in the supplementary material (Theorem A.10). More precisely, the argument from the cited reference relies heavily on a fixed-point technique relative to a sub-root function upper bounding a local variance term. We establish that the fixed point \(r^{\star}\) involved in the argument satisfies an inequality of the form
\[r^{\star}\leq O\left(\frac{\log(A/r^{\star})}{\sqrt{n}}\right).\]
Using this inequality along with the latter theorem and Lemma 7 from Cucker et al. (2002) yields, with high probability, for any \(\tilde{f}\in\widetilde{\mathcal{F}}\),
\[P(\tilde{f})\leq\frac{K}{K-1}P_{n}(\tilde{f})+O\left(\frac{\log(An)^{2}}{n} \right).\]
It remains to notice that in our context of imbalanced classification, for
\[\tilde{f}=\frac{1}{2}fI_{1}+\frac{1}{2}\frac{p}{1-p}fI_{-1},\]
one has \(P(\tilde{f})=pP_{\mathrm{bal}}(f)\). The result follows by an application of a Chernoff bound (Theorem A.1). The full proof can be found in the supplement, in Section A.3.
**Discussion.** Similar proof techniques can be found in the standard classification literature, for example Corollary 3.7 in Bartlett et al. (2005). Nevertheless, this particular work primarily concentrates on loss functions with binary values, namely \(\{0,1\}\). The proof is based upon the fact that these functions are positive, and it employs the conventional definition of the VC dimension. In contrast, other existing works (e.g. Theorem 2.12 in Bartlett and Mendelson (2006) or Example 7.2 in Gine and Koltchinskii (2006)) demonstrate accelerated convergence rates for the _typical_ empirical risk minimizers, which do not extend to their balanced counterparts. The present result is more general, as it is uniformly applicable to a broader range of bounded functions and encompasses a more extensive definition of the
VC class. This notable extension facilitates the establishment of fast convergence rates for the excess risk of machine learning (ML) algorithms employed in imbalanced classification scenarios, such as cost-sensitive logistic regression and balanced boosting (Menon et al., 2013; Koyejo et al., 2014; Tanha et al., 2020; Xu et al., 2020). In the next section we provide examples of algorithms verifying the assumptions of Theorem 4.1.
As an application of Theorem 4.1, we derive fast rates for the excess risk of empirical risk minimizers. The following assumption, known as the Bernstein condition, is a prevalent concept within the fast rates literature (Bartlett and Mendelson, 2006; Klochkov and Zhivotovskiy, 2021).
**Definition 4.2**.: We say that the triplet \(\left(\mathcal{G},P,\ell\right)\) satisfies the Bernstein condition if, for some \(B>0\), it holds that
\[\forall g\in\mathcal{G}\,,\,\mathbb{E}\left[\left(\ell_{g}(X,Y)-\ell_{g^{ \star}}(X,Y)\right)^{2}\right]\leq B\left(\mathcal{R}(g)-\mathcal{R}(g^{\star} )\right),\]
where \(g^{\star}=\arg\min_{g\in\mathcal{G}}\mathcal{R}[g]=\arg\min_{g\in\mathcal{G}} \mathbb{E}\left[\ell_{g}(X,Y)\right]\).
Set
\[\tilde{\ell}_{g}=\ell_{g}I_{1}+\frac{p}{1-p}\ell_{g}I_{-1},\]
and notice that \(\mathcal{R}_{\mathrm{bal}}\left(g\right)=\mathbb{E}\left[\tilde{\ell}_{g}(X,Y )\right]/p\), so that
\[g^{\star}_{\mathrm{bal}}=\operatorname*{arg\,min}_{g\in\mathcal{G}}P(\tilde{ \ell}_{g})=\operatorname*{arg\,min}_{g\in\mathcal{G}}\mathcal{R}_{\mathrm{bal} }\left(g\right).\]
In the sequel we shall suppose that the latter condition holds for \(\left(\mathcal{G},P,\tilde{\ell}\right)\) in order to apply Theorem 4.1 and obtain fast convergence rates for the excess risk. The proof is deferred to the supplementary material.
**Corollary 4.3**.: Suppose that \(\mathcal{F}=\left\{\ell_{g}\,:\,g\in\mathcal{G}\right\}\) is of VC-type and \(L\)-bounded, and assume that \(\left(\mathcal{F},P,\tilde{\ell}\right)\) satisfies the Bernstein condition for some \(B>0\). Then, for any \(\delta>0\), we have with probability \(1-4\delta\),
\[\mathcal{R}_{\mathrm{bal}}\left(\hat{g}_{\mathrm{bal}}\right)\leq\mathcal{R}_{ \mathrm{bal}}\left(g^{\star}_{\mathrm{bal}}\right)+\frac{D}{B}\frac{\log(An)^{ 2}}{np}+\frac{\log(1/\delta)\left(c_{2}+c_{1}B\right)}{np},\]
where the constants appearing in the latter inequality are the same as in Theorem 4.1.
In the following lemma, we provide a sufficient condition for the Bernstein assumption (Definition 4.2). The proof and the definition of a strongly convex function are given in the supplement.
**Lemma 4.4**.: _Suppose that the family \(\mathcal{G}\) is a normed space. If \(g\mapsto\mathbb{E}\left[\ell_{g}(X,Y)\right]\) is \(L\)-Lipschitz and \(\lambda\)-strongly convex, then \(\left(\mathcal{F},P,\tilde{\ell}\right)\) verifies the Bernstein assumption (Definition 4.2) with \(B=2L^{2}/\lambda\)._
We conclude this section with an illustration of the significance of our results, through the concrete example of a constrained empirical risk minimization problem over a linear class of classifiers 1. We show that fast rates of convergence are achieved provided that the covariate space \(\mathcal{X}\subset\mathbb{R}^{d}\) is bounded and the loss is twice differentiable with a second derivative bounded away from \(0\). More precisely, we make the following assumption.
Footnote 1: Non-linear classifiers can be easily produced with the use of kernels.
**Assumption 1**.: The space \(\mathcal{X}\) is bounded in \(\mathbb{R}^{d}\), _i.e._, there exists some \(\Delta_{X}>0\) such that \(\forall x\in\mathcal{X},\|x\|\leq\Delta_{X}\) for a given norm \(\|\cdot\|\). Furthermore, the family of classifiers and the loss function are chosen as \(\mathcal{G}_{u}=\left\{g(x)=\beta^{T}x\mid\|\beta\|\leq u\right\}\) and \(\ell_{g}(X,Y)=\phi(\beta^{T}XY)\), where \(\phi:\mathbb{R}\to\mathbb{R}\) is a twice differentiable function verifying \(\inf_{|x|\leq u\Delta_{X}}\phi^{\prime\prime}(x)>\lambda\) for some \(\lambda>0\).
An immediate implication of the aforementioned assumption is that, identifying \(g\) with \(\beta\), we have \(\sup_{x,y}\left\|\frac{\partial}{\partial g}\ell_{g}(x,y)\right\|<\infty\), which ensures that the risk is Lipschitz. In addition, this assumption guarantees that the risk is \(\lambda\)-strongly convex with respect to \(g\). The following corollary is a direct consequence of Corollary 4.3 and guarantees fast rates of convergence for constrained ERM, specifically for algorithms of the form \(\hat{g}_{u,bal}(x)=\hat{\beta}_{u}^{T}x\) with
\[\hat{\beta}_{u}=\operatorname*{arg\,min}_{\|\beta\|\leq u}\frac{1}{n}\sum_{i=1}^{n}\phi(\beta^{T}X_{i}Y_{i})\left(\frac{\mathds{1}\{Y_{i}=1\}}{p_{n}}+\frac{\mathds{1}\{Y_{i}=-1\}}{1-p_{n}}\right). \tag{1}\]
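A minimal sketch of the constrained, class-reweighted ERM of Equation (1) with the logistic \(\phi\) is given below (illustrative Python code relying on an off-the-shelf SLSQP solver; the helper name is ours and this is not the implementation used in the experiments):

```python
import numpy as np
from scipy.optimize import minimize

def balanced_constrained_erm(X, y, u):
    """Eq. (1): minimize the class-reweighted empirical risk over ||beta|| <= u."""
    n, d = X.shape
    p_n = np.mean(y == 1)
    # weights 1/p_n for the rare class and 1/(1 - p_n) for the majority class
    w = np.where(y == 1, 1.0 / p_n, 1.0 / (1.0 - p_n))

    def objective(beta):
        margins = (X @ beta) * y
        return np.mean(w * np.log1p(np.exp(-margins)))   # phi(u) = log(1 + e^{-u})

    norm_constraint = {"type": "ineq", "fun": lambda beta: u - np.linalg.norm(beta)}
    res = minimize(objective, x0=np.zeros(d), constraints=[norm_constraint], method="SLSQP")
    return res.x
```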
**Corollary 4.5**.: Suppose that Assumption 1 holds for some \(\lambda>0\). Then the excess risk of \(\hat{g}_{u}\) verifies, for any \(\delta>0\), with probability \(1-4\delta\),
\[\mathcal{R}_{\mathrm{bal}}\left(\hat{g}_{\mathrm{bal}}\right)-\mathcal{R}_{ \mathrm{bal}}\left(g_{\mathrm{bal}}^{\star}\right)\leq\frac{D\lambda}{L^{ \prime 2}}\frac{\log(An)^{2}}{np}+\frac{\log(1/\delta)\left(c_{2}+2c_{1}(L^{ \prime 2}/\lambda)\right)}{np},\]
where \(L^{\prime}=\sup_{|x|\leq u\Delta_{X}}\phi^{\prime}(x)\).
**Discussion.** In the context of constrained logistic regression, where \(\phi(x)=\log(1+e^{-x})\), the latter corollary yields fast convergence rates with constants \(L^{\prime}=1\) and \(\lambda=e^{-u}\). Corollary 4.5 further establishes accelerated convergence rates for constrained empirical _balanced_ risk minimization with respect to losses such as mean squared error, squared hinge, and exponential loss, among others. This outcome aligns with expectations, as constrained empirical risk minimization is equivalent to penalization (Lee et al., 2006; Homrighausen and McDonald, 2017). Numerous studies have demonstrated the effectiveness of penalization in achieving rapid convergence rates (Koren and Levy, 2015; van Erven et al., 2015). This aspect is particularly significant in the present context, as the standard convergence rate for imbalanced classification is \(1/\sqrt{np}\), and accelerating the convergence rate leads to a more pronounced impact.
## 5 Numerical illustration
In this section, we provide, using synthetic data, a numerical illustration of the theoretical results on \(k\)-NN classification (Corollary 3.5) and on logistic regression (Corollary 4.5). In both cases, particular attention is given to the highly imbalanced setting where \(p=n^{-a}\) for some \(0<a<1\). Due to space constraints, the numerical experiments based on real data are postponed to the supplementary file.
**Synthetic dataset.** In the two cases considered below, we use the following simple data generation process. Consider the binary classification dataset \((X_{i},Y_{i})_{i=1,\ldots,n}\) such that \(X_{i}\in\mathbb{R}^{2}\) and \(Y_{i}\in\{-1,1\}\). For each \(i\), the random variable \(Y_{i}\) is such that \(P(Y_{i}=1)=1/n^{a}\), for some \(a<1\). Then, having generated \(Y_{i}=y\), \(X_{i}\) is drawn according to a multivariate Student-\(t\) distribution with parameters \((\mu_{y},\sigma_{y},\nu_{y})\). We set \((\mu_{-1},\mu_{1})=((0,0),(1,1))\), \(\sigma_{1}=3\sigma_{-1}=3I\), and \((\nu_{-1},\nu_{1})=(2.5,\,1.1)\).
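For reproducibility, the generator just described can be sketched as follows (illustrative Python code; it relies on `scipy.stats.multivariate_t`, available in SciPy 1.6 and later, and the function name is ours):

```python
import numpy as np
from scipy.stats import multivariate_t

def simulate_imbalanced(n, a, seed=0):
    """Synthetic generator: P(Y=1) = n^{-a}; X | Y=y is multivariate Student-t
    with the class-dependent parameters (mu_y, sigma_y, nu_y) given in the text."""
    rng = np.random.default_rng(seed)
    p = n ** (-a)
    y = np.where(rng.random(n) < p, 1, -1)
    params = {
        -1: dict(loc=[0.0, 0.0], shape=np.eye(2), df=2.5),
        +1: dict(loc=[1.0, 1.0], shape=3 * np.eye(2), df=1.1),
    }
    X = np.empty((n, 2))
    for label in (-1, 1):
        idx = np.where(y == label)[0]
        if idx.size > 0:
            X[idx] = multivariate_t(**params[label]).rvs(size=idx.size, random_state=rng)
    return X, y
```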
### Balanced \(k\)-nearest neighbors
Corollary 3.5 gives conditions on \(k\) and \(p\) that ensure the consistency of the \(k\)-NN classification rule. The key condition on which we focus here is that \(kp\) should go to \(\infty\). This condition suggests the existence of a learning frontier in the \((k,p)\) plane above which consistent learning is ensured. Here we validate this result empirically, and we also provide numerical results supporting the stronger conclusion that whenever \(kp\) is not large enough (below the learning frontier), \(k\)-NN is no longer consistent, making clear that the number of neighbors \(k\) should be chosen in view of the value of \(p\).
The experimental setup is as follows. The training size is \(n=1e4\). We set \(p=1/n^{a}\) and \(k=n^{b}\), while varying \(a,b\) over the interval \([1/4,3/4]\) to cover different cases ranging from \(pn\to 0\) to \(pn\to\infty\). The AM risk of the balanced \(k\)-NN classifier (estimated over 20 simulations) is displayed as a function of \((k,p)\) in Figure 1.
Upon examining the figure, it is observed that the performance of the \(k\)-nearest neighbors classifier mirrors that of a random guess, maintaining an AM risk near 0.5, when \(kp\) is kept small. This observation illustrates (and extends) the conclusion of Corollary 3.5, supporting that consistency is obtained if (and only if) \(kp\to\infty\).
### Balanced ERM
Now, keeping in mind the fast convergence rate \(1/(np)\) obtained in Corollary 4.5, our goal is to show that such a rate is quite sharp as it can be recovered in practice.
We consider the simple setting of the linear classifier \(\hat{g}(x)=\hat{\beta}_{u}^{T}x\) introduced in Section 4, with logistic loss \(\ell_{g}(X,Y)=\log(1+e^{-g(X)Y})\), \(g(X)=\beta^{T}X\), and \(u=10\). Here the sample size \(n\) ranges over the grid \([100,1e4]\) and the rare class probability is \(p=n^{-a}\) with \(a\in\{1/3,1/2,2/3\}\).

Figure 1: Heatmap showing the AM risk of the balanced \(k\)-NN.
Some Monte Carlo simulations are needed to estimate \(g_{\mathrm{bal}}^{\star}\). We use \(1e5\) simulations from a well-balanced data set (\(p=1/2\)) so that the error in computing \(g_{\mathrm{bal}}^{\star}\) is sufficiently small. In addition, we use further Monte Carlo simulations from a balanced test dataset of size \(1e4\) to evaluate the risk function \(\mathcal{R}_{\mathrm{bal}}\) without bias. Based on this, we can obtain both \(\mathcal{R}_{\mathrm{bal}}(g_{\mathrm{bal}}^{\star})\) and \(\mathcal{R}_{\mathrm{bal}}(\hat{g})\), from which an excess risk value follows. We perform \(n_{simu}=1e4\) experiments and report the average as well as the \(0.10\) and \(0.90\) quantiles of the absolute error obtained over the \(n_{simu}\) experiments.
Figure 2 displays the excess risk as a function of the sample size \(n\) on a logarithmic scale, for \(a\in\{1/3,1/2,2/3\}\). Additional figures exploring other values of \(a\) are reported in the supplementary material. We notice that the excess risk vanishes in the same way as the function \(n\mapsto 1/(np)\), confirming the sharpness of the upper bound from Corollary 4.5.
## 6 Conclusion
In this paper, we have derived upper bounds for the balanced risk in highly imbalanced classification scenarios. Notably, our bounds remain informative even under severe class imbalance (\(p\to 0\)), setting our work apart from existing studies in imbalanced classification (Menon et al., 2013; Koyejo et al., 2014; Xu et al., 2020). Furthermore, it is worth highlighting that this is the first study to achieve fast rates in imbalanced classification, marking a significant advancement in the field.

Figure 2: Excess risk (blue) of logistic regression for different sample sizes \(n\) and the curve \(1/(np)\) (orange). The blue area corresponds to the \(0.9\)-confidence interval.
Our findings corroborate that both risk-balancing approaches and cost-sensitive learning are consistent across nearly all imbalanced classification scenarios. This aligns with experimental works previously documented in the literature (Elkan, 2001; Wang et al., 2016; Khan et al., 2018; Wang et al., 2019; Pathak et al., 2022).
Furthermore, the methodologies and proof techniques presented in this paper are adaptable to other imbalanced classification metrics beyond balanced classification. Potential extensions include demonstrating consistency for metrics such as the \(F_{1}\) measure, recall, and their respective variants.
|
2310.13191 | Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy
for Language Models | The pruning objective has recently extended beyond accuracy and sparsity to
robustness in language models. Despite this, existing methods struggle to
enhance robustness against adversarial attacks when continually increasing
model sparsity and require a retraining process. As humans step into the era of
large language models, these issues become increasingly prominent. This paper
proposes that the robustness of language models is proportional to the extent
of pre-trained knowledge they encompass. Accordingly, we introduce a
post-training pruning strategy designed to faithfully replicate the embedding
space and feature space of dense language models, aiming to conserve more
pre-trained knowledge during the pruning process. In this setup, each layer's
reconstruction error not only originates from itself but also includes
cumulative error from preceding layers, followed by an adaptive rectification.
Compared to other state-of-art baselines, our approach demonstrates a superior
balance between accuracy, sparsity, robustness, and pruning cost with BERT on
datasets SST2, IMDB, and AGNews, marking a significant stride towards robust
pruning in language models. | Jianwei Li, Qi Lei, Wei Cheng, Dongkuan Xu | 2023-10-19T23:02:29Z | http://arxiv.org/abs/2310.13191v3 | # Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models
###### Abstract
The pruning objective has recently extended beyond accuracy and sparsity to robustness in language models. Despite this, existing methods struggle to enhance robustness against adversarial attacks when continually increasing model sparsity and require a retraining process. As humans step into the era of large language models, these issues become increasingly prominent. This paper proposes that the robustness of language models is proportional to the extent of pre-trained knowledge they encompass. Accordingly, we introduce a post-training pruning strategy designed to faithfully replicate the embedding space and feature space of dense language models, aiming to conserve more pre-trained knowledge during the pruning process. In this setup, each layer's reconstruction error not only originates from itself but also includes cumulative error from preceding layers, followed by an adaptive rectification. Compared to other state-of-art baselines, our approach demonstrates a superior balance between accuracy, sparsity, robustness, and pruning cost with BERT on datasets SST2, IMDB, and AGNews, marking a significant stride towards robust pruning in language models.
## 1 Introduction
Pruning is a widely recognized compression method employed to decrease the model size and accelerate model inference (Frankle and Carbin, 2018; Chen et al., 2020; Prasanna et al., 2020; Chen et al., 2021). In the age of large language models (Andrew and Gao, 2007; Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023; Touvron et al., 2023; Ouyang et al., 2022; Smith et al., 2022), the necessity of pruning has increased because it greatly reduces deployment costs (Frantar and Alistarh, 2023). In addition to the significant computation cost, the robustness of language models has emerged as a crucial factor that demands attention. This is primarily because models need to remain resilient against adversarial attacks, even in challenging real-world circumstances (Tran et al., 2022; Wang et al., 2023). Therefore, exploring robust pruning strategies against adversarial attacks in language models could potentially yield a substantial impact (Xu et al., 2021; Du et al., 2023).
Recent research has extended the pruning of language models beyond accuracy and sparsity, with an emphasis on the trade-off between accuracy, sparsity, robustness and cost (Du et al., 2023; Xu et al., 2021; Liang et al., 2021; Xi et al., 2022). Zheng et al. (2022) propose a joint optimization objective to guide the pruning and adversarial training simultaneously. Their approach views the identified subnetworks as robust tickets, which can be trained as normal and offer enhanced robustness. Despite achieving state-of-the-art results on target datasets, these methods still display vulnerabilities, as evidenced by a significant gap between metrics of clean accuracy 1 and accuracy under attack. Moreover, the performance also rapidly declines when sparsity exceeds a moderate level. Expanding on their work, Xi et al. (2022) propose using robust early-bird tickets to reduce the computational cost from adversarial training. However, they face similar challenges regarding the trade-off between robustness and sparsity. In summary, existing robust pruning works often demonstrate limited sparsity, insufficient robustness, and expensive cost, indicating the ongoing challenge of the balance between accuracy and the other three aspects.
Footnote 1: accuracy without adversarial attacks
To address this challenge, this paper investigates why language models are susceptible to adversarial attacks (Wang et al., 2021; Garg and Ramakrishnan, 2020; Jin et al., 2020). Previous studies have indicated that language models frequently capitalize on biases and artifacts inherent in datasets as predictive shortcuts, which impedes reasoning ability and the development of advanced semantic comprehension (Du et al., 2021; Niven and Kao, 2019; McCoy et al., 2020; Du et al., 2023). This reliance leads to a more severe loss of pre-trained knowledge during the pruning process. Furthermore, adversarial samples in Natural Language Processing (NLP) are crafted by replacing components of sentences with semantically similar counterparts, thereby retaining high semantic similarity in the entire sentence (Li et al., 2020; Ren et al., 2019; Jin et al., 2020). In this way, language models that depend on spurious features from particular words cannot defend against adversarial attacks constructed by replacing those words with semantically similar alternatives. To put it more plainly, this primarily stems from the fact that, without pre-trained knowledge, the sparse language model treats the substitute word simply as an integer identifier. Based on the above observation, we explore the following questions in this paper:
Question 1._What is the core to defend against adversarial attacks for sparse language models?_
This paper proposes that the robustness of sparse language models is directly proportional to the amount of pre-trained knowledge retained after pruning. Intuitively, the robustness of a sparse language model is fundamentally tied to its capability to distill advanced semantic features from input sentences. This capability is largely established during the pre-training phase of dense language models, emphasizing the pivotal role of acquired semantic knowledge. The extensive experiments well support our statement.
Question 2._How can we efficiently prevent the loss of pre-trained knowledge in pruning to preserve or even enhance robustness?_
Previous research has demonstrated that pruning exacerbates the model's dependency on spurious features Xu et al. (2021); Du et al. (2023). We further confirm that traditional pruning methods lead to a considerable loss of pre-trained knowledge and poor robustness. To prevent the above things, we propose a pruning approach that minimizes damage to the embedding space and feature space of dense language models, striving to replicate the features in each layer completely. Specifically, for each layer, we iteratively eliminate a single weight at a time and counterbalance the loss by updating the remaining weights based on the Hessian Matrix. In this setup, the reconstruction error at each layer arises not only from its own layer but also incorporates the accumulated error from preceding layers. This is achieved by adaptively updating the pruning-dependent information in accordance with the sparse output generated by previous layers. Concurrently, there's an ongoing effort to correct these errors collectively. Moreover, our method, being a post-training approach, is cost-effective for current language models, as it circumvents rigorous retraining processes. Extensive experiments show that our approach achieves a better trade-off between accuracy, sparsity, robustness, and pruning cost in SST2, AGNews, and IMDB compared with other state-of-art methods.
## 2 Related Work
**Textual Adversarial Attacks and Defense.** Textual adversarial attacks pose a significant challenge to the robustness of language models. These attacks, formulated by carefully altering certain segments of sentences with semantically similar counterparts, aim to fool language models (Jin et al., 2020; Li et al., 2020). To enhance the robustness of language models and defend against adversarial attacks, a range of potent defensive strategies, such as adversarial training, has been proposed (Madry et al., 2017; Zhu et al., 2019; Li and Qiu, 2021). Different from their research, which focuses on dense models, we explore robustness in the context of language model pruning.
**Robust Model Pruning.** Prior studies indicate that sparse models tend to underperform in Compression Identified Examples (CIE), suggesting that the pruning process exacerbates the inherent algorithmic biases hidden within the datasets Hooker et al. (2020). In Computer Vision (CV), simultaneous optimization of model pruning and adversarial training has been advocated as an effective solution to this issue Gui et al. (2019); Ye et al. (2019); Sehwag et al. (2020); Vemparala et al. (2021). In NLP, Du et al. (2023) propose to prevent model overfitting on easy samples by leveraging sample difficulty in the context of pruning. Concurrently, Xu et al. (2021) suggest the generation of robust sub-networks through Knowledge Distillation and Post-training Quantization. Taking a different approach, Liang et al. (2021) strive to enhance model generalizability by extracting the super tickets, while Zheng et al. (2022) and Xi et al. (2022) seek to identify robust tickets. Despite recent advancements, achieving enhanced robustness alongside increased sparsity remains a challenge. This paper significantly promotes a better trade-off among accuracy, robustness, sparsity, and pruning cost.
## 3 Preliminary
### Shortcut Learning and Mitigation
Recent studies provide evidence that language models are inclined to capitalize on inherent biases and spurious features present in datasets, using these as convenient predictive shortcuts (Niven and Kao, 2019; Du et al., 2021; McCoy et al., 2020). This tendency impedes the development of more advanced semantic understanding and reasoning capacity necessary for NLU tasks. Various preliminary studies have begun to address this bias issue, such as adversarial training and posterior regularization (Stacey et al., 2020; Chen et al., 2021). From a unique perspective, we help language models defend against adversarial attacks by mitigating this shortcut issue through _weight averaging_. This method will be elaborated further in Section 4.2.
### Pruning with Hessian Matrix
Drawing inspiration from (LeCun et al., 1989; Hassibi et al., 1993), previous study has provided mathematical formulations for effectively eliminating a single weight from a layer and updating the remaining weights to correct the resulting error according to the information from Hessian Matrix (Frantar and Alistarh, 2022). The equations are presented below:
\[\begin{split} w_{p}=\underset{w_{p}}{\text{argmin}}\frac{w_{p}^{ 2}}{[H^{-1}]_{pp}}\\ w_{r}-=\frac{w_{p}}{[H^{-1}]_{pp}}\cdot\,H_{:,p}^{-1}\end{split} \tag{1}\]
where \(H\) is the Hessian Matrix, \(w_{p}\) represents the single weight that will be pruned, while \(w_{r}\) denotes the remaining weights that will be updated. The notation \([H^{-1}]_{pp}\) refers to the \(p_{th}\) diagonal entry of the inverse Hessian Matrix, and \(H_{:,p}^{-1}\) represents its \(p_{th}\) column. However, the inversion of the Hessian Matrix requires updates at each weight removal, which is exceedingly costly. Frantar and Alistarh (2022) observe that Hessian values across different weight matrix rows are independent, as a single weight removal only impacts its respective row output. Accordingly, they simplify the calculation of the Hessian Matrix \(H\) and leverage the Gaussian elimination technique to accelerate the update of \(H^{-1}\), as described mathematically below:
\[\begin{split}& H=XX^{T}\\ & H_{-p}^{-1}=(H^{-1}-\frac{1}{[H^{-1}]_{pp}}H_{:,p}^{-1}H_{p:}^{ -1})_{-p}\end{split} \tag{2}\]
Here, \(-p\) denotes the removal action of a single weight at index \(p\). A more detailed explanation can be found in the Appendix.
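To make Equations (1) and (2) concrete, the single-weight removal step for one row of a weight matrix can be sketched as follows (illustrative Python/NumPy code, not the cited implementation; the function name and the boolean mask bookkeeping are ours):

```python
import numpy as np

def prune_one_weight(w, H_inv, pruned):
    """One step of Eqs. (1)-(2): drop the weight with the smallest saliency,
    compensate the remaining weights, and update the inverse Hessian.

    w      : 1-D array of weights for one row of the layer
    H_inv  : current inverse Hessian of the layer inputs
    pruned : boolean mask of already-removed coordinates
    """
    diag = np.diag(H_inv).copy()
    diag[pruned] = 1.0                                  # avoid 0/0 on removed coordinates
    saliency = np.where(pruned, np.inf, w ** 2 / diag)  # Eq. (1): w_p^2 / [H^{-1}]_pp
    p = int(np.argmin(saliency))
    w = w - (w[p] / H_inv[p, p]) * H_inv[:, p]          # Eq. (1): compensate the removal
    H_inv = H_inv - np.outer(H_inv[:, p], H_inv[p, :]) / H_inv[p, p]   # Eq. (2)
    pruned = pruned.copy()
    pruned[p] = True
    w[p] = 0.0
    return w, H_inv, pruned
```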
## 4 Methodology
This section proposes a pruning method for language models that can better balance accuracy, sparsity, robustness, and pruning cost. Figure 1 depicts the architecture of this method.
### Rethink Robust Model Pruning
Given that the predominant challenge in robust pruning primarily centers on robustness and pruning cost, we mainly focus on these two aspects in this paper. To enhance the robustness, we explore the root cause of the poor performance of sparse language models under adversarial attacks. We note that adversarial samples are often crafted by replacing certain words in the sentence with semantically similar substitutes. Thus it is essential to ensure that the representation of the original words and their substitutes remain similar in the embedding space and feature space even after pruning. Based on the above observation, we propose to maintain a highly close alignment between the sparse and dense language models. In other words, robust pruning is supposed to seek sparse parameters \(\hat{W}_{l}\) that minimize the discrepancy between the outputs of dense and sparse layers. The problem can be formally expressed as follows:
\[\operatorname*{argmin}_{\hat{W}_{l}}\;\mathbb{E}_{X_{l}}\;\mathcal{L}(f_{l}(X_{l},W_{l}),f_{l}(X_{l},\hat{W}_{l}))\quad\text{s.t.}\;\|\hat{W}_{l}\|_{0}\leq k \tag{3}\]
Here, each layer of language models is represented by a mathematical function \(f_{l}(W_{l},X_{l})\), and \(X_{l}\) denotes inputs, \(k\) designates the total number of weights that remain non-zero after the pruning process. Predominantly, the Mean Squared Error (MSE) is usually employed to measure the pruning error of each layer. Therefore, the preceding problem can be further reformulated using the MSE, as expressed in the subsequent equation:
\[\operatorname*{argmin}_{\hat{W}_{l}}\|W_{l}X_{l}-\hat{W}_{l}X_{l}\|^{2}\;\text{s.t.}\;\|\hat{W}_{l}\|_{0}\leq k \tag{4}\]
To reduce the pruning cost, we adopt a post-training setting in our strategy. Specifically, we only utilize a small subset of data to calibrate the weights and generate sparse substitutes to replace them. In summary, our pruning method does not need a rigorous retraining process.
### Weight Averaging for Robust Dense Model
We also realize that language models may rely on surface-level or spurious features in the data
rather than capturing sophisticated semantic features. Thus, when sparse language models fail to defend against adversarial attacks, it becomes challenging to determine whether the failure stems from the pruning methods or inherent issues within the dense model. We circumvent this risk by constructing a robust and dense model before pruning.
Inspired by Croce et al. (2023) and Wortsman et al. (2022), we generate a robust language model via _weight averaging_. The key idea is to train multiple models with different hyperparameters and settings, allowing each model to capture distinct nuances of the data and generalize in diverse ways. By averaging their weights, we can create a robust model that benefits from collective knowledge. Specifically, we order these models in descending order based on the accuracy under attack. Then, we selectively average the weights that contribute to the final robustness. Finally, we obtain a robust and dense model as the foundation of subsequent operations. This approach ensures that any detected vulnerabilities in sparse language models result from the pruning process, eliminating the possibility of them arising from spurious features. More details can be found in Algorithm 3.
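The greedy selection step can be sketched as follows (illustrative Python code in the spirit of greedy weight averaging; the evaluation callback and the state-dict format are placeholders, not the exact procedure of Algorithm 3):

```python
def average_state_dicts(dicts):
    """Element-wise average of a list of state dicts with identical keys/shapes."""
    return {k: sum(d[k] for d in dicts) / len(dicts) for k in dicts[0]}

def greedy_weight_average(candidates, eval_under_attack):
    """Greedily add fine-tuned models to the average only if robustness does not drop.

    candidates        : state dicts sorted by accuracy under attack (descending)
    eval_under_attack : callable mapping a state dict to accuracy under attack
    """
    kept = [candidates[0]]
    best = eval_under_attack(candidates[0])
    for cand in candidates[1:]:
        trial = average_state_dicts(kept + [cand])
        score = eval_under_attack(trial)
        if score >= best:          # keep the candidate only if it helps robustness
            kept.append(cand)
            best = score
    return average_state_dicts(kept)
```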
### Ada-Pruning for Robust Sparse Model
#### 4.3.1 Notation
To accurately replicate the dense model's behavior regarding embedding space and feature space of each layer, we use the method described in Section 3.2 as the backbone. However, its layer-wise setting, which treats each layer as an independent pruning problem, introduces limitations in realizing a globally optimal solution. To elaborate, let's consider a single layer as an example in the following sections. We'll use \(X_{l}\), \(W_{l}\), and \(Y_{l}\) to represent the input, weight, and output of the layer, respectively, with the subscript \(l\) indicating \(l_{th}\) layer. The use of a hat, as seen in \(\hat{X}_{l}\), \(\hat{W}_{l}\), or \(\hat{Y}_{l}\), represents the input, weight, or output within a sparse context.
#### 4.3.2 Adaptive Hessian Matrix
After completing the pruning of the \(l_{th}\) layer, a certain amount of error stemming from the sparse matrix operation inevitably arises. No matter how minor this error might be, it's important to realize that the output of this layer, denoted as \(\hat{Y}_{l}\), influences the input of the subsequent layer, denoted as \(\hat{X}_{l+1}\). As a result, the initial Hessian Matrix for the \((l+1)_{th}\) layer, defined as \(H_{l+1}=X_{l+1}X_{l+1}^{T}\), becomes outdated. Thus it's crucial to recalculate the Hessian Matrix to obtain more precise pruning-dependent information. We suggest adaptively updating the Hessian Matrix for the subsequent layer after pruning the preceding layers.

Figure 1: Architecture of Main Strategy. **A:** First, we generate a robust and dense language model in two steps: \(\blacklozenge\) we fine-tune the pre-trained weight with various hyperparameters and settings, resulting in multiple models with different knowledge; \(\blacklozenge\) we then employ a greedy algorithm to only average the weights of models that contribute to the final performance. **B:** Second, \(\blacklozenge\) we apply our adaptive pruning method to generate robust and sparse language models in a layer-wise setting. Specifically, we optimize the \(\blacklozenge\) original independent pruning process of each layer to \(\blacklozenge\) an adaptive way. This requires subsequent layers to update the Hessian Matrix and the optimal dense weight according to the sparse outputs of preceding layers, thereby inheriting and correcting the accumulated error together.
#### 4.3.3 Adaptive Dense Weight
We also note that the loss generated by removing a single weight depends on the current weight \(W_{l}\) from the corresponding layer, as derived from Equation 1. However, an inevitable fact is that the original dense weight \(W_{l}\) is not optimal for the expected dense output \(Y_{l}\) after pruning the preceding layers (\(0_{th}\dots(l-1)_{th}\)). Given that the input \(X_{l}\) has been altered to \(\hat{X}_{l}\) due to the accumulated error, it would be suboptimal to continue using the original weight \(W_{l}\) to calculate the pruning loss for the current layer. To be clearer, the result of \(\hat{X}_{l}W_{l}\) could substantially deviate from the original output \(Y_{l}\). This is incompatible with our goal of producing an output \(\hat{Y}_{l}\) identical to the original \(Y_{l}\) in the pruning process. Thus, it's essential to update the dense weight so that \(\hat{X}_{l}\bar{W}_{l}\) approximates the original output \(Y_{l}\) more closely. Here, \(\bar{W}_{l}\) denotes the updated dense weight, and we design the following equations to derive \(\bar{W}_{l}\):
\[\bar{W}_{l}=(\hat{X}_{l}^{T}\hat{X}_{l})^{-1}\hat{X}_{l}^{T}Y_{l} \tag{5}\]
where \(T\) represents the transpose operation, and \(-1\) denotes the inverse operation. To ensure that \(\hat{X}_{l}^{T}\hat{X}_{l}\) is invertible, we also introduce a regularization term, such as \(1e-4\), to the diagonal entries of the matrix. Finally, we can compute the pruning loss more accurately with the updated weight \(\bar{W}_{l}\).
We also calibrate the optimal weights for non-pruned layers (such as the pooler layer and classification layer in BERT) with Equation 5, aligning the dense layers' output with the altered input. Algorithm 1 provides detailed steps for the code implementation, offering a comprehensive overview of our methodology. We also provide a comprehensive analysis of the computational complexity of our method in the Appendix.
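The two adaptive ingredients, recomputing the Hessian from the sparse outputs of preceding layers and recalibrating the dense target weight via Equation 5, can be sketched as follows (a simplified Python/NumPy illustration that treats each layer as a plain linear map, ignoring nonlinearities and attention; `prune_layer` stands in for the per-layer Hessian-based routine of Section 3.2 and is not implemented here):

```python
import numpy as np

def recalibrate_dense_weight(X_sparse, Y_dense, eps=1e-4):
    """Eq. (5): least-squares W_bar such that X_sparse @ W_bar ~ Y_dense,
    with a small diagonal regularization making X^T X invertible."""
    d = X_sparse.shape[1]
    gram = X_sparse.T @ X_sparse + eps * np.eye(d)
    return np.linalg.solve(gram, X_sparse.T @ Y_dense)

def adaptive_layerwise_prune(layers, X_calib, prune_layer):
    """Prune layers sequentially, letting each layer see the sparse outputs of
    its predecessors so that accumulated error is inherited and corrected."""
    X_dense, X_sparse = X_calib, X_calib
    for layer in layers:
        Y_dense = X_dense @ layer["W"]                        # expected dense output
        W_bar = recalibrate_dense_weight(X_sparse, Y_dense)   # adaptive dense weight
        H = X_sparse.T @ X_sparse                             # adaptive Hessian information
        layer["W_sparse"] = prune_layer(W_bar, H)             # layer-wise pruning step
        X_dense = Y_dense
        X_sparse = X_sparse @ layer["W_sparse"]
    return layers
```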
## 5 Experiments
We first compare our method against several baseline methods, assessing accuracy, robustness, sparsity, and cost. Then, an ablation study is performed to elucidate the contributions of each part in our method. Finally, we augment our core findings with additional experiments and analyses to further illuminate our method.
### Baselines and Datasets
Consistent with previous works (Devlin et al., 2018; Du et al., 2023; Xu et al., 2021; Zheng et al., 2022; Xi et al., 2022), \(\textbf{BERT}_{base}\) serves as the foundational model for all our experiments. We compare our approach with various baselines, including: **RobustT** (Zheng et al., 2022), which optimizes the pruning mask and input perturbation simultaneously for robust tickets; **Bag-of-Tricks** (Xu et al., 2021), which improves sparse model robustness via Knowledge Distillation and Post-Training Quantization; **RMC** (Du et al., 2023), a technique preventing sparse language models from overfitting on easy samples using sample difficulty; and **SuperTicket** (Liang et al., 2021), which identifies a super mask during pruning to reduce variance while preserving bias. Our evaluation primarily involves three text classification datasets: Internet Movie Database (**IMDB**, Maas et al. 2011), AG News Corpus (**AGNEWS**, Zhang et al. 2016), and Stanford Sentiment Treebank for binary classification (**SST-2**, Socher et al. 2013).
### Robustness Evaluation
We assess our model's effectiveness against adversarial attacks using the **TextFooler**, which substitutes crucial words in sentences with semantically similar synonyms Jin et al. (2020). Following previous works Zheng et al. (2022); Xi et al. (2022), our evaluations utilize key metrics like Clean Accuracy **Acc%** (accuracy on clean test data), Accuracy Under Attack **Aua%** (accuracy when subjected to adversarial attacks), and Attack Success Rate **Asr%** (ratio of successful text perturbations to total attempts). A robust method is expected to show higher clean accuracy and accuracy under attack coupled with a lower attack success rate. We also evaluate more attack methods in the Appendix.
### Implementation Details
To begin with, we employ the technique mentioned in Section 4.2 to generate a robust language model for each dataset. Subsequently, we use our method to prune these robust language models with a small calibration dataset. All experimental results are the average of five trials, each initiated with different seeds. Furthermore, we assess the performance under three different levels of sparsity: 30%, 50%, and 87.5%. Additional implementation details can be found in Appendix.
### Main Result on Robustness Evaluation
Table 1 provides a comprehensive comparison of various robust pruning methods, evaluated across three distinct datasets: SST2, AGNEWS, and IMDB, and under varying degrees of model sparsity. Key observations can be made as follows: **1)** Our strategy even enhances the robustness of language models after pruning. We believe this enhancement stems from the regularization effect of sparse architecture. **2)** Our strategy distinguishes itself by consistently surpassing other methods in the **Aua%** and **Asr%**s, regardless of the dataset or the level of sparsity. These results imply that our strategy effectively maintains robustness during the pruning of language models. **3)** Impressively, our method achieves higher robustness even with fewer parameters compared to several other approaches, which further underscores the effectiveness of our robust pruning method. **4)** Although the **Acc%** of
\begin{table}
\begin{tabular}{l|c|c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**\#Param**} & \multirow{2}{*}{**Re-T**} & \multicolumn{3}{c|}{**SST2**} & \multicolumn{3}{c|}{**AGNEWS**} & \multicolumn{3}{c}{**IMDB**} \\ \cline{3-13} & & & **Acc** & **Aua** & **Asr** & **Acc** & **Aua** & **Asr** & **Acc** & **Aua** & **Asr** \\ \hline Fine-tune & 85M & Y & **92.3** & 12.7 & 86.2 & 94.7 & 19.1 & 80.0 & 95.1 & 7.4 & 92.2 \\ \hline FreeLB & 85M & Y & 91.5 & 28.3 & 69.1 & **94.8** & 37.8 & 60.1 & 94.3 & 36.2 & 61.6 \\ \hline Weight Average & 85M & Y & 91.4 & **30.4** & **66.75** & 94.4 & **48.5** & **48.6** & **95.2** & **44.4** & **53.4** \\ \hline \multicolumn{2}{l}{**sparsity \(\leq\) 30\%**} & \multicolumn{3}{c|}{} & & & & & & & & & \\ \hline SuperTicket & 72M & Y & **93.2** & 14.3 & 84.7 & 94.8 & 9.7 & 89.8 & **95.0** & 17.3 & 81.8 \\ \hline Bag-of-Tricks & 60M & N & 86.3 & 25.7 & 70.3 & 87.3 & 31.8 & 63.6 & 85.4 & 24.6 & 71.2 \\ \hline RMC & 60M & Y & 91.2 & 17.6 & 80.7 & 94.2 & 21.4 & 77.3 & 93.9 & 22.3 & 76.3 \\ \hline RobusT & 60M & Y & 90.8 & 28.9 & 68.2 & **94.9** & 33.4 & 64.8 & 92.1 & 55.7 & 39.5 \\ \hline Ours & 60M & N & 90.2 & **42.3** & **53.1** & 93.8 & **48.6** & **48.2** & 94.6 & **57.3** & **39.4** \\ \hline \multicolumn{2}{l}{**sparsity = 50\%**} & \multicolumn{3}{c|}{} & & & & & & & & & \\ \hline Bag-of-Tricks & 43M & N & 87.2 & 21.6 & 75.2 & 90.6 & 33.5 & 63.0 & 91.3 & 21.2 & 76.8 \\ \hline RMC & 43M & Y & **90.8** & 9.7 & 89.3 & 94.1 & 21.2 & 77.5 & 94.1 & 14.7 & 84.4 \\ \hline RobusT & 43M & Y & 90.5 & 24.8 & 73.9 & **94.8** & 28.8 & 69.7 & 93.2 & 31.5 & 66.2 \\ \hline Ours & 43M & N & 88.31 & **43.1** & **51.2** & 93.4 & **48.5** & **48.1** & **94.2** & **53.2** & **43.6** \\ \hline \multicolumn{2}{l}{**sparsity = 87.5\%**} & \multicolumn{3}{c|}{} & & & & & & & & & \\ \hline Bag-of-Tricks & 11M & N & 85.9 & 17.8 & 85.7 & 89.4 & 11.3 & 87.4 & 87.7 & 8.9 & 89.9 \\ \hline RMC & 11M & Y & **86.3** & 3.6 & 95.8 & 92.1 & 4.5 & 95.5 & 91.3 & 11.2 & 87.7 \\ \hline RobusT & 11M & Y & 85.2 & 7.8 & 90.8 & 91.8 & 8.3 & 91.0 & 89.2 & 6.5 & 92.7 \\ \hline Ours & 11M & N & 85.6 & **37.6** & **56.1** & **92.4** & **41.3** & **55.3** & **91.6** & **35.6** & **61.1** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of Adversarial Robustness Assessment on BERT\({}_{base}\). The entry highlighted with an **orange background** denotes our robust and dense model, which serves as the initialization for a range of robust pruning methods except **RobustT** (RobustT is generated from the pre-trained weight). Obviously, our method consistently outperforms all baselines in terms of the **Aua%** and **Asr%** metrics. Regarding **Acc%**, there is a minor decrease in our method’s performance at lower sparsity levels, yet it regains superiority at higher sparsity levels. The highest performance is highlighted in **bold**. The column **Re-T** indicates whether the method necessitates model retraining. Consistent with previous research, we exclude embedding matrices from the calculation of parameter count.
our method is generally lower than other baselines at lower sparsity levels, the improvement of robustness (reflected in **Aua%** and **Asr%**) far outweighs the degree of accuracy degradation. **5)** At higher levels of sparsity, our method outperforms other baselines across all metrics. **6)** Our method does not require model retraining, confirming that our approach offers a better trade-off between accuracy, robustness, sparsity, and pruning cost.
Beyond BERT\({}_{base}\), our methodology was also extended to BERT\({}_{large}\), a model encompassing 330M parameters. The resulting performance, as presented in Table 3, reaffirms the superiority of our method when compared to the baselines. Moreover, we explore the effectiveness of our method within a structured pruning context, and once again, our approach outperforms the state-of-the-art method **EarlyRobust** (Xi et al., 2022). More details can be found in the Appendix.
### Ablation Study
To elucidate the contributions of each part of our approach, we conduct an ablation study with the following setting: we replace our pruning technique with the methods known as **LTH** and **IMP** (Frankle et al., 2020; Frankle and Carbin, 2018) and supplement them with the adversarial training method **FreeLB** (Zhu et al., 2019). The results are presented in Table 2.
\begin{table}
\begin{tabular}{l|c|c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**\#Param**} & \multirow{2}{*}{**ReT**} & \multicolumn{4}{c|}{**SST2**} & \multicolumn{4}{c|}{**AGNEWS**} & \multicolumn{4}{c}{**IMDB**} \\ \cline{4-13} & & & **Acc** & **Aua** & **Asr** & **Acc** & **Aua** & **Asr** & **Acc** & **Aua** & **Asr** \\ \hline Fine-tune & 85M & Y & **92.3** & 12.7 & 86.2 & 94.7 & 19.1 & 80.0 & 95.1 & 7.4 & 92.2 \\ \hline Weight Average & 85M & Y & 91.4 & **30.4** & **66.75** & 94.4 & **48.5** & **48.6** & **95.2** & **44.4** & **53.4** \\ \hline IMP & 43M & Y & **92.6** & 4.8 & 94.8 & **94.9** & 7.1 & 92.5 & 94.1 & 7.7 & 91.8 \\ \hline IMP + FreeLB & 43M & Y & 92.4 & 7.9 & 91.5 & 94.3 & 9.2 & 90.2 & 93.8 & 14.3 & 84.8 \\ \hline LTH & 43M & Y & 91.6 & 2.8 & 96.9 & 93.5 & 10.1 & 89.2 & 93.2 & 4.6 & 95.1 \\ \hline LTH + FreeLB & 43M & Y & 91.7 & 9.8 & 89.3 & 93.2 & 12.3 & 86.8 & 93.1 & 9.5 & 89.8 \\ \hline Ours & 43M & N & 88.31 & **43.1** & **51.2** & 93.4 & **48.5** & **48.1** & **94.2** & **53.2** & **43.6** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation Study with Pruning Methods Replacement. We replace our pruning method with most famous others (**IMP** and **LTH**) supplemented with adversarial training (**FreeLB**). Similarly, the orange entry is used for model initialization. Once again, our method outperforms others in preserving or even enhancing robustness.
Figure 2: Attention Score Visualisation in BERT\({}_{base}\). We have selected an adversarial sample (_“it’s a bewitching and often repercussions journey.”_) from SST2 and visualized the attention scores in the robust and dense model (2b, 2e), the sparse language model generated with IMP+FreeLB (2a, 2d), and the sparse language model created using our method (2c, 2f). Here, Figures 1(a), 1(b), and 1(c) depict the attention scores from the first transformer block of BERT\({}_{Base}\), while Figures 1(d), 1(e),and 1(f) show scores from the last transformer block. Evidently, the attention scores produced by our method align more closely with those from the robust and dense model.
them with the additional adversarial training method **FreeLB** (Zhu et al., 2019). The results are presented in Table 2. From the results, we can make the following key observations: 1) Sparse language models generated by traditional pruning methods perform even worse than the vanilla fine-tuned dense model. This highlights the challenges associated with robust pruning. 2) Our approach consistently generates more robust sparse language models than conventional pruning methods, even when these are supplemented with adversarial training. 3) We conjecture that the limited effect of adversarial training here stems from the discrete nature of word tokens and the substantial loss of pre-trained knowledge during pruning.
### Discussion
In this section, we design additional experiments to illustrate our robust pruning method further.
#### 5.6.1 Pretrained Knowledge Detection
To demonstrate the effectiveness of our robust pruning mechanism in preserving pre-trained knowledge, we have chosen adversarial samples that are effectively defended by our method but not by others. We then visualize their attention scores in Figure 2. Our method demonstrates superior performance, as evidenced by more reasonable attention scores that align more closely with those from the robust and dense model. In addition, we visualize the distances between sentence representations from sparse language models and their dense counterparts in the feature space. As depicted in Table 4 and Figure 5, our method results in smaller distances between the dense and sparse representations. These findings indicate the superior ability of our robust pruning method to preserve the semantic knowledge acquired during pre-training. In other words, our method outperforms others in maintaining robustness during pruning.
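For readers who wish to reproduce this kind of layer-wise comparison, the snippet below is a minimal sketch of our own (not the released code of this work); it assumes that the distance reported in Table 4 is the Euclidean distance between mean-pooled hidden states of the dense and the pruned model at each layer, which is our assumption rather than a detail stated here.

```python
import numpy as np

def layerwise_distance(dense_states, sparse_states):
    """Euclidean distance between mean-pooled sentence representations,
    computed layer by layer.

    dense_states, sparse_states: lists of arrays of shape (seq_len, hidden),
    one array per transformer layer, for the same input sentence.
    """
    dists = []
    for h_dense, h_sparse in zip(dense_states, sparse_states):
        v_dense = h_dense.mean(axis=0)    # mean-pool over the token axis
        v_sparse = h_sparse.mean(axis=0)
        dists.append(float(np.linalg.norm(v_dense - v_sparse)))
    return dists

# Stand-in arrays; in practice these would come from forwarding the same
# (original or adversarial) sentence through the dense and the pruned model.
rng = np.random.default_rng(0)
dense = [rng.normal(size=(12, 768)) for _ in range(12)]
sparse = [h + 0.01 * rng.normal(size=h.shape) for h in dense]
print(layerwise_distance(dense, sparse))
```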
#### 5.6.2 Impact of Calibration Data
The calibration data is crucial for our methodology because it directly affects the computation of the Hessian Matrix. As outlined in Algorithm 1, the Hessian Matrix can be derived from \(H=X^{T}X\). To further explore the impact of the number of data points, we designed experiments that gradually increased the number of data points used in our strategy. The results of these experiments are detailed in Figure 3. Our observations indicate that as the number of data points used increases, the robustness and accuracy of the sparse language models increase, but only up to a certain threshold. We hypothesize that the model can initially retain
\begin{table}
\begin{tabular}{l|c|c|c c|c c|c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**\#Param**} & \multirow{2}{*}{**Re-T**} & \multicolumn{3}{c|}{**SST2**} & \multicolumn{3}{c|}{**AGNEWS**} & \multicolumn{3}{c}{**IMDB**} \\ \cline{4-10} & & & **Acc** & **Aua** & **Asr** & **Acc** & **Aua** & **Asr** & **Acc** & **Aua** & **Asr** \\ \hline Weight Average & 309M & Y & 93.5 & 36.4 & 61.1 & 96.2 & 56.5 & 41.3 & 95.9 & 48.4 & 49.6 \\ \hline Bag-of-Tricks & 155M & N & 90.3 & 27.6 & 69.4 & 93.1 & 35.5 & 61.9 & 93.4 & 29.3 & 68.6 \\ \hline RMC & 155M & Y & **92.6** & 14.7 & 84.1 & 95.4 & 19.2 & 79.9 & **95.8** & 16.7 & 82.6 \\ \hline RobusT & 155M & Y & 92.1 & 29.8 & 67.7 & 95.1 & 32.8 & 65.6 & 95.2 & 31.9 & 66.5 \\ \hline Ours & 155M & N & 91.7 & **47.1** & **48.6** & **95.5** & **53.5** & **44.0** & 95.3 & **55.8** & **41.4** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Summary of Adversarial Robustness Assessment on BERT\({}_{large}\). Similarly, the entry highlighted with an orange background is used for model initialization. Once again, our method consistently outperforms all baselines in terms of the **Aua%** and **Asr%** metrics.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline \multirow{2}{*}{**Layer**} & \multicolumn{3}{c|}{**Distance with dense**} & \multirow{2}{*}{**Data**} \\ \cline{2-4} & **IMP + ADT (\(\Delta\))** & **v.s.** & **Ours (\(\Delta\))** & \\ \hline \multirow{2}{*}{1} & 0.0086 & \(>\) & **0.0000** & Ori \\ & 0.0086 & \(>\) & **0.0000** & Adv \\ \hline \multirow{2}{*}{2} & 0.0144 & \(>\) & **0.0015** & Ori \\ & 0.0142 & \(>\) & **0.0105** & Adv \\ \hline \multirow{2}{*}{3} & 0.0156 & \(>\) & **0.0014** & Ori \\ & 0.0258 & \(>\) & **0.0012** & Adv \\ \hline \multirow{2}{*}{4} & 0.0193 & \(>\) & **0.0017** & Ori \\ & 0.0407 & \(>\) & **0.0107** & Adv \\ \hline \multirow{2}{*}{5} & 0.0324 & \(>\) & **0.0067** & Ori \\ & 0.1319 & \(>\) & **0.0069** & Adv \\ \hline \multirow{2}{*}{6} & 0.0763 & \(>\) & **0.0255** & Ori \\ & 0.0967 & \(>\) & **0.0253** & Adv \\ \hline \multirow{2}{*}{7} & 0.1299 & \(>\) & **0.0869** & Ori \\ & 0.1478 & \(>\) & **0.0861** & Adv \\ \hline \multirow{2}{*}{8} & 0.2530 & \(>\) & **0.1308** & Ori \\ & 0.2547 & \(>\) & **0.1078** & Adv \\ \hline \multirow{2}{*}{9} & 0.1880 & \(>\) & **0.0988** & Ori \\ & 0.2767 & \(>\) & **0.0749** & Adv \\ \hline \multirow{2}{*}{10} & 0.2804 & \(>\) & **0.1254** & Ori \\ & 0.3099 & \(>\) & **0.1049** & Adv \\ \hline \multirow{2}{*}{11} & 0.4932 & \(>\) & **0.2322** & Ori \\ & 0.7317 & \(>\) & **0.0265** & Adv \\ \hline \multirow{2}{*}{12} & 0.6872 & \(>\) & **0.2231** & Ori \\ & 0.6903 & \(>\) & **0.0849** & Adv \\ \hline \hline \end{tabular}
\end{table}
Table 4: Quantitative Analysis of Distance from Sentence Embeddings. We compare the distances between sentence embeddings derived from various layers of dense and sparse language models. Our findings reveal that our method aligns better with the dense model, regardless of whether we use the original or adversarial sentence. Refer to Figure 5 for a visualization of these sentence embeddings.
more general knowledge as data points increase. However, once a threshold is crossed where the new data cannot provide additional information for general features, adding more data points from a similar distribution no longer contributes to model robustness and accuracy.
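To make the role of the calibration data concrete, the following is a minimal sketch (our own illustration, not the implementation used in the paper) of accumulating \(H=X^{T}X\) over calibration batches; the batched accumulation and the diagonal dampening added before inversion are assumptions we introduce only so that the toy example runs stably.

```python
import numpy as np

def calibration_hessian(activation_batches, damp=0.01):
    """Accumulate H = X^T X over calibration batches and return (H, H^{-1}).

    activation_batches: iterable of arrays of shape (n_tokens, d), the inputs
    to the layer being pruned, collected from calibration sentences.
    damp: relative dampening added to the diagonal so that H is invertible.
    """
    H = None
    for X in activation_batches:
        H = X.T @ X if H is None else H + X.T @ X
    H += damp * np.mean(np.diag(H)) * np.eye(H.shape[0])
    return H, np.linalg.inv(H)

rng = np.random.default_rng(0)
batches = [rng.normal(size=(128, 64)) for _ in range(8)]  # toy calibration data
H, H_inv = calibration_hessian(batches)
print(H.shape, H_inv.shape)
```

Adding further calibration batches simply adds more terms of the same form to \(H\), which is consistent with the saturation effect described above once the general feature directions are already covered.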
#### 5.6.3 Impact of Sparsity
As illustrated in Figure 4, we explore the robustness and accuracy of our sparse language models across a range of sparsity levels. In a departure from previous studies (Zheng et al., 2022), our observations indicate that as sparsity increases, robustness decreases at a pace similar to accuracy. This trend suggests that the impact of increasing sparsity on model robustness might be less severe than previously assumed. This disparate pattern may stem from the post-training nature of our method. Furthermore, our observations regarding the trend in robustness align with the findings of previous studies by Zheng et al. (2022) and Liang et al. (2021). We note that the robustness of our sparse language models initially improves as sparsity escalates up to a certain threshold. After crossing this threshold, the robustness begins to decline. However, it sustains a level of robustness that is higher than the peak value observed in other models and does not collapse even with 10x compression. This finding further highlights the outstanding performance of our method in robust pruning.
## 6 Conclusion
In this paper, we investigate the application of robust pruning methods for language models. We propose an adaptive pruning method and place a special emphasis on replicating the embedding and feature space of dense models to preserve as much pre-trained knowledge as possible. The effectiveness of this approach is confirmed through a series of experiments conducted across various tasks.
### Limitations
This work introduces a post-training method that can robustly prune language models without model retraining. Despite bypassing the rigorous retraining process, the computational cost of our method remains significant due to the calculation of the Hessian Matrix and its inverse. Consequently, this approach may not be feasible for language models comprised of billions of parameters. As a next step, we aim to refine our technique to devise a more efficient strategy to replicate the feature space and embedding space of language models.
## Acknowledgements
The authors wish to thank the anonymous reviewers for their helpful comments.
## Ethics Statement
This work complies with the ACL Ethics Policy and we have carried out our research following the highest ethical standards. In our work on developing a new pruning strategy to enhance robustness in language models, we carefully considered the broader implications and ethical dimensions of this innovation.
While our research primarily concerns the improvement of model accuracy, sparsity, and robustness, we acknowledge that the use of these enhanced models can potentially be dual-use, which means they can be applied in both beneficial and harmful ways. An improved model can contribute positively by enhancing various NLP applications such as text summarization, machine translation, and sentiment analysis, potentially increasing efficiency and the overall quality of output.
Figure 4: Impact of Sparsity Levels on SST2
Figure 3: Impact of # of Calibration Data from SST2.
Furthermore, these advancements could contribute to reducing the computational resources required for training and using large language models, which aligns with efforts to reduce the environmental impact of machine learning.
However, the increased robustness of models against adversarial attacks could also be used maliciously if the technology falls into the wrong hands. Bad actors could potentially exploit robust models for the generation of disinformation or manipulation of public sentiment, for instance. Furthermore, although our technique aims to faithfully replicate the feature space of dense models, bias present in the original training data could be preserved in the pruned models. Consequently, decisions made based on the output of these models could perpetuate these biases.
We encourage the use of our findings and methods for applications that promote the public good and contribute to human welfare. Further, we recommend that researchers and practitioners using this technique take into account potential biases in their training data and consider strategies for minimizing their impact. In the future, we hope to conduct more research on mitigating bias and other ethical issues associated with our pruning strategy. It is our belief that technology should be developed and used in a way that is transparent, fair, and beneficial to all.
|
2305.18272 | Constructing non-AMNM weighted convolution algebras for every
semilattice of infinite breadth | The AMNM property for commutative Banach algebras is a form of Ulam stability
for multiplicative linear functionals. We show that on any semilattice of
infinite breadth, one may construct a weight for which the resulting weighted
convolution algebra fails to have the AMNM property. Our work is the
culmination of a trilogy started in [Semigroup Forum 102 (2021), no. 1, 86-103]
and continued in [European J. Combin. 94 (2021), article 103311]. In
particular, we obtain a refinement of the main result of the second paper, by
establishing a dichotomy for union-closed set systems that has a
Ramsey-theoretic flavour. | Yemon Choi, Mahya Ghandehari, Hung Le Pham | 2023-05-29T17:47:53Z | http://arxiv.org/abs/2305.18272v1 | # Constructing non-AMNM weighted convolution algebras for every semilattice of infinite breadth
###### Abstract
The AMNM property for commutative Banach algebras is a form of Ulam stability for multiplicative linear functionals. We show that on any semilattice of infinite breadth, one may construct a weight for which the resulting weighted convolution algebra fails to have the AMNM property. Our work is the culmination of a trilogy started in [1] and continued in [1]. In particular, we obtain a refinement of the main result of [1], by establishing a dichotomy for union-closed set systems that has a Ramsey-theoretic flavour.
Keywords: approximately multiplicative, AMNM, breadth, convolution algebra, perturbation, semilattice, set system, Ulam stability.
MSC 2020: Primary 39B82, 43A20. Secondary 05D10, 06A07, 06A12.
_Dedicated to the memory of H. Garth Dales (1944-2022)_
## 1 Introduction
### Character stability and the AMNM property
A fundamental question in various branches of mathematics is to determine whether "locally approximate versions" of a given structure are small perturbations of that structure in the global sense. Many variations of this question have been studied, often under the name "Ulam stability"; we shall not attempt a comprehensive history here, but for recent work in this direction see e.g. [1, 2, 19, 18, 15].
This article is concerned with a form of this question regarding multiplicative linear functionals on commutative Banach algebras. Given a commutative Banach algebra \(A\), a bounded linear functional \(\psi:A\to\mathbb{C}\) is called _multiplicative_ if it satisfies \(\psi(ab)=\psi(a)\psi(b)\) for all \(a,b\in A\). More generally, given \(\delta>0\), we say that \(\psi\in A^{*}\) is _\(\delta\)-multiplicative_ if the bilinear map \((a,b)\mapsto\psi(a)\psi(b)-\psi(ab)\) has norm at most \(\delta\). An obvious way to obtain examples is to take a multiplicative functional \(\phi\) and put \(\psi=\phi+\mu\) for some \(\mu\in A^{*}\) of suitably small norm, thought of as a perturbation of \(\phi\). The analogue of Ulam's question now becomes: do _all_ "approximately multiplicative" functionals on \(A\) occur as small perturbations of multiplicative functionals?
In [10], Johnson undertook a systematic study of this phenomenon, and coined the acronym AMNM to describe those commutative Banach algebras for which the answer to this Ulam-type question is affirmative. Here AMNM stands for "approximately multiplicative functionals are near to multiplicative ones". Many examples are studied
in [10]: it is shown there that abelian C\({}^{*}\)-algebras, \(L^{1}\)-convolution algebras of locally compact abelian groups, the Banach spaces \(\ell^{p}\) with pointwise product, and certain algebras of holomorphic functions (including the disc algebra and \(\ell^{1}(\mathbb{Z}_{+})\) and \(L^{1}(\mathbb{R}_{+})\)), are all AMNM. On the other hand, it is also shown in [10] that the classical Volterra algebra \(L^{1}(0,1)\) is _not_ AMNM.
In general, it seems that there is no "one-size fits all" method for determining if the AMNM property holds for members of some particular class of commutative Banach algebras. For instance, while many familiar uniform algebras are AMNM, see [11], it remains unknown if \(H^{\infty}\) of the disc is AMNM; the existence of uniform algebras which are not AMNM was open for several years, but such an example was constructed in [13].
In the present paper, we complete a project that was initiated by work of the first author [14] and continued in recent work of the present authors in [15, 16], where the AMNM property is studied for weighted \(\ell^{1}\)-convolution algebras of certain semigroups called "semilattices". Our results give a complete description of those semilattices for which all such weighted convolution algebras are AMNM, or conversely, all those for which there exists some weight producing a non-AMNM algebra. The precise statements are given in the next section, once we have set up the necessary terminology.
### Weighted semilattice algebras: old and new results
In the context of semigroup theory, a _semilattice_ is a commutative semigroup in which each element is idempotent. Such semigroups play a particularly important role in the structure theory of semigroups, see e.g. [17]. Moreover, the investigation of convolution algebras over semilattices fits into a well-established theme of studying how structural properties of a semigroup are reflected in properties of their associated Banach algebras. For more specific motivation, see [14, SS1].
In this paper, we consider weighted convolution algebras over semilattices. A _weight_ on a semilattice \(S\) is a function \(\omega:S\to(0,\infty)\), and we may then define the associated weighted \(\ell^{1}\)-space \(\ell^{1}(S,\omega)\) as the set of all functions \(f:S\to\mathbb{C}\) satisfying \(\sum_{s\in S}|f(s)|\omega(s)<\infty\). Here, we only deal with weights that are _submultiplicative_, i.e. satisfying \(\omega(xy)\leq\omega(x)\omega(y)\) for all \(x\) and \(y\) in \(S\). Note that for each \(x\in S\), the identity \(x^{2}=x\) and the submultiplicative condition force \(\omega(x)\geq 1\). The condition that \(\omega\) be submultiplicative guarantees that \(\ell^{1}(S,\omega)\), when equipped with the natural weighted norm, becomes a Banach algebra with respect to the convolution product
\[(f*g)(x):=\sum_{(s,t)\in S\times S:\;st=x}f(s)g(t).\]
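For a finite semilattice the convolution product can be computed directly from this formula; the following small Python sketch (ours, purely illustrative and not part of the paper) does so for finitely supported functions, with sets under union playing the role of the semilattice.

```python
from itertools import product
from collections import defaultdict

def convolve(f, g, op):
    """(f*g)(x) = sum of f(s) g(t) over all pairs (s, t) with s t = x."""
    out = defaultdict(float)
    for s, t in product(f, g):
        out[op(s, t)] += f[s] * g[t]
    return dict(out)

# Finitely supported functions on the semilattice of subsets of {1,2} under union.
op = frozenset.union
f = {frozenset({1}): 2.0, frozenset({2}): -1.0}
g = {frozenset({2}): 3.0, frozenset({1, 2}): 0.5}
print(convolve(f, g, op))
# The coefficient at {1,2} collects the products from the pairs ({1},{2}),
# ({1},{1,2}) and ({2},{1,2}), since each of these pairs has union {1,2}.
```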
Definition 1.1.: Given a semilattice \(S\) and a submultiplicative weight \(\omega\), we refer to \(\ell^{1}(S,\omega)\) equipped with this convolution product as the _weighted convolution algebra_ of the pair \((S,\omega)\).
Remark 1.2.: It is well known that if \(S\) is a semilattice then the unweighted convolution algebra \(\ell^{1}(S)\) is semisimple (see e.g. [13, SS3]). Since any submultiplicative weight \(\omega:S\to(0,\infty)\) must automatically satisfy \(\omega\geq 1\), it follows that \(\ell^{1}(S,\omega)\subseteq\ell^{1}(S)\), and so weighted convolution algebras on \(S\) are also semisimple. Thus the algebras considered in this paper may be viewed as Banach function algebras when represented on suitable carrier spaces.
To our knowledge, the first detailed study of the AMNM problem for weighted semilattice algebras was in work of the first author [14]. The following result was first obtained in that paper; see also Remark 4.7 of [15] for an alternate proof.
Theorem A ([12, Example 3.13 and Theorem 3.14]).: _Let \(S\) be a semilattice that has "finite breadth". Then \(\ell^{1}(S,\omega)\) is AMNM for every submultiplicative weight \(\omega\)._
The breadth of a semilattice takes values in \(\mathbb{N}\cup\{\infty\}\), and measures the internal complexity of the natural partial order in \(S\). The precise definition is rather technical and will be given in Definition 2.8 below. Semilattices of finite breadth can encompass a variety of behaviour, as shown in [1, SSSS4-5], and so Theorem A provides a large source of semisimple commutative Banach algebras with the AMNM property.
In contrast, an explicit example is constructed in [12, Theorem 3.4], of a semilattice \(S\) and a weight \(\omega\) for which \(\ell^{1}(S,\omega)\) is not AMNM. This \(S\) necessarily has infinite breadth, which motivated the first author to ask in [12, SS6] whether the converse of Theorem A is true. That is:
if \(S\) has infinite breadth, does it always admit some submultiplicative weight
\(\omega\) for which \(\ell^{1}(S,\omega)\) is non-AMNM?
In [13, Theorem 4.8] we partially answered this question, obtaining a positive answer whenever \(S\) is a subsemilattice of \(\mathcal{P}^{\mathrm{fin}}(\Omega)\), the collection of all finite subsets of a set \(\Omega\) equipped with union as the semilattice operation. In this article, by building on combinatorial techniques developed in [13], we are able to extend the partial answer to a full answer. This is the main new result of this article, and it provides a large supply of semisimple commutative Banach algebras which _do not_ have the AMNM property.
Theorem B (Converse to Theorem A).: _Let \(S\) be a semilattice that has "infinite breadth". Then a submultiplicative weight \(\omega\) can be constructed such that \(\ell^{1}(S,\omega)\) is not AMNM._
Semilattices of infinite breadth should not be seen as rare or pathological. For example, for any infinite set \(\Omega\), the power set \(\mathcal{P}(\Omega)\) forms a semilattice of infinite breadth (with union as the binary operation). There are many other possibilities. However, it turns out that there are three particular semilattices of infinite breadth, denoted by \(\mathcal{T}_{\mathrm{max}}\), \(\mathcal{T}_{\mathrm{min}}\), and \(\mathcal{T}_{\mathrm{ort}}\), which play a key role when considering general semilattices of infinite breadth. In [13, Theorem 1.6], it was shown that if \(S\) is a semilattice with infinite breadth, then there is a homomorphic image of \(S\) which contains a copy of either \(\mathcal{T}_{\mathrm{max}}\), \(\mathcal{T}_{\mathrm{min}}\) or \(\mathcal{T}_{\mathrm{ort}}\).
To prove Theorem B, we need to prove a refined version of [13, Theorem 1.6], which we obtain using a combinatorial result with a Ramsey-theoretic flavour stated in Theorem 4.4. The results of [13] work with union-closed set systems \(\mathcal{S}\subseteq\mathcal{P}(\Omega)\) and implicitly use the notion of a _spread in \(\Omega\)_ (see Definition 2.10). In this paper we pursue a deeper study of how the set system can interact with such a spread, building up to Theorem 4.4. The result requires too many technical definitions to be stated here, but loosely speaking it says that when \(\mathcal{S}\) interacts with a spread, we can control the complexity in a certain technical sense via a _colouring_ of the spread, unless we are in a special situation with very high complexity ("shattering").
### Organization of the paper
In Section 2, we provide the preliminaries and background on weighted semilattice algebras, breadth of a semilattice, and important examples of semilattices with infinite breadth. In Theorem 2.12, we provide a self-contained statement of the main result of [13], describing the structure of semilattices with infinite breadth in terms of the occurrence of \(\mathcal{T}_{\mathrm{max}}\), \(\mathcal{T}_{\mathrm{min}}\) and \(\mathcal{T}_{\mathrm{ort}}\) in them. Then in Section 3, we give a simple construction of non-AMNM weights for semilattices which contain \(\mathcal{T}_{\mathrm{max}}\) or \(\mathcal{T}_{\mathrm{min}}\) in the sense of Theorem 2.12.
We devote Section 4 to notions of shattering and colouring, and the proof of Theorem 4.4. Using this we can give a refinement of our structure theorem for semilattices of infinite breadth, stated in Theorem 4.6. We use this structure theorem in Section 5 to produce non-AMNM weights for semilattices containing \(\mathcal{T}_{\mathrm{ort}}\). The last section of the paper contains examples showing that the construction of non-AMNM weights is somewhat delicate.
## 2 Preliminaries
### Weighted semilattices and propagation
A _semilattice_ is a commutative semigroup \(S\) satisfying \(x^{2}=x\) for all \(x\in S\). For two elements \(x,y\in S\), we say that \(y\) is a _multiple_ of \(x\) or \(x\) is a _factor_ of \(y\) or \(x\)_divides_\(y\) and write \(x\,|\,y\) if there exists \(z\in S\) such that \(y=xz\); which for the semilattice \(S\) is simply equivalent to \(xy=y\). The divisibility relation provides \(S\) with a standard and canonical partial order, and to signify this aspect of the relation, we sometimes write \(y\preceq x\) instead of \(x\,|\,y\). With respect to this particular partial order, \(xy\) is the _meet_ (or _greatest lower bound_) of \(x\) and \(y\); this gives an alternative, order-theoretic definition of a semilattice.
We repeat some terminology and notation from [10] for the reader's convenience. A weighted semilattice is a semilattice \(S\) equipped with a submultiplicative weight. It will be convenient for later calculations if we switch to working with _log-weights_, by which we mean functions \(\lambda:S\to[0,\infty)\) that satisfy \(\lambda(xy)\leq\lambda(x)+\lambda(y)\) for all \(x,y\in S\). Given such a \(\lambda\) and \(L\geq 0\), we define the level-\(L\) set \(W_{L}(S,\lambda)=\{x\in S\colon\lambda(x)\leq L\}\). When there is no danger of confusion we abbreviate this to \(W_{L}\).
Definition 2.1 (Filters in semilattices).: Let \(S\) be a semilattice and let \(F\subseteq S\). We say that \(F\) is a _filter in \(S\)_ if it is non-empty and satisfies
\[\forall\;x,y\in S\quad(xy\in F\Longleftrightarrow x,y\in F).\]
If \(E\subseteq S\), let \(\mathrm{fil}(E)\) denote the _filter-or-empty-set generated by \(E\)_, i.e. \(\mathrm{fil}(\emptyset)=\emptyset\), and \(\mathrm{fil}(E)\) is the intersection of all filters \(X\subseteq S\) containing \(E\), if \(E\neq\emptyset\). Let \(E\subseteq S\) be non-empty. Note that if \(x,y\in E\) and \(z\succeq xy\) (that is, \(z\) is a factor of \(xy\)) then \(z\in\mathrm{fil}(E)\). Moreover, every \(z\in\mathrm{fil}(E)\) satisfies \(z\succeq x_{1}\cdots x_{k}\) for some \(x_{1},\ldots,x_{k}\in E\).
Definition 2.2 (FBP\({}_{C}\)-stability).: Let \(C\geq 0\). For \(E\subseteq S\) we define _factors of binary products_ of \(E\) as
\[\mathrm{FBP}_{C}(E):=\{z\in W_{C}\colon\text{there exist $x,y\in E\cap W_{C}$ such that $z\succeq xy$}\}\,.\]
Note that \(\mathrm{FBP}_{C}(\emptyset)=\emptyset\). We also define \(\mathrm{FBP}_{C}^{0}(E)=E\cap W_{C}\), and for \(k\geq 1\) recursively define \(\mathrm{FBP}_{C}^{k}(E)=\mathrm{FBP}_{C}(\mathrm{FBP}_{C}^{k-1}(E))\).
Definition 2.3 (Propagation).: For \(z\in\mathrm{fil}(E)\), let
\[V_{E}(z)=\inf\left\{C\geq 0\colon\exists\;n\geq 0\text{ s.t. }z\in\mathrm{FBP}_{C}^{n}(E)\right\}.\]
Given \(L\geq 0\), we say that \((S,\lambda)\)_propagates at level \(L\)_, or _has \(L\)-propagation_, if
\[\sup_{\emptyset\neq E\subseteq W_{L}}\sup_{z\in\mathrm{fil}(E)\cap W_{L}}V_{E }(z)<\infty\;.\]
It is convenient to set \(V_{E}(z):=+\infty\) whenever \(z\notin\mathrm{fil}(E)\).
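For a finite union-closed set system these definitions are directly computable; the sketch below (our own illustration, not part of the paper) iterates \(\mathrm{FBP}_{C}\) and evaluates \(V_{E}(z)\) by searching over the attained weight values, which suffices because \(W_{C}\) only changes at those values.

```python
from itertools import combinations, product

def fbp(E, S, lam, C):
    """One step FBP_C(E): all z in W_C that are factors (here: subsets) of
    some product x y = x ∪ y with x, y in E ∩ W_C."""
    W_C = {z for z in S if lam[z] <= C}
    EC = E & W_C
    return {z for z in W_C
            if any(z <= (x | y) for x, y in product(EC, repeat=2))}

def V(E, z, S, lam):
    """V_E(z): the least C for which z is reached by iterating FBP_C starting
    from E ∩ W_C; +inf if z is never reached (i.e. z is not in fil(E))."""
    for C in sorted(set(lam.values())):
        reach = {x for x in E if lam[x] <= C}
        while True:
            if z in reach:
                return C
            nxt = fbp(reach, S, lam, C)
            if nxt == reach:
                break
            reach = nxt
    return float("inf")

# Tiny demo: S = power set of {1,2,3}; lam(x) = |x|, except lam of the full set
# is 0, so the target is cheap but only reachable through heavier sets.
Omega = frozenset({1, 2, 3})
S = [frozenset(c) for r in range(4) for c in combinations(sorted(Omega), r)]
lam = {x: (0 if x == Omega else len(x)) for x in S}
E = {frozenset({i}) for i in Omega}
print(V(E, Omega, S, lam))   # 2: the full set is first reachable at level C = 2
```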
We now connect these definitions to the original AMNM problem. In the introduction, we defined what it means for a functional on a Banach algebra \(A\) to be multiplicative or \(\delta\)-multiplicative. We write \(\operatorname{\mathrm{Mult}}(A)\) for the set of multiplicative functionals on \(A\) (note that this always includes the zero functional).
Definition 2.4 (Johnson, [14]).: Let \(A\) be a commutative Banach algebra. We say that \(A\)_has the AMNM property_, or that \(A\)_is AMNM_, if for every \(\varepsilon>0\) we can find \(\delta>0\) such that every \(\delta\)-multiplicative \(\psi\in A^{*}\) satisfies \(\operatorname{\mathrm{dist}}(\psi,\operatorname{\mathrm{Mult}}(A))<\varepsilon\).
Theorem 2.5 ([13, Remark 2.7 and Theorem 3.7]).: _Let \((S,\omega)\) be a weighted semilattice and let \(\lambda=\log\omega\). The following conditions are equivalent._
1. \(\ell^{1}(S,\omega)\) _is AMNM._
2. \((S,\lambda)\) _has_ \(L\)_-propagation for all_ \(L\geq 0\)_._
### Concrete semilattices and breadth
To construct log-weights with the desired properties, we shall switch perspective and work with "concrete semilattices" that arise as union-closed set systems.
Definition 2.6.: Let \(\Omega\) be a non-empty set, and write \(\mathcal{P}(\Omega)\) for its power set. A _union-closed set system_ or _concrete semilattice_ on \(\Omega\) is a subset \(\mathcal{S}\subseteq\mathcal{P}(\Omega)\) which is closed under taking finite unions; this is clearly a semilattice, where set-union serves as the binary operation.
Every semilattice can be viewed as a concrete semilattice. Indeed, given a semilattice \(S\), for each \(x\in S\) let \(E_{x}\mathbin{:=}S\setminus\{y\in S\colon x\,|\,y\}\). It is easily checked that \(E_{x}\cup E_{y}=E_{xy}\) for all \(x,y\in S\). Therefore, the function \(E_{\bullet}\mathbin{:}S\to\mathcal{P}(S)\), \(x\mapsto E_{x}\), defines an injective semilattice homomorphism from \(S\) into \((\mathcal{P}(S),\cup)\). This is sometimes known as the _Cayley embedding_ of a semilattice. Using this result, from this point onward, we assume every semilattice is a union-closed set system. (However, see Remark 2.13 for some subtleties.)
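The Cayley embedding is easy to check mechanically on a small example; the following sketch (ours, not from the paper) verifies the identity \(E_{x}\cup E_{y}=E_{xy}\) and injectivity for the semilattice \(\{1,2,3,6\}\) under \(\gcd\).

```python
from math import gcd

S = [1, 2, 3, 6]   # a semilattice: gcd is commutative, associative and idempotent
op = gcd

def E(x):
    """Cayley embedding: E_x = S minus the semilattice-multiples of x,
    i.e. minus those y with x | y in the sense op(x, y) == y."""
    return frozenset(y for y in S if op(x, y) != y)

for x in S:
    for y in S:
        assert (E(x) | E(y)) == E(op(x, y))     # homomorphism into (P(S), ∪)
assert len({E(x) for x in S}) == len(S)         # injectivity
print({x: sorted(E(x)) for x in S})
```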
We shall use the following notational conventions when working with set systems on a given set \(\Omega\). Elements of \(\Omega\) will usually be denoted by lower-case Greek letters. Set systems on \(\Omega\) (i.e. subsets of \(\mathcal{P}(\Omega)\)) will usually be denoted by letters such as \(\mathcal{B}\), \(\mathcal{S}\), etc. If \(\mathcal{B}\) is such a set system, we refer to _members of \(\mathcal{B}\)_ rather than elements. If \(\mathcal{B}\) and \(\mathcal{S}\) are set systems on \(\Omega\) we denote their union and intersection by \(\mathcal{B}\vee\mathcal{S}\) and \(\mathcal{B}\wedge\mathcal{S}\). Members of a set system \(\mathcal{S}\) will be denoted by letters such as \(\mathsf{a}\), \(\mathsf{b}\), \(\mathsf{p}\), etc., and we write \(\mathsf{a}\cup\mathsf{b}\) and \(\mathsf{a}\cap\mathsf{b}\) for their union and intersection respectively. If it happens that \(\mathsf{a}\) and \(\mathsf{b}\) are disjoint subsets of \(\Omega\) we shall sometimes emphasise this by writing their union as \(\mathsf{a}\mathbin{\dot{\cup}}\mathsf{b}\). Note that in a concrete semilattice \(\mathcal{S}\), \(\mathsf{a}\) is a factor of \(\mathsf{b}\) precisely when \(\mathsf{a}\subseteq\mathsf{b}\).
The following definitions are notions from lattice theory, restated in the current setting of union-closed set systems.
Definition 2.7.: Given \(\mathcal{F}\subseteq\mathcal{P}(\Omega)\), the _join of \(\mathcal{F}\)_ is the set \(\operatorname{\mathrm{join}}(\mathcal{F})\mathbin{:=}\bigcup_{\mathsf{x}\in \mathcal{F}}\mathsf{x}\). If \(\mathcal{F}\) is a finite subset of a union-closed set system \(\mathcal{S}\), then \(\operatorname{\mathrm{join}}(\mathcal{F})\in\mathcal{S}\).
Definition 2.8.: Let \(\mathcal{S}\) be a union-closed set system. Given a finite, non-empty subset \(\mathcal{E}\subseteq\mathcal{S}\), we say \(\mathcal{E}\) is _compressible_ if there exists a proper subset \(\mathcal{E}^{\prime}\subset\mathcal{E}\) such that \(\operatorname{\mathrm{join}}(\mathcal{E}^{\prime})=\operatorname{\mathrm{join} }(\mathcal{E})\); otherwise, we say \(\mathcal{E}\) is _incompressible_. The _breadth_ of a semilattice \(\mathcal{S}\) is defined to be
\[b(\mathcal{S}) =\inf\{n\in\mathbb{N}\colon\text{every $\mathcal{E}\subseteq\mathcal{S}$ with $n+1$ members is compressible}\}\] \[=\sup\{n\in\mathbb{N}\colon\mathcal{S}\text{ has an incompressible subset with $n$ members}\}.\]
Example 2.9.: We write \(\mathcal{P}^{\mathrm{fin}}(\Omega)\) for the set of all finite subsets of \(\Omega\); this is a concrete semilattice on \(\Omega\). Note that when \(\Omega\) is infinite, \(\mathcal{P}^{\mathrm{fin}}(\Omega)\) has infinite breadth, since for any \(\gamma_{1},\dots,\gamma_{n}\in\Omega\) the set \(\{\{\gamma_{j}\}\colon 1\leq j\leq n\}\) is an incompressible subset of \(\mathcal{P}^{\mathrm{fin}}(\Omega)\).
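Definition 2.8 can be tested by brute force on small examples; the sketch below (ours, exponential in \(|\mathcal{S}|\) and only meant as an illustration) checks incompressibility by removing one member at a time, which suffices because a proper subset with the same join can always be enlarged to one of that form.

```python
from itertools import combinations

def join(F):
    out = frozenset()
    for x in F:
        out |= x
    return out

def incompressible(F):
    """No proper subset of F has the same join; it is enough to test the
    subsets obtained by deleting a single member."""
    J = join(F)
    return all(join(set(F) - {x}) != J for x in F)

def breadth(S):
    """Largest size of an incompressible subset of S (brute force)."""
    best = 0
    for r in range(1, len(S) + 1):
        if any(incompressible(F) for F in combinations(S, r)):
            best = r
    return best

# The power set of {1,2,3} has breadth 3 (the singletons are incompressible),
# while a chain such as {1} ⊂ {1,2} ⊂ {1,2,3} has breadth 1.
P = [frozenset(c) for r in range(4) for c in combinations({1, 2, 3}, r)]
print(breadth(P))                                                          # 3
print(breadth([frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]))  # 1
```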
A little thought shows that the notion of compressibility for a subset of \(\mathcal{S}\) only depends on the underlying semigroup structure of \((\mathcal{S},\cup)\). Therefore, the breadth of a semilattice \(S\) is an _intrinsic_ invariant, which does not depend on any particular concrete representation of \(S\) as a union-closed set system. (For a direct definition without using concrete representations, see e.g. [10, Definition 4.5].)
The breadth of a semilattice sheds some light on its structure, and is related to more familiar order-theoretic concepts such as height and width. (Some basic links, with references, are surveyed in [1, Section 4.1].) For instance, by examining incompressible subsets, one sees that if \(b(\mathcal{S})\geq n\) then \(\mathcal{S}\) contains a chain (totally ordered subset) and an antichain (subset in which no two elements are comparable) both of cardinality \(n\). In particular, a semilattice \(\mathcal{S}\) has breadth \(1\) exactly when the poset \((\mathcal{S},\preceq)\) is totally ordered.
### Special set systems
If \(b(\mathcal{S})=\infty\), then there are arbitrarily large finite subsets of \(\mathcal{S}\) that are incompressible. However, unlike the example of \(\mathcal{P}^{\mathrm{fin}}(\Omega)\), we cannot always arrange for these to be nested in an infinite sequence \(\mathcal{F}_{1}\subseteq\mathcal{F}_{2}\subseteq\dots\) This can be seen very clearly with the three key examples that will be introduced in Definition 2.11.
Before defining these examples, we introduce some terminology that will be convenient for Section 4.
Definition 2.10.: A _spread_ in a set \(\Omega\) is a sequence \(\mathcal{E}=(E_{n})_{n\geq 1}\) of finite non-empty subsets of \(\Omega\) which are pairwise disjoint and satisfy \(|E_{n}|\to\infty\). A _refinement of \(\mathcal{E}\)_ is a spread \(\mathcal{F}=(F_{j})_{j\geq 1}\) with the property that each \(F_{j}\) is contained in some \(E_{n(j)}\) and \(n(j)\neq n(k)\) whenever \(j\neq k\).
By a slight abuse of notation: if \(\mathcal{E}=(E_{n})_{n\geq 1}\) is a spread, we shall write \(\mathrm{join}(\mathcal{E})\) for \(\bigcup_{n\geq 1}E_{n}\).
Definition 2.11 (Three special set systems).: Let \(\mathcal{E}=(E_{n})_{n\geq 1}\) be a spread in a set \(\Omega\). For \(n\in\mathbb{N}\), let \(E_{<n}\mathrel{\mathop{:}}=E_{1}\cup\dots\cup E_{n-1}\) (with the convention that \(E_{<1}=\emptyset\)) and let \(E_{>n}\mathrel{\mathop{:}}=\dot{\bigcup}_{j\geq n+1}E_{j}\). We now define the following set systems on \(\Omega\):
\[\mathcal{T}_{\mathrm{max}}(\mathcal{E}) \mathrel{\mathop{:}}=\bigvee_{n\geq 1}\bigvee_{\emptyset\neq a \subseteq E_{n}}\left\{E_{<n}\,\dot{\cup}\,\mathsf{a}\right\},\] \[\mathcal{T}_{\mathrm{min}}(\mathcal{E}) \mathrel{\mathop{:}}=\bigvee_{n\geq 1}\bigvee_{\emptyset\neq a \subseteq E_{n}}\left\{\mathsf{a}\,\dot{\cup}\,E_{>n}\right\},\] \[\mathcal{T}_{\mathrm{ort}}(\mathcal{E}) \mathrel{\mathop{:}}=\bigvee_{n\geq 1}\bigvee_{\emptyset\neq a \subseteq E_{n}}\left\{E_{<n}\,\dot{\cup}\,\mathsf{a}\,\dot{\cup}\,E_{>n} \right\}.\]
Note that this definition is slightly more general than [10, Definition 1.3], where \(|E_{n}|=n+1\) was assumed.
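For a spread truncated to finitely many blocks, the three set systems can be listed explicitly; the following sketch (our illustration) does exactly that, and its output can be fed to a brute-force breadth check such as the one sketched after Example 2.9.

```python
from itertools import chain, combinations

def nonempty_subsets(E):
    E = sorted(E)
    return [frozenset(c) for r in range(1, len(E) + 1) for c in combinations(E, r)]

def special_systems(blocks):
    """Finite truncations of T_max, T_min and T_ort (Definition 2.11) for
    pairwise disjoint blocks E_1, ..., E_N."""
    blocks = [frozenset(E) for E in blocks]
    t_max, t_min, t_ort = set(), set(), set()
    for n in range(len(blocks)):
        below = frozenset(chain.from_iterable(blocks[:n]))      # E_{<n}
        above = frozenset(chain.from_iterable(blocks[n + 1:]))  # E_{>n}
        for a in nonempty_subsets(blocks[n]):
            t_max.add(below | a)
            t_min.add(a | above)
            t_ort.add(below | a | above)
    return t_max, t_min, t_ort

blocks = [{1, 2}, {3, 4, 5}]                  # a spread truncated to two blocks
t_max, t_min, t_ort = special_systems(blocks)
print(len(t_max), len(t_min), len(t_ort))     # 10 10 9 for these block sizes
```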
The key role of these examples was shown in [10] by the following result.
Theorem 2.12 ([10, Theorem 1.6]).: _Let \(\mathcal{S}\) be a union-closed set system on \(\Omega\). Suppose \(\mathcal{S}\) has infinite breadth. Then there is a spread in \(\Omega\), denoted by \(\mathcal{E}=(E_{n})_{n\in\mathbb{N}}\), such that at least one of the following statements holds:_
(i) \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{E})\colon\mathsf{x}\in\mathcal{S}\}\supseteq\mathcal{T}_{\max}(\mathcal{E})\).
(ii) \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{E})\colon\mathsf{x}\in\mathcal{S}\}\supseteq\mathcal{T}_{\min}(\mathcal{E})\).
(iii) \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{E})\colon\mathsf{x}\in\mathcal{S}\}\supseteq\mathcal{T}_{\operatorname{ort}}(\mathcal{E})\).
Remark 2.13.: In applying this theorem to prove Theorem B, we are choosing to represent an abstract semilattice \(S\) as a concrete semilattice \(\mathcal{S}\). The following example shows that the choice of "concrete representation" may affect which case of Theorem 2.12 occurs. (It plays no role in the proof of Theorem B, and can be skipped on a first reading.)
Example 2.14.: Set \(E_{n}:=\{(n,k)\colon 1\leq k\leq n\}\subset\mathbb{N}^{2}\), and set
\[\mathcal{E}:=(E_{n})_{n\geq 1},\quad\Omega_{0}:=\operatorname{join}(\mathcal{E }),\quad\text{and}\quad\Omega:=\Omega_{0}\sqcup\mathbb{N}.\]
Here \(\sqcup\) denotes the formal disjoint union of sets. For each \(\mathsf{a}\in\mathcal{T}_{\min}(\mathcal{E})\), define \(\operatorname{level}(\mathsf{a})\) to be the level of \(\mathsf{a}\) as indicated in Figure 1. Define
\[\mathcal{S}:=\left\{\mathsf{a}\sqcup\{1,\ldots,m\}:\ \mathsf{a}\in\mathcal{T}_{ \min}(\mathcal{E}),\ m\in\mathbb{N},\ m\geq\operatorname{level}(\mathsf{a}) \geq 2\right\}. \tag{2.1}\]
For \(\mathsf{a}_{i}\in\mathcal{T}_{\min}(\mathcal{E})\) and \(m_{i}\geq\operatorname{level}(\mathsf{a}_{i})\)\((i=1,2)\),
\[(\mathsf{a}_{1}\sqcup\{1,\ldots,m_{1}\})\cup(\mathsf{a}_{2}\sqcup\{1,\ldots, m_{2}\})=(\mathsf{a}_{1}\cup\mathsf{a}_{2})\sqcup\{1,\ldots,\max(m_{1},m_{2})\}\]
where \(\operatorname{level}(\mathsf{a}_{1}\cup\mathsf{a}_{2})=\min(\operatorname{ level}(\mathsf{a}_{1}),\operatorname{level}(\mathsf{a}_{2}))\leq\max(m_{1},m_{2})\). Thus \(\mathcal{S}\) is a concrete semilattice on \(\Omega\). It satisfies (ii) of Theorem 2.12, since \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{E})\colon\mathsf{x}\in\mathcal{ S}\}=\mathcal{T}_{\min}(\mathcal{E})\) by construction.
The following short argument shows that \(\mathcal{S}\) satisfies neither (i) nor (iii) of that theorem. Suppose that \(\mathcal{F}=(F_{n})_{n=1}^{\infty}\) is a spread on \(\Omega\) such that \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{F})\colon\mathsf{x}\in\mathcal{S}\}\) contains either \(\mathcal{T}_{\max}(\mathcal{F})\) or \(\mathcal{T}_{\operatorname{ort}}(\mathcal{F})\). We may suppose that \(\operatorname{join}(\mathcal{F})\subseteq\Omega_{0}\) (if not, simply replace each \(F_{n}\) by \(F_{n}\cap\Omega_{0}\) and remove any resulting empty set, noting that \(|F_{n}\setminus\Omega_{0}|\leq 1\) for all \(n\)). Then there exist only finitely many members of \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{F})\colon\mathsf{x}\in\mathcal{S}\}\) that meet \(F_{1}\), by (2.1). However, there are infinitely many members of \(\mathcal{T}_{\max}(\mathcal{F})\) and of \(\mathcal{T}_{\operatorname{ort}}(\mathcal{F})\) that meet \(F_{1}\); a contradiction.
On the other hand, set \(\Omega^{\prime}:=\Omega_{0}\times\mathbb{N}\), and define
\[\mathcal{S}^{\prime}:=\left\{(\mathsf{a}\times\mathbb{N})\cup(\Omega_{0} \times\{1,\ldots,m\}):\ \mathsf{a}\in\mathcal{T}_{\min}(\mathcal{E}),\ m\in\mathbb{N},\ m\geq \operatorname{level}(\mathsf{a})\geq 2\right\}. \tag{2.2}\]
Then \(\mathcal{S}^{\prime}\) is a concrete semilattice on \(\Omega^{\prime}\), and a little thought shows that \(\mathcal{S}\) and \(\mathcal{S}^{\prime}\) are isomorphic as abstract semilattices. However, we claim that \(\mathcal{S}^{\prime}\) does not satisfy (ii).
Proof of the claim.: First of all, observe that for any sequence \((\mathsf{x}_{j})_{j\geq 1}\) of distinct members of \(\mathcal{S}^{\prime}\) one has \(\bigcup_{j=1}^{\infty}\mathsf{x}_{j}=\Omega^{\prime}\). Indeed, write \(\mathsf{x}_{j}=(\mathsf{a}_{j}\times\mathbb{N})\cup(\Omega_{0}\times\{1,\dots, m_{j}\})\). Since \(\{\mathsf{x}_{j}\}\) is infinite, (2.2) implies that \(\{m_{j}\}\) is not bounded, and so \(\bigcup_{j=1}^{\infty}\left(\Omega_{0}\times\{1,\dots,m_{j}\}\right)=\Omega^{\prime}\).
Now suppose there exists a spread \(\mathcal{F}=(F_{k})_{k\geq 1}\) such that \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{F})\colon\mathsf{x}\in \mathcal{S}^{\prime}\}\supseteq\mathcal{T}_{\min}(\mathcal{F})\). For each \(j\in\mathbb{N}\), choose \(\mathsf{x}_{j}\in\mathcal{S}^{\prime}\) such that \(\mathsf{x}_{j}\cap\operatorname{join}(\mathcal{F})\) belongs to level \(j+2\) in \(\mathcal{T}_{\min}(\mathcal{F})\). Then
\[\operatorname{join}(\mathcal{F})\neq\bigcup_{j=1}^{\infty}\left(\mathsf{x}_ {j}\cap\operatorname{join}(\mathcal{F})\right)=\left(\bigcup_{j=1}^{\infty} \mathsf{x}_{j}\right)\cap\operatorname{join}(\mathcal{F})=\Omega^{\prime} \cap\operatorname{join}(\mathcal{F})=\operatorname{join}(\mathcal{F})\]
a contradiction.
By similar reasoning, one can give a direct proof that \((\mathcal{S}^{\prime},\Omega^{\prime})\) does not satisfy (i). We omit the details, since it also follows from the next proposition, which shows that the \(\mathcal{T}_{\max}(\mathcal{E})\) case of Theorem 2.12 is better behaved with respect to isomorphism of abstract semilattices. Finally, by Theorem 2.12 the pair \((\mathcal{S}^{\prime},\Omega^{\prime})\) satisfies (iii) of Theorem 2.12.
Proposition 2.15.: _Let \(S\) be a semilattice and let \(\Omega\), \(\Omega^{\prime}\) be infinite sets with injective homomorphisms \(\Theta:S\to(\mathcal{P}(\Omega),\cup)\) and \(\Theta^{\prime}:S\to(\mathcal{P}(\Omega^{\prime}),\cup)\). Suppose there is a spread \(\mathcal{E}\) in \(\Omega\) such that \(\{\Theta(s)\cap\operatorname{join}(\mathcal{E}):s\in S\}\supseteq\mathcal{T}_{\max}(\mathcal{E})\). Then there is a spread \(\mathcal{E}^{\prime}\) in \(\Omega^{\prime}\) such that \(\{\Theta^{\prime}(s)\cap\operatorname{join}(\mathcal{E}^{\prime}):s\in S\}\supseteq\mathcal{T}_{\max}(\mathcal{E}^{\prime})\)._
Proof.: For simplicity of notation, we shall write \(a\) or \(b_{n}\) for elements of \(S\), while writing \(\mathsf{a}\) or \(\mathsf{b}_{n}\) for the corresponding members of \(\mathcal{S}:=\Theta(S)\) and \(\mathsf{a}^{\prime}\) or \(\mathsf{b}^{\prime}_{n}\) for the corresponding members of \(\mathcal{S}^{\prime}:=\Theta^{\prime}(S)\). Also, without loss of generality, we suppose that \(\mathcal{E}=(E_{n})_{n\geq 1}\) with \(|E_{n}|=n\) for all \(n\); say \(E_{n}=\{\gamma_{nj}\colon 1\leq j\leq n\}\).
For \(1\leq j\leq n\), let \(a_{nj}\) be an element of \(S\) such that \(\mathsf{a}_{nj}\cap\operatorname{join}(\mathcal{E})\) is a member of \(\mathcal{T}_{\max}(\mathcal{E})\) that meets \(E_{n}\) at the singleton \(\{\gamma_{nj}\}\). Then, for each \(n\in\mathbb{N}\), \(a_{nj}\) does not divide \(a_{ml}a_{nk}\) where \(m<n\) and \(k\neq j\), and so we can find an element \(\gamma^{\prime}_{nj}\) of \(\Omega^{\prime}\) that belongs to \(\mathsf{a}^{\prime}_{nj}\) but not to any of \(\mathsf{a}^{\prime}_{ml}\) and \(\mathsf{a}^{\prime}_{nk}\) where \(m<n\) and \(k\neq j\). Define \(E^{\prime}_{n}\!:=\!\left\{\gamma^{\prime}_{nj}\colon 1\leq j\leq n\right\}\), and then \(\mathcal{E}^{\prime}\!:=\!(E^{\prime}_{n})_{n\geq 1}\). Then \(\mathcal{E}^{\prime}\) is a spread in \(\Omega^{\prime}\), and it satisfies
\[\{\mathsf{x}\cap\operatorname{join}(\mathcal{E}^{\prime})\colon\mathsf{x}\in \mathcal{S}^{\prime}\}\supseteq\mathcal{T}_{\max}(\mathcal{E}^{\prime}),\]
since for each \(1\leq j\leq n\) we have
\[\left(\bigcup_{1\leq l\leq m<n}\mathsf{a}^{\prime}_{ml}\cup\mathsf{a}^{\prime}_ {nj}\right)\cap\operatorname{join}(\mathcal{E}^{\prime})=E^{\prime}_{<n}\cup \left\{\gamma^{\prime}_{nj}\right\}.\]
## 3 Constructing non-AMNM weights in the \(\mathcal{T}_{\max}\) and \(\mathcal{T}_{\min}\) cases
Proposition 3.1.: _Let \(\mathcal{S}\) be a union-closed set system on \(\Omega\) and suppose \(\mathcal{E}\) is a spread in \(\Omega\) such that Case_ (i) _of Theorem 2.12 holds. Then there is a log-weight on \(\mathcal{S}\) which fails to propagate at the first level._
Proof.: Suppose that there is a spread \(\mathcal{E}=(E_{n})_{n\geq 1}\) in \(\Omega\) such that \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{E})\colon\mathsf{x}\in \mathcal{S}\}\) contains \(\mathcal{T}_{\max}(\mathcal{E})\). Passing to a refinement of \(\mathcal{E}\) if necessary, we may suppose that \(|E_{n}|=n+1\) for all \(n\) (this just simplifies some of the following formulas and arguments).
For each \(x\in\mathcal{P}(\Omega)\), define
\[\lambda(x):=\begin{cases}0&\text{if there are \emph{no} or \emph{infinitely many} $n$ such that $E_{n}\cap x\neq\emptyset$},\\ 0&\text{if $E_{n}\subseteq x$, and $E_{k}\cap x=\emptyset$ for $k>n$},\\ |x\cap E_{n}|&\text{if $E_{n}\cap x\neq\emptyset$, $E_{n}\not\subseteq x$, and $E_{k}\cap x=\emptyset$ for $k>n$}.\end{cases} \tag{3.1}\]
**Claim 3.2**.: _Let \(x,y\in\mathcal{P}(\Omega)\). Then \(\lambda(x\cup y)\leq\lambda(x)+\lambda(y)\)._
Proof of the claim.: Fix \(x,y\in\mathcal{P}(\Omega)\). The desired inequality is trivial when \(x\cap\operatorname{join}(\mathcal{E})=\emptyset\) or when \(y\cap\operatorname{join}(\mathcal{E})=\emptyset\). On the other hand: if there are infinitely many \(n\) such that \(E_{n}\cap x\neq\emptyset\), or infinitely many \(n\) such that \(E_{n}\cap y\neq\emptyset\), then the same is true for \(x\cup y\), and so the inequality follows, since both \(\lambda(y)\geq 0\) and \(\lambda(x)\geq 0\).
Otherwise, let \(m\) be the largest natural number such that \(E_{m}\cap x\neq\emptyset\), and let \(n\) be the corresponding number for \(y\). If \(m\neq n\), then without loss of generality we suppose \(m>n\). Then \(m\) is also the largest number such that \(E_{m}\cap(x\cup y)\neq\emptyset\). Moreover \(E_{m}\cap(x\cup y)=E_{m}\cap x\), so the inequality follows, since \(\lambda(y)\geq 0\).
If not, \(m=n\) is the largest number such that \(E_{m}\cap(x\cup y)\neq\emptyset\). If furthermore either \(E_{m}\subseteq x\) or \(E_{m}\subseteq y\), then \(E_{m}\subseteq x\cup y\), and the inequality is again obvious. Otherwise, we see that
\[\lambda(x\cup y)\leq|(x\cup y)\cap E_{m}|\leq|x\cap E_{m}|+|y\cap E_{m}|= \lambda(x)+\lambda(y)\,.\]
This completes the proof of our claim.
We will now show that \((\mathcal{S},\lambda)\) does not have \(1\)-propagation. Let \(n\in\mathbb{N}\). Since \(\mathcal{T}_{\max}(\mathcal{E})\) is contained in \(\{x\cap\operatorname{join}(\mathcal{E})\colon x\in\mathcal{S}\}\), there is a size-\((n+1)\) subset \(\mathcal{F}_{n}\subseteq\mathcal{S}\) such that:
* for each \(a\in\mathcal{F}_{n}\), \(E_{j}\subseteq a\) for \(j<n\), \(a\cap E_{n}\) is a singleton, and \(a\cap E_{j}=\emptyset\) for \(j>n\);
* \(a\cap E_{n}\) gives different singletons for different \(a\in\mathcal{F}_{n}\).
From our construction, \(\lambda(a)=1\) for every \(a\in\mathcal{F}_{n}\).
Set \(b_{n}:=\operatorname{join}(\mathcal{F}_{n})\). Then \(\lambda(b_{n})=0\), and so \(b_{n}\in\operatorname{fil}(\mathcal{F}_{n})\wedge W_{1}\). To finish the proof it suffices to show that \(V_{\mathcal{F}_{n}}(b_{n})\to\infty\) as \(n\to\infty\), which we do as follows. Let \(C\geq 1\) be such that \(b_{n}\in\bigcup_{k=0}^{\infty}\operatorname{FBP}_{C}^{k}(\mathcal{F}_{n})\). Let \(m\geq 1\) be minimal with respect to the following property:
there exists \(a\in\operatorname{FBP}_{C}^{m}(\mathcal{F}_{n})\) such that \(E_{n}\subseteq a\).
(Such an \(m\) exists by our assumption, since \(E_{n}\subseteq b_{n}\).) By minimality there are \(a_{1}\) and \(a_{2}\) in \(\operatorname{FBP}_{C}^{m-1}(\mathcal{F}_{n})\) such that \(E_{n}\) is contained in \(a_{1}\cup a_{2}\), yet \(E_{n}\not\subseteq a_{1}\) and \(E_{n}\not\subseteq a_{2}\). (When \(m=1\), our convention here is that \(\operatorname{FBP}_{C}^{0}(\mathcal{F}_{n})=\mathcal{F}_{n}\) and, since \(|E_{n}|\geq 2\), \(E_{n}\) cannot be a subset of any member of \(\mathcal{F}_{n}\) either.)
Let \(i\in\{1,2\}\). By the previous remarks, \(a_{i}\cap E_{n}\) is a proper, nonempty subset of \(E_{n}\). By Definition 2.2, \(a_{i}\subseteq b_{n}\). Thus \(n\) is the largest natural number \(k\) such that \(a_{i}\cap E_{k}\neq\emptyset\), and so \(\lambda(a_{i})=|a_{i}\cap E_{n}|\). Hence \(|E_{n}|\leq\lambda(a_{1})+\lambda(a_{2})\leq 2C\), with the last inequality following because \(a_{1},a_{2}\in W_{C}\). Therefore, \(V_{\mathcal{F}_{n}}(b_{n})\geq\frac{1}{2}|E_{n}|\to\infty\), completing the proof.
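The weight of (3.1) is entirely explicit, and for a finite truncation of the spread one can spot-check Claim 3.2 numerically; the sketch below (ours, purely illustrative) does this on random subsets, the "infinitely many \(n\)" clause being vacuous once only finitely many blocks are kept.

```python
import random

def make_lambda(blocks):
    """The log-weight of (3.1) for a spread truncated to blocks E_1, ..., E_N."""
    blocks = [frozenset(E) for E in blocks]

    def lam(x):
        x = frozenset(x)
        hits = [n for n, E in enumerate(blocks) if E & x]
        if not hits:
            return 0
        E = blocks[max(hits)]              # the last block that x meets
        return 0 if E <= x else len(E & x)

    return lam

blocks = [{1, 2}, {3, 4, 5}, {6, 7, 8, 9}]
lam = make_lambda(blocks)
omega = sorted(set().union(*blocks))

random.seed(0)
for _ in range(2000):                      # spot-check lam(x ∪ y) <= lam(x) + lam(y)
    x = frozenset(random.sample(omega, random.randrange(len(omega) + 1)))
    y = frozenset(random.sample(omega, random.randrange(len(omega) + 1)))
    assert lam(x | y) <= lam(x) + lam(y)
print("submultiplicativity holds on all sampled pairs")
```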
**Proposition 3.3**.: _Let \(\mathcal{S}\) be a union-closed set system on \(\Omega\) and suppose \(\mathcal{E}\) is a spread in \(\Omega\) such that Case_ (ii) _of Theorem 2.12 holds. Then there is a log-weight on \(\mathcal{S}\) which fails to propagate at the first level._
Proof.: Suppose that there is a spread \(\mathcal{E}=(E_{n})_{n\geq 1}\) in \(\Omega\) such that Case (ii) of Theorem 2.12 holds. Then, for each \(x\in\mathcal{P}(\Omega)\), define
\[\lambda(x):=\begin{cases}0&\text{if $x\cap\operatorname{join}(\mathcal{E})=\emptyset$},\\ 0&\text{if $E_{n}\subseteq x$, and $E_{k}\cap x=\emptyset$ for $1\leq k<n$},\\ |x\cap E_{n}|&\text{if $E_{n}\cap x\neq\emptyset$, $E_{n}\not\subseteq x$, and $E_{k}\cap x=\emptyset$ for $1\leq k<n$}.\end{cases} \tag{3.2}\]
**Claim 3.4**.: _In this case, \(\lambda\) is a log-weight on \(\mathcal{S}\), and \((\mathcal{S},\lambda)\) does not have \(1\)-propagation._
The rest of the proof is similar to that of Proposition 3.1, but slightly easier, so we omit the details.
## 4 Shattering, colouring, and a Ramsey-theoretic result
Constructing non-AMNM weights, when Case (iii) of Theorem 2.12 holds, is much more involved. In this case, we need to gain a deeper understanding of the incompressible subsets of the semilattice.
**Definition 4.1** (Shattering a spread).: Let \(\mathcal{E}=(E_{n})_{n\geq 1}\) be a spread in \(\Omega\) and let \((\mathsf{a}_{j})_{j\geq 1}\) be a sequence of subsets of \(\Omega\). We say that this sequence _shatters_\(\mathcal{E}\) if, for every \(m\in\mathbb{N}\) and every \(m\)-tuple \((\mathsf{y}_{1},\ldots,\mathsf{y}_{m})\) such that \(\mathsf{y}_{j}\in\{\mathsf{a}_{j},\mathsf{a}_{j}{}^{c}\}\) for \(j=1,\ldots,m\),
\[\lim_{n\to\infty}\Big{|}E_{n}\cap\bigcap\nolimits_{j=1}^{m}\mathsf{y}_{j} \Big{|}=\infty.\]
It is easy to verify that if a sequence \((\mathsf{a}_{j})_{j\geq 1}\) shatters \(\mathcal{E}\), then for every \(m\) and every choice of \(\mathsf{y}_{j}\in\{\mathsf{a}_{j},\mathsf{a}_{j}{}^{c}\}\) for \(1\leq j\leq m\), we must have \(\mathsf{y}_{1}\cap\cdots\cap\mathsf{y}_{m}\neq\emptyset\). As a consequence, the sets \(\mathsf{a}_{1},\ldots,\mathsf{a}_{m}\) are mutually distinct and form an incompressible subset of \(\mathcal{P}(\Omega)\). So if we start with a shattering sequence \((\mathsf{a}_{j})\) and allow ourselves to take complements and binary unions (i.e. we generate a ring of sets) then the resulting collection has high complexity; moreover, this complexity is seen inside each \(E_{n}\) once we take \(n\) sufficiently large.
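On finite data the shattering condition can be probed directly: for each of the \(2^{m}\) sign patterns one lists the size of the corresponding cell inside a block. The sketch below (ours, not part of the paper) computes these sizes; Definition 4.1 asks that, for every fixed pattern, they tend to infinity along the spread.

```python
from itertools import product

def pattern_sizes(sets, block):
    """Sizes |E_n ∩ y_1 ∩ ... ∩ y_m| over all 2^m choices y_j in {a_j, a_j^c},
    where the complement is taken inside the block E_n."""
    block = frozenset(block)
    sizes = []
    for signs in product([True, False], repeat=len(sets)):
        cell = block
        for a, keep in zip(sets, signs):
            a = frozenset(a)
            cell = cell & a if keep else cell - a
        sizes.append(len(cell))
    return sizes

# Two sets (evens, multiples of 3) cut every large enough block into four
# cells whose sizes all grow with the block, as shattering requires.
a1 = {k for k in range(1000) if k % 2 == 0}
a2 = {k for k in range(1000) if k % 3 == 0}
for n in (12, 60, 300):
    print(n, min(pattern_sizes([a1, a2], range(n))))
```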
**Definition 4.2**.: Given a spread \(\mathcal{E}=(E_{n})_{n\geq 1}\), a partition of \(\Omega\) into finitely many disjoint subsets \(\Omega=C_{0}\mathbin{\dot{\cup}}\cdots\mathbin{\dot{\cup}}C_{d}\) is said to _colour_\(\mathcal{E}\) if \(\lim_{n}|C_{j}\cap E_{n}|=\infty\) for each \(j=0,\ldots,d\). We call \(\mathcal{C}=\{C_{0},\ldots,C_{d}\}\) a _colouring of \(\mathcal{E}\)_.
**Definition 4.3** (Decisive colourings).: Let \(\mathcal{S}\subseteq\mathcal{P}(\Omega)\) be a set system, let \(\mathcal{E}\) be a spread in \(\Omega\) and let \(\mathcal{C}\) be a colouring of \(\mathcal{E}\). We say that \(\mathcal{C}\)_decides \(\mathcal{S}\) with respect to \(\mathcal{E}\)_, or is _\(\mathcal{S}\)-decisive with respect to \(\mathcal{E}\)_, if there exists a colour class \(C_{0}\in\mathcal{C}\) such that every \(\mathsf{x}\in\mathcal{S}\) satisfies
\[\sup_{n\geq 1}\;\min\{|\mathsf{x}\cap C_{0}\cap E_{n}|\;;\;|\mathsf{x}^{c} \cap C\cap E_{n}|,C\in\mathcal{C}\}<\infty. \tag{4.1}\]
Such a \(C_{0}\) is said to be a _decisive colour class_ (for \(\mathcal{S}\) with respect to \(\mathcal{E}\)).
Informally speaking, when we have a decisive colour class \(C_{0}\), each \(\mathsf{x}\in\mathcal{S}\) must either have small intersection with \(C_{0}\cap E_{n}\), or else have large intersection with \(C\cap E_{n}\) for some \(C\in\mathcal{C}\), once \(n\) is sufficiently large.
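The quantity appearing inside the supremum in (4.1) is likewise easy to evaluate for a single block; the toy computation below (ours, purely illustrative) evaluates it for three sets: one with small trace on \(C_{0}\cap E_{n}\), one covering almost all of \(E_{n}\), and one that meets \(C_{0}\) substantially while also missing a substantial part of every colour class. Only the last kind produces a large score, and (4.1) says that for each fixed \(\mathsf{x}\in\mathcal{S}\) such scores stay bounded as \(n\) grows.

```python
def decisive_score(x, C0, colouring, block):
    """min(|x ∩ C0 ∩ E_n|, min over colour classes C of |x^c ∩ C ∩ E_n|),
    the quantity whose supremum over n appears in (4.1)."""
    x, block, C0 = frozenset(x), frozenset(block), frozenset(C0)
    xc = block - x                              # complement of x inside E_n
    vals = [len(x & C0 & block)] + [len(xc & frozenset(C)) for C in colouring]
    return min(vals)

block = range(12)
C0, C1 = frozenset(range(0, 6)), frozenset(range(6, 12))
x_small = {0}                      # barely meets C0 ∩ E_n       -> score 1
x_full  = set(range(1, 12))        # covers almost every class   -> score 0
x_half  = {0, 1, 2, 6, 7, 8}       # half of C0 and half of C1   -> score 3
for x in (x_small, x_full, x_half):
    print(decisive_score(x, C0, [C0, C1], block))
```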
The following theorem tells us, loosely speaking, that unless we are in the highly fragmented case, we can find a decisive colouring.
**Theorem 4.4**.: _Let \(\mathcal{S}\subseteq\mathcal{P}(\Omega)\) be a union-closed set system, and let \(\mathcal{E}\) be a spread in \(\Omega\). Then at least one of the following conclusions holds:_
* **(S)** _there is a spread_ \(\mathcal{G}\) _that refines_ \(\mathcal{E}\)_, and a sequence_ \((\mathsf{a}_{i})_{i\geq 1}\) _in_ \(\mathcal{S}\) _which shatters_ \(\mathcal{G}\)_;_
* **(D)** _there is a spread_ \(\mathcal{F}\) _that refines_ \(\mathcal{E}\)_, and an_ \(\mathcal{S}\)_-decisive colouring_ \(\mathcal{C}\) _of_ \(\mathcal{F}\)_._
The proof of Theorem 4.4 requires a double induction, and we isolate part of it as a separate lemma. The following terminology is introduced to streamline the presentation.
Let \(\mathcal{E}=(E_{n})_{n\geq 1}\) be a spread, and let \(D\) and \(F\) be subsets of \(\Omega\). We say that \(F\) _halves_ \(D\) _with respect to_ \(\mathcal{E}\) if
\[\lim_{n\to\infty}|D\cap F\cap E_{n}|=\lim_{n\to\infty}|(D\setminus F)\cap E_{n} |=\infty.\]
Also, if \(N\subseteq\mathbb{N}\) is an infinite subset, and \((t_{n})_{n\geq 1}\) is a sequence in \([0,\infty)\), we say that \(t_{n}\to\infty\)_along_\(n\in N\) if \(\lim_{k\to\infty}t_{n_{k}}=\infty\), where \(N=\{n_{k}\colon k\in\mathbb{N}\}\).
Lemma 4.5.: _Let \(\mathcal{S}\subseteq\mathcal{P}(\Omega)\) be a union-closed set system. Let \(\mathcal{E}=(E_{n})_{n\geq 1}\) be a spread in \(\Omega\), and let \(\mathcal{C}\) be a colouring of \(\mathcal{E}\). Then at least one of the following conclusions holds:_
(i) _there is an infinite set_ \(N\subseteq\mathbb{N}\) _and some_ \(\mathsf{y}\in\mathcal{S}\) _which halves every_ \(C\in\mathcal{C}\) _with respect to_ \((E_{n})_{n\in N}\)_;_
(ii) _there is a spread_ \(\mathcal{F}\) _refining_ \(\mathcal{E}\)_, such that_ \(\mathcal{C}\) _is an_ \(\mathcal{S}\)_-decisive colouring of_ \(\mathcal{F}\)_._
Proof.: The idea is as follows: we attempt to construct, by iteration, members of \(\mathcal{S}\) that are closer and closer to satisfying the property in Case (i). At each stage of the iteration we will be able to continue, unless we find ourselves in Case (ii). Therefore, if Case (ii) does not hold, our iteration will run successfully until Case (i) is satisfied.
Assume from now on that Case (ii) does not hold, and enumerate the members of \(\mathcal{C}\) as \(C_{1},\ldots,C_{d}\). Since \(\mathcal{C}\) is not an \(\mathcal{S}\)-decisive colouring of \(\mathcal{E}\), there exists \(\mathsf{y}_{1}\in\mathcal{S}\) such that
\[\sup_{n}\min\{|\mathsf{y}_{1}\cap C_{1}\cap E_{n}|\ ;\ |\mathsf{y}_{1}{}^{c} \cap C_{j}\cap E_{n}|\,\ 1\leq j\leq d\}=\infty.\]
Passing to an appropriate subsequence, there exists an infinite \(N_{1}\subseteq\mathbb{N}\) such that:
* \(|\mathsf{y}_{1}\cap C_{1}\cap E_{n}|\to\infty\) along \(n\in N_{1}\); and
* \(|\mathsf{y}_{1}{}^{c}\cap C_{j}\cap E_{n}|\to\infty\) along \(n\in N_{1}\) for every \(1\leq j\leq d\).
Now let \(2\leq k\leq d\). Suppose there are \(\mathsf{y}_{k-1}\in\mathcal{S}\) and an infinite \(N_{k-1}\subseteq\mathbb{N}\) such that:
\[|\mathsf{y}_{k-1}\cap C_{i}\cap E_{n}|\to\infty\ \text{along}\ n\in N_{k-1}\ \text{for all}\ 1\leq i\leq k-1 \tag{4.2}\]
and
\[|\mathsf{y}_{k-1}{}^{c}\cap C_{j}\cap E_{n}|\to\infty\ \text{along}\ n\in N_{k-1}\ \text{for all}\ 1\leq j\leq d. \tag{4.3}\]
Informally, (4.2) says that \(\mathsf{y}_{k-1}\cap C_{i}\) is "not too sparse" relative to the spread \((E_{n})_{n\in N_{k-1}}\), for \(1\leq i\leq k-1\), while (4.3) says that \(\mathsf{y}_{k-1}\cap C_{j}\) is "not too dense" relative to the same spread, for all \(j\).
We might have \(\mathsf{y}_{k-1}{}^{c}\cap E_{m}=\emptyset\) for some \(m\in N_{k-1}\). However, by (4.3) we can assume (after replacing \(N_{k-1}\) with some cofinal subset, if necessary) that the sequence \((\mathsf{y}_{k-1}{}^{c}\cap E_{n})_{n\in N_{k-1}}\) is a spread in \(\Omega\), which we denote by \(\mathcal{E}_{k}\). By construction, \(\mathcal{E}_{k}\) refines \(\mathcal{E}\), and by condition (4.3) again, \(\mathcal{C}\) colours \(\mathcal{E}_{k}\). Since this is not an \(\mathcal{S}\)-decisive colouring, in particular \(C_{k}\) must be indecisive. By the same reasoning as before, there is some \(\mathsf{x}\in\mathcal{S}\) and an infinite \(N_{k}\subseteq N_{k-1}\) such that:
\[|\mathsf{x}\cap C_{k}\cap(\mathsf{y}_{k-1}{}^{c}\cap E_{n})|\to\infty\ \text{along}\ n\in N_{k} \tag{4.4}\]
and
\[|\mathsf{x}^{c}\cap C_{j}\cap(\mathsf{y}_{k-1}{}^{c}\cap E_{n})|\to\infty\ \text{along}\ n\in N_{k},\ \text{for every}\ 1\leq j\leq d. \tag{4.5}\]
Let \(\mathsf{y}_{k}:=\mathsf{y}_{k-1}\cup\mathsf{x}\), which belongs to \(\mathcal{S}\) since \(\mathcal{S}\) is union-closed. Since \(N_{k}\subseteq N_{k-1}\) and \(\mathsf{y}_{k}\supseteq\mathsf{y}_{k-1}\), condition (4.2) implies
\[|\mathsf{y}_{k}\cap C_{i}\cap E_{n}|\to\infty\text{ along }n\in N_{k},\,\text{ for all }1\leq i\leq k-1\;;\]
and since \(\mathsf{y}_{k}\supseteq\mathsf{x}\), condition (4.4) implies
\[|\mathsf{y}_{k}\cap C_{k}\cap E_{n}|\to\infty\text{ along }n\in N_{k}\;.\]
But by the definition of \(\mathsf{y}_{k}\), condition (4.5) can be rephrased as
\[|\mathsf{y}_{k}{}^{c}\cap C_{j}\cap E_{n}|\to\infty\text{ along }n\in N_{k},\, \text{for every }1\leq j\leq d\;.\]
Thus the induction can continue. We end up with \(\mathsf{y}_{d}\in\mathcal{S}\) and an infinite subset \(N_{d}\subseteq\mathbb{N}\) such that \(\mathsf{y}_{d}\) halves \(C_{j}\) with respect to \((E_{n})_{n\in N_{d}}\) for every \(1\leq j\leq d\), i.e. we are in Case (i) of the lemma.
Proof of Theorem 4.4.: Suppose that Case (D) of the theorem does not hold. The following notation will be useful: given \(\mathsf{x}_{1},\dots,\mathsf{x}_{m}\in\mathcal{P}(\Omega)\), consider
\[\Gamma(\mathsf{x}_{1},\dots,\mathsf{x}_{m}):=\left\{\bigcap_{j=1}^{m}\mathsf{ y}_{j}\,:\;\mathsf{y}_{j}\in\{\mathsf{x}_{j},\mathsf{x}_{j}^{c}\}\text{ for each }j=1,\dots,m\right\}.\]
This is a partition of \(\Omega\), although some members of the partition might be empty.
Now let \(\mathcal{E}_{0}=\mathcal{E}\) and let \(\mathcal{C}_{0}\) denote the trivial colouring \(\{\Omega\}\). Apply Lemma 4.5 to the pair \((\mathcal{E}_{0},\mathcal{C}_{0})\). Case (ii) of the lemma does not hold, since otherwise we would be in Case (D) of the theorem. Hence we are in Case (i) of the lemma, so there exists an infinite set \(N_{1}\subseteq\mathbb{N}\) and some \(\mathsf{a}_{1}\in\mathcal{S}\) which halves \(\Omega\) with respect to \((E_{n})_{n\in N_{1}}\).
Suppose that for some \(k\geq 1\), we have found \(\mathsf{a}_{1},\dots,\mathsf{a}_{k}\in\mathcal{S}\) and an infinite subset \(N_{k}\subseteq\mathbb{N}\), such that \(\Gamma(\mathsf{a}_{1},\dots,\mathsf{a}_{k})\) colours the spread \((E_{n})_{n\in N_{k}}\). This colouring cannot be \(\mathcal{S}\)-decisive (otherwise we would be in Case (D) of the theorem, contrary to assumption). Hence, by Lemma 4.5 there exist some infinite \(N_{k+1}\subseteq N_{k}\) and some \(\mathsf{a}_{k+1}\in\mathcal{S}\), such that \(\mathsf{a}_{k+1}\) halves \(C\) with respect to \((E_{n})_{n\in N_{k+1}}\) for each \(C\in\Gamma(\mathsf{a}_{1},\dots,\mathsf{a}_{k})\). Now \(\Gamma(\mathsf{a}_{1},\dots,\mathsf{a}_{k+1})\) is a colouring of the spread \((E_{n})_{n\in N_{k+1}}\).
Continuing in this way, we inductively construct a sequence \((\mathsf{a}_{n})_{n\geq 1}\) in \(\mathcal{S}\), and a descending chain of infinite subsets of \(\mathbb{N}\), \(N_{1}\supseteq N_{2}\supseteq\dots\), such that:
\[|C\cap E_{n}|\to\infty\ \text{along}\ n\in N_{m}\quad\text{ for each }m\geq 1\text{ and each }C\in\Gamma(\mathsf{a}_{1},\dots,\mathsf{a}_{m})\;.\]
Since \(N_{1}\supseteq N_{2}\supseteq\dots\) is a decreasing sequence of infinite subsets of \(\mathbb{N}\), we can extract a diagonal subsequence \(n(1)<n(2)<n(3)<\dots\) satisfying \(n(k)\in N_{j}\) for every \(j\leq k\). For each \(m\), \((n(i))_{i\geq m}\) is a subsequence of \(N_{m}\), and so for each \(C\in\Gamma(\mathsf{a}_{1},\dots,\mathsf{a}_{m})\) we have
\[\lim_{i\to\infty}|C\cap E_{n(i)}|=\lim_{N_{m}\ni n\to\infty}|C\cap E_{n}|= \infty\;.\]
Therefore, if we define \(G_{i}=E_{n(i)}\), the sequence \((G_{i})_{i\geq 1}\) is a spread which is shattered by the sequence \((\mathsf{a}_{j})_{j\geq 1}\), and we are in Case (S) as required.
We can now refine the structure result in Theorem 2.12. This is the bare minimum that we need for our weight construction in the next section.
Theorem 4.6 (Refined version of Theorem 2.12).: _Let \(\mathcal{S}\) be a union-closed set system on \(\Omega\). If \(\mathcal{S}\) has infinite breadth, then there is a spread in \(\Omega\), denoted by \(\mathcal{E}=(E_{n})_{n\in\mathbb{N}}\), such that at least one of the following statements holds:_
1. \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{E})\colon\mathsf{x}\in\mathcal{S}\}\supseteq \mathcal{T}_{\max}(\mathcal{E})\).
2. \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{E})\colon\mathsf{x}\in\mathcal{S}\}\supseteq \mathcal{T}_{\min}(\mathcal{E})\).
3. \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{E})\colon\mathsf{x}\in\mathcal{ S}\}\supseteq\mathcal{T}_{\operatorname{ort}}(\mathcal{E})\) and there is an \(\mathcal{S}\)-decisive colouring of \(\mathcal{E}\).
Proof.: Let \(\mathcal{S}\subseteq\mathcal{P}(\Omega)\) be a union-closed set system of infinite breadth. It is enough to show that if we are in Case (S) of Theorem 4.4, then there is a spread \(\mathcal{E}^{\prime}\) in \(\Omega\) such that \(\{\mathsf{x}\cap\operatorname{join}(\mathcal{E}^{\prime})\colon\mathsf{x} \in\mathcal{S}\}\) contains \(\mathcal{T}_{\max}(\mathcal{E}^{\prime})\). This fact, together with Theorem 2.12 finishes the proof.
Now assume that Case (S) holds. So there exists a sequence \((\mathsf{a}_{n})_{n\geq 1}\) of distinct members of \(\mathcal{S}\) so that for each \(m\in\mathbb{N}\), the set \(\{\mathsf{a}_{1},\dots,\mathsf{a}_{m}\}\) is incompressible. Fix positive integers \(n_{1}<n_{2}<\dots\) such that \(n_{k+1}-n_{k}\to\infty\) (for instance we could take \(n_{k}=k^{2}\)). Let \(\mathsf{d}_{k}=\bigcup_{i=1}^{n_{k}}\mathsf{a}_{i}\in\mathcal{S}\), and for convenience set \(\mathsf{d}_{0}=\emptyset\), \(n_{0}=0\). It is easy to see that for each \(k\in\mathbb{N}\), the set
\[\mathcal{F}^{\prime}_{k}:=\{\mathsf{a}_{j}\setminus\mathsf{d}_{k-1}\colon n_ {k-1}+1\leq j\leq n_{k}\}\]
is an incompressible subset of the semilattice \(\{\mathsf{x}\setminus\mathsf{d}_{k-1}:\mathsf{x}\in\mathcal{S}\}\). Moreover: since \(\mathcal{F}^{\prime}_{k}\) is incompressible, for each \(j\) with \(n_{k-1}+1\leq j\leq n_{k}\), we can select an element of \(\mathsf{a}_{j}\setminus\mathsf{d}_{k-1}\) that does not belong to any other member of \(\mathcal{F}^{\prime}_{k}\). Let \(E_{k}\) be the set of all these elements (very loosely, one can think of \(E_{k}\) as a "transversal" for \(\mathcal{F}^{\prime}_{k}\)). Since \(E_{k}\subseteq\mathsf{d}_{k}\setminus\mathsf{d}_{k-1}\) and \(|E_{k}|=n_{k}-n_{k-1}\to\infty\), the sequence \(\mathcal{E}=(E_{k})_{k\in\mathbb{N}}\) is a spread in \(\Omega\).
To finish, it suffices to show that given \(k\in\mathbb{N}\) and some \(\omega\in E_{k}\), there exists some \(\mathsf{z}\in\mathcal{S}\) such that \(\mathsf{z}\cap\operatorname{join}(\mathcal{E})=E_{<k}\mathbin{\dot{\cup}}\{\omega\}\). By construction, there exists \(\mathsf{z}^{\prime}\in\mathcal{F}^{\prime}_{k}\) such that \(\mathsf{z}^{\prime}\cap E_{k}=\{\omega\}\), and also \(E_{1}\cup\dots\cup E_{k-1}=\mathsf{d}_{k-1}\cap\operatorname{join}(\mathcal{E})\). Put \(\mathsf{z}=\mathsf{z}^{\prime}\cup\mathsf{d}_{k-1}\): this satisfies \(\mathsf{z}\cap\operatorname{join}(\mathcal{E})=E_{1}\cup\dots\cup E_{k-1} \cup\{\omega\}\), and we must show \(\mathsf{z}\in\mathcal{S}\). But since \(\mathsf{z}^{\prime}\) has the form \(\mathsf{a}_{i}\setminus\mathsf{d}_{k-1}\) for some \(i\in\mathbb{N}\), we have \(\mathsf{z}=\mathsf{a}_{i}\cup\mathsf{d}_{k-1}\in\mathcal{S}\), as required.
## 5 Constructing non-AMNM weights in the \(\mathcal{T}_{\operatorname{ort}}\) case
In this final case, by Theorem 4.6, we suppose that there exists a spread \(\mathcal{E}\) with a colouring of \(\mathcal{E}\) which is \(\mathcal{S}\)-decisive such that \(\{x\cap\operatorname{join}(\mathcal{E}):x\in\mathcal{S}\}\) contains \(\mathcal{T}_{\operatorname{ort}}(\mathcal{E})\).
Lemma 5.1 (Creating a log-weight from a decisive colouring).: _Let \(\mathcal{S}\subseteq\mathcal{P}(\Omega)\) be union-closed, and let \(\mathcal{E}=(E_{n})_{n\geq 1}\) be a spread in \(\Omega\). Suppose there is an \(\mathcal{S}\)-decisive colouring of \(\mathcal{E}\), call it \(\mathcal{C}\), with a decisive colour class \(C_{0}\)._
_For each \(\mathsf{x}\in\mathcal{S}\) define_
\[T(\mathsf{x})=\{n\in\mathbb{N}\colon|\mathsf{x}\cap C\cap E_{n}|\leq\frac{1}{2} |C\cap E_{n}|\text{ for all }C\in\mathcal{C}\}\]
_and define \(\lambda(\mathsf{x})=\sup_{n\in T(\mathsf{x})}|\mathsf{x}\cap C_{0}\cap E_{n}|\). Then \(\lambda(\mathsf{x})<\infty\), and \(\lambda:\mathcal{S}\to[0,\infty)\) is a log-weight._
Note that if \(T(\mathsf{x})=\emptyset\), then \(\lambda(\mathsf{x})=0\), i.e. we take the usual convention when considering the least upper bounds of subsets of \([0,\infty)\).
Proof.: The first step is to show \(\lambda(\mathsf{x})<\infty\). If \(T(\mathsf{x})\) is finite there is nothing to prove; so assume \(T(\mathsf{x})\) is infinite. Since \(\mathcal{C}\) is a colouring of the spread \(\mathcal{E}\), we have \(\min_{C\in\mathcal{C}}|C\cap E_{n}|\to\infty\). Therefore, since \(T(\mathsf{x})\) is infinite, \(|\mathsf{x}^{c}\cap C\cap E_{n}|\to\infty\) along \(n\in T(\mathsf{x})\). Now it follows from the condition (4.1) and the definition of \(\lambda\) that \(\lambda(\mathsf{x})<\infty\).
Finally, given \(x\) and \(y\) in \(\mathcal{S}\), observe that \(T(x\cup y)\subseteq T(x)\cap T(y)\). Hence
\[\lambda(x\cup y)\leq\sup_{n\in T(x\cup y)}\left(|x\cap C_{0}\cap E_{n}|+|y\cap C _{0}\cap E_{n}|\right)\leq\lambda(x)+\lambda(y),\]
as required.
Proposition 5.2.: _Let \(\mathcal{S}\), \(\mathcal{E}\), \(\mathcal{C}\) and \(\lambda\) be as in the previous lemma, and assume that \(\{x\cap\operatorname{join}(\mathcal{E})\colon x\in\mathcal{S}\}\supseteq \mathcal{T}_{\operatorname{ort}}(\mathcal{E})\). Then \((\mathcal{S},\lambda)\) does not have \(1\)-propagation._
Proof.: Let \(C_{0}\) be the decisive colour class used to define \(\lambda\). We fix \(n\in\mathbb{N}\) and enumerate the elements of \(E_{n}\cap C_{0}\) as \(\gamma_{1},\ldots,\gamma_{M}\), where \(n\) is sufficiently large so that \(M\geq 2\). Since \(\mathcal{S}\) restricted to \(\operatorname{join}(\mathcal{E})\) contains \(\mathcal{T}_{\operatorname{ort}}(\mathcal{E})\), there exist \(x_{1},\ldots,x_{M}\in\mathcal{S}\) such that
\[x_{j}\cap\operatorname{join}(\mathcal{E})=E_{<n}\dot{\cup}\{\gamma_{j}\} \dot{\cup}E_{>n}\qquad(1\leq j\leq M).\]
In particular \(x_{i}\cap E_{n}=x_{i}\cap C_{0}\cap E_{n}=\{\gamma_{i}\}\), while \(x_{i}\cap E_{j}=E_{j}\) for all \(j\neq n\). It follows that \(\lambda(x_{i})=1\) for \(i=1,\ldots,M\).
Let \(\mathcal{F}_{n}=\{x_{1},\ldots,x_{M}\}\subseteq W_{1}\), and let \(b_{n}=\operatorname{join}(\mathcal{F}_{n})\). Since \(b_{n}\cap E_{n}=C_{0}\cap E_{n}\), and since \(b_{n}\cap E_{m}=E_{m}\) for all \(m\neq n\), we have \(\lambda(b_{n})=0\). Hence \(b_{n}\in\operatorname{fil}(\mathcal{F}_{n})\cap W_{1}\).
Let \(K\geq 1\) and suppose \(b_{n}\in\bigcup_{m=0}^{\infty}\operatorname{FBP}_{K}^{m}(\mathcal{F}_{n})\). Then, in particular, there exists \(m\geq 1\) with the following property:
\[\text{there exists }y\in\operatorname{FBP}_{K}^{m}(\mathcal{F}_{n})\text{ such that }|y\cap C_{0}\cap E_{n}|>\frac{1}{2}|C_{0}\cap E_{n}|.\] ( \[*\] )
Let \(m\) be minimal with respect to this property, and let \(y\) satisfy (\(*\)). Then there exist \(y_{1}\) and \(y_{2}\) in \(\operatorname{FBP}_{K}^{m-1}(\mathcal{F}_{n})\) such that \(y\subseteq y_{1}\cup y_{2}\); again, when \(m=1\), our convention here is that \(\operatorname{FBP}_{K}^{0}(\mathcal{F}_{n})=\mathcal{F}_{n}\). By the minimality of \(m\) (when \(m\geq 2\)) or since \(|C_{0}\cap E_{n}|\geq 2\) and \(|x_{j}\cap C_{0}\cap E_{n}|=1\) (when \(m=1\)),
\[|y_{1}\cap C_{0}\cap E_{n}|\leq\frac{1}{2}|C_{0}\cap E_{n}|\quad\text{and} \quad|y_{2}\cap C_{0}\cap E_{n}|\leq\frac{1}{2}|C_{0}\cap E_{n}|\;.\]
Clearly, \(\operatorname{fil}(\mathcal{F}_{n})\subseteq\mathcal{P}(\operatorname{join}( \mathcal{F}_{n}))=\mathcal{P}(b_{n})\). In particular, for \(i\in\{1,2\}\), we have \(y_{i}\subseteq b_{n}\) and hence
\[y_{i}\cap E_{n}\subseteq b_{n}\cap E_{n}=C_{0}\cap E_{n}.\]
This implies that \(n\in T(y_{i})\), so that
\[\lambda(y_{i})=\sup_{j\in T(y_{i})}|y_{i}\cap C_{0}\cap E_{j}|\geq|y_{i}\cap C _{0}\cap E_{n}|\;.\]
Putting this all together, and remembering that \(y_{1}\) and \(y_{2}\) belong to \(W_{K}\),
\[\frac{1}{2}|C_{0}\cap E_{n}| \leq|(y_{1}\cup y_{2})\cap C_{0}\cap E_{n}|\] \[\leq|y_{1}\cap C_{0}\cap E_{n}|+|y_{2}\cap C_{0}\cap E_{n}|\leq \lambda(y_{1})+\lambda(y_{2})\leq 2K\,.\]
Hence \(K\geq\frac{1}{4}|C_{0}\cap E_{n}|\). It follows that \(V_{\mathcal{F}_{n}}(b_{n})\geq\frac{1}{4}|C_{0}\cap E_{n}|\to\infty\), as required.
## 6 Final examples
Our construction of non-AMNM weights could have been streamlined if the following claims were true:
1. if \(q:S\to T\) is a surjective homomorphism of semilattices, and \((T,\lambda^{*})\) fails \(L\)-propagation, then \((S,\lambda^{*}\circ q)\) fails \(L\)-propagation;
2. if \((R,\lambda)\) has \(L\)-propagation then so does \((T,\lambda)\) for every subsemilattice \(T\subset R\).
In this section we present examples to show that both claims are false.
Let \(\Omega=\{0,1,2\}\times\mathbb{N}\). Consider two sequences \((\mathsf{x}_{j})_{j=1}^{\infty}\) and \((\mathsf{a}_{n})_{n=1}^{\infty}\) defined as follows:
\[\mathsf{x}_{j}:=\{(1,j),(2,j)\}\quad;\quad\mathsf{a}_{n}:=\left\{(0,k),(1,k) \colon n^{2}\leq k<(n+1)^{2}\right\}. \tag{6.1}\]
It may be helpful to picture these sets as certain rectangular "tiles", as in Figure 2.
Let \(\mathcal{A}=\{\mathsf{a}_{n}\colon n\in\mathbb{N}\}\) and \(\mathcal{X}=\{\mathsf{x}_{j}\colon j\in\mathbb{N}\}\). Define \(\mathcal{S}\) to be the semilattice generated inside \(\mathcal{P}(\Omega)\) by \(\mathcal{A}\) and \(\mathcal{X}\). A little thought shows that every member of \(\mathcal{S}\) has a unique decomposition, up to ordering, as a union of members of \(\mathcal{A}\) and members of \(\mathcal{X}\). We will think of the members of \(\mathcal{A}\) and \(\mathcal{X}\) as "prime factors". Recall that for union-closed set systems, \(\mathsf{a}\) is a factor of \(\mathsf{b}\) precisely when \(\mathsf{a}\subseteq\mathsf{b}\).
Now let \(\Omega^{*}=\{1,2\}\times\mathbb{N}\subset\Omega\), and consider the _truncation homomorphism_\(q:\mathcal{P}(\Omega)\to\mathcal{P}(\Omega^{*})\) defined by \(\mathsf{z}\mapsto\mathsf{z}\cap\Omega^{*}\). Any log-weight \(\lambda^{*}\) defined on \(\mathcal{P}(\Omega^{*})\) pulls back to give a log-weight \(\lambda=\lambda^{*}\circ q\) on \(\mathcal{P}(\Omega)\). We now put
\[\lambda^{*}(\mathsf{z}):=|\{j\in\mathbb{N}\colon(2,j)\in\mathsf{z}\}| \tag{6.2}\]
which is clearly subadditive, and so gives us a log-weight on \(\mathcal{P}(\Omega^{*})\). Note that \(\lambda\) is given by the same formula as \(\lambda^{*}\). Moreover, if \(\mathsf{z}\in\mathcal{S}\), then \(\lambda(\mathsf{z})\) counts how many factors from \(\mathcal{X}\) occur in the "prime factorization" of \(\mathsf{z}\).
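As a concrete illustration (ours, not part of the argument), the tiles from (6.1) and the log-weight from (6.2) can be written out directly; the helper names below are our own choices.

```python
# Finite snapshot of the "tiles" in (6.1) and of the log-weight (6.2); illustrative only.
def x_tile(j):
    return {(1, j), (2, j)}

def a_tile(n):
    return {(c, k) for c in (0, 1) for k in range(n * n, (n + 1) * (n + 1))}

def log_weight(z):
    # lambda*(z) counts the indices j with (2, j) in z.
    return len({j for (c, j) in z if c == 2})

z = a_tile(1) | x_tile(2) | x_tile(7)        # an element of S with two factors from X
print(log_weight(z))                          # 2
print(log_weight(a_tile(1) | a_tile(2)))      # 0: no factors from X
```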
Example 6.1.: Define \(\mathcal{T}=q(\mathcal{S})\subset\mathcal{P}(\Omega^{*})\). We will show that \((\mathcal{T},\lambda^{*})\) does not have \(1\)-propagation.
Let \(\mathsf{b}_{n}:=q(\mathsf{a}_{n})=\left\{(1,k)\colon n^{2}\leq k<(n+1)^{2}\right\}\) and let \(\mathcal{B}=\{\mathsf{b}_{n}\colon n\in\mathbb{N}\}\). Then \(\mathcal{T}\) is the subsemilattice of \(\mathcal{P}(\Omega^{*})\) generated by \(\mathcal{X}\) and \(\mathcal{B}\). For each \(n\), let \(\mathcal{E}_{n}=\{\mathsf{x}_{k}\colon n^{2}\leq k<(n+1)^{2}\}\). Then \(\mathcal{E}_{n}\subseteq W_{1}(\mathcal{T})\) and \(\mathsf{b}_{n}\in\operatorname{fil}(\mathcal{E}_{n})\cap W_{1}(\mathcal{T})\). Therefore, it suffices to prove that \(V_{\mathcal{E}_{n}}(\mathsf{b}_{n})\to\infty\) as \(n\to\infty\).
Lemma 6.2.: _Let \(1\leq C\leq n\), and let \(m\geq 0\). Then \(\operatorname{FBP}^{m}_{C}(\mathcal{E}_{n})\) is contained in the union-closed set system generated by \(\mathcal{E}_{n}\), which we denote by \(\langle\mathcal{E}_{n}\rangle\)._
Proof.: We induct on \(m\). The case \(m=0\) is trivial. If the claim holds for \(m=k-1\) where \(k\in\mathbb{N}\), then let \(\mathsf{y},\mathsf{z}\in\operatorname{FBP}^{k-1}_{C}(\mathcal{E}_{n})\) and let \(\mathsf{a}\in W_{C}(\mathcal{T})\) satisfy \(\mathsf{a}\subseteq\mathsf{y}\cup\mathsf{z}\). To complete the inductive step, we will show that \(\mathsf{a}\in\langle\mathcal{E}_{n}\rangle\).
By the inductive hypothesis, \(\mathsf{y}\in\langle\mathcal{E}_{n}\rangle\) and \(\mathsf{z}\in\langle\mathcal{E}_{n}\rangle\). Therefore, by the "prime factorization" property of \(\mathcal{S}\), the only possible factors of \(\mathsf{a}\) in \(\mathcal{T}\) are the members of \(\mathcal{E}_{n}\) together with \(\mathsf{b}_{n}\). Suppose \(\mathsf{b}_{n}\subseteq\mathsf{a}\); then \((1,k)\in\mathsf{y}\cup\mathsf{z}\) for every \(n^{2}\leq k\leq n^{2}+2n\). But this is impossible since \(\lambda^{*}(\mathsf{y})\leq C\leq n\) and \(\lambda^{*}(\mathsf{z})\leq C\leq n\). Hence \(\mathsf{b}_{n}\not\subseteq\mathsf{a}\), so every factor of \(\mathsf{a}\) lies in \(\mathcal{E}_{n}\) and therefore \(\mathsf{a}\in\langle\mathcal{E}_{n}\rangle\), completing the inductive step.
Since \(\mathsf{b}_{n}\notin\langle\mathcal{E}_{n}\rangle\), Lemma 6.2 implies that \(V_{\mathcal{E}_{n}}(\mathsf{b}_{n})\geq n\), as required.
**Lemma 6.3**.: \((\mathcal{S},\lambda)\) _has \(L\)-propagation for all \(L\geq 0\)._
Proof.: Fix \(L\geq 0\) and let \(\mathcal{E}\) be a non-empty subset of \(W_{L}\). Let \(\operatorname{fac}(\mathcal{E})\) denote the set of factors of \(\mathcal{E}\) inside \(\mathcal{S}\). Given \(\mathsf{z}\in\operatorname{fil}(\mathcal{E})\cap W_{L}\), we will show that \(V_{\mathcal{E}}(\mathsf{z})\leq L\).
First note that each "prime factor" of \(\mathsf{z}\) must be a factor of some element in \(\mathcal{E}\) (by unique factorization). Hence, \(\mathsf{z}=\mathsf{a}^{\prime}\cup\mathsf{x}^{\prime}\) where \(\mathsf{a}^{\prime}\) is a product of members of \(\mathcal{A}\cap\operatorname{fac}(\mathcal{E})\) and \(\mathsf{x}^{\prime}\) is a product of members of \(\mathcal{X}\cap\operatorname{fac}(\mathcal{E})\).
Write \(\mathsf{x}^{\prime}=\mathsf{x}_{n(1)}\cup\mathsf{x}_{n(2)}\cup\cdots\cup \mathsf{x}_{n(k)}\) where \(n(1)<n(2)<\cdots<n(k)\). Then \(k=\lambda(\mathsf{z})\leq L\). If \(L<1\) then \(\mathsf{z}=\mathsf{a}^{\prime}\); induction on the number \(m\) of the "prime factors" of \(\mathsf{a}^{\prime}\) yields \(\mathsf{a}^{\prime}\in\operatorname{FBP}^{m}_{L}(\mathcal{E})\subseteq \operatorname{FBP}^{\infty}_{L}(\mathcal{E})\), and so we are done. If \(L\geq 1\), then by inductively considering \(\mathsf{y}_{0}\!:=\!\mathsf{a}^{\prime}\), \(\mathsf{y}_{1}\!:=\!\mathsf{y}_{0}\cup\mathsf{x}_{n(1)}\), \(\mathsf{y}_{2}\!:=\!\mathsf{y}_{1}\cup\mathsf{x}_{n(2)}\), etc., we obtain \(\mathsf{y}_{j}\in\operatorname{FBP}^{j+m}_{L}(\mathcal{E})\) for all \(j=0,1,\ldots,k\). Since \(\mathsf{z}=\mathsf{y}_{k}\in\operatorname{FBP}^{\infty}_{L}(\mathcal{E})\), this completes the proof.
For our next example, consider the sets \(\mathsf{g}_{j}\!:=\!\{(1,j)\}\) for \(j\in\mathbb{N}\). Define \(\mathcal{G}=\{\mathsf{g}_{j}\colon j\in\mathbb{N}\}\) and define \(\mathcal{R}\) to be the semilattice generated by \(\mathcal{X}\) and \(\mathcal{G}\). Since each \(\mathsf{b}_{n}\) is the union of finitely many members of \(\mathcal{G}\), \(\mathcal{T}\) is a subsemilattice of \(\mathcal{R}\).
**Example 6.4**.: One can check that \((\mathcal{R},\lambda^{*})\) has \(L\)-propagation for all \(L\geq 0\). The proof is very similar to the proof for \((\mathcal{S},\lambda)\). Unlike \(\mathcal{S}\), the semilattice \(\mathcal{R}\) does not have "unique factorization"; but each \(\mathsf{z}\in\mathcal{R}\) has a largest factor belonging to \(\langle\mathcal{X}\rangle\), and this factor has a unique decomposition as a union of members of \(\mathcal{X}\). Using this, one can carry out the same kind of argument that was used to show \((\mathcal{S},\lambda)\) has \(L\)-propagation. We leave the details to the reader.
## Acknowledgements
This paper is the conclusion of a larger project, which grew out of conversations between the authors while attending the conference "Banach Algebras and Applications", held in Gothenburg, Sweden, July-August 2013, and was further developed while the authors were attending the thematic program "Abstract Harmonic Analysis, Banach and Operator Algebras" at the Fields Institute, Canada, during March-April 2014. The authors thank the organizers of these meetings for invitations to attend and for pleasant environments to discuss research.
The first author acknowledges the financial support of the Faculty of Science and Technology at Lancaster University, in the form of a travel grant to attend the latter meeting. The third author acknowledges the financial support of a Fast Start Marsden Grant and of Victoria University of Wellington to attend these meetings.
The writing of this paper was facilitated by a visit of the first author to the University of Delaware in October 2022, supported by a Scheme 4 grant from the London Mathematical Society (reference 42128). The second author also acknowledges support from National Science Foundation Grant DMS-1902301 during the preparation of this article.
|
2310.03382 | Maximal line-free sets in $\mathbb{F}_p^n$ | We study subsets of $\mathbb{F}_p^n$ that do not contain progressions of
length $k$. We denote by $r_k(\mathbb{F}_p^n)$ the cardinality of such subsets
containing a maximal number of elements.
In this paper we focus on the case $k=p$ and therefore sets containing no
full line. A~trivial lower bound $r_p(\mathbb{F}_p^n)\geq(p-1)^n$ is achieved
by a hypercube of side length $p-1$ and it is known that equality holds for
$n\in\{1,2\}$. We will however show that $r_p(\mathbb{F}_p^3)\geq
(p-1)^3+p-2\sqrt{p}$, which is the first improvement in the three dimensional
case that is increasing in $p$.
We will also give the upper bound $r_p(\mathbb{F}_p^{3})\leq
p^3-2p^2-(\sqrt{2}-1)p+2$ as well as generalizations for higher dimensions.
Finally we present some bounds for individual $p$ and $n$, in particular
$r_5(\mathbb{F}_5^{3})\geq 70$ and $r_7(\mathbb{F}_7^{3})\geq 225$ which can be
used to give the asymptotic lower bound $4.121^n$ for $r_5(\mathbb{F}_5^{n})$
and $6.082^n$ for $r_7(\mathbb{F}_7^{n})$. | Christian Elsholtz, Jakob Führer, Erik Füredi, Benedek Kovács, Péter Pál Pach, Dániel Gábor Simon, Nóra Velich | 2023-10-05T08:37:13Z | http://arxiv.org/abs/2310.03382v2 | # Maximal line-free sets in \(\mathbb{F}_{p}^{n}\)
###### Abstract.
We study subsets of \(\mathbb{F}_{p}^{n}\) that do not contain progressions of length \(k\). We denote by \(r_{k}(\mathbb{F}_{p}^{n})\) the cardinality of such subsets containing a maximal number of elements.
In this paper we focus on the case \(k=p\) and therefore sets containing no full line. A trivial lower bound \(r_{p}(\mathbb{F}_{p}^{n})\geq(p-1)^{n}\) is achieved by a hypercube of side length \(p-1\) and it is known that equality holds for \(n\in\{1,2\}\). We will however show \(r_{p}(\mathbb{F}_{p}^{3})\geq(p-1)^{3}+p-2\sqrt{p}\), which is the first improvement in the three dimensional case that is increasing in \(p\).
We will also give the upper bound \(r_{p}(\mathbb{F}_{p}^{3})\leq p^{3}-2p^{2}-(\sqrt{2}-1)p+2\) as well as generalizations for higher dimensions.
Finally we present some bounds for individual \(p\) and \(n\), in particular \(r_{5}(\mathbb{F}_{5}^{3})\geq 70\) and \(r_{7}(\mathbb{F}_{7}^{3})\geq 225\) which can be used to give the asymptotic lower bound \(4.121^{n}\) for \(r_{5}(\mathbb{F}_{5}^{n})\) and \(6.082^{n}\) for \(r_{7}(\mathbb{F}_{7}^{n})\).
Key words and phrases: line-free sets, blocking sets, sets without arithmetic progressions, extremal combinatorics, combinatorics in \(\mathbb{F}_{p}^{n}\). 2020 Mathematics Subject Classification: 51E21, 11B25, 05D05.
## 1. Introduction
In the intersection of finite geometry and extremal combinatorics numerous problems of finding maximal subsets of affine or projective spaces avoiding certain configurations have been studied. One natural question asks for bounds on the cardinality of subsets of the \(n\)-dimensional affine space over a finite field \(\mathbb{F}_{q}\) that do not contain a full line.
We denote by \(r_{k}(\mathbb{F}_{p}^{n})\) the cardinality of a subset \(S\subseteq\mathbb{F}_{p}^{n}\), containing a maximal number of elements such that \(S\) contains no \(k\) points in arithmetic progression. Note that in the case when \(k=p\) is a prime, \(k\)-progressions in \(\mathbb{F}_{p}^{n}\) correspond to lines in the \(n\)-dimensional affine space and we are therefore interested in bounds on \(r_{p}(\mathbb{F}_{p}^{n})\).
When \(p=3\) the problem coincides with the cap set problem, a well-studied area where one can use the fact that \(x,y,z\) form a line exactly when they fulfil a non-trivial linear equation \(ax+by+cz=0\) where \(a+b+c=0\). Ellenberg and Gijswijt [9] gave the first exponential improvement to the trivial upper bound of \(3^{n}\) with \(r_{3}(\mathbb{F}_{3}^{n})<2.756^{n}\) for large enough \(n\), which was further improved by Jiang [16] by a factor of \(\sqrt{n}\). The best lower bound was given by Tyrrell [23] with \(r_{3}(\mathbb{F}_{3}^{n})>2.218^{n}\) for large enough \(n\). The exact values of \(r_{3}(\mathbb{F}_{3}^{n})\) are known up to \(n=6\), where \(r_{3}(\mathbb{F}_{3}^{6})=112\) was proven by Potechin [22].
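To make the smallest case tangible, the following brute-force script (ours, illustrative only) recovers \(r_{3}(\mathbb{F}_{3}^{2})=4=(3-1)^{2}\); exhaustive search of this kind is of course only feasible for very small \(p\) and \(n\).

```python
from itertools import combinations, product

p, n = 3, 2
points = list(product(range(p), repeat=n))

# All affine lines of F_p^n, i.e. all p-term progressions with a non-zero difference.
lines = set()
for base in points:
    for d in product(range(p), repeat=n):
        if d != (0,) * n:
            lines.add(frozenset(tuple((b + i * t) % p for b, t in zip(base, d)) for i in range(p)))

best = 0
for size in range(p ** n, 0, -1):
    if any(all(not line <= set(sub) for line in lines) for sub in combinations(points, size)):
        best = size
        break
print(best)   # 4, i.e. r_3(F_3^2) = (p - 1)^2
```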
For the general case surprisingly few results on \(r_{p}(\mathbb{F}_{p}^{n})\) are known. There is the trivial lower bound \(r_{p}(\mathbb{F}_{p}^{n})\geq(p-1)^{n}\) achieved by a hypercube of side length \(p-1\). Jamison [15] and Brouwer and Schrijver [5] independently proved that this is sharp for \(n=2\). For \(n=3\) the only improvement to this construction was by a single point described in the post of Zare in a mathoverflow thread [1]. We will prove the following lower bounds:
**Theorem 1.1**.: _Let \(p\geq 5\) be a prime then_
\[r_{p}(\mathbb{F}_{p}^{3})\geq(p-1)^{3}+p-2\sqrt{p}=p^{3}-3p^{2}+4p-2\sqrt{p}-1.\]
This can be improved in some special cases.
**Theorem 1.2**.: _Let \(p\) be a prime with \(p\equiv 7\pmod{24}\), then_
\[r_{p}(\mathbb{F}_{p}^{3})\geq(p-1)^{3}+(p-1)=p^{3}-3p^{2}+4p-2.\]
_Moreover, \(r_{7}(\mathbb{F}_{7}^{3})\geq 225\)._
The simple upper bound \(r_{p}(\mathbb{F}_{p}^{n})\leq p^{n}-\frac{p^{n}-1}{p-1}\) was given by Aleksanyan and Papikian [2] and is achieved by removing at least one point from each line going through a fixed point. In particular \(r_{p}(\mathbb{F}_{p}^{3})\leq p^{3}-p^{2}-p-1\). This was improved by Bishnoi et al. [4] to \(p^{n}-2p^{n-1}+1\) and \(p^{3}-2p^{2}+1\), respectively. We will give the following new bounds:
**Theorem 1.3**.: _Let \(p\geq 3\) be a prime, \(k\in\{3,4,...,p\}\) and \(n\in\mathbb{N}\) then_
\[r_{k}(\mathbb{F}_{p}^{n+1})\leq\frac{2(p^{n+1}-1)r_{k}(\mathbb{F}_{p}^{n})+p^ {n}-\sqrt{4(p^{n+1}-1)r_{k}(\mathbb{F}_{p}^{n})(p^{n}-r_{k}(\mathbb{F}_{p}^{n} ))+p^{2n}}}{2p^{n}},\]
where the three-dimensional case gives the following corollary.
**Corollary 1.4**.: _Let \(p\geq 3\) be a prime then_
\[r_{p}(\mathbb{F}_{p}^{3})\leq\frac{2p^{5}-4p^{4}+2p^{3}-p^{2}+4p-2-\sqrt{8p^{6 }-20p^{5}+17p^{4}-12p^{3}+20p^{2}-16p+4}}{2p^{2}},\]
_in particular,_
\[r_{p}(\mathbb{F}_{p}^{3})\leq p^{3}-2p^{2}-(\sqrt{2}-1)p+2.\]
For other dimensions, there is the lower bound \(r_{p}(\mathbb{F}_{p}^{2p})\geq p(p-1)^{2p-1}\) due to Frankl et al. [13], using large sunflower-free sets.
We found a 70 point 5-progression-free set in \(\mathbb{F}_{5}^{3}\) via a branch and cut approach (see Figure 4) and we will show the following upper bounds for small primes.
**Theorem 1.5**.: \(r_{5}(\mathbb{F}_{5}^{3})<74\).
**Theorem 1.6**.: \(r_{7}(\mathbb{F}_{7}^{3})<243\).
One can use the tensor product \(S_{1}\times S_{2}\) of two line-free sets \(S_{1}\subseteq\mathbb{F}_{p}^{n_{1}}\), \(S_{2}\subseteq\mathbb{F}_{p}^{n_{2}}\) to get a line-free set in the higher dimension \(n_{1}+n_{2}\). This construction also provides the lower bound \(|S_{1}|^{1/n_{1}}\) for \(\alpha_{p}:=\lim\limits_{n\to\infty}(r_{p}(\mathbb{F}_{p}^{n}))^{1/n}\) and therefore the asymptotic lower bound \((|S_{1}|^{1/n_{1}}-o(1))^{n}\) for \(r_{p}(\mathbb{F}_{p}^{n})\) (see e.g. [7], [18]). The strongest known lower bound for general \(p\) is \(\alpha_{p}\geq p^{1/2p}(p-1)^{(2p-1)/2p}\) using the results of Frankl et al. [13]; however, for small primes the new three-dimensional lower bounds \(r_{5}(\mathbb{F}_{5}^{3})\geq 70\) and \(r_{7}(\mathbb{F}_{7}^{3})\geq 225\) give better lower bounds, namely, \(\alpha_{5}\geq 4.121\) and \(\alpha_{7}\geq 6.082\).
We will also show the following explicit lower bound for arbitrary dimension.
**Theorem 1.7**.: _Let \(p\geq 3\) be a prime then \(r_{p}(\mathbb{F}_{p}^{n})\geq(p-1)^{n}+\frac{n-2}{2}(p-1)(p-2)^{n-3}\)._
## 2. Related results
* Davis and Maclagan [7] studied the card game SET, where the cards can be described as points in \(\mathbb{F}_{3}^{4}\) and one is interested in whether the displayed cards form a cap set. The results of Tyrrell [23] build on the construction of Edel [8], who gave the previously best lower bound for cap sets. Elsholtz and Lipnik [11] and Elsholtz and Pach [12] studied cap sets in other spaces than \(\mathbb{F}_{3}\).
* Croot et al. [6] gave an upper bound for \(3\)-progression-free sets in \(\mathbb{Z}_{4}^{n}\) that is exponentially smaller than \(4^{n}\). Their methods also led to the results of Ellenberg and Gijswijt [9]. Petrov and Pohoata [20] gave an improved upper bound for \(3\)-progression-free sets in \(\mathbb{Z}_{8}^{n}\), Pach and Palincza [19] gave both upper and lower bounds for \(6\)-progression-free sets in \(\mathbb{Z}_{6}^{n}\). Elsholtz et al. [10] studied the general case of \(k\)-progression-free sets in \(\mathbb{Z}_{m}^{n}\). An overview on known bounds is given by Pach [18].
* Moser [17] asked for the maximal size of a subset of \(\{1,2,...,k\}^{n}\) without a geometric line. Similarly, Hales and Jewett asked for a subset without a combinatorial line. The result of Furstenberg and Katznelson [14] also known as the density Hales-Jewett theorem implies that in both cases these sets have to be asymptotically smaller than \(k^{n}\) as \(n\) tends to infinity. Polymath [21] gave some explicit bounds for special cases.
* Sets that intersect every affine subspace of codimension \(s\) are called \(s\)-blocking sets. The complement of a line-free set in a finite \(n\)-dimensional affine space is therefore also called an \((n-1)\)-blocking set. It is known that the union of any \(n\) independent lines intersecting in a single point forms a \(1\)-blocking set in \(\mathbb{F}_{p}^{n}\), which is optimal (see e.g. [3], [5], [15]). However, for \((n-1)\)-blocking sets, the union of \(n\) independent hyperplanes, which seems to be the obvious algebraic construction, is not optimal, as will be shown in this paper. Bishnoi et al. [4] gave several upper bounds for the size of \(s\)-blocking sets.
## 3. Notation
We write \(\mathbb{Z}_{n}\) for \(\mathbb{Z}/n\mathbb{Z}\) and \(\mathbb{F}_{p}=\mathbb{Z}_{p}\) is the field with \(p\) elements whenever \(p\) is a prime.
We write \([k,\ell]\) for the set \(\{k,k+1,...,\ell\}\) either as a subset of \(\mathbb{Z}\) or of \(\mathbb{F}_{p}\).
We use both row and column vectors for the elements of \(\mathbb{F}_{p}^{n}\) and we call these elements points.
Given a subset \(S\subseteq\mathbb{F}_{p}^{3}\) we call the image of \(S\cap(\{j\}\times\mathbb{F}_{p}^{2})\) under the projection \(\phi\colon\mathbb{F}_{p}^{3}\longrightarrow\mathbb{F}_{p}^{2}\), \((a,b,c)\mapsto(b,c)\) the \(j\)-layer of \(S\).
## 4. Proofs of the upper bounds
Proof of Theorem 1.3.: Let \(A\subseteq\mathbb{F}_{p}^{n+1}\) be \(k\)-progression-free. We count the number of the point pairs on every \(n\)-dimensional hyperplane
\[s=|\{(\{a,b\},S)\ |\ a,b\in A,\ a\neq b,\ a,b\in S,\ S\text{ is an $n$-dimensional hyperplane}\}|\,.\]
On every hyperplane, the number of points is at most \(r_{k}(\mathbb{F}_{p}^{n})\). First, we assume \(r_{k}(\mathbb{F}_{p}^{n+1})\geq(p-1)r_{k}(\mathbb{F}_{p}^{n})\); then the sum of the numbers of point pairs over \(p\) parallel hyperplanes is maximal if there are \(p-1\) hyperplanes with \(r_{k}(\mathbb{F}_{p}^{n})\) points and one with \(r_{k}(\mathbb{F}_{p}^{n+1})-(p-1)r_{k}(\mathbb{F}_{p}^{n})\) points.
There are \(\frac{p^{n+1}-1}{p-1}\) disjoint sets of parallel hyperplanes, so
\[s\leq\Big{(}\frac{p^{n+1}-1}{p-1}\Big{)}\Big{(}(p-1)\binom{r_{k}(\mathbb{F}_{p }^{n})}{2}+\binom{r_{k}(\mathbb{F}_{p}^{n+1})-(p-1)r_{k}(\mathbb{F}_{p}^{n})} {2}\Big{)}.\]
Note here that this inequality still holds if \(r_{k}(\mathbb{F}_{p}^{n+1})<(p-1)r_{k}(\mathbb{F}_{p}^{n})\), as in this case the number of point pairs is clearly less than
\[(p-1)\binom{r_{k}(\mathbb{F}_{p}^{n})}{2}\]
and
\[\binom{r_{k}(\mathbb{F}_{p}^{n+1})-(p-1)r_{k}(\mathbb{F}_{p}^{n})}{2}\geq 0.\]
On the other hand, every point pair defines a line that is included in exactly \(\frac{p^{n}-1}{p-1}\)\(n\)-dimensional hyperplanes, so
\[s=\frac{p^{n}-1}{p-1}\binom{r_{k}(\mathbb{F}_{p}^{n+1})}{2}.\]
We get the quadratic inequality
\[p^{n}\big{(}r_{k}(\mathbb{F}_{p}^{n+1})\big{)}^{2}-\big{(}p^{n}+2(p^{n+1}-1)r_ {k}(\mathbb{F}_{p}^{n})\big{)}r_{k}(\mathbb{F}_{p}^{n+1})+(p^{n+2}-p)\big{(}r_ {k}(\mathbb{F}_{p}^{n})\big{)}^{2}\geq 0\]
with roots
\[\frac{2(p^{n+1}-1)r_{k}(\mathbb{F}_{p}^{n})+p^{n}\pm\sqrt{4(p^{n+1}-1)r_{k}( \mathbb{F}_{p}^{n})(p^{n}-r_{k}(\mathbb{F}_{p}^{n}))+p^{2n}}}{2p^{n}}.\]
As
\[r_{k}(\mathbb{F}_{p}^{n+1})\leq p(r_{k}(\mathbb{F}_{p}^{n}))\]
but
\[\frac{2(p^{n+1}-1)r_{k}(\mathbb{F}_{p}^{n})+p^{n}+\sqrt{4(p^{n+1}-1)r_{k}( \mathbb{F}_{p}^{n})(p^{n}-r_{k}(\mathbb{F}_{p}^{n}))+p^{2n}}}{2p^{n}}\]
\[>p(r_{k}(\mathbb{F}_{p}^{n}))+\frac{1}{2}-\frac{r_{k}(\mathbb{F}_{p}^{n})}{p^ {n}}+\frac{\sqrt{p^{2n}}}{2p^{n}}\geq p(r_{k}(\mathbb{F}_{p}^{n}))+\frac{1}{2} -1+\frac{1}{2}=p(r_{k}(\mathbb{F}_{p}^{n})),\]
the theorem follows.
Proof of Corollary 1.4.: The first statement follows immediately from Theorem 1.3 using \(r_{p}(\mathbb{F}_{p}^{2})=(p-1)^{2}\). For the second statement we are using that \(8p^{6}-20p^{5}+17p^{4}-12p^{3}+20p^{2}-16p+4\) can be bounded by \((2\sqrt{2}p^{3}-5/\sqrt{2}p^{2})^{2}\) from below for \(p\geq 3\) and we get
\[r_{p}(\mathbb{F}_{p}^{3}) \leq p^{3}-2p^{2}+p-\frac{1}{2}+\frac{2}{p}-\frac{1}{p^{2}}-\sqrt {2}p+\frac{5}{2\sqrt{2}}\] \[\leq p^{3}-2p^{2}-(\sqrt{2}-1)p-\frac{1}{2}+\frac{2}{3}+\frac{5}{ 2\sqrt{2}}\] \[\leq p^{3}-2p^{2}-(\sqrt{2}-1)p+2.\]
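A quick numerical evaluation (ours) of the two bounds in Corollary 1.4 for small primes may be helpful; for example, the exact root already gives \(r_{5}(\mathbb{F}_{5}^{3})\leq 74\).

```python
from math import sqrt

def exact_bound(p):
    num = 2*p**5 - 4*p**4 + 2*p**3 - p**2 + 4*p - 2
    disc = 8*p**6 - 20*p**5 + 17*p**4 - 12*p**3 + 20*p**2 - 16*p + 4
    return (num - sqrt(disc)) / (2 * p**2)

def relaxed_bound(p):
    return p**3 - 2*p**2 - (sqrt(2) - 1)*p + 2

for p in (3, 5, 7, 11):
    print(p, round(exact_bound(p), 2), round(relaxed_bound(p), 2))
# For p = 5 the exact root is about 74.49, so r_5(F_5^3) <= 74.
```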
Proof of Theorem 1.5.: Assume that \(S\subseteq\mathbb{F}_{5}^{3}\) is a \(5\)-progression-free set of size \(74\). We will compute a weighted sum over all lines containing \(4\) points to reach a contradiction.
Let us call a line containing exactly \(r\) points an \(r\)-line. Let \(l\) be a \(4\)-line in \(S\) and let \(H_{1},H_{2},...,H_{6}\) be the planes containing \(l\). Then \(\sum_{i=1}^{6}|H_{i}\cap S|=(74-4)+6\cdot 4=94\). Note that \(r_{5}(\mathbb{F}_{5}^{2})=16\) and \(r_{4}(\mathbb{F}_{5}^{2})=11\), which can be easily checked by computer search. Therefore, \(|H_{i}\cap A|\geq 94-5\cdot 16=14\) for all \(i\) and there is no plane in \(\mathbb{F}_{5}^{3}\) containing \(12\) or \(13\)
points. Hence, there are five different distributions for the number of points in five parallel planes:
1. \(\{10,16,16,16,16\}\)
2. \(\{11,15,16,16,16\}\)
3. \(\{14,14,14,16,16\}\)
4. \(\{14,14,15,15,16\}\)
5. \(\{14,15,15,15,15\}\).
Denote by \(a,b,c,d,e\) the number of classes of parallel planes having these distributions. Note that
\[a+b+c+d+e=31. \tag{4.1}\]
If we compare the number of pairs of points in each plane with the total number of pairs we get \((\binom{10}{2}+4\binom{16}{2})a+(\binom{11}{2}+\binom{15}{2}+3\binom{16}{2})b+ (3\binom{14}{2}+2\binom{16}{2})c+(2\binom{14}{2}+2\binom{15}{2}+\binom{16}{2} )d+(\binom{14}{2}+4\binom{15}{2})e=6\binom{74}{2}\)
\[\Leftrightarrow 525a+520b+513c+512d+511e=16206, \tag{4.2}\]
since each pair lies in exactly six planes.
Now denote by \(A\), \(B\), and \(C\) the number of pairs \((\ell,H)\) where \(H\) is a hyperplane containing \(16\), \(15\) and \(14\) points, respectively and \(\ell\subseteq H\) is a \(4\)-line. Again let \(\ell\) be a \(4\)-line and let \(H_{1},H_{2},...,H_{6}\) be the planes containing \(\ell\). Then
\[\{|H_{i}\cap S|\mid i\in[1,6]\}\in\{\{14,16,16,16,16,16\},\{15,15,16,16,16,16\}\}\]
as multisets and therefore
\[A-2B-5C=0 \tag{4.3}\]
To bound the size of \(A\), \(B\) and \(C\) we need the following claims.
**Claim 1**.: _Every plane containing \(16\) points contains at least twelve \(4\)-lines._
Proof of Claim 1.: Consider a plane \(H\) containing \(16\) points and let \(x_{i}\) be the number of \(i\)-lines in \(H\) for \(i\in\{1,2,3,4\}\). By double counting the points in \(H\) we get \(x_{1}+2x_{2}+3x_{3}+4x_{4}=6\cdot 16=96\) and by double counting the pairs of points in \(H\) we get \(x_{2}+3x_{3}+6x_{4}=\binom{16}{2}=120\). By taking the difference of the two equations we get \(-x_{1}-x_{2}+2x_{4}=24\), implying that \(2x_{4}\geq 24\).
**Claim 2**.: _For \(m\in\{14,15\}\), every plane containing \(m\) points contains at most \(m\)\(4\)-lines._
Proof of Claim 2.: As \(5\cdot 3+1=16>m\), every point in \(S\) can be contained in at most four \(4\)-lines and therefore the number of \(4\)-lines in the plane is bounded from above by \(\frac{4m}{4}=m\).
Finally combining (4.1), (4.2) and (4.3) we obtain the following system of linear equations and inequalities.
\[a+b+c+d+e=31\] \[525a+520b+513c+512d+511e=16206\] \[A-2B-5C=0\] \[A\geq 48a+36b+24c+12d\] \[B\leq 15b+30d+60e\] \[C\leq 42c+28d+14e\] \[a,b,c,d,e,A,B,C\geq 0,\]
which does not have any integral solution, a contradiction to \(|S|=74\).
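The infeasibility of this system is easy to machine-check. The enumeration below (ours) uses the elementary observation that, for fixed \((a,b,c,d,e)\), a compatible triple \((A,B,C)\) exists if and only if \(2B_{\max}+5C_{\max}\geq A_{\min}\), where \(A_{\min}\), \(B_{\max}\), \(C_{\max}\) denote the three bounds above.

```python
# Enumerate all non-negative integer (a, b, c, d, e) satisfying the two equalities and
# test whether a compatible (A, B, C) could exist; the list should come out empty.
feasible = []
for a in range(32):
    for b in range(32 - a):
        for c in range(32 - a - b):
            for d in range(32 - a - b - c):
                e = 31 - a - b - c - d
                if 525*a + 520*b + 513*c + 512*d + 511*e != 16206:
                    continue
                a_min = 48*a + 36*b + 24*c + 12*d
                b_max = 15*b + 30*d + 60*e
                c_max = 42*c + 28*d + 14*e
                if 2*b_max + 5*c_max >= a_min:
                    feasible.append((a, b, c, d, e))
print(feasible)   # expected: []
```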
Proof of Theorem 1.6.: Assume that \(S\subseteq\mathbb{F}_{7}^{3}\) is a \(7\)-progression-free set of size \(243\). Note that we have the following bounds.
**Claim 3**.: _Every plane containing \(36\) points contains at least \(18\) \(6\)-lines and every plane containing \(35\), \(34\) or \(33\) points contains at most \(33\), \(30\), \(28\) \(6\)-lines, respectively. Moreover, \(r_{7}(\mathbb{F}_{7}^{2})=36\) and \(r_{6}(\mathbb{F}_{7}^{2})=29\)._
Proof of Claim 3.: Consider a plane \(H\) containing \(m\) points and let \(x_{i}\) be the number of \(i\)-lines in \(H\) for \(i\in[0,6]\). There are \(56\) lines in the plane and therefore
\[x_{0}+x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}=56 \tag{4.4}\]
By double counting the points in \(H\) we get
\[x_{1}+2x_{2}+3x_{3}+4x_{4}+5x_{5}+6x_{6}=8m \tag{4.5}\]
and by double counting the pairs of points in \(H\) we get
\[x_{2}+3x_{3}+6x_{4}+10x_{5}+15x_{6}=\binom{m}{2}. \tag{4.6}\]
If \(m=36\) we take the difference of the (4.6) and two times (4.5) and get \(-2x_{1}-3x_{2}-3x_{3}-2x_{4}+3x_{6}=54\), implying that \(x_{6}\geq 18\). If \(m\in\{33,34,35\}\), then by taking three times (4.4) minus two times (4.5) plus (4.6) we get \(3x_{0}+x_{1}+x_{4}+3x_{5}+6x_{6}=168-16m+\binom{m}{2}\) and therefore \(6x_{6}\leq 168-16m+\binom{m}{2}\) which gives the desired bounds.
The last two claims can be easily checked by computer search.
If we now proceed analogously to the proof of Theorem 1.5 we again arrive at a contradiction.
## 5. Proofs of the lower bounds
Proof of Theorem 1.7.: We consider three different types of \(2\)-dimensional layers:
* \(A:=[0,p-2]^{2}\),
* \(B:=[0,p-1]^{2}\setminus\{(i,i)\ |\ i\in[0,p-1]\}\setminus\big{(}\{p-1\} \times[0,\frac{p-3}{2}]\big{)}\setminus\big{(}[0,\frac{p-3}{2}]\times\{p-1\} \big{)}\),
* \(C:=\{(i,i)\ |\ i\in[0,\frac{p-3}{2}]\}\),
and three disjoint subsets of \(\mathbb{F}_{p}^{n-2}\):
* \(\mathcal{A}:=[0,p-3]^{n-2}\),
* \(\mathcal{B}:=[0,p-2]^{n-2}\setminus[0,p-3]^{n-2}\),
* \(\mathcal{C}:=\bigcup_{j\in[1,n-2]}\{x\in\mathbb{F}_{p}^{n-2}\mid(x_{j}=p-1)\wedge( x_{i}\in[0,p-3]\ \forall i\neq j)\}.\)
We show that \(S:=(\mathcal{A}\times A)\cup(\mathcal{B}\times B)\cup(\mathcal{C}\times C)\) is \(p\)-progression-free.
First consider the case \(n=3\). Let \(L:=\{(a_{1},a_{2},a_{3})+(b_{1},b_{2},b_{3})i\mid i\in[0,p-1]\}\) be a \(p\)-progression in \(\mathbb{F}_{p}^{3}\) with \(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3}\in\mathbb{F}_{p}.\)
* Case 1: \(b_{1}=0\) and \(a_{1}\neq p-2:\\ [0,p-2]^{2}\) is \(p\)-progression-free and \(|\{(i,i)\mid i\in[0,\frac{p-3}{2}]\}|<p\), therefore \(L\) is not contained in \(S\).
* Case 2: \(b_{1}=0\) and \(a_{1}=p-2:\\ L^{\prime}:=\{(a_{2},a_{3})+(b_{2},b_{3})i\mid i\in[0,p-1]\}\) and \(\{(i,i)\mid i\in[0,p-1]\}\) are both lines in \(\mathbb{F}_{p}^{2}\). If they are not parallel or they are equal, they do intersect, and \(L\) is not contained in \(S\). Otherwise we can rewrite \(L^{\prime}=\{(i,c+i)\mid i\in[0,p-1]\}\) with \(c\in[1,p-1].\) If \(c\in[1,\frac{p-1}{2}]\) then \(c+(p-1)\in[0,\frac{p-3}{2}]\) (choose \(i=p-1\)) and \((p-2,p-1,c+(p-1))\in L\setminus S.\) Similarly if \(c\in[\frac{p+1}{2},p-1]\), then \(p-1-c\in[0,\frac{p-3}{2}]\) (choose \(i=p-1-c\)) and \((p-2,p-1-c,p-1)\in L\setminus S.\) Therefore, \(L\) is not contained in \(S\).
* Case 3: \(b_{1}\neq 0\): Without the loss of generality let \(b_{1}=1\) and \(a_{1}=p-2\). If \(b_{2}=b_{3}=0\) then \(L\) is not contained in \(S\) because the \((p-2)\)-layer and \((p-1)\)-layer of \(S\) have no common point. Otherwise, without the loss of generality, let \(b_{2}\neq 0\) and therefore \(\{a_{2}+b_{2}i\mid i\in[0,p-1]\}=[0,p-1].\) Assume that \(L\subseteq S\). Then \(a_{2}=p-1\) and \(a_{3}\in[\frac{p-1}{2},p-2]\) because the \((p-2)\)-layer is the only layer containing points with the coordinate \(p-1\). Since the \((p-1)\)-layer does not have coordinates in \([\frac{p-1}{2},p-2]\), also \(b_{3}\neq 0\) and consequently \(\{a_{3}+b_{3}i\mid i\in[0,p-1]\}=[0,p-1].\) As before it follows that \(a_{3}=p-1\) contradicting that \(L\subseteq S\). Thus, \(L\) is not contained in \(S\) and \(S\) is \(p\)-progression-free.
Now consider \(n>3\). We have already seen that every layer is \(p\)-progression-free, so we only consider progressions \(L:=\{a+bi\mid i\in[0,p-1]\}\) visiting \(p\) non-empty layers. Let \(m\) be the number of non-zero entries in the first \(n-2\) coordinates of \(b\). Since only layers of type \(C\) are placed where one of the first \(n-2\) coordinates is \(p-1\) and all layers where two of the first \(n-2\) coordinates are \(p-1\) are empty, \(m\) is also the number of type \(C\) layers visited by \(L\) and \(m\leq p\).
* If \(m=1\), \(L\) is not contained in \(S\), analogously to the \(3\)-dimensional case.
* If \(m\geq 2\) the last two coordinates of every point in \(L\) are equal, since the projection of \(L\) in the last two coordinates is a line containing two points in the main diagonal, or it is a single point in the main diagonal. Now since only layers of type \(B\) are placed where one of the first \(n-2\) coordinates is \(p-2\), \(L\) also visits a layer of type \(B\). Therefore \(L\) is not contained in \(S\) because layers of type \(B\) contain no points on the main diagonal.
Finally, note that layers of type \(A\) and \(B\) contain \((p-1)^{2}\) points and layers of type \(C\) contain \((p-1)/2\) points and thus
\[|S|=(p-1)^{2}(p-1)^{n-2}+\frac{p-1}{2}(n-2)(p-2)^{n-3}=(p-1)^{n}+\frac{n-2}{2} (p-1)(p-2)^{n-3}.\]
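For small parameters the construction can be verified exhaustively. The script below (ours; the helper names are not from the paper) builds \(S\) for \(p=5\), \(n=3\), checks the size \((p-1)^{n}+\frac{n-2}{2}(p-1)(p-2)^{n-3}=66\), and confirms that no full line is contained.

```python
from itertools import product

p, n = 5, 3
half = (p - 3) // 2
layer_A = set(product(range(p - 1), repeat=2))
layer_B = (set(product(range(p), repeat=2))
           - {(i, i) for i in range(p)}
           - {(p - 1, i) for i in range(half + 1)}
           - {(i, p - 1) for i in range(half + 1)})
layer_C = {(i, i) for i in range(half + 1)}

def layer(prefix):                       # which 2-dimensional layer sits over a given prefix
    if all(c <= p - 3 for c in prefix):
        return layer_A
    if all(c <= p - 2 for c in prefix):
        return layer_B
    if sum(c == p - 1 for c in prefix) == 1 and all(c <= p - 3 or c == p - 1 for c in prefix):
        return layer_C
    return set()

S = {prefix + xy for prefix in product(range(p), repeat=n - 2) for xy in layer(prefix)}
assert len(S) == (p - 1) ** n + (n - 2) * (p - 1) * (p - 2) ** (n - 3) // 2

def contains_full_line(S):
    for base in product(range(p), repeat=n):
        for d in product(range(p), repeat=n):
            if d != (0,) * n and all(
                    tuple((b + i * t) % p for b, t in zip(base, d)) in S for i in range(p)):
                return True
    return False

print(len(S), contains_full_line(S))     # 66 False
```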
Proof of Theorem 1.1.: Let \(k=\lfloor\sqrt{p}\rfloor\), \(t=\lfloor p/k\rfloor\), \(K:=[0,k-1]\) and \(T:=\{jk-1\mid j\in[1,t]\}\). Consider the set
\[S:= [0,p-3]\times[0,p-2]^{2}\] \[\cup\{p-2\}\times([0,p-1]^{2}\setminus\{(j,j)\mid j\in[0,p-1]\} \setminus((K\cup\{p-1\})\times(T\cup\{p-1\})))\] \[\cup\{p-1\}\times K\times T.\]
We will show that \(S\subseteq\mathbb{F}_{p}^{3}\) is \(p\)-progression-free.
Let \(L:=\{(a_{1},a_{2},a_{3})+(b_{1},b_{2},b_{3})i\mid i\in[0,p-1]\}\) be a \(p\)-progression in \(\mathbb{F}_{p}^{3}\) with \(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3}\in\mathbb{F}_{p}\).
* Case 1: \(b_{1}=0\) and \(a_{1}\neq p-2:\) \([0,p-2]^{2}\) is \(p\)-progression-free and \(|K\times T|=kt<p\), therefore \(L\) is not contained in \(S\).
* Case 2: \(b_{1}=0\) and \(a_{1}=p-2:\) \(L^{\prime}:=\{(a_{2},a_{3})+(b_{2},b_{3})i\mid i\in[0,p-1]\}\) and \(\{(i,i)\mid i\in[0,p-1]\}\) are both lines in \(\mathbb{F}_{p}^{2}\). If they are not parallel or they are equal, they do intersect, and \(L\) is not contained in \(S\). Otherwise we can rewrite \(L^{\prime}=\{(i,c+i)\mid i\in[0,p-1]\}\) with \(c\in[1,p-1]\). \(\{(i,c+i)\mid i\in[0,k-1]\}\cap(K\times(T\cup\{p-1\}))\neq\emptyset\) and therefore \(L\) is not contained in \(S\).
Figure 1. A description of the line-free set in Theorem 1.7 for \(p=5\) and \(n=4\).
Figure 2. The last two layers of the line-free set in Theorem 1.1 for \(p=11\).
* Case 3: \(b_{1}\neq 0\): Without the loss of generality, let \(b_{1}=1\) and \(a_{1}=p-2\). If \(b_{2}=b_{3}=0\) then \(L\) is not contained in \(S\) because the \((p-2)\)-layer and \((p-1)\)-layer of \(S\) have no common point. Else, if \(b_{2}\neq 0\) and \(b_{3}\neq 0\) then \(\{a_{2}+b_{2}i\mid i\in[0,p-1]\}=\{a_{3}+b_{3}j\mid j\in[0,p-1]\}=[0,p-1].\) Since the \((p-2)\)-layer is the only layer with \(p-1\) entries but \((p-2,p-1,p-1)\not\in S\), \(L\) is not contained in \(S\). Finally, if either \(b_{2}=0\) or \(b_{3}=0\) but not both, one of the last two coordinates is constant, and the other one visits every possible value. Now again the \((p-2)\)-layer is the only layer with \(p-1\) entries but the \((p-1)\)-layer has empty rows and columns wherever the \((p-2)\)-layer has \(p-1\) entries and therefore \(L\) is not contained in \(S\).
Note that since \(p\) is a prime and \(k\geq 2\), \(t\leq\frac{p-1}{k}\) and that from the definition of \(k\) it follows that
\[k\in[\sqrt{p}-1,\sqrt{p}+1]\] \[\Leftrightarrow k^{2}-2\sqrt{p}k+p-1\leq 0\] \[\Leftrightarrow k+\frac{p-1}{k}\leq 2\sqrt{p},\]
and therefore \(k+t\leq 2\sqrt{p}\). Hence,
\[|S|= (p-2)(p-1)^{2}+(p^{2}-p-(kt-1)-k-t)+kt\] \[= (p-2)(p-1)^{2}+p^{2}-p+1-k-t\] \[\geq (p-2)(p-1)^{2}+p^{2}-p+1-2\sqrt{p}\] \[= (p-1)^{3}+p-2\sqrt{p}\]
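The same kind of exhaustive check works here. The script below (ours) builds the set for \(p=5\), where \(k=t=2\), \(K=\{0,1\}\) and \(T=\{1,3\}\), and confirms that it has \(65\geq(p-1)^{3}+p-2\sqrt{p}\) points and contains no full line.

```python
from itertools import product

p = 5
k = int(p ** 0.5)                       # floor(sqrt(p))
t = p // k
K = set(range(k))
T = {j * k - 1 for j in range(1, t + 1)}

S = {(a, x, y) for a in range(p - 2) for x in range(p - 1) for y in range(p - 1)}
S |= {(p - 2, x, y) for x in range(p) for y in range(p)
      if x != y and not (x in K | {p - 1} and y in T | {p - 1})}
S |= {(p - 1, x, y) for x in K for y in T}

def contains_full_line(S):
    for base in product(range(p), repeat=3):
        for d in product(range(p), repeat=3):
            if d != (0, 0, 0) and all(
                    tuple((b + i * u) % p for b, u in zip(base, d)) in S for i in range(p)):
                return True
    return False

assert len(S) >= (p - 1) ** 3 + p - 2 * p ** 0.5
print(len(S), contains_full_line(S))    # 65 False
```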
Proof of Theorem 1.2.: Let \(p=7+24\ell\) for \(\ell\in\mathbb{Z}_{\geq 0}\), let \(A\) be the set of quadratic residues, that is, \(A=\{a^{2}\mid a\in\mathbb{F}_{p}^{*}\}\) and \(B:=\mathbb{F}_{p}^{*}\setminus A\). Note that \(|A|=|B|=\frac{p-1}{2}\) and the law of quadratic reciprocity yields
\[\left(\frac{-1}{p}\right)=(-1)^{\frac{p-1}{2}}=(-1)^{3+12\ell}=-1,\]
Figure 3: The line-free set in Theorem 1.2 for \(p=7\).
\[\left(\frac{2}{p}\right)=(-1)^{\frac{p^{2}-1}{8}}=(-1)^{6+42\ell+72\ell^{2}}=1,\]
\[\left(\frac{3}{p}\right)=(-1)^{\frac{p-1}{2}\frac{3-1}{2}}\left(\frac{p}{3} \right)=(-1)^{3+12\ell}\left(\frac{1}{3}\right)=-1,\]
and therefore \(2\in A\) and \(\{-1,3\}\subseteq B\). Note here that \(A\) is a subgroup of \(\mathbb{F}_{p}^{*}\) and this means that multiplication by \(2\) or \(\frac{1}{2}\) leaves elements of \(A\) or \(B\) in the same set, while multiplication by \(-1\), \(3\) or \(\frac{1}{3}\) changes the set. For instance, \(3a\in B\) for all \(a\in A\) and \(-\frac{3b}{2}=(-1)\cdot 3\cdot\frac{1}{2}\cdot b\in B\) for all \(b\in B\). Let
\[S:= [1,p-1]^{3}\cup\big{(}\{(a,0,a)|a\in A\}\cup\{(0,a,a)|a\in A\}\big{)}\] \[\setminus \big{(}\{(a,a,a)|a\in A\}\cup\{(a/2,a/2,a)|a\in A\}\big{)}\] \[\cup \big{(}\{(3b/2,0,b)|b\in B\}\cup\{(0,3b/2,b)|b\in B\}\cup\{(3b,0,b)|b\in B\}\cup\{(0,3b,b)|b\in B\}\big{)}\] \[\setminus \big{(}\{(b,b,b)|b\in B\}\cup\{(3b/2,3b/2,b)|b\in B\}\cup\{(b/3,b/3,b)|b\in B\}\big{)}\] \[\setminus \big{(}\{(3b,-3b/2,b)|b\in B\}\cup\{(-3b/2,3b,b)|b\in B\}\big{)}\] \[\cup \big{(}\{(b,b,0)|b\in B\}\cup\{(2a,-a,0)|a\in A\}\cup\{(-a,2a,0)|a\in A\}\big{)}.\]
We will show that \(S\) is \(p\)-progression-free.
Note that \(S\) is symmetric in the first two coordinates. We will therefore, in this proof, skip one of two symmetric cases, whenever possible.
Let \(L:=\{(c_{1},c_{2},c_{3})+(d_{1},d_{2},d_{3})i\mid i\in[0,p-1]\}\) be a \(p\)-progression in \(\mathbb{F}_{p}^{3}\) with \(c_{1},c_{2},c_{3},d_{1},d_{2},d_{3}\in\mathbb{F}_{p}\) and assume that \(L\subseteq S\).
First, assume that \(d_{3}=0\).
* Case 1: \(c_{3}=0\): Since \(S\) contains no points where the third and one of the first two coordinates is \(0\), \(L\) is not contained in \(S\).
* Case 2: \(c_{3}\in A\): Let \(a:=c_{3}\). Since \((a,0,a)\) and \((0,a,a)\) are the only points where the third coordinate is \(a\) and one of the first two coordinates is \(0\), we can assume \((a,0,a)\in L\). If \(d_{1}=0\) then \((a,a,a)\in L\), a contradiction. If \(d_{1}\neq 0\) then also \((0,a,a)\in L\) and consequently \((\frac{a}{2},\frac{a}{2},a)=\frac{1}{2}(a,0,a)+\frac{1}{2}(0,a,a)\in L\), again a contradiction.
* Case 3: \(c_{3}\in B\): Let \(b:=c_{3}\). First, assume \(d_{1}\neq 0\) and \(d_{2}\neq 0\). Since \((\frac{3b}{2},0,b)\), \((0,\frac{3b}{2},b)\), \((3b,0,b)\) and \((0,3b,b)\) are the only points where the third coordinate is \(b\) and one of the first two coordinates is \(0\), we only have to consider the following cases: If \((\frac{3b}{2},0,b)\in L\) and \((0,\frac{3b}{2},b)\in L\), then also \((-\frac{3b}{2},3b,b)=(-1)(\frac{3b}{2},0,b)+2(0,\frac{3b}{2},b)\in L\), if \((\frac{3b}{2},0,b)\in L\) and \((0,3b,b)\in L\), then also \((b,b,b)=\frac{2}{3}(\frac{3b}{2},0,b)+\frac{1}{3}(0,3b,b)\in L\) and if \((3b,0,b)\in L\) and \((0,3b,b)\in L\), then also \((\frac{3b}{2},\frac{3b}{2},b)=\frac{1}{2}(3b,0,b)+\frac{1}{2}(0,3b,b)\in L\). Consequently, we arrived at a contradiction. Now, if \(d_{1}=0\) or \(d_{2}=0\), again \(L\) has to contain one of the points \((\frac{3b}{2},0,b)\), \((0,\frac{3b}{2},b)\), \((3b,0,b)\), \((0,3b,b)\) and therefore \(L\) also contains one of the points \((\frac{3b}{2},\frac{3b}{2},b)\), \((3b,\frac{-3b}{2},b)\), \((\frac{-3b}{2},3b,b)\), again a contradiction.
Now assume that \(d_{3}\neq 0\). If \(d_{1}=d_{2}=0\), \(L\) contains a point with a zero last coordinate. We get that either \((b,b,0)\) and therefore also \((b,b,b)\) is in \(L\) for some \(b\in B\) or \((2a,-a,0)\in L\)
for some \(a\in A\) and therefore also \((3b,\frac{-3b}{2},b)\in L\) for the unique \(b\in B\) such that \(3b=2a\), both a contradiction.
In the remaining case \(d_{3}\neq 0\) and at least one of \(d_{1}\) and \(d_{2}\) is non-zero. Since there is no point in \(S\) where both the third and one of first two coordinates is zero, \(L\) has to include a point with third coordinate being zero and a different point where one of the other two coordinates is zero. We are therefore left with checking the following cases, where \(L\) is given by a pair of two points in \(S\). For some of these cases it is important to note that \(S\) contains no points where one of the first two coordinates is \(0\) and the other is in \(B\).
* \((b,b,0)\in L\) and \((0,a,a)\in L\): \[L=\bigg{\{}\begin{pmatrix}b\\ b\\ 0\end{pmatrix}+k\begin{pmatrix}-b\\ a-b\\ a\end{pmatrix}\bigg{|}\ k\in\mathbb{F}_{p}\bigg{\}}\] Since \(a\neq b\), setting \(k:=\frac{b}{b-a}\), we get \((x,y,z):=(-\frac{ab}{b-a},0,\frac{ab}{b-a})\in L\). Now \(x=-z\) and therefore \(z\in B\) and \(x\in A\), so \(-1=\frac{3}{2}\) or \(-1=3\), a contradiction.
* \((b,b,0)\in L\) and \((0,\frac{3b^{\prime}}{2},b^{\prime})\in L\): \[L=\bigg{\{}\begin{pmatrix}b\\ b\\ 0\end{pmatrix}+k\begin{pmatrix}-b\\ \frac{3b^{\prime}}{2}-b\\ b^{\prime}\end{pmatrix}\bigg{|}\ k\in\mathbb{F}_{p}\bigg{\}}\] Since \(\frac{3b^{\prime}}{2}\neq b\), setting \(k:=\frac{b}{b-\frac{3b^{\prime}}{2}}\), we get \((x,y,z):=(-\frac{3bb^{\prime}}{2(b-\frac{3b^{\prime}}{2})},0,\frac{bb^{\prime} }{b-\frac{3b^{\prime}}{2}})\in L\). Now \(x=-\frac{3}{2}z\) and therefore \(x,z\in A\), so \(-\frac{3}{2}=1\), a contradiction.
* \((b,b,0)\in L\) and \((0,3b^{\prime},b^{\prime})\in L\): \[L=\bigg{\{}\begin{pmatrix}b\\ b\\ 0\end{pmatrix}+k\begin{pmatrix}-b\\ 3b^{\prime}-b\\ b^{\prime}\end{pmatrix}\bigg{|}\ k\in\mathbb{F}_{p}\bigg{\}}\] Since \(3b^{\prime}\neq b\), setting \(k:=\frac{b}{b-3b^{\prime}}\), we get \((x,y,z):=(-\frac{3bb^{\prime}}{b-3b^{\prime}},0,\frac{bb^{\prime}}{b-3b^{ \prime}})\in L\). Now \(x=-3z\) and therefore \(x,z\in A\), so \(-3=1\), a contradiction.
* \((2a,-a,0)\in L\) and \((0,a^{\prime},a^{\prime})\in L\): \[L=\bigg{\{}\begin{pmatrix}2a\\ -a\\ 0\end{pmatrix}+k\begin{pmatrix}-2a\\ a^{\prime}+a\\ a^{\prime}\end{pmatrix}\bigg{|}\ k\in\mathbb{F}_{p}\bigg{\}}\] Since \(a^{\prime}\neq-a\), setting \(k:=\frac{a}{a+a^{\prime}}\), we get \((x,y,z):=(\frac{2aa^{\prime}}{a+a^{\prime}},0,\frac{aa^{\prime}}{a+a^{\prime} })\in L\). Now \(x=2z\) and therefore \(x,z\in A\), so \(2=1\), a contradiction.
* \((2a,-a,0)\in L\) and \((a^{\prime},0,a^{\prime})\in L\): \[L=\bigg{\{}\begin{pmatrix}2a\\ -a\\ 0\end{pmatrix}+k\begin{pmatrix}a^{\prime}-2a\\ a\\ a^{\prime}\end{pmatrix}\bigg{|}\ k\in\mathbb{F}_{p}\bigg{\}}\] Assume \(a^{\prime}\neq 2a\), then setting \(k:=\frac{2a}{2a-a^{\prime}}\), we get \((x,y,z):=(0,\frac{aa^{\prime}}{2a-a^{\prime}},\frac{2aa^{\prime}}{2a-a^{ \prime}})\in L\). Now \(2y=z\) and therefore \(y,z\in A\), so \(1=2\), a contradiction. If \(a^{\prime}=2a\), then setting \(k:=3\), we get \((x,y,z):=(2a,2a,6a)\in L\), a contradiction since \(6a\in B\).
* \((2a,-a,0)\in L\) and \((\frac{3b}{2},0,b)\in L\): \[L=\bigg{\{}\begin{pmatrix}2a\\ -a\\ 0\end{pmatrix}+k\begin{pmatrix}\frac{3b}{2}-2a\\ a\\ b\end{pmatrix}\bigg{|}\ k\in\mathbb{F}_{p}\bigg{\}}\] Assume \(\frac{3b}{2}\neq 2a\), then setting \(k:=\frac{2a}{2a-\frac{3b}{2}}\), we get \((x,y,z):=(0,\frac{3ab}{2(2a-\frac{3b}{2})},\frac{2ab}{2a-\frac{3b}{2}})\in L\). Now \(y=\frac{3z}{4}\) and therefore \(z\in B\) and \(x\in A\), so \(\frac{3}{4}=\frac{3}{2}\) or \(\frac{3}{4}=3\), a contradiction. If \(\frac{3b}{2}=2a\), then setting \(k:=3\), we get \((x,y,z):=(2a,2a,4a)\in L\), a contradiction since \(4a\in A\).
* \((2a,-a,0)\in L\) and \((0,\frac{3b}{2},b)\in L\): \[L=\bigg{\{}\begin{pmatrix}2a\\ -a\\ 0\end{pmatrix}+k\begin{pmatrix}-2a\\ \frac{3b}{2}+a\\ b\end{pmatrix}\bigg{|}\ k\in\mathbb{F}_{p}\bigg{\}}\] Assume \(\frac{3b}{2}\neq-3a\), then setting \(k:=\frac{3a}{\frac{3b}{2}+3a}\), we get \((x,y,z):=(\frac{3ab}{\frac{3b}{2}+3a},\frac{3ab}{\frac{3b}{2}+3a},\frac{3ab}{ \frac{3b}{2}+3a})\in L\). Now \(x=y=z\), a contradiction. If \(\frac{3b}{2}=-3a\), then setting \(k:=-\frac{1}{2}\), we get \((x,y,z):=(3a,0,a)\in L\), so \(1=3\), a contradiction.
* \((2a,-a,0)\in L\) and \((3b,0,b)\in L\): \[L=\bigg{\{}\begin{pmatrix}2a\\ -a\\ 0\end{pmatrix}+k\begin{pmatrix}3b-2a\\ a\\ b\end{pmatrix}\bigg{|}\ k\in\mathbb{F}_{p}\bigg{\}}\] Since \(b\neq a\), then setting \(k:=\frac{a}{a-b}\), we get \((x,y,z):=(\frac{ab}{a-b},\frac{ab}{a-b},\frac{ab}{a-b})\in L\). Now \(x=y=z\), a contradiction.
* \((2a,-a,0)\in L\) and \((0,3b,b)\in L\): \[L=\bigg{\{}\begin{pmatrix}2a\\ -a\\ 0\end{pmatrix}+k\begin{pmatrix}-2a\\ a+3b\\ b\end{pmatrix}\bigg{|}\ k\in\mathbb{F}_{p}\bigg{\}}\] since \(3b\neq-a\), setting \(k:=\frac{a}{a+3b}\), we get \((x,y,z):=(\frac{6ab}{a+3b},0,\frac{ab}{a+3b})\in L\). Now \(x=6z\) and therefore \(z\in B\) and \(x\in A\), so \(6=\frac{3}{2}\) or \(6=3\), a contradiction.
Finally, note that the layer with third coordinate \(0\) contains \(|B|+2|A|=\frac{3}{2}(p-1)\) points, layers with the third coordinate in \(A\) contain \((p-1)^{2}\) points and layers with the third coordinate in \(B\) contain \((p-1)^{2}-1\) points and thus
\[|S|=\frac{p-1}{2}(p-1)^{2}+\frac{p-1}{2}((p-1)^{2}-1)+\frac{3}{2}(p-1)=(p-1)^{ 3}+(p-1).\]
In the special case of \(p=7\), the layers with third coordinate in \(B\) actually contain \((p-1)^{2}=36\) points, since \(\frac{3}{2}=\frac{1}{3}\), thus giving the lower bound of \(225\).
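For \(p=7\) the construction fits comfortably into a brute-force check. The script below (ours) transcribes the definition of \(S\) above, using modular inverses for the divisions, and confirms the count \(225\) and the absence of full lines.

```python
from itertools import product

p = 7
A = {pow(x, 2, p) for x in range(1, p)}           # quadratic residues {1, 2, 4}
B = set(range(1, p)) - A                          # non-residues {3, 5, 6}
h = pow(2, p - 2, p)                              # 1/2 mod p
th = pow(3, p - 2, p)                             # 1/3 mod p

S = set(product(range(1, p), repeat=3))
S |= {(a, 0, a) for a in A} | {(0, a, a) for a in A}
S -= {(a, a, a) for a in A} | {(a * h % p, a * h % p, a) for a in A}
S |= {(3 * b * h % p, 0, b) for b in B} | {(0, 3 * b * h % p, b) for b in B}
S |= {(3 * b % p, 0, b) for b in B} | {(0, 3 * b % p, b) for b in B}
S -= ({(b, b, b) for b in B} | {(3 * b * h % p, 3 * b * h % p, b) for b in B}
      | {(b * th % p, b * th % p, b) for b in B})
S -= {(3 * b % p, -3 * b * h % p, b) for b in B} | {(-3 * b * h % p, 3 * b % p, b) for b in B}
S |= {(b, b, 0) for b in B} | {(2 * a % p, -a % p, 0) for a in A} | {(-a % p, 2 * a % p, 0) for a in A}

def contains_full_line(S):
    for base in product(range(p), repeat=3):
        for d in product(range(p), repeat=3):
            if d != (0, 0, 0) and all(
                    tuple((x + i * u) % p for x, u in zip(base, d)) in S for i in range(p)):
                return True
    return False

print(len(S), contains_full_line(S))              # 225 False
```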
### Acknowledgements
C.E. was supported by a joint FWF-ANR project ArithRand (I 4945-N and ANR-20-CE91-0006). J.F. was supported by the Austrian Science Fund (FWF) under the project W1230. D.G.S. was supported by the ERC Advanced Grant "Geoscape". B.K. was supported by the UNKP-22-3, New National Excellence Program of the Ministry for
Culture and Innovation from the source of the National Research, Development and Innovation Fund. P.P.P. was supported by the Lendulet program of the Hungarian Academy of Sciences (MTA) and by the National Research, Development and Innovation Office NKFIH (Grant Nr. K124171).
|
2307.09506 | Flat-band spin density wave in twisted bilayer materials | Twisting is a novel technique for creating strongly correlated effects in
two-dimensional bilayered materials, and can tunably generate nontrivial
topological properties, magnetism, and superconductivity. Magnetism is
particularly significant as it can both compete with superconductivity and lead
to the emergence of nontrivial topological states. However, the origin of
magnetism in twisted structures remains a subject of controversy. Using
self-developed large-scale electronic structure calculations, we propose the
magnetism in these twisted bilayer systems originates from spin splitting
induced by the enhanced ratio of the exchange interaction to band dispersion. | Zhigang Song, Jingshan Qi, Olivia Liebman, Prineha Narang | 2023-07-18T18:00:02Z | http://arxiv.org/abs/2307.09506v1 | # Flat-band spin density wave in twisted bilayer materials
###### Abstract
Twisting is a novel technique for creating strongly correlated effects in two-dimensional bilayered materials, and can tunably generate nontrivial topological properties, magnetism, and superconductivity. Magnetism is particularly significant as it can both compete with superconductivity and lead to the emergence of nontrivial topological states. However, the origin of magnetism in twisted structures remains a subject of controversy. Using self-developed large-scale electronic structure calculations, we propose the magnetism in these twisted bilayer systems originates from spin splitting induced by the enhanced ratio of the exchange interaction to band dispersion.
pacs: 71.10. |
2302.03193 | On the Ideal Number of Groups for Isometric Gradient Propagation | Recently, various normalization layers have been proposed to stabilize the
training of deep neural networks. Among them, group normalization is a
generalization of layer normalization and instance normalization by allowing a
degree of freedom in the number of groups it uses. However, to determine the
optimal number of groups, trial-and-error-based hyperparameter tuning is
required, and such experiments are time-consuming. In this study, we discuss a
reasonable method for setting the number of groups. First, we find that the
number of groups influences the gradient behavior of the group normalization
layer. Based on this observation, we derive the ideal number of groups, which
calibrates the gradient scale to facilitate gradient descent optimization. Our
proposed number of groups is theoretically grounded, architecture-aware, and
can provide a proper value in a layer-wise manner for all layers. The proposed
method exhibited improved performance over existing methods in numerous neural
network architectures, tasks, and datasets. | Bum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Sang Woo Kim | 2023-02-07T01:56:09Z | http://arxiv.org/abs/2302.03193v1 | # On the Ideal Number of Groups for Isometric Gradient Propagation
###### Abstract
Recently, various normalization layers have been proposed to stabilize the training of deep neural networks. Among them, group normalization is a generalization of layer normalization and instance normalization by allowing a degree of freedom in the number of groups it uses. However, to determine the optimal number of groups, trial-and-error-based hyperparameter tuning is required, and such experiments are time-consuming. In this study, we discuss a reasonable method for setting the number of groups. First, we find that the number of groups influences the gradient behavior of the group normalization layer. Based on this observation, we derive the ideal number of groups, which calibrates the gradient scale to facilitate gradient descent optimization. Our proposed number of groups is theoretically grounded, architecture-aware, and can provide a proper value in a layer-wise manner for all layers. The proposed method exhibited improved performance over existing methods in numerous neural network architectures, tasks, and datasets.
## 1 Introduction
Deep neural networks have recently shown significant performance in various fields. Despite their current success, in the past, deep neural networks were known to be difficult to train. To stabilize the training of a deep neural network, normalization layers, such as batch normalization (Ioffe and Szegedy, 2015), have been proposed. Normalization layers have addressed the difficulty in the optimization of deep neural networks and are used in most deep neural networks at present.
Other widely used normalization layers include layer normalization (Ba et al., 2016), instance normalization (Ulyanov et al., 2016), and group normalization (Wu and He, 2020). These behave similarly in that they apply mean and standard deviation (std) normalization and an affine transform. The difference lies in the units used for computing the mean and std. For example, for \(n\) features, layer normalization computes a single mean and std for normalization, whereas instance normalization computes \(n\) means and stds. Meanwhile, group normalization partitions \(n\) features into \(G\) groups to compute \(G\) means and stds. From this perspective, layer normalization is a special case of group normalization for \(G=1\), and instance normalization is a special case of group normalization for \(G=n\). Thus, group normalization is more general and offers an additional degree of freedom in the choice of the number of groups. Fixing this number to a specific value risks suboptimality, which leaves room for choosing a more appropriate number of groups to further improve performance.
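To make this relationship concrete, the short PyTorch sketch below (our own illustration, not code from the paper) checks numerically that group normalization with \(G=1\) reproduces layer normalization and that \(G=n\) (one group per channel) reproduces instance normalization; the tensor shape and tolerance are arbitrary choices.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, C, H, W = 4, 8, 5, 5                     # arbitrary batch/channel/spatial sizes
x = torch.randn(N, C, H, W)

# G = 1: one group spanning all channels, i.e. layer normalization over (C, H, W).
gn_as_ln = nn.GroupNorm(num_groups=1, num_channels=C, affine=False)
ln = nn.LayerNorm(normalized_shape=[C, H, W], elementwise_affine=False)
print(torch.allclose(gn_as_ln(x), ln(x), atol=1e-5))     # expected: True

# G = C: one group per channel, i.e. instance normalization.
gn_as_in = nn.GroupNorm(num_groups=C, num_channels=C, affine=False)
inorm = nn.InstanceNorm2d(C, affine=False)
print(torch.allclose(gn_as_in(x), inorm(x), atol=1e-5))  # expected: True
```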
The setting of the number of groups is also mentioned in the original paper on group normalization (Wu and He, 2020). By experimenting with several trials with \(G=1,~{}2,~{}4,~{}\cdots\), they evaluated the ImageNet accuracy. They observed low accuracy at both extremes of \(G=1\) and \(G=n\). In particular, they empirically found the highest accuracy at \(G=32\) and recommended this as the default value for the number of groups in group normalization. Accordingly, various studies using group normalization have employed \(G=32\)(Kirillov et al., 2019; Ho et al., 2020; Zhu et al., 2021; Yang et al., 2019).
However, this approach to setting the number of groups has several problems. First, the corresponding number of groups lacks theoretical validation. This is because the claim that \(G=32\) yields the highest performance is confirmed only through empirical observations. Second, the neural network architecture is not considered. When a different architecture is employed, there is a possibility that \(G=32\) is suboptimal; therefore, hyperparameter tuning by trial and error is required again to set the optimal number of groups. For example, Dai et al. (2021) used \(G=16\), whereas Michalski et al. (2019) used \(G=4\). Furthermore, Song et al.
[2021] designed a neural network with a different number of groups for each layer. Training a deep neural network is time-consuming, and many hyperparameters already exist; employing an additional hyperparameter leads to a significantly high processing cost [15, 16, 17]. Third, \(G=32\) is not guaranteed to be optimal for all group normalization layers in a deep neural network using tens or hundreds of layers. In other words, since the optimal number of groups can be different for each layer, the number of groups in a layer-wise manner \(G^{l}\) should be considered.
In this study, we propose an appropriate method for determining the number of groups. First, we theoretically analyze the effect of the number of groups on the back-propagation of group normalization. In this regard, we consider a gradient condition that facilitates the training of the neural network and derive the ideal number of groups that satisfies the gradient condition. Second, we show that the ideal number of groups we derived is affected by the width of the neural network. Hence, the ideal number of groups exhibits different values depending on the number of input and output features in the neural network architecture. Third, we demonstrate that the ideal number of groups varies for each layer. In summary, for setting the number of groups, we propose a reasonable method that is theoretically grounded, architecture-aware, and able to provide a proper value for all layers in a layer-wise manner.
For the application of the ideal number of groups, we propose the practical number of groups and apply it to several training experiments on deep neural networks. The proposed practical number of groups demonstrated higher performance in various tasks, architectures, and datasets.
## 2 Ideal Number of Groups
### Theoretical Analysis
**Notation.** In this paper, we use the notations \(E[x_{i}]\) and \(Var[x_{i}]\) to denote the mean and variance computed along the feature (\(i\)) axis. We do not use the sample variance.1
Footnote 1: Some libraries apply Bessel’s correction by default when measuring the variance. To obtain correct results such as those in Table 1, it should be turned off. For example, in PyTorch, torch.var(input, unbiased=False) should be used to apply a biased estimator. In fact, unbiased=False is specified when torch.nn.GroupNorm() measures the standard deviation.
**Formulation.** Consider a unit block that consists of a weight layer, group normalization, and ReLU activation function (Figure 1). First, we denote \(n_{in}^{l}\)-dimensional input features in the \(l\)-th block as \(\mathbf{x}^{l}=\left(x_{1}^{l},\;x_{2}^{l},\;\cdots,\;x_{n_{in}^{l}}^{l}\right)\). The weight in the \(l\)-th block is denoted as \(W^{l}\in R^{n_{in}^{l}\times n_{out}^{l}}\). We assume zero-mean weights, as Glorot and Bengio [2010] and He et al. [2015] did. The weight layer produces output feature \(\mathbf{y}^{l}\), where
\[y_{j}^{l}=\sum_{i=1}^{n_{in}^{l}}W_{ij}^{l}x_{i}^{l}. \tag{1}\]
Now, group normalization with the number of groups \(G^{l}\) is applied to \(\mathbf{y}^{l}\) to produce
\[z_{(s-1)n_{g}^{l}+j}^{l}=\frac{y_{(s-1)n_{g}^{l}+j}^{l}-\mu_{s}^{l}}{\sqrt{( \sigma_{s}^{l})^{2}}}, \tag{2}\]
where
\[\mu_{s}^{l} =\frac{1}{n_{g}^{l}}\sum_{j=1}^{n_{g}^{l}}y_{(s-1)n_{g}^{l}+j}^{l}, \tag{3}\] \[(\sigma_{s}^{l})^{2} =\frac{1}{n_{g}^{l}}\sum_{j=1}^{n_{g}^{l}}(y_{(s-1)n_{g}^{l}+j}^{l }-\mu_{s}^{l})^{2},\] (4) \[n_{g}^{l} =\frac{n_{out}^{l}}{G^{l}}, \tag{5}\]
for \(s=1,\;2,\;\cdots,\;G^{l}\). The above equations mean partitioning \(n_{out}^{l}\) features into \(G^{l}\) groups and normalizing features in each group using the corresponding mean \(\mu_{s}^{l}\) and std \(\sigma_{s}^{l}\). In group normalization, an affine transform of \(\gamma z_{k}^{l}+\beta\) is additionally used. Following De and Smith [2020] and Zhang et al. [2019], we use the default values, _i.e_., \(\gamma=1\) and \(\beta=0\). Finally, the activation function \(f\) results in the next feature at the \((l+1)\)-th block:
\[x_{i}^{l+1}=f(z_{k}^{l}). \tag{6}\]
In summary, we obtain \(\mathbf{x}^{l+1}\) from \(\mathbf{x}^{l}\) by passing a unit block. Here, we consider the following property of the unit block.
Figure 1: Illustration of the unit block.
**Definition 1**: _A unit block mapping \(\mathbf{x}^{l}\) to \(\mathbf{x}^{l+1}\) is isometric with respect to gradient propagation if_
\[Var\Bigg{[}\frac{\partial L}{\partial x_{i}^{l+1}}\Bigg{]}=Var \Bigg{[}\frac{\partial L}{\partial x_{i}^{l}}\Bigg{]}, \tag{7}\]
_where \(L\) denotes a loss function._
This property ensures that the gradient scale is the same in both layers, which prevents unstable optimization due to exploding and vanishing gradients during gradient descent. For example, if the two variances are 10 and 1, this implies an imbalance in the gradient scale, which leads to an unstable optimization in the gradient descent. To stabilize the optimization, it is desirable to obtain the same or the most similar gradient scale. Note that the imbalance in the gradient scale is accumulated by passing tens or hundreds of unit blocks, which results in an exploding or vanishing gradient. So we aim to ensure that each unit block is isometric with respect to gradient propagation (Section 4). This property was also the objective of Glorot and Bengio (2010); He et al. (2015), and Klambauer et al. (2017). In the remainder of our paper, unless specified otherwise, we use the term isometricity to discuss gradient propagation, not forward propagation.
Here, we claim that the number of groups affects the gradient variance. Our goal is to determine the number of groups that induces isometric gradient propagation of the unit block. We call this value the ideal number of groups \(G_{ideal}^{l}\).
**Gradient propagation on weight layer.** First, from Eq. 1, note that an input feature \(x_{i}^{l}\) affects all output features \(y_{1}^{l},\ y_{2}^{l},\ \cdots,\ y_{n_{out}^{l}}^{l}\). From the chain rule for partial derivatives, we have
\[\frac{\partial L}{\partial x_{i}^{l}}=\sum_{j=1}^{n_{out}^{l}} \frac{\partial L}{\partial y_{j}^{l}}\frac{\partial y_{j}^{l}}{\partial x_{i }^{l}}. \tag{8}\]
By computing the variance, we see that \(n_{out}^{l}\) components affect the variance:
\[Var\Bigg{[}\frac{\partial L}{\partial x_{i}^{l}}\Bigg{]}=n_{out }^{l}Var\left[W^{l}\right]Var\Bigg{[}\frac{\partial L}{\partial y_{j}^{l}} \Bigg{]}. \tag{9}\]
**Gradient propagation on group normalization.** Second, we investigate the backward propagation of group normalization. Notably, the gradients propagate only within the group. Consider the case in which a feature \(y_{j}^{l}\) belongs to the \(r\)-th group. By Eqs. 3 and 4, we have
\[\frac{\partial\mu_{r}^{l}}{\partial y_{j}^{l}} =\frac{1}{n_{g}^{l}}, \tag{10}\] \[\frac{\partial[(\sigma_{r}^{l})^{2}]}{\partial y_{j}^{l}} =\frac{2(y_{j}^{l}-\mu_{r}^{l})}{n_{g}^{l}}. \tag{11}\]
From Eq. 2, we find that the partial derivative differs depending on whether the index matches. We consider two cases:
\[\frac{\partial z_{k}^{l}}{\partial y_{j}^{l}}=\frac{1}{\sigma_{r}^{l}}\left(1-\frac{1}{n_{g}^{l}}\left(1+(z_{k}^{l})^{2}\right)\right),\quad\text{if }k=j, \tag{12}\]
\[\frac{\partial z_{k}^{l}}{\partial y_{j}^{l}}=-\frac{1}{\sigma_{r}^{l}}\,\frac{1}{n_{g}^{l}}\left(1+z_{k}^{l}z_{j}^{l}\right),\quad\text{if }k\neq j. \tag{13}\]
From the chain rule for partial derivatives, we obtain
\[\frac{\partial L}{\partial y_{j}^{l}} =\sum_{k=1}^{n_{g}^{l}}\frac{\partial L}{\partial z_{k}^{l}} \frac{\partial z_{k}^{l}}{\partial y_{j}^{l}} \tag{14}\] \[=T_{1}-T_{2}-T_{3}, \tag{15}\]
where
\[T_{1} =\frac{1}{\sigma_{r}^{l}}\left(\frac{\partial L}{\partial z_{k}^ {l}}\right), \tag{16}\] \[T_{2} =\frac{1}{\sigma_{r}^{l}}\frac{1}{n_{g}^{l}}\left(\sum_{k=1}^{n_{ g}^{l}}\frac{\partial L}{\partial z_{k}^{l}}\right),\] (17) \[T_{3} =\frac{1}{\sigma_{r}^{l}}\frac{1}{n_{g}^{l}}z_{j}^{l}\left(\sum_{ k=1}^{n_{g}^{l}}z_{k}^{l}\frac{\partial L}{\partial z_{k}^{l}}\right). \tag{18}\]
By computing the variance, we have
\[Var\left[T_{1}\right] =\frac{1}{(\sigma_{s}^{l})^{2}}Var\Bigg{[}\frac{\partial L}{ \partial z_{k}^{l}}\Bigg{]}, \tag{19}\] \[Var\left[T_{2}\right] =\frac{1}{(\sigma_{s}^{l})^{2}}\frac{1}{n_{g}^{l}}Var\Bigg{[}\frac {\partial L}{\partial z_{k}^{l}}\Bigg{]},\] (20) \[Var\left[T_{3}\right] =\frac{1}{(\sigma_{s}^{l})^{2}}\frac{3}{n_{g}^{l}}Var\Bigg{[}\frac {\partial L}{\partial z_{k}^{l}}\Bigg{]}. \tag{21}\]
The third equation holds because \(E[(z_{k}^{l})^{2}\frac{\partial L}{\partial z_{k}^{l}}]=0\) and \(E[(z_{k}^{l})^{4}]=3\) for normalized feature \(z_{k}^{l}\). We denote the \(s\)-th group to represent an arbitrary group. The variance is computed across all features, not the features within a group. Summarizing Eqs. 19-21, we obtain
\[Var\Bigg{[}\frac{\partial L}{\partial y_{j}^{l}}\Bigg{]}=\frac{1}{(\sigma_{s}^ {l})^{2}}\Bigg{(}1+\frac{4}{n_{g}^{l}}\Big{)}Var\Bigg{[}\frac{\partial L}{ \partial z_{k}^{l}}\Bigg{]}. \tag{22}\]
Note that the number of groups \(G^{l}\) is involved in here because \(n_{g}^{l}=\frac{n_{out}^{l}}{G^{l}}\). Thus, the number of groups affects the gradient propagation on the group normalization layer. We exploit this fact as a key to configure the unit block to the state that is closest to isometric.
**Gradient propagation on activation function.** Here, we investigate the activation function. To derive the variance around the activation function, we introduce the following two properties.
**Definition 2**: _Assume a random variable \(X\sim\mathcal{N}(0,\ \sigma^{2})\) and an arbitrary random variable \(Y\). For a given activation function \(f\), we define forward activation gain \(F_{f,\sigma}\) and backward activation gain \(B_{f,\sigma}\) as follows:_
\[F_{f,\sigma} =\frac{E[(f(X))^{2}]}{Var[X]}, \tag{23}\] \[B_{f,\sigma} =\frac{Var[f^{\prime}(X)Y]}{Var[Y]}. \tag{24}\]
In particular, if \(E[Y]=0\), we have \(B_{f,\sigma}=E[(f^{\prime}(X))^{2}]\).
**Remark 1**: _If \(f(x)=\mathrm{ReLU}(x)=\max(0,\ x)\), we have \(F_{f,\sigma}=B_{f,\sigma}=\frac{1}{2}\)._
Especially for ReLU, the two gains are independent of \(\sigma\). However, for the other activation functions, the two gains can vary depending on \(\sigma\) (Section 2.3).
Now we investigate the variance around the activation function. By Eq. 6, we see that
\[\frac{\partial L}{\partial z_{k}^{l}}=\frac{\partial L}{\partial x_{i}^{l+1}} \frac{\partial x_{i}^{l+1}}{\partial z_{k}^{l}}=\frac{\partial L}{\partial x_ {i}^{l+1}}f^{\prime}(z_{k}^{l}). \tag{25}\]
Thus,
\[Var\Bigg{[}\frac{\partial L}{\partial z_{k}^{l}}\Bigg{]}=B_{f,\sigma}Var \Bigg{[}\frac{\partial L}{\partial x_{i}^{l+1}}\Bigg{]}. \tag{26}\]
In addition, investigating the forward propagation of the \((l-1)\)-th block,
\[(\sigma_{s}^{l})^{2} =Var\left[y_{j}^{l}\right]=n_{in}^{l}Var\left[W^{l}f(z_{i}^{l-1})\right] \tag{27}\] \[=n_{in}^{l}E\left[(W^{l})^{2}\right]E\left[(f(z^{l-1}))^{2}\right]\] (28) \[=n_{in}^{l}F_{f,\sigma}Var\left[W^{l}\right]. \tag{29}\]
**Gradient propagation on unit block.** Finally, from Eqs. 9, 22, and 26, we obtain the gradient equation from \(\mathbf{x}^{l}\) to \(\mathbf{x}^{l+1}\):
\[Var\Bigg{[}\frac{\partial L}{\partial x_{i}^{l}}\Bigg{]}=\frac{n_{out}^{l}}{ n_{in}^{l}}\Bigg{(}1+\frac{4}{n_{g}^{l}}\Bigg{)}Var\Bigg{[}\frac{\partial L}{ \partial x_{i}^{l+1}}\Bigg{]}. \tag{30}\]
Let \(K(G^{l})\) be the ratio of two variances as
\[K(G^{l})=\frac{n_{out}^{l}}{n_{in}^{l}}\Bigg{(}1+\frac{4}{n_{g}^{l}}\Bigg{)} =\frac{n_{out}^{l}+4G^{l}}{n_{in}^{l}}. \tag{31}\]
When \(K(G^{l})=1\), the unit block is isometric with respect to gradient propagation. Thus, our goal is to find the number of groups \(G^{l}\) that satisfies \(K(G^{l})=1\). Ideally, this condition can be satisfied if the group normalization has
\[G_{ideal}^{l}=\frac{n_{in}^{l}-n_{out}^{l}}{4}, \tag{32}\]
which we call the ideal number of groups. Interestingly, the ideal number of groups depends on the architecture of the neural network, especially the number of input and output features of the weight layer. For example, for an \(l\)-th unit block with 128 input features and 64 output features, applying \(G^{l}=16\) provides isometricity of the unit block. The use of this formula allows us to set an appropriate number of groups on a given layer or architecture without any tuning experiments. Intuitively, it is desirable to have a different number of groups depending on each width. In other words, it is unnatural to apply \(G^{l}=32\) to all layers, regardless of the variety of width of the deep neural network.
However, the ideal number of groups may not be applicable in a practical scenario depending on \(n_{in}^{l}\) and \(n_{out}^{l}\). For example, if \(n_{in}^{l}=n_{out}^{l}=128\), then \(G_{ideal}^{l}=0\), which cannot be employed in group normalization. Note that group normalization is applied to \(n_{out}^{l}\) features. Thus, 1) the number of groups should have lower and upper bounds of \(1\leq G_{ideal}^{l}\leq n_{out}^{l}\), and 2) \(n_{out}^{l}\) should be divisible by an integer \(G_{ideal}^{l}\). Considering this, we seek a practical number of groups.
**Definition 3**: _Let \([n_{out}^{l}]\) be the divisor set of \(n_{out}^{l}\). Find \(G^{l}\in[n_{out}^{l}]\) where \(K(G^{l})\) is closest to 1. We refer to the result as the practical number of groups \(G_{practical}^{l}\)._
The objective of finding the practical number of groups is to configure the unit block to the state that is closest to isometric, facilitating gradient descent optimization. We consider three cases:
Case 1. If \(n_{in}^{l}\leq n_{out}^{l}\), then \(K(G^{l})\) is always greater than one. Thus we seek \(G^{l}\) that yields the smallest \(K(G^{l})\). Because \(K(G^{l})\) increases linearly with \(G^{l}\), we choose the smallest \(G^{l}\), _i.e_., \(G_{practical}^{l}=1\). In fact, this is equivalent to applying a lower bound to the ideal number of groups.
Case 2. If \(n_{in}^{l}\geq 5n_{out}^{l}\), we have \(K(G^{l})=(n_{out}^{l}+4G^{l})/n_{in}^{l}\leq 1\). We need to find \(G^{l}\) that results in the highest \(K(G^{l})\). Thus, we choose \(G_{practical}^{l}=n_{out}^{l}\). Similarly, this is equivalent to applying an upper bound to the ideal number of groups.
Case 3. If \(n_{out}^{l}<n_{in}^{l}<5n_{out}^{l}\), then we choose the number of groups in the divisor set \([n_{out}^{l}]\) that is closest to the ideal number of groups.
From the above analyses, we conclude with the following theorem.
**Theorem 1**: _Algorithm 1 yields the practical number of groups \(G_{practical}^{l}\)._
For example, if \(n_{in}^{l}=n_{out}^{l}=128\), by Case 1, we choose \(G_{practical}^{l}=1\). In this scenario, because \(K(G^{l})=1+4G^{l}/128\), we obtain \(K(1)=1.03125\). However, choosing a different number of groups such as 32 results in \(K(32)=2\), which results in an imbalance in the gradient scale.
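Since Algorithm 1 itself is not reproduced here, the following Python sketch implements the three cases above as we read them; the function name and interface are our own, and it assumes the ReLU setting \(B_{f,\sigma}/F_{f,\sigma}=1\) used throughout this section.

```python
def practical_num_groups(n_in: int, n_out: int) -> int:
    """Pick G from the divisor set of n_out so that K(G) = (n_out + 4G) / n_in
    is as close to 1 as possible (Definition 3)."""
    if n_in <= n_out:            # Case 1: K(G) > 1 for every G, so take the smallest G
        return 1
    if n_in >= 5 * n_out:        # Case 2: K(G) <= 1 for every G, so take the largest G
        return n_out
    # Case 3: divisor of n_out closest to the ideal value (n_in - n_out) / 4
    g_ideal = (n_in - n_out) / 4.0
    divisors = [g for g in range(1, n_out + 1) if n_out % g == 0]
    return min(divisors, key=lambda g: abs(g - g_ideal))

# Examples matching the text (under our reading of the three cases):
print(practical_num_groups(128, 128))   # -> 1
print(practical_num_groups(128, 64))    # -> 16
print(practical_num_groups(784, 512))   # -> 64, the MNIST MLP setting of Section 3.1
```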
### Empirical Validation
In this section, we test the validity of our derivation. We target Eqs. 9, 22, 26, and 30, which are our main results. The four equations are rewritten as follows:
\[Var\Bigg{[}\frac{\partial L}{\partial x_{i}^{l}}\Bigg{]}/Var \Bigg{[}\frac{\partial L}{\partial y_{j}^{l}}\Bigg{]} =n_{out}^{l}Var\left[W^{l}\right],\] (A) \[(\sigma_{s}^{l})^{2}Var\Bigg{[}\frac{\partial L}{\partial y_{j}^ {l}}\Bigg{]}/Var\Bigg{[}\frac{\partial L}{\partial z_{k}^{l}}\Bigg{]} =1+\frac{4}{n_{g}^{l}},\] (B) \[Var\Bigg{[}\frac{\partial L}{\partial z_{k}^{l}}\Bigg{]}/Var \Bigg{[}\frac{\partial L}{\partial x_{i}^{l+1}}\Bigg{]} =B_{f,\sigma},\] (C) \[Var\Bigg{[}\frac{\partial L}{\partial x_{i}^{l}}\Bigg{]}/Var \Bigg{[}\frac{\partial L}{\partial x_{i}^{l+1}}\Bigg{]} =\frac{n_{out}^{l}}{n_{in}^{l}}\Bigg{(}1+\frac{4}{n_{g}^{l}} \Bigg{)}.\] (D)
First, we empirically measure the left-hand side of Eqs. A to D (Empirical) and compare the results with the right-hand side (Theoretical). We use two unit blocks, where \(W^{1}\in R^{n_{in}\times n_{in}}\) and \(W^{2}\in R^{n_{in}\times n_{out}}\) with unit variance. We generate artificial random data sampled from the standard normal distribution. The random data are provided to the first unit block, and we measure the ratios of variances targeting the second unit block. The loss function \(L\) can be defined as an arbitrary function, and we simply define it as an aggregation of the output features of the second unit block. Four cases of (\(n_{in}^{l}\), \(n_{out}^{l}\), \(G^{l}\)) are tested. Considering randomness, for all results, we provide an average over \(10^{5}\) experiments.
The results are summarized in Table 1. We observed that the empirical ratio of variances matched well with theoretical expectations.
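A rough version of such a check can be reproduced with PyTorch autograd, as sketched below (our own illustration; the paper's exact protocol may differ). It builds two unit blocks with unit-variance weights, attaches a random linear readout so that the gradient at the output is zero-mean, and compares the measured ratio of gradient variances across the second block with the prediction of Eq. D; for the setting \((256, 128, 32)\) both numbers should be close to 1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_in, n_out, G = 256, 128, 32                 # one of the settings from Table 1
batch = 4096                                  # average over samples instead of 1e5 runs

W1 = torch.randn(n_in, n_in)                  # zero-mean, unit-variance weights
W2 = torch.randn(n_in, n_out)
gn1 = nn.GroupNorm(G, n_in, affine=False)
gn2 = nn.GroupNorm(G, n_out, affine=False)

x0 = torch.randn(batch, n_in, requires_grad=True)
x1 = F.relu(gn1(x0 @ W1))                     # output of the first unit block
x1.retain_grad()
x2 = F.relu(gn2(x1 @ W2))                     # output of the second unit block
x2.retain_grad()

readout = torch.randn_like(x2)                # random readout -> zero-mean dL/dx2
loss = (x2 * readout).sum()
loss.backward()

empirical = x1.grad.var(unbiased=False) / x2.grad.var(unbiased=False)
theoretical = (n_out / n_in) * (1 + 4 * G / n_out)   # Eq. D with n_g = n_out / G
print(float(empirical), theoretical)
```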
### Other Activation Functions
In this section, we investigate whether the ideal number of groups is applicable for other activation functions. First, consider \(\mathrm{PReLU}(x)=\max(0,\ x)+a\min(0,\ x)\) with scalar \(a\), which is a generalization of ReLU and LeakyReLU [10, 11]. We know that
\[E[(f(X))^{2}] =\int_{-\infty}^{\infty}{(f(x))^{2}p(x)dx} \tag{33}\] \[=\int_{-\infty}^{0}{(ax)^{2}p(x)dx}+\int_{0}^{\infty}{x^{2}p(x)dx}\] (34) \[=\frac{1+a^{2}}{2}E[X^{2}], \tag{35}\]
where \(p(x)\) denotes the probability density function of \(X\). Similarly, \(E[(f^{\prime}(X))^{2}]=\frac{1+a^{2}}{2}\). Thus, PReLU has \(F_{f,\sigma}=B_{f,\sigma}=\frac{1+a^{2}}{2}\). Remark 1 in Section 2.1 can be explained by \(a=0\). Moreover, the PReLU family guarantees consistent gain.
**Remark 2**: _The forward and backward activation gains do not vary by \(\sigma\) if and only if the activation function \(f\) is homogeneous \(f(kx)=kf(x)\), or \(f(kx)=-kf(x)\) for a scalar \(k\neq 0\)._
See the Appendix for a detailed proof. For example,
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \multirow{2}{*}{(\(n_{in}^{l}\), \(n_{out}^{l}\), \(G^{l}\))} & \multicolumn{2}{c|}{Eq. A} & \multicolumn{2}{c|}{Eq. B} & \multicolumn{2}{c|}{Eq. C} & \multicolumn{2}{c}{Eq. D} \\ & Empirical & Theoretical & Empirical & Theoretical & Empirical & Theoretical & Empirical & Theoretical \\ \hline (1024, 512, 128) & 511.236 & 512 & 1.979 & 2 & 0.500 & 0.5 & 0.995 & 1 \\ (512, 256, 64) & 255.160 & 256 & 1.959 & 2 & 0.499 & 0.5 & 0.989 & 1 \\ (256, 128, 32) & 127.189 & 128 & 2.015 & 2 & 0.500 & 0.5 & 1.032 & 1 \\ (128, 64, 16) & 63.185 & 64 & 1.900 & 2 & 0.500 & 0.5 & 0.992 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: Empirical validation of Eqs. A to D. The results were in agreement with the theoretical expectations.
\(\mathrm{ReLU}(x)\) and its mirror \(-\mathrm{ReLU}(x)\) share two gains that are independent of \(\sigma\).
Remark 2 implies that the forward and backward gains vary by \(\sigma\) for other activation functions, such as SiLU and ELU [Ramachandran et al., 2018, Elfwing et al., 2018, Clevert et al., 2016]. Furthermore, their nonlinear exponential terms make it difficult to compute the exact solution of the forward and backward activation gains.
Alternatively, we provide empirical values for these two gains. We generate \(10^{7}\) samples of \(X\sim\mathcal{N}(0,\ \sigma^{2})\) and measure the forward and backward activation gains on well-known activation functions [Hendrycks and Gimpel, 2016, Klambauer et al., 2017, Zheng et al., 2015, Elliott, 1993]. Here, we list the results for \(\sigma\) of \(\{0.1,\ 1,\ 10\}\) with various activation functions (Table 2).
Note that \(\frac{B_{f,\sigma}}{F_{f,\sigma}}=1\) is used in Eq. 30 assuming ReLU. Some activation functions, such as ReLU, PReLU, GELU, SiLU, ELU, and SELU, yielded a value near 1; thus, the practical number of groups can be safely used with these activation functions. However, \(\frac{B_{f,\sigma}}{F_{f,\sigma}}\) from other activation functions, such as Sigmoid, Tanh, Softplus, Softsign, and LogSigmoid, were far from 1. If we consider this, for an arbitrary activation function, the ideal number of groups should be \(G_{ideal}^{l}=(\frac{F_{f,1}}{B_{f,1}}n_{in}^{l}-n_{out}^{l})/4\). See the Appendix for more results and discussions.
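The gains in Table 2 can be re-estimated by straightforward Monte Carlo sampling. The sketch below is our own (the sample size and the set of activations are arbitrary choices) and uses the simplification \(B_{f,\sigma}=E[(f^{\prime}(X))^{2}]\), which holds when the downstream gradient \(Y\) has zero mean.

```python
import torch

def activation_gains(f, sigma, n_samples=2_000_000):
    """Monte Carlo estimates of F_{f,sigma} = E[f(X)^2] / Var[X] and
    B_{f,sigma} ~= E[f'(X)^2] for X ~ N(0, sigma^2)."""
    x = torch.randn(n_samples, dtype=torch.float64) * sigma
    x.requires_grad_(True)
    y = f(x)
    (dfdx,) = torch.autograd.grad(y.sum(), x)        # element-wise f'(x)
    forward_gain = (y.detach() ** 2).mean() / x.detach().var(unbiased=False)
    backward_gain = (dfdx ** 2).mean()
    return forward_gain.item(), backward_gain.item()

for name, f in [("ReLU", torch.relu), ("Tanh", torch.tanh), ("Sigmoid", torch.sigmoid)]:
    F_gain, B_gain = activation_gains(f, sigma=1.0)
    print(f"{name:8s} F={F_gain:.3f}  B={B_gain:.3f}  B/F={B_gain / F_gain:.3f}")
```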
## 3 Experiments
### Image Classification with MLP
In this section, we aim to observe the performance differences of neural networks with different settings for the number of groups. We start with a simple task and then
\begin{table}
\begin{tabular}{l|c c c c c c|c c c c c} \hline \hline Measurement & ReLU & PReLU & GELU & SiLU & ELU & SELU & Sigmoid & Tanh & Softplus & Softsign & LogSigmoid \\ \hline \(F_{f,0.1}\) & 0.500 & 0.531 & 0.255 & 0.252 & 0.928 & 1.876 & 25.079 & 0.981 & 48.438 & 0.751 & 48.495 \\ \(B_{f,0.1}\) & 0.500 & 0.531 & 0.256 & 0.253 & 0.929 & 1.879 & 0.062 & 0.981 & 0.251 & 0.757 & 0.251 \\ \(B_{f,0.1}/F_{f,0.1}\) & 1.000 & 1.000 & 1.006 & 1.002 & 1.001 & 1.001 & 0.002 & 1.000 & 0.005 & 1.007 & 0.005 \\ \hline \(F_{f,1}\) & 0.500 & 0.532 & 0.425 & 0.356 & 0.645 & 1.000 & 0.293 & 0.394 & 0.921 & 0.183 & 0.921 \\ \(B_{f,1}\) & 0.500 & 0.532 & 0.456 & 0.379 & 0.668 & 1.071 & 0.045 & 0.464 & 0.293 & 0.228 & 0.293 \\ \(B_{f,1}/F_{f,1}\) & 1.000 & 1.000 & 1.072 & 1.067 & 1.036 & 1.072 & 0.153 & 1.178 & 0.318 & 1.245 & 0.319 \\ \hline \(F_{f,10}\) & 0.500 & 0.531 & 0.500 & 0.499 & 0.504 & 0.565 & 0.005 & 0.009 & 0.501 & 0.007 & 0.501 \\ \(B_{f,10}\) & 0.500 & 0.531 & 0.506 & 0.507 & 0.520 & 0.613 & 0.007 & 0.053 & 0.461 & 0.026 & 0.461 \\ \(B_{f,10}/F_{f,10}\) & 1.000 & 1.000 & 1.011 & 1.017 & 1.032 & 1.085 & 1.434 & 5.780 & 0.920 & 3.908 & 0.921 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Empirical values for the forward and backward activation gains. For PReLU, slope \(a=0.25\) was used. The six activation functions on the left consistently yielded \(B_{f,\sigma}/F_{f,\sigma}\approx 1\), while the five activation functions on the right did not.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c} \hline \hline \(G^{1}\) & 1 & 2 & 4 & 8 & 16 & 32 & 64\({}^{*}\) & 128 & 256 & 512 \\ \hline Error & 1.827 & 1.720 & 1.773 & 1.730 & 1.727 & 1.720 & **1.670** & 1.753 & 3.240 & 88.650 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test error (%) for MNIST classification. Lower is better. \({}^{*}\) indicates the practical number of groups.
Figure 2: The learning curve of MLP for each number of groups.
proceed to other large-scale tasks. First, we trained multilayer perceptrons (MLPs) for MNIST image classification. The simplicity of this task allows us to experiment with extensive numbers of groups. We used a two-layer MLP of 512 hidden nodes with group normalization and ReLU. Because \(n^{1}_{in}=784\) for MNIST data and \(n^{1}_{out}=512\), we have \(G^{l}_{ideal}=68\) and thus \(G^{1}_{practical}=64\).
An average over three runs is reported for each result (Table 3). The highest accuracy was found at \(G^{1}=64\), which corresponds to \(G^{1}_{practical}\). Other settings, such as \(G^{1}=32\) and \(G^{1}=1\), worked reasonably well but left room for improvement. Figure 2 shows the learning curve of the MLP for each number of groups. Because all other conditions are identical, the choice of the number of groups alone affected the training-loss curve, with \(G^{1}=64\) yielding faster convergence than the other settings. We conclude that the improved accuracy came from the improved optimization achieved by choosing \(G^{1}=64\).
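The model used in this experiment is small enough to restate as code. The sketch below reflects our reading of the setup (two linear layers, 512 hidden units, group normalization before ReLU); the class name is ours and the training loop is omitted.

```python
import torch.nn as nn

class GroupNormMLP(nn.Module):
    """Two-layer MLP for 28x28 MNIST images; num_groups=64 corresponds to the
    practical number of groups for (n_in, n_out) = (784, 512)."""
    def __init__(self, num_groups: int = 64, hidden: int = 512, num_classes: int = 10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, hidden),
            nn.GroupNorm(num_groups, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.body(x)
```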
### Image Classification with Resnet
Second, we conducted experiments on convolutional neural networks (CNNs) for image classification. We targeted ResNet [11], which is a standard model used for image classification. After replacing the existing batch normalization layers with group normalization, we compared the performance with different numbers of groups. Because ResNet uses tens or hundreds of layers and is computationally expensive to train, we compared performance from three settings of the number of groups: \(G^{l}=32\) as the default value of group normalization, \(G^{l}=1\) corresponding to layer normalization, and \(G^{l}=G^{l}_{practical}\) we proposed.
We targeted two datasets: Oxford-IIIT Pet and Caltech-101 [13, 14]. The Oxford-IIIT Pet dataset includes 7K pet images of 37 classes, and the Caltech-101 dataset includes 9K object images of 101 classes with a background category. See the Appendix for details on the experiments, such as the hyperparameters used. An average over three runs is reported for each result (Table 4).
We observed that when \(G^{l}=G^{l}_{practical}\) was employed, it achieved a higher accuracy than when \(G^{l}=32\) or \(G^{l}=1\). The performance improvement was consistently confirmed in the two datasets and ResNet-\(\{50,\ 101\}\).
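Applying the layer-wise rule to a ResNet requires knowing the fan-in and fan-out of the convolution feeding each normalization layer. One possible way to automate the replacement for a torchvision ResNet is sketched below; this is our own utility, it assumes that in these models every BatchNorm2d is registered immediately after its convolution, and the helper mirrors the practical rule from Section 2.1.

```python
import torch.nn as nn
from torchvision.models import resnet50

def practical_num_groups(n_in: int, n_out: int) -> int:
    # Compact restatement of the practical rule (see Section 2.1).
    if n_in <= n_out:
        return 1
    if n_in >= 5 * n_out:
        return n_out
    g_ideal = (n_in - n_out) / 4.0
    return min((g for g in range(1, n_out + 1) if n_out % g == 0),
               key=lambda g: abs(g - g_ideal))

def replace_bn_with_gn(model: nn.Module) -> nn.Module:
    """Swap every BatchNorm2d for a GroupNorm whose number of groups follows the
    practical rule, using the channel counts of the preceding convolution."""
    last_conv = None
    replacements = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            last_conv = module
        elif isinstance(module, nn.BatchNorm2d) and last_conv is not None:
            g = practical_num_groups(last_conv.in_channels, last_conv.out_channels)
            replacements.append((name, nn.GroupNorm(g, module.num_features)))
    for name, new_module in replacements:     # apply after iterating, not during
        parent = model
        *path, leaf = name.split(".")
        for part in path:
            parent = getattr(parent, part)
        setattr(parent, leaf, new_module)
    return model

model = replace_bn_with_gn(resnet50())
```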
### Panoptic Segmentation with PFPN
Panoptic segmentation is a task that simultaneously solves semantic and instance segmentation [15]. In other words, panoptic segmentation performs both pixel-wise classification and instance delineation. It is a large-scale downstream task that uses a CNN. Here, we focus on the panoptic feature pyramid network (PFPN), one of the representative models employed in the panoptic segmentation task [15]. In addition, the PFPN originally exploited group normalization with \(G^{l}=32\). We compare the performance of the PFPN for \(G^{l}=32\), \(G^{l}=1\), and \(G^{l}=G^{l}_{practical}\).
The COCO-panoptic dataset [10], which includes 80 labeled thing classes and 53 stuff classes, was used for training and testing. We measured the panoptic quality (\(\mathrm{PQ}\)), a commonly used performance index for the task [15], and its variants \(\mathrm{PQ}^{\mathrm{th}}\) and \(\mathrm{PQ}^{\mathrm{st}}\) for thing and stuff, respectively (Table 5). The use of \(G^{l}=G^{l}_{practical}\) resulted in a higher \(\mathrm{PQ}\) than with \(G^{l}=32\) or \(G^{l}=1\). In particular, neither \(G^{l}=32\) nor \(G^{l}=1\) was clearly superior to the other, whereas \(G^{l}=G^{l}_{practical}\) consistently yielded a higher \(\mathrm{PQ}\) in all three indices.
### Object Detection with Faster R-CNN GN+WS
Qiao et al. [20] suggested that when group normalization is used, improved training is possible when weight standardization is applied. They experimented with a combination of group normalization and weight standardization for various tasks. Motivated by this practice, we tested the application of the practical number of groups for the case of using group normalization and weight standardization.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Setup & \(\mathrm{AP}\) & \(\mathrm{AP}^{50}\) & \(\mathrm{AP}^{75}\) \\ \hline \(G^{l}=32\) & 40.5 & 61.0 & 44.2 \\ \(G^{l}=1\) & 40.4 & 60.9 & 44.3 \\ \(G^{l}=G^{l}_{practical}\) & **40.7** & **61.2** & **44.6** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Object detection results for Faster R-CNN GN+WS, where a higher number is better.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Setup} & \multicolumn{2}{c|}{Oxford-IIIT Pet} & \multicolumn{2}{c}{Caltech-101} \\ & R-50 & R-101 & R-50 & R-101 \\ \hline \(G^{l}=32\) & 22.894 & 24.067 & 24.021 & 22.781 \\ \(G^{l}=1\) & 33.514 & 34.567 & 24.167 & 26.647 \\ \(G^{l}=G^{l}_{practical}\) & **21.119** & **22.924** & **22.368** & **22.247** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test error (%) for image classification. “R” represents ResNet. Lower is better.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Setup & \(\mathrm{PQ}\) & \(\mathrm{PQ}^{\mathrm{th}}\) & \(\mathrm{PQ}^{\mathrm{st}}\) \\ \hline \(G^{l}=32\) & 41.750 & 49.357 & 30.268 \\ \(G^{l}=1\) & 41.461 & 49.688 & 29.043 \\ \(G^{l}=G^{l}_{practical}\) & **42.147** & **49.816** & **30.572** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Panoptic segmentation results, where a higher number is better.
The target model was Faster R-CNN with group normalization and weight standardization (GN+WS), which is an improved variant of Faster R-CNN, where the existing batch normalization layers are replaced with group normalization, and weight standardization is applied in the convolution layers [14, 15]. The target task was object detection, which is a representative downstream task using a CNN. For training and testing, we used the COCO 2017 dataset, which consists of 118K training images, 5K validation images, and 41K test images. Average precision (\(\mathrm{AP}\)), which is a commonly used index, and its variants (\(\mathrm{AP}^{50}\) and \(\mathrm{AP}^{75}\)) at \(\mathrm{IoU}=50\) and \(\mathrm{IoU}=75\) were measured (Table 6). Applying \(G^{l}=G^{l}_{practical}\) resulted in minor but consistent improvements compared to \(G^{l}=32\) or \(G^{l}=1\).
## 4 Discussion
Glorot and Bengio [16] discussed the condition under which the variance becomes equal for forward and backward propagation. Assuming a neural network composed of weight and sigmoid layers, for forward and backward propagation, they derived the following two conditions:
\[n^{l}_{in}Var\left[W^{l}\right]=1,\;n^{l}_{out}Var\left[W^{l}\right]=1. \tag{36}\]
However, when \(n^{l}_{in}\neq n^{l}_{out}\), because both conditions cannot be simultaneously satisfied, they proposed \(Var\left[W^{l}\right]=\frac{2}{n^{l}_{in}+n^{l}_{out}}\) as a compromise to get as close to the two conditions as possible. This is applied at the initialization of the neural network to control the weights to attain the corresponding variance. He et al. [17] proposed another initialization method that considers the use of ReLU. These studies provide several notable points.
**Perfect isometricity is not required.** As mentioned above, it is difficult to equalize the variance in the forward and backward propagation simultaneously. In other words, it is difficult to obtain isometricity with respect to both forward and backward propagations. Glorot and Bengio [16] presented a compromising alternative, and He et al. [17] considered only forward variance. Furthermore, even if the neural network was isometric at initialization, the isometricity would vanish during training. For example, weight decay reduces the weight norm during training, which causes it to lose isometricity. In summary, the initialization method of Glorot and Bengio [16] and He et al. [17] helps in training by making the neural network partially isometric but does not pursue perfect isometricity.
**Their assumptions differ from the architectures of practical neural networks.** Glorot and Bengio [16] assumed that a neural network comprises a combination of weight layers and sigmoid activation functions. For this scenario, they derived the consecutive accumulation of backward variance from the \(l\)-th to \(l^{\prime}\)-th layer as
\[Var\Bigg{[}\frac{\partial L}{\partial x^{l}_{i}}\Bigg{]}=Var\Bigg{[}\frac{ \partial L}{\partial x^{l^{\prime}}_{i}}\Bigg{]}\prod_{m=l}^{l^{\prime}-1}n^{ m}_{out}Var\left[W^{m}\right]. \tag{37}\]
Thus, if Eq. 36 holds for each layer, then Eq. 37 is satisfied, which makes the entire neural network isometric. Similarly, in our paper, we discussed the isometricity of a unit block composed of a weight layer, group normalization, and ReLU activation function. If a neural network is composed of only these unit blocks without any other operations, the use of an ideal number of groups will ensure isometricity for the entire neural network. However, Mishkin and Matas [16] argued that other operations, such as the maxpool operation and other activation functions, should be considered in practice. Because it is difficult to deal with all cases theoretically, they proposed normalizing the variance after empirically measuring it in a neural network. Practical neural networks include various operations such as strided operations and skip or dense connections. When these operations are exploited in conjunction with unit blocks in a neural network, it is difficult to conclude that the isometricity of the unit block guarantees the isometricity of the entire neural network.
Despite these limitations, the initialization methods of Glorot and Bengio [16] and He et al. [17] have been successfully deployed in modern neural networks. The initialization method of He et al. [17] is always specified in various libraries, including torchvision.models, pytorch image models (timm), and MMClassification. Several studies have reported the effectiveness of initialization methods for stable training [13, 14, 15]. These practices imply advocacy of partial isometricity for performance gain rather than opposition due to the limitation of partial isometricity.
In summary, these two viewpoints indicate that training is stabilized even if the neural network is 1) close to an isometric state rather than in a perfect isometric state and 2) isometric only in the local unit block. Similarly, our practical number of groups 1) is different from the ideal number of groups, so it is not a perfect solution and only partially makes it as isometric as possible and 2) guarantees isometricity only for the unit block, not the entire neural network. In other words, our practical number of groups provides only partial isometricity, but it is sufficient to facilitate training.
## 5 Conclusion
In this study, we proposed a practical method for determining the number of groups for group normalization. We stated the limitations of the trial-and-error-based hyperparameter tuning approach for setting the number of groups in group normalization. In this regard, we derived the ideal
number of groups, which is advantageous for gradient descent optimization. Then we proposed the practical number of groups and applied it to various tasks, including image classification, panoptic segmentation, and object detection. We confirmed that the use of the practical number of groups provides improved performance compared to using the other settings for the number of groups.
|
2306.12569 | Trotter error bounds and dynamic multi-product formulas for Hamiltonian
simulation | Multi-product formulas (MPF) are linear combinations of Trotter circuits
offering high-quality simulation of Hamiltonian time evolution with fewer
Trotter steps. Here we report two contributions aimed at making multi-product
formulas more viable for near-term quantum simulations. First, we extend the
theory of Trotter error with commutator scaling developed by Childs, Su, Tran
et al. to multi-product formulas. Our result implies that multi-product
formulas can achieve a quadratic reduction of Trotter error in 1-norm (nuclear
norm) on arbitrary time intervals compared with the regular product formulas
without increasing the required circuit depth or qubit connectivity. The number
of circuit repetitions grows only by a constant factor. Second, we introduce
dynamic multi-product formulas with time-dependent coefficients chosen to
minimize a certain efficiently computable proxy for the Trotter error. We use a
minimax estimation method to make dynamic multi-product formulas robust to
uncertainty from algorithmic errors, sampling and hardware noise. We call this
method Minimax MPF and we provide a rigorous bound on its error. | Sergiy Zhuk, Niall Robertson, Sergey Bravyi | 2023-06-21T21:07:06Z | http://arxiv.org/abs/2306.12569v2 | # Trotter error bounds and dynamic multi-product formulas for Hamiltonian simulation
###### Abstract
Multi-product formulas are linear combinations of Trotter circuits offering high-quality simulation of Hamiltonian time evolution with fewer Trotter steps. Here we report two contributions aimed at making multi-product formulas more viable for near-term quantum simulations. First, we extend the theory of Trotter error with commutator scaling developed by Childs, Su, Tran et al. to multi-product formulas. Our result implies that multi-product formulas can achieve a quadratic reduction of Trotter error on arbitrary time intervals compared with the regular product formulas without increasing the required circuit depth or qubit connectivity. The number of circuit repetitions grows only by a constant factor. Secondly, we introduce dynamic multi-product formulas with time-dependent coefficients chosen to minimize a certain efficiently computable proxy for the Trotter error. Numerical simulations suggest that the error achieved by the dynamic multi-product formulas is close to the optimal one.
## I Introduction
Simulation of Hamiltonian dynamics is one of the most natural use cases for quantum computers. Such simulation can aid researchers in computing electronic structure of large molecules [1; 2], finding energy spectrum of elementary excitations [3], understanding thermalization in closed systems [4], and testing theoretical models of a quantum chaos [5]. It is strongly believed that Hamiltonian dynamics simulation is intractable for conventional classical computers due to an exponentially large dimension of the Hilbert space and highly entangled nature of time-evolved states. In contrast, quantum computers can efficiently simulate Hamiltonian dynamics for any spin or fermion system with few-body interactions [6] or, more generally, for any sparse Hamiltonian [7].
Here we focus on a particular class of quantum simulation algorithms based on product formulas. A product formula is a quantum circuit \(\mathcal{S}(t)\) approximating the evolution operator \(e^{-iHH}\) for a quantum system with a Hamiltonian \(H\) evolved over time \(t\). The circuit \(\mathcal{S}(t)\) is usually chosen as a product of local evolution operators associated with few-body interactions describing the system. As such, product formulas inherit local structure of the underlying system which often obviates the need for long-range entangling gates. For example, simulating a spin chain Hamiltonian with short-range interactions would only require a hardware with the linear qubit connectivity. Furthermore, local structure of product formulas results in improved approximation error bounds that can exploit commutativity of non-overlapping Hamiltonian terms [8]. Most of product formulas also exhibit a periodicity structure such that a few layers of gates simulating a short time step are repeated many times to simulate a longer evolution time. The periodicity structure aids error mitigation methods, such as the Probabilistic Error Cancellation [9; 10], as one has to learn noise models only for a few distinct layers of gates.
High-quality product formulas that accurately approximate Hamiltonian dynamics of practically relevant systems can be very deep. A common approach is to break the desired evolution time \(t\) into \(k\) intervals of length \(t/k\) and apply a product formula \(k\) times with the evolution time \(t/k\). This yields a circuit \(\mathcal{S}(t/k)^{k}\) whose depth scales linearly with \(k\). The required number of time steps \(k\) depends on details of the Hamiltonian, desired approximation error, and the type of a product formula. For example, solving a benchmark simulation problem posed by Childs, Maslov, et al. [11] for a system of 100 qubits would require \(k\geq 5000\) time steps if one uses the fourth-order Trotter product formula, see Section F.3 of [11]. The resulting circuit \(\mathcal{S}(t/k)^{k}\) would be too deep to execute reliably on near-term quantum processors lacking error correction.
Multi-product formulas introduced in the context of quantum computing by Childs and Wiebe [12] and developed further in [13; 14; 15] may provide a more viable path to near-term quantum simulations. The simplest example of a multi-product formula (MPF) is a linear combination \(\sum_{j=1}^{r}c_{j}\mathcal{S}(t/k_{j})^{k_{j}}\), where \(c_{j}\) are real or complex coefficients, \(\mathcal{S}(t)\) is some base product formula, and \(k_{1},\ldots,k_{r}\) is a sequence of integers. The \(j\)-th term in the MPF approximates the desired evolution operator \(e^{-itH}\) by performing \(k_{j}\) steps of the base product formula with the evolution time \(t/k_{j}\). The coefficients \(c_{j}\) can be chosen such that the errors introduced by each circuit \(\mathcal{S}(t/k_{j})^{k_{j}}\) approximately cancel each other enabling high-accuracy simulations with fewer time steps [13].
Refs. [12; 13] envisioned implementing the whole MPF on a quantum computer using the linear combination of unitaries (LCU) method. This yields a simulation algorithm with a favorable asymptotic cost that scales nearly linearly with the evolution time and poly-logarithmically with the desired approximation error [13]. However, this algorithm has not been experimentally demonstrated yet due to a high complexity of LCU circuits.
Here we adopt a simpler implementation of MPFs due to Vazquez, Egger, et al. [14], Rendon, Watkins, and Wiebe [15], see also [16]. These authors realized that implementing the whole MPF on a quantum processor is not needed for certain tasks, such as computing time-evolved expected values of observables [14] or quantum phase estimation [15]. Instead, these tasks can be accomplished by implementing each individual circuit \(\mathcal{S}(t/k_{j})^{k_{j}}\) in the MPF on a quantum device and classical post-processing of the measured data. Such a quantum-classical MPF can be described as a linear combination of density matrices \(\mu(t)=\sum_{j=1}^{r}c_{j}\rho_{k_{j}}(t)\), where \(\rho_{k}(t)\) is a state obtained by applying \(k\) steps of the base product formula to the initial state \(\rho_{in}\) at time \(t=0\). Equivalently, \(\rho_{k}(t)=\mathcal{S}(t/k)^{k}\rho_{in}\mathcal{S}(t/k)^{-k}\). Assuming that \(\mu(t)\) is a good approximation of the exact time-evolved state \(\rho(t)=e^{-itH}\rho_{in}e^{itH}\), the expected value of any observable, \(\mathrm{Tr}(\mathcal{O}\rho(t))\), can be approximated by a linear combination \(\sum_{j=1}^{r}c_{j}\mathrm{Tr}(\mathcal{O}\rho_{k_{j}}(t))\). The latter can be computed classically as long as a quantum processor provides an estimate of the expected values \(\mathrm{Tr}(\mathcal{O}\rho_{k_{j}}(t))\).
To reap the benefits of MPFs one needs to address two points. First, one needs to bound the approximation error \(\|\mu(t)-\rho(t)\|_{1}=\max_{\mathcal{O}}\mathrm{Tr}(\mathcal{O}(\mu(t)-\rho(t)))/\|\mathcal{O}\|\). To the best of our knowledge, such bounds were previously known only in the special case when the number of time steps \(k_{j}\) is sufficiently large so that \(e^{-itH/k_{j}}\approx I\) for all \(j\), see [12; 15]. Here we overcome this limitation by extending the theory of Trotter error with commutator scaling due to Childs, Su, Tran et al. [8] from regular product formulas to MPFs. We show that an MPF can achieve a quadratic reduction of the approximation error compared with the base product formula without increasing the required circuit depth or required qubit connectivity, see Theorem 1 in Section III for a formal statement. To the best of our knowledge, this provides the first rigorous upper bound on the approximation error achieved by MPFs which holds for any evolution time \(t\) and any number of time steps \(k_{j}\). Numerical simulations suggest that the error scaling predicted by Theorem 1 is nearly tight, see Section V. Our main technical innovation is an integral representation of MPFs based on the Euler-Maclaurin formula. It allows us to express the difference \(\mu(t)-\rho(t)\) in terms of nested commutators of the type studied in [8].
Secondly, one needs an efficient method for computing MPF coefficients \(c_{j}\). Ideally, one would like to find optimal coefficients minimizing the error \(\|\mu(t)-\rho(t)\|_{1}\) at any given time \(t\). However, in practice this may not be possible since the exact solution \(\rho(t)\) is unknown. Instead, a common strategy is the polynomial extrapolation method [17]. It works by expanding \(\rho_{k_{j}}(t)\) in powers of \(1/k_{j}\). This gives Taylor series with the \(0\)-th order term \(\rho(t)\) corresponding to the limit \(k_{j}\to\infty\) and higher order terms proportional to positive powers of \(1/k_{j}\). By choosing the coefficients \(c_{j}\) as a solution of a suitable linear system one can ensure that all unwanted lower-order terms in the above expansion cancel each other. This is the approach taken in [12; 13; 14; 15] as well as in our upper bound on the MPF approximation error. However, the polynomial extrapolation method is sub-optimal as it forces the coefficients \(c_{j}\) to be time independent and ignores all structure of the simulated system.
To overcome limitations of the extrapolation approach we propose dynamic multi-product formulas (DMPF). These are MPFs of the form \(\mu(t)=\sum_{j=1}^{r}c_{j}(t)\rho_{k_{j}}(t)\) with time-dependent coefficients \(c_{j}(t)\) chosen to minimize a certain proxy for the error \(\|\mu(t)-\rho(t)\|_{1}\). This proxy depends only on overlaps between Trotter circuits such as \(\mathrm{Tr}(\rho_{k_{i}}(t)\rho_{k_{j}}(t))\) which can be efficiently estimated (with a small additive error) on a quantum computer, see Section IV for details. Our approach can be viewed as a robustification of the simulation algorithm due to Li and Benjamin [18] which is based on McLachlan's variational principle. Numerical simulations suggest that the Frobenius norm approximation error \(\|\mu(t)-\rho(t)\|_{F}\) achieved by DMPFs is very close to the optimal error that can be computed numerically by minimizing \(\|\mu(t)-\rho(t)\|_{F}\) over the coefficients \(c_{j}(t)\) using the least-squares method (the latter does not offer an efficient simulation algorithm since it relies on the knowledge of the exact solution \(\rho(t)\)). In our simulations, the improvement achieved by DMPFs compared with "static" MPFs based on the extrapolation method is about 10X reduction of the approximation error (at best).
The rest of this paper is organized as follows. We provide a necessary background on product formulas and the theory of Trotter error with commutator scaling in Section II, which is largely based on [8]. Multi-product formulas and our bound on the Trotter error (Theorem 1) are stated in Section III. We introduce dynamic multi-product formulas in Section IV. The rigorous upper bound on the Trotter error is compared with numerical simulations in Section V. This section also reports numerical experiments with DMPFs. Appendix A contains the proof of Theorem 1.
## II Product formulas and Trotter error
Suppose \(H\) is a Hamiltonian describing a quantum spin or fermionic system. Our goal is to simulate Hamiltonian dynamics governed by the von Neumann equation
\[\dot{\rho}(t)=-i[H,\rho(t)],\qquad t\geq 0 \tag{1}\]
with a fixed initial state \(\rho(0)=\rho_{in}\). Its solution is
\[\rho(t)=e^{-itH}\rho_{in}e^{itH}. \tag{2}\]
A common strategy for simulating Hamiltonian dynamics on a quantum computer relies on product formulas.
Suppose we are given a decomposition
\[H=\sum_{a=1}^{d}F_{a}, \tag{3}\]
where \(F_{a}\) are hermitian operators such that each unitary \(e^{-itF_{a}}\) admits an efficient implementation by a quantum circuit for any evolution time \(t\). For example, this is the case if \(F_{a}\) is a sum of few-particle interactions that pairwise commute. A product formula associated with \(F_{1},\ldots,F_{d}\) is an operator-valued function
\[\mathcal{S}(t)=e^{-itF_{d}}\cdots e^{-itF_{2}}e^{-itF_{1}}. \tag{4}\]
Condition Eq. (3) ensures that \(\mathcal{S}(t)=e^{-itH}+O(t^{2})\) in the limit \(t\to 0\). More generally, \(\mathcal{S}(t)\) is called an order-\(p\) product formula if
\[\mathcal{S}(t)=e^{-itH}+O(t^{p+1}) \tag{5}\]
in the limit \(t\to 0\). For example, suppose \(H\) describes a chain of \(n\) qubits with nearest-neighbor interactions, \(H=\sum_{j=1}^{n-1}H_{j,j+1}\). Then a second-order Trotter-type product formula can be chosen as
\[\mathcal{S}(t)=e^{-itF_{3}}e^{-itF_{2}}e^{-itF_{1}} \tag{6}\]
with \(F_{1}=F_{3}=(1/2)(H_{1,2}+H_{3,4}+H_{5,6}+\ldots)\) and \(F_{2}=H_{2,3}+H_{4,5}+H_{6,7}+\ldots\). In this example \(d=3\).
Given an integer \(k\geq 1\) and a product formula \(\mathcal{S}(t)\), define a state \(\rho_{k}(t)\) obtained from \(\rho_{in}\) by applying \(k\) steps of the product formula \(\mathcal{S}(t/k)\), that is,
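To see the order-\(p=2\) behaviour of Eq. (6) numerically, the sketch below (our own example, using an arbitrary 4-qubit Heisenberg-type chain) builds dense matrices for \(F_{1}=F_{3}\) and \(F_{2}\), forms \(\mathcal{S}(t)\) with SciPy matrix exponentials, and checks that \(\|\mathcal{S}(t)-e^{-itH}\|\) shrinks roughly by a factor of 8 each time \(t\) is halved, as expected for an \(O(t^{3})\) error.

```python
import numpy as np
from scipy.linalg import expm
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bond(n, j):
    """Heisenberg coupling X_j X_{j+1} + Y_j Y_{j+1} + Z_j Z_{j+1} on an n-qubit chain."""
    def place(a, b):
        ops = [I2] * n
        ops[j], ops[j + 1] = a, b
        return reduce(np.kron, ops)
    return place(X, X) + place(Y, Y) + place(Z, Z)

n = 4
H_odd = bond(n, 0) + bond(n, 2)     # bonds (1,2) and (3,4)
H_even = bond(n, 1)                 # bond (2,3)
F1 = F3 = 0.5 * H_odd
F2 = H_even
H = H_odd + H_even

def S(t):
    # Second-order (symmetric) product formula of Eq. (6)
    return expm(-1j * t * F3) @ expm(-1j * t * F2) @ expm(-1j * t * F1)

for t in [0.4, 0.2, 0.1, 0.05]:
    err = np.linalg.norm(S(t) - expm(-1j * t * H), ord=2)
    print(f"t = {t:5.2f}   ||S(t) - exp(-itH)|| = {err:.3e}")
```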
\[\rho_{k}(t)=\mathcal{S}(t/k)^{k}\rho_{in}\mathcal{S}(t/k)^{-k}. \tag{7}\]
We shall refer to the approximation error \(\|\rho_{k}(t)-\rho(t)\|_{1}\) achieved by a product formula as a Trotter error. Recall that the 1-norm \(\|X\|_{1}\) of an operator \(X\) is defined as the sum of singular values of \(X\) or, equivalently, as \(\|X\|_{1}=\max_{\mathcal{O}}|\mathrm{Tr}(\mathcal{O}X)/\|\mathcal{O}\|\), where the maximum is over all non-zero operators \(\mathcal{O}\). Using the standard properties of the 1-norm [19] and the triangle inequality one gets
\[\|\rho_{k}(t)-\rho(t)\|_{1} \leq 2\|\mathcal{S}(t/k)^{k}-e^{-itH}\|\] \[\leq 2k\|\mathcal{S}(t/k)-e^{-i(t/k)H}\|. \tag{8}\]
A general upper bound on the error \(\|\mathcal{S}(t)-e^{-itH}\|\) which is nearly tight in many cases of interest has been recently proved by Childs, Su, Tran, et al. [8]. To state this bound we need some more notations. Suppose \(A_{1},\ldots,A_{s},B\) are linear operators acting on the same space. Define a quantity
\[\alpha_{\textsf{comm}}(p;A_{1},\ldots,A_{s};B)=\sum_{\begin{subarray}{c}q_{1},\ldots,q_{s}\geq 0\\ q_{1}+\ldots+q_{s}=p\end{subarray}}\frac{p!}{q_{1}!\cdots q_{s}!}\,\|(\mathrm{Ad}_{A_{1}})^{q_{1}}\cdots(\mathrm{Ad}_{A_{s}})^{q_{s}}(B)\|. \tag{9}\]
Here \(\mathrm{Ad}_{X}\) denotes the adjoint action of an operator \(X\), that is, \(\mathrm{Ad}_{X}(Y)=XY-YX\) for all operators \(Y\). Assuming that the operators \(A\)'s and \(B\) have the units of energy, \(\alpha_{\textsf{comm}}(p;\ldots)\) has units \([\mathrm{energy}]^{p+1}\). The following lemma is a corollary of the upper bound established in [8].
**Lemma 1** (**Trotter error**).: _Let \(\rho(t)\) and \(\rho_{k}(t)\) be the exact time evolved state and its approximation obtained by applying \(k\) steps of an order-\(p\) product formula, see Eqs. (2,7). Then for all \(t\geq 0\)_
\[\|\rho_{k}(t)-\rho(t)\|_{1}\leq\frac{2\alpha_{p}t^{p+1}}{(p+1)!k^{p}} \tag{10}\]
_where_
\[\alpha_{p}=\sum_{a=2}^{d}\alpha_{\textsf{comm}}(p;F_{d},\ldots,F_{a};F_{a-1}). \tag{11}\]
Since our setting is different from the one of Ref. [8], we provide a proof of Lemma 1 in Appendix A. As a concrete example, consider the second-order Trotter-type product formula Eq. (6). Then Eq. (11) gives
\[\alpha_{2}=\|\left[F_{2},[F_{2},F_{1}]\right]\|+3\|\left[F_{1},[F_{1},F_{2}] \right]\|.\]
In general, \(\alpha_{p}\) involves the norm of order-\(p\) nested commutators composed of \(F_{1},\ldots,F_{d}\). The quantity \(\alpha_{p}\) scales linearly with the system size \(n\) for many interesting Hamiltonians such as quantum lattice models with short-range interactions [8]. In this case the Trotter error Eq. (10) is proportional to \(nt^{p+1}/k^{p}\) with a constant factor that depends on the order \(p\) and details of the considered Hamiltonian.
## III Multi product formulas
A multi product formula (MPF) approximates the exact time evolved state \(\rho(t)\) by a linear combination
\[\mu(t)=\sum_{i=1}^{r}c_{i}\rho_{k_{i}}(t) \tag{12}\]
where \(\rho_{k_{i}}(t)\) is an approximation of \(\rho(t)\) obtained by applying \(k_{i}\) steps of some base product formula \(\mathcal{S}(t)\), see Eq. (7), and \(c_{i}\) are real coefficients. All terms in \(\mu(t)\) use the same base product formula. The key idea behind MPFs is that errors introduced by each individual term in \(\mu(t)\) can be approximately cancelled with a proper choice of the coefficients \(c_{i}\). Thus the error \(\|\mu(t)-\rho(t)\|_{1}\) achieved by an MPF can be much smaller than the errors \(\|\rho_{k_{i}}(t)-\rho(t)\|_{1}\) achieved by each term. We are interested in the error \(\|\mu(t)-\rho(t)\|_{1}\) minimized over the coefficients \(c_{i}\). Note that the optimal MPF \(\mu(t)\) might not be a physical state since some coefficients \(c_{i}\) can be negative. Accordingly, one may not be able to prepare \(\mu(t)\) in the lab. However, this is not needed if one's goal is just to
compute an expected value of some observable \(\mathcal{O}\) on \(\rho(t)\). Indeed, suppose one can efficiently prepare each individual state \(\rho_{k_{i}}(t)\) and obtain an estimate \(x_{i}\in\mathbb{R}\) satisfying
\[|\mathrm{Tr}(\mathcal{O}\rho_{k_{i}}(t))-x_{i}|\leq\epsilon_{i} \tag{13}\]
for some error tolerance \(\epsilon_{i}\). Assuming \(\|\mathcal{O}\|\leq 1\) one gets
\[\left|\mathrm{Tr}(\mathcal{O}\rho(t))-\sum_{i=1}^{r}c_{i}x_{i}\right|\leq\|\mu(t)-\rho(t)\|_{1}+\sum_{i=1}^{r}\epsilon_{i}|c_{i}|. \tag{14}\]
The last term in Eq. (14) can be made arbitrarily small simply by improving the quality of estimates in Eq. (13). However, to avoid error amplification, the MPF must be well-conditioned [13] such that the condition number \(\kappa=\sum_{i=1}^{r}|c_{i}|\) is sufficiently small.
Let us first consider the error \(\|\mu(t)-\rho(t)\|_{1}\). Unfortunately, the existing rigorous error bounds for MPFs may not be applicable in the regime covered by Lemma 1. Indeed, assuming that the bound of Lemma 1 scales as \(O(nt^{p+1}/k^{p})\), where \(n\) is the number of qubits, the base product formula needs only \(k=\Omega(n^{1/p}t^{1+1/p})\) time steps for an accurate simulation. Meanwhile, the existing error bounds for MPFs such as Lemma 10 of Ref. [15] are only applicable if \(\min_{j}k_{j}=\Omega(nt)\), assuming that the Hamiltonian \(H\) contains \(\Omega(n)\) terms with the norm \(\Omega(1)\). Applying such bounds to quantum advantage demonstrations in which \(t\) is comparable or smaller than \(n\) for a suitable choice of energy units [11; 20] would require each term in the MPF to perform at least \(\Omega(n^{2})\) time steps, whereas \(\Omega(n^{1+2/p})\) steps would suffice for the base product formula. Note that \(n^{1+2/p}\ll n^{2}\) for \(p\geq 3\).
To justify the use of MPFs in the regime covered by Lemma 1, it is desirable to extend the theory of Trotter error with commutator scaling developed in [8] from regular product formulas to MPFs. This is achieved in the following theorem. Our bound on the Trotter error depends on the norm of nested commutators analogous to \(\alpha_{\boldsymbol{comm}}\) and \(\alpha_{p}\), see Eqs. (9,11). Let us first define these commutators. Fix a product formula \(\mathcal{S}(t)=e^{-itF_{d}}\cdots e^{-itF_{1}}\) and consider a set of unitary operators
\[\Gamma(t)=\{e^{-i\tau_{d}F_{d}}\cdots e^{-i\tau_{1}F_{1}}\,|\,0\leq\tau_{1},\ldots,\tau_{d}\leq t\}.\]
Here we assume \(t\geq 0\). Given a unitary \(U\in\Gamma(t)\), let \(\hat{U}\) be a linear map such that \(\hat{U}(X)=UXU^{-1}\) for any operator \(X\). Define a quantity
\[\beta_{\boldsymbol{comm}}(p,\ell;A_{1},\ldots,A_{s};B;t)=\sum_{ \begin{subarray}{c}q_{1},\ldots,q_{s}\geq 0\\ q_{1}+\ldots+q_{s}=p\end{subarray}}\frac{p!}{q_{1}!\cdots q_{s}!}\] \[\max_{U\in\Gamma(t)}\cdot\|(\mathrm{Ad}_{H})^{\ell}\hat{U}( \mathrm{Ad}_{A_{1}})^{q_{1}}\cdots(\mathrm{Ad}_{A_{s}})^{q_{s}}(B)\|. \tag{15}\]
Assuming that all operators \(A\)'s and \(B\) have the units of energy, \(\beta_{\boldsymbol{comm}}(p,\ell;\ldots)\) has units \([\mathrm{energy}]^{p+\ell+1}\). Let
\[\beta_{p,\ell}(t)=\sum_{a=2}^{d}\beta_{\boldsymbol{comm}}(p,\ell;F_{d}, \ldots,F_{a};F_{a-1};t). \tag{16}\]
Our main result is as follows.
**Theorem 1** (**MPF Trotter error**).: _Let \(\rho(t)\) and \(\rho_{k}(t)\) be the exact time evolved state and its approximation obtained by applying \(k\) steps of an order-\(p\) product formula. Consider a multi product formula \(\mu(t)=\sum_{i=1}^{r}c_{i}\rho_{k_{i}}(t)\) with \(r=p+1\). Suppose the coefficients \(c_{i}\) solve a linear system_
\[\sum_{i=1}^{r}c_{i}=1\quad\text{and}\quad\sum_{i=1}^{r}\frac{c_{i}}{k_{i}^{q} }=0 \tag{17}\]
_for \(q\in\{p,p+1,\ldots,2p-1\}\). Then for all \(t\geq 0\)_
\[\|\mu(t)-\rho(t)\|_{1}\leq\left(\sum_{i=1}^{r}\frac{|c_{i}|}{k_{i}^{2p}} \right)(a_{1}t^{2p+2}+a_{2}t^{2p+1}+a_{3}t^{2p}),\]
_where_
\[a_{1}=8\left(\frac{\alpha_{p}}{(p+1)!}\right)^{2},\]
\[a_{2}=\frac{4\beta_{2p,0}(0)}{(2p)!}+\frac{8\beta_{p,p}(t/k_{min})}{(2\pi)^{p} p!},\]
_and_
\[a_{3}=4\sum_{\ell=1}^{p}\frac{B_{\ell}\beta_{2p-\ell,\ell-1}(t/k_{min})}{\ell! (2p-\ell)!}.\]
_Here \(B_{\ell}\) is the \(\ell\)-th Bernoulli number and \(k_{min}=\min\left(k_{1},\ldots,k_{r}\right)\)._
In the limit of large evolution time \(t\) the upper bound of Theorem 1 becomes
\[\|\mu(t)-\rho(t)\|_{1}\leq\left(\sum_{i=1}^{r}\frac{|c_{i}|}{k_{i}^{2p}} \right)a_{1}t^{2p+2}\left[1+O(1/t)\right]. \tag{18}\]
Comparing this and Eq. (10) one infers that the MPF achieves a nearly quadratic reduction of error compared with the base product formula for large \(t\). To quantify this, we need to choose an explicit sequence \((k_{1},\ldots,k_{r})\) and solve the linear system Eq. (17) to obtain the coefficients \(c_{i}\). Figure 1 shows examples of MPFs based on product formulas of order \(p=2,4,6\) with \(r=p+1\) terms and the maximum number of time steps per circuit \(k_{max}=25\). A sequence of time steps \((k_{1},\ldots,k_{r})\) was selected by minimizing the sum \(\sum_{i=1}^{r}|c_{i}|/{k_{i}^{2p}}\) over all \(r\)-tuples \((k_{1},\ldots,k_{r})\) with \(1\leq k_{i}\leq k_{max}\). Each of these examples can be turned to a family of MPFs by a
rescaling \(k_{i}\leftarrow\lambda k_{i}\), where \(\lambda\geq 1\) is any integer. This reduces the MPF error by the factor \(1/\lambda^{2p}\) without changing the coefficients \(c_{i}\) and the condition number \(\kappa\) (since the latter depend only on the ratios \(k_{i}/k_{j}\)). In contrast, applying the rescaling \(k\leftarrow\lambda k\) to the base product formula reduces the approximation error only by the factor \(1/\lambda^{p}\), see Lemma 1. Thus MPFs with a constant condition number \(\kappa=O(1)\) achieve a quadratic error reduction compared with the base product formula without increasing the maximum number of time steps per circuit (and thus without increasing the circuit depth). The number of circuit repetitions grows only by a constant factor since the condition number amplifies the approximation error in Eq. (14).
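The coefficients and the error prefactor are straightforward to compute; the sketch below (a reconstruction under our own naming, not the code used for Figure 1) solves the linear system Eq. (17), evaluates \(\kappa=\sum_{i}|c_{i}|\) and \(\sum_{i}|c_{i}|/k_{i}^{2p}\), and brute-forces the tuple \((k_{1},\ldots,k_{p+1})\) with \(1\leq k_{i}\leq k_{max}\).

```python
import itertools
import numpy as np

def mpf_coefficients(ks, p):
    """Solve Eq. (17): sum_i c_i = 1 and sum_i c_i / k_i^q = 0 for q = p..2p-1."""
    ks = np.asarray(ks, dtype=float)
    A = np.vstack([np.ones_like(ks)] + [ks ** (-q) for q in range(p, 2 * p)])
    b = np.zeros(p + 1)
    b[0] = 1.0
    return np.linalg.solve(A, b)

def search_time_steps(p, k_max=25):
    """Minimize sum_i |c_i| / k_i^(2p) over increasing tuples (k_1,...,k_{p+1})."""
    best = None
    for ks in itertools.combinations(range(1, k_max + 1), p + 1):
        c = mpf_coefficients(ks, p)
        score = float(np.sum(np.abs(c) / np.array(ks, dtype=float) ** (2 * p)))
        if best is None or score < best[0]:
            best = (score, ks, c)
    return best

score, ks, c = search_time_steps(p=2)
print(ks, c, np.abs(c).sum())   # expected to recover k = (4, 13, 17) used in Sec. V
```

Rescaling \(k_{i}\leftarrow\lambda k_{i}\) in this routine leaves the solution \(c\) and the condition number unchanged, in line with the remark above.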
We defer the proof of Theorem 1 to Appendix A. The proof relies on the machinery introduced in Ref. [8]. Our main innovation is an integral representation of MPFs based on the Euler-Maclaurin summation formula. Informally, this integral representation reduces the problem of bounding the MPF error to a single time step such that the error operator \(\rho_{k_{j}}(t)-\rho(t)\) can be viewed as a function of the variable \(t/k_{j}\) only. We consider Taylor series for this function at the point \(t/k_{j}=0\). Terms of order less than \(2p\) in the Taylor series give a zero contribution to \(\mu(t)\) due to Eq. (17) and the assumption that the base product formula has order \(p\). The norm of the remaining terms of order at least \(2p\) can be bounded by the quantities proportional to \((\alpha_{p})^{2}\) or \(\beta_{p_{1},p_{2}}\) using Theorem 5 of Ref. [8]. This theorem bounds the norm of truncated Taylor series for operator-valued functions such as \(e^{-it\mathrm{Ad}_{F_{d}}}\cdots e^{-it\mathrm{Ad}_{F_{a}}}(F_{a-1})\). Unfortunately, this theorem does not apply to our problem out of the box because the Euler-Maclaurin formula generates several unwanted terms proportional to derivatives of the considered function. To show that these derivative terms are sufficiently small, we have to prove a slightly more general version of Theorem 5, see Lemma 3 in Appendix A.
We expect that the upper bound of Theorem 1 can be strengthened to give a better than quadratic error suppression in the case \(r>p+1\), and leave this extension for future work.
## IV Dynamic multi product formulas
In this section we propose dynamic MPFs, namely MPFs with time-varying coefficients. We demonstrate that dynamic MPFs achieve the best possible approximation error, further improving on the constant-coefficient MPFs introduced in Theorem 1. As above, let \(\mathcal{S}(t)\) be a product formula approximating \(e^{-itH}\). Define a dynamic MPF as a linear combination
\[\mu(t)=\sum_{i=1}^{r}c_{i}(t)\rho_{k_{i}}(t) \tag{19}\]
where \(c_{i}(t)\) are real-valued functions obtained by minimizing the following projection error (in Frobenius
Figure 1: Examples of multi-product formulas (MPFs) of order \(p=2,4,6\). Such MPFs have the form \(\mu(t)=\sum_{i=1}^{p+1}c_{i}\rho_{k_{i}}(t)\) where \(\rho_{k_{i}}(t)\) is an approximation of the exact time evolved state obtained by applying \(k_{i}\) steps of the base product formula \(\mathcal{S}(t/k_{i})\). The sequence of time steps \((k_{1},\ldots,k_{p+1})\) reported in the table was obtained by minimizing the factor \(\sum_{i=1}^{p+1}|c_{i}|/k_{i}^{2p}\) in the upper bound of Theorem 1 over all tuples \((k_{1},\ldots,k_{p+1})\) with \(1\leq k_{i}\leq k_{max}\) and solving the linear system Eq. (17) with \(r=p+1\) to obtain the coefficients \(c_{i}\). To avoid clutter, we round the condition number and coefficients in the error scaling to the nearest integer. Note that the MPF approximation error can be reduced by performing a rescaling \(k_{i}\leftarrow\lambda k_{i}\) for all \(i\), where \(\lambda\geq 1\) is any integer. This reduces the upper bound of Theorem 1 by the factor \(1/\lambda^{2p}\) without changing the coefficients \(c_{i}\) and the condition number. Meanwhile, for the regular order-\(p\) product formula, the rescaling \(k\leftarrow\lambda k\) reduces the approximation error only by the factor \(1/\lambda^{p}\), see Lemma 1.
norm):
\[\|\rho(t)-\mu(t)\|_{F}^{2}=\mathrm{Tr}\left((\rho(t)-\mu(t))^{2}\right) \tag{20}\]
over coefficients \(c(t)\in\mathbb{R}^{r}\) for a fixed sequence \(k_{1},\ldots,k_{r}\). Simple algebra gives
\[\|\rho(t)-\mu(t)\|_{F}^{2} =1+\sum_{i,j=1}^{r}M_{i,j}(t)c_{i}(t)c_{j}(t)\] \[-2\sum_{i=1}^{r}L_{i}^{\mathsf{exact}}(t)c_{i}(t) \tag{21}\]
where \(M(t)\) is the Gram matrix
\[M_{i,j}(t) =\mathrm{Tr}(\rho_{k_{i}}(t)\rho_{k_{j}}(t))\] \[=\left|\langle\psi_{in}|\mathcal{S}(t/k_{i})^{-k_{i}}\mathcal{S} (t/k_{j})^{k_{j}}|\psi_{in}\rangle\right|^{2} \tag{22}\]
and \(L^{\mathsf{exact}}(t)\) is a vector of overlaps
\[L_{i}^{\mathsf{exact}}(t)=\mathrm{Tr}(\rho(t)\rho_{k_{i}}(t)). \tag{23}\]
Clearly, the dynamic MPF with optimal projection coefficients \(c^{\mathsf{exact}}\) provides the best approximation error w.r.t. the Frobenius norm. If we knew both \(M(t)\) and \(L^{\mathsf{exact}}(t)\), then finding the optimal projection coefficients by minimizing the right-hand side of Eq. (21) over the coefficients \(c(t)\) would be straightforward (e.g. by direct differentiation, as is done in standard least-squares). However, even though the Gram matrix \(M(t)\) can be efficiently measured on hardware (with a small additive error), the vector of overlaps \(L^{\mathsf{exact}}(t)\) is unknown since it depends on the exact solution \(\rho(t)\).
In what follows we propose a numerical scheme, an iterative least-squares method, which employs the von Neumann equation (1) to approximate \(L_{i}^{\mathsf{exact}}\) and thus provides a recursive approximation of \(c^{\mathsf{exact}}\). Fix a precision parameter \(\delta>0\) and discretize time to a grid with grid points \(0,\delta,2\delta,3\delta,\ldots\). Let \(t\) be some initial time instant: if \(t=0\), one can set \(c(0)\) to the exact projection coefficients, since \(\rho(0)\) is known and the coefficients can be computed (e.g. by taking a minimal-norm solution of Eq. (21)); otherwise one can pick the initial coefficients \(c(t)\) as a solution of the linear system Eq. (17),
\[\sum_{j=1}^{r}\frac{c_{j}(t)}{k_{j}^{q}}=\left\{\begin{array}{ll}1&\mbox{if } \;q=0,\\ 0&\mbox{if }\;q=p,p+1,\ldots,2p-1.\end{array}\right. \tag{24}\]
Consider the next grid point \(t+\delta\). Approximate \(\rho(t+\delta)\) by \(\rho^{\mathsf{approx}}(t+\delta)=\mathcal{S}(\delta/k_{0})^{k_{0}}\mu(t) \mathcal{S}(\delta/k_{0})^{-k_{0}}\) with error
\[\|\rho^{\mathsf{approx}}(t+\delta)-\rho(t+\delta)\|_{1}\leq\|\rho(t)-\mu(t) \|_{1}+\frac{2\alpha_{p}\delta^{p+1}}{(p+1)!k_{0}^{p}}\]
where \(p\) is the order of the product formula \(\mathcal{S}\). Note that this error can be made arbitrarily small provided \(\delta\ll 1\), even for a small integer \(k_{0}>1\), so that the leading error term is given by \(\|\rho(t)-\mu(t)\|_{1}\), which can be upper-bounded as in Theorem 1. Then \(L_{i}^{\mathsf{exact}}(t+\delta)\) is approximated by
\[L_{i}^{\mathsf{approx}}(t+\delta)=\mathrm{Tr}\left(\rho^{\mathsf{approx}}(t+ \delta)\rho_{k_{i}}(t+\delta)\right). \tag{25}\]
By linearity of the trace,
\[L_{i}^{\mathsf{approx}}(t+\delta)=\sum_{j=1}^{r}Q_{i,j}(t)c_{j}(t), \tag{26}\]
where
\[Q_{i,j}(t) =\mathrm{Tr}\left(\mathcal{S}(\delta/k_{0})^{k_{0}}\rho_{k_{j}}( t)\mathcal{S}(\delta/k_{0})^{-k_{0}}\rho_{k_{i}}(t+\delta)\right)\] \[=\left|\langle\psi_{in}|\mathcal{S}(t/k_{j})^{-k_{j}}\mathcal{S}( \delta/k_{0})^{-k_{0}}\mathcal{S}((t+\delta)/k_{i})^{k_{i}}|\psi_{in}\rangle \right|^{2} \tag{27}\]
is a matrix that we can efficiently compute on hardware (with a small additive error). Since the coefficients \(c(t)\) are already known, we can also compute \(L_{i}^{\mathsf{approx}}(t+\delta)\) using Eq. (26). Next compute the coefficients \(c(t+\delta)\) by minimizing the projection error Eq. (21) at time \(t+\delta\) with \(L^{\mathsf{exact}}(t+\delta)\) replaced by \(L^{\mathsf{approx}}(t+\delta)\). In other words, we set
\[c(t+\delta) =\arg\min_{c_{1},\ldots,c_{r}\in\mathbb{R}}1+\sum_{i,j=1}^{r}M_{i,j}(t+\delta)c_{i}c_{j}\] \[-2\sum_{i=1}^{r}L_{i}^{\mathsf{approx}}(t+\delta)c_{i} \tag{28}\]
Once \(c(t+\delta)\) is computed, we advance time \(t\gets t+\delta\) and repeat the same steps as above except that the linear system solution \(c(t)\) is replaced by the coefficients computed at the previous step (we only use the linear system solution for the initial grid point). Proceed inductively until the final grid point is reached.
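The recursion can be summarized by the following sketch; the callables `M_of_t` and `Q_of_t`, returning the matrices of Eqs. (22) and (27), are assumed to be supplied (in an experiment they would be estimated from overlaps of Trotter circuits), and the routine itself is only an illustration of the update rule.

```python
import numpy as np

def dynamic_mpf_coefficients(M_of_t, Q_of_t, c_init, t0, t_final, delta):
    """Iterative least-squares recursion of Eqs. (24)-(28), as a sketch.

    M_of_t(t) -> r x r Gram matrix M_{ij}(t), Eq. (22)
    Q_of_t(t) -> r x r matrix Q_{ij}(t), Eq. (27)
    c_init    -> coefficients at the initial grid point t0
    Returns the coefficient vectors on the grid t0, t0 + delta, ...
    """
    c = np.asarray(c_init, dtype=float)
    history = [c]
    t = t0
    while t < t_final - 1e-12:
        L_approx = Q_of_t(t) @ c               # Eq. (26)
        M = M_of_t(t + delta)
        # Minimizing 1 + c^T M c - 2 L^T c over c (Eq. (28)) gives M c = L;
        # a pseudo-inverse picks the minimal-norm solution if M is singular.
        c = np.linalg.pinv(M) @ L_approx
        history.append(c)
        t += delta
    return history
```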
## V Numerical experiments
We begin by testing predictions of Theorem 1 numerically for the second and fourth order Trotter-type product formulas. We select \(H\) to be the spin chain Hamiltonian proposed by Childs, Maslov et al. [11],
\[H=\sum_{j=0}^{n-2}(X_{j}X_{j+1}+Y_{j}Y_{j+1}+Z_{j}Z_{j+1})+\sum_{j=0}^{n-1}h_{j} Z_{j}. \tag{29}\]
Here \(X_{j},Y_{j},Z_{j}\) are Pauli operators acting on the \(j\)-th qubit and \(h_{j}\in[-1,1]\) are coefficients drawn randomly from the uniform distribution. We choose a second order (\(p=2\)) product formula as
\[\mathcal{S}_{2}(t)=e^{-itF_{5}}e^{-itF_{4}}e^{-itF_{3}}e^{-itF_{2}}e^{-itF_{1}},\]
where
\[F_{1}=F_{5}=\frac{1}{2}\sum_{\mathrm{odd}\,j}X_{j}X_{j+1}+Y_{j}Y_{j+1}+Z_{j}Z_{j +1},\]
\[F_{2}=F_{4}=\frac{1}{2}\sum_{j=0}^{n-1}h_{j}Z_{j},\]
and
\[F_{3}=\sum_{\text{even }j}X_{j}X_{j+1}+Y_{j}Y_{j+1}+Z_{j}Z_{j+1}.\]
Let \(\rho(t)=e^{-itH}|\psi_{in}\rangle\langle\psi_{in}|e^{itH}\), where \(|\psi_{in}\rangle=|1010\ldots 10\rangle\) is a Néel-type initial state. Specializing Theorem 1 and Figure 1 to \(p=2\) gives an MPF
\[\mu(t)=\sum_{i=1}^{3}c_{i}\rho_{k_{i}}(t),\]
where \(\rho_{k}(t)=\mathcal{S}_{2}(t/k)^{k}|\psi_{in}\rangle\langle\psi_{in}| \mathcal{S}_{2}(t/k)^{-k}\) and
\[(k_{1},k_{2},k_{3})=\lambda\cdot(4,13,17)\]
for an integer \(\lambda\geq 1\). Solving the linear system Eq. (17) gives coefficients
\[c_{1}=0.016088(4),\ c_{2}=-1.794934(6),\ c_{3}=2.778846(1).\]
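As a sanity check, the experiment can be reproduced at a smaller scale with dense matrices. The sketch below (assuming NumPy and SciPy, with a hypothetical choice \(n=6\), \(t=1\), and \(\lambda=1\) instead of the sizes used in the figures) builds \(H\) and \(\mathcal{S}_{2}\), forms \(\mu(t)\) with the coefficients above, and evaluates the trace-norm error.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(site_ops, n):
    """Tensor product with the given single-qubit operators at the given sites."""
    out = np.array([[1.0 + 0j]])
    for q in range(n):
        out = np.kron(out, site_ops.get(q, I2))
    return out

def bond(j, n):
    return sum(op({j: P, j + 1: P}, n) for P in (X, Y, Z))

n = 6                                  # small chain instead of n = 14
rng = np.random.default_rng(0)
h = rng.uniform(-1, 1, n)              # random fields h_j in [-1, 1]

F1 = 0.5 * sum(bond(j, n) for j in range(1, n - 1, 2))   # F1 = F5, odd bonds
F2 = 0.5 * sum(h[j] * op({j: Z}, n) for j in range(n))   # F2 = F4
F3 = sum(bond(j, n) for j in range(0, n - 1, 2))         # even bonds
H = 2 * F1 + 2 * F2 + F3               # H = F1 + F2 + F3 + F4 + F5

def S2(t):
    """Second-order formula S_2(t) = e^{-itF5} e^{-itF4} e^{-itF3} e^{-itF2} e^{-itF1}."""
    A, B, C = expm(-1j * t * F1), expm(-1j * t * F2), expm(-1j * t * F3)
    return A @ B @ C @ B @ A

psi0 = np.zeros(2 ** n, dtype=complex)
psi0[int("10" * (n // 2), 2)] = 1.0    # Neel-type initial state |1010...10>

t = 1.0
ks = (4, 13, 17)
c = np.array([0.016088, -1.794934, 2.778846])

exact = expm(-1j * t * H) @ psi0
rho = np.outer(exact, exact.conj())
mu = np.zeros_like(rho)
for ci, k in zip(c, ks):
    v = np.linalg.matrix_power(S2(t / k), k) @ psi0
    mu += ci * np.outer(v, v.conj())

trace_norm_error = np.abs(np.linalg.eigvalsh(mu - rho)).sum()
print(trace_norm_error)
```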
Figure 2 shows a comparison between three types of approximation errors: (1) Trotter error \(\|\rho(t)-\rho_{k_{3}}(t)\|_{1}\) achieved by the best Trotter circuit \(\mathcal{S}_{2}(t/k_{3})^{k_{3}}\), (2) MPF error \(\|\rho(t)-\mu(t)\|_{1}\), and (3) fitting ansatz based on Eq. (18). The latter was chosen as
\[\epsilon_{\text{fit}}=0.06n^{2}t^{6}\sum_{i=1}^{3}\frac{|c_{i}|}{k_{i}^{4}} \approx\frac{(7.5\times 10^{-6})n^{2}t^{6}}{\lambda^{4}}. \tag{30}\]
Here we fitted the coefficient \(a_{1}\) in Eq. (18) by a quadratic function of \(n\), obtaining an estimate \(a_{1}\approx 0.06n^{2}\). Figure 3 demonstrates that the fitting formula Eq. (30) closely approximates the true MPF error \(\|\rho(t)-\mu(t)\|_{1}\) for several values of \(n\) and \(\lambda\).
One can use Eq. (30) to estimate the number of Trotter steps (and thus the circuit depth) for a given simulation task. For example, solving the benchmark problem of Ref. [11] with \(n=t=100\) and error tolerance \(\epsilon_{\text{fit}}=10^{-3}\) using the second-order MPF would require \(\lambda\approx 3000\) which translates to \(k_{max}=17\lambda\approx 40,000\) Trotter steps.
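The quoted value of \(\lambda\) follows from inverting the fitting formula; a one-line check:

```python
n, t, eps = 100, 100, 1e-3
lam = (7.5e-6 * n ** 2 * t ** 6 / eps) ** 0.25   # invert Eq. (30) for lambda
print(lam)                                        # about 3 * 10^3
```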
Next, consider the fourth-order (\(p=4\)) product formula
\[\mathcal{S}_{4}(t)=(\mathcal{S}_{2}(ut))^{2}\mathcal{S}_{2}((1-4u)t)( \mathcal{S}_{2}(ut))^{2},\]
where \(u=\frac{1}{4-4^{1/3}}\), see for instance [11]. Specializing Theorem 1 and Figure 1 to \(p=4\) gives an MPF
\[\mu(t)=\sum_{i=1}^{5}c_{i}\rho_{k_{i}}(t),\]
where
\[(k_{1},k_{2},k_{3},k_{4},k_{5})=\lambda\cdot(2,9,17,23,25)\]
for an integer \(\lambda\geq 1\). Solving the linear system Eq. (17) gives coefficients
\[\begin{array}{cc}c_{1}=1.77273114\times 10^{-8}&c_{4}=-6.7785310(5)\\ c_{2}=-0.00267812(5)&c_{5}=7.2808414(4)\\ c_{3}=0.50036771(6)\end{array}\]
Figure 5 shows a comparison between three types of approximation errors: (1) Trotter error \(\|\rho(t)-\rho_{k_{5}}(t)\|_{1}\) achieved by the best Trotter circuit \(\mathcal{S}_{4}(t/k_{5})^{k_{5}}\), (2) MPF error \(\|\rho(t)-\mu(t)\|_{1}\), and (3) fitting ansatz based on Eq. (18). The latter was chosen as
\[\epsilon_{\text{fit}}=0.00014n^{2}t^{10}\sum_{i=1}^{5}\frac{|c_{i}|}{k_{i}^{8 }}\approx\frac{(3\times 10^{-14})n^{2}t^{10}}{\lambda^{8}}. \tag{31}\]
Here we fitted the coefficient \(a_{1}\) in Eq. (18) by a quadratic function of \(n\), obtaining an estimate \(a_{1}\approx 0.00014n^{2}\). Figure 6 demonstrates that the fitting formula Eq. (31) closely approximates the true MPF error \(\|\rho(t)-\mu(t)\|_{1}\) for several values of \(n\) and \(\lambda\) and large evolution times \(t\). Meanwhile, the fitting formula underestimates the error for small \(t\) since it neglects the corrections \(a_{2}t^{2p+1}=a_{2}t^{9}\) and \(a_{3}t^{2p}=a_{3}t^{8}\) in the upper bound of Theorem 1. We expect that these corrections may become more important for higher-order formulas.
One can use Eq. (31) to estimate the number of Trotter steps for a given simulation task. For example, solving the benchmark problem of Ref. [11] with \(n=t=100\) and error tolerance \(\epsilon_{\text{fit}}=10^{-3}\) using the fourth-order MPF would require \(\lambda\approx 50\) which translates to \(k_{max}=25\lambda\approx 1,250\) Trotter steps, as opposed to \(40,000\) Trotter steps for the second-order MPF. Note however that each
Figure 2: Approximation error achieved by the second-order Trotter circuit with \(k_{3}=850\) time steps (blue) and MPF with \((k_{1},k_{2},k_{3})=(200,650,850)\) (orange) for the Heisenberg spin chain Hamiltonian Eq. (29) with \(n=14\) qubits. Green line shows the fitting formula Eq. (30).
Trotter step of \(\mathcal{S}_{4}\) has the same depth as five Trotter steps of \(\mathcal{S}_{2}\). Thus the fourth-order MPF achieves roughly six-fold depth reduction compared with the second-order MPF.
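A similar back-of-the-envelope check for the fourth-order case, taking the 40,000-step second-order figure quoted above at face value:

```python
n, t, eps = 100, 100, 1e-3
lam4 = (3e-14 * n ** 2 * t ** 10 / eps) ** (1 / 8)   # invert Eq. (31) for lambda
k_max4 = 25 * lam4                                    # about 1,250 steps of S_4
depth_ratio = 40_000 / (5 * k_max4)                   # each S_4 step = 5 S_2 steps
print(lam4, k_max4, depth_ratio)                      # about 48, 1.2e3, 6.6
```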
Dynamic MPFs. Here we demonstrate the performance of the coefficients computed by the proposed dynamic MPF scheme, Eq. (28), and compare them against the dynamic MPF with optimal coefficients (the ground truth), the MPF with constant coefficients computed as suggested in Theorem 1, and the best product formula within the MPF. To this end we use the same Hamiltonian as in the previous section, but employ six second-order Trotter circuits with \((k_{1},k_{2},k_{3},k_{4},k_{5},k_{6})=3\cdot(1,2,3,4,6,21)\), as recommended in [13]. We compute the optimal coefficients by using \(L_{i}^{\text{exact}}(t)\) as per Eq. (23) on a time grid with spacing \(0.05\). To compute the approximated coefficients we use \(k_{0}=26\), as per the discussion after Eq. (24). Note that we stopped the simulation after the optimal error reached \(0.1\). Figure 4 provides the comparison: the dynamic MPF clearly outperforms the constant-coefficient MPF (e.g. at \(t=3.7\) its error is about 10x smaller) as well as the Trotter circuit (up until time \(t=5\)), and, importantly, for most of the evolution its error curve stays quite close to the ground-truth error (measured in Frobenius norm). Interestingly, in this example the Trotter circuit performs just as well as the optimal/dynamic MPF at the final time, but the error of the constant-coefficient MPF (red curve) is about 2x worse.
## VI Conclusions
To conclude, we studied quantum algorithms for simulating Hamiltonian dynamics based on multi-product formulas. We derived an upper bound on the approximation error achieved by such algorithms for general quantum Hamiltonians and multi-product formulas of Trotter type which are commonly used in quantum simulations. This upper bound applies to both short and long evolution times overcoming limitations of the previously known bounds. Numerical simulations confirm that our upper bound correctly predicts the approximation error scaling for the considered spin chain Hamiltonian. In addition, we showed how to go beyond extrapolation-based approaches for computing coefficients in a multi-product formula. To this end we introduced an iterative least-squares algorithm aimed at finding a dynamic multi-product formula by minimizing the approximation error.
A challenging open question is how to extend our bound, Theorem 1, to the case \(r>p+1\) and obtain
Figure 4: Approximation error achieved by dynamic MPF with \((k_{1},k_{2},k_{3},k_{4},k_{5},k_{6})=3\cdot(1,2,3,4,6,21)\) (orange), Trotter circuit with \(k_{6}=63\) (green), optimal coefficients (blue) and MPF (red) for the Heisenberg spin chain Hamiltonian Eq. (29) with \(n=10\) qubits.
Figure 3: Comparison between the true MPF approximation error \(\|\mu(t)-\rho(t)\|_{1}\) (circles) and fitting formula Eq. (30) (solid lines) for the second-order product formulas with \((k_{1},k_{2},k_{3})=\lambda\cdot(4,13,17)\). Top panel: \(n=10\), Bottom panel: \(n=14\).
a stronger than quadratic error suppression compared with the regular product formulas. Likewise, one may ask whether the upper bound of Theorem 1 can be expressed only in terms of low-order nested commutators similar to the bound of Lemma 1. Our work leaves open the question of how to make the iterative least-squares algorithm underlying dynamic multi-product formulas robust to small additive errors in the estimation of overlaps between Trotter circuits. Such additive errors are unavoidable in the experimental implementation due to sampling and hardware noise.
## Acknowledgements
SB thanks Minh Tran, Almudena Vazquez, and Stefan Worner for helpful discussions. SB was supported in part by the Army Research Office under Grant Number W911NF-20-1-0014.
## Appendix A Trotter error for Multi Product Formulas
### Proof of Lemma 1
Consider first a single time step, \(k=1\). Then our goal is to upper bound the error \(\|\rho_{1}(t)-\rho(t)\|_{1}\) where
\[\rho(t)=e^{-itH}\rho_{in}e^{itH}\quad\text{and}\quad\rho_{1}(t)=\mathcal{S}(t) \rho_{in}\mathcal{S}(t)^{\dagger}.\]
Recall that \(\mathcal{S}(t)=e^{-itF_{d}}\cdots e^{-itF_{1}}\) is an order-\(p\) product formula associated with a Hamiltonian \(H=\sum_{a=1}^{d}F_{a}\). Time evolution of \(\mathcal{S}(t)\) is governed by a time dependent Hamiltonian \(G(t)\) defined as
\[G(t)=i\left(\frac{d\mathcal{S}(t)}{dt}\right)\mathcal{S}(t)^{-1}. \tag{10}\]
The chain rule for derivatives gives
\[G(t)=F_{d}+\sum_{a=2}^{d}e^{-itF_{d}}\cdots e^{-itF_{a}}F_{a-1}e^{itF_{a}} \cdots e^{itF_{d}}. \tag{11}\]
Note that \(G(0)=\sum_{a=1}^{d}F_{a}=H\). Simple algebra shows that
\[\frac{d}{dt}(\rho_{1}(t)-\rho(t))=-i[G(t),\rho_{1}(t)]+i[H,\rho(t)]=-i[H,\rho _{1}(t)-\rho(t)]-i[G(t)-H,\rho_{1}(t)].\]
Figure 5: Approximation error achieved by the fourth-order Trotter circuit with \(k_{5}=250\) time steps (blue) and MPF with \((k_{1},k_{2},k_{3},k_{4},k_{5})=(20,90,170,230,250)\) (orange) for the Heisenberg spin chain Hamiltonian Eq. (29) with \(n=14\) qubits. Green line shows the fitting formula Eq. (31).
Here we have added and subtracted a term \(i[H,\rho_{1}(t)]\) to get the second equality. The above equation is equivalent to
\[\frac{d}{dt}\left(e^{iHt}(\rho_{1}(t)-\rho(t))e^{-iHt}\right)=-ie^{ itH}[G(t)-H,\rho_{1}(t)]e^{-itH}.\]
Integrating over the interval \([0,t]\) and noting that \(\rho_{1}(0)=\rho(0)\) gives
\[e^{iHt}(\rho_{1}(t)-\rho(t))e^{-iHt}=-i\int_{0}^{t}dt_{1}e^{it_{1}H}[G(t_{1})-H,\rho_{1}(t)]e^{-it_{1}H}. \tag{10}\]
Using the unitarity of \(e^{itH}\) and the triangle inequality one obtains
\[\|\rho_{1}(t)-\rho(t)\|_{1}\leq\int_{0}^{t}dt_{1}\|\left[G(t_{1})-H,\rho_{1}(t _{1})\right]\|_{1}\leq 2\int_{0}^{t}dt_{1}\|G(t_{1})-H\|. \tag{11}\]
The second inequality relies on the bound \(\|XY\|_{1}\leq\|X\|\cdot\|Y\|_{1}\) which holds for any operators \(X,Y\) and the identity \(\|\rho_{1}(t_{1})\|_{1}=1\). We shall need the following result which is a rephrasing of Theorem 5 from Ref. [8].
Figure 6: Comparison between the true MPF approximation error \(\|\mu(t)-\rho(t)\|_{1}\) (circles) and fitting formula Eq. (31) (solid lines) for the fourth-order product formulas with \((k_{1},k_{2},k_{3},k_{4},k_{5})=\lambda(2,9,17,23,25)\). Top panel: \(n=10\), Bottom panel: \(n=14\). For small evolution time \(t\) the fitting formula underestimates the error since it neglects corrections \(a_{2}t^{2p+1}\) and \(a_{3}t^{2p}\) in the upper bound of Theorem 1.
**Fact 1**.: _Suppose \(A_{1},\ldots,A_{s},B\) are hermitian operators. Consider an operator-valued function_
\[B(t)=e^{-itA_{1}}\cdots e^{-itA_{s}}Be^{itA_{s}}\cdots e^{itA_{1}}\]
_and its Taylor series \(B(t)=\sum_{q=0}^{\infty}B_{q}t^{q}\) at \(t=0\). Then_
\[\left\|B(t)-\sum_{q=0}^{p-1}B_{q}t^{q}\right\|\leq\alpha_{\mathbf{comm}}(p;A_{1 },\ldots,A_{s};B)\frac{|t|^{p}}{p!} \tag{10}\]
_where \(\alpha_{\mathbf{comm}}(p;A_{1},\ldots,A_{s};B)\) is defined in Eq. (9)._
Consider Taylor series
\[G(t)=\sum_{q=0}^{\infty}G_{q}t^{q}.\]
We claim that
\[G_{0}=H\quad\text{and}\quad G_{q}=0\quad\text{for }1\leq q\leq p-1. \tag{11}\]
Indeed, Eq. (10) gives \(G_{0}=G(0)=\sum_{a=1}^{d}F_{a}=H\). The assumption that \(\mathcal{S}(t)\) is an order-\(p\) product formula implies that \(\rho_{1}(t)-\rho(t)=O(t^{p+1})\) in the limit \(t\to 0\), see Eq. (8). From Eq. (11) one gets \(G(t)-H=O(t^{p})\) in the limit \(t\to 0\). Thus \(G_{q}=0\) for \(1\leq q\leq p-1\). Combining Eqs. (10,11) and Fact 1 gives
\[\|G(t)-H\|=\left\|G(t)-\sum_{q=0}^{p-1}G_{q}t^{q}\right\|\leq\frac{2t^{p}}{p!} \sum_{a=2}^{d}\alpha_{\mathbf{comm}}(p;F_{d},\ldots,F_{a};F_{a-1})=\frac{2 \alpha_{p}t^{p}}{p!}. \tag{12}\]
Here we applied Fact 1 to each term \(e^{-itF_{d}}\cdots e^{-itF_{a}}F_{a-1}e^{itF_{a}}\cdots e^{itF_{d}}\) in \(G(t)\) with \(a=2,\ldots,d\) and used the triangle inequality. Substituting this into Eq. (11) gives
\[\|\rho_{1}(t)-\rho(t)\|_{1}\leq 2\int_{0}^{t}dt_{1}\frac{2\alpha_{p}t_{1}^{p}} {p!}=\frac{2\alpha_{p}t^{p+1}}{(p+1)!}.\]
This is equivalent to Eq. (10) with a single time step \(k=1\).
Consider now the general case \(k\geq 1\). Define a sequence of states
\[\sigma_{j}=\mathcal{S}(t/k)^{k-j}e^{-itH(j/k)}\rho_{in}e^{itH(j/k)}\mathcal{S} (t/k)^{-k+j}\]
where \(j=0,1,\ldots,k\). We have \(\sigma_{0}=\rho_{k}(t)\) and \(\sigma_{k}=\rho(t)\). The norm \(\|\sigma_{j+1}-\sigma_{j}\|_{1}\) can be bounded using Eq. (10) with \(k=1\), evolution time \(t/k\), and the initial state \(e^{-itH(j/k)}\rho_{in}e^{itH(j/k)}\). By the triangle inequality,
\[\|\rho_{k}(t)-\rho(t)\|_{1}=\|\sigma_{0}-\sigma_{k}\|_{1}\leq\sum_{j=0}^{k-1} \|\sigma_{j+1}-\sigma_{j}\|_{1}\leq\frac{2k\alpha_{p}(t/k)^{p+1}}{(p+1)!}= \frac{2\alpha_{p}t^{p+1}}{(p+1)!k^{p}}.\]
This proves Eq. (10) in the general case.
### Proof of Theorem 1
Below we use notations from the previous subsection. Fix evolution time \(t>0\) and the number of time steps \(k\). Any \(\tau\in[0,t)\) can be uniquely written as
\[\tau=\tau^{\prime}+(t/k)j(\tau)\quad\text{for }\tau^{\prime}\in[0,t/k)\text{ and integer }j(\tau)\in[0,k-1]. \tag{13}\]
Define an operator valued function
\[\mathcal{U}(\tau)=\mathcal{S}(\tau^{\prime})\mathcal{S}(t/k)^{j(\tau)}. \tag{14}\]
By definition, \(\mathcal{U}(0)=I\) and \(\lim_{\tau\to t}\mathcal{U}(\tau)=\mathcal{S}(t/k)^{k}\). Furthermore, for any \(\tau\in[0,t)\) which is not an integer multiple of \(t/k\) the function \(\mathcal{U}(\tau)\) is differentiable and
\[i\frac{d\mathcal{U}(\tau)}{d\tau}=i\frac{d\mathcal{S}(\tau^{\prime})}{d\tau^{ \prime}}\mathcal{S}(t/k)^{j(\tau)}=G(\tau^{\prime})\mathcal{S}(\tau^{\prime}) \mathcal{S}(t/k)^{j(\tau)}=G(\tau^{\prime})\mathcal{U}(\tau). \tag{10}\]
Here the second equality uses Eq. (11). Consider a state
\[\sigma(\tau)=\mathcal{U}(\tau)\rho_{in}\mathcal{U}(\tau)^{\dagger},\qquad 0 \leq\tau<t. \tag{11}\]
By definition,
\[\lim_{\tau\to t}\sigma(\tau)=\mathcal{S}(t/k)^{k}\rho_{in}\mathcal{S}(t/k)^{- k}=\rho_{k}(t). \tag{12}\]
Repeating the same arguments as in the derivation of Eq. (10) one gets
\[e^{i\tau{\rm Ad}_{H}}(\sigma(\tau)-\rho(\tau))=-i\int_{0}^{\tau}d\tau_{1}e^{i \tau_{1}{\rm Ad}_{H}}[G(\tau_{1}^{\prime})-H,\sigma(\tau_{1})]. \tag{13}\]
Adding and subtracting \([G(\tau_{1}^{\prime})-H,\rho(\tau_{1})]\) in the righthand side gives
\[e^{i\tau{\rm Ad}_{H}}(\sigma(\tau)-\rho(\tau))=J_{1}(\tau)-i\int_{0}^{\tau}d \tau_{1}e^{i\tau_{1}{\rm Ad}_{H}}[G(\tau_{1}^{\prime})-H,\sigma(\tau_{1})-\rho (\tau_{1})], \tag{14}\]
where
\[J_{1}(\tau)=-i\int_{0}^{\tau}d\tau_{1}e^{i\tau_{1}{\rm Ad}_{H}}[G(\tau_{1}^{ \prime})-H,\rho(\tau_{1})] \tag{15}\]
We can now apply Eq. (13) again to express \(\sigma(\tau_{1})-\rho(\tau_{1})\) in Eq. (14) as an integral. This gives
\[e^{i\tau{\rm Ad}_{H}}(\sigma(\tau)-\rho(\tau))=J_{1}(\tau)+J_{2}(\tau), \tag{16}\]
where
\[J_{2}(\tau)=-\int_{0}^{\tau}d\tau_{1}\int_{0}^{\tau_{1}}d\tau_{2}[e^{i\tau_{1 }{\rm Ad}_{H}}(G(\tau_{1}^{\prime})-H),e^{i\tau_{2}{\rm Ad}_{H}}[G(\tau_{2}^{ \prime})-H,\sigma(\tau_{2})]]. \tag{17}\]
Using the triangle inequality and normalization \(\|\sigma(\tau_{2})\|_{1}=1\) one gets
\[\|J_{2}(\tau)\|_{1}\leq 4\int_{0}^{\tau}d\tau_{1}\int_{0}^{\tau_{1}}d\tau_{2} \|G(\tau_{1}^{\prime})-H\|\cdot\|G(\tau_{2}^{\prime})-H\|. \tag{18}\]
Consider the limit \(\tau\to t\). Dividing the integration domain into \(k\) regions with \(j(\tau_{1})=j(\tau_{2})\) and \(\binom{k}{2}\) regions with \(j(\tau_{1})>j(\tau_{2})\) leads to
\[\|J_{2}(t)\|_{1}\leq 4k\int_{0}^{t/k}d\tau_{1}^{\prime}\int_{0}^{\tau_{1}}d \tau_{2}^{\prime}\|G(\tau_{1}^{\prime})-H\|\cdot\|G(\tau_{2}^{\prime})-H\|+4 \binom{k}{2}\left(\int_{0}^{t/k}d\tau_{1}^{\prime}\|G(\tau_{1}^{\prime})-H\| \right)^{2}. \tag{19}\]
From Eq. (17) one gets \(\|G(\tau_{1})-H\|\leq 2\alpha_{p}\tau_{1}^{p}/p!\) and \(\|G(\tau_{2})-H\|\leq 2\alpha_{p}\tau_{2}^{p}/p!\). Taking the integrals gives
\[\|J_{2}(t)\|_{1}\leq(2k+2k(k-1))\left(\frac{2\alpha_{p}t^{p+1}}{(p+1)!k^{p+1}} \right)^{2}=2\left(\frac{2\alpha_{p}t^{p+1}}{(p+1)!k^{p}}\right)^{2}. \tag{20}\]
Note that \(\|J_{2}(t)\|_{1}\) is proportional to the square of the regular Trotter error, see Eq. (10).
Next let us examine the term \(J_{1}(t)\). By definition, \(\rho(\tau_{1})=e^{-i\tau_{1}{\rm Ad}_{H}}\rho_{in}\). Thus
\[J_{1}(t)=-i[L(t),\rho_{in}] \tag{21}\]
where
\[L(t)=\int_{0}^{t}d\tau_{1}e^{i\tau_{1}{\rm Ad}_{H}}(G(\tau_{1}^{\prime})-H). \tag{22}\]
Writing \(\tau_{1}=(t/k)(v+j)\) with \(v\in[0,1)\) and integer \(j\in[0,k-1]\) we get
\[L(t)=\frac{t}{k}\sum_{j=0}^{k-1}\int_{0}^{1}dve^{ij(t/k)\mathrm{Ad}_{H}}e^{iv(t/ k)\mathrm{Ad}_{H}}(G(vt/k)-H). \tag{101}\]
The dependence of \(L(t)\) on \(1/k\) is hard to analyze because \(k\) controls the range of the sum \(\sum_{j=0}^{k-1}\). On the other hand, one should expect that the sum \(\sum_{j=0}^{k-1}\) can be well approximated by an integral \(\int_{0}^{k}dx\). The latter can be transformed to an integral over a range \([0,1]\) independent of \(k\) by a simple change of variable. A systematic way to approximate a sum by an integral is based on the Euler-Maclaurin formula, see e.g. Theorem D.2.1 in [21].
**Lemma 2** (**Euler-Maclaurin formula**).: _Suppose \(f(x)\) is a function with continuous derivatives up to order \(s\). Let \(B_{s}(x)\) be the Bernoulli polynomial of order \(s\) and \(B_{s}=B_{s}(0)\) the corresponding Bernoulli number. For example, \(B_{0}(x)\equiv 1\), \(B_{1}(x)=x-1/2\), and \(B_{2}(x)=x^{2}-x+1/6\). Then for any integers \(j_{1}<j_{2}\) one has_
\[\sum_{j=j_{1}+1}^{j_{2}}f(j)=\int_{j_{1}}^{j_{2}}dxf(x)+\sum_{\ell=1}^{s}(-1)^ {\ell}\frac{B_{\ell}}{\ell!}\left(f^{(\ell-1)}(j_{2})-f^{(\ell-1)}(j_{1}) \right)+\frac{(-1)^{s-1}}{s!}\int_{j_{1}}^{j_{2}}dxB_{s}(x-[x])f^{(s)}(x). \tag{102}\]
_Here \(f^{(\ell)}(x)\) denotes the \(\ell\)-th derivative of \(f(x)\) and \([x]\) denotes the integer part of \(x\) such that \(x-[x]\) always lies in the interval \([0,1)\). Furthermore, for all \(s\geq 2\)_
\[\max_{x\in[0,1]}\ |B_{s}(x)|\leq\frac{4s!}{(2\pi)^{s}}. \tag{103}\]
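As a quick numerical sanity check of Lemma 2 (for a scalar test function and \(s=1\); in the proof below the same identity is applied to an operator-valued \(f\)):

```python
import numpy as np

f = lambda x: np.exp(0.3 * x)           # hypothetical scalar test function
df = lambda x: 0.3 * np.exp(0.3 * x)    # its first derivative
j1, j2 = 0, 20

x = np.linspace(j1, j2, 400001)
dx = x[1] - x[0]
integrate = lambda y: dx * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule

lhs = sum(f(j) for j in range(j1 + 1, j2 + 1))
rhs = (integrate(f(x))
       + (-1) * (-0.5) * (f(j2) - f(j1))                      # l = 1 term, B_1 = -1/2
       + integrate((x - np.floor(x) - 0.5) * df(x)))          # remainder, B_1(x) = x - 1/2
print(lhs, rhs)   # the two sides agree up to quadrature error
```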
Consider a function \(f(x)=e^{i(x-1)(t/k)\mathrm{Ad}_{H}}\). Note that \(x\) is a real variable while \(f(x)\) is a super-operator (a linear map acting on the space of linear operators). We have
\[f^{(\ell)}(x)=(it/k)^{\ell}(\mathrm{Ad}_{H})^{\ell}f(x).\]
Applying Eq. (102) with \(j_{1}=0\) and \(j_{2}=k\) gives
\[\sum_{j=0}^{k-1}e^{ij(t/k)\mathrm{Ad}_{H}} =\sum_{j=1}^{k}f(j)=\int_{0}^{k}dx\,e^{i(x-1)(t/k)\mathrm{Ad}_{H}} \tag{104}\] \[+\sum_{\ell=1}^{s}(-1)^{\ell}\frac{B_{\ell}(it)^{\ell-1}}{k^{\ell-1}\,\ell!}\left(e^{it\mathrm{Ad}_{H}}-\hat{I}\right)e^{-i(t/k)\mathrm{Ad}_{H}}(\mathrm{Ad}_{H})^{\ell-1}\] \[+\frac{(-1)^{s-1}(it)^{s}}{k^{s}\,s!}\int_{0}^{k}dx\,B_{s}(x-[x])e^{i(x-1)(t/k)\mathrm{Ad}_{H}}(\mathrm{Ad}_{H})^{s}. \tag{105}\]
Here \(\hat{I}\) is the identity superoperator. Performing a change of variables \(x=kw\) with \(w\in[0,1]\) in the integral over \(x\) and inserting the above expression into Eq. (101) gives
\[L(t) =t\int_{0}^{1}dwe^{itw\mathrm{Ad}_{H}}\int_{0}^{1}dv\,e^{i(v-1)(t /k)\mathrm{Ad}_{H}}(G(tv/k)-H)\] \[-\left(e^{it\mathrm{Ad}_{H}}-\hat{I}\right)\sum_{\ell=1}^{s}(-i)^ {\ell}\frac{B_{\ell}t^{\ell}}{\ell!}\int_{0}^{1}dv\,\frac{1}{k^{\ell}}( \mathrm{Ad}_{H})^{\ell-1}e^{i(v-1)(t/k)\mathrm{Ad}_{H}}(G(tv/k)-H)\] \[-\frac{(-i)^{s}t^{s+1}}{k^{s}\,s!}\int_{0}^{1}dwe^{itw\mathrm{Ad} _{H}}B_{s}(kw-[kw])\int_{0}^{1}dv\,(\mathrm{Ad}_{H})^{s}e^{i(v-1)(t/k)\mathrm{ Ad}_{H}}(G(tv/k)-H). \tag{106}\]
Combining the bound on \(J_{2}(t)\), the expressions for \(J_{1}(t)\) and \(L(t)\), and the definition of the MPF \(\mu(t)\), and using the triangle inequality, gives
\[\left\|\mu(t)-\rho(t)\right\|_{1}\leq 2(\epsilon_{1}+\epsilon_{2}+\epsilon_{3})+2\left(\frac{2\alpha_{p}t^{p+1}}{(p+1)!}\right)^{2}\sum_{i=1}^{r}\frac{|c_{i}|}{k_{i}^{2p}} \tag{107}\]
where
\[\epsilon_{1}=t\max_{v\in[0,1]}\ \left\|\sum_{j=1}^{r}c_{j}e^{i(v-1)(t/k_{j}) \mathrm{Ad}_{H}}(G(tv/k_{j})-H)\right\|, \tag{108}\]
\[\epsilon_{2}=\sum_{\ell=1}^{s}\frac{2B_{\ell}t^{\ell}}{\ell!}\max_{v\in[0,1]}\, \left\|\sum_{j=1}^{r}\frac{c_{j}}{k_{j}^{\ell}}(\mathrm{Ad}_{H})^{\ell-1}e^{i(v -1)(t/k_{j})\mathrm{Ad}_{H}}(G(tv/k_{j})-H)\right\| \tag{101}\]
and
\[\epsilon_{3}=\frac{4t^{s+1}}{(2\pi)^{s}}\max_{v\in[0,1]}\,\sum_{j=1}^{r}\frac{ |c_{j}|}{k_{j}^{s}}\cdot\|(\mathrm{Ad}_{H})^{s}e^{i(v-1)(t/k_{j})\mathrm{Ad}_{ H}}(G(tv/k_{j})-H)\|. \tag{102}\]
Below we choose
\[r=p+1\quad\text{and}\quad s=p. \tag{103}\]
In order to bound \(\epsilon_{1}\) and \(\epsilon_{2}\) we have to use condition Eq. (17), that is,
\[\sum_{j=1}^{r}c_{j}=1\quad\text{and}\quad\sum_{j=1}^{r}\frac{c_{j}}{k_{j}^{q}} =0\quad\text{for }q\in\{p,p+1,\ldots,p+r-2\}. \tag{104}\]
Consider the term \(\epsilon_{1}\). Using Eq. (104) one can replace \(e^{i(v-1)(t/k_{j})\mathrm{Ad}_{H}}(G(tv/k_{j})-H)\) in \(\epsilon_{1}\) by its Taylor series with all terms of order less than \(p+r-1=2p\) omitted. Here the Taylor series are computed with respect to the variable \(t/k_{j}\). The norm of such truncated Taylor series can be bounded using Fact 1. It gives
\[\epsilon_{1}\leq\frac{2t^{2p+1}}{(2p)!}\left(\sum_{i=1}^{p+1}\frac{|c_{i}|}{k_{i}^{2p}}\right)\sum_{a=2}^{d}\alpha_{\text{\small{comm}}}(2p;H,F_{d},\ldots,F_{a};F_{a-1}). \tag{105}\]
To bound \(\epsilon_{2}\) and \(\epsilon_{3}\) we shall need the following generalization of Fact 1.
**Lemma 3**.: _Suppose \(A_{1},\ldots,A_{s},B\) are hermitian operators and \(\ell\geq 0\) is an integer. Consider an operator-valued function_
\[B(t)=(\mathrm{Ad}_{A_{1}})^{\ell}e^{-it\mathrm{Ad}_{A_{1}}}e^{-it\mathrm{Ad}_ {A_{2}}}\cdots e^{-it\mathrm{Ad}_{A_{s}}}(B)\]
_and its Taylor series \(B(t)=\sum_{q=0}^{\infty}B_{q}t^{q}\) at \(t=0\). Define a set of unitary operators_
\[\Gamma(t)=\{e^{-i\tau_{2}A_{2}}\cdots e^{-i\tau_{s}A_{s}}\,|\,0\leq\tau_{2}, \ldots,\tau_{s}\leq t\}.\]
_Then for all \(t\geq 0\)_
\[\left\|B(t)-\sum_{q=0}^{p-1}B_{q}t^{q}\right\|\leq\frac{t^{p}}{p!}\alpha_{ \text{\small{comm}}}(p,\ell;A_{1},\ldots,A_{s};B;t), \tag{106}\]
_with_
\[\alpha_{\text{\small{comm}}}(p,\ell;A_{1},\ldots,A_{s};B;t)=\sum_{ \begin{subarray}{c}j_{1},\ldots,j_{s}\geq 0\\ j_{1}+\ldots+j_{s}=p\end{subarray}}\quad\frac{p!}{j_{1}!\cdots j_{s}!}\quad \max_{U\in\Gamma(t)}\left\|(\mathrm{Ad}_{A_{1}})^{\ell+j_{1}}\hat{U}\prod_{ \gamma=2}^{s}(\mathrm{Ad}_{A_{\gamma}})^{j_{\gamma}}(B)\right\|.\]
_Here \(\hat{U}\) is a linear map such that \(\hat{U}(X)=UXU^{-1}\) for all operators \(X\)._
We postpone the proof of the lemma until the end of this section. Consider the term \(\epsilon_{2}\). Using Eq. (104) one can replace the operator \((\mathrm{Ad}_{H})^{\ell-1}e^{i(v-1)(t/k_{j})\mathrm{Ad}_{H}}(G(tv/k_{j})-H)\) in \(\epsilon_{2}\) by its Taylor series with all terms of order less than \(p+r-1-\ell=2p-\ell\) omitted. The norm of such truncated Taylor series can be bounded using Lemma 3. It gives
\[\epsilon_{2}\leq\sum_{\ell=1}^{p}\frac{2B_{\ell}t^{2p}}{\ell!(2p-\ell)!}\left( \sum_{j=1}^{p+1}\frac{|c_{j}|}{k_{j}^{2p}}\right)\sum_{a=2}^{d}\alpha_{\text {\small{comm}}}(2p-\ell,\ell-1;H,F_{d},\ldots,F_{a};F_{a-1};t/k_{min}) \tag{107}\]
Consider the term \(\epsilon_{3}\) with \(s=p\). The assumption that the base product formula has order \(p\) implies that one can replace the operator \((\mathrm{Ad}_{H})^{p}e^{i(v-1)(t/k_{j})\mathrm{Ad}_{H}}(G(tv/k_{j})-H)\) in \(\epsilon_{3}\) by its Taylor series with all terms of order less than \(p\) omitted. The norm of such truncated Taylor series can be bounded using Lemma 3. It gives
\[\epsilon_{3}\leq\frac{4t^{2p+1}}{(2\pi)^{p}p!}\left(\sum_{j=1}^{p+1}\frac{|c_{ j}|}{k_{j}^{2p}}\right)\sum_{a=2}^{d}\alpha_{\text{\small{comm}}}(p,p;H,F_{d}, \ldots,F_{a};F_{a-1};t/k_{min}). \tag{108}\]
Substituting the bounds on \(\epsilon_{1},\epsilon_{2},\epsilon_{3}\) into Eq. (109) and using the definition of commutator norms \(\beta_{p_{1},p_{2}}\), see Eq. (15,16), completes the proof of Theorem 1.
### Proof of Lemma 3
First let us record the well-known integral representation of truncated Taylor series of the exponential function.
**Lemma 4** (**Taylor's theorem**).: _For any operator \(X\), evolution time \(t\geq 0\), and integer \(m\geq 1\) one has_
\[\sum_{q=m}^{\infty}\frac{1}{q!}(-it{\rm Ad}_{X})^{q}=\left(\int_{0}^{t}d\tau\mu (\tau)e^{-i\tau{\rm Ad}_{X}}\right)\frac{(-it{\rm Ad}_{X})^{m}}{m!} \tag{103}\]
_where \(\mu(\tau)\) is a function satisfying \(\mu(\tau)\geq 0\) and \(\int_{0}^{t}d\tau\mu(\tau)=1\)._
Proof.: Applying Taylor's theorem with integral form of the remainder to the exponential function \(e^{-it{\rm Ad}_{X}}\) gives
\[\sum_{q=m}^{\infty}\frac{1}{q!}(-it{\rm Ad}_{X})^{q}=\left(mt^{-m}\int_{0}^{t}d\tau(t-\tau)^{m-1}e^{-i\tau{\rm Ad}_{X}}\right)\frac{(-it{\rm Ad}_{X})^{m}}{m!}\]
This is the desired representation with \(\mu(\tau)=mt^{-m}(t-\tau)^{m-1}\).
Proof of Lemma 3.: First let us introduce some notation. Below we consider \(s\)-tuples of non-negative integers \(J=(j_{1},\ldots,j_{s})\) satisfying \(\sum_{\gamma=1}^{s}j_{\gamma}\geq p\). Let \(\Omega_{p}\) be the set of all such tuples. For any \(J\in\Omega_{p}\) let \(m(J)\) be the largest integer in the range \(\{1,2,\ldots,s\}\) such that \(\sum_{\gamma=m(J)}^{s}j_{\gamma}\geq p\). Note that \(j_{m(J)}\geq 1\) and \(\sum_{\gamma=m(J)+1}^{s}j_{\gamma}\leq p-1\). We use the shorthand \(J!\equiv\prod_{\gamma=1}^{s}(j_{\gamma})!\).
We need an upper bound on the norm of the remainder operator \(R(t)=B(t)-\sum_{q=0}^{p-1}B_{q}t^{q}\). Expanding each exponential function that appears in \(B(t)\) in Taylor series gives
\[R(t)=\sum_{m=1}^{s}R_{m}(t),\]
where
\[R_{m}(t)=({\rm Ad}_{A_{1}})^{\ell}\sum_{J\in\Omega_{p}\,:\,m(J)=m}\ \frac{1}{J!}(-it{\rm Ad}_{A_{1}})^{j_{1}}(-it{\rm Ad}_{A_{2}})^{j_{2}}\cdots( -it{\rm Ad}_{A_{s}})^{j_{s}}(B).\]
The sum over \(J\in\Omega_{p}\) with \(m(J)=m\) can be rewritten as
\[\sum_{J\in\Omega_{p}\,:\,m(J)=m}=\sum_{j_{1}=0}^{\infty}\cdots\sum_{j_{m-1}=0 }^{\infty}\sum_{\begin{subarray}{c}j_{m+1},\ldots,j_{s}\geq 0\\ j_{m+1}+\cdots+j_{s}\leq p-1\end{subarray}}\sum_{j_{m}=p-j_{m+1}-\ldots-j_{s}}^ {\infty}.\]
Thus
\[R_{m}(t)=({\rm Ad}_{A_{1}})^{\ell}e^{-it{\rm Ad}_{A_{1}}}\cdots e^{-it{\rm Ad} _{A_{m-1}}}\sum_{\begin{subarray}{c}j_{m+1},\ldots,j_{s}\geq 0\\ j_{m+1}+\ldots+j_{s}\leq p-1\end{subarray}}\sum_{j_{m}=p-j_{m+1}-\ldots-j_{s}}^ {\infty}\frac{1}{j_{m}!\cdots j_{s}!}(-it{\rm Ad}_{A_{m}})^{j_{m}}(C_{j_{m+1},\ldots,j_{s}})\]
where
\[C_{j_{m+1},\ldots,j_{s}}=(-it{\rm Ad}_{A_{m+1}})^{j_{m+1}}\cdots(-it{\rm Ad}_{ A_{s}})^{j_{s}}(B).\]
Applying Lemma 4 to compute the sum over \(j_{m}\) gives
\[\sum_{j_{m}=p-j_{m+1}-\ldots-j_{s}}^{\infty}\ \frac{1}{j_{m}!}(-it{\rm Ad}_{A_{m}})^{j_{m}}=\int_{0}^{t}d\tau\mu(\tau)e^{-i\tau{\rm Ad}_{A_{m}}}\frac{(-it{\rm Ad}_{A_{m}})^{p-j_{m+1}-\ldots-j_{s}}}{(p-j_{m+1}-\ldots-j_{s})!},\]
where \(\mu(\tau)\geq 0\) is a normalized probability measure. Plugging this into the formula for \(R_{m}(t)\) one gets
\[R_{m}(t)=(-it)^{p}\int_{0}^{t}d\tau\mu(\tau)({\rm Ad}_{A_{1}})^{\ell}e^{-it{\rm Ad}_{A_{1}}}\cdots e^{-it{\rm Ad}_{A_{m-1}}}e^{-i\tau{\rm Ad}_{A_{m}}}\sum_{\begin{subarray}{c}j_{1},\ldots,j_{s}\geq 0\\ j_{1}+\ldots+j_{s}=p\\ j_{1}=\ldots=j_{m-1}=0,\ j_{m}\geq 1\end{subarray}}\frac{1}{J!}({\rm Ad}_{A_{m}})^{j_{m}}\cdots({\rm Ad}_{A_{s}})^{j_{s}}(B).\]
Since \(e^{-it\operatorname{Ad}_{A_{1}}}\) commutes with \((\operatorname{Ad}_{A_{1}})^{\ell}\) and \(\|e^{-it\operatorname{Ad}_{A_{1}}}(X)\|=\|X\|\) for any operator \(X\), we get
\[\|R_{m}(t)\|\leq t^{p}\sum_{\begin{subarray}{c}j_{1},\ldots,j_{s}\geq 0\\ j_{1}+\cdots+j_{s}=p\\ j_{1}=\ldots=j_{m-1}=0\end{subarray}}\frac{1}{J!}\max_{U\in\Gamma(t)}\,\,\Big{\|}(\operatorname{Ad}_{A_{1}})^{\ell+j_{1}}\hat{U}(\operatorname{Ad}_{A_{2}})^{j_{2}}\cdots(\operatorname{Ad}_{A_{s}})^{j_{s}}(B)\Big{\|}\,.\]
The triangle inequality \(\|R(t)\|\leq\sum_{m=1}^{s}\|R_{m}(t)\|\) then gives the desired bound.
|
2306.05015 | Existence of principal values of some singular integrals on Cantor sets,
and Hausdorff dimension | Consider a standard Cantor set in the plane of Hausdorff dimension 1. If the
linear density of the associated measure $\mu$ vanishes, then the set of points
where the principal value of the Cauchy singular integral of $\mu$ exists has
Hausdorff dimension 1. The result is extended to Cantor sets in $\mathbb{R}^d$
of Hausdorff dimension $\alpha$ and Riesz singular integrals of homogeneity
$-\alpha$, 0 < $\alpha$ < d : the set of points where the principal value of
the Riesz singular integral of $\mu$ exists has Hausdorff dimension $\alpha$. A
martingale associated with the singular integral is introduced to support the
proof. | J. CufÃ, J. J. Donaire, P. Mattila, J. Verdera | 2023-06-08T08:10:02Z | http://arxiv.org/abs/2306.05015v2 | # Existence of principal values of some singular integrals on Cantor sets, and Hausdorff dimension
###### Abstract.
Consider a standard Cantor set in the plane of Hausdorff dimension \(1.\) If the linear density of the associated measure \(\mu\) vanishes, then the set of points where the principal value of the Cauchy singular integral of \(\mu\) exists has Hausdorff dimension \(1.\) The result is extended to Cantor sets in \(\mathbb{R}^{d}\) of Hausdorff dimension \(\alpha\) and Riesz singular integrals of homogeneity \(-\alpha,\)\(0<\alpha<d:\) the set of points where the principal value of the Riesz singular integral of \(\mu\) exists has Hausdorff dimension \(\alpha.\) A martingale associated with the singular integral is introduced to support the proof.
**AMS 2020 Mathematics Subject Classification:** 42B20 (primary); 30E20 (secondary), 60F17 (secondary)
**Keywords:** Cauchy singular integral, Riesz singular integral, Cantor set, Hausdorff dimension, martingale.
## 1. Introduction
Our main result deals with the Cauchy singular integral on Cantor sets in the plane and the proof extends with minor variations to the Riesz transforms in \(\mathbb{R}^{d}.\) We first proceed to formulate the result for the Cauchy integral and then for the Riesz transforms.
The appropriate Cantor sets for the Cauchy integral are defined as follows. Let \((\lambda_{n})_{n=1}^{\infty}\) be a sequence of real numbers satisfying \(\frac{1}{4}\leq\lambda_{n}\leq\lambda<\frac{1}{2}.\) Let \(Q_{0}:=[0,1]\times[0,1]\) be the unit square. Take the \(4\) squares contained in \(Q_{0}\) with sides of length \(\lambda_{1}\) parallel to the coordinate axes having a vertex in common with \(Q_{0}\) (the \(4\) "corner squares" of side length \(\lambda_{1}\)). Repeat in each of these \(4\) squares the same procedure with the dilation factor \(\lambda_{1}\) replaced by \(\lambda_{2}\) to get \(16\) squares of side length \(\lambda_{1}\lambda_{2}.\) Proceeding inductively we obtain at the \(n\)-th step \(4^{n}\) squares \(Q_{j}^{n},\ 1\leq j\leq 4^{n},\) of side length \(s_{n}=\lambda_{1}\cdots\lambda_{n}.\) Our Cantor set is
\[K=\bigcap_{n=1}^{\infty}\bigcup_{j=1}^{4^{n}}Q_{j}^{n}.\]
Let \(\mu\) be the Borel probability measure on \(K\) with \(\mu(Q_{j}^{n})=4^{-n}\) and denote by \(a_{n}\) the linear density at generation \(n,\) that is,
\[a_{n}=\frac{1}{4^{n}s_{n}}=\frac{\mu(Q_{j}^{n})}{s_{n}}\leq 1.\]
Set \(\mathcal{D}_{n}=\{Q_{j}^{n}:j=1,\ldots,4^{n}\}\) and \(\mathcal{D}=\cup_{n=1}^{\infty}\mathcal{D}_{n}.\)
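The construction is easy to transcribe numerically; the following sketch (with a hypothetical sequence \(\lambda_{n}\in[1/4,1/2)\) for which \(a_{n}\to 0\)) generates the centers of the squares \(Q_{j}^{n}\) and the density \(a_{n}.\)

```python
import numpy as np

def cantor_centers(lams):
    """Centers of the 4^n corner squares after dilation factors lams = (l_1,...,l_n)."""
    centers = np.array([0.5 + 0.5j])            # center of Q_0 = [0,1]^2
    side = 1.0
    for lam in lams:
        new_side = side * lam
        off = (side - new_side) / 2             # parent center -> corner-square center
        shifts = np.array([off * (dx + 1j * dy) for dx in (-1, 1) for dy in (-1, 1)])
        centers = (centers[:, None] + shifts[None, :]).ravel()
        side = new_side
    return centers, side

lams = [0.25 * (1 + 1 / (k + 1)) for k in range(1, 7)]   # hypothetical lambda_n
centers, s_n = cantor_centers(lams)
n = len(lams)
a_n = 1.0 / (4 ** n * s_n)      # linear density mu(Q_j^n) / s_n at generation n
print(len(centers), s_n, a_n)   # 4^n squares, their side length, and a_n
```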
We then have
**Theorem 1.1**.: _If \(\lim_{n\to\infty}a_{n}=0\), then the set of points \(z\in K\) for which the principal value_
\[\lim_{\varepsilon\to 0}\int_{|w-z|>\varepsilon}\frac{1}{w-z}\,d\mu w \tag{1.1}\]
_exists has Hausdorff dimension 1._
This solves a problem posed in [CPV, Open problem 5.5, p.1621].
If \(a_{n}=1\) for all \(n\), then \(K\) is the famous Garnett-Ivanov Cantor set, which has positive and finite one-dimensional Hausdorff measure. In this case it was noticed in [CPV] that the principal value does not exist at any point of \(K\). If \(a_{n}\to 0\), then the Hausdorff dimension of \(K\) is greater than or equal to 1 and it has non-sigma finite one-dimensional Hausdorff measure. If in addition \(\sum_{n}a_{n}^{2}<\infty\), then the principal value exists \(\mu\) almost everywhere. So Theorem 1.1 is relevant only when \(a_{n}\to 0\) slowly. That the condition \(\sum_{n}a_{n}^{2}<\infty\) implies the almost everywhere existence of principal values can be seen in two ways. First, we introduce a martingale \((S_{n})_{n=0}^{\infty}\) (see (2.1) below) and show that the increments \(|S_{n+1}(x)-S_{n}(x)|\) are bounded by \(C\,a_{n}\), with the constant \(C\) independent of \(n\) and \(x.\) In Lemma 2.3 we prove that for any point \(x\) the principal value exists at \(x\) if and only \((S_{n}(x))_{n=0}^{\infty}\) converges. If \(\sum_{n}a_{n}^{2}<\infty\), then \(S_{n}\) is an \(L^{2}\) martingale and consequently it converges almost everywhere. Alternatively, the condition \(\sum_{n}a_{n}^{2}<\infty\) implies that the Cauchy singular integral operator is bounded in \(L^{2}(\mu)\). In [MV] it was shown in a very general setting that \(L^{2}\) boundedness together with zero density of the measure yields the almost everywhere existence of principal values.
The main argument in the proof of Theorem 1.1 deals with the case where \(\sum_{n}a_{n}^{2}=\infty\). It is a variation of a line of reasoning used in other situations (see [DLN] and the references there). We use a stopping time argument to show that \((S_{n}(x))_{n=0}^{\infty}\) converges to 0 in a set of Hausdorff dimension 1 (indeed, given any complex number \(z_{0}\) the martingale \((S_{n}(x))_{n=0}^{\infty}\) converges to \(z_{0}\) in a set of Hausdorff dimension 1). We get the dimension 1 conclusion by applying a lemma of Hungerford [H]. For the sake of the reader we present a proof of Hungerford's lemma in our context in section 4.
Our proof extends with only technical modifications to cover the case of other odd kernels, for instance,
\[\frac{\overline{z}^{n}}{z^{m+1}},\quad m=1,2,\ldots\]
But one of the ingredients of our method fails for the odd kernel \(\frac{z+\overline{z}}{z^{2}}\) and we do not know whether Theorem 1.1 holds in this case. The difficulty is indicated at the fifth line after the statement of Lemma 3.1.
In \(\mathbb{R}^{d}\) our proof works for the Riesz transforms of any homogeneity \(-\alpha,\ 0<\alpha<d\). These are the singular integrals with kernel
\[R^{\alpha}(x)=\frac{x}{|x|^{1+\alpha}},\quad 0<\alpha<d.\]
The appropriate Cantor sets for the \(\alpha\)-Riesz transform are those of Hausdorff dimension \(\alpha.\) They are constructed by the procedure outlined before in the planar case
with dilation factors that satisfy \(2^{-\frac{d}{\alpha}}\leq\lambda_{n}\leq\lambda<2^{-1}.\) At generation \(n\) one has \(2^{dn}\) cubes \(Q_{j}^{n}\) of side length \(s_{n}=\lambda_{1}\cdots\lambda_{n}.\) The Cantor set is defined by
\[K=\bigcap_{n=1}^{\infty}\bigcup_{j=1}^{2^{dn}}Q_{j}^{n}\]
and the canonical measure on \(K\) by \(\mu(Q_{j}^{n})=2^{-dn},\ 1\leq j\leq 2^{dn}.\) The \(\alpha\) density is \(a_{n}=2^{-dn}s_{n}^{-\alpha}=\mu(Q_{j}^{n})s_{n}^{-\alpha}\leq 1.\) For \(\lambda_{n}=2^{-\frac{d}{\alpha}},\ n=1,2\dots,\) one gets the self similar Cantor set of dimension \(\alpha.\) If \(a_{n}\to 0\) then our Cantor set has Hausdorff dimension \(\geq\alpha\) and non \(\sigma\) finite Hausdorff \(\alpha\)-dimensional measure. One has
**Theorem 1.2**.: _If \(\lim_{n\to\infty}a_{n}=0\), then the set of points \(x\in K\) for which the principal value_
\[\lim_{\varepsilon\to 0}\int_{|y-x|>\varepsilon}R^{\alpha}(y-x)\,d\mu y \tag{1.2}\]
_exists has Hausdorff dimension \(\alpha.\)_
In section 5 we give some indications on how to adapt the proof for the Cauchy kernel to the Riesz transforms in higher dimensions.
We let \(d(A)\) denote the diameter and \(\dim A\) the Hausdorff dimension of a set \(A\). We use the notation \(a\lesssim b\) to mean that \(a\leq Cb\) for some absolute constant \(C\), and \(a\sim b\) for \(a\lesssim b\) and \(b\lesssim a\).
## 2. Martingales
Let \(C\) be the Cauchy kernel, \(C(x)=1/x\) for \(x\in\mathbb{C},x\neq 0.\) For each \(x\in K\) let \(Q_{n}(x)\) be the square in \(\mathcal{D}_{n}\) containing \(x.\) Define the truncated Cauchy Integral at generation \(n\) as
\[T_{n}(x)=\int_{K\setminus Q_{n}(x)}C(x-y)\,d\mu y,\quad x\in K,\]
and a martingale \((S_{n}(x))_{n=0}^{\infty}\) by
\[S_{n}(x)=S_{Q_{n}(x)}=\fint_{Q_{n}(x)}T_{n}\,d\mu,\quad x\in K. \tag{2.1}\]
We shall prove
**Theorem 2.1**.: _If \(\lim_{n\to\infty}a_{n}=0\), then the set of points \(x\in K\) for which \((S_{n}(x))_{n=0}^{\infty}\) converges has Hausdorff dimension 1._
We first show that the martingale (2.1) has uniformly bounded increments.
**Lemma 2.2**.: _There exists a positive constant \(C\) such that_
\[|S_{n+1}(x)-S_{n}(x)|\ \leq Ca_{n},\quad n=0,1,\dots\quad\text{and}\quad x\in K. \tag{2.2}\]
Thus if \(\sum_{n}a_{n}\) converges, \((S_{n}(x))_{n=0}^{\infty}\) converges for all \(x\in K\). As mentioned in the introduction, even the weaker condition \(\sum_{n}a_{n}^{2}<\infty\) implies that \((S_{n}(x))_{n=0}^{\infty}\) converges for \(\mu\) almost all \(x\in K\). Hence we shall assume that \(\sum_{n}a_{n}^{2}=\infty\). Under
this assumption one proves in [CPV] that principal values do not exist for \(\mu\) almost all \(x.\) In Lemma 2.3 below we show that principal values exist if and only if the martingale converges. Hence \((S_{n}(x))_{n=0}^{\infty}\) is not convergent for \(\mu\) almost all \(x\in K.\) By a standard result in martingale theory (see, for example, [S, Corollary 6, p.561]) we get
\[\limsup_{n\to\infty}|S_{n}(x)-S_{m}(x)|=\infty,\quad\text{for all}\quad m=0,1, \dots\quad\text{and}\quad\mu\text{ a.e.} \tag{2.3}\]
Proof of Lemma 2.2.: Set \(Q_{n}=Q_{n}(x).\) Then
\[S_{n+1}(x)-S_{n}(x)=\] \[\fint_{Q_{n+1}}\int_{K\setminus Q_{n+1}}C(z-y)\,d\mu y\,d\mu z- \fint_{Q_{n}}\int_{K\setminus Q_{n}}C(w-y)\,d\mu y\,d\mu w=\] \[\int_{K\setminus Q_{n}}(\fint_{Q_{n+1}}C(z-y)\,d\mu z-\fint_{Q_{n }}C(w-y)\,d\mu w)\,d\mu y\] \[+\int_{Q_{n}\setminus Q_{n+1}}\fint_{Q_{n+1}}C(z-y)\,d\mu z\,d \mu y.\]
The last double integral is \(\lesssim a_{n}\). For \(y\in K\setminus Q_{n},\) there are, by continuity, \(z(y)\in Q_{n+1},w(y)\in Q_{n}\) such that
\[\fint_{Q_{n+1}}C(z-y)\,d\mu z-\fint_{Q_{n}}C(w-y)\,d\mu w=C(z(y)-y)-C(w(y)-y).\]
Here
\[|C(z(y)-y)-C(w(y)-y)|=\left|\frac{z(y)-w(y)}{(z(y)-y)(w(y)-y)}\right|\lesssim s _{n}|x-y|^{-2}.\]
Setting \(R_{j}=Q_{j}\setminus Q_{j+1},\) the absolute value of the first summand of \(S_{n+1}(x)-S_{n}(x)\) is
\[\lesssim s_{n}\int_{K\setminus Q_{n}}|x-y|^{-2}\,d\mu y\sim s_{n} \sum_{j=0}^{n-1}s_{j}^{-2}\mu(R_{j})\] \[=s_{n}\sum_{j=0}^{n-1}s_{j}^{-2}4^{-j}\lesssim s_{n}s_{n}^{-2}4^ {-n}=a_{n},\]
because \(s_{j}^{-2}4^{-j}\leq(s_{j+1}^{-2}\lambda^{2})4^{-j}=(4\lambda^{2})s_{j+1}^{-2 }4^{-j-1},\) so \(s_{j}^{-2}4^{-j}\lesssim(4\lambda^{2})^{n-j}s_{n}^{-2}4^{-n}.\) Hence \(|S_{n+1}(x)-S_{n}(x)|\lesssim a_{n}.\)
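The bound can be illustrated numerically by discretizing \(\mu\) at a deep generation \(N\) (equal weights \(4^{-N}\) at the generation-\(N\) centers) and averaging the truncated integrals over the nested squares at the corner point of \(K\); the sketch below repeats the hypothetical corner-square construction from Section 1 so as to be self-contained.

```python
import numpy as np

def cantor_centers(lams):
    centers = np.array([0.5 + 0.5j])
    side = 1.0
    for lam in lams:
        new_side = side * lam
        off = (side - new_side) / 2
        shifts = np.array([off * (dx + 1j * dy) for dx in (-1, 1) for dy in (-1, 1)])
        centers = (centers[:, None] + shifts[None, :]).ravel()
        side = new_side
    return centers, side

N = 6
lams = [0.25 * (1 + 1 / (k + 1)) for k in range(1, N + 1)]    # hypothetical lambda_n
pts, _ = cantor_centers(lams)          # generation-N centers, each of mu-mass 4^(-N)
w = 4.0 ** (-N)
a = [1.0 / (4 ** m * np.prod(lams[:m])) for m in range(N + 1)]

def S(n):
    """Approximate S_n at the corner point of K (the index-0 nested squares)."""
    block = 4 ** (N - n)                # deep centers inside Q_n(x)
    inside, outside = pts[:block], pts[block:]
    T = np.array([np.sum(w / (z - outside)) for z in inside])   # T_n at sample points
    return T.mean()                     # average over Q_n(x) approximates S_n

for n in range(1, N):
    print(n, abs(S(n + 1) - S(n)), a[n])   # compare |S_{n+1} - S_n| with a_n, cf. (2.2)
```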
By the following lemma Theorem 2.1 is equivalent with Theorem 1.1.
**Lemma 2.3**.: _For any \(x\in K\), the principal value (1.1) exists if and only if the sequence \((S_{n}(x))_{n=0}^{\infty}\) converges._
Proof.: Let \(0<\varepsilon<1\) and let \(n\) be such that \(d(Q_{n}(x))\leq\varepsilon<d(Q_{n-1}(x))\). As in the proof of (2.2) we have
\[S_{n}(x)=\int_{K\setminus Q_{n}(x)}C(z(y)-y)\,d\mu y\]
for some \(z(y)\in Q_{n}(x)\). Then
\[\left|S_{n}(x)-\int_{|x-y|>\varepsilon}\frac{1}{x-y}\,d\mu y\right|\lesssim \left|\int_{K\setminus Q_{n-1}(x)}\left(C(z(y)-y)-C(x-y)\right)d\mu y\right|\]
\[+\int_{Q_{n-1}\setminus Q_{n}}|C(z(y)-y)|\,\,d\mu y+\int_{Q_{n-1}\setminus B(x,\varepsilon)}\frac{1}{|x-y|}\,d\mu y.\]
The second and third terms are obviously \(\lesssim a_{n}\), and so is the first by the same argument as in the proof of (2.2). The lemma follows from this.
**Remark 2.4**.: Indeed, a consequence of the proof of Lemma 2.3 is that
\[|S_{n}(x)-T_{n}(x)|\leq C\,a_{n},\quad n=0,1,\dots\quad\text{and}\quad x\in K.\]
We proceed now to discuss relative martingales.
For \(x\in R\subset Q,\,Q\in\mathcal{D}_{m},\,R\in\mathcal{D}_{n},\,m<n\), we define the relative martingale starting at \(Q\) as
\[S_{Q,R}(x)=S_{Q,R}=\fint_{R}\int_{Q\setminus R}C(z-y)\,d\mu y\,d\mu z.\]
Then with some absolute constant \(C\),
\[|S_{R}-S_{Q}-S_{Q,R}|\leq C\,a_{m}. \tag{2.4}\]
Indeed, we have
\[S_{R}-S_{Q}=\] \[\fint_{R}\int_{K\setminus R}C(z-y)\,d\mu y\,d\mu z-\fint_{Q}\int_{ K\setminus Q}C(w-y)\,d\mu y\,d\mu w=\] \[\int_{K\setminus Q}\left(\fint_{R}C(z-y)\,d\mu z-\fint_{Q}C(w-y) \,d\mu w\right)\,d\mu y+\int_{Q\setminus R}\fint_{R}C(z-y)\,d\mu z\,d\mu y=\] \[\int_{K\setminus Q}\left(\fint_{R}C(z-y)\,d\mu z-\fint_{Q}C(w-y) \,d\mu w\right)\,d\mu y+S_{Q,R}.\]
The first summand above is bounded in absolute value by a constant times \(a_{m}\) by the same argument as in the proof of (2.2).
As for (2.2) we have for \(R\subset\tilde{R}\subset Q,Q\in\mathcal{D}_{m},\tilde{R}\in\mathcal{D}_{n},R \in\mathcal{D}_{n+1}\),
\[|S_{Q,R}-S_{Q,\tilde{R}}|\leq C\,a_{n}. \tag{2.5}\]
## 3. The stopping time argument
The proof of Theorem 2.1 is based on a stopping time argument for which we need some preliminary facts.
Given a non-zero complex number \(z\) consider the sector \(\sigma(z,\theta),\ 0<\theta<\pi,\) with vertex at \(z\) and aperture \(\theta\) whose axis is the semi-line emanating from \(z\) and passing through \(0.\) That is, \(w\in\sigma(z,\theta)\) if and only if
\[\langle\frac{w-z}{|w-z|},\frac{-z}{|z|}\rangle\geq\cos(\frac{\theta}{2})\]
where \(\langle\cdot,\cdot\rangle\) denotes the scalar product in the plane.
The octants with vertex \(0\) are the eight sectors
\[\sigma_{j}=\{w\in\mathbb{C}:w=|w|e^{i\phi},\ (j-1)\frac{\pi}{4}\leq\phi\leq j \frac{\pi}{4}\},\quad 1\leq j\leq 8.\]
These are the sectors with vertex at the origin, of amplitude \(45^{\circ},\) having an edge on a coordinate axis. It will be convenient to expand these octants so that they keep the same axis but have amplitude \(75^{\circ}.\) In other words, we are adding \(15^{\circ}\) in each direction. Denote the expanded sectors by \(\tilde{\sigma}_{j}.\) The octants with vertex \(z\) are the sectors \(\sigma_{j}(z)=z+\sigma_{j},\ 1\leq j\leq 8,\) and the expanded octants are \(\tilde{\sigma}_{j}(z)=z+\tilde{\sigma}_{j}.\)
We have the following obvious lemma.
**Lemma 3.1**.: _Given any sector \(\sigma\) of vertex \(z\) and amplitude \(120^{\circ}\) there exists an octant with vertex \(z,\) say \(\sigma_{j}(z)\) for some index \(j\) between \(1\) and \(8,\) such that \(\tilde{\sigma}_{j}(z)\subset\sigma.\)_
Consider the symmetries with respect to the coordinate axis and the main diagonal. That is, \(f_{1}(x+iy)=-x+iy,f_{2}(x+iy)=x-iy\) and \(f_{3}(x+iy)=y+ix\) for \(x+iy\in\mathbb{C}\). For any \(j,k=1,\ldots,8,\) by composing two such symmetries we obtain a linear mapping \(f_{j,k}\) that maps the octant \(\sigma_{j}\) onto the octant \(\sigma_{k}\). Observe that \(C(f_{j}(z))=f_{j}(C(z))\) for \(j=1,2,\) and \(C(f_{3}(z))=-f_{3}(C(z))\). It is precisely this last identity that fails for the kernel \((z+\overline{z})/z^{2}.\)
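These identities are immediate to verify, for instance numerically:

```python
import numpy as np

C = lambda z: 1 / z
f1 = lambda z: -np.conj(z)      # reflection in the imaginary axis
f2 = lambda z: np.conj(z)       # reflection in the real axis
f3 = lambda z: 1j * np.conj(z)  # reflection in the main diagonal

z = 0.7 - 1.3j                  # arbitrary test point
print(np.isclose(C(f1(z)), f1(C(z))),     # C(f_1(z)) =  f_1(C(z))
      np.isclose(C(f2(z)), f2(C(z))),     # C(f_2(z)) =  f_2(C(z))
      np.isclose(C(f3(z)), -f3(C(z))))    # C(f_3(z)) = -f_3(C(z))
```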
Let \(Q\in\mathcal{D}\) and let \(c_{Q}\) be its center. Define
\[f_{Q,j,k}(x)=f_{j,k}(x-c_{Q})+c_{Q}\quad x\in Q,\quad j,k=1,\ldots,8.\]
Then
\[S_{Q,f_{Q,j,k}(R)}=\varepsilon_{j,k}\,f_{Q,j,k}(S_{Q,R}),\quad R\subset Q, \quad Q,R\in\mathcal{D}, \tag{3.1}\]
where \(\varepsilon_{j,k}=\pm 1,\) and we conclude that we can transport the value of the relative martingale \(S_{Q,R}\in\sigma_{j}\) at the square \(R\) into \(\pm\) the value \(S_{Q,f_{Q,j,k}(R)}\in\sigma_{k}\) of the relative martingale at the square \(f_{Q,j,k}(R)\) by means of compositions of two of the symmetries considered above. In particular, \(\left|S_{Q,f_{Q,j,k}(R)}\right|=\left|S_{Q,R}\right|.\)
We check (3.1) by the general formula for the image (push-forward) \(f\nu\) of a measure \(\nu\) under an injective map \(f\), see, for example, [M, Theorem 1.9],
\[\int_{f(A)}g\,df\nu=\int_{A}g\circ fd\,\nu.\]
The restriction of \(\mu\) to \(Q\) is invariant under the maps \(f_{Q,j,k}\), that is, \(f_{Q,j,k}(\mu|Q)=\mu|Q\). Hence, since \(Q\setminus f_{Q,j,k}(R)=f_{Q,j,k}(Q\setminus R)\) and \(C(f_{Q,j,k}(z-w))=\varepsilon_{j,k}\,f_{Q,j,k}(C(z-w))\)),
\[\int_{Q\setminus f_{Q,j,k}(R)}\int_{f_{Q,j,k}(R)}C(z-w)\,d\mu z\,d\mu w= \varepsilon_{j,k}\,f_{Q,j,k}\left(\int_{Q\setminus R}\int_{R}C(z-w)\,d\mu z\, d\mu w\right),\]
from which (3.1) follows.
We shall need the following elementary lemma.
**Lemma 3.2**.: _If \(z\in\mathbb{C},w\in\sigma(z,120^{\circ})\) and \(0<|w-z|<|z|/2\), then \(|w|\leq|z|-|w-z|/4\)._
Proof.: Let \(R=|z|,r=|w-z|\) and let \(v\) be the third vertex, in addition to \(0\) and \(z\), of the equilateral triangle containing \(w\). Under the assumptions of the lemma \(|w|\) is maximized when \(w\) lies on the side connecting \(z\) and \(v\). Therefore
\[|w|^{2}\leq(R-r/2)^{2}+(\sqrt{3}r/2)^{2}=r^{2}+R^{2}-rR\leq(R-r/4)^{2}=(|z|-|w -z|/4)^{2}\]
because of the assumption \(r<R/2\).
Proof of Theorem 2.1.: We assume, as we may, that \(\sum_{n}a_{n}^{2}=\infty\). Then for \(\mu\) almost all \(x\) the sequence \((S_{n}(x))_{n=0}^{\infty}\) diverges and (2.3) holds.
Let \(M\) be a big positive integer to be chosen later. We replace \((a_{n})_{n=0}^{\infty}\) by the non-increasing sequence \(b_{n}=C\max_{m\geq n}a_{m}\), where \(C\) is as in inequalities (2.2), (2.4) and (2.5), which now read
\[|S_{n+1}(x)-S_{n}(x)|\ \leq b_{n},\quad n=0,1,\ldots\quad\text{and}\quad x\in K, \tag{3.2}\]
\[|S_{R}-S_{Q}-S_{Q,R}|\leq b_{m},\quad\,Q\in\mathcal{D}_{m},\,R\in\mathcal{D}_{ n},\,R\subset Q, \tag{3.3}\]
and
\[|S_{Q,R}-S_{Q,\tilde{R}}|\leq b_{n},\quad\,Q\in\mathcal{D}_{m},\,R\in \mathcal{D}_{n+1},\,\tilde{R}\in\mathcal{D}_{n},R\subset\tilde{R}\subset Q, \tag{3.4}\]
We plan to define a sequence of stopping time conditions. At each step a family of stopping time squares will arise, which is going to be the family \(\mathcal{F}_{n}\) in Lemma 4.1 (Hungerford's lemma).
The first stopping time condition is
\[|S_{Q}|>M\,b_{0}. \tag{3.5}\]
Declare \(Q\) a stopping time square of first generation if \(Q\) is a square in \(\mathcal{D}\) for which \(|S_{Q}|>M\,b_{0}\) and \(|S_{Q^{\prime}}|\leq M\,b_{0},\ Q\subsetneq Q^{\prime}.\) We call \(\mathcal{F}_{1}\) the set of stopping time squares of first generation. One may think of this as the following process. One takes a point \(x\in K\) and looks at the squares in \(\mathcal{D}\) containing \(x.\) One examines all those squares, starting at \(Q_{0}\), and checks whether condition (3.5) is satisfied. If it is not, then one proceeds to the square containing \(x\) in the next generation. The process stops when one finds a square \(Q\) containing \(x\) for which (3.5) holds. Note that the set of \(x\) for which the process never stops has vanishing \(\mu\) measure by (2.3). Hence \(\sum_{Q\in\mathcal{F}_{1}}\mu(Q)=1.\) Since \(S_{Q_{0}}=0\) by (3.2) it is necessary to descend at least \(M+1\) generations to find the first stopping time square.
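The selection of \(\mathcal{F}_{1}\) is thus a standard stopping time search on the tree of construction squares. The schematic sketch below (with a hypothetical quadtree indexed by tuples of child indices and an abstract function \(S\) standing for \(Q\mapsto S_{Q}\)) only illustrates the mechanism of descending until the threshold is first exceeded.

```python
# Schematic stopping-time search on a quadtree.
# A square is encoded as a tuple of child indices (0..3); () plays the role of Q_0.
# S is an abstract stand-in for the martingale Q -> S_Q.

def first_generation_stopping_squares(S, threshold, max_depth):
    """Return the maximal squares Q with |S(Q)| > threshold and
    |S(Q')| <= threshold for every strict ancestor Q' of Q."""
    stopped = []
    stack = [()]                      # start at the top square
    while stack:
        square = stack.pop()
        if len(square) > max_depth:   # in the proof the descent is unbounded
            continue
        if abs(S(square)) > threshold:
            stopped.append(square)    # stop: do not explore descendants
        else:
            stack.extend(square + (child,) for child in range(4))
    return stopped

# toy example: S(Q) = (depth of Q) * (+1 or -1 depending on the path)
S = lambda q: len(q) * (1 if sum(q) % 2 == 0 else -1)
print(first_generation_stopping_squares(S, threshold=2.5, max_depth=6)[:5])
```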
The second stopping time condition is slightly different. Let \(Q\in\mathcal{F}_{1}\). The second stopping time is performed on the relative martingale associated with \(Q\) and its condition is
\[|S_{Q,R}|>M\,b_{M}. \tag{3.6}\]
A stopping time square \(R\) of second generation satisfies \(|S_{Q,R}|>M\,b_{M}\) and
\[|S_{Q,R^{\prime}}|\leq M\,b_{M},\quad R^{\prime}\in\mathcal{D},\quad R\subsetneq R ^{\prime}\subset Q.\]
By (2.3) and (3.3) the stopping time squares of second generation cover almost all \(Q.\) Again one has to descend through at least \(M+1\) generations to find a stopping time square of second generation. Hence if \(R\) is a stopping time square of second generation and \(R\in\mathcal{D}_{n}\) then \(n\geq 2(M+1).\) We do not put all stopping time squares of second generation in \(\mathcal{F}_{2}(Q).\) We put \(R\) in \(\mathcal{F}_{2}(Q)\) provided \(S_{R}\in\sigma(S_{Q},120^{\circ}).\) That there are many such stopping time squares can be shown as follows. By Lemma 3.1 there is \(j\) with \(1\leq j\leq 8\) such that \(\tilde{\sigma}_{j}(S_{Q})\subset\sigma(S_{Q},120^{\circ}).\) Let \(\alpha\) denote the angle between the vectors \(S_{R}-S_{Q}\) and \(S_{Q,R}\). Then by (3.3),
\[|S_{R}-S_{Q}|\geq|S_{Q,R}|-b_{M}\geq(M-1)\,b_{M}\]
and
\[0\leq|\sin\alpha|\leq\frac{|S_{R}-S_{Q}-S_{Q,R}|}{|S_{R}-S_{Q}|}\leq\frac{b_{ M}}{(M-1)\,b_{M}}=\frac{1}{M-1}<\sin 15^{\circ},\]
provided \(M-1>1/\sin 15^{\circ},\) which we assume. Then \(|\alpha|<15^{\circ},\) because \(\cos\alpha>0.\) Take a stopping time square \(R\) such that \(S_{Q,R}\in\sigma_{j}.\) Then \(S_{R}-S_{Q}\in\tilde{\sigma_{j}}\) and so \(S_{R}\in S_{Q}+\tilde{\sigma_{j}}=\tilde{\sigma_{j}}(S_{Q})\subset\sigma(S_{Q },120^{\circ}).\)
Therefore
\[\sum_{R\in\mathcal{F}_{2}(Q)}\mu(R)\geq\frac{1}{8}\,\mu(Q). \tag{3.7}\]
Define \(\mathcal{F}_{2}=\cup_{Q\in\mathcal{F}_{1}}\mathcal{F}_{2}(Q).\)
Let us obtain some properties of stopping time squares \(R\) in \(\mathcal{F}_{2}(Q).\) Let \(\tilde{R}\) be the father of \(R.\) Then \(\left|S_{Q,\tilde{R}}\right|\leq M\,b_{M}\) and so
\[|S_{\tilde{R}}-S_{Q}|\leq\left|S_{\tilde{R}}-S_{Q}-S_{Q,\tilde{R}}\right|+ \left|S_{Q,\tilde{R}}\right|\leq(M+1)b_{M}\]
and
\[|S_{R}-S_{Q}|\leq|S_{R}-S_{\tilde{R}}|+|S_{\tilde{R}}-S_{Q}|\leq b_{M}+(M+1)b _{M}=(M+2)b_{M}.\]
Also
\[|S_{R}-S_{Q}|\geq|S_{Q,R}|-|S_{R}-S_{Q}-S_{Q,R}|\geq M\,b_{M}-b_{M}=(M-1)b_{M}\]
Now two possibilities appear.
If \(|S_{Q}|\leq 2|S_{R}-S_{Q}|\leq 2(M+2)b_{M},\) then
\[|S_{R}|\leq|S_{R}-S_{Q}|+|S_{Q}|\leq 3(M+2)b_{M}. \tag{3.8}\]
If \(|S_{Q}|>2|S_{R}-S_{Q}|,\) since \(S_{R}\in\sigma(S_{Q},120^{\circ})\) we can apply Lemma 3.2 to get
\[|S_{R}|\leq|S_{Q}|-|S_{R}-S_{Q}|/4\leq|S_{Q}|-(M-1)b_{M}/4\leq|S_{Q}|-b_{M} \tag{3.9}\]
provided \(M\geq 5.\)
We can proceed to define inductively \(\mathcal{F}_{n}\) for \(n\geq 3.\) Assume that we have defined \(\mathcal{F}_{n-1}=\cup_{Q\in\mathcal{F}_{n-2}}\mathcal{F}_{n-1}(Q).\) Given \(Q\in\mathcal{F}_{n-1}\) we set the \(n\)-th generation stopping time condition in the relative martingale associated with \(Q\) as
\[|S_{Q,R}|>Mb_{(n-1)M}\]
If \(R\) is a stopping time square of \(n\)-th generation then besides the previous inequality one has
\[|S_{Q,R^{\prime}}|\leq Mb_{(n-1)M},\quad R^{\prime}\in\mathcal{D},\quad R \varsubsetneq R^{\prime}\subset Q,\]
whence
\[|S_{R^{\prime}}-S_{Q}|\leq|S_{Q,R^{\prime}}|+b_{(n-1)M}\leq(M+1)b_{(n-1)M}. \tag{3.10}\]
Note that if \(R\) is a stopping time square of generation \(n,\) we can take advantage of the symmetries of \(Q\) to find another one, say \(R^{\prime},\) of the same size with the additional property that \(S_{R^{\prime}}\in\sigma(S_{Q},120^{\circ}).\) Define \(\mathcal{F}_{n}(Q)\) as the stopping time squares \(R\) of generation \(n\) such that \(S_{R}\in\sigma(S_{Q},120^{\circ})\) and \(\mathcal{F}_{n}=\cup_{Q\in\mathcal{F}_{n-1}}\mathcal{F}_{n}(Q).\) We then have
\[\sum_{R\in\mathcal{F}_{n}(Q)}\mu(R)\geq\frac{1}{8}\,\mu(Q). \tag{3.11}\]
Given \(R\in\mathcal{F}_{n}(Q),\) we have as before two possibilities: either
\[|S_{R}|\leq 3(M+2)b_{(n-1)M} \tag{3.12}\]
or
\[|S_{R}|\leq|S_{Q}|-b_{(n-1)M} \tag{3.13}\]
Set \(F=\bigcap_{n=1}^{\infty}\bigcup_{Q\in\mathcal{F}_{n}}Q.\) To complete the proof we shall show that the hypotheses of Hungerford's Lemma 4.1 are fulfilled and that
\[\lim_{m\to\infty}S_{m}(x)=0,\quad x\in F. \tag{3.14}\]
For (b) in Hungerford's Lemma 4.1 recall that each stopping time square has descended at least \(M+1\) generations from the generating square in the previous family. Then one has (b) with \(\varepsilon\) replaced by \(\frac{1}{4^{M}}\) and taking \(M\) big enough one has \(\frac{1}{4^{M}}<\varepsilon.\) Condition (c) with \(c=\frac{1}{8}\) is (3.11).
To prove (3.14), take \(x\in F\). For every \(n=1,2,\dots,\) there is a unique \(Q_{n}\in\mathcal{F}_{n}\) such that \(x\in Q_{n}.\) Let \(m_{n}\) be the unique positive integer satisfying \(Q_{n}\in\mathcal{D}_{m_{n}}.\) Clearly the sequence \(m_{n}\) is increasing and \(m_{n}>M\,n.\) Since \(S_{Q_{n}}=S_{m_{n}}(x)\) we have by (3.12) and (3.13) two possibilities:
\[|S_{m_{n}}(x)|\leq 3(M+2)b_{(n-1)M}. \tag{3.15}\]
or
\[|S_{m_{n}}(x)|\leq|S_{m_{n-1}}(x)|-b_{(n-1)M},\quad n=1,2,\dots \tag{3.16}\]
For \(m_{n-1}<m<m_{n}\) we have by (3.10)
\[\left|S_{m}(x)-S_{m_{n-1}}(x)\right|\leq(M+1)b_{(n-1)M}. \tag{3.17}\]
To conclude that \(\lim_{m\to\infty}S_{m}(x)=0\) it is enough to show that \(\lim_{n\to\infty}S_{m_{n}}(x)=0.\)
We say that \(n\in\mathcal{N}_{1}\) if (3.16) holds, and \(n\in\mathcal{N}_{2}\) if (3.15) holds and (3.16) fails. As \(\sum_{n}b_{n}\) diverges and \((b_{n})_{n=1}^{\infty}\) is non-increasing, \(\sum_{n}b_{(n-1)M}\) diverges as well. It follows that (3.16) cannot hold for infinitely many consecutive \(n\), whence \(\mathcal{N}_{2}\) is infinite.
Let \(n\in\mathcal{N}_{2}\) and let \(N>n\) be such that \(k\in\mathcal{N}_{1}\) for all \(n<k<N\). Then by (3.16) and (3.15) for \(n<k<N\),
\[|S_{m_{k}}(x)|\leq|S_{m_{n}}(x)|\leq 3(M+2)b_{(n-1)M}.\]
Since \(\mathcal{N}_{2}\) is infinite and \(b_{n}\to 0\), this shows that \(\lim_{n\to\infty}S_{m_{n}}(x)=0\), and then (3.17) gives \(\lim_{m\to\infty}S_{m}(x)=0\).
## 4. Appendix 1: a lemma on Hausdorff dimension
Let \(\mu\) be the canonical measure associated with a Cantor set, as defined in the Introduction. Recall that \(\mathcal{D}_{n}\) is the set of all squares \(Q_{j}^{n},\ 1\leq j\leq 2^{n}\) appearing at the \(n\)-th generation of the construction and \(\mathcal{D}=\cup_{n}\mathcal{D}_{n}\).
The following lemma is due to Hungerford, who worked in a one dimensional context; see [H].
**Lemma 4.1**.: _Let \(0<\varepsilon<c<1\) and let \(\mathcal{F}_{n}\) be a disjoint family of squares in \(\mathcal{D}\), for \(n=0,1,2,\dots\), satisfying the following._
* \(\mathcal{F}_{0}=\{Q_{0}\},\)__
* _if_ \(Q\in\mathcal{F}_{n+1}\)_, then there exists_ \(\tilde{Q}\in\mathcal{F}_{n}\) _with_ \(Q\subset\tilde{Q}\) _and_ \(\mu(Q)\leq\varepsilon\mu(\tilde{Q})\)_,_
* _if_ \(Q\in\mathcal{F}_{n}\)_, then_ \[\sum_{R\subset Q,\,R\in\mathcal{F}_{n+1}}\mu(R)\geq c\,\mu(Q).\]
_Let \(E=\cap_{n}\cup_{Q\in\mathcal{F}_{n}}Q\). Then_
\[\dim E\geq 1-\log c/\log\varepsilon.\]
Proof.: Set \(\beta=1-\log c/\log\varepsilon\). We will construct a Borel probability measure \(\nu\) with \(\nu(E)=1\) such that for some constant \(C\) and for all balls \(B(x,r)\) centred at \(x\) of radius \(r\) one has
\[\nu(B(x,r))\leq Cr^{\beta}\text{ for }x\in E,0<r\leq 1. \tag{4.1}\]
Then Frostman's lemma will give the result.
Let us define functions \(\nu_{n}:\mathcal{F}_{n}\to\mathbb{R}\), \(n=0,1,2,\dots\), setting first \(\nu_{0}(Q_{0})=1\). Suppose that \(\nu_{1},\dots,\nu_{n-1}\) have been defined and set, for \(Q\in\mathcal{F}_{n}\) with \(\tilde{Q}\) as in (b),
\[\nu_{n}(Q)=\frac{\nu_{n-1}(\tilde{Q})}{\sum_{R\in\mathcal{F}_{n},R\subset \tilde{Q}}\mu(R)}\mu(Q).\]
Then we define the Borel measures \(\nu_{n}\) setting
\[\nu_{n}(A)=\sum_{Q\in\mathcal{F}_{n}}\frac{\nu_{n}(Q)}{\mu(Q)}\mu(A\cap Q)\text { for }A\subset\mathbb{C}.\]
Then for \(Q\in\mathcal{F}_{n},\)
\[\nu_{n+1}(Q)=\sum_{R\in\mathcal{F}_{n+1},R\subset Q}\nu_{n+1}(R)=\] \[\sum_{R\in\mathcal{F}_{n+1},R\subset Q}\frac{\nu_{n}(Q)}{\sum_{P \in\mathcal{F}_{n+1},P\subset Q}\mu(P)}\mu(R)=\nu_{n}(Q).\]
Iterating this we have
\[\nu_{m}(Q)=\nu_{n}(Q)\text{ for }Q\in\mathcal{F}_{n},m>n. \tag{4.2}\]
In particular, each \(\nu_{n}\) is a probability measure and some subsequence of \((\nu_{n})\) converges weakly to a probability measure \(\nu\) such that \(\nu(Q)=\nu_{n}(Q)\) for \(Q\in\mathcal{D}_{n}.\)
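The redistribution defining \(\nu_{n}\) can be illustrated on a finite tree. In the hypothetical sketch below the families \(\mathcal{F}_{n}\) are small sets of nodes of a quadtree, a node of depth \(m\) is given \(\mu\)-mass \(4^{-m}\), and the mass of a node of \(\mathcal{F}_{n-1}\) is split among its descendants in \(\mathcal{F}_{n}\) proportionally to \(\mu\); the only point being checked is that each \(\nu_{n}\) has total mass \(1\), as asserted above.

```python
# Toy illustration of the measures nu_n of Lemma 4.1 on a quadtree.
# Nodes are tuples of child indices; mu(node) = 4 ** (-depth).

mu = lambda node: 4.0 ** (-len(node))

def is_descendant(node, ancestor):
    return node[:len(ancestor)] == ancestor

def redistribute(nu_prev, family_prev, family_next):
    """nu_n(Q) = nu_{n-1}(Q~) * mu(Q) / (sum of mu over F_n-squares inside Q~)."""
    nu_next = {}
    for q_tilde in family_prev:
        children = [q for q in family_next if is_descendant(q, q_tilde)]
        total = sum(mu(q) for q in children)
        for q in children:
            nu_next[q] = nu_prev[q_tilde] * mu(q) / total
    return nu_next

# a hypothetical choice of families: F_0 = {root}, then two further generations
F0 = [()]
F1 = [(0, 1), (0, 2), (3, 3)]                                   # three squares of depth 2
F2 = [(0, 1, 0, 0), (0, 2, 1, 3), (3, 3, 2, 2), (3, 3, 0, 1)]   # squares of depth 4

nu0 = {(): 1.0}
nu1 = redistribute(nu0, F0, F1)
nu2 = redistribute(nu1, F1, F2)
for name, nu in [("nu_1", nu1), ("nu_2", nu2)]:
    print(name, "total mass =", round(sum(nu.values()), 12))
```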
Since
\[\nu(\bigcup_{Q\in\mathcal{F}_{n}}Q)=\sum_{Q\in\mathcal{F}_{n}}\nu(Q)=\sum_{Q \in\mathcal{F}_{n}}\nu_{n}(Q)=1,\]
we have \(\nu(E)=1\). Therefore \(\nu(\mathbb{C}\setminus\cup_{Q\in\mathcal{F}_{n}}Q)=0\) for every \(n,\) so
\[\nu(Q)=\sum_{R\subset Q,R\in\mathcal{F}_{n+1}}\nu(R),\quad Q\in\mathcal{F}_{n}. \tag{4.3}\]
It remains to verify (4.1). First of all we have by condition (c) for \(Q\in\mathcal{F}_{n},n\geq 2,\)
\[\frac{\nu(Q)}{\mu(Q)}=\frac{\nu_{n}(Q)}{\mu(Q)}=\frac{\nu_{n-1}(\tilde{Q})}{ \sum_{R\subset\tilde{Q},R\in\mathcal{F}_{n}}\mu(R)}\leq\frac{\nu(\tilde{Q})}{ c\mu(\tilde{Q})},\]
and by induction,
\[\frac{\nu(Q)}{\mu(Q)}\leq c^{-n}\text{ for }Q\in\mathcal{F}_{n},n=1,2\dots. \tag{4.4}\]
Now let us prove that
\[\nu(Q)\leq Cd(Q)^{\beta}\text{ for }Q\in\mathcal{D}. \tag{4.5}\]
Take \(n\) such that \(\varepsilon^{n+1}\leq\mu(Q)<\varepsilon^{n}\). We may assume that \(\nu(Q)>0\). Then \(Q\) intersects a square \(R\) in the family \(\mathcal{F}_{n+1}.\) Since by (b) \(\mu(R)\leq\varepsilon^{n+1}\leq\mu(Q),\) one has \(R\subset Q.\) We have, by (4.3) and (4.4),
\[\nu(Q)=\sum_{R\subset Q,R\in\mathcal{F}_{n+1}}\nu(R)\leq c^{-n-1}\sum_{R \subset Q,R\in\mathcal{F}_{n+1}}\mu(R)\leq c^{-n-1}\mu(Q).\]
Since \(\mu(Q)\leq d(Q)\) it is enough to show that \(c^{-n}\mu(Q)\leq\mu(Q)^{\beta}\) which is
\[c^{-n}\leq\mu(Q)^{-\log c/\log\varepsilon},\]
that is,
\[-n\log c\leq-(\log c/\log\varepsilon)\log\mu(Q),\]
or \(n\leq\log\mu(Q)/\log\varepsilon,\) which is a consequence of \(\mu(Q)\leq\varepsilon^{n}.\)
To finish, let \(x\in E\) and \(0<r\leq 1\). For some \(n\), \(x\) belongs to a square \(Q\in\mathcal{F}_{n}\) with \(d(Q)/4\leq r\leq d(Q)\). Assume that \(Q\in\mathcal{D}_{m}.\) Then \(B(x,r)\) can meet at most 16 squares of \(\mathcal{D}_{m},\) so by (4.5), \(\nu(B(x,r))\leq 16\nu(Q)\leq 16Cd(Q)^{\beta}\leq 4^{\beta+2}Cr^{\beta}\) and (4.1) follows.
## 5. Appendix 2: the Riesz transforms in \(\mathbb{R}^{d}.\)
We first slightly modify the argument in [CPV] to show that \(\sum_{n=1}^{\infty}a_{n}^{2}=\infty\) yields divergence a.e. of the martingale. If the martingale converges in a set of positive measure, then also the principal values of the Riesz transform exist in a set \(E\) of positive measure, by the analog of Lemma 2.3. By a result of Tolsa [T, Theorem 8.13] we find a set \(F\subset E\) of positive measure on which the singular Riesz transform operator is bounded on \(L^{2}(\mu_{|F}).\) In particular, the capacity of \(F\) associated with the Riesz kernel is positive and so also that of the Cantor set. The main result of [MT] (see Theorem 1.2, p. 678 and its extension in the last formula in p. 696) states that the \(\alpha\)-Riesz capacity of the Cantor set is comparable to \((\sum_{n=1}^{\infty}a_{n}^{2})^{-\frac{1}{2}},\) so that positive capacity yields a convergent series. We remark that the previous argument uses very strong results, in particular the non-homogeneous \(T(1)\)-Theorem of Nazarov, Treil and Volberg, to extract the subset \(F\) on which the singular Riesz transform is \(L^{2}(\mu_{|F})\) bounded. In [CPV] one resorts to Menger curvature, which is not available for kernels of homogeneity \(-\alpha\) with \(1<\alpha<d,\) and the proof is slightly simpler. It would be desirable to have a direct argument relating the series to the convergence of the martingale.
The part of the stopping time argument of section 3 that does not obviously extend to higher dimensions is related to the sector \(\sigma(z,120^{\circ}).\) In particular, one should replace the \(45^{\circ}\) sectors centred at the origin with one edge on a coordinate axis by other regions. We proceed as follows. Divide \(\mathbb{R}^{d}\) into \(2^{d}\) regions (which in \(\mathbb{R}^{3}\) are the usual octants) by requiring that each coordinate has a definite sign. For example,
\[O=\{x\in\mathbb{R}^{d}:x_{1}\geq 0,x_{2}\geq 0,\ldots x_{d}\geq 0\}\]
is such a region. Divide the region \(O\) into the \(d!\) subregions, each determined by a permutation \(\sigma\) of the \(d\) variables
\[O_{\sigma}=\{x\in\mathbb{R}^{d}:0\leq x_{\sigma(1)}\leq x_{\sigma(2)}\leq \cdots\leq x_{\sigma(d)}\}.\]
Note that the maximal angle between two vectors lying in a subregion \(O_{\sigma}\) is precisely \(\arccos(d^{-1/2}),\) which approaches \(90^{\circ}\) as \(d\to\infty.\) Given a cone \(\Gamma\) with vertex the origin and aperture \(\theta,\) we would like to find a region \(O_{\sigma}\) contained in the cone \(\Gamma.\) This can be done as follows. The axis of the cone is a ray emanating from the origin contained in \(O_{\sigma}\) for some \(\sigma.\) Taking \(\theta=\theta(d)<\pi\) close enough to \(\pi\) one can achieve \(O_{\sigma}\subset\Gamma.\) Indeed, something stronger can be obtained: there exists a sufficiently small angle \(\gamma=\gamma(d)\) such that expanding \(O_{\sigma}\) in all directions by at most \(\gamma\) degrees one still remains in the cone \(\Gamma.\)
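The value \(\arccos(d^{-1/2})\) is easy to confirm numerically by computing the pairwise angles between the extreme rays of \(O_{\sigma}\), namely the vectors with \(k\) trailing ones, \(1\leq k\leq d\); the sketch below is only an illustration of this computation.

```python
# Pairwise angles between the extreme rays of
#   O = {0 <= x_1 <= x_2 <= ... <= x_d},
# generated by v_k = (0,...,0,1,...,1) with k trailing ones.
import math

def angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    cos = dot / (math.hypot(*u) * math.hypot(*v))
    return math.acos(max(-1.0, min(1.0, cos)))   # clamp against rounding

for d in range(2, 8):
    gens = [[0.0] * (d - k) + [1.0] * k for k in range(1, d + 1)]
    max_angle = max(angle(u, v) for u in gens for v in gens)
    print(d,
          round(math.degrees(max_angle), 4),                 # observed maximal angle
          round(math.degrees(math.acos(d ** -0.5)), 4))      # arccos(d^{-1/2})
```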
The planar argument now works with \(\theta\) in place of \(120^{\circ}.\)
One should also realise that the set \(\mathcal{S}\) of symmetries leaving the Cantor set invariant, preserving the measure \(\mu\) and commuting with the Riesz transform is large enough. For instance, fixing a variable \(x_{i},\) the mapping that leaves the variables \(x_{j}\) with \(j\neq i\) invariant and changes the sign of \(x_{i}\) is such a symmetry if one thinks of the Cantor set as centred at the origin. Also, given two different variables, the mapping that interchanges them and leaves the others invariant is in \(\mathcal{S}.\) Given two sets \(O_{\sigma}\) and
\(O_{\sigma^{\prime}}\), one can compose two of the symmetries just mentioned to obtain a symmetry in \(\mathcal{S}\) that maps \(O_{\sigma}\) onto \(O_{\sigma^{\prime}}\).
Finally, Hungerford's Lemma 4.1 holds for the \(\alpha\)-dimensional Cantor set in \(\mathbb{R}^{d}\), with the same proof, with the conclusion in the form \(\dim E\geq\alpha(1-\log c/\log\varepsilon)\).
### Acknowledgements
The authors are grateful to X. Tolsa for various conversations on the subject.
J. Verdera acknowledges support from the grants 2021-SGR-00071(Generalitat de Catalunya), PID2020-112881GB-I00 and Severo Ochoa and Maria de Maeztu CEX2020-001084-M (Ministerio de Ciencia e Innovacion).
### Statements and declarations
The authors have no conflict of interests to declare.
### Data availability statement
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
|
2305.14448 | Robust non-computability of dynamical systems and computability of
robust dynamical systems | In this paper, we examine the relationship between the stability of the
dynamical system $x^{\prime}=f(x)$ and the computability of its basins of
attraction. We present a computable $C^{\infty}$ system $x^{\prime}=f(x)$ that
possesses a computable and stable equilibrium point, yet whose basin of
attraction is robustly non-computable in a neighborhood of $f$ in the sense
that both the equilibrium point and the non-computability of its associated
basin of attraction persist when $f$ is slightly perturbed. This indicates that
local stability near a stable equilibrium point alone is insufficient to
guarantee the computability of its basin of attraction. However, we also
demonstrate that the basins of attraction associated with a structurally stable
- globally stable (robust) - planar system defined on a compact set are
computable. Our findings suggest that the global stability of a system and the
compactness of the domain play a pivotal role in determining the computability
of its basins of attraction. | Daniel S. Graça, Ning Zhong | 2023-05-23T18:12:56Z | http://arxiv.org/abs/2305.14448v4 | # Robust non-computability and stability of dynamical systems
###### Abstract
In this paper, we examine the relationship between the stability of the dynamical system \(x^{\prime}=f(x)\) and the computability of its basins of attraction. We present a computable \(C^{\infty}\) system \(x^{\prime}=f(x)\) that possesses a computable and stable equilibrium point, yet whose basin of attraction is robustly non-computable in a neighborhood of \(f\) in the sense that both the equilibrium point and the non-computability of its associated basin of attraction persist when \(f\) is slightly perturbed. This indicates that local stability near a stable equilibrium point alone is insufficient to guarantee the computability of its basin of attraction. However, we also demonstrate that the basins of attraction associated with a structurally stable - globally stable - planar system are computable. Our findings suggest that the global stability of a system plays a pivotal role in determining the computability of its basins of attraction.
## 1 Introduction
The focus of this paper is on examining the relationship between the stability of the dynamical system
\[\frac{dx}{dt}=f(x) \tag{1}\]
and the feasibility of computing the basin of attraction of a (hyperbolic) equilibrium point.
The problem of computing the basin of attraction of an equilibrium point can be viewed as a continuous variation of the discrete Halting problem. In this paper, we will demonstrate that basins of attraction can exhibit _robust non-computability_ for computable systems. Specifically, we will present a computable system represented by Equation (1) and a neighborhood surrounding the function
\(f\) which have the following properties: (i) Equation (1) has a computable equilibrium point, say \(s_{f}\), and the basin of attraction of \(s_{f}\) is non-computable; (ii) there are infinitely many computable functions within this neighborhood; and (iii) for each and every computable function \(g\) in this neighborhood, the system described by \(x^{\prime}=g(x)\) possesses a computable equilibrium point (near \(s_{f}\)) whose basin of attraction is also non-computable. To the best of our knowledge, this is the first instance where a continuous problem is demonstrated to possess robust non-computability.
Equilibrium solutions, also known as equilibrium points or critical points, correspond to the zeros of \(f\) in (1) and play a vital role in dynamical systems theory. They are points where the system comes to rest and are useful in determining the stability of the system. By analyzing the system's behavior in the vicinity of an equilibrium point, we can ascertain whether nearby trajectories (i.e. solutions of (1)) will remain near that point (stable) or move away from it (unstable).
The basins of attraction, on the other hand, represent the collection of initial conditions with the property that their associated trajectories converge to the corresponding equilibrium point. This is pictured in Figure 1. Thus, by identifying the basins of attraction, we can predict the system's long-term behavior for different initial conditions. This information is essential in understanding and characterizing the system's behavior, particularly in the context of complex systems.
A sink of (1) is a special type of equilibrium point where the system in the neighborhood of the equilibrium point is well-behaved and stable. Here "stable" refers to at least two properties. First, each sink \(s\) has a neighborhood \(U\) with the property that any trajectory that enters \(U\) stays there and converges exponentially fast to \(s\) (this means that \(\|\phi_{t}(x)-s\|\leq\varepsilon e^{-\alpha t}\) for some \(\varepsilon,\alpha>0\), where \(\phi_{t}(x)\) denotes the solution of (1) at time \(t\geq 0\) with initial condition \(\phi_{0}(x)=x\in U\). See [10, Theorem 1 on p. 130]). Second, the system is stable in the sense that if we replace \(f\) in (1) by a nearby function \(\overline{f}\) then it will continue to have a unique sink \(\overline{s}\) in \(U\) (\(\overline{s}\) depends continuously on \(\overline{f}\). In particular, when \(\overline{f}=f\) one has \(s=\overline{s}\). See [10, Theorem 1 on p. 321]) and moreover trajectories of the new system will behave (near the sink) similarly to the trajectories of the original system (1) and will converge exponentially fast to \(\overline{s}\).

Figure 1: Example of a dynamical system having three equilibrium points \(A,B,C\). The points \(A\) and \(B\) are sinks (i.e. stable equilibrium points) while \(C\) is not (it is a so-called saddle equilibrium point). The region in orange is the basin of attraction of \(A\) while the region in blue is the basin of attraction of \(B\).
This means that if the system (1) is slightly perturbed from a sink, it will eventually return to that point. In other words, a sink point is robust locally under small perturbations. Moreover, even if the dynamics of the system is (slightly) perturbed, nearby trajectories will behave similarly to the original system, providing a better understanding of the long-term behavior of the system. This is particularly important in the study of complex systems, where the stability of the system can be difficult to determine analytically. The concept of robustness also allows for the development of numerical methods for the study of dynamical systems, which are crucial in many applications where analytical methods are not feasible.
The widespread use of numerical algorithms in the analysis of dynamical systems has made it crucial to determine which sets associated with a system can be computed, and which ones cannot. In essence, a set is computable if it can be accurately plotted or numerically described to any desired degree of precision. Equilibrium points and basins of attraction are examples of such sets.
Several studies [11], [12] revealed that the basin of attraction of a sink may not be computable, even if the system is analytic and computable, and the sink is computable. Furthermore, non-computability results are not restricted only to basins of attraction for differential equations (see e.g. [10], [11], [12], [13], [14], [15]). These discoveries highlight the need to understand the limitations of numerical methods in the analysis of dynamical systems. In particular, it raises the question of whether non-computability results are "typical" or if they represent "exceptional" scenarios that are unlikely to have practical significance. In this paper, we specifically concentrate on investigating the non-computability of basins of attractions, as this phenomenon can be viewed as a continuous-time counterpart to the halting problem.
Moreover, numerical computations have finite precision, and hyperbolic sinks are robust under small perturbations. It is therefore natural to ask whether the non-computability persists under small perturbations; if it does not, it could safely be disregarded in practice. In this paper, we show that the non-computability of the basin of attraction cannot be overlooked in this sense: it is robust under small perturbations. The following is our first main result (the precise statement is presented in section 3).
**Theorem 1**. There exists a computable \(C^{\infty}\) function \(f\) for which the system (1) possesses a computable sink \(s_{0}\), but the basin of attraction of \(s_{0}\) is non-computable. Moreover, this non-computability is robust and persists under
small perturbations.
It is worth noting that Theorem 1 establishes that local stability in the vicinity of a sink is insufficient to guarantee the computability of the basin of attraction at the sink.
We also provide a discrete-time variant of this theorem. Actually, this result will be proved first and then used to prove Theorem 1.
**Theorem 2**. There exists an analytic and computable function \(f\) for which the discrete-time dynamical system defined by the iteration of \(f\) possesses a computable sink \(s_{0}\), but the basin of attraction of \(s_{0}\) is non-computable. Moreover, this non-computability is robust and persists under small perturbations.
The third main theorem of the paper provides an answer to the question of which dynamical systems have computable basins of attraction in the plane \(\mathbb{R}^{2}\). The precise statement of the following theorem is presented in section 4.
**Theorem 3**. The map that links each structurally stable planar system to the set of basins of attraction of its sinks is computable.
This theorem provides a positive result that complements the non-computability result presented in the first main theorem. It implies that for the large class of structurally stable - globally stable - planar systems, it is possible to numerically compute the basins of attraction of their equilibrium points and periodic orbits. It is worth noting that the set of structurally stable planar systems defined on a compact disk \(K\) forms an open and dense subset of the set of all planar (\(C^{1}\)-) dynamical systems defined on \(K\).
Taken together, Theorems 1 and 3 demonstrate that global stability is a crucial element in determining the feasibility of numerically computing the basins of attraction of dynamical systems, at least in the case of ordinary differential equations.
## 2 Preliminaries
### Computable analysis
Let \(\mathbb{N},\mathbb{Z},\mathbb{Q}\), and \(\mathbb{R}\) be the sets of non-negative integers, integers, rational numbers, and real numbers, respectively. Assuming familiarity with the concept of computable functions defined on \(\mathbb{N}\) with values in \(\mathbb{Q}\), we note that there exist several distinct but equally valid approaches to computable analysis, dating back to the work of Grzegorczyk and Lacombe in the 1950s. For the purposes of this paper, we adopt the oracle Turing machine version presented in e.g. [Ko91].
**Definition 1**.: _A rational-valued function \(\phi:\mathbb{N}\to\mathbb{Q}\) is called an oracle for a real number \(x\) if it satisfies \(|\phi(m)-x|<2^{-m}\) for all \(m\)._
**Definition 2**.: _Let \(S\) be a subset of \(\mathbb{R}\), and let \(f:S\to\mathbb{R}\) be a real-valued function on \(S\). Then \(f\) is said to be computable if there is an oracle Turing Machine \(M^{\phi}(n)\) such that the following holds. If \(\phi\) is an oracle for \(x\in S\), then for every \(n\in\mathbb{N}\), \(M^{\phi}(n)\) returns a rational number \(q\) such that \(|q-f(x)|<2^{-n}\)._
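As a toy illustration of Definition 2 (not taken from the paper), the sketch below computes \(f(x)=x^{2}\) on \(S=[-2,2]\) in the oracle style: to achieve accuracy \(2^{-n}\) it queries the oracle with accuracy \(2^{-(n+3)}\), which suffices because \(|q^{2}-x^{2}|=|q-x|\,|q+x|\) and \(|q+x|<8\) on this domain.

```python
# Oracle-style computation of f(x) = x^2 on S = [-2, 2]:
# given an oracle phi with |phi(m) - x| < 2^(-m), the machine below
# returns a rational q with |q - x^2| < 2^(-n).
from fractions import Fraction
from math import isqrt

def square_machine(oracle, n):
    q = oracle(n + 3)      # |q - x| < 2^(-(n+3)), and |q + x| < 8 on [-2, 2]
    return q * q

# an oracle for x = sqrt(2): isqrt(2 * 4^m) / 2^m is within 2^(-m) of sqrt(2)
def sqrt2_oracle(m):
    return Fraction(isqrt(2 * 4 ** m), 2 ** m)

for n in (5, 20, 50):
    q = square_machine(sqrt2_oracle, n)
    print(n, float(q), abs(q - 2) < Fraction(1, 2 ** n))
```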
The definition can be extended to functions defined on a subset of \(\mathbb{R}^{d}\) with values in \(\mathbb{R}^{l}\).
**Definition 3**.: _Let \(U\) be a bounded open subset of \(\mathbb{R}^{d}\). Then \(U\) is called computable if there are computable functions \(a,b:\mathbb{N}\to\mathbb{Q}^{d}\) and \(r,s:\mathbb{N}\to\mathbb{Q}\) such that the following holds: \(U=\cup_{n=0}^{\infty}B(a(n),r(n))\) and \(\{\overline{B(b(n),s(n))}\}_{n}\) lists all closed rational balls in \(\mathbb{R}^{d}\) which are disjoint from \(U\), where \(B(a,r)=\{x\in\mathbb{R}^{d}\,:\,|x-a|<r\}\) is the open ball in \(\mathbb{R}^{d}\) centered at \(a\) with the radius \(r\) and \(\overline{B(a,r)}\) is the closure of \(B(a,r)\)._
By definition, a planar computable bounded open set can be rendered on a computer screen with arbitrary magnification. A closed subset \(K\) of \(\mathbb{R}^{d}\) is considered computable if its complement \(\mathbb{R}^{d}\setminus K\) is a computable open subset of \(\mathbb{R}^{d}\), or equivalently, if the distance function \(d_{K}:\mathbb{R}^{d}\to\mathbb{R}\) defined as \(d_{K}(x)=\inf_{y\in K}\|y-x\|\) is computable.
The concept of Turing computability can be extended to encompass a broader range of function spaces and the maps that operate on them. The definitions 2 and 3 indicate that an object is deemed (Turing) computable if it can be approximated with arbitrary precision through computer-generated approximations. Formalizing this idea to carry out computations on infinite objects such as real numbers, we encode those objects as infinite sequences of rational numbers (or equivalently, sequences of any finite or countable set \(\Sigma\) of symbols), using representations (see [20] for a complete development). A represented space is a pair \((X;\delta)\) where \(X\) is a set, \(\delta\) is a coding system (or naming system) on \(X\) with codes from \(\Sigma\) having the property that \(\operatorname{dom}(\delta)\subseteq\Sigma^{\mathbb{N}}\) and \(\delta:\Sigma^{\mathbb{N}}\to X\) is an onto map. Every \(q\in\operatorname{dom}(\delta)\) satisfying \(\delta(q)=x\) is called a \(\delta\)-name of \(x\) (or a name of \(x\) when \(\delta\) is clear from context). Naturally, an element \(x\in X\) is computable if it has a computable name in \(\Sigma^{\mathbb{N}}\). The notion of computability on \(\Sigma^{\mathbb{N}}\) is well established, and \(\delta\) lifts computations on \(X\) to computations on \(\Sigma^{\mathbb{N}}\). The representation \(\delta\) also induces a topology \(\tau_{\delta}\) on \(X\), where \(\tau_{\delta}=\{U\subseteq X:\,\delta^{-1}(U)\text{ is open in }\operatorname{dom}(\delta)\}\) is called the final topology of \(\delta\) on \(X\).
The notion of computable maps between represented spaces now arises naturally. A map \(\Phi:(X;\delta_{X})\to(Y;\delta_{Y})\) between two represented spaces is computable if there is a computable map \(\phi:\Sigma^{\mathbb{N}}\to\Sigma^{\mathbb{N}}\) such that \(\Phi\circ\delta_{X}=\delta_{Y}\circ\phi\) (see e.g. [1]). Informally speaking, this means that there is a computer program \(\phi\) that outputs a name of \(\Phi(x)\) when given a name of \(x\) as input. Since \(\phi\) is computable, it transforms every computable element of \(\Sigma^{\mathbb{N}}\) into a computable element of \(\Sigma^{\mathbb{N}}\). Computable maps are, moreover, continuous with respect to the final topologies induced by \(\delta_{X}\) and \(\delta_{Y}\).
### Dynamical systems
There are two broad classes of dynamical systems: discrete-time and continuous-time. Discrete-time dynamical systems are defined by the iteration of a map \(g:\mathbb{R}^{d}\to\mathbb{R}^{d}\), while continuous-time systems are defined by an ordinary differential
equation (ODE) of the form \(x^{\prime}=f(x)\), where \(f:\mathbb{R}^{d}\to\mathbb{R}^{d}\). Regardless of the type of system, the notion of trajectory is fundamental. In the discrete-time case, a trajectory starting at the point \(x_{0}\) is defined by the sequence of iterates of \(g\) as follows
\[x_{0},g(x_{0}),g(g(x_{0})),\dots,g^{[k]}(x_{0}),\dots\]
where \(g^{[k]}\) denotes the \(k\)th iterate of \(g\), while in the continuous time case it is the solution, a function \(\phi(f,x_{0})(\cdot)\) of time \(t\), to the following initial-value problem
\[\left\{\begin{array}{l}x^{\prime}=f(x)\\ x(0)=x_{0}\end{array}\right.\]
In the realm of dynamical systems, a set \(A\) is forward invariant if any trajectory starting in \(A\) remains in \(A\) for all positive times. If an invariant set consists of only one point, it is called an equilibrium point. For a dynamical system defined by (1), an equilibrium point must be a zero of \(f\). Similarly, for a discrete-time dynamical system defined by \(g\), an equilibrium point must be a fixed point of \(g\) (i.e. it satisfies \(g(x)=x\)) or, equivalently, a zero of \(g(x)-x\).
If trajectories nearby an invariant set converge to this set, then the invariant set is called an attractor. The basin of attraction for a given attractor \(A\) is the set of all points \(x\in\mathbb{R}^{d}\) such that the trajectory starting at \(x\) converges to \(A\) as \(t\to\infty\). Attractors come in different types, including points, periodic orbits, and strange attractors. Equilibrium points are the simplest type of attractor.
An equilibrium point \(x_{0}\) of (1) is _hyperbolic_ if none of the eigenvalues of the Jacobian matrix \(Df(x_{0})\) have zero real part. In particular, if all the eigenvalues of \(Df(x_{0})\) have a negative real part, then we have a _sink_. A sink has all the properties mentioned in Section 1. In particular, given a sink \(s\) there is a neighborhood \(U\) such that any trajectory starting in \(U\) stays there and converges exponentially fast to \(s\). If a hyperbolic equilibrium point is not a sink, then in any neighborhood of this point there is a trajectory that does not converge to it.
A similar approach can be applied to discrete-time dynamical systems. Specifically, an equilibrium point \(x_{0}\) of the discrete-time dynamical system defined by \(g\) is hyperbolic if none of the eigenvalues of \(Dg(x_{0})\) belong to the unit circle. On the other hand, an equilibrium point \(x_{0}\) is considered a sink if all the eigenvalues of \(Dg(x_{0})\) have an absolute value less than \(1\).
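Numerically, checking the sink condition amounts to an eigenvalue computation for the Jacobian at the equilibrium point. The sketch below, with made-up Jacobian matrices, tests the two criteria just stated: negative real parts in the continuous-time case and modulus less than one in the discrete-time case.

```python
# Classifying a hyperbolic equilibrium as a sink from the Jacobian.
import numpy as np

def is_sink_continuous(J):
    """x' = f(x): sink iff every eigenvalue of Df(x0) has negative real part."""
    return bool(np.all(np.linalg.eigvals(J).real < 0))

def is_sink_discrete(J):
    """x_{k+1} = g(x_k): sink iff every eigenvalue of Dg(x0) has modulus < 1."""
    return bool(np.all(np.abs(np.linalg.eigvals(J)) < 1))

# hypothetical Jacobians at an equilibrium point
Df = np.array([[-1.0, 2.0],
               [ 0.0, -3.0]])        # eigenvalues -1, -3   -> sink of the ODE
Dg = np.array([[0.5, 0.1],
               [0.0, 0.3]])          # eigenvalues 0.5, 0.3 -> sink of the map
print(is_sink_continuous(Df), is_sink_discrete(Dg))
```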
We will now discuss the concept of (\(C^{1}\)-)perturbations. First, we will introduce some notations. Let \(C^{k}(A;\mathbb{R}^{l})\) denote the set of all \(k\)-times continuously differentiable functions from a subset \(A\) of \(\mathbb{R}^{d}\) to \(\mathbb{R}^{l}\). If \(l=d\), we simply write \(C^{k}(A)\) for \(C^{k}(A;\mathbb{R}^{d})\). Suppose \(W\) is an open subset of \(\mathbb{R}^{d}\) and \(f:W\to\mathbb{R}^{d}\) is a \(C^{1}\) vector field. In the field of dynamical systems and differential equations, a perturbation of \(f\) is another \(C^{1}\) vector field \(g:W\to\mathbb{R}^{d}\) that is "\(C^{1}\)-close to \(f\)". To be more precise:
**Definition 4**.: _Let \(f\in C(W)\) (resp. \(f\in C^{1}(W)\)), the \(C\)-norm of \(f\) is defined to be \(\|f\|=\sup_{x\in W}\|f(x)\|\) (resp. the \(C^{1}\)-norm of \(f\) is defined to be
\(\|f\|_{1}=\sup_{x\in W}\|f(x)\|+\sup_{x\in W}\|Df(x)\|\)), where \(\|\cdot\|\) denotes the max-norm on \(\mathbb{R}^{d}\) or the usual norm of the matrix \(Df(x)\), depending on the context._
Note that for \(x\in\mathbb{R}^{d}\), the max-norm is given by \(\|x\|=\max_{1\leq i\leq d}|x_{i}|\). It is possible that \(\|f\|_{1}=\infty\) if the suprema above are not finite. The \(C^{1}\)-norm \(\|\cdot\|_{1}\) has many of the same formal properties as norms for vector spaces. For \(\epsilon>0\), an \(\epsilon\)-neighborhood of \(f\) in \(C^{1}(W)\) is defined as the set \(\{g\in C^{1}(W):\|g-f\|_{1}<\epsilon\}\). Any function \(g\) in this neighborhood is called an \(\epsilon\)-perturbation of \(f\).
**Remark 1**.: _Upon observation, it can be noted that for any function \(f:W\to\mathbb{R}^{l}\), if \(f\) is computable with a finite \(\|f\|_{1}\), then in any \(\epsilon\)-neighborhood (in \(C^{1}\)-norm) \(\mathcal{N}\), there exist infinitely many computable \(C^{1}\) functions which are distinct from \(f\). For example, \(f_{\alpha},\bar{f}_{\alpha},\tilde{f}_{\alpha}\in\mathcal{N}\) for any rational \(\alpha\) satisfying \(0<\alpha<\epsilon\), where (the operations are done componentwise) \(f_{\alpha}(x)=f(x)+\alpha\), \(\bar{f}_{\alpha}(x)=f(x)+\alpha\sin x\), \(\tilde{f}_{\alpha}(x)=f(x)+e^{-\alpha(1+\|x\|^{2})}\)._
## 3 Proof of Theorem 2 - robust non-computability on the discrete-time case
In this section, we provide an example demonstrating the existence of a computable and analytic function \(f:\mathbb{R}^{3}\to\mathbb{R}^{3}\) that defines a discrete-time dynamical system satisfying the following conditions:
1. \(f\) has a hyperbolic sink \(s\) that is computable.
2. The basin of attraction of \(s\) is non-computable.
3. There exists a neighborhood \(\mathcal{N}\) (in \(C^{1}\)-norm) of \(f\) such that for every function \(g\in\mathcal{N}\), \(g\) has a hyperbolic sink \(s_{g}\) that is computable from \(g\), and the basin of attraction of \(s_{g}\) is non-computable.
The construction demonstrates that the non-computability of computing the basins of attraction can remain robust under small perturbations and sustained throughout an entire neighborhood.
It is worth noting that the function \(f\) inherits strong computability from its analyticity, which implies that every order derivative of \(f\) is computable. Furthermore, in any \(C^{1}\)-neighborhood of \(f\), there exist infinitely many computable functions.
We will make use of an example presented in [1]. In [1] an analytic and computable function \(f:\mathbb{R}^{3}\to\mathbb{R}^{3}\) is explicitly constructed with the following properties:
1. the restriction, \(f_{M}:\mathbb{N}^{3}\to\mathbb{N}^{3}\), of \(f\) on \(\mathbb{N}^{3}\) is the transition function of a universal Turing machine \(M\), where each configuration of \(M\) is coded as an element of \(\mathbb{N}^{3}\) (see [1] for an exact description of the coding). Without loss of generality, \(M\) can be assumed to have just one halting configuration; e.g. just before ending, clear the tape and go to a unique
halting state; thus the halting configuration \(s\in\mathbb{N}^{3}\) is unique. We also assume that \(f_{M}\) is defined over \(s\) and that \(f(s)=s\).
2. the halting configuration \(s\) of \(M\) is a computable sink of the discrete-time evolution of \(f\).
3. the basin of attraction of \(s\) is non-computable.
4. there exists a constant \(\lambda\in(0,1)\) such that if \(x_{0}\) is a configuration of \(M\), then for any \(x\in\mathbb{R}^{3}\), \[\|x-x_{0}\|\leq 1/4\quad\Longrightarrow\quad\|f(x)-f(x_{0})\|\leq \lambda\|x-x_{0}\|\] (2)
In the remainder of this section, the symbols \(f\) and \(s\) are reserved for this particular function and its particular sink - the halting configuration of the universal Turing machine \(M\) whose transition function is \(f_{M}\).
We show in the following that there is a \(C^{1}\)-neighborhood \(\mathcal{N}\) of \(f\) - computable from \(f\) and \(Df(s)\) - such that for every \(g\in\mathcal{N}\), \(g\) has a sink \(s_{g}\) - computable from \(g\) - and the basin of attraction of \(s_{g}\) is non-computable. We begin with two propositions. (Note that the particular function \(f\) is exactly the function constructed in [1] mentioned in the following proposition.)
**Proposition 1**.: _([1, p. 333]) Let \(0<\delta<\epsilon<\frac{1}{2}\). The extension from \(f_{M}:\mathbb{N}^{3}\to\mathbb{N}^{3}\) to \(f:\mathbb{R}^{3}\to\mathbb{R}^{3}\) is robust to perturbations in the following sense: for all \(g:\mathbb{R}^{3}\to\mathbb{R}^{3}\) such that \(\|f-g\|\leq\delta\), for all \(j\in\mathbb{N}\), and for all \(\bar{x}_{0}\in\mathbb{R}^{3}\) satisfying \(\|\bar{x}_{0}-x_{0}\|\leq\epsilon\), where \(x_{0}\in\mathbb{N}^{3}\) represents an initial configuration,_
\[\|g^{[j]}(\bar{x}_{0})-f_{M}^{[j]}(x_{0})\|\leq\epsilon \tag{3}\]
The following proposition is an immediate corollary of a classical result of dynamical systems theory. See e.g. Theorems 1 and 2 in [10, p. 305]. The proof of this proposition has nothing to do with differential equations; rather, it depends on the invertibility of \(Df(s)\) (the invertibility of \(Df(s)\) follows from the fact that \(s\) is a sink).
**Proposition 2**.: _There exist a neighborhood \(U\subset\mathbb{R}^{3}\) of \(s\) and a \(C^{1}\)-neighborhood \(\mathcal{N}\) of \(f\) such that for any \(g\in\mathcal{N}\), \(g\) has a sink \(s_{g}\) in \(U\). Moreover, for any \(\epsilon>0\) one can choose \(\mathcal{N}\) so that \(\|s_{g}-s\|<\epsilon\)._
**Remark 2**.: _We note that if \(s\) is a sink of \(f\), then \(Df(s)-I\) is invertible, i.e. \(\det(Df(s)-I)\neq 0\). Indeed, if \(\det(Df(s)-I)=0\) then there would be a non-zero vector \(v\) such that \((Df(s)-I)v=0\), a contradiction to the assumption that \(s\) is a sink and thus that all eigenvalues of \(Df(s)\) are less than 1 in absolute value. This implies that the sink \(s\) is a zero of \(h(x)=f(x)-x\) and that \(Dh(s)\) is invertible (more generally, \(Dh\) is invertible at any hyperbolic fixed point). Hence, if all fixed points of \(f\) are hyperbolic, then one can compute globally the set of all zeros of \(h(x)=f(x)-x\) (see[1, Theorem 4.2]) and thus all fixed points of \(f\). This implies that \(s\) must be computable. One can then use the
approach of [11] or only use local information to compute the neighborhood \(\mathcal{N}\) of Proposition 2. Indeed, with the aid of computable inverse function theorems such as those in [10, Theorem 19] or [12, Corollary 6.1], a neighborhood \(B(s,1/k_{0})\), where \(k_{0}\in\mathbb{N}\setminus 0\), can be computed from \(f\), \(Df\), and \(s\) such that \(f(x)-x\) is injective within this neighborhood. Consequently, there exists only one zero of \(h(x)=f(x)-x\) in this vicinity, yielding a single fixed point of \(f\). The sink \(s\) can be computed as this zero with accuracy \(1/k\), where \(k\geq k_{0}\), by computing \(h^{-1}(0)\) using the local inverse function \(h^{-1}\) provided by the aforementioned computable inverse function theorems. As the computation is finite, the oracle call merely requires the use of a finite approximation of \(h\), \(Dh\), \(f\), and \(Df\) with an accuracy bounded by \(1/\eta(k)\), where \(\eta:\{k\in\mathbb{N}:,k\geq k_{0}\}\to\mathbb{N}\) (assuming \(\eta(k)>k\) without loss of generality). Note that \(\eta\) can be considered computable from \(f\) and \(Df\), for instance, by computing the sink \(s\) with accuracy \(1/k\) and measuring the maximum argument size used to call the oracle. By employing a similar argument to the one used to establish the computability of \(s\) from \(f\) and \(Df\), one can demonstrate the computability of the function \(F:\mathcal{N}\to\mathbb{R}^{3}\), where, for every \(g\in\mathcal{N}\), \(F(g)=s_{g}\), \(s_{g}\) represents the unique equilibrium point of \(g\) in the closure of \(B(s,1/k_{0})\), and \(\mathcal{N}\) denotes the \(\frac{1}{\eta(k)}\)-neighborhood (in \(C^{1}\)-norm) of \(f\). Furthermore, \(s_{g}\) is a sink and \(|s-s_{g}|<1/k\)._
**Theorem 1**.: _There is a \(C^{1}\)-neighborhood \(\mathcal{N}\) of \(f\) (computable from \(f\) and \(Df(s)\)) such that for any \(g\in\mathcal{N}\), \(g\) has a sink \(s_{g}\) (computable from \(g\)) and the basin of attraction \(W_{g}\) of \(s_{g}\) is non-computable._
Proof.: Let \(0<\delta<\epsilon<\frac{1}{2}\) be the parameters given in Proposition 1 satisfying \(0<\epsilon<1/4\) and let \(k_{0}\), \(\eta\) be given as in Remark 2. Pick a \(k\geq k_{0}\) such that \(0<\frac{1}{k}<\epsilon/2\). Let \(\theta\) be a rational constant satisfying \(0<\theta<\min\{\delta,\frac{1-\lambda}{2},\frac{1}{\eta(k)}\}\), where \(\lambda\in(0,1)\) is the constant in (2). Denote \(\theta+\lambda\) as \(\theta_{\lambda}\). Then \(0<\theta_{\lambda}<\frac{1-\lambda}{2}+\lambda=\frac{1+\lambda}{2}<1\). Let \(\mathcal{N}\) be the \(\theta\)-neighborhood of \(f\) in \(C^{1}\)-norm. Then for any \(g\in\mathcal{N}\), any configuration \(x_{0}\in\mathbb{N}^{3}\) of \(M\), and any \(x\in B(x_{0},1/4)\), we have the following estimate:
\[\|g(x)-g(x_{0})\| \tag{4}\] \[\leq \|(g-f)(x)-(g-f)(x_{0})\|+\|f(x)-f(x_{0})\|\] \[\leq \|D(g-f)\|\,\|x-x_{0}\|+\lambda\|x-x_{0}\|\] \[< (\theta+\lambda)\,\|x-x_{0}\|\]
Since \(0<\theta+\lambda=\theta_{\lambda}<1\), it follows that \(g\) is a contraction in \(B(x_{0},1/4)\) for every configuration \(x_{0}\) of \(M\).
We now show that for any \(g\in\mathcal{N}\) and any configuration \(x_{0}\in\mathbb{N}^{3}\) of \(M\), \(M\) halts on \(x_{0}\) if and only if \(x_{0}\in W_{g}\), where \(W_{g}\) denotes the basin of attraction of \(s_{g}\). First we assume that \(x_{0}\in W_{g}\). Then, by definition of basin of attraction of a sink, \(g^{[j]}(x_{0})\to s_{g}\) as \(j\to\infty\). Hence, there exists \(n\in\mathbb{N}\) such that
\(\|g^{[n]}(x_{0})-s_{g}\|<\frac{\epsilon}{5}\), which in turn implies that
\[\|f_{M}^{[n]}(x_{0})-s\|\] \[\leq \|f_{M}^{[n]}(x_{0})-g^{[n]}(x_{0})\|+\|g^{[n]}(x_{0})-s_{g}\|+\|s_ {g}-s\|\] \[\leq \epsilon+\frac{\epsilon}{5}+\frac{1}{k}<2\epsilon\]
Since \(f_{M}^{[n]}(x_{0}),s\in\mathbb{N}^{3}\) and \(2\epsilon<1/2\) by assumption that \(\epsilon<\frac{1}{4}\), it follows that \(f_{M}^{[n]}(x_{0})=s\). Hence, \(M\) halts on \(x_{0}\). Next we assume that \(M\) halts on \(x_{0}\). This assumption implies that there exists \(n\in\mathbb{N}\) such that \(f_{M}^{[j]}(x_{0})=s\) for all \(j\geq n\). Then for all \(j\geq n\), it follows from Proposition 1 that
\[\|g^{[j]}(x_{0})-s\|\] \[\leq \|g^{[j]}(x_{0})-f_{M}^{[j]}(x_{0})\|+\|f_{M}^{[j]}(x_{0})-s\|\] \[= \|g^{[j]}(x_{0})-f_{M}^{[j]}(x_{0})\|\leq\epsilon\]
The inequality implies that \(\{g^{[j]}(x_{0})\}_{j\geq n}\subset\overline{B(s,\epsilon)}\). From the assumptions that \(s_{g}\) is an equilibrium point of \(g\), \(\|s-s_{g}\|<\frac{1}{k}\) for every \(g\) in the \(\frac{1}{\eta(k)}\)-neighborhood of \(f\) (in \(C^{1}\)-norm), \(\theta\leq\frac{1}{\eta(k)}\), and \(\frac{1}{k}<\epsilon\), it follows that \(g(s_{g})=s_{g}\) and \(s_{g}\in\overline{B(s,\epsilon)}\subset\overline{B(s,1/4)}\). Since \(s\) is a configuration of \(M\) - the halting configuration of \(M\) - it follows from (4) that \(g\) is a contraction on \(\overline{B(s,1/4)}\). Thus, \(\|g^{[n+j]}(x_{0})-s_{g}\|=\|g^{[n+j]}(x_{0})-g^{[n+j]}(s_{g})\|\leq(\theta_{\lambda})^{j}\|g^{[n]}(x_{0})-s_{g}\|\to 0\) as \(j\rightarrow\infty\). Consequently, \(g^{[j]}(x_{0})\to s_{g}\) as \(j\rightarrow\infty\). This implies that \(x_{0}\in W_{g}\).
To prove that \(W_{g}\) is non-computable, the following stronger inclusion is needed: if \(M\) halts on \(x_{0}\), then \(B(x_{0},\epsilon)\subset W_{g}\). Consider any \(x\in B(x_{0},\epsilon)\). Since \(x_{0}\in W_{g}\) and \(g\) is a contraction on \(B(x_{0},\epsilon)\), it follows that
\[\|g^{[j]}(x)-g^{[j]}(x_{0})\|\leq(\theta_{\lambda})^{j}\|x-x_{0}\|\to 0 \quad\text{as }j\rightarrow\infty\]
Since \(x_{0}\in W_{g}\), \(g^{[j]}(x_{0})\to s_{g}\) as \(j\rightarrow\infty\). Hence, \(g^{[j]}(x)\to s_{g}\) as \(j\rightarrow\infty\). This implies that \(x\in W_{g}\).
It remains to show that \(W_{g}\) is non-computable. Suppose otherwise that \(W_{g}\) were computable. We first note that \(W_{g}\) is an open set: since \(s_{g}\) is a sink, some open ball \(B(s_{g},\rho)\) is contained in \(W_{g}\), and therefore \(W_{g}=\cup_{j\in\mathbb{N}}(g^{[j]})^{-1}(B(s_{g},\rho))\), which is open because every iterate \(g^{[j]}\) is continuous. Then the distance function \(d_{\mathbb{R}^{3}\setminus W_{g}}\) is computable. We can use this computability to solve the halting problem. Consider any initial configuration \(x_{0}\in\mathbb{N}^{3}\), and compute \(d_{\mathbb{R}^{3}\setminus W_{g}}(x_{0})\). If \(d_{\mathbb{R}^{3}\setminus W_{g}}(x_{0})>\frac{\epsilon}{5}\) or \(d_{\mathbb{R}^{3}\setminus W_{g}}(x_{0})<\frac{\epsilon}{4}\), halt the computation. Since \(\epsilon>0\), this computation always halts.
If \(d_{\mathbb{R}^{3}\setminus W_{g}}(x_{0})>\frac{\epsilon}{5}>0\), then \(x_{0}\in W_{g}\), or equivalently, the Turing machine \(M\) halts on \(x_{0}\). Otherwise, if \(d_{\mathbb{R}^{3}\setminus W_{g}}(x_{0})<\frac{\epsilon}{4}\), then \(x_{0}\not\in W_{g}\), or equivalently, \(M\) does not halt on \(x_{0}\). The exclusion that \(x_{0}\not\in W_{g}\) is derived from the fact that if \(x_{0}\in W_{g}\), then \(B(x_{0},\epsilon)\subseteq W_{g}\); in other words, \(d_{\mathbb{R}^{3}\setminus W_{g}}(x_{0})\geq\epsilon>\frac{\epsilon}{4}\). Therefore, if \(W_{g}\) was computable, then we could solve the halting problem, which is a contradiction.
Hence, we conclude that \(W_{g}\) is non-computable.
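The decision procedure used at the end of the proof can be phrased as a simple loop: query better and better approximations of \(d_{\mathbb{R}^{3}\setminus W_{g}}(x_{0})\) until one of the two (overlapping) tests is certified. The hypothetical sketch below makes this explicit; `dist_oracle(x0, n)` stands for a routine returning a rational within \(2^{-n}\) of the distance, which is what the assumed computability of \(W_{g}\) would provide.

```python
# If W_g were computable, the following loop would decide the halting problem:
# it certifies either d(x0) > eps/5 (so x0 is in W_g, i.e. M halts on x0)
# or d(x0) < eps/4 (so, by the dichotomy d(x0) in {0} U [eps, inf), M does not halt).
from fractions import Fraction

def decide_halting(dist_oracle, x0, eps):
    n = 1
    while True:
        q = dist_oracle(x0, n)            # |q - d(x0)| < 2^(-n)
        err = Fraction(1, 2 ** n)
        if q - err > eps / 5:
            return True                   # d(x0) > eps/5 > 0, hence x0 in W_g
        if q + err < eps / 4:
            return False                  # d(x0) < eps/4 < eps, hence d(x0) = 0
        n += 1                            # not yet certified: refine the query

# toy run with a fake oracle: pretend d(x0) = eps (the halting case)
eps = Fraction(1, 10)
fake_oracle = lambda x0, n: eps + Fraction(1, 2 ** (n + 1))
print(decide_halting(fake_oracle, x0=None, eps=eps))
```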
**Remark 3**.: _Theorem 1 demonstrates that non-computability can maintain its strength when considering standard topological structures, as in the study of natural phenomena such as identifying invariant sets of a dynamical system. This robustness can manifest in a powerful way: the non-computability of the basins of attraction persists continuously for every function that is "\(C^{1}\) close to \(f\)"._
## 4 Proof of Theorem 1 - robust non-computability in the continuous-time case
In the previous section, we demonstrated that a discrete-time dynamical system defined by the iteration of a map, say \(\bar{f}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\), has a computable sink with a non-computable basin of attraction, and that this non-computability property is robust to perturbations. In this section, we extend this result to continuous-time dynamical systems. Specifically, we prove the existence of a computable \(C^{\infty}\) map \(f:\mathbb{R}^{7}\rightarrow\mathbb{R}^{7}\) such that the ODE \(y^{\prime}=f(y)\) has a computable sink with a non-computable basin of attraction. Moreover, this non-computability property is robust to small perturbations in \(f\).
To be more precise, we show that there exists some \(\varepsilon>0\) such that if \(g:\mathbb{R}^{7}\rightarrow\mathbb{R}^{7}\) is another \(C^{\infty}\) map with \(\left\|f-g\right\|_{1}\leq\varepsilon\), then the ODE \(y^{\prime}=g(y)\) also has a sink (computable from \(g\) and located near the sink of \(y^{\prime}=f(y)\)) with a non-computable basin of attraction. This means that the non-computability of the basin of attraction is a robust property of the underlying dynamical system.
Overall, this result shows that the non-computability of basin of attraction is not limited to discrete-time dynamical systems, but is also present in continuous-time dynamical systems, and is a robust property that persists under small perturbations.
To obtain this result, we will employ a technique that involves iterating the map \(\bar{f}\) with an ODE. We need to ensure that the resulting ODE still has a computable sink and that the non-computability property is robust to perturbations. This technique has been explored in several previous papers, including [10], [11], [12], and [13]. The basic idea is to start with a "targeting" equation with the format
\[x^{\prime}=c(b-x)^{3}\phi(t) \tag{5}\]
where \(b\) is the _target value_ and \(\phi:\mathbb{R}\rightarrow\mathbb{R}\) is a continuous function which satisfies \(\int_{t_{0}}^{t_{1}}\phi(t)dt>0\) and \(\phi(t)\geq 0\) over \([t_{0},t_{1}]\). This is a separable ODE which can be explicitly solved. Using the solution one can show that for any \(\gamma>0\) (the value \(\gamma\) is called the _targeting error_ for reasons which will be clear in a moment), if one chooses
\[c\geq\frac{1}{2\gamma^{2}\int_{t_{0}}^{t_{1}}\phi(t)dt} \tag{6}\]
in (5), then \(\left|x(t)-b\right|<\gamma\) for all \(t\geq t_{1}\), independent of the initial condition \(x(t_{0})\). Note also that if \(\phi(t)=0\) for all \(t\in[t_{0},t_{1}]\), then \(x(t)=x(t_{0})\) for all
\(t\in[t_{0},t_{1}]\). This targeting equation is the basic building block for iterating a map \(\tilde{f}:\mathbb{R}\to\mathbb{R}\) which extends a corresponding function \(\tilde{f}_{\mathbb{N}}:\mathbb{N}\to\mathbb{N}\).
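The targeting behaviour, together with the bound (6), is easy to observe numerically. The sketch below integrates (5) with a hand-rolled Runge-Kutta scheme for one admissible choice \(\phi(t)=\max(0,\sin 2\pi t)\) on \([t_{0},t_{1}]=[0,1/2]\), so that \(\int_{t_{0}}^{t_{1}}\phi=1/\pi\) and (6) asks for \(c\geq 8\pi\approx 25.1\) when \(\gamma=1/4\); all concrete values are chosen only for illustration.

```python
# Numerical illustration of the targeting equation x' = c (b - x)^3 phi(t):
# with c satisfying (6), x(1/2) ends up within gamma of the target b,
# whatever the initial condition.
import math

def phi(t):                       # admissible choice: >= 0 on [0, 1/2], integral 1/pi
    return max(0.0, math.sin(2 * math.pi * t))

def rk4(rhs, x, t0, t1, steps=20000):
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = rhs(t, x)
        k2 = rhs(t + h / 2, x + h * k1 / 2)
        k3 = rhs(t + h / 2, x + h * k2 / 2)
        k4 = rhs(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

b, gamma = 5.0, 0.25
c = 30.0                          # any c >= pi / (2 * gamma**2) ~ 25.13 works
for x0 in (-3.0, 0.0, 2.0, 11.0):
    x_half = rk4(lambda t, x: c * (b - x) ** 3 * phi(t), x0, 0.0, 0.5)
    print(x0, round(abs(x_half - b), 4), abs(x_half - b) < gamma)
```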
To iterate \(\tilde{f}\) (with an ODE) we pick \(t_{1}-t_{0}=1/2\), a continuous periodic function \(\phi:\mathbb{R}\to\mathbb{R}\) of period \(1\), which satisfies \(\phi(t)\geq 0\) for \(t\in]0,1/2[\), \(\phi(t)=0\) for \(t\in[1/2,1]\), and \(\int_{0}^{1}\phi(t)dt>0\), a constant \(c\) satisfying (6) with \(\gamma=1/4\), and a \(C^{\infty}\) function \(r:\mathbb{R}\to\mathbb{R}\) with the property that \(r(k+\varepsilon)=k\) for all \(k\in\mathbb{Z}\) and all \(|\varepsilon|\leq 1/4\) (i.e. \(r\) returns the nearest integer whenever its argument \(x\) is within distance \(\leq 1/4\) of an integer). Although the exact expressions of \(\phi\) and \(r\) are irrelevant to the construction, it is worth noticing that choices can be made (see e.g. [1, p. 344], replacing \(\theta_{j}\) in (20) of that paper by the function \(\chi\) given by (11) below) so that \(\phi\) and \(r\) are \(C^{\infty}\).
Then the ODE
\[\left\{\begin{array}{l}z_{1}^{\prime}=c(\tilde{f}(r(z_{2}))-z_{1})^{3}\phi( t)\\ z_{2}^{\prime}=c(r(z_{1})-z_{2})^{3}\phi(t+1/2)\end{array}\right. \tag{7}\]
will iterate \(\tilde{f}\) in the sense that the continuous flow generated by (7) starting near any integer value will stay close to the (discrete) orbit of \(\tilde{f}\), as we will now see. Suppose that at the initial time \(t=0\), we have \(\left|z_{1}(0)-x_{0}\right|\leq 1/4\) and \(\left|z_{2}(0)-x_{0}\right|\leq 1/4\) for some \(x_{0}\in\mathbb{Z}\). During the first half-unit interval \([0,1/2]\), we have \(\phi(t+1/2)=0\), and thus \(z_{2}^{\prime}(t)=0\). Consequently, \(z_{2}(t)=z_{2}(0)\), and hence \(r(z_{2})=x_{0}\). Therefore, the first equation of (7) becomes a targeting equation (5) on the interval \([0,1/2]\) where the target is \(\tilde{f}(r(z_{2}))=\tilde{f}(x_{0})\). Thus, we have \(\left|z_{1}(1/2)-\tilde{f}(x_{0})\right|\leq 1/4\).
In the next half-unit interval \([1/2,1]\), the behavior of \(z_{1}\) and \(z_{2}\) switches. We have \(\phi(t)=0\), and thus \(z_{1}(t)=z_{1}(1/2)\), which implies that \(r(z_{1})=\tilde{f}(x_{0})\). Hence, the second equation of (7) becomes a targeting equation (5) on the interval \([1/2,1]\) where the target is \(r(z_{1})=\tilde{f}(x_{0})\). Thus, we have \(\left|z_{2}(1)-\tilde{f}(x_{0})\right|\leq 1/4\).
In the next unit interval \([1,2]\), the same behavior repeats itself, so we have \(\left|z_{1}(2)-\tilde{f}(\tilde{f}(x_{0}))\right|\leq 1/4\) and \(\left|z_{2}(2)-\tilde{f}(\tilde{f}(x_{0}))\right|\leq 1/4\). In general, for any \(k\in\mathbb{N}\) and \(t\in[k,k+1/2]\), we will have \(\left|z_{1}(k)-\tilde{f}^{[k]}(x_{0})\right|\leq 1/4\), \(\left|z_{2}(k)-\tilde{f}^{[k]}(x_{0})\right|\leq 1/4\), and \(\left|z_{2}(t)-\tilde{f}^{[k]}(x_{0})\right|\leq 1/4\). In other words, the flow of (7) starting near any integer value stays close to the orbit of \(\tilde{f}\).
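The mechanism of (7) can also be watched numerically. In the sketch below the map being iterated is the hypothetical choice \(\tilde{f}(x)=x+1\) (extending \(n\mapsto n+1\)), \(\phi(t)=\max(0,\sin 2\pi t)\), \(r\) is implemented as rounding to the nearest integer (which agrees with the \(r\) above on the relevant range), and \(c=30\) satisfies (6) for \(\gamma=1/4\); after \(k\) time units both \(z_{1}(k)\) and \(z_{2}(k)\) are within \(1/4\) of \(\tilde{f}^{[k]}(x_{0})=x_{0}+k\).

```python
# Numerical illustration of (7): iterating the map f(x) = x + 1 with an ODE.
import math

f_tilde = lambda x: x + 1.0          # hypothetical map to iterate (successor map)
r = round                            # agrees with r(x) when x is within 1/4 of an integer
phi = lambda t: max(0.0, math.sin(2 * math.pi * t))
c = 30.0                             # satisfies (6) for gamma = 1/4 (c >= 8*pi)

def rhs(t, z):
    z1, z2 = z
    dz1 = c * (f_tilde(r(z2)) - z1) ** 3 * phi(t)
    dz2 = c * (r(z1) - z2) ** 3 * phi(t + 0.5)
    return (dz1, dz2)

def rk4_step(t, z, h):
    k1 = rhs(t, z)
    k2 = rhs(t + h / 2, tuple(x + h * k / 2 for x, k in zip(z, k1)))
    k3 = rhs(t + h / 2, tuple(x + h * k / 2 for x, k in zip(z, k2)))
    k4 = rhs(t + h, tuple(x + h * k for x, k in zip(z, k3)))
    return tuple(x + h * (a + 2 * b + 2 * cc + d) / 6
                 for x, a, b, cc, d in zip(z, k1, k2, k3, k4))

z, t, h = (0.1, -0.2), 0.0, 1e-4     # start within 1/4 of the integer x_0 = 0
for k in range(1, 6):                # simulate 5 unit intervals
    for _ in range(int(1 / h)):
        z = rk4_step(t, z, h)
        t += h
    print(k, round(z[0], 3), round(z[1], 3))   # both stay within 1/4 of k
```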
Notice also that by choosing \(\gamma=1/8\) instead of \(\gamma=1/4\), we can make (7) robust to perturbations of magnitude \(\leq 1/8\), since under these conditions the system
\[\left\{\begin{array}{l}\bar{z}_{1}^{\prime}=c(\tilde{f}(r(\bar{z}_{2}))-\bar {z}_{1})^{3}\phi(t)+\xi_{1}(t)\\ \bar{z}_{2}^{\prime}=c(r(\bar{z}_{1})-\bar{z}_{2})^{3}\phi(t+1/2)+\xi_{2}(t) \end{array}\right. \tag{8}\]
still satisfies the conditions \(\left|z_{1}(k)-\tilde{f}^{[k]}(x_{0})\right|\leq 1/4\), \(\left|z_{2}(k)-\tilde{f}^{[k]}(x_{0})\right|\leq 1/4\), and \(\left|z_{2}(t)-\tilde{f}^{[k]}(x_{0})\right|\leq 1/4\) for all \(k\in\mathbb{N}\) and \(t\in[k,k+1/2]\), where \(|\xi_{1}(t)|\leq 1/8\), \(|\xi_{2}(t)|\leq 1/8\) for all \(t\in\mathbb{R}\), and \(\left|z_{1}(0)-x_{0}\right|\leq 1/8\), \(\left|z_{2}(0)-x_{0}\right|\leq 1/8\).
Indeed, in \([0,1/2]\) we have \(\phi(t+1/2)=0\) and hence \(\bar{z}_{2}^{\prime}=\xi_{2}(t)\) which yields \(\left|z_{2}(t)-z_{2}(0)\right|\leq\int_{0}^{1/2}\left|\xi_{2}(t)\right|dt\leq(1 /2)(1/8)=1/16\) and thus \(\left|z_{2}(t)-x_{0}\right|\leq\left|z_{2}(t)-z_{2}(0)\right|+\left|z_{2}(0)-x_ {0}\right|\leq 1/16+1/8=3/16\) for all \(t\in[0,1/2]\). Therefore \(\tilde{f}(r(\bar{z}_{2}))=\tilde{f}(x_{0})\) in \([0,1/2]\). Using an analysis similar to that performed in [1, p. 346], where the "perturbed" targeting ODE
\[x^{\prime}=c(b-x)^{3}\phi(t)+\xi(t)\quad(\text{with }|\xi(t)|\leq\rho) \tag{9}\]
is studied, we conclude that if \(c\) satisfies (6), then \(\left|x(t_{1})-b\right|<\gamma+\rho\cdot(t_{1}-t_{0})\). In the present case \(t_{1}-t_{0}=1/2\) and \(\rho=1/8\), and thus \(\left|z_{1}(1/2)-\tilde{f}(x_{0})\right|\leq 1/8+(1/8)(1/2)=3/16\). Similarly, since \(\phi(t)=0\) on \([1/2,1]\), we conclude that \(\left|z_{1}(t)-\tilde{f}(x_{0})\right|\leq\left|z_{1}(1/2)-\tilde{f}(x_{0})\right|+\int_{1/2}^{1}\left|\xi_{1}(t)\right|dt\leq 3/16+(1/2)(1/8)=1/4\) for all \(t\in[1/2,1]\). Therefore \(r(\bar{z}_{1})=\tilde{f}(x_{0})\) in \([1/2,1]\) and thus \(\left|z_{2}(1)-\tilde{f}(x_{0})\right|\leq 1/8+(1/8)(1/2)=3/16\). By repeating this procedure on subsequent intervals, we conclude that \(\left|z_{1}(k)-\tilde{f}^{[k]}(x_{0})\right|\leq 1/4\), \(\left|z_{2}(k)-\tilde{f}^{[k]}(x_{0})\right|\leq 1/4\) and \(\left|z_{2}(t)-\tilde{f}^{[k]}(x_{0})\right|\leq 1/4\) for all \(k\in\mathbb{N}\) and \(t\in[k,k+1/2]\).
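The perturbed bound \(\left|x(t_{1})-b\right|<\gamma+\rho(t_{1}-t_{0})\) can also be spot-checked numerically. The sketch below uses a worst-case constant perturbation \(\xi(t)=\rho\) and the same illustrative choices of \(\phi\) and of the constant \(c\) as in the previous snippet (again assumptions of the sketch, not part of the construction).

```python
import numpy as np
from scipy.integrate import solve_ivp

def phi(t):
    # simple stand-in for phi (not C^infinity): positive on ]0,1/2[, zero on [1/2,1]
    return max(np.sin(2 * np.pi * (t % 1.0)), 0.0)

gamma, rho, b = 1/8, 1/8, 5.0
c = 1.05 / (2 * gamma**2 * (1 / np.pi))   # assumed form of (6) for this phi and gamma

# worst-case constant perturbation xi(t) = rho added to the targeting equation (9)
sol = solve_ivp(lambda t, x: [c * (b - x[0]) ** 3 * phi(t) + rho],
                [0, 0.5], [0.0], method="Radau", rtol=1e-9, atol=1e-11)

err = abs(sol.y[0, -1] - b)
print(err, "<", gamma + rho * 0.5)        # the error stays below 3/16
```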
The above procedure can be readily extended to iterate (with an ODE) the three-dimensional map \(\bar{f}:\mathbb{R}^{3}\to\mathbb{R}^{3}\) of the previous section by assuming that \(\bar{f}=(\bar{f}_{1},\bar{f}_{2},\bar{f}_{3})\), where \(\bar{f}_{i}:\mathbb{R}^{3}\to\mathbb{R}\) is a component of \(\bar{f}\) for \(i=1,2,3\). To accomplish this, it suffices to consider the ODE
\[\left\{\begin{array}{l}u_{1}^{\prime}=c(\bar{f}_{1}(r(v_{1}),r(v_{2}),r(v_{ 3}))-u_{1})^{3}\phi(t)\\ u_{2}^{\prime}=c(\bar{f}_{2}(r(v_{1}),r(v_{2}),r(v_{3}))-u_{2})^{3}\phi(t)\\ u_{3}^{\prime}=c(\bar{f}_{3}(r(v_{1}),r(v_{2}),r(v_{3}))-u_{3})^{3}\phi(t)\\ v_{1}^{\prime}=c(r(u_{1})-v_{1})^{3}\phi(t+1/2)\\ v_{2}^{\prime}=c(r(u_{2})-v_{2})^{3}\phi(t+1/2)\\ v_{3}^{\prime}=c(r(u_{3})-v_{3})^{3}\phi(t+1/2)\end{array}\right. \tag{10}\]
This ODE works like (7), but applies componentwise to each component \(\bar{f}_{1},\bar{f}_{2},\bar{f}_{3}\). However, a few problems still need to be addressed in order to achieve our desired results. Specifically, we must: (i) acquire an autonomous system of the form \(y^{\prime}=f(y)\) rather than a non-autonomous one like (10), (ii) demonstrate the existence of a sink with a non-computable basin of attraction, and (iii) establish that both the sink and the non-computability of the basin of attraction are resilient to perturbations.
To address problem (i), one possible solution would be to introduce a new variable \(z\) that satisfies \(z^{\prime}=1\) and \(z(0)=0\), effectively replacing \(t\) in (10) with \(z\). However, this approach would not be compatible with problem (ii) because the component \(z\) would grow infinitely and never converge to a value, which is necessary for the existence of a sink.
One potential solution to this problem is to introduce a new variable \(z\) such that \(z(0)=0\) and \(z^{\prime}=1\) until the Turing machine \(M\) halts, and then set \(z^{\prime}=-z\) afterwards so that the dynamics of \(z\) converge to the sink at \(0\) in one-dimensional dynamics. Since \(z\) will replace \(t\) as the argument of \(\phi\) in (10), we also need to
modify \(\phi\) such that when \(M\) halts, the components of \(u=(u_{1},u_{2},u_{3})\) and \(v=(v_{1},v_{2},v_{3})\) still converge to a sink that corresponds to the unique halting configuration of \(M\).
In order to describe the dynamics of \(z\), we first need to introduce several auxiliary tools. Consider the \(C^{\infty}\) function \(\chi\) defined by
\[\chi(x)=\left\{\begin{array}{ll}0&\quad\mbox{if $x\leq 0$ or $x\geq 1$}\\ e^{\frac{1}{x(x-1)}}&\quad\mbox{if $0<x<1$}.\end{array}\right. \tag{11}\]
Notice that \(\chi\), as well as all its derivatives, is computable. Now consider the \(C^{\infty}\) function \(\zeta\) defined by \(\zeta(0)=0\) and
\[\zeta^{\prime}(x)=c\chi(x)\]
where \(c=\left(\int_{0}^{1}e^{\frac{1}{x(x-1)}}dx\right)^{-1}\), which is a \(C^{\infty}\) version of Heaviside's function (see also [1, p. 4]) since \(\zeta(x)=0\) when \(x\leq 0\), \(\zeta(x)=1\) when \(x\geq 1\), and \(0<\zeta(x)<1\) when \(0<x<1\). Notice that \(\zeta\) is computable since the solution of an ODE with computable data is computable [1, 1], [1]. Similar properties are trivially obtained for the function \(\zeta_{a,b}\), where \(a<b\), defined by
\[\zeta_{a,b}(x)=\zeta\left(\frac{x-a}{b-a}\right)=\left\{\begin{array}{ll}0& \quad\mbox{if $x\leq a$}\\ *&\quad\mbox{if $a<x<b$}\\ 1&\quad\mbox{if $x\geq b$}\end{array}\right.\]
where \(*\) is a value in \(]0,1[\) that depends on \(x\). Let us now update the function \(\phi\) to be used in (10). Recall that, in the previous section, we introduced the map \(\bar{f}:\mathbb{R}^{3}\to\mathbb{R}^{3}\) (\(\bar{f}\) is called \(f\) in the previous section), which simulates a Turing machine by encoding each configuration as (an approximation of) a triplet \((w_{1},w_{2},q)\in\mathbb{N}^{3}\) (for more details, see [1]). Here, \(w_{1}\) encodes the part of the tape to the left of the tape head (excluding the infinite sequence of consecutive blank symbols), \(w_{2}\) encodes the part of the tape from the location of the tape head up to the right, and \(q\) encodes the state. We typically assume that \(1,\ldots,m\) encode the states, and \(m\) represents the halting state. In (10), \(v_{3}\) gives the current state of the Turing machine \(M\), i.e., \(v_{3}(t)=q_{k}\) for all \(t\in[k,k+1/2]\) if the state of \(M\) after \(k\) steps is \(q_{k}\). Additionally, \(v_{3}(t)\in[q_{k},q_{k+1}]\) (\(v_{3}(t)\in[q_{k+1},q_{k}]\)) if \(q_{k}\leq q_{k+1}\) (\(q_{k+1}<q_{k}\), respectively) and \(t\in[k+1/2,k+1]\). Define
\[\bar{\phi}(t,v_{3})=\phi(t)+\zeta_{m-1/4,m-3/16}(v_{3}). \tag{12}\]
We note that \(\phi(x),\zeta_{a,b}(x)\in[0,1]\) for any \(x\in\mathbb{R}\). Moreover, if \(M\) halts in \(k\) steps, then \(\bar{\phi}(t,v_{3}(t))=\phi(t)\) for \(t\leq k-1/2\), and \(1\leq\bar{\phi}(t,v_{3}(t))\leq 2\) when \(t\geq k\). Let us now analyze what happens when \(t\in[k-1/2,k]\). We observe that \(v_{3}(t)\) will increase in this interval from the value of approximately \(q_{k-1}\) until it reaches a \(1/4\)-vicinity of \(q_{k}=m\). Until that happens, \(\bar{\phi}(t)=\phi(t)\). Once \(v_{3}(t)\) is in \([m-1/4,m-3/16]\), we get that \(\bar{\phi}(t,v_{3}(t))=\phi(t)+\zeta_{m-1/4,m-3/16}(v_{3}(t))>\phi(t)\), and if we use \(\bar{\phi}(t,v_{3}(t))\) instead of \(\phi(t)\) in the first three equations of (10), the respective targeting equations still have the same dynamics but with a faster
speed of convergence. Thus, because the targeting error is \(\gamma=1/8\), at a certain time \(t^{*}\) we will have \(v_{3}(t)\geq m-3/16\) for all \(t\geq t^{*}\). From this point on, we will have (note that \(1\geq\phi(t)\))
\[\bar{\phi}(t,v_{3})=\phi(t)+1\geq 1\geq\phi(t) \tag{13}\]
and thus all \(6\) equations of (10) will become "locked" with respect to their convergence, regardless of the value of \(\phi(t)\) (and \(\phi(t+1/2)\)). In other words, for \(t\geq t^{*}\), the convergence of the \(6\) equations of (10) is guaranteed even if \(\phi(t)=0\) or \(\phi(t+1/2)=0\) for all \(t\geq t^{*}\). This means that from this moment \(t\) can take any value. In particular, from that moment we can replace \(t\) by a variable \(z\) which converges to \(0\), as desired from our considerations described above.
Let
\[z^{\prime}=1-\zeta_{m-3/16,m-1/8}(v_{3}(t))(z+1),\quad z(0)=0 \tag{14}\]
Notice that \(z^{\prime}=1\) for all \(t\leq t^{*}\). Hence \(z(t)=t\) for all \(t\leq t^{*}\). Once \(v_{3}(t)\) reaches the value \(m-1/8\) at time \(t^{**}>t^{*}\), we have \(v_{3}(t)\geq m-1/8\) for all \(t\geq t^{**}\). Hence \(z^{\prime}=-z\) for all \(t\geq t^{**}\) and thus \(z\) will converge exponentially fast to \(0\) for \(t\geq t^{**}\).
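The switching behavior of (14) is easy to visualise with a toy profile for \(v_{3}\). In the sketch below the smooth step \(\zeta_{a,b}\) is replaced by a \(C^{1}\) polynomial stand-in, \(m=4\), and \(v_{3}(t)\) is a mock signal that reaches the halting state around \(t=5\); none of these choices come from the construction itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 4.0

def step(x, a, b):
    # C^1 polynomial stand-in for the smooth step zeta_{a,b}
    s = np.clip((x - a) / (b - a), 0.0, 1.0)
    return 3 * s**2 - 2 * s**3

def v3(t):
    # mock "state" component: stays at 1, then ramps up to m between t = 4 and t = 5
    return 1.0 + (m - 1.0) * step(t, 4.0, 5.0)

sol = solve_ivp(lambda t, z: [1.0 - step(v3(t), m - 3/16, m - 1/8) * (z[0] + 1.0)],
                [0, 12], [0.0], max_step=0.01)

print(np.interp([2.0, 5.0, 12.0], sol.t, sol.y[0]))  # linear growth, then exponential decay toward 0
```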
Let us now show that \(x_{halt}=(0,0,m,0,0,m,0)\in\mathbb{R}^{7}\) is a sink (recall that \((w_{1},w_{2},q)\in\mathbb{N}^{3}\) encodes a configuration when simulating the Turing machine with the map \(\bar{f}:\mathbb{R}^{3}\to\mathbb{R}^{3}\)). We may assume that the machine cleans its tape before halting, thus generating the halting configuration \((0,0,m)\in\mathbb{N}^{3}\). First we should note that, as pointed out in [1, Section 5.5], all \(6\) equations of (10) are variations of the ODE
\[z^{\prime}=-z^{3}\]
which has an equilibrium point at \(z=0\), but is not hyperbolic, and thus \(z=0\) cannot be a sink. Therefore \(x_{halt}\) cannot be a sink of (10) when (14) is added to (10) and \(\phi(t)\) and \(\phi(t+1/2)\) are replaced by \(\bar{\phi}(z,v_{3})\) and \(\bar{\phi}(z+1/2,v_{3})\), respectively. This can be solved as in [1] by taking an ODE with the format \(y^{\prime}=-y^{3}-y\). Hence the system (10) must be updated to
\[\left\{\begin{array}{l}u_{1}^{\prime}=c((\bar{f}_{1}(r(v_{1}),r(v_{2}),r(v_{3}))-u_{1})^{3}+\bar{f}_{1}(r(v_{1}),r(v_{2}),r(v_{3}))-u_{1})\bar{\phi}(z,v_{3})\\ u_{2}^{\prime}=c((\bar{f}_{2}(r(v_{1}),r(v_{2}),r(v_{3}))-u_{2})^{3}+\bar{f}_{2}(r(v_{1}),r(v_{2}),r(v_{3}))-u_{2})\bar{\phi}(z,v_{3})\\ u_{3}^{\prime}=c((\bar{f}_{3}(r(v_{1}),r(v_{2}),r(v_{3}))-u_{3})^{3}+\bar{f}_{3}(r(v_{1}),r(v_{2}),r(v_{3}))-u_{3})\bar{\phi}(z,v_{3})\\ v_{1}^{\prime}=c((r(u_{1})-v_{1})^{3}+r(u_{1})-v_{1})\bar{\phi}(z+1/2,v_{3})\\ v_{2}^{\prime}=c((r(u_{2})-v_{2})^{3}+r(u_{2})-v_{2})\bar{\phi}(z+1/2,v_{3})\\ v_{3}^{\prime}=c((r(u_{3})-v_{3})^{3}+r(u_{3})-v_{3})\bar{\phi}(z+1/2,v_{3})\\ z^{\prime}=1-\zeta_{m-3/16,m-1/8}(v_{3})(z+1).\end{array}\right. \tag{15}\]
To show that \(x_{halt}\) is a sink of (15), we first observe that \(x_{halt}\) is an equilibrium point of (15). If we are able to show that the Jacobian matrix \(A\) of (15) at \(x_{halt}\) has only negative eigenvalues, then \(x_{halt}\) will be a sink. A straightforward
calculation shows that
\[A=\left[\begin{array}{ccccccc}-\bar{\phi}(0,m)&0&0&0&0&0&0\\ 0&-\bar{\phi}(0,m)&0&0&0&0&0\\ 0&0&-\bar{\phi}(0,m)&0&0&0&0\\ 0&0&0&-\bar{\phi}(0,m)&0&0&0\\ 0&0&0&0&-\bar{\phi}(0,m)&0&0\\ 0&0&0&0&0&-\bar{\phi}(0,m)&0\\ 0&0&0&0&0&0&-\zeta_{m-3/16,m-1/8}(m)\end{array}\right]\]
Thus \(A\) has two eigenvalues: \(-\bar{\phi}(0,m)=-(\phi(0)+1)\leq-1\) and \(-\zeta_{m-3/16,m-1/8}(m)=-1\). Since \(A\) only has negative eigenvalues, we conclude that \(x_{halt}\) is a sink of (15).
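The hyperbolicity issue that motivated the extra linear term can also be confirmed symbolically; the following snippet (sympy assumed) merely checks that the linearization of \(z^{\prime}=-z^{3}\) at the origin vanishes while that of \(z^{\prime}=-z^{3}-z\) equals \(-1\), matching the discussion above.

```python
import sympy as sp

z = sp.symbols('z')
print(sp.diff(-z**3, z).subs(z, 0))        # 0  -> equilibrium not hyperbolic
print(sp.diff(-z**3 - z, z).subs(z, 0))    # -1 -> hyperbolic sink
```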
We will now demonstrate that the basin of attraction of \(x_{halt}\) is non-computable. Let \(M\) be a universal Turing machine with a transition function simulated by \(\bar{f}=(\bar{f}_{1},\bar{f}_{2},\bar{f}_{3})\). Suppose that the initial state of \(M\) is encoded as the number \(1\) (where the states are encoded as integers \(1,\ldots,m\) and \(m\) is assumed to be the unique halting state). Then, on input \(w\), the initial configuration of \(M\) is encoded as \((0,w,1)\in\mathbb{N}^{3}\). \(M\) halts on input \(w\in\mathbb{N}\) if and only if \(\bar{f}^{[k]}(0,w,1)\) converges to \(\bar{x}_{halt}=(0,0,m)\), and the same is true for any input \(x\in\mathbb{R}^{3}\) satisfying \(\|x-(0,w,1)\|\leq 1/4\).
As shown in the previous section, the basin of attraction of \(\bar{x}_{halt}\) for the discrete dynamical system defined by \(\bar{f}\) cannot be computable. In fact, if the basin of attraction of \(\bar{x}_{halt}\) were computable, then we could solve the Halting problem as follows: compute a \(1/8\)-approximation of the basin of attraction of \(\bar{x}_{halt}\). To decide whether \(M\) halts with input \(w\), check whether \((0,w,1)\) belongs to that approximation. Since the halting problem is not computable, the same should be true for the basin of attraction of \(\bar{x}_{halt}\).
We can apply the same idea to ODEs by using the robust iteration of \(\bar{f}\) via the ODE (15). However, to show a similar result, we need to prove that any \(x\in\mathbb{R}^{7}\) satisfying \(\|x-(0,w,1,0,w,1,0)\|\leq 1/8\) will converge to \(\bar{x}_{halt}\) if and only if \(M\) halts with input \(w\). In other words, we need robustness to perturbations in the initial condition to demonstrate the non-computability of the basin of attraction of \(x_{halt}\), which shows that trajectories starting in a neighborhood of a configuration encoding an initial configuration will either all converge to \(x_{halt}\) (if \(M\) halts with the corresponding input) or none of these trajectories will converge to \(x_{halt}\) (if \(M\) does not halt with the corresponding input).
While the robustness of the convergence to the sink is ensured for the first six components of \((0,w,1,0,w,1,0)\) due to the robustness of \(\bar{f}\) (at least until \(M\) halts), the same does not hold for the last component \(z\), which concerns time. If we start at \(t=-1/4\) or \(t=1/4\), we begin the periodic cycle required to update the iteration of \(\bar{f}\) too soon or too late. To address this problem, we modify the function \(\phi\) (and thus \(\bar{\phi}\) due to (12)) to ensure that \(\phi\) has the additional property that \(\phi(t)=0\) when \(t\in[0,1/4]\), which gives robustness to "late" starts (i.e. when \(z\in]0,1/4]\)). Note also that \(\phi(t)=0\) when \(t\in[-1/2,0]\), since \(\phi\) is periodic, which ensures robustness to "premature" starts (i.e. when \(z\in[-1/4,0]\)). Since
\(\phi\) is periodic with period \(1\) and must satisfy \(\phi(t)=0\) when \(t\in[0,1/4]\cup[1/2,1]\) and \(\phi(t)>0\) when \(t\in]1/4,1/2[\), we take
\[\phi(t)=\zeta\left(\sin\left(2\pi t-\frac{\pi}{4}\right)-\frac{1}{\sqrt{2}} \right).\]
Indeed, in the interval \([0,1]\), \(\sin\left(2\pi t-\frac{\pi}{4}\right)\in[1/\sqrt{2},1]\) only on \([1/4,1/2]\), which implies that \(\phi(t)=0\) when \(t\in[0,1/4]\cup[1/2,1]\) and \(\phi(t)>0\) when \(t\in]1/4,1/2[\), due to the properties of \(\zeta\). With this modification, we have ensured robustness to perturbations in the initial condition for all components of \(x\) including time. We can now conclude, similarly as we did for the map \(\bar{f}\), that the basin of attraction of (15) must be non-computable.
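The ingredients \(\chi\), \(\zeta\), \(\zeta_{a,b}\), and the modified \(\phi\) can be evaluated numerically as a sanity check. The sketch below (scipy assumed) computes \(\zeta\) by quadrature and verifies that \(\phi\) is positive exactly on \(]1/4,1/2[\) within one period; it is an illustration only, not part of the construction.

```python
import numpy as np
from scipy.integrate import quad

def chi(x):
    # C^infinity bump (11): supported on ]0,1[
    return np.exp(1.0 / (x * (x - 1.0))) if 0.0 < x < 1.0 else 0.0

c_norm = 1.0 / quad(chi, 0.0, 1.0)[0]       # normalization so that zeta(1) = 1

def zeta(x):
    # smooth step: 0 for x <= 0, 1 for x >= 1, strictly in between otherwise
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return c_norm * quad(chi, 0.0, x)[0]

def zeta_ab(x, a, b):
    return zeta((x - a) / (b - a))

def phi(t):
    # the modified phi used for robustness to shifted starts
    return zeta(np.sin(2 * np.pi * t - np.pi / 4) - 1.0 / np.sqrt(2))

ts = np.linspace(0.0, 1.0, 4001)
support = [t for t in ts if phi(t) > 0.0]
print(zeta(0.5))                      # 0.5, by symmetry of chi about 1/2
print(min(support), max(support))     # approximately 0.25 and 0.5
```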
In order to demonstrate that the dynamics of (15) remain robust even when subjected to perturbations, let us consider a function \(g:\mathbb{R}^{7}\rightarrow\mathbb{R}^{7}\) such that \(\left\|f-g\right\|_{1}\leq 1/16\), where (15) is expressed as \(x^{\prime}=f(x)\). As long as \(M\) has not yet halted, the dynamics of \(x^{\prime}=f(x)\) remain robust against perturbations of \(f\), with the exception of the component \(z\), which concerns time and is treated separately below. This is because the map \(\bar{f}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\) can robustly simulate Turing machines, and the dynamics of (15) are themselves robust against perturbations of magnitude \(\leq 1/16\), as previously demonstrated in the analysis of (8). We should note that we do not use \(\rho=1/8\) as a bound for \(\xi(t)\) in (9) since, as previously seen, the total targeting error \(\left|x(t)-b\right|\) is bounded by \(\gamma+\rho(t_{1}-t_{0})\). However, when \(z\) is perturbed, as we will see, we may not have \(t_{1}-t_{0}=1/2\), but instead \(t_{1}-t_{0}\in[3/4\cdot 1/2,5/4\cdot 1/2]=[3/8,5/8]\). Using \(\rho=1/16\) instead of \(\rho=1/8\) compensates for this issue.
Under these conditions, we can still use \(y^{\prime}=g(y)\) to simulate \(M\) until it halts. If we add a perturbation of magnitude \(\leq 1/4\) to the right-hand side of the dynamics of \(z\) in (15), we can conclude that \(3/4\leq z^{\prime}(t)\leq 5/4\), meaning that \(z(t)\) will remain strictly increasing and can be used as the "time variable" \(t\) when iterating \(\bar{f}\). However, there is a potential issue when updating the iteration cycles of \(\bar{f}\) with the ODE (15). As previously seen, these cycles occur over consecutive half-unit time intervals. The issue is that the first half-unit interval \([0,1/2]\) in a perturbed version of (15) will correspond to time values \(t_{1}>t_{0}\) such that \(z(t_{0})=0\) and \(z(t_{1})=1/2\). Therefore, when determining the value of \(c\) for (15), we must use \(\int_{t_{0}}^{t_{1}}\phi(z(t))dt\) instead of \(\int_{0}^{1/2}\phi(t)dt\). This will depend on the perturbed value of \(z(t)\), which could potentially lead to issues. However, from \(3/4\leq z^{\prime}(t)\leq 5/4\) (which implies that \(t_{1}-t_{0}\in[3/4\cdot 1/2,5/4\cdot 1/2]\) as assumed above) we get \(4/3\geq 1/z^{\prime}(t)\), and thus
\[\int_{t_{0}}^{t_{1}}\phi(z(t))dt=\int_{t_{0}}^{t_{1}}\phi(z(t))z^{\prime}(t) \frac{1}{z^{\prime}(t)}dt\]
which implies that, by the change of variables \(\tau=z(t)\) (recall that \(\phi(t)\geq 0\) for all \(t\in\mathbb{R}\))
\[0<\int_{t_{0}}^{t_{1}}\phi(z(t))dt\leq\frac{4}{3}\int_{0}^{1/2}\phi(\tau)\,d\tau.\]
Hence, if we take
\[c\geq\frac{1}{2\gamma^{2}\frac{4}{3}\int_{t_{0}}^{t_{1}}\phi(t)dt}=\frac{3}{8 \gamma^{2}\int_{t_{0}}^{t_{1}}\phi(t)dt} \tag{16}\]
we will have enough time to appropriately update each iteration, even if the "new" time variable \(z(t)\) evolves faster than \(t\), thus ensuring robustness to perturbations of the dynamics of (15), at least until \(M\) halts.
Now let's address the main concern: what happens after \(M\) halts. We will choose \(\gamma=1/16\) to ensure that if \(M\) halts with input \(w\), then any trajectory of the perturbed system \(y^{\prime}=g(y)\) starting in \(B(c_{w},1/8)\), where \(c_{w}\in\mathbb{N}^{7}\) is the initial configuration associated with input \(w\), will enter \(B(x_{halt},1/4)\) and stay there, where \(x_{halt}=(0,0,m,0,0,m,0)\) is the halting configuration. Conversely, if \(M\) does not halt with input \(w\), then no trajectory of the perturbed system \(y^{\prime}=g(y)\) starting in \(B(c_{w},1/8)\) will enter \(B(x_{halt},1/4)\) (recall that the total error of the perturbed targeting equation (9) is given by \(\left|x(t_{1})-b\right|<\gamma+\rho(t_{1}-t_{0})\) when (15) is actively simulating \(M\), i.e., until \(M\) halts).
We first observe that, once the machine \(M\) halts at time \(t^{*}\), we can infer from equation (13) that \(2\geq\bar{\phi}(z(t),v_{3}(t))\geq 1\) and \(2\geq\bar{\phi}(-z(t),v_{3}(t))\geq 1\). Now, if we rewrite equation (15) as \(x^{\prime}=f(x)\), we can show that for any \(x\in B(x_{halt},1/4)=\{x:\left\|x-x_{halt}\right\|\leq 1/4\}\), we have (by using the standard inner product and noticing the expressions on the right-hand side of (15)) that:
\[\left\langle f(x)-x_{halt},x-x_{halt}\right\rangle\leq-c\left\|x-x_{halt} \right\|_{2}^{2}\leq-c\left\|x-x_{halt}\right\|^{2}.\]
(Recall also the Euclidean norm \(\left\|(x_{1},\ldots,x_{n})\right\|_{2}=\sqrt{x_{1}^{2}+\ldots+x_{n}^{2}}\) for \((x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) and that \(\left\|(x_{1},\ldots,x_{n})\right\|\leq\left\|(x_{1},\ldots,x_{n})\right\|_{2} \leq\sqrt{n}\left\|(x_{1},\ldots,x_{n})\right\|\), where \(\left\|\cdot\right\|\) is the max-norm.) As \(c\) must satisfy (16), we can assume without loss of generality that \(c\geq 1\), which yields
\[\left\langle f(x)-x_{halt},x-x_{halt}\right\rangle\leq-\left\|x-x_{halt} \right\|^{2} \tag{17}\]
for all \(x\in B(x_{halt},1/4)\).
By standard results in dynamical systems (see e.g., [10, Theorems 1 and 2 of p. 305]), there exists some \(\varepsilon>0\) such that if \(\left\|g-f\right\|_{1}\leq\varepsilon\) (in fact, this condition only needs to be satisfied on \(B(x_{halt},1/4)\)), then \(g\) will also have a sink \(s_{g}\) in the interior of \(B(x_{halt},1/16)\). We now assume that \(\left\|g-f\right\|_{1}\leq\min(1/16,\varepsilon)\) on \(B(x_{halt},1/4)\).
Next, let us assume that \(x\in B(s_{g},3/16)\). Since \(\left\|s_{g}-x_{halt}\right\|\leq 1/16\), we conclude that \(\left\|x-x_{halt}\right\|\leq\left\|x-s_{g}\right\|+\left\|s_{g}-x_{halt} \right\|\leq 3/16+1/16=1/4\). Therefore, \(x\in B(x_{halt},1/4)\), which implies that (17) holds for every \(x\in B(s_{g},3/16)\). In what follows, we assume that \(x\in B(s_{g},3/16)\). Using (17), we obtain:
\[\left\langle g(x)-x_{halt},x-s_{g}\right\rangle\] \[=\left\langle f(x+x_{halt}-s_{g})-x_{halt},x-s_{g}\right\rangle+ \left\langle g(x)-f(x+x_{halt}-s_{g}),x-s_{g}\right\rangle\] \[=\left\langle f(x+x_{halt}-s_{g})-x_{halt},(x+x_{halt}-s_{g})-x_{ halt}\right\rangle+\left\langle g(x)-f(x+x_{halt}-s_{g}),x-s_{g}\right\rangle\] \[\leq-\left\|x+x_{halt}-s_{g}-x_{halt}\right\|^{2}+\left\langle g (x)-f(x+x_{halt}-s_{g}),x-s_{g}\right\rangle\] \[\leq-\left\|x-s_{g}\right\|^{2}+\left\langle g(x)-f(x+x_{halt}-s_ {g}),x-s_{g}\right\rangle. \tag{18}\]
Furthermore \(\alpha(x)=g(x)-f(x+x_{halt}-s_{g})\) is \(0\) when \(x=s_{g}\) and
\[\left\|D\alpha(x)\right\|\leq\left\|Dg(x)-Df(x)\right\|+\left\|Df(x)-Df(x+x_{ halt}-s_{g})\right\|. \tag{19}\]
Since \(\left\|g-f\right\|_{1}\leq\min(1/16,\varepsilon)\) on \(B(x_{halt},1/4)\), this implies that \(\left\|Dg(x)-Df(x)\right\|\leq 1/16\) on \(B(x_{halt},1/4)\). Moreover, because \(Df\) is continuous on \(B(x_{halt},1/4)\), one can determine some \(\delta>0\) such that \(\left\|Df(x)-Df(y)\right\|\leq 1/16\) for all \(x,y\in B(x_{halt},1/4)\) satisfying \(\left\|x-y\right\|\leq\delta\). In particular, if \(\left\|x_{halt}-s_{g}\right\|\leq\delta\), then (19) yields \(\left\|D\alpha(x)\right\|\leq 1/16+1/16=1/8\). By classical results (e.g. [13, Theorems 1 and 2 of p. 305]) we can choose \(\varepsilon_{2}>0\) such that \(\left\|g-f\right\|_{1}\leq\varepsilon_{2}\) implies \(\left\|x_{halt}-s_{g}\right\|\leq\delta\) as required. Thus when \(\left\|g-f\right\|_{1}\leq\min\{1/16,\delta,\varepsilon_{2}\}\), we get that \(1/8\) is a Lipschitz constant for \(\alpha\) on \(B(x_{halt},1/4)\) and thus
\[\left\|\alpha(x)\right\|=\left\|\alpha(x)-\alpha(s_{g})\right\|\leq 1/8\left\|x-s_ {g}\right\|.\]
This last inequality and the Cauchy-Schwarz inequality imply that
\[\left|\left\langle g(x)-f(x+x_{halt}-s_{g}),x-s_{g}\right\rangle\right| =\left|\left\langle\alpha(x),x-s_{g}\right\rangle\right|\] \[\leq\left\|\alpha(x)\right\|_{2}\cdot\left\|x-s_{g}\right\|_{2}\] \[\leq 7\left\|\alpha(x)\right\|\cdot\left\|x-s_{g}\right\|\] \[\leq\frac{7}{8}\left\|x-s_{g}\right\|^{2}.\]
This, together with (18), yields
\[\left\langle g(x),x-s_{g}\right\rangle \leq-\left\|x-s_{g}\right\|^{2}+\left\langle g(x)-f(x+x_{halt}-s_ {g}),x-s_{g}\right\rangle\] \[\leq-\left\|x-s_{g}\right\|^{2}+\frac{7}{8}\left\|x-s_{g}\right\| ^{2}\] \[\leq-\frac{1}{8}\left\|x-s_{g}\right\|^{2}\]
In particular this shows that \(g(x)\) always points inwards inside \(B(s_{g},3/16)\).
Since it is well known that
\[\frac{d}{dt}\left\|y(t)\right\|_{2}=\frac{1}{\left\|y(t)\right\|_{2}}\left\langle \frac{dy(t)}{dt},y(t)\right\rangle\]
we get from the last inequality that
\[\frac{d}{dt}\left\|x-s_{g}\right\|_{2} =\frac{1}{\left\|x-s_{g}\right\|_{2}}\left\langle x^{\prime},x-s_ {g}\right\rangle\] \[=\frac{1}{\left\|x-s_{g}\right\|_{2}}\left\langle g(x),x-s_{g}\right\rangle\] \[\leq\frac{1}{\left\|x-s_{g}\right\|}\left\langle g(x),x-s_{g}\right\rangle\] \[\leq-\frac{1}{8}\left\|x-s_{g}\right\|\]
which shows that \(\left\|x-s_{g}\right\|_{2}\) converges exponentially fast to \(0\) whenever \(x\in B\left(s_{g},3/16\right)\). Therefore \(B(s_{g},3/16)\) is contained in the basin of attraction of \(s_{g}\). In particular, because \(B(x_{halt},1/8)\subseteq B(s_{g},3/16)\), we conclude that if an initial configuration \(c_{w}=(0,w,1,0,w,1,0)\in\mathbb{N}^{7}\) is such that \(M\) halts with input \(w\), then a trajectory starting on \(B(c_{w},1/4)\) of the perturbed system \(x^{\prime}=g(x)\) of (15) will reach \(B(x_{halt},1/4)\), and thus \(B(s_{g},3/16)\), iff \(M\) halts with input \(w\). Furthermore, because any trajectory that enters \(B(x_{halt},1/8)\subseteq B(s_{g},3/16)\) will converge to \(s_{g}\), it follows that \(M\) halts with input \(w\) iff \(B(c_{w},1/4)\) is inside the basin of attraction of \(s_{g}\) for \(x^{\prime}=g(x)\) whenever \(\left\|f-g\right\|\leq 1/4\) over \(\mathbb{R}^{7}\) and \(\left\|f-g\right\|_{1}\leq\min\{1/16,\delta,\varepsilon_{2}\}\) over \(B(x_{halt},1/4)\). Indeed, if \(M\) does not halt with input \(w\), then any trajectory which starts on \(B(c_{w},1/4)\) will never enter \(B(x_{halt},1/4)\) under the dynamics of \(x^{\prime}=g(x)\) and thus never enter \(B(s_{g},3/16)\), otherwise it would converge to \(s_{g}\) and then enter \(B(s_{g},1/16)\subseteq B(x_{halt},1/4)\), a contradiction. Using similar arguments to those used for \(f\), we conclude that the basin of attraction for \(g\) is not computable.
We briefly mention that in the context of the continuous dynamical system \(y^{\prime}=f(y)\), the function \(f\) is \(C^{\infty}\) (infinitely differentiable) rather than analytic, as is the case in the discrete counterpart. The absence of analyticity in \(f\) stems from the function \(\phi\) employed to construct it (recall that \(\phi\) vanishes identically on intervals of positive length, e.g. on \([k+\frac{1}{2},k+1]\) for integers \(k\), which is impossible for a non-zero analytic function). However, by employing a more sophisticated \(\phi\) as described in [10], it becomes possible to enhance \(f\) to an analytic function. For the sake of readability, we have chosen to present an example of a \(C^{\infty}\) system.
Proof of Theorem 3 - Basins of attraction of structurally stable planar systems are uniformly computable
In the previous section, we demonstrated the existence of a \(C^{\infty}\) and computable system (1) that possesses a computable sink with a non-computable basin of attraction. Moreover, this non-computability persists throughout a neighborhood of \(f\). It should be noted that a dynamical system is locally stable near a sink. Thus our example shows that local stability at a sink does not guarantee the existence of a numerical algorithm capable of computing its basin of attraction.
In this section, we investigate the relationship between the global stability of a planar structurally stable system (1) and the computability of its basins of attraction. We demonstrate that if the system is globally stable, then the basins of attraction of all its sinks are computable. This result highlights that global stability is not only a strong analytical property but also gives rise to strong computability regarding the computation of basins of attraction. Moreover, it shows that strong computability is "typical" on compact planar systems since it is well known (see e.g. [12, Theorem 3 on p. 325]) that in this case the set of \(C^{1}\) structurally stable vector fields is open and dense over the set of \(C^{1}\) vector fields.
We begin this section by introducing some preliminary definitions. Let \(K\) be a closed disk in \(\mathbb{R}^{2}\) centered at the origin with a rational radius. In particular, let \(\mathbb{D}\) denote the closed unit disk of \(\mathbb{R}^{2}\). We define \(\mathcal{V}(K)\) to be the set of all \(C^{1}\) vector fields mapping \(K\) to \(\mathbb{R}^{2}\) that point inwards along the boundary of \(K\). Furthermore, we define \(\mathcal{O}_{2}\) to be the set of all open subsets of \(\mathbb{R}^{2}\) equipped with the topology generated by the open rational disks, i.e., disks with rational centers and rational radii, as a subbase.
We briefly recall that a planar dynamical system \(dx/dt=f(x)\), where \(f\in\mathcal{V}(K)\), is considered structurally stable if there exists some \(\varepsilon>0\) such that for all \(g\in C^{1}(K)\) satisfying \(\left\|f-g\right\|_{1}\leq\varepsilon\), the trajectories of \(dy/dt=g(y)\) are homeomorphic to the trajectories of \(dx/dt=f(x)\). In other words, there exists a homeomorphism \(h\) such that if \(\gamma\) is a trajectory of \(dx/dt=f(x)\), then \(h(\gamma)\) is a trajectory of \(dy/dt=g(y)\). It should be noted that the homeomorphism \(h\) is required to preserve the orientation of trajectories over time.
For a structurally stable planar system \(x^{\prime}=f(x)\) defined on the closed disk \(K\), it has only finitely many equilibrium points and periodic orbits, and all of them are hyperbolic (see [10]). Recall from Section 2.2 that a point \(x_{0}\in K\) is called an equilibrium point of the system if \(f(x)=0\), since any trajectory starting at an equilibrium stays there for all \(t\in\mathbb{R}\). Recall also that an equilibrium point \(x_{0}\) is called hyperbolic if all the eigenvalues of \(Df(x_{0})\) have non-zero real parts. If both eigenvalues of \(Df(x_{0})\) have negative real parts, then it can be shown that \(x_{0}\) is a sink. A sink attracts nearby trajectories. If both eigenvalues have positive real parts, then \(x_{0}\) is called a source. A source repels nearby trajectories. If the real parts of the eigenvalues have opposite signs, then \(x_{0}\) is called a saddle (see Figure 1 for a picture of a saddle point). A saddle attracts some points (those lying in the stable manifold, which is a one-dimensional manifold for the planar systems), repels other points (those lying in the unstable manifold, which is also a one-dimensional manifold for the planar systems, transversal to the stable manifold), and all trajectories starting in a neighborhood of a saddle point but not lying on the stable manifold will eventually leave this neighborhood. A periodic orbit (or limit cycle) is a closed curve \(\gamma\) with the property that there is some \(T>0\) such that \(\phi(f,x)(T)=x\) for any \(x\in\gamma\). Hyperbolic periodic orbits have properties similar to hyperbolic equilibria. For a planar system, there are only attracting or repelling hyperbolic periodic orbits. See [10, p. 225] for more details.
In this section, we demonstrate the existence of an algorithm that can compute the basins of attraction of sinks for any structurally stable planar vector field defined on a compact disk \(K\) of \(\mathbb{R}^{2}\). Furthermore, this computation is uniform across the entire set of such vector fields.
In Theorem 2 below, we consider the case where \(K=\mathbb{D}\) for simplicity, but the same argument applies to any closed disk with a rational radius. Before stating and proving Theorem 2, we present two lemmas, the proofs of which can be found in [11]. Let \(\mathcal{SS}_{2}\subset\mathcal{V}(\mathbb{D})\) be the set of all \(C^{1}\) structurally stable planar vector fields defined on \(\mathbb{D}\).
**Lemma 1**.: _The map \(\Psi_{N}:\mathcal{SS}_{2}\to\mathbb{N}\), \(f\mapsto\Psi_{N}(f)\), is computable, where \(\Psi_{N}(f)\)
_is the number of sinks of \(f\) in \(\mathbb{D}\)._
**Lemma 2**.: _The map \(\Psi_{S}:\mathcal{SS}_{2}\times\mathbb{N}\to\mathbb{R}^{2}\cup\{\emptyset\}\) is computable, where_
\[\left\{\begin{array}{ll}(f,i)\mapsto\emptyset&\mbox{ if }i=0\mbox{ or }i>\Psi_{N}(f)\\ (f,i)\mapsto\mbox{ith sink of }f&\mbox{ if }1\leq i\leq\Psi_{N}(f).\end{array}\right.\]
**Theorem 2**.: _The map \(\Psi:\mathcal{SS}_{2}\times\mathbb{N}\to\mathcal{O}_{2}\) is computable, where_
\[\left\{\begin{array}{ll}(f,i)\mapsto\emptyset&\mbox{ if }i=0\mbox{ or }i>\Psi_{N}(f)\\ (f,i)\mapsto W_{s}&\mbox{ if }1\leq i\leq\Psi_{N}(f)\mbox{ and }s=\Psi_{S}(f,i)\mbox{, }\end{array}\right.\]
_where \(W_{s}\) is the basin of attraction of the sink \(s\)._
Proof.: Let us fix an \(f\in\mathcal{SS}_{2}\). Assume that \(\Psi_{N}(f)\neq 0\) and \(s\) is a sink of \(f\). In [22] and [10], it has been shown that:
1. \(W_{s}\) is a r.e. open subset of \(\mathbb{D}\subseteq\mathbb{R}^{2}\);
2. there is an algorithm that, on input \(f\) and \(k\in\mathbb{N}\), \(k>0\), computes a finite sequence of mutually disjoint closed squares or closed ring-shaped strips (annuli) such that:
   (a) each square contains exactly one equilibrium point, with a marker indicating whether it contains a sink, a source, or a saddle;
   (b) each annulus contains exactly one periodic orbit, with a marker indicating whether it contains an attracting or a repelling periodic orbit;
   (c) each square (resp. annulus) containing a sink (resp. an attracting periodic orbit) is time invariant for \(t\geq 0\);
   (d) the union of this finite sequence contains all equilibrium points and periodic orbits of \(f\), and the Hausdorff distance between this union and the set of all equilibrium points and periodic orbits is less than \(1/k\);
   (e) for each annulus, \(1\leq i\leq p(f)\), the minimal distance between the inner boundary (denoted as \(IB_{i}\)) and the outer boundary (denoted as \(OB_{i}\)), \(m_{i}=\min\{d(x,y):\,x\in IB_{i},y\in OB_{i}\}\), is computable from \(f\) and \(m_{i}>0\).
We begin with the case that \(f\) has no saddle point. Since \(W_{s}\) is r.e. open, there exist computable sequences \(\{a_{n}\}\) and \(\{r_{n}\}\), \(a_{n}\in\mathbb{Q}^{2}\) and \(r_{n}\in\mathbb{Q}\), such that \(W_{s}=\cup_{n=1}^{\infty}B(a_{n},r_{n})\).
Let \(A\) be the union of all squares and annuli in the finite sequence containing a sink or an attracting periodic orbit except the square containing \(s\), and let \(B\) be the union of all sources and repelling periodic orbits. Note that a source is an equilibrium point (even if unstable) and thus will not belong to \(W_{s}\). Similarly each repelling periodic orbit is an invariant set and thus will also not belong to \(W_{s}\). Periodic orbits and equilibrium points are closed sets and thus \(B\) is a closed set of \(\mathbb{D}\), which is also computable due to the results from [10] mentioned
above. Hence, \(\mathbb{D}\setminus B\) is a computable open subset of \(\mathbb{D}\). Moreover, since \(f\) has no saddle, \(W_{s}\subset\mathbb{D}\setminus B\). List the squares in \(A\) as \(S_{1},\ldots,S_{e(f)}\) and annuli as \(C_{1},\ldots,C_{p(f)}\). Denote the center and the side-length of \(S_{j}\) as \(CS_{j}\) and \(l_{j}\), respectively, for each \(1\leq j\leq e(f)\).
We first present an algorithm - the classification algorithm - that for each \(x\in\mathbb{D}\setminus B\) determines whether \(x\in W_{s}\) or \(x\) is in the union of basins of attraction of the sinks and attracting periodic orbits contained in \(A\). The algorithm works as follows: for each \(x\in\mathbb{D}\setminus B\), simultaneously compute
\[\left\{\begin{array}{l}d(x,a_{n}),n=1,2,\ldots\\ \\ d(\phi_{t}(x),CS_{j}),1\leq j\leq e(f),t=1,2,\ldots\\ \\ d(\phi_{t}(x),IB_{i})\mbox{ and }d(\phi_{t}(x),OB_{i}),1\leq i\leq p(f),t=1,2, \ldots\end{array}\right.\]
where \(\phi_{t}(x)=\phi(f,x)(t)\) is the solution of the system \(dz/dt=f(z)\) with the initial condition \(z(0)=x\) at time \(t\). (Recall that the solution, as a function of time \(t\), of the initial-value problem is uniformly computable from \(f\) and \(x\)[12, 13].) Halt the computation whenever one of the following occurs: (i) \(d(x,a_{n})<r_{n}\); (ii) \(d(\phi_{t}(x),CS_{j})<l_{j}/2\) for some \(t=l\in\mathbb{N}\) (\(l>0\)); or (iii) \(d(\phi_{t}(x),IB_{i})<m_{i}\) and \(d(\phi_{t}(x),OB_{i})<m_{i}\) for \(t=l\in\mathbb{N}\) (\(l>0\)). If the computation halts, then either \(x\in W_{s}\) provided that \(d(x,a_{n})<r_{n}\) or else \(\phi_{t}(x)\in S_{j}\) or \(\phi_{t}(x)\in C_{i}\) for some \(t=l>0\). Since \(S_{j}\) and \(C_{i}\) are time invariant for \(t>0\) (this follows from the results of [12]), each \(S_{j}\) contains exactly one sink for \(1\leq j\leq e(f)\), and each \(C_{i}\) contains exactly one attracting periodic orbit for \(1\leq i\leq p(f)\), it follows that either \(x\) is in the basin of attraction of the sink contained in \(S_{j}\) if (ii) occurs or \(x\) is in the basin of attraction of the attracting periodic orbit contained in \(C_{i}\) if (iii) occurs. We note that, for any \(x\in\mathbb{D}\setminus B\), exactly one of the halting status, (i), (ii), or (iii), can occur following the definition of \(W_{s}\) and the fact that \(S_{j}\) and \(C_{i}\) are time invariant for \(t>0\). Let \(W_{A}\) be the set of all \(x\in\mathbb{D}\setminus B\) such that the computation halts with halting status (ii) or (iii) on input \(x\). Then it is clear that \(W_{s}\cap W_{A}=\emptyset\).
We turn now to show that the computation will halt. Since there is no saddle, every point of \(\mathbb{D}\) that is not a source or on a repelling periodic orbit will either be in \(W_{s}\) or the trajectory starting on that point will converge to a sink/attracting periodic orbit contained in \(A\) as \(t\to\infty\) (this is ensured by the structural stability of the system and Peixoto's characterization theorem; see, for example, [14]).
Thus either \(x\in W_{s}\) or \(x\) will eventually enter some \(S_{j}\) (or \(C_{i}\)) and stay there afterwards for some sufficiently large positive time \(t\). Hence the condition (i) or (ii) or (iii) will be met for some \(t>0\).
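For concreteness, the classification algorithm can be rendered schematically as follows. All of its inputs (the enumeration of \(W_{s}\), the flow map, the squares and annuli with their markers) are supplied here as callables and lists; these are assumptions of the sketch, standing in for the oracles and algorithms cited above.

```python
from math import dist
from itertools import count

def classify(x, basin_ball, flow, CS, side, ib_dist, ob_dist, m):
    """Dovetail the halting tests (i)-(iii) for a point x in D \\ B.

    basin_ball(n) -> (a_n, r_n): enumeration with W_s = union of B(a_n, r_n)
    flow(x, t)    -> phi_t(x):   solution of z' = f(z), z(0) = x, at time t
    CS, side:     centers and side lengths of the squares S_1, ..., S_e(f)
    ib_dist(y,i), ob_dist(y,i): distances from y to IB_i and OB_i
    m:            the positive margins m_i of the annuli
    """
    for n in count(1):
        a, r = basin_ball(n)
        if dist(x, a) < r:                       # (i): x lies in W_s
            return ("W_s", None)
        y = flow(x, n)                           # trajectory at integer time t = n
        for j, (c, l) in enumerate(zip(CS, side)):
            if dist(y, c) < l / 2:               # (ii): entered square S_j
                return ("sink", j)
        for i, mi in enumerate(m):
            if ib_dist(y, i) < mi and ob_dist(y, i) < mi:
                return ("cycle", i)              # (iii): entered annulus C_i
```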
Since \(W_{s}\) is a r.e. open set due to the results of [10], to prove that \(W_{s}\) is computable it suffices to show that the closed subset \(\mathbb{D}\setminus W_{s}=W_{A}\cup B\) is r.e. closed; or, equivalently, \(W_{A}\cup B\) contains a computable sequence that is dense in \(W_{A}\cup B\) (see e.g. [1, Proposition 5.12]). To see this, we first note that \(\mathbb{D}\setminus B\) has a computable sequence as a dense subset. Indeed, since
\(\mathbb{D}\setminus B\) is computable open, there exist computable sequences \(\{z_{i}\}\) and \(\{\theta_{i}\}\), \(z_{i}\in\mathbb{Q}^{2}\) and \(\theta_{i}\in\mathbb{Q}\), such that \(\mathbb{D}\setminus B=\cup_{i=1}^{\infty}B(z_{i},\theta_{i})\). Let \(\mathcal{G}_{l}=\{(m/2^{l},n/2^{l}):m,n\text{ are integers and }-2^{l}\leq m,n\leq 2^{l}\}\) be the \(\frac{1}{2^{l}}\)-grid on \(\mathbb{D}\), \(l\in\mathbb{N}\). The following procedure produces a computable dense sequence of \(\mathbb{D}\setminus B\): For each input \(l\in\mathbb{N}\), compute \(d(x,z_{i})\), where \(x\in\mathcal{G}_{l}\) and \(1\leq i\leq l\) and output those \(\frac{1}{2^{l}}\)-grid points \(x\) if \(d(x,z_{i})<\theta_{i}\) for some \(1\leq i\leq l\). By a standard paring, the outputs of the computation form a computable dense sequence, \(\{q_{i}\}_{i\in\mathbb{N}}\), of \(\mathbb{D}\setminus B\). We now want to obtain a computable dense sequence in \(W_{A}\). If we are able to show that such a computable sequence exists, then it follows that \(W_{A}\cup B\) contains a computable dense sequence. The conclusion comes from the fact that \(B\) is a computable closed subset; hence \(B\) contains a computable dense sequence.
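The grid-based enumeration just described admits a direct, if naive, rendering; in the sketch below the sequences \(z_{i}\) and \(\theta_{i}\) are passed in as plain Python lists (a simplifying assumption, since in the text they are computable sequences).

```python
from fractions import Fraction
from math import dist

def grid_stage(l, zs, thetas):
    """Stage l of the enumeration of a dense sequence of D \\ B.

    zs, thetas: (hypothetical) rational centers and radii with
    D \\ B equal to the union of the balls B(zs[i], thetas[i]).
    """
    out = []
    step = Fraction(1, 2 ** l)
    for mm in range(-2 ** l, 2 ** l + 1):
        for nn in range(-2 ** l, 2 ** l + 1):
            x = (mm * step, nn * step)
            if x[0] * x[0] + x[1] * x[1] > 1:          # keep only grid points of the disk
                continue
            pt = (float(x[0]), float(x[1]))
            if any(dist(pt, zs[i]) < thetas[i] for i in range(min(l, len(zs)))):
                out.append(x)                          # certified to lie in D \ B
    return out
```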
Then using the previous classification algorithm one can enlist those points in the sequence \(\{q_{i}\}_{i\in\mathbb{N}}\) which fall inside \(W_{A}\), say \(\tilde{q}_{1},\tilde{q}_{2},\ldots\). Clearly, \(\{\tilde{q}_{j}\}_{j\in\mathbb{N}}\) is a computable sequence.
It remains to show that \(\{\tilde{q}_{j}\}\) is dense in \(W_{A}\). It suffices to show that, for any \(x\in W_{A}\) and any neighborhood \(B(x,\epsilon)\cap W_{A}\) of \(x\) in \(W_{A}\), there exists some \(\tilde{q}_{j_{0}}\) such that \(\tilde{q}_{j_{0}}\in B(x,\epsilon)\cap W_{A}\), where \(\epsilon>0\) and the disk \(B(x,\epsilon)\subset\mathbb{D}\setminus B\). We begin by recalling a well-known fact that the solution \(\phi_{t}(x)\) of the initial value problem \(dx/dt=f(x)\), \(\phi_{0}(x)=x\), is continuous in time \(t\) and in initial condition \(x\). In particular, the following estimate holds true for any time \(t>0\) (see e.g. [1]):
\[\|\phi_{t}(x)-\phi_{t}(y)\|\leq\|x-y\|e^{Lt} \tag{20}\]
where \(x=\phi_{0}(x)\) and \(y=\phi_{0}(y)\) are initial conditions, and \(L\) is a Lipschitz constant satisfied by \(f\). (Since \(f\) is \(C^{1}\) on \(\mathbb{D}\), it satisfies a Lipschitz condition and a Lipschitz constant can be computed from \(f\) and \(Df\).) Since \(x\in W_{A}\), the halting status on \(x\) is either (ii) or (iii). Without loss of generality we assume that the halting status of \(x\) is (ii). A similar argument works for the case where the halting status of \(x\) is (iii). It follows from the assumption that \(d(\phi_{t}(x),S_{j})<l_{j}/2\) for some \(1\leq j\leq e(f)\) and some \(t=l>0\). Compute a rational number \(\alpha\) satisfying \(0<\alpha<l_{j}/2-d(\phi_{t}(x),S_{j})\) and compute another rational number \(\beta\) such that \(0<\beta<\epsilon\) and \(\|y_{1}-y_{2}\|e^{l\cdot L}<\alpha\) whenever \(\|y_{1}-y_{2}\|<\beta\). Then for any \(y\in B(x,\beta)\),
\[d(\phi_{t}(y),S_{j})\] \[\leq d(\phi_{t}(y),\phi_{t}(x))+d(\phi_{t}(x),S_{j})\] \[\leq \alpha+d(\phi_{t}(x),S_{j})<(l_{j}/2)-d(\phi_{t}(x),S_{j})+d(\phi _{t}(x),S_{j})=l_{j}/2\]
which implies that \(B(x,\beta)\subset W_{A}\). Since \(B(x,\beta)\subset B(x,\epsilon)\subset\mathbb{D}\setminus B\) and \(\{q_{i}\}\) is dense in \(\mathbb{D}\setminus B\), there exists some \(q_{i_{0}}\) such that \(q_{i_{0}}\in B(x,\beta)\). Since \(B(x,\beta)\subset W_{A}\), it follows that \(q_{i_{0}}=\tilde{q}_{j_{0}}\) for some \(j_{0}\). This shows that \(\tilde{q}_{j_{0}}\in B(x,\epsilon)\cap W_{A}\).
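For completeness, the rational \(\beta\) used above can be produced explicitly; the helper below (illustrative only, with \(\alpha\), \(\epsilon\), \(l\), and the Lipschitz constant \(L\) taken as given) returns a dyadic rational strictly below \(\min(\epsilon,\alpha e^{-lL})\), which suffices by the bound (20).

```python
from fractions import Fraction
from math import exp

def choose_beta(alpha, eps, l, L):
    # any rational 0 < beta < min(eps, alpha * e^{-l L}) works: then
    # ||y1 - y2|| < beta implies ||phi_t(y1) - phi_t(y2)|| <= beta * e^{l L} < alpha
    bound = min(eps, alpha * exp(-l * L))
    k = 0
    while float(Fraction(1, 2 ** k)) >= bound:
        k += 1
    return Fraction(1, 2 ** k)
```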
We turn now to the general case where saddle point(s) is present. We continue using the notations introduced for the special case where the system has no saddle point. Assume that the system has the saddle points \(d_{m}\), \(1\leq m\leq d(f)\) and \(D_{m}\) is a closed square containing \(d_{m}\), \(1\leq m\leq d(f)\). For any given \(k\in\mathbb{N}\) (\(k>0\)), the algorithm constructed in [10] will output \(S_{j}\), \(C_{i}\), and \(D_{m}\) such
that each contains exactly one equilibrium point or exactly one periodic orbit, the (rational) closed squares and (rational) closed annuli are mutually disjoint, each square has side-length less than \(1/k\), and the Hausdorff distance between \(C_{i}\) and the periodic orbit contained inside \(C_{i}\) is less than \(1/k\), where \(1\leq j\leq e(f)\), \(1\leq m\leq d(f)\), and \(1\leq i\leq p(f)\). For each saddle point \(d_{m}\), it is proved in [1] that the stable manifold of \(d_{m}\) is locally computable from \(f\) and \(d_{m}\); that is, there is a Turing algorithm that computes a bounded curve - the flow is planar and so the stable manifold is one dimensional - passing through \(d_{m}\) such that \(\lim_{t\to\infty}\phi_{t}(x_{0})=d_{m}\) for every \(x_{0}\) on the curve. In particular, the algorithm produces a computable dense sequence on the curve. Pick two points, \(z_{1}\) and \(z_{2}\), on the curve such that \(d_{m}\) lies on the segment of the curve from \(z_{1}\) to \(z_{2}\). Since the system is structurally stable, there is no saddle connection; i.e. the stable manifold of a saddle point cannot intersect the unstable manifold of the same saddle point or of another saddle point. Thus, \(\phi_{t}(z_{1})\) and \(\phi_{t}(z_{2})\) will enter \(C_{B}\) for all \(t\leq-T\) for some \(T>0\), where \(C_{B}=(\cup\{\overline{S}_{j}:\,s_{j}\in B\})\cup(\cup\{\overline{C}_{i}:\,p_{ i}\subset B\})\), where \(\overline{S}_{j}\) and \(\overline{C}_{i}\) denote the squares and annuli computed by the algorithm of [1] which contain repelling equilibrium points (sources) and repelling periodic orbits, respectively. We denote the curve \(\{\phi_{t}(z_{1}):\,-T\leq t\leq 0\}\cup\{z:\,z\) is on the stable manifold of \(d_{m}\) between \(z_{1}\) and \(z_{2}\}\cup\{\phi_{t}(z_{2}):\,-T\leq t\leq 0\}\) as \(\Gamma_{d_{m}}\). Let \(\widetilde{C}=C_{B}\cup\{\Gamma_{d_{m}}:\,1\leq m\leq d(f)\}\). Then \(\widetilde{C}\) is a computable compact subset in \(\mathbb{D}\). Moreover, every point in \(\mathbb{D}\setminus\widetilde{C}\) converges to either a sink or an attracting periodic orbit because there is no saddle connection. Using the classification algorithm and a similar argument as above we can show that \(W_{A}\cap(\mathbb{D}\setminus\widetilde{C})\) is a computable open subset in \(\mathbb{D}\setminus\widetilde{C}\) and thus computable open in \(\mathbb{D}\) because \(W_{A}\subset(\mathbb{D}\setminus\widetilde{C})\). Since \(W_{A}\subset\mathbb{D}\setminus B\) and \(W_{A}\cap\Gamma_{d_{m}}=\emptyset\), it follows that
\[d_{H}\left(\mathbb{D}\setminus(W_{A}\cap(\mathbb{D}\setminus \widetilde{C})),\,\mathbb{D}\setminus(W_{A}\cap(\mathbb{D}\setminus B))\right)\] \[= d_{H}\left((\mathbb{D}\setminus W_{A})\cup C_{B},\,(\mathbb{D} \setminus W_{A})\cup B\right)\] \[\leq d_{H}(C_{B},B)<\frac{1}{k}.\]
We have thus proved that there is an algorithm that, for each input \(k\in\mathbb{N}\) (\(k>0\)), computes an open subset \(U_{k}=W_{A}\cap(\mathbb{D}\setminus\widetilde{C})\) of \(\mathbb{D}\) such that \(U_{k}\subset W_{A}\) and \(d_{H}(\mathbb{D}\setminus U_{k},\,\mathbb{D}\setminus W_{A})<\frac{1}{k}\). This shows that \(W_{A}\) is a computable open subset of \(\mathbb{D}\). (Recall an equivalent definition for a computable open subset of \(\mathbb{D}\): an open subset \(U\) of \(\mathbb{D}\) is computable if there exists a sequence of computable open subsets \(U_{k}\) of \(\mathbb{D}\) such that \(U=\cup U_{k}\) and \(d_{H}(\mathbb{D}\setminus U_{k},\,\mathbb{D}\setminus U)\leq\frac{1}{k}\) for every \(k\in\mathbb{N}\setminus\{0\}\).)
**Corollary 1**.: _For every \(f\in\mathcal{SS}_{2}\) there is a neighborhood of \(f\) in \(C^{1}(\mathbb{D})\) such that the function \(\Psi\) is (uniformly) computable in this neighborhood._
Proof.: The corollary follows from Peixoto's density theorem and Theorem 2.
**Acknowledgments.** D. Graca was partially funded by FCT/MCTES through national funds and when applicable co-funded by EU funds under the project
UIDB/50008/2020. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 731143.
|
2301.08596 | The chromosphere underneath a Coronal Bright Point | Coronal Bright Points (CBPs) are sets of small-scale coronal loops,
connecting opposite magnetic polarities, primarily characterized by their
enhanced extreme-ultraviolet (EUV) and X-ray emission. Being ubiquitous, they
are thought to play an important role in heating the solar corona. We aim at
characterizing the barely-explored chromosphere underneath CBPs, focusing on
the related spicular activity and on the effects of small-scale magnetic flux
emergence on CBPs. We used high-resolution observations of a CBP in H$\beta$
and Fe I 617.3 nm from the Swedish 1-m Solar Telescope (SST) in coordination
with the Solar Dynamics Observatory (SDO). This work presents the first
high-resolution observation of spicules imaged in H$\beta$. The spicules were
automatically detected using advanced image processing techniques, which were
applied to the Dopplergrams derived from H$\beta$. Here we report their
abundant occurrence close to the CBP ``footpoints", and find that the
orientation of such spicules is aligned along the EUV loops, indicating that
they constitute a fundamental part of the whole CBP magnetic structure.
Spatio-temporal analysis across multiple channels indicates that there are
coronal propagating disturbances associated with the studied spicules,
producing transient EUV intensity variations of the individual CBP loops. Two
small-scale flux emergence episodes appearing below the CBP were analyzed; one
of them leading to quiet-sun Ellerman bombs and enhancing the nearby spicular
activity. This paper presents unique evidence of the tight coupling between the
lower and upper atmosphere of a CBP, thus helping to unravel the dynamic
phenomena underneath CBPs and their impact on the latter. | Souvik Bose, Daniel Nóbrega-Siverio, Bart De Pontieu, Luc Rouppe van der Voort | 2023-01-20T14:15:26Z | http://arxiv.org/abs/2301.08596v1 | # The chromosphere underneath a Coronal Bright Point
###### Abstract
Coronal Bright Points (CBPs) are sets of small-scale coronal loops, connecting opposite magnetic polarities, primarily characterized by their enhanced extreme-ultraviolet (EUV) and X-ray emission. Being ubiquitous, they are thought to play an important role in heating the solar corona. We aim at characterizing the barely-explored chromosphere underneath CBPs, focusing on the related spicular activity and on the effects of small-scale magnetic flux emergence on CBPs. We used high-resolution observations of a CBP in H\(\beta\) and Fe I 617.3 nm from the Swedish 1-m Solar Telescope (SST) in coordination with the Solar Dynamics Observatory (SDO). This work presents the first high-resolution observation of spicules imaged in H\(\beta\). The spicules were automatically detected using advanced image processing techniques, which were applied to the Dopplergrams derived from H\(\beta\). Here we report their abundant occurrence close to the CBP "footpoints", and find that the orientation of such spicules is aligned along the EUV loops, indicating that they constitute a fundamental part of the whole CBP magnetic structure. Spatio-temporal analysis across multiple channels indicates that there are coronal propagating disturbances associated with the studied spicules, producing transient EUV intensity variations of the individual CBP loops. Two small-scale flux emergence episodes appearing below the CBP were analyzed; one of them leading to quiet-sun Ellerman bombs and enhancing the nearby spicular activity. This paper presents unique evidence of the tight coupling between the lower and upper atmosphere of a CBP, thus helping to unravel the dynamic phenomena underneath CBPs and their impact on the latter.
Solar coronal heating (1989) -- Solar spicules (1525) -- Solar chromosphere (1479) -- Solar corona (1483) -- Solar magnetic flux emergence (2000) -- Methods: observational

Souvik Bose, Daniel Nóbrega-Siverio, Bart De Pontieu, Luc Rouppe van der Voort
## 1 Introduction
Coronal Bright Points (CBPs) appear as bright, enhanced, blob-like structures when observed in the extreme-ultraviolet (EUV) light or X-rays. First observed in X-rays with the grazing incidence X-ray telescope sounding rocket mission (Vaiana et al., 1973), CBPs comprise small-scale magnetic loops connecting opposite polarities where the confined plasma is heated up to a million degrees presumably by magnetic reconnection (see Priest et al., 1994). CBPs are ubiquitously observed in the coronal holes, quiet-Sun, and in the close vicinity of active regions alike, which makes them interesting from the perspective of their role in coronal heating. Their lifetimes range from a few hours to even a few days (Golub et al., 1974; McIntosh & Gurman, 2005) and, depending upon the wavelength of observation, they appear as roundish blobs with diameters ranging between 5-30''on average (Vaiana et al., 1973; Habbal et al., 1990; Mou et al., 2018). Different studies based on emission spectroscopy and imaging (as discussed in the recent review by Madjarska, 2019) suggest that the heights over which CBPs extend in the corona ranges between 5-10 Mm above the photosphere with an average of 6.5 Mm during their lifetime.
Though CBPs have been the subject of intensive research ever since their discovery back in the early 1970s (Madjarska, 2019), there are still fundamental open questions regarding these ubiquitous phenomena. For instance, the CBP chromospheric counterpart remains largely unexplored to date, which may be attributed to the lack of adequate observations that target the corona and chromosphere simultaneously. To the best of our knowledge, only two observational studies - Habbal & Withbroe (1981) and Madjarska et al. (2021) - have focused on this particular atmospheric layer, both finding that strong intensity enhancements in the corona preceded lower temperature (chromospheric and transition region (TR)) enhancements, thereby indicating a scenario where the heating takes place first in the corona and is later conducted toward the TR via thermal conduction. Another open question is related to the role of magnetic flux emergence on CBPs. For example, magnetic flux emergence is not only known to be responsible for the origin of nearly half of the CBPs (Mou et al., 2018), but also to enhance the chromospheric activity and associated coronal emission (Madjarska et al., 2021). So far in the CBP literature, the focus has primarily been on large-scale emergence episodes that last for several tens of minutes to hours; therefore, studies about the impact of small-scale magnetic flux emergence episodes are scarce: the lack of high-resolution, coordinated magnetograms seems to be a major impediment in this regard.
The aim of this paper is to better understand the chromospheric scenery underneath a CBP with a focus on spicules and the atmospheric responses to small-scale flux emergence episodes. Spicules are one of the most abundant and ubiquitous features observed in the solar chromosphere. They are highly dynamic, thin, (multi)threaded, and elongated structures that permeate both the active and non-active regions alike (Pereira et al., 2012). They are broadly divided into two categories-type I and II, with the latter being more dynamic, with higher apparent velocities, shorter lifetimes, and undergoing vigorous swaying and torsional motion (de Pontieu et al., 2007; Pereira et al., 2012; Bose et al., 2021). The signatures of type-II spicules are often found in the TR and coronal passbands which makes their studies exciting from the perspective of heating and mass-loading of the solar corona (De Pontieu et al., 2009, 2011; Pereira et al., 2014; Rouppe van der Voort et al., 2015; Henriques et al., 2016; Samanta et al., 2019). The on-disk counterparts of type-II spicules, termed as rapid blue-shifted and red-shifted excursions (RBEs and RREs, see Rouppe van der Voort et al., 2009; Sekse et al., 2012; Bose et al., 2019), abundantly occur in the close vicinity of strong magnetic field regions (such as bi-polar/unipolar field patches, see Sekse et al., 2012; Bose et al., 2021, for example). This makes their study also interesting in the context of CBPs since their loops appear to be rooted to strong bi-polar magnetic field configurations present in the photosphere. Multi-dimensional numerical models, e.g. by Wyper et al. (2018) and more recently by Nobrega-Siverio & Moreno-Insertis (2022) suggest that the loops associated with CBPs may have some relationship with jets or spicules observed deeper in solar the atmosphere, which may contribute toward transient intensity variations in the CBPs. Regarding small-scale magnetic flux emergence, our attempt to explore its effects on already existing CBPs is motivated by two very recent papers: Tiwari et al. (2022), which find tiny EUV bright dot-like substructures inside a CBP that seem to be associated with small flux emergence episodes; and Nobrega-Siverio & Moreno-Insertis (2022), which argue that flux emergence occurring in a few granules may be enough to destabilize a CBP and lead to eruptions.
To achieve our objectives, we use a high-quality, ground-based dataset from the Swedish 1-m Solar Telescope (SST, Scharmer et al., 2003) in coordination with the Atmospheric Imaging Assembly (AIA, Lemen et al., 2012) instrument on-board NASA's Solar Dynamics Observatory (SDO, Pesnell et al., 2012). For the first time, we employ high-resolution images of the chromospheric H\(\beta\) spectral line to study the spicule-CBP relationship. Moreover, the impact of multiple small-scale photospheric flux emergence episodes on the chromospheric and coronal activity are also investigated from coordinated, high-resolution magnetic field measurements.
The rest of the paper is divided as follows. Section 2 describes the observations and standard data reduction processes. Section 3 details the methodology employed to detect on-disk spicules from SST observations and enhancing the AIA images. We show the results and discuss their significance in Sect. 4, before finally summarizing and concluding the paper in Sect. 5.
## 2 Observations and Data Reduction
### Swedish 1-m Solar Telescope
For the purpose of this study, we recorded the chromospheric counterparts of the CBP using observations from the CHROMospheric Imaging Spectrometer (CHROMIS, Scharmer, 2017) and CRisp Imaging Spectropolarimeter (CRISP, Scharmer et al., 2008) instruments at the SST on 4 August 2021, under excellent seeing conditions. The coordinates of the target were centered around solar (\(X\),\(Y\)) = (250'',358'') with \(\mu=\cos\theta=0.88\) (\(\theta\) being the heliocentric angle), and the observation sequence lasted for about 11 min starting at 09:56 UTC. Figure 1 shows an overview of the observed target.
CHROMIS sampled the H\(\beta\) spectral line centered at 486.1 nm under imaging spectroscopic mode across 27 wavelength points between \(\pm\) 0.21 nm with respect to the line center. The sampling was uniform between \(\pm\) 0.1 nm with 0.01 nm steps. Beyond this a non-uniform sampling was intentionally chosen so as to avoid the effect of blends. Panels (e)-(g) of Fig. 1 show the H\(\beta\) blue (at a Doppler offset of \(-\)25 km s\({}^{-1}\)), red wing (at a Doppler offset of \(+\)25 km s\({}^{-1}\)), and line core images, respectively. The cadence of the data was 6.8 s with a spatial sampling of 0\(\farcs\)038. CHROMIS also recorded
wideband (WB) images with the help of an auxiliary WB channel centered at 484.5 nm (referred to as H\(\beta\) WB in panel h). Besides providing context photospheric images, the WB serves as an anchor channel that aids in image restoration. The WB images have the same cadence as the narrowband H\(\beta\) sequence.
CRISP sampled the Fe i 617.3 nm line across 14 wavelength points under imaging spectropolarimetric mode between \(-0.032\) nm and \(+0.068\) nm with respect to the line center. The full Stokes Fe i 617.3 nm data were inverted by using a parallel C++/Python implementation1 of the Milne-Eddington (ME) inversion scheme developed by de la Cruz Rodriguez (2019) to infer the photospheric vector magnetic field information. In addition, the Ca ii 854.2 nm line was sampled across 4 wavelength points between \(-0.1\) nm and \(+0.05\) nm with respect to the line core in steps of 0.05 nm under imaging spectroscopic mode. The overall cadence of the combined observation sequences was measured to be 18.5 s with a spatial sampling of 0\(\farcs\)058. In this paper, we only focus on the line-of-sight (LOS) magnetic fields inferred from the Fe i 617.3 spectral line as shown in panel (j) of Fig. 1.
Footnote 1: [https://github.com/jaimedelacruz/pyMilne](https://github.com/jaimedelacruz/pyMilne)
The combination of excellent seeing conditions, the SST adaptive optics system, the high-quality CRISP and CHROMIS re-imaging systems (Scharmer et al., 2019), and Multi-Object Multi-Frame Blind Deconvolution (MOMFBD, van Noort et al., 2005) image restoration resulted in high-spatial resolution data down to the diffraction limit of the telescope (for H\(\beta\) 1.22\(\lambda/D=0\farcs 13\) with \(D=0.97\) m the effective aperture of SST). The SSTRED reduction pipeline (de la Cruz Rodriguez et al., 2015; Lofdahl et al., 2021) was used to facilitate reduction of the data, including the spectral consistency technique described in Henriques (2012). Furthermore, both the CRISP and CHROMIS time series were destretched to compensate for the residual warping across the field-of-view (FOV) which was not accounted for by the image restoration techniques described earlier.
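As a quick consistency check of the numbers quoted above (not part of the original analysis), the quoted diffraction limit follows directly from the Rayleigh criterion:

```python
wavelength = 486.1e-9   # m, H-beta
aperture = 0.97         # m, effective aperture of the SST
rad_to_arcsec = 206265.0

theta = 1.22 * wavelength / aperture       # Rayleigh criterion, in radians
print(round(theta * rad_to_arcsec, 2))     # -> 0.13 arcsec
```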
For this study, the CRISP data (with a lower spatial and temporal resolution) were co-aligned to CHROMIS by expanding the former to CHROMIS pixel scale followed by a cross-correlation between the respective photospheric WB channels shown in panels (h) and (i) of Fig. 1. In other words, the CHROMIS data with a FOV
Figure 1: Overview of the targeted CBP observed on 4 August 2021 at 10:03:31 UT. Panel (a) shows an RGB composite image of the CBP and its neighboring area at the original SDO/AIA pixel scale. Red, blue, and green colors correspond to 30.4, 19.3 and 17.1 nm channels, respectively. The SST/CHROMIS pointing and FOV are overlaid as a reference. Panels (b)–(d) illustrate SDO/AIA 30.4, 17.1 and 19.3 nm channels that are rotated and co-aligned to CHROMIS. Panels (e)–(g) show CHROMIS H\(\beta\) images at blue wing (\(-\) 25 km s\({}^{-1}\)), red wing (\(+\) 25 km s\({}^{-1}\)) and line center, respectively. These images depict the chromospheric scene underneath the CBP. Panels (h) and (i) contain the photospheric H\(\beta\) and Fe i 617.3 nm WB images, and panel (j) shows the photospheric LOS magnetic field map (B\({}_{\rm LOS}\)) saturated between \(\pm\) 60 G (black indicates positive polarity). The dashed FOV shown in panels (a)–(j) denotes the region-of-interest associated with the CBP which forms the basis for all investigations carried out in this paper.
of \(66^{\prime\prime}\times 42^{\prime\prime}\) and a cadence of 6.8 s served as a reference for the CRISP data to which the latter was aligned. We used nearest neighbour interpolation for the temporal alignment.
### Solar Dynamics Observatory
The coronal part associated with the CBP was observed with the AIA instrument on-board SDO. The SDO datasets were co-aligned to SST (CHROMIS) datasets in the following manner. The SDO image cutout sequences were first downloaded from the Joint Science Operations Center's (JSOC) website2. Next, the images from all the AIA channels were co-aligned to HMI continuum images (here the AIA 30.4 nm channel was aligned to HMI continuum), followed by the latter's co-alignment to CHROMIS WB channels via an iterative cross-correlation algorithm. Finally, the SDO images were cropped to have the same FOV as SST. The end result of this pipeline is a co-aligned SDO dataset that consists of eleven (nine AIA and two HMI) image sequences that are expanded from their original pixel scale to CHROMIS pixel scale of \(0\farcs 038\) and matched in time by nearest-neighbour sampling to CHROMIS temporal cadence. We used the publicly available3 Interactive Data Language (IDL) based automated pipeline developed by Rob Rutten for this purpose (Rutten, 2020) and refer to Bose et al. (2021) for an example of this pipeline's application.
Footnote 2: [http://jsoc.stanford.edu/](http://jsoc.stanford.edu/)
Footnote 3: [https://robrutten.nl/rridl/00-README/sdo-manual.html](https://robrutten.nl/rridl/00-README/sdo-manual.html)
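The actual co-alignment was performed with the IDL pipeline of Rutten (2020); purely as an illustration of the two basic operations involved (cross-correlation of co-temporal photospheric images and nearest-neighbour matching in time), a minimal Python sketch is given below. The function and variable names are hypothetical, and sub-pixel registration is ignored.

```python
import numpy as np

def xcorr_shift(ref, img):
    """Integer-pixel (dy, dx) shift of `img` relative to `ref`, taken from the
    peak of their FFT-based cross-correlation."""
    cc = np.abs(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))))
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def nearest_in_time(t_ref, t_other):
    """Index of the frame in `t_other` closest in time to each entry of `t_ref`."""
    t_other = np.asarray(t_other)
    return np.array([np.argmin(np.abs(t_other - t)) for t in t_ref])
```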
An RGB composite image, consisting of AIA 30.4, 17.1 and 19.3 nm channels, of the CBP target at the original AIA resolution is shown in Fig. 1 panel (a), while panels (b)-(d) show the same three channels but rotated and co-aligned to the CHROMIS data using the procedure described in this section. This co-aligned SST and SDO dataset was then visualized extensively with CRISPEX (Vissers & Rouppe van der Voort, 2012), an IDL widget-based tool that allows an efficient simultaneous exploration of multi-dimensional and multi-wavelength datasets.
## 3 Methods employed
### Detecting on-disk spicules from H\(\beta\)
We employed an automated detection method based on the difference between images observed in the blue and red wings of the H\(\beta\) spectral line. This is similar to constructing Dopplergrams (see Sekse et al., 2012; De Pontieu et al., 2014; Pereira et al., 2016), but instead of subtracting fixed wavelengths on opposite sides of the line center, averages over a range of wavelengths (between \(\pm\)20 and \(\pm\)30 \(\mathrm{km\ s^{-1}}\) on opposite sides of the line center) are computed, which are then subtracted from one another as shown in Fig. 2 (a). The difference images are then subjected to unsharp masking which causes an enhancement in the high spatial frequency components of the image. In this case, it amplifies the threaded spicular features as seen in panel (b). RBEs appear as darker threads with negative intensity values whereas RREs appear brighter with positive intensity values in these difference images. It is important to note that the difference maps so obtained (as in panel b) do not correspond to an absolute measure of the Doppler velocity associated with RBEs and RREs. The chief goal is to obtain a representation of the spatiotemporal evolution of the velocity patterns associated with these features.
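A minimal sketch of this step, assuming the narrowband scan is available as a (wavelength, y, x) array and using one common form of unsharp masking, could look as follows; the smoothing width and the sign convention are assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spicule_difference_image(cube, dopp_vel, vmin=20.0, vmax=30.0, usm_sigma=2.0):
    """
    cube     : (n_lambda, ny, nx) H-beta narrowband images at one time step
    dopp_vel : (n_lambda,) Doppler offset of each wavelength point in km/s
    Average the wing images between `vmin` and `vmax` km/s on each side of the
    line center, subtract them, and unsharp-mask the result.
    """
    blue = (dopp_vel <= -vmin) & (dopp_vel >= -vmax)
    red = (dopp_vel >= vmin) & (dopp_vel <= vmax)
    diff = cube[blue].mean(axis=0) - cube[red].mean(axis=0)  # RBEs -> negative values
    # unsharp masking: add back the high spatial frequencies
    return diff + (diff - gaussian_filter(diff, usm_sigma))
```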
Next, an adaptive intensity thresholding technique was applied to each of the difference images, where pixels with intensities above a certain value on either side of zero were masked and chosen for further processing. As a result, two different binary masks were generated: one comprising pixels that satisfied \(I_{\mathrm{USM}}>2.5\sigma\) for RREs, and the other comprising pixels that satisfied \(I_{\mathrm{USM}}<-1.5\sigma\) for RBEs, where \(I_{\mathrm{USM}}\) is the intensity of the difference image post unsharp masking (Fig. 2 b). The difference in the threshold is due to the skewness in the distribution of RREs and RBEs in the difference maps. In both the masks, pixels which satisfied the thresholding criterion were assigned a value of 1 while the remaining pixels were assigned 0. Once the binary masks were generated, a morphological opening followed by a closing operation was applied to each of the masks (independently for the RBEs and RREs), on a per time step basis, with a \(3\times 3\) diamond-shaped structuring element. We refer the reader to Bose et al. (2021) and appendix A.2 of Bose (2021) for more details on these morphological operations and the associated reasoning behind them.
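In code, the thresholding and morphological cleaning could be sketched as follows; here \(\sigma\) is taken as the standard deviation of the difference image, which is an assumption since the paper does not state how \(\sigma\) is defined.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

DIAMOND = np.array([[0, 1, 0],
                    [1, 1, 1],
                    [0, 1, 0]], dtype=bool)  # 3x3 diamond structuring element

def rre_rbe_masks(i_usm, k_rre=2.5, k_rbe=1.5):
    """Binary RRE and RBE masks from one unsharp-masked difference image."""
    sigma = np.std(i_usm)
    rre = i_usm > k_rre * sigma
    rbe = i_usm < -k_rbe * sigma
    # opening removes isolated pixels, closing bridges small gaps in the threads
    clean = lambda m: binary_closing(binary_opening(m, structure=DIAMOND),
                                     structure=DIAMOND)
    return clean(rre), clean(rbe)
```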
Finally, connected component labeling in 3D (i.e. combining both spatial and temporal dimensions, see Rosenfeld & Pfaltz, 1966) was performed on the morphologically processed images so that the RBEs and RREs could be uniquely identified based on a given heuristic. Basically, this technique allows connected neighboring pixels in the spatio-temporal domain to be uniquely identified (labeled). To avoid biasing the detections toward a particular direction, we employed a 26-neighborhood connectivity criterion in 3D space for this purpose. In other words, two pixels were "connected" if they shared either an edge, a face or a corner. Furthermore, to avoid erroneous detections and focus primarily on the elongated spicular structures, a lower cutoff length of \(\sim\)200 km (or 8 CHROMIS pixels) was also imposed on the labeled events.
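A sketch of the labeling step is given below, with the length criterion approximated by the spatial extent of each event's bounding box (an assumption, since the paper does not specify how the length is measured).

```python
from scipy.ndimage import label, generate_binary_structure, find_objects

def label_spicule_events(mask_txy, min_extent_pix=8):
    """
    mask_txy : (nt, ny, nx) boolean cube of thresholded RBE (or RRE) detections
    Label events with 26-neighbourhood connectivity in (t, y, x) and keep only
    those whose spatial bounding box spans at least `min_extent_pix` pixels
    (~200 km at the CHROMIS pixel scale of 0.038 arcsec).
    """
    structure = generate_binary_structure(3, 3)   # full 3x3x3 cube: 26-connectivity
    labels, n_events = label(mask_txy, structure=structure)
    kept = []
    for lab, sl in enumerate(find_objects(labels), start=1):
        if sl is None:
            continue
        ny = sl[1].stop - sl[1].start
        nx = sl[2].stop - sl[2].start
        if max(ny, nx) >= min_extent_pix:
            kept.append(lab)
    return labels, kept
```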
The above recipe led to a detection of 6457 uniquely labeled events (3623 as RREs/downflowing RREs and 2834 as RBEs) in the complete dataset lasting 11 min over the whole FOV. The occurrence of these (combined) events is shown in the form of a 2D probability density map in Fig. 2 (c) against a background of the temporally averaged H\(\beta\) WB image.
### Enhancing the AIA images
To facilitate a better understanding of the dynamic relationship between the chromospheric and coronal counterparts of a CBP, it is crucial to enhance the visibility of the coronal images and the loops (strands) associated with the CBP. In this regard, the re-sampled (to CHROMIS pixel scale) AIA images, like the ones shown in the leftmost column of Fig. 3, are subjected to a modified version of the common difference technique where the temporal average, over the entire 11 min duration, of each AIA channel is subtracted from an unsharp masked image of the same channel for each time step. This procedure results in images where small changes in the intensity are visibly more enhanced, due to unsharp masking which adjusts the contrast of the edges (see the rightmost column of Fig. 3). In addition, the AIA images are also subjected to a multi-scale Gaussian normalization (MGN) procedure (Morgan & Druckmuller, 2014) that enables a better visualization of the overall topology and the orientation of the overlying coronal structure which is not very prominent in the original (resampled) AIA images. They are shown in the middle column of Fig. 3. The various AIA channels used in this study are MGN enhanced by using the default (same) values of weights and coefficients as in Morgan & Druckmuller (2014).
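A minimal sketch of the common-difference step is shown below (MGN itself is not re-implemented here; see Morgan & Druckmuller 2014 for that algorithm); the unsharp-mask width is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def common_difference(channel_cube, usm_sigma=3.0):
    """
    channel_cube : (nt, ny, nx) co-aligned AIA images of one channel
    Subtract the temporal mean of the full sequence from an unsharp-masked
    version of every frame, enhancing small transient intensity changes.
    """
    cube = channel_cube.astype(float)
    mean_img = cube.mean(axis=0)
    out = np.empty_like(cube)
    for i, frame in enumerate(cube):
        sharpened = frame + (frame - gaussian_filter(frame, usm_sigma))
        out[i] = sharpened - mean_img
    return out
```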
The animation associated with Fig. 3 provides a better idea of the advantage of employing the two methods described above and further adds to their comparison with the original co-aligned AIA images. We immediately notice an improvement over the coronal images shown in the left column, where the loops associated with the CBP are barely noticeable. Consequently, the variation in the intensity of the CBP associated with rapid spicular dynamics is shown with the common difference images while the MGN processed images are used as a proxy of the intensity variation in the CBP for all subsequent analysis and results described in this paper. However, it is important to note that MGN does not preserve the photometric accuracy of the images and creates a standardized emission, which is enhanced (subdued) in the regions with lower (higher) intensity. This, however, does not impact the analysis presented in this paper.
## 4 Results and Discussion
This section presents a detailed description and discussion of the results obtained from the analysis. We begin by investigating the chromospheric footpoints of the CBP in Sect. 4.1, followed by a description of representative examples highlighting the spicule-CBP relationship in Sects. 4.2 and 4.3. Finally, in Sect. 4.4, we discuss the impact of two small-scale photospheric flux emergence episodes in the chromosphere and the hotter AIA channels.
### The chromospheric "footpoints" of the CBP
The H\(\beta\) wing and the line core images, shown within the dashed FOV in panels (e)-(g) of Fig. 1, depict the chromospheric scene underlying the CBP. The images clearly show multiple dark, elongated, and threaded structures that resemble spicules (or mottles). A zoom-in to the dashed FOV is shown in Fig. 4 which focuses solely on the region in and around the CBP. To aid better visualization of the intensity disturbances propagating in the CBP, we show the common difference images for the different AIA channels in panels (a) through (c). Panel (d) shows the H\(\beta\) line core (LC) width map which is basically the wavelength separation at half the intensity range between the minimum of the H\(\beta\) line profile and the average intensities at a displaced wing position from the line center (following Cauzzi et al., 2009) for each pixel on the FOV. In this case the displacement parameter was set at \(\pm\) 66 pm from the line center which was determined by converting the displacement parameter of 90 pm for the H\(\alpha\) spectral line, chosen by Cauzzi et al. (2009), into equivalent Doppler units (km s\({}^{-1}\)). RBEs and RREs (including downflowing RREs) appear to be in "emission" (compared to the background features as seen in panel d) in these maps since they gener
Figure 2: Overview of the automated on-disk spicule detection method described in the text. Panel (a) shows the spatio-temporal average of the H\(\beta\) spectral line computed over the entire CHROMIS FOV, and further illustrates how Dopplergrams are generated by subtracting signals in the blue wing from the red wing (indicated by the shaded areas on either side of the line center). Panel (b) shows an example of a generated Dopplergram where RBEs and RREs show up as dark and bright threaded structures. Panel (c) shows the location and the density distribution of the detected spicules against a background of temporally averaged H\(\beta\) WB image.
ally have enhanced opacity owing to their broad LOS velocity distribution (Pereira et al., 2016; Bose et al., 2021) and enhanced temperature (Leenaarts et al., 2012). Panels (e) and (f) show the co-temporal H\(\beta\) line core intensity and the LOS photospheric magnetic field maps underneath the CBP. Spicules and/or mottles dominate the whole FOV and they are seen to be predominantly rooted in the close vicinity of the strong (negative) polarity magnetic field patch which also happens to be the photospheric magnetic roots of the CBP.
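For reference, the LC width defined above can be computed per pixel roughly as in the sketch below; the 66 pm wing position corresponds to the 90 pm H\(\alpha\) value of Cauzzi et al. (2009) scaled by the wavelength ratio 486.1/656.3, and the crossing points are found here only to grid precision (a simplification, since the paper does not give the interpolation details).

```python
import numpy as np

def hbeta_lc_width(wav, profile, wing=0.066):
    """
    wav     : wavelength offsets from the line centre in nm (increasing order)
    profile : H-beta intensities of one pixel on the same grid
    Width at half the range between the profile minimum and the mean intensity
    at +/- `wing` nm (66 pm), following the definition of Cauzzi et al. (2009).
    """
    i_min = profile.min()
    i_wing = 0.5 * (np.interp(-wing, wav, profile) + np.interp(wing, wav, profile))
    level = i_min + 0.5 * (i_wing - i_min)
    below = np.where(profile <= level)[0]
    if below.size == 0:
        return np.nan
    # crude estimate: separation of the outermost grid points below the half level
    return wav[below[-1]] - wav[below[0]]
```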
A glance at panels (b)-(e) of Fig. 4 immediately suggests that the CBP loops and their chromospheric counterparts bear a close morphological resemblance. This is further highlighted in the animation associated with the figure where the 17.1 and 19.3 nm loops appear to have propagating disturbances nearly in tandem with the rapid changes in the chromosphere, especially toward the latter half of the data sequence. The 30.4 nm common difference image appears to be noisier and it does not show the loops associated with the CBP as prominently as in the other AIA channels. However, the animation shows clear disturbances in the same region as the underlying spicules, but the overall morphology is less pronounced (compared to panels b and c), making them difficult to relate visually. The lack of loop-like appearances in the 30.4 nm channel could be attributed to its relatively lower temperature sensitivity (log \(T(\mathrm{K})\sim 4.7\)) compared to the 19.3 and 17.1 nm channels which have a peak temperature sensitivity of around log \(T(\mathrm{K})\sim 6\) (Boerner et al., 2012). Moreover, it is rather common to observe the relatively cooler footpoints of the CBPs in the 30.4 nm channel underneath the hotter loops (Kwon et al., 2012; Madjarska, 2019; Madjarska et al., 2021) which may further justify the less pronounced morphological resemblance.
Madjarska et al. (2021) report that the chromospheric counterpart of a CBP largely comprises elongated, dark features when observed in the H\(\alpha\) line core images. They name these features "H\(\alpha\) loops" which also appear to constitute a fundamental part of the overall magnetic structure of the CBPs. While we do not find such loops, likely because our observations are limited to a part of the entire CBP (thereby missing the opposite polarity), spicules dominate our FOV and play a central role in driving the dynamics of the chromosphere underneath the CBP.
Figure 5 shows the occurrence of the detected on-disk spicules, using the method described in Sect. 3, in the form of a 2D density map against a background of temporally averaged images for four MGN processed AIA
Figure 3: Methods of enhancing the AIA images. The top row (from left to right) shows a zoom in to dashed FOV of AIA 17.1 nm intensity map indicated in Fig. 1 panel (c), the MGN processed version of the same, and the result of applying the modified common difference technique (see text for details) to the original AIA map, respectively. Bottom row (from left to right) illustrates the result of applying the two enhancement techniques to AIA 19.3 nm channel in the same format as the top row. An animation of this figure is available online, which shows a comparison between the different enhancement techniques along with the temporal evolution of the disturbances propagating along the loops. The animation shows solar evolution over 11 min.
Figure 4: The chromosphere and the photosphere underneath the CBP. Panels (a)–(c) show common difference images in the AIA 30.4, 17.1 and 19.3 nm channels at 10:06:43UT. Panel (d) shows the co-temporal H\(\beta\) LC width map saturated between 0.45–0.82 Å. Panel (e) shows the co-temporal H\(\beta\) LC image and panel (f) depicts the corresponding photospheric B\({}_{\rm LOS}\) map saturated between \(\pm\) 60 G. An animation of this figure is available online, which shows the temporal evolution of the chromospheric and photospheric scenery underneath the CBP for the entire duration of 11 min.
Figure 5: Morphological similarities between spicules and the loops associated with the CBP. Panels (a) and (c)–(f) show the 2D density map of the detected spicules overlaid against a background of temporally averaged H\(\beta\) WB, MGN enhanced AIA 30.4, 17.1, 19.3, and 21.1 nm channels, respectively, whereas panel (b) shows a temporal average of the underlying photospheric magnetic field saturated between \(\pm\) 60 G. The FOV in each of the panels correspond to the dashed FOV indicated in Fig. 1.
channels. The MGN-processed images show the intensities in absolute units (though MGN fails to preserve the photometric accuracy), unlike the common difference images. From this figure it is clear that the distribution of spicules is very well correlated with the orientation and overall morphology of the CBP loops, as is evident from the 17.1, 19.3 and 21.1 nm channels. This provides a compelling observational confirmation (in a statistical sense) of spicules tracing the coronal magnetic field lines which, to the best of our knowledge, has not been reported before. Moreover, we also notice that the number density of the detected spicules peaks close to the footpoints of the CBP loops. This scenario seems to suggest that the studied spicules are the chromospheric components of the CBP loops which, post heating, appear in the hotter TR and coronal channels, and further contribute toward the transient intensity disturbances in the already hot CBP loops (see, for example, De Pontieu et al., 2011; Madjarska et al., 2011; Pereira et al., 2014; Rouppe van der Voort et al., 2015; De Pontieu et al., 2017; Samanta et al., 2019, and the references therein for studies about the coronal counterpart of spicules). We will explore this aspect further with a few representative examples in Sect. 4.2.
The morphological similarities between the H\(\beta\) spicules and the coronal loops associated with the CBP indicate the possibility that the loop structures are associated with spicular mass ejections and transient heating of the plasma from chromospheric to coronal temperatures. A direct investigation of such a connection would however require a more detailed analysis by combining high-resolution numerical simulations with spectroscopic observations of the CBP. Nonetheless, some studies such as De Pontieu et al. (2017) already showed an intriguing connection between spicules in the TR and the formation of coronal strands in a decayed plage region with the help of numerical simulations and coordinated IRIS and SDO observations. Moreover, spicules were also found to be responsible for triggering propagating coronal disturbances (PCDs) along many of the pre-existing (and newly formed) coronal strands rooted to the plage. PCDs are rapid recurring intensity fluctuations (\(\sim 100\ \mathrm{km\ s^{-1}}\)) whose exact nature remains a mystery, especially outside of sunspots (see De Pontieu and McIntosh, 2010; de Moortel, 2009; De Moortel et al., 2015; Bryans et al., 2016, for example, on the discussion whether PCDs are flows or waves). Therefore, it is likely that the intensity disturbances observed in the common difference coronal images are linked to the rapid spicular dynamics in the chromosphere.
From Fig. 5 we also notice a significant overlap between the widths of the detected chromospheric spicular features and the observed loops associated with the CBP. Using coordinated observations from Hinode's Extreme ultraviolet Imaging Spectrometer and Transition Region and Coronal Explorer instruments, Dere (2009) derived the volumetric plasma filling factor in CBPs, and came to the conclusion that the widths of its loops can be between 0\(\farcs\)2-1\(\farcs\)2 with possible substructures that are below the resolution limit of the instruments. Comprehensive statistical analysis carried out by Pereira et al. (2012) and Bose et al. (2021), indicate that spicule widths, for both off-limb and on-disk cases, are consistent with the range reported by Dere (2009) which further suggests that the H\(\beta\) spicules detected in this study are likely the chromospheric counterparts of the CBP.
Numerical modeling efforts led by Martinez-Sykora et al. (2018) offer key insights into the role of spicules in determining the widths of the coronal loops. They report that the widths of the simulated spicules (and subsequently the coronal loops) are primarily determined by the driving mechanism that generates these flows, along with the overall magnetic topology and heating within the magnetic field lines. Moreover, they find that the magnetic field rapidly expands primarily between the photosphere and middle to upper chromosphere where spicules are seen to be generated (in the model). The expansion of the field line is rather insignificant between the transition region and the corona which may explain why the CBP loops and spicules appear to have similar widths.
### Representative examples of spicular-CBP connection
In this section we further illustrate the spicule-CBP connection discussed in Sect. 4.1 through two representative examples shown in the left and right panels of Fig. 6 including their signatures in the TR (AIA 30.4 nm) and coronal passbands. We show the common difference images for the different AIA channels (in the left columns of each of the two panels) in order to enhance the visibility of the changes in intensity. The dashed vertical yellow lines in the left columns of both panels show the region of interest that is chosen to construct the \(x\)-\(t\) maps. Moreover, in addition to the common difference, we also show the \(x\)-\(t\) maps derived from the MGN processed AIA images and H\(\beta\) LC width maps to highlight the temporal evolution of the plasma emission from each channel.
The left panel shows an example of an RBE in the blue wing of H\(\beta\) (at \(-25\ \mathrm{km\ s^{-1}}\)). From the animation and the \(x\)-\(t\) maps (top row), it is clear that the RBE has an outward (away from the bright network regions) apparent motion and propagates from \(\sim 2^{\prime\prime}\) to \(6^{\prime\prime}\) in the vertical direction during its evolution. This is a commonly observed property of spicules where they originate from strong magnetic flux concentrations and tend to shoot outwards. Since spicules often have a wide range of Doppler shifts associated with them (Pereira et al., 2016; Bose et al., 2021), analysis based on images at fixed wavelength positions can sometimes provide an incomplete picture of their evolution. In such cases LC width maps offer a better understanding since they are
determined by considering a range of wavelengths on either side of the line center (see Sect. 4.1). In the present example however, the \(x\)-\(t\) maps derived from the LC widths and H\(\beta\) blue wing images are seen to be well correlated with each other.
A comparison of the spatio-temporal evolution seen in the corresponding AIA channels shows a noticeable correlation with the H\(\beta\) counterpart. An inspection of the animation of the AIA difference images shows clear intensity disturbances propagating in the CBP which appear to be in tandem with the H\(\beta\) spicule. The 19.3 and 17.1 nm difference images, in particular, show a clear propagation from the bottom to the top of the FOV. This is also well highlighted in the difference \(x\)-\(t\) maps (middle column). The 30.4 nm difference images, on the other hand, do not show such a clear propagating disturbance; however, the \(x\)-\(t\) maps reveal clear signatures which are also in tandem with the other wavelength channels.
A close look at the different \(x\)-\(t\) maps associated with the left panel of Fig. 6 reveals small but distinct spatial (and/or temporal) offsets among the different channels (with respect to the dashed cyan line) - with the TR and coronal emission lying above the cooler chromospheric plasma. Such a scenario is consistent with the analysis presented by De Pontieu et al. (2011); Pereira et al. (2014), and it suggests that the RBE has a multi-thermal nature with temperatures that can range from chromospheric to coronal values (of at least 1 MK). In fact, an early study focusing on multi-wavelength diagnostics of a CBP by Habbal & Withbroe (1981), found that coronal emission in CBPs lies a
Figure 6: Two representative examples highlighting the spicule-CBP connection from the chromosphere to the corona. Left panel: an example of an RBE observed in the blue wing (\(-\) 25 km s\({}^{-1}\)) of H\(\beta\) spectral line and its associated propagation in the different AIA passbands as indicated. The dashed vertical lines in the leftmost column indicate the region along which the \(x\)-\(t\) maps have been extracted for the different channels as shown in the middle and the rightmost columns. The solid vertical red lines in the \(x\)-\(t\) maps correspond to the instant at which this figure is shown. The dashed cyan line serves as a reference to illustrate the direction of propagation. Right panel: another example of a spicule observed in the blue wing (\(-\) 30 km s\({}^{-1}\)) of the H\(\beta\) spectral line is shown along with its impact on the AIA channels in the same format as the left panel. Note that the apparent direction of propagation of this spicule is opposite to the example presented in the left panel. Animations of the two panels are available online. They show the spatio-temporal evolution of the two spicules in the chromospheric H\(\beta\), transition region 30.4 nm and coronal 17.1, 19.3 and 21.1 nm channels during their respective lifetimes.
few arcseconds over and above the chromospheric emission, supporting the hypothesis that magnetic loops in a CBP are rooted in the chromosphere. The spatial offset between the TR (30.4 nm) and coronal (17.1/19.3 nm) emission patterns is indicative of the fact that the emission in the coronal channels is not caused by relatively cooler ions (such as O v, see Boerner et al., 2012) which are sensitive to temperatures of about 0.2 MK under equilibrium conditions. Moreover, the emission from the O v is expected to be very faint in comparison to the dominant Fe ix and Fe xv ions and would have occurred in the same spatial region as the 30.4 nm emission. This further adds support in favor of the spicular contribution to coronal emission associated with the CBP.
The right panel of Fig. 6 shows another example of spicular connection associated with the CBP in the same format as the previous example. Unlike the left panel, the spicule appears to be downward propagating as is evident from the animation and the \(x\)-\(t\) maps in the middle and right columns. A quick glance at the H\(\beta\) image would suggest that the example here is a blue-wing counterpart of downflowing RREs (seen in the red wing images of H\(\alpha\), see Bose et al., 2021, 2021). However, a closer inspection of the animation reveals a rather complex scenario where the spicule is rapidly seen to change its orientation (with respect to the LOS of the observer) during its propagation, before finally disappearing around \(t=180\) s. Such a complex propagation seems to convey that the spicule is downward propagating, which in reality could be the opposite.
Regardless of the interpretation associated with the orientation of the spicule and its mass flow, interestingly (and more importantly), the \(x\)-\(t\) maps of the coronal channels show a remarkable correlation with H\(\beta\). Moreover, the spatial offsets among the different channels are consistent with the discussion presented in the previous example and conform with the multi-thermal aspect of the spicule and its relation to the CBP. This supports our proposition that spicules in the chromosphere have a direct relationship with the disturbances propagating in the CBPs. Although many questions remain, this may also provide support to the idea that the processes associated with spicules may play a role in providing mass and energy flux necessary to sustain the radiative and conductive energy losses in the solar corona as suggested in the numerical simulation studies by Martinez-Sykora et al. (2017, 2018).
### Twists at the footpoints of the CBP
Spicules are known to undergo twisting (torsional) motions that are often interpreted as a sign of Alfvenic waves responsible for driving the fast solar wind and balancing the energy losses suffered in the solar corona (De Pontieu et al., 2007; McIntosh et al., 2011; De Pontieu et al., 2012). Moreover, small-scaled twists associated with spicules are ubiquitously found in the solar atmosphere (active regions and quiet Sun alike), and their signatures have also been found in the TR (De Pontieu et al., 2014).
In Fig. 7, we show a case of twist associated with spicules present at the footpoints of the CBP and their influence on the coronal loop above. The top row of the figure shows an H\(\beta\) Dopplergram at \(\pm\) 25 km s\({}^{-1}\) with blue and red colors indicating plasma motions toward and away from the observer along the LOS. The corresponding \(x\)-\(t\) map and the animation shows a clear change of direction (or color from blue to red) indicating a definite twisting motion in the chromosphere. A close look at the animation indicates that the event starts with a predominantly positive (red) Doppler shift at \(t=0\) s which rapidly converts to a negative (blue) Doppler shift in roughly 30 s. It remains predominantly negative until around \(t=180\) s after which it rapidly twists toward positive (red) once again. This behavior is very similar to the examples observed in the H\(\alpha\) and Ca ii 854.2 spectral lines for both off-limb and on
Figure 7: Chromospheric twist and its likely propagation into the solar corona. This figure is in the same format as Fig. 6 except that the H\(\beta\) wing images is replaced by its Dopplergram at \(\pm\) 25 km s\({}^{-1}\). Blue (red) color is indicative of plasma motion toward (away from) the observer. The associated animation (available online) shows the spatio-temporal evolution of the twist propagation across the chromospheric and coronal channels for roughly 300 s.
disk spicules as outlined in De Pontieu et al. (2012) and De Pontieu et al. (2014). The propagation speed (of roughly 35 km s\({}^{-1}\)) is fairly consistent (given the uncertainty related to the viewing angle) with Alfven speeds at chromospheric heights (De Pontieu et al., 2007). The LC width \(x\)-\(t\) map shows a similar trend to the Dopplergram \(x\)-\(t\) map where additionally we see that the plasma associated with the twisting motion stands out distinctly with respect to the background.
The \(x\)-\(t\) maps associated with the difference and MGN enhanced AIA 19.3, 17.1 and 21.1 nm channels show strong emission that is in tandem with their chromospheric counterpart (similar to the examples shown in Fig. 6). We also notice a significant offset in the emission among the different channels indicating that the plasma is heated to temperatures of at least 1-2 MK (refer to the discussion in the previous section), in association with the twisting spicules at chromospheric temperatures. Of course, a complete analysis of such a twist propagating in the coronal loops necessitates spectroscopic studies of the solar corona which is not possible with the set of instruments used in this study. Moreover, current observations suggest the prevalence of Alfvenic waves in the corona (e.g., Tomczyk et al., 2007; McIntosh et al., 2011) although the wave energy flux and wave modes are poorly captured with current instrumentation. Alfvenic waves have the potential to heat coronal loops (Antolin et al., 2018), in particular when mass flows are present within a structure in addition to waves (as shown for example in Taroyan, 2009; Williams et al., 2016). Upcoming space missions, such as Solar-C/EUVST or the MUlti-slit Solar Explorer (MUSE, De Pontieu et al., 2020, 2022), can potentially address these aspects.
### Chromospheric and coronal response to emerging magnetic flux
In this section, we investigate the chromospheric and coronal responses to two emerging flux episodes as observed from the LOS magnetic field.
#### 4.4.1 Flux emergence episode 1
Figure 8 and associated animation show an overview of the first episode. In the LOS magnetogram of panel (a), a negative parasitic polarity is seen to emerge in a predominantly positive region. In the figure, the area where this episode takes place is bounded by a green square region of 100\(\times\)100 pixels. The variation of the total positive and negative magnetic fluxes within this green square is shown in panel (f) where we find that the total negative flux starts to increase steadily after 10:01:31UT, reaching its peak value of \(\approx\)4\(\times\)10\({}^{21}\) Mx around 10:06:30 UT.
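The flux curves of panel (f) amount to summing the signed B\({}_{\rm LOS}\) values inside the box and converting to Mx; a minimal sketch, assuming the magnetogram has been resampled to the CHROMIS pixel scale and neglecting foreshortening at \(\mu=0.88\), could be:

```python
import numpy as np

def box_fluxes(blos, box, pix_arcsec=0.038):
    """
    blos : 2D LOS magnetogram in Gauss
    box  : (y0, y1, x0, x1) pixel bounds of the region of interest
    Returns the total positive and unsigned negative flux in Mx (G cm^2).
    """
    cm_per_arcsec = 7.25e7                    # ~725 km per arcsec near disc centre
    pix_area = (pix_arcsec * cm_per_arcsec) ** 2
    cut = blos[box[0]:box[1], box[2]:box[3]]
    pos = cut[cut > 0].sum() * pix_area
    neg = -cut[cut < 0].sum() * pix_area
    return pos, neg
```

The light curves in panel (e) are, analogously, spatial averages of the corresponding intensity maps inside the indicated boxes at every time step.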
We investigated the subsequent chromospheric response to the flux emergence episode by analyzing the H\(\beta\) LC width maps (panel (b)). Compared to looking simply at the H\(\beta\) line core, the LC width is optically thin toward the dense fibrillar canopies visible in the line core images, thereby facilitating a better understanding of the "connection" between the spicules and their photospheric footpoints. In particular, we obtain the light curve within the green square since changes in the chromosphere and spicules associated with this flux event would likely be rooted near this region. The result is shown as a black curve in panel (e). There is a clear enhancement in the H\(\beta\) LC width intensity starting from \(\approx\) 10:01 UT (around the same time as the total negative flux in panel (f) starts to show a marked increase). The LC intensity continues to increase until \(\approx\)10:03:30 UT after which it starts to decay and reaches a minimum around 10:05:15 UT, incidentally after which the increase in the total negative flux (by \(\approx\)300% from the start of the event) also tends to stabilize. It seems that as the flux emerges and subsequently interacts with the dominant positive polarity through magnetic reconnection (see Section 4.4.2), there is a significant enhancement in the spicular activity compared to any of the previous time steps, implying a correlation between the two. This is consistent with the analysis presented in Madjarska et al. (2021), in the context of chromospheric response to flux emergence associated with a CBP, and also Samanta et al. (2019), in general. However, unlike Samanta et al. (2019), it is not implied that the flux emergence (and subsequent cancellation) leads to the "generation" of spicules seen in close proximity. Instead, it is more appropriate to say that the emergence likely caused an enhancement in the observed spicular activity. We notice the presence of spicules both well before and after the emergence event as is evident from the animation.
We have also investigated the coronal response using the SDO/AIA 19.3 and 17.1 nm channels (panels (c) and (d)) along with the 21.1 nm channel. In this case, the corresponding light curves, shown in panel (e), are obtained in a region that is spatially displaced from the emerging flux region (the blue rectangle of 200\(\times\)300 pixels in panels (c) and (d)). This is because it was shown earlier on in the paper that spicules lie close to the footpoints of the CBP structure, whereas the coronal loops extend well beyond and are spatially (and/or temporally) displaced. Before 10:01:31 UT, the 17.1, 19.3, 21.1 nm light curves have very similar intensity levels relative to the maximum of each channel. As soon as the chromospheric activity starts to increase from 10:01:31 UT, we notice a co-temporal increase in the intensity of the 17.1 nm channel, whereas the 19.3 and 21.1 nm channels show a steady decrease compared to their respective pre-event values. However, unlike the illustrative spicule examples shown in the previous section (i.e. in Figs 6 and 7) and also the many studies conducted in the past (such as, De Pontieu et al., 2011; McIntosh et al., 2011; Henriques et al., 2016) that do establish a coronal connection quite convincingly in different SDO/AIA channels, for this particular enhanced spicular episode, the coro
nal relation is not clear. To not bias our interpretation, we have also computed other light curves over different AIA FOVs (of the same area) spatially displaced from one another, finding similar results.
#### 4.4.2 Reconnection associated with flux emergence 1
Figures 9 and 10 show evidence of magnetic reconnection through two examples of EBs located at the footpoint of the CBP associated with the emerging flux episode described above. We refer to them as quiet-Sun EBs (QSEBs) after Rouppe van der Voort et al. (2016) where these were first described. QSEBs are smaller, shorter-lived and less intense brightenings and are found in relatively quieter areas on the Sun compared to their active region counterparts. With the help of high-quality H\(\beta\) observations from the SST, Joshi et al. (2020); Joshi & Rouppe van der Voort (2022) recently showed that QSEBs are ubiquitous in the solar atmosphere and can play an important role in the energy balance of the chromosphere. Recent numerical modeling efforts led by Hansteen et al. (2017); Danilovic (2017) and Hansteen et al. (2019) have confirmed that (QS)EBs are classic markers of small-scale magnetic reconnection events in the solar photosphere.
In this study, we base our analysis on the small 100\(\times\)100 pixel FOV shown in panel (a) of Fig. 8 with a focus on the flux emergence event. Panel (a) of Fig. 9 shows the QSEB in the blue-wing H\(\beta\) intensity map (in
Figure 8: Chromospheric and coronal response to flux emergence episode 1. Panel (a) shows the photospheric LOS magnetic field map, (b) shows the H\(\beta\) LC width map, (c) and (d) shows the overlying corona of the CBP in AIA 19.3 nm and 17.1 nm channels. The green square boxes drawn in panels (a)–(d) show the region associated with the flux emergence event. Panels (c) and (d) also shows a blue rectangular box centered around (\(X\),\(Y\))=(7\(\farcm\)5,15\(\arcsec\)) where the coronal response to the emerging flux is analyzed. The black contour in panels (c) and (d) indicates the regions with an absolute LOS magnetic field \(\geq\) 50 G. Panel (e) shows the chromospheric and coronal light curves obtained from the green and blue FOVs indicated in panels (b)–(d), respectively. Panel (f) shows the temporal variation of the total positive and negative magnetic flux in the region bounded by the green colored box in panel (a). The gray shaded intervals in panels (e) and (f) shows the time when the chromospheric spicular activity is enhanced, and the pink shaded regions indicates the interval when the two QSEBs are observed. Animation of this figure is available online, which shows the evolution of the magnetic flux and its response in H\(\beta\), AIA 19.3 and 17.1 nm channels in the form of light curves for the entire 11 min of solar evolution.
dicated by the magenta marker). The animation shows a tiny, flame-like brightening lasting for about 40 s. We note that this period (along with the latter QSEB) is marked in panels (e) and (f) of Fig. 8 as QSEB 1 and 2, respectively. The H\(\beta\) spectral-time (\(\lambda-t\)) slice in Fig. 9 (b) shows an enhancement in the line-wings compared to the background which is reflected in the spectra shown in panel (c). The observed spectral shape is characteristic of an EB. Moreover, the co-temporal H\(\beta\) WB image in panel (d) shows no such brightening which clearly distinguishes this QSEB from a typical photospheric magnetic bright point. Panels (e) and (f) show the location of the QSEB on the LOS magnetic field map and the evolution of the total positive and negative magnetic flux inside the smaller cyan box shown in panel (e). From the animation and the light curves, we see that the negative flux decreases up to about 40 s from the start of the event after which it stabilizes up to \(t=55\) s, following which it starts to decrease right around the onset of the QSEB.
Figure 10 shows another QSEB event (QSEB 2) associated with the same emerging flux region under consideration. QSEB 2 is brighter, but shorter-lived (\(\approx\)25 s) in comparison to QSEB 1 and it is shown against a background of red-wing H\(\beta\) intensity map. Both the examples bear close morphological resemblance with distinct flame-like brightenings. The \(\lambda-t\) slice and the spectra for QSEB 2 also show characteristic EB-like behavior but as is also evident from panel (c), the intensity enhancement is stronger compared to QSEB 1. The corresponding WB image (panel d) shows that QSEB 2 is located in the intergranular lane, and from the B\({}_{\rm LOS}\) map we find that in this case the QSEB exists in the intersection of opposite polarities. The variation of the negative polarity flux in panel (f) shows an increase right around the onset of the QSEB.
The examples presented in this section clearly show that the emerging flux episode described in the previous section has a definite impact not just in the form of enhanced chromospheric spicular activity, but also deeper in the solar atmosphere where it reconnects and subsequently releases energy in the form of small-scaled EBs.
#### 4.4.3 Flux emergence episode 2
Figure 9: Details of QSEB 1 observed during the flux emergence episode 1. Panel (a): QSEB observed in the far blue wing of H\(\beta\). Panel (b): temporal variation of the H\(\beta\) line profile for a location in the QSEB indicated by the magenta marker in panel (a) in the form of a \(\lambda-t\) diagram. Panel (c): H\(\beta\) spectral line at a temporal instant indicated by the marker in panel (b) and the spatio-temporal average H\(\beta\) reference profile (dashed black line). Panel (d): corresponding WB image. Panel (e): corresponding LOS magnetic field map saturated between \(\pm\) 60 G, and panel (f) temporal evolution of the total positive and negative magnetic flux within the cyan box shown in panel (e). The dashed vertical line in (e) indicates the instant when this figure is shown. Animation of this figure is available online and it shows the evolution of the magnetic field, the QSEB and the corresponding H\(\beta\) spectra before, during and after the appearance of the QSEB for about 2.5 min of solar evolution.
In this section, we analyze the chromospheric and coronal responses to another flux emergence episode that occurred close to the footpoints of the CBP. Figure 11 depicts the overview of the episode in the same format as Fig. 8, and the emergence is shown with a green colored box (occupying the same area as before) drawn in panel (a). A close inspection of the animation linked with panel (a) suggests a clear but relatively smaller negative flux emergence episode lasting \(\approx\) 5.5 min. This is also evident from panel (f) which shows an increase in the total negative flux within the cyan colored box starting around 09:57 UT. The total negative flux increases by \(\approx\)180% compared to the pre-event values and peaks around 09:59:12 UT. It then starts to decrease steadily reaching a minimum value of 0.1\(\times\)10\({}^{21}\) Mx at 10:03:50 UT. The emerging negative polarity, however, starts to disappear from the B\({}_{\rm LOS}\) map around 10:02:37 UT (indicated by the gray region in panels e and f).
We found a contrasting evolution of the light curves associated with the chromospheric and coronal channels for this emergence episode in comparison to the event described in Sect. 4.4.1. The H\(\beta\) LC width intensity level in panel (e) does not show a marked increase in tandem with the flux emergence and subsequent cancellation; it maintains a steady level during the whole flux emergence episode and only starts to increase well after the total negative flux reaches its minimum value. The dynamical evolution of the spicules seen in panel (b) complements the variation of the H\(\beta\) LC light curve indicated in panel (e) where we do not see any significant enhancement in spicular activity compared to the pre-emergence scenario. The coronal channels behave similarly, with very little (or no) change in their respective intensity levels during the entire emergence event, implying that none of the channels are impacted directly by this emergence episode unlike the scenario outlined in Sect. 4.4.1. Again, to not be limited to a single FOV, we repeated the analysis by choosing different (rectangular) cyan FOVs in panels (c) and (d) like before. However, we did not find any differences in the temporal variation of the AIA light curves. In addition, we were also not able to find any signature of QSEBs linked with this event. A possible explanation could be that the strength of the emerging flux in the second episode is at least a factor of three smaller than in the first episode, which reinforces our conclusion that there is likely no impact on the chromosphere and the corona associated with this weaker flux emergence event.
Identifying small-scale flux emergence events, such as the ones described in this paper, can be a challenging task. This is primarily because high-resolution observations of the solar photosphere reveal a myriad of magnetic features especially in the regions close to a net
Figure 10: Details of QSEB 2 associated with the flux emergence event 1. The figure and its associated animation (available online) showing about 2.5 min of solar evolution displays the temporal evolution of the magnetic field, QSEB and the H\(\beta\) spectra in the same format as Fig. 9.
work or an inter-network. This is somewhat clear from the LOS magnetic field maps used in this paper where we see sub-arcsecond fields appearing and disappearing all over the FOV. Therefore, future studies will require high-resolution telescopes (achieving accurate polarimetry at a resolution of 0\(\farcs\)2 or better) to discern the impact of such small-scale flux emergence episodes on the overlying coronal structures.
## 5 Summary and Conclusions
Despite several decades of scientific research dedicated to the study of CBPs, their chromospheric counterparts have remained largely unexplored with the exception of Habbal and Withbroe (1981), and very recently Madjarska et al. (2021). This paper is an attempt in that direction where the focus is on the chromosphere underneath a CBP observed at spatial and temporal scales that have never been reported before. In particular, this study primarily investigates the relationship between ubiquitous spicules seen at the footpoints of a CBP observed in the H\(\beta\) spectral line, and their coronal counterparts. The chromospheric scenery reveals a conspicuous morphological and topological resemblance with the loops of the CBP, indicating that spicules form an integral part of the overall magnetic structure. This interpretation is further reinforced by computing the 2D density distribution of over 6000 spicules detected using an automated procedure, and comparing them against coronal images. Our analysis reveals that these spicules predominantly lie close to the footpoints of the CBP and have the same orientation as their coronal counterparts.
We show illustrative examples indicating the "connection" between spicules and CBP loops that is suggestive of the scenario that spicular flows are often associated with heating to TR and coronal temperatures, and can propagate into the corona likely in the form of PCDs (Nobrega-Siverio and Moreno-Insertis, 2022, see Fig. 4). Thus, they can potentially contribute toward transient intensity perturbations of an already existing CBP. Furthermore, we also show an example of a twist propagating in the CBP loops that is directly correlated with twisting spicules seen in the chromospheric footpoints. All such examples provide strong indication of a direct link that exists between the chromosphere and the corona of a CBP. It is however not straightforward to explain whether spicules observed at the footpoints of
Figure 11: Chromospheric and coronal response to flux emergence episode 2 in the same format as Fig. 8. No QSEBs were observed during this event. An animation of this figure is available online, which shows the flux emergence episode and corresponding chromospheric and coronal response for the entire 11 min duration in the same format as Fig. 8.
the CBP are unique compared to the spicules observed elsewhere. From past studies, it is expected that the strength of the magnetic field (and its inclination) in the lower atmosphere plays a major role in driving the observational properties of spicules. Statistical analysis by Pereira et al. (2012) reveals clear differences between properties of spicules in active regions and quiet-Sun (and coronal holes), and Heggland et al. (2011) report similar findings with numerical simulations. Underneath the CBP, the field strength is distinctly stronger compared to the rest of the FOV, where the rapid expansion of the weaker field lines likely leads to different coronal impact. Statistical studies with coordinated chromospheric and higher-resolution coronal observations (e.g. from MUSE), in addition to detailed quantitative analysis of mass-energy exchanges, are needed to determine if the coronal contribution of spicules depends on the strength of the photospheric magnetic fields.
We also investigate the chromospheric and coronal responses to two different flux emergence episodes and find very different results. In the first case, we see a clear enhancement in the chromospheric spicular activity in tandem with the flux emergence event. The emission in the 17.1 nm channel shows a strong correlation with the chromospheric activity whereas the same cannot be said for the 19.3 and 21.1 nm channels. The emission in the latter two channels decreases (but only by about 3%) almost co-temporally with the enhancement seen in the 17.1 nm channel. The second flux emergence episode does not seem to contribute toward a change in either the chromospheric or the coronal activity. This is likely due to a weaker (and smaller-scaled) flux emergence compared to the previous episode which causes little to no impact in the upper atmospheres of the Sun. Further coordinated observations (along with numerical simulations) spanning the photosphere through the corona are needed to establish statistically when and why such small-scaled emergence episodes impact the CBP above.
We also found distinct signatures of magnetic reconnection associated with the stronger flux emergence episode in the form of multiple QSEBs. Although we found a slight co-temporal intensity increase in one of the coronal channels, it is not straightforward to correlate that directly with the reconnection happening in the upper photosphere. As explained before, the likely cause of such a coronal intensity enhancement is the enhanced spicular activity seen in the chromosphere.
The results presented in this paper attempt to describe the (complex) chromospheric scenery underneath a CBP from the perspective of high-resolution observations for the very first time. However, further studies, covering both footpoints of a CBP, are needed in coordination with ground- and space-based observations to answer some of the outstanding questions in more detail. "Connecting" the photospheric magnetic footpoints to the corona through the chromosphere remains a challenge. Current instrumentation does not allow simultaneous photospheric, chromospheric, and coronal magnetic field measurements of sufficient spatial resolution and quality. Until that becomes feasible, a possible way forward is the use of non-potential magnetic field extrapolations in combination with 3D numerical simulations. Such comparisons may lead to a better understanding of how flux emergence impacts the chromosphere and the corona overlying a CBP.
S.B. and B.D.P. gratefully acknowledge support from NASA grant 80NSSC20K1272 "Flux emergence and the structure, dynamics, and energetics of the solar atmosphere". We thank Edvarda Harnes for the SDO to SST alignment. The Swedish 1-m Solar Telescope is operated on the island of La Palma by the Institute for Solar Physics of Stockholm University in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofisica de Canarias. The Institute for Solar Physics is supported by a grant for research infrastructures of national importance from the Swedish Research Council (registration number 2017-00625). This research is supported by the Research Council of Norway, project numbers 250810, 325491, and through its Centres of Excellence scheme, project number 262622. D.N.S. acknowledges support by the European Research Council through the Synergy Grant number 810218 ("The Whole Sun", ERC-2018-SyG) and by the Spanish Ministry of Science, Innovation and Universities through project PGC2018-095832-B-I00.
|
2307.14231 | Giant conductance of PSS:PEDOT micro-surfaces induced by microbubble
lithography | We provide direct evidence of the effects of interface engineering of various
substrates by Microbubble lithography (MBL). We choose a model organic plastic
(or polymer) poly(3,4-ethylenedioxythiophene) polystyrene sulfonate
(PEDOT:PSS), with conductivity of 140 S/cm, as a representative organic system
to showcase our technique. Thus, we fabricate permanent patterns of PEDOT:PSS
on glass, followed by a flexible PDMS substrate, and observe conductivity
enhancement of 5 times on the former (694 S/cm), and 20 times (2844 S/cm) on
the latter, without the use of external doping agents or invasive chemical
treatment. Probing the patterned interface, we observe that MBL is able to tune
the conformational states of PEDOT:PSS from coils in the pristine form, to
extended coils on glass, and almost linear structures in PDMS due to its more
malleable liquid-like interface. This results in higher ordering and vanishing
grain boundaries leading to the highest conductivity of PEDOT:PSS on PDMS
substrates. | Anand Dev Ranjan, Rakesh Sen, Sumeet Kumar, Rahul Vaippully, Soumya Dutta, Soumyajit Roy, Basudev Roy, Ayan Banerjee | 2023-07-26T15:00:03Z | http://arxiv.org/abs/2307.14231v1 | # Giant conductance of PSS:PEDOT micro-surfaces induced by microbubble lithography
###### Abstract
We provide direct evidence of the effects of interface engineering of various substrates by Microbubble lithography (MBL). We choose a model organic plastic (or polymer) poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS), with
conductivity of 140 S/cm, as a representative organic system to showcase our technique. Thus, we fabricate permanent patterns of PEDOT:PSS on glass, followed by a flexible PDMS substrate, and observe conductivity enhancement of 5 times on the former (694 S/cm), and 20 times (2844 S/cm) on the latter, without the use of external doping agents or invasive chemical treatment. Probing the patterned interface, we observe that MBL is able to tune the conformational states of PEDOT:PSS from coils in the pristine form, to extended coils on glass, and almost linear structures in PDMS due to its more malleable liquid-like interface. This results in higher ordering and vanishing grain boundaries leading to the highest conductivity of PEDOT:PSS on PDMS substrates.
## 1 Introduction
Light-matter interactions at mesoscopic length scales have led to fascinating emergent phenomena that have modified the properties of both light and matter, and have therefore led to deep physical understanding of nature itself [1, 2, 3]. Thus, spin-orbit interactions of light have been driven by plasmonic materials [1], while the chemical and physical properties of molecules have been modified by coupling them strongly with light [2, 4]. Besides fundamental studies, multifarious applications have also been facilitated by light-matter interactions, ranging from communications and quantum information processing [5], to exciting applications in nano-optics and optoelectronics [6]. Even at somewhat larger length scales, the interaction of light with nanoscopic matter proves to be very useful for applications in biological light harvesting systems, and for the development of efficient artificial photovoltaic devices [7]. In this context, interfaces play a crucial role. Interfacial structure and emergent properties thereof can be induced by simple experimental heuristics rendering such techniques indispensable. [8, 9, 10, 11]
In the case of plastic electronics - a crucial factor is the conductivity of the material used. While conductive organic polymers are typically employed for this purpose, their conductance
is often limited by their electronic structures [12, 13]. Efforts to improve the conductivity of the polymers extrinsically typically rely on doping [14, 15] and chemical treatment [16]. In addition, all these methods consist of multiple processing stages and are therefore time consuming, while doping with the help of harsh chemicals also renders the material entirely unsuitable for any biological application. With regard to this, the role of interfaces in affecting the conductivity of such polymers remains an open question. With various tools of interface engineering being available, such dependence could prove to be of considerable significance in enhancing conductivities of conductive polymers in a generic and non-invasive manner.
Microbubble lithography (MBL) has established itself as a robust tool in self-assembly based bottom-up approaches in fabricating micro-patterns on transparent substrates for use in diverse applications including plastic electronics, catalysis, and even biodetection [17, 18, 19, 20]. Especially in the context of plastic electronics, MBL has been used to pattern conductive polymers and even increase their conductivity significantly by doping occurring concurrently with the patterning process [21]. On another note, it has been demonstrated that irreversible self assembly under non-equilibrium conditions is a reliable method for carrying out interface engineering [22]. MBL promises to be an excellent candidate for implementing such designs. Indeed, several observations in MBL seem to suggest fundamental changes in the properties of mesoscopic matter post patterning - thus leading to exciting studies to reveal the intrinsic science behind the observations [17, 23]. Thus, it appears worthwhile to investigate the role of interfaces in the conductivity of conducting polymers patterned by MBL. This is because MBL makes it possible to restructure interfaces, which allows their chemical modifications to appear as emergent phenomena. Understandably, the presence of different surface matrices would further influence and enhance these phenomena by offering a modular approach for influencing the intrinsic properties of matter, of which one is conductivity.
In this paper, we investigate the role of substrates in affecting the conductivity of micropatterned organic conducting polymers using MBL. We selected the organic polymer PEDOT:PSS (poly(3,4-ethylenedioxythiophene) polystyrene sulfonate) as a representative model system since it
has been extensively investigated, and there are several reported results of its application as diodes, PLEDs, supercapacitors, and numerous other devices [24, 25, 26, 27]. Indeed, the material has been identified as one of the primary candidates for the development of next generation flexible plastic electronics [28]. However, PEDOT:PSS also has low conductivity in its pristine form, akin to most organic conducting polymers as mentioned earlier, which renders it virtually unsuitable for any electronic device fabrication without extrinsic strategies to improve conductivity. For PEDOT:PSS, besides the usual approaches including doping and chemical treatment, the conductivity has also been improved by applying mechanical stress, blending with other NPs, etc. [29, 30]. However, in all other methods - to the best of our knowledge - the increase in conductivity is performed independently of the patterning strategy, and the two are different processes. On the other hand, MBL - by character - juxtaposes the patterning and conductivity-increase processes [21], which is a significant advantage in process flow optimization for actual applications. Thus, we pattern PEDOT:PSS on two different transparent substrates - glass and PDMS - and observe significant conductivity increase in both cases over the pristine sample. We attempt to understand the underlying physical reason behind such increase of conductivity with the help of various characterization tools including scanning electron microscopy coupled with energy dispersive X-ray spectroscopy (SEM-EDX), Raman spectroscopy, and cyclic voltammetry. A careful investigation of the results from all the measurements points to conformational and morphological changes in the PEDOT:PSS crystal structure itself due to high absorption at the laser wavelength used to drive MBL. Indeed, the large energy absorbed due to the high light intensities generated in MBL melts the insulating PSS shells encapsulating the PEDOT cores. This increases intercrystalline connectivity and grain size of the polymer as we pattern from glass to PDMS - so that the diffusion lengths of charge carriers increase as we change the substrate, resulting in higher conductivities being observed. Our results show that the choice of substrate is an important tool to improve the conductivity of organic polymers, and MBL - with its innate capability to simultaneously pattern and induce morphological and conformational changes
of the material being patterned so as to increase conductivity - can be an ideal candidate to facilitate high-performing plastic electronics.
_Results and Discussions: I. Experimental setups for patterning and subsequent measurements on the patterns:_ The schematic of the setup used for the experiments is shown in Fig. 1a. The patterning was performed using an Olympus 1X71 microscope equipped with an inverted Plan-Fluorite 100x oil immersion objective with 1.3 Numerical Aperture (NA) and an overfilled illuminating aperture. The 1064 nm diode laser (Lasever) is focused at the sample plane by the 100x objective, and illumination is carried out using white light passing through a condenser lens, which is separated from the back-scattered IR laser using dichroic mirrors placed before the camera that is fixed at the back-focal plane of the objective lens. The sample chamber, as shown in Fig. 1b is filled with 10 \(\mu\)l of PEDOT:PSS obtained from Sigma Aldrich. The detailed methodology behind MBL has been already described earlier [17, 21, 31]. Here, we use different laser powers (ranging from 3-30 mW at the focal plane) to grow microbubbles of different sizes, and translate them by moving the microscope sample-holder stage, so as to obtain PEDOT:PSS patterns of varying widths both on glass and PDMS substrates. Fig. 1c demonstrates a linear pattern on glass, while Fig. 2a shows a Hall-bar structure patterned on PDMS. Both have been drawn using a laser power of around 24 mW.
After patterning, we performed a set of experiments to measure the conductivity of the patterns, and provide explanations for the enhanced conductivity we observed. Thus, we measured the conductivity of the samples at room temperature by the four-point probe technique using a Keithley 2400 Source Meter Unit. Further, scanning electron microscopy (SEM) and Energy Dispersive X-Ray Spectroscopy (EDX) of the patterned material were performed using the Zeiss GeminiSEM using \(K_{\alpha}\) lines of Cu. We then performed Raman measurements using a Horiba micro-Raman spectroscopy system in the back scattering configuration. The Raman lines were excited using a 532.2 nm laser with an incident power of 1 mW and resolution around 1.2 cm\({}^{-1}\). Finally, cyclic voltammetry was performed with a
workstation (CH Instruments, Model CHI7091E) in a three-electrode electrochemical setup where 0.1 M sodium sulphate solution is used as an electrolyte. The fabricated patterns are used as the working electrode along with platinum wire as a counter electrode and Ag/AgCl (1M KCl) as reference electrode.
_II. Conductivity measurements:_ To determine the conductivity of the PEDOT:PSS patterns deposited on glass and PDMS, we measured the current-voltage characteristics shown in Fig. 2c-d [21], employing a four-probe measurement in which voltage was applied to one arm of the pattern while the current was measured in the perpendicular arm, as shown in Fig. 2a.
Figure 1: Experimental Setup: (a) Schematic of experimental setup used for the experiment. A 1064 nm laser beam is focussed on the sample chamber using 100x (1.3 NA) objective lens and a set of dichroic mirror (Dichroic Mirror 2) and polarising beam splitter (Polarising BS). The sample chamber is imaged using a visible light source (LED) which falls on the sample chamber and is collected by the camera (b) When the focused laser beam is incident on the adsorbed material on glass slide it creates a hot spot resulting in formation of microbubble. The microbubble induces a Gibbs-Marangoni flow in the dispersed material as shown by arrows. This results in deposition of material at the base of the microbubble (c) The patterned PEDOT:PSS on glass substrate using the MBL.
Conductivities of all the samples are shown together in Fig. 2b with the corresponding highest and lowest values. We found that the pristine sample has an average conductivity of around \(140\pm 12\) S/cm (mean \(\pm\) 1\(\sigma\)), the lowest conductivity of all the samples at roughly 141 S/cm, whereas patterns on glass had a 5 times increase in conductivity, with an average conductivity of around \(694\pm 24\) S/cm. This value is already almost three times higher than what we had achieved with other well-known conducting polymers - polypyrrole and polyaniline - that were similarly micropatterned on glass [21], but in conjunction with soft oxometalates. We observed an even more enhanced conductivity of the PEDOT:PSS micropatterns that were fabricated on PDMS substrates - the average conductivity being
Figure 2: (a) Brightfield image of patterned PEDOT:PSS in a Hall-bar geometry (b) Conductivities of pristine and patterned PEDOT:PSS on glass and PDMS. Patterns on the PDMS substrate show the maximum conductivity compared to either of the samples. The conductivity of pristine is lowest of all the samples (c) IV characteristics of pristine PEDOT:PSS (d) IV graph of the patterned PEDOT:PSS on glass and PDMS substrate.
\(2844\pm 321\) S/cm. Note that this is comparable to the highest conductivity of PEDOT:PSS reported in the literature after patterning on a PDMS substrate (around 3605 S/cm) - this being achieved by doping with sulphuric acid [30]. Our process, obviously, does not involve any such chemical treatment, and is clearly a consequence of the patterning process itself. The question, however, arises as to why the same material when patterned on multiple substrates exhibits different values of conductivity. This is what we attempt to answer now, using different experimental probes, and look for any physical or chemical changes in PEDOT:PSS that could cause such significant change in an intrinsic quantity such as conductivity.
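For orientation, a minimal sketch of how a four-probe IV trace can be converted into a conductivity value is given below; the IV points and the pattern geometry (probe spacing, width, thickness) are hypothetical placeholders, since those dimensions are not stated here.

```python
import numpy as np

# Illustrative four-probe IV data (V in volts, I in amperes) and assumed pattern
# geometry; the actual dimensions of the MBL patterns are not given in the text.
V = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
I = np.array([0.0, 2.1e-5, 4.0e-5, 6.2e-5, 8.1e-5])

length = 20e-4     # separation of the voltage probes along the pattern, cm (assumed)
width = 5e-4       # pattern width, cm (assumed)
thickness = 1e-5   # pattern thickness, cm (assumed)

G = np.polyfit(V, I, 1)[0]                  # conductance = slope of the IV line, in siemens
sigma = G * length / (width * thickness)    # conductivity in S/cm
print(f"conductivity ~ {sigma:.0f} S/cm")
```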
To determine the underlying process behind the increase of conductivity in our case, we employed various experimental techniques such as scanning electron microscopy (SEM), EDX, Raman spectroscopy and cyclic voltammetry (CV) to examine the pristine (taken in the
Figure 3: To determine the elemental composition along the patterned region of the PEDOT:PSS we used the SEM-EDX. This image maps the elemental constituents of PEDOT:PSS i.e Oxygen(O), Silicon(Si), Carbon(C) and Sulphur(S), if present at the given mapped region. (a) We take the elemental map of the pristine PEDOT:PSS, where (b) to (e) shows the presence of O, Si, S and C respectively in the pristine sample. (f) PEDOT:PSS pattern on glass substrate. (g) to (j) show EDX mapping of the pattern on glass showing the presence of O, C, Si and S, respectively, along the patterned regions. (k) PEDOT:PSS pattern on PDMS substrate. (l) to (o) display elemental mapping of O, C, Si and S, respectively, present in the patterned area. We conclude from this image that the patterned region shows high concentration of the elemental constituents of PEDOT:PSS in both the substrates (p-r) the surface morphology of PEDOT:PSS present in the three configurations (scale bar represents 300 nm) (p) The SEM picture of the pristine sample clearly shows the small grain size with non-continuous boundaries which increases for the case of patterns on glass substrate as shown in (q) The PDMS patterns (r) shows longer and larger grain size compared to all other samples with few non-continuous domains.
form of a thin film on a glass cover slip) sample, and those patterned using MBL on glass and PDMS substrates, respectively. The SEM/EDX images of the pristine and the patterned PEDOT:PSS on glass and PDMS are shown in Fig. 3a-o, with the SEM images displayed in Fig. 3a, f & k, respectively, while the EDX elemental mapping of the pristine sample, and the patterns on glass and PDMS are shown in Fig. 3b-e, g-j & l-o, respectively. These images prove the continuity of the patterns and are also consistent with the expected chemical composition of a PEDOT:PSS sample, comprising Oxygen (O), Carbon (C), Silicon (Si) and Sulphur (S) [32].
_Investigations to explain high conductivity - analysis of SEM/EDX data:_ The conductivity of PEDOT:PSS is also dependent on the inter-crystalline boundaries and grain size, as is well-known in the literature [33]. For our samples, we used the SEM data to look for any changes in grain size and morphology. We can clearly see from Fig. 3 that the domain size of PEDOT:PSS has increased from pristine (Fig. 3p) to patterned samples (Fig. 3q-r). This increase in size of domains can be attributed to the conformational change in the PEDOT:PSS as we go from pristine to the patterns on glass and PDMS. In addition, it is clear from the SEM images (Fig. 3p) that in comparison to the other two samples, the pristine sample has exceptionally sharp grain boundaries. There are also regions in the pristine thin film where the inter grain spacing is higher (black lines). When compared to the pristine sample, the glass samples have fewer defined grain boundaries, and the grains are larger and virtually continuous. In the case of the PDMS sample, grain size increases even further and grain boundaries practically disappear. Now, grain boundaries act as solid walls, preventing charges from diffusing over the patterns and thin layer. As a result, the reduction in the number of grain boundaries and increase in the grain size improves charge carrier diffusion across the PEDOT:PSS patterns on glass substrates compared to the pristine sample, and even more on PDMS substrates [34, 35] compared to the others.
_Investigations to explain high conductivity by Raman spectroscopy:_ Although the EDX data validates the elemental composition and presence of PEDOT:PSS inside the patterned
region, they do not provide evidence for conformational and molecular changes. On a similar note, SEM pictures reveal the morphology of the surface - validating the hypothesis that patterning on different substrates leads to surface alterations - but they cannot display volume changes. To glean information regarding these aspects, as well as to corroborate doping in the case of PDMS, we performed Raman spectroscopy on the pristine and patterned samples on the two different substrates, as shown in Fig. 4. Earlier studies have pointed out that the Raman fingerprint region between 1200 and 1500 cm\({}^{-1}\) is of considerable importance for judging the conformation and structural properties of the PEDOT:PSS chains. The Raman vibrations around 1250 cm\({}^{-1}\) correspond to the \(C_{a}\)-\(C_{a}\) inter-ring stretching, while the broader mode around 1425 cm\({}^{-1}\) is identified as the \(C_{a}\)=\(C_{b}\) symmetric stretching mode of the five-membered thiophene ring of the PEDOT:PSS [36]. In our case, these two peaks can be clearly seen in all spectra - however, their characteristics appear to change significantly for the patterned PEDOT:PSS in comparison to the pristine sample.
We focus first on the inter-ring stretching mode at 1250 cm\({}^{-1}\). The spectra clearly reveal two changes: the first is a gradual rise in the relative intensity profile by \(\sim\) 40%, and the second is a shift of the peak position from the pristine (1244 cm\({}^{-1}\)) to the PDMS (1252 cm\({}^{-1}\)) samples, as seen in Fig. 4a and Table 1. The same characteristics are observed when PEDOT:PSS is annealed at a temperature between 200 and 300 \({}^{\circ}\)C, which denotes the melting of the PSS surrounding PEDOT, resulting in improved contact between PEDOT chains [37]. Importantly, similar or higher temperatures are attained while nucleating a microbubble [38] - which implies that MBL performs a role akin to annealing here. Now, this increase in the normalised integrated intensity of the Raman modes has been linked to the subsequent lengthening of the PEDOT chain, which is related to the decrease in material resistivity shown in the electrical characterisation [39]. This indicates that the microbubble causes melting of the PSS structures, thus increasing the contact between the conductive PEDOT chains and providing a boost to the charge conduction mechanism.
Now, we consider the wider peak in the vicinity of 1425 cm\({}^{-1}\). From the literature
available, it appears that these peaks are due to a combination of two peaks, one around 1430 and the other at 1410 cm\({}^{-1}\), attributed to the benzoid and the quinoid conformation structures, respectively [40]. The benzoid structures have been associated with a coil conformation of the polymer chain, whereas the quinoid structures have primarily been demonstrated to consist of linear conformations [41]. Since the charge conduction of the PEDOT:PSS sheet is better facilitated [39] with the linear conformation quinoid structure, that is the favoured configuration for high conductivity. In Fig. 4, the larger band around 1420 cm\({}^{-1}\) for all the samples has been deconvolved into two components with distinct modes of vibration, one at a higher wavenumber around 1420-30 cm\({}^{-1}\) and the other at a lower wavenumber of 1400-10 cm\({}^{-1}\). As has already been mentioned, the quinoid mode is represented by the lower wavenumber peak (red line) and the benzoid structure is represented by the higher wavenumber peak (green line). The analysis of the fitted quinoid and benzoid peaks, as shown in Fig. 4b, reveals that
the broad Raman peak around 1425 cm\({}^{-1}\) for the pristine sample gets increasingly red shifted for glass and PDMS. Moreover, as we observe in Fig. 4b after deconvoluting the broad peak around 1420 cm\({}^{-1}\), the contribution of the quinoid peak gradually increases as we move from pristine to PDMS - with both peaks undergoing a red shift as well, from 1433 to 1422 cm\({}^{-1}\). This indicates a structural rearrangement of the polymer chains, presumably as a result of micropatterning on different substrates, which causes a change in the conformational state of the PEDOT:PSS from largely benzoid to quinoid structures. This is evident in the increasing ratio of the relative integrated intensities of the quinoid versus benzoid peaks, which we represent in Table 2. Thus, pristine PSS-PEDOT has the lowest quinoid-benzoid ratio, while the samples patterned using MBL on PDMS have the highest. Given that a quinoid structure favours a linear conformation, PEDOT:PSS grains can be packed more densely, which increases their size and promotes more continuous grain boundaries [42]. It is important to note that grain boundaries function as roadblocks that prevent the efficient transfer of charges. This is also proven by the SEM images, where the grain boundaries become more and more homogeneous as we move from pristine to samples patterned on PDMS. Thus, due to the improved molecular order and dense packing, it is likely that interactions between polymers in the linear conformation will be greater than those between polymers in the coil conformation [36]. Together, these factors will lead to the decrease of the PEDOT:PSS sheet resistance from pristine to the patterned samples, with the lowest resistance for the PDMS substrate.
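As a rough illustration of the deconvolution discussed above, the sketch below fits the broad band with a sum of two Gaussian components, a quinoid-like one near 1410 cm\({}^{-1}\) and a benzoid-like one near 1430 cm\({}^{-1}\), using scipy; the spectrum in the snippet is synthetic, and the original analysis may have used different line shapes or fitting software.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    # sum of a quinoid-like component (lower wavenumber) and a benzoid-like one (higher)
    return (a1 * np.exp(-(x - c1) ** 2 / (2 * w1 ** 2))
            + a2 * np.exp(-(x - c2) ** 2 / (2 * w2 ** 2)))

# Wavenumber axis (cm^-1) and a synthetic, baseline-corrected intensity trace;
# a real analysis would load the measured spectrum instead.
rng = np.random.default_rng(0)
x = np.linspace(1350, 1500, 300)
y = two_gaussians(x, 0.6, 1408, 12, 1.0, 1430, 15) + 0.01 * rng.normal(size=x.size)

p0 = [0.5, 1410, 10, 1.0, 1430, 10]     # initial guesses near the reported positions
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
a1, c1, w1, a2, c2, w2 = popt
quinoid_area = a1 * abs(w1) * np.sqrt(2 * np.pi)
benzoid_area = a2 * abs(w2) * np.sqrt(2 * np.pi)
print(f"quinoid/benzoid area ratio: {quinoid_area / benzoid_area:.2f}")
```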
_Investigations to explain high conductivity: analysis of cyclic voltammetry data:_ The Raman spectroscopy data suggest that the PEDOT:PSS patterned on PDMS displays a higher
Table 2: Characteristics of the Raman peak due to the quinoid mode in PEDOT:PSS in the pristine form and after patterning on different substrates.

| Substrates | Normalised Area | Raman Shift (cm\({}^{-1}\)) of quinoid mode |
| --- | --- | --- |
| Pristine | 0.8 | 1408 |
| Glass | 2.5 | 1404 |
| PDMS | 4.3 | 1402 |
linear conformation of the constituent polymers due to the predominance of quinoid structures that support such linear conformations. This should also enhance charge delocalization on the PSS:PEDOT chains and raise carrier density. We analyse this issue in greater depth now, and attempt to understand the reason behind the linear conformation of PSS:PEDOT on PDMS substrates. For pristine PEDOT:PSS, we note that conductive PEDOT-rich cores are evenly buried in insulating PSS nanoshells. When exposed to MBL, this PEDOT core preferentially absorbs infrared photons and heats up significantly when irradiated with a 1064 nm laser beam. Following that, such PEDOT cores effectively transport this large amount of heat energy to the PSS shell nearby. The heat released from the PEDOT core in the intense and immediate laser heating process then thermally fragments these PSS nanoshells, causing dynamic reorganisation. As a result, conductive PEDOT-rich cores emerge from locations where PSS shells previously existed, resulting in improved contact between surrounding conductive PEDOT-rich cores. This helps in reducing the carrier hopping distances between PEDOT-rich cores and eventually leads to a giant increase in the electrical conductivity [43]. Now, such restructuring also occurs at the patterning surfaces, which adds to the increase in conductivity. Since glass is a covalent interface, it is much more difficult to restructure than PDMS, which has a liquid polymeric interface that can be easily restructured during MBL. As a consequence of this restructuring, more interfacial orientational order is generated, thus leading to higher conductivity of the PEDOT:PSS patterned on the PDMS substrate compared to that on the glass substrate. This also proves the importance of the substrate in improving the conductivity of patterned PEDOT:PSS. Note also that if PSS is removed from the vicinity of PEDOT, the coulombic interactions between them would be reduced, causing PEDOT to shift conformation from coil to an extended-coil or a linear structure. Positive charges are then more delocalized as a result of these conformational changes in the PEDOT chain, which would also then contribute to conductivity improvements [44].
In order to probe these effects, the electrochemical properties of the patterns need to be determined, which we carry out using cyclic voltammetry. As evident from Fig. 4c, the cyclic
voltammograms of pristine PEDOT:PSS and patterns on glass and PDMS show oxidation peaks at 0.3 V, 0.13 V and -0.05 V, respectively. Similarly, reduction peaks are obtained at -0.7 V, 0 V and -0.3 V, respectively. When compared to a pristine sample, the values of integrated current virtually double in the case of a pattern on glass, and rise by a factor of ten in the case of a pattern on PDMS. As a result, when compared to the other two, PDMS patterns produced the highest electrode current. This is clearly due to the fact that PDMS patterns have a higher number of charge carriers than those on glass and the pristine form, which occurs due to the reasons mentioned earlier.
The rectangular shape of the cyclic voltammograms of the patterns indicates a capacitive nature of the patterns. These redox features of the interface might correspond to the interconversion reactions happening between the oxidized and the reduced states of this conducting polymer due to patterning, while the shift in oxidation peaks can be explained in terms of interfacial ordering of the substrates arising upon laser irradiation during the patterning process. Since the 'liquid-like' interface of pure PEDOT:PSS is in a 'frozen' state, it allows for longer hopping distances. When patterned, however, the structural ordering in the matrices (both in glass and PDMS) squeezes the system into energy minima, resulting in shorter hopping times and easier electron transfers during interconversion reactions between the oxidised and reduced states, and also causing oxidation peaks to shift towards the lower potential region. Our hypothesis that the restructuring during MBL patterning on a PDMS matrix leads to more ordering than glass is validated by the observations that the lowest oxidation potential and highest conductivity are obtained in the case of the former (PDMS).
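As a sketch of how the integrated current of a voltammogram can be compared across samples, the snippet below integrates a single anodic sweep over time; the potential window, scan rate and current trace are placeholders rather than the measured data.

```python
import numpy as np

# Synthetic anodic sweep: potential (V vs Ag/AgCl) and current (A); real traces
# would come from the electrochemical workstation.
E = np.linspace(-0.2, 0.6, 400)
I = 1e-5 * (1.0 + np.exp(-(E - 0.13) ** 2 / 0.02))   # illustrative oxidation peak near 0.13 V

scan_rate = 0.05                        # V/s, an assumed value
t = (E - E[0]) / scan_rate              # convert the potential axis to time
charge = np.sum(0.5 * (I[1:] + I[:-1]) * np.diff(t))   # trapezoidal integration, in coulombs
print(f"integrated anodic charge ~ {charge:.2e} C, peak at {E[np.argmax(I)]:.2f} V")
```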
_Conclusions_: We use MBL to carry out successful interface engineering of glass and PDMS substrates when we form stable linear patterns, so that we observe an enhancement of conductivity of around five times for glass (\(694\pm 24\) S/cm) and twenty times using PDMS (\(2844\pm 321\) S/cm), compared to the pristine sample (\(140\pm 12\) S/cm). Our method involves no external chemical treatment or doping of the substrates, and is intrinsic to the process of
patterning itself. To determine the effect of the interfaces on the material patterned, and thus explain the enhanced conductivities, we carry out a series of characterization experiments including SEM/EDX, Raman spectroscopy, and cyclic voltammetry. We conclude that MBL, due to the heat it generates, triggers a conformational change in the PEDOT:PSS polymers depending on the substrate on which they are patterned. Thus, the incident laser selectively heats up the PEDOT core due to the strong selective absorption of the IR photons as compared to their PSS-affected surroundings, as depicted in Fig. 5. By heating up and melting the adjacent PSS shells as a result of this selective absorption, the PEDOT core becomes exposed to neighboring PEDOT cores. Along with the directional self-assembly, the fragmentation of PSS shells and lengthening of PEDOT cores leads to a change from a coil conformation (in pristine form) to an expanded coil (on glass) or even a linear conformation (on PDMS) of the PEDOT:PSS, which improves conductivity. The interfacial ordering of the polymer is further improved when this technique is applied to various surface matrices, which squeezes the system into energy minima and reduces hopping durations while facilitating quicker charge diffusion, all of which contribute to further enhancement of conductivity from pristine to glass to PDMS.
Figure 5: Schematic representation of the mechanism of the conductivity enhancement for PEDOT:PSS. The PEDOT core selectively absorbs at the 1064 nm laser wavelength. This heat is eventually transferred to the nearest neighbours, i.e. the PSS shells, which results in their melting; the self-assembly driven by this selective heating organises the polymers into a linear-coil structure.
This hypothesis is confirmed by the SEM images of the patterned PEDOT:PSS material, which display increasingly larger grains as we go from pristine to MBL patterns on glass and PDMS - with the boundaries virtually disappearing for the samples patterned on PDMS, thus allowing for larger charge diffusion - leading to higher conductivity for that case. Further confirmation is obtained from Raman spectroscopy, which probes deeper into the conformation of the polymer chains that make up PEDOT:PSS. We concentrate on two Raman modes - the inter-ring stretching mode at 1250 cm\({}^{-1}\), and the broad mode around 1425 cm\({}^{-1}\), which comprises a convolution of benzoid and quinoid modes. We observe that the contribution of the quinoid mode - which is associated with linear polymer structures - increases as we go from pristine to glass and then the PDMS substrate. Finally, we perform cyclic voltammetry, where we observe that the value of electrode current is higher by a factor of two for PEDOT:PSS samples on glass, and almost ten for that on PDMS, compared to the pristine form. This clearly demonstrates the availability of a higher number of charge carriers for PEDOT:PSS patterned on PDMS compared to that on the other interfaces. In addition, the oxidation peaks shift towards increasingly lower potentials for glass and PDMS, which can only occur due to structural ordering in both matrices, with higher ordering on PDMS since it has a liquid polymeric interface that is easier to restructure in comparison to glass. This facilitates easier electron transfers due to shorter hopping times during interconversion reactions between oxidised and reduced states. Thus, the evidence we obtain from SEM and Raman spectroscopy that PDMS interfaces lead to more linear polymer structures is entirely validated by the data we obtain from cyclic voltammetry. Indeed, the process of interface engineering that we drive by MBL to increase the conductivity of PEDOT:PSS across glass and PDMS is summarized by the cartoon depicted in Fig. 5.
We believe that our experiments have provided a convincing argument to confirm that MBL leads to successful interface engineering which can change conductivity of conductive polymers rather drastically in the process of the patterning itself - which is further to the evidence we obtained earlier for this process simultaneously doping materials while patterning
them to achieve similar conductivity increase for polypyrrole and polyaniline [21] micropatterns on glass. The next step is to attempt to build heterostructured interfaces which can lead to development of electronic devices such as diodes and transistors, which can take this extremely promising science for patterning micro or even nano-structures to the next level for value creation and subsequent translation to chip-fabrication technology.
The authors thank IISER Kolkata, an autonomous research and teaching institution funded by the MHRD, Government of India for providing the financial support and infrastructure. The authors also thank Mithun Ajith and Gunaseelan M. of IIT Madras for useful discussions and help in the experiments. R.S. acknowledges DST for the INSPIRE fellowship.
|
2305.04883 | Fuzzy Gene Selection and Cancer Classification Based on Deep Learning
Model | Machine learning (ML) approaches have been used to develop highly accurate
and efficient applications in many fields including bio-medical science.
However, even with advanced ML techniques, cancer classification using gene
expression data is still complicated because of the high dimensionality of the
datasets employed. We developed a new fuzzy gene selection technique (FGS) to
identify informative genes to facilitate cancer classification and reduce the
dimensionality of the available gene expression data. Three feature selection
methods (Mutual Information, F-ClassIf, and Chi-squared) were evaluated and
employed to obtain the score and rank for each gene. Then, using Fuzzification
and Defuzzification methods to obtain the best single score for each gene,
which aids in the identification of significant genes. Our study applied the
fuzzy measures to six gene expression datasets including four Microarray and
two RNA-seq datasets for evaluating the proposed algorithm. With our
FGS-enhanced method, the cancer classification model achieved 96.5%,96.2%,96%,
and 95.9% for accuracy, precision, recall, and f1-score respectively, which is
significantly higher than 69.2% accuracy, 57.8% precision, 66% recall, and
58.2% f1-score when the standard MLP method was used. In examining the six
datasets that were used, the proposed model demonstrates its capacity to
classify cancer effectively. | Mahmood Khalsan, Mu Mu, Eman Salih Al-Shamery, Lee Machado, Suraj Ajit, Michael Opoku Agyeman | 2023-05-04T21:52:57Z | http://arxiv.org/abs/2305.04883v1 | # Fuzzy Gene Selection and Cancer Classification Based on Deep Learning Model
###### Abstract
Machine learning (ML) approaches have been used to develop highly accurate and efficient applications in many fields including bio-medical science. However, even with advanced ML techniques, cancer classification using gene expression data is still complicated because of the high dimensionality of the datasets employed. We developed a new fuzzy gene selection technique (FGS) to identify informative genes to facilitate cancer classification and reduce the dimensionality of the available gene expression data. Three feature selection methods (Mutual Information, F-ClassIf, and Chi-squared) were evaluated and employed to obtain the score and rank for each gene. Then, using Fuzzification and Defuzzification methods to obtain the best single score for each gene, which aids in the identification of significant genes. Our study applied the fuzzy measures to six gene expression datasets including four Microarray and two RNA-seq datasets for evaluating the
proposed algorithm. With our FGS-enhanced method, the cancer classification model achieved 96.5%,96.2%,96%, and 95.9% for accuracy, precision, recall, and f1-score respectively, which is significantly higher than 69.2% accuracy, 57.8% precision, 66% recall, and 58.2% f1-score when standard MLP method was used. In examining the six datasets that were used, the proposed model demonstrates its capacity to classify cancer effectively.
Gene expression, Classifier methods, Fuzzy gene selection, and Cancer classification
## 1 Introduction
Cancer is the second leading cause of death worldwide and represents the abnormal growth of cells and their frequent metastatic spread throughout the body [1]. Cancer cells frequently proliferate independently of growth signals and neglect to respond to survival/death signals that instruct them to stop dividing or to die (i.e. by apoptosis). This phenomenon occurs due to inherited or environmental factors that cause DNA mutations or epigenetic modifications that deregulate normal cellular gene expression programs [2]. For example, DNA mutation is caused by harmful substances in the environment including chemicals in tobacco smoke and ultraviolet radiation from the sun. Some cancer genes are inherited (e.g. BRCA1/2) and have high penetrance due to their fundamental role in cellular regulation. Therefore, the analysis of deregulated gene expression programs in cancer cells may play an important role in the early detection and treatment of cancer. Consequently, identifying a specific set of genes (gene signatures) that aid classification may provide an earlier diagnosis of cancer and provide personalized treatment options [2]. The tools (Microarray and RNA-seq technologies) that have been developed for measuring the expression levels of genes in normal and cancer tissue have opened the door for investigators to build and test new mathematical and statistical models for analyzing gene expression data. Those measurement tools calculate the expression levels of thousands of genes across hundreds/thousands of clinical samples.
Both technologies (Microarray and RNA-seq) measure transcriptome-wide gene expression and allow a comparison of cancerous and non-cancerous tissues. Microarray methods measure the intensities of colored fluorescent probes spotted on glass slides, which correspond to gene expression under different conditions, whereas RNA-seq methods measure read counts as a proxy for relative gene abundance [3]. RNA-seq methods have largely superseded microarrays as they produce less noise and are more accurate in calculating gene expression abundance [4]. Researchers have developed a range of mathematical and statistical techniques to analyze gene expression data for various goals. These include the identification of optimal gene signature pathways, enhanced cancer classification, cancer prediction, drug discovery, and improved personalized therapy. To achieve this, obstacles regarding the high dimensionality and complexity of the publicly available gene expression data remain to be overcome.
However, measurement tools for calculating gene expression have improved continuously. Artificial intelligence (AI) is now a powerful tool for mitigating the time taken to analyze large cancer datasets. It has the potential to improve the accuracy of cancer classification and/or cancer prediction. AI is the broadest term used to classify machines that mimic human intelligence. AI includes machine learning (ML) techniques such as Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Random Forest (RF) approaches. ML also includes deep learning (DL) approaches that use Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Multilayer Perceptrons (MLP).
The present study provides significant contributions by attempting to address a number of shortcomings.
First, a new fuzzy gene selection technique has been developed to reduce the dimensionality of gene expression datasets.
Second, using the limited number of genes selected by the FGS method prevents, or at least reduces, overfitting problems when classifier approaches are applied.
Third, the minimal number of biomarker genes used as identifiers reduces the time required for a classifier model's training stage.
Fourth, the suggested paradigm enables early cancer detection and precise cancer classification.
Fifth, only a few useful informative genes need to be employed.
The rest of the work is organized as follows: section II explores recent studies analyzing gene expression data using ML. Section III explains the concepts of the methods used for developing the fuzzy gene selection technique and the classifier approaches employed; it also describes the repositories used to download the datasets employed for training and testing the proposed model. Section IV explains practically the techniques employed for developing the proposed model (FGS and MLP). Section V discusses the results obtained from the proposed model (FGS and MLP) and compares them with other classifier approaches (i.e. SVM, KNN, and RF). Conclusions are provided at the end of the paper.
## 2 Related Work
Sun et al. [5] suggested a new approach, namely a multimodel deep neural network (MDNN), that aims to improve the performance accuracy of breast cancer classification. The proposed algorithm was trained and tested on publicly available gene expression data that includes 24368 genes across 2509 breast cancer and 548 normal samples [6]. The new model was compared with three different machine learning methods (SVM, RF, and Logistic Regression (LR)). Minimum Redundancy Maximum Relevance (mRMR) was also employed as a feature selection technique to reduce the number of features (genes) and improve classification accuracy. The accomplished accuracy was 82%, 80%, 79% and 76% for MDNN, SVM, RF, and LR respectively. However, recall values were low in all classifier algorithms (45%, 36%,
22% and 18% for MDNN, SVM, RF, and LR respectively) and precision was 95% for all classifier approaches.
Although the suggested model's performance accuracy was good, further accuracy enhancement is necessary given the sensitivity of cancer diagnosis. Furthermore, the recall values were quite low, which had an impact on the performance of the presented method. Studies typically use several datasets for different types of cancer to validate the findings produced by their models, whereas only one dataset was used in the work evaluated here.
Jing Xu et al. [7] proposed a novel Deep Neural Forest (DFNForest) algorithm to classify subtypes of three different cancer types (glioblastoma multiforme (GBM), breast, and lung). The system was tested by employing RNA-seq data available from TCGA. The researchers used two feature selection techniques (Fisher ratio and neighborhood rough set) to reduce the dimensionality of the publicly available data, address overfitting issues, and select the genes that significantly impacted the performance of the proposed model [8]. They achieved an accuracy of 93% (breast), 88% (lung), and 84% (GBM).
Guillermo et al. [9] proposed a CNN and transfer learning (TL) model for lung tumor prediction. 10535 samples and the top 20k most expressed genes were downloaded from TCGA for 33 different kinds of cancer, but the proposed model was tested only on the lung cancer dataset. The new model was compared against other classifier methods (a densely connected multi-layer feed-forward neural network (MLNN) and SVM) to evaluate the suggested approach. The achieved accuracy was 68%, 72%, and 69% for CNN, MLNN and SVM respectively. The proposed model thus accomplished low accuracy, and it was tested on only one type of cancer (lung), so it may not achieve the same accuracy for other types of cancer. Moreover, the proposed model did not achieve better accuracy than the compared classifier methods; the MLNN described in that study achieved better accuracy, as illustrated previously. Other evaluation measurements from this research are given in Table 1.
Yeganeh et al. [10] employed multiple machine learning methods with multiple gene expression datasets of ovarian cancer for ovarian cancer prediction. Seven GEO datasets (GSE12172, GSE14407, GSE9899, GSE37648, GSE18521, GSE38666, and GSE10971) were obtained for training and testing the machine learning approaches. The system used a 26-gene set panel for training different classifier methods. The highest accomplished accuracy value was 0.89 when a Random Forest pipeline was applied.
Table 1: Comparing the performance of CNN against MLNN and SVM

| Methods | AUC | Sensitivity | Specificity | Accuracy |
| --- | --- | --- | --- | --- |
| CNN | 73% | 67% | 68% | 68% |
| MLNN | 70% | 61% | 73% | 72% |
| SVM | 70% | 64% | 69% | 69% |
Low accuracy and the use of imbalanced datasets were recorded as drawbacks of this work.
It can be concluded from this section that previous work calls for developing a new model that improves cancer classification and selects a small number of significant genes to be used as identifiers for cancer classification. More studies are discussed in our previously published work, which is freely available [30].
### Publicly available datasets
Below are common data repositories that provided gene expression data from normal and cancer-derived tissues used to train and test models for classification or prediction purposes. Those repositories are further described as follows.
#### 2.1.1 Gene Expression Omnibus (GEO)
GEO [11] is a public functional genomics data repository supporting MIAME-compliant data submissions. The repository supports RNA-seq and Microarray data, but GEO mostly provides Microarray data. The total number of samples provided by GEO is 3635328, covering different diseases. GEO is freely available, allowing users and researchers to download experiments and curated gene expression profiles.
#### 2.1.2 The Cancer Genome Atlas (TCGA)
TCGA [12] is a landmark cancer genomics program, distinguished by providing 84,031 samples from 33 different cancer types. The datasets available on TCGA are measured by RNA-seq and Microarray methods, which quantify the expressed levels of gene activity for healthy and unhealthy tissues.
### Feature selection
Feature Selection (FS) is a statistical method that aims to select an optimal subset of features from a large number of original features for a given dataset [13]. The goal is to choose the best subset containing k features. FS approaches have valuable benefits: they reduce the training time, reduce the complexity of the model, and make it easier to interpret. Additionally, they give faster responses on unseen data and powerful generalization, which enhances the performance of the model and avoids (or at least reduces) overfitting issues [14]. This work has used three feature selection methods to identify the optimal subset of genes that were later employed as identifiers for training the classifier methods. Those feature selection methods are explained below.
#### 2.2.1 Mutual Information
Mutual information (MI) gauges the amount of information shared by two random variables. In the context of gene selection, this definition is employed to select a subset of important genes with respect to the output vector [14]. It has two major benefits: it can be used with different types of machine learning models, and it is a fast method for selecting features. Mathematically it can be defined as follows, where X represents a random variable (gene) and Y is the target (cancer type).
\[I(X,Y)=\sum_{x}\sum_{y}p(x,y)\log\frac{p(x,y)}{p(x)p(y)} \tag{1}\]

\[=H(Y)-H(Y|X) \tag{2}\]

Where \(H(Y|X)\) is the conditional entropy of Y when X is known.
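A minimal sketch of MI-based gene scoring using scikit-learn's mutual_info_classif is shown below; the expression matrix X and the labels y are random placeholders, not one of the datasets used in this study.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Placeholder expression matrix (samples x genes) and class labels.
rng = np.random.default_rng(0)
X = rng.random((100, 500))
y = rng.integers(0, 2, 100)

mi_scores = mutual_info_classif(X, y, random_state=0)   # one MI score per gene
ranking = np.argsort(mi_scores)[::-1]                    # genes ranked by score
print(ranking[:10])
```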
#### 2.2.2 F-ClassIF
F-ClassIF calculates a ratio of variations: for each feature it compares the variation between classes with the variation within classes across the samples. This method is called the ANOVA F-test [15]. The F-test results in a score that represents how well a feature separates the classes. For example, a score is calculated for each feature between two classes, and this score is used for selecting the important features. As shown in Figure 1, the red color represents class 1, the blue color represents class 2, and the two features lie on the x and y axes. The x feature is a better separator than y because if we project the data onto the x-axis, two completely separated classes are obtained, but when we project the data onto y, the two classes overlap in the middle of the axis. Based on this, the features that obtain higher scores are chosen as the best features for a given dataset.
#### 2.2.3 Chi-squared
The chi-squared statistic is used to assess the independence of two events. To begin, the chi-squared statistic is computed between each gene and the class; the desired number of features is then selected based on the highest chi-squared scores. The chi-squared formula is presented below [16]:
\[\chi^{2}_{c}=\sum_{i}(O_{i}-E_{i})^{2}/E_{i} \tag{3}\]
Where: C = degrees of freedom, O = observed value(s), and E = expected value(s)
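The ANOVA F-test and chi-squared scores can be obtained in the same way; a sketch using scikit-learn's f_classif and chi2 follows, again on placeholder data (note that chi2 requires non-negative feature values).

```python
import numpy as np
from sklearn.feature_selection import f_classif, chi2

rng = np.random.default_rng(0)
X = rng.random((100, 500))          # chi2 requires non-negative feature values
y = rng.integers(0, 2, 100)

f_scores, _ = f_classif(X, y)       # ANOVA F-test score per gene
chi_scores, _ = chi2(X, y)          # chi-squared score per gene
print(np.argsort(f_scores)[::-1][:10])
print(np.argsort(chi_scores)[::-1][:10])
```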
Figure 1: Illustration example of distributed Features to show up F-classif work
### Fuzzy gene selection (FGS)
The proposed new fuzzy gene selection method selects the best subset of genes, which are then used as identifiers for training the classifiers. The proposed FGS can be summarized in four major steps, as shown in Figure 2. The steps are illustrated as follows:
#### 2.3.1 Pre-processing step
The process of preparing raw data for use by machine learning algorithms is known as pre-processing. It is the initial stage of data cleansing prior to analysis procedures such as feature selection or classification. The suggested algorithm employed three primary pre-processing techniques, which are as follows:
1. Address the missing values: In general, missing values in a dataset have a negative influence on classifier performance, hence there are multiple ways for dealing with missing values (Eliminate Data Objects, Ignore the Missing Value During Analysis, and Estimate Missing Values). There are no missing values for a gene's expressed level in gene expression data. However, certain gene symbols are missing. As a result, this stage removed only the raw data that does not contain the gene symbol.
2. Handle the duplication: simply eliminating the duplicated gene symbols.
3. Normalization is a procedure that is commonly used as part of data preparation for ML, particularly inside neural network classifier approaches. The primary goal of normalization is to modify the values of numeric columns in the dataset to use a similar scale without distorting differences in the ranges of values or losing information. The most common kind of normalization is min-max normalization, which was applied in this study. The normalization value is calculated using the equation below.
\[V=\frac{v-\min_{A}}{\max_{A}-\min_{A}} \tag{4}\]

Where:

\(v\) is the original value of the feature,

\(\max_{A}\) is the maximum of the original values of the feature,

\(\min_{A}\) is the minimum of the original values of the feature,

and \(V\) is the normalized value, which lies in the interval (0-1). (In the general form of min-max normalization, the result is further rescaled to a new interval \([N\min_{A},N\max_{A}]\); here the target interval is (0-1).)
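A minimal sketch of column-wise min-max normalization corresponding to Eq. (4) is given below; the small array is a placeholder, and the guard against a zero range is an added assumption.

```python
import numpy as np

def min_max_normalise(X):
    # Rescale each gene (column) to [0, 1] as in Eq. (4); the guard against a
    # zero range (constant gene) is an added assumption, not stated in the text.
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    span = np.where(x_max - x_min == 0, 1.0, x_max - x_min)
    return (X - x_min) / span

X = np.array([[5.1, 230.0], [7.4, 180.0], [6.2, 410.0]])
print(min_max_normalise(X))
```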
#### 2.3.2 Vote step
Three feature selection approaches (MI, F-classif, and chi-squared) were used to select informative genes. Depending on the step function (SF), each feature selection approach chooses a different number of genes. The formula below has been used to compute the step function. This approach is intended to avoid using a fixed number of selected genes, which may result in neglecting some genes with the same score when, for example, only the top ten genes are kept. It is also worth noting that using this formula gives more flexibility to the step function value than using a constant threshold such as 0.3: if a feature selection method assigns scores around 0.3 to few or no features, a fixed threshold would lose some essential features (genes) that could
have been selected by other feature selection methods.
\[SF=max(FSS)*0.3 \tag{5}\]
Where SF is step function, FSS is the feature selection score for all genes.
max is the maximum score for all features scored by the feature selection method.
The genes selected at this stage have scores greater than or equal to the step function value calculated above.
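A sketch of this vote step is given below; it assumes that the candidate set passed on to the next stage is the union of the genes selected by the three methods, which is one plausible reading, and the per-gene scores are placeholders.

```python
import numpy as np

def vote_select(scores, factor=0.3):
    """Indices of genes scoring at least factor * max(score), as in Eq. (5)."""
    sf = np.max(scores) * factor
    return np.where(scores >= sf)[0]

# Placeholder per-gene scores from the three feature selection methods.
rng = np.random.default_rng(1)
mi_scores, f_scores, chi_scores = rng.random((3, 500))

candidates = np.union1d(np.union1d(vote_select(mi_scores), vote_select(f_scores)),
                        vote_select(chi_scores))
print(len(candidates), "candidate genes retained by the vote step")
```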
#### Fuzzification step
This is the process of changing crisp data into fuzzy data using membership functions, with the goal of transforming the crisp data into values ranging between (0-1). There are different types of membership functions; the Triangular Membership Function was used in this work.
\[MF_{i}=\frac{W_{i}-a}{b-a} \tag{6}\]
Where MF is the membership function.
W is the crisp value (score) for a gene.
a = lowest possible score (min).
b= highest possible score.
This membership function is applied for each of the three feature selection methods, which means there are MF1, MF2, and MF3 in this work.
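A sketch of the fuzzification step using the triangular membership function of Eq. (6) is shown below, applied to placeholder scores from the three feature selection methods.

```python
import numpy as np

def triangular_membership(scores):
    """Map raw scores to [0, 1] with the triangular membership function of Eq. (6)."""
    a = np.min(scores)   # lowest possible score
    b = np.max(scores)   # highest possible score
    return (scores - a) / (b - a)

rng = np.random.default_rng(2)
mi_scores, f_scores, chi_scores = rng.random((3, 500))   # placeholder scores
mf1 = triangular_membership(mi_scores)
mf2 = triangular_membership(f_scores)
mf3 = triangular_membership(chi_scores)
print(mf1.min(), mf1.max())
```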
#### Defuzzification step
This step converts the fuzzy output data back to crisp data and is the final stage of the gene selection method used to select informative genes. The genes selected in these steps are used as identifiers for training the classifier approaches.
\[ASG=\frac{MF_{1}+MF_{2}+MF_{3}}{N} \tag{7}\]
Where ASG is the Average Score for a gene through the three feature selection methods.
MF is the membership function value for each gene, and N is the number of feature selection methods that have been employed (in this work N = 3).

The two preceding phases show that different filter feature selection approaches provide different scores for the same gene; Fuzzification and Defuzzification are used to obtain a single score for each gene. As a result, a step function, indicated in the equation below, is used for choosing the optimal subset of genes that are then used as identifiers for cancer classification.
\[SF=max(FSS)*0.5 \tag{8}\]
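A sketch combining the defuzzification of Eq. (7) with the final selection threshold of Eq. (8) is given below; the membership values stand in for MF1, MF2 and MF3 from the previous step.

```python
import numpy as np

def defuzzify(mf1, mf2, mf3):
    """Average membership value per gene over the three methods, Eq. (7) with N = 3."""
    return (mf1 + mf2 + mf3) / 3.0

rng = np.random.default_rng(3)
mf1, mf2, mf3 = rng.random((3, 500))     # placeholder membership values
asg = defuzzify(mf1, mf2, mf3)

sf = asg.max() * 0.5                      # final step function, Eq. (8)
informative_genes = np.where(asg >= sf)[0]
print(len(informative_genes), "informative genes selected by FGS")
```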
Figure 2: Block Diagram of Proposed Fuzzy Gene selection Process
### Classifier Approaches
#### 2.4.1 Support Vector Machine(SVM)
It is applied to classification and regression challenges. However, SVM is typically applied to classification problems because it has accomplished outstanding performance in this area. SVM aims to create the best decision boundary (hyperplane) to segregate the input data in different spaces. The SVM algorithm attempts to find the hyperplane in an n-dimensional space that segregates different data points [17][18]. Although SVM has been widely used, it has some weaknesses. For example, SVM underperforms when datasets are large compared to small datasets. SVM does not work well with noisy datasets, for instance when target classes overlap [19]. Additionally, it is not suited to cases where the number of features is larger than the number of samples. These disadvantages of SVM have a high impact when it is applied to gene expression data, because gene expression data is noisy and the number of genes is greater than the number of samples.
#### 2.4.2 K-Nearest Neighbors (KNN)
It works on the assumption that similar things are positioned near to one another, which also makes it suitable for recommender systems. To put it another way, KNN calculates the distance between the new point and the previously trained points (classes), so that the new point is assigned to the nearest of the trained classes in feature space; for example, with two classes (Class A and Class B), as shown in Figure 3, the red "star" represents the new point that requires prediction. Finding the best neighbourhood size (K) in KNN is critical because there is no standard method [18]. A list of candidate integer values is often evaluated to decide which one gives the highest accuracy, and as a consequence the best K is picked. Although KNN is straightforward to use, there are several significant drawbacks: it is prone to noisy and missing data, is inefficient with large datasets, and struggles with high-dimensional data.
#### 2.4.3 Decision Tree (DT)
A decision tree is a supervised machine-learning technique that is used for both classification and regression challenges; however, it is mostly employed as a solution for classification purposes [18]. DT works under the principle that the data is continuously split according to a certain parameter. It is easy to understand because it mimics the human process of making decisions, and it requires less data cleaning compared with other ML approaches. However, it is complex compared with other algorithms because it consists of many layers and may have overfitting issues. It is also computationally expensive as more class labels are applied. The working procedure of DT can be summarized in five main steps as follows [21].
1. Step 1: DT starts with the entire dataset, say S, in a node called the root node.
2. Step 2: Apply an attribute selection measure (ASM) to find the best attribute for the given dataset.
3. Step 3: Split the dataset into subsets according to the possible values of the best attribute.
4. Step 4: Create the decision tree node that contains the best attribute.
5. Step 5: Repeat step 3, partitioning the subsets to grow the tree; this process is repeated until nodes can no longer be split, yielding leaf nodes, each of which represents one class or its probability [14].
#### Gaussian Naive Bayes (GNB)
Gaussian Naive Bayes is a supervised learning technique that relies on Bayes' theorem and is employed for classification challenges, specifically text classification, because it is well suited to high-dimensional training datasets [22]. It is considered one of the top 10 classifier techniques in data mining [23]. It is also characterized by faster prediction compared with other classifier models, being easy to build, and being effective in classification problems. However, GNB presumes that all features are independent, which means it cannot learn relationships between features [24][22]. Another drawback of GNB is that the conditional independence assumption is hard to satisfy in microarray data [25]. GNB works by taking each data point and assigning it to whichever class is nearest to it. It not only calculates the Euclidean distance between the new point and the trained classes but also takes into account the class variance: for each dimension, the z-score is calculated as the distance from the mean divided by the standard deviation [26].
Figure 3: KNN and its Hyperplane Selection
#### 2.4.5 Multilayer Perceptron(MLP)
MLP is a type of feedforward artificial neural network (ANN) that is widely used in pattern recognition, classification challenges, and prediction. It is mostly employed to solve supervised learning problems [17]. MLP maps the input to the output with data and calculations flowing in a single direction. Generally, it consists of at least three layers: an input layer, an output layer, and at least one hidden layer in between [27]. Each layer in MLP is fully connected with the next layer. The input layer receives the signal from the outside world into the network, the hidden layers perform the arithmetic operations between the input layer and the output layer, and the output layer is responsible for making the decision (prediction). As a result, the output layer transfers the information to the outside environment. Each layer in MLP is composed of a number of nodes (neurons). Most importantly, the operation of MLP can be summarized in four main steps:
1) Step 1: Propagate the input data forward from the input layer to the output layer.
2) Step 2: The MLP learns by updating the connection weights between the neurons; a backpropagation algorithm is applied after the input data of each node in the MLP has been processed [27].
3) Step 3: Calculate the errors by finding the difference between the classes predicted by the MLP and the known classes, and use supervised learning to train the MLP so that the calculated errors are reduced.
4) Step 4: Repeat the previous three steps over multiple iterations to learn suitable weights.
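These four steps are what scikit-learn's MLPClassifier performs internally when fit() is called; the sketch below is only illustrative (hidden-layer sizes and data are placeholders, not the configuration used in this paper).

```python
# MLP training sketch: forward pass, error computation, backpropagation and
# weight updates are repeated by fit() for up to max_iter iterations.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32),  # placeholder hidden layers
                    activation="relu", max_iter=300, random_state=0)
mlp.fit(X, y)
print("training accuracy:", mlp.score(X, y))
```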
### Cross Validation
Cross validation in ML is a statistical method that aims to minimize or avoid over-fitting issues in different classifier approaches. Rather than training a model on a single training dataset, the cross-validation method allows training the model on many datasets, by splitting the dataset into multiple folds and training the model on different folds [20]. As a result, the model achieves generalization capability, which is a good sign of a robust model. It also helps to provide a more accurate estimate of the algorithm's prediction performance. The dataset is split into k folds, e.g., k=5, as shown in Figure 4.
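A minimal sketch of the 5-fold procedure, assuming scikit-learn; the classifier and data below are placeholders.

```python
# 5-fold cross-validation: the model is trained and evaluated once per fold
# and the per-fold accuracy scores are averaged.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))
y = rng.integers(0, 2, size=150)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(MLPClassifier(max_iter=300, random_state=0), X, y, cv=cv)
print("per-fold accuracy:", scores, "mean:", scores.mean())
```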
### Evaluation Measurement Methods
This section describes the evaluation tools that were used to evaluate the performance of the proposed model against previous models, and to compare the performance of the classifier methods when the new fuzzy gene selection method was employed against the same classifier methods when fuzzy gene selection was not applied. These evaluation parameters are used for measuring the performance of a model. Four evaluation measurements must be explained to demonstrate that the proposed study outperformed previous studies. The evaluation measurements are as follows:
Accuracy (AC) is an evaluation measurement that is utilized to determine which model is the best for a given dataset in AI. The ratio of correctly predicted observations to the total observations is called accuracy. The formula below is used to calculate it mathematically [28]:
\[Accuracy=\frac{TP+TN}{TP+FP+TN+FN} \tag{9}\]
Where TP is True Positive, TN is True Negative, FP is False Positive and FN is False Negative.
A TP is the correctly predicted positive value which means that the value of the actual class is cancer and the value of the predicted class is also cancer.
A TN is an outcome where the model correctly predicts the negative class. A FP is an outcome where the model incorrectly predicts the positive class. FN is an outcome where the model incorrectly predicts the negative class.
Precision (Pre) is the ratio of correctly predicted positive observations to the total predicted positive observations as described in [30]
\[Precision=\frac{TP}{TP+FP} \tag{10}\]
Recall (Rec) is the fraction of retrieved instances among all relevant instances. It is also known as sensitivity. The recall formula is given as [28]:
\[Recall=\frac{TP}{TP+FN} \tag{11}\]
The F1 score (F1) combines the precision and recall of a classifier into a single metric by taking their harmonic mean, where a perfect F1 score has a value of 1 and
Figure 4: KFold Cross Validation Process with K=5
the worst score at 0 [28]:
\[F1=2\times\frac{precision\times recall}{precision+recall} \tag{12}\]
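The four measures follow directly from the confusion-matrix counts; the short function below computes them exactly as in Equations (9)-(12) (the counts in the example call are hypothetical). Equivalent results can be obtained with scikit-learn's metric functions.

```python
# Accuracy, precision, recall and F1 computed from TP, TN, FP, FN.
def evaluate(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)           # Eq. (9)
    precision = tp / (tp + fp)                           # Eq. (10)
    recall = tp / (tp + fn)                              # Eq. (11)
    f1 = 2 * precision * recall / (precision + recall)   # Eq. (12)
    return accuracy, precision, recall, f1

print(evaluate(tp=45, tn=40, fp=5, fn=10))               # hypothetical counts
```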
## 3 The proposed model
The proposed model may be divided into three basic stages of development. These phases were completed in the following order:
1. The pre-processing stage, carried out prior to machine learning, included the removal of raw data entries that had missing or duplicate gene symbols. The data were then normalized using a min-max normalization algorithm that re-scales the data to the range (0-1).
2. The gene selection step, which was intended to select the optimal subset of informative genes to be used as identifiers for training the classifier algorithms, is the most significant stage of the proposed model. This stage can be described in two points. First, we used three feature selection approaches (MI, F-classif, and chi-squared) with a step function to select a subset (the determined step function was described in the voting stage). Second, the developed fuzzy gene selection approach employed fuzzy logic in a further analysis to choose fewer and more significant genes. The suggested FGS employed Triangular Membership Function fuzzification and center-of-gravity defuzzification with a step function (shown in the defuzzification phase) to choose informative genes with a strong influence on cancer classification.
3. Classifier stage: the proposed algorithm used a multi-layer perceptron classifier with three hidden layers. The output of the fuzzy gene selection method (the selected genes) was used as the input layer for the MLP (the number of input nodes depends on the number of selected genes), three hidden layers were utilized (300, 200, and 100 nodes), and one output layer provides the classification output (normal or malignant for binary classification, or the class name for multi-class datasets).
Summary: The proposed model comprises fifteen layers in total, as follows: one input layer; three hidden layers for the pre-processing stage (missing values, duplication, and normalization); three parallel hidden layers for the filter feature selection methods; two hidden layers for fuzzification (Triangular Membership Function) and defuzzification (center of gravity); three hidden layers for the MLP classifier; and finally, one output layer. The number of input nodes is flexible and is based on the number of features (number of genes), both when the filter selection methods are employed and when the fuzzy logic is applied.
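To make the data flow of the three stages concrete, the sketch below strings together min-max normalization, the three filter methods with a simple two-out-of-three vote, and an MLP with the stated (300, 200, 100) hidden layers. The voting rule used here is a stand-in for the fuzzification/defuzzification step and the data are synthetic, so this is an illustration of the pipeline shape, not the authors' implementation.

```python
# Simplified FGS-MLP pipeline sketch (illustrative only).
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif, f_classif, chi2
from sklearn.neural_network import MLPClassifier

def select_genes(X, y, k=50):
    # Each filter nominates its top-k genes; keep genes nominated by >= 2 filters.
    votes = np.zeros(X.shape[1], dtype=int)
    for score_fn in (mutual_info_classif, f_classif, chi2):
        votes += SelectKBest(score_fn, k=k).fit(X, y).get_support().astype(int)
    return np.where(votes >= 2)[0]

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(120, 500)))          # non-negative, as required by chi2
y = rng.integers(0, 2, size=120)

X = MinMaxScaler().fit_transform(X)              # stage 1: re-scale to (0, 1)
genes = select_genes(X, y, k=50)                 # stage 2: gene selection
clf = MLPClassifier(hidden_layer_sizes=(300, 200, 100),  # stage 3: MLP classifier
                    max_iter=500, random_state=0).fit(X[:, genes], y)
print(len(genes), "genes selected; training accuracy:", clf.score(X[:, genes], y))
```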
## 4 Results
### Datasets used
Six gene expression datasets of different types of cancer were used for training and testing the proposed model. The datasets comprised both RNA-seq and Microarray data, so the proposed fuzzy gene selection algorithm was evaluated with the two different measurement tools for measuring the expressed level of gene activity. The datasets were obtained from TCGA and GEO (GSE45827, GSE14520, GSE77314, GSE19804, TCGA, and GSE33630). The total number of samples from the six datasets was 3,011 across multi-class and binary-class datasets; more details are given in Table 2. To avoid overfitting in the training stage of the algorithm, the cross-validation method has been used with 5 folds to split the datasets into multiple folds and train the algorithm on different folds. In Table 2, KIRC stands for Kidney renal cell cancer, LUAD stands for Lung adenocarcinoma, LUSC stands for Lung squamous cell carcinoma, and UCEC stands for Uterine corpus endometrial carcinoma.
### Obtained results
This section investigates the use of the six datasets across the five classifier approaches, comparing performance with and without the fuzzy gene selection method and demonstrating the benefits of the suggested methodology. In this paper, we examine how FGS affects the performance of cancer classification models. Full details are presented in Table 3 and Table 4 of the datasets used for training and testing the models, the cancer types, and the achieved accuracy, precision, recall, and F1-score before and after the fuzzy gene selection method was applied.
Overall, the use of the fuzzy gene selection method leads to a reduction in the training time of the models and to the provision of early cancer detection through the choice of informative genes. The classifier models are also less complicated.
As shown in the two bar charts (8 and 9), a fuzzy gene selection strategy significantly
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Dataset & Tools & N-samples & N-Genes & Cancer Types & N-Class & Reference \\ \hline GSE45827 & Microarray & 155 (Basal 41, Her2 30, Luminal B 30, Luminal A 29, CellLine 14, Normal 11 ) & & Breast cancer subtypes & 6 & [11] \\ \hline GSE14520 & Microarray & 445 ( Cancer 227, Normal 218) & 13425 & Liver Cancer & 2 & [11] \\ \hline GSE77314 & RNA-seq & 100 (Cancer 50, Normal 50) & 29087 & Liver Cancer & 2 & [11] \\ \hline GSE19804 & Microarray & 120 (Cancer 60, Normal 60) & 45782 & Lung Cancer & 2 & [11] \\ \hline TCGA & RNA-seq & 2086 ( BRCA 878, KIRC 537, UCEC 269, LUSC 240,LUAD 162) & 972 & BRCA, KIRC, LUAD, UCEC & 5 & [29] \\ \hline GSE33630 & Microarray & 105 (PTC 49, Normal 45, ATC 11) & 23518 & Thyroid & 3 & [11] \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the Datasets Employed for Training and Testing the Proposed Model
improved the performance of the five classifier approaches for classifying lung cancer. In comparison to other classifier models, the findings demonstrate that the MLP model offers predictions that are closer to the ideal observed value. MLP earned an average accuracy score of 97.5 in 5 kfolds. Other classifiers, however, achieved average scores of 96.6, 96.6, 95.8, and 92.5 in 5 kfolds for SVM, KNN, GNB, and DT, respectively. Additionally, only 36 genes out of 45782 genes were employed for training the classifier models, a considerable decrease in the number of genes used.
Although there is only a slight improvement in the accuracy of most of the classifiers used in this study to classify the liver cancer dataset (GSE14520), there is a significant enhancement in the MLP classifier when using the FGS method, as it improved
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Dataset & Class Types & FS method & N-Genes & Classifier & Ac & Pre & Rec & F1 \\ \hline & & & & DT. & 90\% & 90.6\% & 88.9\% & 89.7\% \\ GSE14520 & Binary class & No & 13425 & KNN. & 94\% & 91\% & 97.6\% & 94\% \\ & & & & SVM. & 97\% & 96\% & 97.6\% & 97\% \\ & & & & GNB. & 95\% & 95.6\% & 94\% & 94.8\% \\ & & & & MLP. & 86.7\% & 76.5\% & 76.7\% & 76.5\% \\ \hline & & & & DT. & 96\% & 95\% & 97\% & 96\% \\ GSE14520 & Binary class & FGS & 23 & KNN. & 96.6\% & 96\% & 97\% & 96.6\% \\ & & & & SVM. & 96\% & 95.6\% & 96\% & 96\% \\ & & & & GNB. & 96.6\% & 96\% & 97\% & 96.6\% \\ & & & & MLP. & 96\% & 96\% & 96\% & 96\% \\ \hline & & & & DT. & 87.6\% & 77.6\% & 81\% & 79\% \\ GSE33630 & Multiclass & No & 23516 & KNN. & 91\% & 87.7\% & 86.5\% & 86\% \\ & & & & SVM. & 93\% & 95\% & 92\% & 92\% \\ GSE33630 & Multiclass & No & & GNB. & 90\% & 93.7\% & 89.7\% & 90\% \\ & & & & MLP. & 72\% & 55.6\% & 64.5\% & 58.5\% \\ \hline & & & & DT. & 93\% & 93\% & 93.5\% & 92.5\% \\ GSE33630 & Multiclass & FGS & 76 & KNN. & 94\% & 96\% & 92.8\% & 93\% \\ & & & & SVM. & 94\% & 96\% & 92.8\% & 93\% \\ GSE33630 & Multiclass & FGS & & GNB. & 92\% & 88\% & 99.8\% & 88.8\% \\ & & & & MLP. & 93\% & 95\% & 92\% & 92.5\% \\ \hline & & & & DT. & 91\% & 87\% & 85\% & 85.8\% \\ TCGA & Multiclass & No & 971 & KNN. & 88\% & 83\% & 81.5\% & 81.9\% \\ & & & SVM. & 95\% & 91.6\% & 91.8\% & 91.6\% \\ & & & & GNB. & 94\% & 89.7\% & 92\% & 90.7\% \\ & & & & MLP. & 94\% & 90.8\% & 89.8\% & 90\% \\ \hline & & & & DT. & 91.7\% & 88\% & 87\% & 86.5\% \\ & & & & KNN. & 93.6\% & 89.8\% & 90\% & 89.6\% \\ TCGA & Multiclass & FGS & 25 & SVM. & 94 \% & 90.5\% & 90.7\% & 90.5\% \\ & & & & GNB. & 92\% & 87.7\% & 90.8\% & 89\% \\ & & & & MLP. & 95\% & 92\% & 91.6\% & 91.6\% \\ \hline \end{tabular}
\end{table}
Table 3: Comparing five classifier approaches when applying and omitting FGS
from 86.6 to 96 as an average accuracy score over the 5 folds. More importantly, the FGS method reduced the number of genes used to train the models to only 23 out of 13,425. The two bar charts (10 and 11) compare the accuracy scores over the 5 folds for the five models when FGS is employed and when it is omitted.
Most of the classifier models reached close to 100% when the fuzzy gene selection technique was applied to the liver cancer dataset (GSE77314): the average accuracy score over the 5 folds is 99% for SVM, KNN, and MLP, and 97% for GNB and DT. These remarkable enhancements in accuracy score are shown in the bar charts (12 and 13). Moreover, the FGS method decreased the number of genes from 29,087 to only 12 genes that were
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Dataset & Class Types & FS method & N-Genes & Classifier & Ac & Pre & Rec & F1 \\ \hline & & & & DT. & 89\% & 90\% & 88\% & 90\% \\ & & & & KNN. & 90.8\% & 88\% & 95\% & 91\% \\ GSE19804 & Binary class & No & 45782 & SVM. & 95.8\% & 96.6\% & 95\% & 95.7\% \\ & & & & GNB. & 92.5\% & 95\% & 90\% & 91.9\% \\ & & & & MLP. & 50\% & 20\% & 40\% & 26.6\% \\ \hline & & & & DT. & 92.5\% & 93.6\% & 91.6\% & 92\% \\ GSE19804 & Binary class & FGS & 36 & KNN. & 96.6\% & 96.7\% & 96.6\% & 96.6\% \\ & & & & SVM. & 96.6\% & 97\% & 96.6\% & 96.6\% \\ & & & & GNB. & 95.8\% & 96.7\% & 95\% & 95.7\% \\ & & & & MLP. & 97.5\% & 97\% & 98\% & 97.5\% \\ \hline & & & & DT. & 95\% & 98\% & 91.9\% & 94\% \\ GSE77314 & Binary class & No & 29087 & KNN. & 88.9\% & 82\% & 100\% & 90\% \\ & & & SVM. & 99\% & 98\% & 100\% & 99\% \\ GSE77314 & Binary class & No & & GNB. & 84\% & 100\% & 68\% & 80\% \\ & & & & MLP. & 93\% & 98\% & 88\% & 91\% \\ \hline & & & & DT. & 97\% & 98\% & 96\% & 97\% \\ GSE77314 & Binary class & FGS & 12 & KNN. & 99\% & 98\% & 100\% & 99\% \\ GSE77314 & Binary class & FGS & 12 & SVM. & 99\% & 98\% & 100\% & 99\% \\ & & & & GNB. & 97\% & 98\% & 96\% & 96.8\% \\ & & & & MLP. & 99\% & 98\% & 100\% & 99\% \\ \hline & & & & DT. & 85.8\% & 83\% & 82.6\% & 81.5\% \\ GSE45827 & Multiclass & No & 29873 & KNN. & 85\% & 87.9\% & 87.7\% & 87\% \\ & & & SVM. & 94.8\% & 96\% & 95.8\% & 95.8\% \\ & & & & GNB. & 89\% & 92.7\% & 88.8\% & 89\% \\ & & & & MLP. & 20.6\% & 6\% & 17\% & 7\% \\ \hline & & & & DT & 89.6\% & 90.9\% & 89.6\% & 88.8\% \\ GSE45827 & Multiclass & FGS & 68 & KNN & 95.48\% & 96.5\% & 96\% & 96\% \\ & & & SVM. & 98.7\% & 99\% & 98.8\% & 98.9\% \\ & & & & GNB. & 91.6\% & 94.5\% & 92\% & 92.8\% \\ & & & & MLP. & 98.7\% & 99.3\% & 98.8\% & 98.9\% \\ \hline \end{tabular}
\end{table}
Table 4: Comparing five classifier approaches when applying and omitting FGS
used as identifiers for training the proposed model and the compared models. This leads to an increase in model efficiency, mitigates the time taken for algorithm training, and provides early cancer detection.
There was no significant improvement on the (TCGA) datasets because the number of genes used was not large (971), so applying FGS did not achieve a high level of accuracy improvement. However, it still improved the performance of the model by reducing the number of selected genes that were used as identifiers to train the technique: the FGS method decreased the number of genes from 971 to only 25 genes. Given the slight improvement in accuracy as well as in precision, we conclude
that, even in the worst cases, employing FGS gives better accuracy with fewer genes, which requires less time for training the classifier models and provides early detection of cancer. The two bar charts (14 and 15) illustrate the difference between the accuracy scores over the 5 folds when the classifier models were applied to the datasets without FGS and the accuracy scores over the 5 folds when the classifiers were applied to the genes selected by the FGS method.
A good enhancement is obtained when the fuzzy gene selection method is applied to the thyroid cancer (GSE33630) dataset for the majority of the applied classifier models, and specifically for MLP, whose average accuracy score over the 5 folds is 72% when omitting FGS and 93% when FGS is employed. Additionally, the number of genes was
reduced from 23,516 to 76 genes, which reduced the complexity and training time of the algorithms, improved interpretability, and enabled the early identification of cancer. The two bar graphs (16 and 17) show the differences in accuracy scores for the five distinct classifier models when the FGS approach is used in comparison to when it is not used.
Briefly, the multilayer perceptron achieved the highest average accuracy across the six datasets when fuzzy gene selection was applied, namely 96.5%. MLP also accomplished the highest improvement rate in average accuracy when the proposed fuzzy gene selection was used, namely 27.3%. It can be concluded that the improvement from fuzzy gene selection was largest when the MLP classifier was employed, with the accuracy improving from 69.2% before FGS was applied to 96.5% when FGS was applied.
Based on the results explained previously, a fully automated deep neural network was proposed to analyze gene expression data, as described in Figure 5. The proposed model attempted to achieve three main goals. The first goal was reducing the number of genes used as identifiers for training a classifier method, which in turn reduces the time consumed in training a model; indeed, the proposed model succeeded remarkably in reducing the number of genes, as indicated in Table 3 and Table 4. The second goal was enhancing accuracy and the other evaluation measurement parameters; this aim was also accomplished, with an average accuracy of 96.5%. The third goal was selecting candidate genes as putative targets for biologists to investigate further, in order to determine whether these genes are simply useful for classification or are implicated in the pathogenesis of these diseases.
## 5 Conclusion
In order to improve machine learning performance for cancer classification, this research introduces a novel fuzzy gene selection approach for lowering the dimensionality (reducing the number of features) of gene expression data. It also decreases the amount of time needed for algorithm training. Using the commonly used measurement techniques (Microarray and RNA-seq) for estimating gene expression data, the proposed model was trained and evaluated on six datasets obtained from TCGA and GEO. Three primary objectives were accomplished by this work: to boost the effectiveness of classifier techniques, to help speed up the training process, and to cut down on the number of chosen genes that are utilized as identifiers for the classifier training model. The findings demonstrate that the suggested model (FGS-MLP) has the best accuracy in the majority of the datasets studied, with accuracy levels ranging from 93% at the lowest end to 99% at the top.
The average accuracy rating across six datasets is 96.5%. As a result, the proposed model shows both the capacity to properly classify cancer and time savings during the training phase. By more carefully choosing characteristics (genes) from different cancer kinds, biologists can also benefit from the selected genes in their study and early cancer detection. Furthermore, FGS may also assist in reducing the complexity of a classifier method and avoiding or at least mitigating the overfitting issue that typically arises when high dimensionality datasets are used.
Regardless of the contributions and promising findings of this research, it has some limitations. First, only a limited number of datasets was used; more datasets for different cancer types, especially RNA-seq data, could be employed. Additionally, no single classical ML classifier can consistently achieve the best accuracy on all given datasets. Due to these limitations, future work will make an effort to use more datasets for different cancer types and propose a new classifier that can accurately and consistently classify gene expression data.
## Declarations
* Funding This research was partly funded by the Ministry of Higher Education and Scientific Research in the Republic of Iraq, according to scholarship number (22223) on (06/09/2017) to sponsor the first author to pursue his PhD research.
* Conflict of interest/Competing interests: Not applicable
* Ethics approval Not applicable
* Consent to participate
* Consent for publication
* Availability of data and materials. The original datasets that were employed for cancer classification are freely available at:[https://github.com/mahmoodjasim/OrginalDataset](https://github.com/mahmoodjasim/OrginalDataset). While the final datasets that have been used after applying Fuzzy gene selection method are freely available at:[https://github.com/mahmoodjasim/Datasets-of-selected-genes](https://github.com/mahmoodjasim/Datasets-of-selected-genes)
* Code availability. The codes used in this article are freely available at:[https://github.com/mahmoodjasim/Fuzzy-Gene-Selection-Code](https://github.com/mahmoodjasim/Fuzzy-Gene-Selection-Code)
* Authors' contributions
|
2305.15756 | UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive
Learning Framework for Text-based Recommendation | Prior study has shown that pretrained language models (PLM) can boost the
performance of text-based recommendation. In contrast to previous works that
either use PLM to encode user history as a whole input text, or impose an
additional aggregation network to fuse multi-turn history representations, we
propose a unified local- and global-attention Transformer encoder to better
model two-level contexts of user history. Moreover, conditioned on user history
encoded by Transformer encoders, our framework leverages Transformer decoders
to estimate the language perplexity of candidate text items, which can serve as
a straightforward yet significant contrastive signal for user-item text
matching. Based on this, our framework, UniTRec, unifies the contrastive
objectives of discriminative matching scores and candidate text perplexity to
jointly enhance text-based recommendation. Extensive evaluation shows that
UniTRec delivers SOTA performance on three text-based recommendation tasks.
Code is available at https://github.com/Veason-silverbullet/UniTRec. | Zhiming Mao, Huimin Wang, Yiming Du, Kam-fai Wong | 2023-05-25T06:11:31Z | http://arxiv.org/abs/2305.15756v1 | UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation
###### Abstract
Prior study has shown that pretrained language models (PLM) can boost the performance of text-based recommendation. In contrast to previous works that either use PLM to encode user history as a whole input text, or impose an additional aggregation network to fuse multi-turn history representations, we propose a unified local- and global-attention Transformer encoder to better model two-level contexts of user history. Moreover, conditioned on user history encoded by Transformer encoders, our framework leverages Transformer decoders to estimate the language perplexity of candidate text items, which can serve as a straightforward yet significant contrastive signal for user-item text matching. Based on this, our framework, UniTRec, unifies the contrastive objectives of discriminative matching scores and candidate text perplexity to jointly enhance text-based recommendation. Extensive evaluation shows that UniTRec delivers SOTA performance on three text-based recommendation tasks.1
Footnote 1: Our code is available at [https://github.com/Veason-silverbullet/UniTRec](https://github.com/Veason-silverbullet/UniTRec).
## 1 Introduction
Text-based recommendation Li et al. (2010); Gu et al. (2016); Okura et al. (2017); Malkiel et al. (2020) aims to recommend relevant textual content (e.g., news articles, Twitter posts) to people based on their behaviors as represented in historical log texts. For instance, engagement recommendation Cheng et al. (2022) on social media (e.g., Twitter and Reddit) helps users discover and engage with interested threads by modeling their browsing history.
Pretrained language models Devlin et al. (2019); Brown et al. (2020) have made waves in recent text-based recommendation research Zhang et al. (2021); Qi et al. (2022); Geng et al. (2022). The most common practice is using PLM encoders (BERT family) to learn representations of user history and candidate item texts. Recommendation matching scores are computed over the user and item representations and finally optimized by noise contrastive estimation (NCE) loss Gutmann and Hyvarinen (2010) for ranking multiple candidates.
Unlike encoding single text, using PLM to encode multi-turn texts of user history is nontrivial. Existing works Malkiel et al. (2020); Qi et al. (2022); Geng et al. (2022) concatenate multi-turn history texts as a whole input text, then use one PLM encoder to learn the holistic user representation. This is a standard PLM encoding manner but ignores the relation among history turns, as all word tokens from different history turns are _equally attended_2. In contrast, previous studies point out that learning the relation among user history turns is also beneficial Zeng et al. (2020); Qi et al. (2021). Another approach is using PLM encoders to learn representations from multi-turn history texts, followed by an additional aggregation network to fuse the multi-turn representations Wu et al. (2021); Li et al. (2022). However, the imposed aggregation networks (with newly initialized parameters) weaken the representation power of PLM encoders which are already pretrained on large-scale corpora.
Footnote 2: There is no inductive bias of turn-level and history-level relations introduced to Transformer self-attention computation, where each token plays an equal role.
This work introduces UniTRec, a **Un**ified text-to-text **T**ransformer framework for text-based **R**ecommendation. In the encoder component of UniTRec, we design local- and global-attention to learn user history representations through tailored attention masking, which aims to jointly model word-level and turn-level relations of user history. UniTRec can utilize the full power of PLM encoders because it preserves the intact structure of PLM encoders without newly imposed parameters.
Different from most previous works that predict user-candidate matching scores solely based on the representations learned by Transformer encoders, we argue that conditioned on user representations
learned by Transformer encoders, candidate text perplexity (PPL) estimated by pretrained Transformer decoders is also a straightforward yet significant signal for text-based recommendation. As shown in Figure 1, we hypothesize that the candidate text perplexity estimated by pretrained LM decoders can directly measure the text matching degree between user history and candidate texts. It is because the perplexity estimates the likelihood of candidate texts based on encoder outputs, which naturally indicates the probabilities of candidate texts given the user history. Besides, UniTRec can use the last hidden states of Transformer decoders to directly predict matching scores. Hence, this work unifies the contrastive objectives of discriminative matching scores and candidate text perplexity to jointly enhance text-based recommendation.
The contributions of this work are: (1) We propose local- and global-attention to model two-level relation of user history without additional parameters, which enjoys the full power of PLM encoders. (2) We introduce PLM perplexity to measure user-candidate text matching and unify the objectives of discriminative matching scores and candidate text perplexity to enhance text-based recommendation. (3) Experiments on three text-based recommendation datasets validate the effectiveness of UniTRec.
## 2 Approach
### Unified User-history Modeling
Formally, multi-turn history of a user is represented as \(H=[t_{1},t_{2},...,t_{N}]\), and each turn text \(t_{i}\) contains \(|t_{i}|\) words as \(t_{i}=[x_{i}^{1},x_{i}^{2},...,x_{i}^{|t_{i}|}]\). UniTRec aims to unify learning word- and turn-level context representations in one Transformer encoder.
**Local attention on word-level context.** We first concatenate the multi-turn history texts as the input tokens \(X=[x_{1}^{1},x_{1}^{2},...,x_{1}^{|t_{1}|},...,x_{N}^{1},x_{N}^{2},...,x_{N}^{|t_{N}|}]\). Inspired by Dong et al. (2019), we tailor the attention masking in Transformer self-attention to learn the word-level context of each turn. Specifically, we allow word tokens from the same turn to attend to each other, while tokens from different turns are excluded from self-attention computation:
\[\mathbf{M}_{i,j}=\begin{cases}0,&\text{token $x_{i}$ and $x_{j}$ in the same turn}\\ -\infty,&\text{otherwise}\end{cases} \tag{1}\]
\[\text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^{T}}{\sqrt{d_{k}}}+\mathbf{M})V\]
, where \(Q,K,V\) are self-attention query, key, and value in Vaswani et al. (2017), \(\mathbf{M}\) is the mask matrix to achieve local-attention inside each turn text. The local self-attention blocks consist of \(L_{1}\) layers, by which original PLM encoders can be adapted to learn word-level context representations of turns.
**Global attention on turn-level context.** Over the local self-attention layers, we leverage global self-attention to model the relation among history turns. Specifically, tokens from all turns attend to each other in self-attention computation (by setting the mask matrix \(\mathbf{M}=\mathbf{0}\)). In this way, Transformer encoders can perform global interaction among each token (and turn) to learn turn-level context representations of user history. There are \(L_{2}\) layers in the global self-attention blocks, which can also be inherited from PLM encoders directly.
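The two masking schemes differ only in the additive mask \(\mathbf{M}\) of Eq. (1). Below is a small, hypothetical PyTorch sketch of how such masks could be built (this is not the released UniTRec code; `turn_ids` marks which history turn each token belongs to).

```python
import torch

def local_attention_mask(turn_ids: torch.Tensor) -> torch.Tensor:
    # M[i, j] = 0 if tokens i and j belong to the same turn, -inf otherwise.
    n = turn_ids.size(0)
    same_turn = turn_ids.unsqueeze(0) == turn_ids.unsqueeze(1)
    mask = torch.full((n, n), float("-inf"))
    mask[same_turn] = 0.0
    return mask

def global_attention_mask(turn_ids: torch.Tensor) -> torch.Tensor:
    # All tokens attend to each other: M = 0 everywhere.
    n = turn_ids.size(0)
    return torch.zeros(n, n)

turn_ids = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])   # 8 tokens from 3 history turns
print(local_attention_mask(turn_ids))                # block-diagonal zero blocks
```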
### Joint Contrastive Ranking Objectives
Conditioned on the history representation, we input the candidate text to Transformer decoders to predict how likely it should be recommended. It is worth noting that Transformer decoders can naturally perform effective **cross-attention** interaction between history and candidate hidden states.
#### 2.2.1 Objective on Discriminative Scores
Motivated by Lewis et al. (2020), we feed the last hidden state of decoder output \(h_{T}\) to an MLP score-head which predicts the user-candidate matching score \(S^{d}=\text{ScoreHead}(h_{T})\). The matching score is discriminative, as higher scores indicate higher user-candidate matching probabilities.
Following previous works Li et al. (2022); Qi et al. (2022), we adopt negative sampling with NCE loss to optimize matching score prediction. Given the user history and its ground truth matched candidate \(C_{i}\), UniTRec predicts the matching score
Figure 1: An example of perplexity-based ranking for candidate item texts, conditioned on user history. The illustrated task is text-based news recommendation.
as \(S_{i}^{d+}\). In addition, \(K\) unmatched negative candidates \(\{C_{j}\}_{j=1}^{K}\) are sampled from the candidate set, and their matching scores are \(\{S_{j}^{d-}\}_{j=1}^{K}\). The NCE loss is represented in a contrastive form:
\[\mathcal{L}_{i}^{d}=-\log\frac{\exp(S_{i}^{d+})}{\exp(S_{i}^{d+})+\sum_{j=1}^{ K}\exp(S_{j}^{d-})} \tag{2}\]
#### 2.2.2 Objective on Candidate Text Perplexity
As aforementioned, UniTRec leverages perplexity to rank candidate texts. Since lower perplexity indicates higher user-candidate matching probability, regarding the candidate text \(Y=[y_{1},y_{2},...,y_{T}]\), we define the perplexity-based matching score \(S^{p}\) as its negative perplexity3:
Footnote 3: Note [https://huggingface.co/docs/transformers/perplexity](https://huggingface.co/docs/transformers/perplexity) for LM perplexity calculation. We empirically discard the outer exponential term in the PPL formula, because it already exists in NCE loss Eq. (4) and does not affect the final ranking.
\[S^{p}=-\mathrm{PPL}(Y)=\frac{1}{T}\sum\nolimits_{i=1}^{T}\log p_{\theta}(y_{i }|y_{<i}) \tag{3}\]
, where \(p_{\theta}(\cdot)\) denotes the target probability output from the UniTRec Transformer decoder. Similar to Eq. (2), we optimize the perplexity-based matching score \(S^{p}\) in the NCE loss form. As perplexity empirically varies in a wide range, we introduce a temperature parameter \(\tau\) to balance the joint NCE loss gradients following Radford et al. (2021).
\[\mathcal{L}_{i}^{p}=-\log\frac{\exp(\tau\cdot S_{i}^{p+})}{\exp(\tau\cdot S_{ i}^{p+})+\sum\nolimits_{j=1}^{K}\exp(\tau\cdot S_{j}^{p-})} \tag{4}\]
, where \(\tau\) is learnable and initialized to \(1\). On the training dataset \(\mathcal{D}\), the joint contrastive learning objective is formulated as:
\[\mathcal{L}=\sum\nolimits_{i=1}^{|\mathcal{D}|}\left(\mathcal{L}_{i}^{d}+ \mathcal{L}_{i}^{p}\right) \tag{5}\]
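Putting Eqs. (2), (4) and (5) together, a compact, hypothetical PyTorch formulation of the joint loss could look as follows, where for each instance index 0 holds the positive candidate's score and indices 1..K hold the sampled negatives; this is a sketch, not the released implementation.

```python
import torch
import torch.nn.functional as F

def joint_nce_loss(s_disc, s_ppl, tau):
    # s_disc, s_ppl: (batch, K+1) scores; column 0 is the matched candidate.
    target = torch.zeros(s_disc.size(0), dtype=torch.long)
    loss_d = F.cross_entropy(s_disc, target)          # Eq. (2)
    loss_p = F.cross_entropy(tau * s_ppl, target)     # Eq. (4)
    return loss_d + loss_p                            # Eq. (5), averaged over the batch

tau = torch.ones(1, requires_grad=True)               # learnable temperature, init to 1
loss = joint_nce_loss(torch.randn(4, 5), torch.randn(4, 5), tau)
loss.backward()
```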
### Model Initialization and Inference
As UniTRec is a standard text-to-text Transformer, we initialize the parameters from pretrained BART Lewis et al. (2020). In inference, UniTRec predicts the discriminative and perplexity-based scores for each candidate item, respectively. The two separate scores \(S^{d}\) and \(S^{p}\) are normalized, averaged, and finally ranked as the output. Detailed ranking process is provided in Appendix B.
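A toy illustration of the combination step described above, assuming softmax normalization over the candidate set (the exact normalization is given in Appendix B of the paper):

```python
import torch

def rank_candidates(s_disc: torch.Tensor, s_ppl: torch.Tensor) -> torch.Tensor:
    # Normalize each score vector over the candidates, average, and sort.
    combined = 0.5 * (s_disc.softmax(dim=-1) + s_ppl.softmax(dim=-1))
    return combined.argsort(descending=True)          # indices, best candidate first

print(rank_candidates(torch.tensor([1.2, 0.3, -0.5]),
                      torch.tensor([-2.0, -3.5, -4.0])))
```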
## 3 Experiments
We evaluate UniTRec on three text-based recommendation tasks: 1) _NewsRec_, to recommend news articles to users based on their browsing history. We use the _MIND-small_ dataset Wu et al. (2020) for experiments. 2) _QuoteRec_, to recommend quotations to users based on their conversation history. We use the _Reddit-quotation_ dataset Wang et al. (2021) for experiments. 3) _EngageRec_, to recommend social media posts for users to engage with based on their comment history. We use the dataset released by Zeng et al. (2020) for experiments. Detailed dataset statistics is provided in Appendix A.
**Implementation Details.** The UniTRec encoder and decoder both consist of \(6\) Transformer layers with \(768\)-dimensional hidden states and \(12\) attention heads. We set \(L_{1}=3\) and \(L_{2}=3\). We use AdamW optimizer Loshchilov and Hutter (2019) to train UniTRec with cosine learning rate decay.
**Baselines.** We compare UniTRec with competitive baselines: 1) GRU4Rec Balazs et al. (2016) utilizes a GRU network to learn multi-turn history. 2) SASRec Kang and McAuley (2018) encodes user history with a self-attention based sequential model. 3) BERT4Rec Sun et al. (2019) employs bidirectional self-attention to model user history. 4) RoBERTa-Sim, a simple yet strong baseline
Figure 2: Overview of UniTRec. In training, matching scores \(S^{d}\) and \(S^{p}\) are optimized by the NCE loss, respectively. In inference, \(S^{d}\) and \(S^{p}\) are normalized and combined to derive the final output ranking.
mentioned in Qi et al. (2022), uses the hidden states of [CLS] tokens to measure user-candidate similarity. 5) UNBERT, implemented as in Zhang et al. (2021), concatenates history and candidate texts as the input to BERT and predicts matching scores from the final hidden states of [CLS] tokens.
Note that we do not consider other methods that use non-text inputs (e.g., user profile, text topic labels). For fair comparison, all baseline models use pretrained \(12\)-layer RoBERTa-base Liu et al. (2019) as text encoders to learn embeddings of texts.
### Main Results
Table 1 shows the performance of the experimented models. From the results of _NewsRec_ and _QuoteRec_, we can see that UniTRec outperforms all baseline models by a clear margin. Also, RoBERTa-Sim and UNBERT, which directly use the [CLS] hidden states to represent user history, surpass other baselines that build additional aggregation networks upon the whole RoBERTa outputs. As displayed in the results, _EngageRec_ is the most difficult task. We inspect the dataset and find that the texts on social media contain too much noise (e.g., URLs and emojis), and the user history contains fewer turns. Nevertheless, UniTRec achieves better overall performance than other baseline models, validating its robustness on noisy text inputs and limited user history.
### Ablation Studies and Analyses
We further conduct ablation studies on UniTRec. The experiment results are reported in Table 2.
**Initialization of UniTRec.** We train UniTRec from scratch without initialization from pretrained BART (refer to w/o BART Init). The recommendation performance significantly drops in all three tasks, which indicates that acquiring effective text understanding ability from PLM is a necessary key to UniTRec performance.
**Local and global attention.** We investigate the function of two-level attention modules of the UniTRec history encoder. Concretely, we set \(L_{1}=0\) in w/o Local-Att and \(L_{2}=0\) in w/o Global-Att, where \(L_{1}+L_{2}=6\). We can observe that removing local and global attention from the original UniTRec history encoder both lead to suboptimal performance, while the performance drop is more significant in w/o Global-Att. The results justify the effectiveness of jointly modeling two-level history contexts through adapted Transformer attention masking without additional parameters.
**Discriminative and perplexity-based objectives.** We probe into training UniTRec with standalone discriminative (Disc-Score only) and perplexity-based (PPL-Score only) contrastive objectives, respectively. We can see that the discriminative objective yields better performance than the perplexity-based objective. Besides, the model performance on both standalone objectives declines compared to the original joint objective. The results indicate that the discriminative and perplexity-based matching scores are complementary and can jointly provide more accurate signals of user history and candidate text matching for text-based recommendation.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{_NewsRec_} & \multicolumn{3}{c|}{_QuoteRec_} & \multicolumn{3}{c}{_EngageRec_} \\
**Model** & MRR & NDCG@5/10 & HR@5/10 & MRR & NDCG@5/10 & HR@5/10 & MRR & NDCG@5/10 & HR@5/10 \\ \hline GRU4Rec & 32.91 & 36.20d/42.53 & 50.33/68.35 & 34.08 & 34.65/37.93 & 44.45/54.63 & 2.12 & 1.04/1.51 & 1.27/2.65 \\ SASRec & 32.60 & 36.03/42.37 & 50.63/68.64 & 33.63 & 34.30/37.49 & 44.32/54.20 & 2.40 & 1.49/1.95 & 2.16/3.47 \\ BERT4Rec & 32.87 & 36.18/42.40 & 50.21/67.97 & 33.59 & 34.26/37.27 & 43.76/53.05 & 3.04 & 1.98/3.23 & 2.81/6.67 \\ RoBERTa-Sim & 32.96 & 36.47/42.81 & 51.06/69.08 & 37.13 & 37.96/41.18 & 48.14/58.06 & 3.74 & 2.66/3.75 & 4.42/**7.70** \\ UNBERT & 33.09 & 36.53/42.84 & 50.87/68.82 & 39.75 & 40.74/43.69 & 50.90/60.04 & 2.83 & 1.96/2.67 & 3.11/5.24 \\ \hline UniTRec & **33.76** & **37.63/43.74** & **52.61/69.89** & **41.24** & **42.38/45.31** & **52.87/61.88** & **4.06** & **3.23/4.29** & **4.58/7.68** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experiment results on three text-based recommendation tasks. MRR denotes mean reciprocal rank, NDCG denotes normalized discounted cumulative gain, and HR denotes hit ratio (presented in percentage). The overall performance of UniTRec is better than other baseline models with \(p\)-value \(<0.05\), validated by unpaired t-test.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{_NewsRec_} & \multicolumn{3}{c|}{_QuoteRec_} & \multicolumn{3}{c}{_EngageRec_} \\
**Model** & MRR & NDCG@5/10 & HR@5/10 & MRR & NDCG@5/10 & HR@5/10 & MRR & NDCG@5/10 & HR@5/10 \\ \hline UniTRec & 33.76 & 37.63/43.74 & 52.61/69.89 & 41.24 & 42.38/45.31 & 52.87/61.88 & 4.06 & 3.23/4.29 & 4.58/7.68 \\ w/o BART Init & 30.31 & 33.32/39.69 & 47.55/65.78 & 19.02 & 17.66/20.80 & 22.45/32.16 & 2.24 & 0.86/1.61 & 1.27/3.62 \\ \hline w/o Local-Att & 33.34 & 37.22/43.32 & 52.28/69.54 & 40.44 & 41.63/44.56 & 52.09/61.15 & 3.92 & 3.19/4.15 & 4.38/7.36 \\ w/o Global-Att & 33.22 & 37.06/43.17 & 52.14/69.47 & 40.25 & 41.47/44.26 & 52.07/60.76 & 3.64 & 2.78/3.59 & 3.89/6.35 \\ \hline Disc-Score only & 33.07 & 36.76/43.03 & 51.68/69.46 & 40.59 & 41.81/44.65 & 52.39/61.14 & 3.82 & 2.99/3.60 & 4.49/6.85 \\ PPL-Score only & 32.83 & 36.39/42.59 & 51.05/68.67 & 40.31 & 41.43/44.47 & 52.13/61.20 & 3.29 & 2.39/3.03 & 3.86/5.66 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Recommendation performance of ablation model variants.
## 4 Conclusion
We present a unified Transformer UniTRec for text-based recommendation. UniTRec learns two-level contexts of multi-turn user history and jointly exploits discriminative matching scores and candidate text perplexity as matching objectives. Empirical experiments on three text-based recommendation datasets corroborate the effectiveness of UniTRec.
## 5 Limitations
Our model only focuses on utilizing text information for recommendation, which is a key limitation of this work. In real-world settings, recommender systems are usually required to handle heterogeneous information inputs. UniTRec is a pure text-based recommender modeling user history and candidate texts as inputs. However, incorporating additional side information (e.g., user profile, text topic labels, and dwell time of user behaviors) could further improve the recommendation performance and alleviate the _cold start_ problem. Furthermore, UniTRec only models two-level relations of user behavior history. Nonetheless, incorporating more user behavior information, such as implicit and negative feedback, could further enhance the recommendation performance.
## Acknowledgements
We appreciate constructive comments from anonymous reviewers. The research described in this paper is partially supported by CUHK under Project No. 3230366.
|
2307.12898 | As Time Goes By: Adding a Temporal Dimension Towards Resolving
Delegations in Liquid Democracy | In recent years, the study of various models and questions related to Liquid
Democracy has been of growing interest among the community of Computational
Social Choice. A concern that has been raised, is that current academic
literature focuses solely on static inputs, concealing a key characteristic of
Liquid Democracy: the right for a voter to change her mind as time goes by,
regarding her options of whether to vote herself or delegate her vote to other
participants, till the final voting deadline. In real life, a period of
extended deliberation preceding the election-day motivates voters to adapt
their behaviour over time, either based on observations of the remaining
electorate or on information acquired for the topic at hand. By adding a
temporal dimension to Liquid Democracy, such adaptations can increase the
number of possible delegation paths and reduce the loss of votes due to
delegation cycles or delegating paths towards abstaining agents, ultimately
enhancing participation. Our work takes a first step to integrate a time
horizon into decision-making problems in Liquid Democracy systems. Our
approach, via a computational complexity analysis, exploits concepts and tools
from temporal graph theory which turn out to be convenient for our framework. | Evangelos Markakis, Georgios Papasotiropoulos | 2023-07-24T15:46:45Z | http://arxiv.org/abs/2307.12898v1 | # As Time Goes By: Adding a Temporal Dimension Towards Resolving Delegations in Liquid Democracy
###### Abstract
In recent years, the study of various models and questions related to Liquid Democracy has been of growing interest among the community of Computational Social Choice. A concern that has been raised, is that current academic literature focuses solely on static inputs, concealing a key characteristic of Liquid Democracy: the right for a voter to change her mind as time goes by, regarding her options of whether to vote herself or delegate her vote to other participants, till the final voting deadline. In real life, a period of extended deliberation preceding the election-day motivates voters to adapt their behaviour over time, either based on observations of the remaining electorate or on information acquired for the topic at hand. By adding a temporal dimension to Liquid Democracy, such adaptations can increase the number of possible delegation paths and reduce the loss of votes due to delegation cycles or delegating paths towards abstaining agents, ultimately enhancing participation. Our work takes a first step to integrate a time horizon into decision-making problems in Liquid Democracy systems. Our approach, via a computational complexity analysis, exploits concepts and tools from temporal graph theory which turn out to be convenient for our framework.
## 1 Introduction
Liquid Democracy (LD) is a novel voting framework that aspires to revolutionize the typical voter's perception of civic engagement and ultimately elevate both the quantity and quality of community involvement. At its core, LD is predicated on empowering voters to determine their mode of participation. This can be achieved by either casting a vote directly, as in direct democracy, or by entrusting a proxy to act on their behalf, as in representative democracy. Notably, delegations are transitive, meaning that a delegate's vote can be delegated afresh, and at the end of the day a voter that has decided to cast a ballot, votes with a weight dependent on the number of agents that she represents, herself included. As a result of its flexibility, LD is alleged to reconcile the appeal of direct democracy with the practicality of representative democracy, yielding the best of both worlds. The origin of the "liquid" metaphor remains a matter of debate up to date, with one view being that it stems from the ability of votes to flow along delegation paths, while an alternative view argues that it arises from the ability of voters to revoke delegation approvals and continuously adjust their choices. As we will justify shortly after, current work tends to forcefully support the second opinion.
According to [6] there is a number of features that suffice to establish a framework as a Liquid Democracy one. Most of them are related to the transitivity property and to the options given to the voters about casting a ballot or choosing representatives. These are more or less taken into account in all relevant works that come from the field of Computational Social Choice. A further aspect, called _Instant Recall_, encompasses the ability of voters to withdraw their delegation at any time. As a matter of fact, in practice, elections allow for extended (sometimes structured) periods of deliberation, until the votes are finalized, and Liquid Democracy could serve as a means of debate empowerment. A revocation of delegation may occur due to disagreements with a delegate's post-delegation choices, doubts on the integrity of one's behavior, or an agent's further understanding of the issue under consideration. A characteristic that is being shared by all the works in the AI community is that they all seem to ignore the Instant Recall feature, and examine isolated static delegation profiles. This oversight was identified and criticized by the team behind the LiquidFeedback platform [2], the most influential and large scale experiment of LD. In [3], inter alia, they claim the following:
_In a governance system with a continuous stream of decisions, we expect that participants observe the actions (and even non-actions) of other participants, in particular the activities of their (direct and indirect) delegates as well as the activities of other participants, who they consider as delegates. Based on their observations, we expect participants to adapt their own behaviour in respect to setting, changing, and removing delegations and their own participation. Based on the track records of the participants, a network of trust or dynamic scheme of representation proves itself to be a responsible power structure. [...] We believe that the effects that occur through observation and adaptation over time are an essential prerequisite for a comprehensive understanding of liquid democracy, (which) requires a broader view, namely adding a temporal dimension to delegation models._
Leaving aside the lack of temporal aspects in the literature, there are also additional concerns to address in traditional LD models. A crucial disadvantage is that we may experience delegation cycles or delegation paths towards abstainers, which result to inevitably lost votes. A way that has been suggested in theory [24, 32, 10] and has been implemented in practice [26], in order to mitigate such issues is to allow each delegating agent to specify an entire set of agents she approves as potential representatives together with a ranking among them that indicates her preferences. Nevertheless, even with these efforts, the discussed issues may still arise at the election-day. And here is where the temporal dimension can come into play! The main focus of our work is in proposing a framework that leverages temporal information to address the identified concerns, while also providing a valuable tool for deliberation.
In particular, our aim is to study the existence of efficient _delegation rules_ that fulfill certain desirable axioms. A delegation rule is a centralized algorithm that takes as input the available information of the deliberation phase and prescribes for each non-abstaining participant a delegation path to a voter who casts a ballot. The main properties that we would like our rules to satisfy are described below.
**Time-Conscious Delegation Rules.** We view the temporal dimension as an important feature in the design of delegation rules. To demonstrate this, consider an election where the information at the very end of the deliberation phase can only produce a path to a cycle or to an abstainer, for some voter. Our main insight is that one way to resolve such scenarios is to look into approvals expressed during the previous time steps of the deliberation phase. Our work operates under the premise that if a voter \(v\) decides to trust another voter \(u\), at a given moment in time, say \(t\), then \(v\) accepts any decisions made by \(u\) at time \(t\) or earlier (up to a certain number of time steps prior to \(t\), which could be given as a parameter by voter \(v\)). This is because the decision to approve a delegation to \(u\) is based on what \(v\) observes in the previous time steps and up until time \(t\). However, voter \(v\) still retains the right to revoke her approval to \(u\) at a later point in time. If this occurs, then voter \(u\) is permitted to represent \(v\) only if she chooses an action that she had declared at or before time \(t\). We refer to the rules that produce delegation paths respecting in such a way the ordering of the time-instants at which a delegation is made available, as _time-conscious_ (for a formal definition refer to Section 2). In our model, time-conscious delegation rules can guarantee the absolute approval of a delegating voter to her ultimate representative.
**Confluent Delegation Rules.** In models incorporating multiple, ranked, delegations, as the one under consideration, an esteemed property is _confidence_, which posits that each voter should have at most one other immediate representative in the final outcome [10]. This desirable attribute guarantees that every voter is instructed to take a single action among the three options: vote, abstain, or delegate her own and all received ballots to a specific voter. On the contrary, a non-confluent rule may prompt a voter to delegate different ballots received from different voters to different representatives (and even delegate her own ballot to yet another representative). Such suggestions can be challenging for a voter to follow. In addition to its intuitive nature, confluence is also significant for maintaining transparency and preserving the high level of accountability inherent in classical Liquid Democracy, as highlighted in [24].
### Contribution
Conceptually, we view as our main contribution that we explicitly add a temporal dimension to (a generalization of) an existing framework. Hence this is putting a stake in the ground in bridging a significant research gap identified by practitioners. We then study the compatibility of computational tractability with desirable properties of delegation rules, with the objective of reducing the loss of votes resulting from delegation cycles or paths towards abstaining agents, and ultimately enhance the electorate's participation. Namely we are interested in polynomially computable rules that maximize the total utility of the electorate and at the same are time-conscious and confluent. Unfortunately, despite the natural appeal of these requirements, it turns out that this is too much to ask for: our results demonstrate that such a delegation rule does not exist, unless P=NP, even for simple variants of our model. Therefore, the best one could hope for is to design efficient procedures that sacrifice one of the considered axioms. Alternatively, one can also study more restricted models in which all the desired properties can be simultaneously satisfied. Indeed, we offer positive results in both directions, circumventing the NP-hardness results in several cases. Finally, we believe that our work is making a pioneering contribution to the Computational Social Choice literature, by incorporating concepts and techniques from temporal graph theory, which is a novel approach in the field.
### Related Work
We discuss first some works related to Liquid Democracy, in a way that "begins at the beginning", following the suggestion made to the White Rabbit in "Alice's Adventures in Wonderland". The author of that novel, Charles Dodgson (also known by his pen name Lewis Carroll), as early as 1884 [12], considers an idea that was meant to be of vital importance for what we call Liquid Democracy today. According to [1], it seems that he is the one that, before all else, discussed the aspect of giving the agents the power to transfer to others their acquired votes. On the other hand, it was Gordon Tullock [43] who initiated the discussion about models that aspire to occupy the ground between direct and representative democracy, by suggesting a model that allows voters to decide whether they are interested in casting a ballot or delegate to another voter. Shortly after, unlike Tullock's suggestion, James Miller [36] brought forward the idea that voters should not only choose their mode of participation but should also enjoy the ability to retract a previously given delegation on a day-to-day basis. As far as the nomenclature of LD is concerned, its precise origins are unknown. The best one can refer to is its seemingly first [37] recorded appearance (in an obsolete wiki, preserved only on the Internet Archive [41, 40]), in which a user nicknamed "sayke" discussed a voting system that lies between direct and representative democracy and aims at increasing civic engagement. However, none of these sources discussed explicitly the aspect of transitivity of votes, as Dodgson did. Reinventions, amendments and compositions of these ideas started to appear in the early 00's and we refer to [23] for an overview of them. The earliest published works that incorporate the aspects of LD, (roughly) as we consider it today, are [22, 25, 8, 15]. Nowadays, Liquid Democracy is one of the most active research areas in Computational Social Choice [9, 38].
As already mentioned, the primary motivation of our work is due to [3] and the framework we suggest is a generalization of the model in [10]. Furthermore, our optimization objective coincides with the one in [21, 33]. To our knowledge, our work is the first that incorporates temporal aspects in LD models. Many different models and questions related to Liquid Democracy have been examined. Indicatively, recently published works explored aspects including the study of voting power concentration through the lens of parameterized complexity [18], the efficiency of altering delegations to achieve consistency in participatory budgeting settings [28], the application of power indices and criticality analysis to voters [16], and the evaluation of LD's epistemic performance [39].
## 2 Temporal Liquid Democracy Elections
We consider elections in which a set \(V\) of \(n\) voters should reach a decision on a certain issue. Apart from voting themselves, the participants are given two additional options: abstaining or delegating to other voters. The voters also have some time available to consider what to do (e.g. to get informed on the issue at hand or to observe other voters' choices) and they are allowed to change their mind, perhaps multiple times, until the actual election-day. We say that such an election is a _Temporal Liquid Democracy Election_, a t-LD election in short, if it consists of two phases:
* A _deliberation phase_ of \(L\) rounds, where at every time-instant \(t\in[L]\), each voter \(v\), has to choose whether to personally vote or not. If she decides to cast a ballot, we consider this as her final decision that will not change in the remaining time-steps. As long as a voter \(v\) has not decided to cast a ballot herself till time \(t\), she is asked to specify the following:
* A set of approved voters \(S^{t}_{v}\subseteq V\setminus\{v\}\) (which may be the empty set, if \(v\) wants to abstain at round \(t\)), indicating the voters that she trusts to cast a ballot on her behalf, possibly with different levels of confidence. These voters may in turn also be willing to delegate their vote to other participants as well.
* A (weak) preference ranking over the voters in \(S^{t}_{v}\), which induces a partition of \(S^{t}_{v}\) into preference groups, according to \(v\). This is accompanied by a positive integer score \(sc^{t}_{v}(i)\), indicating the utility or happiness level that \(v\) experiences if a voter from her \(i\)-th most preferred group at time \(t\), will ultimately be selected as her immediate representative.
* A non-negative integer-valued trust-horizon parameter \(\delta^{t}_{v}\), with \(\delta^{t}_{v}\leq t-1\), by which, she indicates approval for the views held by any voter in \(S^{t}_{v}\) up to \(\delta^{t}_{v}\) time-steps prior to time-instant \(t\).
* A _casting phase_, in which all the voters that, during the deliberation phase, expressed willingness to vote (and only these), eventually cast a ballot on the issue under consideration. Every voter who did not previously declare an intention to vote is assigned a representative; a pre-specified delegation rule, which takes into account the entire deliberation phase, is used to make these assignments. The winner(s) of the election are elected using a weighted voting rule, where the weight of a voter is determined by the number of voters that she represents.
For an illustrative exposition of our model we refer to the example provided at the end of the section. We now elaborate on the input that is required from the voters during the deliberation phase. The preference ranking allows voters to express different levels of confidence towards other participants who could potentially represent them. Also, the scoring function allows the model to capture the cases where a voter is willing to either increase her scores over rounds, due to becoming gradually more informed about another voter's opinions, or, in the opposite direction, decrease her scores due to becoming more hesitant about who represents her. Realistically, we expect voters to have just a few preference groups, and hence they do not need to submit too many numerical parameters. Furthermore, the intuition behind the trust-horizon parameters is that the decision of a voter \(v\) to approve a delegation to \(u\) at time \(t\) can be based only on the behavior of \(u\) in the previous rounds, up until time \(t\) of the deliberation phase. Since a voter \(v\) may not agree with \(u\) in all previous time steps, the parameter \(\delta^{t}_{v}\) specifies that \(v\) agrees with the choices made by \(u\) at any preceding time that is no more than \(\delta^{t}_{v}\) time-instants before \(t\). A simple case to have in mind is when \(\delta^{t}_{v}=t-1\) (i.e., \(v\) trusts whatever \(u\) has chosen at any time in the past). If this property holds for every voter \(v\) and for any \(t\in[L]\), we say that the election profile is of _retrospective trust_. Finally, note that under the suggested model, the voters' declared sets, their preference orders, their scores and trust-horizon parameters may change arbitrarily between subsequent time-instants.
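To make the shape of this per-round input concrete, the following minimal Python sketch (not part of the paper; all names and types are illustrative assumptions) bundles the approved set \(S^{t}_{v}\), the weak ranking with its scores, and the trust-horizon parameter \(\delta^{t}_{v}\) into one record.

```python
# A minimal sketch (illustrative assumptions only) of what a not-yet-casting voter v
# submits at a time-instant t of the deliberation phase.
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class RoundDeclaration:
    approved: Set[str] = field(default_factory=set)             # S_v^t; empty means abstain at t
    pref_groups: List[Set[str]] = field(default_factory=list)   # weak ranking: partition of `approved`
    scores: Dict[int, int] = field(default_factory=dict)        # sc_v^t(i) for the i-th preference group
    trust_horizon: int = 0                                       # delta_v^t, with delta_v^t <= t - 1


# Example: at t = 3, voter v approves u and w, preferring u (score 2) over w (score 1),
# and trusts their declarations made up to 2 rounds earlier.
decl = RoundDeclaration(approved={"u", "w"},
                        pref_groups=[{"u"}, {"w"}],
                        scores={1: 2, 2: 1},
                        trust_horizon=2)
```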
**Variants of the Model and Practical Considerations.** Customization is key to the proposed decision-making model, which offers a range of possibilities to enhance its practicality and reproducibility. For instance, it would be more natural to allow a voter to specify a different trust-horizon parameter for each of her approved representatives. Notably, our findings are not impacted by the assumption of a uniform trust-horizon for approved representatives. Furthermore, we have assumed that once a voter expresses the desire to cast a ballot, she will no longer change her opinion until the election-day. This assumption is justified by the fact that once a voter has committed to becoming more informed on the topic, participating in further deliberation is deemed redundant. Nonetheless, the assumption is made only for technical convenience and could be dropped. Moreover, although our aim is to examine the model in its fullest generality, we stress that in potential real-life implementations, the voters may not need to submit all the information that we have described in every round. In particular, the scoring function could be automatically generated by the system, given the (weak) ranking on \(S^{t}_{v}\) submitted by each voter. E.g., one could use the Borda-scoring function (as in [10]), under which, at any time-instant, a voter assigns a score of 1 to her last preference group, a score of 2 to her second to last group, etc., or any other appropriate method. We highlight that our model is a strict generalization of the model considered in [10], not only because of the temporal dimension but also because of the more general scoring functions that we allow. The trust-horizon parameter could also be pre-specified, so that the voters do not need to submit any information regarding it, either by assuming that the trust of every voter goes arbitrarily back in time or for a fixed number of steps prior to each approval. Finally, if voters have the same preferences for consecutive time-steps, they would not need to re-specify them.
**Delegation Rules.** In the elections we consider, we essentially have three types of participants. We refer to the voters that declared an intention to vote as _casting voters_, and these will be the only voters who will indeed finally cast a ballot on the election-day. Furthermore, the non-casting voters that will abstain from the election are precisely those who do not approve anyone at the final time-step, e.g. a voter \(v\) such that \(S^{L}_{v}=\emptyset\). We refer to such voters as _abstaining voters_. Finally, the rest of the voters will be called _delegating voters_. As evident from Section 1, and as will be further illustrated by the example at the end of this section, the temporal dimension could be considered valuable when the examination of the isolated instance at \(t=L\) cannot produce a feasible solution (i.e. delegation cycles or paths towards abstainers are unavoidable) or its feasible solutions are not good enough. A delegation rule is a mechanism that 'resolves delegations' and addresses such problematic cases, or in other words, a procedure that ultimately assigns to each delegating voter a casting voter, possibly via following some path of trust relationships. More formally, a delegation rule is a function that takes as input the voters' preferences, as reported during the entire deliberation phase of a t-LD election, and outputs a path to a casting voter for every delegating voter. A valid delegation rule should ask casting voters to vote, abstaining voters to abstain, and should not suggest any delegation path towards an abstainer or introduce delegation cycles.
**Temporal Graphs.** The driving force in our work is to model and analyze t-LD elections using principles from temporal graph theory. We start with a basic overview of the concept and the terminology of temporal graphs, and following this, we introduce some notation that we will use in the remainder. At a high level, a temporal graph is nothing more than a simple, called _static_, graph to which a temporal dimension is added, i.e., a graph that may change over time. Frequently, a temporal (multi)graph is expressed as a time-based sequence of static graphs. For convenience, we will use an equivalent definition, under which a (directed) temporal (multi)graph \(G(V,E,\tau,L)\) is determined by a set of vertices \(V\), a (multi)set of directed, temporal edges \(E\), a discrete time-labelling function \(\tau\) that maps every edge of \(E\) to a subinterval of \([1,L]\), and a lifespan \(L\in\mathbb{N}\). If the edges of \(E\) are weighted according to a function \(w:E\rightarrow\mathbb{N}\), then we say that \(G\) is weighted. The interval \(\tau(e)\), for an edge \(e\), indicates that \(e\) is available at the time-instants that belong to \(\tau(e)\). We say that each edge \(e\) is assigned an interval labeling \(\tau(e)=[s_{e},t_{e}]\) (possibly, \(s_{e}=t_{e}\) if the edge is available for a single time-instant)
and by allowing \(G\) to be a multigraph1 it is permitted for an edge to be present in multiple (disjoint) time-intervals. Unless otherwise stated, henceforth, by the term _graph_, we denote a weighted directed temporal multigraph. For more details on temporal graphs we refer to a relevant survey [35], as well as to the fundamental and influential works [30, 31, 34]. The _static variant_ of a temporal graph is the static graph that emerges if we ignore the time-labels of its edges. We call a graph a _temporal directed tree rooted at vertex \(r\)_ if its static variant contains a directed path towards \(r\) from every other vertex and its undirected variant is a tree. A crucial concept for our work, in the context of temporal graphs, is the notion of time-conscious paths, which satisfy a monotonicity property regarding the temporal dimension of their edges. Consider a temporal graph \(G(V,E,\tau,L)\), coupled with a tuple \(\delta_{v}=(\delta_{v}^{t})_{t\in[L]}\in\mathbb{N}^{[L]}\) for every vertex \(v\) of \(V\). Let also \(\delta=(\delta_{v})_{v\in V}\). We say that a path in \(G\) from \(v_{1}\) to \(v_{k+1}\) is \(\delta\)_-time-conscious_ if it can be expressed as an alternating sequence of vertices and temporal edges \((v_{i},(e_{i},t_{i}),v_{i+1})_{i\in[k]}\), such that for every \(i\in[k]\) it holds that \(e_{i}=(v_{i},v_{i+1})\in E\), \(t_{i}\in\tau(e_{i})\), and for every \(i\in[k-1]\) it holds that \(t_{i}\geq t_{i+1}\geq t_{i}-\delta_{v_{i}}^{t_{i}}\). Similar notions have been applied to various domains including convenient flight connections detection [44], information diffusion [27] and infectious disease control through contact tracing [5]. In the remainder of Section 2, it will become clearer how this notion fits in our framework. We also call a temporal directed tree rooted at a vertex \(r\) \(\delta\)-time-conscious if all its paths towards \(r\) are \(\delta\)-time-conscious. Finally, we conclude by noting that illustrative examples of some of the terminology discussed here can be found in Appendix B.
Footnote 1: We are using multigraphs instead of (simple) graphs merely for technical convenience, and we note that, alternatively, one could work with graphs by letting \(\tau\) be a function that maps edges to a set of subintervals of \([1,L]\).
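As a quick illustration of the definition above, the following Python sketch (an assumption-laden toy, not code from the paper) checks whether a given sequence of timed edges forms a \(\delta\)-time-conscious path; a path is encoded as a list of (tail, head, time) triples and the trust-horizon parameters are looked up as \(\delta_{v}^{t}\) values.

```python
# A minimal sketch. Assumptions: `path` is a list of (tail, head, t) edge occurrences and
# `delta` maps (voter, t) to the trust-horizon parameter delta_v^t from the model.
def is_delta_time_conscious(path, delta):
    for i in range(len(path) - 1):
        tail_i, head_i, t_i = path[i]
        tail_next, head_next, t_next = path[i + 1]
        if head_i != tail_next:                     # consecutive edges must chain v_i -> v_{i+1} -> ...
            return False
        horizon = delta.get((tail_i, t_i), 0)       # delta_{v_i}^{t_i}
        if not (t_i >= t_next >= t_i - horizon):    # t_i >= t_{i+1} >= t_i - delta_{v_i}^{t_i}
            return False
    return True


# Example: (v1 -> v2 at t=4) followed by (v2 -> v3 at t=3) is feasible if delta_{v1}^{4} >= 1.
print(is_delta_time_conscious([("v1", "v2", 4), ("v2", "v3", 3)], {("v1", 4): 1}))  # True
```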
**Modelling t-LD Elections as Temporal Graphs.** The deliberation phase of a t-LD election can be modeled as a weighted directed temporal multigraph \(G(V\cup\{\triangledown\},E,\tau,L,w,\delta)\) that is formed by
* a vertex in \(V\) for every voter of the electorate, as well as a special vertex \(\triangledown\), connected only with the casting voters,
* a multiset \(E\) of temporal edges that represent the approvals for delegation or ballot casting per round via a function \(\tau\) that assigns a time-label to every edge,
* a lifespan \(L\) that represents the duration of the deliberation phase,
* a function \(w\) that assigns a weight to every edge \((v,u)\) of \(E\), according to \(sc_{v}^{t}\), provided that \(t\in\tau((v,u))\),
* a vector \(\delta\) that, for every voter \(v\), contains a tuple \((\delta_{v}^{t})_{t\in[L]}\), as declared by \(v\) during the deliberation phase. For convenience, we allow \(\delta\) to have some empty entries, corresponding to casting voters or to time steps during which the corresponding voter abstained.
We note that if a casting voter had indicated preferences for potential representatives before deciding to cast a ballot, these preferences, and their corresponding edges, can be safely disregarded. More precisely, only the following two types of edges may exist: directed edges of the form \(e=(v,u)\) for \(v\in V\setminus C\) and \(u\in V\) with \(\tau(e)=[s_{e},t_{e}]\), indicating that at any time-instant \(t\in[s_{e},t_{e}]\), voter \(u\) belongs to \(S_{v}^{t}\), and directed edges \(e=(v,\triangledown)\) for \(v\in C\) with \(\tau(e)=[s_{e},L]\), indicating that from time \(s_{e}\) and onwards, voter \(v\) agrees to cast a ballot. Furthermore, we will proceed by assuming that the set of voters \(V\) is implicitly partitioned into three sets, as has been explained before: the set of casting voters \(C\), the set of abstaining voters \(A\) and the set of delegating voters \(D\). More formally, \(C=\{v\in V:(v,\triangledown)\in E\}\), \(A=\{v\in V\setminus C:L\notin\tau((v,u)),\) for any \((v,u)\in E\}\) and \(D=V\setminus(C\cup A)\). The weight function \(w\) indicates the cardinal preferences of a voter, as implied by the scores that accompany her preference rankings during the deliberation phase. Additionally, for convenience, we set to zero the weights of edges \((v,u)\) such that \(v\) corresponds to a casting or an abstaining voter. This choice can be justified by the upcoming discussion of the optimization objective in the "Electorate's Satisfaction" paragraph. Given a graph \(G(V\cup\{\triangledown\},E,\tau,L,w,\delta)\) that models a t-LD election, a delegation rule returns, for every delegating voter \(v\), a weighted directed temporal path from \(v\) to \(\triangledown\). Such a path infers an assignment of every delegating voter to a casting one. A delegation rule is called _efficient_ if its output can be computed in polynomial time in the input size.
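For illustration, a small Python sketch of the partition into \(C\), \(A\) and \(D\) described above follows; the edge encoding (triples of tail, head and availability interval) and the name of the special sink vertex are assumptions made purely for the example.

```python
# A minimal sketch (not from the paper) of the partition into casting (C), abstaining (A)
# and delegating (D) voters. Assumptions: `edges` holds (tail, head, (s_e, t_e)) triples,
# CAST plays the role of the special sink vertex, and L is the lifespan.
CAST = "BALLOT_BOX"

def partition_voters(voters, edges, L):
    casting = {v for (v, u, _interval) in edges if u == CAST}
    # non-casting voters with at least one approval still present at the final time-step L
    active_at_final = {v for (v, u, (s, t)) in edges
                       if u != CAST and v not in casting and s <= L <= t}
    abstaining = set(voters) - casting - active_at_final
    delegating = set(voters) - casting - abstaining
    return casting, abstaining, delegating
```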
**Axiomatic Principles.** We discuss here the axioms that constitute the main focus of our work, namely time-consciousness and confluence. We have extensively discussed these two properties in the analogous paragraphs of Section 1 and we now formally define them using the framework of temporal graphs. Given a graph \(G(V\cup\{\triangledown\},E,\tau,L,w,\delta)\) that models a t-LD election, a delegation rule is
1. _time-conscious_, if for every delegating voter \(v\), the delegation path output for \(v\) is a \(\delta\)-time-conscious directed temporal path,
2. _confluent_, if the union of the paths output for all the delegating voters is a directed temporal tree, rooted at vertex \(\triangledown\), that spans the vertices of \(V\setminus A\).
The definition of time-consciousness guarantees that all paths suggested by the delegation rule satisfy the constraints imposed by the voters, regarding their trust-horizon parameters. Hence, for any edge \((v,u)\) in an output path, \(u\) must choose an action (edge) that she had declared at a time that was approved by \(v\). The definition of confluence guarantees that for every delegating voter \(v\), there is a unique path to a casting voter, that is intended to serve both \(v\) and all voters who delegated to \(v\). In Appendix B we provide examples of time-conscious and confluent solutions, for further illustration.
**Electorate's Satisfaction.** We make the usual assumptions for Liquid Democracy models that (a) voters completely trust their representatives and (b) trust between voters is transitive. This implies that if voter \(v\) accepts voter \(u\) as her potential representative, she concurs with any subsequent choice made by \(u\) and also extends trust to any voter \(w\) who may be entrusted by \(u\). Hence, the utility experienced by a delegating voter from a delegation rule can be considered a local one, being contingent solely on the voter's immediate representative and not influenced by further choices made by the chosen representative. In other words, the utility of a delegating voter, under a delegation rule, is determined by the score that she declared for her immediate representative, as specified by the rule. Note that two different time-instants \(t,t^{\prime}\) may exist such that \(u\in S_{v}^{t}\cap S_{v}^{t^{\prime}}\). In these cases, given that the output of a delegation rule is a set of temporal paths, if the rule suggests a delegation from \(v\) to \(u\), it also explicitly specifies the time-instant at which the delegation will occur, say at time \(t^{\prime}\), and thereby the utility of \(v\) is equal to \(sc_{v}^{t^{\prime}}(i)\), if \(u\) belongs to the \(i\)-th most preferred group of \(v\) at time \(t^{\prime}\). Regarding the casting voters, we do not take their utility into account, since their will to cast a ballot has been realized; we do the same for abstaining voters. We consider as infeasible every solution that asks a casting (resp. abstaining) voter to delegate her ballot or abstain (resp. vote), and hence, our focus will be on the welfare of the delegating voters. Finally, the quality of a rule is assessed by the total satisfaction it elicits from the electorate, which is expressed as the sum of utilities of all delegating voters. Our optimization objective then is to maximize the electorate's satisfaction, as defined by the resolve-delegation problem below.
**Example**.: As an illustration, consider the following instance of a t-LD election with 5 rounds and 6 voters, namely Alice, Bob, Charlie, Daisy, Elsa, and Fred. Their preferences are outlined below:
* Alice initially intended to delegate to Charlie. In the second round, she decided to get informed about the considered issue and vote.
* Bob did not participate in the deliberation phase during the first round, but approved Alice in the second round. In the third round, he revoked his approval of Alice and instead approved Charlie and Elsa. Bob's approval of Elsa remained until the final round.
* Charlie approved Alice only in the beginning of the election. He also approved Bob in the first and third round, but removed his approval (and abstained) in the second round. In the fourth round, Charlie approved both Daisy and Fred, but he removed his approval of Daisy in the final round.
* Daisy expressed interest in being a casting voter from the beginning until the end of the deliberation phase.
* Although Elsa intended to delegate her ballot to Fred at certain times, ultimately both refrained from participating in the election.
The described instance can be visualized using the graph shown in the side figure. We assume that \(\delta^{t}_{B}=1\) and \(\delta^{t}_{C}=t-1\) for every \(t\in\{1,2,\ldots,5\}\). The scores assigned by the voters to their approved representatives are encoded by the form of the edges, where curly edges have weight \(1\), straight edges have weight \(2\), and double-lined edges have weight \(3\). Dotted edges indicate the casting voters. The labels of the edges represent the time-intervals of their presence. In this instance, Alice and Daisy form the set of casting voters, while Elsa and Fred abstain. Therefore, edge \((A,C)\) can be removed, since Alice will definitely cast a ballot. In a \(\delta\)-time-conscious solution, Bob would not delegate to Charlie, since no \(\delta\)-time-conscious path to \(\triangledown\) using the edge \((B,C)\) exists; for instance, edge \((C,A)\) violates the trust-horizon declared by Bob. Similarly, Charlie would delegate neither to Alice nor to Bob at time \(1\). Since we do not allow Bob to delegate to an abstainer, he must delegate to Alice, whom he trusted at time \(2\). Then, there are two possible outcomes for the delegation rule, depending on the choice made for Charlie. The edge that maximizes Charlie's utility is \((C,D)\). Therefore, the optimal delegation rule that is both time-conscious and confluent would suggest the set of paths \(\{((C,D),(D,\triangledown)),((B,A),(A,\triangledown))\}\), achieving a total satisfaction score of \(4\). Finally, in this example it is plainly evident how the temporal dimension comes to the rescue: if one were to focus solely on the snapshot taken at time \(5\), disregarding the information garnered from the deliberation phase, the only option would be to ask Bob and Charlie to delegate to abstaining voters. Instead, our framework utilizes the information obtained throughout the deliberation phase to propose an outcome that avoids paths towards abstainers and delegation cycles.
## 3 Computational Complexity of Resolving Delegations in t-LD elections
In this section we explore the compatibility of the axioms we have put forward in Sections 1 and 2 with efficient computation. We highlight that all the missing proofs from the paper can be found in the Supplementary Material of our work. Unfortunately, our first result shows that it is impossible to have polynomially computable utility-maximizing delegation rules that simultaneously satisfy the axioms of time-consciousness and confluence, unless P=NP, even under simple and natural restrictions. Before stating the result, we discuss the types of instances for which we establish hardness. To begin with, it is expected that in real-life elections, voters tend to exhibit a relatively stable and consistent opinion over time, and do not revise their preferences numerous times during the deliberation phase, due to the effort it would require to gather and process new information. Similarly, it is reasonable to expect that due to limited cognitive capacity, the voters are only able to partition their accepted representatives into a few disjoint preference groups. The theorem that follows demonstrates that the computational intractability of resolve-delegation persists even when we limit the voters to changing their minds at most once during the deliberation phase and partitioning their accepted representatives into at most two groups at each round. Furthermore, it holds even for instances of retrospective trust and with Borda-scoring functions. Therefore, the primary takeaway is that incorporating temporal aspects in conjunction with natural requirements does come at a computational cost.
**Theorem 1**.: resolve-delegation _in a time-conscious and confluent manner is NP-hard, even for profiles of retrospective trust and under the Borda-scoring function._
Proof.: We provide here a description of the reduction and defer the proof to Appendix A. Given a graph \(G(V\cup\{\triangledown\},E,\tau,L,w,\delta)\) and a parameter \(k\), we call \(\Pi\) the decision variant of resolve-delegation in a time-conscious and confluent manner, which asks for the existence of a solution with total satisfaction at least \(k\). In what follows, we provide a reduction to \(\Pi\) from the NP-hard problem [27] minimum temporal spanning tree (t-mst), which we formally define shortly. Before moving on to the definition of t-mst, we note that in temporal graph theory the term _time-respecting_ is used to describe a temporal path \((v_{i},(e_{i},t_{i}),v_{i+1})_{i\in[\ell]}\), such that for every \(i\in[\ell]\), it holds that \(e_{i}=(v_{i},v_{i+1})\), \(t_{i}\in\tau(e_{i})\), and \(1\leq t_{1}\leq t_{2}\leq\cdots\leq t_{\ell}\leq L\) (also called 'journey' or simply "temporal" in the related literature). We also refer to Appendix B for an example. The difference between time-respecting and \(\delta\)-time-conscious paths is that the paths of the former type are formed by edges whose time-stamps are in non-decreasing order of visiting, in contrast to the paths of the latter type, whose edges have time-stamps in non-increasing order and, on top of that, satisfy a waiting-time constraint indicated by the vector \(\delta\).
In the t-mst problem, we are given a temporal graph \(G^{\prime}(V^{\prime},E^{\prime},\tau^{\prime},L^{\prime},w^{\prime})\), as well as a root vertex \(u^{\prime}_{0}\in V^{\prime}\) and an integer \(k^{\prime}\). We are asked for a directed temporal tree of \(G^{\prime}\), called \(T^{\prime}\), of edge set \(E^{\prime\prime}\), that spans the vertices of \(V^{\prime}\) and that has a time-respecting path from \(u^{\prime}_{0}\) to every vertex of \(V^{\prime}\), such that \(\sum_{e\in E^{\prime\prime}}w^{\prime}(e)\leq k^{\prime}\). Note that t-mst is NP-hard even for the case where \(w^{\prime}(e)\in\{1,2\},\forall e\in E^{\prime}\), and for every \(v\in V^{\prime}\) there exists a \(u\in V^{\prime}\), such that \(L^{\prime}\in\tau^{\prime}((u,v))\). It is without loss of generality to assume that \(u^{\prime}_{0}\) has no incoming edges in \(E^{\prime}\). Furthermore, the hardness holds for instances in which for any pair of vertices \(u,v\) of the input graph \(G^{\prime}(V^{\prime},E^{\prime},\tau^{\prime},L^{\prime},w^{\prime})\), either \((u,v)\notin E^{\prime}\), or there are two copies, \(e_{1}\) and \(e_{2}\), of \((u,v)\) in the multiset \(E^{\prime}\). In the second case, it also holds that \(\tau^{\prime}(e_{1})=[1,L^{\prime}-1]\), \(\tau^{\prime}(e_{2})=[L^{\prime},L^{\prime}]\) and that \(w^{\prime}(e_{1})=2\), \(w^{\prime}(e_{2})=1\).
Given such an instance \((G^{\prime}(V^{\prime},E^{\prime},\tau^{\prime},L^{\prime},w^{\prime}),u_{0}^{ \prime},k^{\prime})\) of t-mst we create an instance \((G(V\cup\{\triangledown\},E,\tau,L,w,\delta),k)\) of \(\Pi\) as follows:
* let \(L=L^{\prime}\),
* for every vertex \(u^{\prime}\in V^{\prime}\) we add a vertex \(u\in V\),
* for every directed edge \((u^{\prime},v^{\prime})\in E^{\prime}\) we add a directed edge \((v,u)\) such that \(w(v,u)=3-w^{\prime}(u^{\prime},v^{\prime})\) (recall that \(w^{\prime}(u^{\prime},v^{\prime})\in\{1,2\}\)) and \(\tau((v,u))=\tau^{\prime}((u^{\prime},v^{\prime}))\).
* we add the special vertex \(\triangledown\) and a directed edge \(e=(u_{0},\triangledown)\) such that \(w(e)=0\) and \(\tau(e)=[1,L]\),
* we add one more special vertex \(a\in V\),
* for every \(t\in[L]\) and \(v\in V\) such that there exists in \(E\) an outgoing edge from \(v\) at time \(t\) of weight \(2\) but not of weight \(1\), we add an edge (called "dummy") \(e=(v,a)\) such that \(w(e)=1\) and \(\tau(e)=[t,t]\),
* for every vertex \(v\in V\setminus\{a\}\) that corresponds to a non-casting voter and for every \(t\in[L]\), we set \(\delta_{v}^{t}=t-1,\)
* we set \(k\) to be \(3(n-1)-k^{\prime}\).
For the remainder of the proof, we refer to Appendix A.
We will now explore ways to circumvent the impossibility result of Theorem 1. Our proposal is to relinquish either the requirement of efficiency or one of the axioms of time-consciousness and confluence, in hopes of solving resolve-delegation. Our findings show that this strategy proves successful for some of the problems that emerge, which highlights that Theorem 1 is not devastating. Notably, most of the suggested procedures are simple enough to be strong contenders for practical applications.
We begin with studying the easiest variant of resolve-delegation, in which the requirement of time-consciousness is disregarded. This is mainly done for the sake of completeness, since studying it requires overlooking the temporal dimension of the instance, which is the defining characteristic of our work. In order to solve resolve-delegation efficiently in a confluent but not necessarily time-conscious manner, the delegation rule can treat any input submitted by a voter at any time as if it were not subject to time-related constraints. Since confluence implies that the output should be a directed tree, and since the utility of each delegating voter is determined by her outgoing edge, all edges of the tree with non-zero weight will contribute exactly once to the total satisfaction; therefore, the objective is to find a (static) directed tree of maximum total weight that is rooted at \(\triangledown\) and spans the vertices of \(V\setminus A\). To solve this problem we leverage the well-known algorithm by Edmonds [20] (also independently discovered in [7, 14] and improved in [42]) for the directed analog of the classic minimum spanning tree problem.2 In this problem, given a weighted directed static graph \(G(V,E,w)\) and a designated vertex \(r\in V\), we are asked for a subgraph \(T\) of \(G\), the undirected variant of which is a tree, of minimum total cost, such that every vertex of \(G\) is reachable from \(r\) by a directed path in \(T\). It is important to note that in our case, the paths we need to compute are towards a fixed vertex, rather than originating from it. To apply Edmonds' algorithm, we adjust graph \(G\) to an appropriate graph \(G^{\prime}\), as indicated by Procedure 1.
Footnote 2: It should be noted that in [10], a confluent delegation rule, referred to as MinSum, has been proposed under a more restricted voting framework compared to ours, and its polynomial-time computability has been very recently established [17], using an approach that is also based on Edmonds’ algorithm.
**Theorem 2**.: _Procedure 1 solves resolve-delegation in a confluent manner, in polynomial time._
**Procedure 1.** A confluent and efficient utility maximizing delegation rule for input \(G(V\cup\{\triangledown\},E,\tau,L,w,\delta)\).
\(G:=\) static variant of \(G\)
\(V^{\prime}:=V\cup\{\triangledown\}\setminus A\)
\(E^{\prime}:=\{(u,v):(v,u)\in E\wedge v,u\in V^{\prime}\}\)
**For every edge \(e^{\prime}\in E^{\prime}\):**
\(w^{\prime}(e^{\prime}):=-w(e),\) where \(e^{\prime}\in E^{\prime}\) corresponds to \(e\in E\)
remove duplicates from \(E^{\prime}\), retaining only the min-weight edge
let \(G^{\prime}\) be the (static) directed weighted graph \((V^{\prime},E^{\prime},w^{\prime})\)
\(T^{\prime}:=\) outcome of Edmonds' algorithm with input \((G^{\prime},\triangledown)\)
**Return** the path from each \(v\in D\) to \(\triangledown\) inferred by \(T^{\prime}\)
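A compact Python rendering of Procedure 1 follows, using the Edmonds-based spanning arborescence routine shipped with networkx; the input encoding (a list of weighted approvals with time labels already dropped, plus zero-weight edges from casting voters to the sink) is an illustrative assumption, and the sketch should not be read as the exact implementation evaluated here.

```python
# A hedged sketch of Procedure 1. Assumptions: `approvals` lists (delegator, representative,
# weight) triples over the whole deliberation phase (time labels are ignored, as Procedure 1
# works on the static variant) and also contains a zero-weight edge (c, CAST) for every
# casting voter c; `abstaining` is the set A; the instance is assumed to admit a solution.
import networkx as nx

CAST = "BALLOT_BOX"  # plays the role of the special sink vertex

def confluent_rule(approvals, abstaining):
    g_rev = nx.DiGraph()  # the reversed static graph G' of Procedure 1
    for v, u, w in approvals:
        if v in abstaining or u in abstaining:
            continue
        # reversed edge (u, v); among parallel edges keep the one of maximum original weight
        if not g_rev.has_edge(u, v) or g_rev[u][v]["weight"] < w:
            g_rev.add_edge(u, v, weight=w)
    # CAST has no incoming edges in G', so every spanning arborescence of G' is rooted at it;
    # maximizing total weight is equivalent to negating weights and minimizing, as in Procedure 1.
    tree = nx.maximum_spanning_arborescence(g_rev, attr="weight")
    return tree.reverse(copy=True)  # unique directed path from every delegating voter to CAST
```

The edge reversal mirrors the construction of \(E^{\prime}\) in Procedure 1: delegation paths must end at \(\triangledown\), whereas Edmonds' algorithm grows arborescences away from their root.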
We now shift our focus to efficient utility-maximizing delegation rules that satisfy time-consciousness but are not necessarily confluent. Despite not necessarily resulting in a tree structure, such a rule should still suggest a precise path to a casting voter for every delegating voter \(v\). Then, the utility of \(v\) will be derived from her immediate representative (i.e., the weight of the first edge) in that path, regardless of whether other paths going through \(v\) may exist for serving other voters who have delegated to \(v\). The question of why non-confluent delegation rules merit investigation is discussed in [10]. It was discovered that, among a large family of delegation rules, only non-confluent rules possessed the potential to satisfy the axiom of _copy-robustness_, an axiom that is also motivated by practical considerations [4]. Moreover, there are non-confluent rules with desirable properties that have been previously studied, such as the Depth-First-Delegation rule that precludes the possibility of Pareto-dominated delegations [32]. Hence, it is not unprecedented to sacrifice confluence on the altar of attaining other desirable attributes. However, quite surprisingly, even in the absence of a requirement for a confluent rule, resolve-delegation remains NP-hard, as shown by the following theorem. Notably, the result holds even for simple scenarios that involve only a brief deliberation phase, uniform trust-horizon parameters across all voters, and a lone delegating voter, and it is orthogonal to the result of Theorem 1 since it explicitly uses the fact that the considered elections are not of retrospective trust.
**Theorem 3**.: resolve-delegation _in a time-conscious manner is NP-hard, even for profiles with only a single delegating voter._
Continuing with our study of efficient utility-maximizing delegation rules that are time-conscious but not necessarily confluent, we now turn to exploring potential workarounds to the impossibility result of Theorem 3. To overcome the computational intractability, we restrict ourselves to the still-hard variant where the voters share the same trust-horizon parameter and propose the following relaxations:
* Assuming retrospective trust profiles, i.e. \(\delta_{v}^{t}=t-1,\) for every voter \(v\) and for every time-step \(t\) such that \(v\) approves to delegate her ballot at time \(t\). These profiles are motivated by the fact that in real life, we do not expect voters to change their opinion in an arbitrary manner, and hence it is likely that a delegating voter trusts another voter for all the previous time instants before \(t\).
* Permitting walks instead of only paths, or in other words allowing for revisits to vertices, along a path from a delegating voter to a casting one. This enlarges the solution space and can be helpful towards achieving time-consciousness in certain instances, as it may be necessary to go through a cycle before being able to satisfy the time constraints. For an illustration, we refer to Appendix B.
The approach of neglecting confluence enables the development of local delegation rules, akin to the rules studied in [11], that make a decision for every voter completely independently of the choices made for the rest of the electorate. For the two relaxations suggested in the previous discussion, we propose a simple procedure that, at a high level, visits every vertex \(v\) corresponding to a delegating voter in a sequential manner and, for each such vertex, detects a feasible, i.e. \(\delta\)-time-conscious, way to reach \(\triangledown\) that uses the outgoing edge of \(v\) of maximum possible weight. The aforementioned way of reaching a casting voter can be computed by a suitable modification of the temporal analog of the Breadth-First Search algorithm from [34], in the case where the input profile is of retrospective trust, and by using the polynomial procedure based on Dijkstra's algorithm from [5], in the case where walks are allowed and all voters share the same trust-horizon parameter.
Concerning the first relaxation, a polynomial-time algorithm was suggested in [34] to solve a problem, called foremost path, that is more general than what we need in our setting. In this problem, we are given an (unweighted) directed temporal graph \(G(V,E,\tau,L)\), a source vertex \(v\in V\), a sink vertex \(u\in V\), and a time-instant \(t_{start}\in[L]\), and we are asked to compute3 a time-respecting path from \(v\) to \(u\) that starts no sooner than \(t_{start}\) (or report that such a path does not exist). Recall that the definition of a time-respecting path has been provided in the proof of Theorem 1.
Footnote 3: To be more precise, the goal is to select the path that minimizes the arrival time but for our purposes, this objective is superfluous (but harmless).
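To give a feel for the machinery involved, here is a small Python sketch of an earliest-arrival, time-respecting path computation in the spirit of the algorithm of [34]; it assumes temporal edges have been expanded into individual (u, v, t) occurrences (interval labels can be unrolled beforehand) and is a toy illustration rather than the exact routine used in Procedure 2.

```python
# A minimal sketch of a foremost (earliest-arrival) time-respecting path computation.
# Assumption: `edge_occurrences` is an iterable of (u, v, t) triples.
def foremost_path(edge_occurrences, source, target, t_start):
    arrival = {source: t_start}
    parent = {}
    for u, v, t in sorted(edge_occurrences, key=lambda e: e[2]):   # scan edges in time order
        if u in arrival and t >= arrival[u] and t < arrival.get(v, float("inf")):
            arrival[v] = t
            parent[v] = (u, t)
    if target not in arrival:
        return None                     # no time-respecting path starting at or after t_start
    path, node = [], target
    while node != source:               # reconstruct the path from the parent pointers
        u, t = parent[node]
        path.append((u, node, t))
        node = u
    return list(reversed(path))
```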
For the second relaxation, we first give the following definition: Given a temporal graph \(G(V,E,\tau,L,\delta)\) in which all entries of the vector \(\delta\) coincide with a fixed value \(\Delta\), a temporal walk \(p\) of \(G\) of length \(\ell\), say \(p=(v_{i-1},(e_{i},t_{i}),v_{i})_{i\in[\ell]}\) such that the \(v_{i}\)'s are not necessarily all pairwise distinct, is called \(\Delta\)-restless if for every \(i\in[\ell]\) it holds that \(e_{i}=(v_{i-1},v_{i})\) and \(t_{i}\in\tau(e_{i})\), and for every \(i\in[\ell-1]\) it holds that \(t_{i}\leq t_{i+1}\leq t_{i}+\Delta\). To efficiently solve the relaxation of resolve-delegation in a time-conscious manner when walks are allowed, we will utilize the procedure from [5], which outputs4 a \(\Delta\)-restless temporal walk between two specified vertices, for any fixed parameter \(\Delta\). For compactness, we provide a unified presentation of the positive results, under Procedure 2, which handles both relaxations. In the statement and analysis of this procedure, we will use the term _journey_ to refer either to a path, when dealing with the first relaxation, or to a walk, when discussing the second relaxation.
Footnote 4: Once again, the problem studied in [5] is more general than the problem we need to consider here, both in terms of the input graph and the optimization objective(s), but it can be easily adapted to meet our requirements.
**Theorem 4**.: _Procedure 2 solves resolve-delegation in a time-conscious manner, in polynomial time, for profiles of retrospective trust. Moreover, the same holds for the variant of the problem where walks are allowed, for profiles in which there is a common, fixed trust-horizon parameter, for all voters and all time-steps._
We conclude with studying the problem resolve-delegation in a time-conscious and confluent manner, but now without the requirement of computational efficiency. Clearly, if polynomial solvability is no longer a worry, a straightforward brute-force procedure, that in time exponential in the number of edges and in \(L\) examines all possible trees, can be utilized to maximize the voters' satisfaction. However, our objective goes beyond this. First, we aim at developing a procedure that could be well-suited for scenarios where the deliberation phase is prolonged, being exponentially dependent on only one of its input parameters. Additionally, observing that the most decisive parameter of resolve-delegation is the number of delegating voters \(|D|\) (upper bounded by \(n\)), we focus on designing an algorithm with a running time exponentially dependent only on \(|D|\), which would be suitable for practical use in any relatively small community. Yet, this is not possible without further assumptions, given the negative result of Theorem 3, which holds even for a single delegating voter. As before, we resort to instances of t-LD elections of retrospective trust and obtain the following result.
**Theorem 5**.: resolve-delegation _in a time-conscious and confluent manner is solvable in time exponential in \(|D|\) and polynomial in the remaining input parameters, for profiles of retrospective trust._
## 4 Conclusions
Succinctly speaking, the main attributes of Liquid Democracy are the voters' (i) ability to cast a ballot, (ii) ability to delegate voting rights, (iii) transitivity of delegations, (iv) ability for topic-specific delegations, and (v) ability to modify or recall a delegation. Our work is, to our knowledge, the first in the Computational Social Choice literature that studies a model satisfying each of the above features. Motivated by the suggestion of [3] on the addition of a temporal dimension in the algorithmic considerations of LD models, and building upon [10], we studied an LD framework from a viewpoint that lies in the middle ground between algorithmic and axiomatic approaches. We intentionally placed significant emphasis on developing a general model for incorporating temporal aspects, and we feel it opens the way to several promising avenues for future research. The first is to examine whether time-consciousness (or other time-related axioms) is compatible with established axioms for LD frameworks. Another intriguing topic is to identify further realistic families of instances for which all properties studied here can be simultaneously satisfied. It would also be interesting to check whether the positive results we present still apply to further generalizations of t-LD elections, e.g. when voters are able to use a more powerful language to express complex preferences. The delegating voters' preferences over the final representatives and the casting voters' preferences over the issue at hand, as well as an egalitarian objective or restrictions on the maximum in-degree or on the maximum path-length in the output of a delegation rule, deserve further examination. We finally suggest experimental or empirical evaluations of LD frameworks that take into account temporal considerations. |
2301.09850 | RD-NAS: Enhancing One-shot Supernet Ranking Ability via Ranking
Distillation from Zero-cost Proxies | Neural architecture search (NAS) has made tremendous progress in the
automatic design of effective neural network structures but suffers from a
heavy computational burden. One-shot NAS significantly alleviates the burden
through weight sharing and improves computational efficiency. Zero-shot NAS
further reduces the cost by predicting the performance of the network from its
initial state, which conducts no training. Both methods aim to distinguish
between "good" and "bad" architectures, i.e., ranking consistency of predicted
and true performance. In this paper, we propose Ranking Distillation one-shot
NAS (RD-NAS) to enhance ranking consistency, which utilizes zero-cost proxies
as the cheap teacher and adopts the margin ranking loss to distill the ranking
knowledge. Specifically, we propose a margin subnet sampler to distill the
ranking knowledge from zero-shot NAS to one-shot NAS by introducing Group
distance as margin. Our evaluation of the NAS-Bench-201 and ResNet-based search
space demonstrates that RD-NAS achieve 10.7\% and 9.65\% improvements in
ranking ability, respectively. Our codes are available at
https://github.com/pprp/CVPR2022-NAS-competition-Track1-3th-solution | Peijie Dong, Xin Niu, Lujun Li, Zhiliang Tian, Xiaodong Wang, Zimian Wei, Hengyue Pan, Dongsheng Li | 2023-01-24T07:49:04Z | http://arxiv.org/abs/2301.09850v1 | # RD-NAS: Enhancing One-Shot Supernet Ranking Ability via Ranking Distillation from Zero-Cost Proxies
###### Abstract
Neural architecture search (NAS) has made tremendous progress in the automatic design of effective neural network structures but suffers from a heavy computational burden. One-shot NAS significantly alleviates the burden through weight sharing and improves computational efficiency. Zero-shot NAS further reduces the cost by predicting the performance of the network from its initial state, which conducts no training. Both methods aim to distinguish between "good" and "bad" architectures, i.e., ranking consistency of predicted and true performance. In this paper, we propose Ranking Distillation one-shot NAS (RD-NAS) to enhance ranking consistency, which utilizes zero-cost proxies as the cheap teacher and adopts the margin ranking loss to distill the ranking knowledge. Specifically, we propose a margin subnet sampler to distill the ranking knowledge from zero-shot NAS to one-shot NAS by introducing Group distance as margin. Our evaluation of the NAS-Bench-201 and ResNet-based search space demonstrates that RD-NAS achieves 10.7% and 9.65% improvements in ranking ability, respectively. Our codes are available at [https://github.com/pprp/CVPR2022-NAS-competition-Track1-3th-solution](https://github.com/pprp/CVPR2022-NAS-competition-Track1-3th-solution)
Peijie Dong\({}^{1}\), Xin Niu\({}^{1,*}\), Lujun Li\({}^{2}\), Zhiliang Tian\({}^{1}\), Xiaodong Wang\({}^{1}\) Zimian Wei\({}^{1}\), Hengyue Pan\({}^{1}\), Dongsheng Li\({}^{1}\)\({}^{1}\) School of Computer, National University of Defense Technology, Hunan, China
\({}^{2}\) Chinese Academy of Sciences, Beijing, China
neural architecture search, one-shot NAS, zero-shot NAS, rank consistency
## 1 Introduction
Neural Architecture Search (NAS) has sparked increased interest due to its remarkable progress in a variety of computer vision tasks [5, 3, 6, 7, 8]. It aims to reduce the cost of human effort in manually designing network architectures and to discover promising models automatically. Early NAS works [9, 10, 11] took thousands of GPU hours in search cost, and ENAS [12] made the first attempt at weight-sharing techniques to accelerate the searching process. The key to weight-sharing-based methods is an over-parameterized network, _a.k.a._ supernet, that encompasses all candidate architectures in the search space. The weight-sharing-based NAS approaches [1, 13] are called one-shot NAS since they only require the cost of training one supernet. Despite the high efficiency of one-shot NAS, it is theoretically based on the ranking consistency assumption, _i.e._, the estimated performance of candidate architectures in the supernet should be highly correlated with the true performance of the corresponding architectures when trained from scratch. However, due to the nature of the weight-sharing approach, the subnet architectures interfere with each other, and the estimated accuracy from the supernet is inevitably averaged.
A recent trend of NAS focuses on zero-cost proxies [4, 2], which aim to estimate the relative performance of candidate architectures from a few mini-batches of data without training the model. The approach of inferring the trained accuracy directly from the initial state of the model is called zero-shot NAS since it does not involve any training. However, zero-shot NAS exhibits clear architectural preferences that affect its ability to find good architectures [14].
To alleviate the ranking disorder caused by weight-sharing, some predictor-based methods [15, 16] use the ranking loss to enhance the ranking ability of the model, but the challenge is that acquiring training samples (pairs of (model, accuracy))
Figure 1: The ranking consistency of one-shot NAS (SPOS [11]) and zero-shot NAS(FLOPs [2], Params [2], Snip [2], Synflow [2], ZenNAS [3], NASWOT [4], Grasp [2]). Our proposed RD-NAS achieves higher Kendall’s Tau than both one-shot and zero-shot NAS and converges faster than the baseline [1]
is computationally expensive. To improve the ranking consistency economically, we resort to zero-shot NAS as an inexpensive teacher and measure the ranking relationship between a pair of subnets without employing a predictor. In recent years, experimental investigations [2] enhanced existing NAS algorithms by using zero-cost proxies to initialize the search algorithm at the beginning of the search process, which accelerated the convergence speed of the supernet. In contrast to such limited use, we incorporate zero-cost proxies into the whole training procedure with ranking knowledge distillation [17, 18, 19, 20, 21, 22, 23].
We observe the complementarity between one-shot NAS and zero-shot NAS in Fig.1. Zero-cost proxies show good initial ranking consistency, but the upper-performance limit is not as high as one-shot NAS, while one-shot NAS has lower ranking performance in the early stage. This inspires us to bridge one-shot NAS and zero-shot NAS by utilizing zero-cost proxies as ranking teachers. In this paper, we propose a novel Ranking Distillation one-shot NAS (RD-NAS) to transfer the ranking knowledge from Zero-cost proxies to one-shot supernet.
Our contributions are summarized as follows: (1) We propose a Ranking Distillation framework via margin ranking loss to distill knowledge from zero-cost proxies and improve the ranking ability of one-shot NAS. (2) We propose a margin subnet sampler to reduce the uncertainty caused by zero-cost proxies by introducing Group distance. (3) We demonstrate the effectiveness of RD-NAS on two types of benchmark, _i.e._, the ResNet-like search space and NAS-Bench-201, which achieve 10.7% and 9.65% improvements in Spearman and Kendall's Tau, respectively.
## 2 Ranking Distillation One-Shot NAS
Ranking Distillation one-shot NAS is a framework for distilling ranking knowledge from zero-shot NAS to improve the ranking ability of one-shot NAS. In Sec.2.1, we introduce how to measure the effectiveness of zero-cost proxies in the context of ranking and introduce a new criterion. In Sec.2.2, we propose the margin subnet sampler with margin ranking loss to perform the ranking distillation.
### How to measure Zero-cost Proxies
Given a candidate architecture \(\alpha_{i}\in\Omega,(i=1,2,...,N)\) in the search space \(\Omega\), zero-cost proxies can quickly estimate a relative score \(\pi_{i}^{z}\in\mathbb{R}\) of the \(i\)-th architecture, from which its relative ranking with respect to other candidates can be obtained. However, there is a certain gap between the score \(\pi_{i}^{z}\) obtained by zero-cost proxies and the ground-truth score \(\pi_{i}^{*}\in\mathbb{R}\) obtained by stand-alone training. We introduce \(\sigma_{i}\) to denote the deviation of the zero-cost score of the \(i\)-th architecture, so that the equation \(\pi_{i}^{*}=\pi_{i}^{z}+\sigma_{i}\) holds. We expect to introduce the true ranking correlation between architectures into the supernet optimization process. To measure the estimation ability of zero-cost proxies, we consider the objective function:
\[\max_{i,j}P(\pi_{i}^{*}>\pi_{j}^{*})=\max_{i,j}P(\pi_{i}^{z}-\pi_{j}^{z}+\sigma_{i}-\sigma_{j}>0)\approx\max_{i,j}P(\pi_{i}^{z}-\pi_{j}^{z}>0) \tag{1}\]
The deviation of a certain zero-cost proxy can be considered a constant scalar, so that \(\mathbb{E}[\sigma_{i}-\sigma_{j}]\approx 0\), and we can use the zero-cost score \(\pi^{z}\) to estimate the ground truth \(\pi^{*}\). Here we simplify Kendall's Tau and introduce the Concordant Pair Ratio (CPR) \(\delta\) to evaluate pair-wise ranking consistency. We define the subnet pairs satisfying \((\pi_{i}^{*}-\pi_{j}^{*})(\pi_{i}^{z}-\pi_{j}^{z})>0\) as concordant pairs. The Concordant Pair Ratio coefficient \(\delta\) is defined as:
\[\delta=\frac{\sum_{i,j}1((\pi_{i}^{*}-\pi_{j}^{*})(\pi_{i}^{z}-\pi_{j}^{z})>0) }{\sum_{i,j}1} \tag{2}\]
which is in the range \(0\leq\delta\leq 1\). The closer \(\delta\) is to 1, the better the zero-cost proxy estimates the ground truth, and _vice versa_.
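As a concrete illustration, the short Python snippet below (toy data, not results from the paper) computes the CPR of Eq. (2) for a handful of architectures and places it next to Kendall's Tau from scipy; both quantities compare the ordering induced by the zero-cost scores \(\pi^{z}\) against the ground-truth accuracies \(\pi^{*}\).

```python
# A minimal sketch computing the Concordant Pair Ratio (CPR) of Eq. (2); the accuracy and
# proxy-score lists below are made-up toy values used only to exercise the function.
from itertools import combinations
from scipy.stats import kendalltau

def concordant_pair_ratio(true_scores, proxy_scores):
    pairs = list(combinations(range(len(true_scores)), 2))
    concordant = sum(
        1 for i, j in pairs
        if (true_scores[i] - true_scores[j]) * (proxy_scores[i] - proxy_scores[j]) > 0
    )
    return concordant / len(pairs)

acc = [93.2, 91.5, 89.9, 94.1]   # ground-truth accuracies (pi^*)
zc = [0.71, 0.55, 0.60, 0.80]    # zero-cost proxy scores (pi^z)
tau, _ = kendalltau(acc, zc)
print(concordant_pair_ratio(acc, zc), tau)
```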
Figure 2: Overview of the proposed RD-NAS. (a) Margin Subnet Sampler is utilized to measure the distance of subnets in the search space and sample subnet pair with margin. (b) The subnet pairs sampled in the subnet space are mapped to the ranking space by the one-shot and Zero-shot methods, and the ranking knowledge from zero-shot NAS is transferred to the one-shot NAS via ranking distillation.
### Ranking Distillation
To transfer the ranking knowledge obtained from zero-cost proxies to one-shot supernet, we propose a ranking distillation to further improve the ranking consistency of the one-shot NAS. In Sec.2.2.1, we introduce the margin ranking loss to distill the ranking knowledge from zero-cost proxies, but the margin is hard to set in one-shot training. Therefore, we propose a margin subnet sampler to sample subnet pairs from the subnet space in Sec.2.2.2.
#### 2.2.1 Margin Ranking Loss
Margin ranking loss is adopted to constrain the optimization of the supernet from the perspective of ranking distillation. Unlike the standard pairwise ranking loss [24], we do not have the true ranking, only the estimated ranking provided by zero-cost proxies, which suffers from uncertainty. Margin ranking loss can handle difficult subnet pairs by introducing a fixed penalty margin \(m\). The margin ranking loss of a randomly sampled subnet pair (\(\alpha_{i}\), \(\alpha_{j}\)) is as follows:
\[\mathcal{L}^{rd}_{(\alpha_{i},\alpha_{j})}(\theta^{s})=\max\left[0,m-\left( \mathcal{L}^{ce}\left(x,\theta^{s}_{\alpha_{i}}\right)-\mathcal{L}^{ce}\left( x,\theta^{s}_{\alpha_{j}}\right)\right)\right] \tag{3}\]
where \(\theta^{s}\) denotes the supernet in one-shot NAS, and \(\mathcal{L}^{ce}(\cdot)\) denotes the cross-entropy loss. However, such a fixed learning objective is not realistic, as the appropriate margin varies inconsistently as training progresses, which might limit the discriminative training of the supernet. Instead of using a fixed margin \(m\), we propose a margin subnet sampler to control the margin through the subnet sampling process.
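A minimal PyTorch sketch of Eq. (3) is given below; the variable names are assumptions, and the pair is assumed to be ordered so that the zero-cost teacher ranks \(\alpha_{j}\) above \(\alpha_{i}\), i.e. the supernet is pushed to give \(\alpha_{j}\) a cross-entropy loss lower than that of \(\alpha_{i}\) by at least the margin \(m\).

```python
# A hedged sketch of the ranking distillation loss of Eq. (3); in practice it would be
# averaged over sampled subnet pairs and added to the usual supernet training loss.
import torch
import torch.nn.functional as F

def ranking_distillation_loss(logits_i, logits_j, targets, margin):
    ce_i = F.cross_entropy(logits_i, targets)   # L^ce(x, theta^s_{alpha_i})
    ce_j = F.cross_entropy(logits_j, targets)   # L^ce(x, theta^s_{alpha_j})
    return torch.clamp(margin - (ce_i - ce_j), min=0.0)
```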
#### 2.2.2 Margin Subnet Sampler
A fixed margin is difficult to tune, and the appropriate margin changes as training progresses. Moreover, the effectiveness of margin ranking loss is greatly affected by the chosen subnet pairs. To tackle these problems, we propose a margin subnet sampler, which affects the margin by controlling the sampled subnets. Intuitively, we expect the sampled subnet pairs to cover the entire search space, which can improve the generalization ability. We first randomly select \(n\) pairs of subnets \(\{(\alpha_{i},\alpha_{j})\}_{n}\) from the search space; then we calculate the distance of each subnet pair with a distance function \(f(\cdot)\), mapping pairs of subnets to \(\mathbb{R}^{+}\), and keep the \(c\) pairs of subnets \(\{(\alpha_{i},\alpha_{j})\}_{c}=\{(\alpha_{i},\alpha_{j})|f(\alpha_{i},\alpha_{j})>\mu\}\), i.e., only pairs whose distance exceeds the threshold \(\mu\) are selected.
Concerning the subnet distance measure \(f(\cdot)\), by encoding the subnet as a sequence composed of candidate choices, the Hamming distance \(f_{h}(\cdot)\) is computed as the number of elements that are different in the two subnet sequences. However, Hamming distance treats all candidate operations as equal and ignores the variability between them. To make the distance measure more discriminative, we propose the Margin Subnet Sampler (MSS) with Group distance \(f_{g}(\cdot)\), which divides the candidate operations into groups with and without parameters. The intra-group distance is set small, and the inter-group distance is set large.
\[f_{g}^{k}(\alpha_{i}^{k},\alpha_{j}^{k})=\left\{\begin{array}{cc}2f_{h}( \alpha_{i}^{k},\alpha_{j}^{k}),&\beta(\alpha_{i}^{k},\alpha_{j}^{k})=0\\ 0.5f_{h}(\alpha_{i}^{k},\alpha_{j}^{k}),&\beta(\alpha_{i}^{k},\alpha_{j}^{k})=1 \end{array}\right. \tag{4}\]
where \(\beta(\cdot)\) is an indicator function of whether \((\alpha_{i}^{k},\alpha_{j}^{k})\) are intra-group and \(f_{g}^{k}(\cdot)\) denotes the group distance of the \(k\)-th operation of the subnet encoding.
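The following Python sketch illustrates Eq. (4) together with the sampler that uses it; the grouping of operations into parametric and non-parametric ones and the encoding of subnets as operation sequences are illustrative assumptions rather than the exact encoding used in the experiments.

```python
# A minimal sketch of the Group distance of Eq. (4) and of the margin subnet sampler that
# keeps random pairs whose distance exceeds the threshold mu. PARAMETRIC is an assumed
# grouping of candidate operations into those with and without trainable parameters.
import random

PARAMETRIC = {"conv_3x3", "conv_1x1"}

def group_distance(enc_a, enc_b):
    dist = 0.0
    for op_a, op_b in zip(enc_a, enc_b):
        if op_a == op_b:
            continue                                   # per-position Hamming term is 0
        same_group = (op_a in PARAMETRIC) == (op_b in PARAMETRIC)
        dist += 0.5 if same_group else 2.0             # intra-group: 0.5, inter-group: 2
    return dist

def margin_subnet_sampler(search_space, n_pairs, mu):
    pairs = [(random.choice(search_space), random.choice(search_space)) for _ in range(n_pairs)]
    return [(a, b) for a, b in pairs if group_distance(a, b) > mu]
```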
## 3 Experiments
The experiments in Sec.3.1 and Sec.3.2 verify the effectiveness of the RD-NAS on the ResNet-like search space and NAS-Bench-201. Then, we conduct the ablation study for RD-NAS.
### Searching on ResNet-like Search Space
**Setup.** Presented at the CVPR NAS Workshop, the ResNet-like search space is based on ResNet48 on ImageNet [32], whose depth and expansion ratio are searchable. Specifically, the search space consists of four stages with different depths {5,5,8,5}, while the candidate expansion ratio ranges from 0.7 to 1.0 at 0.05 intervals, and the channel number should be divisible by 8. The search space size is \(5.06\times 10^{19}\) in total. Our proposed RD-NAS won third place in CVPR 2022 NAS Track 1. We conduct a comprehensive comparison of zero-shot [3] and one-shot methods [1, 26, 25], and the Spearman's ranking correlation is adopted to measure the
| Type | Method | Spearman \(r\) (%) |
| :-- | :-- | :-- |
| Zero-cost Proxies | Params [2] | 67.23 |
| Zero-cost Proxies | FLOPs [2] | 78.43 |
| Zero-cost Proxies | ZenNAS [3] | 81.27 |
| One-shot NAS | SPOS [1] | 72.49 |
| One-shot NAS | Sandwich [25] | 74.86 |
| One-shot NAS | RD-NAS w/o MSS | 81.14 |
| One-shot NAS | **RD-NAS w/ MSS** | **83.20** |

Table 1: The comparison of ranking correlation w.r.t. different methods on the ResNet-like search space. “MSS” denotes the margin subnet sampler.
| Method | \(\tau\) (%) | \(r\) (%) | \(\rho\) (%) | \(\delta\) (%) |
| :-- | --: | --: | --: | --: |
| FLOPs [2] | 52.92 | 47.12 | 72.44 | 78.49 |
| Params [2] | 56.64 | 49.14 | 74.70 | 77.49 |
| Snip [2] | 51.16 | 28.20 | 69.12 | 80.49 |
| Synflow [2] | 58.83 | 44.90 | 77.64 | 84.99 |
| ZenNAS [3] | 48.80 | 55.72 | 67.83 | 44.99 |
| NASWOT [4] | 55.13 | 86.71 | 74.93 | 70.99 |
| Random | -26.32 | -21.43 | -22.48 | 1.99 |
| SPOS [1] | 77.43 | 89.55 | 92.73 | 100.0 |
| FairNAS [26] | 70.62 | 82.81 | 88.74 | 100.0 |
| **RD-NAS (Ours)** | **87.08** | **95.00** | **97.34** | **100.0** |

Table 2: Ranking correlation on NAS-Bench-201 CIFAR-10. \(\tau\), \(r\), \(\rho\), and \(\delta\) refer to Kendall’s Tau, Pearson, Spearman and CPR, respectively. “Random” refers to the rank correlation of a randomly initialized model.
ranking ability. We first pre-train the supernet for 90 epochs on ImageNet and then train the supernet with a learning rate of 0.001 and a batch size of 256 for 70 epochs. The SGD optimizer is adopted with a momentum of 0.9.
**Results.** The results of Spearman's ranking correlation are shown in Tab.1. RD-NAS achieves significantly higher ranking correlations than both the zero-shot and one-shot NAS methods, especially when using the margin subnet sampler. The ranking correlation of RD-NAS surpasses the baseline method [1] by 10.71% and also surpasses the zero-cost proxy "FLOPs", demonstrating that using zero-cost proxies as teachers for distillation can effectively mitigate the ranking disorder problem.
### Searching on NAS-Bench-201
**Setup.** NAS-Bench-201 [27] provides the performance of 15,625 subnets on CIFAR-10, CIFAR-100, and ImageNet-16-120. In our experiments, the training settings are consistent with NAS-Bench-201. Following SPOS [1], we formulate the search space of NAS-Bench-201 [27] as a single-path supernet. The distance threshold \(\mu\) is set to 6.7.
**Evaluation Criteria.** The correlation criteria used in NAS-Bench-201 are Pearson \(r\), Kendall's Tau \(\tau\)[33], Spearman \(\rho\), and CPR \(\delta\). Pearson measures the linear relationship between the variables, while Kendall's Tau, Spearman, and CPR measure the monotonic relationship. The ranking correlations are computed using 200 randomly selected subnets.
**Results.** Our method obtains 2.04%, 3.12%, and 4.53% absolute accuracy improvements on the validation sets of CIFAR-10, CIFAR-100, and ImageNet-16-120 in Tab.3, compared with SPOS, which further certifies that RD-NAS is able to boost the ranking ability of one-shot NAS. Even compared with recent NAS methods (e.g., PC-Darts [30] and GDAS [28]), our method achieves comparable performance on NAS-Bench-201. Especially for ImageNet-16-120, our RD-NAS achieves state-of-the-art performance that surpasses all other methods. The results in Tab.2 reveal that our RD-NAS achieves better ranking correlation than both one-shot and zero-shot NAS.
### Ablation Study
The ablation study of the distance measurement and training strategy on NAS-Bench-201 CIFAR-10 is shown in Tab.4. The one-shot training with uniform sampling strategy [1] is employed as the "Random" baseline. The results of Ranking Distillation exhibit the effectiveness of the margin ranking loss. Moreover, using the Hamming or Group distance to sample subnets can further improve the rank consistency. The combination of the margin subnet sampler and Ranking Distillation improves Kendall's Tau from 77% to 87%.
## 4 Conclusion
In this paper, we present a novel ranking knowledge distillation for one-shot NAS, which we call RD-NAS, to mitigate the rank disorder problem. Specifically, our RD-NAS employs zero-cost proxies as teachers and transfers the ranking knowledge to one-shot supernet by margin ranking loss. To reduce the uncertainty of zero-cost proxies, we introduce the margin subnet sampler to increase the reliability of ranking knowledge. We conduct detailed experiments on ResNet-like search space and NAS-Bench-201 to demonstrate the effectiveness of RD-NAS. It is hoped that our work will inspire further research to exploit the strengths of zero-shot NAS.
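As a rough illustration of the distillation idea summarized above, the following PyTorch-style sketch applies a pairwise margin ranking loss in which a zero-cost proxy decides, for each pair of sampled subnets, which one should rank higher; the function name, margin value, and pairing scheme are illustrative assumptions rather than our exact implementation.

```python
# Illustrative sketch of ranking distillation with a margin ranking loss.
# For every pair of sampled subnets, the zero-cost proxy provides the teacher
# ordering, and the supernet's scores are pushed to agree with it by a margin.
import torch
import torch.nn.functional as F

def ranking_distillation_loss(supernet_scores, proxy_scores, margin=0.1):
    """supernet_scores, proxy_scores: 1-D tensors with one entry per subnet."""
    idx_i, idx_j = torch.combinations(torch.arange(len(proxy_scores)), r=2).unbind(dim=1)
    # Teacher signal: +1 if the proxy ranks subnet i above subnet j, else -1.
    target = torch.sign(proxy_scores[idx_i] - proxy_scores[idx_j])
    return F.margin_ranking_loss(supernet_scores[idx_i], supernet_scores[idx_j],
                                 target, margin=margin)
```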
\begin{table}
\begin{tabular}{c c c c|c} \hline Random & Margin & Hamming & RD & Kendall’s Tau(\%) \\ \hline ✓ & - & - & - & 77.43 \\ - & - & - & ✓ & 79.37 \\ - & - & ✓ & ✓ & 84.18 \\ - & ✓ & - & ✓ & **87.08** \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study of RD-NAS on NAS-Bench-201 CIFAR-10. "Random" denotes the baseline SPOS [1]; "Margin" and "Hamming" denote the margin subnet sampler with Group and Hamming distance, respectively; "RD" denotes Ranking Distillation.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{CIFAR-10 Acc. (\%)} & \multicolumn{2}{c}{CIFAR-100 Acc. (\%)} & \multicolumn{2}{c}{ImageNet-16-120 Acc. (\%)} \\ \cline{2-7} Method & Validation & Test & Validation & Test & Validation & Test \\ \hline GDAS [28] & 90.00\(\pm\)0.21 & 93.51\(\pm\)0.13 & **71.14\(\pm\)0.27** & 70.61\(\pm\)0.26 & 41.70\(\pm\)1.26 & 41.84\(\pm\)0.90 \\ DSWAS [29] & 89.66\(\pm\)0.29 & 93.08\(\pm\)0.13 & 30.87\(\pm\)16.40 & 31.01\(\pm\)16.38 & 40.61\(\pm\)0.09 & 41.07\(\pm\)0.09 \\ PC-Darts [30] & 89.96\(\pm\)0.15 & **93.41\(\pm\)0.30** & 67.12\(\pm\)0.39 & 67.48\(\pm\)0.89 & 40.83\(\pm\)0.08 & 41.31\(\pm\)0.22 \\ \hline NASWOT [4] & 89.14\(\pm\)1.14 & 92.44\(\pm\)1.13 & 68.50\(\pm\)2.03 & 68.62\(\pm\)2.04 & 41.09\(\pm\)3.97 & 41.31\(\pm\)4.11 \\ EPENAS [31] & 89.90\(\pm\)0.21 & 92.63\(\pm\)0.32 & 69.78\(\pm\)2.44 & 70.10\(\pm\)1.71 & 41.73\(\pm\)3.60 & 41.92\(\pm\)4.25 \\ \hline Random & 83.20\(\pm\)13.28 & 86.61\(\pm\)13.46 & 60.70\(\pm\)12.55 & 60.83\(\pm\)12.58 & 33.34\(\pm\)9.39 & 33.13\(\pm\)9.66 \\ SPOS [1] & 88.40\(\pm\)1.07 & 92.24\(\pm\)1.16 & 67.84\(\pm\)2.00 & 68.07\(\pm\)2.25 & 39.28\(\pm\)3.00 & 40.28\(\pm\)3.00 \\
**RD-NAS(Ours)** & **90.44\(\pm\)0.27** & 93.36\(\pm\)0.04 & 70.96\(\pm\)2.12 & **71.83\(\pm\)1.33** & **43.81\(\pm\)0.09** & **44.46\(\pm\)1.58** \\ \hline Optimal & 91.61 & 94.37 & 73.49 & 73.51 & 46.77 & 47.31 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Searching results on NAS-Bench-201 dataset [27]. Our method obtains 1.12%, 3.76%, and 4.18% absolute accuracy improvement compared with SPOS on the test sets of CIFAR-10, CIFAR-100, and ImageNet-16-120, respectively. Note that results for other weight-sharing methods are reported by NAS-Bench-201 benchmark [27], which only uses the labels of the dataset as the true performance of the searched architecture for a fair comparison. |
2306.13677 | Dynamic Net Metering for Energy Communities | We propose a social welfare maximizing market mechanism for an energy
community that aggregates individual and community-shared energy resources
under a general net energy metering (NEM) policy. Referred to as Dynamic NEM
(D-NEM), the proposed mechanism dynamically sets the community NEM prices based
on aggregated community resources, including flexible consumption, storage, and
renewable generation. D-NEM guarantees a higher benefit to each community
member than possible outside the community, and no sub-communities would be
better off departing from its parent community. D-NEM aligns each member's
incentive with that of the community such that each member maximizing
individual surplus under D-NEM results in maximum community social welfare.
Empirical studies compare the proposed mechanism with existing benchmarks,
demonstrating its welfare benefits, operational characteristics, and
responsiveness to NEM rates. | Ahmed S. Alahmed, Lang Tong | 2023-06-21T06:03:07Z | http://arxiv.org/abs/2306.13677v3 | # Dynamic Net Metering for Energy Communities
###### Abstract
We propose a social welfare maximizing market mechanism for an energy community that aggregates individual and community-shared energy resources under a general net energy metering (NEM) policy. Referred to as Dynamic NEM (D-NEM), the proposed mechanism dynamically sets the community NEM price based on aggregated community resources. D-NEM guarantees a higher benefit to each community member than possible outside the community, and no sub-communities would be better off departing from its parent community. D-NEM aligns each member's incentive with that of the community such that each member maximizing individual surplus under D-NEM results in maximum community social welfare. Empirical studies compare the proposed mechanism with existing benchmarks, demonstrating its member and community-level welfare benefits.
distributed energy resources aggregation, energy community, net metering, pricing mechanism, transactive energy system, energy storage sharing.
## I Introduction
We consider the problem of pricing of electricity within a self-organized _energy community_ (a.k.a. energy coalition) in a distribution system operated by a regulated distribution system operator1 (DSO). Fig.1 illustrates a generic energy community consisting of consumers and prosumers with behind-the-meter (BTM) and possibly shared distributed energy resources (DERs) such as solar, wind, and battery storage [2, 3]. Examples of such an energy community include housing cooperatives, commercial businesses, and education/medical campuses, where the pricing policy of electricity within the community is set by its members through an energy community operator. In this work, we assume that the aggregated energy consumption and production are subject to the regulated net energy metering (NEM) tariff imposed by a DSO. In particular, the DSO measures the community's net consumption and charges the community at a _buy (import) rate_ if the community is net-importing and a _sell (export) rate_ if the community is net-exporting.
Footnote 1: We use the terms distribution utility and DSO interchangeably.
Several public utility commissions have initiated energy-community-enabling programs, NEM aggregation (NEMA) and virtual NEM (VNEM) programs2, for university campuses, residential complexes, and medical cities [4]. In Europe, the majority of EU countries passed rules allowing standalone members and apartment complexes to form energy communities and developed jurisdictions to regulate and govern the operation of such coalitions [5].
Footnote 2: See for example, Pacific Gas and Electric company (PG&E), California, and Baltimore Gas and Electric Company (BGE), Maryland.
We focus in this work on the design of a community market mechanism that (i) aligns individual incentives with the community welfare-maximizing objective and (ii) provides higher benefit for every subset of community members than it would have achieved if they were outside the community, thus ensuring stability of the community.
### _Related Work_
There is a rich literature on energy communities covering cost-sharing mechanisms [6, 7, 8], optimal energy management [9, 10, 11], and coordination frameworks [12, 13]. Most relevant to this work is the community pricing and allocation rules [6, 8], and welfare maximizing resource allocation [10, 11]. A holistic review on energy community market mechanisms can be found in [14].
Three energy community models have been widely discussed, each offering a different market hierarchy and flexibility to its members. The first is the decentralized peer-to-peer (P2P) model [15, 16]. Through _bilateral contracts_, the P2P market structure gives full flexibility and privacy to its members and the benefits of robustness in decentralization. The P2P market structure, however, is often challenged by policy and physical restrictions of a practical power system, accurate auditing of power delivery, and convergence to social optimality.
The second is the centralized model involving a community operator scheduling all resources for the benefit of the
Fig. 1: Energy community framework. The decentralized DER include the members' flexible consumption, renewable generation, and net consumption, denoted by \(d_{i}\in\mathbb{R}_{+}\), \(g_{i}\in\mathbb{R}_{+}\), and \(z_{i}\in\mathbb{R}\), respectively. The centralized resources include solar PV \(\tilde{g}\in\mathbb{R}_{+}\) and a battery energy storage system (BESS) \(b_{N}\in\mathbb{R}\).
community [7, 9, 17]. While this model has the potential to achieve the highest community welfare, it often comes with prohibitive computation costs and concerns about members' privacy. A significant issue of centralized mechanisms is whether the global objective of maximizing community welfare aligns with individual profit-maximizing objectives.
The third model, to which the work presented here belongs, is the decentralized scheduling of community resources through a pricing mechanism [18, 10, 11, 6, 19, 20, 10]. In [10], a bi-level optimization of an apartment building energy community with DER owners was formulated followed by pricing and energy allocation algorithms. It is unclear whether such a model will offer greater benefits to the community members than the NEM tariff, without which, either no one joins the community or members may abandon the community. In [11], a Nash bargaining benefit-sharing model was proposed to ensure that the community members will not abandon the community. However, the computation complexity of cost allocation grows exponentially with the size of the community. Also, as in [10], the competitiveness to lucrative retail programs such as NEM was not addressed. A low computation allocation rule was proposed in [20] for sharing the realized cost of an energy community with storage and inflexible consumption under the time-of-use (ToU) rates. However, the bi-directionality of power flow, which is the essence of NEM policies, was not considered, as customers were assumed to be always net-importing. The authors in [18] analyzed a stochastic energy community model with cost-minimizing members. An algorithm was proposed for better estimation of the stochastic game. The authors, however, assumed equal energy buy and sell rates. The work in [19] proposed a decentralized approach to energy management via flexible loads in energy communities. It was shown that community members solve their cost minimization problem using a genetic algorithm, after which the community operator aggregates the net consumptions and announces the updated community prices and the process continues until the coordination algorithm converges. Lastly, cost allocation and optimal operation and sizing of shared energy community storage were studied in [6]. The work, however, considers non-flexible demands and the _ex-post_ cost allocation requires an algorithm to search for the nucleolus.
The work of Chakraborty et al. [8] stands out as the first mechanism design under the cost-causation principle, which offers every community member a lower payment than it would incur outside the community when the customer faces NEM. Unlike nucleolus, Shapley value, and Nash bargaining solutions [9, 18, 21], among others, the computation complexity of the payment rule in [8] does not increase with the cardinality of the coalition, and the rule can be easily understood by the members, which is an important principle when charging end-users [22]. The D-NEM mechanism proposed in this paper augments the favorable features in [8] by including individual _surplus_ and community social welfare as part of the design objectives of the community pricing in a decentralized optimization framework. When prosumer _surplus_ is considered, standalone rational utility customers under NEM may achieve higher surpluses than the customers under [8], which violates individual rationality and hence the cost-causation principle.
To our best knowledge, D-NEM proposed here is the first energy community market mechanism that achieves efficiency and individual/group rationality under the cost-causation principle.
### _Summary of Results and Contributions_
We propose _D-NEM_--an energy community market mechanism that sets the NEM price based on the aggregated DER within the community. D-NEM employs the utility's NEM import and export rates and applies them dynamically to individual members based on the community aggregated renewables. Unlike utility's NEM tariff whereby each prosumer faces differentiated import and export rates based on its own net consumption state, a prosumer under D-NEM faces only one rate depending solely on the community's aggregate net consumption state, an idea that was first articulated in [8].
The main differences between D-NEM and that in [8] are threefold. First, D-NEM prices are set _ex-ante_ (rather than imposed _ex-post_ in [8]) to induce community members' response that achieves community's maximum welfare. Second, D-NEM induces a community-level net-zero consumption zone whereby the community's aggregate flexible resources balance the shared renewables. Third, we incorporate a centralized BESS, and show how D-NEM market mechanism generalizes.
We establish the following properties of D-NEM:
1. Individual surplus maximization leads to maximum community social welfare (Theorem 1).
2. Individual surplus under D-NEM is higher than the maximum surplus under utility's NEM (Theorem 2).
3. D-NEM satisfies the _cost-causation principle_ (Theorem 3).
4. The energy community is stable under D-NEM, i.e., no subset of users would be better off if they jointly withdrew from the community (Theorem 4).
A key result in our work is that the maximum community welfare is decentrally achieved through a threshold-based NEM policy that is a function of the aggregated renewables. Our empirical results use real residential data to construct a hypothetical energy community, under which the welfare, price, and community operation under D-NEM are compared to communities under [8] and to standalone _optimal_ utility customers.
### _Notations_
The notational use here is standard. When necessary, we boldface letters to denote column vectors as in
\((x_{1},\cdots,x_{n})\) and \(\mathbf{x}^{\top}\) its transpose. For vectors \(\mathbf{x}\), \(\mathbf{y}\), \(\mathbf{x}\preceq\mathbf{y}\) is the element-wise inequality \(x_{i}\leq y_{i}\) for all \(i\), and \([\mathbf{x}]^{+},[\mathbf{x}]^{-}\) are the element-wise positive and negative parts of vector \(\mathbf{x}\), _i.e.,_\([x_{i}]^{+}=\max\{0,x_{i}\}\), \([x_{i}]^{-}=-\min\{0,x_{i}\}\) for all \(i\), and \(\mathbf{x}=[\mathbf{x}]^{+}-[\mathbf{x}]^{-}\).
## II Energy Community Model
We consider a coalition \(\mathcal{N}\) of \(N\) rational households3, each indexed by \(i\in\mathcal{N}:=\{1,\ldots,N\}\), who pool and aggregate their resources behind a regulated distribution utility revenue meter (Fig.1) that implements an NEM policy design [23]. The community resources include heterogeneous flexible demands, solar PVs, and a centrally scheduled BESS4.
Footnote 3: Although we use the word _household_, the analysis is not restricted to residential customers, i.e., customers can also be commercial, industrial, etc.
Footnote 4: In Appendix C, we show how our analysis extends to communities with central renewables and even distributed BESS under mild assumptions.
The community members are subject to the market mechanism of a neutral community operator, who bilaterally transacts energy and payments with the utility on behalf of the whole community. For billing and pricing purposes, every member's net consumption is assumed to be sub-metered, i.e., the operator attaches a net consumption measurement meter to every household. Network constraints and losses are ignored since the aggregate resources of such communities constitute only a small fraction of the total load/generation at a given distribution network node and the households are spatially adjacent [8, 9, 18, 24]. With little loss of generality, we assume that the household's decision has the same timescale as that of the NEM billing period.
To simplify the exposition and facilitate delivering the intuition of D-NEM, we first consider a community with flexible demands and solar PV. Then, in Sec.IV, we add a BESS and show how D-NEM can be generalized.
### _Household and Community Resources_
Each household \(i\in\mathcal{N}\) is assumed to have \(K\) controllable devices operated by a home energy management system (HEMS), and indexed by \(k\in\mathcal{K}=\{1,\ldots,K\}\). The \(K\) devices' _energy consumption bundle_ is denoted by, \(\forall i\in\mathcal{N}\),
\[\mathbf{d}_{i}=(d_{i1},\cdots,d_{iK})\in\mathcal{D}_{i}:=\{\mathbf{d}_{i}:\underline{ \mathbf{d}}_{i}\preceq\mathbf{d}_{i}\preceq\overline{\mathbf{d}}_{i}\}\subseteq\mathbb{R} _{+}^{K}, \tag{1}\]
where \(\underline{\mathbf{d}}_{i},\overline{\mathbf{d}}_{i}\) are the consumption bundle's lower and upper limits of customer \(i\), respectively. The _aggregate consumption_ of the community is denoted by \(d_{\mathcal{N}}:=\sum_{i\in\mathcal{N}}\mathbf{1}^{\top}\mathbf{d}_{i}\).
Households may have BTM _renewable generation_, whose output is denoted by \(g_{i}\in\mathbb{R}_{+}\) for every \(i\in\mathcal{N}\). The community's _aggregate generation_ is \(g_{\mathcal{N}}:=\sum_{i\in\mathcal{N}}g_{i}\).
The _net consumption_ of every \(i\in\mathcal{N}\) household \(z_{i}\in\mathbb{R}\) and the community _aggregate net consumption_\(z_{\mathcal{N}}\in\mathbb{R}\) are defined as
\[z_{i}:=\mathbf{1}^{\top}\mathbf{d}_{i}-g_{i},\quad z_{\mathcal{N}}:=\sum_{i\in \mathcal{N}}z_{i}=d_{\mathcal{N}}-g_{\mathcal{N}}, \tag{2}\]
where \(z_{i}\geq 0\) (\(z_{\mathcal{N}}\geq 0\)) and \(z_{i}<0\) (\(z_{\mathcal{N}}<0\)) represent a _net-consuming_ and _net-producing_ household (community), respectively.
### _Household and Community Payments_
At the revenue meter (Fig.1), the energy coalition \(\mathcal{N}\), is billed under the utility's NEM X regime [23] based on the measured \(z_{\mathcal{N}}\) and the NEM X parameter \(\pi=(\pi^{+},\pi^{-})\) as
\[P_{\mathcal{N}}^{\pi}(z_{\mathcal{N}})=\pi^{+}[z_{\mathcal{N}}]^{+}-\pi^{-}[z _{\mathcal{N}}]^{-}, \tag{3}\]
where \(\pi^{+},\pi^{-}\in\mathbb{R}_{+}\) are the _buy (retail) rate_ and _sell (export) rate_, respectively5. We assume \(\pi^{-}\leq\pi^{+}\). In practice, (3) is referred to as _NEM 1.0_ if \(\pi^{-}=\pi^{+}\), and _NEM 2.0_ and beyond if \(\pi^{-}<\pi^{+}\)[25]. The rates \(\pi^{+}\) and \(\pi^{-}\) can be temporally fixed (flat rates) or varying (ToU rates).
Footnote 5: We ignore the fixed charge in the NEM X payment, as it does not play a role in our welfare-maximization mechanism through pricing.
The payment of the household outside the community, i.e., when the household autonomously faces the utility6 is
Footnote 6: We refer to this payment as the _benchmark payment_.
\[P^{\pi}(z_{i})=\pi^{+}[z_{i}]^{+}-\pi^{-}[z_{i}]^{-}. \tag{4}\]
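For intuition, the following minimal sketch implements the payment functions (3)-(4); the rates are illustrative placeholders, and the fixed charge is ignored as in the text.

```python
# Minimal sketch of the NEM X payment in (3)-(4): the buy rate applies to net
# imports and the sell (export) rate to net exports. Rates are placeholders.
def nem_payment(z, pi_plus=0.40, pi_minus=0.10):
    """Payment for net consumption z (kWh); a negative value is a compensation."""
    return pi_plus * max(z, 0.0) - pi_minus * max(-z, 0.0)

# The community is billed on its aggregate net consumption at the revenue meter,
# while a standalone household is billed on its own net consumption.
z_members = [1.5, -0.8, 0.2]
community_bill = nem_payment(sum(z_members))
standalone_bills = sum(nem_payment(zi) for zi in z_members)
```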
Given that the payments in (3)-(4) are determined by the utility, we ask how the community operator should design a pricing rule \(\chi\) and a payment rule \(P^{\chi}(\cdot)\) that are efficient, equitable, and incentivize customers to join the community.
Given (3) and \(P^{\chi}(\cdot)\), we define the community operator profit \(\Psi_{\mathcal{N}}^{\pi,\chi}\) as the difference between the payments it collects from every \(i\in\mathcal{N}\) member and its payment to the utility as
\[\Psi_{\mathcal{N}}^{\pi,\chi}(\mathbf{z},g_{\mathcal{N}}):=\sum_{i\in\mathcal{N}}P ^{\chi}(z_{i},g_{\mathcal{N}})-P_{\mathcal{N}}^{\pi}(z_{\mathcal{N}}), \tag{5}\]
where \(\mathbf{z}:=(z_{1},\ldots,z_{N})\in\mathbb{R}^{N}\). Note that the operator's profit depends on both the NEM X parameter \(\pi\) and its own designed parameter \(\chi\). To conform with the cost-causation principle for designing equitable allocations [8], we must ensure that the operator is profit-neutral, i.e., \(\Psi_{\mathcal{N}}^{\pi,\chi}(\mathbf{z},g_{\mathcal{N}})=0\).
### _Household and Community Surplus_
For all \(i\in\mathcal{N}\), the surplus of a standalone household (_benchmark surplus_) and the household surplus under the community are defined as
\[S_{i}^{\pi}\left(z_{i}\right) :=U_{i}\left(\mathbf{d}_{i}\right)-P^{\pi}\left(z_{i}\right) \tag{6}\] \[S_{i}^{\chi}(z_{i},g_{\mathcal{N}}) :=U_{i}(\mathbf{d}_{i})-P^{\chi}(z_{i},g_{\mathcal{N}}), \tag{7}\]
respectively, where \(U_{i}(\mathbf{d}_{i})\) is the utility function of consuming the bundle \(\mathbf{d}_{i}\), which may have different functional forms for each \(i\in\mathcal{N}\) based on their flexibility and desired comfort levels. For all \(i\in\mathcal{N},U_{i}(\mathbf{d}_{i})\) is assumed to be strictly concave, non-decreasing, continuously differentiable, and additive, therefore, \(U_{i}\left(\mathbf{d}_{i}\right):=\sum_{k\in\mathcal{K}}U_{ik}\left(d_{ik}\right), \forall i\in\mathcal{N}\). We denote the marginal utility function by \(\mathbf{L}_{i}:=\nabla U_{i}\), and its inverse, for every \(i\in\mathcal{N}\) and \(k\in\mathcal{K}\), by \(f_{ik}:=L_{ik}^{-1}\).
**Definition 1** (Community welfare).: _The welfare of the community \(\mathcal{N}\) is defined as the aggregated surplus of its members_
\[W^{\chi}_{\mathcal{N}}(\mathbf{z},g_{\mathcal{N}}):=\sum_{i\in\mathcal{N}}S^{\chi}_{ i}(z_{i},g_{\mathcal{N}}). \tag{8}\]
Given the profit-neutrality condition \(\Psi^{\pi,\chi}(\cdot)=0\), we have \(W^{\chi}_{\mathcal{N}}(z_{\mathcal{N}})=\sum_{i\in\mathcal{N}}U_{i}(\mathbf{d}_{i} )-P^{\pi}_{\mathcal{N}}(z_{\mathcal{N}})\).
### _Community Market Mechanism_
We generalize the axiomatic community market framework to ensure equity and efficiency [8]. In particular, we are interested in community pricing and payment rules that satisfy the axioms of 1) _individual rationality_, 2) _profit-neutrality_, 3) _equity_, 4) _monotonicity_, 5) _cost-causation penalty_ and 6) _cost-mitigation reward_. The axioms' formal statements are relegated to Appendix A; but we offer here a brief non-mathematical description. _Individual rationality_ is achieved when every customer gains a higher surplus within the community than under the utility's NEM. _Profit-neutrality_ ensures that the payment (compensation) of the community operator is entirely redistributed among its members, i.e. _budget-balance_. _Equity_ is attained when the payments of two community members with the same net consumption are equivalent. The _monotonicity_ axiom ensures that having higher net consumption (net production) results in higher payment (compensation). Lastly, _cost-causation penalty_ and _cost-mitigation reward_ are met if members pay for causing costs and get rewarded for reducing costs to the community, respectively. The six axioms are used next to define a modified cost causation principle [8].
**Definition 2** (Cost-causation principle).: _The community market mechanism meets the cost causation principle if it satisfies axioms 1-6._
Note that the slight difference between the cost-causation principle defined above and that in [8] is the use of _surplus_ rather than _payment_ in _individual rationality_. Satisfying the cost-causation principle defined above implies satisfying that defined in [8].
## III Dynamic NEM for decentralized welfare maximization
The community operator's primary task is to develop a market mechanism that induces its members to schedule their resources in a way that achieves overall welfare optimality, while conforming with the cost-causation principle. If the operator were to centrally schedule its community's resources, the operator would have solved the following welfare maximization program:
\[\begin{array}{ll}\mathcal{P}^{\chi}_{\mathcal{N}}:&\underset{(\mathbf{d}_{i}, \ldots,\mathbf{d}_{N}),\mathbf{z}}{\text{Maximize}}&W^{\chi}_{\mathcal{N}}:=\sum_{i \in\mathcal{N}}S^{\chi}_{i}(z_{i},g_{\mathcal{N}})\\ &\text{subject to}&\sum_{i\in\mathcal{N}}P^{\chi}(z_{i},g_{\mathcal{N}})=P^{ \pi}_{\mathcal{N}}(z_{\mathcal{N}})\\ &z_{\mathcal{N}}=\mathbf{1}^{\top}\mathbf{z}=\sum_{i\in\mathcal{N}}(\mathbf{1}^{\top}\mathbf{ d}_{i}-g_{i})\\ &\underline{\mathbf{d}_{i}}\preceq\mathbf{d}_{i}\preceq\overline{\mathbf{d}}_{i},\ \forall i\in\mathcal{N},\end{array} \tag{9}\]
where the first constraint is the _profit-neutrality_ condition.
Next, we propose D-NEM--a cost-causation-based market mechanism that induces community members to achieve the maximum welfare in (9).
### _Dynamic NEM_
Given prosumers' bids (inverse marginal utilities \(f_{ik},\forall i\in\mathcal{N},k\in\mathcal{K}\)), which reflect their willingness to consume, and the community aggregate generation \(g_{\mathcal{N}}\), the operator envisages the following welfare-maximizing market mechanism.
**Dynamic NEM.**_Under D-NEM, the energy community's pricing policy is threshold-based, given by the 3-tuple parameter \(\chi=(\pi^{+},\pi^{z}(g_{\mathcal{N}}),\pi^{-})\) with the order \(\pi^{+}\geq\pi^{z}(g_{\mathcal{N}})\geq\pi^{-}\), as_
\[\Gamma^{\chi}(g_{\mathcal{N}})=\begin{cases}\pi^{+},&g_{\mathcal{N}}<d^{+}_{ \mathcal{N}}\\ \pi^{z}(g_{\mathcal{N}}),&g_{\mathcal{N}}\in[d^{+}_{\mathcal{N}},d^{-}_{ \mathcal{N}}]\\ \pi^{-},&g_{\mathcal{N}}>d^{-}_{\mathcal{N}},\end{cases} \tag{10}\]
_where the thresholds \(d^{+}_{\mathcal{N}},d^{-}_{\mathcal{N}}\) are given by_
\[d^{+}_{\mathcal{N}}:=\sum_{i\in\mathcal{N}}\sum_{k\in\mathcal{K}}\max\{\underline{d}_{ik},\min\{f_{ik}(\pi^{+}),\bar{d}_{ik}\}\} \tag{11}\] \[d^{-}_{\mathcal{N}}:=\sum_{i\in\mathcal{N}}\sum_{k\in\mathcal{K}}\max\{\underline{d}_{ik},\min\{f_{ik}(\pi^{-}),\bar{d}_{ik}\}\}\geq d^{+}_{\mathcal{N}}, \tag{12}\]
_and the price \(\pi^{z}(g_{\mathcal{N}}):=\mu^{*}(g_{\mathcal{N}})\) is the solution of_
\[\sum_{i\in\mathcal{N}}\sum_{k\in\mathcal{K}}\max\{\underline{d}_{ik},\min\{f_{ik}(\mu),\bar{d}_{ik}\}\}=g_{\mathcal{N}}. \tag{13}\]
_The payment rule is given, for every \(i\in\mathcal{N}\), by_
\[P^{\chi}(z_{i},g_{\mathcal{N}})=\Gamma^{\chi}(g_{\mathcal{N}})z_{i}. \tag{14}\]
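To illustrate how the pricing rule (10)-(13) can be evaluated numerically, the sketch below assumes quadratic member utilities (so each inverse marginal utility is linear and clipped to the device limits), builds the aggregate willingness-to-consume curve, and solves (13) by bisection in the net-zero zone; all parameter values are illustrative assumptions.

```python
# Illustrative sketch of the D-NEM price (10)-(13). With quadratic utilities,
# the inverse marginal utility is f_ik(p) = (alpha_ik - p) / beta_ik, clipped
# to the consumption limits. All parameter values below are placeholders.
import numpy as np
from scipy.optimize import brentq

alpha = np.array([[0.50, 0.40], [0.45, 0.35]])   # members x devices
beta = np.array([[0.05, 0.08], [0.06, 0.07]])
d_lo, d_hi = 0.0, 6.0                            # common consumption limits
pi_plus, pi_minus = 0.40, 0.10                   # utility NEM rates

def aggregate_demand(price):
    """Aggregate optimal consumption at a uniform price (left side of (13))."""
    return float(np.clip((alpha - price) / beta, d_lo, d_hi).sum())

def dnem_price(g_N):
    d_plus = aggregate_demand(pi_plus)     # threshold d^+_N in (11)
    d_minus = aggregate_demand(pi_minus)   # threshold d^-_N in (12)
    if g_N < d_plus:
        return pi_plus                     # community net-consuming
    if g_N > d_minus:
        return pi_minus                    # community net-producing
    # Net-zero zone: find mu with aggregate_demand(mu) = g_N, mu in [pi-, pi+].
    return brentq(lambda mu: aggregate_demand(mu) - g_N, pi_minus, pi_plus)

print(dnem_price(2.0), dnem_price(5.0), dnem_price(25.0))
```

A member's best response to the announced price is then the same clipping operation used inside `aggregate_demand`, i.e., \(d_{ik}^{*}=\max\{\underline{d}_{ik},\min\{f_{ik}(\Gamma^{\chi}),\bar{d}_{ik}\}\}\) (cf. Sec. III-B).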
#### Iii-A1 Structural Properties of D-NEM
The dynamic pricing policy has an appealing threshold-based resource-aware structure, that announces community prices based on the aggregate renewable generation level compared to two renewable-generation-independent thresholds \(d^{+}_{\mathcal{N}}\) and \(d^{-}_{\mathcal{N}}\). The thresholds arise from the community's optimal aggregate consumption \(d^{*}_{\mathcal{N}}\) that achieves the community maximum welfare, derived in Theorem 1 as
\[d^{*}_{\mathcal{N}}(g_{\mathcal{N}})=\max\{d^{+}_{\mathcal{N}},\min\{g_{ \mathcal{N}},d^{-}_{\mathcal{N}}\}\}, \tag{15}\]
which results in the optimal aggregate net-consumption \(z^{*}_{\mathcal{N}}(g_{\mathcal{N}}):=d^{*}_{\mathcal{N}}(g_{\mathcal{N}})-g_{ \mathcal{N}}\). As shown in Table I, the thresholds \(d^{+}_{\mathcal{N}}\) and \(d^{-}_{\mathcal{N}}\) partition the range of \(g_{\mathcal{N}}\) into three zones based on whether the community is 1) net-consuming (\(g_{\mathcal{N}}<d^{+}_{\mathcal{N}}\)); denoted by (\(+\)), 2) net-producing (\(g_{\mathcal{N}}>d^{-}_{\mathcal{N}}\)); denoted by (\(-\)) or 3) net-zero (\(g_{\mathcal{N}}\in[d^{+}_{\mathcal{N}},d^{-}_{\mathcal{N}}]\)); denoted by (\(z\)), under which the community is off-the-grid.
In the net-consuming and net-producing zones, the optimal community at the revenue meter faces the utility's \(\pi^{+}\) and \(\pi^{-}\), respectively, and directly passes these two prices to its members (see Table I). When \(g_{\mathcal{N}}\in[d^{+}_{\mathcal{N}},d^{-}_{\mathcal{N}}]\), the community is energy-balanced (\(d^{*}_{\mathcal{N}}(g_{\mathcal{N}})=g_{\mathcal{N}}\)), and the community
payment is zero. It turns out that, in the net zero zone, it is optimal to charge members by the Lagrangian multiplier satisfying the Karush-Kuhn-Tucker (KKT) condition of \(d_{\mathcal{N}}^{*}(g_{\mathcal{N}})=g_{\mathcal{N}}\). Therefore, when \(g_{\mathcal{N}}\in[d_{\mathcal{N}}^{+},d_{\mathcal{N}}^{-}]\), the price \(\pi^{z}(g_{\mathcal{N}})\) dynamically varies to keep \(d_{\mathcal{N}}^{*}(g_{\mathcal{N}})=g_{\mathcal{N}}\). In Corollary 1 in Appendix D, we show that the net-zero zone price \(\pi^{z}(g_{\mathcal{N}})\) is bounded by the NEM X buy and sell rates, i.e., \(\pi^{z}(g_{\mathcal{N}})\in[\pi^{-},\pi^{+}]\), and dynamically decreases with increasing \(g_{\mathcal{N}}\) to incentivize demand increases and keep the community off the grid.
It is worth noting that, unlike market mechanisms that clear the community price based on exogenously determining the _roles_ of community prosumers as _buyers_ and _sellers_[16, 26], D-NEM liberates prosumers to choose their _role_.
#### Iii-A2 Intuitions of Dynamic NEM
The resource-aware D-NEM pricing policy is intuitive as it responds to the increasing community generation by dynamically reducing the price from \(\pi^{+}\) to \(\pi^{-}\) through \(\pi^{z}(g_{\mathcal{N}})\). The price \(\Gamma^{\chi}(g_{\mathcal{N}})\) is a monotonically decreasing function of \(g_{\mathcal{N}}\).
Unlike the static retail NEM benchmark _NEM 2.0_ that sets static import/export prices, the payment of community members under D-NEM, in (14), have equivalent import and export rates, i.e., _NEM 1.0_. Net-producing members (\(z_{i}<0\)) under D-NEM are compensated at prices higher than \(\pi^{-}\), if the community is not net-producing, i.e., \(g_{\mathcal{N}}<d_{\mathcal{N}}^{-}\). Also, net-consuming members (\(z_{i}>0\)) face prices lower than \(\pi^{+}\), if the community is not net-consuming, i.e., \(g_{\mathcal{N}}>d_{\mathcal{N}}^{+}\). This also applies to members without any renewables, who are always net-consuming.
In the parlance of peer-to-peer transactions, one could think of a net-producing member \(z_{i}<0\), under D-NEM, as one who offers \(|z_{i}|\) at the price \(\Gamma^{\chi}(g_{\mathcal{N}})\) to any net-consuming member \(z_{j}>0,j\neq i\) who bids \(|z_{j}|\). Therefore, the market under D-NEM is strictly advantageous to the _seller_ if \(\Gamma^{\chi}(g_{\mathcal{N}})>\pi^{-}\) and to the _buyer_ if \(\Gamma^{\chi}(g_{\mathcal{N}})<\pi^{+}\).
### _Community Member Optimization under D-NEM_
Given the announced D-NEM price and payment rules, every \(i\in\mathcal{N}\) member maximizes its surplus by optimally scheduling its consumption as
\[\begin{split}\mathcal{P}_{i}^{\chi}:(\mathbf{d}_{i}^{*},z_{i}^{*})= \operatorname*{argmax}_{\mathbf{d}_{i}\in\mathbb{R}_{+}^{K},z_{i}\in\mathbb{R}}& S_{i}^{\chi}:=U_{i}\left(\mathbf{d}_{i}\right)-P^{\chi}\left(z_{i},g_{\mathcal{N}}\right)\\ \text{subject to}& z_{i}=\mathbf{1}^{\top}\mathbf{d}_{i}-g_{i}\\ &\underline{\mathbf{d}}_{i}\preceq\mathbf{d}_{i}\preceq\overline{\mathbf{d}}_{i}.\end{split} \tag{16}\]
A straightforward, yet crucial, implication of the D-NEM mechanism is that the member's optimal consumption becomes solely a function of the community price
\[d_{ik}^{*}=\max\{\underline{d}_{ik},\min\{f_{ik}(\Gamma^{\chi}),\bar{d}_{ik} \}\},\forall i\in\mathcal{N},\forall k\in\mathcal{K}\]
The solution of \(\mathcal{P}_{i}^{\chi}\) in (16) results in the community member's maximum surplus \(S_{i}^{*,\chi}\left(z_{i}^{*},g_{\mathcal{N}}\right)\), where \(z_{i}^{*}:=\mathbf{1}^{\top}\mathbf{d}_{i}^{*}-g_{i}\). A numerical example is provided in Appendix B to show how the D-NEM price is computed and how the members react to maximize their surpluses.
### _Efficiency and Equity under D-NEM_
The following theorem establishes market efficiency under D-NEM whereby the maximum welfare \(W_{\mathcal{N}}^{*,\pi}\) under centralized operation is wholly allocated to the members, through decentralized resource scheduling.
**Theorem 1** (Efficiency under Dynamic NEM).: _Under D-NEM, the community maximum welfare \(W_{\mathcal{N}}^{*,\chi}\), resulting from \(\mathcal{P}_{\mathcal{N}}^{\chi}\) in (9), equals the sum of the members' surpluses, i.e.,_
\[W_{\mathcal{N}}^{*,\chi}(g_{\mathcal{N}})=\sum_{i\in\mathcal{N}}S_{i}^{*,\chi} (z_{i}^{*},g_{\mathcal{N}}). \tag{17}\]
Proof.: See Appendix D.
Theorem 1 is proved by solving \(\mathcal{P}_{\mathcal{N}}^{\chi}\) in (9) and comparing it to the surplus sum of the members who solve \(\mathcal{P}_{i}^{\chi}\) in (16). A major result of Theorem 1 is that the maximum welfare is decentrally achieved. In addition to inducing community members to achieve welfare optimality, D-NEM grants them surplus levels that are at least equal to their maximum surplus under the utility's NEM X (\(S_{i}^{*,\pi}(g_{i})\)) derived in [23].
**Theorem 2** (Individual rationality under D-NEM).: _Under D-NEM, every \(i\in\mathcal{N}\) member is worse off outside the community, i.e., \(S_{i}^{*,\chi}\left(z_{i}^{*},g_{\mathcal{N}}\right)\geq S_{i}^{*,\pi}\left(g_{ i}\right)\)._
Proof.: See Appendix D.
Theorem 2 is proved by computing the difference between \(S_{i}^{*,\chi}\left(\cdot\right)\), derived in Theorem 1, and \(S_{i}^{*,\pi}\left(\cdot\right)\), derived in Theorem 1 of [23]. Worth noting is that Theorem 2 applies to non-renewable-adopting members, because \(S_{i}^{*,\chi}(\mathbf{d}_{i}^{*},g_{\mathcal{N}})\geq S_{i}^{*,\pi}(0)\). As a result of Theorems 1-2, the welfare of the community under D-NEM is higher than its benchmark of \(|\mathcal{N}|\)_optimal_ standalone customers under the utility's NEM X, i.e., \(W_{\mathcal{N}}^{\chi}\left(g_{\mathcal{N}}\right)\geq\sum_{i\in\mathcal{N}}S_{i }^{*,\pi}\left(g_{i}\right)\).
Finally, we complete the proof of the D-NEM satisfying the cost-causation principle.
**Theorem 3** (Cost-causation conformity of D-NEM).: _D-NEM satisfies the cost-causation principle._
Proof.: See Appendix D.
We employ Definition 2 to prove Theorem 3 by showing that D-NEM satisfies all six axioms. It is worth noting that many allocation rules do not satisfy the cost-causation principle, including the equal surplus division [27], proportional rule [28], and Shapley value [8].
### _Stability under D-NEM_
Group rationality is an extension of individual rationality, under which no subset of users would be better off if they jointly withdrew and formed a new community. Group rationality assures the _stability_ of the community, as no sub-communities can form [5].
**Theorem 4** (Group rationality under D-NEM).: _Group rationality is satisfied under D-NEM, i.e., for \(\mathcal{H}\subseteq\mathcal{T}\subseteq\mathcal{N}\), denote the surplus of a member who joins community \(\mathcal{T}\) or \(\mathcal{H}\) by \(S_{i,\mathcal{T}}^{\chi}\) and \(S_{i,\mathcal{H}}^{\chi}\), respectively; then under D-NEM we have_
\[\sum_{i\in\mathcal{H}}S_{i,\mathcal{T}}^{*,\chi}(z_{i}^{*},g_{\mathcal{T}}) \geq\sum_{i\in\mathcal{H}}S_{i,\mathcal{H}}^{*,\chi}(z_{i}^{*},g_{\mathcal{H}}).\]
_Proof:_ See Appendix D. \(\Box\)
In the case that \(\sum_{i\in\mathcal{H}}S_{i,\mathcal{T}}^{*,\chi}<W_{\mathcal{H}}^{*,\chi}\), community \(\mathcal{H}\) would be better off withdrawing from its parent community \(\mathcal{T}\). In the parlance of coalitional game theory, when group rationality is achieved (Theorem 4), in addition to individual rationality (Theorem 2) and budget balancedness (Theorem 3), D-NEM is a stabilizing allocation.
## IV Dynamic NEM With BESS
In the previous section, we proposed D-NEM for decentralized welfare maximization when the community has distributed renewable PV and flexible demands. Here, we add a centrally scheduled BESS7 co-optimized with flexible demands via D-NEM to maximize the community's welfare. In the sequel, we reformulate the problem to a multi-interval one and introduce the BESS-related parameters and variables. Then, we introduce the generalized D-NEM market mechanism and show that all favorable properties (Thms.1-4) hold.
Footnote 7: With mild assumptions, our results apply to distributed BESS. See Appendix C.
### _Energy Community with BESS_
#### Iv-A1 BESS System Modeling
A BESS system with capacity \(E>0\) is added to the community to provide higher flexibility and more self-consumption of renewables. To account for the BESS and the related dynamics, we consider a multi-time interval formulation under which the community's flexible resources are sequentially scheduled over a finite horizon \(T\) indexed by \(t=0,\ldots,T\). At each stage \(t=0,\ldots,T\), let \(x_{t}\in[0,E]\) denote the BESS state of charge (SoC) at the beginning of stage \(t\). For every \(t=0,\ldots,T-1\), let \(b_{\mathcal{N}t}:=\left[b_{\mathcal{N}t}\right]^{+}-\left[b_{\mathcal{N}t}\right]^{-}\in[-\underline{b}_{\mathcal{N}},\overline{b}_{\mathcal{N}}]\) denote the storage output, with \(\underline{b}_{\mathcal{N}}\) and \(\overline{b}_{\mathcal{N}}\) as the maximum energy discharging and charging rates, respectively. The battery is charged when \(b_{\mathcal{N}t}>0\) and discharged when \(b_{\mathcal{N}t}<0\). The charging and discharging efficiencies are denoted by \(\tau\in(0,1]\) and \(\rho\in(0,1]\), respectively. Given the storage level and output at stage \(t\), the SoC evolves as per the following
\[x_{t+1}=x_{t}+\tau\left[b_{\mathcal{N}t}\right]^{+}-\left[b_{\mathcal{N}t} \right]^{-}/\rho,\quad t=0,\ldots,T-1, \tag{18}\]
with initial SoC \(x_{0}=0\).
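A direct transcription of the SoC recursion (18) is given below; the capacity and efficiencies are illustrative placeholders, and the assertion mirrors the SoC bound imposed later in (23e).

```python
# Transcription of the SoC recursion (18): positive output charges the battery
# at efficiency tau, negative output discharges at efficiency rho.
def next_soc(x_t, b_Nt, tau=0.95, rho=0.95, capacity=100.0):
    x_next = x_t + tau * max(b_Nt, 0.0) - max(-b_Nt, 0.0) / rho
    assert 0.0 <= x_next <= capacity, "SoC bound violated"
    return x_next

x = 50.0
for b in [10.0, -5.0, 0.0]:   # charge, discharge, idle
    x = next_soc(x, b)
```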
A community member \(i\in\mathcal{N}\) may own a share \(\xi_{i}\in\Xi=\{\xi_{i}\in[0,1]:\sum_{i\in\mathcal{N}}\xi_{i}=1\}\) of the BESS capacity, \(e_{i}:=\xi_{i}E\), which makes their output8, for \(t=0,\ldots,T-1\), \(b_{it}=\xi_{i}b_{\mathcal{N}t}\in[-\underline{b}_{i},\overline{b}_{i}]\), where \(\underline{b}_{i}:=\xi_{i}\underline{b}_{\mathcal{N}}\) and \(\overline{b}_{i}:=\xi_{i}\overline{b}_{\mathcal{N}}\) are the maximum energy discharging and charging rates of member \(i\)'s storage share, respectively9.
Footnote 8: Note that \(b_{it}\) is virtually accounted to the member as if it was a BTM BESS, which is the practice in VNEM [4].
Footnote 9: The model allows for adjusting the storage shares \(\xi_{i}\in\Xi\) every \(T\) period.
For every \(t=0,\ldots,T-1\), the _net consumption_ of every \(i\in\mathcal{N}\) household \(z_{it}\in\mathbb{R}\) and the community's _aggregate net consumption_\(z_{\mathcal{N}t}\in\mathbb{R}\) in time interval \(t\) are redefined as
\[z_{it}:=\mathbf{1}^{\top}\boldsymbol{d}_{it}+b_{it}-g_{it},\;\;z_{\mathcal{N}t }:=\sum_{i\in\mathcal{N}}z_{it}:=d_{\mathcal{N}t}+b_{\mathcal{N}t}-g_{ \mathcal{N}t}. \tag{19}\]
#### Iv-A2 Payment, Surplus, and Reward
Given \(z_{it}\) and \(z_{\mathcal{N}t}\), we use the same payment and surplus functions introduced in Sec.II-B-II-C, but with the time dimension added. The surplus function \(S_{it}^{\chi}(\cdot)\) does not take into account storage gains/losses due to charging/discharging. Therefore, we generalize it to the _reward function_\(Q_{it}^{\chi}\) as, for every \(i\in\mathcal{N}\),
\[Q_{it}^{\chi}(z_{it},g_{\mathcal{N}t}):=S_{it}^{\chi}+\gamma(\tau[b_{it}]^{+}-[b _{it}]^{-}/\rho),\;\;t\in[0,T-1], \tag{20}\]
where the second term on the right-hand side is the reward (cost) of charging (discharging) the storage share, both incurred at the salvage value rate \(\gamma\in\mathbb{R}_{+}\). We assume \(\gamma\in[\frac{1}{\tau}\max\{(\pi_{t}^{-})\},\rho\min\{(\pi_{t}^{+})\}]\) to avoid trivial storage scheduling [29]. Similar to (20), the _reward_ of a standalone household (benchmark) is, for every \(i\in\mathcal{N}\),
\[Q_{it}^{\pi}(z_{it}):=S_{it}^{\pi}+\gamma(\tau[b_{it}]^{+}-[b_{it}]^{-}/\rho), \;\;t\in[0,T-1]. \tag{21}\]
Note that the customer faces \(\gamma\) with and without the community, which builds on the assumption that both the operator and the standalone prosumer place the same value on their stored energy given that they face the same NEM tariff \(\pi\).
The _community welfare_ under the profit neutrality condition is the aggregate reward of its members over the \(T\) intervals:
\[W_{\mathcal{N}}^{\chi}:=\sum_{t=0}^{T-1}\sum_{i=1}^{N}Q_{it}^{\chi}(z_{it},g_{ \mathcal{N}t}). \tag{22}\]
#### Iv-A3 Resource Co-Optimization
If the community welfare were to be centrally maximized, the operator would have formulated the storage-consumption co-optimization as a \(T\)-stage Markov decision process (MDP). The state \(s_{t}:=(x_{t},g_{\mathcal{N}t})\in\mathcal{S}\) of the MDP in interval \(t\) includes the battery SoC \(x_{t}\) and aggregate renewables \(g_{\mathcal{N}t}\), whose evolution is defined by (18) and the exogenous Markov random process \((g_{\mathcal{N}t})\). The initial state is denoted by \(s_{0}=(x_{0},g_{\mathcal{N}0})\). An MDP _policy_\(\mu:=(\mu_{0},\ldots,\mu_{T-1})\) is a sequence of decision rules, \(s_{t}\stackrel{{\mu}}{{\mapsto}}u_{t}:=(\{\mathbf{d}_{it}\}_{i=1}^{N},b_{\mathcal{N}t},\{z_{it}\}_{i=1}^{N})\), that specifies consumption and storage operation in each \(t\).
Therefore, if the community resources were centrally operated, the operator would have solved the following:
\[\mathcal{P}_{\mathcal{N}}^{\chi}:\underset{\mu=(\mu_{0},\ldots,\mu_{T-1 })}{\text{Maximize}} \mathbb{E}_{\mu}\left\{W_{\mathcal{N}}^{\chi}(s_{t},u_{t})\right\} \tag{23a}\] \[\text{Subject to}\] for all \[t=0,\ldots,T-1,\] (23b) \[(\ref{eq:P_N})\] (23c) \[g_{\mathcal{N},t+1}\sim F_{g_{\mathcal{N},t+1}|g_{\mathcal{N}t}}\] (23d) \[0\leq x_{t}\leq E\] (23e) \[-\underline{b}_{\mathcal{N}}\leq[b_{\mathcal{N}t}]^{+}-[b_{ \mathcal{N}t}]^{-}\leq\overline{b}_{\mathcal{N}}\] (23f) \[\underline{\mathbf{d}}_{t}\preceq\mathbf{d}_{it}\preceq\overline{\bm {d}}_{t},\ \ i=1,\ldots,N\] (23g) \[P_{\mathcal{N}t}^{\pi}(z_{\mathcal{N}t})=\sum_{i\in\mathcal{N}} P_{t}^{\chi}(z_{i},g_{\mathcal{N}t})\] (23h) \[s_{0}=(x_{0},g_{\mathcal{N}0}), \tag{23i}\]
where \(F_{g_{\mathcal{N},t+1}|g_{\mathcal{N}t}}\) is the conditional distribution of \(g_{\mathcal{N},t+1}\) given \(g_{\mathcal{N}t}\), and the expectation is taken over the exogenous stochastic aggregate generation \((g_{\mathcal{N}t})\). Note that, by definition, \([b_{\mathcal{N}t}]^{+}\cdot[b_{\mathcal{N}t}]^{-}=0\).
### _Generalized D-NEM_
D-NEM can be generalized to the case when the community has a centrally operated BESS. Although (23) remains hard to solve, it has been shown in [29] that having non-binding storage SoC constraints (23e) is a sufficient condition for a co-optimization policy that does not require a look-ahead of available renewables in the future to achieve the maximum welfare in (23). This sufficient condition is achieved if (i) \(E>2T\tau\overline{b}_{\mathcal{N}}\), and (ii) \(x_{0}\in(T\underline{b}_{\mathcal{N}}/\rho,E-T\tau\overline{b}_{\mathcal{N}})\). The assumption is reasonable when the BESS capacity \(E\) is large or the charging/discharging limits \(\overline{b}_{\mathcal{N}},\underline{b}_{\mathcal{N}}\) are relatively small.
#### Iv-B1 D-Nem
Envisioned by the solution of (23) under the non-binding storage SoC constraints assumption, the community operator devises a threshold-based pricing rule with prices announced based on \(g_{\mathcal{N}t}\) for every \(t\).
**Generalized Dynamic NEM.**_Under D-NEM, for every \(t=0,\ldots,T-1\), the community's pricing rule is threshold-based, given by the 7-tuple parameter \(\chi_{t}=(\pi_{t}^{+},\pi_{t}^{z_{1}}(g_{\mathcal{N}t}),\gamma/\rho,\pi_{t}^{ z_{2}}(g_{\mathcal{N}t}),\gamma\tau,\pi_{t}^{z_{3}}(g_{\mathcal{N}t}),\pi_{t}^{-})\) with the order_
\[\pi_{t}^{+}\geq\pi_{t}^{z_{1}}(g_{\mathcal{N}t})\geq\gamma/\rho\geq\pi_{t}^{z _{2}}(g_{\mathcal{N}t})\geq\gamma\tau\geq\pi_{t}^{z_{3}}(g_{\mathcal{N}t}) \geq\pi_{t}^{-},\]
_as_
\[\Gamma_{t}^{\chi}(g_{\mathcal{N}t})=\left\{\begin{array}{ll}\pi_{t}^{+},&g_ {\mathcal{N}t}\leq\Delta_{t}^{+}\\ \pi_{t}^{z_{1}}(g_{\mathcal{N}t}),&g_{\mathcal{N}t}\in(\Delta_{t}^{+},\sigma_{ t}^{+}]\\ \gamma/\rho,&g_{\mathcal{N}t}\in(\sigma_{t}^{+},\sigma_{t}^{+z})\\ \pi_{t}^{z_{2}}(g_{\mathcal{N}t}),&g_{\mathcal{N}t}\in[\sigma_{t}^{+z},\sigma_ {t}^{-z}]\\ \gamma\tau,&g_{\mathcal{N}t}\in(\sigma_{t}^{-z},\sigma_{t}^{-})\\ \pi_{t}^{z_{3}}(g_{\mathcal{N}t}),&g_{\mathcal{N}t}\in[\sigma_{t}^{-},\Delta_{t }^{-})\\ \pi_{t}^{-},&g_{\mathcal{N}t}\geq\Delta_{t}^{-}\,,\end{array}\right. \tag{24}\]
_where the prices \(\pi_{t}^{+},\gamma/\rho,\gamma\tau,\pi_{t}^{-}\) are constants, and \(\pi_{t}^{z_{1}}(g_{\mathcal{N}t}),\pi_{t}^{z_{2}}(g_{\mathcal{N}t}),\pi_{t}^{z _{3}}(g_{\mathcal{N}t})\) are Lagrange multipliers that are the solutions of_
\[\sum_{i\in\mathcal{N}}\sum_{k\in\mathcal{K}}\max\{\underline{d}_{ ttk},\min\{f_{itk}(\mu_{t}^{z_{1}}),\bar{d}_{itk}\}\}=g_{\mathcal{N}t}+ \underline{b}_{\mathcal{N}} \tag{25}\] \[\sum_{i\in\mathcal{N}}\sum_{k\in\mathcal{K}}\max\{\underline{d}_{ itk},\min\{f_{itk}(\mu_{t}^{z_{2}}),\bar{d}_{itk}\}\}=g_{\mathcal{N}t}\] (26) \[\sum_{i\in\mathcal{N}}\sum_{k\in\mathcal{K}}\max\{\underline{d}_{ itk},\min\{f_{itk}(\mu_{t}^{z_{3}}),\bar{d}_{itk}\}\}=g_{\mathcal{N}t}-\overline{b}_{ \mathcal{N}}, \tag{27}\]
_respectively. For \(t\in[0,T-1]\), the thresholds are computed as:_
\[\Delta_{t}^{+}:=f_{t}(\pi_{t}^{+})-\underline{b}_{\mathcal{N}},\qquad\Delta_{t}^{-}:=f_{t}(\pi_{t}^{-})+\overline{b}_{\mathcal{N}},\] \[\sigma_{t}^{+}:=f_{t}(\gamma/\rho)-\underline{b}_{\mathcal{N}},\qquad\sigma_{t}^{-}:=f_{t}(\gamma\tau)+\overline{b}_{\mathcal{N}},\] \[\sigma_{t}^{+z}:=f_{t}(\gamma/\rho),\qquad\sigma_{t}^{-z}:=f_{t}(\gamma\tau),\]
_where \(f_{t}(y):=\sum_{i\in\mathcal{N}}\sum_{k\in\mathcal{K}}\max\{\underline{d}_{ itk},\min\{f_{itk}(y),\bar{d}_{itk}\}\}\). The payment rule, for every \(i\in\mathcal{N},t\in[0,T-1]\), is_
\[P_{t}^{\chi}(z_{it},g_{\mathcal{N}t})=\Gamma_{t}^{\chi}(g_{\mathcal{N}t})z_{it}. \tag{28}\]
Similar to D-NEM without BESS, and as summarized in Table II, the generalized D-NEM, for every \(t\), passes the utility prices in the net consumption (\(g_{\mathcal{N}t}\leq\Delta_{t}^{+}\)) and net production (\(g_{\mathcal{N}t}\geq\Delta_{t}^{-}\)) zones. The price in the net zero zone (\(g_{\mathcal{N}t}\in[\Delta_{t}^{+},\Delta_{t}^{-}]\)) transitions from \(\pi_{t}^{+}\) to \(\pi_{t}^{-}\) as the aggregate renewables \(g_{\mathcal{N}t}\) increase. The member payment form without BESS (14) and with BESS (28) is the same, under which the member faces \(\Gamma_{t}^{\chi}(g_{\mathcal{N}t})\) regardless of the sign of \(z_{it}\), i.e., _NEM 1.0_.
#### Iv-B2 Optimal BESS Operation
The output of the centrally operated BESS is allocated to members through the shares \(\xi_{i}\)'s. Theorem 1 in [29] shows that the optimal storage operation is a threshold policy based on \(g_{\mathcal{N}t}\). In particular, for every \(t=0,\ldots,T-1\), the optimal BESS operation is
\[b_{\mathcal{N}t}^{*}(g_{\mathcal{N}t})=\left\{\begin{array}{ll}-\underline{b}_{\mathcal{N}},&g_{\mathcal{N}t}\leq\sigma_{t}^{+}\\ g_{\mathcal{N}t}-\sigma_{t}^{+z},&g_{\mathcal{N}t}\in[\sigma_{t}^{+},\sigma_{t}^{+z}]\\ 0,&g_{\mathcal{N}t}\in[\sigma_{t}^{+z},\sigma_{t}^{-z}]\\ g_{\mathcal{N}t}-\sigma_{t}^{-z},&g_{\mathcal{N}t}\in[\sigma_{t}^{-z},\sigma_{t}^{-}]\\ \overline{b}_{\mathcal{N}},&g_{\mathcal{N}t}\geq\sigma_{t}^{-}\,.\end{array}\right.\]
Hence, for every \(t=0,\ldots,T-1\), the virtual schedule of every member is, \(b_{it}^{*}(g_{\mathcal{N}t})=\xi_{i}b_{\mathcal{N}t}^{*}(g_{\mathcal{N}t})\). Table II summarizes the generalized D-NEM and the optimal BESS operation.
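A compact sketch of this threshold policy, as described above and summarized in Table II, is given below; the thresholds are assumed to be precomputed from the aggregate inverse marginal utility, and the variable names are illustrative.

```python
# Sketch of the threshold BESS policy: discharge at the maximum rate when
# aggregate renewables are scarce, charge at the maximum rate when abundant,
# and keep the community in the net-zero band in between.
def optimal_bess_output(g_N, sig_p, sig_pz, sig_mz, sig_m, b_dis_max, b_ch_max):
    """Returns b*_N(g_N); negative values discharge, positive values charge."""
    if g_N <= sig_p:
        return -b_dis_max              # maximum discharge
    if g_N <= sig_pz:
        return g_N - sig_pz            # partial discharge (negative)
    if g_N <= sig_mz:
        return 0.0                     # idle: flexible demand absorbs renewables
    if g_N <= sig_m:
        return g_N - sig_mz            # partial charge (positive)
    return b_ch_max                    # maximum charge

# Each member's virtual schedule is its share of the central output,
# b*_i = xi_i * b*_N, as stated above.
```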
#### Iv-B3 Community Member Problem
For every \(t=0,\ldots,T-1\), the community price \(\Gamma_{t}^{\chi}(g_{\mathcal{N}t})\) is announced according to D-NEM, and correspondingly, every \(i\in\mathcal{N}\) member schedules its consumption to maximize its reward \(Q_{it}^{\chi}\), analogously to (16).
### _Efficiency, Rationality, and Cost-Causation Conformity_
Interestingly, Theorems 1-4 extend to the BESS case under generalized D-NEM. For brevity, we present the results below, leaving out formal statements and proofs, as the proofs trivially, yet tediously, follow those of Theorems 1-4.
1. Efficiency: \(W_{\mathcal{N}}^{*,\chi}=\sum_{t=0}^{T-1}\sum_{i\in\mathcal{N}}Q_{it}^{*,\chi}( z_{it}^{*},g_{\mathcal{N}t})\).
2. Individual rationality: \[Q_{it}^{*,\chi}\left(z_{it}^{*},g_{Nt}\right)\geq Q_{it}^{*,\pi}\left(g_{it} \right),\forall t\in[0,T-1].\]
3. Cost-causation conformity: Axioms 1-6 are satisfied under the generalized D-NEM.
4. Group rationality: For \(\mathcal{H}\subseteq\mathcal{T}\subseteq\mathcal{N}\), and \(\forall t\in[0,T-1]\), denote the reward of a member who join community \(\mathcal{T}\) and \(\mathcal{H}\) by \(Q_{it,\mathcal{T}}^{\chi}\) and \(Q_{it,\mathcal{H}}^{\chi}\), respectively, then under D-NEM we have \[\sum_{i\in\mathcal{H}}Q_{it,\mathcal{T}}^{*,\chi}(z_{it}^{*},g_{\mathcal{T}t })\geq\sum_{i\in\mathcal{H}}Q_{it,\mathcal{H}}^{*,\chi}(z_{it}^{*},g_{ \mathcal{H}t}).\]
## V Numerical Results
To show the performance of the proposed mechanism, we assumed a hypothetical energy community using one year of real data from \(N=23\) residential households10. The scheduling decisions are carried out over a 24-hour horizon (\(T=24\)). All households have flexible loads and 18 of them have rooftop solar. The community has a BESS system with charging and discharging efficiencies \(\tau=\rho=0.95\), and power limits \(\overline{b}_{\mathcal{N}}=\underline{b}_{\mathcal{N}}=23\) kW. For simplicity, we assumed that community members have equal storage shares, i.e., \(\xi_{i}=1/N,\forall i\in\mathcal{N}\). The storage size and initial SoC are chosen so that the large-capacity sufficient condition is met.
Footnote 10: We used PecanStreet 2018 data for residential users from Austin, TX.
To model consumption preferences of every \(i\in\mathcal{N}\) household, the following utility function was adopted:
\[U_{itk}(d_{itk})=\left\{\begin{array}{ll}\alpha_{itk}d_{itk}-\frac{1}{2} \beta_{itk}d_{itk}^{2},&0\leq d_{itk}\leq\frac{\alpha_{itk}}{\beta_{itk}}\\ \frac{\alpha_{itk}^{2}}{2\beta_{itk}},&d_{itk}>\frac{\alpha_{itk}}{\beta_{itk}},\end{array}\right. \tag{29}\]
for all \(k\in\mathcal{K},t=0,\ldots,T-1\), where \(\alpha_{itk},\beta_{itk}\) are utility parameters that are learned and calibrated using historical retail prices11 and consumption12, and by assuming an elasticity for each load type. Two load types with two different utility functions of the form in (29) were considered, namely 1) HVAC, and 2) other household loads13.
Footnote 11: We used Data.AustinTexas.gov historical residential rates in Austin, TX.
Footnote 12: We used pre-2018 PecanStreet data for households in Austin, TX.
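The quadratic utility (29) and its inverse marginal utility, which is the quantity entering the D-NEM thresholds and prices, can be transcribed directly as below; the parameter values are placeholders rather than the calibrated ones.

```python
# Transcription of the quadratic utility (29) and its inverse marginal utility.
# alpha and beta below are placeholders, not the calibrated parameters.
def utility(d, alpha=0.5, beta=0.05):
    if d <= alpha / beta:
        return alpha * d - 0.5 * beta * d ** 2
    return alpha ** 2 / (2.0 * beta)        # saturates beyond the satiation point

def inverse_marginal_utility(price, alpha=0.5, beta=0.05):
    """f(price): consumption level at which the marginal utility equals the price."""
    return max((alpha - price) / beta, 0.0)
```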
The community faces the utility's NEM X tariff with a ToU rate, with \(\pi_{\text{peak}}^{+}=\$0.40\)/kWh and \(\pi_{\text{offpeak}}^{+}=\$0.20\)/kWh as the peak and off-peak prices, respectively. For the export rate \(\pi^{-}\), we used the average 2018 real-time wholesale prices14 in Texas. The storage salvage value rate was chosen so that \(\gamma\in[\frac{1}{\tau}\max\{(\pi_{t}^{-})\},\rho\min\{(\pi_{t}^{+})\}]\).
Footnote 13: The elasticities of HVAC and other household loads are taken from [30].
The simulation results compared the welfare, pricing, and operation of a community under D-NEM to a community under the payment rule in [8], and to optimal standalone customers, who schedule based on [29] when they have storage and [23] when they do not15. In addition to comparing the market mechanisms, we evaluated the communities' performance with and without the BESS.
Footnote 14: The data is accessible at ERCOT.
Footnote 15: We assume that the customers’ schedules under [8] are the same as that of _optimal_ standalone utility customers.
Figure 2 presents a summary of raw data at the community level without BESS. The left plot shows the daily average aggregate consumption (blue), generation (green), and net consumption (red). The community's aggregate net consumption peaked in the evening and plummeted around noon. The right plot shows the community's monthly aggregate generation and net consumption. The net consumption was much higher in summer signaling the effect of air-conditioning loads.
### _Community Welfare With and Without BESS_
Fig.3 shows the monthly welfare gain of communities under D-NEM and [8] over standalone customers with and
Fig. 2: Left: Aggregate daily consumption, generation, and net consumption. Right: Aggregate monthly net consumption and generation (no BESS).
without BESS. In all cases, and all months, forming energy communities achieved welfare gains. The welfare gain was the highest when the aggregate net consumption was smaller (further explained in Sec.V-B). Compare, for instance, the months of March-April to July-August.
In all months, the community under D-NEM, with and without BESS, achieved higher welfare, because, driven by price, D-NEM aligned the aggregated flexible resources with the aggregated generation, which increased renewables' self-consumption (valued at \(>\pi^{-}\)), instead of exporting to the utility (valued at \(\pi^{-}\)). Under [8], however, every member aligned its own generation with its own flexible resources, which was sub-optimal at the aggregate level.
Looking at how each market mechanism performed with and without BESS, we note that, under D-NEM, the community's welfare gain with storage was always higher than its gain without storage. The average gain was 6.32% with storage and 4.29% without. This, however, was not the case under [8], as in most months, the welfare gain of a no-storage community was higher than the welfare gain of the community with storage. This was because when customers did not have storage outside the community, they were vulnerable to the export rate \(\pi^{-}\) more often than customers with storage, which gave the former higher incentives to join the community. Under [8], the average gain with storage was 2.68%, which increased to 3.34% in the absence of storage.
### _Community Price and Operation_
The heatmap of the community announced price (Fig.4) and the corresponding aggregate net consumption (Fig.5) provide intuitions on why D-NEM achieved welfare optimality over [8] with and without BESS. First, when there is no storage (left column), the price under [8] was either \(\pi^{+}\) or \(\pi^{-}\) depending on the aggregate net consumption sign, whereas, under D-NEM, some of the \(\pi^{-}\) and \(\pi^{+}\) intervals got replaced by \(\Gamma^{\chi}(g_{\mathcal{N}})\in(\pi^{-},\pi^{+})\) (shaded in light blue), which drove the community's aggregate demand to equalize the aggregate generation. This phenomenon is reflected in Fig.5, which shows that D-NEM managed to reduce power imports and exports through price, giving it welfare optimality over [8]. Compared to optimal standalone customers, both solar-only communities (left column) reduced the midday aggregate exports, which gave them higher welfare levels (Fig.3).
After adding a BESS (right columns of Figs.4-5), both D-NEM and [8] further reduced power imports and exports, which increased the welfare of both communities. D-NEM was more efficient in keeping the community off-the-grid more often because the BESS and flexible consumptions were both coordinated with the aggregate renewables. As Fig.4 shows, when the D-NEM community added a BESS, the net-zero zone price became more prevalent, as the storage-added-flexibility further enabled maintaining the net-zero zone.
### _Effect of NEM X Rates_
Figure 6 evaluates the effect of the utility's NEM rate ratio \(\pi^{-}/\pi^{+}\) on the welfare gain of communities without
Fig. 5: Community net consumption with/without storage.
Fig. 3: Community welfare gain with/without storage.
BESS (left) and with BESS (right), under a flat buy rate \(\pi^{+}=\$0.4/\)kWh. Three main observations are in order. First, under both [8] and D-NEM, the community welfare gain increased as \(\pi^{-}/\pi^{+}\) decreased, because community members are less vulnerable to \(\pi^{-}\) compared to their benchmark. Second, the welfare gain gap between [8] and D-NEM increased as \(\pi^{-}/\pi^{+}\) decreased because the D-NEM community is less impacted by \(\pi^{-}\), as through pricing, the resources are scheduled to maximize self-consumption. Lastly, at lower rate ratios, the welfare gain of communities without BESS was higher than that of communities with BESS, because the benchmark customers are less impacted by \(\pi^{-}\) when they adopt BESS.
## VI Conclusion
This paper proposed Dynamic NEM (D-NEM), a social welfare maximizing market mechanism for energy communities that aggregates BTM and community-shared DER under a general NEM policy. Through D-NEM, the community's flexible demands and BESS become renewable-generation-aware by being dynamically scheduled based on the aggregate renewables. We showed that D-NEM conforms with the cost-causation principle and induces the members to decentrally maximize the community's welfare. The surplus of every member is shown to be no less than their maximum surplus outside the community. We also proved coalition stability under D-NEM by showing that no sub-coalition would find it advantageous to leave the grand coalition.
An interesting future direction for this work is to make D-NEM network-aware, possibly through dynamic operating envelopes [31]. Also, we intend to explore the notion of _bounded rationality_, under which prosumers may not be surplus-maximizers [24], perhaps due to HEMS absence.
|
2310.01595 | Memory-efficient particle filter recurrent neural network for object
localization | This study proposes a novel memory-efficient recurrent neural network (RNN)
architecture specified to solve the object localization problem. This problem
is to recover the object states along with its movement in a noisy environment.
We take the idea of the classical particle filter and combine it with GRU RNN
architecture. The key feature of the resulting memory-efficient particle filter
RNN model (mePFRNN) is that it requires the same number of parameters to
process environments of different sizes. Thus, the proposed mePFRNN
architecture consumes less memory to store parameters compared to the
previously proposed PFRNN model. To demonstrate the performance of our model,
we test it on symmetric and noisy environments that are incredibly challenging
for filtering algorithms. In our experiments, the mePFRNN model provides more
precise localization than the considered competitors and requires fewer trained
parameters. | Roman Korkin, Ivan Oseledets, Aleksandr Katrutsa | 2023-10-02T19:41:19Z | http://arxiv.org/abs/2310.01595v1 | # Memory-efficient particle filter recurrent neural network for object localization
###### Abstract
This study proposes a novel memory-efficient recurrent neural network (RNN) architecture specified to solve the object localization problem. This problem is to recover the object states along with its movement in a noisy environment. We take the idea of the classical particle filter and combine it with GRU RNN architecture. The key feature of the resulting memory-efficient particle filter RNN model (mePFRNN) is that it requires the same number of parameters to process environments of different sizes. Thus, the proposed mePFRNN architecture consumes less memory to store parameters compared to the previously proposed PFRNN model. To demonstrate the performance of our model, we test it on symmetric and noisy environments that are incredibly challenging for filtering algorithms. In our experiments, the mePFRNN model provides more precise localization than the considered competitors and requires fewer trained parameters.
## 1 Introduction
We consider the object localization problem and propose a novel GRU-like architecture to solve it. Typically, standard GRU-like models [1] are used to process sequential data, e.g. to predict the next item in a sequence, or to classify or generate text, audio, and video data. The object localization problem differs from the aforementioned problems since auxiliary data about the environment and particular measurements are available. Therefore, this additional knowledge should be incorporated into the GRU architecture properly. Such a modification can be based on the existing approaches to solving the object localization problem, which are discussed below.
One of the classical non-parametric methods to solve the object localization problem is the particle filter [2], which estimates the filtered object state from the states of auxiliary artificial objects called particles. A modification of GRU and LSTM recurrent neural networks with particle filter ingredients is presented in [3], where a particle filter recurrent neural network (PFRNN) is proposed. The core element of PFRNN is a modified cell (GRU or LSTM) equipped with analogs of particles and the corresponding particle weights to estimate the filtered state. However, PFRNN is designed to improve general sequential data processing and does not consider specific features of the object localization problem. Therefore, we propose the novel _memory-efficient PFRNN (mePFRNN)_ that combines the model assumptions used in classical filtering methods (e.g. the Kalman filter and the particle filter) with the parametrization from the GRU architecture. Such a combination provides more accurate state estimation and improves robustness in noisy and symmetric environments. Also, the Soft Resampling procedure is used to avoid the degeneracy issue and improve the stability of the filtered states.
The main contributions of our study are the following.
1. We propose a modification of the PFRNN architecture specified for the object localization problem. The proposed mePFRNN model does not exploit environment embeddings and extracts this data implicitly in the training stage.
2. We perform an extensive experimental comparison of the proposed GRU-like architecture with the existing recurrent neural networks and other non-parametric methods like the particle filter.
3. The proposed mePFRNN model requires the same number of parameters for environments of different sizes.
Related works.The object localization problem appears in many applications such as autonomous driving [4], navigation [5, 6], image processing [7], finance [8] and fatigue prediction [9]. Therefore, there are many different approaches to solving it. We can split them into two classes: non-parametric and parametric. The first class consists of classical methods that do not require a training stage and filter the object states on the fly. Examples of such methods are the Kalman filter [10, 11] and its modifications like the extended [12], unscented [13], invariant extended [14] and ensembled [15] Kalman filters. Methods related to the particle filter, e.g. the multiparticle Kalman filter [16], particle filters combined with genetic algorithms [17] or the particle swarm technique [18], the box particle filter [19] and others, are also non-parametric filtering methods. The second class consists of parametric methods, which require a pre-training stage before filtering can start. Such methods are typically based on neural networks that are trained on collected historical data and then tested on new data from real-world simulations. Although the pre-training stage may require a lot of time, one can expect the inference stage, in which filtering is performed, to be sufficiently fast due to modern hardware acceleration. Moreover, since neural network models can efficiently treat sequential data [20, 21], parametric methods can provide more accurate filtering results than non-parametric methods.
Although the Transformer model [22] demonstrates superior performance over the considered GRU RNN in sequence processing tasks, it consumes a lot of memory to store parameters, requires special techniques for training [23] and may not fit in the on-device memory limits. The memory-efficient Transformer models [24, 25, 26] may be a remedy for the observed issue and will be investigated in future work.
## 2 Problem statement
Consider the trajectory of object states encoded as a sequence of \(d\)-dimensional vectors \(\mathbf{x}_{i}\in\mathbb{R}^{d}\), where \(i\) is an index of the time moment \(t_{i}\). For example, if the object's state consists of 2D coordinates and 2D velocity, then state dimension \(d=4\). The states are changed according to the motion equation, which combines the physical law and the control system of the object. Formally we can write the motion equation as follows
\[\mathbf{x}_{i}=f(\mathbf{x}_{i-1},\mathbf{u}_{i},\boldsymbol{\eta}_{i}), \tag{1}\]
where \(\mathbf{u}_{i}\) is a vector of control at the time moment \(t_{i}\), for example, external forces, and \(\boldsymbol{\eta}_{i}\) is a vector of noise corresponding to the object motion at the time moment \(t_{i}\). Since the object moves with some noise, we should use additional measurements to estimate states more precisely. Typically there are several beacons in the environment, which are used by objects to measure some quantities that can improve their state estimate. For example, distance to the \(k\)-nearest beacons can improve the estimate of the object's location. Formally, denote by \(\mathbf{y}_{i}\in\mathbb{R}^{k}\) a vector of measurements at time moment \(t_{i}\) that is related with state estimate through the measurement function \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}\):
\[\mathbf{y}_{i}=g(\mathbf{x}_{i},\boldsymbol{\zeta}_{i}), \tag{2}\]
where \(\boldsymbol{\zeta}_{i}\) is the additional noise of measurement.
The object localization problem is the problem of estimating the object trajectory from the given motion and measurement functions, which represent the physical law of the environment and the beacons' configuration, respectively. In this study, we introduce the parametric model \(h_{\boldsymbol{\theta}}:\mathbb{R}^{d}\times\mathbb{R}^{k}\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}\) that depends on the unknown parameters \(\boldsymbol{\theta}\in\mathbb{R}^{n}\) and filters the inexact state estimate \(\mathbf{x}_{i}\) based on the additional measurements \(\mathbf{y}_{i}\). Assume we have a training trajectory of the ground-truth states \(\{\mathbf{x}_{i}^{*}\}_{i=1}^{N}\). Then we can state the optimization problem to fit our parametric model to the training data \(\{\mathbf{x}_{i}^{*}\}_{i=1}^{N}\) and evaluate the generalization ability of the resulting model. In particular, the standard loss function in such a problem is the mean square error loss function
\[MSE=\frac{1}{N}\sum_{i=1}^{N}\|h_{\boldsymbol{\theta}}(\mathbf{x}_{i},\mathbf{ y}_{i})-\mathbf{x}_{i}^{*}\|_{2}^{2} \tag{3}\]
such that the motion function \(f\) and the measurement functions \(g\) give the state estimate \(\mathbf{x}_{i}\) and measurement vector \(\mathbf{y}_{i}\), respectively.
We further focus on the plane motion setup, where the state vector consists of 2D coordinates \(\mathbf{c}\in\mathbb{R}^{2}\) and a heading \(\alpha\in[0,2\pi]\), which defines the direction of movement, i.e. \(\mathbf{x}=[\mathbf{c},\alpha]\in\mathbb{R}^{3}\). Therefore, we follow [3] in slightly
adjusting the MSE objective function (3) to treat coordinates and angles separately and compose the weighted MSE loss function:
\[wMSE=\underbrace{\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{c}_{i}-\mathbf{c}_{i}^{*}\|_ {2}^{2}}_{=\mathrm{MSE}_{c}}+\frac{\beta}{N}\sum_{i=1}^{N}(\alpha_{i}-\alpha_{i }^{*})^{2},\quad\text{where}\quad[\mathbf{c}_{i},\alpha_{i}]=h_{\boldsymbol{ \theta}}(\mathbf{x}_{i},\mathbf{y}_{i}),\quad\mathbf{x}_{i}^{*}=[\mathbf{c}_{ i}^{*},\alpha_{i}^{*}], \tag{4}\]
where \(\beta>0\) is a given weight. However, the \(wMSE\) loss function treats angles \(2\pi-\epsilon\) and \(\epsilon\) as essentially different while they are physically close. Thus, we propose a novel modification of the mean squared loss function (3), that treats headings differently. In particular, we compare not angles but their sine and cosine in the following way:
\[L(\boldsymbol{\theta})=\mathrm{MSE}_{c}+\frac{\beta}{N}\sum_{i=1}^{N}\left[(\sin\alpha_{i}-\sin\alpha_{i}^{*})^{2}+(\cos\alpha_{i}-\cos\alpha_{i}^{*})^{2}\right], \tag{5}\]
where we use the same notation as in (4). Thus, we have the following optimization problem:
\[\boldsymbol{\theta}^{*} =\arg\min_{\boldsymbol{\theta}}L(\boldsymbol{\theta}),\] (6) s.t. \[\mathbf{x}_{i} =f(\mathbf{x}_{i-1},\mathbf{u}_{i},\boldsymbol{\eta}_{i})\] \[\mathbf{y}_{i} =g(\mathbf{x}_{i},\boldsymbol{\zeta}_{i}).\]
Additionally to the MSE-like loss function, we evaluate the resulting model with the Final State Error (FSE) loss function, which reads as
\[\mathrm{FSE}=\|\mathbf{c}_{N}-\mathbf{c}_{N}^{*}\|_{2}, \tag{7}\]
where \([\mathbf{c}_{N}^{*},\alpha_{N}^{*}]=\mathbf{x}_{N}^{*}\), \([\mathbf{c}_{N},\alpha_{N}]=h_{\boldsymbol{\theta}}(\mathbf{x}_{N},\mathbf{y}_{N})\) and \(t_{N}\) is the last time moment in the considered period. Although the FSE loss function is widely used in previous studies [3, 27], it may overestimate the filter performance due to the uncertainty in the filtering process: the final coordinates may be filtered very accurately by accident while the preceding coordinates are filtered poorly. Thus, we focus on the \(\mathrm{MSE}_{c}\) loss function as the main indicator of filter performance.
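For concreteness, the sketch below evaluates the metrics defined in (3)-(5) and (7); the array layout (rows \([x,y,\alpha]\)) is an assumption made only for this illustration, and \(\beta=0.1\) follows the value used later in the experiments.

```python
import numpy as np

# Sketch of the metrics above, assuming trajectories of shape (N, 3) with
# rows [x, y, heading].
def mse_c(pred, true):
    return np.mean(np.sum((pred[:, :2] - true[:, :2]) ** 2, axis=1))

def wmse(pred, true, beta=0.1):                     # loss (4)
    return mse_c(pred, true) + beta * np.mean((pred[:, 2] - true[:, 2]) ** 2)

def loss_l(pred, true, beta=0.1):                   # proposed loss (5)
    ang = (np.sin(pred[:, 2]) - np.sin(true[:, 2])) ** 2 \
        + (np.cos(pred[:, 2]) - np.cos(true[:, 2])) ** 2
    return mse_c(pred, true) + beta * np.mean(ang)

def fse(pred, true):                                # final state error (7)
    return np.linalg.norm(pred[-1, :2] - true[-1, :2])
```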
The key ingredient of this approach is the selection of the proper parametric model \(h_{\boldsymbol{\theta}}\). Following [3] we modify the GRU model such that it solves the object localization problem specifically. A detailed description of our modification is presented in the next section.
## 3 Particle filter
One of the most efficient non-parametric approaches to solving the localization problem is the particle filter. This filter considers artificially generated particles with states \(\mathbf{p}_{i}^{(k)}\in\mathbb{R}^{d}\) at the \(i\)-th time step and the corresponding weights \(w_{i}^{k}\geq 0,\sum_{k=1}^{K}w_{i}^{k}=1\) such that the estimate of the object state at the \(i\)-th time step is computed as follows
\[\tilde{\mathbf{x}}_{i}=\sum_{k=1}^{K}w_{i}^{k}\mathbf{p}_{i}^{(k)},\]
where \(K\) is the number of particles. Particles' weights are updated according to the corresponding measurements and state updates based on the Bayes rule and likelihood estimation, see [28] for details. The important step in the particle filter is resampling, which corrects the updated particle weights and states to improve the accuracy of estimate \(\tilde{\mathbf{x}}_{i}\). The resampling step addresses the degeneracy issue, which means a few number of particles have non-zero weights. This phenomenon indicates the poor representation of the target object state. The purely stochastic resampling samples particles' indices from the multinomial distribution according to the updated weights and then update particle states, respectively, see (8). After resampling the resulting particle states are slightly perturbed with random noise to avoid equal particles' states.
\[\begin{split}& i_{1},\ldots,i_{K}\sim\mathrm{Multinomial}(w_{i+1}^{ 1},\ldots,w_{i+1}^{K})\\ &\mathbf{p}_{i+1}^{1},\ldots,\mathbf{p}_{i+1}^{K}\leftarrow \mathbf{p}_{i+1}^{i_{1}},\ldots,\mathbf{p}_{i+1}^{i_{K}}\\ & w_{i+1}^{k}=\frac{1}{K}.\end{split} \tag{8}\]
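A minimal NumPy sketch of this resampling step (8) is given below; the jitter magnitude used for the final perturbation is an arbitrary placeholder.

```python
import numpy as np

# Sketch of stochastic resampling (8): draw particle indices from the
# multinomial distribution given by the weights, copy the corresponding
# states, reset the weights to 1/K, and slightly perturb the states.
def stochastic_resample(particles, weights, rng, jitter=1e-2):
    K = len(weights)
    idx = rng.choice(K, size=K, p=weights)
    new_particles = particles[idx] + jitter * rng.standard_normal(particles.shape)
    new_weights = np.full(K, 1.0 / K)
    return new_particles, new_weights

rng = np.random.default_rng(0)
particles = rng.standard_normal((100, 3))      # 100 particles, state [x, y, heading]
weights = np.full(100, 1.0 / 100)
particles, weights = stochastic_resample(particles, weights, rng)
```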
Since the particle filter processes sequential data through the recurrent updates of the particles and weights, the natural idea is to incorporate a similar approach in the recurrent neural network architecture. The particle filter recurrent neural network is proposed in [3] and we briefly describe it in the next section to highlight the difference with the proposed mePFRNN.
## 4 Recurrent neural networks inspired by particle filter
This section presents our RNN cell based on the particle filter idea, explicitly measured data, and beacons' positions. Since our model is a modification of the PFRNN [3] model, we briefly provide the main ingredients of this model.
PFRNN.Denote by \(K\) a number of particles that are emulated in the PFRNN model. Below we consider motion \(\mathbf{x}_{i}^{(k)}\) and measurement \(\mathbf{y}_{i}^{(k)}\) vectors corresponding to the \(k\)-th particle at the \(i\)-th time moment, so \(k=1,\ldots,K\) and \(i=1,\ldots,N\). PFRNN considers the environment as a 2D array and constructs its embedding through the following encoder subnetwork:
\[\mathrm{Conv}\rightarrow\mathrm{ReLU}\rightarrow\mathrm{Conv}\rightarrow \mathrm{ReLU}\rightarrow\mathrm{Conv}\rightarrow\mathrm{ReLU}\rightarrow \mathrm{Flatten}\rightarrow\mathrm{Linear}\rightarrow\mathrm{ReLU}, \tag{9}\]
where \(\mathrm{Conv}\) is a convolution layer, \(\mathrm{ReLU}\) denotes element-wise ReLU non-linearity, \(\mathrm{Linear}\) denotes a linear layer and \(\mathrm{Flatten}\) denotes a vectorization operation that reshapes the input tensor to a vector. The output of this subnetwork is the environment embedding vector \(\mathbf{e}_{env}\). At the same time, the embeddings for observations \(\mathbf{y}_{i}^{(k)}\) and motions \(\mathbf{x}_{i}^{(k)}\) are constructed via two linear layers and followed by ReLU activations and denoted by \(\mathbf{n}_{i}^{(k)}\) and \(\mathbf{m}_{i}^{(k)}\), respectively. Note that the dimensions of \(\mathbf{n}_{i}^{(k)}\) and \(\mathbf{m}_{i}^{(k)}\) are the same. Then, one transforms the environment embedding \(\mathbf{e}_{env}\) to adjusted embeddings \(\hat{\mathbf{n}}_{i}^{(k)}\) and \(\hat{\mathbf{m}}_{i}^{(k)}\) via two linear layers and followed ReLU activations. Now, the dimensions of \(\hat{\mathbf{n}}_{i}^{(k)}\), \(\hat{\mathbf{m}}_{i}^{(k)}\), \(\mathbf{n}_{i}^{(k)}\) and \(\mathbf{m}_{i}^{(k)}\) are the same. Finally, the input to the PFRNN cell described below (see (11)) is a set of vectors \(\mathbf{v}_{i}^{(k)}\) composed by concatenation of vectors \(\hat{\mathbf{n}}_{i}^{(k)}\odot\mathbf{n}_{i}^{(k)}\) and \(\hat{\mathbf{m}}_{i}^{(k)}\odot\mathbf{m}_{i}^{(k)}\), where \(\odot\) denotes element-wise product.
The baseline PFRNN cell is presented in both a graphical way (see Figure 1) and an analytical way (see equation (11)) for the reader's convenience. We note that this cell includes a reparametrization trick and updates not only the hidden state of every particle \(\mathbf{h}_{i}^{(k)}\) but also the corresponding weights \(w_{i}^{(k)}\) that are used in the resampling step. These weights typically correspond to the probability of the particle being equal to the ground-truth object state. However, in our experiments such weights are the logarithms of the corresponding probabilities, therefore the normalization step after the update has the given form (see the last line in (11)). After that, we adjust the resampling step to deal with the logarithms of the weights properly, see the paragraph below.
Resampling procedure.After the inference stage in the considered RNN cells, one has to make resampling, to mitigate the potential degeneracy problem. There are different approaches to performing resampling [29, 30]. The main requirement for the resampling procedure in the parametric model is to be differentiable. Therefore, the stochastic resampling (8) is not directly fitted to the considered model. Instead, the Soft Resampling procedure [3] was proposed as a trade-off between the accuracy and the related costs. This approach to resampling considers a mixture of the distribution induced by weights and the uniform distribution with probabilities \(1/K\). Therefore, the formula for updating weights and hidden states reads as follows.
\[\begin{split}& i_{1},\ldots,i_{K}\sim\mathrm{Multinomial}(\alpha w_{i+ 1}^{1}+(1-\alpha)/K,\ldots,\alpha w_{i+1}^{K}+(1-\alpha)/K)\\ &\mathbf{h}_{i+1}^{1},\ldots,\mathbf{h}_{i+1}^{K}\leftarrow \mathbf{h}_{i+1}^{i_{1}},\ldots,\mathbf{h}_{i+1}^{i_{K}}\\ & w_{i+1}^{k}\leftarrow\frac{w_{i+1}^{i_{k}}}{\alpha w_{i+1}^{i_{ k}}+(1-\alpha)/K},\end{split} \tag{10}\]
where \(\alpha>0\) to make the operation differentiable. Note that similar to the stochastic resampling, the updated hidden states \(\mathbf{h}_{i}^{k}\) are slightly perturbed. Section 5 provides more details on the usage of soft resampling in our experiments.
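A rough PyTorch sketch of Soft Resampling (10), adapted to the log-weights mentioned above, is shown below. The final renormalization in log-space is an implementation choice added here for convenience and is not part of (10).

```python
import torch

# Sketch of Soft Resampling (10) with log-weights. alpha mixes the
# weight-induced distribution with a uniform one; gradients flow through the
# importance-corrected weights, keeping the step usable during training.
def soft_resample(hidden, log_w, alpha=0.5):
    K = log_w.shape[0]
    probs = torch.softmax(log_w, dim=0)
    mix = alpha * probs + (1.0 - alpha) / K
    idx = torch.multinomial(mix, K, replacement=True)
    hidden = hidden[idx]
    new_w = probs[idx] / mix[idx]                 # importance correction
    log_w = torch.log(new_w + 1e-12)
    return hidden, log_w - torch.logsumexp(log_w, dim=0)

h = torch.randn(100, 64)                          # 100 particles, hidden size 64
lw = torch.log(torch.full((100,), 1.0 / 100))
h, lw = soft_resample(h, lw)
```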
mePFRNN.Since PFRNN encodes the environment with the convolution operation, it requires training a number of parameters proportional to the environment size. To reduce the number of trainable parameters, we do not use the data about an environment as input to our model since such data, like beacons' and obstacles' positions, have to be implicitly extracted in the training stage. We expect such behavior of the considered mePFRNN since the environment is the external factor to the localization problem and stays the same over the particular trajectory. The motion and measurement vectors corresponding to every particle are embedded into a high dimensional space via linear layer and ReLU non-linearity. Then, the obtained embeddings are concatenated and processed by a linear layer with LeakyReLU non-linearity. The result of the latter operation is motion embedding \(\mathbf{e}_{\mathbf{u}}^{(k)}\) for every particle, which is additional input to the proposed mePFRNN cell. The encoding procedure described above is summarized in scheme (12).
\[\begin{split}&\mathbf{x}_{i}^{(k)}\rightarrow\mathrm{Linear}\rightarrow\mathrm{ReLU}\searrow\\ &\qquad\qquad\qquad\qquad\qquad[\,\cdot\,,\,\cdot\,]\rightarrow\mathrm{Linear}\rightarrow\mathrm{LeakyReLU}\rightarrow\mathbf{e}_{\mathbf{u}}^{(k)}\\ &\mathbf{y}_{i}^{(k)}\rightarrow\mathrm{Linear}\rightarrow\mathrm{ReLU}\nearrow\end{split}\tag{12}\]
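A possible PyTorch realization of this encoder is sketched below; the embedding size and input dimensions are hypothetical and only illustrate the structure of scheme (12).

```python
import torch
import torch.nn as nn

# Sketch of the mePFRNN input encoding (12): motion and measurement vectors
# are embedded separately with Linear+ReLU, concatenated, and mixed with
# Linear+LeakyReLU into the per-particle motion embedding e_u.
class MotionMeasurementEncoder(nn.Module):
    def __init__(self, d=3, k=5, emb=32):
        super().__init__()
        self.enc_x = nn.Sequential(nn.Linear(d, emb), nn.ReLU())
        self.enc_y = nn.Sequential(nn.Linear(k, emb), nn.ReLU())
        self.mix = nn.Sequential(nn.Linear(2 * emb, emb), nn.LeakyReLU())

    def forward(self, x, y):                 # x: (K, d), y: (K, k)
        e = torch.cat([self.enc_x(x), self.enc_y(y)], dim=-1)
        return self.mix(e)                   # e_u: (K, emb)
```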
Thus, mePFRNN is a voxel-independent model that can be easily used in very large environments without increasing the number of trainable parameters. One more benefit of the proposed approach becomes crucial if the beacons in the environment are located not in the middle of the artificially generated voxels in the PFRNN model. These voxels compose a grid for the considered environment to identify the beacons and obstacles with convolution encoding. In this case, the convolution operation does not adequately encode the beacons' positions and makes further filtering more noisy. The resulting cell is shown in Figure 2 graphically and in equations (13) analytically, where \(MLP\) consists of two sequential linear layers and intermediate \(\mathrm{LeakyReLU}\) nonlinearity. Note that, the Soft Resampling procedure is also used here similar to the PFRNN model described above.
\[\begin{split}\mathbf{e}_{i}^{(k)}&=[\mathbf{e}_{\mathbf{u}}^{(k)},\mathbf{y}_{i}^{(k)}]\\ \mathbf{z}_{i}^{(k)}&=\sigma(\mathbf{W}_{z}[\mathbf{h}_{i-1}^{(k)},\mathbf{e}_{i}^{(k)}]+\mathbf{b}_{z})\\ \mathbf{r}_{i}^{(k)}&=\sigma(\mathbf{W}_{r}[\mathbf{h}_{i-1}^{(k)},\mathbf{e}_{i}^{(k)}]+\mathbf{b}_{r})\\ \mathbf{\mu}_{i}^{(k)}&=\mathbf{W}_{\mathbf{\mu}}[\mathbf{r}_{i}^{(k)}\odot\mathbf{h}_{i-1}^{(k)},\mathbf{e}_{i}^{(k)}]+\mathbf{b}_{\mathbf{\mu}}\\ \mathbf{\Sigma}_{i}^{(k)}&=\mathbf{W}_{\mathbf{\Sigma}}[\mathbf{r}_{i}^{(k)}\odot\mathbf{h}_{i-1}^{(k)},\mathbf{e}_{i}^{(k)}]+\mathbf{b}_{\mathbf{\Sigma}}\\ \epsilon&\sim\mathcal{N}(0,\mathbf{I})\\ \mathbf{d}_{i}^{(k)}&=\mathrm{LeakyReLU}(\mathrm{BN}(\mathbf{\mu}_{i}^{(k)}+\mathbf{\Sigma}_{i}^{(k)}\odot\epsilon))\\ \mathbf{h}_{i}^{(k)}&=(1-\mathbf{z}_{i}^{(k)})\odot\mathbf{d}_{i}^{(k)}+\mathbf{z}_{i}^{(k)}\odot\mathbf{h}_{i-1}^{(k)}\\ \mathbf{p}_{i}^{(k)}&=\mathbf{W}_{w}[\mathbf{o}_{i},\mathbf{h}_{i}^{(k)}]+b_{w}\\ w_{i}^{(k)}&=\mathbf{p}_{i}^{(k)}+w_{i-1}^{(k)}-\mathrm{LogSumExp}(\mathbf{p}_{i}^{(k)}+w_{i-1}^{(k)})\end{split}\tag{13}\]
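To make the update (13) concrete, a rough PyTorch sketch of the cell is given below. The hidden size, the use of BatchNorm over the particle axis, and feeding \([\mathbf{e}_{i},\mathbf{h}_{i}]\) into the weight head (in place of \(\mathbf{o}_{i}\)) are assumptions of this sketch, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

# Rough sketch of the mePFRNN update (13) for K particles at one time step.
class MePFRNNCellSketch(nn.Module):
    def __init__(self, emb=32, obs=5, hid=64):
        super().__init__()
        inp = hid + emb + obs                       # [h_{i-1}, e_i] with e_i = [e_u, y_i]
        self.Wz, self.Wr = nn.Linear(inp, hid), nn.Linear(inp, hid)
        self.Wmu, self.Wsig = nn.Linear(inp, hid), nn.Linear(inp, hid)
        self.bn = nn.BatchNorm1d(hid)               # normalizes over the particle axis
        self.Ww = nn.Linear(hid + emb + obs, 1)     # log-weight update head
        self.act = nn.LeakyReLU()

    def forward(self, e_u, y, h, log_w):            # (K, emb), (K, obs), (K, hid), (K,)
        e = torch.cat([e_u, y], dim=-1)
        he = torch.cat([h, e], dim=-1)
        z, r = torch.sigmoid(self.Wz(he)), torch.sigmoid(self.Wr(he))
        rhe = torch.cat([r * h, e], dim=-1)
        mu, sig = self.Wmu(rhe), self.Wsig(rhe)
        d = self.act(self.bn(mu + sig * torch.randn_like(mu)))   # reparametrization trick
        h_new = (1 - z) * d + z * h
        p = self.Ww(torch.cat([e, h_new], dim=-1)).squeeze(-1)
        log_w = p + log_w
        return h_new, log_w - torch.logsumexp(log_w, dim=0)
```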
Figure 1: Baseline PFRNN cell design, where \(\mathbf{a},\mathbf{o},\mathbf{z},\mathbf{c}\) denote the linear layers used to compute the corresponding intermediate embeddings. For simplicity, we omit the superscript \(k\) that indicates the particle index. This cell updates both particle hidden states and weights. Element-wise addition and multiplication are denoted by \(+\) and \(\odot\).
Figure 2: The proposed mePFRNN cell. Square block means elementwise square of the input. MLP consists of two sequential linear layers and LeakyReLU intermediate nonlinearity. Other notation is similar to the PFRNN.
Alternative GRU-based models.In addition to the proposed mePFRNN model, we also propose two approaches to exploiting the classical GRU model (see Figure 3 and equations (14)) in the object localization problem. Namely, the EnsembleGRU model consists of many small GRU cells whose predictions are averaged to estimate the target object state. The number of models in the ensemble and the number of trained parameters in every model are selected such that the total number of the trained parameters is approximately equal to # parameters in PFRNN times # particles. The complementary approach is just to use the single GRU cell, where the number of trained parameters is equal to # particles times # parameters in PFRNN. Both approaches are complementary to the PFRNN and mePFRNN models since they do not exploit particles. Also, note that the input to the GRU cell in EnsembleGRU and HeavyGRU models is the same as the input to the PFRNN cell.
## 5 Computational experiment
In this section, we demonstrate the performance of our model and compare it with alternative neural networks and non-parametric models. To train the compared neural networks we use the RMSProp optimizer [31], since it shows more stable convergence than Adam [32] and SGD with momentum [33], with a learning rate of \(5\cdot 10^{-4}\) and batches of 150 trajectories. The maximum number of epochs is 5000 for the considered environments. During the training stage, a validation set of trajectories is used to detect overfitting. Different environments require a different number of epochs before overfitting occurs. In particular, overfitting does not occur after 5000 epochs in the world \(10\times 10\). At the same time, overfitting is observed after 600 and 200 epochs in the World \(18\times 18\) and WORLD \(27\times 27\), respectively.
Trajectory generation procedure.To evaluate the considered methods and demonstrate the performance of the proposed mePFRNN, we consider four environments, see Figure 4. Environments world \(10\times 10\), World \(18\times 18\), and WORLD \(27\times 27\) are symmetric and therefore challenging for object localization since symmetric parts can be confused by a filtering method. The environment _Labyrinth_ is not symmetric and moderately challenging for filtering methods. Thus, the considered filtering methods are compared comprehensively due to the diversity of the testing environments.
To train the parametric models we need to generate a set of trajectories \(\{\mathbf{x}_{i}^{\star}\}_{i=1}^{N}\). Since our tests assume that the object's initial state is unknown, we set the initial state \(\mathbf{x}_{0}^{\star}\) randomly for all generated trajectories. Initial states do not intersect with obstacles. Then, every next iteration updates the object state according to the motion equation, where external velocity \(u\in[0,0.2]\) is known and the direction is preserved from the previous step within the noise. In the case of a collision with an obstacle, the object's direction is changed randomly such that the next state does not indicate the collision. To simulate engine noise, the velocity \(u\) is perturbed by \(\eta_{r}\sim\mathcal{U}[-0.02,0.02]\). To simulate uncertainty in the object control system, the direction \(\phi\) is also perturbed by \(\eta_{\phi}\sim 2\pi\alpha\), where \(\alpha\sim\mathcal{U}[-0.01,0.01]\). The measurements \(\mathbf{y}_{i}\) are the distances to the five nearest beacons, which are also noisy with the noise distributed as \(\zeta\sim\mathcal{U}[-0.1,0.1]\). In the considered environments, we set the number of time steps in every trajectory \(N=100\).
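The sketch below mirrors this generation loop for a single trajectory; obstacle and collision handling are omitted, and the beacon layout is a placeholder.

```python
import numpy as np

# Sketch of the trajectory/measurement generation described above (collision
# handling with obstacles is omitted for brevity).
def simulate(beacons, n_steps=100, u=0.2, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 10.0, size=2)                 # random initial position
    phi = rng.uniform(0.0, 2.0 * np.pi)                  # random initial heading
    states, measurements = [], []
    for _ in range(n_steps):
        phi += 2.0 * np.pi * rng.uniform(-0.01, 0.01)    # control-system noise
        speed = u + rng.uniform(-0.02, 0.02)             # engine noise
        pos = pos + speed * np.array([np.cos(phi), np.sin(phi)])
        dists = np.linalg.norm(beacons - pos, axis=1)
        y = np.sort(dists)[:5] + rng.uniform(-0.1, 0.1, size=5)  # 5 nearest beacons, noisy
        states.append(np.array([pos[0], pos[1], phi]))
        measurements.append(y)
    return np.array(states), np.array(measurements)

beacons = np.array([[2.0, 2.0], [2.0, 8.0], [8.0, 2.0], [8.0, 8.0], [5.0, 5.0], [1.0, 5.0]])
x_true, y_meas = simulate(beacons)
```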
To train the considered parametric models, we generate 8000 trajectories, 1000 trajectories for validation, and an additional 10000 trajectories for the testing stage. During the training process, the MSE loss is computed for the validation trajectories and if the obtained value is smaller than the current best one, then the best model is updated. This scheme helps to store the best model during the training and avoid overfitting.
Figure 3: Standard GRU cell, where \(\mathbf{z}\) and \(\mathbf{r}\) denote the linear layers to compute \(\mathbf{z}_{i}\) and \(\mathbf{r}_{i}\), Lin denotes linear layers to compute \(\hat{\mathbf{h}}_{i}\), see (14).
The list of compared models.We compare the proposed mePFRNN model with the following competitors combined in two groups. The first group consists of alternative recurrent neural networks that can solve the object localization problem, in particular the baseline PFRNN model from [3], HeavyGRU, and EnsembleGRU models. Following the study [3] we use the \(wMSE\) loss function (4) to train alternative neural network models and use \(L\) loss function (5) to train the proposed mePFRNN model. Such a choice of training setup highlights the benefit of the proposed loss function \(L\). In both settings, we use \(\beta=0.1\).
The second group consists of the particle filter (PF) and the multiparticle Kalman filter (MKF). We include these methods in the experiments to compare the performance of the parametric and non-parametric models. The performance is measured in terms of MSE\({}_{c}\), FSE, number of trained parameters, training time, and inference time. Note that, non-parametric models do not require training, therefore they are more lightweight. However, to get high accuracy a lot of particles are needed which leads to long runtime. Thus, for adequate comparison with neural methods, the classical filters were used with fewer particles to show a similar runtime as neural network-based models in the inference mode. In addition, we use stochastic resampling in the non-parametric models and Soft Resampling in the parametric ones. However, the Soft Resampling procedure for the non-parametric models does not significantly change the final performance. The comparison of the aforementioned models is presented in the next paragraph.
Discussion of the results.In the experimental evaluation, we compare non-parametric and parametric models in the four test environments described above. The obtained results are summarised in Table 1. We also track the number of trained parameters, the amount of memory necessary to store them, and the runtime to update the object state in one step. The table shows that the proposed mePFRNN model gives the best or the second-best MSE\({}_{c}\) score for the considered environments. At the same time, the FSE score is typically smaller for HeavyGRU or EnsembleGRU in the considered environments. One more important factor is the number of trainable parameters: the smaller the number of parameters, the easier it is to embed the model in hardware. The mePFRNN model requires fewer trainable parameters than the other parametric models, i.e. PFRNN, HeavyGRU, and EnsembleGRU. A last but not least feature of the considered models is the inference time, i.e. the runtime to update the object state from the \(i\)-th to the \((i+1)\)-th time step. mePFRNN is slightly faster than PFRNN, and HeavyGRU appears to be the fastest model in the inference stage. Thus, we can conclude that the proposed mePFRNN model provides a reasonable trade-off between MSE\({}_{c}\) score, number of trainable parameters, and inference time among the considered parametric and non-parametric models tested in the selected benchmark environments.
Figure 4: Visualization of test environments. Black crosses denote beacons, and grey blocks denote obstacles. The upper row represents the very symmetric environments that are especially challenging for solving the localization problem. The _Labyrinth_ environment is not symmetric and is similar to the environment, which was used for the evaluation of filtering methods in [27].
The number of particles chosen in Table 2 is such that the inference runtime is close to the inference runtime of the considered neural networks. Since in Table 1 we fix the particular number of particles in non-parametric models, we present the MSE\({}_{c}\) and FSE losses for the larger number of particles in Table 2. It shows that if the number of particles is sufficiently large, both MSE\({}_{c}\) and FSE values are smaller than the corresponding values for parametric models. However, such an accurate estimation of states requires a much slower inference runtime compared to the considered parametric models. Thus, the neural network-based filters are of significant interest since they can show better accuracy compared to non-parametric models and provide faster updates of the object's state.
## 6 Conclusion
We present the novel recurrent neural network architecture mePFRNN to solve the object localization problem. It combines the standard GRU RNN, the particle filter, and explicit measurements of distances from the object to the beacons. The latter feature makes the proposed model memory-efficient since the number of trainable parameters does not depend on the environment size. We compare the proposed mePFRNN model with the general-purpose PFRNN model and two modifications of the standard GRU RNN. The test environments consist of symmetric environments of different sizes and the non-symmetric _Labyrinth_ environment. Such diversity of the test environments leads to a comprehensive comparison of the considered parametric models for the object localization problem. The mePFRNN model is simultaneously slightly faster in inference than the baseline PFRNN and filters the object's coordinates more precisely in the considered symmetric environments along the trajectory. Moreover, mePFRNN does not exploit explicit data about the environment or the corresponding embeddings. At the same time, the proposed mePFRNN model outperforms competitors in MSE values for most of the considered test environments.
|
2303.07962 | Large impact of phonon lineshapes on the superconductivity of solid
hydrogen | Phonon anharmonicity plays a crucial role in determining the stability and
vibrational properties of high-pressure hydrides. Furthermore, strong
anharmonicity can render phonon quasiparticle picture obsolete questioning
standard approaches for modeling superconductivity in these material systems.
In this work, we show the effects of non-Lorentzian phonon lineshapes on the
superconductivity of high-pressure solid hydrogen. We calculate the
superconducting critical temperature T$_\mathrm{C}$ \emph{ab initio}
considering the full phonon spectral function and show that it overall enhances
the T$_\mathrm{C}$ estimate. The anharmonicity-induced phonon softening
exhibited in spectral functions increases the estimate of the critical
temperature, while the broadening of phonon lines due to phonon-phonon
interaction decreases it. Our calculations also reveal that superconductivity
emerges in hydrogen in the $Cmca-12$ molecular phase VI at pressures between
450 and 500 GPa and explain the disagreement between the previous theoretical
results and experiments. | Đorđe Dangić, Lorenzo Monacelli, Raffaello Bianco, Francesco Mauri, Ion Errea | 2023-03-14T15:06:00Z | http://arxiv.org/abs/2303.07962v3 | # Large impact of phonon lineshapes on the superconductivity of solid hydrogen
###### Abstract
Phonon anharmonicity plays a crucial role in determining the stability and vibrational properties of high-pressure hydrides. Furthermore, strong anharmonicity can render the phonon quasiparticle picture obsolete. In this work, we show the effects of non-Lorentzian phonon lineshapes on the superconductivity of high-pressure solid hydrogen. We calculate the superconducting critical temperature T\({}_{\mathrm{C}}\)_ab initio_ considering the full anharmonic lineshape of the phonon spectral function and show that it substantially enhances T\({}_{\mathrm{C}}\). Despite previous anharmonic calculations estimating a weak anharmonicity in atomic hydrogen, considering the full spectral function enhances the critical temperature by 100 K, pushing it above 400 K. Our calculations also reveal that superconductivity emerges in hydrogen in the Cmca-12 molecular phase VI at pressures between 450 and 500 GPa.
Solid atomic hydrogen was postulated to be a high-temperature superconductor at high pressures by Ashcroft in 1968 [1]. Later this idea has been revised and hydrogen-rich compounds have been hypothesized to be high-temperature superconductors at pressures that are only a fraction of the one needed to get atomic hydrogen [2; 3]. The first experimental verification of that idea came in 2015 when H\({}_{3}\)S was shown to have a transition temperature of 203 K at 155 GPa [4]. This has been followed up by numerous experiments on different hydrogen compounds, many of them exhibiting high-temperature superconductivity [5; 6; 7; 8; 9; 10; 11], verifying without a reasonable doubt the existence of superconductivity in hydrides at high pressures [12].
The discovery of high-temperature superconductivity renewed the interest in synthesizing atomic metallic hydrogen, which is expected to superconduct above room temperature [13; 14; 15; 16]. Recently, a work reported atomic metallic hydrogen at 495 GPa on the basis of enhanced optical reflectivity [17]. While this finding was questioned [18] due to a probable overestimation of the measured pressure, there is abundant evidence of metallization of solid hydrogen in the molecular phase [19; 20]. None of these works, however, observed the transition to the superconducting phase up to 440 GPa [21].
A better understanding of the high-pressure solid hydrogen phase diagram was provided by recent first-principles calculations considering both electronic correlations beyond density functional theory (DFT) and nuclear quantum effects [22; 23; 24]. Monacelli et al. show that at pressures lower than 422 GPa hydrogen crystallizes in the C2/c-24 phase, with 24 atoms in the primitive unit cell (phase III of solid hydrogen). In a pressure range between 422 and 577 GPa hydrogen transforms to the Cmca-12 phase, with 12 atoms per unit cell (phase VI). The value of 422 GPa agrees very well with the experimental transition pressures detected by infrared at 420 GPa [20] and by Raman at 440 GPa [19]. Finally, at pressures higher than 577 GPa, hydrogen transforms into atomic hydrogen with a tetragonal I4\({}_{1}\)/amd-2 structure, containing two atoms per primitive unit cell.
One of the key reasons why studies in Refs. [22; 23] were able to successfully model the phase diagram of solid hydrogen was the inclusion of quantum anharmonic effects. The phonon renormalization due to anharmonicity can significantly alter superconductivity, as shown in Refs. [25; 26; 27; 28; 29]. However, these studies have not explored the anharmonicity-induced dynamical renormalization of phonons and its impact on superconductivity. Some studies have highlighted the importance of these effects on superconductivity utilizing simple single phonon mode toy models [30; 31]. On the other hand, dynamical renormalization of phonons due to electron-phonon coupling has been shown to have little impact on the critical temperature [32] of conventional superconductors. However, the dynamical effects due to phonon-phonon interaction should be much stronger in high-pressure hydrides, and thus a full first principle study of these effects is necessary.
Here we present a first-principles study of the superconducting properties of solid hydrogen in its high
pressure phases from 300 to 600 GPa by accounting for quantum anharmonic effects both on the phonons and the structure with the stochastic self-consistent harmonic approximation (SSCHA) at zero Kelvin [33]. We find that the SSCHA appreciably changes the structure of solid hydrogen in all phases, which leads to an increased density of states (DOS) at the Fermi level and an overall phonon softening. These two effects combine to increase the electron-phonon coupling constants and superconducting transition temperatures in the SSCHA structures, at odds with previous calculations that neglect the impact of ionic quantum effects on the structure [28; 29]. We also show that the phonon spectral functions of all these phases have a complex and broad shape, clearly deviating from a simple Lorentzian, questioning the standard approximation made in the electron-phonon calculations in which the spectral function is represented with a Dirac delta function. By considering the full spectral function for the first time ever, we show that the critical temperature (T\({}_{\rm C}\)) of both molecular and atomic phases is considerably enhanced, especially for the latter, where the critical temperature is boosted above 400 K. Our calculations predict the onset of superconductivity in solid hydrogen in the semimetallic molecular phase VI at pressures between 450 and 500 GPa, in agreement with recent experiments [19].
Quantum anharmonic effects have a large impact on the structures in the phase diagram as shown in Fig. 1. There is a discontinuity in volume at the phase transition between molecular and atomic phases, not evident for the transition between molecular phases III and VI. This discontinuity is partly suppressed in the quantum anharmonic SSCHA structures. The SSCHA expands the structure slightly for all phases, most prominently for the atomic phase, increasing bond lengths and the \(c/a\) ratio at all pressures, as it has been already calculated in other high-pressure hydrides [34; 35]. Importantly, SSCHA changes the qualitative behavior of bond lengths in molecular phases: while in SSCHA the bond length increases with pressure, in the harmonic approximation it stays relatively constant [22].
These changes have a significant effect on the electronic and vibrational properties of solid hydrogen (see Figs. 1 and 2). The most prominent impact is the increase of the DOS at the Fermi level in the quantum anharmonic SSCHA structures. In the molecular phase VI, decreasing volume leads to an increase in the DOS, but with a considerably higher slope for the SSCHA structures than for the harmonic ones. This behavior shows that quantum anharmonic effects tend to increase the DOS at the Fermi level, as already described in several hydrides [35; 36]. Molecular phase III is only weakly semimetallic up to 450 GPa and will not be discussed further on, as, thus, it cannot superconduct despite prior claims [29]. Closing of the fundamental band gap in our calculations occurs above 400 GPa, which is slightly overestimated compared to calculations that include both better approximation for the exchange-correlation functional and the effect of the electron-phonon coupling [22; 24].
In addition to the structure modified by quantum nuclear effects, SSCHA method allows us to obtain second-order force constants renormalized by anharmonicity. Quantum anharmonicity softens phonon frequencies as a consequence of the stretching of the H bonds (see Fig. 1). This is at odds with recent calculations [28; 29], in which the frequencies of the phonon modes excluding the vibrons increase due to anharmonicity. The difference is that in the latter case the effect of the quantum zero-point fluctuations on the structure was neglected, which our calculations show to be important. Both the increase of the DOS at the Fermi level and the phonon softening are beneficial for superconductivity since the electron-phonon coupling constant scales inversely with phonon frequencies and linearly with the DOS at the Fermi level.
Beyond the renormalization of structural parameters and phonon frequencies, anharmonicity has a huge impact on the phonon spectral function (see Supplementary Material for more details [33]). The spectral function of all phases shows further softening with respect to the auxiliary SSCHA phonon frequencies, especially for high-frequency optical modes. More remarkably, even at vanishing temperatures, we predict a huge broadening of the phonon spectral functions, clearly deviating from
Figure 1: (a) Volume of the primitive unit cell per hydrogen atom, (b) length of the hydrogen-hydrogen bond, (c) the electronic DOS at the Fermi level per hydrogen atom and (d) the average phonon frequency in high-pressure solid hydrogen. Solid lines represent data obtained with SSCHA and dashed lines with the harmonic approximation. The color background shows a phase diagram of the solid hydrogen from Ref. [23] and the color of the lines indicate for which phase calculations were performed.
the standard Lorentzian line shape. We illustrate this in Fig. 2, where phonon spectral functions for selected modes at \(\Gamma\) point are presented for structures at 500 GPa in molecular phase VI and atomic phase. We report two representative modes for molecular phase VI: a global lattice vibration (phonon mode) and a stretching of H\({}_{2}\) molecule (vibron mode). In the atomic phase, we only have two optical modes that are non-degenerate and we show both of them. The shift of the phonon frequency is very large in all cases. Additionally, all modes, except the E\({}_{\text{g}}\) one in the atomic phase, have a huge broadening of the phonon spectral function of thousands of cm\({}^{-1}\) and a clear non-Lorentzian line shape. Such anomalous behavior questions the standard practice of approximating the spectral function with slightly smeared Delta functions in first-principles calculations of the superconducting critical temperatures. In fact, it has already been shown that non-Lorentzian lineshapes can have a non-negligible effect on other properties of materials, i.e. the lattice thermal conductivity in highly anharmonic semiconducting chalcogenides [37].
The isotropic Eliashberg function of the electron-phonon interaction can be calculated keeping the full anharmonic spectral function as [38]
\[\alpha^{2}F(\omega)=\frac{1}{N_{\mathbf{q}}}\sum_{ab\mathbf{q}}\frac{\Delta^{ ab}(\mathbf{q})\sigma_{ab}(\mathbf{q},\omega)}{\omega\sqrt{m_{a}m_{b}}}, \tag{1}\]
where \(\sigma_{ab}(\mathbf{q},\omega)\) is the phonon spectral function in the Cartesian basis with wave number \(\mathbf{q}\) (see Supplementary Material for more details [33]). In Eq. (1) \(a\) and \(b\) label both atoms and a Cartesian direction, \(\Delta^{ab}(\mathbf{q})\) represents the average of the deformation potential over the Fermi surface, \(m_{a}\) is the mass of atom \(a\), and \(N_{\mathbf{q}}\) is the number of \(\mathbf{q}\) points in the sum. In the harmonic case, this quantity is calculated for the structure that minimizes the BOES, while in the SSCHA it is calculated for the structure that minimizes the free energy.
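As a purely numerical illustration, the sketch below evaluates Eq. (1) on gridded data and integrates it into the electron-phonon coupling constant \(\lambda\); the array shapes, units, and variable names are assumptions made only for this example.

```python
import numpy as np

# Numerical sketch of Eq. (1): alpha^2 F(omega) as a q-point average of the
# deformation-potential-weighted phonon spectral function. The omega grid is
# assumed to be strictly positive.
def alpha2F(delta, sigma, masses, omega):
    # delta:  (Nq, 3n, 3n)      Fermi-surface average of the deformation potential
    # sigma:  (Nq, 3n, 3n, Nw)  Cartesian phonon spectral function on the omega grid
    # masses: (3n,)             atomic mass associated with each Cartesian component
    nq = delta.shape[0]
    inv_mass = 1.0 / np.sqrt(np.outer(masses, masses))      # 1 / sqrt(m_a m_b)
    out = np.zeros_like(omega)
    for q in range(nq):
        out += np.einsum("ab,abw->w", delta[q] * inv_mass, sigma[q])
    return out / (nq * omega)

def lambda_ep(a2f, omega):
    # lambda = 2 * integral d(omega) alpha^2 F(omega) / omega
    return 2.0 * np.trapz(a2f / omega, omega)
```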
All calculations thus far that have accounted for anharmonicity in the calculation of \(\alpha^{2}F(\omega)\) have been performed assuming that \(\sigma_{ab}(\mathbf{q},\omega)\) can be expressed as [35; 36; 25; 26; 15; 27]\(\sigma_{ab}(\mathbf{q},\omega)=\sum_{\mu}e_{\mu}^{a}(\mathbf{q})e_{\mu}^{b*}( \mathbf{q})\sigma_{\mu}^{h}(\mathbf{q},\omega)\), where the harmonic spectral function \(\sigma_{\mu}^{h}(\mathbf{q},\omega)\) of mode \(\mu\) and wave number \(\mathbf{q}\) is a Delta function centered at the harmonic or SSCHA auxiliary phonon frequency, and \(\mathbf{e}_{\mu}(\mathbf{q})\) are either harmonic or SSCHA phonon eigenvectors. As in practical implementations, the Delta functions are numerically approximated with a Gaussian function of fixed spread, we label this approach as _Gaussian_.
However, as we have shown in Fig. 2, anharmonicity can drastically affect the phonon lineshapes. Using the same projection method one can obtain \(\sigma_{ab}(\mathbf{q},\omega)=\sum_{\mu}e_{\mu}^{a}(\mathbf{q})e_{\mu}^{b*}( \mathbf{q})\sigma_{\mu}(\mathbf{q},\omega)\), where \(\sigma_{\mu}(\mathbf{q},\omega)\) are the diagonal phonon spectral functions calculated in the _no-mode mixing_ approximation accounting for the phonon-phonon interaction in the dynamical bubble approximation (see Supplementary Material for more details). The polarization vectors used in the projection, in this case, are those obtained from the SSCHA auxiliary dynamical matrices. By calculating \(\alpha^{2}F(\omega)\) with the \(\sigma_{ab}(\mathbf{q},\omega)\) in the no-mode mixing approximation, we see that the anharmonic spectral function has a huge impact, as shown in Fig. 3. The softening of the phonon modes is also evident in the Eliashberg spectral functions. Additionally, the broadening of the phonon lineshapes leads to the complete closing of the gap between hydrogen vibron and phonon branches in the molecular phase VI. The softening of the phonon modes in the SSCHA coupled with a higher DOS at the Fermi level in the SSCHA structures leads to higher values of the electron-phonon coupling constant \(\lambda\) in most cases compared to the harmonic result, more remarkably in the molecular phase VI. A notable exception is atomic hydrogen at 500 GPa, where the proximity to a phonon instability, which is suppressed by anharmonicity, drastically increases \(\lambda\) in the harmonic approximation (see Supplementary Material [33]).
The no-mode mixing approximation describes the phonon lineshapes very well [39] and is used almost ex
Figure 2: Phonon spectral functions in the no mode mixing approximation in mode basis, \(\sigma_{\mu}(\mathbf{q},\omega)\), of two representative optical phonon modes at \(\Gamma\) of solid hydrogen in (a) molecular Cmca-12 phase VI at 500 GPa, and (b) atomic tetragonal I4\({}_{1}\)/amd-2 phase at 500 GPa. In figure (b) we scaled the values of the E\({}_{\text{g}}\) mode in order to make the figures clearer. Thick dashed vertical lines represent the corresponding frequencies obtained from the auxiliary SSCHA force constants.
exclusively when discussing the anharmonic phonon lineshapes. However, we can go one step further and utilize the full spectral function without limiting ourselves to the standard no-mode mixing approximation. In this case, we do not assume that the phonon self-energy is diagonal in the phonon branch index, as it is done in the no-mode mixing case, and instead calculate the spectral function as \(\sigma_{ab}(\mathbf{q},\omega)=\sum_{\mu\nu}e_{\mu}^{a}(\mathbf{q})e_{\nu}^{b*}(\mathbf{q})\sigma_{\mu\nu}(\mathbf{q},\omega)\), fully accounting for off-diagonal terms, where again the polarization vectors are obtained from the SSCHA auxiliary dynamical matrices (see Supplementary Material [33]). The Eliashberg spectral functions in this approach have stronger spectral weights for frequencies near 1000 cm\({}^{-1}\) compared to the no-mode mixing approximation, especially in the atomic phase (see Fig. 3), which increases the electron-phonon coupling constant and T\({}_{\mathrm{C}}\). On the other hand, the phonon density of states calculated with the two methods (see Supplementary Material) is almost identical. Hence, the reason for the increase in the Eliashberg spectral function is nuanced and depends on the redistribution of spectral weights when the full spectral function is considered. These changes might be detected in experiments, at least in the atomic phase, as they could lead to qualitative changes in the reflectivity of the material [40], while the enhancement of \(\lambda\) is large enough to be experimentally resolved [41]. For comparison, we have checked that in the high-T\({}_{\mathrm{C}}\) superconducting H\({}_{3}\)S the Gaussian, no-mode mixing and full spectral function calculations yield similar results (see Supplementary Material), indicating that the large enhancement of \(\lambda\) observed in hydrogen by the full spectral function is system dependent and not universal.
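The difference between the two projections can be sketched numerically as below; the array shapes are assumptions, and the routine only illustrates keeping or discarding the off-diagonal \(\mu\neq\nu\) terms.

```python
import numpy as np

# Sketch of assembling the Cartesian spectral function from mode-resolved
# data: the "no-mode mixing" variant keeps only the diagonal mu = nu terms,
# the full variant keeps the off-diagonal ones as well.
def sigma_cartesian(evecs, sigma_mode, full=True):
    # evecs:      (3n, 3n)       column mu is the polarization vector e_mu(q)
    # sigma_mode: (3n, 3n, Nw)   mode-basis spectral function sigma_{mu nu}(q, omega)
    if not full:
        mask = np.eye(evecs.shape[1])
        sigma_mode = np.einsum("mnw,mn->mnw", sigma_mode, mask)
    return np.einsum("am,bn,mnw->abw", evecs, np.conj(evecs), sigma_mode)
```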
Solving isotropic Migdal-Eliashberg equations with the \(\alpha^{2}F(\omega)\) obtained considering the full spectral function [38; 42], we can estimate the impact of anharmonicity on the superconducting transition temperature (see Fig. 4). As mentioned above, the C2/c-24 phase of solid hydrogen does not exhibit superconducting behavior in the pressure range of interest. In the molecular phase VI the transition temperature is mostly linear with pressure and correlates well with the value of the DOS at the Fermi level. Because of this, the SSCHA structures consistently show higher transition temperatures than the classical harmonic ones. The difference in T\({}_{\mathrm{C}}\) between these two methods increases with pressure, again due to the stronger dependence of the electronic DOS on the pressure in the SSCHA structures (see Fig. 1), as well as due to the increased electron-phonon coupling due to the anharmonic softening of the phonon modes. Using the full spectral function increases T\({}_{\mathrm{C}}\) by around 60 K compared to the standard Gaussian approximation for SSCHA structures. Considering the critical dependence of T\({}_{\mathrm{C}}\) on the DOS at the Fermi level and that local exchange-correlation functionals tend to overestimate it [13; 15; 28; 29; 43; 44], we perform DFT calculations for the quantum SSCHA structures of phase VI using the B3LYP hybrid functional [45] (see Supplementary Material [33]), which yields the values in Fig. 4. We estimate thus that superconductivity will emerge in solid hydrogen in this phase between 450 and 500 GPa.
The critical temperature is mostly constant with pressure in the atomic tetragonal phase. In this phase, T\({}_{\mathrm{C}}\) is mostly decorrelated from the value of the electronic DOS at the Fermi level because the structures are far away from the metal-insulator phase transition [23] and, although quantum and anharmonic effects enhance the DOS as well, its relative increase is small compared to the molecular case. Including the full spectral function in the calculation of \(\alpha^{2}F(\omega)\) increases the critical temperature significantly, by about 100 K for all pressures compared to the Gaussian approximation. This highlights the important role that anharmonicity plays in the superconductivity of high-pressure hydrogen also in the atomic phase, contrary to the previous calculations that only estimated its effect within the Gaussian approximation of the spectral function [15].
Figure 3: Eliashberg spectral function \(\alpha^{2}F(\omega)\) and integrated electron-phonon coupling constant \(\lambda(\omega)\) of solid hydrogen in (a) molecular Cmca-12 phase VI at 500 GPa, and (b) atomic tetragonal I4\({}_{1}\)/amd-2 phase at 500 GPa, calculated in the SSCHA using the spectral function computed fully and in the no mode mixing approximation, and in the harmonic case using the Gaussian method.

In conclusion, our first-principles calculations considering ionic quantum effects and anharmonicity show that superconductivity will emerge in solid hydrogen in molecular phase VI, between 450 and 500 GPa, and T\({}_{\rm C}\) will rapidly soar with pressure. We expect a jump of T\({}_{\rm C}\) to approximately 400 K at the transition to the atomic phase. Quantum anharmonic effects have a huge impact on the structural, vibrational, and superconducting properties of both molecular and atomic phases by, for instance, increasing the H-H bonds and making the phonon spectral functions extremely broad and anomalous. We show that considering the full phonon spectral function in the calculation of \(\alpha^{2}F(\omega)\) enhances the predicted critical temperature by 100 K in the atomic phase and 60 K in the molecular phase VI.
This work is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 802533) and the Department of Education, Universities and Research of the Eusko Jaurlaritza and the University of the Basque Country UPV/EHU (Grant No. IT1527-22). L.M. acknowledges the European Union MSCA-IF fellowship for funding the project THERMOH. We acknowledge PRACE for awarding us access to Joliot-Curie Rome at TGCC, France.
|
2307.04744 | Behavioral Analysis of Pathological Speaker Embeddings of Patients
During Oncological Treatment of Oral Cancer | In this paper, we analyze the behavior of speaker embeddings of patients
during oral cancer treatment. First, we found that pre- and post-treatment
speaker embeddings differ significantly, notifying a substantial change in
voice characteristics. However, a partial recovery to pre-operative voice
traits is observed after 12 months post-operation. Secondly, the same-speaker
similarity at distinct treatment stages is similar to healthy speakers,
indicating that the embeddings can capture characterizing features of even
severely impaired speech. Finally, a speaker verification analysis signifies a
stable false positive rate and variable false negative rate when combining
speech samples of different treatment stages. This indicates robustness of the
embeddings towards other speakers, while still capturing the changing voice
characteristics during treatment. To the best of our knowledge, this is the
first analysis of speaker embeddings during oral cancer treatment of patients. | Jenthe Thienpondt, Caroline M. Speksnijder, Kris Demuynck | 2023-07-10T17:53:21Z | http://arxiv.org/abs/2307.04744v2 | Behavioral Analysis of Pathological Speaker Embeddings of Patients During Oncological Treatment of Oral Cancer
###### Abstract
In this paper, we analyze the behavior of speaker embeddings of patients during oral cancer treatment. First, we found that pre- and post-treatment speaker embeddings differ significantly, indicating a substantial change in voice characteristics. However, a partial recovery to pre-operative voice traits is observed 12 months post-operation. Secondly, the same-speaker similarity at distinct treatment stages is similar to that of healthy speakers, indicating that the embeddings can capture characterizing features of even severely impaired speech. Finally, a speaker verification analysis shows a stable false positive rate and a variable false negative rate when combining speech samples of different treatment stages. This indicates robustness of the embeddings towards other speakers, while still capturing the changing voice characteristics during treatment. To the best of our knowledge, this is the first analysis of speaker embeddings during oral cancer treatment of patients.
Jenthe Thienpondt\({}^{1}\), Caroline M. Speksnijder\({}^{2}\), Kris Demuynck\({}^{1}\)\({}^{1}\)IDLab, Department of Electronics and Information Systems, Ghent University - imec, Belgium
\({}^{2}\)Department of Oral and Maxillofacial Surgery and Special Dental Care, University Medical Center Utrecht, Utrecht University, The Netherlands
[email protected], [email protected], [email protected]
**Index Terms**: pathological speaker embeddings, oral cancer treatment, speaker recognition
## 1 Introduction
Oral cancer is a type of cancer that can develop in various locations within the oral cavity, predominantly originating in the tissues of the mouth [1]. It is a serious and potentially life-threatening condition that can cause significant damage to the affected tissues and spread to other parts of the body. Common risk factors for oral cancer include tobacco usage and excessive alcohol consumption [2, 3]. Treatment options for oral cancer typically include surgery, radiation therapy and chemotherapy, which may be used in isolation or in conjunction with each other, depending on the stage and location of the cancer.
In prior research, it has been shown that oncological treatment of oral cancer can be accompanied by impaired speech capabilities, including articulation and intelligibility [4, 5, 6]. Subsequent research found reduced speech abilities even after extensive recovery periods up to 12 months after surgical intervention [7]. Another study [8] showed a significant decrease in tongue function during oral cancer treatment, which can potentially be an important contributor to post-intervention speech impairment. Other studies [9, 10] observed a significant decrease in speech recognition transcription accuracy when comparing healthy speakers to a group of patients diagnosed with oral cancer in various treatment stages.
However, to the best of our knowledge, there is no prior research on the behavior of speaker embeddings of patients treated for oral cancer. Speaker embedding similarity, in contrast to conventional intelligibility rating systems, could provide an objective and text-independent measurement of changing voice characteristics without relying on any human perceptual evaluation of pathological speech.
In recent years, speaker verification has gained significant performance increases due to the availability of large and labeled datasets [11, 12], a significant increase in computational power and the advent of specialized deep learning models, including the x-vector architecture [13, 14], ECAPA-TDNN [15] and fwSE-ResNet [16]. Low-dimensional speaker embeddings can be extracted from these models and have shown to capture a wide variety of speaker characteristics, including gender, age, spoken language and emotional state [17, 18, 19].
In this paper, we analyze the behavior of speaker embeddings at different stages of oral cancer treatment with respect to multiple properties. First, we examine how the speaker characteristics, as captured by the speaker embeddings, evolve between the pre- and post-intervention stages. Subsequently, we compare this to previous research results and establish the feasibility of potentially using speaker embeddings during the oral cancer treatment procedure of a patient. Secondly, we assess the intra-session robustness of speaker embeddings of patients based on speech samples recorded in the same session during oral cancer treatment and compare this to a cohort of non-pathological speakers. Finally, we perform a speaker verification analysis combining utterances from several steps in the intervention trajectory of the patients, with the goal of analyzing the robustness of the pathological embeddings towards other speakers.
## 2 Pathological speaker embeddings
The speech samples in the analysis of this paper were collected from 57 Dutch patients with primary oral carcinoma taken at the University Medical Center Utrecht (UMC Utrecht) and the Radboud University Medical Center (Radboudumc) in the Netherlands between January 2007 and August 2009. The study protocol (study ID: NL1200604106) was approved by

| | # Male | # Female | # Total |
| --- | --- | --- | --- |
| **Tumor Stage** | | | |
| T1 (<2 cm) | 5 | 3 | 8 |
| T2 (2-4 cm) | 11 | 6 | 17 |
| T3 (>4 cm) | 3 | 3 | 6 |
| T4 (metastasis) | 15 | 11 | 26 |
| **Reconstruction Type** | | | |
| Primary Closure | 6 | 8 | 14 |
| Local Flap | 2 | 0 | 2 |
| Free Flap | 19 | 8 | 27 |
| Bone Flap | 7 | 7 | 14 |

Table 1: Dataset composition of patients with oral cancer.
the Ethics Committees of the UMC Utrecht and Radboudumc. All participants received written information and provided their signed informed consent. The oncological treatment of the patients consists of surgery and subsequent radiotherapy. In addition, samples were also collected from 60 healthy speakers, matched for age and gender, as the control group [20]. Speech samples of patients were taken within 4 weeks before oncological intervention, 4 to 6 weeks after both surgery and radiotherapy, and 6 and 12 months after surgery during the recovery phase. Speech samples of the healthy control group were taken only once. At each sampling session, two speech utterances are collected from each speaker by having them read two short, phonetically diverse texts, which will be referred to as _text1_ and _text2_ in this paper, respectively. The texts and recording equipment are kept consistent across all sampling sessions. The average duration of all collected speech samples is 49.6 seconds.
In addition, the tumor stage of the patients, as indicated by the T category of the commonly used TNM cancer staging system [21], was also collected during the pre-intervention period. The T variable ranges from T1, indicating small tumors, to T4, indicating large tumors which have potentially invaded nearby structures, known as metastasis. Furthermore, the reconstruction type of the oral cancer surgical procedure is also collected, consisting of primary closure, free flap, local flap, and bone flap reconstruction. Primary closure refers to the immediate closure of the incision after the removal of cancerous tissue. Local flap reconstruction uses adjacent oral cavity tissue to reconstruct the affected area after tumor removal, while free flap reconstruction uses tissue from another body part. Bone flap reconstruction is used to rebuild bone structures inside the oral cavity after removal of the cancer tumors. The composition of the speaker characteristics of the patients in the dataset is given in Table 1.
The speaker embeddings are extracted from the state-of-the-art speaker verification fwSE-ResNet34 model presented in [16]. This architecture extends the popular ResNet [22] backbone with a speech-adapted version of Squeeze-Excitation (SE) [23] and incorporates positional encodings to extend the spatial invariance of the 2D convolutional kernels with a notion of frequency positional information. The model is optimized using the Additive Angular Margin (AAM) softmax loss function [24], resulting in the cosine distance being the similarity metric between speaker embeddings. More information about the architecture and training procedure can be found in the accompanying paper [16]. We note that this includes using the same training set, which consists solely of the development part of VoxCeleb2 [12], with no form of subsequent domain adaptation to pathological speech.
## 3 Pathological speaker analysis
It has been shown that various functions related to the oral cavity are impacted by surgical and radiotherapy interventions, including the masticatory, swallowing and speech capabilities [25, 6, 8]. To analyze the evolution of the speaker-identifying characteristics of patients undergoing oral oncological treatment, we calculate the cosine similarities speaker-wise between the pre-operative _text1_ embedding and all _text2_ embeddings at different stages in the treatment trajectory. We also calculate the cosine similarities between the _text1_ and _text2_ embeddings of the healthy speakers to be compared to the pre-operative embedding behavior of the patients.
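As an illustration of this step, a minimal sketch of the similarity computation is given below; the 256-dimensional embeddings, the stage labels, and the random values are placeholder assumptions standing in for the actual extractor output.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine similarity between two speaker embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder embeddings for one patient (e.g., 256-dimensional vectors),
# standing in for the output of the embedding extractor.
rng = np.random.default_rng(0)
stages = ["pre-op", "post-op", "post-radio", "6-months", "12-months"]
text1_pre = rng.standard_normal(256)                    # pre-operative text1 embedding
text2 = {s: rng.standard_normal(256) for s in stages}   # text2 embedding at each stage

evolution = {s: cosine_similarity(text1_pre, emb) for s, emb in text2.items()}
print(evolution)
```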
Figure 1 depicts a box plot describing the evolution of the speaker similarity relative to the pre-operative speaker embeddings. We observe no significant difference between the pre-operative speaker similarity of the pathological group and that of the healthy set of speakers. While pre-operative speech impairment is usually limited for patients diagnosed with oral cancer in comparison to the post-intervention condition [26], it is encouraging to observe similar behavior of pre-operative pathological speakers and the healthy control group. Section 3.3 analyzes the intra-session robustness of the speaker embeddings in more detail.
A significant decrease in pre-operative speaker similarity is observed after surgical treatment of the patients. It has previously been shown that surgical intervention in oral cancer treatment has a significant negative impact on a wide variety of oral function abilities, including self-reported speech capability [27, 28]. These findings are reinforced by the comparable degradation between the pre-operative and post-operative speaker embedding similarity, which provides an objective and robust measurement of changing voice characteristics.
Radiotherapy during oral oncological treatment can potentially impact important tissues related to speech production [29]. However, the cumulative effect on oral function of post-operative radiotherapy strongly depends on variables such as tumor location, tumor stage and reconstruction type [27]. In our results, an additional significant change in voice characteristics is discerned after the post-operative radiotherapy stage in the treatment trajectory. We also observe a substantial increase in variability between pre-operative and post-radiotherapy speaker similarity, suggesting the final extent of change in voice characteristics is highly dependent on some underlying variables.
Both an increased pre-operative speaker similarity and decreased variability are noted after the 6-month recovery period, relative to the post-radiotherapy stage, with a similar trend in the following 6 months. This indicates that voice characteristics tend to return to the pre-operative state to a certain extent for at least a 1-year period post-intervention.
Figure 1: Tukey-style box plot depicting the evolution of pre-operative speaker embedding similarity of patients (n=57) during oral cancer treatment. Speaker similarity of a healthy control group (n=60) is included as reference. Notch width indicates the 95% confidence interval of the median.
### Tumor stage impact on voice characteristics
Figure 2 depicts the change in pre-operative voice characteristics for each subgroup of patients based on tumor stage determined before intervention. The figure shows the mean cosine similarity between the pre-operative _text1_ and pre- and post-operative _text2_ embeddings for each subgroup. The number of speakers in each group is given in Table 1.
We notice an inverse relationship between the tumor size and the pre-operative speaker similarity at the post-intervention stages. This corroborates previous research suggesting that late-stage tumors are associated with poorer post-operative speech outcomes, including reduced speech intelligibility and decreased vocal quality [30]. Notably, this is accompanied by a more pronounced recovery towards pre-operative speaker characteristics in the T3 and T4 groups after the 1-year post-intervention period. This suggests that the additional severity of post-radiotherapy changes in speaker characteristics in the late-stage tumor groups is partially or even completely offset after sufficient recovery time.
### Reconstruction type impact on voice characteristics
Likewise, Figure 3 shows the evolution of the pre-operative mean speaker similarity according to the type of reconstructive surgery performed. We observe that primary closure has the least significant impact on post-intervention voice characteristics in comparison to flap-based reconstruction. This supports previous research in which patients treated with primary closure were rated higher in speech intelligibility [31]. Notable is the significantly more severe change of voice characteristics of patients undergoing restorative local flap surgery in comparison to free flap surgery. This can possibly be attributed to the removal of tissue from the oral cavity during local flap surgery, as opposed to tissue removal from other parts of the body in free flap surgery. The removal of tissue in the oral cavity can potentially induce an additional degree of voice transformation in the patient in the case of local flap restoration. However, we note that the number of local flap surgeries in our dataset is limited.
### Intra-session robustness of pathological embeddings
State-of-the-art speaker embeddings have been shown to robustly capture speaker characteristics in a variety of challenging conditions, including severe background noise, short sampling duration and language switching [32]. However, it is an open question how well these embeddings can identify speakers who have had severe medical intervention in the oral cavity region. Surgery related to oral cancer treatment can have a severe impact on the structural composition of the vocal tract, which could potentially either limit or enhance the identifying characteristics captured by the speaker embeddings. In this section, we analyze the intra-session robustness of the speaker embeddings at all stages during oral cancer treatment.
To establish the intra-session robustness of the speaker embeddings, we calculate the cosine similarity between the _text1_ and _text2_ embedding of each patient at all sampling sessions during the treatment trajectory. The session-wise mean and standard deviation of the same-speaker cosine similarities is shown in Figure 4. As a reference, the mean similarity between the embeddings from the same speakers in the healthy group is indicated by the dotted line. For comparison, we also plotted the mean and standard deviation of the speaker-wise similarities between the pre-operative _text1_ and post-intervention _text2_ embeddings.
We can observe that the mean intra-session similarity is very consistent during the complete oral cancer treatment trajectory, even slightly exceeding that of the healthy control group. This indicates that the speaker embeddings are able to capture robust and distinguishing voice characteristics of speakers, even after substantial oncological intervention in the oral cavity, given that the changed voice characteristics are temporally stable. This is notable because the training set of the speaker embedding extractor does not contain any comparable pathological speakers. This implies that no domain-specific adaptation of the training procedure of the speaker embedding extractor is needed, which greatly facilitates the potential medical use of speaker embeddings in oral cancer treatment.
Figure 3: _Effect of surgical reconstruction type on the evolution of pre-operative voice similarity of patients during oral cancer treatment._
Figure 2: _Effect of tumor stage, as measured by T of the TNM cancer staging model, on the evolution of pre-operative voice similarity of patients during oral cancer treatment._
### Pathological speaker verification analysis
In this section we analyze the behavior of pathological speaker embeddings in a speaker verification setting. Speaker verification attempts to determine whether two utterances are spoken by the same person. We create three groups of speaker verification trials based on speech samples from patients: pre-operative, pre-operative combined with post-operative, and pre-operative combined with post-radiotherapy utterances. To increase the number of trials, we create consecutive, non-overlapping crops of 5 seconds of each utterance and subsequently extract the speaker embeddings as described in Section 2. Each trial consists of a _text1_ embedding paired with a _text2_ embedding to ensure text-independence, and we balance the number of positive and negative trials. Results are reported using the equal error rate (EER) and a breakdown of the false positive rate (FPR) and false negative rate (FNR).
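A minimal sketch of how FPR, FNR, and the EER can be computed from trial scores is given below; the score and label arrays are random placeholders, and the simple threshold sweep is only an illustration of the metrics, not the exact evaluation code used here.

```python
import numpy as np

def fpr_fnr(scores, labels, threshold):
    # labels: 1 for same-speaker (target) trials, 0 for different-speaker trials.
    decisions = scores >= threshold
    fpr = np.mean(decisions[labels == 0])    # fraction of non-target trials accepted
    fnr = np.mean(~decisions[labels == 1])   # fraction of target trials rejected
    return fpr, fnr

def equal_error_rate(scores, labels):
    # Sweep candidate thresholds and return the point where FPR and FNR are closest.
    rates = [fpr_fnr(scores, labels, t) for t in np.sort(np.unique(scores))]
    i = int(np.argmin([abs(fp - fn) for fp, fn in rates]))
    return 0.5 * (rates[i][0] + rates[i][1])

# Toy trial list with random scores (targets score higher on average).
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)
scores = rng.normal(loc=labels.astype(float), scale=1.0)
print(equal_error_rate(scores, labels), fpr_fnr(scores, labels, 0.35))
```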
As shown in Table 2, the overall EER increases sharply with the subsequent addition of post-operative and post-radiotherapy samples. However, as Figure 5 indicates, the FPR of all trial groups remains almost identical, independent of the chosen speaker verification threshold. The degradation of the EER can exclusively be attributed to an increase in FNR in the groups combining pre-operative and post-intervention embeddings. The implications of a stable FPR and variable FNR are desirable from an oral cancer treatment viewpoint. A stable FPR signifies a robust behavior of the speaker embeddings towards other speakers, while simultaneously still being able to capture the change in voice characteristics of the same speaker during the treatment trajectory.
## 4 Future work
As shown in this paper, the use of speaker embeddings has the potential to improve our understanding of changing voice characteristics during oral cancer treatment. Using speaker embeddings to analyze individual treatment trajectories proves viable due to a combination of intra-session robustness, objective and text-independent metrics for changing voice characteristics and no reliance on human perceptual evaluation in the process. In future work, we will attempt to investigate the feasibility of using speaker embeddings to identify potential complications or challenges that may arise during the recovery process.
## 5 Conclusion
In this paper, we analyzed the behavior of speaker embeddings of patients diagnosed with oral cancer at different stages during oncological treatment. First, we found that pre-operative and post-intervention speaker similarity significantly diminishes. However, we observe an evolution of the voice characteristics towards the pre-operative stage in the following 12-month post-operative period. Secondly, we establish the intra-session robustness of current state-of-the-art speaker embeddings on speakers with oral cancer treatment. This indicates that the embeddings can successfully capture pathological speaker characteristics, given the pathology is temporally stable. Finally, we observe a stable false positive rate and variable false negative rate in a speaker verification analysis when speech samples are used from different stages in oral cancer treatment. This signifies a stable behavior of the embeddings towards other speakers while still being able to capture the change in voice characteristics during oral oncological treatment.

| | EER (%) | FPR (%) | FNR (%) |
| --- | --- | --- | --- |
| Pre-operation | 1.39 | 4.26 | 0.73 |
| Post-operation | 4.06 | 4.03 | 4.08 |
| Post-radiotherapy | 5.88 | 3.97 | 7.48 |

Table 2: Speaker verification results of oral cancer patients. FPR and FNR are based on a threshold value of 0.35.
Figure 4: Intra-session and inter-session (relative to the pre-operative session) same-speaker similarity of patients during oral cancer treatment. The dotted line indicates the mean same-speaker similarity of the healthy control group.
Figure 5: False positive rates (FPR) and false negative rates (FNR) of speaker verification trials consisting of pre-operative, pre-operative combined with post-operative and pre-operative combined with post-radiotherapy speech samples. |
2306.13111 | Relationships between the Phase Retrieval Problem and Permutation
Invariant Embeddings | This paper discusses the connection between the phase retrieval problem and
permutation invariant embeddings. We show that the real phase retrieval problem
for $\mathbb{R}^d/O(1)$ is equivalent to Euclidean embeddings of the quotient
space $\mathbb{R}^{2\times d}/S_2$ performed by the sorting encoder introduced
in an earlier work. In addition, this relationship provides us with inversion
algorithms of the orbits induced by the group of permutation matrices. | Radu Balan, Efstratios Tsoukanis | 2023-06-21T18:39:34Z | http://arxiv.org/abs/2306.13111v2 | # Relationships between the Phase Retrieval Problem and Permutation Invariant Embeddings
###### Abstract
This paper discusses the connection between the phase retrieval problem and permutation invariant embeddings. We show that the real phase retrieval problem for \(\mathbb{R}^{d}/O(1)\) is equivalent to Euclidean embeddings of the quotient space \(\mathbb{R}^{2\times d}/S_{2}\) performed by the sorting encoder introduced in an earlier work. In addition, this relationship provides us with inversion algorithms of the orbits induced by the group of permutation matrices.
## I Introduction
The phase retrieval problem has a long and illustrious history involving several Nobel prizes along the way. The issue of reconstruction from magnitude of frame coefficients is related to a significant number of problems that appear in separate areas of science and engineering. Here is an incomplete list of some of these applications and reference papers: crystallography [1], [2], [3]; ptychography [4], [5]; source separation and inverse problems [6], [7]; optical data processing [8]; mutually unbiased bases [9], [10], quantum state tomography [11], [12]; low-rank matrix completion problem [13], [14]; tensor algebra and systems of multivariate polynomial equations [15], [16], [17]; signal generating models [18], [19], bandlimited functions [20], [21], radar ambiguity problem [22], [23], learning and scattering networks [24], [25], [26].
In [27], this problem was shown to be a special form of the following setup. Let \(H\) denote a real or complex vector space and let \(A=\{a_{i}\}_{i\in I}\) be a frame for \(H\). The phase retrieval problem asks whether the map \(H\ni x\mapsto\alpha_{A}(x)=\{|\langle\,x,a_{i}\,\rangle|\}_{i\in I}\in l^{2}(I)\) determines \(x\) uniquely up to a unimodular scalar.
In this paper we focus on the finite dimensional real case of this problem (see also [28]), namely when \(H=\mathbb{R}^{d}\). In this case, a frame \(\mathcal{A}=\{a_{1},\ldots,a_{D}\}\subset\mathbb{R}^{d}\) is simply a spanning set. The group \(O(1)=\{-1,+1\}\) acts on \(H\) by scalar multiplication. Let \(\hat{H}=H/O(1)\) denote the quotient space induced by this action, where the equivalence classes (orbits) are
\[[x]=\{x,-x\}\,\ \text{for}\ x\neq 0\,\ \left[x\right]=\{0\}\,\ \text{for}\ x=0.\]
The analysis operator for this frame is
\[T_{A}:H\rightarrow\mathbb{R}^{D}\ \,\ \ T_{A}(x)=(\langle\,x,a_{k}\, \rangle)_{k=1}^{D}. \tag{1}\]
The relevant nonlinear map \(\alpha_{A}\) is given by taking the absolute value of entries of \(T_{A}\):
\[\alpha_{A}:H\rightarrow\mathbb{R}^{D}\ \,\ \ \alpha_{A}(x)=(|\langle\,x,a_{k}\, \rangle|)_{k=1}^{D}. \tag{2}\]
Notice \(\alpha_{A}\) produces a well-defined map on \(\hat{H}\), which, with a slight abuse, but for simplicity of notation, will be denoted also by \(\alpha_{A}\). Thus \(\alpha_{A}([x])=\alpha_{A}(x)\).
Another customary notation that is often employed: a frame is given either as an indexed set of vectors, \(\mathcal{A}=\{a_{1},\ldots,a_{D}\}\), or through the columns of a \(d\times D\) matrix \(A\). The matrix notation is not canonical, but this is not an issue here. We always identify \(H=\mathbb{R}^{d}\) with its column vector representation in its canonical basis.
**Definition 1**.: _We say that (the columns of a matrix) \(A\in\mathbb{R}^{d\times D}\) form/is a phase retrievable frame, if \(\alpha_{A}:\widehat{\mathbb{R}}^{d}\rightarrow\mathbb{R}^{D}\), \(\alpha_{A}(x)=(|\langle\,x,a_{k}\,\rangle|)_{k=1}^{D}\) is an injective map (on the quotient space)._
In a different line of works [29], [30], [31], [32] it
was recognized that the phase retrieval problem is a special case of Euclidean representations of metric spaces of orbits defined by certain unitary group actions on Hilbert spaces. Specifically, the setup is as follows. Let \(V\) denote a Hilbert space, and let \(G\) be a group acting unitarily on \(V\). Let \(\hat{V}=V/G\) denote the metric space of orbits, where the quotient space is induced by the equivalence relation \(x,y\in V\), \(x\sim y\) iff \(y=g.x\), for some \(g\in G\). Here \(g.x\) represents the action of the group element \(g\in G\) on vector \(x\). For the purposes of this paper we specialize to the finite dimensional real case, \(V=\mathbb{R}^{n\times d}\) and \(G=S_{n}\), is the group of \(n\times n\) permutation matrices acting on \(V\) by _left multiplication_. Other cases are discussed in aforementioned papers. In particular, in [30] the authors have shown a deep connection to graph deep learning problems. In [31], the authors linked this framework to certain graph matching problems and more. The bi-Lipschitz Euclidean embedding problem for the finite dimensional case is as follows. Given \(\hat{V}=V/G\), construct a map \(\beta:V\rightarrow\mathbb{R}^{m}\) so that, (i) \(\beta(g.x)=\beta(x)\) for all \(g\in G\), \(x\in V\), and (ii) for some \(0<A\leq B<\infty\), and for all \(x,y\in V\),
\[A\ \mathbf{d}([x],[y])\leq\left\|\beta(x)-\beta(y)\right\|\leq B\ \mathbf{d}([x],[y]) \tag{3}\]
where \(\mathbf{d}([x],[y])=\inf_{g\in G}\left\|x-g.y\right\|_{V}\) is the _natural metric_ on the quotient space \(\hat{V}\).
In [30] the following embedding was introduced. Let \(A\in\mathbb{R}^{d\times D}\) be a fixed matrix (termed as _key_) whose columns are denoted by \(a_{1},\dots,a_{D}\). The induced encoder \(\beta_{A}:V\rightarrow\mathbb{R}^{n\times D}\) is defined by
\[\beta_{A}(X)=\downarrow(XA)=\left[\begin{array}{cccc}\Pi_{1}Xa_{1}&\cdots& \Pi_{D}Xa_{D}\end{array}\right] \tag{4}\]
where \(\Pi_{k}\in S_{n}\) is the permutation matrix that sorts in decreasing order the vector \(Xa_{k}\). It was shown in [30] that, for \(D\) large enough, \(\beta_{A}\) provides a bi-Lipschitz Euclidean embedding of \(\hat{V}\). This motivates the following definition.
**Definition 2**.: _We say that \(A\in\mathbb{R}^{d\times D}\) is a universal key for \(\mathbb{R}^{n\times d}\) if \(\beta_{A}:\bar{\mathbb{R}}^{n\times d}\rightarrow\mathbb{R}^{n\times D}\), \(\beta_{A}(X)=\downarrow(XA)\) is an injective map (on the quotient space)._
The purpose of this paper is to show the equivalence between the real phase retrieval problem, specifically the embedding \(\alpha_{A}\), and the permutation invariant embedding \(\beta_{A}\) defined above, in the special case \(n=2\).
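Before turning to the main results, a minimal NumPy sketch of the sorting encoder \(\beta_{A}\) of Eq. (4) may help fix ideas; it is our own illustration with a random key and random data, not code from [30].

```python
import numpy as np

def beta(X, A):
    # beta_A(X) = sort(XA): each column of XA is sorted in decreasing order,
    # which makes the output invariant under left multiplication of X by a permutation.
    return -np.sort(-(X @ A), axis=0)

# Invariance check with a random key A and a random X in R^{n x d}.
rng = np.random.default_rng(0)
n, d, D = 3, 4, 9
A = rng.standard_normal((d, D))
X = rng.standard_normal((n, d))
P = np.eye(n)[rng.permutation(n)]    # random n x n permutation matrix
assert np.allclose(beta(X, A), beta(P @ X, A))
```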
## II Main Results
Recall the Hilbert spaces \(H=\mathbb{R}^{d}\) and \(V=\mathbb{R}^{2\times d}\). For \(A\in\mathbb{R}^{d\times D}\) recall also the encoders \(\alpha_{A}:\hat{H}\rightarrow\mathbb{R}^{D}\) and \(\beta_{A}:\hat{V}\rightarrow\mathbb{R}^{2\times D}\) given respectively by \(\alpha_{A}(x)=(|\langle\,x,a_{k}\,\rangle|)_{k\in[D]}\), and \(\beta_{A}(X)=\downarrow(XA)\). Our main result reads as follows.
**Theorem 3**.: _In the case \(n=2\), the following are equivalent._
1. \(\alpha_{A}\) _is injective, hence the columns of_ \(A\) _form a phase retrievable frame;_
2. \(\beta_{A}\) _is injective, hence_ \(A\) _is a universal key._
**Remark 4**.: _Perhaps it is not surprising that, if an equivalence between the phase retrieval problem and permutation invariant representations is possible, then this should occur for \(n=2\). This statement is suggested by the observation that \(O(1)\) is isomorphic with \(S_{2}\), the group of the \(2\times 2\) permutation matrices. What is surprising is that, in fact, the two embeddings are intimately related, as the proof and corollaries show._
Proof of Theorem 3.: Let \(X\in V=\mathbb{R}^{2\times d}\). Denote by \(x_{1},x_{2}\in\mathbb{R}^{d}\) its two rows transposed, that is
\[X=\left[\begin{array}{c}x_{1}^{T}\\ x_{2}^{T}\end{array}\right].\]
Notice that, for each \(k\in[D]\), the \(k^{th}\) column of \(\beta_{A}(X)\) is given by
\[\downarrow(Xa_{k})=\begin{bmatrix}max(\langle\,x_{1},a_{k}\,\rangle,\langle\, x_{2},a_{k}\,\rangle)\\ min(\langle\,x_{1},a_{k}\,\rangle,\langle\,x_{2},a_{k}\,\rangle)\end{bmatrix}.\]
The _key observations_ are the following relationships between \(\min\), \(\max\), and the absolute value \(|\cdot|\):
\[\begin{array}{rcl}|u-v|&=&\max(u,v)-\min(u,v)\\ u+v&=&\max(u,v)+\min(u,v)\\ \max(u,v)&=&\frac{1}{2}(u+v+|u-v|)\\ \min(u,v)&=&\frac{1}{2}(u+v-|u-v|)\\ |\,|u|-|v|\,|&=&\min(|u-v|,|u+v|)\end{array}\]
In particular, these show that:
\[\begin{bmatrix}1&-1\\ 1&1\end{bmatrix}\beta_{A}(X)=\begin{bmatrix}1&-1\\ 1&1\end{bmatrix}\cdot\downarrow(XA)=\begin{bmatrix}|\langle\,x_{1}-x_{2},a_{1}\,\rangle|&\cdots&|\langle\,x_{1}-x_{2},a_{D}\,\rangle|\\ \langle\,x_{1}+x_{2},a_{1}\,\rangle&\cdots&\langle\,x_{1}+x_{2},a_{D}\,\rangle\end{bmatrix}=\begin{bmatrix}\left(\alpha_{A}(x_{1}-x_{2})\right)^{T}\\ \left(T_{A}(x_{1}+x_{2})\right)^{T}\end{bmatrix}\]
where \(T_{A}\) was introduced in equation (1).
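This identity is easy to check numerically; the following self-contained sketch (random key and data, our own illustration) verifies it for \(n=2\):

```python
import numpy as np

rng = np.random.default_rng(2)
d, D = 4, 7
A = rng.standard_normal((d, D))
x1, x2 = rng.standard_normal(d), rng.standard_normal(d)
X = np.vstack([x1, x2])

beta = -np.sort(-(X @ A), axis=0)            # columns of XA sorted in decreasing order
lhs = np.array([[1.0, -1.0], [1.0, 1.0]]) @ beta
rhs = np.vstack([np.abs((x1 - x2) @ A),      # alpha_A(x1 - x2)
                 (x1 + x2) @ A])             # T_A(x1 + x2)
assert np.allclose(lhs, rhs)
```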
\((1)\rightarrow(2):\) Suppose that \(\alpha_{A}\) is injective. Let \(X=\begin{bmatrix}x_{1}^{T}\\ x_{2}^{T}\end{bmatrix}\) and \(Y=\begin{bmatrix}y_{1}^{T}\\ y_{2}^{T}\end{bmatrix}\), such that \(\beta_{A}(X)=\beta_{A}(Y)\). Then
\[\begin{bmatrix}1&-1\\ 1&1\end{bmatrix}\beta_{A}(X)=\begin{bmatrix}1&-1\\ 1&1\end{bmatrix}\beta_{A}(Y)\] \[\implies\begin{bmatrix}\left(\alpha_{A}(x_{1}-x_{2})\right)^{T} \\ T_{A}(x_{1}+x_{2})^{T}\end{bmatrix}=\begin{bmatrix}\left(\alpha_{A}(y_{1}-y_{2} )\right)^{T}\\ T_{A}(y_{1}+y_{2})^{T}\end{bmatrix}.\]
But now \(\alpha_{A}(x_{1}-x_{2})=\alpha_{A}(y_{1}-y_{2})\implies x_{1}-x_{2}=y_{1}-y_{2}\) or \(x_{1}-x_{2}=y_{2}-y_{1}\), and
\[T_{A}(x_{1}+x_{2})=T_{A}(y_{1}+y_{2})\implies x_{1}+x_{2}=y_{1}+y_{2}.\]
Thus we have that
\[\left\{\begin{array}{ccc}x_{1}&=&y_{1}\\ x_{2}&=&y_{2}\end{array}\right\}\text{ or }\left\{\begin{array}{ccc}x_{1}&=&y_{2}\\ x_{2}&=&y_{1}\end{array}\right\}\]
In either case,
\[X=Y\ \text{ or }\ X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}Y\,,\qquad\text{i.e.,}\ [X]=[Y].\]
So, \(\beta_{A}\) is injective.
\((2)\rightarrow(1):\) Suppose that \(\beta_{A}\) is injective. Let \(x,y\in\mathbb{R}^{d}\) such that \(\alpha_{A}(x)=\alpha_{A}(y)\), i.e. \(\left|\left\langle\left.x,a_{k}\,\right\rangle\right|=\left|\left\langle\left. y,a_{k}\,\right\rangle\right|\), \(\forall k\in[D]\). Let \(X=\begin{bmatrix}x^{T}\\ -x^{T}\end{bmatrix}\) and \(Y=\begin{bmatrix}y^{T}\\ -y^{T}\end{bmatrix}\). Then,
\[\begin{bmatrix}1&-1\\ 1&1\end{bmatrix}\beta_{A}(X)=\begin{bmatrix}\alpha_{A}(2x)^{T}\\ T_{A}(0)^{T}\end{bmatrix}=2\begin{bmatrix}\alpha_{A}(x)^{T}\\ 0\end{bmatrix}\]
and
\[\begin{bmatrix}1&-1\\ 1&1\end{bmatrix}\beta_{A}(Y)=\begin{bmatrix}\alpha_{A}(2y)^{T}\\ T_{A}(0)^{T}\end{bmatrix}=2\begin{bmatrix}\alpha_{A}(y)^{T}\\ 0\end{bmatrix}\]
Thus \(\beta_{A}(X)=\beta_{A}(Y)\). Since \(\beta_{A}\) is assumed injective, it follows that \(X=Y\) or \(X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}Y\). So, \(x=y\) or \(x=-y\). We conclude that \([x]=[y]\), so \(\alpha_{A}\) is injective.
**Corollary 5**.: _If \(\beta_{A}\) is injective, then \(D\geq 2d-1\)._
**Corollary 6**.: _If \(D=2d-1\), then \(\beta_{A}\) is injective if and only if \(A\) is a full spark frame._
Both results follow from necessary and sufficient conditions established in, e.g., [27]. Recall that a frame in \(\mathbb{R}^{d}\) is said to be _full spark_ if any subset of \(d\) vectors is linearly independent (hence a basis).
**Remark 7**.: _Assume \(D=2d-1\). Note the embedding dimension for \(\hat{V}=\widehat{\mathbb{R}^{2\times d}}\) is \(m=2(2d-1)=4d-2=2\,dim(V)-2\). In particular this shows the minimal dimension of bi-Lipschitz Euclidean embeddings may be smaller than twice the intrinsic dimension of the Hilbert space where the group acts on. Both papers [30] and [31] present (bi)Lipschitz embeddings into \(\mathbb{R}^{2\,dim(V)}\)._
**Remark 8**.: _As was derived in the proof, \(\alpha_{A}\), \(\beta_{A}\) and \(T_{A}\) are intimately related:_
\[\beta_{A}\left(\left[\begin{array}{c}x_{1}^{T}\\ x_{2}^{T}\end{array}\right]\right)=\frac{1}{2}\left[\begin{array}{cc}1&1 \\ -1&1\end{array}\right]\ \left[\begin{array}{c}\alpha_{A}(x_{1}-x_{2})^{T}\\ T_{A}(x_{1}+x_{2})^{T}\end{array}\right] \tag{5}\]
_In particular, any algorithm for solving the phase retrieval problem solves also the inversion problem for \(\beta_{A}\). Let \(\omega_{A}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{d}\) denote a left inverse of \(\alpha_{A}\) on the metric space \(\widehat{\mathbb{R}}^{d}\). This means \(\omega_{A}(\alpha_{A}(x))\sim x\) in \(\mathbb{R}^{d}/O(1)\). Denote by \(T_{A}^{\dagger}\) a left inverse of the analysis operator (e.g., the synthesis operator associated to the canonical dual frame). Thus \(T_{A}^{\dagger}T_{A}=I_{d}\). Then an inverse for \(\beta_{A}\) is:_
\[\beta_{A}^{-1}(Y)=\frac{1}{2}\left[\begin{array}{c}T_{A}^{\dagger}(y_{2})+ \omega_{A}(y_{1})\\ T_{A}^{\dagger}(y_{2})-\omega_{A}(y_{1})\end{array}\right] \tag{6}\]
_where \(Y=\left[\begin{array}{c}y_{1}^{T}\\ y_{2}^{T}\end{array}\right]\)._
**Remark 9**.: _Equation (6) suggests a lower-dimensional embedding than \(\beta_{A}\). Specifically, we first compute the average \(y_{1}=\frac{1}{2}(x_{1}+x_{2})\), which lies in \(\mathbb{R}^{d}\), and then encode the difference \(x_{1}-x_{2}\) using \(\alpha_{A}\), \(y_{2}=\alpha_{A}(x_{1}-x_{2})\). We obtain the following modified encoder, \(\tilde{\beta}_{A}:\mathbb{R}^{2\times d}\rightarrow\mathbb{R}^{d+D}\):_
\[\tilde{\beta}_{A}(X)=\left[\begin{array}{cc}\frac{1}{2}(x_{1}+x_{2})^{T}&\alpha_{A}(x_{1}-x_{2})^{T}\end{array}\right]. \tag{7}\]
_With the \(\omega_{A}\) left inverse of \(\alpha_{A}\), the inverse of \(\tilde{\beta}_{A}\) is given by:_
\[\tilde{\beta}_{A}^{-1}(Y)=\left[\begin{array}{c}y_{1}+\frac{1}{2}\omega_{A}(y_ {2})\\ y_{1}-\frac{1}{2}\omega_{A}(y_{2})\end{array}\right] \tag{8}\]
_where \(y_{1}=Y(1:d)\) and \(y_{2}=Y(d+1:d+D)\). In the case when \(D=D_{min}=2d-1\), the minimal embedding dimension is \(m=d+D=3d-1\) (instead of \(4d-2\) or \(4d=2\,dim(V)\))._
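A sketch of the modified encoder \(\tilde{\beta}_{A}\) and the inverse of Eq. (8) is given below; the phase-retrieval left inverse \(\omega_{A}\) is passed in as a user-supplied function (any real phase-retrieval solver for the frame \(A\)), so the snippet only illustrates the bookkeeping, not a full reconstruction algorithm.

```python
import numpy as np

def beta_tilde(X, A):
    # Modified encoder of Eq. (7): mean of the two rows, followed by |<x1 - x2, a_k>|.
    x1, x2 = X
    return np.concatenate([0.5 * (x1 + x2), np.abs((x1 - x2) @ A)])

def beta_tilde_inverse(Y, A, omega_A):
    # Eq. (8). omega_A is a left inverse of alpha_A (a real phase-retrieval solver),
    # returning x1 - x2 up to a global sign; that sign ambiguity is exactly the
    # row-swap (S_2) ambiguity of the orbit.
    d = A.shape[0]
    y1, y2 = Y[:d], Y[d:]
    diff = omega_A(y2)
    return np.vstack([y1 + 0.5 * diff, y1 - 0.5 * diff])
```

With a full-spark key and any routine playing the role of `omega_A`, `beta_tilde_inverse(beta_tilde(X, A), A, omega_A)` recovers `X` up to a row swap, i.e., up to the \(S_{2}\) orbit.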
Reference [30] shows that an upper Lipschitz bound for the embedding \(\beta_{A}\) is \(\sigma_{1}(A)\), where \(\sigma_{1}(A)\) is the largest singular value of \(A\). The same reference shows that if \(\beta_{A}\) is injective then there is also a strictly positive lower Lipschitz bound, without providing a formula. Using Equation (5) we provide explicit estimates of these bounds.
**Theorem 10**.: _Assume \(A\in\mathbb{R}^{d\times D}\) is a universal key for \(\mathbb{R}^{2\times d}\) (i.e., \(\beta_{A}:\widetilde{\mathbb{R}^{2\times d}}\rightarrow\mathbb{R}^{2\times D}\) is injective), or, equivalently (according to Theorem 3), the columns of \(A\) form a phase retrievable frame in \(\mathbb{R}^{d}\) (i.e., \(\alpha_{A}:\widetilde{\mathbb{R}^{d}}\rightarrow\mathbb{R}^{D}\) is injective). Then both \(\alpha_{A}\) and \(\beta_{A}\) are bi-Lipschitz with same Lipschitz constants, where distances are given by \(\mathbf{d}_{\mathrm{PR}}([x],[y])=\min(\|x-y\|,\|x+y\|)\) on \(\hat{H}\), and \(\mathbf{d}([X],[Y])=\min\limits_{P\in S_{2}}\|X-PY\|\) on \(\hat{V}\), respectively. The optimal upper and lower Lipschitz constants are given by:_
\[A_{0}=\min\limits_{I\subset[D]}\sqrt{\sigma_{d}^{2}(A[I])+\sigma_{d}^{2}(A[I^ {c}])}\;,\;B_{0}=\sigma_{1}(A) \tag{9}\]
_where \(\sigma_{1}(A)\) is the largest singular value of \(A\) (equal to the square root of the upper frame bound) and \(\sigma_{d}(A[J])\) is the \(d^{th}\) singular value of the submatrix of \(A\) indexed by \(J\). Furthermore, these bounds are achieved by the following vectors. Let \(I_{0}\) denote an optimal partition in (9) and let \(u_{1}\), \(u_{2}\) denote the normalized left singular vectors of \(A[I_{0}]\) and \(A[I_{0}^{c}]\), respectively, each associated to the \(d^{th}\) singular value. Let \(u\) be the normalized principal left singular vector associated to \(A\) (i.e., associated to the largest singular value). Then:_
1. _The upper Lipschitz constant_ \(B_{0}\) _is achieved as follows: (i) for map_ \(\alpha_{A}\) _by vectors_ \(x_{\max}=u\) _and_ \(y_{\max}=0\)_; (ii) for map_ \(\beta_{A}\) _by vectors_ \(X_{\max}=\left[\begin{array}{c}u^{T}\\ 0\end{array}\right]\) _and_ \(Y_{\max}=0\)_._
2. _The lower Lipschitz constant_ \(A_{0}\) _is achieved as follows: (i) for map_ \(\alpha_{A}\) _by vectors_ \(x_{\min}=u_{1}+u_{2}\) _and_ \(y_{\min}=u_{1}-u_{2}\)_; (ii) for map_ \(\beta_{A}\) _by vectors_ \(X_{\min}=\left[\begin{array}{c}\left(u_{1}+u_{2}\right)^{T}\\ 0\end{array}\right]\) _and_ \(Y_{\min}=\left[\begin{array}{c}u_{1}^{T}\\ u_{2}^{T}\end{array}\right]\)_._
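For small frames, the constants of Eq. (9) can be evaluated by brute force over all column subsets; the following sketch (our own illustration, exponential in \(D\)) does exactly that.

```python
import itertools
import numpy as np

def lipschitz_bounds(A):
    # A0 and B0 of Eq. (9): B0 is the largest singular value of A, while A0
    # minimizes sqrt(sigma_d(A[I])^2 + sigma_d(A[I^c])^2) over column subsets I.
    d, D = A.shape

    def sigma_d(cols):
        if len(cols) < d:
            return 0.0   # fewer than d columns: the d-th singular value vanishes
        return np.linalg.svd(A[:, cols], compute_uv=False)[d - 1]

    B0 = np.linalg.svd(A, compute_uv=False)[0]
    A0 = min(
        np.hypot(sigma_d(list(I)), sigma_d([j for j in range(D) if j not in I]))
        for r in range(D + 1)
        for I in itertools.combinations(range(D), r)
    )
    return A0, B0

# Example with a random key of minimal size D = 2d - 1.
rng = np.random.default_rng(3)
d = 3
A = rng.standard_normal((d, 2 * d - 1))
print(lipschitz_bounds(A))
```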
**Remark 11**.: _The optimal Lipschitz constants for the map \(\alpha_{A}\) were obtained in [33], [34], including the optimizers. However, for reader's convenience, we prefer to give direct proofs of these results._
Proof.:
1. Upper Lipschitz constants.
(i) Let \(x,y\in\mathbb{R}^{d}\). Then
\[\begin{array}{l}\|\alpha_{A}(x)-\alpha_{A}(y)\|^{2}=\sum_{i=1}^{D}\big(|\langle\,a_{i},x\,\rangle|-|\langle\,a_{i},y\,\rangle|\big)^{2}=\\ =\sum_{i=1}^{D}\min\left(|\langle\,a_{i},x-y\,\rangle|^{2},|\langle\,a_{i},x+y\,\rangle|^{2}\right)\leq\\ \leq\min\left(\sum_{i=1}^{D}|\langle\,a_{i},x-y\,\rangle|^{2},\sum_{i=1}^{D}|\langle\,a_{i},x+y\,\rangle|^{2}\right)\leq\\ \leq\sigma_{1}^{2}(A)\,\mathbf{d}_{\mathrm{PR}}([x],[y])^{2}.\end{array}\]
So \(\sigma_{1}(A)\) is an upper Lipschitz bound for the map \(\alpha_{A}\). Now for \(x_{\max}=u,y_{\max}=0\) notice that
\[\begin{array}{l}\|\alpha(x_{\max})-\alpha(y_{\max})\|^{2}=\sum_{i=1}^{D}| \langle\,a_{i},u\rangle\,|^{2}=\\ =\sigma_{1}^{2}(A)\|u\|^{2}=\sigma_{1}^{2}(A)\,\mathbf{d}_{\mathrm{PR}}([x_{ \max}],[y_{\max}])^{2}.\end{array}\]
Thus, the upper Lipschitz constant \(\sigma_{1}(A)\) is in fact optimal (tight).
(ii) Map \(\beta_{A}\). Let \(X,Y\in\mathbb{R}^{2\times D}\) and \(P_{0}\in S_{2}\) be a permutation that achieves the distance between \(X\) and \(Y\), i.e. \(\|X-P_{0}Y\|=\mathbf{d}([X],[Y])\). Note that
\[\begin{array}{l}\|\beta_{A}(X)-\beta_{A}(Y)\|^{2}=\sum_{k=1}^{D}\|(\Pi_{k}X- \Xi_{k}Y)a_{k}\|^{2}=\\ =\sum_{k=1}^{D}\|(\Xi_{k}^{T}\Pi_{k}X-Y)a_{k}\|^{2}\end{array}\]
for some \(\Pi_{k},\Xi_{k}\in S_{2}\) that align the vectors. From rearrangement lemma we have that
\[\begin{array}{l}\|(\Pi_{k}X-\Xi_{k}Y)a_{k}\|\leq\|(X-P_{0}Y)a_{k}\|,\;\forall k \in[D]\end{array}\]
so,
\[\begin{array}{l}\sum_{k=1}^{D}\|(\Xi_{k}^{T}\Pi_{k}X-Y)a_{k}\|^{2}\leq\sum_{k=1}^{D}\|(X-P_{0}Y)a_{k}\|^{2}\leq\\ \leq\sigma_{1}^{2}(A)\,\|X-P_{0}Y\|^{2}=\sigma_{1}^{2}(A)\,\mathbf{d}([X],[Y])^{2}.\end{array}\]
Therefore, we conclude that \(\sigma_{1}(A)\) is an upper Lipschitz constant for map \(\beta_{A}\). We still need to show that this bound is achieved (i.e., it is optimal). For \(X_{\max}\) and \(Y_{\max}\) defined in part 1) of theorem 10,
\[\|\beta_{A}(X_{\max})-\beta_{A}(Y_{\max})\|^{2}=\|\beta_{A}(X_{\max})\|^{2}=\sum_ {k=1}^{D}\langle\,u,a_{k}\,\rangle^{2}=\sigma_{1}^{2}(A).\]
and \(\mathbf{d}(X_{\max},Y_{\max})=1\). Thus \(B_{0}\) is the
optimal Lipschitz constant both for \(\alpha_{A}\) and for \(\beta_{A}\).
2. Lower Lipschitz constants.
(i) Let \(x,y\in\mathbb{R}^{d}\) and define the auxiliary set
\[S=S(x,y):=\{j\in[D]\ :\ |\langle\,x-y,a_{j}\,\rangle|\leq|\langle\,x+y,a_{j}\,\rangle|\}.\]
Then
\[\begin{array}{l}\|\alpha_{A}(x)-\alpha_{A}(y)\|^{2}=\sum_{i=1}^{D}\big(|\langle\,a_{i},x\,\rangle|-|\langle\,a_{i},y\,\rangle|\big)^{2}=\\ =\sum_{i\in S}|\langle\,a_{i},x-y\,\rangle|^{2}+\sum_{i\in S^{c}}|\langle\,a_{i},x+y\,\rangle|^{2}\geq\\ \geq\big(\sigma_{d}^{2}(A[S])+\sigma_{d}^{2}(A[S^{c}])\big)\,\mathbf{d}_{\mathrm{PR}}([x],[y])^{2}\geq A_{0}^{2}\,\mathbf{d}_{\mathrm{PR}}([x],[y])^{2}.\end{array}\]
So \(A_{0}\) is a lower Lipschitz bound for \(\alpha_{A}\), but we still need to show that it is optimal. Let \(I_{0}\) be the optimal partition, and let \(u_{1}\), \(u_{2}\) be normalized left singular vectors as in the statement of Theorem 10. Then:
\[\begin{array}{l}\|\alpha_{A}(u_{1}+u_{2})-\alpha_{A}(u_{1}-u_{2})\|^{2}=\\ =\sum_{i=1}^{D}\big(|\langle\,a_{i},u_{1}+u_{2}\,\rangle|-|\langle\,a_{i},u_{1}-u_{2}\,\rangle|\big)^{2}=\\ =4\sum_{i=1}^{D}\min(|\langle\,a_{i},u_{2}\,\rangle|^{2},|\langle\,a_{i},u_{1}\,\rangle|^{2})\leq\\ \leq 4\left(\sum_{i\in I_{0}}|\langle\,a_{i},u_{1}\,\rangle|^{2}+\sum_{i\in I_{0}^{c}}|\langle\,a_{i},u_{2}\,\rangle|^{2}\right)=\\ =4\big(\sigma_{d}^{2}(A[I_{0}])+\sigma_{d}^{2}(A[I_{0}^{c}])\big)=A_{0}^{2}\,\mathbf{d}_{\mathrm{PR}}([u_{1}+u_{2}],[u_{1}-u_{2}])^{2},\end{array}\]
where we used again that \(|\,|a|-|b|\,|=\min(|a-b|,|a+b|)\) for any two real numbers \(a,b\in\mathbb{R}\), and, for the inequality, at every \(i\in[D]\) we made a choice between the two terms. Since the reverse inequality is also true, it follows that \(x_{\min}=u_{1}+u_{2}\) and \(y_{\min}=u_{1}-u_{2}\) achieve the lower bound \(A_{0}\) for \(\alpha_{A}\).
(ii) Consider now the map \(\beta_{A}\). Let \(X,Y\in\mathbb{R}^{2\times d}\) and define the auxiliary set
\[S=S(X,Y):=\{j\in[D]\ :\ |\langle\,x_{1}-x_{2}-y_{1}+y_{2},a_{j}\,\rangle|\leq|\langle\,x_{1}-x_{2}+y_{1}-y_{2},a_{j}\,\rangle|\}.\]
Then, using Equation (5) we have that
\[\begin{array}{l}\|\beta_{A}(X)-\beta_{A}(Y)\|^{2}=\\ =\frac{1}{2}\big(\|\alpha_{A}(x_{1}-x_{2})-\alpha_{A}(y_{1}-y_{2})\|^{2}+\|T_{A}(x_{1}+x_{2}-y_{1}-y_{2})\|^{2}\big)=\\ =\frac{1}{2}\sum_{j\in S}\big(|\langle\,x_{1}-x_{2}-y_{1}+y_{2},a_{j}\,\rangle|^{2}+|\langle\,x_{1}+x_{2}-y_{1}-y_{2},a_{j}\,\rangle|^{2}\big)+\\ +\frac{1}{2}\sum_{j\in S^{c}}\big(|\langle\,x_{1}-x_{2}+y_{1}-y_{2},a_{j}\,\rangle|^{2}+|\langle\,x_{1}+x_{2}-y_{1}-y_{2},a_{j}\,\rangle|^{2}\big)=\\ =\sum_{j\in S}\big(|\langle\,x_{1}-y_{1},a_{j}\,\rangle|^{2}+|\langle\,x_{2}-y_{2},a_{j}\,\rangle|^{2}\big)+\sum_{j\in S^{c}}\big(|\langle\,x_{1}-y_{2},a_{j}\,\rangle|^{2}+|\langle\,x_{2}-y_{1},a_{j}\,\rangle|^{2}\big)\geq\\ \geq\sigma_{d}^{2}(A[S])\big(\|x_{1}-y_{1}\|^{2}+\|x_{2}-y_{2}\|^{2}\big)+\sigma_{d}^{2}(A[S^{c}])\big(\|x_{1}-y_{2}\|^{2}+\|x_{2}-y_{1}\|^{2}\big)\geq\\ \geq A_{0}^{2}\,\mathbf{d}([X],[Y])^{2}.\end{array}\]
Therefore \(A_{0}\) is a lower Lipschitz constant for \(\beta_{A}\). It remains to prove that this bound is tight, i.e., it is achieved. Let \(X_{\min}\) and \(Y_{\min}\) be as in the statement of Theorem 10.
Then
\[\begin{array}{l}\|\beta_{A}(X_{\min})-\beta_{A}(Y_{\min})\|^{2}=\\ =\frac{1}{2}\big(\|\alpha_{A}(u_{1}+u_{2})-\alpha_{A}(u_{1}-u_{2})\|^{2}+\|T_{A}(u_{1}+u_{2}-u_{1}-u_{2})\|^{2}\big)=\\ =\frac{1}{2}\,\|\alpha_{A}(u_{1}+u_{2})-\alpha_{A}(u_{1}-u_{2})\|^{2}=A_{0}^{2}\,\mathbf{d}([X_{\min}],[Y_{\min}])^{2},\end{array}\]
where the last equality follows from the fact that the lower Lipschitz constant of \(\alpha_{A}\) is achieved by \(u_{1}+u_{2}\) and \(u_{1}-u_{2}\), and the fact that \(\mathbf{d}([X_{\min}],[Y_{\min}])^{2}=2\). So \(A_{0}\) is indeed the optimal lower Lipschitz constant for \(\beta_{A}\).
## III Conclusion
In this paper we analyzed two representation problems, one arising in the phase retrieval problem and the other one in the context of permutation invariant representations. We showed that the real phase retrieval problem in a finite dimensional vector space \(H\) is entirely equivalent to the permutation invariant representations for the space \(V=\mathbb{R}^{2\times dim(H)}\). Our analysis proved that phase retrievability is equivalent to the universal key property in the case of encoding \(2\times d\) matrices. This result is derived based on the lattice space structure \((\mathbb{R},+,min,max)\). It is still an open problem to understand the relationship between \(\alpha_{A}\) and \(\beta_{A}\) in the case \(n>2\). A related problem is the implementation of the sorting operator using a neural network that has ReLU as activation function (or, even the absolute value \(|\cdot|\)). Efficient implementations of such operator may yield novel relationships between \(\alpha_{A}\) and \(\beta_{A}\), in the case \(n\geq 3\).
## Acknowledgment
The authors have been supported in part by a NSF award under grant DMS-2108900 and by the Simons Foundation. |
2304.12476 | Magnetism and metal-insulator transitions in the anisotropic kagome
lattice | The interest in the physical properties of kagome lattices has risen
considerably. In addition to the synthesis of new materials, the possibility of
realizing ultracold atoms on an optical kagome lattice (KL) raises interesting
issues. For instance, by considering the Hubbard model on an anisotropic KL,
with a hopping $t^\prime$ along one of the directions, one is able to
interpolate between the Lieb lattice ($t^\prime=0$) and the isotropic KL
($t^\prime=t$). The ground state of the former is a ferrimagnetic insulator for
any on-site repulsion, $U$, while the latter displays a transition between a
paramagnetic metal and a Mott insulator. One may thus consider $t^\prime$ as a
parameter controlling the degree of magnetic frustration in the system. By
means of extensive quantum Monte Carlo simulations, we have examined magnetic
and transport properties as $t^\prime$ varies between these limits in order to
set up a phase diagram in the $(U/t, t^\prime/t)$ parameter space. As an
auxiliary response, analysis of the average sign of the fermionic determinant
provides consistent predictions for critical points in the phase diagram. We
observe a metal-insulator transition occurring at some critical point
$U_c^\text{M}(t^\prime)$, which increases monotonically with $ t^\prime $, from
the unfrustrated lattice limit. In addition, we have found that the boundary
between the ferrimagnetic insulator and the Mott insulator rises sharply with
$t^\prime$. | Lucas O. Lima, Andressa R. Medeiros-Silva, Raimundo R. dos Santos, Thereza Paiva, Natanael C. Costa | 2023-04-24T22:27:11Z | http://arxiv.org/abs/2304.12476v1 | # Magnetism and metal-insulator transitions in the anisotropic kagome lattice
###### Abstract
The interest in the physical properties of kagome lattices has risen considerably. In addition to the synthesis of new materials, the possibility of realizing ultracold atoms on an optical kagome lattice (KL) raises interesting issues. For instance, by considering the Hubbard model on an anisotropic KL, with a hopping \(t^{\prime}\) along one of the directions, one is able to interpolate between the Lieb lattice (\(t^{\prime}=0\)) and the isotropic KL (\(t^{\prime}=t\)). The ground state of the former is a ferrimagnetic insulator for any on-site repulsion, \(U\), while the latter displays a transition between a paramagnetic metal and a Mott insulator. One may thus consider \(t^{\prime}\) as a parameter controlling the degree of magnetic frustration in the system. By means of extensive quantum Monte Carlo simulations, we have examined magnetic and transport properties as \(t^{\prime}\) varies between these limits in order to set up a phase diagram in the \((U/t,t^{\prime}/t)\) parameter space. As an auxiliary response, analysis of the average sign of the fermionic determinant provides consistent predictions for critical points in the phase diagram. We observe a metal-insulator transition occurring at some critical point \(U_{c}^{\rm M}(t^{\prime})\), which increases monotonically with \(t^{\prime}\), from the unfrustrated lattice limit. In addition, we have found that the boundary between the ferrimagnetic insulator and the Mott insulator rises sharply with \(t^{\prime}\).
## I Introduction
The interplay between geometrically-induced frustration and fermion itinerancy gives rise to fascinating magnetic states such as quantum spin liquids (QSL's), characterized by highly degenerate ground states [1]. Examples of exotic magnetic phases have become more abundant over the last decades, such as those found in organic materials with triangular lattice structures [2; 3; 4], and in herbertsmithite compounds with a kagome lattice (KL) structure [5; 6; 7], although for the latter the nature of the QSL state is still under debate [8; 9; 10]. More recently, the emergence and continuing development of experiments on optical lattices, in which ultracold atoms are loaded and the interaction amongst them controlled through Feshbach resonances, has enabled the study of strongly correlated systems in an unprecedented controllable way [11; 12; 13; 14; 15]. In particular, optical lattices with the kagome structure have been recently realized with bosonic atoms [16; 17; 18], and one expects that optical KL's with fermionic atoms will become available in the near future.
In Fig. 1 we visualize the KL as a tilted square Bravais lattice with a three-site basis, whose sites are denoted by \(\alpha=a,b\), and \(c\). Frustration in the elementary triangles is evident if one allows fermions to hop between sites \(\alpha\) and \(\alpha^{\prime}\neq\alpha\), with amplitude \(t\), subject to an on-site repulsion, \(U>0\), thus giving rise [19] to an effective antiferromagnetic exchange interaction, \(J\sim t^{2}/U\), in the strong coupling regime. The kagome lattice additionally displays a tight-binding feature absent in the triangular lattice, namely the presence of a flat band at its high-energy edge; see Fig. 2(d). Flat bands may lead to a wide range of unexpected electronic properties, such as unconventional magnetism [20; 21] and superconductivity [22]. In particular, the Hubbard model on a KL has been recently studied [23] through determinant Quantum Monte Carlo (DQMC) simulations: at half filling it was found that a phase transition between a paramagnetic metal and a Mott insulator occurs at \(U_{c}/t\approx 6.5\).
Figure 1 also highlights the fact that if the hopping, \(t^{\prime}\), between \(b\) and \(c\) sites (i.e., along the dashed lines) is switched off, one ends up with the so-called Lieb lattice (or decorated square lattice, or CuO\({}_{2}\) lattice). This lattice also displays a flat tight-binding band, but now located at the particle-hole symmetry (PHS) point; see Fig. 2(a). With on-site repulsion, the Lieb lattice (LL) at half filling is not frustrated, but is a ferrimagnetic insulator for all \(U>0\)[24; 25; 26].
A question immediately arising is how the ferrimagnetic insulator at \(t^{\prime}=0\) evolves to either
Figure 1: The kagome lattice is composed of hexagons tiled with intervening corner-sharing triangles; points \(a\), \(b\) and \(c\) comprise the unit cell (highlighted triangle). Solid and dashed lines indicate hopping amplitudes \(t\) and \(t^{\prime}\), respectively.
a Mott insulator or a paramagnetic metal as \(t^{\prime}\to t\). So far, studies of the Hubbard model with anisotropic hopping on the KL have been primarily through mean-field-like approaches [27; 28; 29]. We therefore feel that a thorough investigation of the ground state phase diagram through unbiased methods is certainly in order. With this in mind, here we perform extensive DQMC simulations on the half-filled Hubbard model on the KL with anisotropic hopping, in order to propose a ground state phase diagram, \((U/t,t^{\prime}/t)\).
The layout of the paper is as follows. In Sec. II we present the model and the main features of DQMC method. In Sec. III, we highlight the main properties of the noninteracting case, and investigate the magnetic and transport observables of the interacting one. Sec. IV summarizes our findings.
## II Model and methodology
In order to interpolate between these two limits (Lieb and Kagome lattices), we write the Hamiltonian as
\[\widehat{\mathcal{H}}=\widehat{\mathcal{H}}_{\text{K}}+\widehat{\mathcal{H}}_ {\text{U}}+\widehat{\mathcal{H}}_{\mu}, \tag{1}\]
where \(\widehat{\mathcal{H}}_{\text{K}}\) is the kinetic energy, \(\widehat{\mathcal{H}}_{\text{U}}\) describes the on-site interaction, and \(\widehat{\mathcal{H}}_{\mu}\) controls the band filling. They are defined as
\[\widehat{\mathcal{H}}_{\text{K}}= -t\,\sum_{\mathbf{r},\sigma}\left(a_{\mathbf{r},\sigma}^{\dagger}b _{\mathbf{r},\sigma}+a_{\mathbf{r},\sigma}^{\dagger}c_{\mathbf{r},\sigma}+ \text{H.c}\right)+\] \[-t\,\sum_{\mathbf{r},\sigma}\left(a_{\mathbf{r},\sigma}^{\dagger }b_{\mathbf{r}-\hat{\mathbf{x}},\sigma}+a_{\mathbf{r},\sigma}^{\dagger}c_{ \mathbf{r}-\hat{\mathbf{y}},\sigma}+\text{H.c}\right)+ \tag{2a}\] \[-t^{\prime}\,\sum_{\mathbf{r},\sigma}\left(b_{\mathbf{r},\sigma} ^{\dagger}c_{\mathbf{r},\sigma}+b_{\mathbf{r},\sigma}^{\dagger}c_{\mathbf{r}+ \left(\hat{\mathbf{x}}-\hat{\mathbf{y}}\right),\sigma}+\text{H.c}\right),\] \[\widehat{\mathcal{H}}_{\text{U}}= U\sum_{\mathbf{r},\alpha}\left(n_{\mathbf{r},\uparrow}^{ \alpha}-\nicefrac{{1}}{{2}}\right)\left(n_{\mathbf{r},\downarrow}^{\alpha}- \nicefrac{{1}}{{2}}\right),\] (2b) \[\widehat{\mathcal{H}}_{\mu}= -\mu\sum_{\mathbf{r},\sigma,\alpha}n_{\mathbf{r},\sigma}^{\alpha}\;, \tag{2c}\]
where \(a_{\mathbf{r},\sigma}^{(\dagger)}\), \(b_{\mathbf{r},\sigma}^{(\dagger)}\), and \(c_{\mathbf{r},\sigma}^{(\dagger)}\) are standard fermion annihilation (creation) operators acting on orbital \(\alpha\) at position \(\mathbf{r}\), with spin \(\sigma\); \(n_{\mathbf{r},\sigma}^{\alpha}\) are number operators acting on their corresponding orbitals, \(\alpha=a\), \(b\), or \(c\). \(U\) is the strength of the on-site repulsion, and \(\mu\) is the chemical potential. The first two terms on the right-hand side of Eq. (2a) denote the intra- and intercell hopping between the \(a\)-orbitals and the \(b\)- or \(c\)-orbitals, while the third term corresponds to the intra- and intercell diagonal hopping between the \(b\)- and \(c\)-orbitals (dashed lines in Fig. 1). Notice that, by varying \(t^{\prime}/t\) between \(0\) and \(1\), one continuously increases frustration. Hereafter, we define the lattice constant as unity, and set the energy scale in units of the hopping integral \(t\) between nearest-neighbor \(a\)-\(b/c\) sites.
We examine the properties of the Hamiltonian in Eq. (1) by employing the DQMC methodology [30; 31; 32; 33; 34], which provides an unbiased treatment of the interactions in \(\widehat{\mathcal{H}}\). The idea is based on an auxiliary-field decomposition of the interaction that maps the system onto free fermions moving in a fluctuating, space- and imaginary-time-dependent potential. The method basically consists of two key steps. First, in the grand partition function one separates the noncommuting parts of the Hamiltonian through the so-called Trotter-Suzuki decoupling,
\[\mathcal{Z} =\text{Tr}\,e^{-\beta\widehat{\mathcal{H}}}=\text{Tr}\,[(e^{- \Delta\tau(\widehat{\mathcal{H}}_{\text{K}}+\widehat{\mathcal{H}}_{\text{U}}) })^{M}]\] \[\approx\text{Tr}\,[e^{-\Delta\tau\widehat{\mathcal{H}}_{\text{K} }}e^{-\Delta\tau\widehat{\mathcal{H}}_{\text{U}}}e^{-\Delta\tau\widehat{ \mathcal{H}}_{\text{K}}}e^{-\Delta\tau\widehat{\mathcal{H}}_{\text{U}}}\cdots], \tag{3}\]
where \(M=\beta/\Delta\tau\), with \(\Delta\tau\) being the imaginary-time discretization step and \(\beta=1/(k_{\text{B}}T)\) the inverse temperature, \(k_{\text{B}}\) being the Boltzmann constant. This decomposition leads to an error proportional to \((\Delta\tau)^{2}\), which can be systematically reduced as \(\Delta\tau\to 0\). Here, we choose \(\Delta\tau\leq 0.1\), which is small enough so that the systematic errors from the Trotter-Suzuki decoupling are comparable to the statistical ones (i.e., from the Monte Carlo sampling).
The second step is to perform a discrete Hubbard-Stratonovich (HS) transformation [31] on the two-particle terms, \(\exp{(-\Delta\tau\widehat{\mathcal{H}}_{\text{U}})}\), which also converts them to a quadratic form in fermion operators, at the cost of introducing discrete HS auxiliary fields, \(s(\mathbf{r},\tau)\). In this way, the trace over fermions propagating in an auxiliary bosonic field can be performed.
\[\langle\,\mathcal{O}\,\rangle=\frac{\sum_{s}|p(s)|\;\text{sign}(s)\,\mathcal{O} (s)}{\sum_{s}|p(s)|\;\text{sign}(s)}\equiv\frac{\langle\,\text{sign}\times \mathcal{O}\,\rangle}{\langle\,\text{sign}\,\rangle}. \tag{4}\]
The positiveness of the product of determinants is guaranteed for systems displaying PHS at half-filling, such as bipartite lattices (e.g., square, honeycomb and Lieb) and for attractive interactions.
Apart from these cases, and depending on band filling, temperature, interaction strength, lattice geometry and size, and so forth, one may end up with \(\langle\,\text{sign}\,\rangle\ll 1\), thus rendering \(\langle\,\mathcal{O}\,\rangle\) meaningless: this is the infamous 'minus-sign problem' [35; 36; 37; 38]. Nonetheless, it has been recently suggested that low values of \(\langle\,\text{sign}\,\rangle\) may be used to locate critical points [39; 40]. For the case at hand, we note that the KL is not bipartite, so that PHS is absent for any filling; hence, one must keep track of \(\langle\,\text{sign}\,\rangle\); see below.
Through the DQMC algorithm we obtain Green's functions, from which several quantities are calculated; see, e.g., Refs. [30; 32; 34]. For instance, the magnetic response of the system may be probed by the real-space spin-spin correlation functions,
\[c^{\alpha\alpha^{\prime}}(\mathbf{\ell})=\tfrac{1}{3}\langle\,\mathbf{S}_{\mathbf{r}_{ 0}}^{\alpha}\,\cdot\,\mathbf{S}_{\mathbf{r}_{0}+\mathbf{\ell}}^{\alpha^{\prime}}\, \rangle\,, \tag{5}\]
with \(\mathbf{r}_{0}\) being the position of a given unit cell, while \(\alpha\) and \(\alpha^{\prime}\) denote the orbitals \(a\), \(b\), or \(c\). The Fourier transform of \(c^{\alpha\alpha^{\prime}}(\mathbf{\ell})\) is the magnetic structure factor,
\[S(\mathbf{q})=\frac{1}{N_{s}}\sum_{\alpha,\alpha^{\prime}}\sum_{\mathbf{\ell}}c^{ \alpha\alpha^{\prime}}(\mathbf{\ell})\,e^{i\,\mathbf{q}\cdot\mathbf{\ell}}\,, \tag{6}\]
where \(N_{s}=3L^{2}\) is the number of sites. It is also instructive to discuss the uniform magnetic susceptibility, which is simply related to the structure factor, Eq. (6), through
\[\chi\equiv\beta S(0,0). \tag{7}\]
Similarly, one may probe the metallic or insulating features by the electronic compressibility,
\[\kappa=\frac{1}{\rho^{2}}\frac{\partial\rho}{\partial\mu}, \tag{8}\]
where \(\rho\) is the global electronic density.
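To make the bookkeeping concrete, the following sketch (added here; the correlation data are placeholders) shows how Eqs. (6)-(8) can be evaluated once the real-space correlations have been accumulated, assuming `c_ell` already contains \(\sum_{\alpha,\alpha^{\prime}}c^{\alpha\alpha^{\prime}}(\mathbf{\ell})\) on an \(L\times L\) grid of unit cells.

```python
import numpy as np

def structure_factor(c_ell, q, L):
    """Eq. (6): S(q) from real-space correlations summed over orbital pairs."""
    lx, ly = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    phase = np.exp(1j * (q[0] * lx + q[1] * ly))
    return float((c_ell * phase).sum().real) / (3 * L**2)   # N_s = 3 L^2

L, beta = 6, 10.0
rng = np.random.default_rng(1)
c_ell = rng.normal(scale=0.02, size=(L, L))   # placeholder correlation data
c_ell[0, 0] = 0.75                            # dominant on-cell contribution

chi = beta * structure_factor(c_ell, q=(0.0, 0.0), L=L)   # Eq. (7)

# Eq. (8): compressibility from a finite-difference estimate of d(rho)/d(mu),
# using densities measured at two nearby chemical potentials.
rho, d_rho, d_mu = 1.0, 0.04, 0.1
kappa = d_rho / (d_mu * rho**2)
print(f"chi = {chi:.3f}, kappa = {kappa:.3f}")
```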
## III Results
### The noninteracting limit (\(U=0\))
In the absence of interactions, and setting \(t^{\prime}=0\), i.e. in the Lieb lattice case, the diagonalization of \(\widehat{\mathcal{H}}_{\mathrm{K}}\), Eq. (2a), is straightforward in \(\mathbf{k}\)-space. It yields two dispersive bands,
\[\epsilon_{\pm}(\mathbf{k})=\pm 2t\sqrt{\cos^{2}(k_{x}/2)+\cos^{2}(k_{y}/2)}, \tag{9}\]
and a third dispersionless (flat) one, \(\epsilon(\mathbf{k})=0\), associated with the \(b\) and \(c\) sites. This gives rise to the particle-hole symmetric density of states (DOS) shown in Fig. 2(a). Similarly, in the Kagome limit, \(t^{\prime}=1\), the diagonalization yields two dispersive bands,
\[\epsilon_{\pm}(\mathbf{k})=-t[1\pm\sqrt{4f(\mathbf{k})-3}\,], \tag{10}\]
with \(f(\mathbf{k})=\cos^{2}(k_{x}/2)+\cos^{2}(k_{y}/2)+\cos^{2}((k_{x}-k_{y})/2)\), and one dispersionless band, \(\epsilon(\mathbf{k})=2t\). Figure 2(d) shows the corresponding DOS, with the flat band located at the top of the spectrum.
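The dispersions in Eqs. (9) and (10) can be tabulated directly on a \(\mathbf{k}\)-grid and histogrammed to reproduce the qualitative shape of the DOS in Figs. 2(a) and 2(d); the sketch below is illustrative, with an arbitrary grid and bin count.

```python
import numpy as np

t = 1.0
k = np.linspace(-np.pi, np.pi, 301)
kx, ky = np.meshgrid(k, k)

# Lieb limit (t' = 0), Eq. (9): two dispersive bands and a flat band at 0.
eps = 2 * t * np.sqrt(np.cos(kx / 2)**2 + np.cos(ky / 2)**2)
bands_lieb = np.concatenate([eps.ravel(), -eps.ravel(), np.zeros(eps.size)])

# Kagome limit (t' = t), Eq. (10): two dispersive bands and a flat band at 2t.
f = np.cos(kx / 2)**2 + np.cos(ky / 2)**2 + np.cos((kx - ky) / 2)**2
root = np.sqrt(np.clip(4 * f - 3, 0.0, None))   # clip guards against round-off
bands_kagome = np.concatenate([(-t * (1 + root)).ravel(),
                               (-t * (1 - root)).ravel(),
                               np.full(root.size, 2 * t)])

dos_lieb, edges = np.histogram(bands_lieb, bins=200, density=True)
dos_kagome, _ = np.histogram(bands_kagome, bins=edges, density=True)
```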
For \(0<t^{\prime}/t<1\), the diagonalization of the noninteracting Hamiltonian is carried out numerically, and leads to the DOS displayed in Fig. 2(b)-(c). These panels show that as \(t^{\prime}\) increases the flat band widens, giving rise to two van Hove singularities (vHS): one remains at \(\omega=0\), while the other moves towards the upper band edge, finally merging with another vHS to form a flat band at \(\omega=2t\) when \(t^{\prime}=t\). Further, as shown in Fig. 2(e), the bandwidth increases monotonically with \(t^{\prime}\); the relatively small increase may be attributed to the fact that the kinetic energy is insensitive to hopping anisotropy, so that significant changes in the total bandwidth are essentially driven by the \(U/t\) ratio. Indeed, even small values of \(U/t\) significantly affect the total spectral density, increasing the total bandwidth [27].
### The chemical potential and \(\langle\,\mathrm{sign}\,\rangle\)
The Hamiltonian, Eq. (1), only displays PHS when \(t^{\prime}=0\), in which case half filling, \(\rho=1\), occurs exactly at \(\mu=0\). When \(t^{\prime}\neq 0\) the chemical potential leading to \(\rho=1\), \(\widetilde{\mu}\), depends on \(U\), \(t^{\prime}\), temperature, and, to a lesser extent, on the linear system size, \(L\). The panels on the left hand side of Fig. 3 show \(\widetilde{\mu}\) as a function of hopping anisotropy, for different interaction strengths, \(U/t\), and temperatures. We see that for a given temperature, \(\widetilde{\mu}/t\) varies between 0 and typically 0.5-0.6, as \(t^{\prime}/t\) goes from 0 to 1; further, the larger the repulsion, the stronger the dependence of \(\widetilde{\mu}\) on temperature. The panels on the right hand side of Fig. 3 show \(\langle\,\mathrm{sign}\,\rangle\) as a function of hopping anisotropy, for different interaction strengths, \(U\), and temperatures. As expected, as the temperature is lowered for a given \(U\), the dips in \(\langle\,\mathrm{sign}\,\rangle\) become deeper, and, further, as
Figure 2: (a)-(d): Site-resolved non-interacting DOS for tight-binding fermions on a kagome lattice for different values of \(t^{\prime}/t\), the ratio between hoppings along \(bc\) sites and along sites involving \(a\) sites (see Fig. 1); \(\hbar\omega\) is measured relative to the Fermi energy for half filling. (e): Bandwidth, \(W/t\), as a function of \(t^{\prime}/t\); the dashed line is a guide to the eye.
the on-site repulsion increases, the dips move towards the isotropic region while they also widen; see Fig. 3(f)-(j). These panels therefore map out the regions where a much larger number of Monte Carlo sweeps (on the order of \(10^{6}\)) is required to mitigate the minus-sign problem, so that we concentrate on parameter ranges leading to \(\langle\,\mathrm{sign}\,\rangle\gtrsim 0.1\).
### Magnetic properties
Figure 4 displays the behavior of the spin-spin correlations on nearest \(b\) and \(c\) orbitals, as \(t^{\prime}/t\) increases from \(0\) to \(1\), for different interaction strengths, \(U/t\). Each panel shows that the correlations are maximally positive in the LL limit, and decrease monotonically towards negative values as \(t^{\prime}/t\) increases towards the KL limit. We can also see that for a given \(t^{\prime}/t\), these correlations increase in magnitude as the temperature decreases, a manifestation of their robustness. Thus, the picture that emerges is that a long-ranged ferrimagnetic state [41] at \(t^{\prime}/t=0\) evolves towards a state with strong short-ranged antiferromagnetic correlations in the KL limit as \(t^{\prime}/t\) increases [23; 28].
Given this, we note that the sign of \(c^{\alpha\alpha^{\prime}}(\mathbf{\ell})\) changes at values of \(t^{\prime}/t\) which increase as \(U/t\) increases; these values provide a rough estimate for the boundary of the ferrimagnetic phase, \(t^{\prime}_{c}(U/t)\), as shown in Fig. 7. The growth of \(t^{\prime}_{c}/t\) with \(U/t\) can be understood by the fact that the ferrimagnetic state on the LL may be thought of as due to the formation of triplets in the \(b\) and \(c\) orbitals on the same sublattice, which is more robust the stronger the on-site repulsion is [26].
These analyses are supported by the temperature dependence of the uniform susceptibility, Eq. (7). In the absence of interactions, Pauli behavior is observed for all \(t^{\prime}/t\); see Fig. 5(a). For \(U/t=3\), different values of \(t^{\prime}/t\) cause different responses, as shown in Fig. 5(b): for \(t^{\prime}/t=0.2\), \(\chi\) shows a Curie-like behavior indicating a ferrimagnetic ground state, while for \(t^{\prime}/t\geq 0.6\) Pauli behavior sets in. The behavior for \(t^{\prime}/t=0.4\) is borderline, being consistent with the change in sign of
Figure 3: Left panels (a)-(e): The chemical potential, \(\widetilde{\mu}\), leading to a half-filled band as a function of hopping anisotropy. Each of the panels corresponds to a fixed value of \(U/t\), and, in every panel, the curves are for different inverse temperatures, \(\beta t\), as indicated in (a)-(b). Right panels (f)-(j): Average sign of the fermionic determinant in a log-linear scale. The system parameters are in strict correspondence with those for the (a)-(e) panels. All data are for a linear lattice size \(L=6\), hence with \(N_{s}=3L^{2}\) sites.
Figure 4: Correlations between spins on nearest sites \(b\) and \(c\), as functions of \(t^{\prime}/t\) at different temperatures, and different values of \(U/t\). Each panel corresponds to a given value of \(U/t\), and the linear system size is \(L=6\). When not shown, error bars are smaller than the symbol sizes.
the correlation functions shown in Fig. 4(b). For \(U/t=6\), Fig. 5(c) shows that the Curie-like behavior is still present for \(t^{\prime}/t=0.2\), but the minus-sign problem prevents us from decreasing the temperature below \(T/t\approx 0.3\) for \(t^{\prime}/t\geq 0.4\). Nonetheless, according to Fig. 4(e), for \(t^{\prime}/t<0.6\) the system still displays ferrimagnetic correlations, so that a rise of \(\chi\) cannot be ruled out if the temperature is lowered. Finally, data for \(\chi(T)\) in Fig. 5(d) correspond to fixed \(t^{\prime}/t=0.8\), and different on-site interactions. While for \(U/t=3\) and \(U/t=8.5\) the behavior is Pauli-like, for \(U/t=16\) one detects a tendency of \(\chi\) to increase as \(T/t\) is lowered, similarly to what was found in the KL limit [23].
### Transport properties
Now we proceed by examining transport properties, namely the electronic compressibility, Eq. (8). Interaction-driven metal-insulator transitions are usually signalled by the opening of a single-particle gap at the Fermi energy; this gap appears as a plateau in plots of \(\rho(\mu)\) at fixed low temperatures, and, in turn, leads to an exponential decay of \(\kappa\) at low temperatures, \(\kappa\propto\exp{(-\Delta_{c}/k_{\rm B}T)}\) (hereafter we set \(k_{\rm B}\equiv 1\)). This behavior is observed in Figs. 6(a)-(d), in particular for large values of \(U/t\).
By fitting the low-temperature data of Figs. 6(a)-(d), we obtain the dependence of \(\Delta_{c}/t\) on \(U/t\) shown in Fig. 6(e). The values of \(U/t\) at which \(\Delta_{c}/t\to 0\) provide estimates for \(U_{c}^{\rm M}/t\), the critical values of \(U/t\) above which the system is an insulator.
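A brief sketch of this fitting step (illustrative numbers only): on a log scale, \(\kappa\propto\exp(-\Delta_{c}/T)\) is linear in \(1/T\), so \(\Delta_{c}\) can be read off from the slope of a linear fit to the low-temperature points.

```python
import numpy as np

# Illustrative low-temperature compressibilities kappa(T) for one (t'/t, U/t).
T     = np.array([0.20, 0.25, 0.33, 0.50])
kappa = np.array([2.1e-3, 7.5e-3, 3.0e-2, 1.1e-1])

# ln(kappa) = const - Delta_c / T, so the slope of ln(kappa) vs 1/T is -Delta_c.
slope, intercept = np.polyfit(1.0 / T, np.log(kappa), deg=1)
print(f"estimated charge gap: Delta_c = {-slope:.2f} t")
```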
### The Phase Diagram
The analyses of the preceding subsections may be summarized in the phase diagram shown in Fig. 7. The change in sign of the correlation functions provides an estimate for \(t^{\prime}_{c}/t(U/t)\) for a given temperature; see Fig. 4. Since the minus-sign problem prevents us from obtaining a meaningful sequence of data at different temperatures for all values of \(U/t\), we adopt as \(t^{\prime}_{c}/t(U/t)\) the value at which the curve for the lowest temperature crosses the horizontal axis. The error in this estimate is given by the grid of \(t^{\prime}/t\) values. With this we are able to set up a boundary for the ferrimagnetic state, whose points are denoted by \(U_{c}^{\rm F}/t\) in Fig. 7.
The boundary of the metallic region, on the other hand, is obtained through the analysis of the single-particle gap. As mentioned in Sec. III.4, for a fixed \(t^{\prime}/t\), we determine \(U_{c}^{\rm M}/t\) as the value at which \(\Delta_{c}\to 0\). The errors, in this case, are those emerging from parabolic fits
Figure 5: Temperature dependence of the uniform susceptibility in linear-log plots. Each of the panels (a) to (c) shows data for fixed \(U/t\) and different \(t^{\prime}/t\); panel (d) shows data for fixed \(t^{\prime}/t=0.8\), and different values of \(U/t\). Data have been obtained for \(L=6\), except in (a), in which case \(L=18\) was used. When not shown, error bars are smaller than the data points.
Figure 6: Panels (a)-(d): Compressibility as a function of temperature, for fixed values of \(t^{\prime}/t\), and different values of \(U/t\). (e) Estimates of the gap obtained by fitting an exponential form, \(\kappa\propto\exp{(-\Delta_{c}/k_{\rm B}T)}\), to the low temperature data of panels (a)-(d); see text. When not shown, error bars are smaller than the symbol size, while the lines are parabolic fits to the curves.
of \(U_{c}/t(\Delta_{c}/t)\) for the curves in Fig. 6(e). Interestingly, both the ferrimagnetic insulator and paramagnetic metal curves are very close to each other for small values of \(t^{\prime}/t\), suggesting that both transitions occur simultaneously. However, from Fig. 7, we see that such behavior changes for larger frustration, with a bifurcation at \(t^{\prime}/t\approx 0.45\) followed by a slight increase of \(U_{c}^{\rm F}/t\) up to \(t^{\prime}/t\approx 0.6\); beyond this point, ferrimagnetism is no longer found. Determining the features of the region that is neither metallic nor ferrimagnetic is challenging. Due to its similarities with the Mott region of the Kagome lattice, \(t^{\prime}/t=1\), in particular the strong short-range antiferromagnetic correlations, we define it as a Mott insulator region, and we expect that spin liquid features may occur for sufficiently large \(U/t\).
It is important to note that our data for \(\langle\,\)sign\(\,\rangle\) verify the relation with quantum critical points [39, 40]. Starting with \(U/t=2\), Fig. 3 shows that the dip in \(\langle\,\)sign\(\,\rangle\) for \(\beta t=16\) occurs near \(t_{c}^{\prime}/t=0.20\pm 0.05\). Similarly, for \(U/t=3\) the dip for \(\beta t=10\) occurs at \(t_{c}^{\prime}/t=0.40\pm 0.05\), which is also in accord with the critical point indicated in Fig. 7. For \(U/t=4\), the minimum of \(\langle\,\)sign\(\,\rangle\) moves to \(t_{c}^{\prime}/t=0.50\pm 0.05\). This trajectory of \(t_{c}^{\prime}/t\) therefore provides numerical evidence that \(\langle\,\)sign\(\,\rangle\) is picking up the transitions labelled \(U_{c}^{\rm M}/t\) in Fig. 7. Indeed, although the minima of \(\langle\,\)sign\(\,\rangle\) broaden considerably for \(U/t\gtrsim 5\), they are consistent with a smaller rate of change in \(U_{c}^{\rm M}/t\) in this region. Interestingly, one cannot ascertain whether the dominant cause of a drop in \(\langle\,\)sign\(\,\rangle\) is the transition between a paramagnet and a magnetically ordered state (irrespective of being ferrimagnetic or antiferromagnetic) or between a metal and an insulator.
As mentioned before, this model has been studied within a mean-field (MF) approach [29]. Although restricted to the range \(0.5\leq t^{\prime}/t\leq 1\), the phase diagram thus obtained identifies a metal-insulator (MI) transition, in addition to the ferrimagnetic-Mott transition in the insulating regime. Nonetheless, some quantitative differences are worth mentioning: First, the paramagnetic metal-MI phase boundary obtained through the MF approximation displays a slower increase with \(t^{\prime}/t\) than the one obtained here. Second, the steep ferrimagnetic-Mott insulator boundary lies around \(t^{\prime}/t=0.8\) in [29], larger than our estimate of around \(t^{\prime}/t=0.6\).
## IV Conclusions
We have examined the Hubbard model on a Kagome lattice with a hopping anisotropy that allows us to continuously interpolate between the unfrustrated Lieb lattice and the fully frustrated Kagome lattice. This interpolation is achieved by varying the hopping amplitude, \(0\leq t^{\prime}/t\leq 1\), along one of the diagonals of the Kagome lattice. Through extensive quantum Monte Carlo simulations we have calculated different magnetic and transport responses, from which we have obtained a phase diagram in the \((U/t,t^{\prime}/t)\) parameter space. We have also analyzed the average sign of the fermionic determinant, and found that it provides consistent predictions for critical points, as recently proposed [39, 40].
The picture that emerges is that a metal-insulator transition takes place at some \(U_{c}^{\rm M}/t(t^{\prime}/t)\), which increases monotonically with \(t^{\prime}/t\), from \(U_{c}^{\rm M}/t(0)=0\). We have also found that increasing this type of frustration causes a phase transition between a ferrimagnetic phase and a Mott phase. We hope our findings will motivate further studies of ultracold fermionic atoms on Lieb and Kagome lattices.
###### Acknowledgements.
The authors are grateful to the Brazilian Agencies Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Coordenacao de Aperfeicoamento de Pessoal de Ensino Superior (CAPES), and Instituto Nacional de Ciencia e Tecnologia de Informacao Quantica (INCT-IQ) for funding this project. Financial support from CNPq, Grant No. 313065/2021-7 (N.C.C.), and Grant Nos 308335/2019-8 and 403130/2021-2 (T.P.) and from FAPERJ - Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro -, Grant No. 200.258/2023 (SEI-260003/000623/2023) (N.C.C) and Grant numbers E-26/200.959/2022 and E-26/210.100/2023 (T.P.).
Figure 7: Phase diagram of the Hubbard model on the anisotropic kagome lattice at half-filling. Critical points labelled \(U_{c}^{\rm F}/t\) and \(U_{c}^{\rm M}/t\) have been determined through data in Fig. 4 and Fig. 6(e), respectively; see text. The dashed lines through the critical points are guides to the eye. |
2306.03302 | Statistical Inference Under Constrained Selection Bias | Large-scale datasets are increasingly being used to inform decision making.
While this effort aims to ground policy in real-world evidence, challenges have
arisen as selection bias and other forms of distribution shifts often plague
observational data. Previous attempts to provide robust inference have given
guarantees depending on a user-specified amount of possible distribution shift
(e.g., the maximum KL divergence between the observed and target
distributions). However, decision makers will often have additional knowledge
about the target distribution which constrains the kind of possible shifts. To
leverage such information, we propose a framework that enables statistical
inference in the presence of selection bias which obeys user-specified
constraints in the form of functions whose expectation is known under the
target distribution. The output is high-probability bounds on the value of an
estimand for the target distribution. Hence, our method leverages domain
knowledge in order to partially identify a wide class of estimands. We analyze
the computational and statistical properties of methods to estimate these
bounds and show that our method can produce informative bounds on a variety of
simulated and semisynthetic tasks, as well as in a real-world use case. | Santiago Cortes-Gomez, Mateo Dulce, Carlos Patino, Bryan Wilder | 2023-06-05T23:05:26Z | http://arxiv.org/abs/2306.03302v3 | # Inference under constrained distribution shifts
###### Abstract
Large-scale administrative or observational datasets are increasingly used to inform decision making. While this effort aims to ground policy in real-world evidence, challenges have arisen as selection bias and other forms of distribution shift often plague observational data. Previous attempts to provide robust inferences have given guarantees depending on a user-specified _amount_ of possible distribution shift (e.g., the maximum KL divergence between the observed and target distributions). However, decision makers will often have additional knowledge about the target distribution which constrains the _kind_ of shifts which are possible. To leverage such information, we propose a framework that enables statistical inference in the presence of distribution shifts which obey user-specified constraints in the form of functions whose expectation is known under the target distribution. The output is high-probability bounds on the value an estimand takes under the target distribution. Hence, our method leverages domain knowledge in order to partially identify a wide class of estimands. We analyze the computational and statistical properties of methods to estimate these bounds, and show that our method can produce informative bounds on a variety of simulated and semisynthetic tasks.
## 1 Introduction
Decision-makers in the public and private sectors increasingly seek to use machine learning or statistical models built on top of large-scale datasets in order to inform policy, operational decisions, individualized treatment rules, and more. However, these administrative datasets are typically purely observational, meaning that they are not carefully designed to sample from a true distribution of interest. Accordingly, such efforts have been hindered by sampling bias and other distributional shifts between the dataset which is observable and the target population of interest. For example, decision-makers in social services or public health may wish to estimate the risk factors which contribute to adverse outcomes for individuals in order to target preventative interventions. However, they likely only have data from a subset of people who engaged with some specific service in the past (e.g., beneficiaries of public programs or patients who have previously visited the same health system). Such selection bias has presented severe issues for past algorithmic systems [1, 2], policy analysis [3, 4], epidemiological studies [5, 6], and more.
We consider the task of statistical inference under distribution shift, or accurately recovering some function of the ground-truth distribution (e.g., the expectation of a covariate, the treatment effect of an intervention, or a parameter in a regression model) using samples from an observed distribution. In general, such inference is intractable without assumptions about the relationship between the observed and target distributions. The simplest setting is where we can observe at least some samples from the target distribution. However, such samples are not available in many settings of policy interest. Absent target-distribution samples, other frameworks (e.g. those related to distributionally robust optimization [7, 8, 9, 10] or sensitivity analysis [11, 12, 13]) allow a user to specify the total _magnitude_ of distribution shift allowed. However, they do not provide a framework for using existing domain knowledge to constrain the _kind_ of distribution shift.
Intuitively though, decision makers will often have aggregate knowledge about the target distribution even if they do not have individual-level samples. For example, a policymaker may know via census data the distribution of demographic characteristics such as age, race, and income in the population as a whole. Or, in a public health setting, serosurveys may provide ground-truth estimates of exposure to infectious diseases in specific locations or population groups [14]. This aggregate information is typically not sufficient by itself for the original task because it fails to measure the key outcome or covariates of interest (e.g., knowing the demographic distribution of a population is by itself unhelpful for estimating a patient's risk of hospital readmission). However, it imposes constraints on the nature of distribution shift between the observed distribution and the underlying population: any valid shift must respect these known quantities.
In this work we propose a framework for statistical inference under distribution shifts which obey such user-specified constraints. More formally, if \(\mathbb{P}\) is the population distribution and \(\mathbb{Q}\) is the observable distribution, for any statistic \(f(\mathbb{P})\) our method outputs upper and lower bounds which are guaranteed to contain \(f(\mathbb{P})\) (with high probability), using only samples from \(\mathbb{Q}\). These bounds are informed by user-specified constraints, in the form of auxiliary functions whose true expectation under \(\mathbb{P}\) is known. This provides _partial identification_ of the statistic of interest. That is, in general the user-specified constraints will be insufficient to exactly specify the relationship between \(\mathbb{P}\) and \(\mathbb{Q}\), and so it is impossible to exactly infer \(f(\mathbb{P})\) using samples from \(\mathbb{Q}\). However, our bounds become tighter as the user provides more informative auxiliary constraints. Such bounds may be sufficient to identify an optimal decision or certify the robustness of a model even when point identification is impossible. In more detail, our contributions are as follows:
* We introduce a framework which allows user-specified constraints on distribution shift via known expectations. Our framework incorporates such external information into an optimization program whose value gives a valid bound on a statistic of interest \(f(\mathbb{P})\) using samples from an observed distribution \(\mathbb{Q}\). We provide conditions under which this optimization problem is convex and hence efficiently solvable.
* We analyze the statistical properties of estimating these bounds using a sample average approximation for the optimization problem. We show that the estimator for our bounds is asymptotically normal, allowing us to provide confidence intervals.
* We extend our framework to accommodate estimands without a closed form (e.g., obtained
by fitting a regression model to estimate a coefficient or nuisance function), provide statistical guarantees for this setting, and propose computational approaches to solve the resulting optimization problem.
* We perform experiments on synthetic and semi-synthetic data to test our methods. The results confirm that our framework provides valid bounds and allows effective use of domain knowledge: incorporation of more informative constraints produces tighter bounds, which can be strongly informative about the estimand of interest. All the code we used for our experiments is available at hidden.
Additional related work.Our work is broadly related to the literature on partial identification, which spans statistics, economics, epidemiology, and computer science [15, 16, 17, 18]. Interest in partial identification has grown recently in the machine learning community, particularly in the causal inference setting. For example, [19, 20] consider partial identification of treatment effects for a known causal graph, extending the classic framework of Balke and Pearl [21] to incorporate generative models. [22] consider the setting where covariates are subject to a user-specified level of noise. Perhaps closest to our setting is [23], who consider methods for incorporating domain knowledge into partial identification problems. However, in their setting, domain knowledge takes the form of functional form restrictions for the treatment effect (e.g., smoothness or number of inflection points), rather than constraints on shift between observed and target distributions. Our work differs from the existing ML literature both in that we are concerned with inference problems broadly (not restricted to treatment effects), and in that we provide a means to impose externally-known constraints on shifts such as selection bias. Related issues have recently been considered in the epidemiology and biostatistics literature, motivated by the growing use of biobank-style datasets which have known selection biases [24, 25]. Our work differs in that we explicitly consider the algorithmic properties of computing the resulting bounds, and in our analysis of estimators which themselves require fitting a model.
## 2 Problem formulation
Let \(X\) be a random variable over a space \(\mathcal{X}\), distributed according to some distribution \(\mathbb{P}\) in a population of interest. We assume that \(\mathbb{P}\) has a density \(p\). Our goal is to estimate a function \(f(\mathbb{P})\). One prominent example is estimating an expectation, i.e., \(f(\mathbb{P})=\mathbb{E}_{X\sim\mathbb{P}}[h(X)]\) for some function \(h\), but we will consider a range of examples for \(f\). If we observe iid samples from \(\mathbb{P}\), estimating \(f(\mathbb{P})\) is a standard inference task. For example, we may substitute the empirical distribution \(\hat{\mathbb{P}}_{N}\) and use the plug-in estimate \(f(\hat{\mathbb{P}}_{N})\), or use any number of other strategies depending on the estimand.
However, we consider a setting where we instead observe samples drawn iid from a different distribution \(\mathbb{Q}\) with density \(q\). We assume that \(q(X)>0\) whenever \(p(X)>0\) so that the ground-truth density ratio \(\theta_{0}(X)=\frac{p(X)}{q(X)}\) is always well defined. With slight abuse of notation, for any \(\mathbb{P}\), we let \(\theta\mathbb{P}\) denote a distribution with density \(\theta(X)p(X)\) so that \(\theta_{0}\mathbb{Q}=\mathbb{P}\). If \(\theta_{0}\) were known, then estimating \(f(\mathbb{P})\) using samples from \(\mathbb{Q}\) would be easily accomplished using standard importance sampling methods.
E.g., in the case of estimating an expectation, we have that \(\mathbb{E}_{X\sim\mathbb{P}}[h(X)]=\mathbb{E}_{X\sim\mathbb{Q}}[\theta_{0}(X)h(X)]\). However, our focus is on the setting where \(\theta_{0}\) is _not_ known, and we do not even have access to samples from \(\mathbb{P}\) from which to estimate it: for many observational datasets, samples with the same covariates are simply not available for the population as a whole. For example, in a healthcare setting, key clinical features and health outcomes can only be observed for patients who seek care.
Without any information about the relationship between \(\mathbb{P}\) and \(\mathbb{Q}\) this is a hopeless task. However, in many settings of importance _some_ information is available, in the form of auxiliary functions whose true expectation under \(\mathbb{P}\) is known. In policy settings, this may come from census data, or well-designed population surveys which estimate the ground-truth prevalence of specific health conditions (c.f. [14, 26, 27]). More formally, we are able to observe \(\mathbb{E}_{\mathbb{P}}[g_{j}(X)]\) for a collection of functions \(g_{1}...g_{m}\). Then, we know that \(\theta_{0}\) must satisfy
\[\mathbb{E}_{\mathbb{Q}}[\theta_{0}(X)g_{j}(X)]=\mathbb{E}_{\mathbb{P}}[g_{j}( X)]\quad j=1...m.\]
Define \(\Theta\) to be the set of density ratios \(\theta\) which satisfy the above constraints, all of which are consistent with our observations. In general, \(\Theta\) will not be a singleton and so \(f(\mathbb{P})\) will not be point-identified. However, it is _partially_ identified with the following bounds:
**Proposition 1**.: \(f(\mathbb{P})\in(\min_{\theta\in\Theta}f(\theta\mathbb{Q}),\quad\max_{\theta \in\Theta}f(\theta\mathbb{Q}))\)_, and these bounds are tight._
Proof.: This holds by construction since \(\mathbb{P}=\theta\mathbb{Q}\) for some \(\theta\in\Theta\). To see that these bounds are tight, suppose that we had a conjectured lower bound \(\ell>\min_{\theta\in\Theta}f(\theta\mathbb{Q})\). Then, there would be some \(\theta\) consistent with all constraints in \(\Theta\) for which \(f(\theta\mathbb{Q})<\ell\), implying that there is a target distribution \(\mathbb{P}\) consistent with all constraints for which \(\ell\) is not a valid lower bound.
Our goal is to provide computationally and statistically efficient methods for estimating these upper and lower bounds, each of which are defined via an optimization problem over \(\theta\). \(\Theta\) encapsulates the constraints on potential distribution shifts which are known in a particular domain, allowing an analyst to translate additional domain knowledge into tighter identification of the estimand. We assume that we observe a sample of \(N\) points \(X_{1}...X_{N}\) drawn iid from \(\mathbb{Q}\). Our aim is to use this sample to estimate the value of the optimization problems defining each side of the bound, and to provide a confidence bound for \(f(\mathbb{P})\) which accounts for both partial identification and finite-sample uncertainty in estimation. Throughout, we will assume for convenience that the optimization problems defining each side of the bound have a unique solution (if this is not satisfied, we could e.g. modify the objective function to select a minimum-norm solution).
## 3 Methods
We now propose a series of means for estimating such partial identification bounds, each of which combines a particular parameterization of \(\theta\) along with the estimand of interest to arrive at an optimization problem whose value gives the bound. Throughout, we consider the problem of estimating the lower bound (i.e., solving the minimization problem), as the upper bound is symmetric. A starting point is to impose no restrictions at all on the form of \(\theta(X)\) by representing \(\theta(X)\) as an
arbitrary function, i.e., one that may take a completely separate value for every \(X\). This yields the population problem (left) and its corresponding plug-in approximation on the observed empirical distribution (which we denote by \(\hat{\mathbb{Q}}_{N}\)):
\[\min_{\theta}f(\theta\mathbb{Q}) \min_{\theta}f(\theta\hat{\mathbb{Q}}_{N})\] \[\theta(X)\geq 0\quad\forall X \theta(X_{i})\geq 0\quad\forall i=1...N\] \[\mathbb{E}_{X\sim\mathbb{Q}}[\theta(X)g_{j}(X)]=c_{j}\quad j=1...m \frac{1}{N}\sum_{i=1}^{N}\theta(X_{i})g_{j}(X_{i})=c_{j}\quad j=1...m\]
where the constraint \(\theta(X)\geq 0\) ensures that \(\theta\) is a valid density ratio. In some cases, we may be able to replace this with a tighter constraint. For example, consider the case of _selection bias_, where individual samples from \(\mathbb{P}\) select into \(\mathbb{Q}\) based on \(X\). Formally, we model this via an indicator variable \(R\) which is \(1\) if a unit is observed in the sample and \(0\) otherwise. Then, \(q(x)=p(x|R=1)\), and via Bayes theorem we have \(\theta_{0}(X)=\frac{\Pr(R=1)}{\Pr(R=1|X)}\). Since \(\Pr(R=1|X)\leq 1\), we are guaranteed that \(\theta_{0}(X)\geq\Pr(R=1)\). In many cases, the marginal \(\Pr(R=1)\) is easily observable because we know the total size of the population that appears in our sample relative to the true population (e.g., a government may know the fraction of people in a city who are enrolled in a program). This allows us to replace the constraint \(\theta(X)\geq 0\) with the tighter constraint \(\theta(X)\geq\Pr(R=1)\).
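To make the plug-in computation concrete, the following minimal sketch (illustrative; not the implementation released with this paper) solves the sample problem for the simplest estimand, \(f(\mathbb{P})=\mathbb{E}_{\mathbb{P}}[h(X)]\), treating each \(\theta(X_{i})\) as a free variable; the function name, the scipy solver, and the input format are assumptions of the sketch. Normalization of \(\theta\) can be imposed, if desired, by including a constant function among the \(g_{j}\) with \(c_{j}=1\).

```python
import numpy as np
from scipy.optimize import linprog

def lower_bound_expectation(h_vals, g_vals, c, theta_lb=0.0):
    """Plug-in lower bound on E_P[h(X)] from samples of Q.

    h_vals   : (N,)   values h(X_i)
    g_vals   : (N, m) values g_j(X_i)
    c        : (m,)   known target expectations E_P[g_j(X)]
    theta_lb : lower bound on theta(X_i), e.g. Pr(R=1) under selection bias
    """
    N = len(h_vals)
    obj = h_vals / N                     # objective: (1/N) sum_i theta_i h(X_i)
    A_eq = g_vals.T / N                  # constraints: (1/N) sum_i theta_i g_j(X_i) = c_j
    res = linprog(obj, A_eq=A_eq, b_eq=c,
                  bounds=[(theta_lb, None)] * N, method="highs")
    return res.fun, res.x                # bound value and the optimizing weights

# The upper bound is obtained by minimizing -h_vals and negating the value.
```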
**Approach: plug-in estimation using the sample problem.** We will study plug-in estimators which solve the right-hand optimization problem above on the observed sample. Regardless of the lower bound used, the set of constraints is convex in \(\theta\), enabling operations such as efficient projection onto the feasible set. The computational properties of the plug-in estimator will accordingly depend on the estimand \(f\). Statistically, we will use the theory of _sample average approximation_ in optimization to describe how the value estimated using the plug-in optimization problem converges to the value of the population problem on the left-hand side. We will obtain results that the sampling variance of our estimator for the overall bound is just the variance of the plug-in estimate of \(f\): we do not pay any price (asymptotically) in the precision of our estimates due to optimizing over a range of distributions.
### Convex estimands
We start with the case where \(f(\theta\hat{\mathbb{Q}}_{N})\) is a convex function of \(\theta\). The most prominent case where this holds is when \(f\) is the expectation of some function \(h\), in which case \(f(\theta\hat{\mathbb{Q}}_{N})=\mathbb{E}_{X\sim\hat{\mathbb{Q}}_{N}}[\theta(X) h(X)]\) is linear in \(\theta\). Another example is when \(f\) is a conditional expectation, conditioned on some event \(X\in C\). In this case, we have
\[f(\theta\mathbb{Q})=E_{X\sim\theta\mathbb{Q}}[h(X)|X\in C] f(\theta\hat{\mathbb{Q}}_{N})=\frac{\sum_{i=1}^{N}1[X_{i}\in C]\cdot \theta(X_{i})h(X_{i})}{\sum_{i=1}^{N}\theta(X_{i})1[X_{i}\in C]}.\]
The plug-in approximation to \(f\) is no longer linear in \(\theta\) because \(\theta\) determines both the numerator (the joint probability of \(h(X)\) and \(X\in C\)) and the denominator (the marginal probability that \(X\in C\)). However, both the numerator and denominator are linear in \(\theta\), making this an example
of a _linear-fractional_ program, which can be reformulated as a linear program using the standard Charnes-Cooper transformation. In either the expectation or conditional-expectation case, the plug-in estimator requires only that we solve a linear program and is hence computationally efficient.
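For the conditional-expectation case, an illustrative sketch (not from the paper's codebase) of the Charnes-Cooper reformulation: writing \(y_{i}=t\,\theta(X_{i})\), with \(t\) the reciprocal of the denominator, turns the fractional objective into a linear one at the cost of a single extra variable. The small positive lower bound on \(t\) is a pragmatic choice to keep the reformulation well behaved.

```python
import numpy as np
from scipy.optimize import linprog

def lower_bound_cond_expectation(h_vals, in_C, g_vals, c, theta_lb=0.0):
    """Lower bound on E_P[h(X) | X in C]; decision variables z = (y_1..y_N, t)."""
    N, m = g_vals.shape
    obj = np.append(in_C * h_vals, 0.0)                 # sum_i 1[X_i in C] y_i h(X_i)

    A_eq = [np.append(in_C.astype(float), 0.0)]         # denominator scaled to 1
    b_eq = [1.0]
    for j in range(m):                                  # (1/N) sum_i y_i g_j(X_i) = c_j t
        A_eq.append(np.append(g_vals[:, j] / N, -c[j]))
        b_eq.append(0.0)

    # theta(X_i) >= theta_lb becomes y_i - theta_lb * t >= 0.
    A_ub = np.hstack([-np.eye(N), theta_lb * np.ones((N, 1))])
    b_ub = np.zeros(N)

    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(None, None)] * N + [(1e-9, None)], method="highs")
    y, t = res.x[:N], res.x[N]
    return res.fun, y / t                               # bound value and theta(X_i)
```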
We now turn to establishing the statistical properties of the plug-in estimator. Define \(\nu\) to be the optimal value of the population problem and \(\hat{\nu}_{N}\) to be the optimal value of the sample problem. When \(f\) is a convex function of \(\theta\), we can use standard results in sample average approximation [28] to show asymptotic normality, in particular that \(\sqrt{N}(\nu-\hat{\nu}_{N})\) converges to a normal distribution. This allows us to derive confidence intervals by estimating the variance of this normal distribution and taking the appropriate quantiles. In particular, let \(\lambda_{j}\) be the dual variable associated with constraint \(j\), and \(\lambda_{j}^{*}\) be the optimal value of \(\lambda_{j}\) in the population problem. Similarly, let \(\theta_{0}\) be the population-optimal value of \(\theta\). Then, we have
**Proposition 2**.: _When \(f(\theta\hat{\mathbb{Q}}_{N})\) is convex in \(\theta\), \(\sqrt{N}(\nu-\hat{\nu}_{N})\rightarrow\mathcal{N}(0,\sigma^{2})\) where \(\sigma^{2}=\text{Var}[\theta_{0}(X)h(X)+\sum_{j=1}^{M}\lambda_{j}^{*}(\theta_{ 0}(X)g_{j}(X)-c_{j})]\) and convergence is in distribution._
Proof.: This follows directly from Theorem 5.11 of Shapiro et al. [28], since the objective and constraints are convex and we have assumed that the population problem has a unique solution.
We can then produce confidence intervals by estimating this variance via the sample estimates \(\hat{\theta}_{N}\) and \(\hat{\lambda}_{N,j}\) (see, e.g., Shapiro et al. [28], Eqs. 5.183 and 5.172). Letting \(\hat{\sigma}\) be the estimated standard deviation, we have the confidence bound \(\nu\geq\hat{\nu}_{N}+\hat{\sigma}Z_{\frac{\alpha}{2}}\), where \(\alpha\) is the desired significance level and \(Z_{\frac{\alpha}{2}}\) is the corresponding quantile of the standard normal. In order to simplify estimation of this variance, it may be necessary to use a form of sample splitting, where disjoint sets of samples are used to estimate the objective function and each constraint in the sample problem. This eliminates the covariance terms in the expression for \(\sigma\) above. Alternatively, it may be simpler in practice to estimate the variance using the bootstrap.
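As one concrete instance of the bootstrap option, a sketch (illustrative; it reuses the hypothetical `lower_bound_expectation` routine from the earlier sketch) that resamples the data, re-solves the linear program, and plugs the resulting standard deviation into the one-sided bound above; resamples for which the program happens to be infeasible could simply be skipped.

```python
import numpy as np
from scipy.stats import norm

def bootstrap_lower_confidence_bound(h_vals, g_vals, c, alpha=0.05,
                                     n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    N = len(h_vals)
    nu_hat, _ = lower_bound_expectation(h_vals, g_vals, c)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, N, size=N)        # resample with replacement
        val, _ = lower_bound_expectation(h_vals[idx], g_vals[idx], c)
        reps.append(val)
    sigma_hat = np.std(reps, ddof=1)
    # One-sided bound: nu >= nu_hat + sigma_hat * Z_{alpha/2} (note Z_{alpha/2} < 0).
    return nu_hat + sigma_hat * norm.ppf(alpha / 2)
```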
We also remark that the expression for \(\sigma\) makes clear that the variance of the final estimator depends on \(\theta_{0}(X)h(X)\), not \(h(X)\) alone. \(\theta_{0}(X)h(X)\) will have larger variance when there are \(X\) that have much higher density under the optimizing distribution \(\theta_{0}(X)\mathbb{Q}\) than under \(\mathbb{Q}\), resulting in large values of \(\theta_{0}(X)\). That is, allowing distribution shifts far from \(\mathbb{Q}\) generates uncertainty in two ways: both by worsening identification (i.e., more extreme values of the population optimization problem) and by generating additional finite-sample variance in the evaluation of the bound.
### Parametric forms for the density ratio
In some cases, we may wish to impose a parametric representation of the function \(\theta(X)\) in order to encode specific domain knowledge which constrains the values that \(\theta\) might take. The choice of parametric form can have significant computational and statistical consequences.
One common choice in applied settings is the logistic model, i.e., we could impose that \(\log\frac{Pr(R=1|X)}{Pr(R=0|X)}=\beta^{T}X\) for some set of parameters \(\beta\). Unfortunately, substituting this expression for \(\theta\) in the sample program above makes it clear that this parametric form has a very undesirable property: it results in a set of nonlinear equality constraints, and so the optimization problem will not in general have a computationally efficient solution (even finding a feasible solution may be difficult).
We propose an alternative parameterization, where we directly specify \(\theta\) to be a linear function in some representation of the features: \(\theta(X)=\alpha^{\top}\phi(X)\) for some fixed function \(\phi\). For example, \(\phi\) could contain indicators for the values of categorical features, interaction terms, polynomials or other nonlinear functions of continuous covariates, etc. The choice of \(\phi\) determines the expressivity of the weights \(\theta\), allowing an analyst to specify more or less structure according to their knowledge of the domain. Indeed, using a standard basis for smooth functions (e.g., a set of polynomial basis functions) allows us to represent any smooth function in this framework if desired.
The advantage of this parameterization is that it ensures that \(\theta\) is now a convex (linear) function of the parameter \(\alpha\), enabling provably efficient computation of our bounds. Let \(\theta_{\alpha}\) denote the function \(\theta(X)=\alpha^{\top}\phi(X)\). The population and sample optimization problems become, respectively:
\[\min_{\alpha}f(\theta_{\alpha}\mathbb{Q}) \min_{\alpha}f(\theta_{\alpha}\hat{\mathbb{Q}}_{N})\] \[\alpha^{\top}\phi(X)\geq 1\quad\forall X \alpha^{\top}\phi(X_{i})\geq 1\quad i=1...N\] \[\mathbb{E}_{X\sim\theta_{\alpha}\mathbb{Q}}[g_{j}(X)]=c_{j}\quad j =1...m \frac{1}{N}\sum_{i=1}^{N}\alpha^{\top}\phi(X_{i})g_{j}(X_{i})=c_{j} \quad j=1...m.\]
Whenever the estimand of interest depends on \(\theta\) via a convex function, this is a convex program in \(\alpha\). For example, when \(f\) is an expectation, the sample objective becomes \(\frac{1}{N}\sum_{i=1}^{N}\alpha^{\top}\phi(X_{i})h(X_{i})\). Similarly to the above, when the estimand is a conditional expectation we obtain a linear-fractional program. Accordingly, Proposition 1 applies in this setting as well.
Computationally, the number of variables in this linear program is \(dim(\alpha)\), which is generally much smaller than the size of the dataset. However, it does require enforcing \(\alpha^{\top}\phi(X_{i})\geq 1\) for each \(X_{i}\), generating \(N\) constraints. In some common settings however, we can remove the dependence on the number of samples \(N\) entirely from the optimization problem. Let \(\Phi=\{\phi(X_{i}):i=1...N\}\) be the set of unique values taken by \(\phi\). In actuality, we only have to enforce the lower bound constraint for each value in \(\Phi\), not each sample individually. In many cases, \(\Phi\) will be much smaller than \(N\). For example, in many domains, models for potential selection biases will be formulated based on domain knowledge about a set of variables which are likely to contribute to such biases; e.g., an analyst may wish to obtain bounds under selection on income, education, and insurance status into an observational healthcare dataset. If \(\phi\) depends on only a small number of discrete covariates, then the set \(\Phi\) will be small, resulting in a small number of constraints.
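An illustrative sketch of the same computation under the linear parameterization \(\theta(X)=\alpha^{\top}\phi(X)\); here `phi_vals` denotes the \(N\times d\) matrix of features \(\phi(X_{i})\) (an assumed input), and the lower-bound constraint is enforced only at the unique feature rows, as discussed above.

```python
import numpy as np
from scipy.optimize import linprog

def lower_bound_expectation_parametric(h_vals, phi_vals, g_vals, c):
    """Lower bound on E_P[h(X)] with theta(X) = alpha^T phi(X)."""
    N, d = phi_vals.shape
    obj = phi_vals.T @ h_vals / N               # (1/N) sum_i alpha^T phi(X_i) h(X_i)
    A_eq = (g_vals.T @ phi_vals) / N            # (1/N) sum_i alpha^T phi(X_i) g_j(X_i) = c_j

    phi_unique = np.unique(phi_vals, axis=0)    # enforce alpha^T phi >= 1 once per value
    A_ub, b_ub = -phi_unique, -np.ones(len(phi_unique))

    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=c,
                  bounds=[(None, None)] * d, method="highs")
    return res.fun, res.x                       # bound value and the coefficients alpha
```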
### General estimands
When the estimand \(f(\theta\mathbb{Q})\) is a non-convex function of \(\theta\), in general we cannot obtain provably optimal solutions to the sample problem in an efficient manner. However, we can still obtain locally optimal solutions (as is common for other partial identification settings [19, 20, 23, 24]). In particular, whenever \(f(\theta\mathbb{Q})\) is differentiable with respect to \(\theta\), we can use gradient descent to solve the optimization problem, with a projection step in each iteration to enforce the constraints on \(\theta\). The statistical properties of the plug-in estimator may, however, become more complex, as we illustrate using the following two example estimands.
**Example 1:** Treatment effect estimation. Consider the setting where \(\mathbb{P}\) is a distribution over tuples \((X,Y,A)\), where \(X\) is a covariate vector, \(Y\) is an outcome, and \(A\) is a binary variable indicating assignment of a treatment. For simplicity, we consider the case where the outcome \(Y\) is also binary. Researchers are often interested in estimating the average treatment effect. Under standard identifying assumptions (most prominently that \(A\perp Y^{A=a}|X\)), this produces the estimand:
\[f(\mathbb{P})=\int_{X}p(X)\left[p(Y=1|A=1,X)-p(Y=1|A=0,X)\right].\]
Now, we can consider the setting where we observe samples from a different distribution than \(\mathbb{P}\), resulting in a density ratio \(\theta(X,Y,A)\). In our framework, we may have selection into the sample based on any function of these three variables, including selection based on the outcome \(Y\). Computing the appropriate marginals and conditionals of \(\theta(X,Y,A)p(X,Y,A)\), and substituting these for \(p\) in the above expression, gives an objective which is non-convex in terms of \(\theta\), but which is nonetheless differentiable, enabling gradient-based optimization. However, we will have to use a model to estimate the nuisance functions \(p(Y=1|A=a,X)\) and the statistical properties of the resulting bounds will depend on how these are estimated.
**Example 2:** Coefficients of parametric models. Suppose that a researcher is interested in interpreting the estimated coefficient of a parametric model, e.g., linear or generalized linear models as commonly used in a variety of applied settings. For example, an applied researcher may estimate the odds ratio for an outcome given some exposure using a logistic model, and wish to obtain bounds on this parametric odds ratio under potential selection biases or other distribution shifts.
**Bounding the outputs of \(m\)-estimators:** To provide one way of addressing these (and other) examples, we consider the general challenge of partially identifying quantities produced by an \(m\)-estimator. \(m\)-estimators are those which estimate a parameter \(\beta\) via minimizing the expected value of a function \(m\), i.e., \(f(\mathbb{P})=\arg\min_{\beta}\mathbb{E}_{X\sim\mathbb{P}}[m(X,\beta)]\). One prominent example is where \(m\) is the negative log likelihood, resulting in a maximum likelihood estimate of \(\beta\). \(m\)-estimators are widely used across many areas of statistics and machine learning. Suppose that we are interested in producing bounds for some function \(h(\beta)\). For example, \(h\) may be the value of a single coordinate of \(\beta\) if we are interested in bounding a specific model coefficient that will be interpreted, or \(h\) may be the treatment effect functional described above. This results in the optimization problem
\[\min_{\theta}h(\beta(\theta)) \min_{\theta}h(\hat{\beta}_{N}(\theta))\] \[\beta(\theta)=\arg\min_{\beta}\mathbb{E}\left[\theta(X)m(X,\beta)\right] \hat{\beta}_{N}(\theta)=\arg\min_{\beta}\frac{1}{N}\sum_{i=1}^{N} \theta(X_{i})m(X_{i},\beta)\] \[|\mathbb{E}_{X\sim\mathbb{P}}[\theta(X)g_{j}(X)]-c_{j}|\leq \epsilon\quad j=1...m \left|\frac{1}{N}\sum_{i=1}^{N}\theta(X_{i})g_{j}(X_{i})-c_{j} \right|\leq\epsilon^{\prime}\quad j=1...m.\]
Since this problem is in general non-convex in \(\theta\), we have imposed a somewhat more careful handling of the constraint functions. In particular, in the non-convex case it is necessary to ensure that the constraints hold with high probability [28] by including an additional "margin". We allow an error
of a user-defined parameter \(\epsilon\) in the population problem. Since this is a relaxation of the original constraint, it still results in valid bounds. Then, we solve the sample problem using a smaller value \(\epsilon^{\prime}\), and for sufficiently large \(N\), standard arguments (e.g., Hoeffding's inequality) suffice to show that the sample optimizer will be population-feasible with high probability. Here, \(\epsilon\) and \(\epsilon^{\prime}\) are user-selected quantities which control together the amount of relaxation introduced into the bounds and the sample size needed for the theory to hold (a larger gap between \(\epsilon\) and \(\epsilon^{\prime}\) means that the sample optimizer is population-feasible with high probability for a smaller \(N\)). However, empirically we find that the estimated values converge even if both are set to \(0\). Regardless, ensuring constraints hold with high probability allows us to reason only about the variance induced in the objective function. In particular, we prove the following asymptotic normality result:
**Proposition 3**.: _Assume that \(m\) is twice differentiable for any \(X\) and \(\beta\) and that \(\nabla_{\beta}\mathbb{E}_{X\sim\mathbb{P}}[\theta(X)m(X,\beta)]\) is nonsingular for any \(\theta\) and \(\beta\). Let \(\Sigma_{\theta}\) denote the covariance matrix of \(\hat{\beta}_{N}(\theta_{0})\). Then, \(\sqrt{N}(\hat{\nu}_{N}-\nu)\rightarrow\mathcal{N}(0,\nabla_{\beta}h(\beta( \theta_{0}))\Sigma_{\theta_{0}}\nabla_{\beta}h(\beta(\theta_{0}))^{\top})\)_
The main idea behind the proof (see Appendix) is to leverage the well-known asymptotic normality of \(m\)-estimators to obtain normality of the value of the optimization problem. This requires us to show that \(h(\beta(\theta))\) considered as a function of \(\theta\) converges in distribution to a Gaussian process, i.e., that these values are jointly (and not just marginally) normal. Having shown this, we obtain that the sample value of the optimization problem has variance equal to that of \(h(\beta(\theta_{0}))\), i.e., the variance of the objective function at the population optimizer. This is a desirable property because we do not require uniform bounds on the variance of the objective function at all values of \(\theta\); only the variance at the population-optimal \(\theta_{0}\) is relevant asymptotically. We obtain this property as a direct result of analyzing the entire joint distribution of the estimand instead of using standard concentration inequalities to establish uniform bounds over all \(\theta\). The variance at \(\theta_{0}\) depends on two quantities. First, \(\nabla_{\beta}h(\beta(\theta_{0}))\) represents how smooth \(h\) is, and will be well controlled if, e.g., \(h\) is Lipschitz. Second, \(\Sigma_{\theta_{0}}\) is the covariance matrix inherited from the underlying \(m\)-estimator of \(\beta(\theta_{0})\).
**Computational approach:** In practice, we can solve the associated optimization problem by projected gradient descent, iterating between gradient descent steps with respect to \(\nabla_{\theta}h(\beta(\theta))\) and projecting \(\theta\) back to the feasible set. This is easily accomplished in autodifferentiation frameworks where we can use differentiable optimization [29, 30] or meta-learning style methods [31, 32, 33] to implement the \(\arg\min\) defining \(\beta\) in a manner which supports automatic backpropagation. Similarly, as long as \(h\) can be implemented in the same framework, we obtain gradients \(\nabla_{\theta}h(\beta(\theta))\) automatically. In general, we can only obtain locally (instead of globally) optimal solutions for non-convex problems. However, we observe experimentally (see Fig. 2 below) that the values obtained are nearly identical across many random restarts of the optimization, suggesting good empirical performance.
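A simplified sketch of this procedure in PyTorch (illustrative, not the exact experimental code) for the case where \(\beta(\theta)\) is a weighted least-squares fit and \(h\) extracts a single coordinate. Rather than an exact projection, it clamps \(\theta\) at its lower bound and adds a quadratic penalty for the moment constraints, one of several reasonable ways to approximate the projection step; all inputs are assumed to be torch tensors and the hyperparameters are placeholders.

```python
import torch

def weighted_ls_coef(theta, X, Y, coord=0):
    """beta(theta) for the weighted least-squares m-estimator; returns one coordinate."""
    XtW = X.T * theta                                  # X^T diag(theta)
    beta = torch.linalg.solve(XtW @ X, XtW @ Y)        # (X^T W X)^{-1} X^T W Y
    return beta[coord]

def lower_bound_coefficient(X, Y, g_vals, c, theta_lb=0.0, lam=100.0,
                            steps=2000, lr=1e-2, coord=0):
    N = X.shape[0]
    theta = torch.ones(N, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        obj = weighted_ls_coef(theta, X, Y, coord)
        # Quadratic penalty pushing (1/N) sum_i theta_i g_j(X_i) towards c_j.
        moments = (g_vals * theta[:, None]).mean(dim=0) - c
        loss = obj + lam * (moments ** 2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            theta.clamp_(min=theta_lb)                 # enforce theta >= theta_lb
    return weighted_ls_coef(theta.detach(), X, Y, coord).item()
```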
## 4 Experiments
We conduct experiments to show how our method allows users to specify domain knowledge and obtain informative bounds on the estimand of interest. We simulate inference for a range of different \(f(\mathbb{P})\) from various scenarios. In each experiment we start with samples from a ground truth distribution
and simulate the observed distribution \(\mathbb{Q}\) using sampling probabilities which depend on the covariates (i.e., simulating selection bias). This ensures that the ground-truth value \(f(\mathbb{P})\) is known (or can be estimated with very high precision), allowing us to verify if our bounds contain the true value. We conduct both synthetic and semi-synthetic experiments where \(\mathbb{P}\) is simulated or consists of samples from a real dataset (specifically, US Census data), respectively. We consider two classes of estimands. First, estimating the conditional mean \(\mathbb{E}_{\mathbb{P}}[Y|A=1]\) for an outcome variable \(Y\) and conditioning covariate \(A\). Second, estimating the coefficient of a regression model, providing an example of the \(m\)-estimation setting from above. In the main text, we show examples where the estimand is a given coefficient of a linear regression model (implemented by differentiating through a least-squares solution), and in the Appendix we show results for bounding the coefficient of a logistic regression (implemented via differentiating through an iteratively reweighted least squares solver [31]). All experiments were implemented using PyTorch.
We show how the amount of domain knowledge specified can result in tighter bounds along two axes. First, we vary the parametric form for \(\theta\), ranging from an arbitrary function of \(X\) to ones which impose specific assumptions (e.g., separability across specific groups of covariates, or that the covariates which the selection probabilities depend on are known). Second, we vary the number of constraints \(\{g_{j}\}\) used to form the set \(\Theta\), modeling the ability of users to impose an increasing degree of constraints on possible distribution shifts. We now summarize how these experiments were instantiated in each setting, with details in the Appendix.
**Synthetic data experiments.** For the first set of experiments, we simulate a distribution \(X=(Y,Y_{2},A,X_{1},X_{2})\sim\mathbb{P}\) used to evaluate previous causal inference methods [34]. Details of the generative process are given in the appendix. We add a selection bias scenario to this process by simulating an indicator variable \(R\sim Ber(logit^{-1}(X_{1}-X_{2}))\). The observed distribution \(\mathbb{Q}\) consists of those samples for which \(R=1\). In this domain, we consider the task of estimating bounds for \(\mathbb{E}_{\mathbb{P}}[Y|A=1]\). A total of six experiments were run. In the first three experiments, we vary the functional form for \(\theta\) while leaving the constraints defining \(\Theta\) fixed. In the first experiment \(\theta\) is an arbitrary function of \(X\). In the second, we use a parametric representation \(\phi\) to specify \(\theta(X):=\theta_{1}(A)+\theta_{2}(X_{1},X_{2})\) where \(\theta_{1}\) is an arbitrary function of \(A\) and \(\theta_{2}\) is an arbitrary function of \(X_{1}\) and \(X_{2}\). Finally, in the third experiment we fix \(\theta(X)=\theta(X_{1},X_{2})\), i.e., \(\theta\) is a function only of the variables which determine \(R\) (simulating a scenario where the variables driving selection bias are known to the user, even if the exact selection probabilities are not). For experiments 4-6, we vary \(\Theta\) instead by adding one more constraint in each successive experiment (see Appendix for details). Throughout each of these experiments varying \(\Theta\), \(\theta(X)\) is an arbitrary function.
**Semi-synthetic data experiments.** Here, we use the Folktables dataset [35] which provides an interface for data from the US Census (or more precisely, the American Community Survey) from 2014 to 2019. Based on the ACSEmployment task suggested in [35], we consider a binary outcome variable \(Y\) indicating whether or not a person was employed at the time of the survey. Selection probabilities for \(\mathbb{Q}\) can be found in the Appendix.
We first consider estimating \(\mathbb{E}_{\mathbb{P}}[Y|A=1]\), as in the synthetic data setting, where here \(A\) indicates whether the person identifies as white or not. The experiments were constructed analogously to the simulated data setting, by first varying the functional form for \(\theta\) and then the set of constraints
imposed. We then consider estimating the coefficient of the indicator variable \(A=1\) in a linear regression model of \(Y\) on 15 covariates (see Appendix). This second setting is an example of the \(m\)-estimator framework from above. A total of four experiments were run. In the first two experiments the function \(\theta(X)\) changes while leaving the size of the set \(\Theta\) fixed. In the first experiment, analogous to the previous setting, \(\theta(X)\) is an arbitrary function of all the covariates. In the second, \(\theta(X)=\theta(A,X_{1},X_{2})\) is an arbitrary function of \(X_{1}\), \(X_{2}\) and \(A\) (focusing on variables which are involved in selection bias). For experiments 3 and 4, we fix \(\theta(X)\) as in experiment 2. In experiment 3, \(\Theta\) has one fewer restriction than in experiment 4. Details of each experiment and results for an analogous logistic regression model are shown in the Appendix.
**Results.** Figures 1 and 2 show our main results. Each figure plots the bounds output by our method for each experiment described above. In each plot, we also show the true value of the estimand \(f(\mathbb{P})\), as well as the naive estimate using samples from \(\mathbb{Q}\) (without any attempt to account for selection bias). The results of our experiments are consistent across all three scenarios, showing how the estimated bounds vary depending on the constraints imposed and on the functional form assumed for \(\theta\). Concordant with the theory, the bounds always contain the true value of the estimand. Additionally, as expected, when more external information is available the bounds become narrower. This holds even when the naive estimate is far from the truth, i.e., when sampling bias is relatively severe. In fact, we observe that even when our method is applied with only relatively minimalistic specifications (one constraint, no functional form restrictions on \(\theta\)), the resulting bounds sometimes exclude the naive estimate entirely. When additional constraints are added, it is possible to obtain narrow bounds for the estimand even under distribution shifts which are otherwise arbitrary. These results demonstrate that our framework allows users to translate common forms of domain knowledge into robust and informative statistical inferences.
We further investigate the computational properties of our estimators, exploring their stability and convergence. In each plot in Figures 1 and 2, the orange endpoints for each bound are a box plot showing the estimate of the bound across 10 random restarts of the projected gradient descent algorithm. Notably, we observe that the estimates obtained across different initializations are nearly identical, providing evidence that they can be computed effectively even in the non-convex case induced by regression models. The right plot in Figure 2 presents an illustrative example demonstrating the convergence of different initializations throughout the iterations of gradient descent. This visual representation highlights the reliable convergence behavior of our approach. Overall, these findings reinforce the computational feasibility of our estimators, lending support to their practical implementation and robustness in capturing the underlying estimands.
## 5 Discussion
In this work we present a framework that allows users to leverage domain knowledge to produce trustworthy estimates under distribution shift. In particular, we study a setting in which no samples from a target distribution are available, but we have access to a set of functions whose true expectation under the target distribution is known. We show how tight bounds on an estimand of interest under such distribution shifts correspond to the value of particular optimization problems
Figure 1: Confidence bounds for \(\mathbb{E}_{\mathbb{P}}[Y|A=1]\) produced by our method on synthetic and semi-synthetic data. The bounds are compared against the value of the naive estimator \(\mathbb{E}_{\widehat{\mathbb{Q}}_{N}}[Y|A=1]\) and the true value. The first two images are the results of experiments 1-3 (varying the definition of \(\theta(X)\)) for the synthetic and the semi-synthetic data, respectively. The second two images, are the results of experiments 4-6 (i.e. different sets of constraints \(\Theta\)) for the synthetic and the semi-synthetic data, respectively. In both cases, incorporating more external information leads to narrower bounds.
Figure 2: Estimated value of \(\beta(\theta)\) with our proposed method for the semi-synthetic dataset. The first two images are results for different parameterizations of \(\theta(X)\) and results for different sets of constraints \(\Theta\) respectively. In both cases, incorporating more external information leads to narrower bounds. The third image is the estimated bounds for \(\mathbb{E}_{\mathbb{Q}}[Y\theta(X)|A=1]\) through the optimization process for 10 different runs over random restarts. It can be seen that our method is robust to the randomness of different initialization points.
and give theoretical guarantees for the statistical and computational properties of our proposed estimator of these bounds. Experimentally, we show that our method produces valid bounds (i.e., always containing the true value) which can be highly informative in policy-relevant scenarios (e.g., using census data to constrain a set of possible distribution shifts).
**Limitations and broader impacts.** We anticipate that our methods will have a positive social impact via increasing the robustness of statistical and machine learning methods to distribution shifts such as selection bias, which are particularly consequential for studying the equity impacts of policies and systems (as marginalized populations are typically less likely to be represented in the dataset). However, our methods do come with important limitations, in that the bounds will only be as good as the domain knowledge used to specify the constraints. In particular, it is up to the user to specify the correct target distribution, select informative constraint functions, and ensure that these functions can be accurately estimated on the target distribution (which may be difficult if even the nominal target distribution is subject to measurement errors or other complications). Negative consequences are possible if incorrect domain knowledge is used as the input, resulting in an incorrect inference which is improperly certified as "robust". In short, the framework we propose provides a means to translate domain expertise about potential mechanisms for data biases into rigorous inferences. Our hope is that it will serve to enable collaborations between domain experts and quantitative researchers towards this goal.
## 6 Acknowledgments and Disclosure of Funding
This work was supported in part by the AI2050 program at Schmidt Futures (Grant G-22-64474).
## Appendix A Appendix
### Proof of Proposition 3
We consider inference for a parameter \(\beta\in R^{\ell}\). As noted in the text, the standard Hoeffding bound combined with a union bound shows that provided \(N>O\left(\left(\frac{\max_{\theta,i,j}\theta(X_{i})g_{j}(X_{i})}{\epsilon-\epsilon^{\prime}}\right)^{2}\log\frac{m}{\delta}\right)\), we have that the sample optimizers are population-feasible with probability at least \(1-\delta\). Accordingly, we set \(\delta=\frac{1}{N}\) so that this event occurs with probability \(1-o(1)\) and treat the constraints as fixed (deterministically satisfied) in what follows. Define \(\beta(\theta)=\arg\min_{\beta}\mathbb{E}[\theta(X)m(X,\beta)]\) to be the functional expressing the solution to the inner estimation problem as a function of \(\theta\). Define \(\hat{\beta}_{N}(\theta)\) to be the finite-sample estimate \(\arg\min_{\beta}\frac{1}{N}\sum_{i=1}^{N}\theta(X_{i})m(X_{i},\beta)\). In order to apply standard asymptotics for sample average approximations, we require that \(\sqrt{N}(\hat{\beta}_{N}(\theta)-\beta(\theta))\) converges in distribution to a Gaussian process. Asymptotic normality of \(m\)-estimators implies that this holds under appropriate regularity conditions for any fixed \(\theta\); however, we require control over the joint distribution for all \(\theta\).
To show convergence to a Gaussian process, consider any finite set of \(\theta\), \(\{\theta_{k}\}_{k=1}^{K}\). We can rewrite \(\beta(\theta_{1}),\ldots,\beta(\theta_{K})\) as the solution to a single \(m\)-estimation problem as follows. Define \(\beta=[\beta_{1}...\beta_{K}]\) and \(m_{k}(X,\beta)=\theta_{k}(X)m(X,\beta_{k})\). Finally, let \(\mathbf{m}=\sum_{k=1}^{K}m_{k}\). Then, \(\beta_{0}=[\beta(\theta_{1})...\beta(\theta_{K})]\) is the
unique solution to the \(m\)-estimation problem
\[\arg\min_{\beta}\mathbb{E}_{X\sim P}[\mathbf{m}(X,\beta)],\]
because the objective \(\mathbf{m}\) is separable with respect to the \(\beta\) variables. Let \(\hat{\beta}_{N}\) denote the solution to the empirical problem \(\arg\min_{\beta}\mathbb{E}_{X\sim\hat{P}_{N}}[\mathbf{m}(X,\beta)]\), and note that \(\hat{\beta}_{N}=[\hat{\beta}_{N}(\theta_{1})...\hat{\beta}_{N}(\theta_{K})]\) by the same logic. We will show using standard asymptotics that \(\hat{\beta}_{N}\) (and hence \(\hat{\beta}_{N}(\theta_{1})...\hat{\beta}_{N}(\theta_{K})\)) are asymptotically jointly normal. Formally, in order to apply standard results we must show that some regularity conditions are met. We require that (i) \(\nabla^{2}_{\beta}\mathbb{E}_{X\sim P}[\mathbf{m}(X,\beta)]\) is nonsingular for all \(\beta\) (this requirement can be weakened somewhat, but we use this formulation for simplicity) and (ii) that \(\sqrt{N}\left[\nabla_{\beta}\frac{1}{N}\sum_{i=1}^{N}\mathbf{m}(X_{i},\beta_{0})\right]\rightarrow\mathcal{N}(0,\Sigma)\) for some covariance matrix \(\Sigma\). Starting with condition (i), define \(H_{\theta_{i}}=\nabla^{2}_{\beta_{i}}\mathbb{E}_{X\sim P}[m_{i}(X,\beta_{i})]\) to be the Hessian matrix of the \(i\)th \(m\)-function at the population optimum. Then, because the optimization problem is separable, \(H(\beta)\) has the following block diagonal structure
\[H(\beta)=\begin{bmatrix}H_{\theta_{1}}&0&\dots&0\\ 0&H_{\theta_{2}}&...&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&...&H_{\theta_{K}}\end{bmatrix}.\]
Via the assumption that each individual \(H_{\theta_{k}}\) is nonsingular, we obtain that \(H(\beta)\) is nonsingular as well. For condition (ii), convergence to a normal distribution follows from the standard central limit theorem since the samples in \(\hat{P}_{N}\) are iid. To precisely describe the structure of \(\Sigma\), we use \(\Sigma(\nabla_{\beta_{k}},\nabla_{\beta_{\ell}})\) to denote the \(\dim(\beta)\times\dim(\beta)\) covariance matrix between \(\nabla_{\beta_{k}}\frac{1}{N}\sum_{i=1}^{N}m_{k}(X_{i},\beta(\theta_{k}))\) and \(\nabla_{\beta_{\ell}}\frac{1}{N}\sum_{i=1}^{N}m_{\ell}(X_{i},\beta(\theta_{\ell}))\). The overall covariance matrix \(\Sigma\) has the block structure
\[\Sigma=\begin{bmatrix}\Sigma(\nabla_{\beta_{1}},\nabla_{\beta_{1}})&\Sigma( \nabla_{\beta_{1}},\nabla_{\beta_{2}})&\dots&\Sigma(\nabla_{\beta_{1}}, \nabla_{\beta_{K}})\\ \Sigma(\nabla_{\beta_{2}},\nabla_{\beta_{1}})&\Sigma(\nabla_{\beta_{2}}, \nabla_{\beta_{2}})&...&\Sigma(\nabla_{\beta_{2}},\nabla_{\beta_{K}})\\ \vdots&\vdots&\ddots&\vdots\\ \Sigma(\nabla_{\beta_{K}},\nabla_{\beta_{1}})&\Sigma(\nabla_{\beta_{K}}, \nabla_{\beta_{2}})&...&\Sigma(\nabla_{\beta_{K}},\nabla_{\beta_{K}})\end{bmatrix}.\]
Applying Theorem 3.1 of Newey and McFadden [36], we have that \(\sqrt{N}(\hat{\beta}_{N}-\beta_{0})\rightarrow\mathcal{N}(0,H(\beta_{0})^{-1} \Sigma H(\beta_{0})^{-1})\).
Next, we apply the multivariate delta method to handle the function \(h\). We consider the joint distribution of \(\mathbf{h}(\hat{\beta}_{N})\triangleq[h(\hat{\beta}_{N}(\theta_{1}))\ldots h(\hat{\beta}_{N}(\theta_{K}))]\). In particular, note that for any \(\beta\), \(\nabla_{\beta}\mathbf{h}(\beta)\) has the block structure
\[\nabla_{\beta}\mathbf{h}(\beta)=\begin{bmatrix}\nabla_{\beta_{1}}h(\beta_{1})&0& \dots&0\\ 0&\nabla_{\beta_{2}}h(\beta_{2})&...&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&...&\nabla_{\beta_{K}}h(\beta_{K})\end{bmatrix}\]
where each individual \(\nabla_{\beta_{i}}h(\beta_{i})\in R^{\ell}\) and \(\nabla_{\beta}\mathbf{h}(\beta)\in R^{\ell k\times\ell}\). This block structure is again due to
separability of \(\mathbf{h}(\beta)\) as we have defined it. As \(\hat{\beta}_{N}\) converges in distribution to a Gaussian, we have via the multivariate delta method that
\[\sqrt{N}\left(\mathbf{h}(\hat{\beta}_{N})-\mathbf{h}(\beta_{0})\right)\to\mathcal{N}\left(0,\nabla_{\beta}\mathbf{h}(\beta_{0})H(\beta_{0})^{-1}\Sigma H(\beta_{0})^{-1}\nabla_{\beta}\mathbf{h}(\beta_{0})^{\top}\right)\]
Since \(\{\theta_{k}\}_{k=1}^{K}\) were arbitrary, this is equivalent to convergence of \(h(\beta(\theta))\) to a Gaussian process. Accordingly, using Theorem 3.2 of Shapiro (1991) [37], we have that
\[\sqrt{N}(\hat{\nu}_{N}-\nu_{0})\to\mathcal{N}\left(0,\mathrm{Var}\left(h(\hat{\beta}_{N}(\theta_{0}))\right)\right).\]
To calculate \(\mathrm{Var}(h(\hat{\beta}_{N}(\theta_{0})))\), we consider any collection of \(\{\theta_{k}\}\) containing \(\theta_{0}\) and study the corresponding covariance matrix for \(\mathbf{h}(\hat{\beta}_{N})\) given above. Let \(\theta_{0}\) be indexed by \(i\) in the set of \(\{\theta_{k}\}\). The variance of \(h(\hat{\beta}_{N}(\theta_{0}))\) is given by the \(i\)th element of the diagonal of the full covariance matrix,
\[\left[\nabla_{\beta}\mathbf{h}(\beta_{0})H(\beta_{0})^{-1}\Sigma H(\beta_{0})^{-1}\nabla_{\beta}\mathbf{h}(\beta_{0})^{\top}\right]_{ii}.\]
This structure simplifies considerably because \(\nabla_{\beta}\mathbf{h}(\beta_{0})\) and \(H(\beta_{0})^{-1}\) are both block-diagonal matrices. Accordingly, the \(i\)th diagonal element is just
\[\nabla_{\beta}h(\beta_{0})H_{\theta_{i}}^{-1}\Sigma(\nabla_{\beta_{i}},\nabla_{\beta_{i}})H_{\theta_{i}}^{-1}\nabla_{\beta}h(\beta_{0})^{\top}\]
which we recognize as the sampling variance of \(h(\hat{\beta}_{N}(\theta_{0}))\) calculated marginally, where \(H_{\theta_{i}}^{-1}\Sigma(\nabla_{\beta_{i}},\nabla_{\beta_{i}})H_{\theta_{i}} ^{-1}\) is the covariance matrix of the \(m\)-estimator \(\hat{\beta}_{N}(\theta_{0})\). A consistent estimator of the variance can then be obtained using the standard methods for \(m\)-estimators, where we substitute our estimated \(\hat{\theta}_{N}\) for \(\theta_{0}\) in the above.
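To make this final variance computation concrete, the following is a minimal plug-in sketch (not the authors' implementation) of the sandwich variance of \(h(\hat{\beta}_{N}(\hat{\theta}_{N}))\), assuming the user supplies per-sample gradients and Hessians of the \(\hat{\theta}_{N}\)-weighted loss evaluated at \(\hat{\beta}_{N}\); the function name and array layout are illustrative.

```python
import numpy as np

def sandwich_variance(grads, hessians, grad_h):
    """Plug-in sandwich variance of h(beta_hat) for a weighted m-estimator.

    grads:    (N, d) per-sample gradients of theta_hat(X_i) * m(X_i, beta) at beta_hat
    hessians: (N, d, d) per-sample Hessians of the same weighted loss at beta_hat
    grad_h:   (d,) gradient of h evaluated at beta_hat
    """
    N = grads.shape[0]
    H = hessians.mean(axis=0)                  # estimate of the population Hessian H
    S = grads.T @ grads / N                    # estimate of the score covariance Sigma
    H_inv = np.linalg.inv(H)
    cov_beta = H_inv @ S @ H_inv / N           # estimated Var(beta_hat)
    return float(grad_h @ cov_beta @ grad_h)   # delta method applied to h
```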
### Synthetic data experiments details
Inspired by data models used in the causal inference literature [34], the distribution \(X=(Y,Y_{2},A,X_{1},X_{2})\sim\mathbb{P}\) is given by the following model:
\[X_{1} \sim Multinomial(3,0.5,0.3)\] \[X_{2} \sim Ber(0.4)\] \[A \sim Ber(logit^{-1}(X_{2}-X_{1}))\] \[Y \sim Ber(logit^{-1}(2A-X_{1}+X_{2}))\] \[Y_{2} \sim Ber(logit^{-1}((X_{1}+X_{2})/2-A))\]
The observed distribution \(\mathbb{Q}\) is given by simulating selection bias via an indicator variable:
\[R\sim Ber(logit^{-1}(X_{1}-X_{2}))\]
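As a concrete illustration, a minimal NumPy sketch of this generative process and the selection step is given below. The reading of \(Multinomial(3,0.5,0.3)\) as a three-level variable with probabilities \((0.5,0.3,0.2)\) is our assumption, and the sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
inv_logit = lambda z: 1.0 / (1.0 + np.exp(-z))

def simulate(n=10_000):
    # Assumed reading: X1 takes values {0, 1, 2} with probabilities (0.5, 0.3, 0.2).
    X1 = rng.choice([0, 1, 2], size=n, p=[0.5, 0.3, 0.2])
    X2 = rng.binomial(1, 0.4, size=n)
    A = rng.binomial(1, inv_logit(X2 - X1))
    Y = rng.binomial(1, inv_logit(2 * A - X1 + X2))
    Y2 = rng.binomial(1, inv_logit((X1 + X2) / 2 - A))
    R = rng.binomial(1, inv_logit(X1 - X2))            # selection indicator
    P_sample = np.column_stack([Y, Y2, A, X1, X2])
    return P_sample, P_sample[R == 1]                  # draws from P and from Q (R = 1)

full, observed = simulate()
```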
Naturally, the samples from \(\mathbb{Q}\) are those for which \(R=1\). The set \(\Theta\) used in the experiments was:
* **Experiment 1**, **Experiment 2** and **Experiment 3**: \(S:=\{\mathbb{E}_{X\sim\mathbb{P}}[YA]=c_{1},\mathbb{E}_{X\sim\mathbb{P}}[A(1-Y)]= c_{2},\mathbb{E}_{X\sim\mathbb{P}}[(1-A)Y]=c_{3},\mathbb{E}_{X\sim\mathbb{P}}[(1-A)(1-Y)]= c_{4}\}\).
* **Experiment 4**, \(\Theta\) was \(S\setminus\{\mathbb{E}_{X\sim\mathbb{P}}[YA]=c_{1},\mathbb{E}_{X\sim\mathbb{P}}[A(1-Y)]=c_{2}\}\).
* **Experiment 5**\(\Theta\) was \(S\).
* **Experiment 6**\(\Theta\) was \(S\cup\{\mathbb{E}_{X\sim\mathbb{P}}[Y_{2}A]=c_{5},\mathbb{E}_{X\sim\mathbb{P}} [Y_{2}(1-A)]=c_{6}\}\).
Each experiment was run 10 times. The data reported is the average value and standard deviation for the 10 outputs for both bounds in each experiment. The results can be found in table 1.
### Semi-synthetic data
For the semi-synthetic experiments we used the Folktables package [35], which provides an interface for curated US Census data. We use the ACSEmployment task, where \(Y\) is whether or not a person is employed and the variable of interest \(A\) is self-identifying as white. The rest of the covariates come from a one-hot encoding of the features listed below. The last level of each variable is dropped to avoid an unidentifiable model, thus obtaining a 15-dimensional representation for every entry in the dataset (a minimal data-loading sketch follows the feature list below).
1. _Military service_: 0: "N/A (less than 17 years old)", 1: "Now on active duty", 4: "Never served in the military", 2: "On active duty in the past, but not now", 3: "Only on active duty for training in Reserves/National Guard".
2. _Ancestry_: 4: "Not reported", 2: "Multiple", 3: "Unclassified", 1: "Single".
3. _Nativity_: 1: "Native", 2: "Foreign born".
4. _Hearing difficulty_: 2: "No", 1: "Yes".
5. _Vision difficulty_: 1: "Yes", 2: "No".
6. _Cognitive difficulty_: 1: "Yes", 2: "No".
7. _Sex_: 1: "Male", 2: "Female".
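As referenced above, the following sketch loads and encodes this task with the standard Folktables interface; the survey year, state, and the use of `drop_first` (the paper drops the last level of each variable instead) are placeholder choices, not the exact pipeline used in the experiments.

```python
import pandas as pd
from folktables import ACSDataSource, ACSEmployment

# Placeholder year/state; the paper pools ACS data from 2014 to 2019.
source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs = source.get_data(states=["CA"], download=True)
X_df, y_df, _ = ACSEmployment.df_to_pandas(acs)

# One-hot encode the categorical covariates, dropping one level per variable
# so the design matrix remains identifiable.
X = pd.get_dummies(X_df.astype("category"), drop_first=True)
y = y_df.to_numpy().ravel()
```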
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Average value} & \multicolumn{2}{c}{Standard deviation} \\ \cline{2-5} & Lower bound & Upper bound & Lower bound & Upper bound \\ \hline Experiment 1 & 0.4686 & 0.8549 & 0.0 & 0.0038 \\ Experiment 2 & 0.6102 & 0.8125 & 5.96e-08 & 5.33e-08 \\ Experiment 3 & 0.6785 & 0.7972 & 0.0 & 0.0 \\ Experiment 4 & 0.3606 & 0.9068 & 0.0 & 0.0035 \\ Experiment 5 & 0.4135 & 0.8503 & 0.0 & 0.0040 \\ Experiment 6 & 0.7172 & 0.7459 & 0.0023 & 0.0007 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Synthetic data experiment results. True conditional mean: 0.726.
The observed distribution \(\mathbb{Q}\) is given by simulating selection bias via an indicator variable:
\[R\sim Ber(logit^{-1}(X_{1}-X_{2}))\]
Again, the samples from \(\mathbb{Q}\) are those for which \(R=1\).
The set \(\Theta\) used in the first setting of experiments (estimating \(\mathbb{E}_{\mathbb{P}}[Y|A=1]\)) is:
* **Experiment 1**, **Experiment 2** and **Experiment 3**: \(S:=\{\mathbb{E}_{X\sim\mathbb{P}}[YA]=c_{1},\mathbb{E}_{X\sim\mathbb{P}}[A(1-Y )]=c_{2},\mathbb{E}_{X\sim\mathbb{P}}[(1-A)Y]=c_{3},\mathbb{E}_{X\sim\mathbb{P} }[(1-A)(1-Y)]=c_{4}\}\).
* **Experiment 4**, \(\Theta\) was \(S\setminus\{\mathbb{E}_{X\sim\mathbb{P}}[YA]=c_{1},\mathbb{E}_{X\sim\mathbb{P} }[A(1-Y)]=c_{2}\}\)
* **Experiment 5**\(\Theta\) was \(S\)
* **Experiment 6**\(\Theta\) was \(S\cup\{\mathbb{E}_{X\sim\mathbb{P}}[Y_{2}A]=c_{5},\mathbb{E}_{X\sim\mathbb{P} }[Y_{2}(1-A)]=c_{6}\}\),
where \(Y_{2}\) was defined as the _Hearing difficulty_ variable. Each experiment was run 10 times and the results are in table 3.
The set \(\Theta\) used in the second setting of experiments (estimating the coefficient \(\beta\) of the indicator variable \(A\) in a linear regression model) is:
* **Experiment 1** and **Experiment 2**: \(S:=\{\mathbb{E}_{X\sim\mathbb{P}}[YA]=c_{1},\mathbb{E}_{X\sim\mathbb{P}}[A(1-Y )]=c_{2},\mathbb{E}_{X\sim\mathbb{P}}[(1-A)Y]=c_{3},\mathbb{E}_{X\sim\mathbb{P} }[(1-A)(1-Y)]=c_{4}\}\).
* **Experiment 3**, \(\Theta\) was \(S\setminus\{\mathbb{E}_{X\sim\mathbb{P}}[YA]=c_{1},\mathbb{E}_{X\sim\mathbb{P}}[A(1-Y)]=c_{2}\}\).
* **Experiment 4**\(\Theta\) was \(S\).
Each experiment was run 10 times. The data reported is the average value and standard deviation for the 10 outputs for both bounds in each experiment. The results are presented in table 2.
### Logistic regression model
We run one additional experiment to estimate the coefficient \(\beta\) of the indicator variable \(A\). However, instead of being the coefficient of a linear model, it is now the coefficient of a logistic regression. Only
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Average Value} & \multicolumn{2}{c}{Standard deviation} \\ \cline{2-5} & Lower bound & Upper bound & Lower bound & Upper bound \\ \hline Experiment 1 & 0.2703 & 0.4822 & 8.12e-05 & 0.0002 \\ Experiment 2 & 0.3638 & 0.3811 & 2.99e-05 & 2.23e-05 \\ Experiment 3 & 0.3743 & 0.3796 & 5.74e-05 & 3.21e-05 \\ Experiment 4 & 0.3131 & 0.4351 & 9.94e-06 & 0.0002 \\ Experiment 5 & 0.3756 & 0.3887 & 2.82e-05 & 8.51e-05 \\ Experiment 6 & 0.3730 & 0.3857 & 5.13e-05 & 1.94e-05 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Semi-synthetic data experiment results for estimating \(\mathbb{E}_{\mathbb{P}}[Y|A=1]\). True conditional mean: 0.3758.
one experimental configuration was considered; it was run 10 times. The values reported are the average and standard deviation of the 10 outputs. The results are in figure 3 and table 4.
|
2307.14557 | Accelerating Polynomial Modular Multiplication with Crossbar-Based
Compute-in-Memory | Lattice-based cryptographic algorithms built on ring learning with error
theory are gaining importance due to their potential for providing post-quantum
security. However, these algorithms involve complex polynomial operations, such
as polynomial modular multiplication (PMM), which is the most time-consuming
part of these algorithms. Accelerating PMM is crucial to make lattice-based
cryptographic algorithms widely adopted by more applications. This work
introduces a novel high-throughput and compact PMM accelerator, X-Poly, based
on the crossbar (XB)-type compute-in-memory (CIM). We identify the most
appropriate PMM algorithm for XB-CIM. We then propose a novel bit-mapping
technique to reduce the area and energy of the XB-CIM fabric, and conduct
processing engine (PE)-level optimization to increase memory utilization and
support different problem sizes with a fixed number of XB arrays. X-Poly design
achieves 3.1X10^6 PMM operations/s throughput and offers 200X latency
improvement compared to the CPU-based implementation. It also achieves 3.9X
throughput per area improvement compared with the state-of-the-art CIM
accelerators. | Mengyuan Li, Haoran Geng, Michael Niemier, Xiaobo Sharon Hu | 2023-07-27T00:33:23Z | http://arxiv.org/abs/2307.14557v1 | # Accelerating Polynomial Modular Multiplication
###### Abstract
Lattice-based cryptographic algorithms built on ring learning with error theory are gaining importance due to their potential for providing post-quantum security. However, these algorithms involve complex polynomial operations, such as polynomial modular multiplication (PMM), which is the most time-consuming part of these algorithms. Accelerating PMM is crucial to make lattice-based cryptographic algorithms widely adopted by more applications. This work introduces a novel high-throughput and compact PMM accelerator, _X-Poly_, based on the crossbar (XB)-type compute-in-memory (CIM). We identify the most appropriate PMM algorithm for XB-CIM. We then propose a novel bit-mapping technique to reduce the area and energy of the XB-CIM fabric, and conduct processing engine (PE)-level optimization to increase memory utilization and support different problem sizes with a fixed number of XB arrays. X-Poly design achieves \(3.1\times 10^{6}\) PMM operations/s throughput and offers 200\(\times\) latency improvement compared to the CPU-based implementation. It also achieves **3.9\(\times\)** throughput per area improvement compared with the state-of-the-art CIM accelerators.
## I Introduction
Post-quantum cryptography (PQC) represents a critical area of research in the field of cryptography, driven by the impending threat posed by quantum computing to current cryptographic systems [1]. Several families of cryptosystems are currently being considered as potential PQC candidates to address the challenges posed by quantum computing. Among these cryptosystems, lattice-based cryptography has attracted significant interest from the research community owing to its robust security guarantees and relatively low computational complexity [2, 3]. Lattice-based cryptographic algorithms rely on the mathematical concept of a lattice, which is an intricate structure formed by repeating patterns of points in a multi-dimensional space.
One of the fundamental building blocks of lattice-based cryptographic algorithms is polynomial operations, specifically, polynomial modular multiplication (PMM). PMM is a critical operation in ring learning with error (RLWE) theory, a key concept in lattice-based cryptographic algorithms. Moreover, PMM is the most time-consuming part of these algorithms. For example, recent studies show that PMM represents more than half of the computational workload for lattice-based homomorphic encryption (HE) on the cloud side [4], and more than 90% on the edge side [5]. Though algorithmic optimizations like Number-Theoretic Transform (NTT) [6] can decrease computation complexity, PMM latency is still high [4, 5, 7]. As such, accelerating PMM is essential to improve the efficiency and practicality of lattice-based cryptographic algorithms.
Currently, there have been significant efforts to accelerate PMM, particularly through the use of NTT. NTT-based solutions, including those implemented on application-specific integrated circuits (ASICs) [8, 9, 10], field-programmable gate arrays (FPGAs) [11], and compute-in-memory (CIM) architectures [12, 13, 14, 15], have demonstrated promising results in accelerating PMM. CIM-based PMM accelerators have gained attention for their effectiveness in reducing data transfer overheads by moving computation inside the memory [5, 14]. Work in [14] builds a Resistive RAM (ReRAM) based NTT accelerator that supports bit-wise computation inside the memory. Alternatively, [15] presents an in-SRAM NTT accelerator with bit-serial arithmetic operations. Crossbar arrays (XBAs) [16] are another popular CIM fabric that can support highly efficient vector-matrix multiplication (VMM) and are also actively being exploited for supporting high-throughput NTT-based PMM implementations [12, 17].
Existing research efforts to accelerate PMM using XBAs have primarily focused on using NTT-based approaches [12, 17]. Such solutions claim to achieve improvements of over 50\(\times\) compared to other CIM NTT accelerators. However, supporting PMM on XBAs comes with its own set of unique challenges. These challenges differ notably from those associated with the application of XBAs for the well-studied case of convolutional neural networks (CNNs). On the one hand, the high bitwidth and the large polynomial degree required for cryptographic applications result in a huge number of shift-add operations, which incur high area and energy overhead. On the other hand, it remains an open question whether NTT-based solutions are the most suitable for XBA-based CIM architectures. Existing NTT-based PMM implementations on XBAs suffer from high area costs and limited scalability. These challenges restrict existing XBA-based solutions from achieving high performance for lattice-based cryptographic algorithms. Therefore, exploring alternative approaches to accelerate PMM to overcome these limitations is crucial.
This paper proposes a novel XBA-based PMM accelerator, X-Poly. Our solution distinguishes itself from existing XBA-based CIM methods by focusing on the non-NTT-based PMM. Our specific contributions are as follows:
* We present observations revealing that NTT-based PMM may not be the most suitable choice for XBA-CIM. Our extensive studies show that the convolution 1D (Conv1D) solution holds potential advantages regarding area, latency, and noise over NTT when implementing PMM on XBAs.
* We propose a new XBA bit mapping technique for high-bitwidth, large polynomial degree data. The technique significantly reduces the overhead by removing most fine-grained shift-add operations.
* We optimize data mapping at the processing engine (PE) level to support different problem scales with a fixed number of XBAs while maximizing throughput.
Our proposed X-Poly offers significant improvements in throughput and area consumption, making it a competitive solution for accelerating PMM in lattice-based cryptographic algorithms. Specifically, X-Poly achieves 200\(\times\) latency improvement compared with a CPU implementation. It also leads to 3.9\(\times\) throughput per area improvements compared with the state-of-the-art (SOTA) CIM accelerators for PMM.
## II Background
In this section, we discuss the role of PMM in cryptography, describe various PMM methods, review existing strategies for accelerating PMM, and review the concept of XBA.
### _PMM in Cryptography_
The RLWE problem [18], foundational to lattice-based cryptography [19], and specifically to HE schemes [20], leverages polynomials over a specific ring for its operations. HE, which enables arbitrary computations on encrypted data without prior decryption, ensures secure computation in untrusted environments while preserving data privacy. The primary computational bottleneck in HE arises from the need to perform polynomial arithmetic, particularly PMM [21, 22, 23]. Consequently, enhancing PMM's performance with respect to latency and energy consumption becomes critical in cryptography.
### _Pmm_
Polynomial modular multiplication (PMM) is a fundamental operation in various applications, including cryptography, error correction codes, and polynomial arithmetic. It involves multiplying two polynomials and reducing the result modulo a given polynomial, resulting in a polynomial of a lower degree. By performing PMM, it becomes possible to efficiently compute large polynomial expressions while maintaining the desired modulus properties.
PMM can be accomplished using various methods, including the Conv1D approach and more optimized solutions like NTT, as shown in Fig. 1(a). The Conv1D approach for PMM follows a straightforward procedure (Fig. 1(a)(1)). Two polynomials \(A(x)\) and \(B(x)\), with polynomial degree \(n\) and coefficient modulus \(q\), are multiplied by multiplying coefficients and summing terms of matching degree, akin to a Conv1D computation with time complexity of \(O(n^{2})\). Then, the product undergoes modular reduction by dividing it by a modulus polynomial. The remainder is extracted via polynomial long division to get the final result \(P(x)\).
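To make the Conv1D approach concrete, the following is a minimal sketch (not the X-Poly hardware flow) of schoolbook polynomial multiplication followed by reduction, assuming the common RLWE modulus polynomial \(x^{n}+1\) and coefficient modulus \(q\); the toy values are our own.

```python
def pmm_conv1d(a, b, n, q):
    """Multiply a(x) * b(x) mod (x^n + 1, q); a and b are coefficient lists of length n."""
    full = [0] * (2 * n - 1)
    for i in range(n):                 # schoolbook O(n^2) convolution
        for j in range(n):
            full[i + j] = (full[i + j] + a[i] * b[j]) % q
    # Polynomial reduction: x^n = -1, so coefficient i+n folds (negated) into coefficient i.
    return [(full[i] - (full[i + n] if i + n < 2 * n - 1 else 0)) % q for i in range(n)]

print(pmm_conv1d([1, 2, 3, 4], [5, 6, 7, 0], n=4, q=17))   # -> [11, 5, 0, 1]
```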
NTT, alternatively, is proposed to reduce the computational complexity of PMM, particularly when the modulus polynomial satisfies specific properties, such as being irreducible and having a specific degree [6]. As depicted in Fig. 1(a)(2), the NTT approach involves transforming the polynomials into a different domain through NTT. During the NTT transformation, butterfly computations are performed by combining pairs of coefficients and multiplying them with twiddle factors, which are complex values associated with the modulus polynomial, resulting in the frequency-domain representation of the polynomial [6]. The process has a time complexity of \(O(n\log n)\). Then in this transformed domain, element-wise multiplication is performed, followed by the inverse NTT (INTT) to convert the result back to the original domain to obtain the final polynomial \(P(x)\). Modular reduction is applied after each domain transformation.
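For comparison, a sketch of the NTT-based flow (transform, pointwise multiply, inverse transform, reduce) is shown below. The transform is written as a naive \(O(m^{2})\) sum for clarity, whereas real implementations use \(O(m\log m)\) butterfly stages; the toy parameters \(n=4\), \(q=17\), \(w=9\) (where \(w\) has multiplicative order \(2n\) modulo \(q\)) are our own illustrative choices.

```python
def ntt(a, w, q):
    """Naive length-m number-theoretic transform with an m-th root of unity w mod q."""
    m = len(a)
    return [sum(a[j] * pow(w, j * k, q) for j in range(m)) % q for k in range(m)]

def intt(A, w, q):
    m = len(A)
    a = ntt(A, pow(w, -1, q), q)            # inverse transform uses w^{-1}
    m_inv = pow(m, -1, q)                   # scale by 1/m mod q
    return [(x * m_inv) % q for x in a]

def pmm_ntt(a, b, n, q, w):
    """a(x) * b(x) mod (x^n + 1, q) via a zero-padded length-2n NTT."""
    A = ntt(a + [0] * n, w, q)              # zero-pad so the cyclic product equals the full product
    B = ntt(b + [0] * n, w, q)
    C = intt([(x * y) % q for x, y in zip(A, B)], w, q)
    return [(C[i] - C[i + n]) % q for i in range(n)]   # fold using x^n = -1

print(pmm_ntt([1, 2, 3, 4], [5, 6, 7, 0], n=4, q=17, w=9))  # matches the Conv1D result
```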
The computational complexity of PMM in hardware is primarily influenced by two key factors: the polynomial degree \(n\), which represents the number of coefficients in a polynomial, and the bitwidth \(k\) of modulo \(q\), which signifies the size of these coefficients. In real-world applications, such as HE in privacy-preserving machine learning inference, these parameters can be quite substantial. For instance, the polynomial degree \(n\) in these applications can range from 256 to 8192, while the bitwidth \(k\) can vary from 16 bits to 64 bits [4, 24]. The magnitude of these degrees and bitwidths significantly intensifies the computational complexity of a single PMM, presenting a considerable challenge in the field.
Fig. 1: (a) PMM computation flow using two implementations: (1) Conv1D (2) NTT. (b) PMM operation mappings on XBA: Conv1D mapping; (c) NTT mapping.
### _Related Work_
In this section, we briefly review existing efforts to accelerate PMM. As existing work primarily employs NTT-based solutions, we focus our review accordingly, discussing both traditional ASIC and FPGA solutions, as well as CIM-based accelerators.
#### Ii-C1 ASIC and FPGA solutions
Nejatollahi, H., et al. [11] proposed an innovative FPGA solution by designing two high-throughput systolic array polynomial multipliers, one based on NTT and the other on convolution. Their sequential NTT-based multiplier yielded a 3\(\times\) speedup over the SOTA FPGA implementation of the polynomial multiplier in the NewHope-Simple key exchange mechanism on an Artix7 FPGA [25].
ASIC implementations of lattice-based cryptographic protocols have also been actively studied. LEIA [9], a high-performance lattice encryption instruction accelerator, and Sapphire [10], a configurable processor for low-power embedded devices, both demonstrate substantial performance improvements and energy efficiency compared to prior ASIC designs.
There are also a number of works that directly accelerate HE, inherently accelerating PMM [4, 5, 7, 26, 27]. These works, which also typically use NTT, aim to create large-scale accelerators for privacy-preserving computations. Since this paper focuses on PMM, we will not compare it to these works. It suffices to say that an efficient PMM accelerator will directly help HE implementations.
#### Ii-C2 Compute-in-Memory solutions
Previous research has introduced a variety of CIM kernels, including crossbars and general-purpose CIM. Ranjan et al. [28] have demonstrated that XBAs excel at performing VMM. Reis et al. [29] have discussed the general-purpose CIM enabling Boolean logic and arithmetic operations to be executed directly within the memory. Additionally, ongoing researches focus on exploring different underlying technologies for implementing these CIM kernels, including CMOS, ReRAM, and Ferroelectric FET (FeFET) [30]. These technologies are actively studied due to their potential to provide higher density and lower latency/energy overhead in CIM architecture. Several research efforts have explored the use of CIM architectures for the acceleration of the NTT, including CryptoPIM [14], MENTT [15], RMNTT [12] and BPNTT [13]. We compare X-Poly against these established researches, so we concisely introduce these approaches in the following discussion.
CryptoPIM, MENTT, and BPNTT proposed efficient NTT accelerators based on general-purpose CIM kernels. CryptoPIM [14] and MENTT [15], built on ReRAM and SRAM respectively, both introduced unique mapping strategies to streamline the data flow between NTT stages, leading to significant reductions in latency, energy, and area overheads. BPNTT presented an in-SRAM architecture using bit-parallel modular multiplication, significantly improving throughput-per-watt.
RMNTT [12] proposed an NTT accelerator using ReRAM-based XBAs. RMNTT stores the modified twiddle factor matrix in the XBAs and employs a modified Montgomery reduction algorithm to perform modular reduction on the VMM results. The evaluation results in [12] show that RMNTT outperforms other NTT accelerators in terms of throughput but incurs a large area overhead.
### _Crossbars_
Given the competitiveness of XBA-based NTT accelerators, we consider leveraging XBAs to accelerate PMM. We briefly review the XBA basics below.
XBA [31] is one representative CIM kernel in which every input signal is connected to every output signal through their cross-points consisting of memory elements and selectors. XBAs can efficiently implement VMM and have been widely studied for CNNs. In particular, XBAs implemented with nonvolatile memory (NVM) devices such as ReRAM [32] have gained popularity due to their high storage density, nonvolatility, and low energy consumption. However, XBAs face challenges stemming from the underlying memory devices and circuits. In-situ memory device nonidealities, e.g., non-linearity, thermal noise, and variations, impact computational accuracy.
Fig. 2(a) illustrates a general XBA structure. For each column, we adopt the current summing model as shown in Fig. 2(b). In this work, both the input voltages (\(V_{j}\)) and memory cell states (\(G_{ij}\)) assume binary values, i.e., \(I_{i}=\sum_{j=0}^{R-1}G_{ij}V_{j}\), where \(V_{j}\) and \(G_{ij}\) are either 0 or 1. Binary XBAs exhibit greater robustness to device and circuit nonidealities, and offer improved scalability.
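A minimal numerical model of this binary current-summing scheme (ignoring device nonidealities and modeling the ADC as an ideal p-bit quantizer) might look like the following; the array size and full-scale choice are illustrative assumptions, not the exact X-Poly read-out circuit.

```python
import numpy as np

def xba_vmm(G, V, adc_bits=4):
    """Ideal binary crossbar column read-out with a p-bit ADC.

    G: (R, C) binary conductance states, V: (R,) binary input voltages.
    Returns the quantized column currents I_c = sum_r G[r, c] * V[r].
    """
    I = G.T @ V                                  # analog current summation per column
    levels = 2 ** adc_bits - 1
    full_scale = G.shape[0]                      # worst case: all R cells in a column conduct
    return np.round(I / full_scale * levels)     # ideal p-bit quantization

rng = np.random.default_rng(0)
G = rng.integers(0, 2, size=(128, 128))
V = rng.integers(0, 2, size=128)
print(xba_vmm(G, V)[:8])
```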
## III NTT vs. Conv1D
The choice of PMM algorithm is critical to achieving high performance in terms of speed, noise, and area in the context of the CIM computing paradigm as discussed in Sec. II-B. Two commonly used methods for performing PMM are Conv1D and NTT. Recent efforts utilizing XBAs for PMM have primarily focused on accelerating NTT-based methods [12][17]. However, there is no systematic comparative study of which method, Conv1D or NTT, is a better fit for leveraging XBAs to accelerate PMM. We fill this gap with an in-depth investigation below. Our study reveals three key insights which favor the
Fig. 2: (a) XBA structure: a C columns x R rows array and the corresponding WL/BL driver. A p-bit ADC is used for converting analog signals to digital signals. (b) Illustration of the current summing scheme in XBA computation.
Conv1D over the NTT-based approach. First, data mapping complexity is higher when using NTT. Second, the Conv1D method potentially offers a better performance trade-off in terms of area and throughput, providing more opportunities for design scalability. Third, the noise growth is generally higher in the NTT approach than Conv1D, which can negatively impact the performance and accuracy of the system. Below, we elaborate on these insights.
### _The Impact of Data Mapping to XBAs_
To use XBAs for PMM, both NTT-based and Conv1D-based PMM approaches require converting their respective operands into matrices and performing VMM on XBAs [12]. In the NTT-based approach, the twiddle factor of NTT must be converted into a matrix. In the Conv1D approach, one of the polynomials is transformed into a matrix, while the other remains a vector, facilitating the execution of VMM. Data mapping to XBAs in NTT and Conv1D can be better visualized in Fig.1(b) and (c). The figures show that the same number of memory cells are needed for both methods; thus, NTT does not provide benefits over Conv1D in terms of the XBA area. Also, due to the butterfly computation involved in NTT, converting the twiddle factors into a matrix is significantly more complex than converting a polynomial into a matrix for Conv1D [12].
The end-to-end computational complexity of NTT-based PMM on XBAs is actually higher than that of directly mapping Conv1D onto XBAs. As depicted in Fig. 1(a), NTT-based PMM involves three main steps: NTT computation (\(O(n\log n)\) complexity), element-wise multiplication (\(O(n)\) complexity), and INTT computation (\(O(n\log n)\) complexity). In contrast, Conv1D-based PMM has a complexity of \(O(n^{2})\). However, when utilizing XBA acceleration, the complexity of Conv1D-based PMM can be reduced from \(O(n^{2})\) to \(O(1)\). By employing similar data mappings, NTT and Conv1D exhibit the same time complexity on XBA. Therefore, Conv1D-based PMM on XBA demonstrates a lower end-to-end complexity compared to NTT-based PMM, as it requires fewer operations: Conv1D only necessitates \(O(1)\) operations, while NTT involves \(O(1)\) + \(O(n)\) + \(O(1)\) operations.
### _Performance Analysis_
NTT-based PMM requires that the twiddle factors for NTT and INTT be stored in the XBAs (see Fig. 1(c)). This stored-twiddle-factor approach necessitates either frequent updates to the twiddle factors stored in the XBAs or the use of additional XBAs to store all twiddle factors needed for NTT. As a result, this leads to either higher latency and energy consumption or increased area. Alternatively, Conv1D-based PMM has numerous identical values that, when stored in XBAs, can be reused repeatedly. This provides the opportunity to devise intelligent data reuse schemes (see Sec. IV-C), ultimately leading to more efficient and optimized solutions in terms of area and energy consumption. Therefore, Conv1D-based PMM can be a more promising method for accelerating PMM with XBAs.
### _Noise_
As discussed in Sec II-D, XBAs are susceptible to accuracy degradation stemming from the intrinsic nonidealities of the memory cells, and the limitation of ADC precision. As a result, using XBAs inevitably introduces a certain amount of noise (i.e., error) in VMM results. When implemented on XBAs, Conv1D-based PMM incurs less noise than NTT-based PMM. The primary reason is that in Conv1D-based PMM, the entire computation can be completed in one step in XBAs, which helps control the magnitude of the noise. However, in NTT-based PMM, the NTT, element-wise multiplication, and INTT must be performed, which increases the noise introduced by XBAs multiplicatively (See Fig 1(a)). In applications such as HE, higher noise levels are not tolerable, making NTT-based XBA PMM unsuitable for such applications.
Based on the observations in this section, we believe that Conv1D-based PMM is a better approach for accelerating PMM with XBAs. We thus focus on the design and optimization of the XBA fabric to accelerate Conv1D-based PMM.
## IV X-Poly
Design and optimization of Conv1D-based PMM on XBAs for long polynomials must solve several key problems. These include mapping data to XBAs to efficiently use the resources, enhancing memory utilization at the Processing Element (PE) level, and effectively implementing modular reduction strategies. We present X-Poly for accelerating the Conv1D-based
Fig. 3: Hierarchical structure of the proposed X-Poly design: (1) Tile level design and data mapping, (2) PE level design and data mapping, and (3) XBA structure.
PMM and provide tailored solutions to address the aforementioned challenges.
### _Overview_
The high-bitwidth long polynomials employed in cryptographic algorithms like HE pose challenges for the design of XBA-based architectures. One specific issue relates to the limited size of the XBA. For instance, an array with 128 rows and 128 columns falls short in accommodating high-bitwidth polynomials with a degree exceeding 256.
To address the challenge, X-Poly utilizes a hierarchical approach to address computational complexity. Fig. 3 illustrates the overall structure and data mapping of X-Poly, consisting of the tile, PEs, and XBAs. The tile (Fig. 3(1)) contains multiple PEs, an accumulator, and a specifically designed reduction unit for modular reduction. Each PE holds one-bit weights and shares the same input. Thus, \(k\) PEs can store \(k\)-bit polynomials from the most significant bit (MSB) to the least significant bit (LSB), working in parallel.
The PE (Fig. 3(2)) is composed of multiple XBAs working on different parts of the polynomials simultaneously, as well as an adder tree and a shifter. The XBAs (Fig. 3(3)) are used for coefficient multiplication, while the adder tree and shifter within each PE accumulate partial results from each XBA and perform shift-add operations.
### _Bit Mapping_
The high bitwidth and large polynomial degree required for cryptographic applications demand a large number of shift-add operations, which may not be efficiently supported in a CIM architecture. Due to the limited precision of a memory cell in an XBA, we need to map the bits of each weight into multiple memory cells. Fig. 4(a) illustrates the conventional approach for mapping high-bitwidth weights to multiple XBAs. All bits of a weight are stored in multiple columns of the XBA. When input arrives at the XBA, each column conducts a multiplication operation. Immediately following this, shift-adders carry out the shift-add operations after the XBA computation. This XBA-level shift-add operation requires many shift-adders and is expensive in terms of both time and energy.
As such, in this work, we propose a new bit mapping (BM) technique that groups the same bit of all weights together, as shown in Fig. 4(b). For example, in the case of 4x4 2-bit weights distributed among 2 PEs (4 XBAs per PE), each PE processes one bit of each weight. After all PEs process one input bit, the shift operation is performed at the PE level, thereby avoiding a costly array-level shift-add operation. Comparing the conventional mapping (Fig. 4(a)) and the bit mapping (Fig. 4(b)) in the example, the number of shift-adders is reduced from 8 to 2.
As will be seen, this bit mapping strategy can significantly improve both the area and speed for processing high-bitwidth polynomial-based workloads in XBAs. In addition to its benefits for shift operations, the BM technique also simplifies the design of the PE. Since each PE handles a bit of each polynomial, the data patterns are captured at the polynomial coefficient level. We can simultaneously perform mapping optimization for all PEs. Thus, this technique can be easily extended to accommodate polynomials with different degrees or bitwidths, making it a flexible solution for performing polynomial operations in XBAs.
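The arithmetic behind this reorganization can be checked with a short sketch: splitting a k-bit weight matrix into bit planes (one per PE), performing binary VMMs, and applying a single PE-level shift-add recovers the full-precision product. The example below is our own illustration and abstracts away the XBA tiling.

```python
import numpy as np

def bitplane_vmm(W, x, k):
    """VMM with k-bit unsigned weights stored as k binary planes (one plane per PE)."""
    planes = [(W >> b) & 1 for b in range(k)]          # binary matrices held in the XBAs
    partials = [plane @ x for plane in planes]         # per-PE binary VMMs, no in-array shifts
    return sum(p * (1 << b) for b, p in enumerate(partials))  # single PE-level shift-add

rng = np.random.default_rng(0)
W = rng.integers(0, 4, size=(4, 4))                    # 2-bit weights, as in Fig. 4
x = rng.integers(0, 8, size=4)
assert np.array_equal(bitplane_vmm(W, x, k=2), W @ x)  # matches the full-precision result
```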
### _Polynomial Mapping_
In our PMM approach (utilizing VMM in XBAs), we first map polynomials into matrices to facilitate computation. Each polynomial is converted into a matrix by horizontally shifting the coefficients of the polynomial across each row, with any remaining gaps filled with zeros. This procedure results in a matrix structure that supports the critical shift-add operations intrinsic to PMM.
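A sketch of one possible matrix construction is shown below (our own illustration, with the shift orientation chosen for convenience): the stored polynomial \(a\) is expanded into a \((2n-1)\times n\) convolution matrix so that a VMM with the coefficient vector of \(b\) yields the full product before reduction.

```python
import numpy as np

def conv_matrix(a):
    """(2n-1) x n matrix M such that M @ b gives the coefficients of a(x) * b(x)."""
    n = len(a)
    M = np.zeros((2 * n - 1, n), dtype=int)
    for col in range(n):            # each column holds a shifted copy of a's coefficients
        M[col:col + n, col] = a
    return M

a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 0])
assert np.array_equal(conv_matrix(a) @ b, np.convolve(a, b))
```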
Mapping the matrices into XBAs in our PMM approach using VMM is straightforward. However, in an effort to further optimize this mapping scheme, we noted that for any given polynomial degree \(n\) and XBA row length \(x\), there is a consistent pattern of repeated XBAs. Specifically, in every
Fig. 4: Example of the proposed bit mapping technique: mapping 2bit 4x4 weights to 2x2 XBA with binary cells: (a) Conventional mapping. (b) Bit mapping.
instance, we require \(n/x\) identical XBAs to represent the polynomial matrix.
This aspect of our design stands in contrast with NTT-based XBA designs [12], which often find themselves confined to specific polynomial parameter settings. As such, X-Poly offers a significant increase in flexibility. For example, consider two application scenarios for privacy-preserving machine learning (PPML) inference as shown in [24]. On server-side inference, where performance is prioritized, and energy or area constraints are less critical, X-Poly can leverage a larger count of XBAs for high-throughput PMM in HE of PPML. For edge-device inference, where area and energy efficiency are paramount, X-Poly can efficiently handle a variety of large polynomial degrees and bitwidths with a smaller number of XBAs. A detailed study of the scalability of our design, referred to as X-Poly, is provided in Sec. V-E.
### _Modular Reduction_
Modular reduction is a crucial step in PMM, which ensures that the resulting polynomial remains within a specified degree and coefficient bounds. The essential steps in the reduction process include selecting an appropriate modulus for the ring and performing the modulo operation on the degree and coefficients of the resulting polynomial.
In X-Poly, we utilize a variant of the Barrett reduction [33] technique for efficient modular reduction. This method is known for its effectiveness in cryptographic applications and modular arithmetic, as it can compute the remainder of a division operation without performing the division itself. Notably, to minimize computation overhead in reduction, we strategically pre-compute specific parameters. This strategy transforms the complex, time-consuming multiplication and division operations into shift operations, effectively reducing computation time and optimizing the overall reduction process.
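A minimal integer Barrett-reduction sketch of this idea is given below: the precomputed factor replaces the division, and only multiplies, shifts, and a small number of corrective subtractions remain at run time. The parameter choices are illustrative, and this is not the exact X-Poly variant.

```python
def barrett_setup(q):
    k = 2 * q.bit_length()
    return k, (1 << k) // q            # precompute mu = floor(2^k / q) once per modulus

def barrett_reduce(x, q, k, mu):
    """Reduce 0 <= x < q*q modulo q without a runtime division."""
    t = (x * mu) >> k                  # shift-based estimate of x // q
    r = x - t * q
    while r >= q:                      # at most a couple of corrective subtractions
        r -= q
    return r

q = 65537                              # example prime modulus
k, mu = barrett_setup(q)
assert barrett_reduce(123456789, q, k, mu) == 123456789 % q
```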
### _Computation Flow_
Assuming polynomial \(A\) is mapped onto XBAs in X-Poly, PMM can be accomplished as follows. (1) **Input processing**: We begin by bit-slicing each element in the new polynomial B, separating it into its individual bits. (2) **PE computation:** Within each PE, different arrays handle distinct sections of the polynomial and perform multiplications with corresponding sections of the input. The results are then summed and shifted at the PE level. This bit-by-bit input process continues until all input bits have been addressed. (3) **Tile accumulation:** Afterward, the results from all PEs are accumulated at the tile level, and the partial results obtained from each PE are combined. (4) **Tile reduction:** Finally, a tile-level reduction operation is applied for efficient modular reduction.
We also prioritize maximizing throughput in our design by incorporating a three-stage pipeline into the X-Poly workflow to enhance the PMM process. This pipeline, which encompasses the PE computation, tile accumulation, and tile reduction stages, enables efficient synchronization and overlapping operations.
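Putting the pieces together, the following is a minimal functional model of this flow (our own illustration, with no pipelining or per-XBA tiling): the stored polynomial is expanded into a convolution matrix, its weight bit-planes and the input bits are processed step by step, partial sums are shifted and accumulated, and the final coefficients are reduced modulo \((x^{n}+1,q)\).

```python
import numpy as np

def x_poly_flow(a, b, q, k):
    """Bit-serial Conv1D PMM: input processing -> PE compute -> accumulation -> reduction."""
    n = len(a)
    M = np.zeros((2 * n - 1, n), dtype=np.int64)         # polynomial a stored as a matrix
    for col in range(n):
        M[col:col + n, col] = a
    planes = [(M >> wb) & 1 for wb in range(k)]          # one weight bit-plane per PE
    b = np.asarray(b, dtype=np.int64)
    acc = np.zeros(2 * n - 1, dtype=np.int64)
    for ib in range(k):                                  # (1) input processing: one input bit per step
        x_bit = (b >> ib) & 1
        for wb, plane in enumerate(planes):              # (2) PE computation: binary VMMs
            acc += (plane @ x_bit) << (ib + wb)          # (3) accumulation with PE/tile-level shifts
    acc %= q                                             # (4) coefficient reduction modulo q
    return [int((acc[i] - (acc[i + n] if i + n < 2 * n - 1 else 0)) % q) for i in range(n)]

print(x_poly_flow([1, 2, 3, 4], [5, 6, 7, 0], q=17, k=5))   # -> [11, 5, 0, 1]
```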
## V Evaluation
In this section, we present the evaluation of X-Poly. We begin by discussing our implementation setup and evaluation tools and follow with a comparison with both the CPU-based solutions as well as other hardware accelerators. We will quantitatively assess the performance benefits from X-Poly. We then evaluate our bit mapping technique, with a focus on energy and area savings. Then we study the throughput per area performance of X-Poly, demonstrating its superior performance over other SOTA CIM accelerators. Finally, we assess the scalability of X-Poly and highlight its versatility in handling diverse polynomial degrees and bitwidths.
### _Implementation Setup_
To verify the functionality of X-Poly and evaluate performance characteristics such as latency, energy, and area, we have assembled a comprehensive evaluation framework.
This framework considers the simulation of hardware components, including the modular reduction unit, shift-adders, accumulators and XBA arrays. We implemented the reduction unit, shift-adder, and accumulator using RTL, coded in Verilog and evaluated the energy consumption and area of these components using the RTL synthesis tool Cadence Encounter, paired with the 45nm CMOS predictive technology model (PTM) [34]. We used Neurosim [35] to estimate the latency, energy, and area of the ReRAM-based XBAs, as well as successive-approximation-register (SAR) ADCs assuming the same 45nm technology node. The size of each XBA is 128 rows \(\times\) 128 columns and one ADC is shared by 8 columns.
We then incorporated the aforementioned simulation-based results into our Python-based cycle-accurate simulator. This simulator tracks the pipeline stages for a given PMM operation and computes the cycle count and total energy consumption by emulating the operations of each hardware component on a cycle-by-cycle basis. This evaluation framework allows us to generate a holistic and precise assessment of the overall performance of a PMM in X-Poly.
### _Comparison with SOTA Solutions_
#### V-B1 Comparison with CPU
We first compared our X-Poly implementation with a CPU implementation that performs PMM with a SOTA C++ library (Number Theory Library version 11.5.1 [36]). An Intel(R) Xeon(R) CPU E5-2680 v3 operating at 2.50GHz was used for the CPU implementation. The results are shown in Table I (col 3). The latency of the X-Poly design is 200\(\times\) better than the CPU implementation. Performance enhancement is primarily due to the parallel compute capability and fast multiplication inherent in the XBAs in our CIM-based architecture, allowing for a much more efficient PMM execution.
#### V-B2 Comparison with other accelerators
Next, we compared X-Poly with other SOTA accelerators. As current accelerators for PMM only use NTT, we compare our approach to SOTA accelerators that support NTT given a polynomial degree of 256. That said, NTT solutions require additional multiplications and the INTT to obtain final PMM results.
Compared to X-Poly, this may increase overall latency and energy consumption by 2\(\times\). Moreover, with X-Poly, we can generate PMM results in a single step without the need for additional multiplication or INTT.
**XBA solutions:** We first compared our implementation with other CIM solutions, specifically with ReRAM implementations. We scaled the latency and energy of [12] to 45nm for a fair comparison to X-Poly, following the methodology outlined in [13]. Given that the study in [12] did not provide area results, we carried out an estimation using the mapping methodology introduced in their publication. We assume the same area for XBAs and peripheral components as X-Poly. Although our design exhibited similar latency to RMNTT, our improved mapping technique results in a significantly reduced area. That is mainly because we reduce the footprint of shift-adders to just 20% of the original area by using the proposed BM technique, thereby leading to a 3.9\(\times\) improvement in the throughput-per-area ratio.
**Compute in SRAM solutions:** We also compared our implementation with in-SRAM solutions. The X-Poly approach also improves throughput and throughput-per-area. Again, this can be attributed to both the parallel computing capability and the fast multiplication feature of our XBA-type mapping technique, which enables us to perform multiple computations simultaneously. More specifically, throughput is improved by 11\(\times\), and throughput-per-area is improved by up to 3\(\times\).
**Non-CIM solutions:** Finally, we compared X-Poly with non-CIM solutions. The X-Poly design is advantageous as it can store entire polynomial coefficients inside the XBAs. This feature eliminated the need for frequent access to on-chip memory for coefficients in long polynomials, which reduces data movement between the computing unit and the on-chip memory. This results in a reduction in both latency and energy consumption. Overall, X-Poly outperformed ASIC and FPGA solutions in terms of throughput and energy efficiency. Compared to SOTA FPGA implementations, X-Poly can achieve a remarkable 75\(\times\) throughput improvement. When compared against SOTA ASIC implementations, X-Poly can achieve a 2\(\times\) throughput improvement.
### _Bit Mapping Study_
We now evaluate the energy and area benefits of the proposed BM technique, discussed in Sec.IV. To evaluate performance, we consider two scenarios: (1) conventional mapping as the implementation of RMNTT, the SOTA XBA-based NTT accelerator, and (2) our proposed BM technique. Fig.5 illustrates the shift-adder area/energy and ADC area/energy given various polynomial degrees for each mapping. Results suggest that due to the large polynomial degrees and high bitwidths associated with the PMM, the peripherals (such as the shift-adders) in the design with conventional mapping consume a significant proportion of the energy and area. Moreover, this escalates with polynomial degrees. However, our proposed BM technique decreases the area for shift-add operations by 80%, leading to an additional 3\(\times\) reduction in overall area. Moreover, compared to conventional mapping, our design has lower latency and energy consumption.
Fig. 6 illustrates the area and energy breakdown of our proposed design. This analysis further reveals that the majority of the energy consumption and area is spent on ADC operations, with the proposed mapping technique reducing the energy and area consumption for other peripherals significantly.
Fig. 5: Comparison of area and energy breakdown for ADCs and shift-adders in X-Poly and RMNTT [12].
### _Throughput per Area Study_
Table I shows that the XBA-based solutions (X-Poly and [12]) achieve higher throughput but require a larger area than the in-SRAM solution [13]. This is due to the inherent design of XBA-based solutions: they require a more expansive area to accommodate an increase in both polynomial degree and bitwidth [12]. In-SRAM solutions can support larger parameter sizes within a similar area. However, this is accompanied by a substantial reduction in throughput. However, X-Poly reduces XBA area while maintaining its high throughput.
To further understand the trade-off between throughput and area, we conducted an analysis of the throughput per area performance and compare the results with other SOTA CIM solutions. We consider a range of polynomial degrees and bitwidths, to generate a comprehensive perspective regarding the strengths of our design.
Fig. 7 illustrates the throughput per area performance of our design, as well as the SOTA XBA design in [12] and the in-SRAM design in [13]. Our results show that X-Poly can achieve significantly better throughput-per-area performance than both of these solutions, even as the parameter size increases. This highlights how X-Poly can lead to decreased area consumption of the XBA-based solution without compromising the throughput.
### _Scalability of X-Poly_
Modern applications like HE in privacy-preserving machine learning often choose polynomials with a large degree and bitwidth [7, 24]. Storing these entirely within XBAs demands a high number of arrays, leading to significant area usage and energy consumption.
Our polynomial mapping scheme (Sec IV-C) allows us to reuse arrays. This enables us to employ a smaller number of XBAs to accommodate larger polynomials. However, reusing XBAs could potentially affect our design's latency. To address this, we conducted an experiment where given a fixed number of XBA arrays, we assessed the capability of X-Poly to adapt to various polynomial degrees and bitwidths. The objective here was to determine how we could optimize X-Poly to maximize design throughput under different polynomial degrees and bitwidths constraints.
The left graph in Fig. 8 depicts the maximum throughput of X-Poly using different numbers of XBAs. We considered polynomial degrees ranging from 256 to 2048, with a fixed bitwidth of 16. The right graph demonstrates the maximum throughput for different bitwidths ranging from 8 to 64, while maintaining a constant polynomial degree of 512. Our experiments highlight that our design is capable of managing a wide array of polynomial degrees and bitwidths while maintaining a fixed number of XBAs. As anticipated, higher degrees and bitwidths require longer computation times due to the necessity for reuse of the same arrays within the pipeline. By modifying the number of XBAs, we can manage the balance between area and throughput. Overall, our design showcases robust scalability, effectively adapting to a broad spectrum of polynomial degrees and bitwidths.
## VI Conclusion
In summary, this paper proposes a novel PMM accelerator based on XBA-type CIM for accelerating the most time-consuming part of lattice-based cryptography algorithms. The proposed X-Poly design achieves 3.1 MOP/s throughput and offers 200\(\times\) latency improvement compared to CPU-based implementations. It also achieves 3.9\(\times\) throughput per area improvements compared with the SOTA CIM accelerators. The suitability of NTT-based solutions for CIM-based PMM acceleration is evaluated, and a novel bit mapping technique is proposed to reduce area and energy overhead. PE-level optimization is conducted to increase memory utilization and support different scales of problems with a fixed number of XBAs.
Fig. 8: Scalability study shows the throughput of X-Poly under different polynomial degrees and bitwidths given a fixed total XBA number.
Fig. 6: Area and energy breakdown for components of X-Poly with polynomial degree 256 and bitwidth 16.
Fig. 7: Throughput per Area (KOPs/mm\({}^{2}\)) comparison with the SOTA CIM solutions (RMNTT and BPNTT) under different polynomial degrees and bitwidths. The y-axis uses a log scale for better illustration.
2304.02665 | Inferring the Astrophysical Population of Gravitational Wave Sources in
the Presence of Noise Transients | The global network of interferometric gravitational wave (GW) observatories
(LIGO, Virgo, KAGRA) has detected and characterized nearly 100 mergers of
binary compact objects. However, many more real GWs are lurking sub-threshold,
which need to be sifted from terrestrial-origin noise triggers (known as
glitches). Because glitches are not due to astrophysical phenomena, inference
on the glitch under the assumption it has an astrophysical source (e.g. binary
black hole coalescence) results in source parameters that are inconsistent with
what is known about the astrophysical population. In this work, we show how one
can extract unbiased population constraints from a catalog of both real GW
events and glitch contaminants by performing Bayesian inference on their source
populations simultaneously. In this paper, we assume glitches come from a
specific class with a well-characterized effective population (blip glitches).
We also calculate posteriors on the probability of each event in the catalog
belonging to the astrophysical or glitch class, and obtain posteriors on the
number of astrophysical events in the catalog, finding it to be consistent with
the actual number of events included. | Jack Heinzel, Colm Talbot, Gregory Ashton, Salvatore Vitale | 2023-04-05T18:00:10Z | http://arxiv.org/abs/2304.02665v2 | Inferring the Astrophysical Population of Gravitational Wave Sources in the Presence of Noise Transients
###### Abstract
The global network of interferometric gravitational wave (GW) observatories (LIGO, Virgo, KAGRA) has detected and characterized nearly 100 mergers of binary compact objects. However, many more real GWs are lurking sub-threshold, which need to be sifted from terrestrial-origin noise triggers (known as glitches). Because glitches are not due to astrophysical phenomena, inference on the glitch under the assumption it has an astrophysical source (e.g. binary black hole coalescence) results in source parameters that are inconsistent with what is known about the astrophysical population. In this work, we show how one can extract unbiased population constraints from a catalog of both real GW events and glitch contaminants by performing Bayesian inference on their source populations simultaneously. In this paper, we assume glitches come from a specific class with a well-characterized effective population (blip glitches). We also calculate posteriors on the probability of each event in the catalog belonging to the astrophysical or glitch class, and obtain posteriors on the number of astrophysical events in the catalog, finding it to be consistent with the actual number of events included.
keywords: black hole mergers - gravitational waves - methods: data analysis - methods: statistical
## 1 Introduction
Since the first direct detection of gravitational waves (GWs) from the merger of two stellar mass black holes (Abbott et al., 2016), the LIGO-Virgo-KAGRA (LVK) network has observed a large population of these stellar mass binary black holes (BBHs) (Abbott et al., 2019; Abbott et al., 2021; The LIGO Scientific Collaboration et al., 2021). With so many detections comes the ability to characterize the population of BBHs, and shed light on the dominant formation channels of stellar mass BBH mergers. While there is no theoretical consensus on the dominant formation channel, there are many proposals.
For instance, isolated binary evolution through a common envelope phase (Van Den Heuvel, 1976; Smarr and Blandford, 1976; Tutukov and Yungelson, 1993; Ivanova et al., 2013), stable mass transfer (Van Den Heuvel et al., 2017), dynamical many-body interactions in dense stellar environments (e.g. globular clusters, Sigurdsson and Hernquist, 1993; Kulkarni et al., 1993; Portegies Zwart and McMillan, 2000), chemically homogeneous stellar evolution (Marchant et al., 2016; Mandel and de Mink, 2016), dynamical triples assisted by the Kozai-Lidov mechanism (Antonini et al., 2017; Silsbee and Tremaine, 2017), or primordial binary black hole systems (Bird et al., 2016; Ali-Haïmoud et al., 2017) have been proposed. Traces of these different formation channels are imprinted in the population, distinguishing the relative rates and constraining the sub-population distributions (Zevin et al., 2021; Mapelli, 2021; Mandel and Broekgaarden, 2022). As more GWs are detected, the different astrophysical formation channels will begin to reveal themselves.
However, one is never sure of the origin of a potential GW detection. GWs are detected using search pipelines, which vary in their methodology, but in general scan the LVK data stream for matches to a GW template within some template bank dense over the expected source parameters (Allen, 2005; Usman et al., 2016; Nitz et al., 2017; Messick et al., 2017; Hanna et al., 2020). This provides a point estimate on the source parameters with the best match template. If this best match passes some significance threshold, it is called a trigger.
GW interferometers are plagued by transient noise fluctuations (known as glitches), whose morphology occasionally mimics real events (Cabero et al., 2019; Zevin et al., 2017; Davis et al., 2021; Soni et al., 2021; Acernese et al., 2022; Akutsu et al., 2021; Ashton et al., 2022). Most pipelines estimate the false alarm rate (FAR) of a trigger by time-sliding the data of different interferometers by more than the light-travel time between them. Any coincident triggers therefore cannot be caused by a GW propagating at the speed of light, and are deemed false alarms. By varying the time-slide and counting the total number of false alarms, pipelines can accurately estimate the false alarm rate of a trigger. Comparing the FAR to the expected astrophysical rate of the trigger, search pipelines estimate the probability of astrophysical origin, or \(p_{\text{astro}}\). In order to calculate the expected astrophysical rate of the trigger, pipelines must assume a model for the underlying astrophysical source population (The LIGO Scientific Collaboration et al., 2021).
To mitigate contamination from glitches, it is standard to use only the most significant events. Because \(p_{\text{astro}}\) estimates assume a population, it is unusual to use pipeline calculated \(p_{\text{astro}}\) as a threshold for population inference. Instead, a common threshold is FAR \(<1\text{yr}^{-1}\), yet even with this high threshold, one expects e.g. 4.6 false alarms
in the catalog used by Abbott et al. (2023b) under the assumption that the search pipelines produce events independently (Allen, 2005; Usman et al., 2016; Nitz et al., 2017; Messick et al., 2017; Hanna et al., 2020; Abbott et al., 2023b). Therefore, one must tune the FAR threshold to minimize the systematic uncertainty of including more false alarms in the catalog, and the statistical uncertainty of including fewer events.
There are also a plethora of sub-threshold (FAR \(>1\)yr\({}^{-1}\)) astrophysical events which contain information about the population of gravitational-wave sources in the Universe, especially in some of the more poorly measured regions of parameter space, where glitches are responsible for reduced search sensitivity. Sub-threshold mergers of binary neutron stars (BNS), neutron star black holes (NSBH) or stellar mass BBHs can improve known constraints on the population of these gravitational-wave progenitors. Indeed, there are many more events with lower significance; the rate of GW events scales with SNR\({}^{-4}\), assuming a constant merger rate in a Euclidean volume (Schutz, 2011; Chen and Holz, 2014). Though these lower significance events also encode less information about the progenitor, events as low as SNR \(\sim 6-7\) can have well-measured chirp masses (Huang et al., 2018).
Moreover, certain kinds of theoretical GW events may pass this FAR threshold only rarely, with the majority falling deep into the sub-threshold range. For instance, subsolar-mass compact objects are predicted by certain modifications to the standard model of particle physics or \(\Lambda\)CDM (Shandera et al., 2018; Nitz and Wang, 2021; Abbott et al., 2022a). Though no direct detections have been made of a sub-solar mass merger (Abbott et al., 2019; Nitz and Wang, 2021), it is possible there are some lurking within the large set of sub-threshold candidates; because of their low masses, the signal-to-noise ratio (SNR) and significance of the GW will be much lower.
Glitches in GW interferometers are commonly studied by modelling the data as some parametric and deterministic function plus a stationary and stochastic noise process (Cornish and Littenberg, 2015; Merritt et al., 2021; Tolley et al., 2023; Udall and Davis, 2023). This is preferable to modelling glitches as some general non-stationary noisy time series, where the statistical properties are unclear. A glitch model then requires a parametric function, called the glitch waveform, for the deterministic part of the signal. Since significant false alarms will mimic real GWs, it is sensible to use a GW model for the glitch waveform. In this paper, we follow this prescription, modelling glitches with a GW waveform.
A more general glitch model distinguishes GWs from terrestrial glitches by signal coherence. Real GWs must be coherent between multiple detectors and the waveforms should be consistent with the same progenitor parameters, while the same is not true for coincident false alarms (Veitch and Vecchio, 2010). Glitches may therefore be modelled as an independent GW waveform in each detector, relaxing this coherence requirement. This is justified as a worst-case scenario, where a background event is distinguished from an astrophysical one based purely on the signal coherence. This glitch model has been used to calculate the probability an event is astrophysical (Isi et al., 2018; Ashton et al., 2019; Pratten and Vecchio, 2021), and to rule out marginal candidates (e.g. Ashton and Thrane, 2020; Vajpeyi et al., 2022). The most general glitch models make no physical assumptions about the source and model glitches as a superposition of wavelets (Cornish and Littenberg, 2015).
Whatever the waveform assumed for the glitches, a population would then be given by probability distributions on their parameters. Indeed, it is possible to study the population of glitches and astrophysical events simultaneously, allowing for each event to belong to either class. Previous work approached this problem from different perspectives. Farr et al. (2015) showed how to infer the rates of astrophysical and background populations when the shapes of the populations are known, but the identity of each event (i.e. which population it originates from) is unknown. Gaebel et al. (2019) show that it is indeed possible to do joint inference on an astrophysical and a glitch population, but leave a study with real GW data for a future analysis. Roulet et al. (2020); Galaudage et al. (2020) analyze real GW data, and fold in pipeline information - in particular, \(p_{\rm astro}\) estimates, to build a glitch population model. However, this carries a fixed background event rate estimate by each search pipeline, rather than inferring the rate of events from the background population in a Bayesian manner.
In this paper, we present a general method to simultaneously model the population of background non-astrophysical triggers and the population of astrophysical objects, in a fully Bayesian manner. We use a population of short glitches ("blips") as identified by the GravitySpy algorithm (Zevin et al., 2017) to contaminate the catalog of astrophysical signals. While this is done for computational expedience, the method can be used for any type of non-astrophysical transients, as long as one can characterize their "usual" properties. Similarly, while we focus on the population of BBHs, the method may be used to study any population of foreground events contaminated with undesirable background events. In section 2, we briefly review Bayesian parameter estimation of GW sources and population inference. Then, we discuss how this picture is complicated when one allows for the possibility that the dataset is contaminated by glitches. In section 2.4 we discuss our glitch population parameterization and constrain the population hyperparameters using a large representative sample. In section 3, we contaminate a catalog of GWs with glitches, and show how our method consistently models and removes the bias due to the contaminants. Finally, in section 4, we summarize and discuss future work.
## 2 Methods
### Parameter Estimation
Consider a stretch of LVK frequency domain data \(d\) which is a sum of noise \(n\) and waveform signal \(h(\theta)\)
\[d=h(\theta)+n, \tag{1}\]
where \(\theta\) represents the unknown parameters of the GW source. Approximating the noise as stationary and gaussian, the likelihood can be written
\[\log\mathcal{L}(d|\theta)=-\sum_{j}\left(2\Delta f\frac{|d_{j}-h_{j}(\theta)|^ {2}}{\mathcal{P}_{j}}+\log(2\pi\mathcal{P}_{j})\right), \tag{2}\]
where \(d_{j}\) and \(h_{j}\) represent the \(j^{\rm th}\) frequency component of the data and waveform, respectively, \(\mathcal{P}_{j}\) is the power-spectral-density, and \(\Delta f\) is the frequency spacing (Whittle, 1951). With this likelihood, a model for the waveform \(h(\theta)\) given some GW parameters, and priors for the GW parameters, one can then sample from the posterior of the GW parameters (Veitch et al., 2015; Thrane and Talbot, 2019; Christensen and Meyer, 2022).
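As an illustrative sketch (not taken from any particular LVK analysis code), Eq. (2) can be evaluated with a few lines of numpy; the function name and array conventions below are ours:

```python
import numpy as np

def whittle_log_likelihood(data_f, template_f, psd, delta_f):
    """Stationary-Gaussian (Whittle) log-likelihood of Eq. (2).

    data_f, template_f : complex frequency-domain data d_j and waveform h_j(theta)
    psd                : one-sided power spectral density P_j at the same frequencies
    delta_f            : frequency resolution in Hz
    """
    residual_power = np.abs(data_f - template_f) ** 2
    return -np.sum(2.0 * delta_f * residual_power / psd + np.log(2.0 * np.pi * psd))
```

In a stochastic sampler this function would be called once per proposed \(\theta\), with the template regenerated by a waveform model each time.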
The above process also can apply to glitches, thinking of them as a deterministic signal buried in stochastic noise. Modelling glitches under some parameterization (e.g. a sine-gaussian), one can perform parameter estimation exactly as above for the glitch parameters, which we denote \(\psi\). Indeed, while glitches are usually ruled out by search pipelines by e.g. \(\chi^{2}\) discriminators (Allen, 2005), there can be cases where glitches are mistaken for astrophysical GWs. Because
population inferences generally assume that all events in the catalog are truly astrophysical, a contaminant glitch in the catalog will bias the inference. We want to relax this assumption, and jointly infer the population of astrophysical events and glitches.
### Population Inference Without Glitches
Before we discuss simultaneous inference of the astrophysical and glitch populations, we review the general GW population inference problem. Given posterior samples from a set of data timeseries \(\{d_{i}\}_{1\leq i\leq N_{\rm source}}\), one can write the likelihood for a population model. In general, a population model describes the rate of mergers within a small interval of GW parameter space \([\theta,\theta+d\theta]\). However, the rate is typically assumed to be a Poisson process, and we can instead write down a probability density \(p_{A}(\theta|\Lambda)\), irrespective of the overall rate. Here, \(\Lambda\) are called the hyper-parameters; a finite list of parameters which vary the shape of the population distribution (e.g. the mean and variance of a gaussian, the power index to a power-law, etc.). We give the subscript \(A\) to refer to "astrophysical." This is in contrast to \(G\) for "glitch," which we will use later in this paper.
Assuming a Poisson process for the events and marginalizing over the overall rate \(R\) with an uninformative (uniform in \(\log R\)) prior, one obtains the hierarchical likelihood
\[\mathcal{L}(\{d_{i}\}|\Lambda)\propto\prod_{i=1}^{N_{\rm source}}\frac{\int d \theta\mathcal{L}(d_{i}|\theta)p_{A}(\theta|\Lambda)}{\alpha(\Lambda)} \tag{3}\]
and the selection function
\[\alpha(\Lambda)=\int d\theta p_{\rm det,\Lambda}(\theta)p_{A}(\theta|\Lambda) \tag{4}\]
is the fraction of events which are detectable in the population with hyperparameters \(\Lambda\) (for a derivation of the likelihood see Mandel et al., 2019; Vitale et al., 2020). The quantity \(p_{\rm det,\Lambda}(\theta)\) is the probability of detecting an astrophysical event with parameters \(\theta\), given by
\[p_{\rm det,\Lambda}(\theta)=\int_{\{d\in\mathcal{D}|\rho(d)>\rho_{\rm thr}\}} \mathcal{L}(d|\theta)\mathrm{d}d \tag{5}\]
the integral over all possible data realizations which exceed the detection threshold \(\rho(d)>\rho_{\rm thr}\) (i.e. FAR \(<1\)yr\({}^{-1}\), as in Abbott et al., 2023).
In practice, the integrals in Eq. 3 and 4 are estimated with Monte Carlo estimators. In particular,
\[\int d\theta\,\mathcal{L}(d_{i}|\theta)p_{A}(\theta|\Lambda)\sim\frac{Z(d_{i})}{N_{\rm samp}}\sum_{j=1}^{N_{\rm samp}}\frac{p_{A}(\theta_{j}|\Lambda)}{\pi(\theta_{j}|\mathcal{H}_{\rm PE})}\bigg{|}_{\theta_{j}\sim p(\theta|d_{i})}, \tag{6}\]
where \(\theta_{j}\) are samples from the \(i^{\rm th}\) event posterior,
\[Z(d_{i})=\int d\theta\mathcal{L}(d_{i}|\theta)\pi(\theta|\mathcal{H}_{\rm PE}) \tag{7}\]
is the evidence and \(\pi(\theta|\mathcal{H}_{\rm PE})\) is the sampling prior used for the parameter estimation. As for the selection function,
\[\alpha(\Lambda)\sim\frac{1}{N_{\rm draw}}\sum_{j=1}^{N_{\rm det}}\frac{p_{A}(\theta_{j}|\Lambda)}{p_{\rm draw}(\theta_{j})}\bigg{|}_{\theta_{j}\sim p_{\rm draw}(\theta)}, \tag{8}\]
where \(N_{\rm draw}\) events are drawn from some fiducial distribution \(p_{\rm draw}(\theta)\), data drawn from the conditioned likelihood \(\mathcal{L}(d|\theta)\) with a suitable power spectral density choice, and then search pipelines run to recover \(N_{\rm det}\) of the total events (for details see e.g. Tiwari, 2018; Farr, 2019).
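A minimal sketch of the two Monte Carlo estimators may help make this concrete; the function names are ours, and we assume the population and prior densities have already been evaluated at the relevant samples:

```python
import numpy as np

def per_event_integral(pop_pdf_at_samples, pe_prior_at_samples, evidence):
    """Eq. (6): reweight posterior samples theta_j ~ p(theta|d_i), drawn under the
    parameter-estimation prior pi(theta), to the population model p_A(theta|Lambda)."""
    weights = pop_pdf_at_samples / pe_prior_at_samples
    return evidence * np.mean(weights)

def selection_function(pop_pdf_at_found, p_draw_at_found, n_draw):
    """Eq. (8): importance-sample over the found injections drawn from p_draw(theta);
    n_draw is the total number of injections performed, found or not."""
    return np.sum(pop_pdf_at_found / p_draw_at_found) / n_draw
```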
### Population Inference With Glitches
The above procedure assumes every event which passes the threshold is a real GW. This assumption can be relaxed by simultaneously fitting the glitch population. Suppose the glitch waveform is given by parameters \(\psi\), and we obtain posteriors on \(p(\psi|d_{i})\) for each event in the catalog, as well as posteriors on \(p(\theta|d_{i})\) for the GW parameters. With Eq. 79 in Vitale et al. (2020) and a relative rate \(\eta\) of GWs versus GW-like glitches, one can marginalize over the total rate with a uniform in \(\log R\) prior to generalize Eq. 3.
\[\mathcal{L}(\{d_{i}\}|\Lambda_{A},\Lambda_{G},\eta)\propto\] \[\prod_{i=1}^{N_{\rm events}}\frac{\eta\int d\theta\,\mathcal{L}(d_{i}|\theta)p_{A}(\theta|\Lambda_{A})+(1-\eta)\int d\psi\,\mathcal{L}(d_{i}|\psi)p_{G}(\psi|\Lambda_{G})}{\eta\alpha_{A}(\Lambda_{A})+(1-\eta)\alpha_{G}(\Lambda_{G})}, \tag{9}\]
where \(\Lambda_{A}\) and \(\Lambda_{G}\) refer to the astrophysical and glitch hyperparameters, \(p_{G}(\psi|\Lambda_{G})\) is the population model for the glitch waveform parameters and \(\alpha_{X}(\Lambda_{X})\) is the selection function for the \(X\) subpopulation:
\[\alpha_{X}(\Lambda_{X})=\int d\theta p_{\rm det,\mathcal{X}}(\theta)p_{X}( \theta|\Lambda_{X}) \tag{10}\]
\(p_{\rm det,G}\) is analogous to the \(p_{\rm det,\Lambda}\) we defined above, but we want to allow for the possibility that the detection criterion \(\rho(d)>\rho_{\rm thr}\) is different for glitches. In reality, the same detection criterion must be used for all events for a catalog, but for reasons we will describe below, we must use a different detection criterion for glitches in this study.
The mixing fraction \(\eta\) represents the relative rate of all GWs from all GW-like sources (astrophysical and glitches), whether they are detected or not. It is useful to define a detectable mixing fraction:
\[\overline{\eta}=\frac{\eta\alpha_{A}(\Lambda_{A})}{\eta\alpha_{A}(\Lambda_{A})+ (1-\eta)\alpha_{G}(\Lambda_{G})} \tag{11}\]
which is the fraction of detectable events which are GWs. With this definition and a bit of algebra, the likelihood of Eq. 9 can be recast as
\[\mathcal{L}(\{d_{i}\}|\Lambda_{A},\Lambda_{G},\eta)\propto\] \[\prod_{i=1}^{N_{\rm events}}\frac{\overline{\eta}\int d\theta \mathcal{L}(d_{i}|\theta)p_{A}(\theta|\Lambda_{A})}{\alpha_{A}(\Lambda_{A})}+ \frac{(1-\overline{\eta})\int d\psi\mathcal{L}(d_{i}|\psi)p_{G}(\psi|\Lambda_{ G})}{\alpha_{G}(\Lambda_{G})}, \tag{12}\]
which is the form of the likelihood we will use in the sampling.
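For illustration, evaluating Eq. (12) for one value of the hyperparameters is a simple combination of the per-event integrals and selection functions introduced above; the function below is our own sketch, not the actual sampling code:

```python
import numpy as np

def log_hyper_likelihood(astro_integrals, glitch_integrals,
                         alpha_astro, alpha_glitch, eta_bar):
    """Log of Eq. (12). astro_integrals[i] and glitch_integrals[i] are the
    Monte Carlo integrals of the i-th event posterior against the astrophysical
    and glitch population models (Eq. 6 and its glitch analogue)."""
    astro_term = eta_bar * np.asarray(astro_integrals) / alpha_astro
    glitch_term = (1.0 - eta_bar) * np.asarray(glitch_integrals) / alpha_glitch
    return np.sum(np.log(astro_term + glitch_term))
```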
So far we have assumed glitches and GWs will be characterized with different parameters, \(\theta\) and \(\psi\). However, glitches which can contaminate a GW catalog will necessarily be well modelled by a GW waveform. For this proof-of-principle analysis, we thus model the waveform of a glitch as a GW (we set \(\psi\rightarrow\theta\)). Furthermore, we only model the population in the intrinsic GW parameters; this will be explained further in section 2.4. This simplifies the analysis: we don't need evidences and posterior samples for every event under both the glitch and GW hypotheses: both analyses are the same. Indeed, under these assumptions the analysis reduces to a GW population inference with a mixture population; Eq. 9 becomes Eq. 3 with
\[p(\theta|\Lambda)\rightarrow\eta p_{A}(\theta|\Lambda_{A})+(1-\eta)p_{G}(\theta| \Lambda_{G}), \tag{13}\]
and a selection function
\[\alpha(\Lambda)\rightarrow\ \eta\alpha_{A}(\Lambda_{A})+(1-\eta)\alpha_{G}( \Lambda_{G}). \tag{14}\]
Eq. 13 treats the glitch population as an additional "astrophysical" population, albeit occupying a different region of parameter space from the population of true astrophysical BBHs.
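As a sketch of how Eqs. (13) and (14) enter the standard machinery (the function names, and the assumption that the two densities are supplied as callables, are ours), the effective population and selection function are simple mixtures:

```python
def mixture_population(theta, p_astro_pdf, p_glitch_pdf, eta):
    """Eq. (13): the effective single-population density used inside Eq. (3)."""
    return eta * p_astro_pdf(theta) + (1.0 - eta) * p_glitch_pdf(theta)

def mixture_selection(alpha_astro, alpha_glitch, eta):
    """Eq. (14): the corresponding mixture selection function."""
    return eta * alpha_astro + (1.0 - eta) * alpha_glitch
```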
There is one additional caveat. In the LVK population analysis of Abbott et al. (2023b), events included in the catalog are selected by their FAR (\(<1\)yr\({}^{-1}\)), and so we would like to also select glitches by their FAR to match Abbott et al. (2023b). However, this requires us to calculate FARs for many injections from a fiducial glitch population. Running search pipelines to calculate FARs of injected glitches may be necessary for a future study, however for this proof-of-principle paper it is simply too expensive. Instead, we select glitches for inclusion with a cheaper threshold, the signal-to-noise ratio (SNR). We can then estimate \(\alpha_{G}(\Lambda_{G})\) with a reweighted Monte Carlo estimator using a custom set of injections, and estimate \(\alpha_{A}(\Lambda_{A})\) with the injection set already provided in LVK (2021).
### Characterizing the Glitch Population
In the citizen-science project GravitySpy, glitches are classified according to their time frequency spectrograms (Zevin et al., 2017; Glanzer et al., 2023). For instance, blip glitches are short bursts of excess power, with a time frequency spectrogram morphology shown in Fig. 1. In fact, blip glitches are more likely to contaminate a GW catalog, since they can mimic high mass BBHs (Cabero et al., 2019). For this reason, we restrict this first study to blip glitches, though the formalism can be extended to any glitch class, or even combination of classes. This would require a new population model and \(\Lambda_{G}\) for each additional class, plus a mixing fraction.
In order to understand various populations of glitches, Ashton et al. (2022) analyzed a set of 1000 GravitySpy-identified blip glitches with the IMRPhenomPv2 GW waveform (Hannam et al., 2014; Bohe et al., 2016; Husa et al., 2016; Khan et al., 2016). Since blip glitches are not due to any astrophysical process, they are usually present in a single detector, with multiple detector coincidences occurring randomly. As single detector triggers, only information about the intrinsic parameters (masses and spins) may be extracted. Therefore Ashton et al. (2022) provides posterior samples only over the intrinsic parameters and the redshift.
With the posterior samples in hand, Ashton et al. (2022) fit a population model in the detector frame chirp mass, mass ratio, and primary spin (see their Fig. 2, 3 & 4). Qualitatively, the population of GravitySpy blip glitches shows different features from the population of BBHs (extreme mass ratios, spins, and low redshifts, inconsistent with e.g. Abbott et al. 2023b). We will use this to our advantage to separate the populations.
We slightly modify the population model of Ashton et al. (2022). Instead of modelling the primary spin magnitude, we model in the effective spin parameter:
\[\chi_{\rm eff}=\frac{a_{1}\cos\theta_{1}+qa_{2}\cos\theta_{2}}{1+q} \tag{15}\]
where \(a_{1}\) and \(a_{2}\) are the spin magnitudes of the primary and secondary BHs in Kerr units, \(q=m_{2}/m_{1}\) is the mass ratio (where \(0<q<1\) by convention), and \(\theta_{1}\) and \(\theta_{2}\) are the spin tilts measured from the orbital angular momentum. \(\chi_{\rm eff}\) is the spin parameter which occurs at lowest order in the waveform, and is measured better than individual spins (Racine, 2008; Purrer et al., 2016; Vitale et al., 2017; Ng et al., 2018).
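For concreteness, Eq. (15) amounts to the following one-line function (the name is ours; the convention \(q=m_{2}/m_{1}\leq 1\) follows the text):

```python
def chi_eff(a1, a2, cos_tilt_1, cos_tilt_2, q):
    """Effective aligned spin of Eq. (15), with q = m2/m1 <= 1."""
    return (a1 * cos_tilt_1 + q * a2 * cos_tilt_2) / (1.0 + q)
```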
We model the glitch population in the detector-frame chirp mass, mass ratio, effective spin parameter, and redshift: \(\theta=(\mathcal{M}_{\rm c,det},q,\chi_{\rm eff},z)\). In particular, we use a skewed gaussian (Eq. 16) for both the detector-frame chirp mass \(\mathcal{M}_{\rm c,det}\) and the redshift \(z\) with hyperparameters \(\mu_{m},\sigma_{m},\kappa_{m}\) and \(\mu_{z},\sigma_{z},\kappa_{z}\) respectively
\[p(x|\mu,\sigma,\kappa)=\frac{2}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right) \Phi\left(\kappa\frac{x-\mu}{\sigma}\right), \tag{16}\]
where \(\phi\) and \(\Phi\) are the standard gaussian probability density and cumulative distribution function, respectively. We model \(\chi_{\rm eff}\) and \(q\) with a correlated mixture model of two two-dimensional gaussians in the \(\chi_{\rm eff}\) - \(q\) plane with hyperparameters denoted \(\vec{\lambda}_{qX}\) for brevity
\[p_{qX}(q,\chi_{\rm eff}|\vec{\lambda}_{qX}) =N_{1}\eta_{q,X}\,\phi\left(f_{1}[q,\chi_{\rm eff}]\right)\phi\left(g_{1}[q,\chi_{\rm eff}]\right)\] \[\qquad+N_{2}(1-\eta_{q,X})\,\phi\left(f_{2}[q,\chi_{\rm eff}]\right)\phi\left(g_{2}[q,\chi_{\rm eff}]\right) \tag{17}\] \[\vec{\lambda}_{qX}=(\mu_{q,1},\mu_{q,2},\mu_{X,1},\mu_{X,2},\sigma_{q,1},\sigma_{q,2},\sigma_{X,1},\sigma_{X,2},\theta_{q,X},\eta_{q,X})\]
where
\[f_{i}[q,\chi_{\rm eff}] =\frac{(q-\mu_{q,i})\cos(\theta_{q,X})+(\chi_{\rm eff}-\mu_{X,i})\sin(\theta_{q,X})}{\sigma_{q,i}} \tag{18}\] \[g_{i}[q,\chi_{\rm eff}] =\frac{-(q-\mu_{q,i})\sin(\theta_{q,X})+(\chi_{\rm eff}-\mu_{X,i})\cos(\theta_{q,X})}{\sigma_{X,i}}, \tag{19}\]
and \(N_{1}\) and \(N_{2}\) are normalization coefficients, numerically calculated because \(\chi_{\rm eff}\) and \(q\) are required to be positive. Eq. 17 describes a pair of two dimensional gaussians with branching fraction \(\eta_{q,X}\), parameterized by the variances along the eigenvectors of the covariance matrix (\(\sigma_{q,i}^{2}\) and \(\sigma_{X,i}^{2}\)), and the angle they are "tilted" by (\(\theta_{q,X}\), assumed to be the same for both gaussians). The glitch population model is the product of the \(\mathcal{M}_{\rm c,det}\), \(z\), and \(q-\chi_{\rm eff}\) models, and \(\Lambda_{G}\) is the union of their hyperparameters. We chose to leave precession unmodeled in the population by projecting the 6-dimensional spin population onto the effective aligned spin parameter. However, a future study could examine how the populations further separate when including the spin precession parameter and correlations therein, or the full 6-dimensional spin space.
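As a small sketch of one ingredient of this model, the skewed gaussian of Eq. (16), used for the detector-frame chirp mass and the redshift, can be written as follows (function name and the use of scipy are our choices):

```python
import numpy as np
from scipy.stats import norm

def skew_gaussian(x, mu, sigma, kappa):
    """Skewed gaussian of Eq. (16): 2/sigma * phi(z) * Phi(kappa * z), z = (x - mu)/sigma."""
    z = (np.asarray(x) - mu) / sigma
    return 2.0 / sigma * norm.pdf(z) * norm.cdf(kappa * z)
```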
We are now ready to measure the population of blip glitches with our model, as is done in Ashton et al. (2022). By using all 1000 posteriors from Ashton et al. (2022) we obtain tight constraints on the glitch population alone. This is a critical step of our analysis. We must measure the population of glitches well to optimally separate it from the population of GWs. Fortunately, we have access to the unbiased population of blip glitches before any selection criteria are enforced.1 We may assume the 1000 glitches from Ashton et al.
Figure 1: A time frequency spectrogram of a GravitySpy-identified blip glitch in LIGO-Hanford. This blip occurred on August 28, 2019 at UTC 16:56:49. Due to their short duration, blip glitches can be mistaken for high mass BBHs.
(2022) are a representative sample. The constraints we measure in this step inform the boundaries of the priors we use during simultaneous inference. We show the posterior population distribution (PPD) in Fig. 2.
As for the astrophysical population parameterization, we use the powerlaw plus peak model of Abbott et al. (2023b); Talbot & Thrane (2018) and the redshift model of Fishbach et al. (2018). We modify the spin distribution model by modelling \(\chi_{\rm eff}\) with a gaussian, following Roulet & Zaldarriaga (2019); Miller et al. (2020); Callister et al. (2021). This gives us the set of \(\Lambda_{A}\) and \(\Lambda_{G}\), which will be inferred together with the detectable mixing fraction \(\overline{\eta}\) in our joint analysis.
### Simultaneous Inference and Selection Effects
We model the selection effects of glitches in entirely the same way we model the selection effects of GWs. We emphasize that selection effects depend on the data alone. If we believe glitches have data well modelled by a GW plus gaussian noise, then the probability of detecting a glitch is well approximated by the probability of detecting a GW with the corresponding parameters.
We also define \(p_{\rm astro,\it i}(\Lambda)\) for each event in the catalog, a population dependent quantity,
\[p_{{\rm astro},i}(\Lambda)=p({\rm astro}|d_{i},\Lambda)=\frac{\eta\mathcal{L}(d_{i}|\Lambda_{A})}{\eta\mathcal{L}(d_{i}|\Lambda_{A})+(1-\eta)\mathcal{L}(d_{i}|\Lambda_{G})}. \tag{20}\]
This comes directly from Bayes' Theorem. It is perhaps more intuitive to use \(\overline{\eta}\) instead of \(\eta\), however in that case the likelihood terms must each acquire a factor of \(1/\int d\theta\,p_{{\rm det},X}(\theta)p_{X}(\theta|\Lambda_{X})=1/\alpha_{X}(\Lambda_{X})\) for the corresponding subpopulation.
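A minimal sketch of Eq. (20) for a single event follows; the marginal likelihoods stand for the per-event integrals \(\int d\theta\,\mathcal{L}(d_{i}|\theta)p_{X}(\theta|\Lambda_{X})\), and the function name is ours:

```python
def p_astro_event(marg_like_astro, marg_like_glitch, eta):
    """Eq. (20): probability that a single event is a BBH rather than a blip,
    given its population-marginalized likelihoods under each model."""
    numerator = eta * marg_like_astro
    return numerator / (numerator + (1.0 - eta) * marg_like_glitch)
```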
events. Note the posteriors peak at the dashed line, i.e. it is recovering the correct number of contaminants.
While the results here suggest the sampling is correctly recovering the blips, there is a caveat. The quantity \(\overline{\eta}\) represents a statement on the underlying relative rates; it is not the fraction of BBHs in the catalog. In other words, this is not a like to like comparison. We want to understand what our inference predicts are the number of BBHs and blips in our catalog.
For instance, suppose we may unambiguously identify which events are BBHs and which are blips solely using the event parameters. That is, the populations are disjoint to the point that every event posterior overlaps with only one of the astrophysical or glitch populations. It turns out that \(\overline{\eta}\) does not converge on a delta function: it will have some width due to Poisson rate uncertainty. Rather, it converges on an analytic optimal posterior, which we calculate by assuming the populations are so disjoint that every event posterior uniquely determines which population the event originates from. Details on this calculation are in the appendix.
From this theoretical optimal posterior, we can calculate the median and 1-3\(\sigma\) levels, which we show as a function of the number of added contaminants (\(x\)-axis) in Fig. 3. Note how similar the measured posteriors on \(\overline{\eta}\) are to the optimal posterior given perfect knowledge on which events are BBHs and glitches. The populations of blip glitches and BBHs are nearly disjoint; this suggests the inference can uncover which events are in which population much more precisely than the \(\overline{\eta}\) posteriors naively indicate.
### Inferred Number of Contaminants and BBHs
Calculating \(p_{\rm astro}\)2 (i.e. Eq. 20) for each event in each run, we notice that the posteriors on each event tend to be sharply peaked, e.g. GW150914 peaks at \(p_{\rm astro}(\Lambda)\to 1\), the blips peak at \(p_{\rm astro}(\Lambda)\to 0\). We show posteriors on \(1-p_{\rm astro}=p_{\rm blip}\) for two example events in Fig. 4, GW151226 and GW200302. GW200302 is the event with the highest probability of being a blip, see the appendix for details. Note as the number of injected blips increases, the \(p_{\rm blip}\) increases for GW200302. This is because the \(\overline{\eta}\) posterior converges on lower mixing fractions, lowering the odds that any given event is astrophysical. This is much more apparent in GW200302, where \(p_{\rm blip}\) is mostly dominated by these odds. GW151226 is a representative event for what most BBH \(p_{\rm astro}\) posteriors look like. In fact, many posteriors are even more extreme than GW151226; \(\log_{10}(p_{\rm blip})\rightarrow-\infty\) for many events, see Tab. 1 in the appendix for the full event list.
Footnote 2: We emphasize that statements made in this paper about \(p_{\rm astro}\) should be understood as the probability of the event not being a blip, rather than the probability of the event being astrophysical in origin. This is rather cumbersome to write, so we continue with the abuse of notation in \(p_{\rm astro}\).
Most \(p_{\rm astro}\) posteriors are sharply peaked, nearly delta functions. Translating this into a calculation of the number of BBHs and blips in the catalog, this suggests that the inferred number of BBHs and blips in the catalog is also sharply peaked. Indeed, using the \(p_{{\rm astro},i}\) defined in Eq. 20 we may calculate the probability that exactly \(k\) of \(N_{\rm events}\) are astrophysical. Since each data realization is independent, the \(p_{{\rm astro},i}\) of each event will be statistically independent. The probability that exactly \(k\) of \(N_{\rm events}\) total events in the catalog are BBHs is then
\[p_{k}(\Lambda)=\sum_{\gamma\in\Gamma(k,N_{\rm events})}\left[\prod_{j=1}^{k}p_{{\rm astro},\gamma(j)}(\Lambda)\prod_{j=k+1}^{N_{\rm events}}\left(1-p_{{\rm astro},\gamma(j)}(\Lambda)\right)\right] \tag{21}\]
where \(\Gamma(k,N_{\rm events})\) is the set of \(k\)-combinations of \(N_{\rm events}\) (it contains \(N_{\rm events}\) choose \(k\) elements), a subset of the set of permutations of \(N_{\rm events}\). Thinking of permutations as one-to-one and onto functions from the set \(\{1,...,N_{\rm events}\}\) to itself, \(k\)-combinations are permutations where two permutations \(\gamma_{1}\) and \(\gamma_{2}\) are equivalent if there is the set equality \(\gamma_{1}(\{1,..,k\})=\gamma_{2}(\{1,..,k\})\). Informally, the probability that exactly \(k\) of \(N_{\rm events}\) are BBHs is the probability a specific set of \(k\) events are BBHs and the others are glitches, summed over all the possible sets of \(k\) events. Note that if all \(p_{{\rm astro},i}\) are the same, Eq. 21 reduces to the binomial distribution as expected. However, Eq. 21 is much too computationally expensive to evaluate directly. We use a trick with symmetric polynomials to vastly simplify the calculation, see the appendix for details. We also note that Galaudage et al. (2020) consider the sum of the \(p_{{\rm astro},i}\). This is the expectation value over \(k\) of Eq. 21, which is also discussed in further detail in the appendix.
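The paper's appendix describes the symmetric-polynomial trick in detail; the sketch below shows one standard way to exploit that structure, the generating-function product of the Poisson-binomial distribution, which may differ in detail from the authors' implementation:

```python
import numpy as np

def prob_exactly_k_astro(p_astro_values):
    """Return an array whose k-th entry is p_k(Lambda) of Eq. (21), computed as the
    coefficients of prod_i [(1 - p_i) + p_i * x], avoiding the explicit sum over
    k-combinations."""
    coeffs = np.array([1.0])
    for p in p_astro_values:
        coeffs = np.convolve(coeffs, np.array([1.0 - p, p]))
    return coeffs

# Example: three events with p_astro of 0.99, 0.95, and 0.01
# prob_exactly_k_astro([0.99, 0.95, 0.01])[k] gives P(exactly k BBHs), k = 0..3.
```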
After contaminating the catalog of 69 BBHs passing the LVK selection criteria (Abbott et al., 2023b) with 0 - 20 independently drawn random blips, and running 21 inferences on the hyperparameters \(\Lambda\) on the 21 variably-contaminated catalogs, we calculate Eq. 21 for each \(\Lambda\) sample. We show an example in Fig. 5, the run with 20 contaminant blips. In this run and in most runs, the probability for exactly 69 BBHs in the catalog rails against 1, while for some other runs it can be more uncertain. Variability between runs is due to the differences in how "BBH-like" the blip contaminants are, and how well they fit into the blip population model. We also show posteriors on the probabilities of having exactly 69 BBHs in the catalog. Specifically, since many of the probabilities rail against 1, we show the logarithm of the negation: the \(\log_{10}\) probability of not having 69 BBHs in the catalog, shown in the top panel of Fig. 6. As the number of contaminants increases, the resolving power drops, meaning the probability becomes more spread out between \(\sim 68-70\). Furthermore, the odds any given event is a BBH drops, as the mixing fraction between BBHs and blips becomes more blip-favored. That said, up to 20 injected blips we observe significant probabilities of exactly 69 BBHs in the catalog, and near unity probabilities of 68 or 69 or 70 BBHs in the catalog (Fig. 6). While there is some variation in the probabilities, this method consistently recovers the correct number of injected contaminants, so long as the populations are sufficiently dissimilar. It is not clear that correctly recovering the number of contaminants prevents slight biases from arising in the population inference, especially given there is some small variability in the inferred number of contaminants in the catalog.
Figure 4: Calculated \(1-p_{\rm astro}=p_{\rm blip}\) for two events, GW200302 and GW151226, in each inference. GW200302 consistently had the lowest \(p_{\rm astro}\) of all the BBH events, while we selected GW151226 to be a representative event for the standard BBH in the catalog. Note a subtle trend for decreasing \(p_{\rm astro}\) as the number of injected blips increased.
### Biases in the BBH Population
While the correct number of blips is recovered in each run, we want to be sure that no biases are introduced in the inferred astrophysical distributions. For example, we show inferred distributions of the primary masses for a control run and with 10 and 20 injected blips in Fig. 7. Qualitatively speaking, they appear to be essentially identical. The control run is a population inference on the catalog of 69 BBHs in Abbott et al. (2023b), using the same astrophysical population model parameterization described in section 2.4. We quantify any differences by calculating the Jensen-Shannon (JS) divergence between the inferred distributions of a control population inference and the inferred astrophysical sub-populations from contaminated catalogs. The JS divergences show no trends, with a median consistently at \(\sim 0.09-0.1\) bits. We show the JS divergences in the middle column of Table 3 in the appendix, and in the first row we show the JS divergences between two draws from the control hyperparameters.
### Biases from Unmodeled Blip Contaminants
Some glitches appear significantly more astrophysical than others. For the run with 20 blips injected and the 69 BBH mergers, we plot the posteriors on the effective "BBH" parameters of the glitches, and population-averaged \(p_{\rm astro}\) values overlaid on the blip PPD, see Fig. 8. There are some general patterns, most notably that extreme \(\chi_{\rm eff}\) seems to be the strongest predictor of low \(p_{\rm astro}\), and if the primary mass \(m_{1}\) falls above the maximum mass cutoff \(m_{\rm max}\) in the astrophysical model, the \(p_{\rm astro}\) is zero. We show a table of the median and 90% credible region parameters of each blip, along with the SNR and \(p_{\rm astro}\) in Table 2 in the appendix.
We want to understand the kind of biases which are induced by including blips into the population, without controlling for those contaminants with a glitch model. Of the run with 20 injected blips and 69 GWs, we select the blips which could most plausibly be
Figure 5: The probability of having \(k\) events which are astrophysical in the catalog. The horizontal axis is the number of events in the catalog, and the vertical axis represents the \(p_{k}(\Lambda)\) probability of there being exactly \(k\) astrophysical events in the catalog. Since the probability for exactly 69 BBHs rails against 1, we show an inset zoom on the \(p_{69}(\Lambda)\) violin. The uncertainty in the value of the probability \(p_{k}(\Lambda)\) comes from the uncertainty in the population parameters \(\Lambda\). This particular run was with 69 BBHs injected and 20 contaminant blips injected.
Figure 8: We show the posteriors on the effective BBH parameters of the 20 injected blips, and their corresponding mean \(p_{\rm astro}\), labelled in the figure by the color of the posterior points. See the colorbar on the right. Note some general patterns: very low \(\chi_{\rm eff}\) values and very high masses correspond to low \(p_{\rm astro}\) values. Note also that all the blip \(p_{\rm astro}\) values are still very low, less than \(10^{-4}\).
Figure 6: In the top panel, we show violins for the inferred posterior probabilities of the catalog not having 69 BBHs in it; \(1-p_{69}(\Lambda)\). The vertical axis shows the logarithm of the probability, and the horizontal axis is the number of injected blips in the catalog. In the bottom panel, we show the posterior probabilities of the catalog having some number of BBHs which is not 68, 69, or 70; \(1-p_{68}(\Lambda)-p_{69}(\Lambda)-p_{70}(\Lambda)\). Note the increase in the probabilities as the number of injected blips increases; this is due to higher odds that any given event is a blip (lower \(\overline{\eta}\)). The dip at exactly 20 injected blips is because those 20 contaminants happen to be easily resolvable from the GW population, and so \(p_{69}(\Lambda)\) peaks strongly at 1.
Figure 7: The inferred astrophysical mass distribution. In green we show the control run, with no contaminants injected and no glitch model included. We also show the runs with the glitch model included and injected contaminants; we show runs with 10 and 20 blips included. The solid line is the posterior population distribution (PPD) and the dashed lines show the upper and lower limits on the 90% credible region. The inferred distributions appear consistent.
astrophysical, i.e. they have the highest \(p_{\rm astro}\). We selected the blip with the highest \(p_{\rm astro}\) (the top row in Table 2), and the 10 blips with the highest \(p_{\rm astro}\) (the top 10 rows in Table 2), and contaminated the catalog of 69 BBH mergers passing the LVK selection criteria (Abbott et al., 2023) with these 1 and 10 blips. We then sample from the population hyperposterior without any glitch model.
In order to prevent population hyperparameters from railing against prior ranges, we extended the prior range of \(m_{\rm max}\) significantly (the maximum cutoff mass parameter in the model of Talbot & Thrane, 2018) to allow values up to \(500M_{\odot}\).
All the inferred distributions are biased. For instance, we show the inferred primary mass distribution for the control run, and for 1 and 10 contaminants, see Fig. 9. We compute the Jensen-Shannon divergences for these inferred distributions, compared to the control distribution. We show them in the right hand column of Table 3 in the appendix.
## 4 Conclusion and Future Work
In this article we presented a method for inference of a population of GW sources which is contaminated by non-astrophysical events. We contaminated the catalog of 69 BBHs of Abbott et al. (2023) with an increasing number of single-interferometer blip glitches from Ashton et al. (2022). We showed how to generalize a population inference to not only infer the shape parameters of a GW population, but to simultaneously infer the population of the glitch background events. We tested this method, and showed that it in practice identifies and removes systematic biases from population inference. As GW astronomy matures, interesting results may reveal themselves only on the level of populations, and satisfactory statistical significance may require delving into sub-threshold events.
As a proof of principle analysis, we chose only to consider the blip glitch class from GravitySpy, since Ashton et al. (2022) had already produced parameter estimation samples for these. We caution that the method we presented here will only be robust to blip glitch contamination; we leave it to a future study to do a full simultaneous analysis with a model for an extended population of glitches.
There is another caveat, in the appropriate estimation of the selection effects. In an end-to-end analysis, the detection criterion is the same for glitches and GWs, and so must be estimated consistently. The current most common method requires injecting a massive set of simulated GWs, drawn from a population similar to the astrophysical one, into detector noise, and re-weighting for different population hyperparameters (Abbott et al., 2023; LIGO Scientific Collaboration et al., 2021). The set of glitches comes from regions of parameter space poorly sampled by the injection set, and so to properly estimate the selection effects, one needs an auxiliary suite of injections over the appropriate regions of parameter space. This is a significant computational expense, although it is regularly done by the LVK collaboration to estimate the selection effects of astrophysical GWs.
Though it is a challenge, there are many applications for a method to simultaneously infer the population of astrophysical GWs and non-astrophysical glitches. The most immediate application would be to lower the threshold for including a trigger into the catalog, e.g. select on FAR \(<2\)yr\({}^{-1}\), or FAR \(<5\)yr\({}^{-1}\). There are real GW events lurking below the FAR \(<1\)yr\({}^{-1}\) threshold, and these can aid in constraining the population. This would require an accurate model for the glitches that actually pass the threshold, rather than using our fiducial blip glitch model, and while conceptually similar to this work, the full treatment would also require running end-to-end search pipelines on injections from the glitch population. We leave this to a future study. There are other useful applications as well. Some GWs occur while only a single detector is online (Callister et al., 2017; Nitz et al., 2020; Cabourn Davies & Harry, 2022). These single detector events often cannot enter a catalog for population inference, and so they cannot be used for constraining the population. Our approach of modelling the intrinsic population of glitches is a step towards the use of single detector triggers in population analyses.
This method can also help characterize triggers found in searches for exotic objects. As an example, BBHs beyond the upper mass gap remain elusive (Ezquiaga & Holz, 2021). The search sensitivity for these objects is reduced by the presence of short duration glitches much like blip glitches (Cabero et al., 2019), and so a joint analysis of a population of these background glitches and the astrophysical "beyond-the-gap" BBHs would measure tighter constraints on their rates. As another example, an analogous procedure is conceivable for continuous wave sources. One may be able to characterize the population of continuous waves (CWs) and the "glitches" associated, which are due to monochromatic coherent power between detectors (Cieslar et al., 2021; Abbott et al., 2020, 2022, 2022). This may benefit a search for CWs or population level characterization of CW sources.
For analyses like the one presented, it is critical to have both an accurate waveform model for glitches and an accurate glitch population model. In this paper, we model glitches with a GW waveform, however, it may be useful to use alternative glitch waveforms. One option is to use non-coherent GW waveforms to model the glitches, where the signal in each interferometer is fit with independent GW waveforms (Veitch & Vecchio, 2010). One can also use non-GW waveform models, such as Glitschen (Merritt et al., 2021) or BayesWave (Cornish & Littenberg, 2015). In cases where the glitch waveform model is different from the GW waveform, Eq. 9 must be used in its more general form. Second, we must have an appropriate model for the glitch population, and using as accurate as possible a model will be crucial. For example, if one continues to use a coherent GW waveform, one could fold in the analysis information about extrinsic parameters, e.g. the fact that the population of glitches is not expected to be isotropic (Payne et al., 2020; Vitale et al., 2022; Essick et al., 2023). We plan to explore both these avenues in a future work.
## 5 Acknowledgements
The authors wish to thank Sylvia Biscoveanu, Tom Callister, Tom Dent, Reed Essick, Will Farr, and Jacob Golomb for valuable suggestions and insights, and the rates and populations group of the
Figure 9: The inferred mass distribution for a control run compared to the inferred mass distribution when 1 and 10 astrophysically plausible blips are included in the catalog, without controlling for their bias with a glitch model. Note the increased support at high mass, and the broadening of the gaussian peak. The low mass end of the distribution is much less affected.
LIGO and Virgo Collaborations for helpful feedback on this work. The authors thank Michael Zevin and Christopher Berry for comments, edits and feedback. JH is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 2141064. SV is partially supported by NSF through the award PHY-2045740. CT is supported by an MKI Kavli Fellowship. GA thanks the UKRI Future Leaders Fellowship for support through the grant MR/T01881X/1. JH and CT gratefully acknowledge the hospitality of Royal Hollow, University of London, where a part of this work was completed. This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center ([https://www.gwopenscience.org](https://www.gwopenscience.org)), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. We are also grateful to computing resources provided by the LIGO Laboratory computing clusters at California Institute of Technology and LIGO Hanford Observatory supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. The majority of analysis performed for this research was done using resources provided by the Open Science Grid, which is supported by the National Science Foundation award #2030508. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation.
## 6 Data Availability
The data underlying this work can be found on a zenodo data release at [https://doi.org/10.5281/zenodo.7860652](https://doi.org/10.5281/zenodo.7860652). The public gravitational wave data can be found in the Gravitational Wave Open Science Center (Abbott et al., 2023a).
|
2305.03652 | A fully hybrid integrated Erbium-based laser | Erbium-doped fiber lasers exhibit high coherence and low noise as required
for applications in fiber optic sensing, gyroscopes, LiDAR, and optical
frequency metrology. Endowing Erbium-based gain in photonic integrated circuits
can provide a basis for miniaturizing low-noise fiber lasers to chip-scale form
factor, and enable large-volume applications. Yet, while major progress has
been made in the last decade on integrated lasers based on silicon photonics
with III-V gain media, the integration of Erbium lasers on chip has been
compounded by large laser linewidth. Recent advances in photonic integrated
circuit-based high-power Erbium-doped amplifiers, make a new class of
rare-earth-ion-based lasers possible. Here, we demonstrate a fully integrated
chip-scale Erbium laser that achieves high power, narrow linewidth, frequency
agility, and the integration of a III-V pump laser. The laser circuit is based
on an Erbium-implanted ultralow-loss silicon nitride Si$_3$N$_4$ photonic
integrated circuit. This device achieves single-mode lasing with a free-running
intrinsic linewidth of 50 Hz, a relative intensity noise of $<$-150 dBc/Hz at
$>$10 MHz offset, and output power up to 17 mW, approaching the performance of
fiber lasers and state-of-the-art semiconductor extended cavity lasers. An
intra-cavity microring-based Vernier filter enables wavelength tunability of
$>$ 40 nm within the C- and L-bands while attaining side mode suppression ratio
(SMSR) of $>$ 70 dB, surpassing legacy fiber lasers in tuning and SMSR
performance. This new class of low-noise, tuneable Erbium waveguide laser could
find applications in LiDAR, microwave photonics, optical frequency synthesis,
and free-space communications. Our approach is extendable to other wavelengths,
and more broadly, constitutes a novel way to photonic integrated circuit-based
rare-earth-ion-doped lasers. | Yang Liu, Zheru Qiu, Xinru Ji, Andrea Bancora, Grigory Lihachev, Johann Riemensberger, Rui Ning Wang, Andrey Voloshin, Tobias J. Kippenberg | 2023-05-05T16:13:31Z | http://arxiv.org/abs/2305.03652v1 | # A fully hybrid integrated Erbium-based laser
###### Abstract
Erbium-doped fiber lasers [1; 2; 3] exhibit high coherence and low noise as required for applications in fiber optic sensing [4], gyroscopes, LiDAR, and optical frequency metrology [5]. Endowing Erbium-based gain in photonic integrated circuits can provide a basis for miniaturizing low-noise fiber lasers to chip-scale form factor, and enable large-volume applications. Yet, while major progress has been made in the last decade on integrated lasers based on silicon photonics with III-V gain media [6; 7; 8; 9; 10; 11; 12; 13; 14], the integration of Erbium lasers on chip has been compounded by large laser linewidth. Recent advances in photonic integrated circuit-based high-power Erbium-doped amplifiers make a new class of rare-earth-ion-based lasers possible [15]. Here, we demonstrate a fully integrated chip-scale Erbium laser that achieves high power, narrow linewidth, frequency agility and the integration of a III-V pump laser. The laser circuit is based on an Erbium-implanted ultralow-loss silicon nitride (Si\({}_{3}\)N\({}_{4}\)) photonic integrated circuit [15]. This device achieves single-mode lasing with a free-running intrinsic linewidth of 50 Hz, a relative intensity noise of \(<\)-150 dBc/Hz at \(>\)10 MHz offset, and an output power up to 17 mW, approaching the performance of fiber lasers [16] and state-of-the-art semiconductor extended cavity lasers [8; 12; 13; 17]. An intra-cavity microring-based Vernier filter enables wavelength tunability of \(>\) 40 nm within the C- and L-bands while attaining side mode suppression ratio (SMSR) of \(>\) 70 dB, surpassing legacy fiber lasers in tuning and SMSR performance. This new class of low-noise, tuneable Erbium waveguide laser could find applications in LiDAR [18], microwave photonics [19; 20], optical frequency synthesis [21], and free-space communications. Our approach is extendable to other wavelengths where rare-earth ions can provide gain, and more broadly, constitutes a novel way to photonic integrated circuit-based rare-earth-ion-doped lasers.
Erbium-doped fiber lasers (EDFLs) [1; 2; 3] have become indispensable sources of high coherence laser light for distributed acoustic sensing [4; 22], optical gyroscopes, free-space optical transmission [23], optical frequency metrology[5], and high-power laser machining [24; 25] and are considered the 'gold standard' of laser phase noise. EDFLs exhibit many advantages such as all-fiberized cavities, alignment-free components, and benefit from the advantageous Erbium-based gain properties including slow gain dynamics, temperature insensitivity, low amplification-related noise figure, lower spontaneous emission power coupled to oscillating modes than short semiconductor gain media [26; 27], and excellent confinement of laser radiation for high beam quality. These properties along with low phase noise have led to the wide proliferation of Erbium-based fiber lasers in industrial applications. Erbium ions can equally provide a basis for compact photonic integrated circuit-based lasers [28] that can benefit from manufacturing at lower cost, smaller form factor and reduced susceptibility to environmental vibrations compared to fiber lasers. Prior efforts have been made to implement chip-based waveguide lasers using Erbium-doped materials such as Al\({}_{2}\)O\({}_{3}\)[29; 30], TeO\({}_{2}\)[31], LiNbO\({}_{3}\)[32], and Erbium silicate compounds [33] as waveguide claddings or cores, but the demonstrated laser intrinsic linewidth remained at the level of MHz [34; 35; 36; 37], far above the sub-100-Hz linewidth achieved in commercial fiber lasers and state-of-the-art heterogeneously or hybrid integrated semiconductor-based lasers (Supplementary Note 1). One major obstacle to realizing narrow-linewidth Erbium waveguide lasers is the challenge of integrating long and low-loss active waveguides ranging from tens of centimeters to meters--the lengths routinely deployed in fiber lasers to ensure low phase noise, single-frequency operation, and sufficient round-trip gain [16].
Here, we overcome this challenge and demonstrate hybrid integrated Erbium-doped waveguide lasers (EDWLs) using Si\({}_{3}\)N\({}_{4}\) photonic integrated circuits that achieve narrow linewidth, frequency agility, high power, and integration with pump lasers. Crucial to this advance are meter-scale-long Erbium-implanted silicon nitride (Er:Si\({}_{3}\)N\({}_{4}\)) photonic integrated circuits that can provide \(>\) 30 dB net gain [15] with \(>\)100 mW output power. The Si\({}_{3}\)N\({}_{4}\) photonic integrated circuit moreover exhibits an absence of two-photon absorption in telecommunication bands[38], radiation hardness for space compatibility, high power handling of up to tens of watts[39], a lower temperature sensitivity than silicon, and low Brillouin scattering (a power-limiting factor in silica-based fiber lasers)[40].
## Results
### Hybrid integrated Erbium-based Vernier lasers
The laser device is structured as a linear optical cavity with a spiral Erbium-doped gain waveguide and two reflectors formed by Sagnac loop mirrors at both ends (Fig.1A). One dichroic loop mirror that consists of a dichroic directional coupler allows for laser reflection near 1550 nm and optical pump transmission near 1480 nm, and the other reflector deploys a short waveguide splitter for broadband reflection. The optical pump can also be injected via a waveguide taper connected to a microring bus waveguide. The laser device (Fig.1B) exhibits a compact footprint of only 2 \(\times\) 3 mm\({}^{2}\) with a densely-packed 0.2-m-long Erbium-doped Si\({}_{3}\)N\({}_{4}\) spiral waveguide (Fig.1C) with a cross section of \(0.7\times 2.1\,\mathrm{\SIUnitSymbolMicro m}^{2}\). A narrow-band intra-cavity Vernier filter designed to achieve sub-GHz 3 dB bandwidth and 5 THz FSR using two cascaded add-drop microring resonators (100 GHz FSRs with 2 GHz difference) (Fig.1D) is deployed to ensure single-mode lasing operation with a small laser cavity mode spacing of ca. 200 MHz (Supplementary Note 2). Integrated microheaters are used to align the Vernier filter peak transmission wavelength to a cavity longitudinal mode. This integrated laser circuit was fabricated using the photonic Damascene process [41], followed by selective Erbium ion implantation, post annealing, and heater fabrication (Fig.1E; see Methods and Supplementary Note 3).
To demonstrate a fully integrated EDWL, we performed photonic packaging via hybrid integration in a custom 14-pin butterfly package. The 1480 nm InP Fabry-Perot (FP) laser diode (LD) was edge coupled to one of the laser cavities on an Er:Si\({}_{3}\)N\({}_{4}\) photonic integrated circuit (Fig.2A), with simulated coupling loss of \(<\) 3 dB. The laser output waveguide was end-coupled and glued with a cleaved UHNA-7 optical fiber spliced
Figure 1: **A hybrid integrated Er:Si\({}_{3}\)N\({}_{4}\) laser.** (**A**) Schematic of a hybrid integrated Vernier laser consisting of an Erbium-implanted silicon nitride Er:Si\({}_{3}\)N\({}_{4}\) photonic integrated circuit and an edge-coupled III-V semiconductor pump laser diode. The intra-cavity microring-based Vernier filter enables single-mode lasing operation within the Erbium-based gain bandwidth. (**B**) Optical image of an Er:Si\({}_{3}\)N\({}_{4}\) laser circuit integrated with micro heaters for wavelength and phase tuning. The green dashed circle indicates the Erbium-implanted gain spiral. (**C**) Optical images of the Erbium-implanted spiral waveguides and (**D**) the coupling regime of the Vernier filter indicated by coloured boxes in (**B**). (**E**) Fabrication process flow of the Er:Si\({}_{3}\)N\({}_{4}\) photonic integrated circuit based on selective Erbium ion implantation.
to a SMF-28 optical fiber pigtail, exhibiting 2.7 dB coupling loss at 1550 nm. The pump LD, a Peltier element, a thermistor, and all microheaters are connected to butterfly pins using wire bonding. The integrated micro-heaters were used for the temperature control of the Vernier filter and the phase-shifter section to configure single-mode lasing and wavelength tuning. The Erbium ions can be optically excited by the pump light emitted from the multi-longitudinal-mode pump LD (\(>\)4 nm spectral linewidth near 1480 nm), providing 1.9 dB/cm of measured net gain coefficient [15]. The optical spectrum of the collected laser output shows a single-mode lasing operation with \(>\) 70 dB of side mode suppression ratio (SMSR) at 0.1 nm resolution bandwidth (Fig.2B). This high 72-dB SMSR was made possible using the drop port of the narrow passband intra-cavity Vernier filter, which can select the lasing mode and reject the broadband amplified spontaneous emission noise. This record high SMSR
Figure 2: **A hybrid integrated Er:Si\({}_{3}\)N\({}_{4}\) Vernier laser operated at single-mode lasing.** (**A**) Optical image of a hybrid integrated Er:Si\({}_{3}\)N\({}_{4}\) Vernier laser edge-coupled with a pump laser diode chip (3SP Technologies, 1943 LCV1). Green luminescence was observed, stemming from the transition from higher-lying levels of excited Erbium ions to the ground state. (**B**) Measured optical spectrum of single-mode lasing. The inset shows the output power as a function of the pump power. (**C**) Measured time-frequency spectrogram of the heterodyne beatnote between the packaged EDWL and a fully-stabilized frequency comb (F1500, Menlo Systems GmbH) over 4 hours. (**D**) Experimental setup for Vernier filter characterization (device ID: D85_04_F04_C15_V1). (**E**) Illustration of the Vernier effect by measuring the superposed resonances through the intermediate bus waveguide of the Vernier filter. (**F**) Zoomed-in range of the measured transmission. Colored circles indicate the resonances of each microring. (**G**) Wide-range transmission overlaid with the Erbium ion gain spectrum. (**H**) Frequency spacing variation between adjacent ring resonances, yielding a Vernier spacing of 4.65 THz, corresponding to 37.1 nm. (**I**) The curve fitting of the measured through port transmission of the resonance indicated in (**F**), and the calculated filtering response at the drop port.
surpasses what has been reported in integrated Erbium lasers, fiber lasers, and integrated semiconductor-based lasers (Supplementary Note 1), where the SMSR is typically below 60 dB, usually limited by the intra-cavity filtering performance. Such filtering is challenging to implement in legacy fiber-based Erbium lasers, where filtering components based on long Bragg gratings can only offer several-GHz-wide passbands with grating side lobes and lack broadband wavelength tuning capability. We observed an off-chip lasing threshold pump power of ca. 20 mW and an on-chip slope efficiency of 6.7 % when sweeping the pump power (Fig.2B inset), which can be further optimized by reducing the coupling loss and the cavity loss. The fully packaged laser showed a frequency drift of \(<\) 20 MHz over 4 hours (Fig.2C) in a heterodyne beatnote measurement with a fully-stabilized optical frequency comb, indicating good frequency stability due to the monolithic nature of the laser, which comprises both the cavity and the gain medium. During a 24-hour test, this laser showed a frequency drift of \(<\) 140 MHz without mode hops (Supplementary Note 4), representing a long-term frequency stability comparable to that of a commercial diode laser (Toptica CTL).
### Single-mode lasing and wavelength tuning
The use of photonic integrated circuits and Vernier structures (Fig.2) makes it possible to endow the integrated Erbium laser with broad wavelength tuning, a capability that bulk fiber lasers lack. We investigated the intra-cavity filtering properties by characterizing the optical transmission of the middle bus waveguide (Fig.2D). The measured transmission of the individual resonators used for the Vernier filter is shown in Fig.2F and the designed 2 GHz FSR difference was experimentally attained (98 GHz and 100 GHz, respectively), leading to a measured Vernier filter FSR of 4.65 THz that corresponds to a 37.1 nm span near 1550 nm wavelength (Fig.2G,H). Such a large Vernier FSR ensures single-wavelength lasing within the Erbium emission wavelength range (Fig.2G). The lasing wavelength is determined by overlapping the resonances of the two resonators, i.e. bringing their frequency spacing to zero (Fig.2H). By fitting the resonance linewidth near 194.8 THz (Fig.2I), we obtain an external coupling rate \(\kappa_{\mathrm{ex,0}}/2\pi=\) 411 MHz (between the microring and the bus waveguide) and an intrinsic loss rate \(\kappa_{0}/2\pi=\) 42.5 MHz. This strongly over-coupled configuration (\(\frac{\kappa_{\mathrm{ex}}}{\kappa_{0}}>10\)) ensures that the Vernier filter _simultaneously_ achieves a narrow 3-dB passband bandwidth of 636 MHz and, in principle, a low insertion loss; in the current device, however, this low-loss operation was not attained. The Vernier filter exhibits an insertion loss of 3.2 dB due to the parasitic loss induced by coupling from the fundamental waveguide mode to higher-order modes, which leads to a suboptimal coupling ideality [42] of \(I=0.87\) (Supplementary Note 5).
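As a quick consistency check of these numbers, the relation between the individual ring FSRs and the composite Vernier spacing can be evaluated directly; the short Python sketch below is illustrative only (it is not part of the authors' analysis code) and uses the values quoted in the text.

```python
# Illustrative check of the Vernier spacing quoted above (values taken from the text).
c = 299_792_458.0            # speed of light [m/s]
lam = 1550e-9                # operating wavelength [m]

fsr1, fsr2 = 98e9, 100e9     # measured microring FSRs [Hz]
fsr_vernier = fsr1 * fsr2 / abs(fsr2 - fsr1)        # standard Vernier formula, ~4.9 THz by design
print(f"Vernier FSR from ring FSRs: {fsr_vernier/1e12:.2f} THz")

# Convert the measured 4.65 THz Vernier spacing to a wavelength span near 1550 nm
span_nm = lam**2 / c * 4.65e12 * 1e9
print(f"4.65 THz corresponds to {span_nm:.1f} nm")   # ~37 nm, consistent with the quoted 37.1 nm
```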
Next, we demonstrate the wavelength tunability (Fig. 3A). Coarse tuning of the laser wavelength was carried out by switching the aligned resonance of the two microresonators (Fig. 3B). The step size of ca. 0.8 nm was determined by the microring FSR. Fine tuning of the wavelength can be achieved by simultaneously shifting the two resonators in the same direction and adjusting the phase shifter to align the corresponding cavity longitudinal mode (164 MHz spacing) to the Vernier filter passband. Figure 3C shows the 2-dimensional (2D) laser wavelength tuning map when varying the electrical power applied to the microheaters. From the entire recorded 2D map of wavelengths we selected the settings marked in Fig.3C. This allowed for continuous and deterministic tuning over the entire wavelength band from 1548.1 nm to 1585.8 nm, maintaining a power of \(>\) 4 mW and an SMSR of \(>\) 70 dB (Fig.3D). Such wavelength tunability cannot be achieved in conventional rare-earth-ion-doped fiber lasers without the use of free-space etalon filters. The wavelength tuning range was limited by the Vernier filter FSR and the wavelength-division multiplexing coupler transmission band (Supplementary Note 6). During heater power scanning, we note that a few wavelength tuning steps were missed due to the misalignment of the microring resonances of the Vernier filter. During tuning, the phase shifter was adjusted to maximize the output power at the desired mode. A maximum fiber-coupled output power of ca. 17 mW was measured at 1585 nm with 219 mW pump power. Competing lasing modes apart from the predominant lasing mode were observed when using high pump power, due to the fact that the large Si\({}_{3}\)N\({}_{4}\) waveguide cross section allows for multiple transverse optical modes that can coincidentally satisfy the lasing condition (Supplementary Note 8).
### Frequency and intensity noise measurements
To demonstrate the low noise features of the free-running EDWLs, we characterized the frequency noise, the intrinsic laser linewidth, and the relative intensity noise (RIN), respectively (Fig. 4A). Firstly, a reference external cavity diode laser (free running Toptica CTL) was tuned close to the lasing wavelength near 1560 nm of an EDWL (not packaged) with ca. 3 mW output power for heterodyne photodetection. The in-phase and quadrature components of the sampled beatnote time trace were processed using Welch's method [43] to obtain the single-sided power spectral density (PSD) of frequency noise \(S_{\delta v}(f)\). The frequency noise PSD (red line) reached a plateau of \(h_{0}=62.0\) Hz\({}^{2}\)/Hz at the offset frequency of 6 MHz, corresponding to a Lorentzian linewidth of \(\pi h_{0}=\) 194.8 Hz; this measured white noise floor was masked by the ECDL's white noise floor (Fig. 4C). We also applied the delayed self-heterodyne interferometric measurement [44] to validate the intrinsic linewidth (Fig. 4D), which
generates a power spectrum of the autocorrelation of the laser line under sub-coherence condition (Supplementary Note 9). In the offset frequency range from 10 kHz to 2.5 MHz where a relaxation oscillation peak was observed, the Erbium laser shows a higher frequency noise due to the laser cavity fluctuation caused by the pump laser noise transduction and the thermorefractive noise in the microresonator [45]. The measured frequency noise at offset frequencies of \(<\)10 kHz was dominated by ECDL characteristic noise features [46]. We achieved a record low intrinsic linewidth (blue line) of \(\pi h_{0}=50.1\) Hz (\(h_{0}=15.9\) Hz\({}^{2}\)/Hz ) in an Erbium waveguide laser with a higher output power of 10 mW, when beating against a low-noise Erbium fiber laser (Koheras Adjustik). The fully packaged EDWL (Fig. 4B) with 2.8 mW output power shows a comparable intrinsic linewidth (purple line) and a lower frequency noise at the mid-range offset frequencies. Using laser cavity designs with reduced cold cavity losses and increased mode area, hertz-linewidth EDWL can be feasibly achieved (Supplementary Note 10).
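For readers who wish to reproduce this type of analysis, the processing chain from the heterodyne beatnote to \(S_{\delta v}(f)\) can be sketched as follows; the sampling rate, beat frequency and noise level below are illustrative assumptions (a synthetic beatnote stands in for the measured I/Q data), not the values used in the experiment.

```python
import numpy as np
from scipy.signal import welch

fs = 50e6                            # assumed sampling rate [Hz]
f_beat, h0_true = 5e6, 16.0          # assumed beat frequency [Hz] and white FM-noise level [Hz^2/Hz]
t = np.arange(2_000_000) / fs

# Synthetic beatnote with white frequency noise, standing in for the measured I/Q trace
dnu = np.sqrt(h0_true * fs / 2) * np.random.randn(t.size)
iq = np.exp(1j * 2 * np.pi * (f_beat * t + np.cumsum(dnu) / fs))

# Instantaneous frequency from the unwrapped phase, then Welch's method for the one-sided PSD
inst_freq = np.diff(np.unwrap(np.angle(iq))) * fs / (2 * np.pi)
f, S_dnu = welch(inst_freq - inst_freq.mean(), fs=fs, nperseg=2**16)

h0 = np.median(S_dnu[f > 2e6])                                    # white-noise plateau
print(f"Lorentzian (intrinsic) linewidth ~ {np.pi * h0:.0f} Hz")  # ~ pi*16 Hz ~ 50 Hz
```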
The full width at half maximum (FWHM) of the integral linewidth associated with the Gaussian contribution was obtained by integrating the frequency noise PSD from the inverse of the measurement time (\(1/T_{0}\)) up to the frequency where \(S_{\delta v}(f)\) intersects the \(\beta\)-separation line \(S_{\delta v}(f)=8\ln(2)f/\pi^{2}\) (dashed line) [47]. With the integrated surface \(A\), we obtain a minimum FWHM linewidth (\(\sqrt{8\ln(2)A}\)) of 82.2 kHz for the free-running EDWL at 1 ms measurement time, which does not yet match that of a fiber laser, but is lower than the 166.6 kHz of an ECDL (Toptica CTL) characterized as a reference
Figure 3: **Demonstration of wideband tuning of the laser wavelength.** (**A**) Experimental setup for laser wavelength tuning demonstration (device ID: D85_04_F8_C508_VL2.0) (**B**) Operating principle of the wavelength tuning of the Vernier laser. The traces indicate the transmission of each over-coupled microresonator and the entire Vernier filter, respectively. (**C**) Two dimensional laser wavelength tuning map, showing the wavelength of the predominant lasing mode as a function of the electrical power applied to the two micro heaters. The dashed line schematically indicates the approach to coarse wavelength tuning. The white regions indicate that the expected laser emission was missing due to the microring resonance misalignment or a competing lasing mode when approaching the edge of the WDM filter transmission band. (**D**) Measured optical spectra of single-mode lasing tuned over 40 nm wavelength range. The optical spectrum analyzer’s resolution bandwidth is set to 0.1 nm.
laser. For comparison, the commercial stabilized fiber-based laser shows a FWHM linewidth of 2.4 kHz at 1 ms measurement time.
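The two linewidth figures discussed above follow directly from the frequency-noise PSD: the intrinsic (Lorentzian) linewidth is \(\pi h_{0}\), and the integral FWHM follows from the area \(A\) under \(S_{\delta v}(f)\) above the \(\beta\)-separation line. A minimal sketch, assuming a tabulated PSD is available as arrays `f` and `S` (the function names are ours, not the authors'):

```python
import numpy as np

def intrinsic_linewidth(h0):
    """Lorentzian linewidth [Hz] from the white frequency-noise plateau h0 [Hz^2/Hz]."""
    return np.pi * h0

def integral_fwhm(f, S, t_meas=1e-3):
    """FWHM [Hz] via the beta-separation-line method: integrate S(f) where it exceeds
    8*ln(2)*f/pi^2, starting from 1/t_meas, then take sqrt(8*ln(2)*A)."""
    beta_line = 8 * np.log(2) * f / np.pi**2
    mask = (f >= 1.0 / t_meas) & (S > beta_line)
    A = np.trapz(S[mask], f[mask])
    return np.sqrt(8 * np.log(2) * A)

print(intrinsic_linewidth(15.9))   # ~50 Hz, cf. the 50.1 Hz quoted above
print(intrinsic_linewidth(62.0))   # ~195 Hz, cf. the 194.8 Hz quoted above
```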
Next, we show that the Erbium waveguide laser features a lower RIN compared to a commercial fiber laser (Koheras Adjustik, KOH45) (Supplementary Note 11). The waveguide laser shows a RIN down to \(-130\) dBc/Hz (yellow & purple) at mid-range offset frequencies between 10 kHz and 1 MHz, lower than the fiber laser RIN (grey) that has a PSD pole induced by relaxation oscillation (Fig. 4E). The mid-range RIN was mainly limited by pump laser RIN transduction, which contributed to an increase of the RIN by 5 dB for the un-packaged EDWL. The pump RIN transduction at frequencies above 20 MHz was suppressed due to the slow dynamics of the Erbium ions. The relaxation oscillation frequency can be calculated as \(f_{\mathrm{r}}\approx\frac{1}{2\pi}\sqrt{\frac{P_{\mathrm{cav}}\kappa}{P_{\mathrm{sat}}\tau}}\), where \(P_{\mathrm{sat}}\) is the saturation power of the gain medium, \(P_{\mathrm{cav}}\) is the laser cavity power, \(\kappa\) is the cold cavity loss rate, and \(\tau\) is the Erbium ion upper-state lifetime (Supplementary Note 12). The waveguide laser RIN reduces to \(<-155\) dBc/Hz at offset frequencies of \(>10\) MHz. We observed that the relaxation oscillation frequency of the waveguide laser varied from 0.3 MHz to 2.4 MHz when increasing the optical pump power (Fig. 4F), which is higher than the
Figure 4: **Laser noise properties and the fully hybrid integration of an EDWL.** (**A**) Experimental setups for the measurement of laser frequency noise, relative intensity noise, and intrinsic laser linewidth. (**B**) Optical image of a fully hybrid integrated EDWL assembly. (**C**) Measured laser frequency noise based on heterodyne detection with reference lasers. EDWL1 device ID: D8S_04_F8_C508_VL2.0; EDWL2 device ID: D8S_04_F8_C508_VL2.1; EDWL3 device ID: D8S_04_F8_C508_VL2.1. (**D**) Measured and fitted spectra of delayed self-heterodyne interferometric measurement for intrinsic laser linewidth investigation. (**E**) Measured laser relative intensity noise (RIN) based on direct photodetection. (**F**) Relaxation oscillation peaks under varied pump power.
corresponding value in the fiber laser (typically \(<100\) kHz). This higher relaxation oscillation frequency originates from the smaller saturation power and the shorter Erbium upper-state lifetime of 3.4 ms [15].
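The square-root scaling of \(f_{\mathrm{r}}\) with intra-cavity power can be illustrated with the expression above; the cavity loss rate and power levels in the sketch below are assumed values chosen only for illustration (they are not quoted in the text, apart from the 3.4 ms lifetime), so the output should be read as an order-of-magnitude check against the 0.3-2.4 MHz range reported above.

```python
import numpy as np

def f_relax(P_cav, P_sat, kappa, tau):
    """Relaxation-oscillation frequency f_r ~ (1/2pi) * sqrt(P_cav*kappa / (P_sat*tau))."""
    return np.sqrt(P_cav * kappa / (P_sat * tau)) / (2 * np.pi)

tau = 3.4e-3                  # Erbium upper-state lifetime [s] (quoted in the text)
kappa = 2 * np.pi * 250e6     # assumed cold-cavity loss rate [rad/s]
P_sat = 0.5e-3                # assumed gain saturation power [W]
for P_cav in (4e-3, 40e-3, 150e-3):   # assumed intra-cavity powers [W]
    print(f"P_cav = {P_cav*1e3:5.1f} mW -> f_r ~ {f_relax(P_cav, P_sat, kappa, tau)/1e6:.2f} MHz")
```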
## Summary
In summary, we have demonstrated a photonic integrated circuit-based Erbium laser that achieved a sub-100 Hz intrinsic linewidth, low RIN, \(>72\) dB SMSR, and 40 nm wide wavelength tunability with power exceeding 10 mW. The Erbium-doped waveguide lasers use foundry-compatible silicon nitride waveguides, and have the potential to combine fiber-laser coherence with the low size, weight, power and cost of integrated photonics. Such a laser may find use in established applications such as coherent sensing, and may equally provide a disruptive solution for emerging high-volume applications, such as coherent FMCW LiDAR, or coherent optical communications, where iTLAs (integrated tunable laser assemblies) have been widely deployed but the high coherence of fiber lasers is increasingly demanded for advanced high-speed modulation formats, while their adoption has been impeded by high cost and large size. Co-doping other rare-earth ions such as ytterbium (emission at 1.1 \(\mu\)m) and thulium (0.8 \(\mu\)m, 1.45 \(\mu\)m and 2.0 \(\mu\)m) will moreover allow access to other wavelengths. Looking to the future, the compatibility of silicon nitride with heterogeneously integrated thin-film lithium niobate[48], as well as piezoelectric thin films [49, 11], and Erbium waveguide amplifiers [15] provides the capability to create fully-integrated high-speed, low-noise, high-power optical engines for LiDAR, long-haul optical coherent communications, and analog optical links.
**Funding Information**: This work was supported by the Air Force Office of Scientific Research (AFOSR) under Award No. FA9550-19-1-0250, and by contract W911NF2120248 (NINJA) from the Defense Advanced Research Projects Agency (DARPA), Microsystems Technology Office (MTO). This work was further supported by the EU H2020 research and innovation programme under grant No. 965124 (FEMTOCHIP), by the SNSF under grant no. 201923 (Ambizione), and by the Marie Sklodowska-Curie IF grant No. 898594 (CompADC) and grant No. 101033663 (RaMSoM). **Acknowledgments**: Silicon nitride samples were fabricated in the EPFL Center of MicroNanoTechnology (CMI).
**Author contributions**: Y.L. and Z.Q. performed the experiments. Y.L. carried out data analysis and simulations. Y.L. and Z.Q. designed Si\({}_{3}\)N\({}_{4}\) waveguide laser chips. X.J., G. L. and J.R. provided experimental supports. A. B. and A. V. designed and performed the device packaging. R.N.W., Z.Q. and X.J. fabricated the passive Si\({}_{3}\)N\({}_{4}\) samples. Y.L. wrote the manuscript with the assistance from Z.Q. and the input from all co-authors. T.J.K supervised the project.
**Data Availability Statement**: The code and data used to produce the plots within this work will be released on the repository Zenodo upon publication of this preprint.
**Competing interests** T.J.K. is a cofounder and shareholder of LiGenTec SA, a start-up company offering Si\({}_{3}\)N\({}_{4}\) photonic integrated circuits as a foundry service.
|
2306.15560 | Impact of the solar activity on the propagation of ICMEs: Simulations of
hydro, magnetic and median ICMEs at minimum and maximum of activity | The propagation of Interplanetary Coronal Mass Ejections (ICMEs) in the
heliosphere is influenced by many physical phenomena, related to the internal
structure of the ICME and its interaction with the ambient solar wind and
magnetic field. As the solar magnetic field is modulated by the 11-year dynamo
cycle, our goal is to perform a theoretical exploratory study to assess the
difference in the propagation of an ICME in typical minimum and maximum activity
backgrounds. We define a median representative CME at 0.1~au, using both
observations and numerical simulations, and describe it using a spheromak
model. We use the heliospheric propagator European Heliospheric FORecasting
Information Asset (EUHFORIA) to inject the same ICME in two different
background wind environments. We then study how the environment and the
internal CME structure impact the propagation of the ICME towards Earth, by
comparison with an unmagnetized CME. At minimum of activity, the structure of
the heliosphere around the ecliptic causes the ICME to slow down, creating a
delay with the polar parts of the ejecta. This delay is more important if the
ICME is faster. At maximum of activity, a southern coronal hole causes a
northward deflection. For these cases, we always find that the ICME at maximum
of activity arrives first, while the ICME at minimum of activity is actually
more geo-effective. The helicity sign of the ICME is also a crucial parameter
but at minimum of activity only, since it affects the magnetic profile and the
arrival time by up to 8 hours. | Barbara Perri, Brigitte Schmieder, Pascal Démoulin, Stefaan Poedts, Florian Regnault | 2023-06-27T15:36:19Z | http://arxiv.org/abs/2306.15560v1 | # Impact of the solar activity on the propagation of ICMEs:
###### Abstract
The propagation of Interplanetary Coronal Mass Ejections (ICMEs) in the heliosphere is influenced by many physical phenomena, related to the internal structure of the ICME and its interaction with the ambient solar wind and magnetic field. As the solar magnetic field is modulated by the 11-year dynamo cycle, our goal is to perform a theoretical exploratory study to assess the difference in the propagation of an ICME in typical minimum and maximum activity backgrounds. We define a median representative CME at 0.1 au, using both observations and numerical simulations, and describe it using a spheromak model. We use the heliospheric propagator European Heliospheric FORecasting Information Asset (EUHFORIA) to inject the same ICME in two different background wind environments. We then study how the environment and the internal CME structure impact the propagation of the ICME towards Earth, by comparison with an unmagnetized CME. At minimum of activity, the structure of the heliosphere around the ecliptic causes the ICME to slow down, creating a delay with the polar parts of the ejecta. This delay is more important if the ICME is faster. At maximum of activity, a southern coronal hole causes a northward deflection. For these cases, we always find that the ICME at maximum of activity arrives first, while the ICME at minimum of activity is actually more geo-effective. The helicity sign of the ICME is also a crucial parameter, but at minimum of activity only, since it affects the magnetic profile and the arrival time by up to 8 hours.
solar wind -- solar cycle -- coronal mass ejections -- space weather

Barbara Perri, Brigitte Schmieder, Pascal Demoulin, Stefaan Poedts, Florian Regnault
## 1 Introduction
Space weather is concerned with anticipating sudden events from the Sun and their impact on our planet (Schrijver et al., 2015). Coronal Mass Ejections (or CMEs) are considered one of the main drivers of strong space weather events (Gosling, 1993): they consist of large-scale ejections of plasma and magnetic field by the Sun, which then travel through the inner heliosphere until they eventually reach our planet (Webb & Howard, 2012). CMEs that have an impact on Earth are called geo-effective (Koskinen & Huttunen, 2006). In particular, when they interact with the Earth's magnetosphere, they can generate magnetic storms that have consequences for our atmosphere (polar lights) and ground (induced currents) (Pulkkinen, 2007). CMEs are an important concern for space weather due to their frequency (more than 10 per day launched in all directions during intense phases of solar activity, see Robbrecht et al. (2009)), and their impact on our technology (severe electrical damage due to induced currents, see Pirjola (2005)). The most powerful recorded event was the Carrington event on 1st of September 1859, where a CME traveled
at 2300 km/s and reached Earth within 17 hours (Tsurutani et al., 2003); the cost of the impact of such an event happening nowadays has been evaluated at trillions of dollars (Schrijver, 2015).
CMEs originate mostly from active regions, where the magnetic field is particularly intense and stored in sheared and twisted structures (usually flux-ropes, see Demoulin (2008); Schmieder et al. (2015)). Eventually, these structures become unstable, and the plasma trapped inside is released into the heliosphere as magnetic ejectas (MEs, Winslow et al., 2015). When observed close to the Sun, most CMEs are characterized by a bright frontal loop, a dark cavity, and an embedded bright core (Illing and Hundhausen, 1985) possibly corresponding to the erupting filament (House et al., 1981). White-light images, obtained via Thomson scattering, are available to follow their initial propagation (Davies et al., 2013). The in situ counterparts of CMEs have historically been called Interplanetary Coronal Mass Ejections (ICMEs). Their most important components are both hydrodynamic and magnetic (Dumbovic et al., 2015). The speed and density of the ICME contribute to the dynamic pressure and also contain signatures of the propagation of the interplanetary shock at the front of the ICME. Interplanetary shocks alone can drive geomagnetic activity (Oliveira and Samsonov, 2018), which is why the simplest models do not include an internal magnetic field structure for the ICME. However, it has been shown that the magnetic field amplitude and orientation drive the strongest geomagnetic storms (especially the \(B_{z}\) component, which favors dayside reconnection at the magnetopause, Lugaz et al., 2016). The interplanetary shock is followed by a compressed and heated region called the sheath (Kilpua et al., 2017). This region is caused by the accumulation of solar wind at the front of the ICME, as well as the expansion of the following ME (Kaymaz and Siscoe, 2006). An ME is characterized by a strong and smooth magnetic field, as well as low temperature and low plasma beta (ratio of the thermal pressure over the magnetic pressure, Wang et al., 2005). Other diagnostics can also be used when available (Zurbuchen and Richardson, 2006). When the magnetic ejecta shows rotation of its magnetic field, and has a proton temperature lower by a factor of two than that of the typical solar wind with the same speed, it is categorized as a magnetic cloud (MC), as it is associated with the existence of a flux-rope (Burlaga et al., 1981; Burlaga, 1995). Such a configuration only happens for about one third of ICMEs at 1 au (Wu and Lepping, 2011). However, this may be due to the limitation of having only one measurement point (Jian et al., 2006; Kilpua et al., 2011). The recent era allows for multi-spacecraft coordination throughout the heliosphere to try to better quantify the 3D geometry of ICMEs (Mostl et al., 2022).
As ICMEs propagate through the heliosphere, they interact with the various structures they encounter, and as a result evolve. For example, ICMEs will naturally expand as they travel, and usually it can be approximated by a self-similar expansion (Demoulin et al., 2008; Gulisano et al., 2010, 2012; Chane et al., 2021; Verbeke et al., 2022). Concerning their trajectory, CMEs can suffer deflections both in latitude and longitude, because of their interaction with the magnetic field of coronal holes, helmet streamers and the heliospheric current sheet (HCS) (Gopalswamy and Makela, 2014). This tends to focus CMEs towards the Earth latitudes (especially at minimum of activity, see Zuccarello et al. (2012)). Recent studies have shown that the strength and sign of the ambient magnetic field can influence their drift (Asvestari et al., 2022) due to tilting instabilities (Bellan, 2000). It is also linked to the sign of helicity of the source region (Green et al., 2007). Concerning their speed, ICMEs can be accelerated or decelerated through their interaction with the ambient solar wind (Gopalswamy et al., 2000) which causes drag-like effects (Cargill et al., 1995). Concerning their magnetic flux, it can be reduced due to magnetic reconnection, which will lead to magnetic erosion at the front of the ICME (Dasso et al., 2007; Ruffenach et al., 2012). As they propagate, ICMEs are more likely to become more and more complex as a result of their interaction with solar wind structures (Winslow et al., 2015, 2022; Solini et al., 2022, 2023), and to display aging processes that will contribute to their deformation (Demoulin et al., 2020). They will also change as a result of interactions with specific structures, such as high-speed streams (HSSs) (Fenrich and Luhmann, 1998; Heinemann et al., 2019; Scolini et al., 2021), stream-interaction regions (SIRs) or other ICMEs (Lugaz et al., 2005; Scolini et al., 2020). For more details, see reviews by Lavraud and Rouillard (2014) and Shen et al. (2022), and references within.
The medium in which the ICMEs propagate is far from simple to describe, as it shows great complexity and variability. The interplanetary medium is influenced both by the magnetic field close to the Sun, and then the solar wind further away from it. The solar magnetic field is generated inside the Sun by dynamo effect (Moffatt, 1978; Parker, 1993; Brun and Browning, 2017), and then bathes the entire heliosphere following the Parker spiral due to the rotation of the star (Owens and Forsyth, 2013). The solar wind is made of plasma particles continuously emitted by the Sun (Parker, 1958). It has two main components, one slow and one fast, whose source mechanisms differ (see Cranmer et al. (2007), and Viall and Borovsky (2020) and references within). The short-term variability of the interplanetary medium is caused by reconnection effects in the lower corona (causing the switchbacks observed by Parker Solar Probe, see Kasper et al.
(2019)), and transient events (perturbations caused by SIRs, HSSs, previous ICMEs...). The long-term variability on the other hand is due to the solar activity cycle (Hathaway, 2015). Indeed, the solar dynamo is cyclic with a period of 22 years (Weiss, 1990), which generates periods of low magnetic activity, called minima, where the field is mostly dipolar, and periods of high magnetic activity, called maxima, where the field becomes more multipolar (Hoeksema, 1984; DeRosa et al., 2012). This modulation has a direct effect on the structure of the corona and inner heliosphere. At minimum of activity, the corona is very structured with stable equatorial helmet streamers, thus confining the slow solar wind to the equator and the fast wind at the poles; at maximum of activity, the corona is more complex with pseudo-streamers and streamers emerging at various latitudes, and thus the solar wind has slow and fast streams at all latitudes (McComas et al., 2003, 2008).
Because of all these complex interactions, the impact of solar activity on ICME propagation is still unclear. We know that CMEs/ICMEs are more frequent (Gopalswamy et al., 2003), faster (Hundhausen et al., 1990; Dasso et al., 2012) and more magnetized (Wu & Lepping, 2011) at maximum of activity, and yet the most powerful events recorded (like the Carrington event for example) did not necessarily happen during these periods (because they are probably due to ICME-ICME interactions) (Chapman, 2023). There are other interesting relationships between ICMEs and the solar cycle. The sign of the helicity can be estimated using hemispheric rules that depend on the solar cycle polarity (Bothmer & Schwenn, 1998). Gopalswamy et al. (2003) noted that high-latitude CMEs do not occur during the polarity reversal phases of the solar cycle, due to the lack of closed field lines close to the poles. Studies have analyzed the dependence of specific properties on the phase of the solar cycle. Dasso et al. (2012) showed that the expansion of the ICME is not related to the solar cycle, but the amount of helicity of the magnetic cloud is. Jian et al. (2011) showed that during solar minima, shocks are more important due to a slower solar wind. Finally, Regnault et al. (2020) analyzed 20 years of ACE data and compared the internal properties of ICMEs based on the solar activity phase: they confirmed that ICMEs during maximum were faster, but otherwise the parameters were similar; the main difference is that at maximum they observe a broader distribution of ICME parameters, as also found by Wu & Lepping (2016) using Wind data.
Our goal with this paper is to explore more quantitatively the impact of solar activity on the propagation and geo-effectiveness of ICMEs. In this first exploratory study, we will inject the same ICME in two different activity backgrounds using numerical simulations, and quantify the differences and their origins, as a first step towards understanding what effects to expect.
The article is organized as follows. In Section 2, we describe the numerical set-up behind the European Heliospheric FORecasting Information Asset (EUHFORIA) code, for both the coronal and heliospheric part, as well as the CME/ICME modeling. In Section 3, we explain our choice of boundary conditions, first for the solar wind, then for the CME insertions. In Section 4, we present two limit cases: a first one with only solar wind to quantify the background for the next study of ICME propagation, and a second one where the injected CME is purely hydrodynamic, to analyze the effect of the solar wind speed. In Section 5, we present the results for a magnetized CME inspired by a real event. In Section 6, we present the results for a ICME representing a median case based on statistical analysis of ICME parameters. Finally, in Section 7, we present our discussion and conclusion.
## 2 Description of eUHFORIA simulations
We use the numerical 3D MHD code EUHFORIA (Pomoell & Poedts, 2018). It is divided into two numerical domains with different treatments: a semi-empirical model for the coronal part (from 1 \(R_{\odot}\) to 21.5 \(R_{\odot}\)), and an MHD model for the heliospheric part (from 0.1 to 2 au). The limit between the two domains is set at 0.1 au because at this distance the solar wind is superfast, making the coupling one-way (the heliospheric part cannot back-react on the coronal part).
### Coronal part
The coronal part uses synoptic magnetic maps as inputs, which are observations of the photospheric radial magnetic field used to drive the simulation. They are called synoptic because they cover the 360\({}^{\circ}\) of the solar surface, but they are not necessarily synchronic (which means that the solar observations displayed on the map were not necessarily taken at the same date, for example the observations can last a full solar rotation and thus have a 27-day gap) (Riley et al., 2014). The default set-up of EUHFORIA can use two types of magnetic maps: the ones from the Global Oscillation Network Group (GONG) (Harvey et al., 1996) and the ones from the GONG Air Force Data Assimilative Photospheric Flux Transport Model (GONG-ADAPT) (Arge et al., 2010; Hickmann et al., 2015). From this input, the magnetic field global configuration is derived using two models. First, we use a potential-field source surface (PFSS)
model up to the source surface \(R_{ss}\) of 2.6 \(R_{\odot}\)(Altschuler & Newkirk, 1969). This allows us to compute a current-free configuration by assuming the magnetic field is potential below the source surface, and purely radial above it. Then it is coupled to a Schatten current sheet (SCS) model from 2.3 \(R_{\odot}\) up to 21.5 \(R_{\odot}\)(Schatten et al., 1969). The SCS model starts slightly below the source surface to reduce possible kinks due to incompatibility between the two models (McGregor et al., 2008). The SCS field is required to vanish at infinity, in order to extend the magnetic field radially while maintaining a thin structure for the heliospheric current sheet (HCS). This method provides results in better agreement with observations (with the Ulysses mission, for example, see Pinto & Rouillard (2017)).
Once the magnetic configuration is computed, a semi-empirical Wang-Sheeley-Arge method is used to compute the radial velocity (Wang & Sheeley, 1990; Arge et al., 2003), using the following formula (in km/s):
\[v_{r}(f,d)=240+\frac{675}{(1+f)^{0.222}}\left[1.0-0.8\exp\left(-\left(\frac{ \mathrm{d}}{0.02}\right)^{1.25}\right)\right]^{3}, \tag{1}\]
where \(d\) is the angular distance of the footpoint of the magnetic field line to the closest coronal hole boundary, and \(f\) is the flux tube expansion factor between the photosphere and the source surface. This law is adapted from van der Holst et al. (2010); McGregor et al. (2011). Since the solar wind continues to accelerate beyond 0.1 au, we deduct a constant value of 50 km/s from equation (1) to prevent the wind speed from being systematically overestimated. Furthermore, we enforce the final speed to be in the range \(v_{r}\in[275,625]\) km/s by capping it (McGregor et al., 2011). Finally, we apply a rotation of \(10^{\circ}\) to the obtained solar wind speed map at 0.1 au to take into account the approximated solar rotation that is not included in the magnetic field model.
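For reference, the speed prescription of equation (1), together with the 50 km/s offset and the capping described above, can be written compactly as follows (a Python sketch of the empirical relation, not the EUHFORIA source code; \(d\) is expressed in the same angular units as the 0.02 constant of equation (1)).

```python
import numpy as np

def wsa_speed(f, d):
    """Empirical solar wind speed [km/s] at 0.1 au from the flux-tube expansion factor f
    and the angular distance d to the nearest coronal-hole boundary (Eq. 1),
    including the 50 km/s offset and the capping to the range [275, 625] km/s."""
    v = 240.0 + 675.0 / (1.0 + f)**0.222 * (1.0 - 0.8 * np.exp(-(d / 0.02)**1.25))**3
    return np.clip(v - 50.0, 275.0, 625.0)

print(wsa_speed(f=1.0, d=0.3))    # deep inside a coronal hole -> capped fast wind (625 km/s)
print(wsa_speed(f=10.0, d=0.03))  # strongly expanding flux tube near a boundary -> slow wind (~430 km/s)
```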
Based on this computed velocity, the density \(n\), temperature \(T\) and radial magnetic field \(B_{r}\) are computed at 0.1 au using the following relations (Pomoell & Poedts, 2018):
\[n=n_{fsw}\left(v_{fsw}/v_{r}\right)^{2}, \tag{2}\]
\[T=T_{fsw}\left(\rho_{fsw}/\rho\right), \tag{3}\]
\[B_{r}=\mathrm{sgn}\left(\mathrm{B_{corona}}\right)\mathrm{B_{fsw}\left(v_{r}/v _{fsw}\right),} \tag{4}\]
with \(v_{fsw}=675\) km/s, \(n_{fsw}=300\) cm\({}^{-3}\), \(T_{fsw}=0.8\) MK, \(\rho_{fsw}=0.5n_{fsw}m_{p}\) (\(m_{p}\) being the proton mass), \(B_{fsw}=300\) nT and \(\mathrm{sgn}\left(\mathrm{B_{corona}}\right)\) being the sign of the magnetic field as given originally by the coronal model at 0.1 au. All these values are consistent with a fast solar wind (hence the abbreviation \(fsw\)) (Odstrcil & Pizzo, 1999). The number density prescription ensures a constant kinetic energy density on the spherical surface at 0.1 au. The plasma thermal pressure is chosen to be constant at 0.1 au (equal to 3.3 nPa), which sets the fast wind temperature. The reconstruction of the magnetic field based on the radial speed instead of the PFSS+SCS avoids the open-flux problem noted by Linker et al. (2017). These conditions are similar to the ones described in Odstrcil & Pizzo (1999).
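The full boundary state at 0.1 au then follows from the computed wind speed through equations (2)-(4); a compact sketch using the constants listed above in SI units (again, not the EUHFORIA source code):

```python
import numpy as np

m_p = 1.6726e-27                   # proton mass [kg]
v_fsw, n_fsw = 675e3, 300e6        # fast-wind speed [m/s] and number density [m^-3]
T_fsw, B_fsw = 0.8e6, 300e-9       # fast-wind temperature [K] and radial field [T]
rho_fsw = 0.5 * n_fsw * m_p

def boundary_state(v_r, sign_B):
    """Number density, temperature and radial field at 0.1 au from Eqs. (2)-(4)."""
    n = n_fsw * (v_fsw / v_r)**2
    T = T_fsw * rho_fsw / (0.5 * n * m_p)    # equivalent to T_fsw * n_fsw / n
    B_r = sign_B * B_fsw * (v_r / v_fsw)
    return n, T, B_r

n, T, B_r = boundary_state(v_r=430e3, sign_B=+1)   # a slow-wind parcel
print(f"n = {n/1e6:.0f} cm^-3, T = {T/1e6:.2f} MK, B_r = {B_r*1e9:.0f} nT")
```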
### Heliospheric part
Using the boundary conditions provided by the coronal part, the heliospheric part of EUHFORIA then computes the solar wind all the way to 2 au by solving the 3D time-dependent ideal MHD equations augmented with gravity. The equations are solved in the heliocentric Earth equatorial (HEEQ) frame, which is described as the frame with its Z-axis aligned with the rotation axis of the Sun, and its X-axis defined by the intersection of the solar equatorial plane and the solar central meridian of date as seen from the Earth. Although the chosen frame is not inertial, we choose to omit the Coriolis and centrifugal terms which should be the result of the orbital motion of Earth, as their contribution is actually negligible. A value of 1.5 is selected for the polytropic index as in Odstrcil et al. (2004). This value is close to the ones found for protons in all solar winds with Helios and Parker Solar Probe (Dakeyo et al., 2022). The fact that we use a reduced index closer to 1 than 5/3 is a simple way of modelling coronal heating and as a result acceleration of the solar wind (Pomoell & Vainio, 2012).
A relaxation phase is first performed (set to 14 days in our case to avoid any spurious effect caused by the initial transient at the outer boundary condition). Then the forecast phase begins at the date set by the magnetic map. The forecast phase lasts 7 days in our cases, which is more than enough for the ICME to cross the entire domain.
The default set-up has a 2\({}^{\circ}\) angular resolution. The computational domain extends from 0.1 to 2 au in the radial direction and spans 120\({}^{\circ}\) in latitude and 360\({}^{\circ}\) in longitude. This means that the solar poles are truncated by 30\({}^{\circ}\) in each hemisphere. To solve the MHD equations, we use a finite volume method combined with a constrained transport
approach. To obtain a scheme which is both robust and second-order accurate, we use an approximate Riemann solver with standard piece-wise linear reconstruction (Kissmann and Pomoell, 2012; Pomoell and Vainio, 2012). At the outer radial boundary, we use open boundary conditions implemented via a simple extrapolation, whereas at the latitudinal boundaries we use symmetric reflection boundary conditions.
One last point of detail we would like to discuss here is the difference between EUHFORIA outputs and proton data. As explained in Scolini et al. (2021), EUHFORIA uses a single-fluid approach, and thus makes no distinction between the different particle populations (such as protons, electrons, \(\alpha\) particles, etc.). In order to compare EUHFORIA outputs to observational data, we must thus make some assumptions. We will consider the proton and electron populations as the two primary contributors to solar wind plasma, and we assume that the two species have the same temperature: \(T=T_{p}=T_{e}\). To further ensure the quasi-neutrality of the plasma at all locations and times in the heliosphere, we also assume the two species have the same number density: \(n_{p}=n_{e}\). This means that the plasma density in EUHFORIA is \(n=n_{p}+n_{e}=2n_{p}\). As a result, the EUHFORIA thermal pressure and \(\beta\) plasma parameter are also twice those of the proton population: \(P_{th}=P_{p,th}+P_{e,th}=n_{p}k_{B}T_{p}+n_{e}k_{B}T_{e}=2P_{p,th}\) and \(\beta=P_{th}/P_{mag}=2P_{p,th}/P_{mag}=2\beta_{p}\). The plasma total pressure is then computed as: \(P_{tot}=P_{mag}+P_{th}=P_{mag}+2P_{p,th}\). The plasma temperature is retrieved using: \(T=T_{p}=P_{th}/(nk_{B})=P_{p,th}/(n_{p}k_{B})\). Since \(m_{p}\gg m_{e}\), we can assume that \(v\approx v_{p}\). To avoid confusion, EUHFORIA quantities will hereafter be denoted without indexes, while proton quantities will be denoted with index \(p\). This approximation is of course different from known observations, but overcoming it would require moving from ideal MHD to at least two-fluid resistive MHD, which is beyond the scope of this paper (Priest, 2014).
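In practice, the conversion from the single-fluid outputs to the proton quantities used for data comparison reduces to a few lines; a minimal helper under the stated assumptions (\(T_{p}=T_{e}\), \(n_{p}=n_{e}\)):

```python
k_B = 1.380649e-23   # Boltzmann constant [J/K]

def to_proton_quantities(n, P_th, P_mag):
    """Convert EUHFORIA single-fluid number density and pressures (SI units)
    to proton quantities, assuming n = n_p + n_e = 2*n_p and T = T_p = T_e."""
    n_p = n / 2.0                 # proton number density
    T_p = P_th / (n * k_B)        # = P_p,th / (n_p * k_B)
    P_p_th = P_th / 2.0           # proton thermal pressure
    beta_p = P_p_th / P_mag       # proton plasma beta = beta / 2
    return n_p, T_p, P_p_th, beta_p
```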
### CME modeling
EUHFORIA offers the possibility to inject a coronal mass ejection (CME) at 0.1 au inside the heliospheric part in a time-dependent way, in order to model its propagation and interaction with the modeled ambient solar wind. There are various models available to represent the CME. In this study, we will be using two of them.
First, we use the cone model, described in Pomoell and Poedts (2018) and similar to Odstrcil and Pizzo (1999). The CME is treated as a hydrodynamic cloud and is characterized by a constant angular width (which then defines an initial radius at the injection point), propagation direction and radial speed. The user can also prescribe the injection point, defined via its coordinates in HEEQ frame. The cross-section is assumed to be circular, and the CME density, pressure and radial speed are constant.
The other model we use is the linear-force-free spheromak (LFFS) model, described in Verbeke et al. (2019) and similar to Kataoka et al. (2009); Shiota and Kataoka (2016). It is a magnetized model, but unlike the model from Gibson and Low (1998), the CME completely goes through the boundary without footpoints left attached to the solar surface. The CME, considered to be a sphere of radius \(r_{0}\) at the time of its injection, is launched outward. The velocity of the CME is chosen to be constant everywhere within the CME, and always oriented along the given propagation direction. As a result, the total velocity vector \(v\) is not purely radial at the inner simulation boundary, but also contains latitudinal and longitudinal components. This means that the total speed of the CME \(v_{3D}\) can be decomposed into two components: the radial speed \(v_{rad}\) and the expansion speed \(v_{exp}\). In our case, we prescribe only the radial speed at the injection point. It requires the same physical input parameters as the cone model, plus three additional magnetic parameters: the handedness \(H\) of the spheromak (which determines the polarity of the magnetic field inside the CME); the tilt angle \(\tau\) (measured from the \(z\)-axis in the \(yz\)-plane); the total toroidal flux \(F\) (related to the magnetic field strength \(B_{0}\)). The spheromak orientation is thus given by the injection point, combined with the tilt that gives the axis of symmetry of the spherical structure (the magnetic configuration is shown in the appendix in Figure 16). For example, in the HEEQ coordinates of EUHFORIA, if the tilt is equal to 0, then the axis of symmetry is the vertical \(z\)-axis; if the tilt is equal to 90\({}^{\circ}\), the axis of symmetry is the horizontal \(y\)-axis. Compared to Euler angles, the tilt angle corresponds to the elevation angle. In the current implementation, we do not need additional angles (like the heading or bank angles) that would allow us to rotate around the axis of symmetry, since the spheromak exhibits symmetry in the azimuthal direction \(\phi\). The direction of propagation of the spheromak is finally set to be perpendicular to the 0.1 au boundary surface at the injection point.
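For intuition on the internal structure of such a CME, the classical linear force-free (Chandrasekhar-Kendall) spheromak solution confined to a sphere of radius \(r_{0}\) can be written down explicitly; the sketch below uses this textbook form for illustration only and does not reproduce the exact normalization, or the mapping from the toroidal flux \(F\) to \(B_{0}\), used in the EUHFORIA implementation of Verbeke et al. (2019).

```python
import numpy as np
from scipy.special import spherical_jn

def spheromak_field(r, theta, r0, B0, handedness=+1):
    """Linear force-free spheromak field (B_r, B_theta, B_phi) inside r <= r0.
    alpha*r0 is the first non-trivial zero of j_1, so B_r vanishes on the sphere
    and the configuration is fully detached; handedness flips the azimuthal component."""
    alpha = 4.493409457909064 / r0
    x = alpha * r
    j1 = spherical_jn(1, x)
    j1p = spherical_jn(1, x, derivative=True)
    B_r = 2.0 * B0 * (j1 / x) * np.cos(theta)
    B_t = -B0 * (j1 / x + j1p) * np.sin(theta)
    B_p = handedness * B0 * j1 * np.sin(theta)
    return B_r, B_t, B_p

# Near the centre the field tends to a uniform axial field of 2*B0/3:
print(spheromak_field(r=1e-6, theta=0.0, r0=1.0, B0=30e-9))
```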
## 3 Boundary conditions
In this section, we discuss how we selected the boundary conditions at 0.1 au for our heliospheric study. We first explain in subsection 3.1 the parameters related to the choice of the solar wind background, and then in subsection 3.2 the parameters related to the ICME initialization.
### Solar wind for minimum and maximum of activity
The idea behind this study is to use a solar wind background that would be more realistic than in previous studies using EUHFORIA: for example, Scolini et al. (2021) used an ideal analytical background, and Asvestari et al. (2022) even approximated the magnetic field as a monopole (which is an impossible configuration due to the Maxwell equations). We especially aim to reflect the complexity brought by the 11-year modulation of the solar cycle. To do so, we first selected two dates that correspond to typical minimum and maximum of solar activity.
For the minimum, we selected the Carrington Rotation number 2077, set between November 20 and December 17, 2008. This corresponds to the end of solar cycle 23. This date is indeed a well-known case used for calibration of coronal models at minimum of activity due to its very quiet magnetic field during this period (Rusin et al., 2010; van der Holst et al., 2014; Wiegelmann et al., 2017; Perri et al., 2022). It has even been chosen as the International Space Weather Action Team (ISWAT) validation benchmark for solar-wind models (Reiss et al., 2022). At this date, only the GONG magnetograms are available; the corresponding magnetogram can be found in the left panel of Figure 1. It exhibits typical features from minimum of activity, such as strong flux concentration at the poles with different polarities in the two hemispheres, which is a marker of the dipolar-dominant structure of the solar magnetic field at this period. There are also very few active regions, so that we mostly see the super granulation pattern typical of the quiet Sun.
For maximum of activity, we selected the date of March 20, 2015. This date corresponds to the maximum of solar cycle 24, and has been extensively used as a benchmark date for many coronal models (Yeates et al., 2018). After 2010, the GONG-ADAPT maps are available, so we use them for the maximum as they reduce the probability of having spurious features at the solar poles thanks to the additional post-processing of ADAPT. This causes a difference in the provider and thus the processing of the input map. However, it guarantees that the final solution does not suffer from numerical artifacts. We also show in Figure 1 the difference between GONG and GONG-ADAPT at maximum of activity. We can see that there is a difference in amplitude for the most intense active region, sometimes a difference in polarity for the quiet Sun and a different method for filling the solar poles. However, the general structure is still very similar, and these differences are still less than the difference between minimum and maximum of activity. This means that the differences we may see in the final CME solutions are indeed mainly due to the difference in solar activity. We will dedicate a specific study to the impact of the input magnetic map on the final CME solution, but for now it is beyond the scope of this paper. The map can be seen in the right panel of Figure 1. At maximum of activity, many active regions are present at the surface of the Sun between -0.5 and 0.5 in sine latitude (which corresponds to around -30 and 30\({}^{\circ}\) in latitude), creating intense magnetic field configurations that dominate over the poles and the quiet Sun.
The implication of these two different magnetic maps for the coronal part of EUHFORIA can be seen in Figure 2. We represent the boundary condition computed by the semi-empirical coronal model which will serve as boundary
Figure 1: Comparison between the two selected magnetic maps of the radial magnetic field \(B_{r}\) for minimum (left panel) and maximum (middle panel) of activity. The left panel shows a GONG synoptic map corresponding to the 15th of December 2008, representative of a minimum of activity (dipolar field with very few active regions). The middle panel shows a GONG-ADAPT map corresponding to the 20th of March 2015, representative of a maximum of activity (multipolar field with intense active regions). We also show for information the GONG map corresponding to the date chosen for maximum of activity (right panel). We can thus see that there is more difference between minimum and maximum of activity than between GONG and GONG-ADAPT. Positive polarity of the magnetic field is shown in red, negative polarity in blue. The color bar has been set symmetric and saturated to better visualize the structures (between -20 and 20 G). Note that the y-axis for the left panel is in sine latitude.
condition (BC) for the heliospheric MHD model. The left panel shows the BC for the selected minimum of activity, the right panel shows the BC for the selected maximum of activity. In each panel, the first row corresponds to the radial velocity in km/s, the second to the number density in cm\({}^{-3}\), the third to the temperature in \(10^{6}~{}K\) and the last one to the radial magnetic field component in nT. At minimum of activity (left panel), the corona is very structured, with fast hot wind at the poles and slow dense cold wind at the equator. The magnetic field is very much dipolar, with a negative polarity at the northern pole and positive polarity at the southern pole. The HCS between the two polarities is almost in the ecliptic with few disturbances. At maximum of activity, slow and fast wind are much more mixed at all latitudes. The HCS exhibits a more complex pattern, with the negative polarity from the southern hemisphere almost completely filling the map between longitudes 0 and 130. There is also an incursion of positive polarity in the negative polarity around longitude -50. These different BCs will result in different backgrounds for propagation of the CMEs, as will be detailed in Section 4.1. Note that no optimization was performed here to constrain the wind with observations: first, because we do not aim at reproducing a specific event, as this is foremost a theoretical study; second, because we want to keep an operational set-up to evaluate the implications for space-weather forecasts performed with EUHFORIA using these automatic parameters.
### Definition of the CME parameters
At 0.1 au, we also have to define the parameters of the CME to inject in the simulation. In this study, we have chosen three cases to explore. All the corresponding parameters can be found summarized in Table 1.
The first case is based on a real event that took place on July 12, 2012. This case has been extensively studied in Scolini et al. (2019) and Scolini et al. (2021) using a magnetized LFFS model. The parameters for this CME were derived from observations, using white-light coronagraph images to constrain the geometrical and kinetic parameters, and photospheric and low-corona observations of the source active region to constrain the magnetic parameters (for more details, see Scolini et al., 2019). The resulting parameters are summarized in column labeled Reference CME of Table 1. They have been adapted in the scope of this study: we have adjusted the center of the CME to be along the Sun-Earth axis (which yields \(0^{\circ}\) in HEEQ latitude and longitude), and we have selected the same average initial
Figure 2: Comparison of the EUHFORIA input boundary condition at 0.1 au for minimum (left panel) and maximum (right panel) of activity. The first row shows the radial velocity \(v_{r}\) in km/s, the second row the number density \(N\) in cm\({}^{-3}\), the third row the temperature \(T\) in \(10^{6}~{}K\), the fourth row the radial magnetic field \(B_{r}\) in nT. All these profiles were derived from the magnetic maps displayed in Figure 1 using Wang-Sheeley-Arge and PFSS+SCS empirical relations. X-axis is in Stonyhurst longitude, y-axis in latitude.
radius at injection for all cases of \(15\,R_{\odot}\) (to avoid geometric factors contributing to the variation in mass). This value corresponds to an average of the values found by GCS fitting for the events studied in Scolini et al. (2019), which were 10.5, 14.5, 16.8 and 18.0 \(R_{\odot}\). The injection point being on the Sun-Earth line in HEEQ coordinates, the CME direction of propagation is towards Earth, to simulate optimal initial conditions for a full hit. The tilt has also been set to 0 to study a CME where the full magnetic ejecta propagates along the ecliptic plane. The speed, mass density, temperature, handedness and flux are however the same as in Scolini et al. (2019).
We also wanted to try a limit case with the same CME, but using the cone model instead of the LFFS model. This means that the CME is purely hydrodynamic (without an inner magnetic field). To obtain the corresponding parameters, we used this time the DONKI database\({}^{1}\), which gives a speed of around 1400 km/s. The lower initial speed of the reference (spheromak) case comes from the fact that we prescribe only the input radial speed, which is equal to the full 3D speed inferred from observations for a cone model (\(v_{rad}=v_{3D}\)), but equal to the difference between the 3D speed and the expansion speed for a spheromak (\(v_{rad}=v_{3D}-v_{exp}\)). For the reference case, the CME geometric parameters were derived using a GCS model (Graduated Cylindrical Shell model, see Thernisien et al. (2009); Thernisien (2011)), which gave a full 3D speed \(v_{3D}\) of 1266 or 1352 km/s depending on the fitting. The expansion speed of the CME is derived using empirical relations from Dal Lago et al. (2003) and Schwenn et al. (2005), which is an alternative when a 3D reconstruction of the event is not possible because only a single spacecraft observed it (more details in Scolini et al. (2019)). These relations give \(v_{rad}=0.43v_{3D}\) in the case of a spheromak, which explains the difference in input speed by a factor close to 2. The DONKI database values are thus consistent with the values obtained from Scolini et al. (2019). We then completed the input values with the default EUHFORIA mass density and temperature (respectively \(1.0\,10^{-18}\) kg/m3 and \(8.0\,10^{5}\) K, as specified in Pomoell and Poedts (2018)). All other geometric parameters were kept identical to limit the number of free parameters of the study. This is summed up in the column labeled Hydro CME of Table 1.
Footnote 1: [https://kauai.ecmc.gsfc.nasa.gov/DONKI/](https://kauai.ecmc.gsfc.nasa.gov/DONKI/)
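To make the two speed conventions explicit, the short sketch below encodes the relation quoted above between the prescribed radial speed and the 3D speed for the two CME models. It only illustrates \(v_{rad}=v_{3D}\) (cone) and \(v_{rad}=0.43\,v_{3D}\) (spheromak); the function name is ours, and the result is not meant to reproduce exactly the rounded input speeds adopted in Table 1.

```python
# Illustrative sketch (not EUHFORIA code): prescribed radial speed at 0.1 au
# for the two CME models, following the relations quoted in the text.

def input_radial_speed(v_3d_kms: float, model: str = "cone") -> float:
    """Radial speed injected at the inner boundary for a given 3D CME speed."""
    if model == "cone":
        return v_3d_kms           # the full 3D speed is injected radially
    if model == "spheromak":
        return 0.43 * v_3d_kms    # expansion speed removed (empirical relation)
    raise ValueError(f"unknown CME model: {model}")

for v_3d in (1266.0, 1352.0):     # GCS-fitted 3D speeds quoted for the 12 July 2012 event
    print(v_3d, input_radial_speed(v_3d, "cone"), input_radial_speed(v_3d, "spheromak"))
# The spheromak value is roughly half the cone value, i.e. the factor close to 2
# mentioned above.
```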
Finally, we also wanted to have a magnetized case which would be more representative of an average CME at Earth. To obtain such values, we combined two studies. First, we used the statistical study from Regnault et al. (2020), where 20 years of ACE data have been analyzed using the superposed epoch analysis (SEA) method to derive the distribution of ICME parameters at Earth. From this study, we can extract the median value of the distribution, which yields the following parameters:
\[v_{r}\approx 450\,\mathrm{km/s},\quad N\approx 5\,\mathrm{cm}^{-3},\quad T\approx 4\,10^{4}\,\mathrm{K},\quad B_{r}\approx 10\,\mathrm{nT}. \tag{5}\]
By using the median, we select a set of parameters such that 50% of ICMEs have values below this set and 50% above. We could also have used the mode (the most probable set of parameters), but because the distribution is log-normal, it would have been biased towards slower events, too close to our background wind speed to allow a proper analysis. Once we have these values at 1 au, we need to extrapolate them
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline Case & Hydro CME & Reference CME & Median CME \\ \hline Type of CME model & Cone & LFFS & LFFS \\ \hline Latitude of center [deg. HEEQ] & 0.0 & 0.0 & 0.0 \\ \hline Longitude of center [deg. HEEQ] & 0.0 & 0.0 & 0.0 \\ \hline Radius [\(R_{\odot}\)] & 15.0 & 15.0 & 15.0 \\ \hline Speed [km/s] & 1400 & 763 & 541 \\ \hline Mass density [kg/m3] & \(1.0\,10^{-18}\) & \(1.0\,10^{-18}\) & \(2.0\,10^{-18}\) \\ \hline Temperature [K] & \(8.0\,10^{5}\) & \(2.4\,10^{5}\) & \(6.2\,10^{5}\) \\ \hline Handedness & N/A & \(\pm 1\) & \(\pm 1\) \\ \hline Tilt [deg.] & N/A & 0.0 & 0.0 \\ \hline Flux [Wb] & N/A & \(1.0\,10^{14}\) & \(2.3\,10^{13}\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the CME parameters used at 0.1 au as a boundary condition for the EUHFORIA runs. For each case, we specify the model used, the center of the injected CME with its latitude and longitude in HEEQ coordinate, the radius of the inserted CME, the speed, mass density, and temperature. For the magnetized CMEs, we also specify the handedness, tilt and magnetic flux.
back to 0.1 au. To do so, we use the scaling laws derived by Scolini et al. (2021) by studying the radial evolution of ICMEs in EUHFORIA:
\[v_{r}\propto r^{-0.08},\quad N_{p}\propto r^{-2.38},\quad T_{p}\propto r^{-1.19},\quad B_{r}\propto r^{-1.9}. \tag{6}\]
Combining these two sets of parameters, we can obtain the corresponding values at 0.1 au for a median CME:
\[v_{r}=541\,\mathrm{km/s},\quad N_{p}=1.2\,10^{3}\,\mathrm{cm}^{-3},\quad T_{p}=6.2\,10^{5}\,\mathrm{K},\quad B_{r}=7.9\,10^{2}\,\mathrm{nT}. \tag{7}\]
We then convert these values to obtain the parameters needed for the input configuration file of EUHFORIA, using the following relations:
\[\rho=m_{p}N_{p}\times 10^{6},\qquad\phi_{t}=\frac{2B_{0}}{\alpha^{2}}\left[-\sin\left(\alpha r_{0}\right)+\int_{0}^{\alpha r_{0}}\frac{\sin t}{t}\,\mathrm{d}t\right], \tag{8}\]
with \(m_{p}\) being the proton mass, \(\alpha r_{0}=4.4934\) (which corresponds to prescribing the radial component of the magnetic field to be 0 on the boundary of the spheromak) and \(r_{0}=15R_{\odot}\) (for more details, see Verbeke et al. (2019)). The resulting parameters are summed up in the column labeled Median CME of Table 1. Compared to the real event, we can see that the median CME is slower, denser and hotter, and less magnetized than the reference case.
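As a sanity check of this procedure, the following sketch applies the scaling laws of equation (6) to the 1 au median values of equation (5) and then the conversions of equation (8). It is not the actual EUHFORIA input generator, and the identification of \(B_{0}\) with the scaled \(B_{r}\) value is our assumption, suggested by the fact that it reproduces the flux of Table 1.

```python
# Minimal sketch of the 1 au -> 0.1 au extrapolation (eq. 6) and of the
# conversion to mass density and toroidal flux (eq. 8). Not EUHFORIA code.
import numpy as np
from scipy.special import sici          # sici(x) returns (Si(x), Ci(x))

M_P = 1.6726e-27                        # proton mass [kg]
R_SUN = 6.957e8                         # solar radius [m]

# Median ICME values at 1 au (eq. 5)
v1, n1, t1, b1 = 450.0, 5.0, 4.0e4, 10.0     # km/s, cm^-3, K, nT

# Radial power laws of eq. (6), applied from 1 au down to 0.1 au
r = 0.1
v0, n0, t0, b0 = v1 * r**-0.08, n1 * r**-2.38, t1 * r**-1.19, b1 * r**-1.9
print(v0, n0, t0, b0)                   # ~541 km/s, ~1.2e3 cm^-3, ~6.2e5 K, ~7.9e2 nT (eq. 7)

# Conversion to the spheromak inputs of eq. (8)
r0 = 15.0 * R_SUN                       # spheromak radius [m]
alpha = 4.4934 / r0                     # alpha * r0 = first zero of j1
rho = M_P * n0 * 1e6                    # mass density [kg/m^3]
B0 = b0 * 1e-9                          # assumption: B0 set to the scaled B_r, in T
phi_t = 2.0 * B0 / alpha**2 * (-np.sin(alpha * r0) + sici(alpha * r0)[0])
print(rho, phi_t)                       # ~2.0e-18 kg/m^3 and ~2.3e13 Wb, consistent with Table 1
```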
We finally check with EUHFORIA simulations that this approach yields correct results by analyzing the values of the ICME obtained at Earth for the median CME, both at minimum and maximum of activity. Results can be visualized in Figure 3. The minimum of activity case is shown in the left panel, the maximum of activity case in the right panel. For each case, we plot the total magnetic amplitude in nT, the total velocity amplitude in km/s, the temperature in \(10^{4}\) K, the number density for the protons in cm\({}^{-3}\) and the \(\beta\) parameter of the plasma, defined as \(\beta=P_{th}/P_{mag}=2P_{p,th}/P_{mag}=2\beta_{p}\) (thermal pressure over magnetic pressure). The layout is chosen to be easily comparable with the analysis from Regnault et al. (2020), which uses the same layout for the SEA. On top of the curves, we plot a yellow rectangle to highlight the region corresponding to the sheath and a blue rectangle for the region corresponding to the magnetic ejecta. The method used to derive these regions is explained in more detail in Appendix A. In the magnetic ejecta, we obtain on average the following values for the minimum of activity case: \(v_{r}\approx 420\) km/s, \(N\approx 5\,\mathrm{cm}^{-3}\), \(T\approx 5\,10^{4}\) K, \(B\approx 8\) nT. Similarly, we obtain, for the maximum of activity case: \(v_{r}\approx 450\)
Figure 3: Comparison of the ICME parameters obtained at Earth for the median case at minimum (left panel) and maximum (right panel) of activity. The first row is the total magnetic amplitude in nT, the second row the total velocity amplitude in km/s, the third row the temperature in \(10^{4}\) K, the fourth row the number density for the protons in cm\({}^{-3}\) and the last row the \(\beta\) parameter of the plasma. A yellow rectangle highlights the region corresponding to the sheath, and a blue rectangle the region corresponding to the magnetic cloud. The method used to derive these regions is explained in more detail in Appendix A.
km/s, \(N\approx 10\,\)cm\({}^{-3}\), \(T\approx 4\,10^{4}\) K, \(B\approx 10\) nT. These values are consistent with the targeted values at 1 au (cf. equation (5)), which means these cases can indeed be classified as representative of a median ICME as seen at Earth.
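For reference, the plasma \(\beta\) shown in these plots can be recomputed directly from the local density, temperature and field. The short sketch below, which assumes an electron contribution equal to the proton one (as in the definition above), gives \(\beta\approx 0.27\) for the minimum of activity ejecta values just listed, i.e. a magnetically dominated structure.

```python
# Minimal sketch of the plasma beta definition used above:
# beta = P_th / P_mag = 2 * P_p,th / P_mag (equal electron and proton pressures assumed).
import numpy as np

K_B = 1.380649e-23                  # Boltzmann constant [J/K]
MU_0 = 4.0e-7 * np.pi               # vacuum permeability [H/m]

def plasma_beta(n_p_cm3: float, t_k: float, b_nt: float) -> float:
    p_proton = (n_p_cm3 * 1e6) * K_B * t_k        # proton thermal pressure [Pa]
    p_mag = (b_nt * 1e-9) ** 2 / (2.0 * MU_0)     # magnetic pressure [Pa]
    return 2.0 * p_proton / p_mag

# Average ejecta values for the minimum of activity case quoted above
print(plasma_beta(5.0, 5.0e4, 8.0))               # ~0.27
```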
## 4 Limit Cases
Before analyzing further the differences between the ICMEs, we want to analyze two limit cases to better understand the context of our simulations. In Section 4.1, we first present a case with only the background wind (no CME inserted) to better quantify the differences and similarities between the simulations with and without a CME. In Section 4.2, we then inject a hydrodynamic CME (described in the column labeled Hydro CME of Table 1) to quantify the impact of the hydrodynamic parameters alone.
### Wind-only simulation
In this first limit case, we do not inject any CME; we just let the wind background evolve for 7 days to see how it behaves without perturbation. In Figure 4, we can see the heliospheric part of EUHFORIA with the modeling of the ambient solar wind background. We show the radial wind speed, the number density and the radial magnetic field, both in the ecliptic and meridional (including Earth) planes. This allows us to get a first qualitative look at the similarities and differences between the selected minimum and maximum of activity. Since these are realistic solar wind backgrounds, we can see a lot of substructures in the wind. One constraint on the dates is that any large-scale structure (such as high-speed streams where the wind speed reaches 500 km/s or more) is not Earth-directed and
Figure 4: Comparison of the heliospheric wind background at minimum (left panel) and maximum (right panel) of activity. The first row is the radial wind speed in km/s, the second row the number density in cm\({}^{-3}\) normalized to 1 au and the last row the radial magnetic field in nT. For each row, we show the ecliptic (view from above) and meridional (view from the side crossing Earth) views in the HEEQ frame. Various planets and satellites are also shown in the bottom legend.
thus not geo-effective, to limit their interference with the propagation of the CME towards Earth. The dates have been adjusted so that the sector in the ecliptic plane is dominantly negative (in blue in the bottom row) in both cases.
Figure 5: Comparison of the 3D heliospheric current sheet (HCS) structure for the selected minimum (left panel) and maximum of activity (right panel). The HCS contour is shown in dark blue. For context, the ecliptic plane with the radial velocity is shown in transparency. The position of the Earth is indicated by a blue sphere. Both figures are in HEEQ frame.
Figure 6: Comparison of the time evolution of the background solar wind parameters at 0.11 au (left panel) and 1 au (right panel) along the Sun-Earth axis. The first row shows the total velocity in km/s, the second row the number density in cm\({}^{-3}\) and the last row the radial magnetic field component in nT. The curves for the minimum of activity are in blue, while the curves for the maximum of activity are in orange. Grey rectangles have been added to show the time intervals when the background solar wind is likely to interact with the ICME.
There are however expected differences that are relevant to this study, because they are representative of what we expect from minimum and maximum of activity configurations. At minimum of activity, the meridional view shows a very organized wind structure, with slow dense wind in the ecliptic plane, and fast, less dense wind near the poles. This is a logical consequence of the structures observed in the coronal boundary condition in Section 3.1. At maximum of activity, the solar wind organization is much more complex. For the magnetic field, we also see that at minimum of activity, the location of the current sheet near the ecliptic results in the polarity sectors being very clearly defined: at the Earth's location, the northern hemisphere is negative while the southern hemisphere is positive. At maximum of activity, the polarity at the Earth's latitude remains negative, but a positive sector will cross the Earth later on in the simulation (see the bottom right panel of Figure 4). This kind of fast polarity switch is expected at maximum of activity and is one of the features we are interested in.
To focus on the magnetic field structure, in Figure 5, we represent the HCS in 3D. We can clearly see its quasi-ecliptic shape at minimum of activity, with the Earth positioned slightly above it. At maximum of activity and in the inertial frame, the current sheet is very distorted, so that the Earth falls into one sector. The contour on the left side of the picture is the positive polarity incursion seen in the ecliptic view, and it may interfere with the CME propagation during its later stages.
Finally, in Figure 6, we show a more quantitative view of the background solar wind with 1D evolution plots. We plot the same quantities as in Figure 4, but along the Sun-Earth axis using virtual satellites. The left panel at 0.11 au gives a view of what the background looks like at the injection point of the CME, while the right panel shows the background at 1 au, i.e. the ambient medium at Earth. Minimum and maximum of activity data are superimposed, respectively in blue and orange. Grey rectangles have been added to highlight the regions of interest in each panel for the interaction with the ICME. We notice in the right panel that the wind structures are indeed very similar at Earth, except for \(|B_{r}|\) which is larger at maximum of activity (as expected since the solar activity is increasing). Close to the injection point (left panel), the wind speed is low in both cases (around 300 km/s). The main difference lies in the polarity inversion: at minimum of activity, the magnetic field polarity remains mostly negative, getting weak around 1 au (Figure 6, right panel), while at maximum of activity the disturbed HCS allows for a polarity switch from negative to positive as the ICME progresses outward.
### Hydrodynamic cone CME
In this second limit case, we will now inject a CME, but a purely hydrodynamic one. To do so, we use the cone model, detailed in Section 2.3, with the parameters described in the column labeled Hydro CME of Table 1. As explained in Section 3.2, this corresponds to the typical parameters prescribed to study the event of the 12th of July 2012, which is our reference case due to the great number of studies using EUHFORIA on it. This was a single-CME event that triggered a strong magnetic storm on Earth, which also explains why it is relevant for our study. The parameters have been adjusted to make the CME fully directed towards Earth in order to maximize the geo-effectiveness of the event. We inject the same cone CME in the two wind backgrounds corresponding to minimum and maximum of activity. This allows us to quantify the impact of the background solar wind, in a context where no magnetic interaction is possible between the CME and the background (because the CME does not possess an intrinsic magnetic structure).
The results are displayed in Figure 7. We show the density in cm\({}^{-3}\) to better visualize the CME ejecta as an under-dense structure. We show the moment when the ejecta is reaching Earth (symbolized by a blue circle to the right of the Sun). For each case, we show the ecliptic (view from above), meridional (view from the side) and spherical (view at 1 au) views in the HEEQ frame. From the ecliptic view, we can see that the CME structures seem rather similar, although the shock for the maximum of activity case is better defined. The difference is more visible in the meridional view.
For the minimum of activity case, the CME is actually cut in half, with a lagging middle structure and two accelerated lobes above and below the current sheet. This can easily be understood if we put this result in perspective with the wind structure at minimum of activity described in the previous section: since at minimum of activity the heliosphere is very organized, the parts of the CME caught in the current sheet are slowed down by the equatorial slow wind, while the parts outside the current sheet are accelerated by the fast solar wind. We can wonder how realistic it is to have a CME injected right in the middle of the current sheet at 0.1 au. Although CMEs have very little chance of being generated at the equator because they may face magnetic trapping (Sahade et al., 2022), they can be channeled towards the current sheet if they form close to the border of a streamer or pseudo-streamer (Zuccarello et al., 2012). Also, this effect is dominant at minimum of activity thanks to the dipolar structure of the solar magnetic field, which
further justifies this result for the minimum case (Lavraud and Rouillard, 2014). Similar configurations have been found in simulations of stellar CMEs associated with extremely dipolar stars (and hence very organized astrospheres, Alvarado-Gomez et al., 2022). At maximum of activity, on the other hand, the CME is much more compact, but undergoes a deflection towards the northern hemisphere.
In the spherical view, we can see more clearly the impact of the ICME at Earth. At minimum of activity, we see the full hit of the central part of the ejecta, with an under-dense structure at the same longitude and latitude as Earth. At maximum of activity, detecting the ICME signature is more difficult because of the complex structure of the heliosphere. The large under-dense structure north of the Earth is the ICME, while the largest one in the southern hemisphere is the trace of the open southern coronal hole.
To be more quantitative, we show in Figure 8 the time evolution of the main physical quantities at the position of the Earth at 1 au. We do this by using a virtual satellite within the simulation. From top to bottom, we show the radial velocity (in km/s), the proton number density (in cm\({}^{-3}\)), the temperature (in units of \(10^{4}\) K), the total magnetic field (in nT), and the plasma beta parameter. The blue line shows the minimum of activity case, while the orange line shows the maximum of activity case. The horizontal axis shows the number of physical days after the insertion of the CME for both cases. The arrival of the shock is shown by a vertical line (light blue for the minimum of activity case, red for the maximum of activity case). The crossing of the ejecta is shown by a colored window (light purple for minimum of activity, light red for maximum of activity). Although the two CMEs have very different 3D structures, their 1D profiles at Earth are actually not so different in density, temperature, and especially radial velocity. Because it is a cone CME, the detection of the ICME and its internal borders is a bit more challenging due to the lack of internal magnetic structure, but the global overview of all these physical quantities allows us to make estimations. The initial shock of
Figure 7: Comparison of the cone CME propagation for the minimum (top panel) and maximum (bottom panel) of activity cases. We show the density in cm\({}^{-3}\), normalized to 1 au to better visualize the CME ejecta as an under-dense structure. We show the moment when the ejecta is reaching Earth (symbolized by a blue circle circled by a white line to the right of the Sun). For each case, we show the ecliptic (view from above, left panel), meridional (view from the side including Earth, middle panel) and spherical (view at 1 au, right panel) views in the HEEQ frame. The magnetic field line connecting the Earth to the Sun is shown in dotted line. The polarity of the magnetic sectors is shown at the edges of the frames for reference (red for positive, blue for negative), separated by the HCS shown as a white line inside the domain. Animated versions of this figure are available in the online version, showing the full propagation of the cone CME from 0.1 au to the Earth for both the minimum and maximum of activity backgrounds, with the evolution of the radial magnetic field, density and radial velocity.
the ICME is visible in velocity, but clearer in density and in the total magnetic field. From the radial velocity, we see that the shock is rather gradual, starting around day 4. The CME at minimum of activity is a bit faster, going over 400 km/s in the sheath. This is probably due to the fact that it was a full hit, compared to the maximum of activity case which was a flank hit. We do show the magnetic field in this case, but we remind the reader that it is not representative of the polarity of the ICME itself, but rather of the accumulation of magnetic field from the solar wind background in front of it due to the propagation of the shock. The temperature is mostly shown for context of the variations of the \(\beta_{p}\) parameter. The plasma beta parameter is shown so as to visually support our setting of the boundaries of the ejecta for both cases. Because of the clear shock in density and total magnetic field, we can say that the ICME traveling at maximum of activity arrives faster (before 4 days, vs. after 4 days for the minimum case). However, if we look at the \(\beta\) parameter, we can see that the sheath region is shorter at minimum of activity (it passes Earth in only 3.83 hours), so that in the end the magnetic ejecta arrives faster at minimum than at maximum (4.28 days vs. 4.76 days). The ejecta duration is rather similar (a bit more than 2 days), so logically the ejecta ends sooner for the minimum of activity case. All these results are summarized in Table 2.
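The sheath and ejecta durations quoted here and in Table 2 follow directly from the shock, ejecta-start and ejecta-end times; the small sketch below shows the arithmetic (tiny differences with Table 2 come from the rounding of the quoted times).

```python
# Durations from the shock / ejecta-start / ejecta-end times (all in days after insertion).
def durations(t_shock: float, t_in: float, t_out: float):
    sheath_hours = (t_in - t_shock) * 24.0
    ejecta_days = t_out - t_in
    return sheath_hours, ejecta_days

print(durations(4.12, 4.28, 6.44))   # cone CME, minimum of activity: ~3.8 h, ~2.16 d
print(durations(3.78, 4.76, 7.25))   # cone CME, maximum of activity: ~23.5 h, ~2.49 d
```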
Please note that these results are specific to the background chosen, so it is not clear how far they can be generalized. They can also be slightly affected by the method we selected to detect the ICME borders (see Appendix A for more details).
Figure 8: Comparison of the time evolution of the main physical quantities at 1 au at Earth position for the propagation of the cone CME. The time is the number of physical days that lasted the simulation. The blue line shows the minimum of activity case, while the orange line shows the maximum of activity case. From top to bottom, we show the radial velocity (in km/s), the proton number density (in cm\({}^{-3}\)), the temperature (in units of \(10^{4}\) K), the total magnetic field (in nT) and the plasma beta parameter. The arrival of the shock is shown by a vertical line (light blue for minimum of activity case, red for maximum of activity case). The crossing of the ejecta is shown by a colored window (light purple for minimum of activity, light red for maximum of activity).
In conclusion, these two analyses show that for our purely hydrodynamic CME, the change in activity we used mostly caused a geometric deviation: our minimum case CME is decelerated in the ecliptic plane and accelerated out of it due to the heliospheric structure; our maximum case presents a deviation towards the northern hemisphere. In the 1D profiles, the ICME shock at maximum of activity seems to arrive faster, but the ME arrives later than at minimum. It also reinforces the limitation of single-vantage-point in situ measurements, since our two different CMEs actually produced similar 1D speed profiles at Earth which do not reflect the major geometrical difference in 3D.
## 5 Reference Spheromak Case from Real Event
In this next section, we now use a different CME model with an intrinsic magnetic field, the spheromak CME (described in Section 2.3). With the previous section isolating the hydrodynamic effects, this allows us to better quantify the impact of the interaction of the CME magnetic field with the surrounding background. This becomes especially important since the magnetic field is more intense and more complex at maximum of activity. As explained in Section 3.2, we will first use parameters inspired by a real event from July 12, 2012, which has already been extensively studied with EUHFORIA and will act as a reference case. The input parameters are summed up in Table 1. With the internal magnetic field come three new parameters: the tilt, flux intensity and handedness.
In this first configuration, we inject a spheromak with positive handedness, which means the inner flux-rope is a right-handed helix at injection. In order to keep the study compact, negative handedness is only reported later (Section 6.3). For more information about the handedness, please refer to Appendix B. The resulting
Figure 9: Comparison of the reference spheromak CME propagation for the minimum (top panel) and maximum (bottom panel) of activity cases. In this case, both CMEs have positive handedness. We show the density in cm\({}^{-3}\), normalized to 1 au to better visualize the CME ejecta as an under-dense structure. We show the moment when the ejecta is reaching Earth (symbolized by a blue circle to the right of the Sun). For each case, we show the ecliptic (view from above, left panel), meridional (view from the side, middle panel) and spherical (view at 1 au, right panel) views in the HEEQ frame. The magnetic field line connecting the Earth to the Sun is shown as a dotted line. The polarity of the magnetic sectors is shown at the edges of the frames for reference (red for positive, blue for negative), separated by the HCS shown as a white line inside the domain. Animated versions of this figure are available in the online version, showing the full propagation of the reference spheromak CME from 0.1 au to the Earth for both the minimum and maximum of activity backgrounds, with the evolution of the radial magnetic field, density and radial velocity.
CMEs can be seen in Figure 9. We mainly recover effects similar to those observed in Section 4.2: at minimum of activity the CME appears to be cut in half due to the highly organized structure of the heliosphere, while at maximum of activity the CME is deflected to the northern hemisphere, thus producing a flank hit. When we compare with Figure 7, we can also see some differences. In both cases, the CME appears more structured and coherent, which is due to the inner magnetic field allowing for a more cohesive structure. This also produces a more defined shock at the front of the CME in both cases, especially visible in the density structure with a dark ring showing the corresponding over-density (the same color scale is used in both figures). At minimum of activity, the dislocation of the CME is less pronounced, due to the inner magnetic field counteracting the influence of the solar wind. This is especially visible in the spherical view, where this time the under-dense structure is visible not only at Earth's latitude, but also north and south of it. At maximum of activity, the CME is more elongated in latitude and presents a slight asymmetry, probably due to the more complex magnetic interaction and reconnection with the irregular HCS magnetic configuration.
To be more quantitative, we also show the 1D evolution at Earth in Figure 10, similar to Figure 8. To complete this figure, we also show this time the vertical deviation at 5 and 10 degrees north and south of the ecliptic plane as colored shaded areas around the curves, similar to what was done in Figure 17 of Scolini et al. (2019). This allows us
Figure 10: Comparison of the time evolution of the main physical quantities at 1 au at Earth position for the propagation of the reference spheromak CME with positive handedness. The time is the number of physical days that lasted the simulation. The blue line shows the minimum of activity case, while the orange line shows the maximum of activity case. From top to bottom, we show the radial velocity (in km/s), the proton number density (in cm\({}^{-3}\)), the temperature (in units of \(10^{4}\) K), the total magnetic field (in nT), the \(z\) components of the magnetic field in HEEQ coordinates (in nT) and the plasma beta parameter. The arrival of the shock is shown by a vertical line (light blue for minimum of activity case, red for maximum of activity case). The crossing of the ejecta is shown by a colored window (light purple for minimum of activity, light red for maximum of activity). We also show the vertical deviation at 5 and 10 degrees north and south of the ecliptic plane as colored shaded areas around the curves.
to display some estimation of the uncertainty of the ICME profile around the Earth's position. We can see in this case that the shock is indeed more defined in all the physical quantities, arriving at Earth around day 2.75. The ICME thus arrives faster than in the pure hydro case, even though the initial speed of the CME is much lower (but we recall from Section 3.2 that the full 3D speed is actually the same). Here the CME at maximum of activity still arrives first, but with only a 2-hour lead compared to the CME at minimum of activity, as seen more clearly in the radial velocity. The sheath is then followed by a magnetic ejecta clearly visible in the \(B_{z}\) component, until the parameters return to the pre-shock solar wind values. Once again, the speed of the magnetic ejecta is slightly higher for the minimum of activity case, especially in the deviations at 5 and 10 degrees, because of the acceleration by the polar solar wind. Density remains similar in both cases. The magnetic field component, this time representative of the structure of the inner flux-rope, is however different. At minimum of activity, \(B_{z}\) first rises to 20 nT, before going through a -8 nT phase at the end of the ejecta. On the other hand, at maximum of activity \(B_{z}\) always remains positive, but under 20 nT. This difference in \(B_{z}\) is very likely due to the difference in geometry discussed above, which results in a different impact parameter for the two cases. In this scenario, neither of the CMEs is geo-effective, as their \(B_{z}\) components are both mostly positive.
This case shows that with an internal magnetic field, we retrieve similar results for the geometry of the propagation of the CMEs, except that the ejections are more cohesive thanks to their internal flux-rope. In this case, the CMEs arrive at a very similar time at Earth, with only a 2-hour lead in the maximum of activity case.
## 6 Median Spheromak Case
The previous results are interesting, but one can argue that they may hold only for the specific case we are studying. Although we realize and acknowledge that our results are intrinsically dependent on the choice of our solar wind background and CME model, we include one last case that aims at being slightly broader. To do so, we have defined in Section 3.2 a median CME, based on the statistical analysis of ACE data from Regnault et al. (2020), combined with extrapolations from 1 au to 0.1 au by Scolini et al. (2021). The selected parameters are detailed in Table 1. We recall that compared to the previous case, the median CME is slower (541 vs. 763 km/s at injection), denser (2 vs. 1 \(10^{-18}\) kg/m3 at injection), hotter (6.2 vs. 2.4 \(10^{5}\) K at injection) and less magnetized (2.3 vs. 10 \(10^{13}\) Wb of flux at injection). We now perform the same study as in the previous section, but for this median CME, in order to see how far our results can be generalized.
### Positive handedness
Once again, we start by injecting a spheromak CME with positive handedness (H=+1) in both solar wind backgrounds. For clarity, we first focus on the 1D profiles, visible in Figure 11. Similar to Figure 10, the CME at maximum of activity arrives slightly earlier, but this could be due to the fact that the shock is steeper (especially visible in the velocity). What is surprising is that, contrary to the reference case where both CMEs had mostly positive \(B_{z}\), in this case the CME at maximum of activity has a positive \(B_{z}\) while the CME at minimum of activity has a clearly negative component (end of day 3). Moreover, the ICME at minimum of activity has a longer ejecta, which means it takes more time to get back to a normal solar wind state after the crossing of the structure (the ejecta lasts 2.1 days, vs. 1.6 days at maximum of activity). This means that the median case is more geo-effective than the reference one. This is surprising, because the reference case was based on a CME event which proved to be geo-effective through interaction with the wind background (Hu et al., 2016; Marubashi et al., 2017; Gopalswamy et al., 2018; Scolini et al., 2019), although it had initially been underestimated by the space-weather community (Webb and Nitta, 2017). This negative \(B_{z}\) is also significant in magnitude, reaching -15 nT. This is also surprising since the median CME is slower and less magnetized than the reference case. This highlights the fact that the geo-effectiveness of a CME does not depend only on the input CME parameters, but also on its interaction with its background and especially the HCS.
### Slicing effect at minimum of activity
Another interesting result we can derive from this study is a comparison of the ICME profiles at minimum of activity. We focus on this specific phase of the cycle because, surprisingly, this is where we see the most differences between our cases. We would have expected the ICME at maximum of activity to show more disparity because of the complex wind background, but this complexity actually constrains the magnetic ejecta. As a result, the input parameters or even the modeling of the ICME (cone vs. spheromak) have little effect on the final profiles (in 2D or 1D, as can be seen in the previous figures), affecting only the size of the ejecta (by 0.1 au at most) but not its shape.
At minimum of activity however, the high structuring of the corona allows for more distinct effects that we can quantify. In Figure 12, we compare the meridional profile for our three cases described in Table 1 at minimum of
activity. From left to right, we show the median spheromak, the reference spheromak and the cone model. We show here the cuts in density to visualize the sheath as an over-density and the magnetic ejecta as an under-density.
We find again the slicing effect that we noticed before, where the high-latitude fast wind carries the northern and southern parts of the ICME faster, while the equatorial part is slowed down by the slow equatorial wind. These three cases allow us to understand that this effect is not due to the modeling of the ICME, because we find it for both the cone and spheromak models (although the effect is reduced for the spheromak, probably because its internal magnetic structure makes it less sensitive to the wind background). The amount of slicing is highly sensitive to the input speed of the CME. The median and reference spheromaks use the same modeling, and yet the median case barely shows this effect. The major difference is the input speed: the reference case has an input speed closer to the fast wind, while the median case has an input speed closer to the slow wind. This indicates that this effect is not due so much to the acceleration caused by the fast wind, but rather to the slow-down caused by the slow equatorial wind. Then,
Figure 11: Comparison of the time evolution of the main physical quantities at 1 au at Earth position for the propagation of the median spheromak CME with positive handedness. The time is the number of physical days that lasted the simulation. The blue line shows the minimum of activity case, while the orange line shows the maximum of activity case. From top to bottom, we show the radial velocity (in km/s), the proton number density (in cm\({}^{-3}\)), the temperature (in units of \(10^{4}\) K), the total magnetic field (in nT), the \(z\) component of the magnetic field in HEEQ coordinates (in nT) and the plasma beta parameter. The arrival of the shock is shown by a vertical line (light blue for minimum of activity case, red for maximum of activity case). The crossing of the ejecta is shown by a colored window (light purple for minimum of activity, light red for maximum of activity). We also show the vertical deviation at 5 and 10 degrees north and south of the ecliptic plane as colored shaded areas around the curves. Animated movies corresponding to this figure are available in the online version, showing the full propagation of the median spheromak CME with positive handedness in 2D cuts (equatorial, meridional and spherical) from 0.1 au to the Earth for both the minimum and maximum of activity backgrounds, with the evolution of the radial magnetic field, density and radial velocity.
as the ICME is faster, its equatorial part is much more slowed down, causing this geometric separation. The effect is even stronger for the cone model: the input speed appears to be even more important, but we recall that the definition of the input speed is slightly different for the cone model (as it does not include an expansion speed as in the spheromak model); we also recall that the absence of an internal magnetic field makes the ICME more sensitive to the background wind.
This difference in speed behavior was also seen in ACE data in Regnault et al. (2020), for example. We know that at maximum of activity and during the declining phase, active regions have higher magnetic fluxes, which triggers more extreme events. But we could assume, beyond this hypothesis, that even if an extreme event were to happen at minimum of activity, it would be slowed down drastically by the solar wind configuration and thus result in lower speeds detected at Earth, just like what is seen in the ACE data (see also Chi et al. (2016) and Wu & Lepping (2016)). This result shows that the structuring of the wind itself has a strong influence on the ICME propagation, and not just the wind speed or specific structures at stream interfaces, and calls for more careful and realistic modeling of the solar corona.
### Negative handedness
One final parameter study we perform is to switch the handedness from positive to negative (\(H=-1\)) for the median case. In order to better compare all the cases and discuss our conclusions, we have summarized all the arrival times derived in the previous figures in Table 2.
We plot the 1D profiles at Earth in Figure 13, with the same labels and visualization as before. This figure should be compared with Figure 11 to see the effect of the handedness. Here we can see a slight numerical artifact produced by EUHFORIA because of strong shocks, where the velocity drops suddenly before rising again. It can locally disrupt certain physical quantities (like the \(\beta_{p}\) plasma parameter), but this effect is purely local and does not impact the overall results and conclusions. At maximum of activity, the change of handedness causes the ICME to be slightly delayed at Earth, reaching 1 au after 3.65 days of propagation instead of 3.51 days for positive handedness. This delay can
Figure 12: Comparison of the meridional view of the ICME at minimum of activity depending on the input radial speed. Each panel shows the meridional cut passing across Earth in the simulation (in the \(x-z\) plane in HEEQ coordinates) when the ICME reaches Earth (blue point at 1 au). We show the density in order to visualize the magnetic ejecta as an under-density. The left panel shows the case for the median spheromak CME (Section 6.1), the middle panel the case for the reference spheromak CME (Section 5) and the right panel the case for the cone CME (Section 4.2). Each panel has the input CME speed at 0.1 au as a label. See Table 1 for a reminder of all the differences between the models. We see then that the faster the CME is, the more delay we observe between the equatorial and the polar parts of the ejecta, which means it is caused by the equatorial solar wind deceleration.
be explained by the fact that the ICME speed has been reduced (the speed at the shock is around 400 km/s vs. 500 km/s previously). The ejecta also lasts longer, with a return to solar wind parameters after 2.1 days instead of 1.66 days. More surprisingly, the ICME becomes slightly geo-effective, with a visible negative \(B_{z}\) after day 4. However, these effects are minor compared to the minimum of activity case.
Once again, the case with the most disparity is surprisingly the one at minimum of activity. The most surprising feature is that the ICME at minimum of activity now arrives first, before the ICME at maximum of activity (this is the only case we have shown where this happens): the shock moves to day 3.31 instead of 3.58. The shock is also better defined, with a steeper slope. As at maximum of activity, the ICME also becomes more geo-effective: for negative handedness, the \(B_{z}\) reaches -20 nT and stays negative, which would be the conditions for a mild geomagnetic storm; whereas for positive handedness, the \(B_{z}\) component would reach only -15 nT, and immediately go back to positive afterwards.
Figure 13: Comparison of the time evolution of the main physical quantities at 1 au at Earth position for the propagation of the median spheromak CME with negative handedness. The time is the number of physical days that lasted the simulation. The blue line shows the minimum of activity case, while the orange line shows the maximum of activity case. From top to bottom, we show the radial velocity (in km/s), the proton number density (in cm\({}^{-3}\)), the temperature (in units of \(10^{4}\) K), the total magnetic field (in nT), the \(z\) components of the magnetic field in HEEQ coordinates (in nT) and the plasma beta parameter. The arrival of the shock is shown by a vertical line (light blue for minimum of activity case, red for maximum of activity case). The crossing of the ejecta is shown by a colored window (light purple for minimum of activity, light red for maximum of activity). We also show the vertical deviation at 5 and 10 degrees north and south of the ecliptic plane as colored shaded areas around the curves. Animated movies corresponding to this figure are available in the online version, showing the full propagation of the median spheromak CME with negative handedness in 2D cuts (equatorial, meridional and spherical) from 0.1 au to the Earth for both the minimum and maximum of activity backgrounds, with the evolution of the radial magnetic field, density and radial velocity.
The above result is interesting because it shows several properties of this case: the handedness can affect the ICME arrival time as well as its geo-effectiveness, and these effects are more visible at minimum of activity than at maximum. One explanation we have for this phenomenon is shown in Figure 14. We show the meridional profile of the ICME hitting Earth for positive (left panel) and negative handedness (middle panel). For positive helicity, the ICME structure is typical, with an expanding spherical sheath followed by the magnetic ejecta. For negative helicity, however, the sheath is more concentrated towards the equator, while the magnetic ejecta structure is not so visible. It almost seems that a second sheath is forming behind the main one, with an overextended perturbation of the ejecta in latitude.
Although the two injected CMEs have the same input speed, the one with negative helicity is accelerated immediately after injection (at only 0.11 au, for an injection at 0.1 au), as seen in the right panel of Figure 14. This suggests that the rapid reconfiguration of the spheromak after injection, as it adjusts to the wind background, leads in the negative handedness case to a separation of the ICME into an accelerated first part followed by a decelerated one. Since we only changed the magnetic structure, we assume that this is the result of reconnection effects with the heliospheric magnetic field. That would also explain why this effect is more visible at minimum of activity: the current sheet is closer to the equator, and hence closer to the injection point as well as to the propagation path of the ICME.
Such effects of the handedness have been previously investigated by Chane et al. (2005, 2006). They have indeed shown that the handedness itself can change the geo-effectiveness of the ICME as well as its arrival
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline & & Shock arrival & Ejecta arrival & Ejecta end & Sheath duration (hours) & Ejecta duration (days) \\ \hline \hline Cone & Minimum & 4.12 & 4.28 & 6.44 & 3.83 & 2.16 \\ \hline Cone & Maximum & 3.78 & 4.76 & 7.25 & 23.67 & 2.49 \\ \hline Reference & Minimum & 2.73 & 3.07 & 5.73 & 8.17 & 2.66 \\ \hline Reference & Maximum & 2.67 & 3.17 & 6.36 & 12.01 & 3.19 \\ \hline Median (positive H) & Minimum & 3.58 & 3.82 & 6.31 & 5.67 & 2.49 \\ \hline Median (positive H) & Maximum & 3.51 & 3.73 & 5.39 & 5.17 & 1.66 \\ \hline Median (negative H) & Minimum & 3.31 & 3.49 & 6.51 & 4.17 & 3.02 \\ \hline Median (negative H) & Maximum & 3.65 & 3.75 & 5.85 & 2.33 & 2.10 \\ \hline \end{tabular}
\end{table}
Table 2: Table summarizing the key times found for each CME case and each activity level (minimum or maximum of activity). We indicate the shock arrival time (in days), the ejecta arrival time (in days), the ejecta end time (in days), and deduce from them the sheath duration (in hours) and the ejecta duration (in days).
Figure 14: Comparison of the meridional profile of the ICME for positive (left panel) and negative (middle panel) handedness. We show the \(x-z\) plane in HEEQ coordinates when the ICME reaches Earth (blue point at 1 au). We show the proton number density in cm\({}^{-3}\). We can see that for the negative handedness case, the CME is elongated with a double sheath, probably due to the acceleration it gets from initial reconfiguration at 0.1 au. In the right panel, we show the radial evolution of the mean speed inside the ejecta for positive (green) and negative handedness (red).
time, but only for a purely dipolar background configuration. Our results seem to extend this conclusion to more realistic backgrounds: at minimum of activity, the background is still dipolar enough for the result to apply, but at maximum of activity the background becomes too complex and the influence of the handedness becomes less dominant. It is however difficult to fully generalize this result, as the CME in our case is only injected at 0.1 au, thus missing the propagation in the lower corona; for a more accurate result and a better comparison, we would need to incorporate this phase as well, as in Talpeanu et al. (2022).
For verification, we have performed the same inversion of handedness for the reference spheromak case. We found similar effects, although reduced compared to the median case (as expected, due to the more intense internal magnetic field that makes the ICME less sensitive to the background). We retrieved the acceleration of the ICME at minimum of activity, although the internal separation was more difficult to see. This means that our results also apply to more extreme cases that can be seen at Earth. Finally, since the spheromak model is not connected to the Sun, it is more likely than other models to self-reconfigure, especially magnetically. It would be interesting to see if we could reproduce the same result with a CME model that has legs connected back to the Sun, such as Fri3D (Isavnin, 2016; Maharana et al., 2022), the Gibson & Low model (Gibson & Low, 1998), or the modified Titov-Demoulin model (Titov et al., 2014; Regnault et al., 2023; Linan et al., 2023).
## 7 Discussion and Conclusion
In this study, we investigate the role of the solar cycle in the propagation of ICMEs using numerical simulations. To do so, we start with a theoretical study that has an exploratory purpose. We select two dates that are representative of solar minimum (15th of December 2008) and solar maximum (20th of March 2015), based on previous studies. Then we use synoptic maps (GONG and GONG-ADAPT, respectively) to drive the EUHFORIA model (Pomoell & Poedts, 2018) and compute the corresponding state of the heliosphere. Finally, we inject the same ICME within these two backgrounds, and quantify the differences and their origins, in order to better understand what to expect from a propagation at minimum versus a propagation at maximum of activity. We use several models for the ICME in order to test the robustness of our results: first a cone model to check for the hydrodynamical effects, then a linear force-free spheromak model to see the effect of an internal magnetic field. We also use parameters that were based on a real event (the CME that caused the geomagnetic storm on the 12th of July 2012), as well as parameters derived to obtain a median ICME (based on ACE observations and EUHFORIA scaling, see Regnault et al. (2020) and Scolini et al. (2021)).
We showed that the selected solar wind backgrounds yield similar speeds at Earth, but with very different structures. At minimum of activity, because the magnetic field of the Sun is mostly dipolar, the inner heliosphere is very organized, with slow wind at the equator and fast wind at the poles, while the HCS is close to the ecliptic plane. At maximum of activity, on the contrary, coronal holes are more frequent at lower latitudes, leading to slow and fast wind everywhere, as well as a magnetic field with more complex structures and more polarity reversals.
With a purely hydrodynamical ICME, we observe that the 1D speed profile at Earth is very similar, though the 3D structure of the ICME is very different. At minimum, the ICME is a direct hit at Earth, although its core is slowed down by the slow equatorial wind, while the rest is accelerated by the fast polar wind. At maximum, the ICME is a flank hit, due to a northward deflection caused by a fast wind stream originating from a coronal hole. The ICME at maximum of activity arrives first at Earth with a 10-hour lead. These geometrical results remain true for a magnetized ICME, although the inner magnetic field allows the ICME to suffer less deformation and much less drag along the propagation, since the ICME reaches 1 au faster with a lower initial velocity. In this case, we can compare the geo-effectiveness of the two ICMEs by checking how negative their \(B_{z}\) component is at Earth. For a positive handedness (H=+1), neither is geo-effective (positive \(B_{z}\)), but for a negative handedness (H=-1), it is the ICME at minimum of activity which is actually the most geo-effective (-25 vs. -20 nT). The ICME at maximum of activity still arrives first at Earth, but only with a 2-hour lead. These results remain true even for a median ICME. In this case, the difference in geo-effectiveness is even larger (-25 vs. -5 nT). This could explain why fast halo CMEs observed at solar maximum activity (2002) were poorly geo-effective (Schmieder et al., 2020). There is also a more distinct difference in arrival time: for a positive handedness, the ICME at maximum of activity arrives first with a 3-hour lead; with a negative handedness, it is the ICME at minimum of activity that arrives first with an 8-hour lead. We recover results obtained in Chane et al. (2006) for the minimum of activity, but show that the influence of the CME handedness is less dominant at maximum of activity. This could affect forecasts, as it suggests that providing the handedness of the CME is only crucial at minimum of activity. This seems to be due to the self-reconfiguration
of the ICME at injection, influenced by the dipolar magnetic background, which accelerates the most favorable initial configuration. Another interesting result we obtained is that the deformation of the ICME at minimum of activity, caused by the structuring of the solar wind, depends on the speed of the ICME: the faster it is, the more important this effect will be, because the core of the ICME will be decelerated even more noticeably.
In conclusion, we have shown that the same ICME will propagate very differently during solar minimum and maximum. The main factors are the organization of the solar wind, which can cause slow-downs or accelerations, but also the organization of the heliospheric magnetic field, which can cause magnetic reconnection, allowing the overtaken plasma to pile up less in front of the ICME. In the cases studied, the ICME at minimum of activity was often the most geo-effective, which shows that the most powerful events will not necessarily happen at maximum of activity. This reinforces the need to quantify as precisely as possible the coronal hole locations to anticipate deflections, as well as the HCS position and the ICME handedness to anticipate reconnection effects.
This study is a first step towards better understanding and quantifying the impact of the solar cycle on ICME propagation. This will become more and more important as solar cycle 25 is on the rise, and space-weather forecasting facilities aim at delivering more and more reliable forecasts. To reach this goal, there are of course many ways to widen the scope of this study. A first natural step would be to include more intermediate states along a solar cycle (for example, following the previous cycle, number 24) and see how the same ICME propagates. This would allow us to better identify the key features that alter the propagation and when they change. Also, we could use other CME models (Fri3D, Gibson & Low, etc., Maharana et al. (2022); Linan et al. (2023)), as well as other heliospheric models such as ENLIL (Odstrcil et al., 2004), to test even further the robustness of our results. We could also focus on reproducing specific events to validate our understanding of the key features of the heliosphere and their interaction with the ICME. Recent solar missions (such as PSP and Solar Orbiter) as well as future ones (such as PUNCH and Vigil) will provide even more data to add better constraints on the features of the inner heliosphere and the observed ICMEs. Finally, an important step would be to include the CME initialization inside the solar corona, in order to take into account the impact of the structures close to the Sun on the early propagation of the CME (similar to Lynch et al. (2022), for example).
The authors would like to thank Anwesha Maharana and Camilla Scolini for interesting discussions and important feedback. This project has also received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 870405 (EUHFORIA 2.0) and the ESA project "Heliospheric modelling techniques" (Contract No. 4000133080/20/NL/CRS). SP acknowledges support via the projects C14/19/089 (C1 project Internal Funds KU Leuven), G.0B58.23N (FWO-Vlaanderen), SIDC Data Exploitation (ESA Prodex-12), and Belspo project B2/191/P1/SWiM. The resources and services used in this work were provided by the VSC (Flemish Supercomputer Centre), funded by the Research Foundation - Flanders (FWO) and the Flemish Government. Data were acquired by GONG instruments operated by NISP/NSO/AURA/NSF with contribution from NOAA. This work utilizes data produced collaboratively between AFRL/ADAPT and NSO/NISP. We recognise the collaborative and open nature of knowledge creation and dissemination, under the control of the academic community as expressed by Camille Nous at [http://www.cogitamus.fr/indexen.html](http://www.cogitamus.fr/indexen.html).
## Appendix A Determination of the CME Shock, Sheath and Magnetic Ejecta
An ICME exhibits different substructures that are essential to distinguish, due to the different underlying physics in each of them. In this study, we need to be able to distinguish between the CME-driven shock, the CME sheath and the magnetic ejecta (also called magnetic cloud in some studies, e.g. when the presence of a flux-rope can be confirmed). To do so, we need to determine the shock time \(t_{shock}\) (which marks the beginning of the sheath), the end of the sheath and thus the beginning of the magnetic ejecta \(t_{in}\), and the end of the magnetic ejecta \(t_{out}\).
To determine \(t_{shock}\) at various distances and for the various cases, we use a modified version of the criterion used in Scolini et al. (2021):
\[\left(v(t_{i})-v(t_{i}-\Delta t)\geqslant v_{thresh}\right)\quad\mathrm{OR}\quad\left(\frac{n(t_{i})}{n(t_{i}-\Delta t)}\geqslant n_{thresh}\right)\quad\mathrm{OR}\quad\left(\frac{B(t_{i})}{B(t_{i}-\Delta t)}\geqslant B_{thresh}\right),\] (A1)
where \(t_{i}\) is a generic time in the time series, \(\Delta t\) is a time delay used to compare with the steady wind state before the event, and \(v_{thresh}\), \(n_{thresh}\) and \(B_{thresh}\) are threshold parameters used to identify the shock. Scolini et al. (2021) set their thresholds to the following values: \(v_{thresh}=20\) km/s, \(n_{thresh}=1.2\), \(B_{thresh}=1.2\). However, these were for a comparison between the CME run and the corresponding wind-only simulation, which means they used a wind model as reference. In our case, we use only one CME run and compare the data at the present time with the data at a previous time. Their values were also optimized for a specific wind background at maximum of activity. This means that our threshold values need to be different. We have thus adjusted these values by trying different combinations, and selected the most robust and efficient ones. The final parameter values are: \(\Delta t=10\) (with 10 minutes between each output, this means an interval of around 1.6 hours), \(v_{thresh}=30\) km/s, \(n_{thresh}=1.5\), \(B_{thresh}=1.5\) (\(n_{thresh}\) and \(B_{thresh}\) do not have units because they are ratios). Since we have a minimum of activity configuration, the HCS is close to the equatorial plane and, as a result, the magnetic field can locally become very close to 0, producing false detections of the shock because of these low values. To avoid this, we have set an additional threshold of 1 G for the local magnetic field. On top of the automatic selection given by this criterion, systematic visual verification has been made to ensure the validity of the results.
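The sketch below illustrates how criterion (A1) can be applied to a time series with the thresholds listed above; the function name and the guard against near-zero fields are our own illustrative choices, not EUHFORIA's internal routines.

```python
# Illustrative implementation of the shock-detection criterion (A1).
def detect_shock(v, n, b, dt_steps=10, v_thresh=30.0, n_thresh=1.5,
                 b_thresh=1.5, b_floor=1.0):
    """Return the first index i satisfying eq. (A1), or None.

    v [km/s], n [cm^-3] and b [nT] are time series sampled every 10 minutes,
    so dt_steps=10 corresponds to the ~1.6 h interval quoted above; b_floor is
    the floor on the local field used to avoid false detections near the HCS.
    """
    for i in range(dt_steps, len(v)):
        if b[i - dt_steps] < b_floor:     # skip points where B is close to zero
            continue
        jump_v = v[i] - v[i - dt_steps] >= v_thresh
        jump_n = n[i] / n[i - dt_steps] >= n_thresh
        jump_b = b[i] / b[i - dt_steps] >= b_thresh
        if jump_v or jump_n or jump_b:
            return i
    return None
```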
For \(t_{in}\), we use a criterion based on the plasma \(\beta\). This criterion is adapted from in situ solar wind measurements (Lepping et al., 2005) and was already used in Scolini et al. (2021). In the observations, the sheath ends (and the magnetic ejecta begins) when \(\beta_{p,obs}\leqslant 0.3\). For EUHFORIA simulations, the threshold can range from 0.1 to 1; Scolini et al. (2021) found that a value of 0.5 yielded good results in their cases. For our cases, a value of 0.1 usually gives the best results. However, we sometimes had to adjust it manually in order to obtain a magnetic ejecta selection consistent with the other physical quantities. For clarity, the value of \(\beta\) used for the border selection is indicated in Table 3.
\(t_{out}\) is set when the \(\beta\) parameter rises back above the same threshold as used for \(t_{in}\). We make an exception for reconnection effects that can occur within the magnetic ejecta and usually perturb \(\beta\) for only a few data points: such short spikes in the \(\beta\) parameter are not taken into account.
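A compact sketch of how \(t_{in}\) and \(t_{out}\) could be extracted from the plasma \(\beta\), with a simple tolerance for short reconnection spikes, is given below; the maximum spike length is an assumed parameter, not a value quoted in the text.

```python
import numpy as np

def me_boundaries(t, beta, t_shock, beta_thresh=0.1, max_spike=3):
    """Return (t_in, t_out) from the plasma beta after the shock time.

    t_in is the first time after t_shock with beta <= beta_thresh; t_out is the
    last point of the ejecta, ignoring excursions above the threshold lasting at
    most max_spike consecutive points (assumed tolerance for reconnection spikes).
    """
    start = np.searchsorted(t, t_shock)
    inside = np.flatnonzero(beta[start:] <= beta_thresh)
    if inside.size == 0:
        return None, None
    i_in = start + inside[0]
    i_out, run = i_in, 0
    for i in range(i_in, len(t)):
        if beta[i] > beta_thresh:
            run += 1
            if run > max_spike:      # sustained rise: the magnetic ejecta has ended
                break
        else:
            run, i_out = 0, i
    return t[i_in], t[i_out]
```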
Figure 15 shows how these various criteria manage to adapt to different cases to detect automatically all the substructures. The left panel shows the same case as the left panel in Figure 3, which is the median CME at minimum
Figure 15: Examples of detection of the shock, sheath and magnetic cloud for various cases. For each case, the first row is the total magnetic amplitude in nT, the second row the total velocity amplitude in km/s, the third row the temperature in \(10^{4}\) K, the fourth row the number density for the protons in cm\({}^{-3}\) and the last row \(\beta\) parameter of the plasma. The left panel shows the median case at minimum of activity at 1 au. The right panel shows the same case, but at 0.4 au instead of 1 au. A yellow rectangle highlights the region corresponding to the sheath, and a blue rectangle the region corresponding to the magnetic cloud.
of activity at 1 au. Some reconnection happens within the magnetic ejecta, as the \(\beta\) spikes above 0.2 on December 18 around noon. However, the magnetic ejecta has not ended yet, and our criterion can adapt to these cases. The right panel shows the same case, but at 0.4 au instead of 1 au. An adjustment of \(\Delta t\) can be needed for artificial satellites closer to the injection point, because the shock steepens with distance; in such cases we usually divide \(\Delta t\) by 2.
## Appendix B Handedness of a spheromak
The magnetic field in the spheromak model is expressed as follows (in the local spherical coordinate \((r,\theta,\phi)\) frame in which the origin is the center of the spheromak):
\[B_{r} =2B_{0}\,\frac{j_{1}\left(\alpha r\right)}{\alpha r}\,\cos\theta, \tag{B2}\]
\[B_{\theta} =-B_{0}\left[\frac{j_{1}\left(\alpha r\right)}{\alpha r}+\partial_{r}j_{1}\left(\alpha r\right)\right]\sin\theta, \tag{B3}\]
\[B_{\phi} =H\,B_{0}\,j_{1}\left(\alpha r\right)\sin\theta, \tag{B4}\]
where \(B_{0}\) is a parameter determining the magnetic field strength, \(j_{1}(x)\) is the spherical Bessel function of order one, \(\alpha\) is chosen so that \(\alpha r_{0}\) is the first zero of \(j_{1}(x)\), which yields \(\alpha r_{0}\approx 4.4934\) (\(r_{0}\) is the radius of the spheromak), and \(H\) is the handedness.
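For illustration, Eqs. (B2)-(B4) can be evaluated numerically as in the sketch below. Here \(\partial_{r}j_{1}(\alpha r)\) is interpreted as the derivative of \(j_{1}\) with respect to its dimensionless argument \(\alpha r\); this reading, and the function interface, are assumptions made for the example.

```python
import numpy as np
from scipy.special import spherical_jn

ALPHA_R0 = 4.4934  # first zero of the spherical Bessel function j_1


def spheromak_field(r, theta, B0, r0, H=1):
    """Magnetic field of the spheromak (Eqs. B2-B4) in its local spherical frame.

    r, theta: spherical coordinates (r in the same units as the radius r0);
    B0: field-strength parameter; H: handedness (+1 or -1).
    """
    x = ALPHA_R0 * np.asarray(r) / r0            # alpha * r
    j1 = spherical_jn(1, x)
    dj1 = spherical_jn(1, x, derivative=True)    # derivative with respect to alpha*r
    B_r = 2.0 * B0 * (j1 / x) * np.cos(theta)
    B_theta = -B0 * (j1 / x + dj1) * np.sin(theta)
    B_phi = H * B0 * j1 * np.sin(theta)
    return B_r, B_theta, B_phi
```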
Figure 16: 3D representation of the magnetic configuration of the initial spheromak. We represent three sets of magnetic field lines: the blue ones correspond to the outer edge of the spheromak, the red ones to the axis of the internal flux rope, and the white ones to the orientation of the flux rope field lines. In the left panel, the handedness is positive since the flux-rope corresponds to a right-handed helix. In the right panel, the handedness is negative since the flux-rope corresponds to a left-handed helix. Credits: Camilla Scolini.
| CME Case | Activity level | \(\beta\) threshold | Corresponding Figure |
|---|---|---|---|
| Cone | Minimum | 4.0 | 8 |
| Cone | Maximum | 0.15 | 8 |
| Reference | Minimum | 0.1 | 10 |
| Reference | Maximum | 0.1 | 10 |
| Median (positive H) | Minimum | 0.25 | 11 |
| Median (positive H) | Maximum | 0.1 | 11 |
| Median (negative H) | Minimum | 0.5 | 13 |
| Median (negative H) | Maximum | 0.1 | 13 |

Table 3: Values of the \(\beta\) parameter used as threshold for the detection of the borders of the CME sheath.
The handedness is a dimensionless parameter that can be equal to either +1 or -1. Only the azimuthal component of the magnetic field is affected by the handedness. In Figure 16, we show a 3D representation of the resulting spheromak magnetic field (in the meridional plane) for positive and negative handedness. The handedness can be linked to the sign of the magnetic helicity of the originating active region, which is conserved over time (Berger, 2005). The magnetic helicity quantifies how much the magnetic field is sheared and twisted; the handedness only retains its sign. Spheromaks with a positive handedness (\(H=+1\)) exhibit a right-handed helical flux rope, while spheromaks with a negative handedness (\(H=-1\)) exhibit a left-handed helical flux rope (shown by the torus located in the central part of the spheromak). The handedness can be estimated using empirical relationships such as the hemispheric rules (a majority of positive handedness in the southern hemisphere and of negative handedness in the northern hemisphere, Bothmer & Schwenn, 1998; Pevtsov et al., 2008), or by analyzing morphological features or the photospheric magnetic field of the active region and/or erupting filament (Demoulin & Pariat, 2009; Palmerio et al., 2017).
There is sometimes confusion with other names used for quantities that have the same or a similar function. The name "chirality" is used interchangeably with handedness (as, for example, in Shiota & Kataoka (2016) or Scolini et al. (2019)). Confusion may also arise when the handedness is referred to as a "helicity parameter", as in Jin et al. (2017). However, the handedness is not the helicity; it is only its sign.
|
2308.03876 | Denoising diffusion models with geometry adaptation for high fidelity
calorimeter simulation | Simulation is crucial for all aspects of collider data analysis, but the
available computing budget in the High Luminosity LHC era will be severely
constrained. Generative machine learning models may act as surrogates to
replace physics-based full simulation of particle detectors, and diffusion
models have recently emerged as the state of the art for other generative
tasks. We introduce CaloDiffusion, a denoising diffusion model trained on the
public CaloChallenge datasets to generate calorimeter showers. Our algorithm
employs 3D cylindrical convolutions, which take advantage of symmetries of the
underlying data representation. To handle irregular detector geometries, we
augment the diffusion model with a new geometry latent mapping (GLaM) layer to
learn forward and reverse transformations to a regular geometry that is
suitable for cylindrical convolutions. The showers generated by our approach
are nearly indistinguishable from the full simulation, as measured by several
different metrics. | Oz Amram, Kevin Pedro | 2023-08-07T19:09:33Z | http://arxiv.org/abs/2308.03876v3 | # Denoising diffusion models with geometry adaptation for high fidelity calorimeter simulation
###### Abstract
Simulation is crucial for all aspects of collider data analysis, but the available computing budget in the High Luminosity LHC era will be severely constrained. Generative machine learning models may act as surrogates to replace physics-based full simulation of particle detectors, and diffusion models have recently emerged as the state of the art for other generative tasks. We introduce CaloDiffusion, a denoising diffusion model trained on the public _CaloChallenge_ datasets to generate calorimeter showers. Our algorithm employs 3D cylindrical convolutions, which take advantage of symmetries of the underlying data representation. To handle irregular detector geometries, we augment the diffusion model with a new geometry latent mapping (GLaM) layer to learn forward and reverse transformations to a regular geometry that is suitable for cylindrical convolutions. The showers generated by our approach are nearly indistinguishable from the full simulation, as measured by several different metrics.
Preprint: FERMILAB-PUB-23-384-CSAID-PPD
## I Introduction
High quality simulation plays a crucial role in modern particle physics experiments. Most experiments rely on the Geant4[1, 2, 3] toolkit to simulate interactions of particles with their detector. Achieving accurate results requires simulating the interactions of both the primary particle incident on the detector and the numerous secondary particles produced through interactions with the detector material. For this reason, simulations of calorimeters, which are designed to capture the energy produced by the shower of secondary particles, usually require the most computational resources. Simulating calorimeters currently consumes a significant fraction of the computing resources of modern collider experiments [4]. The problem will be exacerbated at the High Luminosity LHC, which will feature larger data volumes, more complex detectors [5], and a higher pileup environment. Future high granularity detectors will require more computational resources to simulate because of their more complex geometries and higher levels of precision [6]. At the same time, reconstruction will require a larger fraction of the computing budget because of the expected superlinear scaling of important algorithms with increasing pileup [7].
These resource constraints mean that full, detailed detector simulation using Geant4 will not be possible for every simulated event. Instead, 'fast simulation' methods that approximate the output of Geant4 using less computation will be employed. Most major experiments have developed fast simulation frameworks based on parametric approximations manually tuned to Geant4 [8, 9, 10, 11, 12, 13]. These parametric models generally suffer from deficiencies in modeling detailed observables of calorimeters, limiting their usage in physics analysis.
In order to overcome these challenges, machine learning (ML) models are increasing in popularity as fast surrogate models for Geant4 [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32] (see Ref. [33] for an overview and Ref. [34] for a recent review). These techniques borrow from the growing field of ML-based generative modeling, which has made significant advances in recent years.
In high energy physics (HEP), the first class of generative models proposed for this purpose were generative adversarial networks (GANs) [14]. GANs are trained by iterating between a 'generator network' that learns to produce artificial samples and a 'discriminator network' which attempts to distinguish the artificial samples from true ones. GANs are able to generate high quality showers orders of magnitude faster than Geant4. The ATLAS experiment has now employed calorimeter GANs in their fast simulation framework [13]. GANs are also used for fast simulation by the LHCb [35] experiment and are being explored for emulating the high granularity, 7.5M channel, pixel detector of Belle-II [27]. However, GAN training does not reliably converge because the two competing objectives create a saddle point in the loss space rather than a minimum. Additionally, GANs are known to suffer from'mode collapse', in which the generator network only learns to produce samples from a subset of the full data space.
Variational Autoencoders (VAEs) have also been proposed for calorimeter simulation [21, 17, 22]. A VAE consists of an encoder, which maps the input data to a smaller latent space, and a decoder, which maps the latent space back to the original data. A VAE is distinguished from a regular autoencoder by forcing the latent space to follow a multivariate Gaussian distribution via an additional term in the training loss. New samples can then be generated by drawing random samples from a multivariate Gaussian in the latent space and applying the decoder model. However, VAEs on their own do not seem to have the expressive power of GANs and other state-of-the-art models and generally achieve worse quality on complex high-dimensional data such as calorimeter
showers. Refs. [17; 21] instead use a bounded information bottleneck AE (BIB-AE), which is a novel combination of the VAE and GAN architecture.
Normalizing flows (NFs) have also been proposed for calorimeter simulations [19; 26; 31]. NFs are based on a series of invertible transformations that convert the input distributions to multivariate Gaussians. Once trained, new samples can be generated by sampling the Gaussian space and applying the inverse transformations to convert to the data space. However, as the dimensionality of the data has to be preserved in each stage of the flow, it can be difficult to scale NFs to very high-dimensional data.
Recently, a new class of models has become dominant in ML image generation tasks: denoising diffusion models [36; 37; 38]. In this work, we explore the use of denoising diffusion models to generate calorimeter showers. Diffusion models are based on a 'noising process' that continuously perturbs an image until it is degraded to pure noise. A 'denoising model' is then trained to invert the diffusion process. New samples can be generated by constructing a sample in the noise space and repeatedly 'denoising' it back to the original space. The use of diffusion models in image generation has proliferated because of their straightforward training procedure, high quality results, straightforward scaling to high-dimensional data, and manageable computational requirements. Diffusion models were first used for calorimeter simulation in CaloScore[32; 24] with promising results. CaloScore is a score-based diffusion model, which is similar to, but distinct from, the denoising diffusion model employed in this paper. Recent work has combined diffusion with point clouds [30; 39] and demonstrated distillation of diffusion models to improve generation time of jet particle clouds [40; 29]. Several other works apply diffusion to HEP in other contexts [41; 42; 43].
Our approach, dubbed CaloDiffusion, is a denoising diffusion model for calorimeter simulation and employs several novel optimizations to make use of the geometric structure of the data. In contrast to other recent works [30; 39], which have advocated for point cloud representations of calorimeter showers, CaloDiffusion uses voxelized image-like representations of the calorimeter data. The use of this voxelized representation retains the geometric information of the data, allowing for several optimizations that exploit the cylindrical structure and scale well for high-dimensional datasets. We additionally introduce a new geometry latent mapping (GLaM) component, which is able to map irregular detector geometries into a regular structure suitable for symmetry-preserving operations such as convolutions.
We test our approach on the public datasets provided as part of the Fast Calorimeter Simulation Challenge (_CaloChallenge_) [44]. The challenge released three datasets of showers simulated with Geant4 in calorimeters with increasing granularity. We find that CaloDiffusion is able to generate very high quality showers that are difficult to distinguish from Geant4 for all datasets of the _CaloChallenge_. Based on quantitative metrics, we demonstrate significant gains over previous state-of-the-art methods, particularly for the high-dimensional datasets of the _CaloChallenge_.
## II Diffusion models
Diffusion models are defined in terms of a 'noising process', which is a Markov chain that starts from data points \(x_{0}\) (following a probability distribution \(q(x_{0})\)) and iteratively adds Gaussian noise. The data points \(x_{t}\) at time \(t\) are generated from data points at the previous time step \(x_{t-1}\) by adding Gaussian noise \(\epsilon\). At the final time step \(T\), the probability distribution of data points \(q(x_{T}|x_{0})\) can then be computed based on the original \(x_{0}\) via a product of Gaussian likelihoods. This is summarized in the following equations:
\[x_{t} =\sqrt{1-\beta_{t}}\,x_{t-1}+\sqrt{\beta_{t}}\,\epsilon, \tag{1}\] \[q(x_{t}|x_{t-1}) =\mathcal{N}(x_{t}|\sqrt{1-\beta_{t}}\,x_{t-1},\beta_{t}), \tag{2}\] \[q(x_{T}|x_{0}) =\prod_{t=1}^{T}q(x_{t}|x_{t-1}), \tag{3}\]
where we denote Gaussian likelihoods as \(\mathcal{N}(x|\mu,\sigma^{2})\), \(\epsilon\sim N(0,\mathcal{I})\), and \(\beta_{t}\) is a 'variance schedule' that controls how much Gaussian noise is added at each time step.
For a sufficiently large \(T\) (the total number of diffusion steps), the Gaussian noise will overwhelm the original data and \(x_{T}\) will follow a multivariate Gaussian distribution. Therefore, a new sample \(x_{0}\) could be generated by sampling \(x_{T}\) from a multivariate Gaussian and inverting the diffusion process in order to produce \(x_{0}\sim q(x_{0})\). An exact inversion of the diffusion process requires knowing the reverse distribution \(p(x_{t-1}|x_{t})\), which encodes how likely a particular data point \(x_{t-1}\) is given the noisier version \(x_{t}\). Direct calculation of \(p(x_{t-1}|x_{t})\) could be done via Bayes' rule \(p(x_{t-1}|x_{t})=q(x_{t}|x_{t-1})q(x_{t-1})/q(x_{t})\), but this is intractable because evaluating \(q(x_{t})=\int dx_{0}q(x_{0})\prod\limits_{t=1}^{T}q(x_{t}|x_{t-1})\) requires an integral over the entire data distribution \(q(x_{0})\). We therefore approximate \(p(x_{t-1}|x_{t})\) as:
\[p(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1}|\mu_{\theta}(x_{t},t,z),\beta_{t}\mathcal{ I}), \tag{4}\]
where the estimated mean \(\mu_{\theta}\) is modeled by a neural network with parameters \(\theta\), conditioned on \(t\) and additional information \(z\). There are multiple ways to parameterize \(\mu_{\theta}(x_{t},t,z)\) and we employ two different approaches as discussed in Section IV.2.
Because sums of Gaussians also follow a Gaussian distribution, \(x_{t}\) can be directly sampled from \(x_{0}\) in a single step:
\[q(x_{t}|x_{0}) =\mathcal{N}(x_{t}|\sqrt{\overline{\alpha}_{t}}\,x_{0},(1-\overline{\alpha}_{t})\mathcal{I}), \tag{5}\] \[x_{t} =\sqrt{\overline{\alpha}_{t}}\,x_{0}+\sqrt{1-\overline{\alpha}_{t}}\,\epsilon, \tag{6}\]
where \(\alpha_{t}\equiv 1-\beta_{t}\) and \(\overline{\alpha}_{t}=\prod_{\tau=1}^{t}\alpha_{\tau}\). The variance of the noise for time step \(t\) is therefore \(1-\overline{\alpha}_{t}\), which can be used to define the noise schedule as an alternative to \(\beta_{t}\). This property is convenient because efficiently computing \(x_{t}\) from \(x_{0}\) allows \(t\) to be randomly sampled during training.
Training a denoising diffusion model proceeds via the following steps: sampling a batch of images \(x^{\prime}\) from the training set; choosing random time steps \(t^{\prime}\); producing a set of noised images \(x^{\prime}_{t^{\prime}}\) based on Eq. (6); and comparing the model's prediction for \(\mu_{\theta}\) to the true value to compute the loss. Once the model has been trained, new samples can be generated by first sampling \(x_{T}\sim\mathcal{N}(0,\mathcal{I})\), then repeatedly evaluating \(p(x_{t-1}|x_{t})\) from the trained model until \(x_{0}\) is reached.
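As a minimal sketch, one such training step with the noise-prediction objective of Eq. (11) could look as follows; the code is PyTorch-like with assumed tensor shapes and function names, not the released CaloDiffusion implementation.

```python
import torch
import torch.nn.functional as F

def training_step(model, x0, energy, alpha_bar, optimizer):
    """One denoising-diffusion training step with the epsilon objective of Eq. (11).

    x0: batch of preprocessed showers; energy: conditioning inputs;
    alpha_bar: tensor of cumulative products of (1 - beta_t) for all T steps.
    """
    T = alpha_bar.shape[0]
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)   # random time steps
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))          # broadcast over voxels
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps               # noised showers, Eq. (6)
    loss = F.mse_loss(model(x_t, t, energy), eps)                # Eq. (11)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```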
The denoising diffusion approach employed here shares many features with score-based diffusion, or score-matching models, such as CaloScore[24, 32]. The score-based approach defines a stochastic differential equation (SDE) that continuously corrupts the data into a known distribution. Rather than directly learning to invert the denoising process, the neural network is trained to evaluate the score of the data, \(\nabla_{x}\log q(x)\), which can then be used to reverse the SDE in order to generate new samples. There are different ways to parameterize the SDE, based on the choices of the 'diffusion' and 'drift' functions. However, the 'variance preserving' formulation is deeply tied to the denoising diffusion approach employed here: the optimal score-matching network is identical to the optimal denoising network (see Appendix B of Ref. [38] for a short derivation). Both score-based and denoising-based diffusion models are being actively explored in the ML literature [38]. We focus on the denoising variant here because of its conceptual simplicity.
## III Datasets
To facilitate a comparison with other work, we test our methods on the datasets of the _CaloChallenge_. The first dataset from the _CaloChallenge_ consists of voxelized showers from single particles, \(\gamma\) or \(\pi^{\pm}\), interacting with the ATLAS detector in the \(\eta\) range [0.2, 0.25][45]. 15 different incident particle energies, spanning the range 256 MeV up to 4 TeV in powers of 2, are included. 10,000 events per incident energy are provided, except for the highest energies, which have fewer events and therefore higher statistical uncertainty. In total, 242000 (241600) events are provided for the photon (pion) dataset. These datasets were used by ATLAS to train the FastCaloGAN[16] model used in AtlFast3[13]. The voxelized representations have 5 and 7 layers with 368 and 533 voxels, respectively, for the \(\gamma\) and \(\pi^{\pm}\) showers. There are different numbers of angular and radial bins within each layer to reflect the varying granularity of the ATLAS calorimeter. For the photon (pion) datasets, layers 1 and 2 (1, 2, 12, and 13) have 10 angular bins and the rest have only a single angular bin. Each layer has a unique binning in the radial direction. For example, the first layer of the pion dataset has 8 variable-width bins covering a radial distance up to 600 cm, while the last layer has 10 variable-width bins covering up to 2000 cm. Because of the unique binning in each layer, only two bins from the first layer exactly align with a bin from the last layer. There are a total of 30 (23) unique radial bin edges for the photon (pion) dataset.
Datasets 2 and 3 of the _CaloChallenge_ each consist of 200,000 showers from an electron incident on a cylindrical sampling calorimeter with 45 layers, each with an active (silicon) and passive (tungsten) component. The electron energy spans the range of 1 GeV to 1 TeV with a log-uniform distribution. Each layer in dataset 2 has 9 radial bins and 16 angular bins, leading to a total of \(45\times 16\times 9=6480\) voxels in each shower. Dataset 3 features a much higher granularity; each layer has 18 radial and 50 angular bins, leading to a total of \(45\times 50\times 18=40500\) voxels in each shower.
Following the specifications of the _CaloChallenge_, we split the available events evenly between training and evaluation for all datasets. The resulting size of the training sample, only \(O(100\mathrm{K})\) showers, is relatively limited, especially for very high-dimensional data such as dataset 3. It is likely that generating additional showers for training would lead to improved performance. However, if this limited sample is taken to represent only a small portion of a real particle detector geometry, it may be a realistic estimate of the practically achievable training sample size, given restrictions on available computing resources. For example, the approach employed by ATLAS for FastCaloGAN involves training a separate model for each of 100 different \(\eta\) regions of the detector and thus can only generate a limited number of events for each \(\eta\) region.
## IV Methods
### Preprocessing
We apply several stages of preprocessing to the showers before the diffusion process. First, the energy in each voxel is divided by the incident particle energy, yielding the normalized energy \(E_{i}\) in voxel \(i\). As in previous work [19, 24], a 'logit' transformation is then applied to the voxel energies:
\[u_{i}=\log\left(\frac{x}{1-x}\right),\quad x=\delta+(1-2\delta)\,E_{i}, \tag{7}\]
where \(\delta=10^{-6}\) avoids discontinuities at \(x=0\) and \(x=1\). We then subtract the mean and divide by the standard deviation of the transformed voxel energy distribution \(u_{i}\):
\[u^{\prime}_{i}=\frac{u_{i}-\overline{u}}{\sigma_{u}}. \tag{8}\]
The distribution of preprocessed voxel energies \(u^{\prime}\) has zero mean and unit variance, which is important to ensure the signal-to-noise ratio during the diffusion process has the appropriate magnitude.
The incident particle energy is used as a conditioning input to the model. We first apply a logarithm to the energy and then scale the resulting values to fall in the range 0 to 1.
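A sketch of these preprocessing steps is shown below, assuming showers stored as a 2D array of shape (events, voxels); the conditioning energy range and the precomputed transform statistics are inputs, and the function name is illustrative.

```python
import numpy as np

def preprocess(voxels, e_inc, u_mean, u_std, e_min=1.0, e_max=1000.0, delta=1e-6):
    """Normalize, logit-transform (Eq. 7) and standardize (Eq. 8) the voxel energies,
    and rescale log(E_inc) to [0, 1] for the conditioning input.

    voxels: array of shape (n_events, n_voxels); e_inc: incident energies (same units
    as e_min / e_max); u_mean, u_std: statistics precomputed on the training set.
    """
    E = voxels / e_inc[:, None]                         # normalized voxel energies
    x = delta + (1.0 - 2.0 * delta) * E
    u = np.log(x / (1.0 - x))                           # logit transform, Eq. (7)
    u_prime = (u - u_mean) / u_std                      # standardization, Eq. (8)
    e_cond = (np.log(e_inc) - np.log(e_min)) / (np.log(e_max) - np.log(e_min))
    return u_prime, e_cond
```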
### Diffusion Specifics
We train our model based on a diffusion process with 400 noising steps. We follow Ref. [46] and use a 'cosine' noise schedule, defined as:
\[\overline{\alpha}_{t}=\cos\left(\frac{\frac{t}{T}+s}{1+s}\cdot\frac{\pi}{2}\right) \tag{9}\]
with \(s=0.008\). This noise schedule adds noise more slowly during the intermediate steps of the diffusion process than the simple linear schedule originally used in Ref. [36]. This preserves information for longer during the process, and we find it reduces the number of diffusion steps needed to maintain high quality.
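A direct implementation of this schedule is sketched below; the clipping of the end points is a numerical-stability assumption, and, as noted in the comment, the schedule of Ref. [46] additionally squares the cosine and normalizes by its \(t=0\) value.

```python
import numpy as np

def cosine_alpha_bar(T=400, s=0.008):
    """Cumulative signal fraction alpha_bar_t of Eq. (9) for t = 0..T.

    Note: Ref. [46] defines the schedule with the cosine squared and normalized
    by its t = 0 value; Eq. (9) is implemented here exactly as written, with a
    small clipping of the end points (an assumption) for numerical stability.
    """
    t = np.arange(T + 1)
    alpha_bar = np.cos(((t / T + s) / (1.0 + s)) * np.pi / 2.0)
    return np.clip(alpha_bar, 1e-8, 0.9999)
```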
As mentioned in Section II, there are different choices for the parameterization of the training objective of the model. The most obvious approach is to predict the denoised image \(x_{0}\) directly. Ref. [36] suggests predicting the normalized noise component, \(\epsilon\), and then computing \(\mu_{\theta}\) as:
\[\mu_{\theta}(x_{t},t,z)=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t,z)\right). \tag{10}\]
In this case, training proceeds by minimizing the loss:
\[\mathcal{L}=\mathbb{E}_{t,\epsilon}\left[||\epsilon_{\theta}(x_{t},t,z)- \epsilon||^{2}\right]. \tag{11}\]
The argument in favor of predicting the normalized noise component as the training objective is that it allows the model output to stay in a consistent range, so the model learns to make subtle refinements when the noise levels are small. However, when the noise levels are large, small inaccuracies in the model prediction can lead to large changes to the image in the sampling process. This can be somewhat mitigated by skipping the first steps of the diffusion process during generation in order to avoid this divergent behavior. We find this parameterization works well for datasets 1 and 2.
For dataset 3, we find the training objective suggested by Ref. [38], where the model predicts a weighted average of the noise component and the denoised image, yields better results:
\[\begin{split}\mathcal{L}=\mathbb{E}_{t,\epsilon}\left[w(t) \left|\right|\,F_{\theta}(x_{t},t,z)-\right.\\ \left.\frac{1}{c_{\text{out}}(t)}\left(x_{0}-c_{\text{skip}}(t) \cdot(x_{t})\right)\right||^{2}\Bigg{]}.\end{split} \tag{12}\]
The different weighting functions are chosen to be proportional to the standard deviation of the total amount of noise at each step \(t\), \(\sigma(t)=\sqrt{1-\overline{\alpha}_{t}}\). Specifically, \(w(t)=1+1/\sigma(t)^{2}\), \(c_{\text{skip}}(t)=1/\left(\sigma(t)^{2}+1\right)\), and \(c_{\text{out}}(t)=1/\left(1+1/\sigma(t)^{2}\right)\). With this combination of terms, the model trades off between predicting the noise component when the noise is small, and predicting the denoised image when the noise is large. For \(t\to 0\), \(\sigma(t)\to 0\) and \(c_{\text{skip}}\to 1\), so the training objective of the model is roughly proportional to \(\epsilon\). But for \(t\to T\), \(\sigma(t)\to 1\) and \(c_{\text{skip}}\to 1/2\), and the training objective is a weighted average of the denoised image \(x_{0}\) and the noise component \(\epsilon\). This scheme makes the model less sensitive to inaccurate predictions at high noise levels during the sampling process. This effect is more important for dataset 3 because of its higher sparsity, which leads to a longer tail in the voxel energy distribution. When using this training objective, skipping the first iterations of the diffusion process when sampling is no longer required.
We follow the stochastic sampling algorithm proposed in Ref. [36], in which a small amount of additional noise is added back to the sample after each denoising step:
\[x_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t)\right)+\sigma_{t}\epsilon^{\prime} \tag{13}\]
for \(\epsilon^{\prime}\sim\mathcal{N}(0,\mathbb{I})\) and \(\sigma_{t}^{2}=\beta_{t}\left(1-\overline{\alpha}_{t-1}\right)/\left(1-\overline{\alpha}_{t}\right)\).
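A PyTorch-like sketch of this sampling loop is given below, with \(\sigma_{t}\) taken as the standard deviation of the added noise and an option to skip the first denoising steps, as discussed in the text; the interface is an assumption.

```python
import torch

@torch.no_grad()
def sample(model, shape, energy, beta, alpha_bar, skip_first=0):
    """Generate showers with the stochastic sampler of Eq. (13).

    beta, alpha_bar: 1D tensors of length T defining the noise schedule;
    skip_first > 0 starts the chain from x_{T - skip_first} instead of x_T.
    """
    alpha = 1.0 - beta
    x = torch.randn(shape)
    for t in range(len(beta) - 1 - skip_first, -1, -1):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = model(x, t_batch, energy)
        x = (x - (1.0 - alpha[t]) / (1.0 - alpha_bar[t]).sqrt() * eps) / alpha[t].sqrt()
        if t > 0:
            var = beta[t] * (1.0 - alpha_bar[t - 1]) / (1.0 - alpha_bar[t])
            x = x + var.sqrt() * torch.randn_like(x)   # add back a small amount of noise
    return x
```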
As long as the model is conditioned on the noise level, the number of diffusion steps in the sampling need not be the same as the number of steps in the training. Decreasing the number of diffusion steps will linearly improve the computational time needed to generate samples, but may produce samples of lower quality. This provides significant flexibility in trading between sample quality and computation time for a trained model. As the optimal balance between sample quality and computation time will be application-specific, in this work we primarily focus on sample quality. For the dataset 1 photon sample and datasets 2 and 3, we choose 200 diffusion steps for sampling because we find this does not significantly degrade sample quality compared to 400 steps, while further reductions do. For the dataset 1 pion sample, we find that 200 diffusion steps noticeably reduce the sample quality, and we therefore report results using 400 steps. Additionally, for datasets 1 and 2, we find that skipping the first two denoising steps (i.e. starting from \(x_{T-2}\) rather than \(x_{T}\)) avoids instabilities caused by imperfect estimates of \(\epsilon\) at the highest noise levels.
After generation, we apply a cutoff on the minimum voxel energy to match the minimum value in the _CaloChallenge_ datasets. This corresponds to a value of 10 MeV for dataset 1, and 15 keV for datasets 2 and 3. Voxels below this value are set to zero. We note that the threshold for datasets 2 and 3 is likely unrealistically low for a real detector operating in the energy range considered; however, we use these values to maintain consistency with the _CaloChallenge_.
### Network Architecture
The primary input to the network is the noisy representation of the shower. However, additional, conditional information is provided as well. The conditional information consists of the scaled logarithm of the incident energy of the particle and the noise level of the current diffusion step (\(\sqrt{1-\overline{\alpha_{t}}}\)). This conditional information is encoded into a 128 dimensional vector via a two-layer fully connected network.
The denoising model uses a U-net [47] architecture, which is commonly employed in diffusion tasks. U-net architectures resemble an encoder-decoder pattern, where the input is gradually compressed to a smaller space, but unlike an autoencoder, skip connections are used so that there is no information bottleneck. Our U-net has an initial convolution followed by a series of ResNet blocks [48]. Conditional convolutions are created by adding the conditional information as an additional bias term after the first convolutional layer of each ResNet block. For datasets 1 and 2 (3) we use three ResNet blocks for the encoder with 16/16/32 (32/32/32) filters. Convolutional layers with a stride of 2 and appropriate padding are used to reduce the data size by a factor of two in each dimension after each of the first two ResNet blocks. Linear self-attention layers [49] are applied after each ResNet block. The architecture is then mirrored, with three more ResNet blocks with the same filter sizes. Convolutional transpose layers are used to upsample by a factor of two after each ResNet block to return to the original data dimension. A schematic of the network architecture is shown in Fig. 1.
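The following sketch illustrates the conditional-bias mechanism in a single ResNet block; the normalization, activation, and exact ordering of operations are assumptions for the example and not a description of the released implementation.

```python
import torch
import torch.nn as nn

class CondResNetBlock(nn.Module):
    """3D ResNet block with the conditioning vector injected as an extra bias
    after the first convolution."""
    def __init__(self, channels, cond_dim=128):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.norm1 = nn.GroupNorm(8, channels)
        self.norm2 = nn.GroupNorm(8, channels)
        self.cond_proj = nn.Linear(cond_dim, channels)
        self.act = nn.SiLU()

    def forward(self, x, cond):
        h = self.act(self.norm1(self.conv1(x)))
        # add the encoded (energy, noise level) information as a per-channel bias
        h = h + self.cond_proj(cond)[:, :, None, None, None]
        h = self.act(self.norm2(self.conv2(h)))
        return x + h        # residual connection
```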
In total, the models for datasets 1 and 2 (dataset 3) consist of \(\sim\)520K (\(\sim\)1.2M) parameters. The model architectures were not extensively optimized, and it is likely that the performance could be further improved with a dedicated optimization procedure.
## V Geometric Innovations
### Optimizations for Cylindrical Geometry
Regular convolutions achieve their power by exploiting the underlying symmetry of the data: translation invariance along each of the coordinate dimensions. When a convolutional layer is applied to an image, the filters perform the same local operation across the whole input image. This allows for expressive, parameter-efficient operations on high-dimensional images. However, while calorimeter showers represented in a voxelized cylindrical geometry have a regular structure, they are not inherently translation invariant. The distribution of energy deposited in each layer encodes important information about the shower, which would be spoiled by translating the shower in either direction along the layer axis. Likewise, the distribution of energy in the radial direction encodes important information about the transverse spread of the shower and falls rapidly as a function of the distance away from the shower center. Additionally, in a realistic detector, sensors in different layers may have different sizes or be made of different materials. The one coordinate dimension that may be translation invariant is the angular dimension. However, this dimension has a periodic topology that will not be respected by regular convolutions. We therefore design several novel optimizations of the convolution operation tailored to cylindrical data that improve the output fidelity.
In order to respect the periodicity of the angular dimension in cylindrical calorimeters, our denoising network uses cylindrical convolutions rather than standard Cartesian ones. The angular dimension is represented in a linear array, so neighboring values with coordinates near the extrema of the angular range are far apart in the array representation. Before each cylindrical convolution operation, a circular padding is added in the angular dimension, such that both ends of the linear array are extended with the values from the opposite end. This ensures that when a 3D convolution is applied, the voxels close to the ends of the linear array properly interact with their angular neighbors on the opposite end. This padding is only applied to the angular dimension; the radial and \(z\) dimensions remain unchanged.
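A minimal sketch of such a cylindrical convolution is shown below, assuming inputs laid out as (batch, channels, layer, angle, radius); the class name and layout are illustrative.

```python
import torch.nn as nn
import torch.nn.functional as F

class CylindricalConv3d(nn.Module):
    """3D convolution with circular padding in the angular dimension only."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.pad = kernel_size // 2
        # zero padding in the layer and radial dimensions is left to the convolution
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size, padding=(self.pad, 0, self.pad))

    def forward(self, x):
        # pad order for a 5D tensor is (radius, radius, angle, angle, layer, layer)
        x = F.pad(x, (0, 0, self.pad, self.pad, 0, 0), mode='circular')
        return self.conv(x)
```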
To allow our convolutional operations to violate translation invariance, we devise a novel scheme for location-conditional convolutions. This is implemented by augmenting the shower image with additional input channels that encode the position of each voxel. We construct one 'layer image' in which the value of each voxel corresponds to the layer number of that voxel, normalized to the range 0 to 1. We similarly construct a 'radial image' that encodes the radial distance of each voxel, also normalized from 0 to 1. For dataset 1, we observe slight non-uniformities in the energy distribution as a function of the angular bin, and find slight performance gains from including an 'angular image' as well. These additional images are concatenated to the per-voxel shower energy as additional input channels. This allows the filters in the convolutional operations to produce different results in different parts of the geometry. The output of the denoising network is still a single channel corresponding to the energy in each voxel. As these images are the same for every input, in principle they do not supply any additional information to the model. Therefore, one would expect that they would be unnecessary for a sufficiently large and expressive model. However, in practice, with the models employed in this work, we have found this technique makes it easier for the network to learn the non-uniformities of the underlying data.
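Constructing the additional position channels amounts to concatenating fixed coordinate maps to the shower image, as in the sketch below (the tensor layout matches the previous example and is an assumption).

```python
import torch

def add_position_channels(showers):
    """Append normalized 'layer' and 'radial' images to a batch of showers of
    shape (batch, 1, n_layers, n_angles, n_radial)."""
    n, _, n_layers, n_angles, n_radial = showers.shape
    layer_img = torch.linspace(0, 1, n_layers, device=showers.device)
    radial_img = torch.linspace(0, 1, n_radial, device=showers.device)
    layer_img = layer_img.view(1, 1, n_layers, 1, 1).expand(n, 1, n_layers, n_angles, n_radial)
    radial_img = radial_img.view(1, 1, 1, 1, n_radial).expand(n, 1, n_layers, n_angles, n_radial)
    return torch.cat([showers, layer_img, radial_img], dim=1)
```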
### GLaM: Geometry Latent Mapping
Though datasets 2 and 3 feature significantly larger numbers of voxels than dataset 1, their regular binning allows convolutional operations to be readily applied. In
contrast, the irregular binning in dataset 1 poses a challenge for fully utilizing the geometric structure of the data. Overcoming this challenge is important for the application of these techniques to real detectors, which often do not have perfectly regular geometries. Previous approaches have either used fully connected networks [16] or 1D convolutions with a very large network size [24]. Point cloud approaches have also gained some recent support as a way around this problem [30, 39].
We instead employ a new method called Geometry Latent Mapping or GLaM. GLaM learns a mapping from the data geometry to a perfectly regular geometric structure that is similar to the actual irregular geometry. This embeds the data in a regular space so that computationally efficient operations, such as cylindrical convolutions, can be used to accomplish the primary task of the ML algorithm (here, the denoising task of the diffusion model). The reverse transformation to bring the results of the primary task back to the original space is also learned by GLaM.
Separate mappings can be learned for different regions of the detector geometry. (A practical example is discussed below.) The embedding for a particular region is therefore only based on local information from that region. This ensures that the size of the embedding matrices remains small and that the embedded space reflects the inherent locality of the geometric structure. GLaM is philosophically similar to the approach of Latent Diffusion [37], which encodes data into a latent space learned by an autoencoder using a perceptual loss [50] prior to the generative task. However, with GLaM, the embedded space can be larger than the input space, has a direct geometric interpretation, and is learned simultaneously with the generative task. A schematic of the GLaM approach is shown in Fig. 2.
We apply GLaM to dataset 1 to learn a mapping of the input data to a regular cylindrical structure. During the diffusion training, the noise is still added in the original, irregular data structure, so that the embedding acts as just a part of the denoising model.1 We choose the radial and angular binning of this regular structure to be the superset of all the bin boundaries of the individual layers. This results in 10 angular bins and 30 radial bins for the photon dataset and 10 angular bins and 23 radial bins for the pion dataset. A separate mapping for each layer is then learned from the original binning in that layer to this regular structure. The mapping along the radial dimension for layer \(\ell\) is accomplished via a single matrix \(C^{\ell}\), of size \(c^{\ell}\times c^{\prime}\), where \(c^{\ell}\) is the number of radial bins in the original geometry and \(c^{\prime}\) is the number of bins in the regular geometry. The mapping back to the original space is likewise accomplished via a single matrix, \(D^{\ell}\), of size \(c^{\prime}\times c^{\ell}\). The values of the \(C^{\ell}\) matrix are trainable parameters, but initialized to values reflecting the geometric overlap of the original and regular binning scheme:
\[C^{\ell}_{j,k}=\begin{cases}\frac{r_{k}^{2}-r_{k+1}^{2}}{r_{j}^{2}-r_{j+1}^{2}}+\kappa_{j,k}&\text{if }r_{k}\geq r_{j}\text{ and }r_{k+1}\leq r_{j+1}\\ \kappa_{j,k}&\text{otherwise.}\end{cases} \tag{14}\]

Footnote 1: Adding the noise to the regular geometry results in a weak training signal for the embedding map, and therefore is more applicable to a situation in which the embedding is fixed or has been learned by some other means.
Here, the values \(r_{j}\) denote the bin boundaries in the original geometry, \(r_{k}\) denote the bin boundaries in the regular geometry, and \(\kappa\) is a tensor of Gaussian noise with mean zero and standard deviation \(10^{-5}\).
Figure 1: Left: a schematic of the network architecture. The numbers in parentheses for each module indicate the number of filters used in that module for the network for datasets 1 and 2 / dataset 3. Right: A detailed view of the operations in a ResNet block.
Because \(C^{\ell}\) generally maps between spaces of different dimensionality, the matrix is rectangular and does not have an analytic inverse. \(D^{\ell}\) is instead initialized to the Moore-Penrose pseudo-inverse of \(C^{\ell}\), so that initially \(C^{\ell}D^{\ell}=\mathcal{I}\). However, during training it is computationally difficult to backpropagate through the Moore-Penrose pseudoinverse function, so we instead allow the values of the inverse mapping \(D^{\ell}_{k,j}\) to be independently trainable.
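A sketch of the radial GLaM mapping for a single layer, with the geometric initialization of Eq. (14) and a trainable pseudo-inverse for the reverse map, could look as follows; the class and argument names are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

class GLaMRadialMap(nn.Module):
    """Learned mapping between the original and regular radial binnings of one layer."""
    def __init__(self, r_orig, r_reg, noise_std=1e-5):
        super().__init__()
        C = np.zeros((len(r_orig) - 1, len(r_reg) - 1))
        for j in range(len(r_orig) - 1):            # original bins
            for k in range(len(r_reg) - 1):         # regular bins
                if r_reg[k] >= r_orig[j] and r_reg[k + 1] <= r_orig[j + 1]:
                    C[j, k] = (r_reg[k + 1] ** 2 - r_reg[k] ** 2) / (
                        r_orig[j + 1] ** 2 - r_orig[j] ** 2)   # area fraction, Eq. (14)
        C += np.random.normal(0.0, noise_std, C.shape)
        D = np.linalg.pinv(C)                       # Moore-Penrose pseudo-inverse
        self.C = nn.Parameter(torch.tensor(C, dtype=torch.float32))
        self.D = nn.Parameter(torch.tensor(D, dtype=torch.float32))

    def to_regular(self, x):    # x: (..., n_original_bins)
        return x @ self.C

    def to_original(self, x):   # x: (..., n_regular_bins)
        return x @ self.D
```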
In dataset 1, layers either have a single angular bin or the same 10 angular bins. We therefore take these 10 bins to be the regular structure and evenly divide the energy of layers with only a single angular bin among these 10 bins. We found that learning a mapping for the angular dimension, similar to the one used for the radial dimension, provided no performance improvements beyond the simple energy splitting.
This is quite a simple ansatz, with the embedding being fully specified by 3180 (3404) parameters for the photon (pion) dataset. We nevertheless find it works quite well in combination with cylindrical convolutions. We find that a single embedding matrix with a geometrically-informed initialization yields significantly better results than fully connected neural network layers initialized with standard techniques.
## VI Results
We compare the showers generated with CaloDiffusion to those from Geant4 for all datasets from the _CaloChallenge_.
A comparison of the average showers produced by Geant4 and CaloDiffusion, with the GLaM embedding approach, for the photon sample of dataset 1 is shown in Fig. 3. Comparisons of various energy distributions used in the evaluation of the _CaloChallenge_ for the photon and pion samples of dataset 1 are shown in Figs. 4 and 5, respectively. The spatial properties of the shower are characterized by the Cartesian center of energy of the shower, defined as \(\overline{x}=\frac{\sum_{i}x_{i}E_{i}}{\sum_{i}E_{i}}\) for cell location \(x_{i}\) and energy \(E_{i}\), and by the shower width, defined as \(\sqrt{\frac{\sum_{i}x_{i}^{2}E_{i}}{\sum_{i}E_{i}}-\overline{x}^{2}}\).
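These observables can be computed per shower with a few lines of NumPy, for example:

```python
import numpy as np

def center_and_width(x, E):
    """Energy-weighted center and width of a shower along one coordinate.

    x: positions of the cells; E: energies deposited in them.
    """
    w = E / E.sum()
    center = np.sum(x * w)
    width = np.sqrt(np.sum(x ** 2 * w) - center ** 2)
    return center, width
```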
For datasets 2 and 3, we examine the distribution of energy as a function of the layer of the calorimeter and as a function of the radial coordinate of the voxel. We examine the total energy of the shower, the distribution of voxel energies, and the number of voxels with energy above 1 MeV. The spatial properties of the shower are represented by the width of the shower in radial and angular dimensions (computed analogously to the Cartesian version defined above) separately for each layer of the calorimeter. The distributions for datasets 2 and 3 are shown in Figs. 6, 7, and 8.
We generally find that CaloDiffusion is successful at modeling all the datasets considered. The spatial distributions of the showers--the shower center and widths for dataset 1 and the layer/radial energy profile for datasets 2 and 3--are especially well reproduced. We observe only very slight degradation in quality on dataset 3 compared to dataset 2, even though it features roughly a factor of 7 higher granularity. This underscores the advantage of the convolutional approach: because it is based on fully local operations, it can readily scale to higher-dimensional data.
One of the most notable deficiencies is that CaloDiffusion produces a tail of low energy voxels for datasets 2 and 3, which is not seen in the Geant4 distributions. The tail likely results from residual noise from the diffusion process that has not been fully removed by the model. The tail begins at approximately 10 keV and thus is not visible in dataset 1 because of the higher voxel energy threshold (10 MeV). The tail would likely be fully removed with a more realistic voxel minimum energy threshold applied to datasets 2 and 3. If not, such a low energy tail would still likely have minimal impact on the downstream reconstruction of the shower.
Perhaps a more relevant deficiency of the model can be seen in the distribution of the shower response, the total shower energy divided by the incident particle energy. This is seen to be particularly discrepant in the photon sample of dataset 1, in which Geant4 exhibits a much narrower peak than CaloDiffusion, and mismodeling is
Figure 2: A diagram of the Geometry Latent Mapping (GLaM) approach.
visible in all datasets. We have found distributions of such a 'global' property of the shower to be among the hardest for the diffusion processes to capture, because most operations are done entirely locally. For such observables, it is not straightforward to add a dedicated loss term to the diffusion training because they are only well defined at the end of the diffusion process, but most of the training uses an intermediate step2.
Footnote 2: We attempted to add a dedicated L2 loss term for the total shower energy, based on estimating the final shower from the intermediate noisy shower through a 1-step estimate of the denoised shower, but it did not produce any improvements, likely because of the amount of noise in this estimate.
In the future, a maximum mean discrepancy loss comparing the distributions from a large batch of events could be tried. Another possibility would be to adopt a two-stage generation approach, as is done in Refs. [26, 30, 31], in which the total energy of the shower, or the per-layer energy, is learned with a dedicated model and then used to normalize the output of the diffusion model.
There is also visible mismodeling of a peak in the energy distribution in layer 1 of the dataset 1 pion showers. This peak comes from very low energy pions that deposit all of their energy in layers 0 and 1 of the calorimeter, producing very sparse showers. These showers are qualitatively different from the rest and perhaps could benefit from some dedicated training or optimization.
### Quantitative Metrics
We compute several metrics sensitive to differences between the Geant4 and CaloDiffusion samples for quantitative assessment of our model's performance.
One proposed metric [19, 51] is based on training a classifier to distinguish between the reference and synthetic samples. An optimal classifier will learn a score proportional to the likelihood ratio between the two samples. The closer the two samples are, the closer the likelihoods will be, and the classifier will struggle to distinguish between the two samples. Performance can be quantified based on the area under the curve (AUC) from the receiver-operating characteristic (ROC) curve of this classifier evaluated on a statistically independent dataset. An AUC of 1 would indicate there is a significant deficiency in the synthetic sample, such that the classifier is always able to distinguish it from a reference sample. An AUC of 0.5 would indicate the classifier cannot separate the two samples. Though Refs. [51, 52] showcased some limitations of the AUC in capturing subtle mismodelings, so far no ML-based calorimeter simulation has reported AUC scores very close to 0.5 on the _CaloChallenge_ or similar datasets. Therefore, it is still a worthwhile metric to compare models.
Following the setup of the _CaloChallenge_, we employ two versions of this classifier test: one where the inputs to the classifier are the full showers themselves, along with the incident particle energy (low-level), and one where the inputs are high-level, physics-informed features of the shower (high-level). The high-level features are those
Figure 3: A comparison of the average showers produced by Geant4 (top) and CaloDiffusion (bottom) for the photon sample of dataset 1.
used in the _CaloChallenge_: the incident particle energy, the energy in each layer, and the center of energy and width of the shower in the \(\eta\) and \(\phi\) directions. In both cases, the classifier is a fully connected network with 2 hidden layers, each with 2048 neurons. Dropout [53], with a rate of 20%, is used after each hidden layer.
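A sketch of this classifier is given below; the layer sizes and dropout rate follow the text, while the choice of activation function is an assumption.

```python
import torch.nn as nn

def make_classifier(n_inputs):
    """Fully connected classifier with two hidden layers of 2048 neurons and
    20% dropout after each, returning the probability of being a Geant4 shower."""
    return nn.Sequential(
        nn.Linear(n_inputs, 2048), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(2048, 2048), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(2048, 1), nn.Sigmoid(),
    )
```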
In Table 1, we compare the classifier AUC values for CaloDiffusion to those reported by CaloFlow/iCaloFlow [31, 54] (called here simply CaloFlow), and CaloScore v2, which are the only other models to have published quantitative results on the _CaloChallenge_ at the time of writing 3. CaloFlow is actually a pair of models: the originally trained 'teacher' model and a 'student' model derived from the first model, optimized for inference speed. CaloScore v2 is a score-based diffusion model and also features distilled versions based on
Figure 4: A comparison between Geant4 and CaloDiffusion showers across a variety of observables for the photon sample of dataset 1. The top row shows the distribution of energy in different layers of the calorimeter. The middle row shows the distribution of the center and width of the energy spread in two reference layers. The bottom row shows the distribution of voxel energies, the distribution of total shower energy divided by the incident energy, and a scatter plot of deposited energy versus incident energy.
progressive distillation. CaloScore v2 did not provide results for the pion version of dataset 1. As this version of CaloDiffusion was optimized for sample quality and has not used dedicated methods to improve sampling time, we compare to the teacher model of CaloFlow and the undistilled version of CaloScore v2 (see Footnote 4), which have better performance and a similar generation time to CaloDiffusion. Future work will explore the development of a new version of CaloDiffusion with optimized generation speed, which would be more suitable for comparison to the faster versions of each model.
Footnote 4: For dataset 3, the CaloScore v2 authors do not provide results on a model without distillation, so we compare to the 8-step distilled version.
We find that CaloDiffusion produces classifier AUC values below 0.7 for all four datasets, indicating that the classifier struggles to distinguish between CaloDiffusion and Geant4 showers. CaloDiffusion achieves better AUC scores than CaloFlow and CaloScore v2 for all cases except the photon showers of dataset 1 when using high-level features. The performance gains of CaloDiffusion are especially prominent for the higher-dimensional datasets 2 and 3.
For CaloDiffusion, the classifiers trained on low-level features and high-level features have quite similar AUC values. This indicates that most of the discrimination power between CaloDiffusion and Geant4 showers is captured by these high-level features. We generally find that the low-level classifier overfits the training set
Figure 5: A comparison between Geant4 and CaloDiffusion showers across a variety of observables for the pion sample of dataset 1. The top row shows the distribution of energy in different layers of the calorimeter. The middle row shows the distribution of the center and width of the energy spread in two reference layers. The bottom row shows the distribution of voxel energies, the distribution of total shower energy divided by the incident energy, and a scatter plot of deposited energy versus incident energy.
significantly, and therefore an improved architecture would perhaps perform better. However, we generally take this overfitting to be a positive sign, because it indicates that distinguishing between Geant4 and CaloDiffusion showers based on generalizable features is not easy.
We additionally report the Frechet Particle Distance (FPD) and Kernel Particle Distance (KPD) metrics, suggested in Ref. [52] and implemented in the JETNET library [55], interfaced to the _CaloChallenge_ evaluation code. We use the same high-level shower features as in the classifier test but omit the incident particle energy. We find that the FPD metric computed with these features is slightly biased; the reported value does not agree with zero within its uncertainty, even when comparing two samples of Geant4 showers. We therefore normalize our reported values for FPD by subtracting the value
| Dataset | FPD | KPD |
|---|---|---|
| 1 (photons) | 0.014(1) | 0.004(1) |
| 1 (pions) | 0.029(1) | 0.004(1) |
| 2 (electrons) | 0.043(2) | 0.0001(2) |
| 3 (electrons) | 0.031(2) | 0.0001(1) |

Table 2: Additional metrics comparing the agreement between showers generated with Geant4 and CaloDiffusion. The number in parentheses is the uncertainty in the last significant digit as evaluated with the JETNET library.
| Dataset | CaloDiffusion | CaloFlow | CaloScore v2 |
|---|---|---|---|
| 1 (photons) | **0.62** / 0.62 | 0.70 / **0.55** | 0.76 / 0.59 |
| 1 (pions) | **0.65** / **0.65** | 0.78 / 0.70 | - / - |
| 2 (electrons) | **0.56** / **0.56** | 0.80 / 0.80 | 0.60 / 0.62 |
| 3 (electrons) | **0.56** / **0.57** | 0.91 / 0.95 | 0.67 / 0.85 |

Table 1: The AUC values for a classifier trained to distinguish between Geant4 and synthetic showers. The first value listed is the AUC for the classifier trained on low-level features and the second is the AUC for the classifier trained on high-level features. The CaloDiffusion values are the average of 5 independent classifier trainings. In all cases, the variation in scores was observed to be 0.01 or less. In each row, the bold value is the best AUC value for each classifier type.
Figure 6: A comparison between Geant4 and CaloDiffusion showers on datasets 2 (top row) and 3 (bottom row). The average shower energy is shown as a function of layer (left) and as a function of radial bin (center). The width of the shower in the radial direction is also shown (right).
computed comparing two Geant4 samples5. We report these additional metrics in Table 2.
Footnote 5: The FPD values computed comparing two Geant4 samples are 0.008, 0.0005, 0.008, and 0.011 for datasets 1 (photons), 1 (pions), 2 (electrons), and 3 (electrons), respectively.
Further quantitative comparisons with other approaches will be performed at the conclusion of the _CaloChallenge_. However, initial results from the _CaloChallenge_[56] indicated that a preliminary version of CaloDiffusion6 was among the top submissions for every dataset.
Footnote 6: The preliminary version did not use the attention layers and dimensionality reduction in \(z\) that are included in the U-net architecture of the version in this paper.
Ablation studies quantifying the performance improvements for various aspects of CaloDiffusion are discussed in Appendix A.
### Timing
In Table 3, we report the generation time of our model using different batch sizes on both CPUs and GPUs. Results are based on a 2.6 GHz Intel E5-2650v2 "Ivy Bridge" 8-Core CPU and an NVIDIA V100 GPU. The time required to generate a shower in Geant4 depends strongly on the incident energy of the particle. The average over the incident energies used in datasets 2 and 3 is \(O(100\,\mathrm{s})\) [31].
Because of the iterative denoising process during generation, diffusion models are usually slower than other
| Dataset | Batch Size | CPU Time/Shower [s] | GPU Time/Shower [s] |
|---|---|---|---|
| 1 (photons, 368 voxels) | 1 | 9.4 | 6.3 |
| | 10 | 2.0 | 0.6 |
| | 100 | 1.0 | 0.1 |
| 1 (pions, 533 voxels) | 1 | 9.8 | 6.4 |
| | 10 | 2.0 | 0.6 |
| | 100 | 1.0 | 0.1 |
| 2 (electrons, 6.5K voxels) | 1 | 14.8 | 6.2 |
| | 10 | 4.6 | 0.6 |
| | 100 | 4.0 | 0.2 |
| 3 (electrons, 40.5K voxels) | 1 | 52.7 | 7.1 |
| | 10 | 44.1 | 2.6 |
| | 100 | - | 2.0 |

Table 3: The shower generation time for CaloDiffusion on CPU and GPU for various batch sizes.
Figure 7: A comparison between Geant4 and CaloDiffusion showers on datasets 2 (top row) and 3 (bottom row). The quantities shown are the width of the shower in the angular direction (left), the distribution of total number of non-zero voxels (center), and the energy per voxel (right).
ML approaches. If limited to a batch size of one and running on a CPU, this version of CaloDiffusion may not satisfy the computation time requirement for a fast simulation. Without any additional training or algorithmic changes, the CaloDiffusion generation time can be linearly improved by reducing the number of diffusion steps used in the sampling, with a cost to sample quality. We explore this tradeoff in Section VI.3.
### Sampling Steps and Quality
In this work, our main goal was to demonstrate the fidelity achievable with the CaloDiffusion approach, rather than optimizing for generation speed. We therefore chose the fewest diffusion steps that did not exhibit a significant decrease in sample quality. However, significant reductions in the number of sampling steps can still result in high-quality samples. This tradeoff between the number of diffusion steps and sample quality was studied using dataset 2. By changing the noise schedule, the model can be sampled using different numbers of diffusion steps without retraining. Inference time scales linearly with the number of diffusion steps regardless of batch size (using 200 steps generates samples twice as fast as 400 steps). We find that one of the distributions most sensitive to the number of diffusion steps is the ratio of deposited to incident energy. This seems to be one of the hardest features for the diffusion model to capture, and it degrades further with fewer steps. A plot of this feature with different numbers of diffusion steps is shown in Fig. 9. In addition to the metrics reported in Sec. VI.1, we report the separation power between Geant4 and CaloDiffusion on this 1D distribution. The separation power is a modified \(\chi^{2}\) metric proposed for calorimeter simulation in Ref. [18] and implemented in the _CaloChallenge_ framework. Results are presented in Table 4.
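For reference, the separation power between two binned 1D distributions can be computed as in the sketch below; the binning choice is an assumption, and the formula shown is the commonly used form \(\frac{1}{2}\sum_{i}(h_{1,i}-h_{2,i})^{2}/(h_{1,i}+h_{2,i})\) over normalized histograms, not necessarily the exact implementation of the _CaloChallenge_ framework.

```python
import numpy as np

def separation_power(sample_a, sample_b, bins=50):
    """Separation power 0.5 * sum (h_a - h_b)^2 / (h_a + h_b) between two 1D samples,
    using normalized histograms with a common (assumed) binning."""
    lo = min(sample_a.min(), sample_b.min())
    hi = max(sample_a.max(), sample_b.max())
    h_a, edges = np.histogram(sample_a, bins=bins, range=(lo, hi))
    h_b, _ = np.histogram(sample_b, bins=edges)
    h_a = h_a / h_a.sum()
    h_b = h_b / h_b.sum()
    denom = h_a + h_b
    mask = denom > 0
    return 0.5 * np.sum((h_a[mask] - h_b[mask]) ** 2 / denom[mask])
```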
Figure 8: A comparison between Geant4 and CaloDiffusion showers on datasets 2 (top row) and 3 (bottom row). Shown are the distributions of the total shower energy (left) and the total shower energy divided by the incident particle energy (center), and a scatter plot of the two quantities (right).
\begin{table}
\begin{tabular}{c c c c}
Num. Steps & Classifier AUC (low / high) & FPD & E Ratio Sep. Power \\ \hline
400 & 0.56 / 0.55 & 0.043(1) & 0.011 \\
200 & 0.61 / 0.56 & 0.046(1) & 0.036 \\
100 & 0.69 / 0.59 & 0.065(3) & 0.079 \\
50 & 0.83 / 0.67 & 0.110(4) & 0.251 \\
\end{tabular}
\end{table}
Table 4: Quantitative metrics comparing the agreement between showers generated with Geant4 and CaloDiffusion with different numbers of sampling steps for dataset 2 of the _CaloChallenge_. The separation power is computed using the ratio of deposited to incident energy. See text for details.
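For reference, a minimal sketch of how such a separation power can be computed on a binned 1D observable is given below; it assumes the conventional definition \(\frac{1}{2}\sum_{i}(h_{1,i}-h_{2,i})^{2}/(h_{1,i}+h_{2,i})\) on normalized histograms, which may differ in detail from the exact convention of the _CaloChallenge_ evaluation code.

```python
import numpy as np

def separation_power(sample_a, sample_b, bins=50):
    """Modified chi^2 between two samples of a 1D observable (e.g. E_dep / E_inc)."""
    lo = min(sample_a.min(), sample_b.min())
    hi = max(sample_a.max(), sample_b.max())
    h_a, _ = np.histogram(sample_a, bins=bins, range=(lo, hi))
    h_b, _ = np.histogram(sample_b, bins=bins, range=(lo, hi))
    h_a = h_a / h_a.sum()               # normalize to unit area
    h_b = h_b / h_b.sum()
    denom = h_a + h_b
    mask = denom > 0                    # skip empty bins
    return 0.5 * np.sum((h_a[mask] - h_b[mask]) ** 2 / denom[mask])
```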
Improving the generation time of diffusion models is an active area of research in the machine learning community. Improved sampling algorithms have been proposed and shown to achieve higher sample quality for low numbers of diffusion steps [38]. Alternatively, once trained, the diffusion model can be 'distilled' into a new model which requires an order of magnitude fewer diffusion steps [57; 58] with minimal loss in sample quality. This distillation approach was recently employed for the generation of particle jets using a point cloud representation in Refs. [29; 40] and for detector simulation in Ref. [32], still with some loss of quality.
An alternative approach would be to simplify the diffusion task of the network. The dimensionality of the data can be reduced by first compressing to a smaller latent space, running diffusion, and then decompressing back to the original space [37]. Alternatively, rather than starting the diffusion process from pure noise, it has been demonstrated that diffusion between two images is possible [59]. One could therefore start the diffusion process from an approximate calorimeter simulation, generated by current non-ML fast simulation techniques. By providing input similar to the final result, the diffusion process would likely require fewer steps and some physical features may be learned more easily. This would be a similar approach to Ref. [60], in which CNNs were used to denoise a fast simulation to achieve higher quality results. A conceptually related approach using diffusion with a Schrödinger bridge was recently demonstrated [61]. These techniques for refinement of low-level hits in calorimeter showers can complement regression-based refinement of high-level observables [62] by making the latter easier to learn and therefore even more precise.
## VII Conclusion
In this work, we introduced CaloDiffusion, a new machine learning (ML) model that uses diffusion to generate calorimeter showers. We employed several novel optimizations that exploit the underlying geometry of the calorimeter data. We have also introduced the geometry latent mapping (GLaM), a new approach to handle irregular geometrical structures in data. GLaM learns a lightweight embedding to transform the irregular data geometry into a regular shape, which can then be used in symmetry-preserving operations such as convolutions, and also learns the reverse transformation. We have demonstrated that CaloDiffusion, combined with GLaM, is able to generate high quality showers on a variety of datasets, some with high dimensionality. We have set new benchmarks in quantitative performance metrics that demonstrate it is difficult to distinguish between CaloDiffusion and Geant4 showers.
Our work significantly advances the state of the art in the achievable physics performance from ML-based fast simulation techniques. This is an important step to establish the viability of such techniques to resolve the simulation component of the computing challenges in the High Luminosity LHC era. While the unoptimized generation time for diffusion models is slower than for some other ML architectures, producing showers in batches on GPUs is already noticeably faster than the Geant4-based full detector simulation. Future work will explore and compare a variety of approaches to improve the generation speed of CaloDiffusion and will apply CaloDiffusion with GLaM to datasets with even more complicated geometries.
## Code availability
The code to reproduce the results in this paper, as well as the trained models, can be found at [https://github.com/OzAmram/CaloDiffusionPaper](https://github.com/OzAmram/CaloDiffusionPaper).
###### Acknowledgements.
We thank the organizers of the _CaloChallenge_ for providing the community datasets and evaluation code used in this work. We thank Raghav Kansal for assistance computing the KPD/FPD metrics on the _CaloChallenge_ datasets.
Figure 9: Distribution of the ratio of incident particle energy and total deposited energy of the shower comparing CaloDiffusion samples generated with different numbers of sampling steps to Geant4.
Funding Information
O. Amram and K. Pedro are supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. O. Amram is supported by the U.S. CMS Software and Computing Operations Program under the U.S. CMS HL-LHC R&D Initiative.
|
2310.17491 | FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine
Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation
Models with Mobile Edge Computing | The emergence of foundation models, including language and vision models, has
reshaped AI's landscape, offering capabilities across various applications.
Deploying and fine-tuning these large models, like GPT-3 and BERT, presents
challenges, especially in the current foundation model era. We introduce
Emulator-Assisted Tuning (EAT) combined with Parameter-Efficient Fine-Tuning
(PEFT) to form Parameter-Efficient Emulator-Assisted Tuning (PEAT). Further, we
expand this into federated learning as Federated PEAT (FedPEAT). FedPEAT uses
adapters, emulators, and PEFT for federated model tuning, enhancing model
privacy and memory efficiency. Adapters adjust pre-trained models, while
emulators give a compact representation of original models, addressing both
privacy and efficiency. Adaptable to various neural networks, our approach also
uses deep reinforcement learning for hyper-parameter optimization. We tested
FedPEAT in a unique scenario with a server participating in collaborative
federated tuning, showcasing its potential in tackling foundation model
challenges. | Terence Jie Chua, Wenhan Yu, Jun Zhao, Kwok-Yan Lam | 2023-10-26T15:47:44Z | http://arxiv.org/abs/2310.17491v2 | FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing
###### Abstract
The emergence of foundation models including large language and computer vision models has transformed the landscape of artificial intelligence, offering unprecedented capabilities and versatility across a wide spectrum of applications. However, the deployment and fine-tuning of these models present unique challenges, particularly in the current era of foundation models. These colossal models, such as GPT-3 and BERT, have the potential to revolutionize industries ranging from healthcare to entertainment. Nonetheless, addressing issues related to collaborative training, model ownership, and computational limitations is imperative for realizing their full potential. We generalize the offsite tuning approach to Emulator-Assisted Tuning (EAT) and combine it with Parameter-Efficient Fine-Tuning (PEFT) to create Parameter-Efficient Emulator-Assisted Tuning (PEAT), expanding its use into federated learning (FL) as Federated Parameter-Efficient Emulator-Assisted Tuning (FedPEAT). Our proposed FedPEAT framework uses adapters, emulators, and PEFT techniques for federated model fine-tuning, offering a solution that enhances model privacy and streamlines memory-efficient downstream fine-tuning. Adapters, featuring trainable neural network parameters, customize pre-trained models for specific tasks, while emulators provide compressed, fixed-parameter representations of the original model. This combination not only addresses model privacy concerns by obviating the need to transmit complete models to mobile edge devices but also significantly improves memory and computational efficiency. Our approach is adaptable to diverse neural network architectures and is complemented by an adaptive control mechanism, utilizing deep reinforcement learning, to optimize critical hyper-parameters. The mechanism ensures efficient resource orchestration, making certain that devices possess the memory capacity and computational capabilities required for effective fine-tuning. Our experimental evaluation considers a special case of FedPEAT framework in which the server possesses data and partakes in the collaborative federated foundation model fine-tuning process instead of acting purely as an aggregator. Through these experiments, we demonstrate the practical applicability and efficacy of our proposed framework in addressing the complex challenges associated with large language models in the current era of foundation models.
Foundation model, large language model, federated learning, parameter-efficient fine-tuning, edge computing.
## I Introduction
**Foundation models.** In the realm of artificial intelligence, the advent of large language models represents a monumental leap forward in our understanding of natural language processing. These models have transcended traditional approaches to machine learning and have ushered in a new era, one characterized by their vast capabilities and applicability across a plethora of tasks. These models, pretrained on massive amounts of data, have the ability to understand and generate images, texts, audio with remarkable accuracy. GPT-3 [1], BERT [2], and similar models have become household names, celebrated for their prowess in tasks ranging from natural language understanding and generation to machine translation and summarization. Their immense size, often involving hundreds of millions or even billions of parameters, enables them to capture intricate linguistic nuances and exhibit human-level performance in various tasks. The current age is characterized by the ascendancy of these foundation models, with an ever-expanding suite of applications across industries, from healthcare and finance to education and entertainment. Large language models serve as versatile tools for a wide array of downstream tasks, where they can be fine-tuned to excel in specific domains or tasks. Fine-tuning adapts the pretrained model to a narrower context, making it more specialized and effective in a particular application. For instance, a large language model pretrained on general text data can be fine-tuned to become an expert in medical diagnosis, legal document review, or customer service chatbots. It should be noted that the pre-training of foundation models focuses on self-supervised learning from large-scale unlabeled data, capturing general language understanding. On the other hand, fine-tuning involves adapting these pre-trained models to specific tasks using task-specific labeled data, enabling specialized performance.
**Motivation.** One of the significant challenges associated with large language models lies in the distribution of data. Many real-world applications necessitate the utilization of data that resides on user devices, such as smartphones, laptops, and mobile edge devices, rather than on centralized servers. This decentralization poses logistical and privacy concerns. Federated learning has emerged as promising solutions to these issues. Federated learning is a decentralized machine learning approach that enables model training without the need to centralize data. Instead of sending raw data to a central server, federated learning trains models directly on the user's device. These models are then aggregated to create a global model, preserving data privacy while achieving the desired performance. However, large language models are often owned by research institutions or companies that bear the responsibility of maintaining and updating them. These model owners typically cannot directly share the entire model with external devices due to various reasons, including privacy concerns, intellectual property rights, and the potential for misuse. The lack of easy sharing mechanisms hampers the democratization of large language models and their use in applications that require continuous updates and fine-tuning.
As a result, there is a need to develop mechanisms that allow model owners to collaborate with external parties or distribute portions of models securely. Furthermore, fine-tuning large language models is computationally intensive. Training a model with hundreds of millions or billions of parameters demands substantial computational resources, often beyond the reach of individual users or small organizations. This computational bottleneck can limit the widespread adoption of these models and impede their deployment in resource-constrained environments. Moreover, fine-tuning on local devices, such as smartphones or edge devices, is often not feasible due to their limited computational capabilities. Distributing the model fine-tuning process across devices while ensuring data privacy and model performance adds another layer of complexity.
**Proposed EAT structure.** In addressing the pressing issues of model and data privacy and ownership, as well as the imperative need for memory and computation-efficient downstream model fine-tuning, we propose a novel Emulator-Assisted Tuning (EAT) structure which generalizes the offsite tuning approach introduced by Xiao et al. [3] to encompass all possible combinations of adapter and emulator configurations for large foundation model fine-tuning. Our proposed EAT structure offers the flexibility to adapt the adapter and emulator to the specific requirements of a given application. The adapter and emulator can take any form, whether encompassing layers within a transformer architecture, multi-layer perceptron, or any other neural network structure. The emulator, can have variable number of neural network layers, variable number of nodes per layer, and even variable arrangements of transformer attention units. This adaptability ensures that the model can be fine-tuned efficiently across a wide spectrum of tasks, from simple to complex.
**Expansion to PEAT architecture.** In the field of model tuning, various Parameter-Efficient Fine Tuning (PEFT) methods such as Low-rank Adapters (LoRA) [4], prompt tuning [5], and adapters [6, 7, 8] have been explored to make fine-tuning of pre-trained models more efficient. We combine EAT and Parameter-Efficient Fine-Tuning (PEFT) to present Parameter-Efficient Emulator-Assisted Tuning (PEAT).
**FedPEAT framework.** We extend the use of PEAT into the domain of federated learning (FL) and introduce a novel framework, Federated Parameter-Efficient Emulator-Assisted Tuning (FedPEAT). This unique integration not only addresses model and data privacy concerns by eliminating the need for the model owner to transmit the entire model to the client and the client to send their local data to the model owner but also substantially improves the memory and computational efficiency of collaborative, downstream federated model fine-tuning. We illustrate the intersection of our proposed EAT, PEAT, and FedPEAT in Fig. 1.
**FedPEAT adaptive control mechanism.** To optimize and streamline this adaptive combination of adapters and emulators, we propose coupling them with an adaptive control mechanism. This mechanism employs a deep reinforcement learning orchestrator to control critical hyper-parameters, such as emulator model compression ratios, adapter parameter-efficient fine-tuning parameters, and even device selection for participation in collaborative federated learning during each iteration. This integration facilitates the efficient orchestration of resources, ensuring that the fine-tuning process remains both memory and computation-efficient. This orchestration ensures that participating devices possess the necessary computational resources to carry out fine-tuning effectively. This contribution is essential in guaranteeing the successful application of our model adaptation and fine-tuning technique in real-world, resource-constrained environments.
**Server-Device collaborative tuning.** The FedPEAT framework is applicable to collaborative federated learning of various contribution nature. We note two distinct types of contribution cases. The first case involves federated learning where all data resides on mobile edge devices (i.e., clients), with no central server involvement. In this scenario, model
Fig. 1: Here we illustrate the intersection of Emulator-Assisted Tuning (EAT), Parameter-Efficient Fine-Tuning (PEFT), and Federated learning (FL). The main contribution of our current paper is to introduce Federated Parameter-Efficient Emulator-Assisted Tuning (FedPEAT), as a convergence of EAT, PEFT, and FL, while EAT and Parameter-Efficient Emulator-Assisted Tuning (PEAT) are also terms coined by our paper.
Fig. 2: FedPEAT overview
tuning is entirely performed on the client, while the server's role is restricted to aggregating adapter module parameters. The second case entails federated learning where data is distributed across both client devices and a central server. Fine-tuning occurs on both client devices and the server, presenting a more complex but realistic setting that highlights the adaptability and versatility of our proposed framework. In our experiments, we consider the special case of FedPEAT framework in which the server possesses data and partakes in the collaborative federated foundation model fine-tuning process instead of acting purely as an aggregator. Through these experiments, we aim to demonstrate the practical applicability and efficacy of our approach.
Our contributions can be summarized as follows:
* We generalize the offsite tuning approach introduced by Xiao et al. [3] to Emulator-Assisted Tuning (EAT), encompassing all possible combinations of adapter and emulator configurations for large foundation model fine-tuning.
* We combine EAT and Parameter-Efficient Fine-Tuning (PEFT) to present Parameter-Efficient Emulator-Assisted Tuning (PEAT).
* We expand the use of PEAT into the domain of federated learning (FL) and introduce a novel framework, Federated Parameter-Efficient Emulator-Assisted Tuning (FedPEAT), which forms the bedrock of privacy-aware federated foundation model fine-tuning.
* We propose an adaptive control mechanism which utilizes deep reinforcement learning approaches that dynamically selects emulator compression parameters and user-device participation to facilitate and empower the adoption of the FedPEAT framework.
* We considered user device memory constraints in our adaptive control mechanism problem formulation, in light of the era of foundation models, which was scarcely considered in previous federated learning works.
* We considered and experimented with a special case of FedPEAT framework in which the server possesses data and partakes in the collaborative federated foundation model fine-tuning process instead of acting purely as an aggregator.
## II Related works
**Foundation Models.** Foundation models such as GPT-3 [1] and CLIP [9], which are also known as large pre-trained models, have garnered attention for their capacity to make zero-shot predictions and their ability to adapt to new tasks through a transfer learning approach called fine-tuning [10, 11]. Leveraging these models offers an advantage in terms of time and resource savings as compared to training models from the ground up, especially for large models like GPT3 with 175B+ parameters.
**Model Tuning.** In the field of model tuning, various methods have been explored to make fine-tuning of pre-trained models more efficient. Efforts in model tuning have extended to the realm of adapters [6, 7, 8], which encode task-specific representations within intermediate layers while preserving pre-training knowledge. Different techniques like Low-rank Adapters (LoRA) [4] and Parameter-Efficient Fine Tuning (PEFT) have been proposed, encompassing approaches such as prompt tuning [5, 12], prefix-tuning [13], adapters [8], P-tuning V2 [14], tuning embedding layer inputs [15], tuning hidden states [16], and more. These methods aim to update or add only a limited number of model parameters, reducing resource requirements and allowing for the sharing of parameters from the pre-trained model.
However, fine-tuning for downstream tasks of local devices often necessitates knowledge of the entire model's weights, potentially raising privacy concerns. Furthermore, the process of fine-tuning and deploying foundation models can pose significant resource challenges due to their substantial parameter sizes [17; 18]. Xiao _et al._[3] proposed an approach to fine-tune large foundation models using the combination of a trainable adapter and an emulator, the latter being a compressed version of a subset of the original large foundation model. Nevertheless, these authors do not consider federated and collaborative tuning between devices.
**Federated Learning for Foundation Models.** Several earlier works have proposed federated learning as a privacy-preserving, decentralized machine learning approach [19, 20]. In the domain of federated learning for foundation models, researchers have acknowledged the substantial resource costs associated with cross-device Federated Learning (FL) [21]. To address this challenge, multiple strategies have been explored, encompassing communication efficiency optimizations [22], model compression, quantization [23], client and data sampling [24, 25], as well as on-device training speedup [26]. These efforts aim to make FL more efficient and scalable across diverse devices and datasets.
Federated learning, as a privacy-preserving technique, enables collaborative model training without sharing user data with a central server [19]. However, it raises concerns related to model privacy since each user maintains a local copy of the entire model. Furthermore, federated learning assumes that users can perform training on the complete model weights, which can be impractical for large foundation models [3]. To tackle these abovementioned issues, several works [27; 28; 29; 30; 31] proposed Federated-PEFT approaches. Regarding the fine-tuning of federated models with emulators and adapters, Ding _et al._[32] introduced an approach that involves model compression and an emulator-adapter-like strategy for collaborative tuning of large vision models in a device-server setting. Kuang _et al._[33] proposed FedOT, which is a federated version of offsite-tuning. Although Kuang _et al._[33] briefly touch upon an architecture similar to our FedPEAT, they do not provide detailed discussions. In light of these considerations, we propose the FedPEAT framework, which generalizes the emulator-adapter approach to all architectural configurations in the federated learning context, and propose an adaptive control mechanism to support the adoption of FedPEAT.
## III Parameter-Efficient Emulator-Assisted Tuning (PEAT) sub-units
In this section, we explore the intricacies of the PEAT sub-components, specifically the emulator, adapter modules, and the Parameter-Efficient Fine-Tuning (PEFT) adaptation.
### _Emulator_
The emulator represents a collection of neural network weights meticulously designed to mimic the behavior of the original foundation model. Through the compression of extensive neural network knowledge into a more compact architecture, emulators aim to deliver performance that closely rivals their larger counterparts while dramatically reducing computational and storage requirements. The decision to share emulators with client devices, rather than the original foundation model, serves a dual purpose: firstly, it safeguards the proprietary nature of model ownership by obviating the need to divulge the complete model to local devices; secondly, it empowers local devices to store and undertake model fine-tuning using a significantly smaller-sized emulator. In essence, an emulator serves as a streamlined and resource-efficient rendition of a more extensive model, crafted through techniques such as pruning [34], layer drop [35], or knowledge distillation [36]. Importantly, our approach employs emulators with fixed-parameter values, without fine-tuning, to encapsulate the bulk of knowledge and information derived from pre-trained foundation models.
### _Adapter_
Adapters are modular additions to pre-existing foundation models like large language models (LLMs), designed to facilitate task-specific adaptations with minimal modifications to the original model [3]. Essentially, adapters are a smaller set of neural network weights with tunable parameters that encode information at the user device for downstream task fine-tuning. The smaller adapter size serves two main purposes. Firstly, the adapter is designed to be a plug which can be conveniently placed at the end of the original foundation model at the server and also at the end of the emulator on the local devices. Secondly, the smaller adapter size reduces adapter transmission costs. By only tuning the parameters of these added layers, one can harness the generalized capabilities of large models while efficiently tailoring them for specific tasks.
### _PEFT integration_
PEFT methods like LoRA [4] and Adapter [8] can significantly reduce the number of trainable parameters, and consequently save memory, while achieving model performance comparable to that of a model which does not use PEFT approaches [3]. The integration of PEFT methods is seamless and can be applied directly to the adapter module in each federated learning iteration.
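A minimal sketch of such a parameter-efficient update is given below, written as a LoRA-style low-rank correction in plain NumPy; the rank and scaling values are illustrative and not those used in the cited works.

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer W plus a trainable low-rank update (alpha/r) * B @ A."""

    def __init__(self, weight, rank=4, alpha=8, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.weight = weight                                  # frozen pre-trained weights, shape (out, in)
        out_dim, in_dim = weight.shape
        self.A = 0.01 * rng.standard_normal((rank, in_dim))   # trainable
        self.B = np.zeros((out_dim, rank))                    # trainable, zero-initialized
        self.scale = alpha / rank

    def __call__(self, x):
        # Only A and B are updated during fine-tuning; the frozen weight is untouched.
        return x @ self.weight.T + self.scale * (x @ self.A.T @ self.B.T)

    def num_trainable(self):
        return self.A.size + self.B.size  # typically a small fraction of weight.size
```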
## IV Federated Parameter-Efficient Emulator-Assisted Tuning (FedPEAT)
In this section, we will discuss our proposed innovative framework, known as the Federated Parameter-Efficient Emulator-Assisted Tuning (FedPEAT). We offer a thorough examination of the synergistic interaction between emulators and adapters, illuminating their seamless integration with parameter-efficient tuning techniques and their expansion into the domain of federated learning.
### _Emulators and Adapters_
The server houses a foundation model \(M_{\theta_{g}}\), while each user device (UE), labeled by index \(i\), receives and holds:
\[\begin{cases}\text{An adapter }A_{\phi_{i}}\text{ specifically tuned for the downstream task.}\\ \text{An emulator }E_{\theta_{i}}\text{, which is a tailored version of the}\\ \text{foundational model, represented by}\\ E_{\theta_{i}}=f(M_{\theta_{g}}-A_{\phi_{i}}).\end{cases}\]
The adapter, denoted as \(A\), aligns with the definition put forth by [3], comprising sets of layers embedded within the foundation model's architecture. These layers feature tunable parameters, specifically designed to facilitate model fine-tuning by encoding new information from downstream tasks. In contrast, the emulators designated by \(E\) encompass a (potentially) modified rendition of the original foundation model. The adaptation of the emulator occurs after the removal of the adapter layers and serves as a guiding framework for tuning the adapter parameters. The parameter values of the emulators are fixed and aim to emulate the large foundation model.
The transformation function \(f()\), in this context, refers to model compression algorithms such as layer dropping [37] and model pruning [34].
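A minimal sketch of what such a compression function \(f\) might look like when layer dropping is used, assuming the foundation model backbone is an ordered list of frozen blocks; the retention ratio plays the same role as the layer-drop retention ratio \(\varrho\) used later in the experiments.

```python
def build_emulator(backbone_layers, retention_ratio):
    """Layer-drop emulator: keep an evenly spaced fraction of the frozen backbone layers.

    backbone_layers: ordered list of the foundation model's intermediate blocks
                     (the adapter layers are assumed to have been removed already).
    retention_ratio: fraction of layers kept, 0 < retention_ratio <= 1.
    """
    n_total = len(backbone_layers)
    n_keep = max(1, round(retention_ratio * n_total))
    step = n_total / n_keep
    kept = [backbone_layers[int(i * step)] for i in range(n_keep)]
    # The emulator parameters stay fixed; they only guide the tuning of the adapter.
    return kept
```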
Let \(\omega\) represent the weights that are collaboratively trainable on both the server and device. \(\omega_{s}^{\prime}\) refers to the untrainable weights specific to the server, excluding \(\omega\). \(\omega_{c}^{\prime}\) denotes the untrainable weights specific to a device, distinct from \(\omega\).
Given this, we can generalize emulator-assisted tuning to three cases:
* **Case 1:**\(\omega_{s}^{\prime}\neq\omega_{c}^{\prime}\neq\varnothing:\) This is our proposed, more generalized framework, in which we permit various user devices (UEs) to employ distinct emulators, denoted as \(E_{\theta_{i}}\). These emulators correspond to the untrainable weights on a device, \(\omega_{c}^{\prime}\), which are maintained at fixed values. Similarly, the subset of the model with untrainable parameters \(E_{\theta_{i}}^{\prime}\) corresponds to \(\omega_{s}^{\prime}\). This flexibility is particularly important since UEs frequently operate with constrained storage and computational resources. Therefore, emphasizing the efficient decompression and adaptiveness of the foundation model becomes essential.
* **Case 2:**\(\omega_{s}^{\prime}=\omega_{c}^{\prime}\neq\varnothing:\) This scenario is a subset of Case 1. Here, the emulator designated for UE \(i\) aligns with the static parameters of the overarching foundation model (i.e., \(\omega_{s}^{\prime}=\omega_{c}^{\prime}\)). In this setup, we synergize Federated Learning (FL) training with Parameter-Efficient Fine-Tuning (PEFT) techniques, reflecting strategies showcased in previous research such as [28, 30, 38, 39].
* **Case 3:**\(\omega_{s}^{\prime}=\omega_{c}^{\prime}=\varnothing:\) This is another specific instance within the purview of Case 1. In this scenario, all participants utilize the adjustable parameters of the global foundation model \(\omega\). Essentially, no weights remain untrainable beyond the collaboratively trainable ones (i.e., \(\omega_{s}^{\prime}=\omega_{c}^{\prime}=\varnothing\)). This methodology closely mirrors the conventional federated learning (FL) paradigm, where individual model parameters are amalgamated to shape the global model.
The details of the cases are further illustrated in Fig. 3.
### _Tuning Process_
**Prior to tuning.** The server model, denoted as \(M_{\theta_{g}}\), can be decomposed into two primary components: the untrainable subset of weights of the foundation model \(E^{\prime}_{\theta}\), and the adapter \(A_{\phi}\). After such decomposition, the server model is expressed as \(E^{\prime}_{\theta}\circ A_{\phi}\), with the symbol "\(\circ\)" signifying the neural network connections between \(E^{\prime}_{\theta}\) and \(A_{\phi}\). It is important to note that the arrangement of layers \(E^{\prime}_{\theta}\) and \(A_{\phi}\) is flexible and can be configured in various orders. Emulator-assisted tuning (EAT) is an approach which generalizes all emulator-adapter based configurations, which include those proposed by [3] and extends it to cases beyond those proposed by [3] such as the "Vertical" splitting of the foundation model [32]. Furthermore, the term "_offsite_" in their work [3] only considers a single device tuning and does not consider a multiple device collaborative training scenario. Our proposed EAT approach generalizes the emulator and adapter approach to collaborative tuning between multiple devices. Furthermore, Xiao _et al._[3] do not consider a collaborative fine-tuning scenario where there are datasets that are stored at the server and that the server is able to partake in collaborative fine-tuning as proposed by [32]. The emulator-to-be, represented as \(E^{\prime}_{\theta}\), can be customized to create emulator \(E_{\theta_{i}}\) specific to UE \(i\) taking into account UE \(i\)'s device hardware configurations and conditions of its environment. Subsequently, this tailored emulator, \(E_{\theta_{i}}\), is distributed to each respective UE. In our work, we extend our proposed PEAT approach to a collaborative, federated model fine-tuning context and establish the Federated Parameter-Efficient Emulator-Assisted Tuning (FedPEAT) framework.
**Tuning.** At the start of the first iteration, each adapter \(A^{0}_{\phi_{i}}\) with randomly initialized parameter values is disseminated to UE \(i\), where the complete user device model for UE \(i\) is \(E^{0}_{\theta_{i}}\circ A^{0}_{\phi_{i}}\). Each user device UE \(i\) then carries out emulator-assisted model fine-tuning with its local dataset \(D^{t}_{i}\) and updates the parameter values of its adapter to produce \(A^{1}_{\phi_{i}}\) with the assistance of the emulator. Each user device UE \(i\) then uploads its adapter parameters to the server for adapter parameter aggregation as follows:
\[A^{t+1}_{\phi_{g}}=\frac{1}{\sum_{i=1}^{N}|D^{t}_{i}|}\cdot\sum_{i=1}^{N}\left(|D^{t}_{i}|\cdot A^{t}_{\phi_{i}}\right), \tag{1}\]
where \(|D^{t}_{i}|\) is the size of the data being trained on at UE \(i\). The server will then disseminate this global adapter \(A^{1}_{\phi_{g}}\). This above mentioned tuning process proceeds for further iterations until model convergence, or as defined by a specific criterion. This process can be summarized in Algorithm 1.
Algorithm 1 (FedPEAT tuning, summarized from the text above): initialize the global adapter parameters \(\phi_{g}\); in each iteration, distribute the global adapter to the selected UEs, perform local emulator-assisted fine-tuning on each UE, upload the updated adapter parameters, and aggregate them at the server according to Eq. (1).
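A minimal sketch of the data-size-weighted aggregation step of Eq. (1), assuming each uploaded adapter is represented as a list of NumPy parameter arrays with matching shapes:

```python
import numpy as np

def aggregate_adapters(client_adapters, client_data_sizes):
    """FedAvg-style aggregation of adapter parameters, weighted by local dataset size.

    client_adapters:   list over selected UEs; each entry is a list of parameter arrays.
    client_data_sizes: list of |D_i^t| for the same UEs.
    """
    total = float(sum(client_data_sizes))
    global_adapter = [np.zeros_like(p) for p in client_adapters[0]]
    for adapter, n_i in zip(client_adapters, client_data_sizes):
        for k, p in enumerate(adapter):
            global_adapter[k] += (n_i / total) * p
    return global_adapter
```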
server changes. The round-trip transmission time taken for the transmission of adapters and downlink of emulators between each UE \(i\) and the server is \(Q_{i}^{t}=\frac{F_{i}^{t}}{r_{i}^{t}}\), where \(r_{i}^{t}\) is the rate of transfer between user device \(i\) and the server, which depends on the server transmission power \(p_{i,trans}^{t}\) to UE \(i\), the channel gain \(g_{i}^{t}\) between UE \(i\) and the server, and the propagation model. For the sake of simplicity, we take \(p_{i,trans}^{t}\) and \(g_{i}^{t}\) to be arbitrary values in each iteration. \(F_{i}^{t}\) is the total adapter and emulator size transmitted to, and adapter size transmitted from, UE \(i\) at transmission step \(t\). We also omit the model fine-tuning computation time for the sake of demonstration, but the methodology would follow the works of [40] when considered.
Therefore, we formulate the problem as follows:
\[\begin{split}\min_{\varepsilon^{t},\,\mathcal{U}^{t}}\quad&\frac{1}{N}\sum_{t=1}^{T}\sum_{i=1}^{N}P_{i}^{t}+f\cdot\sum_{t=1}^{T}\max_{i\in\mathcal{N}}Q_{i}^{t}+b\cdot\frac{1}{N}\cdot\sum_{t=1}^{T}\sum_{i=1}^{N}m_{i}^{t}\\ &+s\cdot\frac{1}{N}\cdot\sum_{t=1}^{T}\sum_{i=1}^{N}\chi[E_{i}^{t}\neq E_{i}^{t-1}],\end{split}\tag{2}\]
s.t.
\[m_{i}^{t}\leq\frac{1}{d}\cdot M_{i},\ \forall i\in\mathcal{N},\ \forall t\in\mathcal{T},\tag{2a}\]
\[\sum_{t=1}^{T}\chi[E_{i}^{t}\neq E_{i}^{t-1}]\leq T\cdot\frac{1}{c},\ \forall i\in\mathcal{N}.\tag{2b}\]
In the objective function (2) above, \(P_{i}^{t}\) stands for the perplexity score achieved by UE \(i\) at iteration \(t\), a performance measure of how well a language model predicts a set of data, and \(Q_{i}^{t}\) stands for the total time taken for a single round of adapter and emulator transmission. \(\chi[E_{i}^{t}\neq E_{i}^{t-1}]\) is the emulator exchange indicator, which equals 1 if \(E_{i}^{t}\neq E_{i}^{t-1}\) and 0 otherwise; the last term therefore penalizes emulator exchanges. \(\varepsilon\) and \(\mathcal{U}\) stand for the emulator compression parameter vector and the UE selection vector, respectively. \(t\) and \(T\) stand for the iteration index and the total number of training iterations, respectively, and \(\mathcal{T}\) represents the set of iterations. \(f\), \(b\), and \(s\) stand for the weight balancing parameters of the transmission delay, memory usage, and emulator exchange terms, respectively. \(m_{i}^{t}\) represents the memory space taken by the model assigned to UE \(i\), while \(M_{i}\) represents the total memory capacity of device \(i\) at iteration \(t\). \(c\) and \(d\) are constants.
Essentially, the objective function (2) aims to minimize the sum of perplexity scores across \(t\) iterations which is synonymous with achieving a quicker rate of model tuning convergence, and minimizing the maximum training time amongst all devices for the federated fine-tuning process, via optimizing the emulator compression parameter and device selection vector. For the sake of simplicity in our demonstration, we use \(\frac{1}{N}\sum_{i=1}^{N}P_{i}^{t}\) as an estimate of global \(P^{t}\). The motivation for a formulation as such is to encourage quicker model convergence while ensuring that the maximum total transmission delay among UEs, and memory consumption for all UE is minimized. Constraint 2a ensures that the total memory consumed for any device in each round falls well below a predefined fraction of its memory capacity. Constraint 2b prevents excessive transmissions of emulator to reduce transmission cost.
### _Deep reinforcement learning approach_
We have devised a multi-output, single-agent deep reinforcement learning approach as our driver behind our adaptive control mechanism to tackle our proposed problem as the problem is highly sequential and is a mixed-integer non-linear programming problem. The ingenious configuration of the state, action spaces, and reward function plays a crucial role in the effective integration of reinforcement learning techniques. In the upcoming sections, we will delve into a detailed examination of these three components in our approach. We choose a multi-output single-agent deep reinforcement learning approach over a multi-agent approach for simplicity of demonstration. The choice of deep reinforcement learning agent type and structure is arbitrary.
#### Iii-A1 State
To effectively execute the FedPEAT approach, factors such as memory usage, model-tuning performance and device participation is of utmost importance. Therefore we included the following variables within the state: (1) user device-server channel gain \(g\) which is required for the computation of \(r_{i}^{t}\), (2) FedPEAT UE \(i\) emulator exchange count \(\chi[E_{i}^{t}\neq E_{i}^{t-1}]\) which keeps track of the number of times UE \(i\) has undergone emulator exchange.
#### Iii-A2 Action
In this study, we have two actions to include in the agent action space: (1) the choice of emulator compression parameter \(\varepsilon_{i}^{t}\) for each device, stored in a vector \(\varepsilon^{t}=\{\varepsilon_{1}^{t},\varepsilon_{2}^{t},...,\varepsilon_{N}^{t}\}\), and (2) the UE selection vector \(\mathcal{U}^{t}=\{\text{UE}_{1}^{t},\text{UE}_{2}^{t},...,\text{UE}_{N}^{t}\}\).
#### Iii-A3 Reward
We formulate our reward function as per our objective function, where we assign our reinforcement learning agent the reward as follows in each iteration:
\[\begin{split} v=&-\frac{1}{TN}\sum_{t=1}^{T}\sum_{i=1}^{N}P_{i}^{t}-f\cdot\frac{1}{T}\sum_{t=1}^{T}\max_{i\in\mathcal{N}}Q_{i}^{t}\\ &-b\cdot\frac{1}{TN}\sum_{t=1}^{T}\sum_{i=1}^{N}m_{i}^{t}-s\cdot\frac{1}{TN}\sum_{t=1}^{T}\sum_{i=1}^{N}\chi[E_{i}^{t}\neq E_{i}^{t-1}].\end{split}\tag{3}\]
In addition, we assign the agent very large penalties \(\varkappa\) when (1) the memory size of emulator \(E_{i}^{t}\) and adapter \(A_{i}^{t}\) exceeds an allowable fraction of the local device \(i\)'s memory capacity, in accordance with constraint 2a, and (2) the emulator exchange count \(\chi[E_{i}^{t}\neq E_{i}^{t-1}]\) exceeds a given fraction of the total number of iterations, in accordance with constraint 2b.
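A sketch of how the reward of Eq. (3) and the two constraint penalties could be evaluated over one episode; the array layout, the fractions standing in for \(1/d\) and \(1/c\), and the exact point at which the penalty \(\varkappa\) is added are illustrative assumptions.

```python
import numpy as np

def episode_reward(P, Q, m, exchanged, mem_capacity,
                   f=1.0, b=100.0, s=1.0,
                   mem_fraction=0.5, exchange_fraction=0.25, kappa=-50.0):
    """P, Q, m, exchanged: arrays of shape (T, N) holding perplexity, round-trip delay,
    memory usage and emulator-exchange indicators per iteration and UE."""
    T, _ = P.shape
    reward = (-P.mean()                          # average perplexity term
              - f * Q.max(axis=1).mean()         # worst-case delay per round, averaged
              - b * m.mean()                     # average memory usage
              - s * exchanged.mean())            # average emulator exchange count
    # Constraint (2a): memory must stay below a fraction of each device's capacity.
    if np.any(m > mem_fraction * mem_capacity):
        reward += kappa
    # Constraint (2b): emulator exchanges per device bounded by a fraction of T.
    if np.any(exchanged.sum(axis=0) > exchange_fraction * T):
        reward += kappa
    return reward
```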
#### V-B4 Reinforcement Learning Algorithm
We adopted the Proximal Policy Optimization (PPO) algorithm, developed by OpenAI [41], which stands as an advancement over traditional policy gradient algorithms. In the domain of sequential problems such as reinforcement learning, even minor adjustments to parameters can have a profound impact on performance, making parameter fine-tuning a challenging endeavor. PPO tackles the issue of delicate and noisy advantage estimates by implementing a cautious approach. It incorporates a Kullback-Leibler (KL) divergence penalty to regulate policy adjustments. Furthermore, PPO makes use of an importance sampling technique [42] by employing asynchronous policies for training and data collection, enhancing overall efficiency. The loss function for the Actor is formally defined as follows [41]:
\[L^{CLIP}(\varphi)=\mathbb{E}_{t}\left[\min\left(v_{t}(\varphi)\,\varpi_{t},\ \text{clip}\left(v_{t}(\varphi),1-\epsilon,1+\epsilon\right)\varpi_{t}\right)\right].\]
In this context, \(\varphi\) represents the policy. \(\mathbb{E}_{t}\) signifies empirical expectations over the trajectory. \(v_{t}\) represents the ratio of the current policy to the old policy. \(\varpi_{t}\) denotes the estimated advantage at time \(t\) and \(\epsilon\) denotes the clip value. This clipping mechanism acts as a safeguard, preventing significant bias and ensuring that the policy remains within a trusted range.
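A minimal NumPy sketch of the clipped surrogate objective above, written as a loss to be minimized; the KL-penalty variant and the value and entropy terms of a full PPO implementation are omitted.

```python
import numpy as np

def ppo_clip_loss(log_prob_new, log_prob_old, advantages, clip_eps=0.2):
    """Clipped PPO surrogate: the probability ratio is clipped to [1-eps, 1+eps]
    so a single update cannot move the policy too far from the data-collecting one."""
    ratio = np.exp(log_prob_new - log_prob_old)             # v_t(phi)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))         # negative surrogate
```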
## VI Numerical Experiments
In this section, we will delve into our experimental configurations and offer a comprehensive analysis of the obtained results.
### _Experiment configuration_
We substantiate our study with several experiments by showing that the FedPEAT framework with adaptive control works, and compare our proposed framework against (1) FedPEAT without adaptive control and (2) Federated full model fine-tuning (Fed-FT). The configuration settings are designed for simplicity of demonstration and are arbitrary. We consider the scenario where the server has data and partakes in UE-server collaborative federated learning.
To further simplify our workflow and for the ease of demonstration, we utilized numerical solutions from the works by [3] to facilitate our experiment. We utilized the _GPT2-XL_[9] large language model as the foundation model, which has 1475 million parameters and is of approximately 6.5 gigabytes (GB) in storage memory. We utilized the layer-drop approach [37] for the emulator compression. We adopted the perplexity-layer drop retention numerical solution from the works by [3] and established the function to be approximated by \(P=25.2\varrho^{2}-43.1\varrho+31.9\) for \(0<\varrho\leq 1\), with an \(R^{2}\) score of \(0.97\), where \(\varrho\) stands for the layer drop retention ratio and varies from 0 to 1. We also adopted LoRA model compression numerical results and perplexity improvements of using LoRA from the works by [3], and establish the parameter compression ratio and perplexity improvement upon application of LoRA to be \(\frac{1}{300}\) and \(-0.78\), respectively. We assume model storage memory usage to follow a linear relationship with the number of parameters of the model. Since the adapter layers are much smaller in size as compared to the emulators, we take their sizes to be negligible.
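The fitted relations above can be combined into a simple scoring function for a candidate emulator configuration, as sketched below; the quadratic fit, the \(-0.78\) LoRA perplexity shift, the linear memory model, and the 6.5 GB full-model size are taken from the text, while everything else is illustrative.

```python
def emulator_perplexity(retention_ratio, use_lora=True):
    """Perplexity estimate for a layer-drop emulator, from the fitted curve in the text."""
    assert 0.0 < retention_ratio <= 1.0
    p = 25.2 * retention_ratio**2 - 43.1 * retention_ratio + 31.9
    if use_lora:
        p -= 0.78               # perplexity improvement when LoRA is applied to the adapter
    return p

def emulator_memory_gb(retention_ratio, full_model_gb=6.5):
    """Storage memory, assumed linear in the number of retained parameters;
    the adapter contribution is treated as negligible."""
    return retention_ratio * full_model_gb
```

Under these assumptions, a retention ratio of 0.5 would correspond to a perplexity of roughly 15.9 with LoRA and an emulator of about 3.3 GB.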
We set \(T\) which is the number of federated fine-tuning rounds in an episode to be 100. We consider a scenario with 1 main server and 10 user devices. In each round of federated fine-tuning, our adaptive control orchestrator selects 5 devices for fine-tuning. We set our large penalty \(\varkappa\) to be -50.
We assign the bandwidth \(B\) to be \(100\) MHz and noise \(\sigma^{2}\) to be \(-100\) dBm. We initialize and constrain main server power output and user-device channel gain to \((0.0,10.0)\) Watt and \((-60,-120)\) dB, respectively. \(f\), \(b\), and \(s\) are set to 1, 100, and 1, respectively, and these numbers are empirically derived with the aim of balancing the variables in the objective function. We adopt the ADAM optimizer [43] for the algorithms implemented in our study. The models are trained for 1,000,000 steps and evaluated at every 200,000 steps.
### _Result analysis_
Overall, our FedPEAT framework with adaptive control works relatively well, as shown in Fig. 5(a), where the average reward obtained by the adaptive control orchestrator increases with training. This improvement is reflected in the decrease in average model memory usage on local devices as training proceeds, the decrease in the average perplexity score (which is taken as a proxy for the global perplexity), and the reduction in emulator transmission delay (shown in Figs. 4(a), 4(b), and 4(c)). Furthermore, the improvement of the adaptive control orchestrator with training is noted by a reduction in the average emulator exchange count (shown in Fig. 5(b)), which signifies that the orchestrator learns not to redundantly exchange and transmit new emulator weights if the costs of transmission outweigh the improvements in perplexity or memory consumption.
Nevertheless, we document the baseline results, along with our proposed FedPEAT framework, in Table I. Note that the steps displayed are in thousands and the memory usage is measured in gigabytes. We can observe that FedPEAT with adaptive control obtains the highest eventual reward. FedPEAT with adaptive control performs better than full model federated tuning (Fed-FT), as the large foundation model consumes a lot of memory storage space on local devices and has a high transmission cost. Furthermore, we observe that FedPEAT works much better with adaptive control than without, as the adaptive control mechanism allows the FedPEAT algorithm to make more efficient use of memory space and obtain better model performance, achieving a lower emulator transmission delay while reducing the emulator exchange count.
## VII Conclusion
In conclusion, deploying and fine-tuning large foundation models pose unique challenges. Collaborative training, model ownership, and computational constraints must be addressed to unlock their full capabilities. We generalize the offsite tuning approach to Emulator-Assisted Tuning (EAT) and combine it with Parameter-Efficient Fine-Tuning (PEFT) to create Parameter-Efficient Emulator-Assisted Tuning (PEAT), expanding its use into federated learning (FL) as Federated Parameter-Efficient Emulator-Assisted Tuning (FedPEAT). Our proposed FedPEAT framework with adaptive control, a novel fusion of adapters and emulators, offers a path forward by enhancing model privacy and streamlining memory-efficient downstream federated fine-tuning.
Adapters, equipped with trainable neural network parameters, tailor models for specific tasks, while emulators provide compressed, fixed-parameter representations. This approach not only mitigates model privacy concerns by eliminating the need to transmit complete models to edge devices but also substantially improves memory and computational efficiency. It is adaptable to diverse neural network architectures and is complemented by an adaptive control mechanism leveraging deep reinforcement learning to optimize essential hyper-parameters, ensuring efficient resource orchestration. Our experimental evaluation, considering a collaborative user device-central server federated learning scenario, demonstrates the practical applicability and effectiveness of our approach in addressing the intricate challenges associated with large language models in the current age of foundation models. Future work will study the intricacies of federated learning for foundation models empirically by performing a more complete range of federated learning experiments across different model types and architectures, adaptive control variables, and server-device configurations.
|
2305.13441 | Phenomenological aspects of the fermion and scalar sectors of a $S_4$
flavored 3-3-1 model | We proposed a viable and predictive model based on the $SU(3)_C \times
SU(3)_L \times U(1)_X$ gauge symmetry, supplemented by the global $U(1)_{Lg}$
symmetry, the $S_4$ family symmetry and several auxiliary cyclic symmetries,
which successfully reproduces the experimentally observed SM fermion mass and
mixing pattern. The tiny active neutrino masses are generated through an
inverse seesaw mechanism mediated by right-handed Majorana neutrinos. The model
is consistent with the SM fermion masses and mixings and successfully
accommodates the current Higgs diphoton decay rate constraints as well as the
constraints arising from oblique $S$, $T$ and $U$ parameters and we studied the
meson mixing due to flavor changing neutral currents mediated by heavy scalars,
finding parameter space consistent with experimental constraints. | A. E. Cárcamo Hernández, Juan Marchant González, M. L. Mora-Urrutia, Daniel Salinas-Arizmendi | 2023-05-22T19:35:44Z | http://arxiv.org/abs/2305.13441v3 | # Phenomenological aspects of the fermion and scalar sectors of a \(S_{4}\) flavored 3-3-1 model
###### Abstract
We propose a viable and predictive model based on the \(SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}\) gauge symmetry, supplemented by the global \(U(1)_{Lg}\) symmetry, the \(S_{4}\) family symmetry and several auxiliary cyclic symmetries, which successfully reproduces the observed SM fermion mass and mixing pattern. The SM charged fermion mass and quark mixing hierarchy is caused by the spontaneous breaking of the discrete symmetries, whereas the tiny active neutrino masses are generated through an inverse seesaw mechanism mediated by right-handed Majorana neutrinos. The model is consistent with the SM fermion masses and mixings and successfully accommodates the current Higgs diphoton decay rate constraints as well as the constraints arising from oblique \(S\), \(T\) and \(U\) parameters and meson oscillations.
## I Introduction
The Standard Model of particles is a widely accepted theory describing subatomic particles' fundamental interactions. This theory has successfully explained and predicted numerous phenomena observed in particle physics experiments. Despite the great success of the Standard Model (SM), it has several unresolved problems. For example, the SM cannot explain the fermion sector's hierarchy of masses and mixings. The range of fermion mass values extends approximately 13 orders of magnitude from the light-active neutrino mass scale up to the mass of the top quark. Whereas in the lepton sector two of the mixing angles are large and one is small, the mixing angles of the quark sector are very small, thus implying that the quark mixing matrix approaches the identity matrix. These different mass and mixing patterns in the quark and lepton sectors correspond to the SM flavor puzzle, which is not explained by the SM. In addition, there are other issues not explained by the SM, such as the number of families of SM fermions as well as the quantization of the electric charge. Explaining the aforementioned issues motivates considering extensions of the SM with enlarged particle spectra and symmetries. Possible SM extension options include models based on the \(SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}\) gauge group (also known as 3-3-1 models). These models have been extensively worked on in the literature [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24] because they provide an explanation of the number of fermion generations and of the non-universal gauge assignments under the group \(U(1)_{X}\) for left-handed (LH) quark fields; the chiral anomalies cancel when the numbers of fermionic triplets and anti-triplets of \(SU(3)_{L}\) are equal, which occurs when the number of fermion families is a multiple of 3. Another feature of these 3-3-1 models is that the Peccei-Quinn (PQ) symmetry (a solution to the strong CP problem) arises naturally; besides that, these theories contain several sources of CP violation and explain the quantization of the electric charge. There are two widely used versions of 3-3-1 models: a minimal one, where the lepton components are in the same triplet representation \((\nu,l,l^{C})_{L}\), and a variant where right-handed (RH) neutrinos are included in the triplet as \((\nu,l,\nu^{C})_{L}\). In addition, within the framework of 3-3-1 models, research focuses on implementing radiative seesaw mechanisms and non-renormalizable terms through a Froggatt-Nielsen (FN) mechanism to explain the pattern of masses and mixings of the SM fermions. It should be noted that the FN mechanism does not produce a new breaking scale, since the flavor breaking scale is the same as the symmetry breaking scale of the 3-3-1 model.
In this work, we propose an extension of the SM through a 3-3-1 model with right-handed (RH) neutrinos, also adding a global lepton symmetry \(U(1)_{Lg}\) to ensure the conservation of the lepton number, a discrete non-abelian symmetry \(S_{4}\) to reproduce the masses and mixings of the fermionic sector, and three auxiliary cyclic symmetries \(Z_{4}\times Z_{4}^{\prime}\times Z_{2}\). The \(Z_{4}\) symmetry is introduced to obtain a texture with zeros in the entries of the up-type quark mass matrix; in addition, the symmetry \(Z_{4}^{\prime}\), together with the VEV pattern of the scalar triplets associated with the charged lepton sector, is necessary to obtain a diagonal charged lepton mass matrix. Finally, the \(Z_{2}\) symmetry is necessary to obtain all the entries of the third column of the down-type quark mass matrix, together with the values of the VEVs of the scalars that participate in the Yukawa terms of this sector. Given that the charged lepton mass matrix is predicted to be diagonal, the lepton mixing entirely arises from the neutrino sector, where we will use an inverse seesaw mechanism [25] mediated by right-handed heavy neutrinos to generate the tiny active neutrino masses. The \(S_{4}\) symmetry group is a compelling choice due to its unique properties and its ability to efficiently describe the observed pattern of fermion masses and mixing angles in the Standard Model. As the smallest non-abelian group with doublet, triplet, and singlet irreducible representations, \(S_{4}\) allows for an elegant accommodation of the three fermion families. Furthermore, its structure and spontaneous breaking provide a suitable framework for generating fermion masses through mechanisms such as the Froggatt-Nielsen mechanism [26] for charged fermions and the inverse seesaw mechanism [25] for light active neutrinos. The application of \(S_{4}\) has demonstrated success in describing the observed patterns of SM fermion masses and mixings [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51].
The non-abelian symmetry \(S_{4}\) provides a solid theoretical framework for constructing scalar fields and understanding various phenomenologies. Scalar fields, transforming according to the representations of \(S_{4}\), allow us to study the phenomenon of meson oscillations. These oscillations are of great interest as they provide valuable information about flavor symmetry violations in the Standard Model. In addition, the scalar mass spectrum is analyzed and discussed in detail. Its analysis allows us to study the implications of our model in the decay of the Standard Model-like Higgs boson into a photon pair as well as in meson oscillations. Finally, besides providing new physics contributions to meson mixings and the Higgs decay into two photons, the considered model successfully satisfies the constraints imposed by the oblique parameters, where the corresponding analysis is performed considering the low energy effective field theory below the scale of spontaneous breaking of the \(SU(3)_{L}\times U(1)_{X}\times U(1)_{L_{g}}\) symmetry. These parameters, derived from precision measurements in electroweak physics, are essential for evaluating the consistency between the theoretical model and experimental data. The consistency of our model with the oblique parameters demonstrates its ability to reproduce experimental observations in the context of electroweak physics for energy scales below 1 TeV.
This paper is organized as follows. Section II presents the model and its details, such as symmetries, particle content, and field assignments under the symmetry group, and describes the spontaneous symmetry breaking pattern. In Sections III and IV, the implications of the model for the masses and mixings in the quark and lepton sectors, respectively, are discussed and analyzed. In addition, Section V describes the scalar potential, the resulting scalar mass spectrum and the mixing in the scalar sector. Section VI provides an analysis and discussion of the phenomenological implications of the model in meson mixings. The decay rate of the Higgs boson to two photons is studied in Section VII. In Section VIII, the contribution of the model to the oblique parameters through the masses of the new scalar fields is discussed. We state our conclusions in Section IX.
## II The model
The model is based on the \(SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}\) gauge symmetry, supplemented by the \(S_{4}\) family symmetry and the \(Z_{4}\times Z_{4}^{\prime}\times Z_{2}\) auxiliary cyclic symmetries, whose spontaneous breaking generates the observed SM fermion mass and mixing pattern. We also introduce a global \(U(1)_{L_{g}}\) symmetry of the generalized leptonic number \(L_{g}\)[19; 7; 15]. That global \(U(1)_{L_{g}}\) lepton number symmetry will be spontaneously broken down to a residual discrete lepton number symmetry \(Z_{2}^{(L_{g})}\) by the VEVs of the gauge-singlet scalars \(\varphi\) and \(\xi\) to be introduced below. The corresponding massless Goldstone boson, the Majoron, is phenomenologically harmless since it is a gauge singlet. It is worth mentioning that under the discrete lepton number symmetry \(Z_{2}^{(L_{g})}\), the leptons are charged and the other particles are neutral, thus implying that in any interaction leptons can appear only in pairs, thus forbidding proton decay. The \(S_{4}\) symmetry is the smallest non-Abelian discrete symmetry group having five irreducible representations (irreps), explicitly, two singlets (trivial and non-trivial), one doublet and two triplets (3 and 3') [52]. The auxiliary cyclic symmetries \(Z_{4}\), \(Z_{4}^{\prime}\) and \(Z_{2}\) select the allowed entries of the SM fermion mass matrices that yield a viable pattern of SM fermion masses and mixings, and at the same time allow a successful implementation of the inverse
seesaw mechanism. The chosen symmetry group \(\mathcal{G}\) exhibits the following three-step spontaneous breaking:
\[\mathcal{G}=SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}\times U(1)_{L_{ g}}\times S_{4}\times Z_{4}\times Z_{4}^{\prime}\times Z_{2} \tag{1}\] \[\Downarrow\Lambda_{\text{int}}\] \[SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}\times U(1)_{L_{g}}\] \[\Downarrow v_{\chi}\] \[SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\times Z_{2}^{(L_{g})}\] \[\Downarrow v_{\eta_{2}},v_{\rho}\] \[SU(3)_{C}\times U(1)_{Q}\times Z_{2}^{(L_{g})}\]
where the different symmetry breaking scales satisfy the following hierarchy:
\[v=246\text{GeV}=\sqrt{v_{\rho}^{2}+v_{\eta_{2}}^{2}}\ll v_{\chi}\sim\mathcal{O }\left(9.9\right)\text{ TeV}\ll\Lambda_{\text{int}}, \tag{2}\]
where \(v_{\rho}\), \(v_{\eta_{2}}\) and \(v_{\chi}\) correspond to the VEVs of the scalar fields of our model.
The electric charge operator is defined [1; 42] in terms of the \(SU(3)\) generators \(T_{3}\) and \(T_{8}\) and the identity \(I_{3\times 3}\) as follows:
\[Q=T_{3}+\beta T_{8}+I_{3\times 3}X. \tag{3}\]
where for our model we choose \(\beta=-1/\sqrt{3}\), and \(X\) is the charge associated with the \(U(1)_{X}\) gauge group.
The fermionic content of this model and its transformation properties under \(SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}\) are as follows [1; 15; 53]. The leptons are arranged in flavor triplets, in which the third component is a RH neutrino. Anomaly cancellation requires three generations of leptons:
\[L_{iL}=\begin{pmatrix}\nu_{i}\\ l_{i}\\ \nu_{i}^{c}\end{pmatrix}_{L}\sim\left(\mathbf{1},\mathbf{3},-1/3\right),\quad l _{iR}\sim\left(\mathbf{1},\mathbf{1},-1\right),\quad N_{iR}\sim\left(\mathbf{ 1},\mathbf{1},0\right), \tag{4}\]
where \(i=1,2,3\) is the family index. Here \(\nu^{c}\equiv\nu_{R}^{c}\) denotes the RH neutrino, \(\nu_{iL}\) are the neutral leptons and \(l_{iL}\) (\(e_{L},\mu_{L},\tau_{L}\)) are the charged leptons, while \(N_{iR}\) are three right-handed Majorana neutrinos, singlets under the 3-3-1 group. Regarding the quark content, the first two generations are flavor antitriplets, while the third family is a flavor triplet; note that the third generation has a different gauge content compared with the first two generations, as required by anomaly cancellation.
\[Q_{nL}=\begin{pmatrix}d_{n}\\ -u_{n}\\ J_{n}\end{pmatrix}_{L}\sim\left(\mathbf{3},\mathbf{\overline{3}},0\right), \quad Q_{3L}=\begin{pmatrix}u_{3}\\ d_{3}\\ T\end{pmatrix}_{L}\sim\left(\mathbf{3},\mathbf{3},1/3\right),\quad n=1,2. \tag{5}\] \[u_{iR}\sim\left(\mathbf{3},\mathbf{1},2/3\right),\quad d_{iR} \sim\left(\mathbf{3},\mathbf{1},-1/3\right),\quad J_{nR}\sim\left(\mathbf{3}, \mathbf{1},-1/3\right),\quad T_{R}\sim\left(\mathbf{3},\mathbf{1},2/3\right).\]
We can observe that the \(d_{iR}\) and \(J_{nR}\) quarks have the same \(X\) quantum number, and so do the \(u_{iR}\) and \(T_{R}\) quarks. Here \(u_{iL}\) and \(d_{iL}\) are the LH up- and down-type quark fields in the flavor basis, respectively. Furthermore, \(u_{iR}\) and \(d_{iR}\) are the RH SM quarks, whereas \(J_{nR}\) and \(T_{R}\) are the RH exotic quarks. The scalar sector contains four flavor triplets of scalars,
\[\rho = \begin{pmatrix}\rho_{1}^{+}\\ \frac{1}{\sqrt{2}}(v_{\rho}+\xi_{\rho}\pm i\zeta_{\rho})\\ \rho_{3}^{+}\end{pmatrix}\sim\left(\mathbf{1},\mathbf{3},\frac{2}{3}\right), \quad\chi = \begin{pmatrix}\chi_{1}^{0}\\ \chi_{2}^{-}\\ \frac{1}{\sqrt{2}}(v_{\chi}+\xi_{\chi}\pm i\zeta_{\chi})\end{pmatrix}\sim\left( \mathbf{1},\mathbf{3},-\frac{1}{3}\right), \tag{6}\] \[\eta_{2} = \begin{pmatrix}\frac{1}{\sqrt{2}}(v_{\eta_{2}}+\xi_{\eta_{2}}\pm i \zeta_{\eta_{2}})\\ \eta_{22}^{-}\\ \eta_{32}^{0}\end{pmatrix}\sim\left(\mathbf{1},\mathbf{3},-\frac{1}{3}\right), \quad\eta_{1} = \begin{pmatrix}\frac{1}{\sqrt{2}}(\xi_{\eta_{1}}\pm i\zeta_{\eta_{1}})\\ \eta_{21}^{-}\\ \eta_{31}^{0}\end{pmatrix}\sim\left(\mathbf{1},\mathbf{3},-\frac{1}{3}\right).\]
where \(\eta_{1}\) is an inert scalar triplet, while the \(SU(3)_{L}\) scalars \(\rho\), \(\chi\), and \(\eta_{2}\) acquire the following vacuum expectation value (VEV) patterns:
\[\left\langle\chi\right\rangle^{T}=\left(0,0,v_{\chi}/\sqrt{2}\right),\quad \left\langle\rho\right\rangle^{T}=\left(0,v_{\rho}/\sqrt{2},0\right),\quad \left\langle\eta_{2}\right\rangle^{T}=\left(v_{\eta_{2}}/\sqrt{2},0,0\right). \tag{7}\]
In addition, the following singlet scalars are introduced: \(\{\sigma,\Theta_{n},\zeta_{n},\varphi,S_{k},\Phi,\xi\}\), \((k=e,\mu,\tau)\), where all of these fields transform as \(({\bf 1},{\bf 1},0)\) under \(SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}\).
The global leptonic symmetry is defined as [7]:
\[L=\frac{4}{\sqrt{3}}T_{8}+I_{3\times 3}L_{g}, \tag{8}\]
where \(L_{g}\) is the conserved charge corresponding to the \(U(1)_{L_{g}}\) global symmetry, which commutes with the gauge symmetry. The differences among the \(SU(3)_{L}\) Higgs triplets can be accounted for by assigning them different generalized lepton number charges \(L_{g}\). Since a lepton and an anti-lepton reside in the same triplet, the leptonic number operator \(L\) does not commute with the gauge symmetry.
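As an illustrative cross-check (not part of the original derivation), the charge assignments that follow from Eqs. (3) and (8) can be verified numerically; the short Python sketch below assumes the standard Gell-Mann normalization \(T_{3}=\lambda_{3}/2\), \(T_{8}=\lambda_{8}/2\) and uses the \(L_{g}\) value of the lepton triplet quoted in Table 2.

```python
import numpy as np

# Diagonal SU(3) generators in the fundamental representation (Gell-Mann normalization).
T3 = np.diag([0.5, -0.5, 0.0])
T8 = np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))
beta = -1.0 / np.sqrt(3.0)

def electric_charge(X):
    """Eq. (3): Q = T3 + beta*T8 + X*I for an SU(3)_L triplet with U(1)_X charge X."""
    return np.diag(T3 + beta * T8 + X * np.eye(3))

def lepton_number(Lg):
    """Eq. (8): L = (4/sqrt(3))*T8 + Lg*I for a triplet with generalized lepton number Lg."""
    return np.diag(4.0 / np.sqrt(3.0) * T8 + Lg * np.eye(3))

print("Q(rho,  X=+2/3):", electric_charge(+2/3))   # -> (+1,  0, +1)
print("Q(chi,  X=-1/3):", electric_charge(-1/3))   # -> ( 0, -1,  0)
print("Q(eta2, X=-1/3):", electric_charge(-1/3))   # -> ( 0, -1,  0)
print("L(L_iL, Lg=1/3):", lepton_number(1/3))      # -> (+1, +1, -1): nu_L, l_L, nu^c_L
```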
The choice of the \(S_{4}\) symmetry group, containing triplet, doublet, trivial singlet and non-trivial singlet irreducible representations, allows us to naturally group the three left-handed lepton families and the three right-handed Majorana neutrinos into \(S_{4}\) triplets, \(L_{L}=(L_{1L},L_{2L},L_{3L})\sim{\bf 3}\) and \(N_{R}=(N_{1R},N_{2R},N_{3R})\sim{\bf 3}\), while the first two families of left-handed SM quarks and the second and third families of right-handed SM up-type quarks are grouped into \(S_{4}\) doublets, \(Q_{L}=(Q_{1L},Q_{2L})\sim{\bf 2}\) and \(U_{R}=(u_{2R},u_{3R})\sim{\bf 2}\) respectively, as are the exotic quarks \(J_{R}=(J_{1R},J_{2R})\sim{\bf 2}\). The remaining fermionic fields \(Q_{3L}\), \(T_{R}\), and \(l_{iR}\) are assigned to the trivial singlet, whereas \(u_{1R}\) and \(d_{iR}\) are assigned to the non-trivial singlet of \(S_{4}\). Regarding the \(S_{4}\) assignments of the scalars, the fields \(S_{k}\), \(\Phi\), and \(\xi\) are grouped into triplets. In our model, the \(S_{k}\) fields play a fundamental role in the vacuum configurations of the \(S_{4}\) triplets, leading to a diagonal mass matrix for the SM charged leptons. The inert field \(\eta_{1}\) and the field \(\eta_{2}\) are grouped into the doublet \(\eta=(\eta_{1},\eta_{2})\sim{\bf 2}\); there are two non-trivial singlets, \(\{\Theta_{2},\ \zeta_{2}\}\sim{\bf 1}'\), and six trivial singlets, \(\{\rho,\chi,\sigma,\Theta_{1},\varphi,\zeta_{1}\}\sim{\bf 1}\).
Using the particle spectrum and symmetries given in Tables 1 and 2, we can write the Yukawa interactions for the quark and lepton sectors:
\[-{\cal L}_{Y}^{(q)} = y_{1}^{(T)}\overline{Q}_{3L}\chi T_{R}+\sum_{j=2}^{3}y_{j}^{(J)} \left(\overline{Q}_{L}\chi^{*}J_{R}\right)_{\bf 1}+y_{3}^{(u)}\overline{Q}_{3L} \left(\eta U_{R}\right)_{\bf 1}+y_{2}^{(u)}\varepsilon_{abc}\left(\overline{Q}_{L}^{a} \eta^{b}\right)_{\bf 2}\chi^{c}U_{R}\frac{\sigma}{\Lambda^{2}}\] \[+y_{1}^{(u)}\varepsilon_{abc}\left(\overline{Q}_{L}^{a}\eta^{b} \right)_{\bf 1^{\prime}}\chi^{c}u_{1R}\frac{\sigma^{2}}{\Lambda^{3}}+y_{13}^{(d)} \left(\overline{Q}_{L}\eta^{*}\right)_{\bf 1^{\prime}}d_{3R}\frac{\zeta_{1}}{ \Lambda}+y_{12}^{(d)}\left(\overline{Q}_{L}\eta^{*}\right)_{\bf 1^{\prime}}d_{2R}\frac{ \Theta_{1}}{\Lambda}\] \[+y_{11}^{(d)}\left(\overline{Q}_{L}\eta^{*}\right)_{\bf 1^{\prime}}d_{ 1R}\frac{\sigma^{2}}{\Lambda^{2}}+y_{23}^{(d)}\left(\overline{Q}_{L}\eta^{*} \right)_{\bf 1}d_{3R}\frac{\zeta_{2}}{\Lambda}+y_{22}^{(d)}\left(\overline{Q}_{L} \eta^{*}\right)_{\bf 1}d_{2R}\frac{\Theta_{2}}{\Lambda}+y_{33}^{(d)}\overline{Q}_{3L} \rho d_{3R}\]
\[-{\cal L}_{Y}^{(l)} = y_{1}^{(L)}\overline{L}_{L}\rho l_{1R}\frac{S_{e}}{\Lambda}+y_{2 }^{(L)}\overline{L}_{L}\rho l_{2R}\frac{S_{\mu}}{\Lambda}+y_{3}^{(L)} \overline{L}_{L}\rho l_{3R}\frac{S_{\tau}}{\Lambda}+y_{\chi}^{(L)}\left( \overline{L}_{L}\chi N_{R}\right)_{\bf 1}\] \[+y_{\nu}\varepsilon_{abc}\left(\overline{L}_{L}^{a}\rho^{*c} \left(L_{L}^{C}\right)^{b}\right)_{\bf 3}\frac{\Phi}{\Lambda}+h_{1N}\left(N_{R} \overline{N_{R}^{C}}\right)_{\bf 1}\varphi\frac{\sigma^{2}}{\Lambda^{2}}+h_{2N} \left(N_{R}\overline{N_{R}^{C}}\right)_{\bf 3}\xi\frac{\sigma^{2}}{\Lambda^{2}}+H.c.,\]
The parametric freedom of the scalar potential allows us to consider the following VEV configurations for the \(S_{4}\) triplets
\[\langle S_{e}\rangle = v_{S_{e}}\left(1,0,0\right),\hskip 28.452756pt\langle S_{\mu} \rangle=v_{S_{\mu}}\left(0,1,0\right),\hskip 28.452756pt\langle S_{\tau} \rangle=v_{S_{\tau}}\left(0,0,1\right), \tag{11}\] \[\langle\Phi\rangle = v_{\Phi}\left(1,r_{1}e^{i\theta},r_{1}e^{i\theta}\right), \hskip 28.452756pt\langle\xi\rangle=v_{\xi}\left(1,1,r_{2}\right) \tag{12}\]
for the \(S_{4}\) doublet,
\[\langle\eta\rangle=\frac{v_{\eta_{2}}}{\sqrt{2}}\left(0,1\right), \tag{13}\]
and for the scalar singlets
\[\langle\chi\rangle=v_{\chi},\quad\langle\rho\rangle=v_{\rho},\quad\langle \sigma\rangle=v_{\sigma},\quad\langle\Theta_{n}\rangle=v_{\Theta_{n}},\quad \langle\varphi\rangle=v_{\varphi},\quad\langle\zeta_{n}\rangle=v_{\zeta_{n}}. \tag{14}\]
The above given VEV pattern allows us to obtain a predictive and viable pattern of SM fermion masses and mixings, as will be shown in the next sections.
## III Quark masses and mixings
After the spontaneous breaking of the symmetries, the quark Yukawa interactions of Eq. (9) yield the following \(3\times 3\) low-scale quark mass matrices:
\[M_{U} = \left(\begin{array}{ccc}\frac{v_{\eta_{2}}v_{\chi}v_{\sigma}^{2} }{\Lambda^{3}}y_{1}^{(u)}&0&\frac{v_{\eta_{2}}v_{\chi}v_{\sigma}^{2}}{\Lambda^{ 3}}y_{2}^{(u)}\\ 0&\frac{v_{\eta_{2}}v_{\chi}v_{\sigma}^{2}}{\Lambda}y_{3}^{(u)}&0\\ 0&0&v_{\eta_{2}}y_{1}^{(u)}\end{array}\right)=\left(\begin{array}{ccc}C&0&A \\ 0&A&0\\ 0&0&B\end{array}\right),\] \[M_{D} = \left(\begin{array}{ccc}-\frac{v_{\eta_{2}}v_{\sigma}^{2}}{ \Lambda^{2}}y_{1}^{(d)}&-\frac{v_{\eta_{2}}v_{\sigma_{1}}}{\Lambda}y_{12}^{(d) }&-\frac{v_{\eta_{2}}v_{\sigma_{1}}}{\Lambda}y_{13}^{(d)}\\ 0&\frac{v_{\eta_{2}}v_{\sigma_{2}}}{\Lambda}y_{22}^{(d)}&\frac{v_{\eta_{2}}v _{\sigma_{2}}}{\Lambda}y_{23}^{(d)}\\ 0&0&v_{\rho}y_{33}^{(d)}\end{array}\right)=\left(\begin{array}{ccc}C_{1}&A _{1}&B_{1}\\ 0&A_{2}&B_{2}\\ 0&0&B_{3}\end{array}\right), \tag{15}\]
Considering the complex coupling \(B_{2}\), we can fit the quark sector observables by minimizing a \(\chi^{2}\) function defined as:
\[\chi^{2} = \frac{\left(m_{u}^{\rm exp}-m_{u}^{th}\right)^{2}}{\sigma_{m_{u}}^ {2}}+\frac{\left(m_{c}^{\rm exp}-m_{c}^{th}\right)^{2}}{\sigma_{m_{c}}^{2}}+ \frac{\left(m_{t}^{\rm exp}-m_{t}^{th}\right)^{2}}{\sigma_{m_{t}}^{2}}+\frac{ \left(m_{d}^{\rm exp}-m_{d}^{th}\right)^{2}}{\sigma_{m_{d}}^{2}}+\frac{\left(m_ {s}^{\rm exp}-m_{s}^{th}\right)^{2}}{\sigma_{m_{s}}^{2}}+\frac{\left(m_{b}^{ \rm exp}-m_{b}^{th}\right)^{2}}{\sigma_{m_{b}}^{2}} \tag{16}\] \[+\frac{\left(s_{\theta_{12}}^{\rm exp}-s_{\theta_{12}}^{th} \right)^{2}}{\sigma_{s_{12}}^{2}}+\frac{\left(s_{\theta_{23}}^{\rm exp}-s_{ \theta_{23}}^{th}\right)^{2}}{\sigma_{s_{23}}^{2}}+\frac{\left(s_{\theta_{13}} ^{\rm exp}-s_{\theta_{13}}^{th}\right)^{2}}{\sigma_{s_{13}}^{2}}+\frac{\left(J ^{\rm exp}-J^{th}\right)^{2}}{\sigma_{J}^{2}}\,\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \(Q_{L}\) & \(Q_{3L}\) & \(u_{1R}\) & \(U_{R}\) & \(d_{1R}\) & \(d_{2R}\) & \(d_{3R}\) & \(T_{R}\) & \(J_{R}\) & \(L_{L}\) & \(l_{1R}\) & \(l_{2R}\) & \(l_{3R}\) & \(N_{R}\) \\ \hline \(U(1)_{L_{g}}\) & \(2/3\) & \(-2/3\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-2\) & \(2\) & \(1/3\) & \(1\) & \(1\) & \(1\) & \(-1\) \\ \hline \(S_{4}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{1^{\prime}}\) & \(\mathbf{2}\) & \(\mathbf{1^{\prime}}\) & \(\mathbf{1^{\prime}}\) & \(\mathbf{1^{\prime}}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{3}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{3}\) \\ \hline \(Z_{4}\) & \(-1\) & \(0\) & \(0\) & \(-1\) & \(2\) & \(0\) & \(0\) & \(0\) & \(-1\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(1\) \\ \hline \(Z_{4}^{\prime}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-1\) & \(-1\) & \(-1\) & \(0\) \\ \hline \(Z_{2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(1\) & \(1\) & \(1\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 2: Fermion assignments under \(U(1)_{L_{g}}\times S_{4}\times Z_{4}\times Z_{4}^{\prime}\times Z_{2}\).
where \(m_{i}\) are the quark masses (\(i=u,c,t,d,s,b\)), \(s_{\theta_{jk}}\) are the sines of the mixing angles (with \(j,k=1,2,3\)) and \(J\) is the Jarlskog invariant. The superscripts denote the experimental ("exp") and theoretical ("\(th\)") values, and the \(\sigma\)'s are the experimental errors. The best-fit point of our model is shown in Table 3 together with the current experimental values, while Eq. (17) shows the benchmark point of the low energy quark sector effective parameters that allows us to successfully reproduce the measured SM quark masses and CKM parameters:
\[C =2.03\;\text{MeV}\qquad A=1.26\;\text{GeV}\qquad\quad B=172.5\; \text{GeV}\qquad C_{1}=-4.52\;\text{MeV}\] \[A_{1} =21\;\text{MeV}\qquad\quad A_{2}=-91.3\;\text{MeV}\qquad B_{1}=14.3\;\text{MeV}\qquad B_{2}=1.04e^{2.231i}\;\text{MeV}\qquad B_{3}=4.18\; \text{GeV}. \tag{17}\]
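As a numerical cross-check of this benchmark (an illustrative sketch, not part of the original fit), one can build the textures of Eq. (15) with the values of Eq. (17), obtain the quark masses from a singular value decomposition and extract the CKM observables, assuming the usual convention \(M=U_{L}\,\mathrm{diag}(m)\,U_{R}^{\dagger}\) and \(V_{\rm CKM}=U_{uL}^{\dagger}U_{dL}\):

```python
import numpy as np

# Benchmark effective parameters of Eq. (17), in GeV.
C, A, B = 2.03e-3, 1.26, 172.5
C1, A1, A2 = -4.52e-3, 21e-3, -91.3e-3
B1, B2, B3 = 14.3e-3, 1.04e-3 * np.exp(2.231j), 4.18

MU = np.array([[C, 0, A],
               [0, A, 0],
               [0, 0, B]], dtype=complex)          # texture of Eq. (15)
MD = np.array([[C1, A1, B1],
               [0, A2, B2],
               [0, 0, B3]], dtype=complex)

def left_rotation(M):
    """Left unitary U_L and masses from M = U_L diag(m) U_R^dagger, ordered light -> heavy."""
    U, s, Vh = np.linalg.svd(M)
    order = np.argsort(s)              # ascending: (u, c, t) / (d, s, b)
    return U[:, order], s[order]

Uu, mup = left_rotation(MU)
Ud, mdn = left_rotation(MD)
V = Uu.conj().T @ Ud                   # CKM matrix in this convention

s13 = abs(V[0, 2])
s12 = abs(V[0, 1]) / np.sqrt(1 - s13**2)
s23 = abs(V[1, 2]) / np.sqrt(1 - s13**2)
J = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))  # Jarlskog invariant

print("up-type masses   [GeV]:", mup)
print("down-type masses [GeV]:", mdn)
print("sin(theta12, theta23, theta13):", s12, s23, s13, "  J:", J)
```

Wrapping this routine in a numerical minimizer (such as scipy.optimize.minimize) acting on the \(\chi^{2}\) function of Eq. (16) reproduces the type of fit described above; here it only evaluates the quoted benchmark.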
Furthermore, Fig. 1 shows the correlation plots between the quark mixing angles \(\sin\theta_{13}\) and \(\sin\theta_{23}\) and the Jarlskog invariant, as well as the correlation plot between the quark mixing angles themselves. These correlation plots were obtained by varying the quark sector parameters by about \(20\%\) around their best-fit point, whose values are shown in Eq. (17). As indicated by Fig. 1, the model predicts that \(\sin\theta_{13}\) lies in the range \(0.0034\lesssim\sin\theta_{13}\lesssim 0.0040\) in the allowed parameter space and, moreover, it increases when the Jarlskog invariant takes larger values. A similar situation occurs with \(\sin\theta_{23}\), which lies in the range \(0.039\lesssim\sin\theta_{23}\lesssim 0.044\) and also increases when the Jarlskog invariant takes larger values. The last plot, Fig. 1 (c), shows the correlation between \(\sin\theta_{13}\) and \(\sin\theta_{23}\), in which the first variable takes a wider range of values, with its lower limit decreasing while the upper limit remains constant, as the second one acquires larger values.
## IV Lepton masses and mixings
### Charged lepton sector
The \(Z_{4}^{\prime}\) charge assignments of the model fields shown in Table 1, as well as the VEV pattern of the \(S_{4}\) scalar triplets \(S_{e}\), \(S_{\mu}\) and \(S_{\tau}\) shown in Eq. (11), imply that the charged lepton Yukawa terms of Eq. (10) yield a diagonal charged lepton
Figure 1: Correlation plot between the mixing angles of the quarks and the Jarlskog invariant obtained with our model. The green and purple bands represent the \(1\sigma\) range in the experimental values, while the dotted line (black) represents the best-fit point by our model.
mass matrix:
\[M_{l}=\left(\begin{array}{ccc}m_{e}&0&0\\ 0&m_{\mu}&0\\ 0&0&m_{\tau}\end{array}\right), \tag{18}\]
where the masses of the SM charged leptons are given by:
\[m_{e}=y_{1}^{(L)}\frac{v_{S_{e}}v_{\rho}}{\Lambda}=a_{1}\frac{v_{ \rho}}{\Lambda},\hskip 28.452756ptm_{\mu}=y_{2}^{(L)}\frac{v_{S_{\mu}}v_{\rho}}{ \Lambda}=a_{2}\frac{v_{\rho}}{\Lambda},\hskip 28.452756ptm_{\tau}=y_{3}^{(L)} \frac{v_{S_{\tau}}v_{\rho}}{\Lambda}=a_{3}\frac{v_{\rho}}{\Lambda} \tag{19}\]
### Neutrino sector
The neutrino Yukawa interactions of Eq. (10) give rise to the following neutrino mass terms:
\[-\mathcal{L}_{mass}^{(\nu)}=\frac{1}{2}\left(\begin{array}{cc} \overline{\nu_{L}^{C}}&\overline{\nu_{R}}&\overline{N_{R}}\end{array}\right)M _{\nu}\left(\begin{array}{c}\nu_{L}\\ \nu_{R}^{C}\\ N_{R}^{C}\end{array}\right)+H.c, \tag{20}\]
where the neutrino mass matrix reads:
\[M_{\nu}=\begin{pmatrix}0_{3\times 3}&m_{\nu_{D}}&0_{3\times 3}\\ m_{\nu_{D}}^{T}&0_{3\times 3}&M\\ 0_{3\times 3}&M^{T}&\mu\end{pmatrix}, \tag{21}\]
and the submatrices are given by:
\[m_{\nu_{D}} = \frac{y_{\nu}v_{\rho}v_{\Phi}}{\sqrt{2}\Lambda}\left(\begin{array} []{ccc}0&r_{1}e^{i\theta}&r_{1}e^{i\theta}\\ -r_{1}e^{i\theta}&0&1\\ -r_{1}e^{i\theta}&-1&0\end{array}\right),\hskip 56.905512ptM=y_{\chi}^{(L)} \frac{v_{\chi}}{\sqrt{2}}\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&1\end{array}\right),\] \[\mu = \left(\begin{array}{ccc}h_{1N}v_{\varphi}&h_{2N}v_{\xi}r_{2}&h_ {2N}v_{\xi}\\ h_{2N}v_{\xi}r_{2}&h_{1N}v_{\varphi}&h_{2N}v_{\xi}\\ h_{2N}v_{\xi}&h_{2N}v_{\xi}&h_{1N}v_{\varphi}\end{array}\right)\frac{v_{\sigma }^{2}}{\Lambda^{2}}. \tag{22}\]
The light active masses arise from an inverse seesaw mechanism and the resulting physical neutrino mass matrices take the form:
\[\widetilde{M}_{\nu} = m_{\nu_{D}}\left(M^{T}\right)^{-1}\mu M^{-1}m_{\nu_{D}}^{T}, \tag{23}\] \[\widetilde{M}_{\nu}^{(1)} = \frac{1}{2}\mu-\frac{1}{2}\left(M+M^{T}\right),\] (24) \[\widetilde{M}_{\nu}^{(2)} = \frac{1}{2}\mu+\frac{1}{2}\left(M+M^{T}\right). \tag{25}\]
Here, \(\widetilde{M}_{\nu}\) is the mass matrix for the active neutrino (\(\nu_{\alpha}\)), whereas \(\widetilde{M}_{\nu}^{(1)}\) and \(\widetilde{M}_{\nu}^{(2)}\) are the sterile neutrinos mass matrices.
Thus, the light active neutrino mass matrix is given by:
\[\widetilde{M}_{\nu}=\left(\begin{array}{ccc}2a^{2}\left(y_{2}+ z\right)&a\left(b\left(y_{2}+z\right)-a\left(y_{1}+y_{2}\right)\right)&-a \left(a\left(y_{1}+y_{2}\right)+b\left(y_{2}+z\right)\right)\\ a\left(b\left(y_{2}+z\right)-a\left(y_{1}+y_{2}\right)\right)&z\left(a^{2}+b^{2 }\right)-2aby_{2}&a^{2}z+ab\left(y_{1}-y_{2}\right)-b^{2}y_{2}\\ -a\left(a\left(y_{1}+y_{2}\right)+b\left(y_{2}+z\right)\right)&a^{2}z+ab\left(y _{1}-y_{2}\right)-b^{2}y_{2}&z\left(a^{2}+b^{2}\right)+2aby_{1}\end{array} \right), \tag{26}\]
where:
\[a = \frac{y_{\nu}v_{\rho}v_{\Phi}r_{1}}{y_{\chi}^{(L)}v_{\chi} \Lambda}e^{i\theta},\hskip 56.905512ptb=\frac{y_{\nu}v_{\rho}v_{\Phi}}{y_{ \chi}^{(L)}v_{\chi}\Lambda},\hskip 56.905512pty_{1}=\frac{v_{\sigma}^{2}h_{2N}v_{ \xi}r_{2}}{\Lambda^{2}},\] \[y_{2} = \frac{v_{\sigma}^{2}h_{2N}v_{\xi}}{\Lambda^{2}},\hskip 56.905512ptz= \frac{v_{\sigma}^{2}h_{1N}v_{\varphi}}{\Lambda^{2}}. \tag{27}\]
If we consider the parameter \(b\) to be purely imaginary, we obtain a cobimaximal texture for the light active neutrino mass matrix of Eq. (26). Adapting the \(\chi^{2}\) function of Eq. (16) to the neutrino sector observables, we obtain the following error function:
\[\chi^{2}=\frac{\left(m_{21}^{\rm exp}-m_{21}^{th}\right)^{2}}{\sigma_{m_{21}}^{2 }}+\frac{\left(m_{31}^{\rm exp}-m_{31}^{th}\right)^{2}}{\sigma_{m_{31}}^{2}}+ \frac{\left(s_{\theta_{12}}^{\rm exp}-s_{\theta_{12}}^{th}\right)^{2}}{\sigma_ {s_{12}}^{2}}+\frac{\left(s_{\theta_{23}}^{\rm exp}-s_{\theta_{23}}^{th}\right) ^{2}}{\sigma_{s_{23}}^{2}}+\frac{\left(s_{\theta_{13}}^{\rm exp}-s_{\theta_{13 }}^{th}\right)^{2}}{\sigma_{s_{13}}^{2}}+\frac{\left(\delta_{CP}^{\rm exp}- \delta_{CP}^{th}\right)^{2}}{\sigma_{\delta}^{2}}\;, \tag{28}\]
which allows us to adjust the parameters of the model. Therefore, after minimizing Eq. (28), we get the following values for the model parameters:
\[a =0.5208\,e^{-3.027i},\qquad b =1.080\,i,\qquad y_{1} =-0.08645\;{\rm eV}^{2},\] \[y_{2} =2.053\;{\rm eV}^{2},\qquad z =-2.300\;{\rm eV}^{2}. \tag{29}\]
The diagonalization of the matrix (26) gives us the following eigenvalues:
\[m_{1}^{2} = 0, \tag{30}\] \[m_{2}^{2} = \left(a^{2}\left(y_{2}+2z\right)+ab\left(y_{1}-y_{2}\right)+b^{2}z\right.\] (31) \[-\left.\sqrt{2a^{2}y_{1}^{2}\left(a^{2}+b^{2}\right)+2ay_{2}y_{1} (a-b)\left(2a^{2}+ab+b^{2}\right)+y_{2}^{2}\left(a^{2}+b^{2}\right)\left(3a^{ 2}+2ab+b^{2}\right)}\right)^{2},\] \[m_{3}^{2} = \left(a^{2}\left(y_{2}+2z\right)+ab\left(y_{1}-y_{2}\right)+b^{2}z\right.\] (32) \[+\left.\sqrt{2a^{2}y_{1}^{2}\left(a^{2}+b^{2}\right)+2ay_{2}y_{1} (a-b)\left(2a^{2}+ab+b^{2}\right)+y_{2}^{2}\left(a^{2}+b^{2}\right)\left(3a^{ 2}+2ab+b^{2}\right)}\right)^{2}.\]
With the values of the parameters of our best-fit point given in Eq. (29), we obtain the results for the neutrino sector shown in Table 4, together with the experimental values in the \(3\sigma\) range taken from [55]. From Table 4 we can see that all the neutrino oscillation observables obtained in our model lie within the \(1\sigma\) to \(3\sigma\) experimentally allowed ranges.
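To illustrate how the entries of Eq. (26) translate into the observables of Table 4, the following sketch (not contained in the paper) builds the light active neutrino mass matrix from the fitted parameters of Eq. (29) and diagonalizes the Hermitian combination \(\widetilde{M}_{\nu}\widetilde{M}_{\nu}^{\dagger}\); since the charged lepton mass matrix is diagonal, the moduli of the resulting unitary matrix give the leptonic mixing angles. For definiteness we treat \(y_{1}\), \(y_{2}\) and \(z\) as carrying units of eV, so that the eigenvalues are neutrino masses in eV; this unit convention is our assumption.

```python
import numpy as np

# Best-fit parameters of Eq. (29); y1, y2, z taken in eV (assumption, see text).
a = 0.5208 * np.exp(-3.027j)
b = 1.080j
y1, y2, z = -0.08645, 2.053, -2.300

# Light active neutrino mass matrix of Eq. (26).
Mnu = np.array([
    [2*a**2*(y2+z),           a*(b*(y2+z)-a*(y1+y2)),     -a*(a*(y1+y2)+b*(y2+z))],
    [a*(b*(y2+z)-a*(y1+y2)),  z*(a**2+b**2)-2*a*b*y2,      a**2*z+a*b*(y1-y2)-b**2*y2],
    [-a*(a*(y1+y2)+b*(y2+z)), a**2*z+a*b*(y1-y2)-b**2*y2,  z*(a**2+b**2)+2*a*b*y1],
])

# Squared masses and mixing matrix from the Hermitian combination Mnu Mnu^dagger.
m2, U = np.linalg.eigh(Mnu @ Mnu.conj().T)   # eigenvalues ascending: m1^2 <= m2^2 <= m3^2

dm21, dm31 = m2[1] - m2[0], m2[2] - m2[0]
s13 = abs(U[0, 2])
s12 = abs(U[0, 1]) / np.sqrt(1 - s13**2)
s23 = abs(U[1, 2]) / np.sqrt(1 - s13**2)

print("Delta m21^2, Delta m31^2 [eV^2]:", dm21, dm31)
print("sin(theta12), sin(theta23), sin(theta13):", s12, s23, s13)
```

The output can then be compared directly with the entries of Table 4.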
Fig. 2 shows the correlations between the leptonic Dirac CP violating phase and the neutrino mixing angles, as well as the correlations among the leptonic mixing angles, where the green and purple background bands represent the \(1\sigma\) range of the experimental values and the black dotted lines represent our best-fit point for each observable. In Fig. 2, we see that the mixing angles can take values in the \(1\sigma\) range, while for the CP violating phase we obtain values up to \(3\sigma\), with each lepton sector observable obtained in the following range of values: \(0.290\leq\sin^{2}\theta_{12}\leq 0.317\), \(0.0201\leq\sin^{2}\theta_{13}\leq 0.0241\), \(0.584\leq\sin^{2}\theta_{23}\leq 0.603\) and \(244^{\circ}\leq\delta_{CP}\leq 248^{\circ}\).
## V Scalar potential for the \(SU(3)_{L}\) triplets
The most general renormalizable scalar potential invariant under \(S_{4}\) that can be written with the four triplets of Eq. (6) is given by
\[V = -\mu_{\chi}^{2}(\chi^{\dagger}\chi)+\mu_{\eta_{1}}^{2}\eta_{1}^{ \dagger}\eta_{1}-\mu_{\eta_{2}}^{2}\eta_{2}^{\dagger}\eta_{2}-\mu_{\rho}^{2}( \rho^{\dagger}\rho)+A\eta_{2}\rho\chi+\lambda_{1}(\chi^{\dagger}\chi)(\chi^{ \dagger}\chi)+\lambda_{2}(\eta^{\dagger}\eta)_{\bf 1}(\eta^{\dagger}\eta)_{\bf 1}\] \[+\lambda_{3}(\eta^{\dagger}\eta)_{\bf 1^{\prime}}(\eta^{\dagger}\eta)_{\bf 1 ^{\prime}}+\lambda_{4}(\eta^{\dagger}\eta)_{\bf 2}(\eta^{\dagger}\eta)_{\bf 2}+ \lambda_{5}(\eta^{\dagger}\eta)_{\bf 1}(\chi^{\dagger}\chi)+\lambda_{6}\left[(\eta^{ \dagger}\chi)(\chi^{\dagger}\eta)\right]_{\bf 1}\] \[+\lambda_{7}(\rho^{\dagger}\rho)^{2}+\lambda_{8}(\eta^{\dagger} \eta)_{\bf 1}(\rho^{\dagger}\rho)+\lambda_{9}(\chi^{\dagger}\chi)(\rho^{\dagger} \rho)+h.c\]
where \(\eta_{1}\) is an inert \(SU(3)_{L}\) scalar triplet, the \(\mu\)'s are mass parameters and \(A\) is the trilinear scalar coupling, while \(\lambda\)'s are the quartic dimensionless couplings. Furthermore, the minimization conditions of the scalar potential yield
\begin{table}
\begin{tabular}{c|c|c c c c c c} \hline Observable & range & \(\Delta m_{21}^{2}\) [\(10^{-5}\)eV\({}^{2}\)] & \(\Delta m_{31}^{2}\) [\(10^{-3}\)eV\({}^{2}\)] & \(\sin\theta_{12}^{(l)}/10^{-1}\) & \(\sin\theta_{13}^{(l)}/10^{-3}\) & \(\sin\theta_{23}^{(l)}/10^{-1}\) & \(\delta_{CP}^{(l)}\) \\ \hline Experimental & \(1\sigma\) & \(7.50^{+0.22}_{-0.20}\) & \(2.55^{+0.02}_{-0.03}\) & \(3.18\pm 0.16\) & \(2.200^{+0.069}_{-0.062}\) & \(5.74\pm 0.14\) & \(194^{+244}_{-22}\) \\ Value & \(3\sigma\) & \(6.94-8.14\) & \(2.47-2.63\) & \(2.71-3.69\) & \(2.000-2.405\) & \(4.34-6.10\) & \(128-359\) \\ \hline Fit & \(1\sigma-3\sigma\) & \(7.498\) & \(2.541\) & \(3.024\) & \(2.191\) & \(5.931\) & \(246.4\) \\ \hline \end{tabular}
\end{table}
Table 4: Model predictions for the scenario of normal order (NO) neutrino mass. The experimental values are taken from Ref. [55]
the following relations
\[\mu_{\chi}^{2} = -\frac{A}{\sqrt{2}}\frac{v_{\eta_{2}}v_{\rho}}{v_{\chi}}+\frac{ \lambda_{5}}{2}v_{\eta_{2}}^{2}+\frac{\lambda_{9}}{2}v_{\rho}^{2}+\lambda_{1}v_ {\chi}^{2},\] \[\mu_{\eta_{2}}^{2} = -\frac{A}{\sqrt{2}}\frac{v_{\rho}v_{\chi}}{v_{\eta_{2}}}+\frac{ \lambda_{8}}{2}v_{\rho}^{2}+\frac{\lambda_{5}}{2}v_{\chi}^{2}+\left(\lambda_{2 }+\lambda_{4}\right)v_{\eta_{2}}^{2}, \tag{34}\] \[\mu_{\rho}^{2} = -\frac{A}{\sqrt{2}}\frac{v_{\eta_{2}}v_{\chi}}{v_{\rho}}+\frac{ \lambda_{8}}{2}v_{\eta_{2}}^{2}+\frac{\lambda_{9}}{2}v_{\chi}^{2}+\lambda_{7}v _{\rho}^{2}.\]
After spontaneous symmetry breaking, the Higgs mass spectrum comes from the diagonalization of the squared mass
Figure 2: Correlation between the mixing angles of the neutrino sector and the CP violation phase obtained with our model. The green and purple bands represent the \(1\sigma\) range in the experimental values, while the dotted line (black) represents the best-fit point of our model.
matrices (see appendix B). The mixing angles1 for the physical eigenstates are:
Footnote 1: The symbol \(\beta\) used in the scalar potential for the \(SU(3)_{L}\) triplets denotes one of the mixing angles; it is distinct from the \(\beta\) appearing in the definition of the electric charge operator.
\[\tan\alpha = \frac{v_{\eta_{2}}}{v_{\chi}},\quad\tan\beta\ =\ \frac{v_{\rho}}{v_{\chi}}, \quad\tan\tau\ =\ \frac{v_{\eta_{2}}}{v_{\rho}},\quad\tan\delta=\frac{v_{\chi}}{v_{\rho}\sin\tau},\quad\tan 2\vartheta=\frac{2\sqrt{2}\lambda_{8}}{Av_{\chi}}\frac{v_{\eta_{2}}^{2} v_{\rho}^{2}}{v_{\rho}^{2}-v_{\eta_{2}}^{2}}, \tag{35}\]
Figure 3 presents correlation plots showing the relationships between the mixing angles and the physical scalar masses. These plots highlight specific correlations, such as the one between the \(\tau\) angle and the charged fields, and the one between the \(\delta\) angle and the charged and pseudoscalar fields. These correlations provide valuable insights into the interactions among the scalar fields and enhance our understanding of the particle properties and of the relationships between their masses and mixing angles. Similar correlations were observed for the other mixing angles, and their analysis offers useful information about the underlying structure of the model.
We find that the charged sector is composed of two Goldstone bosons and three massive charged scalars.
\[M_{G_{1}^{\pm}}^{2} = M_{G_{2}^{\pm}}^{2}=0, \tag{36}\] \[M_{H_{1}^{\pm}}^{2} = \frac{Av_{\eta_{2}}v_{\rho}}{\sqrt{2}v_{\chi}}+\frac{A_{2}v_{\eta _{2}}v_{\chi}}{\sqrt{2}v_{\rho}},\] (37) \[M_{H_{2}^{\pm}}^{2} = \frac{Av_{\eta_{2}}v_{\chi}}{\sqrt{2}v_{\rho}}+\frac{Av_{\rho}v_{ \chi}}{\sqrt{2}v_{\eta_{2}}},\] (38) \[M_{H_{3}^{\pm}}^{2} = \mu_{\eta_{1}}^{2}+\left(\lambda_{2}-\lambda_{4}\right)v_{\eta_{2 }}^{2}+\frac{\lambda_{8}v_{\rho}^{2}}{2}+\frac{\lambda_{5}v_{\chi}^{2}}{2}. \tag{39}\]
The Goldstone bosons arise only from the mixing between \(\rho_{3}^{\pm}\) and \(\eta_{22}^{\pm}\) through the angle \(\tau\), while \(\chi_{2}^{\pm}\) is a massive charged field; the other two massive fields correspond to the mixing of \(\rho_{1}^{\pm}\) with the charged component of the inert scalar, \(\eta_{21}^{\pm}\), through the mixing angle \(\beta\), i.e.,
\[\begin{array}{l}G_{1}^{\pm}=\cos\tau\ \rho_{3}^{\pm}-\sin\tau\ \eta_{22}^{\pm},\quad\ G_{2}^{\pm}=\sin\tau\ \rho_{3}^{\pm}+\cos\tau\ \eta_{22}^{\pm},\\ H_{1}^{\pm}=\cos\beta\ \rho_{1}^{\pm}-\sin\beta\ \eta_{21}^{\pm},\quad H_{2}^{ \pm}=\sin\beta\ \rho_{1}^{\pm}+\cos\beta\ \eta_{21}^{\pm},\\ H_{3}^{\pm}=\chi_{2}^{\pm}.\end{array} \tag{40}\]
Figure 3: Correlations between mixing angles and the masses of the physical charged scalar, neutral scalar/pseudoscalar fields.
The physical mass eigenvalues of the CP odd scalars \(A_{1}^{0},\ A_{2}^{0}\) and the Goldstone bosons \(G_{1}^{0},\ G_{2}^{0}\) can be written as:
\[M_{\tilde{G}_{1}^{0}}^{2} = M_{\tilde{G}_{2}^{0}}^{2}\ =\ 0, \tag{41}\] \[M_{A_{1}^{0}}^{2} = \frac{Av_{\eta_{2}}v_{\rho}}{\sqrt{2}v_{\chi}}+\frac{A}{\sqrt{2} }\left(v_{\eta_{2}}^{2}+v_{\rho}^{2}\right)v_{\chi}^{2},\] (42) \[M_{A_{2}^{0}}^{2} = \mu_{\eta_{1}}^{2}+v_{\eta_{2}}^{2}\left(\lambda_{2}-2\lambda_{3 }-\lambda_{4}\right)+\frac{\lambda_{8}v_{\rho}^{2}}{2}+\frac{\lambda_{5}v_{ \chi}^{2}}{2}. \tag{43}\]
We have the following relationship between the original physical eigenstates:
\[G_{1}^{0}=\cos\tau\ \zeta_{\rho}-\cos\delta\cos\tau\ \zeta_{\eta_{1} }-\sin\tau\ \zeta_{\eta_{2}},\quad G_{2}^{0}=\cos\delta\ \zeta_{\rho}+\sin\delta\ \zeta_{\eta_{1}}, \tag{44}\] \[A_{1}^{0}=\sin\tau\ \zeta_{\rho}-\cos\delta\sin\tau\ \zeta_{\eta_{1} }+\cos\tau\ \zeta_{\eta_{2}},\quad A_{2}^{0}=\zeta_{\chi},\]
where we consider the limit \(v_{\chi}\gg v_{\rho},v_{\eta_{2}}\).
The masses of the light and heavy eigenstates for CP even scalars are given as:
\[M_{h}^{2} = A\frac{\left(v_{\eta_{2}}^{2}+v_{\rho}^{2}\right)v_{\chi}}{2 \sqrt{2}v_{\eta_{2}}v_{\rho}}-\frac{1}{2\sqrt{2}v_{\eta_{2}}v_{\rho}}\sqrt{A^ {2}\left(v_{\eta_{2}}^{2}-v_{\rho}^{2}\right)^{2}v_{\chi}^{2}+8\lambda_{8}v_{ \eta_{2}}^{4}v_{\rho}^{4}}, \tag{45}\] \[M_{H_{1}^{0}} = A\frac{\left(v_{\eta_{2}}^{2}+v_{\rho}^{2}\right)v_{\chi}}{2 \sqrt{2}v_{\eta_{2}}v_{\rho}}+\frac{1}{2\sqrt{2}v_{\eta_{2}}v_{\rho}}\sqrt{A^ {2}\left(v_{\eta_{2}}^{2}-v_{\rho}^{2}\right)^{2}v_{\chi}^{2}+8\lambda_{8}v_{ \eta_{2}}^{4}v_{\rho}^{4}},\] (46) \[M_{H_{2}^{0}} = \mu_{\eta_{1}}^{2}+\left(\lambda_{2}+\lambda_{4}\right)v_{\eta_{2 }}^{2}+\frac{\lambda_{8}}{2}v_{\rho}^{2}+\frac{\lambda_{5}}{2}v_{\chi}^{2},\] (47) \[M_{H_{3}^{0}} = 2\lambda_{1}v_{\chi}^{2}. \tag{48}\]
The lighter mass eigenstate \(h\) is identified as the SM Higgs boson. The two mass eigenstates \(h\) and \(H_{1}^{0}\) are related with the \(\xi_{\eta_{2}}\) and \(\xi_{\rho}\) fields through the rotation angle \(\vartheta\) as:
\[h \simeq \xi_{\eta_{2}}\cos\vartheta-\xi_{\rho}\sin\vartheta, \tag{49}\] \[H_{1}^{0} \simeq \xi_{\eta_{2}}\sin\vartheta+\xi_{\rho}\cos\vartheta, \tag{50}\]
while the heavier fields are related as \(H_{2}^{0}\simeq\xi_{\eta_{1}}\) and \(H_{3}^{0}\simeq\xi_{\chi}\).
Finally, the neutral complex scalar and pseudoscalar fields are composed of mixtures of the imaginary and real parts of \(\eta_{31}^{0}\), \(\eta_{32}^{0}\) and \(\chi_{1}^{0}\), respectively:
\[M_{G_{3}^{0}}^{2} = M_{G_{4}^{0}}^{2}\ =\ 0 \tag{51}\] \[M_{A_{3}^{0}}^{2} = \sqrt{2}A\left(\frac{v_{\eta_{2}}v_{\rho}}{v_{\chi}}+\frac{v_{\rho }v_{\chi}}{v_{\eta_{2}}}\right)-\lambda_{6}\left(v_{\chi}^{2}+v_{\eta_{2}}^{2}\right)\] (52) \[M_{A_{4}^{0}}^{2} = 2\mu_{\eta_{1}}^{2}+2\left(\lambda_{2}-\lambda_{4}\right)v_{\eta_ {2}}^{2}+\lambda_{8}v_{\rho}^{2}+\left(\lambda_{5}-\lambda_{6}\right)v_{\chi}^{2}\] (53) \[M_{H_{4}^{0}}^{2} = \sqrt{2}A\left(\frac{v_{\eta_{2}}v_{\rho}}{v_{\chi}}+\frac{v_{\rho }v_{\chi}}{v_{\eta_{2}}}\right)+\lambda_{6}\left(v_{\chi}^{2}+v_{\eta_{2}}^{2}\right)\] (54) \[M_{H_{5}^{0}}^{2} = 2\mu_{\eta_{1}}^{2}+2\left(\lambda_{2}-\lambda_{4}\right)v_{\eta_ {2}}^{2}+\lambda_{8}v_{\rho}^{2}+\left(\lambda_{5}+\lambda_{6}\right)v_{\chi}^{2} \tag{55}\]
In terms of the physical eigenstates, there are two Goldstone bosons, one massive pseudoscalar and one massive scalar from the mixture of the complex neutral parts of \(\eta_{1}\) and \(\eta_{2}\), while
\[G_{3}^{0}=\sin\alpha\ \text{Im}\eta_{31}^{0}-\cos\alpha\ \text{Im}\eta_{32}^{0},\quad G_{4}^{0}=-\sin\alpha\ \text{Re}\eta_{31}^{0}+\cos\alpha\ \text{Re}\eta_{32}^{0},\] \[A_{3}^{0}=\cos\alpha\ \text{Im}\eta_{31}^{0}+\sin\alpha\ \text{Im}\eta_{32}^{0},\quad H_{4}^{0}=\cos\alpha\ \text{Re}\eta_{31}^{0}+\sin\alpha\ \text{Re}\eta_{32}^{0},\] (56) \[A_{4}^{0}=\text{Im}\chi_{1}^{0},\quad H_{5}^{0}=\text{Re}\chi_{1}^{0}.\]
Figure 4 (a) shows a linear correlation between the charged scalar mass and the pseudoscalar mass \(A_{1}^{0}\), while Fig. 4 (b) shows a linear correlation between the masses of the neutral pseudoscalar and scalar fields, \(A_{3}^{0}\) and \(H_{3}^{0}\) respectively. The charged Goldstone bosons \(\left(G_{1}^{\pm},G_{2}^{\pm}\right)\) are associated with the longitudinal components of the \(W^{\pm}\) and \(W^{\prime\pm}\) gauge bosons respectively, while the neutral Goldstone bosons \(\left(G_{1}^{0},G_{2}^{0},G_{3}^{0},G_{4}^{0}\right)\) are associated with the longitudinal components of the \(Z\), \(Z^{\prime}\), \(K^{0}\) and \(K^{\prime 0}\) gauge bosons. Our best-fit point, which successfully accommodates the 125 GeV mass of the SM-like Higgs boson found at the LHC, corresponds to the following VEV values:
\[v_{\chi}\simeq 9.994\text{ TeV},\quad v_{\rho}\simeq 29.54\text{ GeV},\quad v_{\eta_{2}}\simeq 244.2\text{ GeV} \tag{57}\]
which yield a mass of \(m_{h}=125.387\) GeV for the SM-like Higgs boson. However, to determine the specific benchmark that reproduces the 125 GeV mass value for the SM-like Higgs boson, the numerical values of the relevant parameters of the model are required. Once these parameters are fixed, the numerical contributions to the physical spectrum can be determined. These parameters are needed to accommodate the phenomenological processes that arise in this model and to provide more precise determinations of phenomena such as \(K\) meson oscillations, as well as the constraints from the SM-like Higgs decay into two photons, where extra charged scalar fields induce one-loop level corrections to the Higgs diphoton decay rate. At the same time, the oblique corrections are affected by the presence of the extra scalar fields. These phenomenological processes are studied in more detail in the following sections.
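As a quick numerical consistency check of this VEV benchmark (an illustrative sketch, not part of the paper's analysis), one can verify that the values of Eq. (57) reproduce the electroweak scale of Eq. (2) and evaluate the quartic-independent mixing angles of Eq. (35):

```python
import numpy as np

v_chi, v_rho, v_eta2 = 9994.0, 29.54, 244.2   # GeV, Eq. (57)

v = np.hypot(v_rho, v_eta2)                   # sqrt(v_rho^2 + v_eta2^2), should be ~246 GeV, Eq. (2)
tan_alpha = v_eta2 / v_chi
tan_beta  = v_rho  / v_chi
tan_tau   = v_eta2 / v_rho
tan_delta = v_chi / (v_rho * np.sin(np.arctan(tan_tau)))

print(f"v = {v:.2f} GeV")
print("tan(alpha), tan(beta), tan(tau), tan(delta):",
      tan_alpha, tan_beta, tan_tau, tan_delta)
```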
## VI Meson mixings
In this section we discuss the implications of our model in the Flavour Changing Neutral Current (FCNC) interactions in the down type quark sector. These FCNC down type quark Yukawa interactions produce \(K^{0}-\bar{K}^{0}\), \(B_{d}^{0}-\bar{B}_{d}^{0}\) and \(B_{s}^{0}-\bar{B}_{s}^{0}\) meson oscillations, whose corresponding effective Hamiltonians are:
\[\mathcal{H}_{eff}^{(K)} = \sum_{j=1}^{3}\kappa_{j}^{(K)}\left(\mu\right)\mathcal{O}_{j}^{(K )}\left(\mu\right), \tag{58}\] \[\mathcal{H}_{eff}^{(B_{d})} = \sum_{j=1}^{3}\kappa_{j}^{(B_{d})}\left(\mu\right)\mathcal{O}_{j }^{(B_{d})}\left(\mu\right),\] (59) \[\mathcal{H}_{eff}^{(B_{s})} = \sum_{j=1}^{3}\kappa_{j}^{(B_{s})}\left(\mu\right)\mathcal{O}_{j }^{(B_{s})}\left(\mu\right), \tag{60}\]
Figure 4: Correlation plot between the pseudoscalar neutral, scalar neutral, and scalar charged masses.
where:
\[\mathcal{O}_{1}^{(K)} = (\overline{s}_{R}d_{L})\left(\overline{s}_{R}d_{L}\right),\qquad \qquad\mathcal{O}_{2}^{(K)}=(\overline{s}_{L}d_{R})\left(\overline{s}_{L}d_{R} \right),\qquad\qquad\mathcal{O}_{3}^{(K)}=(\overline{s}_{R}d_{L})\left( \overline{s}_{L}d_{R}\right), \tag{61}\] \[\mathcal{O}_{1}^{(B_{d})} = \left(\overline{d}_{R}b_{L}\right)\left(\overline{d}_{R}b_{L} \right),\qquad\qquad\mathcal{O}_{2}^{(B_{d})}=\left(\overline{d}_{L}b_{R} \right)\left(\overline{d}_{L}b_{R}\right),\qquad\qquad\mathcal{O}_{3}^{(B_{d} )}=\left(\overline{d}_{R}b_{L}\right)\left(\overline{d}_{L}b_{R}\right),\] (62) \[\mathcal{O}_{1}^{(B_{s})} = (\overline{s}_{R}b_{L})\left(\overline{s}_{R}b_{L}\right),\qquad \qquad\mathcal{O}_{2}^{(B_{s})}=(\overline{s}_{L}b_{R})\left(\overline{s}_{L}b _{R}\right),\qquad\qquad\mathcal{O}_{3}^{(B_{s})}=(\overline{s}_{R}b_{L}) \left(\overline{s}_{L}b_{R}\right), \tag{63}\]
and the Wilson coefficients take the form:
\[\kappa_{1}^{(K)} = \frac{x_{h\overline{s}_{R}d_{L}}^{2}}{m_{h}^{2}}+\sum_{m=1}^{5}\sum_{n=1}^{4}\left(\frac{x_{H_{m}^{0}\overline{s}_{R}d_{L}}^{2}}{m_{H_{m}^{0}}^{2}}-\frac{x_{A_{n}^{0}\overline{s}_{R}d_{L}}^{2}}{m_{A_{n}^{0}}^{2}}\right), \tag{64}\] \[\kappa_{2}^{(K)} = \frac{x_{h\overline{s}_{L}d_{R}}^{2}}{m_{h}^{2}}+\sum_{m=1}^{5}\sum_{n=1}^{4}\left(\frac{x_{H_{m}^{0}\overline{s}_{L}d_{R}}^{2}}{m_{H_{m}^{0}}^{2}}-\frac{x_{A_{n}^{0}\overline{s}_{L}d_{R}}^{2}}{m_{A_{n}^{0}}^{2}}\right),\] (65) \[\kappa_{3}^{(K)} = \frac{x_{h\overline{s}_{R}d_{L}}x_{h\overline{s}_{L}d_{R}}}{m_{h}^{2}}+\sum_{m=1}^{5}\sum_{n=1}^{4}\left(\frac{x_{H_{m}^{0}\overline{s}_{R}d_{L}}x_{H_{m}^{0}\overline{s}_{L}d_{R}}}{m_{H_{m}^{0}}^{2}}-\frac{x_{A_{n}^{0}\overline{s}_{R}d_{L}}x_{A_{n}^{0}\overline{s}_{L}d_{R}}}{m_{A_{n}^{0}}^{2}}\right), \tag{66}\]
\[\kappa_{1}^{(B_{d})} = \frac{x_{h\overline{d}_{R}b_{L}}^{2}}{m_{h}^{2}}+\sum_{m=1}^{5}\sum_{n=1}^{4}\left(\frac{x_{H_{m}^{0}\overline{d}_{R}b_{L}}^{2}}{m_{H_{m}^{0}}^{2}}-\frac{x_{A_{n}^{0}\overline{d}_{R}b_{L}}^{2}}{m_{A_{n}^{0}}^{2}}\right), \tag{67}\] \[\kappa_{2}^{(B_{d})} = \frac{x_{h\overline{d}_{L}b_{R}}^{2}}{m_{h}^{2}}+\sum_{m=1}^{5}\sum_{n=1}^{4}\left(\frac{x_{H_{m}^{0}\overline{d}_{L}b_{R}}^{2}}{m_{H_{m}^{0}}^{2}}-\frac{x_{A_{n}^{0}\overline{d}_{L}b_{R}}^{2}}{m_{A_{n}^{0}}^{2}}\right),\] (68) \[\kappa_{3}^{(B_{d})} = \frac{x_{h\overline{d}_{R}b_{L}}x_{h\overline{d}_{L}b_{R}}}{m_{h}^{2}}+\sum_{m=1}^{5}\sum_{n=1}^{4}\left(\frac{x_{H_{m}^{0}\overline{d}_{R}b_{L}}x_{H_{m}^{0}\overline{d}_{L}b_{R}}}{m_{H_{m}^{0}}^{2}}-\frac{x_{A_{n}^{0}\overline{d}_{R}b_{L}}x_{A_{n}^{0}\overline{d}_{L}b_{R}}}{m_{A_{n}^{0}}^{2}}\right), \tag{69}\]
\[\kappa_{1}^{(B_{s})} = \frac{x_{h\overline{s}_{R}b_{L}}^{2}}{m_{h}^{2}}+\sum_{m=1}^{5}\sum_{n=1}^{4}\left(\frac{x_{H_{m}^{0}\overline{s}_{R}b_{L}}^{2}}{m_{H_{m}^{0}}^{2}}-\frac{x_{A_{n}^{0}\overline{s}_{R}b_{L}}^{2}}{m_{A_{n}^{0}}^{2}}\right), \tag{70}\] \[\kappa_{2}^{(B_{s})} = \frac{x_{h\overline{s}_{L}b_{R}}^{2}}{m_{h}^{2}}+\sum_{m=1}^{5}\sum_{n=1}^{4}\left(\frac{x_{H_{m}^{0}\overline{s}_{L}b_{R}}^{2}}{m_{H_{m}^{0}}^{2}}-\frac{x_{A_{n}^{0}\overline{s}_{L}b_{R}}^{2}}{m_{A_{n}^{0}}^{2}}\right),\] (71) \[\kappa_{3}^{(B_{s})} = \frac{x_{h\overline{s}_{R}b_{L}}x_{h\overline{s}_{L}b_{R}}}{m_{h}^{2}}+\sum_{m=1}^{5}\sum_{n=1}^{4}\left(\frac{x_{H_{m}^{0}\overline{s}_{R}b_{L}}x_{H_{m}^{0}\overline{s}_{L}b_{R}}}{m_{H_{m}^{0}}^{2}}-\frac{x_{A_{n}^{0}\overline{s}_{R}b_{L}}x_{A_{n}^{0}\overline{s}_{L}b_{R}}}{m_{A_{n}^{0}}^{2}}\right), \tag{72}\]
where we have used the notation of section V for the physical scalars, assuming \(h\) is the lightest of the CP-even ones and corresponds to the SM Higgs. The \(K-\bar{K}\), \(B_{d}^{0}-\bar{B}_{d}^{0}\) and \(B_{s}^{0}-\bar{B}_{s}^{0}\) meson mass splittings read:
\[\Delta m_{K}=\Delta m_{K}^{(SM)}+\Delta m_{K}^{(NP)},\qquad\Delta m_{B_{d}}= \Delta m_{B_{d}}^{(SM)}+\Delta m_{B_{d}}^{(NP)},\qquad\Delta m_{B_{s}}=\Delta m_{B_{ s}}^{(SM)}+\Delta m_{B_{s}}^{(NP)}, \tag{73}\]
where \(\Delta m_{K}^{(SM)}\), \(\Delta m_{B_{d}}^{(SM)}\) and \(\Delta m_{B_{s}}^{(SM)}\) correspond to the SM contributions, while \(\Delta m_{K}^{(NP)}\), \(\Delta m_{B_{d}}^{(NP)}\) and \(\Delta m_{B_{s}}^{(NP)}\) are due to new physics effects. Our model predicts the following new physics contributions for the \(K-\bar{K}\), \(B_{d}^{0}-\bar{B}_{d}^{0}\) and \(B_{s}^{0}-\bar{B}_{s}^{0}\) meson mass differences:
\[\Delta m_{K}^{(NP)} \simeq \frac{8}{3}f_{K}^{2}\eta_{K}B_{K}m_{K}\left[r_{2}^{(K)}\kappa_{3}^{(K)}+r_{1}^{(K)}\left(\kappa_{1}^{(K)}+\kappa_{2}^{(K)}\right)\right], \tag{74}\] \[\Delta m_{B_{d}}^{(NP)} \simeq \frac{8}{3}f_{B_{d}}^{2}\eta_{B_{d}}B_{B_{d}}m_{B_{d}}\left[r_{2}^{(B_{d})}\kappa_{3}^{(B_{d})}+r_{1}^{(B_{d})}\left(\kappa_{1}^{(B_{d})}+\kappa_{2}^{(B_{d})}\right)\right],\] (75) \[\Delta m_{B_{s}}^{(NP)} \simeq \frac{8}{3}f_{B_{s}}^{2}\eta_{B_{s}}B_{B_{s}}m_{B_{s}}\left[r_{2}^{(B_{s})}\kappa_{3}^{(B_{s})}+r_{1}^{(B_{s})}\left(\kappa_{1}^{(B_{s})}+\kappa_{2}^{(B_{s})}\right)\right], \tag{76}\]
where these new physics contributions arise from the exchange of additional scalar and pseudoscalar fields participating in the flavor violating Yukawa interactions of the model under consideration. We use the following numerical values of the meson parameters [56; 57; 58; 59; 60; 61; 62]:
\[\left(\Delta m_{K}\right)_{\rm exp} = \left(3.484\pm 0.006\right)\times 10^{-12}\,{\rm MeV},\qquad \qquad\left(\Delta{\rm m_{K}}\right)_{\rm SM}=3.483\times 10^{-12}\,{\rm MeV}\] \[f_{K} = 155.7\,{\rm MeV},\qquad\qquad{\rm B_{K}}=0.85,\qquad\qquad\eta_ {\rm K}=0.57,\] \[r_{1}^{(K)} = -9.3,\qquad\qquad r_{2}^{(K)}=30.6,\qquad\qquad m_{K}=(497.611\pm 0.013)\ {\rm MeV}, \tag{77}\]
\[\left(\Delta m_{B_{d}}\right)_{\rm exp} = \left(3.334\pm 0.013\right)\times 10^{-10}\,{\rm MeV},\qquad \qquad\left(\Delta{\rm m_{B_{d}}}\right)_{\rm SM}=\left(3.653\pm 0.037\pm 0.019 \right)\times 10^{-10}\,{\rm MeV},\] \[f_{B_{d}} = 188\,{\rm MeV},\qquad\qquad{\rm B_{B_{d}}}=1.26,\qquad\qquad \eta_{\rm B_{d}}=0.55,\] \[r_{1}^{(B_{d})} = -0.52,\qquad\qquad r_{2}^{(B_{d})}=0.88,\qquad\qquad m_{B_{d}}=(5 279.65\pm 0.12)\ {\rm MeV}, \tag{78}\]
\[\left(\Delta m_{B_{s}}\right)_{\rm exp} = \left(1.1683\pm 0.0013\right)\times 10^{-8}\,{\rm MeV},\qquad \qquad\left(\Delta{\rm m_{B_{s}}}\right)_{\rm SM}=\left(1.1577\pm 0.022\pm 0.051 \right)\times 10^{-8}\,{\rm MeV},\] \[f_{B_{s}} = 225\,{\rm MeV},\qquad\qquad\quad B_{\rm B_{s}}=1.33,\qquad\qquad \eta_{\rm B_{s}}=0.55,\] \[r_{1}^{(B_{s})} = -0.52,\qquad\qquad r_{2}^{(B_{s})}=0.88,\qquad\qquad m_{B_{s}}=(5 366.9\pm 0.12)\ {\rm MeV}, \tag{79}\]
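As a rough numerical illustration (not taken from the paper), Eq. (74) can be evaluated for hypothetical Wilson coefficients generated by a single heavy neutral scalar with a flavor violating coupling of order \(10^{-6}\) and a mass of a few TeV, and the result compared with the room left between \((\Delta m_{K})_{\rm exp}\) and \((\Delta m_{K})_{\rm SM}\); the coupling and mass values below are assumptions chosen only for illustration.

```python
import numpy as np

# K meson parameters of Eq. (77), in MeV where dimensionful.
fK, BK, etaK, mK = 155.7, 0.85, 0.57, 497.611
r1K, r2K = -9.3, 30.6
dmK_exp, dmK_SM = 3.484e-12, 3.483e-12        # MeV

# Hypothetical single-scalar Wilson coefficients: kappa ~ x^2 / m_S^2
x, mS = 1.0e-6, 2.0e6                         # dimensionless coupling, scalar mass in MeV (2 TeV)
kappa = x**2 / mS**2                          # MeV^-2
k1 = k2 = k3 = kappa

# Eq. (74): new physics contribution to the K meson mass splitting
dmK_NP = (8.0/3.0) * fK**2 * etaK * BK * mK * (r2K*k3 + r1K*(k1 + k2))
print("Delta m_K^(NP) [MeV]:", dmK_NP)
print("allowed room   [MeV]:", dmK_exp - dmK_SM)
```

For these illustrative inputs the new physics contribution is several orders of magnitude below the room allowed by experiment, in line with the discussion below.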
Fig. 5 (a) and Fig. 5 (b) show the correlations of the mass splitting \(\Delta m_{K}\) with the masses of the lightest CP-even and CP-odd scalars, \(m_{H_{2}^{0}}\) and \(m_{A_{1}^{0}}\), respectively. In our numerical analysis, for the sake of simplicity, we have set the couplings of the flavor-changing neutral Yukawa interactions that produce the \((K^{0}-\overline{K}^{0})\) mixing equal to \(10^{-6}\). In addition, we have varied the masses within \(20\%\) around their best-fit values obtained in the analysis of the scalar sector shown in the plots of Fig. 4. As indicated in Fig. 5, our model can successfully accommodate the experimental constraints arising from \((K^{0}-\overline{K}^{0})\) meson oscillations in the above specified range of parameter space. We have numerically verified that, in the range of masses described above, the values obtained for the mass splittings \(\Delta m_{B_{d}}\) and \(\Delta m_{B_{s}}\) are consistent with the experimental data on meson oscillations for flavor violating Yukawa couplings equal to \(10^{-4}\) and \(2.5\times 10^{-4}\), respectively.
## VII Higgs di-photon decay rate
In order to study the implications of our model in the decay of the 125 GeV Higgs into a photon pair, one introduces the Higgs diphoton signal strength \(R_{\gamma\gamma}\), which is defined as [63]:
\[R_{\gamma\gamma}=\frac{\sigma(pp\to h)\Gamma(h\to\gamma\gamma)}{\sigma(pp\to h )_{\rm SM}\Gamma(h\to\gamma\gamma)_{\rm SM}}\simeq a_{htt}^{2}\frac{\Gamma(h \to\gamma\gamma)}{\Gamma(h\to\gamma\gamma)_{\rm SM}}. \tag{80}\]
This Higgs diphoton signal strength normalizes the \(\gamma\gamma\) signal predicted by our model to the one given by the SM. Here we have used the fact that in our model single Higgs production is also dominated by gluon fusion, as
Figure 5: Correlation a) between the \(\Delta m_{K}\) mass splitting and the lightest CP even scalar mass \(m_{H_{2}^{0}}\), b) between the \(\Delta m_{K}\) mass splitting and the lightest CP odd scalar mass \(m_{A_{1}^{0}}\).
in the Standard Model. In the 3-3-1 model \(\sigma\left(pp\to h\right)=a_{htt}^{2}\sigma\left(pp\to h\right)_{\rm SM}\), so \(R_{\gamma\gamma}\) reduces to the ratio of branching ratios.
The decay rate for the \(h\to\gamma\gamma\) process takes the form [63; 64; 65]:
\[\Gamma(h\to\gamma\gamma)=\frac{\alpha_{\rm em}^{2}m_{h}^{3}}{256\pi^{3}v^{2}} \left|\sum_{f}a_{hff}N_{C}Q_{f}^{2}F_{1/2}\left(\varrho_{f}\right)+a_{hWW}F_{1 }\left(\varrho_{W}\right)+\sum_{k}\frac{C_{hH_{k}^{\pm}H_{k}^{\mp}v}}{2m_{H_{k }^{\pm}}^{2}}F_{0}\left(\varrho_{H_{k}^{\pm}}\right)\right|^{2} \tag{81}\]
where \(\alpha_{\rm em}\) is the fine structure constant, \(N_{C}\) is the color factor (\(N_{C}=3\) for quarks and \(N_{C}=1\) for leptons) and \(Q_{f}\) is the electric charge of the fermion in the loop. From the fermion-loop contributions we only consider the dominant top quark term. The \(\varrho_{i}\) are the mass ratios \(\varrho_{i}=4M_{i}^{2}/m_{h}^{2}\) with \(M_{i}=m_{f},M_{W},M_{H_{k}^{\pm}}\) with \(k=1,2,3\). Furthermore, \(C_{hH_{k}^{\pm}H_{k}^{\mp}}\) is the trilinear coupling between the SM-like Higgs and a pair of charged Higgs, whereas \(a_{htt}\) and \(a_{hWW}\) are the deviation factors from the SM Higgs-top quark coupling and the SM Higgs-W gauge boson coupling, respectively (in the SM these factors are unity). Such deviation factors are close to unity in our model, which is a consequence of the numerical analysis of its scalar, Yukawa and gauge sectors.
The form factors for the contributions from spin-0, 1/2 and 1 particles are:
\[F_{0}(\varrho) = -\varrho(1-\varrho f(\varrho)), \tag{82}\] \[F_{1/2}(\varrho) = 2\varrho(1+(1-\varrho)f(\varrho)),\] (83) \[F_{1}(\varrho) = -\left(2+3\varrho+3\varrho\left(2-\varrho\right)f(\varrho) \right), \tag{84}\]
with
\[f(\varrho)=\begin{cases}\arcsin^{2}\sqrt{\varrho^{-1}}&\text{ for }\varrho\geq 1\\ -\frac{1}{4}\left[\ln\left(\frac{1+\sqrt{1-\varrho}}{1-\sqrt{1-\varrho}} \right)-i\pi\right]^{2}&\text{ for }\varrho<1\end{cases} \tag{85}\]
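A minimal numerical sketch of Eqs. (81)-(85) is given below (it is not part of the paper's analysis); the SM amplitude is built from the top and \(W\) loops, and a single hypothetical charged scalar is added with an assumed mass and trilinear coupling, taking the scalar term to enter as \(C_{hH^{\pm}H^{\mp}}v/(2m_{H^{\pm}}^{2})F_{0}\) as in Eq. (81):

```python
import numpy as np

def f(rho):
    """Loop function of Eq. (85)."""
    if rho >= 1:
        return np.arcsin(np.sqrt(1.0 / rho))**2
    x = np.sqrt(1 - rho)
    return -0.25 * (np.log((1 + x) / (1 - x)) - 1j * np.pi)**2

F0  = lambda r: -r * (1 - r * f(r))                       # spin-0  (charged scalars), Eq. (82)
F12 = lambda r: 2 * r * (1 + (1 - r) * f(r))              # spin-1/2 (top quark),      Eq. (83)
F1  = lambda r: -(2 + 3 * r + 3 * r * (2 - r) * f(r))     # spin-1  (W boson),         Eq. (84)

mh, mt, mW, v = 125.0, 173.0, 80.4, 246.0                 # GeV
rho = lambda m: 4 * m**2 / mh**2

# SM amplitude: dominant top and W loops with a_htt = a_hWW = 1
A_SM = 3 * (2/3)**2 * F12(rho(mt)) + F1(rho(mW))

# Hypothetical charged scalar contribution; mass and trilinear coupling are
# illustrative assumptions, not values fitted in the paper.
mHp, C_hHH = 1500.0, 3000.0                               # GeV
A_NP = C_hHH * v / (2 * mHp**2) * F0(rho(mHp))

R_gamgam = abs(A_SM + A_NP)**2 / abs(A_SM)**2
print("R_gamma_gamma ~", R_gamgam)
```

Depending on the sign and size of the assumed trilinear coupling, the charged scalar loop interferes with the dominant \(W\) contribution and can push \(R_{\gamma\gamma}\) slightly below unity, which is the qualitative behavior discussed below.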
Table 5 displays the best-fit value of the \(R_{\gamma\gamma}\) ratio in comparison with the best-fit signals measured by CMS [66] and ATLAS [67]. In this analysis, the charged fields play a key role in determining the value of the ratio, while the other fields have an indirect impact through the parameter space involving the VEVs and the trilinear scalar coupling \(A\), as well as some of the \(\lambda_{i}\). From our numerical analysis, it follows that our model favors a Higgs diphoton decay rate lower than the SM expectation but inside the \(3\sigma\) experimentally allowed range. The correlation of the Higgs diphoton signal strength with the charged scalar mass \(M_{H_{2}^{\pm}}\) is shown in Fig. 6, which indicates that our model successfully accommodates the current Higgs diphoton decay rate constraints. Additionally, it should be noted that the correlation with \(M_{H_{1}^{\pm}}\) is similar, although weaker than the one for \(M_{H_{3}^{\pm}}\).
## VIII Oblique \(T\), \(S\) and \(U\) parameters
The parameters \(S\), \(T\), and \(U\) quantify the corrections to the two-point functions of the gauge bosons arising through loop diagrams. In our case, the three \(SU(3)_{L}\) scalar triplets introduce new scalar particles, which lead to new Higgs-mediated contributions to the gauge boson self-energies through loop diagrams. Based on references [68; 69; 70], the parameters \(S\), \(T\), and \(U\) can be defined as follows:
\[T = \frac{1}{\alpha_{\rm em}M_{W}}\left.\left[\Pi_{11}\left(q^{2} \right)-\Pi_{33}\left(q^{2}\right)\right]\right|_{q=0} \tag{86}\] \[S = -\left.\frac{4c_{W}s_{W}}{\alpha_{\rm em}}\frac{d}{dq^{2}}\Pi_{30 }\left(q^{2}\right)\right|_{q^{2}=0},\] (87) \[U = \frac{4s_{W}}{\alpha_{\rm em}}\frac{d}{dq^{2}}\left.\left[\Pi_{11 }\left(q^{2}\right)-\Pi_{33}\left(q^{2}\right)\right]\right|_{q^{2}=0}, \tag{88}\]
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Model value & CMS & ATLAS \\ \hline \(R_{\gamma\gamma}\) & \(0.982\pm 0.08\) & \(1.02^{+0.11}_{-0.09}\) & \(1.04^{+0.10}_{-0.09}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Best fit for the Higgs boson diphoton decay ratio obtained from the model, indicating a lower Higgs decay rate into two photons compared with the Standard Model expectation and with the ATLAS [67] and CMS [66] measurements. This value nevertheless falls within the \(3\sigma\) experimentally allowed range.
with \(s_{W}=\sin\theta_{W}\) and \(c_{W}=\cos\theta_{W}\), where \(\theta_{W}\) is the electroweak mixing angle, the quantity \(\Pi_{ij}\left(q\right)\) is defined in terms of the vacuum-polarization tensors
\[\Pi_{ij}^{\mu\nu}\left(q^{2}\right)=g^{\mu\nu}\Pi_{ij}\left(q^{2}\right)-iq^{ \mu}q^{\nu}\Delta_{ij}\left(q^{2}\right), \tag{89}\]
where \(i,j=0,1,3\) for the \(B\), \(W_{1}\) and \(W_{3}\) bosons respectively, or possibly \(i,j=W,Z,\gamma\). If the new physics enters at the TeV scale, the effect of the theory will be well-described by an expansion up to linear order in \(q^{2}\) for \(\Pi_{ij}\left(q^{2}\right)\) as presented in reference [68].
For our 331 model, the scalar fields directly contribute to the new physics values of the \(T\), \(S\), and \(U\) oblique parameters, and we must take into account the scalar mixing angles. We can calculate the parameters considering that the low energy effective field theory below the scale of spontaneous breaking of the \(SU(3)_{L}\times U(1)_{X}\times U(1)_{L_{g}}\) symmetry corresponds to a three Higgs doublet model (3HDM), where the three Higgs doublets arise from the \(\eta_{1}\), \(\eta_{2}\) and \(\rho\)\(SU(3)_{L}\) scalar triplets. Then, following these considerations, in the above described low energy limit scenario, the leading contributions to the oblique \(T\), \(S\) and \(U\) parameters take the form [71; 72; 73; 51]:
\[T \simeq t_{0}\left[\sum_{a=1}^{2}\sum_{k=1}^{2}\left[\left(R_{C}\right)_{ak }\right]^{2}m_{H_{k}^{\pm}}^{2}+\sum_{a=1}^{2}\sum_{i=1}^{2}\sum_{j=1}^{2} \left[\left(R_{H}\right)_{ai}\left(R_{A}\right)_{aj}\right]^{2}F\left(m_{H_{i} ^{0}}^{2},m_{A_{j}^{0}}^{2}\right)\,, \tag{90}\] \[-\sum_{a=1}^{2}\sum_{i=1}^{2}\sum_{k=1}^{2}\left\{\left[\left(R _{H}\right)_{ai}\left(R_{C}\right)_{ak}\right]^{2}F\left(m_{H_{i}^{0}}^{2},m_{ H_{k}^{\pm}}^{2}\right)+\left[\left(R_{A}\right)_{ai}\left(R_{C}\right)_{ak} \right]^{2}F\left(m_{A_{i}^{0}}^{2},m_{H_{k}^{\pm}}^{2}\right)\right\}\right]\] \[S \simeq \frac{1}{12\pi}\sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{k=1}^{2}\left[ \left(R_{H}\right)_{ki}\left(R_{A}\right)_{kj}\right]^{2}K\left(m_{H_{i}^{0}}^ {2},m_{A_{j}^{0}}^{2},m_{H_{k}^{\pm}}^{2}\right),\] (91) \[U \simeq -S+\sum_{a=1}^{2}\sum_{i=1}^{2}\sum_{k=1}^{2}\left\{\left[\left( R_{A}\right)_{ai}\left(R_{C}\right)_{ak}\right]^{2}G\left(m_{A_{i}^{0}}^{2},m_{H_{k}^{ \pm}}^{2}\right)+\left[\left(R_{H}\right)_{ai}\left(R_{C}\right)_{ak}\right]^ {2}G\left(m_{H_{i}^{0}}^{2},m_{H_{k}^{\pm}}^{2}\right)\right\}, \tag{92}\]
where \(t_{0}=\left(16\pi^{2}v^{2}\alpha_{\rm em}\left(M_{Z}\right)\right)^{-1}\) and \(R_{C}\), \(R_{H}\), \(R_{A}\) are the mixing matrices for the charged scalar fields, neutral scalar and pseudoscalars, respectively presented in the Sec. V. Furthermore, the following loop functions \(F\left(m_{1}^{2},m_{2}^{2}\right)\)
Figure 6: Correlation of the Higgs di-photon signal strength with the charged scalar mass. The red star point corresponds to the best fit for \(R_{\gamma\gamma}\) (see Table V).
\(G\left(m_{1}^{2},m_{2}^{2}\right)\) and \(K\left(m_{1}^{2},m_{2}^{2},m_{3}^{2}\right)\) were introduced in [51]:
\[F\left(m_{1}^{2},m_{2}^{2}\right) = \frac{m_{1}^{2}m_{2}^{2}}{m_{1}^{2}-m_{2}^{2}}\ln\left(\frac{m_{1} ^{2}}{m_{2}^{2}}\right), \tag{93}\] \[G\left(m_{1}^{2},m_{2}^{2}\right) = \frac{-5m_{1}^{6}+27m_{1}^{4}m_{2}^{2}-27m_{1}^{2}m_{2}^{4}+6 \left(m_{1}^{6}-3m_{1}^{4}m_{2}^{2}\right)\ln\left(\frac{m_{1}^{2}}{m_{2}^{2}} \right)+5m_{2}^{6}}{6\left(m_{1}^{2}-m_{2}^{2}\right)^{3}},\] (94) \[K\left(m_{1}^{2},m_{2}^{2},m_{3}^{2}\right) = \frac{1}{\left(m_{2}^{2}-m_{1}^{2}\right)^{3}}\left\{m_{1}^{4} \left(3m_{2}^{2}-m_{1}^{2}\right)\ln\left(\frac{m_{1}^{2}}{m_{3}^{2}}\right) -m_{2}^{4}\left(3m_{1}^{2}-m_{2}^{2}\right)\ln\left(\frac{m_{2}^{2}}{m_{3}^{2} }\right)\right.\] (95) \[\left.-\frac{1}{6}\left[27m_{1}^{2}m_{2}^{2}\left(m_{1}^{2}-m_{2} ^{2}\right)+5\left(m_{2}^{6}-m_{1}^{6}\right)\right]\right\}.\]
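For completeness, a small sketch (not from the paper) implementing the loop functions of Eqs. (93)-(95) is given below; the chosen squared masses are purely illustrative, and exactly degenerate arguments require taking the analytic limit since the expressions as written are singular there.

```python
import numpy as np

def F(m1sq, m2sq):
    """Loop function of Eq. (93); arguments are squared masses."""
    return m1sq * m2sq / (m1sq - m2sq) * np.log(m1sq / m2sq)

def G(m1sq, m2sq):
    """Loop function of Eq. (94)."""
    x, y = m1sq, m2sq
    num = (-5*x**3 + 27*x**2*y - 27*x*y**2
           + 6*(x**3 - 3*x**2*y) * np.log(x / y) + 5*y**3)
    return num / (6.0 * (x - y)**3)

def K(m1sq, m2sq, m3sq):
    """Loop function of Eq. (95)."""
    x, y, z = m1sq, m2sq, m3sq
    return (1.0 / (y - x)**3) * (
        x**2 * (3*y - x) * np.log(x / z)
        - y**2 * (3*x - y) * np.log(y / z)
        - (1.0/6.0) * (27*x*y*(x - y) + 5*(y**3 - x**3))
    )

# Example with illustrative (hypothetical) masses in GeV:
mH, mA, mHp = 1200.0, 1150.0, 1300.0
print("F:", F(mH**2, mA**2), " G:", G(mA**2, mHp**2), " K:", K(mH**2, mA**2, mHp**2))
# Note: for exactly degenerate arguments take the analytic limit (e.g. F(x, x) = x)
# or split the masses slightly, since the formulas above divide by mass differences.
```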
Besides that, the experimental limits on \(S\), \(T\), and \(U\) are given in Ref. [54]:
\[T_{\rm exp} = 0.03\pm 0.12 \tag{96}\] \[S_{\rm exp} = -0.02\pm 0.1\] (97) \[U_{\rm exp} = 0.01\pm 0.11 \tag{98}\]
From the numerical analysis, the parameters of the \(S_{4}\) flavored 3-3-1 model are restricted because the new physics contributions to \(S\), \(T\), and \(U\) are determined by the physical masses of the model. Within the limits indicated by the oblique parameters, these contributions exhibit the correlations shown in Fig. 7, where the allowed parameter space lies within the \(1\sigma\) experimental range, corresponding to the overlapping region. The scatter plots of Figs. 7 (a) and (c) involving the \(U\) parameter yield values larger than the central one; despite this, the obtained values of \(U\) fit within the \(1\sigma\) range and the statistical discrepancy is small. In the case of \(S\), due to its larger experimental uncertainty, the obtained values fit more naturally, as shown in Fig. 7 (b).
Our analysis shows that our model allows a successful fit of the oblique \(S\), \(T\) and \(U\) parameters, consistent with their current experimental limits. The best-fit values obtained for the oblique \(S\), \(T\) and \(U\) parameters in our model are:
\[T = 0.029\pm 0.009, \tag{99}\] \[S = -0.016\pm 0.006,\] (100) \[U = 0.14\pm 0.04 \tag{101}\]
Our results also suggest that the model favors a larger value for \(U\), still within the statistical uncertainty, while the values of \(S\) and \(T\) fit within relative errors of 0.2 and 0.1, respectively.
## IX Conclusions
We have constructed a 3-3-1 model where the \(SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}\) symmetry is enlarged by the inclusion of the spontaneously broken \(U(1)_{L_{g}}\times S_{4}\times Z_{4}\times Z_{4}^{\prime}\times Z_{2}\) symmetry group. In the theory under consideration, the observed SM charged fermion mass hierarchy and the quark mixing pattern arise from the spontaneous breaking of the discrete symmetries, and the tiny active neutrino masses are generated from an inverse seesaw mechanism. Our proposed model leads to a successful fit of the quark and lepton masses, mixing angles, and CP phases, whose obtained values are consistent with the experimental data within the \(3\sigma\) range. The symmetries of the model give rise to correlations between the mixing angles and the Jarlskog invariant in the quark sector. Regarding the lepton sector, our model predicts a diagonal SM charged lepton mass matrix, thus implying that the leptonic mixing arises only from the neutrino sector, where correlations between the leptonic mixing angles and the leptonic CP violating phase are obtained. In addition, flavor-changing neutral current interactions in the quark sector mediated by CP-even and CP-odd scalars give rise to meson oscillations, such as the \((K^{0}-\overline{K}^{0})\) mixing, whose experimental constraints are successfully satisfied in an appropriate region of the parameter space. The theory under consideration is consistent with the masses and mixings of the SM fermions as well as with the constraints arising from \((K^{0}-\overline{K}^{0})\) and \((B_{d,s}^{0}-\overline{B}_{d,s}^{0})\) meson oscillations. The charged scalars of our model provide the new physics contribution to the Higgs diphoton decay rate, whereas the rest of the scalar fields have an indirect influence through the VEVs and the trilinear scalar coupling \(A\); our model favors an \(R_{\gamma\gamma}\) value lower than the SM one, which is nevertheless within the \(3\sigma\) experimentally allowed range measured by the
CMS and ATLAS collaborations. The extra scalar fields of our model produce radiative corrections to the oblique parameters \(S\), \(T\), and \(U\); the numerical analysis yields correlations between these parameters and, in addition, their obtained values are within the \(1\sigma\) experimentally allowed range.
###### Acknowledgements.
A.E.C.H is supported by ANID-Chile FONDECYT 1210378, ANID PIA/APOYO AFB220004 and Milenio-ANID-ICN2019_044 and ANID Programa de Becas Doctorado Nacional code 21212041.
## Appendix A The product rules of the \(S_{4}\) discrete group
\(S_{4}\) is the smallest non-abelian group having doublet, triplet and singlet irreducible representations. It is the group of permutations of four objects and has five irreducible representations, i.e., \(\mathbf{1},\mathbf{1^{\prime}},\mathbf{2},\mathbf{3},\mathbf{3^{\prime}}\), fulfilling the following tensor product rules [52]:
\[\mathbf{3}\otimes\mathbf{3}=\mathbf{1}\oplus\mathbf{2}\oplus \mathbf{3}\oplus\mathbf{3^{\prime}},\quad\mathbf{3^{\prime}}\otimes\mathbf{3^{ \prime}}=\mathbf{1}\oplus\mathbf{2}\oplus\mathbf{3}\oplus\mathbf{3^{\prime}}, \quad\mathbf{3}\otimes\mathbf{3^{\prime}}=\mathbf{1^{\prime}}\oplus\mathbf{2 }\oplus\mathbf{3}\oplus\mathbf{3^{\prime}},\] \[\mathbf{2}\otimes\mathbf{2}=\mathbf{1}\oplus\mathbf{1^{\prime}} \oplus\mathbf{2},\quad\mathbf{2}\otimes\mathbf{3}=\mathbf{3}\oplus\mathbf{3^{ \prime}},\quad\mathbf{2}\otimes\mathbf{3^{\prime}}=\mathbf{3^{\prime}}\oplus \mathbf{3}\] \[\mathbf{3}\otimes\mathbf{1^{\prime}}=\mathbf{3^{\prime}},\quad \mathbf{3^{\prime}}\otimes\mathbf{1^{\prime}}=\mathbf{3},\quad\mathbf{2} \otimes\mathbf{1^{\prime}}=\mathbf{2}\]
Explicitly, the basis used in this paper corresponds to Ref. [52] and results in
Figure 7: Correlation between the oblique parameters. The shaded region represents the measured values at \(1\sigma\), while the dashed line represents the central value of Ref. [54].
\[\begin{pmatrix}a_{1}\\ a_{2}\\ a_{3}\end{pmatrix}_{\mathbf{3}}\otimes\begin{pmatrix}b_{1}\\ b_{2}\\ b_{3}\end{pmatrix}_{\mathbf{3}} = (a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3})_{\mathbf{1}_{1}}\oplus\begin{pmatrix} \frac{1}{\sqrt{2}}(a_{2}b_{2}-a_{3}b_{3})\\ \frac{1}{\sqrt{6}}(-2a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3})\end{pmatrix}_{\mathbf{2}} \tag{101}\] \[\oplus\begin{pmatrix}a_{2}b_{3}+a_{3}b_{1}\\ a_{1}b_{3}+a_{3}b_{1}\\ a_{1}b_{2}+a_{2}b_{1}\end{pmatrix}_{\mathbf{3}}\oplus\begin{pmatrix}a_{3}b_{2}- a_{2}b_{3}\\ a_{1}b_{3}-a_{3}b_{1}\\ a_{2}b_{1}-a_{1}b_{2}\end{pmatrix}_{\mathbf{3}^{\prime}},\]
\[\begin{pmatrix}a_{1}\\ a_{2}\\ a_{3}\end{pmatrix}_{\mathbf{3}^{\prime}}\otimes\begin{pmatrix}b_{1}\\ b_{2}\\ b_{3}\end{pmatrix}_{\mathbf{3}^{\prime}} = (a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3})_{\mathbf{1}_{1}}\oplus\begin{pmatrix} \frac{1}{\sqrt{2}}(a_{2}b_{2}-a_{3}b_{3})\\ \frac{1}{\sqrt{6}}(-2a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3})\end{pmatrix}_{\mathbf{2}} \tag{102}\] \[\oplus\begin{pmatrix}a_{2}b_{3}+a_{3}b_{2}\\ a_{1}b_{3}+a_{3}b_{1}\\ a_{1}b_{2}+a_{2}b_{1}\end{pmatrix}_{\mathbf{3}}\oplus\begin{pmatrix}a_{3}b_{2}- a_{2}b_{3}\\ a_{1}b_{3}-a_{3}b_{1}\\ a_{2}b_{1}-a_{1}b_{2}\end{pmatrix}_{\mathbf{3}^{\prime}},\]
\[\begin{pmatrix}a_{1}\\ a_{2}\\ a_{3}\end{pmatrix}_{\mathbf{3}}\otimes\begin{pmatrix}b_{1}\\ b_{2}\\ b_{3}\end{pmatrix}_{\mathbf{3}^{\prime}} = (a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3})_{\mathbf{1}_{2}}\oplus\begin{pmatrix} \frac{1}{\sqrt{6}}(2a_{1}b_{1}-a_{2}b_{2}-a_{3}b_{3})\\ \frac{1}{\sqrt{2}}(a_{2}b_{2}-a_{3}b_{3})\end{pmatrix}_{\mathbf{2}} \tag{103}\] \[\oplus\begin{pmatrix}a_{3}b_{2}-a_{2}b_{3}\\ a_{1}b_{3}-a_{3}b_{1}\\ a_{2}b_{1}-a_{1}b_{2}\end{pmatrix}_{\mathbf{3}}\oplus\begin{pmatrix}a_{2}b_{3}+ a_{3}b_{2}\\ a_{1}b_{3}+a_{3}b_{1}\\ a_{1}b_{2}+a_{2}b_{1}\end{pmatrix}_{\mathbf{3}^{\prime}}\]
\[\begin{pmatrix}a_{1}\\ a_{2}\end{pmatrix}_{\mathbf{2}}\otimes\begin{pmatrix}b_{1}\\ b_{2}\end{pmatrix}_{\mathbf{2}} = (a_{1}b_{1}+a_{2}b_{2})_{\mathbf{1}}\oplus(-a_{1}b_{2}+a_{2}b_{1})_{\mathbf{1}^{\prime}}\oplus\begin{pmatrix}a_{1}b_{2}+a_{2}b_{1}\\ a_{1}b_{1}-a_{2}b_{2}\end{pmatrix}_{\mathbf{2}}, \tag{104}\]
\[\begin{pmatrix}a_{1}\\ a_{2}\end{pmatrix}_{\mathbf{2}}\otimes\begin{pmatrix}b_{1}\\ b_{2}\\ b_{3}\end{pmatrix}_{\mathbf{3}} = \begin{pmatrix}a_{2}b_{1}\\ -\frac{1}{2}(\sqrt{3}a_{1}b_{2}+a_{2}b_{2})\\ \frac{1}{2}(\sqrt{3}a_{1}b_{3}-a_{2}b_{3})\end{pmatrix}_{\mathbf{3}}\oplus\begin{pmatrix}a_{1}b_{1}\\ \frac{1}{2}(\sqrt{3}a_{2}b_{2}-a_{1}b_{2})\\ -\frac{1}{2}(\sqrt{3}a_{2}b_{3}+a_{1}b_{3})\end{pmatrix}_{\mathbf{3}^{\prime}}, \tag{105}\]
\[\begin{pmatrix}a_{1}\\ a_{2}\end{pmatrix}_{\mathbf{2}}\otimes\begin{pmatrix}b_{1}\\ b_{2}\\ b_{3}\end{pmatrix}_{\mathbf{3}^{\prime}} = \begin{pmatrix}a_{1}b_{1}\\ \frac{1}{2}(\sqrt{3}a_{2}b_{2}-a_{1}b_{2})\\ -\frac{1}{2}(\sqrt{3}a_{2}b_{3}+a_{1}b_{3})\end{pmatrix}_{\mathbf{3}}\oplus\begin{pmatrix}a_{2}b_{1}\\ -\frac{1}{2}(\sqrt{3}a_{1}b_{2}+a_{2}b_{2})\\ \frac{1}{2}(\sqrt{3}a_{1}b_{3}-a_{2}b_{3})\end{pmatrix}_{\mathbf{3}^{\prime}} \tag{106}\]
## Appendix B Mass squared matrices for scalar sector
In the Born-level analysis of the mass spectrum of the Higgs bosons and of the physical basis of scalar particles, we must construct the scalar mass matrices of the model. Substituting the constraint equations (34) and Eq. (6) into the scalar potential of Eq. (33), the squared mass matrices are determined by calculating the second derivatives of the potential
\[\left(M_{\phi^{\pm}}^{2}\right)_{ij}=\left.\frac{\partial V}{\partial\phi_{i}^{+} \partial\phi_{j}^{-}}\right|_{\langle\phi^{\pm}\rangle},\qquad\left(M_{\phi}^ {2}\right)_{ij}=\left.\frac{\partial V}{\partial\phi_{i}\partial\phi_{j}} \right|_{\langle\phi\rangle}, \tag{107}\]
where \(\phi^{\pm}=\chi_{2}^{+},\rho_{1}^{+},\rho_{3}^{+},\eta_{21}^{+},\eta_{22}^{+}, \chi_{2}^{-},\rho_{1}^{-},\rho_{3}^{-},\eta_{21}^{-},\eta_{22}^{-}\) for the charged fields and \(\phi=\xi_{\chi},\xi_{\rho},\xi_{\eta_{1}},\xi_{\eta_{2}},\zeta_{\chi}\), \(\zeta_{\rho}\), \(\zeta_{\eta_{1}}\), \(\zeta_{\eta_{2}}\), \(\chi_{1}^{0},\eta_{31}^{0},\eta_{32}^{0}\) for the neutral scalar fields. Due to the symmetry of our model, the matrices of the CP-odd and CP-even
sectors contain two diagonal blocks, a situation similar to that found in other works on 3-3-1 models [18]; however, our results differ in the higher-dimensional matrices because of the extra inert field.
In the charged sector, the mass squared matrix in the basis \(\left(\eta_{21}^{\pm},\eta_{22}^{\pm},\rho_{1}^{\pm},\rho_{3}^{\pm},\chi_{2}^{\pm}\right)\) takes the form
\[M_{\phi^{\pm}}^{2}=\left(\begin{array}{ccccc}\mu_{\eta_{1}}^{2}+\left(\lambda -\lambda_{4}\right)v_{\eta}^{2}+\frac{\lambda_{8}}{2}v_{\rho}^{2}+\frac{ \lambda_{8}}{2}v_{\chi}^{2}&0&0&0\\ 0&\frac{Av_{\rho}v_{\chi}}{\sqrt{2}v_{\eta_{2}}}&\frac{Av_{\chi}}{\sqrt{2}}&0 &0\\ 0&\frac{Av_{\rho}}{\sqrt{2}}&\frac{Av_{\chi}v_{\eta_{2}}}{\sqrt{2}v_{\rho}}&0 &0\\ 0&0&0&\frac{Av_{\eta_{2}}}{\sqrt{2}v_{\rho}}&\frac{Av_{\eta_{2}}}{\sqrt{2}}\\ 0&0&0&\frac{Av_{\eta_{2}}}{\sqrt{2}}&\frac{Av_{\eta_{2}}v_{\eta_{2}}}{\sqrt{2} v_{\chi}}\end{array}\right) \tag{104}\]
In the CP-odd sector, the squared mass matrix in the pseudoscalar neutral basis \(\left(\zeta_{\eta_{1}},\zeta_{\eta_{2}},\zeta_{\rho},\zeta_{\chi}\right)\) is
\[M_{\zeta}^{2}=\left(\begin{array}{ccccc}\mu_{\eta_{1}}^{2}+\left(\lambda_{2} -2\lambda_{3}-\lambda_{4}\right)v_{\eta_{2}}^{2}+\frac{\lambda_{8}}{2}v_{\rho }^{2}+\frac{\lambda_{8}}{2}v_{\chi}^{2}&0&0&0\\ 0&\frac{Av_{\rho}v_{\chi}}{\sqrt{2}v_{\eta_{2}}}&\frac{Av_{\chi}}{\sqrt{2}}& \frac{Av_{\rho}}{\sqrt{2}}\\ 0&\frac{Av_{\rho}}{\sqrt{2}}&\frac{Av_{\chi}v_{\eta_{2}}}{\sqrt{2}v_{\rho}}& \frac{Av_{\eta_{2}}}{\sqrt{2}}\\ 0&\frac{Av_{\rho}}{\sqrt{2}}&\frac{Av_{\eta_{2}}}{\sqrt{2}}&\frac{Av_{\eta_{2} }v_{\eta_{2}}}{\sqrt{2}v_{\chi}}\end{array}\right) \tag{105}\]
and, in the complex neutral scalar basis \(\left(\text{Im}\eta_{31}^{0},\text{Im}\eta_{32}^{0},\text{Im}\chi_{1}^{0}\right)\),
\[M_{\phi_{\text{lm}}}^{2}=\left(\begin{array}{ccccc}2\mu_{\eta_{1}}^{2}+2 \left(\lambda_{2}-\lambda_{4}\right)v_{\eta_{2}}^{2}+\lambda_{8}v_{\rho}^{2}+ \left(\lambda_{5}-\lambda_{6}\right)v_{\chi}^{2}&0&0\\ 0&\frac{\sqrt{2}Av_{\rho}v_{\chi}}{v_{\eta_{2}}}-\lambda_{6}v_{\chi}^{2}& \lambda_{6}v_{\eta_{2}}v_{\chi}-\sqrt{2}Av_{\rho}\\ 0&\lambda_{6}v_{\eta_{2}}v_{\chi}-\sqrt{2}Av_{\rho}&\frac{\sqrt{2}Av_{\eta_{2} }v_{\rho}}{v_{\chi}}-\lambda_{6}v_{\eta_{2}}^{2}\end{array}\right) \tag{106}\]
In the CP-even sector, the squared mass matrix in the neutral scalar basis \(\left(\xi_{\eta_{1}},\xi_{\eta_{2}},\xi_{\rho},\xi_{\chi}\right)\) is
\[M_{\xi}^{2}=\left(\begin{array}{ccccc}\mu_{\eta_{1}}^{2}+\left(\lambda_{2}+ \lambda_{4}\right)v_{\eta_{2}}^{2}+\frac{\lambda_{8}}{2}v_{\rho}^{2}+\frac{ \lambda_{5}}{2}v_{\chi}^{2}&0&0&0\\ 0&\frac{Av_{\rho}v_{\chi}}{\sqrt{2}v_{\eta_{2}}}+2\left(\lambda_{2}+\lambda_{4 }\right)v_{\eta_{2}}^{2}&\lambda_{8}v_{\eta_{2}}v_{\rho}-\frac{Av_{\chi}}{ \sqrt{2}}&\lambda_{5}v_{\eta_{2}}v_{\chi}-\frac{Av_{\rho}}{\sqrt{2}}\\ 0&\lambda_{8}v_{\eta_{2}}v_{\rho}-\frac{Av_{\chi}}{\sqrt{2}}&\frac{Av_{\eta_{2} }v_{\chi}}{\sqrt{2}v_{\rho}}+2\lambda_{7}v_{\rho}^{2}&\lambda_{9}v_{\rho}v_{ \chi}-\frac{Av_{\eta_{2}}}{\sqrt{2}}\\ 0&\lambda_{5}v_{\eta_{2}}v_{\chi}-\frac{Av_{\rho}}{\sqrt{2}}&\lambda_{9}v_{\rho }v_{\chi}-\frac{Av_{\eta_{2}}}{\sqrt{2}}&\frac{Av_{\eta_{2}}v_{\rho}}{\sqrt{2} v_{\chi}}+2\lambda_{1}v_{\chi}^{2}\end{array}\right) \tag{107}\]
and, in the complex neutral scalar basis \(\left(\text{Re}\eta_{31}^{0},\text{Re}\eta_{32}^{0},\text{Re}\chi_{1}^{0}\right)\),
\[M_{\phi_{\text{Re}}}^{2}=\left(\begin{array}{ccccc}2\mu_{\eta_{1}}^{2}+2 \left(\lambda_{2}-\lambda_{4}\right)v_{\eta_{2}}^{2}+\lambda_{8}v_{\rho}^{2}+ \left(\lambda_{5}+\lambda_{6}\right)v_{\chi}^{2}&0&0\\ 0&\frac{\sqrt{2}Av_{\rho}v_{\chi}}{v_{\eta_{2}}}+\lambda_{6}v_{\chi}^{2}&\sqrt{ 2}Av_{\rho}+\lambda_{6}v_{\eta_{2}}v_{\chi}\\ 0&\sqrt{2}Av_{\rho}+\lambda_{6}v_{\eta_{2}}v_{\chi}&\frac{\sqrt{2}Av_{\eta_{2} }v_{\rho}}{v_{\chi}}+\lambda_{6}v_{\eta_{2}}^{2}\end{array}\right) \tag{108}\]
2310.09877 | Statistical inference using machine learning and classical techniques based on accumulated local effects (ALE) | Chitu Okoli | 2023-10-15T16:17:21Z | http://arxiv.org/abs/2310.09877v4 |

# Statistical inference using machine learning and classical techniques based on accumulated local effects (ALE)
###### Abstract
Accumulated Local Effects (ALE) is a model-agnostic approach for global explanations of the results of black-box machine learning (ML) algorithms. There are at least three challenges with conducting statistical inference based on ALE: ensuring the reliability of ALE analyses, especially in the context of small datasets; intuitively characterizing a variable's overall effect in ML; and making robust inferences from ML data analysis. In response, we introduce innovative tools and techniques for statistical inference using ALE, establishing bootstrapped confidence intervals tailored to dataset size and introducing ALE effect size measures that intuitively indicate effects on both the outcome variable scale and a normalized scale. Furthermore, we demonstrate how to use these tools to draw reliable statistical inferences, reflecting the flexible patterns ALE adeptly highlights, with implementations available in the 'ale' package in R. This work propels the discourse on ALE and its applicability in ML and statistical analysis forward, offering practical solutions to prevailing challenges in the field.
## 1 Introduction
Accumulated Local Effects (ALE) were initially developed as a model-agnostic approach for global explanations of the results of black-box machine learning (ML) algorithms (Apley and Zhu, 2020). ALE has a key advantage over other approaches like partial dependency plots (PDP) and SHapley Additive exPlanations (SHAP): its values represent a clean functional decomposition of the model (Molnar, Casalicchio and Bischl, 2020). As such, ALE values are not affected by the presence or absence of interactions among variables in a model. Moreover, its computation is relatively rapid (Molnar, 2022).
Despite the potential of ALE, at least three challenges in interpretable machine learning (IML) remain unresolved, despite numerous incremental attempts. Firstly, the reliability of results derived from ALE analyses is brought into question, particularly when considering the prevalent data-only bootstrapping approach (Flora, 2023; Jumelle, Kuhn-Regnier and Rajaratnam, 2020), which, especially in the context of smaller datasets that preclude a training-test split, potentially risks overfitting and undermines the generalizability of the findings.
This important issue is primarily a concern for small datasets, so it is pertinent to ask how small is "small"? From the perspective of this article, the key issue at stake is that applying the training-test split that is common in ML is a crucial technique for increasing the generalizability of data analysis. So, the question becomes focused on, "How small is too small for a training-test split for ML analysis?" We could consider a general principle that ML requires at least 200 rows of data for each predictor variable. So, for example, with five input variables, we would need at least 1,000 rows of data. But this number refers not to the size of the entire dataset but to the minimum size of the training subset. So, with an 80-20 split on the full dataset (that is, 80% training set), we would need at least 1,000 rows for the training set and another 250 rows for the test set, for a minimum of 1,250 rows. (And if we were to carry out hyperparameter tuning with cross-validation on that training set, then we would need even more data.) From these considerations, we suggest that most datasets of less than 2,000 rows are probably "small". Indeed, even many datasets that are more than 2,000 rows are nonetheless "small".
This consideration is pertinent because ALE is a valuable technique for visually characterizing the relationships between predictors and outcomes for any model, not just for large datasets typical in ML, but also for smaller datasets typical in statistical analysis. It is often inappropriate to transfer the ALE analysis techniques that assume large datasets to smaller datasets that need specialized treatment.
A second ongoing challenge concerns how to characterize a variable's overall effect. While effect sizes are extensively treated in statistical analysis based on the general linear model, only recently have ML researchers started trying to develop model-agnostic measures that can characterize the results of any ML analysis. These initial attempts,
while promising, do not always show the effects of individual variables (Molnar, Casalicchio and Bischl 2020) and, when they do, are often not intuitively interpretable (Molnar, Casalicchio and Bischl 2020; Lotsch and Ultsch 2020; Friedman and Popescu 2008).
Third, even with reliable effect size measures, it is not clear how to make reliable inferences from data analysis based on ML. Messner (2023) has made an admirable initial attempt towards hypothesis testing based on ML analysis, but his framework nonetheless masks some of the fine nuances that ALE hopes to uncover with its visual portrayal of flexible relationships.
In response to these challenges, the research objective of this article is to introduce tools and techniques for statistical inference using machine learning and classical statistical techniques based on ALE. We address trusting the reliability of analysis results by creating bootstrapped confidence intervals for ALE using techniques appropriate to the size of the dataset. To characterize the overall effects of a predictor variable on its outcome variable, we create a set of ALE effect size measures that intuitively indicate the effect either on the scale of the outcome variable or on a normalized scale that is comparable across datasets. However, to avoid simplistic summaries of overall effects that might hide important details, we demonstrate how to use these tools to make reliable conclusions of statistical inference that reflect the flexible patterns that ALE can so capably highlight.
We conduct all analyses using the R package ale (Okoli 2023), which was specifically developed to extend ALE with the functionality we describe in this article. Indeed, this article is largely an explanation of the scientific background of the ale package.
The rest of this article is organized as follows. In the "Related work" section, we delve into the existing literature and software implementations of ALE, exploring aspects such as ALE confidence intervals, bootstrapping, effect size measures in machine learning, and inference from analysis results, while also identifying opportunities for improvement in the current methodologies. Subsequently, "Illustrative datasets and models" introduces two distinct datasets and corresponding models--a neural network model for diamond prices and a generalized additive model for mathematics achievement scores--to provide a practical context for the ensuing analyses. The section on "Bootstrapping of accumulated local effects" elucidates the methodologies of data-only bootstrapping, particularly focusing on a large dataset, and introduces model bootstrapping, with a spotlight on a smaller dataset and its application to ALE. Moving forward, "ALE effect size measures" introduces and explores novel effect size measures, both on the scale of the y outcome variable and in a normalized form, while also discussing the implications and applications of random variables. In "Statistical inference with ALE," we navigate through classical statistical inference, explore ALE data structures, and delve into bootstrap-based inference with ALE, culminating in a discussion on confidence regions and random variables. Finally, the "Discussion" section encapsulates the contributions and practical implications of the methodologies and insights presented, providing a comprehensive overview and concluding remarks on the potential trajectories and applications of the enhanced ALE methodologies in machine learning model interpretation and analysis.
## 2 Related work
In the seminal work of Apley and Zhu (2020), the concept of Accumulated Local Effects (ALE) was introduced as a global explanation technique for IML. ALE provides functionality akin to Partial Dependence Plots (PDP), both of which graphically delineate the relationship between a single input variable X and the outcome variable Y, thereby illustrating non-linear, flexibly varying relationships. A noteworthy enhancement of ALE over PDP is its resilience to interactions between variables. A secondary, yet non-negligible advantage of the ALE algorithm is its reduced computational expense, which serves as a substantial practical incentive for machine learning scientists to incorporate ALE visualizations throughout their analytical processes.
There have been a few recent extensions to fine-tune the original algorithm. Gkolemis et al. (2023) present Robust and Heterogeneity-Aware Accumulated Local Effects (RHALE), a technique that endeavours to address a notable limitation inherent in the original ALE algorithm's approach to binning numeric data according to quantiles. A pivotal challenge resides in the absence of indicators for the heterogeneity of data within each quantile bin. Consequently, the ALE plotted line may not accurately represent the data. To mitigate this, RHALE considers the standard deviation of data within each prospective bin when determining the ALE bin boundaries. Gkolemis, Dalamagas and Diou (2023) present Differential Accumulated Local Effects (DALE) to address two prominent limitations of the original ALE formulation. Firstly, they argue that the original ALE does not scale effectively to high-dimensional data. Secondly, they note that with smaller samples, the original ALE calculation may not be representative of out-of-distribution sampling. DALE is an adjusted algorithm for calculating ALE that attempts to enhance both the scalability and representational accuracy of the technique.
Although our present article focuses on the original ALE algorithm, its findings can probably be readily extrapolated
to such extensions. Indeed, the kinds of extensions in which we are interested have not been documented in scholarly articles that we could find but are rather found to various extents in software packages that implement ALE. Thus, our review in this section largely surveys software implementations in addition to published literature.
### Software implementations of ALE
Since the initial ALEPlot reference package in R was released in 2018 (Apley 2018), implementations in various programming languages have translated or extended the initial concept. ALEPython (Jumelle, Kuhn-Regnier and Rajaratnam 2020) and PyALE (Jomar 2023) in Python have been specifically dedicated to implementing ALE, sometimes with extensions. Most, however, are more general IML packages that include ALE among other techniques. These include iml (Molnar and Schratz 2022) in R; scikit-explain (Flora 2023) and Alibi (Seldon Technologies 2023) in Python; DALEX (Biecek, Maksymiuk and Baniecki 2023) in R and Python; and the Interpretation (RapidMiner 2023) extension for RapidMiner, a Java-based no-code machine learning platform. In addition to these, this present article introduces ale (Okoli 2023), an R package dedicated to the implementation and extension of ALE. In particular, ale aims to resolve some of the issues that we describe subsequently in this section. In Table 1, we list some key characteristics of these packages, focusing on features that are pertinent to the subject of this present article.
In the following subsections, we discuss publications and software implementations that develop confidence intervals, bootstrapping, effect size measures, and inference, almost always with ALE.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
Primary focus & Package & Latest release & Language & Confidence intervals & Bootstrap type & ALE statistics \\ \hline
ALE & **ALEPlot** (Apley 2018) & 2018 & R & No & N/A & None \\
ALE & **ALEPython** (Jumelle, Kuhn-Regnier and Rajaratnam 2020) & 2020 & Python & Monte Carlo & data-only & None \\
IML & **iml** (Molnar and Schratz 2022) & 2022 & R & No & N/A & None \\
IML & **DALEX** (Biecek, Maksymiuk and Baniecki 2023) & 2023 & R and Python & No & N/A & None \\
ALE & **PyALE** (Jomar 2023) & 2023 & Python & T-statistic & N/A & None \\
IML & **Interpretation** (RapidMiner 2023) & 2023 & RapidMiner & No & N/A & None \\
IML & **Alibi** (Seldon Technologies 2023) & 2023 & Python & No & N/A & None \\
IML & **scikit-explain** (Flora 2023) & 2023 & Python & Bootstrap & data-only & Friedman H-statistic for interactions; interaction strength (IAS); main effect complexity (MEC) \\
ALE & **ale** (Okoli 2023) & 2023 & R & Bootstrap & data-only and model & ALE deviation (ALED); ALE range (ALER); normalized ALED (NALED); normalized ALER (NALER) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Software packages that implement ALE
### ALE confidence intervals and bootstrapping
The initial ALEPlot implementation did not feature confidence intervals, but several packages have recognized their importance and have thus extended ALE with this feature, implemented in different ways. The simplest approach is that employed by PyALE, which constructs intervals based on the t statistic, using the standard deviation of the ALE values to construct standard errors. However, despite the other merits of that implementation, we cannot find any basis to assume that ALE values are distributed according to a t distribution, or by any other parametric distribution for that matter. Only bootstrap-based confidence intervals are appropriate for data like ALE whose distribution cannot be generalized to any predetermined pattern (Tong, Saminathan and Chang 2017).
Although most readers are likely familiar with the bootstrap algorithm, it is worth repeating it here to highlight the complexities that ALE presents. The classic bootstrap approach involves creating multiple samples of the original dataset. Each bootstrap sample draws rows at random from the original dataset, as many new rows as there are rows in the original dataset. Crucially, sampling is done with replacement so that in each bootstrap sample, some of the original rows are repeated a random number of times and some might not occur at all. The analyst decides how many bootstrap samples they want. With the bootstrap samples, the desired aggregate statistics are computed, such as the mean or standard deviation across all bootstrap samples.
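As a minimal base-R sketch of this resampling scheme (the data and the statistic here are arbitrary stand-ins):

```r
# Classic bootstrap of a simple statistic (here, the mean of a stand-in vector)
set.seed(1)
y <- rnorm(200, mean = 10)
boot_means <- replicate(1000, mean(sample(y, replace = TRUE)))
mean(boot_means)                       # bootstrap estimate
quantile(boot_means, c(0.025, 0.975))  # 95% percentile confidence interval
```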
For ALE bootstrapping, the aggregation goal across the bootstrap samples is to calculate the ALE values. The ALE calculation depends first on establishing ALE bins or intervals for an X predictor variable and then an ALE Y estimate is calculated for each ALE X interval. What is particularly tricky with bootstrapping ALE calculations is that the intervals are calculated directly from the data, so each time the data is scrambled with a bootstrap sample, the ALE X intervals necessarily change. Even with large datasets, because the intervals for numeric X predictors are calculated from the quantiles of the particular sets of values, each bootstrap sample will produce distinct intervals, which cannot be combined in any way across bootstrap samples. The problem persists even with categorical X predictors. Whereas with sufficiently large datasets each bootstrap sample will likely include at least a few rows that represent each category of the X predictor, with small datasets, it is often the case that at least some bootstrap samples might not include every category from the original dataset. In such cases, ALE values cannot be combined across bootstrap samples that do not share identical intervals.
In their implementations of bootstrapped confidence intervals, the scikit-explain (Flora 2023) and ale (Okoli 2023) packages resolve this challenge by using the full dataset to establish the ALE X intervals and then they apply these fixed intervals in calculating the ALE Y values for each bootstrap sample. Thus, bootstrap averages and quantiles can be calculated across each common interval across all bootstrap samples. In general, we call this standard approach **"data-only bootstrapping"** because the dataset is bootstrapped, but not the model itself. This approach might also be called model-invariant bootstrapping or case resampling.
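The following minimal sketch illustrates this fixed-interval, data-only scheme for a single numeric predictor. It is not the implementation used by the ale or scikit-explain packages; the object names (`fit`, `dat`, `"x1"`) are placeholders for any fitted model with a `predict()` method and its evaluation data, and the centring step is simplified.

```r
# ALE y values for one numeric predictor, using interval boundaries z that
# were computed once from the FULL dataset
ale_fixed <- function(fit, dat, x_col, z) {
  k  <- findInterval(dat[[x_col]], z, rightmost.closed = TRUE, all.inside = TRUE)
  lo <- dat; hi <- dat
  lo[[x_col]] <- z[k]        # lower boundary of each row's interval
  hi[[x_col]] <- z[k + 1]    # upper boundary of each row's interval
  d   <- predict(fit, hi) - predict(fit, lo)               # local effects
  eff <- tapply(d, factor(k, levels = seq_len(length(z) - 1)), mean)
  eff[is.na(eff)] <- 0                                      # empty intervals contribute nothing
  ale_y <- c(0, cumsum(eff))                                # accumulate across intervals
  ale_y - median(ale_y)                                     # centre (simplified here)
}

# Fixed boundaries from the full data; only the data are then bootstrapped
z <- unique(quantile(dat$x1, probs = seq(0, 1, length.out = 11)))
boot_ale <- replicate(100, {
  idx <- sample(nrow(dat), replace = TRUE)
  ale_fixed(fit, dat[idx, ], "x1", z)
})
apply(boot_ale, 1, mean)                               # bootstrapped ALE estimate
apply(boot_ale, 1, quantile, probs = c(0.025, 0.975))  # 95% confidence limits
```

Because the boundaries `z` are shared by every bootstrap sample, the per-boundary averages and quantiles in the last two lines are well defined.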
The ALEPython package (Jumelle, Kuhn-Regnier and Rajaratnam 2020) adopts a similar approach in their adapted bootstrap confidence intervals that they call "Monte Carlo". This is essentially the same as the standard bootstrap (sampling with replacement) except that instead of constructing samples of the same size as the original dataset, they only sample a fixed fraction of the dataset (10% by default, customizable by the user). Other than that detail, these three packages essentially employ the same approach.
### Effect size measures for machine learning
Although effect size measures are widely used in statistical analyses, usually with smaller datasets, they are not as widely used in machine learning. However, there are a few notable examples. Friedman and Popescu (2008) introduced a metric known as the H-statistic, designed to quantify the intensity of interactions among variables in datasets. It comes in two forms: a pairwise version that reveals the strength of two-way interactions between feature pairs, and another that assesses the interaction strength of a single feature with any other feature in the dataset or model. Lotsch and Ultsch (2020) have formulated an effect size metric, drawing parallels to Cohen's D used in statistical evaluations of group differences. Their innovation, tailored for the machine learning context, not only considers the central tendencies of groups but also the variations within and across these groups, providing a more nuanced measure.
Molnar, Casalicchio and Bischl (2020) delve into interpreting models through a process they refer to as functional decomposition. This approach involves dissecting a model's predictive elements into three distinct components: a constant element, the main effects of individual variables, and interactions between variables. For evaluating main effects, their methodology hinges on ALE, leading to the formulation of a metric termed Main Effect Complexity (MEC). In essence, MEC gauges the degree to which a variable's relationship with the outcome diverges from a linear trend. Regarding the interaction component, they employ a metric known as Interaction Strength (IAS), which serves to signify the robustness of a feature's interaction with others.
Regarding MEC, two distinct calculations exist. Initially, MEC is determined for each feature individually, assessing the extent to which the relationship between a feature and the outcome deviates from a linear representation. Subsequently, an overall MEC is computed as the mean of individual MECs for all features within the model, with this aggregate value typically being the primary focus. Conversely, IAS is consistently calculated for the entire set of features in the model. It essentially measures what remains unaccounted for when the average main effects of all features are considered. Therefore, IAS represents a collective metric for the entire model, rather than a measure of interaction for individual features. The scikit-explain package (Flora, 2023) provides Friedman's H-statistic, MEC, and IAS effect size measures.
Messner (2023) adapts Cohen's \(f^{2}\) from classical statistical analysis to quantify effect strength in ML contexts. He diverges from the conventional version based on \(R^{2}\) used in ordinary least squares regression to an adapted version he calls \(f^{2}_{v}\), based on mean squared error (MSE). \(f^{2}_{v}\), a unit-free measure that ranges from 0 with no upper bound, can be compared across datasets. Additionally, Messner evaluates the monotonicity of an effect--whether an effect consistently moves in one direction--and its directionality, utilizing p-values derived from the Mann-Kendall test and Theil-Sen slope, respectively. These tests determine if the effect is unidirectional and if it is increasing or decreasing, with the magnitude of change assessed via \(f^{2}_{v}\).
### Inference from analysis results
One of the biggest differences between supervised ML and statistical modelling is the general goal of the analysis. Supervised ML is predominantly concerned with obtaining the most accurate prediction possible of the outcome variable. While the predictor variables are necessary to obtain an accurate prediction, the specific relationship between predictors and outcomes is often considered incidental. In contrast, statistical modelling is primarily concerned with reliably describing the relationship between certain predictor variables and the outcome. While an accurate estimation model of the outcome is important, it is only secondarily so as an indication of the reliability of the model. Statistical analysis attempts to infer from the relationships in the data analyzed patterns that can be generalized to the larger population from which the data sample is drawn.
IML bridges the gap between the two modes of analysis by using supervised ML techniques to describe the relationships between predictors and the outcome even with models that are not intrinsically interpretable. One major goal of such interpretation is to apply the more flexible panoply of ML methods for the same goal as statistical inference: reliably inferring relationships between predictors and the outcome to a population larger than the sample. However, whereas statistical inference is largely based on the interpretation of coefficients of models that are variations of the general linear model (GLM), most ML techniques do not have such coefficients. Thus, some scholars have developed approaches to statistical inference specifically tailored to ML.
Sechidis (2015) delves into the realm of hypothesis testing amid the complexities of semi-supervised data, characterized by its partially labelled and incomplete nature. His thesis meticulously examines the unique challenges posed by this data type, particularly when drawing conclusions from hypothesis tests. His exploration is not confined to statistical significance alone; it also encompasses a thorough consideration of effect sizes, thereby providing a more comprehensive understanding of the analytical implications of the data.
Messner (2023) introduces some new measures with the ultimate goal of enabling hypothesis testing from a model-agnostic approach using machine learning. The key elements of his framework are:
* Determine the practical importance of effects based on the magnitude of \(f^{2}_{v}\): \(<0.02\) is trivial; 0.02 to 0.15 is small; 0.15 to 0.35 is medium; and \(>=0.35\) is large. Crucially, rather than statistical significance, his effect-size-based hypothesis testing emphasizes the practical magnitude of effects.
* Regardless of magnitude, determine if the effects are monotonic, applying the Mann-Kendall test for statistical significance.
* For monotonic effects, determine if the effect is increasing or decreasing, applying the Theil-Sen test for statistical significance.
### Opportunities for improvement
Despite the various implementations and valuable extensions of the original ALE concept, we see some ongoing issues that leave room for improvement to better adapt it as a tool for statistical inference using ML.
First, there is a crucial limitation of the standard data-only approach to bootstrapping ALE values that we describe above (Flora, 2023; Jumelle, Kuhn-Regnier and Rajaratnam, 2020). Ultimately, as an ML technique, ALE values are usually calculated on a model to apply the explanations obtained to understand future data. As such, when a model is trained on some training data, the ALE should be calculated on independent test data to give assurance that the ALE does not overfit the data. It is appropriate to bootstrap such independent test data to
obtain ALE confidence intervals. However, some datasets are too small for a training-test split to be feasible, so the model is created on the entire dataset. In this case, any generalizations that might be drawn from such analyses strongly risk overfitting the data and there is no test set to qualify such generalizations.
Second, while it is encouraging to see the development of various model-agnostic effect size measures, those that exist tend to be limited when the goal is to practically quantify the effects of variables. The overall model MEC and the IAS (Molnar, Casalicchio and Bischl 2020) do not show the effects for individual variables but only indicate generally the average effects across all variables. While the variable-specific version of the MEC attempts to address this concern, it mainly indicates the extent to which a variable is non-linear but it does not clearly quantify other important aspects of its possible effect. Similarly, the impact score (Lotsch and Ultsch 2020) and \(f_{v}^{2}\) (Messner 2023) indicate the overall effect of a predictor variable on the outcome but do not shed much light on the nature of the effect. The tests for monotonicity and slope (Messner 2023) are an important step in the direction of clarifying the nature of the effect, but by generalizing the overall trend in one tendency (e.g., monotonically increasing or non-monotonic), the details of complex relationships might not be sufficiently clear.
A third concern is the computational practicality of some of the measures. For example, Friedman's H is a useful measure for interaction strength, but it is computationally rather expensive to calculate (Molnar 2022).
In the following sections of the article, we address these outstanding concerns.
## 3 Illustrative datasets and models
Before we begin ALE analysis, in this section we describe the datasets and the models that we will use for our illustrations. Because the issues that we address are handled differently depending on the relative size of the dataset, we work with two distinct datasets. We develop a neural network model to analyze a moderately large dataset of diamond prices and then a generalized additive model (GAM) to analyze a small dataset of mathematics achievement scores.
### Large dataset: neural network model for diamond prices
For the analysis of a relatively large dataset, we use the diamonds dataset, built-in with the ggplot2 graphics system (Wickham et al. 2023). We cleaned the original version by removing duplicates (Mayer 2021) and invalid entries where the length (x), width (y), or depth (z) is 0. Table 2 presents the description of the modified dataset. The outcome variable that is the focus of our analysis is the price of a diamond, with minimum $326, median $3,365, and maximum $18,823 in USD.
Of particular note is the variable rand_norm. We have added this completely random variable (with a normal distribution) to demonstrate what randomness looks like in our analysis. (However, we deliberately selected the specific random seed of 6 because it highlights some particularly interesting points.)
For machine learning analysis that intends to extrapolate its results to future data, we must first split the dataset into training and test samples. The model is developed on the training set and then evaluated on the test set. So, we split the dataset with an 80-20 split for a training set of 31,758 rows and a test set of 7,981 rows.
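A minimal sketch of such a split in base R follows; the object name `diamonds_clean` and the seed are illustrative, not taken from the original analysis code.

```r
# Illustrative 80-20 train-test split of the cleaned diamonds data
set.seed(42)
n <- nrow(diamonds_clean)
train_idx  <- sample(n, size = round(0.8 * n))
train_data <- diamonds_clean[train_idx, ]
test_data  <- diamonds_clean[-train_idx, ]
```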
\begin{table}
\begin{tabular}{l l} \hline \hline Variable & Description \\ \hline price & price in US dollars (\$326–\$18,823) \\ carat & weight of the diamond (0.2–5.01) \\ cut & quality of the cut (Fair, Good, Very Good, Premium, Ideal) \\ color & diamond color, from D (best) to J (worst) \\ clarity & a measurement of how clear the diamond is (I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best)) \\ x\_length & length in mm (3.73–10.74) \\ y\_width & width in mm (3.68–58.9) \\ z\_depth & depth in mm (1.07–31.8) \\ depth\_pct & total depth percentage = z / mean(x, y) = 2 * z / (x + y) (43–79) \\ table & width of top of diamond relative to widest point (43–95) \\ rand\_norm & a completely random variable (-4.9 to 4.4) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Large dataset: diamond prices
ALE is a model-agnostic IML approach, that is, it works with any kind of ML model. For this demonstration, we train a neural network to predict diamond prices, selected to demonstrate the benefits of ALE to a notoriously black-box model.
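The article does not specify the network architecture or framework actually used, so the following single-hidden-layer nnet model is only a stand-in to make the workflow concrete; column names follow Table 2.

```r
library(nnet)
# Stand-in regression network; in practice the inputs and outcome would
# typically be scaled before training.
set.seed(42)
nn_fit <- nnet(price ~ carat + cut + color + clarity + x_length + y_width +
                 z_depth + depth_pct + table + rand_norm,
               data = train_data, size = 10, linout = TRUE,
               decay = 0.01, maxit = 500)
price_pred <- predict(nn_fit, test_data)
```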
### Small dataset: generalized additive model for mathematics achievement scores
We demonstrate ALE statistics on small datasets with a dataset composed and transformed from the nlme package (Pinheiro et al. 2023). The dataset has 160 rows, each of which refers to a school whose students have taken a mathematics achievement test. Table 3 describes the data based on documentation from the nlme package but, unfortunately, many details are not quite clear.
Of particular note again is the variable rand_norm. As with the large dataset, we have added this completely random variable (with a normal distribution) to demonstrate what randomness looks like in our analysis. (However, we selected the specific random seed of 6 because it highlights some particularly interesting points.)
The outcome variable that is the focus of our analysis is math_avg, the average mathematics achievement scores of all students in each school, with minimum 4.24, median 12.9, and maximum 19.7.
Because the sample here is relatively small, we will use generalized additive models (GAM) for the modelling (Wood 2023). GAM is an extension of statistical regression analysis that lets the model fit flexible patterns in the data instead of being restricted to the best-fitting straight line. It is an ideal approach for samples that are too small for machine learning because it provides flexible curves, unlike ordinary least squares regression, yet will not overfit excessively as most machine learning techniques would when working with such small samples (Ross 2019).
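As an illustration of this modelling choice, a GAM of the kind described here could be fit with the mgcv package; the data frame name `math` and the exact terms are assumptions based on Table 3, not the original analysis code.

```r
library(mgcv)
# Smooth terms s() allow flexible, non-linear curves for numeric predictors,
# while logical predictors enter as ordinary linear terms.
gam_fit <- gam(math_avg ~ s(size) + public + s(academic_ratio) +
                 s(female_ratio) + s(mean_ses) + s(minority_ratio) +
                 high_minority + s(discrim) + s(rand_norm),
               data = math)
summary(gam_fit)
```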
## 4 Bootstrapping of accumulated local effects
We now come to the application of ALE to address the issues we highlighted above in the introduction and related work sections. In this section, we explore two distinct bootstrapping approaches, each applied to datasets of varying sizes and characteristics. Initially, we begin with the data-only bootstrap approach to improve the reliability of ALE plots by providing confidence intervals that offer more reliable estimates than those derived from a single ALE calculation. Next, we assess the imperative of model bootstrapping to mitigate the risk of overfitting in the context of smaller datasets.
### Data-only bootstrapping of a large dataset (diamonds)
Figure 1 displays the simple ALE plots for the neural network model of diamond prices. (We refer to these plots as "simple" because they simply calculate ALE without confidence intervals.) In this case and all our other analyses on the diamonds dataset, the ALE is calculated on the test dataset, not the training set on which the neural network was trained.
A significant feature of the ale package that we use in this article is that it centres ALE values on the median, unlike the original ALEPlot implementation (Apley 2018), which centres them on zero. Although the ale package
\begin{table}
\begin{tabular}{l l l} \hline \hline variable & format & description \\ \hline math\_avg & double & average mathematics achievement scores of all students in the school \\ size & double & the number of students in the school \\ public & logical & TRUE if the school is in the public sector; \\ & & FALSE if in the Catholic sector \\ academic\_ratio & double & the percentage of students on the academic track \\ female\_ratio & double & percentage of students in the school that are female \\ mean\_ses & double & mean socioeconomic status for the students in the school \\ & & (measurement is not quite clear) \\ minority\_ratio & double & percentage of students that are members of a minority racial group \\ high\_minority & logical & TRUE if the school has a high ratio of students of minority racial groups \\ & & (unclear, but perhaps relative to the location of the school) \\ discrim & double & the “discrimination climate” \\ & & (perhaps an indication of extent of racial discrimination in the school?) \\ rand\_norm & double & a completely random variable \\ \hline \hline \end{tabular}
\end{table}
Table 3: Small dataset: average mathematics achievement scores for schools
Figure 1: Simple ALE plots for neural network model of diamond prices
lets the user centre on zero or even on the mean, the median is particularly crucial for the statistical inference framework that the ale package offers. An essential element of its visualization is the middle grey band that indicates the median \(\pm\) 2.5%, that is, the middle 5% of all price values in the dataset. We call this the "median band" (a short sketch of its calculation follows the list below). The idea is that if a predictor can do no better than influencing price to fall within this median band, then it has only a minimal effect. For an effect to be considered practically meaningful, its ALE values should lie above or below the median band. Looking at these simple ALE plots, we could draw the following initial conclusions:
* The prices increase sharply with the carat of the diamonds until they plateau at the highest end of the domain just above 2 carat. The rug plot indicates that there are very few diamonds with a higher carat than that.
* cut does not seem to have much of an effect on prices.
* The H, I, and J colors have lower prices; the other values do not seem to have much of an effect.
* The I1, SI2, and SI1 clarity categories have lower prices while VVS2, VVS1, and IF have higher prices. VS2 and VS1 do not seem to have much of an effect.
* depth_pct has slightly lower prices at its low end and considerably higher prices at its high end. However, the rug plot indicates that there are very few diamonds at either end, so these results are not well supported by data.
* Conversely, table has considerably higher prices at its low end and somewhat lower prices at its high end, but these results are not well supported by data.
* x_length barely crosses the median band in either direction, so it does not seem to have much of a meaningful effect.
* As y_width increases, price sharply increases with it. Although the plot shows a long plateau, the rug plot indicates that the plateau is predominantly in regions of high y_width where there are few diamonds.
* As z_depth increases, price steadily decreases with it throughout the range of z_depth.
* Somewhat surprisingly, although the effect is not so strong and is more marked at the extremes of its domain, the random variable rand_norm seems to have a distinctly negative effect on prices. We will pay attention to this variable as we continue our analyses.
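As referenced above, the median band itself is straightforward to compute; a minimal sketch (object names are placeholders for the test data used throughout this section):

```r
# The "median band": the middle 5% of observed prices
med_band <- quantile(test_data$price, probs = c(0.475, 0.525))
med_band
```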
Despite these interesting initial results, it is crucial to bootstrap the ALE results to ensure that they are reliable, that is, generalizable to data beyond the sample on which the model was built. So, now we recreate bootstrapped versions of the ALE data and plots. Specifically, because the ALE data was computed on a distinct test dataset, we only bootstrap the test dataset and we recalculate ALE using the same model on each bootstrap sample; this is data-only bootstrapping.
ALE is a relatively stable IML algorithm (compared to others like PDP), so 100 bootstrap samples should be sufficient for relatively stable results, especially for model development (Apley and Zhu 2020).
The bootstrapped results in Figure 2 are very similar to single (non-bootstrapped) ALE results from Figure 1, which attests to the stability of ALE results when working with moderately large datasets. However, bootstrapping adds confidence intervals to the plot (we use a 95% interval in our illustrations), which adds some uncertainty when certain values are close to the median band as to whether they overlap the band or not. We present tools to clarify this point below.
### Model bootstrapping of a small dataset (math achievement)
Turning to the case of small datasets, we again begin with simple ALE plots to see what results look like without bootstrapping. The ALE plots in Figure 3 suggest that:
* size, academic_ratio, and mean_ses have positive relationships with math achievement scores. The relationship with size is linear, but those with the other variables are non-linear.
* female_ratio and minority_ratio seem to have negative relationships with math achievement scores. The negative relationship of the female_ratio does not seem to be that strong, though it seems to be driven by the fact that all-male schools have above-average scores while all-female schools have below-average scores.
* public schools seem to have lower scores on average while those with high_minority ratios have higher scores.
* The discrimination climate discrim, though somewhat negative, does not seem to have much of an effect.
* The random variable rand_norm, though somewhat positive, does not seem to have much of an effect.
However, before reading too much into these results, we must remember that results that are not bootstrapped are simply not reliable.
Figure 2: Bootstrapped ALE plots for neural network model of diamond prices
Figure 3: Simple ALE plots for GAM of mathematics achievement scores
Figure 4: Data-only (inappropriate) bootstrapped ALE plots for GAM of mathematics achievement scores
#### 4.2.1 Inappropriate data-only bootstrapping of a small dataset
To illustrate the implications of the different bootstrap approaches, we will first carry out a data-only bootstrap, as we did above, with 100 bootstrap iterations.
With data-only bootstrapping, the results in Figure 4 are very similar to those without bootstrapping in Figure 3, with only mean_ses displaying wide confidence bands that suggest that some of the ranges of values might not be relevant. However, data-only bootstrapping is not the appropriate approach for small samples. The issue is not directly the size of the dataset but the fact that the model was trained on the entire dataset rather than on a sample distinct from that on which the ALE data and plots are created. Thus, for the results to represent a broader sample than that of the small dataset, we must not only bootstrap the entire dataset but we must also retrain the entire model on each of the bootstrapped datasets. Although the problem might not be so readily evident by examining the inappropriately bootstrapped plot above, we revisit this issue below when we discuss bootstrap-based inference with ALE and we see how inappropriate this approach is in handling the random variable rand_norm.
#### 4.2.2 Appropriate full-model bootstrapping of a small dataset
Data-only bootstrapping is not appropriate for small datasets where a model is trained on the entirety of available data because there is a high risk of overfitting. Indeed, this is one of the classic cases that the bootstrap was originally designed to address. In such cases, the entire dataset should be bootstrapped and the model should be retrained on each bootstrap sample. Thus, there will be as many models as there are bootstrap samples. Any necessary calculations (typically overall model statistics and variable coefficients) are calculated for each model and the averages, quantiles, etc. of these calculations across model-bootstrap samples are calculated as the bootstrapped estimates.
We call this approach **model bootstrapping** because not only is the data bootstrapped, but the model itself is also bootstrapped. This might also be called model-dependent bootstrap or resampling with model re-fitting. Whereas data-only bootstrapping has \(n_{it}\) (number of iterations) bootstrap samples of the data but only one model, model bootstrapping has \(n_{it}\) distinct models, one for each of the \(n_{it}\) bootstrap samples. Model bootstrapping is much slower than data-only bootstrapping because the model has to be retrained \(n_{it}\) times. While model bootstrapping does not mitigate overfitting, it provides confidence intervals that effectively give more reliable estimates than naively accepting only the single estimates from a single model on the entire dataset. It should be considered mandatory for datasets that are too small to be split further into distinct training and test sets.
Model bootstrapping for ALE is similar to data-only bootstrapping for ALE in that fixed ALE intervals must be calculated from the full dataset and then applied to each bootstrap sample, even though in this case each bootstrap sample has a different model. The essential difference is that a different model is used to calculate the ALE values each time--the model specific to each bootstrap sample.
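A minimal sketch of this procedure for one predictor follows. It refits the GAM on every bootstrap sample and computes ALE each time with the same fixed intervals taken from the full dataset; `ale_fixed()` is the illustrative helper sketched in the Related work section, and the data frame name `math` and the formula are assumptions.

```r
library(mgcv)
# Fixed interval boundaries from the FULL dataset
z <- unique(quantile(math$academic_ratio, probs = seq(0, 1, length.out = 11)))
boot_ale <- replicate(100, {
  idx   <- sample(nrow(math), replace = TRUE)
  fit_b <- gam(math_avg ~ s(size) + public + s(academic_ratio) +
                 s(female_ratio) + s(mean_ses) + s(minority_ratio) +
                 high_minority + s(discrim) + s(rand_norm),
               data = math[idx, ])                 # model refit on each sample
  ale_fixed(fit_b, math[idx, ], "academic_ratio", z)
})
apply(boot_ale, 1, mean)                               # bootstrapped ALE estimate
apply(boot_ale, 1, quantile, probs = c(0.025, 0.975))  # 95% confidence limits
```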
Thus, we apply the appropriate model bootstrapping approach to the math dataset. Model bootstrapping is particularly slow, even on small datasets, since the entire process is repeated that many times. However, 100 iterations should be sufficiently stable for model building.
With model bootstrapping, although the general patterns of relationships in Figure 5 are similar to those of the data-only bootstrapping in Figure 4, the confidence bands are generally much wider, which indicates that the data-only bootstrap greatly exaggerated the reliability of the ALE results for this small sample. We can see that most variables seem to have some sort of mean effect across various values. However, for statistical inference, our focus must be on the bootstrap intervals. For an effect to be considered statistically significant, there should be no overlap between the confidence regions of a predictor variable and the median band.
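This decision rule can be expressed as a small helper; a sketch, assuming ALE results with conf.low and conf.high columns as in the tables shown later (`math` and `ale_tbl` are placeholders):

```r
# Does the bootstrap confidence region of an ALE value stay clear of the
# median band?
outside_median_band <- function(conf_low, conf_high, band) {
  conf_high < band[1] | conf_low > band[2]
}
band <- quantile(math$math_avg, probs = c(0.475, 0.525))
# e.g.: outside_median_band(ale_tbl$conf.low, ale_tbl$conf.high, band)
```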
For categorical variables (public and high_minority above), the confidence interval bands for all categories overlap the median band. The confidence interval bands indicate two useful pieces of information to us. When we compare them to the median band, their overlap or lack thereof tells us about the practical significance of the category. When we compare the confidence bands of one category with those of others, it allows us to assess if the category has a statistically significant effect that is different from that of the other categories; this is equivalent to the regular interpretation of coefficients for GAM and other GLM models. In both cases, the confidence interval bands of the TRUE and FALSE categories overlap each other, indicating that there is no statistically significant difference between categories. Each confidence interval band overlaps the median band, indicating that none of the effects is practically significant, either.
For numeric variables, the confidence regions overlap the median band for most of the domains of the predictor variables except for some regions that we will examine. The extreme points of each variable (except for discrim and female_ratio) are usually either slightly below or slightly above the median band, indicating that extreme
Figure 5: Model (appropriate) bootstrapped ALE plots for GAM of mathematics achievement scores
values have the most extreme effects: math achievement increases with increasing school size, academic track ratio, and mean socioeconomic status, whereas it decreases with increasing minority ratio. The ratio of females and the discrimination climate both overlap the median band for the entirety of their domains, so any apparent trends are not supported by the data.
Of particular interest is the random variable rand_norm, whose average ALE appears to show some sort of pattern. However, we can see that the confidence intervals overlap the median band for its entire domain. We will return to the implications of this observation below.
## 5 ALE effect size measures
Now that we can calculate reliable ALE effects by bootstrapping, we proceed to consider the strengths of the variable effects. In all cases, we focus on the average of the ALE values calculated from the bootstrap samples. The "average" here is usually the mean, though the median can also be used.
In general, effect size measures are most frequently used in statistical analysis, which typically analyzes relatively small datasets. They are also used to a lesser extent in machine learning with large datasets, typically in the context of variable importance measures for IML (Molnar 2022). Thus, the measures that we describe here are relevant to both small and large datasets. Given their relatively greater relevance for statistical analysis of smaller datasets, we will demonstrate them in this section almost exclusively using the mathematics achievement dataset. However, we note that everything we describe here is fully relevant to large datasets; indeed, we end this section with a pertinent illustration.
### ALE effect size plot
Although ALE plots allow rapid and intuitive conclusions for statistical inference, it is often helpful to have summary numbers that quantify the average strengths of the effects of a variable. Thus, we have developed a collection of effect size measures based on ALE tailored for intuitive interpretation. To understand the intuition underlying the various ALE effect size measures, even before we explain the measures in detail, it is useful to first examine the **ALE effects plot** in Figure 6 that graphically summarizes the effect sizes of all the variables in the ALE analysis.
This plot requires some explanation:
* The y (vertical) axis displays the x variables, rather than the x-axis. This is consistent with most effect size plots because they list the full names of variables. It is more readable to list them as labels on the y-axis than the other way around.
* The x (horizontal) axis thus displays the y (outcome) variable. But there are two representations of this same axis, one at the bottom and one at the top.
* On the bottom is a more typical axis of the outcome variable, in our case, math_avg. It is scaled as expected. In our case, the axis breaks default to five units each from 5 to 20, evenly spaced. The median of 13 is also specifically marked.
* On the top, the outcome variable is expressed as percentiles ranging from 0% (the minimum outcome value in the dataset) to 100% (the maximum). It is divided into ten deciles of 10% each. Because percentiles are usually not evenly distributed in a dataset, the decile breaks are not evenly spaced.
* Thus, this plot has two x axes, the lower one in units of the outcome variable and the upper one in percentiles of the outcome variable. To reduce the confusion, the major vertical grid lines that are slightly darker align with the units of the outcome (lower axis) and the minor vertical grid lines that are slightly lighter align with the percentiles (upper axis).
* The vertical grey band in the middle is the median band, representing the median \(\pm\) 2.5% of values.
* The variables on the horizontal axis are sorted by decreasing NALED value (explained below). NALED is the most universal ALE measure of effect size.
Although it is somewhat confusing to have two axes, the percentiles are a direct transformation of the raw outcome values. The first two base ALE effect size measures below are in units of the outcome variable while their normalized versions are in percentiles of the outcome. Thus, the same plot can display the two kinds of measures simultaneously. Referring to this plot can help to understand each of the measures, which we proceed to explain in detail.
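The percentile scale is simply the empirical cumulative distribution of the outcome; a minimal sketch, assuming the data frame is named `math`:

```r
# The upper axis is just the empirical percentile of each outcome value
pctile <- ecdf(math$math_avg)
pctile(median(math$math_avg)) * 100              # the median sits near the 50th percentile
quantile(math$math_avg, probs = seq(0, 1, 0.1))  # the decile breaks
```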
Figure 6: ALE effects plot for mathematics achievement scores
### ALE effect size measures on the scale of the y outcome variable
In this subsection, we describe the two base effect size measures that are scaled on the same unit of measure as the y outcome variable.
However, before we explain these measures in detail, we must reiterate the timeless reminder that correlation is not causation. So, none of the scores necessarily means that an x variable _causes_ a certain effect on the y outcome; we can only say that the ALE effect size measures indicate associated or related variations between the two variables.
#### 5.2.1 ALE range (ALER)
The easiest ALE statistic to understand is the ALE range (ALER), so we begin there. It is simply the range from the minimum to the maximum of any ale_y value for that variable. Mathematically, we see this in Equation 1.
\[\text{ALER}(\text{ale\_y})=\{\text{min}(\text{ale\_y}),\text{max}(\text{ale\_y})\} \tag{1}\]
where ale_y is the vector of ALE y values for a variable.
We note that all the ALE effect size measures are centred on zero so that they are consistent regardless of whether the ALE plots are centred on zero (as in the original ALEPlot implementation (Apley 2018)) or on the median (as in the default with the ale package that we use in this article). Specifically,
* aler_min: minimum of any ale_y value for the variable.
* aler_max: maximum of any ale_y value for the variable.
ALER shows the extreme values of a variable's effect on the outcome. In the effects plot above, it is indicated by the extreme ends of the horizontal lines for each variable. We can access ALE effect size measures through the ale$stats element of the bootstrap result object, with multiple views. To focus on all the measures for a specific variable, we can access the ale$stats$by_term element. In Table 4, we see the effect size measures for the categorical public.
We see there that public has an ALER of -0.34, 0.42. When we consider that the median math score in the dataset is 12.9, this ALER indicates that the minimum of any ALE y value for public (when public == TRUE) is -0.34 below the median. This is shown at the 12.6 mark in the plot above. The maximum (public == FALSE) is 0.42 above the median, shown at the 13.3 point above.
The unit for ALER is the same as the outcome variable; in our case, that is math_avg ranging from 2 to 20. No matter what the average ALE values might be, the ALER quickly shows the minimum and maximum effects of any value of the x variable on the y variable.
In Table 5, we see the ALE effect size measures for the numeric academic_ratio.
The ALER for academic_ratio is considerably broader with -3.66 below and 1.77 above the median.
\begin{table}
\begin{tabular}{l|r|r|r|r|r} \hline statistic & estimate & conf.low & median & mean & conf.high \\ \hline
**aled** & 0.377 & 0.019 & 0.339 & 0.377 & 0.984 \\ \hline
**aler\_min** & -0.344 & -0.846 & -0.320 & -0.344 & -0.020 \\ \hline
**aler\_max** & 0.421 & 0.017 & 0.369 & 0.421 & 1.198 \\ \hline
**naled** & 5.106 & 0.427 & 4.880 & 5.106 & 12.081 \\ \hline
**naler\_min** & 45.672 & 38.796 & 46.586 & 45.672 & 49.703 \\ \hline
**naler\_max** & 56.118 & 50.297 & 55.625 & 56.118 & 67.828 \\ \hline \end{tabular}
\end{table}
Table 4: Effect size measures for the categorical public from the math achievements dataset
\begin{table}
\begin{tabular}{l|r|r|r|r|r} \hline statistic & estimate & conf.low & median & mean & conf.high \\ \hline
**aled** & 0.630 & 0.324 & 0.615 & 0.630 & 1.023 \\ \hline
**aler\_min** & -3.660 & -7.770 & -3.577 & -3.660 & -0.509 \\ \hline
**aler\_max** & 1.765 & 0.688 & 1.734 & 1.765 & 2.905 \\ \hline
**naled** & 8.139 & 3.802 & 7.913 & 8.139 & 13.151 \\ \hline
**naler\_min** & 18.378 & 1.536 & 14.907 & 18.378 & 45.093 \\ \hline
**naler\_max** & 74.281 & 58.688 & 74.541 & 74.281 & 88.347 \\ \hline \end{tabular}
\end{table}
Table 5: Effect size measures for the numeric academic_ratio from the math achievements dataset
#### 5.2.2 ALE deviation (ALED)
While the ALE range shows the most extreme effects a variable might have on the outcome, the ALE deviation indicates its average effect over its full domain of values. Because it is computed on the zero-centred ALE values, it is conceptually similar to the weighted mean absolute error (MAE) of the ALE y values. Mathematically, we see this in Equation 2.
\[\text{ALED}(\text{ale\_y},\text{ale\_n})=\frac{\sum_{i=1}^{k}|\text{ale\_y}_{i}\times\text{ale\_n}_{i}|}{\sum_{i=1}^{k}\text{ale\_n}_{i}} \tag{2}\]
where \(i\) is the index of the \(k\) ALE x intervals for the variable (for a categorical variable, this is the number of distinct categories), \(\text{ale\_y}_{i}\) is the ALE y value for the \(i\)th ALE x interval, and \(\text{ale\_n}_{i}\) is the number of rows of data in the \(i\)th ALE x interval.
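As a minimal sketch (ours, not the package's code), Equation 2 is just a weighted mean of absolute ALE values. The illustrative numbers below are the rounded public values quoted earlier with the category counts 70 and 90 from Table 9.

```python
import numpy as np

def aled(ale_y, ale_n):
    """ALE deviation (Equation 2): the ale_n-weighted mean absolute value of the
    zero-centred ALE y values over the k ALE x intervals."""
    ale_y = np.asarray(ale_y, dtype=float)
    ale_n = np.asarray(ale_n, dtype=float)
    return float(np.sum(np.abs(ale_y * ale_n)) / np.sum(ale_n))

# Rounded public values: FALSE +0.42 (n = 70), TRUE -0.34 (n = 90)
aled([0.42, -0.34], [70, 90])  # = 0.375; the reported ALED of 0.377 uses unrounded values
```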
Based on its ALED, we can say that the average effect on math scores of whether a school is in the public or Catholic sector is 0.38 (again, out of a range from 2 to 20). In the effects plot above, the ALED is indicated by a white box bounded by parentheses ( and ). As it is centred on the median, we can readily see that the average effect of the school sector barely exceeds the limits of the median band, indicating that it barely exceeds our threshold of practical relevance. The average effect for the ratio of academic track students is slightly higher at 0.63. We can see on the plot that it slightly exceeds the median band on both sides, indicating its slightly stronger effect. We will comment on the values of other variables when we discuss the normalized versions of these scores, to which we proceed next.
### Normalized ALE effect size measures
Since ALER and ALED scores are scaled on the range of y for a given dataset, these scores cannot be compared across datasets. Thus, we present normalized versions of each with intuitive, comparable values. For intuitive interpretation, we normalize the scores on the minimum, median, and maximum of any dataset. In principle, we divide the zero-centred y values in a dataset into two halves: the lower half from the 0th to the 50th percentile (the median) and the upper half from the 50th to the 100th percentile. (Note that the median is included in both halves). With zero-centred ALE y values, all negative and zero values are converted to their percentile score relative to the lower half of the original y values while all positive ALE y values are converted to their percentile score relative to the upper half. (Technically, this percentile assignment is called the empirical cumulative distribution function (ECDF) of each half.) Each half is then divided by two to scale them from 0 to 50 so that together they can represent 100 percentiles. (Note: when a centred ALE y value of exactly 0 occurs, we choose to include the score of zero ALE y in the lower half because it is analogous to the 50th percentile of all values, which more intuitively belongs in the lower half of 100 percentiles.) The transformed maximum ALE y is then scaled as a percentile from 0 to 100%. Its formula is in Equation 3.
\[\text{norm\_ale\_y}=100\times\begin{cases}\dfrac{ECDF_{y_{\geq 0}}(\text{ale\_y})}{2}&\text{if ale\_y}>0\\[6pt]-\dfrac{ECDF_{-y_{<0}}(-\text{ale\_y})}{2}&\text{if ale\_y}\leq 0\end{cases} \tag{3}\]
where
* \(ECDF_{y_{\geq 0}}\) is the ECDF of the non-negative values in y.
* \(ECDF_{-y_{<0}}\) is the ECDF of the negative values in y after they have been inverted (multiplied by -1).
Of course, the formula could be simplified by multiplying by 50 instead of by 100 and not dividing the ECDFs by 2 each. But we prefer the form we have given because it is explicit that each ECDF represents only half the percentile range and that the result is scored to 100 percentiles.
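A minimal numpy sketch of this normalization follows. It is our own illustration of Equation 3, not the package's implementation, and it assumes `y` has already been centred on the same reference (zero or the median) as `ale_y`.

```python
import numpy as np

def norm_ale_y(y, ale_y):
    """Normalize centred ALE y values to signed half-percentiles (Equation 3).

    Positive ALE values are scored by the ECDF of the non-negative half of y,
    non-positive values by the inverted ECDF of the negative half; each half
    spans 0-50, so the result lies in [-50, 50] and is centred on 0.
    """
    y = np.asarray(y, dtype=float)
    a = np.asarray(ale_y, dtype=float)
    upper = np.sort(y[y >= 0])        # non-negative half of centred y
    lower = np.sort(-y[y < 0])        # negative half, inverted to be positive

    def ecdf(ref, q):                 # fraction of ref values <= q
        return np.searchsorted(ref, q, side="right") / max(len(ref), 1)

    return np.where(a > 0,
                    100 * ecdf(upper, a) / 2,
                    -100 * ecdf(lower, -a) / 2)
```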
#### 5.3.1 Normalized ALER (NALER)
Based on this normalization, we first have the normalized ALER (NALER), which scales the minimum and maximum ALE y values from 0 to 100%, centred on 50%, as seen in Equation 4.
\[\text{NALER}(\text{y},\text{ale\_y})=\{\text{min}(\text{norm\_ale\_y})+50,\ \text{max}(\text{norm\_ale\_y})+50\} \tag{4}\]
where \(y\) is the full vector of y values in the original dataset, required to calculate norm_ale_y.
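Given the norm_ale_y sketch above, NALER is then just the re-centred extremes. Again, the function below is our illustrative naming, not the package API.

```python
def naler(y, ale_y):
    """Normalized ALE range (Equation 4): the extremes of the normalized ALE
    values, shifted so that they read as percentiles centred on 50%."""
    norm = norm_ale_y(y, ale_y)            # from the sketch above
    return float(norm.min() + 50), float(norm.max() + 50)
```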
The result of this transformation is that NALER values can be interpreted as percentiles with respect to the range of y around the median (50%). naler_min is always less than 50% and naler_max is always greater than 50%. Their numbers represent the limits of the effect of the x variable with units in percentile scores of y. In the effects plot above, because the percentile scale on the top corresponds exactly to the raw scale below, the NALER limits are represented by exactly the same points as the ALER limits; only the scale changes. The scale for ALER and ALED is the lower scale of the raw outcomes; the scale for NALER and NALED is the upper scale of percentiles.
So, with a NALER of 45.67, 56.12, the minimum of any ALE value for public (public == TRUE) shifts math scores to the 46th percentile of y values whereas the maximum (public == FALSE) shifts math scores to the 56th percentile. Academic track ratio has a NALER of 18.38, 74.28, ranging from the 18th to the 74th percentiles of math scores.
#### 5.3.2 Normalized ALED (NALED)
The normalization of ALED scores applies the same ALED formula as before but on the normalized ALE values instead of on the original ALE y values, as seen in Equation 5.
\[\text{NALED}(y,\text{ale\_y},\text{ale\_n})=\text{ALED}(\text{norm\_ale\_y},\text{ale\_n}) \tag{5}\]
NALED produces a score that ranges from 0 to 100%. It is essentially the ALED expressed in percentiles, that is, the average effect of a variable over its full domain of values. So, the NALED of public school status of 5.1 indicates that its average effect on math scores spans the middle 5.1% of scores. Academic ratio has an average effect expressed in NALED of 8.1% of scores.
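In the same illustrative spirit, Equation 5 simply composes the two earlier sketches (aled and norm_ale_y); the resulting number is directly comparable to the 5% median-band threshold discussed next.

```python
def naled(y, ale_y, ale_n):
    """Normalized ALE deviation (Equation 5): ALED computed on the normalized
    ALE values, i.e. the average effect expressed in percentiles of y (0-100)."""
    return aled(norm_ale_y(y, ale_y), ale_n)
```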
The NALED is particularly helpful in comparing the practical relevance of variables against our threshold by which we consider that a variable needs to shift the outcome on average by more than 5% of the median values. This threshold is the same scale as the NALED. So, we can tell that public school status with its NALED of 5.1 just barely crosses our threshold.
### The median band and random variables
It is particularly striking to focus on the random rand_norm. We reexamine its ALE plot in Figure 7 and then we examine its ALE effect size measures in Table 6.
rand_norm has a NALED of 4.5. It might be surprising that a purely random value has any "effect size" to speak of, but statistically, it must have some numeric value or the other. However, by setting our default value for the median band at 5%, we effectively exclude rand_norm from serious consideration.
Figure 7: ALE plot for the random variable from mathematics achievement scores
Setting the median band at a value as low as 1% would not have excluded the random variable, but 5% seems like a suitable balance. Thus, the effect of a variable like the discrimination climate score (discrim, 4.1) should probably not be considered practically meaningful.
This point is even more striking when we examine the ALE plot of the random variable from our large diamonds dataset in Figure 8 and then we examine its ALE effect size measures in Table 7. Here we see that, even for a large dataset with more values across which to average, a purely random variable can vary by as much as 2.5%.
The analysis of the diamonds dataset is better contextualized when we examine its ALE effects plot in Figure 9. Here we see that three out of nine variables have NALED values less than that of the purely random variable, indicating that those variables cannot be considered to have any meaningful variation, despite the appearances of slight patterns seen in their ALE plots above.
On one hand, 5% as a threshold for the median band might seem to be somewhat arbitrary, inspired by traditional \(\alpha=0.05\) for statistical significance and confidence intervals. The "correct" baseline should be a qualitative question, depending on an analyst's goals and the context of a specific study. On the other hand, our initial analyses here show that 5% seems to be an effective choice for excluding a purely random variable from consideration, whether for small or large datasets.
\begin{table}
\begin{tabular}{l|r|r|r|r|r} \hline statistic & estimate & conf.low & median & mean & conf.high \\ \hline
**aled** & 0.324 & 0.094 & 0.334 & 0.324 & 0.527 \\ \hline
**aler\_min** & -1.450 & -4.109 & -1.195 & -1.450 & -0.159 \\ \hline
**aler\_max** & 1.424 & 0.253 & 1.417 & 1.424 & 2.699 \\ \hline
**naled** & 4.452 & 1.367 & 4.413 & 4.452 & 7.242 \\ \hline
**naler\_min** & 34.315 & 10.562 & 35.405 & 34.315 & 46.250 \\ \hline
**naler\_max** & 69.834 & 53.125 & 70.625 & 69.834 & 86.960 \\ \hline \end{tabular}
\end{table}
Table 6: Effect size measures for the random variable from the math achievements dataset
\begin{table}
\begin{tabular}{l|r|r|r|r|r} \hline statistic & estimate & conf.low & mean & median & conf.high \\ \hline
**aled** & 250.398 & 100.432 & 250.398 & 219.003 & 599.682 \\ \hline
**aler\_min** & -517.816 & -1308.046 & -517.816 & -493.426 & -23.858 \\ \hline
**aler\_max** & 1206.920 & 1206.920 & 1206.920 & 1206.920 & 1206.920 \\ \hline
**naled** & 2.451 & 0.977 & 2.451 & 2.084 & 6.420 \\ \hline
**naler\_min** & 44.418 & 33.861 & 44.418 & 45.026 & 49.739 \\ \hline
**naler\_max** & 61.711 & 61.711 & 61.711 & 61.711 & 61.711 \\ \hline \end{tabular}
\end{table}
Table 7: Effect size measures for the random variable from the diamonds dataset
Figure 8: ALE plot for the random variable from diamond prices
Figure 9: ALE effects plot for diamonds prices
## 6 Statistical inference with ALE
Although effect sizes are valuable in summarizing the global effects of each variable, they mask much nuance, since each variable varies in its effect along its domain of values. Thus, ALE is particularly powerful in its ability to make fine-grained inferences of a variable's effect depending on its specific value. Even more so than with effect sizes, statistical inference is primarily a topic considered with relatively smaller datasets. Thus, we demonstrate the principles in this section only using the mathematics achievement dataset. That said, everything we describe here is fully relevant to large datasets analyzed with ML.
### Classical statistical inference
We begin by briefly reviewing the classical approach to statistical inference. First, we can see the bootstrapped values of the effects of individual variables in Table 8. It is beyond the scope of this article to explain in detail how GAM works (see Ross 2019 for a tutorial). However, for our model illustration here, the estimates for the parametric variables (the non-numeric ones in our model) are interpreted as regular statistical regression coefficients whereas the estimates for the non-parametric smoothed variables (those whose variable names are encapsulated by the smooth s() function) are actually estimates for expected degrees of freedom (EDF in GAM). The smooth function s() lets GAM model these numeric variables as flexible curves that fit the data better than a straight line. The estimate values for the smooth variables above are not so straightforward to interpret, but suffice it to say that they are completely different from regular regression coefficients.
With bootstrap-based confidence intervals, based on the default 95% confidence intervals, a coefficient is statistically significant if conf.low and conf.high are both positive or both negative. However, the statistical significance of the estimate (EDF) of the smooth terms is meaningless here because EDF cannot go below 1.0. Thus, even the random term s(rand_norm) appears to be "statistically significant". Only the values for the non-smooth (parametric terms) public and high_minority should be considered here. So, we find that neither of the coefficient estimates of public nor of high_minority has an effect that is statistically significantly different from zero. (The intercept is not conceptually meaningful here; it is a statistical artifact.)
This initial analysis highlights two limitations of classical hypothesis-testing analysis. First, it might work suitably well when we use models that have traditional linear regression coefficients. But once we use more advanced models like GAM that flexibly fit the data, we cannot interpret coefficients meaningfully and so it is not so clear how to reach inferential conclusions. Second, a basic challenge with models that are based on the general linear model (including GAM and almost all other statistical analyses) is that their coefficient significance compares the estimates with the null hypothesis that there is no effect. However, even if there is an effect, it might not be practically meaningful. As we will see, ALE-based statistics are explicitly tailored to emphasize practical implications beyond the notion of "statistical significance".
### ALE data structures for categorical and numeric variables
To understand how bootstrapped ALE can be used for statistical inference, we must understand the structure of ALE data. We can begin by examining the structure for a binary variable with just two categories, public, in Table 9.
The columns for a categorical variable mean:
\begin{table}
\begin{tabular}{l|r|r|r|r|r|r} \hline term & estimate & conf.low & mean & median & conf.high & std.error \\ \hline
**(Intercept)** & 12.713 & 11.654 & 12.713 & 12.718 & 13.613 & 0.484 \\ \hline
**publicTRUE** & -0.689 & -2.015 & -0.689 & -0.652 & 0.415 & 0.637 \\ \hline
**high_minorityTRUE** & 1.031 & -0.318 & 1.031 & 1.054 & 2.324 & 0.676 \\ \hline
**s(size)** & 3.536 & 1.000 & 3.536 & 2.503 & 8.385 & 2.603 \\ \hline
**s(academic_ratio)** & 6.085 & 1.000 & 6.085 & 7.323 & 8.756 & 2.810 \\ \hline
**s(female_ratio)** & 4.203 & 1.000 & 4.203 & 3.652 & 8.396 & 2.382 \\ \hline
**s(mean_ses)** & 7.492 & 2.561 & 7.492 & 8.449 & 8.995 & 2.008 \\ \hline
**s(minority_ratio)** & 7.300 & 2.588 & 7.300 & 8.120 & 8.987 & 1.797 \\ \hline
**s(discrim)** & 4.205 & 1.000 & 4.205 & 3.560 & 8.800 & 2.673 \\ \hline
**s(rand_norm)** & 6.796 & 1.000 & 6.796 & 7.777 & 8.767 & 2.394 \\ \hline \end{tabular}
\end{table}
Table 8: Bootstrapped coefficents for GAM for mathematics achievement scores
* ale_x: the different categories that exist in the categorical variable.
* ale_n: the number of rows for that category in the dataset provided to the function.
* ale_y: the mean bootstrapped ALE function value calculated for that category.
* ale_y_lo and ale_y_hi: the lower and upper confidence intervals for the bootstrapped ale_y value.
By default, the ale package centres ALE values on the median of the outcome variable; in our dataset, the median of all the schools' average mathematics achievement scores is 12.9. With ALE centred on the median, the weighted sum of ALE y values (weighted on ale_n) above the median is approximately equal to the weighted sum of those below the median. So, in the ALE plots above, when we consider the number of instances indicated by the rug plots and category percentages, the average weighted ALE y approximately equals the median.
In Table 10, we see the ALE data structure for a numeric variable, academic_ratio. The columns are the same as with a categorical variable, but the meaning of ale_x is different since there are no categories. To calculate ALE for numeric variables, the range of x values is divided into fixed intervals (such as into 100 percentiles). If the x values have fewer than 100 distinct values in the data, then each distinct value becomes an ale_x interval. (This is often the case with smaller datasets like ours; here academic_ratio has only 65 distinct values.) If there are more than 100 distinct values, then the range is divided into 100 percentile groups. So, ale_x represents each of these x-variable intervals. The other columns mean the same thing as with categorical variables: ale_n is the number of rows of data in each ale_x interval and ale_y is the calculated ALE for that ale_x value.
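The interval construction just described can be sketched as follows. This is our approximation of the behaviour described in the text; the ale package's exact binning rules may differ in details such as tie handling.

```python
import numpy as np

def ale_x_breaks(x, max_intervals=100):
    """Choose ale_x values for a numeric variable: every distinct value when
    there are at most `max_intervals` of them, otherwise percentile breaks."""
    x = np.asarray(x, dtype=float)
    distinct = np.unique(x)
    if len(distinct) <= max_intervals:
        return distinct
    # percentile groups; deduplicate because skewed data can repeat quantiles
    return np.unique(np.quantile(x, np.linspace(0, 1, max_intervals + 1)))
```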
### Inference with ALE based on bootstrapped confidence intervals
Whereas the coefficient table above based on classic statistics indicated this conclusion for public, it indicated that high_minority had a statistically significant effect; our ALE analysis indicates that high_minority does not. In addition, e
With the structure of ALE data clear, we can now proceed to statistical inference with ALE based on bootstrapped confidence intervals. In a bootstrapped ALE plot, values within the confidence intervals are statistically significant; values outside of the median band can be considered at least somewhat meaningful. Thus, **the essence of
\begin{table}
\begin{tabular}{c|r|r|r|r} \hline ale\_x & ale\_n & ale\_y & ale\_y\_lo & ale\_y\_hi \\ \hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table 10: Structure of ALE data for academic_ratio (numeric) from math achievement dataset
\begin{table}
\begin{tabular}{c|r|r|r|r} \hline ale\_x & ale\_n & ale\_y & ale\_y\_lo & ale\_y\_hi \\ \hline
**FALSE** & 70 & 13.302 & 12.192 & 14.176 \\ \hline
**TRUE** & 90 & 12.613 & 11.892 & 13.350 \\ \hline \end{tabular}
\end{table}
Table 9: Structure of ALE data for public (categorical) from math achievement dataset
We can see this, for example, with the plot of academic_ratio in Figure 10. However, it might not always be easy to tell from a plot which regions are relevant, so the results of statistical significance are summarized with a confidence regions table in Table 11. For numeric variables, the confidence regions summary has one row for each consecutive sequence of x values that have the same status: all values in the region are below the median band, they overlap the band, or they are all above the band. Here are the summary components:
* start_x is the first and end_x is the last x value in the sequence. start_y is the y value that corresponds to start_x while end_y corresponds to end_x.
* n is the number of data elements in the sequence; n_pct is the percentage of total data elements out of the total number.
* x_span is the length of x of the sequence that has the same confidence status. However, so that it may be comparable across variables with different units of x, x_span is expressed as a percentage of the full domain of x values.
* trend is the average slope from the point (start_x, start_y) to (end_x, end_y). Because only the start and end points are used to calculate trend, it does not reflect any ups and downs that might occur between those two points. Since the various x values in a dataset are on different scales, the scales of the x and y values in calculating the trend are normalized on a scale of 0 to 1 each so that the trends for all variables are directly comparable. A positive trend means that, on average, y increases with x; a negative trend means that, on average, y decreases with x; a zero trend means that y has the same value at its start and end points-this is always the case if there is only one point in the indicated sequence.
* relative_to_mid is the key information here. It indicates if all the values in sequence from start_x to end_x are below, overlapping, or above the median band:
* below: the higher limit of the confidence interval of ALE y (ale_y_hi) is below the lower limit of the median band.
* above: the lower limit of the confidence interval of ALE y (ale_y_lo) is above the higher limit of the median band.
* overlap: neither of the first two conditions holds; that is, the confidence region from ale_y_lo to ale_y_hi at least partially overlaps the median band.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline start\_x & end\_x & x\_span & n & n\_pct & start\_y & end\_y & trend & relative\_to\_mid \\ \hline
**0.00** & **0.00** & 0.00 & 1 & 0.006 & 9.273 & 9.273 & 0.000 & **below** \\ \hline
**0.05** & **0.91** & 0.86 & 143 & 0.894 & 10.935 & 13.874 & 0.230 & **overlap** \\ \hline
**0.95** & **1.00** & 0.05 & 16 & 0.100 & 14.179 & 14.625 & 0.599 & **above** \\ \hline \end{tabular}
\end{table}
Table 11: Confidence regions for academic_ratio (numeric) from math achievement dataset
Figure 10: ALE plot for academic_ratio from math achievement dataset
These results tell us simply that, for academic_ratio, from 0 to 0, ALE is below the median band from 9.27 to 9.27. From 0.05 to 0.91, ALE overlaps the median band from 10.9 to 13.9. From 0.95 to 1, ALE is above the median band from 14.2 to 14.6. Considering the details from Table 11, we can see that academic_ratio overlaps with the median range for most of its range. Although the lowest value is nominally below the median band, since this represents only 1 data point (0.6% of the dataset), we can discount that finding. However, it is meaningful that when the ratio of academic subjects is 95% or higher, representing 10% of the schools in the dataset, the average math achievement scores are above the median. We would consider this a practically meaningful result supported by analysis.
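To make the relative_to_mid classification and the trend column concrete, here is a minimal sketch of the logic described above. The function names and the normalization by range are our reading of the text, not the package's exact code.

```python
import numpy as np

def relative_to_mid(ale_y_lo, ale_y_hi, mid_lo, mid_hi):
    """Classify each ALE x interval against the median band [mid_lo, mid_hi]:
    'below' if its whole confidence interval lies under the band,
    'above' if it lies entirely over the band, 'overlap' otherwise."""
    lo = np.asarray(ale_y_lo, dtype=float)
    hi = np.asarray(ale_y_hi, dtype=float)
    return np.where(hi < mid_lo, "below",
                    np.where(lo > mid_hi, "above", "overlap"))

def trend(start_x, end_x, start_y, end_y, x_range, y_range):
    """Average slope from (start_x, start_y) to (end_x, end_y), with x and y
    each normalized to a 0-1 scale so trends are comparable across variables."""
    dx = (end_x - start_x) / x_range
    dy = (end_y - start_y) / y_range
    return 0.0 if dx == 0 else dy / dx
```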
Confidence region summary tables are available not only for numeric but also for categorical variables, as we see with the ALE plot for public (Figure 11) and its confidence regions summary table in Table 12. Since we have categories here, there are no start or end positions and there is no trend. We instead have each x category and its single ALE y value, with the n and n_pct of the respective category and relative_to_mid as before to indicate whether the indicated category is below, overlaps with, or is above the median band. These results tell us that, for public, for FALSE, the ALE of 13.3 overlaps the median band. For TRUE, the ALE of 12.6 also overlaps the median band. In other words, there is no statistically (or practically) significant value of public. The data does not provide evidence to support the claim that public or non-public schools have significantly different math achievement scores.
### Confidence regions and random variables
Again, our random variable rand_norm is particularly interesting, as we can see from its ALE plot in Figure 12 and its confidence regions summary table in Table 13. Despite the apparent pattern, we see that from -2.4 to 2.61, ALE overlaps the median band from 11.9 to 12.5. So, despite the random highs and lows in the bootstrap confidence interval, there is no reason to suppose that the random variable has any effect anywhere in its domain. It is important to remember that our analysis of the math achievement dataset uses model bootstrapping because
\begin{table}
\begin{tabular}{l|r|r|r|r|r|r|r} \hline start\_x & end\_x & x\_span & n & n\_pct & start\_y & end\_y & trend & relative\_to\_mid \\ \hline
**-2.397** & **2.608** & 1 & 160 & 1 & 11.924 & 12.545 & 0.042 & **overlap** \\ \hline \end{tabular}
\end{table}
Table 13: Confidence regions for the full-model (appropriately) bootstrapped random variable from math achievement dataset
Figure 11: ALE plot for public from math achievement dataset
\begin{table}
\begin{tabular}{l|r|r|r|r|r|r} \hline \hline statistic & estimate & conf.low & mean & median & conf.high \\ \hline
**aled** & 0.105 & 0.093 & 0.105 & 0.104 & 0.118 \\ \hline
**aler\_min** & -0.196 & -0.196 & -0.196 & -0.196 & -0.196 \\ \hline
**aler\_max** & 0.402 & 0.319 & 0.402 & 0.402 & 0.489 \\ \hline
**naled** & 1.552 & 1.430 & 1.552 & 1.534 & 1.751 \\ \hline
**naler\_min** & 46.875 & 46.875 & 46.875 & 46.875 & 46.875 \\ \hline
**naler\_max** & 54.706 & 53.750 & 54.706 & 55.000 & 55.000 \\ \hline \hline \end{tabular}
\end{table}
Table 14: Effect size measures for the data-only (inappropriately) bootstrapped random variable from math achievement dataset
Figure 12: ALE plot for the random variable from math achievement dataset
\begin{table}
\begin{tabular}{c|r|r|r|r|r|r|r} \hline \hline start\_x & end\_x & x\_span & n & n\_pct & start\_y & end\_y & trend & relative\_to\_mid \\ \hline
**-2.397** & **-1.952** & 0.089 & 2 & 0.013 & 12.700 & 12.725 & 0.019 & **below** \\ \hline
**-1.722** & **2.015** & 0.747 & 154 & 0.963 & 12.734 & 13.203 & 0.042 & **overlap** \\ \hline
**2.207** & **2.608** & 0.080 & 4 & 0.025 & 13.243 & 13.323 & 0.067 & **above** \\ \hline \hline \end{tabular}
\end{table}
Table 15: Confidence regions for the data-only (inappropriately) bootstrapped random variable from math achievement dataset
Figure 13: ALE plot for the data-only (inappropriately) bootstrapped random variable from math achievement dataset
its dataset is too small to be analyzed with a train-test split. If, however, we were to use the inappropriate data-only bootstrapping, as we did above for illustration, we can see the random variable from that analysis in Figure 13, its ALE effect size measures in Table 14, and its confidence regions summary table in Table 15.
The data-only bootstrap makes rand_norm appear more reliably irrelevant with a very low NALED of 1.6%. But we note that the considerably higher NALED of 4.5% in the model bootstrap probably more accurately reflects the variability of the random variable. More seriously, the data-only bootstrap, with its excessively narrow confidence intervals, makes rand_norm appear to have statistically significant and practically relevant results at its two extremes, with the inaccurate conclusion that the lowest values are slightly below the median band while the highest values are slightly above it. Leaving aside the fact that these extremes represent only 1.3% and 2.5% of the data points, respectively, we see that data-only bootstrapping suggests dubious conclusions whereas the appropriate model bootstrapping leaves no doubt that rand_norm has no practically relevant effect on math achievement scores.
## 7 Discussion
In the preceding sections, we have navigated through the multifaceted landscape of ALE, exploring its potential and the innovative extensions proposed in this study. The contributions we present forge a path towards more nuanced, interpretable, and computationally efficient applications in ML model interpretation. Through the implementation of full model bootstrapping, the introduction of intuitive ALE-based effect size measures, and the innovative concept of confidence regions, this work endeavours to enhance the robustness and interpretability of ALE analyses.
In this concluding discussion, we shall reflect on the implications, applications, and potential future trajectories of these contributions, anchoring our discourse in the context of both the theoretical and practical domains of ML model interpretation and analysis.
### Contributions
The extensions to the implementation of ALE that we have described in the preceding sections address the opportunities for improvement that we highlight above in the section on related work.
First, whereas data-only bootstrapping of ALE values has already been implemented elsewhere, we implement full model bootstrapping specifically tailored for calculating ALE. This allows the creation of appropriate confidence intervals for the ALE of models that are trained on all available data, which is typically the case for small datasets. It is not so surprising that the Python packages that implement bootstrapping or variations feature just data-only bootstrapping (Flora, 2023; Jumelle, Kuhn-Regnier and Rajaratnam, 2020). Python is rarely used to analyze small datasets that are not amenable to a train-test split; smaller datasets are typically analyzed with R rather than with Python.
Second, we introduce two new pairs of effect size measures based on ALE designed for intuitive interpretability. Unlike the overall model MEC and the IAS (Molnar, Casalicchio and Bischl, 2020) that only show the average effects across all variables, the measures we present here show the effects for individual variables. The ALE deviation indicates the average dispersion of ALE values while the ALE range indicates the maximum dispersion. These base measures are scaled to the Y outcome variable and so are readily interpretable in terms of the applications to the outcome; they are also directly comparable across predictor variables. For meaningful comparison of effect sizes across different datasets and contexts, each of these has a normalized version that is scaled on a percentile scale relative to the range of possible Y outcome values.
Third, to address the nuances of non-linear relationships, we go beyond effect size measures and introduce the notion of confidence regions as an approach to statistical inference based on ALE confidence intervals. Whereas several ML effect size measures like the impact score (Lotsch and Ultsch, 2020) and \(f_{v}^{2}\)(Messner, 2023)--and, indeed, ALED, ALER, NALED, and NALER--only indicate a general effect, Messner (2023) has made an initial effort in indicating the complexity of relationships by testing for monotonicity and average direction. Confidence regions fully embrace the potential complexity of relationships by not trying to reduce them to a single number. Rather, they identify areas significantly above or below a median band. For numeric predictors, confidence regions indicate the trend within these regions to infer the average slope, somewhat mirroring Messner's slope measure, but on a more granular scale that communicates necessary nuances. Monotonicity is less pertinent from this perspective; understanding the relationship's intricate shape, which necessitates graphical interpretation, holds greater significance than a single numeric summary.
A fourth minor but non-negligible contribution is that, as these measures and techniques are all calculated directly from the data generated from ALE, they not only inherit the inherent superior computational efficiency compared to other processor-intensive effect size measures (Molnar 2022), but they are computationally negligible to calculate in any analysis that has already calculated ALE.
### Practical Implications
There are several practical implications of our ALE-based effect size measures and of statistical inference using bootstrapped ALE confidence regions.
Overall effect size measures afford a comprehensive perspective on model performance, providing a global depiction of the impact of each variable on the outcome throughout the model. In contrast, due to the non-homogeneous effect across the entire domain of the X input factors, the ALE confidence regions delineate specific zones within the domain of X where an input predictor may be active or inactive.
The confidence regions offer a nuanced context to aid the interpretation of the overall measures. Specifically, when encountering a particularly broad ALE range or a substantial ALE range interval, coupled with a more modest ALE deviation score, the confidence regions clarify which areas of the domain exhibit atypical effects of that variable.
Overall effect size measures facilitate rapid identification of variables that most significantly impact the outcomes of a system of interest. Subsequently, for precise action, the confidence regions assist decision-makers in focusing on specific values or ranges of interest that might exert undesirable or desirable effects on the outcome.
The intriguing occurrence of a substantial disparity between overall effects, as indicated by ALE deviation, and extreme effects, as highlighted by ALE range, can be dissected with the assistance of confidence regions. These focus on the origins of such disparities and, instead of being deemed problematic, can serve as a fountainhead of unexpected insights. Given that ALE averages effects across intervals, spikes in the ALE range do not invariably represent the influence of potential spurious outliers; they typically signify the effects within a particular data range. Such disparities generally merit closer investigation, often paving the way for unforeseen and valuable insights.
Generally, a pitfall of any overall effect size measure is that it may misleadingly convey the data as more homogeneous than it truly is. While some extensions have been proposed to the original ALE algorithm to account for such data heterogeneity, the confidence regions adeptly illuminate them by indicating where effects are homogeneous and where they are heterogeneous.
### Conclusion
In this article, we have unveiled novel methodologies and metrics that enhance the applicability and interpretability of ALE in ML model analysis. Our contributions span the implementation of a full model bootstrapping approach, tailored for ALE, which facilitates the creation of apt confidence intervals, especially pivotal for models trained on smaller datasets. Furthermore, we introduced two pairs of effect size measures--ALE deviation (ALED) and ALE range (ALER), along with their normalized counterparts (NALED and NALER)--designed to provide a nuanced depiction of individual variable effects, thereby offering a more granular insight compared to existing models. The introduction of confidence regions, which embrace the complexity of relationships without reducing them to a singular numeric value, marks a significant stride towards comprehending the intricacies of non-linear relationships. Lastly, the computational efficiency of our proposed measures and techniques, derived directly from ALE data, not only underscores their practicality but also positions them as computationally viable options in analyses that have already computed ALE.
As we reflect upon these contributions, we recognize the potential they hold in bridging theoretical understanding and practical application, thereby paving the way for more robust, interpretable, and insightful analyses in the realm of ML model interpretation. Future endeavours may explore further applications and validations of these methodologies across diverse datasets and contexts, ensuring their robustness and utility in varied analytical scenarios.
## 8 Acknowledgments
We used ChatGPT in developing the outline of this article and in drafting much of the text. The author takes full responsibility for the entirety of the final revised contents. In addition, we used ChatGPT extensively in developing the code for the ale package. |
2306.05773 | Wilson-loop One-point Functions in ABJM Theory | In this paper we initiate the study of correlation functions of a single
trace operator and a circular supersymmetric Wilson loop in ABJM theory. The
single trace operator is in the scalar sector and is an eigenstate of the
planar two-loop dilatation operator. The Wilson loop is in the fundamental
representation of the gauge group or a suitable (super-)group. Such correlation
functions at tree level can be written as an overlap of the Bethe state
corresponding to the single trace operator and a boundary state which
corresponds to the Wilson loop. There are various type of supersymmetric Wilson
loops in ABJM theory. We show that some of them correspond to tree-level
integrable boundary states while some are not. For the tree-level integrable
ones, we prove their integrability and obtain analytic formula for the
overlaps. For the non-integrable ones, we give examples of non-vanishing
overlaps for Bethe states which violate selection rules. | Yunfeng Jiang, Jun-Bao Wu, Peihe Yang | 2023-06-09T09:26:27Z | http://arxiv.org/abs/2306.05773v5 | # Wilson-loop One-point Functions in ABJM Theory
###### Abstract
In this paper we initiate the study of correlation functions of a single trace operator and a circular supersymmetric Wilson loop in ABJM theory. The single trace operator is in the scalar sector and is an eigenstate of the planar two-loop dilatation operator. The Wilson loop is in the fundamental representation of the gauge group or a suitable (super-)group. Such correlation functions at tree level can be written as an overlap of the Bethe state corresponding to the single trace operator and a boundary state which corresponds to the Wilson loop. There are various type of supersymmetric Wilson loops in ABJM theory. We show that some of them correspond to tree-level integrable boundary states while some are not. For the tree-level integrable ones, we prove their integrability and obtain analytic formula for the overlaps. For the non-integrable ones, we give examples of non-vanishing overlaps for Bethe states which violate selection rules.
## 1 Introduction
Integrable structure of four-dimensional \(\mathcal{N}=4\) super Yang-Mills (SYM) theory enables us to compute many physical observables non-perturbatively in the planar limit.1 The study of integrability in AdS/CFT was initiated by the discovery that the planar one-loop dilatation operator in the scalar sector is identical to the Hamiltonian of an integrable spin chain [2]. Later this result was generalized to the full sector and all-loop order. In the asymptotic regime, the spectrum of local operators can be computed by the all-loop asymptotic Bethe ansatz, which was first proposed in [3] and derived more rigorously in [4]. For operators with finite length, the so-called finite size corrections should be taken into account. To solve this challenging problem, different approaches such as the Luscher formula [5] and the thermodynamic Bethe ansatz (TBA) [6; 7; 8; 9] have been developed, and finally culminated in the quantum spectral curve (QSC) method [10].
Single-trace operators in \({\cal N}=4\) SYM are mapped to _closed_ spin chain states. It turns out that integrable _open_ spin chains also play important roles in AdS/CFT. There are at least two ways that open chains could emerge. The first is by changing the theory. Examples include theories with matters in the fundamental representation of the gauge group, such as four-dimensional \({\cal N}=2\)\(Sp(N)\) theory [11; 12] and \({\cal N}=2\) theory obtained by adding flavors to \({\cal N}=4\) SYM [13]. The second way is considering specific objects within \({\cal N}=4\) SYM theory which play the role of integrable boundaries. Such objects include domain walls [14], determinant operators which are dual to giant gravitons [15] and Wilson lines [16; 17].
More recently, the study of domain wall one-point functions in defect \({\cal N}=4\) SYM [18; 19] introduced integrable boundary states into AdS/CFT integrability2. Integrable boundary states are specific states in the Hilbert space which are annihilated by odd conserved charges. Later, integrable boundary states also appeared in the computation of the correlation function of two determinant operators and a single trace operator [22; 23], 't Hooft loop one-point functions [24] and Wilson-loop one-point functions [25].3 Although the integrable boundary states appear quite naturally in the cases of domain walls, 't Hooft loops and Wilson loops, their emergence in the correlation functions involving two determinant operators is less obvious. The integrable boundary states only show up after some non-trivial computations, such as using large-\(N\) effective field theory or performing partial Wick contractions between the giant gravitons.
Footnote 2: In [20], it was proved that the boundary states from the domain wall in the D3-D5 case satisfy the condition for the integrable boundary states in [21].
Footnote 3: A class of fermionic BPS Wilson loops in four-dimensional \({\cal N}=2\) quiver theories and \({\cal N}=4\) SYM was constructed last year [26]. It is interesting to study whether they lead to integrable open chains and/or integrable boundary states.
Three-dimensional \({\cal N}=6\) Chern-Simons-matter theory (ABJM theory) [27] is another important example of supersymmetric gauge theories which are integrable in the planar limit.4 Compared to \({\cal N}=4\) SYM theory, almost every aspect of integrability gets more complicated and challenging, due to its smaller symmetry. Similar to \({\cal N}=4\) SYM theory, the first hint of integrability of ABJM theory comes from the fact that the planar two-loop dilatation operator in the scalar sector is integrable [29; 30]. All-loop asymptotic Bethe ansatz equations were proposed in [31]. But there is a to-be-determined interpolating function \(h(\lambda)\) appearing in the dispersion relation of the magnons. A conjecture for the exact expression of \(h(\lambda)\) was proposed [32], based on the computation of the planar slope function using QSC [33] and the result on the vacuum expectation value of 1/6-BPS bosonic circular Wilson loop [34; 35; 36] computed using supersymmetric localization [37; 38].
Footnote 4: The review [28] summarised the related achievements up to the end of 2010.
Integrable open spin chains are relatively less studied from the perspective of 3d super-Chern-Simons theories. Planar two-loop reflection matrices of open chains from \({\cal N}=3\) flavored ABJM theory were shown to satisfy the boundary Yang-Baxter equations in [39], which is strong evidence that these chains are integrable at two loops. Quite recently, it was proved [40] that all-loop reflection matrices of open chains from half-BPS Wilson lines in ABJM theory are integrable, under certain assumptions [41; 42; 43]. TBA equations for
composite operators inserted in cusped Wilson lines were also obtained [40]. Solutions of the TBA equations can be used to confirm the conjectured interpolating function \(h(\lambda)\). As for the open chain from the determinant operators in ABJM theory, its two-loop integrability was checked using coordinate Bethe ansatz (CBA) [44] and proved by algebraic Bethe ansatz (ABA) [45]. A proposal for the asymptotic all-loop Bethe ansatz equations was given in [46]. The boundary reflection exchanges A-type magnons and B-type magnons in the flavored ABJM case, while the type of the magnon is preserved during the reflection in the determinant operator and Wilson line cases.
The study of integrable boundary states in ABJM theory started with the computation of three-point functions involving two determinant operators and a single trace operator [47]. Later it was shown that certain domain walls in ABJM theory also lead to integrable boundary states [48]5. In both cases, the integrable boundary states satisfy the untwisted integrability condition [51] at the two-loop level. The aim of this paper is to initiate the study of correlation functions of a single trace operator and a BPS Wilson loop in ABJM theory. As we will see, this is another important set-up where integrable boundary states emerge naturally.
Footnote 5: It was shown in [49; 50] that certain D-branes dual to domain walls in both \(\mathcal{N}=4\) SYM and ABJM theory indeed provide the integrable boundary conditions for open string attached to them.
There are various types of BPS Wilson loops in ABJM theory (see the review [43]). The first BPS Wilson loops which were constructed are the bosonic 1/6-BPS ones [34; 35; 36]. The construction is based on the bosonic 1/2-BPS (1/3-BPS) Wilson loops in general \(\mathcal{N}=2\) (\(\mathcal{N}=3\)) super-Chern-Simons theory [52] and is similar to the half-BPS Maldacena-Wilson loop [53; 54] in \(\mathcal{N}=4\) SYM. These Wilson loops correspond to F-strings smeared over a \(\mathbf{CP}^{1}\subset\mathbf{CP}^{3}\) in the dual \(AdS_{4}\times\mathbf{CP}^{3}\) background [34; 36]. In other words, the worldsheet theory has Neumann boundary conditions for the directions along the \(\mathbf{CP}^{1}\) subspace [55]. The existence of certain half-BPS probe F-string solutions [34; 36] with Dirichlet boundary conditions in all directions of \(\mathbf{CP}^{3}\) indicates the existence of half-BPS Wilson loops which are invariant under a subgroup \(SU(3)\times U(1)\) of \(SU(4)_{R}\). Such Wilson loops were constructed by Drukker and Trancanelli [56], who introduced fermions in the construction of the Wilson loops. Fermionic 1/6-BPS Wilson loops were constructed in [57; 58], based on the construction of fermionic half-BPS Wilson loops in generic \(\mathcal{N}=2\) super-Chern-Simons theories. These fermionic 1/6-BPS Wilson loops in ABJM theory include the above half-BPS Wilson loops and bosonic 1/6-BPS ones as special cases. A subclass of these fermionic 1/6-BPS Wilson lines is shown to be dual to F-strings with complicated mixed boundary conditions [59]. In this paper, we will study the Wilson-loop one-point function within a subclass of circular fermionic 1/6-BPS Wilson loops,6 and 1/3-BPS Wilson loops constructed based on the 1/3-BPS Wilson lines in [60].
Footnote 6: More precisely speaking, this subclass was chosen here to lie inside Class I of the classification in [57; 58]. The situation for Class II should be similar.
We will see that the corresponding structure constant can be calculated as the overlap of a boundary state and a Bethe state. The tree-level computation of this one-point function requires the single-trace operator to be an eigenvector of the two-loop dilatation operator, which is identical to an integrable Hamiltonian of an alternating SU(4) spin chain [29; 30].
The eigenvectors can be constructed by Bethe ansatz, with an additional zero-momentum condition. A particularly interesting question for us is: which supersymmetric Wilson loops correspond to _integrable_ boundary states?
We find that the boundary states corresponding to generic \(1/6\)-BPS Wilson loops in this subclass are not integrable. Only two special cases, namely the bosonic \(1/6\)-BPS and half-BPS Wilson loops, give rise to tree-level integrable boundary states. This result implies that the boundary states corresponding to \(1/3\)-BPS Wilson loops are also integrable at tree level. All the aforementioned tree-level integrable boundary states satisfy the untwisted integrability condition, which leads to the selection rules \(\mathbf{u}=-\mathbf{v},\mathbf{w}=-\mathbf{w}\), where \(\mathbf{u}\) and \(\mathbf{v}\) are the two sets of momentum-carrying Bethe roots and \(\mathbf{w}\) are the auxiliary roots. For these tree-level integrable boundary states, we obtain analytic formulas for the Wilson loop one-point functions (normalized overlaps) in terms of Bethe roots, up to an unimportant phase factor.
The remaining part of this paper is organized as follows. In section 2, we review the construction of various supersymmetric Wilson loops in ABJM theory. In section 3, we compute the Wilson loop one-point functions and find the circular Wilson loops which correspond to integrable boundary states. We prove the tree-level integrability of such states by algebraic Bethe ansatz and derive the selection rules for the exact overlap. In sections 4 and 5, we derive the exact overlap formula for the bosonic \(1/6\)-BPS and \(1/2\)-BPS Wilson loops, respectively. We conclude in section 6 and discuss some future directions. Two appendices include our conventions for ABJM theory and numerical solutions of the Bethe equations.
## 2 Various BPS Wilson loops in ABJM theory
In this section, we list Wilson loops that will be studied in this paper. Among these Wilson loops, the \(1/3\)-BPS circular Wilson loops are new, and they are constructed based on \(1/3\)-BPS Wilson lines in [60]. We consider the ABJM theory in three-dimensional Euclidean space \(\mathbf{R}^{3}\) and adopt the notations in [58]. The spinor convention, the Lagrangian and the supersymmetry transformation are listed in Appendix A.
Bosonic \(1/6\)-BPS circular WLs. These loops were first constructed in [34; 35; 36]. We consider the loops along \(x^{\mu}=(R\cos\tau,R\sin\tau,0),\tau\in[0,2\pi]\). The construction is the following,
\[W^{B}_{1/6}= \,\mathrm{Tr}\mathcal{P}\,\exp\left(-i\oint d\tau\mathcal{A}^{B}_ {1/6}(\tau)\right)\,,\qquad\hat{W}^{B}_{1/6}= \,\mathrm{Tr}\mathcal{P}\,\exp\left(-i\oint d\tau\hat{\mathcal{A}}^{B}_ {1/6}(\tau)\right)\,, \tag{1}\] \[\mathcal{A}^{B}_{1/6}= \,A_{\mu}\dot{x}^{\mu}+\frac{2\pi}{k}R_{I}^{\phantom{I}J}Y^{I}Y^{ \dagger}_{J}|\dot{x}|\,,\qquad\qquad\hat{\mathcal{A}}^{B}_{1/6}= \,\hat{A}_{\mu}\dot{x}^{\mu}+\frac{2\pi}{k}R_{I}^{\phantom{I}J}Y^{\dagger}_{J }Y^{I}|\dot{x}|\,, \tag{2}\]
where \(\dot{x}^{\mu}=\frac{dx^{\mu}}{d\tau}\), and \(R_{I}^{\phantom{I}J}=\mathrm{diag}(i,i,-i,-i)\).
These two Wilson loops preserve the same supersymmetries,
\[\vartheta_{12}=iR^{-1}\gamma_{3}\theta_{12}\,,\quad\vartheta_{34}=-iR^{-1}\gamma_{3}\theta_{34}\,,\] \[\theta_{13}=\theta_{14}=\theta_{23}=\theta_{24}=0\,,\quad\vartheta_{13}=\vartheta_{14}=\vartheta_{23}=\vartheta_{24}=0\,. \tag{3}\]
We can combine the above two connections into a big one,
\[L^{B}_{1/6}=\left(\begin{array}{cc}\mathcal{A}^{B}_{1/6}&\\ &\hat{\mathcal{A}}^{B}_{1/6}\end{array}\right)\,, \tag{4}\]
and construct the following Wilson loops
\[W^{B,\text{big}}_{1/6}=\text{Tr}\mathcal{P}\,\exp\left(-i\oint d\tau L^{B}_{1/ 6}(\tau)\right)\,. \tag{5}\]
Obviously the preserved supersymmetries are still the ones in (3).
Fermionic \(1/6\)-BPS circular WLs. These loops were constructed in [57; 58]. We focus on the Class I loops according to the classification in these papers. Let us consider the loops along the same contour \(x^{\mu}(\tau)=(R\cos\tau,R\sin\tau,0)\) as the bosonic \(1/6\)-BPS Wilson loops,
\[W^{F}_{1/6}=\text{Tr}\mathcal{P}\,\exp\left(-i\oint d\tau L^{F}_ {1/6}(\tau)\right)\,,\quad L^{F}_{1/6}=\left(\begin{array}{cc}\mathcal{A}& \bar{f}_{1}\\ f_{2}&\hat{\mathcal{A}}\end{array}\right)\,, \tag{6}\] \[\mathcal{A}=A_{\mu}\dot{x}^{\mu}+\frac{2\pi}{k}U_{I}{}^{J}Y^{I}Y^ {I}_{J}|\dot{x}|\,,\quad\bar{f}_{1}=\sqrt{\frac{2\pi}{k}}\bar{\alpha}^{I}\bar{ \zeta}\psi_{I}|\dot{x}|\,,\] (7) \[\hat{\mathcal{A}}=\hat{A}_{\mu}\dot{x}^{\mu}+\frac{2\pi}{k}U_{I}{ }^{J}Y^{I}_{J}Y^{I}|\dot{x}|\,,\quad f_{2}=\sqrt{\frac{2\pi}{k}}\psi^{\dagger \,I}\eta\beta_{I}|\dot{x}|\,, \tag{8}\]
with
\[\bar{\alpha}^{I} = (\bar{\alpha}^{1},\bar{\alpha}^{2},0,0)\,,\quad\beta_{I}=(\beta_ {1},\beta_{2},0,0)\,, \tag{9}\] \[\bar{\zeta}^{\alpha} = (e^{i\tau/2},e^{-i\tau/2})\,,\quad\eta_{\alpha}=\left(\begin{array []{c}e^{-i\tau/2}\\ e^{i\tau/2}\end{array}\right)\,,\] (10) \[U_{I}{}^{J} = \left(\begin{array}{cccc}i-2\bar{\alpha}^{2}\beta_{2}&2\bar{ \alpha}^{2}\beta_{1}&0&0\\ 2\bar{\alpha}^{1}\beta_{2}&i-2\bar{\alpha}^{1}\beta_{1}&0&0\\ 0&0&-i&0\\ 0&0&0&-i\end{array}\right)\,. \tag{11}\]
The preserved supersymmetries are the same as the ones in (3) for generic \(\bar{\alpha}^{I},\beta_{I}\).
It was later noticed that we have the equivalence relation [43; 61],
\[(\bar{\alpha}^{I},\beta_{J})\sim(\lambda\bar{\alpha}^{I},\lambda^{-1}\beta_{J })\,,\,\lambda\in\mathbf{C}^{*}=\mathbf{C}-\{0\}\,. \tag{12}\]
One can set \(\bar{\alpha}^{2}=\beta_{2}=0\) to get a subclass of Wilson loops. Then in this subclass, we have \(U_{I}{}^{J}=\text{diag}(i,i-2\bar{\alpha}^{1}\beta_{1},-i,-i)\), and
\[\bar{f}_{1}=\sqrt{\frac{2\pi}{k}}\bar{\alpha}^{1}\bar{\zeta}\psi_{1}|\dot{x}| \,,\quad f_{2}=\sqrt{\frac{2\pi}{k}}\psi^{\dagger\,1}\eta\beta_{1}|\dot{x}|\,. \tag{13}\]
Similar subclass in the fermionic \(1/6\)-BPS Wilson lines was considered in the study of the dual string theory prescription [59].
Half-BPS circular WLs. Half-BPS Wilson loops were first constructed in [56]. A class of them, \(W_{1/2}\), appears among the above class of fermionic \(1/6\)-BPS Wilson loops when the parameters \(\bar{\alpha}^{I},\beta_{I}\) satisfy the following constraints,
\[\beta_{I}=\frac{i\alpha_{I}}{\bar{\alpha}^{J}\alpha_{J}}\,, \tag{14}\]
and at least one of \(\bar{\alpha}^{1},\bar{\alpha}^{2}\) is non-zero. Here \(\alpha_{I}\) is defined by \(\alpha_{I}=(\bar{\alpha}^{I})^{*}\). The preserved supersymmetries are now enhanced to
\[\bar{\alpha}^{I}\vartheta_{IJ}=iR^{-1}\gamma_{3}\bar{\alpha}^{I}\theta_{IJ}\,, \quad\epsilon^{IJKL}\alpha_{J}\vartheta_{KL}=-iR^{-1}\gamma_{3}\epsilon^{IJKL }\alpha_{J}\theta_{KL}\,. \tag{15}\]
Using a suitable R-symmetry transformation acting only on \(I=1,2\), we can choose7
Footnote 7: Notice that this transformation is inside a \(SU(2)\) subgroup of \(SU_{R}(4)\) and keeps \(\text{diag}(-i,-i,i,i)\) invariant. So its action on the Wilson loop only changes \(\bar{\alpha}^{I},\beta_{I}\).
\[\bar{\alpha}^{I}=(\bar{\alpha},0,0,0)\,,\quad\beta_{I}=(\beta,0,0,0)\,, \tag{16}\]
with the constraint \(\bar{\alpha}\beta=i\). Now \(U_{I}^{\ J}=\text{diag}(i,-i,-i,-i)\), and \(\bar{f}_{1}\) and \(f_{2}\) become,
\[\bar{f}_{1}=\sqrt{\frac{2\pi}{k}}\bar{\alpha}\bar{\zeta}\psi_{1}|\dot{x}|\,, \quad f_{2}=\sqrt{\frac{2\pi}{k}}\psi^{\dagger 1}\eta\beta|\dot{x}|\,. \tag{17}\]
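As a quick consistency check (a direct substitution, not an additional result), plugging the choice (16) with \(\bar{\alpha}^{2}=\beta_{2}=0\) and \(\bar{\alpha}^{1}\beta_{1}=\bar{\alpha}\beta=i\) into the matrix (11) gives

\[U_{I}{}^{J}=\left(\begin{array}{cccc}i-2\bar{\alpha}^{2}\beta_{2}&2\bar{\alpha}^{2}\beta_{1}&0&0\\ 2\bar{\alpha}^{1}\beta_{2}&i-2\bar{\alpha}^{1}\beta_{1}&0&0\\ 0&0&-i&0\\ 0&0&0&-i\end{array}\right)=\left(\begin{array}{cccc}i&0&0&0\\ 0&i-2i&0&0\\ 0&0&-i&0\\ 0&0&0&-i\end{array}\right)=\mathrm{diag}(i,-i,-i,-i)\,,\]

which is the form quoted above.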
The equivalence relation (12) now becomes,
\[(\bar{\alpha},\beta)\sim(\lambda\bar{\alpha},\lambda^{-1}\beta)\,,\,\lambda \in\mathbf{C}^{*}\,. \tag{18}\]
We will denote this half-BPS Wilson loop by \(W_{1/2}^{1+}\) and the corresponding super-connection by \(L_{1/2}^{1+}\). The supersymmetries preserved by \(W_{1/2}^{1+}\) are,
\[\vartheta_{1I}=iR^{-1}\gamma_{3}\theta_{1I}\,,\quad\epsilon^{1IJK}\vartheta_{JK}=-iR^{-1}\gamma_{3}\epsilon^{1IJK}\theta_{JK}\,. \tag{19}\]
\(1/3\)-BPS circular WLs. In the construction of \(1/3\)-BPS Wilson loops, we start with the following super-connection \(L_{1/2}^{4-}\) in another half-BPS Wilson loop \(W_{1/2}^{4-}\),
\[L_{1/2}^{4-}=\left(\begin{array}{cc}\mathcal{A}&\bar{f}_{1}\\ f_{2}&\hat{\mathcal{A}}\end{array}\right)\,, \tag{20}\]
with
\[\mathcal{A}=A_{\mu}\dot{x}^{\mu}+\frac{2\pi}{k}|\dot{x}|\tilde{U }_{I}{}^{J}Y^{I}Y_{J}^{\dagger}\,,\quad\bar{f}_{1}=\frac{2\pi}{k}\bar{\rho} \bar{\mu}\psi_{4}|\dot{x}|\,,\] \[\hat{\mathcal{A}}=\hat{A}_{\mu}\dot{x}^{\mu}+\frac{2\pi}{k}|\dot {x}|\tilde{U}_{I}{}^{J}Y_{J}^{\dagger}Y^{I}\,,\quad f_{2}=\frac{2\pi}{k}\psi^ {\dagger 4}\nu\delta|\dot{x}|\,,\] \[\tilde{U}_{I}{}^{J}=\left(\begin{array}{cccc}i&0&0&0\\ 0&i&0&0\\ 0&0&i&0\\ 0&0&0&-i\end{array}\right)\,,\] \[\bar{\mu}^{\alpha}=(e^{i\tau/2},-e^{-i\tau/2})\,,\quad\nu_{ \alpha}=\left(-e^{-i\tau/2},e^{i\tau/2}\right)\,. \tag{21}\]
Here \(\bar{\rho},\delta\) are two complex numbers satisfying \(\bar{\rho}\delta=-i\), and we have the equivalence relation
\[(\bar{\rho},\delta)\sim(\lambda\bar{\rho},\lambda^{-1}\delta)\,,\, \lambda\in\mathbf{C}^{*}\,. \tag{22}\]
The corresponding Wilson loop
\[W_{1/2}^{4-}=\mathrm{Tr}\mathcal{P}\,\exp\left(-i\oint d\tau L_{1/2}^{4-}(\tau) \right)\,, \tag{23}\]
preserves the following supersymmetries,
\[\vartheta_{I4}=-i\gamma_{3}R^{-1}\theta_{I4}\,,\,\epsilon^{4IJK}\vartheta_{JK}=i\gamma_{3}R^{-1}\epsilon^{4IJK}\theta_{JK}\,. \tag{24}\]
This Wilson loop belongs to the class II of the fermionic 1/6-BPS Wilson loops in [57, 58].
The 1/3-BPS Wilson loops are constructed from \(L_{1/2}^{1+}\) and \(L_{1/2}^{4-}\),
\[W_{1/3}=\mathrm{Tr}\mathcal{P}\,\exp\left(-i\oint d\tau L_{1/3}(\tau)\right)\,,\] \[L_{1/3}=\mathrm{diag}(\underbrace{L_{1/2}^{1+},\cdots,L_{1/2}^{1+}}_{n_{1}},\underbrace{L_{1/2}^{4-},\cdots,L_{1/2}^{4-}}_{n_{4}})\,. \tag{25}\]
Notice that \(L_{1/3}\) is a \(((n_{1}+n_{4})N|(n_{1}+n_{4})N)\) supermatrix, since both \(L_{1/2}^{1+}\) and \(L_{1/2}^{4-}\) are \((N|N)\) supermatrices.
The supersymmetries preserved by \(W_{1/3}\) are given by the following ones shared by \(L_{1/2}^{1+}\) and \(L_{1/2}^{4-}\),
\[\vartheta_{12}=i\gamma_{3}\theta_{12}\,,\,\vartheta_{13}=i\gamma _{3}\theta_{13}\,,\,\vartheta_{24}=-i\gamma_{3}\theta_{24}\,,\,\vartheta_{34} =-i\gamma_{3}\theta_{34}\,, \tag{26}\] \[\theta_{14}=\theta_{23}=\vartheta_{14}=\vartheta_{23}=0\,. \tag{27}\]
## 3 Wilson loop one-point function in ABJM theory
### The boundary states from Wilson loops
The main goal of this paper is to study the tree-level correlation function of the BPS Wilson loops reviewed in the previous section and the single-trace operator,
\[\mathcal{O}_{C}=C_{I_{1}\cdots I_{L}}^{J_{1}\cdots J_{L}}\mathrm{Tr}(Y^{I_{1 }}Y_{J_{1}}^{\dagger}\cdots Y^{I_{L}}Y_{J_{L}}^{\dagger})\,, \tag{28}\]
in the scalar sector. The coefficients \(C_{I_{1}\cdots I_{L}}^{J_{1}\cdots J_{L}}\) are chosen such that this single-trace operator is an eigenstate of the planar two-loop dilatation operator. The single-trace operator is put at the origin of the three-dimensional space. The Wilson loops considered in this paper are in the fundamental representation of a suitable (super-)group. More precisely, the bosonic 1/6-BPS Wilson loop (1) is in the fundamental representation of \(U(N)\). \(W_{1/6}^{B,\mathrm{big}}\) is in the fundamental representation of the gauge group \(U(N)\times U(N)\). The fermionic 1/6-BPS Wilson loop (6) and the half-BPS Wilson loop are in the fundamental representation of the supergroup \(U(N|N)\). Finally, the 1/3-BPS Wilson loop (25) is in the fundamental representation of the supergroup \(U((n_{1}+n_{4})N|(n_{1}+n_{4})N)\).
At tree level, the correlator \(\langle W({\cal C})^{B}_{1/6}{\cal O}_{C}(0)\rangle\) only gets contributions from
\[\oint\cdots\oint d\tau_{1>2>\cdots>L}\left(\frac{2\pi}{k}\right)^{L} \langle{\rm tr}(R^{\tilde{J}_{1}}{}_{\tilde{I}_{1}}Y^{\tilde{I}_{1}}(x_{1})Y^{ \dagger}_{\tilde{J}_{1}}(x_{1})\cdots R^{\tilde{J}_{L}}{}_{\tilde{I}_{L}}Y^{ \tilde{I}_{L}}(x_{L})Y^{\dagger}_{\tilde{J}_{L}}(x_{L}))\] \[C^{J_{1}\cdots J_{L}}_{I_{1}\cdots I_{L}}{\rm tr}(Y^{I_{1}}(0)Y^{ \dagger}_{J_{1}}(0)\cdots Y^{I_{L}}(0)Y^{\dagger}_{J_{L}}(0))\rangle\,, \tag{10}\]
where \(x_{i}=(R\cos\tau_{i},R\sin\tau_{i},0)\), \(i=1,\cdots,L\), and
\[\oint\cdots\oint d\tau_{1>2>\cdots>L}=\int_{0}^{2\pi}d\tau_{1}\int_{0}^{\tau_{ 1}}d\tau_{2}\cdots\int_{0}^{\tau_{L-1}}d\tau_{L}\,. \tag{11}\]
In the large \(N\) limit, we only take into account planar Wick contractions, as is shown in figure 1.
One can easily obtain
\[\langle W({\cal C})^{B}_{1/6}{\cal O}_{C}(0)\rangle=\frac{\lambda^{2L}k^{L}}{(L -1)!(2R)^{2L}}C^{J_{1}\cdots J_{L}}_{I_{1}\cdots I_{L}}R^{I_{L}}{}_{J_{L}} \cdots R^{I_{1}}{}_{J_{1}}\,, \tag{12}\]
where \(\lambda\equiv\frac{N}{k}\) is the 't Hooft coupling of ABJM theory and the tree-level propagators of the scalar fields (100) have been used. For later convenience, we introduce the following two-site boundary state which is specified by a \(4\times 4\) matrix \(M\) as
\[\langle{\cal B}_{M}| \equiv M^{I_{1}}{}_{J_{1}}M^{I_{2}}{}_{J_{2}}\cdots M^{I_{L}}{}_{J_{L}} \langle I_{1},J_{1},\cdots,I_{L},J_{L}| \tag{13}\] \[= \left(M^{I}_{J}\langle I,J|\right)^{\otimes L}\,.\]
Then
\[|{\cal B}_{M}\rangle=\left((M^{I}_{J})^{*}|I,J\rangle\right)^{\otimes L}\,. \tag{14}\]
Our convention for the Hermitian conjugation of the spin chain states is
\[\left(\langle I_{1},J_{1},\cdots,I_{L},J_{L}|\right)^{\dagger}=|I_{1},J_{1} \cdots,I_{L},J_{L}\rangle\,. \tag{15}\]
Figure 1: Planar Wick contractions between the local operator and the Wilson loop. Here each blue or red dot indicates a pair of scalar fields and each straight line indicates a pair of contractions. The outer circle indicates the Wilson loop, and the inner circle indicates the single-trace operator. Although the single-trace operator is local in space-time, we draw it as a circle to make the color structure of the operator and the contraction more clear.
We define the \(1/6\)-BPS boundary state as8
Footnote 8: Notice that \(R^{I}{}_{J}\) should not be confused with \(R\) which is the radius of the circular Wilson loop.
\[|\mathcal{B}^{B}_{1/6}\rangle=|\mathcal{B}_{R}\rangle\,, \tag{3.8}\]
where \(R^{I}{}_{J}=\text{diag}(i,i,-i,-i)\). Then the above correlation function can be expressed as
\[\langle W(\mathcal{C})^{B}_{1/6}\mathcal{O}_{C}(0)\rangle=\frac{\lambda^{2L}k^ {L}}{(L-1)!(2R)^{2L}}\langle\mathcal{B}^{B}_{1/6}|\mathcal{O}_{C}\rangle\,, \tag{3.9}\]
where \(|\mathcal{O}_{C}\rangle\equiv C^{J_{1}\cdots J_{L}}_{I_{1}\cdots I_{L}}|I_{1},J_{1},\cdots,I_{L},J_{L}\rangle\) is the spin chain state corresponding to the operator \(\mathcal{O}_{C}\). Our convention for the overlap of two spin chain states is
\[\langle I_{1},J_{1},\cdots,I_{L},J_{L}|M_{1},N_{1},\cdots,M_{L},N_{L}\rangle= \delta^{M_{1}}_{I_{1}}\delta^{J_{1}}_{N_{1}}\cdots\delta^{M_{L}}_{I_{L}} \delta^{J_{L}}_{N_{L}}\,. \tag{3.10}\]
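To make these index conventions concrete, here is a small numerical sketch (not from the paper; the array layout and the random coefficient tensor are our own choices) that builds \(\langle\mathcal{B}_{R}|\) for \(L=2\) as a flat vector and contracts it with a spin-chain state, comparing against the explicit index sum.

```python
import numpy as np

d, L = 4, 2
M = np.diag([1j, 1j, -1j, -1j])          # M = R for the bosonic 1/6-BPS boundary state

# two-site building block  M^I_J <I,J|  stored as a length-16 vector of bra components
block = M.reshape(d*d)                    # component on <I,J| is M[I, J]

# <B_M| = (M^I_J <I,J|)^{otimes L}, a vector in (C^4)^{otimes 2L}
bra = block
for _ in range(L - 1):
    bra = np.kron(bra, block)

# a toy spin-chain state |O_C> = C^{J1 J2}_{I1 I2} |I1,J1,I2,J2> with random coefficients,
# stored with axes ordered (J1, J2, I1, I2)
C = np.random.rand(d, d, d, d) + 1j*np.random.rand(d, d, d, d)
ket = C.transpose(2, 0, 3, 1).reshape(d**(2*L))      # reorder to (I1, J1, I2, J2) and flatten

# overlap <B_M|O_C>, with no complex conjugation of M, as in the definition of <B_M|
overlap = bra @ ket
check   = np.einsum('ij,kl,jlik->', M, M, C)         # explicit index sum
print(np.isclose(overlap, check))                    # True
```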
Let us define the normalization factor \(\mathcal{N}_{\mathcal{O}}\) using the two-point function of \(\mathcal{O}\) and \(\mathcal{O}^{\dagger}\) as
\[\langle\mathcal{O}(x)\mathcal{O}^{\dagger}(y)\rangle=\frac{\mathcal{N}_{ \mathcal{O}}}{|x-y|^{2\Delta_{\mathcal{O}}}}\,, \tag{3.11}\]
where \(\Delta_{\mathcal{O}}\) is the conformal dimension of \(\mathcal{O}\). At tree level and in the planar limit, we have
\[\mathcal{N}_{\mathcal{O}}=\left(\frac{N}{4\pi}\right)^{2L}L\langle\mathcal{O} |\mathcal{O}\rangle\,. \tag{3.12}\]
We define the Wilson-loop one-point function as
\[\langle\!\langle\mathcal{O}\rangle\!\rangle_{W(\mathcal{C})}\equiv\frac{ \langle W(\mathcal{C})\mathcal{O}\rangle}{\sqrt{\mathcal{N}_{\mathcal{O}}}}\,. \tag{3.13}\]
Then for \(W^{B}_{1/6}\) we have
\[\langle\!\langle\mathcal{O}\rangle\!\rangle_{W(\mathcal{C})^{B}_{1/6}}=\frac{ \pi^{L}\lambda^{L}}{R^{2L}(L-1)!\sqrt{L}}\frac{\langle\mathcal{B}^{B}_{1/6}| \mathcal{O}\rangle}{\sqrt{\langle\mathcal{O}|\mathcal{O}\rangle}}\,. \tag{3.14}\]
The computation of the Wilson loop one-point function thus amounts to the calculation of
\[\frac{\langle\mathcal{B}^{B}_{1/6}|\mathcal{O}\rangle}{\sqrt{\langle\mathcal{ O}|\mathcal{O}\rangle}}\,. \tag{3.15}\]
Similar computations for other Wilson loops studied in this paper can also be reduced to the computation of overlaps of the form in (3.15), with the corresponding boundary states. For \(\hat{W}(\mathcal{C})^{B}_{1/6}\), the boundary state is
\[\langle\hat{\mathcal{B}}^{B}_{1/6}|=R^{I_{1}}{}_{J_{L}}R^{I_{2}}{}_{J_{1}} \cdots R^{I_{L}}{}_{J_{L-1}}\langle I_{1},J_{1},\cdots,I_{L},J_{L}|\,. \tag{3.16}\]
We can rewrite \(|\hat{\mathcal{B}}^{B}_{1/6}\rangle\) as
\[|\hat{\mathcal{B}}^{B}_{1/6}\rangle=U_{\text{even}}|\mathcal{B}^{B}_{1/6}\rangle \tag{3.17}\]
where \(U_{\rm even}\) is the shift operator which shifts all even sites to the left by two units and leaves the odd sites untouched,
\[U_{\rm even}|I_{1},J_{1},I_{2},J_{2},\cdots,I_{L-1},J_{L-1},I_{L},J_{L}\rangle=|I _{1},J_{2},I_{2},J_{3},\cdots,I_{L-1},J_{L},I_{L},J_{1}\rangle. \tag{3.18}\]
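As an illustration of the definition (3.18), the following is a minimal Python sketch of \(U_{\rm even}\) acting on a basis label (the tuple representation of the state is our own bookkeeping choice, not part of the paper):

```python
def u_even(state):
    """Shift operator (3.18): odd sites (the I's) are untouched, every even site
    (the J's) is moved two lattice units to the left, cyclically."""
    odd, even = state[0::2], state[1::2]          # (I1,...,IL), (J1,...,JL)
    shifted = even[1:] + even[:1]                 # (J2, J3, ..., JL, J1)
    return tuple(x for pair in zip(odd, shifted) for x in pair)

print(u_even(('I1', 'J1', 'I2', 'J2', 'I3', 'J3')))
# ('I1', 'J2', 'I2', 'J3', 'I3', 'J1'), in agreement with (3.18)
```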
Combining (3.8) and (3.16), we obtain the boundary state of \(W({\cal C})^{B,{\rm big}}_{1/6}\)
\[|{\cal B}^{B,{\rm big}}_{1/6}\rangle=|{\cal B}^{B}_{1/6}\rangle+|\hat{\cal B}^ {B}_{1/6}\rangle=(1+U_{\rm even})|{\cal B}_{R}\rangle\,. \tag{3.19}\]
For \(W({\cal C})^{F}_{1/6}\), we simply replace \(R^{I}{}_{J}\) by \(U^{I}{}_{J}\) given in (2.11),
\[|{\cal B}^{F}_{1/6}\rangle=(1+U_{\rm even})|{\cal B}_{U}\rangle\,. \tag{3.20}\]
The boundary state corresponding to \(W(C)_{1/2}\), which will be denoted by \(|{\cal B}_{1/2}\rangle\), is given by \(|{\cal B}^{F}_{1/6}\rangle\) with the additional constraints (2.14).
In particular, the boundary state \(|{\cal B}^{1+}_{1/2}\rangle\) corresponding to \(W(C)^{1+}_{1/2}\) is
\[|{\cal B}^{1+}_{1/2}\rangle=(1+U_{\rm even})|{\cal B}_{U}\rangle\,, \tag{3.21}\]
with \(U^{I}{}_{J}={\rm diag}(i,-i,-i,-i)\). Similarly, the boundary state \(|{\cal B}^{4-}_{1/2}\rangle\) corresponding to \(W(C)^{4-}_{1/2}\) is
\[|{\cal B}^{4-}_{1/2}\rangle=(1+U_{\rm even})|{\cal B}_{\tilde{U}}\rangle\,, \tag{3.22}\]
with \(\tilde{U}^{I}{}_{J}={\rm diag}(i,i,i,-i)\).
Finally, for \(W({\cal C})_{1/3}\), we have
\[|{\cal B}_{1/3}\rangle=n_{1}|{\cal B}^{1+}_{1/2}\rangle+n_{4}|{\cal B}^{4-}_{ 1/2}\rangle\,. \tag{3.23}\]
### Integrable and non-integrable boundary states
In this subsection, we will prove that the boundary states \(|{\cal B}^{B}_{1/6}\rangle,|\hat{\cal B}^{B}_{1/6}\rangle,|{\cal B}_{1/2}\rangle\) and \(|{\cal B}_{1/3}\rangle\) are integrable at tree level by employing the method proposed in [21]. We will also show that the \(|{\cal B}^{F}_{1/6}\rangle\) with \(\bar{\alpha}^{2}=\beta_{2}=0\) is not integrable unless \(\bar{\alpha}^{1}\beta_{1}=0,i\).
Let us consider the boundary state \(|{\cal B}_{M}\rangle\) defined by a matrix \(M\) as in (3.5). In what follows, we will encounter several examples in which \(M\) is a diagonal matrix. In this case, the overlap \(\langle{\cal B}_{M}|{\bf u},{\bf v},{\bf w}\rangle\) is nonzero only if the numbers of the Bethe roots, \(K_{\bf u},K_{\bf v},K_{\bf w}\), and the length of the spin chain \(2L\), satisfy \(K_{\bf u}=K_{\bf v}=K_{\bf w}=L\). Notice that this selection rule has nothing to do with integrability of the boundary state.
In the algebraic Bethe ansatz approach, the \(SU(4)\) sector \(R\)-matrices of the ABJM theory at two-loop level are given by
\[\begin{array}{l}R^{\bullet\bullet}_{12}(u)=R^{\circ\circ}_{12}(u)=u+P_{12} \equiv R_{12}(u)\,,\\ R^{\bullet\circ}_{12}(u)=R^{\circ\bullet}_{12}(u)=-u-2+K_{12}\equiv\bar{R}_{1 2}(u)\,,\end{array} \tag{3.24}\]
where \(\bullet\) denotes the states in the \({\bf 4}\) representation of \(SU(4)_{R}\), while \(\circ\) denotes the states in the \(\bar{\bf 4}\) representation. The \(R\)-matrices satisfy the following crossing symmetry relations
\[R_{12}(u)^{t_{1}}=\bar{R}_{12}(-u-2),\qquad\bar{R}_{12}(u)^{t_{1}}=R_{12}(-u-2)\,, \tag{3.25}\]
and the relations
\[R_{12}(u)^{t_{1}t_{2}} = R_{12}(u),\qquad\bar{R}_{12}(u)^{t_{1}t_{2}}=\bar{R}_{12}(u),\] \[P_{12}R_{12}(u)P_{12} = R_{12}(u),\qquad P_{12}\bar{R}_{12}(u)P_{12}=\bar{R}_{12}(u)\,, \tag{3.26}\]
where \(t_{i}\) denotes transposition in the \(i\)-th space. One key feature of the algebraic Bethe ansatz approach for the ABJM spin chain is that it requires two \(R\)-matrices, \(R(u)\) and \(\bar{R}(u)\), due to the alternating nature of the chain. This is different from the case where the spin on each site is in the same representation.
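The crossing relations (3.25) are easy to verify numerically. The following sketch is our own check (not part of the paper), assuming \(P_{12}\) is the permutation operator and \(K_{12}\) the standard trace operator on \(\mathbf{C}^{4}\otimes\mathbf{C}^{4}\); it builds the two \(R\)-matrices of (3.24) and confirms \(R_{12}(u)^{t_{1}}=\bar{R}_{12}(-u-2)\) for a random complex \(u\).

```python
import numpy as np

d = 4
I4, Id = np.eye(d), np.eye(d*d)

# permutation operator P|k,l> = |l,k>   ->  P[(i,j),(k,l)] = delta_{il} delta_{jk}
P = np.einsum('il,jk->ijkl', I4, I4).reshape(d*d, d*d)
# trace operator     K|k,l> = delta_{kl} sum_m |m,m>  ->  K[(i,j),(k,l)] = delta_{ij} delta_{kl}
K = np.einsum('ij,kl->ijkl', I4, I4).reshape(d*d, d*d)

R    = lambda u: u*Id + P          # R(u)    = u + P_12
Rbar = lambda u: (-u - 2)*Id + K   # Rbar(u) = -u - 2 + K_12

def t1(X):
    """Partial transpose in the first space: X[(i,j),(k,l)] -> X[(k,j),(i,l)]."""
    return X.reshape(d, d, d, d).transpose(2, 1, 0, 3).reshape(d*d, d*d)

u = 0.37 + 0.21j
print(np.allclose(t1(R(u)),    Rbar(-u - 2)))   # True
print(np.allclose(t1(Rbar(u)), R(-u - 2)))      # True
```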
In the following, we will show that when there exists a four-dimensional matrix \(K(u)\) satisfying the boundary Yang-Baxter equation (BYBE)9
Footnote 9: Notice that here we only need to use the BYBE involving one of the \(R\)-matrices, \(R(u)\).
\[R_{12}(u-v)K_{1}(u)R_{12}(u+v)K_{2}(v)=K_{2}(v)R_{12}(u+v)K_{1}(u)R_{12}(u-v)\,, \tag{3.27}\]
then the boundary state \(|\mathcal{B}_{M}\rangle\) with \(M=K(-1)^{*}\) is integrable in the sense that it satisfies the following untwisted integrability condition [62; 51],
\[\tau(-u-2)|\mathcal{B}_{M}\rangle=\tau(u)|\mathcal{B}_{M}\rangle\,, \tag{3.28}\]
or equivalently [63], \(\Pi\bar{\tau}(u)\Pi|\mathcal{B}_{M}\rangle=\tau(u)|\mathcal{B}_{M}\rangle\,,\) where
\[\tau(u)=\mathrm{Tr}_{0}\left(R_{01}(u)\bar{R}_{02}(u)\cdots R_{0,2 L-1}(u)\bar{R}_{0,2L}(u)\right)\,, \tag{3.29}\] \[\bar{\tau}(u)=\mathrm{Tr}_{0^{\prime}}\left(\bar{R}_{0^{\prime}1 }(u)R_{0^{\prime}2}(u)\cdots\bar{R}_{0^{\prime},2L-1}(u)R_{0^{\prime},2L}(u) \right)\,, \tag{3.30}\]
are the transfer matrices. Here \(0,0^{\prime}\) denote two auxiliary spaces and \(\Pi\) is the parity operator
\[\Pi|I_{1},J_{1},\cdots,J_{2L}\rangle=|J_{2L},\cdots,J_{2},I_{1}\rangle\,. \tag{3.31}\]
Using the explicit forms of eigenvalues of \(\tau(u)\) and \(\bar{\tau}(u)\)[63], we conclude that for integrable boundary states, the overlap \(\langle\mathcal{B}_{M}|\mathbf{u},\mathbf{v},\mathbf{w}\rangle\) is non-zero only if the selection rules
\[\mathbf{u}=-\mathbf{v}\,,\qquad\mathbf{w}=-\mathbf{w} \tag{3.32}\]
are satisfied.
Let us define the state
\[|\phi(u-1)\rangle_{ab}=K^{I}_{J}(u-1)|I\rangle_{a}\otimes|J\rangle_{b}\,. \tag{3.33}\]
The boundary Yang-Baxter equation (3.27) leads to
\[\begin{split}&\tilde{R}_{34}(v-u)\tilde{\bar{R}}_{23}(-u-v)\left| \phi_{0}(u-1)\right\rangle_{12}\otimes\left|\phi_{0}(v-1)\right\rangle_{34} \\ =&\tilde{R}_{12}(v-u)\tilde{\bar{R}}_{23}(-u-v)\left| \phi_{0}(v-1)\right\rangle_{12}\otimes\left|\phi_{0}(u-1)\right\rangle_{34}\,, \end{split} \tag{3.34}\]
where \(\tilde{R}_{12}=P_{12}R_{12}\) and \(\tilde{\bar{R}}_{12}=P_{12}\bar{R}_{12}\). This relation can be shown pictorially as in figure 2.
Notice that \(\bar{R}(u)\) also appears here, since we have used one of the crossing symmetry relations (3.25).
Introducing two four-dimensional auxiliary spaces \(h_{0}\) and \(h_{2L+1}\) and following the derivation in Section 4 and Appendix C of [21], we can prove that
\[\begin{split}&\tilde{R}_{2L+1,2L}(v-u)\tilde{\bar{R}}_{2L,2L-1}(-v-u )\cdots\tilde{R}_{32}(v-u)\tilde{\bar{R}}_{21}(-v-u)\\ &|\phi(u-1))_{01}\otimes|\phi(v-1))_{23}\otimes\cdots\otimes|\phi (v-1))_{2L,2L+1}\\ =&\tilde{R}_{2L+1,2L}(v-u)\tilde{\bar{R}}_{2L,2L-1}(- v-u)\cdots\tilde{R}_{01}(v-u)\tilde{\bar{R}}_{12}(-v-u)\\ &|\phi(v-1))_{01}\otimes|\phi(u-1))_{23}\otimes|\phi(v-1))_{45} \otimes\cdots\otimes|\phi(v-1))_{2L,2L+1}\\ =&\cdots\\ =&\tilde{R}_{01}(v-u)\tilde{R}_{12}(-v-u)\cdots\tilde{ R}_{2L-2,2L-1}(v-u)\tilde{R}_{2L-1,2L}(-v-u)\\ &|\phi(v-1))_{01}\otimes|\phi(v-1))_{23}\otimes\cdots\otimes| \phi(v-1))_{2L-2,2L-1}\otimes|\phi(u-1))_{2L,2L+1}\,.\end{split} \tag{3.35}\]
This in turn implies
\[\begin{split}&\operatorname{Tr}_{0}\left(R_{01}(-u-2)\bar{R}_{02}(-u -2)\cdots R_{0,2L-1}(-u-2)\bar{R}_{0,2L}(-u-2)\right)|\mathcal{B}_{M}\rangle \\ =&\operatorname{Tr}_{0}\left(R_{01}(u)\bar{R}_{02}( u)\cdots R_{0,2L-1}(u)\bar{R}_{0,2L}(u)\right)|\mathcal{B}_{M}\rangle\,.\end{split} \tag{3.36}\]
Here the condition \(K(-1)=M^{*}\) has been used. Pictorially the above derivation is shown in figure 3.
In terms of the transfer matrices \(\tau(u)\) and \(\bar{\tau}(u)\) given in (3.29), (3.36) can be written as
\[\tau(-u-2)|\mathcal{B}_{M}\rangle=\tau(u)|\mathcal{B}_{M}\rangle. \tag{3.37}\]
As mentioned above, this equation is equivalent [63] to the untwisted integrability condition [51; 62], \(\Pi\bar{\tau}(u)\Pi|\mathcal{B}_{M}\rangle=\tau(u)|\mathcal{B}_{M}\rangle\)10. This finishes the proof of the integrability of the boundary state \(|\mathcal{B}_{M}\rangle\), assuming that there exists a matrix \(K(u)\) satisfying the BYBE (3.27) with \(K(-1)=M^{*}\). We can similarly prove that for such \(M\), the boundary state \(|\hat{\mathcal{B}}_{M}\rangle\equiv U_{\text{even}}|\mathcal{B}_{M}\rangle\) is also integrable and leads to the same selection rule11.
Footnote 10: In fact, there exists a twisted integrability condition [51; 62], \(\Pi\tau(u)\Pi|\mathcal{B}_{M}\rangle=\tau(u)|\mathcal{B}_{M}\rangle\), which leads to the selection rule \(\mathbf{u}=-\mathbf{u},\mathbf{v}=-\mathbf{v},\mathbf{w}=-\mathbf{w}\). But such a case has not appeared in the boundary states of the ABJM theory yet [47; 48].
Footnote 11: This result can be also obtained from the general classification of integrable boundary states in [64].
Figure 2: A pictorial representation of eq. (3.34) from the boundary Yang-Baxter equation. Here the red node denotes \(\tilde{\bar{R}}(-u-v)\) and the blue node denotes \(\tilde{R}(v-u)\).
Now we turn to boundary states from some BPS Wilson loops we list above. For \(|\mathcal{B}^{B}_{1/6}\rangle(=|\mathcal{B}_{R}\rangle)\) and \(|\hat{\mathcal{B}}^{B}_{1/6}\rangle(=|\hat{\mathcal{B}}_{R}\rangle)\), one can check that the matrix \(K(u)=R^{*}\) is the solution of BYBE (3.27), so these two boundary states are both tree-level integrable. Then \(|\mathcal{B}^{B,\rm{big}}_{1/6}\rangle\) is tree-level integrable as well. As for the boundary state \(|\mathcal{B}^{1+}_{1/2}\rangle(=(1+U_{\rm even})|\mathcal{B}_{U}\rangle)\), we can just choose \(K(u)=U^{*}={\rm diag}(-i,i,i,i)\) since it also satisfies the BYBE (3.27). Then we get that \(|\mathcal{B}^{1+}_{1/2}\rangle\) is integrable at tree level. Since the tree-level integrability of \(|\mathcal{B}_{M}\rangle\) is preserved when we perform a \(SU(4)_{R}\) transformation on \(M\) or multiply \(M\) by a constant, we get that the generic \(|\mathcal{B}_{1/2}\rangle\) and \(|\mathcal{B}^{4-}_{1/2}\rangle\) are also tree-level integrable. This leads to the conclusion that \(|\mathcal{B}_{1/3}\rangle\) is tree-level integrable.
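The statement that the constant matrices \(K(u)=R^{*}\) and \(K(u)=U^{*}\) solve the BYBE (3.27) can be checked directly. The short script below is our own verification (not part of the original derivation); it evaluates the residual of (3.27) for random complex spectral parameters.

```python
import numpy as np

d = 4
I4, Id = np.eye(d), np.eye(d*d)
P = np.einsum('il,jk->ijkl', I4, I4).reshape(d*d, d*d)   # permutation operator
R = lambda u: u*Id + P                                    # R(u) = u + P_12

def bybe_residual(Kmat, u, v):
    """Max-norm of LHS - RHS of the boundary Yang-Baxter equation (3.27)
    for a spectral-parameter-independent K-matrix."""
    K1, K2 = np.kron(Kmat, I4), np.kron(I4, Kmat)
    lhs = R(u - v) @ K1 @ R(u + v) @ K2
    rhs = K2 @ R(u + v) @ K1 @ R(u - v)
    return np.max(np.abs(lhs - rhs))

u, v = 0.83 - 0.19j, -0.42 + 0.55j
K_R = np.diag([-1j, -1j, 1j, 1j])   # K = R^*  (bosonic 1/6-BPS)
K_U = np.diag([-1j, 1j, 1j, 1j])    # K = U^*  (half-BPS boundary state)
print(bybe_residual(K_R, u, v))     # ~ 1e-16
print(bybe_residual(K_U, u, v))     # ~ 1e-16
```

Both residuals vanish because these diagonal matrices square to \(-1\), which is exactly what the BYBE with the rational \(R\)-matrix requires in this constant-\(K\) case.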
Now we turn to consider the boundary state
\[|\mathcal{B}^{F}_{1/6}\rangle = |\mathcal{B}_{U}\rangle+|\hat{\mathcal{B}}_{U}\rangle=(1+U_{\rm even })|\mathcal{B}_{U}\rangle\,, \tag{3.38}\]
with \(U={\rm diag}(i,i-2\epsilon,-i,-i)\). This boundary state corresponds to the generic fermionic \(1/6\)-BPS Wilson loops with \(\bar{\alpha}^{2}=\beta_{2}=0\), where \(\epsilon\) is given by \(\epsilon=\bar{\alpha}^{1}\beta_{1}\).
In the following we will show that when \(\epsilon\neq 0,i\), this state is not integrable. The idea is to employ the following set of Bethe roots with \(L=3,K_{\bf u}=K_{\bf w}=1,K_{\bf v}=2\),
\[u_{1}=0.866025,\ \ \ \ w_{1}=0.866025,\ \ \ \ v_{1}=-0.198072,\ \ \ \ v_{2}=0.631084\,, \tag{3.39}\]
which does not satisfy the selection rule \({\bf u}=-{\bf v},{\bf w}=-{\bf w}\). Notice that this set of roots also satisfies the zero momentum condition. However, these roots do not satisfy the first selection rule \(K_{\bf u}=K_{\bf v}=K_{\bf w}=L\). As a consequence, the overlap \(\langle\mathcal{B}^{F}_{1/6}|{\bf u},{\bf v},{\bf w}\rangle\) vanishes for this set of Bethe roots, and whether \(|\mathcal{B}^{F}_{1/6}\rangle\) is integrable or not cannot be detected from it. The way out is to perform the following \(SO(4)\subset SU(4)_{R}\) transformation [65],
\[U_{\theta}=g(\theta)Ug(\theta)^{-1}\,, \tag{3.40}\]
Figure 3: A pictorial derivation of (3.35).
with
\[g(\theta)=\left(\begin{array}{cccc}\cos^{2}\theta&\sin\theta&0&\sin \theta\cos\theta\\ -\sin\theta\cos^{2}\theta&\cos^{2}\theta&\sin\theta&-\sin^{2}\theta\cos\theta\\ \sin^{2}\theta\cos\theta&-\sin\theta\cos\theta&\cos\theta&\sin^{3}\theta\\ -\sin\theta&0&0&\cos\theta\end{array}\right)\,, \tag{41}\]
where \(\theta\) satisfies \(0<\theta<\frac{\pi}{2}\). Due to \(SU(4)_{R}\) invariance, \(|{\cal B}^{F}_{1/6}\rangle\) is integrable if and only if \(|{\cal B}^{F}_{1/6,\,\theta}\rangle\equiv(1+U_{\rm even})|{\cal B}_{U_{\theta}}\rangle\) is. Through direct computation, we found that \(\langle{\cal B}^{F}_{1/6,\,\theta}|{\bf u},{\bf v},{\bf w}\rangle\) is zero if and only if \(\epsilon=0\) or \(\epsilon=i\). This shows that a generic fermionic 1/6-BPS Wilson loop gives a non-integrable boundary state. More precisely, such a boundary state satisfies neither the twisted condition nor the untwisted one. Notice that, when \(\epsilon=i\), there is supersymmetry enhancement for this fermionic 1/6-BPS Wilson loop and it in fact becomes half-BPS. When \(\epsilon=0\), the bosonic part of this fermionic 1/6-BPS Wilson loop is the same as the big bosonic 1/6-BPS Wilson loop \(W_{1/6}^{B,{\rm big}}\).12 We have already shown that \(\epsilon=0\) and \(\epsilon=i\) lead to integrable boundary states.
Footnote 12: If we consider the vacuum expectation value of the Wilson loop or the correlators of the Wilson loop with operators out of it, this fermionic 1/6-BPS Wilson loop with \(\epsilon=0\) is identical to the bosonic 1/6-BPS Wilson loop \(W_{1/6}^{B,{\rm big}}\)[61].
To sum up, we find that only some of the supersymmetric Wilson loops correspond to tree-level integrable boundary states; they are listed as follows:
* The bosonic 1/6-BPS Wilson loop corresponding to the state \[|{\cal B}^{B,{\rm big}}_{1/6}\rangle=(1+U_{\rm even})|{\cal B}_{R}\rangle,\qquad R^{I}{}_{J}={\rm diag}(i,i,-i,-i)\] (42)
* The 1/2-BPS Wilson loop corresponding to the state \[|{\cal B}^{1+}_{1/2}\rangle=(1+U_{\rm even})|{\cal B}_{U}\rangle,\qquad U^{I} {}_{J}={\rm diag}(i,-i,-i,-i)\] (43)
* The 1/2-BPS Wilson loop corresponding to the state \[|{\cal B}^{4-}_{1/2}\rangle=(1+U_{\rm even})|{\cal B}_{\tilde{U}}\rangle, \qquad\tilde{U}^{I}{}_{J}={\rm diag}(i,i,i,-i)\,.\] (44)
* The 1/3-BPS Wilson loop corresponding to the state \[|{\cal B}_{1/3}\rangle=n_{1}|{\cal B}^{1+}_{1/2}\rangle+n_{4}|{\cal B}^{4-}_{1/2}\rangle\] (45)
Notice that in the above states, both \(|{\cal B}_{M}\rangle\) and \(U_{\rm even}|{\cal B}_{M}\rangle\) (\(M=R,U,\tilde{U}\)) are integrable at tree level. In the next section, we will derive the exact overlap formula between the tree-level integrable boundary states and the on-shell Bethe states of the SU(4) alternating spin chain.
## 4 Overlap of 1/6-BPS Wilson loop
In this section, we derive the exact overlap formula for the \(1/6\)-BPS Wilson loop \(|\mathcal{B}_{1/6}^{B,\text{big}}\rangle\) in (3.42). We will derive the formula for \(|\mathcal{B}_{R}\rangle\) and \(U_{\text{even}}|\mathcal{B}_{R}\rangle\) separately and then take the sum.
We will use the method developed in [51]. For a two-site state \(|\mathcal{B}\rangle\) with the selection rule (3.32), one expects that the overlap takes the following form
\[\frac{\langle\mathcal{B}|\mathbf{u,v,w}\rangle}{\sqrt{\langle \mathbf{u,v,w}|\mathbf{u,v,w}\rangle}}=\prod_{j=1}^{K_{\mathbf{u}}}h^{(1)}(u_{j })\prod_{k=1}^{K_{\mathbf{w}}/2}h^{(2)}(w_{k})\times\sqrt{\frac{\det G_{+}}{ \det G_{-}}}. \tag{4.1}\]
where \(\det G_{\pm}\) are the Gaudin-like determinants whose definitions were given in [47]. The prefactors \(h^{(1)}(u)\) and \(h^{(2)}(w)\) can be calculated by a nesting procedure in the sparse limit where \(L\to\infty\) and the numbers of excitations are kept finite [51; 65; 66]. In this limit, the ratio of determinants \(\det G_{+}/\det G_{-}\to 1\) and we are left with the contribution from the prefactors. However, this method cannot be applied directly in the current situation, for the following two reasons.
First, the Bajnok-Gombor nesting procedure starts with evaluating the overlap \(\langle\mathcal{B}|0\rangle\), where \(|0\rangle\) is the pseudovacuum state. However, it is easy to see that this overlap vanishes for our Wilson loop boundary states. Second, from \(R\)-charge conservation, the overlap \(\langle\mathcal{B}|\mathbf{u,v,w}\rangle\) for the Wilson loop boundary states is non-zero only if \(K_{\mathbf{u}}=K_{\mathbf{v}}=K_{\mathbf{w}}=L\). Therefore we cannot take the limit \(L\to\infty\) while keeping the excitation numbers finite.
### Rotating boundary state
To address these two issues, one can rotate the boundary state by a certain angle \(\theta\)[65]. The \(K\)-matrix still satisfies the BYBE (3.27) under an \(SO(4)\) rotation and hence integrability is preserved. The overlap for the rotated boundary state \(|\mathcal{B}_{\theta}\rangle\) is no longer constrained by the selection rule \(K_{\mathbf{u}}=K_{\mathbf{v}}=K_{\mathbf{w}}=L\) and we can apply the Bajnok-Gombor approach to obtain the prefactor. Assuming the \(\theta\to 0\) limit is smooth, we then obtain the prefactors of the original boundary state by taking \(\theta=0\). We will see that this method indeed gives the correct result for the \(1/6\)-BPS Wilson loop.
We first consider the boundary state \(\langle\mathcal{B}_{R}|\) in (3.5) with the following \(SO(4)\) rotation [65]
\[g(\theta)=\left(\begin{array}{cccc}\cos\theta&0&0&-\sin\theta \\ 0&\cos\theta&-\sin\theta&0\\ 0&\sin\theta&\cos\theta&0\\ \sin\theta&0&0&\cos\theta\end{array}\right) \tag{4.2}\]
and define
\[R(\theta)=g(\theta)Rg(-\theta). \tag{4.3}\]
The rotated dual boundary state is given by13
Footnote 13: For the computation of the overlap, we consider the dual boundary state.
\[\langle\mathcal{B}_{R(\theta)}|=\left(R(\theta)^{I}_{\phantom{I} J}\langle I,J|\right)^{\otimes L} \tag{4.4}\]
Similarly, we define
\[\langle\widehat{\mathcal{B}}_{R(\theta)}|=\langle\mathcal{B}_{R(\theta)}|U^{\dagger }_{\rm even}\,. \tag{4.5}\]
We find that
\[R(\theta)^{I}_{\phantom{I}J}\langle I,J|=i\cos(2\theta)\left(\langle 1\bar{1}|+\langle 2\bar{2}|-\langle 3\bar{3}|-\langle 4\bar{4}|\right)+i\sin(2\theta)\left(\langle 1\bar{4}|+\langle 2\bar{3}|+\langle 3\bar{2}|+\langle 4\bar{1}|\right) \tag{4.6}\]
where the second line breaks the selection rule \(K_{\bf u}=K_{\bf v}=K_{\bf w}=L\). The pseudovacuum state is
\[|0\rangle=(|1\bar{4}\rangle)^{\otimes L}. \tag{4.7}\]
We thus have
\[\langle\mathcal{B}_{R(\theta)}|0\rangle=(i\sin 2\theta)^{L},\qquad \langle\widehat{\mathcal{B}}_{R(\theta)}|0\rangle=(i\sin 2\theta)^{L}. \tag{4.8}\]
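As a sanity check of (4.6) and (4.8), one can build \(R(\theta)=g(\theta)Rg(-\theta)\) explicitly. The sketch below is our own check (not part of the paper); it confirms that the coefficient of \(\langle 1\bar{4}|\), which controls the pseudovacuum overlap, equals \(i\sin 2\theta\).

```python
import numpy as np

def g(theta):                      # the SO(4) rotation (4.2)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, 0, -s],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [s, 0,  0, c]])

R = np.diag([1j, 1j, -1j, -1j])
theta = 0.3
Rth = g(theta) @ R @ g(-theta)     # R(theta) = g(theta) R g(-theta), cf. (4.3)

# coefficient of <1 4bar| (the pseudovacuum direction) and of <1 1bar|, cf. (4.6)
print(np.isclose(Rth[0, 3], 1j*np.sin(2*theta)))   # True
print(np.isclose(Rth[0, 0], 1j*np.cos(2*theta)))   # True
```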
We will perform the Bajnok-Gombor procedure for the state \(|\mathcal{B}_{R(\theta)}\rangle\); the state \(|\widehat{\mathcal{B}}_{R(\theta)}\rangle\) can be treated similarly. We first define the renormalized state
\[\langle\mathcal{B}^{(1)}_{R(\theta)}|=\frac{\langle\mathcal{B}_{R(\theta)}|}{( i\sin 2\theta)^{L}}\,,\qquad\langle\mathcal{B}^{(1)}_{R(\theta)}|0\rangle=1. \tag{4.9}\]
### Two-particle state
Now we consider the excited state with two particles with rapidities \(u\) and \(v\). We denote the type of the particles by \(a\) and \(b\), where \(a,b=1,2\). When a type-\(a\) particle sits on an odd (even) site, the field \(Y^{1}\) (\(\bar{Y}_{4}\)) is replaced by \(Y^{1+a}\) (\(\bar{Y}_{4-a}\)). It is also possible for two particles with different labels to occupy the same site, leading to the composite excitations \(Y^{4}\) on odd sites and \(\bar{Y}_{1}\) on even sites. In what follows, we will denote a state with two particles of type-\(a\) and -\(b\) at sites \(2n-1\) and \(2m\) by \(|2n-1,2m\rangle_{a,b}\). If the two particles are on the same site \(n\), we denote the state by \(|n\rangle_{\bullet}\). The asymptotic two-particle Bethe state of the SU(4) alternating spin chain has been constructed in [65] and reads
\[\begin{split}|\{u\},\{v\}\rangle_{a,b}=&\sum_{m,n=1 }^{L}e^{ipn+iqm}\sum_{c,d=1}^{2}(-1)^{d-1}\chi_{a,b}^{c,d}(u-v)|2n-1,2m\rangle \!\rangle_{c,d}\\ &+\sum_{n=1}^{L}e^{i(p+q)n}\left(\zeta_{a,b}^{(1)}(u,v)|2n-1 \rangle\!\rangle_{\bullet}+\zeta_{a,b}^{(2)}(u,v)|2n\rangle\!\rangle_{ \bullet}\right)\,.\end{split} \tag{4.10}\]
where
\[e^{ip}=\frac{u+i/2}{u-i/2},\qquad e^{iq}=\frac{v+i/2}{v-i/2}\,. \tag{4.11}\]
The coefficients are given by
\[\chi^{c,d}_{a,b}(u)=\left\{\begin{array}{ll}\delta^{c}_{a}\delta^{d}_{b},&2n-1<2m \\ R^{cd}_{ab}(u),&2n-1>2m\end{array}\right. \tag{4.12}\]
with
\[R^{cd}_{ab}(u)=\frac{u}{u-i}\delta^{c}_{a}\delta^{d}_{b}-\frac{i}{u-i}\delta^{ d}_{a}\delta^{c}_{b} \tag{4.13}\]
The coefficients for the double occupation factor read
\[\zeta^{(1)}_{ab}(u,v)=\epsilon_{ab}\frac{-v+i/2}{u-v-i},\qquad\zeta^{(2)}_{ab} (u,v)=\epsilon_{ab}\frac{u+i/2}{u-v-i}. \tag{4.14}\]
Two comments are in order for the two particle state (4.10). Firstly, this state is an eigenstate of the Hamiltonian in the asymptotic sense. This means it is only an eigenstate for \(L\to\infty\). Secondly, when constructing the asymptotic two-particle state, in principle we should also take into account the possibilities where the two particles are on two different even or odd sites. Nevertheless such states will not contribute to the overlap and can be ignored.
### First level nesting
The first level \(K\)-matrix is given by
\[K^{(1)}_{ab}(u)=\lim_{L\to\infty}\frac{1}{A^{L}}\frac{\left\langle{\cal B}_{R (\theta)}|\{u\},\{-u\}\right\rangle_{a,b}}{\sqrt{\langle\{u\},\{-u\}|\{u\},\{ -u\}\rangle_{a,b}}}, \tag{4.15}\]
where \(A^{L}=\langle{\cal B}_{R(\theta)}|0\rangle=(-i\sin 2\theta)^{L}\). The overlap is given by
\[\left\langle{\cal B}_{R(\theta)}|u,-u\right\rangle_{a,b}=\left(\begin{array}[] {cc}A_{11}&A_{12}\\ A_{21}&A_{22}\end{array}\right) \tag{4.16}\]
Taking \(v=-u\), the two-particle state is simplified further to
\[|\{u\},\{-u\}\rangle_{a,b} =\sum_{n,m}e^{ip(n-m)}\sum_{c,d=1}^{2}(-1)^{d-1}\chi^{c,d}_{a,b}( 2u)|2n-1,2m\rangle\!\rangle_{c,d} \tag{4.17}\] \[\quad+\frac{\epsilon_{ab}}{2}\frac{u+i/2}{u-i/2}\sum_{n}\left(|2n -1\rangle\!\rangle_{\bullet}+|2n\rangle\!\rangle_{\bullet}\right)\]
We have
\[\langle{\cal B}_{R(\theta)}|2m-1,2n\rangle\!\rangle_{1,1} =\langle{\cal B}_{R(\theta)}|2m-1,2n\rangle\!\rangle_{2,2}=(i\sin 2 \theta)^{L}\,\delta_{m,n}\,, \tag{4.18}\] \[\langle{\cal B}_{R(\theta)}|2m-1,2n\rangle\!\rangle_{1,2} =i\cos 2\theta(i\sin 2\theta)^{L-1}\delta_{m,n}\,,\] \[\langle{\cal B}_{R(\theta)}|2m-1,2n\rangle\!\rangle_{2,1} =-i\cos 2\theta(i\sin 2\theta)^{L-1}\delta_{m,n}\,.\]
and
\[\langle\mathcal{B}_{R(\theta)}|2n-1\rangle\!\rangle_{\bullet}=-(i\cos 2 \theta)(i\sin 2\theta)^{L-1},\qquad\langle\mathcal{B}_{R(\theta)}|2n\rangle\! \rangle_{\bullet}=(i\cos 2\theta)(i\sin 2\theta)^{L-1}\,. \tag{4.19}\]
From (4.19), it is clear that the boundary state has vanishing overlap with the second line in (4.17) for all \(a,b=1,2\). Using (4.18) and (4.19), the matrix components of (4.16) can be computed straightforwardly, yielding
\[A_{11} =L\,(i\sin 2\theta)^{L}\,,\qquad A_{22}=-L\,(i\sin 2\theta)^{L}\,, \tag{4.20}\] \[A_{12} =A_{21}=-L(i\cos 2\theta)(i\sin 2\theta)^{L-1}.\]
The norm of the Bethe state is more involved, but in the \(L\to\infty\) limit the leading term is simply
\[\lim_{L\to\infty}\langle\{u\},\{-u\}|\{u\},\{-u\}\rangle_{a,b}=L^{2}+\cdots \tag{4.21}\]
where the ellipsis denotes subleading terms. Therefore, in the \(L\to\infty\) limit, we obtain
\[K^{(1)}_{ab}(u)=\lim_{L\to\infty}\frac{1}{L(-i\sin 2\theta)^{L}}\left(\begin{array} []{cc}A_{11}&A_{12}\\ A_{21}&A_{22}\end{array}\right)=\left(\begin{array}{cc}1&-\cot 2\theta\\ -\cot 2\theta&-1\end{array}\right) \tag{4.22}\]
for the \(1/6\)-BPS Wilson loop. Following the Bajnok-Gombor approach, (4.22) implies that
\[h^{(1)}(u)=K^{(1)}_{1,1}(u)=1 \tag{4.23}\]
### Second level nesting
At the second level, we can take a shortcut. It has been shown [51] that for a boundary state described by the following \(K\)-matrix
\[K=\left(\begin{array}{cc}1&-ie^{-\gamma}(\cosh\beta+2\alpha\sinh\beta)\\ -ie^{-\gamma}(\cosh\beta-2\alpha\sinh\beta)&-e^{-2\gamma}\end{array}\right) \tag{4.24}\]
the absolute value of the prefactor is given by
\[h^{(2)}(w)=e^{-2\gamma}(\sinh\beta)^{2}\frac{w^{2}+\alpha^{2}}{w(w-i/2)} \tag{4.25}\]
Comparing (4.24) with (4.22), we find that they are identical if we take
\[\gamma=0,\qquad\alpha=0,\qquad\cosh\beta=-i\cot(2\theta) \tag{4.26}\]
Therefore we conclude that
\[h^{(2)}(w)=-\frac{1}{(\sin 2\theta)^{2}}\frac{w}{w-i/2}\,. \tag{4.27}\]
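The identification (4.26) and the resulting prefactor (4.27) can be verified with a few lines of code. The following sketch is our own check (not part of the paper), using a random \(\theta\) and \(w\); with \(\alpha=0\) the \(\sinh\beta\) terms in the off-diagonal entries of (4.24) drop out.

```python
import numpy as np

theta = 0.4
w = 0.7 + 0.2j

# parameters of the reference K-matrix (4.24), matched to (4.22) via (4.26)
gamma, alpha = 0.0, 0.0
cosh_beta = -1j/np.tan(2*theta)            # cosh(beta) = -i cot(2 theta)
sinh_beta_sq = cosh_beta**2 - 1            # sinh^2 = cosh^2 - 1

K_ref = np.array([[1, -1j*np.exp(-gamma)*cosh_beta],
                  [-1j*np.exp(-gamma)*cosh_beta, -np.exp(-2*gamma)]])
K_ours = np.array([[1, -1/np.tan(2*theta)],
                   [-1/np.tan(2*theta), -1]])
print(np.allclose(K_ref, K_ours))          # True

h2_ref  = np.exp(-2*gamma)*sinh_beta_sq*(w**2 + alpha**2)/(w*(w - 0.5j))   # (4.25)
h2_ours = -1/np.sin(2*theta)**2 * w/(w - 0.5j)                             # (4.27)
print(np.isclose(h2_ref, h2_ours))         # True
```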
One comment is that, in principle, at the second level nesting we should consider the inhomogeneous spin chain, and the resulting prefactors contain contributions from the inhomogeneities -- the Bethe roots from the first level nesting. At the same time, we should renormalize the second level boundary state similarly to (4.9). This cancels precisely the contributions from the inhomogeneities, so our result is valid. Plugging \(h^{(1)}(u)=1\) and (4.27) into (4.1), we obtain the general overlap formula14
Footnote 14: This overlap formula can also be obtained using the recursion method in [64].
\[\frac{|\langle\mathcal{B}_{R(\theta)}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle|^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle}=(\sin 2\theta)^{2(L-K_{\mathbf{w}})}\prod_{i=1}^{K_{\mathbf{w}}/2}\frac{4w_{i}^{2}}{4w_{i}^{2}+1}\times\frac{\det G^{+}}{\det G^{-}}. \tag{4.28}\]
We have \(L\leq K_{\mathbf{u}}=K_{\mathbf{v}}\leq K_{\mathbf{w}}\). From (4.28), we find that the limit \(\theta\to 0\) is non-vanishing only if \(L=K_{\mathbf{u}}=K_{\mathbf{v}}=K_{\mathbf{w}}\), in which case we obtain
\[\frac{|\langle\mathcal{B}_{R}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle|^{2}}{ \langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{w} \rangle}=\prod_{i=1}^{K_{\mathbf{w}}/2}\frac{w_{i}^{2}}{w_{i}^{2}+1/4}\times \frac{\det G^{+}}{\det G^{-}}\,. \tag{4.29}\]
We have tested (4.29) numerically up to \(L=K_{\mathbf{u}}=K_{\mathbf{v}}=K_{\mathbf{w}}=4\), which is already quite non-trivial.15 Notice that the above result is derived for \(K_{\mathbf{w}}\) being even. Numerical computation shows that the overlap vanishes for odd \(K_{\mathbf{w}}\).
Footnote 15: The Bethe roots used in this numerical test are listed in Appendix B. We exploit the coordinate Bethe ansatz in [47] to construct the Bethe states.
### Exact overlap for the shifted boundary state
Now we move to compute the exact overlap for \(\langle\widehat{\mathcal{B}}_{R}|=\langle\mathcal{B}_{R}|U_{\text{even}}^{\dagger}\), where \(U_{\text{even}}\) is defined in (3.18) and shifts all the even sites to the left by two units. We make the same assumption (4.1) about the exact overlap for \(\langle\widehat{\mathcal{B}}_{R}|\mathbf{u},\mathbf{v},\mathbf{w}\rangle\). To determine the prefactors, we need to compute the overlap with the two-particle state
\[\langle\widehat{\mathcal{B}}_{R}|\{u\},\{-u\}\rangle_{a,b}=\langle\mathcal{B}_ {R}|U_{\text{even}}^{\dagger}|\{u\},\{-u\}\rangle_{a,b} \tag{4.30}\]
From the definition of \(U_{\text{even}}\), it is clear that
\[U_{\text{even}}^{\dagger}|\{u\},\{-u\}\rangle_{a,b} =\sum_{n,m}e^{ip(n-m)}\sum_{c,d}^{2}(-1)^{d-1}\chi_{a,b}^{c,d}(2u) |2n-1,2(m+1)\rangle_{c,d} \tag{4.31}\] \[\quad+\frac{\epsilon_{ab}}{2}\frac{u+i/2}{u-i/2}\sum_{n}\left(|2n -1\rangle_{\bullet}+|2n\rangle_{\bullet}\right)\] \[=\left(\frac{u+i/2}{u-i/2}\right)\sum_{n,m}e^{ip(n-m)}\sum_{c,d}^ {2}(-1)^{d-1}\chi_{a,b}^{c,d}(2u)|2n-1,2m\rangle_{c,d}\] \[\quad+\frac{\epsilon_{ab}}{2}\frac{u+i/2}{u-i/2}\sum_{n}\left(|2n -1\rangle_{\bullet}+|2n\rangle_{\bullet}\right)\,.\]
Namely, after the action of \(U_{\text{even}}^{\dagger}\), the first line is multiplied by a global factor while the second line is left invariant. As we have shown, the second line does not contribute
to the overlap. Therefore, \(\langle\widehat{\mathcal{B}}_{R}|\{u\},\{-u\}\rangle_{a,b}\) is simply proportional to \(\langle\mathcal{B}_{R}|\{u\},\{-u\}\rangle_{a,b}\). The corresponding first level \(K\)-matrix is given by
\[\widehat{K}_{ab}^{(1)}(u)=\lim_{L\to\infty}\frac{1}{A^{L}}\frac{ \left\langle\widehat{\mathcal{B}}_{R(\theta)}|\{u\},\{-u\}\right\rangle_{a,b} }{\sqrt{\langle\{u\},\{-u\}|\{u\},\{-u\}\rangle_{a,b}}}=\left(\frac{u+i/2}{u- i/2}\right)K_{ab}^{(1)}(u)\,. \tag{4.32}\]
Therefore we arrive at the following exact overlap formula for \(|\widehat{\mathcal{B}}_{R}\rangle\)
\[\frac{\langle\widehat{\mathcal{B}}_{R}|\mathbf{u},\mathbf{v}, \mathbf{w}\rangle}{\sqrt{\langle\mathbf{u},\mathbf{v},\mathbf{w}|\mathbf{u}, \mathbf{v},\mathbf{w}\rangle}}=\prod_{j=1}^{K_{\mathbf{u}}}\frac{u_{j}+i/2}{u_ {j}-i/2}\frac{\langle\mathcal{B}_{R}|\mathbf{u},\mathbf{v},\mathbf{w}\rangle} {\sqrt{\langle\mathbf{u},\mathbf{v},\mathbf{w}|\mathbf{u},\mathbf{v},\mathbf{ w}\rangle}}. \tag{4.33}\]
Hence there is a relative phase between these two boundary states. From this we get
\[\frac{|\langle\mathcal{B}_{1/6}^{B,\text{big}}|\mathbf{u},- \mathbf{u},\mathbf{w}\rangle|^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}| \mathbf{u},-\mathbf{u},\mathbf{w}\rangle}=\left|1+\prod_{j=1}^{K_{\mathbf{u}} }\frac{u_{j}+i/2}{u_{j}-i/2}\right|^{2}\frac{|\langle\mathcal{B}_{R}|\mathbf{ u},-\mathbf{u},\mathbf{w}\rangle|^{2}}{\langle\mathbf{u},-\mathbf{u}, \mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle}\,. \tag{4.34}\]
## 5 Overlap of 1/2-BPS Wilson loop
We derive the overlap formula for the 1/2-BPS Wilson loop in this section. The procedure is basically the same as for the 1/6-BPS case. There are two types of 1/2-BPS Wilson loops, (3.43) and (3.44); we will consider them in the following two subsections.
### 1/2-BPS Wilson loop \(|\mathcal{B}_{1/2}^{1+}\rangle\)
As in the 1/6-BPS case, we need to perform an \(SO(4)\) rotation in order to apply the Bajnok-Gombor approach. We take the same rotation as in (4.2) and define
\[U(\theta)=g(\theta)Ug(-\theta),\qquad\langle\mathcal{B}_{U(\theta)}|=\left(U( \theta)^{I}{}_{J}\langle I,J|\right)^{\otimes L},\qquad\langle\widehat{ \mathcal{B}}_{U(\theta)}|=\langle\mathcal{B}_{U(\theta)}|U^{\dagger}_{\text{ even}}\,. \tag{5.1}\]
More explicitly, we have
\[U(\theta)^{I}{}_{J}\langle I,J|=i\cos(2\theta)\left(\langle 1\bar{1}|-\langle 4\bar{4}|\right)-i\left(\langle 2\bar{2}|+\langle 3\bar{3}|\right)+i\sin(2\theta)\left(\langle 1\bar{4}|+\langle 4\bar{1}|\right)\,. \tag{5.2}\]
We start with the first level nesting. We have
\[\langle\mathcal{B}_{U(\theta)}|0\rangle=(i\sin 2\theta)^{L}\,. \tag{5.3}\]
We then compute the overlap \(\langle\mathcal{B}_{U(\theta)}|\{u\},\{-u\}\rangle_{a,b}\) where \(|\{u\},\{-u\}\rangle_{a,b}\) is defined in (4.17). Using
\[\langle\mathcal{B}_{U(\theta)}|2m-1,2n\rangle\rangle_{1,1}= \langle\mathcal{B}_{U(\theta)}|2m-1,2n\rangle\rangle_{2,2}=0\,, \tag{5.4}\] \[\langle\mathcal{B}_{U(\theta)}|2m-1,2n\rangle\rangle_{1,2}=-i(i \sin 2\theta)^{L-1}\,\delta_{m,n}\,,\] \[\langle\mathcal{B}_{U(\theta)}|2m-1,2n\rangle\rangle_{2,1}=i(-i \sin 2\theta)^{L-1}\,\delta_{m,n}\,,\] \[\langle\mathcal{B}_{U(\theta)}|2n-1\rangle\rangle_{\bullet}=-(i \cos 2\theta)(i\sin 2\theta)^{L-1}\,,\] \[\langle\mathcal{B}_{U(\theta)}|2n\rangle\rangle_{\bullet}=(i \cos 2\theta)(i\sin 2\theta)^{L-1}\,,\]
We find that
\[K_{ab}^{(1)}(u)=\lim_{L\to\infty}\frac{1}{L(-i\sin 2\theta)^{L}}\langle\mathcal{B} _{U(\theta)}|\{u\},\{-u\}\rangle_{a,b}=\frac{\epsilon_{ab}}{\sin 2\theta}\,, \tag{100}\]
which is nothing but the dimer state. For the second level nesting, we can directly apply the result of the dimer state [51], which leads to
\[\frac{|\langle\mathcal{B}_{U(\theta)}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle |^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{ w}\rangle}=\frac{(-1)^{L}}{(\sin 2\theta)^{2(K_{\mathbf{w}}-L)}}\prod_{i=1}^{K_{ \mathbf{u}}}\left(u_{i}^{2}+\frac{1}{4}\right)\prod_{j=1}^{[K_{\mathbf{w}}/2]} \frac{1}{w_{i}^{2}(w_{i}^{2}+1/4)}\,\frac{\det G_{+}}{\det G_{-}}\,. \tag{101}\]
Again, we find that in the \(\theta\to 0\) limit, we must have \(K_{\mathbf{u}}=K_{\mathbf{v}}=K_{\mathbf{w}}=L\) and the finite result reads
\[\frac{|\langle\mathcal{B}_{U}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle|^{2}} {\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{w} \rangle}=(-1)^{L}\prod_{i=1}^{K_{\mathbf{u}}}\left(u_{i}^{2}+\frac{1}{4}\right) \prod_{j=1}^{[K_{\mathbf{w}}/2]}\frac{1}{w_{i}^{2}(w_{i}^{2}+1/4)}\,\frac{\det G _{+}}{\det G_{-}}\,. \tag{102}\]
This result has been tested numerically. Our numerical results also reveal that this formula is correct when \(L\) is odd, although the above derivation was performed for even \(L\). This aspect is different from the bosonic \(1/6\)-BPS case in the previous section.
Shifted state.We find that the overlap for the shifted state is again proportional to the unshifted one as
\[\frac{\langle\widehat{\mathcal{B}}_{U}|\mathbf{u},-\mathbf{u},\mathbf{w} \rangle}{\sqrt{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle}}=\prod_{j=1}^{K_{\mathbf{u}}}\left(\frac{u_{j}+i/2}{u_{j}- i/2}\right)^{2}\frac{\langle\mathcal{B}_{U}|\mathbf{u},-\mathbf{u}, \mathbf{w}\rangle}{\sqrt{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u}, -\mathbf{u},\mathbf{w}\rangle}}. \tag{103}\]
Notice that the phase factor is different from the one in the \(1/6\)-BPS case. We have tested this result numerically up to \(L=K_{\mathbf{u}}=K_{\mathbf{v}}=K_{\mathbf{w}}=4\). Naively, one might expect that, by the same argument as in the \(1/6\)-BPS case, one should obtain the same phase factor as in (4.33). However, generalizing this argument to the dimer state seems a bit subtle, as \(\widehat{K}_{11}=0\) in this case.
Then we have
\[\frac{|\langle\mathcal{B}_{1/2}^{1+}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle| ^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{ w}\rangle}=\left|1+\prod_{j=1}^{K_{\mathbf{u}}}\left(\frac{u_{j}+i/2}{u_{j}-i/2} \right)^{2}\right|^{2}\frac{|\langle\mathcal{B}_{U}|\mathbf{u},-\mathbf{u}, \mathbf{w}\rangle|^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u}, -\mathbf{u},\mathbf{w}\rangle}\,. \tag{104}\]
### 1/2-BPS Wilson loop \(|\mathcal{B}_{1/2}^{4-}\rangle\)
We take the same rotation as in (4.2) and define
\[\tilde{U}(\theta)=g(\theta)\tilde{U}g(-\theta),\qquad\langle\mathcal{B}_{\tilde{U}(\theta)}|=\left(\tilde{U}(\theta)^{I}{}_{J}\langle I,J|\right)^{\otimes L},\qquad\langle\widehat{\mathcal{B}}_{\tilde{U}(\theta)}|=\langle\mathcal{B}_{\tilde{U}(\theta)}|U_{\text{even}}^{\dagger}\,. \tag{105}\]
More explicitly, we have
\[\tilde{U}(\theta)^{I}{}_{J}\langle I,J|=i\cos(2\theta)\left(\langle 1\bar{1}|-\langle 4\bar{4}|\right)+i\left(\langle 2\bar{2}|+\langle 3\bar{3}|\right)+i\sin(2\theta)\left(\langle 1\bar{4}|+\langle 4\bar{1}|\right)\,. \tag{106}\]
The rest of the computations are almost identical to those for the state \(\langle\mathcal{B}^{1+}_{1/2}|\). The only difference is that at the first level nesting we have an additional minus sign, which leads to the following relative phase between the two overlaps
\[\langle\mathcal{B}_{\tilde{U}}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle=(-1)^{L }\langle\mathcal{B}_{U}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle \tag{5.12}\]
The norm of the overlap is thus the same as for \(\langle\mathcal{B}^{1+}_{1/2}|\)
\[\frac{|\langle\mathcal{B}_{\tilde{U}}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle| ^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{ w}\rangle}=(-1)^{L}\prod_{i=1}^{K_{\mathbf{u}}}\left(u_{i}^{2}+\frac{1}{4}\right) \prod_{j=1}^{[K_{\mathbf{w}}/2]}\frac{1}{w_{i}^{2}(w_{i}^{2}+1/4)}\,\frac{\det G _{+}}{\det G_{-}}\,. \tag{5.13}\]
Shifted state.The overlap for the shifted state leads to the same phase factor as in the \(\langle\mathcal{B}^{1+}_{1/2}|\) case
\[\frac{\langle\widehat{\mathcal{B}}_{\tilde{U}}|\mathbf{u},-\mathbf{u},\mathbf{ w}\rangle}{\sqrt{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u}, \mathbf{w}\rangle}}=\prod_{j=1}^{K_{\mathbf{u}}}\left(\frac{u_{j}+i/2}{u_{j}-i /2}\right)^{2}\frac{\langle\mathcal{B}_{\tilde{U}}|\mathbf{u},-\mathbf{u}, \mathbf{w}\rangle}{\sqrt{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u}, -\mathbf{u},\mathbf{w}\rangle}}. \tag{5.14}\]
So as in the previous case, we have,
\[\frac{|\langle\mathcal{B}^{4-}_{1/2}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle| ^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{ w}\rangle}=\left|1+\prod_{j=1}^{K_{\mathbf{u}}}\left(\frac{u_{j}+i/2}{u_{j}-i/2} \right)^{2}\right|^{2}\frac{|\langle\mathcal{B}_{\tilde{U}}|\mathbf{u},- \mathbf{u},\mathbf{w}\rangle|^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}| \mathbf{u},-\mathbf{u},\mathbf{w}\rangle}\,. \tag{5.15}\]
Finally for \(|\mathcal{B}_{1/3}\rangle\), we have
\[\frac{|\langle\mathcal{B}_{1/3}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle|^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle} =\left|(n_{1}+(-1)^{L}n_{4})\left(1+\prod_{j=1}^{K_{\mathbf{u}}}\left(\frac{u_{j}+i/2}{u_{j}-i/2}\right)^{2}\right)\right|^{2}\] \[\times\frac{|\langle\mathcal{B}_{U}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle|^{2}}{\langle\mathbf{u},-\mathbf{u},\mathbf{w}|\mathbf{u},-\mathbf{u},\mathbf{w}\rangle}\,. \tag{5.16}\]
## 6 Conclusion
In this paper, we computed the correlation function of a circular BPS Wilson loop and a single-trace operator in ABJM theory. This correlation function is proportional to the overlap of a boundary state from the Wilson loop and the Bethe state corresponding to the single-trace operator. We proved that among a subclass of the fermionic 1/6-BPS Wilson loops, only two special cases, bosonic 1/6-BPS Wilson loops and half-BPS Wilson loops, can lead to tree-level integrable boundary states. The boundary state from the 1/3-BPS Wilson loop is integrable at tree level as well. Our result for the subclass of fermionic 1/6-BPS Wilson loops is in some sense similar to the results on integrability of the open chains from bosonic (non-)supersymmetric Maldacena-Wilson lines in \(\mathcal{N}=4\) SYM [17]. There it was found that only the two special cases, half-BPS Maldacena-Wilson loops and the usual Wilson loops, lead to integrable open chains. We also obtained the exact overlap formulae up to a phase for all the tree-level integrable boundary states corresponding to \(W^{B}_{1/6}\), \(\hat{W}^{B}_{1/6}\), \(W^{B,\text{big}}_{1/6}\), \(W^{1+}_{1/2}\), \(W^{4-}_{1/2}\) and \(W_{1/3}\).
There are various directions that deserve further investigation. One immediate problem is to fix the phase of the overlaps for \(W_{1/6}^{B}\) and \(W_{1/2}^{1+}\). Although as far as the Wilson loop one-point function is concerned the phase is unimportant, as a spin chain problem it is still nice to have a method which also gives the phase factor. It is valuable to generalize our result to other closed sectors or even to the full sector at the two-loop level. If some BPS Wilson loops give integrable boundary states in the full sector, the constraints from the bosonic and fermionic duality [48; 67; 68] of the Bethe ansatz equations may help us to pin down the full sector overlap formulas. The next step is to obtain the all-loop overlap in the asymptotic sense using the integrable bootstrap method, as done for some integrable boundary states in the \(\mathcal{N}=4\) SYM case [22; 23; 69]. An even more ambitious goal would be computing the finite-size corrections using the worldsheet \(g\)-function approach [22; 23].
In this paper, we only consider certain BPS Wilson loop in the fundamental representation of a suitable group or super-group. It is interesting to study whether similar Wilson loops in higher dimensional representations can also lead to integrable boundary states. Generating functions of Wilson loops in various representations [70] and the method of introducing one-dimensional scalars and/or fermions along the contour of the Wilson loop [71; 72; 25; 73] should be very helpful here.
One common feature of the Wilson loops considered here is that the scalar coupling is constant. There are also other BPS Wilson loops whose scalar couplings are \(\tau\)-dependent [74; 75; 73], where \(\tau\) is used to parameterise the Wilson loop contour. The correlator of such Wilson loops and a single-trace operator in the \(SU(4)\) sector will lead to boundary states involving integration of \(\tau\) along the circle. It is interesting to seek integrable boundary states among them.
Comparing with the \(\mathcal{N}=4\) SYM case, the study of correlators of BPS Wilson loops and single-trace operators is much more preliminary and there are far fewer results. One direction complementary to the study here is using localization [37] to compute the correlation functions of BPS Wilson loops and certain BPS local operators and comparing the results at strong coupling in the large \(N\) limit with holographic computations. Some computations in this direction for the \(\mathcal{N}=4\) SYM case can be found in [76; 77; 78; 79; 80; 81]; however, the localization computation in the ABJM case seems more challenging and calls for new developments. Some important progress in this direction was made recently in [82].
## Acknowledgement
We would like to thank Bin Chen for collaboration at early stages of this project, Hong-Fei Shu and Jiaju Zhang for helpful discussions, and Zhi-Xin Hu for help with using the computer cluster at CJQS, TJU. Y.J. would like to thank the Center for Joint Quantum Studies of Tianjin University for hospitality during the final stage of the work. The work of Y.J. is partly supported by Startup Funding no. 3207022217A1 of Southeast University. The work of J.W. and P.Y. is partly supported by the National Natural Science Foundation of China, Grant No. 11975164, 11935009, 12247103, 12047502, and Natural Science Foundation of Tianjin under Grant No. 20JCYBJC00910.
## Appendix A The Lagrangian and supersymmetry transformations of ABJM theory
### Spinor convention.
The circular BPS Wilson loops in ABJM theory can only exist when the theory is put in the Euclidean space \({\bf R}^{3}\)[83]. We follow the spinor convention in [83]. The metric on \({\bf R}^{3}\) is \(\delta_{\mu\nu}={\rm diag}(1,1,1)\), the coordinates are \(x^{\mu}=(x^{1},x^{2},x^{3})\). The \(\gamma\) matrices are
\[\gamma^{\mu}_{\ \alpha}{}^{\beta}=(-\sigma^{2},\sigma^{1},\sigma^{3})\,. \tag{104}\]
They satisfy
\[\gamma^{\mu}\gamma^{\nu}=\delta^{\mu\nu}+i\epsilon^{\mu\nu\rho}\gamma_{\rho}\,, \tag{105}\]
where \(\epsilon^{\mu\nu\rho}\) is the rank-3 antisymmetric tensor with \(\epsilon^{123}=1\). We raise or lower the spinor indices using anti-symmetric tensor \(\epsilon^{\alpha\beta}\) and \(\epsilon_{\alpha\beta}\)
\[\theta^{\alpha}=\epsilon^{\alpha\beta}\theta_{\beta}\,,\,\theta_{\alpha}= \epsilon_{\alpha\beta}\theta^{\beta}\,, \tag{106}\]
with \(\epsilon^{12}=-\epsilon_{12}=1\). We will use the shorthand notation,
\[\theta\psi=\theta^{\alpha}\psi_{\alpha}\,,\,\theta\gamma^{\mu}\psi=\theta^{\alpha}\gamma^{\mu}_{\ \alpha}{}^{\beta}\psi_{\beta}. \tag{107}\]
### Field content.
ABJM theory is the three-dimensional \({\cal N}=6\) super-Chern-Simons theory with gauge group \(U(N)\times U(N)\). The Chern-Simons levels are \(k\) and \(-k\), respectively. Besides the gauge fields \(A_{\mu},\hat{A}_{\mu}\) in the adjoint representation of each \(U(N)\), the matter fields include four complex scalars \(Y^{I}\) and four Dirac spinors \(\psi_{I}\) in the bi-fundamental representation of the gauge group. The \(Y^{I}\)'s are in the \({\bf 4}\) representation of the R-symmetry group \(SU_{R}(4)\) and the \(\psi_{I}\)'s are in the \(\bar{\bf 4}\) representation.
Lagrangian.The Lagrangian of ABJM theory can be written as the sum of four parts,
\[{\cal L}_{\rm ABJM}={\cal L}_{\rm CS}+{\cal L}_{k}+{\cal L}_{p}+{\cal L}_{Y}\,, \tag{108}\]
with
\[{\cal L}_{\rm CS} = -\frac{k}{4\pi}\epsilon^{\mu\nu\rho}{\rm Tr}\left(A_{\mu}\partial_{\nu}A_{\rho}+\frac{2i}{3}A_{\mu}A_{\nu}A_{\rho}-\hat{A}_{\mu}\partial_{\nu}\hat{A}_{\rho}-\frac{2i}{3}\hat{A}_{\mu}\hat{A}_{\nu}\hat{A}_{\rho}\right)\,,\] \[{\cal L}_{k} = {\rm Tr}\left(-D_{\mu}Y^{\dagger}_{I}D^{\mu}Y^{I}+i\psi^{\dagger I}\gamma^{\mu}D_{\mu}\psi_{I}\right)\,,\] \[{\cal L}_{p} = \frac{4\pi^{2}}{3k^{2}}{\rm Tr}\left(Y^{I}Y^{\dagger}_{I}Y^{J}Y^{\dagger}_{J}Y^{K}Y^{\dagger}_{K}+Y^{I}Y^{\dagger}_{J}Y^{J}Y^{\dagger}_{K}Y^{K}Y^{\dagger}_{I}+4Y^{I}Y^{\dagger}_{J}Y^{K}Y^{\dagger}_{I}Y^{J}Y^{\dagger}_{K}-6Y^{I}Y^{\dagger}_{J}Y^{J}Y^{\dagger}_{I}Y^{K}Y^{\dagger}_{K}\right)\,,\] \[{\cal L}_{Y} = -\frac{2\pi i}{k}{\rm Tr}\left(Y^{I}Y^{\dagger}_{I}\psi_{J}\psi^{\dagger J}-2Y^{I}Y^{\dagger}_{J}\psi_{I}\psi^{\dagger J}-Y^{\dagger}_{I}Y^{I}\psi^{\dagger J}\psi_{J}+2Y^{\dagger}_{I}Y^{J}\psi^{\dagger I}\psi_{J}+\epsilon_{IJKL}Y^{I}\psi^{\dagger J}Y^{K}\psi^{\dagger L}-\epsilon^{IJKL}Y^{\dagger}_{I}\psi_{J}Y^{\dagger}_{K}\psi_{L}\right)\,. \tag{109}\]
Here the covariant derivatives are defined as
\[D_{\mu}Y^{I} = \partial_{\mu}Y^{I}+iA_{\mu}Y^{I}-iY^{I}\hat{A}_{\mu}\,,\] \[D_{\mu}Y^{\dagger}_{I} = \partial_{\mu}Y^{\dagger}_{I}+i\hat{A}_{\mu}Y^{\dagger}_{I}-iY^{ \dagger}_{I}A_{\mu}\,,\] \[D_{\mu}\psi_{I} = \partial_{\mu}\psi_{I}+iA_{\mu}\psi_{I}-i\psi_{I}\hat{A}_{\mu}\,. \tag{110}\]
and \(\epsilon_{IJKL},\epsilon^{IJKL}\) are totally anti-symmetric tensors with \(\epsilon_{1234}=\epsilon^{1234}=1\).
Supersymmetry transformations.The ABJM action is invariant under the following supersymmetry transformations [84; 85; 86; 87]:
\[\delta A_{\mu} = \frac{2\pi}{k}(Y^{I}\psi^{\dagger J}\gamma_{\mu}\varepsilon_{IJ}+\bar{\varepsilon}^{IJ}\gamma_{\mu}\psi_{J}Y^{\dagger}_{I})\,, \tag{114}\] \[\delta\hat{A}_{\mu} = \frac{2\pi}{k}(\psi^{\dagger J}Y^{I}\gamma_{\mu}\varepsilon_{IJ}+\bar{\varepsilon}^{IJ}Y^{\dagger}_{I}\gamma_{\mu}\psi_{J})\,, \tag{115}\] \[\delta Y^{I} = i\bar{\varepsilon}^{IJ}\psi_{J}\,,\ \delta Y^{\dagger}_{I}=i\psi^{\dagger J}\varepsilon_{IJ}\,, \tag{116}\] \[\delta\psi_{I} = \gamma^{\mu}\varepsilon_{IJ}D_{\mu}Y^{J}+\vartheta_{IJ}Y^{J}-\frac{2\pi}{k}\varepsilon_{IJ}(Y^{J}Y^{\dagger}_{K}Y^{K}-Y^{K}Y^{\dagger}_{K}Y^{J})-\frac{4\pi}{k}\varepsilon_{KL}Y^{K}Y^{\dagger}_{I}Y^{L}\,, \tag{117}\] \[\delta\psi^{\dagger I} = -\bar{\varepsilon}^{IJ}\gamma^{\mu}D_{\mu}Y^{\dagger}_{J}+\bar{\vartheta}^{IJ}Y^{\dagger}_{J}+\frac{2\pi}{k}\bar{\varepsilon}^{IJ}(Y^{\dagger}_{J}Y^{K}Y^{\dagger}_{K}-Y^{\dagger}_{K}Y^{K}Y^{\dagger}_{J})+\frac{4\pi}{k}\bar{\varepsilon}^{KL}Y^{\dagger}_{K}Y^{I}Y^{\dagger}_{L}\,. \tag{118}\]
The supersymmetry parameters are \(\varepsilon_{IJ}=\theta_{IJ}+x^{\mu}\gamma_{\mu}\vartheta_{IJ}\) and \(\bar{\varepsilon}^{IJ}=\bar{\theta}^{IJ}-\bar{\vartheta}^{IJ}x^{\mu}\gamma_{\mu}\). Here the \(\theta\)'s give the Poincaré supersymmetry, and the \(\vartheta\)'s give the conformal supersymmetry. They satisfy the following constraints,
\[\theta_{IJ} = -\theta_{JI}\,,\,\bar{\theta}^{IJ}=\frac{1}{2}\epsilon^{IJKL} \theta_{KL}\,, \tag{119}\] \[\vartheta_{IJ} = -\vartheta_{JI}\,,\bar{\vartheta}^{IJ}=\frac{1}{2}\epsilon^{IJKL} \vartheta_{KL}\,. \tag{120}\]
Notice that for the theory in the Euclidean signature we do not impose that \(\bar{\theta}^{IJ}\) (\(\bar{\vartheta}^{IJ}\)) is the complex conjugate of \(\theta_{IJ}\) (\(\vartheta_{IJ}\)) [88].
Propagators of the scalar fields.The tree-level propagators of the scalar fields are,
\[\langle Y^{I\alpha}{}_{\bar{\beta}}(x)Y^{\dagger}_{J}{}^{\bar{\gamma}}{}_{ \rho}(y)\rangle=\frac{\delta^{I}_{J}\delta^{\alpha}_{\rho}\delta^{\bar{\gamma} }_{\bar{\beta}}}{4\pi|x-y|}\,, \tag{121}\]
where \(\alpha,\bar{\beta},\bar{\gamma},\rho\) are color indices.
## Appendix B Numerical solutions of the Bethe equations
In this appendix, we present a collection of numerical solutions for the Bethe equations in the \(SU(4)\) sector of the ABJM theory
\[1=e^{i\phi_{u_{j}}}=\left(\frac{u_{j}+\frac{i}{2}}{u_{j}-\frac{i}{2}}\right)^{L}\prod_{\begin{subarray}{c}k=1\\ k\neq j\end{subarray}}^{K_{\rm u}}S\left(u_{j},u_{k}\right)\prod_{k=1}^{K_{\rm w}}\tilde{S}\left(u_{j},w_{k}\right)\,, \tag{122}\] \[1=e^{i\phi_{w_{j}}}=\prod_{\begin{subarray}{c}k=1\\ k\neq j\end{subarray}}^{K_{\rm w}}S\left(w_{j},w_{k}\right)\prod_{k=1}^{K_{\rm u}}\tilde{S}\left(w_{j},u_{k}\right)\prod_{k=1}^{K_{\rm v}}\tilde{S}\left(w_{j},v_{k}\right)\,, \tag{123}\] \[1=e^{i\phi_{v_{j}}}=\left(\frac{v_{j}+\frac{i}{2}}{v_{j}-\frac{i}{2}}\right)^{L}\prod_{\begin{subarray}{c}k=1\\ k\neq j\end{subarray}}^{K_{\rm v}}S\left(v_{j},v_{k}\right)\prod_{k=1}^{K_{\rm w}}\tilde{S}\left(v_{j},w_{k}\right)\,. \tag{124}\]
where the S-matrices \(S(u,v)\) and \(\tilde{S}(u,v)\) are given by
\[S(u,v)\equiv\frac{u-v-i}{u-v+i},\quad\tilde{S}(u,v)\equiv\frac{u-v+\frac{i}{2}}{u -v-\frac{i}{2}}\,. \tag{100}\]
Here the numbers of rapidities \(\mathbf{u},\mathbf{v},\mathbf{w}\) are denoted by \(K_{u},K_{v},K_{w}\).
The cyclicity property of the single trace operator is equivalent to the zero momentum condition
\[1=\prod_{j=1}^{K_{\mathbf{u}}}\frac{u_{j}+\frac{i}{2}}{u_{j}-\frac{i}{2}}\prod _{j=1}^{K_{\mathbf{v}}}\frac{v_{j}+\frac{i}{2}}{v_{j}-\frac{i}{2}}\,. \tag{101}\]
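As a concrete illustration, the short script below is our own check (not part of the paper); the helper functions mirror the S-matrices, the Bethe equations and the zero momentum condition above, and the script verifies the \(L=2\) solution listed in the table that follows.

```python
import numpy as np

S  = lambda u, v: (u - v - 1j)/(u - v + 1j)       # S(u,v)
St = lambda u, v: (u - v + 0.5j)/(u - v - 0.5j)   # S~(u,v)

def max_residual(L, us, vs, ws):
    """Largest |e^{i phi} - 1| over the Bethe equations for u-, v- and w-roots."""
    res = []
    for j, u in enumerate(us):
        e = ((u + 0.5j)/(u - 0.5j))**L
        e *= np.prod([S(u, uk) for k, uk in enumerate(us) if k != j])
        e *= np.prod([St(u, w) for w in ws])
        res.append(abs(e - 1))
    for j, v in enumerate(vs):
        e = ((v + 0.5j)/(v - 0.5j))**L
        e *= np.prod([S(v, vk) for k, vk in enumerate(vs) if k != j])
        e *= np.prod([St(v, w) for w in ws])
        res.append(abs(e - 1))
    for j, w in enumerate(ws):
        e  = np.prod([S(w, wk) for k, wk in enumerate(ws) if k != j])
        e *= np.prod([St(w, u) for u in us])
        e *= np.prod([St(w, v) for v in vs])
        res.append(abs(e - 1))
    return max(res)

def total_momentum(us, vs):
    return np.prod([(u + 0.5j)/(u - 0.5j) for u in us]) * \
           np.prod([(v + 0.5j)/(v - 0.5j) for v in vs])

# the L = 2 solution from the table below
a, b = np.sqrt(3/20), 1/np.sqrt(5)
us, vs, ws = [a, -a], [-a, a], [b, -b]
print(max_residual(2, us, vs, ws))    # ~ 1e-16
print(total_momentum(us, vs))         # ~ 1 + 0j
```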
In the following, we present a collection of solutions that fulfill both the Bethe ansatz equations and the zero momentum condition. The rational Q-system [89; 90] plays an important role here.
\begin{tabular}{|c|l|l|} \hline \hline \(L\) & \(K_{\bf w}\) & \([{\bf u},{\bf v},{\bf w}]\) \\ \hline
1 & 1 & \([\{0\},\{0\},\{0\}]\) \\ \hline
2 & 2 & \([\{\sqrt{\frac{3}{20}},-\sqrt{\frac{3}{20}}\},\{-\sqrt{\frac{3}{20}},\sqrt{ \frac{3}{20}}\},\{\frac{1}{\sqrt{5}},-\frac{1}{\sqrt{5}}\}]\) \\ \hline
3 & 3 & \([\{-0.61842989257770833,\,0\,,0.61842989257770833\},\) \\ & & \(\{0.61842989257770833,\,0\,,-0.61842989257770833\},\) \\ & & \(\{0.71410132990930250,-0.71410132990930250,0\}]\) \\ \hline & & \([\{0.36628143864284446,\,-0.18314071932142223-0.5006211833519472i\},\) \\ & & \(-0.18314071932142223+0.5006211833519472i\},\) \\ & & \(\{-0.36628143864284446,\,0.183140719321422223+0.5006211833519472i\},\) \\ & & \(0.183140719321422223-0.5006211833519472i\},\) \\ & & \(\{-0.4472135954999579i\,,\,0\,,\,0.4472135954999579i\}]\) \\ \hline & & \([\{-0.9393910431943004i,\,0\,,\,-0.9393910431943004i\},\) \\ & & \(\{-1.0847153433251056i\,,\,0\,,\,1.0847153433251056i\}]\) \\ \hline
4 & 4 & \([\{-0.30330564928014186\,-\,0.4984162134634997i,\,-0.30330564928014186\,+\) \\ & & \(0.4984162134634997i\,,\,0.02628462005210284\,,\,0.5803266785081809\},\) \\ & & \(\{\,0.30330564928014186\,+\,0.4984162134634997i,\,0.30330564928014186\,-\) \\ & & \(0.4984162134634997i\,,\,-0.02628462005210284\,,\,-0.5803266785081809\},\) \\ & & \(\{\,-0.5162715680301216\,,\,-0.5001222335995396i\,,\,0.5162715680301216\) \\ & & \(0.5001222335995396i\}]\) \\ \hline & & \([\{-0.16030976462353738\,-\,0.9768494810075854i,\,-0.16030976462353738\,+\) \\ & & \(0.9768494810075854i\,,\,-0.14056546652006302\,,\,0.46118499576713784\},\) \\ & & \(\{\,0.16030976462353738\,+\)\(\,0.9768494810075854i,\,0.16030976462353738\,-\) \\ & & \(0.9768494810075854i\,,\,0.14056546652006302\,,\,-0.46118499576713784\},\) \\ & & \(\{\,0.2416681458566768\,,\,-1.0684026594894636i\,,\,0.24166814585667684\) \\ & & \(1.0684026594894636i\}]\) \\ \hline & & \([\{-0.7905846950242429,\,-0.18184585032628545\,,\) \\ & & \(0.18184585032628545\,,\) \\ & & \(\{\,0.7905846950242429,\,0.18184585032628545\,,\) \\ & & \(-0.18184585032628545\,,\) \\ & & \(-0.18184585032628545\,,\) \\ & & \(\{\,-0.9018353804885377\,,\,-0.25327661600652046\,,\) \\ & & \(0.9018353804885376\,,\,0.25327661600652046\}]\) \\ \hline & & \([\{\,-0.4913865158293109\,\,-\,\,0.5254890261600584i,\,-0.4913865158293109\,+\) \\ & & \(0.5254890261600584i\,,\,0.49138651582931087-0.5254890261600584i\) \\ & & \(0.49138651582931087+0.5254890261600584i\},\) \\ & & \(0.4913865158293109\,\,+\)\(\,0.5254890261600584i\) \\ & & \(0.5254890261600584i\,,\,-0.49138651582931087+0.5254890261600584i\) \\ & & \(-0.49138651582931087-0.5254890261600584i\) \\ & & \(\{\,0.53872049128905\,-\,0.5800492329412736i\,,\,-0.538720491289058\,+\) \\ & & \(0.5800492329412736i\,,\,0.538720491289058\,+\) \\ & & \(0.538720491289058-0.5800492329412736i\}]\) \\ \hline \end{tabular}
Notice that all of the above sets of Bethe roots satisfy the selection rules \(K_{\mathbf{u}}=K_{\mathbf{v}}=K_{\mathbf{w}}=L\) and (3.32).
Moreover, we have also found a set of Bethe roots with \(L=3,K_{\mathbf{u}}=K_{\mathbf{w}}=1,K_{\mathbf{v}}=2\) that satisfies the zero momentum condition and the Bethe equations; however, it does not satisfy the selection rules \(K_{\mathbf{u}}=K_{\mathbf{v}}=K_{\mathbf{w}}=L\) and (3.32). This set of Bethe roots is
\[u_{1}=0.866025\qquad w_{1}=0.866025,\] (B.6) \[v_{1}=-0.198072\qquad v_{2}=0.631084\,.\]
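As a quick sanity check on the tabulated roots (an illustrative sketch added here for convenience; the solutions themselves were obtained with the rational Q-system, and the helper name below is ours), the following short Python snippet evaluates the residuals of the three Bethe equations and of the zero-momentum condition displayed above, here for the \(L=2\) entry of the table. Any other entry can be checked in the same way.

```python
import numpy as np

# S-matrices of the SU(4) sector Bethe equations above
S  = lambda x, y: (x - y - 1j) / (x - y + 1j)
St = lambda x, y: (x - y + 0.5j) / (x - y - 0.5j)

def bethe_residual(L, u, v, w):
    """Largest deviation from 1 of the Bethe equations and of the zero-momentum condition."""
    u, v, w = (np.asarray(x, dtype=complex) for x in (u, v, w))
    errs = []
    for j in range(len(u)):        # u-type equation
        lhs = ((u[j] + 0.5j) / (u[j] - 0.5j)) ** L
        lhs *= np.prod([S(u[j], u[k]) for k in range(len(u)) if k != j])
        lhs *= np.prod([St(u[j], wk) for wk in w])
        errs.append(abs(lhs - 1))
    for j in range(len(w)):        # w-type (auxiliary) equation
        lhs = np.prod([S(w[j], w[k]) for k in range(len(w)) if k != j])
        lhs *= np.prod([St(w[j], uk) for uk in u])
        lhs *= np.prod([St(w[j], vk) for vk in v])
        errs.append(abs(lhs - 1))
    for j in range(len(v)):        # v-type equation
        lhs = ((v[j] + 0.5j) / (v[j] - 0.5j)) ** L
        lhs *= np.prod([S(v[j], v[k]) for k in range(len(v)) if k != j])
        lhs *= np.prod([St(v[j], wk) for wk in w])
        errs.append(abs(lhs - 1))
    mom = np.prod((u + 0.5j) / (u - 0.5j)) * np.prod((v + 0.5j) / (v - 0.5j))
    errs.append(abs(mom - 1))
    return max(errs)

a, b = np.sqrt(3 / 20), 1 / np.sqrt(5)                 # the L = 2 roots of the table
print(bethe_residual(2, [a, -a], [-a, a], [b, -b]))    # close to machine precision for a genuine solution
```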
|
2305.07434 | A branch cut approach to the probability density and distribution
functions of a linear combination of central and non-central Chi-square
random variables | The paper considers the distribution of a general linear combination of
central and non-central chi-square random variables by exploring the branch cut
regions that appear in the standard Laplace inversion process. Due to the
original interest from the directional statistics, the focus of this paper is
on the density function of such distributions and not on their cumulative
distribution function. In fact, our results confirm that the latter is a
special case of the former. Our approach provides new insight by generating
alternative characterizations of the probability density function in terms of a
finite number of feasible univariate integrals. In particular, the central
cases seem to allow an interesting representation in terms of the branch cuts,
while general degrees of freedom and non-centrality can be easily adopted using
recursive differentiation. Numerical results confirm that the proposed approach
works well while more transparency and therefore easier control in the accuracy
is ensured. | Alfred Kume, Tomonari Sei, Andrew T. A. Wood | 2023-05-12T12:58:41Z | http://arxiv.org/abs/2305.07434v1 | A branch cut approach to the probability density and distribution functions of a linear combination of central and non-central Chi-square random variables
###### Abstract
The paper considers the distribution of a general linear combination of central and non-central chi-square random variables by exploring the branch cut regions that appear in the standard Laplace inversion process. Due to the original interest from the directional statistics, the focus of this paper is on the density function of such distributions and not on their cumulative distribution function. In fact, our results confirm that the latter is a special case of the former. Our approach provides new insight by generating alternative characterizations of the probability density function in terms of a finite number of feasible univariate integrals. In particular, the central cases seem to allow an interesting representation in terms of the branch cuts, while general degrees of freedom and non-centrality can be easily adopted using recursive differentiation. Numerical results confirm that the proposed approach works well while more transparency and therefore easier control in the accuracy is ensured.
keywords: Linear combination of Chi-squares, Bingham distributions, Fisher-Bingham distributions, directional statistics, holonomic functions.
## 1 Introduction
Let us denote by \(X\) a quadratic form of normally distributed components:
\[X=\sum_{i=1}^{p}Y_{i}^{\top}Y_{i}=\sum_{i=1}^{p}\lambda_{i}\chi_{n_{i}}^{2}( \delta_{i})\quad\lambda_{i}=\frac{1}{2\theta_{i}}\quad\delta_{i}=\frac{\gamma _{i}^{2}}{2\theta_{i}} \tag{1}\]
where \(Y_{i}\sim N_{n_{i}}(\frac{\gamma_{i}}{2\theta_{i}\sqrt{n_{i}}}\mathbf{1},\frac{ 1}{2\theta_{i}}\mathbf{I}_{n_{i}})\) are \(p\) independent multivariate normal rv's of dimension \(n_{i}\), each with covariance matrix the multiple of identity \(\frac{1}{2\theta_{i}}\mathbf{I}_{n_{i}}\) and mean some vector of norm \(\frac{\gamma_{i}}{2\theta_{i}}\).
Evaluating the probability density function (pdf) and the cumulative distribution function (cdf) of such quadratic forms has important applications in statistical theory. Many statistical tests such as the
goodness of fit test for non-parametric regression (e.g. Kuonen (2005)), pseudo-likelihood ratio tests (e.g. Liang and Self (1996)), directional statistics models using the Fisher-Bingham distributions (see Mardia and Jupp (2000); Kume and Wood (2005)) or the shape distributions based on the complex Bingham family (see Kent (1994a)), rely heavily on data modelled by these types of random variables. Since, in the most general cases, there is no closed-form expression for the corresponding distributions, numerical evaluation is used in many situations. Representation (1) implicitly assumes that \(\theta_{i}\), as well as \(\lambda_{i}\), are positive. The more general cases where some of the \(\theta_{i}\) are negative can be seen as an extension of (1) since, by collecting the terms according to their signs, the quadratic form can be considered as a difference of two such positive combinations. We will consider these cases separately.
Standard inverting arguments of the corresponding pdf's via the moment generating function or Laplace transform confirm that the density function of \(X\) is
\[f_{\boldsymbol{\theta},\boldsymbol{\gamma}^{2},\boldsymbol{n}}(s) = \prod_{i=1}^{p}\theta_{i}^{\frac{n_{i}}{2}}\exp(-\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4\theta_{i}})\frac{1}{2\pi\mathbf{i}}\int\limits_{\mathbf{i}\mathbb{R}+t_{0}}\frac{\exp(\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4(\theta_{i}+t)})}{\prod_{i=1}^{p}(\theta_{i}+t)^{\frac{n_{i}}{2}}}e^{st}dt,\quad s>0 \tag{2}\]
where \(t_{0}\) can be any real number such that the vertical line \(\mathbf{i}\mathbb{R}+t_{0}\) in the complex plane is on the right of the poles \(-\theta_{i}\) i.e. \(-\min(\boldsymbol{\theta})<t_{0}\). Note that throughout the paper we refer to \(\boldsymbol{\theta},\boldsymbol{\gamma}^{2},\boldsymbol{n}\) as the vectorised entries of \(\theta_{i}\), \(\gamma_{i}\) and \(n_{i}\) respectively where \(\theta_{i}\) are forced to be distinct; and we apply the usual convention for complex integration so that for the closed contours we integrate along the anticlockwise direction and for the unbounded lines such as \(\mathbf{i}\mathbb{R}+t_{0}\) we integrate from \(t_{0}-\mathbf{i}\infty\) to \(t_{0}+\mathbf{i}\infty\). The current methods addressing this inversion problem could be categorised into three groups:
_Standard numerical inversion_: Attempts to numerically evaluate these functions go back as far as Imhof (1961), where a general numerical integration procedure along a vertical line similar to that in (2) is shown. The alternative approach introduced in Davies (1973) and Davies (1980) addresses the evaluation of the cumulative distribution function with numerical performance comparable to that of Imhof (1961). A moment-matching procedure to some gamma distribution is reported in Liu et al. (2009). However, as indicated by Duchesne and De Micheaux (2010), this approach does not always guarantee good performance. All of the above mentioned methods are generally focused on the cdf and not on the pdf \(f_{\boldsymbol{\theta},\boldsymbol{\gamma}^{2},\boldsymbol{n}}\).
_Saddle point approximation_ (SPA): This method relies on extensions of the Laplace approximation for integrals (see e.g. Daniels (1954); Barndorff-Nielsen (1990)). The manuscript of Butler (2007) is an excellent review of this popular method. Its implementation in our context is straightforward, with little computational cost. This approach is shown in Kuonen (1999) to perform well, with surprising accuracy, in the non-parametric regression examples reported there. The same conclusion is reported by Kume and Wood (2005) in the context of the density function evaluation, where the authors implemented it for directional statistics inference involving the evaluation of \(f_{\boldsymbol{\theta},\boldsymbol{\gamma}^{2},\boldsymbol{n}}\) at the point \(s=1\).
_Holonomic gradient methods_: The evaluation of \(f_{\boldsymbol{\theta},\boldsymbol{\gamma}^{2},\boldsymbol{n}}\) is in fact characterized as a solution of a particular ODE (c.f. Nakayama et al. (2011)) where, in principle, only an accurate starting point and a good ODE solver are required. Such an ODE follows from the fact that \(f_{\boldsymbol{\theta},\boldsymbol{\gamma}^{2},\boldsymbol{n}}\) is shown to be a holonomic function for which the Pfaffian matrix equations lead to the relevant ODE. The general adjustments for allowing multiplicities in \(\boldsymbol{\theta}\), so that the corresponding ODE does not become degenerate, are studied in Sei and Kume (2013); Kume and Sei (2018) for central and non central cases
respectively. In fact, Koyama and Takemura (2016) focus on the problem of using the HGM for the cdf. This paper belongs to the first group of these approaches. We focus on exploring the specific structure of the Laplace transform so that a more flexible approach, both numerically and algebraically, is offered. More specifically, the main contributions of this paper are:
* Theorem 1 expresses the integrating contour in a much simpler form than previously seen in the literature, leading to numerically simpler inversion procedures. This is achieved by exploring the simple branch cut structure due to the square-root functions. Additionally, we establish a close connection between the cdf of \(X\) as in (1) and the pdf of some other positive linear combination of non-central Chi-squares \(\tilde{X}\) such that \[P(X\leq x)=f_{\tilde{X}}(x)\frac{e^{x\theta_{0}}}{\theta_{0}}\prod_{i=1}^{p}\left(\frac{\theta_{i}}{\theta_{i}+\theta_{0}}\right)^{\frac{n_{i}}{2}}\exp(-\sum_{i=1}^{p}\frac{\gamma_{i}^{2}\theta_{0}}{4\theta_{i}(\theta_{i}+\theta_{0})})\quad\forall\theta_{0}>0\] (3) where \(f_{\tilde{X}}\) is the pdf of \(\tilde{X}=\frac{1}{2\theta_{0}}\chi_{2}^{2}(0)+\sum_{i=1}^{p}\frac{1}{2(\theta_{i}+\theta_{0})}\chi_{n_{i}}^{2}(\frac{\gamma_{i}^{2}}{2(\theta_{i}+\theta_{0})})\) for any arbitrary \(\theta_{0}\). An immediate consequence of this fact is that any general method for evaluating the pdf of any positive linear combination of non-central Chi-square random variables suffices for the distribution functions too. For example, the saddlepoint approximation of pdfs as in Kume and Wood (2005) can also be used for the tail probabilities as an alternative to that of Kuonen (1999); Barndorff-Nielsen (1990). Additionally, the ODEs of Koyama and Takemura (2016) for the cdf could also be seen as a special case of those developed in Sei and Kume (2013); Kume and Sei (2018) for the pdf. To our knowledge, property (3) has not been reported in the literature before.
* Theorem 2 makes the corresponding simplifications for the practically important central cases, \(\boldsymbol{\gamma}=0\), where the contour integrals are now reduced to a finite linear combination of bounded elementary univariate integrals which are numerically manageable, and therefore the normalising constants of the Bingham distributions can be easily derived.
* Theorem 3 extends the above mentioned results to the difference of two positive linear combinations, including the corresponding result as in (3). In particular, for the case of chi-square distributions with 2 degrees of freedom, which correspond to rescaled exponential distributions, a closed form expression is immediately obtained using residues of simple poles.
* The relative simplicity and efficiency of our inversion is confirmed through numerical examples. In particular, we show that we can evaluate not just individual tail probabilities, as shown before, but the whole function, owing to the feasibility of the numerical integration. We have successfully implemented the standard numerical integration routines within R where, if required, multiple precision integration is available by calling the GMP library from the C programming environment.
The Laplace inversion (2) provides additional identities for these distributions. More specifically, using the flexibility in the choice of such \(t_{0}\) and applying the change of variable \(u=s(t+c)\), for any real value of \(c\), yields the rescale and shift property
\[f_{\mathbf{\theta},\mathbf{\gamma}^{2},\mathbf{n}}(s)=\frac{e^{-sc}}{s}f_{s\mathbf{\theta}-sc,s \mathbf{\gamma}^{2}}(1)\prod_{i=1}^{p}\frac{\theta_{i}^{\frac{n_{i}}{2}}}{(\theta_{i }-c)^{\frac{n_{i}}{2}}}{\exp(\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4(\theta_{i}-c )}-\frac{\gamma_{i}^{2}}{4\theta_{i}})}. \tag{4}\]
In fact, \(f_{s\mathbf{\theta}-sc,s\mathbf{\gamma}^{2}}(1)\) is also related to that of evaluating the normalising constant of the Fisher-Bingham distributions on \(\sum_{i=1}^{p}n_{i}-1\) dimensional spheres (c.f. Kume and Wood (2005)):
\[\mathcal{C}(\mathbf{\theta},\mathbf{\gamma},\mathbf{n})=\int\limits_{\mathcal{S}^{\sum_{i=1}^{p}n_{i}-1}}\exp(\sum_{i=1}^{p}-\theta_{i}x_{i}^{2}+\gamma_{i}x_{i})d_{\mathcal{S}^{\sum_{i=1}^{p}n_{i}-1}}(\mathbf{x})=2\pi^{\frac{\sum_{i=1}^{p}n_{i}}{2}}\frac{\exp(\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4\theta_{i}})}{\prod_{i=1}^{p}\theta_{i}^{\frac{n_{i}}{2}}}f_{\mathbf{\theta},\mathbf{\gamma}^{2},\mathbf{n}}(1) \tag{5}\]
where \(d_{\mathcal{S}\sum_{i=1}^{p}n_{i}-1}(\mathbf{x})\) is the uniform measure and if all \(\gamma_{i}=0\), \(\mathcal{C}(\mathbf{\theta},\mathbf{0},\mathbf{n})=\mathcal{C}(\mathbf{\theta},\mathbf{n})\) reduces to that of the Bingham distribution. The relationship
\[f_{\mathbf{\theta},\mathbf{\gamma}^{2},\mathbf{n}}(s)=\frac{\prod_{i=1}^{p}\theta_{i}^{ \frac{n_{i}}{2}}\exp(\sum_{i=1}^{p}-\frac{\gamma_{i}^{2}}{4\theta_{i}})}{2\pi ^{\frac{\sum_{i=1}^{p}n_{i}}{2}}}s^{\frac{\sum_{i=1}^{p}n_{i}}{2}-1}\mathcal{C }(s\mathbf{\theta},\sqrt{s}\mathbf{\gamma},\mathbf{n}) \tag{6}\]
implies that the methods developed for evaluating the normalizing constants of directional distributions apply immediately to the density function of some linear combination of chi-squares. In fact, the shift property (4) implies that the \(\theta_{i}\) in (5) can be shifted to negative values, as is the case for alternative parametrisations of \(\mathcal{C}(\mathbf{\theta},\mathbf{\gamma},\mathbf{n})\) where \(\theta_{i}\) can be either positive or negative. An additional property which derives from (2) is the connection between multiplicities (or degrees of freedom) \(n_{i}\) and differentiation:
\[f_{\mathbf{\theta},\mathbf{\gamma}^{2},\mathbf{n}}(s)\prod_{i=1}^{p}\theta_{i}^{-\frac{n_ {i}}{2}}e^{-\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4\theta_{i}}}=\mathcal{D} \mathcal{D}^{\prime}\left(f_{\mathbf{\theta}^{0},\mathbf{\gamma}^{2},\mathbf{n}^{0}}(s) \prod_{i=1}^{p}\theta_{i}^{-\frac{n_{i}^{0}}{2}}\exp(-\sum_{i=1}^{p}\frac{ \gamma_{i}^{2}}{4\theta_{i}})\right) \tag{7}\]
where
\[\mathcal{D}^{\prime}=\prod_{i=1}^{p_{1}}\frac{(-1)^{r_{i}}}{n_{i}^{0}(n_{i}^{ 0}+1)\cdots(n_{i}^{0}+r_{i}-1)}\frac{\partial^{r_{i}}}{\partial^{r_{i}}( \theta_{i}^{\prime})}\qquad\mathcal{D}=\prod_{i=1}^{p-p_{1}}\frac{\partial^{ r_{i}}}{\partial^{r_{i}}(\gamma_{i}^{2}/4)}\]
and \(f_{\mathbf{\theta}^{0},\mathbf{\gamma}^{2},\mathbf{n}^{0}}(s)\) representing the pdf with each \(\theta_{i}\) having multiplicity \(n_{i}^{0}\in\{1,2\}\), \(n_{i}=2r_{i}+n_{i}^{0}\) and the first \(p_{1}\) terms are collected to represent the central cases i.e. \(\gamma_{i}=0\).
These differentiation rules generalize similar results of Kume and Wood (2007) for the normalizing constants of Bingham distributions and those of Imhof (1961) for the pdf.
While there have been ongoing attempts in the literature to address this problem numerically (see the recent work in Chen and Tanaka (2021) and the references therein), they are mostly focused on the case of positive \(\theta_{i}\). In fact, the properties reported here imply alternative computation methods which are easily implemented in terms of some elementary integrals, and which also hold even if some \(\theta_{i}\leq 0\) in (2). Therefore, if extreme accuracy is required, standard multiple precision integration can be easily applied to the resulting elementary functions.
The paper is organized as follows. In Section 2 we explore the branch cuts of the integrand function of (2) and then adjust the contour accordingly. This gives rise to the connection of the pdf expressions with those for the cdf stated in (3). In the following section we report the special adaptations for the relevant quantities for practical implementation. We then extend in Section 4 the results to that of a difference between two quadratic forms of type (1). We report some numerical illustrations in Section 5 and conclude with some general remarks.
## 2 Exploring the branch cut structure
We focus initially on the complex valued function
\[h(t)=\frac{1}{\sqrt{(\theta_{1}+t)(\theta_{2}+t)\cdots(\theta_{p}+t)}}=\prod_{i=1}^{p}r_{i}^{-1}e^{-\mathbf{i}\alpha}\quad\alpha=\sum_{i=1}^{p}\alpha_{i} \tag{8}\]
which corresponds to the central case, \(\mathbf{\gamma}=\mathbf{0}\). For each term above we use the parametrization
\[\sqrt{\theta_{i}+t}=r_{i}e^{\mathbf{i}\alpha_{i}}\quad\mbox{where}\quad\alpha_{i}\in(-\pi,+\pi).\]
It is important to note that these terms are individually not analytic where \(\alpha_{i}=\pm\pi\) (for any positive \(r_{i}\)), and the corresponding branch cut is the half-line of real numbers to the left of \(-\theta_{i}\). The product of square roots however leads to pairwise branch cuts as
\[C_{r}=[-\theta_{2r},-\theta_{2r-1}]\]
which are segments determined by pairs of \(-\theta_{i}\). If \(p\) is odd then the last value \(-\theta_{p}\) will be paired with \(-\infty\) as
\[C_{[\frac{p+1}{2}]}^{\infty}=[-\theta_{p},-\infty].\]
One can easily see that in general the product
\[\frac{1}{\sqrt{(\theta_{1}+t)(\theta_{2}+t)\cdots(\theta_{p}+t)}}\]
has \(k=[\frac{p+1}{2}]\) branch cuts
\[C_{1},C_{2}\cdots C_{k}\]
such that if \(p=2k-1\), \(C_{k}\) is unbounded.
Please note that the choice of these branch cuts is somewhat arbitrary as each square root term can allow branches on either side of the respective pole \(-\theta_{i}\), and therefore each pair of products \(\sqrt{(\theta_{i}+t)(\theta_{j}+t)}\) can generate either an individual branch cut given by the segment joining \(-\theta_{i}\) and \(-\theta_{j}\), or its complement in the real line. More specifically, using a different parametrization, one could alternatively generate the unbounded branch cut extending from \(-\theta_{1}\) to \(+\infty\) instead.
We are using here the ordered pairs so that the interpretation is meaningful and so that, for odd \(p\), the leftmost branch extends to \(-\infty\). Since, outside these branch cuts, the non-central components are analytic functions except at the respective poles \(-\theta_{i}\), the branch cut structure is also unchanged for
\[g(t)=\frac{\exp(\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4(\theta_{i}+t)})}{\prod_{i=1}^{p}(\theta_{i}+t)^{k_{i}}}\frac{1}{\prod_{i=1}^{p}\sqrt{\theta_{i}+t}}=\frac{\exp(\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4(\theta_{i}+t)})}{\prod_{i=1}^{p}(\theta_{i}+t)^{k_{i}}}h(t)\]
which also corresponds to the cases of **odd** degrees of freedom, i.e. \(n_{i}=2k_{i}+1\). If we have **even multiplicities** for some particular \(\theta_{i}\), say \(n_{i}=2k_{i}\), then there is no contribution to the \(h(t)\) part of the integrand and the corresponding term \(\frac{1}{(\theta_{i}+t)^{k_{i}}}\) is single-valued. Therefore the branch cut structure is not affected, except for the appropriate degeneracy at the respective pole \(-\theta_{i}\). In fact, an even multiplicity can appear in two types:
1. appears if the branch cut \(C_{r}\) reduces to a single point i.e. \(\theta_{2r}=\theta_{2r-1}\) (see the pole \(-\theta_{1}=-\theta_{2}\) in Figure (1), for \(p=3\) and \(r=1\)). This occurs in general if the number of distinct \(\theta_{i}\) less than \(\theta_{2r-1}\) is even.
2. appears if two adjacent branch cuts merge so that the joining extreme appears with multiplicity \(2\) (see the pole \(-\theta_{3}=-\theta_{2}\) in Figure (2)).
**Theorem 1**.: _Assume that \(p\) distinct and increasingly ordered \(\theta_{i}\) have multiplicities \(n_{i}\) generating \(r\) disjoint segments of branch cuts \(C_{k}\) and \(l\) particular poles of even multiplicities of type 1. Then the corresponding pdf and cdf expressions of the random variable \(X=\sum_{i=1}^{p}\frac{1}{2\theta_{i}}\chi_{n_{i}}^{2}(\frac{\gamma_{i}^{2}}{2 \theta_{i}})\) are_
\[f_{\boldsymbol{\theta},\boldsymbol{\gamma}^{2},\boldsymbol{n}}(s)=\frac{f_{\boldsymbol{\theta}s,\boldsymbol{\gamma}^{2}s,\boldsymbol{n}}(1)}{s}=\kappa\left(\frac{-1}{2\pi\boldsymbol{i}}\sum_{i=1}^{r}\int\limits_{\Gamma^{i}}g(t)e^{st}dt+\frac{-1}{2\pi\boldsymbol{i}}\sum_{j=1}^{l}\int\limits_{\Gamma^{j}_{\varepsilon}}g(t)e^{st}dt\right)\]
_where_
\[g(t)=\frac{\exp(\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4(\theta_{i}+t)})}{\prod_ {i=1}^{p}(\theta_{i}+t)^{\frac{n_{i}}{2}}}\quad\kappa=\prod_{i=1}^{p}\theta_{i }^{\frac{n_{i}}{2}}\exp(-\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4\theta_{i}})\]
_and \(\Gamma^{i}\) are the non overlapping contours enveloping the branch cut regions \(C_{i}\) determined by \(\theta_{i}\) while \(\Gamma^{j}_{\varepsilon}\)
_are the non overlapping contours around the points with even multiplicities \(\theta_{j}\) of type 1. Additionally,_
\[P(X\leq x) = \frac{e^{x\theta_{0}}}{\theta_{0}}f_{\tilde{\mathbf{\theta}},\tilde{\mathbf{\gamma}}^{2}}(x)\prod\limits_{i=1}^{p}\frac{\sqrt{\theta_{i}}}{\sqrt{\theta_{i}+\theta_{0}}}\exp(-\sum\limits_{i=1}^{p}\frac{\gamma_{i}^{2}\theta_{0}}{4\theta_{i}(\theta_{i}+\theta_{0})})\] \[= \frac{e^{x\theta_{0}}}{x\theta_{0}}f_{x\tilde{\mathbf{\theta}},x\tilde{\mathbf{\gamma}}^{2}}(1)\prod\limits_{i=1}^{p}\frac{\sqrt{\theta_{i}}}{\sqrt{\theta_{i}+\theta_{0}}}\exp(-\sum\limits_{i=1}^{p}\frac{\gamma_{i}^{2}\theta_{0}}{4\theta_{i}(\theta_{i}+\theta_{0})})\]
_where \(f_{\tilde{\mathbf{\theta}},\tilde{\mathbf{\gamma}}^{2}}\) is the density function of the linear combination \(\tilde{X}=\frac{1}{2\theta_{0}}\chi_{2}^{2}(0)+\sum\limits_{i=1}^{p}\frac{1} {2(\theta_{i}+\theta_{0})}\chi_{n_{i}}^{2}(\frac{\gamma_{i}^{2}}{2(\theta_{i} +\theta_{0})}),\forall\theta_{0}>0\)._
**The proof is presented in the Appendix.**
Theorem 1 suggests that it is sufficient to integrate along non-degenerate contours of "dogbone" type such as \(\Gamma^{1}\), or of "keyhole" type such as \(\Gamma^{2}\), around the branch cuts \(C_{1}\) and \(C_{2}\), see Figure (3). Integration along such curves, if appropriately parametrized, gives rise to univariate integrals. For example, since only the imaginary part contributes to the relevant term \(\frac{-1}{2\pi i}\int\limits_{\Gamma^{i}}g(t)e^{st}dt\), one could use \(\frac{-1}{2\pi}\int\limits_{\Gamma^{i}}\mathbf{Im}(g(t)e^{st})dt\) instead. In our practical implementation reported in Section 5, we use the Romberg integration method on circular contours, which works well within R and can easily be combined with multiple precision routines. If more control or efficiency is required, other lower level programming tools can in principle be used.
In order to obtain the contours \(\Gamma^{i}\) and \(\Gamma^{j}_{\varepsilon}\) of Theorem above, for a given set of \(p\) distinct \(\theta_{i}\) with multiplicities \(n_{i}\), one could perform the following two steps:
1. Consider initially only those \(\theta_{i}\) which have odd multiplicities and pair them to determine the corresponding branch cuts \(C_{i}\) as in \(\Gamma^{i}\)'s of Theorem.
2. Consider next only those \(\theta_{i}\) with even multiplicities(type 1) which are outside \(\Gamma^{i}\) regions and calculate their residues either numerically by integrating along contours \(\Gamma^{j}_{\varepsilon}\) or analytically if \(\gamma_{i}=0\).
In fact the integrating contours reduce to line segments for the practically important case of all terms being central chi squares with one degree of freedom, namely, the case of \(p\) distinct \(\theta_{i}\), each having multiplicity \(n_{i}=1\) and \(\gamma_{i}=0\).
**Theorem 2**.: _For distinct \(\theta_{i}>0\) and \(\gamma_{i}=0\), the corresponding pdf and cdf expressions for the random variable \(X=\sum\nolimits_{i=1}^{p}\frac{1}{2\theta_{i}}\chi_{1}^{2}(0)\) are:_
\[f_{\mathbf{\theta},\mathbf{0},\mathbf{1}}(s) = \frac{\prod\nolimits_{i=1}^{p}\sqrt{\theta_{i}}}{\pi}\left(\int \nolimits_{\theta_{1}}^{\theta_{2}}\frac{e^{-st}}{\sqrt{-\prod\nolimits_{i=1} ^{p}(\theta_{i}-t)}}dt-\int\nolimits_{\theta_{3}}^{\theta_{4}}\frac{e^{-st}}{ \sqrt{-\prod\nolimits_{i=1}^{p}(\theta_{i}-t)}}dt+\int\nolimits_{\theta_{5}} ^{\theta_{6}}\frac{e^{-st}}{\sqrt{-\prod\nolimits_{i=1}^{p}(\theta_{i}-t)}}dt \cdots\right) \tag{9}\] \[= \frac{\prod\nolimits_{i=1}^{p}\sqrt{\theta_{i}}}{\pi}\sum\limits_ {r=1}^{[\frac{p+1}{2}]}(-1)^{r+1}\int\nolimits_{\theta_{2r-1}}^{\theta_{2r}} \frac{e^{-st}}{\sqrt{-\prod\nolimits_{i=1}^{p}(\theta_{i}-t)}}dt\]
_where_
\[\theta_{2[\frac{p+1}{2}]}=\left\{\begin{array}{ll}\theta_{p}&p\text{ even}\\ \infty&p\text{ odd}\end{array}\right.\]
\[P(X\leq x)=f_{\tilde{\mathbf{\theta}},\mathbf{0},\mathbf{1}}(x)\frac{e^{x\theta_{0}}\prod \limits_{i=1}^{p}\sqrt{\theta_{i}}}{\theta_{0}\prod\limits_{i=1}^{p}\sqrt{ \theta_{i}+\theta_{0}}}=f_{\tilde{\mathbf{\theta}},\mathbf{0},\mathbf{1}}(1)\frac{e^{x \theta_{0}}\prod\limits_{i=1}^{p}\sqrt{\theta_{i}}}{x\theta_{0}\prod\limits_{ i=1}^{p}\sqrt{\theta_{i}+\theta_{0}}}\quad\forall\theta_{0}>0\]
_where \(f_{\tilde{\mathbf{\theta}},\mathbf{0},\mathbf{1}}\) is the pdf of \(\tilde{X}=\frac{1}{2\theta_{0}}\chi_{2}^{2}(0)+\sum_{i=1}^{p}\frac{1}{2(\theta _{i}+\theta_{0})}\chi_{n_{i}}^{2}(0)\)._
**The proof is presented in the Appendix.**
In fact these theorems indicate that the regions where the relevant quantities need to be integrated are only those around the branch cuts appearing from \(\theta\)'s with odd multiplicities (or chi-squared with odd degrees of freedom). In the central cases as in Theorem 2, these regions can be also reduced to simple lines. Contour integration is also valid along any lines containing all these branch cut regions and in fact any such choice will lead to a different integrating parametrization.
### Computation remarks for the central cases \(\gamma_{i}=0\)
The univariate integrating terms of Theorem 2 are easily seen to be finite and admit the following alternative representations:
\[\int\limits_{\theta_{2r-1}}^{\theta_{2r}}\frac{e^{-st}}{\sqrt{- \prod\nolimits_{i=1}^{p}(\theta_{i}-t)}}dt = e^{-s\theta_{2r-1}}\int_{0}^{1}\frac{e^{-su\alpha_{r}}du}{ \sqrt{u(1-u)\prod\limits_{2r-1\neq i\neq 2r}^{p}(\theta_{i}-\theta_{2r-1}-u \alpha_{r})}}\] \[= 2e^{-s\theta_{2r-1}}\int_{0}^{\pi/2}\frac{\exp(-s\alpha_{r}\sin^ {2}u)du}{\sqrt{\prod\limits_{2r-1\neq i\neq 2r}^{p}(\theta_{i}-\theta_{2r-1}\cos^ {2}u-\theta_{2r}\sin^{2}u)}}\]
where \(\alpha_{r}=\theta_{2r}-\theta_{2r-1}\) and \(t=\theta_{2r-1}\cos^{2}u+\theta_{2r}\sin^{2}u\). In the type one multiplicity case i.e. \(\alpha_{r}=0\), one can progress by taking the limit on (10) as \(\alpha_{r}\to 0\):
\[\int\limits_{\theta_{2r-1}}^{\theta_{2r}}\frac{e^{-st}}{\sqrt{-\prod\nolimits_{i=1}^{p}(\theta_{i}-t)}}dt\;\xrightarrow{\;\alpha_{r}\to 0\;}\;\frac{\pi e^{-s\theta_{2r-1}}}{\sqrt{\prod\nolimits_{2r-1\neq i\neq 2r}^{p}(\theta_{i}-\theta_{2r-1})}} \tag{11}\]
and therefore there is no need to numerically integrate this term. This is not surprising as in this case the pole is of order 1 and therefore the residue theorem implies the result above.
Similarly, when \(p\) is odd, the last term corresponds to the branch cut \((-\infty,-\theta_{p})\)
\[\int\limits_{\theta_{p}}^{\infty}\frac{e^{-st}}{\sqrt{-\prod_{i=1}^{p}(\theta_{i}-t)}}dt=\int\limits_{0}^{\infty}\frac{\exp(-s\theta_{p}-sv)}{\sqrt{\prod_{i=1}^{p-1}(\theta_{i}-\theta_{p}-v)}\sqrt{v}}dv=2e^{-s\theta_{p}}\int\limits_{0}^{+\infty}\frac{e^{-su^{2}}du}{\sqrt{\prod_{i=1}^{p-1}(\theta_{p}-\theta_{i}+u^{2})}} \tag{12}\]
where \(t=\theta_{p}+v\) and \(v=u^{2}\). These terms, together with those of (10), can be efficiently integrated numerically, as these functions either decay exponentially to zero or have a "U" shape with only one critical point. See Figure 5 for the log-scale behaviour of these elementary functions.
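As an illustration of how the elementary representations (10) and (12) combine in practice, we include the following minimal numerical sketch. It is only meant as an illustration and is not the implementation used for the results of Section 5 (which relied on R); the helper name `pdf_central` is ours, and plain `scipy` quadrature is used. The function evaluates the density of Theorem 2 for distinct \(\theta_{i}\), each of multiplicity one, and the final lines cross-check the value at \(s=1\) against a crude Monte Carlo estimate.

```python
import numpy as np
from scipy import integrate

def pdf_central(s, theta):
    """Density of X = sum_i chi^2_1(0) / (2 theta_i) for distinct theta_i > 0,
    evaluated through the alternating branch-cut integrals of Theorem 2,
    using the substitutions (10) (bounded cuts) and (12) (unbounded cut)."""
    theta = np.sort(np.asarray(theta, dtype=float))
    p = len(theta)
    total = 0.0
    for r in range(1, (p + 1) // 2 + 1):
        i1 = 2 * r - 2                              # python index of theta_{2r-1}
        others = [i for i in range(p) if i not in (i1, i1 + 1)]
        if i1 + 1 < p:                              # bounded cut [theta_{2r-1}, theta_{2r}]
            a, b = theta[i1], theta[i1 + 1]
            def f(u, a=a, b=b, others=others):
                t = a * np.cos(u) ** 2 + b * np.sin(u) ** 2
                return np.exp(-s * t) / np.sqrt(np.prod(theta[others] - t))
            val = 2.0 * integrate.quad(f, 0.0, np.pi / 2)[0]
        else:                                       # unbounded cut [theta_p, infinity)
            a = theta[i1]
            def f(u, a=a, others=others):
                return np.exp(-s * (a + u * u)) / np.sqrt(np.prod(a - theta[others] + u * u))
            val = 2.0 * integrate.quad(f, 0.0, np.inf)[0]
        total += (-1) ** (r + 1) * val
    return np.prod(np.sqrt(theta)) / np.pi * total

# crude Monte Carlo cross-check at s = 1, for the first set of values used in Section 5
theta = np.array([0.8333333, 1.6666667, 5.0])
rng = np.random.default_rng(0)
X = (rng.chisquare(1, size=(500_000, 3)) / (2 * theta)).sum(axis=1)
print(pdf_central(1.0, theta), np.mean(np.abs(X - 1.0) < 0.01) / 0.02)
```

The two printed numbers should agree to within Monte Carlo error; the quadrature itself is fast since, as noted above, each integrand is smooth and either exponentially decaying or "U"-shaped.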
### Adding degrees of freedom (multiplicities of \(\theta_{i}\)) higher than 2
A recursive application of the derivatives to the normalising constants, as in Kume and Wood (2007) or as generalised in (7), could in principle generate the required expressions. The differentiation needs to be applied to each of the elementary terms of Theorem 2, and a single differentiation has the effect of increasing the corresponding multiplicity by 2. For example, differentiating each of the terms of Theorem 2 as below leads to the cases of multiplicity 3:
\[\frac{\partial}{\partial\theta_{j}}\int\limits_{\mathcal{C}_{r}}\frac{e^{t}}{\sqrt{-\prod_{i=1}^{p}(\theta_{i}+t)}}dt=\left\{\begin{array}{ll}-\frac{1}{2}\int\limits_{-\theta_{2r}}^{-\theta_{2r-1}}\frac{1}{(\theta_{j}+t)\sqrt{-\prod_{i=1}^{p}(\theta_{i}+t)}}e^{t}dt&\theta_{2r}\neq\theta_{j}\neq\theta_{2r-1}\\ \frac{1}{\theta_{2r}-\theta_{2r-1}}\int\limits_{-\theta_{2r}}^{-\theta_{2r-1}}\frac{\theta_{2r}+t}{\sqrt{-\prod_{i=1}^{p}(\theta_{i}+t)}}e^{t}\left(-1+\frac{1}{2}\sum\limits_{2r-1\neq i\neq 2r}\frac{1}{\theta_{i}+t}\right)dt&\theta_{j}=\theta_{2r-1}\\ \frac{1}{\theta_{2r-1}-\theta_{2r}}\int\limits_{-\theta_{2r}}^{-\theta_{2r-1}}\frac{\theta_{2r-1}+t}{\sqrt{-\prod_{i=1}^{p}(\theta_{i}+t)}}e^{t}\left(-1+\frac{1}{2}\sum\limits_{2r-1\neq i\neq 2r}\frac{1}{\theta_{i}+t}\right)dt&\theta_{j}=\theta_{2r}\\ \end{array}\right.\]
## 3 Cases of practical importance
### Bingham distribution
For the Bingham distributions with ordered and distinct \(\theta_{i}\) we have that
\[\mathcal{C}(\mathbf{\theta}) = \int\limits_{\mathcal{S}^{p-1}}\exp(\sum_{i=1}^{p}-\theta_{i}x_{i}^{2})d_{\mathcal{S}^{p-1}}(\mathbf{x})=2\pi^{\frac{\sum_{i=1}^{p}n_{i}}{2}-1}\sum_{r=1}^{[\frac{p+1}{2}]}(-1)^{r+1}\int_{\theta_{2r-1}}^{\theta_{2r}}\frac{e^{-t}}{\sqrt{-\prod_{i=1}^{p}(\theta_{i}-t)}}dt. \tag{13}\]
If \(p=3\), the Bingham on the ordinary sphere has normalising constant
\[\frac{\mathcal{C}(\mathbf{\theta})}{4\sqrt{\pi}}=e^{-\theta_{1}}\int_{0}^{\pi/2}\frac{\exp(-(\theta_{2}-\theta_{1})\sin^{2}u)}{\sqrt{\theta_{3}-\theta_{1}\cos^{2}u-\theta_{2}\sin^{2}u}}du-e^{-\theta_{3}}\int_{0}^{+\infty}\frac{e^{-u^{2}}du}{\sqrt{\prod_{i=1}^{2}(\theta_{3}-\theta_{i}+u^{2})}}\]
For \(p=4\), there are only two finite branch cuts, and from (10) we can generate this expression as
\[\frac{\mathcal{C}(\mathbf{\theta})}{4\pi} = e^{-\theta_{1}}\int_{0}^{\pi/2}\frac{\exp\left(-(\theta_{2}-\theta_{1})\sin^{2}u\right)}{\sqrt{\prod_{i=3}^{4}(\theta_{i}-\theta_{1}\cos^{2}u-\theta_{2}\sin^{2}u)}}du-e^{-\theta_{3}}\int_{0}^{\pi/2}\frac{\exp\left(-(\theta_{4}-\theta_{3})\sin^{2}u\right)}{\sqrt{\prod_{i=1}^{2}(\theta_{i}-\theta_{3}\cos^{2}u-\theta_{4}\sin^{2}u)}}du \tag{15}\] \[= \frac{e^{-\theta_{1}}}{2}\int_{0}^{1}\frac{e^{-u\alpha_{1}}du}{\sqrt{u(1-u)(\theta_{3}-\theta_{1}-u\alpha_{1})(\theta_{4}-\theta_{1}-u\alpha_{1})}}\] \[-\frac{e^{-\theta_{3}}}{2}\int_{0}^{1}\frac{e^{-u\alpha_{2}}du}{\sqrt{u(1-u)(\theta_{1}-\theta_{3}-u\alpha_{2})(\theta_{2}-\theta_{3}-u\alpha_{2})}}\]
where \(\alpha_{1}=\theta_{2}-\theta_{1}\) and \(\alpha_{2}=\theta_{4}-\theta_{3}\). In the light of property (7), differentiating for various \(\theta_{i}\) here leads to other representations corresponding to additional multiplicities.
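As a usage note (again only an illustrative sketch of ours, reusing the hypothetical `pdf_central` helper introduced after (12)), the identity (5) with \(\boldsymbol{n}=(1,1,1)\) turns the above into a short numerical evaluation of the Bingham normalising constant on the ordinary sphere, which can be checked against a direct integration in spherical coordinates:

```python
import numpy as np
from scipy import integrate

def bingham_const(theta):
    """C(theta) on S^2 via (5) with n = (1,1,1): C = 2 pi^{3/2} f(1) / prod(sqrt(theta))."""
    theta = np.asarray(theta, dtype=float)
    return 2.0 * np.pi ** 1.5 * pdf_central(1.0, theta) / np.prod(np.sqrt(theta))

# direct check: integrate exp(-sum theta_i x_i^2) over the unit sphere (surface measure)
theta = np.array([0.3, 1.0, 2.5])
f = lambda ph, th: np.sin(th) * np.exp(-(theta[0] * (np.sin(th) * np.cos(ph)) ** 2
                                         + theta[1] * (np.sin(th) * np.sin(ph)) ** 2
                                         + theta[2] * np.cos(th) ** 2))
direct = integrate.dblquad(f, 0.0, np.pi, lambda th: 0.0, lambda th: 2.0 * np.pi)[0]
print(bingham_const(theta), direct)   # the two values should agree
```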
### Fisher distribution of rotations in \(\mathbb{R}^{3}\) and its use in 3-D shape inference
The expression of Bingham distributions for \(p=4\), can be adopted as in Wood (1993) so that for some diagonal matrix
\[\Phi=\left(\begin{array}{ccc}\phi_{1}&0&0\\ 0&\phi_{2}&0\\ 0&0&\phi_{3}\end{array}\right)\quad\text{and}\quad\mathbf{\theta}=-2(\phi_{1}, \phi_{2},\phi_{3},\phi_{1}+\phi_{2}+\phi_{3})\]
the normalizing constant of the Fisher distribution in \(SO(3)\) is
\[\int_{SO(3)}e^{\mathrm{tr}(\Phi\mathbf{R})}d\mathbf{R}=\frac{1}{2}e^{\phi_{1} +\phi_{2}+\phi_{3}}\mathcal{C}(\mathbf{\theta})\]
where the integration is taking place along the Haar uniform measure \(d\mathbf{R}\) on the group of rotations \(SO(3)\) and the evaluation of \(\mathcal{C}(\mathbf{\theta})\) is as in (15). This formula is similar to the one shown in Wood (1993).
In Dryden et al. (2021), the likelihood function for a very general set of regression shape models is based on the efficient evaluation of this normalising constant. In addition they address the bias correction of the well known procrustes mean shape estimators for the 3-d shape models based on the principles of the maximum likelihood estimation. For establishing the resulting estimators the authors suggest using the gradient of \(\log\int_{SO(3)}e^{\mathrm{tr}(\Phi R)}dR\) as a bias correcting term. In particular they use this result
\[\nabla_{\Phi}\log\int_{SO(3)}e^{\mathrm{tr}(\Phi\mathbf{R})}d\mathbf{R}=\begin{pmatrix}1-\frac{2}{\mathcal{C}(\mathbf{\theta})}\frac{\partial\mathcal{C}(\mathbf{\theta})}{\partial\theta_{1}}-\frac{2}{\mathcal{C}(\mathbf{\theta})}\frac{\partial\mathcal{C}(\mathbf{\theta})}{\partial\theta_{4}}\\ 1-\frac{2}{\mathcal{C}(\mathbf{\theta})}\frac{\partial\mathcal{C}(\mathbf{\theta})}{\partial\theta_{2}}-\frac{2}{\mathcal{C}(\mathbf{\theta})}\frac{\partial\mathcal{C}(\mathbf{\theta})}{\partial\theta_{4}}\\ 1-\frac{2}{\mathcal{C}(\mathbf{\theta})}\frac{\partial\mathcal{C}(\mathbf{\theta})}{\partial\theta_{3}}-\frac{2}{\mathcal{C}(\mathbf{\theta})}\frac{\partial\mathcal{C}(\mathbf{\theta})}{\partial\theta_{4}}\end{pmatrix}\]
where the partial derivative expressions are easily obtained from the integral representations as in (13). In fact Dryden et al. (2021) show that, for practical purposes, these components are accurately and very easily calculated using the saddlepoint approximation.
### Kent's formula for Complex Bingham
If even multiplicities are introduced, the limit of the expression (13) will suffice. For example, if all \(\theta_{i}\) as in Lemma 1 are of even multiplicities, then the corresponding function \(g(t)\) will not have any branch cuts but just simple poles; therefore a simple application of the residue theorem generates the well-known result of Kent for the complex Bingham distributions. More specifically, if \(p=2k\) while the \(\theta_{i}\)'s are coalescing in pairs, i.e. \(\theta_{2r}\rightarrow\theta_{2r-1}\), and following similar arguments to those in (11), each term of (13) becomes
\[\frac{e^{-\theta_{2r}}}{\prod\limits_{2r-1\neq i\neq 2r}^{p}(\theta_{i}- \theta_{2r})^{1/2}}\int_{0}^{1}\frac{du}{\sqrt{u(1-u)}}=\frac{e^{-\theta_{2r} }}{\prod\limits_{2r-1\neq i\neq 2r}^{k}(\theta_{i}-\theta_{2r})}\pi\]
and therefore for \(k\) distinct pairwise \(\theta\)'s.
\[\mathcal{C}(\boldsymbol{\theta})=2\pi^{k}\sum_{r=1}^{k}(-1)^{r+1}\frac{e^{- \theta_{r}}}{\prod\limits_{i\neq r}^{p}(\theta_{i}-\theta_{r})}=2\pi^{k}\sum _{r=1}^{k}\frac{e^{-\theta_{r}}}{\prod\limits_{i\neq r}^{p}(\theta_{r}-\theta_ {i})} \tag{16}\]
which is consistent with the result of Kent for the complex Bingham distribution. Note that, in the light of (6) for \(n_{i}=2\) and \(\gamma_{i}=0\), Kent's normalising constant above can also be used for closed-form expressions of the pdf and cdf of a positive linear combination of exponentially distributed terms, see e.g. Kent (1994b).
### Kent distributions
In these distributions, we have only one \(\gamma_{r}\neq 0\) and in particular for the \(p=3\) case, \(\gamma_{2}\neq 0\) and therefore, the integration along the unbounded branch cut is reduced as in Theorem 2.
An additional feature of the Kent distribution is that the middle pole \(\theta_{2}\) is equally distant from the other two, so that \(\boldsymbol{\theta}=(\theta_{1},\theta_{1}+\alpha,\theta_{1}+2\alpha)\). Since the non-central term is not on the unbounded branch \((-\infty,-\theta_{3})\), the corresponding integration can be reduced to the real line along this interval, while for the finite branch one can use a circular contour parametrisation \(t=-\theta_{1}-\frac{\alpha}{2}+re^{\mathrm{i}u}\), so that for the Kent distribution:
\[\frac{\mathcal{C}(\boldsymbol{\theta},\gamma_{2})}{2e^{-\theta_{1}}\sqrt{\pi}} = \frac{-re^{-\frac{\alpha}{2}}}{2\pi}\int\limits_{0}^{2\pi}\mathbf{Im}\left(\frac{\exp\left(\frac{\gamma_{2}^{2}}{4(\frac{\alpha}{2}+re^{\mathrm{i}u})}+re^{\mathrm{i}u}\right)}{\sqrt{(-\frac{\alpha}{2}+re^{\mathrm{i}u})(\frac{\alpha}{2}+re^{\mathrm{i}u})(3\frac{\alpha}{2}+re^{\mathrm{i}u})}}\right)du\] \[- 2e^{-2\alpha}\int\limits_{0}^{+\infty}\frac{\exp\left(-u^{2}-\frac{\gamma_{2}^{2}}{4(\alpha+u^{2})}\right)du}{\sqrt{(2\alpha+u^{2})(\alpha+u^{2})}}\]
where the radius \(\frac{\alpha}{2}<r<\alpha\) is chosen so that the circle contains only the first branch cut. Note that for implementing this distribution in directional statistics, \(\theta_{1}\) can be allowed to be 0 and therefore the normalizing constant of this distribution is in fact defined in terms of only two parameters \(\alpha\) and \(\gamma_{2}\). These parameters correspond to \(\beta\) and \(\kappa\) in Kent (1982).
## 4 Difference of two positive linear combinations
If in general some of the \(\lambda_{i}\) in (1) are negative then the characterisation of the problem in terms of normal distribution components is as the difference between two quadratic forms. The contour integration however can be defined along the vertical lines such that the strictly negative poles are on the left of the contours \({\bf i}\mathbb{R}+t_{0}\) while \(t_{0}<0\) as in (2). More specifically, if \(Z=X-Y\) where both \(X\) and \(Y\) have the same general expression as in (1) for some positive parameters \(\theta\), \(\gamma\), \(\theta^{\prime}\) and \(\gamma^{\prime}\), with respective densities
\[f_{\mathbf{\theta},\mathbf{\gamma}^{2},\mathbf{n} }(s)=\kappa\frac{1}{2\pi{\bf i}}\int\limits_{{\bf i}\mathbb{R}+t_{0}}g(t)e^{ st}dt\quad f_{\mathbf{\theta}^{\prime},\mathbf{\gamma}^{ \prime 2},\mathbf{n}^{\prime}}(s)=\kappa^{\prime}\frac{1}{2\pi{\bf i}} \int\limits_{{\bf i}\mathbb{R}+v_{0}}g^{\prime}(v)e^{sv}dv\quad s>0 \tag{17}\]
where \(\kappa=\prod_{i=1}^{p}\theta_{i}^{\frac{n_{i}}{2}}\exp(-\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4\theta_{i}})\), \(\kappa^{\prime}=\prod_{i=1}^{p^{\prime}}{\theta_{i}^{\prime}}^{\frac{n_{i}^{\prime}}{2}}\exp(-\sum_{i=1}^{p^{\prime}}\frac{\gamma_{i}^{\prime 2}}{4\theta_{i}^{\prime}})\) and
\[g(t)=\frac{\exp(\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4(\theta_{i}+t)})}{\prod_{i=1}^{p}(\theta_{i}+t)^{\frac{n_{i}}{2}}}\quad g^{\prime}(v)=\frac{\exp(\sum_{i=1}^{p^{\prime}}\frac{\gamma_{i}^{\prime 2}}{4(\theta_{i}^{\prime}+v)})}{\prod_{i=1}^{p^{\prime}}(\theta_{i}^{\prime}+v)^{\frac{n_{i}^{\prime}}{2}}}\]
the following result holds:
**Theorem 3**.: _If \(X=\sum_{i=1}^{p}\frac{1}{2(\theta_{i})}\chi_{n_{i}}^{2}(\frac{\gamma_{i}^{2}} {2(\theta_{i})})\) and \(Y=\sum_{i=1}^{p^{\prime}}\frac{1}{2(\theta_{i}^{\prime})}\chi_{n_{i}^{\prime} }^{2}(\frac{\gamma_{i}^{\prime 2}}{2(\theta_{i}^{\prime})})\) then the pdf of \(Z=X-Y\) at \(z\geq 0\) is_
\[f_{Z}(z)=\prod_{i=1}^{p}\theta_{i}^{\frac{n_{i}}{2}}\exp(-\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4\theta_{i}})\prod_{i=1}^{p^{\prime}}{\theta_{i}^{\prime}}^{\frac{n_{i}^{\prime}}{2}}\exp(-\sum_{i=1}^{p^{\prime}}\frac{\gamma_{i}^{\prime 2}}{4\theta_{i}^{\prime}})\frac{-1}{2\pi{\bf i}}\int\limits_{\Gamma}g(t)g^{\prime}(-t)e^{zt}dt\]
_where the integrating contour \(\Gamma\) can be simplified as in Theorems 1 and 2 along branch cuts involving only the values of \(\theta\). Additionally_
\[P(X-Y>z\geq 0)=\frac{e^{\theta_{0}z}}{\theta_{0}}f_{\tilde{Z}}(z)\prod_{i=1}^{p} \frac{\sqrt{\theta_{i}}}{\sqrt{\theta_{i}+\theta_{0}}}\prod_{i=1}^{p^{\prime} }\frac{\sqrt{\theta_{i}^{\prime}}}{\sqrt{\theta_{i}^{\prime}-\theta_{0}}}\exp (\sum_{i=1}^{p}\frac{\gamma_{i}^{2}}{4\theta_{i}}-\frac{\gamma_{i}^{2}}{4( \theta_{i}+\theta_{0})}+\sum_{i=1}^{p^{\prime}}\frac{\gamma_{i}^{\prime 2}}{4 \theta_{i}^{\prime}}-\frac{\gamma_{i}^{\prime 2}}{4(\theta_{i}^{\prime}-\theta_{0})})\]
_for any arbitrary \(0<\theta_{0}<min(\mathbf{\theta}^{\prime})\) and_
\[\tilde{Z}=\frac{1}{2\theta_{0}}\chi_{2}^{2}(0)+\sum_{i=1}^{p}\frac{1}{2(\theta _{i}+\theta_{0})}\chi_{n_{i}}^{2}(\frac{\gamma_{i}^{2}}{2(\theta_{i}+\theta_ {0})})-\sum_{i=1}^{p^{\prime}}\frac{1}{2(\theta_{i}^{\prime}-\theta_{0})}\chi_{ n_{i}^{\prime}}^{2}(\frac{\gamma_{i}^{\prime 2}}{2(\theta_{i}^{\prime}-\theta_{0})}).\]
_The corresponding expressions for \(z<0\) can be obtained similarly for \(f_{Y-X}(-z)\) by switching the roles of \(X\) and \(Y\) as in \(-Z=Y-X\)._
As a special case one could think of the exponential distributions, which correspond, modulo some scaling, to central chi-squares with 2 degrees of freedom. These cases have practical implications due to their connections with the Erlang and phase-type distributions in probability theory. A recent paper deriving the distribution for the positive entries is Neumuller et al. (2022), and the proof for the extended case of real coefficients can be found in Mathai (1983).
In our construction however, these expressions are simply the residues of ordinary poles. Similar to the case of complex Bingham distributions which are also related to the exponentially distributed components, the respective branch cuts reduce to simple poles and the residues generate closed form expressions:
**Corollary 4**.: _If the chi-square components in Theorem 3 are central chi-squares r.v's with degrees of freedom 2, namely \(Z=X-Y\) represents a linear combination of exponential random variables, then the corresponding density and distribution functions at \(z\geq 0\) are evaluated as_
\[f_{Z}(z)=\prod_{i=1}^{p}\theta_{i}\prod_{j=1}^{p^{\prime}}\theta_{j}^{\prime} \sum_{i=1}^{p}(-1)^{i}\frac{e^{-\theta_{i}z}}{\prod_{j\neq i}^{p}(\theta_{j}- \theta_{i})\prod_{k=1}^{p^{\prime}}(\theta_{k}^{\prime}-\theta_{i})}\]
_and_
\[P(Z>z\geq 0)=\prod_{i=1}^{p}\theta_{i}\prod_{j=1}^{p^{\prime}}\theta_{j}^{ \prime}\sum_{i=1}^{p}(-1)^{i}\frac{e^{-\theta_{i}z}}{\theta_{i}\prod_{j\neq i }^{p}(\theta_{j}-\theta_{i})\prod_{k=1}^{p^{\prime}}(\theta_{k}^{\prime}- \theta_{i})}.\]
_For the cases when \(z<0\) the roles of \(\boldsymbol{\theta}\) and \(\boldsymbol{\theta^{\prime}}\) are reversed._
## 5 Numerical evidence
We illustrate our method by replicating the numerical figures reported in Imhof (1961). We focus initially on the first two central cases reported there, where the coefficients \(\lambda_{i}\) of (1) are defined as \(\lambda_{i}=\frac{1}{2\theta_{i}}\), which in our parametrization correspond to:
\[\boldsymbol{\theta}_{1}=(0.8333333,1.6666667,5.000000)\quad\boldsymbol{\theta }_{2}=(0.8333333,0.833333,1.666667,1.666667,5,5)\]
Note in particular that for \(\boldsymbol{\theta}_{2}\) above, the three distinct values have multiplicity 2, i.e. simple poles, and therefore, as we have noted, the corresponding inversion is immediately available in closed form for both the pdf and the cdf using (16) and (6). In Imhof (1961) only the cdf values for specific entries are shown, while here the whole function can be easily obtained using our simple contour integrating terms. The comparison of the exact and SPA methods for the pdf and cdf, together with the corresponding relative errors, is shown in Figures (6) and (7). It can be easily seen that the density SPA used for the cdf calculations, as stated in our Theorem 1, performs very well, especially at the extreme values of the variable \(s\).
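To make the last point concrete, note that \(\chi_{2}^{2}(0)/(2\theta)\) is an exponential variable with rate \(\theta\), so that for \(\boldsymbol{\theta}_{2}\) the quadratic form is a sum of three independent exponentials with rates \(5/6\), \(5/3\) and \(5\), and the closed-form inversion reduces to the standard hypoexponential expressions. The following short sketch (our illustrative addition, not part of the original computations) evaluates them and checks the tail probabilities by simulation.

```python
import numpy as np

theta = np.array([0.8333333, 1.6666667, 5.0])   # distinct rates of the three exponentials

def hypoexp_pdf(s, theta):
    """Density of a sum of independent Exp(theta_i) variables with distinct rates."""
    return sum(t_r * np.exp(-t_r * s) *
               np.prod([t_i / (t_i - t_r) for t_i in theta if t_i != t_r])
               for t_r in theta)

def hypoexp_sf(s, theta):
    """Survival function P(X > s) of the same sum."""
    return sum(np.exp(-t_r * s) *
               np.prod([t_i / (t_i - t_r) for t_i in theta if t_i != t_r])
               for t_r in theta)

rng = np.random.default_rng(1)
X = sum(rng.exponential(1.0 / t, size=1_000_000) for t in theta)
for s in (1.0, 2.0, 3.0):
    print(s, hypoexp_sf(s, theta), np.mean(X > s))   # closed form vs. simulation
```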
As illustrative evidence for the difference of two positive linear combinations, we can mention cases reported in Imhof (1961) where in particular, the three example entries on our Table 1 here relate to the cases \(\frac{Q_{3}}{3}-\frac{2Q_{4}}{3}\) and \(\frac{Q_{5}}{2}-\frac{Q_{6}}{2}\) and \(\frac{1}{6}Q_{3}+\frac{2}{6}Q_{6}-\frac{1}{6}Q_{5}-\frac{2}{6}Q_{4}\) there.
The approximation method reported in Imhof (1961) is illustrated there against some exact figures, which are also reported in our Table 1. These exact values, to which Imhof's method is compared, are in full agreement with our method of bounded contour integration, as seen for various values of \(s\). Our method can easily produce the pdf and cdf of these three cases, as shown in Figure 8.
## 6 Concluding remarks
In this paper we explore alternative expressions for the integral representation of this important family of distributions. It is clear that the branch cut approach covered here leads to simple expressions and to an explicit connection between the pdf and the cdf. Therefore one can apply the standard saddle point approximation method for the density when evaluating tail probabilities. The resulting expressions in terms of branch cuts are also easy to implement using the standard numerical routines available in many computer packages. Identifying these cuts enables the user to vary the integrating contours and not just follow standard inversion methods or their improvements, such as Talbot's method. The saddle point approximation is shown to work well for a range of values, with a relative improvement in the tails of the distributions, but such approximations generally do not perform well in the case of negative entries.
|
2305.10148 | Stability analysis of two-dimensional ideal flows with applications to
viscous fluids and plasmas | We are interested in the stability analysis of two-dimensional incompressible
inviscid fluids. Specifically, we revisit a recent result on the stability of
Yudovich's solutions to the incompressible Euler equations in
$L^\infty([0,T];H^1)$ by providing a new approach to its proof based on the
idea of compactness extrapolation and by extending it to the whole plane.
This new method of proof is robust and, when applied to viscous models, leads
to a remarkable logarithmic improvement on the rate of convergence in the
vanishing viscosity limit of two-dimensional fluids. Loosely speaking, this
logarithmic gain is the result of the fact that, in appropriate high-regularity
settings, the smoothness of solutions to the Euler equations at times $t\in
[0,T)$ is strictly higher than their regularity at time $t=T$. This ``memory
effect'' seems to be a general principle which is not exclusive to fluid
mechanics. It is therefore likely to be observed in other settings and deserves
further investigation.
Finally, we also apply the stability results on Euler systems to the study of
two-dimensional ideal plasmas and establish their convergence, in strong
topologies, to solutions of magnetohydrodynamic systems, when the speed of
light tends to infinity. The crux of this asymptotic analysis relies on a fine
understanding of Maxwell's system. | Diogo Arsénio, Haroune Houamed | 2023-05-17T12:09:23Z | http://arxiv.org/abs/2305.10148v2 | # Stability analysis of two-dimensional ideal flows with applications to viscous fluids and plasmas
###### Abstract.
We are interested in the stability analysis of two-dimensional incompressible inviscid fluids. Specifically, we revisit a known recent result on the stability of Yudovich's solutions to the incompressible Euler equations in \(L^{\infty}([0,T];H^{1})\) by providing a new approach to its proof based on the idea of compactness extrapolation and by extending it to the whole plane.
This new method of proof is robust and, when applied to viscous models, leads to a remarkable logarithmic improvement on the rate of convergence in the vanishing viscosity limit of two-dimensional fluids. Loosely speaking, this logarithmic gain is the result of the fact that, in appropriate high-regularity settings, the smoothness of solutions to the Euler equations at times \(t\in[0,T)\) is strictly higher than their regularity at time \(t=T\). This "memory effect" seems to be a general principle which is not exclusive to fluid mechanics. It is therefore likely to be observed in other settings and deserves further investigation.
Finally, we also apply the stability results on Euler systems to the study of two-dimensional ideal plasmas and establish their convergence, in strong topologies, to solutions of magnetohydrodynamic systems, when the speed of light tends to infinity. The crux of this asymptotic analysis relies on a fine understanding of Maxwell's system.
Key words and phrases:Perfect incompressible two-dimensional fluids, Maxwell's system, plasmas, Yudovich's theory, inviscid limit, singular limits
###### Contents
* 1 Stability of the incompressible Euler system in the plane
* 1.1 Introduction and first main result
* 1.2 Overview of previous results
* 1.3 A new method of proof
* 1.4 Notation
* 1.5 Proof of Theorem 1.2
* 2 Rate of convergence of the inviscid limit in Yudovich's class
* 2.1 Second main result
* 2.2 Proof of Theorem 2.1
* 3 Two-dimensional incompressible plasmas
* 3.1 Third main result
* 3.2 Asymptotic analysis of Ampere's equation
* 3.3 Proof of Theorem 3.2
* A Paradifferential calculus and the normal structure
## 1. Stability of the incompressible Euler system in the plane
### Introduction and first main result
We are interested in the stability of the two-dimensional incompressible Euler equations
\[\left\{\begin{aligned} \partial_{t}u^{\varepsilon}+u^{ \varepsilon}\cdot\nabla u^{\varepsilon}+\nabla p^{\varepsilon}&=g^{ \varepsilon},\\ \operatorname{div}u^{\varepsilon}&=0,\\ u^{\varepsilon}_{|_{t=0}}&=u^{\varepsilon}_{0}, \end{aligned}\right. \tag{1.1}\]
in the limit \(\varepsilon\to 0\), where \(u^{\varepsilon}=u^{\varepsilon}(t,x)\) are fluid velocity fields and \(g^{\varepsilon}=g^{\varepsilon}(t,x)\) are prescribed source terms, with \((t,x)\in\mathbb{R}^{+}\times\mathbb{R}^{2}\), taking values in \(\mathbb{R}^{2}\). The pressures \(p^{\varepsilon}\) are real-valued and also depend on \(t\) and \(x\).
Formally, if the inputs \((g^{\varepsilon},u^{\varepsilon}_{0})\) converge to some states \((g,u_{0})\), as \(\varepsilon\) vanishes, then it is expected that \(u^{\varepsilon}\) approaches \(u\), the solution of the same Euler equations
\[\left\{\begin{aligned} \partial_{t}u+u\cdot\nabla u+\nabla p& =g,\\ \operatorname{div}u&=0,\\ u_{|_{t=0}}&=u_{0}.\end{aligned}\right. \tag{1.2}\]
The celebrated Yudovich's theorem allows us to formulate the rigorous weak convergence result described in the following statement.
(Note that we employ below the classical notation \(H^{s}\) to denote the Sobolev space of functions \(f\in L^{2}\) such that the Riesz potential \(|D|^{s}f\) also belongs to \(L^{2}\), whereas \(W^{1,p}\) denotes the Sobolev space of functions \(f\in L^{p}\) such that \(\nabla f\in L^{p}\). Homogeneous versions of these spaces are defined in a similar fashion.)
**Theorem 1.1** ([9]).: _Let \(\varepsilon\in(0,1)\) and \(T\in\mathbb{R}^{+}\cup\{\infty\}\). For any initial data and source satisfying_
\[u^{\varepsilon}_{0}\in H^{1}(\mathbb{R}^{2}),\quad\operatorname{ curl}u^{\varepsilon}_{0}\in L^{\infty}(\mathbb{R}^{2}),\] \[g^{\varepsilon}\in L^{1}([0,T];H^{1}(\mathbb{R}^{2})),\quad \operatorname{curl}g^{\varepsilon}\in L^{1}([0,T];L^{\infty}(\mathbb{R}^{2})),\]
_uniformly in \(\varepsilon\), there is a unique solution \(u^{\varepsilon}\) to (1.1) on \([0,T]\) enjoying the bounds_
\[u^{\varepsilon}\in C([0,T];H^{1}(\mathbb{R}^{2})),\quad\operatorname{curl}u^{ \varepsilon}\in L^{\infty}([0,T];L^{\infty}(\mathbb{R}^{2})),\]
_uniformly in \(\varepsilon.\) Moreover, one has, for all \(t\in[0,T]\), that_
\[\left\|\operatorname{curl}u^{\varepsilon}(t)\right\|_{L^{p}}\leq\left\| \operatorname{curl}u^{\varepsilon}_{0}\right\|_{L^{p}}+\left\|\operatorname{ curl}g^{\varepsilon}\right\|_{L^{1}([0,t);L^{p})},\quad\text{for all }p\in[1,\infty]. \tag{1.3}\]
_Furthermore, if \((u_{0},g)\) denotes a weak limit, in the sense of distributions, of the family \((u^{\varepsilon}_{0},g^{\varepsilon})_{\varepsilon>0}\), then \(u^{\varepsilon}\) converges weakly, in the sense of distributions, to the unique solution \(u\) of the Euler system (1.2)._
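For the reader's convenience, let us briefly recall the classical mechanism behind the bound (1.3) (a formal sketch only). In two dimensions, the vorticity \(\omega^{\varepsilon}=\operatorname{curl}u^{\varepsilon}\) is scalar and, taking the curl of the momentum equation in (1.1), it solves the transport equation
\[\partial_{t}\omega^{\varepsilon}+u^{\varepsilon}\cdot\nabla\omega^{\varepsilon}=\operatorname{curl}g^{\varepsilon}.\]
Since \(u^{\varepsilon}\) is divergence-free, the flow it generates preserves the Lebesgue measure, so that each \(L^{p}\) norm of \(\omega^{\varepsilon}\) is propagated in time up to the contribution of the source, which is precisely (1.3).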
We are now interested in refining our understanding of the limit of \(u^{\varepsilon}\) to \(u\), provided the inputs of (1.1) converge in a suitable strong sense. Our first main result is featured in the following theorem, which is a variant of Theorem 1 from [5] extended to an unbounded domain and more general source terms.
**Theorem 1.2**.: _Let \(T\in\mathbb{R}^{+}\) and \((u^{\varepsilon}_{0},g^{\varepsilon})_{\varepsilon>0}\) be a family of initial data and source terms satisfying the assumptions of Theorem 1.1, which converges to \((u_{0},g)\) strongly in \(H^{1}(\mathbb{R}^{2})\times L^{1}([0,T];H^{1}(\mathbb{R}^{2}))\), as \(\varepsilon\to 0\). Then, it holds that_
\[\lim_{\varepsilon\to 0}\sup_{t\in[0,T]}\left\|u^{\varepsilon}(t)-u(t)\right\|_{H^{1}( \mathbb{R}^{2})}=0,\]
_where \(u^{\varepsilon}\) and \(u\) are the unique solutions of (1.1) and (1.2), respectively._
_Remark 1.1_.: Observe in the above theorem that the convergence holds in the whole space \(\mathbb{R}^{2}\). Moreover, employing the Gagliardo-Nirenberg and Hölder inequalities, note that one can show the convergence of the velocity in \(L^{\infty}_{t}(L^{\infty}\cap W^{1,p})\), for all \(p\in[2,\infty)\). Accordingly, Theorem 1.2 provides a convergence result in spaces similar to those given in Theorem 1.1.
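For instance, one possible route (a sketch only, relying on the Calderón-Zygmund estimate for the Biot-Savart law) consists in writing, for any \(2\leq p<\infty\),
\[\left\|\nabla(u^{\varepsilon}-u)\right\|_{L^{p}}\lesssim p\left\|\omega^{\varepsilon}-\omega\right\|_{L^{p}}\leq p\left\|\omega^{\varepsilon}-\omega\right\|_{L^{2}}^{\frac{2}{p}}\left(\left\|\omega^{\varepsilon}\right\|_{L^{\infty}}+\left\|\omega\right\|_{L^{\infty}}\right)^{1-\frac{2}{p}},\]
where \(\omega^{\varepsilon}=\operatorname{curl}u^{\varepsilon}\) and \(\omega=\operatorname{curl}u\), and in combining this with the two-dimensional Gagliardo-Nirenberg inequality
\[\left\|u^{\varepsilon}-u\right\|_{L^{\infty}(\mathbb{R}^{2})}\lesssim\left\|u^{\varepsilon}-u\right\|_{L^{2}}^{\frac{p-2}{2p-2}}\left\|\nabla(u^{\varepsilon}-u)\right\|_{L^{p}}^{\frac{p}{2p-2}},\qquad p>2,\]
so that the \(H^{1}\) convergence of Theorem 1.2, together with the uniform vorticity bounds of Theorem 1.1, yields the stated convergence.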
The novelty of our work lies in the method of proof of Theorem 1.2, given in Section 1.5, below, and in applications of this method in Sections 2 and 3, later on, where we give our second and third main results in Theorems 2.1 and 3.2, respectively.
It is to be emphasized that our approach also allows us to extend Theorem 1.2 to an inviscid limit. More precisely, considering the Navier-Stokes system
\[\left\{\begin{aligned} \partial_{t}u^{\varepsilon}+u^{ \varepsilon}\cdot\nabla u^{\varepsilon}-\varepsilon\Delta u^{\varepsilon}+ \nabla p^{\varepsilon}&=g^{\varepsilon},\\ \operatorname{div}u^{\varepsilon}&=0,\\ u^{\varepsilon}{}_{|_{t=0}}&=u^{\varepsilon}_{0}, \end{aligned}\right. \tag{1.4}\]
we have the next result, whose proof is a straightforward adaptation of the proof of Theorem 1.2.
**Theorem 1.3**.: _Let \(T\in\mathbb{R}^{+}\) and \((u^{\varepsilon}_{0},g^{\varepsilon})_{\varepsilon>0}\) be a family of initial data and source terms satisfying the assumptions of Theorem 1.1, which converges to \((u_{0},g)\) strongly in \(H^{1}(\mathbb{R}^{2})\times L^{1}([0,T];H^{1}(\mathbb{R}^{2}))\), as \(\varepsilon\to 0\). Then, it holds that_
\[\lim_{\varepsilon\to 0}\sup_{t\in[0,T]}\left\|u^{\varepsilon}(t)-u(t) \right\|_{H^{1}(\mathbb{R}^{2})}=0,\]
_where \(u^{\varepsilon}\) and \(u\) are the unique solutions of (1.4) and (1.2), respectively._
Note that there are several recent results establishing the strong inviscid limit of solutions of (1.4) (see [4, 5, 10], for instance). Therefore, let us now clarify the novelty in our result as well as the key differences with other works.
### Overview of previous results
We first note that the convergence in \(L^{\infty}_{t}L^{2}(\mathbb{R}^{2})\) of the velocity \(u^{\varepsilon}\) to a solution of Euler's equations is standard and follows from classical stability estimates based on the uniform bounds \(\operatorname{curl}u^{\varepsilon}\in L^{\infty}_{t,x}\) first established by Yudovich [9].
The corresponding convergence in \(L^{\infty}_{t}\dot{H}^{1}(\Omega)\), with \(\Omega=\mathbb{T}^{2}\), has been established much more recently by Constantin, Drivas and Elgindi in [5]. Their method of proof consists in approaching both systems (1.2) and (1.4) by two linear transport equations with regularized inputs, where the solutions are advected by the velocities \(u\) and \(u^{\varepsilon}\), respectively.
Hence, the problem of proving the convergence of the velocity in \(L^{\infty}_{t}\dot{H}^{1}(\Omega)\) (or equivalently the vorticity in \(L^{\infty}_{t}L^{2}(\Omega)\)) is then reduced to proving a stability result for the regularized systems. In [5], in order to study the stability of the vorticities in \(L^{\infty}_{t}L^{2}\) in the asymptotic regime \(\varepsilon\to 0\), the authors perform an \(L^{2}\)-energy estimate in the vorticity formulation of the regularized linear systems. They are then able to close the estimates by relying on two essential lemmas.
The first one [5, Lemma 1] is, roughly speaking, a variation of the John-Nirenberg inequality, which states that, for any \(f\in BMO(\mathbb{R}^{2})\) and any compact domain \(K\subset\mathbb{R}^{2}\), there exist \(\beta>0\) and \(C>0\) such that
\[\int_{K}\exp(\beta|f(x)|)dx\leq C.\]
In [5], the authors utilize a variant of the preceding inequality for \(f=\nabla u\), which belongs to \(BMO\) as soon as \(\nabla\times u\) belongs to \(L^{\infty}\).
The second crucial ingredient in the proof given in [5] establishes a loss estimate for the regularized vorticities in
\[L^{\infty}\left([0,T];\dot{W}^{1,p(t)}\right),\]
where \(p(t)\) is a decreasing continuous function of time. However, this bound is not uniform with respect to the regularizing parameter in the inputs.
Although the proof from [5] hinges on the boundedness of the torus \(\mathbb{T}^{2}\), we emphasize that a careful and suitable adaptation of that proof would lead to similar results in the whole plane \(\mathbb{R}^{2}\). Rather than rigorously justifying the extension of the arguments from [5] to the whole domain \(\mathbb{R}^{2}\), we choose to present hereafter a different approach, which does not rely on properties of \(BMO\) functions or time-dependent Sobolev norms. Furthermore, this new approach will then play a key role in the proofs of the important applications of Sections 2 and 3, below.
Finally, we also refer to [4, 10] for different proofs using techniques from the celebrated work of DiPerna and Lions [7] on renormalized solutions of general transport equations.
### A new method of proof
Our proof of Theorem 1.2 establishes the \(L^{\infty}_{t}\dot{H}^{1}\) stability of velocity fields without studying the equations satisfied by the difference \(u^{\varepsilon}-u\) in \(L^{\infty}_{t}\dot{H}^{1}\).
Instead, we exploit the convergence of velocities in \(L^{\infty}_{t}L^{2}\), which, as previously emphasized, is a classical consequence of the stability estimates established by Yudovich [9], in combination with a time-dependent version of the following simple but essential extrapolation lemma. This result provides a useful criterion which allows us to recover strong compactness properties in endpoint functional settings. In particular, in the notation of the lemma, we will be using the functional spaces \(\dot{H}^{s_{0}}=L^{2}\) and \(\dot{H}^{s_{1}}=\dot{H}^{1}\).
**Lemma 1.4** (Compactness Extrapolation Lemma).: _Fix the dimension \(d\geq 1\). Let \(s_{0}<s_{1}\) be two real numbers and \((u^{\varepsilon})_{\varepsilon\in(0,1]}\) be a family of bounded functions in \(\dot{H}^{s_{0}}\cap\dot{H}^{s_{1}}(\mathbb{R}^{d})\), uniformly in \(\varepsilon\in(0,1]\). Further assume that_
\[u^{\varepsilon}\to u\quad\text{in }\dot{H}^{s_{0}}(\mathbb{R}^{d}).\]
_Then, it holds that_
\[u^{\varepsilon}\to u\quad\text{in }\dot{H}^{s_{1}}(\mathbb{R}^{d})\]
_if and only if_
\[\lim_{\varepsilon\to 0}\left\|\mathds{1}_{|D|\geq\Theta_{\varepsilon}}u^{ \varepsilon}\right\|_{\dot{H}^{s_{1}}}=0,\]
_for some \(\Theta_{\varepsilon}>0\) satisfying_
\[\lim_{\varepsilon\to 0}\Theta_{\varepsilon}=\infty\quad\text{and}\quad\lim_{ \varepsilon\to 0}\Theta_{\varepsilon}^{s_{1}-s_{0}}\left\|u^{\varepsilon}-u \right\|_{\dot{H}^{s_{0}}}=0.\]
Proof.: The proof straightforwardly follows from the observations that
\[\left\|\mathds{1}_{|D|\leq\Theta_{\varepsilon}}(u^{\varepsilon}-u)\right\|_{ \dot{H}^{s_{1}}}\leq\Theta_{\varepsilon}^{s_{1}-s_{0}}\left\|u^{\varepsilon} -u\right\|_{\dot{H}^{s_{0}}},\]
the fact that the right-hand side above vanishes due to the assumptions on \(\Theta_{\varepsilon}\), and the convergence
\[\lim_{\varepsilon\to 0}\left\|\mathds{1}_{|D|\geq\Theta_{\varepsilon}}u\right\|_{ \dot{H}^{s_{1}}}=0,\]
for any fixed \(u\in\dot{H}^{s_{1}}\).
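For the reader's convenience, these observations can be combined through the following elementary splitting, which encapsulates the whole argument:
\[\left\|u^{\varepsilon}-u\right\|_{\dot{H}^{s_{1}}}\leq\Theta_{\varepsilon}^{s_{1}-s_{0}}\left\|u^{\varepsilon}-u\right\|_{\dot{H}^{s_{0}}}+\left\|\mathds{1}_{|D|\geq\Theta_{\varepsilon}}u^{\varepsilon}\right\|_{\dot{H}^{s_{1}}}+\left\|\mathds{1}_{|D|\geq\Theta_{\varepsilon}}u\right\|_{\dot{H}^{s_{1}}},\]
so that the convergence in \(\dot{H}^{s_{1}}\) follows from the assumptions on \(\Theta_{\varepsilon}\) and the vanishing of the high frequencies of \(u^{\varepsilon}\); conversely, if \(u^{\varepsilon}\) converges in \(\dot{H}^{s_{1}}\), then
\[\left\|\mathds{1}_{|D|\geq\Theta_{\varepsilon}}u^{\varepsilon}\right\|_{\dot{H}^{s_{1}}}\leq\left\|u^{\varepsilon}-u\right\|_{\dot{H}^{s_{1}}}+\left\|\mathds{1}_{|D|\geq\Theta_{\varepsilon}}u\right\|_{\dot{H}^{s_{1}}}\longrightarrow 0,\]
which establishes the necessity of the criterion.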
Thus, by virtue of the compactness criterion given in the previous lemma, our work will be reduced to the control of some high frequencies of the vorticities \(\omega^{\varepsilon}\stackrel{{\rm def}}{{=}}\operatorname{ curl}u^{\varepsilon}\) in \(L^{\infty}_{t}L^{2}\). This is crucial and will be achieved by performing an \(L^{2}\)-energy estimate on dyadic blocks of \(\omega^{\varepsilon}\) which is compatible with the nonlinear structure of the equations and relies on the remarkable identities established in Lemma 1.6, later on.
### Notation
Before moving on to the proofs of our theorems, allow us to clarify some elements of notation that we are about to use.
We denote by \(C>0\) any universal constant that is independent of the main variables of the given problem. Accordingly, we use the inequality \(A\lesssim B\) when there exists a constant \(C>0\) independent of \(A\) and \(B\) such that \(A\leq CB\). In general, the constant \(C\) is allowed to change from one line to the next.
Moreover, for any bounded function \(m:\mathbb{R}^{2}\to\mathbb{R}\), the Fourier multiplier operator \(m(D)\) is defined by
\[m(D)f\stackrel{{\mathrm{def}}}{{=}}\mathcal{F}^{-1}\big{(}m( \xi)\mathcal{F}f(\xi)\big{)},\]
for any tempered distribution \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{2})\), where \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) denote the Fourier transform and its inverse, respectively.
Finally, the commutator between two operators \(Q\) and \(S\) is denoted by \([Q,S]\) and is defined by the relation
\[[Q,S]\omega\stackrel{{\mathrm{def}}}{{=}}QS\omega-SQ\omega,\]
for any suitable \(\omega\).
### Proof of Theorem 1.2
We proceed in several steps:
1. First, in Section 1.5.1, we recall the classical ideas leading to the convergence \(u^{\varepsilon}\to u\) in \(L^{\infty}_{t}L^{2}\).
2. Then, in Section 1.5.2, we discuss the convergence of some suitable low frequencies of vorticities with a simple estimate based on the convergence of velocities in \(L^{\infty}_{t}L^{2}\).
3. Finally, in Section 1.5.3, in the spirit of the Compactness Extrapolation Lemma (Lemma 1.4), we identify and control the remaining high frequencies of vorticities, thereby establishing their convergence and completing the proof of Theorem 1.2.
4. The remaining sections, i.e., Sections 1.5.4 and 1.5.5, are dedicated to essential technical results which are justified separately, for the sake of clarity.
#### 1.5.1. Convergence in \(L^{\infty}_{t}L^{2}\)
The stability of the two-dimensional Euler equations in \(L^{\infty}_{t}L^{2}\) is classical and follows from the uniqueness methods employed in [9]. We also refer to [5, Lemma 4] for a different proof in the torus which can be adapted to the whole plane.
More specifically, it is possible to show that if
\[\widetilde{u}^{\varepsilon}\stackrel{{\mathrm{def}}}{{=}}u^{ \varepsilon}-u\quad\text{and}\quad\widetilde{g}^{\varepsilon}\stackrel{{ \mathrm{def}}}{{=}}g^{\varepsilon}-g\]
are small in the sense that
\[\left\|\widetilde{u}^{\varepsilon}_{0}\right\|_{L^{2}}+\left\|\widetilde{g}^{ \varepsilon}\right\|_{L^{1}([0,T];L^{2})}<C_{*}e^{-\exp(C_{*}T)},\]
where
\[C_{*}\stackrel{{\mathrm{def}}}{{=}}C\sup_{\varepsilon>0}\left( \left\|(\omega_{0},\omega_{0}^{\varepsilon})\right\|_{L^{2}\cap L^{\infty}}+ \left\|(\operatorname{curl}g,\operatorname{curl}g^{\varepsilon})\right\|_{L^ {1}([0,T];L^{2}\cap L^{\infty})}\right), \tag{1.5}\]
for some universal constant \(C>0\), then one has the stability estimate
\[\left\|\widetilde{u}^{\varepsilon}\right\|_{L^{\infty}([0,T];L^{2})}\lesssim_ {C_{*},T}\Big{(}\left\|\widetilde{u}^{\varepsilon}_{0}\right\|_{L^{2}}+\left\| \widetilde{g}^{\varepsilon}\right\|_{L^{1}([0,T];L^{2})}\Big{)}^{\exp(-C_{*}T)}. \tag{1.6}\]
For the sake of completeness, we now outline the idea leading to (1.6) by following the modern proof given in [2, Section 7.3.3]. To that end, we begin by observing that \(\widetilde{u}^{\varepsilon}\) and \(\widetilde{g}^{\varepsilon}\) solve the system
\[\begin{cases}\partial_{t}\widetilde{u}^{\varepsilon}+u^{\varepsilon}\cdot \nabla\widetilde{u}^{\varepsilon}+\nabla\widetilde{p}^{\varepsilon}=- \widetilde{u}^{\varepsilon}\cdot\nabla u+\widetilde{g}^{\varepsilon},\\ \operatorname{div}\widetilde{u}^{\varepsilon}=0,\\ \widetilde{u}^{\varepsilon}_{|_{t=0}}=u_{0}^{\varepsilon}-u_{0}.\end{cases}\]
Then, performing a standard \(L^{2}\)-energy estimate yields, for any \(t\in[0,T]\),
\[\frac{1}{2}\frac{d}{dt}\left\|\widetilde{u}^{\varepsilon}(t)\right\|_{L^{2}}^{2} \leq\left|\int_{\mathbb{R}^{2}}\nabla u:(\widetilde{u}^{\varepsilon}\otimes \widetilde{u}^{\varepsilon})(t,x)dx\right|+\left\|\widetilde{g}^{\varepsilon}( t)\right\|_{L^{2}}\left\|\widetilde{u}^{\varepsilon}(t)\right\|_{L^{2}}.\]
Therefore, by Hölder's inequality and the classical Biot-Savart estimate (see [2, Section 7.1.1])
\[\left\|\nabla u\right\|_{L^{q}}\leq Cq\left\|\omega\right\|_{L^{q}},\]
where \(q=q(t)\in(2,\infty)\) is allowed to depend on \(t\in[0,T]\), we infer that
\[\frac{1}{2}\frac{d}{dt}\left\|\widetilde{u}^{\varepsilon}(t) \right\|_{L^{2}}^{2} \leq\left\|\nabla u(t)\right\|_{L^{q}}\left\|\widetilde{u}^{ \varepsilon}(t)\right\|_{L^{\frac{2q}{q-1}}}^{2}+\left\|\widetilde{g}^{ \varepsilon}(t)\right\|_{L^{2}}\left\|\widetilde{u}^{\varepsilon}(t)\right\|_ {L^{2}}\] \[\leq Cq\left\|\omega(t)\right\|_{L^{2}\cap L^{\infty}}\left\| \widetilde{u}^{\varepsilon}(t)\right\|_{L^{2}}^{2-\frac{2}{q}}\left\| \widetilde{u}^{\varepsilon}(t)\right\|_{L^{\infty}}^{\frac{2}{q}}+\left\| \widetilde{g}^{\varepsilon}(t)\right\|_{L^{2}}\left\|\widetilde{u}^{ \varepsilon}(t)\right\|_{L^{2}}.\]
Thus, by virtue of the control (1.3) and the Gagliardo-Nirenberg interpolation inequality
\[\left\|\widetilde{u}^{\varepsilon}(t)\right\|_{L^{\infty}}\lesssim\left\| \widetilde{u}^{\varepsilon}(t)\right\|_{L^{2}}^{\frac{1}{2}}\left\|\widetilde {\omega}^{\varepsilon}(t)\right\|_{L^{\infty}}^{\frac{1}{2}},\]
we find that
\[\frac{1}{2}\left\|\widetilde{u}^{\varepsilon}(t)\right\|_{L^{2}}^ {2} \leq\frac{1}{2}\left\|\widetilde{u}_{0}^{\varepsilon}\right\|_{L^ {2}}^{2}+C\int_{0}^{t}q(\tau)\left\|\omega(\tau)\right\|_{L^{2}\cap L^{\infty }}\left\|\widetilde{u}^{\varepsilon}(\tau)\right\|_{L^{2}}^{2-\frac{1}{q(\tau) }}\left\|\widetilde{\omega}^{\varepsilon}(\tau)\right\|_{L^{\infty}}^{\frac{1 }{q(\tau)}}d\tau\] \[\leq\frac{1}{2}\left\|\widetilde{u}_{0}^{\varepsilon}\right\|_{L^ {2}}^{2}+\int_{0}^{t}q(\tau)C_{*}^{1+\frac{1}{q(\tau)}}\left\|\widetilde{u}^{ \varepsilon}(\tau)\right\|_{L^{2}}^{2-\frac{1}{q(\tau)}}d\tau\] \[\quad+\left(\int_{0}^{t}\left\|\widetilde{g}^{\varepsilon}(\tau) \right\|_{L^{2}}d\tau\right)^{2}+\frac{1}{4}\left\|\widetilde{u}^{\varepsilon }\right\|_{L^{\infty}([0,t);L^{2})}^{2},\]
where \(C_{*}\) is defined by (1.5).
Now, introducing the continuous function
\[f^{\varepsilon}(t)\stackrel{{\rm def}}{{=}}C_{*}^{-1}\left\| \widetilde{u}^{\varepsilon}\right\|_{L^{\infty}([0,t);L^{2})},\]
the preceding estimate can be recast as
\[\frac{1}{4}\big{(}f^{\varepsilon}(t)\big{)}^{2}\leq\frac{1}{2}\big{(}f^{ \varepsilon}(0)\big{)}^{2}+C_{*}\int_{0}^{t}q(\tau)\big{(}f^{\varepsilon}( \tau)\big{)}^{2-\frac{1}{q(\tau)}}d\tau+\left(C_{*}^{-1}\int_{0}^{t}\left\| \widetilde{g}^{\varepsilon}(\tau)\right\|_{L^{2}}d\tau\right)^{2}.\]
Therefore, further assuming that
\[f^{\varepsilon}(t)\leq 1\]
on a time interval \([0,t_{*}]\), for some \(t_{*}\in(0,T]\), and setting
\[q(t)\stackrel{{\rm def}}{{=}}2\log\left(\frac{e}{\big{(}f^{ \varepsilon}(t)\big{)}^{2}}\right),\]
we deduce that
\[\big{(}f^{\varepsilon}(t)\big{)}^{2}\leq 2\big{(}f^{\varepsilon}(0)\big{)}^{2}+C_{0}C_{* }\int_{0}^{t}\log\left(\frac{e}{f^{\varepsilon}(\tau)^{2}}\right)\big{(}f^{ \varepsilon}(\tau)\big{)}^{2}d\tau+4\left(C_{*}^{-1}\int_{0}^{t}\left\| \widetilde{g}^{\varepsilon}(\tau)\right\|_{L^{2}}d\tau\right)^{2},\]
where we employed the observation that
\[q(t)\big{(}f^{\varepsilon}(t)\big{)}^{2-\frac{1}{q(t)}}\leq C_{0}\log\left( \frac{e}{\big{(}f^{\varepsilon}(t)\big{)}^{2}}\right)\big{(}f^{\varepsilon}( t)\big{)}^{2},\]
for some \(C_{0}\geq 1\).
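For completeness, let us indicate one elementary way to check this observation, with the explicit (and by no means optimal) value \(C_{0}=2e^{\frac{1}{4}}\): whenever \(0<f^{\varepsilon}(t)\leq 1\) (the case \(f^{\varepsilon}(t)=0\) being trivial), writing \(g\stackrel{{\mathrm{def}}}{{=}}\log\big{(}1/(f^{\varepsilon}(t))^{2}\big{)}\geq 0\), one has \(q(t)=2(1+g)\) and
\[\big{(}f^{\varepsilon}(t)\big{)}^{-\frac{1}{q(t)}}=\exp\left(\frac{g}{2q(t)}\right)=\exp\left(\frac{g}{4(1+g)}\right)\leq e^{\frac{1}{4}},\]
so that
\[q(t)\big{(}f^{\varepsilon}(t)\big{)}^{2-\frac{1}{q(t)}}\leq 2e^{\frac{1}{4}}(1+g)\big{(}f^{\varepsilon}(t)\big{)}^{2}=2e^{\frac{1}{4}}\log\left(\frac{e}{\big{(}f^{\varepsilon}(t)\big{)}^{2}}\right)\big{(}f^{\varepsilon}(t)\big{)}^{2}.\]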
At last, by employing Osgood's lemma [2, Lemma 3.4] (see also [8, Lemma A.1], which can be applied directly to the present case), we arrive at the desired bound
\[\frac{1}{eC_{*}}\left\|\widetilde{u}^{\varepsilon}\right\|_{L^{\infty}([0,t_{*} ];L^{2})}\leq\left(\frac{\sqrt{2}}{eC_{*}}\left\|\widetilde{u}_{0}^{ \varepsilon}\right\|_{L^{2}}+\frac{2}{eC_{*}}\left\|\widetilde{g}^{\varepsilon }\right\|_{L^{1}([0,t_{*}];L^{2})}\right)^{\exp(-C_{0}C_{*}t_{*})}.\]
In particular, assuming that
\[\frac{\sqrt{2}}{eC_{*}}\left\|\widetilde{u}_{0}^{\varepsilon}\right\|_{L^{2}} +\frac{2}{eC_{*}}\left\|\widetilde{g}^{\varepsilon}\right\|_{L^{1}([0,T];L^{2 })}<e^{-\exp(C_{0}C_{*}T)}\leq 1\]
leads to the estimate
\[\frac{1}{eC_{*}}\left\|\widetilde{u}^{\varepsilon}\right\|_{L^{\infty}([0,t_{ *}];L^{2})}\leq\left(\frac{\sqrt{2}}{eC_{*}}\left\|\widetilde{u}_{0}^{ \varepsilon}\right\|_{L^{2}}+\frac{2}{eC_{*}}\left\|\widetilde{g}^{ \varepsilon}\right\|_{L^{1}([0,T];L^{2})}\right)^{\exp(-C_{0}C_{*}T)}<e^{-1}.\]
Then, a classical continuation argument allows us to deduce that \(t_{*}\), in the left-hand side above, can be chosen to be equal to \(T\). This establishes the stability estimate (1.6), by possibly redefining \(C_{*}\) up to a multiplicative universal constant.
#### 1.5.2. Convergence of low frequencies in \(L^{\infty}_{t}\dot{H}^{1}\)
As in the proof of Lemma 1.4, it is readily seen that a direct use of Plancherel's theorem gives that
\[\left\|\mathds{1}_{|D|\leq\Theta_{\varepsilon}}(\omega^{\varepsilon}-\omega) \right\|_{L^{\infty}([0,T];L^{2})}\leq\Theta_{\varepsilon}\left\|u^{ \varepsilon}-u\right\|_{L^{\infty}([0,T];L^{2})},\]
for any \(\Theta_{\varepsilon}>0\). In particular, if
\[\lim_{\varepsilon\to 0}\Theta_{\varepsilon}=\infty\quad\text{and}\quad\lim_{ \varepsilon\to 0}\Theta_{\varepsilon}\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T]; L^{2})}=0, \tag{1.7}\]
we deduce that
\[\lim_{\varepsilon\to 0}\left\|\phi(\Theta_{\varepsilon}^{-1}D)(\omega^{ \varepsilon}-\omega)\right\|_{L^{\infty}([0,T];L^{2})}=0, \tag{1.8}\]
for any compactly supported cutoff function \(\phi\in L^{\infty}(\mathbb{R}^{2})\).
Moreover, if \(\phi\) is assumed to be supported away from the origin, then the assumptions on the behavior of \(\Theta_{\varepsilon}\) allow us to obtain that
\[\lim_{\varepsilon\to 0}\left\|\phi(\Theta_{\varepsilon}^{-1}D)\omega\right\|_{L^{ \infty}([0,T];L^{2})}=0. \tag{1.9}\]
This convergence hinges upon the time continuity of Yudovich's solutions. Indeed, let us suppose, by contradiction, that
\[\left\|\phi(\Theta_{\varepsilon_{k}}^{-1}D)\omega\right\|_{L^{\infty}([0,T];L ^{2})}\geq\delta,\]
for some sequence \(\varepsilon_{k}\to 0\), as \(k\to\infty\), and some constant \(\delta>0\). Then, by continuity, writing
\[\left\|\phi(\Theta_{\varepsilon_{k}}^{-1}D)\omega\right\|_{L^{\infty}([0,T];L ^{2})}=\left\|\phi(\Theta_{\varepsilon_{k}}^{-1}D)\omega(t_{k})\right\|_{L^{2 }},\]
for some suitable \(t_{k}\in[0,T]\), and assuming, by compactness, that \(t_{k}\to t\), we see that the control
\[\left\|\phi(\Theta_{\varepsilon_{k}}^{-1}D)\omega(t_{k})\right\|_{L^{2}}\leq \left\|\phi\right\|_{L^{\infty}}\left\|\omega(t_{k})-\omega(t)\right\|_{L^{2}} +\left\|\phi(\Theta_{\varepsilon_{k}}^{-1}D)\omega(t)\right\|_{L^{2}}\]
implies that
\[\left\|\phi(\Theta_{\varepsilon_{k}}^{-1}D)\omega\right\|_{L^{\infty}([0,T];L ^{2})}\to 0,\]
which is impossible. It follows that (1.9) holds true.
We will make use of the preceding convergence properties with the particular choice of cutoff function \(\phi(D)=\mathds{1}_{M\leq|D|\leq N}\), for any \(0<M<N\), which yields
\[\lim_{\varepsilon\to 0}\left\|\mathds{1}_{M\Theta_{\varepsilon}\leq|D|\leq N \Theta_{\varepsilon}}\omega\right\|_{L^{\infty}([0,T];L^{2})}=0,\]
and
\[\lim_{\varepsilon\to 0}\left\|\mathds{1}_{M\Theta_{\varepsilon}\leq|D|\leq N \Theta_{\varepsilon}}\omega^{\varepsilon}\right\|_{L^{\infty}([0,T];L^{2})}=0, \tag{1.10}\]
for any choice of parameters \(\Theta_{\varepsilon}\) satisfying (1.7).
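Let us note, for later orientation, that such parameters always exist: for instance, whenever \(u^{\varepsilon}\neq u\), one admissible (though by no means unique) choice is
\[\Theta_{\varepsilon}\stackrel{{\mathrm{def}}}{{=}}\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T];L^{2})}^{-\frac{1}{2}},\]
which satisfies (1.7) as soon as the convergence of velocities from Section 1.5.1 holds. A quantitative choice of \(\Theta_{\varepsilon}\), adapted to additional regularity of the data, will be performed in Section 2, below.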
#### 1.5.3. Convergence of high frequencies in \(L^{\infty}_{t}\dot{H}^{1}\)
We first introduce some notation. Let
\[\mathcal{C}\stackrel{{\mathrm{def}}}{{=}}\left\{\xi\in\mathbb{R}^{ 2}:\frac{3}{4}\leq|\xi|\leq\frac{8}{3}\right\},\qquad\mathcal{B}\stackrel{{ \mathrm{def}}}{{=}}\left\{\xi\in\mathbb{R}^{2}:|\xi|\leq\frac{4}{3} \right\},\]
and consider smooth radial functions \(\varphi\in\mathcal{D}(\mathcal{C})\) and \(\psi\in\mathcal{D}(\mathcal{B})\) with
\[0\leq\varphi(\xi),\psi(\xi)\leq 1,\quad\text{for all }\xi\in\mathbb{R}^{2},\]
and
\[\psi(\xi)=1,\quad\text{for all }|\xi|\leq 1.\]
Furthermore, it is possible to ensure that the family of functions
\[\varphi_{j}(\cdot)\stackrel{{\mathrm{def}}}{{=}}\varphi(2^{-j}\cdot)\in\mathcal{D}(2^{j}\mathcal{C}),\quad j\in\mathbb{Z},\]
provides us with partitions of unity
\[1=\sum_{j\in\mathbb{Z}}\varphi_{j}(\xi)=\sum_{j\geq 0}\varphi_{j}(\xi)+\psi( \xi),\quad\text{for all }\xi\in\mathbb{R}^{2}.\]
In this case, the corresponding convolution operators
\[S_{0}^{\varepsilon}\stackrel{{\mathrm{def}}}{{=}}\psi(\Theta_{ \varepsilon}^{-1}D),\qquad\Delta_{j}^{\varepsilon}\stackrel{{ \mathrm{def}}}{{=}}\varphi_{j}(\Theta_{\varepsilon}^{-1}D), \tag{1.11}\]
where \(\Theta_{\varepsilon}>0\) is any parameter satisfying (1.7), satisfy that
\[S_{0}^{\varepsilon}+\sum_{j\geq 0}\Delta_{j}^{\varepsilon}=\mathrm{Id}.\]
Next, further introducing the Fourier multiplier operators
\[\sqrt{S_{0}^{\varepsilon}}\stackrel{{\mathrm{def}}}{{=}}\sqrt{ \psi(\Theta_{\varepsilon}^{-1}D)}\qquad\text{and}\qquad\sqrt{\mathrm{Id}-S_{0 }^{\varepsilon}}\stackrel{{\mathrm{def}}}{{=}}\sqrt{1-\psi( \Theta_{\varepsilon}^{-1}D)},\]
and observing that
\[\left(\sum_{j\geq 0}\varphi_{j}^{\varepsilon}(\xi)\right)^{2}\leq\sum_{j \geq 0}\varphi_{j}^{\varepsilon}(\xi),\]
it holds, for any given \(f\in L^{2}\), that
\[\left\|(\mathrm{Id}-S_{0}^{\varepsilon})f\right\|_{L^{2}}\leq\left\|\sqrt{ \mathrm{Id}-S_{0}^{\varepsilon}}f\right\|_{L^{2}}, \tag{1.12}\]
which will come in handy later on.
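Indeed, (1.12) is an immediate consequence of Plancherel's theorem: since \(0\leq\psi\leq 1\), the symbols satisfy \(\big{(}1-\psi(\Theta_{\varepsilon}^{-1}\xi)\big{)}^{2}\leq 1-\psi(\Theta_{\varepsilon}^{-1}\xi)\), whence
\[\int_{\mathbb{R}^{2}}\big{(}1-\psi(\Theta_{\varepsilon}^{-1}\xi)\big{)}^{2}\left|\mathcal{F}f(\xi)\right|^{2}d\xi\leq\int_{\mathbb{R}^{2}}\big{(}1-\psi(\Theta_{\varepsilon}^{-1}\xi)\big{)}\left|\mathcal{F}f(\xi)\right|^{2}d\xi,\]
which is precisely (1.12).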
Now, observe that (1.9) yields
\[\lim_{\varepsilon\to 0}\left\|(\mathrm{Id}-S_{0}^{\varepsilon})\omega\right\|_{L^{ \infty}([0,T];L^{2})}=0.\]
Therefore, by (1.8), we conclude that the convergence
\[\lim_{\varepsilon\to 0}\left\|\omega^{\varepsilon}-\omega\right\|_{L^{\infty}([0,T] ;L^{2})}=0\]
is equivalent to
\[\lim_{\varepsilon\to 0}\left\|(\mathrm{Id}-S_{0}^{\varepsilon})\omega^{ \varepsilon}\right\|_{L^{\infty}([0,T];L^{2})}=0, \tag{1.13}\]
which can be interpreted as a time-dependent version of the compactness criterion given in Lemma 1.4.
Thus, in order to prove Theorem 1.2, there only remains to establish (1.13). To that end, we first recall that \(\omega^{\varepsilon}\) solves the transport equation
\[\partial_{t}\omega^{\varepsilon}+u^{\varepsilon}\cdot\nabla\omega^{ \varepsilon}=\operatorname{curl}g^{\varepsilon}. \tag{1.14}\]
Then, formally taking the inner product of (1.14) with \(\sum_{j\geq 0}\Delta_{j}^{\varepsilon}\omega^{\varepsilon}\) and using the divergence-free condition of \(u^{\varepsilon}\), we find that
\[\begin{split}\frac{1}{2}\left\|\sqrt{\operatorname{Id}-S_{0}^{ \varepsilon}}\omega^{\varepsilon}(t)\right\|_{L^{2}}^{2}&=\frac{1 }{2}\left\|\sqrt{\operatorname{Id}-S_{0}^{\varepsilon}}\omega_{0}^{\varepsilon }\right\|_{L^{2}}^{2}+\int_{0}^{t}\int_{\mathbb{R}^{2}}\operatorname{curl}g^{ \varepsilon}(\operatorname{Id}-S_{0}^{\varepsilon})\omega^{\varepsilon}( \tau,x)dxd\tau\\ &\quad-\underbrace{\int_{0}^{t}\int_{\mathbb{R}^{2}}u^{\varepsilon }\cdot\nabla\left(\sum_{j\leq-1}\Delta_{j}^{\varepsilon}\omega^{\varepsilon} \right)\sum_{j\geq 0}\Delta_{j}^{\varepsilon}\omega^{\varepsilon}(\tau,x)dxd\tau}_{ \stackrel{{\mathrm{def}}}{{=}}\,\mathcal{J}(t)}.\end{split} \tag{1.15}\]
For the sake of completeness, we provide a rigorous justification of (1.15) in Section 1.5.4, below.
We show now how to estimate \(\mathcal{J}(t)\). To that end, we split it into a sum of three terms
\[\begin{split}&\mathcal{J}_{1}(t)\stackrel{{\text{ def}}}{{=}}\int_{0}^{t}\int_{\mathbb{R}^{2}}u^{\varepsilon}\cdot\nabla\left( \Delta_{-1}^{\varepsilon}\omega^{\varepsilon}(\tau)\right)\Delta_{0}^{ \varepsilon}\omega^{\varepsilon}(\tau,x)dxd\tau,\\ &\mathcal{J}_{2}(t)\stackrel{{\text{def}}}{{=}}\int_{0 }^{t}\int_{\mathbb{R}^{2}}u^{\varepsilon}\cdot\nabla\left(\sum_{j\leq-2} \Delta_{j}^{\varepsilon}\omega^{\varepsilon}\right)\sum_{j\geq 0}\Delta_{j}^{ \varepsilon}\omega^{\varepsilon}(\tau,x)dxd\tau,\\ &\mathcal{J}_{3}(t)\stackrel{{\text{def}}}{{=}}\int_{0 }^{t}\int_{\mathbb{R}^{2}}u^{\varepsilon}\cdot\nabla\left(\Delta_{-1}^{ \varepsilon}\omega^{\varepsilon}\right)\sum_{j\geq 1}\Delta_{j}^{\varepsilon} \omega^{\varepsilon}(\tau,x)dxd\tau,\end{split} \tag{1.16}\]
which we control separately.
The term \(\mathcal{J}_{1}\) is the most difficult to estimate. Specifically, one can show that
\[\left|\mathcal{J}_{1}(t)\right|\lesssim\sum_{i=-1}^{0}\int_{0}^{t}\left\| \nabla u^{\varepsilon}(\tau)\right\|_{L^{2}}\left\|\Delta_{i}^{\varepsilon} \omega^{\varepsilon}(\tau)\right\|_{L^{4}}^{2}d\tau. \tag{1.17}\]
For the sake of clarity, we defer the justification of this estimate to Section 1.5.5, below.
Then, observe that
\[\left|\mathcal{J}_{1}(t)\right|\lesssim\left\|\omega^{\varepsilon}\right\|_{L ^{\infty}([0,t];L^{2})}\left\|\omega^{\varepsilon}\right\|_{L^{\infty}([0,t]; L^{\infty})}\sum_{i=-1}^{0}\left\|\Delta_{i}^{\varepsilon}\omega^{\varepsilon} \right\|_{L^{1}([0,t];L^{2})},\]
which follows from Hölder's inequality, in the form \(\left\|f\right\|_{L^{4}}^{2}\leq\left\|f\right\|_{L^{2}}\left\|f\right\|_{L^{\infty}}\), combined with the Biot-Savart estimate \(\left\|\nabla u^{\varepsilon}\right\|_{L^{2}}\lesssim\left\|\omega^{\varepsilon}\right\|_{L^{2}}\).
As for \(\mathcal{J}_{2}\) and \(\mathcal{J}_{3}\), we begin with exploiting the support localizations
\[\operatorname{supp}\mathcal{F}\left(\sum_{j\leq-2}\Delta_{j}^{ \varepsilon}\omega^{\varepsilon}\right) \subset\left\{\xi\in\mathbb{R}^{2}:\left|\xi\right|\leq\frac{2\Theta _{\varepsilon}}{3}\right\},\] \[\operatorname{supp}\mathcal{F}\left(\sum_{j\geq 0}\Delta_{j}^{ \varepsilon}\omega^{\varepsilon}\right) \subset\left\{\xi\in\mathbb{R}^{2}:\left|\xi\right|\geq\frac{3\Theta _{\varepsilon}}{4}\right\},\]
to deduce that
\[\operatorname{supp}\mathcal{F}\left(\sum_{j\leq-2}\Delta_{j}^{\varepsilon}\omega^{ \varepsilon}\sum_{j\geq 0}\Delta_{j}^{\varepsilon}\omega^{\varepsilon}\right) \subset\left\{\xi\in\mathbb{R}^{2}:\left|\xi\right|\geq\frac{\Theta_{ \varepsilon}}{12}\right\}.\]
Similarly, the fact that
\[\operatorname{supp}\mathcal{F}\left(\Delta_{-1}^{\varepsilon}\omega^{ \varepsilon}\right) \subset\left\{\xi\in\mathbb{R}^{2}:|\xi|\leq\frac{4\Theta_{ \varepsilon}}{3}\right\},\] \[\operatorname{supp}\mathcal{F}\left(\sum_{j\geq 1}\Delta_{j}^{ \varepsilon}\omega^{\varepsilon}\right) \subset\left\{\xi\in\mathbb{R}^{2}:|\xi|\geq\frac{3\Theta_{ \varepsilon}}{2}\right\},\]
entails that
\[\operatorname{supp}\mathcal{F}\left(\Delta_{-1}^{\varepsilon}\omega^{\varepsilon}\sum_{j\geq 1} \Delta_{j}^{\varepsilon}\omega^{\varepsilon}\right)\subset\left\{\xi\in \mathbb{R}^{2}:|\xi|\geq\frac{\Theta_{\varepsilon}}{6}\right\}.\]
Accordingly, we infer that
\[\mathcal{J}_{2}(t)=\int_{0}^{t}\int_{\mathbb{R}^{2}}\left(\mathds{1}_{|D|\geq \frac{1}{12}\Theta_{\varepsilon}}u^{\varepsilon}\right)\cdot\nabla\left(\sum _{j\leq-2}\Delta_{j}^{\varepsilon}\omega^{\varepsilon}\right)\sum_{j\geq 0} \Delta_{j}^{\varepsilon}\omega^{\varepsilon}(\tau,x)dxd\tau,\]
and
\[\mathcal{J}_{3}(t)=\int_{0}^{t}\int_{\mathbb{R}^{2}}\left(\mathds{1}_{|D|\geq \frac{1}{12}\Theta_{\varepsilon}}u^{\varepsilon}\right)\cdot\nabla\left( \Delta_{-1}^{\varepsilon}\omega^{\varepsilon}\right)\sum_{j\geq 1}\Delta_{j}^{ \varepsilon}\omega^{\varepsilon}(\tau,x)dxd\tau.\]
Consequently, since \(u^{\varepsilon}\) above is localized in its high frequencies, it holds that
\[\begin{split}|\mathcal{J}_{2}(t)|&\lesssim\int_{0}^{ t}\left\|\mathds{1}_{|D|\geq\frac{1}{12}\Theta_{\varepsilon}}\nabla u^{ \varepsilon}(\tau)\right\|_{L^{2}}\left\|(S_{0}^{\varepsilon}-\Delta_{-1}^{ \varepsilon})\omega^{\varepsilon}(\tau)\right\|_{L^{\infty}}\left\|(\operatorname {Id}-S_{0}^{\varepsilon})\omega^{\varepsilon}(\tau)\right\|_{L^{2}}d\tau\\ &\lesssim\int_{0}^{t}\left\|\mathds{1}_{|D|\geq\frac{1}{12}\Theta _{\varepsilon}}\omega^{\varepsilon}(\tau)\right\|_{L^{2}}\left\|\omega^{ \varepsilon}(\tau)\right\|_{L^{\infty}}\left\|(\operatorname{Id}-S_{0}^{ \varepsilon})\omega^{\varepsilon}(\tau)\right\|_{L^{2}}d\tau,\end{split} \tag{1.18}\]
and, in a similar fashion, that
\[\begin{split}|\mathcal{J}_{3}(t)|&\lesssim\int_{0}^{ t}\left\|\mathds{1}_{|D|\geq\frac{1}{12}\Theta_{\varepsilon}}\nabla u^{ \varepsilon}(\tau)\right\|_{L^{2}}\left\|\Delta_{-1}^{\varepsilon}\omega^{ \varepsilon}(\tau)\right\|_{L^{\infty}}\left\|(\operatorname{Id}-S_{0}^{ \varepsilon}-\Delta_{0}^{\varepsilon})\omega^{\varepsilon}(\tau)\right\|_{L^{ 2}}d\tau\\ &\lesssim\int_{0}^{t}\left\|\mathds{1}_{|D|\geq\frac{1}{12}\Theta _{\varepsilon}}\omega^{\varepsilon}(\tau)\right\|_{L^{2}}\left\|\omega^{ \varepsilon}(\tau)\right\|_{L^{\infty}}\Big{(}\left\|\Delta_{0}^{\varepsilon }\omega^{\varepsilon}(\tau)\right\|_{L^{2}}+\left\|(\operatorname{Id}-S_{0}^ {\varepsilon})\omega^{\varepsilon}(\tau)\right\|_{L^{2}}\Big{)}d\tau.\end{split} \tag{1.19}\]
Therefore, the bounds
\[\left\|\Delta_{0}^{\varepsilon}\omega^{\varepsilon}(\tau)\right\|_{L^{2}}\leq \left\|\mathds{1}_{\frac{1}{12}\Theta_{\varepsilon}\leq|D|\leq\frac{8}{3} \Theta_{\varepsilon}}\omega^{\varepsilon}(\tau)\right\|_{L^{2}}\]
and
\[\begin{split}\left\|\mathds{1}_{|D|\geq\frac{1}{12}\Theta_{ \varepsilon}}\omega^{\varepsilon}(\tau)\right\|_{L^{2}}^{2}&= \left\|\mathds{1}_{\frac{1}{12}\Theta_{\varepsilon}\leq|D|<\frac{4}{3}\Theta_ {\varepsilon}}\omega^{\varepsilon}(\tau)\right\|_{L^{2}}^{2}+\left\|\mathds{1} _{|D|\geq\frac{4}{3}\Theta_{\varepsilon}}\omega^{\varepsilon}(\tau)\right\|_{L ^{2}}^{2}\\ &\leq\left\|\mathds{1}_{\frac{1}{12}\Theta_{\varepsilon}\leq|D|\leq \frac{5}{3}\Theta_{\varepsilon}}\omega^{\varepsilon}(\tau)\right\|_{L^{2}}^{2}+ \left\|(\operatorname{Id}-S_{0}^{\varepsilon})\omega^{\varepsilon}(\tau) \right\|_{L^{2}}^{2},\end{split}\]
lead to
\[\begin{split}|\mathcal{J}_{2}(t)|+|\mathcal{J}_{3}(t)|& \lesssim\left\|\omega^{\varepsilon}\right\|_{L^{\infty}([0,t];L^{2} \cap L^{\infty})}^{2}\left\|\mathds{1}_{\frac{1}{12}\Theta_{\varepsilon}\leq|D| \leq\frac{8}{3}\Theta_{\varepsilon}}\omega^{\varepsilon}\right\|_{L^{1}([0,t]; L^{2})}\\ &+\int_{0}^{t}\left\|\omega^{\varepsilon}(\tau)\right\|_{L^{\infty} }\left\|(\operatorname{Id}-S_{0}^{\varepsilon})\omega^{\varepsilon}(\tau) \right\|_{L^{2}}^{2}d\tau.\end{split}\]
Finally, gathering all estimates on \(\mathcal{J}_{1}\), \(\mathcal{J}_{2}\) and \(\mathcal{J}_{3}\), employing the control
\[\left\|\omega^{\varepsilon}\right\|_{L^{\infty}([0,T];L^{2}\cap L^{\infty})} \leq\sup_{\varepsilon>0}\left(\left\|\omega_{0}^{\varepsilon} \right\|_{L^{2}\cap L^{\infty}}+\left\|\operatorname{curl}g^{\varepsilon} \right\|_{L^{1}([0,T];L^{2}\cap L^{\infty})}\right)\lesssim C_{*},\]
where \(C_{*}\) is defined in (1.5), and using (1.12), we arrive at the bound
\[|\mathcal{J}(t)|\lesssim C_{*}^{2}\left\|\mathds{1}_{\frac{1}{12}\Theta_{\varepsilon }\leq|D|\leq\frac{8}{3}\Theta_{\varepsilon}}\omega^{\varepsilon}\right\|_{L^{ 1}([0,T];L^{2})}+C_{*}\int_{0}^{t}\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon }}\omega^{\varepsilon}(\tau)\right\|_{L^{2}}^{2}d\tau,\]
for any \(t\in[0,T]\).
Further incorporating this estimate into (1.15), we deduce that
\[\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\omega^{\varepsilon} (t)\right\|_{L^{2}}^{2}\leq\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}} \omega_{0}^{\varepsilon}\right\|_{L^{2}}^{2}+C_{*}\left\|(\mathrm{Id}-S_{0}^{ \varepsilon})\operatorname{curl}g^{\varepsilon}\right\|_{L^{1}([0,T];L^{2})}\] \[\qquad\qquad\qquad\qquad\qquad+C_{*}^{2}\left\|\mathds{1}_{ \frac{1}{12}\Theta_{\varepsilon}\leq|D|\leq\frac{8}{3}\Theta_{\varepsilon}} \omega^{\varepsilon}\right\|_{L^{1}([0,T];L^{2})}+C_{*}\int_{0}^{t}\left\| \sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\omega^{\varepsilon}(\tau)\right\|_{L^ {2}}^{2}d\tau.\]
At last, applying Grönwall's lemma yields that
\[\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\omega^{\varepsilon} \right\|_{L^{\infty}([0,T];L^{2})}^{2}\lesssim_{C_{*}} \left(\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\omega_{0}^{ \varepsilon}\right\|_{L^{2}}^{2}+\left\|\mathds{1}_{\frac{1}{12}\Theta_{ \varepsilon}\leq|D|\leq\frac{8}{3}\Theta_{\varepsilon}}\omega^{\varepsilon} \right\|_{L^{1}([0,T];L^{2})}\right.\] \[\qquad\qquad\qquad\qquad+\left\|(\mathrm{Id}-S_{0}^{\varepsilon} )\operatorname{curl}g^{\varepsilon}\right\|_{L^{1}([0,T];L^{2})}\left) \exp\left(C_{*}T\right).\]
Clearly, in view of (1.10), the right-hand side above vanishes in the limit \(\varepsilon\to 0\). We therefore conclude that (1.13) holds true, thereby completing the proof of Theorem 1.2.
#### 1.5.4. Justification of (1.15)
Note that, in order to establish (1.15), we have taken advantage of the formal cancellation
\[\int_{\mathbb{R}^{2}}u^{\varepsilon}(\tau,x)\cdot\nabla\left(\sum_{j\geq 0} \Delta_{j}^{\varepsilon}\omega^{\varepsilon}(\tau,x)\right)\left(\sum_{j\geq 0 }\Delta_{j}^{\varepsilon}\omega^{\varepsilon}(\tau,x)\right)dx=0,\quad\text{ for all }\tau\in[0,T],\]
despite the fact that the integral above is not well-defined, since \(\omega^{\varepsilon}(t)\) only belongs to Lebesgue spaces. Here, we show that (1.15) can be justified without relying on the above identity.
To that end, we first observe that
\[\int_{\mathbb{R}^{2}}\mathrm{div}(u^{\varepsilon}\omega^{\varepsilon})S_{0}^{ \varepsilon}\omega^{\varepsilon}(\tau,x)dx=-\int_{\mathbb{R}^{2}}u^{ \varepsilon}\cdot\nabla(S_{0}^{\varepsilon}\omega^{\varepsilon})(\mathrm{Id }-S_{0}^{\varepsilon})\omega^{\varepsilon}(\tau,x)dx,\]
for any \(\tau\in[0,T]\). Accordingly, by taking the inner product of the transport equation (1.14) with \(S_{0}^{\varepsilon}\omega^{\varepsilon}\), we obtain, for any \(t\in[0,T]\), that
\[\frac{1}{2}\left\|\sqrt{S_{0}^{\varepsilon}}\omega^{\varepsilon} (t)\right\|_{L^{2}}^{2} =\frac{1}{2}\left\|\sqrt{S_{0}^{\varepsilon}}\omega_{0}^{ \varepsilon}\right\|_{L^{2}}^{2}+\int_{0}^{t}\int_{\mathbb{R}^{2}} \operatorname{curl}g^{\varepsilon}S_{0}^{\varepsilon}\omega^{\varepsilon}( \tau,x)dxd\tau\] \[\quad+\int_{0}^{t}\int_{\mathbb{R}^{2}}u^{\varepsilon}\cdot\nabla (S_{0}^{\varepsilon}\omega^{\varepsilon})(\mathrm{Id}-S_{0}^{\varepsilon}) \omega^{\varepsilon}(\tau,x)dxd\tau.\]
On the other hand, we know, for any \(t\in[0,T]\), that
\[\frac{1}{2}\left\|\omega^{\varepsilon}(t)\right\|_{L^{2}}^{2}=\frac{1}{2} \left\|\omega_{0}^{\varepsilon}\right\|_{L^{2}}^{2}+\int_{0}^{t}\int_{ \mathbb{R}^{2}}\operatorname{curl}g^{\varepsilon}\omega^{\varepsilon}(\tau,x) dxd\tau.\]
Consequently, we see that (1.15) follows from the combination of the two preceding identities with
\[\left\|\omega^{\varepsilon}(t)\right\|_{L^{2}}^{2}=\left\|\sqrt{S_{0}^{ \varepsilon}}\omega^{\varepsilon}(t)\right\|_{L^{2}}^{2}+\left\|\sqrt{(\mathrm{ Id}-S_{0}^{\varepsilon})}\omega^{\varepsilon}(t)\right\|_{L^{2}}^{2},\]
which completes its justification.
#### 1.5.5. Justification of (1.17)
Here, we give a complete proof of estimate (1.17) on \(\mathcal{J}_{1}\), which follows directly from an application of Lemma 1.7, below. We proceed in several steps. First, in Lemma 1.5, we summarize the fundamental properties of the partition of unity introduced in the preceding steps. Then, in Lemma 1.6, we establish a crucial identity which is a natural consequence of the localization of frequencies in the nonlinear advection term \(u\cdot\nabla\omega\). Finally, we combine the results of Lemmas 1.5 and 1.6 to deduce Lemma 1.7.
**Lemma 1.5**.: _The dyadic frequency decomposition operators given in (1.11) satisfy the identities_
\[\Delta_{i}^{\varepsilon}\Delta_{j}^{\varepsilon}=0,\] \[\Delta_{i}^{\varepsilon}=\Delta_{i}^{\varepsilon}\sum_{k\in\{i,i \pm 1\}}\Delta_{k}^{\varepsilon}, \tag{1.20}\] \[\left(\Delta_{i}^{\varepsilon}+\Delta_{i+1}^{\varepsilon}\right) \Delta_{i}^{\varepsilon}\Delta_{i+1}^{\varepsilon}=\Delta_{i}^{\varepsilon} \Delta_{i+1}^{\varepsilon}, \tag{1.21}\]
_for any \(i,j\in\mathbb{Z}\), with \(|i-j|\geq 2\)._
_Furthermore, we have that_
\[\operatorname{supp}\mathcal{F}\left(\Delta_{i}^{\varepsilon}f\Delta_{i+1}^{ \varepsilon}\Delta_{i+2}^{\varepsilon}g\right)\subset\left\{|\xi|\geq\frac{ \Theta_{\varepsilon}2^{i}}{3}\right\}, \tag{1.22}\]
_for any tempered distributions \(f\) and \(g\)._
Proof.: This is a consequence of the localization of the support of the function \(\varphi_{j}\), which defines the dyadic block \(\Delta_{j}^{\varepsilon}\). We refer to [2, Proposition 2.10] for the proof of the first three identities.
As for (1.22), it follows from the fact that
\[\operatorname{supp}\left(\varphi_{i+1}^{\varepsilon}\varphi_{i+2}^{ \varepsilon}\right)\subset\left\{\xi\in\mathbb{R}^{2}:3\cdot 2^{i}\leq\frac{|\xi|}{ \Theta_{\varepsilon}}\leq\frac{8}{3}\cdot 2^{i+1}\right\},\]
which implies that
\[\operatorname{supp}\varphi_{i}^{\varepsilon}+\operatorname{supp}\left( \varphi_{i+1}^{\varepsilon}\varphi_{i+2}^{\varepsilon}\right)\subset\left\{ \xi\in\mathbb{R}^{2}:\frac{|\xi|}{\Theta_{\varepsilon}}\geq\frac{2^{i}}{3} \right\},\]
thereby completing the proof.
**Lemma 1.6**.: _Let \(u\) be a divergence-free vector field in \(L^{2}(\mathbb{R}^{2})\) and \(\omega\) be a real-valued function in \(L^{2}(\mathbb{R}^{2})\). Then, for any \(j\in\mathbb{Z}\), it holds that_
\[\int_{\mathbb{R}^{2}}u\cdot\nabla\Delta_{j}^{\varepsilon}\omega\Delta_{j+1}^ {\varepsilon}\omega dx=\mathcal{I}_{1}+\mathcal{I}_{2},\]
_where_
\[\mathcal{I}_{1}\stackrel{{\rm def}}{{=}}\int_{\mathbb{R}^{2}} \left[\Delta_{j+1}^{\varepsilon},u\cdot\nabla\right]\Delta_{j}^{\varepsilon} \omega\left(\Delta_{j}^{\varepsilon}+\Delta_{j+1}^{\varepsilon}\right)\omega dx\]
_and_
\[\mathcal{I}_{2}\stackrel{{\rm def}}{{=}}\int_{\mathbb{R}^{2}} \mathds{1}_{|D|>\frac{\Theta_{\varepsilon}2^{j}}{3}}u\cdot\nabla\Delta_{j}^{ \varepsilon}\omega\Delta_{j+1}^{\varepsilon}\Delta_{j+2}^{\varepsilon}\omega dx.\]
Proof.: For the sake of simplicity, we assume that \(j=0\). The general case \(j\in\mathbb{Z}\) follows from a similar argument.
We begin by utilizing identity (1.20) to write that
\[\int_{\mathbb{R}^{2}}u\cdot\nabla\Delta_{0}^{\varepsilon}\omega\Delta_{1}^{ \varepsilon}\omega=\int_{\mathbb{R}^{2}}u\cdot\nabla\Delta_{0}^{\varepsilon} \omega\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{\varepsilon}\right)\Delta_{1} ^{\varepsilon}\omega+\int_{\mathbb{R}^{2}}u\cdot\nabla\Delta_{0}^{\varepsilon} \omega\Delta_{1}^{\varepsilon}\Delta_{2}^{\varepsilon}\omega\stackrel{{ \rm def}}{{=}}\mathcal{K}_{1}+\mathcal{K}_{2}.\]
Then, by virtue of (1.22), the expression \(\mathcal{K}_{2}\) can be recast as
\[\mathcal{K}_{2}=\int_{\mathbb{R}^{2}}\mathds{1}_{|D|>\frac{\Theta_{\varepsilon}} {3}}u\cdot\nabla\Delta_{0}^{\varepsilon}\omega\Delta_{1}^{\varepsilon}\Delta_{ 2}^{\varepsilon}\omega,\]
which is precisely \(\mathcal{I}_{2}\).
As for \(\mathcal{K}_{1}\), we split it into two parts
\[\mathcal{K}_{1}=\int_{\mathbb{R}^{2}}\left[\Delta_{1}^{\varepsilon},u\cdot \nabla\right]\Delta_{0}^{\varepsilon}\omega^{\varepsilon}\left(\Delta_{0}^{ \varepsilon}+\Delta_{1}^{\varepsilon}\right)\omega+\int_{\mathbb{R}^{2}}u \cdot\nabla\Delta_{0}^{\varepsilon}\Delta_{1}^{\varepsilon}\omega\left( \Delta_{0}^{\varepsilon}+\Delta_{1}^{\varepsilon}\right)\omega\stackrel{{ \mathrm{def}}}{{=}}\mathcal{K}_{11}+\mathcal{K}_{12}.\]
In particular, observe that \(\mathcal{K}_{11}\) already matches the first term in \(\mathcal{I}_{1}\).
Regarding \(\mathcal{K}_{12}\), we perform an integration by parts followed by (1.21) to deduce that
\[\mathcal{K}_{12} =-\int_{\mathbb{R}^{2}}u\cdot\nabla\left(\Delta_{0}^{\varepsilon }+\Delta_{1}^{\varepsilon}\right)\omega\Delta_{0}^{\varepsilon}\Delta_{1}^{ \varepsilon}\omega\] \[=-\int_{\mathbb{R}^{2}}u\cdot\nabla\left(\Delta_{0}^{\varepsilon }+\Delta_{1}^{\varepsilon}\right)\omega\Delta_{0}^{\varepsilon}\Delta_{1}^{ \varepsilon}\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{\varepsilon}\right)\omega,\]
which, employing commutators, can be rewritten as
\[\mathcal{K}_{12} =-\int_{\mathbb{R}^{2}}\left[\Delta_{0}^{\varepsilon}\Delta_{1}^ {\varepsilon},u\cdot\nabla\right]\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{ \varepsilon}\right)\omega\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{ \varepsilon}\right)\omega\] \[\quad-\int_{\mathbb{R}^{2}}u\cdot\nabla\Delta_{0}^{\varepsilon} \Delta_{1}^{\varepsilon}\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{ \varepsilon}\right)\omega\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{ \varepsilon}\right)\omega.\]
Then, we use (1.21), again, to find that
\[\mathcal{K}_{12} =-\int_{\mathbb{R}^{2}}\left[\Delta_{0}^{\varepsilon}\Delta_{1}^ {\varepsilon},u\cdot\nabla\right]\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{ \varepsilon}\right)\omega\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{ \varepsilon}\right)\omega-\int_{\mathbb{R}^{2}}u\cdot\nabla\Delta_{0}^{ \varepsilon}\Delta_{1}^{\varepsilon}\omega\left(\Delta_{0}^{\varepsilon}+ \Delta_{1}^{\varepsilon}\right)\omega\] \[=-\int_{\mathbb{R}^{2}}\left[\Delta_{0}^{\varepsilon}\Delta_{1}^ {\varepsilon},u\cdot\nabla\right]\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{ \varepsilon}\right)\omega\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^{ \varepsilon}\right)\omega-\mathcal{K}_{12}.\]
Consequently, we arrive at the conclusion that
\[\mathcal{K}_{12}=-\frac{1}{2}\int_{\mathbb{R}^{2}}\left[\Delta_{0}^{\varepsilon }\Delta_{1}^{\varepsilon},u\cdot\nabla\right]\left(\Delta_{0}^{\varepsilon}+ \Delta_{1}^{\varepsilon}\right)\omega\left(\Delta_{0}^{\varepsilon}+\Delta_{1}^ {\varepsilon}\right)\omega,\]
thereby completing the proof of the lemma by successfully identifying all the terms in \(\mathcal{I}_{1}\).
**Lemma 1.7**.: _For any \(u\in\dot{H}^{1}(\mathbb{R}^{2})\) and \(\omega\in L^{4}(\mathbb{R}^{2})\), it holds that_
\[\left|\int_{\mathbb{R}^{2}}u\cdot\nabla\Delta_{j}^{\varepsilon}\omega\Delta_{j +1}^{\varepsilon}\omega\right|\lesssim\left\|\nabla u\right\|_{L^{2}}\left( \left\|\Delta_{j}^{\varepsilon}\omega\right\|_{L^{4}}^{2}+\left\|\Delta_{j+1} ^{\varepsilon}\omega\right\|_{L^{4}}^{2}\right),\]
_for all \(j\in\mathbb{Z}\)._
_More precisely, there is a decomposition_
\[\int_{\mathbb{R}^{2}}u\cdot\nabla\Delta_{j}^{\varepsilon}\omega\Delta_{j+1}^{ \varepsilon}\omega=\mathcal{I}_{1}+\mathcal{I}_{2},\]
_where_
\[\left|\mathcal{I}_{1}\right|\lesssim\left\|\nabla u\right\|_{L^{q}}\left( \left\|\Delta_{j}^{\varepsilon}\omega\right\|_{L^{\frac{2q}{q-1}}}^{2}+\left\| \Delta_{j+1}^{\varepsilon}\omega\right\|_{L^{\frac{2q}{q-1}}}^{2}\right)\]
_and_
\[\left|\mathcal{I}_{2}\right|\lesssim\left\|\mathds{1}_{|D|>\frac{\Theta_{\varepsilon}2^{j}}{3}}\nabla u\right\|_{L^{2}}\left\|\Delta_{j}^{\varepsilon}\omega\right\|_{L^{2p}}\left\|\Delta_{j+1}^{\varepsilon}\omega\right\|_{L^{\frac{2p}{p-1}}},\]
_for any \(p,q\in[1,\infty]\) and \(j\in\mathbb{Z}\)._
Proof.: The decomposition into \(\mathcal{I}_{1}+\mathcal{I}_{2}\) follows from an application of Lemma 1.6. Then, a direct application of Hölder's inequality, combined with Bernstein-type inequalities to trade the derivative falling on \(\Delta_{j}^{\varepsilon}\omega\) against the high-frequency truncation of \(u\), gives that
\[\begin{split}|\mathcal{I}_{2}|&\lesssim\left\|\mathds{1}_{|D|>\frac{\Theta_{\varepsilon}2^{j}}{3}}u\right\|_{L^{2}}\left\|\nabla\Delta_{j}^{\varepsilon}\omega\right\|_{L^{2p}}\left\|\Delta_{j+1}^{\varepsilon}\omega\right\|_{L^{\frac{2p}{p-1}}}\\ &\lesssim\left(\Theta_{\varepsilon}2^{j}\right)^{-1}\left\|\mathds{1}_{|D|>\frac{\Theta_{\varepsilon}2^{j}}{3}}\nabla u\right\|_{L^{2}}\left(\Theta_{\varepsilon}2^{j}\right)\left\|\Delta_{j}^{\varepsilon}\omega\right\|_{L^{2p}}\left\|\Delta_{j+1}^{\varepsilon}\omega\right\|_{L^{\frac{2p}{p-1}}}\\ &\lesssim\left\|\mathds{1}_{|D|>\frac{\Theta_{\varepsilon}2^{j}}{3}}\nabla u\right\|_{L^{2}}\left\|\Delta_{j}^{\varepsilon}\omega\right\|_{L^{2p}}\left\|\Delta_{j+1}^{\varepsilon}\omega\right\|_{L^{\frac{2p}{p-1}}},\end{split}\]
which is the desired bound on \(\mathcal{I}_{2}\).
The control of \(\mathcal{I}_{1}\) is slightly more involved. It requires the use of a classical commutator estimate, which can be found in [2, Lemma 2.97] and gives that
\[\left\|\left[\Delta_{j+1}^{\varepsilon},u\cdot\nabla\right]\Delta_{j}^{ \varepsilon}\omega\right\|_{L^{\frac{2q}{q+1}}}\lesssim(2^{j}\Theta_{ \varepsilon})^{-1}\left\|\nabla u\right\|_{L^{q}}\left\|\nabla\Delta_{j}^{ \varepsilon}\omega\right\|_{L^{\frac{2q}{q-1}}}\lesssim\left\|\nabla u \right\|_{L^{q}}\left\|\Delta_{j}^{\varepsilon}\omega\right\|_{L^{\frac{2q}{q- 1}}}.\]
Combining this bound with another use of Holder's inequality provides a suitable estimate on the first term in \(\mathcal{I}_{1}\). The control of the second term in \(\mathcal{I}_{1}\) is similar, which completes the proof of the Lemma.
## 2. Rate of convergence of the inviscid limit in Yudovich's class
### Second main result
Here, we show that our new method of proof of Theorem 1.2 leads to a refined understanding of the inviscid limit in the two-dimensional Navier-Stokes system
\[\left\{\begin{aligned} \partial_{t}u^{\varepsilon}+u^{ \varepsilon}\cdot\nabla u^{\varepsilon}-\varepsilon\Delta u^{\varepsilon}+ \nabla p^{\varepsilon}&=0,\\ \operatorname{div}u^{\varepsilon}&=0,\\ u^{\varepsilon}|_{t=0}&=u_{0}.\end{aligned}\right. \tag{2.1}\]
Specifically, we prove the following result.
(In the statement below, we use the classical notation \(B_{p,q}^{s}\) to denote the usual Besov space of regularity \(s\in\mathbb{R}\), integrability \(p\in[1,\infty]\) and frequency summability \(q\in[1,\infty]\). We will also make use of the homogeneous version of Besov spaces defined in the usual way.)
**Theorem 2.1**.: _Let \(T>0\), \(s\in(0,1)\) and \(u_{0}\in H^{1}\cap\dot{B}^{1+s}_{2,\infty}\cap\dot{B}^{1+s}_{p,\infty}\), for some \(p\in(2,\infty)\). Assume also that \(\omega_{0}=\operatorname{curl}u_{0}\in L^{\infty}\). Let \(u^{\varepsilon}\) be the unique solution of (2.1) and \(u\) be the unique Yudovich solution of the Euler equation (1.2) with the same initial data \(u_{0}\) and no source term, i.e., \(g=0\)._
_Further assume that_
\[\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T];L^{2})}=O\left(\varepsilon ^{\alpha}\right), \tag{2.2}\]
_for some \(\alpha>0\), as \(\varepsilon\to 0\), and that_
\[t\mapsto\left\|(\omega^{\varepsilon},\omega)(t)\right\|_{B_{2,\infty}^{s(t)} \cap B_{p,\infty}^{s(t)}(\mathbb{R}^{2})}\in L^{\infty}([0,T]), \tag{2.3}\]
_uniformly in \(\varepsilon\), for some nonnegative function \(s(t)\in C^{1}([0,T])\), with \(s(0)=s\) and \(s^{\prime}(t)<0\), for all \(t\in[0,T]\)._
_Then, it holds that_
\[\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T];\dot{H}^{1})}=O\left(\left( \frac{\varepsilon^{2\alpha s(T)}}{|\log\varepsilon|}\right)^{\frac{1}{2(1+s(T) )}}\right),\]
_as \(\varepsilon\to 0\)._
Observe that (2.3) yields that \(\omega^{\varepsilon}\) and \(\omega\) are bounded in \(L^{\infty}([0,T];\dot{B}^{s(T)}_{2,\infty})\), uniformly in \(\varepsilon\). Therefore, employing the classical convexity inequality
\[\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T];\dot{H}^{1})} \lesssim\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T];\dot{B }^{1}_{2,1})}\] \[\lesssim\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T];\dot{B }^{0}_{2,\infty})}^{\frac{s(T)}{1+s(T)}}\left\|u^{\varepsilon}-u\right\|_{L^ {\infty}([0,T];\dot{B}^{1+s(T)}_{2,\infty})}^{\frac{1}{1+s(T)}}\] \[\lesssim\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T];L^{2}) }^{\frac{s(T)}{1+s(T)}}\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T]; \dot{B}^{1+s(T)}_{2,\infty})}^{\frac{1}{1+s(T)}}\]
along with the bounds (2.2) and (2.3) yields the interpolated rate of convergence
\[\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T];\dot{H}^{1})}=O\left( \varepsilon^{\frac{\alpha s(T)}{1+s(T)}}\right). \tag{2.4}\]
This rate of convergence has already been discussed by other authors in [5, Corollary 2]. The conclusion of Theorem 2.1 establishes a subtle logarithmic refinement of the above rate of convergence as a consequence of the fact that the regularity of \(\omega^{\varepsilon}\) and \(\omega\) is strictly larger than \(s(T)\) for times \(t<T\). This is achieved by building on our new method of proof of Theorem 1.2.
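To make the improvement explicit, observe that the rate provided by Theorem 2.1 can be recast as
\[\left(\frac{\varepsilon^{2\alpha s(T)}}{|\log\varepsilon|}\right)^{\frac{1}{2(1+s(T))}}=\varepsilon^{\frac{\alpha s(T)}{1+s(T)}}\,|\log\varepsilon|^{-\frac{1}{2(1+s(T))}},\]
so that it refines (2.4) exactly by the logarithmic factor \(|\log\varepsilon|^{-\frac{1}{2(1+s(T))}}\).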
Thus, the assumption that \(s^{\prime}(t)\) be negative is essential and describes a "memory effect" of the higher regularity at previous times. Moreover, it seems plausible that this phenomenon is not exclusive to hydrodynamical models. It therefore deserves further investigation.
_Remark 2.1_.: A straightforward adaptation of the proof of convergence of velocities in \(L^{\infty}_{t}L^{2}\) given in Section 1.5.1 shows that (2.2) holds with the value \(\alpha=\frac{1}{2}\exp(-C_{*}T)\), where \(C_{*}=C\left\|\omega_{0}\right\|_{L^{2}\cap L^{\infty}}\), for some universal constant \(C>0\). A precise justification for this result can be found in [2, Theorem 7.37]. Furthermore, under the assumptions of Theorem 2.1, classical regularity results on transport flows (see [2, Theorem 3.28]) show that the unique solution \(u^{\varepsilon}\) to (2.1) enjoys the decaying regularity estimate (2.3) with \(s(t)=s\exp(-C_{*}t)\). Hence, an application of Theorem 2.1 establishes the rate of convergence
\[\left\|u^{\varepsilon}-u\right\|_{L^{\infty}([0,T];\dot{H}^{1})}=O\left( \left(\frac{\varepsilon^{s\exp(-2C_{*}T)}}{|\log\varepsilon|}\right)^{\frac{ 1}{2(1+s\exp(-C_{*}T))}}\right),\]
as \(\varepsilon\to 0\). Note that there are other results (see [3], for instance) showing that (2.3) holds with even slower regularity decays, thereby further improving the ensuing rates of convergence stemming from the application of Theorem 2.1.
_Remark 2.2_.: The preceding theorem covers vortex patches with non-smooth boundary, i.e., solutions which correspond to the initial data
\[\omega_{0}=\mathds{1}_{\Omega},\]
where \(\Omega\) is a bounded domain in \(\mathbb{R}^{2}\) whose boundary has a Minkowski dimension \(D\) which is strictly less than \(2\). Indeed, in that case, one can show (see [6, Lemma 3.2]) that
\[\omega_{0}\in B^{\frac{2-D}{p}}_{p,\infty}(\mathbb{R}^{2}),\]
for any \(p\in[1,\infty)\). Accordingly, it follows that
\[\omega_{0}\in B^{s}_{2,\infty}\cap B^{s}_{p,\infty}(\mathbb{R}^{2}),\]
for some \(s\in(0,1)\).
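For instance, the Koch snowflake provides a classical example of such a domain: its boundary has Minkowski dimension \(\log 4/\log 3<2\), so that the corresponding patch \(\mathds{1}_{\Omega}\) enjoys the Besov regularity described above.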
### Proof of Theorem 2.1
This proof consists in refining the ideas leading to Theorem 1.2 by exploiting the available additional regularity and optimizing the choice of parameters. We proceed in three steps:
1. In Section 2.2.1, we revisit the arguments laid out in Section 1.5.2.
2. In Section 2.2.2, we exploit the additional regularity of solutions to improve the ideas behind the Compactness Extrapolation Lemma (Lemma 1.4).
3. In Section 2.2.3, we optimize the choice of \(\Theta_{\varepsilon}\), which is an essential parameter describing the convergence of solutions.
#### 2.2.1. Bounds on low frequencies
We see that
\[\left\|\mathds{1}_{|D|\leq\Theta_{\varepsilon}}(\omega^{\varepsilon}-\omega) \right\|_{L^{\infty}([0,T];L^{2})}\lesssim\Theta_{\varepsilon}\varepsilon^{ \alpha},\]
for any parameter \(\Theta_{\varepsilon}\), which will be chosen such that
\[\lim_{\varepsilon\to 0}\Theta_{\varepsilon}=\infty\quad\text{ and }\quad\lim_{ \varepsilon\to 0}\Theta_{\varepsilon}\varepsilon^{\alpha}=0 \tag{2.5}\]
to ensure the convergence of low frequencies.
#### 2.2.2. Bounds on high frequencies
We focus now on the evanescence of high frequencies. To that end, we write that
\[\left\|\mathds{1}_{|D|\geq\Theta_{\varepsilon}}(\omega^{\varepsilon}-\omega) \right\|_{L^{\infty}([0,T];L^{2})}\leq\left\|\mathds{1}_{|D|\geq\Theta_{ \varepsilon}}\omega^{\varepsilon}\right\|_{L^{\infty}([0,T];L^{2})}+\left\| \mathds{1}_{|D|\geq\Theta_{\varepsilon}}\omega\right\|_{L^{\infty}([0,T];L^{ 2})}\]
and we estimate each term in the right-hand side, above, by refining the techniques laid out in Section 1.5.3 and exploiting the available additional regularity of velocities to obtain a sharp rate of convergence. We only outline the control of the high frequencies of \(\omega^{\varepsilon}\), for the control of \(\omega\) is identical.
From (1.15), we find that
\[\frac{1}{2}\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\omega^{ \varepsilon}(t)\right\|_{L^{2}}^{2} =\frac{1}{2}\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\omega_{ 0}\right\|_{L^{2}}^{2}-\varepsilon\left\|\sqrt{\mathrm{Id}-S_{0}^{ \varepsilon}}\omega^{\varepsilon}\right\|_{L^{2}\dot{H}^{1}}^{2}-\mathcal{J}(t)\] \[\leq\frac{1}{2}\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}} \omega_{0}\right\|_{L^{2}}^{2}+\sum_{i=1}^{3}|\mathcal{J}_{i}(t)|, \tag{2.6}\]
where \(\{\mathcal{J}_{i}\}_{i\in\{1,2,3\}}\) are given in (1.16).
Then, by applying Lemma 1.7, we find that
\[|\mathcal{J}_{1}(t)| \lesssim\int_{0}^{t}\left\|\nabla u^{\varepsilon}(\tau)\right\|_ {L^{q}}\left(\left\|\Delta_{-1}^{\varepsilon}\omega^{\varepsilon}(\tau) \right\|_{L^{\frac{2q}{q-1}}}^{2}+\left\|\Delta_{0}^{\varepsilon}\omega^{ \varepsilon}(\tau)\right\|_{L^{\frac{2q}{q-1}}}^{2}\right)d\tau\] \[\quad+\int_{0}^{t}\left\|\mathds{1}_{|D|>\frac{\Theta_{\varepsilon }}{6}}\nabla u^{\varepsilon}(\tau)\right\|_{L^{2}}\left\|\Delta_{-1}^{ \varepsilon}\omega^{\varepsilon}(\tau)\right\|_{L^{\infty}}\left\|\Delta_{0}^{ \varepsilon}\omega^{\varepsilon}(\tau)\right\|_{L^{2}}d\tau,\]
where \(q\in(2,\infty)\) is fixed. It follows that
\[|\mathcal{J}_{1}(t)|\lesssim\left\|\omega^{\varepsilon}\right\|_{L^{\infty}([ 0,t];L^{2}\cap L^{\infty})}\int_{0}^{t}\left\|(\mathrm{Id}-S_{-3}^{\varepsilon })\omega^{\varepsilon}(\tau)\right\|_{L^{\frac{2q}{q-1}}\cap L^{2}}^{2}d\tau,\]
where we denoted
\[S_{k}^{\varepsilon}=\sum_{j\leq k-1}\Delta_{j}^{\varepsilon},\]
for any \(k\in\mathbb{Z}\). This takes care of \(\mathcal{J}_{1}\).
As for \(\mathcal{J}_{2}\) and \(\mathcal{J}_{3}\), it is readily seen from (1.18) and (1.19) that
\[|\mathcal{J}_{2}(t)|+|\mathcal{J}_{3}(t)|\lesssim\left\|\omega^{\varepsilon} \right\|_{L^{\infty}([0,t];L^{\infty})}\int_{0}^{t}\left\|(\mathrm{Id}-S_{-4}^ {\varepsilon})\omega^{\varepsilon}(\tau)\right\|_{L^{2}}^{2}d\tau.\]
All in all, by plugging the foregoing bounds into (2.6) and utilizing (1.3) with \(p\in\{2,\infty\}\), we obtain that
\[\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\omega^{\varepsilon}(t)\right\|_{L^ {2}}^{2}\lesssim_{q}\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\omega_{0} \right\|_{L^{2}}^{2}+\int_{0}^{t}\left\|(\mathrm{Id}-S_{-4}^{\varepsilon}) \omega^{\varepsilon}(\tau)\right\|_{L^{2}\cap L^{\frac{2q}{q-1}}}^{2}d\tau.\]
At last, observing that the same estimate holds for \(\omega\) in place of \(\omega^{\varepsilon}\), we deduce that
\[\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\left(\omega^{ \varepsilon}-\omega\right)(t)\right\|_{L^{2}}^{2}\lesssim_{q}\left\|\sqrt{ \mathrm{Id}-S_{0}^{\varepsilon}}\omega_{0}\right\|_{L^{2}}^{2}+\int_{0}^{t} \left\|(\mathrm{Id}-S_{-4}^{\varepsilon})(\omega^{\varepsilon},\omega)(\tau) \right\|_{L^{2}\cap L^{\frac{2q}{q-1}}}^{2}d\tau\] \[\lesssim_{q}\Theta_{\varepsilon}^{-2s}\left\|\omega_{0}\right\|_ {\dot{B}_{2,\infty}^{s}}^{2}\] \[\quad+\left(\int_{0}^{t}\Theta_{\varepsilon}^{-2s(\tau)}d\tau \right)\sup_{\tau\in[0,t]}\left\|(\omega^{\varepsilon},\omega)(\tau)\right\|_ {\dot{B}_{2,\infty}^{s(\tau)}\cap\dot{B}_{\frac{2q}{q-1},\infty}^{s(\tau)}}^{2},\]
for any \(t\in[0,T]\). In particular, employing (2.3), we conclude that
\[\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\left(\omega^{\varepsilon}-\omega \right)(t)\right\|_{L^{2}}^{2}\lesssim\Theta_{\varepsilon}^{-2s}+\int_{0}^{t} \Theta_{\varepsilon}^{-2s(\tau)}d\tau,\]
where we chose \(q\) so that \(p=\frac{2q}{q-1}\) (without any loss of generality, we may assume here that \(p\in(2,4)\)).
Finally, evaluating that
\[\int_{0}^{t}\Theta_{\varepsilon}^{-2s(\tau)}d\tau =\int_{0}^{t}\frac{-1}{2s^{\prime}(\tau)\log\Theta_{\varepsilon} }\frac{d}{d\tau}\left(\Theta_{\varepsilon}^{-2s(\tau)}\right)d\tau\] \[\leq\frac{-1}{2s^{\prime}(t)\log\Theta_{\varepsilon}}\int_{0}^{t} \frac{d}{d\tau}\left(\Theta_{\varepsilon}^{-2s(\tau)}\right)d\tau\] \[=\frac{-1}{2s^{\prime}(t)\log\Theta_{\varepsilon}}\left(\Theta_{ \varepsilon}^{-2s(t)}-\Theta_{\varepsilon}^{-2s}\right)\] \[\lesssim\frac{\Theta_{\varepsilon}^{-2s(t)}-\Theta_{\varepsilon }^{-2s}}{\log\Theta_{\varepsilon}},\]
we arrive at
\[\left\|\sqrt{\mathrm{Id}-S_{0}^{\varepsilon}}\left(\omega^{\varepsilon}-\omega \right)(t)\right\|_{L^{2}}^{2}\lesssim\Theta_{\varepsilon}^{-2s}+\frac{\Theta _{\varepsilon}^{-2s(t)}}{\log\Theta_{\varepsilon}}\lesssim\frac{\Theta_{ \varepsilon}^{-2s(t)}}{\log\Theta_{\varepsilon}}.\]
#### 2.2.3. Optimization of \(\Theta_{\varepsilon}\)
By combining the low-frequency and high-frequency estimates established in the previous two steps, we obtain that
\[\left\|\omega^{\varepsilon}-\omega\right\|_{L^{\infty}([0,T];L^{2})}\lesssim \Theta_{\varepsilon}\varepsilon^{\alpha}+\frac{\Theta_{\varepsilon}^{-s(T)}} {\log^{\frac{1}{2}}\Theta_{\varepsilon}}. \tag{2.7}\]
We are now going to choose a specific value for \(\Theta_{\varepsilon}\) satisfying (2.5) which will optimize the above estimate.
To that end, notice first that the value
\[\Theta_{\varepsilon}=\left(\frac{s(T)}{\varepsilon^{\alpha}}\right)^{\frac{1} {1+s(T)}}\]
is the global minimum of the function
\[\Theta_{\varepsilon}\mapsto\Theta_{\varepsilon}\varepsilon^{\alpha}+\Theta_{ \varepsilon}^{-s(T)}.\]
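Indeed, an elementary computation of the critical point confirms this claim:
\[\frac{d}{d\Theta_{\varepsilon}}\left(\Theta_{\varepsilon}\varepsilon^{\alpha}+\Theta_{\varepsilon}^{-s(T)}\right)=\varepsilon^{\alpha}-s(T)\Theta_{\varepsilon}^{-1-s(T)}=0\quad\Longleftrightarrow\quad\Theta_{\varepsilon}=\left(\frac{s(T)}{\varepsilon^{\alpha}}\right)^{\frac{1}{1+s(T)}},\]
and, since the function above is convex on \((0,\infty)\), this critical point is indeed its global minimum.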
Accordingly, if we ignore the logarithmic correction in (2.7), this choice of \(\Theta_{\varepsilon}\) entails the rate of convergence displayed in (2.4), which is not optimal.
In order to improve this rate of convergence, we need now to exploit the logarithmic decay of the last term in (2.7). Thus, we define
\[\Theta_{\varepsilon}\stackrel{{\mathrm{def}}}{{=}}\varepsilon^{- \frac{\alpha}{1+s(T)}}|\log\varepsilon|^{-\beta},\]
where \(\beta\in\mathbb{R}\) will be determined shortly. In particular, observe that
\[\log\Theta_{\varepsilon}\gtrsim|\log\varepsilon|,\]
as \(\varepsilon\to 0\). Therefore, incorporating this value into (2.7) yields that
\[\|\omega^{\varepsilon}-\omega\|_{L^{\infty}([0,T];L^{2})} \lesssim\varepsilon^{\frac{\alpha s(T)}{1+s(T)}} \left(|\log\varepsilon|^{-\beta}+|\log\varepsilon|^{\beta s(T)}\log^{-\frac{1} {2}}\Theta_{\varepsilon}\right)\] \[\lesssim\varepsilon^{\frac{\alpha s(T)}{1+s(T)}} \left(|\log\varepsilon|^{-\beta}+|\log\varepsilon|^{\beta s(T)-\frac{1}{2}} \right).\]
At last, optimizing the value of \(\beta\) by setting
\[\beta\stackrel{{\mathrm{def}}}{{=}}\frac{1}{2(1+s(T))}\]
leads to the rate of convergence claimed in the statement of Theorem 2.1, which completes its proof.
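Let us conclude this section by pointing out that the preceding value of \(\beta\) is obtained by balancing the two competing powers of \(|\log\varepsilon|\) in the last estimate, namely
\[-\beta=\beta s(T)-\frac{1}{2}\quad\Longleftrightarrow\quad\beta=\frac{1}{2(1+s(T))},\]
in which case both terms contribute the same factor \(|\log\varepsilon|^{-\frac{1}{2(1+s(T))}}\), in agreement with the rate stated in Theorem 2.1.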
## 3. Two-dimensional incompressible plasmas
### Third main result
We are now interested in two-dimensional incompressible plasmas described by the Euler-Maxwell equations
\[\left\{\begin{aligned} \partial_{t}u+u\cdot\nabla u+\nabla p&=j\times B,\qquad\operatorname{div}u=0,\\ \frac{1}{c}\partial_{t}E-\nabla\times B&=-j,\qquad\qquad\operatorname{div}E=0,\\ \frac{1}{c}\partial_{t}B+\nabla\times E&=0,\qquad\qquad\ \operatorname{div}B=0,\end{aligned}\right. \tag{3.1}\]
where the electric current \(j\) is determined by Ohm's law
\[j\stackrel{{\mathrm{def}}}{{=}}\sigma\big{(}cE+P(u\times B)\big{)},\]
in which \(\sigma>0\) denotes the electrical conductivity, \(c>0\) the speed of light and \(P\) the Leray projector onto divergence-free vector fields. Smooth solutions of (3.1) formally satisfy the energy inequality
\[\frac{1}{2}\left\|(u,E,B)(t)\right\|_{L^{2}}^{2}+\frac{1}{\sigma}\int_{0}^{t}\left\|j(\tau)\right\|_{L^{2}}^{2}d\tau\leq\frac{1}{2}\left\|(u_{0},E_{0},B_{0})\right\|_{L^{2}}^{2},\quad\text{for all }t\geq 0. \tag{3.2}\]
Henceforth, we further assume that the fields \(u\), \(E\) and \(B\) satisfy the two-dimensional normal structure
\[u(t,x)=\begin{pmatrix}u_{1}(t,x)\\ u_{2}(t,x)\\ 0\end{pmatrix},\qquad E(t,x)=\begin{pmatrix}E_{1}(t,x)\\ E_{2}(t,x)\\ 0\end{pmatrix}\qquad\text{and}\qquad B(t,x)=\begin{pmatrix}0\\ 0\\ b(t,x)\end{pmatrix}. \tag{3.3}\]
This structure played a significant role in our recent work [1], where we established the global existence and uniqueness of solutions to (3.1), with the structure (3.3), provided the speed of light \(c\) is sufficiently large when compared to the size of the initial data in suitable norms. The condition therein relating \(c\) to the initial data can be interpreted as a strengthening of the physical principle that the velocity of the fluid cannot exceed the speed of light. A precise reformulation of the well-posedness result from [1] is contained in the following theorem.
**Theorem 3.1** ([1]).: _Let \(s\) be any real number in \((\frac{7}{4},2)\) and consider a bounded family of initial data_
\[\left\{\left(u_{0}^{c},E_{0}^{c},B_{0}^{c}\right)\right\}_{c>0}\subset\left(H ^{1}\times H^{s}\times H^{s}\right)(\mathbb{R}^{2}),\]
_with \(\operatorname{div}u_{0}^{c}=\operatorname{div}E_{0}^{c}=\operatorname{div}B_{0}^{c}=0\) and the two-dimensional normal structure (3.3), such that the initial vorticities \(\omega_{0}^{c}\stackrel{{\mathrm{def}}}{{=}}\nabla\times u_{0}^{c}\) form a bounded family of \(L^{\infty}(\mathbb{R}^{2})\)._
_There is a constant \(c_{0}>0\), such that, for any speed of light \(c\in(c_{0},\infty)\), there is a global weak solution \((u^{c},E^{c},B^{c})\) to the two-dimensional Euler-Maxwell system (3.1), with the normal structure (3.3) and initial data \((u_{0}^{c},E_{0}^{c},B_{0}^{c})\), satisfying the energy inequality (3.2) and enjoying the additional regularity_
\[u^{c}\in L^{\infty}(\mathbb{R}^{+};L^{\infty}\cap H^{1}),\quad \omega^{c}\stackrel{{\mathrm{def}}}{{=}}\nabla\times u^{c}\in L^ {\infty}(\mathbb{R}^{+};L^{\infty}),\quad(E^{c},B^{c})\in L^{\infty}(\mathbb{R }^{+};H^{s}),\] \[(cE^{c},B^{c})\in L^{2}(\mathbb{R}^{+};\dot{H}^{1}\cap\dot{H}^{s }),\quad(E^{c},B^{c})\in L^{2}(\mathbb{R}^{+};\dot{W}^{1,\infty}),\quad j^{c} \in L^{2}(\mathbb{R}^{+};L^{2}\cap L^{\infty}), \tag{3.4}\]
_where \(j^{c}\stackrel{{\mathrm{def}}}{{=}}\sigma\big{(}cE^{c}+P(u^{c} \times B^{c})\big{)}\). It is to be emphasized that the bounds in (3.4) are uniform in \(c\in(c_{0},\infty)\), for any bounded family of initial data._
_Furthermore, for each \(c\in(c_{0},\infty)\), the solution \((u^{c},E^{c},B^{c})\) is unique in the space of all solutions \((\bar{u},\bar{E},\bar{B})\) to the Euler-Maxwell system (3.1) satisfying the bounds, locally in time,_
\[(\bar{u},\bar{E},\bar{B})\in L^{\infty}_{t}L^{2}_{x},\qquad\bar{u}\in L^{2}_{ t}L^{\infty}_{x},\qquad\bar{j}\in L^{2}_{t,x},\]
_and having the same initial data._
_Remark 3.1_.: We should note that the uniform bound \(j^{c}\in L^{2}(\mathbb{R}^{+};L^{\infty})\) in (3.4) cannot be found in the statements of the theorems from [1]. However, it is established explicitly in Section 3.8 therein, as an intermediate step of the proof of the uniform bound \(\omega^{c}\in L^{\infty}(\mathbb{R}^{+};L^{\infty})\).
In view of the uniform bounds (3.4), it is possible to establish the convergence
\[(u^{c},E^{c},B^{c})\stackrel{{ c\to\infty}}{{\longrightarrow}}(u,0,B),\]
in the strong topology of \(L^{2}_{\mathrm{loc}}(\mathbb{R}^{+}\times\mathbb{R}^{2})\), of the solution from Theorem 3.1, where \((u,B)\) is the unique solution to the magnetohydrodynamic system
\[\begin{cases}\partial_{t}u+u\cdot\nabla u=-\nabla p,&\operatorname{div}u=0,\\ \partial_{t}B+u\cdot\nabla B-\frac{1}{\sigma}\Delta B=0,\end{cases} \tag{3.5}\]
corresponding to the initial data \((u_{0},B_{0})\) and satisfying the bounds
\[u\in L^{\infty}(\mathbb{R}^{+};L^{\infty}\cap H^{1}),\quad \omega\stackrel{{\mathrm{def}}}{{=}}\nabla\times u\in L^{\infty} (\mathbb{R}^{+};L^{\infty}),\] \[B\in L^{\infty}(\mathbb{R}^{+};H^{1})\cap L^{2}(\mathbb{R}^{+}; \dot{H}^{1}\cap\dot{H}^{2}\cap\dot{W}^{1,\infty}).\]
This convergence result and the ensuing bounds follow from a standard compactness argument and are formulated in Corollary 1.2 from [1].
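At a purely formal level, the diffusive induction equation in (3.5) can be read off from (3.1). Indeed, Ohm's law and Ampere's equation give
\[cE^{c}=\frac{1}{\sigma}j^{c}-P(u^{c}\times B^{c})\qquad\text{and}\qquad j^{c}=\nabla\times B^{c}-\frac{1}{c}\partial_{t}E^{c},\]
so that, inserting these identities into Faraday's equation \(\partial_{t}B^{c}+\nabla\times(cE^{c})=0\), discarding the contribution of \(\frac{1}{c}\partial_{t}E^{c}\) in the regime \(c\to\infty\), and using that \(\nabla\times(\nabla\times B)=-\Delta B\) and \(\nabla\times(u\times B)=-u\cdot\nabla B\) for divergence-free fields with the normal structure (3.3), one arrives at
\[\partial_{t}B+u\cdot\nabla B-\frac{1}{\sigma}\Delta B=0.\]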
Our aim is now to improve on this asymptotic characterization of (3.1) by showing that the convergence holds globally in space, uniformly in time, in adequate functional spaces. Our main global convergence result for inviscid two-dimensional plasmas is contained in the following theorem.
**Theorem 3.2**.: _Let \(\left\{(u_{0}^{c},E_{0}^{c},B_{0}^{c})\right\}_{c>0}\) be a bounded family of initial data as required in Theorem 3.1, for some fixed \(s\in(\frac{7}{4},2)\), and consider the unique solution \((u^{c},E^{c},B^{c})\) to (3.1) provided by the same theorem, for large \(c\). If \(u_{0}^{c}\) converges to \(u_{0}\) in \(H^{1}(\mathbb{R}^{2})\) and \((E_{0}^{c},B_{0}^{c})\) converges to \((0,B_{0})\) in \(L^{2}(\mathbb{R}^{2})\), then \((u^{c},B^{c})\) converges strongly to \((u,B)\), the unique global solution of (3.5), on any finite time interval \([0,T]\), in the sense that_
\[\lim_{c\to\infty}\left\|u^{c}-u\right\|_{L^{\infty}([0,T];H^{1}(\mathbb{R}^{2} ))}=0,\]
_and_
\[\lim_{c\to\infty}\left(\left\|B^{c}-B\right\|_{L^{\infty}([0,T];L^{2}(\mathbb{ R}^{2}))}+\left\|B^{c}-B\right\|_{L^{2}([0,T];\dot{H}^{1}(\mathbb{R}^{2}))} \right)=0.\]
_Furthermore, the electric current density \(j^{c}=\sigma\big{(}cE^{c}+P(u^{c}\times B^{c})\big{)}\) converges strongly to \(\nabla\times B\) in the sense that_
\[\lim_{c\to\infty}\left\|j^{c}-\nabla\times B\right\|_{L^{2}([0,T];L^{2}( \mathbb{R}^{2}))}=0.\]
_Remark 3.2_.: It is possible to quantify the convergence of \(u^{c}\) and \(B^{c}\) in the preceding theorem with a rate \(O(c^{-\alpha})\), for some \(\alpha>0\). This will be clear in the proof of the theorem, below. Moreover, the electric field \(E^{c}\) vanishes in \(L^{2}(\mathbb{R}^{+};\dot{H}^{1})\) with a rate \(O(c^{-1})\) due to the uniform bounds given in (3.4).
_Remark 3.3_.: In the statements of the above results, the restriction on the range of the regularity parameter \(s\in(\frac{7}{4},2)\) is only necessary in the construction of global solutions given in Theorem 3.1. However, Theorem 3.2 would hold for the wider range \(s\in[1,2]\) provided solutions to (3.1) satisfying the uniform bounds (3.4) could be constructed for that range.
The proof of Theorem 3.2 is given in Section 3.3, below. It relies on a self-contained asymptotic analysis of Ampere's equation, which is detailed in the next section.
### Asymptotic analysis of Ampere's equation
A formal asymptotic analysis of Ampere's equation readily shows that \(j^{c}\) converges to \(\nabla\times B\), in the regime \(c\to\infty\). The proof of our main result Theorem 3.2 hinges upon a refined asymptotic analysis of the same equation. More precisely, the following proposition provides a key estimate on the distance between \(\nabla\times B^{c}\) and \(j^{c}\), as \(c\to\infty\).
**Proposition 3.3**.: _Let \(\left\{(u_{0}^{c},E_{0}^{c},B_{0}^{c})\right\}_{c>0}\) be a bounded family of initial data as required in Theorem 3.1, for some fixed \(s\in(\frac{7}{4},2)\), and consider the unique solution \((u^{c},E^{c},B^{c})\) provided by the same theorem, for \(c>c_{0}\). Then, one has the estimate_
\[\left\|\nabla\times B^{c}-j^{c}\right\|_{L^{2}(\mathbb{R}^{+};\dot{H}^{\eta-1 })}\lesssim\frac{1}{c}\left\|\nabla\times B_{0}^{c}-j_{0}^{c}\right\|_{\dot{H} ^{\eta-1}}+\frac{1}{c}\left\|E_{0}^{c}\right\|_{\dot{H}^{\eta}}+\frac{C_{*}}{ c^{2}},\]
_for every \(\eta\in[1,s]\) and \(c>c_{0}\), where the initial electric current density is given by_
\[j_{0}^{c}\stackrel{{\rm def}}{{=}}\sigma\left(cE_{0}^{c}+P(u_{0} ^{c}\times B_{0}^{c})\right)\]
_and \(C_{*}>0\) depends on the family of initial data._
_Remark 3.4_.: Notice that the family of initial data considered in the preceding proposition is bounded uniformly in the spaces
\[u_{0}^{c}\in L^{2},\qquad E_{0}^{c}\in H^{1},\qquad B_{0}^{c}\in\dot{H}^{1}\cap L ^{\infty}.\]
Therefore, setting \(\eta=1\), the above result provides us with the estimate
\[\left\|\nabla\times B^{c}-j^{c}\right\|_{L^{2}(\mathbb{R}^{+};L^{2})}\lesssim \left\|E_{0}^{c}\right\|_{L^{2}}+\frac{1}{c}, \tag{3.6}\]
which shows that \(\nabla\times B^{c}-j^{c}\) vanishes in the topology of \(L^{2}\) if the initial electric field converges strongly to zero.
Proof.: First applying a time derivative to Ampere's equation and combining it with Faraday's equation and Ohm's law yields the damped wave equation
\[\frac{1}{c}\partial_{t}^{2}E^{c}-c\Delta E^{c}+\sigma c\partial_{t}E^{c}=- \sigma\partial_{t}P(u^{c}\times B^{c}).\]
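This identity follows by differentiating Ampere's equation \(\frac{1}{c}\partial_{t}E^{c}=\nabla\times B^{c}-j^{c}\) in time and substituting Faraday's equation and Ohm's law from (3.1), which yield
\[\frac{1}{c}\partial_{t}^{2}E^{c}=\nabla\times\partial_{t}B^{c}-\partial_{t}j^{c}=-c\,\nabla\times\left(\nabla\times E^{c}\right)-\sigma c\,\partial_{t}E^{c}-\sigma\,\partial_{t}P(u^{c}\times B^{c}),\]
together with the observation that \(\nabla\times\left(\nabla\times E^{c}\right)=-\Delta E^{c}\), since \(E^{c}\) is divergence-free.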
Then, performing a classical energy estimate by multiplying the above damped wave equation by \((-\Delta)^{\eta-1}\partial_{t}E^{c}\) and integrating in space (this can be done rigorously by employing a Littlewood-Paley decomposition, for instance), we find that
\[\frac{1}{2}\frac{d}{dt}\left(\frac{1}{c}\left\|\partial_{t}E^{c}(t)\right\|_{\dot{H}^{\eta-1}}^{2}+c\left\|E^{c}(t)\right\|_{\dot{H}^{\eta}}^{2}\right)+\sigma c\left\|\partial_{t}E^{c}(t)\right\|_{\dot{H}^{\eta-1}}^{2}\] \[\leq\sigma\left\|\partial_{t}P(u^{c}\times B^{c})(t)\right\|_{\dot{H}^{\eta-1}}\left\|\partial_{t}E^{c}(t)\right\|_{\dot{H}^{\eta-1}}\] \[\leq\frac{\sigma}{2c}\left\|\partial_{t}P(u^{c}\times B^{c})(t)\right\|_{\dot{H}^{\eta-1}}^{2}+\frac{\sigma c}{2}\left\|\partial_{t}E^{c}(t)\right\|_{\dot{H}^{\eta-1}}^{2},\]
whereby
\[\frac{d}{dt}\left(\frac{1}{c}\left\|\partial_{t}E^{c}(t)\right\|_{\dot{H}^{ \eta-1}}^{2}+c\left\|E^{c}(t)\right\|_{\dot{H}^{\eta}}^{2}\right)+\sigma c \left\|\partial_{t}E^{c}(t)\right\|_{\dot{H}^{\eta-1}}^{2}\leq\frac{\sigma}{c }\left\|\partial_{t}P(u^{c}\times B^{c})(t)\right\|_{\dot{H}^{\eta-1}}^{2}.\]
Further integrating in time, we deduce that
\[\frac{1}{c}\left\|\partial_{t}E^{c}\right\|_{L^{\infty}(\mathbb{ R}^{+};\dot{H}^{\eta-1})}+\left\|E^{c}\right\|_{L^{\infty}(\mathbb{R}^{+}; \dot{H}^{\eta})}+\left\|\partial_{t}E^{c}\right\|_{L^{2}(\mathbb{R}^{+};\dot{H }^{\eta-1})}\] \[\lesssim\frac{1}{c}\left\|\partial_{t}E_{0}^{c}\right\|_{\dot{H} ^{\eta-1}}+\left\|E_{0}^{c}\right\|_{\dot{H}^{\eta}}+\frac{1}{c}\left\| \partial_{t}P(u^{c}\times B^{c})\right\|_{L^{2}(\mathbb{R}^{+};\dot{H}^{\eta-1 })}. \tag{3.7}\]
We are now left with estimating \(\partial_{t}P(u^{c}\times B^{c})\), above. To this end, we exploit Euler and Faraday's equations from (3.1) to write
\[\partial_{t}P(u^{c}\times B^{c}) =P\big{(}\partial_{t}u^{c}\times B^{c}+u^{c}\times\partial_{t}B^ {c}\big{)}\] \[=P\Big{(}\big{(}P(j^{c}\times B^{c})-P(u^{c}\cdot\nabla u^{c}) \big{)}\times B^{c}-cu^{c}\times(\nabla\times E^{c})\Big{)}=\mathcal{I}_{1}+ \mathcal{I}_{2}+\mathcal{I}_{3},\]
where
\[\mathcal{I}_{1} =P\Big{(}\big{(}P(j^{c}\times B^{c})\big{)}\times B^{c}\Big{)},\] \[\mathcal{I}_{2} =-P\Big{(}\big{(}P(u^{c}\cdot\nabla u^{c})\big{)}\times B^{c} \Big{)},\] \[\mathcal{I}_{3} =-cP\Big{(}u^{c}\times(\nabla\times E^{c})\Big{)}.\]
Next, we utilize the paradifferential product laws for normal vector fields established in [1] to control each term individually. This is a step where the normal structure (3.3) plays a crucial role. For convenience, we have gathered the relevant product estimates from [1] in the appendix.
Thus, by Lemma A.1 and the energy inequality (3.2), we obtain that
\[\|\mathcal{I}_{1}\|_{L^{2}_{t}\dot{H}^{\eta-1}} \lesssim\|P(j^{c}\times B^{c})\|_{L^{2}_{t}\dot{H}^{\frac{\eta-1}{2} }}\,\|B^{c}\|_{L^{\infty}_{t}\dot{H}^{\frac{\eta+1}{2}}}\] \[\lesssim\|j^{c}\|_{L^{2}_{t}L^{2}}\,\|B^{c}\|^{2}_{L^{\infty}_{t} \dot{H}^{\frac{\eta+1}{2}}}\lesssim\mathcal{E}_{0}\,\|B^{c}\|^{2}_{L^{\infty}_ {t}H^{\eta}}\,.\]
As for the second term \(\mathcal{I}_{2}\), by applying Lemma A.1, we find that
\[\|\mathcal{I}_{2}\|_{L^{2}_{t}\dot{H}^{\eta-1}} \lesssim\|u^{c}\cdot\nabla u^{c}\|_{L^{\infty}_{t}L^{2}}\,\|B^{c} \|_{L^{2}_{t}\dot{H}^{\eta}}\] \[\lesssim\|u^{c}\|_{L^{\infty}_{t}L^{\infty}_{\infty}}\,\|u^{c}\| _{L^{\infty}_{t}\dot{H}^{1}}\,\|B^{c}\|_{L^{2}_{t}\dot{H}^{\eta}}\,.\]
Finally, for the last term \(\mathcal{I}_{3}\), using Lemma A.1, again, yields that
\[\|\mathcal{I}_{3}\|_{L^{2}_{t}\dot{H}^{\eta-1}} \lesssim\|u^{c}\|_{L^{\infty}_{t}(L^{\infty}\cap\dot{H}^{1})}\, \|c\nabla\times E^{c}\|_{L^{2}_{t}\dot{H}^{\eta-1}}\] \[\lesssim\|u^{c}\|_{L^{\infty}_{t}(L^{\infty}\cap\dot{H}^{1})}\, \|cE^{c}\|_{L^{2}_{t}\dot{H}^{\eta}}\,.\]
All in all, in view of the uniform bounds (3.4), we deduce that the control
\[\partial_{t}P(u^{c}\times B^{c})\in L^{2}(\mathbb{R}^{+};\dot{H}^{\eta-1}), \quad\forall\eta\in[1,s],\]
holds uniformly in \(c\).
Now, by incorporating the control of \(\partial_{t}P(u^{c}\times B^{c})\) into (3.7), we find that
\[\frac{1}{c}\,\|\partial_{t}E^{c}\|_{L^{\infty}(\mathbb{R}^{+};\dot{H}^{\eta-1 })}+\|E^{c}\|_{L^{\infty}(\mathbb{R}^{+};\dot{H}^{\eta})}+\|\partial_{t}E^{c} \|_{L^{2}(\mathbb{R}^{+};\dot{H}^{\eta-1})}\lesssim\frac{1}{c}\,\|\partial_{t }E^{c}_{0}\|_{\dot{H}^{\eta-1}}+\|E^{c}_{0}\|_{\dot{H}^{\eta}}+\frac{C_{*}}{c}.\]
At last, employing Ampere's equation to substitute \(\frac{1}{c}\partial_{t}E^{c}\) with \(\nabla\times B^{c}-j^{c}\) completes the proof.
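Indeed, Ampere's equation in (3.1) gives \(\frac{1}{c}\partial_{t}E^{c}=\nabla\times B^{c}-j^{c}\) at every time, so that
\[\left\|\nabla\times B^{c}-j^{c}\right\|_{L^{2}(\mathbb{R}^{+};\dot{H}^{\eta-1})}=\frac{1}{c}\left\|\partial_{t}E^{c}\right\|_{L^{2}(\mathbb{R}^{+};\dot{H}^{\eta-1})}\qquad\text{and}\qquad\frac{1}{c}\left\|\partial_{t}E_{0}^{c}\right\|_{\dot{H}^{\eta-1}}=\left\|\nabla\times B_{0}^{c}-j_{0}^{c}\right\|_{\dot{H}^{\eta-1}},\]
whence the desired estimate follows upon multiplying the preceding bound by \(\frac{1}{c}\).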
### Proof of Theorem 3.2
We proceed in two steps. First, we show the strong global convergence of the velocity field \(u^{c}\) through an application of Theorem 1.2 combined with Proposition 3.3. Then, we show the stability of the magnetic field \(B^{c}\) by performing a suitable energy estimate on a transport-diffusion equation with a vanishing source term.
#### 3.3.1. Stability of the velocity field
Employing the normal structure (3.3), we write the momentum equation from (3.1) as
\[\partial_{t}u^{c}+u^{c}\cdot\nabla u^{c}+\nabla\left(p^{c}+\frac{|B^{c}|^{2}} {2}-(\mathrm{Id}-P)g^{c}\right)=Pg^{c},\]
where
\[g^{c}\stackrel{{\mathrm{def}}}{{=}}(j^{c}-\nabla\times B^{c}) \times B^{c}.\]
We want now to apply Theorem 1.2 to the above system to deduce the convergence of \(u^{c}\), as \(c\to\infty\). To that end, we need to establish, first, that \(Pg^{c}\) vanishes in \(L^{1}([0,T];H^{1}(\mathbb{R}^{2}))\) and, second, that \(\operatorname{curl}g^{c}\) remains uniformly bounded in \(L^{1}([0,T];L^{\infty}(\mathbb{R}^{2}))\).
In order to prove the vanishing of \(Pg^{c}\), we employ the paradifferential calculus estimate from Lemma A.1 to find that
\[\|Pg^{c}\|_{L^{1}_{t}L^{2}}\lesssim\|\nabla\times B^{c}-j^{c}\|_{L^{2}_{t}L^{ 2}}\,\|B^{c}\|_{L^{2}_{t}\dot{H}^{1}}\,.\]
Furthermore, noticing that
\[\operatorname{curl}g^{c}=(\nabla\times B^{c}-j^{c})\cdot\nabla B^{c}=-j^{c} \cdot\nabla B^{c},\]
we see, by Hölder's inequality, that
\[\|Pg^{c}\|_{L^{1}_{t}\dot{H}^{1}}\lesssim\|\operatorname{curl}g^{c}\|_{L^{1}_ {t}L^{2}}\lesssim\|\nabla\times B^{c}-j^{c}\|_{L^{2}_{t}L^{2}}\,\|B^{c}\|_{L^ {2}_{t}\dot{W}^{1,\infty}}\,.\]
Hence, since the assumptions of Theorem 3.2 guarantee that \(E^{c}_{0}\) vanishes in \(L^{2}\) and that \(B^{c}\) remains bounded in \(L^{2}_{t}(\dot{H}^{1}\cap\dot{W}^{1,\infty})\), we deduce from (3.6) that \(Pg^{c}\to 0\) in \(L^{1}([0,T];H^{1}(\mathbb{R}^{2}))\).
As for the uniform bound of \(\operatorname{curl}g^{c}\) in \(L^{1}([0,T];L^{\infty}(\mathbb{R}^{2}))\), it follows immediately upon noticing that both \(j^{c}\) and \(\nabla B^{c}\) are uniformly bounded in \(L^{2}([0,T];L^{\infty}(\mathbb{R}^{2}))\), as emphasized in (3.4).
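For the reader's convenience, we note that the identity \(\operatorname{curl}g^{c}=-j^{c}\cdot\nabla B^{c}\) used above can be checked directly from the normal structure (3.3). Writing \(B^{c}=(0,0,b^{c})\) and \(h^{c}\stackrel{{\mathrm{def}}}{{=}}j^{c}-\nabla\times B^{c}\), which is horizontal and divergence-free by (3.1), one computes
\[\operatorname{curl}\left(h^{c}\times B^{c}\right)=-b^{c}\operatorname{div}h^{c}-h^{c}\cdot\nabla b^{c}=\left(\nabla\times B^{c}-j^{c}\right)\cdot\nabla b^{c}=-j^{c}\cdot\nabla b^{c},\]
where the last equality uses that \(\left(\nabla\times B^{c}\right)\cdot\nabla b^{c}=\partial_{2}b^{c}\,\partial_{1}b^{c}-\partial_{1}b^{c}\,\partial_{2}b^{c}=0\).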
We can now apply Theorem 1.2 to conclude that
\[\lim_{c\to\infty}\sup_{t\in[0,T]}\left\|u^{c}(t)-u(t)\right\|_{H^{1}(\mathbb{R }^{2})}=0, \tag{3.8}\]
thereby completing the proof of stability of the velocity field.
#### 3.3.2. Stability of the magnetic field
By suitably combining Faraday's equation with Ohm's law from (3.1), we observe that
\[\partial_{t}B^{c}+u^{c}\cdot\nabla B^{c}-\frac{1}{\sigma}\Delta B^{c}=\frac{1 }{\sigma}\nabla\times(\nabla\times B^{c}-j^{c}).\]
Then, subtracting (3.5) from this equation, we see that
\[\partial_{t}\widetilde{B}^{c}+u^{c}\cdot\nabla\widetilde{B}^{c}-\frac{1}{ \sigma}\Delta\widetilde{B}^{c}=-(u^{c}-u)\cdot\nabla B+\frac{1}{\sigma}\nabla \times(\nabla\times B^{c}-j^{c}),\]
where \(\widetilde{B}^{c}\stackrel{{\text{def}}}{{=}}B^{c}-B\), with the initial data \(\widetilde{B}^{c}_{0}\stackrel{{\text{def}}}{{=}}B^{c}_{0}-B_{0}\).
By a classical \(L^{2}\)-energy estimate, we find that
\[\left\|\widetilde{B}^{c}\right\|_{L^{\infty}([0,T];L^{2})} +\left\|\widetilde{B}^{c}\right\|_{L^{2}([0,T];\dot{H}^{1})}\] \[\lesssim\left\|\widetilde{B}^{c}_{0}\right\|_{L^{2}}+\left\| \nabla\times\left((u^{c}-u)\times B+\frac{1}{\sigma}(\nabla\times B^{c}-j^{c })\right)\right\|_{L^{2}([0,T];\dot{H}^{-1})}\] \[\lesssim\left\|\widetilde{B}^{c}_{0}\right\|_{L^{2}}+\left\|u^{c} -u\right\|_{L^{\infty}([0,T];L^{2})}\left\|B\right\|_{L^{2}([0,T];\dot{H}^{1} )}+\left\|\nabla\times B^{c}-j^{c}\right\|_{L^{2}([0,T];L^{2})},\]
where we applied Lemma A.1 to obtain the estimate
\[\left\|\nabla\times((u^{c}-u)\times B)\right\|_{L^{2}([0,T];\dot {H}^{-1})} \lesssim\left\|P\left((u^{c}-u)\times B\right)\right\|_{L^{2}([0,T ];L^{2})}\] \[\lesssim\left\|u^{c}-u\right\|_{L^{\infty}([0,T];L^{2})}\left\|B \right\|_{L^{2}([0,T];\dot{H}^{1})}.\]
It then follows from (3.6) and (3.8) that
\[\lim_{c\to\infty}\left(\left\|\widetilde{B}^{c}\right\|_{L^{\infty}([0,T];L^{ 2})}+\left\|\widetilde{B}^{c}\right\|_{L^{2}([0,T];\dot{H}^{1})}\right)=0,\]
which completes the proof of stability of the magnetic field.
At last, writing
\[\left\|j^{c}-\nabla\times B\right\|_{L^{2}([0,T];L^{2})}\leq\left\|B^{c}-B \right\|_{L^{2}([0,T];\dot{H}^{1})}+\left\|j^{c}-\nabla\times B^{c}\right\|_{L ^{2}([0,T];L^{2})},\]
it is readily seen that \(j^{c}\) converges to \(\nabla\times B\) in \(L^{2}([0,T];L^{2})\), which completes the proof of the theorem.
## Appendix A Paradifferential calculus and the normal structure
It was noticed in [1] that the normal structure (3.3) of solutions to the Euler-Maxwell system could be exploited to extend the classical product laws of paradifferential calculus to a wider range of parameters. We recall here a simplified statement of Lemma 3.4 from [1], which provides the product laws relevant to the present work. A complete justification of this result can be found in [1].
**Lemma A.1**.: _Let \(F,G:\mathbb{R}^{2}\to\mathbb{R}^{3}\) be divergence-free vector fields having the normal structure_
\[F(x)=\begin{pmatrix}F_{1}(x)\\ F_{2}(x)\\ 0\end{pmatrix}\qquad\text{and}\qquad G(x)=\begin{pmatrix}0\\ 0\\ G_{3}(x)\end{pmatrix}.\]
_Then, for any \(\eta\in(-\infty,1)\) and \(s\in(-\infty,2)\), with \(\eta+s>0\), one has the product law_
\[\|P(F\times G)\|_{\dot{H}^{\eta+s-1}}\lesssim\|F\|_{\dot{H}^{\eta}}\,\|G\|_{\dot{ H}^{s}}\,.\]
_Moreover, in the endpoint case \(\eta=1\), one has that_
\[\|P(F\times G)\|_{\dot{H}^{s}}\lesssim\|F\|_{L^{\infty}\cap\dot{H}^{1}}\,\|G\|_{ \dot{H}^{s}}\,,\]
_for any \(s\in(-1,2)\)._
|
2309.01390 | Bridging the Projection Gap: Overcoming Projection Bias Through
Parameterized Distance Learning | Generalized zero-shot learning (GZSL) aims to recognize samples from both
seen and unseen classes using only seen class samples for training. However,
GZSL methods are prone to bias towards seen classes during inference due to the
projection function being learned from seen classes. Most methods focus on
learning an accurate projection, but bias in the projection is inevitable. We
address this projection bias by proposing to learn a parameterized Mahalanobis
distance metric for robust inference. Our key insight is that the distance
computation during inference is critical, even with a biased projection. We
make two main contributions - (1) We extend the VAEGAN (Variational Autoencoder
\& Generative Adversarial Networks) architecture with two branches to
separately output the projection of samples from seen and unseen classes,
enabling more robust distance learning. (2) We introduce a novel loss function
to optimize the Mahalanobis distance representation and reduce projection bias.
Extensive experiments on four datasets show that our approach outperforms
state-of-the-art GZSL techniques with improvements of up to 3.5 \% on the
harmonic mean metric. | Chong Zhang, Mingyu Jin, Qinkai Yu, Haochen Xue, Shreyank N Gowda, Xiaobo Jin | 2023-09-04T06:41:29Z | http://arxiv.org/abs/2309.01390v3 | # Metric Learning for Projections Bias of Generalized Zero-shot Learning
###### Abstract
Generalized zero-shot learning (GZSL) models aim to recognize samples from both seen and unseen classes using only samples from seen classes as training data. During inference, GZSL methods are often biased towards seen classes because only seen-class samples are visible during training. Most current GZSL methods try to learn an accurate projection function (from visual space to semantic space) to avoid this bias and ensure the effectiveness of GZSL methods. However, the computation of distance at inference time is equally important, since the projection of any sample is classified into its nearest class and the learned projection function may be biased. In our work, we attempt to learn a parameterized Mahalanobis distance within the framework of VAEGAN (Variational Autoencoder & Generative Adversarial Networks), where the weight matrix depends on the network's output. In particular, we improve the network structure of VAEGAN and leverage the discriminative models of two branches to separately predict the seen samples and the (pseudo) unseen samples generated from them. We propose a new loss function over the two branches to help us learn an optimized Mahalanobis distance representation. Comprehensive evaluation benchmarks on four datasets demonstrate the superiority of our method over the state-of-the-art counterparts. Our codes are available at [https://anonymous.4open.science/r/111hr](https://anonymous.4open.science/r/111hr).
## 1 Introduction
With the help of computer vision, deep learning (DL) models have achieved recent advances in image processing and have gained widespread popularity due to their ability to provide end-to-end solutions from feature extraction to classification. Despite their success, traditional deep learning models require a large amount of labelled data for each category, and collecting large-scale annotations is a challenging problem. Zero-shot learning (ZSL) [13, 14] provides a good solution to this challenge. ZSL aims to train a model that can classify objects and realize knowledge transfer from seen classes (source classes) to unseen classes (target domain) through semantic information, which is leveraged to bridge the gap between seen and unseen classes. In practice, data samples from seen classes are more common than those from unseen classes, and it is important to classify samples from both kinds of classes at the same time rather than only samples from unseen classes. This task setting is called generalized zero-shot learning (GZSL).
Most GZSL methods learn embedding/projection functions to associate the low-level visual features of seen classes with their corresponding semantic vectors. The learned function is used to compute the distance between the prototype representation of a class and the projected representation of a sample, and to classify the sample into the nearest class. Since each entry of an attribute vector represents a description of that class, classes with similar characteristics are expected to have similar attribute vectors in the semantic space. However, in visual space, classes with similar properties can be quite different. Therefore, finding a precise and suitable embedding space is a challenging task; otherwise, the ambiguity problem of visual semantics may arise.
Figure 1: Demonstration of how the Mahalanobis distance compensates for the biased nature of GZSL in projective space, both from class-wise and instance-wise perspective: when image instances and class descriptions are biased in the projection space, an image from the Lion class (indicated by the circle) will be misclassified into the Cat class (green pentagram) according to the Euclidean distance (Part b); however, the image will be correctly classified into the Lion class according to the Mahalanobis distance (Part c).
Semantic embeddings learn a projection function from the visual space to the semantic space using different constraints or loss functions, with the aim of forcing semantic embeddings belonging to the same class to map to their ground-truth label embeddings [12, 13], and then classifying the given test samples by nearest neighbour search. Visual embedding learns an inverse projection function that maps semantic representations back into visual space and categorizes them in visual space. The goal is to make the semantic representation close to its corresponding visual feature [14]. Latent space embedding projects visual features and semantic representations into a common space \(L\), i.e., the latent space, in order to explore some common semantic properties across modalities [15, 16, 17]. The goal is to project the visual and semantic features of each category close to each other in the latent space. Zhang et al. proposed that an ideal latent space should satisfy two conditions: (i) intra-class compactness and (ii) inter-class separability [15].
However, there is a problem of projection domain bias for the embedding methods in the GZSL task. On the one hand, vision and semantics are located in two different entity spaces; on the other hand, the samples of the seen and unseen classes do not intersect, and their distributions may be different. Therefore, for the unseen class, failure to make certain adjustments to the embedding space can lead to the problem of projection domain bias [12, 16, 15]. Since GZSL methods need to recognize both seen and unseen categories during inference and only see visual features of seen categories during training, they are usually biased towards seen categories. To overcome this problem, induction-based methods incorporate additional constraints or information about the seen classes. In contrast, transductive-based methods utilize prevalent information to mitigate the problem of projection domain shifts [1, 16, 15].
We note that once we find some projection/embedding space to which both the images and the class semantics are mapped, any unknown image will be classified in the inference phase into the nearest class according to the Euclidean distance in that space. However, we argue that the distance measure itself deserves more attention at this point, since we may have learned a less accurate projection space. In our work, we try to learn a parameterized Mahalanobis distance metric in the generative framework of VAEGAN [13], and we propose a new loss function based on this distance to help us learn an optimized Mahalanobis distance representation. This distance can effectively influence the decision of the classifier in the inference phase: as shown in Fig. 1, the sample point in Part (b) is classified into the wrong class under the Euclidean distance, but under the Mahalanobis distance it becomes closest to its true category, as shown in Part (c).
Our key contributions can be summarized as follows:
* We introduce the learning of the Mahalanobis distance metric into GZSL, which can effectively alleviate the classification performance degradation caused by projection bias. At the same time, we propose a new loss function with Mahalanobis distance to help us learn an optimized Mahalanobis distance measure.
* We propose a new network architecture based on the VAEGAN framework with two discriminative modules that learn to project samples from seen classes and samples from (pseudo) unseen classes, respectively. This architecture can alleviate the situation in the training phase where only samples of the seen classes are encountered.
* Extensive experimental results show that our method outperforms the state-of-the-art methods on four benchmark datasets.
## 2 Related Work
Generalized zero-shot learning (GZSL) [1, 13] means that the trained classifier can identify the data categories existing in the training set and also distinguish data from unseen categories. The goal [12] is to use the semantics or other relevant information of seen (source) and unseen (target) classes to establish a connection between them [1]. Existing GZSL methods can be mainly divided into two categories: embedding-based methods and generative methods.
### Embedding-based methods
The embedding-based approach [20] learns an embedding space by measuring the similarity between prototype and predicted data sample representations that associate low-level visual features of seen categories with semantic vectors and identify new categories. There are various types of embedding-based methods [20], including graph-based, meta-learning, attention-based, autoencoder-based methods, and others.
Graph learning [10, 12] leverages machine learning techniques to extract relevant features and map properties of graphs into feature vectors with the same dimensions in the embedding space. Graph learning techniques [15, 16] have been proven to be effective paradigms in GZSL, but the incorporation of graph-based information increases the complexity of the model.
Meta-learning [17] extracts transferable knowledge from auxiliary tasks to improve performance and avoid overfitting. Multiple studies [21] show that GZSL problems can be effectively solved using meta-learning strategies. By dividing the training classes into a support set and a query set (corresponding to seen and unseen classes, respectively), meta-learning methods can transfer knowledge from seen to unseen classes and alleviate the bias of the GZSL task by randomly selecting different classes [14, 17].
The attention mechanism [18, 19] uses trainable parameters in a deep learning model to assign weights to important parts of the input, identifying key information and distinguishing fine-grained classes from only a few regions in the image. It can effectively identify key information and important regions related to performing the task. To further reduce bias and improve localization, the study of Yang et al. (2020) adopted a global average pooling scheme as the aggregation mechanism. It uses localized attributes to project local features into the semantic space. This approach ensures that the patterns obtained for unseen classes are similar to those of seen classes, thus improving model accuracy.
Autoencoders (AEs) are neural network-based unsupervised methods for representation learning. The main benefit of AEs is that they can be trained without the need for labelled data. AEs have been widely used to solve generalized zero-shot learning (GZSL) problems; one way to achieve this is to incorporate the decoder as an additional constraint to learn various mappings Liu et al. (2018).
### Generative Models
Generative model-based GZSL methods mitigate the bias of model predictions towards seen categories by utilizing semantic representations to classify generated unseen samples (i.e., images or visual features). The earliest generative ZSL/GZSL method is CVAE-ZSL Mishra et al. (2018), which employs a Conditional Variational Autoencoder (CVAE) Pagnoni et al. (2018) as the generator. Compared to the traditional Variational Autoencoder (VAE) Hou et al. (2017), the CVAE introduces conditional features that enable the generator to generate more diverse and realistic samples to approximate the true distribution of unseen categories. Zero-Shot Learning with Semantic Embeddings (SE-ZSL) Frome et al. (2013) is another zero-shot learning model whose generative model associates semantically representable category embeddings with the generated samples to ensure that the generated samples capture the semantic features of the categories. Since SE-ZSL may perform poorly on a few categories due to category imbalance, Generative Zero-Shot Learning with Balanced Semantic Embeddings (LBSE-ZSL) Xie et al. (2022) introduces balanced semantic embeddings to specifically address the category imbalance problem, thus improving the learning performance on minority classes. In addition to the semantic information of known categories, Generative Zero-Shot Learning using Seen and Unseen Semantic Relationships (LsrGAN) Vyas et al. (2020) leverages the semantic relationships between seen and unseen classes to enable the generative model to better generate feature representations of unseen categories.
## 3 Method
In this section, we first clarify the problem we aim to solve. We then introduce a modified VAEGAN framework. Next, we explain in detail how to integrate the Mahalanobis distance into VAEGAN and propose a new loss to help us learn the optimal Mahalanobis distance metric. Finally, we show how to use Mahalanobis distance for classification in the inference stage.
### Problem Formulation
The main goal of GZSL is to build a classifier, based only on samples \(\mathcal{X}^{s}\) of seen classes \(\mathcal{C}^{s}\), that can simultaneously distinguish samples from seen classes \(\mathcal{C}^{s}\) and from unseen classes \(\mathcal{C}^{u}\), where unseen classes only appear in the test set, i.e. \(\mathcal{C}^{s}\cap\mathcal{C}^{u}=\emptyset\). In addition to class labels, existing methods make full use of class-level semantic labels \(\mathcal{S}\) (such as attributes or word2vec) to bridge the gap between seen and unseen classes. To this end, we define the training set as \(\mathcal{D}^{tr}=\{(I_{i},\mathbf{s}_{i},y_{i})\,|\,I_{i}\in\mathcal{X}^{s},\mathbf{s}_{i}\in\mathcal{S},y_{i}\in\mathcal{C}^{s}\}\), where \(I_{i}\) and \(\mathbf{s}_{i}\) represent the image of the \(i\)-th sample and its semantic vector, respectively. Similarly, we represent the test set by \(\mathcal{D}^{te}=\{(I_{i},\mathbf{s}_{i},y_{i})\,|\,I_{i}\in\mathcal{X}^{s}\cup\mathcal{X}^{u},\mathbf{s}_{i}\in\mathcal{S},y_{i}\in\mathcal{C}^{s}\cup\mathcal{C}^{u}\}\), where \(I_{i}\) and \(y_{i}\) belong either to the seen classes or to the unseen classes.
### Vaegan
VAEGAN Xian et al. (2019) combines the power of VAE models and WGAN models Xian et al. (2018) to learn the data distribution of unlabeled samples by sharing the decoder in VAE and the generator in WGAN. We adopt this model to simulate scenarios where samples in the inference stage may come from unseen categories. During the training phase, the generated samples can be seen as coming from some fake unseen class, although these unseen classes are somewhat similar to the seen class.
**Feature Extraction** Under the framework of VAEGAN, image and text pairs \((I_{i},\mathbf{s}_{i})\) are passed through the pre-trained Resnet-101 He et al. (2016) and BERT Liu et al. (2019) models to obtain their initial representations as
\[\bar{\mathbf{x}}_{i}=\text{RESNET}(I_{i})\in R^{d},\quad\bar{\mathbf{s}}_{i}=\text{ BERT}(\mathbf{s}_{i})\in R^{k}. \tag{1}\]
We borrowed the Affine Transformation Fusion (ATF) from Tao et al. (2022) to replace the common vector concatenation operation to better fuse the two multimodal information while keeping the dimensions unchanged. We adopt two MLPs \(\alpha(\cdot)\) and \(\theta(\cdot)\) to predict the scale parameter and offset parameter of the affine transformation, respectively, as follows
\[\mathbf{x}_{i}=\alpha(\bar{\mathbf{s}}_{i})\bar{\mathbf{x}}_{i}+\theta(\bar{\mathbf{s}}_{i}) \in R^{d}, \tag{2}\]
where \(\alpha(\cdot)\) will output a scalar, and \(\theta(\cdot)\) will output a \(d\)-dimensional vector, resulting in a fused representation of image \(\bar{\mathbf{x}}_{i}\) and semantics \(\bar{\mathbf{s}}_{i}\). In the following description, unless otherwise specified, we will omit the subscript \(i\).
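For illustration, the fusion step in Eqn. (2) can be sketched as follows; the hidden width of the two MLPs is an assumption of ours, while the input dimensions follow the feature sizes reported in Section 4.2.

```
import torch
import torch.nn as nn

class AffineTransformationFusion(nn.Module):
    """Fuses a visual feature x_bar with a semantic feature s_bar as in Eqn. (2)."""
    def __init__(self, sem_dim=768, vis_dim=1000, hidden=512):
        super().__init__()
        # alpha(.) predicts a single scale, theta(.) predicts a vis_dim-dimensional offset
        self.alpha = nn.Sequential(nn.Linear(sem_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.theta = nn.Sequential(nn.Linear(sem_dim, hidden), nn.ReLU(), nn.Linear(hidden, vis_dim))

    def forward(self, x_bar, s_bar):
        # x = alpha(s_bar) * x_bar + theta(s_bar), keeping the visual dimension unchanged
        return self.alpha(s_bar) * x_bar + self.theta(s_bar)
```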
**VAEGAN** A variational autoencoder (VAE) Kingma and Welling (2013) is a deep generative model capable of learning complex data densities through latent variables. Consider a nonlinear generative model \(p_{\phi}(\mathbf{x}|\mathbf{z})\), where \(\mathbf{x}\) is the input of the network and the latent variable \(\mathbf{z}\) comes from the prior distribution \(p_{0}(\mathbf{z})\). The goal of a VAE is to approximate the posterior distribution of the latent variable \(\mathbf{z}\) by maximizing the following variational lower bound through an inference network \(q_{\tau}(\mathbf{z}|\mathbf{x})\)
\[\mathcal{L}_{\phi,\tau}=\mathbb{E}_{q_{\tau}(\mathbf{z}|\mathbf{x})}[\log p_{\phi}(\bm {x}|\mathbf{z})]-\text{KL}(q_{\tau}(\mathbf{z}|\mathbf{x})||p_{0}(\mathbf{z})). \tag{3}\]
With the above consideration, we minimize the following VAE loss (Xian et al., 2019) with the input \(\mathbf{x}\)
\[\mathcal{L}_{\text{VAE}}=\text{KL}(q_{\tau}(\mathbf{z}|\mathbf{x})||p_{0}(\mathbf{z}))- \mathbb{E}_{q_{\tau}(\mathbf{z}|\mathbf{x})}[\log p_{\phi}(\mathbf{x}|\mathbf{z})], \tag{4}\]
where \(q_{\tau}(\mathbf{z}|\mathbf{x})\) is an encoder \(E(x)\), which encodes an input \(\mathbf{x}\) to a latent variable \(\mathbf{z}\), \(p_{\phi}(\mathbf{x}|\mathbf{z})\) is a decoder, which reconstructs the input \(\mathbf{x}\) from the latent \(\mathbf{z}\) and the prior distribution \(p_{0}(\mathbf{z})\) is assumed to be a standard normal distribution \(\mathcal{N}(0,1)\).
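As a concrete illustration, Eqn. (4) with the standard normal prior can be written as below; assuming a unit-variance Gaussian decoder, under which the reconstruction term reduces to a squared error up to constants, is our simplification and is not specified by the text.

```
import torch

def vae_loss(x, x_recon, mu, logvar):
    # KL(q(z|x) || N(0, I)) in closed form, plus the negative reconstruction
    # log-likelihood under an assumed unit-variance Gaussian decoder.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    nll = 0.5 * ((x_recon - x) ** 2).flatten(1).sum(dim=1).mean()
    return kl + nll
```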
It is worth noting that in VAEGAN, VAE's decoder \(p_{\phi}(\mathbf{x}|\mathbf{z})\) and GAN's generator \(G(\mathbf{z})\) share a network structure, so we use the discriminator \(D_{A}(\mathbf{x})\) to distinguish real and fake samples, where the discriminator \(D_{A}(\mathbf{x})\) will be optimized by minimizing the loss function
\[\mathcal{L}_{\text{WGAN}} = \mathbb{E}[D_{A}(\mathbf{x})]-\mathbb{E}[D_{A}(\tilde{\mathbf{x}})] \tag{5}\] \[- \lambda\mathbb{E}[(\|\nabla_{\tilde{\mathbf{x}}}D_{A}(\tilde{\mathbf{x}} )\|_{2}-1)^{2}],\]
where \(\tilde{\mathbf{x}}=G(\mathbf{z})\sim p_{\phi}(\mathbf{x}|\mathbf{z})\), \(\hat{\mathbf{x}}=\alpha\mathbf{x}+(1-\alpha)\tilde{\mathbf{x}}\) and \(\alpha\sim U(0,1)\).
Different from the literature (Xian et al., 2019), we effectively fuse the semantic information in the input and condition in VAEGAN through the learning of affine transformation (see Eqn. (2)). In addition, to ensure that the generated samples do not deviate too far from the real samples, we introduce the following MSE loss
\[\mathcal{L}_{\text{MSE}}=\mathbb{E}(\mathbf{x}-\tilde{\mathbf{x}})^{2}. \tag{6}\]
### Metric Learning with Stochastic Gradient Descent
The root cause of projection bias is that in the projection space, the samples of the seen class and the samples of the unseen class are too close to each other (see Fig. 1). When the samples are classified according to the distance from the class description vector, the samples from the unseen class are likely to be classified into seen classes. To this end, we extend the traditional Euclidean distance metric to a general Mahalanobis distance metric so that under this distance metric, samples from unseen classes will be far away from the class vectors of seen classes, thereby improving the classification performance of GZSL.
Given two vectors \(X\), \(Y\) from projected space, we calculate the Mahalanobis distance between \(X\) and \(Y\) by the following formula
\[d_{M}^{2}(X,Y)=(X-Y)^{T}M(X-Y), \tag{7}\]
where \(M\) is a positive definite matrix, which can clearly represent the correlation between the various components of the vector.
For the output \(\tilde{\mathbf{x}}\) of VAEGAN and the image \(I\), we simulate the projection output of unseen class samples and seen class samples respectively by two discriminators
\[X = D_{A}(\tilde{\mathbf{x}})\in R^{k}, \tag{8}\] \[Y = D_{B}(I)\in R^{k}. \tag{9}\]
We stack the outputs \(X\) and \(Y\) of the two branches for \(N\) samples row-wise to obtain a \(2N\times k\) matrix \(\tilde{X}\). In order to learn the optimal matrix \(M\) under the framework of gradient descent, we represent \(M\) in the following form
\[M=[\text{cov}(\tilde{X})]^{+}+\epsilon I, \tag{10}\]
where \(R^{+}\) denotes the generalized (Moore-Penrose) inverse of a matrix \(R\). Note that \(M\) is actually a function of the network
Figure 2: Framework of VAEGAN with Mahalanobis distance contains two branches, where the upper branch generates unseen images on the seen class through a generative network in order to simulate the classification of images from the unseen class in the inference phase, and the lower branch learns the projective representations of the images in the seen class directly, and the newly proposed Mahalanobis distance-based loss function makes samples of the same class from the same branch as close as possible while keeping samples of the same class from different branches as far away as possible.
structure parameters, and it is a symmetric positive definite matrix, thus ensuring that (7) defines a valid distance.
Given a batch of samples, we propose a new loss function below so that, under the Mahalanobis distance, projection outputs from different branches are pushed as far apart as possible, while projection outputs from the same branch are pulled as close together as possible
\[\mathcal{L}_{M}=-\log\sum_{i\neq j}\left(d_{M}^{2}(X_{i},Y_{j})-d_{M}^{2}(X_{i},X_{j})\right), \tag{11}\]
where \(X_{i}\) and \(Y_{j}\) are calculated by Eqn. (8) and (9), respectively.
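A minimal PyTorch-style sketch of Eqns. (10) and (11) is given below for illustration only; the function names, tensor shapes and the value of \(\epsilon\) are our assumptions and are not taken from the released code.

```
import torch

def mahalanobis_matrix(X, Y, eps=1e-3):
    # Eqn. (10): stack the two branch outputs row-wise and take the
    # pseudo-inverse of their covariance; eps is an assumed value.
    Z = torch.cat([X, Y], dim=0)                      # (2N, k)
    cov = torch.cov(Z.T)                              # (k, k)
    return torch.linalg.pinv(cov) + eps * torch.eye(Z.shape[1], device=Z.device)

def sq_mahalanobis(a, b, M):
    d = a - b
    return torch.einsum('...i,ij,...j->...', d, M, d)  # (a - b)^T M (a - b)

def metric_loss(X, Y, M):
    # Eqn. (11): sum over all pairs i != j; cross-branch distances minus
    # same-branch distances (the argument of the log is assumed positive).
    N = X.shape[0]
    mask = ~torch.eye(N, dtype=torch.bool, device=X.device)
    cross = sq_mahalanobis(X[:, None, :], Y[None, :, :], M)   # d_M^2(X_i, Y_j)
    same = sq_mahalanobis(X[:, None, :], X[None, :, :], M)    # d_M^2(X_i, X_j)
    return -torch.log((cross[mask] - same[mask]).sum())
```

In a full training loop, \(M\) would be recomputed from the current batch at each iteration, as in Alg. 1, and the value obtained at the last iteration would serve as \(M^{*}\) for inference.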
Ultimately, our algorithm minimizes the following loss function via stochastic gradient descent (as shown in Alg. 1)
\[\mathcal{L}=\mathcal{L}_{\text{WGAN}}+\lambda_{1}\mathcal{L}_{\text{VAE}}+ \lambda_{2}\mathcal{L}_{\text{MSE}}+\lambda_{3}\mathcal{L}_{\text{M}}, \tag{12}\]
where \(\lambda_{1}\),\(\lambda_{2}\) and \(\lambda_{3}\) are hyperparameters.
It is worth noting that since the matrix \(M\) depends on the outputs of the two branches, \(M\) is constantly updated during model iteration. We use \(M^{*}\) obtained in the last iteration as the optimal metric for the inference phase.
```
1:Input: A batch of images and class-attributes pairs \(\langle I_{i},\mathbf{s}_{i},y_{i}\rangle\)
2:for\(i=1,2,\cdots\)do
3:\(\tilde{\mathbf{x}}_{i}=\text{RESNET}(I_{i})\)
4:\(\tilde{\mathbf{s}}_{i}=\text{BERT}(\mathbf{s}_{i})\)
5:\(\mathbf{x}_{i}=\alpha(\tilde{\mathbf{s}}_{i})\tilde{\mathbf{x}}_{i}+\theta(\tilde{\mathbf{s} }_{i})\)
6: Encode:\(\mathbf{z}_{i}=E(\mathbf{x}_{i})\)
7: Decode:\(\tilde{\mathbf{x}}_{i}=G(\mathbf{z}_{i})\)
8: Branch A: \(X_{i}=D_{A}(\tilde{\mathbf{x}}_{i})\)
9: Branch B: \(Y_{i}=D_{B}(I_{i})\)
10:endfor
11:\(\tilde{X}=\text{cat}((X_{1},X_{2},\cdots,Y_{1},Y_{2},\cdots),\text{dim}=0)\)
12:\(M=[\text{cov}(\tilde{X})]^{+}+\epsilon I\)
13: Compute the loss function \(\mathcal{L}\) with Eqn. (12) return\(\mathcal{L}\)
```
**Algorithm 1** VAEGAN with Mahalanobis Metric
When \(M=I\), the distance metric \(d_{M}^{2}(X,Y)\) reduces to the Euclidean distance. Since \(M\) is a positive definite matrix, it admits a Cholesky decomposition \(M=LL^{T}\), and thus we have
\[d_{M}^{2}(X,Y)=\|L^{T}(X-Y)\|_{2}^{2}. \tag{13}\]
Unlike approaches that address projection bias by learning the projection to the vector \(X\) itself, our network optimizes a projection matrix acting on the difference \(X-Y\) of any two vectors \(X\) and \(Y\).
### Inference with Mahalanobis Metric
In the inference stage (as shown in Alg. 2), for any image \(I\), the image is paired with every class description text \(\mathbf{s}\) (of both seen and unseen classes) and fed into the upper branch of the network, and the semantic representation \(X\) of each class prototype is obtained through the discriminator \(A\); at the same time, the image is directly passed through the lower branch of the network to obtain an embedded representation \(Y\) of the image. Finally, according to the Mahalanobis distance between the image and each class prototype, the image is classified into the nearest class.
```
1:Input: Any given image \(I\)
2:\(\text{dist}=[]\)
3:Branch B: \(Y=D_{B}(I)\)
4:for\(s\in\mathcal{X}^{s}\cup\mathcal{X}^{u}\)do
5:\(\tilde{\mathbf{x}}=\text{RESNET}(I)\)
6:\(\tilde{\mathbf{s}}=\text{BERT}(\mathbf{s})\)
7:\(\mathbf{x}=\alpha(\tilde{\mathbf{s}})\tilde{\mathbf{x}}+\theta(\tilde{\mathbf{s}})\)
8: Encode: \(\mathbf{z}=E(\mathbf{x})\)
9: Decode: \(\tilde{\mathbf{x}}=G(\mathbf{z})\)
10: Branch A: \(X=D_{A}(\tilde{\mathbf{x}})\)
11:\(d=d_{M^{*}}^{2}(X,Y)\)
12: dist.append(\(d\))
13:endfor
14:\(c=\arg\min(\text{dist})\) return\(c\)
```
**Algorithm 2** Inference with Mahalanobis Metric
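The nearest-class decision at the end of Alg. 2 can be summarized by the following short sketch; variable names and shapes are illustrative assumptions.

```
import torch

def classify(Y, class_protos, M):
    # Y: (k,) embedding of the image from branch B; class_protos: (C, k) outputs
    # of branch A, one row per candidate class; M: (k, k) learned metric M*.
    diffs = class_protos - Y                               # (C, k)
    dists = torch.einsum('ci,ij,cj->c', diffs, M, diffs)   # squared Mahalanobis distances
    return int(torch.argmin(dists))                        # index of the nearest class
```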
## 4 Experiments
We first describe four popular public datasets and the evaluation details. We then present the implementation details and compare our method with several classical approaches. Finally, we conduct an ablation study to test the effectiveness of three important components of the work.
### Datasets and Evaluation Details
We adopted four public datasets: Caltech-UCSD Birds-200-2011 (CUB) [21], Animals with Attributes 1 (AWA1) [16], Animals with Attributes 2 (AWA2) [15], and the SUN Database (SUN) [14]. The CUB dataset contains 11,788 images of 200 species of birds, with about 60 images per category. The AWA1 and AWA2 datasets each cover 50 different animal categories and contain 30,475 and 37,322 images, respectively. The SUN dataset has images from 717 different scene categories, totalling about 14,340 images. In dividing the datasets, we followed the conventional split protocol for GZSL.
For evaluation, we used the harmonic mean \(H\), which is commonly used to assess the classification performance of GZSL, to evaluate the recognition results on seen and unseen class data simultaneously; it is calculated as follows
\[H=\frac{2\times U\times S}{U+S}, \tag{14}\]
where \(U\) and \(S\) denote the classification accuracy on unseen classes and seen class data, respectively.
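For illustration, \(U=60.0\%\) and \(S=75.0\%\) give \(H=\frac{2\times 60.0\times 75.0}{60.0+75.0}\approx 66.7\%\), so a model must perform well on both seen and unseen classes to obtain a high \(H\).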
### Implementation Details
For basic visual and semantic feature extraction, we refer to VAEGAN Xian et al. (2019) and improve the original VAEGAN in the feature generation part. We use the pre-trained Resnet101 He et al. (2016) and Bert Tokenizer Devlin et al. (2018) to extract the visual and semantic features of images and texts, generating \(1000\)-dimensional visual feature vectors and \(768\)-dimensional semantic feature vectors, respectively. These two vectors are fused together and converted into a \(1*3*256*256\) tensor for use in the encoding stage of VAEGAN. Subsequently, we obtain a latent semantic representation of size \(500\). The samples generated from the latent semantic representation pass through the discriminator A to obtain a vector of length \(900\). Similarly, the original image also gets a \(900\)-dimensional vector through the discriminator B, to facilitate our calculation of the Mahalanobis distance. The ADAM Kingma and Ba (2014) optimizer is used in our algorithm, with the learning rate set to \(10^{-3}\). Finally, the initial values of the hyperparameters \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are \(1.0\), \(1.0\) and \(1.0\), respectively.
### Comparison with State-of-the-Art Methods
In this section, we select recent results of the state of the art in GZSL tasks, including ALE Akata et al. (2015), f-CLSWGAN Xian et al. (2018), LiGAN Li et al. (2019), CE-GZSL Han et al. (2021), DCRGAN-TMM Ye et al. (2021), HSVA Chen et al. (2021), DEM Zhang et al. (2017), CADA-VAE Schonfeld et al. (2019), DAZLE Huynh and Elhamifar (2020), BSeGN Xie et al. (2022), f-VAEGAN-D2 Xian et al. (2019) and CMC-GAN Yang et al. (2023), see Tab. 1 for details.
Tab. 1 lists the comparison between our method and other classical methods. Specifically, with our method the H-score reaches 67.2% on the CUB dataset, 71.6% on AWA1, 70.9% on AWA2, and 45.9% on SUN. Compared with the original VAEGAN model Xie et al. (2022), our method improves the H-score from 58.0% to 67.2% on the CUB dataset, from 67.4% to 70.9% on AWA2, and from 42.9% to 45.9% on SUN. The above results show that our proposed method outperforms the state-of-the-art methods on all evaluated datasets and achieves significant improvements, especially in the \(H\)-score. We attribute these results to two aspects: 1) under the generative framework of VAEGAN, we simulate the outputs of seen samples and of (pseudo) unseen samples through two branches, and the loss function pushes the outputs of the two branches as far apart as possible while keeping the outputs within the same branch as close as possible, so that seen and (pseudo) unseen classes can be separated as much as possible; 2) the Mahalanobis distance can help us correct wrong decisions when projection bias occurs, thereby improving classification performance on seen and unseen classes.
It is worth noting that the weight matrix in the Mahalanobis distance does not introduce additional parameters, it only depends on the structural parameters of the network, which limits the complexity of the model to some extent and reduces the risk of model overfitting.
### Ablation Study
**Mahalanobis distance metric/Euclidean distance metric** The Mahalanobis distance plays a key role in our algorithm. To verify its effectiveness, we compare it with ordinary Euclidean distance. Since the Euclidean distance does not contain any parameters, we remove the \(L_{M}\) loss during training. The comparison results of using Euclidean distance and Mahalanobis distance on the four data sets are shown in Tab. 2.
From the experimental results, the Euclidean distance performs very poorly under our model framework, which again shows the important role of the Mahalanobis distance. Note that the VAEGAN baseline architecture in Tab. 3 also uses the Euclidean distance, yet its performance degradation on the 4 datasets is not as pronounced. The Mahalanobis distance takes into account the interactions between the feature components in the projection space, and it is continuously updated during the iterative process, which alleviates the projection shift problem in GZSL.
**Multi-branch Discriminators/Single-branch Discriminator** In our work, we improve the structure of VAEGAN, in particular, we introduce another branch to better learn features that are discriminative between seen and unseen classes (see Tab. 3).
According to the conducted experiments, our model slightly outperforms the baseline of VAEGAN in all categories except unseen categories in the AWA1 dataset. This suggests that branch B of our model alleviates the projection bias/shift problem of samples to some extent and helps alleviate the problem related to semantic imbalance.
**Affine Transformation Fusion/Concatenation Operation** The fusion of multiple modalities such as images and texts is usually fused by the concatenation operation. However, multiple modalities of the same sample are interrelated, and affine transformation fusion makes the change of text representation directly affect the mapping function to the image, realizing a closer information fusion between the two. In order to verify the effectiveness of the Affine Transformation Fusion (ATF) module under our model framework, we list the results of their comparison in Tab. 4. According to the experimental results, the affine transformation function shows a slight improvement over models that only capture image features and semantic features for all categories except the SUN dataset. As a result, the information fusion effect of this module is basically equivalent to the general concatenation operation.
## 5 Conclusion
Biased projection is an important and challenging problem in GZSL. In our work, we introduce the Mahalanobis distance into the VAEGAN framework. To this end, we use two branches to learn the samples of the seen classes and the samples of the (pseudo) unseen classes, respectively, and propose a new loss function such that the projected space is learned to be more discriminative for samples from seen and unseen classes. In particular, the weight matrix of the Mahalanobis distance does not introduce additional parameters,
which limits the expressive ability of the model and avoids the possibility of further overfitting. Finally, our extensive experimental evaluation shows that our proposed method outperforms the state-of-the-art methods on four benchmark datasets. Our contribution has significant implications for advancing zero-shot learning and provides a promising avenue for future research in this area.
|
2310.14198 | QA-NatVer: Question Answering for Natural Logic-based Fact Verification | Fact verification systems assess a claim's veracity based on evidence. An
important consideration in designing them is faithfulness, i.e. generating
explanations that accurately reflect the reasoning of the model. Recent works
have focused on natural logic, which operates directly on natural language by
capturing the semantic relation of spans between an aligned claim with its
evidence via set-theoretic operators. However, these approaches rely on
substantial resources for training, which are only available for high-resource
languages. To this end, we propose to use question answering to predict natural
logic operators, taking advantage of the generalization capabilities of
instruction-tuned language models. Thus, we obviate the need for annotated
training data while still relying on a deterministic inference system. In a
few-shot setting on FEVER, our approach outperforms the best baseline by $4.3$
accuracy points, including a state-of-the-art pre-trained seq2seq natural logic
system, as well as a state-of-the-art prompt-based classifier. Our system
demonstrates its robustness and portability, achieving competitive performance
on a counterfactual dataset and surpassing all approaches without further
annotation on a Danish verification dataset. A human evaluation indicates that
our approach produces more plausible proofs with fewer erroneous natural logic
operators than previous natural logic-based systems. | Rami Aly, Marek Strong, Andreas Vlachos | 2023-10-22T06:27:31Z | http://arxiv.org/abs/2310.14198v1 | # QA-NatVer: Question Answering for Natural Logic-based Fact Verification
###### Abstract
Fact verification systems assess a claim's veracity based on evidence. An important consideration in designing them is faithfulness, i.e. generating explanations that accurately reflect the reasoning of the model. Recent works have focused on natural logic, which operates directly on natural language by capturing the semantic relation of spans between an aligned claim with its evidence via set-theoretic operators. However, these approaches rely on substantial resources for training, which are only available for high-resource languages. To this end, we propose to use question answering to predict natural logic operators, taking advantage of the generalization capabilities of instruction-tuned language models. Thus, we obviate the need for annotated training data while still relying on a deterministic inference system. In a few-shot setting on FEVER, our approach outperforms the best baseline by \(4.3\) accuracy points, including a state-of-the-art pre-trained seq2seq natural logic system, as well as a state-of-the-art prompt-based classifier. Our system demonstrates its robustness and portability, achieving competitive performance on a counterfactual dataset and surpassing all approaches without further annotation on a Danish verification dataset. A human evaluation indicates that our approach produces more plausible proofs with fewer erroneous natural logic operators than previous natural logic-based systems.
## 1 Introduction
Automated fact verification is concerned with the task of identifying whether a factual statement is true, with the goal of improving digital literacy Vlachos and Riedel (2014). A typical fact verification system consists of an evidence retrieval and a claim verification component (i.e. the judgement of whether a claim is true). The latter is typically implemented as a neural entailment system Guo et al. (2022), _inter alia_), which is not transparent in regards to its underlying reasoning. While efforts have been made to improve their explainability, for instance via highlighting salient parts of the evidence Popat et al. (2018), or generating summaries Kotonya and Toni (2020), there is no guarantee that the explanations are _faithful_, i.e. that they accurately reflect the reasoning of the model Jacovi and Goldberg (2020); Atanasova et al. (2023).
Contrarily, proof systems like NaturalLI Angeli and Manning (2014), perform natural logic inference as proofs, and are faithful by design. Their transparent reasoning empowers actors to make informed decisions about whether to trust the model and which parts of the prediction to dispute Jacovi and Goldberg (2021). Recently Krishna et al. (2022) constructed a natural logic theorem prover for claim verification, using an autoregressive formulation with constrained decoding. However, an important limitation of this approach is its dependence on substantial resources for training, relying on large datasets for claim verification, and structured knowledge bases like the Paraphrase DataBase Ganitkevitch et al. (2013), WordNet Miller (1994), and Wikidata Vrandecic and Krotzsch (2014). However such manually curated resources are typically accessible for high-resource languages, thus limiting its applicability.
To this end, we propose **QA-NatVer**: **Q**uestion **A**nswering for **Nat**ural Logic-based **F**act **V**erification, a natural logic inference system that composes a proof by casting natural logic into a question answering framework. As illustrated in Figure 1, a proof is a sequence of steps, with each step describing the semantic relation between a claim span and an evidence span via a set-theoretic natural logic operator (NatOp), and this sequence of NatOps determines the veracity of the claim following a deterministic finite state automaton (DFA). QA-NatVer predicts NatOps using operator-specific boolean questions (cf. Table 1 for examples for all operators). For instance, the relation between the claim span
_was born_ and the evidence _Born_ is ascribed the equivalence NatOp (\(\equiv\)), which we predict with questions such as _Is "was born" a paraphrase of "Born"?_. This formulation enables us to make use of instruction-finetuned language models, which are powerful learners, even with limited supervision (Sanh et al., 2022, _inter alia_). Since the input format to our question-answering formulation constrains the context to the aligned claim-evidence spans, we generate claim-evidence alignments between overlapping spans of varying length, and individually predict the NatOp for each pair of aligned spans. To select the best proof over all possible proofs, we combine the answer scores to the questions associated with each proof.
In a few-shot setting with 32 training instances on FEVER, QA-NatVer outperforms all baselines by \(4.3\) accuracy points, including ProoFVer, LOREN (Chen et al., 2022), and a state-of-the-art few-shot classification system, T-Few (Liu et al., 2022). By scaling the instruction-tuning model from BART0 to Flan-T5, we achieve a score of \(70.3\pm 2.1\), closing the gap to fully supervised models, trained on over 140,000 instances, to \(8.2\) points. On an adversarial claim verification dataset, Symmetric FEVER (Schuster et al., 2019), QA-NatVer scores higher than few-shot and fully supervised baselines, including ProoFVer, demonstrating the robustness of our approach. In a low-resource scenario on a Danish fact verification dataset (DanFEVER; Norregaard and Derczynski, 2021), without further annotation for training, our system outperforms all baselines by \(1.8\) accuracy points, highlighting the potential of the question-answering formulation for low-resource languages. An ablation study indicates the benefits of QA-NatVer's question-answering formulation, outperforming ChatGPT (OpenAI, 2022) (over 430x the size of BART0) prompted with in-context examples to predict NatOps by \(11.9\) accuracy points. Finally, we show in a human evaluation that QA-NatVer improves over previous natural logic inference systems in terms of explainability, producing more plausible proofs with fewer erroneous NatOps.1
Footnote 1: Code and models are available at [https://github.com/Raldir/QA-NatVer](https://github.com/Raldir/QA-NatVer)
## 2 Related Work
Natural logic (Van Benthem, 1986; Sanchez, 1991) operates directly on natural language, making it an appealing alternative to explicit meaning representations such as lambda calculus, since the translation of claims and evidence into such representations is error-prone and difficult to decode for non-experts. NatLog (MacCartney and Manning, 2007, 2009) proposes the use of natural logic for textual inference.
Figure 1: At each inference step, a claim span is mutated into an evidence span via a natural logic operator (NatOp). The current veracity state and the mutation operator determine the transition to the next state via a finite state automaton (DFA). Starting from the initial state, the span _Anne Rice_ is mutated via the equivalence operator (\(\equiv\)), which leaves the state unchanged according to the DFA. The inference ends in a state indicating the claim’s refutation. We use question-answering to predict the NatOps, taking advantage of the generalization capabilities of instruction-tuned language models.
This line of work was subsequently extended into the NaturalLI proof system. With the surge of pre-trained language models, multiple works have attempted to integrate natural logic into neuro-symbolic reasoning systems (Feng et al., 2020, 2022). In particular, ProoFVer, a natural logic inference system specifically designed for fact verification, achieves competitive performance yet remains faithful and more explainable than entirely neural approaches (Krishna et al., 2022). Stacey et al. (2022) propose an alternative framework of logical reasoning, evaluating the veracity of individual claim spans (or of atomic facts in Stacey et al. (2023)) and determining the overall truthfulness by following a simple list of logical rules. Chen et al. (2022) use a similar list of logical rules but aggregate the outcomes with a neural network component. However, all previous approaches have in common that they require substantial training data to perform well, limiting their use to resource-rich languages and domains.
Casting a natural language problem to a question-answering setting has previously been explored in a variety of tasks such as relation classification (Levy et al., 2017; Cohen et al., 2022), and semantic role labeling (He et al., 2015; Klein et al., 2022). For fact verification, in particular, previous works have considered formulating it as a question generation task, decomposing a claim into relevant units of information to inquire about, followed by question answering to find relevant answers in a large knowledge base (Fan et al., 2020; Chen et al., 2022). Yet, these works do not consider aggregating nor constructing proofs from the answers to the questions.
Finally, work on few-shot claim verification is limited. Lee et al. (2021) explore using a perplexity score, however, their approach is constrained to binary entailment, i.e. either supported or refuted. Zeng and Zubiaga (2023) explore active learning in combination with PET (Schick and Schutze, 2021), a popular prompt-based few-shot learning method, and Pan et al. (2021) and Wright et al. (2022) generate weakly supervised training data for zero-shot claim verification. However, none of the aforementioned methods produces (faithful) explanations.
## 3 Method
Given a claim \(c\) and a set of \(k\) evidence sentences \(E\), the task of claim verification is to predict a veracity label \(\hat{y}\) and to generate a justification for the selected label. QA-NatVer is a system that returns a natural logic inference proof which consists of a sequence of relations \(P=m_{1},\ldots,m_{l}\), each of them specifying a relation between a claim and evidence span and a NatOp operator \(o\).2 The sequence of operators \(O=o_{1},\ldots,o_{l}\) is then the input to a deterministic finite state automaton that specifies the veracity label \(\hat{y}=\text{DFA}(O)\) (c.f. Figure 1). The proof \(P\) itself serves as the justification for the predicted label \(\hat{y}\).
Footnote 2: In the natural logic literature these operators define mutations that would convert the claim so that it follows from the evidence (Angeli and Manning, 2014), however we do not make use of the edited claim in this paper.
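To make the deterministic verdict computation concrete, the sketch below executes a proof's NatOp sequence on a small finite state automaton. The state names, the helper `dfa`, and the transition entries are our own illustrative reconstruction (in the spirit of Angeli and Manning, 2014); the authoritative transitions are those of the DFA in Figure 1.

```python
# Illustrative sketch of executing a natural logic proof on a DFA.
# The transition table is a partial reconstruction for illustration only;
# the authoritative transitions are those shown in Figure 1.
SUPPORTED, REFUTED, NEI = "SUPPORTS", "REFUTES", "NOT ENOUGH INFO"

TRANSITIONS = {  # state -> {NatOp -> next state}; NEI is treated as absorbing
    SUPPORTED: {"≡": SUPPORTED, "⊑": SUPPORTED, "⊒": NEI,
                "¬": REFUTED, "|": REFUTED, "#": NEI},
    REFUTED:   {"≡": REFUTED, "⊑": REFUTED, "⊒": NEI,
                "¬": SUPPORTED, "|": NEI, "#": NEI},
}

def dfa(natops):
    """Map a sequence of NatOps to a veracity label (DFA(O) in the text)."""
    state = SUPPORTED                 # proofs start in the supported state
    for op in natops:
        if state == NEI:              # absorbing state
            break
        state = TRANSITIONS[state][op]
    return state

# Example: equivalence keeps the state, a negation flips it to refutation.
assert dfa(["≡", "≡", "¬", "≡"]) == REFUTED
```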
QA-NatVer constructs its proofs following a three-step pipeline, illustrated in Figure 2: multi-granular chunking of the claim and its evidence sentences and the alignment between claim-evidence spans (Sec. 3.1), assignment of NatOps to each aligned pair using question answering (Sec. 3.2), and a proof selection mechanism over all possible proofs by combining the answer probabilities to the questions associated with the proof (Sec. 3.3).
Figure 2: QA-NatVer’s proof construction. We first chunk the claim and the evidence, and align them at multiple granularity levels. We then assign a natural logic operator to each aligned claim-evidence span using question-answering. Finally, we select the proof by combining the answer scores to the questions associated with the proof.
### Multi-granular Chunking & Alignment
We chunk the claim initially into \(l\) non-overlapping consecutive spans \(c=c_{1},\ldots,c_{l}\), using the chunker of Akbik et al. (2019), and merge spans that do not contain any content words with their subsequent spans. To align each claim span \(c_{i}\) with the information of the highest semantic relevance in the evidence \(E\), we use the fine-tuned contextualized word alignment system of Dou and Neubig (2021) to first align individual words between the claim and each evidence sentence \(E_{j}\). These word-level alignments are then mapped back to the span \(c_{i}\) to form an evidence span \(e_{ij}\) from the sentence \(E_{j}\). Since multiple spans in the evidence sentences could align with \(c_{i}\), we measure the cosine similarity between \(c_{i}\) and each aligned evidence span \(e_{ij}\), using latent span embeddings via Sentence Transformers Reimers and Gurevych (2019).
It is essential that the granularity of a claim span matches that of its evidence span in order to capture their semantic relationship correctly. Therefore, we additionally consider merging the claim chunks \(c_{1},\ldots,c_{l}\) into more coarse-grained chunks. Concretely, we concatenate \(m\) consecutive claim chunks into a single new span \(c_{i:i+m}\), with \(m\) being up to a length of \(4\). The merging process results in a total of at most \(q=4\cdot l-6\) chunks. Additionally, we consider the claim \(c\) itself as the most coarse-grained unit and align it to evidence in \(E\).
Consider the example in Figure 2. A system that only considers a single chunking might consider _was incapable_ and _of writing_ as separate phrases. However, the evidence spans aligned to these individual phrases (_She also wrote four_ and _books_, respectively) do not provide enough context individually to infer their semantic relations with respect to the claim spans. However, when merged into a single chunk, their semantic relation becomes obvious, as the span _was incapable of writing_ is negated by _she also wrote four books_. Hence, a more flexible variable-length chunking enables finding more semantically coherent alignments.
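As a minimal sketch of the span generation described above, assuming the initial chunking has already been produced, the following enumerates all merged spans of up to four consecutive chunks plus the claim itself; the function name and the whitespace join are our own simplifications.

```python
def multi_granular_spans(chunks, max_merge=4):
    """Generate candidate claim spans c_{i:i+m} by concatenating up to
    `max_merge` consecutive chunks (at most q = 4*l - 6 merged spans for
    l >= 4 chunks), plus the full claim as the coarsest unit."""
    spans = []
    n = len(chunks)
    for m in range(1, max_merge + 1):          # span length in chunks
        for i in range(0, n - m + 1):
            spans.append(" ".join(chunks[i:i + m]))
    spans.append(" ".join(chunks))             # the claim itself
    return spans

chunks = ["Anne Rice", "was incapable", "of writing", "a book"]
print(multi_granular_spans(chunks))
```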
### NatOp Assignment via QA
Each claim-evidence pair has to be assigned a NatOp, specifying their semantic relationship. We assign one out of six NatOps \(o\in\{\equiv,\sqsubseteq,\sqsupseteq,\lnot,\mid,\#\}\)3 to each claim-evidence span. We formulate the prediction for each NatOp \(o\) as a question-answering task (cf. Table 1), each of which is instantiated by one or more boolean question prompts \(T_{o}\). The only exception is the independence operator (\(\#\)), which is applied when none of the other operators is predicted. To predict whether a NatOp \(o\) holds between a claim span \(c_{i}\) aligned with evidence \(e_{i}\), we compute the log probabilities averaged over all question prompts \(T_{o}\):
Footnote 3: Similarly to Angeli and Manning (2014) and Krishna et al. (2022), we do not consider the rarely appearing cover operator, but instead replace it with the independence NatOp.
\[\text{QA}(a\mid c_{i},e_{i},T_{o})=\frac{1}{|T|}\sum_{t\in T_{o}}\log p_{\theta}(a\mid c_{i},e_{i},t), \tag{1}\]
with \(a\in\{\text{Yes},\text{No}\}\), \(|T|\) being the number of question prompts, and QA being our seq2seq instruction-tuned language model (see App. A for details). We apply an argmax function to select the most probable answer \(\hat{a}_{o}=\operatorname{argmax}_{a}\text{QA}(a\mid c_{i},e_{i},T_{o})\). An answer prediction \(\hat{a}_{o}=\text{Yes}\) indicates that the NatOp \(o\) holds for the aligned claim-evidence spans. This formulation enables us to make effective use of the generalization abilities of instruction-tuned language models.
As illustrated in Figure 2, given the aligned claim-evidence spans _was incapable of writing_ and _She also wrote four books_, we ask questions for each of the five NatOps, with the spans embedded in them. In the figure, the negation NatOp (\(\neg\)) is selected due to its corresponding boolean question being positively answered. Since predictions are made independently for each NatOp, it can occur that the model predicts multiple NatOps for a single pair of aligned claim-evidence spans. In these instances, we select the NatOp with the highest probability (as computed in Eq. 1). Conversely, if none of the five NatOps is predicted, we assign the independence operator (\(\#\)).
\begin{table}
\begin{tabular}{c|c|l} \hline
**NatOp** & **Task** & **Question Example** \\ \hline Equivalence (\(\equiv\)) & Paraphrase identification & Is in New Jersey a paraphrase of in New Orleans? \\ \hline Fwrd. Entailment (\(\sqsubseteq\)) & Entailment & Given the premise in New Orleans, does the hypothesis in New Jersey hold? \\ \hline Rev. Entailment (\(\sqsupseteq\)) & Entailment & Does in New Jersey entail in New Orleans? \\ \hline Negation (\(\neg\)) & Negation classification & Is the phrase in New Jersey a negation of in New Orleans? \\ \hline Alternation (\(|\)) & Alternation classification & Does in New Jersey exclude in New Orleans? \\ \hline \end{tabular}
\end{table}
Table 1: Natural logic operators and the corresponding natural language task and boolean question examples.
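The NatOp assignment step (Eq. 1 and Table 1) can be sketched as below. The prompt strings are paraphrases of the examples in Table 1, and `answer_log_prob(question, answer)` is an assumed helper that returns the log-probability our instruction-tuned seq2seq model assigns to an answer; the tie-breaking and the independence fallback follow the description above.

```python
NATOP_QUESTIONS = {          # one or more boolean prompts per NatOp (cf. Table 1)
    "≡": ['Is "{c}" a paraphrase of "{e}"?'],
    "⊑": ['Given the premise "{e}", does the hypothesis "{c}" hold?'],
    "⊒": ['Does "{c}" entail "{e}"?'],
    "¬": ['Is the phrase "{c}" a negation of "{e}"?'],
    "|": ['Does "{c}" exclude "{e}"?'],
}

def assign_natop(claim_span, evidence_span, answer_log_prob):
    """Predict a NatOp for one aligned claim-evidence span pair.

    For each operator, log-probabilities are averaged over its prompts
    (Eq. 1) and the argmax over {Yes, No} is taken.  If several operators
    are answered 'Yes', the most probable one is kept; if none is,
    independence (#) is assigned.
    """
    yes_scores = {}
    for op, templates in NATOP_QUESTIONS.items():
        questions = [t.format(c=claim_span, e=evidence_span) for t in templates]
        yes = sum(answer_log_prob(q, "Yes") for q in questions) / len(questions)
        no = sum(answer_log_prob(q, "No") for q in questions) / len(questions)
        if yes > no:
            yes_scores[op] = yes
    if not yes_scores:
        return "#"               # independence fallback
    return max(yes_scores, key=yes_scores.get)
```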
### Proof Selection
Since we expand the \(l\) initial non-overlapping claim chunks with multi-granular merging into \(q\) overlapping ones, we can construct a total of \(C(l)\) = \(\sum_{i=l-m}^{l-1}C(i)\) proofs, with \(C(i)\) being the number of proofs for \(i\) chunks, \(C(0)=1\), and \(m\) being the maximum merge length.4 To select the most appropriate proof we compute a score for each one, defined as the sum of a NatOp probability score \(s_{p}\) (Eq. 2) and a NatOp verdict score \(s_{v}\) (Eq. 3) introduced later in this section. We select the proof with the highest score.
Footnote 4: While the number of potential proofs can grow large, computations for each claim-evidence span are only made once, resulting in linear complexity with respect to \(l\), c.f. Section 4.5.
Since the probability for each NatOp is computed independently, we define the score \(s_{p}\) as the average of the predicted NatOp probabilities:
\[s_{p}=\frac{1}{n}\sum_{i=1}^{n}\text{QA}(a_{\text{Yes}}\mid c_{i},e_{i},T_{o}), \tag{2}\]
with \(n\) being the length of the proof \(P\), and \(T_{o}\) being the questions for the NatOp \(o\) assigned to the \(i\)-th aligned span in the proof. Since no probability is explicitly assigned to the independence operator (#), as we have no question to directly capture it (cf. Section 3.2), we add the lowest possible score to the average in these cases (i.e. \(0.5\), since the predictions are boolean).
The NatOp verdict score \(s_{v}\) considers the aligned claim with its evidence in its entirety as the most coarse-grained chunk, for which our question-answering system computes a score over a textual representation of the veracity labels \(y\):
\[s_{v}=\text{QA}(y_{\text{DFA}(O)}\mid c,E,T_{v}), \tag{3}\]
with \(T_{v}\) being the veracity question templates, and \(O\) being the NatOp sequence in the proof \(P\). The score \(s_{v}\) is defined to be the probability assigned to the veracity label associated with the state of the DFA in which the proof \(P\) would terminate, i.e. DFA(\(O\)). In our example, the proof where _was incapable_ and _of writing_ are considered separate spans receives a low score due to the two independence NatOps and its termination in the _Not enough info_ (NEI) state, while the one where they are merged into a single span is selected due to high answer confidence in the predicted NatOps.
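Combining the two scores, proof selection can be sketched as follows; `natop_yes_prob` and `verdict_prob` stand in for the question-answering model's probabilities in Eqs. 2 and 3, `dfa` refers to the earlier DFA sketch, and the enumeration of candidate proofs is assumed to be given.

```python
INDEPENDENCE_PROB = 0.5   # no question targets '#', so the lowest boolean score is used

def natop_score(proof, natop_yes_prob):
    """s_p (Eq. 2): mean probability of the predicted NatOps along the proof."""
    probs = [INDEPENDENCE_PROB if op == "#" else natop_yes_prob(c_span, e_span, op)
             for (c_span, e_span, op) in proof]
    return sum(probs) / len(probs)

def verdict_score(proof, claim, evidence, verdict_prob):
    """s_v (Eq. 3): probability of the label the proof's DFA run terminates in."""
    label = dfa([op for (_, _, op) in proof])   # dfa() as in the earlier DFA sketch
    return verdict_prob(claim, evidence, label)

def select_proof(candidate_proofs, claim, evidence, natop_yes_prob, verdict_prob):
    """Return the candidate proof with the highest s_p + s_v."""
    return max(candidate_proofs,
               key=lambda p: natop_score(p, natop_yes_prob)
                             + verdict_score(p, claim, evidence, verdict_prob))
```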
### Training
We fine-tune the word-level alignment system as well as our question-answering system for QA-NatVer. The alignment system is trained on multiple training objectives as defined in Dou and Neubig (2021) for parallel corpora, namely masked language modelling, translation language modelling, and their self-training objective. We consider the claim \(c\) and each gold evidence \(e\in E_{G}\) as a sentence pair. We create further pairs using gold proofs \(P_{G}\) by considering all possible substitutions of claim chunks with their respective evidence chunk. Our question-answering system is fine-tuned following Liu et al. (2022) by maximum likelihood estimation (cf. Eq. 1), complemented by an unlikelihood loss which discourages the model from predicting incorrect target sequences. The NatOps in the gold proofs \(P_{G}\) are used as positive QA samples, while we sample negative training instances (i.e. NatOp questions with the answer being "No") from the gold proofs by randomly selecting a wrong NatOp for an aligned claim-evidence span.
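The construction of boolean QA training pairs from annotated proofs can be sketched as follows; `question_for` is an assumed helper that instantiates a prompt for a given operator and span pair, and sampling a single wrong operator per span is our simplification of the negative-sampling scheme described above.

```python
import random

NATOPS = ["≡", "⊑", "⊒", "¬", "|"]

def qa_training_pairs(gold_proof, question_for):
    """Turn one gold natural logic proof into boolean QA training examples.

    Positives ask about the annotated NatOp of each aligned span; negatives
    ask about one randomly sampled wrong NatOp for the same span.
    Independence spans are skipped here, since no question targets '#'.
    """
    examples = []
    for claim_span, evidence_span, gold_op in gold_proof:
        if gold_op == "#":
            continue
        examples.append((question_for(gold_op, claim_span, evidence_span), "Yes"))
        wrong_op = random.choice([op for op in NATOPS if op != gold_op])
        examples.append((question_for(wrong_op, claim_span, evidence_span), "No"))
    return examples
```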
## 4 Evaluation
### Few-Shot Claim Verification
**Data** We manually annotated \(68\) training instances of FEVER (Thorne et al., 2018) with natural logic proofs. Since FEVER does not contain gold evidence for instances labeled with NEI, we use retrieved evidence instead. The samples were drawn to maintain a balanced label distribution, with \(22\) Supported, \(25\) Refuted, and \(21\) NEI instances. This results in proofs with a total of \(183\) Equivalence (\(\equiv\)), \(55\) Forward Entailment (\(\sqsubseteq\)), \(16\) Reverse Entailment (\(\sqsupseteq\)), \(29\) Negation (\(\lnot\)), \(31\) Alternation (\(|\)), and \(40\) independence (#) NatOps. We train all systems on \(32\) samples unless stated otherwise, randomly sampling from the aforementioned annotated instances. We evaluate our system on the development split of FEVER (Thorne et al., 2018), consisting of \(19,998\) claims, using retrieved evidence. We use the document retriever of Aly and Vlachos (2022), and the sentence reranker of Stammbach (2021) to select the top \(k=5\) evidence sentences \(E\). To assess the robustness of QA-NatVer, we also evaluate the systems on Symmetric FEVER (Schuster et al., 2019), a binary classification dataset (Supports, Refutes) consisting of \(712\) instances, which is built to expose models that learn artefacts and erroneous biases from FEVER.
**Baselines** As a state-of-the-art faithful inference system, ProoFVer (Krishna et al., 2022) is the main baseline we compare against; it is based on GENRE (Cao et al., 2021), an end-to-end entity linking model fine-tuned on BART (Lewis et al., 2020). We evaluate a version of ProoFVer that is trained on the same data as QA-NatVer to compare both systems' data efficiency. We refer to the version of ProoFVer trained on over \(140,000\) FEVER instances using additional knowledge sources (as outlined in Sec. 1) as ProoFVer-full. Moreover, we evaluate in our few-shot setting LOREN (Chen et al., 2022), which decomposes claims into phrases and predicts their veracity using a neural network regularized on the latently encoded phrases' veracity by simple logical rules. Similarly to ProoFVer, LOREN was trained on the entirety of FEVER.
We further consider few-shot baselines that do not guarantee faithfulness or provide the same level of interpretability. These include a state-of-the-art few-shot learning method, T-Few (Liu et al., 2022), which uses two additional loss terms to improve few-shot fine-tuning. While Liu et al. (2022) is based on T0 (Sanh et al., 2022), we instead use BART0 (Lin et al., 2022), a BART model instruction-finetuned on the multi-task data mixture described in Sanh et al. (2022), to keep the baselines comparable. They observe comparable performance between BART0 and T0, which we confirmed in preliminary experiments. Finally, we also evaluate a fine-tuned DeBERTa-Large model (He et al., 2021), and a DeBERTa model additionally fine-tuned on SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018), both being common and very competitive claim verification baselines (Stammbach, 2021; DeHaven and Scott, 2023).
**Experimental Setup** We sample \(K\) training instances and do not use a validation set for hyperparameter tuning, following the real-world few-shot learning setting of Alex et al. (2021). We use Awesomealign (Dou and Neubig, 2021) as the word-level contextualized alignment system. To stay comparable with our ProoFVer and T-Few baselines, we also use BART0 as our instruction-tuned language model. Notably, natural language inference is not a task BART0 has seen during instruction fine-tuning. To take advantage of a more powerful QA system, we also evaluate QA-NatVer using Flan-T5 (Chung et al., 2022), a state-of-the-art instruction-tuned language model, which has explicitly seen natural language inference amongst many more tasks. Results are averaged over five runs with standard deviation indicated unless otherwise noted.
**Main Results** Results are shown in Table 2. QA-NatVer achieves an accuracy of \(64.0\) and a macro-averaged F\({}_{1}\) of \(63.1\), outperforming T-Few by \(4.6\) and \(4.3\) absolute points, respectively. More importantly, our system substantially outperforms ProoFVer, which performs only marginally better than random when trained on the same amount of data. We improve accuracy by \(27.8\) points, highlighting the limitation of the ProoFVer formulation in a few-shot scenario, and this observation also holds for LOREN. Despite QA-NatVer's BART0 not having seen any natural language inference task in its instruction-tuning, it outperforms the DeBERTaV3-NLI model. When training our QA-NatVer model with the Flan-T5 model instead, our system's accuracy improves to \(70.3\pm 2.1\) with an F\({}_{1}\) of \(70.0\pm 1.4\) while being trained only on \(32\) claim annotations. For comparison, ProoFVer-full achieves an accuracy of \(78.5\), highlighting the data efficiency of QA-NatVer.
**Robustness** As shown in Table 3, QA-NatVer performs competitively against all baselines when run without adjustment on Symmetric FEVER. QA-NatVer outperforms ProoFVer by \(28.7\) accuracy
\begin{table}
\begin{tabular}{l|c c} \hline \hline & Accuracy & Macro-Avg. F\({}_{1}\) \\ \hline DeBERTa & \(39.1\pm 2.5\) & \(36.7\pm 2.3\) \\ DEBERTa-NLI & \(59.7\pm 1.9\) & \(59.4\pm 1.6\) \\ T-Few & \(59.4\pm 3.2\) & \(58.8\pm 3.1\) \\ LOREN & \(35.7\pm 1.8\) & \(27.9\pm 1.7\) \\ ProoFver & \(36.2\pm 1.3\) & \(32.5\pm 1.3\) \\ \hline QA-NatVer & \(64.0\pm 0.9\) & \(63.1\pm 1.5\) \\ + Flan-T5 & **70.3 \(\pm\) 2.1** & **70.0 \(\pm\) 1.4** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on FEVER with \(32\) claim annotations for training.
\begin{table}
\begin{tabular}{l|c c} \hline \hline & Accuracy & Macro-Avg. F\({}_{1}\) \\ \hline DeBERTa & \(52.3\pm 4.0\) & \(47.5\pm 8.0\) \\ DEBERTa-NLI & \(83.1\pm 1.6\) & \(81.4\pm 0.9\) \\ T-Few & \(73.7\pm 3.8\) & \(73.0\pm 4.7\) \\ ProoFver & \(53.4\pm 2.7\) & \(52.6\pm 2.7\) \\ \hline QA-NatVer & \(80.9\pm 1.2\) & \(80.9\pm 1.2\) \\ + Flan-T5 & **85.8 \(\pm\) 0.9** & **85.8 \(\pm\) 1.0** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on Symmetric-FEVER.
points and T-Few by \(7.2\) points. Our system is also competitive with models trained on the entirety of FEVER, including ProoFVer-full, performing \(1.2\) accuracy points worse than ProoFVer-full (\(82.1\)). Training DeBERTa on NLI datasets improves robustness substantially, performing even better than ProoFVer-full. Using Flan-T5 as our instruction-tuned model instead, QA-NatVer surpasses all other models, improving scores by about \(4.9\) accuracy points to reach an accuracy of \(85.8\pm 0.9\).
**Varying sample sizes** Table 4 compares our approach against the baselines when trained with varying amounts of data in a few-shot setting, ranging between \(16\), \(32\), and \(64\) samples. QA-NatVer consistently outperforms our baselines across sample sizes. Notably, while DeBERTaV3 sees improvements with \(64\) samples, ProoFVer's improvements are marginal, indicating a larger need for data. The variability decreases across all models with increasing sample size, with QA-NatVer having the lowest standard deviation, indicating its training robustness. QA-NatVer with Flan-T5 achieves a score of \(71.6\pm 1.7\) when trained with \(64\) samples. The question-answering formulation might also be beneficial for obtaining large-scale proof annotations (cf. Section 2) cheaply, reducing the workload for annotators. Investigating this option is left to future work.
**Claim length** Real-world claims are typically longer than the example claim shown in Figure 1. MultiFC (Augenstein et al., 2019), a large-scale dataset of real-world claims, has an average claim length of 16.7 tokens compared to FEVER with 9.4. We subsequently measure the performance of QA-NatVer as a function of a claim's minimum length, as shown in Figure 3. QA-NatVer shows only a very small performance decline for claims of up to a minimum of 18 tokens, indicating its robustness towards longer claims, correctly predicting the veracity of claims such as "_The abandonment of populated areas due to rising sea levels is caused by global warming_".
**Ablation** We perform three different ablation studies, reported in Table 5. First, we examine the performance of QA-NatVer without multi-granular chunking. We observe a \(7.7\) average score drop in accuracy, demonstrating that considering evidence spans at different levels of granularity improves performance. Second, we ablate our proof selection method by omitting the veracity score \(s_{v}\), observing an accuracy drop of \(3.7\) points.
Finally, we compare our question-answering approach for NatOp assignments to a model that is prompted to predict NatOps for claim-evidence spans as a multi-class problem without multi-granular chunking. We use ChatGPT (OpenAI, 2022) and Llama-2 (Touvron et al., 2023) as state-of-the-art few-shot language models and prompt them with in-context examples from our annotated proofs. The other parts of QA-NatVer are kept identical. We observe that the non-QA approach leads to predicting more independence operators (#), resulting in an \(11.9\)-point drop in accuracy with ChatGPT, and a \(16.6\)-point drop with Llama2-13B. For details see Appendix D.
**Data** We evaluate on DanFEVER (Norregaard and Derczynski, 2021), a three-way claim verification task for Danish, consisting of a total of \(6,407\) instances. To eliminate additional variability from a multilingual retrieval system, we use the gold evidence for evaluation, except for NEI-labeled instances, for which we retrieve evidence via BM25.
**Baselines & Experimental Setup** We use our baselines with multilingual backbones, namely T-Few and ProoFVer with an mT0 backbone (Muennighoff et al., 2022), as well as a fine-tuned XLM-RoBERTa (Conneau et al., 2020) model. We additionally consider ProoFVer-full by translating the claim and evidence from Danish into English, using the translation system by Tiedemann and Thottingal (2020). QA-NatVer also uses mT0 (Muennighoff et al., 2022), a multilingual T5-based model, instruction-tuned on multilingual tasks. Similarly to BART0, mT0 has not seen natural language inference in its fine-tuning. We use the chunking system of Pauli et al. (2021) and keep all other components unchanged. The language of the natural logic questions and answers remains English for all experiments.
**Results** Results on DanFEVER are shown in Table 6. Our system achieves accuracy and F\({}_{1}\) of \(61.0\) and \(56.5\), outperforming all other baselines by \(1.8\) and \(2.7\) points, respectively. The ProoFVer baseline trained on the same data as our model achieves a score of \(47.9\). Notably, in this setting our approach even outperforms ProoFVer-full, with the claims and evidence translated from Danish into English. Running ProoFVer-full in this setting is computationally expensive due to the required translation and still yields worse accuracy than QA-NatVer. The variability in this language transfer setup is higher than for FEVER, particularly for T-Few, but remains low for QA-NatVer.
### Correctness of Natural Logic Operators
Assessing the quality of generated proofs exclusively by the verdict they result in ignores that an incorrect proof might lead to the correct verdict. For instance, in Figure 4, ProoFVer fails to assign equivalence (\(\equiv\)) even to identical spans, such as _Highway_ and _Highway_, yet it still produces the correct veracity label. To intrinsically evaluate the quality of proofs, human subjects (not the authors of this paper) annotated a total of \(114\) NatOp assignments from \(20\) claims and their associated proof from both ProoFVer and QA-NatVer for their correctness. Each NatOp assignment was annotated by \(3\) annotators, resulting in \(342\) data points. The claims are selected via stratified sampling, ensuring that each class is equally represented. We further ensure that both models predict the same verdict. All three subjects assigned the same correctness label to a NatOp in 84.8% of cases, thus indicating high inter-annotator agreement. QA-NatVer's
\begin{table}
\begin{tabular}{l|c c} \hline \hline & Accuracy & Macro-Avg. F\({}_{1}\) \\ \hline XLM-RoBERTa & \(41.9\pm 3.0\) & \(31.7\pm 3.3\) \\ XLM-ROBERTa-XNLI & \(54.3\pm 1.3\) & \(52.2\pm 0.7\) \\ T-Few & \(59.2\pm 4.8\) & \(53.8\pm 12.7\) \\ \hline ProoFVer & \(47.9\pm 3.2\) & \(24.6\pm 2.9\) \\ ProoFver-full w/ MT & 56.9 & 52.1 \\ \hline QA-NatVer & \(\mathbf{61.0\pm 3.5}\) & \(\mathbf{56.5\pm 4.9}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results on DanFEVER, using \(32\) FEVER claims using a multilingual language model.
Figure 4: A FEVER example where ProoFVer and QA-NatVer reach the correct verdict (refutation). QA-NatVer produces more plausible proofs with fewer erroneous NatOp assignments than ProoFVer.
NatOp assignments are correct in \(87.8\%\) of the cases, while ProoFVer is only correct in \(63.4\%\), indicating that the quality of NatOp assignments by QA-NatVer is superior to those by ProoFVer.
Considering the very high label accuracy of ProoFVer (outperforming QA-NatVer by almost \(10\) accuracy points), these results are surprising. We hypothesise that ProoFVer might have learned "shortcuts" to arrive at the correct verdict in its proofs because of noisy signals in the weakly supervised proof dataset it was trained on, which was constructed with dataset-specific heuristics to reach a sufficient size for training. To validate this, we inspect mutations where the claim and evidence spans are identical. These are trivial cases where the model is expected to predict the equivalence operator. However, ProoFVer produces a wrong NatOp for about 16.3% of cases, mostly the independence operator (13%), but our system always predicts equivalence (see App. C).
### Plausibility
To assess the plausibility of the natural logic proofs predicted by QA-NatVer, we run a forward prediction experiment [10]. Human subjects are asked to predict the veracity label _solely_ from the justification (i.e. proof) generated by the model and to specify on a five-level Likert scale, ranging from _very plausible_ to _not plausible_, how plausible the justification appears to them. Since we are evaluating proofs as an explanatory mechanism to humans, we ensured that no subject was familiar with the deterministic nature of natural logic inference. To enable non-experts to make use of the proof, we replaced the NatOps with English phrases, similar to [11] (see Appendix C).
The evaluation consists of \(120\) annotations from \(6\) subjects. The same \(20\) claims used in the human evaluation of correctness are paired with a ProoFVer or QA-NatVer proof explanation and are annotated by three subjects. No subject annotates the same claim for both models, as otherwise a subject might be influenced by the explanation they have seen before for the same claim. Using the QA-NatVer proofs, subjects correctly predict the model's decision in \(90\%\) of cases, compared to ProoFVer's 76.9%. All three subjects selected the same verdict in 70% and 91.7% of the cases, for ProoFVer and QA-NatVer, respectively, with an inter-annotator agreement of \(0.60\) and \(0.87\) Fleiss \(\kappa\) [12]. Regarding the plausibility assessments, the subjects rate QA-NatVer proofs an average score of \(4.61\), while ProoFVer is rated \(4.16\) out of \(5\) points.
### Efficiency
QA-NatVer remains computationally efficient since the typical bottleneck of transformer models, the input and output length, remain short at every stage of the pipeline. Concretely, the alignment module encodes each evidence sentence with the claim independently. The QA model uses as input a question with a single embedded claim span and its evidence with the output being either Yes/No or a short phrase. The average input length to the QA model on FEVER is 20 tokens while its output is in most cases a single token. This is substantially cheaper computationally than cross-encoding the claim and all evidence sentences and autoregressively generating the proof at once, as done by ProoFVer, with 195.2 input and 31.1 output tokens on average. The entire runtime of the QA module can be described as \(\mathcal{O}(l\cdot n_{span}^{2}+n_{all}^{2})\), with \(l\) being the number of spans, \(n_{span}\) being the input length for the aligned claim-evidence spans (for the NatOp probability score) and \(n_{all}\) being the length of the claim and its evidence sentences (for the NatOp verdict score). We measure the wall-clock time (in minutes) with the BART-large backbone, using the same hardware configuration as described in Appendix B. DeBERTa, T-Few, ProoFver, LOREN, and QA-NatVer train in 5.3, 22.3, 21.4, 27.5, and 36.4 minutes, and inference on the FEVER development set of 19998 instances runs in 20.6, 7.3, 185.2, 116.5, and 89.1 minutes, respectively.
## 5 Conclusion
This paper presented QA-NatVer, a natural logic inference system for few-shot fact verification that frames natural logic operator prediction as question-answering. We show that our approach outperforms all baselines while remaining faithful. Human evaluation shows that QA-NatVer produces more plausible proofs with fewer erroneous natural logic operators than the state-of-the-art natural logic inference system, while being trained on less than a thousandth of the data, highlighting QA-NatVer's generalization ability. Future work looks at extending the capability of natural logic inference systems to more complex types of reasoning, including arithmetic computations.
### Limitations
While natural logic provides strong explainability by operating directly on natural language, it is less expressive than alternative meaning representations that require semantic parses, such as lambda calculus [23]. For instance, temporal expressions and numerical reasoning are beyond the expressive power of natural logic [22] but are frequently required when semi-structured information is available [1]. Moreover, cases of ambiguity, such as cherry-picking, are difficult to process with natural logic. Addressing the limits of natural logic inference systems is out-of-scope for this paper. Similar to ProoFVer, the proofs we construct are intended to be executed in the DFA from left to right; however, natural logic-based inference is not constrained to such an execution. Furthermore, all benchmarks explored in the paper use Wikipedia as the knowledge base, which is homogeneous compared to the heterogeneous sources professional fact-checkers use (e.g., news articles, scientific documents, images, videos).
### Ethics Statement
Our system improves the explainability of claim verification models and empowers actors to make more informed decisions about whether to trust models and their judgements, yet actors must remain critical when evaluating the output of automated claim verification systems and not confuse explainability with correctness. We emphasize that we do not make any judgements about the truth of a statement in the real world, but only consider Wikipedia as the source for evidence to be used. Wikipedia is a great collaborative resource, yet it has mistakes and noise of its own similar to any encyclopedia or knowledge source. Thus we discourage users using our verification system to make absolute statements about the claims being verified, i.e. avoid using it to develop truth-tellers.
## Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council Doctoral Training Partnership (EPSRC). Andreas Vlachos is supported by the ERC grant AVeriTeC (GA 865958) and the EU H2020 grant MONITIO (GA 965576). Further, the authors would like to thank the subjects who volunteered to be part of the human evaluation, namely Mubashara Akhtar, Julius Cheng, Sana Kidwai, Nedjma Ousdihoum, Andre Schurat, and Ieva Raminta Staliunaite. We finally thank the anonymous reviewers for their time and effort giving us feedback on our paper.
|
2306.11706 | RoboCat: A Self-Improving Generalist Agent for Robotic Manipulation | The ability to leverage heterogeneous robotic experience from different
robots and tasks to quickly master novel skills and embodiments has the
potential to transform robot learning. Inspired by recent advances in
foundation models for vision and language, we propose a multi-embodiment,
multi-task generalist agent for robotic manipulation. This agent, named
RoboCat, is a visual goal-conditioned decision transformer capable of consuming
action-labelled visual experience. This data spans a large repertoire of motor
control skills from simulated and real robotic arms with varying sets of
observations and actions. With RoboCat, we demonstrate the ability to
generalise to new tasks and robots, both zero-shot as well as through
adaptation using only 100-1000 examples for the target task. We also show how a
trained model itself can be used to generate data for subsequent training
iterations, thus providing a basic building block for an autonomous improvement
loop. We investigate the agent's capabilities, with large-scale evaluations
both in simulation and on three different real robot embodiments. We find that
as we grow and diversify its training data, RoboCat not only shows signs of
cross-task transfer, but also becomes more efficient at adapting to new tasks. | Konstantinos Bousmalis, Giulia Vezzani, Dushyant Rao, Coline Devin, Alex X. Lee, Maria Bauza, Todor Davchev, Yuxiang Zhou, Agrim Gupta, Akhil Raju, Antoine Laurens, Claudio Fantacci, Valentin Dalibard, Martina Zambelli, Murilo Martins, Rugile Pevceviciute, Michiel Blokzijl, Misha Denil, Nathan Batchelor, Thomas Lampe, Emilio Parisotto, Konrad Żołna, Scott Reed, Sergio Gómez Colmenarejo, Jon Scholz, Abbas Abdolmaleki, Oliver Groth, Jean-Baptiste Regli, Oleg Sushkov, Tom Rothörl, José Enrique Chen, Yusuf Aytar, Dave Barker, Joy Ortiz, Martin Riedmiller, Jost Tobias Springenberg, Raia Hadsell, Francesco Nori, Nicolas Heess | 2023-06-20T17:35:20Z | http://arxiv.org/abs/2306.11706v2 | # RoboCat: A Self-Improving Foundation Agent for Robotic Manipulation
###### Abstract
The ability to leverage heterogeneous robotic experience from different robots and tasks to quickly master novel skills and embodiments has the potential to transform robot learning. Inspired by recent advances in foundation models for vision and language, we propose a foundation agent for robotic manipulation. This agent, named _RoboCat_, is a visual goal-conditioned decision transformer capable of consuming multi-embodiment action-labelled visual experience. This data spans a large repertoire of motor control skills from simulated and real robotic arms with varying sets of observations and actions. With RoboCat, we demonstrate the ability to generalise to new tasks and robots, both zero-shot as well as through adaptation using only 100-1000 examples for the target task. We also show how a trained model itself can be used to generate data for subsequent training iterations, thus providing a basic building block for an autonomous improvement loop. We investigate the agent's capabilities, with large-scale evaluations both in simulation and on three different real robot embodiments. We find that as we grow and diversify its training data, RoboCat not only shows signs of cross-task transfer, but also becomes more efficient at adapting to new tasks.
## 1 Introduction
Much of real-world robot learning research has focused on developing agents for one task at a time. This is because, even though the cost of task design and robot experience generation is very high, leveraging heterogeneous robot data at scale has remained a challenging problem in the field of robotics.
The advent of high-capacity models, such as the transformer model (Vaswani et al., 2017), has enabled recent successes for multi-task learning in language and vision. These developments have led to progress in modelling multi-modal behaviour and predicting actions with a generalist agent, Gato (Reed et al., 2022), being able to play Atari, caption images, chat, and show some, albeit limited, robotic manipulation capabilities. Specifically in robotics, recent works (Brohan et al., 2022; Driess et al., 2023) have focused on bridging the gap between large pretrained language models and vision-based manipulation by training language-conditioned transformer policies to solve multiple simple, visually-diverse tasks that have the same observation and action spaces.
In this work, we propose RoboCat, a self-improving foundation agent for vision-based robotic manipulation, instantiated as a large transformer sequence model. Inspired by foundation models in other domains (Bommasani et al., 2022), we ultimately aim for a foundation agent for manipulation to be a multi-embodiment agent trained on a large set of robotic episodic experience that enables it to quickly adapt, via fine-tuning, to a broad set of new downstream tasks. As a step towards this, we trained RoboCat on a very large dataset of precise and dexterous vision-based tasks performed with embodiments that have different degrees of freedom, various observation and action specifications, and operate at different control frequencies. Our agent is able to successfully adapt to new tasks and robots via
fine-tuning on a small dataset of new episodic experience of between 100 and 1000 demonstrations, significantly reducing the cost of acquiring new skills and onboarding new embodiments. We further use the fine-tuned RoboCat models to gather additional data that is later added to train new iterations of our agent. This _self-improvement_ process, illustrated in Figure 1, makes for a more capable agent, and improves its cross-task transfer and fine-tuning capabilities to even more tasks.
RoboCat is based on Gato and a VQ-GAN encoder (Esser et al., 2021), which is pretrained on a broad set of images and enables fast iteration. We specify tasks via visual goal-conditioning, which has the desirable property that any image in a trajectory can be labelled as a valid "hindsight goal" (Andrychowicz et al., 2017) for all time steps leading up to it. This means that hindsight goals in existing data can be extracted without additional human supervision and that even suboptimal data collected by the agent can be incorporated back into the training set for self-improvement. Additionally, visual goals provide an intuitive interface for indicating to the robot the task that it should perform.
Our main contributions in this work are outlined below: _(1)_ we demonstrate, for the first time, that a large transformer sequence model can solve a large set of dexterous tasks on multiple _real_ robotic embodiments with differing observation and action specifications; _(2)_ we investigate RoboCat's capabilities in adapting to unseen tasks, with just a small dataset of expert demonstrations, lowering the bar of learning a new skill, compared to baselines; _(3)_ we show that it is possible to incorporate these skills back to the generalist with a simple but effective self-improvement process; and _(4)_ we show that by scaling and broadening the training data, RoboCat performs better on training tasks and is more efficient at fine-tuning.
The rest of the paper is structured as follows. We first describe RoboCat and the self-improvement loop in Section 2. We introduce the embodiments, tasks, and object sets that we have used in this work in Section 3. We describe our experimental setup for both training and evaluation in Section 4, before we present our extensive experiments to support our claims in Section 5. We finally discuss our work in the context of related work in Section 6, and discuss RoboCat's potential avenues for future work in Section 7.
## 2 RoboCat
We introduce RoboCat, a self-improving foundation agent for robotic manipulation that can perform multiple tasks and control multiple embodiments in simulation and the real world. In this section, we describe each phase of the RoboCat training process, summarised in Figure 1.
Figure 1: **The self-improvement process.** RoboCat is a multi-task, multi-embodiment visual goal-conditioned agent that can iteratively self-improve. Given a version of this generalist agent, it can adapt to new tasks with 100–1000 demonstrations, and be deployed to generate much more data for a given task. The resulting trajectories are then added to the training dataset for the next iteration of RoboCat, increasing the generalist’s repertoire of skills and improving performance across tasks.
### Training and task specification
We consider vision-based tabletop object manipulation tasks. Each task is defined by its (uncountably infinite) set of valid start and end states, and an episode is evaluated for task success by checking if the last state is in the set of valid end states. For example, for the task "Insert the apple into the bowl", the set of valid start states is all states with an apple outside a bowl, and the set of valid end states is all states with the apple inside the bowl. We exclusively consider tasks where success can be determined from only the end state.
We want to train an agent that performs a task when conditioned on an image of a valid end state of that task. Our goal-conditioned agent is represented by a policy \(\pi(a_{t}|o_{t},g_{t})\), where \(a_{t}\) denotes the action vector, \(o_{t}=(x_{t},I_{t})\) are the proprioceptive observation (e.g. robot joint positions and velocities) and image observation, respectively, and \(g_{t}\) is an image of the desired task we want to solve. Note that the goal image is an example of the task being solved and it does not indicate a specific state that the agent should reach. The goal image effectively indicates the task that the agent should do and the agent is only evaluated for task success.
We model \(\pi(a_{t}|o_{t},g_{t})\) via an autoregressive transformer model (Vaswani et al., 2017),
\[\pi(a_{t}|o_{t},g_{t})=P_{\theta}(a_{t}|x_{\le t},I_{\le t},g_{<t}), \tag{1}\]
where the subscripts \(\le t\) and \(<t\) denote, respectively, observations up to and including time step \(t\) and goal images prior to time step \(t\). Note that the dimensionality of the actions and proprioception observations varies across embodiments. Internally, the autoregressive model operates with tokenised inputs and outputs.
For training, we assume access to a dataset \(\mathcal{D}=\{\tau^{i}\}_{i=1}^{|\mathcal{D}|}\) of trajectories that are transformed into a dataset of tokenised trajectories \(\hat{\mathcal{D}}=\{\hat{\tau}^{i}\}_{i=1}^{|\mathcal{D}|}\). In addition, during tokenisation, the trajectories are augmented with goal images. Concretely, a tokenised trajectory \(\hat{\tau}\in\hat{\mathcal{D}}\) is represented as
\[\hat{\tau}=\left(x_{1}^{1:L},I_{1}^{1:M},g_{1}^{1:N},a_{1}^{1:Q},...,\right.\\ \left.x_{T}^{1:L},I_{T}^{1:M},g_{T}^{1:N},a_{T}^{1:Q},x_{T+1}^{1:L},I_{T+1}^{1:M},g_{T+1}^{1:N}\right), \tag{2}\]
where \(L,M,N,Q\) denote the number of tokens required to encode proprioceptive inputs, images, goals, and actions, respectively, and \(T\) is the number of transitions in the trajectory. Note that \(L\) and \(Q\) vary by embodiment. The goal observations \(g_{t}\) are fixed within a trajectory and repeated for each time step.
A natural choice for a goal image is a _hindsight goal_. Since, by definition, a trajectory always "succeeds" at reaching its own last image, we can use the last image of the same episode as the goal image, \(g_{t}^{i}=I_{T+1}^{i}\), for any trajectory \(\tau^{i}\). Alternatively, we can also consider goal selection using a _semantically-equivalent goal_. That is, for any successful episode \(\tau^{i}\), we can select the last image of a different episode that succeeded at the same task, \(g_{t}^{i}=I_{T+1}^{j}\), where \(\tau^{j}\) is another successful episode from the dataset \(\mathcal{D}\), as measured by a success detector or reward function for a given task. We train with both sources of goals for successful episodes, and use only hindsight goals for unsuccessful episodes. Details on how we weight the different tasks and goal sources are available in Appendix E.2.
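A minimal sketch of this goal relabelling is given below; the dictionary-based trajectory layout, the field names, and the helper `task_success` are assumptions for illustration.

```python
import random

def relabel_with_goals(trajectories, task_success):
    """Attach goal images to trajectories for training.

    Every trajectory gets its own last image as a hindsight goal; successful
    trajectories additionally get a semantically-equivalent goal, i.e. the
    last image of another trajectory that succeeded at the same task.
    """
    successes = {}                         # task -> final images of successful episodes
    for traj in trajectories:
        if task_success(traj):
            successes.setdefault(traj["task"], []).append(traj["images"][-1])

    relabelled = []
    for traj in trajectories:
        relabelled.append({**traj, "goal": traj["images"][-1]})      # hindsight goal
        if task_success(traj) and len(successes[traj["task"]]) > 1:
            other = random.choice([g for g in successes[traj["task"]]
                                   if g is not traj["images"][-1]])
            relabelled.append({**traj, "goal": other})               # semantically-equivalent goal
    return relabelled
```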
#### 2.1.1 Architecture and pretraining
Our model is based on the transformer architecture described in Gato (Reed et al., 2022). For tokenisation of proprioceptive observations and agent actions, we follow the same procedure as in Reed et al. (2022). For image tokenisation, however, we instead use a pretrained and frozen VQ-GAN (Esser et al., 2021), which allows for faster training of the generalist, as the image can be tokenised once in advance. The VQ-GAN, similarly to a VQ-VAE (van den Oord et al., 2017), consists of an encoder that encodes an input image into a series of latent vectors and a decoder (which we do not use after training). The encoded vectors are discretised via a nearest neighbour lookup in a codebook of quantised embeddings. Each image is tokenised into an \(8\times 8\) grid of tokens.
We pretrain our VQ-GAN encoder on a _diverse_ collection of images as we find this improves generalisation. Specifically, the dataset we train our encoder on consists of images from ImageNet (Deng et al., 2009), images from the control tasks in Reed et al. (2022) including Atari and MuJoCo (Todorov et al., 2012) locomotion tasks, as well as images from our visual robotic manipulation dataset. These datasets, training details, as well as extensive ablations that informed our design choices can be found in Appendix D.
To train the agent model we use a dataset \(\hat{\mathcal{D}}\) containing the joint collection of data from all tasks and utilise a standard token prediction loss. While Gato only predicted actions, we find that, when a VQ-GAN is used, performance is improved by additionally training for predicting future image tokens as produced by the VQ-GAN encoder (Appendix D.3). Specifically, we predict image tokens \(k=5\) time steps into the future as images one step apart can look very similar.
Combining the action and observation prediction losses, at the token level, we obtain the following objective to train the model \(P_{\theta}\):
\[\mathcal{L}(\theta;\mathcal{D})=\mathop{\mathbb{E}}_{\hat{\tau}\sim\hat{\mathcal{D}}}\left[\sum_{t=1}^{T}\sum_{q=1}^{Q}\log P_{\theta}(a_{t}^{q}|x_{\le t}^{1:L},I_{\le t}^{1:M},g_{<t}^{1:N})\right.\\ +\left.\sum_{t=1}^{T+1-k}\sum_{m=1}^{M}\log P_{\theta}(I_{t+k}^{m}|x_{\le t}^{1:L},I_{\le t}^{1:m},g_{<t}^{1:N})\right]. \tag{3}\]
Note that, in practice, instead of conditioning on the full history of observations (as indicated by the subscripts \(\le t\) and \(<t\)), we use a fixed total token length of 1024 for the model (which corresponds to roughly 3 time steps of history).
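The structure of the objective in Eq. 3 can be sketched at the token level as follows. The per-step token layout and the helper `log_prob(token, history)` are assumptions, and the fixed 1024-token context window and within-step autoregressive conditioning are omitted for brevity.

```python
def robocat_loss(log_prob, steps, k=5):
    """Token-level sketch of the objective in Eq. 3 for one trajectory.

    `steps[t]` holds token lists for time step t ('proprio', 'image', 'goal'
    and, except for the final step, 'action'); `log_prob(token, history)`
    stands in for the transformer's log-probability of a target token.
    Both the layout and the helper are illustrative assumptions.
    """
    loss = 0.0
    history = []
    for t, step in enumerate(steps):
        history = history + step["proprio"] + step["image"] + step["goal"]
        for a in step.get("action", []):        # action-prediction term
            loss += log_prob(a, history)
        if t + k < len(steps):                  # k-step-ahead image-prediction term
            for tok in steps[t + k]["image"]:
                loss += log_prob(tok, history)
        history = history + step.get("action", [])
    return -loss                                # negative log-likelihood to minimise
```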
#### 2.1.2 Fine-tuning and self-improvement
A key contribution of our work is our study into how RoboCat agents can be fine-tuned and self-improved given a relatively small number of demonstrations. This capability is especially crucial in a real robotics context--unlike in simulation, data is bottlenecked by real-time operation per robot, and high-quality supervision is scarce.
**Fine-tuning** To perform fine-tuning and self-improvement we first collect 100-1000 demonstrations per task via teleoperation. The generalist RoboCat agent is fine-tuned on these demonstrations, which are tokenised and augmented with goal images in the same way as for the generalist training. Formally, we perform the optimisation \(\theta_{\text{ft}}^{y}=\arg\max_{\theta}\mathcal{L}(\theta;\mathcal{D}_{ \text{demo}}^{y})\) where \(\mathcal{D}_{\text{demo}}^{y}\) is the demonstration data for the task \(y\) that we want to fine-tune on, and \(\theta\) is initialised with the weights from pretraining (Section 2.1.1). At the end of this fine-tuning step, we obtain an agent that is specialised to the new task but that may lose performance on the original training tasks.
**Self-improvement** In order to integrate new tasks into a new generalist, we deploy the fine-tuned policies \(P_{\theta_{\text{ft}}^{y}}\) to autonomously collect a large dataset of additional trajectories for each of the self-improvement tasks \(y\in\mathcal{Y}\). After data collection, we perform hindsight goal relabelling as described in Section 2.1. Note that, when using semantically-equivalent goals, we require a reward function to determine the successful trajectories for a given task. For this purpose, we employ learned reward models as described in the next section. The resulting relabelled trajectories form a self-improvement dataset \(\mathcal{D}_{\text{imp}}^{y}\) for the task we want to improve. Finally, using this data, we can construct a new training dataset for training the next iteration of our generalist RoboCat agent. We combine all trajectories with the previous data to form the next dataset,
\[\mathcal{D}_{\text{next}}=\mathcal{D}\cup\bigcup_{y\in\mathcal{Y}}\left( \mathcal{D}_{\text{demo}}^{y}\cup\mathcal{D}_{\text{imp}}^{y}\right), \tag{4}\]
which is then used to train a new VQ-GAN model, after which we continue with the next iteration of training a new generalist \(\theta_{\text{next}}=\arg\max_{\theta}\mathcal{L}(\theta;\mathcal{D}_{\text{ next}})\).
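The overall cycle can be summarised by the sketch below; `ops` bundles stand-in callables for the steps described above (demonstration collection, fine-tuning, robot deployment, reward models, and tokeniser/agent training), `relabel_with_goals` refers to the earlier sketch, and all names are placeholders rather than an actual API.

```python
def self_improvement_round(generalist, dataset, new_tasks, ops):
    """One iteration of the self-improvement loop in Figure 1 (all helpers
    in `ops` are placeholders for the procedures described in the text)."""
    for task in new_tasks:
        demos = ops["collect_demos"](task)                    # 100-1000 teleoperated episodes
        specialist = ops["finetune"](generalist, demos)       # fine-tuned policy for the task
        rollouts = ops["deploy"](specialist, task)            # autonomous data collection
        improved = relabel_with_goals(rollouts, ops["reward_model"](task))
        dataset = dataset + demos + improved                  # D_next, Eq. 4
    encoder = ops["train_vqgan"](dataset)                     # retrain the image tokeniser
    return ops["train_generalist"](dataset, encoder)          # next RoboCat generalist
```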
### Real-world deployment
In order to integrate the new task into a new generalist, we deploy the fine-tuned policy on a real robot to collect a large dataset on the new task using images from the demonstrations as goal images. Collecting real-world data autonomously presents two challenges: success classification and task resets.
**Success detection via reward models** While final evaluation numbers are counted manually for accuracy, automated success detection is necessary for the hindsight goal relabelling of semantically-equivalent goals described above during training. In addition, success detection is necessary for determining when a reset is needed. To this end, we train vision-based reward models to detect when a task has succeeded. We first
collect human demonstrations and data from policies trained to perform the task (e.g. evaluation episodes of a RoboCat policy). These episodes are annotated via a crowd-sourcing interface, where annotators mark the time step after which the task is solved in each episode (if at all), resulting in binary annotations. These are then used to train a binary classifier that can be used to detect task success from image observations at any given time step.
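A per-frame success classifier of this kind can be sketched as below; the ResNet backbone, the training loop, and the decision threshold are illustrative choices rather than the architecture used in this work.

```python
import torch
from torch import nn
from torchvision.models import resnet18

# Per-frame binary success classifier; backbone and hyperparameters are
# illustrative choices, not those of the reward models used in this work.
reward_model = resnet18(num_classes=1)
optimiser = torch.optim.Adam(reward_model.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(frames, labels):
    """frames: (B, 3, H, W) float tensor; labels: (B,) binary annotations
    marking whether the task is solved in (or after) each frame."""
    logits = reward_model(frames).squeeze(-1)
    loss = bce(logits, labels.float())
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

def is_success(frame, threshold=0.5):
    """Binary success prediction for a single image observation."""
    reward_model.eval()
    with torch.no_grad():
        prob = torch.sigmoid(reward_model(frame.unsqueeze(0))).item()
    reward_model.train()
    return prob > threshold
```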
**Autonomous resets with policy pools** Resetting the environment for a single task requires bringing the state from the end state back into the set of valid start states for that task. However, manually programming such reset routines is a highly non-trivial endeavour (in many cases performing a reset is almost as complicated as solving the task itself), leaving us with a problem for autonomous data collection. We solve this issue by observing that the set of end states for some tasks overlap with the set of start states of other tasks. Thus we can "re-use" policies trained for one task as reset mechanisms for another task whose valid start states overlap with the first task's end states. We implement an autonomous reset mechanism based on this observation that we refer to as a _policy pool_. A policy pool is simply a collection of policies (or policies implicitly defined by a pool of goal images) with overlapping start and end states. In each episode, we then pick a policy from this pool to be run next and record its trajectory and success. By pooling multiple policies in this way, we can get automated resets, increase the robot utilisation (by reducing the need for explicit human resets) and increase the diversity of initial conditions for evaluation and data collection. We utilise two types of policy pools in our evaluations: _stateless_ policy pools, in which the policies are executed in some order regardless of the state of the environment (e.g. for lifting tasks); and a _state-based_ policy pool, which samples the next policy to execute based on the state of the environment (e.g. performing a remove task when the initial state corresponds to a successful insertion). In the latter case, the trained reward models are used to evaluate the state of the tabletop and determine which policies are eligible for next execution. More details are provided in Appendix F.2.
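A state-based policy pool can be sketched as follows; the reward-model-based precondition predicates and the uniform sampling over eligible tasks are assumptions for illustration.

```python
import random

class StateBasedPolicyPool:
    """Choose the next policy to run based on the current tabletop state.

    `policies` maps task names to goal-conditioned policies, and
    `preconditions` maps task names to predicates (e.g. built from the
    trained reward models) that check whether a task's valid start states
    currently hold; both interfaces are illustrative assumptions.
    """

    def __init__(self, policies, preconditions):
        self.policies = policies
        self.preconditions = preconditions

    def next_policy(self, observation):
        eligible = [task for task, ok in self.preconditions.items() if ok(observation)]
        if not eligible:
            return None                   # nothing applicable: fall back to a human reset
        task = random.choice(eligible)    # e.g. a remove task after a successful insertion
        return task, self.policies[task]
```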
## 3 Tasks and Data
One of the main contributions of this work is to demonstrate that RoboCat can learn diverse and dexterous behaviours to solve a large set of tasks. The tasks we use require fine motor skills and hand-eye coordination, and the understanding of complex affordances and multi-object interactions. Additional diversity in the data is obtained through the use of multiple simulated and real embodiments and different approaches to data generation: RL-trained expert trajectories, human-teleoperated demonstrations,
Figure 2: **RoboCat supports multiple robotic embodiments and control modes.** These are all the different embodiments RoboCat is tested on, and the dimensionality of the action it needs to output for each. All robot arms have a Robotiq parallel gripper attached to them, with the exception of the KUKA arm which has a proprietary three-finger hand. Unlike the Panda and Sawyer embodiments, the KUKA embodiment was not seen during training and is only used during fine-tuning.
as well as self-generated data from RoboCat (see Section 2.1.2). In this section, we provide an overview of the embodiments, object sets, tasks, and datasets that we refer to in this paper.
### Embodiments
The embodiments used in this work, shown in Figure 2, are all in a standardised cage (see Lee et al. (2021)), which contains a "basket" that defines the robot arm's workspace. RoboCat was trained with data from Rethink Sawyer arms controlled with 7-DoF (simulation) and 5-DoF (real), and Franka Panda robot arms controlled with 7-DoF (simulation and real), all fitted with a Robotiq parallel gripper. RoboCat is also able to control KUKA 14-DoF arms, which are fitted with a new, custom-made, three-finger robot hand1--an embodiment that was only seen during the fine-tuning phase. In total, we used 36 real robots in this work: 15 Panda, 17 Sawyer, and 4 KUKA arms.
Footnote 1: Details of this robot hand will be released in the near future.
The simulated Panda and Sawyer embodiments are analogous to the real ones, though they are only coarsely aligned. We did not perform any careful system identification and the images rendered from the simulation were not visually realistic. We randomised the physics parameters in simulation but we did not randomise the visual appearance of the scene. More details are available in Appendix C.
### Object sets
We use different object sets and a total of 134 real objects to enable a variety of complex behaviours and affordances (see Figure 3). The first two sets of objects are 3D-printed and have been designed to systematically study types of robotic manipulation that involve multi-object interaction, specifically, structure-building (RGB objects) and insertion (NIST-i gears). We use a subset of these (123 objects) in simulation. The other real-world sets include store-bought objects.
**RGB objects** These 116 objects with parametrically defined shapes (only a subset shown in Figure 3(a)) were introduced as a benchmark (Lee et al., 2021) to systematically study the physical understanding of multi-object interactions in the context of stacking: To solve the benchmark an agent needs to understand which shapes in which poses can be reliably stacked on top of each other. We use them here to additionally study related structure-building tasks. The basket always contains a triplet of these objects, with respective colours red, green, and blue.
**NIST-i gears and 3-peg base** This set of objects is first introduced in this work to aid a systematic study of the insertion affordance. Inspired by the NIST benchmark for robotic manipulation (Kimble et al., 2020), we designed three gears, different in sizes (small, medium, large), which are to be used in conjunction with a 3-peg base. The pegs are spaced such that successful meshing requires a specific allocation of the gears to the pegs (see Figure 3(b)). In the real world, the shafts are metallic and the base is not fixed to the basket, which significantly increases the difficulty of the task. In simulation, the base is fixed. In both cases, there is a 1 mm tolerance when inserting a gear. See Appendix B.1.4 for more details.
**YCB fruits, YCB-i vegetables, and bowl** In this work, we use a subset of the YCB object set (Calli et al., 2017), namely the fruit (apple, banana, peach, lemon, strawberry), shown in Figure 3(c). The YCB-i vegetables (carrot, cucumber, pepper,
Figure 3: **The real-world object sets used by RoboCat. The first two object sets are used to systematically study structure-building and insertion affordances, respectively. The other object sets are store-bought objects that add visual diversity and challenge the agent with various lifting, insertion, and removal tasks.**
potato) and bowl, also shown in Figure 3(c), are inspired by, but not part of, the official YCB benchmark. This collection of textured and geometrically different objects introduces additional visual diversity and allows us to benchmark RoboCat on tasks with everyday objects.
**Shape-matching objects and base** These wooden objects are parts of a real shape-matching cube, used by toddlers to practice fine-motor control skills and 3D shape understanding. We used three shapes (circle, pentagon, square) and the shape-matching cube lid, shown in Figure 3(d). The lid is used as a base with matching holes that can be used for insertion and removal tasks. These objects are used to further study the insertion affordance. Unlike the NIST-i gears, this toy often requires difficult reorientations to get the objects in the correct orientation for insertion.
### Task families
We consider a total of 253 different task variations which we group into task families. We define a _task family_ to be a group of tasks that utilise the same skill or sequence of skills. For example, lifting the large NIST-i gear and lifting the YCB apple are two different task variations from the same task family. We provide a complete list of the task families in Table 1.
The task families **stacking**, **tower building**, **pyramid building**, and **inverted pyramid building** consist of building structures with either RGB objects or gears. They differ in difficulty, but in all cases require dexterous and precise movements to ensure that the structure remains stable after completion. The **lifting** task family consists of picking up a specific object in a basket with multiple objects. The objects are either fruits, vegetables, or gears. The motivation behind the lifting tasks is to study goal understanding and generalisation to new embodiments and objects. The **insertion** and **removal** task families come in three flavours, either involving fruits and a bowl, gears and a 3-peg base, or shape-matching objects and a base. We treat them as separate task families since they require different skills. The latter two require precise positioning into low-tolerance pegs or base, and shape-matching requires shape understanding and often reorientation. The bowl and bases can freely move in the real world, which substantially increases the complexity of those tasks. For all insertion and removal tasks, we use no resets other than the learnt respective removal and insertion tasks.
Each task variation refers to the combination of a specific embodiment (e.g. simulated Sawyer vs real-world Panda), task family, object set (e.g. RGB triplet 1 vs NIST-i gears), and perceptual variation (e.g. stacking red on blue vs green on red objects). Example goal images corresponding to specific task variations are shown in Figure 4.
### Data sources
RoboCat is trained on both expert and non-expert data. Different subsets of the data are collected in different ways. We use three types of data generation: (i) data produced by specialist RL agents, particularly employed in simulation; (ii) human teleoperated expert data, mostly used for the physical world tasks; and (iii) self-generated data. The primary difference between the two expert types of trajectories is that agent data provides fairly smooth and efficient trajectories due to the way the RL agent acts in the world, while teleoperated data often includes pauses as teleoperators employ behaviours similar to a bang-bang controller. The self-generated data is obtained by
Figure 4: **Example goal images. These images correspond to a subset of the embodiments, task families, and object sets used by RoboCat. The first two images correspond to simulated embodiments and the remaining images to real-world embodiments. See Figure 12 for more examples.**
running extensive evaluations whenever a new version of RoboCat is available: the data collected this way is saved and then reused for the next RoboCat training. This data is collected from RoboCat agents fine-tuned on teleoperated expert data. Therefore, the self-generated data resemble the teleoperation behaviours. See Appendix B.2 for further details about the nature of the data.
## 4 Experimental Setup
### RoboCat training tasks
RoboCat is trained on 240 tasks and fine-tuned on a further 13 tasks, for a total of 253 tasks. This includes data from 2 simulated and 3 real-world embodiments, 5 simulated and 11 real task families, and 123 simulated and 134 real objects. Table 1 summarises the tasks, organised separately for **training** and **fine-tuning** tasks.
### Training and fine-tuning
We train our generalists following the procedure outlined in Reed et al. (2022) except for differences in the encoder where applicable. Most of the experimental results are based on models with a 1.18B-parameter decoder-only transformer (Vaswani et al., 2017) with 24 layers, an embedding size of 2048, and a post-attention feedforward hidden size of 8196. To allow for more extensive experimentation, we use smaller models with 364M parameters for some ablations.
We fine-tune our generalists on a set of diverse real tasks using a limited number of human teleoperation demonstrations, between 100 and 1000 demonstrations for each task.
### Evaluation
For each of the simulated and real tasks, we evaluate each model by averaging over 100 episodes (or more, if specified), using a different goal image for each episode as well as randomised initial states of the environment. The episode length and control frequency vary from task to task, always matching the length of the expert data used for training. The control frequency of training data is not provided to the agent during training, since it may not be known or readily available. Table 14 in Appendix F reports the episode length and control frequency used for each task family in simulation and real.
When fine-tuning a generalist to a specific real-world task, it can be difficult to determine the optimal number of fine-tuning steps, since there is no reliable offline measure of success. To address this in a systematic and reproducible way, we employ the following evaluation protocol for each task: we first evaluate the checkpoint every 5000 steps for 25 episodes each to assess the best performing checkpoint, and then evaluate that checkpoint for 100 episodes to measure the final performance.
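A compact sketch of that protocol is given below; `evaluate(checkpoint, n_episodes)` stands in for running goal-conditioned evaluation episodes on the robot and returning a success rate, and is an assumed helper rather than part of the paper's tooling.

```python
def select_and_report(checkpoints, evaluate, screen_episodes=25, final_episodes=100):
    """Two-stage evaluation of fine-tuned checkpoints (sketch).

    Stage 1 screens every periodic checkpoint (saved every 5000 steps) with a
    small number of episodes; stage 2 re-evaluates only the best one with the
    full number of episodes used for reported results.
    """
    screening = {ckpt: evaluate(ckpt, screen_episodes) for ckpt in checkpoints}
    best = max(screening, key=screening.get)
    return best, evaluate(best, final_episodes)
```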
### Baselines
In order to contextualise the difficulty of the tasks, we compare RoboCat to high capacity, pretrained vision foundation models (VFMs). These present an alternative approach to training robot policies: instead of training a single agent on a diverse set of tasks, we can take a readily-available powerful vision model and fine-tune it on each task separately. We trained and evaluated 59 different VFM baselines (see Appendix G.2 for the complete list) on a subset of tasks in simulation and selected the best two as the main baselines for these experiments: the 438M parameter NFNetf6 model (Brock et al., 2021) pretrained with CLIP (Radford et al., 2021) and the 197M parameter Swin-L model (Liu et al., 2021), also pretrained with CLIP. For each comparison, the VFM models are trained with the same behavioural cloning loss and the same successful episodes that the RoboCat model uses for a given task variant.
## 5 Experiments
The evaluations and comparisons we present in this section investigate the following questions:
1. Can RoboCat learn from heterogeneous data and solve a large set of tasks, specified with visual goals and requiring dexterity on multiple physical and simulated embodiments? (Section 5.1)
2. Can RoboCat adapt, with a small number of demonstrations, to challenging new scenarios such as unseen tasks, new objects, and
new embodiments with unseen morphology and action spaces? (Section 5.1)
3. Does RoboCat exhibit cross-task transfer and generalisation to held-out tasks? (Section 5.2)
4. Can RoboCat self-improve by autonomously collecting data and integrating that new data into the next RoboCat iteration? (Section 5.3)
### Overall RoboCat performance
We evaluated RoboCat over all the training tasks and we report task success rates averaged within each embodiment, task family, and object set, in Table 1 (see Section G.1 for per-task success rates). The tasks are broadly categorised into training (which includes the tasks from the self-improvement process) and fine-tuning tasks. The RoboCat generalist agent was trained on all of these training tasks and then evaluated on a total of 141 training task variations. We demonstrate that a _single_ RoboCat agent is able to perform all of these tasks, which involve multiple embodiments--in simulation and the real world--and multiple task families and object sets.
For the fine-tuning tasks, the RoboCat generalist agent was fine-tuned to individual task variations and then each fine-tuned agent was evaluated on its respective task. We fine-tuned on either 500 or 1000 demonstrations and report results for both cases also in Table 1. RoboCat is able to fine-tune to tasks that not only include previously unseen task families (e.g. fruit insertion into a bowl), but also new object sets (e.g. shape-matching set) and a previously unseen embodiment (real KUKA 14-DoF robot).
In Figure 5, we compare RoboCat to visual foundation model (VFM) baselines trained on each task independently. In simulation, we only ran these baselines for one task from each task family due to the large number of task variations in simulation, whereas for the real-world tasks, we ran them on all of the task variations. The simulation results in Figure 5(a) show that the VFM agents are competitive on the Panda stacking task, but are outperformed by RoboCat on the other simulated building tasks. As shown in Figure 5(b), this is even more apparent in the real-world lifting, insertion, and removal tasks, where the VFM baselines are significantly outperformed by RoboCat.
For fine-tuning experiments, where only up to 1000 demonstrations are available per task, we compare fine-tuned RoboCat agents to VFM agents that are trained with only 1000 demonstrations. The results in Figure 6 show that the VFM baselines perform very poorly whereas the fine-tuned RoboCat agents perform well even when only using 500 demonstrations. Since the VFM agents are trained for single tasks, they are unable to leverage the large amounts of existing training data as done by RoboCat.
Lastly, in Table 2, we compare to previously reported performance on the real Sawyer 5-DoF stacking tasks, which are part of the RGB-Stacking Benchmark (Lee et al., 2021). This allows us to compare RoboCat performance on these tasks with vision-based BC-IMP specialists (Lee et al., 2021), as well as the Gato generalist (Reed et al., 2022). The latter allows us to compare the benefit of training on diverse robotic manipulation data rather than training on tasks from vastly different domains such as Atari or VQA. Although the relative success rates vary per object triplet, RoboCat is comparable to prior methods on average on this benchmark, despite being able to solve many other manipulation tasks.
We demonstrate that RoboCat, by using visual goals, is able to learn from heterogeneous data and perform a large set of tasks on multiple embodiments, and can quickly adapt--using a limited number of demonstrations--to unseen tasks, new object sets, and new embodiments.
### Generalisation and adaptation
In order to analyse how RoboCat agents generalise and adapt, we trained a separate model of the same size but on only a subset of structure-building tasks (stacking, tower, and pyramid) with specific objects and task variations explicitly held out. The training and held-out tasks are listed in Figure 7(a). We refer to this limited-dataset model as _RoboCat-lim_. This enables us to investigate generalisation and adaptation of this agent along specific axes (see Figure 7(b)). Furthermore, since the training tasks for RoboCat-lim are a subset of those used by the final RoboCat model, we are able to evaluate the effect of training on more tasks.
First, we measure how RoboCat-lim generalises 0-shot and k-shot to the held-out tasks, summarised in Figure 7(c). In simulation, RoboCat-lim generalises 0-shot to a held-out object set on the Sawyer (third plot from the left) and the blue-on-green stacking task variant on the Panda (second plot), but does not generalise to that same task variant on the Sawyer embodiment (first plot). However, in both simulation and the real world, the model is effective at fine-tuning to this task variant with as little as 100 demonstrations. On the real-world blue-on-green stacking variant (fourth plot), RoboCat-lim achieves 88% when fine-tuning on 500 demonstrations, compared to the 60% success reported for Gato on the same data2. The remaining cases show RoboCat's ability to adapt to real-world variants of previously seen simulation tasks (both stacking and tower building), the challenging inverted pyramid building task family (for which even teleoperator success is only 52%), and to the real-world dexterous KUKA embodiment with nearly 80% success. Overall, RoboCat-lim adapts with only 100-500 episodes in almost all settings, including different data sources (agent vs demonstrations; see Table 3), and an entirely unseen embodiment with twice as many degrees of freedom as seen in training.
Figure 7: **Generalisation and adaptation study for RoboCat-lim.** RoboCat-lim can be effectively fine-tuned, given a limited number of demonstrations, to tasks that are novel in terms of objects or task variants, and even to a completely new robot embodiment.
Footnote 2: The Gato model was fine-tuned with additional simulation episodes of the task, but was not originally trained with the object set in this task.
We also compare RoboCat-lim to the VFM-based agents for few-shot fine-tuning on a couple of individual tasks in simulation. The results in Figure 8 show that our model can learn the tasks with significantly less data than the baselines.
Finally, we measure how much RoboCat benefits from its diverse training set, which includes all the tasks used for RoboCat-lim, the simulated structure building tasks held out from RoboCat-lim, all of the additional real-world data for the self-improvement tasks, and both simulated and real-world NIST-i gears tasks (lifting, insertion, and removal). In Figure 9(a), we compare RoboCat with RoboCat-lim specifically on the tasks that the limited model was trained on. Rather than its performance being negatively impacted due to the additional training tasks, RoboCat exhibits positive transfer across its training tasks and outperforms the more specialised RoboCat-lim. This trend of positive transfer also holds when adapting to new real-world tasks, e.g. as RoboCat was trained on real-world fruit and vegetable lifting data, it adapts better to the insertion and removal tasks with the fruits and bowl (Figure 9(b)).
### Self-improvement via RoboCat fine-tuning and self-generation of data
In this section, we demonstrate the key ability of RoboCat to perform self-improvement. That is, to fine-tune to a new task with a limited number of demonstrations, self-generate a larger amount of experience, and retrain a more capable generalist with this additional data. This represents a first step towards a foundation agent which can iteratively learn new tasks.
To perform self-improvement, we fine-tuned older RoboCat-lim equivalent models to a number of real-world tasks using human-teleoperated data (specifically, blue-on-green stacking, tower and inverted pyramid building, and vegetable and fruit lifting tasks), and then generated large amounts of data autonomously with these policies. All of this data was part of the dataset used to train the main generalist shown in Section 5.1.

Table 3: **RoboCat-lim fine-tuning using different sources of data.** Despite RoboCat-lim only being trained on agent data originally, the model can be fine-tuned with either agent or human demonstration data. The 0-shot success rate for this task is 0%. This task is the held-out real-world perceptual variant of blue-on-green stacking.

| Data Source | Task Success (100 episodes) | Task Success (500 episodes) |
| --- | --- | --- |
| Expert agent data | 63% | 84% |
| Demonstration data | 82% | 88% |

Figure 8: **RoboCat-lim 0-shot and k-shot fine-tuning compared to VFM baselines.** RoboCat-lim performs better than the baselines given the same number of episodes on a new task, even for a task in which RoboCat-lim gets 0-shot 0% success. This shows that the model can quickly adapt by reusing information from the tasks and embodiments seen during training. The number of fine-tuning episodes is shown in parentheses for each method. The results here are for single task variants, unlike the results in Figure 7(c).
We first perform a smaller experiment with a subset of the tasks, to provide a proof-of-concept of self-improvement, and carefully isolate and evaluate the contribution of self-generated data alone. We train a smaller 364M model on the structure-building tasks (i.e. those used for RoboCat-lim) and 500 demonstrations from only a few self-improvement tasks: fruit lifting (apple, banana, and peach) and blue-on-green Sawyer stacking. This represents a baseline of directly incorporating the few available demonstrations into the training data for the generalist. We also train a 364M "self-improved" model that additionally sees the self-generated data for these tasks. The results in Figure10 show that the self-improved agent outperforms the baseline agent in all four of these tasks. In other words, given demonstrations of a task, the self-improvement procedure (fine-tuning and self-generating additional data) is significantly better than using the demonstrations directly in training the generalist.
Next, we demonstrate self-improvement at scale: we incorporate self-generated data from numerous task-specific RoboCat-lim fine-tuned agents to yield a stronger generalist. This is the process by which we obtained the main RoboCat generalist presented in Section5.1.
Figure 11 shows the performance of these self-data-generating agents, compared with the performance of the full RoboCat generalist. For most cases, the RoboCat generalist performance is similar to or even better than that of the agents generating the data. These results highlight the potential for RoboCat to self-improve and grow its multi-task capabilities, as we have also seen from other experiments. By self-generating data and incorporating additional data from a diverse set of tasks, the resulting agent has better generalisation and fine-tuning capabilities on a broader set of real-world tasks.
### Further ablations
We report a number of additional ablations and evaluations in the appendix. These include ablating the choices for VQ-GAN tokeniser and observation prediction (Appendix D.3), evaluations over many different vision model baselines (Appendix G.2) and ablations of the NIST-i environment (Appendix G.3).
Figure 9: **Positive transfer across tasks: RoboCat-lim vs RoboCat.** Training on more tasks (RoboCat) improves performance on the _limited training tasks_ compared to only training on these limited tasks (RoboCat-lim). In addition, RoboCat is better when fine-tuning to the insertion and removal tasks. The reported numbers are averages of task variants within each grouping.
## 6 Related Work
### Transformers for decision making
Transformers (Vaswani et al., 2017) have shown impressive results at scale in domains like vision (Dosovitskiy et al., 2020; He et al., 2022), language (Brown et al., 2020; Devlin et al., 2018; Vaswani et al., 2017) and speech (Radford et al., 2022). Inspired by these successes, earlier efforts to leverage transformers for decision making focused on improving their training stability for RL (Parisotto et al., 2020), using self-attention for improving relational reasoning (Zambaldi et al., 2018), one-shot imitation learning (Dasari and Gupta, 2021), fast adaptation to novel tasks (Ritter et al., 2020), on the fly adaptation in 3D environments (Team et al., 2023), 3D reasoning (Shridhar et al., 2023), and multi-embodiment continuous control (Gupta et al., 2022; Kurin et al., 2020). However, these works leverage the transformer architecture within the framework of standard RL and imitation algorithms. Recently, generative pretraining for sequence modeling has been extended to offline RL (Chen et al., 2021; Janner et al., 2021), where a transformer model is trained to autoregressively maximise the likelihood of trajectories in the offline dataset for specialist agents with low-dimensional states. Building on this, Jiang et al. (2022); Lee et al. (2022); Reed et al. (2022) train multi-task generalist agents with high-dimensional image observations. Our work is closely related to Gato (Reed et al., 2022) but differs in the variety and scale of robotic tasks _mastered_ by a single agent. We show positive transfer between tasks and fast adaptations to many real-world robot tasks.
### Visual pretraining for control
The use of pretrained visual representations presents a promising approach for efficient robot policy learning, requiring less robot-specific data. Early efforts focused on using supervised pretraining for navigation (Chen et al., 2020; Sax et al., 2018; Zhou et al., 2019) and manipulation (Chen et al., 2020; Yen-Chen et al., 2020; Zhou et al., 2019) domains. Building on the progress in self-supervised representation learning, multiple recent works have shown that frozen visual encoders, trained through self-supervision on internet-scale datasets, can enable sample-efficient behaviour cloning (Majumdar et al., 2023; Nair et al., 2022; Parisi et al., 2022; Radosavovic et al., 2023; Sharma et al., 2023) and on-policy reinforcement learning (Khandelwal et al., 2022; Majumdar et al., 2023; Xiao et al., 2022). In this work we use a frozen pre-trained VQ-GAN (Esser et al., 2021; van den Oord
Figure 11: RoboCat compared with the performance of the data-generating agents, or with the combined performance of these agents and the demonstrations, the latter of which is used for training RoboCat.
et al., 2017) trained on a diverse collection of images to speed up training time significantly, and combine the VQ-GAN tokens with future frame prediction (Gupta et al., 2023) for sample-efficient transfer learning. Concurrently, Kotar et al. (2023) also finds similar generalisation benefits of using the combination of VQ-GAN tokens and future frame prediction during policy learning for the navigation domain.
### Goal-conditioned policies
Goal-conditioned agents have long been of interest in policy learning (Kaelbling, 1993; Schaul et al., 2015). Hindsight goal relabelling is a popular method for annotating arbitrary trajectories with goals (Andrychowicz et al., 2017). Learning from visual goals is challenging as images contain a lot of information that may be unrelated to the desired goal-conditioned behaviour, such as lighting or positions of distractors (Pinto et al., 2018). As we are primarily concerned with goal images as task specification in a behaviour cloning setting, this work does not address the question of goal distance, goal generation, or exploration. Unlike Nair et al. (2018), we assume a dataset of goal images is available during evaluation and data collection, as we only deploy our goal-conditioned agent for data collection on tasks for which we had teleoperated episodes to learn from. Davchev et al. (2022) also utilised a dataset of goals, bootstrapped from demonstrations. However, they do not work with images. Similar to RoboCat, Groth et al. (2021) also instruct a behaviour-cloned policy with goal images but rely on explicit inductive biases in the network architecture to infer the task. Ghosh et al. (2019) propose iterated goal-conditioned learning as a form of reinforcement learning, which is similar to our self-improvement step.
### Generalist robotic agents
Recent works have looked at the problem of training generalist robot agents. RT-1 takes language instructions to perform a variety of object manipulation tasks (Brohan et al., 2022). While RT-1 trains on data from two different robots, they have the same action specification. PaLM-E demonstrates that large visual-question-answering can serve as planners for robotics tasks. Rather than directly controlling different robots, PaLM-E outputs language instructions (such as "Pick the green rice chip bag from the drawer.") to pretrained lower-level controllers (Driess et al., 2023).
In this work, we look to solve tasks in both simulation and the real-world, covering a wide set of behaviours and affordances, incorporating precision and dexterity, and embracing high-dimensional low-level control over multiple simulated and real embodiments. To our knowledge, RoboCat is the first work to natively support multiple real-world robotic embodiments with different observation and action specifications. We also demonstrate the ability to self-improve by fine-tuning to new tasks and self-generating data for use in retraining--a unique capability over all of the methods we surveyed. Finally, we focus on visual goal-conditioning in this work, but could also enable more flexible task specification in the future, such as language conditioning or full demonstrations; this is already facilitated by some of the other methods.
## 7 Summary and Future Work
In this report, we have presented RoboCat, a foundation agent for vision-based robotic manipulation. RoboCat is able to solve a large and diverse set of tasks specified via goal images; across different task families, embodiments, control modes, and objects; in both simulation and the real world, and from different sources of data. RoboCat is additionally able to quickly adapt, via fine-tuning on 100-1000 demonstrations, to a wide set of downstream tasks and across many different axes of generalisation. More importantly, we can use such adapted agents to generate data that can be added to RoboCat's training dataset for future iterations of the agent, a process we call self-improvement. We have thoroughly investigated our agent's capabilities both in simulation and the real world with tens of thousands of real evaluations on 36 real robots of 3 different types. We have shown that the cost of acquisition of new skills is dramatically lower compared to single-task baselines, even when those are based on visual foundation models. Finally, we have observed that by scaling and diversifying its training data we get a RoboCat agent that performs better on training tasks and adapts better to unseen ones.
Future work could look into enabling flexible and multi-modal task specification. Incorporating relevant existing, freely-available datasets with language annotations would be a first good step. Another research avenue could explore improving both training and fine-tuning capabilities of such a model with reinforcement learning (RL), since RoboCat in its current form only employs behaviour cloning. While visual goal specification already allows the agent to learn from failures and sub-optimal data, incorporating RL would enable both learning with rewards and learning online with real-world interaction.
## Broader Impact
This work presents progress on training generalist agents for robotic manipulation. While our work only presents a recipe, and first steps, in an emerging area, the potential impact on society from generalist robotic agents calls for increased interdisciplinary research into their risks and benefits. To provide an easily accessible reference for RoboCat's intended use-case and potential shortcomings we include a model card in Appendix A. We emphasise that the model is for research use only and not currently deployed in any production scenario to any users, and thus expect no immediate societal impact.
In general, RoboCat inherits many of the safety concerns discussed in Gato (Reed et al., 2022), on which it is based. In addition, since RoboCat takes actions in the physical world--and on multiple embodiments--it may pose new challenges with respect to safety. For example, physical embodiments and imitation from human data can cause humans to anthropomorphise the agent, leading to a potentially misplaced trust and underappreciation for inherent dangers that come from interacting with robots3. Additionally, cross-embodiment transfer from one robot to another can lead to undesired movements (such as high gain motor actuation). Considerations with respect to general AGI safety (Bostrom, 2014) may also require updating when considering agents with multiple embodiments.
Footnote 3: We note that we utilise a force-torque compliant controller with built in safety mechanisms.
We consider that value alignment (Russell, 2019) with human preferences (as e.g. expressed via reward labelling in this work) is crucial for a safe evolution of this technology. While our reward labelling process to determine successful and desired behaviours is a starting point for this, future work should consider adapting alignment techniques successfully used for language models to our setting (Bai et al., 2022; Kenton et al., 2021; Ouyang et al., 2022).
Finally, the self-improvement loop we designed for RoboCat allows us to improve the model over time by retraining on data collected from deploying a previous version to our robots. Such a self-improvement loop poses additional challenges with respect to AGI safety since it, partially, implements a reinforcement learning loop, which comes with its own safety concerns (see e.g. Omohundro (2008); Turner et al. (2021)). While further work is needed into AGI safety for reinforcement learning robotic systems, it is important to note that unlike in a reinforcement learning scenario, the self-improvement capabilities of RoboCat are _not_ autonomous and _no learning_ takes place while interacting with the physical world. That is, data collection is started and stopped by humans and uses frozen versions of RoboCat. Learning an improved version is implemented as a supervised learning problem from a fixed data source and is entirely decoupled from data collection.
## Acknowledgements
We would like to thank Jackie Kay for contributions to the VQ-GAN codebase; Federico Casarini for help with the robotic lab operations; Markus Wulfmeier for initial exploration into alternative fine-tuning methods; Nando de Freitas for general advice; Yixin Lin, Vincent Vanhoucke, Shakir Mohamed, and Michael Neunert for paper feedback; and Jonathan Hutchinson for the graphics of Figure 1.
## Author Contributions
**RoboCat generalist training**
Konstantinos Bousmalis, Giulia Vezzani, Coline Devin, Dushyant Rao, Alex X. Lee, Maria Bauza, Todor Davchev, Yuxiang Zhou, Agrim Gupta
**RoboCat fine-tuning**
Giulia Vezzani, Dushyant Rao, Alex X. Lee, Coline Devin, Maria Bauza, Todor Davchev, Valentin Dalibard, Martina Zambelli, Agrim Gupta
**Core infrastructure for experiments at scale**
Michiel Blokzijl, Claudio Fantacci, Akhil Raju, Antoine Laurens, Dave Barker
**Data and tasks**
_KUKA_: Murilo Martins, Martina Zambelli, Rugile Pevceviciute, Antoine Laurens, Jose Enrique Chen
_NIST-i_: Todor Davchev, Maria Bauza, Akhil Raju, Jost Tobias Springenberg, Jon Scholz, Misha Denil, Oleg Sushkov, Jean-Baptiste Regli, Tom Rothorl
_RGB_: Giulia Vezzani, Konstantinos Bousmalis, Dushyant Rao, Coline Devin, Alex X. Lee, Thomas Lampe, Abbas Abdolmaleki, Francesco Nori, Antoine Laurens
_Non-NIST-i insertion/removal_: Akhil Raju, Antoine Laurens, Alex X. Lee
**Evaluation infrastructure: success detectors, no-reset evaluation, annotations**
Akhil Raju, Claudio Fantacci, Misha Denil, Michiel Blokzijl, Todor Davchev, Thomas Lampe, Dave Barker, Maria Bauza, Alex X. Lee, Jon Scholz, Tom Rothorl
**VQ-GAN tokenisation**
Agrim Gupta, Coline Devin, Scott Reed
**Gato architecture and infrastructure**
Emilio Parisotto, Konrad Zohna, Scott Reed, Sergio Gomez Colmenarejo, Jost Tobias Springenberg, Oliver Groth
**Single-task VFM baselines**
Yuxiang Zhou, Todor Davchev, Alex X. Lee
**Teleoperated data collection**
Akhil Raju, Antoine Laurens, Michiel Blokzijl, Misha Denil, Nathan Batchelor, Claudio Fantacci, Joy Ortiz
**Paper and blog post content**
Coline Devin, Alex X. Lee, Dushyant Rao, Konstantinos Bousmalis, Giulia Vezzani, Todor Davchev, Maria Bauza, Agrim Gupta, Akhil Raju, Antoine Laurens, Jost Tobias Springenberg, Misha Denil, Nicolas Heess
**Project leadership and coordination**
Konstantinos Bousmalis, Giulia Vezzani, Joy Ortiz
**Advisors**
Nicolas Heess, Francesco Nori, Raia Hadsell, Jost Tobias Springenberg, Martin Riedmiller, Yusuf Aytar
|
2310.02922 | Public verifiable measurement-only blind quantum computation based on
entanglement witnesses | Recently, Sato et al. proposed an public verifiable blind quantum computation
(BQC) protocol by inserting a third-party arbiter. However, it is not truly
public verifiable, because the arbiter is determined in advance and
participates in the whole process. In this paper, a public verifiable protocol
for measurement-only BQC is proposed. The fidelity between arbitrary states and
the graph states of 2-colorable graphs is estimated by measuring the
entanglement witnesses of the graph states,so as to verify the correctness of
the prepared graph states. Compared with the previous protocol, our protocol is
public verifiable in the true sense by allowing other random clients to execute
the public verification. It also has greater advantages in the efficiency,
where the number of local measurements is O(n^3*log {n}) and graph states'
copies is O(n^2*log{n}). | Wen-Jie Liu, Zi-Xian Li, Wen-Bo Li, Qi Yang | 2023-10-03T17:16:15Z | http://arxiv.org/abs/2310.02922v1 | # Public Verifiable Measurement-Only Blind Quantum Computation Based On Entanglement Witnesses
###### Abstract
Recently, Sato et al. proposed a public verifiable blind quantum computation (BQC) protocol by inserting a third-party arbiter. However, it is not truly public verifiable, because the arbiter is determined in advance and participates in the whole process. In this paper, a public verifiable protocol for measurement-only BQC is proposed. The fidelity between arbitrary states and the graph states of 2-colorable graphs is estimated by measuring the entanglement witnesses of the graph states, so as to verify the correctness of the prepared graph states. Compared with the previous protocol, our protocol is public verifiable in the true sense by allowing other random clients to execute the public verification. It also has clear efficiency advantages: the number of local measurements is \(O(n^{3}\log n)\) and the number of graph-state copies is \(O(n^{2}\log n)\).
**Keywords: Blind quantum computation, Public verifiability, Graph state, Entanglement witness**
## 1 Introduction
Blind quantum computation (BQC) allows a client (known as Alice) with weak quantum ability to delegate her computing tasks to a quantum server (known as Bob) without leaking her privacy. BQC is divided into two categories: circuit-based BQC (CBQC)[1, 2, 3, 4, 5] and measurement-based BQC (MBQC)[6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. CBQC realizes blindness through quantum circuits, where the client needs the ability to operate some quantum gates. In MBQC, the client only needs to prepare and measure quantum states. Recently, Morimae and Fujii[7] proposed a new type of BQC, called measurement-only BQC (MOBQC), where the server prepares the resource state and the client only needs to perform single-qubit measurements.
As more and more BQC protocols are proposed, the _verifiability_ of BQC has attracted much attention. In a verifiable BQC protocol, each party can verify whether the other party is honest. Although Broadbent et al.[6] explored the possibility of verifiability in their protocol, their treatment is not complete. Building on that work, Fitzsimons et al.[8] proposed a relatively complete verifiable version. In this protocol, the verifier encodes the computation task (including the verification mechanism) into a series of single qubits and then executes BQC; from the results, it can be verified whether the computation has been executed correctly. In addition, several BQC protocols[2, 8, 9] verify the correctness of the input of BQC by checking trap qubits randomly hidden in the input state. For MOBQC[7], verification of graph states has been proposed[10, 11, 12, 13, 14]. Stabilizer testing[14] is a verification technique for graph states that does not require traps: the server generates graph states and sends them to the client, and the latter directly measures stabilizers on the sent graph states to verify their correctness. However, these verifiable MOBQC protocols[10, 12, 13, 14] based on the stabilizer test consume a large amount of resources, which is an obstacle to the development of scalable quantum computation. In 2021, Xu et al.[15] proposed a verifiable BQC protocol based on entanglement witnesses, which effectively reduces the resource consumption of verification by measuring entanglement witnesses[18] that can detect the graph states.
The above verifiable protocols only allow Alice to verify Bob's honesty, which is called _private verifiability_. However, private verifiability has the following problems: Alice can detect any dishonest behavior of Bob, but the detection result cannot convince any third party; moreover, even if Bob is honest, he can be framed by Alice. In 2016, on the basis of the unconditionally verifiable BQC protocol[8], Kentaro[16] proposed the concept of _public verifiability_ and provided a corresponding protocol based on classical cryptography. In 2019, Sato et al.[17] chose to insert a trusted third party as an arbiter to build an arbitrable BQC protocol which realizes public verifiability in a sense. However, the public verifiability depends on the arbiter, who is determined in advance and participates in the whole process; thus the protocol is not truly public verifiable.
In this paper, inspired by the verifiable mechanism based on entanglement witnesses, we propose a public verifiable MOBQC protocol. The third-party
verifier is randomly selected from the other clients rather than being a specific arbiter, so as to achieve public verifiability in the true sense. In addition, 2-colorable graphs and entanglement witnesses are introduced to reduce resource consumption. Compared with the number of local measurements (\(O(n^{2n+5}2^{n})\)) and of copies of the resource states (\(O(n^{2n+5})\)) of Sato et al.[17], our protocol has clear advantages (\(O(n^{3}\log n)\) and \(O(n^{2}\log n)\), respectively). We also consider communication errors and give some error mitigation schemes.
The rest of this paper is organized as follows: In Sect. 2, we briefly introduce 2-colorable graph states, entanglement witnesses, and MOBQC. The protocol is presented in Sect. 3 and analyzed in Sect. 4. Error propagation and mitigation is analyzed in Sect. 5. The paper concludes with Sect. 6.
## 2 Preliminaries
In this section, we briefly introduce 2-colorable graph states and the entanglement witnesses of them, and then review the basic steps of measurement-only BQC.
### 2-colorable graph state
Given an undirected simple graph \(G\) with \(n\) vertices \(i\in V\) and edges \((i,j)\in E\subseteq V\times V\), if its vertices can be divided into \(m\) disjoint subsets \(S_{1},S_{2},\cdots,S_{m}\) such that there is no edge between any pair of vertices in any \(S_{j},j=1,2,\cdots,m\), then we call \(G\) an \(m\)-colorable graph. We use \(n\) qubits to represent the vertices of \(G\), and the graph state \(\left|G\right\rangle\) corresponding to \(G\) is defined as \(\left|G\right\rangle=\left(\prod_{(i,j)\in E}U_{ij}\right)\left|+\right\rangle ^{\otimes n}\), where \(\left|+\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle+\left|1 \right\rangle\right)\) is the initial state of each vertex and \(U_{ij}\) is the controlled-\(Z\) gate \(\left|0\right\rangle\left\langle 0\right|\otimes I+\left|1\right\rangle \left\langle 1\right|\otimes Z\) performed on vertices \(i\) and \(j\), where \(I\) is the identity operator and \(Z\) is the Pauli operator \(\sigma_{z}\). On the other hand, \(\left|G\right\rangle\) has \(n\) stabilizers \(g_{i}=X_{i}\prod_{k\in N(i)}Z_{k}\), i.e., \(g_{i}\left|G\right\rangle=\left|G\right\rangle\), where \(i=1,2,\cdots,n\), \(N\left(i\right)\) is the set of vertices adjacent to vertex \(i\), and \(X_{i},Z_{j}\) are the Pauli operators \(\sigma_{x},\sigma_{z}\) performed on vertices \(i,j\) respectively.
In this paper, we only consider 2-colorable graph states, which are widely used as resource states of BQC, such as the brickwork state[6] and the Raussendorf-Harrington-Goyal (RHG) state[19]; their preparation and verification are of research value. An example of a 2-colorable graph is shown in Figure 1.
Figure 1: A 2-colorable graph as an example, where red vertices belong to \(S_{1}\) and green vertices belong to \(S_{2}\).
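To make the definitions above concrete, the short NumPy sketch below (ours, for illustration only) builds the graph state of a 4-vertex cycle, which is 2-colorable with \(S_{1}=\{0,2\}\) and \(S_{2}=\{1,3\}\), and checks that each stabilizer \(g_{i}\) leaves \(\left|G\right\rangle\) unchanged.

```python
import numpy as np

def graph_state(n, edges):
    """|G> = (prod_{(i,j) in E} CZ_ij) |+>^n as a 2^n-dimensional vector."""
    state = np.ones(2 ** n) / np.sqrt(2 ** n)          # |+>^n
    for b in range(2 ** n):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        for (i, j) in edges:                            # controlled-Z phases
            if bits[i] and bits[j]:
                state[b] *= -1
    return state

def stabilizer(n, edges, i):
    """g_i = X_i prod_{k in N(i)} Z_k as a dense matrix."""
    X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1]); I = np.eye(2)
    neighbours = {a if b == i else b for (a, b) in edges if i in (a, b)}
    op = np.eye(1)
    for q in range(n):
        op = np.kron(op, X if q == i else (Z if q in neighbours else I))
    return op

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]               # 4-cycle, 2-colorable
psi = graph_state(4, edges)
for i in range(4):
    assert np.allclose(stabilizer(4, edges, i) @ psi, psi)   # g_i |G> = |G>
```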
### Entanglement witness
An entanglement witness \(W\) is an observable which satisfies: (1) for all separable states \(\varrho_{s}\), \(tr\left(W\varrho_{s}\right)\geq 0\); (2) at least one entangled state \(\varrho_{e}\) satisfies \(tr\left(W\varrho_{e}\right)<0\), where \(tr\left(\cdot\right)\) denotes the matrix trace; we then say that \(\varrho_{e}\) is detected by \(W\). For an \(n\)-qubit graph state and states close to it, witnesses requiring only a constant number of measurement settings have been proposed, based on the colorability of the graph. The following \(W^{(2)}\) is a witness for a 2-colorable graph state \(|G\rangle\)[18]:
\[W^{(2)}=3I-2\left[\prod_{i\in S_{1}}\frac{g_{i}+I}{2}+\prod_{i\in S2}\frac{g_{ i}+I}{2}\right], \tag{1}\]
where \(S_{1},S_{2}\) are the two divided subsets of the graph. According to the structure of the witness, for a given 2-colorable graph state only two measurement settings are needed, and the \(j\)-th setting is the observable \(\prod\limits_{i\in S_{j}}g_{i}\). The two settings corresponding to the 2-colorable graph in Figure 1 are shown in Figure 2. For the \(j\)-th measurement setting \(\prod\limits_{i\in S_{j}}g_{i}\), we only need to measure the qubits corresponding to \(S_{j}\) together with the qubits of the other subset \(\overline{S_{j}}\) that are adjacent to them, combining the outcomes according to the adjacency relations. Therefore, a setting \(\prod\limits_{i\in S_{j}}g_{i}\) requires only \(O(n)\) local measurements.
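The witness in Eq. 1 can be checked numerically on a small example. The sketch below (ours, for illustration) uses the 4-cycle with \(S_{1}=\{0,2\}\), \(S_{2}=\{1,3\}\): the product of the stabilizer projectors over all vertices equals \(\left|G\right\rangle\left\langle G\right|\), the witness expectation on the graph state is \(-1\), and on the maximally mixed (separable) state it is \(+2\).

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1]); I2 = np.eye(2)

def stabilizer_projector(n, edges, S):
    """prod_{i in S} (g_i + I)/2, with g_i = X_i prod_{k in N(i)} Z_k."""
    P = np.eye(2 ** n)
    for i in S:
        neigh = {a if b == i else b for (a, b) in edges if i in (a, b)}
        g_i = reduce(np.kron, [X if q == i else (Z if q in neigh else I2)
                               for q in range(n)])
        P = P @ (g_i + np.eye(2 ** n)) / 2
    return P

n, edges, S1, S2 = 4, [(0, 1), (1, 2), (2, 3), (3, 0)], {0, 2}, {1, 3}
P1, P2 = stabilizer_projector(n, edges, S1), stabilizer_projector(n, edges, S2)
W = 3 * np.eye(2 ** n) - 2 * (P1 + P2)       # Eq. (1)

rho_G = P1 @ P2                               # = |G><G| for this connected graph
rho_mix = np.eye(2 ** n) / 2 ** n             # maximally mixed, separable
print(np.trace(W @ rho_G))                    # -1: the graph state is detected
print(np.trace(W @ rho_mix))                  #  2: non-negative on this separable state
```

A measured value of \(Tr(W^{(2)}\varrho)\) close to \(-1\) therefore certifies, via the fidelity bound used later in Sect. 4.2, that \(\varrho\) is close to \(\left|G\right\rangle\).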
### Measurement-only blind quantum computation
In MOBQC, the server Bob only needs to prepare the general resource state, and the client Alice only needs to perform quantum measurements. The protocol proceeds as follows: Bob prepares the general resource state and then sends the prepared particles to Alice via a quantum channel, and Alice then measures the received particles in the bases determined by her algorithm. Verification in this model is generally aimed at the correctness of the resource state: Bob is often required to prepare multiple copies of the resource state, some of which are used for verification while one of the rest is used for the computation, as shown in Figure 3.
Figure 2: The two settings corresponding to the graph in Figure 1, where all red vertices belong to \(S_{1}\) and all green vertices belong to \(S_{2}\). (a)(b) are the observables \(\prod\limits_{i\in S_{1}}g_{i}\), \(\prod\limits_{i\in S_{2}}g_{i}\) corresponding to \(S_{1},S_{2}\) respectively.
## 3 Public Verifiable Measurement-only Blind Quantum Computation based on entanglement witnesses
### Verification algorithm
Inspired by Xu et al.'s verification mechanism[15], we present a verification algorithm to verify the correctness of the prepared graph state. Consider a target graph state \(|G\rangle\) corresponding to a 2-colorable graph \(G\) and an unknown state \(\varrho\) to be verified. The two divided subsets of \(G\) are denoted as \(S_{1},S_{2}\), and the verification process is shown in Algorithm 1.
In Algorithm 1, the threshold constant \(C\) is chosen to ensure that the fidelity between the prepared state \(\varrho\) and the required state \(|G\rangle\) is high enough. Because of the fidelity estimation process, \(C\) is not fixed but depends on who performs the verification, i.e., \(C\) is different for the third-party verifier and for the client in our protocol. Therefore, we leave \(C\) as a pending parameter so as to ensure the scalability of the verification. Based on the above, Algorithm 1 can be applied to public verification.
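Algorithm 1 appears as a listing in the original paper and is not reproduced here; the sketch below is only our reconstruction of its logic from the surrounding description (Sects. 3 and 4). In particular, `measure_setting` is an assumed primitive returning the \(\pm 1\) outcomes of the single-qubit measurements for setting \(\prod_{i\in S_{j}}g_{i}\), and the even split of the \(2K\) registers into the two groups \(\Pi^{(1)},\Pi^{(2)}\) is our reading of the grouping.

```python
import random

def verify(registers, S1, S2, neighbours, C, measure_setting):
    """Reconstruction (sketch) of the verification logic of Algorithm 1.

    registers: the 2K state copies selected for verification.
    neighbours[i]: the set N(i) of vertices adjacent to vertex i.
    measure_setting(reg, S): assumed primitive measuring setting prod_{i in S} g_i
        and returning dicts x, z of +-1 single-qubit outcomes.
    Accept iff K_1 + K_2 <= C, where K_j counts registers of group j with
    M_j = prod_{i in S_j} (x_i prod_{k in N(i)} z_k + 1)/2 = 0.
    """
    random.shuffle(registers)
    half = len(registers) // 2
    groups = {1: (S1, registers[:half]), 2: (S2, registers[half:])}
    failures = 0
    for j, (S, regs) in groups.items():
        for reg in regs:
            x, z = measure_setting(reg, S)
            M = 1
            for i in S:
                prod_z = 1
                for k in neighbours[i]:
                    prod_z *= z[k]
                M *= (x[i] * prod_z + 1) // 2
            failures += int(M == 0)
    return failures <= C        # accept (True) or reject (False)
```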
### Proposed protocol
In Sato et al.'s protocol[17], Charlie, the third-party arbiter, can arbitrate in case of a dispute between the server Bob and the client Alice. However, the verifier (Charlie) is determined in advance and participates in the whole process, so he is not a true third party independent of Bob and Alice. To achieve true public verification, i.e., to allow any third party to participate in the verification, we remove Charlie's role as third-party verifier and retain only his ability to store quantum states; he is therefore renamed the storage center. The third party that participates in the public verification is selected randomly from the other clients Alice\({}_{2}\), Alice\({}_{3}\),\(\cdots\), Alice\({}_{l}\), where \(l\) is the total number
Figure 3: The verification process of MOBQC.
of clients. As shown in Figure 4, there are three parties in the protocol: the server Bob is responsible for preparing the graph states; the set A = {Alice\({}_{1}\), Alice\({}_{2}\),\(\cdots\), Alice\({}_{l}\)} is a set of clients of quantum computation, which are all legal users[20, 21] registered with Charlie; the storage center Charlie has the ability to store quantum states and is responsible for distributing the graph states prepared by Bob to the client and selecting the third-party verifier, and is ensured honesty. When the protocol is executed between Alice\({}_{1}\) and Bob, any other client Alice\({}_{t}\) in A where \(t\in\{2,\,\ldots,l\}\) can perform public verification.
Figure 4: The tripartite relationship in the proposed protocol.
The graph states used in our protocol all correspond to 2-colorable graphs. Taking client \(\text{Alice}_{1}\), who has a computing request, as an example, the specific steps are as follows (also shown in Figure 5):
1. \(\text{Alice}_{1}\) sends a preparation request to Charlie, and Charlie forwards it to Bob; the requested graph state \(\ket{G}\) is an \(n\)-qubit state corresponding to a 2-colorable graph, with \(n\geq 6\).
2. Bob prepares a \(5Kn\)-qubit state \(\ket{G}^{5K}\), where \(K=\left\lceil n^{2}\log n\right\rceil\) and \(\left\lceil\cdot\right\rceil\) is the ceiling function. Then he sends it to Charlie qubit by qubit.
3. Charlie divides the state sent from Bob into \(5K\)\(n\)-qubit states \(\ket{G}\) in turn and stores them in \(n\)-qubit registers respectively. He selects \(2K\) registers independently, evenly and randomly from these \(5K\) registers and keeps them, and then sends the rest \(3K\) to \(\text{Alice}_{1}\) in turn.
4. \(\text{Alice}_{1}\) divides the states sent from Charlie into \(3K\)\(n\)-qubit registers and selects \(2K\) registers independently, evenly and randomly from them, then executes Verification Algorithm (see Algorithm 1) where \(C=\frac{K}{2n}\).
5. If it accepts, \(\text{Alice}_{1}\) considers Bob honest and proceeds to the next step, and otherwise considers Bob dishonest and refuses to pay for services.
6. \(\text{Alice}_{1}\) randomly selects one register from the remaining \(K\) registers and discards the others, then uses this register to perform MBQC, i.e., measures particles on the basis determined by her algorithm.
7. If \(\text{Alice}_{1}\) claims that Bob is dishonest, Bob can ask Charlie for public verification, and then Charlie randomly selects a third party \(\text{Alice}_{t}\) from \(\text{Alice}_{2}\), \(\text{Alice}_{3}\),\(\cdots\), \(\text{Alice}_{l}\) to send verification request.
8. If \(\text{Alice}_{t}\) accepts the request, Charlie sends the \(2K\) copies in his hand to \(\text{Alice}_{t}\) in turn. According to the graph state type, \(\text{Alice}_{t}\) executes Verification Algorithm, where \(C=\frac{3K}{4n}\). If it accepts, \(\text{Alice}_{t}\) claims that \(\text{Alice}_{1}\) is dishonest, otherwise Bob is dishonest.
## 4 Performance analysis
### Completeness analysis
Completeness means that when Bob faithfully prepares the required graph state, it must be accepted by \(\text{Alice}_{1}\) or \(\text{Alice}_{t}\) with high probability. In Step 3 of Algorithm 1, the random variable \(M_{j}^{\varrho}=\prod\limits_{i\in S_{j}}\frac{x_{i}\prod_{k\in N(i)}z_{k}+1 }{2}\in\{0,1\}\) to be calculated is the measurement result of \(\prod\limits_{i\in S_{j}}\frac{g_{i}+I}{2}\). According to quantum measurement theory, we have
\[\overline{M}_{j}^{\varrho}=Tr\left(\prod\limits_{i\in S_{j}}\frac{g_{i}+I}{2} \varrho\right), \tag{2}\]
where \(\overline{M}_{j}^{\varrho}\) is the mathematical expectation of \(M_{j}^{\varrho}\). Assume Bob prepares the correct state \(\varrho=\left|G\right\rangle\left\langle G\right|\), then we have
\[\overline{M}_{j}^{\varrho}=Tr\left(\prod_{i\in S_{j}}\frac{g_{i}+I}{2}\left|G \right\rangle\left\langle G\right|\right)=Tr\left(\left|G\right\rangle\left \langle G\right|\right)=1. \tag{3}\]
Since \(M_{j}^{\varrho}\in\{0,1\}\) and its expectation equals 1, each register \(k\in\Pi^{(j)}\) gives \(M_{j}^{\varrho_{k}}=1\), i.e., \(K_{j}=0\). Therefore, for any \(0\leq C\leq 2K\) we have \(K_{1}+K_{2}\leq C\), i.e., the state must be accepted, which establishes completeness.
### Soundness analysis
Soundness means that if Alice\({}_{1}\) or Alice\({}_{t}\) accepts the state \(\varrho\) prepared by Bob, it must be close to the graph state required by Alice\({}_{1}\) with high probability. The fidelity \(F=\left\langle G\right|\varrho\left|G\right\rangle\) is generally used to measure this closeness. Fidelity estimation is based on the following inequality [15]:
\[F\geq\frac{1}{2}-\frac{1}{2}Tr\left(W^{(2)}\varrho\right). \tag{4}\]
In our protocol, the expectation \(Tr\left(W^{(2)}\varrho\right)\) is obtained by measuring the entanglement witness \(W^{(2)}\) that detects \(\left|G\right\rangle\), so as to determine whether the state \(\varrho\) is close to \(\left|G\right\rangle\) and thus verify the behavior of Bob. We have the following theorem about the soundness of our protocol:
Figure 5: The process of public verifiable protocol. Solid lines represent graph states transmission and dotted lines represent requests transmission.
**Theorem 1**: _In our protocol,_
1. _When_ \(\text{Alice}_{t}\) _hasn't involved in arbitration, if_ \(\text{Alice}_{1}\) _measures_ \(K_{1}+K_{2}\leq\frac{K}{2n}\)_, then we have_ \[F\geq 1-\frac{4\sqrt{\lambda_{1}}+1}{n}\] (5) _with a probability_ \[P\geq 1-4n^{-\frac{\lambda_{1}}{2}},\] (6) _where_ \(\lambda_{1}\) _is arbitrary constant satisfying_ \(\log_{n}16\leq\lambda_{1}\leq\frac{(n-1)^{2}}{16}\)_._
2. _When_ \(\text{Alice}_{t}\) _involves in arbitration, if_ \(\text{Alice}_{t}\) _measures_ \(K_{1}+K_{2}\leq\frac{3K}{4n}\)_, then we have_ \[F\geq 1-\frac{3\sqrt{\lambda_{2}}+1}{n}\] (7) _with a probability_ \[P\geq 1-4n^{-\lambda_{2}},\] (8) _where_ \(\lambda_{2}\) _is arbitrary constant satisfying_ \(\log_{n}4\leq\lambda_{2}\leq\frac{(n-1)^{2}}{9}\)_._
In the theorem, (1) has been proved in [15], so we only need to prove (2), using some existing probability inequalities. If we perform the \(j\)-th measurement setting on the remaining \(3K\) registers, then for each group of \(K\) registers selected by Charlie we can obtain an upper bound on the number of registers among the measured \(3K\) that satisfy \(M_{j}^{\varrho}=0\), together with the relevant confidence probability. Hence, a lower bound on \(\sum\limits_{k=1}^{3K}M_{1}^{\varrho_{k}}\) follows directly. We then obtain a lower bound on \(\overline{M}_{1}^{\varrho}\) and the relevant confidence probability. By Eq. 2 we can obtain a lower bound on \(Tr\left(\prod\limits_{i\in S_{j}}\frac{g_{i}+I}{2}\varrho\right)\). By Eq. 1 and Eq. 4, we finally prove that the fidelity \(F\) satisfies a lower bound with a certain confidence probability \(P\). See Appendix A for details.
Therefore, if the verification is passed, then the state prepared by Bob is close to the required graph state with high probability, leading to the soundness.
In the protocol, \(K=O\left(n^{2}\log n\right)\) so that the probability in the theorem is \(1-O\left(n^{-\lambda}\right)\) for a constant \(\lambda\), which is high enough; we set \(n\geq 6\) so that the above \(\lambda\) exists. See Appendix A for details.
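As a concrete illustration of Theorem 1(2), the snippet below evaluates the bounds for sample values of \(n\) and \(\lambda_{2}\). The base-2 logarithm in \(K=\left\lceil n^{2}\log n\right\rceil\) is our assumption, since the base is left implicit in the text.

```python
import math

def soundness_bounds_case2(n, lam2):
    """Fidelity and confidence lower bounds from Theorem 1(2) (illustrative)."""
    assert math.log(4, n) <= lam2 <= (n - 1) ** 2 / 9, "lambda_2 out of range"
    K = math.ceil(n ** 2 * math.log2(n))   # copies per group (log base assumed)
    C = 3 * K / (4 * n)                    # acceptance threshold used by Alice_t
    F_lower = 1 - (3 * math.sqrt(lam2) + 1) / n
    P_lower = 1 - 4 * n ** (-lam2)
    return K, C, F_lower, P_lower

# e.g. n = 20, lambda_2 = 4: F >= 0.65 with confidence >= 1 - 4/20^4
print(soundness_bounds_case2(20, 4))
```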
### Efficiency analysis
Efficiency refers to the number of copies of the resource state prepared by the protocol and the number of local measurements required. As mentioned above, we set \(K=O\left(n^{2}\log n\right)\) so that the probability is high enough. The detailed parameters of our protocol and of Sato et al.'s protocol [17] are shown in Table 1.
We first consider the number of copies. For the same \(n\)-qubit graph state \(\left|G\right\rangle\), in Sato et al.'s protocol, the number of copies is
\[2k+m+1 \geq 8n^{2}-2+\left(2\ln 2\right)k^{n}n^{5}+1=\Theta\left(n^{2n+5}\right), \tag{9}\]
i.e., \(O\left(n^{2n+5}\right)\) at least; in our protocol it is \(5K\geq 5\left\lceil n^{2}\log n\right\rceil=\Theta\left(n^{2}\log n\right)\), i.e., \(O\left(n^{2}\log n\right)\) at least. Thus our protocol has advantages.
Next, the number of required local measurements is taken into account. In Sato et al.'s protocol, they measured stabilizers \(\prod\limits_{i=1}^{n}g_{i}^{k_{i}}\) on \(k\) or \(2k\)\(n\)-qubit graph states, where \(k=\left(k_{1}k_{2}\ldots k_{n}\right)\) is a randomly selected string. Since the local-measurement decomposition of \(\prod\limits_{i=1}^{n}g_{i}^{k_{i}}\) may be complex (for a specific graph structure it may be \(O\left(2^{n}\right)\)[22]), the number of local measurements is \(O\left(kn\right)=O\left(n^{2n+5}2^{n}\right)\); in our protocol, we measure observables \(\prod\limits_{i\in S_{j}}^{n}g_{i}\) on \(2K\)\(n\)-qubit graph states, and for each graph state the number of local measurements is \(O(n)\), as mentioned above; thus the total number is \(O\left(Kn\right)=O\left(n^{3}\log n\right)\). Therefore, our protocol again has an advantage.
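To make the asymptotic comparison concrete, the short sketch below plugs a sample problem size into the leading-order resource counts quoted above. The value \(n=8\), the base-2 logarithm, and the dropped constant factors are our own illustrative assumptions; they do not affect the asymptotic conclusion.

```python
import math

def resource_scalings(n):
    """Leading-order scalings quoted in the text (constant factors dropped).
    Sato et al.: copies ~ n**(2n+5), local measurements ~ n**(2n+5) * 2**n.
    This protocol: copies ~ 5*ceil(n**2 * log n), measurements ~ n**3 * log n."""
    log_n = math.log2(n)  # base-2 logarithm assumed for illustration
    sato_copies = n ** (2 * n + 5)
    sato_measurements = sato_copies * 2 ** n
    our_copies = 5 * math.ceil(n ** 2 * log_n)
    our_measurements = math.ceil(n ** 3 * log_n)
    return sato_copies, sato_measurements, our_copies, our_measurements

n = 8
sc, sm, oc, om = resource_scalings(n)
print(f"n={n}: Sato et al. copies ~ {sc:.2e}, measurements ~ {sm:.2e}")
print(f"n={n}: our protocol copies ~ {oc}, measurements ~ {om}")
```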
## 5 Error propagation and mitigation
It is worth noting that all the above analyses are based on the assumption that the quantum channel is noiseless. However, an actual channel has a certain amount of noise, so errors will inevitably be introduced during the graph states' transmission. Since Bob is not assumed to be honest in our protocol (i.e., the sent graph state is not guaranteed to be correct), it is impossible to determine whether the graph state was disturbed by noise by comparing it with the target state. To mitigate the impact of noise, we have the following two methods.
(1) _Use channel noise detection._ For each \(n\)-qubit graph state \(\left|G\right\rangle\), the sender (Bob or Charlie) inserts some additional qubits, not entangled with the graph state, among the \(n\) qubits. The initial states of these qubits are agreed upon in advance by the sender and the receiver (Charlie or Alice), and they are sent together with the graph state as a whole. In this way, if some noise
\begin{table}
\begin{tabular}{l l l l} \hline Protocol & Copies numbers & Parameters ranges & Observables measured \\ \hline Sato et al.’s & \(\left|G\right\rangle^{2k+m+1}\) & \(k\geq 4n^{2}-1\), \(m\geq\left(2\ln 2\right)K^{n}n^{5}\) & \(\prod\limits_{i=1}^{n}g_{i}^{k_{i}}\) \\ Our & \(\left|G\right\rangle^{5K}\) & \(K\geq\left\lceil n^{2}\log n\right\rceil\) & \(\prod\limits_{i\in S_{j}}^{n}g_{i}\) \\ \hline \end{tabular}
\end{table}
Table 1: Detailed parameters of the publicly verifiable MBQC protocols
is encountered during one transmission, the state of these extra qubits will be changed with a certain probability, and thus the noise can be detected. If the noise is considered to be too high, the receiver may reject the communication and request retransmission.
For example, assume that \(k\) additional qubits are prepared in the initial state \(|0\rangle\), that bit-flip noise occurs in the quantum channel with probability \(p\), and that \(r\) qubits are found flipped to \(|1\rangle\) after transmission. Since \(r\) has a binomial distribution, its expectation is \(kp\). By the Azuma-Hoeffding bound [23] (see Appendix A for details), \(\forall t>0\), we have
\[\Pr\left(\left(1-\frac{r}{k}\right)-\left(1-p\right)\leq t\right)=\Pr\left(p \leq\frac{r}{k}+t\right)\geq 1-\exp\left(-2kt^{2}\right). \tag{10}\]
If \(p_{th}\) is the noise threshold, then
\[\Pr\left(p\leq p_{th}\right)\geq 1-\exp\left(-2k\left(p_{th}-\frac{r}{k} \right)^{2}\right). \tag{11}\]
Let \(1-\exp\left(-2k\left(p_{th}-\frac{r}{k}\right)^{2}\right)\geq 99\%\), then \(r\leq kp_{th}-p\sqrt{k\ln 10}=r_{th}\). For instance, let \(k=5\), \(p=1\%\), then \(r_{th}=3.44\). If \(r\leq 3\), then we can say \(\Pr\left(p\leq p_{th}\right)\geq 99\%\). The larger the \(k\), the smaller the \(\frac{r_{th}-\left[r_{th}\right]}{r_{th}}\leq\frac{1}{r_{th}}\), and the tighter the upper bound. Other noise types can be detected similarly, and only the corresponding initial states and measurement bases need to be agreed.
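The confidence statement in Eq. 11 can also be evaluated directly. The sketch below does so for a few sample values of \(k\), \(r\), and \(p_{th}\); these numbers are illustrative choices of ours (not the figures used above) and assume \(p_{th}>r/k\), the regime in which the bound is informative.

```python
import math

def noise_confidence(k, r, p_th):
    """Lower bound on Pr(p <= p_th) from Eq. 11, given k test qubits of
    which r were found flipped; only meaningful when p_th > r/k."""
    t = p_th - r / k
    if t <= 0:
        return 0.0  # the bound carries no information in this regime
    return 1 - math.exp(-2 * k * t ** 2)

# Illustrative values (ours): adding more test qubits tightens the bound.
for k, r, p_th in [(20, 1, 0.3), (100, 5, 0.3), (500, 25, 0.3)]:
    print(f"k={k:4d}, r={r:3d}, p_th={p_th}: Pr(p <= p_th) >= {noise_confidence(k, r, p_th):.4f}")
```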
(2) _Use fault-tolerant quantum computing (FTQC)._ As mentioned by Morimae et al. [7], using a computational model that can handle particle losses can effectively mitigate noise. An \(\llbracket n,k,d\rrbracket\) quantum error-correcting code (QECC) encodes \(k\) logical qubits into \(n\) physical qubits, and a QECC with distance \(d\) can correct errors on up to \(\lfloor\frac{d-1}{2}\rfloor\) arbitrary qubits [24]. The entanglement of the graph state will not be destroyed in a qubit stabilizer QECC scheme, because it is not necessary to actually know the initial state of the target qubit; one only needs to measure and compare the relative changes between the physical qubits. On the other hand, only measurements and quantum gates of the single-qubit Pauli operators \(X,Y,Z\) are required, which means that even a receiver with weak quantum capabilities can implement it. In existing fault-tolerant quantum computing, the noise threshold can even reach \(24.9\%\) [25].
Note that the above two methods can be used in combination because they are independent of each other. First, the channel noise detection can ensure that the noise factor is lower than a certain threshold, and then the fault tolerance mechanism can correct small errors. By using the two methods, the error caused by channel noise can be mitigated to a certain extent. Of course, when the noise reaches a certain level, even retransmission will fail.
## 6 Conclusion
In this paper, we propose a publicly verifiable measurement-only blind quantum computation protocol. By introducing a storage center, it allows the third-party verifier to be any other randomly selected client. Compared with the previous protocol, our protocol is publicly verifiable in the true sense. In the protocol, the fidelity estimation between arbitrary states and graph states is realized by measuring the entanglement witnesses detecting the graph states. Without loss of completeness and soundness, the nature of 2-colorable graph states reduces the number of local measurements (\(O\left(n^{3}\log n\right)\)) and the number of copies of the graph-state resources (\(O\left(n^{2}\log n\right)\)). Compared with the arbitrable protocol of Sato et al. [17] (where the number of local measurements is \(O(n^{2n+5}2^{n})\) and the number of copies of the resource states is \(O(n^{2n+5})\)), our protocol has obvious advantages. We also consider communication errors and give some error-mitigation schemes.
On the other hand, since we have only considered 2-colorable graphs, the proposed protocol is not applicable to arbitrary graph states. For more general graph states, more research is needed to further improve the efficiency and performance of existing schemes.
**Acknowledgments.** The authors would like to thank the anonymous reviewers and editors for their comments that improved the quality of this paper. This work is supported by the National Natural Science Foundation of China (62071240), the Innovation Program for Quantum Science and Technology (2021ZD0302902), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
## Appendix A Proof of Theorem 1
In the theorem, (1) has been proved [15] under the condition \(n\geq 6\). Now we prove (2). First, we introduce the following two probability bounds, which will be used in the analysis (a small numerical sketch of both bounds is given right after the list); here \(\Pr\left(\cdot\right)\) denotes the probability of an event and \(\mathrm{E}\left(\cdot\right)\) denotes the mathematical expectation:
1. _Serfling's bound[26]_ Given a set \(Y=\left(Y_{1},Y_{2},...,Y_{T}\right)\) of \(T\) binary random variables with \(Y_{k}\in\left\{0,1\right\}\) and two arbitrary positive integers \(N\) and \(K\) that satisfy \(T=N+K\), select \(K\) distinct samples from \(Y\) uniformly at random (i.e., without replacement), let \(\Pi\) be the set of these samples and \(\overline{\Pi}=Y-\Pi\); then \(\forall 0<v<1\), we have \[\begin{split}&\Pr\left(\sum_{k\in\overline{\Pi}}Y_{k}\leq\frac{N}{K}\sum_{k\in\Pi}Y_{k}+Nv\right)\\ &\geq 1-\exp\left(-\frac{2v^{2}NK^{2}}{\left(N+K\right)\left(K+1\right)}\right).\end{split}\] (A1)
2. _Azuma-Hoeffding bound[23]_ Given independent random variables \(\xi_{1},\xi_{2},\cdots,\xi_{n}\) where \(\xi_{i}\in\left[a_{i},b_{i}\right],i=1,2,\cdots,n\), then \(\forall t>0\), we have \[\Pr\left(\frac{\xi_{1}+\xi_{2}+\cdots+\xi_{n}}{n}-\mathrm{E}\left(\frac{\xi_{1}+\xi_{2}+\cdots+\xi_{n}}{n}\right)\leq t\right)\] (2) \[\geq 1-\exp\left(-\frac{2n^{2}t^{2}}{\sum\limits_{i=1}^{n}\left(b_{i}-a_{i}\right)^{2}}\right).\]
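The sketch below (our illustration, not part of the proof) simply evaluates the right-hand sides of the two bounds for sample parameter values, using the same \((N,K)\) combinations that appear in the argument that follows.

```python
import math

def serfling_confidence(N, K, v):
    """Right-hand side of Serfling's bound (A1) for sampling K out of T = N + K items."""
    return 1 - math.exp(-2 * v ** 2 * N * K ** 2 / ((N + K) * (K + 1)))

def hoeffding_confidence(n, t, width=1.0):
    """Right-hand side of the Azuma-Hoeffding bound for n independent variables,
    each supported on an interval of length `width` (here width = 1)."""
    return 1 - math.exp(-2 * n ** 2 * t ** 2 / (n * width ** 2))

# Illustrative parameters (ours); K registers sampled as in the proof below.
K, v, t = 400, 0.05, 0.05
print("Serfling,        N = 4K:", round(serfling_confidence(4 * K, K, v), 4))
print("Serfling,        N = 3K:", round(serfling_confidence(3 * K, K, v), 4))
print("Azuma-Hoeffding, n = 3K:", round(hoeffding_confidence(3 * K, t), 4))
```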
For the first \(K\) registers selected, we denote them as \(\Pi^{(1)}\) and the rest \(4K\) as \(\overline{\Pi}^{(1)}\). Let \(T=5K,N=4K,Y_{k}=\begin{cases}0,M_{1}^{\varrho_{k}^{\prime}}=1\\ 1,M_{1}^{\varrho_{k}^{\prime}}=0\end{cases}\), where \(\varrho_{k}^{\prime}\) is the state in the \(k\)-th register in \(\Pi^{(1)}\) or \(\overline{\Pi}^{(1)}\), then we have
\[\Pr\left(\sum\limits_{k\in\overline{\Pi}^{(1)}}Y_{k}\leq\frac{4K}{K}\sum \limits_{k\in\Pi^{(1)}}Y_{k}+4Kv\right)\geq 1-\exp\left(-\frac{2v^{2}4KK^{2}}{ \left(4K+K\right)\left(K+1\right)}\right) \tag{3}\]
by Eq. A1, which means that if we perform the first measurement setting on the remaining \(4K\) registers, then the number of registers satisfying \(M_{1}^{\varrho}=0\) (i.e., \(Y_{k}=1\)) in \(\overline{\Pi}^{(1)}\) is upper bounded by \(4\sum\limits_{k\in\Pi^{(1)}}Y_{k}+4Kv\), with the probability given on the right-hand side of Eq. 3. Similarly, for the second \(K\) registers selected, we denote them as \(\Pi^{(2)}\) and the remaining \(3K\) as \(\overline{\Pi}^{(2)}\). Let \(T=4K,N=3K,Y_{k}=\begin{cases}0,M_{2}^{\varrho_{k}^{\prime}}=1\\ 1,M_{2}^{\varrho_{k}^{\prime}}=0\end{cases}\), where \(\varrho_{k}^{\prime}\) is the state in the \(k\)-th register in \(\Pi^{(2)}\) or \(\overline{\Pi}^{(2)}\); then we have
\[\Pr\left(\sum\limits_{k\in\overline{\Pi}^{2}}Y_{k}\leq\frac{3K}{K}\sum \limits_{k\in\Pi^{2}}Y_{k}+3Kv\right)\geq 1-\exp\left(-\frac{2v^{2}3KK^{2}}{ \left(3K+K\right)\left(K+1\right)}\right), \tag{4}\]
which means that if we perform the second measurement setting on the remaining \(3K\) registers, then the number of registers satisfying \(M_{2}^{\varrho}=0\) in \(\overline{\Pi}^{(2)}\) is upper bounded by \(3\sum\limits_{k\in\Pi^{(2)}}Y_{k}+3Kv\), with the probability given on the right-hand side of Eq. 4. In the protocol, no two clients trust each other, so the remaining \(3K\) registers can be regarded as not yet measured. If we perform the first measurement setting on the remaining \(3K\) registers, there will be \(3K-\left(4\sum\limits_{k\in\Pi^{(1)}}Y_{k}+4Kv\right)\)
registers satisfying \(M_{1}^{\varrho}=1\) at least, i.e.,
\[\sum_{k=1}^{3K}M_{1}^{\varrho_{k}}\geq 3K-\left(4\sum_{k\in\Pi^{(1)}}Y_{k}+4 Kv\right). \tag{10}\]
Similarly, we have
\[\sum_{k=1}^{3K}M_{2}^{\varrho_{k}}\geq 3K-\left(3\sum_{k\in\Pi^{(2)}}Y_{k}+3 Kv\right). \tag{11}\]
Let \(n=3K\) and \(\xi_{k}=M_{1}^{\varrho_{k}}\) or \(M_{2}^{\varrho_{k}}\); then by the Azuma-Hoeffding bound we have
\[\Pr\left(\frac{1}{3K}\sum_{k=1}^{3K}M_{1}^{\varrho_{k}}-\overline{M}_{1}^{ \varrho}\leq t\right)\geq 1-\exp\left(-2\cdot 3Kt^{2}\right). \tag{12}\]
By \(Tr\left(\prod\limits_{i\in S_{1}}\frac{g_{i}+I}{2}\varrho\right)=\overline{M} _{1}^{\varrho}\) and Eq 10 we have
\[\Pr\left(Tr\left(\prod\limits_{i\in S_{1}}\frac{g_{i}+I}{2} \varrho\right)>1-\frac{1}{3K}\left(4\sum_{k\in\Pi^{(1)}}Y_{k}+4 Kv\right)-t\right), \tag{13}\] \[\geq 1-\exp\left(-6Kt^{2}\right)\]
and similarly we have
\[\Pr\left(Tr\left(\prod\limits_{i\in S_{2}}\frac{g_{i}+I}{2} \varrho\right)>1-\frac{1}{3K}\left(3\sum_{k\in\Pi^{(2)}}Y_{k}+3 Kv\right)-t\right). \tag{14}\] \[\geq 1-\exp\left(-6Kt^{2}\right)\]
Therefore, we have
\[\begin{split}& F\geq\frac{1}{2}-\frac{1}{2}Tr\left(W^{(2)}\varrho\right)\\ &=\frac{1}{2}-\frac{1}{2}Tr\left(3I\varrho\right)+\frac{1}{2}Tr\left(2\prod\limits_{i\in S_{1}}\frac{g_{i}+I}{2}\varrho\right)+\frac{1}{2}Tr\left(2\prod\limits_{i\in S_{2}}\frac{g_{i}+I}{2}\varrho\right)\\ &\geq-1+1-\frac{1}{3K}\left(4\sum\limits_{k\in\Pi^{(1)}}Y_{k}+4Kv\right)-t+1-\frac{1}{3K}\left(3\sum\limits_{k\in\Pi^{(2)}}Y_{k}+3Kv\right)-t\\ &=1-\left(2+\frac{1}{3}\right)v-2t-\frac{1}{3K}\left(4\sum\limits_{k\in\Pi^{(1)}}Y_{k}+3\sum\limits_{k\in\Pi^{(2)}}Y_{k}\right)\\ &\geq 1-\left(2+\frac{1}{3}\right)v-2t-\frac{4}{3K}\left(\sum\limits_{k\in\Pi^{(1)}}Y_{k}+\sum\limits_{k\in\Pi^{(2)}}Y_{k}\right)\\ &=1-\left(2+\frac{1}{3}\right)v-2t-\frac{4}{3K}\left(K_{1}+K_{2}\right).\end{split} \tag{10}\]
with a probability
\[\begin{split}& P\geq\left[1-\exp\left(-\frac{8v^{2}K}{5\left(1+ \frac{1}{K}\right)}\right)\right]\left[1-\exp\left(-\frac{3v^{2}K}{2\left(1+ \frac{1}{K}\right)}\right)\right]\left[1-\exp\left(-6Kt^{2}\right)\right]^{2} \\ &\geq\left[1-\exp\left(-Kv^{2}\right)\right]^{2}\!\left[1-\exp \left(-6Kt^{2}\right)\right]^{2}\end{split}, \tag{11}\]
where the second inequality in Eq. 11 holds as long as \(K\geq 2\). Obviously \(\sum\limits_{k\in\Pi^{(1)}}Y_{k}=K_{1}\) and \(\sum\limits_{k\in\Pi^{(2)}}Y_{k}=K_{2}\) in Eq. 10. To make \(F=\left\langle G\right|\varrho\left|G\right\rangle\) equal to \(1-O\left(\frac{1}{n}\right)\), which is high enough, we need \(v=O\left(\frac{1}{n}\right)\), \(t=O\left(\frac{1}{n}\right)\), and \(\frac{4}{3K}\left(K_{1}+K_{2}\right)\leq\frac{1}{n}\), which together lead to \(F=1-O\left(\frac{1}{n}\right)\). Therefore, we set \(v=\frac{\sqrt{\lambda_{2}}}{n}\) and \(t=\frac{\sqrt{\lambda_{2}}}{\sqrt{6}n}\); then, considering the acceptance condition \(K_{1}+K_{2}\leq\frac{3K}{4n}\) in Algorithm 1, we have
\[F\geq 1-\frac{7}{3}\frac{\sqrt{\lambda_{2}}}{n}-2\frac{\sqrt{\lambda_{2}}}{ \sqrt{6}n}-\frac{1}{n}=1-\frac{\left(\frac{7}{3}+\frac{2}{\sqrt{6}}\right) \sqrt{\lambda_{2}}+1}{n}\geq 1-\frac{3.15\sqrt{\lambda_{2}}+1}{n} \tag{12}\]
with a probability
\[P\geq\left[1-\exp\left(-\frac{\lambda_{2}}{n^{2}}K\right)\right]^{4}\geq 1-4 \exp\left(-\frac{\lambda_{2}}{n^{2}}K\right)\geq 1-4\exp\left(-\frac{ \lambda_{2}}{n^{2}}n^{2}\log n\right). \tag{13}\]
To make the probability \(P=1-O\left(n^{-\lambda}\right)\) for a constant \(\lambda\), we set \(K=\left\lceil n^{2}\log n\right\rceil\), then
\[P\geq 1-4\exp\left(-\frac{\lambda_{2}}{n^{2}}n^{2}\log n\right)=1-4n^{-\lambda_{ 2}},\] (A14)
which is high enough. The condition for both of the above bounds on \(F\) and \(P\) to be positive is \(\log_{n}4\leq\lambda_{2}\leq\frac{\left(n-1\right)^{2}}{10}\), which requires \(n\geq 5\). When \(n\geq 5\), we have \(n^{2}>\frac{\left(n-1\right)^{2}}{10}\), thus \(\lambda_{2}<n^{2}\) and therefore \(v=\frac{\sqrt{\lambda_{2}}}{n}<1\). Combining this with the condition of case (1), we obtain \(n\geq 6\).
|
2307.16340 | Ion-beam-milled graphite nanoribbons as mesoscopic carbon-based
polarizers | We demonstrate optical reflectivity and Raman responses of graphite
microstructures as a function of light polarization when the incident light is
applied perpendicular to the material's stacking direction (c-axis). For this,
we employed novel graphite nanoribbons with edges polished through ion-beam
etching. In this unique configuration, a strong polarization dependence of the
D, G, and 2D Raman modes is observed. At the same time, polarized reflectivity
measurements demonstrate the potential of such a device as a carbon-based,
on-chip polarizer. We discuss the advantages of the proposed fabrication method
as opposed to the mechanical polishing of bulk crystals. | Marcin Muszyński, Igor Antoniazzi, Bruno Camargo | 2023-07-30T23:31:40Z | http://arxiv.org/abs/2307.16340v1 | # Ion-beam-milled graphite nanoribbons as mesoscopic carbon-based polarizers.
###### Abstract
We demonstrate optical reflectivity and Raman responses of graphite microstructures as a function of light polarization when the incident light is applied perpendicular to the material's stacking direction (c-axis). For this, we employed novel graphite nanoribbons with edges polished through ion-beam etching. In this unique configuration, a strong polarization dependence of the D, G and 2D Raman modes is observed. At the same time, polarized reflectivity measurements demonstrate the potential of such a device as a carbon-based, on-chip polarizer. We discuss advantages of the proposed fabrication method as opposed to the mechanical polishing of bulk crystals.
Graphite is a highly anisotropic carbon-based material composed of a stack of weakly-bonded graphene layers in a Bernal (ABAB) structure [1]. Usually encountered in nature as a highly-oriented crystal, this material has been on the forefront of technological advances since its discovery. Recently, it has found applications in electronics as few- and multi-layer graphene on tentative detectors [2; 3] and battery electrodes [4].
The crystallographic structure of graphite makes it naturally cleavable along the stacking direction, dubbed the c-axis. However, because single crystals have yet to be achieved, cutting in any other direction usually poses a more arduous task. For this reason, physical property measurements usually focus on in-plane electrical, magnetic and optical properties, with off-plane measurements often being strongly sample dependent (see, e.g., ref. [5] and the SI). Among other reasons, this is caused by stacking faults in the highly oriented structure (which both hinder c-axis transport and couple the dispersion of \(k_{z}\) to the plane [6]), and by defects along the edges of the sample. The latter are uncleavable and present irreproducible irregularities when cut.
These irregularities make the measurement of optical properties of graphite cross-sections particularly challenging. Mechanical polishing of the edge of the samples is bound to cause graphite and graphene flakes to bend (see Fig. 1) and cleave out [7], introducing an in-plane response to an otherwise out-of-plane measurement. This results in a limited number of literature reports on the subject.
Among the properties seldom addressed are in-plane light polarizability and Raman spectra. Although careful measurements performed in the 1980s and 1990s addressed such topics to a certain extent, polishing techniques used at the time resulted in large in-plane contributions to the measured data, tarnishing the results [7; 8; 9]. More recently, an attempt to bypass this issue was made by Tan et al. [10] by employing a pristine, mined natural graphite crystal. Although such an approach was intended to mitigate the effects of polishing the surface, the softness of graphite does not allow for an edge free of bent-over planes.
Here, we revisit this problem, focusing on Raman spectroscopy of graphite when the light is incident parallel to the graphene planes. However, instead of using mechanically polished bulk highly oriented pyrolytic graphite (HOPG) as in refs. [8; 9], we employ nanoribbons fabricated through ion-beam milling. This type of sample has previously been used to study the impact of stacking faults on the electrical transport properties of graphite [11]. However, such devices could also be used to probe other off-plane phenomena in HOPG, e.g., ionic channeling [12] and optical properties. This is because graphite samples fabricated this way are ribbons whose sides are covered by a thin (approx. 30 nm), yet transparent, layer of amorphous carbon, which minimizes the contributions arising from graphitic planes bent over the edges [11] (as occurs in mechanical polishing methods). Unlike previous attempts to measure off-plane optical properties of graphite, our approach also has the potential to permit the fabrication of optically transparent samples, in principle enabling one to probe the properties of both reflected and transmitted light through graphite ribbons.
The latter is important because, due to the ever-increasing demand for higher processing speeds at lower power consumption, the employment of photonic integrated circuits has become a necessity for the advancement of the semiconductor industry. Among the studied systems, graphene and the possibility of its utilization in valleytronics and spintronics have contributed to the increased interest in the interplay between light and electrical transport in mesoscale devices [13; 14; 15], in a bid to utilize chiral or valley-selected electrons as quantum-information-carrying agents. Proposals introduced in the last decade to miniaturize and integrate conventional electronics with optics involve the implementation of modulators, couplers and filters at a reduced scale, based on graphene [13], conventional semiconductors [16], or hybrids of these two materials [17]. However, such structures are sensitive to light polarization due to the structural birefringence of the waveguides utilized. Therefore, to properly employ such circuits, it is necessary to also introduce mesoscopic polarizers to filter out unwanted radiation components. Among the most common types of reflection polarizers utilized for such a task are Bragg gratings and photonic crystals, which are complex structures [18]. We present an alternative, using a device that can be etched from a monolithic block of graphite, thus being completely compatible with graphene-based circuits.
The samples used in this work consisted of thin graphite ribbons with approximate dimensions of 10 \(\mu\)m \(\times\) 300 nm (ab-direction) \(\times\) 5 \(\mu\)m (c-axis). They were prepared by ion-beam milling using a FEI dual beam microscope, following the procedure previously outlined in refs. [12; 19]. Two devices were fabricated with different milling currents. This yielded specimens coated with a thin layer of amorphous carbon, with thicknesses of 30 nm (Sample A) and 60 nm (Sample B), estimated with SRIM [20]. The graphite ribbons were then deposited atop a mirror-polished SiN\({}_{2}\) substrate, with graphite's c-axis aligned parallel to the SiN\({}_{2}\) surface (graphite planes were perpendicular to the substrate's surface). Afterwards, samples were optically characterized by means of polarization-sensitive Raman spectroscopy and by reflectivity measurements in the visible range. The incident light was always applied perpendicular to the graphite c-axis.
Polarization-sensitive Raman scattering (RS) experiments were performed in the backscattering geometry with 515 nm excitation, using a long-working-distance 50\(\times\) objective with numerical aperture 0.55. The beam spot size was approx. 1 \(\mu\)m, and the beam was aimed at the center of the sample. The scattered light was sent through a 0.75 m monochromator, and collected by a charge-coupled device (CCD) cooled by liquid nitrogen.
Reflectivity measurements were performed by illuminating the sample with a non-polarized white light source. A 50x microscope objective was used to focus and collect light from a spot with a diameter of approx. 50 \(\mu\)m. The spot was centered at the HOPG sample, but also probed a clean region on the substrate, which was later used as a reference. A linear polarizer was placed on a motorized rotation stage to detect the polarization of the collected light. The sample image was focused by an f = 100 mm lens on a CCD camera. All measurements were performed at room temperature.
Raman spectra of the graphite ribbon labelled as Sample A are presented in Fig. 2 for different values of \(\theta\), defined as the angle between the polarization of the incident light and the sample's c-axis. Results for E\(\perp\)c (\(\theta=90^{\circ}\), electric field aligned in-plane) revealed the presence of bands at 1366 cm\({}^{-1}\) and 1580 cm\({}^{-1}\), whose intensity diminished upon changing the light polarization towards the out-of-plane direction E\(\parallel\)c (\(\theta=0^{\circ}\)). This result confirms these modes as due to the in-plane displacement of carbon atoms [7]. Indeed, such lines have been reported by many authors [7; 8; 9; 10; 21], and correspond to the in-plane D and G modes of graphite, respectively. The former has been attributed to a defect-induced inter valley double resonance process, whereas the latter is associated with a "stretching" mode centered around k = 0 [21].
Superimposed on these spectra was a broad background, which did not show any dependence on the incident polarization, suggesting that it originates from the amorphous layer covering the sample. The positions of its maxima are characteristic of a carbonaceous material with a mix of sp\({}^{2}\) and sp\({}^{3}\) bonding [22].
Curiously, we report a previously unobserved feature at 1087 cm\({}^{-1}\). This sharp peak showed an isotropic (polarization-independent) response during the measurements. Upon closer inspection, its presence was also found in database Raman spectra of mineral natural graphite from different sources [23] - hailed by many as a material of superior quality [24; 25; 26; 27]. However, to date, no similar observations have been reported in in- and out-of-plane measurements of synthetic HOPG [8; 9; 10; 28]. We tentatively attribute this mode to the stretching vibrational modes of highly ordered H-terminated =C-C=C- chains along the edges of graphite, akin to 1D trans-polyacetylene [29]. In such a compound, the peak at 1087 cm\({}^{-1}\) is usually accompanied by a feature at 1470 cm\({}^{-1}\), which also coincides with the broad \(\theta\)-independent maximum superimposed on our G and D peaks. Its existence in our system can be justified in the context of hydrogen-passivated zigzag edges of graphite, which is usually grown from hydrogen-rich precursors and always contains a large content of H\({}_{2}\) molecules near the surface [30; 31]. We hypothesize that the ion-milling-induced decomposition of the sample surface also decomposes interlayer adsorbed H\({}_{2}\), similarly to what was reported for proton-irradiated graphite [30]. This results in passivated graphite edges below an amorphous carbon layer, as opposed to the plethora of different carbon-carbon orbital terminations expected in mechanically polished graphite.
We stress that the latter method is employed by virtually all reports in the literature to gain access to graphite's out-of-plane optical properties (see, e.g., refs. [8; 9; 10; 28]). Yet, the low stiffness and layered structure of the material make such a mechanical process challenging, invariably resulting in edge structures similar to those shown in Fig. 1, containing irregular and/or bent-over flakes that contribute to the measured signal of macroscopic specimens. Herein lies the fundamental difference between our samples and those found in the literature: the edges of the samples shown here were polished in a contact-free ion-milling process, eliminating the friction-induced foldings and exfoliations at their edges.
We demonstrate the behavior of our device as a small-scale on-chip polarizer. Reflectance measurements, presented in Fig. 3, show the intensity of the reflected light in our device as a function of the linear polarization angle
Figure 1: SEM image of the edges of a diamond-saw-cut bulk HOPG crystal in a) a region where the sample was broken, b) in a mechanically-polished region and c) in a ion-beam-polished ribbon. The arrows point the c-axis direction.
with respect to the sample's c-axis. The intensity of the reflected light in the configuration E\(\perp\)c is approx. 65% larger than for \(E||c\). The reflectivity spectra measured in the visible range for sample A for two orthogonal polarizations are presented as a degree of linear polarization in the SI (Fig. S5). The ratio between the reflected intensities in both polarizations, however, strongly depended on the thickness of the amorphous layer covering the sample. Data obtained for device B - in which a 60 nm amorphous layer was expected - presented a polarization ratio of approx. 42%. These results suggest that the amorphous carbon layer obtained during milling acts as a diffuser, whose efficiency can be selected by controlling its thickness during ribbon preparation.
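Assuming the usual definition of the degree of linear polarization, DoLP = (I\({}_{max}\) - I\({}_{min}\))/(I\({}_{max}\) + I\({}_{min}\)), the intensity contrasts quoted above translate into the rough values computed below; this is a back-of-the-envelope estimate of ours, not a value extracted from the measured spectra.

```python
def degree_of_linear_polarization(contrast):
    """DoLP for reflected intensities with I_perp = (1 + contrast) * I_par."""
    i_perp, i_par = 1.0 + contrast, 1.0
    return (i_perp - i_par) / (i_perp + i_par)

print("Sample A (65% contrast):", round(degree_of_linear_polarization(0.65), 3))  # ~0.25
print("Sample B (42% contrast):", round(degree_of_linear_polarization(0.42), 3))  # ~0.17
```

The value obtained for sample A is consistent with the degree of polarization of ca. 0.25 quoted below for the best sample.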
Indeed, Raman measurements performed on device B did not present as sharp G and D lines as in device A for E\(\perp\)c (see the SI). Such a result attests to a higher surface amorphization, as well as an enhanced scattering of light by the amorphous carbon layer coating the sample.
The polarization observed in reflectivity has its origins in the high anisotropy of the graphite conductivity tensor. Indeed, ellipsometry measurements dedicated to probing the ordinary and extraordinary dielectric constants of graphite did reveal a non-zero imaginary component of the extraordinary \(\varepsilon\), characterizing the material as a conventional metal for electric fields parallel to the planes (E\(\perp\)c), and as a semiconductor or semimetal otherwise [7]. We note, however, that experiments dedicated to probing such properties are few and far between, and might yield conflicting results [7; 32]. This occurs mainly because of the difficulty in controlling the roughness of surfaces parallel to graphite's c-axis through mechanical polishing. Our results suggest that such an issue might be partially addressed by employing the ion-beam-milled samples showcased here.
Although the devices present a small degree of polarization (ca. 0.25 for the best sample), it is conceivable that the quality of the polarizing devices presented in Fig. 3 can be improved upon by thinning the amorphous layer at the surface of graphite. Different approaches are currently available to tackle this issue, such as slow cleaning of the sample with low-intensity oxygen or argon plasma, as utilized, e.g., in ref. [24] for the electrical addressing of graphite along its stacking direction. The tracking of the optical properties of graphite as a function of the thickness of the amorphous coating will be discussed elsewhere.
Because the fundamental optical properties of pristine graphite ribbons explored here rely on its high electrical anisotropy, we envision our devices as components of miniaturized all-optical contact-less detectors responsive to any effects disturbing the stacking order on graphite.
Figure 3: Normalized reflected light intensity as a function of detection polarization for a region in the graphite sample (square symbols) and a region on the metal deposited atop the SiN\({}_{2}\) substrate (round symbols). Closed squares correspond to a graphite ribbon covered by a ca. 30 nm amorphous C layer (sample A), whereas empty squares correspond to the data of a sample covered by a ca. 60 nm amorphous C layer (sample B). The inset shows a picture of sample A, with the regions where the light intensity was probed indicated by squares.
Figure 2: Colinear Raman spectra obtained for different polarizations of the excitation light. The curves have been displaced vertically for clarity. The inset shows a cartoon of the measurement geometry; \(\theta=0^{\circ}\) corresponds to the excitation polarized along the sample’s c-axis. The main graphite modes, labelled G, D, and 2D, are extinguished as the excitation is polarized along the sample’s c-axis. The broad background and the peak at 1087 cm\({}^{-1}\) (indicated by an arrow) are weakly-dependent on \(\theta\).
Among them is the sensing of traditional graphite interstitials such as Li and H [33, 34]. A graphite-based optical device milled from a monolithic block of graphite would, thus, have the potential to act as a fully-contained optical chemical sensor to probe, for example, charge in Li-based batteries or H diffusion in hydrogen cells. Because our devices are conducting, they can also act as electrodes for electric-field-tunable carbon-based polarizers, such as azo-group liquid-crystal devices [35, 36], effectively enabling an all-carbon method to locally control the circular polarization of light. Such a technology is fundamental for the implementation of valleytronics in integrated devices, whose carrier population selection currently relies on optical pumping with circularly polarized light [37, 38, 15].
In summary, we demonstrate the anisotropic Raman and reflectivity properties of thin graphite ribbons prepared through ionic etching, in the unusual configuration when the incident light is perpendicular to the sample c-axis. Results revealed that our devices act as small-scale polarizers for the reflected light. This is caused by the better coupling between electric fields and electronic motion when light is polarized along the planes (E\(\perp\)c), as opposed to parallel to the c-axis direction (E\(\parallel\)c). This result, while seemingly unsurprising, demonstrates the applicability of small graphite structures for on-chip optical all-carbon-based devices, resulting in small components chemically identical to graphene. The samples considered here had a thickness of approx. 300 nm. However, state-of-the-art ion-beam-milled HOPG lamellae prepared for transmission electron microscopy measurements (e.g., as employed in ref. [39]) are currently achievable with thicknesses below 100 nm, while still maintaining graphite's stacking structure intact. Recent advancements in single-crystal graphite production, as reported in ref. [40], also hint at the possibility of achieving similar devices through cleaving along high symmetry directions other than graphite's c-axis. In these devices, we expect measurements of transmitted light to be possible, with a similar outcome as the one reported here for the reflected light. We leave such an investigation for a future work.
## Supplementary Material
The supplementary information displays additional spectroscopic data, results for sample B, and compares samples of different origins.
## Acknowledgements
We thank Nilesh Dalla (FUW) for assistance with SEM imaging and Jacek Szczytko (FUW) for helpful discussions. We gracefully acknowledge Adam Babinski and Maciej Molas for providing access to the facilities of the Laboratory of Optical Spectroscopy (LaSSo) at the Faculty of Physics, University of Warsaw. This work has been supported by the National Science Centre, Poland (grants no. 2017/27/B/ST3/00205, 2019/35/B/ST3/04147 and 2018/31/B/ST3/02111).
|
2307.00522 | LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance | Recent large-scale text-guided diffusion models provide powerful
image-generation capabilities. Currently, a significant effort is given to
enable the modification of these images using text only as means to offer
intuitive and versatile editing. However, editing proves to be difficult for
these generative models due to the inherent nature of editing techniques, which
involves preserving certain content from the original image. Conversely, in
text-based models, even minor modifications to the text prompt frequently
result in an entirely distinct result, making attaining one-shot generation
that accurately corresponds to the users intent exceedingly challenging. In
addition, to edit a real image using these state-of-the-art tools, one must
first invert the image into the pre-trained models domain - adding another
factor affecting the edit quality, as well as latency. In this exploratory
report, we propose LEDITS - a combined lightweight approach for real-image
editing, incorporating the Edit Friendly DDPM inversion technique with Semantic
Guidance, thus extending Semantic Guidance to real image editing, while
harnessing the editing capabilities of DDPM inversion as well. This approach
achieves versatile edits, both subtle and extensive as well as alterations in
composition and style, while requiring no optimization nor extensions to the
architecture. | Linoy Tsaban, Apolinário Passos | 2023-07-02T09:11:09Z | http://arxiv.org/abs/2307.00522v1 | # LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance
###### Abstract
Recent large-scale text-guided diffusion models provide powerful image generation capabilities. Currently, a significant effort is given to enable the modification of these images using text only as means to offer intuitive and versatile editing. However, editing proves to be difficult for these generative models due to the inherent nature of editing techniques, which involves preserving certain content from the original image. Conversely, in text-based models, even minor modifications to the text prompt frequently result in an entirely distinct result, making attaining one-shot generation that accurately corresponds to the user's intent exceedingly challenging. In addition, to edit a real image using these state-of-the-art tools, one must first invert the image into the pre-trained model's domain - adding another factor affecting the edit quality, as well as latency. In this exploratory report, we propose LEDITS - a combined lightweight approach for real-image editing, incorporating the Edit
Friendly DDPM inversion technique with Semantic Guidance, thus extending Semantic Guidance to real image editing, while harnessing the editing capabilities of DDPM inversion as well. This approach achieves versatile edits, both subtle and extensive as well as alterations in composition and style, while requiring no optimization nor extensions to the architecture. Code and examples are available on the project's webpage.
## 1 Introduction
The exceptional realism and diversity of image synthesis using text-guided diffusion models have garnered significant attention, leading to a surge in interest. The advent of large-scale models [1; 2; 3; 4; 5; 6] has sparked the imaginations of countless users, granting unprecedented creative freedom in generating images. Consequently, ongoing research endeavors have emerged, focusing on exploring ways to utilize these powerful models for image editing. Recent developments in intuitive text-based editing showcased the ability of diffusion based methods to manipulate images using text alone [7; 8; 9; 10; 11; 12; 13].
In a recent work by Brack et al. [7], the concept of semantic guidance (SEGA) for diffusion models was introduced. SEGA requires no external guidance, is calculated during the existing generation process, and was demonstrated to have sophisticated image composition and editing capabilities. The concept vectors identified with SEGA were demonstrated to be robust and isolated, to combine arbitrarily, and to scale monotonically. Additional studies explored alternative methods of engaging with image generation that are rooted in semantic understanding, such as Prompt-to-Prompt [8], which leverages the semantic information of the model's cross-attention layers that associate pixels with tokens from the text prompt. While operations on the cross-attention maps enable various changes to the generated image, SEGA does not require token-based conditioning and allows for combinations of multiple semantic changes.
Text-guided editing of a real image with state-of-the-art tools requires inverting the given image, which poses a significant challenge in leveraging them for real images. This requires finding a sequence of noise vectors that once used as input for a diffusion process, would produce the input image. The vast majority of diffusion-based editing works use the denoising diffusion implicit model (DDIM) scheme [8; 9; 11; 12; 14; 15], which is a deterministic mapping from a single noise map to a generated image.
In the work of Huberman et al. [16], an inversion method for the denoising diffusion probabilistic model (DDPM) scheme was proposed. They suggest a new way to compute noise maps involved in the diffusion generation process of the DDPM scheme, so that they behave differently than the ones used in regular DDPM sampling: they are correlated across timesteps and have a higher variance. Edit Friendly DDPM inversion was shown to achieve state-of-the-art results on text-based editing tasks (either by itself or in combination with other editing methods) and can generate diverse results for each input image and text, contrary to DDIM inversion-based methods.
In this overview we aim to casually explore the combination and integration of the DDPM inversion and SEGA techniques, which we refer to as LEDITS. LEDITS consists of a simple modification to the semantically guided diffusion generation process. This modification extends the SEGA technique to real images as well as introduces a combined editing approach that makes use of the editing capabilities of both methods simultaneously, showing competitive qualitative results with state-of-the-art methods.
## 2 Related Work
### Edit friendly DDPM inversion
A significant challenge of diffusion-based methods for image editing and manipulation is the extension to real images, which requires inverting the generation process. In particular, inversion of the DDPM sampling scheme [1] posed a major challenge that was recently addressed by Huberman et al. [16]. In their work, they suggest an alternative inversion that consists of a novel way to compute the \(T+1\) noise maps involved in the diffusion generation process of the DDPM scheme, so that they are better suited for editing.
In the DDPM sampling scheme, the reverse diffusion process starts from a random noise vector \(x_{T}\sim N(0,\mathcal{I})\) and iteratively denoises it using
\[x_{t-1}=\hat{\mu_{t}}(x_{t})+\sigma_{t}z_{t}\hskip 28.452756ptt=T,...,1 \tag{1}\]
where \(z_{t}\) are iid standard normal vectors, and
\[\hat{\mu_{t}}(x_{t})=\sqrt{\bar{\alpha}_{t-1}}(x_{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon_{\theta_{t}})/\sqrt{\bar{\alpha}_{t}}+\sqrt{1-\bar{\alpha}_{t-1}-\sigma_{t}^{2}}\epsilon_{\theta_{t}} \tag{2}\]
where \(\epsilon_{\theta_{t}}\) is the neural network noise estimate of \(x_{t}\), and \(\sigma_{t}=\eta\beta_{t}(1-\bar{\alpha}_{t-1})/(1-\bar{\alpha}_{t})\) where \(\beta_{t}\) stands for a variance schedule and \(\eta\in[0,1]\) with \(\eta=1\) corresponding to the original DDPM work. The edit friendly DDPM inversion method constructs the sequence \(x_{1},..,x_{T}\) such that structures within the image \(x_{0}\) are more strongly "imprinted" into the noise maps \(z_{1},...,z_{T}\) that are extracted by isolating \(z_{t}\) from eq.1.
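A minimal code sketch of Eqs. 1-2 may help fix notation. It is not the authors' implementation (their official code is referenced in Sec. 7.1); the argument names are ours, and the scheduler quantities \(\bar{\alpha}_{t}\), \(\bar{\alpha}_{t-1}\) and \(\sigma_{t}\) are assumed to be provided.

```python
def mu_hat(x_t, eps, alpha_bar_t, alpha_bar_prev, sigma_t):
    """Posterior mean of Eq. 2, given the noise estimate eps at step t."""
    x0_pred = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
    return alpha_bar_prev ** 0.5 * x0_pred + (1 - alpha_bar_prev - sigma_t ** 2) ** 0.5 * eps

def extract_noise_map(x_prev, x_t, eps, alpha_bar_t, alpha_bar_prev, sigma_t):
    """Isolate z_t from Eq. 1: z_t = (x_{t-1} - mu_hat(x_t)) / sigma_t."""
    return (x_prev - mu_hat(x_t, eps, alpha_bar_t, alpha_bar_prev, sigma_t)) / sigma_t
```

Only elementwise arithmetic is used, so the functions work equally on NumPy or PyTorch arrays.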
### Semantic Guidance
The concept of Semantic Guidance [7] was introduced to enhance fine grained control over the generation process of text guided diffusion models. SEGA extends principles introduced in classifier-free guidance by exclusively interacting with the concepts already present in the model's latent space. The calculation takes place within the ongoing diffusion iteration and is designed to impact the diffusion process across multiple directions. More specifically, SEGA uses multiple textual descriptions \(e_{i}\), representing the given target concepts of the generated image, in addition to the text prompt p.
## 3 LEDITS - DDPM Inversion X SEGA
We propose a straightforward integration that consists of a simple modification to the SEGA scheme of the diffusion denoising process. This modification allows the flexibility of editing with both methods while still maintaining complete control over the editing effect of each component. First, we apply DDPM inversion on the input image to estimate the latent code associated with it. To apply the editing operations, we perform the denoising loop such that for each timestep \(t\), we repeat the logic used in SEGA but with the DDPM inversion scheme, using the pre-computed noise vectors. More specifically, we start the denoising process with \(x_{T}\) computed with DDPM inversion. Let \(\epsilon_{\theta_{t}}\) be the diffusion model's (DM) noise estimate with semantic guidance (following the SEGA logic) at timestep \(t\). Then we update the latents according to eq. 1 such that
\[x_{t-1}=\hat{\mu_{t}}(x_{t};\epsilon_{\theta_{t}})+\sigma_{t}z_{t}\]
Figure 2: **LEDITS overview.** Top: inversion of the input image. We first apply DDPM inversion on the original image to obtain the inverted latents and corresponding noise maps. Bottom: We use the inverted latents to drive the reverse diffusion process with semantic guidance. In each denoising step we compute the noise estimate according to the SEGA logic and compute the updated latents according to the DDPM scheme, using pre-computed noise maps.
where \(z_{t}\) is the corresponding noise map, obtained from the inversion process. A pseudo-code of our method is summarized in Alg. 1. A general overview is provided in Fig. 2.
```
0: Input image \(I\), target prompt \(p_{tar}\) and edit concepts \(e_{1},...,e_{k}\)
0: Output image \(\tilde{I}\)
1: Compute the inverted latent and noise maps \(x_{T},z_{1},...,z_{T}\) using DDPM inversion over I;
2:\(c_{p_{tar}},c_{e_{1}},...,c_{e_{k}}\gets DM.encode(p_{tar},e_{1},...,e_{k})\)
3:for\(t=T,...,1\)do
4:\(\epsilon_{\theta_{t}}=DM.predict-noise(x_{t},c_{p_{tar}},c_{e_{1}},...,c_{ e_{k}})\)\(\triangleright\) update latents
5:\(x_{t-1}\leftarrow\hat{\mu}_{t}(x_{t};\epsilon_{\theta_{t}})+\sigma_{t}z_{t}\)
6:endfor
7:\(\tilde{I}\gets DM.decode(x_{0})\)return\(\tilde{I}\)
```
**Algorithm 1** LEDIT
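The loop below sketches lines 3-6 of Alg. 1 in the same spirit; it is a schematic illustration rather than the released implementation. `predict_noise_sega` is a placeholder for the SEGA-guided noise estimate \(\epsilon_{\theta_{t}}\) (not an actual diffusers API call), `mu_hat` is the helper from the sketch in Sec. 2.1, and the lists are assumed to be indexed by timestep with \(\bar{\alpha}_{0}=1\).

```python
def ledits_denoise(x_T, noise_maps, alpha_bars, sigmas, predict_noise_sega):
    """Reverse process of Alg. 1 using pre-computed edit-friendly noise maps.
    noise_maps[t], alpha_bars[t], sigmas[t] are indexed by t = 1..T
    (index 0 of noise_maps/sigmas is unused; alpha_bars[0] = 1)."""
    x_t = x_T
    T = len(noise_maps) - 1
    for t in range(T, 0, -1):
        eps = predict_noise_sega(x_t, t)  # SEGA-guided noise estimate (Alg. 1, line 4)
        x_t = mu_hat(x_t, eps, alpha_bars[t], alpha_bars[t - 1], sigmas[t]) \
              + sigmas[t] * noise_maps[t]  # DDPM update with fixed z_t (Alg. 1, line 5)
    return x_t  # x_0, to be decoded into the edited image
```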
## 4 Experiments
We explored two editing workflows. In the first, DDPM inversion is used purely for inversion (i.e., target prompt = ""), such that a perfect reconstruction of the original image is achieved, and editing is done by performing semantic guidance with SEGA edit concepts. In the second, two editing operations are performed simultaneously by choosing a target prompt that reflects a desired output, in addition to semantic guidance with SEGA edit concepts.
Figure 3: **Image editing with LEDITS.** LEDITS extends fine-grained control over edit operations and introduces flexibility and versatility. We show images edited purely with DDPM Inversion (fourth column from the right) and images edited with LEDITS, using both methods simultaneously (three leftmost and rightmost columns) - these images were edited by using the described target prompt (in black) in addition to SEGA concepts (stated in blue). SEGA semantic vectors maintain their monotonically scaling property when used in LEDITS - the gradual effect of increasing/decreasing the strength of SEGA concepts can be observed from the third column on the right to the rightmost column, and from the third column to the left to the leftmost column.
We observe that both approaches add diversity and versatility to the pure DDPM inversion outputs (Figures 4 and 5), and extend the amount of control over edit operations. In addition, our experiments indicate that SEGA guidance vectors generally maintain their properties of robustness and monotonicity, as can be seen in Figures 1 and 3. Our qualitative experiments show competitive results with state-of-the-art methods and demonstrate the following properties: **fidelity vs. creativity** - The combined approach adds another layer of flexibility in tuning the effect of the desired edit, balancing between preserving the original image semantics and applying creative edits. **flexibility and versatility** - adding SEGA editing concepts on top of the DDPM edit (reflected in the target prompt) maintains the quality of the DDPM edit (Fig. 1, 3). **Complementing capabilities** - The combined control can compensate for the limitations of one approach or the other in various cases. In Fig. 5 we explore the effect of the skip-steps and target guidance scale (the strength parameter of the classifier-free guidance scale) parameters on the edited output, when using solely DDPM inversion for the editing operation. In comparison, we also examine the effect of SEGA concepts with increasing edit guidance scales when editing solely with SEGA (and using DDPM for inversion). We observe that the pure DDPM inversion edited outputs and pure SEGA edited outputs range differently on the scale of fidelity to the source image and compliance with the target prompt.
In addition, given the straightforward integration of the two methods, we maintain the performance advantages of the two techniques, thus making this overall approach lightweight.
Figure 4: **Comparisons.** We show results for editing real images using pure DDPM inversions, DDPM inversion with prompt-to-prompt and LEDITS respectively. Results shown here were obtained with the first editing workflow, using DDPM purely for inversion and SEGA for editing. All images were generated using the same seed.
## 5 Conclusion
In this report, we explored the combination of the DDPM inversion technique with semantic guidance and introduced LEDITS. We show that this efficient and lightweight approach spans a wide range of editing capabilities and extends the level of fine-grained control users have over the effect of editing operations. Our results indicate LEDITS generally maintains the individual strengths of each method, including SEGA properties such as robustness and monotonicity. Our qualitative experiments indicate the two techniques can be used simultaneously for independent editing operations, leading to more diverse outputs without harming the fidelity to the semantics of the original image and compliance with the editing prompts.
## 6 Limitations
Given the casual and exploratory nature of this report, we leave quantitative evaluations for future work, considering them outside the scope of this report. The purpose of this report was merely to explore and suggest an intuitive editing workflow for real images, demonstrate its qualitative abilities, and potentially drive further work along this path.
## 7 Methods
### Implementation
The implementation of our approach builds on the Stable Diffusion and Semantic Stable Diffusion pipelines from the HuggingFace diffusers library. For all experiments and evaluations we used the StableDiffusion-v-1-5 checkpoint.
For the DDPM Inversion implementation, we used the official implementation at - [https://github.com/DDPM-inversion](https://github.com/DDPM-inversion). Our implementation is available on the project's webpage.
Figure 5: **Parameter effect in DDPM inversion vs. LEDITS.** We show the effect of the parameters skip steps and target guidance scale on the output image when using pure DDPM inversion (top panel) compared to the effect of the edit concepts guidance scales when using LEDITS.
### Experiments
All images used for our analysis were downloaded from: [https://www.pexels.com/](https://www.pexels.com/).
In all experiments, we configured all methods to use 100 forward and backward steps. Table 1 summarizes the hyper-parameters we used for all methods to produce the results shown in Fig. 4. DDPM and P2P hyper-parameters used for Fig. 4 were set to identical values to those used in [16] for quantitative assessments.
|
2302.05284 | A Systematic Literature Review of Human-Centered, Ethical, and
Responsible AI | As Artificial Intelligence (AI) continues to advance rapidly, it becomes
increasingly important to consider AI's ethical and societal implications. In
this paper, we present a bottom-up mapping of the current state of research at
the intersection of Human-Centered AI, Ethical, and Responsible AI (HCER-AI) by
thematically reviewing and analyzing 164 research papers from leading
conferences in ethical, social, and human factors of AI: AIES, CHI, CSCW, and
FAccT. The ongoing research in HCER-AI places emphasis on governance, fairness,
and explainability. These conferences, however, concentrate on specific themes
rather than encompassing all aspects. While AIES has fewer papers on HCER-AI,
it emphasizes governance and rarely publishes papers about privacy, security,
and human flourishing. FAccT publishes more on governance and lacks papers on
privacy, security, and human flourishing. CHI and CSCW, as more established
conferences, have a broader research portfolio. We find that the current
emphasis on governance and fairness in AI research may not adequately address
the potential unforeseen and unknown implications of AI. Therefore, we
recommend that future research should expand its scope and diversify resources
to prepare for these potential consequences. This could involve exploring
additional areas such as privacy, security, human flourishing, and
explainability. | Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Michael Muller | 2023-02-10T14:47:33Z | http://arxiv.org/abs/2302.05284v3 | # Toward Human-Centered Responsible Artificial Intelligence:
###### Abstract.
As Artificial Intelligence (AI) continues to advance rapidly, it becomes increasingly important to consider AI's ethical and societal implications. In this paper, we present a bottom-up mapping of the current state of research in Human-Centered Responsible AI (HCR-AI) by thematically reviewing and analyzing 27 CHI research papers and 19 toolkits from industry and academia. Our results show that the current research in HCR-AI places a heavy emphasis on explainability, fairness, privacy, and security. We also found that there is space to research accountability in AI and build usable tools for non-experts to audit AI. While the CHI community has started to champion the well-being of individuals directly or indirectly impacted by AI, more research and toolkits are still required to address the long-term effects of AI on individuals, societies, and natural resources (human flourishing and sustainability).
human-centered AI, responsible AI, AI ethics, literature review +
To map the academic state of the art, we searched the ACM Digital Library for papers containing the keywords "human-centered AI" or "responsible AI" in the proceedings of CHI, including only full research papers (excluding other materials such as abstracts, panels, and short papers).1 One paper ((26)) did not have any of the specified keywords within the text (possibly due to an error on the ACM's search tool). Therefore, we removed it from our final set. We also removed 6 papers because they did not discuss responsible AI explicitly or extensively (not surprisingly, as one of the search terms was "human-centered AI") but instead designed or evaluated an AI system using a human-centered approach (27; 42; 84; 87; 88; 92). Our review and findings are based on the remaining **27** papers. The first paper appeared in 2020, and the number of publications has grown rapidly in 2021 and 2022 (count of publications: 2020: 2, 2021: 8, 2022: 17).
Footnote 1: We picked these keywords because human-centered AI is widely used in the CHI community (human-centered AI is a book title by Ben Shneiderman (71)) and industry uses responsible AI commonly (7; 32; 55; 60; 64; 76; 79).
### Toolkits From Industry & Academia
To study toolkits in HCR-AI, we used the LF AI and Data landscape website because it maintains a regularly updated list of open-source AI projects and gives an overview of the accelerated growth of AI toolkits (48). We focused on the _Trusted and Responsible AI_ category as it was the most relevant to our RQs. In December 2022, we explored all **19** toolkits under this category, which spans three subcategories: _Explainability_, _Adversarial_, and _Bias & Fairness_. While 14 toolkits came from industry, 5 were rooted in academic institutes ([1; 80; 81; 82; 73]). We covered all toolkits regardless of their origin in industry or academia. Figure 1 shows a summary of these toolkits from the LF AI and Data landscape website.
### Limitations
Our search of the academic and industry literature is not exhaustive. However, covering CHI, a prominent venue for academic research in human-computer interaction, gave us insights into the academic state of the art in HCR-AI. Moreover, since we intended to provide insights and research directions to the CHI community, we narrowed our review to CHI proceedings. The toolkit review likewise drew on a well-known collection of toolkits, giving us coverage of the significant toolkits in HCR-AI. Future work may build on ours by expanding to other academic venues and toolkit collections. We also selected papers by keyword rather than by our own judgment of whether a paper belongs to HCR-AI; this way, the collection reflects the authors' own framing of their work. Finally, we aim to build an easy-to-use tool to help HCR-AI researchers add new literature and explore it using visualizations.
## 3. Research Methods, Themes, and Future Directions in HCR-AI
We find that HCR-AI research focuses on explainability, fairness, privacy, security, and human flourishing (Figure 2). In this section, we briefly discuss research methods and themes, with the number of resources in each theme indicated next to the theme title (#N).
### Research Methods in HCR-AI
Of the 27 reviewed CHI papers, 20 were qualitative (e.g., user studies for system design and evaluation, interviews, and workshops), 7 were quantitative (e.g., surveys, log analysis, and measuring a system's performance with a metric), 3 were reviews (e.g., of literature, policies, and guidelines), and 2 were essays. Of the 23 papers with human participants, 1 did not report demographics, and the rest did not report a consistent set of demographics. The typical demographics were age, location, job title, education, gender, ethnicity, and years of experience in the topic of interest for the research. Of those that reported the location of the research or participants, 9 mentioned the USA, 1 mentioned France, and 6 listed other countries as well (e.g., India, Thailand, and South Africa). Table 1 in the Appendix summarizes all reviewed papers.
All 19 open-source toolkits come as Python libraries that can be easily installed using pip or conda (IBM's AI Fairness 360 Toolkit (37) is the only one that also provides an R implementation). Some toolkits, such as Google's Lucid (31), additionally come as interactive notebooks through which one can start visualizing a neural network's output without any setup.
Figure 1. An overview of toolkits related to Trusted and Responsible AI. Explainability is covered in Section 3.2, Adversarial is covered in Sections 3.5 & 3.4, and Bias & Fairness is covered in Section 3.3. Exported PDF was generated by the LF AI and Data landscape website in December 2022 (48).

Figure 2. Counts of CHI papers and toolkits per theme.

Future researchers should consider reporting participant demographics to help with replicability and to give an understanding of how generalizable the results are. As discussed above, there is a lack of consistency when reporting demographics. The community's own findings emphasize that building an AI that respects its users is highly dependent on the target population (see Section 3.3 for details). Therefore, we see a need to provide a framework for reporting demographics for future researchers if they want to make their research comparable and replicable, which echoes the findings of Sambasivan et al. (Sambasivan et al., 2019): "_the need for incentivizing data excellence in which academic papers should evolve to offer data documentation, provenance, and ethics as mandatory disclosure._" (Sambasivan et al., 2019) Furthermore, considering that CHI has historically focused on Western societies, publishing this information will help with research transparency.2 It may also create a push for extending the research to other populations. Especially with the current trend and growth in HCR-AI research, we believe that more people will be influenced by the research done in this community. Thus, additional care is required to make the research inclusive.
Footnote 2: Based on the data from CHI papers published between 2016 to 2020, _“73% of CHI study findings are based on Western participant samples, representing less than 12% of the world’s population._” (Sambasivan et al., 2019)
### Research Theme: Explainability (#22)
If an AI works like a black box,3 users may have trouble trusting the outputs. Therefore, a recent movement in AI tries to make those decisions understandable to gain users' trust (Sambasivan et al., 2019). In human resource management, the decision made by AI can be further explained to gain employees' trust and help them understand why a decision was made with clear evaluation criteria (to make them feel that the decision was fair) (Sambasivan et al., 2019; Sambasivan et al., 2019). Paradoxically, explainability can sometimes conflict with privacy and security (Sundhi et al., 2019; Sambasivan et al., 2019; Sambasivan et al., 2019), and we further discuss this trade-off in Sections 3.4 and 3.5.
Footnote 3: Given input data, a decision is generated without any explanation or interpretation that would help a human understand why and how the decision was made.
Explanations, however, need to be human-centered and usable (Sambasivan et al., 2019; Sambasivan et al., 2019; Sambasivan et al., 2019; Sambasivan et al., 2019). For example, when explanations were used in a human-AI team, they did not help improve the accuracy of the final decision (Sambasivan et al., 2019), which emphasizes that not all explanations are helpful. To improve explanations, AI developers could benefit from asking _how_ and _why not_ questions that surface users' expectations and needs (Sambasivan et al., 2019). Another way of improving explanations is to add graphics and control sliders that help users understand the decision made by an AI and feel more in control (Sundhi et al., 2019). Another use case for explanations is supporting AI literacy for public and educational purposes, which can eventually lead to broader adoption and understanding of AI implications and harms (e.g., teaching kids about feature selection in machine learning using visual explanations) (Sambasivan et al., 2019).
Several toolkits have been developed to assist AI developers with the explainability of AI. For example, frameworks such as LIME (Sambasivan et al., 2019; Sambasivan et al., 2019) or SHapley Additive exPlanations (SHAP) (Sambasivan et al., 2019; Sambasivan et al., 2019) are widely used to explain a black box's model outputs (i.e., explain the prediction of an instance or observation by computing the contribution of each feature to the prediction). IBM's AI Explainability 360 provides metrics that serve as quantitative proxies for the quality of explanations and teaches developers and practitioners how to ensure AI explainability (Sambasivan et al., 2019). Microsoft's InterpretML (Sambasivan et al., 2019) is an open-source package for ensuring the model's interpretability. Google's Lucid (Sundhi et al., 2019) provides a collection of infrastructure on neural networks' interpretability. Oracle's Skater (Sambasivan et al., 2019) toolkit provides a unified framework to enable model interpretation for all forms of models. TreeInterpreter (Chen et al., 2019) is a Python package for interpreting scikit-learn's decision tree and random forest predictions. Alibi (Sambasivan et al., 2019) and ELI5 (Sambasivan et al., 2019) are open-source Python packages for model inspection, debugging, and interpretation.
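To make the discussion concrete, the following minimal sketch shows how one of the explainability toolkits mentioned above (SHAP) can be used to attribute a prediction to individual features; the dataset and model here are illustrative placeholders, not examples from the reviewed papers.

```python
# Minimal SHAP sketch: explain a scikit-learn classifier's predictions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value contributions of each feature to a prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Depending on the SHAP version, tree models may return one array per class.
values = shap_values[1] if isinstance(shap_values, list) else shap_values
print(values[0])  # per-feature contributions for the first instance
```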
Future research is still needed to improve the explainability of AI's inner workings, decisions, and outputs, as there is some criticism about the value of explanations such as Shapley values for humans (Sambasivan et al., 2019). Toolkits for communicating datasets also need to keep track of changes over time, from raw data to the final dataset used in the model and changes that occur post-deployment, alleviating the forgetting of data work--as Muller and Strohmayer (Muller and Strohmayer, 2019) put it: "_forgetting in data science is when each action tends to push previous actions into the infrastructure, where the action itself and its consequence are easily forgotten._" (Muller and Strohmayer, 2019) One way for documenting datasets is through a standardized process, for example, using datasheets (Sam
when AI decisions were combined with a human decision maker, the number of racially biased decisions was reduced compared to a situation in which AI was the only decision maker. Hence, a human in the loop may improve decisions by taking an overarching view and compensating for AI's limitations in decision-making (Han et al., 2017).
Moving on to the toolkits, the common denominator of the studied toolkits is providing metrics to surface data and algorithmic biases, ensuring fairness in AI. IBM's AI Fairness 360 (Miller et al., 2016) and Microsoft's Fairlearn (Miller et al., 2016) provide metrics to compare subgroups of datasets and identify cases that a model negatively impacts. Audit AI is a Python library for detecting demographic differences in the output of AI models, helping to mitigate the effects of discriminatory patterns in training data (Miller et al., 2016). DeepLIFT, another Python package, implements a method for decomposing the output prediction of a neural network based on a specific input (Zhu et al., 2017; Li et al., 2018). To show how such toolkits can be applied to real-world datasets, the open-source toolkit Aequitas was built (Bahdan et al., 2017)--using Aequitas with the COMPAS dataset shows that African American offenders have higher false positive rates of reconviction than Caucasian offenders.4
Footnote 4: Used by courts in the United States to make pretrial detention and release decisions—_There’s software used across the country [United States] to predict future criminals. And it’s biased against blacks_.” (Miller et al., 2016)
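As an illustration of the kind of metric these fairness toolkits expose, the sketch below computes a demographic-parity gap with Fairlearn; the labels, predictions, and sensitive attribute are toy placeholders.

```python
# Minimal Fairlearn sketch: quantify a group-fairness gap in model predictions.
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group = ["A", "A", "A", "B", "B", "B", "B", "A"]  # hypothetical sensitive attribute

# Difference in selection rates between groups (0 indicates parity).
gap = demographic_parity_difference(y_true, y_pred, sensitive_features=group)

# Per-group view of any metric, e.g. recall (related to equal opportunity).
frame = MetricFrame(metrics=recall_score, y_true=y_true, y_pred=y_pred,
                    sensitive_features=group)
print(gap)
print(frame.by_group)
```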
**Future research** could study fairness from multiple perspectives (e.g., culture, gender, ethnicity, and age) and compare expectations or understandings across various groups. The studied toolkits provide metrics to evaluate fairness, primarily for AI developers. Researching how to present those metrics to non-expert users who are impacted by the system could be a direction to pursue: how can the fairness of an AI be presented to a public with minimal knowledge of fairness (e.g., is _statistical parity_ the most understandable way to convey an AI's fairness to a senior adult or a child?). Such tools can help non-expert users understand AI decisions and make informed decisions, as interactions between people and machines typically range between two extremes: humans tend to either under-rely on an algorithm by ignoring its recommendations (algorithmic aversion) or over-rely on it by blindly accepting any recommendation (automation bias) (Krause et al., 2017). While toolkits for audit are focused on fairness, we believe there is space to explore practices and build toolkits for audit and accountability in other aspects like privacy and security. Another aspect to consider is that accountability could provide explainability over decisions but also make people feel surveilled (Krause et al., 2017); therefore, finding a balance between accountability, privacy, and fairness is a challenge that needs further exploration.
### Research Theme: Privacy (#12)
Several papers touched on AI's privacy ramifications (Krause et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018). For example, the privacy ramifications of using human resource management tools in companies can make employees uncomfortable. These systems may be as simple as controlling a gate and storing that information for performance measures. However, collecting more data to improve the model's accuracy can result in a sense of surveillance, even though, from the viewpoint of the people deploying the system, such data collection may be seen as a way of providing a fair assessment of the employee's performance (Li et al., 2018; Li et al., 2018). Designers and engineers acknowledge this tension between privacy and collecting more data or sharing datasets between different groups or teams to explore new opportunities. Nevertheless, when a company is a large organization (especially when a client's data is involved), it is not easy to share data because data ownership can be vague and complicated (Li et al., 2018; Li et al., 2018).
A few papers discuss the trade-off between privacy and explainability (Krause et al., 2017; Li et al., 2018; Li et al., 2018). Providing additional information about the model for explainability may compromise privacy, or vice versa. For example, in a study with people's WiFi data, participants were initially sensitive about sharing their WiFi data, but showing visualizations of their data usage made them feel more comfortable sharing it because the visualizations created a sense of trust (Li et al., 2018). On the other hand, privacy may become a barrier to providing explanations in image classification systems. For example, when images are obfuscated for privacy reasons, providing explanations for the classification may reveal the identity of the people in the images (Li et al., 2018).
The standard practice in the studied toolkits is the so-called _Privacy by Design_(Han et al., 2017; Li et al., 2018), which refers to data privacy considerations throughout the AI life cycle that makes AI compliant with laws, regulations, and standards (Li et al., 2018).5 For example, IBM's AI Privacy 360 toolkit helps developers assess privacy risks and mitigate potential privacy concerns (Li et al., 2018). The toolkit includes data anonymization (i.e., ensuring a model is trained on anonymized data) and data minimization (i.e., ensuring that data collection is only limited to what is directly relevant and necessary to train a model) modules for assessing privacy risks and ensuring privacy compliance.
Footnote 5: For example, according to the European General Data Protection Regulation’s Article 25, data controllers must “_implement appropriate technical and organizational measure_” during the design and implementation stage of data processing “_to protect the rights of data subjects_.” (Li et al., 2018)
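As a toy illustration of the data minimization and anonymization ideas behind such privacy toolkits (this is not the AI Privacy 360 API, and the column names are hypothetical):

```python
# Minimal "privacy by design" sketch: keep only what the model needs and coarsen identifiers.
import pandas as pd

df = pd.DataFrame({
    "name": ["Alice", "Bob"],        # direct identifier, not needed for modeling
    "age": [34, 29],
    "zip_code": ["90210", "10001"],  # quasi-identifier
    "purchases": [12, 3],            # the only feature the model actually uses
})

# Data minimization: drop everything that is not directly relevant to the task.
minimized = df[["age", "purchases"]].copy()

# Simple generalization: replace the exact age with a coarse band.
minimized["age_band"] = pd.cut(minimized["age"], bins=[0, 30, 50, 120])
minimized = minimized.drop(columns=["age"])
print(minimized)
```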
**Future research** may explore the trade-off between privacy (and/or security) and explainability, as there is a thin line between what _can_ be monitored and what _should_ be monitored (Li et al., 2018). One perspective is to look at the trade-off through the lens of historian and philosopher Yuval Noah Harari, who argues that digital platforms, whether powered by AI or not, need to follow three basic rules to protect humans from "_digital dictatorships_" (Li et al., 2018). These rules are: (1) reasonable use cases for data collection, meaning that any data being collected should be used to help people rather than to manipulate, control, or harm them; (2) bidirectional surveillance, meaning that if an entity (e.g., an organization or a government) increases surveillance of individuals, its accountability needs to increase at the same time; and (3) eliminating data monopolies, which arise from the concentration of data in a single entity. Human-centered approaches and design methods can be used to explore solutions to these problems through committees that oversee data collection and, ideally, ensure that data stays in the hands of individuals.
### Research Theme: Security (#12)
AI security aims to provide safety and reduce harm to individuals (Li et al., 2018). It concerns a system's ability to resist external threats by testing its resilience against vulnerabilities and cyber-attacks while protecting the integrity and confidentiality of personal data (Li et al., 2018). However, security may conflict with explainability. For example, in the case of human resource management systems, disclosing information about models for explainability reasons may be harmful
for security reasons, as it may damage the organization's reputation and open up opportunities for attacks if all the model details are published openly [(62; 63)]. Moreover, using AI in security-sensitive decision-making can be questionable. If AI decisions include many false positives, relying on these decisions can create unwanted stress because the operator in charge of making the final decision may want, or have, to act on them. This can be especially stressful for junior staff, who may want to report every incident [(24)].
A family of studied toolkits covers security threats to AI (also known as adversarial). Adversarial Robustness Toolbox (ART) evaluates AI models against adversarial threats of evasion, poisoning, extraction, and inference [(59; 2)]. Advbox (developed by Baidu) [(5)] and Foolbox (developed at the University of Tuebingen) [(80)] generate adversarial examples that can fool neural networks. Similarly, Advertorch is a toolkit developed by RBC Capital for adversarial robustness research [(22; 66)]. CleverHans provides a library for constructing attacks, building defenses, and benchmarking both [(18)].
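To give a sense of the attacks these adversarial toolkits implement, the sketch below shows the classic fast gradient sign method (FGSM) in plain PyTorch; it is an illustrative re-implementation, not code from any of the listed libraries, and `model` stands for an arbitrary differentiable classifier.

```python
# Minimal FGSM sketch: craft an evasion example for a PyTorch classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x that tries to flip the prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon per input dimension.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```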
Future research may want to study the security aspects of AI from a human-centered viewpoint. CHI has a dedicated security and privacy track that publishes papers on the human factors of security and privacy in technologies such as smartphones, apps, and browsers, and on the role of software developers in secure development. Still, we believe the community can benefit from a dedicated focus on what it means to provide human-centered security in AI, considering the trade-off between explainability and security (which resonates with the classic trade-off between usability and security [(77; 20)]).
### Research Theme: Human Flourishing (#8)
Sustainable growth and the well-being of humans using or affected by AI, also known as human flourishing, came up in several studies [(8; 23; 57; 63; 74; 85; 86)]. For example, if an employer wants to increase productivity using AI from a sustainable perspective, employees' expectations should be considered so that there is also a positive change on the employees' side (e.g., monetary compensation) [(63)]. In particular, AI's impact on children has been pointed out as crucial, as it can have long-term effects on children and societies. Examples include using robots for education at early stages of child development, which can shape children's perceptions of robots, or entertainment systems collecting children's data, which may affect their entire lives if service providers do not respect their privacy and keep their data permanently [(86)]. On a similar note, socio-environmental factors (e.g., a blind person navigating social interactions, or the country where the system is deployed) should also be considered for the success of an AI in a real-world setting [(57; 8)].
A couple of CHI papers focused on the labor behind AI [(85; 74)]. As part of the data creation process, annotators have to work long hours for poor pay, especially when there is demand for cost-effective ways of annotating large datasets, which leads AI companies to hire annotators from third-party annotation companies in the Global South. In these companies, some annotators start working with the expectation that annotation work is a door to becoming an engineer in the future and will open up opportunities for high-paying jobs. However, this is often a fallacy, and these annotators rarely move up and progress in their careers [(85)]. Similarly, content moderators sometimes have to read and view inappropriate or disturbing content, causing them severe stress and trauma. Some of these people were not fully aware of the risks associated with the job when they signed up for it, due to a lack of transparency in the job description [(74)].
Both of these jobs (annotation and content moderation) are often seen as dirty work, and the people doing them receive minimal attention from the companies [(85; 69; 74; 90)]. However, their contributions to the AI economy are valuable6 and foundational (they generate datasets and keep AI platforms free of inappropriate content so that others do not have to deal with it). Three recommendations made by Steiger et al. [(74)] can be a starting point to help these individuals with the difficulties of their jobs: (1) giving a detailed description of the job and its associated risks, so candidates can understand whether the job is appropriate for them, (2) limiting the amount of content they see or annotate per day or week, and (3) creating a supportive community to help with stress relief.
Footnote 6: "_Without the work and labor that were poured into the data annotation process, ML efforts are no more than sandcastles._" [(85)]
Future research could study the long-term effects of AI on people, the people who directly use the AI (e.g., a person using a fitness tracker with an AI coach), those who are affected by the AI indirectly (e.g., a person getting an AI-assisted decision about their loan from a credit institute), and the people who work for the AI's development and deployment (e.g., annotators and developers). Docherty and Biega [(23)] argue that the tech industry's focus on user engagement and time spent on the platform is a reductionist view of the well-being of humans. Specific needs and characteristics of an individual should be considered when measuring digital well-being. Drawing on this idea, future researchers could study the well-being of people impacted by AI beyond limiting engagement time, thinking about human flourishing (i.e., "_ability to live a good life_" [(35)]), using a positive computing viewpoint [(12)]. Furthermore, while not discussed in the reviewed papers, we see a need to study the environmental effects of AI on natural resources from a nature-centered viewpoint. For instance, one may design a tool to remind users about the energy consumption of their AI tools or provide information about the energy consumption of AI models (including all stages, such as design, development, and deployment) with an easy-to-understand label, like the Energy Star.
## 4. Conclusion
We reviewed 27 CHI research papers and 19 toolkits from academia and industry about HCR-AI to distill research methods, themes, and future directions in this area. The CHI community and the studied toolkits have focused on AI's fairness, explainability, privacy, and security aspects. The CHI community has started thinking about the well-being of individuals influenced by AI; however, toolkits are still needed to address such issues. Both communities need to put effort into accountability beyond fairness and create usable tools for non-expert users to audit AI, as well as think about sustainability and human flourishing. While these topics are part of the broader discourse in responsible AI [(7; 79; 64; 55; 60)], there is still a need for toolkits and future research to bring them into practice. |
2310.05095 | How Reliable Are AI-Generated-Text Detectors? An Assessment Framework
Using Evasive Soft Prompts | In recent years, there has been a rapid proliferation of AI-generated text,
primarily driven by the release of powerful pre-trained language models (PLMs).
To address the issue of misuse associated with AI-generated text, various
high-performing detectors have been developed, including the OpenAI detector
and the Stanford DetectGPT. In our study, we ask how reliable these detectors
are. We answer the question by designing a novel approach that can prompt any
PLM to generate text that evades these high-performing detectors. The proposed
approach suggests a universal evasive prompt, a novel type of soft prompt,
which guides PLMs in producing "human-like" text that can mislead the
detectors. The novel universal evasive prompt is achieved in two steps: First,
we create an evasive soft prompt tailored to a specific PLM through prompt
tuning; and then, we leverage the transferability of soft prompts to transfer
the learned evasive soft prompt from one PLM to another. Employing multiple
PLMs in various writing tasks, we conduct extensive experiments to evaluate the
efficacy of the evasive soft prompts in their evasion of state-of-the-art
detectors. | Tharindu Kumarage, Paras Sheth, Raha Moraffah, Joshua Garland, Huan Liu | 2023-10-08T09:53:46Z | http://arxiv.org/abs/2310.05095v1 | # How Reliable Are AI-Generated-Text Detectors? An Assessment Framework Using Evasive Soft Prompts
###### Abstract
In recent years, there has been a rapid proliferation of AI-generated text, primarily driven by the release of powerful pre-trained language models (PLMs). To address the issue of misuse associated with AI-generated text, various high-performing detectors have been developed, including the OpenAI detector and the Stanford DetectGPT. In our study, we ask how reliable these detectors are. We answer the question by designing a novel approach that can prompt any PLM to generate text that evades these high-performing detectors. The proposed approach suggests a universal evasive prompt, a novel type of soft prompt, which guides PLMs in producing "human-like" text that can mislead the detectors. The novel universal evasive prompt is achieved in two steps: First, we create an _evasive soft prompt_ tailored to a specific PLM through prompt tuning; and then, we leverage the transferability of soft prompts to transfer the learned evasive soft prompt from one PLM to another. Employing multiple PLMs in various writing tasks, we conduct extensive experiments to evaluate the efficacy of the evasive soft prompts in their evasion of state-of-the-art detectors.
## 1 Introduction
Recent advances in transformer-based Pre-trained Language Models (PLMs), such as PaLM [11], GPT 3.5 [22], and GPT 4 [12], have greatly improved Natural Language Generation (NLG) capabilities. As a result, there is a proliferation of highly compelling AI-generated text across various writing tasks, including summarization, essay writing, academic and scientific writing, and journalism. While AI-generated text can be impressive, it also brings potential risks of misuse, such as academic fraud and the dissemination of AI-generated misinformation [1, 13]. Consequently, to combat the misuse associated with AI-generated text, we observe the emergence of automatic AI-generated-text detectors, which include commercial products like GPTZero [23] and the Turnitin AI text detector 1, as well as open-source methods such as DetectGPT [15] and fine-tuned versions of the OpenAI GPT-2 detector [16].
Footnote 1: [https://www.turnitin.com/solutions/ai-writing](https://www.turnitin.com/solutions/ai-writing)
Although AI-generated-text detectors exhibit impressive performance during standard evaluations using existing datasets, their efficacy can be challenged when deployed in real-world scenarios. For instance, malicious users may manipulate the PLMs to generate text in a way that misleads the detector into classifying it as human-written. Such scenarios can lead to detrimental consequences in practical applications like academic fraud detection or AI-generated news identification. Consequently, it is imperative to assess the reliability of current AI-generated-text detectors in such situations. Therefore, in this study, we aim to answer the question of how reliable the existing state-of-the-art AI-generated-text detectors are.
To answer this question, we propose an evasive soft prompt, a novel form of soft prompt specifically crafted to enable the evasion of AI-generated-text detectors. As illustrated in Figure 1, when combined with a standard input prompt, this evasive soft prompt effectively guides the PLM towards generating texts that convincingly resemble human-written content, thus evading detection by AI-generated-text detectors. Given the rapid advancements of PLMs in recent times, the development of a more generalized framework that can assess the reliability of the detectors on both current and future PLMs is crucial. To facilitate this, we design our evasive soft prompt scheme to be "universal", making it feasible for any PLM to adopt it easily. The proposed universal evasive soft prompt is achieved through a two-step process. First, we develop an evasive soft prompt tailored for a particular PLM using prompt tuning (PT) (Lester et al., 2021). Then, leveraging the transferability of soft prompts, we efficiently transfer the acquired evasive soft prompt from one PLM to another. We refer to our framework for **E**vasive **S**oft **P**rompts for **A**I-generated-text detectors as **EScaPe**.

Figure 1: Evasive soft prompt guiding PLM to evade AI-generated-text detector
Our experiments on different PLMs and diverse writing tasks provide empirical evidence of the efficacy of **EScaPe** in evading state-of-the-art detectors. Furthermore, our experiments on transferability demonstrate the successful application of evasive soft prompts across multiple prominent PLMs. Additionally, we present the effectiveness of **EScaPe** in transferring across different state-of-the-art detectors. Our findings underscore the importance of further research in designing more reliable detection mechanisms for combating misuse related to AI-generated text. In summary, our study makes the following key contributions:
1. We propose the framework **EScaPe**, which introduces the concept of evasive soft prompts--a novel type of soft prompt specifically designed to guide PLMs in evading state-of-the-art AI-generated-text detectors.
2. We demonstrate the transferability of evasive soft prompts learned through the **EScaPe** framework across different PLMs, making it a universal evasive soft prompt to assess the reliability of AI-generated-text detectors on any current or future PLM.
3. We conducted extensive experiments by employing a wide range of PLMs in various real-world writing tasks to demonstrate the effectiveness of the proposed framework, consequently highlighting the vulnerability of the existing AI-generated-text detectors.
## 2 Related Work
### Detection of AI-Generated Text
The task of AI text detection is generally considered a binary classification task, with two classes: "Human-written" and "AI-written." In the early days, several supervised learning-based classification methods were explored for detecting AI-generated text, such as logistic regression and SVC (Ippolito et al., 2019). In contrast, GLTR (Gehrmann et al., 2019) employs a set of simple statistical tests to determine whether an input text sequence is AI-generated or not, making the method zero-shot. In recent years, fine-tuned PLM-based detectors have emerged as the state-of-the-art, including OpenAI's GPT2 detector (Solaiman et al., 2019; Jawahar et al., 2020; Zellers et al., 2019; Kumarage et al., 2023). With the rapid advancement of newer large language models, there is an increasing emphasis on the capabilities of few-shot or zero-shot detection and the interpretability of these detectors (Mitrovic et al., 2023). Some new detectors include commercial products such as GPTZero (Tian, 2023), and OpenAI's detector (Kirchner et al., 2023). A recent high-performing zero-shot detection approach called DetectGPT (Mitchell et al., 2023) operates on the hypothesis that minor rewrites of AI-generated text would exhibit lower token log probabilities than the original sample. Watermarking on PLM-generated text is also an exciting approach gaining attention in the research community (Kirchenbauer et al., 2023). However, it assumes that the AI generator itself supports the implementation of watermarking, which reduces the practicality of the approach.
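As a minimal illustration of the statistical, zero-shot flavor of detection described above (in the spirit of GLTR-style tests, not the exact method of any cited work), one can score a text by its average token log-likelihood under a reference language model:

```python
# Minimal sketch: average token log-likelihood of a text under GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_likelihood(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # negative mean cross-entropy = mean token log-probability

# Texts with unusually high likelihood under the scoring model lean "machine-generated";
# a threshold on this score gives a crude zero-shot detector.
print(avg_log_likelihood("The quick brown fox jumps over the lazy dog."))
```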
### AI-generated-text Detector Evasion
Few recent studies have investigated the efficacy of paraphrasing as a technique for evading AI-generated text detection. (Sadasivan et al., 2023; Krishna et al., 2023) demonstrated that paraphrasing the text can considerably undermine the performance of AI-generated text detectors, raising concerns about such detection methods' reliability. However, our work differs significantly from paraphrasing for the following reasons: 1) we aim to assess the reliability of existing detectors against the capabilities of the original PLM that generated the text. In paraphrasing, a secondary PLM is used to rephrase the original PLM's text to evade detection, resulting in a two-step process, and 2) Paraphrasing attack evaluation provides a unified score for type I and type II errors of the detector. However, it is essential to distinguish between these two types of errors to validate the detector's reliability. In real-world scenarios, the most probable situation
would involve type II errors, where malicious actors attempt to generate AI text that can evade the detectors and result in a false negative. Our study focuses explicitly on emulating such a scenario and evaluating the type II errors.
### Soft Prompt Tuning
Prompt tuning is a widely used approach for guiding PLMs to generate desired outputs Jiang et al. (2020). GPT-3 demonstrated early success in prompt tuning, achieving remarkable performance on multiple tasks using tailored prompts Brown et al. (2020). This led to extensive research on hard prompts, which are manually or automatically crafted prompts in discrete space Mishra et al. (2021); Gao et al. (2020). Simultaneously, researchers have explored the potential of soft prompts Liu et al. (2023). Unlike hard prompts, soft prompts are in the continuous embedding space. Therefore, soft prompts can be directly trained with task-specific supervision Wu and Shi (2022); Gu et al. (2022). Notable methods for soft prompts, such as prompt tuning (PT) Lester et al. (2021), and P-tuning Liu et al. (2022), have achieved performance comparable to full parameter finetuning of PLMs for downstream tasks. Furthermore, recent works have presented the transferability of soft prompts across tasks and across PLM architectures Su et al. (2022); Vu et al. (2021).
## 3 Methodology
Figure 2 illustrates the process of generating the universal evasive soft prompt, which involves two main steps: evasive soft prompt learning and evasive soft prompt transfer. In the first step, we learn an evasive soft prompt for a specific frozen PLM (source PLM -\(PLM^{s}\)). The second step is the evasive soft prompt transfer, which involves transferring the learned soft prompt to a frozen target PLM (\(PLM^{t}\)). In the subsequent sections, we comprehensively discuss these two steps.
### Evasive Soft Prompt Learning
#### 3.1.1 Overview of Learning
Evasive soft prompt learning aims to tune the soft prompt \(P^{s}\) so that once the learned \(P^{s}\) is inputted into \(PLM^{s}\), it generates text classified as "Human-written" by the detector. To accomplish this objective, our end-to-end learning framework is defined as follows: first, we configure the soft prompt \(P^{s}\) based on the Prompt Tuning (PT) method Lester et al. (2021). Second, we input both the soft and natural language prompt, which reflects the specific writing task (Figure 1 provides an example of a news writing task), into \(PLM^{s}\), and generate the corresponding output text response. Next, we pass this generated text response to the AI-generated-text detector and obtain the probability values for the final classification classes. Finally, using the detector's output as the reward, we propagate the learning error and update the soft prompt \(P^{s}\) using the Proximal Policy Optimization (PPO) method Schulman et al. (2017); Ziegler et al. (2019).
#### 3.1.2 Configuring the Evasive Soft Prompt
We use the PT method for defining and configuring the evasive soft prompt on a given \(PLM^{s}\). Here the parameters of \(PLM^{s}\) are frozen, and the only parameter we intend to update is the soft prompt \(P^{s}\). The soft prompt \(P^{s}\) consists of a collection of \(k\) virtual tokens \(p^{s}_{1},p^{s}_{2},...,p^{s}_{k}\) connected to the input of \(PLM^{s}\). Unlike regular input tokens, these \(k\) virtual tokens are learnable embedding vectors that can be fine-tuned to guide \(PLM^{s}\) in accomplishing a specific task or goal. In our case, these learnable tokens aim to steer the generation of text that evades AI text detectors. Formally, given a natural language prompt sequence with \(q\) tokens \(X=x^{s}_{1},x^{s}_{2},...,x^{s}_{q}\), we input \(X\) into the PLM and get the corresponding embedding representations while prepending \(k\) randomly initialized soft prompts \(p^{s}_{1},p^{s}_{2},...,p^{s}_{k}\) before them. Here, \(p^{s}_{i}\in\mathbb{R}^{e_{s}}\), and \(e_{s}\) is the input embedding size of \(PLM^{s}\).
Figure 2: The proposed framework, **EScaPe**, encompasses learning evasive soft prompt on a specific PLM, followed by its transfer to a target PLM.
#### 3.1.3 Text Generation
After configuring the evasive soft prompt for \(PLM^{s}\), we generate the corresponding output text \(Y^{s}\). This generation is conditioned on the soft prompt tokens \(P^{s}=p^{s}_{1},p^{s}_{2},...,p^{s}_{k}\) and the input tokens \(X^{s}=x^{s}_{1},x^{s}_{2},...,x^{s}_{q}\), as shown below:
\[p(Y^{s})=\prod_{i=q}^{N}p(x^{s}_{i}|P^{s},x^{s}_{1}...x^{s}_{q}...x^{s}_{i-1}) \tag{1}\]
In equation 1, \(N\) represents the size of the generated output, determined by the stopping criteria used during text generation with \(PLM^{s}\). In our case, we utilize the default maximum sequence size criteria, which halts the generation after reaching the maximum defined sequence length.
#### 3.1.4 Reward Generation from the Detector
Given the text output generated by \(PLM^{s}\), our objective is to calculate a reward value that indicates the degree to which the generated text resembles human-written text for AI text detectors. For this purpose, we utilize a proxy AI-generated-text detector. Given the generated text \(Y\) from \(PLM^{s}\), we compute the probabilities of \(Y\) being classified as "Human" and "AI". Formally, we define the input text to the detector as \(Y=(y^{s}_{1},y^{s}_{2},...,y^{s}_{l})\), where \((y^{s}_{1},y^{s}_{2},...,y^{s}_{l})\) represents the tokens of input text \(Y\) based on the tokenizer of the detector model. The detector has learned the function \(d_{\phi}\), which takes the input tokens and predicts the class probabilities.
\[\begin{split}\{cls^{0}_{p},cls^{1}_{p}\}=d_{\phi}(y^{s}_{1},y^{s }_{2},...,y^{s}_{l})\\ \hat{z}=cls^{0}_{p},\end{split} \tag{2}\]
where \(cls^{i}_{p}\) represents the probability for class \(i\), and \(\hat{z}\) denotes the probability of the "Human" class (\(i=0\)). The ultimate objective of our framework is to fine-tune the soft prompt \(P^{s}\) to maximize the reward \(\hat{z}\). In other words, we aim to guide \(PLM^{s}\) in generating text that enhances the "Human" class probability as predicted by the detector.
#### 3.1.5 Finetuning via Reinforcement Learning
After obtaining the reward from the detector, \(\hat{z}\), we utilize reinforcement learning to update our prompt's parameters effectively. The RL module abides by the following structure:
* **State** consists of two components: the evasive soft prompt and the detector. The agent acts as an optimizer, updating the evasive soft prompt based on the detector's output.
* **Policy** aims at making the generated text look human-written to the detector. Given the language model \(PLM^{s}\), we initialize the policy \(\pi\) as \(PLM^{s}\) and then fine-tune only the evasive soft prompt embedding \(P^{s}\) to perform the task well using RL. We fit a reward model \(f\) using the loss: \[loss(f)=\mathbb{E}_{\left(Y,\{\hat{z}\}_{i}\right)}\left[\log\frac{e^{f(Y,\hat{z})}}{\sum_{i}e^{f(Y,\hat{z}_{i})}}\right]\] (3) where \(Y\) is the text generated by \(PLM^{s}\) for the prompt \(X\), and \(\hat{z}\) is the detector output. To keep \(\pi\) from moving too far from \(PLM^{s}\), we leverage a penalty with expectation \(\beta\mathrm{KL}(\pi,PLM^{s})\) (where KL denotes the KL divergence), modifying the RL reward as \[F(y,\hat{z})=f(y,\hat{z})-\beta\mathrm{KL}(\pi,PLM^{s})\] (4) \[=f(y,\hat{z})-\beta\log\frac{\pi(y\mid x)}{PLM^{s}(y\mid x)}\] (5) Here \(x\in X\) and \(y\in Y\) (a single instance of an input prompt and generated text). Adding the KL term has two benefits. First, it prevents the policy from moving too far from the range where \(f\) is valid, and second, it enforces coherence and topicality in the generated text.
* **Action** of the RL agent is updating the \(k\) tokens of the evasive soft prompt \(P^{s}\) based on the policy objective formulated as \[J(\lambda)=\mathbb{E}_{\pi_{\lambda}}[\sum_{i=1}^{k}F_{i}],\] (6) where \(\lambda\) denotes the language model's parameters related to the soft prompt embedding, and \(F_{i}\) is the reward for the response \(y_{i}\) and detector output \(\hat{z}_{i}\). We use the gradient of this objective after each iteration to update the evasive soft prompt.
Optimizing the evasive soft prompt based on the defined reward signals, the model finally outputs text that is detected as human-written by the detector.
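The loop described above can be sketched with the TRL and transformers libraries that the paper reports using in its implementation details; the checkpoints, generation length, batch size, and the index of the detector's "Human" class are placeholder assumptions, and for brevity this sketch lets PPO update the whole policy rather than only the soft-prompt embeddings as in **EScaPe**.

```python
# A hedged sketch (not the authors' exact code) of reward computation and one PPO update.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

plm_name = "gpt2"                          # placeholder source PLM
det_name = "roberta-base-openai-detector"  # placeholder proxy detector
HUMAN = 0                                  # assumed index of the "Human" class; check id2label

tokenizer = AutoTokenizer.from_pretrained(plm_name)
tokenizer.pad_token = tokenizer.eos_token
policy = AutoModelForCausalLMWithValueHead.from_pretrained(plm_name)
ref_policy = AutoModelForCausalLMWithValueHead.from_pretrained(plm_name)  # frozen reference for the KL penalty
ppo = PPOTrainer(PPOConfig(learning_rate=1.41e-5, batch_size=1, mini_batch_size=1),
                 policy, ref_policy, tokenizer)

det_tok = AutoTokenizer.from_pretrained(det_name)
detector = AutoModelForSequenceClassification.from_pretrained(det_name).eval()

query = tokenizer("Write a news article on the given headline: ...",
                  return_tensors="pt").input_ids[0]
response = ppo.generate(query, return_prompt=False, do_sample=True,
                        top_p=0.96, temperature=0.9, max_new_tokens=64)[0]
text = tokenizer.decode(response, skip_special_tokens=True)

with torch.no_grad():
    logits = detector(**det_tok(text, return_tensors="pt", truncation=True)).logits
reward = torch.softmax(logits, dim=-1)[0, HUMAN]  # \hat{z}: probability of "Human"
ppo.step([query], [response], [reward])           # PPO update with built-in KL penalty
```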
### Evasive Soft Prompt Transfer
After learning the evasive soft prompt on the \(PLM^{s}\), we transfer the trained evasive soft prompt to the semantic space of target \(PLM^{t}\). Following the cross-architecture transferability work of soft
prompts Su et al. (2022), we train a projector to efficiently map the evasive soft prompt from the source PLM to the target PLM.
Formally, we define the soft prompts of the source \(PLM^{s}\) as \(P^{s}=p_{1}^{s},...,p_{k}^{s}\); \(P^{s}\in\mathbb{R}^{e_{s}\times k}\). The projector \(g_{\theta}\) aims to project \(P^{s}\) to \(P^{t}\in\mathbb{R}^{e_{t}\times k}\), representing the semantic space of the target \(PLM^{t}\). Here, \(e_{s}\) and \(e_{t}\) denote the input embedding dimensions of the source and target PLMs, respectively. We parameterize the projector function \(g_{\theta}\) using a two-layer feed-forward neural network:
\[P^{t}=g_{\theta}(P^{s})=W_{2}(\sigma(P^{s}W_{1}+b_{1}))+b_{2} \tag{7}\]
In Equation (7), \(W_{1}\in\mathbb{R}^{e_{h}\times e_{s}}\), \(W_{2}\in\mathbb{R}^{e_{t}\times e_{h}}\) are trainable matrices, \(b_{1}\in\mathbb{R}^{e_{h}}\), \(b_{2}\in\mathbb{R}^{e_{t}}\) are biases, and \(\sigma\) is a non-linear activation function. Here \(e_{h}\) denotes the hidden layer size of the projector network. Finally, we train the above cross-model projection parameters using our RL framework for evasive soft prompt learning. However, given that we are now initializing the \(P^{t}\) from an already learned evasive soft prompt \(P^{s}\), learning the evasive soft prompt for the target \(PLM^{t}\) is done in fewer iterations.
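A minimal PyTorch sketch of the projector \(g_{\theta}\) in Equation (7) is given below; the embedding sizes and the hidden size \(e_{h}\) are hypothetical placeholders.

```python
# Minimal sketch of the cross-model prompt projector (two-layer feed-forward network).
import torch
import torch.nn as nn

class PromptProjector(nn.Module):
    def __init__(self, e_s: int = 4096, e_t: int = 2560, e_h: int = 768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(e_s, e_h),  # W1 . + b1
            nn.ReLU(),            # sigma
            nn.Linear(e_h, e_t),  # W2 . + b2
        )

    def forward(self, source_prompt: torch.Tensor) -> torch.Tensor:
        # source_prompt: (k, e_s) learned evasive soft prompt of the source PLM
        return self.net(source_prompt)  # (k, e_t) initialization for the target PLM

projector = PromptProjector()
p_source = torch.randn(8, 4096)  # k = 8 soft-prompt vectors from the source PLM
p_target = projector(p_source)
print(p_target.shape)            # torch.Size([8, 2560])
```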
## 4 Experiment Setup
Here we describe the experimental settings used to validate our framework, including the AI generators (PLMs), writing tasks (datasets), detection setup, baselines, and the implementation details of **EScaPe** to support reproducibility.
### AI-Text Generators (PLMs)
We evaluate our framework on numerous open-source PLMs readily available on HuggingFace. Specifically, we selected current state-of-the-art PLMs such as **LLaMA**(7B; Touvron et al. (2023)) and **Falcon**(7B; Almazrouei et al. (2023)), as well as established powerful PLMs like **GPT-NeoX**(20B; Black et al. (2022)) and **OPT**(2.7B; Zhang et al. (2022)). In our experiments, we used the above PLMs in two ways. Firstly, for zero-shot language generation, we generated AI text for the different writing tasks considered in our analysis. Secondly, we used them as the base PLMs for our framework's evasive soft prompt learning task. We employed similar language generation parameters in both cases, setting the _top-p_ to 0.96 and the _temperature_ to 0.9.
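A minimal sketch of this zero-shot generation setting with the HuggingFace transformers API is shown below; the checkpoint and prompt are placeholders, and only the sampling parameters (_top-p_ = 0.96, _temperature_ = 0.9) follow the setup described above.

```python
# Minimal sketch: zero-shot AI-text generation with nucleus sampling.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-2.7b"  # one of the studied PLM families; any causal LM works here
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Write a news article on the given headline: ..."
inputs = tok(prompt, return_tensors="pt")
output = model.generate(**inputs, do_sample=True, top_p=0.96,
                        temperature=0.9, max_new_tokens=200)
print(tok.decode(output[0], skip_special_tokens=True))
```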
### Writing Tasks
In our experiments, we analyzed how well our framework works with different AI-related writing tasks that have the potential for misuse. We focused on three main writing tasks: news writing, academic writing, and creative writing.
For the news writing task, we combined two datasets to gather human-written news articles from reputable sources such as CNN, The Washington Post, and BBC. We obtained CNN and The Washington Post articles from the Turing-Bench dataset Uchendu et al. (2021) and BBC articles from the Xsum dataset Narayan et al. (2018). To represent academic essays, we used the SQuAD dataset Rajpurkar et al. (2016) and extracted Wikipedia paragraphs from the context field. For creative writing, we used the Reddit writing prompts dataset Fan et al. (2018). After collecting the human-written text for the above tasks, we used the selected AI generators to generate corresponding AI text. Prompts used for these AI text generations can be found in Appendix A.1.1.
### AI-generated Text Detection Setup
We followed a task setup similar to related works, considering AI-generated-text detection as a binary classification task with the labels "Human" (0) and "AI" (1). For each writing task mentioned above, we had "Human" text and "AI" text, which we split into train, test, and validation sets (\(\approx\) 8:1:1 ratio).
In our work, we experimented with two state-of-the-art AI text detectors representing the two main categories: zero-shot and supervised. For the zero-shot detector, we used DetectGPT Mitchell et al. (2023), and for the supervised detector, we used a fine-tuned version of OpenAI's GPT-2 detector (OpenAI-FT) Solaiman et al. (2019). Since the fine-tuned detector was only trained on GPT2 text, we further fine-tuned it on the text generated by each respective PLM considered in our study. More details can be found in Appendix A.1.3.
### Baselines
We compare our method against the following recent paraphrasing-based AI-generated-text detection evasion techniques.
**Parrot paraphrasing (parrot_pp)**: PLM-based paraphrasing approach that incorporates the T5 model to paraphrase the given input text while degrading the performance of AI-generated text detection Sadasivan et al. (2023).
**DIPPER paraphrasing (DIPPER_pp)**: a PLM specifically tailored to the task of paraphrasing. This method augments existing paraphrasing capabilities by enabling paragraph-level paraphrasing and by providing control codes to control the diversity of the paraphrase Krishna et al. (2023).
### Implementation Details of EScaPe
To implement evasive soft prompt learning, we first defined the soft prompt model using the PEFT library on HuggingFace2. We set the task type to _CAUSAL_LM_, selected an initial prompt text to represent the writing task (e.g., for the news writing task, "write a news article on the given headline"), and specified the number of virtual tokens as \(k=8\). For tuning the evasive soft prompt through reinforcement learning (PPO), we utilized the TRL library von Werra et al. (2020) for the implementation, and the framework is trained using a learning rate of \(1.41\times 10^{-5}\) until the detector performance on the validation set no longer decreased. To facilitate the transfer of evasive prompts, we utilized the parameters outlined by Su et al. (2022) in their work on transferability. Further information regarding the implementation can be found in Appendix A.1.4.
Footnote 2: [https://github.com/huggingface/peft](https://github.com/huggingface/peft)
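A minimal sketch of how this setup can be wired together with the PEFT and TRL libraries is given below; the checkpoint name, the prompt list, the batch sizes, and the detector stand-in are illustrative assumptions rather than our released implementation.

```python
# Minimal sketch of evasive soft-prompt tuning: PEFT prompt tuning (k = 8 virtual tokens)
# optimized with TRL's PPO trainer against a frozen detector; several values are illustrative.
import torch
from transformers import AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

base_model = "huggyllama/llama-7b"                 # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="write a news article on the given headline",
    num_virtual_tokens=8,
    tokenizer_name_or_path=base_model,
)
# Wrapping with a value head keeps the PLM weights frozen; only the soft prompt
# (and the value head) receive gradients during PPO.
model = AutoModelForCausalLMWithValueHead.from_pretrained(base_model, peft_config=peft_config)

task_prompts = ["Write a news article on the given headline: <headline 1>",
                "Write a news article on the given headline: <headline 2>"]
ppo_config = PPOConfig(learning_rate=1.41e-5, batch_size=len(task_prompts), mini_batch_size=1)
ppo_trainer = PPOTrainer(ppo_config, model, ref_model=None, tokenizer=tokenizer)

def detector_ai_probability(texts):
    # Stand-in for the frozen detector (e.g. the fine-tuned detector or DetectGPT);
    # it should return P(AI) for each text. A constant keeps the sketch runnable.
    return [0.5 for _ in texts]

gen_kwargs = dict(do_sample=True, top_p=0.96, temperature=0.9,
                  max_new_tokens=128, pad_token_id=tokenizer.eos_token_id)
queries = [tokenizer(p, return_tensors="pt").input_ids.squeeze(0) for p in task_prompts]
responses = [ppo_trainer.generate(q, **gen_kwargs).squeeze(0)[q.shape[0]:] for q in queries]
texts = [tokenizer.decode(r, skip_special_tokens=True) for r in responses]
# Reward is larger when the detector is pushed towards the "Human" label;
# TRL's PPO step also applies a KL penalty w.r.t. the frozen reference model.
rewards = [torch.tensor(1.0 - p) for p in detector_ai_probability(texts)]
ppo_trainer.step(queries, responses, rewards)
```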
## 5 Results and Discussion
### Evading AI Text Detectors
We recorded the results of our AI-generated-text detector evasion experiment in Table 1. The "Original" row represents the F1 score of the detector when applied to AI-text generated by the respective PLM without any modifications, while the remaining rows present the F1 score of text that has undergone different evasion methods.
**EScaPe successfully evaded detectors**: Our findings demonstrate that **EScaPe** effectively reduces the performance of the detectors across all PLMs and various writing styles. Notably, the OpenAI-FT detector experienced an average F1 score decrease of approximately 42%, while the Detect-GPT detector encountered a decrease of around 22%. The discrepancy in evasion success between the two detectors may stem from their initial performance levels. For instance, DetectGPT achieved an 80% detection F1 score before evasion on LLaMA, Falcon, and GPT-NeoX text. Consequently, the soft prompt learned through the reward from DetectGPT is more limited compared to the soft prompt acquired through the reward from the high-performing OpenAI-FT detector. This claim can be further substantiated by analyzing the detection results of the OPT model. Detect-GPT exhibits higher performance in detecting OPT-generated text, and accordingly, we observe that the soft prompt learned using the reward of Detect-GPT on OPT is successful in evading the detector, unlike the cases observed with other PLMs.
**Lower False Negatives with Paraphrase Evasion**
Upon analyzing the F1 scores after evasion, it becomes apparent that **EScaPe** significantly outperforms Parrot paraphrasing. While the DIPPER paraphraser demonstrates superior evasion compared to Parrot, it falls short compared to **EScaPe** with the OpenAI-FT detector. Both paraphrasing methods show improved evasion capabilities with DetectGPT compared to OpenAI-FT. We attribute this distinction to the initial performance gap of the DetectGPT. When the detector's performance is weak for the original AI-text, specific perturbations can significantly impact its performance.
In contrast to the findings of recent works Sadasivan et al. (2023); Krishna et al. (2023), paraphrasing exhibits only a limited ability to alter the detector's performance in our experiments. To address this apparent discrepancy, we analyze the underlying factors. In the presented results of Table 1, we specifically focus on the F1 score of the "AI" class, which assesses the effectiveness of the respective evasion technique in making the AI-generated text appear "Human" to the detector (false negatives). This evaluation approach differs from prior research on paraphrasing-based evasion, where both false positives and false negatives of the detector are considered, resulting in a more significant decline in the evaluation scores (AUROC) when paraphrasing is employed. However, we argue that in real-world scenarios, the most likely situation would involve malicious actors attempting to generate AI text that can evade detectors by producing false negatives.
### Transferability of the Evasion Prompts
We investigate two aspects of transferability: 1) the ability of **EScaPe** learned on one PLM to transfer to another, and 2) the ability of **EScaPe** learned through one detector to transfer to another detector.
#### 5.2.1 Transferability Across PLMs
Table 2 presents the transferability of **EScaPe** across different PLMs. For brevity, we provide the detection values of the OpenAI-FT detector,
while the DetectGPT results can be found in Appendix A.2.1. We observe that **EScaPe** demonstrates remarkable transferable performance consistently across all writing tasks and the four PLMs investigated. For the LLaMA, Falcon, and GPT-NeoX PLMs, the deviation of transferability (maximum F1 score of transferred **EScaPe** minus the F1 score of **EScaPe** trained on itself) is less than 5%. In the case of OPT, this value is slightly higher, around 10%. Notably, **EScaPe** trained on OPT exhibits limited transferability compared to the other PLMs. One possible reason for this disparity could be the size of the language model. OPT, with 2.7B parameters, is the smallest model examined in our study. Consequently, the **EScaPe** trained on OPT might have limitations in its capabilities to transfer to larger models. This becomes more evident when considering GPT-NeoX, the largest model we analyzed, which exhibits the lowest deviation in transferability, indicating strong transferability to all the other PLMs.
#### 5.2.2 Transferability Across Detectors
Figure 3 illustrates the extent to which **EScaPe** exhibits transferability across different detectors. For the sake of brevity, we have only presented the detection values for the news writing task, while the transferability of the detectors in the other two writing tasks can be found in Appendix A.2.1. Figure 3a reports the F1 score of the OpenAI-FT detector in two scenarios: 1) Direct - text generated using **EScaPe** trained with the reward from the OpenAI-FT detector, and 2) Transfer - text generated using **EScaPe** trained with the reward from the DetectGPT detector. Likewise, Figure 3b depicts the F1 score of the DetectGPT detector in the direct and transfer scenarios.
Based on the two figures, it is evident that the **EScaPe** framework trained on one detector can be transferred to another detector. In both scenarios considered for the detectors, we observe that the **EScaPe** effectively evades detection, resulting in lower F1 scores ranging from 50% to 70%. Notably, we observe that the **EScaPe** trained on the supervised detector OpenAI-FT exhibits strong transferability to the zero-shot detector DetectGPT. Surprisingly, the **EScaPe** trained on OpenAI-FT yields a lower F1 score with the DetectGPT detector compared to the **EScaPe** trained on DetectGPT itself. We attribute this significant transferability to the supervised training of OpenAI-FT in detecting AI-generated text. When comparing the original performance (see Table 1) of OpenAI-FT and DetectGPT, OpenAI-FT clearly outperforms DetectGPT as a detector for each respective PLM-generated text.
\begin{table}
\begin{tabular}{|c|l|cccc|cccc|cccc|}
\hline
\multirow{2}{*}{**Detector**} & \multirow{2}{*}{**Method**} & \multicolumn{4}{c|}{**News Writing**} & \multicolumn{4}{c|}{**Essay Writing**} & \multicolumn{4}{c|}{**Creative Writing**} \\
\cline{3-14}
 & & LLaMA & Falcon & GPT-NeoX & OPT & LLaMA & Falcon & GPT-NeoX & OPT & LLaMA & Falcon & GPT-NeoX & OPT \\
\hline
\multirow{3}{*}{OpenAI-FT} & parrot\_pp & 0.924 & 0.903 & 0.931 & 0.972 & 0.915 & 0.918 & 0.884 & 0.933 & 0.931 & 0.911 & 0.947 & 0.962 \\
 & DIPPER\_pp & 0.856 & 0.811 & 0.864 & 0.824 & 0.849 & 0.802 & 0.841 & 0.811 & 0.862 & 0.806 & 0.855 & 0.835 \\
 & **EScaPe** & **0.543** & **0.551** & **0.532** & **0.522** & **0.551** & **0.547** & **0.543** & **0.528** & **0.545** & **0.532** & **0.539** & **0.519** \\
\hline
\multirow{4}{*}{DetectGPT} & Original & 0.817 & 0.754 & 0.798 & 0.923 & 0.825 & 0.781 & 0.771 & 0.918 & 0.813 & 0.732 & 0.786 & 0.929 \\
 & parrot\_pp & 0.732 & 0.711 & 0.705 & 0.863 & 0.757 & 0.703 & 0.691 & 0.848 & 0.788 & 0.674 & 0.718 & 0.852 \\
 & DIPPER\_pp & 0.635 & 0.651 & 0.658 & 0.702 & 0.641 & 0.673 & 0.647 & 0.694 & 0.628 & 0.648 & 0.663 & 0.711 \\
 & **EScaPe** & **0.582** & **0.637** & **0.614** & **0.549** & **0.579** & **0.622** & **0.623** & **0.551** & **0.583** & **0.641** & **0.612** & **0.547** \\
\hline
\end{tabular}
\end{table}
Table 1: F1 scores of the detector for text generated using various evasion techniques. ’Original’ denotes text generated by the corresponding PLM without employing any evasion technique. The lowest F1 scores, indicating the highest evasion success, are highlighted in **bold**.
\begin{table}
\begin{tabular}{|c|c|c c c|}
\hline
\multirow{2}{*}{**Source**} & \multirow{2}{*}{**Target**} & \multicolumn{3}{c|}{**Writing Tasks**} \\
\cline{3-5}
 & & **News** & **Essay** & **Cre.** \\
\hline
\multirow{3}{*}{LLaMA} & Falcon & 0.599 & 0.611 & 0.593 \\
 & GPT-NeoX & 0.587 & 0.596 & 0.586 \\
 & OPT & **0.554** & **0.559** & **0.555** \\
\hline
\multirow{3}{*}{Falcon} & LLaMA & 0.571 & 0.566 & 0.559 \\
 & GPT-NeoX & 0.564 & **0.557** & **0.551** \\
 & OPT & **0.561** & 0.559 & 0.554 \\
\hline
\multirow{3}{*}{GPT-NeoX} & LLaMA & 0.553 & 0.560 & 0.558 \\
 & Falcon & 0.571 & 0.568 & 0.563 \\
 & OPT & **0.541** & **0.552** & **0.546** \\
\hline
\multirow{3}{*}{OPT} & LLaMA & **0.572** & **0.574** & **0.571** \\
 & Falcon & 0.604 & 0.613 & 0.611 \\
 & GPT-NeoX & 0.588 & 0.595 & 0.583 \\
\hline
\end{tabular}
\end{table}
Table 2: F1 scores of the detector for the text generated by the PLM in the “Target” column. Here **EScaPe** is trained on the PLM in the “Source” column and transferred to the PLM in the “Target” column. The lowest F1 scores, showcasing the highest transferable success for a given “Source” PLM, are highlighted in **bold**.
This performance disparity is intuitive, considering that we fine-tune OpenAI-FT using the text generated by each PLM examined in our study. Consequently, **EScaPe** trained using rewards from a robust supervised detector demonstrates higher transferability.
### Further Analysis
Here, we conduct a detailed analysis of two aspects: the quality of the generated evasive text and a comparison of different parameter-efficient tuning methods for evasive text generation.
#### 5.3.1 Quality of Evasive Text
We evaluate the perplexity change to assess the disparity between the original AI-text and the text produced after applying evasive techniques. Table 3 presents the perplexity change values for each evasion technique in LLaMA generations. The perplexity is computed using an independent PLM, GPT2-XL Radford et al. (2019). We observe that the evasive text generated by **EScaPe** exhibits the lowest perplexity change when compared to the paraphrasing techniques, Parrot and DIPPER. This finding aligns with our expectations since we impose a KL loss between the frozen PLM and the evasive soft prompt during training. This constraint ensures that the generation, conditioned by the evasive soft prompt, remains close to the original PLM and avoids significant divergence.
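The perplexity-change measurement can be sketched as follows, assuming GPT2-XL as the scoring model; the two example strings simply stand in for an original generation and its evasive counterpart.

```python
# Minimal sketch of the perplexity-change computation with an independent GPT2-XL model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
lm = GPT2LMHeadModel.from_pretrained("gpt2-xl").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss      # mean token-level cross-entropy
    return torch.exp(loss).item()

original_ai_text = "Text generated by the PLM without any evasion."        # placeholder
evasive_text = "Text generated by the PLM under the evasive soft prompt."  # placeholder
delta_ppl = perplexity(evasive_text) - perplexity(original_ai_text)
print(f"Perplexity change: {delta_ppl:.2f}")
```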
#### 5.3.2 Parameter Efficient Tuning Methods
Here, we investigate the effectiveness of various parameter-efficient tuning methods, similar to the PT method, within our framework. Specifically, we explore Prefix-tuning Li and Liang (2021) and LoRA Hu et al. (2021) as alternatives to PT in tuning the PLM to generate evasive text. In Prefix-tuning, a set of prefix activations is prepended to each layer in the encoder stack, including the input layer. On the other hand, the LoRA method involves learning a pair of rank decompositions for the attention layer matrices while keeping the original weights of the PLM frozen. Figure 4 illustrates that LoRA yielded slightly better evasion performance compared to PT. However, it is important to note that the transferability of LoRA's rank decompositions between different PLMs has not been thoroughly studied, limiting its applicability in our objective of creating a universal evasive soft prompt.
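For reference, the two alternative configurations could be instantiated with the PEFT library roughly as follows; the rank and scaling values are illustrative, not the exact settings of this comparison.

```python
# Minimal sketch of the Prefix-tuning and LoRA alternatives compared in Figure 4;
# hyperparameter values are illustrative.
from peft import LoraConfig, PrefixTuningConfig, TaskType

prefix_cfg = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=8)
lora_cfg = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)
# Either config could replace the prompt-tuning config via get_peft_model(base_plm, cfg);
# unlike a soft prompt, LoRA's rank decompositions are tied to a specific PLM's weight
# matrices, so their transfer across PLMs is less straightforward.
print(prefix_cfg, lora_cfg)
```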
## 6 Conclusion
In this paper, we investigated the reliability of current state-of-the-art AI-generated-text detectors by introducing a novel universal evasive soft prompt framework. Our framework, referred to as **EScaPe**, was designed to learn an evasive soft prompt capable of efficiently guiding any PLM in generating text that can effectively deceive the detectors. Through experiments conducted on various prominent PLMs across different writing tasks, our results reveal the unreliability of existing AI-generated-text detectors when confronted with text generated by PLMs guided by evasive soft prompts. In future research, exploring the potential benefits of adversarial training in developing more robust detectors to combat the proposed evasive soft prompts would be an intriguing avenue to pursue.
Figure 4: Evasion performance (F1 score of the detector) of **EScaPe** with different Tuning Methods
Figure 3: Transferability of **EScaPe** across different detectors. (a) DetectGPT \(\rightarrow\) OpenAI-FT and (b) OpenAI-FT \(\rightarrow\) DetectGPT. Detector \(X\to Y\) denotes the evaluation of **EScaPe** trained through the reward of \(X\) using the detector \(Y\) (“Transfer” label). The “Direct” label denotes F1 corresponding to the detector \(Y\)’s performance on **EScaPe** trained through the reward of \(Y\) itself.
\begin{table}
\begin{tabular}{|c|c c c|} \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**Writing Tasks**} \\ \cline{2-4} & **News** & **Essay** & **Cre.** \\ \hline Parrot\_PP & 13.4 & 11.5 & 12.7 \\ \hline DIPPER\_PP & 8.1 & 7.5 & 8.3 \\ \hline
**EScaPe** & **2.5** & **1.7** & **2.1** \\ \hline \end{tabular}
\end{table}
Table 3: Perplexity change of the AI text after applying the evasion method. The lowest perplexity change, indicating the highest similarity with the original AI text, is highlighted in **bold**.
## 7 Limitations
In our study, we construct the evasive soft prompt based on the assumption that the PLM is capable of using soft prompts. In other words, we consider the PLM as an open-source model, allowing us to seamlessly incorporate these learnable soft prompt embeddings into the model. This assumption prevents our framework from evaluating AI-generated-text detectors against PLMs that are only accessible via an API supporting discrete natural language prompts (e.g., the OpenAI API). However, if soft prompting capabilities are eventually supported through APIs in the future, our method can be applied to PLMs accessible via APIs as well.
## 8 Ethics Statement
### Malicious Use of Evasive Soft Prompts
We acknowledge the potential risks associated with adversaries misusing the proposed framework to evade AI-generated text detection systems. However, we argue that the benefits of identifying limitations and vulnerabilities in state-of-the-art detector systems (red-teaming) outweigh the potential for misuse, primarily when we actively assist future researchers in addressing these issues. As a precautionary measure, we will not make the complete codebase or soft prompt weights publicly available. However, upon review, individuals or organizations engaged in legitimate research will be granted access to our framework.
### AI-generated Text
In our work, we experiment with multiple PLMs and generate text related to news articles, academic essays, and creative writing tasks. We recognize the importance of not publicly releasing any AI-generated text used in our work, as we cannot guarantee the factual accuracy of the content. Therefore, we will implement an on-demand release structure to release our AI-generated data. Individuals or organizations requesting access to the generated data for legitimate academic research purposes will be granted permission to download the data.
### Intended Use
It is crucial to consider the intended real-world application of **EScaPe** and its societal impact. Our research on evasive soft prompts focuses on enabling an assessment framework for existing AI-generated text detectors. As AI-generated text becomes increasingly prevalent in various domains, the potential applications of AI-generated text detectors expand, including their role as a primary forensic tool in combating AI-generated misinformation. Therefore, assessing the detectors before their deployment becomes essential, and we emphasize that our framework should be used for this intended purpose. However, a significant ethical concern arises if our framework is utilized for purposes other than its intended use, guiding PLMs to generate harmful and derogatory text that could negatively impact the reputation of the organizations responsible for developing the aforementioned PLMs. Hence, we strongly advise users to adhere to the intended use of our framework -- only as an assessment tool for AI-generated-text detectors.
|
2303.13917 | Convolutional Neural Networks for the classification of glitches in
gravitational-wave data streams | We investigate the use of Convolutional Neural Networks (including the modern
ConvNeXt network family) to classify transient noise signals (i.e.~glitches)
and gravitational waves in data from the Advanced LIGO detectors. First, we use
models with a supervised learning approach, both trained from scratch using the
Gravity Spy dataset and employing transfer learning by fine-tuning pre-trained
models in this dataset. Second, we also explore a self-supervised approach,
pre-training models with automatically generated pseudo-labels. Our findings
are very close to existing results for the same dataset, reaching values for
the F1 score of 97.18% (94.15%) for the best supervised (self-supervised)
model. We further test the models using actual gravitational-wave signals from
LIGO-Virgo's O3 run. Although trained using data from previous runs (O1 and
O2), the models show good performance, in particular when using transfer
learning. We find that transfer learning improves the scores without the need
for any training on real signals apart from the less than 50 chirp examples
from hardware injections present in the Gravity Spy dataset. This motivates the
use of transfer learning not only for glitch classification but also for signal
classification. | Tiago S. Fernandes, Samuel J. Vieira, Antonio Onofre, Juan Calderón Bustillo, Alejandro Torres-Forné, José A. Font | 2023-03-24T11:12:37Z | http://arxiv.org/abs/2303.13917v1 | # Convolutional Neural Networks for the classification of glitches in gravitational-wave data streams
###### Abstract
We investigate the use of Convolutional Neural Networks (including the modern ConvNeXt network family) to classify transient noise signals (i.e. glitches) and gravitational waves in data from the Advanced LIGO detectors. First, we use models with a supervised learning approach, both trained from scratch using the Gravity Spy dataset and employing transfer learning by fine-tuning pre-trained models in this dataset. Second, we also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels. Our findings are very close to existing results for the same dataset, reaching values for the F1 score of 97.18% (94.15%) for the best supervised (self-supervised) model. We further test the models using actual gravitational-wave signals from LIGO-Virgo's O3 run. Although trained using data from previous runs (O1 and O2), the models show good performance, in particular when using transfer learning. We find that transfer learning improves the scores without the need for any training on real signals apart from the less than 50 chirp examples from hardware injections present in the Gravity Spy dataset. This motivates the use of transfer learning not only for glitch classification but also for signal classification.
## I Introduction
The era of Gravitational-Wave (GW) Astronomy started in 2015 with the detection of the GWs emitted in the coalescence of two black holes [1] by the Advanced LIGO detectors [2]. To date, the LIGO-Virgo-KAGRA (LVK) collaboration [2; 3; 4] has reported results from three observing runs (O1, O2, and O3) which comprise 90 confident detections. All signals observed are consistent with being produced in compact binary coalescences (CBCs), namely mergers of binary black holes (BBH), binary neutron stars (BNS), and binaries of a neutron star and a black hole (NSBH) [5; 6; 7; 8; 9]. With the start of the fourth observing run (O4) in May 2023, the number of confident detections is expected to significantly increase, yielding an estimated CBC rate of about one detection per day [10].
GW detectors operate in extremely low noise conditions which may be disrupted by the ground motion surrounding the detectors or earthquakes, storms or even anthropogenic sources of noise [11]. More than 200 thousand auxiliary channels are constantly monitored to minimize instrumental noise [12]. These include angular drift of optics, light transmitted through mirrors as well as actuation signals used to control optic position in order to ensure optical cavity resonance [11; 12]. With a higher detection rate, transient noises of instrumental or environmental origin, commonly dubbed "glitches" [13], will become increasingly more of a concern for the LVK detectors. Since glitches can mimic actual GW signals they hinder the sensitivity of the detectors by increasing the false-alarm rate of true GW signals. Reducing the rate of glitches requires a deep understanding of their nature and physical origin, which in some cases is unclear. Blip glitches constitute a particularly harmful example for the LIGO detectors, where an average rate of two such glitches per hour of data was measured during O1 and O2 [14].
Many different approaches to classify noise transients and subtract them from strain data have been developed over the years [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. Current LVK efforts include e.g. BayesWave [17] and gwsubtract [25], which model glitches using sine-Gaussian wavelets and use a linear subtraction algorithm to remove them from the data stream. Moreover, GW searches (e.g. [35; 36; 37]) also use strategies to veto glitches, as e.g. the \(\chi^{2}\) time-frequency discriminator of [15] or a sine-Gaussian veto [24]. Glitch classification and mitigation using probabilistic principal component analysis has been put forward in [20; 18; 21]. Moreover, the Gravity Spy citizen-science project [13] has
allowed to confidently divide Advanced LIGO glitches into classes, and provides a reliably classified glitch catalog that can be used by noise classification and subtraction methods. Glitches from the Advanced Virgo detector have also been introduced in more recent versions of Gravity Spy. A similar citizen-science project focused on the Advanced Virgo detector, GwitchHunters [34], introduced the localization of glitches in time-frequency space, as well as the search for correlations in the detector's auxiliary channels. A variety of methods that are becoming increasingly important for glitch characterization are Machine Learning (ML) methods. Those have already been extensively applied to this topic (see [38] and references therein). For instance, ML and Deep Learning (DL) methods have been used to predict the existence of glitches using only auxiliary channel data [28; 16] or using both the GW strain and auxiliary channels [29]. Furthermore, dictionary learning techniques have been used to reconstruct and remove blips from GW data streams in [26; 27].
Another interesting line of work frames the glitch classification problem as a computer vision task, where glitches are converted from a time-series to spectrogram images, suitable for classification taking advantage of the rapid advances of DL methods for computer vision. In Ref. [23] this approach was successfully tested with simulated glitches, using a custom-made Convolutional Neural Network (CNN). After this initial success, images from the Gravity Spy dataset have also been used to train CNNs for glitch classification. First, custom-made CNNs with two to five layers were trained from scratch for 200 epochs, and individual models achieved satisfactory results, with accuracies over 95% [39]. Subsequently, deeper pre-trained networks with more powerful architectures were trained on the same dataset, and it was found that transfer learning reduced training time and improved results, reaching the state-of-the-art accuracy of 98.8% in the Gravity Spy dataset [21]. In that work, popular CNN architectures like Inception [40; 41], ResNets [42], and VGGs [43] were trained using transfer learning, for up to 100 epochs, and it was found that, for some architectures, transfer learning allowed to achieve accuracies higher than 98% within 10 epochs.
Recent advances in computer vision research, like the discovery of better training recipes [44; 45; 46; 47] or more powerful architectures [48; 49; 50; 51], could potentially improve glitch classification even more. With this in mind, this paper focuses on the classification of glitches in image format (spectrograms), using the latest available techniques to attempt to improve the state-of-the-art. In particular, we aim at improving present-day results by using the 1cycle training policy [44], trying a more modern and powerful CNN architecture family, the ConvNeXt [48], training models using both supervised and self-supervised methods, and using the typical three subset split (train/validation/test) for better assessment of the model-generalization performance. We also analyze in this paper the use of transfer learning applied to O3 data. It should be stressed that the networks are trained using glitches from the Gravity Spy catalog, which only includes O1 and O2 data, and they are not retrained or fine-tuned on O3 data. We find that transfer learning significantly increases the flexibility of the algorithms and their performance. This approach might be advantageous for future GW observing runs, as it is not as sensitive to changes in the detector background noise.
This paper is organized as follows: Section II discusses the supervised and self-supervised DL methods that we employ for our study. In Section III we present the dataset used. Section IV contains our results on glitch classification including, as well, a preliminary analysis of the performance of our methods with O3 data and actual GW signals. Finally, in Section V we present our main conclusions and outline our plans for future work.
## II Deep Learning Methods
Deep Learning is a subfield of Machine Learning that relies on artificial neural networks with many intermediate layers, each of which builds on the representations learned by the previous layer in order to automatically develop increasingly complex and useful representations of the data [52]. While shallow learning methods, i.e. methods with very few layers, rely on carefully handcrafted features to produce useful results, DL requires no feature engineering, as it can autonomously find what are the useful features, considerably simplifying ML workflows.
According to the experience1 the algorithms acquire during the training phase, DL methods can be categorized as unsupervised, supervised or self-supervised algorithms. In all cases, the algorithms experience an entire dataset, \(\mathbf{X}\), which is a collection of \(m\) data points \(\mathbf{x}^{(i)}=(x_{0},x_{1},...,x_{n})\), each of which has \(n\) features. Unlike unsupervised algorithms, in supervised learning algorithms each example \(\mathbf{x}^{(i)}\) has, in addition, an associated label, \(y^{(i)}\), which is the target for the prediction of the point. Similarly, self-supervised learning uses the same dataset composed of \(n\) samples, with targets for the predictions indicated by the labels. However, the labels of those same samples are replaced by automatically generated labels, referred to as pseudo-labels. In this work, both supervised and self-supervised algorithms are used.
Footnote 1: Here, “experience” is used with the same meaning as in Tom Mitchell’s Machine Learning definition [53].
### Supervised Deep Learning
The ResNet architecture family [42] is used in this investigation as a baseline for the classification task. ResNets are a family of CNNs [54; 55] which use residual connections, i.e. direct paths between non-adjacent
layers, to allow training deeper models. In the original implementation, ResNets end with a fully connected layer with 1000 units, corresponding to the number of classes in the ImageNet dataset [56]. For other applications, the number of neurons in this last layer is changed to match the number of classes.
The models were trained using the fastai library [57], taking advantage of its implementation of the 1cycle training policy [44], and training using the AdamW optimizer [47]. In addition, the learning rates were selected using the learning-rate finder [44]. Mixed precision training [45] was also used to reduce the memory requirements and speed up training.
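A minimal sketch of this training recipe is given below; the folder layout, batch size and metric choices are illustrative assumptions rather than the full training script.

```python
# Minimal sketch of the supervised training recipe: lr finder, 1cycle policy, mixed precision.
from fastai.vision.all import *

# Assumed layout: one sub-folder per glitch class containing the spectrogram images.
dls = ImageDataLoaders.from_folder('gravityspy/', valid_pct=0.15,
                                   item_tfms=Resize((140, 170)), bs=64)
learn = vision_learner(dls, resnet18, pretrained=False,
                       metrics=[accuracy, F1Score(average='macro')]).to_fp16()  # fp16 training
lr = learn.lr_find(suggest_funcs=(steep,)).steep   # steepest-point suggestion from the lr finder
learn.fit_one_cycle(15, lr_max=lr)                 # 1cycle schedule; fastai's default optimizer
                                                   # applies decoupled weight decay (AdamW-style)
```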
In this work we employ the cross-entropy loss, which for the case of a single example, is given by
\[\mathcal{L}(\mathbf{\theta})=-\sum_{k=1}^{K}y_{k}\log(\hat{p}_{k}). \tag{1}\]
Above, \(y_{k}\) denotes the target probability of class \(k\), typically either equal to 0 or 1, \(K\) denotes the total number of classes (equal to the number of neurons in the output layer), and \(\hat{p}_{k}\) denotes the probability that the example belongs to class \(k\). A weighted loss function was also tried, to penalize more the model for mistakes in the less represented classes, by introducing a class-dependent weight, \(w_{k}\), in the cross-entropy loss:
\[\mathcal{L}(\mathbf{\theta})=-\sum_{k=1}^{K}w_{k}\ y_{k}\log(\hat{p}_{k}). \tag{2}\]
This approach was tried using class weights inversely proportional to the number of samples in each class,
\[w_{k}=\frac{1}{N_{k}}, \tag{3}\]
with \(N_{k}\) being the total number of samples of class \(k\). In addition, the Effective Number of Samples (ENS) approach [58], which obtains the weights associated with each class using
\[w_{k}=\frac{1-\beta}{1-\beta^{N_{k}}}, \tag{4}\]
was also tried. Here, \(\beta\) is a hyperparameter that allows to control the re-weighting degree. Its value was fixed to \(\beta=0.99\), following the suggestion in [58]. In both cases, the class weights are normalized by dividing each class weight by the sum of the class weights and multiplying by the number of classes, in order to create weights closer to unity.
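A minimal sketch of the two re-weighting schemes and their normalization is given below; the per-class sample counts are illustrative.

```python
# Minimal sketch of inverse-frequency and ENS class weights for a weighted cross-entropy loss.
import numpy as np
import torch

counts = np.array([1869, 27, 190, 450])          # hypothetical per-class sample counts N_k
beta = 0.99

w_inverse = 1.0 / counts                          # inverse class frequency, Eq. (3)
w_ens = (1.0 - beta) / (1.0 - beta ** counts)     # effective number of samples, Eq. (4)

def normalize(w):
    return w / w.sum() * len(w)                   # rescale so the weights are close to unity

weights = torch.tensor(normalize(w_ens), dtype=torch.float32)
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)   # weighted cross-entropy of Eq. (2)
```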
Moreover, the use of the focal loss function, introduced in Ref. [59], was also tested, since it adds to the cross-entropy loss a modulating factor which down-weights easy examples, that is, examples where the model is more confident. The focal loss is defined by
\[\mathcal{L}(\mathbf{\theta})=-\sum_{k=1}^{K}w_{k}\ (1-\hat{p}_{k})^{\gamma}\ y_{k} \log(\hat{p}_{k}), \tag{5}\]
where \(\gamma\) is the focusing parameter, which was kept to the default value of 2.0 [59].
In addition to training ResNets from scratch, transfer learning [60] has also been considered in our investigation. Transfer learning is a technique that allows to take advantage of knowledge learned in one domain, typically from a task that has many labelled examples, and to apply it to a different but related task which may have a limited number of examples. This technique is one of the cornerstones of the fastai library, as it allows to train more accurate models more quickly, and using less data, as compared to training the same models from scratch [61]. We found it interesting to explore transfer learning here to also understand if the networks (pre-trained in datasets which are different from the ones they are applied to) improve performance (in comparison to when no transfer learning is used), also allowing them to be more robust against changes in the detector background.
In the default fastai transfer-learning approach, after assigning the pretrained weights, the final layer is replaced with a fully connected layer with the correct number of outputs, and it is initialized with entirely random weights. Then, the networks are trained with fastai's fine_tune training schedule [62], which first trains the last layer with the others kept frozen, and then trains the whole network using lower learning rates, while also taking advantage of discriminative learning rates.
For the networks trained with transfer learning, the ConvNeXt network family [48] was also tried. The design of this CNN family is inspired on the successful Vision Transformers such as ViT [49], allowing it to achieve very competitive performances (see e.g. [63]). More details about the architectures used in this work can be found in the appendix of [64].
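For illustration, the default fastai transfer-learning recipe with a ConvNeXt backbone can be sketched as follows; the data-loading line and epoch counts are assumptions, and the string architecture name is resolved through the timm library.

```python
# Minimal sketch of fastai's fine_tune schedule with a pre-trained ConvNeXt backbone.
from fastai.vision.all import *

dls = ImageDataLoaders.from_folder('gravityspy/', valid_pct=0.15,
                                   item_tfms=Resize((140, 170)), bs=64)   # assumed layout
learn = vision_learner(dls, 'convnext_nano', pretrained=True,
                       metrics=[accuracy, F1Score(average='macro')]).to_fp16()
# fine_tune: the frozen epoch trains only the new head, then the whole network is
# unfrozen and trained with lower, discriminative learning rates.
learn.fine_tune(9, freeze_epochs=1)
```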
### Self-supervised Deep Learning
Self-supervised DL is a recently introduced method in the field of ML [65]. This method involves training a model for a pretext task before training the model for the desired (downstream) task, as sketched in Figure 1. For instance, a model can be trained to detect what kind of visual transformation is applied to an image and, after that, the acquired knowledge is transferred to another model that is trained with labeled data for the downstream task. To solve the pretext task, the samples contained in the dataset must have their pseudo-labels generated, as observed in box (a) of Figure 1. In this case, each one is associated with the type of visual transformation applied to the image. The model is then trained, allowing it to learn features/patterns about the images in the context of this so-called pretext task. The training phase is approached in the same way as in supervised learning, with the objective of minimizing the loss function in order to match the model's prediction to the targets, in this case given by the pseudo-labels.
Following this first training phase, transfer learning is applied by removing the last layer of the trained model and replacing it with a randomly initialized layer of the same type. This updated model is then trained for the downstream task, as shown in box (b) of Figure 1, where the dataset used to feed the neural network already has human-made labels. In theory, the pretext task allows the model to learn certain useful patterns from the images, making the downstream model easier to train. Usually the pretext task tackles a problem that is a subcategory of the problem associated with the downstream task. This technique is particularly appropriate when a large number of samples is available but only a few of them are labelled.
The architecture we chose for the self-supervised approach was the ResNet18, with randomly initialized parameters. This model is trained using a dataset composed of images that only correspond to a specific time-span window. Data augmentation is used to increase the variety in the dataset and avoid overfitting. Similarly to the supervised DL methods discussed before, the AdamW optimizer and cross-entropy loss function are used during training, using the 1cycle training policy and the learning rate finder to select a learning rate.
### Model evaluation
The metric we select to compare possible model configurations is the F1 score. This is the harmonic mean of precision (the ratio of correct positive predictions to the total number of positive predictions) and recall (the ratio between the number of correct positive predictions and the total number of positive examples)
\[\text{F1 score}=\frac{2}{\frac{1}{\text{precision}}+\frac{1}{\text{recall}}}= \frac{\text{TP}}{\text{TP}+\frac{\text{FN}+\text{FP}}{2}}. \tag{6}\]
Above, TP, FP and FN denote the number of true positives, false positives and false negatives, respectively.
The F1 score can be computed for each class but this raises the issue of having multiple metrics. In order to have one single overall metric, the macro-averaged F1 score is used instead. This metric is calculated by plugging in the macro-averaged precision and macro-averaged recall, which are respectively the precision and recall averaged across all classes, in Eq. (6).
As models that train faster are easier to optimize and can also result in faster inference times, a new metric was created to penalize models that take longer to train:
\[\text{combined\_F1\_time}=\text{F1\_score}-\frac{\text{total\_runtime}}{3\times 10^{4}}. \tag{7}\]
The numerical value in the equation was conveniently chosen so that an increase of 0.2 percentage points in the F1 score will only be accepted if it adds less than 60 seconds to the training time. Thus, this metric allows to search for better models without increasing too much the training time.
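A minimal sketch of this selection metric, built on scikit-learn's macro-averaged F1 score, is shown below; the predictions, targets and runtime value are illustrative.

```python
# Minimal sketch of the combined selection metric of Eq. (7).
from sklearn.metrics import f1_score

def combined_f1_time(y_true, y_pred, total_runtime_s):
    macro_f1 = f1_score(y_true, y_pred, average='macro')   # macro-averaged F1 score
    return macro_f1 - total_runtime_s / 3e4

# A 0.2 percentage-point gain in F1 is only worthwhile if it costs less than 60 s of training.
print(combined_f1_time([0, 1, 2, 1], [0, 1, 2, 2], total_runtime_s=120.0))
```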
The metrics were computed on the validation dataset for the candidate models, with only the best configurations being chosen for evaluation on the test set. The use of three separate subsets in the model training and evaluation process is of utmost importance. The inclusion of an intermediate validation set allows to evaluate different model candidates on this set and select them accordingly based on their performance. Then, the selected models can be evaluated on the test set, which had not been used to guide previous decisions, to provide a more accurate estimation of the models' generalization. Had the intermediate set not been used, the test set would provide an over-confident estimate of the models' performance, as they had been optimized for that same dataset.
## III Dataset
The Gravity Spy dataset, introduced in [13; 39], is a collection of spectrograms of (mostly) glitches identified during the O1 and O2 observing runs in the Advanced LIGO detectors. This dataset presents a multi-class classification problem, where the goal is to assign the only correct glitch class to each noise sample. Each sample contains four spectrogram images centered at the same
Figure 1: Scheme of the self-supervised DL framework during training. In the dashed green box (a), labels are automatically generated for a pretext task (i.e. the image transformation imposed on the data is to be predicted by the model). In the dashed purple box (b), the architecture’s last layer is changed to solve a related problem using labeled data.
instant in time but with different durations of 0.5, 1.0, 2.0 and 4.0 seconds, all represented as \(140\times 170\) pixels grey-scale images, as seen in Figure 2. In this work we directly use the public images from Gravity Spy. It can be observed that the glitch classes vary widely in duration, frequency range and shape. Nevertheless, some classes can be easily mistaken for each other due to their similar morphology, like the Blip and Tomte classes [31], or if they are not visualized using the appropriate time window. For instance, a Repeating_Blips sample may be identified as a Blip if the window is too small.
In the version of the dataset used in this work2, v1.0, there are 8583 glitch samples distributed unevenly over 22 different classes, which are thoroughly described in [13]. The majority of examples are Blips, with almost 1900 samples, while there are five minority classes (Paired_Doves, Wandering_Line, Air_Compressor, Chirp, and None_of_the_Above) with less than 190 examples (10% of the number of Blips), the less represented being the Paired_Doves, with only 27 examples. The Chirp class does not actually represent glitches but instead "hardware injections", which are simulated signals created to resemble real GWs from CBCs, and used for the testing and calibration of the detectors [39]. The Gravity Spy dataset is already split, using stratified sampling to ensure similar distributions in each subset, into training (70%), validation (15%) and test (15%) sets, to facilitate the comparison of different methods.
Footnote 2: [https://zenodo.org/record/1476156](https://zenodo.org/record/1476156)
## IV Results
### Supervised models
We start discussing the results of glitch classification for supervised DL models. For a quick search for a baseline classifier the ResNet18 and ResNet34 architectures were chosen, with an adapted number of inputs for the first convolutional layer, and using random weight initialization. Both architectures were trained from scratch with fastai's fit_one_cycle routine over 15 epochs with the steepest point from the learning rate finder as the maximum learning rate; with the other hyperparameters set to fastai's defaults.
Several approaches were tested regarding the amount and format of the information provided to the model.
Figure 2: Examples of the different glitch classes from the Gravity Spy dataset. Note that the classes None_of_the_Above and No_Glitch are not represented in the figure. Moreover, the timescale is not equal for all images and the grey-scale spectrograms are shown with a colour map for better visualization.
In the simplest approach, the model had only access to one of the four spectrograms for each example, always with the same duration. This corresponds to views labelled single1 (0.5 s), single2 (1.0 s), single3 (2.0 s) and single4 (4.0 s). Thus, these single-view models accepted batches of grey-scale (1-channel) \(140\times 170\) (height \(\times\) width) pixels images. The second approach was a merged view model inspired by [66], which consists of placing the four single view \(140\times 170\) images next to each other, forming a \(280\times 340\) pixels image. Finally, we implemented an approach based on encoding information related to a different time duration in each image channel, as introduced in [67]. Encoded views with all combinations of two to four time durations were compared.
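A minimal sketch of assembling one such encoded view (here encoded134: the 0.5 s, 2.0 s and 4.0 s windows stacked as image channels) is shown below; the file names are illustrative.

```python
# Minimal sketch of building the encoded134 view from three single-view spectrograms.
import numpy as np
from PIL import Image

def load_view(path):
    return np.array(Image.open(path).convert('L'))        # 140 x 170 grey-scale view

encoded134 = np.stack([load_view('glitch_0.5s.png'),      # channel 0: 0.5 s window
                       load_view('glitch_2.0s.png'),      # channel 1: 2.0 s window
                       load_view('glitch_4.0s.png')],     # channel 2: 4.0 s window
                      axis=-1)
Image.fromarray(encoded134).save('glitch_encoded134.png') # 3-channel image fed to the CNN
```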
A hyperparameter grid search was run over the two model architectures and all the possible views, in order to select the best views and choose a baseline model. The results in terms of each view are shown in Figure 3. It was found that the single views were outperformed by almost all the other views. The highest average F1 scores were obtained by encoded134. Remarkably, this outperformed merged view and encoded1234, which use all the available information. Taking the total training time in consideration using the combined_f1_time metric from Eq. (7), we conclude that encoded134 is the best view, as the merged view takes about 6 more seconds per epoch (a 70% increase) than view encoded134, while achieving slightly worse performance regarding the F1 score. Therefore, encoded134 was selected as the view used for the rest of the supervised models.
In addition, the results of the grid search were also evaluated in terms of the chosen architecture, as depicted in Figure 4. This figure shows box plots of the combined_F1_time for the validation set3. We found that, on average, ResNet18 achieves better results both on the F1 score and the training time, resulting in a median combined_F1_time 0.6 percentage points higher than ResNet34. Thus, the ResNet18 was chosen for the baseline model.
Footnote 3: In box plots (which we employ in several figures in this paper) the lower limit of the box corresponds to the 25\({}^{\text{th}}\) percentile of the data, and the upper limit to the 75\({}^{\text{th}}\) percentile. The line which goes through the box represents the median of the observations. The whiskers represent the maximum and minimum (excluding outliers) of the quantity being displayed, and the individual points (if any) are outliers, i.e. points that are more than 1.5 inter-quartile distance away from the closest quartile.
After deciding the baseline configuration, five models were trained for 15 epochs each, using the same configuration as before, and selecting the ResNet18 architecture and the encoded134 view. The F1 scores of the five training runs range between 96.7% and 98.1%, and the best one was chosen as the baseline model.
An attempt was made to tune the learning-rate choice method, the number of epochs and the batch size, with the goal of further optimizing the model's performance. Hence, a Bayesian sweep was performed over 50 configurations of the following three hyperparameters: (a) number of epochs, between 8 and 25; (b) batch size, sampling from the set [256, 32, 64, 128]; and (c) learning rate finder function, sampling from the four available methods in fastai (steep, valley, slide, minimum). It was found that no model obtained an F1 score higher than the baseline, although some models reached a slightly better combined_f1_time score by maintaining a comparable F1 score while training for less epochs.
The weighted loss function from Eq. (2) was also tried, in order to further penalize the model for mistakes in the least represented classes. The re-weighting strategies were compared using the baseline configuration in
Figure 4: Box plots with the comparison of the architectures regarding combined_f1_time, computed over the validation set.
Figure 3: Comparison of the different views regarding combined_f1_time computed over the validation set. The horizontal bars show the average of the result of the two architectures, ResNet18 and ResNet34.
addition to a configuration equal to the baseline but with the ResNet34 architecture instead of the ResNet18. Each configuration/strategy combination was run for 10 times, yielding the results shown in Figure 5. When the inverse weighting strategy is introduced, the performance of the baseline decreases, while the ResNet34's F1 scores slightly increase. The effective re-weighting strategy seems to result in similar performance for the baseline in comparison to the models without re-weighting, while the ResNet34 models, once again, slightly improve their performance.
Next, the focal loss from Eq. (5) was tried by training the baseline configuration using this loss five times for each of the previous weighting strategies (including the absence of re-weighting). The results are shown in Figure 6. Focal loss does not seem to work with the inverse re-weighting strategy and yields similar performance without re-weighting and using the effective number of samples strategy. The results with focal loss are worse than the results with cross-entropy in all cases.
### Supervised models with transfer learning
After being unable to improve the baseline model while training from scratch, an approach using transfer learning was attempted. Initially, models pre-trained on the ImageNet dataset using the ResNet18 and ResNet34 architectures were trained with fastai's fine_tune routine with the same configuration as the baseline model. As the results suggested that the ResNet18 architecture may not be the best one for transfer learning, other architectures from the ResNet family were also tried (ResNet26 and ResNet50) as well as architectures from the ConvNeXt family (ConvNeXt_Nano and ConvNeXt_Tiny). Having chosen these six architectures, a grid sweep was performed in order to compare them. In particular, we trained each model for 4, 9 and 14 epochs (in addition to the initial frozen epoch), using two different learning rate functions, steep and minimum, while keeping the other hyperparameters at the default values. In total, each architecture was trained six times, with the results shown in Figure 7. It was observed that ResNet18 produced the best single run, but it appears to be worse than all the other architectures when averages and medians over multiple training runs are considered. The performance of the ResNet family seems to peak with the ResNet34 architecture but it is surpassed, on average, by the ConvNeXt family. ConvNeXt_Nano is close to ConvNeXt_Tiny in terms of F1 score but is preferable when the training time is considered, resulting in higher combined_f1_time values. For these reasons, ConvNeXt_Nano was chosen as the architecture for subsequent optimizations.
A Bayesian sweep was then performed to optimize the hyperparameters of the transfer learning model, with the ConvNeXt_Nano architecture. Fifty configurations were evaluated, given the following parameter space: (a) number of frozen_epochs, with a uniform integer distribution between 1 and 5; (b) number of unfrozen epochs, using a uniform integer distribution between 1 and 10; (c) batch size, sampling from the set [256, 256, 32, 64]; (d) learning rate finder function, sampling from the four available methods in fastai (slide, valley, steep, minimum); (e) loss function: either cross-entropy or focal loss; and (f) re-weighting strategy, which could be none, inverse of class frequency, or ENS.
The six overall best configurations of the sweep as well as the three best models trained for 6 epochs or less were more thoroughly evaluated by re-training them five times each. From these, the three most promising configurations in terms of F1 score were selected, which have the following hyperparameters: (a) tl_best5: 2 frozen and 8 unfrozen epochs, batch size of 64,
Figure 5: Box plots of the F1 scores on the validation set for three different weighting strategies using three different configurations.
Figure 6: Box plots of the F1 scores on the validation set for the cross-entropy loss and focal loss functions, using the baseline configuration with three different re-weighting strategies.
minimum learning rate selection function and cross-entropy loss with inverse class re-weighting; (b) tl_fast1: the same hyperparameters as tl_best5 except for a quicker training with 1 frozen and 5 unfrozen epochs; and (c) tl_best6: 1 frozen and 9 unfrozen epochs, batch size of 256, steep learning rate selection function and focal loss with ENS re-weighting. They were trained five more times, totaling ten models for each, and the results are shown in Figure 8.
The highest score was obtained by the best run of the tl_best5 configuration, which beat the baseline's best run. The tl_best5 and tl_fast1 configurations achieved F1 scores higher than the baseline median in all runs. Moreover, tl_fast1 has the highest median among all models.
The model with the highest overall score, corresponding to the best run of tl_best5, was chosen as the transfer learning classifier. In comparison with the baseline model, it only increases the F1 score from 98.07% to 98.21%, but the transfer learning configuration appears to result in more stable performances across multiple training trials, with its worst performance higher than the baseline median. In fact, the baseline configuration achieves an F1 score of \((97.4\pm 0.5)\%\), less than tl_best5's \((97.8\pm 0.2)\%\) and tl_fast1's \((97.9\pm 0.2)\%\). Note that the previous F1 scores are reported in the form \((\mu\pm\sigma)\%\), where \(\mu\) is the distribution average and \(\sigma\) its standard deviation. We highlight that these models were very quick to train, with training times between 1 and 2 minutes using a NVIDIA RTX A4000 GPU (16 GB VRAM).
### Self-supervised model
In contrast to supervised learning, self-supervised learning is a two-phase process that partly uses automatically generated labels, i.e. pseudo-labels, during the training of the architecture. This can be considered an advantage for datasets where data is abundant but the labeling of the samples is scarce or incorrect. As stated in Section II.2, the self-supervised learning framework first needs to solve the pretext task and then the downstream task. This means that this approach utilizes two distinct training phases, each one using its own independent dataset. The first training phase gives rise to the first model (I), which solves the pretext task. In the second phase, we apply transfer learning to the previous model, to obtain the second and final model (II), which solves the downstream task.
After applying the image transformations (in the first phase) or image augmentations (in the second phase), all the images for both datasets are then resized to 128\(\times\)128 pixels through interpolation. This is done to make the models train faster since the input's size is smaller. Just like in supervised models, the F1 score and accuracy are the metrics being examined. Both metrics are only applied to the validation dataset. In both phases the batch size is set to 64.
Model (I) was trained from scratch using pseudo-labels, with a dataset composed of single view images from all four time-span windows (0.5 s, 1.0 s, 2.0 s, and 4.0 s). Each signal has thus four independent single view images associated with it. The visual transformation is picked from a list of operations like random crops (crop), horizontal and vertical flips (flipVertical and flipHorizontal), color jitters (colorize), and rotations of 90\({}^{\circ}\) and 270\({}^{\circ}\) (rotate90 and rotate270). The no-transformation case was also used (labeled as noTransformation). For each image, one visual transformation is randomly selected from the list. Once the visual transformation is applied,
Figure 8: Box plots of the F1 scores on the validation set of the ten runs for the baseline and the best three configurations selected from the second Bayesian sweep.
Figure 7: Box plots with the comparison of the architectures regarding combined_f1_time, computed over the validation set.
a pseudo-label is associated with it. For instance, if the vertical flip option is selected from the list, the resulting image will be flipped vertically and then resized to 128 pixels by 128 pixels. The resulting pseudo-label is denominated as the flipVertical class.
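The pseudo-label generation for this pretext task can be sketched as follows; the colour-jitter and crop parameters are illustrative choices rather than the exact values used.

```python
# Minimal sketch of pretext-task pseudo-labelling: draw one transformation at random,
# apply it, and keep its name as the pseudo-label.
import random
from PIL import Image
from torchvision import transforms
from torchvision.transforms import functional as TF

PRETEXT_OPS = {
    'noTransformation': lambda img: img,
    'flipHorizontal':   TF.hflip,
    'flipVertical':     TF.vflip,
    'rotate90':         lambda img: TF.rotate(img, 90, expand=True),
    'rotate270':        lambda img: TF.rotate(img, 270, expand=True),
    'colorize':         transforms.ColorJitter(brightness=0.4, contrast=0.4),
    'crop':             transforms.RandomResizedCrop(128),
}
RESIZE = transforms.Resize((128, 128))   # every sample ends up as a 128 x 128 image

def make_pretext_sample(img):
    pseudo_label, op = random.choice(list(PRETEXT_OPS.items()))
    return RESIZE(op(img)), pseudo_label

example, label = make_pretext_sample(Image.new('L', (170, 140)))  # dummy grey-scale image
print(label, example.size)
```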
For simplicity, model (II) only utilizes data with 4-second time windows, although we note that certain types of glitches require longer windows to be recognized. Similar to the previous dataset, all images are single views. As a consequence, the dataset used to train model (I) has four times more images for each epoch compared to the dataset from model (II). During the training phase of this model, the original labels are used instead of the pseudo-labels. In this phase, data augmentation is applied to avoid overfitting, as explained below. Before resizing the image, each one is rotated between -5 and 5 degrees, then undergoes a color jitter, and finally a random crop is applied. Note that since the original labels are used, there is no longer a correlation between the class and the type of augmentation applied to the image.
For both models, the learning rate finder is used to determine the values of the learning rate. For model (I) the learning rate is \(6.92\times 10^{-3}\), and for model (II) it is \(5.75\times 10^{-4}\). Since the 1cycle policy is used during training, these values actually correspond to the maximum learning rate, instead of a single fixed learning rate value for all epochs.
Model (I) was trained during 10 epochs since both the values of the loss and the metrics quickly converged after the first few epochs, as shown in Figure 9(a) and (c). Model (II), however, was trained for 30 epochs. The evolution of the metrics for this model, as well as the loss, are represented in Figure 9(b) and (d). The values start to converge somewhere between the \(10^{\text{th}}\) and the \(15^{\text{th}}\) epoch. In the first five epochs, only the parameters of the last layer were allowed to change with a maximum learning rate determined by the learning rate finder. In the remaining 25 epochs, all architecture parameters can be modified, with the maximum learning rate being \(5.75\times 10^{-5}\), a 10 times smaller rate than in the first five epochs. The resulting final model (II) has an F1 score of 92.74% and an accuracy of 96.74%, evaluated on the validation dataset.
### Evaluation of the best models on the test set
The best models found in the previous sections are now evaluated on the test dataset, for both supervised and self-supervised approaches. It is worth recalling that this dataset was not used during the training of the neural networks.
From the supervised models, both the baseline and tl_best5 models were selected to be evaluated on the test dataset, based on their performance on the validation dataset. We found that the F1 score of the baseline slightly dropped to 97.18%, while tl_best5, the best model on the validation dataset, performed worse than the baseline on the test dataset, with an F1 score of 96.84%. Nevertheless, the F1 scores obtained on data which was never seen by the models nor used to perform choices regarding the hyperparameter optimization, are still very high for both models. The confusion matrix of the baseline model, using the predictions on the test dataset, is presented in Figure 10. Both precision and recall are, at least, 95% for 19 out of the 22 classes of glitches. The model appears to have more difficulties in the Air_Compressor and None_of_the_Above classes. This may be explained by the fact that, for these classes, the number of images in the training set is significantly smaller than for the other classes. Note that the None_of_the_Above class, in particular, has a very diverse distribution since it encompasses every glitch outside the defined classes. This class has been deprecated from the glitch classes for O3 data [68].
For the self-supervised model, the corresponding test dataset confusion matrix is shown in Figure 11. The model shows that the recall and precision scores are, at least, 95% for 14 out of the 22 glitch classes. This is slightly worse than the result obtained under the supervised approach, as expected. In particular, the model struggles to identify several other classes beyond those already signaled for the supervised approach above, like the Wandering_Line and Repeating_Blips classes. This happens due to the fact that, for simplicity, these models were only trained with datasets consisting of 4-second wide signals in contrast to some of the previous models that were trained with encoded views. As a result of the resized augmentation, the performance of the model
Figure 9: Epoch evolution of the accuracy and macro-averaged F1 score for training of model I (a) and model II (b). The corresponding losses are represented in panels (c) and (d).
might also be affected, as reducing the number of pixels will reduce the image's details. For these reasons, this model did not perform as well as the supervised models. However, the performance is still high, with an overall accuracy of 96.50% (relative to all classes) and a macro-averaged F1 score of 94.15% on the test dataset. For completeness, the confusion matrix of the model trained with pseudo-labels, before transfer learning is applied (first training phase), is also shown in Figure 12. We note that the model has some difficulties distinguishing between the pseudo-labels resize and flipHorizontal, which makes sense since some glitches have vertical symmetry.
The two best supervised models, as well as the best self-supervised model, were compared to the literature in terms of their F1 score and accuracy. The results are shown in Table 1. Both supervised models obtained competitive results, with accuracies higher than the merged view CNNs reported in [39, 66], and the baseline model's performance was also better than the hard fusion ensemble of CNNs [39]. This model, which uses a ResNet18 architecture trained from scratch, is only slightly worse than the fine-tuned ResNet50 from [21], with a decrease in F1 score of less than 0.2 percentage points. It should be stressed, however, that Ref. [21] used a different dataset split, without a validation dataset, resulting in more training data and a different test set. More importantly, the absence of a validation set could imply that the results reported by [21] are over-confident: the test set performance is reported for epochs that were themselves selected because they had the highest test set performance. On the other hand, the scores of our self-
Figure 10: Normalized confusion matrix of the predictions of the baseline model over the test dataset, for the supervised approach.
supervised model are slightly lower than those of the supervised approaches, but the reported numbers are still high and promising, as they are close to the best performance achieved in Ref. [39] using a single view.
### Evaluation of the best models on GW signals
As mentioned before, the Gravity Spy dataset contains a few examples of the Chirp class, which are simulated GW signals from CBCs. In addition, the best models have shown perfect precision and recall for chirp classification in the Gravity Spy test dataset (cf. Figure 10). These two reasons motivated us to conduct a brief investigation of whether the models trained in this work would be able to identify real detections of GWs as Chirps, using data from the Advanced LIGO detectors collected during the third LVK observing run (O3).
The strain data for the confident detections in the O3 run, for each Advanced LIGO detector, Hanford ("H1") and Livingston ("L1"), was obtained from the Gravitational Wave Open Science Center (GWOSC) [69]. For each data sample, a Q-graph of the strain data in the 5 seconds around the GPS time of the detection was obtained using the q_transform method from the GWpy library [70]. Furthermore, four crops of this Q-graph, centered at the time with the highest energy, were obtained, one for each different single view duration in the Gravity Spy dataset. From the resulting 178 samples (89 events × 2 detectors), many were discarded due to the GW signal being buried in the background noise, with no visible chirp, resulting in 50 samples with clearly visible chirps.
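A minimal sketch of this pipeline with GWpy is shown below. The event GPS time and the Q-transform settings are placeholders, and for simplicity the crops are centred on the detection GPS time rather than on the peak-energy time used in the text.

```python
from gwpy.timeseries import TimeSeries

def qscan_crops(detector, gps, durations=(0.5, 1.0, 2.0, 4.0)):
    """Fetch open strain data around a detection and return Q-graph crops,
    one per Gravity Spy single-view duration (illustrative settings)."""
    # 5 seconds of strain centred on the GPS time of the detection.
    strain = TimeSeries.fetch_open_data(detector, gps - 2.5, gps + 2.5)
    crops = {}
    for dur in durations:
        # Q-transform restricted to a window of width `dur` around the event.
        crops[dur] = strain.q_transform(outseg=(gps - dur / 2, gps + dur / 2))
    return crops

# `event_gps` would be taken from the GWOSC event list for each O3 detection.
views_h1 = qscan_crops("H1", event_gps)
views_l1 = qscan_crops("L1", event_gps)
```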
Figure 11: Normalized confusion matrix of the predictions of the self-supervised trained model (II), for the test dataset.
The views with 0.5, 2.0 and 4.0 seconds of duration were combined taking advantage of the RGB color channels, as done previously, to obtain the encoded134 view. The resulting images were passed through the baseline and tl_best5 model. The results are shown in panel (a) of Figure 13. The performance of the two supervised models tested with real GW signals from the O3 run is noticeably different, despite both having perfect performance in the chirp class of the test dataset. On the one hand, the baseline model is very dependent on the image creation pipeline. For this reason, an extra alignment step for the colour channels was necessary to improve its performance. Even for the aligned images, the model achieves a mere 10% recall, assigning most images to the None_of_the_Above class. On the other hand, the transfer learning model achieves a 52% recall for the aligned images, correctly identifying 26 out of the 50 GW samples. Moreover, the model is much more robust to samples which are different from the ones used to train it (i.e. with different background noise conditions), being practically unaffected by the colour channel misalignment. One possible explanation for the better performance and behaviour with samples slightly out of distribution could be the benefit of pre-trained features, which were reported in [71] to produce a boost in generalization even after fine-tuning.
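The construction of the encoded view from three single-view crops can be sketched as follows; the resizing plays the role of the channel-alignment step mentioned above, and the exact image size and normalization are assumptions.

```python
import numpy as np
from PIL import Image

def encode_rgb_view(crop_05, crop_20, crop_40, size=(224, 224)):
    """Stack the 0.5 s, 2.0 s and 4.0 s spectrogram crops (2D arrays) into one
    RGB image, one crop per colour channel (the 'encoded134' view)."""
    channels = []
    for crop in (crop_05, crop_20, crop_40):
        img = Image.fromarray(np.asarray(crop, dtype=np.float32))
        img = img.resize(size)                         # channel alignment step
        arr = np.asarray(img, dtype=np.float32)
        arr = (arr - arr.min()) / (arr.ptp() + 1e-8)   # per-channel [0, 1] scaling
        channels.append(arr)
    return np.stack(channels, axis=-1)                 # H x W x 3 array
```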
Panel (b) of Figure 13 displays an illustrative sample of the O3 GW signals used to evaluate the model, as well as the respective tl_best5 model predictions. As expected, we found that the model performs well with chirps similar to those present in the Gravity Spy dataset (top row), typically from BBH mergers, but has more difficulties with different types of chirps (bottom row) which are either much longer (in the case of BNS coalescences) or much narrower (in the case of high-mass BBH mergers) than the chirps present in the training dataset.
Finally, we discuss the results for the self-supervised model, applied to the 4-second view images. The results are plotted in Figure 14. Panel (a) of this figure indicates that most of the classified images are wrongly predicted as Blips. This is likely due to the fact that Chirps and Blips in the 4-second window images are quite similar to one another, as observed in panel (b) of Figure 14. In addition, the relatively small number of samples of Chirps in the dataset used for training might help explain the very low GW detection rate. Nevertheless, the model has a recall value similar to that of the baseline supervised model. However, the value is significantly smaller than that achieved by the tl_best5 model.
Further studies to improve the performance of both the supervised and the self-supervised models by using an appropriate training on O3 data will be conducted in the future. We note, however, that no reliably labelled O3 dataset is yet available in the Gravity Spy project. The labels from the O3 dataset reported in [68] were automatically obtained using a model trained mostly using O1/O2. Until this dataset becomes available, the next step for self-supervised models will be to use encoded views similar to the supervised models. This could allow us to understand if self-supervised methods are worth pursuing.
| Model | F1 score (%) | accuracy (%) | Notes |
| --- | --- | --- | --- |
| single view CNN [39] | not reported | 96.81 | custom shallow CNN using only one view |
| merged view CNN [66] | not reported | 96.89 | different dataset version (20 classes) |
| merged view CNN [39] | not reported | 97.67 | improved version of [66] |
| hard fusion ensemble [39] | not reported | 98.21 | combines four CNNs |
| fine-tuned ResNet50 [67; 21] | 97.65 | 98.84 | different split (no validation set) |
| baseline [this work] | 97.18 | 98.68 | ResNet18 trained from scratch |
| tl_best5 [this work] | 96.84 | 98.14 | fine-tuned ConvNeXt_Nano |
| self-supervised model [this work] | 94.15 | 96.50 | self-supervised trained ResNet18 |

Table 1: Performance of different models on the Gravity Spy dataset.
Figure 12: Normalized confusion matrix of the predictions of the self-supervised trained model (I) over the test dataset, before transfer learning is applied
## V Conclusion
The classification and mitigation of transient sources of noise, or "glitches", in GW data streams is a crucial task in detector characterization and GW data analysis. In this paper we have investigated the use of Convolutional Neural Networks to classify glitches included in the Gravity Spy dataset, corresponding to the O1 and O2 data-taking periods of the Advanced LIGO and Advanced Virgo detectors. We have used both supervised and self-supervised deep learning. First, we have trained models using a supervised learning approach, both from scratch and via transfer learning, i.e. fine-tuning already pre-trained models on the Gravity Spy dataset. The comparison with the best baseline model shows that using transfer learning gives a moderate increase of the F1 score (our metric of choice to compare different model configurations) from 98.07% to 98.21%. However, the transfer learning configuration results in more stable performance across multiple training trials. In the second part of this work we have assessed the use of self-supervised training. In this case models have been pre-trained with pseudo-labels corresponding to image transformations applied to the original images, and then fine-tuned with the original labels.
The results obtained from our best models are very close to the state-of-the-art results reported in the literature using the Gravity Spy dataset. Our best baseline and transfer learning supervised models reach accuracies higher than the merged view CNNs reported in [39; 66]. Moreover, the baseline model's performance is also better than the hard fusion ensemble of CNNs [39] and only slightly worse than the fine-tuned ResNet50 results from [21], despite the fact that we use a significantly simpler ResNet18 architecture trained from scratch. On the other hand, the scores of our self-supervised models show slightly lower values with respect to supervised approaches, yielding an F1 score of 94.15%. This value,
Figure 14: Results of the predictions of the self-supervised model for O3 GW images. Left panel (a): A confusion matrix with only one row corresponding to the Chirp class. The majority of samples are classified as Blips. Right panel (b): examples of the model predictions together with its input images.
Figure 13: Results of the predictions of the supervised models for O3 GW images. Left panel (a): Confusion matrices with a single row each corresponding to the Chirp class. Right panel (b): examples of the tl_best5 model predictions.
however, is close to the best performance achieved in Ref. [39] using a single view.
In the last part of this study we have tested the models using actual GW signals from LIGO-Virgo's O3 run. We have found that, despite the models having been trained using data from previous runs (O1 and O2), they show good performance, in particular the supervised model with transfer learning. When using transfer learning the scores improve significantly without the need for any training on actual GW signals (other than the fewer than 50 chirp examples from hardware injections present in the Gravity Spy dataset). This finding motivates the use of transfer learning not only for glitch classification but also for GW classification, as the model's flexibility and ease of generalization might help detect signals slightly different from the ones used during training. Transfer learning may thus provide better coverage for glitch classification and GW detection with our ever-changing GW detector network.
To end, we note that an experiment worth performing elsewhere would be to fine-tune the transfer-learning model on O3 data. For this, we would need to classify O3 glitches for each class (including new ones such as Fast Scattering [72]) by hand, both for training and for testing. The task of producing a reliable O3 glitch dataset (like the one available for O1 and O2) would probably take significant effort, but perhaps the outputs of our models, the Gravity Spy team's model, and the human labelling from the Gravity Spy citizen science project can help.
###### Acknowledgements.
We thank Osvaldo Freitas and Solange Nunes for fruitful discussions during the course of this work. We also thank Christopher Berry, Jools Clarke, Tom Dooney, Melissa Lopez, Jade Powell, Max Razzano, and Agata Trovato for useful comments. AO is supported by the FCT project CERN/FIS-PAR/0029/2019. JCB is supported by a fellowship from "la Caixa" Foundation (ID 100010434) and from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 847648 (fellowship code LCF/BQ/PI20/11760016). JCB is also supported by the research grant PID2020-118635GB-I00 from the Spain-Ministerio de Ciencia e Innovacion. ATF and JAF are supported by the Spanish Agencia Estatal de Investigacion (Grant PID2021-125485NB-C21) funded by MCIN/AEI/10.13039/501100011033 and ERDF A way of making Europe). Further support is provided by the EU's Horizon 2020 research and innovation (RISE) programme H2020-MSCA-RISE-2017 (FunFiCO-777740) and by the European Horizon Europe staff exchange (SE) programme HORIZON-MSCA-2021-SE-01 (NewFunFiCO-10108625). This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation.
|
2304.06403 | Leveraging triplet loss for unsupervised action segmentation | In this paper, we propose a novel fully unsupervised framework that learns
action representations suitable for the action segmentation task from the
single input video itself, without requiring any training data. Our method is a
deep metric learning approach rooted in a shallow network with a triplet loss
operating on similarity distributions and a novel triplet selection strategy
that effectively models temporal and semantic priors to discover actions in the
new representational space. Under these circumstances, we successfully recover
temporal boundaries in the learned action representations with higher quality
compared with existing unsupervised approaches. The proposed method is
evaluated on two widely used benchmark datasets for the action segmentation
task and it achieves competitive performance by applying a generic clustering
algorithm on the learned representations. | E. Bueno-Benito, B. Tura, M. Dimiccoli | 2023-04-13T11:10:16Z | http://arxiv.org/abs/2304.06403v2 | # Leveraging triplet loss for unsupervised action segmentation
###### Abstract
In this paper, we propose a novel fully unsupervised framework that learns action representations suitable for the action segmentation task from the single input video itself, without requiring any training data. Our method is a deep metric learning approach rooted in a shallow network with a triplet loss operating on similarity distributions and a novel triplet selection strategy that effectively models temporal and semantic priors to discover actions in the new representational space. Under these circumstances, we successfully recover temporal boundaries in the learned action representations with higher quality compared with existing unsupervised approaches. The proposed method is evaluated on two widely used benchmark datasets for the action segmentation task and it achieves competitive performance by applying a generic clustering algorithm on the learned representations. 1
Footnote 1: [https://github.com/elenabbbuenob/TSA-ActionSeg](https://github.com/elenabbbuenob/TSA-ActionSeg)
## 1 Introduction
Unconstrained videos capturing real-world scenarios are usually long, untrimmed and contain a variety of actions which can be effortlessly divided by a human observer into semantically homogeneous units. The task of action segmentation, which we target in this work, is the process of identifying the boundaries of an action, i.e. _pour water_, in an untrimmed video of an activity, i.e. _making tea_, even when temporally adjacent actions may have very small visual variance between them. This process is a key step in understanding and contextualizing the video. It is also crucial for video browsing, indexing, and summarisation and has applications in areas such as surveillance systems, action recognition, video content-based retrieval, assistive technologies, and robot-human interactions.
This problem has been traditionally tackled through supervised learning approaches [9, 13, 17, 18, 21, 33]. Such approaches require a large amount of annotated training data and typically suffer from domain adaptation problems, being unable to generalize to large-scale target domains. More recently, weakly-supervised and semi-supervised approaches have been shown to be an effective way to learn video representations convenient for action segmentation while requiring no or very little manual annotation [2, 29, 10, 11, 14, 19, 2, 7, 11, 2, 15]. However, these approaches are still data-hungry and computationally expensive. Unsupervised approaches have developed along two different research lines [16, 1, 20]. Most of them focus on grouping actions across videos and rely on the use of activity labels [16, 28, 30, 32, 15, 20], therefore putting more emphasis on the quality of the representation. A few of them, the most computationally efficient, act on a single video to recover clusters [26] or detect temporal boundaries [8] and do not require any manual annotation.
Our approach stands between these two research lines. We assume that the atomic actions can effectively be modelled as clusters in an underlying representational space and we propose a novel framework that maps the initial feature space of a video into a new one, where the temporal
Figure 1: Our approach learns a parametrized function \(\phi\) that transforms the input feature space (\(X\)) into a new one (\(Z\)), where actions, visualized through different colours, can be easily unveiled by a generic clustering algorithm. The continuous line connects points representing frames from time \(0\) to \(N\) in a t-SNE projection. GT stands for ground truth.
semantic clusters corresponding to atomic actions are unveiled. Similarly to other unsupervised approaches that rely on similar assumptions [15], our focus is on representation learning. However, similarly to [8, 26], our method takes as input a single video and doesn't care whether the same action is present in similar videos or not. This has considerable practical advantages for downstream applications since it can be in principle applied to any video no matter the dataset it belongs to nor if there exist videos having a similar temporal structure. Action segmentation is obtained by applying a generic clustering algorithm on the learned temporal-semantic aware (TSA) representations (see Figure 1). Our contributions are as follows:
* We introduce a novel approach to action representation learning that uses a shallow network and the single input video itself, without the need for additional training data.
* We demonstrate the effectiveness of using a triplet loss operating on similarity distributions with a novel triplet selection strategy based on a downsampled temporal-semantic similarity weighting matrix for the task of action segmentation.
* We present a detailed ablation study and achieve state-of-the-art metrics on the _Breakfast_ and _Youtube INRIA Instructional_ benchmark datasets without using any training data other than the single input video itself.
## 2 Related work
**Fully supervised approaches.** Action Segmentation has been traditionally tackled as a supervised learning problem. Existing approaches belonging to this category differ mainly in the way temporal information is taken into account by the model. Traditional approaches follow a two-step pipeline, that first generates frame-wise probabilities and then feeds them to high-level temporal models as in the Hidden Markov Model Tool Kit (HTK) approach [13] or in [33], which is based on recurrent neural networks. Lately, there has been a proliferation of models based on temporal convolutions to directly classify the video frames. Encoder-Decoder Temporal Convolutional Networks (EDTCNs) [17] use a hierarchy of temporal convolutions to perform fine-grained action segmentation, but they can act solely on low-temporal resolution videos. Instead, Multi-Stage Temporal Convolutional Network (MS-TCN and its improved version MS-TCN++) can act on the full temporal resolution of the videos and achieves increased performance [21, 9]. Spatio-temporal convolutional layers [18] have shown promising results in capturing temporal dependencies while being easier to train than previous methods.
The main drawback of traditional supervised approaches to action segmentation is the requirement of a large amount of quality labelled data for training, which limits their applicability to large-scale domains outside of existing pre-segmented datasets [13, 21].
**Weakly and semi-supervised approaches.** To alleviate the need for large annotated datasets, weakly supervised techniques for video segmentation involve using transcripts (ordered lists of the actions occurring in the video), visual similarities, and audio information to generate pseudo-labels for training [10]. In [14], a Gaussian Mixture Model + Convolutional Neural Network (GMM+CNN) is first initialized and used to infer the segments of a video given a transcription of it. The new segmentation is used to re-estimate and update the model parameters until convergence. In [24], a recurrent neural network is used to model a discriminative representation of subactions, and a coarse probabilistic model to allow for temporal alignment and inference over long sequences. Some approaches use machine learning models to infer the segments of the video [22]. Other approaches, such as those based on frame-to-frame visual similarities [11], self-attention mechanisms [23] or iterative soft boundary assignment [7], enforce consistency between the video and labels without the need for temporal supervision. In [10], a network is trained on long videos which are only annotated by the set of actions present; training proceeds by dividing the videos into temporal regions that contain only one action class and are consistent with the set of annotated actions. The work in [25] adopts a Hidden Markov Model grounded on a Gated Recurrent Unit (GRU) for labelling video frames. This model has been subsequently improved in [19], where the model is trained through a Constrained Discriminative Forward Loss (CDFL) that accounts for all candidate segmentations of a training video, instead of a single one. [2] uses speech as an additional source of human-generated information in a weakly-supervised framework.
Recent work has proposed a semi-supervised approach [29] (ICC) consisting of a previous unsupervised training with a contrastive loss followed by a supervised training step with a small amount of labelled samples. These methods are limited to videos with transcripts and cannot be generalized to unconstrained videos.
**Unsupervised learning approaches.** Unsupervised learning approaches typically learn action representations in a self-supervised fashion and then apply a clustering algorithm to obtain the action segmentation (assuming that the number of clusters is known). Some methods learn a model that minimizes prediction errors by exploiting the order of scripted activities [15], or combine temporal embeddings with visual encoder-decoder pipelines [30]. Other approaches use deep learning architectures, including an ensemble of autoencoders, and classification networks that exploit the relation between actions and activities [20]. Based on the assumption that in task-oriented videos actions occur in a similar temporal space, [15, 28] learn a strong temporal regularization that partially hides visual similarity. [32] propose a Self-Supervised Co-occurrence Action Parsing method
(SSCAP), for unsupervised temporal action segmentation which takes the recurrence of sub-actions into account in estimating the temporal path, and hence is able to handle complex structures of activities. Recently, [16] proposed a joint self-supervised representation learning and online clustering approach, which uses video frame clustering as a pretext task and hence directly optimizes for unsupervised activity segmentation (TOT+TCL).
Even if these approaches do not require labelled data, they are data-hungry and are not suitable for transferring the learned knowledge to a dataset with a different distribution and they demonstrated modest performance for the task of action segmentation at the video level. In contrast, we aim at unveiling the clusters underlying a single video. There is limited work in the literature on learning action representations in a self-supervised manner within a single video. [1] proposes a model based on an encoder LSTM architecture with Adaptive Learning (LSTM+AL) that minimizes the prediction error of future frames and assigns segmentation boundaries based on the prediction error of the next frame. Some recent works proposed to learn event representations [4, 5] and the underlying graph structure from a single sequence (DGE) [6], where temporal windows are taken into account instead of all frames. In both cases, these approaches were not tested for the action segmentation of complex activity videos (high-temporal resolution), but only on low-temporal resolution image sequences.
**Fully unsupervised approaches.** Clustering methods, which generate a partition of the input data based on a specific similarity metric, have been poorly investigated within the field of action segmentation. However, very recent work [26] has shown that simple clustering approaches, i.e. \(K\)-means, are instead a strong baseline for action segmentation. They hence proposed a new clustering approach called Temporally-Weighted FINCH (TW-FINCH), which is similar in spirit to the clustering approach named FINCH [27] but takes into account temporal proximity in addition to semantic similarity. Recently, [8] proposed to detect action boundaries (ABD) by measuring the similarity between adjacent frames based on the insight that actions have internal consistency within and external discrepancy across actions. We based our approach on the same insight that we modelled via a deep metric learning approach.
## 3 Methodology
We assume that the representational clustering grounding action segmentation encodes both temporal and semantic similarity, based on two observations: (_i_) an action in a video is a temporally continuous sequence of images, therefore temporally adjacent frames are likely to belong to the same action; (_ii_) frames corresponding to the same action (but not necessarily temporally adjacent) should have similar representations, encoding the common underlying semantics.
Formally, let \(X\in R^{N\times n}\) denote the matrix of \(n\)-dimensional feature vectors for a given sequence of \(N\) frames. We aim at learning a parametric function \(\phi\) such that given the input feature matrix \(X\), new Temporal-Semantic Aware (TSA) (see Figure 2) representations \(Z\in\mathbb{R}^{N\times n}\) are obtained as \(Z=\phi(X)\).
### Triplet loss and triplet selection
To learn \(\phi\), we minimize a triplet loss function (defined in Equation 1) that implements an original approach to select the triplets appropriately by relying on temporal-semantic similarity distributions \(f_{ts}\) obtained as the weighted sum of the temporal and the semantic similarity distributions, say \(f_{t}\) and \(f_{s}\), \(f_{ts}=\boldsymbol{\alpha}\cdot f_{t}+(1-\boldsymbol{\alpha})\cdot f_{s}\), where \(\boldsymbol{\alpha}\in[0,1]^{N\times 1}\) is a vector of learnable parameters of the function \(\phi\). Under the model assumptions, it is easy to see that the similarity of \(f_{ts}(k)\) and \(f_{ts}(k^{\prime})\) will be large when \(k\) and \(k^{\prime}\) belong to the same event, and it will be small when \(k\) and \(k^{\prime}\) belong to two different events.
Figure 2: Overview of the proposed TSA framework illustrated on a sample video of the Breakfast Dataset: network architecture transforming the initial features \(X\) into the learned features \(Z\) through a shallow network with a novel triplet selection strategy and a triplet loss based on similarity distributions.
**Semantic similarity distribution:** To define \(f_{\mathcal{S}}\), we assume that the set of most similar frames in the original feature space of an anchor \(i\) is very likely to be part of the same action. The similarity of an anchor \(i\) to all other frames is defined element-wise via a pairwise similarity, upon normalization to the total unit weight, \(f_{\mathcal{S}}=w_{ij}/W,\) with \(W=\sum_{(i,j)\in E}w_{ij}\) and \(w_{ij}=\exp(-(1-d(x_{i},x_{j}))/h)\), and where \(E\) is the set of pairwise relations, \(d(\cdot,\cdot)\) is the cosine distance and \(h\) is the filtering parameter of the exponential function. The resulting pairwise similarities are normalized to represent empirical joint probability distributions between pairs of elements in the sequence.
**Temporal similarity distribution:** To define \(f_{t}\), we assume that as we move away from the anchor \(i\), the likelihood that a feature vector \(x_{j\neq i}\) represents the same action as frame \(i\) decreases. To model this behaviour, we define a weight function \(w(\cdot)\) that depends on the temporal frame distance \(d\) from the given frame as \(w(d)=-1+2\exp(-\frac{1}{\beta}d)\), where \(\beta\) is a constant that controls the slope of the weight function and \(d\) is the temporal distance between frames. By imposing that \(w(L/2)=0\), and then solving for \(\beta\), we get that the constant \(\beta\) can be expressed in terms of the positive window length \(L\), that is: \(\beta=-L/(2\ln(\frac{1}{2}))\).
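A sketch of how the two distributions can be computed is given below. The bandwidth `h` and the scalar `alpha` are placeholders (in the model, α is a learnable per-frame vector), and the kernel is written with the standard reading of the formula above, i.e. an exponential of minus the cosine distance.

```python
import math
import torch
import torch.nn.functional as F

def similarity_distributions(x, L, h=1.0, alpha=0.5):
    """Temporal-semantic similarity f_ts for a feature matrix x of shape (N, n)."""
    n_frames = x.shape[0]

    # Semantic term: exponential kernel of the cosine distance, normalized to unit weight.
    xn = F.normalize(x, dim=1)
    cos_sim = xn @ xn.T
    w = torch.exp(-(1.0 - cos_sim) / h)
    f_s = w / w.sum()

    # Temporal term: w(d) = -1 + 2 exp(-d / beta), chosen so that w(L/2) = 0.
    beta = L / (2.0 * math.log(2.0))
    idx = torch.arange(n_frames, dtype=torch.float32)
    d = (idx[:, None] - idx[None, :]).abs()
    f_t = -1.0 + 2.0 * torch.exp(-d / beta)

    # Weighted combination; each row is later used as a distribution over frames.
    return alpha * f_t + (1.0 - alpha) * f_s
```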
The temporal and semantic distributions are downsampled to reduce computational costs using stochastic pooling during the training. An anchor index is randomly selected from the set of downsampled indices \(i\in D\). Its set of positive samples \(\mathcal{P}_{i}\) is taken as the \(5\%\) of the frames with the highest similarity values in the \(i\)-th row of the temporal-semantic affinity matrix \(f_{ts}\). We define the negative set \(\mathcal{N}_{i}\) as the frames whose value in the \(i\)-th row of \(f_{ts}\) lies between the mean and the mean plus one standard deviation of the similarity values. Our triplet loss is defined as:
\[\mathcal{L}_{triplet}=\frac{1}{D}\sum_{i\in D}\max\left(0,\ KL(f_{ts}(i)\,||\,f_{ts}(i^{-}))-KL(f_{ts}(i)\,||\,f_{ts}(i^{+}))\right) \tag{1}\]
where \(KL\) denotes the KL-divergence between temporal-semantic similarity distributions \(f_{ts}\). For each loss term, given an anchor index \(i\in D\) with \(D<N\), we define the triplet \(\{i,\,i^{+},\,i^{-}\}\), where \(i^{+}\in\mathcal{P}_{i}\) and \(i^{-}\in\mathcal{N}_{i}\) are drawn from the sets of positive and negative indices, respectively.
Using the probability distribution functions (PDFs) \(f_{ts}\) as feature vectors, instead of the initial features \(X\), can provide several benefits for action segmentation. Since PDFs consider all the information in the video for each frame, they are smoother and more robust than the initial features extracted from the data, which can be noisy or contain irrelevant information. In the ablation study (see Section 4.3), we will prove that this can improve the accuracy and robustness of the triplet loss for action segmentation.
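As a concrete illustration of Eq. (1) and of the triplet selection strategy, a minimal PyTorch sketch is given below. The rows of \(f_{ts}\) are clamped and renormalized here so that they form valid distributions; the exact normalization and sampling details of the method may differ.

```python
import torch

def kl_triplet_loss(f_ts, anchors, pos_frac=0.05):
    """KL-based triplet loss over anchor indices, following Eq. (1)."""
    eps = 1e-8
    p = f_ts.clamp_min(eps)
    p = p / p.sum(dim=1, keepdim=True)            # each row as a PDF over frames

    def kl(a, b):                                  # KL(p_a || p_b)
        return (p[a] * (p[a] / p[b]).log()).sum()

    losses = []
    for i in anchors:
        row = f_ts[i]
        k = max(1, int(pos_frac * row.numel()))
        pos_idx = row.topk(k).indices              # top 5% most similar frames
        mu, sd = row.mean(), row.std()
        neg_idx = ((row > mu) & (row < mu + sd)).nonzero(as_tuple=True)[0]
        if len(neg_idx) == 0:
            continue
        i_pos = pos_idx[torch.randint(len(pos_idx), (1,))]
        i_neg = neg_idx[torch.randint(len(neg_idx), (1,))]
        losses.append(torch.relu(kl(i, i_neg) - kl(i, i_pos)))
    return torch.stack(losses).mean()
```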
## 4 Experimental evaluation
### Experimental setup
**Datasets:** We report results on two widely used temporal action segmentation datasets: (1) the _Breakfast Action
Figure 3: Example of training and result obtained by using the TSA approach for a sample video of the _Breakfast Action Dataset_ (P34_cereals). **(a)** Cosine similarity affinity matrix for initial features \(X\) and evolution of the learned features \(Z\) for different training epochs. Actions are highlighted as neighbour communities referring to the segmentation of the video when a clustering algorithm is applied. **(b)** Segmentation plots showing the ground truth, and the result of applying the same clustering algorithm to initial features \(X\) (IDT) and to the learned features \(Z\) (TSA).
_Dataset_[12] consists of \(10\) activities related to breakfast preparation, performed by \(52\) different individuals in \(18\) different kitchens, for a total of \(1712\) individual videos with \(64\)-dimensional feature vectors; (2) the _Youtube INRIA Instructional Dataset_[2] consists of \(150\) instructional videos from YouTube, whose feature vectors are \(3000\)-dimensional. These include \(5\) different unrelated activities (changing a tire, preparing coffee, performing cardio-pulmonary resuscitation, jump-starting a car and repotting a plant), lasting 2 minutes on average. The most challenging part of this dataset is the amount of background frames present in each video, which reaches up to 63.5% of the frames in the dataset.
**Evaluation metrics:** We use a clustering algorithm on the learned representations to segment the video into its atomic actions. To match the predicted segmentation labels and the ground truth, we use the Hungarian matching algorithm to obtain a one-to-one mapping between the predicted labels (cluster indices) and the ground truth at video-level. Following previous work [1, 8, 26, 28], we report three widely used metrics: (_i_) accuracy of the segmentation and action identification, computed as the _Mean over Frames (MoF)_ metric, which indicates the percentage of predicted frames correctly labelled after the Hungarian matching; (_ii_) similarity and diversity of the predicted segments, calculated as the _Intersection over Union (IoU)_ metric, also known as the Jaccard Index or Jaccard similarity coefficient; (_iii_) the _F1-score_ computed across the predicted segments and the known ground truth to evaluate the quality of the action segmentation. Note that, among unsupervised methods, [15, 28] look for a one-to-one global matching (shared across all videos of a given task) instead of a matching at the video-level. This generally leads to poorer performance than computing the Hungarian matching at video-level.
### Implementation details
**Input features.** To ensure a fair comparison to state-of-the-art methods targeting the action segmentation task, we use the same datasets and input features for the frame-level initial representations as in [1, 8, 15, 26, 28, 30]. For the _Breakfast Action_ we use the Improved Dense-Trajectory (IDT) [31] features. These were provided by the authors of CTE [15] in their open-sourced implementation. For _Youtube Inria Instructional_ we use a set of frame-level representations given by their original authors, which are feature vectors formed by a concatenation of HOF descriptors and features extracted from the VGG16-_conv5_ network.
**Model architecture.** We used a shallow neural network, consisting in our case of a multi-layer perceptron with a single hidden layer followed by a ReLU activation function. This makes our approach (Figure 2) easy to train and more suited for practical applications than existing approaches consisting of multiple convolutional layers and/or recurrent networks. Empirical experiments showed that using a single hidden layer was easier and faster to train than deeper models while achieving similar performance. The reported results are also invariant to the number of units in the hidden layer.
**Model training.** The parameter \(L\) used in this paper is the average number of action classes for a specific dataset: 6 and 9 for BF and YII, respectively. The architecture used to obtain our features is a multi-layer perceptron with as many units as the input feature dimensionality \(n\), although this could be changed to obtain the desired output dimensionality. The batch size corresponds to the downsampling factor, and the number of batches is the quotient of the number of frames and the batch size. We define the distance hyperparameter as the minimum threshold \(\varepsilon\) that the difference between the last two losses should reach; this hyperparameter is used for early stopping with a patience of 2. The minimum and maximum numbers of training epochs are fixed at 2 and 50, respectively. The initial learning rate depends on each dataset and follows an exponential learning decay rate of \(0.3\), with an \(L_{2}\) weight decay of \(10^{-3}\) as the regularisation parameter.
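In plain PyTorch, the setup described above corresponds roughly to the following sketch; `n`, `initial_lr` and the training helper are dataset-dependent or assumed, the optimizer choice is an assumption, and whether `gamma=0.3` is the intended decay factor is an interpretation of "exponential learning decay rate of 0.3".

```python
import torch
from torch import nn, optim

# Single-hidden-layer MLP phi with n units per layer (n = input feature dimensionality).
phi = nn.Sequential(nn.Linear(n, n), nn.ReLU(), nn.Linear(n, n))

optimizer = optim.Adam(phi.parameters(), lr=initial_lr, weight_decay=1e-3)  # L2 = 1e-3
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.3)          # exp. decay

for epoch in range(50):                             # maximum of 50 epochs
    epoch_loss = train_one_epoch(phi, optimizer)    # assumed helper: batches + triplet loss
    scheduler.step()
    # Early stopping: after at least 2 epochs, stop once the change between the last
    # two losses stays below epsilon for 2 consecutive epochs (bookkeeping omitted).
```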
### Model study
Figure 3 illustrates how the initial features are modified during training to gradually unveil the representational clusters. On the left, we visualize the affinity matrix computed from the feature matrix in the original feature space. In the middle, we visualize the affinity matrix computed at different epochs during training and, on the right, our learned representations at the end of the training. As training progresses, the clusters, which were completely hidden in the initial affinity matrix, become more and more visible along the diagonal. The same label appears at different time intervals in the off-diagonal clusters, as reflected by the ground truth.
**Ablation study.** In Table 2, we show the importance of modelling both temporal and semantic similarities, by using \(f_{t}\) and \(f_{s}\) individually or combined into \(f_{ts}\) (when both are marked), as well as the effectiveness of our network compared to adding one more hidden layer or using a different model, Radial Basis Function Neural Networks (RBFNN). We compare against RBFNN because they are particularly good at modelling non-linear decision boundaries, which makes them well suited to our problem [3]. In Table 3, we can see that representing a frame through a PDF instead of a simpler feature vector significantly improves the accuracy of action segmentation. Without this representation, the approach may not accurately capture the similarity between points in \(X\), leading to segmentation errors. In addition, the average computation time shows that this improvement is achieved at no additional cost.
### Experimental results
To obtain the final segmentation from the learned action representations we apply three different clustering algorithms: K-means, Spectral clustering and FINCH [27]. For comparison purposes, we report here the results of existing unsupervised methods that were computed by the proposing authors by applying the Hungarian matching at the video-level. We used the code made publicly available by the authors 2 to compute the performance of DGE on the considered datasets, since this approach, similar to ours, computes a video representation suitable for the task of temporal/action segmentation.
Footnote 2: [https://github.com/mdimiccoli/DGE](https://github.com/mdimiccoli/DGE)
**Breakfast Action:** The left-hand side of Table 1 reports the resulting metrics for the BF dataset, obtained with a learning rate of \(0.051\), a distance threshold of \(0.032\) and a batch size of \(128\). Our method significantly outperforms all other existing approaches. The F1 score deserves special attention: it is considerably better for our method, indicating higher segmentation quality and less over-segmentation. These results are consistent across all three clustering approaches considered for obtaining the final segmentation of our learned features. We can therefore conclude that TSA outperforms SoTA approaches for the downstream task of action segmentation. Examples of segmentation results on a few videos for this dataset can be seen in Figure 3 (b) and Figures 4 (a)-(c).
## 5 Conclusions
This paper introduced a novel fully unsupervised approach for learning action representations in complex activity videos that solely operates on a single unlabelled input video. Our method exploits the temporal proximity and the semantic similarity in the initial feature space to discover the representational clustering grounding action segmentation. Our key contributions are a shallow architecture and a triplet-based loss with a triplet selection mechanism based on similarity distribution probabilities to model temporal smoothness and semantic similarity within and across actions. Experimental results on the BF and the YII datasets demonstrated that the learned representations, followed by a generic clustering algorithm, achieve SoTA performance. Furthermore, our approach has the advantage of not requiring human annotations, is easy to train and does not present domain adaptation issues. Future work will consider how to jointly learn the action clusters and the representation, as well as how to build on representations learned at the video-level to match videos at the activity-level.
Figure 4: Segmentation output comparisons on two sample videos from BF and YII. Each caption shows the name of the video and the results \((x,y)\) which are (MoF, F1) for each example. |
2305.11256 | Testing Jeans dynamical models with prolate rotation on a cosmologically
simulated dwarf galaxy | Prolate rotation is characterized by a significant stellar rotation around a
galaxy's major axis, which contrasts with the more common oblate rotation.
Prolate rotation is thought to be due to major mergers and thus studies of
prolate-rotating systems can help us better understand the hierarchical process
of galaxy evolution. Dynamical studies of such galaxies are important to find
their gravitational potential profile, total mass, and dark matter fraction.
Recently, it has been shown in a cosmological simulation that it is possible to
form a prolate-rotating dwarf galaxy following a dwarf-dwarf merger event. The
simulation also shows that the unusual prolate rotation can be time enduring.
In this particular example, the galaxy continued to rotate around its major
axis for at least $7.4$\,Gyr (from the merger event until the end of the
simulation). In this project, we use mock observations of the hydro-dynamically
simulated prolate-rotating dwarf galaxy to fit various stages of its evolution
with Jeans dynamical models. The Jeans models successfully fit the early oblate
state before the major merger event, and also the late prolate stages of the
simulated galaxy, recovering its mass distribution, velocity dispersion, and
rotation profile. We also ran a prolate-rotating N-body simulation with similar
properties to the cosmologically simulated galaxy, which gradually loses its
angular momentum on a short time scale $\sim100$\,Myr. More tests are needed to
understand why prolate rotation is time enduring in the cosmological
simulation, but not in a simple N-body simulation. | Amrit Sedain, Nikolay Kacharov | 2023-05-18T18:39:26Z | http://arxiv.org/abs/2305.11256v1 | # Testing Jeans dynamical models with prolate rotation on a cosmologically simulated dwarf galaxy
###### Abstract
Prolate rotation is characterized by a significant stellar rotation around a galaxy's major axis, which contrasts with the more common oblate rotation. Prolate rotation is thought to be due to major mergers and thus studies of prolate-rotating systems can help us better understand the hierarchical process of galaxy evolution. Dynamical studies of such galaxies are important to find their gravitational potential profile, total mass, and dark matter fraction. Recently, it has been shown in a cosmological simulation that it is possible to form a prolate-rotating dwarf galaxy following a dwarf-dwarf merger event. The simulation also shows that the unusual prolate rotation can be time enduring. In this particular example, the galaxy continued to rotate around its major axis for at least 7.4 Gyr (from the merger event until the end of the simulation). In this project, we use mock observations of the hydro-dynamically simulated prolate-rotating dwarf galaxy to fit various stages of its evolution with Jeans dynamical models. The Jeans models successfully fit the early oblate state before the major merger event, and also the late prolate stages of the simulated galaxy, recovering its mass distribution, velocity dispersion, and rotation profile. We also ran a prolate-rotating N-body simulation with similar properties to the cosmologically simulated galaxy, which gradually loses its angular momentum on a short time scale \(\sim\) 100 Myr. More tests are needed to understand why prolate rotation is time enduring in the cosmological simulation, but not in a simple N-body simulation.
Galaxy dynamics, Prolate rotation, Cosmological simulations +
Footnote †: journal: Journal of Physics of Local Group
## 1 Introduction
Dwarf galaxies are important building blocks in galaxy hierarchical formation and evolution theories, so it is essential to understand their properties. There are distinct types of dwarf galaxies, depending on mass and gas content - mainly gas-rich dwarf irregulars and gas-poor dwarf spheroidals, as well as transition types. The majority of them are dark matter (DM) dominated. However, their low stellar densities and shallow potential wells make them very sensitive to perturbations from stellar feedback, external interactions, etc.
Dynamical studies of dwarf galaxies can reveal their total mass and the mass of their sub-components (stars, gas, DM content), as well as their internal structure and kinematical properties, which is not possible from photometry alone.
On the other hand, cosmological simulations allow us to study the formation and evolution of galaxies at all scales and test real-time observations. By comparing observations and the simulated results we can better understand the physical processes in our Universe. Zoom-in simulations are used to study smaller-scale processes in individual galaxies and galaxy clusters, such as interactions, gas cooling, star formation, and feedback from supernovae and black holes.
While most galaxies have a certain amount of angular momentum, which flattens them in the direction of the rotation axis, there are rare exceptions of galaxies that rotate around their major axis, known as prolate rotation. In the Local Group, we know of only two such cases, the And II dwarf (Amorisco et al. 2014) and the Phoenix dwarf (Kacharov et al. 2017). In both studies, they strongly suggest that a major merger at some point in the galaxy's evolution
causes a change in the rotation axis. Similarly, Cardona-Barrero et al. (2021) discovered the presence of prolate rotation in a cosmologically simulated dwarf galaxy, using the gear code, based on gadget-2 (Springel, 2005; Revaz & Jablonka, 2012). The culprit, causing the flip of the rotation axis is also a major dwarf-dwarf merger with a DM mass ratio of about \(1:5\).
In this study, we aim to test how well Jeans dynamical models can reproduce the mass distribution and kinematic properties of the simulated prolate-rotating dwarf. If successful, such dynamical models can then be applied to real observational data.
## 2 Jeans Anisotropic Multi-Gaussian Expansion (JAM) on the cosmological data
Jeans equations are stellar hydrodynamic equations, derived from the collisionless Boltzmann equation, that are used to study the dynamics of equilibrium systems. One of the advantages of the Jeans equations is that instead of dealing with the phase-space distribution function, we use velocity moments of the line-of-sight velocity distribution (LOSVD), which is directly linked to the gravitational potential and density of the system. This method has been extensively used to study spherical and axisymmetric stellar systems under the assumption of quasi-equilibrium (Cappellari, 2016).
In this work, we utilise the JAM code by Cappellari (2020) to solve the Jeans equations for the cosmologically simulated prolate-rotating dwarf galaxy discovered by Cardona-Barrero et al. (2021). We obtain best-fit Jeans dynamical models at different stages of its evolution in the cosmological simulation, which predict the dwarf's mass distribution, and we compare the model results with the known properties of the simulated dwarf. The primary objective is to investigate whether the Jeans models can accurately describe the dynamical properties of the prolate system in a quasi-equilibrium approximation, since in the cosmological simulation the prolate rotation survives for more than \(7.56\,\mathrm{Gyr}\).
In the early stages of the galaxy's evolution, its rotation axis was the minor axis (\(q=\frac{b}{a}<1\)). After a massive merger with its satellite halo, the galaxy's axis of rotation changes to the major axis. We assumed here axisymmetry around its major axis (\(q=\frac{b}{a}>1\)). We created JAM models for each of the three main stages of the galaxy's evolution: pre-merger (\(z=1.58\)), shortly after the merger (\(z=0.58\)), and at the present day (\(z=0.00\)).
We used the initial oblate case as a benchmark to evaluate the performance of our prolate models.
## 3 Methods
The Jeans dynamical models link the generally derived from observations stellar density and kinematics of the modelled system to its gravitational potential.
From the simulation, we know the surface brightness profile of the galaxy and perform a simple Sersic profile fit to obtain its central surface brightness, half-light radius, and Sersic index. We represent the resulting Sersic profile with a Multi Gaussian Expansion (MGE), which makes it easy to deproject the density profile.
We have also created Voronoi maps of the line of sight velocity (\(v_{los}\)) and velocity dispersion (\(\sigma\)) at different snapshots from the simulation. We assigned uncertainties to these values based on the local surface brightness (uncertainties increase with decreasing surface brightness).
In our dynamical model, the gravitational potential is driven by DM only. We used a generalised Navarro-Frenk-White (NFW) density profile as representative of the gravitational potential to solve the Jeans equations. In this particular JAM model, we used a spherical, cored NFW profile (central density slope \(\gamma=0\)), which we later found to give the most promising mass profile of the model, as compared with the cosmological simulation.
We used the Markov Chain Monte Carlo (MCMC) method to obtain the maximum-likelihood values of the fitted parameters. We fit for the central density (\(\rho_{0}\)) and scale radius (\(r_{0}\)) of the NFW profile, as well as the velocity anisotropy (\(\beta_{z}=1-\frac{\sigma_{z}^{2}}{\sigma_{R}^{2}}\)) and rotation amplitude (\(\kappa\)) of the galaxy.
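A schematic version of such a fit, using the emcee sampler, could look as follows. Here `jam_predict` is a hypothetical wrapper around the JAM solver returning the model first and second velocity moments on the Voronoi bins; the flat priors, walker settings and data variable names are illustrative assumptions.

```python
import numpy as np
import emcee

def log_prior(theta):
    rho0, r0, beta_z, kappa = theta
    # Illustrative flat priors on the four fitted parameters.
    if rho0 > 0 and r0 > 0 and -1.0 < beta_z < 1.0 and 0.0 <= kappa <= 2.0:
        return 0.0
    return -np.inf

def log_prob(theta, v_obs, sig_obs, v_err, sig_err):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    v_mod, sig_mod = jam_predict(*theta)          # hypothetical JAM wrapper
    chi2 = np.sum(((v_obs - v_mod) / v_err) ** 2) \
         + np.sum(((sig_obs - sig_mod) / sig_err) ** 2)
    return lp - 0.5 * chi2

ndim, nwalkers = 4, 32
p0 = theta_start + 1e-3 * np.random.randn(nwalkers, ndim)   # perturbed initial guess
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(v_obs, sig_obs, v_err, sig_err))
sampler.run_mcmc(p0, 5000, progress=True)
```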
## 4 Results
We first modelled the simulated galaxy during its pre-merger stage (\(z=1.58\)), when it still had normal oblate rotation. After 2 Gyr, due to a massive collision, the galaxy began rotating in a prolate manner. This is the stage in the simulation, at which we performed our second JAM model (\(z=0.58\)). Interestingly, the galaxy also has prolate rotation at \(z=0\) (the present day of the simulation), indicating that it is in a state of quasi-equilibrium.
In all three cases, we fitted the line-of-sight velocity (\(v_{los}\)) and velocity dispersion, and we estimated the mass distribution of the simulated galaxy using the modelled parameters listed above. An example of the fit for the last case (\(z=0\)) is shown in Figure 1. The Voronoi maps of the mock observations of the galaxy rotation and velocity dispersion are compared to the best-fit results from the Jeans model.
The Jeans model provided a well-matched line-of-sight velocity and velocity dispersion for all the stages within the half-light radius. Despite the fewer data points outside the half-light radius, we also obtained a relatively good fit to the data there.
The capability to determine the galaxy's mass profile is one of the main advantages of dynamical modelling. We calculated the mass for each of the three models stated above. The masses for all three models are displayed in Figure 2, along with the mass derived from the cosmological simulation.
## 5 N-body Simulation of the galaxy
We also checked the survivability of the galaxy's rotation using an N-body simulation.
Figure 1: The top-left figure displays the line-of-sight velocity from the cosmological simulation of the galaxy at redshift \(z=0.00\) in Voronoi bins, while the top-right figure shows the corresponding JAM model. The black ellipse indicates the half-light radius. The bottom left figure displays the velocity dispersion from the cosmological simulation, while the bottom right shows the dispersion obtained from the fitted JAM model.
We used the inverse transform sampling method to draw positions from the best-fit de-projected 3D sersic profile and initial 3D velocities from the best-fit Jeans model.
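The radius draw can be implemented by numerically inverting the cumulative mass profile; the sketch below is generic, with `density_3d` standing in for the de-projected Sérsic density, and the isotropic angles are a simplification (the actual sampling would respect the fitted flattening).

```python
import numpy as np

def sample_radii(density_3d, r_max, n_stars, n_grid=10000):
    """Inverse transform sampling of radii from a 3D density profile rho(r)."""
    r = np.linspace(1e-6, r_max, n_grid)
    shell_mass = 4.0 * np.pi * r**2 * density_3d(r)   # dM/dr for a spherical profile
    cdf = np.cumsum(shell_mass)
    cdf /= cdf[-1]                                    # normalized cumulative mass M(<r)
    u = np.random.uniform(size=n_stars)
    return np.interp(u, cdf, r)                       # numerically invert the CDF

def radii_to_xyz(radii):
    """Isotropic angles turn sampled radii into Cartesian star positions."""
    phi = np.random.uniform(0.0, 2.0 * np.pi, radii.size)
    cos_theta = np.random.uniform(-1.0, 1.0, radii.size)
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    return np.column_stack([radii * sin_theta * np.cos(phi),
                            radii * sin_theta * np.sin(phi),
                            radii * cos_theta])
```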
We ran the N-body code assuming star particles move within an embedded spherical DM halo. The simulation was done using SWIFT (Schaller et al., 2016). Interestingly, the galaxy loses its prolate rotation after only 100 Myr, in contrast to its longevity in the cosmological simulation. More research is necessary to determine the underlying reasons. For example, we could try changing the shape of the DM halo to prolate as well.
## 6 Conclusions
We successfully applied the JAM code to a cosmologically simulated, prolate-rotating dwarf galaxy and fitted its velocity moments. We used the early oblate, pre-merger stage as a benchmark for our project. We then modelled the prolate-rotating stages shortly after the merger (\(z=0.58\)) and at the present time (\(z=0.0\)). In all cases, we fitted a spherical NFW mass profile, as well as the amplitude of rotation and the velocity anisotropy. We then calculated the mass profiles of the galaxy at different evolutionary stages and compared them to the known mass profiles from the simulation, which agree quite well. This project confirms the viability of the Jeans method for estimating the masses of prolate-rotating galaxies in the approximation of quasi-dynamical equilibrium. Furthermore, we also ran an N-body simulation, but in this simple setup the galaxy's prolate rotation dies out after 100 Myr. Further work is needed to better understand the survivability of its prolate nature.
## Acknowledgements
We want to express our gratitude to Dr. Salvador Cardona-Barrero for providing the simulation data. We acknowledge Dr. Yves Revaz for his help to run the N-body simulation. We thank Prof. Dr. Maria-Rosa Cioni for the opportunity to conduct this research at the AIP.
|
2306.04445 | Multi-modal Latent Diffusion | Multi-modal data-sets are ubiquitous in modern applications, and multi-modal
Variational Autoencoders are a popular family of models that aim to learn a
joint representation of the different modalities. However, existing approaches
suffer from a coherence-quality tradeoff, where models with good generation
quality lack generative coherence across modalities, and vice versa. We discuss
the limitations underlying the unsatisfactory performance of existing methods,
to motivate the need for a different approach. We propose a novel method that
uses a set of independently trained, uni-modal, deterministic autoencoders.
Individual latent variables are concatenated into a common latent space, which
is fed to a masked diffusion model to enable generative modeling. We also
introduce a new multi-time training method to learn the conditional score
network for multi-modal diffusion. Our methodology substantially outperforms
competitors in both generation quality and coherence, as shown through an
extensive experimental campaign. | Mustapha Bounoua, Giulio Franzese, Pietro Michiardi | 2023-06-07T14:16:44Z | http://arxiv.org/abs/2306.04445v2 | # Multi-modal Latent Diffusion
###### Abstract
Multi-modal data-sets are ubiquitous in modern applications, and multi-modal Variational Autoencoders are a popular family of models that aim to learn a joint representation of the different modalities. However, existing approaches suffer from a coherence-quality tradeoff, where models with good generation quality lack generative coherence across modalities, and vice versa. We discuss the limitations underlying the unsatisfactory performance of existing methods, to motivate the need for a different approach. We propose a novel method that uses a set of independently trained, uni-modal, deterministic autoencoders. Individual latent variables are concatenated into a common latent space, which is fed to a masked diffusion model to enable generative modeling. We also introduce a new multi-time training method to learn the conditional score network for multi-modal diffusion. Our methodology substantially outperforms competitors in both generation quality and coherence, as shown through an extensive experimental campaign.
## 1 Introduction
Multi-modal generative modelling is a crucial area of research in machine learning that aims to develop models capable of generating data according to multiple modalities, such as images, text, audio, and more. This is important because real-world observations are often captured in various forms, and combining multiple modalities describing the same information can be an invaluable asset. For instance, images and text can provide complementary information in describing an object, audio and video can capture different aspects of a scene. Multi-modal generative models can also help in tasks such as data augmentation [13; 4; 39], missing modality imputation [3; 8; 58; 51], and conditional generation [20; 27].
Multi-modal models have flourished over the past years and have seen tremendous interest from academia and industry, especially in the content creation sector. Whereas most recent approaches focus on specialization, by considering text as the primary input to be associated mainly with images [36; 38; 35; 49; 56; 30; 7] and videos [6; 18; 42], in this work we target an established literature whose scope is more general, and in which all modalities are considered equally important. A large body of work relies on extensions of the Variational Autoencoder (VAE) [26] to the multi-modal domain: initially interested in learning joint latent representations of multi-modal data, such works have mostly focused on generative modeling. Multi-modal generative models aim at _high-quality_ data generation, as well as generative _coherence_ across all modalities. These objectives apply both to the joint generation of new multi-modal data and to the conditional generation of a set of missing modalities, given a disjoint set of available modalities.
In short, multi-modal VAEs rely on combinations of uni-modal VAEs, and the design space consists mainly of the way the uni-modal latent variables are combined to construct the joint posterior distribution. Early work such as [57] adopts a product-of-experts approach, whereas others [40] consider a mixture-of-experts approach. Product-based models achieve high generative quality,
but suffer in terms of both joint and conditional coherence. This was found to be due to experts' mis-calibration issues [40; 48]. On the other hand, mixture-based models produce coherent but qualitatively poor samples. A first attempt to address the so-called **coherence-quality tradeoff** [9] is represented by the mixture-of-products-of-experts approach [48]. However, recent comparative studies [9] show that none of the existing approaches fulfill both the generative quality and coherence criteria. A variety of techniques aim at finding a better operating point, such as contrastive learning techniques [41], hierarchical schemes [53], total-correlation-based calibration of single-modality encoders [21], or different training objectives [47]. More recently, the work in [32] considers explicitly separated shared and private latent spaces to overcome the aforementioned limitations.
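To make this design space concrete, the sketch below contrasts the two classical fusion rules for diagonal Gaussian experts: a product of experts, which combines precisions, and a mixture of experts, which samples one expert at random. This is a minimal sketch under our own assumptions (diagonal Gaussian posteriors, NumPy arrays); the function names are illustrative and do not correspond to any of the cited implementations.

```python
import numpy as np

def product_of_experts(mus, logvars):
    # Product of diagonal Gaussian experts: precisions add up and the mean
    # is the precision-weighted average of the expert means.
    precisions = np.exp(-np.asarray(logvars))            # (M, D)
    var = 1.0 / precisions.sum(axis=0)                   # (D,)
    mu = var * (precisions * np.asarray(mus)).sum(axis=0)
    return mu, np.log(var)

def mixture_of_experts_sample(mus, logvars, weights, rng=None):
    # Mixture of experts: pick one expert with probability omega_i, then
    # sample the shared latent from that expert only.
    rng = rng or np.random.default_rng()
    i = rng.choice(len(mus), p=weights)
    std = np.exp(0.5 * np.asarray(logvars[i]))
    return np.asarray(mus[i]) + std * rng.standard_normal(std.shape)
```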
By expanding on the results presented in [9], in § 2 we further investigate the tradeoff between generative coherence and quality, and argue that it is intrinsic to all variants of multi-modal VAEs. We identify two root causes of this problem: latent variable collapse [1; 10] and information loss due to mixture sub-sampling. To tackle these issues, we propose in § 3 a new approach which uses a set of independent, uni-modal _deterministic_ autoencoders. Individual latent variables are concatenated into a common latent space, and joint and conditional generative capabilities are provided by an additional model that learns a probability density associated to the joint latent variable. We propose an extension of score-based diffusion models [46] to operate on the multi-modal latent space. We thus derive both forward and backward dynamics that are compatible with the multi-modal nature of the latent data. In § 4 we propose a novel method to train the multi-modal score network, such that it can be used for both joint and conditional generation. Our approach is based on a guidance mechanism, which we compare to alternatives. We label our approach Multi-modal Latent Diffusion (MLD).
Our experimental evaluation of MLD in § 5 provides compelling evidence of the superiority of our approach for multi-modal generative modeling. We compare MLD to a large variety of VAE-based alternatives, on several real-life multi-modal data-sets, in terms of generative quality and both joint and conditional coherence. Our model outperforms alternatives in all possible scenarios, even those that are notoriously difficult because modalities might be only loosely correlated. Although we are aware of concurrent work that explores the joint generation of multiple modalities [37; 19], also with diffusion models [5], such approaches are application-specific, e.g. text-to-image, and essentially only target two modalities. Nevertheless, when possible, we compare our method to alternative approaches to multi-modal diffusion, and show the superior performance of MLD.
## 2 Limitations of Multi-modal VAEs
In this work, we consider multi-modal VAEs [57; 40; 48; 32] as the standard modeling approach to tackle both joint and conditional generation of multiple modalities. Our goal here is to motivate the need to go beyond such a standard approach, to overcome limitations that affect multi-modal VAEs, which result in a trade-off between generation quality and generative coherence [9; 32].
Consider the random variable \(X=\{X^{1},\ldots,X^{M}\}\sim p_{D}(x^{1},\ldots,x^{M})\), consisting in the set of \(M\) of modalities sampled from the (unknown) multi-modal data distribution \(p_{D}\).
We indicate the marginal distribution of a single modality by \(X^{i}\sim p_{D}^{i}(x^{i})\) and the collection of a generic subset of modalities by \(X^{A}\sim p_{D}^{A}(x^{A})\), with \(X^{A}\stackrel{{\text{def}}}{{=}}\{X^{i}\}_{i\in A}\), where \(A\subset\{1,\ldots,M\}\) is a set of indexes. For example: given \(A=\{1,3,5\}\), then \(X^{A}=\{X^{1},X^{3},X^{5}\}\).
We begin by considering uni-modal VAEs as particular instances of the Markov chain \(X\to Z\to\hat{X}\), where \(Z\) is a latent variable and \(\hat{X}\) is the generated variable. Models are specified by the two conditional distributions, called the encoder \(Z\,|\,_{X=x}\sim q_{\psi}(z\,|\,x)\), and the decoder \(\hat{X}\,|\,_{Z=z}\sim p_{\theta}(\hat{x}\,|\,z)\). Given a prior distribution \(p_{n}(z)\), the objective is to define a generative model whose samples are distributed as closely as possible to the original data.
In the case of multi-modal VAEs, we consider the general family of Mixture of Product of Experts (MOPOE) [48], which includes as particular cases many existing variants, such as Product of Experts (MVAE) [57] and Mixture of Expert (MMVAE) [40]. Formally, a collection of subsets of modalities \(S=\{A_{1},\ldots A_{K}\}\), along with weighting coefficients \(\omega_{i}\geq 0,\sum_{i=1}^{K}\omega_{i}=1\), define the posterior \(q_{\psi}(z\,|\,x)=\sum_{i}\omega_{i}q_{\psi^{A_{i}}}^{i}(z\,|\,x^{A_{i}})\), with \(\psi=\{\psi^{1},\ldots,\psi^{K}\}\). To lighten the notation, we use \(q_{\psi^{A_{i}}}\) in place of \(q_{\psi^{A_{i}}}^{i}\) noting that the various \(q_{\psi^{A_{i}}}^{i}\) can have both different parameters \(\psi^{A_{i}}\) and functional form. For example, in the MOPOE [48] parametrization, we have: \(q_{\psi^{A_{i}}}(z\,|\,x^{A_{i}})=\prod_{j\in A_{i}}q_{\psi^{j}}(z\,|\,x^{j})\). Our
exposition is more general and not limited to this assumption. The selection of the posterior can be understood as the result induced by the two step procedure where i) each subset of modalities \(A_{i}\) is encoded into specific latent variables \(Y_{i}\sim q_{\psi^{A_{i}}}(\cdot\,|\,x^{A_{i}})\) and ii) the latent variable \(Z\) is obtained as \(Z=Y_{i}\) with probability \(\omega_{i}\). Optimization is performed w.r.t. the following evidence lower bound (ELBO) [9; 48]:
\[\mathcal{L}=\sum_{i}\omega_{i}\int p_{D}(x)q_{\psi^{A_{i}}}(z\,|\,x^{A_{i}})\left[\log p_{\theta}(x|z)-\log\frac{q_{\psi^{A_{i}}}(z\,|\,x^{A_{i}})}{p_{n}(z)}\right]\mathrm{d}z\mathrm{d}x. \tag{1}\]
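For concreteness, a single-sample Monte-Carlo estimate of this objective can be sketched as below. The sketch assumes diagonal Gaussian subset posteriors and a standard Gaussian prior; the encoder/decoder interfaces and names are hypothetical and only meant to make the structure of Equation 1 explicit.

```python
import torch

def mopoe_elbo(x, subsets, weights, subset_encoders, decoders):
    """Single-sample Monte-Carlo estimate of the ELBO in Equation (1).

    subset_encoders[k] maps the modalities in subsets[k] to (mu, logvar) of a
    diagonal Gaussian posterior; decoders[j](x_j, z) returns log p(x^j | z).
    """
    elbo = 0.0
    for w, subset, encoder in zip(weights, subsets, subset_encoders):
        mu, logvar = encoder([x[j] for j in subset])
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)      # reparametrized sample
        log_px_z = sum(decoders[j](x[j], z) for j in range(len(x)))  # log p_theta(x | z)
        # log q(z | x^{A_k}) - log p_n(z) at the sample z; the -0.5*log(2*pi) terms cancel
        log_q = (-0.5 * (z - mu) ** 2 / logvar.exp() - 0.5 * logvar).sum()
        log_p = (-0.5 * z ** 2).sum()
        elbo = elbo + w * (log_px_z - (log_q - log_p))
    return elbo
```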
From the literature on uni-modal VAEs, a well-known limitation, called the latent collapse problem [1; 10], affects the quality of the latent variables \(Z\). For example, consider a hypothetical case in which encoders and decoders are arbitrarily flexible: then, posteriors with zero mutual information with respect to the model inputs are valid maximizers of Equation 1. To prove this, it is sufficient to substitute the posteriors \(q_{\psi^{A_{i}}}(z\,|\,x^{A_{i}})=p_{n}(z)\) and \(p_{\theta}(x|z)=p_{D}(x)\) into Equation 1 to observe that the optimal value \(\mathcal{L}=\int p_{D}(x)\log p_{D}(x)\mathrm{d}x\) is achieved [1; 10].
The problem of loss of information is further exacerbated in the case of multi-modal VAEs, as discussed in [9]. Intuitively, even if the encoders \(q_{\psi^{A_{i}}}(z\,|\,x^{A_{i}})\) carry relevant information about their inputs \(X^{A_{i}}\), step ii) of the multi-modal encoding procedure described above induces a further information bottleneck. A fraction \(\omega_{i}\) of the time, the latent variable \(Z\) will be a copy of \(Y_{i}\), that only provides information about the subset \(X^{A_{i}}\). No matter how good the encoding step i) discussed above is, the information about \(X^{\{1,\ldots,M\}\setminus A}\) that is not contained in \(X^{A_{i}}\) cannot be retrieved.
Furthermore, if the latent variable carries zero mutual information w.r.t. the multi-modal input, coherent _conditional_ generation of a set of modalities given others is impossible, since \(\hat{X}^{A_{1}}\perp X^{A_{2}}\) for any generic sets \(A_{1},A_{2}\). While the factorization \(p_{\theta}(x\,|\,z)=\prod_{i=1}^{M}p_{\theta^{i}}(x^{i}\,|\,z)\), \(\theta=\{\theta_{1},\ldots,\theta_{M}\}\) -- where we use \(p_{\theta^{i}}\) instead of \(p_{\theta^{i}}^{i}\) to unclutter the notation -- could enforce preservation of information and guarantee a better quality of the _jointly_ generated data, in practice the latent collapse phenomenon induces multi-modal VAEs to converge toward a sub-optimal operating regime. When the posterior \(q_{\psi}(z\,|\,x)\) collapses onto the uninformative prior \(p_{n}(z)\), the ELBO in Equation 1 reduces to the sum of modality-independent reconstruction terms \(\sum\limits_{i}\omega_{i}\sum_{j\in A_{i}}\int p_{D}^{j}(x^{j})p_{n}(z)\left(\log p_{\theta^{j}}(x^{j}|z)\right)\mathrm{d}z\mathrm{d}x^{j}\).
In this case, flexible decoders can similarly ignore the latent variable and converge to the solution \(p_{\theta^{j}}(x^{j}|z)=p_{D}^{j}(x^{j})\) where, paradoxically, the quality of the approximation of the various marginal distributions is extremely high, while there is a complete lack of joint coherence.
General principles to avoid latent collapse consist in explicitly forcing the learning of informative encoders \(q_{\theta}(z\,|\,x)\) via \(\beta-\)annealing of the Kullback-Leibler (KL) term in the ELBO, and in reducing the representational power of encoders and decoders. While \(\beta-\)annealing has been explored in the literature [57] with limited improvements, reducing the flexibility of encoders/decoders clearly impacts the generation quality. Hence the presence of a trade-off: to improve coherence, the flexibility of encoders/decoders should be constrained, which in turn hurts generative quality. This trade-off has been recently addressed in the literature on multi-modal VAEs [9; 32], but our experimental results in § 5 indicate that there is ample room for improvement, and that a new approach is truly needed.
## 3 Our Approach: Multi-modal Latent Diffusion
We propose a new method for multi-modal generative modeling that, by design, does not suffer from the limitations discussed in § 2. Our objective is to enable both high-quality and coherent joint/conditional data generation, using a simple design (see § A for a schematic representation). As an overview, we use deterministic uni-modal autoencoders, whereby each modality \(X^{i}\) is encoded through its encoder \(e_{\psi^{i}}\), which is a short form for \(e_{\psi^{i}}^{i}\), into the modality-specific latent variable \(Z^{i}\) and decoded into the corresponding \(\hat{X}^{i}=d_{\theta^{i}}(Z^{i})\). Our approach can be interpreted as a latent variable model where the different latent variables \(Z^{i}\) are concatenated as \(Z=[Z^{1},\ldots,Z^{M}]\). This corresponds to the parametrization of the two conditional distributions as \(q_{\psi}(z\,|\,x)=\prod\limits_{i=1}^{M}\delta(z^{i}-e_{\psi^{i}}(x^{i}))\) and \(p_{\theta}(\hat{x}\,|\,z)=\prod\limits_{i=1}^{M}\delta(\hat{x}^{i}-d_{\theta^ {i}}(z^{i}))\), respectively. Then, in place of an ELBO, we optimize
the parameters of our autoencoders by minimizing the following sum of modality specific losses:
\[\mathcal{L}=\sum_{i=1}^{M}\mathcal{L}_{i},\quad\mathcal{L}_{i}=\int p_{D}^{i}(x^{i })l^{i}(x^{i}-d_{\theta^{i}}(e_{\psi^{i}}(x^{i})))\mathrm{d}x^{i}, \tag{2}\]
where \(l^{i}\) can be any valid distance function, e.g., the square norm \(\|\cdot\|^{2}\). Parameters \(\psi^{i},\theta^{i}\) are modality-specific: then, minimization of Equation (2) corresponds to individual training of the different autoencoders. Since the mapping from input to latent is deterministic, there is no loss of information between \(X\) and \(Z\).1 Moreover, this choice avoids any form of interference in the back-propagated gradients corresponding to the uni-modal reconstruction losses. Consequently, gradient conflict issues [22], where stronger modalities pollute weaker ones, are avoided.
Footnote 1: Since the measures are not absolutely continuous w.r.t the Lebesgue measure, mutual information is \(+\infty\).
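A minimal sketch of this first training stage (Equation (2)) is given below: each autoencoder pair is optimized on its own modality only, with a squared-error instance of \(l^{i}\). Architectures, optimizers and hyper-parameters are placeholders, not the configuration used in the paper.

```python
import torch

def train_unimodal_autoencoders(loaders, encoders, decoders, epochs=10, lr=1e-3):
    # Stage one: every (encoder, decoder) pair is trained independently on its
    # own modality, so gradients of different modalities never interact.
    for loader, enc, dec in zip(loaders, encoders, decoders):
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        for _ in range(epochs):
            for x in loader:
                loss = ((x - dec(enc(x))) ** 2).mean()   # l^i(x^i - d(e(x^i)))
                opt.zero_grad()
                loss.backward()
                opt.step()
    return encoders, decoders
```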
To enable such a simple design to become a generative model, it is sufficient to generate samples from the induced latent distribution \(Z\sim q_{\psi}(z)=\int p_{D}(x)q_{\psi}(z\,|\,x)\mathrm{d}x\) and decode them as \(\hat{X}=d_{\theta}(Z)=[d_{\theta^{1}}(Z^{1}),\ldots,d_{\theta^{M}}(Z^{M})]\). To obtain such samples, we follow the two-stage procedure described in [28, 50], where samples from the lower dimensional \(q_{\psi}(z)\) are obtained through an appropriate generative model. We consider score-based diffusion models in latent space [36, 52] to solve this task, and call our approach Multi-modal Latent Diffusion (MLD).
It may be helpful to clarify, at this point, that the two stages of MLD training are carried out separately: the uni-modal deterministic autoencoders are pre-trained first, followed by the training of the score-based diffusion model, which is explained in more detail later.
To conclude the overview of how our model works, for joint data generation, one can sample from noise, perform backward diffusion, and then decode the generated multi-modal latent variable to obtain the corresponding data samples. For conditional data generation, given one modality, the reverse diffusion is guided by this modality, while the other modalities are generated by sampling from noise. The generated latent variable is then decoded to obtain data samples of the missing modality.
### Joint and Conditional Multi-modal Latent Diffusion Processes
In the first stage of our method, the deterministic encoders project the input modalities \(X^{i}\) into the corresponding latent spaces \(Z^{i}\). This transformation induces a distribution \(q_{\psi}(z)\) for the latent variable \(Z=[Z^{1},\ldots,Z^{M}]\), resulting from the concatenation of uni-modal latent variables.
**Joint generation.** To generate a new sample for all modalities we use a simple score-based diffusion model in latent space [43, 46, 52, 28, 50]. This requires reversing a stochastic noising process, starting from a simple, Gaussian distribution. Formally, the noising process is defined by a Stochastic Differential Equation (SDE) of the form:
\[\mathrm{d}R_{t}=\alpha(t)R_{t}\mathrm{d}t+g(t)\mathrm{d}W_{t},\ \ R_{0}\sim q(r,0), \tag{3}\]
where \(\alpha(t)R_{t}\) and \(g(t)\) are the drift and diffusion terms, respectively, and \(W_{t}\) is a Wiener process. The time-varying probability density \(q(r,t)\) of the stochastic process at time \(t\in[0,T]\), where \(T\) is finite, satisfies the Fokker-Planck equation [31], with initial conditions \(q(r,0)\). We assume uniqueness and existence of a stationary distribution \(\rho(r)\) for the process Equation (3).2 The forward diffusion dynamics depend on the initial conditions \(R_{0}\sim q(r,0)\). We consider \(R_{0}=Z\) to be the initial condition for the diffusion process, which is equivalent to \(q(r,0)=q_{\psi}(r)\).
Footnote 2: This is not necessary for the validity of the method [44]
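As a concrete example, a common instantiation of Equation (3) is the variance-preserving choice \(\alpha(t)=-\tfrac{1}{2}\beta(t)\), \(g(t)=\sqrt{\beta(t)}\) with a linear schedule \(\beta(t)\), for which the transition kernel is Gaussian and can be sampled in closed form. The paper keeps \(\alpha\) and \(g\) generic, so the sketch below shows only one admissible choice, with names of our own.

```python
import torch

def vp_perturb(z0, t, beta_min=0.1, beta_max=20.0):
    """Sample R_t | R_0 = z0 in closed form for the variance-preserving SDE
    with beta(s) linear in s (one possible choice of alpha and g)."""
    int_beta = beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2  # \int_0^t beta(s) ds
    mean_coeff = torch.exp(-0.5 * int_beta)
    std = torch.sqrt(1.0 - torch.exp(-int_beta))
    noise = torch.randn_like(z0)
    return mean_coeff * z0 + std * noise, noise, std
```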
Under loose conditions [2], a time-reversed stochastic process exists, with a new SDE of the form:
\[\mathrm{d}R_{t}=\left(-\alpha(T-t)R_{t}+g^{2}(T-t)\nabla\log(q(R_{t},T-t)) \right)\mathrm{d}t+g(T-t)\mathrm{d}W_{t},\ \ R_{0}\sim q(r,T), \tag{4}\]
indicating that, in principle, simulation of Equation (4) allows to generate samples from the desired distribution \(q(r,0)\). In practice, we use a **parametric score network**\(s_{\chi}(r,t)\) to approximate the true score function, and we approximate \(q(r,T)\) with the stationary distribution \(\rho(r)\). Indeed, the generated data distribution \(q(r,0)\) is close (in KL sense) to the true density as described by [44, 12]:
\[\mathrm{KL}[q_{\psi}(r)\,|\ |\,q(r,0)]\leq\frac{1}{2}\int_{0}^{T}g^{2}(t) \mathbb{E}[\|s_{\chi}(R_{t},t)-\nabla\log q(R_{t},t)\|^{2}]\mathrm{d}t+KL[q(r, T)||\rho(r)], \tag{5}\]
where the first term on the r.h.s is referred to as score-matching objective, and is the loss over which the score network is optimized, and the second is a vanishing term for \(T\to\infty\).
To conclude, joint generation of all modalities is achieved through the simulation of the reverse-time SDE in Equation 4, followed by a simple decoding procedure. Indeed, optimally trained decoders (achieving zero in Equation (2)) can be used to transform \(Z\sim q_{\psi}(z)\) into samples from \(\int p_{\theta}(x\,|\,z)q_{\psi}(z)\mathrm{d}z=p_{D}(x)\).
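A hedged sketch of this joint-generation procedure follows, using a plain Euler-Maruyama discretization of Equation 4 and the variance-preserving schedule assumed in the `vp_perturb` sketch above; the score-network interface and all names are illustrative, not the released implementation.

```python
import torch

@torch.no_grad()
def joint_generation(score_net, decoders, dims, n_steps=1000, T=1.0):
    """Integrate the reverse SDE starting from the stationary Gaussian,
    then decode each modality chunk of the joint latent."""
    dt = T / n_steps
    r = torch.randn(sum(dims))                      # R_0 ~ rho(r)
    for k in range(n_steps):
        t = T - k * dt                              # forward time of the current state
        beta = 0.1 + (20.0 - 0.1) * t               # beta(t), same schedule as vp_perturb
        drift = 0.5 * beta * r + beta * score_net(r, torch.tensor([t]))
        r = r + drift * dt + (beta * dt) ** 0.5 * torch.randn_like(r)
    chunks = torch.split(r, list(dims))             # split into Z^1, ..., Z^M
    return [dec(z) for dec, z in zip(decoders, chunks)]
```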
**Conditional generation.** Given a generic partition of all modalities into non overlapping sets \(A_{1}\cup A_{2}\), where \(A_{2}=(\{1,\ldots,M\}\setminus A_{1})\), conditional generation requires samples from the conditional distribution \(q_{\psi}(z^{A_{1}}\,|\,z^{A_{2}})\), which are based on _masked_ forward and backward diffusion processes.
Given conditioning latent modalities \(z^{A_{2}}\), we consider a modified forward diffusion process with initial conditions \(R_{0}=\mathcal{C}(R_{0}^{A_{1}},R_{0}^{A_{2}})\), with \(R_{0}^{A_{1}}\sim q_{\psi}(r^{A_{1}}\,|\,z^{A_{2}}),R_{0}^{A_{2}}=z^{A_{2}}\). The composition operation \(\mathcal{C}(\cdot)\) concatenates generated (\(R^{A_{1}}\)) and conditioning latents (\(z^{A_{2}}\)). As an illustration, consider \(A_{1}=\{1,3,5\}\), such that \(X^{A_{1}}=\{X^{1},X^{3},X^{5}\}\), and \(A_{2}=\{2,4,6\}\) such that \(X^{A_{2}}=\{X^{2},X^{4},X^{6}\}\). Then, \(R_{0}=\mathcal{C}(R_{0}^{A_{1}},R^{A_{2}})=\mathcal{C}(R_{0}^{A_{1}},z^{A_{2}} )=[R_{0}^{1},z^{2},R_{0}^{3},z^{4},R_{0}^{5},z^{6}]\).
More formally, we define the masked forward diffusion SDE:
\[\mathrm{d}R_{t}=m(A_{1})\odot[\alpha(t)R_{t}\mathrm{d}t+g(t)\mathrm{d}W_{t}],\;\;q(r,0)=q_{\psi}(r^{A_{1}}\,|\,z^{A_{2}})\delta(r^{A_{2}}-z^{A_{2}}). \tag{6}\]
The mask \(m(A_{1})\) contains \(M\) vectors \(u^{i}\), one per modality and each with the corresponding dimensionality. If modality \(j\in A_{1}\), then \(u^{j}=\mathbf{1}\), otherwise \(u^{j}=\mathbf{0}\). Then, the effect of masking is to "freeze" throughout the diffusion process the part of the random variable \(R_{t}\) corresponding to the conditioning latent modalities \(z^{A_{2}}\). We naturally associate to this modified forward process the conditional time-varying density \(q(r,t\,|\,z^{A_{2}})=q(r^{A_{1}},t\,|\,z^{A_{2}})\delta(r^{A_{2}}-z^{A_{2}})\).
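In code, the mask is simply a concatenation of all-ones and all-zeros blocks matching the per-modality latent dimensions; a minimal sketch, with names of our choosing, follows.

```python
import torch

def build_mask(dims, A1):
    # m(A_1): ones for the modalities being diffused, zeros for the
    # conditioning (frozen) modalities; dims holds per-modality latent sizes.
    return torch.cat([torch.ones(d) if i in A1 else torch.zeros(d)
                      for i, d in enumerate(dims)])
```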
To sample from \(q_{\psi}(z^{A_{1}}\,|\,z^{A_{2}})\), we derive the reverse-time dynamics of Equation 6 as follows:
\[\mathrm{d}R_{t}=m(A_{1})\odot\left[\big{(}-\alpha(T-t)R_{t}+g^{2}(T-t)\nabla\log\big{(}q(R_{t},T-t\,|\,z^{A_{2}})\big{)}\big{)}\mathrm{d}t+g(T-t)\mathrm{d}W_{t}\right], \tag{7}\]
with initial conditions \(R_{0}=\mathcal{C}(R_{0}^{A_{1}},z^{A_{2}})\) and \(R_{0}^{A_{1}}\sim q(r^{A_{1}},T\,|\,z^{A_{2}})\). Then, we approximate \(q(r^{A_{1}},T\,|\,z^{A_{2}})\) by its corresponding steady state distribution \(\rho(r^{A_{1}})\), and the true (conditional) score function \(\nabla\log\big{(}q(r,t\,|\,z^{A_{2}})\big{)}\) by a conditional score network \(s_{\chi}(r^{A_{1}},t\,|\,z^{A_{2}})\).
## 4 Guidance Mechanisms to Learn the Conditional Score Network
A correctly optimized score network \(s_{\chi}(r,t)\) allows, through simulation of Equation 4, to obtain samples from the joint distribution \(q_{\psi}(z)\). Similarly, a _conditional_ score network \(s_{\chi}(r^{A_{1}},t\,|\,z^{A_{2}})\) allows, through the simulation of Equation 7, to sample from \(q_{\psi}(z^{A_{1}}\,|\,z^{A_{2}})\). In § 4.1 we extend guidance mechanisms used in classical diffusion models to allow multi-modal conditional generation. A naive alternative is to rely on the unconditional score network \(s_{\chi}(r,t)\) for the conditional generation task, by casting it as an _in-painting_ objective. Intuitively, any missing modality could be recovered in the same way as a uni-modal diffusion model can recover masked information, e.g. a black patch on an image. In § 4.2 we discuss the implicit assumptions underlying in-painting from an information-theoretic perspective, and argue that, in the context of multi-modal data, such assumptions are difficult to satisfy. Our intuition is corroborated by ample empirical evidence, where our method consistently outperforms alternatives.
### Multi-time Diffusion
We propose a modification to the classifier-free guidance technique [17] to learn a score network that can generate conditional and unconditional samples from any subset of modalities. Instead of training a separate score network for each possible combination of conditional modalities, which is computationally infeasible, we use a single architecture that accepts all modalities as inputs and a _multi-time vector_\(\tau=[t_{1},\ldots,t_{M}]\). The multi-time vector serves two purposes: it is both a conditioning signal and the time at which we observe the diffusion process.
**Training:** learning the conditional score network relies on randomization. As discussed in § 3.1, we consider an arbitrary partitioning of all modalities into two disjoint sets, \(A_{1}\) and \(A_{2}\). The set \(A_{2}\) contains
randomly selected conditioning modalities, while the remaining modalities belong to the set \(A_{1}\). Then, during training, the parametric score network estimates \(\nabla\log\bigl{(}q(r,t\,|\,z^{A_{2}})\bigr{)}\), whereby the set \(A_{2}\) is randomly chosen at every step. This is achieved by the _masked diffusion process_ from Equation 6, which only diffuses modalities in \(A_{1}\). More formally, the score network input is \(R_{t}=\mathcal{C}(R_{t}^{A_{1}},Z^{A_{2}})\), along with a multi-time vector \(\tau(A_{1},t)=t\,[\mathbbm{1}(1\in A_{1}),\ldots,\mathbbm{1}(M\in A_{1})]\). As a follow-up of the example in § 3.1, given \(A_{1}=\{1,3,5\}\), such that \(X^{A_{1}}=\{X^{1},X^{3},X^{5}\}\), and \(A_{2}=\{2,4,6\}\) such that \(X^{A_{2}}=\{X^{2},X^{4},X^{6}\}\), then \(\tau(A_{1},t)=[t,0,t,0,t,0]\).
More precisely, the algorithm for the multi-time diffusion training (see § A for the pseudo-code) proceeds as follows. At each step, a set of conditioning modalities \(A_{2}\) is sampled from a predefined distribution \(\nu\), where \(\nu(\emptyset)\stackrel{{\mathrm{def}}}{{=}}\mathrm{Pr}(A_{2}=\emptyset)=d\), and \(\nu(U)\stackrel{{\mathrm{def}}}{{=}}\mathrm{Pr}(A_{2}=U)=\nicefrac{{(1-d)}}{{(2^{M}-1)}}\) with \(U\in\mathcal{P}(\{1,\ldots,M\})\setminus\emptyset\), where \(\mathcal{P}(\{1,\ldots,M\})\) is the powerset of all modalities. The corresponding set \(A_{1}\) and mask \(m(A_{1})\) are constructed, and a sample \(X\) is drawn from the training data-set. The corresponding latent variables \(Z^{A_{1}}=\{e_{\psi}^{i}(X^{i})\}_{i\in A_{1}}\) and \(Z^{A_{2}}=\{e_{\psi}^{i}(X^{i})\}_{i\in A_{2}}\) are computed using the pre-trained encoders, and a diffusion process starting from \(R_{0}=\mathcal{C}(Z^{A_{1}},Z^{A_{2}})\) is simulated for a randomly chosen diffusion time \(t\), using the conditional forward SDE with the mask \(m(A_{1})\). The score network is then fed the current state \(R_{t}\) and the multi-time vector \(\tau(A_{1},t)\), and the difference between the score network's prediction and the true score is computed, applying the mask \(m(A_{1})\). The score network parameters are updated using stochastic gradient descent, and this process is repeated for a total of \(L\) training steps. Clearly, when \(A_{2}=\emptyset\), training proceeds as for an un-masked diffusion process, since the mask \(m(A_{1})\) allows all latent variables to be diffused.
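A compact sketch of one such training step is given below. It reuses the `build_mask` and `vp_perturb` helpers from the earlier sketches, samples a non-empty conditioning set uniformly in a simplified way, and uses the denoising score-matching parametrization of the loss; every name and design detail here is an illustration of the procedure described above, not the authors' implementation.

```python
import random
import torch

def multitime_training_step(score_net, encoders, x, dims, opt, d=0.5, T=1.0):
    M = len(dims)
    # Sample A_2 ~ nu: empty with probability d, otherwise a random non-empty
    # proper subset (a simplification of the uniform choice over the powerset).
    if random.random() < d:
        A2 = set()
    else:
        A2 = set(random.sample(range(M), random.randint(1, M - 1)))
    A1 = set(range(M)) - A2
    mask = build_mask(dims, A1)
    z = torch.cat([enc(xi) for enc, xi in zip(encoders, x)])   # Z = [Z^1, ..., Z^M]
    t = T * torch.rand(1)
    r_t, noise, std = vp_perturb(z, t)
    r_t = mask * r_t + (1 - mask) * z                          # freeze conditioning latents
    # Multi-time vector: t for diffused modalities, 0 for conditioning ones.
    tau = torch.cat([t if i in A1 else torch.zeros(1) for i in range(M)])
    target = -noise / std                          # score of the Gaussian transition kernel
    loss = ((mask * (score_net(r_t, tau) - target)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```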
**Conditional generation:** any valid numerical integration scheme for Equation 7 can be used for conditional sampling (see § A for an implementation using the Euler-Maruyama integrator). First, conditioning modalities in the set \(A_{2}\) are encoded into the corresponding latent variables \(z^{A_{2}}=\{e^{j}(x^{j})\}_{j\in A_{2}}\). Then, numerical integration is performed with step-size \(\Delta t=\nicefrac{{T}}{{N}}\), starting from the initial conditions \(R_{0}=\mathcal{C}(R_{0}^{A_{1}},z^{A_{2}})\), with \(R_{0}^{A_{1}}\sim\rho(r^{A_{1}})\). At each integration step, the score network \(s_{\chi}\) is fed the current state of the process and the multi-time vector \(\tau(A_{1},\cdot)\). Before updating the state, the masking is applied. Finally, the generated modalities are obtained thanks to the decoders as \(\hat{X}^{A_{1}}=\{d_{\theta}^{j}(R_{T}^{j})\}_{j\in A_{1}}\). Inference-time conditional generation is not randomized: the conditioning modalities are the ones that are available, whereas the remaining modalities are the ones we wish to generate.
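The corresponding conditional sampler can be sketched as follows, again reusing the helpers introduced earlier; the initial conditions, masking and decoding follow the description above, while the integration schedule and interfaces are assumptions of this sketch.

```python
import torch

@torch.no_grad()
def conditional_generation(score_net, encoders, decoders, dims, x_cond, A2,
                           n_steps=1000, T=1.0):
    M = len(dims)
    A1 = set(range(M)) - set(A2)
    mask = build_mask(dims, A1)
    offsets = [0]
    for d in dims:
        offsets.append(offsets[-1] + d)
    # Encode the available modalities; the chunks to be generated start from noise.
    z_cond = torch.zeros(sum(dims))
    for j in A2:
        z_cond[offsets[j]:offsets[j + 1]] = encoders[j](x_cond[j])
    r = mask * torch.randn(sum(dims)) + (1 - mask) * z_cond
    dt = T / n_steps
    for k in range(n_steps):
        t = T - k * dt
        beta = 0.1 + (20.0 - 0.1) * t
        tau = torch.cat([torch.full((1,), t) if i in A1 else torch.zeros(1)
                         for i in range(M)])
        drift = 0.5 * beta * r + beta * score_net(r, tau)
        r = r + mask * (drift * dt + (beta * dt) ** 0.5 * torch.randn_like(r))
        r = mask * r + (1 - mask) * z_cond          # keep the conditioning part frozen
    chunks = torch.split(r, list(dims))
    return {j: decoders[j](chunks[j]) for j in A1}
```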
### In-painting and its implicit assumptions
Under certain assumptions, given an unconditional score network \(s_{\chi}(r,t)\) that approximates the true score \(\nabla\log q(r,t)\), it is possible to obtain a conditional score network \(s_{\chi}(r^{A_{1}},t\,|\,z^{A_{2}})\), to approximate \(\nabla\log q(r^{A_{1}},t\,|\,z^{A_{2}})\). We start by observing the equality:
\[q(r^{A_{1}},t\,|\,z^{A_{2}})=\int q(\mathcal{C}(r^{A_{1}},r^{A_{2}}),t\,|\,z^{ A_{2}})\,\mathrm{d}r^{A_{2}}=\int\frac{q(z^{A_{2}}\,|\,\mathcal{C}(r^{A_{1}},r^{A_{ 2}}),t)}{q_{\psi}(z^{A_{2}})}q(\mathcal{C}(r^{A_{1}},r^{A_{2}}),t)\,\mathrm{d}r ^{A_{2}}, \tag{8}\]
where, with a slight abuse of notation, we indicate with \(q(z^{A_{2}}\,|\,\mathcal{C}(r^{A_{1}},r^{A_{2}}),t)\) the density associated to the event: the portion corresponding to \(A_{2}\) of the latent variable \(Z\) is equal to \(z^{A_{2}}\) given that the whole diffused latent \(R_{t}\) at time \(t\), is equal to \(\mathcal{C}(r^{A_{1}},r^{A_{2}})\).
In the literature, the quantity \(q(z^{A_{2}}\,|\,\mathcal{C}(r^{A_{1}},r^{A_{2}}),t)\) is typically approximated by dropping its dependency on \(r^{A_{1}}\). This approximation can be used to manipulate Equation 8 as \(q(r^{A_{1}},t\,|\,z^{A_{2}})\simeq\int q(r^{A_{2}},t\,|\,z^{A_{2}})q(r^{A_{1}},t|r^{A_{2}},t)\,\mathrm{d}r\). Further Monte-Carlo approximations [46; 29] of the integral allow implementation of a practical scheme, where an approximate conditional score network is used to generate conditional samples. This approach, known in the literature as _in-painting_, provides high-quality results in several _uni-modal_ application domains [46; 29].
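For comparison, one reverse step of such an in-painting scheme can be sketched as below: the unconditional score network is queried, and the conditioning chunk is overwritten at every step with a freshly noised copy of the known latent, in the spirit of the replace-based schemes of [46; 29]. This is a simplified rendering of the baseline, reusing the helpers from the previous sketches, and not a faithful reproduction of any particular implementation.

```python
import torch

@torch.no_grad()
def inpainting_reverse_step(r, z_cond, mask, t, dt, score_net):
    """One Euler-Maruyama reverse step of the in-painting baseline: the
    unconditional score network is used, then the conditioning part of the
    state is replaced by a noised copy of the known latent."""
    beta = 0.1 + (20.0 - 0.1) * t
    drift = 0.5 * beta * r + beta * score_net(r, torch.tensor([t]))
    r = r + drift * dt + (beta * dt) ** 0.5 * torch.randn_like(r)
    noised_cond, _, _ = vp_perturb(z_cond, torch.tensor([max(t - dt, 0.0)]))
    return mask * r + (1 - mask) * noised_cond
```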
The KL divergence between \(q(z^{A_{2}}\,|\,\mathcal{C}(r^{A_{1}},r^{A_{2}}),t)\) and \(q(z^{A_{2}}\,|\,r^{A_{2}},t)\) quantifies, fixing \(r^{A_{1}},r^{A_{2}}\), the discrepancy between the true and approximated conditional probabilities. Similarly, the expected KL divergence \(\Delta=\int q(r,t)\mathrm{KL}[q(z^{A_{2}}\,|\,\mathcal{C}(r^{A_{1}},r^{A_{2}}),t )\,|\,\mid q(z^{A_{2}}\,|\,r^{A_{2}},t)]\mathrm{d}r\), provides information about the average discrepancy. Simple manipulations allow to recast this as a discrepancy in terms of mutual information \(\Delta=I(Z^{A_{2}};R_{t}^{A_{1}},R_{t}^{A_{2}})-I(Z^{A_{2}};R_{t}^{A_{2}})\). Information about \(Z^{A_{2}}\) is contained in \(R_{t}^{A_{2}}\), as the latter is the result of a diffusion with the former as initial conditions, corresponding to the Markov chain \(R_{t}^{A_{2}}\to Z^{A_{2}}\), and in \(R_{t}^{A_{1}}\) through the Markov chain \(Z^{A_{2}}\to Z^{A_{1}}\to R_{t}^{A_{1}}\). The
positive quantity \(\Delta\) is close to zero whenever the rate of loss of information w.r.t initial conditions is similar for the two subsets \(A_{1},A_{2}\). In other terms, \(\Delta\simeq 0\) whenever out of the whole \(R_{t}\), the portion \(R_{t}^{A_{2}}\) is a sufficient statistic for \(Z^{A_{2}}\).
The assumptions underlying the approximation are in general not valid in the case of multi-modal learning, where the robustness to stochastic perturbations of the latent variables corresponding to the various modalities can vary greatly. Our claims are supported empirically by an ample analysis on real data in § B, where we show that the multi-time diffusion approach consistently outperforms in-painting.
## 5 Experiments
We compare our method MLD to MVAE [57], MMVAE [40], MOPOE [48], the Hierarchical Generative Model (NEXUS) [53] and the Multi-view Total Correlation Autoencoder (MVTCAE) [21], re-implementing competitors in the same code base as our method, and selecting their best hyperparameters (as indicated by the authors). For a fair comparison, we use the same encoder/decoder architecture for all the models. For MLD, the score network is implemented using a simple stacked multilayer perceptron (MLP) with skip connections (see § A for more details).
**Evaluation metrics.** _Coherence_ is measured as in [40; 48; 32], using pre-trained classifiers on the generated data and checking the consistency of their outputs. _Generative quality_ is computed using the Fréchet Inception Distance (FID) [15] and the Fréchet Audio Distance (FAD) [23] scores for images and audio, respectively. Full details on the metrics are included in § C. All results are averaged over 5 seeds (we report standard deviations in § E).
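As an illustration of the coherence protocol, conditional coherence can be estimated roughly as follows: a pre-trained classifier per modality predicts a label for each generated modality, and we count how often it matches the label predicted on the conditioning input. The interface below is hypothetical and only meant to convey the structure of the metric.

```python
import torch

@torch.no_grad()
def conditional_coherence(generated, cond_name, cond_data, classifiers):
    # Label inferred from the conditioning modality serves as the reference.
    ref = classifiers[cond_name](cond_data).argmax(dim=-1)
    # Fraction of generated modalities whose predicted label matches the reference.
    hits = [(classifiers[name](x).argmax(dim=-1) == ref).float().mean()
            for name, x in generated.items()]
    return torch.stack(hits).mean().item()
```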
**Results.** Overall, MLD largely outperforms alternatives from the literature, **both** in terms of coherence and generative quality. VAE-based models suffer from a coherence-quality trade-off and modality collapse for highly heterogeneous data-sets. We proceed to show this on several standard benchmarks from the multi-modal VAE literature (see § C for details on the data-sets).
The first data-set we consider is **MNIST-SVHN** ([40]), where the two modalities differ in complexity. High variability, noise and ambiguity make attaining good coherence for the SVHN modality a challenging task. Overall, MLD outperforms all VAE-based alternatives in terms of coherence, especially for joint generation and for conditional generation of MNIST given SVHN, see Table 1. Mixture models (MMVAE, MOPOE) suffer from modality collapse (poor SVHN generation), whereas product-of-experts models (MVAE, MVTCAE) generate better-quality samples at the expense of SVHN-to-MNIST conditional coherence. Joint generation is poor for all VAE models. Interestingly, these models also fail at SVHN self-reconstruction, which we discuss in § E. MLD also achieves the best performance in terms of generation quality, as confirmed by the qualitative results (Figure 1), which show, for example, how MLD conditionally generates multiple SVHN digits within one sample, given the input MNIST image, whereas other methods fail to do so.
The Multi-modal Handwritten Digits data-set (**MHD**) [53] contains gray-scale digit images, the motion trajectory of the handwriting and the sounds of the spoken digits. In our experiments, we do not use the label as a fourth modality. While the digit image and trajectory share a good amount of information, the sound modality contains much more modality-specific variation. Consequently, conditional generation involving the sound modality, along with joint generation, are challenging tasks. In terms of coherence (Table 2), MLD outperforms all competitors, with the largest gaps in joint generation and in generating the other modalities from sound (in the latter task MVTCAE performs better than the other competitors but is still worse than MLD). MLD also dominates the alternatives in terms of generation quality (Table 3). This is true for both the image and sound modalities, for which some VAE-based models struggle to produce high-quality results, demonstrating the limitations of these methods in handling highly heterogeneous modalities. MLD, on the other hand, achieves high generation quality for all modalities, possibly due to the absence of interference thanks to the independent training of its autoencoders.
The **POLYMNIST** data-set [48] consists of 5 modalities synthetically generated by using MNIST digits and varying the background images. The homogeneous nature of the modalities is expected to mitigate gradient conflict issues in VAE-based models, and consequently to reduce modality collapse. However, MLD still outperforms all alternatives, as shown in Figure 2. Concerning generation coherence, MLD achieves the best performance in all cases, with the single exception of the setting with only one observed modality. On the qualitative side, not only is MLD superior to the alternatives, but its results remain stable as more modalities are considered, a capability that not all competitors share.
Finally, we explore the Caltech Birds **CUB** [40] data-set, following the same experimental protocol as [9] and using real bird images (instead of ResNet features as in [40]). Figure 3 presents qualitative results for caption-to-image conditional generation. MLD is the only model capable of generating bird images with convincing coherence. Clearly, none of the VAE-based methods is able to achieve sufficient caption-to-image conditional generation quality. We report quantitative results in § E, where we show the FID generation-quality metric. Since labels are unavailable for this data-set, coherence evaluation as with the previous data-sets is not possible. We therefore resort to the CLIP-Score (CLIP-S) [14], an image-captioning metric that, despite its limitations on the considered data-set [24], shows that MLD outperforms the competitors.
## 6 Conclusion and Limitations
We have addressed the challenge of multi-modal generative modeling by proposing a novel method, Multi-modal Latent Diffusion (MLD). Our approach overcomes the coherence-quality tradeoff that is
\begin{table}
\begin{tabular}{c|c|c c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Models} & \multirow{2}{*}{Joint} & \multicolumn{4}{c}{1 (Image)} & \multicolumn{4}{c}{T (Trajectory)} & \multicolumn{4}{c}{S (Sound)} \\ \cline{2-13} & & T & S & T/S & I & S & I/S & I & T & LT \\ \hline MVAE & 37.77 & 11.68 & 26.46 & 28.4 & 95.55 & 26.66 & 96.58 & 58.87 & 10.76 & 58.16 \\ MWAE & 34.78 & **99.7** & 60.69 & 84.74 & 92.93 & 85.86 & 92.39 & 49.55 & 50.14 & 50.17 \\ mope & 48.84 & 92.64 & 68.67 & 92.09 & 99.28 & 87.42 & 99.35 & 50.73 & 51.5 & 56.97 \\ NEVS & 26.56 & 94.58 & 83.1 & 95.27 & 88.51 & 76.82 & 93.27 & 70.06 & 75.84 & 89.48 \\ MVTcaE & 42.28 & 99.54 & 72.05 & 99.63 & 99.22 & 72.03 & 99.29 & 92.58 & 93.07 & 94.78 \\ \hline mld & **98.34** & \(99.45\) & **88.91** & **99.88** & **99.58** & **88.92** & **99.91** & **97.63** & **97.7** & **98.01** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Generation Coherence (%) for **MHD** (Higher is better). The top row refers to the generated modality, while the subsets of observed modalities are listed below.
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{4}{c}{1 (Image)} & \multicolumn{4}{c}{T (Trajectory)} & \multicolumn{4}{c}{S (Sound)} \\ \cline{2-13} & Joint & T & S & T/S & Joint & I & S & I/S & Joint & I & T & I,T \\ \hline MVAE & 94.9 & 93.73 & 92.55 & 91.08 & 39.51 & 20.42 & 38.77 & 19.25 & 14.14 & 14.13 & 14.08 & 14.17 \\ MWAE & 224.01 & 22.6 & 789.12 & 170.41 & 16.52 & **0.5** & 30.39 & 6.07 & 22.8 & 22.61 & 23.72 & 23.01 \\ mope & 147.81 & 16.29 & 838.38 & 15.89 & 13.92 & 0.52 & 33.38 & **0.53** & 18.53 & 24.11 & 24.1 & 23.93 \\ nexus & 281.76 & 116.65 & 282.34 & 117.24 & 18.59 & 6.67 & 33.01 & 7.54 & 13.99 & 19.52 & 18.71 & 16.3 \\ MVtcaE & 121.85 & 5.34 & 54.57 & 3.16 & 19.49 & 0.62 & 13.62 & 0.75 & 15.88 & 14.22 & 14.02 & 13.96 \\ \hline MLD & **7.98** & **1.7** & **4.54** & **1.84** & **3.18** & 0.83 & **2.07** & 0.6 & **2.39** & **2.31** & **2.33** & **2.29** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Generation quality for **MHD** in terms of FMD for image and trajectory modalities and FAD for the sound modality (Lower is better).
inherent in existing multi-modal VAE-based models, whose limitations originate -- as we thoroughly discussed -- from problems related to latent variable collapse and information loss.
To overcome these limitations, MLD uses a set of independently trained, uni-modal, deterministic autoencoders. Generative properties of our model stem from a masked diffusion process that operates on latent variables. We also developed a new multi-time training method to learn the conditional score network for multi-modal diffusion. An extensive experimental campaign on various real-life data-sets, provided compelling evidence on the effectiveness of MLD for multi-modal generative modeling. In all scenarios, including cases with loosely correlated modalities, MLD consistently outperformed the alternatives.
A limitation of our approach stems from the simple nature of encoder/decoder architectures. Concurrent work such as [5] is more specialized, by focusing on complex, tailor-made encoder/decoder architectures. Such complexity might be necessary when moving to higher-resolution data.
As with all generative models, ours could also be misused to produce misinformation. We believe, however, that the benefits of multi-modal generative models outweigh their potential misuse.
Figure 3: Qualitative results on the **CUB** data-set. The caption is used as the condition to generate the bird images.
Figure 2: Results for the **POLYMNIST** data-set. _Left_: a comparison of the generative coherence (% \(\uparrow\)) and quality in terms of FID (\(\downarrow\)) as a function of the number of input modalities. We report the average performance following the leave-one-out strategy (see § C). _Right_: qualitative results for the joint generation of the 5 modalities.